<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Doctoral Theses</title>
<link>https://hdl.handle.net/1721.1/131022</link>
<description/>
<pubDate>Tue, 07 Apr 2026 19:15:06 GMT</pubDate>
<dc:date>2026-04-07T19:15:06Z</dc:date>
<item>
<title>Data Driven Discovery of Modular Biological Systems</title>
<link>https://hdl.handle.net/1721.1/165341</link>
<description>Data Driven Discovery of Modular Biological Systems
Altae-Tran, Han
This dissertation presents a big data-driven approach for biological and biomedical discovery. The topics covered include the evolution and diversity of CRISPR systems, the identification and analysis of hypervariable protein systems, the identification of ancestral systems, and the development of RNA-guided systems for genome editing and therapeutic applications. Additionally, one of the chapters focuses on population-scale longitudinal mapping of COVID-19 symptoms, behavior, and testing, providing valuable insights for public health officials during the early stages of the pandemic. Through the development of novel methodologies and the utilization of big data-driven methods, this dissertation contributes to the expanding landscape of biomedical research.&#13;
&#13;
Chapter II delves into the origins of Cas9 and Cas12, examining the evolutionarily conserved non-coding RNA associated with IscB and the diverse RNA-guided nucleases encoded by IS200/IS605 elements. Through phylogenetic analysis and experimental characterization, we gain insight into the evolutionary history and diversity of IscB systems, and their potential biological functions.&#13;
&#13;
In Chapter III, we explore the diversity and function of Obligate Mobile Element Guided Activity (OMEGA) systems, focusing on TnpB and its relationship with Cas12 systems. We examine the taxonomy, genomic features, and evolution of these systems, as well as their mobility and potential exaptation.&#13;
&#13;
Chapter IV is dedicated to optimizing OMEGA RNA-guided systems for therapeutic applications. We screen natural IscB variants for efficient genome editing and engineer OrufIscB for enhanced activity, demonstrating its potential as a versatile genome interrogation tool.&#13;
&#13;
In Chapter V, we employ deep terascale clustering to discover functionally diverse CRISPR systems. Using a fast locality-sensitive hashing algorithm, we identify rare CRISPR systems, such as DinG-HNH, Type I Cascade components with HNH domains, and the Type VII CRISPR system, which is a precise RNA-guided RNA endonuclease complex containing a β-CASP nuclease.&#13;
&#13;
In Chapter VII, we investigate compact RNA editors, focusing on the discovery and characterization of Cas13bt. We repurpose Cas13bt for base editing and deliver these base editors to human cells using adeno-associated viruses (AAV), demonstrating their potential for therapeutic applications.&#13;
&#13;
Chapter VIII focuses on the identification and analysis of hypervariable protein systems with repeat signatures, seeking to find generalizations of concepts from other repeat systems such as CRISPR and TALENs. A computational pipeline is established to identify hypervariable repeat signatures in proteins, resulting in candidate systems that were characterized in additional detail. Multiple new mechanisms of modularity (two functions that are decoupled via an interchangeable domain or structure, such as repeats) were identified, pointing to a greater landscape of hypervariable protein systems than previously thought. These findings have implications for the understanding of protein architectures and may also provide valuable insights for the design of novel protein-based tools and therapeutics.&#13;
&#13;
Finally, Chapter IX presents one of the early large-scale studies conducted during the COVID pandemic, focusing on population-scale longitudinal mapping of COVID-19 symptoms, behavior, and testing. The study was conducted relatively early during the pandemic and collected data from a large user base of the How We Feel application. The data-driven approach employed various data analysis techniques, such as logistic regression, UMAP, and prediction models, to identify factors associated with testing propensity, symptoms associated with COVID, and the behavior of patients after contracting COVID. The findings from this study provided valuable insights in the early stages of the pandemic, informing policymakers and public health officials, such as those in the state of Connecticut, in making data-driven decisions.&#13;
&#13;
Overall, this dissertation presents methods for and results from applying big data-driven methods to discovery from large biomedical databases. It specifically focuses on the exploration of diverse CRISPR systems, ancestors of CRISPR systems, hypervariable protein systems, and protein engineering for therapeutics and genome editing applications. From examining the evolutionary origins of Cas9 and Cas12 to investigating the diversity of OMEGA systems and optimizing them for therapeutic use, this work deepens our understanding of these complex biological systems. The discovery of rare CRISPR systems and compact RNA editors further broadens the landscape of genetic tools with potential therapeutic applications. Additionally, the identification and analysis of hypervariable protein systems reveal new mechanisms of modularity, with implications for protein architectures and the development of novel protein-based tools. Finally, the large-scale study of COVID-19 symptoms, behavior, and testing during the early stages of the pandemic demonstrates the power of data-driven approaches in informing public health decisions. Collectively, this research contributes significantly to our understanding of complex biological systems and highlights the potential for their application in advancing human health and biotechnology.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165341</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon Nanotube Based Biosensors Using Corona Phase Molecular Recognition (CoPhMoRe): Development and Applications</title>
<link>https://hdl.handle.net/1721.1/165336</link>
<description>Carbon Nanotube Based Biosensors Using Corona Phase Molecular Recognition (CoPhMoRe): Development and Applications
Jin, Xiaojia
Molecular recognition sites that specifically bind a target molecule are essential for clinical research, disease diagnosis, and therapeutic development. To this end, a promising technique developed by the Strano laboratory at MIT is Corona Phase Molecular Recognition (CoPhMoRe), which uses amphiphilic polymers or macromolecules adsorbed onto a nanoparticle surface to generate a synthetic recognition site. The underlying nanoparticle, which can also function as the sensor transducer, pins the polymer to a specific 3D conformation using non-covalent interactions, resulting in a binding pocket analogous to the antigen-binding domain of a natural antibody. While CoPhMoRe has proven considerably versatile in recognizing small organic molecules such as vitamins, neurotransmitters, pharmaceutical drugs, and steroid hormones, the recognition of large molecules such as viral proteins has been less explored. Macromolecular analytes introduce a much wider set of potential interactions, requiring refined analysis and new insights into mechanisms. This thesis focuses on (i) constructing CoPhMoRe sites for protein analytes, (ii) exploring new methods and mechanistic understanding to inform CoPhMoRe recognition, and (iii) translating CoPhMoRe phases to interfaces that can be incorporated into biosensors for specific applications.&#13;
&#13;
Towards Aim (i), this thesis has developed CoPhMoRe sites for protein-based disease biomarkers, including interleukin-6 and the nucleocapsid and spike proteins of SARS-CoV-2, enabling rapid and label-free near-IR fluorescence detection of target analytes with dose-dependent responses in complex environments. Towards Aim (ii), this thesis investigates new methods and analyses for CoPhMoRe characterization, such as the expansion of the Molecular Probe Adsorption (MPA) technique. This technique measures the accessible surface area of a CoPhMoRe-based sensor by using a fluorescent molecule as a probe that quenches upon interacting with the corona phase. Further advances involve instrumentation and mathematical models to analyze the chiroptical properties of corona phases, facilitating CoPhMoRe handedness determination at the single-molecule level with circularly polarized excitation sources. Towards Aim (iii), this thesis explores form factor advancements to broaden the utility of CoPhMoRe sensors. It includes profiling cellular immune heterogeneity by integrating the optical nanosensor arrays into microfluidics to interrogate chemical species efflux from individual cells in real-time using Nanosensor Chemical Cytometry (NCC). Furthermore, nanosensors are encapsulated into stable hydrogels for integration into acoustic tags to track hormone levels in marine animals.&#13;
&#13;
Together, the successful development of CoPhMoRe nanosensors opens new pathways for synthetic molecular recognition that enables the detection of biological macromolecules, and holds great promise for life science applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165336</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, Synthesis and Applications of Versatile Porous Poly(arylene ether)s</title>
<link>https://hdl.handle.net/1721.1/165335</link>
<description>Design, Synthesis and Applications of Versatile Porous Poly(arylene ether)s
Wu, Yifan
This thesis describes the synthesis of various porous poly(arylene ether)s for applications in heterogeneous catalysis and gas separation.&#13;
&#13;
In Chapter I, we offer an overview of the structure and properties of porous poly(arylene ether)s, and introduce the key challenges in heterogeneous catalysis and gas separation.&#13;
&#13;
In Chapter II, we present the development of a solution-processable microporous organic polymer catalyst that displays high catalytic performance and size-selectivity in the heterogeneous Suzuki-Miyaura coupling reaction. The catalyst can be used to create catalytic impellers that simplify its use and recovery, thereby conforming to green chemistry principles.&#13;
&#13;
In Chapter III, we demonstrate a tunable synthetic platform that affords eight representative microporous poly(arylene ether)s with tertiary-amine functional groups. We compare the enhancements in competitive sorption affinity for H₂S and CO₂ to those of primary-amine-functionalized membranes and explain why the slight enhancements do not fully translate to enhanced separation performance.&#13;
&#13;
In Chapter IV, we explore the potential of free volume manipulation in enhancing acid gas separation for microporous polymer membranes. By incorporating labile functional groups onto a microporous poly(arylene ether) and employing thermal treatment with oxygen, we improve the combined acid gas selectivity and increase membrane resistance to plasticization.&#13;
&#13;
In Chapter V, we synthesize a series of ionic poly(arylene ether)s and evaluate their potential applications in propylene/propane separation, proton exchange, and heterogeneous catalysis. We study the separation performance of the carboxylate-functionalized polymer, as well as the ion-exchange capacity of the sulfonated polymers and their efficiency as solid acid catalysts, to demonstrate their promise.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165335</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explaining Allied Military Postures: Extended Deterrence, the U.S. Nuclear Umbrella, and the Search for Security</title>
<link>https://hdl.handle.net/1721.1/165334</link>
<description>Explaining Allied Military Postures: Extended Deterrence, the U.S. Nuclear Umbrella, and the Search for Security
Kwon, Jung Jae
How do non-nuclear allies of the United States bolster deterrence even as they rely on the United States and its nuclear umbrella for security? How do they exercise agency in operationalizing the deterrence extended to them by the United States against their nuclear-armed adversaries? Although the existing literature offers valuable insights into the measures the United States employs to signal commitment and enhance the credibility of its security guarantees, it has paid far less attention to the role of allies themselves. This dissertation addresses that gap by introducing allied integration theory, a new framework for understanding allies’ incentives and choices. The theory explains and predicts variation in allies’ peacetime military postures. I first develop a typology of four ideal types, each reflecting allies’ different choices regarding capabilities, doctrine, tolerance for escalation risk, and integration with U.S. forces, including U.S. nuclear forces. I then argue that allies’ decisions are shaped primarily by their threat environment. Specifically, I highlight two key factors: the threat of strikes (conventional, WMD, or nuclear) and the threat of invasion from their adversaries. I then test the theory through two pairs of comparative case studies. The first pair examines Japan and South Korea in the post-Cold War period; the second compares West Germany and Norway, two frontline states of NATO, during the later Cold War (1970s-1980s). This research design allows for examining both within-case and cross-case variation while holding constant many potential confounders within each pair. Drawing on extensive fieldwork, elite interviews, foreign-language sources, U.S. archival materials, and secondary sources, I analyze how each ally made key decisions on their postures and evaluate the performance of allied integration theory. 
The findings contribute to the broader debates on alliance politics in the nuclear era and the interplay between conventional and nuclear domains. The research also offers practical insights for addressing growing security challenges faced by U.S.-led alliances amid intensifying geopolitical competition with multiple nuclear-armed adversaries.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165334</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery and characterization of diverse microbial RNA-guided systems</title>
<link>https://hdl.handle.net/1721.1/165332</link>
<description>Discovery and characterization of diverse microbial RNA-guided systems
Kannan, Soumya
Precise modification of nucleic acids is a powerful technique to enable understanding of the relationship between variation in genetic information and biological phenotypes and to treat genetic diseases. Over the past decade, the microbial adaptive immune system CRISPR-Cas has revolutionized genome editing, largely due to ease of target reprogramming. Cas nucleases can be retargeted by changing a short sequence in the associated non-coding RNA, which acts to guide the enzyme or complex to a complementary nucleic acid target. Exploration of the natural diversity of CRISPR-Cas systems has uncovered numerous variants with widely differing protein and domain architectures that exhibit distinct substrate specificities and activities, tying RNA-guided nucleic acid recognition to diverse enzymatic functions. This diversity has fueled the development of CRISPR-Cas systems for additional applications beyond genome editing, such as transcriptome editing, rapid diagnostics and molecular recording tools, and has aided in optimization of existing technologies via identification of systems that naturally exhibit desirable properties.&#13;
&#13;
In this thesis, we explore the evolution and diversity of RNA-guided microbial systems and characterize and engineer them for use in human cells. First, we investigate the origins of Cas9 and Cas12, the most widely used Cas proteins for genome editing. We find that they evolved from compact transposon-encoded nucleases, which we termed OMEGA, that already employed an RNA-guiding mechanism. Next, we develop OMEGA systems as genome editing tools that, due to their small size, are more compatible with therapeutic delivery vectors. We further explore microbial genomes and metagenomes to discover Cas13b-t, a similarly compact RNA-targeting system that we develop as a deliverable RNA editing platform. Finally, we extend the theme of mining sequence databases to comprehensively catalog proteins and domains associated with CRISPR-Cas systems. Through this analysis, we identify and characterize several novel CRISPR-Cas types and subtypes with potential for development as biotechnologies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165332</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Ion Conduction in Polymer-Ceramic Composite Electrolytes for Lithium Ion Batteries</title>
<link>https://hdl.handle.net/1721.1/165330</link>
<description>Understanding Ion Conduction in Polymer-Ceramic Composite Electrolytes for Lithium Ion Batteries
Sand, Sara Catherine
In the transition to safer, more energy-dense solid-state batteries, polymer-ceramic composite electrolytes may offer a potential route to achieve simultaneously high lithium ion conductivity and enhanced mechanical stability. Despite numerous studies on polymer-ceramic composite electrolytes, disagreements persist on whether the polymer or the ceramic is the primary conduction pathway and how each phase is impacted by the proximity of the other. This lack of understanding limits the design of effective composite solid electrolytes. Therefore, the goal of this thesis is to establish the role that the polymer plays in ionic conduction and what changes in this phase allow for enhanced conduction. I present a collection of well-controlled experiments using model systems and minimizing the parameter space, as well as utilizing advanced characterization techniques. In doing so, we probe the underlying mechanisms of how lithium ion conductivity is affected in polymer-ceramic composites. In particular, the use of positron annihilation spectroscopy has been instrumental in revealing the primary mechanisms that alter lithium ion conductivity in the polymer matrix. First, we present a comprehensive and in-depth review of the literature in Chapter 2. This chapter statistically analyzes the trends in ionic conductivities achieved in the field and presents the various arguments for ion conduction pathways and interfacial effects made throughout the last three decades. As a result of this field-wide analysis, we hypothesize that the major component whose ionic conductivity is affected in the composite is the polymer. Thus the following experiments and results focus on how the polymer is altered structurally and/or chemically. Second, in Chapter 3 we present a study utilizing thin polymer films deposited on substrates of varying acidity as a model system.
Comparison of the polymer film conductivities, chemistry, and structures allows us to deduce that the modification of polymer structure near a ceramic interface is a major factor, while the ceramic-polymer interface chemistry has a negligible effect on the conductivity. In particular, the crystallinity and free volume of the polymer change appreciably to result in a higher ionic conductivity in thin films as compared to bulk materials. In Chapter 4, we assess well-controlled composite electrolytes with inert, non-lithium-conducting ceramic fillers and active, lithium-conducting ceramic fillers in the polymer, in particular by consistently controlling the particle size and filler volume fraction. We find that the increase in the free volume of the polymer phase plays a crucial role in the conductivity of both types of composites. The addition of active fillers only minorly improves conductivity over inert fillers, solely due to the conduction path introduced by the active ceramic particles. In Chapter 5, we compare composites containing inert nanometer-size particles to those with inert micrometer-size particles. We find that the ionic conductivity improves with nanoparticles but not with the micrometer-size particles, and again we are able to correlate this improvement with the free volume present in the polymer matrix. Lastly, in Chapter 6, we summarize several experiments that, though less pertinent to the main aims of this thesis, are valuable to the field in understanding the techniques for characterizing these materials.&#13;
&#13;
Through this thesis, we provide a strong argument for the importance of the polymer’s structure in the composite electrolytes, and particularly an increase in the free volume present in the polymer phase. Such analysis of free volume has been limited in the field thus far. Thus, this thesis fills an important knowledge gap that has been limiting the understanding and control of how ceramic fillers increase lithium ion conductivity in polymer-ceramic composite electrolytes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165330</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetization Dynamics in Multi-sublattice Magnetic Materials</title>
<link>https://hdl.handle.net/1721.1/165329</link>
<description>Magnetization Dynamics in Multi-sublattice Magnetic Materials
Lee, Byung Hun
Magnon spintronics explores solid-state devices that utilize electrical control of spin currents carried by magnons, the quanta of spin waves. The magnon, the dynamic eigen-excitation of magnetically ordered materials, offers promising opportunities for next-generation information processing and communication systems. It can be engineered through various parameters, such as the orientation and magnitude of applied magnetic fields, choice of magnetic material, and sample geometry, enabling versatile engineering of wave-based computing technologies.&#13;
&#13;
Beyond conventional ferromagnetic magnons, the ultimate goal of this thesis is to obtain a better understanding of antiferromagnetic dynamics, which has attracted considerable attention due to unique characteristics including ultrafast magnetization dynamics, low damping, and negligible dipole fields. To achieve this, the study focuses on the magnetization dynamics within magnetic materials ranging from ferromagnets to ferrimagnets and antiferromagnets, using Brillouin Light Scattering (BLS) spectroscopy. Starting from an understanding of ferromagnetic magnetization dynamics, the interface-driven effects on magnon physics are studied in ferromagnetic insulator/metal bilayers. The study reveals previously unidentified interface-driven changes in magnetization dynamics that can be exploited to locally control and modulate magnonic properties in thin-film heterostructures.&#13;
&#13;
As the ferromagnetic film transitions towards antiferromagnetic behavior, notable changes in BLS spectra are observed, including significant linewidth broadening. These experimental results highlight the crucial role of antiferromagnetic exchange interactions in the magnon dynamics of multi-sublattice magnetic materials. Motivated by the observed reduction in magneto-optical (MO) signal for ferrimagnetic thin films, we conducted a comprehensive study on a canted antiferromagnet, focusing on thin hematite (α-Fe₂O₃) films. Through both experimental and theoretical analyses of the MO properties of hematite films, we demonstrate that the canted net magnetic moment induced by the Dzyaloshinskii-Moriya interaction (DMI) generates a significant linear MO effect, while the quadratic MO signal arises from the Néel vector. Building on this observation, we further demonstrate, both theoretically and experimentally, that the secondary MO effect from the dynamical net magnetic moment induces the significant BLS signal. Furthermore, we demonstrate the strong correlation between magnetic domain structure and local magnon spectra, identified here for the first time in thin α-Fe₂O₃ films.&#13;
&#13;
Our study extends to the direct observation of a non-thermalized magnon state with spin current injection, indicative of a characteristic excitation mechanism with linearly polarized modes in an easy-plane antiferromagnet. The magnon thermalization and relaxation processes are further elucidated between the two linearly polarized modes. The results show the significant contribution of high-k magnons to long-distance spin transport in hematite using non-local electrical measurements. Finally, we characterize the magnetic anisotropy within hematite films, tracing each contribution to its origin. These findings offer insights into magnetization dynamics and the manipulation of AFM spin textures via optical methods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165329</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling Protein and Cell Adhesion Through Interfacial Engineering for More Efficient Biomanufacturing</title>
<link>https://hdl.handle.net/1721.1/165326</link>
<description>Controlling Protein and Cell Adhesion Through Interfacial Engineering for More Efficient Biomanufacturing
McCue, Caroline
Interfaces between biological and synthetic materials present both challenges and opportunities within biomanufacturing. Protein and cell adhesion to surfaces can be controlled to improve steps in a typical biologic manufacturing process from cell culture to protein separation and purification. Within downstream processes, in protein purification, chromatography columns are used to separate target proteins from the output of a bioreactor. This work explores how crystallization could be used as an alternative purification process, by utilizing functionalized nanoparticles to nucleate protein crystals faster and at lower concentrations. We demonstrate significant improvements in nucleation induction time and nucleation rates using bioconjugate-functionalized nanoparticles. Within upstream processes, in adherent cell culture, cells are typically detached from surfaces using trypsin, an enzyme that can damage sensitive cells and cause genetic mutations. This work studies how passive surface textures and active coatings can impact cell growth, morphology, and adhesion. We show that micropost surfaces can significantly reduce cell-surface adhesion, and how electrically active surfaces can be used to detach cells on demand without the use of trypsin. These interfacial platforms present opportunities to reduce the costs of traditional biomanufacturing processes, reduce damage to cells, and enable new high throughput platforms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165326</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Investigation of Allosteric Regulation in Class III Ribonucleotide Reductases</title>
<link>https://hdl.handle.net/1721.1/165325</link>
<description>Structural Investigation of Allosteric Regulation in Class III Ribonucleotide Reductases
Andree, Gisele A.
Ribonucleotide reductases (RNRs) are essential enzymes that use radical-based chemistry to catalyze the reduction of ribonucleotides to deoxyribonucleotides. Each RNR class uses a different cofactor to generate a catalytically essential thiyl radical in the active site. Anaerobic (class III) RNRs employ an oxygen-sensitive glycyl radical cofactor installed within the adjacent glycyl radical domain. To maintain intracellular nucleotide pool balance, RNRs are allosterically regulated. For the prototypical class Ia RNR from Escherichia coli, this regulation involves the N-terminus of the catalytic subunit in a region known as the ‘cone domain’. ATP or dATP binding to the cone domain results in an association between it and the radical-generating subunit that either allows or prevents, respectively, radical transfer. Most class III RNRs have a cone domain and are allosterically regulated, but the mechanism of how this regulation proceeds is not well understood. Allosteric activity regulation for such a class III RNR, the class III RNR from Streptococcus thermophilus (StNrdD), is the focus of this thesis. We have developed a universal liquid chromatography tandem mass spectrometry-based activity assay, which was adapted for use with class III RNRs, and used the assay to show that ATP is an allosteric activity enhancer and dATP is an allosteric activity inhibitor of StNrdD. We used a combination of cryogenic electron microscopy (cryo-EM) and X-ray crystallography to show that ATP and dATP bind to the StNrdD cone domain and that the cone domains adopt markedly different conformations in the presence of either allosteric effector. Mutagenesis assays and hydrogen-deuterium exchange mass spectrometry data show that the flexible region between the cone domain and the core is important for catalysis. Changes between ATP- and dATP-binding in the cone domains underlie the observed conformational and allosteric changes. 
This work answers some long-standing questions surrounding class III allosteric regulation and lays the groundwork to continue improving our molecular-level understanding of this complex enzyme system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165325</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Anticipation and Entrainment in Human-Robot Interaction</title>
<link>https://hdl.handle.net/1721.1/165324</link>
<description>Real-time Anticipation and Entrainment in Human-Robot Interaction
Fourie, Christopher Kurt
In this work, reactive control methodologies, alongside real-time methodologies for dense human motion prediction, are utilized to facilitate real-time anticipation and entrainment in human-robot interaction. The technical contributions of this thesis include: extensions to dynamical systems-based modulation approaches that enable real-time circumnavigation of non-convex obstacles (NOMAD), a trajectory clustering approach based on a relaxation of dynamic time warping (TRACER), a real-time human modelling and prediction approach (HABITS), and the integration of these technologies into an anticipation and entrainment controller that enables real-time adaptive synchronization between a human and a robot. NOMAD introduces several on-manifold strategies that enable real-time navigation in the presence of non-convex obstacles, alongside a methodology for the efficient representation of dense environments that can represent up to 240k points while maintaining a 1 ms loop. TRACER is a probabilistic trajectory clustering algorithm that uses the expectation-maximization algorithm and a relaxation of dynamic time warping (Soft-DTW), with demonstrable improvement over non-probabilistic techniques such as k-medoids or DBSCAN. HABITS is an event-driven probabilistic filtering and incremental profiling framework that provides robust segmentation, prediction, and alignment estimation in real-time (25 Hz) for emergent interactions in structured settings. The combination of these technologies is then demonstrated to enable both effective real-time anticipation (de-conflicting a workspace), as well as to support entrainment (long-term human-robot synchronization) in human-robot interaction.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165324</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approaches to quantitatively analyze protein complex assembly and regulation</title>
<link>https://hdl.handle.net/1721.1/165323</link>
<description>Approaches to quantitatively analyze protein complex assembly and regulation
Kinman, Laurel F.
Biological macromolecular complexes occupy high-dimensional conformational landscapes corresponding to their assembly and regulation. For many protein complexes, these conformational landscapes encode diverse functional states of the complex, and indeed, there is a growing appreciation of the critical significance of protein dynamics in driving protein function. As such, there is a need for approaches to experimentally and quantitatively resolve these conformational landscapes to better understand: 1) the diverse structural states these complexes sample; 2) how these states and their dynamics are linked to biological function; and 3) how these landscapes are modulated by binding partners or environmental signals, or over the course of complex assembly. State-of-the-art structural approaches including heterogeneous cryogenic electron microscopy (cryo-EM) offer one potentially powerful mechanism for resolving these landscapes, and we present here approaches to resolve large numbers (100s-1,000s) of volumes from a single dataset. Moreover, I explore methods to analyze the resulting large volume ensembles using supervised and unsupervised approaches, and demonstrate the power of these approaches by applying them to identify a proofreading role for the universally conserved methyltransferase KsgA in biogenesis of the bacterial small ribosomal subunit. However, many of the most highly dynamic protein complexes – involved in critical cellular processes like environmental sensing and signal transduction – remain inaccessible to structural techniques like heterogeneous cryo-EM. I suggest that combining structural approaches with high-throughput techniques, including proteomic and library-based approaches, has substantial power to shed light on the function and regulation of such complexes.
As an example, I couple a high-throughput reporter assay with deep mutational scanning, finding highly distributed, phosphorylation-regulated assembly of the Atg1 complex in Saccharomyces cerevisiae in response to diverse environmental cues. Taken together, my work generates a toolbox of complementary approaches for quantitatively characterizing the assembly and regulation of protein complexes, and I anticipate substantial utility in applying these approaches to broadly characterize the functional and regulatory landscapes of protein complexes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165323</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Full Body, Continuous, Wearable Ultrasound</title>
<link>https://hdl.handle.net/1721.1/165322</link>
<description>Full Body, Continuous, Wearable Ultrasound
Wang, Chonghe
In the rapidly evolving field of healthcare technology, the development of wearable ultrasound systems emerges as a groundbreaking innovation with the potential to transform patient monitoring and disease detection methodologies. This thesis, titled "Full Body, Continuous, Wearable Ultrasound," by Chonghe Wang, explores the design, implementation, and impact of two pioneering wearable ultrasound technologies: the Bioadhesive Ultrasound (BAUS) and the Adjustable Wearable Ultrasound (AWUS) systems. Aimed at enabling continuous, non-invasive monitoring of internal organ health, these systems represent a significant leap forward in medical imaging and healthcare management. At the core of the BAUS system is an innovative bioadhesive hydrogel-elastomer hybrid couplant, which facilitates the attachment of ultrasound probes directly to the skin, ensuring stable, long-term imaging. Conversely, the AWUS system employs phase-transition hydrogel couplants for dynamic adjustment of probe orientation and position, allowing for optimized imaging across multiple organs. Together, these systems offer unparalleled capabilities for full-body monitoring, with potential applications ranging from early disease detection to enhanced patient care and a deeper understanding of human physiology. Through rigorous experimental validation, this research assesses the imaging quality, physiological monitoring accuracy, durability, and comfort of the BAUS and AWUS systems. Employing advanced image processing and machine learning techniques, the study analyzes data collected from continuous imaging sessions, highlighting the systems' ability to capture dynamic physiological events and detect pathological changes. This thesis underscores the transformative potential of wearable ultrasound technology in healthcare and medical research. 
By shifting the paradigm from episodic to continuous monitoring, it opens new avenues for proactive health management, promising to improve patient outcomes, reduce healthcare costs, and advance our understanding of the human body.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165322</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry, packing, and synchronization in three-dimensional (3D) multicellular development and diseases</title>
<link>https://hdl.handle.net/1721.1/165320</link>
<description>Geometry, packing, and synchronization in three-dimensional (3D) multicellular development and diseases
Tang, Wenhui
Three-dimensional (3D) multicellular systems, along with their developmental and pathological processes, present unique features compared to 2D flat systems. Both the spatial organization and temporal cell dynamics can be influenced by fundamental differences in cell-microenvironment interactions, cell mechanics, and cell physical properties. However, efforts to understand the fundamental physics and mechanics are limited by the constraints of 3D techniques. Moreover, how multicellular properties evolve during biological processes remains unclear. In this thesis, we systematically study the geometry, packing, and synchronization in three-dimensional multicellular development and diseases.&#13;
&#13;
First, we give a perspective on the emerging evidence that 3D tissue geometry guides morphogenesis and abnormal growth during disease. We review efforts to fabricate various tissue-mimicking structures for studying collective cell behaviors, and emphasize how tissue geometry influences physical, mechanical, and biological properties. Finally, we propose future directions, challenges, and potential applications in geometry-induced stem cell therapeutics.&#13;
&#13;
Second, we report the surprising result that cells can feel the curvature of the substrate they live on. We find that epithelial cells tend to form hexagonal packs to minimize free energy; cells in these hexagonal packs are more solid-like than cells outside the packs. As a result, when the substrate curvature increases, the size of these hexagonal cell packs decreases to release bending energy. Therefore, on more curved surfaces, we observe a more fluid-like cell monolayer with more active cell dynamics and less collectiveness. Such behavior is observed not only on fabricated geometries, but also in a spontaneously growing 3D human lung alveolosphere system derived from human induced pluripotent stem cells.&#13;
&#13;
Third, as building blocks of life, cells actively coordinate their positions to form various structures and perform functions. A central question is how cells in three-dimensional organisms accurately arrange their positions and morphology on curved structures during development. We observe the emergence of topological order, specifically a topological gas-to-liquid transition, during the growth of human lung alveolospheres, regulated by the increasing nucleus-to-cell size ratio. Our finding reveals the critical role of cell nuclear size in regulating cell packing during tissue development, and suggests the importance of topological phase changes in establishing tissue stability.&#13;
&#13;
Last, from a temporal perspective, we report an experimental observation of large-scale synchronized oscillations generated by sustained contraction and expansion, revealing unexpected phase dynamics in epithelia. We then apply this temporal analysis to study proliferating epithelia undergoing jamming transitions and cancer invasion across various degrees of malignancy. Remarkably, this method provides an original perspective to assess the stage of tissue development, estimate cancer malignancy level, and distinguish healthy tissues from diseased ones.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165320</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multidimensional MOF Mixed Matrix Membranes for Efficient Gas Separation</title>
<link>https://hdl.handle.net/1721.1/165319</link>
<description>Multidimensional MOF Mixed Matrix Membranes for Efficient Gas Separation
Lee, Hyunhee
Membrane separations are crucial in the chemical industry, with polymeric materials traditionally used for their cost and mechanical benefits. However, they face challenges in the permeability–selectivity trade-off and in stability. Metal–organic frameworks (MOFs) offer potential solutions with their customizable properties but are difficult to manufacture. Mixed-matrix membranes (MMMs), which incorporate MOFs into polymers, mitigate some issues, yet high MOF loading can lead to aggregation and voids. This thesis investigates the promising potential of MMMs for efficient and improved gas separation, leveraging unique morphologies and understanding the dynamics of MOF–polymer interactions. First, a novel branch-shaped ZIF-8 (BZ) was developed and incorporated into a polymer matrix, successfully establishing a percolated network at loadings as low as 20 wt% and boosting permeability. BZ also showed suppressed polymer chain dynamics and a smaller diffusion cut-off than traditional ZIF-8, which resulted in enhanced membrane stability and superior performance in H₂-based separations. BZ was studied further by investigating the temperature-dependent properties of MMMs. BZ and control ZIF-8 (CZ) MMMs exhibited distinct gas transport behaviour as temperature shifted, with the BZ MMM demonstrating a stronger temperature dependence for H₂-based separations. As temperature decreases, the H₂/CH₄ permselectivity of BZ MMMs drastically increases, with minor changes in H₂ permeability. Conversely, at higher temperatures, separation performance aligns with that of the CZ MMM, providing continuous, broad control over gas performance. To understand the origin of this selectivity difference, facet-specific gas transport in polymer nanocomposites was studied under the hypothesis that BZ is terminated by the 100 facet, which characterizes the less thermally stable cubic polymorph. A key finding is the interaction between the 100 facet and polyimides, which enhances hydrogen-based and ethylene/ethane separation, particularly at subambient temperatures, consistent with the trend observed for BZ MMMs. In conclusion, this thesis addresses the enhancement of MMMs through innovative morphological approaches, in which a percolated network enhances permeability and 100 facet termination may restrict MOF–polymer interphase confinement, leading to high selectivity for small gas pairs, a combination that is very difficult to achieve simultaneously. The temperature and facet-termination effects on gas transport in MMMs can also contribute substantially to the development and optimization of mixed matrix membranes for efficient gas separations.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165319</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemically Circumventing the Oxidative Instability of Boronic Acids for Biological Applications</title>
<link>https://hdl.handle.net/1721.1/165317</link>
<description>Chemically Circumventing the Oxidative Instability of Boronic Acids for Biological Applications
FitzGerald, Forrest Grant
Boronic acids are more than just chemical reagents used in synthetic organic chemistry. They are life-saving chemo- and radiotherapeutics, optical imaging agents, chemical sensors of biological stress, hydrogels, nanomaterials, and so much more. While the literature is rich with applications of boronic acids, there is much room for improvement. From the environment we live in, to the biochemical transformations that drive cellular respiration, oxidation is a common thread. Boronic acids are highly susceptible to oxidation by reactive oxygen species that are continuously generated in these environments, and though boronic acids have found wide use, their utility is limited by their short lifetimes due to this form of degradation. We sought to find chemical solutions for this instability. Benzoxaborolone, a boralactone, was previously developed in the Raines lab as an oxidatively stable arylboronic acid. In this current work, we demonstrate that this oxidative stability is enhanced by electron-withdrawing functional groups and reduced by electron-donating functional groups appended to the aryl ring. Applying principles of physical organic chemistry, we reasoned that this electronic impact is predictable depending on the identity of the functional group, with valuable insights for use in medicinal chemistry and chemical biology. Next, we demonstrated that a benzoxaborolone–fluorophore conjugate is as effective a glycan imaging reagent as the commonly used phenylboronic acid-bearing reagent, yet is more stable and thus more versatile. We designed this reagent to be a general-use minimalistic glycan-binding reagent with high diffusivity and modularity. We then used this reagent to explore the diverse organ-specific glycome in mice, leveraging recently reported techniques in expansion microscopy to acquire fluorescence images at nanoscale resolution.
Finally, we took a new approach to circumventing the chemical instability of boronic acids in biological settings. Bortezomib is a highly potent alkylboronic acid chemotherapeutic that has been a mainstay in the clinic for the treatment of multiple myeloma for nearly twenty-five years. Unfortunately, it is highly susceptible to oxidation and thus prone to inactivation during synthesis, storage, and administration. Substituting in an aromatic, oxidatively stable benzoxaborolone is not an effective strategy for bortezomib, as it would completely change the chemical structure and abrogate activity. Instead, we leveraged a popular boronic acid protecting group, N-methyliminodiacetic acid (MIDA), used primarily in organic chemistry, to create a bench-stable, oxidation-resistant bortezomib that retains its cytotoxic potency toward cancer cells. Furthermore, we uncovered for the first time that the MIDA protecting group may be susceptible to esterase-mediated hydrolysis, paving a path towards potential prodrug applications in the future.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165317</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The true, the good and the justified: Essays on epistemic normativity and value</title>
<link>https://hdl.handle.net/1721.1/165189</link>
<description>The true, the good and the justified: Essays on epistemic normativity and value
Gélineau, Félix-Antoine
This dissertation is about the role that value—in particular, the value of truth—plays in the explanation of epistemic norms, the norms that should govern our belief-formation and belief-revision processes. It is a pervasive practice for us to assess one another’s beliefs, decrying some and commending others: we find something amiss in a belief formed by wishful thinking, and deem proper a belief arrived at via meticulous consideration of one’s evidence. It is natural to think that there is a close connection between what epistemic norms sanction, truth, and the fact that truth matters. In what sense is truth valuable, and what does that entail for how we should conceive of epistemic norms? These questions drive my dissertation.&#13;
&#13;
In chapter 1, “The true, the good and the justified”, I argue that the teleological conception of epistemic justification, the view that for a belief to be justified is for it to be formed in a way that is conducive to truth, is safe from the main objections it faces. These objections import assumptions about value that belong to ethical consequentialism; the epistemic teleologist need not and should not accept them. “Epistemic value” is best understood as a ‘placeholder’, not as a term denoting value in any substantive sense. The upshot is that endorsing a teleological explanation of epistemic norms does not commit one to the idea that true beliefs ought to be promoted in the way the good ought to be promoted according to ethical consequentialists.&#13;
&#13;
In chapter 2, “Why be antisocial (about epistemic normativity)”, I examine whether epistemic normativity is grounded in the usefulness of truth for communities and argue that it is not. If what we ought to believe were to be explained in terms of the usefulness of truth for communities, we should expect our epistemic norms to be wildly different from what they are: they could condone trade-offs between true and false beliefs across a community when doing so would be useful for it; they would often fail to prescribe believing over other doxastic attitudes when the epistemic aims of the community would be equally well-served by either; finally, there is no way to answer in a satisfying manner the question of what isolated individuals should believe.&#13;
&#13;
In chapter 3, “What’s truth got to do with it?”, I develop an account of epistemic normativity that does justice to the idea that truth matters but avoids the shortcomings of other value-based accounts. I argue that, given any plausible account of the value of truth, the value of truth can explain some, but not all epistemic norms. The account of epistemic normativity that I present is pluralistic: I distinguish between substantive norms, which are explained by the value of truth, and basal norms, which stem from the good functioning of mechanisms of belief-formation and belief-revision, independently of the value of truth. This account is shown to be superior to other accounts discussed in the course of the dissertation.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165189</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ions in Electrically Conductive and Insulating Metal-Organic-Frameworks: Transport and Energy Storage</title>
<link>https://hdl.handle.net/1721.1/165188</link>
<description>Ions in Electrically Conductive and Insulating Metal-Organic-Frameworks: Transport and Energy Storage
Su, Alice Yue
Ion transport in metal-organic frameworks (MOFs) is attracting increasing attention since ions can be easily incorporated into porous MOF structures as guest species, promising a variety of possible applications. While electronically insulating but ionically conductive MOFs show great potential as solid electrolytes, the precise structure and tunability of MOFs also enables rational combination of electronic and ionic conductivity to create intrinsic mixed conductors. The combination of both conduction pathways is highly relevant for energy storage applications where ions interact with and insert into electronically conductive electrode active materials. This thesis first explores ion transport in an anionic MOF electrolyte, conducting mono- and divalent cations. Transport studies give insight into how MOF structure and ion-related variables impact ionic conductivity and activation energy. Studies on this material paint a picture that furthers our understanding of fundamental ion transport mechanisms in MOF electrolytes. Incorporating electronic transport as an additional layer of complexity, new mixed proton-electron conductive two-dimensional MOFs based on aromatic azaborine ligands are synthesized. Their dual conductive nature is confirmed by separating the ionic and electronic contributions to the overall transport. Lastly, a family of triazatruxene-based two-dimensional electronically conductive MOFs are explored as pseudocapacitors. Here, the diffusion of ions inside the pore network, as well as their interaction with MOF active sites as a function of interlayer spacing, is investigated for its impact on capacitance.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165188</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plurals of politeness and the morphosyntax of number</title>
<link>https://hdl.handle.net/1721.1/165187</link>
<description>Plurals of politeness and the morphosyntax of number
Sinha, Yash
Many languages use plural pronouns to address (and refer to) single individuals with politeness or honorification. In some languages, these plurals of politeness (PoPs) show mixed agreement, triggering plural agreement on some agreement targets and singular on others. In this dissertation, I use PoPs to investigate the internal structure of DPs, focusing on how number features are represented within them. &#13;
 &#13;
Starting first with pronominal DPs, I adopt the view that these contain two phrases: (i) a noun phrase (NP) headed by a silent noun (Postal 1966 and subsequent work), and (ii) an index phrase (idxP), realized by the pronoun, which occupies the specifier of the DP and introduces a referential index (Jenks 2022; see also Choi 2014, Giusti 2015). My novel claim is that both the NP and the idxP bear their own number features. In most cases, this is not detectable because the number features of the NP and idxP match. I argue, however, that this is not the case for PoPs, which allows us to see that these two number features are in fact distinct. Specifically, I show that the agreement patterns of PoPs are best accounted for by treating them as consisting of a plural idxP with a singular NP (i.e., a plural pronoun with a silent singular noun). This analysis not only derives the mixed agreement of PoPs seen in some languages, but also explains certain cross-linguistic restrictions on the distribution of singular agreement with them. &#13;
 &#13;
I also extend this analysis to account for nominal PoPs, a type of nominal DP found in a subset of languages with pronominal PoPs. These nominal PoPs have the same semantics and the same agreement patterns as their pronominal counterparts, but crucially, the morphology on the noun in nominal PoPs is singular and not plural. I argue that these similarities and differences can be explained by positing that idxPs are present in nominal DPs as well, but are not realized overtly. This analysis of nominal PoPs is also shown to account for the DP-internal concord patterns seen with them.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165187</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oddness under Discussion</title>
<link>https://hdl.handle.net/1721.1/165186</link>
<description>Oddness under Discussion
Hénot-Mortier, Adèle
At a broad level, this dissertation's main claim is that many cases of pragmatic oddness do not stem from assertions alone, but rather from their interaction with the questions they implicitly evoke. Felicitous assertions must evoke felicitous questions. To operationalize this claim, a model of compositionally derived implicit questions is devised, along with conditions on their well-formedness, drawing on familiar concepts in pragmatics such as Redundancy and Relevance. This model assigns a central role to the degree of specificity, or granularity, conveyed by assertions.&#13;
&#13;
At a narrower level, this dissertation argues that disjunctions and conditionals fundamentally differ in terms of the questions they evoke, and that this difference has direct consequences for the oddness/felicity profiles of sentences involving these operators. Disjunctions are shown to be prone to Redundancy issues, while conditionals are shown to be prone to Relevance issues. In other words, disjunctions and conditionals typically display distinct flavors of oddness. This is supported by three main classes of sentences. First, sentences that can be seen as equivalent, but which combine conditionals and disjunctions in distinct ways, display varying felicity profiles. Second, "pure" disjunctions and conditionals that can be seen as isomorphic, if not equivalent, display varying felicity profiles. Third, some differences between these disjunctions and conditionals remain when additional pragmatic phenomena, in particular scalar implicatures, are at play, and such differences shift in a way predicted by our approach.&#13;
&#13;
This dissertation therefore justifies the appeal to a more elaborate model of (implicit) questions, which, when fed to the pragmatic module, achieves better empirical accuracy on challenging data than previous models based solely on assertive content.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165186</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illumination-Robust Terrain Relative Navigation for Planetary Descent</title>
<link>https://hdl.handle.net/1721.1/165184</link>
<description>Illumination-Robust Terrain Relative Navigation for Planetary Descent
Mitchell, Adriana Macieira
Missions to explore planetary bodies (e.g., Moon, Mars, Titan, Europa, Enceladus) through surface exploration have been planned for the next decades. These missions depend on autonomous optical navigation capabilities for safe entry, descent, and landing near key scientific areas of interest, potentially near hazardous terrain. Terrain Relative Navigation (TRN) enables autonomous precision landing by matching descent images to an a priori orbital map. However, performance degrades significantly when large differences in solar illumination exist between the map and descent imagery, particularly under high azimuth angle changes, due to terrain-induced shading inversions that break assumptions of photometric consistency in both frequency and intensity-based correlation methods. This thesis presents a set of methods to robustify TRN performance under large directional illumination changes, addressing three core challenges: understanding the failure of existing TRN methods, improving feature matching robustness to varying Sun angles, and generating reliable navigation maps from incomplete orbital imagery. First, the physical cause of correlation failure is characterized through a frequency-domain analysis of shading effects. Building on the azimuth impact matrix, which models the directional dependence of shading-induced phase reversals, this work applies it as a frequency-domain correction to frequency correlation to improve correlation peak accuracy across large solar azimuth differences. Second, a novel frequency-domain photometric correction method, Solar Orientation Layering via Frequency Image eXtraction (SOLFIX), is introduced to produce corrected map products aligned with expected descent conditions by layering spatial frequency content aligned with known solar angles from multi-resolution, multi-illumination orbital imagery. Third, a predictive illumination-aware map is developed to identify terrain regions likely to yield reliable correlations.
This map integrates solar azimuth geometry, terrain aspect from low-resolution digital elevation models, and spatial frequency information to pre-filter unreliable areas of the map prior to localization. The proposed methods are validated on Mars orbital datasets, simulated terrain renderings, and NASA JPL’s field test imagery, each with Sun angle variations. Despite wide Sun angle differences, the proposed methods recover accurate localization where baseline TRN methods fail due to the bias from shading inversions caused by large solar azimuth differences. These contributions enable reliable optical navigation in scenarios where acquiring new orbital maps is infeasible and support future planetary missions operating under severe illumination constraints.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165184</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verification Planning for Efficient Uncertainty Reduction of Space Science Systems</title>
<link>https://hdl.handle.net/1721.1/165183</link>
<description>Verification Planning for Efficient Uncertainty Reduction of Space Science Systems
Stenzel, June
Verification for a complex engineering system is essential for mitigating risks associated with quantitative uncertainty and ensuring confidence in the system’s performance. Verification planning is the problem of deciding how to allocate resources in this process, and verification for some systems may allocate more time and money to certain verification activities than is necessary to achieve a desired level of certainty. The need for efficient and effective verification is especially great for space systems and space science instruments, which are often subject to active cost and schedule constraints that make verification planning a challenge. This research proposes the Uncertainty Quantification Verification Planning Methodology (UQVM) for designing optimal-under-uncertainty verification plans in a systematic, quantitative, model-based way. Uncertainty quantification methods are used to model instrument performance, determine sources and magnitudes of parametric uncertainty, and perform sensitivity analysis. Stochastic models of potential verification activities are also developed with subject matter expertise, and are subjected to uncertainty quantification. A novel approach to optimal Bayesian experimental design (OBED) is developed to determine sets of verification activities that minimize effort and maximize certainty of system performance. A comprehensive systems engineering approach brings together these uncertainty quantification and experimental design methods, so that systems engineers can optimize the design of AI&amp;T and V&amp;V campaigns with respect to programmatic cost and confidence in system performance, and can refine those plans as new verification data is obtained. UQVM is demonstrated for space science system case studies.
A study of verification planning for CCD performance shows that optimally uncertainty-reducing plans within a cost cap can be identified that reduce the variance of predicted performance by 67%, and that iterative data-informed verification planning can reduce the variance of predicted performance by 96%. A retrospective analysis of the sensitivity verification for the Large Lenslet Array Magellan Spectrograph (LLAMAS) shows that optimized plans can reduce testbed time by 94% without a loss in uncertainty reduction, and that plans can be identified that perform better than historical plans with a confidence of greater than 90%. An early-phase analysis of precision control verification for the James Webb Space Telescope (JWST) shows that tests can be ranked in order of benefit-at-cost.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165183</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Mechanisms Defining and Driving Receptivity in Conversion of Fibroblasts to Motor Neurons</title>
<link>https://hdl.handle.net/1721.1/165182</link>
<description>Molecular Mechanisms Defining and Driving Receptivity in Conversion of Fibroblasts to Motor Neurons
Beitz, Adam M.
Layers of regulation stabilize cellular identity and prevent aberrant cell-fate transitions. Cell fate conversion processes, such as the conversion of fibroblasts into motor neurons, attempt to induce a cell of one type to become a cell of another type by activating genes and gene networks associated with the desired final cell type. Cell fate conversion processes have the potential to revolutionize drug discovery and drive the development of novel cell-based therapies, but their translational potential is limited by poor conversion rates. Forced overexpression of lineage-specifying transcription factors is rarely sufficient to induce a complete change in cellular identity. We find that overexpression of known oncogenes can enhance a cell’s receptivity to conversion by increasing proliferation rates and can increase conversion yields 100-fold, even when converting to a non-proliferative state.&#13;
In this thesis, we use the model system of mouse embryonic fibroblast to motor neuron conversion to determine how oncogenic mutants of HRAS and the tumor suppressor protein p53 induce a receptive cell state and enhance conversion. We find that cells that proliferate at high rates early in conversion attain the motor neuron identity at higher rates than cells that do not proliferate as much. We isolate cells that attain high rates of proliferation and define the subcellular properties of these conversion-receptive cells. Receptive cells display more compact chromatin, as proliferation broadly destabilizes chromatin structures, including those that reinforce the starting cell type identity. An increase in trimethylation of H3K27, a histone mark known to induce chromatin compaction and reduce transcriptional output, supports this compaction and is associated with a global decrease in transcription rates. Finally, we make the counter-intuitive finding that mutant p53 enhances conversion beyond its role in promoting proliferation, an effect that depends on the presence of native p53. The p53 mutant induces accumulation of native p53 in a subpopulation of cells. We developed a tool to track p53 levels in mouse embryonic fibroblasts, enabling us to follow p53 accumulation during conversion. By developing tools to isolate cells with different conversion-receptivity during oncogene-enhanced conversion, we can characterize the subcellular features that promote conversion. We expect future cell fate conversions may be engineered to systematically guide cells through a receptive state without oncogenes.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165182</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and biochemical investigations of collagen-I trimerization</title>
<link>https://hdl.handle.net/1721.1/165179</link>
<description>Structural and biochemical investigations of collagen-I trimerization
Srinivasa, Sorin Asha
The fibrillar collagens display a unique assembly process, with assembly initiating at the C-terminal propeptide (C-Pro) domain, a small globular domain that encodes vast amounts of information for chain selection, stoichiometry, and molecular recognition. The C-Pro domain is responsible for ensuring that strands that assemble together are of the same type of collagen, and, in the case of certain collagens, that the stoichiometry between strands is correct. In addition to being involved in assembly, the C-Pro domain has also been shown to play a significant role in collagen proteostasis. Quality control factors in the cell are able to specifically recognize misfolded C-Pro domains in the context of disease-causing mutations, demonstrating a vital role for C-Pro folding in this process. In chapter 2, we report progress towards the answer to a long-elusive question: why do certain collagens favor heterotrimeric assemblies even when homotrimeric assemblies are possible? We show progress towards a high-resolution structure of the collagen-I C-Pro 2:1 heterotrimer, and investigate the role of Ca2+ coordination in dictating assembly behavior and chain association. In chapter 3, we characterize the assembly and proteostasis defects associated with a set of C-Pro mutations that have been observed in patients with osteogenesis imperfecta. Our data reveal that, while some variants are effectively recognized by the cell’s quality control mechanisms and retained intracellularly, others are not and are secreted at levels similar to the wild-type. We also demonstrate that even severely assembly-deficient C-Pro domains can escape quality control, likely resulting in severe or lethal disease. Collectively, the work presented here significantly advances our understanding of how collagen-I assembly occurs in healthy biological systems, and how it can be disrupted in disease, setting the stage for a variety of future investigations into this challenging system.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165179</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Bayesian Inference for Spatio-Temporal Systems with Applications in Remote Sensing</title>
<link>https://hdl.handle.net/1721.1/165170</link>
<description>Structured Bayesian Inference for Spatio-Temporal Systems with Applications in Remote Sensing
Leung, Kelvin Man Yiu
Satellite-based remote sensing observing systems are a key source of information for understanding Earth system dynamics. Bayesian inference provides a principled framework for retrieving physical parameters from satellite observations while quantifying uncertainty. However, the high dimensionality and spatio-temporal complexity of remote sensing problems pose major computational challenges for traditional inference methods. This thesis develops scalable algorithms for Bayesian inference for remote sensing systems by leveraging low-rank structure and sparse conditional dependence structure. The resulting methods enable accurate and efficient posterior characterization at scales relevant for modern satellite missions. The first theme of this thesis is identifying low-rank structure in problems where the scientific goal is to estimate a small number of quantities of interest (QoIs) that are a function of the unknown parameters. Using a gradient-based dimension reduction framework, we construct informative subspaces of the observation space that are tailored to specific QoIs. This framework is integrated with transport maps to enable simulation-based inference directly for the QoIs, without the need to recover the full posterior of the high-dimensional parameters. We demonstrate this approach on imaging spectroscopy data from NASA’s upcoming Surface Biology and Geology (SBG) mission and show that it achieves inference accuracy comparable to Markov chain Monte Carlo (MCMC) while requiring orders of magnitude less computational time. In addition, we examine the role of preconditioning in dimension reduction and demonstrate that the optimal choice of preconditioner depends on the nonlinearity of the forward model. Next, we explore how conditional independence structure can be used to improve the scalability of inference algorithms relevant to remote sensing systems. 
We first consider a single-pixel setting and exploit within-state conditional independence to build sparse transport maps for hyperspectral retrievals. These sparse maps reduce computation over standard non-Gaussian inference methods while preserving accuracy. Extending beyond individual pixels, we develop an information filter that leverages spatio-temporal conditional independencies in satellite observing systems. By incorporating sparse inverse covariance structure into the filtering equations, we achieve significant improvements in both scalability and inference accuracy on data relevant to NASA’s OCO-2, EMIT, and SBG missions. Building on this structure, this thesis also explores extensions of large-scale spatio-temporal inference to the continuous non-Gaussian setting using measure transport. Drawing inspiration from belief propagation algorithms for Gaussian graphical models, we construct decomposed transport maps tailored to spatio-temporal graphical structures. These methods enable scalable inference while capturing non-Gaussian features of the posterior. We demonstrate their application to spatio-temporal systems, providing a viable framework for high-fidelity uncertainty quantification.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165170</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Intelligent Psycho-Social Support to Augment Behavioral Health Management in Isolated Environments</title>
<link>https://hdl.handle.net/1721.1/165169</link>
<description>Towards Intelligent Psycho-Social Support to Augment Behavioral Health Management in Isolated Environments
Nguyen, Golda
Long-duration spaceflight requires astronauts to live in isolated, confined, and extreme environments while physically and socially separated from support systems for years at a time. This thesis explores how intelligent, autonomous tools may augment behavioral health management when real-time, Earth-based support is inaccessible in deep space. Such intelligent psycho-social support must be able to: 1) characterize individual risks, 2) assess health state accurately, and 3) deliver appropriate interventions or countermeasures. To augment these system functions, techniques from statistical modeling, natural language processing, and conversational AI are investigated across three case studies of isolation: wide-scale isolation during the COVID-19 pandemic, isolation in a space analog habitat, and social isolation in the general public. To explore risk characterization, statistical models were constructed on trait-based, behavioral, and social environment factors in relation to mood and anxiety state in isolation. Models of civilian risk under isolation were developed to inform automated risk characterization for future private astronauts. To explore augmenting psychological assessment, a feasibility analysis of natural language processing (NLP) for automated affect classification was conducted. Transformer-based NLP techniques were tested against lexicon-based and other machine learning (ML)-based techniques on affect classification of personal journal text from analog astronauts. Transformer-based models demonstrated improved detection of negative affect classes, but overall, lexicon-based models were still comparable to ML-based models. Finally, to explore augmenting countermeasures, a study of engagement and disclosure in AI-augmented reflection was conducted.
In this study, participants shared a greater volume of content and spent more time in a hybrid reflection condition (reflecting alone first before chatting with a prompted chatbot) than in separated conditions (journaling alone or reflecting only with the bot). Higher loneliness was associated with lower comfort and engagement, but behavioral benefits were similar across lonely and non-lonely users, suggesting further tailoring is needed to support isolated individuals. Through these case studies, this work presents a point of departure towards a vision of intelligent psychosocial support systems that are not meant to replace human connection, but to augment behavioral healthcare for individuals in isolated settings on Earth and in space.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165169</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery and characterization of plateau potentials in cortical neurons of awake mice</title>
<link>https://hdl.handle.net/1721.1/165168</link>
<description>Discovery and characterization of plateau potentials in cortical neurons of awake mice
Mojica Soto-Albors, Raúl E.
Plateau potentials are large calcium-dependent regenerative depolarizations that support burst firing and facilitate behavioral time scale synaptic plasticity (BTSP) in the hippocampus. Despite substantial progress in our understanding of these events in CA1, it remains unclear whether they occur in the neocortex and, if so, how they manifest. To address this, we performed in vivo whole-cell patch-clamp recordings from layer (L) 2/3, L4, and L5 pyramidal neurons (PNs) in mouse primary visual cortex (V1) to produce the first systematic characterization of cortical plateau potentials. We established functional correlates of plateau potentials and evaluated their role in plasticity induction. First, we described the high prevalence of prolonged somatic depolarizations accompanied by high-frequency spikes (~105 Hz) in 43% of L5 PNs. Cortical plateau potentials closely resembled those previously described in the hippocampus, averaging ~27 mV in amplitude and ~60 ms in duration, with pronounced intraburst spike amplitude attenuation. Recordings obtained from L2/3 and L4 neurons revealed that cells in these layers do not generate plateaus, indicating a unique generation site in L5. Within L5, neurons exhibiting plateaus had lower input resistance than those that did not, suggesting plateaus may be specific to thick-tufted extratelencephalic (ET) PNs. We further described how the incidence of plateaus in L5 PNs was surprisingly not increased by visual stimulation. Intriguingly, their prevalence more than tripled during periods of behavioral arousal. Furthermore, plateau initiation was more likely during the rising phase of the extracellular theta rhythm (5-10 Hz) in V1, suggesting that cortical plateaus are modulated by internal state and network rhythms rather than visual stimuli alone. Finally, we investigated the role of cortical plateaus in plasticity by pairing a non-preferred stimulus with artificially evoked events.
In contrast to the BTSP observed in CA1, spiking output remained consistent before and after pairing in V1. However, subthreshold responses revealed some synaptic depotentiation for the preferred stimulus, suggesting that plateaus might be sufficient to alter synaptic weights, though in a different way than that demonstrated in the hippocampus. Collectively, this work sheds light on an underexplored cortical output mechanism unique to L5 pyramidal neurons, positioning the plateau potential as a cell-type-specific phenomenon that may reshape sensory representations in the neocortex, with implications for cortical computation and biologically inspired learning rules in neural networks.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165168</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comprehensive laboratory studies of organic oxidation across a range of photochemical ages and peroxy radical conditions</title>
<link>https://hdl.handle.net/1721.1/165167</link>
<description>Comprehensive laboratory studies of organic oxidation across a range of photochemical ages and peroxy radical conditions
Franco Deloya, Lesly Joanne
Reactive organic carbon (ROC), defined as all volatile organic compounds (VOCs) and particulate organic carbon except methane, is the largest source of reactive emissions in the atmosphere and therefore plays a central role in atmospheric chemistry. After emission, ROC undergoes rapid chemical transformations driven by sunlight and atmospheric oxidants, forming peroxy radicals (RO₂) whose fate shapes the resulting oxidation products. This sequence of reactions, known as the ROC lifecycle, continues until ROC is removed via wet or dry deposition or fully oxidized to carbon dioxide (CO₂). Throughout its evolution, ROC contributes to the formation of ozone, particulate matter and CO₂, making its study necessary for understanding air quality and climate. While many aspects of this cycle have been explored through modeling, laboratory experiments, and field campaigns, our understanding of the ROC lifecycle remains incomplete due to the large number of compounds and its chemical complexity as it evolves in the atmosphere. This thesis investigates key aspects of the ROC lifecycle, focusing on its multigenerational chemical evolution over extended atmospheric aging and on how RO₂ chemistry shapes gas-phase species distributions. The first part investigates the multigenerational oxidation of ROC through a series of laboratory experiments designed to simulate atmospheric aging over extended timescales. These experiments make comprehensive measurements of both gas- and particle-phase carbon to gain a holistic understanding of ROC evolution in the atmosphere. While models have been used to simulate the full evolution of ROC, few experiments have constrained this process. These experiments provide the first direct constraints on the lifetime of ROC against heterogeneous oxidation, the formation of small and long-lived oxygenated VOCs, and the evolution of carbon properties (e.g. carbon number, carbon oxidation state) over multiweek atmospheric equivalent aging. 
We then compare these experimental results to long-term aging simulations using several chemical mechanisms that are implemented in models. Most chemical mechanisms achieve near-total carbon closure; however, they differ in species composition, ROC mineralization rates, and OH reactivity, especially at later stages of oxidation. These discrepancies highlight the gaps that remain in our understanding of the long-term chemical evolution of ROC. Finally, we conduct experiments that simulate a variety of RO₂ environments representative of atmospheric conditions. Past chamber experiments have primarily focused on RO₂ chemistry under extreme conditions that involve short bimolecular lifetimes and “low” or “high” NOₓ conditions. Our findings suggest that traditional metrics used to assess species distributions are insufficient to explain the full range of changes with shifts in the RO₂ environment. A more comprehensive accounting of all relevant RO₂ fates is necessary to explain gas-phase species composition, which in turn influences HOₓ and NOₓ cycling, ozone production, and ultimately our understanding of secondary organic aerosol formation. Together, these results provide new constraints on the atmospheric evolution of ROC across a wide range of oxidative and photochemical conditions, improving our understanding of ROC and its role in air quality and climate.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165167</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural characterization and antibiotic development for the Neisseria gonorrhoeae class Ia ribonucleotide reductase</title>
<link>https://hdl.handle.net/1721.1/165166</link>
<description>Structural characterization and antibiotic development for the Neisseria gonorrhoeae class Ia ribonucleotide reductase
Dorfeuille, Andrew Leonard Jacques
Ribonucleotide reductases (RNRs) are essential enzymes that catalyze the reduction of ribonucleotides to deoxyribonucleotides, a critical step in DNA biosynthesis and repair. Class Ia RNRs, found in eukaryotes and many aerobic bacteria including Escherichia coli (E. coli) and Neisseria gonorrhoeae (N. gonorrhoeae), are regulated by complex allosteric mechanisms that control enzymatic activity and substrate specificity. These enzymes function as α₂β₂ complexes, with the α₂ subunit housing regulatory sites and the β₂ subunit providing a catalytic radical. Proper regulation of RNR is vital for maintaining balanced dNTP pools and genomic integrity, making RNRs attractive targets for therapeutic intervention in both infectious disease and cancer. &#13;
This thesis examines the structural basis of specificity regulation in N. gonorrhoeae class Ia RNR using cryogenic electron microscopy (cryo-EM). Near-atomic-resolution structures of the enzyme bound to four canonical substrate/specificity-effector pairs (CDP/dATP, UDP/dATP, GDP/TTP, and ADP/dGTP) reveal that effectors induce conformational changes in loop 2 of the α₂ subunit, which alter hydrogen bonding contacts in the active site, leading to preferential substrate binding. This mechanism is also conserved in the E. coli class Ia RNR, a close homolog. Cryo-EM maps also show weak, non-specific binding of a second nucleotide in the cone domain, likely reflecting artifacts of high nucleotide concentrations used in the cryo-EM experiments rather than physiological relevance.&#13;
Building on these findings, Chapter III investigates the potential interaction of the cyclic dinucleotide, c-diAMP, with the cone domains of E. coli and N. gonorrhoeae RNRs. We find that c-diAMP binds both enzymes with low micromolar affinity but does not alter the activity of either RNR to a large extent over the no-effector control. Although similar, the behaviors of the E. coli and N. gonorrhoeae RNRs are not identical, highlighting the potential of targeting the cone domain for species-specific RNR inhibition. This approach could enable the development of novel antibiotics, particularly needed for combatting antibiotic-resistant N. gonorrhoeae.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165166</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Reinforcement Learning for Network Control</title>
<link>https://hdl.handle.net/1721.1/165163</link>
<description>Generalizable Reinforcement Learning for Network Control
Wigmore, Jerrod
This thesis confronts the critical generalization gap of Deep Reinforcement Learning (DRL) that hinders its effective application to queueing network control, where policies often fail to perform robustly in unseen topologies and traffic conditions upon deployment. We develop and analyze a suite of novel techniques that systematically embed structural domain knowledge and safety considerations to create more robust, efficient, and generalist learning agents. To improve generalization for a large class of queueing network control problems, we first introduce the Switch-Type Network (STN), a policy architecture that embeds the "switch-type" property common in classical control. This architectural prior improves sample efficiency and enables superior zero-shot generalization across varying network parameters. To address generalization across multi-hop networks, we then propose the Multi-Axis Graph Neural Network (MA-GNN), which augments the traditional inter-node message passing operations of a GNN with a novel intra-node aggregation mechanism to capture complex, permutation-invariant dependencies between different traffic classes. This allows the MA-GNN to learn and output high-level control coefficients that are effective for unseen network topologies. Recognizing the limitations of offline training, we shift to online adaptation and introduce an intervention-assisted DRL framework that guarantees stability in environments with unbounded state-spaces. By partitioning the state space and ceding control to a provably stable policy in high-congestion regions, this framework prevents catastrophic learning failures; its stability is proven via Lyapunov analysis, and foundational policy gradient theorems are extended to support the interventional setting.
As a complementary case study in structured exploration, we also develop a Bayesian Hierarchical Bandit model and a Hierarchical Thompson Sampling (HTS) algorithm for the multi-band radio channel selection problem, which leverages environmental correlations to guide exploration and significantly reduce regret. Collectively, these contributions provide a comprehensive framework for creating DRL agents that are more robust and practical, demonstrating that embedding knowledge of policy structure, network topology, safety, and environmental correlations is a crucial step towards deploying autonomous agents in complex, real-world systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165163</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Competition, Number and Definiteness</title>
<link>https://hdl.handle.net/1721.1/165161</link>
<description>Local Competition, Number and Definiteness
Doron, Omri
In this dissertation, I explore the consequences of local competition inside the DP. I argue that various phenomena, including multiplicity inferences, homogeneity and definiteness, are best explained as locally-triggered scalar implicatures (SIs), when coupled with a view of SIs as presupposed (Bassi et al., 2021). I begin with the puzzle of the multiplicity inferences that arise from the use of plural indefinites, and show that deriving them as presupposed SIs naturally explains their felicity conditions and projection from embedded environments. I then argue that this competition-based system can account for the typology of number marking, in fact providing us with a parsimonious theory of the crosslinguistic variation. A key result of this argument is that any language which allows for number marking on nouns has both the singular and the plural feature in its inventory. Finally, I suggest that local competition can also derive the inferences stemming from definite descriptions, including uniqueness, maximality and homogeneity.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165161</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering DNA-based electrochemical diagnostics for translational applications</title>
<link>https://hdl.handle.net/1721.1/165160</link>
<description>Engineering DNA-based electrochemical diagnostics for translational applications
Zhou, Xingcheng
Low-resource settings are disproportionately burdened by infectious diseases due to limited access to early and accurate disease detection. Current gold standard methods, while effective, have high turnaround times and require costly infrastructure that is often impractical in these environments. Electrochemical biosensors are a promising alternative due to their sensitivity, selectivity, low cost, portability, and rapid response time. Among various biorecognition elements, DNA is particularly advantageous due to its versatility, comparable stability, and low production cost. However, there is a significant gap between laboratory proof-of-concept biosensors and commercially viable biosensors. The main challenges associated with commercializing DNA-based sensors include ineffective surface chemistries, limited target range, poor long-term stability, and high-cost, non-scalable manufacturing processes. In my thesis, I resolve these specific challenges by engineering DNA-based electrochemical biosensor systems to support their translation into commercially-viable products.&#13;
&#13;
First, I address ineffective surface chemistries for modifying screen-printed carbon electrodes, which are widely used for bioelectrochemical systems due to their low production cost and scalable manufacturing. However, effective modification with biomolecules remains a challenge as the main methods are either non-specific, require harsh reagents, or form weak monolayers. In this project, we develop a new facile, bio-orthogonal, and biocompatible surface chemistry for modifying screen-printed carbon electrodes. This approach enables the modification of electrode surfaces with DNA, whole cells, and proteins while maintaining bioactivity, supporting applications in both biosensing and clean energy.&#13;
&#13;
Next, I expand the range of targets for nucleic acid electrochemical detection. Electrochemical hybridization assays are sensitive and specific but are limited to very short nucleic acids. To resolve this, we develop a restriction enzyme-assisted electrochemical hybridization assay for improved nucleic acid detection. By incorporating target-specific restriction enzymes, I detect long nucleic acids, with performance dependent on the location of the cut site relative to the electrode surface. Thus, I establish guidelines for assay design to serve as a generalizable platform for robust electrochemical detection of long nucleic acids.&#13;
&#13;
Subsequently, I solve the challenge of long-term storage of sensors. Commercialization of DNA-based electrochemical biosensors is challenged by the stability and shelf-life of the DNA monolayer. There is no existing technology that allows long-term storage of these sensors at room temperature under dry conditions. Here, we report a novel method to preserve DNA-based biosensors through a protective coating of polyvinyl alcohol. We show that the coating significantly improves the shelf life at both room temperature and elevated temperatures. We further demonstrate that the DNA is viable for downstream sensing. Our findings facilitate the commercialization of DNA-based biosensors as viable products.&#13;
&#13;
Finally, I design and fabricate a multiplexed electrochemical diagnostic device for respiratory viruses. While electrochemical biosensors are a major research area of diagnostics that can be utilized in low resource settings, effectively integrating assays into a seamless, inexpensive fluidic device is difficult. In this project, we first develop an assay to detect three types of respiratory viruses with sensitivity comparable to PCR. We then integrate the workflow into a shelf-stable diagnostic utilizing low-cost materials and a scalable manufacturing process. Our device offers a practical solution for device integration and future disease control for vulnerable populations.&#13;
&#13;
Overall, the methods developed in this work have the potential to support the transition of DNA-based electrochemical biosensors from academic research to commercially-viable products, paving the way for more FDA-approved diagnostics for early disease detection and advancing health equity in vulnerable populations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165160</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Labor and Public Economics</title>
<link>https://hdl.handle.net/1721.1/165159</link>
<description>Essays in Labor and Public Economics
Martin Richmond, Jane Alexandra
This dissertation examines how institutional policies and labor market frictions affect human capital formation, employment outcomes, and economic mobility across generations. The three essays explore distinct but interconnected aspects of how families and workers navigate constraints in education, childcare, and credentialing systems.&#13;
&#13;
The first essay is joint with Jonathan Rothbaum and investigates intergenerational spillovers from parental disability insurance receipt, using linked Census/IRS/SSA administrative data covering over 400,000 families. Using US administrative data, we link dependent children of SSDI recipients to their tax filings at age 25. We document a key descriptive fact: a child’s income at age 25 is increasing in the age at which the parent receives their first SSDI transfer. We show that the probability that a child themselves receives an SSDI transfer as an adult is decreasing on the same margin; however, the pattern persists in the sample of children who do not receive such transfers. We build a model to show that these facts are consistent with a "lost-years" mechanism by which children whose parents become resource constrained earlier in life lose more years of costly intergenerational human capital investment.&#13;
&#13;
The second essay examines firms' decisions to remove bachelor's degree requirements from job postings and the subsequent hiring outcomes. By tracking within-role requirements over time, I identify instances where employers have explicitly removed bachelor's degree prerequisites from job advertisements. Linking these changes to aggregated resume data, I analyze whether individuals subsequently hired have different educational credentials, alternative qualifications, or experiential backgrounds. After the removal of degree requirements, the share of individuals hired with a degree falls by 1-3 percentage points. Concurrently, the share of people hired who report a non-degree credential is roughly unchanged, and the average years of labor market experience possessed by a candidate increases by 0.5 years. I detail a simple model explaining why firms may be motivated to remove degree requirements and substitute them with other screening mechanisms. To address potential endogeneity concerns, I employ an instrumental variable approach, exploiting staggered state-level policy shifts in the removal of degree requirements from public sector employment, which act as an information treatment to firms. I also show that these firm-level and state-level policy changes are broadly uncorrelated with local labor market conditions.&#13;
&#13;
The third essay is joint with Maya Bidanda and analyzes how childcare access affects parents' choice between wage employment and self-employment. In this paper, we exploit variation in subsidized Pre-Kindergarten (Pre-K) to understand how access to childcare impacts mothers’ propensity for self-employment and formal work. We find that mothers of children who are too young for school are more likely to be self-employed and less likely to be in formal work than mothers of older children. We do not find similar trends for fathers. In addition, access to low-cost childcare increases the likelihood of formal work and decreases the likelihood of self-employment. This is evidence that mothers are pushed into self-employment by frictions or barriers in formal work when childcare is not available. In the last part of the paper, we discuss the impact of these patterns on mothers' lifetime income.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165159</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Chemical Reactivity Prediction: Paradigms, Challenges, and Applications</title>
<link>https://hdl.handle.net/1721.1/165158</link>
<description>Machine Learning for Chemical Reactivity Prediction: Paradigms, Challenges, and Applications
Raghavan, Priyanka
The discovery of new therapeutic agents in the pharmaceutical industry is a complex, iterative process, often encapsulated by the Design-Make-Test-Analyze (DMTA) cycle, in which chemists ideate, synthesize, and assay compound targets of interest. A significant bottleneck in this cycle is the "Make" phase, where the synthesis of novel compounds can be time-consuming, resource-intensive, and fraught with unpredictable outcomes. Accurate prediction of chemical reactivity, particularly reaction yields and selectivities, is therefore paramount to accelerating drug discovery by enabling more efficient synthesis planning, reducing material waste, and guiding the design of more synthetically accessible molecules. As such, this dissertation explores the application of machine learning (ML) to address critical challenges in chemical reactivity prediction, with a particular focus on low-data regimes and the integration of predictive models into practical drug discovery workflows.&#13;
&#13;
This thesis begins by addressing the pervasive challenge of predicting reaction yields from sparse, literature-derived data. It details the assembly of a large dataset of substrate scopes and evaluates single-task and multi-task ML approaches, highlighting the limitations imposed by data scarcity and noise in real-world chemical literature. Recognizing these challenges, this thesis then provides recommendations for designing experimental datasets that are more conducive to robust machine learning, specifically offering considerations for curating data with the downstream modeling goal in mind.&#13;
&#13;
Building on these insights, this thesis then turns toward specific applications of machine learning in medicinal chemistry, first presenting a direct, impactful implementation of ML to enhance synthetic accessibility in drug design by predicting Suzuki cross-coupling yields from a large, historical pharmaceutical library dataset. ML models are shown to often outperform expert intuition and be successfully integrated into existing workflows for library design and rescue, significantly increasing synthesis efficiency. Finally, this thesis expands from chemical reactions to enzymatic reactions, detailing a computational and ML-based workflow for transaminase enzyme selection, to streamline the enantioselective synthesis of valuable chiral amine building blocks used in medicinal chemistry.&#13;
&#13;
Collectively, this thesis contributes to the growing field of machine learning in chemistry by addressing fundamental challenges in reactivity prediction, particularly in low-data and real-world industrial settings. It provides novel modeling paradigms for existing data and insights into the limitations of current approaches, offers a conceptual framework for improved data generation, and demonstrates the tangible benefits of integrating ML models into the DMTA pipeline. Throughout, the critical interplay between data quality, molecular representation, and model architecture and evaluation is emphasized, paving the way for more reliable and impactful predictive tools that can accelerate the pace of chemical discovery.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165158</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Property prediction with machine learning and ab initio methods for iridium photoactive complexes and metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/165157</link>
<description>Property prediction with machine learning and ab initio methods for iridium photoactive complexes and metal-organic frameworks
Terrones, Gianmarco Guin
Data-driven prediction of a chemical’s properties prior to synthesis or use can accelerate chemical discovery by increasing the probability of candidate suitability for the given application. In this thesis, data-driven models and complementary first-principles calculations have been used to study three types of chemistries: iridium photoactive complexes, metal-organic frameworks, and reactions for azetidine synthesis. Iridium photoactive complexes are commonly used in OLED lighting, photocatalysis, and bioimaging due to their unique phosphorescent properties and triplet excited state population. Metal-organic frameworks are studied for heterogeneous catalysis and gas separations and storage due to their tunable metal environments and porous structures. Azetidine-containing molecules have potential for use as pharmaceuticals due to their high stability and good pharmacokinetics. However, despite their advantageous properties, challenges remain in designing these chemistries and informing this design with computation. The excited state properties of iridium complexes are challenging and costly to predict through first-principles methods. Similarly, stability issues often affect metal-organic frameworks, yet such instabilities cannot be efficiently modeled by physics-based routines. Lastly, synthetic approaches toward azetidine synthesis are limited, and computational study of novel synthetic approaches to identify desirable reactant characteristics would benefit the future scope of azetidine products.&#13;
&#13;
The models developed in this thesis have proven to be complementary tools to first-principles approaches, and have major benefits in their speed of application and ability to train directly on experimental data for properties that challenge methods like DFT. The models are applied to screen chemical space for promising candidates through consideration of hypothetical, not-yet-synthesized iridium complexes and metal-organic frameworks generated through component combination of existing structures. The machine learning models are also used to derive structure-property relationships through feature importance analysis, for example identifying qualities of iridium photoactive complexes that impart longer or shorter excited state lifetime. In addition to model generation, work in this thesis has covered code development of a software package for molecule structure generation, modification, and fingerprinting, and also development of intuitive web interfaces for easy use of data-driven models. It is expected that the tools developed in this thesis will both allow for a greater understanding of iridium complexes, metal-organic frameworks, and azetidine synthesis, and enable low-cost exploration of chemical space for novel material selection.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165157</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spacecraft Autonomy through Computer Vision and Onboard Planning</title>
<link>https://hdl.handle.net/1721.1/165156</link>
<description>Spacecraft Autonomy through Computer Vision and Onboard Planning
Kacker, Shreeyam
Earth observation (EO) from satellite platforms has experienced rapid growth since the commercialization and broad availability of data, and has had large impacts on applications such as agriculture, disaster monitoring, and defense and intelligence. Observing unpredictable phenomena is still challenging for EO missions due to long lead times from scheduling, uplinking, and executing the image capture onboard the spacecraft. This delay between planning and execution means that conditions can change in the interim, causing a task to become unobservable and be missed, for example when cloud cover obscures a target. Dynamic tasking (DT) is a mission concept that aims to mitigate this unpredictability by moving autonomy onboard the spacecraft and quickly reacting to conditions as observed, using several potential perception sources. In this work, we consider DT as applied to a tasked Earth-observing satellite, whose goal is to image Earth’s landmass at predefined targets. The goal considered in this work for DT and onboard autonomy is avoidance of cloud cover, which can occlude up to 66% of imaging tasks, while factoring in real-world constraints on operationalization and onboard edge computing. Instead of using end-to-end learned methods, we build upon existing work on spacecraft scheduling, incorporating a mixed-integer linear program (MILP) scheduler as the primary scheduling algorithm. Rather than directly incorporating DT into a global problem, we instead develop a set of heuristics which can estimate the utility of lookahead actions. We construct these heuristics from two directions: one from a simplified and constrained version of the scheduling problem with order statistics, and a second using a convolutional neural network with large amounts of synthetically generated data.
We also consider DT using information from meteorological satellites in geostationary Earth orbit (GEO), parameterizing information delay rather than performing detailed analysis of data pipelines. In cases where all tasks are equally valued, all DT cases tested, including both meteorological and vision cases, outperform the conventional scheduler across all trials, yielding a 40% to 100% increase in total schedule utility based on cloud-free captures, depending on the DT method used. In cases where tasks have Pareto-distributed utility, the gap between the omniscient and conventional schedule shrinks drastically, to within 4% of total utility, and only a single DT method outperforms the conventional schedule consistently, as the environment becomes significantly more challenging due to asymmetric upside and downside risk. We also present methods by which to fractionate global state such that data can be efficiently stored and updated across a satellite constellation, allowing these heuristics to continue working across the constellation with minimal modification.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165156</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Multi-Robot Spatial Perception</title>
<link>https://hdl.handle.net/1721.1/165155</link>
<description>Large-Scale Multi-Robot Spatial Perception
Chang, Yun
This thesis addresses the challenge of scalable and robust multi-robot spatial perception, with the goal of supporting autonomous task execution in large-scale environments. The work focuses on two core issues: scaling to large, complex environments, and incorporating high-level scene understanding to enable autonomy for complex tasks. Current multi-robot systems typically focus on geometric reconstruction for navigation, but often fall short in providing the scene understanding needed for complex decision-making and task execution in real-world environments. Conversely, many recent demonstrations of autonomous task execution are limited to small, controlled environments, with few methods addressing scalability to larger scenes. This work bridges this gap by integrating multi-robot simultaneous localization and mapping (SLAM) with spatial perception in order to support downstream autonomy for complex tasks. We begin by introducing methods to enhance the robustness and efficiency of loop closure detection in centralized multi-robot SLAM, focusing on prioritizing loop closures and mitigating the impact of incorrect loop closures in large-scale environments. We then present the first fully distributed metric-semantic SLAM system for multi-robot teams, which supports real-time semantic mapping and enables large-scale deployments with up to 8 robots and 8 kilometers of traversal. To improve reasoning across robot teams, we extend this work to 3D scene graphs, proposing a framework for collaboratively building and maintaining a shared multi-robot scene graph online. Additionally, we introduce algorithms for task-oriented compression of 3D scene graphs to support communication across robots under bandwidth constraints. Finally, we explore open-set scene understanding made possible by advances in visual-language models and highlight the need for task-driven mapping.
Building on this, we propose a novel framework for grounding high-level language commands into scene graphs, enabling robots to decompose high-level tasks into executable subtasks while focusing on task-relevant components of the environment. The contributions of this thesis are validated through experimental evaluations in extreme environments and real-world deployments, where multi-robot teams operate in large-scale settings. These experiments tackle a broad range of tasks, from navigation and object search to executing high-level language commands (e.g., “clean the room”). Our contributions advance multi-robot large-scale spatial perception and have the potential to impact real-world applications such as exploration, service robotics, and search and rescue, where autonomous multi-robot teams are essential for performing complex tasks in large environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165155</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Locality: a case study from Äiwoo</title>
<link>https://hdl.handle.net/1721.1/165154</link>
<description>Locality: a case study from Äiwoo
Roversi, Giovanni
This thesis explores issues in the syntax of Äiwoo, an understudied Oceanic language from the Solomon Islands. This language showcases an intricate system of clausal alternations, where several parameters vary independently from each other: verbal morphology, word order, and possibilities of Ā-extraction (i.e., which argument(s) may or may not be extracted). As the title indicates, this thesis is set up as a case study, in a sort of “learn by doing” fashion. I attempt to build a descriptively and explanatorily adequate formal model of the clausal alternation system in Äiwoo, within a Minimalist framework. By doing so, I examine what a theory of grammar must look like for the model to work as intended. Building such a model of Äiwoo teaches us something about a number of central issues in syntactic theory such as the locality of movement, the A/Ā-distinction, and the syntax of Austronesian languages specifically. I show that by conjoining van Urk’s (2015) theory of the A/Ā-distinction with the independent idea of featurally relativized probes (Béjar 2003), we predict the existence of “non-local A-movement”, that is, movement with the binding-theoretical properties of A-movement that nonetheless does not obey strict DP-locality, and crucially without the need to invoke the notion of “mixed A/Ā-movement”. I show that two instances of this predicted kind of movement are attested in the Äiwoo clause: movement to both spec,TP and spec,CP can target either the subject or – non-locally – the object, depending on their features. I also propose that features assigned by a probe to a goal (“goal-flagging”; Deal to appear, Clem &amp; Deal 2024) can be further manipulated by the syntax, being searched for by a higher probe. Further, Äiwoo shows an interesting instantiation of the Austronesian “pivot-only” Ā-extraction restriction, in that it comes with a series of exceptions.
I argue that in Äiwoo, this restriction is caused by an Ā-intervention effect, contra analyses of this phenomenon in other Austronesian languages based on phasehood (Rackowski &amp; Richards 2005, Erlewine &amp; C. Lim 2023, Hsieh 2025, a.o.) or DP-intervention (Aldridge 2004, 2008, a.o.). Notably, as soon as the highest DP in a clause does not carry Ā-features, the restriction vanishes, thus allowing for the “exceptional” extraction of lower arguments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165154</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid In-Space Assembly and Manufacturing of Large Reticulated Truss Structures</title>
<link>https://hdl.handle.net/1721.1/165153</link>
<description>Rapid In-Space Assembly and Manufacturing of Large Reticulated Truss Structures
Bhundiya, Harsh G.
Modern deployable space structures have enabled spectacular missions like the James Webb Space Telescope, but they are constrained by the rocket fairing and a tradeoff between deployed size and structural precision that limits their use for future communications and astronomy applications. In-space assembly and manufacturing (ISAM), i.e., the construction of structures in the space environment, offers an approach to overcome these issues and enable novel missions both on orbit and on planetary surfaces. Structures constructed in space can be optimized for loads in space, achieve higher packaging ratios, and increase mission flexibility. Given these benefits and decreasing launch costs from modern reusable rockets, there is a resurgence of interest in ISAM from academic, commercial, and governmental entities. However, current ISAM concepts are hindered by inefficient construction processes with high size, weight, and power requirements and a lack of systems-level design of spacecraft and construction processes. This thesis aims to address these challenges to enable energy-efficient, rapid ISAM of large space structures. The first contribution is an analysis of the fabrication time of large truss structures, considering the constraints of spacecraft power, attitude control authority, and avoidance of flexural vibrations. The analysis shows that angular momentum storage of the spacecraft and flexibility of the structure are dominant constraints on fabrication time of gridshell geometries with diameters over 60 m, while the available power and control torque limit fabrication time for diameters under 60 m. This trade study provides quantitative estimates of the total fabrication time, e.g., five spacecraft constructing a 200 m diameter gridshell in five days, and highlights design tradeoffs to enable rapid ISAM, including using multiple spacecraft and varying the feedstock material based on the structure size. 
Motivated by the long fabrication timescales, the second contribution is an understanding of spacecraft attitude dynamics with changes in mass properties and environmental disturbances during construction. In particular, variable-mass rigid body dynamics are used to understand the feasibility of gravity gradient capture, a passive approach that exploits the gravity gradient disturbance during ISAM. The concept is illustrated with two case studies on the construction of truss structures, a 2D triangle unit cell and a 3D curved gridshell, by spacecraft in circular orbits. Based on the time reversibility of the equations of motion, initial conditions are computed that result in gravity gradient capture by solving the equations backward in time, considering the changes in mass properties from the prescribed construction sequence. The analysis highlights both the feasibility of passive gravity gradient capture and the sensitivity of initial conditions to small perturbations. It is found that deploying a gravity gradient boom before the start of the construction sequence can decrease this initial condition sensitivity by an order of magnitude, and more generally, designing an ISAM process to maintain the minimum principal inertia axis in the direction of the orbit radius vector can facilitate the robust passive gravity gradient capture of large structures. Finally, to aid the design of attitude control systems for ISAM spacecraft, ground experiments are presented to understand the attitude dynamics during Bend-Forming, a candidate deformation process for fabricating truss structures. The experimental results reveal the effect of metal springback during construction, which causes flexible vibrations and coupled motion of both the truss and spacecraft. 
Additionally, experiments with closed-loop, thruster-based position and attitude control highlight the possibility of control-structure interactions during construction, due to the decreasing natural frequency of the truss structure. Together, these contributions provide a framework for the efficient design of future ISAM spacecraft and construction processes to enable the next generation of space structures.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165153</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space Optical Interferometry for High-Resolution Coherent Imaging of Astronomical Objects</title>
<link>https://hdl.handle.net/1721.1/165152</link>
<description>Space Optical Interferometry for High-Resolution Coherent Imaging of Astronomical Objects
Black, Mason R.
The hunt for Earth-like exoplanets is one of the great scientific endeavors of the 21st century. To date, the characterization of known exoplanets with instruments like JWST has been spatially unresolved—we can study atmospheric constituents via spectroscopy, but we cannot see continents, synoptic weather, or the geographic distribution of potential spectral biosignatures. The diffraction limit dictates that an optical aperture with sufficient angular resolution to resolve planetary scale features from the vantage point of our solar system would be prohibitively immense—hundreds to thousands of kilometers in diameter at visible wavelengths. Optical interferometry offers perhaps the only path to the required angular resolution by interfering light from multiple telescopes, or sub-apertures, providing Fourier components at a resolution corresponding to the sub-aperture separation. In recent decades, ground-based interferometry has made major strides in sensitivity by calibrating for atmosphere-induced piston errors to enable long coherent integration times. In addition to high-resolution astrometry, these sensitivity gains have allowed for milliarcsecond-scale imaging of bright astronomical objects. Still, optical interferometry from the Earth’s surface faces many fundamental performance limitations, and a space-based system would allow for the study of much dimmer targets at higher diffraction-limited resolutions, taking us a step closer to one day mapping exo-Earths. With recent advances in satellite miniaturization and lower cost-to-orbit, multiple groups have proposed new designs for a first demonstration mission, but no astronomical interferometer has yet flown in space. This work investigates the expected performance of first- and second-generation space optical interferometer concepts for astronomical imaging, focusing on maturing the technology that will be needed to push sensitivity and resolution beyond what is currently possible from the ground. 
The first mission envisioned is a formation flying pathfinder comprising three CubeSats, aiming to demonstrate the first measurements of interference fringes from starlight collected by separated space telescopes. This feat will require sub-wavelength matching of the optical path lengths traveled by the starlight to maintain the mutual coherence, which is achieved using rapid measurements of the interference fringe itself as a source of path length feedback. The performance of this fringe tracking is modeled via a time-domain control simulation accounting for the micro-vibration disturbance environment that would be expected on a CubeSat platform using reaction wheels for attitude control, which could induce up to 5 µm in optical path length noise and several arcseconds in optical alignment errors if left uncorrected. Also considered are the optical losses and noise sources associated with photon arrival statistics and detection as well as the beam pointing jitter. Simulation results indicate that such a mission would be able to stabilize interference fringes on at least the fifty brightest stars to better than 45 nm even under pessimistic disturbance assumptions, which would be sufficient to demonstrate the feasibility of space interferometry for a subsequent larger mission. The second mission concept analyzed aims to push the limits of faint-object interferometry from space by implementing a dual-feed beam combiner for fringe stabilization and phase referencing using bright off-axis guide stars. A three-telescope interferometer is assessed in its ability to map the surfaces of recently discovered dwarf planets in the outer solar system from a sun-synchronous Earth orbit. The population of stars usable as an interferometric phase reference is found to be within the off-axis field of view permitted by a 1-meter differential optical delay line, aiding image reconstruction via use of the Fourier phase referenced to a fixed point in the sky. 
Bayesian statistical imaging algorithms are employed to demonstrate recovery of an image from simulated noisy measurements of an 18th magnitude rotating object 37 milliarcseconds in diameter, but results indicate that doing so within the integration times permitted by the chosen orbital formation would require approximately Hubble-sized telescope apertures. Potential alternative operational strategies to enable longer integration times with more modest apertures are discussed. Informed by the simulation results, recommendations are presented for near-term technology development in support of space interferometry.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165152</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic Mucins for Microbial Modulation</title>
<link>https://hdl.handle.net/1721.1/165150</link>
<description>Synthetic Mucins for Microbial Modulation
Barnes, Carolyn E.
Mucus is a ubiquitous hydrogel that coats all epithelial cell surfaces. Once thought to be inert, the mucosal barrier is now recognized as a critical component of the innate immune system. It serves not only as a physical and chemical barrier against foreign objects, pathogens, and environmental stressors, but also as a selective interface that shapes an organism’s microbiome. Many of these functions are mediated by the primary structural component of mucus: the mucin protein. Mucins are large, densely glycosylated proteins, with carbohydrates contributing up to 80% of their molecular weight. In addition to mediating microbial interactions, mucins contribute to the biophysical properties of mucus that enable lubrication, adhesion, and protection of tissues. Beyond these physical roles, mucins modulate the virulence of microbes, promote cultivation of commensal bacteria through adhesion points and nutrient presentation, and mediate critical immune modulations of the host. However, molecular-level insight into mucin structure and function remains lacking. The large size and heterogeneity of mucins and their glycosylation have made it challenging to parse the individual contributions of mucin structural features to biological function. Synthetic mucin mimics, developed using polymer chemistry, offer a promising strategy to probe these structure–function relationships by enabling precise control over molecular weight, glycan identity and density, polymer architecture, and morphology. To address these challenges, we developed novel synthetic mucins to elucidate mucin structure–function relationships. Emphasizing the importance of mucin’s extended morphology, we designed new synthetic strategies to isolate and investigate the impact of mucin structural motifs, such as anionic glycan identity and bottlebrush architectures, on synthetic mucin morphology (Chapter 2).
We next demonstrated that synthetic mucins can provide insight into the glycan-binding preferences of probiotic bacteria, highlighting the roles of mucins in shaping microbial organization within the microbiome and emphasizing the potential of synthetic mucins as prebiotics (Chapter 3). Finally, recognizing that mucins are key modulators of microbial virulence and infection, we explored the potential of synthetic mucins as antibacterial scaffolds capable of delivering therapeutic cargoes. This work emphasized the importance of understanding how polymer structure influences biological activity and targeting capabilities (Chapter 4). Ultimately, we anticipate that the synthetic mucin platform developed in this work will help address fundamental questions in mucin biology and advance the development of mucin-inspired therapeutic materials.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165150</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-Economic Assessment of Grid-Scale Energy Storage Technologies Under Evolving Market and Decarbonization Scenarios: Liquid Air and Lithium-ion Battery Systems</title>
<link>https://hdl.handle.net/1721.1/165149</link>
<description>Techno-Economic Assessment of Grid-Scale Energy Storage Technologies Under Evolving Market and Decarbonization Scenarios: Liquid Air and Lithium-ion Battery Systems
Cetegen, Shaylin Ashley
As global energy systems transition toward low-carbon futures, long-duration energy storage technologies are expected to play a critical role in ensuring grid reliability and flexibility. Liquid air energy storage (LAES) has emerged as a promising long-duration energy storage solution due to its scalability, ability to be sited flexibly, and environmental sustainability. This thesis investigates the technical modeling and economic feasibility of liquid air energy storage systems under both current and projected future electricity market conditions using a combination of process modeling, mathematical optimization, and techno-economic analysis.&#13;
&#13;
The study begins with an exploratory investigation into the modeling of process components in liquid air energy storage systems, emphasizing multistream heat exchangers and the emergence of bifurcation phenomena in thermodynamic models. This is followed by an economic optimization of standalone LAES systems across various electricity markets in the United States and Europe. A mixed-integer linear programming framework is developed to simultaneously optimize system design and hourly operation with the objective of maximizing net present value, providing new insights to inform LAES deployment strategies.&#13;
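To make the flavor of such a design-and-operation optimization concrete, the following is a deliberately stripped-down sketch of the hourly-operation piece alone: price arbitrage for a storage plant with a charging efficiency, power limit, and energy capacity. All numbers are hypothetical, the model is a plain linear program rather than the thesis's mixed-integer NPV framework, and it is posed here with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: hourly prices ($/MWh), charging efficiency,
# power limit (MW), and energy capacity (MWh).
prices = np.array([10.0, 12.0, 50.0, 60.0])
eta, P, E = 0.9, 5.0, 10.0
T = len(prices)

# Decision variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
# Objective: minimize (purchase cost - sales revenue).
c = np.concatenate([prices, -prices])

# State of charge after hour t is sum_{s<=t} (eta*charge_s - discharge_s);
# it must stay within [0, E].  Encode both bounds as A_ub @ x <= b_ub.
A_ub, b_ub = [], []
for t in range(T):
    mask = (np.arange(T) <= t).astype(float)
    soc_row = np.concatenate([eta * mask, -mask])
    A_ub.append(soc_row)    # soc_t <= E
    b_ub.append(E)
    A_ub.append(-soc_row)   # soc_t >= 0
    b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, P)] * (2 * T), method="highs")
profit = -res.fun  # optimal plan: charge in cheap hours, discharge in expensive ones
```

With these toy numbers the plan charges fully in the two cheap hours and discharges into the two expensive ones; the full framework in the thesis additionally makes sizing (power and capacity) decision variables and maximizes net present value over the plant lifetime.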
&#13;
Building on these foundations, a forward-looking economic analysis is conducted for 18 electricity regions in the United States under eight distinct decarbonization scenarios. This analysis is based on future electricity price projections from the Cambium 2023 dataset developed by the National Renewable Energy Laboratory. The results identify Texas and Florida as especially favorable markets for liquid air energy storage under a range of decarbonization pathways. Sensitivity analyses reveal that economic incentives such as capital expenditure subsidies have a greater impact on profitability than technical improvements such as increased round-trip efficiency.&#13;
&#13;
Finally, the thesis presents a comparative economic assessment of liquid air energy storage systems and lithium-ion battery systems across key metrics, including levelized cost of storage, system lifetime, siting constraints, and cost-effectiveness over multi-hour to multi-day durations. The findings show that liquid air energy storage systems can offer lower cost and greater flexibility than lithium-ion batteries for long-duration applications.&#13;
&#13;
This work establishes a generalizable framework for assessing the economic feasibility of emerging energy storage technologies and offers practical insights for decision-makers evaluating the role of liquid air energy storage in future electricity systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165149</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Epistemology</title>
<link>https://hdl.handle.net/1721.1/165148</link>
<description>Active Epistemology
Fiat, Yonathan
Orthodox epistemology deals with what we might call "passive agents": agents that merely respond to what's given to them. But what we care most about are active agents: agents that can sing, dance, seek evidence, make conjectures, etc. In this dissertation, I explore three ways in which this observation is relevant to the traditional questions of epistemology. I argue that it can shine a new light on the nature of knowledge, help us make sense of standard scientific practices, and help solve some famous puzzles in epistemology.&#13;
&#13;
Chapter 1 asks how the fact that we know something is related to the question of whether we should seek more evidence. This problem has been discussed in the philosophical literature under the name of "the dogmatism puzzle." In this chapter, I use the multi-armed bandit model to argue for a new solution to the dogmatism puzzle: knowledge often requires proper maintenance.&#13;
&#13;
Chapter 2 presents a novel account of significance testing, one of the most important practices in science. It shows that we can make sense of this practice if we accept the claim that prediction is better than accommodation. I then use this account to answer some of the many objections to significance testing.&#13;
&#13;
Finally, chapter 3 argues that what we know often depends on our choices, and not merely on what is given to us. We can gain knowledge by choosing something, whereas if we fail to choose, or attempt to choose too many things, we fail to gain knowledge. I then use this idea to offer a new solution to the lottery paradox, to help understand inductive knowledge, and, once again, to make sense of significance testing and related practices.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165148</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of genetic risk in Alzheimer’s disease</title>
<link>https://hdl.handle.net/1721.1/165147</link>
<description>Mechanisms of genetic risk in Alzheimer’s disease
von Maydell, Djuna
Sporadic Alzheimer's disease (AD) accounts for the majority of dementia cases worldwide, yet effective treatments remain limited. Genetic variants associated with AD provide insight into disease etiology and highlight potential therapeutic targets. The ε4 allele of the APOE gene is the strongest genetic risk factor for AD, and rare variants in the ABCA7 gene are among the next most significant. Both genes encode lipid transporters, suggesting an important role for lipid metabolism in AD etiology. However, the exact cellular mechanisms through which these variants increase AD risk remain incompletely understood. After a brief introduction in Chapter 1, Chapter 2 demonstrates that damaging ABCA7 variants disrupt neuronal phosphatidylcholine metabolism and mitochondrial function. These defects were reversed by supplementation with the phosphatidylcholine precursor cytidine diphosphate-choline (CDP-choline). Chapter 3 shows that APOE4-expressing oligodendrocytes exhibit altered cholesterol transport and impaired myelination. Pharmacological modulation of cholesterol transport in the brain reversed these defects, improving cognitive function in mouse models. These findings suggest that lipid-related mechanisms represent a class of targetable drivers of AD risk, but it remains unclear whether lipid-targeted treatments would be broadly applicable across AD or restricted to specific disease subtypes. Chapter 4 introduces a practical framework for identifying disease subtypes in high-dimensional biological data based on principles from machine learning and data attribution, and applies it to explore transcriptional subtypes among AD brains. Together, these studies reveal potential mechanisms of genetic risk in AD, highlight lipid disruptions as upstream mediators, and propose a practical framework for uncovering AD subtypes.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165147</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Realm of Grace</title>
<link>https://hdl.handle.net/1721.1/165146</link>
<description>The Realm of Grace
Mathew, Abraham
This dissertation examines three familiar interpersonal phenomena—repentance after wronging another, forgiveness in the wake of being wronged, and reciprocation after being helped—and what they teach us about moral obligation in general. &#13;
&#13;
When a wrongdoer sincerely repents, we tend to see that as a reason to forgive them. But why? Chapter 1 offers an answer that I call the ‘Redemptive View’. Repentance, I contend, involves changing how one relates to a past misdeed, acknowledging its wrongness and committing to moral betterment. Consequently, the wrong comes to play a new role in the wrongdoer’s life, becoming a source of moral learning and an impetus towards moral growth. As a result, the wrong is imbued with a new, positive significance that it previously lacked, generating a reason to forgive. &#13;
&#13;
But are we ever obliged to forgive our wrongdoers? Many think not. On the orthodox view, all obligations to others correlate with demandable or enforceable rights, and since no one has the standing to demand or enforce forgiveness, no one can be owed it. Chapter 2 disputes this orthodoxy, arguing that forgiveness is sometimes obligatory, even if no wrongdoer has a right to it. In particular, if you’ve previously accepted a gracious offer of forgiveness and are now in a position to extend an equally or less gracious offer to one of your wrongdoers, then you must forgive. Otherwise, you would be imposing on others a harsher standard than you accepted for yourself. It turns out that disparate instances of forgiving and being forgiven within a life are connected in surprising ways: accepting forgiveness in the past can bear on whether present forgiveness is discretionary or obligatory. &#13;
&#13;
Chapter 3 turns to cases of standing in debt to someone who helps us—debts we can repay by reciprocating. Some debts are transactional: they can be claimed or waived, and once repaid, the parties return to the status quo ante. Others resist this structure: they cannot be claimed or waived, and reciprocation only generates a persisting, often alternating cycle of mutual aid. What explains this distinctive normative profile? I argue that such non-transactional debts arise when an act of care tacitly proposes a more intimate relationship. When the beneficiary responds in kind, the relationship is reshaped, and new constitutive norms take hold. These norms ensure persistence, but also a normatively healthy pattern of alternation: after all, each act in the cycle is an undemandable gift extended in a time of need, and each response an act of due gratitude. &#13;
&#13;
In all, this dissertation challenges the orthodoxy that if we owe something to others, they must have a right to it. We learn that the moral landscape isn’t exhausted by the realm of rights. There is also a ‘realm of grace’—a realm in which we are required to do much for others that they cannot demand of us.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165146</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presuppositionless proxies of politeness: An (eventually) Optimality-Theoretic account</title>
<link>https://hdl.handle.net/1721.1/165145</link>
<description>Presuppositionless proxies of politeness: An (eventually) Optimality-Theoretic account
Wang, Ruoan
Recent research has seen growing interest in the nature of social meaning, “the constellation of qualities and properties that linguistic forms convey about the social identity of language users” (Beltrama 2020, see also Eckert 2008). This dissertation concerns those linguistic forms termed polite pronouns. They convey one particular aspect of social identity: the social distance between language users, where a large social distance mandates the expression of politeness. Most existing work on polite pronouns has modeled polite meaning via dedicated grammatical mechanisms, such as a dedicated feature ([hon], Ackema &amp; Neeleman 2016) or a dedicated dimension of meaning (McCready 2019). This dissertation shows that such dedicated mechanisms are superfluous, as existing grammatical mechanisms can be appropriated to describe and explain both the form and meaning of polite pronouns. Building on Wang (2023), I use a purpose-built typological sample of polite pronouns in &gt;220 genetically and geographically diverse languages to establish the shapes of polite pronouns, showing that polite pronouns are obtained by asymmetrical recruitment of existing ϕ-featural values. Furthermore, the shapes of polite pronouns converge precisely with the shapes of semantic defaults, those ϕ-featural values underspecified in meaning, which must emerge when the context provides little or no information about number or person. To explain this convergence, I use the notion of negative politeness: respecting an interlocutor’s right to be unimpeded (Brown &amp; Levinson 1987). I argue that semantic defaults and polite pronouns are morphologically identical because they are pragmatically identical: underspecification makes them well-suited to be avoidance mechanisms in the service of negative politeness. This captures the intuition that avoidance behaviors are a core component of expressing politeness. 
Specifically, polite pronouns enable speakers to avoid making presumptions about aspects of the interlocutor. Hence, presuppositionlessness, proxies, and politeness are what this dissertation is all about. I implement this intuition in Optimality-Theoretic terms, where polite grammars are defined by an outranking of Markedness (which mandates speakers to avoid presupposition-rich forms) over Faithfulness (which mandates speakers to be as informative as possible). The resulting system is restrictive but powerful, delivering a factorial typology which exactly mirrors the recruitment asymmetries. With no stipulations specific to polite pronouns, the account is furthermore able to make concrete predictions about politeness phenomena beyond the ϕ-featural domain. Happily, the resulting account is as lean as can be. Its main innovations are methodological and theoretical. Methodologically, the investigation is led by a large cross-linguistic sample, and the additional consideration of use conditions alongside morphological exponence. Theoretically, recruitment affords us a fresh look at how the universal inventory of features is organized. This dissertation shows that recruitment is not only viable as a meta-grammatical operation, but even desirable for reasons of economy or computational efficiency.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165145</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aft Fuselage Boundary-Layer Ingesting Propulsion Systems for Turbo-Electric Aircraft</title>
<link>https://hdl.handle.net/1721.1/165141</link>
<description>Aft Fuselage Boundary-Layer Ingesting Propulsion Systems for Turbo-Electric Aircraft
Chen, Zhibo
This thesis focuses on a rigorous assessment of a tube-and-wing turbo-electric boundary-layer ingesting (BLI) aircraft, including (i) definitions of the aerodynamic attributes for the best fuel burn benefit, (ii) design guidelines to achieve these attributes, and (iii) a conceptual design of a tail-BLI aircraft. The assessment of the BLI benefit relative to a baseline conventional aircraft is based on a TASOPT-CFD analysis framework, using a CFD body-force model for fan representation and a power balance for aircraft performance analysis. The aircraft mission is to carry a 17500 kg payload over a 5500 km range at a cruise Mach number of 0.8. The baseline aircraft has two next-generation geared turbofan engines with a fan pressure ratio (FPR) of 1.35. The tail-BLI aircraft has six integrated propulsors with electric fans of 1.40 FPR in addition to two underwing turbofans; it is estimated to achieve an 8.5% fuel burn benefit compared to the baseline. To achieve the benefit, the tail-BLI aircraft has a non-axisymmetric aft fuselage that creates axial vorticity in the tail-mounted propulsor inflow, providing co-swirl that reduces rotor incidence variations and improves the fan efficiency and stall margin. The best propulsor configuration, with six 24-inch fans, is established by balancing the boundary-layer kinetic energy defect ingestion and propulsion system weight. The integrated propulsor aerodynamic design includes an upstream inlet extension, an inlet leading edge design tailored for each propulsor, a supercritical nacelle, an annular nozzle, an elliptic nozzle plug, and a non-axisymmetric tail cone, to minimize shock loss and achieve attached flow at cruise and takeoff. The BLI fan pressure ratio was selected based on a trade between engine propulsive efficiency and propulsion system weight. Non-axisymmetric stator inlet angles are also used to reduce the stator incidence variations. 
Fan forcing analysis suggests that the BLI rotor unsteady aerodynamic loading has negligible impact on blade life. Fan stability analysis shows that the BLI inlet distortion reduces the rotor stall margin by 4.5% and 2.7% relative to the rotor in uniform flow, at cruise and takeoff, respectively. The tail-BLI propulsors are thus estimated to have appropriate operability with BLI inlet distortion. In conclusion, the distributed aft fuselage boundary-layer ingesting propulsion system is suggested to offer a relevant pathway for tube-and-wing configurations with a potential major advancement in fuel burn reduction.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165141</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Optimization Algorithms for Step-Size Selection and Online Problem Parameter Estimation</title>
<link>https://hdl.handle.net/1721.1/165140</link>
<description>Adaptive Optimization Algorithms for Step-Size Selection and Online Problem Parameter Estimation
Cavalcanti Vilela, João Vítor
Optimization algorithms have long been fundamental tools across science and engineering, and are now also at the center of the rise of machine learning and artificial intelligence. However, extracting good practical performance from many of these algorithms depends on careful manual calibration and tuning. In fact, the algorithms with the best theoretical guarantees do not always perform best in practice. In this light, reducing the effort and skill required to set up optimization algorithms can save immeasurable amounts of time and resources. This thesis makes two contributions to this end, proposing optimization algorithms that require less supervision by adaptively selecting step-sizes and estimating problem parameters online. First, we revisit the foundational subroutine called backtracking line search (BLS). Typically, a base algorithm calls BLS to search for a parameter (e.g., step-size) such that the iterates of that algorithm satisfy a given condition (e.g., Armijo, descent lemma) that leads to desirable behavior (e.g., reducing the value of the objective function). To find a feasible parameter, BLS successively adjusts a parameter candidate by a constant factor until the given condition is satisfied. We propose to instead adjust the parameter candidate by an adaptive factor that takes into account the degree to which the given condition is violated. This adaptive BLS (ABLS) subroutine adds no computational burden relative to BLS, but can lead to significantly better practical results. Experiments on over fifteen real-world datasets demonstrate that ABLS can be more robust than BLS to problem setups and require significantly fewer condition evaluations to return higher-quality parameters. At the same time, we prove that ABLS enjoys essentially the same theoretical guarantees as BLS. The second contribution of this thesis is a parameter-free algorithm for smooth and strongly convex objective problems called NAG-free. 
To our knowledge, NAG-free is the first adaptive algorithm capable of directly estimating the strong convexity parameter without priors or resorting to restart schemes. We prove that NAG-free converges globally at least as fast as gradient descent, and achieves accelerated convergence locally if the Hessian is locally smooth and other mild additional assumptions hold. Prominent classes of machine learning problems with locally smooth Hessians include the regularized logistic loss, ridge regression, exponential family negative log-likelihoods with bounded natural parameters, and Moreau envelope smoothing. We present real-world experiments in which NAG-free performs comparably to restart schemes, demonstrating that it can adapt to better local curvature conditions represented by the smoothness and strong convexity parameters.
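The ABLS idea described above can be illustrated with a minimal gradient-descent sketch. This is a hedged reading of the general idea only: shrink the trial step more aggressively the larger the Armijo violation. The specific 1/(1 + violation) shrink factor below is purely illustrative and is not the thesis's exact update rule:

```python
import numpy as np

def abls_step(f, grad_f, x, t0=1.0, beta=0.5, c=1e-4):
    """One gradient step using an adaptive backtracking line search.

    Plain BLS multiplies the trial step size by a fixed factor beta until
    the Armijo condition holds.  The adaptive variant sketched here (an
    illustrative reading of the idea, not the thesis's exact rule) shrinks
    faster when the condition is violated by a larger margin.
    """
    g = grad_f(x)
    fx, gg = f(x), float(g @ g)
    t = t0
    while f(x - t * g) > fx - c * t * gg:  # Armijo condition violated
        # Normalized violation: how far the trial point overshoots the bound.
        viol = (f(x - t * g) - (fx - c * t * gg)) / max(t * gg, 1e-12)
        t *= min(beta, 1.0 / (1.0 + viol))  # adaptive, never slower than beta
    return x - t * g, t

# Ill-conditioned quadratic: a fixed unit step would diverge, but the
# adaptive search lands on a stable step in only a few shrinks.
f = lambda x: 50.0 * float(x @ x)
grad_f = lambda x: 100.0 * x
x1, t = abls_step(f, grad_f, np.array([1.0, 1.0]))
```

On this example the first, badly violated trial triggers a much harsher shrink than beta = 0.5 would, which is exactly the mechanism claimed to reduce the number of condition evaluations relative to constant-factor BLS.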
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165140</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding Structural Complexity in Condensed Phosphates: P(V) Reagents for Controlled Phosphoanhydride Bond Construction</title>
<link>https://hdl.handle.net/1721.1/165139</link>
<description>Expanding Structural Complexity in Condensed Phosphates: P(V) Reagents for Controlled Phosphoanhydride Bond Construction
Qian, Kevin
Chapter 1 begins by examining the central importance of phosphorus in both nature and industry, situating the chemistry of polyphosphates within a broader historical and conceptual context. A brief overview of the history of polyphosphate research in the life sciences is given, as well as a discussion of the persistent ambiguities in condensed phosphate nomenclature. To frame the work in the following chapters, the idea of the "hydrocarbon analogy" is introduced: a conceptual strategy that draws parallels between the structure and reactivity of organic molecules and that of inorganic phosphate constructs, thus offering a new way of thinking about molecular complexity in this underexplored chemical space.&#13;
&#13;
Building from this foundation, Chapter 2 details the discovery of a diphosphorylation reagent, identified to be a mixture of neutral zwitterionic adducts of P4O10 and pyridine. This reagent emerged serendipitously from our efforts to activate trimetaphosphate and has proved to be a powerful tool for synthesizing functionalized cyclic metaphosphates. Chapter 3 extends this strategy, activating otherwise inert metaphosphates by forming ring-strained bicyclic ultraphosphates. We discuss the reactivity of [P5O14]3–, the oligophosphate analog of the bicyclic hydrocarbon housane. Attempts to push this strategy further toward the synthesis of a hexaphosphorylation reagent were ultimately unsuccessful but provided valuable insight into the limitations of this ring-strain activation paradigm. The methods developed in earlier chapters set the stage for our collaboration with the Fielder group, described in Chapter 4. We designed new reagents to chemoselectively conjugate polyphosphates to densely functionalized peptides and proteins. These synthetic strategies pave the way for the study of recently discovered but poorly characterized post-translational modifications featuring novel phosphorylation modes. Finally, Chapter 5 presents a series of unpublished studies that expand on the themes of the previous chapter. Together, these investigations contribute to a growing body of knowledge aimed at broadening the chemical space of condensed inorganic phosphates.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165139</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unifying electric field-mediated heterogeneous catalysis</title>
<link>https://hdl.handle.net/1721.1/165138</link>
<description>Unifying electric field-mediated heterogeneous catalysis
Dinakar, Bhavish
Electric fields in the catalytic active sites of enzymes, originating from charged amino acid residues in the protein scaffold, have been invoked as a key factor governing the extraordinary activities and selectivities of enzymes for catalyzing chemical transformations. Models and experimental measurements of these electric fields have been utilized to quantitatively rationalize reaction kinetics, understand active-site solvent structures, and design better enzymatic catalysts. Despite the success of using electric fields to study enzymatic systems, this electric field approach has not yet been widely used in studying liquid phase heterogeneous catalysis, largely due to the difficulty in quantifying electric fields at active sites as well as challenges in designing heterogeneous catalysts with precise control of electric fields. In this thesis, we extend this electric field methodology into the realm of liquid-phase heterogeneous catalysis, both to accelerate heterogeneously catalyzed reactions and to understand solvent structures in catalyst pores.&#13;
&#13;
First, we design an electrochemical basket reactor system to apply an electric field to the surface of Brønsted-acidic carbon nanotube catalysts via an applied potential, and we examine the effect of electric field on acid-catalyzed alcohol dehydration rates of 1-methylcyclopentanol to 1-methylcyclopentene in an electrolyte-containing acetonitrile solution. We observe that the reaction rate is remarkably sensitive to electric field, with reaction rates increasing ~100,000-fold over 0.5 V of applied potential, and we propose a model that explains the rate-potential scaling and dependence on ionic strength. In further support of our model, we experimentally observe a theorized “isokinetic potential” where the reaction rate is independent of ionic strength. Through reactive base titrations, we quantify the density of active sites and demonstrate that the rate promotion is due to increasing the site activity, rather than increasing the number of active sites.&#13;
&#13;
Next, we demonstrate that this electric field effect does not require a special electrochemistry setup to manifest and can also be observed when a catalyst particle is simply touching another electrically conductive material. Through a carefully designed suite of experimental controls, we show that the intrinsic activity of acidic carbon nanotubes changes by an order of magnitude when they are in electrical contact with inert, thermally reduced carbon nanotubes, via the same mechanism that we previously observed when rates changed with an applied potential.&#13;
&#13;
We then shift gears to examine solvent structures in confined zeolite pores using vibrational Stark spectroscopy, an infrared spectroscopic technique developed for measuring electric fields at enzyme active sites. Using Ti-Beta zeolite as a test case, we develop a method that quantifies the electric field experienced by a probe molecule at the Ti active sites in solvent-filled pores. We find that the electric field varies with solvent identity and also with zeolite hydrophobicity, indicating that the secondary sphere interactions of the solvent with the zeolite framework result in distinct solvent structures in the hydrophobic and hydrophilic zeolites. Furthermore, we observe that this method identifies distinct electric fields between hydrophobic and hydrophilic zeolites even in the absence of solvent, offering this technique as a method for distinguishing types of active site environments.&#13;
&#13;
Finally, we apply this approach to examine hydrogen-bonding interactions inside Ti-Beta zeolite pores when filled with secondary alcohol solvents, motivated by previous work which observed that intraporous solvent structures varied significantly with changes in zeolite hydrophobicity. Using an infrared liquid flow cell that we designed to achieve record signal-to-noise and in-pore selectivity, we find that hydrogen-bonding interactions between a probe molecule and solvent are different between the ensemble-average in-pore environment and at the zeolite active site, revealing that there may be a discrepancy between typically observed intraporous bulk solvent structures invoked to explain kinetic phenomena and the kinetically relevant active-site solvent structures.&#13;
&#13;
Collectively, this thesis demonstrates that techniques and models of electric field-induced catalysis previously applied to understand biological systems can also be utilized to improve catalyst activity and understand the structure of solvent under confinement in liquid-phase heterogeneous catalysis. We hope that future work will expand on our methods, applying them to further investigate the effects of applied electric field on reactivity and utilizing vibrational probes to aid in understanding local structure at catalyst active sites.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165138</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Modular Strategies for Enhancing&#13;
Biomanufacturing in Komagataella phaffii</title>
<link>https://hdl.handle.net/1721.1/165137</link>
<description>Development of Modular Strategies for Enhancing&#13;
Biomanufacturing in Komagataella phaffii
Shi, Shuting
The landscape of protein therapeutics is advancing towards more complex modalities that offer greater medical potential but present significant challenges to existing biomanufacturing frameworks. Alongside these technical challenges, socio-economic factors drive an urgent need for accelerated, cost-effective biomanufacturing to increase rapid response capabilities, ensure equitable access to advanced treatments, and alleviate the economic strain on global healthcare systems. This creates a significant and unresolved gap: the cost-effective production of increasingly complex protein therapeutics under an accelerated timeline. Komagataella phaffii holds great potential to fill this gap, as it is a eukaryotic microorganism well-established for the rapid and cost-effective expression of recombinant proteins. However, realizing this potential requires advanced engineering strategies to extend K. phaffii’s strengths to these complex modalities. This thesis presents the development of modular strategies to enhance the biomanufacturing of such complex therapeutics in K. phaffii. The first part of this thesis focuses on providing alternative manufacturing strategies for VLP vaccines, a promising next-generation vaccine platform whose adoption has been limited by manufacturing challenges. To streamline VLP vaccine development and manufacturing, a modular production framework was developed. This framework’s core strategy involves the secretory production of VLP scaffold subunits in K. phaffii followed by their in vitro assembly, which then allows for flexible "plug-and-display" antigen attachment to the pre-formed scaffold. This approach can be adopted for versatile antigen adaptation and can be stockpiled for rapid response. More importantly, this framework circumvents the manufacturing challenges associated with traditional intracellular VLP production, such as complex downstream purification that leads to increased production cost and decreased yield and quality. 
This transition not only makes each modular step better suited to currently available unit operations, especially downstream processing, significantly increasing scalability and reducing cost, but also holds promise for integration into future continuous manufacturing frameworks. A specific implementation and key outcome of this approach was the development of the SpyCatcher::I53-50 modular VLP scaffold, which was then validated by demonstrating the high-fidelity display of a SpyTag-fused HIV Env trimer antigen with preservation of critical neutralizing epitopes, showcasing its potential as a promising modular VLP vaccine platform. Enabling the secretory production of the scaffold’s multimeric subunits required overcoming significant manufacturability challenges. For instance, proteolytic degradation of the SpyCatcher-I53-50A fusion was resolved via protein engineering, while secretion of the aggregation-prone I53-50B subunit was achieved through a multi-pronged approach combining process optimization, host engineering, and a novel pseudo-chaperone strategy. These rational engineering efforts were not only crucial for producing these key building blocks but also serve as a practical roadmap for tackling other hard-to-produce targets. The challenges encountered in efficiently optimizing VLP subunit production, even with extensive rational engineering, underscored a broader imperative for more powerful and systematic optimization tools. This realization motivated the second part of this thesis: the development of a modular high-throughput screening (HTS) platform to accelerate the engineering of K. phaffii for enhanced production of complex biologics, using monoclonal antibodies (mAbs) as an industrially significant proof-of-concept. The HTS platform is built on a dual-mode yeast surface display (YSD) system. 
By establishing a quantitative correlation between surface display and secretion for a full-length mAb, this work unlocks the potential of YSD for reliably screening secretion phenotypes, enabling, for the first time, screening under normal cultivation conditions at an unprecedented scale. Its successful application in a genome-wide CRISPR/Cas9 knockout screen identified numerous novel host gene knockout targets. Individual validation of 20 top-ranked candidates confirmed that 15 led to statistically significant increases in mAb specific productivity, with the most effective target yielding an 8-fold improvement over the baseline strain. This represents a substantial technological advancement for K. phaffii engineering, offering a powerful engine to shift from empirical, low-throughput optimization towards systematic, data-driven cell line development. Designed as a modular platform, it holds promise for screening other perturbation libraries and for optimizing a wide range of protein targets beyond mAbs.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165137</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a targeted lipid nanoparticle platform for in vivo RNA delivery to hematopoietic stem and progenitor cells</title>
<link>https://hdl.handle.net/1721.1/165136</link>
<description>Development of a targeted lipid nanoparticle platform for in vivo RNA delivery to hematopoietic stem and progenitor cells
Shi, Dennis
Hematopoietic stem cells (HSCs) are rare cells residing in the bone marrow that are responsible for the generation and maintenance of the body’s immune system through a process known as hematopoiesis. Because of their self-renewal capability and crucial role in producing immune cells, HSCs have garnered considerable therapeutic interest for the treatment of genetic blood disorders. However, current HSC gene therapies are autologous ex vivo transplantations consisting of three major steps: 1) mobilization and harvest of the patient’s own stem cells, 2) ex vivo editing of those cells, and 3) reinfusion of the edited cells back into the patient. While the results of these ex vivo therapies are quite promising, there are many limitations associated with the current process. First, the manufacturing process is logistically complex, which results in high costs and long turnaround times. Second, prior to reinfusion of the edited cells, patients must undergo a conditioning regimen (performed with a chemotherapeutic) to deplete the existing cells from the bone marrow and make space for the edited cells to engraft. This conditioning has many deleterious side effects, including organ damage, increased risk of infection, and infertility. One strategy to bypass the existing limitations of ex vivo HSC therapy is to edit the HSCs directly in vivo. Here, we describe the development of a targeted non-viral lipid nanoparticle (LNP) delivery system that can deliver RNA to hematopoietic stem and progenitor cells (HSPCs) in vivo following a single intravenous injection. We targeted CD117, a receptor that is expressed on HSCs, and conjugated an antibody against CD117 to our LNPs for receptor-mediated delivery of RNA. We demonstrated that modulation of certain LNP parameters, such as circulation time and ligand density, increases delivery to the bone marrow. Using this targeted platform, we demonstrated LNP uptake and delivery of both siRNA and Cre mRNA into HSPCs.
In addition, in the Ai14 mouse model, we showed that HSCs transfected with our targeted LNPs maintain their stemness and functionality to produce mature immune cells. We also evaluated the overall biodistribution of our anti-CD117 LNP and investigated the downstream effects of delivery to organs other than the bone marrow. Finally, we explored in vivo gene editing using a variety of approaches. We optimized our LNP formulation to increase protein expression in bone marrow HSCs and used our optimized formulation for in vivo base editing.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165136</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A 3D human liver tissue model of the hepatobiliary junction</title>
<link>https://hdl.handle.net/1721.1/165135</link>
<description>A 3D human liver tissue model of the hepatobiliary junction
Westerfield, Ashley D.
Cholestasis, or disruption in bile flow, is a poorly understood feature of many liver diseases and is a well-established indication for liver transplant. Despite this clinical significance, many tissue engineering strategies for modeling or treating liver disease fail to recapitulate physiological bile flow. Recent advances in tissue engineering and organoid technology have enabled the culture of human hepatocytes and bile duct cells in vitro; however, these models lack a key function of the liver: bile transport. In this thesis, I first describe developments in bioengineering technology that have allowed for the culture and manipulation of bile duct cell organoids. I then present a 3D multicellular spheroid model that captures the structure and function of the human hepatobiliary junction—the interface between liver and bile duct cells that is often disrupted in liver disease. By co-aggregating primary human hepatocytes and bile duct cells, I engineer a liver spheroid model that recapitulates physiological bile flow through a functional connection between the two cell types. These spheroids maintain cell polarity and transport bile from hepatocyte canaliculi to bile duct structures. This function is quantified by leveraging a high-throughput imaging assay with AI-assisted analysis to track junction formation and bile flow over time. I also use this system to model ischemic injury of the bile duct, a common complication of liver transplant, by tuning the oxygen parameters of the spheroid culture. In this injury model, I observe and describe two processes that potentially contribute to injury: a reversible loss of canalicular function during hypoxia, followed by selective bile duct cell death after reoxygenation. This human-based, scalable platform provides a new tool to study bile duct biology, understand mechanisms of biliary injury after liver transplant, and support drug discovery efforts for cholestatic liver diseases.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165135</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Control of a Continuous Lyophilizer</title>
<link>https://hdl.handle.net/1721.1/165134</link>
<description>Modeling and Control of a Continuous Lyophilizer
Kadambi, Rohan Patrick
Lyophilization, or vacuum freeze-drying, is a separations process in pharmaceutical manufacturing used to stabilize aqueous drug products by removing nearly all the water at cryogenic temperatures. This process is often applied to final-dose formulations of injectable drugs already dispensed in vials. Despite its benefits, freeze-drying is a slow process, with typical cycle times exceeding two days. To meet demand, lyophilization is applied to large batches of thousands to tens of thousands of vials at a time. These large batches are plagued by non-uniformity, non-optimal operating conditions, and slow technology transfer across production scales.&#13;
&#13;
Continuous manufacturing offers a solution to many of these problems, with its demonstrated history in improving efficiency, homogeneity, and control through its inherent parallelization of operations. However, the need to interact directly with vials, as well as modify heat transfer to them, has delayed the development of a continuous lyophilization process.&#13;
&#13;
This work presents the design, control, and implementation of a modular continuous lyophilizer. The continuous lyophilizer is composed of a series of custom aluminum chambers around a magnetic levitation system, which facilitate the heat transfer and motion required. The modular nature ensures that critical geometry is conserved when production rate is scaled between laboratory, pilot, and industrial scales. This system demonstrated continuous operation for four days on various solutions in standard 10R vials. In this work, novel continuous freezing and vacuum systems were designed to support the operation. Additionally, an innovative mass sensor was designed to monitor each vial traveling through the system independently. When combined with per-vial temperature data available from thermal imaging, this system enables a more comprehensive understanding of the process dynamics.&#13;
&#13;
This continuous process opens opportunities for expanding the use of lyophilization by simplifying technology transfer during scale-up, improving product uniformity, and increasing process productivity through optimal control.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165134</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic Insights into Disordered Proteins and Condensates via Molecular Simulations</title>
<link>https://hdl.handle.net/1721.1/165133</link>
<description>Atomistic Insights into Disordered Proteins and Condensates via Molecular Simulations
Wang, Cong
Biomolecular condensates formed by intrinsically disordered proteins (IDPs) play essential roles in cellular organization and function, attracting broad scientific interest. In addition to experimental approaches, molecular dynamics simulations—particularly atomistic simulations—serve as powerful tools for gaining mechanistic insights into IDP conformations and inter- and intramolecular interactions within condensates. In this work, we developed a multiscale simulation strategy that balances efficiency and accuracy by combining coarse-grained and atomistic modeling. We further applied dimensionality reduction techniques to extract meaningful features from high-dimensional simulation data. Using this integrated framework, we investigated three scientific problems: (1) the sequence-dependent conformational ensembles of IDPs, (2) nonspecific yet selective condensate–drug interactions, and (3) the mechanism of formation of monocomponent, multiphasic condensates formed by tetrapeptide sequences.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165133</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis on Aircraft Overflight Noise Distribution on Airport Adjacent Communities</title>
<link>https://hdl.handle.net/1721.1/165130</link>
<description>An Analysis on Aircraft Overflight Noise Distribution on Airport Adjacent Communities
Wang, Z. Juju
Aircraft overflight noise can be a source of noise pollution and can also be the limiting factor in airport operations [1]. This research studies the distribution of overflight noise near 22 airports across 12 metropolitan areas in the United States. It uses a fast noise model to generate noise data from surveillance flight track data and correlates noise data with population data, income, and noise complaints where data are available. Findings reveal that population noise exposure is significantly influenced by urban density and proximity of the airport to city centers. Airports located farther from major cities, particularly newer and larger facilities like IAD, DFW, and DEN, tend to decrease noise exposure due to their expansive land areas and remote siting. Noise exposure is also impacted by operational procedures. Arrival procedures typically follow straight-in paths aligned with runway centerlines, concentrating noise along specific corridors. We find a few exceptions to this trend in complex airspaces which have turns in their arrival procedures, often enabled by Area Navigation (RNAV) technology. This allows flights to avoid adjacent airspaces and also overfly targeted lower-population areas. The implementation of Area Navigation and Performance Based Navigation technology has enabled greater use of arrival paths involving turns, which were previously limited to visual conditions. In contrast, departure paths and contours from departures are more varied due to routing flexibility. These are often shaped by noise abatement strategies and airspace constraints. Runway orientation relative to urban centers also plays a critical role in determining noise exposure, with runways directed away from cities generally resulting in fewer person-noise events. 
In a socioeconomic analysis, we find lower-income communities are more frequently located near high-noise zones, though exceptions exist in areas with desirable geographic features near the airport such as waterfront property.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165130</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Bayesian inference for generative models via probabilistic programming</title>
<link>https://hdl.handle.net/1721.1/165128</link>
<description>Scaling Bayesian inference for generative models via probabilistic programming
Loula Guimarães de Campos, João
This thesis addresses challenges in the field of data science, developing probabilistic programming methods that enable rational AI agents in that domain. The work is organized into two parts: Part I introduces GenLM and Adaptive Weighted Rejection Sampling for translating natural language instructions into structured programs with both syntactic and semantic constraints, outperforming existing approaches across a number of domains. Part II develops Bayesian generative models for tabular data that can answer a wide range of questions, yield stable inferences across subpopulations of different sizes, and scale to hundreds of millions of rows on GPUs, as well as early work on Large Population Models that unify heterogeneous datasets. Together, these contributions provide first steps towards a unified framework for creating AI agents that can rationally formalize and answer questions about data.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165128</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Folding and Quality Control Mechanisms of Procollagen</title>
<link>https://hdl.handle.net/1721.1/165127</link>
<description>Folding and Quality Control Mechanisms of Procollagen
Kim, Seo Yeon
Procollagens undergo a complex folding process with the assistance of various ER proteostasis network components and enzymes that introduce post-translational modifications. In Chapter 2, we investigate the biological role of one of these post-translational modifications, the highly conserved N-glycosylation on procollagen C-propeptides, by generating and characterizing multiple mouse models that lack the N-glycan on procollagen-I. We show that the lack of the N-glycan can affect procollagen folding inside cells, which translates to reduced bone mass in mice lacking the N-glycan. These data provide new insights into the roles of this evolutionarily conserved yet underappreciated N-glycan on collagen. Mutations in procollagen genes and other proteins important for procollagen folding often cause misfolding defects in procollagen that can ultimately lead to disease. In Chapter 3, we explore the various molecular mechanisms that the ER proteostasis network can use to recognize diverse classes of misfolding procollagen variants. Specifically, we apply cell engineering and quantitative mass spectrometry to elucidate the interactomes of diverse osteogenesis imperfecta-causing procollagen α1(I) variants with distinct folding defects, and show that ER proteins differentially engage with these misfolding procollagen variants. These findings provide a foundation for illuminating the quality control pathways that different types of misfolding procollagen variants are subject to during their biogenesis. Taken together, the work in this thesis meaningfully advances our knowledge of the mechanisms of procollagen folding and quality control.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165127</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tribochemistry of Frictional Ignition in High-Pressure Oxygen</title>
<link>https://hdl.handle.net/1721.1/165126</link>
<description>Tribochemistry of Frictional Ignition in High-Pressure Oxygen
Garcia Jimenez, Andres
Frictional heating of metals at sliding contacts in high-pressure oxygen environments can result in catastrophic metal fires. This phenomenon, known as frictional ignition, presents an ongoing challenge in the development of oxygen-compatible components for next-generation reusable rocket engines. Early NASA investigations on frictional ignition indicated that ignition-resistant materials developed a robust oxide tribolayer. Breakdown of this tribolayer was hypothesized to drive the onset of frictional ignition. While this early work ranked the relative ignition resistance of several engineering alloys, it lacked a physical explanation for the mechanisms of tribolayer breakdown and frictional ignition, limiting its utility in predicting ignition conditions and designing ignition-resistant components. This thesis addresses this knowledge gap through a combination of experiments and modeling, which reveal the fundamental mechanisms governing frictional ignition of metals. These insights are then developed into a quantitative framework to predict ignition conditions and identify safe operating limits for sliding systems in high-pressure oxygen environments. The present work considers frictional ignition in the context of thermal ignition theory. Using high-speed dry sliding experiments and finite element modeling, we establish two conditions that must be satisfied for frictional ignition. The first condition is tribolayer breakdown, which we confirm through in situ measurements of the friction coefficient and complementary thermochemical modeling. Three distinct tribolayer breakdown mechanisms are identified – oxide melting, substrate metal melting, and solid-state mechanical failure. The dominant breakdown mechanism is found to depend on alloy chemistry and operating conditions. The second condition for frictional ignition is satisfaction of the thermal ignition criterion for thermal runaway, i.e., when the oxidative heating rate exceeds the rate of heat loss. 
Numerical modeling shows that the tribolayer acts as a diffusion barrier that slows oxidative heating. Thermal runaway is only possible in the absence of this tribolayer once the temperature at the sliding surface exceeds a critical threshold. The critical temperature provides a conservative estimate for evaluating ignition risk, since below this temperature, ignition cannot occur. This framework is used to explain the ignition behaviors of all materials with available frictional ignition data, highlighting the exceptional ignition resistance of the Ni-base alloy MA754. Experiments show that this behavior derives from the unique mechanical properties, microstructure, and growth kinetics of the MA754 tribolayer. The above framework is extended to assess the ignition behaviors of dry dissimilar metal sliding systems. Frictional ignition experiments revealed that tribolayers may form through oxidation of the parent metal or via material transfer between counter-surfaces, potentially resulting in tribolayers whose chemistry differs from that of the base alloy. The dynamics of material transfer depend on the contact geometry, operating conditions, and material-pair combination. We find that favorable material transfer may result in the formation of thick, lubricating, and protective oxide tribolayers on both surfaces that suppress ignition. We develop expressions to establish safe operating bounds for dissimilar contacts with different geometries – symmetric contacts and a pin-on-disk geometry. These expressions highlight the effects of material-pair combination and contact geometry on ignition conditions. This thesis concludes by providing materials selection and component design strategies for ignition-resistant tribosystems. 
The first strategy is to select or design materials with high thermal conductivity, low enthalpy of oxidation, and favorable oxidational wear behaviors, i.e., rapid formation of a refractory, delamination-resistant tribolayer that lubricates the rubbing surface and protects against oxidation. The second strategy is to enhance cooling by optimizing component design (e.g., contact geometry) or modifying operating conditions (e.g., working fluid). Collectively, this thesis provides a roadmap for designing ignition-resistant sliding components capable of withstanding extreme oxygen pressures, with direct application to the development of safer oxygen-compatible hardware for next-generation reusable rocket engines.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165126</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Personalizing Robot Assistance under Uncertainty about the Human</title>
<link>https://hdl.handle.net/1721.1/165124</link>
<description>Personalizing Robot Assistance under Uncertainty about the Human
Li, Shen
Robots have the potential to enhance human well-being by assisting with daily activities, particularly for older adults and people with disabilities. One example is robot-assisted dressing, where a robot helps a person put on clothing. However, no two individuals are alike. Each person has unique preferences, behaviors, and needs, making personalization essential for effective assistance. A central challenge is that robots often operate under uncertainty about the human they are helping. This uncertainty may involve the person’s preferences, hidden physical states, or reactions to assistance. If not properly addressed, such uncertainty can lead to ineffective, undesired, or even unsafe outcomes. This thesis asks: How should a robot behave when it is uncertain about the human? To answer this, I present a unified framework for uncertainty-aware personalization in human-robot interaction, spanning three core components of robot intelligence: preference learning, state estimation, and motion planning. I propose methods that (1) reduce uncertainty using implicit cognitive signals, (2) represent and respect uncertainty through set-based state estimation, and (3) act under uncertainty using relaxed safety constraints. First, I introduce an approach that uses response time, a subtle yet informative cognitive signal, as implicit feedback for preference learning. While traditional methods rely solely on binary choices, I developed the first algorithm that integrates both choices and response times to infer not just what a person prefers, but how strongly they feel about those preferences. Theoretical analysis reveals that response times significantly reduce uncertainty about user preferences, especially when users have strong preferences. In simulation studies, this method decreased misidentification of the most preferred option by up to 55%, enabling faster and more accurate personalization without extra user input. 
Second, I address the problem of estimating hidden human states during physical interaction. For example, in dressing, parts of the body, such as the elbow, may be occluded. I introduce the first set-based estimator that represents and respects uncertainty from human behavior and sensing models trained on limited data. Instead of outputting a point estimate, the method constructs a geometric set, such as a 3D box, guaranteed with high probability to contain the true human state. In dressing experiments, the estimator achieved 92% inclusion using significantly smaller boxes than prior methods, balancing reliability and precision and supporting safe and responsive physical assistance. Third, I consider how a robot should plan motion when it is uncertain about future human behavior. Traditional safety constraints typically prohibit any contact, which can cause the robot to freeze when uncertainty is high. I propose a more flexible definition of safety that allows either collision avoidance or low-impact contact. Integrated into a learning-based control framework, this approach enables efficient motion while maintaining safety. In dressing tasks, it reduced task time by 78% without compromising safety. Together, these contributions show how robots can reduce, represent and respect, and act under uncertainty to personalize their assistance. This thesis lays a foundation for robots that not only respond to commands, but also understand and adapt to the nuanced, evolving, and uncertain nature of human behavior.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165124</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacial Engineering of Perovskite Solar Cells</title>
<link>https://hdl.handle.net/1721.1/165122</link>
<description>Interfacial Engineering of Perovskite Solar Cells
Lu, Yongli
The efficiency of lead halide perovskite solar cells has improved from 3% to above 27% in a decade. However, the long-term stability of these devices still needs to be improved in order to compete with traditional photovoltaic technologies, such as silicon and GaAs. Hybrid organic and inorganic interfaces in the devices are the origin of many degradation pathways. Understanding the nature of these interfaces and the chemical and physical mechanisms behind their evolution under electrical, light, and thermal bias is the subject of this thesis. In the following chapters, I focus on developing an electron transport layer (ETL) based on chemical bath deposition (CBD) for the synthesis of tin dioxide (SnO₂). The conventional CBD recipe uses thioglycolic acid (TGA) to facilitate attachment of SnOₓ particles onto the substrate. However, nonvolatile TGA is reported to harm the operational stability of PSCs. A volatile oxalic acid (OA) is introduced as an alternative to TGA. OA, a dicarboxylic acid, functions as a chemical linker for the nucleation and attachment of particles to the substrate in the chemical bath. Moreover, OA can be readily removed through thermal annealing followed by a mild H₂O₂ treatment, as shown by FTIR measurements. Synergistically, the mild H₂O₂ treatment selectively oxidizes the surface of the SnOₓ layer, minimizing nonradiative interface carrier recombination. Electron energy-loss spectroscopy (EELS) confirms that the SnOₓ surface is dominated by Sn⁴⁺, while the bulk is a mixture of Sn²⁺ and Sn⁴⁺. This rational design of a CBD SnOₓ layer leads to devices with T₈₅ = 1500 h, a significant improvement over the TGA-based device with T₈₀ = 250 h. The champion device reached a power conversion efficiency of 24.6%. This work offers a rationale for optimizing the complex parameter space of CBD SnOₓ to achieve efficient and stable PSCs. 
In addition to developing an electron transport layer (ETL) based on chemical bath deposition (CBD) for the synthesis of tin dioxide (SnO₂), a perovskite ink additive, bis(2-oxo-3-oxazolidinyl) phosphinic chloride (BOP-Cl), was developed with the following benefits: (1) The phosphoryl and two oxo groups form six-membered intermolecular hydrogen-bonded rings with the formamidinium cation (FA), mitigating ion migration. (2) The hydrogen bonding reduces the electrophilicity of the ammonium protons by donating electron density, therefore reducing their reactivity with the surface oxygen on the metal oxide. Furthermore, the molecule can react to form a protecting group on the nucleophilic oxygen at the tin oxide transport layer surface through the elimination of chlorine. As a result, we achieve perovskite solar cells with an efficiency of 25.0% and improved MPP stability, T₉₃ = 1200 h at 40–45 °C, compared to a control device (T₈₆ = 550 h). In addition, we show a negative correlation between the strength of hydrogen bonding of different phosphine oxide derivatives to the organic cations and the degree of metastable behavior (e.g., initial burn-in) of the device.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165122</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Colloidal Semiconductor Nanocrystals: Tools For and&#13;
Insights From First Principles Investigations</title>
<link>https://hdl.handle.net/1721.1/165114</link>
<description>Colloidal Semiconductor Nanocrystals: Tools For and&#13;
Insights From First Principles Investigations
Alexander, Ezra A.
Colloidal semiconductor nanocrystals, also known as quantum dots (QDs), are at once systems of great promise for diverse applications, systems with complex quantum physics that remain poorly understood at a fundamental level, and systems that are difficult to describe with conventional first principles computational chemistry methods due to their large size. Whereas QDs made from toxic materials like Cd and Pb are able to emit and absorb light efficiently and precisely, non-toxic alternative III-V QDs suffer from difficult-to-control defects that interfere with their radiative processes. Motivated primarily by the problem of understanding defects in III-V quantum dots, in this thesis we develop and apply new frameworks for the computational study of quantum dots. These frameworks include orbital localization techniques for de-convoluting ground-state band structures, a ∆SCF procedure for modeling x-ray photoelectron spectra (XPS), and a low-cost machine learning framework for predicting Hamiltonians that can be used to extend molecular dynamics simulations. Through these frameworks, we reveal a more comprehensive picture of the defects that interfere with the optical performance of III-V quantum dots. We find that both three-coordinate indium and phosphorus can cause trap states in InP, explore how geometry and charge modulate trap depth, discover and explain a new source of four-coordinate trap states in InP and GaP, and assign shifts in P 2p XPS to specific III-V surface defects.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165114</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cloud-chamber study of cosmic ray showers in lead plates</title>
<link>https://hdl.handle.net/1721.1/165072</link>
<description>Cloud-chamber study of cosmic ray showers in lead plates
Fussell, Lewis.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1938; Includes bibliographical references (leaves [113]-[118]).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165072</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulation and measurement of charge transfer kinetics at chemically modified electrodes</title>
<link>https://hdl.handle.net/1721.1/165069</link>
<description>Manipulation and measurement of charge transfer kinetics at chemically modified electrodes
Lewis, Nathan S. (Nathan Saul)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165069</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher Siegel-Weil formulae over function fields</title>
<link>https://hdl.handle.net/1721.1/164977</link>
<description>Higher Siegel-Weil formulae over function fields
Mkrtchyan, Mikayel
In their seminal work, Feng-Yun-Zhang introduced function field analogues of Kudla-Rapoport cycles for moduli spaces of unitary shtukas, and initiated the study of their intersection theory. They proved a higher Siegel-Weil formula in the case of non-degenerate Fourier coefficients, relating the degrees of these cycles to higher derivatives of Siegel-Eisenstein series. In this thesis, we generalize their result in two directions: we 1) prove a higher Siegel-Weil formula for unitary groups for corank-1 degenerate coefficients, and 2) introduce analogous cycles on moduli spaces of symplectic shtukas, and prove a higher Siegel-Weil formula for such cycles in the non-degenerate case, relating their degrees to derivatives of Siegel-Eisenstein series on split orthogonal groups.
</description>
<pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164977</guid>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates</title>
<link>https://hdl.handle.net/1721.1/164976</link>
<description>A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates
Xin, Tiansi
This thesis compiles the published scientific contributions of Tiansi Xin. Chapter 1 consists of a brief collection of eulogies from friends and colleagues, reflecting on his life and time at the Massachusetts Institute of Technology. The subsequent chapters describe the development of novel synthetic methods for the transfer of phosphorus-containing moieties, specifically metaphosphates and phosphinidenes. The work presented here has significant implications for both the fundamental understanding and practical advancement of synthetic inorganic and organic chemistry. Chapters 2 and 3 address the sustainable production and processing of phosphorus-containing chemicals, focusing on mechanochemical methods to synthesize reduced phosphorus species while circumventing the need to access hazardous white phosphorus as an intermediate. In particular, Chapter 2 describes a solvent-free mechanochemical approach to producing phosphite (HPO₃²⁻) via hydride-mediated reduction of condensed phosphates. Using potassium hydride, a range of inorganic phosphate sources—including pyrophosphate, triphosphate, trimetaphosphate, fluorophosphate, and polyphosphate—were converted to phosphite in moderate to high yields. Mechanistic studies identified overreduction pathways leading to hypophosphite and other low-oxidation P-species. Chapter 3 similarly applies this mechanochemical approach to phosphorus–carbon bond formation, reporting the phosphorylation of acetylides with condensed phosphates to afford phosphonates. Biogenic polyphosphates were also shown to be viable precursors, a proof of concept for closing the modern phosphorus cycle using recycled inputs. These results demonstrate the possibility of accessing organophosphorus chemicals directly from condensed phosphates and may offer a path toward a “greener” phosphorus industry. Chapters 4 and 5 shift focus to phosphinidene transfer chemistry and the synthesis of novel phosphorus-containing heterocycles. 
This expands on previously published studies from the Cummins group on the chemistry of dibenzo-7-phosphanorbornadiene “RPA” reagents. Chapter 4 reports the preparation and structural characterization of iron–phosphido complexes relevant to phosphinidene group transfer catalysis and describes the development of an improved catalytic system based on a simple diiron precursor (Fp₂), enabling efficient synthesis of phosphiranes from electron-deficient alkenes. The mechanism was interrogated thoroughly, both experimentally and computationally. Chapter 5 describes the novel synthesis of free, uncomplexed phosphet-2-ones via phosphinidene transfer to cyclopropenones, with experimental and theoretical studies supporting a mechanism involving ketene-derived intermediates and transformations to additional phosphorus heterocycles through subsequent reactions.
</description>
<pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164976</guid>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural geology of Eastern Massachusetts</title>
<link>https://hdl.handle.net/1721.1/164922</link>
<description>Structural geology of Eastern Massachusetts
Ilsley, Ralph,
            1896-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Geology, 1934; Vita.
</description>
<pubDate>Mon, 01 Jan 1934 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164922</guid>
<dc:date>1934-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiation transfer in massive binary x-ray systems</title>
<link>https://hdl.handle.net/1721.1/164917</link>
<description>Radiation transfer in massive binary x-ray systems
Lewis, Wayne Lloyd.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1991; Includes bibliographical references (leaves 167-173).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164917</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks</title>
<link>https://hdl.handle.net/1721.1/164863</link>
<description>Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks
Arzuaga García, Ignacio Martín
Understanding the interaction between hydraulically induced fractures and pre-existing natural fractures in geologic formations is key for optimizing subsurface energy systems that rely on fluid injection into fractured rocks. These include Enhanced Geothermal Systems (EGS), CO₂ sequestration, hydrogen storage in depleted reservoirs, unconventional oil and gas development in shale formations, and nuclear waste disposal, among others. In all these applications, controlling fracture propagation and interaction is essential for ensuring operational efficiency, safety, and long-term integrity of the system. This thesis presents a comprehensive experimental and theoretical investigation of hydraulic fracture (HF) interactions with natural fractures (NFs), using Opalinus Clayshale as a representative anisotropic material.&#13;
&#13;
The experimental work involved a series of hydraulic fracturing tests on Opalinus Clayshale specimens under controlled quasi-true-triaxial stress conditions, comparing normal and dried states. Novel monitoring techniques, including high-resolution imaging, high-speed video, acoustic emissions (AE), and pressure tracking, were employed to capture the fracturing process in real time. Three dominant interaction modes (Crossing, Arrest, and Opening) were systematically characterized and linked to key parameters, including stress ratio, fracture geometry, and injection rates. A critical stress ratio (σ₁/σ₃) of approximately 20 was identified as the threshold for achieving fracture crossing under our experimental conditions: cohesionless, “open” natural fractures, with a low viscosity injection fluid, in a toughness-dominated regime. In dried specimens, high flaw pressurization rates were necessary to overcome matrix fluid loss and achieve crossing.&#13;
&#13;
To complement and interpret the experimental results, existing theoretical models were reviewed and implemented. Furthermore, a simplified version of the OpenT model (Chuprakov et al., 2014) was developed and applied to Opalinus Clayshale, incorporating stress, energy, friction, and permeability effects. By integrating laboratory results with theoretical frameworks, this thesis offers an integral approach to the predictive understanding of fracture propagation in naturally fractured rocks, showing that the mechanism of interaction is determined not only by the characteristics of the discontinuity and the far-field stresses but also by the dynamic energy balance at the fracture tip, which is influenced by injection rate, fluid viscosity, and discontinuity properties.&#13;
&#13;
Overall, this thesis bridges the gap between laboratory experiments and theoretical models, advancing a more comprehensive understanding of fracture propagation in naturally fractured media. The findings highlight the importance of considering both mechanical and hydraulic parameters, particularly in low-viscosity, toughness-dominated regimes, for accurately predicting fracture behavior.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164863</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere</title>
<link>https://hdl.handle.net/1721.1/164862</link>
<description>Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere
Parker, William E.
Earth’s orbit has become increasingly congested and contested in recent years. The surge in launched payloads, combined with satellite failures, explosions, and collisions, has contributed to a large and growing population of orbital debris objects that can remain in orbit for decades, centuries, or longer. Meanwhile, decreasing launch costs and maturing satellite technology have created conditions favorable for rapid commercialization across orbital regimes, especially in low Earth orbit (LEO). Today, a small number of commercial entities operate the large majority of the world’s active satellites as part of proliferated LEO constellations. Sustaining productive activity in an increasingly crowded orbital environment has made satellite conjunction assessment and collision avoidance essential for safe operations. These efforts require not just accurate trajectory predictions, but also credible estimates of uncertainty. In LEO, variability in atmospheric drag is by far the dominant source of propagation error, often leading to deviations of several kilometers per day due to unpredictable solar and geomagnetic activity. Even over short timescales, trajectory prediction is challenging because existing forecasts exhibit limited predictive skill. Although forecast errors are often non-Gaussian and heteroscedastic, operational products are generally presented as deterministic, and atmospheric models rarely provide rigorous uncertainty characterization. This work introduces a new approach for probabilistic satellite drag modeling based on historical correlations between space weather drivers and satellite dynamics. Unlike traditional methods, it models satellite behavior directly without reconstructing thermospheric mass density or requiring detailed knowledge of satellite properties such as the ballistic coefficient. This end-to-end strategy offers substantial computational and operational advantages for many space domain awareness tasks. 
Capturing both trajectory predictions and their associated uncertainty is critical for enabling informed collision avoidance decisions, particularly during geomagnetic storms when current infrastructure frequently fails. Because the orbital lifetime of debris objects can exceed hundreds of years, population dynamics in space critically depend on long-term variability in the composition of Earth’s thermosphere. Rising concentrations of carbon dioxide and other greenhouse gases have caused warming in the troposphere but cooling and contraction in the upper atmosphere. This contraction decreases atmospheric density in LEO, reducing drag and extending the orbital lifetime of debris objects. Longer-lived debris populations pose a persistent collision hazard for all active satellites as long as they remain in orbit. Even natural events, such as a prolonged grand solar minimum, could further reduce thermospheric density and contribute to longer debris lifetime in LEO. With little ability to predict such an event, it is necessary to understand the potential consequences and to identify strategies that enable the continued safe and productive use of LEO. This work models the impact of such long-term environmental changes on limits for sustainable satellite deployments. LEO is a finite resource increasingly at risk of overexploitation. Conserving it and sharing it fairly requires that we first understand its fundamental capacity and our current occupation of that capacity. Some metrics have been proposed to measure the satellite carrying capacity of Earth’s orbit, but none have previously accounted for the potential influence of a changing space climate. This work develops new methods for defining carrying capacity as a common currency, enabling clear constraint-driven thresholds on activity and a better understanding of how existing and proposed missions consume available capacity. 
These new metrics provide insight into how environmental variability may affect the long-term sustainability of operations in LEO. Respecting and understanding this influence that the natural environment has on our collective ability to operate spacecraft in LEO is critical to preventing the overexploitation of this regime and protecting it for future generations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164862</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics and Econometrics</title>
<link>https://hdl.handle.net/1721.1/164859</link>
<description>Essays in Financial Economics and Econometrics
Orestes, Victor M.
This thesis comprises three essays in finance and econometrics. The first two essays focus on the role of credit access and liquidity in shaping real firm outcomes. The first essay examines the transmission of modern monetary policy through corporate asset markets. Exploiting quasi-experimental variation in the Central Bank of Brazil’s collateral framework and implementing a novel dynamic regression discontinuity design, it shows that monetary policy can ease expected future borrowing constraints, reduce firms’ precautionary cash holdings, and stimulate employment. The second essay analyzes how receivables financing through factoring helps firms smooth cash flows. Using a shift-share instrument and matched administrative data, it finds that cheaper liquidity leads firms to rely more on permanent labor. The third essay develops a new method for distributional inference—nonparametric quantile mixture models. This framework can be applied to financial settings such as tail risk estimation and density forecasting, as well as to causal inference when the objective is to estimate the distributional effects of interventions. It is used here to quantify the heterogeneous wage effects of a major environmental disaster.&#13;
&#13;
The first chapter (joint with Luis Alvarez and Thiago Christiano Silva) studies how modern monetary policy tools, which increasingly operate through corporate asset markets, affect real firm outcomes. We exploit quasi-experimental variation from the inclusion of specific corporate debt instruments in the Central Bank of Brazil’s collateral framework and implement a novel dynamic regression discontinuity design. We find that eligibility increases firms’ debt issuance, modestly lowers spreads, and reduces cash holdings, reflecting a decline in precautionary savings. These effects translate into higher employment and greater supply chain liquidity. We interpret the mechanism through the lens of segmented financial markets: by relaxing firms’ expected future borrowing constraints, the policy acts as a persistent borrowing subsidy and liquidity injection. This encourages firms to reduce cash hoarding and expand production. Using a semi-structural framework calibrated to our reduced-form estimates, we find that an induced 0.8% borrowing subsidy leads to a 1% increase in debt issuance, a 0.2% reduction in cash holdings, and a 0.4% increase in the wage bill.&#13;
&#13;
The second chapter (joint with Thiago Christiano Silva and Henry Zhang) shows that firms experience large increases in sales and purchases after receiving cheaper liquidity. We focus on factoring, defined as the supplier-initiated sale of receivables. In Brazil, receivables funds (FIDCs) securitize receivables for institutional investors. By assembling a novel transaction-level dataset of factoring with other credit operations for all registered firms and FIDCs, we construct a shift-share instrument for factoring financing supply based on FIDC flows. We then use a novel combination of electronic payments, trade credit, and employer-employee matched data to estimate the impacts. A flow-induced increase in receivables demand reduces firms’ factoring interest rate. In response, firms demand more permanent labor and less temporary labor. In our model, these effects arise from factoring’s purpose of reducing cash inflow volatility, helping firms match inflows to outflows, which firms otherwise achieve at an efficiency cost through substitution across labor types.&#13;
&#13;
The third chapter (joint with Luis Alvarez) introduces nonparametric quantile mixture models as a computationally convenient and flexible alternative to standard nonparametric density mixtures, which are widely used in Statistics and Econometrics but face significant computational and inferential challenges. We propose a sieve estimator based on a generalized method of L-moments and develop a full inferential theory. In doing so, we contribute to the statistical literature by extending a numerical bootstrap method to high-dimensional settings. As a direct application of our theory, we provide the first inference method for the distributional synthetic controls of Gunsilius (2023), a novel tool for counterfactual analysis that previously lacked formal inference procedures. We apply this method to evaluate the effects of the Brumadinho dam collapse—a large-scale environmental disaster—on the local wage distribution. The results reveal substantial heterogeneity across the distribution, with evidence of displacement effects in which median-paying jobs are replaced by lower-wage contracts.&#13;
JEL Codes: C1, E4, E5, G2, G3
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164859</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Signaling at the Tumor-Immune Interface in Glioblastoma</title>
<link>https://hdl.handle.net/1721.1/164857</link>
<description>Signaling at the Tumor-Immune Interface in Glioblastoma
D'Souza, Alicia D.
Glioblastoma (GBM) is a devastating brain cancer, and the standard of care has not changed in over 20 years. GBM tumors are composed of a milieu of cancer cells and innate immune cells, which are co-opted by the cancer cells to promote an anti-inflammatory environment. Despite the tremendous success of immunotherapy in several cancers over the past 10 years, immunotherapies have failed to show efficacy in GBM. A systems biology approach to characterizing temporal changes at the tumor-immune interface of glioblastoma could illuminate new strategies to activate an anti-tumor immune response by examining changes in cell signaling and antigen presentation.&#13;
&#13;
In the first part of my thesis, I investigated how macrophages alter their phenotype in response to tumor co-culture and how these changes are reflected at the level of the phosphoproteome. To characterize signaling changes in distinct cell populations during co-culture, I developed a method to preserve and analyze cell-type-specific signaling using fixation. This approach enables phosphoproteomic profiling of two interacting cell types, capturing dynamic signaling events with cell-type resolution. I applied this method to study co-cultures of glioblastoma (GBM) cells and primary human macrophages. When cultured together, GBM cells induced an anti-inflammatory, immunosuppressive phenotype in macrophages, mirroring features observed in the glioblastoma tumor microenvironment. Phosphoproteomic analysis revealed that this phenotypic shift was accompanied by distinct signaling alterations in macrophages, including the upregulation of ABL kinase activity. To test this finding, I treated macrophages with an ABL kinase inhibitor and observed a reduction in the anti-inflammatory phenotype, suggesting that ABL signaling plays a role in supporting immunosuppressive macrophage polarization. Furthermore, in a mouse model of GBM, treatment with an ABL kinase inhibitor led to a reduction in the abundance of anti-inflammatory macrophages within the tumor and was associated with a modest extension of survival.&#13;
&#13;
In the second part, I examined changes in antigen presentation and signaling in glioblastoma tumors in response to treatment with an oncolytic virus (OV). In patient-derived xenograft (PDX) tumor models, mice treated with OV show increased antigen presentation, pointing to the potential of OV therapy to reshape the tumor microenvironment toward a more inflammatory state. Finally, tissue obtained from serial biopsies of GBM patients treated with OV shows an increase in antigen presentation and both Class I and Class II MHC protein expression. We also observed an increase in interferon alpha and interferon gamma signaling pathways as well as early induction of apoptotic pathways. These findings highlight the role of therapeutics in altering the tumor microenvironment and potentially priming it for combination immunotherapies. This thesis explores the dynamic nature of the tumor and immune compartments in glioblastoma and underscores how therapies can act on the immune compartment to promote anti-tumor activity.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164857</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue</title>
<link>https://hdl.handle.net/1721.1/164849</link>
<description>Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue
Liu, Fan
Three-dimensional (3D) multicellular tissues are increasingly favored over 2D monolayers and single cells; their mechanical properties, such as stiffness, surface tension, and viscosity, have been linked to diseases such as fibrosis and tumor metastasis. Multicellular tissues have traditionally been modeled as viscoelastic materials because of their apparent shape rearrangement, a treatment that largely neglects the internal structure, including the extracellular matrix (ECM) and the resulting intercellular water flow. Such intercellular transport often carries significant information about diseases such as tumor invasion, but direct supporting evidence for this behavior is lacking. In this work, we investigate the bulk response of 3D multicellular tissues to such intercellular flows and explore the underlying mechanism through a tailored micro-mechanics platform. &#13;
Firstly, we design and establish a micro-mechanics platform based on the parallel-plate compression (PPC) method. We adopt a precise micro-balance as the force sensor to detect force variations in the sample during compression, and incorporate a piezo linear stage to apply the tiny vertical displacements. In addition, a lateral microscope is designed to monitor the compression process in real time. This platform has proved applicable to various samples, including hydrogels, cell spheroids, and natural tissues and organs. &#13;
Then, we propose a critical criterion, the size dependence of the force relaxation time, to distinguish between two material behaviors: viscoelasticity and poroelasticity. For a poroelastic material, force relaxation arises from water redistribution, so its speed depends strongly on sample size. In contrast, for a viscoelastic material, relaxation is determined by the bulk material properties and is therefore independent of size. We verify this criterion theoretically via Abaqus simulations and experimentally on classic poroelastic and viscoelastic materials of various dimensions. &#13;
Next, we apply the size-dependence criterion to 3D multicellular tissues to distinguish poroelasticity from viscoelasticity in this biomaterial. Using the platform, we perform PPC on cell spheroids of different sizes. The force relaxation times are linearly proportional to sample size for all tested cell lines, demonstrating poroelasticity over our experimental time range. Intriguingly, tests on natural mouse islets reveal the same linear correlation. Hence, both cultured spheroids and natural tissues are poroelastic.&#13;
Finally, we explore the mechanism underlying poroelasticity in 3D multicellular tissues. By inhibiting cell-cell junctions, we demonstrate that intercellular water flow through extracellular gaps dominates the poroelastic force relaxation in this biomaterial. Further experiments show that the stiffness of the structure and the extracellular gaps within the tissue jointly govern the intercellular water flow: the stiffer the structure and/or the larger the gaps, the faster the water flows and the quicker the force decays after compression.&#13;
These findings highlight the fundamental role of intercellular water flow in the mechanical properties of 3D multicellular tissues. The micro-mechanics platform will also benefit tissue-level research involving micro-newton forces, supporting the development of artificial organoids for early disease diagnosis and treatment.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164849</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Comprehension, Production, and Reasoning in Humans and Neural Language Models</title>
<link>https://hdl.handle.net/1721.1/164842</link>
<description>Language Comprehension, Production, and Reasoning in Humans and Neural Language Models
Eisape, Tiwalayo
How closely do neural language models mirror human language processing, and what can this alignment teach us about cognition? This dissertation presents convergent evidence in comprehension, production, and reasoning that neural language models (LMs) can serve as productive instruments for understanding naturalistic human language use at scale. Studies 1-2 examine comprehension with complementary methods. First, Cloze Distillation—a novel method for aligning models with human next-word predictions—improves both language modeling and reading time prediction, demonstrating that LMs and humans make distinct, complementary predictions. Second, new methods for identifying syntactic information in LM hidden states demonstrate that models learn to implicitly represent incremental syntactic state. These probes also enable targeted interventions, allowing us to manipulate representations to resolve (or induce) temporary misinterpretations, confirming mechanistic understanding. While these studies demonstrate prediction’s role in comprehension, a complete account requires examining whether these mechanisms also shape how humans produce language in real-time. Study 3 analyzes a massive corpus of 2.3 million competitive typing events from TypeRacer.com, uncovering the first evidence of in-context predictability effects in this domain of production. Finally, Study 4 compares human and LM reasoning systematically—LMs achieve higher syllogistic reasoning accuracy than humans while still replicating several fine-grained human-like error patterns that are orthogonal to logical accuracy, including premise ordering effects. These converging findings reveal prediction as a fundamental mechanism in comprehension, production, and reasoning in both humans and LMs. 
While models achieve this through statistical learning rather than specialized cognitive architecture—often outperforming humans yet replicating their systematic biases—this alignment supports predictive processing theories of cognition. This work establishes LMs as scalable cognitive laboratories that can complement traditional experiments, and contributes psycholinguistically principled methods for understanding and controlling LMs.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164842</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa</title>
<link>https://hdl.handle.net/1721.1/164838</link>
<description>Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa
Grant, Fiona R.
To feed the growing global population, agricultural production must be intensified using existing land and resources. Sustainable agricultural intensification is particularly important in the Middle East and North Africa (MENA), the most water-stressed region in the world. Solar-powered drip irrigation (SPDI) has the potential to increase water use efficiency and reduce fossil fuel use for irrigation. Despite these benefits, SPDI adoption is limited by its high investment cost and the misalignment between farmers' risk tolerance and broader sustainability goals. Past work has explored three areas of SPDI innovation: low-pressure drip emitters, system cost optimization, and precision irrigation control. This thesis integrates previous innovations in an end-to-end design process to generate SPDI architectures that are accessible to resource-constrained farmers.&#13;
A market study was conducted to understand farmers' priorities and constraints and articulate SPDI value propositions for the target users. Stakeholder surveys were conducted in Jordan and Morocco for farms ranging from 1–130 hectares. Three market segments were identified, grouping farmers who face similar economic and knowledge barriers. While farmers generally prioritized irrigation reliability and low system costs, the observed variety in farm size, production volume, and technical expertise suggested that SPDI architectures must be tailored to each market segment.&#13;
This thesis proposes an energetic framework that captures system parametric relationships to identify feasible SPDI design trade-offs. The optimized solar power systems were 14%–80% less expensive than conventionally-sized designs. Despite significant changes to the hydraulic operating parameters, the proposed SPDI architectures were as reliable as existing systems. For farms with long irrigation times, it was optimal to pair low-pressure drip emitters with an irrigation schedule that tracks the daily solar profile, termed “solar profile matching” (SPM), to maximize direct solar power use. The SPM schedule reduced system cost by minimizing the battery capacity. An economic analysis demonstrated that the optimal SPDI designs could be made cost-competitive with grid power through SPDI retrofit subsidies, which some local governments already support. Researchers and industry professionals could use the energetic framework and techno-economic analysis presented in this thesis to inform system design and policy decisions and promote SPDI adoption.&#13;
Finally, this work created guidelines for designing a precision irrigation controller in resource-constrained markets. A controller was conceptualized to implement the SPDI-SPM architecture. The controller functional requirements and design specifications were iteratively defined with stakeholders, and a prototype was tested on two farms in the MENA region. The controller reduced water and energy use by up to 44% and 43%, respectively, while maintaining crop yield. However, the controller relied on battery power to execute the irrigation schedule. A yield loss sensitivity analysis found that using 72%–79% of the available solar energy on average, an increase of about 40% from the experimental SPM schedules, would have been sufficient to reliably irrigate with solar alone. The results suggest that, with software modifications, the proposed controller could eliminate the need for a battery and enable low-cost SPDI systems. If adopted, the proposed controller could make sustainable irrigation practices more accessible to farmers.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164838</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in Geometric Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164835</link>
<description>Topics in Geometric Machine Learning
Tahmasebi, Behrooz
Recent advances and the widespread adoption of neural networks have revolutionized machine learning and artificial intelligence. These developments demand learning paradigms capable of processing data from diverse applications and sources. In structured domains such as molecules, graphs, sets, and 3D objects, as well as fields such as drug discovery, materials science, and astronomy, models must account for data structures. The emerging field of geometric machine learning has gained attention for enabling neural networks to handle geometric structures, unlocking novel solutions across scientific disciplines. Despite recent advances, theoretical gaps remain. This thesis aims to address these gaps by studying the benefits and limitations of leveraging geometric structures and symmetries in data. We explore sample complexity, generalization bounds, hypothesis testing for the presence of symmetries in data, time complexity of learning under symmetries, and regularization and optimization in symmetric settings. The goal is to build a robust theoretical framework that validates recent successes and sheds light on unexplored aspects, fostering future progress in geometric machine learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164835</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical exposures in drinking water: contaminant analysis and physicochemical behavior</title>
<link>https://hdl.handle.net/1721.1/164821</link>
<description>Chemical exposures in drinking water: contaminant analysis and physicochemical behavior
Bugher, Nicolette A.
Environmental chemical exposures pose an understudied risk to human health. The quality and accessibility of data on the environmental occurrence and physicochemical behavior of industrial chemicals are integral to accurate exposure risk assessment. In this dissertation, analytical chemistry techniques were developed and leveraged to characterize human exposures to contaminants in drinking water and improve methods for assessing such risks. The occurrence of organic industrial pollutants in domestic well waters was investigated, with a particular focus on the impacts of region-specific industrial activity (e.g., hydraulic fracturing), legacy pollution sites (e.g., Superfund sites), and geochemistry. The exposure risk to water contaminants of domestic well users was further interrogated by evaluating trends in contaminant concentrations resulting from the implementation and maintenance of in-home water treatment devices. The results show widespread, low-dose mixtures of organic pollutants, where the efficacy of removal by in-home water treatment varied by water contaminant class and maintenance frequency. Additionally, analytical methods were optimized to quantify a group of organic water contaminants (i.e., probable carcinogens, N-nitrosamines), improving method sensitivity and critically identifying false-positive interferences. Finally, methods were evaluated and deployed for the determination of physicochemical properties of N-nitrosamines. The results demonstrate gaps in existing experimental data, provide a valuable methodological intercomparison (two experimental and two computational approaches), and contribute novel partitioning data. This dissertation addresses gaps in occurrence data, analytical method sensitivity, and reliability of physicochemical parameters for risk assessment.
The combination of method development and implementation enables the study of exposures to water contaminant mixtures at health-relevant concentrations, representative of prevalent exposure pathways.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164821</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Data Layouts for Evolving Cloud Table Storage</title>
<link>https://hdl.handle.net/1721.1/164819</link>
<description>Optimizing Data Layouts for Evolving Cloud Table Storage
Sudhir, Sivaprasad
Modern data analytics platforms increasingly adopt disaggregated architectures, storing data in cost-effective cloud object stores. While this approach enables a clean separation of concerns, allowing each layer to be independently managed and scaled, it introduces significant performance bottlenecks due to expensive data movement. Effective data layouts, which organize data to minimize unnecessary data reads, are thus critical to achieving high query performance. However, existing techniques typically rely on manually specified layouts, collect limited metadata, or lack mechanisms to dynamically adapt to changing data and workloads.&#13;
&#13;
This thesis investigates adaptive, metadata-rich, expressive data layouts for cloud table storage. First, we introduce Pando, a correlation-aware layout technique that leverages rich metadata on query predicates to significantly improve data skipping. Next, we propose CopyRight, a partial replication strategy that selectively replicates subsets of data and optimizes each replica differently, efficiently serving heterogeneous query patterns. Finally, we describe Self-Organizing Data Containers (SDCs), a practical table storage layer for the cloud that incrementally reorganizes complex data layouts based on changes in data and workload distributions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164819</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures</title>
<link>https://hdl.handle.net/1721.1/164703</link>
<description>Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures
Tsai, Curtis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (leaves 135-146).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164703</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An experimental study of the law of parity conservation in electromagnetic interactions.</title>
<link>https://hdl.handle.net/1721.1/164702</link>
<description>An experimental study of the law of parity conservation in electromagnetic interactions.
Hegblom, Edwin Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164702</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rules for ring closure and aspects of organolithium chemistry</title>
<link>https://hdl.handle.net/1721.1/164698</link>
<description>Rules for ring closure and aspects of organolithium chemistry
Dupont, William Alan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164698</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Identification and Network Measurement Error in Peer Effects Estimation</title>
<link>https://hdl.handle.net/1721.1/164669</link>
<description>Weak Identification and Network Measurement Error in Peer Effects Estimation
Wang, William Wei
The growing availability of social network data has enabled a surge of research on social interactions. In particular, peer effects, once considered unidentifiable, have now been shown to be identified given knowledge of the network structure. Despite this positive result, questions remain about the existence and nature of peer effects, due to concerns about identification strength and the reliability of network data. This work investigates two key threats to the estimation of peer effects: weak identification and network measurement error. We show that weak instrument problems arise in moderately dense networks due to rapid averaging, leading to slow convergence rates even when estimators remain consistent. On the measurement error side, we show that additive edge weight errors can be mitigated in such networks due to the same averaging phenomena, but the error remains a relevant threat to consistency in sparser networks. We further demonstrate that when both issues are present, the resulting estimators exhibit non-vanishing bias, suggesting that the combined effect of weak instruments and measurement error can be more severe than either problem in isolation. Overall, our results aim to clarify how these non-standard estimation challenges impact our ability to study peer effects using network data.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164669</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing Beyond Limits with Physics-Informed Priors</title>
<link>https://hdl.handle.net/1721.1/164668</link>
<description>Seeing Beyond Limits with Physics-Informed Priors
Liu, Yang
Conventional imaging systems are limited by dimensionality and visibility: standard sensors capture only two-dimensional data, while light diffuses or scatters across surfaces and through complex media. This dissertation reformulates imaging as an interplay of optical encoding and neural decoding. It models forward physical processes and iteratively refines them using deep denoisers. By embedding physics-informed priors into this optimization, it aims to surpass conventional limits in dimensionality and visibility. First, I develop Privacy Dual Imaging using an ambient light sensor. This approach tackles both dimensionality and visibility challenges when imaging with a single-point, non-imaging component on smart devices. Inspired by 1984’s “Big Brother” telescreen, I demonstrate how subtle light intensity fluctuations can reveal unseen image information; however, the goal is to highlight privacy concerns, not exploit them. It addresses two visibility limits—pixel-less and lens-less imaging—by using the screen as a spatial modulator and exploiting involuntary motion to create a virtual pinhole effect. A quantized, physics-informed prior improves reconstruction from heavily quantized sensor measurements. Second, I propose Snapshot Compressive Imaging (SCI) augmented with deep plug-and-play physics-informed priors to overcome the dimensionality limit of 2D sensors. SCI compressively encodes multiple temporal, spectral, or angular frames into a single measurement. A deep plug-and-play prior algorithm introduces high-dimensional priors learned from images and videos into the iterative reconstruction process, improving fidelity, speed, and flexibility. Experiments show notable gains in reconstruction quality and efficiency across different SCI datasets, including large-format 4K UHD scenarios.
Third, I introduce Rank-Reduced physics-informed priors, showing that large pretrained AI models—especially diffusion models—can act as general visual priors across both dimensionality and visibility challenges. A relax-then-tighten strategy handles ill-conditioning by applying truncated singular value decomposition to reduce rank deficiencies, followed by a Stable Diffusion refiner (SDEdit) plug-and-play prior that constrains reconstructions to valid image spaces. Simulations and passive non-line-of-sight imaging experiments verify the approach’s stability and effectiveness. Physics-informed priors promise to extend the boundaries of imaging, enabling us to see beyond current dimensionality and visibility limits and to unlock new applications from macro-scale to micro-scale observations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164668</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundational Abstractions for Quantum Programming</title>
<link>https://hdl.handle.net/1721.1/164666</link>
<description>Foundational Abstractions for Quantum Programming
Yuan, Charles
Bringing the promise of quantum computation into reality requires not only building a quantum computer but also correctly programming it to run a quantum algorithm. To obtain asymptotic advantage over classical algorithms for applications including simulation, search, and optimization, quantum algorithms rely on the ability of data in quantum superposition to exhibit phenomena such as interference and entanglement. In turn, an implementation of the algorithm as a program must correctly orchestrate these phenomena in the states of qubits. Otherwise, it would yield incorrect outputs or lose quantum computational advantage.&#13;
&#13;
Given a quantum algorithm, what are the challenges and costs of realizing it as a program that can run on a physical quantum computer? In this thesis, I answer this question by showing how the basic abstractions of programming upon which many quantum algorithms rely – such as data structures and control flow – can fail to work correctly or efficiently on a quantum computer. I then demonstrate how we can leverage insights from research in programming languages to re-invent the software stack – including abstractions, libraries, and compilers – to meet the demands of quantum algorithms. This approach holds out the promise of expressive and efficient tools to program a quantum computer and thereby practically realize its computational advantage.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164666</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Simple Chemical Heuristics to Model and Discover Materials</title>
<link>https://hdl.handle.net/1721.1/164662</link>
<description>Learning Simple Chemical Heuristics to Model and Discover Materials
Ma, Andrew
Computational approaches have long played an important role in the field of materials science, driving both the scientific study of materials’ fundamental properties and the design of materials for technological applications. Currently, mainstream methods in computational materials science typically rely on either first-principles calculations or deep learning models. In this thesis, we take a different direction by developing remarkably simple data-driven models for predicting fundamental properties of materials, including electronic topology, metallicity, and band gap. These models take the form of highly interpretable chemical heuristics. A key finding of this work is the surprising result that electronic topology diagnosis – often regarded as a highly complex task – can, in fact, be performed heuristically using a simple and intuitive model. We further integrate this model into a workflow for discovering new topological materials. Altogether, this work revisits the classic idea of chemical heuristics through a modern data-driven lens, shedding new light on fundamental problems in materials science.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164662</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Modeling from Visually Grounded Speech</title>
<link>https://hdl.handle.net/1721.1/164660</link>
<description>Language Modeling from Visually Grounded Speech
Lai, Cheng-I Jeff
Recent advancements in spoken language processing have significantly reduced automatic speech recognition (ASR) error rates, driven by large-scale supervised training on paired speech–text data and, more recently, self-supervised pre-training on unpaired speech and audio. These methods have facilitated robust transfer learning across diverse speech and audio tasks. However, fully leveraging multimodal inputs, particularly visual context, remains underexplored. This thesis addresses this gap by developing novel language modeling techniques directly from visually grounded speech. We first introduce the Audio-Visual Neural Syntax Learner (AV-NSL), an unsupervised parser that recovers constituency trees directly from raw speech paired with images, demonstrating how visual context effectively bootstraps grammar induction without textual supervision. Next, we investigate Audio-Visual Word Discovery for Speech Translation, using the Fisher Spanish–English corpus to train a series of speech-to-speech translation models based on pseudo-word units discovered via audio-visual grounding. This study highlights that simplistic acoustic tokens and limited training data degrade re-synthesis and translation quality, underscoring two crucial missing ingredients: richer semantic tokens and large-scale training. Guided by these insights, we present Audio-Visual Gemma (AV-Gemma), a family of multimodal foundation models that condition jointly on images and learned semantic speech tokens. At scale, AV-Gemma generates visually coherent spoken captions and transfers robustly to tasks such as video-to-speech generation and spoken visual question answering, significantly advancing multimodal spoken-language processing.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164660</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Assembly of General Objects</title>
<link>https://hdl.handle.net/1721.1/164657</link>
<description>Scalable Assembly of General Objects
Tian, Yunsheng
In this thesis, I present a scalable system towards fully automated and flexible robotic assembly that generalizes over diverse geometries and complex structures. Most real-world objects are assemblies composed of multiple parts. Assembly presents significant challenges for robots to execute long-horizon, contact-rich manipulation with both reliability and generalization. However, most manufacturing facilities today still rely heavily on manually programmed assembly lines, which require significant labor, time, and setup costs yet offer no flexibility to object variations. My proposed system synergizes global multi-step planning with local reactive learning-based control to enable generalizable and precise assembly. Such an integrated paradigm effectively leverages the best of both worlds, accomplishing results that neither planning nor learning could achieve alone. For planning, I leverage guidance from physical simulation and learned feasibility networks to efficiently search for part sequences, precise motions, and stable grasps for dual-arm robots over long horizons. For learning-based control, I train robust policies via reinforcement learning for submillimeter-level insertion across different part geometries, assembly paths, and grasp poses. I introduce and open-source the largest-scale assembly dataset to date and demonstrate my system’s generalization on thousands of simulated assemblies as well as through end-to-end real robot experiments. By integrating planning and learning, I showcase the first system to achieve complete and generalizable real-world multi-part assembly without domain knowledge or human demonstrations. Although the system plans and learns purely in simulation, it transfers zero-shot to the real world and achieves 80% successful steps.
Finally, I will share insights that further scale up robotic assembly and opportunities to extend to general manipulation, and discuss future directions to equip general-purpose robots with multi-step, precise manipulation capabilities.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164657</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Tackle Task Variations in Control - A Transportation Context</title>
<link>https://hdl.handle.net/1721.1/164653</link>
<description>Learning to Tackle Task Variations in Control - A Transportation Context
Jayawardana, Vindula Muthushan
Real-world control tasks are messy and often exhibit task variations. Practical solutions to these problems must exhibit generalization across task variations. For example, in the task of controlling traffic signals, control strategies must adapt to different intersection topologies (the variations), each with distinct dynamics. In this thesis, we consider the challenge of coping with task variations in the context of transportation problems, specifically in roadway interventions where many such variations are both common and imperative to handle. We develop machine learning techniques to address three key challenges: 1) quantify the impact of task variations in control, 2) model them to align with the real world, and 3) optimize in the presence of them. To this end, we begin with a large-scale case study of cooperative eco-driving and illustrate how explicitly modeling task variations can surface otherwise overlooked insights. Building on this, we argue for the necessity of formally incorporating task variations into problem specifications, emphasizing that task underspecification due to loosely defined task variations can severely impair decision-making. We then introduce a contextual reinforcement learning algorithm capable of leveraging the structure of task variations to generalize effectively in cooperative eco-driving with autonomous vehicles. We also present IntersectionZoo, a benchmark designed to promote the development of learning algorithms that generalize by exploiting task variation structures, thus standardizing progress in the field. Last, we explore task variation modeling through a generative modeling lens, using human driver behavior modeling as a case study. Overall, this thesis lays the groundwork for robust control methods by leveraging machine learning to tackle task variations, specifically in roadway intervention designs.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164653</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modern methods for causal inference and missing data</title>
<link>https://hdl.handle.net/1721.1/164650</link>
<description>Modern methods for causal inference and missing data
Xia, Eric
The proliferation of data-driven approaches in a wide array of settings is one of the defining characteristics of the modern era. With this rise, there has been much focus on using data to answer causal questions, e.g., whether A causes a change in B. Furthermore, aspects of data collection have given rise to datasets that are often quite messy, sometimes missing important entries. Both problems are highly relevant to practitioners in a variety of disciplines, including policy-makers looking to make critical decisions that can influence the lives of many. On the surface these problems seem quite distinct, yet the literature has highlighted deep connections between the two settings. Indeed, many methods for addressing one question can often be repurposed to address the other. Both settings are quite classical, and many approaches to addressing them remain so, but there has recently been great interest in developing techniques and algorithms that harness modern developments in statistics and machine learning. This thesis contributes to the literature by providing new methods as well as novel understandings of existing ones.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164650</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steering Vision at Scale: From the Model Weights to Training Data</title>
<link>https://hdl.handle.net/1721.1/164645</link>
<description>Steering Vision at Scale: From the Model Weights to Training Data
Materzyńska, Joanna
We study the interpretability and controllability of multimodal and generative models, with a particular focus on text–image representation models and text-to-image diffusion systems. We begin by addressing limitations in CLIP’s multimodal embeddings, specifically the entanglement between visual and textual concepts within images. We demonstrate the consequences of this entanglement in both generative and discriminative tasks, and introduce a method for disentangling visual and textual representations. We showcase the utility of these disentangled embeddings in typographic attack resistance, improved image generation, and robust out-of-domain OCR detection. Building on this foundation, we explore methods to enhance the controllability of diffusion models. First, we tackle the challenge of unwanted concept generation. We introduce a technique to remove specific visual concepts using only their names, leveraging negative prompts and guidance to suppress target content without modifying training data or requiring model retraining. This approach enhances ethical alignment and enables greater user control in generative systems. We then turn to the complementary problem: incorporating new concepts. We present a few-shot motion customization technique for video generation models, which transfers motion patterns from a small set of examples to novel subjects. This method maintains the generalization capabilities of the base model while enabling consistent, subject-agnostic animation that preserves both identity and temporal coherence. To improve the fine-grained control of visual outputs, we propose a method for continuous manipulation of image attributes. This framework introduces smooth, intuitive controls that allow for dynamic, continuous steering of generated images. Unlike prompt engineering or token-level interventions, our approach offers real-time adjustment without sacrificing output realism.
Finally, we examine whether artistic styles in diffusion models require large-scale pretraining or can be learned in a lightweight, post-training manner. To this end, we train a base model on art-free data and introduce a compact adapter method that learns stylistic concepts from a small set of exemplar artworks. Our findings suggest that artistic domains can be integrated efficiently and ethically, without reliance on web-scale scraped datasets.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164645</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Data Drives ML Models Performance</title>
<link>https://hdl.handle.net/1721.1/164640</link>
<description>How Data Drives ML Models Performance
Khaddaj, Alaa
Data has been playing an increasingly important role in the machine learning (ML) pipeline. This thesis deepens our understanding of the effect of data on model performance and reliability. First, we study how the choice of training data affects model performance. We consider a transfer learning setting and present a framework for selecting, from a large pool of data, a pretraining subset that improves model performance on downstream tasks. Our approach, however, requires training multiple target models, which becomes prohibitively expensive at large scale. To that end, we explore using smaller, cheaper proxy models to approximate large-model behavior and select the pretraining data using the cheaper model. We show the effectiveness of this approach in two dataset selection settings: language modeling and imitation learning. Second, we explore the role of data in model reliability and consider two threat models: backdoor attacks and malicious data editing. In the first threat model, an adversary injects a few doctored samples into the training set to control model predictions at inference time. We study the effect of these malicious samples on model behavior and then propose a framework for detecting and removing them from the training data. In the second threat model, an adversary leverages generative models such as diffusion models to maliciously modify personal data and generate harmful digital content. We focus on image editing and investigate how we can imperceptibly modify personal images to mitigate editing with diffusion models and raise the cost of harmful content generation. Overall, this thesis contributes to the understanding of the role of data in driving model behavior. Through these efforts, we aim to provide mechanisms for training models that (i) perform better and (ii) are more reliable when deployed in the real world.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164640</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164603</link>
<description>Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning
Gatmiry, Khashayar
At their core, our machine learning systems are trained by solving an optimization problem, where the goal is to minimize a predefined objective function by adjusting model parameters based on the data. Despite the wealth of structure and prior knowledge present in the data and feedback, our training methods remain relatively simple and independent of this structure. In spite of, or perhaps because of, this simplicity, these methods are often lacking in theoretical guarantees. To design machine learning algorithms that are less data-hungry while ensuring theoretical guarantees on both computational efficiency and output validity, it is essential to better understand and leverage the rich structure within the learning setup and the data distribution, e.g. by altering the geometry of the solution space or adjusting the objective function to induce a more effective learning procedure. This approach moves beyond classical algorithm design, which focuses primarily on handling worst-case instances. This thesis investigates the optimization landscape of central learning problems and develops geometric and analytic schemes adapted to their structure, leading to algorithms with superior computational and statistical performance. In addition, it seeks to advance our mathematical understanding of the principles underlying the success of deep learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164603</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism</title>
<link>https://hdl.handle.net/1721.1/164594</link>
<description>The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism
Wang, Yuting
I examine the causal effects of mandatory quarterly earnings guidance using a regulatory mandate in China that required a subset of listed firms to issue bundled quarterly earnings guidance from 2007 to 2018. A difference-in-differences analysis shows that when these firms are no longer required to issue such guidance, their corporate information environment deteriorates, evidenced by reduced analyst coverage, fewer site visits, and lower price timeliness, meaning that stock prices incorporate less information about current and future earnings. However, these firms increase R&amp;D and SG&amp;A spending, consistent with alleviated managerial myopia as short-term market pressure eases. These findings highlight the dual-edged nature of the mandatory quarterly earnings guidance and offer insights for both practitioners and policymakers.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164594</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols</title>
<link>https://hdl.handle.net/1721.1/164592</link>
<description>Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols
Hogan, Kyle
This thesis focuses on the question of what degree of privacy is achievable in the real world for long-running applications. We explore this question in two main settings: private advertising and anonymous communication. In doing so, we consider constraints each application may have in practice and what adversarial model is realistic for the context in which the application will be deployed. For real world applications, achieving perfect privacy — especially against a worst case adversary — can be impossible. That is, perfect privacy, while achievable in theory, may in practice require assumptions that conflict with usability, deployability, or utility requirements. This presents a challenge as privacy-preserving technologies can, necessarily, only provide privacy for the people who use them. Because of this, designing around user experience is critical, even if doing so requires compromises in the theoretical degree of privacy a system can provide or the strength of adversaries considered in its threat model. In the space of private advertising, we first propose a novel protocol, AdVeil, that eliminates leakage of user data beyond that revealed by the input/output of the ads ecosystem as a whole. We then provide a minimal modeling of the functionality of digital advertising which we use to prove that, even for systems like AdVeil with minimal leakage, the advertising metrics released at the end of the protocol are sufficient to leak information about end users to advertisers when combined with their audience targeting criteria. In the space of anonymous communication, we propose ShorTor, a new routing protocol for Tor that utilizes techniques popular with content distribution networks (CDNs) to reduce latency while maintaining Tor’s existing anonymity guarantees. We evaluate this protocol using a dataset of over 400,000 latency measurements we collected between the 1,000 most popular Tor relays.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164592</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation</title>
<link>https://hdl.handle.net/1721.1/164591</link>
<description>Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation
Balagopalan, Aparna
Automated systems driven by machine learning (ML) have made exciting progress across a spectrum of applications. Despite such progress, encoded biases and other failure modes may create barriers to the real-world utility and reliability of such systems. For example, nonrandom data missingness, biased algorithmic optimization objectives, or model presentation strategies that incorrectly impact user trust can all cause models to fail in practice. In this thesis, guided by such observations and prior work on pipeline-awareness in machine learning, we aim to operationalize reliable ML. Under this goal, we propose a framework consisting of the following three components: responsible data collection, robust algorithm development, and fair model presentation. We first conduct two case studies to advance responsible data collection. We investigate whether standard procedures for acquiring data can be repurposed when training models to mimic human judgments about norm violations. We also demonstrate patterns of delayed demographic data reporting within a longitudinal healthcare dataset and show that time-varying missingness due to such delays can distort disparity assessments. Second, we introduce two novel algorithms to improve reliability: a method that leverages representations from vision-language models to filter noisy training data, and a method to produce fair rankings that account for properties of search queries. Finally, since the presentation design of predictions impacts the trust of model consumers, we propose metrics to quantify the fairness of post-hoc explainability techniques. Thus, with this thesis, we re-evaluate measurements throughout the machine learning pipeline and contribute to the broader goal of reliable machine learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164591</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anti-phage defense as a driver of molecular innovation</title>
<link>https://hdl.handle.net/1721.1/164590</link>
<description>Anti-phage defense as a driver of molecular innovation
Doering, Christopher Ross
Bacteriophages, or phages for short, pose a near-constant threat to the bacteria they infect. Billions of years of conflict have been a catalyzing force for the creation of bacterial defense systems and corresponding phage evasion strategies. To counter phage predation, bacteria have developed a vast diversity of enzyme chemistries and molecular sensing mechanisms whose study has produced new biotechnological tools and insights into our own immune systems. In this work, I have investigated anti-phage defense mechanisms at multiple scales using a combination of genetic, biochemical, and bioinformatic approaches. First, I characterized the mechanism of action of the anti-phage defense system CmdTAC, a toxin-antitoxin-chaperone system that recognizes a viral structural protein to activate a novel mRNA ADP-ribosyltransferase, thereby halting infection. Next, I examined the diversity and distribution of anti-phage mechanisms encoded by E. coli lysogenic phages – phages capable of integrating into and lying dormant within their bacterial hosts. This analysis uncovered overlooked classes of lysogenic phages harboring novel candidate defense systems, including one newly validated system with no detectable homology to previously known mechanisms. Together, this work broadens our understanding of bacterial immune systems, expands the pool of known enzyme chemistries, and highlights areas where continued study can reveal additional mechanisms of anti-phage defense.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164590</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities</title>
<link>https://hdl.handle.net/1721.1/164589</link>
<description>Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities
Toneatti Vercelli, Gabriel
Spatial organization plays a critical role in microbial community function, influencing how cells exchange metabolites, coordinate behavior, and compete for resources. This thesis investigates the consequences of spatial structure in natural microbial systems and introduces a novel method to engineer these systems with high precision and scalability. First, we examine the colonization of chitin particles by marine bacteria, a model for particulate organic matter degradation. Using high-throughput phenotyping of natural isolates, we show that vitamin cross-feeding is essential for successful colonization of chitin particles by many auxotrophic strains. We then model two distinct vitamin cross-feeding mechanisms: lysis and secretion. Using a resource-explicit modeling approach, we leverage metabolic-flux and physiological measurements to predict the colonization success of auxotrophic cross-feeders in this spatially structured environment. Second, we introduce a new chemical method for engineering microbial cell surfaces that enables covalent attachment of molecules such as enzymes and DNA strands to the cell surface. We show that this surface functionalization procedure leads to the acquisition of new phenotypes like antibiotic resistance and programmable adhesion. Altogether, this work reinforces the importance of spatial organization for microbial community function and introduces a new technique to harness this community feature and turn it into a design principle for synthetic microbial systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164589</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants</title>
<link>https://hdl.handle.net/1721.1/164586</link>
<description>Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants
Elacqua, Juniper J.
Maintenance and repair of DNA are essential for proper cellular functioning and preventing the emergence of disease states. As cells divide, mutations accumulate in the genome, which contributes to aging phenotypes and can result in genetic diseases such as cancer. The rate at which a cell develops mutations can be accelerated through exposure to genotoxic agents that introduce lesions which, if left unrepaired, prevent accurate replication of the genome. As such, it is crucial to understand the ways in which DNA becomes damaged, how cells respond to various types of damage, and how this damage contributes to mutagenesis and the development of genetic disease. These fields of study have been greatly advanced by improvements in DNA sequencing technologies, and here we present two sequencing-based methods that aim to enable deeper study of DNA damage, repair, and mutagenesis. First, we demonstrate DENT-seq, a method that identifies single-strand breaks with single-nucleotide resolution. Single-strand breaks are the most common form of DNA damage, occurring at rates of ~10,000 per cell per day, but have to date been understudied due to the lack of an unbiased, high-resolution method for their detection. Second, we improve upon lineage sequencing, a previously reported method that uniquely measures somatic single nucleotide variants in dividing cells to achieve high specificity/sensitivity as well as the ability to temporally resolve variants and to relate sequenced genotypes to optically observed cellular phenotypes. Despite the high-quality data and unique capabilities offered by this method, it has so far been underused due to a need for complex, microfluidic-based cell collection. We demonstrate novel protocols for performing lineage sequencing that enable easy adoption of the method without the need for highly specialized equipment or expertise.
In addition, we expand the repertoire of mutations measurable with the technique to include indels and variants that arise specifically in response to a genotoxic treatment. The methods we show can be applied to reveal novel findings regarding the causes and consequences of DNA damage and mutagenesis that underlie numerous genetic diseases.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164586</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems</title>
<link>https://hdl.handle.net/1721.1/164585</link>
<description>Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems
Araei, Soroush
An “all-in-one” radio, programmable across the sub-7 GHz spectrum, offers significant hardware efficiency for 5G systems. However, addressing strong interferers in this wide and congested spectrum remains a major design challenge. N-path filters offer a promising solution for efficiently suppressing interference, thanks to their clock-controlled reconfigurability and excellent linearity against in-band and adjacent-channel blockers. While widely adopted in modern receiver architectures, these switched-capacitor circuits remain inherently vulnerable to blockers at clock harmonics, due to their hard-switching nature. These blockers, common in 5G bands, pose a key bottleneck, delaying the realization of fully integrated multi-band, multi-mode radios. This dissertation introduces fully passive topologies to address this challenge. The first design leverages simultaneous charge sharing and capacitor stacking to implement harmonic rejection filtering. It operates entirely without active circuitry and exhibits exceptionally low loss. A second-generation technique, termed “harmonic reset switching”, builds on this approach by rejecting harmonic blockers directly at the driving point of the N-path filter, achieving superior performance with reduced circuit complexity. As a result, existing reconfigurable receiver topologies can be seamlessly transformed into harmonic blocker–resilient architectures. For example, a taped-out mixer-first receiver adopting this technique achieves a 100× improvement in third-harmonic blocker tolerance compared to state-of-the-art broadband receivers. This dissertation also proposes a reconfigurable receiver for IoT-class radios that is tolerant to both close-in and far-out blockers. A scalable clock bootstrapping technique is introduced to enhance linearity while maintaining both power and cost efficiency. All designs are validated through prototypes fabricated in advanced 22-nm and 45-nm silicon-on-insulator (SOI) technologies. 
By addressing this long-standing challenge, this work paves the way for fully reconfigurable, interference-resilient radios for 5G and beyond.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164585</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video as the Language of Embodied Intelligence</title>
<link>https://hdl.handle.net/1721.1/164584</link>
<description>Video as the Language of Embodied Intelligence
Chen, Boyuan
Achieving general-purpose embodied intelligence remains a central challenge in artificial intelligence. While recent efforts have extended Large Language Models (LLMs) to robotics by incorporating additional modalities, these adaptations face critical limitations in perception, grounding, and control. For example, spatial reasoning, a simple yet indispensable capability for robots, reveals one such shortcoming clearly: multimodal LLMs often fail at even basic spatial perception tasks like estimating distances. This thesis begins by examining these failures through SpatialVLM, a system that augments vision-language models with 3D spatial reasoning. Although SpatialVLM proves more effective at spatial estimation, the work reveals a deeper issue: the fundamental expressive limitations of language-only outputs in capturing sensorimotor dynamics. Based on these findings, the thesis advocates for a ground-up methodology for robot foundation models, starting with identifying an appropriate “language” for embodied AI, then architecting models and training regimes accordingly. We investigate video as the foundational language, integrated with model-based planning for decision-making. This new paradigm is instantiated through two core contributions. The first is Diffusion Forcing, a hybrid modeling framework that combines causal next-token prediction with full-sequence diffusion. This approach supports stable, coherent rollouts far beyond the training horizon and allows guided generation for decision-making tasks, bridging predictive modeling and planning. Building on Diffusion Forcing, we introduce the Diffusion Forcing Transformer (DFoT), a natural architectural extension designed for flexible video generation conditioned on variable-length histories. To further support long-horizon world modeling, we propose History Guidance, a set of techniques that enhance sample fidelity, temporal consistency, and compositional generalization.
Together, these methods enable robust modeling of visual dynamics across extended timeframes. Finally, we present a preliminary yet promising video foundation model for zero-shot robot motion planning, highlighting the potential of video as the foundational language of embodied intelligence.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164584</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance</title>
<link>https://hdl.handle.net/1721.1/164583</link>
<description>Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance
Tariq, Ifrah
Resistance to immune checkpoint inhibitors (ICIs) remains a critical barrier to effective cancer therapy, driven by complex, multi-scale interactions that current biomarkers often fail to capture. This dissertation introduces the Biologically Disentangled Variational Autoencoder (BDVAE)—an interpretable deep learning framework designed to uncover mechanistic drivers of ICI resistance through multi-omic data integration. Using RNA-seq and whole-exome sequencing data from 366 patients across melanoma, renal cell, urothelial, and gastric cancers, BDVAE learns low-dimensional latent representations that are both predictive of response and biologically meaningful. The model reveals distinct latent dimensions aligned with immune regulation, tumor-intrinsic signaling, metabolism, and neuroimmune interactions. SHAP-based interpretation and pathway analysis highlight key resistance-associated programs, including immunosuppressive cytokine signaling, metabolic signaling, and neuroactive pathways such as calcium and cAMP signaling. Unsupervised clustering identifies three tumor subtypes—responder-dominant, non-responder-dominant, and an intermediate group—suggesting plastic or transitional immune states. Survival analyses confirm the clinical relevance of these clusters and expose heterogeneity within standard RECIST categories. Overall, this work presents a novel, interpretable framework for modeling ICI response, offering insights into resistance mechanisms and actionable paths for biomarker discovery, patient stratification, and therapeutic innovation in precision immuno-oncology.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164583</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric interpretations of structural demand for the analysis and reduction of design complexity</title>
<link>https://hdl.handle.net/1721.1/164582</link>
<description>Geometric interpretations of structural demand for the analysis and reduction of design complexity
Lee, Keith Janghyun
This dissertation presents a computational framework to effectively interpret the distribution of structural demand that emerges from the design of large-scale structural systems, and develops methods for its quantification and manipulation. Structural demand is the required strength and geometry of individual building components that emerges from design as a result of global geometry, topology, and loading. Existing metrics of structural performance fail to consider how variations in demand at the component level can lead to designs that are theoretically efficient but difficult to construct. This has led to a rejection of low-carbon, high-performance design solutions in practice, or the need for extensive post-hoc rationalization, both under the presumption of untenable design complexity for conventional building practices. This dissertation argues that an explicit consideration of the distribution of induced structural demand can bridge this gap between design intent and construction feasibility.&#13;
&#13;
To achieve this, structural demand is interpreted as sets of geometric objects in n-dimensional feature spaces, where each dimension represents an independent component of demand, such as area, length, or stiffness. By directly visualizing the spatial distribution of demand, designers are presented with a richer context of non-physical structural design information, and can evaluate how decisions in structural form affect this distribution. Further, spatial interpretations of information allow for spatial metrics of similarity and variation to be defined, from which quantitative measures of design complexity are derived that account for the shape and distribution of demand. This framework, named Demand Space Analysis, is explored in depth and applied to a range of structural scales, from the demand of truss elements and their connections, to the relationship between demand and fixed sets of capacity. Advancements in structural optimization are also presented, enabling more efficient and direct minimization of modern structural performance metrics, from which the relationship between design performance and demand complexity can be explored. Through case studies in each chapter, this dissertation demonstrates how geometric analysis of structural demand information can inform the designer of the implications of decisions on the perceived complexity of design, and provides tools for its quantification and reduction.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164582</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)</title>
<link>https://hdl.handle.net/1721.1/164580</link>
<description>The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)
Springstubb, Phoebe
Our view of antiquity is not objective. From the eighteenth century on, the same actors and institutions involved in colonizing the Arctic shaped understandings of its deep past. Commercial whalers erected outposts on the Arctic Ocean’s edges; miners stripped tundra; trading companies raised forts. The demands of these projects complicated the Western imperial fiction of an Arctic without a past. Grappling with Arctic terrain, foreigners were confronted by a landscape inhabited not only by people and animals but by time and temporal imaginations that long preceded European colonization. They encountered contemporary Indigenous settlements coexisting with ancestral houses, fossil animals, the ruins of earlier colonial ventures, and ancient routes of exchange. This dissertation, centered on the Bering Sea and its adjacent geographies of eastern Siberia and Arctic North America, tells the story of how imperial upheaval and the rooting of colonial projects in the ground sparked a deliberate historiographic project to write the Arctic’s deep past. At the heart of this project was a conflict of different cultural views of time. Who had the right to narrate history in these northernmost borderlands? In episodes spanning two centuries, from the Russian empire’s claim to the Bering Sea to the rise of modern decolonial movements, this dissertation traces the central role of diverse Native architectures and technologies. Iñupiaq houses built from great whale skeletons, Unangax watercraft hewn from circulating driftwood, and Chukchi ice cellars carved into permafrost were both prisms for temporal explanations and sites driving change. Russian colonial administrators, British geologists, US ethnographers, Orthodox priests, and Soviet engineers co-opted them to the lineal, geological, eschatological, and paleolithic time that scaffolded imperial projects. 
Simultaneously, these material practices were vital sites for reinvention and identity, where Native nations built futures out of rupture. Illuminating how the ecological and epistemic limits to empire-building spurred new theories of Arctic time, this project shows history-making to be a crucial tool different states adopted to justify and naturalize their possessions of Native lands. At stake was not static historical truth but how politically situated temporalities structured their present-day actions. The ethical dimensions of deep time, imagined from the Bering Strait’s modern lands and seas, empowered empire’s practical work. How the past was conceived in different intellectual traditions informed whether animals and plants were exploitable resources or ancestors giving their bodies to architecture. This project contends that how people understood themselves as being in time was a decisive fulcrum ordering collective beliefs in what was owed to a larger, nonhuman world. Taking time as an analytical lens, this dissertation identifies repeated efforts to cleave the Arctic’s human history from nature’s past. Used to justify a wide range of colonial hierarchies and violence in the long nineteenth century, it underlies a contemporary bias toward seeing the Arctic as a region of deep naturalism. Viewed as a place where an “extreme” climate dominates manifold other historicities, the past so circumscribed continues to shape future possibilities.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164580</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order</title>
<link>https://hdl.handle.net/1721.1/164578</link>
<description>Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order
Athreya, Advait
The three-dimensional organization of the genome within the nucleus plays a central role in determining gene regulation and establishing cellular identity, but the mechanisms by which local molecular interactions give rise to global chromatin architecture remain an active area of study. Interactions between nucleosomes—modulated by histone tail post-translational modifications, histone sequence variants, and the DNA sequence itself—are thought to be a major driver of this emergent structure. In this thesis, I address the question of how these intrinsic physicochemical properties of nucleosomes drive the formation of large-scale structures such as chromatin compartments. I develop a theoretical framework based on Flory-Huggins solution theory to derive pairwise internucleosome contact energies from the results of condense-seq, a novel experimental technique that measures the phase separation likelihood of native nucleosomes. I then use these derived energies to parameterize coarse-grained molecular dynamics simulations of chromatin at various resolutions, ranging from 25kb segments to simulate an entire chromosome, down to individual nucleosomes to simulate up to 10Mb genomic regions. These simulations demonstrate that the intrinsic nucleosome properties alone can capture a significant degree of A/B compartment formation observed in Hi-C experiments, despite the deliberate exclusion of all other factors such as loop extrusion and transcription-factor-mediated phenomena. This finding establishes that local nucleosome properties play a fundamental role in genome organization. To capture more detailed chromatin physics, I develop an extended chromatin force-field that incorporates anisotropic nucleosome stacking interactions and linker DNA properties using a novel approach for simulating reversible bond formation in molecular dynamics.
This model reveals how nucleosome stacking strength, linker DNA geometry, and torsional stress collectively influence higher-order structures. Early results show that the linker-length-dependent DNA torsion contributes to nematic ordering of chromatin, consistent with experimental studies. Future development of this model will enable probing of discrete domain formation observed in imaging studies. Finally, I address a critical consideration for researchers in the chromatin organization field when analyzing Hi-C results. I compare two software tools — cooltools and dcHiC — highlighting the importance of careful parameter selection and analytical choices in designing workflows to ensure reproducible research. Taken together, this work establishes a quantitative, bottom-up modeling framework that directly links the local physicochemical properties of nucleosomes to the global principles governing three-dimensional genome organization. It provides a complementary approach to more data-driven top-down models that have made significant inroads but are challenging to interpret mechanistically. With further development, the work presented in this thesis will contribute towards predicting the structural consequences of specific epigenetic modifications and move us closer to understanding the molecular grammar of chromatin and its role in cellular function and disease.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164578</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Subannual Variability in the Abyssal Ocean</title>
<link>https://hdl.handle.net/1721.1/164577</link>
<description>On Subannual Variability in the Abyssal Ocean
Chen, Si Yuan
The abyssal ocean is a critical yet understudied component of the climate system and is of growing economic interest. This thesis combines field observations and numerical modeling to advance our understanding of subannual variability in the abyssal ocean and its broader implications.&#13;
&#13;
First, hydrographic measurements from the Clarion-Clipperton Zone of the tropical Northeastern Pacific are used to characterize the structure and variability of the bottom mixed layer (BML) in a region targeted for deep-sea mining. The observations reveal a spatially and temporally variable BML, with a mean thickness of ~250 m, that is influenced by interactions with mesoscale eddies and abyssal thermal fronts. A simplified model of sediment transport suggests that such variations in BML structure could significantly influence the dispersal of sediments resuspended by seabed mining activities.&#13;
&#13;
Second, idealized model experiments are conducted to explore the genesis of benthic storms – episodes of strong near-bottom flows and sediment entrainment – underneath an unstable, surface-intensified jet resembling the Gulf Stream east of Cape Hatteras. In these experiments, the baroclinic instability of the jet gives rise to deep cyclonic and anticyclonic eddies through eddy barotropization and to high levels of eddy kinetic energy at abyssal depths through the convergence of vertical eddy pressure fluxes. The near-bottom currents are comparable in magnitude to those observed during benthic storms, with vertical shears strong enough to produce BMLs up to O(100) m thick. Deep cyclonic eddies transport particles from near the bottom over the entire BML and could contribute to benthic nepheloid layers. The results suggest that the abyssal response to the intrinsic instability of surface-intensified currents could contribute significantly to subannual variability near the seafloor.&#13;
&#13;
Third, a model simulation of western North Atlantic circulation is performed to study the deep cyclones (DCs) observed beneath Gulf Stream meander troughs. The characteristics of the simulated DCs compare well with field observations. The negative pressure tendency during cyclogenesis arises from a small imbalance between the sea surface depression and the vertically-integrated increase in seawater density. Vortex stretching is the primary source of cyclonic vorticity, while vortex tilting is a non-negligible sink. The deep pressure tendency, vorticity fluxes, and ageostrophic flows are diagnosed, and their similarities and differences with mid-latitude synoptic cyclones in the atmosphere are discussed. Near-bottom currents in DCs dominate the basin-scale bottom energy dissipation and transport fluid over ≥1000 km horizontally and O(100) m vertically within 3–4 months, suggesting that they provide an efficient mechanism for tracer and material transport in the abyssal interior.&#13;
&#13;
Collectively, this thesis highlights the importance of transient, mesoscale processes in contributing to subannual variability in the abyssal ocean, particularly near the seafloor. The findings have broader relevance for monitoring the environmental impacts of human activities, including deep-sea mining and carbon sequestration. While further questions remain for future investigation, this work underscores the need for sustained in-situ observations in the abyssal ocean and calls for the implementation of high vertical resolution in numerical ocean circulation models.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164577</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Democratizing High-Performance DSL Development with the BuildIt Framework</title>
<link>https://hdl.handle.net/1721.1/164576</link>
<description>Democratizing High-Performance DSL Development with the BuildIt Framework
Brahmakshatriya, Ajay
Modern high-performance software from a variety of domains relies on hand-written and hand-optimized libraries to obtain the best performance. Besides general fine-grained operators that can be composed to write entire applications, these libraries also provide coarser-grained fused and hand-optimized operators that are much faster because they are optimized for a specific sequence of operations. However, as application needs keep growing, library writers are unable to keep up and must trade off either performance or generality. Domain-specific languages (DSLs) can break this tradeoff by automatically generating the best implementation for any arbitrary sequence of operations specified by the end user. However, DSL compilers pose a bigger challenge: they require substantial compiler knowledge to implement parsers, IRs, analyses and transformations, and code generation, which is outside the scope of a typical domain expert. To make compiler technology and the benefits of code generation more accessible to domain experts, I propose the use of multi-stage programming to let developers write library-like code that can nevertheless be combined to generate the most efficient implementation for any whole program. In this thesis, I discuss the design of different multi-stage programming systems and their benefits and drawbacks. Next, I propose Re-Execution Based Multi-Staging (REMS), which addresses a critical flaw in many imperative multi-staging systems: the side-effect leak problem. I introduce BuildIt, an implementation of REMS in C++, one of the most popular languages for writing high-performance applications, realized in a type-based, lightweight way without changing the compiler. I describe the internals of BuildIt and how it implements the key features of REMS. Furthermore, I describe a set of extensions implemented on top of BuildIt that facilitate the development of high-performance DSLs with ease.
I show the application of BuildIt to create three DSLs (EasyGraphit, NetBlocks, and BREeze) that target graph analytics, ad-hoc network protocol generation, and regex matching, respectively. All these case studies show a 10-100x reduction in the effort required to implement these DSLs, which perform on par with or better than state-of-the-art compiler frameworks while targeting diverse architectures like CPUs and GPUs. Finally, I introduce D2X, a system designed to add extensible and contextual debugging support to DSL implementations without requiring any changes to off-the-shelf debuggers or complex debugging formats. I then show how applying D2X to the BuildIt system greatly improves the debugging experience for all DSLs written with BuildIt.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164576</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications</title>
<link>https://hdl.handle.net/1721.1/164575</link>
<description>Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications
Buzzell, Drew E.
Lithium titanate, Li₄Ti₅O₁₂ (LTO4), is a promising anode material for solid-state battery (SSB) applications due to its zero-strain behavior during cycling, excellent chemical stability, and cyclability. As a thin film, its applications expand to integrated circuits, sensors, flexible batteries, IoT devices, and memristors. Across these, precise control of mixed Li⁺ ionic–electronic transport is vital. While dopants have been shown to improve electron conduction and Li⁺ diffusion in LTO4 powders, thin-film studies remain limited. To bridge this gap, we investigate solid solution dopants (Nb⁵⁺, V⁵⁺, Mg²⁺, Cu²⁺) and their effects on LTO4 thin-film kinetics and performance in batteries and memristors. Films doped with Mg, Cu, Nb, and V at a 0.2 M dopant concentration were deposited on Nb-doped SrTiO₃ substrates. Cyclic voltammetry and impedance spectroscopy show that Mg, Nb, and V improve kinetic metrics, while Cu reduces diffusivity but boosts electronic conductivity. Through galvanostatic cycling-based capacity, rate capability, and stability measurements, we found that while all dopants displayed enhanced rate performance, the capacity improved only with Mg, Nb, and V. Furthermore, the Mg-doped film was found to have an unstable capacity, leaving the Nb- and V-doped thin films as the best overall performing battery anodes. For memristors, current–voltage cycling measurements revealed that devices with low concentrations (0.05 M) of Cu and Nb dopants presented the largest improvements in cycle-to-cycle stability, switching ON-voltages, and ON-OFF current ratios, along with the lowest loss in peak current with increasing scan rate. With increasing dopant concentrations, however, devices saw relative drops in performance. In summary, the inclusion of dopants in LTO4 at the right concentration level leads to improvements in both battery and memristor performance, allowing for one-material, multi-functional systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164575</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of CH–π Interactions in Protein-Carbohydrate Binding</title>
<link>https://hdl.handle.net/1721.1/164573</link>
<description>The Role of CH–π Interactions in Protein-Carbohydrate Binding
Keys, Allison M.
Protein-carbohydrate binding is essential for biological processes, including cellular recognition and immune signaling. Binding is driven by several types of non-covalent interactions: hydrogen bonding, metal ion coordination, and the less well-understood CH–π interactions. CH–π interactions are pervasive in protein-carbohydrate binding sites and have emerged as critical drivers of protein–carbohydrate recognition; however, the energetics of CH–π stacking interactions, their orientational landscapes, and their interplay with other non-covalent interactions have been unclear. &#13;
In this thesis, I identified carbohydrate-aromatic CH–π stacking interactions from crystallographic structures in the Protein Data Bank. I performed quantum mechanical calculations to quantify interaction energies and found that CH–π stacking interactions can be more favorable than hydrogen bonds. Using atomistic simulations, I also demonstrated that CH–π stacking interactions are necessary for human galectin-3 binding to lactose. To assess the orientational landscape of CH–π stacking interactions, I evaluated the orientations of CH–π stacking interactions formed by β-D-galactose and found that numerous orientations are highly favorable. I then identified carbon atom distances that define an orientational landscape for these interactions. To assess the interplay between non-covalent interactions in protein-carbohydrate binding sites, I used CH–π distance features to bias metadynamics simulations of a curated set of protein–β-D-galactoside complexes. From these simulations, I found that while bound carbohydrates sample many CH–π stacking orientations, the hydrogen bonds in the protein binding site drive the optimal orientation of each ligand. Longer carbohydrate ligands with more hydrogen bonding constraints have more specific orientational dependence, while ligands in binding sites with a reduced number of hydrogen bonds occupy a broader range of orientations. Unlike hydrogen bonds, CH–π stacking interactions confer orientational flexibility: enzymes can exploit multiple CH–π stacking interactions to facilitate the translocation of polysaccharide substrates. Extending this analysis to other carbohydrates, I showed that carbohydrate stereochemistry drives the orientational preferences of CH–π stacking interactions; however, there is also a tradeoff between the presence of hydrogen bonds to charged amino acids and the CH–π interaction strength for each carbohydrate. 
Overall, this thesis demonstrates that CH–π interactions are favorable and confer high orientational flexibility and that hydrogen bonds act in concert with CH–π interactions to stabilize protein-carbohydrate binding. Tuning the number and positions of these interactions through protein engineering should alter protein selectivity and ligand movement in protein binding sites.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164573</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Networking using Waveguide Quantum Electrodynamics</title>
<link>https://hdl.handle.net/1721.1/164572</link>
<description>Quantum Networking using Waveguide Quantum Electrodynamics
Almanakly, Aziza
The architectural principle of modularity enables the construction of complex systems from simpler components, each responsible for a particular function. The quantum computer is an intricate system comprising fragile, error-prone parts known as qubits. Entanglement distribution across a network of non-local processing modules facilitates robust and extensible quantum computation. In modular quantum architectures, photons are natural quantum information carriers which propagate through interconnects between processing nodes. In this thesis, we engineer a quantum interconnect between superconducting modules underpinned by the physics of waveguide Quantum Electrodynamics (wQED). First, we realize a multi-qubit module that exploits quantum interference to emit microwave photons into a waveguide with a specified propagation direction. Next, we construct the quantum interconnect by coupling two modules to a common waveguide and demonstrate directional (chiral) photon emission and absorption. Finally, using this chiral quantum interconnect, we generate remote entanglement, establishing a key resource for distributed quantum computation in an all-to-all network architecture.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164572</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Macro-Finance</title>
<link>https://hdl.handle.net/1721.1/164570</link>
<description>Essays in Macro-Finance
Batista, Quentin
In Chapter 1 (joint with J.R. Scott), we revisit the high-frequency and narrative approaches to estimating the effects of monetary policy shocks. We find that state-of-the-art estimates using both approaches are biased: high-frequency estimates due to nonlinear predictability and narrative estimates due to regularization. To correct for the bias in these approaches, we propose a new estimation procedure called LP-DML that combines ideas from double/debiased machine learning with the local projections framework. We find that LP-DML results in significantly smaller estimated effects of monetary policy on macroeconomic outcomes. In Chapter 2 (joint with Taisuke Nakata and Takeki Sunakawa), we study the following question: how can a central bank credibly implement a “lower-for-longer” strategy? To answer this question, we analyze a series of optimal sustainable policy problems, indexed by the duration of reputational loss, in a sticky-price model with an effective lower bound (ELB) constraint on nominal interest rates. We find that, even when the central bank lacks commitment, it can still credibly keep the policy rate at the ELB for an extended period (though not as extended as under the optimal commitment policy) and meaningfully mitigate the adverse effects of the ELB constraint on economic activity. In Chapter 3, I examine the impact of central bank real estate purchases on financial markets, focusing on the Bank of Japan’s (BoJ) intervention in the Real Estate Investment Trust (REIT) market. Using a regression discontinuity design that exploits a discontinuity in the BoJ’s policy rule, I find that a typical intervention, amounting to about 0.014% of market capitalization, leads to an increase of 0.1% to 0.2% in REIT prices in the hours following the intervention. However, at longer horizons, the interventions do not have a significant effect on REIT prices.
These findings suggest that the BoJ did not achieve the program’s intended objective of significantly reducing the risk premium on real estate assets.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164570</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward an Integrative Study of Human-AI Interaction</title>
<link>https://hdl.handle.net/1721.1/164569</link>
<description>Toward an Integrative Study of Human-AI Interaction
Alsobay, Mohammed
As artificial intelligence (AI) systems are increasingly embedded in the workflows of individuals and groups, designers and researchers of human-AI interaction (HAI) navigate a vast design space of possible configurations, making decisions that span algorithmic parameters, interface choice, and interaction protocols. This thesis develops an integrative approach that examines how design factors combine and interact to determine the outcomes of human-AI collaboration. &#13;
&#13;
Chapter 1 synthesizes prior HAI research into a coherent design space framework encompassing algorithms, interfaces, users, and task settings, motivating a research program for systematic exploration of interdependencies between these factors. Chapters 2 and 3 turn to group-AI interaction through large-scale behavioral experiments. Chapter 2 investigates how social information (both direct conversation and peer behavior indicators) affects individual reliance on algorithmic decision support. The study reveals that while social information modulates the effects of performance feedback and model explanations on reliance, it does not improve predictive accuracy, illuminating critical tensions between social mechanisms and system design. Chapter 3 examines large language models as facilitators of group deliberation in hidden profile tasks. While LLM facilitation increased information sharing volume, density, and breadth, it did not improve decision quality, highlighting fundamental challenges in group-AI system design beyond information aggregation.&#13;
&#13;
Chapter 4 advances an integrative approach to HAI research, emphasizing shared design spaces, systematic exploration strategies, and predictive models that generalize across contexts. The chapter provides methodological guidance and a tractable roadmap for advancing this integrative research agenda, laying the foundation for a more context-aware science of human-AI collaboration.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164569</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales</title>
<link>https://hdl.handle.net/1721.1/164568</link>
<description>Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales
Feickert, Kiley
Reducing embodied carbon (EC) in structural systems (the most significant contributor to EC in a building) is urgent to address the simultaneous need to reduce global warming and increase urban density. Much of the policy and research to date to reduce EC has focused on material-scale interventions or substitutions. However, EC depends on two factors: 1) the carbon intensity of the processes used to manufacture construction materials, and 2) the volume of raw materials required. Architects have significant agency to reduce the volume of structural materials in a building (and the resulting emissions) since the required quantity depends on design decisions architects make, including column spacing, structural typology, massing, etc. To date, most methods used to estimate EC during early-stage design do not: 1) integrate with architects’ existing design workflows, 2) evaluate multiple material systems simultaneously, and/or 3) include structural analysis to estimate material quantities. This functionality is critical so that designers can understand which decisions EC is sensitive to and evaluate design and EC tradeoffs before significant carbon is locked in.&#13;
&#13;
To address this problem, this dissertation presents a method for transparent estimation of structural material quantities, intended to inform architectural design, policy, and other emerging EC standards. This method is then used to analyze, at the building scale, the effectiveness of emerging U.S. EC policies that focus on different scales of intervention. These policies are evaluated in isolation and in combination with strategic design levers that take advantage of structural mechanics to reduce material quantities for various building configurations and material systems. It finds that the most prominent policy approach, “Buy Clean” materials, only reduces EC by ~9% and ~16% for steel and concrete systems, respectively, compared to strategic design choices that have the potential to yield savings of up to ~79%. This dissertation also identifies building massing as a key lever in the EC outcomes of structural systems and proposes a method to quantify the impact of massing using automated structural design and analysis. It finds that in some situations, cantilevered massing typologies can be materialized with no carbon penalty if efficient configurations are used; inefficient configurations, however, can incur a significant carbon penalty (2.4x) compared to normative massing. The presented results highlight the potential of design to reduce demand-side EC across scales.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164568</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning</title>
<link>https://hdl.handle.net/1721.1/164567</link>
<description>Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning
Fang, Xiaolin
Advancing robotic manipulation to achieve generalization across diverse goals, environments, and embodiments is a critical challenge in robotics research. While the availability of data and large-scale training has brought exciting progress in robotic manipulation, current methods often struggle with generalizing to unseen, unstructured environments and solving long-horizon tasks. In this thesis, I present my work in robot learning and planning that enables multi-step manipulation in partially observable environments, working towards general-purpose embodied agents. Specifically, I discuss my work on 1) constructing a modular framework that combines affordance estimation from learned perception models with task-and-motion planning (TAMP) for object rearrangement in unstructured scenes, 2) learning generative diffusion models of robot skills, which can be composed to solve unseen combinations of environmental constraints through inference-time optimization, and 3) leveraging large vision-language models (VLMs) to build task-oriented visual abstractions, allowing skills to generalize across different environments with only 5 to 10 demonstrations. Together, these approaches contribute to the generality and scalability of embodied agents towards solving real-world manipulation in unstructured environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164567</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards High-Dimensional Generalization in Neural Networks</title>
<link>https://hdl.handle.net/1721.1/164566</link>
<description>Towards High-Dimensional Generalization in Neural Networks
Boopathy, Akhilan
Neural networks excel in a wide range of applications due to their ability to generalize beyond training data. However, their performance degrades on high-dimensional tasks without large-scale data, a challenge known as the curse of dimensionality. This thesis addresses this limitation by pursuing three key objectives aimed at understanding and improving neural network generalization. 1. We investigate the scaling laws underlying generalization in neural networks, including double descent, a phenomenon in which, as a model’s capacity or training data is increased, the test error temporarily increases at a certain point before continuing to decrease. In particular, we have two goals: 1) a better understanding of when double descent can and cannot be empirically observed and 2) a better understanding of scaling laws with respect to training time. 2. Inductive bias refers to the set of assumptions a learning algorithm makes to predict outputs on inputs it has not encountered. We propose quantifying the amount of inductive bias required for a model to generalize well with a fixed amount of training data. By developing methods to measure inductive bias, we can assess how much information model designers need to incorporate into neural networks to improve their generalizability. This quantification can guide the design of harder tasks that better test a model’s generalization. 3. Finally, we develop new methods to enhance neural network generalization, particularly focusing on reducing the exponential number of training samples required for high-dimensional tasks. This involves creating algorithms and architectures that can learn effectively from limited data by incorporating stronger inductive biases. In particular, we focus on two such biases: 1) learning features of the training loss landscape correlated with generalization and 2) using modular neural network architectures.
We expect that these techniques can improve generalization, particularly in high-dimensional tasks. Together, these contributions aim to deepen our theoretical understanding and develop practical tools for enabling neural networks to generalize effectively from limited data.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164566</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Control: Art, Technology, and the Politics of Distance (1966-1972)</title>
<link>https://hdl.handle.net/1721.1/164565</link>
<description>Remote Control: Art, Technology, and the Politics of Distance (1966-1972)
Wexelblatt, Nina Rrose
Platforms carrying dancers across a stage, doors sliding open as if by magic, and simultaneous Happenings in Berlin and Buenos Aires: remote control promised thrills as postwar artists experimented with technologies of distance. Focused on the half-decade between 1966 and 1972, this thesis intervenes in the history of art and technology to argue that a desire to activate the supposedly empty space between artist, art object, and audience effected a new fixation on the nature of that distanced interval, leading artists to incorporate actual remote control technologies into their work. This impulse grew from an unorthodox reading of the work of modernist painters, particularly Jackson Pollock. Where a generation of critics had canonized “presentness” and medium specificity, a younger cohort read the work differently, finding in it permission to embrace remoteness, intermedia experimentation, and political messaging. &#13;
&#13;
Artists including Robert Rauschenberg, Allan Kaprow, Marta Minujín, Wolf Vostell, and Carolee Schneemann, among others, undertook radical experiments with remote systems, often in collaboration with engineers. Theirs was not a technocratically neutral position; this thesis demonstrates that these artists consciously cast the “remoteness” enabled by new technologies as a charged concept, just as controlled distance emerged to define military and industrial relations on domestic, urban, and geopolitical scales. Remote control enabled artists to incorporate, not reject, the expanding frames of reference taking place outside of the sanctioned spaces of the art studio or gallery, from automation to satellite communications to warfare. Artists’ uses of remote technologies intentionally surfaced questions about critical power relations, tying the stakes of their work to debates about the future of U.S. social and economic control and development. In doing so, it also crystallized a newly diffuse, participatory artistic subject: the controller.&#13;
&#13;
The introduction theorizes “remote control” in historical and historiographic context. A second chapter follows Automation House (1970-1972), a Manhattan art space that combined labor mediation and media art to experiment with the American postindustrial labor economy to come. A third chapter centers on Three Country Happening (1966), which took place in New York, Buenos Aires, and Berlin, supposedly mediated by satellite—foiled by the uneven development of the Cold War-era satellite system itself. A fourth chapter delves into Snows (1967), a multimedia performance in protest of the war in Vietnam, which incorporated audience-controlled feedback sensors. A concluding discussion traces the ongoing nature of remote control as it implicates artists and audiences alike in a network of shared responsibility.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164565</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Behavioral Economics and Sophisticated Procrastination</title>
<link>https://hdl.handle.net/1721.1/164561</link>
<description>Essays on Behavioral Economics and Sophisticated Procrastination
Chen, Xi
Procrastination is a widespread yet complex behavior that resists simple explanation. This dissertation integrates theoretical modeling with experimental evidence to examine procrastination through the lens of sophisticated decision-making. It reframes procrastination not merely as a deviation from rationality, but as a behavior shaped by strategic trade-offs, self-awareness, and individual heterogeneity. The first essay develops a theoretical model of Perfectionistic Procrastination, proposing that individuals with high internal standards may delay tasks not as a simple lapse in self-control, but as a strategic response to the anticipated costs of sustained effort. In this framework, deadlines act as external constraints that help perfectionists limit open-ended striving and bring tasks to completion. An accompanying experiment tests the model’s prediction and finds that perfectionists are more likely to prefer deadlines. These results suggest that, in some cases, procrastination may reflect a structured strategy rather than a purely irrational failure of self-control. The second essay explores the phenomenon of Sophisticated Procrastination, challenging traditional models that attribute procrastination to naïveté. Instead, it proposes that even individuals who are aware of their tendency to delay may struggle to act on that awareness. Two experimental studies using a menu-choice framework examine how people choose task timings. In Study 1, participants preferred earlier deadlines when flexibility was available but shifted toward later options when required to commit, revealing a gap between intention and action. Study 2 identified diverse patterns of deadline preferences: while many participants actively avoided the latest possible deadline, their hesitation to commit to any specific deadline suggests a deeper tension rooted in uncertainty or discomfort with commitment. 
These findings provide early empirical support for Sophisticated Procrastination, indicating that self-awareness alone may not be sufficient to overcome procrastination. The third essay introduces the idea of Prosocial Procrastination, describing the tendency to delay tasks that benefit others, such as charitable activities, more than those with self-interested outcomes. Using two distinct experimental designs, one based on conjoint analysis and the other on single-attribute choice, the studies show that individuals are more likely to prefer longer deadlines when working for a charity than when working for themselves. These findings offer suggestive evidence for Prosocial Procrastination and contribute to the growing literature on the intersection of social preferences and time preferences.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164561</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Systems for Unsupervised Time Series Anomaly Detection</title>
<link>https://hdl.handle.net/1721.1/164554</link>
<description>Machine Learning Systems for Unsupervised Time Series Anomaly Detection
Alnegheimish, Sarah
Modern assets – from launched satellites to electric vehicles – output dense, multivariate time series data that must be monitored for deviations from “normal” behavior. This monitoring task is referred to as time series anomaly detection. The current state of the industry still depends on fixed or heuristic thresholds that often drown operators in false alarms, and can miss the subtle, context-dependent faults that matter most. This thesis addresses unsupervised time series anomaly detection as an end-to-end problem, asking how we can learn, evaluate, and deploy models that judiciously flag anomalies while remaining intuitive to the end user.&#13;
This thesis provides contributions in the form of both algorithms and systems. First, it introduces three models that enlarge the design space of unsupervised time series anomaly detection: TadGAN, which leverages adversarial reconstruction; AER, which unifies predictive and reconstructive objectives in a single hybrid score; and MixedLSTM, which explicitly incorporates interdependencies to improve anomaly detection in multivariate time series. We propose two range-based evaluation metrics that quantify detection quality over temporal intervals. Second, it presents our system Orion, which abstracts anomaly detection pipelines as directed acyclic graphs of reusable primitives, providing user-friendly APIs and enabling interactive visual inspection. Building on this infrastructure, OrionBench performs periodic, fully reproducible benchmarks, producing leaderboards that align research innovations with the needs of end users. Third, the thesis explores a new paradigm – foundation models for unsupervised time series anomaly detection – by formulating SigLLM, which employs large language models and time series foundation models for zero-shot anomaly detection via prompting and forecasting. This paradigm indicates a promising path to developing scalable models for anomaly detection. Finally, beyond evaluating our systems on publicly available datasets, we provide extensive experiments on two industrial case studies that demonstrate improved detection accuracy and practical usability of our system.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164554</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems</title>
<link>https://hdl.handle.net/1721.1/164553</link>
<description>Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems
Chakraborty, Uttara
Reliability and robustness are key concerns in the development of novel electronic and photonic materials, devices, and systems. This thesis presents statistical and machine learning techniques for reliability analysis of heterogeneously integrated systems, extraction of variations from photonic test structure measurements, smart selection of test configurations under time and resource constraints, and robust design of photonic components. To estimate reliability model parameters from lifetime datasets where multiple underlying failure mechanisms are present, a differential evolution framework and a bound-constrained expectation maximization algorithm are developed; both approaches significantly outperform the gradient-based L-BFGS-B algorithm. New schemes for strategic failure analysis on a subset of the failed units are presented, both for detecting the presence of a second failure mechanism and for improving two-mechanism reliability models. A regression-based protocol is also presented for optimally selecting reliability test conditions to verify physical failure mechanism models. A maximum-likelihood-estimation-based approach is demonstrated for the simultaneous extraction of waveguide index and thickness variations using integrated photonic directional couplers and Mach-Zehnder interferometers. Schemes are proposed for optimal selection of cut-back test structures and for propagation loss estimation with a Bayesian prior distribution for fiber-coupling error. Finally, a robust Bayesian optimization algorithm using a new tunable acquisition function is presented for photonic component design. The methods developed in this thesis are expected to be broadly applicable across electronic and photonic devices and systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164553</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and Applications of Large-Area Monolayer Graphene</title>
<link>https://hdl.handle.net/1721.1/164515</link>
<description>Synthesis and Applications of Large-Area Monolayer Graphene
Wang, Zhien (Abigail)
Graphene, renowned for its exceptional electrical, mechanical, and chemical properties, is a promising candidate for next-generation electronics, photonics, and biosensing. However, realizing its full potential depends critically on the ability to synthesize high-quality monolayer graphene. In this thesis, we present a robust chemical vapor deposition (CVD) approach for synthesizing large-area, adlayer-free, single-orientation graphene on Cu(111) foil and Cu(111) film/sapphire. A comparative analysis between these two substrates reveals critical differences in wrinkle density, grain size, and strain — offering insights for optimizing graphene growth.&#13;
We further identify and characterize defective merging behavior in single-orientation graphene domains. Contrary to conventional assumptions, these merging regions contain permeable defects, revealing previously unrecognized limitations in using single-orientation stitched graphene as an impermeable barrier. To scale up production while reducing human error, we also develop an autonomous CVD platform with automated sample handling, growth and post-growth oxidation. This system enables high-throughput and reproducible graphene synthesis with minimal supervision.&#13;
Building on these synthesis advances, we explore multiple applications of large-area monolayer graphene. We discover that graphene can promote interfacial oxidation of metals like aluminum and titanium during deposition, whereas metals such as nickel remain stable — a finding that informs the engineering of metal-graphene interfaces for electronic devices. In parallel, we examine graphene's role as a transparent, flexible electrode in organic solar cells, along with collaborative demonstrations of its use as a sensor for cardiac microtissues and as a tunable microheater in mid-infrared devices.&#13;
Altogether, this work advances both the fundamental understanding and technological scalability of monolayer graphene, positioning it as a versatile platform for future applications across electronics, optoelectronics, and biointerfaces.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164515</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution</title>
<link>https://hdl.handle.net/1721.1/164514</link>
<description>Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution
Martin, Caroline A.
Seeds are an exceptional evolutionary innovation that enables the conditional allocation of maternal resources to successfully fertilized ovules. During early development, seeds accumulate nutrients that are utilized either by the embryo or by humans who harvest seed crops for food, biofuel, and livestock feed. Moreover, the grains of maize, rice, and wheat provide approximately 60% of the calories consumed worldwide. Although seeds are a cornerstone for ecosystems and modern agriculture, fundamental aspects of their development are incompletely understood. In this thesis, I develop a transcriptional atlas of seed development using the model plant Arabidopsis thaliana to clarify the functional compartmentalization, diversity, and developmental dynamics of cell types in the seed. I focus my analyses on how seed cell types communicate with one another to ensure successful propagation, and how genetic conflicts in the seed may drive rapid evolution in specific cell types. After characterizing the extent of short, secreted peptide expression in specific seed cell types, I perform in silico screens to match potential peptide hormones with their receptors. In total, I show that the seed coat exhibits functional compartmentalization around the gateway for maternal resources into seeds, that seed genes differentially expressed in a maternal resource transfer structure are rapidly evolving, and that genes underlying brassinosteroid biosynthesis and response are expressed in adjacent tissues, among other findings. This thesis illuminates potential new mechanisms for inter-tissue coordination and provides a transcriptional reference for future seed studies.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164514</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making</title>
<link>https://hdl.handle.net/1721.1/164513</link>
<description>Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making
Tsiourvas, Asterios
In recent years, deep learning has emerged as a powerful tool for data-driven decision-making. However, its adoption in high-stakes applications is often constrained by challenges related to interpretability, fairness, and generalization in structured or complex environments. This thesis develops new optimization methodologies to enhance the realism, structure-awareness, and interpretability of deep learning models in decision-making tasks. We begin, in Chapter 2, by addressing the challenge of optimizing trained neural networks for data-driven decision-making. Although neural networks can encode rich representations of preferences or outcomes, directly optimizing their outputs can be computationally intractable and may produce unrealistic prescriptions. We introduce scalable algorithms that leverage the piecewise-linear structure of ReLU networks, reducing the original hard-to-solve mixed-integer program to tractable linear programs. To ensure realism, we introduce constraints that restrict decisions to lie on the data manifold. We then extend this framework to any differentiable neural network or MIP-expressible model and show that it scales to networks with millions of parameters. In Chapter 3, we focus on decision-making under observational data. First, we study personalized treatment recommendations under discrete treatments. We introduce the Prescriptive ReLU (P-ReLU) network, a piecewise-linear model that partitions the input space into polyhedral regions, assigning treatments uniformly within each, and that can be translated into an equivalent interpretable decision tree. We demonstrate that P-ReLU achieves strong prescriptive accuracy and accommodates structural/prescriptive constraints with ease. Next, we consider the problem of large language model (LLM) routing, where a query must be dynamically routed to the best model under competing metrics like accuracy and cost.
We develop a causal, end-to-end approach that learns routing policies directly from logged observational data while minimizing decision-making regret. Finally, we tackle the problem of generating realistic, manifold-aligned counterfactual explanations. To address this problem, we present a MIP formulation where we explicitly enforce manifold alignment by reformulating the highly nonlinear Local Outlier Factor (LOF) metric as a set of mixed-integer constraints. To address the computational challenge, we leverage the geometry of the network and propose an efficient decomposition scheme that reduces the initial hard-to-solve problem to a series of significantly smaller, easier-to-solve problems. We further extend this framework to any differentiable neural network or MIP-expressible machine learning model. In Chapter 4, we focus on structured machine learning. We first address the problem of hierarchical time series forecasting, where predictions must be both accurate and consistent with the aggregation structure of the hierarchy. While prior methods rely on fixed projection matrices, we propose learning the optimal oblique projection directly from data. The proposed end-to-end approach jointly trains the forecasting model and projection layer, significantly improving accuracy and coherence. Next, we study the problem of creating a highly expressive, interpretable, and fair machine learning model. We propose Neural-Informed Decision Trees (NIDTs), a model that combines the predictive power of neural networks with the inherent interpretability of decision trees. NIDTs use axis-aligned splits on dataset features to form transparent decision paths, and at each leaf, apply a linear predictor based on both the original features and neural embeddings from a task-specific network to capture non-linearities.
To generate NIDTs, we develop a decomposition training scheme that supports direct integration of fairness constraints via a constrained convex optimization problem solved at each leaf. Finally, in Chapter 5, we address fairness and efficiency in emergency department (ED) operations, where prolonged length of stay (LOS) has been linked to adverse outcomes such as increased mortality and higher risk of hospital-acquired infections. We focus on the patient prioritization and placement aspects of ED operations to improve throughput and reduce wait times. We propose a novel MIP predictive-prescriptive framework that decomposes predicted LOS into actionable components, enabling a more granular and operationally meaningful model of ED dynamics. Fairness considerations are explicitly incorporated into the formulation. To address uncertainty, we introduce a sampling-based solution approach. Our method increases ED throughput by 50–100% and reduces average wait time by 50–75%, depending on current utilization levels, while achieving near-optimal performance compared to a clairvoyant oracle. This work was conducted in collaboration with a major U.S. academic medical center. To facilitate practical implementation, we also design an interpretable metamodel that approximates the predictive-prescriptive algorithm with high fidelity. Together, these contributions provide a unified perspective on deep learning for reliable decision-making, grounded in optimization and encompassing interpretability, structure-awareness, and causal reasoning, well-suited for high-stakes operational environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164513</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reverberation Mapping of Supermassive Black Holes using Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164512</link>
<description>Reverberation Mapping of Supermassive Black Holes using Machine Learning
Lewin, Collin
Accreting supermassive black holes at the centers of galaxies, known as active galactic nuclei (AGN), offer a unique window into the physics of accretion and feedback that shape galactic evolution. Yet, the small spatial scales of these regions remain inaccessible to direct imaging. Reverberation mapping circumvents this limitation by using time delays between correlated emission at different wavelengths to infer physical size scales. While X-ray reverberation probes the innermost accretion flow, continuum reverberation in the UV, optical, and infrared (UVOIR) traces reprocessing by the accretion disk and broad-line region (BLR). In this thesis, I develop and apply frequency-domain timing techniques based on Gaussian Process (GP) regression to study AGN reverberation across X-ray and UVOIR regimes. By modeling the empirical variability of AGN light curves with GPs, I interpolate onto an evenly sampled time grid, enabling robust estimation of Fourier-resolved time lags despite irregular sampling or large time gaps. I apply this method to NuSTAR observations of the Narrow-line Seyfert 1 galaxy Ark 564, introducing a multi-task GP model that jointly learns kernel hyperparameters across light curves. This enables the first simultaneous modeling of lag and flux spectra from both NuSTAR and XMM-Newton using a relativistic reverberation model to constrain black hole mass and disk properties. Recent reverberation campaigns with the Neil Gehrels Swift Observatory and ground-based telescopes have revealed significant discrepancies between observed inter-band lags and standard accretion disk theory. These include unexpectedly large lag amplitudes (the “accretion disk size problem”) and weak correlations between X-ray and UV/optical light curves. To investigate further, I analyze recent Swift campaigns of Mrk 335 and Mrk 817 using GP-based frequency-resolved lag analysis. 
In both sources, standard disk lags appear only on short timescales (high frequencies), while longer-than-expected lags dominate at low frequencies. These lag excesses are consistent with reprocessing at larger radii, similar to the BLR. Mrk 817 offers a rare opportunity to connect the inner and outer accretion flow: I present the first simultaneous measurement of X-ray and UVOIR lags, effectively mapping the full disk. These lags vary significantly over the campaign, with longer delays during periods of stronger X-ray obscuration. This suggests that a disk wind may modulate the observed lags by introducing additional reprocessing and/or blocking ionizing flux from reaching more-distant material. To test this obscuration effect across a population, I conduct the first statistical study of UV/optical lag excess versus physical parameters across the Swift campaigns. The results show that the lag excess is driven entirely by obscured AGN, while the lags of unobscured sources are, on average, consistent with thin-disk theory. Regression analysis reveals that X-ray column density explains over 80% of the variance in lag excess. As for the X-ray/UV connection, obscured AGN also tend to show weaker correlations and more variable lags, suggesting that line-of-sight absorption not only contributes additional reprocessed emission that extends the UV/optical lags, but may also decouple or delay the X-ray and UV variability. To make GP-based time series analysis accessible to the community, I develop the STELA Toolkit, a fully documented Python package for computing frequency-domain data products using GPs. I also benchmark GP performance against other interpolation methods, including state-of-the-art transformers, paving the way for scalable, ML-enabled timing analysis in the era of time-domain surveys like that of the Vera C. Rubin Observatory.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164512</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Nonlinear Dynamics: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/164511</link>
<description>Learning Nonlinear Dynamics: Methods and Applications
Rossi, Baptiste T.
Accurate modeling of dynamical systems through differential equations is essential for scientific prediction and prescriptive control. Traditional model development, which relies on expert knowledge, parameter fitting and validation, is often iterative, time-consuming, and complicated by real-world data complexities such as noise and missing observations. This thesis addresses these challenges by developing robust, scalable, and interpretable methods for learning nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) directly from data, with a particular emphasis on applications in fluid dynamics.&#13;
&#13;
In Chapter 2, we introduce a novel methodology for learning arbitrary nonlinear ODEs using collocation methods combined with interpolation. This approach demonstrates enhanced robustness to noise and significant computational speed-ups compared to classical system identification techniques, including the popular SINDy framework. It also provides a constructive method for reconstructing unobserved system components, making it applicable to partially observed systems, and offers theoretical guarantees on accuracy traditionally absent in strong-form identification.&#13;
&#13;
In Chapter 3, we combine the approach from Chapter 2 with sparse regression to derive sparse ODEs from data, demonstrating enhanced robustness to observational noise. Our method shows improved performance in recovering the true structures and coefficients on canonical benchmark tests under significant noise, while the performance of traditional surrogate methods deteriorates even with minimal noise.&#13;
&#13;
In Chapter 4, we extend this methodology to PDEs using the method of lines, addressing issues related to data scale and interpolation ill-posedness. With a focus on Computational Fluid Dynamics (CFD), we show that our method goes beyond recovering complex nonlinear PDEs, such as the Navier-Stokes equations, from simulation data. The method can also be used as an a posteriori indicator of simulation quality, providing insights into the effective PDEs represented by a given simulation, and pinpointing error-generating areas to inform adaptive mesh techniques.&#13;
&#13;
Lastly, in Chapter 5, we introduce a novel data-driven framework for modeling turbulent phenomena, a long-standing challenge in aerospace and climate science. Our approach addresses the Reynolds-Averaged Navier-Stokes (RANS) closure problem, which involves modeling the unobserved eddy viscosity field. We tackle two interconnected inverse problems: reconstructing the eddy viscosity from flow data and discovering its governing PDEs, thereby proposing a pathway to uncovering new or refined RANS closure models directly from high-fidelity simulations. This chapter establishes a tractable baseline using a composite loss function, which we evaluate on canonical turbulent flows. Our results demonstrate that while the approach can recover governing equations when the ground truth eddy viscosity is known, significant challenges remain due to noise and numerical errors. We conclude that a more advanced reconstruction methodology is essential for robustly discovering these models, underscoring the potential of this data-driven approach and identifying critical areas for future research.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164511</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Electronic Compressibility of Rhombohedral Graphene Multilayers</title>
<link>https://hdl.handle.net/1721.1/164510</link>
<description>The Electronic Compressibility of Rhombohedral Graphene Multilayers
Aronson, Samuel H.
In condensed matter systems, energy bands with narrow dispersion frequently host correlated electronic phases that arise from strong Coulomb interactions. When these bands also have concentrated Berry curvature, the correlated phases may be topologically non-trivial. The low-energy bands of rhombohedral graphene multilayers possess both of these ingredients, making this a promising class of materials in which to search for correlated topological electronic ground states. This thesis describes our electronic compressibility measurements on rhombohedral graphene multilayers, with a particular focus on the pentalayer system (R5G). We utilize a planar capacitance technique that probes the thermodynamic density of states and enables us to extract energy gaps of incompressible phases. We observe a variety of correlated electronic phenomena including half and quarter metals, layer antiferromagnetism, correlation-driven Chern insulators, and thermodynamic signatures of potential Wigner crystallization. We also study the electronic compressibility of R5G aligned to a hexagonal boron nitride (hBN) substrate to form a moiré superlattice. Motivated by the recent discovery of the fractional quantum anomalous Hall effect in this system when the electrons are pushed away from the moiré interface by an external electric displacement field, we study the opposite moiré-proximal limit, in which the superlattice potential is considerably stronger. We observe integer and fractional Chern insulator states that persist down to low magnetic fields in addition to numerous trivial and topological charge density waves. We map out a phase diagram that is highly sensitive to both displacement and magnetic fields, establishing the R5G-hBN superlattice as a highly tunable system for studying the interplay between intrinsic band topology and strong lattice effects.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164510</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hadronic Structure with Classical and Quantum Computing</title>
<link>https://hdl.handle.net/1721.1/164509</link>
<description>Hadronic Structure with Classical and Quantum Computing
Avkhadiev, Artur
Calculations in lattice quantum chromodynamics (QCD) — presently the only known systematically improvable approach to describe the strong nuclear force in the nonperturbative regime from first principles — are playing an increasingly important role in revealing how hadrons emerge from the interactions of the underlying degrees of freedom: quarks and gluons. With computational and theoretical advances, more fruitful connections have emerged between lattice QCD and phenomenology, and the field is now ripe for deriving tighter constraints on hadronic structure through joint analyses of numerical lattice QCD results with experimental data.&#13;
This thesis summarizes lattice QCD calculations of the Collins-Soper (CS) kernel: a nonperturbative function whose inclusion in joint analyses has the potential to advance the study of multidimensional hadronic structure. The CS kernel is an anomalous dimension of transverse-momentum-dependent (TMD) distributions describing the three-dimensional structure of ultrarelativistic hadrons as a function of quark-gluon momenta collinear with and transverse to the hadron's motion. Constraints on the CS kernel at nonperturbative transverse-momentum scales are instrumental in relating TMDs across scales and processes. The kernel differs for quark and gluon TMDs, but is otherwise universal. This thesis presents the first lattice QCD determination of the quark CS kernel with systematic control over operator mixing, quark mass, and lattice discretization, and a proof-of-principle lattice calculation of the gluon CS kernel providing the first nonperturbative constraints on this quantity.&#13;
Additionally, this thesis summarizes exploratory studies on how Hamiltonian calculations — realized with quantum-computer simulations and tensor networks — may be combined with conventional Monte Carlo calculations based on Lagrangian formulations in Euclidean space. These studies examine how constructions of interpolating operators, used in conventional calculations to map between the vacuum and a ground state of interest, may be optimized in Hamiltonian calculations to increase overlap with the target state. Results, limited to the Schwinger model, support further investigations of this approach in theories more closely resembling QCD as quantum-computing and tensor-network technologies continue to mature.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164509</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Charcuterie Platter of QCD Matter</title>
<link>https://hdl.handle.net/1721.1/164505</link>
<description>A Charcuterie Platter of QCD Matter
Sun, Zhiquan
One of the greatest current challenges in theoretical high energy physics is to understand the dynamics of Quantum Chromodynamics (QCD). In this thesis, I address a variety of questions in QCD using Effective Field Theory (EFT). The first question deals directly with the observed phenomenology of QCD: How can we use EFT to disentangle the complicated three-dimensional dynamics of how quarks and gluons, the fundamental degrees of freedom of QCD, combine to form the observed bound states in nature called hadrons? I initiate a new formalism using Heavy Quark Effective Theory to study this dynamical process known as hadronization. I shed new light on the transverse momentum-dependent fragmentation process of heavy (charm and bottom) quarks by making use of the fact that heavy quarks with masses much larger than the strong interaction scale decouple from the rest of the hadronization cascade. I also present exciting heavy quark phenomenology at existing colliders and the upcoming Electron-Ion Collider. The second question investigates the field theory structure of QCD: What can we learn about the nonperturbative structure of the quantum field theory through the abstruse emergent phenomenon in QCD called “confinement”, which traps quarks and gluons inside hadrons? I study a class of cleverly constructed observables known as energy correlators by using field-theory-based methods to determine the leading nonperturbative contribution, and examine the universality of the nonperturbative matrix element that gives rise to this contribution. I also show that including the nonperturbative contribution has a significant impact on the extraction of the strong coupling constant, a fundamental parameter of the Standard Model, using tools such as factorization and resummation from EFT.
Last but not least, the final question explores the underlying symmetry properties of QCD and its potential completions: How robust is the axion solution to the strong CP (Charge-Parity) problem, and what are some of its implications beyond the realm of QCD? I examine the axion quality problem in post-inflationary QCD axion models with different symmetry properties and identify a new tension with standard cosmology. I further show that the axion string-domain wall dynamics is more complicated than commonly expected, undermining the reliability of a unique mass prediction for axion dark matter in post-inflationary models. I showcase the importance of considering both high-energy extensions and the EFT at low energy, and uncover new complexity of the axion solution to the strong CP problem.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164505</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from pre-pandemic data to design and test future-proof therapeutics</title>
<link>https://hdl.handle.net/1721.1/164504</link>
<description>Learning from pre-pandemic data to design and test future-proof therapeutics
Gurev, Sarah
Effective pandemic preparedness relies on predicting immune-evasive viral mutations to enable early detection of variants of concern and design vaccines and therapeutics that are resilient to future viral evolution. However, current strategies for viral evolution prediction are not available early in a pandemic and have limited predictive power – experimental approaches require host polyclonal antibodies and existing computational methods draw heavily from current strain prevalence. In addition, vaccines and therapeutics have been designed with an eye towards past or circulating variants, not towards future evolution. To address these challenges, we developed EVEscape, a generalizable framework that integrates fitness predictions from a deep generative model of evolutionary sequences with biophysical and structural information. EVEscape quantifies the immune escape potential of viral strains at scale and is applicable before surveillance sequencing, experimental scans, or 3D structures of antibody complexes are available. We demonstrate that EVEscape, trained on sequences available prior to 2020, performs as accurately as high-throughput experimental scans at anticipating pandemic variation for SARS-CoV-2 and is generalizable to other viruses including Influenza A virus, HIV, and understudied viruses with pandemic potential such as Lassa and Nipah. We investigate both alignment-based and protein language models to explore the best model of mutation effects across pandemic-threat viral families. 
We demonstrate the utility of EVEscape in three critical applications: (1) surveillance efforts flagging high-escape SARS-CoV-2 variants from their first appearance; (2) design of panels of viral antigens that mimic future viral variants for early, proactive evaluation of the future protection of vaccines and therapeutics; and (3) design of a pan-sarbecovirus nanoparticle-based vaccine capable of eliciting broad, long-lasting protection against sarbecoviruses, including future variants. This three-pronged approach represents a paradigm shift in pandemic preparedness, offering a novel strategy to preemptively address viral families with pandemic potential and significantly bolster global prevention efforts.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164504</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in quantum information theory and quantum&#13;
many-body physics</title>
<link>https://hdl.handle.net/1721.1/164503</link>
<description>Topics in quantum information theory and quantum&#13;
many-body physics
Balasubramanian, Shankar
In this thesis we present two results relating to the intersection between quantum information theory and quantum many-body physics. The first pertains to quantum algorithms, where few computational problems are believed to exhibit exponential separation between quantum and classical performance. For those that do, natural generalizations remain elusive. One speedup that has especially resisted generalization is the use of quantum walks to traverse the welded tree graph, due to Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman. We show how to generalize this to a large class of hierarchical graphs in which the vertices are grouped into “supervertices” that are arranged according to a d-dimensional lattice. Supervertices can have different sizes, and edges between supervertices correspond to random connections between their constituent vertices. The traversal time of quantum walks on these graphs is related to (a) the existence of small subspaces within which the quantum walk evolves and (b) the localization properties of the quantum walk within these subspaces. We find examples of hierarchical graphs that yield provable speedups over classical algorithms ranging from superpolynomial to exponential, depending on the underlying dimension and the random graph model. We also discuss how to relax criterion (a) to the existence of a small and approximate subspace by using techniques from graph sparsification. The second result pertains to fault-tolerant quantum memories. Reliably storing a qubit in a noisy environment is crucial for developing full-scale quantum computers. While constructions of fault-tolerant quantum memories exist, they often assume that quantum operations need not be local and that the assisting classical computation operates instantaneously and noiselessly. In particular, constructing a topological quantum memory below four dimensions with local quantum and classical operations that is fault-tolerant under both quantum and classical noise is an open problem.
We construct a local quantum memory for the 2D toric code using ideas from the classical cellular automata of Tsirelson and Gács. Our memory preserves a logical state for exponential time in the presence of both classical and quantum noise below a constant noise rate. While our 2D quantum memory is built from operations that depend on space and time, we construct a fault-tolerant quantum memory in 3D using stacks of 2D toric codes that can be built with time-independent operations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164503</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Matter in the Era of Generalized Symmetries</title>
<link>https://hdl.handle.net/1721.1/164502</link>
<description>Quantum Matter in the Era of Generalized Symmetries
Chatterjee, Arkya
The discovery of generalized symmetries has led to powerful new insights into quantum matter. They have been used to classify new families of quantum phases, place constraints on phases realizable in a given physical system, and conceptually unify seemingly disparate phenomena. In many ways, they prove just as powerful as traditional symmetries at organizing and constraining the theories that describe quantum matter. In this thesis, we attempt a unification of such constraints by developing a holographic correspondence between (generalized) symmetries and topological orders, called the Sym/TO correspondence. For any (finite internal) symmetry of a quantum system in d (spatial) dimensions, we associate with it a unique topological order in d + 1 dimensions, called its Symmetry Topological Order (SymTO). We devise an operator algebraic recipe to compute the SymTO data for any lattice spin model, demonstrating it in a number of examples. We then use the SymTO to classify possible quantum phases allowed by the symmetry—we call this a generalized Landau paradigm. Besides classifying phases, we also identify constraints on the phase transitions between them using a SymTO-resolved modular bootstrap. We test this framework in a quantum spin chain with non-invertible symmetries. We discover a new Kramers-Wannier-like duality and a rich phase diagram including a non-invertible symmetry-enriched incommensurate phase. The translation symmetry of the spin chain has a nontrivial interplay with the lattice Kramers-Wannier duality, which matches the anomaly of the corresponding non-invertible symmetry in the low-energy effective field theory. Finally, we explore such unusual anomaly-matching mechanisms in more detail in the context of the chiral anomaly of a single massless Dirac fermion, demonstrating a novel lattice realization of chiral symmetries and their anomaly.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164502</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Materials Design of Ordered Nanocomposite Assemblies</title>
<link>https://hdl.handle.net/1721.1/164501</link>
<description>Systems Materials Design of Ordered Nanocomposite Assemblies
Thrasher, Carl James
The ability to precisely organize matter across multiple length scales is a central challenge in modern materials science. In this dissertation, I develop a systems materials design approach to engineer hierarchically structured nanocomposite assemblies, integrating molecular recognition, supramolecular chemistry, colloidal assembly, and bulk processing into unified material platforms. At the molecular and nanoscale, I investigate how multivalent supramolecular interactions can be rationally programmed by controlling the architecture of polymer binders grafted to nanoparticle surfaces. Through systematic variations in polymer topology, recognition group density, and scaffold geometry, I demonstrate how polymer design dictates the thermodynamic strength and multivalency of nanoparticle superlattice assembly, enabling precise control of thermal stability,&#13;
crystallographic symmetry, and collective bonding behaviors in massively multivalent systems. Building on these design principles, I develop a colloidal metallurgy framework to process self-assembled nanoparticle superlattices into dense macroscopic polycrystalline solids while preserving nanoscale order. By systematically studying the interplay of temperature, pressure, and time during colloidal sintering, I elucidate mechanisms of densification, defect evolution, and grain growth unique to colloidal systems, establishing processing–structure relationships that parallel but fundamentally diverge from atomic sintering. Finally, I extend these concepts to create stretchable nanocomposite supercrystals, embedding supramolecularly assembled superlattices into elastomeric matrices via co-engineered polymer chemistries that enable hierarchical strain&#13;
transduction. These materials combine the nanoscale precision of supercrystals with mechanical robustness, reconfigurability, and stimuli-responsive optical properties, illustrating a scalable pathway to multifunctional metamaterials. Collectively, this work demonstrates how a systems-level integration of molecular design, colloidal assembly, and bulk processing enables new&#13;
paradigms for the synthesis of hierarchically ordered, functional nanocomposites.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164501</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The novel roles of BCL6 and BATF3 in regulating human&#13;
CD8⁺ T cell dysfunction</title>
<link>https://hdl.handle.net/1721.1/164500</link>
<description>The novel roles of BCL6 and BATF3 in regulating human&#13;
CD8⁺ T cell dysfunction
Traunbauer, Anna Katharina
Reduced effector function and elevated inhibitory receptor expression are hallmarks of exhausted CD8⁺ T cells, yet the underlying molecular and epigenetic drivers remain incompletely defined. Here, we developed an in vitro repeated stimulation model to recapitulate features of human CD8⁺ T cell dysfunction and delineate transcriptional and epigenetic landscapes. Our analyses revealed that BCL6 and BATF3 are robustly upregulated in dysfunctional CD8⁺ T cells, with ATAC-seq demonstrating enhanced chromatin accessibility at their gene loci. Transcription factor footprinting showed increased BATF3 motif occupancy in chronically stimulated cells, and integrative multi-omic analysis combining footprints, open chromatin regions, RNA-seq and ChIP-seq data revealed that putative BATF3 target genes may include master regulators of exhaustion. Moreover, overexpression of BCL6 or BATF3 markedly upregulated TIM-3 expression and suppressed cytokine release, establishing their capacity to induce T cell dysfunction. We further validated these findings ex vivo in antigen-specific CD8⁺ T cells from patients with advanced melanoma, as well as HCV and HIV infections, where cells were enriched for BCL6^high and BATF3^high subsets co-expressing canonical exhaustion markers such as PD-1, TIM-3 and CD39. Notably, single-cell RNA sequencing of HIV-specific CD8⁺ T cells identified a distinct BCL6^high PD1⁻ progenitor population that gives rise to two distinct subsets via divergent differentiation trajectories: one branch generates effector-like BCL6^high PD1⁺ cells, whereas the other produces BCL6^high PD1⁺ cells that retain an exhaustion gene signature alongside partial memory-like features. Collectively, these findings identify BCL6 and BATF3 as key mediators of human CD8⁺ T cell dysfunction and illuminate novel transcriptional and epigenetic pathways that may be leveraged for therapeutic intervention in cancer and chronic viral infections.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164500</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of Nonperturbative Heavy Quark Physics</title>
<link>https://hdl.handle.net/1721.1/164499</link>
<description>Aspects of Nonperturbative Heavy Quark Physics
Lin, Joshua
Charm and bottom quarks occupy an interesting corner of Quantum Chromodynamics (QCD) because their masses are much heavier than the typical QCD interaction energy ΛQCD. Due to this scale separation, it is possible to describe these heavy quarks by Effective Field Theories (EFTs) that simplify their equations of motion, make explicit additional symmetries that only appear for heavier quark masses, and streamline the theoretical calculations required for predictions. By discretising these EFTs in a lattice regularisation, nonperturbative calculations of observables of interest become possible. This thesis presents progress towards systematically controlled calculations of two such observables: the Spectator Effect contributions to the inclusive decay rates of b-hadrons, and the real-time dynamics of fermions propagating in a thermal medium. Standard EFT calculations in Lattice-QCD proceed by expressing observables as sums over perturbatively computed Wilson coefficients and nonperturbative matrix elements that can be calculated by path-integral Monte Carlo methods. Though it is possible to carry out this procedure within a regulator-independent renormalization scheme, in practice almost all such decompositions are computed in the modified minimal subtraction scheme MS, which, due to its simplicity, is only defined for the dimensional regulator (DR). Computing such observables therefore requires a matching between lattice-regularised operators and operators renormalized in MS. In Chapter 2, both the dimensional regulator and the lattice regulator are reviewed, with a particular emphasis on techniques needed for calculations carried out in later sections. An interesting subtlety in DR is the need to introduce d-dimensional counterparts to the Dirac γ-matrices, which a priori are only well defined in an integer number of dimensions. This analytic continuation is of practical importance as it introduces additional Evanescent Operators (Sec. 2.1.4) that have physical consequences. In Sec. 2.1.5, traces of d-dimensional γ-matrices were related to Tutte polynomial evaluations [4], presenting a new graph-theoretic interpretation of the dimensionally regulated γ-matrices. One strategy for renormalizing lattice-regulated operators into MS involves first renormalizing into a regulator-independent scheme, before perturbatively matching between the regulator-independent scheme and MS. In Chapter 3, regulator-independent position-space (X-space) schemes for renormalizing operators defined in the leading-order Heavy Quark Effective Theory (HQET) are proposed [3]. Compared to other regulator-independent renormalization schemes such as RI-xMOM, X-space schemes have the benefit that they are gauge invariant. The next-to-leading-order matching calculations between X-space and MS are presented for heavy-light and heavy-light-light multiplicatively renormalizable operators, as well as ∆Q = 0 and ∆Q = 2 four-quark operators relevant for heavy hadron decays and mixing, where Q refers to the static quark number. Due to their heavy masses, hadrons containing heavy quarks decay via the weak nuclear force. Experimental measurements of these lifetimes provide precision determinations of the fundamental parameters of the Standard Model. The Heavy Quark Expansion expresses the inclusive lifetimes of heavy hadrons in terms of matrix elements of HQET operators of increasing dimension. The Spectator Effects are contributions due to the ∆Q = 0 four-quark operators, where the light quark degrees of freedom within a heavy hadron participate in the decay. In Chapter 4, a Lattice-QCD determination of the static decay constant f_B^HQET and the isospin-nonsinglet portion of the Spectator Effect matrix elements for heavy-light mesons is presented. Fits of bare matrix elements were performed for three different lattice spacings, and renormalized with the schemes proposed in Chapter 3 before a continuum limit is taken. 
Due to the heavy masses mQ of the heavy quarks, it is possible to find temperatures T approximately satisfying a hierarchy ΛQCD ≪ T ≪ mQ. At these temperatures, QCD undergoes a deconfinement transition into the Quark-Gluon Plasma (QGP) phase, where the light degrees of freedom are no longer confined and instead screen the long-range colour forces. The heavy quarks, however, are not thermalised, and act as probes of the QGP. Further understanding of the QGP requires first-principles simulations of the heavy quark dynamics at finite temperature; however, such calculations are difficult due to the enormous size of the Hilbert space. Variational approximations of the Hilbert space encode wavefunctions in a few parameters, and provide a practical method to simulate many-particle systems. As a test case, the variational approach was applied for the first time to simulate fermions at finite temperature in a simple QFT: the 1+1d U(1) gauge theory known as the massive Schwinger model. Both the real-time dynamics of string-like states and the properties of the thermal state were studied, and such variational methods are shown to be promising approaches to the more difficult case of a heavy quark effective theory in QCD.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164499</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the Nonperturbative Physics of QCD with&#13;
Normalizing Flows and a moderate number of Pions</title>
<link>https://hdl.handle.net/1721.1/164498</link>
<description>Probing the Nonperturbative Physics of QCD with&#13;
Normalizing Flows and a moderate number of Pions
Abbott, Ryan William
Quantum Chromodynamics (QCD) is a cornerstone of the standard model of particle physics, and the best known theory of strong nuclear interactions. Lattice QCD is the only known systematically improvable ab-initio method for accessing the nonperturbative physics of QCD, and this thesis presents two advances in our understanding of QCD using lattice-based methods. The first is a calculation using many-pion systems to map out the entire zero-temperature, nonzero-isospin-density region of the QCD phase diagram. The calculation uses novel methods for many-pion systems that enable working with thousands of pions, and furthermore provides rigorous constraints on the baryon-dense region of the QCD phase diagram. The second is an application of methods from machine learning (namely normalizing flows) to accelerate sampling. This approach promises to eliminate issues such as critical slowing down, and introduces novel tools that enable calculations that would not otherwise be possible.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164498</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limits of QCD</title>
<link>https://hdl.handle.net/1721.1/164497</link>
<description>Limits of QCD
Gao, Anjie
This thesis explores the fundamental kinematic limits of Quantum Chromodynamics (QCD), including the soft, collinear, and Regge limits, using soft-collinear effective theory (SCET). We begin by studying transverse momentum dependent (TMD) physics in semi-inclusive deep inelastic scattering (SIDIS), which probes the small transverse momentum regime arising from the soft and collinear limits of QCD. We derive all-order factorization theorems for azimuthal asymmetries in SIDIS at next-to-leading power (NLP). We also propose a new angular observable, q_∗, for probing TMD dynamics at the future Electron-Ion Collider (EIC), which enables an order-of-magnitude improvement in experimental resolution while retaining sensitivity to TMD distributions. Next, we apply the TMD formalism to a class of observables known as energy correlators. We study the transverse energy-energy correlator (TEEC) in the back-to-back limit, a dijet observable at hadron colliders, and the three-point energy correlator (EEEC) in the coplanar limit, a trijet observable at lepton colliders. For both observables, we derive all-order factorization theorems and resum large logarithms to next-to-next-to-next-to-leading logarithmic (N3LL) accuracy. Finally, we analyze the Regge limit of 2 → 2 QCD amplitudes. By factorizing these amplitudes into collinear jet and soft functions and studying their rapidity evolution, we define Regge-like anomalous dimensions in a gauge-invariant manner. At the level of the exchange of two Glauber gluons in the t-channel, we recover the BFKL equation from a purely collinear perspective. Extending to three-Glauber exchange, we derive the first closed-form renormalization group equations for Regge cut contributions in several nontrivial t-channel color representations, providing a systematic method for organizing non-planar QCD amplitudes at high energy.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164497</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells</title>
<link>https://hdl.handle.net/1721.1/164496</link>
<description>Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells
Lee, April
Precise regulation of nutrient availability is crucial for cellular function and survival. Iron, in particular, is tightly regulated as it serves as an essential cofactor for numerous enzymes but can catalyze the formation of toxic radicals at elevated levels. To maintain the necessary cytoplasmic iron concentration, cells store excess iron in large proteinaceous cages called ferritin and, when available iron levels fall, they degrade these cages, liberating the stored iron for use. This thesis focuses on the molecular mechanisms underlying cellular iron sensing, as well as the molecular interactions supporting regulated ferritin degradation and subsequent iron release. Specifically, this work interrogates the protein interactions involved in ferritinophagy, a form of selective autophagy that leads to the lysosomal degradation of ferritin. Extending prior work that identified key components supporting ferritinophagy, including the selective autophagy receptor protein NCOA4 and its cognate autophagosomal receptor GATE16, experiments described here uncover the molecular contacts between these proteins. I found that NCOA4 bears two short linear motifs that each bind to GATE16 with weak affinity. However, these binding motifs are highly avid and, in concert, support high-affinity binding of NCOA4 to oligomerized GATE16. I further describe that ferritin degradation in cultured human cells relies on the contacts I identified biochemically. Moreover, I found that iron decreases NCOA4’s affinity for GATE16, providing a plausible mechanism for iron-dependent regulation of ferritinophagy. Taken together, this work suggests a general mechanism by which selective autophagy receptors can distinguish between inactive monomeric GATE16 and the active oligomerized forms that primarily drive autophagy. 
In related studies, I have biochemically probed the NCOA4•ferritin interface, with these experiments suggesting a novel function of NCOA4 in modulating ferritin cage structure – either through cage dismantling or through the formation of higher order structures. Taken together, these studies further define the molecular mechanisms by which NCOA4 aids cells in maintaining iron homeostasis, and they provide the requisite reagents for future work aimed at building a unified model for how mammalian cells regulate this vital but toxic metal.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164496</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels</title>
<link>https://hdl.handle.net/1721.1/164494</link>
<description>Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels
Zheng, Daniel J.
With the ever-rising CO₂ levels in the atmosphere, it is paramount to cease reliance on fossil fuels to meet global energy demands. While the cost of electricity from renewable sources, such as solar and wind, continues to decrease and has even fallen below that of fossil fuels since 2014, these renewable energy sources suffer from intermittency, potentially causing shortages at peak demand. Thus, methods to store or economically use excess renewable energy are needed for full decarbonization. One promising avenue is to store the excess generated electrical energy in chemical bonds, creating molecules and materials with industrial or energy storage utility. In this proposed scheme, the renewable electricity would be used to electrochemically convert earth-abundant molecules into value-added chemicals or fuels. These generated products could then be utilized as feedstocks in industrial applications or as a fuel source to generate electricity when needed, by transforming them back into their earth-abundant forms.&#13;
&#13;
Central to transforming earth-abundant molecules into value-added chemicals or fuels is the oxygen evolution reaction (OER), which is found in nearly every process. The plentiful nature of OER’s main reactant, water, and moderate thermodynamic potential of 1.23 V vs. the reversible hydrogen electrode, make OER an ideal reaction to pair with other transformations. However, the slow kinetics of OER significantly hinder the efficiency of these processes. As such, discovering new OER catalysts with high activity and stability would have widespread impacts. On the other hand, one of the most promising renewable fuel sources is methanol, which boasts about 3 times the energy density of hydrogen and can be used as an alternative to hydrogen in proton exchange membrane fuel cells. However, the sluggish kinetics of the methanol oxidation reaction (MOR), even with current state-of-the-art noble metal catalysts, cause direct methanol fuel cells to reach efficiencies of &lt;40%, limiting their practical usage. While significant research has been invested in discovering new MOR electrocatalysts, PtRu has reigned for five decades, highlighting the need for a true breakthrough. &#13;
&#13;
In this thesis, electrocatalysts for OER and MOR are examined in depth. For OER, metal-hydroxide organic frameworks (MHOFs), a promising new class of hybrid organic-inorganic materials with potential to mimic the superior functionality of enzymes, are studied. Operando vibrational and absorption spectroscopy methods are used to characterize the degradation mechanisms and lattice oxygen exchange capacity as a function of the linkers. Using such knowledge, defects are engineered into the MHOF that increase both the activity and stability compared to the pristine material. Furthermore, the traditionally reported MOR mechanism is studied using isotope-labeled reactants and operando mass spectrometry. These experiments revealed that, in contradiction to typically accepted mechanisms, the C-O bond in methanol can be cleaved during MOR, with the resulting CO₂ molecule containing two water-derived oxygen atoms, opening a new paradigm for MOR catalyst design. Driven by the need to discover new materials at scale, a fluorescence-based OER catalyst screening method is developed that can screen an entire composition space simultaneously. In addition, an AI-driven, automated platform for screening a high-dimensional multimetallic space for MOR is presented.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164494</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action</title>
<link>https://hdl.handle.net/1721.1/164493</link>
<description>Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action
Petridis, Periklis S.
Despite significant theoretical advances in Operations Research (OR) and Artificial Intelligence (AI), a persistent gap remains between these developments and their practical implementation in real-world settings. Many OR and ML approaches struggle to scale to realistic problem sizes, lack robustness to uncertainty, or fail to address implementation constraints faced by practitioners in industry. Through four distinct works conducted in collaboration with industry partners, this research demonstrates how methodological advancements can bridge this theory-practice divide while maintaining rigorous theoretical foundations and guarantees. In the first part, we focus on optimization methodologies that scale traditional OR approaches to handle real-world problem sizes and uncertainty. In Chapter 2, we develop a stochastic Benders decomposition scheme for large-scale network design problems, a class of problems ubiquitous in logistics, transportation, and energy sectors. By incorporating sampling techniques within the decomposition framework, we achieve deterministic optimality guarantees while reducing computational costs, enabling solutions for networks with 700 nodes—an order of magnitude larger than previously tractable instances—while achieving optimality gaps of 5-7% compared to 16-27% for traditional deterministic approaches. In Chapter 3, we present a holistic framework for industrial decarbonization, developed with a major phosphate producer planning to quadruple energy consumption while transitioning to renewable sources. Our robust optimization approach combines strategic capacity expansion planning over a 25-year horizon with adaptive operational models, providing 95% reliability guarantees while balancing solar and wind integration with battery storage to meet a projected 12 TWh annual demand. 
In the second part, we shift our focus to developing AI systems that address the unique challenges of medical data abstraction and clinical decision support. In Chapter 4, we address the challenge of automating clinical data abstraction from electronic health records, collaborating with the Society of Thoracic Surgeons to populate their Adult Cardiac Surgery Database. Our AI pipeline combines 31 models per target variable with a two-tiered quality control system, achieving over 99% accuracy while automatically extracting 43-50% of registry variables, demonstrating how AI can dramatically reduce manual abstraction burden while maintaining clinical standards. In Chapter 5, we extend this healthcare AI focus by developing xHAIM (Explainable Holistic AI in Medicine), which addresses the limitations of current clinical AI systems in handling extensive patient records, providing interpretability, and incorporating medical knowledge. Through semantic similarity techniques and generative AI, xHAIM improves predictive performance while generating clinically grounded explanations that enhance trust and adoption by healthcare practitioners.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164493</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death</title>
<link>https://hdl.handle.net/1721.1/164492</link>
<description>Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death
Lam, Breanna
During hibernation, the core body temperature of Syrian hamsters shows a remarkable decrease, dropping from 37°C to 4°C. Although this ability to survive at low temperatures could in principle be due to systemic factors that occur during hibernation, we and others have seen that cells from hibernating rodents cultured in vitro maintain this ability. Although others have studied characteristics of cells from hibernating and non-hibernating organisms, the genes and pathways involved in cold-induced cell death have not been systematically explored. &#13;
In this thesis, we conduct two genome-wide CRISPR-Cas9 screens in both a cold-sensitive (K562) and a cold-resistant (BHK-21) cell line, and uncover GPX4 and related selenocysteine incorporation genes as important for protection against cold-induced cell death. Using genetic knockdowns, along with overexpression of GPX4, we confirm our findings and demonstrate that levels of GPX4 may be limiting in K562 cells, contributing to their cold sensitivity. Additionally, pharmacological validation using inhibitors of GPX4 reveals that the catalytic activity of GPX4 is dependent on the selenocysteine in the active site. We extend our findings to multiple cell lines and cell types across six species. Taken together, our results suggest that GPX4 may be a powerful and conserved suppressor of cold-induced cell death. &#13;
Building on our initial findings, we go on to show that cold exposure leads to increases in membrane permeability. This membrane permeability is transient, as rewarming of the cells reduces permeability to baseline levels. We also test the role of lipid peroxidation in contributing to membrane permeability and find that although it contributes in some cell lines, it is not the sole contributor as ferroptosis inhibitors do not fully mitigate membrane permeability. We go on to test different membrane channels and do not see decreases in membrane permeability, potentially indicating pathway-independent effects of temperature on membrane permeability. Altogether, this work provides a foundation for understanding how cold exposure influences mammalian cells.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164492</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States</title>
<link>https://hdl.handle.net/1721.1/164491</link>
<description>Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States
Baidoo, Jacqueline E.
Municipal solid waste (MSW) is a heterogeneous mixture of materials discarded by residential and nonresidential generators at end-of-life processing facilities for treatment and disposal. Conventional treatment methods reduce waste volumes through recycling via material recovery facilities, energy recovery via municipal solid waste incinerators, and biochemical conversion via composting. Even so, nearly 50% of total MSW generated in the United States was sent to landfills for final disposal in 2018 and almost half of all landfills currently in operation are expected to reach capacity by 2050. Waste planners seek to use developing resource recovery technologies like dry anaerobic digestion, gasification, and pyrolysis to narrow the gaps in end-of-life processing. Such technologies are posited to improve materials circularity and advance zero-waste landfill diversion goals by transforming residuals into electricity, fuels, and precursors to chemicals and fertilizers. However, despite demonstrated improvements to technical inefficiencies in waste valorization, numerous projects built on these technologies have failed to break through to commercial success. We investigate the contribution of regional and economic factors to the success of resource recovery projects through the lens of why landfills remain the predominant method of waste disposal. We build cost models of conventional and select developing treatment methods and use discounted cash flow analysis to estimate financial feasibility by local MSW compositions as reported in regional waste characterization studies.&#13;
&#13;
Findings indicate that the most critical factor for sustainable operation is a consistent supply of waste materials at the quality and scale that maximize production efficiency, which is not achievable without rigorous data monitoring of MSW composition. Conversely, dependence on waste volume rather than composition makes land disposal a uniquely flexible pathway capable of subsidizing the costs of resource recovery. Progress towards landfill diversion is economically linked to the opportunity cost of avoiding landfill utilization. Unless municipalities are able to introduce subsidies elsewhere in the waste management ecosystem through gate fees and credits, projects will fail where the marginal net costs of diversion exceed the revenues lost from avoided landfilling. Targeted processing of organic wastes can facilitate an average diversion of 24% for the compositions surveyed and was found to be viable for composting and dry anaerobic digestion projects at low to negligible financial losses compared to landfilling.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164491</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enlightening Artificial Intelligence with Science</title>
<link>https://hdl.handle.net/1721.1/164490</link>
<description>Enlightening Artificial Intelligence with Science
Liu, Ziming
Today’s artificial intelligence (AI) systems, while remarkably capable, are largely black boxes. This black-box nature raises concerns for those who build AI – “How can we construct and understand AI in scientifically grounded ways?” – and for those who use AI – “How can we trust systems we do not understand?”. This thesis takes a humble step towards addressing the black-box problem. Building white boxes with science (Science for AI): The prevailing paradigm in AI today – “scaling is all you need” – focuses on scaling up existing models. However, this approach often yields systems that are neither interpretable nor efficient. I argue that scientific principles offer fresh perspectives for designing more transparent and effective AI systems. This is demonstrated through Kolmogorov-Arnold Networks (KANs) inspired by mathematics, Poisson Flow Generative Models (PFGM) rooted in physical intuition, and brain-inspired modular training (BIMT) drawing insights from neuroscience. Opening black boxes (Science of AI): Modern AI models exhibit a range of puzzling behaviors – such as grokking, neural scaling laws, and emergent representation learning – whose underlying mechanisms remain poorly understood. I employ simplified “spherical cow” models to investigate these phenomena from the perspective of phase transitions. I show that grokking is a special phase in the hyperparameter space, which can be controlled and eliminated. The learned algorithms after grokking also display distinct phases, called clock or pizza algorithms. AI for Science: With greater interpretability, AI systems can begin to function as “AI Scientists” capable of (re)discovering deep scientific structures from data. These include conservation laws, hidden symmetries, integrable systems, Lagrangian and Hamiltonian formulations, modular structures, and high-precision solutions. I believe my research contributes to the emerging interdisciplinary field that unites AI and science. 
Building upon the foundation laid in this thesis, I envision a future in which science guides AI out of its current era of alchemy and into a true era of scientific understanding.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164490</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative modeling of 5' splice site subclass regulation and evolution</title>
<link>https://hdl.handle.net/1721.1/164489</link>
<description>Quantitative modeling of 5' splice site subclass regulation and evolution
Kenny, Connor Jens
Pre-mRNA splicing is an essential molecular process required for eukaryotic gene expression. In this thesis, I present a previously unknown mechanism of splicing regulation in which a family of splicing factors, the LUC7 family, compete to differentially impact 5' splice site (5' SS) selection in a sequence-dependent manner. I quantitatively characterize two major subclasses of 5' SS in eukaryotes and outline distinctive features of 5' SS in exons affected by the three human LUC7 paralogs: LUC7L2 and LUC7L enhance splicing of “right-handed” 5' SS that exhibit stronger consensus matching on the intron side of the nearly invariant /GU, while LUC7L3 boosts splicing of “left-handed” 5' SS with stronger consensus matching upstream of the /GU. Using a range of experimental systems, from human cells to mutant plants, I show that LUC7 paralogs have opposing effects on these two 5' SS subclasses and that this regulatory mechanism likely originated in the last common ancestor of animals and plants over 1.5 billion years ago. I further evaluate a competing model of 5' SS subclass regulation involving METTL16-mediated U6 snRNA modification and reconcile both models by devising computational tools that identify sequence features predictive of splicing dysregulation in transcriptome-wide datasets. Finally, I examine the evolutionary dynamics of left- and right-handed 5' SS and propose a model of intron evolution in which codon and intron phase constraints in protein-coding genes shape both minor-to-major intron conversion and transitions between left- and right-handed 5' SS subclasses.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164489</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compiler-Hardware Co-Design for Pervasive Parallelization</title>
<link>https://hdl.handle.net/1721.1/164488</link>
<description>Compiler-Hardware Co-Design for Pervasive Parallelization
Ying, Victor A.
Modern computer systems have hundreds of processor cores, so highly parallel programs are critical to achieve high performance. But parallel programming remains difficult on current systems, so many programs are still sequential. This dissertation presents new compilers and hardware architectures that can parallelize complex programs while retaining the simplicity of sequential code. Our new systems allow real-world programs to use hundreds of cores without burdening programmers with concurrency, deadlock, or data races. &#13;
 &#13;
This dissertation follows a novel approach that eliminates the burden of explicit parallel programming to make parallel execution pervasive. This approach relies on four guiding principles. First, exploiting implicit parallelism preserves the simplicity of sequential execution. Second, dividing computation into tiny tasks, as short as tens of instructions each, unlocks plentiful fine-grained parallelism in challenging programs. Hardware-compiler co-design techniques can create many tasks in parallel and reduce per-task overheads to make tiny tasks scale to many cores. Third, new hardware and software mechanisms can compose parallelism across entire programs, removing serializing barriers to overlap executions of nested parallel subroutines. Finally, exploiting static and dynamic information for data locality reduces data movement costs while maintaining load balance on large multicore systems. &#13;
 &#13;
This dissertation presents three systems that embody these four principles. First, T4 introduces automatic program transformations that exploit a novel hardware architecture to parallelize sequential programs. As a result, T4 scales hard-to-parallelize real-world programs to tens of cores, resulting in order-of-magnitude speedups. Second, S5 builds on T4 with novel transformations to remove needless serialization in a broad class of challenging data structures. Thus, S5 scales complex real-world programs to hundreds of cores, delivers additional order-of-magnitude speedups over T4, and outperforms manually parallelized code tuned by experts. Finally, ASH is an accelerator that demonstrates the same approach can be applied with simpler mechanisms tailored for digital circuit simulation. A small ASH implementation is 32x faster than a large multicore CPU running a state-of-the-art parallel simulator.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164488</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic Insights into Alloy Solidification using Machine-Learning Potentials</title>
<link>https://hdl.handle.net/1721.1/164486</link>
<description>Atomistic Insights into Alloy Solidification using Machine-Learning Potentials
Cao, Yifan
Alloy solidification is a critical process in materials design and manufacturing, as it governs the formation of the microstructures that determine the mechanical, thermal, and chemical properties of materials. However, direct in situ observation remains extremely challenging due to the need for high spatial and temporal resolution at elevated temperatures. On the theory side, solidification is a complex phenomenon often studied using phase-field simulations, which rely on empirically fitted parameters and simplified assumptions about interfacial kinetics, limiting their predictive capability. Capturing this process at the atomistic level can yield more fundamental insights, but is hindered by the need for interatomic models that are both accurate and computationally efficient across relevant timescales and length scales. To overcome these challenges, this thesis develops and applies machine-learning interatomic potentials (MLPs) that capture the chemical complexity of metallic alloys, providing a physically accurate and computationally efficient backbone for large-scale atomistic simulations of complex alloy solidification. We first address a foundational challenge in deploying MLPs: the systematic construction of robust and transferable training datasets. Using CrCoNi as a model system, we evaluate various strategies for training MLPs to capture chemical short-range order (SRO), a critical feature in high-entropy alloys, and its effects on material quantities relevant to mechanical properties, such as stacking-fault energy and phase stability. We demonstrate that energy accuracy on test sets often does not correlate with accuracy in capturing material properties, a distinction that is fundamental to enabling large-scale atomistic simulations of metallic alloys with high physical fidelity. Based on this analysis, we systematically derive design principles for the rational construction of MLPs that capture SRO in the crystal and liquid phases of alloys. 
The resulting MLPs are validated against experimental measurements of key thermophysical properties, including melting points, heat capacities, thermal expansion coefficients, and the enthalpy of SRO formation, confirming their suitability for predictive simulations. With these validated potentials, we then investigate the evolution of SRO during rapid solidification. Our simulations reveal that alloy processing can lead to nonequilibrium steady states of SRO that differ qualitatively from any equilibrium configuration. We attribute this behavior to an inherent ordering bias introduced by nonequilibrium dynamics during solidification. These findings suggest that conventional manufacturing processes offer new opportunities to tailor alloy properties by accessing a broader spectrum of nonequilibrium SRO states, expanding the alloy design space beyond the equilibrium spectrum. Finally, we conduct predictive solidification simulations of chemically complex alloys across experimentally relevant growth rates (0.15–2 m/s), alloy compositions, interface orientations, and undercooling levels. These simulations capture the dynamic build-up of solute partitioning at the solid-liquid interface and reveal kinetics-dependent segregation patterns that deviate markedly from equilibrium predictions. The developed framework enables direct evaluation of key kinetic properties under realistic growth conditions, including interface mobility, liquid diffusivity, and solute trapping. Altogether, this thesis develops machine-learning potentials capable of capturing the chemical complexity of metallic alloys with near DFT-level accuracy, and establishes a framework for extracting key kinetic properties through predictive simulations of alloy solidification. 
When combined with emerging advances in continuum-scale modeling, these results lay the groundwork for truly multiscale investigations of alloy solidification, enabling DFT-level predictive capabilities at scales directly comparable to experimental alloy design and additive manufacturing processes.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164486</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequential Resource Allocation and Applications in Revenue Management</title>
<link>https://hdl.handle.net/1721.1/164485</link>
<description>Sequential Resource Allocation and Applications in Revenue Management
Zhou, Zijie
Sequential resource allocation is a fundamental problem in operations research, encompassing a wide range of applications where decisions must be made dynamically under uncertainty. This thesis develops new theoretical foundations, explores practical applications, and establishes evaluation methodologies for sequential resource allocation, with a focus on revenue management, robustness and fairness, and experiment design. On the theoretical side, this thesis advances the study of classical network revenue management, a long-standing challenge in dynamic resource allocation. We introduce the first LP-free algorithm, improving the regret bound from O(T^1/2) to O(T^3/8)—a significant step toward closing the gap between existing algorithms and the theoretical lower bound of O(1). Additionally, we enhance robustness in sequential resource allocation by developing algorithms that incorporate machine-learned advice, striking a balance between overly conservative worst-case models and overly optimistic stochastic assumptions. Furthermore, we integrate individual fairness into sequential decision-making, ensuring equitable resource allocation without compromising competitive performance. On the application side, we demonstrate the impact of sequential resource allocation in the hospitality management domain. In collaboration with Oracle Lab, we design an online upgrading mechanism that enables hotels to dynamically determine when and at what price to offer room upgrades. Additionally, we propose near-optimal, fast approximation algorithms for this mechanism, achieving a regret bound of O(log T), which is close to the natural lower bound of O(1). We also apply our upgrading algorithm to a hotel dataset, improving revenue by more than 20% in 2022. Finally, we introduce new methodologies for evaluating sequential decision-making policies, with a focus on online experiment design. 
Traditional A/B testing methods struggle with dynamically arriving data, leading to biased or inefficient experimental results. Our pigeonhole experimental design effectively reduces bias and outperforms several well-known experimental design policies, including matched pair design and completely randomized design, making it a more reliable approach for evaluating sequential decision-making strategies. By unifying theoretical insights, real-world applications, and online evaluation frameworks, this thesis contributes to the broader field of sequential resource allocation, providing fundamental advancements with practical implications across revenue management and experimental design.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164485</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of Moiré Quantum Matter</title>
<link>https://hdl.handle.net/1721.1/164484</link>
<description>Aspects of Moiré Quantum Matter
Paul, Nisarga
The advent of moiré quantum matter has newly unified disparate themes in modern condensed matter physics, chief among them band theory, correlations, and topology. This thesis investigates how the interplay between these foundational elements leads to novel electronic phenomena uniquely enabled by moiré superlattices. We focus on modulated Landau levels, which constitute one of the simplest settings featuring all three of band dispersion, correlations, and topology, yet are rich enough to capture many of the interesting phenomena of moiré quantum matter. We characterize emergent quantum phases that are newly unlocked by the moiré regime. Specifically, we discuss directional localization, the formation of Hall crystals with tunable Chern numbers, and novel fractional Chern insulator collective mode physics in the context of modulated Landau levels. We also show that a class of models comprising itinerant electrons strongly coupled to skyrmion-like magnetic textures, closely connected with moiré transition metal dichalcogenides in which the fractional quantum anomalous Hall effect was observed, can host flat Chern bands, emergent Landau levels, and zero-field non-Abelian topological order. This thesis provides a framework for the study of the essential features of moiré quantum matter and demonstrates how moiré systems provide unprecedented opportunities to explore, design, and manipulate strongly correlated topological quantum matter.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164484</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimized Bayesian Analysis Framework for the KATRIN Experiment</title>
<link>https://hdl.handle.net/1721.1/164483</link>
<description>An Optimized Bayesian Analysis Framework for the KATRIN Experiment
Xu, Weiran
Neutrinos, which were originally predicted to be massless within the Standard Model of particle physics, have been confirmed to possess non-zero masses through the discovery of neutrino flavor oscillations. Oscillation measurements precisely determine the mass-squared splittings between neutrino mass eigenstates, establishing lower limits for the effective electron-neutrino mass of 0.009 eV for the normal mass ordering and 0.050 eV for the inverted mass ordering. However, the absolute neutrino mass scale remains a fundamental open question in both particle physics and cosmology.&#13;
&#13;
Precise spectroscopy of the beta-decay spectrum provides a model-independent probe of the absolute neutrino mass via decay kinematics. The KArlsruhe TRItium Neutrino (KATRIN) experiment, utilizing a Magnetic Adiabatic Collimation and Electrostatic (MAC-E) filter spectrometer, sets the world's tightest upper limit of m_v &lt; 0.45 eV (90% C.L.) based on its first five measurement campaigns. KATRIN is scheduled to complete its 1,000-day data-taking period by the end of 2025, targeting a final sensitivity of m_v &lt; 0.3 eV. Future improvements on neutrino mass measurements will depend on advances in differential detection techniques and the development of atomic tritium sources.&#13;
&#13;
This thesis presents an optimized modeling of the KATRIN beta spectrum and a comprehensive analysis of the first five measurement campaigns. An improved framework for computing the theoretical beta spectrum and the KATRIN response function is developed to address the complexities arising from the asymmetric field configurations in the main spectrometer. Benefiting from a computational speedup of four orders of magnitude and improved numerical stability, frequentist best-fit values for individual campaigns are reported, together with an upper limit on neutrino mass using the Lokhov-Tkachov confidence belt construction method.&#13;
&#13;
Parallel Bayesian analyses are conducted on the same dataset, yielding an independent and complementary statistical interpretation of the experimental results. Posterior distributions for the squared neutrino mass are sampled for each campaign under a flat prior on m²ᵥ using the parallel Stretch-Move algorithm, and are subsequently combined with a novel approach developed in this work to enhance computational efficiency. Convergence of each Markov chain is assessed through autocorrelation time analysis, and the robustness of the results is validated through cross-team comparison and consistency checks with profile likelihood. The Bayesian results reported here enable straightforward integration with constraints from oscillation measurements and cosmological observations, and the methodologies developed in this work are directly applicable to the final KATRIN dataset, providing a foundation for future neutrino mass analyses and searches for physics beyond the Standard Model.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164483</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redox-Mediated Processes Toward Modular Electrochemical Systems</title>
<link>https://hdl.handle.net/1721.1/164482</link>
<description>Redox-Mediated Processes Toward Modular Electrochemical Systems
Mallia, Christopher T.
Electrochemical technologies offer an attractive path toward a sustainable future where conventional methods of storing energy or producing critical materials are increasingly coupled to renewable electricity generation. To enable such a future, it is imperative that we have a strong foundational understanding of the electrochemical reactions that are useful to our needs. Redox flow batteries (RFBs) have emerged as a promising architecture for large-scale storage of electricity to bridge the gap when renewable generation is unavailable. These devices operate by storing charge in the form of redox-active species that are dissolved into an electrolyte and subsequently passed through an electrochemical cell to either store or release electrical energy. An extension of the concept of RFBs toward more general applications is to use the dissolved redox-active species to drive a reaction with another material, either to increase the energy storage density through an electrochemically active charge-dense material, or to drive a useful chemical reaction. This extension is termed a redox-mediated (RM) process, and it inherits many of the complexities and intricacies of conventional electrochemical technologies, specifically those of RFB-type devices. The subject of this thesis is the development of knowledge and techniques for studying RM processes toward practical embodiments. While technical implementations of this concept are still nascent, many promising early results have been found in devices that use redox-mediated reactions to store electricity. Despite this, progress is frequently hindered by a lack of foundational knowledge from which to ideate better systems, and of techniques to experimentally determine the underlying physics. First, I trace the development of the RM concept in recent years, which has proceeded primarily through proof-of-concept electrochemical reactors that mimic RFBs. 
Second, I establish that the underlying nature of some RM reactions can be quantified and understood through corrosion principles, which guide our intuition for selecting chemistries and operating conditions. Third, I demonstrate that the behavior of many desirable RM chemistries is intrinsically coupled to passivation phenomena, and that this must be accounted for in reaction design. Fourth and finally, I provide experimental and practical guidance for researchers in this field, coupled with the design of apparatus and techniques useful for characterizing RM reactions in particular and electrochemical processes in general. This body of work is broadly intended to advance understanding of electrochemically active interfaces and enable technology concepts which promote a sustainable future.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164482</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Modeling of Chemical Reactivity for Sustainability</title>
<link>https://hdl.handle.net/1721.1/164481</link>
<description>Predictive Modeling of Chemical Reactivity for Sustainability
Singhal, Avni Priya
Predicting and controlling chemical reactivity is key to sustainable material and process design. However, modeling reactivity at scale remains challenging due to the computational demands of quantum chemical methods and the complexity of reaction mechanisms. This thesis explores how high-throughput computational approaches, rooted in quantum chemistry and enabled by automation, can be used to interrogate reactivity across large chemical spaces. We focus on two domains where reactivity governs process efficiency and sustainability: solvent-based carbon capture and polymer (specifically thermoset) manufacturing.&#13;
&#13;
We first investigate pi-conjugated heterocyclic nucleophiles as alternative carbon capture solvents to address the high regeneration energy and degradation rates of conventional amine-based systems. We combine synthetic template-based library enumeration, density functional theory (DFT), and machine learning models to evaluate binding energies, capture capacity, regeneration thermodynamics, and oxidative stability. Structure–property analysis reveals design strategies to enhance capture strength while balancing tradeoffs with desorption temperature and degradation resistance.&#13;
&#13;
We next focus on designing monomers for frontal ring-opening metathesis polymerization (FROMP), a polymerization mode that enables rapid, energy-efficient manufacturing of polymers. This self-propagating process harnesses exothermic reactions to sustain a polymerization front without continuous external heating, but it requires monomers with a finely tuned balance of thermodynamic and kinetic parameters. We develop a multi-level screening pipeline that integrates DFT-calculated properties with a reaction-diffusion model to predict front behavior directly from the atomistic structure of the monomer. We experimentally validate a preliminary pipeline, identifying a new class of FROMP-capable furan-benzyne monomers, and uncover additional candidates from unexplored chemical spaces that overcome limitations of known systems. &#13;
&#13;
Together, these studies demonstrate how high-throughput, mechanism-informed modeling can guide the discovery of molecules and materials that meet complex reactivity and performance criteria.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164481</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding</title>
<link>https://hdl.handle.net/1721.1/164479</link>
<description>The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding
Gill, Manraj Singh
An evolutionary selection for optimal expression of genes in regulatory networks has led to discernible sequence patterns in bacterial genomes observed in nature. Such patterns result from gene regulatory strategies that leverage sequence-dependent interactions with key cellular machineries and regulatory molecules. While numerous regulatory strategies that shape bacterial gene sequence have been characterized, predicting functional consequences from sequence alone remains challenging due to the sheer vastness of the possible sequence space. Moreover, the primary gene sequence encodes information on secondary and tertiary topologies that the molecules of the central dogma can fold into. Specifically, though local messenger RNA (mRNA) structures are known to regulate bacterial gene expression, the role of long-range mRNA folding remains unclear despite the predicted prevalence of such interactions across mRNAs. In bacteria, a major regulator of mRNA decay and translation rates is the accessibility of the ribosome binding site (RBS) to the ribosome. Sequences in the mRNA’s 5´ untranslated region (UTR) complementary to the RBS can decrease gene expression by base pairing and occluding ribosomes from binding. To determine whether such antagonistic sequences are also the primary determinants of sequence choice along the rest of the mRNA transcript, we measured the effect of all possible 8-nucleotide substitutions (65,536 variants) on mRNA levels when placed in multiple positions along a bacterial transcript. We find that, while the vast majority of substitutions in the middle of genes negligibly affect RNA level, 8-mers with complementarity to parts of the RBS exhibit the strongest effects, increasing RNA degradation rates up to 4-fold. RBS-complementary sequences also decrease translation initiation rates when placed in a coding sequence, and are able to occlude ribosome binding even when they are located hundreds of nucleotides away from the start codon. 
The inhibitory effect of such secondary structures on gene expression likely explains a strong selection against sequences complementary to conserved parts of RBSs throughout coding sequences of genes from diverse bacterial genomes, which we uncover through computational analysis. Together, this thesis reveals the widespread impact of RNA intramolecular interactions in vivo on both mRNA stability and translation and uncovers a key constraint on gene sequences.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164479</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding</title>
<link>https://hdl.handle.net/1721.1/164478</link>
<description>Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding
Hong, Celestine Jia Huey
Non-compressible torso hemorrhage (NCTH) and internal bleeding result in a significant number of preventable casualties worldwide, among civilians and in the field. In particular, internal bleeding can only be diagnosed through changes in vital signs and then through imaging modalities that may only be available in a hospital setting. Over the past few decades, researchers in the field have sought to address these needs by developing hemostats that can rapidly expand, bind, or seal an exposed wound, or interact with wound-specific components when delivered intravenously to enhance preexisting hemostatic processes.&#13;
&#13;
The first part of this thesis investigates the effect of hemostatic nanoparticle size on their interactions with platelets. Small nanoparticles were observed to result in an increased percentage of specifically-bound single platelets under flow and intermediate nanoparticles were observed to result in the greatest degree of platelet recruitment to a platelet-collagen surface. Large nanoparticles were observed to result in the most nanoparticle mass bound to a surface, the shortest circulation time and retention, and the highest pulmonary accumulation. Ultimately, intermediate nanoparticles were shown to result in the most significant increase in survival relative to the saline control in a lethal inferior vena cava (IVC) injury model (84.6% vs 26.7%), as well as the greatest accumulation at the injured IVC relative to uninjured vessel controls. &#13;
&#13;
Subsequently, the intermediate nanoparticles from the prior study were functionalized with bio-orthogonal click-crosslinkable azide groups to achieve targeted crosslinking behavior. Commercial multiarm PEG functionalized with the corresponding clickable moiety, dibenzocyclooctyne (DBCO), and DBCO-PEG-b-PLGA nanoparticles were delivered as the second part of this two-component system. This system was demonstrated to increase platelet recruitment and decrease fibrin loss during plasminolysis in vitro. When challenged in a mouse liver resection model, the two-component system resulted in significantly increased survival relative to the nanoparticle-only system and higher accumulation in the remnant liver. &#13;
&#13;
Finally, a charge-inverting polymer was synthesized through controlled radical polymerization. The material was demonstrated to undergo rapid charge inversion when exposed to physiological pH; when coated on microneedles, this resulted in near-complete lift-off of a layer-by-layer drug film into the dermis within a minute. This versatile release platform could be coated on wound dressings to facilitate the release of therapeutics that aid in healing, or used in other applications involving charged films. &#13;
&#13;
In sum, this thesis has investigated several new materials and assays for the treatment of traumatic hemorrhage, opening potential avenues for the development of more effective hemostats.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164478</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Nonconvex and Robust Optimization</title>
<link>https://hdl.handle.net/1721.1/164477</link>
<description>Advances in Nonconvex and Robust Optimization
Koukouvinos, Theodoros
Nonconvex optimization presents significant challenges, as identifying the global optimum is often difficult. This thesis introduces novel algorithms to find the exact solution of a broad class of nonconvex optimization problems. The thesis is structured into four parts. In Chapter 2, we propose a novel method for solving nonconvex optimization problems in which the nonconvex components are sums of linear times convex (SLC) functions. We introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the original nonconvex optimization problem. We then incorporate RPT within branch and bound to obtain the globally optimal solution of the nonconvex optimization problem. Using RPT, we obtain a convex relaxation by forming the perspective of each convex function and linearizing all product terms with newly introduced variables. To further tighten the approximation, we pairwise multiply constraints. Accordingly, in Chapter 3, we analyze all possibilities of multiplying conic constraints, a very wide class of constraints. Further, we delineate methods for deriving new, valid linear and second-order cone inequalities for pairwise constraint multiplications involving the power cone and exponential cone, thereby enhancing the strength of the approximation. In Chapter 4, we address nonconvex optimization problems that involve polynomials. We derive valid SLC decompositions for polynomials, in which the linear functions are inequalities of the feasible region and the convex functions are quadratics. We prove the existence of such SLC decompositions for polynomials of arbitrary degree. Further, out of the many possible SLC decompositions, we obtain the one that results in the tightest lower bound. Finally, in numerical experiments we show that our method often outperforms state-of-the-art approaches for polynomial optimization. 
In Chapter 5, we propose a robust optimization framework that immunizes some of the central problems of linear algebra against data uncertainty. Namely, we formulate linear systems, matrix inversion, eigenvalue-eigenvector computation, and matrix factorization under uncertainty as robust optimization problems using appropriate descriptions of uncertainty. We show that for both linear systems and matrix inversion, the robust approach leads to more accurate solutions than the nominal approach in the case of nearly singular matrices.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164477</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications</title>
<link>https://hdl.handle.net/1721.1/164476</link>
<description>Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications
Kadosh Zhitomirsky, Tamar
Green energy is a contemporary global concern, and research on materials for solar energy harvesting lies at the heart of potential solutions to the energy crisis. Halide perovskites are leading candidates to replace silicon in next-generation solar cells. This thesis focuses on halide perovskite materials, aiming to understand their structure, electronic and ionic properties, and photo-activity, and to redirect their fabrication techniques to address global market needs and requirements. In this work we developed alternative, vapor-based fabrication techniques, based on manufacturing-compatible, safe, rapid, and scalable processes, that have the potential to improve material stability and efficiency.&#13;
Vapor Transport Deposition (VTD) is investigated as a promising fabrication method for thin film halide perovskites and beyond. We explored the deposition parameter space and elucidated relationships and trends regarding composition, structure and deposition rate. We examined the morphology, crystal phase formation, optical and electrical properties, and finally the performance of the deposited films when incorporated into solar cells.&#13;
We begin by exemplifying the viability of vapor transport co-deposition in fabricating active perovskite films, utilizing methylammonium lead iodide (MAPbI3) as a simplified model system. We then design an improved version of the vapor transport deposition system and transition to the more technologically attractive perovskite composition formamidinium lead iodide (FAPbI3). Learning from previous attempts to fabricate this material, we developed a novel technique that we call hybrid two-step vapor-solution deposition, in which we use VTD to deposit the inorganic precursor, not readily dissolved in industry-acceptable solvents, and then react it with a solution of the organic precursors dissolved in a benign solvent. This technique allowed us to fabricate functioning FAPbI3-based solar cell devices in a safe, rapid, scalable, and manufacturing-compatible fashion. The deposition rate is significantly influenced by chamber pressure and source temperature, and by controlling all deposition parameters, we systematically reached rates of up to 1200 nm/min, orders of magnitude faster than current comparable techniques. We found the technique to be reproducible, yielding 13% efficient devices, with champion efficiencies of up to 15.3%. We believe this novel fabrication process offers an avenue for further improvement in solar cell stability and efficiency.&#13;
CsPbBr3, a fully inorganic halide perovskite, also shows great promise as a photon and gamma-ray detector and, like the other halide perovskites, is known to support halide ion conductivity that contributes to device instability and reduced sensitivity to irradiation. We chose this as a model system to apply concepts from defect chemistry and demonstrate the ability to measure and manipulate the ionic conductivity in the material through stoichiometry control and doping.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164476</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis</title>
<link>https://hdl.handle.net/1721.1/164475</link>
<description>Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis
Higinbotham, Hugh
Membrane-associated proteins are an essential component of the complex biochemistry carried out at the membrane interface and perform essential functions for cellular life. Biophysical characterization of protein structure-function relationships faces a unique set of challenges due to the constraints of phospholipid bilayer chemistry and geometry. Advances in X-ray crystallography and cryo-electron microscopy have made progress in this regard, but dynamic structural features remain difficult to study, and small membrane proteins, such as those responsible for bacterial glycosylation, remain challenging to characterize structurally at all. Bacterial glycan synthesis pathways are essential for cell function yet highly variable between strains, making them promising systems for targeted antibiotic development. Many of these pathways are initiated by small monotopic phosphoglycosyl transferases (SmPGTs), which show remarkable specificity for minute changes in glycan chemistry while being small enough to make many computational methods tractable, rendering them ideal model systems for developing multidisciplinary strategies to study membrane protein dynamics. This thesis presents a strategy that employs structural bioinformatics in Chapter 2, molecular dynamics (MD) simulation in Chapter 3, and single-molecule FRET microscopy (smFRET) in Chapter 4 to observe the ligand-dependent conformational dynamics of integral membrane proteins in situ. It focuses on representative members of the SmPGT superfamily, which catalyze transfer of a phosphosugar from a soluble nucleotide-sugar donor to a membrane-embedded polyprenol phosphate acceptor in the initiating step of glycoconjugate biosynthesis in prokaryotes. The pipeline is employed to confirm the role of SmPGT conformational dynamics in substrate binding and informs the design of non-hydrolyzable substrate-mimetic inhibitors. 
Chapter 5 further sets the stage for the use of structural bioinformatics and molecular simulation to characterize glycosyl transferase (GT) enzymes further down the pathway and presents initial results characterizing inter-protein cooperative interactions. The integrated approach of incorporating computational and experimental characterization methods has significantly contributed to the understanding of SmPGT structure-function relationships and opened up new directions of inquiry into specific PGT-ligand interactions, the development of new inhibitory compounds, and the role of inter-protein interactions in bacterial glycan synthesis.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164475</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions</title>
<link>https://hdl.handle.net/1721.1/164474</link>
<description>Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions
Carlson, Rebecca J.
Host-pathogen interactions represent a complex interplay that can evolve over millions of years. Interactions between bacteria or viruses and human cells, and the resulting evolved antipathogenic signaling pathways, are responsible for pathologies ranging from infectious diseases to autoimmune conditions and cancer. In addition, engineered designs inspired by pathogen interactions with hosts are increasingly being used to both treat and diagnose many pathologies that need not originate from infection with a pathogen. Therefore, it is critical to build and deploy scalable tools to better understand host-pathogen dynamics, both to better treat conditions where pathogens or antipathogenic signaling contribute directly to disease pathology and to engineer new treatments addressing a broader range of disease states.&#13;
&#13;
In this thesis, I describe approaches that leverage functional genomics and image-based screening to perturb and profile host-pathogen interactions, including responses to two RNA viruses, Sendai virus and Ebola virus. These provide case studies highlighting the utility of high-content image-based screening for revealing new genes regulating predefined phenotypes of interest, as well as for generating single-cell imaging profiles that can be used to infer new genetic functions and phenotypic states directly from screening data without a priori specification. I also highlight an example of a genetic screen that yielded a robust negative result, leading to the hypothesis and validation of a novel function of the STING protein as a proton channel.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164474</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The politics of metropolitan transportation.</title>
<link>https://hdl.handle.net/1721.1/164455</link>
<description>The politics of metropolitan transportation.
Colcord, Frank Carlton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1964
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164455</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On hypergraphs and hypergeometries.</title>
<link>https://hdl.handle.net/1721.1/164452</link>
<description>On hypergraphs and hypergeometries.
Helgason, Thorkell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1971; Vita.; Bibliography: leaves 158-159.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164452</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cocoa in the Ghanaian economy.</title>
<link>https://hdl.handle.net/1721.1/164450</link>
<description>Cocoa in the Ghanaian economy.
Bateman, Merril Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164450</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect</title>
<link>https://hdl.handle.net/1721.1/164449</link>
<description>Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect
Willett, Robert L. (Robert Lee)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1989; Includes bibliographical references (leaves 6-7).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164449</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations</title>
<link>https://hdl.handle.net/1721.1/164447</link>
<description>Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations
Callias, Constantine John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164447</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave equations, particles and chronometric geometry.</title>
<link>https://hdl.handle.net/1721.1/164446</link>
<description>Wave equations, particles and chronometric geometry.
Orsted, Bent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164446</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis</title>
<link>https://hdl.handle.net/1721.1/164410</link>
<description>Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis
Berger, Adam G.
The standard of care for diabetic wounds has remained relatively unchanged for decades, resulting in patients with wounds that do not heal on meaningful time scales, referred to as ulcers, and high rates of recurrence for patients whose wounds do heal. This common complication of diabetes decreases quality of life, increases mortality, and raises health care costs. Developing new paradigms to treat these wounds remains a formidable but critical challenge.&#13;
&#13;
Addressing diabetic ulcers at the molecular level may decrease healing time and prevent recurrence. Impaired blood vessel formation, or angiogenesis, in diabetic ulcers is an important target pathway. Angiogenesis is needed to bring oxygen, nutrients, signaling cues, and cells to newly formed tissue while removing waste. Oligonucleotide therapies that regulate gene expression at the post-transcriptional level, such as small interfering RNAs (siRNAs) or microRNA inhibitors (anti-miRs), hold particular promise for promoting angiogenesis and wound healing; however, the large size and negative charge of these therapies require drug carriers to mediate their biological effect.&#13;
&#13;
In this thesis, we leverage sequential electrostatic adsorption of oligonucleotide therapy and polyelectrolytes into thin film coatings on commercial wound dressings through the layer-by-layer (LbL) process. These dressings package oligonucleotide, enhance its transfection efficacy, and control its temporal release locally to the wound bed. After initial validation experiments, we sought to systematically understand our drug carrier system and use this insight to engineer better wound dressings. First, we developed a proof-of-concept anti-miR-coated dressing and showed its efficacy in promoting both wound closure and sex-dependent angiogenesis. We found that therapy released from coated dressings had a preferential association with different wound cell types, particularly endothelial cells. We then sought to uncover how changes in the oligonucleotide structure itself may alter interactions with transfection polymers in thin film coatings. We found that binding with certain polyelectrolytes differed based on whether the therapy was a flexible single stranded anti-miR or a more rigid double stranded helix siRNA. We also showed how chemically modified nucleotides, such as locked nucleic acid and 2’-O-methyl RNA, can modulate affinity to polyelectrolytes and ultimately impact transfection efficacy. We also elucidated how physicochemical properties of the hydrolysable transfection-enhancing poly(β-aminoester) polymer mediate its efficiency in transfecting oligonucleotide therapy. We demonstrated that a more hydrophobic polymer enhanced transfection efficacy through its ability to facilitate permeation of biological barriers. Finally, we identified how modulation of the anionic excipients contained in these thin film coatings can be leveraged to vary the release kinetics from coated wound dressings. We engineered formulations that released on a fast or slow time scale. 
We observed that while both release time scales promoted efficacy in wound closure, they did so through potentially different mechanisms despite the same putative pro-angiogenic anti-miR therapy.&#13;
&#13;
In sum, this thesis elucidates how physicochemical properties and formulation of coated wound dressings alter their interfacial effects with biological systems. We use this knowledge to rationally design better drug carriers that can deliver pro-angiogenic oligonucleotide therapeutics to the wound bed. The findings have broad applications in the delivery of nucleic acid therapies for a wide host of diseases where local delivery to the injured tissue could prove beneficial. Ultimately, we also advance our pro-angiogenic coated wound dressing strategy towards clinical translation. Our strategy has the potential to provide a new, targeted therapeutic paradigm to help those suffering from diabetic ulcers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164410</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors</title>
<link>https://hdl.handle.net/1721.1/164345</link>
<description>Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors
Vivona, Daniele
Solid-state oxygen ion conductors are crucial for electrochemical devices such as separation membranes, solid-oxide electrolyzers, fuel cells, and sensors, serving as a technological link between renewable energy generation and consumption. Currently, these conductors are limited by slow transport rates and high operational temperatures, which pose challenges and increase costs. Developing faster conductors that operate at lower temperatures requires reducing activation energy and enhancing the pre-exponential factor in the Arrhenius equation of conductivity. However, our understanding of the fundamental processes in oxygen ion transport and methods to improve oxygen ion conductivity remain limited. This thesis focuses on understanding the fundamental mechanisms that regulate oxygen ion transport. First, the migration energy barrier in perovskite oxides is linked to an electronic energy penalty from local charge screening near the hopping ion. The energy of local electronic states is identified as a fundamental descriptor of the migration barrier. Next, migration entropy and phonon density of states (DOS) are highlighted as the main factors regulating the pre-exponential factor of oxygen ion conductivity across different materials. The phonons of oxygen ions near the hopping ion significantly contribute to migration entropy, suggesting that migration entropy can be tuned by designing the phonon dynamics of these atoms. These results imply that a widely observed correlation between increasing pre-exponential factors and activation energy arises from coupling local electronic energy states and phonons. The results are extended to the formation of oxygen vacancies and interstitials in perovskite and Ruddlesden-Popper oxides. We find that defect formation energy rises with defect formation entropy, which is linked to electronic energy states interacting with phonons. 
In perovskite oxides, lower vacancy formation entropy is correlated with increasing oxygen phonon band center and shortening bond lengths with oxygen vacancy formation. In Ruddlesden-Popper oxides, lower interstitial formation entropy is associated with reduced octahedral tilting and local phonon changes. This thesis establishes a theoretical foundation for treating migration entropy and defect formation entropy as design variables in next-generation ionic conductors. By highlighting the impact of electronic structure and lattice dynamics on energy barriers and entropic drivers, the findings suggest new pathways for material design through the strategic separation of these factors and the intelligent design of lattice moieties in oxygen ion transport environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164345</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link>https://hdl.handle.net/1721.1/164269</link>
<description>Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Manuj
As Artificial Intelligence (AI) becomes increasingly interwoven into our creative, social, and learning experiences, we must ask: Will these technologies deepen our connection to the timeless human experiences of Being, Being Together, and Being Creative Together—or will they pull us apart, leaving us more anxious and isolated? In an era where AI systems are increasingly framed as our “co-creators” and “companions,” enabling hyper-personalized yet hyper-isolated interactions, this dissertation reclaims the prefix ‘co-’ as fundamentally interhuman—introducing a set of new paradigms that center human connection, co-creativity, and calm in the design of technologies.&#13;
&#13;
Central to this work, we’ve developed CoCo (coco.build), a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers—spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for young people. &#13;
&#13;
We weave these interconnected ideas into the unifying theme of “Being. Creative. Together.”— values we believe are both timeless and especially timely in the AI era. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture our capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another.&#13;
&#13;
Note: This work has been co-developed with Shruti Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164269</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward the computational transformation of legal theory and practice</title>
<link>https://hdl.handle.net/1721.1/164268</link>
<description>Toward the computational transformation of legal theory and practice
Mahari, Robert
This doctoral thesis seeks to advance the formalization of computational law as a distinct research discipline. It explores three interwoven key themes: the empirical understanding of legal systems through advanced computational methods; the development of computational tools to augment the capabilities of legal practitioners, thereby expanding access to justice; and the identification of novel, computationally-enabled regulatory interventions. This research directly confronts the global access to justice crisis and the shortcomings of conventional legal services that frequently leave businesses and individuals without adequate support. Furthermore, the thesis investigates innovative regulatory strategies for emerging technologies, aiming to synchronize legal frameworks with contemporary technological progress by exploring adaptive and forward-looking governance approaches.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164268</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields</title>
<link>https://hdl.handle.net/1721.1/164267</link>
<description>Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields
Shtarbanov, Ali
Physical, digital, and conceptual tools and building blocks are fundamental enablers and accelerators of humanity’s progress in technology, science, medicine, art, and even in abstract fields like mathematics, philosophy, and social sciences. Hardware development platforms present a special class of tools and building blocks, facilitating and accelerating innovation, prototyping, and research. They drastically reduce prototyping time and complexity, improve efficiency for experts, democratize access to innovation, and even inspire entirely new ideas. This research investigates how to design, develop, and deploy development platforms in ways that maximize their real-world impact potential. It focuses not only on the technical and engineering aspects, but also on the complete ecosystem a platform needs in order to have impact, including community building, engagement with users and volunteers, content strategy, online presence, publicity, deployment, feedback loops, modularity, financial viability, and symbiotic relationships. A comprehensive Design &amp; Deployment Framework is introduced as a conceptual tool for creating high-impact platforms and creative ecosystems, recognizing and fostering the positive feedback loops that sustain them and that shape their evolution and growth. This framework is applied in the development and deployment of multiple novel platform and ecosystem projects, including FlowIO, SleeveIO, and ModiStrap, as well as the ecosystem SoftRobotics.IO. These works have benefited thousands of people around the world, providing researchers, designers, and engineers with powerful, reconfigurable, modular enabling artifacts that streamline prototyping, accelerate research, and lower barriers in fields like soft robotics, haptics, assistive technology, shape-changing interfaces, interactive arts, and more. 
A multitude of research, art, and engineering projects made possible by FlowIO and SoftRobotics.IO are presented, as well as over a dozen case studies showcasing how other users across disciplines have adopted, utilized, and extended these systems to advance their own creative, educational, and technical endeavors. Additionally, this thesis investigates various deployment models for hardware and introduces a new hardware deployment model for equitable access to expensive hardware that may otherwise be financially out of reach for many users, as well as an “earned open-source” model, which preserves the essence of the traditional open-source model while eliminating many of its pitfalls.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164267</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private, Verifiable, and Auditable AI Systems</title>
<link>https://hdl.handle.net/1721.1/164264</link>
<description>Private, Verifiable, and Auditable AI Systems
South, Tobin
The growing societal reliance on artificial intelligence necessitates robust frameworks for ensuring its security, accountability, and trustworthiness. This thesis addresses the complex interplay between privacy, verifiability, and auditability in modern AI, particularly in foundation models. It argues that technical solutions that integrate these elements are critical for responsible AI innovation. Drawing from international policy contributions and technical research to identify key risks in the AI pipeline, this work introduces novel technical solutions for critical privacy and verifiability challenges.  Specifically, the research introduces techniques for enabling verifiable and auditable claims about AI systems using zero-knowledge cryptography; utilizing secure multi-party computation and trusted execution environments for auditable, confidential deployment of large language models and information retrieval; and implementing enhanced delegation mechanisms, credentialing systems, and access controls to secure interactions with autonomous and multi-agent AI systems. Synthesizing these technical advancements, this dissertation presents a cohesive perspective on balancing privacy, verifiability, and auditability in foundation model-based AI systems, offering practical blueprints for system designers and informing policy discussions on AI safety and governance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164264</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Models as Mirrors and Bridges for Intergroup Communication</title>
<link>https://hdl.handle.net/1721.1/164263</link>
<description>Language Models as Mirrors and Bridges for Intergroup Communication
Jiang, Hang
This dissertation explores how large language models (LLMs) can serve dual roles in intergroup communication: as mirrors that reflect intergroup differences, and as bridges that facilitate communication across group boundaries. Intergroup communication refers to interactions between individuals from different social groups, such as political, cultural, or professional communities, where divergent perspectives often lead to misunderstandings, unequal access to information, and social fragmentation.&#13;
&#13;
The first part of the dissertation presents LLMs as mirrors that reveal intergroup differences. We first introduce CommunityLM, a novel framework for probing public opinion by fine-tuning LLMs on social media posts from specific communities. Our case study comparing Republican and Democratic groups reveals that model predictions align well with human survey responses, substantially outperforming established baselines. Building on this foundation, we develop PersonaLLM to investigate whether prompt-based LLM agents can generate content aligned with assigned personas, which has emerged as a popular approach for modeling the behaviors of social groups. Through automated and human evaluations, we demonstrate that these agents can complete personality tests and write stories that reflect the distinctive behavioral patterns of specific personality profiles. Together, these complementary projects illustrate how LLMs can effectively capture and simulate the unique perspectives and behaviors that characterize diverse social groups.&#13;
&#13;
The second part of the dissertation presents LLMs as bridges that facilitate communication across group boundaries. First, we introduce Bridging Dictionary, an interactive tool that uses retrieval-augmented generation (RAG) techniques with LLMs to identify polarized language and suggest more inclusive alternatives. In collaboration with PBS Frontline, we demonstrate the potential of LLMs to reduce misunderstanding in journalism and political communication. Second, we present Legal Storytelling, a human-LLM collaboration framework that generates accessible narratives to explain complex legal concepts to non-experts. Through randomized controlled trials (RCTs), we find that LLM-generated narratives can improve legal literacy and help bridge communication gaps between experts and laypeople, particularly among non-native English speakers. Third, we develop FaciliTrain, a voice-based, LLM-powered system that enables facilitators to learn and practice intergroup dialogue skills with multiple LLM agents representing diverse social backgrounds and personas in a small-group setting. User studies with campus participants show encouraging early results, suggesting that LLMs can effectively support the development of communication skills essential for constructive intergroup dialogue. Together, these projects illustrate how LLMs can actively foster mutual understanding across social divides by promoting more inclusive, accessible, and constructive communication.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164263</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Facilitating Creative Learning: Engaging in a Practice of Care</title>
<link>https://hdl.handle.net/1721.1/164261</link>
<description>Facilitating Creative Learning: Engaging in a Practice of Care
Presicce, Carmelo
Creative learning is shaped not only by tools and activities, but by relationships. This dissertation explores facilitation in creative learning environments as a relational practice centered on care—not as a set of techniques, but as a deeply human way of being with others, a commitment to creating spaces where people feel supported enough to explore, connected enough to share, and valued enough to express themselves. Grounded in constructionist, socioconstructivist, and humanistic pedagogies, the research draws from my multi-year engagement with Learning Creative Learning (LCL)—an online course and global community for educators—and WeScratch, a series of hands-on, collaborative online workshops introducing educators to creative coding. Through qualitative analysis of small-group facilitation during WeScratch workshops, I explore how volunteer facilitators experience and reflect on their practice. Drawing from three case studies, I examine how care takes shape in the situated, relational work of creative learning facilitation. In particular, I identify three interrelated forms of care: epistemic care, which focuses on what and how people learn; affirming care, which supports what learners value and who they are; and convivial care, which attends to how learners feel and relate to one another in a group. After introducing these three forms of care through the work of individual facilitators, I show how epistemic, affirming, and convivial care are deeply interwoven in practice—at times reinforcing one another, at times pulling in different directions. Facilitators must navigate these tensions in the moment, making situated judgments about when to step in, when to hold back, and how to respond to the evolving needs of individuals and groups. By centering care, this research highlights facilitation as deeply human, relational work that sustains the conditions for creative learning, contributing to the broader and evolving discourse on constructionism. 
It also makes the case for seeing facilitation as an ethical and political practice. In a time when educational discourse is increasingly shaped by ideals of efficiency and optimization—and the world faces rising authoritarianism and dehumanization—choosing to care is not only pedagogically meaningful, but also politically urgent.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164261</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments</title>
<link>https://hdl.handle.net/1721.1/164260</link>
<description>Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments
Brenneis, Rebecca J.
Annual global average temperatures in the past year have already exceeded the international target limit of 1.5°C, and the window to prevent that rise from extending is rapidly closing. The high global warming potential (GWP) and short atmospheric residence time (half-life of around 12 years) of methane make it a critical target for action to slow the pace of climate change in this decade. Yet technological solutions for methane abatement are challenged by methane’s inertness, dilute atmospheric concentrations, and diffuse, variable emissions sources. In this thesis, I propose the use of bio-inspired, earth-abundant, heterogeneous catalysts as a novel tool for atmospheric and emissions-based methane abatement. Copper zeolites were characterized for their ability to convert low levels of methane, continuously, at low temperatures, for moderate durations, and in the presence of a variety of gaseous mixture influents designed to mimic atmospheric air at standard temperatures and pressures. Catalytic performance was tested under conditions designed to mimic those found at two of the primary sources of low-level, anthropogenic emissions: ventilation air methane (VAM) and industrial dairy. Laboratory-synthesized catalysts were shown to completely oxidize methane at concentrations ranging from atmospheric to 1%, covering the range of subflarable levels. Conversion was demonstrated at temperatures as low as 270°C, with complete conversion achievable as low as 350°C, in the presence of 20% oxygen. While the presence of water vapor, nitric oxide, and hydrogen sulfide was shown to partially reduce catalytic efficiency, conversion efficiency was restored with increased temperature. The presence of carbon dioxide, alkanes, ammonia, and hydrogen, at industrially relevant concentrations, had no effect on catalytic performance. Finally, atmospheric samples were collected at six industrial-scale dairy barns across the Midwest and compared with the simulated laboratory conditions. 
Dairy samples fell within the ranges tested at the bench scale, showing no evidence of any impediment to copper zeolite as a potential abatement tool. Methane concentrations at dairies were shown to be on the order of atmospheric to low tens of ppmv, making copper zeolites the only currently identified abatement strategy to address methane emissions at these locations. While it remains to be shown that these zeolites can provide a net greenhouse gas benefit under the conditions required, copper zeolites are a strong option on a short list of technologies to address methane at any subflarable concentration, sources of which comprise 80% of global emission sources, showing great promise as a climate technology breakthrough.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164260</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link>https://hdl.handle.net/1721.1/164259</link>
<description>To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Shruti
In an era where Artificial Intelligence (AI) systems are increasingly framed as our “companions” and “co-creators,” this dissertation reclaims “co-” as a fundamental marker of shared human experience—using it as a foundation to reimagine and build technologies that consciously center interhuman connection and co-creativity. Central to this work, we’ve developed CoCo (coco.build)—a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers, spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for youth. We anchor these interconnected ideas in a unifying theme of “Being. Creative. Together.”—reflecting timeless values that have become especially timely in an era when AI tools can further accentuate individualized digital experiences for young people. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture young people’s capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another. &#13;
&#13;
Note: This work has been co-developed with Manuj Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164259</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Machine Learning over Fragmented Data</title>
<link>https://hdl.handle.net/1721.1/164169</link>
<description>Decentralized Machine Learning over Fragmented Data
Singh, Abhishek
The remarkable scaling of data and computation has unlocked unprecedented capabilities in text and image generation, raising the question: Why hasn’t healthcare seen similar breakthroughs? This disparity stems primarily from healthcare data being fragmented across thousands of institutions, each safeguarding patient records in regulatory-compliant silos. The problem is not limited to healthcare but extends to other industries with fragmented data across institutions and individuals. Instead of centralizing various datasets to solve the fragmentation problem, which raises regulatory and ethical concerns, this thesis proposes systems and algorithms to decentralize the machine learning pipeline. Current approaches in this area have centered around Federated Learning (FL), which enables model training over distributed data. However, FL’s dependence on central coordination and inflexibility with heterogeneous systems limit its applicability in healthcare settings. Motivated by these challenges, I explore the following three core themes:&#13;
&#13;
1) Coordination – Today’s coordination algorithms typically rely on static rules or randomized communication, approaches that turn out to be sub-optimal when data heterogeneity is high. I present a new system and a benchmark framework that enables systematic assessment of different coordination algorithms. Next, I propose an adaptive coordination algorithm that leverages historical performance and learning dynamics to improve coordination.&#13;
&#13;
2) Heterogeneity – Data owners can vary significantly in their data distributions, computational resources, and privacy requirements. To address this heterogeneity, I turn the focus from the traditionally protected training phase to securing the critical inference process. Next, I develop techniques for distributed training that adapt to heterogeneous computational capabilities across different agents.&#13;
&#13;
3) Scalability – Enabling scaling in decentralized ML requires addressing three key challenges: parallelization, synchronization, and self-scaling. While parallelization has advanced significantly, the other two remain challenging. I present a framework for offline collaboration through sanitized, synthetic datasets that eliminates constant synchronization needs while preserving privacy.&#13;
&#13;
This thesis identifies and addresses some of the bottlenecks along these three core themes through a complementary set of solutions: adaptive coordination, heterogeneity-aware training, and scalable collaboration. Together, these building blocks can enable a practical framework for unlocking data silos across institutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164169</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between spatial structure and competition in ecological communities</title>
<link>https://hdl.handle.net/1721.1/164168</link>
<description>Interplay between spatial structure and competition in ecological communities
Swartz, Daniel W.
Ecology, much like physics, has a long history of theoretical contributions. In this thesis, we take a physics approach to describing ecological communities, searching for simple, emergent features that can generalize beyond specific models of community dynamics. Unifying all of the models we study is an underlying spatial structure, leading to a richer set of possible behaviors than a typical well-mixed model. We first study the case of a metapopulation, a collection of smaller communities linked by dispersal. We find that when the environment is allowed to fluctuate stochastically, new growth laws emerge at the single-species level, and high diversity is achieved in the case with many species. We then study the case of pathogen evolution, again in the metapopulation framework. We find that intermediate dispersal can act as a strong driver of pathogen evolution. We also study what happens as a population of microbes expands into unexplored territory, known as a range expansion. We find that a simple model can capture all morphological phases observed in experiments and predict invasion fitness as a function of local and global competitive ability. We also break a standard assumption in microbial ecology, the isotropy of space, and find that a new sector morphology emerges.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164168</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation</title>
<link>https://hdl.handle.net/1721.1/164167</link>
<description>The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation
Martyn, John Michael
Since the dawn of science, computation and physics have evolved alongside each other, both driven by a shared quest to solve problems and calculate properties of the natural world. Today, this symbiotic relationship is epitomized in quantum information science, which proposes to use quantum mechanics to solve hard computational problems and develop new paradigms of communication and cryptography. Yet often absent from these developments is a clear, human-interpretable understanding, with many quantum protocols built from inherently quantum concepts (e.g., entanglement, superposition) that defy our classical line of thought and muddle the search for efficient quantum algorithms. Here we show that this search need not be so opaque: simple mathematical tools, namely polynomials and their fundamental theorems, in unison with concepts from classical computing, provide a powerful framework for the design of quantum algorithms. We develop this framework and use it to construct an assortment of quantum algorithms, including methods for quantum simulation, parallel computing, randomized algorithms, and continuous-variable quantum hardware. In illuminating this framework, we find a striking bidirectional flow: just as classical concepts inspire new quantum algorithms, so too can quantum mechanical insights bring about novel methods of classical computing. In this reverse direction, we adopt inherently quantum concepts, such as random compilation and bosonic symmetry, to develop new classical methods, with applications in simulating quantum systems and designing robust neural networks. In aggregate, this thesis provides a compendium of algorithmic techniques for probing quantum systems and solving hard problems, using both quantum and classical tools—an “algorithmic cookbook”—predicated on deep connections between these two domains. The recipes presented here aim to demystify black boxes of quantum information science, and provide a valuable resource for future developments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164167</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Death of Quasiparticles: Strongly Interacting Gapless Phases&#13;
with Fermi Surfaces and Fractional Statistics</title>
<link>https://hdl.handle.net/1721.1/164166</link>
<description>The Death of Quasiparticles: Strongly Interacting Gapless Phases&#13;
with Fermi Surfaces and Fractional Statistics
Shi, Zhengyan
The emergence of quasiparticles at low temperature provides a powerful organizing principle for many quantum phases of matter, ranging from conventional magnets and superconductors to exotic insulators with topological order. In this thesis, I describe my research in gapless quantum phases in which the framework of quasiparticles breaks down. The main characters are two categories of gapless phases that feature the interplay between strong interactions and two additional ingredients – Fermi surfaces and fractional statistics. Chapter 2 through Chapter 5 focus on strongly interacting metals with Fermi surfaces. The most salient examples are a class of Hertz-Millis models describing the onset of spontaneous symmetry breaking in a metallic environment. At the quantum critical point, gapless order parameter fluctuations destroy quasiparticles living on the Fermi surface, giving rise to a strongly coupled non-Fermi liquid metal. A key result of these chapters is the identification of an infinite-dimensional symmetry that survives in these non-Fermi liquid metals despite the death of quasiparticles. This infinite-dimensional symmetry and its quantum anomaly lead to a series of non-perturbative results on thermodynamics and transport, which are confirmed by perturbative diagrammatic calculations in special examples. Chapter 6 through Chapter 8 explore quantum phases in which anyonic quasiparticles with fractional statistics play an essential role. When parameters in the system are tuned to close the anyon energy gaps, the original anyons lose their coherence and a variety of novel phases emerge. A highlight in this direction is a new mechanism for topological superconductivity in itinerant abelian and non-abelian anyon fluids, which could make contact with experiments on doped fractional quantum anomalous Hall states in the near future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164166</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets</title>
<link>https://hdl.handle.net/1721.1/164165</link>
<description>Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets
Alipour-fard, Samuel
This thesis presents the author’s work in developing probes of the inner structure of jets in high-energy particle collisions. We begin by introducing QCD and the scattering of partons (quarks and gluons), discussing jets as theoretical and experimental proxies for partonic physics, and presenting the partonic cascade model of jet formation and jet substructure. Noting the ubiquitous presence of low-energy pollution in particle collision events, in the forms of hadronization, detector effects, the underlying event (UE), and pileup (PU), we then move towards the modern research area of developing pollution-insensitive probes of jet substructure. Pollution-insensitive features of jet substructure are often accessed theoretically either through jet grooming or energy-weighted correlation functions. We present the basics of the modern theory of jet grooming as well as the work of the author in developing the Piranha paradigm for continuous jet grooming, introduced by the author in Ref. [1], and explore the formal and phenomenological benefits of continuous grooming techniques as pollution-insensitive probes of jet substructure. We introduce the basics of the simplest energy-weighted correlation function – the energy-energy correlator (EEC), which probes angular correlations between particle pairs – and discuss its multi-particle analogues. We focus on the efficient and visually intuitive projected and resolved energy correlators introduced by the author in Ref. [2], which provide computationally-realistic, pollution-insensitive probes of angular many-body correlations in QCD jets. Finally, we exposit the generic theory of energy-weighted observable correlations (EWOCs), introduced by the author in Ref. [3], which utilizes the energy weighting of the EEC to provide pollution-insensitive probes of non-angular correlations within jets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164165</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-state cavity quantum electrodynamics with spin ensembles</title>
<link>https://hdl.handle.net/1721.1/164164</link>
<description>Solid-state cavity quantum electrodynamics with spin ensembles
Wang, Hanfeng
Quantum sensors have the potential to operate at fundamental physical performance limits. Among various quantum sensing platforms, solid-state spin emitters stand out due to advantageous characteristics such as room-temperature spin polarization and readout, atomic-scale spatial resolution, and extended coherence times. Despite these strengths, traditional optical detection methods exhibit low readout fidelity in solid-state ensembles, severely limiting their achievable sensitivity. This thesis addresses this limitation by coupling a solid-state emitter ensemble to a microwave cavity, forming a cavity quantum electrodynamics system. Our approach eliminates the need for photon collection required by conventional optical readout methods, and the resulting strongly coupled system allows efficient cavity-based probing of the solid-state spin ensemble. By exploiting the hybrid quantum system with cavity quantum electrodynamics, we achieve record-high sensitivity for solid-state quantum sensors, representing a substantial advancement toward achieving fundamental sensing limits.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164164</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering</title>
<link>https://hdl.handle.net/1721.1/164163</link>
<description>Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering
Ghorashi, Ali
Focusing on the topological band properties of photonic crystals and the plasmonic properties of two-dimensional metals, we seek to answer the question: what is the phase space of photons in matter? For topology, what are the physical parameters that determine whether a given photonic crystal band hosts Dirac points, a non-zero Chern number, or topologically protected corner states? And for plasmons, what are the experimentally addressable ranges of plasmonic dispersions, phase velocities, confinements, and losses? In particular, is it possible to engineer the elusive lossless plasmon? Using high-throughput screening, artificial intelligence, and atom-by-atom engineering through density functional theory, we determine the topological prevalence of photonic bands, propose two systems that evade plasmonic losses through the electron-phonon interaction, and (re)discover general physical laws that govern the geometries of photonic eigenstates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164163</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Sample Efficiency of Data-Driven Decision Making</title>
<link>https://hdl.handle.net/1721.1/164162</link>
<description>On the Sample Efficiency of Data-Driven Decision Making
Qian, Jian
This thesis studies the fundamental problem of decision making under uncertainty through the lens of statistical decision theory. We characterize the minimax risk, which captures the sample efficiency required for effective decision making across three key settings: offline estimation with batch data, online estimation with sequential data, and interactive decision making as exemplified by multi-armed bandits and reinforcement learning. The first part of the thesis develops novel algorithmic and theoretical tools to enhance decision making in these regimes and to bridge the gaps between them. We revisit logistic regression in the offline setting and provide guarantees without restrictive boundedness assumptions. We then propose meta-algorithms that reduce online estimation to offline estimation, enabling any offline estimator to be used effectively in online scenarios. Furthermore, we present general-purpose algorithms for interactive decision making problems by leveraging offline or online estimation techniques. The second part of the thesis introduces a unified approach to understanding the fundamental complexity of interactive decision making. We propose the Decision Making with Structured Observation (DMSO) framework, which encompasses bandits, reinforcement learning, and more general settings. Within this framework, we develop a new complexity measure—the Decision-Estimation Coefficient (DEC)—which captures both upper and lower bounds for minimax regret. DEC extends classical notions such as the modulus of continuity to interactive scenarios by introducing an adaptive variant of Le Cam’s method. Finally, we unify the three classical lower bound techniques—Le Cam’s method, Assouad’s lemma, and Fano’s inequality—through a generalized formulation that also incorporates the DEC, offering a comprehensive understanding of the minimax risk in decision making tasks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164162</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards achieving power autonomy in soft-actuated micro aerial robots</title>
<link>https://hdl.handle.net/1721.1/164161</link>
<description>Towards achieving power autonomy in soft-actuated micro aerial robots
Ren, Zhijian
Micro aerial robots with insect-like flight capabilities hold immense promise for various applications, including environmental monitoring, precision agriculture, and infrastructure inspection in confined spaces. However, realizing power autonomy in these miniature robotic platforms presents significant challenges due to weight constraints, power density limitations, and inefficient actuation at small scales. This dissertation presents three essential improvements towards achieving power autonomy in soft-actuated micro aerial robots. Our robotic platform is driven by a dielectric elastomer actuator (DEA) and generates lift force through flapping wings, a mechanism similar to that found in flying insects. First, we implemented a dynamic model to optimize the robot components for pairing with an improved DEA to generate a higher lift force. The robot achieved a peak lift-to-weight ratio of 4.3 and demonstrated a 20-second hovering flight with position and attitude errors smaller than 2.5 cm and 2°. Second, we fabricated a lightweight high-voltage boost converter that transformed a 7 V DC input into an AC waveform of 600 V and 400 Hz to drive the actuator. This is the first onboard boost converter that can drive the soft-actuated micro aerial robot to take off, and it represents a substantial achievement in miniaturizing power electronics for microrobots. Third, we took inspiration from the natural autorotation of maple seeds in their slow descent. We implemented the first samara-inspired mechanism on micro aerial robots, enhancing lift generation while maintaining in-flight attitude stability without feedback control. The 1.22-gram vehicle can stably take off in 1 second with a total input thrust of 1 gram-force. These accomplishments provide a pathway towards achieving power autonomy and open opportunities for developing agile, robust, and autonomous micro aerial robots for diverse applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164161</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope</title>
<link>https://hdl.handle.net/1721.1/164160</link>
<description>Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope
Simonaitis, John
The interaction of free electrons with matter and light is among the most fundamental processes in nature. From the use of free electrons for atomic imaging, to their use in the generation of high-intensity, tunable light in synchrotrons, the physics of unconfined electrons has wide application. In recent years, there has been a new focus on the quantum nature of individual electrons in electron microscopes to enable further improvements in these technologies. This work takes advantage of developments in ultrafast optics, electron spectroscopy, quantum optics, and nanofabrication to explore various electron-electron, electron-photon, and electron-material interactions. In this thesis, we construct a low-energy, ultrafast scanning electron microscope, using it to explore quantum coherent interactions between electrons, light, and matter.&#13;
&#13;
In Chapter 1, we review the history of free-electron experiments and how advances in nanofabrication, low-dimensional materials, and ultrafast optics have opened new opportunities for electron-light interactions to a degree not previously possible. In Chapter 2, we discuss experimental forms of quantum electron microscopy known as interaction-free measurement and electron multi-passing. Chapter 3 details a general theory of electron-photon interactions, including simulations with quantum two-level systems and extended optical nanostructures. In Chapter 4, we design and construct a second microscope with ultrafast triggering, an electron spectrometer with sub-eV resolution, nanostructured interaction regions, and active beam alignment. Chapter 5 explores various experimental results, demonstrating enhanced loss spectroscopy of 2D materials, energy resolution of gold nanoparticle plasmons, as well as spectroscopy of time-tagged cathodoluminescence from optical fibers. Finally, in Chapter 6, we discuss future perspectives of this approach, analyzing the impact a heralded electron source would have on electron microscopy and lithography.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164160</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment</title>
<link>https://hdl.handle.net/1721.1/164159</link>
<description>Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment
Park, Mary Isabelle
In the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC), lead ions are collided at ultra-relativistic velocities to produce Quark-Gluon Plasma (QGP), a state of matter where quarks and gluons are deconfined and move collectively. Jets are produced in high-momentum transfer parton scatterings prior to and independently of QGP formation, and serve as natural probes of its properties. As the high-energy partons pass through the QGP, they lose energy through medium-induced gluon radiation and elastic scattering, resulting in jets that are modified with respect to the vacuum baseline. In this thesis, jet modification is quantified by measuring the jet production cross section as a function of jet radius in inclusive jets and the jet axis decorrelation in jets recoiling from isolated photons in Lead-Lead (PbPb) and Proton-Proton (pp) collisions. Both measurements indicate that effects of medium-induced jet broadening may be balanced by survivor bias in PbPb collisions, potentially due to differences in the magnitude of quenching of wide versus narrow jets. The results underline the importance of constraining the initial jet kinematics with bosons, which are unmodified by the QGP.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164159</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery</title>
<link>https://hdl.handle.net/1721.1/164158</link>
<description>Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery
Wang, Peidong
Stratospheric ozone serves as Earth’s natural protective layer, shielding the surface from harmful ultraviolet radiation. The discovery of the Antarctic ozone “hole” in the late 1980s raised significant societal and scientific concern, prompting the rapid regulation of ozone-depleting substances (ODSs) under international treaties. While signs of ozone recovery have begun to appear, new challenges continue to arise. This thesis investigates three critical factors driving stratospheric ozone changes and influencing the detection of ozone recovery: (1) ODS emissions, (2) chemical chlorine processes, and (3) internal climate variability. With ODS emissions being regulated under the Montreal Protocol and studies now focusing on illicit new production on the order of tens of gigagrams per year, the ocean’s role as both a natural source and sink of ODSs becomes increasingly important. However, these processes have often been overlooked or highly simplified in past ozone assessments. Using a hierarchy of models, from simple box models to global ocean general circulation models, I quantified the ocean’s uptake and release of various ODSs. Chapter 2 examines the ocean’s uptake of chlorofluorocarbons (CFCs), particularly emphasizing its influence on recent illicit CFC emissions estimation. Chapter 3 extends this analysis to include ocean uptake and potential microbial degradation processes, evaluating their effects on emission estimates for various hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs), which are chemical constituents that have been used to replace CFCs. Once these man-made ODSs reach the stratosphere, they are photolyzed to chlorine reservoir species (e.g., HCl and ClONO2), which, through heterogeneous reactions, can transform into reactive chlorine that depletes ozone. While heterogeneous chlorine activation on volcanic ash is well understood, the unprecedented 2020 Australian wildfires raised new questions about chemical processes on smoke particles.
This knowledge gap existed because only a few wildfires had injected significant amounts of smoke particles into the stratosphere during the satellite era. Leveraging over 30 years of satellite data, I separated chemical and dynamical processes affecting chlorine reservoir species to quantify chemical chlorine activation across different aerosol types. In Chapter 4, I developed a new approach to quantitatively estimate the onset temperature for chemical chlorine activation after the 2020 Australian wildfire using satellite observations. Chapter 5 applies this method to compare the impact of chemical chlorine activation from two independent wildfire events with that from a series of volcanic eruptions of varying magnitudes. Despite emerging challenges such as illicit emissions and recent wildfires and volcanic eruptions, advancements in observational records, our understanding of ozone chemistry, and computational power have significantly enhanced our ability to quantitatively detect and attribute stratospheric ozone changes. In Chapter 6, I applied a pattern-based “fingerprinting” technique to quantitatively separate the contributions of ODS forcing from other external forcings and internal variability in satellite observations. This analysis shows that Antarctic ozone increases cannot be explained by internal climate variability alone, providing strong confidence that ozone recovery is underway, primarily driven by human efforts to reduce ODS emissions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164158</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-Induced Collective Interactions in Arrays of Quantum Emitters</title>
<link>https://hdl.handle.net/1721.1/164157</link>
<description>Light-Induced Collective Interactions in Arrays of Quantum Emitters
Rubies-Bigorda, Oriol
The interaction between light and matter has captivated physicists for centuries, from early studies of vision and refraction in ancient Greece to the development of quantum mechanics and quantum electrodynamics in the past century. While the response of a single quantum emitter to light is well understood, the radiative properties of an ensemble of closely spaced emitters are far more intricate. Coupling to a shared electromagnetic environment induces coherent and dissipative interactions between emitters, giving rise to a collective response that cannot be captured by treating them independently. In the regime of few excitations, the system hosts delocalized subradiant states, that is, coherent superpositions that are largely decoupled from the electromagnetic field and thus decay at suppressed rates. While this weak coupling makes subradiant states attractive for quantum technologies, it also renders them difficult to manipulate. At higher excitation densities, the intrinsic nonlinearity of emitters and the exponential growth of the Hilbert space make theoretical and numerical descriptions of the system and its dynamics increasingly challenging. This thesis explores two fundamental questions: How can subradiant and dark states be selectively accessed and harnessed for practical applications in quantum technologies? And how can interacting ensembles of quantum emitters be efficiently simulated to uncover their many-body physics? The first part of the thesis develops protocols for controlling and addressing dark states in free-space and waveguide-coupled atomic arrays, demonstrating their utility in quantum storage and the deterministic generation of entangled photonic states. Incorporating atomic motion, we further show that collective subradiant states can enhance cooling in dense atomic arrays, offering new avenues for controlling motional dynamics. 
In the second part, we introduce cumulant expansions of the equations of motion as a powerful tool to analytically and numerically investigate collective decay in the many-body regime. We first examine the collective decay of fully excited atomic arrays in free space, characterizing the onset and scaling of the superradiant burst across different geometries. In collaboration with experiments on ultracold erbium atoms in optical lattices, we provide the first direct observations of many-body collective effects in free-space ordered arrays, including early-time superradiant bursts, late-time subradiant tails, and the emergence of atomic correlations throughout the dynamics. Finally, we theoretically and numerically explore the transient formation of multi-excitation subradiant states, and demonstrate how the existence of multiple dissipation channels suppresses steady-state superradiance in extended arrays.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164157</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Core Inductors for High Saturation Capability</title>
<link>https://hdl.handle.net/1721.1/164156</link>
<description>Hybrid Core Inductors for High Saturation Capability
Yang, Rachel S.
Power electronics are critical for any system requiring electricity and often impact the performance of these systems. In many cases, the performance of power electronics is limited by lossy and large inductors that are constrained by the saturation of their magnetic core material. Such saturation-limited inductors are typically found in power electronics applications where the inductor sees large dc current with relatively small ac ripple, such as EMI filters or converters operating in continuous conduction mode. This thesis investigates two types of inductor designs that can achieve higher saturation capability by combining multiple materials in a single core, enabling these designs to achieve greater energy storage or lower loss than conventional single-material cores. The first design combines a permanent magnet with a soft magnetic material (e.g. ferrite) to form a PM hybrid core. This core achieves higher saturation capability by directing PM flux to oppose winding flux in the ferrite. First-order models, design processes, and other theory for the PM hybrid core are developed in this thesis, and different geometries for this core are explored. Additionally, two PM hybrid core prototypes are presented, one using a pot core geometry and one using a modified E core geometry. The PM hybrid pot core prototype achieves 70% more energy storage or 50% of the dc loss versus comparable ferrite prototypes, while the PM hybrid E core prototype achieves 30% more energy storage or a minimum of 52% of the total loss versus comparable ferrite prototypes. The second design pairs a low-frequency, high-saturation material (e.g. steel) with a low-saturation, high-frequency material (e.g. ferrite) to form a steel hybrid core. This core achieves higher saturation capability by directing most of the dc flux to the steel and all of the ac flux to the ferrite, enabling the core to leverage both materials’ advantages while avoiding their detriments.
First-order models and design processes for the steel hybrid core are developed in this thesis. An example steel hybrid core design using a pot core is also presented. This design can achieve 220% more energy storage versus a comparable ferrite prototype, and it may achieve lower loss. Its performance, though, is sensitive to manufacturing and assembly imperfections. In this thesis, both the PM hybrid and steel hybrid cores are demonstrated to have great potential in achieving high saturation capability. By leveraging these hybrid cores, inductor designs can achieve greater energy storage density or lower loss and thus enable higher performance power electronics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164156</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Gas Microscopy of Bosonic Correlations in the Continuum</title>
<link>https://hdl.handle.net/1721.1/164155</link>
<description>Quantum Gas Microscopy of Bosonic Correlations in the Continuum
Xiang, Jinggang
This thesis details the complete upgrade and renovation of an existing experimental platform into a high-resolution quantum gas microscope for ultracold 87Rb atoms. Quantum gas microscopes enable site-resolved imaging, providing unprecedented access to quantum statistical effects and many-body phenomena. While such instruments are often employed to study physics in optical lattices, we have innovatively adapted our apparatus to investigate bulk system behavior. A major part of this project involved upgrading the scientific apparatus and retrofitting the previous system. We introduced new optical components, including a high-NA objective, and improved the vacuum system for better optical access. Extensive lab renovations, from upgrading the optical table to reorganizing the laser and imaging setups, were carried out to enhance mechanical and thermal stability. Rigorous optical benchmarking confirmed that the objective achieves diffraction-limited imaging, which is critical for resolving single atoms. This capability allowed us to detect density fluctuations at the scale of the thermal de Broglie wavelength in a quasi-two-dimensional gas of 87Rb atoms. In an experiment resembling Hanbury Brown and Twiss interferometry, we measured a 30% enhancement in the second-order correlation function in situ, demonstrating strong bosonic bunching. This outcome underscores the microscope’s precision and the importance of high-resolution imaging in capturing subtle quantum statistical effects. The successful realization of this apparatus demonstrates the utility of quantum gas microscopes in probing bulk systems. With this new platform in place, future studies can explore critical phenomena, many-body correlations, matter-wave emission, and quantum simulations with cold atoms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164155</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of Cosmic Ray Lithium Isotopes Using the&#13;
Alpha Magnetic Spectrometer</title>
<link>https://hdl.handle.net/1721.1/164154</link>
<description>Measurement of Cosmic Ray Lithium Isotopes Using the&#13;
Alpha Magnetic Spectrometer
LaVecchia, Gianni
The study of cosmic rays and their properties provides insight into the origins of our universe and is a unique lens on the nuclear physics of the cosmos. The identification of cosmic ray isotopes poses a particular challenge, as it requires the measurement of multiple observables to a high degree of accuracy for the deduction of nuclear mass. Using the unique detection capabilities of the Alpha Magnetic Spectrometer (AMS), the isotope fluxes of cosmic ray lithium in the rigidity range of 1.92 to 25 GV are presented. This work is based on 0.97 million ⁶Li and 1.04 million ⁷Li nuclei collected by the AMS over a 12.5-year period, and improves the precision and extent of existing measurements by a factor of 10. These results lead to the conclusion that there is no sizable primary component in cosmic ray ⁷Li. The&#13;
improvements to the AMS velocity measurement establish the groundwork for future cosmic ray isotope measurements.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164154</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics</title>
<link>https://hdl.handle.net/1721.1/164153</link>
<description>Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics
Gambhir, Rikab
The interface between particle theory and particle experiments is essential to improving our understanding of the Standard Model and looking for new physics beyond it. At this interface lies a complicated web of complex and expensive simulations that cannot be fully trusted, experimental and theoretical uncertainties, and overwhelmingly large amounts of data, all while we have yet to find any deviations from the Standard Model.&#13;
&#13;
In this thesis, we propose strategies for improving the theory ↔ experiment pipeline at all stages. We first show how modern Machine Learning and statistical techniques can be used to improve the calibration and resolution of particle detectors in a robust way, which can lead to improved measurement precision. We then develop new classes of measurable observables based on the principles of infrared-and-collinear safety, geometry, and machine learning, which come with guarantees about their theoretical calculability and interpretability, in turn motivating measurements at collider experiments. Finally, we present two complementary approaches to searching for new physics: one, in the form of an experimental proposal for a muon beam dump experiment that is viable alongside a full future collider program; and the other, in the form of machine-learning-based anomaly detection to search for subtle signals in already-published data.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164153</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery</title>
<link>https://hdl.handle.net/1721.1/164152</link>
<description>Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery
Wu, Menghua
Scientific research revolves around the discovery and validation of causal relationships between variables. Machine learning has the potential to increase the efficiency of this process by proposing novel hypotheses from data observations, or by designing experiments that maximize success rate. This thesis addresses these problems through pragmatic approaches, designed to model large systems and incorporate rich domain knowledge. These algorithms are applied to use cases in molecular biology and drug discovery, which highlight their potential to inform efficient experiment design and to automate the analysis of experimental results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164152</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recycling and Regeneration of Spent Perfusion Media via Ion&#13;
Concentration Polarization</title>
<link>https://hdl.handle.net/1721.1/164151</link>
<description>Recycling and Regeneration of Spent Perfusion Media via Ion&#13;
Concentration Polarization
Wynne, Eric Michael
The widespread adoption of monoclonal antibody therapies is often constrained by their high prices, which can limit accessibility, particularly for patients in low- and middle-income countries. Addressing this economic barrier is crucial to ensure that life-saving treatments can reach all who need them. We present a series of bioprocessing innovations designed to reduce the cost of monoclonal antibody manufacturing and improve global access to these critical therapeutics. The work focuses on developing technologies for media regeneration and recycling, with the goal of reducing the economic and environmental impact of cell culture media in perfusion mammalian cell culture.&#13;
We demonstrate a microfluidic separation device engineered to selectively remove metabolic waste products—specifically ammonia and lactate—from spent media using ion concentration polarization. Building on this foundation, a scalable millifluidic system was developed to enable higher-throughput waste removal. We characterized the impact of media recycling upon batch and perfusion cell cultures. We devised a nutrient supplementation strategy to create ‘regenerated’ media that minimized any effect on cell growth and productivity compared to fresh media.&#13;
To support continuous manufacturing, a perfusion culture system incorporating a microfluidic spiral cell retention device and continuous cell bleed was established, and stable performance was maintained over extended durations. A further innovation introduced a multi-stage waste recovery system that increased media regeneration yield to 87.5%. This recovery rate enabled a self-recycling perfusion bioreactor in which 75% of the media feed was regenerated, without significant impact on cell growth, productivity, or product quality.&#13;
Together, these advances establish a novel biomanufacturing platform that combines electrokinetic waste removal with media regeneration and recycling. The approach is broadly adaptable to mammalian cell culture processes and offers a promising path toward more sustainable, cost-effective, and environmentally responsible production of monoclonal antibodies and other biologics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164151</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming</title>
<link>https://hdl.handle.net/1721.1/164150</link>
<description>Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming
Zhi-Xuan, Tan
How can we build cooperative machines that model and understand human minds — machines that assist us with our goals, coordinate on plans, infer the intentions behind our words, and even learn our norms and values? This thesis presents a scalable model-based approach to building such systems via inverse planning and probabilistic programming. First, we introduce a probabilistic programming architecture that implements a Bayesian theory of mind. This architecture, Sequential Inverse Plan Search (SIPS), performs online inference of human goals and plans by inverting a Bayesian model of incremental human planning. By combining high-performance symbolic planners with sequential Monte Carlo (SMC) inference, SIPS achieves faster-than-real-time speed, while scaling to hundreds of possible goals, and remaining robust to human mistakes due to boundedly rational planning. Second, we present Cooperative Language-guided Inverse Plan Search (CLIPS), a system that integrates SIPS with large language models (LLMs) to model communicative cooperation. By using LLMs as likelihood functions within probabilistic programs, CLIPS can infer human goals from ambiguous instructions, then provide uncertainty-aware assistance with much higher levels of reliability than LLMs achieve on their own. In addition, CLIPS can be used to infer the shared intentions of communicating agents from their actions and words. Third, we show how inverse planning can model the acquisition of social normativity, formalizing norm-guided societal behavior as a norm-augmented stochastic game (NSG). In NSGs, agents assume that society follows a shared set of social norms, and infer these norms from the actions of other agents. By doing so, agents can rapidly learn cooperative social norms using orders of magnitude less data than model-free approaches. Finally, we present advances in probabilistic programming infrastructure that have enabled architectures such as SIPS and CLIPS.
Through interfaces for programmable SMC and probabilistic programming with LLMs, developers can readily compose modeling and inference subroutines when designing probabilistically coherent intelligent systems. Together, these innovations demonstrate the feasibility and scalability of rational AI engineering for cooperatively intelligent machines, while illuminating the computational and algorithmic foundations of human cooperative intelligence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164150</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials</title>
<link>https://hdl.handle.net/1721.1/164149</link>
<description>Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials
Tremsina, Elizaveta A.
The development of novel energy-efficient computing hardware is imperative for the reduction of the carbon footprint and for the extension of computing, mobile and wearable device lifespan. Recent advances have focused on novel material systems, one promising avenue being magnetic thin films. Bits of information can be encoded by twisted magnetic textures called skyrmions, which can be efficiently driven by applying electrical current. Recently, emphasis has been placed on investigating antiferromagnetic and ferrimagnetic skyrmions, as opposed to the single-sublattice ferromagnetic ones studied earlier, due to their potential for more rapid dynamics and magnetic stability. However, there is a pressing need for a thorough and detailed understanding of the intricacies of skyrmion motion, in particular, limiting velocity, optimization of trajectory, controlled mobility and, notably, the observed dynamic distortions of skyrmion profiles. Experimental studies alone cannot provide a complete picture, since the material parameter space for systems hosting skyrmions is quite large. We perform a comprehensive study, combining simulation-based and analytical approaches, of the spin-orbit torque motion of skyrmions in a wide range of magnetic materials, from crystalline antiferromagnets to ferrimagnets to ferromagnets. We systematically analyze the relationship between physical distortions of the skyrmion profiles, based on the action of local Thiele forces, and internal elastic tension forces, providing a quantitative and nuanced explanation of these effects. These results expand the understanding of fundamental properties of magnetic skyrmions, as well as their potential use in spintronics applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164149</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality</title>
<link>https://hdl.handle.net/1721.1/164148</link>
<description>Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality
Leong, Joanne Sau Ling
Learning is a fundamental human drive that has been shaped by technological advancements over the years. The emergence of generative AI marks a profound shift—its capacity to produce text, images, and video challenges long‐held beliefs about what only humans could create. This shift creates new opportunities for learning, including enabling the design of more customized and personalized learning experiences. Recognizing that learning is deeply influenced by our perceptions of ourselves, others, and our materials and environments, I propose creating transformative lenses powered by generative AI and augmented reality (AR) to adapt what learners perceive, as a means to empower them with new perspectives. I design and implement a set of novel interactive systems and experiences as case studies that address factors including creativity, communication, and motivation. Studying the use of these systems, I gather early evidence that such lenses can help people to overcome their own limiting thoughts and emotions to move towards realizing their full potential. Reflecting on these case studies, I distill key considerations for designing and applying transformative lenses. Finally, I discuss the broader implications of this work at the evolving intersection of generative AI and learning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164148</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Models as Opinion Models: Techniques and Applications</title>
<link>https://hdl.handle.net/1721.1/164147</link>
<description>Language Models as Opinion Models: Techniques and Applications
Brannon, William
Real-time social media platforms now host the news cycle and shape public opinion, while large language models (LLMs) give us new tools to observe and predict those shifts. This dissertation links the new affordances of social media with the predictive power of LLMs to explain, and even forecast, opinion change. We first quantify the dynamics of news on an influential social platform, then develop LLM-based tools to forecast persuasion and predict heterogeneous treatment effects (HTEs).&#13;
&#13;
Study I — Media tempo and tone. Using 518,000 hours of U.S. talk-radio broadcasts and 26.6 million tweets from elite and mass users, we show that Twitter discourse (i) moves faster at both take-off and fade-out stages of a news event and (ii) sustains greater outrage than radio – despite radio’s often explicitly outrage-focused business model. To our knowledge, this is the first large-scale, data-driven comparison between Twitter and traditional media of both outrage levels and the rate of decay of attention to news.&#13;
&#13;
Study II — Zero-shot persuasion forecasting. Across a diverse set of 28 randomized experiments, LLM-based methods outperform an ensemble of strong baselines at predicting HTEs and deliver good performance at predicting average treatment effects (ATEs) — all without any experiment-specific fine-tuning.&#13;
&#13;
Study III — Transfer and scaling. Fine-tuning LLMs on contemporaneous news coverage boosts HTE (and ATE) prediction performance to more than 3x the baseline. A new minibatch-moment-matching (M3) objective lets us train a 400M-parameter model to nearly match the HTE prediction performance of an 8B model at a fraction of the inference cost. Transfer, however, falters out of distribution on held-out experiments and demographic groups, lending support to contextual theories of persuasion.&#13;
&#13;
Overall, we (i) quantify how platform affordances shape the tone and tempo of public discourse, (ii) introduce LLM-based methods that make causal experiments more sample-efficient, and (iii) chart the limits of transfer learning for opinion prediction. Our findings provide practical tools for HTE prediction and help researchers anticipate persuasion dynamics in a media landscape shaped by both humans and machines.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164147</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence</title>
<link>https://hdl.handle.net/1721.1/164146</link>
<description>Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence
Denniston, Andrew W.
The atomic nucleus presents an intricate system due to the non-linear forces described by Quantum Chromodynamics (QCD) that govern its structure. The range of scales involved is remarkable; the most massive nuclei weigh approximately five orders of magnitude more than the quarks that compose them. The nucleus can be analyzed at various levels, from quarks to hadrons to the nucleus as a whole. Short-Range Correlations (SRCs) within the nucleus play a significant role that spans these diverse scales. At the most fundamental level, SRCs influence the interaction between nucleons. The nucleon-nucleon (NN) interaction, arising from QCD, is crucial in determining nuclear properties. SRCs serve as valuable probes for measuring this NN interaction, as the nucleons within SRCs become effectively decoupled from the rest of the nucleus. Multiple experimental techniques, including electron scattering, have been employed to investigate the NN interaction through SRCs. However, our first project demonstrates that inclusive measurements alone are inadequate to constrain this interaction fully. Moving to the scale of the nucleus, SRCs contribute to the high-momentum tail of the nuclear spectral function. While the low-momentum region is characterized by nucleons exhibiting bulk properties, nucleons begin to pair into SRCs at higher momenta. Our research aims to bridge the understanding between the mean-field portion of the nucleus and its high-momentum SRC components. Additionally, SRCs affect the quark structure of protons, as evidenced by the EMC effect, which indicates that quarks behave differently when protons are embedded within a nucleus—an effect referred to as medium modification. This thesis explores the correlation between SRCs and medium modification across various experimental setups. Finally, we seek to establish an interpretation of the nuclear ground-state. 
Accomplishing this requires demonstrating that our SRC observables are independent of the probe’s scale and scheme. The concluding project of this thesis illustrates how we utilize triple-coincidence quasi-elastic scattering across a range of Q² values to develop a model-dependent framework for understanding SRC distributions within the nucleus’s ground-state wavefunction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164146</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering and Engineering the Computation Underlying Large Intelligent Agents</title>
<link>https://hdl.handle.net/1721.1/164145</link>
<description>Discovering and Engineering the Computation Underlying Large Intelligent Agents
Sharma, Pratyusha
The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And more importantly, can we use this knowledge to improve generalization in largely structure-less general-purpose models or refine our understanding of the world they describe? In this dissertation, I present three perspectives to answer these questions. First, I present experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient-free manner, sometimes by as much as 30 percentage points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black-box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought—hierarchical and compositional—and show how it can enable long-horizon reasoning and planning in these systems. This dissertation ultimately aims to bridge the gap between natural and artificial intelligence, offering new insights into both the fundamental nature of communication in complex biological organisms in the wild and the development of more powerful and improved AI systems. A key pattern in the discoveries in this thesis has been how simple structures enable complex externalized behaviors in both biological organisms and AI systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164145</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Diffusion Models Towards De Novo Protein Design</title>
<link>https://hdl.handle.net/1721.1/164143</link>
<description>Generative Diffusion Models Towards De Novo Protein Design
Yim, Jason
De novo protein design aims to generate proteins with desired functions by rationally engineering novel protein structures and sequences. The structure requires modeling continuous 3D coordinates of atoms with rigid biochemical constraints of the polymer chain while the sequence is a series of discrete amino acids that should fold into a plausible structure. Understanding the protein function-structure-sequence relationship necessary for protein design is complex, but deep learning has proven promising to learn the relationship from large protein datasets. This thesis aims to develop deep learning models that generate novel structures and sequences that can be guided towards desired functions. We first describe novel generative models that learn to generate protein structures and sequences by developing diffusion models over general state spaces including Riemannian manifolds and discrete tokens. The resulting methods – FrameDiff, FrameFlow, and MultiFlow – demonstrate the ability of diffusion models to extrapolate beyond the training data to generate novel and diverse protein structures and sequences that pass in silico protein design filters. Next, we apply diffusion models to practical protein design challenges by collaborating with experimental and computational biologists to develop RoseTTAFold Diffusion (RFdiffusion). By combining the structure prediction capabilities of RoseTTAFold and diffusion modeling principles, RFdiffusion can generate functional proteins with in vitro validated properties such as high-affinity binders and symmetric protein assemblies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164143</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Solve Long-Horizon Robot Manipulation Problems</title>
<link>https://hdl.handle.net/1721.1/164142</link>
<description>Learning to Solve Long-Horizon Robot Manipulation Problems
Yang, Zhutian
If we want mobile robots that perform multi-step tasks in visually diverse and geometrically complex environments, we need them to quickly decide what to do and how to do it. Manipulating multiple objects in environments with movable and articulated obstacles over time requires the robot to satisfy constraints like collision-freeness, reachability, and action feasibility. For problems with large state spaces, continuous action spaces, and long decision horizons, the hybrid constraint satisfaction problems induced by planners become combinatorially difficult to solve. In this thesis, I will discuss strategies for using offline learning to speed up deployment-time planning, i.e., using a plan feasibility predictor, a subgoal generator, or a compositional joint continuous constraint solver. I will also present strategies for chaining policies learned from demonstrations using conditional inputs, such as key poses and natural language, for generalization in real-world environments. With the resulting efficient long-horizon manipulation planning system, we can solve complex robotic manipulation problems faster at deployment time. It can also be used to generate diverse large-scale whole-body trajectories as part of the data mixture for training robot foundation models in embodied reasoning, planning, and acting.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164142</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.</title>
<link>https://hdl.handle.net/1721.1/164141</link>
<description>Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.
Sergeeva, Elena
Knowledge is frequently defined as “justified true belief”. As one may notice, this definition presents some issues when applied to AI: it is unclear to what degree it is justified to use “humanizing” vocabulary like “belief” or “justification” when describing the performance of an AI system. Traditional explicit knowledge-representation-based AI involves reasoning over symbolic representations of statements standing for such “justified true beliefs” [1]; the modern connectionist methodology, however, replaces explicit reasoning with making a prediction based on a set of computations done over weighted continuous representations of the inputs. The continuous representations learned by such systems remain “black box-like”, where the only elements directly understandable by the human user are the model inputs and outputs. In the first part of this thesis, I introduce a set of masked-language-model transformer-based models for a diverse set of medical natural language processing tasks, including Named Entity Recognition, Negation Extraction, and Relation Extraction, that perform as well as or better than larger prompt-and-generate transformer-based causal language models. In the second part of the thesis, I discuss the modern “prompt-and-generate” approach to natural language processing, where both the inputs and the outputs of the model are word-like elements commonly referred to as “tokens”. I explore the nature of the token-based representation of the input and look at the way token “meaning” is refined at each layer of the successive transformer computation. With respect to the outputs, I explore how people engage with AI-generated sequences of tokens that they perceive as “explained” predictions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164141</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language-Centric Medical Image Understanding</title>
<link>https://hdl.handle.net/1721.1/164140</link>
<description>Language-Centric Medical Image Understanding
Wang, Peiqi
This thesis advances medical image understanding by leveraging the multifaceted roles of language: as supervision, prior knowledge, and a medium for communication. We introduce three main contributions: (1) a weakly supervised framework that uses language in clinical reports to guide fine-grained alignment between image regions and textual descriptions, (2) an adaptive debiasing method that uses a language prior to improve the robustness of learning algorithms under noisy supervision, and (3) a novel approach for calibrating linguistic expressions of diagnostic certainty, enabling more reliable communication of clinical findings. Together, these methods lead to more accurate, robust, and reliable machine learning systems, ultimately streamlining clinical workflows and improving patient care.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164140</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring spin physics with ultracold atoms</title>
<link>https://hdl.handle.net/1721.1/164139</link>
<description>Exploring spin physics with ultracold atoms
Lee, Yoo Kyung
The dynamics of many interacting spins is an active frontier of research; not only can they explain magnetic phenomena, but they also provide paradigmatic models with deep connections to high-T_c superconductivity, optimization problems, neural networks, and more. Experiments with ultracold alkali atoms in optical lattices have realized spin models with great success. In particular, the isotropic Heisenberg model---the XXX model---was realized more than a decade ago. The ⁷Li apparatus described here was the first to realize a tunable, anisotropic Heisenberg model, also known as the XXZ model.&#13;
&#13;
In this thesis, I will describe how the capabilities of this apparatus were harnessed to characterize the spin models we realize, to observe new resonances, and to contribute to studies in spin squeezing and fundamental physics. First, I will discuss how we prepared and observed phantom helix states: eigenstates of the XXZ Hamiltonian. Our understanding of the contact interactions and the phantom helix states enabled us to observe long-predicted lattice-induced resonances, whose effects can be leveraged as another knob to tune the XXZ Hamiltonian. Furthermore, our control over the spin system allowed us to generate spin-squeezed states, a paradigmatic form of entanglement for spin ensembles. This is the first time squeezed states were realized with nearest-neighbor contact interactions in a lattice. Finally, our control over the spin degree of freedom and over defects in our state preparation allowed us to create pristine periodic lattices with which to study gedankenexperiments in light scattering.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164139</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the Diversity of Fast Radio Bursts with CHIME/FRB</title>
<link>https://hdl.handle.net/1721.1/164138</link>
<description>Probing the Diversity of Fast Radio Bursts with CHIME/FRB
Shin, Kaitlyn
Fast radio bursts (FRBs) are extremely bright extragalactic radio transients that flash for microseconds to milliseconds at a time, most never to repeat again. Encoded in every observed FRB is information from burst propagation effects, giving us clues about their mysterious origins as well as the environments they traveled through. With inferred all-sky rates of hundreds per day, FRBs are of great interest both as windows into extreme astrophysical processes and as probes of cosmological properties of the Universe. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has revolutionized the FRB field with its field-leading discovery rate. With CHIME/FRB, we can start to carry out population-level studies of FRBs to constrain their origins and inform their use as cosmological probes. I present the first population-level studies of CHIME/FRB-observed FRBs using the CHIME/FRB Catalog 1 data release and the injection system to account for observational biases. I discover that CHIME/FRB is likely observationally biased against bursts originating from turbulent local environments, and constrain the energy and distance distributions of FRBs. I also present the Catalog 1 dataset updated with channelized raw voltage (“baseband”) data (“BaseCat1”), in which I played a pivotal role. The CHIME/FRB baseband localization pipeline can localize FRBs to arcminute precision as long as the signal is bright enough to trigger the saving of offline baseband data. I then discuss two single-source studies enabled by the baseband localization pipeline — one discovering repeaters during phases of unusually heightened burst activity, and one using the burst properties of an unusual FRB to probe the properties of its sightline. In the latter study, I constrain the electron density content of a diffuse filamentary structure on the outskirts of the Virgo Cluster, demonstrating the power of FRBs as probes of diffuse media.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164138</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>DePUDS: Decentralized Prosocial Urban Development System</title>
<link>https://hdl.handle.net/1721.1/164135</link>
<description>DePUDS: Decentralized Prosocial Urban Development System
Zhang, Yan
Urban areas face severe socio-economic and environmental challenges like housing crises, inequity, and environmental degradation, often worsened by traditional zoning practices. These are typically rigid, inefficient, outdated, and susceptible to obstruction by narrow special interests (NIMBYism), failing to engage the broader community or adapt to evolving needs. This dissertation proposes the Decentralized Prosocial Urban Development System (DePUDS), a novel governance framework designed to overcome these shortcomings by empowering informed collective consensus and including the often-silent majority.&#13;
DePUDS integrates decentralized technologies like blockchain and smart contracts with structured economic incentives, facilitated through an accessible user-friendly Decentralized Application (DApp) to encourage broad participation. This system fosters transparent, inclusive, and equitable urban development. Its core mechanism, adaptive incentive-based zoning, dynamically aligns developer profitability with community-endorsed priorities—such as affordable housing, public amenities, and sustainability—providing flexibility absent in traditional zoning.&#13;
Employing advanced agent-based simulations enhanced by large language models (LLMs), this research rigorously assesses DePUDS's effectiveness across two distinct case studies: Kendall Square in Cambridge, MA (a dynamic innovation hub) and the Inner Richmond District in San Francisco, CA (a culturally rich but housing-constrained neighborhood). Simulation results demonstrate that DePUDS significantly aligns development outcomes with community preferences. In Kendall Square, targeted incentives substantially increased affordable housing and public amenities without hindering private investment. In the Inner Richmond, substantial community-driven incentives successfully unlocked constrained development, markedly reducing displacement risks, boosting affordable housing, enhancing amenity access, lowering carbon emissions via density, and preserving local cultural assets.&#13;
The comparative analysis underscores DePUDS's versatility, showing its potential to enhance growth in active markets and stimulate development in constrained ones. Key policy implications point towards structured DApp-based community participation, adaptive incentive zoning, and dedicated funding. While acknowledging practical implementation hurdles (legal, economic, technological), the findings affirm the feasibility, effectiveness, and transformative potential of decentralized, incentive-driven urban governance. This dissertation offers significant theoretical contributions, practical policy guidelines, and future research pathways to foster more inclusive, sustainable, and resilient urban communities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164135</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biophysical specializations supporting efficiency in neural networks</title>
<link>https://hdl.handle.net/1721.1/164133</link>
<description>Biophysical specializations supporting efficiency in neural networks
Toloza, Enrique H.S.
Neuroscience and artificial intelligence (AI) research have long enjoyed a synergistic relationship. AI has drawn key inspiration from the organization and function of the brain, while our understanding of the biological processes underlying computation has been profoundly enriched by studying the behavior of artificial systems. As breakthroughs in generative AI continue to transform our world, and as the need for more sustainable artificial neural systems becomes more urgent, the neuro-AI feedback loop has never been more important. AI needs ever more powerful and efficient systems, and neuroscience needs further insight into how our brains work. The development of more brain-like AI promises solutions to both of these problems. Unfortunately, this has thus far been stymied by two critical challenges: 1) how do we identify the features that make a system brain-like, and 2) how do we incorporate these features into artificial networks in a useful and interpretable way? To address the first of these challenges, I will use the remarkable structural and biophysical diversity of the brain as an introduction to what it means for a system to be “brain-like.” This will lead us to a discussion of dendrites, the tree-like structures implicated at virtually every length scale of neural computation. Dendrites will thereafter act as the focal point for our study of brain-like computation. Specifically, I will trace how relatively simple biophysical features defined at the subcellular level can transform the computational landscape of large networks of neurons. To address the second of these challenges, it is necessary to discuss several enduring problems in computational neuroscience, broken down as chapters in this thesis.
In Chapter 2, I will present the development of a new model of single-neuron dynamics that is realistic enough to capture the rich dynamics of dendritic spiking but efficient enough for use in simulations of thousands of neurons, thereby filling a long-unmet need in the field. In Chapter 3, I will describe a solution to the general problem of training neural networks with arbitrary differentiable dynamics, thus opening the door for the study of countless biophysical phenomena in the context of networks that can learn to perform computations. In Chapter 4, I will use these tools to test several longstanding hypotheses regarding the utility of different biophysical features in neurons, performing first-of-their-kind fair comparisons of the computational performance of spiking networks, rate-based networks, and networks with nonlinear and linear dendrites. Finally, in Chapter 5, I will use insights gained from studying dendrites at the network level to provide a new perspective as to how the structural and biophysical diversity of the brain could emerge from a complex interplay of functional pressures (e.g., task demands) and physical constraints (e.g., space and energy). Together, the chapters of this thesis outline a general quantitative framework for building more brain-like AI for use in both AI research and neuroscience. This framework illustrates how biophysical specializations arising at the level of single neurons shape the emergent dynamics of the brain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164133</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Learnability of General Reinforcement-Learning Objectives</title>
<link>https://hdl.handle.net/1721.1/164131</link>
<description>On the Learnability of General Reinforcement-Learning Objectives
Yang, Cambridge
Reinforcement learning enables agents to learn decision-making policies in unknown environments to achieve specified objectives. Traditionally, these objectives are expressed through reward functions, enabling well-established guarantees on learning near-optimal policies with high probability — a property known as probably approximately correct (PAC) learnability. However, reward functions often serve as imperfect surrogates for true objectives, leading to reward hacking and undermining these guarantees. This thesis formalizes the specification and learnability of general reinforcement-learning objectives beyond rewards, addressing fundamental questions of expressivity and policy learnability. I examine three increasingly expressive classes of objectives: (1) Linear Temporal Logic (LTL) objectives, which extend conventional scalar rewards to temporal specifications of behavior and have garnered recent attention; (2) computable objectives, encompassing a broad class of structured, algorithmically definable objectives; and (3) non-computable objectives, representing general objectives beyond the computable class. For LTL objectives, I prove that only finitary LTL objectives are PAC-learnable, while infinite-horizon LTL objectives are inherently intractable under the PAC-MDP framework. Extending this result, I establish a general criterion: an objective is PAC-learnable if it is continuous and computable. This criterion facilitates establishing PAC-learnability for various existing classes of objectives whose PAC-learnability was previously unknown, and informs the design of new, learnable objective specifications. Finally, for non-computable objectives, I introduce limit PAC-learnability, a practical relaxation where a sequence of computable, PAC-learnable objectives approximates a non-computable objective.
I formalize a universal representation of non-computable objectives using nested limits of computable functions and provide sufficient conditions under which limit PAC-learnability holds. By establishing a theoretical foundation for general RL objectives, this thesis advances our understanding of which objectives are learnable, how they can be specified, and how agents can effectively learn policies to optimize them. These results contribute to the broader goal of designing intelligent agents that align with expressive, formally defined objectives—moving beyond the limitations of reward-based surrogates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164131</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-State Quantum Memories for Near-Term Quantum Repeaters</title>
<link>https://hdl.handle.net/1721.1/164130</link>
<description>Solid-State Quantum Memories for Near-Term Quantum Repeaters
Sutula, Madison M.
Over the past decade, quantum computers have emerged as a promising technology to enable transformational advances in information processing and communication and to solve problems that are intractable for classical computers. While there is great promise in linking quantum computers together over long distances via quantum channels, these technologies are still under development. Solid-state emitters with coherent spin-photon interfaces, long spin lifetimes, and narrow optical transitions are a leading platform for use as quantum memories in networked quantum repeaters. However, while such emitters have already enabled advanced quantum networking demonstrations in laboratory settings, deploying them as useful memory devices at scale is a key outstanding challenge. In this thesis, we experimentally investigate solid-state quantum memories for quantum information applications. First, we develop experimental techniques to characterize solid-state emitters with high throughput, enabling both a better understanding of the distribution of emitter properties and improved feedback on material preparation and device fabrication. Next, we implement quantum frequency conversion to create a coherent spin-photon interface between silicon-vacancy centers in diamond and optical photons in the low-loss telecom band. Finally, we investigate color centers in other engineering materials, including silicon and silicon carbide, to better understand the fundamental trade space of requirements for solid-state hosts. Together, these efforts represent a significant advance in creating, controlling, and deploying telecom-compatible spin interfaces, paving the way for memory-enabled quantum repeaters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164130</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principled Approaches for Latency Reduction in Networking Systems</title>
<link>https://hdl.handle.net/1721.1/164128</link>
<description>Principled Approaches for Latency Reduction in Networking Systems
Pit-Claudel, Benoit
Modern networks face unprecedented challenges due to exponential growth in traffic demands, driven by AI workloads in datacenters and the ubiquitous adoption of cloud services across the internet. This dissertation addresses three critical challenges in network systems: efficient scheduling of inference tasks, performance optimization in hybrid networks, and memory-efficient load balancing in datacenters.&#13;
&#13;
First, we introduce Nona, a stochastic scheduling framework that leverages queueing theory to optimize task placement in datacenter environments. By employing randomized algorithms and considering both network and compute constraints, Nona achieves improvements of multiple orders of magnitude in job completion times while maintaining implementation simplicity. Nona proposes stochastic scheduling, in which the complexity of the scheduling problem is moved to an offline phase. When handling jobs online, stochastic schedulers are oblivious to the instantaneous state of the network and rely only on predetermined allocation probabilities to make lightning-fast decisions. Second, we present LINC, an in-network coding solution designed for hybrid backbone networks. Through comprehensive mathematical analysis and simulation, we highlight the benefits of network coding in cases where no modifications of the end-hosts are possible. Finally, we develop Sirona, a memory-efficient version of a reactive subflow-spraying mechanism suited for hardware deployment. We show that Sirona can achieve competitive performance in homogeneous and heterogeneous datacenter networks while keeping a low memory footprint.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164128</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz</title>
<link>https://hdl.handle.net/1721.1/164127</link>
<description>Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz
Stein-Lubrano, Benjamin
The tokamak is a promising approach to magnetic confinement fusion. Tokamak functionality is threatened by plasma disruption events, which can damage critical machine components. Disruption damage can be mitigated by high-Z impurities, delivered by Massive Gas Injection (MGI) or Shattered Pellet Injection (SPI). Impurities radiate energy out of the plasma and onto the first wall. Evenly distributed radiation causes less damage than unmitigated disruption pathways, which deliver concentrated heat loads. In order to successfully develop and deploy mitigation systems, it is important to accurately measure and characterize disruption radiation. Accurate measurement is challenged by fast disruption timescales and highly asymmetric radiation patterns, which push the time and spatial resolution limits of radiant heat sensors. Previous radiation analysis approaches are typically limited to two dimensions or less by the highly under-determined nature of tomographic reconstruction and limited spatial resolution of sensors. Two dimensional analysis is often inaccurate for disruption radiation, which can be highly three dimensional as a result of localized impurity sources and fast 3D MHD events. In this thesis, I present a new algorithm for 3D radiation analysis in tokamak disruptions, called Emis3D. When Emis3D is applied to mitigated disruptions on the JET tokamak, a significant injection plume radiation effect in mitigated disruptions is revealed. When this effect is included in radiated energy calculations, the mitigated radiation fraction of plasmas with high thermal energy content is significantly improved, indicating that thermal mitigation is more effective than previously thought. Emis3D can also be used as a design tool to evaluate potential radiant heat sensor layouts. When applied to the SPARC tokamak, Emis3D demonstrates that toroidally skewed sensor sightlines improve spatial resolution and reduce blind spots, allowing more accurate measurement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164127</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Milky Way with Stars</title>
<link>https://hdl.handle.net/1721.1/164126</link>
<description>Understanding the Milky Way with Stars
Ou, Xiaowei
"How do galaxies form?" is one of the most important questions in modern astrophysics. Hierarchical growth, the most plausible theory behind galaxy formation, suggests that galaxies, including the Milky Way, assemble through the accretion of smaller systems, over a scaffolding of the invisible Dark Matter. Such growth is evidenced by the differences in stellar structures found in the Galaxy over the last few decades, accelerated most recently by the Gaia space mission. Yet, we still lack a full picture of the formation of the Milky Way and its stellar components, and we are even further in understanding its underlying Dark Matter distribution. For the latter, discrepancies between observations and predictions from CDM model at galactic scales have sparked debate about how well this model accounts for the evolution of the Milky Way. Stellar tracers provide a powerful tool for examining these discrepancies, helping us explore the hierarchical assembly of galaxies in the Local Group and test different models for dark matter. At the same time, cosmological simulations and machine learning techniques offer a bridge between the theory and observations.&#13;
&#13;
In this thesis, I combine observations of stellar kinematics and chemistry with cosmological simulations to understand the formation and evolution of the Milky Way and its satellite dwarf galaxies. I map the dark matter distributions in the Milky Way and one of its ultra-faint dwarf galaxies using stellar dynamics, combining simulations of tidal disruption with observational data to study ongoing merger events and how hierarchical assembly shaped the Milky Way we see today. I conduct robust machine learning searches for kinematic substructures from disrupted dwarf galaxy debris in the Milky Way and utilize stellar heavy element abundances to probe the galaxies that merged with the Milky Way in the past. Lastly, I develop synthetic surveys from simulations to bridge gaps between theory and observation, testing the robustness of current and future methodologies for understanding how the Milky Way came to be.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164126</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coating Thermal Noise in Gravitational-Wave Detectors</title>
<link>https://hdl.handle.net/1721.1/164125</link>
<description>Coating Thermal Noise in Gravitational-Wave Detectors
Demos, Nicholas
The direct detection of gravitational waves, originating from cataclysmic events such as black hole and neutron star mergers, has ushered in a new era of observational astronomy. These signals offer unique insights into astrophysical phenomena and fundamental physics, but fully realizing their potential requires continued improvements in detector sensitivity. A primary factor limiting the performance of current ground-based interferometers like Advanced LIGO and Advanced Virgo is thermal noise arising from the highly reflective multilayer coatings on the test mass mirrors. Reducing this coating thermal noise, particularly its Brownian component, while simultaneously maintaining exceptionally low optical absorption and scatter is necessary to advance detector capabilities.&#13;
&#13;
This thesis addresses this challenge through the characterization and development of alternative coating materials and designs. Central to this work is a dedicated experimental apparatus employing a high-finesse folded optical cavity and a multimode co-resonance technique. This system enables direct, high-precision measurements of coating thermal noise in the frequency band relevant to gravitational-wave detectors and allows for relatively rapid evaluation of candidate coatings, providing timely feedback for materials development.&#13;
&#13;
Coating materials such as niobia-based oxides, hafnia-tantala mixtures, and substoichiometric silica were explored, employing strategies like compositional optimization, post-deposition annealing, and multimaterial designs with buried layers. Progress toward lower-noise coatings is demonstrated. Highly reflective coatings based on optimized titania-silica, titania-germania, and ternary silicon nitride structures achieved thermal noise levels approximately 75% of those of current detector coatings. These coatings also exhibited exceptionally low optical absorption, reaching levels near 1 part-per-million following appropriate heat treatment. While challenges related to defect formation during annealing and discrepancies between different noise measurement methodologies were identified, ongoing research, particularly on defect mitigation in materials like titania-germania, continues to advance the field. The findings presented here contribute to the materials science foundation for improving current gravitational-wave detectors and guiding the design of future observatories.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164125</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite</title>
<link>https://hdl.handle.net/1721.1/164124</link>
<description>Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite
Jayaraman, Rahul
The Transiting Exoplanet Survey Satellite (TESS) is conducting an all-sky survey with the primary aim of detecting planets orbiting nearby stars. However, its large field of view and 200 s imaging cadence are useful for other science cases, ranging from stellar astrophysics to transient science. This thesis focuses on using TESS to study both the circumstellar environment and stellar interiors, as well as using the satellite to detect and characterize optical emission from gamma-ray bursts (GRBs). Chapter 2 focuses on the discovery of HD 135348, a "rigidly rotating magnetospheric" star (wherein the stellar magnetic field traps dust in a co-rotating orbit and leads to complex periodic photometric modulations) using solely photometric data. Chapter 3 focuses on the discovery of a long-period subdwarf B (sdB) star using 20 s cadence TESS data and proposes a novel formation mechanism for long-period sdB stars that relies upon stable, nonconservative mass transfer. Chapters 4 and 5 focus on pulsating stars in close binaries, and the evolutionary insights that these "tidally tilted" pulsations enable. In particular, we focus on developing models to track the amplitude and phase of these pulsations as a function of orbital phase, as well as tools to perform physically motivated modeling of the binary components. Chapters 6 and 7 focus on the optical signatures of gamma-ray bursts in TESS, and analyze the prompt optical flash that is often observed contemporaneously with the high-energy emission from these bursts. Chapter 7, in particular, aims to connect the prompt optical flash to the high-energy spectral energy distribution (SED), and explains the suppression of the optical flash (compared to the extrapolation of the high-energy SED) by invoking dust extinction in the host galaxy.
This thesis represents a significant step forward in both stellar and transient astrophysics; throughout this work, we emphasize the use of an unconventional tool, TESS, to pursue timely scientific questions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164124</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Intelligence that can Interact with the Physical World</title>
<link>https://hdl.handle.net/1721.1/164123</link>
<description>Building Intelligence that can Interact with the Physical World
Wang, Tsun-Hsuan (Johnson)
Recent advances in Artificial Intelligence (AI) have demonstrated remarkable success in parsing, reasoning, and generating digital content across modalities such as natural language, speech, images, videos, and 3D data. However, these breakthroughs have yet to extend meaningfully beyond the digital realm into the physical world. Developing AI for physical interaction poses challenges such as limited grounding, scarce physical data, and high reliability demands in safety-critical settings. This thesis takes a holistic approach to building intelligence that can interact with the physical world – through the lenses of data, brain, and body. Data is the fuel powering highly capable AI systems. We present methods for data-driven simulation that synthesize sensor measurements from physical processes, and knowledge-driven simulation that leverages large language models to generate actor behaviors and scenarios. By reverse engineering the generative processes behind physical data, we address data scarcity while enabling scalable and effective evaluation. The brain, driven by data, demands a deep understanding of the physical world and reliable interaction with it. We introduce methods to bridge the internet-scale knowledge of digital AI with the physical world to improve generalization and interpretability. For greater reliability, we integrate control-theoretic modules into AI models to enable certifiability. Beyond behavioral intelligence, the body plays a crucial role in physical interaction. We demonstrate how morphological intelligence can emerge from computation and show how pre-trained generative AI models (brain), when augmented with physics-based simulation that provides feedback on generated data, can be applied to robot design. Altogether, this thesis explores how digital AI can be extended into the physical world through a comprehensive investigation of data, brain, and body – laying the groundwork for building physical AI.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164123</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomolecular Modeling at Scale</title>
<link>https://hdl.handle.net/1721.1/164122</link>
<description>Biomolecular Modeling at Scale
Wohlwend, Jeremy
Predicting the structure and interactions of biomolecules is a fundamental problem in computational biology, with broad implications for disease understanding and drug discovery. Advances in deep learning have enabled remarkable progress, but scaling these approaches to the varied and complex realities of biology is a persistent challenge. This work introduces deep learning methods for biomolecular modeling at scale, designed for efficiency, adaptability, and accessibility. The early chapters present models developed in the general molecular domain, including prediction of structure and interactions for proteins, nucleic acids, and small molecules. To demonstrate how these methods extend to specific biological problems, the latter portion of this work focuses on modeling T cell receptor recognition. As a key immunological mechanism, it highlights the promise of scalable models, but also their present limitations in capturing fine-grained molecular selectivity. Together, these contributions define a framework for bridging foundational models and domain-specific applications, with the potential to scale and meet the demands of increasingly complex biological systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164122</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimessenger signatures of compact binaries</title>
<link>https://hdl.handle.net/1721.1/164121</link>
<description>Multimessenger signatures of compact binaries
Mo, Geoffrey Kwan Lok
Gravitational waves and electromagnetic observations provide complementary views into some of the most extreme objects in the Universe. In this thesis, I present studies of multimessenger compact binaries from two angles: electromagnetic follow-up of gravitational waves, and gravitational-wave follow-up of electromagnetic sources. I first describe technical and computational efforts to enable the distribution of alerts of kHz gravitational-wave sources as a member of the LIGO-Virgo-KAGRA collaboration, and to improve localizations of these events by folding in galaxy catalog information. I then detail work to enable electromagnetic follow-up observations of binary neutron star and neutron star-black hole mergers with two telescopes, the Transiting Exoplanet Survey Satellite (TESS) and the Wide-field Infrared Transient Explorer (WINTER). Approaching multimessenger observations from the opposite direction, I describe a search for gravitational waves coincident with fast radio bursts from the only Galactic fast radio burst source. Lastly, I perform an electromagnetic study of Type Ia supernovae in the mid-infrared, whose white dwarf binary progenitors will be mHz gravitational-wave sources for the future LISA space mission.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164121</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Network Systems Design for Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164120</link>
<description>Efficient Network Systems Design for Machine Learning
Yang, Mingran
Machine learning (ML) is transforming modern life by powering a diverse range of groundbreaking applications. As ML models and datasets expand, the scale of training and inference workloads in modern datacenters is increasing at an unprecedented pace. As the demand for computing resources grows, the need for low-latency and energy-efficient network systems becomes increasingly urgent.&#13;
&#13;
This thesis introduces efficient network systems designed to support machine learning workloads. It presents three key systems: Trio-ML, which accelerates ML training; Lightning, which enhances ML inference efficiency; and on-fiber photonic computing, a forward-looking vision for next-generation computing systems.&#13;
&#13;
The first system, Trio-ML, accelerates data-parallel distributed ML training by leveraging in-network computing on Juniper Networks' programmable chipset Trio. Trio-ML features two key designs: in-network aggregation, which utilizes Trio packet processing threads to aggregate gradients directly inside the network, and in-network straggler mitigation, which utilizes Trio timer threads to detect and address stragglers. We prototype Trio-ML on a testbed with three real DNN models (ResNet50, DenseNet161, and VGG11) to demonstrate its effectiveness in mitigating stragglers while performing in-network aggregation. Our evaluations show that when stragglers occur in the cluster, Trio-ML outperforms today's state-of-the-art in-network aggregation solutions by up to 1.8x.&#13;
&#13;
The second system, Lightning, is the first reconfigurable photonic-electronic smartNIC to serve real-time ML inference requests. Lightning uses a fast datapath to feed traffic from the NIC into the photonic domain without creating digital packet processing and data movement bottlenecks. To do so, Lightning leverages a novel reconfigurable count-action abstraction that keeps track of the required computation operations of each inference packet. Our count-action abstraction decouples the compute control plane from the data plane by counting the number of operations in each task and triggers the execution of the next task(s) without interrupting the dataflow. We evaluate Lightning's performance using four platforms: prototype, chip synthesis, emulations, and simulations. Our simulations with large DNN models show that compared to the Nvidia A100 GPU, A100X DPU, and Brainwave smartNIC, Lightning accelerates average inference serving by 337x, 329x, and 42x, while consuming 352x, 419x, and 54x less energy, respectively.&#13;
&#13;
Building on the in-network computing and photonic computing concepts discussed in Trio-ML and Lightning, we present a forward-looking vision for future computing systems. We argue that pluggable transponders are a prime platform for performing photonic computing inside the network without having to replace networking switches and routers. Optical transponders are ubiquitous in today's wide-area and datacenter networks, giving us a unique opportunity to re-purpose them for photonic computing. To this end, we introduce on-fiber photonic computing, explore key research challenges in bringing this vision to reality, and discuss real-world applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164120</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags</title>
<link>https://hdl.handle.net/1721.1/164119</link>
<description>Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags
Yildirim, Deniz Umut
The Internet of Things (IoT) is revolutionizing various industries, enabling a new wave of smart applications such as automated asset tracking in warehouses, substation monitoring in smart grids, and precision agriculture. However, as IoT devices proliferate, powering these devices in a sustainable and maintenance-free manner has become a critical challenge. Traditional IoT systems rely on batteries, which present issues of limited lifespan, environmental impact, and maintenance costs, especially in large-scale deployments. As a result, the development of battery-free IoT devices powered by ambient energy harvesting has gained significant attention. Among various energy-harvesting technologies, radio frequency (RF) energy harvesting has emerged as a promising solution for powering IoT devices. By harvesting energy from ambient RF signals in licensed frequency bands, RF energy-harvesting systems eliminate the need for batteries and allow for continuous, maintenance-free operation. This is especially crucial in environments where battery replacement is impractical or impossible, such as in large industrial warehouses, remote infrastructure, and hazardous environments. However, achieving high sensitivity and reliable operation in RF energy-harvesting systems poses several challenges. High-sensitivity rectifiers are required to capture and convert weak RF signals into usable energy, but integrating these rectifiers with ultra-low power baseband data processing circuits remains a significant hurdle. Moreover, antenna-rectifier matching calibration must be compatible with the duty-cycled operation of these tags, where brief communication periods are followed by long charging intervals. Additionally, the antenna system must be robust to detuning when placed on various objects, ensuring that the system can operate effectively in diverse environments. This thesis presents two integrated circuits to work towards these goals. 
The first chip is designed with the goal of minimizing the charging time as much as possible, which is critical in scenarios such as inventory management in warehouses, and tamper detection. The goal was to achieve &lt; 1-minute charging time while maintaining sensitivity competitive with the state-of-the-art. Unlike previous harvesters that either focused solely on sensitivity without integrating baseband processing and communication, or included those features but considered continuous communication at low sensitivity, the IC developed in this work achieves a sensitivity of −31 dBm and is capable of backscattering data approximately 18 seconds after a cold start. It also provides a detailed description of the difficulty of achieving higher sensitivities at higher 5G frequencies. The second chip in this thesis builds upon the first one and integrates an analog front-end to convert sensor data for environmental monitoring. We implemented an antenna-rectifier calibration method that is maintained as long as there is RF power, even though the tag goes into long charging periods. Even though the charging time, or the data readout interval, for these tags is more relaxed compared to the inventory management applications, we have also developed a design methodology to minimize the energy required to generate a data packet for backscattering, through which we were able to keep the charging time at 4 minutes while having additional functionalities and backscattering at a higher data rate compared to the first chip. Finally, a simple shielding method was implemented to enable the tags to be placed on any object without resonance frequency detuning. All of these were achieved while still obtaining a sensitivity of −30 dBm, competitive with the state of the art. In addition, the third project investigates the use of heterogeneously integrated “beyond-CMOS” devices to enhance overall rectifier performance.
These emerging devices, fabricated by the Palacios Group at MIT, show promise in overcoming sensitivity limitations commonly found in rectifiers, thereby extending the range and coverage of energy-harvesting IoT systems. We conduct a detailed characterization of these devices, highlighting their unique physical behaviors not present in standard CMOS technology, and provide system-level design guidelines for building improved rectifiers. Preliminary simulation results show that rectifiers using negative-capacitance field-effect transistors (NCFETs) can harvest up to four times as much power as their CMOS-based counterparts, while maintaining the same sensitivity. This thesis outlines the design, implementation, and evaluation of all three systems. The two aforementioned ICs are tested both in simulation and in real-world scenarios such as a typical office environment. Meanwhile, the novel device technologies are explored through simulation, demonstrating their significant potential for next-generation rectifier design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164119</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Additive Manufacturing of Electrical Machines and Electronic Devices</title>
<link>https://hdl.handle.net/1721.1/164065</link>
<description>Additive Manufacturing of Electrical Machines and Electronic Devices
Cañada Pérez-Sala, Jorge
Recent advancements in the additive manufacture of electronics and electrical machines have led to successful demonstrations of 3D-printed passive (e.g., resistors, capacitors, inductors) and active (e.g., transistors) electronic components, as well as magnetic cores and power transfer devices. However, each new demonstration of 3D-printed functional devices has typically required increasingly specialized and expensive manufacturing hardware. This work opposes that trend by developing a technology capable of fabricating all such devices on a single, affordable machine: a material extrusion 3D printer. Material extrusion stands out among additive manufacturing technologies for its widespread availability and its compatibility with monolithic multi-material manufacturing, essential for the fabrication of functional electromagnetic devices. These attributes, together with its well-established ability to fabricate mechanically functional parts, make material extrusion a promising technology for the single-step fabrication of electronics and electrical machines, and for their monolithic integration into complex devices, such as custom functionalized prostheses, robots, and space exploration hardware. In this research, a desktop 3D printer was transformed into an almost-universal manufacturing machine capable of fabricating a myriad of electrically, magnetically, and mechanically functional devices, using various feedstock formats (e.g., filament, pellets, ink). With this machine, milestones such as the fabrication of the first semiconductor-free, fully 3D-printed logic gates, and that of the first fully 3D-printed motor, have been achieved. Built for under $4000 in parts, the modified 3D printer opens the door to the democratization of electronics and electrical machine manufacturing, empowering institutions and individuals alike, and serving as an educational tool to introduce advanced manufacturing to new generations.
Additionally, this work investigates optimization strategies for planar inductors and alternative techniques for the creation of miniaturized, three-dimensional, electrically functional components via two-photon polymerization. By demonstrating novel methods and applications, this thesis advances the state of the art in the additive manufacture of electromagnetic devices and paves the way toward the decentralized fabrication of electrical machines and electronic devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164065</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels</title>
<link>https://hdl.handle.net/1721.1/164064</link>
<description>A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels
Hamilton, Mark T.
How does the human mind make sense of raw information without being taught how to see or hear? This thesis presents a unifying theory that describes how algorithms can learn and discover structure in complex systems, like natural images, audio, language, and video, without human input. This class of algorithms has the potential to extend our own understanding of the world by helping us to see previously unseen patterns in nature and science. At the core of this thesis’ unified theory is the notion that relationships between deep network representations hold the key to discovering the structure of the world without human input. This work will begin with a few examples of this principle in action: discovering hidden connections that span cultures and millennia in the visual arts, discovering visual objects in large image corpora, classifying every pixel of our visual world, and rediscovering the meaning of words from raw audio, all without human labels. In the latter half of this thesis, we will present two unifying mathematical theories of unsupervised learning. The first will explain why relationships between deep features can rediscover the semantic structure of the natural world by connecting model explainability, cooperative game theory, and deep feature relationships. The second mathematical theory will show that relationships between representations can be used to unify over 20 common machine learning algorithms spanning 100 years of progress in the field of machine learning. In particular, we introduce a single equation that unifies classification, regression, large language modeling, dimensionality reduction, clustering, contrastive learning, and spectral methods. This thesis uses this unified equation as the basis for a “periodic table of representation learning” that predicts the existence of new types of algorithms. We show that one of these predicted algorithms is a state-of-the-art unsupervised image classification technique.
Finally, this work will summarize the key findings and share ongoing and future directions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164064</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Score Estimation for Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/164063</link>
<description>Score Estimation for Generative Modeling
Jayashankar, Tejas Kumar
Recent advances in score-based (diffusion) generative models have achieved state-of-the-art sample quality across standard benchmarks. Building on the remarkable ability of these models to estimate scores, this thesis presents three core contributions: 1) new objectives to reduce score estimation error, 2) a novel Bayesian-inspired optimization framework for solving inverse problems, and 3) a fast one-step generative modeling framework that is based on a novel amortized score estimation approach. In the first part of this thesis, we introduce two new score estimation objectives with applications to both implicit and diffusion-based generative models. To improve spectral-based non-parametric estimators, we propose a theoretically optimal parametric framework that learns the score by projecting it onto its top-L principal directions. Additionally, inspired by matrix-valued kernel methods, we present a second approach that lifts the score into the space of outer products, and minimizes the distance between the estimated and true scores in this higher-order space. In the second part, we shift focus from score estimation to leveraging diffusion models as data-driven priors for solving inverse problems. Centering our development around the problem of source separation, we introduce a novel algorithm inspired by maximum a posteriori estimation. This approach combines multiple levels of Gaussian smoothing with an α-posterior, enabling effective signal separation using only independent priors for the sources. We demonstrate the effectiveness of this method through its application to interference mitigation in digital communication signals. Finally, we outline how this framework can be naturally extended to tackle a broader class of inverse problems. In the final part, we return to the fundamental challenge of efficient sampling, which is critical for enabling practical data-driven engineering systems.
We propose a novel generative modeling framework that enables training a one-step neural sampler from scratch. At the core of this method is a new objective based on multi-divergence minimization, guided by a novel approach for score estimation of mixture distributions. Our framework is simple to implement, stable during training, unifies several existing approaches, and achieves state-of-the-art performance in image generation tasks. Furthermore, we discuss how this framework can be naturally extended to multi-step neural sampling and adapted for fast posterior sampling—an essential component in simulation-based inverse problem solvers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164063</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory</title>
<link>https://hdl.handle.net/1721.1/164062</link>
<description>Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory
Medeiros, Owen A.
Superconducting nanowire integrated circuits (SNICs) are a promising class of cryogenic electronics that harness the zero resistance, high kinetic inductance, and nanoscale geometry of ultrathin superconducting wires to implement logic, memory, amplification, and sensing with minimal energy dissipation. Unlike Josephson-junction-based circuits, SNICs support compact, planar layouts compatible with single-layer fabrication and operation in unshielded cryogenic environments. This thesis develops superconducting nanowire memory (SNM) as a scalable implementation of SNICs. A modular cell architecture is introduced, exploiting hysteretic switching and inductive asymmetry to enable nonvolatile digital state storage with zero static power consumption. A hierarchical design framework is established, combining automated layout generation, electrothermal simulation in LTspice, and microscopic modeling using the time-dependent Ginzburg–Landau (TDGL) formalism. To enable scalable integration, this work implements a row–column SNM array layout and demonstrates fabrication across full 4-inch wafers using a planar, single-layer process. Cryogenic measurements validate reliable operation in both single cells and multi-cell arrays, confirming the predictive accuracy of the design and modeling framework. Tradeoffs in bias current levels, pulse timing, and read/write conditions are systematically evaluated through cryogenic measurements, revealing their impact on bit error rate, operational margins, and energy efficiency across single cells and arrays. Together, these contributions establish SNICs as a viable and extensible platform for cryogenic memory, providing the tools, models, and infrastructure needed to enable broader adoption in quantum computing, neuromorphic systems, and other energy-constrained cryogenic applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164062</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next Generation Operating Systems for the Datacenter</title>
<link>https://hdl.handle.net/1721.1/164061</link>
<description>Next Generation Operating Systems for the Datacenter
Fried, Joshua
Modern datacenters face a fundamental challenge: handling demanding real-time and data-intensive workloads that require both microsecond-scale low latency and high throughput, while simultaneously achieving high resource utilization and efficient multi-tenancy. Traditional operating systems, designed for an era of slower hardware, introduce significant overheads to microsecond-scale I/O that prevent applications from exploiting the full performance of the underlying hardware. Furthermore, their millisecond-scale resource management is ill-equipped to handle the microsecond-level burstiness of modern workloads, leading to costly overprovisioning and idle resources. Recognizing the performance limitations imposed by traditional OSes, a common workaround has emerged: letting applications communicate directly with hardware, bypassing the OS entirely. While this approach offers performance gains by removing the OS from the critical path, existing kernel-bypass solutions require dedicated resources, extensive application rewrites, and provide weak isolation, making them unsuitable for general-purpose, shared environments. This thesis presents a new datacenter operating system, composed of three integrated systems: Shenango, Caladan, and Junction. Together, they preserve the high-performance, low-overhead I/O benefits of kernel bypass, while providing efficient resource management, strong isolation for multi-tenant workloads, and compatibility with unmodified software. First, Shenango enables applications to bypass traditional OS-mediated I/O without dedicating CPU cores solely to polling. Next, Caladan ensures that idle resources can be used productively by other applications by actively managing competition for microarchitectural resources, thereby preserving each application’s high I/O performance and responsiveness.
Finally, Junction overcomes several common limitations of kernel-bypass solutions, bringing these benefits to all applications by preserving compatibility with existing software and reducing memory and polling overheads. Collectively, these systems provide the advantages of direct hardware access without sacrificing the flexibility or efficiency of a general-purpose operating system. This work demonstrates that high I/O performance, efficient resource utilization, and broad application compatibility can indeed coexist, paving the way for a new generation of datacenter operating systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164061</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Development of Healthcare AI: From Data Curation,&#13;
Algorithm Optimization, Benchmark Design and Clinical Applications</title>
<link>https://hdl.handle.net/1721.1/164060</link>
<description>Systematic Development of Healthcare AI: From Data Curation,&#13;
Algorithm Optimization, Benchmark Design and Clinical Applications
Gao, Mingye
Artificial intelligence (AI) has brought transformative changes to the healthcare industry in recent years across many areas, such as patient care, disease diagnosis, and medical research. As healthcare systems worldwide face increasing pressure from aging populations and rising chronic disease rates, there is an urgent need for systematic approaches to develop reliable and safe AI solutions. This thesis advances the systematic development of healthcare AI through four interconnected components: data curation, algorithm optimization, benchmark design, and clinical applications. The primary contribution of this thesis focuses on establishing a comprehensive pipeline for healthcare large language models (LLMs), spanning from data curation to clinical deployment. At the data level, a rule-based filtering framework was developed to select high-quality subsets from large pre-training corpora, significantly improving both the continued pre-training and fine-tuning performance of LLMs. For safety alignment, an automated pipeline was developed for preference learning that includes preference dataset synthesis, rule-based and data-adaptive annotation, and reward model training. Additionally, two novel benchmarks were created to ensure the reliability and safety of LLMs in healthcare tasks: one assesses demographic biases of LLMs across common diseases, while the other assesses models’ ability to reject illogical requests from users in drug-related scenarios. Finally, LLMs were used to generate patient-friendly educational content for clinical trials, demonstrating their role in improving patient education and engagement. This systematic progression from data to deployment establishes a blueprint for developing safe and effective language models in healthcare settings. Beyond language models, machine learning techniques were applied to an additional healthcare task.
In this project, a novel approach combining normalized cross-correlation and attention graph convolutional recurrent networks was developed to realize contactless, continuous, and reliable radar-based vital signs monitoring in dynamic home environments. Through systematic data collection and algorithm optimization, accurate heart rate estimates can be obtained across varying radar-subject distances (2-2.5 m) and subject orientations, demonstrating robust performance in real-world conditions through extensive validation in four test houses with six subjects. Collectively, these contributions advance healthcare AI development on two fronts: establishing frameworks for the safe and effective deployment of language models in healthcare settings, and enabling reliable, continuous health monitoring at home without wearable devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164060</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Biomolecular Interactions with Generative Models</title>
<link>https://hdl.handle.net/1721.1/164058</link>
<description>Modeling Biomolecular Interactions with Generative Models
Corso, Gabriele
In 2021, DeepMind’s AlphaFold2 revolutionized single-chain protein structure prediction by achieving atomic accuracy, solving a longstanding challenge in biology. However, understanding biomolecular interactions, a problem critical to advancing drug discovery and biological research, remained unsolved. This thesis presents our research to redefine the machine learning approach to this problem, modeling structures with a new generative paradigm and tailoring the neural architectures and learning tasks to the specific challenges that arose. These ideas, combined with significant engineering efforts, led us to develop a class of open-source models ranging from DiffDock to the recent Boltz-1. These models have significantly advanced our ability to understand biomolecular interactions; they have been widely adopted in industry and academia to aid drug development and protein design, and they have opened the door to new research paradigms that push biological research further.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164058</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Hardware Accelerators for Solving Sparse Linear&#13;
Systems</title>
<link>https://hdl.handle.net/1721.1/164057</link>
<description>Designing Hardware Accelerators for Solving Sparse Linear&#13;
Systems
Feldmann, Axel
Solving sparse linear systems is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive’s importance, algorithm designers have spent many decades optimizing linear solvers for high-performance hardware. However, despite their efforts, existing hardware has let them down. State-of-the-art linear solvers often utilize &lt; 1% of available compute throughput on existing architectures such as CPUs and GPUs. There are many different algorithms used to solve sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks. These bottlenecks include low arithmetic intensity, fine-grained parallelism, tight dependences, and sparsity-induced load imbalance. This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. The accelerators exploit static sparsity as the basis of novel hardware-software co-designed scheduling approaches. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers. These accelerators deliver order-of-magnitude speedups over state-of-the-art GPU baselines, achieving &gt; 100× speedups on many inputs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164057</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Foundations for Learning in Games and Dynamic Environments</title>
<link>https://hdl.handle.net/1721.1/164054</link>
<description>Theoretical Foundations for Learning in Games and Dynamic Environments
Golowich, Noah
Decision-making problems lie at the heart of numerous aspects of human and algorithmic behavior across our society, ranging from healthcare systems to financial systems to interactions with the physical world. A central challenge that arises across many decision-making problems is the presence of multiple agents, often with competing incentives. To understand how agents will act in such situations, it is often productive to compute equilibria, which have the property that no agent can deviate from them and improve their utility. An additional challenge is that decisions made by agents often change the state of the environment, which is modeled as dynamic. Thus, we need efficient algorithms for learning good policies, which tell the agent what to do as a function of the environment’s state. Extensive work spanning multiple domains such as economics, computer science, and statistics has been developed to model these decision-making problems. This has led to many celebrated results, which include, for instance, a considerable body of work studying the computational properties of Nash equilibria in normal-form games, and a long line of papers on reinforcement learning. However, many of these classical works suffer from a few shortcomings: first, they often do not account for the enormous state or action spaces available to agents in realistic decision-making settings, and second, many of them do not derive computationally efficient algorithms for the desired solution concepts. These shortcomings are brought to the forefront by the remarkable recent progress in artificial intelligence, which holds promise for solving decision-making problems with enormous state or action spaces but which is often bottlenecked by computation. 
The objective of this thesis is to develop theoretical foundations for the computational aspects of such decision-making problems: e.g., How do we efficiently compute equilibria in large games?, and: How can we efficiently learn near-optimal policies in complex environments? Some highlights of our results are listed below. First, we study problems in which there are multiple agents and the goal is to compute some notion of equilibrium:
• We show the first near-optimal rate of convergence to equilibrium for a no-regret learning algorithm in normal-form games, resolving a decade-long line of work which had aimed to establish increasingly better rates.
• We establish the first algorithm with sublinear swap regret against arbitrary adversaries enjoying only polylogarithmic dependence on the number of actions, resolving a question of Blum and Mansour from 2007.
• As a corollary of the preceding result, we obtain the first polynomial-time algorithm for approximating a correlated equilibrium in extensive-form games (to constant approximation error), addressing a question of von Stengel &amp; Forges from 2008. Additionally, we obtain near-optimal bounds on the communication and query complexity of approximating correlated equilibria in normal-form games (to constant approximation error), addressing several open problems in the literature.
• We give the first algorithm for the sequential calibration problem with calibration error beating that of the seminal work of Foster &amp; Vohra from 1998.
Moving on to decision-making problems where the environment is modeled as dynamic (typically studied in the framework of reinforcement learning (RL)), our results include the following:
• We give the first end-to-end computationally efficient algorithms for learning a near-optimal policy in many fundamental reinforcement learning problems, such as those of (constant-action) Linear Bellman Complete MDPs and sparse linear MDPs.
• We give the first quasi-polynomial time algorithm for finding a near-optimal policy in a general and well-motivated class of partially observable RL environments, and show that our bound is tight.
• We prove some (perhaps surprising) hardness results that arise in multi-agent RL problems. For instance, we show that it is computationally hard to implement no-regret learning algorithms in multi-agent RL environments even when the agents can coordinate on their choice of algorithm, which creates a stark contrast with simpler multi-agent learning settings (e.g., in normal-form games) where no-regret learning has formed the bedrock for a wide array of developments over the last several decades.
• Nevertheless, we show that by adjusting the type of equilibrium appropriately, we can circumvent the above hardness results and derive computationally efficient decentralized algorithms for computing equilibria in multi-agent RL environments.
Many of the above results have inspired follow-up work, which includes applications of our results to various problems in game theory, reinforcement learning, online learning, and related domains, as well as the formulation of new problems inspired by the above results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164054</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing human vision through large-scale brain imaging and computational models</title>
<link>https://hdl.handle.net/1721.1/164053</link>
<description>Characterizing human vision through large-scale brain imaging and computational models
Lahner, Benjamin
Efforts to understand the neural underpinnings of human visual processing require sufficient experimental data and robust models. This thesis significantly contributes to both of these fronts while simultaneously elucidating some of the most intriguing aspects of the human visual system. In the first chapter, I use a combination of classical machine learning, artificial neural networks, and a joint MEG/fMRI neuroimaging dataset to reveal that the human visual system extensively processes highly memorable images in regions distributed throughout visual cortex late in time. In the second chapter, I present the BOLD Moments Dataset, a large-scale fMRI dataset using short video stimuli to extend computational models of visual processing into the video domain to better understand how humans process visual content unfolding over time. The last chapter introduces an fMRI dataset aggregation framework titled MOSAIC to achieve the scale and stimulus diversity needed for training modern neural networks directly on brain responses. This body of work exemplifies how large-scale experimental data and artificial neural networks can contribute towards a robust and generalizable understanding of human visual processing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164053</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring</title>
<link>https://hdl.handle.net/1721.1/164052</link>
<description>Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring
Afzal, Sayed Saad
This thesis describes how wireless sensing can drive significant advancements in climate and sustainability. Specifically, it shows how we can leverage diverse signals—acoustics, ultrasound, THz, and optics—in unconventional ways to unlock new capabilities in underwater climate monitoring, food safety, and disaster response. The thesis introduces two novel technologies. The first technology enables long-term, ultra-low power ocean sensor networks for use in climate modeling, marine monitoring, and sustainable aquaculture. Unlike existing IoT technologies – like Bluetooth, WiFi, and GPS – which cannot work underwater, we design and implement an ultra-low power subsea backscatter communication system, enabling battery-free underwater imaging, sensing, and localization. Second, the thesis describes a new technology that can support sustainability in agriculture through real-time food quality assessment that reduces food waste. In contrast to existing food quality technologies that require direct contact with produce, we introduce a new wireless system for accurate, non-invasive sensing using sub-THz signals. We describe the design, implementation, and evaluation of multiple systems that leverage these technologies to monitor the ocean and food quality: First, we present an ultra-wideband metamaterial sensor design that facilitates scalable, long-range, battery-free underwater communication. Next, we describe a system that can push the throughput of this technology using higher order modulation. Beyond building sensor networks, we demonstrate their real-world potential through two systems: one for underwater localization that uses rich spatio-temporal-spectral features for accurate positioning, and another for battery-free imaging that fuses acoustic and optical signals to capture color images in the dark. Finally, we present a novel solution for accurate fruit ripeness sensing using sub-terahertz wireless signals.
These systems unlock new IoT applications in climate modeling, aquaculture, robotics, and agriculture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164052</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Integration and Differentiation of Probabilistic Programs</title>
<link>https://hdl.handle.net/1721.1/164051</link>
<description>Automatic Integration and Differentiation of Probabilistic Programs
Lew, Alex K.
This thesis addresses the challenge of automating fundamental operations from probability theory and calculus on probability distributions defined by higher-order probabilistic programs. It does this by developing a suite of composable program transformations for an expressive core calculus for probabilistic programming:
• Integration: Compiling a probabilistic program into a deterministic representation of its expectation operator, handling potentially intractable integrals symbolically.
• Unbiased estimation: Transforming programs involving intractable operations (like integration) into runnable probabilistic programs that yield provably unbiased estimates of the original value, with flexible levers for users to navigate cost-variance trade-offs.
• Radon-Nikodym differentiation: Compiling probabilistic programs into implementations of a novel interface for the unbiased estimation of density ratios, of the sort that arise in Monte Carlo and variational inference.
• Differentiation: Extending automatic differentiation (AD) to compose with the above transformations, enabling the optimization of expected values and density ratios of probabilistic programs.
These transformations operate on an expressive higher-order probabilistic programming language and are proven correct using denotational semantics and logical relations. The resulting framework enables the sound and automated implementation of a wide range of algorithms for probabilistic inference and learning. To demonstrate the practical value of these techniques, we use them to implement three systems for scalable probabilistic inference in different domains: (1) extensions to the Gen probabilistic programming system that accelerate and automate a broad range of Monte Carlo and variational inference algorithms, (2) the PClean system for automated Bayesian reasoning about relational data, and (3) the GenLM system for controllable generation from language models.
We find that our techniques enable these systems to scale to a variety of complex, real-world problems, and to achieve state-of-the-art performance on a range of benchmarks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164051</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizer-space computation</title>
<link>https://hdl.handle.net/1721.1/164050</link>
<description>Minimizer-space computation
Ekim, Barış C.
As the volume of DNA sequencing data increases, so does the need for algorithmic advances to handle it efficiently. One such concept is minimizers, which are genomic substrings that allow for reduced representations of larger DNA sequences. In this thesis, we introduce minimizer-space computation as a new algorithmic paradigm for DNA sequence analysis. Instead of DNA nucleotides, we consider minimizers as the letters of an extended alphabet in which algorithms operate. We present several techniques for efficiently constructing these extended alphabets, demonstrate how to develop approaches that use these alphabets and consequently only a fraction of the sequence data, and show how fundamental biological tasks, such as genome assembly and read mapping, can be significantly accelerated over state-of-the-art methods.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164050</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performant and Resilient Service Composition for Modern Cloud Applications</title>
<link>https://hdl.handle.net/1721.1/164049</link>
<description>Performant and Resilient Service Composition for Modern Cloud Applications
Li, Tianyu
Modern cloud applications are often distributed systems composed from vendor-provided building blocks (e.g., object storage services, container orchestration services). Consequently, distributed fault-tolerance is a central concern for application correctness. Although each building block may offer individual fault-tolerance, the end-to-end application is still susceptible to failures, because the composition logic that orchestrates them may still fail. This thesis explores resilient composition, a systematic way to assemble fault-tolerant components into resilient end-to-end distributed applications. We begin by presenting the fail-restart system model, which captures the unique fault-tolerance challenges that arise when composing services. Based on this model, we define Composable Resilient Steps (CReSt), an atomic programming abstraction that guarantees fault-tolerance across the assembled application. We then detail efficient methods for implementing CReSt using a range of database techniques, and a novel distributed protocol that allows optimistic, speculative execution ahead of slower fault-tolerance safeguards. Together, these pieces allow developers to assemble fault-tolerant distributed systems that are correct by construction and often more performant than existing solutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164049</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Succinct Cryptography via Propositional Proofs</title>
<link>https://hdl.handle.net/1721.1/164048</link>
<description>Succinct Cryptography via Propositional Proofs
Mathialagan, Surya
The goal in modern cryptography is to obtain security while minimizing the use of computational resources. In recent years, we have been incredibly successful in our pursuit of efficiency, even for cryptographic tasks that were thought to be “science fiction”. For example, we have constructions of fully homomorphic encryption and private information retrieval from standard cryptographic assumptions which achieve the ideal levels of succinctness. However, there are still some tasks in cryptography where achieving the “ideal” efficiency from standard assumptions has evaded us. In this thesis, we study the problem of achieving succinctness in two such settings:
• Can we construct succinct indistinguishability obfuscation (IO) for Turing machines? In particular, can we construct an obfuscated program whose size is independent of the input length?
• Can we construct succinct non-interactive arguments (SNARGs) for all of NP?
While the problems seem unrelated at first glance, the root difficulty seems to stem from a similar place: both primitives have non-falsifiable security definitions. In fact, this type of barrier exists for many other cryptographic primitives, including witness encryption. This leads to a central question which we refer to as the “non-falsifiability barrier”: how can we construct non-falsifiable primitives from falsifiable assumptions? In this thesis, we show how to leverage propositional proofs to overcome the non-falsifiability barrier and make substantial progress toward achieving succinctness in both settings. Our main result is a universal construction of both SNARGs and succinct IO for Turing machines from standard assumptions using propositional proofs. We then show several applications, including rate-1 IO for many programs, the first succinct secret sharing schemes for monotone circuits, and many more.
Our results establish propositional proofs as a foundational tool for achieving succinctness across a broad range of cryptographic settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164048</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical and Algorithmic Thresholds in Spin Glasses</title>
<link>https://hdl.handle.net/1721.1/164047</link>
<description>Statistical and Algorithmic Thresholds in Spin Glasses
Huang, Brice
This thesis studies spin glasses, disordered complex systems originating in statistical physics. Such systems model optimization, sampling, and inference problems from probability and statistics, which are of fundamental importance to modern data science. In particular, spin glasses provide natural examples of random, high-dimensional, and often highly non-convex cost or log-likelihood functions, making them an excellent testing ground for such questions. Part I of this thesis studies statistical properties of these models. Chapter 2 identifies the storage capacity of the Ising perceptron, a simple model of a neural network, subject to a numerical condition. This gives a conditional proof of a 1989 conjecture of Krauth and Mézard. Chapter 3 gives a new proof of the celebrated Parisi formula for the free energy of the spherical mean-field spin glass, which was first proved by Talagrand and in more generality by Panchenko. Our proof takes a simpler modular approach, drawing on recent advances in spin glass free energy landscapes due to Subag. Chapter 4 characterizes the topology trivialization phase transition of multi-species spherical spin glasses and shows that low-temperature Langevin dynamics finds the ground state in the topologically trivial regime; the latter result is new even in the single-species setting. Part II of this thesis concerns algorithms for optimization and sampling problems on spin glasses. Chapter 5 studies the problem of optimizing the Hamiltonian of a multi-species spherical spin glass. Our main result exactly characterizes the maximum value attainable by a class of algorithms that are suitably Lipschitz in the disorder. This class includes gradient-based algorithms and Langevin dynamics on constant time scales, and in particular includes the best algorithm known for this problem.
This chapter is part of a series of works where we establish exact algorithmic thresholds using the branching overlap gap property (OGP), a landscape property introduced in our earlier work (which appears in our S.M. thesis). In this chapter, we develop a more robust way to establish the branching OGP that does not require Guerra’s interpolation; this allows our method to be applied to models well beyond the (single-species) mean-field spin glass we previously considered. Chapters 6 and 7 study sampling from the Gibbs measure of a spherical mean-field spin glass. Chapter 6 develops a sampling algorithm based on simulating Eldan’s stochastic localization scheme, while Chapter 7 analyzes simulated annealing of Langevin dynamics. We prove both algorithms succeed for inverse temperatures up to a stochastic localization threshold. Chapter 6 gives the first stochastic localization-based sampler with a guarantee of vanishing total variation error, improving on earlier algorithms with vanishing Wasserstein error. Chapter 7 provides the first provable guarantees for a Markov chain in this model beyond the uniqueness threshold, where mixing from worst-case initialization is provably slow.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164047</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing</title>
<link>https://hdl.handle.net/1721.1/164046</link>
<description>A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing
Hu, Beili
Neutral atom arrays have rapidly emerged as a leading platform for quantum computing, boasting scalable, configurable arrays of single atoms trapped in optical tweezers, fast, high-fidelity entangling gates through Rydberg interactions, and programmable, parallelized control of qubit operations. Coupling an atom array to an optical cavity opens a new frontier. Leveraging enhanced light-atom interactions in cavity quantum electrodynamics, cavity-coupled atom arrays acquire capabilities that can further expand the neutral atom toolbox, including cavity-enhanced atom readouts, atom-photon entanglement, and photon-mediated interactions between distant atoms.&#13;
&#13;
This thesis presents a quantum hardware platform that integrates an array of neutral atoms with a high-finesse optical cavity. After describing the design and development of the experimental apparatus, I demonstrate high-fidelity atom state readout through the cavity, achieving improved speed and atom survival compared to conventional free-space imaging methods. I then introduce a new technique for selectively controlling atom-cavity coupling on arbitrary subsets of the array, using local AC Stark shifts on the excited states of the atoms. Building on these tools, I demonstrate fast, non-destructive cavity-based readout of atom arrays, addressing a crucial bottleneck of atom array platforms. I also showcase real-time measurement and feedback capabilities with a demonstration of classical error correction, using a register of atomic bits. Finally, I describe progress toward implementing single- and two-qubit gates within the cavity-coupled system. By combining coherent control, tunable interactions, and high-fidelity, non-destructive readout integrated with real-time feedback, the cavity-coupled Rydberg atom array offers a promising path toward fault-tolerant quantum computing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164046</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation, Prediction and Counterfactual Inference with Dependent Observations</title>
<link>https://hdl.handle.net/1721.1/164045</link>
<description>Estimation, Prediction and Counterfactual Inference with Dependent Observations
Kandiros, Anthimos Vardis
The success of modern data science is largely driven by access to large-scale, high-dimensional data. Much of classical machine learning has been developed under the assumption that this data is generated independently from some distribution. However, this assumption is often violated when data exhibit complex dependencies across a spatial or temporal domain, or due to social interactions. In this thesis, our goal is to design and analyze methods that address these dependencies for performing three fundamental estimation tasks: unsupervised learning, supervised learning and counterfactual inference. In unsupervised learning, we observe a sequence of unlabeled examples and our goal is to infer some structural property of the distribution they came from. The presence of dependencies could severely complicate this question. Our results in this direction encompass both fully observable as well as latent variable models. For fully observable models, we use the celebrated Ising model to describe the dependencies. Assuming we have access to a single sample from some Ising model, which captures a variety of real-world scenarios, we design and analyze polynomial time algorithms for recovering the matrix corresponding to the network structure of the model. We then leverage these techniques to obtain improved guarantees for estimating Ising models in Total Variation (TV) distance from multiple samples. For latent variable models, we focus on the case where the structure is a tree and we get samples from the leaves, which is a common scenario in phylogenetics. Assuming the model is Gaussian, we analyze the behavior of the Expectation-Maximization (EM) algorithm, a popular heuristic for latent variable models. We show that for trees with a single latent node, EM converges to the true model and for general tree topologies, the only stationary point in the interior of the domain is the true model. 
We then shift our focus to discrete models and study latent tree Ising models, for which we provide polynomial time algorithms for learning the distribution of leaves in TV distance. In supervised learning, we observe a sequence of feature-label pairs and our task is to learn the predictive relationship between the features and the labels. Here, this relationship could be confounded by the presence of dependencies among labels. We formulate this question as a regression problem, where the labels of the units follow the joint distribution of an Ising model with an unknown strength parameter and external fields that are determined by the regression function. We characterize the minimax optimal rate of estimation for the various parameters and provide an efficient algorithm that achieves it. Interestingly, it might not be possible to estimate all the parameters in some cases. In counterfactual inference, we focus on the design of network experiments, where the treatment of a unit could affect the outcome of a neighboring unit in an underlying graph. Our goal is to estimate a general causal effect that can be defined as the average difference in outcomes for a unit under two different interventions. For an arbitrary such effect, we propose an experimental design, called the conflict graph design. For an unbiased estimator of that effect, we prove bounds on its variance that yield the best known rates of estimation for various effects studied in the literature, such as the average direct effect and the total effect, but also provide estimation rates for effects that have received less attention from the perspective of experimental design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164045</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach</title>
<link>https://hdl.handle.net/1721.1/164044</link>
<description>Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach
Dréan, Jules Guillaume Jacques Bénony D
Trusted Execution Environments (TEEs) [1–5] promised to enable secure computation even in the presence of privileged adversaries by providing hardware-enforced isolation. However, the discovery of microarchitectural side-channel and transient execution attacks [6–10] has severely undermined these security guarantees. These attacks exploit shared hardware resources and speculative execution to leak sensitive information across security boundaries, effectively bypassing the architectural isolation enforced by TEEs. The widespread impact of these vulnerabilities is evidenced by more than 43 published attacks [11] targeting commercial TEE platforms including Intel SGX, AMD-SEV, and ARM TrustZone. Existing approaches to defend against these attacks face significant limitations. Hardware-based solutions [12–14] often require complex processor modifications with significant hardware overhead. Replacing trusted hardware with cryptographic approaches incurs prohibitive performance overheads [15]. Meanwhile, formal verification methods struggle to scale to realistic code base sizes and often fail to capture subtle microarchitectural behaviors [16–18]. This thesis proposes a constructive approach to TEE security and demonstrates that practical defenses against microarchitectural attacks are achievable through careful system design. Rather than relying only on models and simulations, we focus on constructing systems that are secure by design. Our work is concretely realized through the design, implementation, and evaluation of two novel platforms: First, we present Citadel, a TEE platform that enables secure shared memory while providing precise guarantees against microarchitectural side-channel attacks. Citadel introduces relaxed microarchitectural isolation (RMI), a novel security property that allows programs to share memory while restricting information leakage to that of a non-speculative execution. 
To achieve RMI, Citadel combines hardware-enforced microarchitectural isolation with two simple mechanisms for controlled speculation: SpecSafe, which prevents speculative shared-memory accesses entirely, and Burst mode, which enables better performance through constrained speculation on small code snippets. Through a fully functional FPGA prototype, we demonstrate that Citadel can run real-world applications including cryptographic libraries and private ML inference with less than 5% overhead while maintaining strong security guarantees. Second, we develop Argos, an “integrity-only” TEE specifically designed for verifiable fully homomorphic encryption, that enables the deployment of FHE schemes in real-world settings where malicious security is required. We show that by carefully constraining the attack surface and employing simple hardware mechanisms, we can achieve complete security against microarchitectural attacks. Argos introduces a simplified transcript-based attestation scheme that only requires one signature per FHE computation, amortizing the cost of relying on a physical TPM to microarchitecturally isolate secrets. Argos not only enforces circuit-level integrity of FHE schemes but can also be extended to support more complex FHE-based applications that take (potentially poisoned) input from the (malicious) circuit evaluator. Argos is compatible with commodity hardware and only incurs minimal performance overhead with an average of 3% overhead for FHE evaluation and 8% overhead for complex protocols. Through these systems, we show that effective defenses can be built against microarchitectural side-channel and transient execution attacks. Our constructive approach yields practical systems that are secure by design while maintaining efficiency and usability. This thesis opens new possibilities for the deployment of trusted hardware by demonstrating concrete paths toward robust microarchitectural security.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164044</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steering Robots with Inference-Time Interactions</title>
<link>https://hdl.handle.net/1721.1/164043</link>
<description>Steering Robots with Inference-Time Interactions
Wang, Yanwei
Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to correct its behavior. While collecting additional data for finetuning can address such issues, doing so for each downstream use case is inefficient at deployment. My research proposes an alternative: keeping pretrained policies frozen as a fixed skill repertoire while allowing user interactions to guide behavior generation toward user preferences at inference time. By making pretrained policies steerable, users can help correct policy errors when the model struggles to generalize—without needing to finetune the policy. Specifically, I propose (1) inference-time steering, which leverages user interactions to switch between discrete skills, and (2) task and motion imitation, which enables user interactions to edit continuous motions while satisfying task constraints defined by discrete symbolic plans. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164043</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation</title>
<link>https://hdl.handle.net/1721.1/164042</link>
<description>Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation
Lin, Yujun
The explosive growth of artificial intelligence (AI) technologies, particularly large-scale deep learning models such as large language models and diffusion models, has intensified the demand for efficient full-stack inference solutions that effectively balance performance and costs. This work presents a comprehensive exploration of algorithm-system co-optimization, hardware design specialization, and automation for scalable AI deployment. First, we begin with algorithmic optimization for large-scale models, including large language models and diffusion models, developing inference libraries that leverage quantization to boost the performance of generative AI on existing GPU platforms. Next, we design specialized hardware accelerators for domain-specific applications, specifically point cloud understanding, emphasizing efficiency improvements through the exploitation of data sparsity. Finally, we open up the hardware design space beyond template-based sizing, and progress into the automated learning-based co-design of neural network and hardware architectures, maximizing their synergy with a full-stack joint optimization. We then introduce an automated framework for spatial accelerator generation, transforming high-level mappings into custom hardware designs that support scalable deployment. Together, these contributions advance AI inference efficiency by bridging the gap between advanced computational requirements and hardware capabilities, between theoretical potential and practical solutions, and between design cost and effectiveness.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164042</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Theoretic Foundations for Understanding Quantum Systems</title>
<link>https://hdl.handle.net/1721.1/164041</link>
<description>Learning Theoretic Foundations for Understanding Quantum Systems
Liu, Allen
Understanding and harnessing the power of quantum systems has the potential to transform many domains in science and technology. However, before we can achieve these aspirations, we must first build a better understanding of how quantum systems fundamentally behave. In this thesis, we approach this question through the lens of learning theory to develop new paradigms for learning about quantum systems and understanding their structural properties. We deliver several surprising results, upending previous beliefs about even fundamental laws and giving provably efficient algorithms for learning about quantum systems in settings previously conjectured to be intractable. Typically in quantum many-body systems, the particles in the system interact locally with respect to some geometry as described by a local Hamiltonian. Two key questions are first, understanding equilibrium properties of a system with a given Hamiltonian and second, recovering the Hamiltonian from measurements of the properties of the system. For the first, we prove a universal law that there is a sudden death of entanglement at a critical temperature that depends only on the geometry, not on the system size. For the second, we give the first efficient algorithm for recovering the Hamiltonian at any temperature, breaking a conjectured barrier at low temperatures. Beyond systems with local interactions, we also consider learning and testing properties of general quantum states, focusing on the interplay between statistical complexity and near-term quantum device constraints, only allowing for entangled measurements over a limited number of copies of the state. We characterize the optimal rates for learning and testing with single-copy measurements and for multi-copy measurements in many relevant near-term regimes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164041</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Domain Wall Based Magnonics in Iron Garnet</title>
<link>https://hdl.handle.net/1721.1/164040</link>
<description>Domain Wall Based Magnonics in Iron Garnet
Gross, Miela J.
Magnonic devices leverage magnons, quantized spin waves, as the mechanism to process and transfer information. In materials with low Gilbert damping, these spin wave-based systems enable ultra-fast operation while eliminating thermal heating and leakage currents inherent to conventional electron-based microelectronics. To maximize energy efficiency and processing speed, materials like iron garnets, ferrimagnetic insulators with tunable magnetic properties, are essential. Key magnetic parameters, including saturation magnetization, perpendicular magnetic anisotropy, coercivity, and Gilbert damping, can be tailored through elemental substitution or strain engineering in thin films. Furthermore, relativistic domain wall velocities reported in yttrium iron garnet (YIG), bismuth substituted YIG, and thulium iron garnet lay the foundations for high-speed operation. These unique attributes position garnets as ideal materials for the development of magnonic devices that integrate efficiency, speed, and versatility. This thesis presents my research on integrating thin film garnets into domain wall based magnonic devices. It begins by exploring the magnetic characterization of thin film iron garnets, including the growth process, temperature dependent magnetic behavior, and tunable magnetic anisotropy. Next, we report on magnonics within the garnet, focusing on the interactions between spin waves and domain walls. Finally, we demonstrate a write mechanism for a magnonic device driven by spin wave-induced domain wall motion, providing detailed characterization of the device behavior and performance. These results underscore the potential of iron garnets for magnonic device applications and offer insights into the efficiency of the write mechanism, paving the way for energy-efficient high-speed spintronic technologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164040</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)</title>
<link>https://hdl.handle.net/1721.1/164039</link>
<description>Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)
Drago, John M.
High-field magnetic resonance imaging (MRI) using a standard volume coil results in a spatially varying flip angle across the body, which renders images difficult to clinically interpret. This arises from the complex interactions of electromagnetic fields from current-carrying elements surrounding the imaging region. Parallel transmission (pTx) mitigates this issue by employing multiple high-power, independently controlled transmit elements for more precise excitation control. However, since the wavelength of the applied radio waves is shortened in tissue, the effect becomes highly dependent on the patient’s anatomy. As a result, optimization must be performed on a patient-by-patient basis, and methods that attempt full control of these independent waveforms are too computationally intensive to execute during the limited examination time. Additionally, the high-field excitations create complex electric field distributions that require control and careful monitoring to avoid excessive tissue power deposition (and ultimately heating), quantified as the specific absorption rate (SAR). To address these challenges, we introduce a method for optimizing patient-specific pulses using a global waveform (Ritz) approach, enabling rapid, in-scanner optimization. While pTx effectively addresses flip angle inhomogeneity, it remains costly and introduces challenges in SAR management. We address the SAR management and cost problems of pTx by introducing and characterizing the MP-pTx method, which leverages the multiphoton phenomenon to improve homogeneity using a standard volume coil supplemented with low-frequency (kilohertz) parallel channels. MP-pTx reduces costs and simplifies SAR management by shifting the parallel irradiation to low-cost, low-SAR shim array channels. These channels supplement an off-resonant excitation from a conventional birdcage coil with an oscillating, z-directed field that satisfies the resonance condition for spin state transitions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164039</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion</title>
<link>https://hdl.handle.net/1721.1/164038</link>
<description>Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion
Miller, Adam Joseph
In recent years, reinforcement learning has demonstrated its promise as a powerful tool for developing innovative and advanced control systems for legged robots. The method’s robustness, versatility, and generality have made it a prime candidate for future robotic systems deployed in the real world. Through the development of more advanced machine learning algorithms and more reliable and efficient physics simulators, reinforcement learning continues to improve and enable new, dynamic, and agile capabilities. While the results are often impressive and the tools relatively beginner-friendly, there remain impediments to scalable and reliable progress. Poor reward function scaling, challenges balancing exploration versus exploitation, and misalignment from the engineer’s intent are roadblocks to better performance. To get beyond these limitations, new tools and frameworks are necessary. In this work, I present novel methods to address these challenges and extend the capabilities of reinforcement learning on robot hardware. Through the quantification of the distributional sim-to-real gap, simulation model optimization for hardware matching, latent space motion sequence planning, and latent style training, I demonstrate never-before-seen performance on legged hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164038</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Weak Supervision: Theory, Methods, and Applications</title>
<link>https://hdl.handle.net/1721.1/164037</link>
<description>Learning from Weak Supervision: Theory, Methods, and Applications
Lang, Hunter
The growing demand for high-quality labeled data to train machine learning models has driven widespread adoption of weak supervision and synthetic data methods, which use automated models instead of humans for annotation. Large language models (LLMs) have further accelerated this trend because their zero- and few-shot classification performance enables them to serve as effective “synthetic annotators” for various tasks. In practice, the data generated by these weak annotators is imperfect, but it enables the training of strong models. However, theoretical understanding of why training one model on the outputs of another leads to strong performance remains limited, especially when the annotator model exhibits suboptimal performance on the target task. In this thesis, I develop a theoretical framework for learning from weak supervision that captures the key aspects of the problem better than existing approaches in the crowdsourcing and learning-with-noisy-label literature. This framework establishes structural conditions that explain when and why weak supervision can reliably train strong models. Building on these theoretical results, the second part of the thesis introduces methods to improve how models learn from weak supervision and applies these methods to low-labeled-data settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164037</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning</title>
<link>https://hdl.handle.net/1721.1/164036</link>
<description>Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning
Ryou, Gilhyun
Data-driven methods have significantly advanced robot learning, yet their direct application to real-world robots remains challenging, particularly under extreme conditions. This challenge is especially pronounced for highly maneuverable vehicles like quadrotor aircraft, which often operate in scenarios requiring rapid maneuvering, such as racing, defense systems, or safety-critical obstacle avoidance. In such extreme conditions, real-world constraints like control delays, state estimation errors, and battery voltage fluctuations often compromise trajectory reliability, even when conforming to ideal dynamics. However, typical data-driven methods are developed in simulated environments. Consequently, the transition to real-world dynamics requires extensive fine-tuning, which can be risky, as perfect training in simulations does not guarantee safe transitions to real-world dynamics. This thesis employs methods from optimal experiment design to address these challenges. By quantifying uncertainty and maximizing information gain, the approach aims to safely and efficiently design the real-world experiments required for accurate constraint modeling. In the first chapter, we present a multi-fidelity Bayesian optimization method that searches for time-optimal speed profiles for quadrotor aircraft, effectively balancing numerical simulations with real-world flight experiments. The second chapter extends the optimal experiment design method to a high-dimensional online planning problem through integration with reinforcement learning. The proposed algorithms, trained and validated through real-world flight experiments, significantly outperform baseline methods in trajectory time and computational efficiency. Additionally, these algorithms have been adapted to various planning problems, including fixed-wing aircraft planning, cooperative multi-drone systems, and energy-efficient trajectory generation.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164036</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Systems for Large-Scale Graph Representation Learning</title>
<link>https://hdl.handle.net/1721.1/164035</link>
<description>Efficient Systems for Large-Scale Graph Representation Learning
Huang, Tianhao
Graph representation learning has gained significant traction in critical domains including finance, social networks, and transportation systems due to its successful application to graph-structured data. Graph neural networks (GNNs), which integrate the power of deep learning with graph structures, have emerged as the leading methods in this field, delivering superior performance across diverse graph-related tasks. However, training graph neural networks on large-scale datasets encounters scalability challenges on current system architectures. First, the sparse, non-localized structures of real-world graphs lead to inefficiencies in data sampling and movement. This characteristic heavily stresses system input/output (I/O), particularly burdening the peripheral buses during the sampling phase of GNN training. Second, the suboptimal mapping of the training procedure to GPU kernels leads to compute inefficiencies, including substantial kernel orchestration overhead and redundant computations. Addressing these challenges requires a comprehensive, full-stack optimization approach that fully leverages hardware capabilities. This thesis presents two complementary works to achieve the goal. The first work, Hanoi, unblocks the data loading bottleneck in out-of-core GNN training by co-designing the sampling algorithms to align with the hierarchical memory organization of commodity hardware. Hanoi drastically reduces I/O traffic to external storage, delivering up to 4.2× speedup over strong baselines with negligible impacts on the model quality. Notably, Hanoi is able to obtain competitive performance close to in-memory training with only a fraction of memory requirements. Building on this foundation, the second work, Joestar, introduces a unified framework for optimized GNN training on GPUs. Joestar adapts the multistage sampling approach from Hanoi to in-memory training, freeing CPUs from heavy data loading workloads. 
Joestar also identifies novel kernel fusion opportunities and formulates better execution schedules by jointly considering the sampling and compute stages. Combined with compiler infrastructure in PyTorch, Joestar achieves state-of-the-art GNN training throughputs for billion-edge graph datasets on a single GPU.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164035</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Long-Horizon Robotic Manipulation under&#13;
Uncertainty and Partial Observability</title>
<link>https://hdl.handle.net/1721.1/164034</link>
<description>Generalizable Long-Horizon Robotic Manipulation under&#13;
Uncertainty and Partial Observability
Curtis, Aidan
A central goal in embodied artificial intelligence is to enable autonomous agents to accomplish complex, long-horizon tasks in novel, partially observable environments. In these scenarios, agents must effectively reason about uncertainty, generalize from limited experiences, and proactively plan actions to acquire missing information. This thesis tackles these core challenges by developing and evaluating novel methods specifically designed for partially observable contexts. The first part of this thesis introduces an enhanced heuristic-guided planning technique that increases search efficiency in sparse-reward domains with significant uncertainty. Next, we investigate how symbolic reasoning can be integrated into the decision-making framework, accelerating search through the use of temporal and belief-space abstractions. We then propose a method for sequencing low-level reinforcement learning skills alongside information gathering actions, enabling increased task complexity and robustness in real-world tasks. Lastly, we show how large language models may be leveraged for few-shot model learning, allowing agents to rapidly adapt and generalize to new scenarios. The methods presented in this thesis advance the state-of-the-art in embodied AI by enabling robots to better handle uncertainty and incomplete information, ultimately paving the way for more capable, exploratory, and risk-aware autonomous systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164034</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Abstractions for Robust Hierarchical Manipulation Planning</title>
<link>https://hdl.handle.net/1721.1/164032</link>
<description>Adaptive Abstractions for Robust Hierarchical Manipulation Planning
Noseworthy, Michael S.
In this thesis, we address the problem of long-horizon robotic manipulation under partial observability. Tasks such as gearbox assembly or tidying a workstation involve many objects and necessitate a variety of manipulation capabilities. These long-horizon tasks are commonly addressed by hierarchical approaches, which introduce state and action abstractions to make planning tractable. However, these abstractions often rely on imperfect models of the world, which can lead to brittle execution. Furthermore, these abstractions depend on having accurate state information, which is often only noisily sensed, if sensed at all. For example, in the assembly domain, the pose of each part may only be known within a few millimeters, and a box’s mass distribution may be completely unsensed. To deploy robots outside of structured environments like the factory, they will need to be robust to model misspecification and partial observability. The central idea of this thesis is that we can develop adaptive abstractions to improve the robustness of hierarchical planning once the robot is deployed. Adaptive abstractions incorporate observations from the real world that are informative about misspecifications and partial observability, essentially allowing the planner to adapt to its deployment environment. We explore this idea by developing three types of models that enable this adaptivity at different levels of the abstraction hierarchy: plan feasibility models, adaptive samplers, and reactive control policies. In our first contribution, we consider adding adaptivity to a task and motion planning system at the task-planning level. We focus on a setup where the robot has access to a set of parameterized skills, but these skills are derived from imperfect models. To enable robust planning, we propose to autonomously learn skill feasibility models once the robot is deployed through a curious exploration phase. 
Critically, we propose a novel active learning framework to enable efficient learning without human intervention. We show that the resulting feasibility model leads to robust task performance on multiple downstream tasks in a stacking domain. Our second contribution looks at developing adaptive samplers that can incorporate information about object state that is typically unobserved (e.g., inertial and frictional properties). General-purpose belief representations can handle this partial observability, but online inference is computationally expensive. Instead, we propose to use an offline phase to learn an inference network that directly predicts a distribution over object properties that is consistent with the interaction history. We show that inference networks enable efficient adaptation in a grasping domain with heavy objects. Our final contribution focuses on learning adaptive controllers such that robustness is handled at the lowest level of the abstraction. We consider precise contact-rich manipulation tasks that are sensitive to pose estimation errors. To overcome noisy poses at the control level, explorative contact is necessary, but unintended forces can lead to catastrophic outcomes such as part slippage or damage. We propose to use simulation in an offline phase to train reactive force-aware policies. The policies are trained to overcome pose uncertainty while using force-sensing to adaptively limit excessive forces. The result is robust real-world performance on the multistage assembly of a planetary gearbox system, which includes insertion, gear-meshing, and nut-threading tasks. In summary, adaptive abstractions can be used to increase the robustness of hierarchical manipulation planning, an important step in deploying robots outside of the lab or factory. Throughout the thesis, we validate the proposed approaches on the real robot in stacking and assembly domains.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164032</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Generalization Under Distribution Shift</title>
<link>https://hdl.handle.net/1721.1/164031</link>
<description>Methods for Generalization Under Distribution Shift
Netanyahu, Aviv
Machine learning systems have achieved remarkable performance in tasks where test data closely resembles the training distribution. However, real-world applications often require systems capable of handling more challenging situations -- specifically, adapting to new tasks and extrapolating to data points outside the distribution of the training set. The current paradigm for handling distribution shifts is to collect large datasets and train models on them. This work offers two more principled frameworks that enable machine learning models to generalize effectively to out-of-distribution scenarios without sacrificing the power of modern overparameterized models.&#13;
&#13;
The first framework converts an out-of-support zero-shot generalization problem into an out-of-combination problem via a transductive reparameterization, which is possible under low-rank style conditions. We explore how this idea can be applied to domains like robotics, where the environment is changing, and materials and molecular design, where predicting properties of materials or molecules outside of known ranges is crucial to driving more efficient materials discovery.&#13;
&#13;
The second framework focuses on few-shot task learning, which involves agents learning new tasks from minimal data and applying them to new environments. We formulate the problem of few-shot task learning as Few-Shot Task Learning through Inverse Generative Modeling, which allows us to leverage the power of neural generative models pretrained on a set of base tasks. We adapt a method for efficient concept learning to few-shot task learning based on our formulation and rapidly learn new tasks with only a few examples, enabling task execution from autonomous driving to real-world robotic manipulation tasks in novel settings without the need for extensive retraining.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164031</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling 3D Scene Perception via Probabilistic Programming</title>
<link>https://hdl.handle.net/1721.1/164030</link>
<description>Scaling 3D Scene Perception via Probabilistic Programming
Gothoskar, Nishad
Understanding and interpreting the 3D structure of the world is a central challenge in artificial intelligence. Our physical world is 3D, yet our AI systems often “see” that world through pixels and images. In order to build truly intelligent AI systems, we must go beyond pixels and images and build 3D vision systems that can construct meaningful and useful 3D representations of the world. This is the problem of 3D scene perception. How do we transform raw visual input into 3D representations of the world? 3D scene perception has numerous applications from robotics to augmented reality. Despite the advances over the last decade, 3D perception remains a major bottleneck in real-world robotics applications. The challenge stems from the immense variability in real-world conditions, e.g., lighting, color, viewpoint, camera properties, and object appearance; the incompleteness of visual data due to limited resolution, noise, and occlusions; and the approximations in our models of visual data. Developing more robust and generalizable 3D perception systems would be an important step towards more general-purpose robotics. In this thesis, we explore a probabilistic architecture for 3D perception based on structured generative models and probabilistic programs. We begin with 3DP3, the first iteration of our approach, which infers 3D scene graphs from real-world depth image data. 3DP3 demonstrates that our method can work on real-world benchmarks and correct commonsense errors from deep learning systems. Building on this foundation, we develop Bayes3D, which scales up these ideas using a GPU-accelerated image likelihood and generative model alongside a parallel coarse-to-fine inference algorithm. Next, we explore two approaches for incorporating RGB image data into generative 3D graphics programs, expanding their applicability. 
We then introduce DurableVS, which extends inverse-graphics techniques to model scenes involving a robot and multiple cameras, enabling precise control of a robot. Finally, we present Gen3D, which integrates all the key ideas from this thesis into a real-time 3D perception system that uses multi-resolution probabilistic models of 3D matter to enable real-time tracking that is competitive with vision transformers and 3D Gaussian splatting, state-of-the-art methods in computer vision and computer graphics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164030</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Generative Models for Visual Synthesis</title>
<link>https://hdl.handle.net/1721.1/164029</link>
<description>Efficient Generative Models for Visual Synthesis
Yin, Tianwei
While current visual generative models produce high-quality outputs, they suffer from significant computational costs and latency, limiting their applicability in interactive settings. In this dissertation, we introduce a suite of techniques designed to enhance the efficiency of generative models for image and video synthesis. First, we propose distribution matching distillation, a method that enables the training of one- or few-step visual generators by distilling knowledge from computationally expensive yet highly capable diffusion models. Next, we develop improved distillation techniques that enhance robustness and scalability, culminating in a production-grade few-step image generator. This system is now deployed in widely used software, generating hundreds of millions of images annually. Finally, we extend our approach to video generation by adopting an autoregressive paradigm, significantly reducing latency and enabling fast interactive video generation and world simulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164029</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves</title>
<link>https://hdl.handle.net/1721.1/164027</link>
<description>Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves
He, Hao
Remote monitoring of sleep and physiological signals is critical for tracking human health, managing diseases, and enabling early intervention. However, existing monitoring solutions face two major limitations: (1) they are often unsuitable for vulnerable populations—such as infants and seniors—and (2) most of them raise concerns about measurement accuracy. We propose a novel, contactless approach that addresses both challenges by combining advances in artificial intelligence (AI) and radio-frequency (RF) sensing. Our solution makes monitoring more comfortable, accessible, and affordable, while still delivering clinically meaningful insights. This thesis makes four fundamental contributions: First, we introduce a system that can extract high-fidelity breathing signals from ambient RF reflections, even in complex scenarios where multiple individuals are present, such as couples sharing a bed. Second, we develop an AI-based sleep monitoring framework that generates sleep hypnograms and detects respiratory events entirely without the need for on-body sensors. Third, we develop AI models that infer critical biomarkers—such as blood oxygen saturation (SpO₂) and inflammation (C-reactive protein levels)—in a fully passive and non-intrusive manner. Finally, inspired by the success of large language models, we show that physiological signals can be represented and interpreted analogously to language. This insight enables effective translation between modalities (e.g., from respiration to EEG) and unlocks robust representation learning for downstream clinical tasks. Together, these contributions establish a new paradigm for remote sleep and physiological monitoring—one that is contactless, continuous, and passive. We validate our system on real-world datasets and demonstrate its potential to fundamentally transform clinical care and home health monitoring.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164027</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field</title>
<link>https://hdl.handle.net/1721.1/164007</link>
<description>The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field
Weber, Robert.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Physics, 1959; Includes bibliographical references (leaves 46-47).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164007</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of braced excavations.</title>
<link>https://hdl.handle.net/1721.1/164000</link>
<description>Analysis of braced excavations.
Wong, Ing Hieng.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1971; Three leaves on transparent sheets. Vita.; Bibliography: leaves 95-99.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164000</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling rail freight management.</title>
<link>https://hdl.handle.net/1721.1/163998</link>
<description>Modelling rail freight management.
Assad, A. (Arjang)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1978; Vita.; Bibliography: leaves 277-292.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163998</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolism in vivo of 1,3-butanediol in the rat</title>
<link>https://hdl.handle.net/1721.1/163995</link>
<description>Metabolism in vivo of 1,3-butanediol in the rat
Nahapetian, Aratoonnaz
The metabolism of 1,3-butanediol (BD) was investigated in vitamin B12-deficient and normal rats and in liver slice and diaphragm systems. Body weight gain and feed efficiency were determined in rats fed ad libitum for five weeks on a basal, 5% BD, or 5% sodium propionate diet with and without vitamin B12. The rats were train-fed for ten months on the same diets. The presence of sodium propionate in vitamin B12-deficient basal diets resulted in reduced food intake, while BD had the opposite effect. As a result, vitamin B12-deficient rats fed a 5% sodium propionate diet grew less than those fed a 5% BD diet. The metabolism in vivo of BD labeled in carbon-1 (BD-1-C14) and carbon-4 (BD-4-C14) was compared to the metabolism of propionate-1-C14 (PRP-1-C14) in vitamin B12-deficient and normal rats. Vitamin B12 deficiency reduced the oxidation of sodium propionate but not that of BD, and had no effect on glycogen labeling from BD-1-C14 and BD-4-C14. For PRP-1-C14, however, vitamin B12 deficiency resulted not only in no incorporation of label, but liver glycogen levels were also very small. On the other hand, when vitamin B12 was present in the diet, the labeling of glycogen from propionate was higher than that from either of the BD-labeled test compounds. Methylmalonic aciduria and urinary loss of ingested activity were higher in vitamin B12-deficient rats fed PRP-1-C14 than in those fed labeled BD. Nearly all of the urinary activity of vitamin B12-deficient rats fed PRP-1-C14 was in the form of methylmalonic acid (MMA), while little, if any, of the activity was found in the MMA fraction of urine of vitamin B12-deficient rats fed labeled BD. The metabolism in vivo of BD-C14 and BD-3-C14 was investigated in normal rats. About eighty percent of BD was oxidized to carbon dioxide within 32 hours. Its oxidation in the first eight hours was higher when BD was administered intraperitoneally than when it was fed by stomach tube. 
The loss of ingested activity in the urine, expressed as a percentage of total intake, and 1,3-BD was higher at the higher doses of BD. However, the activity in urinary BD could not account for all the activity in the urine. A considerable amount of ketone bodies was detected in the urine of rats after feeding BD, while no detectable ketone bodies were found in the urine of control rats. In addition, the relative specific activities of urinary BD and β-hydroxybutyrate were 0.91 and 0.50, respectively. Polarimetry of both purified urinary BD and β-hydroxybutyrate showed that the percentages of the (+)- and (-)-isomers of both compounds were 40 and 60%, respectively. The metabolism in vitro of BD-3-C14 and DL-β-hydroxybutyrate-4-C14 was investigated in systems which contained liver slices alone, diaphragm alone, or both liver slices plus diaphragm. The oxidation rate of β-hydroxybutyrate was lower in liver slices than in either the diaphragm or the liver slices plus diaphragm systems. Moreover, the rate of oxidation of β-hydroxybutyrate was highest in the system which included both liver slices and diaphragm. On the other hand, the oxidation rate of BD was lower in the system which had only diaphragm than in the other two systems. However, the rate of BD oxidation was highest in the system which included both liver slices and diaphragm. The presence of BD gave rise to increased D-(-)-β-hydroxybutyrate and acetoacetate in systems which contained liver slices or liver slices plus diaphragm. In addition, the production rate of D-(-)-β-hydroxybutyric acid was higher than that of acetoacetate in the presence of BD, while the opposite was true in its absence. Finally, all the radioactivity in the control incubation media was accounted for by BD-3-C14, while about 1.5 and 98.5 percent of incubation media activity were recovered in the β-hydroxybutyrate and BD peaks, respectively, in incubation systems containing liver. 
The results of this study indicate that 1,3-BD and sodium propionate do not share a common metabolic pathway in the rat. The data suggest, however, that 1,3-BD is most probably oxidized to β-hydroxybutyric acid by a "1,3-butanediol dehydrogenase" whose activity is higher in the liver than in the diaphragm. Moreover, the (+)-isomer of BD is oxidized at a faster rate than the (-)-isomer, suggesting that the two isomers are oxidized by two different pathways.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1971; Thesis supervised by Sanford A. Miller.; Vita: page 196; Includes bibliographical references (pages 182-196)
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163995</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New approaches to diagnostic imaging: Magnetic particle&#13;
imaging for human functional neuroimaging and short&#13;
mid-field MRI magnet design</title>
<link>https://hdl.handle.net/1721.1/163721</link>
<description>New approaches to diagnostic imaging: Magnetic particle&#13;
imaging for human functional neuroimaging and short&#13;
mid-field MRI magnet design
Barksdale, Alex Christopher
Part I: Magnetic Particle Imaging for Human Functional Neuroimaging While Magnetic Resonance Imaging (MRI) has revolutionized diagnostic imaging since its clinical introduction in the 1980s — primarily focusing on hydrogen nuclei — it remains fundamentally limited by the weak nature of nuclear spin magnetism. For example, functional MRI (fMRI) provides valuable insights into brain activity through BOLD signaling, but its limited sensitivity and reliance on indirect physiological measures often necessitate large subject pools for meaningful analysis. In contrast, Magnetic Particle Imaging (MPI) utilizes the much stronger magnetism associated with superparamagnetic iron oxide nanoparticles (SPIONs), and by minimizing background signal levels which are not modulated by functional activity, it offers a promising alternative. However, there are no approved SPION tracers for human use that are well-suited to MPI, and we have little experience scaling this technology up to human-sized imagers. This thesis therefore demonstrates a human-scale MPI scanner, using it to perform functional MPI (fMPI) in non-human primates, and assesses its potential for future human studies. Additionally, we investigate safety aspects of MPI, specifically focusing on peripheral nerve stimulation (PNS) induced by the 25 kHz magnetic excitation fields used in MPI. Because this is a higher frequency than those used by MRI gradients, threshold data at this frequency are lacking. This thesis measures the PNS stimulation threshold in human subjects to better understand high-frequency magnetic PNS and ensure the safe implementation of human-scale MPI for future neuroimaging applications. Part II: Short Mid-Field MRI Magnet Designs Anxiety induced by the long, narrow tube of conventional 1.5T and 3T scanners is a common cause of incomplete patient examinations, leading to delays in diagnosis and reduced facility throughput. 
In contrast, the short aspect ratio of CT scanner bores is known to alleviate this anxiety. This thesis also addresses the need for a more patient-friendly MRI scanning option by introducing a new “hybrid” superconducting and permanent magnet concept applicable to mid-field (0.5T) superconducting solenoid magnets. While mid-field scanners offer lower sensitivity than high-field alternatives, recent advances in image reconstruction and denoising have significantly enhanced their utility, allowing them to deliver diagnostic information comparable to that of the previous generation of 1.5T scanners. Additionally, they increase the range of compatible metallic implants and offer hospitals a lower-cost, easier-to-site alternative to 1.5T and 3T scanners. They can also enhance patient comfort through shorter bore lengths and larger diameters, but their optimized winding designs still reach a limit in how short they can be made for a given homogeneity and diameter specification. This thesis introduces the use of rare-earth permanent magnets to enable further reductions in scanner length, aiming to match the aspect ratio of CT scanners.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163721</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference-Time Learning Algorithms of Language Models</title>
<link>https://hdl.handle.net/1721.1/163719</link>
<description>Inference-Time Learning Algorithms of Language Models
Akyurek, Ekin
Modern language models (LMs) can perform complex tasks through in-context learning (ICL)—they can adapt to a task via examples provided in their input without any parameter updates. However, fundamental questions remain about when this adaptation works, what algorithms underlie it, and how to improve it. This thesis studies the mechanisms and limitations of ICL and develops better methods for test-time adaptation of LMs on diverse benchmarks of language modeling and reasoning. I begin by evaluating the ICL capabilities of pre-trained LMs. I demonstrate that LMs can achieve strong compositional generalization when provided with few-shot examples. In a separate analysis, I show that their performance deteriorates significantly when faced with counterfactual variants of tasks they normally perform well on. Later, I develop "model problems" of ICL that test the ability of LMs to learn novel mathematical structures in-context, such as linear functions and probabilistic formal languages. I interpret the algorithmic foundations of ICL. First, I prove that Transformer models with sufficient capacity can execute both iterative and closed-form solutions to linear regression problems, and demonstrate that these theoretical solutions manifest as interpretable intermediate variables. Then, I reveal how LMs develop specialized circuits that implement approximate n-gram learning algorithms for probabilistic languages. Building on these insights, I develop two approaches to enhance LMs. First, I demonstrate that explicitly incorporating n-gram computation into model architectures improves performance across multiple domains. Second, I introduce a test-time training method that enables rapid adaptation through gradient updates on input data, achieving significant improvements over standard few-shot learning on abstract reasoning tasks. 
Together, these results advance our understanding of how LMs adapt to novel tasks and provide practical techniques for enhancing their test-time learning capabilities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163719</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology</title>
<link>https://hdl.handle.net/1721.1/163710</link>
<description>Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology
Boiarsky, Rebecca
Single-cell RNA sequencing (scRNA-seq) offers a detailed view of the cellular and phenotypic composition of healthy and diseased tissues. While machine learning (ML) methods are well-suited for the high-dimensional nature of scRNA-seq data, current computational tools face limitations, particularly when confronted with data from clinical oncology. This thesis presents the development and application of ML techniques for scRNA-seq data to address key computational challenges, with a focus on challenges in clinical oncology. It covers four key areas: identifying gene signatures and biomarkers in multiple myeloma, developing methods to account for somatic copy number variations in tumor samples, benchmarking large, pre-trained scRNA-seq foundation models, and creating a framework for predicting clinical outcomes using patient-level representations of single-cell data. Together, these studies aim to develop and evaluate novel ML algorithms for scRNA-seq data which can unlock actionable insights for personalized medicine.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163710</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design</title>
<link>https://hdl.handle.net/1721.1/163577</link>
<description>Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design
Sundar, Vikram
Engineering sequence-specific proteases would enable a wide variety of therapeutic applications in diseases ranging from cancer to Parkinson’s disease. However, many previous experimental and physics-based attempts at protease engineering have failed to engineer specificity in cleaving alternative substrates, rendering them useless. In this thesis, we aim to engineer TEV (tobacco etch virus) protease, a highly sequence-specific protease, to cleave alternative substrates. We incorporate novel high-throughput assays and powerful machine learning (ML) methods for highly effective protein engineering. The first portion of this thesis focuses on generating fitness landscapes from high-throughput experiments. Most machine learning models do not account for experimental noise, harming model performance and changing model rankings in benchmarking studies. Here we develop FLIGHTED, a Bayesian method of accounting for uncertainty by generating probabilistic fitness landscapes from noisy high-throughput experiments. We demonstrate how FLIGHTED can improve model performance on two categories of experiments: single-step selection assays, such as phage display, and a novel high-throughput assay called DHARMA that ties activity to base editing. FLIGHTED can be used to generate robust, well-calibrated fitness landscapes, and when combined with DHARMA, our methods enable us to generate fitness landscapes of millions of variants. We then evaluate how to model protein fitness given a fitness dataset of millions of variants. Accounting for noise via FLIGHTED significantly improves model performance, especially of high-performing models. Data size, not model scale, is the most important factor in improving model performance. Furthermore, the choice of top model architecture matters more than the protein language model embedding. The best way to generate sufficient data scale is via error-prone PCR libraries; models trained on these landscapes achieve high accuracy. 
Using these methods, we successfully engineer both activity on an alternative substrate and specificity when compared to the wild-type. The ML-designed variants outperform anything found in the training set, demonstrating the value of machine learning even with experimental libraries of millions of variants. However, our results are limited to relatively close substrates. How best to improve model performance on distant substrates remains an open question.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163577</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Guided Optimization for Intelligent Mobility Systems</title>
<link>https://hdl.handle.net/1721.1/163568</link>
<description>Learning-Guided Optimization for Intelligent Mobility Systems
Li, Sirui
Efficient and reliable mobility systems are essential to modern-day society, with broad impacts ranging from day-to-day commuting, public transportation, and emergency response to last-mile package delivery and freight logistics. Autonomous vehicles have the potential to improve mobility efficiency and convenience but also raise questions about reliability and feasibility of deployment. The first contribution of this thesis is a set of novel, principled control-theoretical analyses that provide strong stability and reliability guarantees for autonomous vehicles and human-compatible driving, further covering emergent traffic behaviors in mixed-autonomy systems. While these theoretical guarantees offer valuable insights, mobility systems are inherently complex, and their overall performance often relies on solving difficult optimization problems, many of which are combinatorial, thus presenting significant scalability challenges. Overcoming these challenges requires innovative approaches that extend beyond traditional control techniques. This thesis further contributes a set of machine learning-guided optimization algorithms that significantly enhance the efficiency and scalability of solving combinatorial optimization problems. These algorithms have proven effective across a wide range of mobility-related applications. Compared to state-of-the-art solvers, they achieve 10× to 100× speed-up in large-scale vehicle routing problems, 35% to 70% solve-time improvement in various mixed-integer linear programming problems, and up to 54% acceleration in long-horizon scheduling problems. These advancements open new possibilities for efficient decision-making in large-scale transportation systems, enabling smarter, faster, and more adaptive mobility solutions. 
Combining learning, optimization, and control, this thesis demonstrates the potential of learning-guided optimization and principled control-theoretical analysis to address the increasing complexity of modern mobility systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163568</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Top-down and bottom-up interactions for cortical bursting</title>
<link>https://hdl.handle.net/1721.1/163567</link>
<description>Top-down and bottom-up interactions for cortical bursting
Tang, Vincent D.
High-frequency burst firing occurs throughout the mammalian cortex in vivo, yet both the underlying mechanisms and functional roles of bursts are unclear. Burst firing in brain slices is strongly modulated by the activity of apical dendrites, which branch extensively in layer 1 (L1) and receive long-range inputs from higher-order cortical and thalamic areas. These properties suggest a powerful subcellular substrate by which single pyramidal neurons could multiplex bottom-up and top-down information via L1-independent tonic spikes and L1-dependent bursts, respectively, and have provided a basis for emerging theoretical models of cortical computation and learning. However, our understanding of burst firing and subcellular processing remains critically limited by a lack of evidence in awake animals. It is unclear whether burst firing a) is preferentially recruited by bottom-up versus top-down inputs, and b) requires apical dendritic engagement. To answer these questions, we performed high-density extracellular recordings in primary visual cortex of awake mice while presenting a battery of Gabor (bottom-up) and inverse (top-down) visual stimuli. We report widespread high-frequency bursts in L2/3 and L5 pyramidal neurons. Contrary to expectation, bursts exhibited extremely short response latencies, and were most strongly recruited by Gabor stimuli. We further tested the causal contribution(s) of apical dendrites to burst firing and top-down visual tuning via two optogenetic manipulations: direct L5 apical tuft inhibition and NDNF interneuron activation. Strikingly, L1 inhibition only modestly reduced the burst fraction, and did not differentially affect Gabor vs. inverse responses. Taken together, these results challenge prevailing theories of apical dendritic involvement in burst spike generation and feedback visual tuning, and provide new biological constraints for future theoretical and experimental work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163567</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice</title>
<link>https://hdl.handle.net/1721.1/163561</link>
<description>Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice
Irani, Ali
Heating, Ventilation, and Air Conditioning (HVAC) systems are vital to ensuring a healthy indoor environment in buildings and essential to the global shift toward a decarbonized, all-electric future. While integrated design practice has promised cost, energy, and space savings through earlier and more frequent collaboration between design disciplines, persistent missed opportunities in the HVAC system design and coordination process often lead to spatial conflicts, performance tradeoffs, and uncomfortable spaces. This dissertation aims to understand current coordination practices to identify the root causes of existing problems, timeline issues, and knowledge gaps. It then proposes a series of enhancements to address these shortcomings, focusing on National Architectural Accrediting Board (NAAB)-accredited architectural education programs that train the next generation of practicing architects. The proposed research hypotheses are validated in a three-part research approach: (1) releasing architecture industry surveys and conducting interviews, (2) designing and testing an early-stage design tool, and (3) developing, implementing, and evaluating a comprehensive HVAC curriculum for architecture students. The dissertation demonstrates that with the right tools and educational resources, architecture students can make informed, intuition-based HVAC system selections and integrate them into their building designs, with students who studied the comprehensive curriculum demonstrating a 13% improvement in understanding and application of HVAC concepts compared to a control group. This work helps bridge the knowledge gap regarding HVAC systems, empowering designers to coordinate more effectively and prioritizing the role of HVAC systems in building performance simulation education.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163561</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain</title>
<link>https://hdl.handle.net/1721.1/163558</link>
<description>Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain
Watters, Nick
Sample-efficient learning and flexible generalization are hallmarks of intelligent behavior. Both sample-efficient learning and flexible generalization rely on re-using a mental model of the world in new contexts. For many decades, researchers in cognitive science, neuroscience, and machine learning have studied competing theories about the structure of our mental model of the world. One set of theories concerns the structure of multi-object representations in the brain. Some studies claim the brain represents multiple objects by allocating them to disjoint “slots” in working memory, others claim that the brain flexibly distributes a common pool of resources across objects, and yet others claim the brain represents multiple objects by rapidly switching between them through time. Another set of theories concerns the nature of predicting object motion. Some claim that the mind has an internal model of physics in the world that it uses to simulate the motion of objects through time, whereas others claim the mind relies on priors and heuristics to predict object motion without explicit simulation. Both of these sets of competing theories are long-standing and unresolved. In this work, we tackle these two open questions using primate neurophysiology and computational modeling. We trained monkeys to perform multi-object memory and motion prediction tasks, recorded large-scale single-unit activity from frontal cortex brain areas, and rigorously compared different hypotheses for the neural mechanisms of multi-object working memory and motion prediction. In the case of multi-object working memory, we found that the neural activity we recorded is more consistent with a model that flexibly distributes attentional resources across objects than with models that use object slots or temporal switching representations. In the case of motion prediction, we found that the neural activity is not consistent with the monkeys simulating an occluded moving object in real-time. 
Instead, the monkeys’ neural activity is driven largely by an anticipation of the position of the object at a future point in time. Both of these findings call into question long-standing cognitive theories and imply that the brain’s model of the world incorporates attentional mechanisms, priors, and heuristics. Lastly, we introduce a neural data preprocessing method for stabilizing electrophysiology recordings. This method improves spike-sorting results and helped us recover more neurons from our data; we hope it may help others make the most of their electrophysiology data as well.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163558</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Revenue Management to Satellite Communications</title>
<link>https://hdl.handle.net/1721.1/163554</link>
<description>Application of Revenue Management to Satellite Communications
Eiskowitz, Skylar
As the demand for satellite Internet continues to grow, satellite communication (SatCom) operators are faced with the challenge of effectively managing their capacity sales. While Revenue Management (RM) techniques have been widely used in other industries such as airline, hotel, and car rental services, the application of these methods in the context of SatCom is still scarce. This Thesis aims to bridge this gap by developing RM concepts, techniques, and optimization algorithms specifically tailored to the unique operational characteristics of SatCom capacity management and sales. The proposed SatCom RM method guides operators with quantitative recommendations on how much capacity to sell for different products over time and across regions to maximize revenues.&#13;
&#13;
Though SatCom has characteristics that favor the use of RM concepts (perishable inventory, fixed capacity with a low variable cost, the possibility to segment demand), there are unique structural characteristics that complicate the development of SatCom RM models. The primary challenge is that different products consume varying amounts of capacity, with larger terminal size products utilizing less power on a satellite than smaller terminal size products. Moreover, the selling practices in SatCom are complex because products may be sold in one period and consumed across multiple periods in which additional sales are made. This requires rolling horizons for both the selling and consumption periods. Lastly, the SatCom RM problem is a multidimensional network problem, as products can consume bundles of resources in both space and time.&#13;
&#13;
We extend two commonly used airline RM algorithms, the Expected Marginal Seat Revenue (EMSRb) heuristic and Displacement Adjusted Virtual Nesting (DAVN), to the SatCom problem to create booking limits. The booking limits recommend a threshold amount of capacity an operator should sell for each product. The contribution of this Thesis is the modification of established airline RM algorithms to handle products with variable capacity uptakes. Further, these algorithms typically account for displacement costs of products, but only in one dimension of space or time (e.g., selling an airline flight that uses multiple spatial legs may displace capacity away from flights that only use one leg). Our modifications allow for the consideration of displacement costs in both dimensions of space and time.&#13;
 &#13;
To evaluate the effectiveness of our inventory control approach, we conduct simulations of various demand scenarios and compare the revenue gains to a baseline scenario with no controls, as well as to a simpler method that does not consider product duration. In a large-scale simulation spanning three years and encompassing thousands of product requests, we observe revenue gains ranging from 15% to 30% depending on the demand scenario. Then, we extend the model to multiple zones and achieve a 2% to 10% revenue improvement using our Multi-Zone DAVN method compared to the DAVN method applied to each zone separately.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163554</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-invasive tuning of experience-dependent plasticity in the primary visual cortex</title>
<link>https://hdl.handle.net/1721.1/163552</link>
<description>Non-invasive tuning of experience-dependent plasticity in the primary visual cortex
Reilly-Andújar, Francis
The cerebral cortex exhibits a remarkable capacity for experience-dependent plasticity, a feature that is predominantly confined to critical periods (CPs) during early postnatal development. In the mouse primary visual cortex (V1), ocular dominance plasticity (ODP) has served as a premier model for investigating the cellular and molecular mechanisms that underlie the formation and stabilization of cortical circuits. During the CP, short-term monocular deprivation (MD) induces both functional and anatomical changes in binocular V1, characterized by a weakening of deprived-eye responsiveness via mechanisms of synaptic long-term depression. As the critical period closes, increased inhibitory drive and the emergence of perineuronal nets (PNNs) stabilize neural circuits and restrict further experience-dependent plasticity. In Chapter 1, I review the key literature on ODP and provide a survey of interventions that have been shown to enhance ODP in adulthood. In Chapter 2, I present our findings that repeated anesthetic ketamine treatment can reinstate ‘juvenile-like’ plasticity in the adult mouse V1. Importantly, I demonstrate that this effect relies on the microglia-mediated depletion of PNNs, and that interfering with microglial purinergic P2Y12 receptor activation blocks the ketamine-induced enhancement of ODP. Building on these insights, Chapter 3 investigates the use of non-invasive light-flicker stimulation at different temporal frequencies as a means to unlock different forms of ODP in the adult mouse V1. Our results reveal that 60 Hz light-flicker stimulation reduces PNN levels and promotes a depression of deprived-eye responses following short-term MD, whereas 40 Hz stimulation – without altering PNN levels – enhances an adult form of ODP characterized by the strengthening of non-deprived eye responses following short-term MD. 
Furthermore, we show that in mice subjected to long-term MD initiated early in life, 40 Hz light-flicker treatment promotes recovery of visual function, as evidenced through physiological and behavioral assays. Finally, Chapter 4 outlines a series of future experiments designed to further elucidate the mechanisms by which light-flicker stimulation promotes enhanced ODP in adult V1. Together, the findings presented in this thesis introduce novel, minimally invasive (ketamine) and non-invasive (light-flicker) interventions that show promise as therapeutic strategies for ameliorating deficits arising from early life sensory deprivation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163552</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of texture in auditory scene analysis</title>
<link>https://hdl.handle.net/1721.1/163549</link>
<description>The role of texture in auditory scene analysis
Hicks, Jarrod M.
Everyday auditory scenes contain sounds from many sources. For example, when crossing the street, you might hear sounds produced from the rumble of passing cars, the chatter of pedestrians, and the rapid tick of crosswalk signals. To make sense of this complex mixture of sounds, the auditory system must separate the mixture into coherent perceptual representations that are likely to correspond to the underlying sources in the world. This process is known as auditory scene analysis. Although a rich body of work has probed auditory scene analysis with simple synthetic stimuli and revealed principles of perceptual organization, the extent to which these principles apply to real-world scenes with natural sounds remains unclear. This thesis empirically examines auditory scene analysis with realistic sounds. In particular, we study the perception of scenes containing a common class of environmental sounds known as “textures”, investigating how the auditory system makes use of statistical structure to separate textures from other sources and how the underlying statistical representation both constrains and enables scene analysis. We first investigated the mechanisms of hearing in noise using real-world background “noise” textures. The results show that the auditory system estimates the properties of “noise” textures and stores them over time, using the resulting internal model to estimate other concurrent sounds. We then considered how concurrent sound texture sources are separated from each other. We found that auditory scene analysis with textures involves some principles identified in classical scene analysis work with simple sounds, but that these principles apply to the higher-order statistical representations that define natural textures. Together, the results reveal new aspects of auditory scene analysis with real-world sounds and clarify the role texture plays in everyday hearing. 
Our findings provide a bridge between the simple, synthetic stimuli studied historically and the rich complexity of real-world sounds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163549</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback</title>
<link>https://hdl.handle.net/1721.1/163548</link>
<description>Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback
Scherrer, Josefa R.
Much of human existence is based on our ability to learn complex sequences of motor movements. Speech, writing, and tool use all require activating a series of different muscles in a precisely timed pattern, and these patterns are learned through a long process of trial and error. How does the neural circuitry in our motor system learn to generate the activity patterns that drive these sequences? This question can be explored by studying a similarly precise learned motor pattern in a different organism, the learned song of the songbird zebra finch.&#13;
&#13;
Zebra finches learn to sing a stereotyped song through a process of vocal experimentation and comparison to an internal template. Every time a bird sings, it varies the acoustic parameters of its song and determines whether each variation brings the song closer to its internal template. Variations that result in a better match are then repeated in subsequent renditions of the song, in a trial and error process suggestive of reinforcement learning. The learning process requires a basal ganglia-thalamocortical loop called the anterior forebrain pathway (AFP) that is similar to basal ganglia-thalamocortical circuitry in mammals. Existing evidence suggests that the AFP learns a time-dependent bias signal that steers the motor pathway to avoid vocal errors. This bias signal is known to be dependent on the cortical output of the AFP known as LMAN (lateral magnocellular nucleus of the anterior nidopallium). However, little is known about the neural code in LMAN that underlies this bias signal, or how this neural code is learned and generated.&#13;
&#13;
We address these questions by building a neural feedback system that allows us to impose correlations between the activity of individual LMAN neurons and a dopaminergic reward signal. We designed a low-latency feedback system that records neural activity from a chronic Neuropixels 2.0 implant, extracts the activity of specific neurons, and plays noise bursts to the bird contingent on the activity of those neurons. We used this system to perform feedback based on the activity of an arbitrarily chosen neuron in LMAN within a given 10 ms window of the song. All birds responded to the feedback by learning to bias the activity of the chosen LMAN neuron up or down within the chosen time window, transiently driving firing rates up by as much as 200 Hz. We observed a remarkable degree of timing precision in the learned bias, with birds able to control the activity of the chosen neuron at single millisecond levels of rise time and jitter. This high degree of precision informs models of the basal ganglia circuit architecture thought to drive learning. We also found the learned bias to be specific to the LMAN neurons correlated with reward, with neighboring uncorrelated neurons exhibiting no change in firing rate during learning. This single-neuron specificity strongly constrains the spatial precision of axonal targeting from thalamic regions that are thought to propagate the learned bias signal from the basal ganglia to LMAN. Finally, we demonstrated that fluctuations in neural activity of a given LMAN neuron drive transient and predictable changes in vocal output approximately 25 milliseconds later, consistent with what is known about signal propagation speeds in the song system. This fact, together with the results of our feedback experiments, confirms our central hypothesis that LMAN drives song learning by independently activating LMAN neurons at precise points in time in order to bias vocal output and avoid vocal errors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163548</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Spiritual Curation of American Modernism</title>
<link>https://hdl.handle.net/1721.1/163546</link>
<description>The Spiritual Curation of American Modernism
Saha, Indrani
Where do the spiritual go? In this study of late-nineteenth- and early-twentieth-century seekers, they join seances in Vermont farmhouses, attend Theosophical lectures on Karma, get lost in copies of Jnana-Yoga, journey to Buddhist temples in China, and consume spiritual manuals on Mentalphysics. But where do they go after those encounters? And, more importantly, what do they do? In this dissertation, they build modern art institutions. A cadre of artist-writers, museum curators, and public intellectuals found their power in early-twentieth-century America by building institutions to introduce a new, spiritually grounded modern art to a mercantile nation. In the US, beyond European sources for "the spiritual" were flirtations with vaguely "Eastern" ones by way of Theosophy. Those who sought to institutionally manifest Wassily Kandinsky's "spiritual" in art believed themselves to provide the assistance necessary to cultivate and preserve these spiritual impulses in modern art. Alfred Stieglitz's Intimate Gallery (1925-1929), Katherine Sophie Dreier's Société Anonyme (1920-1950), and Hilla Rebay's Museum of Non-Objective Painting (1939-1952), all in New York City, served as intermediaries in translating predominantly Eastern spiritual ideas into productive ways of being. Each curator believed these spiritual protocols needed cultivating simply to survive in a material world they held to be detrimentally bankrupt of spirit. In other words, the American institutionalization of modernism built its canon around spiritual systems of national aesthetic welfare. Crucial to these spiritual curators' respective operations would be the promotion of not just any abstraction but a radically non-objective art thought to use the inner expressions of the artist to elevate the spectator. This dissertation takes the turn-of-the-century claims of spirituality by the founders of key art institutions seriously.
In doing so, I argue that esoteric forms of Eastern spirituality infused formerly Protestant centers of culture to propel a twentieth-century embrace of radically abstract modern art.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163546</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation</title>
<link>https://hdl.handle.net/1721.1/163545</link>
<description>Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation
Curth, Alexander (Sandy) McCormick
Large-scale additive manufacturing (LSAM) with locally sourced materials, such as earth, presents a promising approach to addressing the urgent challenges of rapid urbanization and construction-related carbon emissions. &#13;
This dissertation establishes a comprehensive framework for integrating low-carbon materials, particularly minimally processed earth, with computational design methodologies and robotic fabrication processes for architectural-scale applications. Through systematic material characterization, novel testing protocols, and case studies across multiple building systems, the research demonstrates that minimally processed earthen materials can be transformed into high-performance building elements uniquely suited to local environmental conditions and design considerations. The developed computational framework employs multi-objective optimization and material-aware toolpath generation to balance structural performance, thermal comfort, embodied carbon, and construction time. &#13;
Four case studies validate this approach: (1) toolpath optimization for shell structures, (2) a hybrid floor system combining shape-optimized concrete beams with 3D-printed ceramic blocks, (3) zero-waste earthen formwork for reinforced concrete, and (4) thermally optimized wall systems for passive climate control. Life cycle assessment reveals that 3D-printed earth structures have approximately one-fifth the embodied carbon of conventional concrete and one-fiftieth that of industry-standard 3D-printed mortar. This research bridges the gap between additive computational design and material circularity, offering scalable approaches to sustainable construction that can be implemented across diverse environmental and economic contexts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163545</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Limits of Longevity</title>
<link>https://hdl.handle.net/1721.1/163534</link>
<description>The Limits of Longevity
Rodriguez, Christopher W.
Do all animals age? Although aging seems to be a widespread phenomenon, some demographic studies have failed to find evidence of aging in certain species, including some highly regenerative species of planarians and Hydra that reproduce through asexual fission. However, all demographic studies have limits on observation times and sample sizes, so it is unknown whether these failures reflect a genuine absence of aging or merely these inherent study limitations. Some argue that these species must be ageless: because of pressures that result from the lack of a clean division between the germ line and the soma in fissiparous organisms, agelessness becomes a necessary prerequisite of this kind of reproductive strategy. Others argue that fundamental theories of the evolutionary biology of aging absolutely preclude agelessness. Even putting evolutionary arguments aside, some mathematical models of cellular competition and senescence argue that agelessness is impossible mechanistically in multicellular organisms. In this work, I address evolutionary and mechanistic arguments for and against agelessness. I develop mathematical models of the Disposable Soma Theory that incorporate facets of the arguments for agelessness in asexual fissioning organisms. I construct models of mutation accumulation and drift within an individual and explore how this genetic decay could manifest in mortality rates. I use these models to understand whether aging is inevitable generally and apply them to planarians and Hydra to estimate the likelihood of aging more narrowly in those specific cases. Contrary to other work, I find that agelessness (defined as non-increasing mortality rates in a population) is indeed possible as the optimal evolutionary strategy for multicellular organisms. However, the evolution and mechanistic realization of agelessness requires conditions that are unlikely to be met in any existing species. 
In the case of planarians and Hydra, they likely do not face the right kind of evolutionary pressure to completely avoid aging. Even if they do face the necessary evolutionary pressure, intraindividual genetic decay will almost certainly induce increasing mortality in the population with little recourse. Therefore, these species likely do age, although they could have median lifespans on the order of hundreds or perhaps even thousands of years, which would make detecting aging in any given population study quite difficult indeed.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163534</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental representations of regions and interactions in spatial transcriptomics</title>
<link>https://hdl.handle.net/1721.1/163533</link>
<description>Fundamental representations of regions and interactions in spatial transcriptomics
Maher, Kamal M.
While cells are often considered the fundamental unit of biology, it is their spatial coordination that gives rise to the tissue architectures underlying both health and disease. Spatial transcriptomics technologies offer a unique window into this coordination by simultaneously capturing the spatial and molecular identities of individual cells, providing unprecedented insight into tissue organization. However, the computational landscape for analyzing tissue structure remains fragmented, with a wide array of disparate methods. In this work, we aim to distill these approaches into a unified quantitative framework for analyzing tissue architecture. Tissue structure can be represented in terms of anatomical regions as well as the cell-cell interactions that occur within them. For regional tissue organization, many existing methods—including those based on probabilistic models and graph neural networks—ultimately perform a form of smoothing, or local averaging of gene expression across neighboring cells. This process emphasizes large-scale spatial variation and enables standard single-cell analysis workflows, such as clustering and trajectory inference, to be applied in spatial contexts. However, we find that naive smoothing introduces artifacts that obscure meaningful spatial features. To address this, we introduce a minimal but powerful modification: subsampling within each neighborhood prior to averaging. This approach enhances spatial feature resolution and generalizes conventional analyses to spatial features: clustering identifies multicellular regions; data integration aligns spatial regions across samples and technologies; and trajectory inference captures spatial gradients. We also show that this subsampling strategy improves the performance of more complex downstream methods. 
To further generalize our framework, we formalize the joint analysis of tissue regions and multiscale cell-cell interactions using signal processing over graphs: low-frequency components represent regional gene expression patterns across a tissue mesh; high-frequency components capture fine-scale cell-cell interactions; and mid-frequency signals correspond to boundaries between regions and diffusive signaling. By interpreting spatial gene expression in this spectral framework, we provide a principled way to bridge conceptual and computational perspectives on tissue structure. Ultimately, this work serves as both a theoretical foundation to understand existing methods and a roadmap for developing future approaches to quantitatively describe molecular tissue architecture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163533</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure</title>
<link>https://hdl.handle.net/1721.1/163530</link>
<description>Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure
Bromley, Joshua David
The human respiratory tract is constantly subject to environmental stressors and perturbations that cause deviations from homeostatic conditions. The airway’s cellular constituents – epithelial, stromal, and immune cells – maintain local and global homeostasis by facilitating gas exchange and providing a barrier against noxious environmental agents (e.g., xenobiotics, allergens, toxins, and microbes). Infection with viral, microbial, and eukaryotic pathogens can disrupt airway homeostasis, leading to local and systemic inflammation, which can contribute to either clearance or persistence of the pathogen. Prior antigenic exposure - prophylactically or from a previous infection - can promote transient and long-lived changes in cellular epigenetics, gene expression networks, and cell type composition that may contribute to protective (or maladaptive) immunity; however, we lack a complete understanding of the pathogen and cellular determinants that modulate immunity upon reinfection. In this thesis, we employed single-cell RNA-seq (scRNA-seq), computational methods, and microbial assays to discover the host and pathogen determinants governing airway homeostasis during primary infection and reinfection at barrier sites where the infection begins and may persist: the nasopharynx, airways, and lung parenchyma. First, we leveraged scRNA-seq to identify the cellular and molecular features of mild, moderate, and severe COVID-19, revealing that persons with severe COVID-19 have blunted anti-viral immunity in the nasopharynx. We further extended these findings by profiling nasopharyngeal swabs from vaccinated and unvaccinated individuals across three waves of SARS-CoV-2 variants, revealing shifts in viral tropism and that intramuscular COVID-19 vaccines promote the recruitment of putative antigen-presenting macrophages to the nasal mucosa.
Next, we used rhesus macaques to interrogate temporal host-pathogen interactions during SARS-CoV-2 infection and reinfection in the lower respiratory tract. This work identified innate training-like gene programs among myeloid populations that provided enhanced protection against SARS-CoV-2 reinfection. Finally, we used cynomolgus macaques as a model to study Mtb infection and reinfection, demonstrating that CD4+ T cells are required to restrict bacterial growth and induce protective immunomodulatory gene programming and cell-cell interaction networks in pulmonary granulomas formed following Mtb reinfection. These findings extend beyond long-held paradigms of protective TB immunity, revealing that CD4+ T cells regulate pro- and anti-inflammatory granuloma equilibria. Collectively, the work presented in this thesis highlights the utility of single-cell genomics for studying respiratory infection biology and immunobiology and provides a framework for contextualizing pathogen-induced deviations from biological homeostasis in the airways, which has implications for the development of prophylactics and therapeutics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163530</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis</title>
<link>https://hdl.handle.net/1721.1/163529</link>
<description>Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis
Emenari, Amauche
In this dissertation, we present an exploratory methodology, termed expansion microscopy of extracellular space (ExECS), designed to enhance the visualization of the extracellular space (ECS) within aldehyde-fixed tissue. This technique leverages the principles of expansion microscopy (ExM), a method that facilitates nanoscale imaging on conventional microscopes through physical magnification of specimens, thereby supporting improved visualization of various cellular and tissue components including proteins, nucleic acids, and lipids. The ECS forms a continuous environment between cells. Its presence throughout neural tissue makes it an attractive target for contrast-based techniques such as shadow imaging, where the ECS is selectively labeled to produce negative contrast, revealing cell shapes and boundaries as unlabeled silhouettes within a labeled background. Although ECS delineation in fixed tissue is limited by the fidelity of fixation and may not fully reflect its live-state structure, the resulting contrast with the intracellular environment may still be valuable for investigating neural morphology and connectivity, offering a useful approximation of network organization. A key component of the ExECS methodology is the introduction of a custom-engineered ECS Filler solution. This formulation, detailed later, includes a macromolecular probe intended to serve as a proxy for the ECS. When applied to aldehyde-fixed tissue, the filler is designed to diffuse throughout the sample, preferentially occupying extracellular compartments while remaining largely excluded from intracellular regions. This selective distribution is expected to persist even in areas where aldehyde fixation may have increased membrane permeability. This diffusion behavior is presumed to result from a combination of size-based exclusion and intermolecular interactions between the hyaluronan polymers, which form the main component of the filler solution, and the plasma membrane.
The constituent hyaluronan is functionalized with amine groups to enable covalent crosslinking and with azide groups to allow fluorescent tagging via click chemistry. These modifications are intended to enable the ECS filler to act as a contrast agent by labeling the extracellular space, providing a foundation for a shadow-based imaging strategy to delineate the morphology of cellular structures. In parallel, we introduce a lipid-targeted form of ExM, termed membrane expansion microscopy (mExM). This approach employs a custom chemical tag that enables nanoscale optical imaging of lipid membranes using a lipid-optimized expansion protocol. mExM, via a novel post-expansion antibody labeling protocol, enables protein-lipid relationships to be imaged in intracellular organelles. This technique may offer new opportunities to examine aspects of neural circuitry by linking cellular morphology with molecular identity. Together, ExECS and mExM offer a potential basis for a light microscopy-based framework for connectomic reconstructions. Unlike traditional electron microscopy approaches, which are labor-intensive and low-throughput, this strategy aims to improve throughput in mapping of neuronal morphology with enhanced resolution that surpasses diffraction limitations. With the aim of bridging the gap between tissue ultrastructure and optical accessibility, this work may contribute to efforts toward scalable, high-resolution analysis of neural tissue organization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163529</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens</title>
<link>https://hdl.handle.net/1721.1/163528</link>
<description>Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens
Liu, Nuo
At the heart of any human disease is an imbalance between normal and aberrant physiological processes—a disproportion between hypo-immunity and hyper-immunity—a lack of homeostasis. In many cases, a more comprehensive understanding of the molecular basis underlying disease progression and therapeutic failure is still required to devise new strategies for improving patient outcomes. Technological advancements in biomedical research, especially in single-cell omics (e.g. single-cell RNA sequencing, single-cell spatial profiling), have given us unprecedented power to decipher the intricate cellular and molecular features that maintain—or disrupt—this balance. However, validating the causality of these features remains a huge challenge, as the wealth of data often results in a considerable number of hypotheses to test. In this thesis, I explore applications of single-cell genomics tools to understand cellular features associated with disease, with a particular focus on tuberculosis (TB). I then present a potential solution for performing phenotypic screens at scale. In the first part, I applied single-cell RNA sequencing and analysis to human lung samples from a TB-endemic region in South Africa. Using contrastive analysis, I identified key cell populations that are differentially abundant between TB-diseased and TB-negative lungs, including several neutrophil, macrophage, and fibroblast subsets. I discovered a de novo gene program highly enriched in the MMP1+CXCL5+ Fibroblast that correlates with TB burden in a non-human primate (NHP) granuloma dataset, supporting the importance of this subset in TB. In a collaborative effort, we validated that this MMP1+CXCL5+ Fibroblast localizes to TB granulomas in independent TB-diseased lung tissues using immunohistochemistry assays and recapitulated the induction of this population from lung-derived fibroblasts through in vitro stimulation experiments with M.tb.
I further report an SPP1+ macrophage population that is enriched in TB-diseased lungs through single-cell analysis. Moreover, I identified a prominent crosstalk between SPP1+ macrophages and fibroblasts in TB-diseased lung that mimics similar observations in cancer and fibrosis, supporting an important role for this axis in TB. These distinctive cell populations could serve as potential targets for novel host-directed therapies in tuberculosis. In the second part, I developed a method to compress small molecule phenotypic screens by designing randomized drug pools with replicates of distinct candidates across different drug pools. Our team demonstrated that linear regression models can be applied to computationally deconvolute the individual hits, enabling the identification of top effectors for downstream validation. We benchmarked and demonstrated the efficacy of this approach in a cost-effective imaging platform and then moved into applications on pancreatic ductal adenocarcinoma (PDAC), where we discovered a new perturbation response signature to IL-4/IL-13 with prognostic value for patient survival. We also showcased the utility of this tool for understanding immunomodulatory effects in heterogeneous mixtures of primary blood cells. Together, this thesis describes novel cellular features important to TB in human lungs, offering new insights that complement existing knowledge from animal models. It also presents a bold, yet effective strategy to scale up phenotypic screens across different biological systems, providing a much-needed solution that bridges the translational gap between human disease and experimental models.
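The pooled-screen deconvolution idea described above can be illustrated with a toy least-squares sketch (the pool sizes, effect values, and noise level here are hypothetical, not the thesis's actual pipeline): each pool's phenotype is modeled as the sum of its member drugs' effects, and individual effects are recovered by linear regression over the randomized pool design:

```python
import numpy as np

rng = np.random.default_rng(1)
n_drugs, n_pools, per_pool = 20, 60, 5

# randomized design matrix: D[p, d] = 1 if drug d is in pool p
D = np.zeros((n_pools, n_drugs))
for p in range(n_pools):
    D[p, rng.choice(n_drugs, size=per_pool, replace=False)] = 1.0

# ground truth for the toy example: two active drugs, the rest inert
beta = np.zeros(n_drugs)
beta[[3, 11]] = [2.0, -1.5]

# observed pooled phenotype = sum of member-drug effects + measurement noise
y = D @ beta + rng.normal(0.0, 0.1, size=n_pools)

# deconvolute individual drug effects by ordinary least squares
beta_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
top = np.argsort(-np.abs(beta_hat))[:2]  # candidate hits for validation
```

Because each drug appears in several pools, the design matrix is well conditioned and the two active drugs stand out in the regression estimates.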
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163528</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration</title>
<link>https://hdl.handle.net/1721.1/163526</link>
<description>Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration
Briden, Julia
Increasingly complex and high-mass planetary missions require autonomous long-horizon trajectory generation to achieve dynamically feasible powered descent guidance. While analytical and indirect methods are computationally efficient, significant simplifications of the dynamics and constraints are required for both problem formulations. Numerical optimization algorithms enable minimum-energy trajectory generation subject to system dynamics and safety constraints but currently remain computationally infeasible on flight-grade processors, taking seconds to minutes to compute a single trajectory. The objective of this dissertation is to develop new algorithms to advance the state of the art in trajectory optimization and planning for autonomous systems. Due to the limited computational abilities of radiation-hardened processors and an increased need for spacecraft and robotic autonomy, specialized algorithms capable of running in real time constitute enabling technologies for space exploration. Three major contributions are developed in this dissertation. First, a transformer neural network-based algorithm is created to predict the tight constraints that recover the solution and parameter sets for constrained optimization problems. By training on prior runs of the numerical optimization solver, the learned mapping can construct a reduced problem formulation that recovers the optimal solution while reducing runtime by up to an order of magnitude. Second, a method to embed problem-specific information into the neural network training process was developed. By embedding the Lagrangian and Lagrangian gradient merit functions into the training process, neural network-generated control policies are biased toward constraint satisfaction. Third, an autonomous hybrid targeting and guidance algorithm was designed to utilize probabilistic risk maps and numerical optimization to select and navigate to minimum-risk landing sites.
Applications in planetary powered descent and landing, as well as rover path planning, are used to benchmark algorithm performance.
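The reduced-problem idea above (solve only with the constraints predicted to be tight) can be sketched as an equality-constrained quadratic program solved via its KKT system; the matrices below are a generic toy instance, not the dissertation's guidance formulation:

```python
import numpy as np

def solve_reduced_qp(Q, c, A, b):
    """Solve min 0.5 x'Qx + c'x subject to Ax = b, where the rows of A
    are the inequality constraints predicted to be tight (held at
    equality), via the KKT linear system [[Q, A'], [A, 0]]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # primal solution (sol[n:] are the multipliers)

# toy instance: 3 decision variables, one constraint predicted tight
Q = np.diag([2.0, 2.0, 2.0])
c = np.array([-2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
x = solve_reduced_qp(Q, c, A, b)
```

Dropping the inactive inequalities shrinks the problem to a single linear solve, which is the source of the runtime reduction when the tight-constraint prediction is correct.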
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163526</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum theory of mode locking.</title>
<link>https://hdl.handle.net/1721.1/163519</link>
<description>Quantum theory of mode locking.
Lang, W. R. (W. Roy)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 88-90.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163519</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat</title>
<link>https://hdl.handle.net/1721.1/163517</link>
<description>Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat
Berson, David Matthew.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Psychology, 1980; Vita.; Bibliography: leaves 114-126.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163517</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New theoretical methods for the study of the electronic structure of solids.</title>
<link>https://hdl.handle.net/1721.1/163516</link>
<description>New theoretical methods for the study of the electronic structure of solids.
Mele, Eugene John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163516</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fracture Mechanics of Networks</title>
<link>https://hdl.handle.net/1721.1/163459</link>
<description>Fracture Mechanics of Networks
Hartquist, Chase M.
Networks of interconnected materials permeate nature, biology, and technology due to exceptional mechanical performance. Despite the importance of failure resistance in network design and utility, no existing physical model effectively reconciles strand mechanics and connectivity to predict fracture in diverse networks that constitute polymeric, architected, and biological materials. While traditional models predict that the intrinsic fracture energy – the minimum energy to propagate a crack per unit area – of a polymer network is the energy to rupture a layer of chains, they can underestimate experiments by up to two orders of magnitude. In Part I, we show that the intrinsic fracture energy of polymer-like networks stems from nonlocal energy dissipation. We then reveal a general scaling law that captures nonlocal energetic contributions and connects strand mechanics with topological connectivity to universally predict the intrinsic fracture energy of stretchable networks. We measure intrinsic fracture energy using experiments and simulations of 2D and 3D networks with various strand constitutive behaviors, defect densities, strand length distributions, lattice topologies, and length scales. Results show that local strand rupture and nonlocal energy release contribute synergistically to the measured intrinsic fracture energy in networks. These effects align such that the intrinsic fracture energy scales independent of the energy to rupture a strand; it instead depends on the strand rupture force, breaking length, and connectivity. In Part II, we present a model for real polymer fracture and design elastomers with highly regular connectivity. End-linking then deswelling star polymers produces a class of elastomers with low defects and no trapped entanglements, enabling ultrahigh strain-induced crystallinity of up to 50% and stretchability that scales beyond the saturated limit.
These features promote a pronounced elastocaloric cooling effect and enable reversible two-way tuning of thermal conductivity by strain or temperature modulation. The mechanical and thermal properties of these polymer networks offer promise in addressing challenges in clean energy, thermal management, and biomedicine. Our findings establish a physical basis for understanding network fracture and design principles for fabricating tough polymeric, biological, and architected materials across multiple length scales for advanced applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163459</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation</title>
<link>https://hdl.handle.net/1721.1/163458</link>
<description>Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation
Bell IV, John H.
The Sit-to-Stand (STS) transition is one of the most dangerous daily activities for the elderly population, as it is one of the situations in which falls occur most often. Despite its risks, STS dynamics remain poorly understood, and current STS assistance devices fail to utilize knowledge of STS dynamics to effect their support. This thesis presents contributions to the dynamic modeling of STS and to human-robot collaboration for improving robotic assistance of STS. To coherently capture the multi-phase nature of STS, this thesis employs lifting linearization, a dynamic modeling methodology inspired by Koopman operator theory, to subsume segmented local dynamics in a globally linear dynamic model. A novel class of lifting linearization basis functions, termed “State-Membership Product (SMP)” observables, enables both the seamless blending of local dynamics into a global model, and the direct extraction of phase-specific behaviors from the global model. It is shown that an SMP-Koopman linear model tuned to published data of STS experiments is capable of reproducing the multi-phase STS dynamics with a single linear model. Building on this framework, STS is additionally modeled as a lifted linear feedback control system, composed of an SMP-Koopman-based open-loop biomechanical model of the human body and a linear quadratic regulator (LQR) which guides the body to stand up. The LQR controller, tuned to replicate STS motion, guides the human body model through the phases of STS without explicit phase-switches, improving system robustness. To enhance human-robot collaboration in STS assistance, a framework for estimating patient cooperativeness is also introduced, leveraging a simplified dynamic model and an Extended Kalman Filter. By analyzing a human’s initial response to applied physical and verbal cues, the estimation framework assesses willingness to engage in assisted STS.
Together, these contributions advance both the modeling and estimation of STS, offering insights crucial for the development of safe, effective robotic assistance.
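A minimal sketch of the lifted-linear LQR idea above (a generic discrete-time Riccati iteration on illustrative matrices; these are not the SMP observables or the tuned STS model from the thesis):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via the Riccati recursion: iterate
    P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA to convergence, then
    K = (R + B'PB)^{-1} B'PA, so that u = -K z regulates z to zero."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# toy lifted-linear model: z holds the state plus one extra observable
# (dimensions and values are illustrative only)
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.9]])
B = np.array([[0.0], [0.1], [0.0]])
Q = np.eye(3)
R = np.array([[1.0]])
K = dlqr(A, B, Q, R)

# closed-loop simulation: the regulated lifted state decays toward zero
z = np.array([1.0, 0.0, 0.5])
for _ in range(200):
    z = A @ z - B @ (K @ z)
```

Because the lifted model is globally linear, a single gain matrix suffices; no explicit switching between phase-specific controllers is needed in the simulation loop.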
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163458</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Human Balance Performance and Control to Inform Therapy</title>
<link>https://hdl.handle.net/1721.1/163456</link>
<description>Quantifying Human Balance Performance and Control to Inform Therapy
Shiozawa, Kaymie S.
Maintaining balance is essential for daily activities and overall health. However, balance capability often declines with age or due to health conditions such as stroke, increasing fall risk. Falls among older adults are a major public health concern, affecting 14 million older adults annually in the US and directly causing over 40,000 deaths. Timely and accurate assessment of balance impairment is crucial to prevent falls and promote independence. Current assessments rely heavily on subjective therapist evaluations, underscoring the need for objective, quantitative methods. With the growing strain on healthcare systems due to an aging population, continuous at-home balance monitoring is also increasingly important. Additionally, a comprehensive understanding of the motor control mechanisms that deteriorate with aging or disease is crucial for informing therapy methods and technologies. &#13;
&#13;
The goal of this thesis was to develop and validate methods that quantify quiet balance ability and control in unimpaired and impaired human participants. The first part focuses on assessing balance ability, the capacity to maintain upright posture during quiet stance that is currently often quantified by measures of body sway. A review of the strengths and limitations of current clinical and instrumented balance assessments highlighted a critical need for continuous assessment methods that enable objective monitoring of balance function outside of clinical settings. Addressing this need, a novel algorithm that quantifies balance ability using only force and motion sensors embedded in an instrumented cane was developed. Well-established balance measures were successfully estimated in both younger and older adults, demonstrating the proposed method's potential to facilitate continuous balance monitoring in real-world environments.&#13;
&#13;
The next part focuses on identifying balance control strategies. The novel intersection-point analysis, based on foot-force direction and point of application, was used in conjunction with a simple biomechanical model and an optimal controller to quantify balance control. The first study demonstrated that unimpaired quiet balance in a challenging environment was best described by a controller that maintained minimal effort by adjusting relative ankle and hip joint torques. Applying this method to aging populations in a subsequent study revealed that older adults rely more on neural feedback, possibly to compensate for muscle strength deficiency. This study also quantified individual balance controllers, highlighting the method's potential as a diagnostic tool for aging populations. Finally, the model was extended to describe balance control after stroke. The results suggest that the non-paretic limb compensated for the paretic limb's abnormal coordination pattern by strongly favoring neural feedback. As one of the first studies to model quiet balance after stroke, this work lays the foundation for future efforts on studying balance impairments. The contributions of this thesis are instrumental to enhancing at-home monitoring, advancing clinical practices, and reducing fall-related injuries, ultimately improving quality of life for aging and neurologically impaired populations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163456</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformable Object Manipulation with a Tactile Reactive Gripper</title>
<link>https://hdl.handle.net/1721.1/163455</link>
<description>Deformable Object Manipulation with a Tactile Reactive Gripper
Sunil, Neha
Manipulating deformable objects remains a fundamental challenge in robotics, as techniques developed for rigid objects often fail to generalize. Deformable objects exhibit infinite-dimensional configuration spaces, frequent self-occlusion, and high model uncertainty, making global state estimation and predictive modeling unreliable. To address these challenges, we propose a perception-driven framework that combines global visual understanding with local tactile feedback. Rather than modeling the full configuration of the object, we leverage local constraints, grounded in modular visual and tactile representations, to enable robust, reactive, and generalizable manipulation. The primary contributions of this work include:
• Chapter 2: Cable Following. A tactile control strategy for in-hand cable manipulation that decouples contact regulation from object pose control, enabling fast, reactive sliding and closed-loop plug insertion using only local tactile feedback.
• Chapter 3: Towel Edge Tracing. An extension of contact-based control to fabric edge following, along with the learned tactile perception networks that support this capability.
• Chapter 4: Visuotactile Grasp Affordance. A grasp affordance model trained in simulation and refined with tactile self-supervision, enabling high-confidence edge grasping on towels.
• Chapter 5: Dense Object Correspondence. A confidence-aware dense descriptor representation that supports correspondence across crumpled and symmetric garments in air and on a table.
• Chapter 6: Behavior Architecture and Planning Interfaces. Integration of perception modules into a reactive, confidence-based folding system and an exploration of how dense descriptors can interface with demonstrations, language, and task and motion planning.
Collectively, these contributions show that global state estimation and dynamics prediction are not required for reliable deformable manipulation.
Instead, semantically meaningful local interactions, guided by modular visual and tactile representations, can drive scalable, long-horizon behaviors across varied objects, configurations, and tasks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163455</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion</title>
<link>https://hdl.handle.net/1721.1/163450</link>
<description>A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion
Chignoli, Matthew T.
Legged robots have long been envisioned as a means of expanding robotic capabilities beyond structured environments, yet achieving high-agility locomotion remains a fundamental challenge. This thesis presents a model-based framework for parkour-style locomotion, enabling robots to execute highly dynamic maneuvers such as jumps, rolls, and flips with precision and robustness. A key challenge in planning these motions is selecting an appropriate dynamic model that balances computational efficiency with physical accuracy. To address this, a model assessment strategy is introduced to determine the simplest model capable of capturing task-relevant dynamics. Even with well-chosen models, solving long-horizon trajectory optimization problems for dynamic motions is computationally demanding. This thesis introduces graduated optimization techniques, which improve solver efficiency and reliability by generating high-quality initial guesses through progressively refined problem formulations. Additionally, a novel formulation of rigid-body dynamics algorithms for systems with kinematic loops accelerates trajectory optimization and simulation. Finally, two control strategies are proposed to execute planned motions on hardware: a model-based tracking controller for real-time adjustments and an imitation learning policy trained on optimal trajectories to enhance robustness. Extensive experiments on hardware validate the framework, demonstrating the successful execution of complex, high-impact locomotion behaviors. By integrating advanced planning, optimization, and control techniques, this work establishes a foundation for high-agility legged locomotion, pushing beyond conventional automation toward real-world, dynamic robotic movement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163450</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks</title>
<link>https://hdl.handle.net/1721.1/163447</link>
<description>Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks
Huffman, Sandra
In professional science and engineering contexts, modeling practices are frequent and diverse. To understand, analyze, and communicate, scientists and engineers simplify and distort the complex systems with which they work. This practice is known as modeling. Typically, scientists create models to predict and explain phenomena while engineers develop them to analyze and test systems, make design decisions, and predict the performance of built systems. Models can include verbal (ex. analogy, story), visual (ex. diagrams, graphs, images), and symbolic (ex. equations) representations. When scientists and engineers model, they do so expansively: pulling from different resources, combining modeling strategies, engaging in critique and iteration, and contextualizing their claims in the work of their field. This is not the case for students in technical engineering classes who are attempting to learn these skills. Traditional, lecture-based courses are the norm for introducing technical material to undergraduate engineering students. These courses typically consist of lectures, recitations, problem sets, and exams. In this type of class, students report homework and test problems as having an outsized influence on their learning approach. These problems tend to be narrow and prescribed. Colloquially known as ‘Textbook-Style’ problems, well-defined, single-solution problems are not sufficient to prepare students to successfully tackle the ill-defined, multifaceted engineering problems they will face in their careers. These problems do not elicit student engagement in scientific or engineering modeling practices. Instead, they lead to inauthentic, bounded learning where students develop strategies adequate for groups of similar problems, but too narrow for use outside of the classroom. There has been significant research on innovative educational interventions and alternative problem types shown to improve classroom learning.
However, educators work within established structures that resist change, leading to the perpetuation of insufficient practices. The gap between textbook-style problems and the problems engineers face, therefore, exists not just in the problem type, but in the context surrounding the task. In this work, I describe and characterize the norms and practices of the classroom environment through three qualitative studies, each centered on traditional technical thermal-fluids courses. Specifically, I investigate the ways in which the development of student modeling practices is supported or undermined. I do this, in part, by adapting the theoretical framework of Figured Worlds. Originally developed by Dorothy Holland and later used in Engineering Education research, figured worlds is a situative framework that allows researchers to look at distinct, sometimes contradictory cultural worlds within the same group and activity. In the first study, I look at individual student approaches to classroom tasks in a think-aloud study, comparing their problem solving approaches and analyzing prompt-student interactions. In the second study, I analyze small groups’ modeling practices and how they are limited by the cultural practices of schooling. In the third study, through semi-structured interviews, I document instructor perceptions of their research and teaching, and discuss the misalignments within and between these contexts. Together, these works outline the mechanisms by which school practices can inhibit the development of student modeling capabilities and the role of students and instructors in perpetuating these practices. In describing student and instructor behavior and contextualizing practices that may otherwise be ascribed to misconceptions, carelessness, or ignorance, I hope to build a foundation for future research into pragmatic educational interventions for enhanced learning outcomes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163447</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for Longevity: Service and System Innovation</title>
<link>https://hdl.handle.net/1721.1/163445</link>
<description>Design for Longevity: Service and System Innovation
Lee, Sheng-Hung
The global demographic shift toward an aging population presents complex social, economic, and systemic challenges, necessitating innovative approaches to service design, systems thinking, and financial planning. This dissertation, Design for Longevity: Service and System Innovation, examines these transformations and proposes strategies to foster a “longevity society”, a new era in society necessitating a fundamental rethinking of age and ageing to effectively harness the opportunities afforded by increased life expectancy (Scott, 2021). This research is built upon five relevant paradigm shifts: 1. from age-based to stage-based mindsets, 2. from product-driven to service-driven solutions, 3. from human-centered to humanity-centered design, 4. from circular to longevity economics, and 5. from an aging society to a longevity society. These shifts redefine the role of designers and researchers in creating adaptive, inclusive, and sustainable systems for the future. This dissertation explores how tangible artifacts, Longevity Planning Blocks (LPBs), can be employed to create effective service encounters. The research questions explore 1. how to use boundary objects (BOs) to uncover and define latent user needs, 2. how to use a mixed-method approach to analyze experiment data, 3. data-driven persona creation, and 4. the design of longevity planning services across financial planning, service innovation, and system thinking. Central to the research is a study of LPBs, BOs designed to facilitate collaborative engagement between a facilitator and 69 Boston-based participants, stratified by age, gender, pre-tax annual income, and assets. LPBs, employed in experiments, help investigate participants’ needs and concerns across various life transitions and stages. 
These tangible BOs facilitated informal yet insightful discussions, uncovering how individuals navigate ambiguity, make complex decisions, manage their evolving physical, mental, and social health, and form perceptions about living solo. Data from in-person longevity planning experiments provided nuanced insights into the interplay of individual, societal, and systemic factors shaping longevity planning services. A mixed-methods approach integrates qualitative and quantitative techniques, including expert and user interviews, co-creation workshops, pre- and post-experiment surveys, hierarchical cluster analysis, K-means clustering for persona development, and causal loop diagrams for longevity planning service system modeling. Constructivist grounded theory and exploratory factor analysis uncover emerging themes and systemic interconnections, emphasizing the importance of adaptive services that align with changing needs and broader social infrastructures. The study introduces the notion of Design for Longevity (D4L), expanding on longevity economics and circular economy principles to address the complexities of extended lifespans. D4L highlights how evolving resources, transformative needs, and systems integrate life stages into the design of products, services, and experiences. This dissertation contributes to service innovation, financial planning, and system design by proposing actionable insights for longevity planning services. It emphasizes multi-stage life planning, intergenerational collaboration, and systemic thinking as foundational to a longevity society. It also contributes a mixed-method approach, offering design practitioners a replicable, data-driven framework for persona creation applicable beyond longevity planning. Concluding with reflections on social infrastructure, community, and culture, the study calls for cross-disciplinary collaboration to address longevity planning challenges. 
By advancing the understanding of longevity planning and its systemic implications, this work lays a foundation for designing a future where extended lifespans are inclusive and socially engaged.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163445</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation</title>
<link>https://hdl.handle.net/1721.1/163444</link>
<description>Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation
Petelina, Nina T.
A well-fitting, high-performance prosthesis for people with a lower limb amputation can greatly improve users’ mobility and quality of life. Still, many amputees lack access to high-performance prosthetic components due to the cost and availability of continuous care. This thesis aims to design low-cost, high biomechanical performance above-knee prosthetic leg components (prosthetic foot and knee) that will result in a walking motion likely to be perceived as able-bodied after minimal acclimation time. Above-knee amputees have two common gait deviations from able-bodied and below-knee amputee gait: lack of early stance knee flexion (ESF) and delayed initiation of knee flexion (IOF) during late stance phase. These deviations are likely a result of prioritization of stability at the expense of other functions such as shock absorption and progression through stance. A preliminary perception study was conducted to investigate the acceptable bounds of gait deviation that can be incorporated into a prosthetic leg design without compromising the perception of "typical" walking. Using these results, I created the Hip Trajectory Error (HTE) framework for designing prosthetic feet specifically for people with an above-knee amputation. The HTE framework takes into account the lack of ESF by incorporating the shock absorption function of ESF within the prosthetic foot design. This is achieved by targeting able-bodied hip center motion, which is correlated with sufficient shock absorption during the stance phase. This thesis presents an optimization and performance evaluation process that resulted in a prosthetic foot structure that not only closely replicates able-bodied hip center motion but also could be manufactured for a low cost. An experimental study successfully demonstrated that the HTE framework can be used to predictively design prosthetic feet for above-knee amputees. 
HTE-designed prosthetic feet enable comparable biomechanical performance to daily-use tuned and prescribed prosthetic feet within 10-15 minutes of acclimation time and without iterative multi-day fittings. Next, I proposed a method to recommend a damping coefficient for the prosthetic knee to achieve able-bodied peak knee flexion during swing phase. A range of recommended damping coefficients to achieve target peak knee flexion angle in transfemoral amputees was determined using a simple three-step framework. This framework incorporates effects from common transfemoral prosthetic gait deviations, such as slower self-selected walking speeds and delayed initiation of knee flexion during late stance. The calculated range of recommended damping coefficients was experimentally investigated and found to enable a peak knee flexion angle within two standard deviations of able-bodied peak knee flexion angle. Lastly, I created the Full Leg Optimization (FLO) framework to design the prosthetic foot and knee concurrently based on minimal inputs from the user and the prosthetist. The framework anticipates the lack of ESF and the delayed initiation of late-stance knee flexion and uses the HTE framework to predict the orientation and location of the knee mechanism. Using this prediction, the rotational axes of the prosthetic knee can be positioned to start knee flexion at a point in late stance chosen by the prosthetist to provide sufficient stability to the user. A proof-of-concept study demonstrated the accuracy of the prediction for one user after minimal acclimation time, confirming the ability to predictively design prosthetic leg components in tandem. The FLO framework can therefore be used to predictively design a passive prosthetic leg for above-knee amputees while considering common gait deviations due to stability needs. 
This doctoral work demonstrates that the presented frameworks can be used to quantitatively design prosthetic feet and knees based on the needs of above-knee amputees, which could reduce fitting time and manufacturing cost while improving mobility.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163444</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/163440</link>
<description>Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation
SaLoutos, Andrew
Within the field of robotic manipulation, much research focus has been placed on improving perception and planning algorithms, assuming that the actions output by these high-level planners will be easily achieved by the robot systems. However, to surpass human manipulation performance, fast and robust execution of manipulation plans is just as critical as improved perception and planning methods. In this thesis, we introduce the last centimeter problem, which states that the most difficult part of grasp execution is when less than a centimeter remains between fingertips and an object, and contact is imminent. To solve this problem, we propose a reflexive control framework, which is a manipulation control architecture that decouples low-level, high-bandwidth behaviors, which we call reflexes, from broad high-level plans. The reflexes are fast, autonomous reactions to local sensing information that are designed to add robustness to high-level manipulation plans while also reducing the necessary complexity of manipulation planning problems. To deploy our reflexes, we design hardware platforms that incorporate high-bandwidth actuation and low-latency tactile sensing, allowing us to maximize the reactive capabilities of the overall manipulation system. We validate our approach through studies on teleoperated grasping and autonomous planar grasping, which show that our reflexive controllers increase manipulation speed and robustness. Then, we perform extensive simulation studies for autonomous grasping in SE(3), conducting experiments with single objects as well as cluttered scenes, using a variety of state-of-the-art grasp planners. Our results show greatly improved grasp robustness with our reflexive controllers, across all object types and grasp planners. Further experiments show that the benefits of our reflexes persist across sets of objects that are larger, heavier, and more slippery, and with increasing magnitudes of errors in the executed grasp poses. 
While this thesis demonstrates that the reflexive control framework is effective at increasing grasp robustness during picking, our framework is constructed in a way that is amenable to extension to other tasks, like in-hand manipulation or constrained object placement, as well as application to more complex grippers, such as those with three or more dexterous fingers and more diverse sensing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163440</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Certification of Deep Learning-based Dynamical System Identification</title>
<link>https://hdl.handle.net/1721.1/163435</link>
<description>On the Certification of Deep Learning-based Dynamical System Identification
Zhang, Wang
Dynamical system identification, the reconstruction of the system governing equations from observations, has been studied for decades. With the recent emergence of deep learning techniques, neural network-based parameterization enriches this classical field by offering new capabilities in modeling complex systems. While promising advances have been made, these black box models face significant challenges due to their limited interpretability and lack of physical guarantees, raising concerns about their applicability in scenarios where trustworthiness is critical.&#13;
&#13;
In this thesis, we develop a comprehensive framework to analyze, understand, and learn dynamical systems. We start with a contrastive learning method to capture system invariants (i.e., conserved quantities) from trajectory observations of dynamical systems. Building on these learned invariants or known priors, we introduce a projection layer for neural networks that guarantees the preservation of physics constraints in the learned dynamics models. This two-step approach significantly improves the trustworthiness and interpretability of traditional black-box models. On top of this, we extend the methodology to learn physically meaningful embeddings corresponding to inter-system characteristics, enabling zero-shot meta-learning capabilities for dynamical system models. Finally, we reduce the bias gap in classical neural network-based aleatoric uncertainty estimators. We identify overestimation issues in existing variance attenuation methods and propose a novel denoising-based approach that provides more accurate estimates of data uncertainty. This method not only applies to regression tasks but also extends to dynamical system observations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163435</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters</title>
<link>https://hdl.handle.net/1721.1/163433</link>
<description>Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters
Ghodgaonkar, Aditya
This thesis presents the derivation, experimental validation, and demonstration of new design theories for compact, low-pressure, clog-resistant drip emitters that can make drip irrigation affordable, reliable, and easier for farmers to adopt. Broad adoption of water-efficient irrigation methods such as drip irrigation is imperative to sustainably meet projected global food demand against the backdrop of diminishing freshwater resources, constrained arable land, and climate change. In drip irrigation systems, emitters are passive flow-regulating devices that are inserted into the drip tube to align with every plant. They are designed to provide a constant flow rate once they are pressurized to at least their activation pressure, thus ensuring uniform, localized irrigation of plants. However, conventional emitters directly contribute to three barriers that have limited drip irrigation adoption – high raw material-driven equipment costs, high pumping power costs associated with pressurizing all emitters in the field to their activation pressure, and gradual loss of reliability due to clogging. Compact, low-pressure, clog-resistant emitters can address these challenges, but to design them, we must model and tune their operating physics, which is centered around two complex features – a millimeter-scale tortuous passage called the labyrinth, and fluid-structure interaction (FSI) involving a flexible silicone rubber diaphragm and a micro-duct. This makes conventional design approaches relying on high-fidelity simulation software or empirical trial-and-error too expensive and time-consuming to use for the development of compact, low-pressure, clog-resistant emitters on competitive industrial timelines. This thesis addresses these challenges through three contributions. &#13;
&#13;
The first contribution presents an empirically derived hydraulic model of emitter labyrinths, which are typically the most volume-intensive feature of emitters. The model relates labyrinth flow rate to select material volume agnostic parameters, allowing designers to create compact labyrinths with desired hydraulic performance. The compact labyrinths can enable up to 10% reduction in the raw material-driven cost of drip equipment. &#13;
&#13;
The second contribution presents a 1-dimensional model of the FSI in emitters that can predict their flow rate-pressure performance in 2-3 minutes and within 8-14% error, cutting down on design cycle times by orders of magnitude. This facilitated the rapid synthesis of low-pressure emitter designs having 50-60% less activation pressure than conventional emitters, cutting pumping power costs by an estimated 18-23%. &#13;
&#13;
Together, the first two contributions can enable an estimated 18% reduction in the lifetime costs of drip irrigation, but long-term adoption requires that the emitters be clog-resistant and compatible with the current maintenance practices of farmers. To that end, the third contribution presents an experimental investigation of clogging in low-pressure emitters. The results of the investigation directly correlated the geometry of emitter hydraulic features to the critical particle size that would clog them. As a result, compact, low-pressure emitters could be designed to be compatible with the same filters and maintenance practices as current state-of-the-art products that have higher activation pressures. This was confirmed by field testing the compact, low-pressure, clog-resistant (MIT) emitters alongside commercial reference designs with their prescribed filters for nearly 1200 hours. At the end of the field test, the MIT emitters still held 90-94% of their initial flow rate, putting them on par with or better than the reference products in terms of irrigation reliability. The collective contributions of this thesis present the knowledge needed to design emitters that can make drip irrigation more affordable to adopt by farmers and demonstrate that substantial capital and operating cost reductions can be realized without sacrificing product reliability or requiring expensive changes to current farmer maintenance practices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163433</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Complexity of Model-Based Controllers for Legged Robots</title>
<link>https://hdl.handle.net/1721.1/163429</link>
<description>Tailoring Complexity of Model-Based Controllers for Legged Robots
Khazoom, Charles
Humanoid robots promise human-like mobility, but must manage complex and often conflicting control objectives. While model-based controllers can address these challenges using online optimization, they have high computational demands. Model predictive control (MPC) provides closed-loop stability with online trajectory optimization, but achieving real-time rates is difficult for high-dimensional systems. To mitigate this limitation, most MPC implementations rely on reduced-order models (ROMs) that simplify planning but fail to capture whole-body constraints like joint limits and self-collisions. Reactive whole-body controllers (WBCs) partially address this limitation by projecting ROM trajectories onto some whole-body constraints, but these are restricted to acceleration-level constraints like friction cones and torque limits. This thesis advances humanoid planning and control through a renewed focus on model fidelity, solution accuracy, and solve times with three key contributions. First, we propose the CBF-WBC, which augments reactive WBCs with position constraints using control barrier functions (CBFs), enabling the MIT Humanoid to avoid self-collisions with minimal computational overhead. As a result, the robot can reactively deviate from infeasible trajectories generated by a reduced-order MPC. Despite fast solve times below 100 microseconds, conflicts can arise between the reduced-order MPC and the CBF-WBC. To address this, we enable real-time whole-body MPC using the alternating direction method of multipliers (ADMM) to provide low-accuracy solutions at high feedback rates. The controller is reliably deployed on hardware and enables the MIT Humanoid to walk robustly on rough terrains and plan complex crossed-leg and arm motions that enhance stability when recovering from significant disturbances. While low-accuracy solutions often suffice for real-time control, we found that higher accuracy could still improve closed-loop performance if computational speed allows. 
Building on this insight, we propose a framework to simultaneously optimize solution accuracy and model complexity to maximize closed-loop performance. Instead of planning with a single model that is too complex or too simple, solve times can be reduced by planning over a sequence of models of decreasing complexity. We extract ROMs from whole-body dynamics equations and optimize their horizons, discretization timesteps, and solution accuracy using black-box optimization. The optimizer can sacrifice model complexity for additional ADMM iterations, reducing falls nine-fold and enabling a 2 m/s walking speed on hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163429</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions</title>
<link>https://hdl.handle.net/1721.1/163425</link>
<description>Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions
Kombargi, Aly
This study presents a sustainable and cost-effective method for hydrogen generation using aluminum waste, addressing both energy and environmental challenges. Activated aluminum reacts with water to produce hydrogen, heat, and aluminum oxyhydroxide (boehmite), a commercially valuable byproduct. As a safe, efficient, and cost-effective energy carrier with an energy density exceeding 20 kWh/L (8 kWh/kg), aluminum enables on-demand hydrogen production for diverse applications, including maritime transport and off-grid power systems. This research optimizes reaction kinetics to enhance hydrogen yield and rate while minimizing costs and carbon emissions.&#13;
&#13;
Activation involves coating aluminum with a gallium-indium eutectic (eGaIn) liquid metal, which disrupts the oxide layer and enables spontaneous reaction in aqueous environments. The study investigates seawater as an ionic medium for eGaIn eutectic agglomeration and reuse. However, chlorine binding slows the reaction, which was countered using high-temperature operation and catalytic enhancement. Adding 0.02 M imidazole accelerated the reaction 60-fold, enabled 92% eutectic recovery, and achieved 99% of the theoretical hydrogen yield.&#13;
&#13;
Environmental conditions significantly influence reaction efficiency. Increasing seawater temperature from 20°C to 90°C enhanced reaction rates 44-fold, aligning with Arrhenius Law. Isochoric reactions at high pressure were tested to simulate deep-sea vehicle environments using onboard hydrogen reactors fueled by aluminum and surrounding seawater. Results showed a 33% yield increase at 6 MPa (586 m depth) compared to atmospheric pressure, primarily due to surface tension effects that reduce hydrogen bubble size, improving aluminum-water contact at higher pressures.&#13;
&#13;
A life cycle and cost analysis identified an optimized production scenario with a carbon footprint of 1.45 kgCO2eq/kg H2, meeting green hydrogen standards. Major contributors include recycled aluminum use and processing and the eGaIn alloy, but eutectic recovery and thermal energy reuse further reduce emissions. Using scrap aluminum and recovering byproducts, hydrogen production costs are estimated at $9.2/kg. Additionally, reselling boehmite (market price $2.5/kg) could generate revenue 5.6 times greater than input costs, significantly improving economic viability.&#13;
&#13;
To demonstrate scalability, a modular hydrogen reactor was developed and directly integrated with a commercial generator, reliably producing 400W of power from on-demand, 99% purity lab-tested hydrogen. The envisioned application is a fully integrated aluminum recycling system that utilizes aluminum waste and seawater to generate hydrogen, thermal energy, and boehmite. This approach advances clean energy technology by providing a scalable and economically viable hydrogen production pathway.&#13;
&#13;
Beyond its direct application in underwater technologies, this optimized reaction can support energy-intensive operations such as heating, desalination, transportation, industrial hydrogen production for refining and fertilizer synthesis, stationary energy systems for off-grid power, and renewable energy storage. Its versatility strengthens energy security and decarbonization efforts while offering a cost-competitive alternative to conventional fuels, positioning it as a key enabler of a sustainable energy future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163425</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Venture Capital and Corporate Finance</title>
<link>https://hdl.handle.net/1721.1/163424</link>
<description>Essays in Venture Capital and Corporate Finance
Paine, Fiona
This thesis consists of three chapters. In the first chapter, I study the impact of restricting foreign venture capital investments for national security reasons. Countries have increasingly been using economic policies to further geopolitical and national security goals. Thus far, economists have focused on studying tariffs and subsidies despite a broader range of economic tools actually being implemented. How costly are these other policies and what are their effects on capital markets, investment, and the economy more broadly? In this paper, I examine a 2018 U.S. law (FIRRMA), which expanded the government’s ability to review and block transactions on national security grounds to include venture capital (VC) investments by foreign investors. I use the passage of FIRRMA, its differential impact on specific VC industries, and the role of Chinese investors in U.S. venture capital to study whether foreign investment screening impacts capital supply. I find that FIRRMA had a negative effect on capital supply in impacted industries due to two factors: 1) the specialization of VC investing (such that the substitution of outside capital into impacted industries is low) and 2) networks in VC investing (there are spillovers to domestic syndication partners in impacted industries). I further find that the change in capital supply is costly, leading to lower innovation by startups. I introduce a novel way of measuring innovation early in the life of a startup using text from startup websites. I use this measure to show there is a selection effect where VCs give first round funding to less innovative startups after FIRRMA. Finally, in a case study of the biotechnology industry, I show that impacted startups suspend drug projects at higher rates, and in particular their risky projects. In the second chapter, joint with Johnathan Jensen, we study municipal cyber risk. Cyber attacks are estimated to cost billions of dollars per year. 
However, cyber risk is hard to study since companies rarely disclose hacks and don’t share information on cyber security investment. This paper takes a novel approach by looking at municipal hacking. We use a dataset of municipal ransomware attacks merged with hand collected IT investment data and municipal bond data. We find that lower IT investment predicts hacking. Furthermore, following a ransomware attack, municipal bond yields fall by 13 basis points and IT investment as a share of total town expenditure increases by 23 basis points. We investigate potential channels leading to decreased yields post hacking. We find evidence that being hacked reduces cyber risk by disciplining municipalities to move closer to the optimal level of IT spending. The third chapter investigates the impact of firm data collection and analysis of collected data on the riskiness of firm cash flows. I use a scraped data set of the third party resources loaded on firms’ websites as a measure of firm data collection and analysis practices. I find that firm use of less effective web analytics is associated with an increase in the variance of sales, inventory, and both fixed and variable costs. This effect is despite a lack of change in the level of these variables. Looking at the effect of treatment on the treated, there is higher profit and sales variance during times of higher uncertainty. I use differences in web analytics technology and a change in their relative effectiveness as my identification strategy. As a case study of a large negative demand shock, I look at differences in firm reactions to COVID-19 based on their web analytics usage.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163424</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries</title>
<link>https://hdl.handle.net/1721.1/163423</link>
<description>Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries
Rixey V, Eppa
This dissertation asks: how do small firms overcome regulatory constraints despite powerful opposition? Significant research has documented the nonmarket strategies of large, multinational firms seeking to benefit from and capture regulatory systems. However, despite the historically important role of small and medium-sized enterprises (SMEs) in the economic and civic structures of the US, there is much we do not know about whether and how they attempt to exert their own influence in regulatory environments. To explore this, the US beer industry was selected as a strategic research site where SMEs have had a range of successes and failures in developing policy influence. In the late 1970s, the US beer industry rapidly consolidated to less than 100 breweries, but today, with the rise of small, craft breweries, there are over 9,000 breweries in the US. Over 7,000 of these focus on direct-to-consumer (DTC) sales, which were explicitly or practically illegal in all 50 states in 1980. How did this market and regulatory transformation take place and why did some states significantly change their policies to support small brewers while others did not? Two studies were conducted to explore this, an in-depth qualitative study of a single state and a mixed-methods comparative study of six states. The single state was selected for variation in policy outcomes over time and at local levels. Through interviews and archival research, it was revealed that craft breweries engaged in a bottom-up approach, through which individual firms venue-shift downward, from state to local regulators, to successfully ease state-level constraints. In local public hearings, individual entrepreneurs blended local corporate social responsibility (CSR) with an experimental approach to corporate political activity (CPA) that motivated city-based regulators to challenge state-level restrictions on DTC business models. 
To understand how this process of developing policy influence unfolds in the absence of local regulators, the national trade associations in the beer industry were analyzed and six states where the state has near exclusive control over alcohol regulations were selected for further analysis. Controlling for a range of factors through a cross-sectional database led to a geographically proximate sample of six comparable states with wide variation in the favorability of policies and the number of breweries per capita. A unique dataset of over 5,000 legislative updates on proposed and enacted federal and state policy changes was supplemented with archival and interview data to assess policy influence. The conventional approach described in the literature, collective action via a trade association, was important but often insufficient. Each state had a functioning trade association representing most craft breweries, but sustained policy influence was observed only in states where full-time leaders of these associations understood the political landscape and developed policy partnerships to tilt the odds in their favor. Policy partnerships entailed legislation alleviating regulatory constraints while also including new provisions that ensured long-term alignment among the partners. Taken together, these studies reveal the vital importance of collective action extending beyond the focal industry for SMEs to develop policy influence at the local or state level.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163423</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators</title>
<link>https://hdl.handle.net/1721.1/163418</link>
<description>Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators
Poon, Ryan Joseph Mar
Tendon-driven mechanisms provide a range of benefits for robotic systems, particularly by allowing actuators to be mounted at the base of a manipulator and reducing its inertia. This thesis explores two projects that exploit and advance tendon-driven mechanisms: a wheeled-grasping hybrid climbing robot with modular tendon-driven grasping arms and a hybrid twisted-winching string actuator. Called CLIMR (Cabled Limb Interlocking Modular Robot), the novel climbing robot adapts to columns of varying diameters by adding or removing modular arm links. CLIMR also features capabilities like self-locking (the ability of the robot to stay on the column without power), autonomous grasping, and rotation around the column axis. Mathematical models describe conditions for self-locking, vertical wheeled climbing, and complete grasping of a column. Simulations and experimental results validate the proposed models. The insights from CLIMR are then extended into general design strategies for future developments of similar hybrid climbing robots, focusing on methods to inform design decisions and assess metrics such as adaptability. Ultimately, this work provides a comprehensive framework for designing hybrid climbing robots, highlighting the potential of autonomous solutions for environments where climbing tall structures is critical. Stemming from this climbing robot work is a novel actuator system combining a twisted string actuator (TSA) with a winch mechanism. Relative to traditional hydraulic and pneumatic systems, TSAs are compact but face limitations in stroke length and velocity. This TSA-winch system overcomes these constraints without risking overtwisting by providing both high displacement winching and high force twisting modes. The design features a rotating turret that houses a winch and a worm gear transmission driven by a through-hole drive shaft. Models are developed for the combined displacement and velocity control of this system. 
Experiments validate the open loop model as well as the closed loop model, which uses a conductive string feedback controller with a gain scheduling and control effort allocation scheme. For specific cases that require large displacement winching followed by high force twisting over several repeatable cycles, an alternate design sacrifices complete string state control and replaces a motor with passive automatic clutches to achieve a seamless transition between modes triggered by the string load. The models of the clutch torque thresholds for this version of the actuator are verified by experiments. Overall, this research contributes to the development of more versatile and efficient actuation systems for tendon-driven robotic applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163418</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination of distributed energy resources for a reliable,&#13;
resilient, and affordable decarbonized grid</title>
<link>https://hdl.handle.net/1721.1/163417</link>
<description>Coordination of distributed energy resources for a reliable,&#13;
resilient, and affordable decarbonized grid
Jagadeesan Nair, Vineet
Rapid decarbonization of the power grid is essential to meet climate goals by reducing emissions and enabling sustainable electrification of sectors like transport and heating. This requires shifting from centralized fossil-fuel generation to variable renewables like wind and solar. The grid must also adapt to a growing number of small-scale, distributed energy resources (DERs) at the edge, such as rooftop solar, batteries, electric vehicles, and heat pumps. This thesis focuses on modeling, optimizing, and coordinating DERs to enable a flexible, resilient, and affordable grid. First, it proposes a novel hierarchical local electricity market for low and medium-voltage distribution grids. This structure enables DER participation through decentralized and distributed optimization, respecting grid physics while preserving privacy and scalability. The market is applicable to both balanced and unbalanced radial grids using two different convex relaxations and power flow models. Grid services are also priced based on duality theory. Numerical simulations show improved dispatch efficiency, reliability, voltage regulation, and lower retail electricity rates. Second, the thesis applies game theory and mechanism design to extract flexibility from autonomous, strategic DER owners. A repeated Stackelberg game with incomplete information and intertemporal constraints yields equilibrium pricing with closed-form solutions. Third, a distributed decision-making framework is developed to coordinate DERs for grid resilience. It mitigates cyber-physical attacks and outages, ranging from 5 to 40% of peak load, using local flexibility and grid reconfiguration, extensively validated through both software and hardware-in-the-loop simulations. Finally, the thesis addresses DER hosting capacity. 
New algorithms are developed that co-optimize the siting and sizing of diverse DERs under uncertainty using Monte Carlo sampling, stochastic programming, and k-means clustering for scenario reduction. Results show that intelligent DER coordination can defer grid infrastructure upgrades and support greater renewable integration and electrified demand growth. Together, these contributions provide analytical and simulation tools to improve the planning and real-time operation of future distributed, low-carbon power grids.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163417</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction</title>
<link>https://hdl.handle.net/1721.1/163416</link>
<description>The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction
Koo, Bon H. (Brandon)
There is strong demand for portable technologies that enhance human power output while maintaining safety and range, not only in defense and industry but also in aerospace. Exoskeletons and other wearable powered devices have been proposed as solutions, but a major barrier to adoption is the issue of “fluency”: a combination of metrics representing the seamlessness of human-robot interaction. Most current exoskeleton systems, especially for non-cyclic motions, disrupt user intent and movement, often offering no benefit, or even causing harm by increasing discomfort and injury risk. This lack of fluency is frequently linked to poor intent recognition and absence of predictive control. To address this, we propose developing a human motion prediction system and studying its impact on fluency in exoskeleton-like devices and related human-centered technologies in real-world applications. We introduce an expanded metric “tandem fluency” based on conventional fluency, tailored for evaluating human-robot interaction (HRI) systems where human and robot agents are kinematically synchronized to perform functional tasks. We then develop a proof-of-concept and a functional deep neural network (DNN) capable of detecting human motion intent and predicting motion trajectories in advance using biosignals such as surface electromyography (sEMG). In parallel, we build and test prototype exoskeleton hardware with both single and multiple degrees of freedom. Finally, we conduct human trials with the full closed-loop tandem human-exoskeleton system to evaluate the impact of motion prediction-based control on tandem fluency. 
The results show that classification and regression prediction of human motion prior to the initiation of physical motion is possible, with performance sufficient for practical application; that this prediction can be generated not only before physical motion initiation but often even before the full electrical activation of the primary agonist in many motions; that the DNN is robust to variations in sensor hardware and input formatting; and that using this prediction in the control of a tandem robot system has the potential to improve tandem fluency by positively affecting both subjective experience and objective/metabolic results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163416</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying over Individual Concepts</title>
<link>https://hdl.handle.net/1721.1/163336</link>
<description>Quantifying over Individual Concepts
Kobayashi, Filipe Hisao
Since Montague (1973), it has been assumed that quantificational DPs must, at least sometimes, be analyzed as quantifiers over individual concepts (i.e., functions from indices of evaluation to individuals). Because the domain of individual concepts is significantly greater than that of individuals, the challenge has always been how to properly constrain quantification over these objects. This dissertation proposes a solution to this problem by developing a novel theory as to how NPs are shifted from predicates of individuals into predicates of individual concepts. The idea is that, since NPs are interpreted as restrictors, the nature of this shifting mechanism will constrain quantification. The proposal bears a strong resemblance to the analysis of interrogative clauses of Karttunen (1977): suitable predicates of individual concepts are built from the interaction of a type-shifting operation and existential quantifiers. In three case studies, I show how this theory can solve old and new puzzles: (i) the different interpretations of sentences of the form ‘[Det NP] changed’ (Nathan 2006); (ii) two ambiguities in the interpretation of concealed questions (Heim 1979); and (iii) question intruders, a novel puzzle concerning the interpretation of both embedded interrogative clauses and concealed questions.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163336</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Within ‘Reason’: A Study of Normative Language</title>
<link>https://hdl.handle.net/1721.1/163335</link>
<description>Within ‘Reason’: A Study of Normative Language
Watkins, Eliot
What do we mean when we say that someone ought to do something? What do we mean when we say that someone has a reason to do something? What do we mean when we say that someone has more reason to do one thing rather than another? The primary goal of this project is to shed light on these semantic questions.&#13;
&#13;
The picture of normative talk that I develop across this thesis has a distinctive feature: the notion of a reason (roughly, a fact that counts in favour of something) isn’t given any fundamental role to play. Instead, the meanings of ‘ought’, ‘must’ and ‘is a reason for…’ are all understood in terms of something gradable – they’re understood in terms of facts about how much reason there is for something to be done.&#13;
&#13;
Chapter One focuses on deontic modals like ‘ought’ and ‘must’. I argue that the standard semantics for these expressions is incompatible with the idea that facts about what you ought to do are connected with facts about what you have reason to do. I develop a new semantics for deontic modals which builds in the connections between ought and reasons from the ground up.&#13;
&#13;
Chapter Two centres on ‘reason’. We use ‘reason’ as both a count noun (as in “there is a reason for you to read my dissertation”) and a mass noun (as in “there is some reason for you to read my dissertation”). I argue that the best semantics for ‘reason’ will treat the mass form as fundamental. ‘Reason’ is a predicate of a particular kind of state – the state someone is in when they have reason to do something. I turn this result into an argument against the enduringly popular idea that count noun reasons are normatively fundamental.&#13;
&#13;
Chapter Three stays with reasons. According to a standard picture, normative reasons do not extend beyond the boundaries of agency. If something isn’t an agent – if it can’t do rudimentary practical reasoning – then there can’t be normative reasons for it to do one thing rather than another. I argue that this standard picture gets things totally wrong: there are reasons for non-agents to be certain ways and do certain things. We must not analyse what it is to be a reason by appealing to distinctively agential capacities.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163335</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lessons from CP in Passamaquoddy and beyond</title>
<link>https://hdl.handle.net/1721.1/163334</link>
<description>Lessons from CP in Passamaquoddy and beyond
Grishin, Peter Nicholas
This thesis explores various aspects of CP morphosyntax in Passamaquoddy-Wolastoqey and other Algonquian languages and their consequences for broader generative syntactic theory. It consists of two parts: one investigates clause typing and clause size in Passamaquoddy, and the other investigates the properties of a CP-layer agreement marker, the peripheral suffix, across Algonquian. In addition, a lengthy background chapter offers new data and insight on the correct analysis of the inverse and obviation in Passamaquoddy and across Algonquian.&#13;
&#13;
Part I studies the distribution of the three morphologically-distinguished non-imperative clause types in Passamaquoddy: the independent, the conjunct, and the subordinative. I argue that their distribution in complementation and coordination structures falls out naturally from their structural size, following the work of Wurmbrand and Lohninger (2023) and Bjorkman (2012, 2013). I support this conclusion by carefully investigating how each clause type interacts with Ā phenomena like wh movement and long distance agreement, showing that various complex interactions between these syntactic processes are derivative of clause size: independent clauses and conjunct clauses under epistemic attitudes are large, phasal CPs, conjunct clauses under direct perception predicates are smaller, non-phasal CPs, and subordinative clauses are bare TPs.&#13;
&#13;
Part II studies two unexpected properties of peripheral agreement across Algonquian: (i) its preference for agreeing with third persons, no matter their syntactic role (found in all Algonquian languages); and (ii) its preference for agreeing with the least local goal (found in languages like Passamaquoddy, Ojibwe, and Wampanoag). I explore the consequences of these typologically unusual properties for the theory of φ agreement and provide an analysis of the cross-Algonquian variation we find in peripheral agreement (building on Xu 2021, 2022). I argue that Algonquian third person preference forces us to accept Nevins (2007) and Trommer’s (2008) conclusion that third person cannot be underspecified relative to first and second person, even in the syntax (contra Preminger 2019a and van Alem 2023). Additionally, I show that Algonquian lowest preference doesn’t force us to give up on standard locality properties of Agree, and argue for an analysis under which C agrees with all matching accessible goals, but only spells out the last Agree relation—Expone Outermost—building a parallel with similar ideas in the domain of multiple case assignment. Finally, I capture cross-Algonquian variation in peripheral agreement by varying the specification of the peripheral agreement probe and varying which arguments are able to shift out of the VP phase.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163334</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>*ABA in Multidimensional Paradigms: A MAX/DEP-based account</title>
<link>https://hdl.handle.net/1721.1/163333</link>
<description>*ABA in Multidimensional Paradigms: A MAX/DEP-based account
Zompì, Stanislao
The last decade and a half has witnessed intensive research into *ABA universals—generalizations such as “If a nominative and the corresponding dative have the same exponent, then the corresponding accusative has that exponent, too” (Caha 2009; Smith et al. 2019). Most existing work on these universals has only focused on one ‘paradigm column’ at a time, by checking a given paradigm’s nominative singular, accusative singular, and dative singular, for example, with no heed to whether any of the relevant exponents would also show up in that paradigm’s nominative plural, accusative plural, or dative plural. However, some recent literature has pointed out that inspecting full paradigms is crucial to our understanding of *ABA, because some classic accounts that derive *ABA column-internally turn out to also make predictions as to what may or may not happen across columns, and those predictions are often incorrect (cf., among others, Christopoulos &amp; Zompì 2022). In this dissertation, I review those incorrect predictions and replace them with a novel generalization specifically concerning *ABA-like effects in multidimensional paradigms. I then set out to derive this generalization by setting up an exponent-selection system wherein exponents may both be underspecified and be overspecified with respect to their exponenda, with each of these departures from a perfect match being penalized but not necessarily fatal. In particular, I explicitly implement this intuition in optimality-theoretic terms, via a strict-domination ranking of violable Max and Dep constraints (cf. in particular Ackema &amp; Neeleman 2005; Wolf 2008; Müller 2020), and I show that the resulting system, while restrictive enough to derive the desired generalization, is also powerful enough to afford a natural account of some notoriously unnatural (‘morphomic’) exponent distributions in the inflection of Germanic pronouns and Romance verbs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163333</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas</title>
<link>https://hdl.handle.net/1721.1/163324</link>
<description>Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas
Gius, Luca
This dissertation investigates a fundamental challenge complicating the evaluation and commercialization of entrepreneurial opportunities: some ideas are valuable precisely because not everyone recognizes their worth. The first essay analyzes barriers against the commercialization of contrarian ideas. Researchers working with unpopular AI algorithms tend to commercialize their work only after a successful public evaluation. Those who clear this hurdle subsequently achieve better entrepreneurial outcomes. A regression-discontinuity analysis shows that this partly reflects status quo bias: for unpopular methods only, winning a contest serves as a certification, channeling disproportionate resources to the winner while equally strong near-misses remain sidelined. The second essay finds that greater judge disagreement in venture competitions predicts higher future success, especially for more distinctive startups. The third essay shows that skewness in idea value exacerbates asymmetric information in markets for ideas. Using data from auctions for digital businesses, I illustrate how this can explain why online marketplaces for ideas have struggled to emerge despite lowering transaction costs: informational frictions severely depress bids and prevent high-value digital startups from trading. The final essay, coauthored with Alfonso Gambardella and Scott Stern, introduces the archetype of Homo Entrepreneuricus: an entrepreneur who deliberately tests subjective beliefs through structured experimentation to navigate uncertainty.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163324</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity</title>
<link>https://hdl.handle.net/1721.1/163323</link>
<description>Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity
Jin, Ce
In this thesis, we investigate the fine-grained complexity of various algorithmic problems with an additive flavor, including 3SUM, Subset Sum, and their close relatives. We explore their connections to various areas, such as graph algorithms, discrete optimization, combinatorial pattern matching, and computational geometry. Our new results include improved algorithms and conditional lower bounds for a wide range of problems, answering multiple open questions from the literature:&#13;
&#13;
• Conditional lower bounds for graph problems: We prove new lower bounds for 4-Cycle Listing and Approximate Distance Oracles conditioned on the 3SUM Hypothesis. As a key intermediate step, we show a fine-grained reduction from 3SUM to the special case of 3SUM where all pairwise sums of input numbers are distinct.&#13;
&#13;
• Combinatorial pattern matching: We design improved algorithms for Text-to-Pattern Hamming Distances, Pattern Matching with Wildcards, and Geometric Pattern Matching, by drawing connections from 3SUM and sparse convolution.&#13;
&#13;
• Knapsack-type problems: We obtain a pseudo-polynomial time algorithm for 0-1 Knapsack with (conditionally) near-optimal dependence on the maximum item weight, an improved approximation scheme for the counting problem #Knapsack, and improved exponential time algorithms for the total search problem Pigeonhole Equal Subset Sum.&#13;
&#13;
In order to obtain these results, we employ and develop techniques based on convolution algorithms and their extensions, as well as classic tools from additive combinatorics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163323</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Content Creator Conduct</title>
<link>https://hdl.handle.net/1721.1/163321</link>
<description>Content Creator Conduct
Du, Jason
This thesis investigates the behaviors of content creators. The first study examines whether musicians learn from the success of earlier songs when they create new ones, finding that tracks on a musician’s next album tend to be more similar to the songs that performed better on their current album. The second study explores the cultural, social, and psychological aspects of content creation by tracing first-person singular pronoun usage in contemporary music, revealing geographic, temporal, and genre-based patterns. The third study analyzes the association between content creators' learning tendencies and the explainability of previous outcomes, showing that news editors are more likely to resemble previous popular headlines when those outcomes are more explainable. Collectively, these studies facilitate understanding of the factors that underlie content creation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163321</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A</title>
<link>https://hdl.handle.net/1721.1/163300</link>
<description>Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A
Zhang, Cindy
This paper studies whether firm ownership structure influences pollutive activity. Using facility-level data from the Toxics Release Inventory, I employ a difference-in-differences (DiD) approach to compare toxic chemical release and pollution prevention activity between public and private firms' facilities by exploiting ownership changes. I compare facilities initially owned by private firms that were acquired by public firms and those that were acquired by private firms in the same year. My findings suggest that public acquirers significantly reduce toxic release activity relative to private acquirers. In the reverse case, I find that private acquirers decrease abatement, but pollution volume does not differ significantly. However, for later ownership changes in my sample, private acquirers increase toxic release volume and intensity significantly relative to public acquirers. Lastly, I explore how financial constraints and the local political environment moderate pollution activity. Debt-constrained public acquirers show no significant difference in pollution activity from private acquirers. In Democrat-leaning counties, public acquirers reduce toxic releases more than private acquirers, but in Republican-leaning counties, the differences are less pronounced. Overall, my findings suggest that public firms have decreased toxic release activity over time, but the declines have been offset by increases from private firms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163300</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Corporate Transparency and Cybersecurity Risks</title>
<link>https://hdl.handle.net/1721.1/163295</link>
<description>Corporate Transparency and Cybersecurity Risks
Kim, David Sunghyo
I study whether disclosure mandates alter the equilibrium of cyberattacks by unintentionally informing cybercriminals. The California Consumer Privacy Act (CCPA) requires companies to disclose their personal information collection practices to consumers, inadvertently informing cybercriminals about the potential benefits of breaching each firm. Using a difference-in-differences design, I find that firms disclosing the collection of valuable personal data face an increased probability of data breaches. These firms also strengthen their cyberdefenses both in terms of cybersecurity software and cybersecurity specialists. Firms trade off cybersecurity costs against the risk of data breaches, with the increase in breach probabilities more pronounced among firms that invest less in cybersecurity. Finally, I find that firms adjust their data collection policies as additional defense strategies. Overall, this study highlights the trade-off between transparency and cybersecurity risks in today’s digital economy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163295</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systematic Political Philosophy of Education</title>
<link>https://hdl.handle.net/1721.1/163277</link>
<description>A Systematic Political Philosophy of Education
Pavel, Sonia Maria
My dissertation proposes a fundamental repositioning of philosophy of education relative to political philosophy. I argue that we cannot afford to do political philosophy without a theory of education, just as we cannot afford to make philosophy of education modular, insulated from the rest of political philosophy. To this end, I propose a systematic political philosophy of education, meaning both a systematization of existing approaches to education and a comprehensive assessment of their merits and limitations. I reconstruct the main theories of education – liberal, conservative, democratic, and critical – from their most basic social ontological assumptions to their political programs for education. I then argue that they all struggle to realize their goals for education either as a result of flawed social ontological assumptions or because of a failure to institutionalize their commitments in practice. The lessons I draw from these critiques form the basis of my own novel systematic theory of education. My theory combines traditional political philosophy with insights spanning critical theory, social ontology, and education studies. The central goal is to reconfigure the school as a democratic institution of social learning that not only enables the flourishing of all students but helps society as a whole progress. The project advances on two levels: a methodological and a substantive-normative one. Methodologically, I resist a growing tendency towards the unmooring of political philosophy and philosophy of education. This tendency is peculiar from both a historical and a conceptual perspective. Historically, education was a core issue of political philosophy. Many, even most, of the canonical political philosophers started from the assumption that education is a central purpose of political life.
In my substantive introduction, I take a historical excursus through the canonical political thinkers who best exemplify this emphasis on education: Plato, Rousseau, and Dewey. For all the differences in their views, all three understood education as essential to realizing their visions. They would have regarded any political philosophy that failed to address education as incomplete. Today, however, few political philosophers address the subject at all, let alone give it pride of place in their theories. This unmooring has had bad consequences for both subfields. Much contemporary work in philosophy of education takes for granted a liberal social ontology and liberal normative commitments without sufficient critical scrutiny. Similarly, most contemporary political theory neglects the topic of education and operates under the assumption of fully formed liberal agents. The lack of conceptual clarity is mirrored in political practice. Education is marred by persistent and seemingly intractable disagreements – from controversies about indoctrination to failures to realize the ideal of equality of opportunity. Our substantive disagreements about education, I argue in my first chapter, are not merely value disagreements about the goals of education. They stem from deep-rooted social ontological assumptions about the nature of human beings and society. But these social ontological assumptions are rarely acknowledged, let alone articulated, by political philosophers or philosophers of education. To correct this, I propose a novel metatheory that shows the systematic connections between the social ontology, normative commitments, and political programs of our dominant approaches to education (liberal, conservative, democratic). My reconstruction illuminates several surprising agreements and differences between them. 
For example, it reveals that many of our most heated political debates about education, between left and right liberals, are merely intramural disagreements among thinkers committed to the same individualist ontology. The systematic reconstruction also illuminates these theories’ failure to generate a coherent vision for education. My critiques show that each approach is characterized by a flawed or incomplete social theory which prevents it from promoting its own values and fulfilling its aims for education. In the case of liberal theories, I show that the liberal goal of cultivating autonomy is self-undermining in light of liberal theory’s individualist social ontology. In the second chapter, I turn to critical theories, which focus on the function of education in reproducing our broader social system. Whereas the dominant approaches start by asking about the nature and goals of education in general, critical theories analyze our contemporary educational systems under specific political and economic conditions. They reveal how schools contribute to perpetuating an oppressive and unjust social system. In other words, the focus of these theories is not on the school as a standalone institution, but as a particularly important subsystem in a larger process of social reproduction. While they are promising in many ways, I nevertheless argue that critical theories of education also have distinct limitations. In particular, even though their social theory and normative commitment are more compelling than the dominant views’, they do not satisfactorily translate these into practical proposals for remaking our systems of education. Having found none of the existing approaches fully satisfactory, I start developing the positive and evaluative dimensions of my own view in the third chapter. I go beyond critical social theory while relying on the broad strokes of its ontology of the human.
My aim is to supplement this ontology by drawing on both empirical social studies and complexity theory to more precisely characterize the social relations and practices that constitute the domain of education. More specifically, I argue that we can best understand the educational subsystem by attending to its overlap and co-integration with the family, the state, and economic production. Schools are the mediating institutional domain between the family on one hand and the polity and economic production on the other. At the evaluative level, I articulate three critiques of social pathologies that I believe have been ignored or underutilized in critical education studies: alienation, commodification, and fragmentation. Alienation refers to a pathological relation of disconnection from one’s own learning, other students, and teachers. Commodification and fragmentation, on the other hand, are problems with the organization and distribution of resources in the education system. In my fourth and final chapter, I propose a new program for education that seeks to overcome some of the barriers faced by other systematic theories of education. Attempting to counter the problems I diagnosed and explained in the third chapter, I argue for a few different kinds of interventions. First, I propose restructuring the educational system in order to resist fragmentation by pursuing a unified distributive pool, consolidating school districts, and abolishing charters. Second, I argue for a reconfiguration of the co-integrated subsystems of the family, the school, and production that seeks to empower both children and those involved in their care to be involved in free, meaningful work. Finally, I articulate a set of classroom-level practices that seek to equalize access to development for individual students while cultivating their collective social and political imagination. 
One of the long-term goals is to make schools into democratic institutions of social learning, through which we strive to remove social blockages such as ideology and reflexivity deficits, in order to collectively solve political problems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163277</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Semantic Account of Distributional Constraints on Temporal in-Adverbials</title>
<link>https://hdl.handle.net/1721.1/163275</link>
<description>A Semantic Account of Distributional Constraints on Temporal in-Adverbials
Rouillard, Vincent S
Temporal in-adverbials (TIAs) are a class of English expressions that can be exemplified with in three days. They are remarkable in that, depending on the syntactic position they occupy, TIAs are subject to very different distributional constraints. In some configurations, their licensing is conditioned by the lexical aspect of verbal predicates. In others, these expressions are negative polarity items. Though both varieties of TIAs have been discussed extensively in the semantics literature (Gajewski, 2005, 2007; Hoeksema, 2006; Iatridou and Zeijlstra, 2017, 2021; Krifka, 1989, 1998), no attempt has been made to understand the relationship between the two. I offer a unified semantic analysis of TIAs, which derives their eclectic distributional constraints from semantic principles.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163275</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrical Grids and Active Edges</title>
<link>https://hdl.handle.net/1721.1/163274</link>
<description>Metrical Grids and Active Edges
Asherov, Daniel
Theories of word stress assignment differ in the kind of representations they adopt. One family of theories takes stress to be assigned by grouping stress-bearing elements into small units below the level of the word (typically, metrical feet), such that one element in each unit is marked as stronger, hence stressed (e.g., Liberman and Prince 1977; Hayes 1980). Another family of theories, often referred to as grid-only, models stress assignment without appealing to feet or similar bracketed representations above the syllable (Prince 1983; Selkirk 1984; Gordon 2002).&#13;
While the grid-only approach generates the attested languages with relatively simple representations, it also generates a host of patterns which are very different from those attested in human languages (Hayes 1995; Kager 2012; also see Stanton 2016).&#13;
This thesis aims to solve a set of overgeneration problems that arise in the grid-only approach. The solution involves three components. The first is a novel class of constraints that are sensitive to word edges but unspecified for the edge they apply to (left or right). The value of this edge, considered the “active” edge, is determined by the ranking between two competing constraints (cf. Richards 2016). The second component involves a specific characterization of alignment constraints and the crucial exclusion of computationally weaker or stronger alternatives. The third component is a set of principled fixed rankings between two classes of constraints. In particular, I propose that constraints sensitive to the active edge systematically outrank constraints that regulate rhythmic alternations (cf. van der Hulst 1997; 2012). The result is a grid-only theory of stress that has a significantly tighter fit to the typology compared to previous theories.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163274</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design</title>
<link>https://hdl.handle.net/1721.1/163272</link>
<description>Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design
Kramer, Evan L.
Earth observation satellites serve as vital information gatherers for effectively addressing some of humanity’s most pressing challenges including management of limited resources and minimization of losses from disasters. Synthetic aperture radar (SAR) is a type of active remote sensing instrument that operates in the microwave portion of the electromagnetic spectrum and is a preferred Earth observation system thanks to the reliable imagery it can collect in all illumination and weather conditions. SAR data is acquired using a side-looking viewing geometry in which the radar is pointed perpendicular to the satellite platform’s direction of motion. This viewing geometry, in conjunction with the illuminated terrain’s topography, results in geometric distortions termed layover and shadow. These distortions degrade the utility of the collected imagery since they effectively obscure portions of the image and preclude extraction of actionable insights. While geometric distortions will be ever-present in SAR imagery, their location and coverage can be manipulated by controlling the relative orientation between the satellite and the illuminated topography. Such manipulation has historically been infeasible for legacy SAR satellites that collect globally consistent data sets under rigid operating requirements. However, the recent advent of commercial SAR satellite constellations has re-framed the practicality of carefully tuned observation geometries that maximize region of interest visibility. Commercial SAR constellations operate on a task-wise basis that grants data end-users flexibility in specifying desired observation parameters including acquisition times and observation geometries. However, a mismatch between on-orbit capabilities and delivered data quality exists due to a lack of formalized tools for planning observations with maximum region of interest visibility. Specifically, no systematic method for identifying visibility-favorable observation geometries exists.
This dissertation addresses this gap in a stepwise approach. First, an extension of open-source radar processing software is developed that enables prediction of layover and shadow in a 2D distortion mask for any satellite-target relative geometry. Visibility metrics are then defined to represent the favorability of a particular observation geometry with respect to a distortion mask. The computation of visibility metric scores at geometries spanning the entire sample space enables creation of visibility maps that completely characterize the visibility characteristics of a given region of interest. To broaden the suitability of visibility maps for observation planning, a set of generalizable visibility maps are created to enable estimation of region of interest visibility characteristics in mission scenarios that are computationally constrained and information-limited. Visibility maps are then directly integrated into satellite operations by developing the first SAR observation scheduling algorithm that explicitly accounts for visibility. Finally, visibility is considered in the orbit design process to establish general guidance on optimal repeat ground track orbit parameters for pre-defined region of interest visibility characteristics. Region of interest visibility improvements of up to 90% are obtained for individual tasks when using the observation planning tools developed in this dissertation. Constellation-wide visibility improvements of 18% are achieved with modest reductions in traditional performance measures when integrating visibility into observation scheduling. Two-fold improvements in the visibility characteristics of observation opportunities are attained for orbits designed to maximize overpass geometry quality.
The contributions of this dissertation are timely, given the concurrent proliferation of flexible, high-resolution SAR observation capabilities, and lay the groundwork for enabling the acquisition of SAR data that is maximally useful for limited resource management, disaster response, and other applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163272</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Content Moderation Interventions for Addressing Online Misinformation</title>
<link>https://hdl.handle.net/1721.1/163267</link>
<description>Essays on Content Moderation Interventions for Addressing Online Misinformation
Martel, Cameron
In Chapter 1, I examine the efficacy of fact-checker warning labels as a content moderation intervention for addressing online misinformation. Warning labels from professional fact-checkers are one of the most historically used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? In a first correlational study, we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in, and sharing of, false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even for those most distrusting of fact-checkers. Our results suggest fact-checker warning labels are a broadly effective tool for combatting misinformation.&#13;
&#13;
In Chapter 2, joint with Jennifer Allen, Gordon Pennycook, and David G. Rand, I investigate the potential of crowdsourced fact-checking systems to identify misleading online content. Social media platforms are increasingly adopting crowd-based content moderation interventions for identifying false or misleading content. However, existing theories posit that lay individuals can be highly politically biased, and that these strong political motivations inherently undermine accuracy. Alternatively, we propose that political and accuracy motivations may, in some cases, operate in tandem – in which case politically motivated individuals need not hamper truth discernment. We empirically assess this by analyzing a survey study of misinformation flagging and field data from X’s Community Notes. Consistent with a simple model of flagging behavior, posts that are both false and politically discordant are flagged the most. Importantly, we find that more politically motivated users flag a greater number of posts, engage in more politically biased flagging, and yet exhibit the same or better flagging discernment. Together, these results show that politically motivated individuals are integral to provisioning a high overall quantity and quality of crowdsourced fact-checks.&#13;
&#13;
In Chapter 3, I assess the perceived legitimacy of different content moderation interventions for addressing online misinformation. Current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts, laypeople, or non-juries. We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163267</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials and Devices for Optoelectronic Packaging</title>
<link>https://hdl.handle.net/1721.1/163266</link>
<description>Materials and Devices for Optoelectronic Packaging
Weninger, Drew Michael
Over the last two decades, improvements in semiconductor manufacturing have allowed for the commercialization of silicon photonic integrated circuits with over 10,000 devices. These chips are critical to data and telecommunications networks where they convert and encode optical signals to electrical signals, and vice versa, in the form of transceivers. Scaling up the number of transceivers and optical fiber connections, or optical input/output (I/O), will be critical to meet the exponential rise in demand for cloud data capacity since 2010.  However, the costly process of active alignment and bonding of optical fiber arrays directly to photonic chips presents a barrier to their high volume packaging and assembly. This approach limits optical I/O density to a maximum of 8 connections per millimeter since optical fibers for communications applications have cladding diameters of 125 micron.&#13;
&#13;
To address this challenge, this thesis explored a new field of silicon integrated photonics involving chip-to-chip (i.e. flip-chip) optical coupling. Evanescent chip-to-chip couplers were designed, fabricated, packaged, and tested for directly connecting silicon photonic chips to other silicon photonic chips, interposers, or printed circuit boards using automated assembly. The design's compact footprint allows for coupler pitches below 10 micron, or an optical I/O density of greater than 100 connections per millimeter, to be realized - an order of magnitude improvement over fiber-to-chip connections. By designing the coupler to use silicon materials and back-end-of-line compatible packaging processes, ease of integration with existing microelectronic foundry tool sets was ensured. Results from an experimental flip-chip coupler prototype showed greater than 90% coupling efficiency with micron scale alignment tolerances when coupling from silicon nitride to silicon-on-insulator waveguides, the first demonstration of such a device. &#13;
&#13;
To further improve optical flip-chip coupler performance, designs were proposed for combining the evanescent coupler with an integrated graded index lens using silicon oxynitride films. Such a device would provide a universal coupling interface in silicon photonics for both chip-to-chip or fiber-to-chip connections. Simulations showed sub-dB coupling loss across all interfaces including flip-chip coupling across a 10 micron gap. Initial fabrication processes were established to deposit, pattern, and etch greater than 10 micron thick silicon oxynitride graded index lenses on silicon and glass substrates. In showing that automated pick-and-place tools can be used for photonic chip assembly, this work represents a critical step in eliminating active alignment and sustainably scaling optical I/O in future transceiver packages.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163266</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting for and Selecting Despite: A Javanese case study</title>
<link>https://hdl.handle.net/1721.1/163265</link>
<description>Selecting for and Selecting Despite: A Javanese case study
Lesure, Cora
This is an investigation of the argument structure of Javanese (Austronesian, Indonesia) which focuses on the distribution of four core derivational morphemes: the Actor Voice prefix, and the suffixes -ake, -i, and -an. The project is based on original consultant work conducted with a speaker of the Central dialect of Javanese. The work establishes language internal diagnostics for various aspects of a stem's lexical semantics and lexical category and then utilizes these criteria to analyze a wide variety of morphological derivatives, both verbal and nominal. The resulting analysis is able to predict the distribution of derivational morphemes and the nature of their resulting derivatives to a higher degree than what was previously understood to be possible.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163265</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering Mandarin Speaker Knowledge with Language Game Experiments</title>
<link>https://hdl.handle.net/1721.1/163264</link>
<description>Uncovering Mandarin Speaker Knowledge with Language Game Experiments
Fu, Boer
Mandarin Chinese offers many intriguing puzzles for linguists because it has a shortage of morphophonological alternations. This has resulted in indeterminacy in various aspects of its phonological grammar, triggering much debate on syllable structure and allophonic mapping. The ambiguity of the data is also a problem for children acquiring Mandarin since alternative grammars can account for the surface forms equally well.&#13;
&#13;
In order to find out what Mandarin speakers have learned about the phonology of their language, I conducted two language game experiments based on fanqie secret languages. It was found that markedness and faithfulness constraints are psychologically real for Mandarin speakers. Furthermore, the interactions between markedness and faithfulness constraints are shown to have an effect on glide movement in the language game. In addition, much speaker variation was observed in the experiment. I demonstrate that it is the result of constraint ranking variation. Nevertheless, general population-level trends on constraint ranking could still be identified. These trends lead to insights on phonological learning beyond Mandarin, showing evidence for naturalness bias and lexicon optimization.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163264</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-Aware Reinforcement Learning with Safety Constraints</title>
<link>https://hdl.handle.net/1721.1/163262</link>
<description>Risk-Aware Reinforcement Learning with Safety Constraints
Feng, Meng
Safety is a critical concern in reinforcement learning (RL) and learning-based systems more broadly, as ensuring reliable and safe decision-making is essential for their deployment in real-world applications. Traditional approaches to address safety often rely on techniques such as reward shaping, carefully curated training data, or explicit handcrafted rules to avoid unsafe actions. More recent advancements have adopted the Constrained Markov Decision Process (CMDP) framework, which trains agents while explicitly enforcing constraints on auxiliary measures such as safety or risk. However, these methods often suffer from significant constraint violations. This thesis identifies the root cause of such violations as stemming from the pursuit of maximal task performance in each policy update. Given the inherent limitations of sample-based constraint assessments in RL, where data is limited and approximation errors are inevitable, these methods often fail near constraint boundaries, leading to excessive violations. To address this, we propose a novel constrained reinforcement learning algorithm that dynamically adjusts its conservativeness during policy updates. By incorporating the risk of constraint violation into the update process, our method can shift focus toward constraint satisfaction when violations are likely, while still striving to improve task performance whenever feasible. Our algorithm reduces constraint violations by up to 99% compared to state-of-the-art baselines while achieving comparable task performance. In the second part of this thesis, we extend CMDPs to address multi-goal, long-horizon problems. We augment the CMDP formulation to incorporate goals, enabling it to handle multiple goals while preserving goal-independent constraint specifications in CMDP. To tackle the complexity of long-horizon tasks with high-dimensional inputs (e.g., visual observations), we propose a method that integrates planning with safe reinforcement learning. 
By leveraging deep reinforcement learning, we acquire the essential components for planning, including a low-dimensional state-space representation and planning heuristics. The planning algorithm then decomposes long-horizon problems into a sequence of shorter, easier subgoal-reaching tasks. The learned agents safely navigate toward these subgoals step by step, ultimately reaching the final goal. We evaluate our method on both single-agent and multi-agent tasks. In 2D navigation, our approach demonstrated up to 74.2% risk reduction, while in visual navigation, it achieved up to 49.3% risk reduction, all while reaching comparable or better success rates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163262</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Obscured universality in Mandarin</title>
<link>https://hdl.handle.net/1721.1/163254</link>
<description>Obscured universality in Mandarin
Chen, Fulang
In this dissertation, I investigate the apparently distinctive syntactic properties associated with the BEI-construction, the BA-construction, and resultative constructions in Mandarin Chinese, which I argue obscure properties that are universal across natural languages. In the case of the Mandarin BEI-construction, it exhibits both passive-like and tough-movement-like properties. I argue for a novel analysis of the BEI-construction as a passive construction, where the passive head/BEI hosts a composite probe [&#120601; + Ā], which triggers composite A/Ā-movement, in the sense of Van Urk (2015). The subject in the BEI-construction is derived via (successive-cyclic) composite A/Ā-movement, followed by a terminating step of A-movement, similar to Longenbaugh’s (2017) analysis of English tough-movement. Under the proposed analysis, the mixed A/Ā-properties associated with the BEI-construction are direct consequences of composite A/Ā-movement (following Van Urk 2015; Longenbaugh 2017). In the case of the Mandarin BA-construction, it involves an apparently pre-posed noun phrase (the post-BA NP) with an affectedness interpretation, which can be identified with either the subject of a resultative phrase in a complex predicate or the direct object of a simple transitive verb. I argue for a novel analysis of the Mandarin BA-construction as a causative construction, where the causative head, which selects a predicate of the caused/resulting event and projects a predicate of the causing event (following Pylkkänen 2002, 2008), has two additional arguments: a causer and a causee. The post-BA NP, as the causee argument of the causative head, also controls a PRO subject in a resultative phrase (following Huang 1992), which is overt in a complex-predicate BA-construction and is phonologically null in a simple-transitive BA-construction (following Sybesma 1992, 1999).
The post-BA NP is interpreted as being affected in the causing event, in the sense that it is caused to perform an action or undergo a change of state (following Alsina 1992). Lastly, in Mandarin, there is no apparent unaccusative-unergative distinction in resultative constructions, unlike languages like English, where distinctions between resultative constructions with unaccusative and unergative matrix verbs follow from the Unaccusativity Hypothesis (Perlmutter 1978; Burzio 1986) and general principles such as the Direct Object Restriction (Simpson 1983; Levin &amp; Rappaport Hovav 1995) and Burzio’s generalization (Burzio 1986). I argue that resultative constructions in Mandarin are causative constructions, where the causative head has four possible argument structures, depending on whether the matrix verb is unaccusative, unergative, or transitive, as well as the semantic relation between the matrix subject and the matrix verb (and between the post-verbal NP and the matrix verb). Despite the fact that the argument structure of the causative head obscures the argument structure of the matrix verb, I argue that in Mandarin resultative constructions, the sole argument of an unaccusative matrix verb is always a causee argument, whether or not an additional causer external argument is present, while the sole argument of an unergative matrix verb, which is a causer external argument otherwise, is a causer argument when the causer is an internal argument. The dissertation showcases how Mandarin provides insight in defending and expanding our knowledge of cross-linguistic properties such as passivization (which embodies Burzio’s generalization and feature-driven movement), composite probing, the bi-clausal syntax and bi-eventive semantics of causative constructions, as well as the nature of affectedness (in causative constructions) and implications for the Unaccusativity Hypothesis and the Uniformity of Theta-Assignment Hypothesis (Baker 1988).
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163254</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions</title>
<link>https://hdl.handle.net/1721.1/163245</link>
<description>How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions
Yang, Christopher
Contemporary phonological research has increasingly become interested in exploring the topic of learnability through the use of computational models. However, many of the proposed models lack one or more of the following properties. (1) Many models do not consider the effect of the lexicon at all on performance, and those that do fail to consider the effect contextual allomorphy has on performance. (2) Many models characterize learnability in terms of the algorithmic implementation of search, rather than a more principled relationship between the data and the hypothesis space. These properties are critically relevant when it comes to the learnability of process interactions. The experimental literature has demonstrated that artificial languages exhibiting patterns generated from certain process interactions are more likely to be successfully reproduced and generalized by participants than others (Ettlinger 2008; Kim 2012; Brooks, Pajak, &amp; Baković 2013; Prickett 2019). The historical literature has likewise noted that surface patterns generated from particular process interactions are more likely to change in systematic ways than others, including lexicalization, in which an alternation is encoded into the lexicon instead of the phonology, and reanalysis, in which a surface generalization is lost or changed entirely (Kiparsky 1968, 1971). Each of these hypotheses makes different predictions when generating forms not seen during training. In this dissertation, I make the following contributions. (1) I propose a novel noisy-channel model of morphophonological learning. This model jointly infers a weighted space of consistent and nearly consistent lexicons and grammars from labelled, unparsed surface data. Predictions are generated given the entirety of the inferred weighted space.
(2) I compare the predictions of the model to the results of two artificial language learning experiments, which, despite involving the same underlying processes, produced contradictory results. I show that the model is able to reproduce the results of both experiments under a unified account: the generalizability of a pattern is determined by the number of hypotheses compatible or nearly compatible with that pattern.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163245</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chemical and physical constitution of reduced copper-red glazes</title>
<link>https://hdl.handle.net/1721.1/163101</link>
<description>The chemical and physical constitution of reduced copper-red glazes
Brown, Sherwood Fiske.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1961; Includes bibliographical references (leaf 60).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163101</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on structuralism and development</title>
<link>https://hdl.handle.net/1721.1/163098</link>
<description>Essays on structuralism and development
Boutros-Ghali, Y.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163098</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳</title>
<link>https://hdl.handle.net/1721.1/163096</link>
<description>Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳
Lee, Young Sang, 1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2000; In title on t.p., double-underscored "y" appears as subscript.; Includes bibliographical references (p. 195-215).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163096</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Object Perception for Robotics</title>
<link>https://hdl.handle.net/1721.1/163020</link>
<description>Resilient Object Perception for Robotics
Shi, Jingnan
A broad array of applications, ranging from search and rescue to self-driving vehicles, requires robots to perceive and understand the geometry of objects in the environment. Object perception needs to reliably work in a variety of scenarios and preserve a desired level of performance in the face of outliers and shifts from the training domain. Obtaining such a level of performance requires robust estimation algorithms that are able to identify and reject outliers, as well as techniques to continually improve performance of learning-based perception modules during test-time. In this thesis, we address these challenges by proposing (1) certifiably optimal solvers and a graph-theoretic framework that together help achieve state-of-the-art pose estimation performance even under high outlier rates, (2) self-supervised object pose estimators that can improve performance during test-time with accuracy comparable to state-of-the-art supervised methods, and (3) a test-time adaptation method for both object shape reconstruction and pose estimation without the need for CAD models. Throughout the thesis, we demonstrate that by using a variety of tools from optimization and learning, we can develop resilient object perception systems that perform reliably in a wide range of conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163020</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grain Boundary Solute Segregation in Vanadium</title>
<link>https://hdl.handle.net/1721.1/163016</link>
<description>Grain Boundary Solute Segregation in Vanadium
Ng, Daniel S.
Vanadium alloys are a candidate structural material in nuclear fusion applications, where the presence of grain boundaries can improve mechanical properties and act as a sink for radiation-induced defects. Solutes with a thermodynamic preference to segregate to grain boundaries can stabilize them, making this a prime consideration for alloy design, but there are limited quantitative solute segregation data for vanadium. Based on results from an ab-initio computational framework for predicting the spectrum of grain boundary segregation energies across the periodic table, select nanocrystalline vanadium-based binary alloy systems were synthesized via mechanical alloying for targeted experiments characterizing differences in segregation strength. Scanning transmission electron microscopy and energy-dispersive x-ray spectroscopy measurements of solute concentrations in the grain boundary and bulk validate computational predictions of the average segregation strengths for different solutes, while showing inhomogeneous solute distributions along the grain boundary network that confirm the necessity for a spectral model that captures the behavior of site-specific segregation energies.&#13;
&#13;
After establishing the segregation behavior of different solutes in vanadium, the effects of solute segregation on other properties are examined. Heating experiments demonstrate that vanadium alloys containing strongly segregating species retain smaller grain sizes upon thermal annealing, indicating better grain boundary stability. The powder metallurgical route used to produce these vanadium alloys requires a subsequent sintering step to densify powders into bulk parts for engineering applications, and dilatometry experiments reveal that the addition of strongly segregating solutes also dramatically suppresses the sintering behavior. A kinetic analysis of the dilatometry data suggests that rapid grain boundary diffusion pathways that are necessary for effective sintering are obstructed by segregated solutes, which has important repercussions for the processability of these alloys. Finally, microstructural characterization and nanohardness testing after ion-irradiation experiments demonstrate that the alloys with solute-stabilized grain boundaries are more resistant to nanovoid formation and radiation hardening. The work in this thesis advances our understanding of solute segregation and its effects in vanadium alloys, and highlights an approach for controlling grain boundaries that may facilitate future alloy design efforts for improved microstructural stability and radiation damage tolerance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163016</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors</title>
<link>https://hdl.handle.net/1721.1/163015</link>
<description>Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors
Sun, Zehao
Microphase-separated block copolymers are attractive materials for self-assembled nanolithography, yet there is a disconnect between the simple patterns commonly formed by block copolymers and the complex patterns required for many nanoscale applications, particularly in microelectronics. To meet this challenge, researchers have sought to design and build copolymer systems at ever-increasing levels of complexity at the (macro)molecular level, which promise to show intriguing emergent properties that are otherwise absent. However, the synthetic challenge as well as the vastly increased parameter space have hindered the systematic study of such complex systems. An efficient, modular synthetic route is thus highly desired for Lego-like molecular construction of property-decoupled, individually tunable target materials.&#13;
&#13;
In this thesis, we will highlight the research endeavor of developing a multiblock Janus bottlebrush copolymer architecture as a novel platform for the generation of diverse nanostructures that have been challenging to fabricate. The architecture, which features two orthogonal Janus domains, can be readily constructed from the corresponding building blocks by graft-through synthesis and can yield hierarchically engineerable phase-in-phase patterns.&#13;
&#13;
Surprisingly, the two constituent domains, though relatively independent of each other, behave significantly differently when combined under certain circumstances. Their collective behavior gives rise to two low-symmetry mesh-like network phases (monoclinic and tetragonal, respectively) that have not been observed in other soft materials before, which are of both fundamental and technological interest. Through a suite of experimental and computational studies, we show that this peculiar phenomenon is an outcome of intrinsic molecular confinement, an emergent effect unique to multi-body, multi-hierarchy complex architectures. This work demonstrates that intrinsic molecular confinement is a viable path to bottom-up assembly of new geometrical phases of soft matter, extending the capabilities of block copolymer nanofabrication.&#13;
&#13;
As another example of modular synthesis, we will show an iterative polymerization methodology for the controlled synthesis of bottlebrush copolymers with expanded compositional and architectural scope. When combined synergistically with other components, this strategy allows rapid access to functional materials that display different phase behavior compared to the self-assembly of conventional copolymers.&#13;
&#13;
The work introduced here is expected to facilitate the synthesis of complex functional copolymers, spark interest in the exploration of their property-function relationships, and enable more opportunities for their application in nanopatterning and other advanced materials.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163015</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites</title>
<link>https://hdl.handle.net/1721.1/163008</link>
<description>Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites
Durso, Michael Nathan
Continuous carbon nanotube (CNT) networks are an emerging, hierarchically-structured, and commercially available nanomaterial built from countless CNT nanocrystals. These macroscopic yarn materials promise to bridge the gap between microscopic CNT fibers – which are well-known for their superlative material properties – and human-scale fiber reinforcements for extreme-performance composites. Yet because the constituent CNTs interact only via intermolecular forces, network properties fall short of those of their building blocks. Although these materials show promise as reinforcement in composites, the networks’ low-permeability, tortuous nanoporous structure makes imbibition with liquids, such as a polymer matrix or surface-functionalizing agents, challenging. Thus, traditional composite fabrication strategies can be ineffective when applied to CNT yarns, especially commercial products subject to proprietary microstructural manipulation.&#13;
&#13;
Using commercially-available CNT yarns fabricated through floating-catalyst chemical vapor deposition (FCCVD) as model systems, we first explore yarn characteristics which are unique to their hierarchical, bundled-fiber structure, placing focus on the oxygen-rich amorphous carbon phase found in pre-densified, chemically-stretched yarns. A green hydrothermal technique is explored to remove this phase from the surface level inward, allowing for purification and improved infiltrability. However, we find this phase is distinct from previously-reported amorphous carbons found in CNTs, showing it behaves as a matrix which may improve polymer bonding. An analysis of imbibition and fluid transport in these CNT yarns finds that while infiltration of low-viscosity liquids like water is thermodynamically-favored, it is limited when surpassing the threshold of capillary pore percolation. Nevertheless, infiltration in lower-density networks is not only observed, but exploited through the demonstration of dielectric heating in a microwave reactor, where we show fluid imbibed within the network can be boiled to induce swelling and exfoliation of CNT bundles (or conversely, this may be avoided) through optimization of the heating parameters and solvent.&#13;
&#13;
Next, with a firm understanding of the yarn networks’ properties and the impact of various processing effects, we demonstrate two techniques for producing polymer in-situ using dissolved monomers to side-step slow infiltration. The first technique is in-situ interfacial polymerization (ISIP), which is adapted to the yarns studied in this work to yield polyetherimide–CNT yarn composites. When applied to chemically-stretched yarn, specific strengths as high as 2.2 GPa/(g/cm³) are achieved in the flexible and durable yarn composite. We identify parameters and conditions that maximize tensile properties, discuss challenges associated with the rapid nature of the process, and conclude with the successful demonstration of a roll-to-roll fabrication scheme for producing arbitrary amounts of polymer.&#13;
&#13;
In our second technique, we produce extreme-performance polyimide and polybenzimidazole composites through green in-situ polymerizations (ISSP) in CNTs and macroscopic fiber networks. This approach utilizes superheated water and alcohol as a powerful medium to disperse monomers and initiate polymerization of high-performance coatings within a porous network. We demonstrate ISSP-CNT composites with variable coating morphologies (conformal, shish-kebab, etc.), in-air stability to over 500°C, and doubled specific stiffness and specific strength. Finally, we validate the multifunctional behavior of polyimide-CNT composites by showing a strong, flexible composite can store energy and behave as a free-standing battery electrode.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163008</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Census-Based Population Autonomy for Marine Robots: Theory and Experiments</title>
<link>https://hdl.handle.net/1721.1/163005</link>
<description>Census-Based Population Autonomy for Marine Robots: Theory and Experiments
Paine, Tyler
Collaborating groups of robots show promise due to their ability to complete missions more efficiently and with improved robustness, attributes that are particularly useful for systems operating in marine environments. A key issue is how to model, analyze, and design these multi-robot systems to realize the full benefits of collaboration even with limited communication, a challenging task since the domain of multi-robot autonomy encompasses both collective and individual behaviors. This thesis presents a layered model of multi-robot autonomy that uses the principle of census, or a weighted count of the inputs from neighbors, for collective decision-making coupled with multi-objective behavior optimization for individual decision-making. The census component is expressed as a nonlinear opinion dynamics model and the multi-objective behavior optimization is accomplished using interval programming. This model can be reduced to recover foundational algorithms in distributed optimization and control, while the full model enables new types of collective behaviors that are useful in real-world scenarios. To illustrate these points, a new method for distributed optimization of subgroup allocation is introduced where robots use a gradient descent algorithm to minimize portions of the cost functions that are locally known, while being influenced by the opinion states from neighbors to account for the unobservable costs. With this method the group can collectively use the information contained in the Hessian matrix of the total global cost. In addition, the critical issue of controlling subgroup size to minimize a collective cost signal is addressed, an initial step toward establishing a general definition of controllability of the nonlinear opinion dynamics model.
The utility of this model is experimentally validated in three categorically different experiments with fleets of autonomous surface vehicles: an adaptive sampling scenario, a high value unit protection scenario, and a competitive game of capture the flag.
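As a rough numerical illustration of the census principle, the collective decision layer can be sketched as an Euler integration of a nonlinear opinion dynamics update. This follows the general form of such models; the graph, gains, and bias values below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def opinion_step(z, A, dt=0.01, d=1.0, u=2.0, alpha=0.5, gamma=1.0, b=None):
    """One Euler step of a generic nonlinear opinion dynamics model:
        dz_i/dt = -d*z_i + u*tanh(alpha*z_i + gamma*sum_j A_ij*z_j + b_i)
    The census enters through the weighted neighbor sum A @ z."""
    if b is None:
        b = np.zeros_like(z)
    return z + dt * (-d * z + u * np.tanh(alpha * z + gamma * A @ z + b))

# Three robots on a line graph; a small bias on robot 0 breaks the deadlock.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
z = np.zeros(3)                      # neutral initial opinions
b = np.array([0.1, 0.0, 0.0])        # weak preference held by robot 0
for _ in range(2000):
    z = opinion_step(z, A, b=b)
```

Because the attention gain u exceeds the damping d, even the weak bias on one robot drives the whole group to a shared strong opinion, which is the deadlock-breaking behavior that makes such models attractive for collective decision-making.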
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163005</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates</title>
<link>https://hdl.handle.net/1721.1/163003</link>
<description>Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates
Dowding, Ian
Extreme strain rate deformations, above 10⁶ s⁻¹, are seen across many fields of science and engineering, from meteorite impacts and impact-induced crystallographic phase changes to high-speed machining and additive manufacturing. Despite the range of applications, many common high-rate impact experiments are intrinsically limited to strain rates of only 10⁴ s⁻¹ before high impact pressures complicate the material deformation with a superimposed state of shock. However, recent advances in optically driven microballistics using laser-induced projectile impact tests have provided a new quantitative look into the extreme mechanics of materials, at rates above 10⁶ s⁻¹ and well below the onset of shock effects.&#13;
As deformation strain rates increase, additional strengthening mechanisms in metals become available, leading to a change in the underlying physics of dislocation motion and an increase in strength. This thesis first explores the mechanical properties of pure metals when deformed at extreme strain rates − both in ambient conditions and at elevated temperatures. Using an array of complementary characterization methods, two independent measurements of strength, the dynamic strength and dynamic hardness, are assessed. As the temperature is increased from ambient, the strength and hardness of pure metals both increase appreciably. At these deformation rates, conventional thermal softening effects are now in competition with anti-thermal hardening that arises from phonon drag on the ballistic transport of dislocations in the crystal lattice. These effects are quantified systematically, and it is shown that the anomalous thermal strengthening observed is, thermodynamically and kinetically, the expected form of plasticity under these impact conditions.&#13;
Next, the limits of where this anomalous thermal strengthening occurs in metals are investigated. First, solute elements are added to pure Ni to evaluate how additional dislocation pinning mechanisms affect the strength at ambient and elevated temperatures during extreme strain rate deformations. The strength increase due to solute pinning of dislocations is additive to the other strengthening mechanisms, yet thermally controlled, which produces a transition from ballistic transport of dislocations to thermally activated strengthening at a critical concentration of solutes. Finally, the upper bound of temperature for dislocation phonon drag strengthening is assessed. While pure metals were shown to increase in strength with increasing temperature, this “hotter-is-stronger” trend breaks down as the temperature approaches the melting point of the metal. Using Sn, due to its low melting temperature, the breakdown from “hotter-is-stronger” to “hotter-is-softer” as the initial substrate temperature approaches the melting temperature is systematically explored.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163003</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation</title>
<link>https://hdl.handle.net/1721.1/162935</link>
<description>Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation
Nayak, Siddharth Nagar
Autonomous multi-agent systems must efficiently plan, explore, and navigate in dynamic and unknown environments, particularly for tasks like search &amp; rescue and environmental monitoring. These settings are often characterized by partial observability, limited communication, and dynamic objectives that require flexible coordination across agents. Designing autonomy that scales with team size and task complexity requires modular decision-making systems capable of high-level reasoning, information-driven exploration, and robust decentralized execution. This dissertation presents a hierarchical decision-making framework that addresses these challenges across three complementary levels of autonomy: high-level planning, adaptive exploration, and decentralized scalable navigation. At the highest level, LLaMAR (Language Model-based Long-Horizon Planner for Multi-Agent Robotics) leverages large language models (LLMs) to decompose long-horizon tasks into structured subtasks, enabling agents to adapt their strategies dynamically. However, the effective execution of these plans requires knowledge about the environment. Our mid-level exploration strategy, BaTMaN (Bandit-based Tracking and Monitoring and Navigation), systematically prioritizes waypoints that maximize information gain while balancing real-world constraints such as energy efficiency and sensor reliability. Finally, InforMARL provides scalable, decentralized navigation by leveraging graph-based local information aggregation, improving sample efficiency, and demonstrating transferability to unseen team sizes. This dissertation develops each of these modules to address a distinct level of the autonomy stack. LLaMAR functions as the high-level planner, translating natural language goals into structured sequences of subtasks and incorporating real-time corrections through a plan-act-correct-verify cycle. 
BaTMaN serves as the mid-level exploration engine, guiding sensor-equipped agents to prioritize informative regions based on uncertainty. InforMARL operates at the execution level, enabling decentralized agents to navigate through dynamic environments using graph-based local information aggregation and reactive control policies. Each module is independently deployable and optimized for different challenges: strategic reasoning, data-efficient monitoring, and scalable navigation, respectively. When combined, the three modules form a coherent autonomy stack for multi-agent systems operating under uncertainty.
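The bandit-driven waypoint prioritization described above can be loosely illustrated with a classical UCB1-style selection rule. This is a generic sketch under assumed Gaussian reward noise; BaTMaN's actual objective, constraints, and reward model are not reproduced here:

```python
import numpy as np

def ucb_select(counts, mean_rewards, t, c=1.0):
    """Pick the waypoint with the highest upper confidence bound
    (illustrative UCB1-style rule): exploit high observed gain while
    granting an exploration bonus to rarely visited waypoints."""
    bonus = c * np.sqrt(np.log(t) / np.maximum(counts, 1))
    scores = np.where(counts == 0, np.inf, mean_rewards + bonus)
    return int(np.argmax(scores))

# Toy run: 3 waypoints whose true (hidden) information gain is [0.2, 0.8, 0.5].
rng = np.random.default_rng(0)
true_gain = np.array([0.2, 0.8, 0.5])
counts = np.zeros(3)
means = np.zeros(3)
for t in range(1, 501):
    i = ucb_select(counts, means, t)
    r = rng.normal(true_gain[i], 0.1)         # noisy observed gain at waypoint i
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]    # incremental running mean
```

After a few hundred rounds the selection concentrates on the most informative waypoint while still occasionally revisiting the others, the exploration-exploitation balance that bandit formulations provide.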
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162935</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling End-to-End Sensitivity Analysis of Integrated Models</title>
<link>https://hdl.handle.net/1721.1/162911</link>
<description>Enabling End-to-End Sensitivity Analysis of Integrated Models
Davidson, Rosemary K.
As space-based precision-pointed telescopes continue to grow in scale and complexity, integrated models are increasingly relied upon to inform early design decisions and support system-level verification. When ground testing of full-system configurations is infeasible, integrated models, including structural-thermal-optical performance models, are essential for predicting performance and validating requirements across multidisciplinary, coupled domains. In early design phases, when uncertainty is high and design decisions have long-term implications for cost and schedule, it is especially important to understand which uncertain parameters most influence system performance. Global sensitivity analysis can help identify dominant uncertainty sources and inform decisions about model reduction, testing priorities, and resource allocation. However, the computational cost of applying global sensitivity analysis to integrated models often exceeds available resources. The presence of cross-disciplinary coupling between subsystem models further complicates analysis efforts. Coupled and dependent variables obscure how specific inputs influence system-level performance, limiting the ability to reduce model dimensionality or focus testing efforts on individual subsystems. There is a need for integrated modeling methodologies that enable tractable global sensitivity analysis of large, feedforward-coupled systems while preserving the accuracy needed to support early-phase design.&#13;
&#13;
This thesis develops both exact and approximate methods for performing global sensitivity analysis on integrated models. A set of exact propagation techniques is introduced to compute end-to-end sensitivity indices when specific structural conditions are met, including functional linearity, non-interacting transforms, and monotonic intermediate mappings. These methods are evaluated using a suite of benchmark test cases that isolate when the exact sensitivity analysis method is valid and when structural assumptions begin to break down. A modular modeling framework is developed to compute exact or approximate end-to-end sensitivity indices and to enable automated mapping between disciplinary models in the integrated chain. The approach is also applied to a representative linearized structural-thermal-optical performance model, demonstrating how end-to-end global sensitivity analysis can be performed efficiently across thermal, structural, and optical subsystems.&#13;
&#13;
To extend tractable sensitivity analysis to black-box models, several approximate strategies are introduced, including multifidelity surrogate modeling and statistical regression. These methods support both forward uncertainty propagation and variance-based global sensitivity analysis for structurally complex integrated models, without requiring full-system evaluation at every iteration. Together, the exact and approximate strategies developed in this work provide a foundation for scalable end-to-end global sensitivity analysis in early-phase design, where identifying influential parameters and constraining model complexity are essential for evaluating candidate architectures and informing mission decisions.
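As a minimal sketch of the variance-based global sensitivity analysis discussed above, first-order Sobol indices can be estimated with a pick-freeze Monte Carlo scheme. The test model and all names here are illustrative assumptions, not the exact or multifidelity machinery developed in the thesis:

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for f with independent U(0,1) inputs."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, dim))         # base sample
    B = rng.random((n, dim))         # independent resample
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # freeze coordinate i from A
        S[i] = np.mean(fA * (f(ABi) - fB)) / var
    return S

# Hypothetical additive model y = 4*x1 + 2*x2: analytically S1 = 0.8, S2 = 0.2.
S = first_order_sobol(lambda X: 4*X[:, 0] + 2*X[:, 1], dim=2, n=200_000, rng=0)
```

For the feedforward-coupled integrated models described above, each evaluation of `f` would stand in for a full system-model run, which is exactly why the exact and surrogate-based shortcuts developed in the thesis matter: the brute-force estimator needs thousands of evaluations per input dimension.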
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162911</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedded Computing for Wavefront Control on Future Space Telescopes</title>
<link>https://hdl.handle.net/1721.1/162906</link>
<description>Embedded Computing for Wavefront Control on Future Space Telescopes
Belsten, Nicholas
Future space telescopes will use adaptive optics to suppress starlight to directly image and characterize exoplanets. A measurement using this technique may be the first to detect extraterrestrial life in the universe. However, the real-time execution of adaptive optics control algorithms places unprecedented demands on spaceborne processors. Previous work has determined that processing limitations can degrade the achievable contrast and scientific yield of future exoplanet imaging missions. In this work, we quantify the relationship between adaptive optics processing needs and high contrast performance for the Habitable Worlds Observatory (HWO), a mission expected to launch in the 2040s and achieve the 10⁻¹⁰ contrast necessary to image Earth-like planets around Sun-like stars.&#13;
&#13;
We survey the current landscape of high-order wavefront sensing and control (HOWFSC) algorithms for a future mission like HWO. We parameterize the compute requirements of multiple algorithms through analyses of computational patterns, benchmarks, and problem scaling. In parallel, we assess the capabilities of current and emerging spaceborne processors. We integrate these findings to model processor requirements across several dimensions of telescope design, and we predict whether various processor choices can meet the computational demands of specific HWO configurations. To validate our models, we implement HOWFSC algorithms on representative embedded processors and compare measured performance to predictions. These implementations also reduce risk for spaceflight by increasing the technology readiness level (TRL) of the algorithm–processor pairing to TRL 4.&#13;
&#13;
Given the significant uncertainty in HWO’s eventual design, we extend our deterministic models using Monte Carlo methods to evaluate system performance under uncertainty. We identify key sources of uncertainty and estimate the achievable contrast across a range of system configurations. Our results show that offloading computation to the ground is an important architectural option for most HWO designs. Even under optimistic assumptions, current space processors are insufficient to support the full range of HWO configurations. However, newly developed efficient algorithms substantially reduce the computational burden. Overall, we estimate that current technology has only a 40% probability of supporting HWO’s mission goals without additional architectural innovations. We conclude by recommending combinations of onboard computing, ground offloading, and optical design constraints to help close this technology gap as the mission design matures. In particular, we find that telescope stability and ground-in-the-loop performance are primary drivers of contrast performance, while algorithmic advances such as AD-EFC and onboard compute approaching ground-based GPU performance also provide significant benefits.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162906</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI</title>
<link>https://hdl.handle.net/1721.1/162738</link>
<description>Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI
Ren, Zhichu
The discovery of novel energy catalysts is a critical challenge in the field of materials science. Traditional methods for materials discovery are labor-intensive and time-consuming, hindering the rapid development of new catalysts. To address this issue, we introduce a comprehensive approach that integrates automation, active learning, and artificial intelligence (AI) to accelerate the discovery process.&#13;
&#13;
Our approach introduces the Copilot for Real-world Experimental Scientist (CRESt) system, which combines a large multimodal model (LMM) with an active learning-guided robotic system. CRESt streamlines the workflow of composition selection, high-throughput materials synthesis, electrochemical screening and characterization for the optimization of high-entropy alloy catalysts. The system allows researchers, regardless of their programming skills, to interact with the robotic platform using voice commands, making it highly accessible and user-friendly.&#13;
&#13;
We demonstrate the effectiveness of our approach by experimentally exploring over 700 chemistries and 1300 samples. The optimized 8-dimensional alloy (Pd-Pt-Cu-Au-Ir-Ce-Nb-Cr) achieved approximately 10 times the cost-specific performance of commercial catalysts for the direct formate fuel cell. This breakthrough highlights the potential of our approach to accelerate the discovery of novel energy catalysts across various domains.&#13;
&#13;
Furthermore, we discuss the challenges and considerations associated with implementing active learning in real-world experiments. We provide guidance on addressing model-centric and data-centric issues, such as model customization and data irreproducibility, to ensure the successful application of active learning in materials research projects.&#13;
&#13;
Looking ahead, we explore the role of human experimentalists in the era of AI-driven discovery. While AI and automation are poised to transform many aspects of experimental research, we argue that human experimentalists remain irreplaceable for now. Our ability to exercise critical thinking and engage in complex real-world interactions sets us apart from abiotic intelligence. However, as AI becomes more deeply integrated into research practices, the experimental landscape is bound to undergo significant changes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162738</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Atom-Light Scattering in the Quantum Regime</title>
<link>https://hdl.handle.net/1721.1/162699</link>
<description>Exploring Atom-Light Scattering in the Quantum Regime
Lu, Yu-Kun
Ultracold atoms and molecules are promising platforms for exploring modern quantum science and technologies, such as quantum simulation and quantum computation. Here, light is the essential tool to manipulate and probe these systems. However, unlike in condensed matter systems where scattering experiments are routinely employed to characterize materials, ultracold atom and molecule systems are usually probed by imaging and not by light scattering.&#13;
&#13;
In this thesis, I present a systematic investigation of atom-light scattering under various scenarios. When atoms are confined in optical lattices, light scattering can be used to explore single-body, two-body, and many-body physics. Focusing on single-atom physics, I study coherent and incoherent light scattering of single-atom wavepackets and the relation to which-way information. For two atoms tightly localized to a 20 nm size on the same lattice site, I demonstrate the strong electric dipolar interactions between them, which result in large momentum transfers and a spectroscopic shift of the resonance. On the many-body side, I show how light scattering can reveal distinct quantum phases at thermal equilibrium or defect generation in dynamical ramps. For atoms released from the optical lattice, I demonstrate that light scattering can read out the quantum statistical information and initial density correlations hidden in the interference of atomic wavepackets.&#13;
&#13;
When atoms move freely in the form of degenerate quantum gases, I investigate how quantum statistics, phase transitions, and interactions modify the atomic pair correlation and consequently the light scattering. For a thermal gas at high density, I demonstrate nonlinear optical effects from high optical density and a high scattering rate. &#13;
&#13;
Finally, I describe our recent efforts to manipulate atoms at subwavelength length scales. I discuss our attempts with optical tweezers and with optical lattices, and the prospect of observing magnetic pairing between two distant layers under attractive dipolar interactions.&#13;
&#13;
The techniques presented in this thesis should be of general use for pursuing quantum science and technology with ultracold atoms and molecules.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162699</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Initial segments in ordinal recursion theory.</title>
<link>https://hdl.handle.net/1721.1/162619</link>
<description>Initial segments in ordinal recursion theory.
Dorer, David John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1979; Vita.; Bibliography: leaf 49.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162619</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models for investigating the unreliability of freight shipments by rail.</title>
<link>https://hdl.handle.net/1721.1/162615</link>
<description>Models for investigating the unreliability of freight shipments by rail.
Folk, Joseph Frederick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Vita.; Bibliography: leaves 279-284.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162615</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boiling heat transfer in rotating channels with reference to gas turbine blade cooling</title>
<link>https://hdl.handle.net/1721.1/162611</link>
<description>Boiling heat transfer in rotating channels with reference to gas turbine blade cooling
Mudawar, Issam Abdallah.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162611</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lentiviral Vector Engineering for High-Throughput Immune Profiling</title>
<link>https://hdl.handle.net/1721.1/162532</link>
<description>Lentiviral Vector Engineering for High-Throughput Immune Profiling
Dobson, Connor S.
The ability to decipher immune recognition is critical to understanding a broad range of diseases, including cancer, infection, and autoimmunity, as well as for the development of countermeasures such as vaccines and immunotherapy. Efforts to do so have been hampered by a lack of technologies that are capable of scaling to simultaneously capture the complexity of the adaptive immune repertoire and the landscape of potential antigens. Each individual’s immune repertoire consists of tens of millions of unique receptors that are responsible for surveying the trillions of possible antigens that might be encountered in one’s lifetime. As a result, there has been intense focus on the development of tools for screening large antigen sets or large collections of potential immune receptors, but most of these capture complexity on only one side of the interaction. We have therefore used synthetic virology approaches to engineer a “lentivirus surface display” platform capable of screening complex antigen mixtures against the full complexity of the adaptive immune repertoire. In Chapter 2 of this thesis, we describe our molecular engineering approaches that enabled the development of VSVGmut, an efficient and modular targeted pseudotyping strategy. In Chapter 3, we leverage VSVGmut and further advances to enable one-pot library-on-library antigen identification screens for T cells by displaying antigens on the surface of lentiviruses and encoding their identity in the viral genome. Antigen-specific viral infection of cells allows readout of both antigen and receptor identities via single-cell sequencing. In Chapters 4 and 5, we extend our approaches to B cells and present preliminary data for applications in both cellular and humoral profiling. Taken together, our approaches represent a new class of tools for identifying the molecular targets of the adaptive immune response at scale.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162532</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combustion Physics and Inverse Modeling of Energetic Materials</title>
<link>https://hdl.handle.net/1721.1/162511</link>
<description>Combustion Physics and Inverse Modeling of Energetic Materials
Kim, Suyong
Energetic material combustion involves intricate multi-scale and multi-phase dynamics, where the interplay of chemical reaction and transport processes results in complex wave patterns across a wide range of length scales from nano to millimeter. Our limited fundamental understanding of these combustion processes poses a challenge to designing and optimizing combustion properties, leading to significant reliance on empirical knowledge. Deeper comprehension can be achieved by linking fundamental aspects of reaction and transport to combustion dynamics. However, there are very limited diagnostic tools available to quantify material properties and chemical kinetics for heterogeneous materials under combustion, which hinders the quantitative analysis of combustion waves. Furthermore, combustion wave dynamics and flame structures in modern nanocomposite energetic materials have not been fully resolved. This lack of breadth in modeling techniques and experimental characterization has prevented quantitative analysis of combustion wave dynamics for energetic materials.&#13;
This thesis aims to establish theories for combustion waves in energetic materials by correlating their intrinsic chemical reaction and transport properties with wave dynamics. To achieve this goal, two major steps are involved. First, we propose a novel inverse modeling approach to infer material properties and chemical kinetics using PDE-constrained optimization, which allows for deciphering the reaction-transport coupling from observable dynamics in currently available combustion diagnostic tools. We further discuss training challenges of neural differential equations with data subject to scale separation and propose mitigation strategies that enable learning stiff dynamical systems. Second, we investigate flame structures and dynamics in nanocomposite energetic materials at length scales ranging from micron to sub-millimeter using high-speed microscopic imaging techniques. Two distinct combustion wave patterns are characterized by flame dynamics and stability. Based on inverse modeling and microscopic observation, we finally construct two theories of combustion wave propagation and wave stability by performing scaling analysis on wave dynamics in terms of mass and thermal transport and chemical reaction. A systematic view of energetic material combustion allows for deeper comprehension of how multi-scale dynamics of reaction and transport evolve in macro-scale combustion waves, potentially leading to the development of predictive models for the intricate heterogeneous combustion dynamics of energetic materials.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162511</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for small-molecule transparent semiconductors</title>
<link>https://hdl.handle.net/1721.1/162510</link>
<description>Computational methods for small-molecule transparent semiconductors
Carter, Ki-Jana
Solar energy has enormous potential to meet global energy demand in a renewable and environmentally sustainable manner. Although silicon-based photovoltaic (PV) devices have become significantly more affordable and accessible in recent decades, there is a need to develop alternative PV technologies which can be deployed more widely and cheaply. Visibly transparent PV devices based on organic semiconductors are well-suited to this role due to their ability to be installed on windows and building facades, their mechanical flexibility, and their high degree of tunability. However, in order for transparent PV to become commercially viable, further research is needed to shed light on the systematic tuning of the optical properties of molecular materials with visible transparency. This work applies computational tools — namely density functional theory (DFT) and graph neural networks — to gain a deeper understanding of how molecular structure impacts macroscopic optical properties and suggest directions for future study.&#13;
&#13;
In this work we employ linear-response time-dependent DFT with optimally tuned and screened range-separated hybrid functionals in order to compute accurate photoabsorption spectra with relatively low computational cost. Additionally, we utilize molecular graph neural networks as a means to leverage quantum mechanical datasets to accelerate the materials discovery process. These methods are combined to make progress on the optical design of organic semiconductors. &#13;
&#13;
This thesis document is organized as follows. Chapter 1 introduces transparent photovoltaics and the associated materials design considerations. Chapter 2 summarizes the computational methods employed in this work. Chapter 3 describes the first-principles modeling of small-molecule transparent absorbers using perylene diimide derivatives as a case study. Chapter 4 studies principles underlying the design of molecular graph neural networks. Chapter 5 applies these modeling techniques to construct a spectral dataset and train a scalable spectral model; screen a large dataset of organic molecules; identify physical trends and structure-property relationships; and suggest promising candidate materials for transparent photovoltaic applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162510</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topology optimization of buildings-scale structures with material and fabrication constraints</title>
<link>https://hdl.handle.net/1721.1/162452</link>
<description>Topology optimization of buildings-scale structures with material and fabrication constraints
Jewett, Jackson L.
The construction industry releases about 10% of anthropogenic carbon dioxide every year, primarily due to the manufacturing of construction materials. Structural optimization has been proposed as a means of improving material efficiency in buildings, and thus reducing material demand for construction projects. Topology optimization has great potential for materially-efficient design because it is a free-form optimization method, allowing for performant geometries to be computationally derived with minimal input from the user. However, topology optimization algorithms must be modified to account for the specific fabrication and material constraints that are inherent in construction practices. This thesis shares a collection of research projects related to the use of topology optimization for large-scale structures relevant to the construction industry. First, a novel algorithm is proposed for large-scale 3D printed structures. The work focuses on the limitations presented by the printing nozzle, and the anisotropies that arise in 3D printed systems. Second, topology optimization is modified for the design of structural glass. Several algorithms are developed, which are then used to design, fabricate, and test physical specimens to evaluate their real-world performance. Third, a framework is presented to design low-weight reinforced concrete structures. This system is used to design, build, and test reinforced concrete beams, so their performance can be compared to conventionally designed specimens. This thesis considers the diverse ways that topology optimization could be applied to design large-scale structures of various construction materials. The results demonstrate the types of computational techniques that can be used for generative design in the built environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162452</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Design in Operations</title>
<link>https://hdl.handle.net/1721.1/162433</link>
<description>Experimental Design in Operations
Wang, Chonghuan
Experimental design has been fundamental to many fields, yet applications in operations research (OR) and operations management (OM) bring in complexities, as well as opportunities, that extend beyond classical statistical goals. This thesis discusses why OR/OM should care about experimentation and its design, where the challenges lie in operational and service systems for classical experimental design, and why OR/OM researchers are uniquely suited to address the challenges. More specifically, this thesis advances experimental design by introducing more operational perspectives, addressing two core challenges: incorporating operational objectives and leveraging operational modeling to enhance experimentation.&#13;
&#13;
First, traditional experimental approaches, such as A/B testing, primarily aim at statistical efficiency (e.g., reducing variance or bias). However, OR/OM applications frequently involve additional operational considerations, such as welfare preservation, revenue optimization, risk control, and non-stationarity. We investigate these settings in Chapters 2–4, developing frameworks for multi-objective experimental design. In Chapter 2, we introduce a minimax multi-objective optimization formulation to balance statistical power and welfare loss, derive necessary and sufficient conditions for Pareto optimal solutions, and propose robust multi-armed bandit designs. Chapter 3 extends this approach to pricing experiments, exploring trade-offs between estimating causal effects (price elasticity), maximizing revenue, and controlling tail risks, along with robust statistical inference methods. Chapter 4 addresses non-stationary experimental environments where treatment effects dynamically evolve, designing experiments that optimally balance accurate estimation of changing effects and minimization of welfare loss.&#13;
&#13;
Furthermore, we highlight the substantial value of operational models—particularly Markov Decision Processes (MDPs)—in experimental design. In Chapter 5, we address the challenge of estimating long-term cumulative outcomes, such as customer lifetime value, using short-term experimental data. We develop novel inference methods grounded in MDPs, which effectively bridge short-term data to long-term outcomes. Moreover, by recognizing that many real-world treatments tend to be localized for practical efficiency, we introduce novel estimators that leverage the localized structures to achieve substantial variance reductions.&#13;
&#13;
In summary, this thesis underscores how OR/OM contexts uniquely enrich experimental design, offering robust theoretical frameworks and practical solutions to operational challenges, ultimately broadening both the theoretical foundations and the practical impacts of experimentation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162433</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Junctions and Strands: Breaking Property Tradeoffs in Polymer Networks and Composite Polymer Electrolytes</title>
<link>https://hdl.handle.net/1721.1/162421</link>
<description>Junctions and Strands: Breaking Property Tradeoffs in Polymer Networks and Composite Polymer Electrolytes
Herzog-Arbeitman, Abraham
This dissertation first examines the mechanics of polymer networks, specifically material toughness and the nature of material fracture. Polymer networks, which include tire rubber, windmill turbine blades, tissue engineering scaffolds, polymer electrolytes (vide infra) and many other materials, possess a useful lifetime typically limited by a fracture event. Thus, methods of controlling toughness (the resistance of a material to tearing) without compromising composition or other properties would dramatically affect waste generation and energy use in the myriad applications in which polymer networks are employed. Toughness in rubbery polymer networks derives from the length and density of the polymer strands; thus, it is generally inversely related to stiffness, as captured in the classic Lake-Thomas theory. This inverse relationship has been perturbed through incorporation of force-responsive molecules (mechanophores) that may either toughen or weaken the material depending on network construction and topology. The first part of this thesis identifies a new class of mechanophores called tetrafunctional cyclobutanes (TCBs), which can be used to either toughen or weaken a network of single topology without substantial change of network composition, even in dilute gels which are difficult to toughen by other methods. TCBs are then used to identify the mechanisms of mechanophore toughening or weakening in other networks, through a proposed topological metric called network strand continuity (NSC). We show that TCB substituents control the regio- and chemo-selectivity of the cyclobutane core under stress, and this molecular-level selectivity is responsible for network toughening or weakening on the macroscale. These effects can be predicted based on knowledge of activation energetics of the junction guided by NSC.
Subsequently, effects of other network structure parameters on the magnitude of toughening or weakening are considered and the molecular design of second-generation highly active TCBs is described. The second part of this dissertation concerns the design of microporous polymer electrolytes and the applications of their composites and gels in batteries. Polymer electrolytes are a highly anticipated alternative to the liquid electrolytes currently in use, which are toxic, flammable, and incompatible with next-generation battery chemistries. Previous polymer electrolytes exhibit inadequate conductivity and a severe tradeoff between conductivity and mechanical properties. These challenges are accentuated in single-ion conductors, which are theorized to have the strongest rate capability. A new class of single-ion conducting polymer electrolytes that mimics the conduction mechanism of ceramic electrolytes to achieve strong mechanical properties, high conductivity, processability, stability, and recyclability is described. These polymers constitute the first regular microporous polyanions, and the most dissociative microporous polyanions to date. These polymers, alongside other rigid (but not microporous) polysulfonimides, enable strong conductivity performance when coupled with a suitable dopant (here succinonitrile) in low weight fractions. Flexible, low-molecular-weight control polymer analogs show inferior mechanical and conductivity properties. In fact, microporous composites outperform even liquid analogs. These composites show best-in-class combinations of mechanical and conductivity properties, and can even conduct divalent cations like Zn(II), a challenging but energy-dense battery metal. Simulations show that polymer-succinonitrile interactions enable fast conduction at the pore edge, which results in synergistic behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162421</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abiotic and Biotic Polymer Degradation to Inform Sustainable Design</title>
<link>https://hdl.handle.net/1721.1/162414</link>
<description>Abiotic and Biotic Polymer Degradation to Inform Sustainable Design
Tantawi, Omar
As global plastic production continues to rise, understanding environmental processes governing plastic degradation is crucial to inform the sustainable design of polymers. This thesis is structured into three chapters, each addressing critical aspects of polymer degradation:&#13;
In the first chapter, I develop and apply a sequential abiotic (photodegradation and hydrolysis) and biotic degradation test to a diverse suite of 18 polymers, including novel polyhydroxyalkanoate polyesters, commercially available bio-based polymers (e.g., polylactic acid, poly-3-hydroxybutyrate), and conventional fossil-derived polymers (e.g., polypropylene, polyethylene terephthalate). Results illustrate that current biodegradation standard methods relying only on mineralization underestimate polymer degradation by up to two-fold. Simulated sunlight notably enhanced polymer degradation by mobilizing dissolved organic carbon (DOC), which proved highly biodegradable in the marine environment. Chemical structural differences were clearly linked to degradation behaviors, emphasizing the utility of the developed workflow for rapidly identifying environmentally relevant degradation mechanisms, which can inform structure-property relationships for future polymer designs.&#13;
In the second chapter, I delve deeper into characterizing polymer-derived dissolved degradation products. Conducting Mass Remainder Analysis (MARA) using non-target liquid chromatography–high-resolution mass spectrometry (LC-HRMS) data, we systematically identified oligomeric degradation products and homologous series of polyamide-6 (PA6), polycaprolactone (PCL), and polylactic acid (PLA). Complementary experimental approaches (retention-time shifts across varied mobile phase pH, fragmentation analysis, and spectral matching) were essential to improve structure elucidation and determine acid-base properties (pKa) and hydrophobicity (logKow and logD). The experimental findings emphasized large deviations of oligomer hydrophobicity from computational predictions, underscoring the necessity for oligomer-specific experimental data to enhance environmental fate modeling and risk assessment accuracy.&#13;
In the third chapter, I investigate the fate of polymer-derived dissolved organic carbon (p-DOC) from PLA, PCL and PA6, focusing specifically on oligomer chemistry. Using natural marine microbial communities, PLA- and PCL-derived DOC demonstrated rapid biodegradation (82-85% within six days), while PA6-derived DOC exhibited resistance. Detailed analysis using high-resolution mass spectrometry and MARA revealed significant chemical structure dependence in biodegradation rates, with rapid degradation of aliphatic ester-containing cyclic and linear oligomers. Larger cyclic oligomers degraded faster, while short linear oligomers showed transient accumulation followed by degradation. PA6 oligomers exhibited limited biodegradability, with cyclic oligomers showing minimal degradation. The results emphasize the critical influence of oligomer chemistry and microbial enzymatic specificity, providing essential insights for designing sustainable polymers compatible with marine environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162414</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperspectral Remote Sensing for UXO Detection and Damage Assessment on Airfield Pavements</title>
<link>https://hdl.handle.net/1721.1/162411</link>
<description>Hyperspectral Remote Sensing for UXO Detection and Damage Assessment on Airfield Pavements
Pietersen, Randall
If an airfield being operated by the U.S. Air Force is attacked, the current method for assessing its condition is a slow visual and manual inspection process, exposing personnel to dangerous conditions and delaying repair operations. Developing a fully autonomous remote assessment solution would improve the speed and safety of this critical task, but remains an unsolved problem despite continued advances in drone technology, deep learning, and computer vision. This research explores using near-surface hyperspectral sensors as an alternative to red, green, blue (RGB) digital cameras, in hopes of improving detection precision and accuracy for airfield assessment. However, even with modern hyperspectral sensors, the benefit of increasing spectral image resolution comes at a cost, creating additional complexity, uncertainty, and sensitivity in the acquisition, data correction, and downstream detection processes. &#13;
&#13;
This work presents a series of tests, each designed to better understand and refine a full hyperspectral image detection sequence, starting with sensor selection and raw data acquisition, proceeding to radiometric correction, and culminating in image recognition by means of supervised deep learning (DL). Regarding sensor selection and data acquisition, these findings indicate that for many applications of computer vision, using a hyperspectral camera with high spectral resolution is unnecessary. It is more beneficial to select a camera with snapshot imaging that instead maximizes spectral range or spatial resolution. Radiometric correction is then explored, and experiments demonstrate that correction makes machine learning classification models less sensitive to changes in scene illumination, thus improving overall image recognition performance. Finally, deep learning models for image recognition are tested and a new method for generating synthetic hyperspectral data is developed and shown to be useful for estimating hyperspectral model performance on larger datasets, when real data are limited. Overall, the findings presented in this thesis suggest that by refining the methods used for data acquisition, correction, and detection, hyperspectral imaging improves image recognition when compared to traditional RGB cameras. This applies not only for airfield damage assessment but extends to other real-world applications requiring computer vision and scene understanding.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162411</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Measurement Approaches to the Study of Secondary Organic Aerosol</title>
<link>https://hdl.handle.net/1721.1/162410</link>
<description>New Measurement Approaches to the Study of Secondary Organic Aerosol
Helstrom, Erik
Aerosol particles constitute a class of atmospheric pollutants that are detrimental to human health and influence the Earth’s climate. A significant fraction of aerosol mass is composed of organic material, produced by photochemical reactions of organic trace gases which form secondary organic aerosol (SOA). However, the large diversity of volatile organic compounds (VOCs) makes it challenging to identify all of the chemical reactions contributing to SOA formation. In addition to this chemical complexity, our ability to identify and measure all of the relevant organic compounds, especially species present in aerosol particles, is limited by challenges in efficiently sampling and detecting the various classes of molecules formed. Improving knowledge of the chemical behavior of aerosol will improve our ability to predict how changing emissions and chemical conditions will impact the formation and properties of particulate matter in the future.&#13;
This thesis will explore recent improvements in instrumentation and measurement techniques and apply them to laboratory studies of organic carbon and SOA. First, we adapt a technique for measuring total suspended carbon to laboratory chamber experiments, converting organic compounds with high temperature catalysis to carbon dioxide, which is then monitored in real time. This allows for a “top-down” constraint on the overall concentration of all organic species (including SOA) as experiments proceed, as some lower volatility products are lost to the surfaces of the laboratory chamber. Second, we compare the measurements of SOA from three chemical ionization mass spectrometers using different ionization and desorption methods to detect particle-phase species. Clear differences emerge in the detected formulas across instruments, highlighting variations in chemical sensitivities to different classes of compounds and the influence of fragmentation on the detected products. Finally, we explore how changing peroxy radical fate influences SOA formation by monitoring SOA composition with extractive electrospray ionization (EESI) mass spectrometry. Differences in particle-phase products, particularly nitrates, hydroperoxides, and dimers, make the dependence of initial SOA composition on peroxy radical pathways clear. Over time, we observe a convergence of SOA spectra formed under different peroxy radical regimes, suggesting the influence of secondary products and particle-phase chemistry, though some differences persist from the initial gas-phase peroxy radical fate. Overall, this thesis demonstrates improved tools for constraining and investigating VOC oxidation pathways leading to particle-phase organic species.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162410</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for dissecting multicellular mechanisms of complex diseases</title>
<link>https://hdl.handle.net/1721.1/162335</link>
<description>Computational methods for dissecting multicellular mechanisms of complex diseases
Mitchel, Jonathan
Single-cell genomics technologies have enabled unbiased characterization of cell types and cellular states. However, the high-dimensional nature of this data necessitates computational and statistical methods to uncover the biological processes that shape it. In my thesis research, I developed three computational methods to explore genetic regulatory mechanisms underlying common diseases and the resulting multicellular patterns of dysfunction. In the first project, I developed a method called scITD to investigate how cellular processes across distinct cell types coordinate in disease contexts. scITD identifies sets of genes in one or more cell types that co-vary together across biological samples. Through the application of this tool to various immune-cell datasets, we uncovered highly reproducible gene expression patterns associated with autoimmune patient phenotypes. In the second project, I characterized technical artifacts prevalent in imaging-based spatial transcriptomics data. These artifacts arise from the misassignment of transcript molecules to incorrect cells. I further demonstrated how these artifacts confound downstream analyses, including differential expression and cell-cell interaction inference. To address this, I jointly developed a correction method that mitigates these artifacts, thereby uncovering novel biological insights in cancer datasets. In the third project, I introduced a computational method to unravel the mechanisms of genetic variants identified from genome-wide association study loci. This method tests whether these same genetic variants also underlie changes to gene expression in specific cell types or states. Applying this tool to autoimmune and neurodegenerative datasets uncovered new SNP-gene-phenotype links and localized their effects to specific cell populations, helping to refine our understanding of these pathologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162335</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Packaging and Integration Solutions for Next-Generation Photonic Systems</title>
<link>https://hdl.handle.net/1721.1/162334</link>
<description>Scalable Packaging and Integration Solutions for Next-Generation Photonic Systems
Ranno, Luigi
The ever-increasing demand for faster and more efficient computation has propelled the rapid growth of integrated photonics, with commercial products starting to reach global markets in recent years. Nevertheless, integrated photonics still lacks the scale required to meet market demands and falls short of the performance targets necessary for many critical applications. Innovative solutions are imperative if photonics is to drive technological advancement and become a ubiquitous part of next-generation systems, rather than being confined to niche or high-end applications.&#13;
Among the key bottlenecks is photonics packaging, which refers to the challenge of electrically, optically, and thermally interfacing with a photonic integrated circuit (PIC). Current packaging solutions often impose significant design tradeoffs, contributing to industry fragmentation and high costs. Two-photon lithography (TPL), a high-resolution 3D manufacturing technique, has emerged as a promising enabler of robust and efficient optical interconnects. However, existing research has focused heavily on performance, often relying on additional chip processing steps (e.g., cladding removal) that hinder scalability. Moreover, prior work largely restricts itself to parameterized geometries, such as quadratic curves or spherical sections, that underutilize the true design freedom of TPL. My work addresses both of these limitations. I developed a freeform, facet-attached micro-reflector solution that is fully compatible with standard foundry processes, adaptable to challenging coupling scenarios, and computationally efficient to design. This coupling solution demonstrates all the properties desired in an ideal optical interface: low insertion loss (~0.6 dB), wide bandwidth (&gt;300 nm), foundry compatibility, and geometric universality across PIC platforms.&#13;
Another major challenge facing the photonics industry is the lack of critical functionalities within current foundry processes due to limited material availability. Significant gains in performance and capability can be realized by integrating new materials on-chip, but doing so while maintaining CMOS-foundry compatibility remains a formidable task. To address this, I helped develop a novel photonics platform enabling substrate-inverted multi-material integration. This platform supports seamless integration of diverse materials while leveraging existing PIC process stacks, including metallization layers, unlocking new classes of high-performance devices. Building on this idea, I further demonstrated how material integration can directly enable new applications. Specifically, I developed a selective and ultra-sensitive environmental lead (Pb²⁺) sensor, based on a crown ether functionalization layer. This device showcases the potential of hybrid material platforms to deliver practical, field-relevant solutions in environmental monitoring and beyond.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162334</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic and Post-Synthetic Methods towards Fine Tuning&#13;
the Chemical and Physical Properties of Metal-Organic Frameworks</title>
<link>https://hdl.handle.net/1721.1/162333</link>
<description>Synthetic and Post-Synthetic Methods towards Fine Tuning&#13;
the Chemical and Physical Properties of Metal-Organic Frameworks
Iliescu, Andrei
This thesis explores synthetic and post-synthetic strategies for tailoring the chemical and physical properties of metal-organic frameworks (MOFs), with a particular emphasis on modulating redox activity, framework composition, and ionic conductivity. The first part of the work focuses on leveraging MOF-embedded polynuclear metal clusters for multi-electron redox chemistry. A square-planar tetramanganese cluster was shown to reversibly interconvert between molecular oxygen and metal-oxo species via a four-electron pathway. This reactivity was then investigated by varying the identity and redox potential of the metal centers within the tetrametal cluster. The Fe(II) and Co(II) analogs reveal distinct metal-specific behavior and provide insight into the tunability of redox-active SBUs within MOFs. Next, post-synthetic cation exchange was employed to access a previously unreported Zn-based MOF, ZnZnBTT, which exhibits significant Zn-ion conductivity due to mobile charge-balancing cations. This material demonstrates the potential of MOFs in next-generation solid-state battery technologies. Finally, the impact of linker electron donicity on cluster structure and reactivity was explored using a new mixed-azolate ligand. Four isostructural MOFs incorporating Co, Ni, Cu, and Cd were synthesized, revealing that the electron-rich pyrazolate groups modulate cluster composition and redox behavior. Notably, CoBTDP exhibits O₂ reactivity, unlike its all-tetrazolate counterpart, underscoring the role of linker design in tuning MOF function. Together, these studies demonstrate how careful control over MOF synthesis and post-synthetic modification can be used to fine-tune redox behavior, framework composition, and ion transport, providing new avenues for the design of functional porous materials.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162333</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Metal Complexes for Optical Read-out of Magnetic Fields</title>
<link>https://hdl.handle.net/1721.1/162332</link>
<description>Leveraging Metal Complexes for Optical Read-out of Magnetic Fields
Yi, Seungyeon
Optical detection of magnetic phenomena offers a compelling pathway toward the development of highly sensitive and versatile molecular sensors. This thesis investigates the design of metal complexes tailored for magnetic field read-out through light–matter interactions, focusing on two strategies. The first section explores magnetochiral dichroism (MChD), an optical effect that emerges from the interplay between molecular chirality and magnetism. By systematically varying the metal centers within a series of chiral lanthanide complexes—specifically, Tb³⁺ and Dy³⁺—we examine how differences in magnetic moment modulate the MChD response. This comparative study reveals fundamental chemical design principles for enhancing MChD intensity and deepens our understanding of how structural and electronic factors jointly shape this directional optical effect. The second section addresses the challenge of engineering optically addressable molecular qubits based on Ni²⁺ complexes. Realizing effective spin-state read-out in these systems requires precise control over both magnetic and photophysical properties. To this end, we investigate ligand modification strategies aimed at enhancing luminescence while preserving an S = 1 ground state suitable for quantum applications. Collectively, we hope these studies contribute to a better understanding of the design space for spin–photon coupled molecular systems, offering new tools for magneto-optical sensing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162332</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Plasticity and Temperature in High Velocity Microparticle Impacts</title>
<link>https://hdl.handle.net/1721.1/162331</link>
<description>Quantifying Plasticity and Temperature in High Velocity Microparticle Impacts
Lucas, Tyler J.
The Laser Induced Particle Impact Test, or LIPIT, is a benchtop experimental setup that enables in-situ observation of micron-scale particles impacting targets at velocities of ~10-1500 m/s. Through a combination of the high velocity and small length-scale of the impact, strain rates exceeding 10⁷ /s can be achieved while maintaining a subsonic plastic wave, preventing formation of strong shockwaves and hydrodynamic behavior. The LIPIT has been effectively applied to study phenomena in mechanical behavior, cold spray, and astronomical impacts, all of which will be further investigated in this work. This thesis combines LIPIT experiments and finite element modeling to explore the dynamic behavior of pure metals in the unique regime of strain, strain rate, and pressure achieved in high velocity impact conditions. First, the effect of material microstructure on the mechanical behavior of copper at high strain rates is explored to improve the capability of constitutive strength models in accurately representing experiments. Next, a method is introduced to measure the dynamic yield strength of ductile microparticles, effectively removing the need for the tacit assumption that the properties of bulk materials can be imposed on powders despite differences in processing. The understanding of microstructure and particle behavior is then combined to study the influence of material microstructure on the solid-state bonding of copper particles to copper substrates of different temper. Finally, this work applies the new understanding of plasticity and dynamic modeling in high strain rate conditions to quantitatively study the behavior of metals with a phase transition in the absence of strong shockwaves.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162331</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endothelial cell plasticity as a marker of vascular disease and&#13;
predictor of adverse outcomes to stress</title>
<link>https://hdl.handle.net/1721.1/162330</link>
<description>Endothelial cell plasticity as a marker of vascular disease and&#13;
predictor of adverse outcomes to stress
Salazar Martín, Antonio Gabino
Endothelial cells (ECs) dynamically sense and adapt to their local biomechanical and biochemical environments, a process crucial for maintaining vascular homeostasis. Loss of this plasticity is implicated in vascular diseases, where endothelial dysfunction and maladaptive responses exacerbate disease progression and limit therapeutic efficacy. We investigated the role of endothelial cell plasticity under pathophysiological conditions and its impact on therapeutic interventions: mechanical, pharmacologic, and genetic. Specifically, Aim 1 characterizes the modulation of EC plasticity by shear stress, revealing that flow patterns drive distinct transcriptional signatures and subpopulation behaviors, as demonstrated through single-cell transcriptomics in human aortic endothelial cells. Aim 2 examines the interplay between EC dynamism and antiproliferative drugs, in particular rapamycin and paclitaxel, the agents released from drug-eluting stents, showing that biomechanical cues from flow dominate EC responses, potentially limiting drug efficacy in regions of disturbed flow. Aim 3 extends the investigation beyond overwhelming cells with pharmacologic dosing to the domain of controlled genetic modification, which aligns with the direction of modern therapeutics and adds a further dimension to the perspective of endothelial biology. We sought to discern whether genetically modified cells maintain their characteristic "endothelial" profile or whether the interplay among genomic alterations, transcriptional and proteomic shifts, and environmental cues leads to a state that challenges the hypothesis that ECs remain plastic until they become committed to flow. The integration of single-cell transcriptomics and in vivo models provides novel insights into the heterogeneity of endothelial responses and underscores the importance of considering biomechanical and biological factors in developing targeted therapeutic strategies for vascular diseases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162330</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Order Under Pressure: Structural and Magnetic Characterization at Extreme Stresses</title>
<link>https://hdl.handle.net/1721.1/162326</link>
<description>Order Under Pressure: Structural and Magnetic Characterization at Extreme Stresses
Riesel, Eric Alan
Mechanical stress is an exquisitely versatile tool for controlling chemical bonding. This multi-dimensional synthetic lever tunes the electronic structure of elements and changes the way that atoms arrange and coordinate to one another. These unique electronic configurations and coordination environments have profound impacts on the properties of materials, giving rise to functionality ranging from high-temperature superconductivity to diverse magnetism. Despite over a century of research on solid-state materials above one gigapascal (GPa), experimental and theoretical obstacles remain for the structural and physical characterization of complex phases that persist only under these conditions. We begin to address the wide-reaching challenge of structural characterization in complex, bulky sample environments by employing recent advancements in generative artificial intelligence to develop a generalized approach to solving the structure of crystalline solid-state materials. We demonstrate that our model achieves a 42% match rate on a curated set of experimental powder diffraction patterns, and we then use our model to determine several previously unsolved structures at high pressure. We proceed to focus on a different structural characterization problem: defects which arise exclusively under mechanical stress. We demonstrate that site-disorder is unlikely to occur at room temperature and high pressure in InBi and instead propose a set of defects which explain the X-ray spectra and scattering patterns equally well. Progressing to properties characterization and magnetic ordering at high pressure, we experimentally demonstrate that MnBi2, a compound that does not persist at ambient pressure, is a permanent magnet. Comparing the orbital and spin contributions to the total moment across compounds in the Mn–Bi system, we build up design principles for permanent magnets using heavy main-group elements.
The combination of our work in structural and physical characterization at extreme stresses charts a path towards the discovery of functional high-pressure bulk materials and defects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162326</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Dynamic Nuclear Polarization: Mechanistic Insights and Experimental Advances</title>
<link>https://hdl.handle.net/1721.1/162325</link>
<description>Coherent Dynamic Nuclear Polarization: Mechanistic Insights and Experimental Advances
Ouyang, Yifu
Dynamic Nuclear Polarization (DNP) enhances the sensitivity of solid-state Nuclear Magnetic Resonance (NMR) by transferring polarization from electrons to nuclei. While traditional continuous-wave (CW) DNP has advanced through improved radical design, the development of pulsed DNP—employing short, high-power microwave bursts—has shown the advantages of coherent spin control. We present both theoretical and experimental investigations aimed at understanding and optimizing polarization transfer. On the theoretical side, we examined multiple DNP mechanisms, including a re-evaluation of the Overhauser effect in insulating solids and a foundational treatment of the chirped solid effect. We also identified a new transfer channel, termed Resonant Mixing, arising from interference effects under off-resonance driving. Building on these insights, we developed a general framework for analyzing amplitude-, phase-, and frequency-modulated pulses. This approach enables the design of hybrid pulse sequences that combine modulation and chirping to produce efficient, selective spin transfer. These sequences maintain high enhancement even at reduced microwave power, thereby improving scalability to high magnetic fields. To test the practical viability of this approach, we designed and evaluated a prototype 400 MHz/263 GHz probe incorporating new resonator and RF technologies. While the initial performance was limited, the system provided a testbed for future high-field pulsed DNP experiments under realistic conditions. Together, these results establish a theoretical and technical foundation for next-generation pulsed DNP, emphasizing coherent spin manipulation, power-efficient design, and applicability to high-field, static-solid NMR systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162325</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Quantitative Solid-State NMR Methods&#13;
to Characterize Membrane Proteins</title>
<link>https://hdl.handle.net/1721.1/162324</link>
<description>Development of Quantitative Solid-State NMR Methods&#13;
to Characterize Membrane Proteins
Somberg, Noah H.
Membrane proteins are critical components of all cells and viruses. Although membrane proteins make up only about one quarter of human proteins, they constitute half of all drug targets. Despite their importance, membrane proteins are underrepresented among known protein structures, constituting only about 2 percent of the Protein Data Bank. This discrepancy is due to the unique difficulties in studying membrane proteins, which make many techniques commonly used in structural biology extremely challenging. Membrane proteins often display a structural dependence on the local environment. It is therefore essential to have structural biology tools to study these critical proteins in native-like environments. Solid-state Nuclear Magnetic Resonance (NMR) spectroscopy provides one of the few methods available to study the structure and dynamics of membrane proteins directly in the lipid bilayer. Herein, practical and theoretical considerations of dipolar and chemical shift anisotropy recoupling experiments are presented. These experiments were applied to the study of membrane proteins. New experiments and novel analysis techniques were developed, and the results guided biophysical understanding and drug development. &#13;
&#13;
Among the membrane proteins of SARS-CoV-2, the Envelope (E) protein is the least understood. E forms a membrane-bound ion channel and is associated with inducing the respiratory symptoms of the disease. The exact oligomeric state of E was not known. The fluorine centerband-only detection of exchange (CODEX) experiment was employed to directly measure the oligomeric state of E in lipid bilayers. The transmembrane domain of E forms a pentamer, while a construct including the ectodomain forms a dimer. Under certain conditions, the pentamers cluster together, forming supramolecular assemblies that may have a unique role in the virus life cycle. &#13;
&#13;
New sensitivity-enhanced carbon-fluorine rotational-echo double-resonance (REDOR) experiments are developed and used to investigate the drug binding of E. The small molecule drug hexamethylene amiloride binds to E at the protein-lipid interface. This informed the development of higher affinity inhibitors, which were also shown to bind E at the lipid interface. A novel strategy to identify ligand binding sites of proteins without sequential resonance assignment is presented. The technique uses a computationally efficient second moment approximation to calculate REDOR dephasing, and simulated annealing to explore the associated parameter space.&#13;
&#13;
The new methods and advances in quantification and simulation of the REDOR and CODEX experiments enhance the available solid-state NMR toolkit for the study of critical membrane proteins.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162324</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving the reliability of optical phase change materials-based devices</title>
<link>https://hdl.handle.net/1721.1/162323</link>
<description>Improving the reliability of optical phase change materials-based devices
Popescu, Cosmin-Constantin
Optical components are part of our daily lives, from vision and camera systems to data transmission in telecommunications, sensing, manufacturing, medicine, and more. Compact on-chip integrated optics such as photonic integrated circuits and optical metasurfaces can provide us with the desired functionality, but there is a continuous need for active, non-volatile tuning capabilities in these devices. &#13;
Chalcogenide optical phase change materials (PCM) (e.g. Ge₂Sb₂Te₅) have gathered sustained interest in the photonics community over the past several years precisely because of their potential for non-volatile control of optical signals. Prior work showcased the integration of PCMs via free-space metal heaters for metasurfaces, demonstrating switching for several tens of cycles. To understand the mechanisms preventing extended cycling of such devices, we developed a near-IR-transparent platform on doped silicon-on-insulator for testing both material behavior and device performance, along with the auxiliary code and design needed for such testing. Using this platform, a Ge₂Sb₂Se₄Te-based transmissive metasurface filter was demonstrated with a cycling performance of 1250 cycles. Next, the mechanisms limiting the performance of such devices were explored, providing guidelines to improve their reliability and endurance both at the phase-change-material scale and at the device level. Furthermore, we showcase future potential devices that can be leveraged for PCM photonics, including a theoretical design that avoids free carrier absorption losses from the doped silicon heater by placing the dopants at the node of a resonant mode, limiting their overlap with the regions of high field amplitude, and a matrix array of heaters for higher device functionality. Finally, we point toward areas of focus for scaling these concepts to commercial applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162323</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Medicine in Diabetes Using Continuous Glucose Monitoring</title>
<link>https://hdl.handle.net/1721.1/162322</link>
<description>Precision Medicine in Diabetes Using Continuous Glucose Monitoring
Healey, Elizabeth
Diabetes affects millions of individuals around the world and is a leading cause of death. The risk of serious long-term complications in diabetes can be mitigated through early interventions in the form of medication and behavioral changes. However, the pathophysiology of diabetes and the course of the disease are markedly heterogeneous, making it essential that disease management is tailored to the individual. Continuous glucose monitoring (CGM) helps patients manage their disease through the collection of real-time measurements of interstitial glucose, providing insight into glycemic dynamics that laboratory measurements cannot capture. In this thesis, we investigate how CGM can be used to enable personalized disease management in diabetes using modern methods from machine learning and signal processing. We first investigate a model-based approach to estimate metabolic parameters from CGM data. We show that parameters estimated from daily CGM data correlate with parameters derived from in-clinic laboratory measurements. Then, we explore how&#13;
the rapidly emerging field of generative artificial intelligence can be integrated into diabetes care through analysis of CGM data. We show how large language model agents hold promise for assisting patients and clinicians in managing diabetes through the synthesis and narrative summarization of large amounts of CGM data. Finally, we leverage observational CGM data to understand heterogeneity in type 2 diabetes. The work in this thesis shows how modern computational methods in machine learning can enable precision medicine in diabetes by leveraging wearable CGM data for improved disease management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162322</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Insights into Mycobacteriales Galactan Biosynthesis</title>
<link>https://hdl.handle.net/1721.1/162321</link>
<description>Structural Insights into Mycobacteriales Galactan Biosynthesis
Carter, Alan Wylde
The order Mycobacteriales includes a number of severe human pathogens, including Mycobacterium tuberculosis, the causative agent of tuberculosis and a leading cause of infectious disease-related mortality worldwide. The unique cell wall structure of these bacteria is essential for their viability and has been studied as a potential target for the development of novel therapeutics. A key component of the mycobacterial cell wall is the galactan, a 30-40 residue linear polysaccharide of galactofuranose (Galf) with an alternating β(1,5) and β(1,6) linkage pattern, synthesized by the polymerase Galactofuranosyl Transferase 2 (GlfT2). While GlfT2 has been established as a processive polymerase with intrinsic sequence control, the mechanism underlying this activity remains unclear. In the studies presented here, we provide structural insights into Nocardia brasiliensis GlfT2 (NbrGlfT2) using X-ray crystallography and cryo-electron microscopy. We characterize both the acceptor-bound and membrane-embedded structures of NbrGlfT2 and propose three models for its catalysis: Processive Galactan Sliding, Feedback-Regulated Sequence Control, and Membrane Curvature-Mediated Polymerization. Furthermore, we structurally characterize a previously undescribed GlfT2 paralog from Rhodococcus equi, which we term ReqGlfT3. We confirm its galactofuranosyl transferase activity and identify the production of β(1,3) and β(1,5) linkages. These findings offer new insights into GlfT2 and related polymerizing glycosyltransferases, informing our understanding of enzymatic regioselectivity mechanisms and polysaccharide biosynthesis across the bacterial kingdom.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162321</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Energy Work: Enacting Renewable Transitions in the Deserts of Chile and California</title>
<link>https://hdl.handle.net/1721.1/162320</link>
<description>Making Energy Work: Enacting Renewable Transitions in the Deserts of Chile and California
White-Nockleby, Caroline Celeste
This dissertation explores how different people enact and engage the “Energy Transition,” a temporal orientation increasingly used to describe a range of activities that operate on and through electricity, renewable energy, and fossil fuels. I ask, given the infrastructural, political, economic, and material-semiotic continuities and imbrications between renewable energies and fossil fuels, how do actors craft, stabilize, and mobilize the [just] renewable energy transition? How, in other words, do people distinguish the activities and projects of the energy transition from continuity and more ordinary change? I also ask, what kind of political work is “transition” – along with its usual modifiers, energy, renewability, and justice – doing in the world? Building on scholarship that explores the history, genealogy, materiality, and political economy of energies and resources, I investigate these questions by analyzing energy transition projects across two region-scale field sites: Antofagasta, Chile and Imperial County, California. &#13;
&#13;
I find that self-consciously small-scale technologies like maps, models, and pilot projects are vital to assembling just, renewable resources – and to demarcating particular places and projects as in transition. Though these technologies often aspire to make and move green energy 24/7 and worldwide, they face substantial obstacles to doing so. Their value is, thus, often drawn more from the future they index than their present functionality. I term these temporal indexical technologies “anticipatory devices,” and show that such devices gain significance in relation to the particular forms of expertise that actors draw on to design them. Each chapter, therefore, analyzes a different disciplinary form of expertise in which the concepts of renewability, justice, transition, and energy, aided by various anticipatory devices, take shape: cartography (Chapters 1 and 2), chemistry (Chapter 3), engineering (Chapter 4), and economics (Chapter 5).&#13;
&#13;
Ultimately, I find that energy transition is often treated as a universalizing, singular narrative, which can shape and limit the scope of climate mitigation projects. Profit motives often incentivize corporate actors to design projects to align with the more temporary kinds of transitions that have long been constitutive of capitalism, even though it is longer-term changes that will most effectively mitigate climate change. The same is true for renewability, which can easily be articulated as an ideal that supports visions for unfettered capitalist growth. Moreover, approaches that treat “transition” as universal can also easily echo and reinvigorate an evolutionist approach to time, in which places and countries compare their relative advancement towards carbon neutrality along a single, teleological temporal axis. Yet I also encountered many actors engaged in more situated projects that attended to local histories of land use, industry, and power. These projects pluralized transition – or did not use the term at all – offering situated and distributive visions of energetic change that might enable more regenerative futures to germinate.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162320</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Containerless measurement and thermodynamic prediction of the physical properties of liquid steels</title>
<link>https://hdl.handle.net/1721.1/162316</link>
<description>Containerless measurement and thermodynamic prediction of the physical properties of liquid steels
Benderly-Kremen, Ethan
Incomplete control over macrosegregation during steel solidification hinders the development of novel steel alloys and applications by limiting the compositions that can be successfully cast. Analysis of macrosegregation at the solidification front is aided by study of the liquid state via fluid mechanics, which can place bounds on when macrosegregation can occur.&#13;
This non-dimensional analysis requires knowledge of the physical properties of the liquid steel: density, surface tension, and viscosity, and how they change with composition as solutes are rejected from the solid at the solidification front. Macrosegregation is most pronounced in ferrous liquids containing light, non-metallic species, i.e. carbon, oxygen, and sulfur. Yet, existing literature models for predicting the physical properties of liquid alloys are incapable of accounting for these interstitial species inside an iron lattice. Additionally, direct experimental measurement of these properties is hindered by the requisite high temperatures, the high reactivity of the melt, and the vast composition space of steel alloys.&#13;
&#13;
Herein, both the experimental and modeling challenges are introduced and addressed. An experimental technique using a floating zone furnace, pendant drop geometry, high-speed camera, and video segmentation was developed for simultaneous, containerless, high-throughput measurement of the physical properties of liquid steel samples. The central atom model, a multicomponent solution model, is extended to investigate the statistical structure of alloys consistent with their energetics and solution thermodynamics. This allows liquid structure determination from thermochemical measurements, bypassing structural and atomistic modeling challenges of high-temperature liquid systems.&#13;
&#13;
These methods and models are explored on the binary systems of iron-nickel, the major substitutional alloying element in steel, and iron-carbon, the major interstitial species. Results demonstrate successful liquid property measurement at experimental rates far exceeding those of traditional high-temperature research and introduce a basis for a unified treatment of thermodynamic and physical properties in multicomponent alloy melts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162316</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Probes and Strategies to Study Mycobacterial Cell Envelope Assembly</title>
<link>https://hdl.handle.net/1721.1/162312</link>
<description>Chemical Probes and Strategies to Study Mycobacterial Cell Envelope Assembly
Lee, So Young
The cell envelope of Mycobacterium tuberculosis (Mtb) is central to its pathogenicity, immune evasion, and intrinsic drug resistance. While the importance of its glycan components is well recognized, their structural intricacies have hindered efforts to directly perturb and investigate their function. In this work, we discuss chemical approaches to study and manipulate mycobacterial cell envelope biosynthesis. Specifically, we present biosynthetic glycan labeling strategies that leverage the activity of native glycosyltransferases to probe arabinogalactan and mannose-containing glycolipids. Building upon prior work using lipid-linked probes to label mycobacterial arabinan, Chapter 2 details the development of azido-functionalized farnesyl phosphoryl mannose (AzFPM) probes that mimic native polyprenyl-phosphoryl donors and selectively label mannose-containing glycolipids in live mycobacteria. Chapter 3 showcases how these probes enable glycan substructure-specific labeling and biochemical enrichment of glycolipids across Corynebacterium glutamicum, Mycobacterium smegmatis, and Mtb. This strategy provides a platform to study glycolipid dynamics in wild-type cells, a task previously hindered by the lack of selective labeling tools. In Chapter 4, we further interrogate endogenous glycan biosynthesis by applying biosynthetic labeling probes in C. glutamicum. Perturbation of arabinan structure by probe incorporation led to impaired cell wall integrity and growth defects. In glycosyltransferase deletion strains, altered probe incorporation patterns revealed enzyme-specific roles in glycan assembly and architecture. Beyond novel labeling strategies, Chapter 5 describes the development of targeted inhibitors of galactan biosynthesis, an essential yet underexplored component of the mycobacterial cell wall. We employed a prodrug strategy to inhibit UDP-galactopyranose mutase (UGM), which catalyzes the committed step in Galf production. 
To overcome delivery challenges, we designed amide prodrugs activated intracellularly by amidases. One prodrug exhibited improved efficacy in Mtb, providing a promising lead for antibiotic development. Collectively, these studies establish biosynthetic labeling and targeted galactan inhibition as powerful tools for dissecting the structure and function of the mycobacterial cell envelope, offering new avenues for developing chemical probes and therapeutics against tuberculosis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162312</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photonic design for chemical analysis</title>
<link>https://hdl.handle.net/1721.1/162311</link>
<description>Photonic design for chemical analysis
Ma, Wenchao
With light manipulated at subwavelength scales, photonic design has been explored for various applications. In this dissertation, we investigate the potential application of photonic design to chemical analysis, with a focus on spectrometry and chemical sensing.&#13;
First, we demonstrate inverse design of single-layer metasurfaces with shape optimization. Each of the designed metasurfaces simultaneously focuses light and shapes the spectra of focused light without using any filters. Thus, both spatial and spectral properties of the meta-optics are engineered. We chose the color matching functions of the CIE 1931 XYZ color space as the target spectral shapes and a distant region with a finite size as the focal area.&#13;
We then present an inverse-design approach for computational spectrometers, in which the scattering media are topology-optimized to achieve higher robustness in inference, without the need for a training set of spectra and noise. Our approach also allows the selection of the inference algorithm to be decoupled from that of the scatterer. For smooth spectra, we devise a regularized reconstruction algorithm based on Chebyshev interpolation, which yields higher accuracy than the conventional treatment in which the spectra are sampled at equally spaced frequencies or wavelengths with equal weights. Our approaches are numerically demonstrated via inverse design of integrated computational spectrometers and reconstruction of example spectra. The inverse-designed spectrometer exhibits significantly better performance in the presence of noise than its counterparts with random scatterers.&#13;
Furthermore, we discuss chemical detection using optical resonances, which can increase the sensitivity of measurements to material perturbations and also accelerate photochemical reactions. We show that these two effects can be combined multiplicatively, to enhance the detection via weak/low-concentration photochemical reactions far beyond what could previously be attained.  For an optical resonance with a quality factor Q, the sensitivity of our detection scheme is enhanced by ~ Q² (where ~ denotes approximate proportionality), as demonstrated by both theoretical arguments and numerical simulations of a simple optical grating resonance coupled with reaction-diffusion equations.  Such an approach opens a door to further improvements by careful design of the resonance: even a 3-parameter optimization of the grating resonance yields an additional ≈ 7 × improvement.&#13;
Finally, regarding linear electromagnetic systems possessing time-reversal symmetry, we present an approach to bound ratios of internal fields excited from different ports, using only the scattering matrix, improving upon previous related bounds by Sounas and Alù (2017). By reciprocity, emitted-wave amplitudes from internal dipole sources are bounded in a similar way. When applied to coupled-resonant systems, our method constrains ratios of resonant coupling/decay coefficients. We also obtain a relation for the relative phase of fields excited from the two ports and the ratio of field intensities in a two-port system. In addition, although lossy systems do not have time-reversal symmetry, we can still approximately bound loss-induced non-unitarity of the S matrix using only the lossless S matrix. We show numerical validations of the near-tightness of our bounds in various scattering systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162311</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sulfidation of Ternary Oxides: A Thermodynamic and Experimental Study Toward Selective Metal Extraction</title>
<link>https://hdl.handle.net/1721.1/162310</link>
<description>Sulfidation of Ternary Oxides: A Thermodynamic and Experimental Study Toward Selective Metal Extraction
Boury, Charles A.
Conventional metal refining techniques face growing challenges due to increasing ore complexity and their limited ability to accommodate post-consumer recycling feedstocks. Sulfidation is a promising pyrometallurgical approach for the selective separation and recovery of critical elements from such complex feedstocks. This thesis presents a chemical thermodynamic framework for the sulfidation of divalent alkaline earth and transition metal ternary oxides of titanates, molybdates, tungstates, niobates, and tantalates. Modified predominance diagrams were constructed to determine the possible sulfidation outcomes, and a sensitivity analysis on the input thermodynamic data was performed to assess their impact on the outcome of sulfidation. A high-temperature apparatus was designed and tested to compare predicted and observed sulfidation behavior. Together, the model and experimental apparatus provide a new experimental method to estimate the high-temperature Gibbs energy of multicomponent oxides. Application to current chemical metallurgy challenges, including the recycling of tantalum-based capacitors and the refining of tungsten-bearing ores, underscores sulfidation as a powerful step supporting new processing routes for sustainable metal recovery.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162310</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamentals, Voltage Control and Novel Application of Exchange Bias in Magnetic Thin Films</title>
<link>https://hdl.handle.net/1721.1/162308</link>
<description>Fundamentals, Voltage Control and Novel Application of Exchange Bias in Magnetic Thin Films
Hasan, Muhammad Usama
The past half-century has seen remarkable advances in microelectronics, but as transistors approach their physical limits, there is a growing need for beyond-CMOS technologies. Spintronics, which aims to utilize the electron's spin in magnetic thin films for data storage and manipulation, is a promising alternative. Among the rich physical interactions that appear in magnetic thin films, the exchange-bias (EB) effect is essential for many spintronic devices. EB arises at ferromagnet/antiferromagnet interfaces and imposes an internal field on the ferromagnet, enhancing the range of functionalities that can be derived from devices. This thesis explores EB in Co/Co₁₋ₓNiₓO systems at multiple levels, from fundamental understanding to its manipulation and applications. First, we introduce a new model to predict EB in polycrystalline antiferromagnetic thin films and validate it with experimental data. Second, we tackle another fundamental aspect – disentangling the effects of EB on nucleation and propagation of magnetization reversal. We discover that nucleation and propagation EB can be unequal and demonstrate how that can lead to unexpected behavior of the system, including asymmetric hysteresis loops. Building on these insights, we demonstrate voltage-controlled ionic gating to manipulate EB, achieving cyclic toggling of the EB sign in a ferrimagnetic system, where the magnetization direction is fully determined by the gating state. Furthermore, by targeting the antiferromagnet directly, we discover EB enhancement of up to 100%, which can be explained with the help of the model developed earlier. We demonstrate sub-millisecond and analog operation in this system. Finally, a new approach to improving bit-stability whilst preserving performance in magnetic racetrack memory is proposed that involves incorporating an EB layer with the right properties into the track. 
The benefits obtained from this strategy can help push this next-gen memory device closer to commercialization. We believe the findings in this thesis substantially extend the state-of-the-art in terms of basic understanding of EB, ways of EB manipulation and unexplored use-cases of EB, paving the way for new functionalities in spintronic devices applicable for non-conventional computing paradigms or next-gen memory devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162308</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Urban Resilience to Environmental and Health Risks</title>
<link>https://hdl.handle.net/1721.1/162307</link>
<description>Essays on Urban Resilience to Environmental and Health Risks
Fan, Yichun
Cities today face growing environmental and health threats due to climate change. Building urban resilience requires understanding the complex interplay between environmental and social systems, accounting for adaptation dynamics. Using new data, computational tools, and economic analysis, this thesis explores how people and places adapt to environmental risks and the implications for urban policy and infrastructure planning.&#13;
&#13;
Chapter 1 examines how the financing structure of climate resilience infrastructure impacts long-term economic dynamics. Using satellite imagery to develop new performance metrics for U.S. flood protection levees, I find that decentralized financing of infrastructure maintenance creates a feedback loop: lower housing values and property tax revenues reduce fiscal capacity for levee maintenance, which increases levee failure risk and further depresses housing values. These feedback dynamics reinforce under-maintenance and perpetuate spatial inequality. &#13;
&#13;
Chapter 2 analyzes the social cost of behavioral adaptation. Leveraging 27 million fitness app exercise records and quasi-experimental designs, I find that heavy air pollution reduces outdoor exercise likelihood by 28%, with information and risk awareness as key moderators. This behavioral response results in significant health costs often overlooked in environmental health studies.&#13;
&#13;
Chapter 3 explores the role of subjective traits in predicting adaptation behavior. Applying Natural Language Processing to social media posts from 500,000 users, we classify individual fear types and find that pre-pandemic fearfulness strongly predicts social distancing behavior during COVID-19. This project provides a scalable tool for measuring unobserved subjective traits to predict behaviors under risk and target interventions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162307</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring and Treating Neurological Conditions Through Focal Interfacing with the Brain</title>
<link>https://hdl.handle.net/1721.1/162305</link>
<description>Monitoring and Treating Neurological Conditions Through Focal Interfacing with the Brain
Jackson, Hannah Dale
Neurological dysregulation serves as the fundamental basis for a spectrum of debilitating disorders such as Parkinson's disease and epilepsy. Despite considerable efforts, our current comprehension of these disorders and our ability to treat them remain limited. Neurochemical sampling of the affected tissue can be used to monitor pathological states, but existing tools are limited by tissue reactivity and suboptimal spatiotemporal resolution. Additionally, methods for treating neurological disorders predominantly rely on systemic drug administration, which is hampered by inadequate targeting and off-target effects. &#13;
&#13;
There is a need for minimally invasive modalities for monitoring and treating neurological disorders, ones that offer high spatial resolution, maintain chronic functionality, and preserve overall brain function. This thesis presents the development and implementation of neural implants capable of both infusing and sampling sub-microliter volumes of fluid with exceptional spatial precision. These implants utilize micron-scale technology to minimize scarring following implantation and allow for sustained chronic functionality. We use these devices to answer two key questions: (1) Can the localized delivery of drugs to specific neural circuits provide effective treatment for neurological diseases? and (2) Can micro-invasive sampling of brain interstitial fluid facilitate disease diagnosis and monitoring?&#13;
&#13;
We assessed our ability to treat focal epilepsy with this platform by delivering antiseizure medications directly to the seizure focus in a mouse model of temporal lobe epilepsy. We found that localized drug delivery effectively suppressed seizure activity without adverse effects. We also explored micro-invasive, membrane-free sampling of interstitial fluid from different brain regions using our device. We detected hundreds of distinct proteins from minute sample volumes with high spatial resolution and minimal tissue damage. The results from both studies highlight the platform’s potential for targeted drug delivery and biomarker detection across a variety of disease states.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162305</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward sequence-to-structure predictions of chromatin: Generative AI sheds light on genome organization</title>
<link>https://hdl.handle.net/1721.1/162304</link>
<description>Toward sequence-to-structure predictions of chromatin: Generative AI sheds light on genome organization
Schuette, Greg
The secrets of the genome have captivated scientists for well over a century, though the active role its spatial organization plays in gene regulation, cell determination, and disease formation has become clear only in recent decades. Significant strides have been made toward characterizing and understanding three-dimensional genome organization, but the scale, complexity, and heterogeneity of the genome and nuclear environment complicate investigations into this system. This thesis presents several methodological advances that alleviate these challenges and hold the potential to accelerate genome organization research.&#13;
&#13;
An efficient Hi-C inversion algorithm appears first. This technique extracts pairwise contact potentials from experimental Hi-C data, uncovering mechanistic details obscured by the correlations among Hi-C contact probabilities. This required the development of a spin-glass model of chromatin and the derivation of a corresponding model inversion; the model may find use in further theoretical studies of chromatin, while the inversion can be applied more broadly. The inversion successfully revealed the location of chromatin loop anchors, supported phase separation as the mechanism of chromatin compartment formation, and parameterized polymer models that reproduced the experimental Hi-C data with reasonable accuracy.&#13;
&#13;
The focus then shifts toward ChromoGen, a generative AI model that predicts three-dimensional chromatin structures directly from DNA sequence and chromatin accessibility data. ChromoGen provided biologically accurate structural ensembles throughout the genome of two cell types, including one omitted from its training data. This transferability suggests that ChromoGen can provide access to the organization of chromatin in a wide variety of cell types while only relying on widely available sequencing data. &#13;
&#13;
Afterward, we discuss several strategies to extend ChromoGen to full-chromosome structure prediction tasks. Preliminary results suggest that the technology of today can provide this capability, as we have generated physical chromosome conformations for mouse chromosomes, although sequencing data did not guide this generative process. Correspondingly, we explore the possibility of incorporating a multimodal model with ChromoGen, allowing it to condition structure generation on a wide variety of data types. Success in this area could enable true de novo structure prediction, greatly simplifying research aiming to understand the relationship between sequence, structure, and cellular function while also accelerating the development of treatments for diseases that implicate chromatin dysregulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162304</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reparative Urban Science: Challenging the Myth of Neutrality and &#13;
Crafting Data-Driven Narratives</title>
<link>https://hdl.handle.net/1721.1/162303</link>
<description>Reparative Urban Science: Challenging the Myth of Neutrality and &#13;
Crafting Data-Driven Narratives
So, Wonyoung
This dissertation constructs a distinctive lens on how we should see urban technology in the context of a long history of systemic racism, and on how a reparative approach can use technology and data to intervene in contemporary situations of racial inequality. The current discourse of urban science often puts an emphasis on newly available (and big) data, primarily values methodologies of “hard” sciences such as physics, computer science, and mathematics, and evolves to incorporate the latest technologies and analytic methods including machine learning and artificial intelligence. However, the role of urban science and analytics that “move[s] beyond analysis” has not been extensively theorized. In particular, the relationship between urban technologies, white supremacy, and racial capitalism has not been extensively studied. Nonetheless, the impact of the applications of such “urban analysis” on people’s lives has been substantial. Building on planning scholars’ calls for reparative planning and emerging discourses of “algorithmic reparation,” this dissertation proposes a normative framework of reparative urban science that challenges whiteness in urban science and embraces the epistemologies and methodologies of reparations. &#13;
&#13;
The dissertation follows a three-paper structure, with the first paper serving as the theoretical framework for two empirical studies, and includes a concluding chapter. The first paper introduces the overarching theory of reparative urban science, identifying three mechanisms—formalizing, context removal and legitimization, and penalization—through which urban technologies perpetuate historical inequalities under a race-neutral guise. It then proposes reparative methodologies, including algorithmic auditing, crowd-sourced community data collection, and algorithms designed to simulate and deliberate reparative futures. The second and third papers demonstrate reparative urban science in action, exemplifying these methodologies. The second paper investigates tenant screening services and landlord decision-making. It reveals the mechanisms by which tenant screening algorithms obscure historical racial biases, and how landlords interact with them in ways that inflict harm. The third paper evaluates the reparative potential of housing programs using algorithmic methods, particularly comparing race-neutral versus race-conscious Special Purpose Credit Programs (SPCPs). It demonstrates that race-conscious SPCPs could reduce the racial housing wealth gap significantly more than race-neutral ones, showing that race-conscious policies can serve as reparative tools. The concluding chapter explores theoretical and practical considerations of housing reparations through the lens of reparative justice, arguing for a deeper acknowledgment of the historical and structural harms related to land and property. Overall, this dissertation seeks to reorient urban science toward justice and repair, envisioning a transformative path forward that actively confronts historical harms and promotes healing and equity in urban futures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162303</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Growth-Induced Cation Order and Magnetic Anisotropy Engineering in Iron Garnet Thin Films</title>
<link>https://hdl.handle.net/1721.1/162302</link>
<description>Growth-Induced Cation Order and Magnetic Anisotropy Engineering in Iron Garnet Thin Films
Kaczmarek, Allison C.
At its heart, Materials Science and Engineering is a discipline seeking to advance technologies by improving the materials that make them. Currently, the development of electronic devices is limited by the need for materials that enhance speed, reduce size, and improve energy efficiency. Spin-based memory devices, which encode data in a material’s magnetic state, offer a promising solution. Magnetic memory technologies are widespread today, powering devices such as memory disks, tapes, and magnetic random access memory (MRAM). While garnet materials have long been studied for these applications, they have faced challenges in becoming adoptable technologies. However, with advanced research techniques and a deeper understanding of material behaviors, magnetic garnets are experiencing a renaissance. This thesis explores the engineering of iron garnet thin films for next-generation spin-based memory applications. The work presented advances the understanding of non-equilibrium growth, characterization, and engineering of iron garnet thin films and their magnetic properties, emphasizing kinetic phenomena that govern atomic organization beyond classic ordering of unit cells and emergent magnetic anisotropy. In a composition series of europium-thulium iron garnet (EuTmIG) films, experiment confirms the 50-year-old theory of cation site preference of Eu and Tm, demonstrating that enhanced magnetic anisotropy, named “magnetotaxial anisotropy”, is linked to cation ordering during growth. These findings lay the foundation for anisotropy engineering by cation order. Further studies investigate the effects of film formulation, growth kinetics, and post-growth annealing on structural ordering and magnetotaxial anisotropy. In bismuth-yttrium iron garnet (BiYIG) films, a linear relationship between Bi-Y ordering, magnetic anisotropy, and substrate-lattice mismatch provides deeper insight into the forces that drive cation ordering. 
Annealing is shown to further enhance magnetic anisotropy in these films. In lutetium-yttrium iron garnet (LuYIG) films, the laser pulse rate during growth by pulsed laser deposition is shown to influence Lu-Y ordering and magnetic anisotropy, reinforcing the kinetic nature of the cation ordering. The findings of this thesis contribute to the fundamental understanding of cation ordering in complex oxide films and provide a framework for engineering and characterizing garnet materials, enabling the future development of new spintronic devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162302</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling and probing nonlinear collective mode dynamics in quantum materials</title>
<link>https://hdl.handle.net/1721.1/162301</link>
<description>Controlling and probing nonlinear collective mode dynamics in quantum materials
Zhang, Zhuquan
Tailored laser pulses offer a powerful means of driving materials out of equilibrium by selectively addressing specific degrees of freedom. In particular, the excitation of low-energy collective modes in solids—such as lattice vibrations (phonons) and spin precessions (magnons)—to large amplitudes opens fundamentally new pathways for controlling and probing material properties that are otherwise inaccessible under thermal equilibrium conditions. In this regime, both the nonlinear interactions between light and matter and the intrinsic nonlinear dynamics of the driven modes present significant challenges for understanding the underlying mechanisms and for realizing potential applications.&#13;
&#13;
This dissertation centers on two major themes: (1) probing equilibrium properties of materials via nonlinear light-matter interactions; and (2) unveiling emergent phenomena hidden in equilibrium by driving collective modes far from equilibrium. &#13;
&#13;
I begin by providing an overview of recent advances in controlling and probing quantum materials out of equilibrium, followed by a discussion of the theoretical frameworks and experimental methodologies used to interrogate collective excitations. Building on this foundation, I present two studies demonstrating how terahertz Raman excitation can reveal distinct spectroscopic signatures of material states.&#13;
&#13;
Subsequently, I focus on coherent nonlinear magnon-magnon interactions in canted antiferromagnets, induced by tailored terahertz fields. In these experiments, we demonstrate a unidirectional magnon upconversion process and identify correlated magnonic responses at both the sum and difference frequencies of the interacting modes. We achieve parametric amplification of magnon coherence by tuning the magnonic difference-frequency generation into resonance with a low-frequency magnon. Furthermore, by increasing the driving field strength to access a far-from-equilibrium regime, we uncover spectroscopic signatures of non-perturbative dynamics marked by strong magnon self-interactions.&#13;
&#13;
Finally, I present an example in which spatially heterogeneous responses of electromagnon modes in a van der Waals multiferroic are revealed through terahertz photon echo measurements. Together, these results highlight how tailored light-matter interactions can be leveraged to probe, control, and manipulate material degrees of freedom, both in and out of equilibrium.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162301</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Brain Somatic Mosaicism with New Single-Cell Copy Number Analysis Methods</title>
<link>https://hdl.handle.net/1721.1/162300</link>
<description>Decoding Brain Somatic Mosaicism with New Single-Cell Copy Number Analysis Methods
Zhao, Yifan
Copy number variants (CNVs) represent a significant but understudied form of somatic variation in the human brain, with potential implications for neurodevelopment, aging and disease. While single-cell whole-genome sequencing (scWGS) enables genome-wide profiling at single-cell resolution, existing computational methods struggle to accurately detect non-clonal CNVs, limiting our understanding of genomic mosaicism in the brain. In this thesis, I present two novel and complementary computational approaches for high-resolution CNV analysis in single cells. The first, HiScanner, is a CNV detection method that integrates single-cell assay-specific characteristics and introduces innovations in bin size optimization, read depth normalization, and joint segmentation across cells. Through extensive benchmarking experiments, I demonstrate HiScanner’s superior performance compared to existing tools. The second is a validation method that leverages unique molecular patterns from tagmentation-based scWGS, representing the first tool that exploits fragment overlap patterns to corroborate CNV predictions. I then apply these tools to investigate CNVs in three biological contexts: tumor evolution in paired initial and recurrent meningiomas, age-related genomic changes in neurotypical human brains, and developmental patterns in fetal and postnatal brain tissues. By analyzing both scWGS and multimodal single-cell data (paired RNA-seq and ATAC-seq), I characterize cell-type-specific CNV patterns and their potential functional implications. This work establishes a robust framework for studying somatic CNVs at single-cell resolution and provides insights into genomic instability in brain development, aging, and disease.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162300</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Origin and Correlates of Viral Rebound in SIV-Infected Rhesus Macaques Following Discontinuation of ART</title>
<link>https://hdl.handle.net/1721.1/162298</link>
<description>Origin and Correlates of Viral Rebound in SIV-Infected Rhesus Macaques Following Discontinuation of ART
King, Irena V.
The earliest events of viral rebound following discontinuation of ART in people living with Human Immunodeficiency Virus-1 remain largely unknown. We investigated detailed reservoir characteristics and viral rebound dynamics in 18 Simian Immunodeficiency Virus-infected rhesus macaques treated with antiretroviral therapy for 70 weeks and then necropsied after a 12-day analytical treatment interruption (ATI). Using molecularly barcoded SIVmac239M, we tracked viral clonotypes following ATI in both peripheral blood and tissues at necropsy. Viral rebound appeared to originate by reactivation of a single or a few barcode clonotypes from a limited number of deep lymph nodes or gastrointestinal tissues, followed by rapid virus replication of this clonotype in peripheral blood and tissues as well as serial reactivation of multiple additional barcode clonotypes from different anatomic sites, resulting in oligoclonal plasma viremia. Daily transcriptomic and proteomic profiling in peripheral blood following ATI identified early upregulation of pathways related to T cell signaling, cytokine responses, and metabolism prior to detectable plasma viremia, presumably reflecting initial viral replication in tissues. Taken together, these data provide a detailed anatomic, virologic, and immunologic characterization of viral rebound in SIV-infected macaques following ATI, which provides critical information to inform the development of next-generation HIV-1 cure strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162298</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale Modeling of Genome Organization: Bridging Polymer Physics, Molecular Dynamics, and AI</title>
<link>https://hdl.handle.net/1721.1/162297</link>
<description>Multiscale Modeling of Genome Organization: Bridging Polymer Physics, Molecular Dynamics, and AI
Lao, Zhuohan
The human genome is intricately organized within the nucleus, and its spatial arrangement plays a critical role in gene regulation, cellular function, and disease. Recent advances in high-throughput experiments have unveiled the heterogeneous and dynamic nature of chromatin organization at single-cell resolution. However, computational tools that can both simulate and predict such complex structures are still limited. In this thesis, we develop and apply computational frameworks to investigate nuclear genome organization at high spatial and temporal resolution. Our approaches integrate biophysical modeling and generative artificial intelligence to address complementary aspects of nuclear architecture.&#13;
&#13;
In Chapter 1, we provide an overview of the hierarchical organization of the genome and discuss emerging principles that govern chromatin folding, nuclear compartmentalization, and their functional implications. We introduce data-driven, physics-based, and generative artificial intelligence modeling approaches, highlighting the need for interpretable and efficient models capable of capturing the structural diversity of the nucleus across individual cells.&#13;
&#13;
In Chapter 2, we present OpenNucleome, a high-resolution molecular dynamics framework for simulating the entire human nucleus at 100-kilobase resolution. OpenNucleome incorporates explicit representations of chromosomes, nuclear bodies, and the nuclear lamina, and faithfully reproduces experimental data from Hi-C, TSA-seq, DamID, and DNA-MERFISH. The developed software is fully open-source and GPU-accelerated, enabling large-scale simulations and mechanistic explorations.&#13;
&#13;
In Chapter 3, we explore the impact of genome organization on various biological phenomena within the cell nucleus—focusing on telomere and telomere condensate dynamics, and nuclear deformation—using OpenNucleome. Our results demonstrate that the three-dimensional genome architecture plays a pivotal role in governing the dynamics of genomic loci such as telomeres, influencing the kinetics and outcomes of droplet coarsening. Moreover, specific interactions between the genome and nuclear bodies form robustly across cells, providing strong support for a nuclear zoning model of genome function.&#13;
&#13;
In Chapter 4, we introduce ChromoGen, a generative diffusion model that predicts single-cell chromatin conformations de novo from DNA sequence and DNase-seq data. Unlike traditional simulation frameworks, ChromoGen learns from experimental single-cell 3D structures to generate physically realistic, region- and cell type-specific ensembles. ChromoGen achieves high agreement with both experimental Dip-C and Hi-C data while maintaining computational efficiency, enabling rapid exploration of chromatin heterogeneity across the genome and cell types.&#13;
&#13;
Together, these two frameworks—OpenNucleome and ChromoGen—provide powerful and complementary tools for understanding genome structure and function at the single-cell level, bridging physics-based modeling and deep generative artificial intelligence modeling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162297</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On single-cell immune dynamics of chronic HIV infection and treatment in rhesus macaque models</title>
<link>https://hdl.handle.net/1721.1/162296</link>
<description>On single-cell immune dynamics of chronic HIV infection and treatment in rhesus macaque models
Quinn, Sarah Lynne
Human Immunodeficiency Virus (HIV) continues to be an overwhelming challenge in both global health and immunology. With no cure available, 30 million people worldwide rely on antiretroviral therapy (ART) to prevent transmission and disease progression. However, individuals on ART are at elevated risk for numerous comorbidities and are susceptible to continued disease progression should treatment be stopped or interrupted. Addressing the challenges resulting from treatment and lack of cure requires a deeper understanding of the complex underlying immunology of HIV infection, treatment, and therapeutics.&#13;
Single-cell RNA sequencing (scRNAseq) is continually advancing our understanding of immune dynamics and when combined with well-characterized rhesus macaque models of HIV, provides an opportunity to profile immune perturbations over extensive time courses in a controlled setting. In this thesis, I present two studies that further our understanding of the host immune response across stages of infection, treatment, and therapeutic intervention, using rhesus macaque models.  &#13;
In the first study (Chapter 2), I comprehensively profiled immune dynamics during untreated infection, ART initiation, and long-term ART, leveraging a longitudinal cohort of Simian Immunodeficiency Virus (SIV)-infected macaques. This work is particularly relevant given the increasing age and average time spent on ART among people living with HIV. scRNAseq revealed key immune shifts during acute and chronic infection, as well as over five years of subsequent ART. I identified cell type composition shifts during prolonged untreated infection and uncovered areas of unresolved immune dysregulation despite long-term ART, most prominently in myeloid gene expression and enrichment. I further link transcriptional changes to intact proviral burden and identify ribosomal pathways as markers of infection stage, treatment status, and reservoir size. Finally, by evaluating published immune correlates of treatment outcome, I identify which signatures change and which remain stable with time on ART.&#13;
In the second study (Chapter 3), I expand on the baseline infection and treatment case by evaluating immune dynamics in response to a post-exposure combination therapeutic (Ad26/MVA + PGT121 + Vesatolimod) previously shown to induce post-ART viral control in most (7/10) treated macaques with Simian Human Immunodeficiency Virus (SHIV). Here I identify features of therapeutic response and implicate a previously defined Antibody-Dependent Cellular Phagocytosis signature as being associated with control. Furthermore, I identify a new cytotoxic transcriptional module in T and NK cells associated with both non-rebounding animals and post-rebound controller animals, suggesting a shared effector program associated with successful virologic control. &#13;
Supported by a thorough introduction on immunological techniques, questions and applications to HIV studies (Chapter 1), and a discussion of intersectionality and future directions of the field (Chapter 4), this thesis provides a comprehensive analysis of immune dynamics across the lifecycle of viral infection, treatment, and therapeutic intervention in macaque models of HIV. This work demonstrates how the host immune environment influences therapeutic success, laying a foundation for future therapeutic design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162296</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adsorption and electrostatic potentials at the electrochemical interface</title>
<link>https://hdl.handle.net/1721.1/162295</link>
<description>Adsorption and electrostatic potentials at the electrochemical interface
Nowack, Linsey
This thesis explores adsorption in electrochemical systems. Part I reviews ways to quantify adsorption, the energetic considerations that make adsorption favorable or unfavorable, how to measure adsorption experimentally, and how it has previously been modeled using extensions of the Langmuir isotherm. At the end of Part I, a simple Monte Carlo (MC) model is applied to a very complex carbon dioxide reduction system to study competitive adsorption. This application of Monte Carlo simulations demonstrates the challenges of extracting meaningful parameters from empirically fitting MC simulations to isotherms derived from nanoparticle-enhanced Raman spectra.&#13;
&#13;
Part II examines how adsorption influences the electrostatic potential in the electrochemical double layer using molecular dynamics simulations. Building on previous work from the Willard group, this chapter calculates how adsorbate polarity and coverage influence two characterizations of Coulombic interactions: the Poisson potential and the Madelung potential. Both potentials, while having different shapes as a function of distance from the electrode surface, exhibit strong sensitivity to water structure. At high coverage, adsorbates decrease the number of interfacial waters, shift the position of the molecular layers of water at the interface, and disrupt the water's orientational order. Lastly, cross-sections of the 3D Poisson potential parallel to the electrode surface reveal large heterogeneity in Poisson potential values as a result of adsorbates. This suggests that 1D electrostatic potential profiles are not enough to understand forces in the electrochemical double layer.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162295</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The combustion of droplets of heavy liquid fuels</title>
<link>https://hdl.handle.net/1721.1/162243</link>
<description>The combustion of droplets of heavy liquid fuels
Simpson, Hugh Cameron.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1954; Vita.; Bibliography: leaves 542-552.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162243</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the use of selected derivatives in the specific characterization of alcohols, phenols, amines and mercaptans</title>
<link>https://hdl.handle.net/1721.1/162241</link>
<description>On the use of selected derivatives in the specific characterization of alcohols, phenols, amines and mercaptans
Zeng, Zhaolun, 1898-1967.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemistry, 1926; Vita.
</description>
<pubDate>Fri, 01 Jan 1926 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162241</guid>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chemistry of octahalodirhenate (III).</title>
<link>https://hdl.handle.net/1721.1/162232</link>
<description>The chemistry of octahalodirhenate (III).
Robinson, William Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1966; Bibliography: leaves 112-116.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162232</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the economics of advertising.</title>
<link>https://hdl.handle.net/1721.1/162229</link>
<description>On the economics of advertising.
Schmalensee, Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1970; Vita.; Bibliography: leaves 485-496.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162229</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An essay on taxation and growth in an enclave economy.</title>
<link>https://hdl.handle.net/1721.1/162228</link>
<description>An essay on taxation and growth in an enclave economy.
Francis, Alfred Alexander Jaques.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1968; Bibliography: leaves 130-131.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162228</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Hiddenness Argument and The Limits of Doxastic Positioning</title>
<link>https://hdl.handle.net/1721.1/162162</link>
<description>The Hiddenness Argument and The Limits of Doxastic Positioning
Garcia, Nicole
If God exists, how clear or obvious should we expect his existence to be? Particularly if such a God is interested in having a personal relationship with us? The Hiddenness Argument contends that it should be much clearer than it in fact is. If God exists and really wants us to know as much, we should expect to inhabit a very different epistemic situation than we in fact do – one that rules out the possibility of rational nonbelief. The evidence available for God’s existence should be so definitive that it would be impossible for us to fail to believe on good epistemic terms. &#13;
My dissertation sets out to delegitimize this expectation. While available objections to it challenge its propriety – God may have overriding reasons for disclosing his existence in a way that allows for rational nonbelief – my account challenges its feasibility – whether it is in principle possible to meet. Expecting divine self-disclosure to rule out rational nonbelief assumes that it can rule out rational nonbelief – but can it? By homing in on the nature, mechanics, and limitations of disclosure itself, I show it cannot.&#13;
Divine self-disclosure is an instance of what I call doxastic positioning: the process of positioning someone to rationally form some belief – in this case, theistic belief. To rule out rational nonbelief, God’s disclosure would need to make theistic belief a universal rational requirement. Given the success conditions of doxastic positioning, this would involve the provision of sufficient evidence as well as the universal possession and appreciability of said evidence. But no matter what evidence God provides, God cannot guarantee on pain of irrationality that humans will possess or be in a position to appreciate the available evidence, leaving rational nonbelief an ever live possibility. —Which is just to say that divine self-disclosure cannot rule out rational nonbelief. The expectation that it would do so is, then, illegitimate and the hiddenness argument depending on it fails.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162162</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proteolethargy is a pathogenic mechanism in chronic disease</title>
<link>https://hdl.handle.net/1721.1/162161</link>
<description>Proteolethargy is a pathogenic mechanism in chronic disease
Moreno, Shannon
The pathogenic mechanisms of many diseases are well understood at the molecular level, but there are prevalent syndromes associated with pathogenic signaling, such as diabetes and chronic inflammation, where our understanding is more limited. Here, I present evidence that pathogenic signaling suppresses the mobility of a spectrum of proteins that play essential roles in cellular functions known to be dysregulated in these chronic diseases. The reduced protein mobility, which we call proteolethargy, was linked to cysteine residues in the affected proteins and signaling-related increases in excess reactive oxygen species. Diverse pathogenic stimuli, including hyperglycemia, dyslipidemia, and inflammation, produce similar reduced protein mobility phenotypes. I propose that proteolethargy is an overlooked cellular mechanism that may account for various pathogenic features of diverse chronic diseases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162161</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Sequences Underlying Directed Turning in C. elegans</title>
<link>https://hdl.handle.net/1721.1/162160</link>
<description>Neural Sequences Underlying Directed Turning in C. elegans
Kramer, Talya
Complex behaviors like navigation rely on sequenced motor outputs that combine to generate effective movement. The brain-wide organization of the circuits that integrate sensory signals to select and execute appropriate motor sequences is not well understood. Here, we characterize the architecture of neural circuits that control C. elegans olfactory navigation. We identify error-correcting turns during navigation and use whole-brain calcium imaging and cell-specific perturbations to determine their neural underpinnings. These turns occur as motor sequences accompanied by neural sequences, in which defined neurons activate in a stereotyped order during each turn. Distinct neurons in this sequence respond to sensory cues, anticipate upcoming turn directions, and drive movement, linking key features of this sensorimotor behavior across time. The neuromodulator tyramine coordinates these sequential brain dynamics. Our results illustrate how neuromodulation can act on a defined neural architecture to generate sequential patterns of activity that link sensory cues to motor actions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162160</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Dimensional Statistics for Causal Inference and Panel Data</title>
<link>https://hdl.handle.net/1721.1/162159</link>
<description>High-Dimensional Statistics for Causal Inference and Panel Data
Klosin, Sylvia
This dissertation develops new econometric tools for causal inference in panel data settings, with a focus on addressing key biases that arise in high-dimensional and dynamic environments. While this dissertation is motivated by the need to flexibly measure the economic impacts of climate change, the methods I develop are much more general. They apply broadly to panel data problems across empirical economics—including in labor, development, and industrial organization—where standard fixed effects estimators may fail.&#13;
The first chapter identifies a previously overlooked source of bias in fixed effects panel estimators, which I term dynamic bias. This bias arises when dynamic feedback—where past outcomes influence current outcomes—is ignored in the estimating equation. I show that dynamic bias can be severe even when treatments are randomly assigned and that it often exceeds the well-known Nickell bias. To address this, I develop a bias-corrected estimator that is consistent in panels with a fixed number of time periods. I apply this method to estimate the effects of temperature shocks on GDP, where accounting for dynamic feedback reduces estimated damages substantially. The second chapter, coauthored with Max Vilgalys, proposes a flexible estimator for continuous treatment effects using panel data with fixed effects. We extend the double debiased machine learning (DML) framework to this setting and prove consistency and asymptotic normality. In an application to U.S. agriculture, we show that our estimator captures nonlinear effects of temperature on crop yields more accurately than standard linear models, estimating substantially larger damages from extreme heat. The final chapter further generalizes the methodological contribution by introducing a non-parametric estimator of the average dose-response function. Building on recent developments in DML and automatic double machine learning (ADML), I propose a novel debiasing strategy that directly estimates the bias correction term, yielding favorable theoretical properties.&#13;
Together, these essays provide practical and theoretically grounded tools for applied researchers working with panel data, particularly in settings characterized by high dimensionality, continuous treatments, or dynamic feedback.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162159</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Health Economics</title>
<link>https://hdl.handle.net/1721.1/162158</link>
<description>Essays in Health Economics
Moran, Kelsey C.
This dissertation comprises three essays in health economics. The first paper studies how imperfect electronic health record (EHR) system compatibility, or interoperability, affects patients. My coauthors, Rebekah Dix and Thi Mai Anh Nguyen, and I find that improved EHR interoperability between hospitals leads to better health outcomes and lower costs for shared patients. We also show that hospitals prefer sending patients to facilities with more compatible EHR systems, causing patient reallocation across providers based on technological factors. Using a model of patient flows, we estimate that eliminating these frictions would generate substantial welfare gains by improving patient outcomes and reducing allocative distortions. The second paper examines how regulatory requirements influence hospital charity care by analyzing the Hill-Burton Act of 1946, which allocated $6 billion to over 3,500 hospitals in exchange for providing free care to uninsured patients. I find that after these obligations expire, hospitals strategically reduce charity care by 30% and decrease admissions of charity-eligible patients by 14%. These patients subsequently shift to neighboring public and non-profit hospitals, where they must pay for care and experience higher mortality rates. The third paper, co-authored with Ari Bronsoler, Joseph Doyle, and John Van Reenen, studies the broad impact of Health Information Exchange (HIE) on patient outcomes. Using a newly compiled database of state HIE laws as instruments for hospital HIE, we find that HIE significantly reduces mortality from infectious diseases and hospital readmission rates for common conditions. With HIE usage increasing by 50 percentage points from 2009 to 2019, we estimate this technology saved approximately 27,000 lives annually through improved care coordination and public health response.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162158</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Firms and Technology in Development Economics</title>
<link>https://hdl.handle.net/1721.1/162157</link>
<description>Essays on Firms and Technology in Development Economics
Houeix, Deivy
My thesis investigates the relationships between technology and firms in lower-income countries. I explore both the economic impacts of technology on firms—how it affects their economic outcomes and their relationships with stakeholders, both within and across firms—and the determinants of technology adoption: what underlying factors impede or drive the uptake of new technologies? I combine a diverse set of methods, including large-scale field experiments, economic theory, and long-term collaborations with local partners to investigate how small firms—such as taxis, retailers, and others—transform their practices and relationships as they adopt new technologies. My work centers on West Africa, one of the world’s poorest regions, where economic research remains limited. &#13;
&#13;
Chapter 1: The first chapter investigates the idea that digital technologies have the potential to increase firm productivity. However, they often come bundled with data observability, which can be a double-edged sword. Observability reduces information frictions and can increase efficiency, but some agents may lose their informational rent and thus resist adoption. I explore this trade-off between observability and adoption through two field experiments conducted over nearly two years. These experiments, guided by contract theory, introduce digital payments to the Senegalese taxi industry in partnership with the country's largest payment company. In the first experiment, I randomize access to digital payments for drivers (employees) and transaction observability to taxi owners (employers). I find that digital payments reduce drivers' cash-related costs by about half but also serve as effective monitoring tools for taxi owners. Transaction observability substantially increases driver effort, contract efficiency, and the duration of owner-driver relationships. However, 50% of drivers—primarily the worst-performing and poorest—decline to adopt digital payments when transactions are observable. The second experiment shows that the adoption rate doubles when drivers are assured that owners will not be able to observe their transactions. I develop a theoretical framework and use the experimental variations to estimate the welfare impacts of policy counterfactuals. I show that removing transaction observability would maintain moral hazard problems but broaden adoption and thus increase overall welfare—an approach ultimately implemented by the payment company. These findings highlight the crucial role of information embedded in digital technologies, as it magnifies gains for adopting firms but can deter initial adoption.&#13;
&#13;
Chapter 2: In the second chapter, I conduct a randomized experiment to study the nationwide technology diffusion of a new digital payment technology in Senegal. By leveraging two novel sources of network data—mobile money transactions and anonymized phone contact directories covering the near universe of the adult population in Senegal—I causally identify three sets of adoption spillovers from taxi firms randomized to receive early access to the technology: intra-industry among taxi firms; inter-industry between taxi drivers and other small businesses; and inter-regional spillovers from the capital city to businesses in other urban centers. I show that spillovers go beyond strategic complementarities, reflecting social learning within firms' social networks, driven by social ties and remote interactions.&#13;
&#13;
Chapter 3: In the third and final chapter, I explore the fact that search and trust frictions have historically made it hard for small firms in lower-income countries to buy inputs from foreign markets. The growth in smartphone ownership and social media usage has the potential to alleviate these barriers. Informed by a dynamic model of relational contracting, we run a field experiment leveraging these technological tools to provide exogenous variation in (1) search frictions and (2) trust frictions (adverse selection and moral hazard) in a large international import market. In our search treatment, we connect a randomly selected 80% of 1,862 small garment firms in Senegal to new suppliers in Turkey. We then cross-randomize two trust treatments that provide additional information about the types (adverse selection) and incentives (moral hazard) of these new suppliers. Alleviating search frictions is sufficient to increase access to foreign markets: in all treated groups, firms are 26% more likely to have the varieties a mystery shopper requests and the goods sold are 30% more likely to be high quality. However, the trust treatments are necessary for longer-term impact: using both transaction-level mobile payments data and a follow-up survey, we show that these groups are significantly more likely to develop the connections into relationships that persist beyond the study. These new relationships lead to increases in medium-run profit and sales. Finally, we use the treatment effects to estimate the model and evaluate counterfactuals where we set various combinations of the frictions to zero, finding that the largest gains come from eliminating adverse selection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162157</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Policy, Misallocation and Production Networks</title>
<link>https://hdl.handle.net/1721.1/162156</link>
<description>Essays in Industrial Policy, Misallocation and Production Networks
Garg, Tishara
The thesis comprises three chapters studying industrial policy, misallocation and macroeconomic propagation in developing countries. The first chapter studies the impact of place-based industrial policies on equilibrium selection with Indian industrial parks as the empirical context. The second and third chapters study firm networks in Chile and Turkey, respectively: using a common theoretical framework, the former studies the incidence of distortions while the latter studies the propagation of a large refugee shock. The first chapter introduces a method to study the impact of policy events on equilibrium selection in settings where strong complementarities may lead to multiple equilibria and coordination failures. Many industrial policies are rooted in the idea of coordination failures and ‘big-push’ theories, yet empirical evidence on their effectiveness remains limited, since distinguishing equilibrium shifts from direct changes in fundamentals is challenging. Leveraging tools from industrial organization and algebraic geometry, I develop an approach to study coordination effects without imposing strong assumptions on the distribution or responsiveness of economic fundamentals. The method identifies the ‘types’ of factual and counterfactual equilibria through a three-step procedure: model estimation and inversion, equilibrium enumeration, and type assignment. Types of factual equilibria may be used to examine how events, like urban infrastructure, subsidy drives, or trade liberalization, affect equilibrium selection. Types of counterfactual equilibria further allow decomposition of observed effects into fundamentals- versus coordination-driven. I apply this method to study industrial zones in India. Using a newly assembled dataset, I find that municipalities receiving an industrial zone see a 60% increase in non-farm employment over 15 years, with significant spillovers to non-targeted sectors and municipalities. 
Combining the methodology with event study designs, I find that industrial zones increase the probability of escaping a low-industrialization equilibrium by 38%, with coordination effects explaining roughly one-third of the observed change in outcomes. The second chapter (joint with David Atkin, Baptiste Bernadac, Dave Donaldson, and Federico Huneeus) combines unique datasets from Chile to quantify the full incidence of distortions for the first time. Economic distortions—such as market power, taxes, credit constraints, etc.—are fundamental in understanding the difference between developing and developed economies. Recent work has documented the pervasive extent of economic distortions and how they lead to substantial misallocation, or aggregate productivity loss. Far less well understood is how these phenomena affect members of society differently. We embed a new dataset, which we build by linking workers and owners to firms, firms to each other, firms to consumers, and firms and consumers to the government, inside a general equilibrium model of the Chilean economy. Armed with internal estimates of distortions on exchanges throughout the economy, as well as data on the network of such linkages, we conduct a series of counterfactual simulations that illuminate the incidence of distortions in our model economy. We find that the burden of distortions falls relatively more on the shoulders of the poor, the young, and women. The final chapter (joint with Ahmet Gulek) investigates how immigration-induced wage shocks can propagate beyond the regions receiving immigrants through the production network. Using the Syrian refugee crisis in Turkey as a quasi-experiment and the near universe of domestic firm-to-firm transaction data from VAT records, we show that the immigration shock propagates both forward and backward along the supply chain. Firms in non-host regions who directly or indirectly buy from host regions demand more labor. 
Firms who sell to host regions weakly increase their sales. Estimates imply an elasticity of substitution between labor and intermediate goods of 0.76 and an elasticity of substitution of nearly 1 between intermediates. Counterfactual analyses show that the spillover effects on non-host regions are economically meaningful when the host regions are central nodes of the domestic trade network. For example, a 1% increase in labor supply in Istanbul decreases real wages in Istanbul by 0.56% and increases real wages in the average non-host city by 0.38%.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162156</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The influence of nutrient availability on tumor metabolism</title>
<link>https://hdl.handle.net/1721.1/162152</link>
<description>The influence of nutrient availability on tumor metabolism
Abbott, Keene Louis
Tumor growth and progression are profoundly influenced by nutrient availability in the tumor microenvironment (TME). Nutrient accessibility not only shapes cancer metabolism but also affects therapeutic responses, genetic dependencies, and metastatic behavior. This dissertation explores how nutrient availability modulates these cancer phenotypes. First, we examined how environmental nutrient levels influence the efficacy of drugs targeting metabolic enzymes, showing that their effectiveness varies under different nutrient conditions. We also found that the nutrient composition of the TME in solid tumors is primarily determined by the tissue of origin rather than by the tumor itself. By contrast, leukemia cells actively reshape their nutrient environment. Furthermore, we assessed the impact of physiological nutrient conditions on genetic dependencies, identifying numerous genes whose essentiality is dictated by nutrient levels and uncovering potential new therapeutic targets in leukemia. Finally, we established that single nutrients do not dictate metastatic site preference. Instead, metastatic growth is driven by a complex interplay among multiple nutrients in the microenvironment and the intrinsic properties of cancer cells. These findings provide critical insights into how the nutrient environment influences tumor metabolism.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162152</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Healthy Behavior: Essays in Health and Behavioral Economics</title>
<link>https://hdl.handle.net/1721.1/162151</link>
<description>Healthy Behavior: Essays in Health and Behavioral Economics
Shreekumar, Advik
These essays examine beliefs and decision-making in health settings, emphasizing the role of attention, information, and technology in shaping behavior. The first essay studies human error in chest x-ray interpretation, a common and consequential medical task. It casts radiologists as facing a classical decision-theory problem, derives a novel martingale test for optimal behavior, and implements this test through a prudent application of machine learning to anonymized health records from the Beth Israel Deaconess Medical Center. I find that 58 percent of radiologists make predictable mistakes when assessing cardiac health on chest x-rays. Roughly two thirds of errors are explainable as individual radiologists making inconsistent decisions, and one third reflect the possibility that algorithms detect novel or complex signals. The second essay studies app-based mindfulness meditation, which has grown popular due to claims about its effects on mental well-being, productivity, and decision-making. We assess these claims in an experiment with 2,384 US adults, randomizing access and usage incentives for a popular mindfulness app. App access improves an index of anxiety, depression, and stress at two weeks and four weeks, with persistent effects three months later. It also improves earnings on a focused proofreading task by 2 percent. The third essay studies a tradeoff governments face when making recommendations in an evolving crisis. We investigate the effect of taking an early position on how much people believe later recommendations, using an online experiment with 1,900 US respondents in early April 2020. We present participants with a CDC projection about coronavirus death counts and randomize exposure to information that highlights how the President previously downplayed the threat. When the President’s inconsistency is salient, participants are less likely to revise their beliefs about death counts from the CDC projection. 
This aligns with a model of signal extraction from government communication, and has implications for changing guidelines in other settings. JEL Codes: D91, I12, C8
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162151</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Optimization using Reinforcement Learning, Dynamic Programming, and Column Generation</title>
<link>https://hdl.handle.net/1721.1/162148</link>
<description>Large-Scale Optimization using Reinforcement Learning, Dynamic Programming, and Column Generation
Paskov, Alexander Spassimirov
One of the most enduring challenges in large-scale optimization is determining how to push the boundaries of scalability without compromising on performance or rigor. For decades, the exponential advances in computational power offered a straightforward solution: bigger problems could simply be tackled by bigger machines. However, in recent years, it has become increasingly apparent that pure computational force alone can no longer keep pace with the ever-growing complexity and scale of real-world applications. Additionally, despite the remarkable success of general-purpose methods for linear and integer optimization, these methods often struggle when confronted with domains that involve intricate dynamics, massive dimensionality, or a need for fine-grained sequential decisions. The simple question thus arises: can we design new optimization methods that scale more appropriately? In this thesis, we propose using dynamic programming, reinforcement learning, and column generation as a practical way to address this need across a variety of settings.&#13;
&#13;
We begin by developing and refining our methodology within the context of reinforcement learning and dynamic programming. We then move on to the application of column generation, and finally show how these techniques can be combined to supercharge fundamental machine learning methods with large-scale optimality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162148</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of phage detection by bacterial innate immune proteins</title>
<link>https://hdl.handle.net/1721.1/162140</link>
<description>Mechanisms of phage detection by bacterial innate immune proteins
Zhang, Tong
Bacteria are under constant threat from their viral predators, known as bacteriophages (or phages). As a result, bacteria have evolved diverse immune mechanisms to protect themselves from phage infection, such as restriction modification, CRISPR-Cas, and abortive infection (Abi) systems. Because Abi systems function through killing infected cells to protect the bacterial population, they must stay inactive prior to infection, but rapidly detect phages and promptly trigger an immune response. Although many novel Abi systems have been discovered in recent years, how they detect phage infection remains poorly understood. Here, I demonstrated that CapRel_SJ46, an anti-phage protein from E. coli, senses phage infection by directly binding to the newly synthesized major capsid proteins (MCPs) of certain phages. Binding to the MCPs releases autoinhibition of the CapRel_SJ46 toxin domain, enabling it to pyrophosphorylate tRNAs, which blocks translation to restrict viral infection. Detection of the MCPs is analogous to how eukaryotic innate immune systems detect foreign invaders through conserved pathogen-associated molecular patterns (PAMPs). In addition to the MCPs, I found that CapRel_SJ46 can directly bind to another unrelated and structurally different phage protein, called Gp54. Bas11 phages harbor two trigger proteins, and both are sensed by CapRel_SJ46 during infection, indicating that a bacterial immunity protein can sense more than one phage-encoded trigger. Additionally, I demonstrated that another CapRel homolog, CapRel_Ebc, senses the inhibition of a host cell division protein by the phage-encoded trigger, which is analogous to effector-triggered immunity in eukaryotes, where innate immune proteins sense virulence-associated activities of pathogens rather than directly sensing PAMPs. 
Lastly, I characterized another Abi system, named RAZR (ring-activated zinc-finger RNase), and showed that RAZR forms a ring-shaped supramolecular complex of over 1 MDa upon sensing a phage-encoded PAMP, leading to activation of its RNase activity to restrict phage infection. This finding highlights the importance of higher-order molecular assembly in bacterial innate immunity. Collectively, my thesis work has provided new insights into the molecular mechanisms by which bacterial innate immune systems detect phage infection.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162140</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Theory and New Practical Methods for Solving Large-Scale Linear and Conic Optimization</title>
<link>https://hdl.handle.net/1721.1/162139</link>
<description>New Theory and New Practical Methods for Solving Large-Scale Linear and Conic Optimization
Xiong, Zikai
In the last several years there has been a dramatic shift in the way many large-scale linear programs (LPs) are solved in practice, with classic methods (the simplex and interior-point methods) being replaced by the primal-dual hybrid gradient method (PDHG) to solve large-scale LP problems. While PDHG, with heuristic enhancements and GPU implementation, has been very successful in solving large-scale LP problems, its performance can have substantial variance, and an intuitive understanding of the drivers of its performance has been lacking. In this context the research in this thesis has three related goals: (i) the development of new theory to explain the performance of PDHG for large-scale LPs, (ii) the development of new practical methods for solving large-scale LP problems based on PDHG, and (iii) the generalization of such new theory and new practical methods to the more general class of conic optimization problems.&#13;
 &#13;
The thesis is organized as follows. Chapter 1 is an introduction and a unified summary of the thesis research as a whole.  Chapter 2 presents computational guarantees for PDHG for solving LP problems based on two instance-dependent natural geometric condition measures, namely the "limiting error ratio" and the "LP sharpness." The connection between these condition measures and other LP condition numbers is also developed. Chapter 3 presents computational guarantees for more general conic optimization problems using the geometry of the primal-dual (sub)level sets.  Based on our analysis we propose a central-path Hessian-based rescaling to enhance algorithmic performance by improving the (sub)level set geometry. We present computational results that show the potential of our methodology to improve the performance of PDHG in practice. Chapter 4 presents a closed-form expression of the iteration complexity of PDHG for LP instances with unique optima. The iteration bound has a reciprocal relationship with (i) stability under data perturbation, (ii) proximity to multiple optima, and (iii) LP sharpness. Chapter 5  considers the iteration complexity of LP instances under a sub-Gaussian model of instance generation.  In this model we show that PDHG is a polynomial-time algorithm with high probability.  This result partially shrinks the gap between theory and practice for PDHG by showing that PDHG can solve "most" LP instances in polynomial time. Finally, Chapter 6 presents a practical PDHG-based large-scale conic optimization solver with GPU enhancements.  In this chapter we present computational experiments that show that the solver is more efficient than other first-order methods and commercial solvers for large-scale conic optimization problems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162139</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Informing Public Health Policy Design and Operations with Analytics: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/162138</link>
<description>Informing Public Health Policy Design and Operations with Analytics: Methods and Applications
Zerhouni, El Ghali Ahmed
Data-driven approaches hold immense potential for improving public health decision-making in complex, uncertain, and high-risk environments. Yet several key challenges stand in the way of translating large-scale, heterogeneous, and noisy data into timely and actionable policy insights. These challenges are particularly pronounced when conventional modeling tools fall short, for instance, in settings where health risks arise from infection propagation in intricate supply chains, rapidly mutating pathogens, or delayed and fragmented surveillance systems. This thesis introduces a suite of novel methodologies and use cases at the intersection of operations research, epidemiology, and machine learning to address some of these challenges and support more informed, timely, and proactive public health decisions.&#13;
&#13;
A central focus of this thesis is the management of health risks related to zoonotic viruses, which are pathogens that emerge in animals and can potentially jump to humans, then further evolve to become transmissible between humans. These viruses pose a growing global health threat. Notably, outbreaks of zoonotic viruses frequently emerge in live animal markets in developing countries, even when infection rates in the upstream farms supplying these markets remain consistently low. Motivated by this empirical observation, the first chapter of this thesis develops an innovative epidemiological model called the Transmission, Interaction, and Persistence (TIP) model. This model integrates stochastic supply chain dynamics and environmental transmission mechanisms, and sheds light on how market-level factors amplify the risk of infection outbreaks. It yields actionable insights regarding the potential effectiveness of risk mitigation strategies such as frequent market sanitation and supply consolidation.&#13;
&#13;
Since March 2020, the world has experienced multiple waves of infections caused by the SARS-CoV-2 virus. Similar to past pandemics, SARS-CoV-2 has spread in waves, each driven by different genetic variants of the virus. Public health agencies have often struggled to predict in advance which variants would drive a new wave of infections. The second chapter of this thesis introduces an AI-enabled early warning system for emerging viral variants. The newly developed predictive model incorporates genetic and epidemiological features and is trained and tested on over 9 million sequenced SARS-CoV-2 variants across 30 countries. It accurately predicts whether each new variant will drive a significant wave of infections within the following 3 months.&#13;
&#13;
There is ample biological and empirical evidence regarding the roles of mutating variants and population immunity in driving infection waves of respiratory viruses. Motivated by this, the third chapter of the thesis develops the first epidemiological model, called the Immunity-Variants-Epidemic (IV-Epidemic) model, that explicitly captures circulating variants and the evolving population immunity profile to more accurately reflect the long-term trajectory of variant-driven pandemics. It incorporates variant evolution and population immunity dynamics, and is able to replicate the observed multi-wave infection patterns without requiring ad hoc recalibration.&#13;
&#13;
The fourth chapter of the thesis focuses on post-marketing pharmacovigilance, which is key to drug safety regulatory work. It presents PR1SM (Patients Really are 1st in Signal Management), an AI-based framework for identifying potential drug safety signals using post-marketing surveillance data. By structuring adverse event reports into parallel time series at multiple levels of clinical aggregation and adjusting for exposure trends, PR1SM complements standard disproportionality methods to detect safety signals earlier and with greater sensitivity in both real-world and synthetic settings.&#13;
&#13;
Collectively, the chapters of this thesis demonstrate how operations research can be combined with domain-specific methods in biology, epidemiology, and pharmacovigilance to inform data-driven public health strategies. The proposed analytical frameworks offer interpretable, scalable, and policy-relevant tools to create more resilient public health systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162138</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation and activity of the 5-methylcytosine DNA glycosylase ROS1 contributes to DNA methylation patterning across development</title>
<link>https://hdl.handle.net/1721.1/162129</link>
<description>Regulation and activity of the 5-methylcytosine DNA glycosylase ROS1 contributes to DNA methylation patterning across development
Hemenway, Elizabeth A.
DNA methylation patterning is a consequence of the opposing activities of DNA methyltransferases and DNA demethylases. A 5-methylcytosine DNA glycosylase, ROS1, removes DNA methylation from the Arabidopsis genome. In flowering plants, two distinct female gametes, the egg cell and the central cell, are fertilized, producing what will become the embryo and the endosperm of the seed. In Arabidopsis, a 5-methylcytosine DNA glycosylase, DME, demethylates regions in the central cell genome, leading to methylation differences between maternally- and paternally-inherited endosperm genomes after fertilization. DME is required for endosperm gene imprinting. Homologues of DME include ROS1, DML2, and DML3. It is unknown whether any of these DNA glycosylases are required for endosperm methylation patterning. We show that ROS1 prevents hypermethylation of paternally-inherited alleles in the endosperm at regions that lack maternal- or paternal-allele methylation in the wild type. Thus, ROS1 promotes epigenetic symmetry between genomes in the endosperm by preventing paternal genome hypermethylation. We investigated the dynamics of DNA methylation at the edges of transposable elements (TEs), where ROS1 is known to prevent spreading of DNA methylation into neighboring regions of the genome. We found that DNA methylation spreading in the ros1 mutant is unidirectional, which has implications for the field’s understanding of the mechanism of ROS1 activity at TEs as well as the mechanism of methylation establishment at TEs. We also investigated the regulation of ROS1 expression through the interaction of ROS1 and the RdDM pathway at the ROS1 promoter. Using a previously characterized deletion in the ROS1 promoter, we examined the consequences of ROS1 regulation across the genome in the presence of a wild-type RdDM pathway. 
Finally, I discuss the implications of the work I have done in understanding the role of ROS1 across plant development and the mechanisms by which DNA methylation is patterned in plants, and propose future directions related to these findings.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162129</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conflict between bacteriophages and a mobile genetic element in bacterial immunity</title>
<link>https://hdl.handle.net/1721.1/162128</link>
<description>Conflict between bacteriophages and a mobile genetic element in bacterial immunity
Loyo, Christian L.
Bacteriophages (phages) are the most abundant biological entities on the planet. They are ubiquitous and numerous across the many environments in which bacteria are found. To combat phage predation, bacteria have evolved numerous immune strategies, so-called anti-phage defense systems. Likewise, phages encode counter-defenses that prevent the function of anti-phage defense systems. Many anti-phage defense systems are found within mobile genetic elements, like plasmids, temperate bacteriophages, and integrative and conjugative elements (ICEs). In this thesis, I show how an ICE in the bacterium Bacillus subtilis, called ICEBs1, protects populations of cells from predation by phages in the SPβ family. ICEBs1 carries a phage defense system, spbK, which, upon phage infection, causes cell death prior to the generation of phage progeny. This mechanism of phage defense is considered abortive infection and protects populations of cells via altruistic death of infected cells. I show that during SpbK-mediated abortive infection, cells experience NAD⁺ depletion dependent on the Toll/interleukin-1 receptor (TIR) domain of SpbK. Depletion of NAD⁺ likely starves both the cell and the infecting phage of energy, killing the cell and preventing the generation of phage progeny. I found that SpbK recognizes phage infection by binding to the phage portal protein, YonE, through an interaction between the N-terminus of SpbK and the clip domain of YonE. Furthermore, I show that a gene in the SPβ-like phage Φ3T, nip, was necessary and sufficient to prevent SpbK-mediated anti-phage defense. I found that Nip binds to the TIR domain of SpbK and inhibits its NADase activity to prevent abortive infection and enable viable phage production. These findings highlight the conflicts that occur between mobile genetic elements and phages, and the broader co-evolutionary arms race between bacteria and their viral predators.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162128</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Effects of Immigration on Labor Markets</title>
<link>https://hdl.handle.net/1721.1/162125</link>
<description>Essays on the Effects of Immigration on Labor Markets
Gulek, Ahmet
This thesis consists of three chapters on the effects of immigration on labor markets. The first chapter studies the effects of an informal labor supply shock on the host regions, the second chapter investigates the spillover effects on non-host regions through the production network, and the third chapter provides a method that the first two chapters rely on.&#13;
&#13;
The first chapter studies the effects of Syrian refugees, who are denied work permits and thus can only work informally, on Turkish firms and workers. Using travel distance as an instrument for refugee location, I show that low-skill natives lose both informal and formal salaried jobs. I document two mechanisms: formal firms reduce their formal labor demand and new firms do not enter the formal economy. Estimates imply an elasticity of substitution of 10 between formal and informal workers. Counterfactual exercises predict that granting refugees work permits would have created up to 120,000 formal jobs in the economy through higher informal wages.&#13;
&#13;
The second chapter, co-written with Tishara Garg, investigates how immigration-induced wage shocks can propagate beyond the regions receiving immigrants through the production network. Using the Syrian refugee crisis in Turkey as a quasi-experiment and the near universe of domestic firm-to-firm transaction data from VAT records, we show that the immigration shock propagates both forward and backward along the supply chain. Firms in non-host regions who directly or indirectly buy from host regions demand more labor. Firms who sell to host regions weakly increase their sales. Estimates imply an elasticity of substitution between labor and intermediate goods of 0.76 and an elasticity of substitution of nearly 1 between intermediates. Counterfactual analyses show that the spillover effects on non-host regions are economically meaningful when the host regions are central nodes of the domestic trade network. For example, a 1% increase in labor supply in Istanbul decreases real wages in Istanbul by 0.56% and increases real wages in the average non-host city by 0.38%.&#13;
&#13;
The third chapter, co-written with Jaume Vives-i-Bastida, proposes a Synthetic Instrumental Variables (SIV) estimator for panel data that combines the strengths of instrumental variables and synthetic controls to address unmeasured confounding. We derive conditions under which SIV is consistent and asymptotically normal, even when the standard IV estimator is not. Motivated by the finite sample properties of our estimator, we introduce an ensemble estimator that simultaneously addresses multiple sources of bias and provide a permutation-based inference procedure. We demonstrate the effectiveness of our methods through a calibrated simulation exercise, two shift-share empirical applications, and an application in digital economics that includes both observational data and data from a randomized control trial. In our primary empirical application, we examine the impact of the Syrian refugee crisis on Turkish labor markets. Here, the SIV estimator reveals significant effects that the standard IV does not capture. Similarly, in our digital economics application, the SIV estimator successfully recovers the experimental estimates, whereas the standard IV does not.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162125</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Political Economy</title>
<link>https://hdl.handle.net/1721.1/162122</link>
<description>Essays in Political Economy
Sapiro-Gheiler, Eitan
In this thesis, I describe three approaches to political communication and decision-making. Chapter 1, "Persuasion with Ambiguous Receiver Preferences," studies an informed Sender who knows only the average threshold belief needed to persuade a Receiver and wishes to safeguard against unfavorable distributions of individual preferences. Chapter 2, "Discovery through Trial Balloons," examines how correlation between different projects affects information disclosure by a principal who designs a bundle of projects that an agent can then choose to approve. Chapter 3, "Strategic Opinion-Writing on Appellate Courts," describes how and why the partisan composition of quasi-random panels of judges on the U.S. Federal Courts of Appeals affects consensus-building. I describe each chapter in more detail below.&#13;
&#13;
The first chapter, "Persuasion with Ambiguous Receiver Preferences," describes a Bayesian persuasion problem where Receiver has a private belief cutoff for Sender’s preferred action and Sender has maxmin preferences over all Receiver type distributions with known mean and bounds. This problem can be represented as a zero-sum game where Sender chooses a mean-preserving contraction of the prior over states and adversarial Nature chooses a Receiver type distribution. I formalize the connection between maxmin persuasion and similar games used to model political spending, all-pay auctions, and competitive persuasion. In both a standard binary-state setting and a new continuous-state setting, Sender optimally linearizes the prior distribution over states to create a distribution of posterior means that is uniform on a known interval with an atom at the lower bound of its support.&#13;
&#13;
The second chapter, "Discovery through Trial Balloons," presents a model of a principal and an agent who face symmetric uncertainty about the agent's value for two correlated projects. The principal chooses which project values to publicly discover and makes a proposal to the agent, who accepts if and only if the expected sum of values is positive. I characterize optimal discovery for various principal preferences: maximizing the probability of the grand bundle, of having at least one project approved, and of a weighted combination of projects. My results show when discovering ex-ante disfavored projects may be optimal; these conclusions rationalize the inclusion of controversial policies in omnibus bills and the presence of moonshot projects in organizations.&#13;
&#13;
The third chapter, "Strategic Opinion-Writing on Appellate Courts," studies consensus and decision-making by powerful judges on the U.S. Federal Courts of Appeals. Using quasi-random three-judge panels on these courts from 1970–2013, I document a novel pattern in dissenting opinions: compared to party-unanimous panels, party-mixed panels cause all judges to dissent more often, and at equal rates. This result is incompatible with classical models of judicial politics and is unique to partisanship. To explain my results, I introduce a theoretical framework where judges' favored coalitions are more homogeneous along both partisan and non-partisan dimensions. Using judge metadata, I find suggestive evidence for the model's result that polarization increases dissents by judges of panel-minority law school or gender. With state-of-the-art machine learning tools from natural language processing, I generalize beyond dissents, showing that those same features drive differences in opinion text even when rulings are unanimous.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162122</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering the biogenesis pathways for human mitochondrial alpha-helical outer membrane proteins using genome-wide approaches</title>
<link>https://hdl.handle.net/1721.1/162121</link>
<description>Uncovering the biogenesis pathways for human mitochondrial alpha-helical outer membrane proteins using genome-wide approaches
Muthukumar, Gayathri A.
Mitochondria are critical double-membraned organelles that act as biosynthetic and bioenergetic cellular factories, with the outer membrane providing an interface with the rest of the cell. In humans, the outer mitochondrial membrane (OMM) contains ~110 different proteins that are encoded in the nuclear genome, synthesized in the cytosol, and must be targeted to the membrane. OMM proteins are defined by the secondary structure of their transmembrane domains (TMDs) as two classes: beta-barrel proteins, evolutionarily derived from the outer membranes of gram-negative bacteria, and alpha-helical proteins, an evolutionarily more recent class. Beta-barrel proteins are first translocated into the mitochondrial intermembrane space (IMS) via the translocase of the outer membrane (TOM) and subsequently inserted by the sorting and assembly machinery (SAM) complex. Comparatively, alpha-helical OMM protein biogenesis is poorly understood. Alpha-helical proteins are classified by the number and orientation of their TMDs with respect to the membrane as signal-anchored (a single N-terminally anchored TMD), tail-anchored (a single C-terminally anchored TMD), or polytopic (multiple TMDs). While the novel OMM insertase MTCH2 was discovered using a genome-wide CRISPRi screen for alpha-helical tail-anchored substrates (Guna et al., 2022), the broader biogenesis and targeting pathways for all biophysically diverse alpha-helical proteins remained unexplored. Critically, the mechanisms of cytosolic chaperoning and targeting for all alpha-helical OMM proteins were unknown. This thesis presents a large-scale investigation that systematically delineates alpha-helical biogenesis pathways, from cytosolic chaperoning to membrane insertion to quality control of unassembled or mis-localized TMDs. Genome-wide CRISPRi screens in human cells for varied signal-anchored and polytopic substrates revealed novel cytosolic chaperones, targeting and quality control factors. 
Arrayed follow-up genetic screens against a large and biophysically more varied panel of substrates revealed that alpha-helical proteins are triaged in the cytosol by TMD number and topology, thus defining a set of ‘rules’ for biogenesis. Cell biological and in vitro biochemistry experiments further discovered a new role for the ribosome-bound chaperone NAC in regulating polytopic protein biogenesis and characterized a novel signal-anchored targeting factor, TTC1, that chaperones TMDs using a conserved C-terminal hydrophobic groove. Cumulatively, this work both defines the pathways for biogenesis and quality control of alpha-helical OMM proteins and identifies mechanisms by which mitochondrial protein composition, and thereby function, can be tuned through manipulation of mitochondrial membrane protein biogenesis machinery in diverse pathophysiological conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162121</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and use of advanced nuclear diagnostics and neural networks to diagnose 3D morphology and power balance in inertial fusion implosions at OMEGA and NIF</title>
<link>https://hdl.handle.net/1721.1/162119</link>
<description>Development and use of advanced nuclear diagnostics and neural networks to diagnose 3D morphology and power balance in inertial fusion implosions at OMEGA and NIF
Kunimune, Justin H.
Inertial confinement fusion (ICF) is one of several ways to perform nuclear fusion in the laboratory, and is thus appealing as a potential future energy source. Achieving high gain at ICF facilities like the National Ignition Facility (NIF) and OMEGA requires new ways of measuring implosion conditions such as the shape of the shell at minimum volume and the power balance in the hot-spot. This dissertation describes several novel instruments and analysis techniques for these measurements. First is a method to combine information from existing diagnostics that probe asymmetries, such as the neutron imaging system, the real-time neutron activation detectors, and the neutron time-of-flight spectrometers. Our technique uses a forward fit to a simplified physics model to produce a single self-consistent 3D picture of the implosion, with Markov chain Monte Carlo providing robust uncertainty quantification. Second is a knock-on deuteron imager to measure deuterons elastically scattered out of the shell by fusion neutrons. This diagnostic would enable a full 3D reconstruction of both the hot-spot and shell geometry. Analysis procedures were developed for this diagnostic, and commissioning experiments were carried out to validate the procedures and associated hardware, providing improved capabilities for imaging OMEGA implosions. Third is the MRSt, a spectrometer that would record a time-resolved neutron spectrum. Extensive modelling of the MRSt's response and analysis procedures predicts that the system as designed will meet the top-level physics requirements needed for novel insights, and a path forward for implementing the spectrometer has been identified. These projects represent significant advancements in our ability to diagnose ICF implosions, which will improve our understanding of degradations and failure modes in ICF implosions and lead to higher gain overall. 
This will hopefully one day enable nuclear fusion energy as a clean energy source.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162119</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Institution and Innovation</title>
<link>https://hdl.handle.net/1721.1/162118</link>
<description>Essays on Institution and Innovation
Zhou, Jie
This dissertation explores the complex interplay between institutions and innovation across three distinct contexts: digital protectionism, academic governance, and language policy. The first essay examines whether protectionist policies can foster domestic innovation in the digital economy, focusing on China’s Great Firewall (GFW)—the world’s most extensive system of internet censorship. Leveraging the quasi-random timing of foreign app blockages, I find that Chinese substitute apps experienced a 30% increase in user base following foreign app bans. Using novel data extracted from compiled app code, I show that in-house technological development at these firms rose by 14% two years after blockage. This innovation diffused broadly, as both Chinese and foreign apps subsequently adopted more Chinese-origin technologies. I further document that expanded access to user data—enabled by increased data requests and third-party sharing—was a key driver. Quasi-random introductions of new data access types causally boosted in-house development, and firms receiving shared user data also intensified innovation. These findings suggest that digital protectionism, under certain conditions, can catalyze domestic technological growth. The second essay investigates how powerful institutional actors shape academic research and innovation in China. Using data on publications from researchers at 109 top Chinese universities and leadership transitions within these institutions, I apply natural language processing (NLP) techniques to assess alignment between faculty and leader research agendas. Faculty shift their research toward that of incoming leaders—particularly those appointed by the Communist Party—immediately after leadership transitions. This influence is stronger in fields with histories of political control or academic repression. 
While some alignment may reflect coordination, I find significant costs to research quality: transitions to low-productivity leaders lead to sharp increases in topic similarity and declines in citation impact, especially for research most closely aligned with new leadership. These results highlight the tension between centralized control and research autonomy in high-stakes innovation environments. The third essay explores how language policy affects national identity formation, analyzing Taiwan’s Chinese language unification campaign. Exploiting variation in individuals’ age-based ability to learn Mandarin and their linguistic distance from it, I implement a difference-in-differences design to identify the policy’s long-term effects. I find that cohorts more affected by the policy became more fluent in Mandarin but were less likely to identify as Taiwanese or support self-determination. The intergenerational disruption of native language transmission plays a key role, with the identity impact comparable to 11% of the effect of losing a parent. The policy also increased consumption of state-controlled media among treated cohorts. These findings underscore how language policies can reshape political identity and social cohesion. Together, these essays show that institutions—through mechanisms of control, exclusion, and cultural shaping—play a pivotal role in determining the direction, diffusion, and societal implications of innovation. JEL code: O33, O38, L86, I23, Z13, C23
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162118</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Causal Estimation</title>
<link>https://hdl.handle.net/1721.1/162111</link>
<description>Machine Learning for Causal Estimation
Quintas-Martínez, Víctor M.
The intersection of causal inference and machine learning (ML) has given rise to powerful tools for tackling complex empirical questions, especially in high-dimensional or highly nonlinear settings where traditional methods often fall short. This thesis develops and analyzes novel ML-based methods for estimating causal effects, with a focus on flexibility, robustness, and valid statistical inference.&#13;
&#13;
The first chapter addresses the challenge of regularization and model selection bias that arises when ML is used to estimate nuisance parameters. We propose a new framework for automatic debiased machine learning (DML), which we term Riesz regression. This approach constructs debiased estimating equations without requiring explicit characterizations of the debiasing terms, allowing for seamless integration with any ML algorithm. We extend the framework to generalized regressions, including high-dimensional generalized linear models (GLMs). To illustrate its practical value, we apply Riesz regression to a study of discrimination in lending, showing how neural networks can be leveraged for automatic debiasing. Monte Carlo simulations demonstrate that our method frequently outperforms conventional inverse propensity weighting approaches.&#13;
&#13;
The second chapter introduces a new method for causal change attribution, which quantifies how different causal mechanisms contribute to shifts in the distribution of an outcome variable over time or across groups. Building on a given causal model, our approach combines regression and re-weighting to identify and estimate the relevant counterfactual quantities. Our methodology is multiply robust, meaning it remains valid even when some components of the model are misspecified. We establish consistency and asymptotic normality. Moreover, we show how our algorithm can be embedded into popular attribution frameworks such as Shapley values, which then inherit its statistical guarantees. Simulation studies confirm the excellent performance of our method, and we demonstrate its utility through an applied case study.&#13;
&#13;
The third chapter tackles a common challenge in applied work: estimating and conducting inference on many related causal parameters, such as causal effects of many treatments or on multiple outcomes. We derive uniform error bounds and construct valid simultaneous confidence bands for collections of average treatment effects (ATEs) estimated via DML. Our framework accommodates both finite sets and continua of functionals, and leverages strong Gaussian approximation results to account for dependence across estimates. This enables rigorous simultaneous inference with control over familywise error rates.&#13;
&#13;
Together, these contributions advance the state of the art in machine learning for causal estimation by unifying flexible modeling with rigorous inferential theory. The methods developed are broadly applicable to problems in economics, public policy, healthcare, and beyond, where understanding causal relationships in complex, data-rich environments is essential. This thesis emphasizes practical applicability while maintaining strong theoretical guarantees, equipping researchers with tools to make credible, data-driven causal claims.&#13;
JEL: C14, C21, C45
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162111</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Information Economics</title>
<link>https://hdl.handle.net/1721.1/162110</link>
<description>Essays on Information Economics
Veiel, Rafael
This thesis contains five chapters. Each chapter deals with the question of how information affects equilibrium behavior in strategic problems. &#13;
&#13;
Chapter 1 is my job market paper, "Limits of Global Games." It considers the impact of information on equilibrium multiplicity in two-player games of strategic complementarities. Games with strategic complementarities often exhibit multiple equilibria. In a global game, players privately observe a noisy signal of the underlying payoff matrix. As the noise diminishes, a unique equilibrium is selected in almost all binary-action games with strategic complementarities, a property known as "limit uniqueness." This chapter describes the limits of that approach in two-player games with more than two actions. Unlike in binary-action games, limit uniqueness is not an intrinsic feature of all games with strategic complementarities. When the noise is symmetric, we demonstrate that limit uniqueness holds if and only if the payoffs exhibit a generalized ordinal potential property. Moreover, we provide an example illustrating how this condition can be easily violated.&#13;
&#13;
Chapter 2 is co-authored with Olivier Gossner and is titled "Strategic Type Spaces." We provide a strategic foundation for information: in any given game with incomplete information, we define strategic quotients as information representations that are sufficient for players to compute best responses to other players. We prove (1) existence and essential uniqueness of a minimal strategic quotient, called the Strategic Type Space (STS), in which a type is given by an interim correlated rationalizability hierarchy together with the set of beliefs over other players' types and nature that rationalize this hierarchy; (2) that this minimal STS is a quotient of the universal type space; and (3) that the minimal STS has a recursive structure that is captured by a finite automaton.&#13;
&#13;
Chapter 3 is also co-authored with Olivier Gossner and is titled "Information Design for Rationalizability." We study (interim correlated) rationalizability in games with incomplete information. For each given game, we show that a simple and finitely parameterized class of information structures is sufficient to generate every outcome distribution induced by general common prior information structures. In this parameterized family, players observe signals of two kinds: a finite signal and a common state with additive, idiosyncratic noise. We characterize the set of rationalizable outcomes of a given game as a convex polyhedron.&#13;
&#13;
Chapter 4 is co-authored with Stephen Morris and Dirk Bergemann and is titled "A Strategic Topology on Information Structures." Two information structures are said to be close if, with high probability, there is approximate common knowledge that interim beliefs are close under the two information structures. We define an "almost common knowledge topology" reflecting this notion of closeness. We show that it is the coarsest topology generating continuity of equilibrium outcomes. We show that finite information structures are dense in the almost common knowledge topology, and thus it is without loss to restrict attention to finite information structures in information design problems.&#13;
&#13;
Finally, chapter 5 is a short note describing an information aggregation mechanism that players can use before playing a game of strategic complementarities under incomplete information. In such a game, players may have an incentive to share overly optimistic information with other players, thus inducing them to play higher actions. In this mechanism, players trade a token before playing the game: players who want to communicate good news must purchase this worthless token and burn resources. The note shows that players need only observe the market-clearing price that arises from the token trades to aggregate their private information. Each element in a player's private information set is encoded as a prime in the prime factorization of the market-clearing price, and the element contained in every player's information set is identified as the prime with the highest multiplicity.&#13;
JEL Classification Codes: C72, D82
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162110</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Private and Social Insurance</title>
<link>https://hdl.handle.net/1721.1/162109</link>
<description>Essays on the Economics of Private and Social Insurance
Solomon, Adam
The first chapter, joint with Sylvia Klosin, studies the 'scope' of insurance. Distinct risks are typically insured separately. A single 'aggregate' contract that pays more when many shocks occur simultaneously, but less when positive shocks offset negative shocks, is utility-increasing absent moral hazard. However, an aggregate contract discourages diversification, leading to a novel insurance-incentive trade-off. We study the US Federal Crop Insurance Program (FCIP), where farmers can choose the 'scope' of their policy: whether to insure each field separately, or all fields of the crop as an aggregate unit. Starting in 2009, the FCIP introduced a large subsidy increase for aggregate insurance. We show that farms that moved to aggregate insurance reduced crop diversity and irrigation, farmed less and conserved more land, and insured price risk, all reducing the diversification of their risks. This increased the variability of farm yield by 14%, raising the fiscal cost of aggregate insurance by about $1.5 billion per year. We derive and estimate a formula for the optimal contract scope. We find that an aggregate policy is never welfare-maximizing, but that the optimal policy lies partway between separate and aggregate. More generally, we discuss scope's widespread relevance in insurance design.&#13;
&#13;
The second chapter proceeds from the fact that increasing climate risk has caused insurance in many locations to become unaffordable or unavailable. I study a novel policy response in Australian home insurance: government-provided, mandatory, actuarially fair reinsurance for cyclone damage. In this scheme, the government reinsures the cyclone risk, while the private market covers the remaining idiosyncratic risk. I find that public reinsurance leads to a 21% decrease in home insurance premiums and an 11% increase in the probability of insurance being offered at all. In terms of mechanisms, I rule out subsidization and show that the ambiguity of the risk has a minimal impact on premiums and insurance offerings. Instead, the entirety of the increase in insurance offered, and much of the decrease in premiums, comes from reducing the implicit costs associated with insuring spatially correlated risk. Increased competition due to insurer entry explains the remaining premium reductions. This isolates the cause of market dysfunction, correlated risk, and suggests that public reinsurance is a cost-effective policy to rehabilitate insurance markets for catastrophic climate risks.&#13;
&#13;
The third chapter studies bundling in insurance contracts. Every insurance contract bundles risks, and explicit bundling discounts are common. I show theoretically that bundling arises in a competitive market whenever correlation between risk types enables insurer 'cream-skimming': willingness-to-pay for insurance against one risk must be negatively correlated with expected costs from the other risk. I analyze long-term care insurance, in which both-spouse bundles are discounted by 20-35%. I show that cream-skimming incentives are sufficient to explain these discounts, and model-predicted equilibrium bundling discounts closely match empirical discounts. I rule out standard economies of scale and differential contract lapsation as alternative explanations of the offered discounts. Counterfactually, banning bundling would raise welfare by 10% by correcting separate-market unraveling, while mandatory family bundling would reduce welfare by 15% by exacerbating advantageous selection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162109</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Development Economics and Trade</title>
<link>https://hdl.handle.net/1721.1/162108</link>
<description>Essays in Development Economics and Trade
Wiles, Edward
This thesis comprises three chapters. &#13;
&#13;
The first chapter, written with Deivy Houeix, studies search and trust frictions, which have historically made it hard for small firms in lower-income countries to buy inputs from foreign markets. The growth in smartphone ownership and social media usage has the potential to alleviate these barriers. Informed by a dynamic model of relational contracting, we run a field experiment leveraging these technological tools to provide exogenous variation in (1) search frictions and (2) trust frictions (adverse selection and moral hazard) in a large international import market. In our search treatment, we connect a randomly selected 80% of 1,862 small garment firms in Senegal to new suppliers in Turkey. We then cross-randomize two trust treatments that provide additional information about the types (adverse selection) and incentives (moral hazard) of these new suppliers. Alleviating search frictions is sufficient to increase access to foreign markets: in all treated groups, firms are 26% more likely to have the varieties a mystery shopper requests and the goods sold are 30% more likely to be high quality. However, the trust treatments are necessary for longer-term impact: using both transaction-level mobile payments data and a follow-up survey, we show that these groups are significantly more likely to develop the connections into relationships that persist beyond the study. These new relationships lead to increases in medium-run profit and sales. Finally, we use the treatment effects to estimate the model and evaluate counterfactuals where we set various combinations of the frictions to zero, finding that the largest gains come from eliminating adverse selection.&#13;
&#13;
The second chapter, written with Habib Ansari and Dave Donaldson, is motivated by a modern revolution in spatial economic modeling that aims to answer quantitative counterfactual questions using models that feature micro-level heterogeneity. This heterogeneity is often assumed to come from particular parametric families, such as the Frechet distribution in Eaton and Kortum's (2002) Ricardian model. While these parametric choices greatly enhance the tractability of model simulations, it is unknown how sensitive the answers to counterfactual questions are to these assumptions of convenience, because there are infinitely many alternative distributions of heterogeneity to evaluate. We overcome this challenge by building a general trade model that leverages recent advances in the robustness literature. Our method calculates sharp bounds on the values that model counterfactuals could take, while still exactly matching all aggregate trade data points and a gravity-like moment condition and satisfying equilibrium constraints, under all possible distributions of underlying heterogeneity that lie within a given divergence from a chosen reference distribution. Applying this method to the Eaton and Kortum (2002) model, we find that the gains from trade in these models could be several times larger or smaller than they appear to be under standard benchmark distributions, even if heterogeneity is drawn from a distribution at least as similar to Frechet as the parametric alternatives commonly explored in sensitivity analysis.&#13;
&#13;
The third chapter, written with Tishara Garg, studies regional integration, a major issue both across and within countries. Yet, integration can take many forms, ranging from lowering tariffs to lowering administrative frictions. We provide evidence on the gains to removing administrative frictions using rich microdata on firm-to-firm trade to study a major fiscal integration reform in India. Using an event-study style regression derived from a gravity model, we estimate that the reform increased interstate trade by around 15% on average. We plug this estimate into the model and use it to calculate the aggregate and distributional welfare gains. We find that all but a handful of districts saw welfare gains, with an aggregate welfare increase of around 1%.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162108</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Anisotropic Noise and Fast Gates with Superconducting Qubits</title>
<link>https://hdl.handle.net/1721.1/162107</link>
<description>Exploring Anisotropic Noise and Fast Gates with Superconducting Qubits
Rower, David A.
Rapid recent progress in the engineering of quantum systems across multiple platforms has enabled quantum science at never-before-seen precision and scale, and may yield useful quantum technology. However, two major challenges slow such progress: (1) decoherence from interactions between target systems and uncontrolled external degrees of freedom, and (2) errors in the control of target systems, which often arise from physics beyond the models used to design control protocols. We report on three novel results addressing both coherence and control, utilizing superconducting qubits. &#13;
&#13;
Our first result is the characterization of superconducting qubit flux noise, a primary source of decoherence, under the influence of weak, in-plane magnetic fields. We reveal two trends which serve as a novel experimental benchmark for microscopic theories of flux noise: (1) a 1/f to approximately Lorentzian transition in the noise power spectral density below 1 Hz, and (2) noise suppression above 1 MHz. &#13;
&#13;
Our second result is the suppression of coherent qubit-control errors induced by the counter-rotating component of strong, linearly-polarized drives. We establish two complementary protocols for mitigating such errors, which previously limited the speed of single-qubit gates for low-frequency qubits. The first protocol realizes circularly-polarized drives in circuit quantum electrodynamics. The second protocol, commensurate pulses, uses pulse-timing restrictions to homogenize counter-rotating errors and enable their mitigation with conventional calibration routines. With commensurate pulses, we demonstrate world-class single-qubit gate fidelities reliably exceeding 99.997%.&#13;
&#13;
Our third result is the observation of a novel signature in the decoherence dynamics of qubits subject to anisotropic transverse noise. Through injected noise experiments with a fluxonium qubit, we directly observe time-domain state-purity oscillations at twice the qubit frequency arising from the intrinsic qubit Larmor precession. We probe the oscillation dependence on noise anisotropy, lab-frame orientation, and power spectral density. Such oscillations are a result of physics beyond standard qubit-decoherence models within the rotating-wave approximation, and were previously unobserved in experiment.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162107</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in International Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/162101</link>
<description>Essays in International Macroeconomics
Gertler, Sarah M.
How do exchange rates and tariffs shape the economy? Their effects, both independently and in interaction with each other, are puzzling from the standpoint of classical economic models. This dissertation focuses on how macroeconomic factors -- sticky prices (Chapter 1), interest rates (Chapter 2), and scale economies (Chapter 3) -- shape the effects of exchange rates and tariffs. Unveiling these factors helps illuminate the nature of these shocks' influence.&#13;
&#13;
Chapter 1: The first chapter revisits the classic relationship between exchange rate pass-through, how exchange rates influence prices, and expenditure-switching, the resulting substitution between home and foreign goods. Expenditure-switching is the main channel through which exchange rates transmit to the real economy. Conventional wisdom holds that this channel's strength is increasing in exchange rate pass-through into prices: assuming the import demand elasticity is independent of pass-through, larger effects of exchange rates on prices yield larger substitution of spending between domestic and foreign goods. In this paper, I show that this conventional wisdom does not hold. Using confidential US micro-data and a panel-data local projection technique, I show that quantity-exchange rate elasticities are similar across high and low pass-through environments. In essence, low pass-through is subject to a larger import demand elasticity than is high pass-through. I then propose an extension of a standard small open economy New Keynesian model that adds a layer of import-buying (retail) firms, in which both exporting and importing firms are subject to price rigidities. I show empirically and theoretically that this "import buyer rigidity" dampens overall adjustment, but less so under low pass-through because in this case the pass-through is more persistent. The model thus accounts for why the quantity-exchange rate elasticities are similar across pricing regimes. I conclude by exploring the implications of this framework for monetary and exchange rate policy, finding a stronger expenditure-switching channel under low pass-through.&#13;
&#13;
Chapter 2: The second chapter, joint with Victor Orestes, documents how currency markets and trade flows respond to tariffs imposed by and on the US as related to other countries' macrofinancial position. We show that countries which maintain higher interest rates than the US depreciate much more strongly -- to the point of offsetting the tariffs on impact -- than their low-interest counterparts. However, these effects are not as persistent as the tariff shocks. Our results highlight a US hegemonic asymmetry: tariffs imposed on the US have little effect on currency markets, US demand for high-interest countries' goods is relatively elastic, but the latter's demand for US exports is not. Monetary policy can be an effective tool to target the exchange rate fluctuation as it has a similar incidence as tariffs. Finally, we present evidence that the interest rate analysis could draw from trade-network fundamentals. To rationalize our findings, we modify a baseline model of exchange rate determination using the interest rate as a "sufficient statistic" wedge in fundamentals. Our model indicates that the financial market imperfections we observe in data distort the global response to tariff escalation.&#13;
&#13;
Chapter 3: The third chapter proposes an answer to the question of why there is complete long-run pass-through of both tariffs and exchange rates in US exports, despite evidence of flexible markups. I develop a methodology to leverage tariffs and exchange rates to uncover the structural drivers of pass-through: the markup elasticity and the marginal cost scale elasticity. I derive and quantify the scale channel of pass-through, which can be decomposed into a bilateral scale effect and the novel "shock span" scale effect. The shock span channel arises because different correlation patterns across customers enter prices via the scale channel. Because exchange rates are correlated across trading partners, compared to tariffs they have greater capacity for shock-span effects of scale economies. Quantifying the bilateral and shock span components of the scale channel, the paper demonstrates that scale economies can rationalize the discrepancy between markup flexibility and observed pass-through.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162101</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Technology and Trade</title>
<link>https://hdl.handle.net/1721.1/162100</link>
<description>Essays on Technology and Trade
Kikuchi, Shinnosuke
This thesis consists of essays on technology and trade. In Chapter 1, I study how technology in the 21st century has changed the pattern of trade. I document that skill-abundant countries no longer have a comparative advantage in skill-intensive sectors. While this empirical relationship was strong in the 1980s, it weakened in the 1990s and disappeared by the 2000s. The decline is more pronounced in countries and sectors with higher automation. I find no such heterogeneous effects among countries and sectors more exposed to offshoring. Using a quantitative trade model incorporating automation and offshoring, I confirm that the observed changes in automation can account for the evolution of comparative advantage while observed changes in offshoring cannot. I conclude by revisiting the relationships between globalization, technology, and inequality through this model. Automation increases skill premia in developed countries with high automation and also raises welfare globally, whereas offshoring leads to smaller, more evenly distributed welfare gains.&#13;
&#13;
In Chapter 2 (joint with Daniel G. O'Connor), we turn to the geographic consequences of technology and trade by analyzing the role of granularity—the dominance of a few large firms in local labor markets. We propose a new economic geography model featuring granular firms subject to idiosyncratic shocks. We show that average wages increase in the size of the local labor market due to that granularity, and provide a sufficient statistic for the contribution of our mechanism. We further prove that too few firms enter in equilibrium. Using Japanese administrative data on manufacturing, we provide evidence consistent with our mechanism and quantify it. Our mechanism implies that markets with around 2 firms per sector have an elasticity of wages to population of 0.05 and firms capture only 85% of their contribution to production in profits. In large markets like Tokyo, the elasticity is around 0.001, and firm entry is approximately efficient. Enacting optimal place-based industrial policy would increase the number of firms in modest-sized cities by more than 30% and actually decrease the number of firms and people in Tokyo.&#13;
&#13;
In Chapter 3 (joint with Sagiri Kitao), we study the distributional consequences of technological and trade-induced polarization—wage and employment losses of middle-class workers relative to low- and high-skill groups. We build a model of overlapping generations who choose consumption, savings, labor supply, and occupations over their life-cycles, and accumulate human capital. We simulate a wage shift observed since the early 1980s and investigate individuals' responses. Polarization improves welfare of young individuals that are high-skilled, while it hurts low-skilled individuals across all ages and especially younger ones. The gain of the high-skilled is larger for generations entering in later periods, who can fully exploit the rising skill premium.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162100</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of Chromatin Landscape on and by the Human Sex Chromosomes</title>
<link>https://hdl.handle.net/1721.1/162099</link>
<description>Regulation of Chromatin Landscape on and by the Human Sex Chromosomes
Bokil, Neha Vijay
Sex chromosome constitution is the largest and oldest source of genetic variation in the human population. One sex chromosome—the “active X” (Xa)—is present in all individuals. The second sex chromosome differs between sexes; most males have a Y while most females have a second X, which adopts a distinct conformation from Xa and is termed the “inactive X” (Xi). Despite its name, the human Xi expresses ~20% of its genes. Xi-expressed genes and their Y homologs play critical gene regulatory roles. Examining mechanisms and effects of Xi gene expression is essential to understanding these functions. In this thesis, I investigate chromatin landscape across the human Xi to identify features of Xi-expressed and Xi-silent genes; I also interrogate the role of an Xi-expressed gene and its Y homolog in regulating chromatin genome-wide. To examine chromatin state differences between Xi-expressed and Xi-silent genes, we quantified H3K4me3, H3K27me3, and CTCF along Xi by linear modeling in cells of individuals with zero to three Xis. We demonstrate that Xi-expressed genes are enriched for H3K4me3 compared to Xi-silent genes. Moreover, Xi-silent genes near strongly Xi-expressed genes have higher H3K27me3 than other Xi-silent genes. CTCF shields strongly Xi-expressed gene promoters from surrounding heterochromatin. We propose a framework associating combinations of chromatin marks with subcategories of Xi-expressed and Xi-silent genes. A key Xi-expressed gene, KDM6A, encodes an H3K27me3 demethylase—enabling Xi to impact chromatin structure genome-wide. Its Y homolog, UTY, is thought to encode a catalytically dead enzyme. However, we demonstrate that Xi and Y copy number-dependent changes to H3K27me3 across autosomes are strongly correlated. Moreover, KDM6A knockdown results in increased H3K27me3 at similar genomic regions as UTY knockdown. We posit that KDM6A and UTY share demethylase-dependent functions. 
Deciphering features and genome-wide effects of Xi expression is essential to understanding fundamental mechanisms of gene regulation and the shared and differential roles of the sex chromosomes outside the reproductive tract. This work highlights critical chromatin-level differences between Xi-silent and Xi-expressed genes, the effects of an Xi-expressed gene on chromatin structure genome-wide, and striking similarities between Xi and Y in modulating autosomal chromatin structure and gene expression.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162099</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Econometrics and Policy Evaluation</title>
<link>https://hdl.handle.net/1721.1/162098</link>
<description>Essays on Econometrics and Policy Evaluation
Vives-i-Bastida, Jaume
This thesis consists of four chapters that study the statistical properties of synthetic control methods and their application to public policy evaluation and the digital economy. &#13;
&#13;
The first chapter, co-written with Ahmet Gulek, proposes a Synthetic Instrumental Variables (SIV) estimator for panel data that combines the strengths of instrumental variables and synthetic controls to address unmeasured confounding. We derive conditions under which SIV is consistent and asymptotically normal, even when the standard IV estimator is not. Motivated by the finite sample properties of our estimator, we introduce an ensemble estimator that simultaneously addresses multiple sources of bias and provide a permutation-based inference procedure. We demonstrate the effectiveness of our methods through a calibrated simulation exercise, two shift-share empirical applications, and an application in digital economics that includes both observational data and data from a randomized control trial. In our primary empirical application, we examine the impact of the Syrian refugee crisis on Turkish labor markets. Here, the SIV estimator reveals significant effects that the standard IV does not capture. Similarly, in our digital economics application, the SIV estimator successfully recovers the experimental estimates, whereas the standard IV does not.&#13;
&#13;
The second chapter, co-written with Ignacio Martinez, proposes a Bayesian alternative to the synthetic control method and explores the frequentist properties of the method in the context of linear factor models. In this chapter, we characterize the conditions&#13;
on the factor model primitives (the factor loadings) for which the statistical risk minimizers are synthetic controls (in the simplex). Then, we propose a Bayesian alternative to the synthetic control method that preserves the main features of the standard method and provides a new way of doing valid inference. We explore a Bernstein-von Mises style result to link our Bayesian inference to the frequentist inference. For linear factor model frameworks, we show that a maximum likelihood estimator (MLE) of the synthetic control weights can consistently estimate the predictive function of the potential outcomes for the treated unit and that our Bayes estimator is asymptotically close to the MLE in the total variation sense. Through simulations, we show that there is convergence between the Bayesian and frequentist approaches even in sparse settings. Finally, we apply the method to revisit the study of the economic costs of the German reunification and the Catalan secession movement. The Bayesian synthetic control method is available in the bsynth R-package.&#13;
&#13;
The third chapter recognizes that synthetic control methods often rely on matching pre-treatment characteristics (called&#13;
predictors) of the treated unit, and that the choice of predictors and how they are weighted plays a key role in the performance and interpretability of synthetic control estimators. This chapter proposes the use of a sparse synthetic control procedure that penalizes the number of predictors used in generating the counterfactual to select the most important predictors. I derive, in a linear factor model framework, a new model selection consistency result and show that the penalized procedure has a faster mean squared error convergence rate. Through a simulation study, I then show that the sparse synthetic control achieves lower bias and has better post-treatment performance than the unpenalized synthetic control. Finally, I apply the method to revisit the study of the passage of Proposition 99 in California in an augmented setting with a large number of predictors available.&#13;
&#13;
The fourth chapter, co-written with Alberto Abadie, proposes a set of simple principles to guide empirical practice in&#13;
synthetic control studies. The proposed principles follow from formal properties of synthetic control estimators, and pertain to the nature, implications, and prevention of over-fitting biases within a synthetic control framework, to the interpretability of the results, and to the availability of validation exercises. We discuss and visually demonstrate the relevance of the proposed principles under a variety of data configurations.&#13;
JEL: C23, C26, C11, C52.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162098</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Flow in Particle Collisions</title>
<link>https://hdl.handle.net/1721.1/162097</link>
<description>Energy Flow in Particle Collisions
Metodiev, Eric Mario
In this thesis, I introduce a new bottom-up approach to quantum field theory and collider physics, beginning from the observable energy flow: the energy distribution produced by particle collisions. First, I establish a metric space for collision events by comparing their energy flows. I unify many ideas spanning multiple decades, such as observables and jets, as simple geometric objects in this new space. Second, I develop a basis of observables by systematically expanding in particle energies and angles, encompassing many existing observables and uncovering new analytic structures. I highlight how the traditional criteria for theoretical calculability emerge as consistency conditions, due to the redundancy of describing an event using particles rather than its energy flow. Finally, I propose a definition of particle type, or flavor, which makes use of only observable information. This definition requires refining the notion of flavor from a per-event label to a statistical category, and I showcase its direct experimental applicability at colliders. Throughout, I synthesize concepts from particle physics with ideas from statistics and computer science to expand the theoretical understanding of particle interactions and enhance the experimental capabilities of collider data analysis techniques.
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162097</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Preoptic Neurocircuit that Regulates Blood Glucose Homeostasis</title>
<link>https://hdl.handle.net/1721.1/162096</link>
<description>A Preoptic Neurocircuit that Regulates Blood Glucose Homeostasis
Roessler, Julian McFadden
The preoptic area (POA) is the core thermoregulatory center of all known endothermic species and balances heat generation and cooling in response to environmental stimuli. This delicate balance is executed via a brain-body exchange of sensory information and thermoregulatory output that is intimately connected to the nutritional state of the organism. When faced with food deprivation, certain endotherms engage in torpor, a behavior in which body temperature and metabolic rate are substantially depressed to improve the probability of organismal survival. Induction of torpor is regulated by anteroventral POA (avPOA) Vglut2⁺/Adcyap1⁺ neurons, which are necessary and sufficient to induce this state. How these neurons regulate the metabolic depression observed during torpor remains poorly understood. In this work, we show that activation of avPOA_Vglut2/PACAP neurons results in temperature-independent changes in whole-body fuel usage, from glucose to fatty acids, driven predominantly via insulin signaling defects in skeletal muscle. This metabolic shift is executed via engagement of the hypothalamic-pituitary-adrenal axis, and impairment of this process via silencing of avPOA_Vglut2/PACAP neurons results in a loss of fasting glucose homeostasis. Taken together, these results nominate torpor-associated avPOA_Vglut2/PACAP neurons as core regulators of glucose homeostasis, and provide a basis for understanding how endotherms utilize hierarchical control of metabolism to tune energy expenditure and survive extreme periods of energetic deprivation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162096</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing population-level variation in mRNA splicing and&#13;
implications for human genetic interpretation</title>
<link>https://hdl.handle.net/1721.1/162095</link>
<description>Characterizing population-level variation in mRNA splicing and&#13;
implications for human genetic interpretation
Jacobs, Hannah N.
Alternative splicing occurs when a single gene sequence gives rise to multiple RNA sequences. DNA mutations in the gene sequence can alter this process, shifting the relative usage of RNA sequences. This relative usage is called percent spliced in (PSI). Changes in PSI sometimes trigger a change in function, at the level of a cell, an organism, or fitness. The consequences of splicing variability, and the contribution of genetic variation to this process, remain incompletely characterized.&#13;
&#13;
In this thesis, we seek to characterize the splicing events present in only a subset of the human population. We use the Genotype-Tissue Expression project (GTEx), which encompasses genomic DNA sequence information and bulk mRNA data from 49 tissues in 838 individuals. In this dataset we implement a 3-component beta-binomial model using RNA-sequencing reads, at a tissue-specific level, to reliably call splicing events present in a subset of the samples within a tissue. We call these naturally variable exons (NVEs), and identify a total of 57,271 unique NVEs in GTEx. We find NVEs across a large portion of the transcriptome, occurring in 75% of all protein-coding genes.&#13;
&#13;
The beta-binomial model generates a population distribution for each NVE, which we leverage to estimate an NVE frequency at a PSI level of interest. This enables us to compare NVEs by their frequencies. We find that NVEs tend to be either rare in the population (≤ 10% frequency) or quite common (≥ 90%). We find that NVEs at higher frequencies tend to lie in 5' untranslated regions, while those at lower frequencies tend to lie in coding regions. &#13;
&#13;
Sixty percent of NVEs have been previously found to be modulated by genetic variants. We find that proximity to a splice site is one of the most important predictors of whether a genetic variant will impact splicing in GTEx, which enables better predictions than existing methods (an AUC increase of 0.39). Surprisingly, we find that NVEs tend to lie in genetically constrained genes (depleted of loss-of-function mutations), with the lowest frequency NVEs occurring in the most constrained genes. We find a subset of genetically-modified NVEs that target genes in a manner consistent with inducing nonsense-mediated decay (NMD). We highlight several such variants linked to disease, including heart disease. &#13;
&#13;
These findings demonstrate that quantifying the population frequency of splicing events can reveal novel axes of molecular variability, and provide potential insight into the evolution of alternative splicing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162095</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical Characterization of the DUF3328 Protein in the Biosynthesis of Cyclic Peptide Cyclochlorotine</title>
<link>https://hdl.handle.net/1721.1/162094</link>
<description>Biochemical Characterization of the DUF3328 Protein in the Biosynthesis of Cyclic Peptide Cyclochlorotine
Huang, Wentao
Cyclic peptide natural products are valuable sources of medicines and exhibit significant biological and chemical diversity. Cyclochlorotine, a fungal cyclic pentapeptide produced by Talaromyces islandicus, possesses unique structural modifications, including dichlorination and hydroxylation, yet the enzymatic basis for these transformations remains poorly understood. This study biochemically characterizes the Domains of Unknown Function 3328 (DUF3328) protein family and investigates its role in cyclochlorotine biosynthesis.&#13;
Through transcriptomic sequencing and CRISPR/Cas9 knockout experiments, I revealed that CctP2 is essential for chlorination and CctR is required for hydroxylation. Computational sequence and structural analyses using AlphaFold suggested that DUF3328 proteins contain a conserved HxxHC(x)nHxxHC motif, a putative metal-binding site. Structural modeling further indicated that DUF3328 proteins form disulfide-linked homodimers, an unusual feature among biosynthetic enzymes.&#13;
To elucidate their biochemical roles, I purified CctR and CctP2 from Sf9 insect cells, overcoming challenges posed by their membrane association and intrinsic disorder. In vitro assays demonstrated that CctR is a copper-dependent enzyme that hydroxylates cyclochlorotine and that dimerization is essential for this activity. Mechanistic studies using isotopic labeling confirmed dioxygen as the oxygen source. Copper redox cycling was found to be essential, with Cu(I) required for catalysis.&#13;
This work establishes DUF3328 proteins as a new class of copper-dependent enzymes involved in fungal secondary metabolism. The discovery of their catalytic mechanisms expands our understanding of enzymology and provides a foundation for future enzyme characterization in this family. More broadly, this study highlights the power of computational tools such as AlphaFold in guiding the functional characterization of previously uncharacterized protein families.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162094</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Spatial Economics</title>
<link>https://hdl.handle.net/1721.1/162093</link>
<description>Essays on Spatial Economics
O'Connor, Daniel
This thesis comprises three chapters, each studying optimal policy in a model of spatial economics. The first chapter considers the policy problem of a country looking to influence the geopolitical actions of another country. The second chapter considers how a central government should design place-based transfers to fight local recessions. And the third chapter considers how granularity affects the geography of economic activity and what that might mean for optimal place-based policy.&#13;
&#13;
In the first chapter (joint with John Sturm Becko), we suppose a country anticipates that it may use trade as a point of&#13;
leverage in future geopolitical conflicts. How should it develop domestic industries and international trading relationships today in order to strengthen its hand tomorrow? Domestically, we show that the country should abstain from peacetime industrial policies if it can credibly threaten trade taxes as geopolitical punishments during conflict, but not otherwise. Internationally, its peacetime trade policy should promote the accumulation of foreign capital that makes foreign prices, not foreign welfare, more sensitive to trade during conflict. We apply these insights to provide the first quantitative exploration of the US's optimal policies for building geopolitical power vis-à-vis China. The optimal policy promotes US-China trade on both the import and export margins.&#13;
&#13;
In the second chapter, I note that many regions in the US experience depressed labor demand and high unemployment, even when the rest of the United States does not. How should the US government respond? In this chapter, I characterize optimal place-based transfers in a dynamic economic geography model with nominal wage rigidity and compare them to observed government transfers. I show that transfers not only have a stimulus effect—by boosting local demand—but also a migration effect—by encouraging local residents to stay. Analytically, I provide optimal transfer formulas that capture this trade-off and show, perhaps surprisingly, that the optimal transfer to a distressed region may be a tax due to the migration effect. All else equal, transfers should be larger in the short-run and when there are distressed regions nearby. Quantitatively, I find that observed transfers are both too small in the short-run and too large in the medium-run, achieving just over half of the gains from the fully optimal response to idiosyncratic local shocks. I conclude by exploring how the US government could have responded to the China trade shock in the 2000s.&#13;
&#13;
In the third chapter (joint with Shinnosuke Kikuchi), we ask: how does the fact that individual firms dominate labor markets affect the geography of economic activity? And what does it mean for the efficiency of firm entry? To answer these questions, we propose a new economic geography model featuring granular firms subject to idiosyncratic shocks. We show that average wages increase in the size of the local labor market due to that granularity, and provide a sufficient statistic for the contribution of our mechanism. We further prove that too few firms enter in equilibrium. Using Japanese administrative data on manufacturing, we provide evidence consistent with our mechanism and quantify it. Our mechanism implies that markets with around 2 firms per sector have an elasticity of wages to population of 0.05 and firms capture only 85% of their contribution to production in profits. In large markets like Tokyo, the elasticity is around 0.001, and firm entry is approximately efficient. Enacting optimal place-based industrial policy would increase the number of firms in modest-sized cities by more than 30% and actually decrease the number of firms and people in Tokyo. JEL Codes: F1, E3, R1.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162093</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Novel Technologies to Investigate DNA Double-Strand Break Repair Uncovers a Role for the ATM Kinase in Error-Free NHEJ with Implications for Neurodegenerative Diseases</title>
<link>https://hdl.handle.net/1721.1/162091</link>
<description>Development of Novel Technologies to Investigate DNA Double-Strand Break Repair Uncovers a Role for the ATM Kinase in Error-Free NHEJ with Implications for Neurodegenerative Diseases
Kruswick, Alex J.
DNA double strand breaks (DSBs) are considered to be the most lethal genotoxic lesion because they can result in chromosomal translocations or a major loss of genetic information if repaired incorrectly. To preserve genomic integrity, mammalian cells have evolved a set of complementary and redundant repair pathways that faithfully repair DSBs. Consequently, eukaryotic cells utilize an evolutionarily conserved set of protein kinase signaling pathways that recognize and respond to DNA damage by pausing cell cycle progression and recruiting DNA repair machinery to ultimately determine the fidelity of DSB repair. Mutations and/or acquired defects that compromise the function of DNA damage response (DDR) pathways result in enhanced mutagenesis and underlie the development and progression of cancer and neurodegenerative conditions. How cells choose which DSB repair pathways to use when fixing a DSB in order to maximize repair fidelity is incompletely understood.&#13;
To better understand how cells decide which repair pathway to use when fixing DSBs and to specifically investigate protein kinase signaling that coordinates DSB repair pathway selection, we developed a set of multicolor fluorescent reporter systems, named DSB-Spectrum and DSB-Prism. DSB-Prism is uniquely designed to report on the choice between DSB repair via error-free non-homologous end joining (EF-NHEJ), mutagenic end joining (mut-EJ), alternative end joining (alt-EJ), homologous recombination (HR), and single strand annealing (SSA) at a single break created within individual cells by CRISPR-Cas9. We demonstrate that DSB-Prism robustly reveals patterns of DSB repair pathway compensation following chemical inhibition or genetic perturbation of DDR repair factors.&#13;
We report that the majority, but not all, of EF-NHEJ repair requires DNA-PKcs. We observed that DNA-PKcs kinase activity is essential for its function in EF-NHEJ repair, while autophosphorylation of DNA-PKcs on the previously mapped ABCDE phosphorylation site cluster plays only a minor role in this process, primarily through the Ku80 DNA-PKcs long-range synaptic complex.&#13;
We utilized DSB-Prism to uncover a novel role for the ATM kinase in promoting EF-NHEJ repair at highly transcribed genes. We show that ATM promotes EF-NHEJ repair via two genetically distinct pathways independently of DNA-PKcs kinase signaling. First, ATM promotes EF-NHEJ through a phosphorylation-dependent interaction between 53BP1 and RIF1 independently of the Shieldin and CST complexes, which we propose serves to physically hold DSB ends together in a redundant manner with the core NHEJ-mediated end synapsis machinery. Second, we propose that ATM promotes EF-NHEJ via promoting R-loop resolution by both SETX and ERCC6L2. We show that the role of ATM in promoting EF-NHEJ is largely independent of MRN-dependent ATM activation, and completely independent of ROS-dependent ATM activation. We discover a novel N-terminal set of positively charged residues that we propose directly interact with the negatively charged DNA phosphate backbone adjacent to DSBs in order to activate ATM. These N-terminal positively charged residues, in combination with MRN, promote binding of ATM to chromatin but are particularly important for ATM’s function in promoting EF-NHEJ repair within the DSB-Prism reporter. &#13;
Finally, we characterized a cohort of ATM patient mutations and observe that the ability of ATM mutants to promote EF-NHEJ perfectly correlates with patient clinical A-T disease severity. We propose that this loss of EF-NHEJ repair is a major mechanistic cause of Purkinje cell death and cerebellar neurodegeneration observed in A-T patients.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162091</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Mechanisms that Determine the Edge&#13;
Electron Density Profile in Tokamaks</title>
<link>https://hdl.handle.net/1721.1/162086</link>
<description>Understanding the Mechanisms that Determine the Edge&#13;
Electron Density Profile in Tokamaks
Miller Hernández, Marco Andrés
The interaction between the physics of plasma turbulence and that of atomic neutral dynamics, intrinsic to the tokamak edge, makes prediction of edge profiles difficult. It is unclear to what extent neutral ionization, as opposed to particle transport, is responsible for the buildup of edge density gradients. To this end, this thesis combines electron and neutral measurements across the edge region with high-fidelity simulations of neutrals to study these processes in high density and high magnetic field plasmas on Alcator C-Mod. This is enabled by measurements of Lyman-α (Lyₐ) emission made by the LYMID camera, as well as measurements of electron density, nₑ, and electron temperature, Tₑ, by the edge Thomson scattering (ETS) system. These result in a large database of inferred neutral density, n₀, and ionization source, S_ion, as well as radial particle flux, Γ_D, and effective diffusivity, D_eff, for stationary periods. For selected discharges, these are used to impose additional constraints on simulations of neutral dynamics in the plasma edge using SOLPS-ITER. This methodology is used to examine stiffness in the edge gradients forming the so-called “pedestal” in the high-confinement mode (H-mode) in response to increased ionization. This phenomenon is found to be associated with changes to local particle transport, and is observed to be correlated with a local parameter governing the influence of turbulence from interchange instabilities as opposed to that resulting from drift-waves. Reaching the threshold in this parameter may be avoided through improved particle control and is found to also be highly dependent on the 2D distribution of neutrals in the unconfined plasma region. The competition between interchange modes and the drift-wave is probed on Alcator C-Mod through validation of a semi-empirical model for tokamak operational boundaries.
The separatrix operational space (SepOS) model [Eich &amp; Manz, Nuclear Fusion (2021)] predicts boundaries for the L-H transition, the L-mode density limit, and the ideal MHD ballooning limit in terms of plasma quantities evaluated using separatrix parameters for a wide range of Alcator C-Mod plasmas. These boundaries are expressed in terms of dimensionless quantities borrowed from electromagnetic fluid drift turbulence (EMFDT) theory. The combined workflow of ETS and LYMID also allows for evaluation of quantities associated with plasma transport in connection with the plasma operational space. Experimental evidence of changes to particle transport near the boundaries is provided for the first time. Organization of Γ_D at the separatrix is observed in both H-modes and low-confinement modes (L-mode) for key dimensionless parameters. The model is also used to elicit the physics of high confinement regimes free of Type-I edge localized modes (ELMs). Databases of the transition to the improved-confinement (I-mode) and that between the Type-I ELMy H-mode and the Enhanced Dα (EDA) H-mode are studied using the SepOS framework. An empirical model for particle transport in the EDA H-mode to explain pedestal saturation in this regime is developed and tested. The findings are then leveraged for modeling of next-generation devices, with priority on core-edge integration and improved power handling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162086</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Economic Growth and Innovation</title>
<link>https://hdl.handle.net/1721.1/162076</link>
<description>Essays on Economic Growth and Innovation
Lensman, Todd
A foundational observation by Robert Solow holds that long-run economic growth is primarily driven by the innovation and adoption of new technologies (Solow, 1957). This set of essays provides new theory and evidence to explain how firms choose which technologies to innovate and adopt. A point of emphasis, particularly in the first two chapters, is that complementarities across firms play an important role in determining the rate and direction of technological change. These complementarities arise as firms build shared knowledge by innovating (Chapter 1) and from joint consumption of new products (Chapter 2). They provide a new channel through which market structure and property rights affect long-run technological change.&#13;
&#13;
Chapter 1. The first chapter is motivated by the observation that the direction of innovation shapes both current technologies and future innovation opportunities, as firms acquire expertise and create public knowledge through discovery. But how do firms choose which technologies to develop? Do they ever fail to exploit new technological paradigms? I build a new model of innovation and firm dynamics to study a novel link between market structure, the direction of innovation, and economic growth: Expertise in a current technology gives incumbents a comparative advantage at innovating it relative to entrants, who instead favor a new technology with higher growth potential. Each firm’s innovation decisions influence others through knowledge spillovers, so the initial market structure can affect the long-run direction of innovation. Concentrating R&amp;D resources in a small number of firms allows faster accumulation of expertise, raising growth when all firms innovate the same technology. But it can lower growth when firms face a technology choice, amplifying the influence of incumbents and potentially delaying or preventing the emergence of the new technology. I provide empirical evidence for the theory using data on firm patenting and R&amp;D expenditures. I also show that it explains the historical development of mRNA vaccines, and I explore its implications for the highly concentrated innovation of artificial intelligence.&#13;
&#13;
Chapter 2. In the second chapter, joint with Rebekah Dix, we observe that innovations often combine several components to achieve outcomes greater than the “sum of the parts.” We argue that such combination innovations can introduce an understudied inefficiency—a positive market expansion externality that benefits the owners of the components. We demonstrate the importance of this externality in the market for pharmaceutical cancer treatments, where drug combination therapies have proven highly effective. Using data on clinical trial investments, we document several facts consistent with inefficiently low private innovation: firms are less likely than publicly funded researchers to trial combinations, firms are less likely to trial combinations including other firms’ drugs than those including their own drugs, and firms often wait to trial combinations including other firms’ drugs until those drugs experience generic entry. Using microdata on drug prices and utilization, we quantify the externalities that arise from new combinations and find that the market expansion externality often dominates the standard negative business stealing externality, suggesting too little innovation in combination therapies. As a result, firms may have incentives to free ride off others’ innovation, which we analyze with a dynamic structural model of innovation decisions. We use the model to design cost-effective policies that advance combination innovation. Redirecting publicly funded innovation toward combinations with high predicted market expansion or consumer surplus spillovers minimizes crowd out of private investments, increasing the rate of combination innovation and total welfare while remaining budget neutral.&#13;
&#13;
Chapter 3. The final chapter, joint with Daron Acemoglu, considers incentives to adopt transformative technologies that promise to accelerate productivity growth across many sectors but also present new risks from potential misuse. We develop a multi-sector technology adoption model to study the optimal regulation of transformative technologies when society can learn about these risks over time. Socially optimal adoption is gradual and typically convex. If social damages are large and proportional to the new technology’s productivity, a higher growth rate paradoxically leads to slower optimal adoption. Equilibrium adoption is inefficient when firms do not internalize all social damages, and sector-independent regulation is helpful but generally not sufficient to restore optimality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162076</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Organization</title>
<link>https://hdl.handle.net/1721.1/162075</link>
<description>Essays in Industrial Organization
Dix, Rebekah A.
This thesis comprises three essays on industrial organization. The first chapter, joint with Todd Lensman, studies the innovation of cancer drug combination therapies. Innovations often combine several components to achieve outcomes greater than the “sum of the parts.” We argue that such combination innovations can introduce an understudied inefficiency—a positive market expansion externality that benefits the owners of the components. We demonstrate the importance of this externality in the market for pharmaceutical cancer treatments, where drug combination therapies have proven highly effective. Using data on clinical trial investments, we document several facts consistent with inefficiently low private innovation: firms are less likely than publicly funded researchers to trial combinations, firms are less likely to trial combinations including other firms’ drugs than those including their own drugs, and firms often wait to trial combinations including other firms’ drugs until those drugs experience generic entry. Using microdata on drug prices and utilization, we quantify the externalities that arise from new combinations and find that the market expansion externality often dominates the standard negative business stealing externality, suggesting too little innovation in combination therapies. As a result, firms may have incentives to free ride off others’ innovation, which we analyze with a dynamic structural model of innovation decisions. We use the model to design cost-effective policies that advance combination innovation. Redirecting publicly funded innovation toward combinations with high predicted market expansion or consumer surplus spillovers minimizes crowd out of private investments, increasing the rate of combination innovation and total welfare while remaining budget neutral.&#13;
&#13;
The second chapter, joint with Kelsey Moran and Thi Mai Anh Nguyen, studies the interoperability of electronic health record systems. Interoperability—the ability of different systems to work together—is an increasingly vital component of product markets. We study the impact of interoperability frictions in the context of US hospital Electronic Health Record (EHR) systems. While use of EHR systems is widespread, interoperability of these systems remains low, particularly across those produced by different EHR vendors. We examine how interoperability affects patients by considering both a direct, technological effect of influencing health information exchange and an allocative effect of shifting the flow of patients across providers. Using an event study design in which interoperability between hospital pairs changes when one changes EHR vendors, we find evidence for both channels. When two hospitals switch to having the same EHR vendor, charges and readmission rates for patients who are transferred and referred between them decrease by 4% and 11%, respectively. In addition, these hospitals now share 8% more inpatient transfers and 9-10% more referrals. This change in patient flows further affects patient outcomes: patients’ health improves when their sending hospitals switch to EHR vendors used by higher-quality hospitals in the market and worsens when the opposite occurs. To quantify the welfare gain from reducing interoperability frictions, we estimate a demand model of how patients and providers trade off interoperability with other receiving hospital characteristics when choosing where to send patients. The model is identified by changes in patient flows following changes in hospital EHR vendors and interoperability levels. We show that eliminating all interoperability frictions would redirect 7.5% of patients to different hospitals and increase joint hospital-patient welfare by 21%, the equivalent of a 57-kilometer reduction in travel distance.&#13;
&#13;
The third chapter, joint with Roi Orzach, studies the relationship between the fares of direct and connecting flights. Airlines operate complicated flight networks, often utilizing hub-and-spoke systems to efficiently route connecting travelers and optimize costs. Despite the prevalence of connecting travelers—accounting for approximately one-third of passenger itineraries—most analyses of the welfare effects of changes in competition focus on nonstop routes. We show that when firms face capacity constraints or adjustment costs, a price decrease on a direct route may incentivize firms to decrease prices on indirect routes using this route as a leg. We document that this pass-through is positive using the price effects of low-cost carrier entry and airline mergers: connecting fares decrease after low-cost carrier entry on one of the legs and increase after a merger of carriers that competed on one of the legs. Our findings demonstrate that ignoring these network effects leads to significantly underestimating changes in consumer surplus—by up to 115%—in response to changes in competition. Thus, considering full airline networks is essential to accurately estimating the impact of changes in competition on consumers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162075</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Environmental and Supply Chain Topics in&#13;
Finance</title>
<link>https://hdl.handle.net/1721.1/162070</link>
<description>Essays in Environmental and Supply Chain Topics in&#13;
Finance
Zhang, Henry H.
This thesis comprises three essays in finance. The first two essays study how liquidity provision by the financial sector affects firms’ production decisions in response to shocks. The third essay studies the real and financial impacts of regulatory enforcement in an environmental setting.&#13;
&#13;
The first chapter (joint with Victor Orestes and Thiago Christiano Silva) shows that firms experience large increases in sales and purchases after receiving cheaper liquidity. We focus on factoring, defined as the supplier-initiated sale of receivables. In Brazil, receivables funds (FIDCs) securitize receivables for institutional investors. By assembling a novel transaction-level dataset of factoring with other credit operations for all registered firms and FIDCs, we construct a shift-share instrument for factoring financing supply based on FIDC flows. We then use a novel combination of electronic payments, trade credit, and employer-employee matched data to estimate the impacts. A flow-induced increase in receivables demand reduces firms’ factoring interest rate. In response, firms demand more permanent labor and less temporary labor. In our model, these effects arise from factoring’s purpose of reducing cash inflow volatility, helping firms match inflows to outflows, which firms otherwise achieve at an efficiency cost through substitution across labor types.&#13;
&#13;
The second chapter (joint with Victor Orestes and Thiago Christiano Silva) uses transaction-level data on payments, credit, and insurance to examine how Brazilian farmers responded to the severe frost of July 2021, a shock that affected coffee, a perennial crop whose plants are a major component of farm value. The frost shock reduced both output and the pledgeable value of farmers’ collateral. We find that insured farmers increased investment in the years following the shock, while uninsured farmers reduced investment and borrowing. We show how this pattern is consistent with models of imperfect pledgeability of a firm’s collateral, where constrained firms neither insure (ex-ante) nor fully recover from a shock (ex-post). Limited commitment endogenously generates under-insurance through the combination of upfront payment of the insurance premium with the tightening of borrowing constraints post-shock due to the decrease in total collateral. We discuss two equilibrium implications of this mechanism: the inefficacy of emergency credit lines in targeting liquidity-constrained firms and the amplification of output volatility from the rising risk of extreme weather shocks.&#13;
&#13;
The third chapter (joint with Ananya Kotia and Utkarsh Saxena) studies the aggregate impacts of court-ordered iron ore mining bans in India. The local sectoral ban is a command-and-control (CAC) policy that is commonly applied to natural resource settings, usually when the regulator has a signal of widespread non-compliance. The Supreme Court of India imposed bans on iron ore mining and outbound iron ore trade in two states in response to reports that mines operated under fake environmental permits and underpaid mining royalties. Using firm-level industrial survey data and mine-level output data, we decompose the bans’ effects into trade, production networks, and local labor demand channels. Our results indicate persistent declines in employment, capital stock, and borrowing by iron-consuming plants, despite the temporary duration of the ban. These findings highlight the economic spillovers caused by CAC policies, especially in industries that are upstream in the supply chain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162070</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexibility in Platform Operations</title>
<link>https://hdl.handle.net/1721.1/162068</link>
<description>Flexibility in Platform Operations
Zhao, Jiayu
This thesis studies how modern service platforms, through algorithmic and market design, leverage agents’ flexibility to enhance operational efficiency. The last decade has witnessed the booming growth of such platforms in ride-hailing (e.g., Uber), e-commerce (e.g., Amazon), and hospitality (e.g., OpenTable). A central operational challenge in these systems lies in the heterogeneity across both supply and demand. For example, Uber cannot match a rider and a driver who are far apart in time or location. To address such challenges, platforms increasingly rely on flexibility levers—interventions that encourage market participants to be more accommodating in when or how they interact with the platform. For instance, Uber’s “wait and save” option offers a discount to riders who are willing to wait longer, making it easier to find compatible matches. Motivated by the growing use of such flexibility incentives, this thesis examines how flexibility can be structured, coordinated, and optimized in modern platforms. It focuses on two central dimensions of flexibility: (1) how flexibility levers interact across a platform’s ecosystem and (2) how flexibility decisions can be optimized to improve operational performance.&#13;
&#13;
Part I of this thesis examines the interactions and implications of platforms' flexibility decisions. Decisions around flexibility on platforms influence both (i) horizontal dynamics across market sides and (ii) vertical dynamics in a supply chain. Chapter 2 investigates the horizontal interaction between demand-side and supply-side flexibility incentives. While such incentives are common on both the demand side (e.g., the “wait and save” feature at Uber) and the supply side (e.g., ride streak bonuses at Uber) of platforms, they have been treated in isolation in the literature and in practice. Chapter 2 initiates the study of two-sided flexibility in platforms: by modeling how these incentives influence the likelihood of compatibility between agents and the resulting matching size, we study whether and when platforms should invest in flexibility across both market sides. Moreover, we identify that platforms may realize significant efficiency gains by incorporating the horizontal interplay of flexibility when designing different incentives.&#13;
&#13;
In an orthogonal direction, Chapter 3 investigates the vertical supply chain implications of ride-hailing platforms' flexibility decisions. When dual-sourcing autonomous vehicles (AVs) and flexible human drivers with self-scheduling capacity, platforms (e.g., Uber's operations in Phoenix and Austin) make dispatch prioritization decisions to fulfill demand through a hybrid fleet. These decisions affect the incentives of AV suppliers and human drivers, and the self-scheduling nature of gig workers introduces novel supply chain challenges. We study how these challenges can hinder successful AV deployments and provide contracting solutions to overcome them.&#13;
&#13;
Part II of the thesis focuses on optimizing specific operational levers for flexibility. The digitization of modern platforms allows for algorithms that provide better customization and timing to harness flexibility. For instance, booking platforms can adjust their admission control decisions in real time by considering customers' heterogeneous probabilities of being no-shows (i.e., not requiring service) and their compensation requirements for overbooking. In Chapter 4 we analyze an online resource allocation problem that allows overbooking and propose a policy that improves the additive profit loss guarantee (compared to a clairvoyant) in T periods from an order of √T in the literature to a bounded constant.&#13;
&#13;
A related application appears in e-commerce, where retailers seek to use promotional discounts to align customer demand with their inventory position. Chapter 5 investigates how platforms can leverage an "opaque selling" strategy to dynamically time these discounts to influence purchase behavior and balance inventory. We propose a class of dynamic inventory-balancing algorithms that adapt opaque selling to real-time inventory states, achieving order-optimal fulfillment costs. This chapter demonstrates how demand-side flexibility can be operationalized through pricing levers for better inventory management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162068</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Environmental Regulation</title>
<link>https://hdl.handle.net/1721.1/162063</link>
<description>Essays on Environmental Regulation
Aspelund, Karl Milutin
I focus on three challenges that often confront regulators in designing environmental regulations around the world: equity-efficiency tradeoffs, incomplete information, and significant ecological and economic uncertainty across time and space. First, I analyze the efficiency and distributional consequences of trade restrictions in environmental permit markets. I study common trade restrictions—segmentation and production requirements—in Iceland’s fisheries permit market, showing how they increase employment and compress the income distribution at an efficiency cost. Second, in the U.S. Conservation Reserve Program, Anna Russo and I find that auction mechanisms designed to incentivize land conservation suffer from widespread non-additionality due to adverse selection in land use. A redesigned scoring system that accounts for counterfactual land outcomes improves welfare. Third, using a stylized model calibrated to the Atlantic scallop fishery, Aaron Berman and I evaluate the use of output, input, and quantity-based regulations over time when a resource exhibits ecological and economic uncertainty across large areas of space. We show that, for a given ultimate sustainability goal, output taxes maximize value but exacerbate inequality and ecological risk, while input limits can strike a balance between flexibility, equity, and robustness.&#13;
JEL Classification: L51, Q22, Q28
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162063</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of Group Decision-Making</title>
<link>https://hdl.handle.net/1721.1/162055</link>
<description>Dynamics of Group Decision-Making
Orzach, Roi
This thesis comprises three chapters, all focused on Microeconomic Theory, specifically the dynamics of decision-making. The first explores how the desire for conformity results in long-run misperceptions due to uninformative decision-making. The second chapter studies multi-project collaborative experimentation. The final chapter analyzes whether decentralized organizations should utilize sequential or concurrent decision-making. The first chapter notes that in many settings, individuals imitate their peers’ public decisions for one or both of two reasons: to adapt to a common fundamental state, and to conform to their peers’ preferences. In this model, the fundamental state and peers’ preferences are unknown, and the players learn these random variables by observing others’ decisions. With each additional decision, the public beliefs about these unknowns become more precise. This increased precision endogenously increases the desire to conform and can result in decisions that are uninformative about a player’s preferences or perceptions of the fundamental state. When this occurs, social learning about peers’ preferences and fundamentals ceases prematurely, resulting in inefficient decisions. In line with findings from social psychology, I show that interventions aimed at correcting misperceptions of peers’ preferences may lead to more efficient decision-making in settings where interventions aimed at correcting misperceptions of the fundamental state may have no effect. The second chapter (joint with Charles Angelucci) analyzes collaborative experimentation across multiple independent domains. Each domain contains infinitely many potential projects with asymmetric benefits. In each period and domain, two players can idle, jointly explore a new project, or jointly exploit a known one, with voluntary transfers. For intermediate discount factors, treating domains as independent during experimentation is suboptimal. 
The optimal experimentation policy exhibits common features of collaborative experimentation: lengthy exploration, temporary project exploitation, recall of past projects, and inefficient initial or terminal idling within certain domains. We connect these findings to research on buyer-supplier dynamics and persistent productivity differences. The final chapter examines how the timing of decision-making shapes the allocation of decision rights within an organization. Here, I analyze concurrent versus sequential decision-making in a model where two units first communicate and then make decisions, attempting to both adapt to their local conditions and coordinate with their partner. Sequential decision-making improves overall information sharing compared to concurrent decision-making. However, first movers also have an incentive to over-adapt to their state, knowing second movers will conform to their decision. A surplus-maximizing headquarters prefers sequential decision-making to concurrent if and only if (i) the two units’ local conditions have sufficiently different volatilities and (ii) their need to coordinate is sufficiently asymmetric or low. Finally, sequential decision-making is shown to be optimal even when allowing for additional governance structures involving the reallocation of decision rights across the units and the headquarters and is shown to render some commonly analyzed forms of decentralization sub-optimal. JEL Classification Numbers: C72, D83, D90
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162055</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Storage and capacity rights markets in the natural gas industry</title>
<link>https://hdl.handle.net/1721.1/161778</link>
<description>Storage and capacity rights markets in the natural gas industry
Paz-Galindo, Luis A.
            (Luis Andrés)
Thesis: Ph. D., Massachusetts Institute of Technology, Technology, Management, and Policy Program, 1999; Includes bibliographical references (p. 165-169).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161778</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adsorption physics of metals partially coated by metallic films</title>
<link>https://hdl.handle.net/1721.1/161777</link>
<description>Adsorption physics of metals partially coated by metallic films
Levine, Jules David.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1963; Vita.; Includes bibliographical references (leaves 123-129).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161777</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluorocarbon synthesis in a high-intensity carbon arc</title>
<link>https://hdl.handle.net/1721.1/161776</link>
<description>Fluorocarbon synthesis in a high-intensity carbon arc
Bronfin, Barry Robert.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1963; Includes bibliographical references (leaves 95-103).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161776</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine verification of mathematical proof</title>
<link>https://hdl.handle.net/1721.1/161775</link>
<description>Machine verification of mathematical proof
Abraham, Paul W.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mathematics, 1963; Vita.; Includes bibliographical references (leaves 208-210).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161775</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of an x-ray selected sample of cataclysmic variables</title>
<link>https://hdl.handle.net/1721.1/161769</link>
<description>Studies of an x-ray selected sample of cataclysmic variables
Silber, Andrew D.
            (Andrew David)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (p. 253-254).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161769</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A bid-rent analysis of housing market discrimination.</title>
<link>https://hdl.handle.net/1721.1/161768</link>
<description>A bid-rent analysis of housing market discrimination.
Galster, George Charles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1974; Vita.; Bibliography: leaves 273-283.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161768</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An economic analysis of the cobalt industry.</title>
<link>https://hdl.handle.net/1721.1/161766</link>
<description>An economic analysis of the cobalt industry.
Burrows, James Christian.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1970; Vita.; Bibliography: leaves 400-405.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161766</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private interests and international conflict : a case study of US intervention in the Congo</title>
<link>https://hdl.handle.net/1721.1/161765</link>
<description>Private interests and international conflict : a case study of US intervention in the Congo
Gibbs, David Neil.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Includes bibliographical references (leaves 404-427).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161765</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Yang-Mills on the two-sphere</title>
<link>https://hdl.handle.net/1721.1/161764</link>
<description>Quantum Yang-Mills on the two-sphere
Fine, Dana Stanley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1989; Includes bibliographical references (p. 40).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161764</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The making of industrial policy : ad hoc corporatism and cable and satellite technology in West Germany</title>
<link>https://hdl.handle.net/1721.1/161763</link>
<description>The making of industrial policy : ad hoc corporatism and cable and satellite technology in West Germany
McKnight, Lee W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Vita. M.I.T. copy lacks leaves 15 and 385.; Includes bibliographical references (leaves 378-403).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161763</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays on the theory of contracts</title>
<link>https://hdl.handle.net/1721.1/161762</link>
<description>Three essays on the theory of contracts
Hermalin, Benjamin E.
            (Benjamin Edward)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1988; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161762</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reflectivity studies of semimetals under pressure.</title>
<link>https://hdl.handle.net/1721.1/161760</link>
<description>Reflectivity studies of semimetals under pressure.
Mendez Perez, Emilio Eugenio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161760</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-Frontal Exchange at the US Northeast Shelfbreak</title>
<link>https://hdl.handle.net/1721.1/159959</link>
<description>Cross-Frontal Exchange at the US Northeast Shelfbreak
Taenzer, Lukas L.
Exchange across the semipermeable US Northeast shelfbreak front is a potential driver of irreversible change to continental shelf waters, their productive ecosystem, and economically valuable fisheries. However, cross-frontal exchange is difficult to observe directly because it is highly intermittent, non-linear, and driven by both internal frontal instability and external forcing. In this thesis, I quantify eddy-driven exchange across the US Northeast shelfbreak front and its impact on the coastal ocean, starting on seasonal timescales and moving toward individual synoptic events. For this task, I take advantage of unprecedented multi-year observations from the Ocean Observatories Initiative (OOI) Coastal Pioneer Array (2014-2022). On seasonal timescales, the buoyancy-driven shelfbreak front is persistently trapped at the shelfbreak, which supports theoretical predictions of shelfbreak frontogenesis (Chapter 2). However, exchange across the shelfbreak front leads to a significant increase in salinity on the continental shelf between spring and fall. A volume budget of the subsurface continental shelf 'cold pool', the habitat of the valuable benthic ecosystem, quantifies the contribution of eddy-driven advection to the observed salinity increase and explains the seasonal cycle of watermass variability on the shelf (Chapter 3). However, the multi-year averaged cold pool watermass budget does not capture the intermittency of cross-shelfbreak eddy-fluxes on synoptic timescales. Thus, I demonstrate how individual mooring timeseries can be used to capture the statistical distribution of eddy-driven exchange by assessing cross-shelfbreak eddy-covariance fluxes of salt and heat (Chapter 4). Mean eddy-covariance fluxes align well with previous residual estimates of cross-shelfbreak exchange to close coastal watermass budgets, and just 10-20% of statistically anomalous events are responsible for half the multi-year mean flux. 
To characterize rapid changes in continental shelf watermass properties over short timescales, I investigate the decline of seasonal stratification due to individual weather events and identify signatures of cross-shelfbreak exchange in wind-driven destratification (Chapter 5). Altogether, this thesis extends our understanding of the characteristics, timing, and magnitude of eddy-driven exchange across the US Northeast shelfbreak front on varying timescales. This information can help to inform how large-scale, long-term trends will impact the US East Coast coastal ocean and its marine ecosystem.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159959</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-throughput tools for decoding T cell receptor specificity</title>
<link>https://hdl.handle.net/1721.1/159957</link>
<description>High-throughput tools for decoding T cell receptor specificity
Gaglione, Stephanie A.
T cells play a central role in adaptive immunity by recognizing specific antigens through their T cell receptors (TCRs). These receptors bind to peptides presented by major histocompatibility complex proteins (pMHC), driving immune responses in cancer, infection, and autoimmunity. Understanding how TCRs recognize antigens is crucial for developing cancer immunotherapies and identifying therapeutic targets in autoimmunity, infectious disease, and allergy. However, large-scale mapping of TCR-antigen interactions remains a challenge due to the vast diversity of both TCRs and antigens, as well as the cost, throughput, and accessibility limitations of current screening technologies.&#13;
This work presents two advances in large-scale TCR-antigen screening. The first aim introduces a scalable and cost-effective platform for synthesizing tens of thousands of TCRs from sequence data to create synthetic TCR libraries. We integrate this approach with a high-throughput antigen discovery platform that leverages pMHC-pseudotyped viruses to identify TCR-pMHC pairs. Using this system, we screen 3,808 vitiligo patient-derived TCRs against 101 antigens, and synthesize 30,810 TCRs from patients with pancreatic ductal adenocarcinoma (PDAC). By streamlining TCR assembly and antigen screening, this pipeline has the potential to advance immunotherapy, accelerate vaccine design, and deepen our understanding of TCR recognition.&#13;
The second aim presents a new method that couples pMHC-displaying virus-like particles with yeast display, enabling efficient screening of millions of TCR variants against ~100 pMHCs at once. Yeast display is a powerful tool for studying TCR-antigen interactions but is constrained by its reliance on recombinant protein production. Our approach overcomes this limitation by replacing recombinant protein with barcoded lentiviral particles, allowing large-scale, multiplexed screening of TCR libraries. By overcoming key technical barriers, these tools significantly expand our ability to study TCR specificity and engineer new antigen-specific therapeutics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159957</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A relative trace formula approach to the stable trace&#13;
formula on the unitary group</title>
<link>https://hdl.handle.net/1721.1/159956</link>
<description>A relative trace formula approach to the stable trace&#13;
formula on the unitary group
Lu, Weixiao
We develop a relative trace formula on GLₙ which can be compared to the stable trace formula on the unitary group. Locally, we prove the fundamental lemma and transfer. We also prove a character identity based on the transfer. Globally, we develop a (simple) relative trace formula and compare it to the (simple) stable trace formula on the unitary group.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159956</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The influence of topography on ice-ocean interactions in coastal Antarctica</title>
<link>https://hdl.handle.net/1721.1/159955</link>
<description>The influence of topography on ice-ocean interactions in coastal Antarctica
Gaul, Alan
Interactions between various water masses and ice shelves along the Antarctic coastline impact the global climate and sea level. This thesis focuses on how geometric features such as troughs and fast ice affect cross-shelf exchange in dense water formation regions of the Antarctic continental shelf. In Chapter 2, we use an idealized, eddy-resolving model to examine how an outflow of Dense Shelf Water (DSW) drives an inflow of warmer Circumpolar Deep Water (CDW) in a narrow, prograde trough. We find that the trough organizes mesoscale, cyclonic eddies in the dense outflow into a chain pattern. Acting as an efficient group, these cyclones then entrain filaments of CDW towards the coast. In Chapter 3, we use the same model to investigate buoyancy-driven cross-shelf exchange in a wide, retrograde trough. We find that the dynamics of the CDW intrusion change near the shelf-break. Here, the DSW outflow excites Topographic Vorticity Waves, which interact with the outflow to drive onshore intrusions of CDW. Onshore of the shelf-break, CDW intrudes further poleward due to a mean flow driven by eddy rectification. In Chapter 4, we switch to a realistic model of Prydz Bay, East Antarctica, to test the impact of local icebergs on cross-shelf exchange, dense water formation, and ice shelf basal melt rates. We find that removing the Cape Darnley Ice Barrier increases CDW intrusions and decreases dense water formation due to changes to the sea ice cover and wind-driven circulation. Conversely, removing the tabular Iceberg D-15 has little impact on heat transport and only slightly decreases dense water formation. Thus, the location of grounded icebergs greatly influences their impact on regional hydrography and ice shelf melt. In all, this thesis uses numerical models to examine the dynamics of cross-shelf exchange in coastal Antarctica. Understanding these dynamics is imperative for projecting how the Antarctic margins will impact the globe in a changing climate.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159955</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Steel Decarbonization Strategies and Supply Chain Integration</title>
<link>https://hdl.handle.net/1721.1/159954</link>
<description>Analysis of Steel Decarbonization Strategies and Supply Chain Integration
Johnson, Sydney Rose
Industrial decarbonization is a major challenge as the global community focuses on climate mitigation. Steel production is responsible for 7% of global emissions and faces unique challenges in reducing emissions from the ironmaking process. Models are developed to assess the emission and cost characteristics of current and emerging steel decarbonization strategies at the plant and sector levels. Case studies are performed in India, the second-largest steel producer, and the United States, the fourth-largest steel producer, to highlight differences in strategies. In addition, a model of hydrogen-based steel production and a corresponding hydrogen network is created to assess supply chain needs. From this analysis, we identify key cost and logistical barriers to technology implementation and their impact on the steel industry in the respective locations. Finally, the future of the steel industry is assessed from a strategic standpoint, considering challenges to commercialization and policy mechanisms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159954</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aligning Machine Learning and Robust Decision-Making</title>
<link>https://hdl.handle.net/1721.1/159952</link>
<description>Aligning Machine Learning and Robust Decision-Making
Cristian, Rares C.
Machine learning (ML) has become increasingly ubiquitous across applications worldwide, ranging from supply chains to personalized pricing and recommendations. These predictive models are often used to inform operations, with the potential to revolutionize decision-making. The key question this thesis aims to address is: How can we make ML methods aware of their downstream impact on the full decision-making process? To that end, we focus on developing methods that align AI with real-world objectives in order to build efficient, safe, and robust systems.&#13;
&#13;
This thesis is split into three chapters focusing on different aspects of this problem. In Chapter I we address the high computational complexity of existing methods. We present a meta-optimization machine learning framework to learn fast approximations to general convex problems. We further apply this within an end-to-end learning framework which trains ML models with an optimization-based loss function to minimize the decision cost directly. This meta-optimization approach allows us to tackle problem sizes that were intractable using approaches from the previous literature. Furthermore, this work establishes analytically that this learning approach guarantees fast convergence to nearly-optimal solutions. We show that the proposed approach consistently scales better in terms of runtime as problem size increases, being 2 to 10 times faster for various problems while retaining nearly the same accuracy. &#13;
&#13;
In Chapter II we focus on the robustness problem: making decisions that protect against worst-case scenarios as well as noise in the data. Traditional robust optimization methods tackle this issue by creating uncertainty sets for each observation, aiming to minimize costs in worst-case scenarios. However, these methods assume the worst-case scenario happens at every observation, which can be too pessimistic. We propose a new approach that avoids constructing uncertainty sets and links uncertainties across the entire feature space. This allows for robust decision-making without assuming worst-case scenarios at every observation. Our approach integrates robustness with a concept of learning stability, proving that algorithms with a stability property inherently produce robust solutions without explicitly solving the robust optimization problem. Finally, this chapter tests the framework on a variety of problems, such as portfolio optimization using historical stock data and inventory allocation and electricity generation using real-world data, showing significant improvements in robustness and competitive average error relative to the existing literature.&#13;
&#13;
Finally, in Chapter III we consider the endogenous setting, where the decisions we take affect outcomes, as in pricing and assortment optimization, where decisions (such as price) affect demand. In the end-to-end spirit, this research introduces an approach to jointly predict and optimize in this setting, learning predictions aligned with expected cost. We further introduce a robust optimization decision-making method that can account for uncertainty in ML models, specifically by constructing uncertainty sets over the space of ML models and optimizing actions to protect against worst-case predictions. We prove that our method captures near-optimal decisions with high probability as a function of the data. We also bring a new class of two-stage stochastic optimization problems into the end-to-end learning framework. Here, the first stage is an information-gathering problem: deciding which random variable to "poll" and gain information about before making a second-stage decision based on it. We present several computational experiments for pricing and inventory assortment/recommendation problems. We compare against existing methods in bandits and offline reinforcement learning, showing that our approach consistently outperforms them.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159952</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing the Compositional and Structural Degeneracy of Planetary Interiors</title>
<link>https://hdl.handle.net/1721.1/159951</link>
<description>Reducing the Compositional and Structural Degeneracy of Planetary Interiors
Lin, Zifan
The interior conditions of planets are highly uncertain, because two types of intrinsic degeneracies – compositional degeneracy and structural degeneracy – prevent precise characterization. In this thesis, I develop a planetary interior code package, CORGI, incorporating state-of-the-art physical properties of planet-forming materials. Using CORGI, I eliminate unmixed interior scenarios for Uranus, rule out the fossil-compressed formation hypothesis for high-density exoplanets, and establish a link between formation history and atmospheric composition for hypothetical Earth-like white dwarf (WD) exoplanets, reducing interior degeneracy for these planets. However, I also identify a novel carbon-rich interior composition for sub-Neptunes, introducing an additional degeneracy to this already ambiguous category.&#13;
&#13;
It is hotly debated whether Uranus is a distinct-layer “ice giant” with greater than 70 wt% ice or a “rock giant” with compositional gradients and roughly equal amounts of ice and rock. Gravity field measurements from spacecraft, which directly probe interior mass distribution, are expected to resolve this debate. However, I show that the degeneracy will persist even with the future Uranus Orbiter and Probe (UOP) mission, though the level of degeneracy can be reduced. My models indicate that only highly mixed interiors – either those with smooth density gradients or those with substantial light elements in the mantle and heavy elements in the atmosphere – are consistent with previous Voyager 2 measurements. Additionally, I demonstrate that the UOP can distinguish between high- and low-atmospheric metallicity scenarios and constrain the J6 harmonic, and potentially J8, if placed in close-in polar orbits, informing the mission and orbit design of the UOP.&#13;
&#13;
For exoplanets with no solar system counterparts, interior models are essential for understanding their composition, structure, formation, and evolution. I apply CORGI to a category of high-density planets that are consistent with greater than 50% core mass fraction, substantially higher than that of the Earth (33%). By combining planetary interior modeling with photoevaporation modeling, I investigate one of the hypotheses – the fossil-compressed hypothesis – for the origin of high-density planets. My models reveal that most high-density planets are highly unlikely to be fossil-compressed cores, because most or even all of the iron-silicate core is molten during the evolution, while the fossil-compressed hypothesis requires a solid core. Kolmogorov–Smirnov test statistics show that this result is robust for planets with both hydrogen-dominated and steam envelopes.&#13;
&#13;
Planetary interior models sometimes reveal new degeneracies rather than resolving them. By combining interior, atmospheric chemistry, and transmission spectra models, I identify a new possible interior composition for sub-Neptunes: carbon-rich composition. I posit that sub-Neptunes formed between the “soot line” – a condensation line for refractory organic carbon – and the water snow line would have high bulk C/O ratios and a substantial carbon layer. Interior models revealed that such carbon-rich compositions are consistent with the masses and radii of sub-Neptunes, given appropriate atmospheric metallicity. Atmospheric chemistry and transmission spectra models found that the spectral features predicted for carbon-rich sub-Neptunes are compatible with observations by the Hubble Space Telescope and the James Webb Space Telescope.&#13;
&#13;
Finally, I explore the connection between post-main-sequence evolution and the atmospheric and interior conditions of hypothetical Earth-like planets orbiting WDs. I show that first-generation WD planets that have experienced significant atmospheric loss and second-generation WD planets that are formed in WD debris disks under a more clement radiation environment can be distinguished by the presence of a hydrogen-dominated atmosphere. Additionally, the interior conditions of second-generation WD planets can be inferred from WD pollution observations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159951</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principal Slip Zones in Nature and Experiments and Their Role in the Earthquake Cycle</title>
<link>https://hdl.handle.net/1721.1/159948</link>
<description>Principal Slip Zones in Nature and Experiments and Their Role in the Earthquake Cycle
Ortega-Arroyo, Daniel
Earthquakes generally do not occur in intact rocks but rather within extremely narrow (≤10 cm) principal slip zones found along fault zones. These slip zones are weaker than the surrounding wall rocks, suggesting they play an important role throughout the earthquake cycle. This thesis explores the microphysical processes occurring along principal slip zones and examines their influence on fault behavior from various perspectives and scales. Chapter 2 examines slickensides from three different fault systems, using laser profilometry to measure fault surface roughness and detailed microstructural analyses to identify the processes leading to these structures. Chapter 3 presents stick-slip experiments aimed at understanding the energy flow during earthquakes, quantifying the complete energy budget of individual events through a combination of microstructural analyses, novel magnetic field imaging, ultrasonic probing, and numerical modelling. Chapter 4 involves Differential Scanning Calorimetry (DSC) measurements on ball-milled granite powders to investigate how extreme grain size reduction affects earthquake processes. Lastly, Chapter 5 presents DSC measurements of pseudotachylites aimed at constraining the thermal history of past slip events. Results from this thesis highlight that the strain path significantly influences how energy flows during the earthquake cycle, underscoring the importance of microstructural evolution in determining bulk sample behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159948</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Machine Learning Approach to Robust Optimization: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/159945</link>
<description>A Novel Machine Learning Approach to Robust Optimization: Theory and Applications
Boucher, Benjamin
The increasing availability of data offers modelers unprecedented opportunities to improve decision-making. In particular, we can leverage machine learning-based approaches to estimate parameters of optimization models, enabling more informed decisions. However, these models often inherit uncertainty from the data they are trained on, leading to unreliable decisions when their predictions are taken at face value. This body of work develops robust optimization frameworks that take these uncertainties into account, bridging theory and practice to mitigate decision-making risk. This thesis is organized into three chapters.&#13;
&#13;
In Chapter 2, we present a robust scheduling approach tailored to hospital operations, where post-surgery recovery times are uncertain and right-skewed. Our method captures the underlying distribution of patients' length of stay by taking into account their surgery type, and without necessitating detailed patient-level features. Applied to the Bone and Joint Institute of Hartford Hospital’s elective surgery scheduling problem, our approach reduces the monthly peak census---freeing up valuable hospital beds and improving system flexibility in the face of emergencies.&#13;
&#13;
In Chapter 3, we introduce a general methodology for constructing uncertainty sets informed by the loss functions of machine learning models. These sets are designed to protect against prediction errors in estimated optimization parameters. Extending guarantees from the robust optimization literature, we derive strong guarantees on the probability of violation. Synthetic computational experiments show that our method requires uncertainty sets with radii up to one order of magnitude smaller than those of other approaches.&#13;
&#13;
Lastly, in Chapter 4, we apply robust optimization to the domain of recommendation systems, where user and item interaction data are often noisy or adversarially perturbed. We can improve model robustness by modifying the training loss to defend against worst-case inaccuracies in user preference data. Because our approach adds only a single trainable parameter to the optimization model, its runtime impact is negligible. To evaluate the effectiveness of our method, we apply our modified loss function to a suite of recommendation systems from the literature and show consistent improvements in the performance of these methods on synthetic and benchmark datasets, as well as diminished ranking sensitivity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159945</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transition Metal Heterogeneous Catalysis Towards Applications in Sustainable Energy: Leveraging Rational Design Principles for Activity, Stability, and Stereoselectivity</title>
<link>https://hdl.handle.net/1721.1/159941</link>
<description>Transition Metal Heterogeneous Catalysis Towards Applications in Sustainable Energy: Leveraging Rational Design Principles for Activity, Stability, and Stereoselectivity
McCormack, Kaylee Lynn
As global demand grows for renewable energy storage and conversion technologies, novel methods of storing energy and providing portable power are a necessity to accommodate variability in energy resources. Heterogeneous catalysis is a fundamental driver in the development of direct liquid fuel cells, water electrolyzers, and other sustainable energy storage applications including liquid organic hydrogen carriers (LOHCs). &#13;
The methanol oxidation reaction (MOR) is a multistep reaction comprising methanol dehydrogenation, which leaves CO adsorbed on the catalyst surface, followed by CO oxidation. Incorporation of an oxophilic material that facilitates the formation of OH groups on the surface is highly effective for improving CO oxidation and MOR performance. Thus, in addition to the enhanced MOR activity from incorporating a carbide core beneath the Pt monolayers, the performance of these catalysts is expected to increase further by adding Ru atoms to the Pt shell, resulting in an overall tenfold enhancement in mass activity compared to commercial DMFC catalysts.&#13;
&#13;
Metal hydroxide organic frameworks (MHOFs) comprise layers of edge-sharing metal hydroxide octahedra interconnected by carboxylate linkers that utilize pi-pi stacking to impart additional stability for electrochemical applications including the oxygen evolution reaction (OER). However, we discovered that there are definitive limits to this stability. This work explored the underlying processes causing loss of MHOF-specific motifs, which lead to phase transformations from MHOF to the Ni oxyhydroxide-like phase during OER, providing insight into the phase stability of these types of materials in base. During extended electrochemical OER cycling, linkers leach from the MHOF structure, exposing more electrochemically active Ni sites, thereby increasing the geometric OER activity. The linker leaching was observed to be accelerated by Ni²⁺ to Ni³⁺/⁴⁺ oxidation, which leads to a phase transformation from MHOF to a NiOOH₂₋ₓ structure. A phase transformation mechanism is proposed where mono-μ-oxo bridge motifs found only in the MHOF structure convert to di-μ-oxo bridge motifs in the Ni oxyhydroxide-like phase. MHOFs with the weaker pi-pi interaction L1 linker underwent full transformation to this Ni oxyhydroxide-like phase. Meanwhile, the MHOFs with the stronger pi-pi interaction L4 linker showed transformations to Ni oxyhydroxide-like phases only at near-surface regions, where the MHOF can remain as a less active core. These results identify NiOOH₂₋ₓ as the OER-active phase while highlighting the potential stability of these MHOF materials for alkaline water oxidation. &#13;
&#13;
Finally, MHOFs present unique opportunities as sacrificial templates for thermocatalysis, with adjustable metal centers, structural robustness, and heteroatom incorporation through linker selection. In this thesis I present a model for using MHOFs and analogous MOFs to generate catalysts with unique catalytic properties that differentiate them from typical Ni hydrogenation catalysts. The MHOF-based catalysts perform similarly to other Ni-based catalysts in naphthalene and tetralin to decalin conversion rates per active site, but with a notable stereoselectivity toward cis-decalin compared to the other Ni catalysts. This work highlights Ni-MHOFs as precursors for transition metal catalysts that emulate the stereoselectivity of NM catalysts, thereby reducing energy requirements in LOHC dehydrogenation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159941</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Regulation of Metabolic Flux Using Orthogonal Quorum&#13;
Sensing</title>
<link>https://hdl.handle.net/1721.1/159939</link>
<description>Dynamic Regulation of Metabolic Flux Using Orthogonal Quorum&#13;
Sensing
Ream, Michael James
Dynamic regulation allows engineers to direct metabolic flux and cellular resources towards target pathways, improving production of value-added chemicals. One dynamic regulation strategy is quorum sensing (QS), a form of cell-to-cell communication that allows populations of cells to function as a collective. By applying QS to engineered pathways, the diversion of metabolic resources can be coupled to the population density of the culture, thereby ensuring sufficient growth is achieved. These circuits can then be layered to allow for fine-tuned control of the cell.&#13;
&#13;
Previous research has focused on QS systems that utilize acyl homoserine lactones (AHLs) as signaling molecules. These systems are well characterized, but pairing them in layered systems is difficult due to similarities in their signals, which can cause unintended switching of the opposing control system. Here, we identified orthogonal AHL systems for an independently controlled, multi-layered regulation circuit, which was then applied to increase production of the valuable natural products naringenin and bisnoryangonin in Escherichia coli. To our knowledge, the resulting regulation led to the highest extracellular titers reported at the flask scale, with a final naringenin titer of 1251.2 +/- 59.6 mg/L and a bisnoryangonin titer of 597.7 +/- 18.3 mg/L in naringenin equivalents.&#13;
&#13;
In a parallel effort to obtain orthogonal QS-based regulations, we focused on expanding the available QS systems for the model organism E. coli. Specifically, the Gram-positive QS systems of Agr from Staphylococcus aureus and Com from Bacillus subtilis were implemented and subsequently improved for functionality in E. coli. These systems have tight control of expression, which was demonstrated by dynamic downregulation of the aromatic amino acid pathways via CRISPRi. The efficacy of these systems in synthetic biology was further illustrated by using T7 RNA polymerase to amplify the expression output of an Agr-controlled circuit.&#13;
&#13;
Overall, this work developed and applied QS-based regulation systems to improve microbial production of value-added chemicals in E. coli.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159939</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Single-Chain Polymer Nanoparticles to Mimic Globular Proteins</title>
<link>https://hdl.handle.net/1721.1/159938</link>
<description>Design of Single-Chain Polymer Nanoparticles to Mimic Globular Proteins
Jin, Tianyi
While globular proteins exhibit an impressive range of precise functionalities, their sensitivity to environmental changes has motivated scientists to pursue two complementary strategies: (1) engineering and designing proteins directly or indirectly, and (2) exploring synthetic alternatives with higher stability. Single-chain polymer nanoparticles (SCNPs) based on random heteropolymers (RHPs) have emerged as a promising platform as both protein stabilizers and mimetics. However, theoretical understanding of the origins underlying their functional versatility has lagged behind experimental advances. Unlike natural proteins, which rely on well-defined sequences and three-dimensional structures, RHPs achieve their functions through sequence and structure ensembles. In this thesis, I use multiscale molecular simulation techniques to uncover the molecular origins of the versatile, protein-mimetic functions of RHPs. This work is motivated by recent experimental findings showing that four-component methacrylate-based (MMA-based) RHPs can function as catalysts, proton channels, and chaperonins. By comparing the behaviors of MMA-based RHPs with those of globular proteins, I provide fundamental physicochemical insights and design principles for SCNPs as protein mimetics and stabilizers. I highlight the significance of chemical polarity and nuances in materials design. In Part I, I study the self-assembly and dynamics of MMA-based RHPs in both melt and solution. I show that MMA-based RHPs collapse into compact globular structures with dynamical heterogeneity and slow dynamics due to their glassy backbones. Properties including compactness, monomer hydration, and the potential to stabilize membrane proteins are largely insensitive to sequence but strongly depend on composition. At the core of their behavior lies a phenomenon known as hydration frustration, where polar groups become dehydrated and hydrophobic groups remain hydrated. This is a key feature observed in globular proteins.
This effect arises from a negative Flory–Huggins interaction parameter (&#120594;) between methyl methacrylate and polyethylene glycol in MMA-based RHPs. Guided by these insights, I design a biodegradable, polyester-based RHP that exhibits similar properties in silico. I further map the potential energy landscape of these RHPs through microsecond simulations. In Part II, I study the adsorption and stabilization behaviors of MMA-based RHPs on both synthetic and biological surfaces. I show that adsorption onto graphene and non-specific binding to &#120573;-barrel membrane proteins occur primarily through side-chain interactions, with limited backbone reconfiguration. The transition from a globular to a wrapped morphology is hindered by internal friction arising from the deformation of the glassy backbone. I demonstrate that population-based stabilization of &#120573;-barrel proteins is mediated through loop-specific contacts that reduce fluctuations in flexible regions. The findings of this thesis provide a comprehensive framework for understanding and designing synthetic protein mimetics and stabilizers. MMA-based RHPs present a promising alternative to natural proteins, offering greater resilience, improved cost-effectiveness, and enhanced scalability. The structural and functional parallels between RHPs and globular proteins suggest that the principles uncovered here may generalize across a broad class of biomimetic and bio-synthetic hybrid systems. This work lays the foundation for the rational design of SCNPs for emerging applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159938</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fourier-Bessel Series and Hard Edge Limits</title>
<link>https://hdl.handle.net/1721.1/159937</link>
<description>The Fourier-Bessel Series and Hard Edge Limits
Lerner-Brecher, Matthew
The universality classes defined by the Airy and Bessel kernels are two of the most fundamental in random matrices and growth models more generally. Broadly speaking, one often encounters the Airy kernel when studying models where the relevant eigenvalues or particles are unbounded, and the Bessel kernel when examining their constrained counterparts. In this thesis, we analyze two recent problems where the relevant expressions involve a variant of the Airy functions known as the Fourier-Airy series. In both cases, we find that the constrained versions have natural analogues expressible in terms of the Fourier-Bessel series echoing the relationship between the Airy and Bessel kernels. In the first part, we study the hard edge limit of a multilevel extension of the Laguerre β-ensemble at zero temperature. In particular, we show that asymptotically the ensemble is given by Gaussians with covariance matrix expressible in terms of the Fourier-Bessel series. These Gaussians also have an explicit representation as the partition functions of additive polymers arising from a random walk on roots of the Bessel functions. Our approach builds off of techniques introduced by Gorin and Kleptsyn [1] and is rooted in using the theory of dual and associated polynomials to diagonalize transition matrices relating levels of the ensemble. Like the corresponding soft edge limit in the Hermite case studied by Gorin and Kleptsyn, the object we introduce should represent a new universality class for zero temperature random matrices. In the second part, we introduce a new diffusion process which arises as the n → ∞ limit of a Bessel process of dimension d ≥ 2 conditioned upon remaining bounded below one until time n. In addition to being interesting in its own right, we argue that the resulting diffusion process is a natural hard edge counterpart to the Ferrari-Spohn diffusion of [2].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159937</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative embeddings with applications</title>
<link>https://hdl.handle.net/1721.1/159936</link>
<description>Quantitative embeddings with applications
Portnoy, Elia
In this thesis, we discuss quantitative embeddings that generalize a theorem of Kolmogorov and Barzdin. The theorem says that any bounded degree graph with V vertices can be mapped into a 3-dimensional ball of radius sqrt(V), so that at most a constant number of edges intersect any unit ball. In one generalization we describe how much freedom we have in placing the vertices of the graph, and in the other we prove a similar result for simplicial complexes of any dimension. We also discuss applications of these quantitative embeddings to a problem in metric geometry related to the isoperimetric inequality and a problem about constructing local quantum error-correcting codes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159936</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-rank 1 Arithmetic Siegel--Weil</title>
<link>https://hdl.handle.net/1721.1/159935</link>
<description>Co-rank 1 Arithmetic Siegel--Weil
Chen, Ryan C.
We prove the arithmetic Siegel–Weil formula in co-rank 1, for Kudla–Rapoport special cycles on exotic smooth integral models of unitary Shimura varieties of arbitrarily large even arithmetic dimension. We also propose a construction for arithmetic special cycle classes associated to possibly singular matrices of arbitrary co-rank. Our arithmetic Siegel–Weil formula implies that degrees of Kudla–Rapoport arithmetic special 1-cycles are encoded in near-central first derivatives of unitary Eisenstein series Fourier coefficients. The key input is a new limiting method at all places. On the analytic side, the limit relates local Whittaker functions on different groups. On the geometric side at nonsplit non-Archimedean places, the limit relates degrees of 0-cycles on Rapoport–Zink spaces and local contributions to heights of 1-cycles in mixed characteristic.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159935</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon flow and food web structure in the mesopelagic zone of the North Atlantic Ocean</title>
<link>https://hdl.handle.net/1721.1/159934</link>
<description>Carbon flow and food web structure in the mesopelagic zone of the North Atlantic Ocean
Gardner, Kayla Grace
Mesopelagic ecosystems are vital habitats that link the euphotic zone and the deep ocean through food web interactions and carbon flow pathways. In this dissertation, I use compound-specific stable carbon isotope analysis of amino acids (CSIA-AA) and DNA gut metabarcoding methodologies to provide a broad ecological outlook on mesopelagic carbon flow coupled with finer-scale taxonomic details. In Chapter 2, I analyze the diets of seven abundant mesopelagic fish species by combining the integrative power of CSIA-AA with the instantaneous, taxonomic aspects of DNA gut metabarcoding. Three primary diet types were identified: copepod-based, fish-based, and generalist. Additionally, carbon sources were variable across the two years, but cyanobacteria were consistently an important carbon source, evidence that mesopelagic fish are essential exporters in weaker biological pump systems. Finally, this chapter includes cyanobacteria CSIA-AA signature data that were previously missing from the literature. In Chapter 3, I augment the CSIA-AA data by adding genus-level zooplankton data and samples from the winter season. Zooplankton were more dispersed among all the end members than fish, particularly in the winter. Fish, however, still relied the most on cyanobacteria-sourced carbon. This chapter supplies the first zooplankton carbon CSIA-AA data set at such a fine taxonomic resolution. In Chapter 4, I examine the effect of phytoplankton community structure on fish and zooplankton carbon sources by sampling before and during a diatom bloom. Zooplankton, and to a lesser extent fish, showed a shift to diatom-based carbon sources during the bloom. As a whole, this dissertation advances our knowledge of mesopelagic food webs by providing a baseline carbon CSIA-AA data set for key zooplankton and fish species across several seasons that will inform ecological models of how the mesopelagic zone will respond to anthropogenic pressure.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159934</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometrically-informed methods of wave-based imaging</title>
<link>https://hdl.handle.net/1721.1/159933</link>
<description>Geometrically-informed methods of wave-based imaging
Greer, Sarah Yvonne
In this thesis, we are interested in understanding and advancing wave-based imaging techniques defined by the adjoint-state method. Wave-based imaging uses wavefield data from receivers on the boundary of a domain to produce an image of the underlying structure in the domain of interest. These images are defined by the imaging condition, derived from the first-order adjoint-state method, which corresponds to the gradient and maps recorded data to their reflection points in the domain. In the first part, we introduce a nonlinear modification to the standard imaging condition that can produce images with resolutions greater than those ordinarily expected using the standard imaging condition. We show that the phase of the integrand of the imaging condition, in the Fourier domain, has a special significance in some settings that can be exploited to derive a super-resolved modification of the imaging condition. Whereas standard imaging techniques can resolve features of a length scale of λ, our technique allows for a resolution level R &lt; λ, where the super-resolution factor (SRF) is typically λ/R. We show that, in the presence of noise, R ∼ σ. In the second part, we investigate the Hessian operator, which arises from the second-order adjoint-state method, in the context of full-waveform inversion, a non-linear least-squares problem for estimating material properties within the domain of interest. We analyze the contributions of reflected and transmitted waves to the linearized Hessian operator, demonstrating that reflected waves generally produce a high-rank component with well-distributed eigenvalues, while transmitted waves contribute to a low-rank component with poorly distributed eigenvalues. This decomposition of the Hessian, motivated by the underlying physical system, provides insights that can be used to improve inversion strategies.
The advancements in both parts of this thesis leverage the underlying structure and geometry of the domain of interest, providing the foundation for the zero-phase imaging condition in the first part and informing the decomposition of the Hessian operator in the second part.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159933</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry and analysis of Ricci curvature and mean&#13;
curvature flows</title>
<link>https://hdl.handle.net/1721.1/159932</link>
<description>Geometry and analysis of Ricci curvature and mean&#13;
curvature flows
Zhao, Xinrui
In this thesis, we study the geometry and analysis of spaces with Ricci curvature bounded below from three perspectives, and the asymptotically conical singularities of mean curvature flows from two perspectives. For spaces with Ricci curvature bounded below, firstly, we study the unique continuation problem on RCD spaces, a long-standing open problem with little known even in the setting of Alexandrov spaces. Together with Qin Deng, we proved that on RCD(K,2) spaces both harmonic functions and caloric functions satisfy weak unique continuation properties. Furthermore, we constructed counter-examples showing that strong unique continuation in general fails for harmonic and caloric functions on RCD(K,N) spaces where N is greater than or equal to 4. Secondly, we consider constructing a canonical diffeomorphism between the n-sphere and an n-dimensional space with Ricci curvature bounded from below by n-1 which is close to the n-sphere in the Gromov-Hausdorff sense. Together with Bing Wang, we proved that the first (n+1) eigenfunctions of the Laplacian provide a bi-Hölder diffeomorphism, and we further give a counter-example showing that the bi-Hölder estimate is sharp and cannot be improved to a bi-Lipschitz estimate. Thirdly, we study the Margulis Lemma on RCD spaces. Together with Qin Deng, Jaime Santos-Rodríguez and Sergio Zamora, we extend the Margulis Lemma for manifolds with lower Ricci curvature bounds to the RCD setting. As one of our main tools, we obtain improved regularity estimates for Regular Lagrangian flows on these spaces. For the asymptotically conical singularities of mean curvature flows, firstly, together with Tang-Kai Lee, we proved that asymptotically conical self-shrinkers arising as tangent flows of MCFs are unique, generalizing the result in the hypersurface case proven by Chodosh-Schulze. 
Secondly, together with Tang-Kai Lee we prove that given any asymptotically conical shrinker, there exists an embedded closed hypersurface such that the mean curvature flow starting from it develops a type I singularity at time 1 at the origin modeled on the given shrinker.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159932</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Movement and Trophic Ecology of Large Pelagic Fishes Connecting Surface Waters with the Ocean's Twilight Zone</title>
<link>https://hdl.handle.net/1721.1/159931</link>
<description>Movement and Trophic Ecology of Large Pelagic Fishes Connecting Surface Waters with the Ocean's Twilight Zone
Willis, Ciara Sinead Roche
The ocean’s twilight zone is a vast area of the global ocean that lies between the sunlit surface waters and perpetually dark midnight zones, covering depths from ~200 to 1000 meters. Recent work in the twilight (or mesopelagic) zone has revealed unexpected biomass and diversity that may not only challenge scientific understanding of ocean systems but also provide new and largely untapped resources for fisheries harvest. The extent to which commercially valuable, highly migratory top predators such as tuna and swordfish rely on mesopelagic biomass for forage has not previously been quantified but is thought to be substantial. Pressure from emerging industrial fisheries in the twilight zone makes determining the linkages between mesopelagic prey and migratory predators of pressing concern for sound management in keeping with the precautionary principle. Ocean predators are further hypothesized to dive into the deep ocean for a range of motives beyond forage, including for navigation on their long migrations. In this thesis, I begin by using compound-specific stable isotope analysis to trace the flow of carbon through pelagic ecosystems in the northwest Atlantic to three predators: bigeye tuna (Thunnus obesus), swordfish (Xiphias gladius), and yellowfin tuna (Thunnus albacares). I confirm the presumed high reliance of these predators on mesopelagic prey using a Bayesian mixing model approach that estimated 50-60% of their temperate carbon is sourced from mesopelagic food webs. Next, I take a larger view of epi- and mesopelagic food webs by sampling simultaneously across a pelagic food web from bottom to top at one point in time and space in the northwest Atlantic Ocean. I trace the movement of carbon and nitrogen from particulate organic matter, through mid-level consumers, up to top predators using compound-specific stable isotope analysis of amino acids. 
Nitrogen stable isotope analysis is also used to calculate trophic positions, providing a more detailed view of pelagic food web structure and function. To complement these trophic studies, I conduct a movement analysis of vertical habitat use by swordfish, focused on their intermittent extreme dives. I explore possible motivations for these dives, including forage, predator avoidance, and navigation. Qualitative investigation of dive geometry, as well as quantitative logistic models of the physical and biological environment, indicates that navigation is the most likely motive. Finally, I consider the implications of predator reliance on mesopelagic forage in a fisheries economics context. Using my earlier diet sourcing results, I adapt a bioeconomic model with a new predator-prey dynamic to evaluate the effects of potential mesopelagic fisheries on their predators, with bigeye tuna as the representative predator. Model results highlight the importance of recognizing predator-prey interactions in the management of mesopelagic fisheries and demonstrate the sensitivity of equilibrium economic and ecological conditions for the tuna stock under different price and cost scenarios. Overall, these studies emphasize the importance of the deep ocean to marine predators and suggest that a new mesopelagic fishery could be economically viable in and of itself but may have significant negative impacts on existing tuna and swordfish fisheries due to reduced forage.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159931</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Theory to Practice: Improving Causal Conclusions from Healthcare Data</title>
<link>https://hdl.handle.net/1721.1/159930</link>
<description>From Theory to Practice: Improving Causal Conclusions from Healthcare Data
Cobzaru, Raluca-Ioana
Causal inference in biomedical, epidemiological, and health policy research often relies on observational data, such as electronic health records (EHRs), patient registries, or insurance claims, to compensate for the inaccessibility of randomized controlled trials. However, causal inference from observational data depends on strong, often unverifiable assumptions, including exchangeability, parallel trends, and correct model specification. Violations of these assumptions can bias treatment effect estimates, making it essential to assess the sensitivity of causal conclusions, particularly in healthcare applications, where properly interpreting causal relationships and delivering reliable insights is critical for guiding clinical practice and informing system-wide decisions. This thesis contributes to the theoretical and empirical analysis of causal methods under realistic data limitations, with a focus on covariate selection and adjustment, proximal inference for unobserved confounding, and applications of modern estimation techniques to healthcare-relevant settings.&#13;
In Chapter 2, we investigate the performance and robustness of state-of-the-art machine learning estimators for causal inference when covariate selection for statistical adjustment is performed in a realistically suboptimal manner. Although nonparametric doubly robust methods are asymptotically unbiased, they can perform poorly in finite samples due to slow convergence of nuisance function estimates. Through an extensive simulation study, built upon previous research on statin use and atherosclerotic cardiovascular disease (ASCVD) incidence, we examine how including extraneous covariates (a likely risk when researchers over-adjust to mitigate concerns about unmeasured confounding) may degrade estimator performance. These findings highlight the importance of incorporating domain knowledge to guide covariate selection, even when using flexible data-adaptive methods.&#13;
In Chapter 3, we explore proximal causal inference, a novel framework designed to address unobserved confounding by leveraging negative control exposures and outcomes to recover the true causal effects. While this approach offers an alternative to the exchangeability assumption, it relies on identification conditions for the set of proxy variables and on model specifications that remain empirically untestable. We derive closed-form bias expressions under a linear structural equation model to quantify the impact of violating these assumptions and propose a practical bias adjustment strategy using data from an observational ICU study. These results provide a foundation for formal sensitivity analysis and offer insight into the real-world utility of proximal methods.&#13;
Finally, in Chapter 4 we evaluate the impact of the Meaningful Use Incentive Program on hospital performance, using modern causal methods in a multi-period difference-in-differences (DiD) design. We apply a staggered DiD estimation framework, along with a sensitivity analysis of dynamic treatment effect estimates under potential violations of the parallel trends assumption, across a wide range of quality, safety, and process of care measures. By accounting for treatment timing variation, allowing for heterogeneous effects over a longer follow-up period, and testing for violations of identifying assumptions, our study offers a more rigorous and comprehensive assessment of the causal impact of health information technology (IT) policies introduced by the Meaningful Use program. Our findings help reconcile mixed findings in the literature and inform the design of future hospital incentive programs that aim to promote advanced use of EHRs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159930</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic insights into how collective effects mediate the T cell response</title>
<link>https://hdl.handle.net/1721.1/159929</link>
<description>Mechanistic insights into how collective effects mediate the T cell response
Yin, Rose
T cells play an important role in the adaptive immune system by providing robust responses to foreign pathogens while avoiding widespread autoimmunity. Although many specific microscopic factors are thought to contribute to this self/non-self discrimination, based on theoretical and experimental work, a generalized mechanistic framework has emerged over the past decade to describe the remarkable robustness of self/non-self discrimination in spite of the presence of autoimmune T cells in every host. This quorum threshold mechanism states that a threshold number of T cells (a quorum) must be activated by a foreign antigen in a local area for an immune response to ensue. In my thesis, I use analytical and computational models to show how this mechanism enables a response against foreign pathogens while tolerating exposure to self-tissue, and how it increases robustness against perturbations such as altered self-antigen presentation or increased epitope spreading due to inflammation. However, under persistent or severe infections, these models also show that the risk of autoimmunity increases through enhanced sampling of rare epitopes and activation of cross-reactive T cells. These results provide a potential explanation for why persistent infections often trigger autoimmune diseases. To further understand the emergence of the quorum threshold, I developed a population dynamics model. Our results show that steady states corresponding to an effective or ineffective immune response are separated by a threshold dependent on both the concentration of activated T cells and the concentration of a growth factor (IL2) that is secreted by T cells and absorbed by cells that dampen the immune response. Notably, the threshold’s existence proves robust across randomized parameters, highlighting its fundamental role in regulating T cell responses.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159929</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uniqueness of p-local truncated Brown-Peterson spectra</title>
<link>https://hdl.handle.net/1721.1/159928</link>
<description>Uniqueness of p-local truncated Brown-Peterson spectra
Lee, David Jongwon
When p is an odd prime, we prove that the Fp-cohomology of BP⟨n⟩ as a module over the Steenrod algebra determines the p-local spectrum BP⟨n⟩. In particular, we prove that the p-local spectrum BP⟨n⟩ only depends on its p-completion BP⟨n⟩p̂. As a corollary, this proves that the p-local homotopy type of BP⟨n⟩ does not depend on the ideal by which we take the quotient of BP. In the course of the argument, we show that there is a vanishing line for odd degree classes in the Adams spectral sequence for endomorphisms of BP⟨n⟩. We also prove that there are enough endomorphisms of BP⟨n⟩ in a suitable sense. When p = 2, we obtain the results for n ≤ 3.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159928</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing microearthquakes and shallow structure with dense array and optical fibers</title>
<link>https://hdl.handle.net/1721.1/159927</link>
<description>Characterizing microearthquakes and shallow structure with dense array and optical fibers
Chang, Hilary
Source properties of small earthquakes, such as source dimension and stress drop, help us to constrain source physics and assess seismic hazards. Small events carry information about the stress state in the subsurface. They also help us predict the behavior of larger earthquakes. However, the source properties of small earthquakes (magnitude less than 3) are poorly constrained because of trade-offs with other wave propagation effects. The trade-offs with attenuation can cause the apparent stress drop to vary, resulting in an apparent breakdown of earthquake self-similarity. To date, researchers are still trying to understand the uncertainty in source parameter measurements and to improve their accuracy. In the first part of the thesis, I use a dense array in Oklahoma to investigate the influence of site effects on source parameter modeling. By analyzing ground motions, subsurface velocity structure, and attenuation, I show how these factors relate to site effects, and how source parameter estimations vary under different modeling assumptions. To avoid large site-effect-related biases and uncertainties when modeling source parameters, I suggest (1) assuming a realistic attenuation model, (2) using selected stations on hard rocks instead of using many stations with unknown site conditions, and (3) constraining variables in the model during the inversion to avoid parameter trade-offs.&#13;
&#13;
In the second part of the thesis, I explore the use of fiber-optic cables in several seismic applications. Distributed Acoustic Sensing (DAS) turns optical fibers into dense receiver arrays. These fiber-optic cables have the advantage of being resilient and easier to maintain compared to mechanical sensors. The cable provides a dense array that helps us separate source and wave propagation effects for different purposes. Here, I use cables in wells in geothermal reservoirs and a telecom cable on the MIT campus. The applications include structure monitoring and imaging, seismic hazard assessment, and earthquake source characterization. DAS measures strain and requires special considerations to fit into conventional seismic methods built on particle motions. Deconvolution-based methods help deal with the DAS instrument response. The gauge length adds a velocity-dependent amplitude response that we need to consider when modeling the DAS spectrum. I provide workflows for conducting seismic imaging surveys using telecom cable and downhole DAS for temporal monitoring and source parameter analysis. The cables can reach places that were difficult to reach in the past. With careful processing, DAS can be a promising tool for structure monitoring, urban seismic hazard assessment, and microearthquake source analysis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159927</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Regimes for Topology Optimization in Photonics</title>
<link>https://hdl.handle.net/1721.1/159926</link>
<description>New Regimes for Topology Optimization in Photonics
Chen, Mo
Inverse design is a powerful methodology to obtain non-trivial and non-intuitive photonic structures of unprecedented performance. Topology optimization is a particular class of inverse design method that has been increasingly popular in photonics. Numerous topology optimization tools and frameworks have been developed and often yield satisfactory results for various engineering problems. This work explores the subtleties involved in the development and application of topology optimization, and presents new regimes for photonic design, where the key to finding the right solutions lies in posing the right questions. We first review the current frameworks for photonic topology optimization. We point out that, as new algorithms emerge, the lack of standardized validation methods presents a challenge for further advancements. To address this, we provide a comprehensive suite of test problems along with a length-scale metric for comparing designs across different algorithms, aiming to facilitate the development and validation of future inverse design approaches. However, a functioning inverse design algorithm alone is not sufficient to guarantee satisfactory designs. We present two case studies highlighting the importance of careful formulation for achieving the mathematical robustness and tractability that are crucial to the success of optimization. The first case examines the inverse design of 3D-printable metalenses with complementary dispersion for terahertz imaging. It illustrates a physical dichotomy between achieving two distinct dispersion behaviors in a thin structure. We demonstrate that a key aspect in making such designs tractable is carefully balancing the trade-offs between focal quality and scanning rate in the optimization problem formulation. The second case focuses on the inverse design of multiresonance filters via quasi-normal mode theory. 
Traditional filter design approaches have various limitations, and directly applying topology optimization leads to numerically stiff formulations. We propose a new practical high-order-filter design method based on a minimal set of analytical design criteria derived from quasi-normal mode theory. We illustrate our approach by designing 3rd and 4th-order elliptic and Chebyshev dielectric filters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159926</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress on the Interplay of Machine Learning and Optimization</title>
<link>https://hdl.handle.net/1721.1/159924</link>
<description>Progress on the Interplay of Machine Learning and Optimization
Lin, Zhen
Machine learning and optimization play significant roles across science, industry, and society. Despite the remarkable advancements in these fields, various crucial problems remain unsolved. In this thesis, we address some of these problems by exploring the interplay of machine learning and optimization.&#13;
In the first part of this thesis, we utilize optimization tools to address two practically important topics in machine learning: interpretability of machine learning models, and improving data for prediction. In Chapters 2 and 3, we focus on improving the interpretability of machine learning models. In particular, Chapter 2 presents an efficient algorithm for training high-quality Nonlinear Oblique Classification Trees using gradient descent. We demonstrate on real-world datasets that this is an effective approach. In Chapter 3, we develop an optimization approach to train low-depth (up to depth 8) classification trees with hyperplanes to closely approximate neural networks. We also incorporate sparsity in the hyperplanes of the trees. In this way, we contribute to increasing the interpretability of neural networks. Computational results on real-world datasets with different sizes of neural networks show the effectiveness of our algorithm. In Chapter 4, we propose an integer optimization method to improve class-imbalanced data. Our method undersamples the majority class and performs better than existing methods on real-world imbalanced datasets.&#13;
In the second part of the thesis, we explore the direction of applying machine learning to optimization. In Chapter 5, we show that optimization methods can significantly benefit from a machine learning treatment. We develop a model-based trust-region method for derivative-free optimization problems under noise. Our method, which uses robust and sparse regression to build models of functions, is more robust and more scalable than existing methods.
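
Chapter 5's method is summarized only at a high level above; as a toy illustration of the model-based trust-region idea (with ordinary least squares standing in for the thesis's robust and sparse regression, and all names and parameter choices invented for this sketch):

```python
import numpy as np

def trust_region_dfo(f, x0, radius=1.0, n_samples=20, iters=30, seed=0):
    """Minimal derivative-free trust-region sketch: fit a local linear model
    to noisy samples, then step to the model minimizer on the region boundary."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for _ in range(iters):
        # Sample points inside the trust region and query the noisy oracle.
        pts = x + radius * rng.uniform(-1, 1, size=(n_samples, x.size))
        vals = np.array([f(p) for p in pts])
        # Least-squares fit of f(p) ~ c + g.(p - x); a noise-hardened variant
        # would use robust/sparse regression for this model-building step.
        A = np.hstack([np.ones((n_samples, 1)), pts - x])
        coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
        g = coef[1:]
        if np.linalg.norm(g) < 1e-12:
            break
        # Linear model minimizer over the ball lies on the boundary, along -g.
        cand = x - radius * g / np.linalg.norm(g)
        # Accept on actual decrease; expand the region on success, else shrink.
        if f(cand) < f(x):
            x, radius = cand, radius * 1.2
        else:
            radius *= 0.5
    return x
```

On a noisy quadratic such as `f(p) = ||p - 3||² + 0.01·noise`, this sketch homes in on the minimizer despite never seeing derivatives, which is the basic mechanism the chapter builds on.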
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159924</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Modeling Approaches to Quantify Vehicle-to-Grid Services in an Evolving Power Sector</title>
<link>https://hdl.handle.net/1721.1/159922</link>
<description>Integrated Modeling Approaches to Quantify Vehicle-to-Grid Services in an Evolving Power Sector
Owens, James
The U.S. transportation sector accounted for 27% of nationwide greenhouse gas (GHG) emissions in 2020. In addition to cleaner fuels and more efficient powertrains, vehicle electrification is poised to be a key driver of sector decarbonization. However, fleet electrification creates an unprecedented coupling between the transportation sector and the electric grid. Electric vehicle charging and other new loads, if not sufficiently managed, are anticipated to add significant strain to the grid. In light of these challenges, vehicle-to-grid (V2G) has been proposed as a form of flexible load and decentralized energy storage. Within a V2G framework, grid-connected electric vehicles provide services to power grids, for example by shifting when they charge or discharging their batteries to the grid when power demand is high. Conceptually, V2G can reduce the costs of intermittency, facilitate renewables growth, and provide storage services to the grid.&#13;
&#13;
While V2G continues to evolve and gain market traction, several operational and economic aspects of the technology must be better understood and improved to facilitate widespread adoption. For instance, EVs can theoretically displace stationary energy storage, but to what extent? What are the demand-side implications for the grid? For early technology adopters, particularly commercial fleets, how do travel needs and network tariffs affect V2G revenues? How can one practically simulate V2G and other service outcomes, and do the potential revenues justify the initial investment?&#13;
&#13;
This thesis addresses such questions and concerns through the development and application of methods that (1) quantify the technology's ultimate value proposition at the systems level, and (2) enable risk-informed market participation and financial analysis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159922</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence for System Medicine: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/159920</link>
<description>Artificial Intelligence for System Medicine: Methods and Applications
Ma, Yu
Modern medicine is facing a fundamental shift with the increasing availability of large-scale electronic health data and artificial intelligence-based technologies. In particular, integrating different patient characteristics to optimize, learn, and plan simultaneously across multiple medical tasks of interest, which we call system medicine, provides opportunities for clinical and operational systems to improve disease diagnosis, operational efficiency, and, most importantly, clinical understanding. This thesis aims to develop and validate novel methods using artificial intelligence and optimization to address challenges faced in this domain.&#13;
&#13;
We introduce general-purpose artificial intelligence frameworks in Part 1. First, we introduce Holistic Artificial Intelligence in Medicine (HAIM), an integrated pipeline that combines multimodal data (tabular, time-series, vision, and language) into a single framework for downstream task learning. We then develop Multimodal Multitask Machine Learning for Healthcare (M3H), an end-to-end, many-to-many framework that joins the learning of multimodal data with a diverse set of medical and machine learning problem tasks. This work proposes a novel attention mechanism as well as a new explainability metric that extends previous work on the evaluation of input space contributions (features) to the output space (outcomes). These works are actively being incorporated to improve diagnosis in cardiovascular and oncology studies using ECG and multi-omics data.&#13;
&#13;
We then address real-world adoption concerns to design responsible machine learning models using optimization in Part 2. We first introduce robust regression under averaged uncertainty, which yields exact, closed-form analytical solutions that recover ridge regression. We provide insight into how the geometric properties of the uncertainty set are closely linked to the regularization strength of the equivalent ridge regression. We then propose an adaptive, data-driven approach for personalized breast cancer screening scheduling, which integrates an ML-based survival prediction model and a stochastic optimization formulation that balances screening delay and screening frequency.&#13;
&#13;
Finally, we apply predictive and prescriptive analytic methods to improve general medical outcomes in Part 3 and Part 4, respectively. These studies span oncology, trauma, cardiovascular care, and logistics planning. In Part 3, we aim to develop models that can most accurately learn the outcome. We show that predictive methods across different machine learning methodologies, including deep neural networks for computer vision tasks and tree-based models (Optimal Classification Trees and gradient boosted trees), can significantly improve over existing benchmarks or achieve performance comparable to manual physician practice. In Part 4, we delve into prescriptive analytics, which focuses on assigning the optimal treatment or other clinical decision to achieve the best outcome. We apply the interpretable Optimal Policy Trees methodology across oncology and trauma settings and observe improved medical outcomes (e.g., reduced mortality rates).
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159920</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidation of gene clusters underlying withanolide biosynthesis in ashwagandha</title>
<link>https://hdl.handle.net/1721.1/159914</link>
<description>Elucidation of gene clusters underlying withanolide biosynthesis in ashwagandha
Reynolds, Erin E.
Withanolides are medicinally important steroidal lactones, known for their anti-inflammatory, anti-cancer, and adaptogenic properties, produced by Withania somnifera (ashwagandha) among other Solanaceae family plants. However, the biosynthetic pathway to withanolides is largely unknown, preventing scale-up and hindering pharmaceutical applications. In this thesis, we report a chromosome-scale assembly of the W. somnifera genome, which we use for biosynthetic gene cluster mining. We identify two biosynthetic gene clusters likely involved in withanolide biosynthesis and explore some aspects of their evolution. The identified clusters are among the largest found in plants to date, and they exhibit an unusual tissue-specific subcluster structure. Next, we characterize the genes in the identified biosynthetic gene clusters using heterologous expression in yeast and tobacco, in conjunction with in vitro enzyme assays. We discover two cytochromes P450 (CYP87G1 and CYP749B2) and a short-chain dehydrogenase (SDH2) responsible for formation of the lactone ring on the sterol side chain, a key chemical feature of withanolides. Two additional P450s (CYP88C7 and CYP88C10) and a sulfotransferase (SULF1) generate the characteristic A-ring structure of withanolides, featuring a C₁ ketone and C₂-C₃ unsaturation. The discovery of SULF1 as a core withanolide pathway enzyme challenges the conventional view of sulfotransferases as tailoring enzymes and suggests a wider role for this enzyme family in plant secondary metabolism. This work opens new avenues for the sustainable production of withanolides through biomanufacturing and for drug development leveraging the withanolide scaffold.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159914</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging System-Level Analyses and Techno-Economic Modeling to Inform the Viability of Electrochemically-Mediated CO₂ Separation</title>
<link>https://hdl.handle.net/1721.1/159913</link>
<description>Leveraging System-Level Analyses and Techno-Economic Modeling to Inform the Viability of Electrochemically-Mediated CO₂ Separation
Ripley-Kenyon, Katelyn M.
Replacing fossil fuels with renewable energy and removing carbon dioxide (CO₂) via carbon capture, utilization, and storage (CCUS) are essential strategies for addressing the climate crisis and achieving net-zero emissions by 2050. While renewable energy is predicted to supply more than 60% of net electricity generation in the United States by mid-century, it is also predicted that coal and natural gas plants will remain operational in the near term to meet growing energy demands. This, coupled with the persistence of hard-to-decarbonize processes, requires point-source capture technologies to mitigate remaining CO₂ emissions. State-of-the-art CO₂ separation systems are typically based on low-efficiency temperature-swing cycles that exploit the natural affinity of alkanolamines for CO₂ at ambient conditions. Alternatively, electrochemical capture systems may enable CO₂ removal from flue gas streams at higher energetic efficiencies while also offering more modular and scalable designs. However, direct comparisons between the thermochemical and electrochemical approaches are scant, likely due to the nascency of the latter.&#13;
&#13;
In this thesis, I develop modeling frameworks that enable system-level comparisons of two types of electrochemical CO₂ capture (eCCC) technologies and the incumbent thermochemical, amine-based capture technologies. I begin by developing a reactive absorption model to predict the absorption column sizes required in “4-stage” eCCC systems (i.e., comprising an electrochemical reactor, absorption column, and flash tank). I use the model to inform operating conditions and molecular properties that will allow these processes to utilize columns that are comparable in size to those presently deployed in thermochemical systems. While this helps address capital cost comparisons, to couple these effects with operating costs I next combine the absorption column model with an electrochemical cell model to predict the levelized cost of capture (LCOC) of the capture platforms at a coal pilot plant facility. This techno-economic model allows for thorough investigation of the property sets, operating conditions, and target cost factors that will lead to conditions where the electrochemical systems can compete economically with amine scrubbing systems. Next, this in-house model is used to probe the effects of scale and flue gas composition on the overall LCOC to provide commentary on the conditions and costs likely for operation at commercial-scale plants. Finally, I apply my knowledge of decarbonization efforts to inform realistic pathways for decarbonizing cement production facilities in the near-term. Ultimately, the goal of this thesis is to lay the foundation for quantitative comparisons between different technologies available for point-source capture applications while also offering models that can be used to investigate the viability of promising molecules and electrolytes in eCCC.
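
The levelized cost of capture that anchors these comparisons follows standard annualization arithmetic, which can be sketched as below; the formula is generic, and the numbers in the usage example are invented rather than taken from the thesis:

```python
def capital_recovery_factor(rate, years):
    """CRF converts an upfront capital cost into an equivalent annual payment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost_of_capture(capex, opex_per_year, tonnes_co2_per_year,
                              rate=0.08, years=20):
    """LCOC in $/tonne: (annualized CAPEX + annual OPEX) / annual CO2 captured."""
    annual_capex = capex * capital_recovery_factor(rate, years)
    return (annual_capex + opex_per_year) / tonnes_co2_per_year

# Hypothetical plant: $100M capital, $10M/yr operating, 500,000 t CO2/yr,
# 8% discount rate over a 20-year life -> roughly $40/tonne captured.
lcoc = levelized_cost_of_capture(100e6, 10e6, 500_000)
```

Comparing such per-tonne figures across electrochemical and amine-based platforms is what allows capital-cost effects (column sizing) and operating-cost effects (energy per mole of CO₂) to be traded off on one axis.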
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159913</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uniqueness problems in mean curvature flow</title>
<link>https://hdl.handle.net/1721.1/159912</link>
<description>Uniqueness problems in mean curvature flow
Lee, Tang-Kai
We investigate uniqueness phenomena in mean curvature flow, focusing on two central problems: the behavior of the flow near singularities and the extension of the flow beyond singular times. These questions have significant applications in geometry, topology, and analysis. For the first problem, with Jingze Zhu, we formulate a canonical way to study the limit model near a singularity of a generic closed mean curvature flow of surfaces. Using this framework, we establish a uniqueness result for singularity models. As a consequence, we resolve a uniqueness problem for gradient flow lines in ordinary differential equation theory, related to a question posed by Thom and Arnold, and revisited by Colding–Minicozzi. For the second problem, with Alec Payne, we examine the level set flow as a weak formulation that ensures long-time existence and uniqueness of mean curvature flow past singularities. This approach, however, can lead to fattening, a phenomenon reflecting genuine non-uniqueness of the extended flow. While genuine uniqueness cannot always be expected, we address this challenge by establishing an intersection principle for comparing two intersecting flows. We prove that level set flows satisfy this principle in the absence of non-uniqueness. Finally, with Larry Guth, we explore a problem concerning homotopy classes of maps between spheres. Recent progress on this problem relies on delicate analysis of high codimensional graphical mean curvature flow. We use a direct method to refine a homotopy criterion for maps between low-dimensional spheres.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159912</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Steenrod operations and Fukaya categories</title>
<link>https://hdl.handle.net/1721.1/159911</link>
<description>Quantum Steenrod operations and Fukaya categories
Chen, Zihong
The recent introduction of mod p equivariant operations to symplectic Gromov-Witten theory has fueled exciting developments in the field. In this thesis, we develop new tools for understanding these operations and explore an application to the quantum connection. In one direction, we construct certain operations on the equivariant Hochschild (co)homology of a general A∞-category. We show that when applied to the Fukaya category of a nondegenerate closed monotone symplectic manifold, this construction can be identified with the quantum Steenrod operations via Ganatra’s cyclic open-closed maps. A key ingredient in this identification is a new homotopy theoretic framework for studying various equivariant open-closed maps at once, using a combination of cyclic categories, edgewise subdivision and Abouzaid-Groman-Varolgunes’ operadic Floer theory. In another direction, we utilize quantum Steenrod operations, and Lee’s observation that they are related to the p-curvature of the quantum connection, to study singularities of the quantum connection in characteristic 0, and prove the exponential type conjecture for all closed monotone symplectic manifolds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159911</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organic influences on hydrated magnesium carbonate mineral formation</title>
<link>https://hdl.handle.net/1721.1/159910</link>
<description>Organic influences on hydrated magnesium carbonate mineral formation
Baldes, Matthew J.
Carbonate minerals retain organic compounds and preserve textural and chemical evidence of microbial activity early in the geologic record of Earth. For this reason, magnesium carbonates thought to be associated with lacustrine deposits in Jezero Crater are an important target of the Mars Sample Return Mission. The presence of hydrated magnesium carbonates in the deposits suggests that these minerals experienced minimal postdepositional alteration and may have the potential to preserve biosignatures from a habitable early martian environment. Microbial influences on calcium carbonate precipitation are well documented, but magnesium carbonates have received considerably less attention as a result of their relative scarcity in terrestrial deposits. The few modern lacustrine environments where hydrated magnesium carbonate minerals form have been proposed as analogs for Jezero Crater. Precipitation often occurs in association with microbial communities in these alkaline lake systems, but little is known about the potential for hydrated magnesium carbonates to preserve biosignatures, especially in depositional environments analogous to the carbonate sediments and coatings identified by the Perseverance Rover in Jezero Crater. This thesis explores organic influences on hydrated magnesium carbonate precipitation and the potential for these minerals to retain evidence of microbial activity. I begin by culturing cyanobacterial biofilms in solutions that replicate natural lacustrine environments where hydrated magnesium carbonate precipitation occurs. I designed experiments to isolate the role of cyanobacterial extracellular polymeric substances (EPS) in mediating the mineralogy of hydrated magnesium carbonate precipitates and the rate of amorphous magnesium carbonate (AMC) maturation. 
I also compared the precipitates that formed in association with cyanobacterial biofilms to those formed under inorganic conditions to determine if hydrated magnesium carbonates preserve biosignatures. The results from these experiments demonstrate that cyanobacterial EPS promotes the early stabilization of the hydrated magnesium carbonate mineral dypingite and that biologically associated precipitates encapsulate cells and retain organic compounds detectable with Raman spectroscopy. I complement this laboratory work by seeking to identify similar spectroscopic and textural evidence of microbial activity in a range of carbonate deposits from Lake Salda, Türkiye, including sands, crusts on coarse siliciclastic sediments, and alteration veins in serpentinized ultramafic bedrock. Analyses of these samples revealed that hydromagnesite sands and crusts have a higher potential to preserve biosignatures than dolomite veins in a system analogous to Jezero Crater.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159910</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A probabilistic perspective on graph coloring</title>
<link>https://hdl.handle.net/1721.1/159909</link>
<description>A probabilistic perspective on graph coloring
Mani, Nitya
Graph coloring is perhaps the most fundamental, deeply studied, and well-known area in graph theory, with many of the most basic questions in the field still wide open. Graph coloring questions often have wide-ranging applications across fields as diverse as statistical physics, theoretical computer science, route planning, disease spread, cybersecurity, circuit design, and network science more broadly. This thesis studies graph coloring from a probabilist’s perspective, focusing on graph coloring problems that share an underlying theme: given an exponentially large family of objects derived from a graph vertex-coloring, can we understand what a typical or random object in this large family looks like without manually searching through exponentially many alternatives? The majority of this thesis is centered on two basic graph coloring problems, each of which has been heavily studied and comes with a rich history and many applications. We begin this thesis by establishing a fourth moment phenomenon for the number of monochromatic copies of any fixed subgraph in a given graph sequence (when given at least eight colors). We also study, and in many special cases characterize, failures of a fourth moment phenomenon to hold in the two-color regime. We then continue to our second major topic of study. We essentially resolve a folklore conjecture about the uniform distribution of proper colorings of a bounded-degree tree. As a consequence, we are able to make significant progress towards a longstanding conjecture in the statistical physics community and one of the oldest and most basic still-open questions in the field of approximate counting and sampling. We also disprove the efficacy of a particular, popular approach to tackling this pair of conjectures. 
Finally, we conclude the thesis by taking a different approach to studying typical samples from exponentially large families, applying the graph container method to study two coloring-adjacent questions: upper bounding the number of error-correcting codes and understanding the structure of typical unit-distance-avoiding sets in R².
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159909</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Bridging and Governing Decentralized Communities</title>
<link>https://hdl.handle.net/1721.1/159908</link>
<description>Towards Bridging and Governing Decentralized Communities
Saldías Fuentes, Belén Carolina
"Unless the spaces in a building are arranged in a sequence which corresponds to their degrees of privacy the visits made by strangers, friends, guests, clients, family, will always be a little awkward." (Alexander, 1977) — Unlike physical spaces, where we can move seamlessly between different environments with varying degrees of privacy, much of our online experience occurs in noisy, crowded, and imposed public areas. This can undermine meaningful engagement, deepen social divides, and exacerbate anxiety, polarization, and distrust arising from unnecessary friction and misunderstandings. Moreover, while different communities with distinct values and norms often share these same public venues, they are typically subject to one-size-fits-all policies that fail to address local contexts. Consequently, toxic behavior is policed at the platform level rather than by the communities themselves, leading to oversimplified governance solutions that favor some communities while silencing others.&#13;
&#13;
Fortunately, emerging strategies in decentralized protocols and networks have begun to change this dynamic. Decentralized systems designed for local governance can empower communities to create more nuanced and context-sensitive rules. However, these approaches remain largely inaccessible to non-technical users and risk creating a "paradox of decentralization," wherein isolated servers or communities potentially deepen echo chambers. This thesis contends that by placing community governance and user agency at the center of online platforms—and by leveraging advances in large language models (LLMs)—we can build healthier digital spaces that foster pro-social interactions while respecting individual groups' autonomy.&#13;
&#13;
To explore these possibilities, this dissertation examines how intentional design principles can promote constructive communication in decentralized contexts. First, it presents a large-scale historical Reddit dataset, encompassing over 230K removed posts across more than 19K mission-defined communities, that captures a diverse range of speech, community norms, and moderation approaches. By analyzing over 60K community rules, I propose an empirically grounded norms schema and reveal how a community's purpose statement correlates with pro-social behavior reflected in community-centered discourse.&#13;
&#13;
Building on these insights, the dissertation next tackles the challenge of shifting from centralized, top-down moderation to distributed, community-specific content governance. While centralized methods provide highly generalizable moderation powered by advanced AI, they hinder specificity and community-specific definitions of behavior, limiting community and user participation in shaping how their content is moderated and ranked. I prototype and evaluate tools for (i) explainable, decentralized content moderation—where interpretable models illuminate why a post is flagged or removed—and (ii) surfacing unspoken differences in the definitions and understanding of seemingly similar norms across communities. These prototypes show how LLMs can assist by clarifying value mismatches, supporting local decision-making, and enabling communities to mediate misunderstandings across divides.&#13;
&#13;
Finally, I consolidate these findings in a real-world social network platform called Odessa—a DEcentralized Social Systems App—deliberately designed as a user-friendly, decentralized environment that allows communities to define—and iteratively refine—their own norms, moderation, ranking algorithms, and, more generally, governance strategies. Through system deployment and user experiments, I investigate how participants navigate local governance controls and interact within bridged spaces across communities. Odessa's bridging mechanisms illustrate how communities can preserve distinct values without sacrificing cross-community connections. By open-sourcing Odessa, I provide a framework for researchers and practitioners to test human-AI partnerships in governance and a learning environment for apprentices. The results presented here underscore both the opportunities and challenges in democratizing content moderation, highlighting the pivotal role of transparent AI in promoting trust and mutual understanding.&#13;
&#13;
This dissertation makes the case that future social media ecosystems should emphasize bottom-up, community-driven governance aided by interpretable AI tools. By enabling communities to shape their social expectations through purpose and norms, explain decisions through transparent AI and access to human rationales, and forge connections with other communities, we can cultivate online environments where pro-social discourse thrives. In doing so, we move beyond merely "fighting toxicity" toward intentionally designing spaces that support constructive dialogue and genuine community development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159908</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Optimal and Approximate Algorithms in Optimization Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/159907</link>
<description>Efficient Optimal and Approximate Algorithms in Optimization Under Uncertainty
Gonzalez, Victor
Some of the most important and challenging decisions must be made with incomplete information. This lack of information can concern events or conditions that have already happened or have yet to occur, the decisions that are available, and the consequences of those decisions. Optimization under uncertainty has applications in a wide range of settings including hiring (How good is the candidate? Will a better candidate arrive?), disaster relief (Where are people who need help? How long do rescue teams have before they are in a more critical condition?), and manufacturing (How much will we be able to manufacture? What orders should we accept?). These problems can be solved naively by reformulating them as deterministic problems, but this can dramatically increase the size of the problem, making the naive reformulation computationally expensive to solve. We aim to develop efficient algorithms to solve optimization problems under uncertainty and construct approximate algorithms that quickly approximate the solution in instances that are too large to solve exactly. In Chapter 2, we discuss a secretary problem with generalized decisions. The goal is to “rank” incoming items arriving in an uncertain order. Once an item arrives, it must be assigned a rank before the next item arrives, and this rank cannot be changed when new items arrive. We exploit the structure of the problem using exact dynamic programming to construct an algorithm that computes an exact solution. Additionally, we develop heuristics that can be used to construct an approximate solution in larger problems. In Chapter 3, we discuss a search and rescue drone problem. In this problem, we construct drone routes to maximize the number of people in need of rescue who can be located in the event of a natural disaster. 
After constructing a nonlinear mixed-integer program (MIP), we construct simplifying policies that allow us to solve it in real time, enabling updates as new information is learned. In Chapter 4, we discuss a two-stage stochastic knapsack problem. In manufacturing settings, orders must be accepted or rejected before it is known how many resources will be available to fill those orders. We formalize this problem as a two-stage stochastic knapsack problem. We construct lower bounds based on feasible solutions and upper bounds based on the optimal solutions of a relaxation of the problem. We then use these bounds to construct optimal solutions that outperform traditional solvers. We then develop algorithms that construct approximate solutions for larger instances that perform well compared to the optimal solutions.
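
The bounding strategy for the two-stage stochastic knapsack can be sketched in a toy setting (my own simplification, not the thesis's formulation; items and scenarios are invented): the expectation of per-scenario fractional-knapsack optima is an upper bound on the two-stage optimum, since it relaxes both integrality and the first-stage commitment, while any feasible recourse policy evaluated in expectation is a lower bound.

```python
# Orders (value, weight) and scenarios of realized capacity with probabilities.
values  = [10, 7, 5, 3]
weights = [4, 3, 2, 1]
scenarios = [(5, 0.5), (8, 0.5)]  # (capacity, probability)

def fractional_knapsack(vals, wts, cap):
    """Greedy by value density; optimal for the fractional relaxation."""
    total = 0.0
    for v, w in sorted(zip(vals, wts), key=lambda t: t[0] / t[1], reverse=True):
        take = min(w, cap)
        total += v * take / w
        cap -= take
        if cap <= 0:
            break
    return total

def greedy_fill(vals, wts, cap):
    """Feasible 0/1 filling by value density: a valid recourse heuristic."""
    total = 0
    for v, w in sorted(zip(vals, wts), key=lambda t: t[0] / t[1], reverse=True):
        if w <= cap:
            total += v
            cap -= w
    return total

# Expected fractional optimum (upper bound) vs. expected heuristic value
# (lower bound); the true two-stage optimum is sandwiched between them.
upper = sum(p * fractional_knapsack(values, weights, cap) for cap, p in scenarios)
lower = sum(p * greedy_fill(values, weights, cap) for cap, p in scenarios)
```

In a branch-and-bound scheme, such bounds let one prune acceptance decisions whose upper bound falls below the best known feasible value, which is how bound-driven methods can outpace a generic solver on the deterministic-equivalent formulation.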
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159907</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Adaptive Robust Optimization Approach to Electricity Markets Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/159906</link>
<description>A Unified Adaptive Robust Optimization Approach to Electricity Markets Under Uncertainty
Koulouras, Angelos Georgio
Electricity grid operations in the US rely heavily on two short-term markets: the Day-Ahead Market (DAM) and the Real-Time Market (RTM). Although the DAM is the cornerstone of electricity markets, it does not adapt to the fast-changing reality in the grid, such as the increased uncertainties due to renewables. Therefore, the existing deterministic market suffers from inherent uncertainties and creates inefficiencies, which require ad hoc and suboptimal solutions, like out-of-market interventions by the grid operators. To address these issues, in this thesis, we advocate for an adaptive mindset in electricity markets and propose a unified and adaptive redesign of the DAM. The proposed market co-optimizes the existing DAM and out-of-market processes, like the Reliability Unit Commitment (RUC), under adaptive robust optimization (ARO). Through ARO, we explicitly procure and price flexibility using adaptive reserve products that provide generation plans contingent on the uncertainty in the RTM. The grid uncertainty is captured through uncertainty sets that contain all the scenarios against which the market operator hedges, while it is priced through new marginal pricing mechanisms. In Chapter 2, we provide marginal pricing for uncertainty in ARO as a technical enabler of the proposed market. We derive locational marginal prices for unit commitment problems with ARO under load and capacity uncertainty and provide guarantees on the participant incentives under worst-case uncertainty. These pricing mechanisms are then used in Chapter 3, which features the redesign of the DAM. Specifically, the proposed DAM eliminates RUC-like processes by introducing deterministic reserve products that were previously procured in a nontransparent way by the market operators. It also hedges against load forecast errors by using adaptive reserve products that reward participant flexibility.
The overall design, which is also applied to ISO New England market data, increases the social welfare and reliability in the market and reduces the arbitrage opportunities. Finally, in Chapter 4, we provide data-driven uncertainty calibration methods for the proposed market. We determine the size of the uncertainty set using machine learning models and mixed-integer optimization, leveraging historical data that consist of covariates or features. This method has been successfully applied to wind generation forecasts from a vendor that caters to a large US grid operator.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159906</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the external forcing of Indian Ocean climate variability&#13;
across timescales</title>
<link>https://hdl.handle.net/1721.1/159897</link>
<description>On the external forcing of Indian Ocean climate variability&#13;
across timescales
Tiger, Benjamin H.
It is imperative to understand the dynamics of external climate forcings and the nature of the climate system’s responses for improved predictability. These forcings include low-probability, high-impact events like explosive volcanic eruptions as well as the continued injection of anthropogenic greenhouse gases into the atmosphere. This thesis explores how external forcings affect Indian Ocean climate in the past, present, and future using paleoclimate archives in conjunction with observational and climate model data. Chapter 2 presents a novel geochemical stalagmite record from northern Madagascar that spans the end of the last glacial period. Stable isotope and trace metal proxies indicate drier conditions in response to North Atlantic cooling events, such as Heinrich stadials, and wetter conditions during North Atlantic warming events, such as the Bølling–Allerød. These responses are opposite to what would be expected from north-south shifts in the Intertropical Convergence Zone. Instead, we hypothesize that west-east tropical Indian Ocean temperature gradient variability akin to the modern-day Indian Ocean Dipole explains the consistent hydroclimate response to North Atlantic forcing reconstructed at eastern African sites. Chapter 3 explores the effects of volcanic eruptions on interannual Indo-Pacific climate variability using an ensemble of last millennium simulations. Following the largest tropical eruptions, these simulations demonstrate a consistent negative Indian Ocean Dipole response which leads an El Niño. This response scales with eruption intensity and persists for up to 8 years for the strongest events. We also find that Interdecadal Pacific Oscillation phasing at the time of eruption preconditions the initial Indian Ocean Dipole response via low frequency thermocline depth modulation.
Finally, in Chapter 4 we use marine sedimentary archives in combination with climate simulations to expand on the Atlantic-Indian Ocean teleconnection hypothesized in Chapter 2. The reconstructed west-east surface temperature gradient responds in lockstep to previous instances of Atlantic Meridional Overturning Circulation (AMOC) variability during the last glacial period, such as Heinrich stadials, the Bølling–Allerød, and the Younger Dryas. An analysis of single-forcing simulations featuring meltwater addition to the North Atlantic under glacial and interglacial boundary conditions further demonstrates this inter-basin connectivity. We find that in simulations of high greenhouse gas emission scenarios, uncertainties in future Indian Ocean temperature and precipitation patterns are attributable to uncertainties in the magnitude of future AMOC weakening. This thesis bridges disparate timescales and data sources to gain insight into how the external forcing of the Earth system works at a fundamental level, from geochemical records of abrupt climate transitions during the last ice age to numerical simulations of the Atlantic overturning slowing by the end of the 21st century.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159897</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization and Quantification of Solid Electrolyte Interphases for Composition-Functionality Relationships at Lithium Metal Electrodes</title>
<link>https://hdl.handle.net/1721.1/159894</link>
<description>Characterization and Quantification of Solid Electrolyte Interphases for Composition-Functionality Relationships at Lithium Metal Electrodes
Steinberg, Katherine Julia
Lithium (Li) has the lowest electrochemical reduction potential and density of any metal, making it an exceptionally desirable anode material for batteries and a powerful chemical reductant. However, the reducing nature that makes Li so useful also brings challenges: it is thermodynamically unstable in practical liquid electrolytes, driving the formation of a passivation film called the solid electrolyte interphase (SEI). The SEI mediates transport and reactivity at the Li surface, and in practical systems it both consumes active Li directly and leads to spatial heterogeneity in fluxes to and from the lithium surface, resulting in inefficiency in plating and stripping. Together, these effects make the SEI the most important factor determining the efficiency of Li electrochemistry, but its nanoscale, heterogeneous, and reactive nature makes it extremely challenging to study experimentally. As a result, existing understanding of the impact of composition and structure on SEI functionality is limited. This thesis aims to enhance conceptual understanding in this space, combining multimodal characterization, the design and application of informative model systems, and the quantification of key phases to reveal mechanistic insights that advance understanding of composition-functionality relationships at Li interfaces. &#13;
&#13;
To begin, this work focuses on the role of the SEI in Li-mediated electrochemical ammonia synthesis (LiMEAS), one of the most promising electrochemical pathways for nitrogen fixation. Here, quantification of major side products, multiscale imaging, and spectroscopic analysis were conducted methodically in four model systems that separately introduced nitrogen gas and a proton donor. This study revealed that the electrolyte-derived SEI inhibits reactivity between Li and nitrogen, and that the proton donor is needed to disrupt this passivating interphase. &#13;
&#13;
Next, focus shifted to Li metal battery anodes. Lithium carbonate has long been considered beneficial in anode SEI, but the field has lacked a mechanistic explanation for its effects. Here, lithium carbonate was studied through the development of two model systems, a model SEI formed by sequentially reacting oxygen and carbon dioxide with metallic lithium, and Li-copper (Cu) half cells saturated with either argon or carbon dioxide. Through electrochemical impedance analysis on the model SEI, lithium carbonate was found to exhibit elevated conductivity compared to other common inorganic SEI materials. Cycling and subsequent titration analysis of Li-Cu cells revealed that carbon dioxide addition led to less inactive lithium formation during cycling, and that this avoided capacity loss was the driver behind increased Coulombic efficiency (CE) in numerous electrolytes. &#13;
&#13;
Finally, an analysis was conducted to account for unresolved materials in a set of techniques for the quantitative analysis of Li anode Coulombic inefficiencies. These techniques directly quantify capacity losses from the formation of inactive lithium and several SEI materials, but cannot distinguish between residuals lost during cycling or sample processing and SEI materials not yet resolvable by quantitative techniques. Here, a set of measurements was developed to explicitly quantify material losses during sample processing steps. This work confirmed that material losses do not alter broader trends between electrolytes, validating simpler approaches for electrolyte comparisons while also offering a protocol for use when quantitative material accounting is of particular importance. &#13;
&#13;
Together, these studies illustrate a multimodal approach for deriving mechanistic insights into the relationships between SEI composition and electrode performance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159894</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher dimensional fractal uncertainty</title>
<link>https://hdl.handle.net/1721.1/159893</link>
<description>Higher dimensional fractal uncertainty
Cohen, Alex
We prove that if a fractal set in Rᵈ avoids lines in a certain quantitative sense, which we call line porosity, then it has a fractal uncertainty principle. The main ingredient is a new higher dimensional Beurling–Malliavin multiplier theorem, which allows us to construct band-limited functions that decay rapidly on line porous sets. To prove this theorem, we first explicitly construct certain plurisubharmonic functions on Cᵈ. Then, following Bourgain, we use Hörmander’s L² theory for the ∂̄ equation to construct band-limited functions. The main theorem has since been applied by Kim and Miller to lower bounds for the mass of eigenfunctions on higher dimensional hyperbolic manifolds.
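For context, a fractal uncertainty principle in the sense of Bourgain and Dyatlov can be stated as follows (a standard generic formulation, not the thesis's precise statement):

```latex
% X(h), Y(h) are h-neighborhoods of fractal sets X, Y \subset \mathbb{R}^d,
% and \mathcal{F}_h is the semiclassical Fourier transform.
\[
  \bigl\| \mathbb{1}_{X(h)} \, \mathcal{F}_h \, \mathbb{1}_{Y(h)}
  \bigr\|_{L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^d)} = O(h^{\beta}),
  \qquad h \to 0,
\]
\[
  (\mathcal{F}_h u)(\xi) = (2\pi h)^{-d/2}
  \int_{\mathbb{R}^d} e^{-i x \cdot \xi / h}\, u(x)\, dx,
\]
% for some \beta > 0: no function can simultaneously concentrate
% on X(h) in position and on Y(h) in frequency.
```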
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159893</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Invertible Functorial Field Theory for Symmetry Breaking and Interactions in Quantum Field Theory</title>
<link>https://hdl.handle.net/1721.1/159892</link>
<description>Invertible Functorial Field Theory for Symmetry Breaking and Interactions in Quantum Field Theory
Krulewski, Cameron
We apply invertible field theories to study two questions in quantum field theory. Specifically, we study reflection-positive fully-extended invertible field theories on manifolds with twisted spin structures, which are computed as Anderson-dual bordism groups [1, 2].&#13;
&#13;
In high energy physics, invertible field theories represent anomalies of quantum field theories. Our first application is toward ’t Hooft anomaly matching—a method first developed in the 1980s in which one treats anomalies as invariants of theories of interest and uses them to compute how quantum field theories change under physical processes. Specifically, we model three related processes around a form of spontaneous symmetry breaking via a charged order parameter using a twisted Gysin sequence of Anderson-dual bordism groups. We study the Smith maps of Madsen-Tillmann spectra that underlie the sequence, collecting examples and cataloging periodicities. Finally, we compute an extensive set of examples of physical interest and draw physical predictions from the results.&#13;
&#13;
In condensed matter physics, invertible field theories model the low energy field theories of symmetry-protected topological phases (SPTs). In this second application, we develop and compute homotopical free-to-interacting maps to compare two classifications of fermionic SPTs: those for free (i.e. non-interacting) models, and more general interacting classifications. These maps contribute to what has been a prolific line of research in the physics literature for the past fifteen years. Generalizing Freed–Hopkins [1], we construct maps from K-theory to twisted spin IFTs using T-duality and twisted versions of the spin orientation of K-theory [3]. We focus on two situations: weak phases [4, 5], which are SPTs protected by discrete translation symmetry, and primed phases [6], which are closely related to the famous tenfold way [7, 8], but which have a very different interacting classification. In the latter case, we demonstrate the dependence of the interacting classification on more than the Morita class of the symmetry algebra.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159892</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Tradeoffs and Symmetry in Polynomial Nonnegativity</title>
<link>https://hdl.handle.net/1721.1/159891</link>
<description>Computational Tradeoffs and Symmetry in Polynomial Nonnegativity
Harris, Mitchell
Understanding when a polynomial is nonnegative on a region is a fundamental problem in applied mathematics. Although exact conditions for nonnegativity are computationally intractable, there has been a surge of recent work giving sufficient conditions for nonnegativity to address its many practical applications. A major trend in this direction has been the use of convex optimization to characterize polynomials that are sums of squares (SOS); nevertheless, this well-studied condition can be computationally intensive for polynomials of moderate degree and dimension. &#13;
This thesis addresses the challenge of balancing computational cost against the strength of sufficient conditions for nonnegativity. We make progress towards bridging the gap between simple but crude sufficient conditions, and the more powerful but expensive SOS approach.&#13;
In the first part, we introduce new certificates of nonnegativity that may be used when SOS is too expensive yet cheaper sufficient conditions are too conservative. For this, we leverage different features of the polynomial, such as its Bernstein coefficients, a lower-degree interpolant, or its harmonic decomposition.&#13;
In the second part, we construct coordinate-invariant sufficient conditions for nonnegativity and study the symmetry properties of the space of Gram matrices. By considering it as a representation of GL(n,R) and combining this module structure with classical invariant theory, we construct an explicit equivariant map for nonnegativity certification. We further introduce an alternative approach using equivariant neural networks, analyzing their benefits and limitations.
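The Gram-matrix view of SOS certification can be illustrated with a minimal hand-worked sketch (the polynomial and its factorization are illustrative assumptions, not from the thesis; in practice the positive semidefinite Gram matrix is found by semidefinite programming rather than by hand):

```python
# Toy SOS certificate: p(x) = x^4 + 2x^2 + 1 is a sum of squares because
# p(x) = z^T Q z for the monomial vector z = [1, x, x^2] and a PSD matrix Q.
Q = [[1.0, 0.0, 1.0],
     [0.0, 0.0, 0.0],
     [1.0, 0.0, 1.0]]

def gram_value(Q, x):
    """Evaluate the quadratic form z^T Q z at z = [1, x, x^2]."""
    z = [1.0, x, x * x]
    return sum(Q[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

# Q factors as L L^T with L = [1, 0, 1], so Q is PSD and
# p(x) = (1 + x^2)^2 >= 0 everywhere -- the nonnegativity certificate.
L = [1.0, 0.0, 1.0]
assert all(abs(Q[i][j] - L[i] * L[j]) < 1e-12
           for i in range(3) for j in range(3))

# Sanity check: the Gram form reproduces p at sample points.
for x in [-2.0, -0.5, 0.0, 1.3]:
    p = x**4 + 2 * x**2 + 1
    assert abs(gram_value(Q, x) - p) < 1e-9
```

The computational cost of the SOS approach comes from the size of this Gram matrix, which grows combinatorially with degree and dimension; the cheaper certificates above trade some of that expressive power for tractability.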
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159891</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affine Springer Fibers and the Kazhdan-Lusztig Map</title>
<link>https://hdl.handle.net/1721.1/159890</link>
<description>Affine Springer Fibers and the Kazhdan-Lusztig Map
Chua, Anlong
Let G be a connected reductive group with Lie algebra g and Weyl group W. Let P ⊂ G((t)) be a parahoric subgroup with Levi quotient Gₚ. Using the topology of Lie P, Kazhdan and Lusztig define a map from nilpotent orbits in Lie Gₚ to conjugacy classes in W. This thesis proves compatibilities between Kazhdan-Lusztig maps associated to different parahoric subgroups, as well as the Kazhdan-Lusztig map for the Langlands dual. These compatibilities come from studying the W-representation on the cohomology of affine Springer fibers. The main tool is Yun’s Global Springer Theory. We give two applications of these compatibilities. The first is an affine analog of the classical picture relating singular supports of IC sheaves on the flag variety with special nilpotent orbits. The second is a resolution of Lusztig’s conjecture that strata can be described by fibers of (parahoric) Kazhdan-Lusztig maps.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159890</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Changing Role of Reactive Nitrogen in the Troposphere</title>
<link>https://hdl.handle.net/1721.1/159889</link>
<description>The Changing Role of Reactive Nitrogen in the Troposphere
Dutta, Ishir
Molecular nitrogen (N₂) is the most abundant molecule in the Earth’s atmosphere, and nitrogen is one of the essential ingredients for life as we know it. Human activities, especially over the last century, have radically perturbed the natural nitrogen cycle, primarily via emissions of reduced nitrogen from the production and use of fertilizer for agriculture and of oxidized nitrogen from the combustion of fossil fuels. Nitrogen oxides play a central role in driving tropospheric chemistry and are a key ingredient of fine particulate matter, acid rain, and ozone. However, despite this long-understood importance of reactive oxidized nitrogen (NOy) species, even modern chemical transport models have struggled to accurately represent their chemistry.&#13;
&#13;
This thesis spans three projects that seek to characterize and explain possible sources of this uncertainty. The first project presents a comprehensive budget of reactive oxidized nitrogen in the troposphere using a state-of-the-science chemical transport model, and observational constraints for this budget from remote troposphere flight campaign data. We also provide modeled estimates for the chemical fluxes between key NOy species, finding that species beyond those that have been the foci of previous work play a crucial role in driving overall chemical cycling. In the second project we explore the sensitivity of this NOy budget to uncertain multiphase chemistry, including the photolysis of nitrate aerosol, the reactive uptake of nitrogen dioxide on aerosol surfaces, and the uptake of nitric acid on dust. We find that these processes may have substantial regional or temporal importance, but they have limited effects on the global NOy budget and are insufficient to explain inter-model discrepancies. Finally, we investigate the utility of long-term wet and dry deposition measurements made in the continental United States as a constraint on regional anthropogenic emissions trends of acid rain precursors (nitrogen and sulfur oxides). We find that dry deposition fluxes follow anthropogenic emissions trends, and wet deposition fluxes are likely more representative of total regional emissions (natural and anthropogenic). Taken together, these studies provide novel, holistic constraints on reactive oxidized nitrogen and identify key chemical processes that govern the fate of NOy in the troposphere. As anthropogenic emissions continue to decline and the effects of climate change intensify, these insights and such a framework will be useful in accurately predicting future atmospheric chemistry and composition.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159889</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric representation learning for chemical property&#13;
prediction, structure elucidation, and molecular design</title>
<link>https://hdl.handle.net/1721.1/159888</link>
<description>Geometric representation learning for chemical property&#13;
prediction, structure elucidation, and molecular design
Adams, Keir Alexander Joseph
Molecular representation learning has revolutionized computer-aided chemistry by enabling the automatic extraction of arbitrarily complex patterns from datasets of (potentially labeled) molecular structures via deep neural networks. In predictive chemistry, deep learning is increasingly being used to replace expensive physics-based simulations and even experimental measurements of chemical properties. In generative chemistry, deep generative models are powering molecular design and optimization campaigns across chemical industries. Notably, this paradigm shift has been driven by the development of sophisticated representation learning algorithms that encode and decode molecular structures with increasing geometric detail – from minimal SMILES strings to elaborate atomistic structures. Yet, many aspects of molecular structure remain neglected by leading geometric representation learning models. Accordingly, this thesis advances the geometric representation learning of molecular structure to create new opportunities in chemical property prediction, structure elucidation, and molecular design. This thesis begins by highlighting surprising failure modes of graph neural networks when predicting properties dependent on chirality and conformational isomerism. A new stereochemistry-tailored model is then developed to imbue graph networks with tetrahedral chiral expressivity while evading pitfalls plaguing preceding 2D and 3D graph networks. This thesis then examines how the geometric quality of structures encoded by 3D networks impacts their accuracy in property prediction tasks requiring the model to reason about conformational flexibility. Neglecting structural characteristics that are challenging to model is also common in computational chemistry. In nuclear magnetic resonance (NMR) prediction, for example, quantum chemical calculations typically estimate magnetic shieldings from stationary gas-phase geometries – ignoring vibrations and explicit solvent. 
To advance chemical structure elucidation, this thesis next develops neural surrogates for magnetic shielding calculations that, when integrated with molecular dynamics simulations, provide access to unprecedented accuracy in solvent-sensitive NMR spectra prediction. Finally, this thesis advances de novo molecular design by explicitly representing 3D shapes, electrostatics, and non-covalent interactions in deep generative models for small molecules. A shape-conditioned variational autoencoder is first developed to design chemically diverse molecules that can adopt desired conformational shapes, like ligand binding poses. This strategy is then generalized into a powerful interaction-aware diffusion modeling framework to comprehensively enable bioisosteric replacement in ligand-based drug design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159888</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Foundations of Flow-based Methods for Sampling and Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/159886</link>
<description>Theoretical Foundations of Flow-based Methods for Sampling and Generative Modeling
Ren, Zhi (Robert)
Sampling from an arbitrary probability distribution is a central problem in computational statistics and machine learning. Transportation of measure offers a useful approach to this problem: the idea is to construct a measurable map that pushes forward a relatively simple source distribution to the target probability distribution. One can then simulate from the target distribution by drawing samples from the source distribution and evaluating the transport map. This construction is applicable to both generative modeling and variational inference; when the map is invertible, one can also estimate the density of the target measure by evaluating the density of the pushforward of the source distribution under the inverse transport map. Over the past decade, various parameterizations of such transports have been proposed. Generally speaking, they fall into two categories: the static approach, where the displacement from x to T(x) is represented directly, and the dynamic approach, which evolves measures via a differential equation over a fictitious time. While many of these models have achieved enormous success in practical applications, their theoretical underpinnings remain largely unexplored. In this thesis, we provide a theoretical foundation for flow-based methods for sampling and generative modeling, and a unified view of both continuous and discrete-time approaches. In the first part of the thesis, we address the approximation theory of flow-based methods. In particular, we show how the regularity of the underlying ODE velocity field relates to the regularity of densities and prove related neural network approximation bounds. In addition, we show how the introduction of a time-reparameterized schedule can dramatically improve the regularity of the velocity, helping resolve potential singularities. In the second part of the thesis, we focus on the interplay between flow-based models and nonparametric statistics.
In particular, we consider pullback density estimators under these flow-based models, obtained from likelihood-based objectives. The estimators we consider arise from both discrete and continuous-time parameterizations of the transport, and the underlying function classes we consider include Hölder balls and neural networks. In all these cases, we show that they achieve near-minimax-optimal rates for learning s-smooth densities.
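The static transport idea admits a minimal one-dimensional illustration (the standard normal source and Exponential(1) target below are illustrative assumptions, not the thesis's models): composing the source CDF with the inverse target CDF yields a map whose pushforward of the source is exactly the target.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def T(x):
    """Transport map: inverse Exponential(1) CDF composed with the source CDF.
    Since phi(X) ~ Uniform(0,1), T(X) = -log(1 - phi(X)) ~ Exponential(1)."""
    return -math.log(1 - phi(x))

# Draw source samples and push them through the map.
random.seed(0)
samples = [T(random.gauss(0, 1)) for _ in range(200_000)]

# Exponential(1) has mean 1, so the empirical mean should be close to 1.
mean = sum(samples) / len(samples)
assert abs(mean - 1.0) < 0.02
```

Because T is invertible and differentiable here, the change-of-variables formula also gives the target density from the source density, which is the mechanism behind the likelihood-based pullback estimators discussed above.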
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159886</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dual Pairs and Disconnected Reductive Groups</title>
<link>https://hdl.handle.net/1721.1/159885</link>
<description>Dual Pairs and Disconnected Reductive Groups
Gaetz, Marisa
In R. Howe’s seminal paper, “Remarks on classical invariant theory,” he introduces the notion of a Lie algebra dual pair (a pair (g₁, g₂) of reductive Lie subalgebras of a Lie algebra g such that g₁ and g₂ equal each other’s centralizers in g) and the notion of a Lie group dual pair (a pair (G₁, G₂) of reductive subgroups of a reductive Lie group G such that G₁ and G₂ are each other’s centralizers in G). Both notions have since been widely used and studied. This thesis extends what is known about the classifications of complex reductive Lie group and Lie algebra dual pairs, and establishes a step towards a more general framework for understanding complex reductive Lie group dual pairs. In the first part of this thesis, we classify the reductive dual pairs in the complex classical Lie groups: GL(n, C), SL(n, C), O(n, C), SO(n, C), and Sp(2n, C). We also establish some general relationships between Lie group dual pairs and dual pairs in corresponding Lie algebras and quotient groups. These relationships lead to complete classifications of the reductive dual pairs in the complex classical Lie algebras (gl(n, C), sl(n, C), so(n, C), and sp(2n, C)) and preliminary progress towards classifying dual pairs in the projective classical groups (PGL(n, C), PSp(2n, C), PO(n, C), and PSO(n, C)). In the second part of this thesis, we complete an explicit classification of the semisimple Lie algebra dual pairs in the complex exceptional Lie algebras, initially outlined by H. Rubenthaler in a 1994 paper. This explicit classification makes Rubenthaler’s 1994 result more complete, usable, and understandable. A major obstacle to understanding reductive Lie group dual pairs is their potential disconnectedness. Inspired in part by this obstacle, in the third part of this thesis we describe the possible disconnected complex reductive algebraic groups E with component group Γ = E/E₀.
We show that there is a natural bijection between such groups E and algebraic extensions of Γ by Z(E₀). Finally, in the last part of this thesis we classify the reductive dual pairs in PGL(n, C). While the connected dual pairs in PGL(n, C) can be easily understood using tools from the first part of this thesis, the classification of the disconnected dual pairs in PGL(n, C) is much more difficult and requires tools from the third part of this thesis. This serves as the first complete classification of dual pairs in a non-classical group and as a step towards understanding how disconnectedness factors into the classification of dual pairs more generally.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159885</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Average Size of 2-Selmer Groups of Elliptic Curves in Characteristic 2</title>
<link>https://hdl.handle.net/1721.1/159884</link>
<description>The Average Size of 2-Selmer Groups of Elliptic Curves in Characteristic 2
Achenjang, Niven
Let K be the function field of a smooth curve B over a finite field k of arbitrary characteristic. We prove that the average size of the 2-Selmer groups of elliptic curves E/K is at most 1 + 2ζʙ(2)ζʙ(10), where ζʙ is the zeta function of B. In particular, in the limit as q = #k → ∞ (with the genus g(B) fixed), we see that the average size of 2-Selmer is bounded above by 3, even in “bad” characteristics. This completes the proof that the average rank of elliptic curves, over any fixed global field, is finite. Handling the case of characteristic 2 requires us to develop a new theory of integral models of 2-Selmer elements, dubbed “hyper-Weierstrass curves.”
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159884</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-shore Transformation of Breaking Random Waves&#13;
in the Surfzone</title>
<link>https://hdl.handle.net/1721.1/159883</link>
<description>Cross-shore Transformation of Breaking Random Waves&#13;
in the Surfzone
Chen, Jinshi
The transformation of breaking waves in the surfzone, including the evolution of the roller, the foamy air-water mixture on the surface of a breaking wave, and the turbulence, determines the wave-driven onshore-directed mass transport, the vertical structure of the compensating return flow (undertow), and the increase in the mean water level (setup). A two-phase Reynolds-Averaged Navier-Stokes (RANS) model and field and laboratory observations are used to study the cross-shore transformation of the roller, turbulence, and undertow resulting from irregular breaking waves. Modeled wave heights, wave spectra, setup, and undertow agree well with field and laboratory observations on barred and unbarred bathymetry. The roller forcing contributes 50% - 60% to the setup. The horizontal advection and turbulence each contribute ∼ 20% to the setup, whereas the contribution of bottom stress is largest (up to 20%) for shallow sandbar crest depths. The majority of the energy transferred to the roller is dissipated internally, while 15% - 25% of the energy in breaking waves is first transferred to the roller and then diffused back to the water column. Internal dissipation of roller energy increases with increasing depth of the sandbar crest, possibly indicating a change from plunging to spilling breakers. The momentum flux balance in the mid- and lower water column is between the wave, vertical turbulence transfers, vertical inertia, and setup, whereas near the surface the roller and pressure slope are important. Turbulence transports momentum downwards, while vertical inertia transfers momentum upwards. Turbulence production dominates the near-surface turbulence-energy-flux balance, and its penetration depth in the trough onshore of the sandbar is correlated with the local wave height. The roller thickness is related to the local wave height. Surfzone turbulence is more anisotropic than plane-wake turbulence, and is dominated by cross-shore normal stresses.
Cross-shore vertical two-dimensional anisotropy depends on the cross-shore position in the surfzone, the vertical shear of the cross-shore current, wave directional spread, frequency, and proximity to the seafloor. The three-dimensional turbulence structure is related to the total vertical current shear, and to the directions of both mean currents and waves. Horizontal turbulence length scales are larger than the vertical length scales, consistent with prior studies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159883</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Artificial Intelligence for Efficient and Synthesizable In-silico Molecular Design</title>
<link>https://hdl.handle.net/1721.1/159882</link>
<description>Advancing Artificial Intelligence for Efficient and Synthesizable In-silico Molecular Design
Gao, Wenhao
Small organic molecules possess an astronomical number of structural possibilities and a wide range of functionalities, holding immense potential to provide material-level solutions to critical societal challenges such as health and the environment. However, the discovery of molecules with functionalities tailored to specific applications remains a challenging, time-consuming, and resource-intensive process, often relying on trial-and-error experimentation. Recent advances in computational techniques—particularly in artificial intelligence—offer promising solutions to this inefficiency. These developments are paving the way toward a more systematic and efficient approach to molecular discovery, enabling the design of novel functional molecules tailored to specific needs and accelerating the development of solutions to urgent issues in health, sustainability, and energy. This thesis presents algorithmic advances in artificial intelligence, particularly deep learning, for de novo molecular discovery, framed as a black-box optimization problem with a focus on small organic molecules. The contributions span three core aspects: The first section focuses on improving the sample efficiency of molecular optimization. A central capability of any molecular design algorithm is to determine which direction to explore next within chemical space in order to identify molecules with more optimal properties, given a limited set of known examples. Due to the inherent trade-off between computational efficiency and predictive accuracy in modeling methods, it is crucial to evaluate as few candidate molecules as possible to identify the optimal structure. This section introduces the problem formulation and benchmarking efforts for sample-efficient molecular optimization, followed by several approaches aimed at enhancing efficiency. The second section addresses the challenge of ensuring synthetic accessibility during molecular design.
For small organic molecules with non-trivial syntheses, any design that cannot be realized in the lab has limited practical value. This presents a unique constraint in small molecule design that often renders direct adoption of algorithms developed for language or vision tasks ineffective. After framing the problem, this section introduces a generative modeling framework that integrates synthesis and design, ensuring that the search is constrained to synthesizable chemical space. It further introduces the concept of “generative molecular projection” and demonstrates its application in balancing sample efficiency and synthetic feasibility. The third section targets the improvement of oracle accuracy for molecular discovery. Achieving both accurate and efficient prediction of molecular properties has long been a central goal in computational chemistry. While deep learning has shown promise in breaking the traditional trade-off between accuracy and efficiency by leveraging large-scale historical data, its full potential—especially for directly learning experimentally measured bioactivities under data-scarce conditions—has yet to be realized. This section presents a benchmarking effort on applying deep learning to therapeutic-related property prediction, and introduces substrate scope contrastive learning as a strategy to learn reactivity-related patterns from published reaction datasets. Together, these three components present a systematic, data-driven methodology for small organic molecule discovery that minimizes the need for extensive domain expertise. The algorithms developed in this thesis are designed to support autonomous workflows, potentially enabling closed-loop molecular discovery that maximizes efficiency and reduces both cost and reliance on human intuition. While the demonstrations in this thesis primarily target pharmaceutical applications, the methods are task-agnostic and can be readily extended to broader material discovery efforts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159882</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Competitive Sorption in Microporous Polymer Membranes to Enhance Gas Separation Performance</title>
<link>https://hdl.handle.net/1721.1/159881</link>
<description>Leveraging Competitive Sorption in Microporous Polymer Membranes to Enhance Gas Separation Performance
Dean, Pablo A.
Chemical separations account for roughly half of the United States’ industrial energy consumption, 49% of which is attributed to distillation alone. Membrane-based systems, on the other hand, offer a more energy-efficient alternative to conventional separation processes because they do not require thermally intensive phase changes to operate. Specifically, polymer membranes with a more permanent porosity (termed “microporous”) have gained attention due to their impressive combination of permeability (throughput) and permselectivity (separation efficiency) relative to the empirically defined “upper bound” for membrane materials.&#13;
Traditionally, the permeability of a membrane for a gas is defined by the product of the gas’s diffusivity and sorption coefficient in the material.  By extension, a membrane’s permselectivity can be broken down into the product of its diffusion selectivity and sorption selectivity. Microporous polymer membranes exhibit impressive diffusion selectivity due to their small free volume elements (&lt; 2 nm) and rigid backbones. However, separating gases based primarily on size can become exceedingly difficult given that some gases differ in kinetic diameter by less than an angstrom. Instead, recent advancements in the design of microporous polymers have indicated that a phenomenon known as competitive sorption can be used to enhance separation performance by leveraging gas–polymer interactions instead of differences in gas diffusivity. This thesis investigates how the increase of sorption selectivity through competition between gases can be exploited to enhance the permselectivity of microporous polymer membranes. Specific focus is placed on the archetypal polymer of intrinsic microporosity (PIM-1) and its amine-functional analog (PIM-NH₂) to study how enhanced acid-gas (CO₂ and H₂S) sorption brought on by amine functionality positively impacts separation performance. To confirm the generalizability of these trends, competition effects in the microporous poly(arylene ether) (PAE) backbone were studied as well. To investigate more industrially viable membranes while retaining strong gas–polymer interactions afforded by the amine group, this PAE backbone was also used to develop 8 solution-processable tertiary-amine-functional analogs. Lastly, in an effort to study the effects of water vapor on CO₂-focused separations in amine-functional microporous polymer membranes, a humidified gas permeation apparatus was developed and used to measure dry and humidified CO₂ transport in PIM-1, PIM-NH₂, and a novel secondary-amine-functional analog, PIM-NHiPr. 
Taken together, this thesis focuses on the fundamentals and practical implications of leveraging competitive sorption to enhance performance in application-relevant and multi-component gas mixtures. More specifically, this work provides valuable insight regarding amine functionalization and its strong effects on sorption energetics and humidified gas transport that will help to inform future design of polymer membranes for gas separations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159881</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalytic implications of confined solvent ensembles within Lewis acid zeolites</title>
<link>https://hdl.handle.net/1721.1/159879</link>
<description>Catalytic implications of confined solvent ensembles within Lewis acid zeolites
Johnson, Blake A.
Lewis acidic zeolites are microporous crystalline materials that offer promise as catalysts for the activation and conversion of biomass-derived precursors in the liquid-phase due to their unique water-tolerance and synthetic versatility. The active site environment in zeolite catalysts is multifaceted in nature and is composed of a primary catalytic binding site, the secondary pore structure that confines such binding sites, and occluded solvent and reactant molecules that interact with adsorbed species. Moreover, Lewis acidic heteroatoms can adopt structurally diverse coordination environments that selectively catalyze different classes of chemical transformations and can be difficult to control synthetically or characterize spectroscopically. In this thesis, precise mechanistic interpretation of liquid-phase zeolite catalysis was realized through the development of synthetic, spectroscopic, and kinetic methods that decouple complex active site structures and probe the interactions that occur between confined active sites, solvent and reactant molecules, and adsorbed intermediates and transition states.&#13;
&#13;
First, we show how hydrophobic Beta zeolites containing framework Sn atoms catalyze transfer hydrogenation reactions of cyclohexanone in a 2-butanol solvent 10x faster than their hydrophilic analogues. This rate enhancement stems from the ability of hydrophobic Sn-Beta to inhibit the formation of extended liquid-like 2-butanol oligomers and promote dimeric H-bonded 2-butanol networks. The ordered H-bonding solvent network present in hydrophobic Sn-Beta stabilizes the transfer hydrogenation transition state to a greater extent than the liquid-like 2-butanol solvent present in hydrophilic Sn-Beta, giving rise to higher turnover rates on hydrophobic Sn-Beta. Additionally, reactant adsorption within hydrophobic Sn-Beta is entropically-driven by the breakup of intraporous solvent-solvent interactions, resulting in positive enthalpies of adsorption that are partially compensated by an increase in the solvent reorganization entropy. These results emphasize the ability of the zeolite pore to regulate the structure of confined non-aqueous H-bonding solvent networks, which offers an additional dimension to modulate adsorption and reactivity.&#13;
&#13;
Next, we extend our studies to understand how different intraporous alcohol networks reorganize in response to adsorbate sterics and the presence of non-H-bonding co-solvents. Here, we find that first-order rates for methyl-cyclohexanone transfer hydrogenation are ~2-5x higher than for tert-butyl-cyclohexanone, but converge in the zero-order regime across all temperatures (333-393 K) in a bulk 2-butanol solvent. These results show that, while intrinsic bond-activation steps at the active site are largely independent of molecular functionalization of the ketone reactant, adsorption within hydrophobic Sn-Beta is still driven by the breakup of intraporous solvent-solvent interactions. Furthermore, comparisons between bulk toluene or acetonitrile solvents, with 1 M 2-butanol as a reactant, show the significance of intraporous solvent for stabilizing kinetically-relevant species and the complex interdependencies between solvent and catalyst hydrophilicity. Apparent zero-order activation enthalpies and entropies increase with decreasing solvent polarity over hydrophobic zeolites indicating that the transition state is more tightly bound to the open Sn site when first-shell solvent molecules become more polarizing. Conversely, adsorption and activation entropies and enthalpies measured on hydrophilic zeolite in toluene and acetonitrile solvents are nearly identical to those measured in a bulk 2-butanol solvent, suggesting that the intraporous solvating environment in bulk, non-H-bonding co-solvents is similar to that observed when bulk 2-butanol is the solvent. &#13;
&#13;
Finally, we exploit the ability of carbonyl groups to measure electric field differences arising from the different intraporous solvent structures through the vibrational Stark effect. By measuring infrared absorption spectra of Ti-bound acetone in Beta zeolites of varying framework hydrophobicity across a wide range of non-coordinating solvents, we find unique electric field differences arising from distinct solvation under nanoconfinement. Moreover, in the absence of intraporous solvent, we observe a ~7 cm⁻¹ shift in the Ti-bound carbonyl stretching frequency. These results suggest that local differences in the Lewis acid site environment, which influence observed kinetics across reaction classes, arise from the synthetic protocol used to produce each material.&#13;
&#13;
Taken together, the results of this thesis reveal how different solvent-mediated, non-covalent interactions control liquid-phase reactivity within porous, Lewis acid zeolite catalysts. It is our hope that the kinetic and spectroscopic approaches advanced here will provide a useful roadmap for further experimental investigations into the catalytic implications of confined solvent.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159879</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms</title>
<link>https://hdl.handle.net/1721.1/159878</link>
<description>Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms
Chen, Qinyi
Modern online platforms—such as recommendation systems, advertising markets and e-commerce sites—operate in dynamic and complex environments where efficient algorithmic decision-making is essential. These platforms must continuously adapt to rapidly changing user behaviors, market fluctuations, and data uncertainties while optimizing for both learning efficacy and revenue generation. However, focusing solely on performance can lead to biased outcomes and inequitable treatment of users and items, raising concerns about fairness. Balancing efficiency and fairness is therefore crucial for sustainable platform growth. In this thesis, we tackle these challenges by developing novel algorithmic frameworks and methods that integrate fairness considerations with robust learning and optimization techniques. We explore these problems from three distinct perspectives, each contributing to enhancing the decision quality and fairness considerations in online decision-making.&#13;
&#13;
In Chapter 2, we first focus on the topic of efficiency, by addressing the challenge of performing online learning in a highly non-stationary environment. User behaviors and preferences often change over time, making it difficult for traditional algorithms to maintain good performance. This issue is particularly prevalent in real-world applications such as recommendation systems and advertising platforms, where shifts in user dynamics can undermine decision-making efficacy. To tackle this, we propose a novel algorithm for the widely adopted multi-armed bandit framework that enables platforms to adaptively learn in a fast-changing environment characterized by auto-regressive temporal dependencies.&#13;
&#13;
In Chapter 3, we shift our focus to the realm of fairness and explore how fairness considerations can be effectively integrated into the context of assortment planning. As algorithmic recommendations become integral to platform operations, a purely revenue-driven approach can result in highly imbalanced outcomes, leading to certain items receiving minimal exposure and exiting the platform in the long run. To address this, we develop a combinatorial optimization framework that incorporates fairness constraints, ensuring equitable exposure and opportunities for all items on the platform. We design a series of polynomial-time approximation algorithms to solve the fair assortment problem. Through numerical studies on both synthetic data and real-world MovieLens data, we showcase the effectiveness of our algorithms and provide insights into the platform's price of fairness.&#13;
&#13;
In Chapter 4, we bridge the topics of fairness and learning efficiency by examining how to achieve multi-stakeholder fairness in a multi-sided recommendation system. Here, the challenge is multifaceted, including ensuring high platform revenue, maintaining fair outcomes for diverse stakeholders, and enabling robust learning amidst data uncertainty. We propose a novel optimization framework that maximizes platform revenue while enforcing fairness constraints for both items and users, accommodating various fairness notions and outcome metrics. Building on this, we introduce a low-regret online learning and optimization algorithm that dynamically balances learning and fairness—two objectives that are often at odds. Finally, we demonstrate the efficacy of our approach via a real-world case study on Amazon review data and offer actionable guidelines for implementing fair policies in practice.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159878</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The interaction of shock waves with porous materials</title>
<link>https://hdl.handle.net/1721.1/159847</link>
<description>The interaction of shock waves with porous materials
McMillan, Charles Frederick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1983; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159847</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An operational analysis of industrial research</title>
<link>https://hdl.handle.net/1721.1/159846</link>
<description>An operational analysis of industrial research
Freeman, Raoul J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1957; Vita.; Bibliography: leaves 101-106.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159846</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and biochemical characterization of RNA polymerase II transcription</title>
<link>https://hdl.handle.net/1721.1/159835</link>
<description>Structural and biochemical characterization of RNA polymerase II transcription
Su, Bonnie G.
Eukaryotic development requires precise temporal regulation of gene expression orchestrated through a series of complex mechanisms. One such mechanism involves pausing of RNA polymerase II (Pol II) in the promoter-proximal region of genes. Pausing is stabilized by the protein complexes DRB-sensitivity inducing factor (DSIF) and negative elongation factor (NELF). Prior structural and biochemical studies provide specific mechanisms for stabilization of paused Pol II by NELF. However, cellular data suggest that NELF can accompany actively elongating Pol II into the gene body, indicating that NELF may be able to associate with Pol II without enforcing Pol II pausing. This thesis presents cryo-electron microscopy structures of Pol II-DSIF-NELF complexes with NELF in two distinct conformations on the surface of Pol II, the paused state and the poised state. The poised state does not support a tilted RNA-DNA hybrid, a key characteristic of pausing, indicating that NELF in the poised state is compatible with elongating Pol II. Furthermore, Pol II bound to NELF in the poised conformation can accommodate TFIIS binding simultaneously, allowing reactivation of Pol II at sites of pausing or backtracking. Finally, a region of the flexible NELF-A tentacle engages with the RPB2 protrusion, an interface necessary for pausing. These results define how NELF can support both Pol II pausing and elongation and provide the molecular basis for how transcription can be reactivated when NELF is bound to Pol II.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159835</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synaptic Multimodal Imaging and Molecular Network Inference</title>
<link>https://hdl.handle.net/1721.1/159834</link>
<description>Synaptic Multimodal Imaging and Molecular Network Inference
Falkovich, Reuven
All cognitive function is reliant on synaptic function – the molecular computation that integrates activity history, chemical environment, and the genetic state of its pre- and post-synaptic neurons to modulate neuron-neuron communication through synaptic plasticity. This computation is performed by the highly compartmentalized, tightly regulated, and complex network of interactions between synaptic activity and hundreds of proteins and the mechanisms that regulate them. Isolating individual processes loses the context in which they occur, while bulk analyses average over highly heterogeneous populations and lose correlation information. A top-down study of the entire system in action requires measurement of multiple synaptic parameters – composition and activity – simultaneously in individual synapses. Building on a previously developed probe exchange multiprotein imaging technique, this thesis presents MINI-ME, a versatile, modular platform for integrating multiple information modalities at single synapses. We developed an approach for tandem live-fixed imaging to combine synaptic calcium dynamics or glutamate spiking information with multiprotein measurements. We also developed an integration of rolling circle amplification-based in situ methods, such as a reporter on gene-specific translation. We evaluated, based on simulated and experimental data, the application of Bayesian network inference to high-dimensional multimodal synapse distributions to extract biological insight. Finally, we applied this new approach to an in-depth investigation of synaptic molecular perturbations associated with autism and schizophrenia genetics and psychiatric drug activity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159834</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sideroflexins enable mitochondrial transport of polar neutral amino acids</title>
<link>https://hdl.handle.net/1721.1/159833</link>
<description>Sideroflexins enable mitochondrial transport of polar neutral amino acids
Block, Samuel
Mitochondria contribute to compartmentalized metabolism in eukaryotic cells, facilitating diverse enzymatic reactions that support cell function. However, this compartmentalization of metabolism necessitates regulated transport of metabolites across the inner mitochondrial membrane. While many proteins enabling mitochondrial membrane transport of metabolites are known, how some metabolites are transported is not known, and several mitochondrial amino acid transporters are largely uncharacterized. The goal of this dissertation is to better understand which proteins in the mitochondrial inner membrane regulate amino acid transport, particularly for substrates that lack known transporters, and how these proteins regulate associated metabolic pathways. Using CRISPR-Cas9-mediated candidate transporter knockouts coupled with assessment of metabolite transport via a mitochondrial swelling assay, we identified SFXN1 as a gene that mediates mitochondrial membrane permeability to polar neutral amino acids, including proline, glycine, taurine, hypotaurine, beta-alanine, and gamma-aminobutyric acid (GABA). SFXN2 and SFXN3 partially complemented loss of SFXN1 to enable glycine transport, while SFXN2 and SFXN5 partially complemented loss of SFXN1 to enable GABA transport. Altogether, this work suggests that sideroflexins regulate the delivery of polar neutral amino acids across the inner mitochondrial membrane, many of which lack known carriers, and contributes to a better understanding of how mitochondrial amino acid transport regulates cellular metabolism.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159833</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Iterative Engineering, System Confidence, and In-space Servicing Assembly &amp; Manufacturing</title>
<link>https://hdl.handle.net/1721.1/159831</link>
<description>Iterative Engineering, System Confidence, and In-space Servicing Assembly &amp; Manufacturing
Luu, Michael A.
System Confidence is proposed as a method for quantifying the progress and performance of engineering systems. Confidence measures the degree of certainty that a system, design, or process will perform as intended. This metric aggregates existing tools in model-based systems engineering with requirement verification/test events, helping to mitigate the obstacles to adopting iterative engineering for hardware systems, whose requirements are only loosely defined at the beginning of these programs.&#13;
 &#13;
Iterative engineering has enabled the rapid progress of recent commercial space systems, reduced launch costs through reusable rockets, and deployed large-scale constellations to orbit. This engineering method has been made possible through recent advancements in digital engineering, additive manufacturing, scalable systems, modeling, and simulation software. Iterative engineering challenges the traditional V-model of systems engineering as the default and only choice in designing complex space systems. Rapidly testing novel payloads and satellite capabilities early in the design process can reduce development schedules and deliver capabilities earlier than traditional systems.&#13;
 &#13;
The advent of In-space Servicing, Assembly, and Manufacturing (ISAM) has ignited new prospects for system architectures, designs, and applications. Until now, iterative engineering has been leveraged only in designing robust space systems to be resilient and self-reliant in post-launch deployment/operations. ISAM extends options, flexibility, and engineering decisions beyond the launch phase of space systems.&#13;
 &#13;
First, System Confidence is defined and quantified, measuring a system's capabilities throughout its development cycle. Second, System Confidence is used to evaluate three existing and historic space programs. Third, ISAM is explored as an extension of iterative engineering for space systems. Lastly, the relationship between iterative engineering outcomes and the utility of ISAM for a space systems architecture is analyzed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159831</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution and engineering of protein-protein interactions</title>
<link>https://hdl.handle.net/1721.1/159830</link>
<description>Evolution and engineering of protein-protein interactions
Ghose, Ashavari (Dia)
Protein-protein interactions are crucial elements in most biological processes. The gain and loss of interactions during evolution have important phenotypic consequences that are subject to selection. Therefore, in a crowded cellular environment, proteins must evolve mechanisms to maintain the correct interactions and avoid inappropriate ones. In this work, I leveraged high-throughput methods for the functional characterization of thousands of protein variants to characterize the sequence spaces associated with paralogous families of interacting proteins. Protein families are formed by gene duplication and divergence, a common source of evolutionary novelty. Family members maintain conserved structural and sequence elements, and yet must often form distinct protein-protein interactions. To probe the extent to which the requirement for interaction specificity constrains evolution, I focused on the two-component system family of bacterial signaling proteins. I tested protein variants with all possible single substitutions in the interacting domain of a model protein for their ability to interact with a cognate partner protein and with closely related non-cognate partners. I found that a large fraction of substitutions introduce non-specific interactions, suggesting that paralogs only evolve ‘marginal specificity’ that can easily be disrupted. Bioinformatic evidence indicates that the resulting crowded local sequence space has restricted the evolvability of two-component systems. I also characterized the effects of environmental context constraints, specifically temperature, on the sequence space relevant to two-component system function. This revealed generally conserved sequence-function landscapes across temperatures, with small numbers of variants showing either temperature sensitivity or resistance. Biochemical characterization of these variants challenges existing paradigms relating to the effects of temperature on evolution.
Finally, I utilized insights into the evolution of protein-protein interaction specificity to inform the design of protein binders to toxin-antitoxin systems. These binders are selective in their interactions with toxin homologs, and inhibit toxin-antitoxin-mediated bacterial anti-phage defense activity, suggesting their potential use in clinical phage therapy applications. Taken together, these results shed light on the role of protein-protein interactions and their specificity in shaping evolution and suggest the utility of leveraging interaction specificity for engineering purposes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159830</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market value and financial structure in the railroad industry</title>
<link>https://hdl.handle.net/1721.1/159441</link>
<description>Market value and financial structure in the railroad industry
Nielsen, Scott.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1961; Includes bibliographical references (leaf 117).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159441</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some physical and rheological properties of human blood.</title>
<link>https://hdl.handle.net/1721.1/159438</link>
<description>Some physical and rheological properties of human blood.
Meiselman, Herbert Joel.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1966; Bibliography: p. 312-325.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159438</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model for groupoids of homeomorphisms</title>
<link>https://hdl.handle.net/1721.1/159436</link>
<description>A model for groupoids of homeomorphisms
Greenberg, Peter A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1982; Bibliography: leaf 84.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159436</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solvent extraction in packed columns</title>
<link>https://hdl.handle.net/1721.1/159434</link>
<description>Solvent extraction in packed columns
Evans, James E.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1938; Vita.; Includes bibliographical references (leaf 142).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159434</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tissue-encoded Design Principles of Host Defense</title>
<link>https://hdl.handle.net/1721.1/159375</link>
<description>Tissue-encoded Design Principles of Host Defense
Misra, Aditya
Inflammatory diseases have been rising in incidence over the past few decades and result from inappropriate activation of tissue-resident immunity. This activation&#13;
can arise from any number of cell types, including non-immune cells with immunoregulatory functions, such as epithelial cells. In this thesis, we investigated tissue metabolism&#13;
and inflammation across different temporal and spatial scales using a unique combination of metabolomics, mathematical modeling, metabolic assays, and chemical characterization.&#13;
Our aim was to identify pathways that protect against inflammation-induced tissue damage and improve clinical outcomes. Thus, we studied A) chronic local tissue inflammation using a colitis model (Chapter 2) and B) acute systemic inflammation using a sepsis model (Chapter 3). In each disease, we studied changes in tissue architecture and the resulting cross-talk among cell types in the microenvironment. In colitis, we found that upon release during tissue damage, IL-18 launches a unique metabolic program in macrophages that 1) exhibits bistable and hysteretic behavior, 2) provides protective memory against inflammatory challenge, and 3) relies on positive feedback with intestinal epithelial cells to maintain the program. In our mouse model of bacterial sepsis, we performed liver tissue metabolomics and found that branched-chain ketoacids (BCKAs), metabolic products of branched-chain amino acids, are released during systemic inflammation and serve as endogenous antioxidants that neutralize extracellular peroxides. They thus reduce tissue damage and more than double survival rates. Through this thesis, we show tissue-intrinsic mechanisms that 1) organize positive feedback loops among cells to establish protective memory against inflammation and 2) secrete endogenous antioxidants to limit pathogenic extracellular oxidants induced by inflammation without quenching bactericidal intracellular oxidants.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159375</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Muscle Sensing Modalities for Advanced Bionics</title>
<link>https://hdl.handle.net/1721.1/159374</link>
<description>Enhancing Muscle Sensing Modalities for Advanced Bionics
Yeon, Seong Ho
Muscle sensing technologies have significantly advanced our understanding of biomechanics and enhanced the efficacy of bionic devices. These technologies enable volitional control of prostheses and assistive devices by mapping the electrical and mechanical activities of muscles as control inputs. This dissertation presents novel paradigms and findings to improve the utility and efficacy of muscle sensing modalities for advanced bionic applications.&#13;
&#13;
In the first part, I introduce a comprehensive approach to improve the acquisition and processing of surface electromyography (sEMG) signals for bionic applications. This includes innovations in electrode materials and design to enhance user comfort and signal quality for long-term use within prosthetic sockets. Additionally, I propose a real-time impulse filtering algorithm to effectively suppress artifacts while preserving the underlying sEMG signal during dynamic movements. Furthermore, I demonstrate a synchronous sEMG and ultrasound acquisition method that enables simultaneous assessment of muscle electrical activity and mechanical deformation, providing valuable insights into muscle function and control.&#13;
&#13;
In the second part, I explore how Magnetomicrometry can serve as a new in-vivo, real-time mechanical muscle-state tracking modality. Previous work has shown significant potential for Magnetomicrometry in muscle-state tracking via a tightly-controlled in situ setup. In this work, I demonstrate real-time tracking of muscle tissue length in freely-moving animals performing various motor activities, suggesting that Magnetomicrometry could serve as a viable in-vivo, real-time muscle sensing modality.&#13;
&#13;
In the final part, I propose a novel theoretical framework leveraging Riemannian geometry and manifold theory to enhance the magnet tracking technology stack for Magnetomicrometry. By representing the magnetic dipole state on a manifold and incorporating its dynamics, I develop a more accurate and robust magnet tracking algorithm that addresses the limitations of existing methods. Through simulations and real-world data evaluations, I demonstrate the superior performance of the proposed manifold-based tracking paradigm, showcasing its potential to improve the resolution and extend the observable depth of Magnetomicrometry.&#13;
&#13;
The advancements presented in this dissertation have significant implications for the development of next-generation bionic devices, enabling more adaptive, versatile, and reliable myo-neural interfaces. Through this work, I hope to open up new possibilities for the design and control of advanced prostheses and assistive technologies built on such myo-neural control interfaces.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159374</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polyanionizing rocksalt cathodes for lithium-ion batteries</title>
<link>https://hdl.handle.net/1721.1/159367</link>
<description>Polyanionizing rocksalt cathodes for lithium-ion batteries
Huang, Yimeng
Rocksalt-type and olivine polyanion-type cathodes are the two most dominant families of practical lithium-ion battery (LIB) cathodes. Rocksalt-type cathodes, including layered LiCoO₂, [chemical formula] (NCM), spinel LiMn₂O₄ and [chemical formula], and the later-developed disordered rocksalt cathodes (DRX), can have high energy densities up to 1000 Wh kg⁻¹ when utilizing hybrid anion-/cation-redox (HACR) under high upper cutoff voltages &gt; 4.5 V vs. Li/Li⁺. However, anion (oxygen) redox sacrifices cycling stability, as oxidized oxide ions [chemical formula] are more mobile than O²⁻, which can lead to percolating lattice oxygen diffusion to the reactive particle surface, oxygen loss, and extensive side reactions with the electrolyte. Meanwhile, polyanion-type cathodes, represented by LiFePO₄, have excellent thermal and structural stability, due to strong covalent bonding in the PO₄ polyanion structural unit that improves structural integrity, but their application is limited by low energy density. While each family has its own advantages, the two have never been married for performance-safety synergy.&#13;
&#13;
To achieve high energy density with good cycling stability, we hybridize rocksalt and polyanion-type cathodes by introducing a new cathode family of polyanionized disordered rocksalt cathode with spinel order (DRXPS), where an optimal amount of XO₄ (X = P, S, etc.) polyanions are incorporated in Li-M-O (M = Mn, Fe) rocksalt cathodes, free of Co and Ni. We propose design rules to estimate the optimal polyanion amount, x, such that [XO₄]x is sufficient to suppress long-range percolation of lattice oxygen diffusion/loss at high voltages, while not harming capacity (XO₄ content is much less than that in LiFePO₄). The estimated optimal x by design is verified with electrochemical cycling data. Rules for Li/M/O ratio selection are also proposed and verified experimentally.&#13;
&#13;
The DRXPS cathode family, represented by [chemical formula], can deliver high initial discharge capacities and energy densities up to 367 mAh g⁻¹ and 1122 Wh kg⁻¹, respectively (among the highest for LIB cathodes to date). But most importantly, they can have &gt; 70% capacity and energy density retention after 100 cycles, far exceeding the cycling performance of un-polyanionized DRX with high energy densities. This addresses the cycling stability issues that are the bottleneck for DRX development. The DRXPS cathodes also demonstrate good rate performance and large compositional tunability (similar performance for compositions with varying M, X elements and u, v, x selection). In addition, we clarify their crystal structure, morphology, and redox mechanism via systematic characterizations.&#13;
&#13;
We believe that polyanionization should be a general strategy for addressing poor high-voltage cyclability, a key challenge for earth-abundant cathodes. The superior performance demonstrated and the design rules proposed for DRXPS shed light on the future development of advanced sustainable cathodes.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159367</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A class of high-efficiency air-core power transformers with flux-guiding resonators</title>
<link>https://hdl.handle.net/1721.1/159365</link>
<description>A class of high-efficiency air-core power transformers with flux-guiding resonators
Salk, Noah J.
Developments in high-frequency power semiconductors have enabled the miniaturization of power system components, leading to the reduction of heavy, lossy magnetic steel cores as a medium for electromagnetic energy transfer. A final push towards fully&#13;
“air-core” power devices is underway, and a new class of coreless transformers is under development at MIT that targets the cost-sensitive application of grid-tied renewable energy farms. The topology is composed of a primary coil, a secondary coil, and one or more nested resonant tanks that facilitate efficient multi-path energy transfer. This class of transformers presents opportunities for upfront cost savings via material reduction, and long-term cost savings via efficiency gains and the resulting reduction of lost profit. This work examines the theory, modeling efforts, system-level considerations, and rigorous experimental validation necessary to compare the performance of these transformers with other topologies and establish industrial viability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159365</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Acoustic Expander: A New Expansion Machine for Use in Cryogenic Refrigeration</title>
<link>https://hdl.handle.net/1721.1/159362</link>
<description>The Acoustic Expander: A New Expansion Machine for Use in Cryogenic Refrigeration
Adams, Jacob
The acoustic expander is a new expansion machine with potential applications to cryogenic refrigeration and liquefaction. Cryogenic expansion machines produce mechanical energy from the expansion of a working fluid from high pressure to low pressure, thereby cooling the low-pressure fluid for use in refrigeration. The novelty of the acoustic expander is that the mechanical energy is transferred through a gaseous acoustic wave, as opposed to a traditional piston- or turbo-expander, where the mechanical energy is transferred through a solid piston or a spinning shaft. The acoustic expander comprises passive reed-valves coupled to a resonant cavity, much like a wind instrument. The working fluid enters and exits the resonant cavity through the oscillating reed-valves, driving a standing acoustic wave in the resonant cavity. The acoustic wave carries mechanical energy from the low-temperature region to the high-temperature region, where the energy is then dissipated as heat to the ambient environment; this transfer of energy cools the low-pressure fluid as it exits the acoustic expander. The practical advantage of the acoustic expander over piston- or turbo-expanders is the absence of dynamic sliding seals or complex moving parts at cryogenic temperatures.&#13;
&#13;
This dissertation presents a first-principles thermodynamic model of the acoustic expander and describes the behavior of the coupled reed-resonator system. The model reveals that flow blow-by through the reed-valves at large pressure differentials is a primary loss mechanism. Several proof-of-concept prototypes were constructed, with both a single reed-valve and a double reed-valve, demonstrating isentropic expansion efficiencies between 40% and 50% at pressure ratios up to 2.5. These experiments incorporated the acoustic expander with a recuperative heat exchanger, reaching temperatures of -62 °C (211 K) with vacuum insulation and air as the working fluid. The cooling power of these prototypes is between 50 and 150 W at room temperature with a mass flow rate of 1 to 3 g/s. Instabilities that cause the acoustic expander to shut off and reed-valve fatigue are identified as key challenges. Future work may address these challenges and integrate the acoustic expander into cryogenic cooling systems for the refrigeration of superconducting magnets or quantum computers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159362</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning through the Lens of Data</title>
<link>https://hdl.handle.net/1721.1/159360</link>
<description>Machine Learning through the Lens of Data
Park, Sung Min
Many critical challenges in machine learning—e.g., debugging model behavior or selecting good training data—require us to relate outputs of models back to the training data. The goal of predictive data attribution, the focus of this thesis, is to precisely characterize the resulting model behavior as a function of the training data in order to tackle these challenges. In the first part of this thesis, we introduce a framework, datamodeling, for formalizing and constructing effective methods for predictive data attribution. Despite the complexity of modern machine learning systems (e.g., end-to-end training of deep neural networks using stochastic gradient algorithms), we show that we can accurately predict model outputs from simple linear functions of the training data. We then demonstrate that these predictors—which we call datamodels—provide a versatile primitive for various tasks, ranging from predicting the effect of dataset counterfactuals to identifying brittle predictions. Next, to further improve the scalability of data attribution in this framework, we design a new method, trak (Tracing with the Randomly-projected After Kernel), that is both effective and computationally tractable for large-scale, differentiable models. By leveraging a kernel approximation and other classic ideas from statistics and algorithm design, we are able to reduce the challenging problem of attributing the original DNN to that of attributing a simpler surrogate. We demonstrate the effectiveness of trak across various modalities and scales: image classifiers trained on ImageNet, vision-language models (CLIP), language models (BERT and mT5), and diffusion models. In the second part of this thesis, we explore applications of the framework developed in the first part: First, we leverage datamodels for the problem of learning algorithm comparison, where the goal is to detect differences between models trained with two different learning algorithms. 
Our algorithm, ModelDiff, enables us to automatically surface biases that distinguish different learning algorithms by differentiating how they use the same training data. Lastly, we tackle the challenging problem of machine unlearning, wherein the goal is to “unlearn” a small fraction of training data from a trained model. By leveraging the fact that datamodels can accurately approximate the “oracle” predictions, we design a simple finetuning algorithm that allows us to unlearn at a significantly smaller cost than prior methods.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159360</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of technological diffusion : the replacement of steam by diesel locomotives in the United States.</title>
<link>https://hdl.handle.net/1721.1/159324</link>
<description>A study of technological diffusion : the replacement of steam by diesel locomotives in the United States.
Hydell, Richard Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1977; Vita.; Bibliography: leaves 306-308.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159324</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection of products of molecular beam reactions by laser-induced fluorescence.</title>
<link>https://hdl.handle.net/1721.1/159322</link>
<description>Detection of products of molecular beam reactions by laser-induced fluorescence.
Silver, Joel Art.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159322</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the existence of certain entire functions of zero type</title>
<link>https://hdl.handle.net/1721.1/159314</link>
<description>On the existence of certain entire functions of zero type
Scanlan, Robert H.
            (Robert Harris),
            1914-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1943; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1943 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159314</guid>
<dc:date>1943-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The semi-simplicial free lie ring</title>
<link>https://hdl.handle.net/1721.1/159305</link>
<description>The semi-simplicial free lie ring
Schlesinger, James W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1964; Vita.; Includes bibliographical references (leaf 29).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159305</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Olefins from amine oxides</title>
<link>https://hdl.handle.net/1721.1/159304</link>
<description>Olefins from amine oxides
Lebel, Norman A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1957; Vita.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159304</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication in the presence of noise</title>
<link>https://hdl.handle.net/1721.1/159303</link>
<description>Communication in the presence of noise
Schulman, Leonard J. Y.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1992; Includes bibliographical references (leaves 58-61).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159303</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High energy scattering of helium by the molecular hydrogen isotopes</title>
<link>https://hdl.handle.net/1721.1/159301</link>
<description>High energy scattering of helium by the molecular hydrogen isotopes
Fowler, Michael Coolidge,
            1941-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1967; Vita.; Bibliography: leaves [168]-[171].
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159301</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The political economy of trade : a computer simulation of contending regimes and the international division of labor</title>
<link>https://hdl.handle.net/1721.1/159297</link>
<description>The political economy of trade : a computer simulation of contending regimes and the international division of labor
Pollins, Brian.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1981; Bibliography: leaves 319-330.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159297</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation of the spatial organization of three-dimensional shapes for visual recognition.</title>
<link>https://hdl.handle.net/1721.1/159294</link>
<description>Representation of the spatial organization of three-dimensional shapes for visual recognition.
Nishihara, H. K.
            (Herbert Keith)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1978; Bibliography: p. 179-181.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159294</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of pressure broadening theory to atmospheric microwave absorption.</title>
<link>https://hdl.handle.net/1721.1/159292</link>
<description>Application of pressure broadening theory to atmospheric microwave absorption.
Lam, Kai S.
            (Kai Shue),
            1949-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1976; Vita.; Bibliography: leaves 415-419.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159292</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Mechanocaloric Effects and Tunable Thermal Conductivity in Amorphous Elastic Polymer Fibers</title>
<link>https://hdl.handle.net/1721.1/159264</link>
<description>Engineering Mechanocaloric Effects and Tunable Thermal Conductivity in Amorphous Elastic Polymer Fibers
Li, Buxuan
Energy-efficient clean technologies for active heating/cooling and passive thermal regulation are in high demand for applications spanning different scales, from city planning and building heating/cooling to wearable and portable devices to miniature electronics. To advance relevant technologies and simultaneously lower their environmental footprint, two key research and engineering questions remain to be answered: (1) how to pump/convert energy between thermal and other forms more efficiently, and (2) how to transport thermal energy in a tunable and scalable way for dissipation/insulation applications.&#13;
&#13;
Properly-engineered polymer materials may provide solutions to both challenges in a synergistic way. Among all materials, polymers stand out by several figures of merit, including low cost, chemical inertness, ease of manufacture and scalability, and light weight. They can be engineered by the application of temperature and/or strain, which impose different molecular arrangements within the material, enabling control over the degree of crystallinity, chain entanglement, and the dominant chain orientation. When polymers undergo microscopic structural changes, they may exhibit temperature responses driven by their internal entropy changes, known as mechanocaloric (mC) effects. mC effects offer a venue of conversion between mechanical and thermal forms of energy. Polymer chain alignment, on the other hand, also has a strong effect on the vibration characteristics of polymers, and thus on their thermal conductivity (TC) values. Through a continuous strain-temperature engineering of elastic amorphous polymer fibers, we demonstrate unique opportunities to address both challenges in energy conversion and transfer.&#13;
&#13;
We developed elastic fibers, melt spun from an olefin block copolymer (OBC), that exhibit (1) competitive mC performance, with a temperature change exceeding 5 K and a material coefficient of performance (COP) larger than 10, and (2) reversible thermal conductivity, continuously tunable in the range from 1.2 to 2.5 W/mK via uniaxial strain deformations. The entanglement-enabled elasticity of the cross-linker-free block copolymer chosen for this research allows the fibers to survive thousands of loading-release stretching cycles. In striking contrast with the vulcanized rubber commonly used as an efficient mC material, the OBC is a thermoplastic with a relatively low melting temperature (&lt;120 °C), which can be easily recycled and molded into different geometries. By optimizing both the fabrication parameters and the operational scenarios, we demonstrated the high potential of elastic OBC fibers in advanced thermal applications within a wide temperature window from -20 °C to 70 °C. We further analyzed structural changes, thermodynamics, and vibration spectra of OBC fibers under different strains and temperatures, elucidating the mechanisms underlying the observed phenomena. This study provides insights into sustainable engineering and optimization of polymer-based solid-state refrigerators, heat pumps, and tunable materials for efficient energy dissipation and passive thermoregulation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159264</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Language Representations in the Human Mind and Brain</title>
<link>https://hdl.handle.net/1721.1/159208</link>
<description>Characterizing Language Representations in the Human Mind and Brain
Tuckute, Greta
Language allows for the mapping of speech signals or written characters to meaning every time we engage in conversation or read. How can biological tissue, our brains, support this mapping process? This thesis characterizes the neural representations that enable humans to infer the meaning of a sentence. &#13;
My work builds on the foundation that regions in the left frontal and temporal parts of the brain causally and selectively support language processing (the ‘language network’). Chapter 2 asks how the language network develops. Through a case study of an individual born without their left temporal lobe (but with neurotypical language abilities), I demonstrate that the presence of temporal language regions appears to be necessary for the development of ipsilateral frontal regions, which echoes evidence from aphasia that the temporal areas are more important for language function. Chapters 3-5 aim to understand the representations and computations that mediate language comprehension. Traditionally, this line of inquiry has been challenging given the limited utility of probing animal models whose communication systems differ substantially from human language. However, the recent advent of artificial language models (LMs) has demonstrated that a system other than the human brain is capable of generating fluent and coherent text. Chapter 3 introduces the use of LMs as model systems for studying neural representations of language. I ask what aspects of an LM’s representation of the linguistic input matter the most for model-to-brain similarity. Across a series of systematic comparisons, I show that meanings of content words, such as nouns and verbs, matter more than syntactic structure (e.g., word order and function words). In Chapter 4, I leverage this model-to-brain similarity to ask what kinds of linguistic input the human language regions are most responsive to. I use an LM to identify sentences that maximally drive or suppress activity in language regions, and I demonstrate that these regions respond most strongly to sentences that are sufficiently linguistically well-formed but unpredictable in their structure or meaning, suggesting that this network is tuned to input predictability in the service of efficient meaning extraction. 
Finally, in Chapter 5, I use high-field (7T) fMRI to search for the organizing dimensions of the language network. By performing a data-driven decomposition of neural responses to linguistically diverse sentences, I show that only two components—shared across individuals—emerged robustly, accounting for about 34% of the explainable variance. In line with work in Chapter 4, the first component appears to correspond to processing difficulty. The second component appears to correspond to meaning abstractness. Both components are distributed across frontal and temporal brain areas but show systematic topographies across participants. &#13;
Altogether, this thesis provides a detailed characterization—across thousands of sentences and through spatially-precise neural measurements—of how the fronto-temporal language network supports language comprehension. This work brings us closer to deciphering the circuits and mechanisms that underlie the astonishing human capacity to infer complex meanings through language.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159208</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spontaneous activity in the mouse visual cortical slice: biophysical characterization and pathophysiology</title>
<link>https://hdl.handle.net/1721.1/159207</link>
<description>Spontaneous activity in the mouse visual cortical slice: biophysical characterization and pathophysiology
Heinrich, Maxwell John
While we await the first disease-modifying treatment for Fragile X Syndrome (FXS), the leading inherited cause of intellectual disability, the search continues for novel ways to address the core pathophysiology of this neurodevelopmental disorder. FXS is caused by silencing of the FMR1 gene, which results in the loss of fragile X messenger ribonucleoprotein (FMRP), a critical protein in regulating nervous system development and neural circuit function. Due to the loss of FMRP’s canonical role in inhibiting mRNA translation, an elevated rate of protein synthesis is widely recognized as a core feature of FXS pathophysiology. In this thesis, I present my investigation of a relatively understudied form of pathophysiology that arises in brain slices prepared from a mouse model of FXS, the Fmr1-knockout (KO) mouse. Relative to wildtype (WT) slices, Fmr1-KO visual cortical slices exhibit increased spiking activity in layer 5. Critically, this hyperactivity phenotype is rapidly reversed not only by treatments known to restore elevated rates of protein synthesis to WT levels, but also by the protein synthesis inhibitor cycloheximide. Therefore, rapidly turned-over pathogenic proteins are suspected to actively maintain this form of pathophysiology. Identifying these pathogenic proteins could reveal novel therapeutic targets for the treatment of FXS. Progress requires a deeper understanding not only of the cellular pathophysiology supporting this hyperactivity, but also of the biophysical mechanisms driving the activity itself, as each remains relatively unexplored. In Chapter 1, I review relevant FXS pathophysiology and our understanding of the various forms of spontaneous activity generation in neocortical brain slices. In Chapter 2, I dive into the biophysical mechanisms underlying the sparse, spontaneous spiking activity generated in WT visual cortical slices. 
Here, I find extreme sensitivity to the ionic composition of the artificial cerebrospinal fluid (aCSF) bathing the slices. Lower, more physiologic concentrations of extracellular divalent cations render extratelencephalic layer 5 pyramidal neurons intrinsically active by altering the activity of the persistent sodium current. In Chapter 3, I detail my journey investigating the pathophysiology underlying the hyperactivity phenotype in Fmr1-KO mice. While my early investigations indicated that depolarized intratelencephalic layer 5 pyramidal neurons drive hyperactivity of the layer 5 circuit, this intracellular phenotype proved to be ephemeral and likely due to suboptimal slice conditions. Informed by the investigations of Chapter 2, I conclude that the hyperactivity phenotype is not driven by cell-intrinsic hyperexcitability of layer 5 pyramidal neurons. My optimization of slice conditions preserves the hyperactivity phenotype, setting the stage for future intracellular investigation of the cause of this pathophysiology. In Chapter 4, I describe the implications of my work for understanding activity generation in neocortex and provide direction for future studies of spontaneous activity in WT and Fmr1-KO visual cortical slices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159207</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models and Tools for Studying Infants’ Attention</title>
<link>https://hdl.handle.net/1721.1/159205</link>
<description>Models and Tools for Studying Infants’ Attention
Raz, Gal
From birth, infants actively control where they look, long before they gain any significant motor control over other body parts. This early emergence of attentional preferences has allowed psychologists to use infants' gaze to gain insight into the developmental origins of perception and cognition. Understanding infant gaze is therefore critical both for understanding early development and for interpreting decades of literature in developmental psychology. This thesis studies the functions of infants' looking behavior, and introduces novel tools to accelerate its study. Chapter 1 is a theoretical review which challenges the notion that learning in infancy is primarily incidental and passive. I outline ways in which infants use their gaze to learn, as well as form and manage social relationships. Chapter 2 demonstrates that, indeed, infants' looking behavior is better understood as an active sampling process. I describe a computational model that posits that infants' gaze is optimized to maximize expected information gain from noisy perceptual input, and show through large-scale behavioral experiments that infant looking is well described by this model. Chapter 3 then confronts the methodological challenges of studying infant gaze empirically: obtaining and processing data from a single infant in a looking-time experiment takes about 2 hours. I describe a workflow in which we reduce this time to about 5 minutes per infant by a) using asynchronous, instead of in-lab, testing, b) training parents, rather than experimenters, to control the flow of experiments, and c) replacing manual gaze coding with automatic annotation using modern computer vision tools. Finally, synthesizing the preceding chapters, Chapter 4 describes outstanding challenges for the empirical and computational study of infant attention.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159205</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural correlates of trait mindfulness</title>
<link>https://hdl.handle.net/1721.1/159204</link>
<description>Neural correlates of trait mindfulness
Treves, Isaac N.
There is a clear and present need to identify the brain bases of mental health and mental illness. In this thesis, I focus on the brain bases of trait mindfulness, measured using self-report. Mindful individuals pay attention to the present moment and bring an attitude of acceptance and non-judgement to their thoughts and feelings. Despite the well-established importance of trait mindfulness to well-being, there are no well-established brain measures of trait mindfulness. This may be because of methodological obstacles to brain-behavior association studies. In this thesis, I evaluated the significance of these obstacles in the field and addressed them empirically. In Chapter 2, I conducted a systematic review of 68 brain imaging studies of trait mindfulness. There were some commonalities, but also large gaps in the literature. Sample sizes were small, and studies focused on single regions, networks or EEG responses. There was a lack of research on self-awareness and body awareness, important components of mindfulness. In the following chapters, I conducted three fMRI studies using large existing datasets and rigorous methodology to elucidate brain-mindfulness associations. In Chapter 3, I conducted connectome predictive modelling with the largest sample of any lab-based neuroimaging study of mindfulness (n = 367 adults). I found whole-brain network models of attention and non-judgement components of mindfulness that generalized to one of two held-out datasets. The models incorporated default-mode, somatomotor, and visual networks. Overall mindfulness scores were not predictable, suggesting challenges to a single brain marker of mindfulness. In Chapter 4, I analyzed a dataset of resting-state fMRI in adolescents, conducting dynamic connectivity analyses to find time-varying brain states. I selected brain states that showed good test-retest reliability, and these brain states correlated with mindfulness. 
Interestingly, one brain state exhibited global hyperconnectivity, perhaps a marker of arousal or awareness. Finally, in Chapter 5, I questioned whether using a task involving breath-counting would bolster correlations with trait mindfulness. I found differential brain responses to the task vs resting-state, but the responses did not correlate with trait mindfulness. Together, these studies contribute to an emerging picture of the mindful brain as being reflected in both unimodal (e.g. VIS) and heteromodal (e.g. DMN) brain networks, as well as static and dynamic functional organization. However, results underscore the difficulty of finding generalizable correlates across samples, conditions, and mindfulness scales.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159204</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Postnatal specialization of astrocyte regional heterogeneity in the mammalian brain and improved tools for studying glia</title>
<link>https://hdl.handle.net/1721.1/159203</link>
<description>Postnatal specialization of astrocyte regional heterogeneity in the mammalian brain and improved tools for studying glia
Schroeder, Margaret E.
Astrocytes are an abundant class of glial cells with critical roles in neural circuit assembly and function. Though many studies have uncovered significant molecular distinctions between astrocytes from different brain regions, the developmental trajectory of this regional heterogeneity requires further systematic study. Chapter 1 of this thesis provides a detailed literature review on the development of astrocyte regional heterogeneity. To address existing knowledge gaps, we used single-nucleus RNA sequencing to characterize the molecular diversity of brain cells across six developmental stages and four brain regions in the mouse and marmoset brain (Chapter 2). Using this transcriptomic atlas, we show that astrocyte regional specialization is shaped by postnatal development in both species, with significant species divergence in astrocyte gene expression signatures (Chapter 3). In Chapter 4, we report multiplexed expansion revealing (multiExR), a technique that can be used to visualize 20 or more proteins at nanoscale resolution in the same tissue sample. Finally, in Chapter 5, we describe the generation and characterization of Gpr17-Cre, a novel Cre recombinase driver line that is sensitive and specific for the oligodendrocyte lineage and a subset of astrocytes in the central nervous system.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159203</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated and Provable Privatization for Black-Box Processing</title>
<link>https://hdl.handle.net/1721.1/159202</link>
<description>Automated and Provable Privatization for Black-Box Processing
Xiao, Hanshen
This thesis initiates a study on universal leakage quantification and automated privacy-preserving solutions. To minimize assumptions on leakage generation and symbiotically accommodate cutting-edge advances in both algorithms and their implementations, a framework is established that models leakage as the output of a black-box processing function and produces rigorous privacy analysis based entirely on end-to-end simulation. At a high level, we demonstrate the following results: Given access to the underlying black-box secret generation, through mechanized evaluations of the black-box processing function, the hardness of adversarial inference can be provably quantified and controlled through properly selected perturbations. The detailed contributions can be summarized from three perspectives: a) Privacy Definition: We propose a new and semantic notion, called Probably-Approximately-Correct (PAC) Privacy. This concept describes privacy intuitively as an impossible inference task for a computationally-unbounded adversary and supports expression of a universal privacy concern that is accessible to a general audience. b) Black-Box Leakage Quantification: We introduce randomization optimization and noise smoothing tricks and develop a set of information-theoretical tools based on f-divergence to characterize privacy risk through statistical mean estimation. Provided sufficient sampling, one can approach this objective risk bound arbitrarily closely, which thus leads to a high-confidence proof. The established theory also connects algorithmic stability and generalization error, demonstrating win-win situations in machine learning that simultaneously improve PAC Privacy and learning performance. c) 
Automated Privacy-Preserving Solutions: Theoretically, we characterize the tradeoff between required privacy guarantees (privacy budget), approximation error of the optimal perturbation strategy (utility loss), and simulation budget (computation power) to automatically construct a perturbation-based privacy solution from black-box evaluations. Operationally, we establish a series of tools to efficiently optimize the noise distribution in high-dimensional or constrained support spaces, and study their online versions with adversarially-adaptive composition. Concrete applications are presented, ranging from formal privacy proof for heuristic obfuscations, to privacy-preserving statistical learning, to response privacy in deep learning with vision models and large language models (LLM), such as ResNet and GPT-2, and hardware security, such as side-channel cache-timing leakage control.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159202</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of hydrolysis of triphenylsilyl fluorides in water-acetone solutions and the mechanism of decarboxylation of β-ketoacids in water, benzene, and hexane</title>
<link>https://hdl.handle.net/1721.1/159198</link>
<description>The mechanism of hydrolysis of triphenylsilyl fluorides in water-acetone solutions and the mechanism of decarboxylation of β-ketoacids in water, benzene, and hexane
Esteve Campderá, Ramón María.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1951; Vita.
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159198</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capillary phenomena in cohesionless soils</title>
<link>https://hdl.handle.net/1721.1/159196</link>
<description>Capillary phenomena in cohesionless soils
Lambe, T. William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1948; Includes bibliographical references (leaves 186-188).
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159196</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling, off-design performance and control analysis of OTEC power plants</title>
<link>https://hdl.handle.net/1721.1/159193</link>
<description>Modelling, off-design performance and control analysis of OTEC power plants
Calvo Sotelo, José
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Ocean Engineering, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159193</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Analysis of Plant Responses to Drought</title>
<link>https://hdl.handle.net/1721.1/159151</link>
<description>Systems Analysis of Plant Responses to Drought
Yun, Jie
Understanding how plants respond to environmental stress is critical for ensuring stable crop performance and predicting how natural populations may adapt to a changing climate. While plant biology has traditionally focused on plant physiology and molecular biology of model plants to elucidate plant responses, there is immense diversity in how plants respond to environmental conditions, arising from complex genotype-by-environment interactions (GxE).  &#13;
This dissertation investigates these themes, aiming to advance our understanding of the mechanisms driving plant responses to environmental stress and providing insights for improving agricultural resilience and sustainability, as well as contributing to evolutionary biology. This thesis focuses on three projects: &#13;
(1) While GxE is widely observed in traits and gene expression patterns, the mechanisms driving these interactions remain unclear. This thesis will present a framework using causal inference to study GxE interactions in gene regulatory networks to uncover the molecular mechanisms driving diverse environmental responses. We study two genotypes of the model grass species Brachypodium distachyon, leveraging natural variation and RNA-sequencing to study their responses to drought stress. &#13;
(2) Natural perturbations can be used to understand complex traits. In wild species, limited resources drive allocation strategies that balance trade-offs between survival risks and fitness benefits, which is central to their ecology. This thesis particularly focuses on understanding a whole plant trait – carbon allocation – using divergent responses of annual and perennial species of Brachypodium to drought stress. &#13;
(3) Does domestication trade off stress tolerance for rapid growth? Plant domestication is thought to create trade-offs between high yield and stress tolerance, raising concerns about yield stability in future climates. This thesis will present a high-throughput phenotyping approach to study this question, focusing on leaf growth environmental response and its cellular regulatory mechanisms.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159151</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Effects of Pharmaceutical Interventions, Social Policies, and Exogenous Shocks on People's Health and Behavior</title>
<link>https://hdl.handle.net/1721.1/159148</link>
<description>Evaluating the Effects of Pharmaceutical Interventions, Social Policies, and Exogenous Shocks on People's Health and Behavior
Charpignon, Marie-Laure
Aging individuals tend to suffer from chronic conditions, some of which manifest in midlife (e.g., type 2 diabetes and hypertension) and some later (e.g., neurodegenerative disorders). As the global population increases and as people are living longer, finding strategies to prevent or delay these diseases has become a key priority. Concurrent advances in public health and biomedicine offer an array of pharmaceutical (e.g., oral drugs, vaccines) and non-pharmaceutical solutions (e.g., preventative and behavioral health measures). Meanwhile, exogenous shocks such as pandemics also affect the health and well-being of aging and other vulnerable individuals or populations (e.g., immunocompromised individuals, multigenerational households). In such circumstances, pharmaceutical interventions may not be readily available, forcing governments to implement socio-behavioral policies such as lockdowns and mask-wearing mandates and companies to adopt remote and hybrid work practices. Natural experiments, such as the social isolation induced by the COVID-19 pandemic or incentive-based vaccine distribution programs aimed to bolster vaccine uptake during this time, provide an opportunity to assess retrospectively the effect of federal, state, or local government policies. Another example consists of leveraging new drug approvals and changes in clinical guidelines to learn from electronic health records (EHR) which existing treatments could be repurposed to delay neurodegeneration and/or increase longevity, and if so, for whom they would work best. However, unlike randomized controlled trials, natural experiments suffer from multiple sources of confounding. The use of appropriate causal inference methods can help mitigate confounding bias, including via weighting and regression discontinuity designs. 
This thesis illustrates the use of existing causal inference approaches in population health and proposes new methods to evaluate the effects of pharmaceutical interventions (Chapters 1 and 2), exogenous shocks (Chapters 3, 4, and 5), and socio-behavioral policies (Chapters 3 and 5) on the health and well-being of aging and other vulnerable individuals or populations. Specifically, Chapters 1 and 2 leverage the target trial emulation framework to study the comparative effectiveness of antidiabetic and antihypertensive drugs towards preventing dementia or delaying its onset, using EHR data from Mass General Brigham healthcare system. Our target trial emulations suggest the diabetes drug metformin and the antihypertensive drug class of angiotensin receptor blockers as potential repurposing candidates for dementia, especially if initiated before age 70. Chapter 3 uses regression discontinuity designs to quantify the benefits of a local vaccine companion program in Massachusetts during the COVID-19 pandemic. We estimate that this initiative may have bolstered vaccine uptake among older adults aged 75+ by up to 22 percentage points. Chapter 4 implements counterfactual time series modeling to estimate pandemic-period excess mortality associated with overdoses in the US, by substance and geography. We find ∼25,650 excess deaths nationally (March 2020-August 2021), disproportionately affecting Southern and Western regions of the country and attributable mainly to synthetic opioids, methamphetamines, and alcohol as well as polysubstance use. Chapter 5 characterizes changes in team coordination among knowledge workers at a large global tech company to better understand the rise of hybrid work practices and their potential implications for well-being. Using two-way fixed effect regression models, we find evidence of voluntary alignment of work schedules with managers and greater co-attendance among employees who were recently hired or work in shared office spaces. 
Collectively, these five studies demonstrate how we can effectively learn from data about past events, medical records, and office attendance logs, to provide insights that inform the design of future public health strategies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159148</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data futures: Transforming digital traces into public goods in the age of commercial surveillance</title>
<link>https://hdl.handle.net/1721.1/159147</link>
<description>Data futures: Transforming digital traces into public goods in the age of commercial surveillance
Berke, Alex
For decades, government agencies have collected surveys to produce datasets and statistics that serve as public goods, enabling research and empowering communities from whom data are collected. These data sources are costly to collect and are in decline as survey response rates drop. In contrast, increasing quantities of data are collected from the public by companies -- data we unavoidably generate by making purchases, using the Internet, or simply operating a mobile phone. This data collection might be considered a form of surveying the public, but one where privatized datasets empower corporations rather than communities, and the ensuing potential harms cannot be empirically assessed without access to these data. &#13;
&#13;
This thesis considers a future where corporations can more accurately track populations and estimate statistics than the government agencies traditionally tasked with such efforts. It illustrates how this future may be near and explores the resulting questions through case studies. Namely, are there more privacy-preserving, equitable, or cooperative ways to manage these data, to benefit the public from whom they are sourced?&#13;
&#13;
The first set of case studies uses location data from mobile phones, first developing a more privacy-preserving approach by leveraging recurrent neural networks to generate realistic synthetic data, and second developing aggregated mobility metrics to improve country-level population estimates and COVID-19 epidemic models. The next set of case studies uses web browser data to evaluate risks of cross-site user tracking that persist despite privacy-enhancing browser developments. The first web study repurposes data collected by a data broker; the second uses a dataset we crowdsourced and openly published to benefit this research and future research. For the next set of case studies, we crowdsourced and published a first-of-its-kind open dataset of purchase histories from thousands of Amazon.com users, along with their sociodemographics. We use this dataset to demonstrate how corporate data can provide insights into societal changes and also to evaluate privacy risks due to inferring sensitive consumer information from purchases.&#13;
&#13;
The data used in this thesis (mobile device locations, web browsing data, purchase histories) are examples of digital traces collected continuously from people throughout everyday activities, without explicit consent. This work points towards cooperative data sharing as a paradigm to empower research that benefits the public while prioritizing consent. Could such a paradigm exist with public support and participation? In order to study this and inform future crowdsourcing efforts, we embedded behavior experiments and surveys into our crowdsourcing tools, shedding light on what impacts users' likelihood to share their data, how users believe their data should be used, and how results differ across demographics.&#13;
&#13;
Throughout these studies, this thesis asks a broader question: Can we envision, and build towards, a future with alternative data economies that shift the power dynamics of data collection, along with the control and benefits of these data? To begin to address this question, this thesis proposes speculative, privacy-enhancing, and cooperative commerce networks. Such system changes may incur new costs for consumers. The final case study measures consumers' willingness to pay for privacy in new package delivery networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159147</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Discovery via Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/159135</link>
<description>Generative Discovery via Reinforcement Learning
Hong, Zhang-Wei
Discovering new knowledge is crucial for technological advancement and mirrors how humans and animals learn new skills, often through trial and error. Ancient humans, for example, discovered fire by experimenting with different methods, and children learn to walk and use tools through repeated attempts and failures. In chemistry, scientists find new catalysts by testing various compositions. But how exactly do humans use trial and error to improve existing solutions (like learning more efficient ways to walk or synthesizing novel compounds)? Can we design computational models that mimic or exceed human discovery? Such computational models could greatly accelerate progress in science and engineering, since they can automate or assist the work of human scientists and engineers and discover new knowledge more efficiently (e.g., finding new compounds or streamlining robot controller design). Reinforcement learning (RL) is well-suited for discovery tasks because it enables machines to learn through trial and error. My work overcomes the following major limitations of today's RL algorithms and thereby advances their discovery potential. Mitigating the bias of reward shaping. RL relies on reward signals from trial-and-error experience, but these signals can be sparse, meaning they are only provided once a desired solution is found and are otherwise zero. Most trials, therefore, offer little to no feedback. A common strategy to improve performance under sparse rewards is to provide additional hints (i.e., reward shaping) to guide RL algorithms. However, if these hints are inaccurate, they can steer the algorithm toward worse solutions than those found without them. I propose a new RL framework that can be combined with any standard RL algorithm, ensuring that training with hints finds better solutions instead of harming performance. Learning with sub-optimal data. RL can learn not only from online interaction with the world but also from datasets of logged experiences. 
For expensive or time-consuming tasks like material discovery or robot learning, offline RL could be preferred because it leverages existing data rather than requiring new interaction with the world. However, such datasets could contain mostly low-reward solutions, which limits an offline RL algorithm's ability to find solutions better than what's in the dataset (as we show later in this thesis). I introduce sample reweighting strategies that reweight the dataset so that current offline RL algorithms trained on the weighted samples are able to discover solutions far better than what's in the dataset, even if low-reward solutions predominate. Safety via Diversity. Standard RL algorithms aim to find a single "best" solution. Yet, in many discovery problems, such as drug development, it is more valuable to generate multiple high-reward solutions with distinct properties (i.e., diversity) than to focus on only one. I study this problem in an emerging discovery task: red-teaming large language models (LLMs). In red-teaming, we desire diverse prompts that trigger undesired outputs from target language models. Current approaches leverage RL to train an LLM to red-team another one, but they fall short on the diversity of generated prompts and often converge to a few prompts that consistently trigger undesired outputs. I propose rewarding the agent for maximizing the diversity of generated prompts, which also improves the success of those prompts at triggering undesired outputs from the target LLM.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159135</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Gap: Generative Machines and Inventive Minds</title>
<link>https://hdl.handle.net/1721.1/159134</link>
<description>Bridging the Gap: Generative Machines and Inventive Minds
Singh, Nikhil
Recording technologies, from the phonograph to digital media, have profoundly reshaped the human experience by enabling the capture and reproduction of our sensory world. These technologies allow us to relive experiences through artifacts of remarkable fidelity like photographs and videos, extending the reach of our perception and memory. Of course, we didn’t stop at the phonograph; we have built a rich ecosystem of tools for creating, sharing, and exploring recorded media that have had transformative effects on cognition and culture. Recently, a new and powerful class of tools has emerged: generative models. Unlike recorded media, which reproduces external experiences, generative models can translate our ideas directly into artifacts. Here, ideas refer to abstract mental constructs that seed media creation, externally expressed in text prompts, sketches, vocalizations, or other intuitive representations. Just as recorded media augmented our ability to perceive and remember, generative media promises to expand our ability to imagine and invent by offering a more immediate path from cognition to high fidelity creation. Creative work often has us operating at our limits, negotiating boundaries between knowledge and novelty, skill and aspiration, from individual exploration to collective understanding. Generative models, in principle, have the potential to scaffold and accelerate how we transcend these limits by increasing the efficiency with which we discover and pursue new ideas. In this thesis, I suggest that realizing this potential presents a complex set of challenges that span computation and design. I argue that it requires us to develop a rich stack of precision tools for human-AI co-creation, as we have done and continue to do for recorded media. Specifically, I present contributions across two key dimensions of this:&#13;
1. Computational machinery that supports creative work. I present research on topics including visually-driven acoustic simulation, interpretable and controllable sound generation from descriptions, and audiovisual content understanding. Focusing on sound as a case study, I describe systems that effectively represent and manipulate creative knowledge across modalities and levels of abstraction. &#13;
2. Interactive systems and studies that investigate the integration of human and machine effort in content creation. This includes work on conceptual integration in AI-assisted story writing, author-in-the-loop description authoring for accessibility of complex scientific figures, and generative constraints for human ideation. In all, this work seeks insights for designing systems that support human creators through exploration, collaboration, and feedback, rather than aiming to replace or constrain human agency and expertise. &#13;
To conclude this thesis, I present a discussion on bridging AI and HCI to gain insights into human creative work and develop stable, generalizable design knowledge for augmenting it. I argue for the design of flexible, parametric tools that enable systematic study of creative behavior under different augmentation designs. Based on this, I propose a conceptual framework to seed the development of a more robust science of human-AI co-creation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159134</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference Under Privacy Constraints</title>
<link>https://hdl.handle.net/1721.1/159132</link>
<description>Causal Inference Under Privacy Constraints
Yao, Leon
Causal inference is an important tool for learning the effects of interventions in observational or experimental settings. It is widely used in many fields such as epidemiology, economics, and political science to find answers like the average treatment effect of a medical procedure or the individual treatment effect of a personalized ad campaign. In commercial applications, the era of big data allows companies to increase their experiment volume, incentivizing them, in turn, to collect more user data. On one hand, large volumes of data are necessary to train generative models like ChatGPT. On the other, companies’ increasing use of user data has drawn heavy criticism and consumer backlash, raising legitimate concerns about privacy and consent. As concerns over user data safety and privacy grow, rules and regulations like GDPR change what kinds of data companies and researchers can acquire and how they can analyze the data. The necessity of now performing causal inference under a range of privacy constraints has carved new spaces for research at the intersection of causal inference and privacy. In my thesis, I explore three paradigms for protecting user data — data minimization, differential privacy, and synthetic data — and how to perform causal inference under these new privacy regimes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159132</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferencing Techniques for Enhanced Monitoring of Thermal-Fluid Systems</title>
<link>https://hdl.handle.net/1721.1/159130</link>
<description>Inferencing Techniques for Enhanced Monitoring of Thermal-Fluid Systems
Kim, Haeseong
Sensor data augmentation for accurate system monitoring is relevant to many engineering applications, as there is often a gap between available instrumentation and measurement needs. Installing sensors can be limited due to factors such as harsh environmental conditions, the need to avoid operational distortions, and limited space. While continued efforts to develop novel sensor technologies to improve measurement density and quality are important, it is equally crucial to maximize the use of data from existing sensors and measurements. In this work, we employed physics-based methods to solve inverse heat transfer (IHT) problems. Because accurate and well-understood physics models provide strong prior knowledge, physics-based IHT can provide clear solutions using only a small number of temperature measurements. However, existing work in IHT relies on 'perfect' physics models and has been used to solve relatively simple problems such as conduction heat transfer problems. This thesis extends the IHT problem scope to thermal-fluid systems, including the efficient use of sensor data and uncertainty quantification (UQ).&#13;
&#13;
We leveraged high-resolution thermal-fluid experiments to demonstrate the solution of two types of IHT problems. The first problem estimates the operating conditions of the experiment using a minimal number of sensor readings drawn from the high-resolution temperature data. The estimated solution is used to reconstruct the entire temperature distribution on a heating surface, while the rest of the data is used to validate the inverse problem methodology. The estimation result is supported by UQ that accounts for measurement and modeling errors, adding value to the estimation. The second IHT problem consists of identifying sharp-featured 2D heat source distributions with an array of temperature sensors from a subset of experiment data. Solving this IHT problem involved a regularization prior with strong sparsity-promoting capability. The designed iterative optimization process finds the unknown heat source distribution as well as the regularization hyperparameter. In addition, Bayesian inference enhanced the solution quality by providing UQ of the heat source magnitude.&#13;
&#13;
Expanding the scope of IHT problems, we also addressed online state estimation in dynamic systems. This work focuses on a hypothetical inverse conduction problem of a transient heat source in a composite materials system. The physics model of the system is assumed to include uncertainty arising from gap thermal resistance at material interfaces, which complicates the estimation of an internal heat source from external sensor data. To address this challenge, the IHT approach leverages future time-step measurements to correct estimates at the current time step, enabling more efficient use of limited sensor information. The approach is sampling-based, and its statistics provide UQ on the quantity of interest.&#13;
&#13;
While this work addresses inverse problems within specific thermal-fluid systems, the methodology is designed for broad applicability beyond these cases. It lays the groundwork for advanced sparse sensing and inverse problem-solving in thermal systems, offering a more efficient, tractable, and reliable tool for engineers and researchers addressing system monitoring with modeling uncertainty. Looking forward, these methodologies could be valuable for digital twin applications, where live sensor measurements are integrated to provide robust, real-time estimation of the state of physical systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159130</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on the Economics of Land Use, Environmental Value, and Public Spending</title>
<link>https://hdl.handle.net/1721.1/159125</link>
<description>Three Essays on the Economics of Land Use, Environmental Value, and Public Spending
Larson, Kelsey R.
Across the world, public spending on government programs profoundly alters land use, preservation of environmental value, and the wellbeing of rural populations. These essays explore three such programs and derive lessons for improving their targeting. Chapter 1 tests the effect of conservation easement tax incentives on land conservation in Virginia, using a difference-in-differences design around a 2002 tax reform. It finds that the environmental quality distribution of easements is wide and matches the statewide quality distribution of all undeveloped land, suggesting the program has considerable room to improve targeting. Increasing tax incentives attracts donations of similar or lower quality, but targeting tax incentives only at high-quality land would substantially increase high-quality acres at a cost of 1.18 low-quality acres per high-quality acre. Chapter 2 investigates the targeting of short-term incentives for long-term behavior change, focusing on the case of the EQIP agricultural incentives program. The model connects the short-term and long-term effects of incentives as products of the immediate adoption costs and long-term repeated costs and benefits of a practice. If populations vary primarily by adoption cost, targeting groups with the greatest short-term effect will also maximize the long-term effect. If populations vary primarily by long-term costs and benefits, the groups with the greatest short-term impact are those for whom the practice is highly unprofitable in the long run, and a program can improve long-term impacts by instead targeting those for whom the practice is slightly profitable in the long run. A discontinuity analysis comparing successful and unsuccessful EQIP applicants shows that EQIP induces significant short-term change. Chapter 3 investigates the behavior of Mongolian livestock markets after severe weather shocks, and the role that a livestock insurance program may play in smoothing shocks. 
During severe Mongolian winters, livestock sales increase and prices fall as credit-constrained nomadic herders look to make necessary investments to protect their remaining herd. National integration in livestock markets absorbs a significant share of the weather-related shocks, as 40-60% of district price risk is due to national market fluctuations and 20-40% is due to province effects. This paper finds that national mortality strongly drives price variations, and livestock insurance reduces sales during high-mortality periods.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159125</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Disease Resistance in a Reservoir Species for the Mice Against Ticks Project</title>
<link>https://hdl.handle.net/1721.1/159123</link>
<description>Engineering Disease Resistance in a Reservoir Species for the Mice Against Ticks Project
Buchthal, Joanna
This thesis explores the application of genome editing technologies to combat zoonotic infectious diseases through the development of a novel heritable immunization strategy targeting reservoir species. Focusing on Lyme disease, where white-footed mice (Peromyscus leucopus) serve as the primary reservoir, we propose embedding immunity into the germline of these animals to disrupt the disease transmission cycle and reduce the prevalence of the disease in the environment. By establishing genome engineering protocols for Peromyscus and demonstrating heritable protection against Lyme disease in genetically engineered Mus musculus, we show the feasibility of heritable immunization for long-term disease prevention. This work highlights the potential of genetic engineering for ecological interventions, offering a novel approach to public health challenges while fostering responsible community engagement in ecosystem engineering.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159123</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Spatial Transcriptome Across Whole Organisms</title>
<link>https://hdl.handle.net/1721.1/159122</link>
<description>Mapping the Spatial Transcriptome Across Whole Organisms
Zhang, Ruihan
This study utilizes Expansion Sequencing (ExSeq) to thoroughly investigate the spatial transcriptome of the Caenorhabditis elegans (C. elegans) body. Beyond mapping gene distribution within individual specimens, this research sequences multiple C. elegans to identify both shared and distinct transcriptomic features. The findings lay crucial groundwork for future integration of transcriptomic data with in situ connectomics and in vivo neural activity recordings. Understanding the spatial transcriptome in C. elegans is vital for insights into neural circuit coordination, disease mechanisms, and developmental biology.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159122</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Economic Engineering of Personalized Experiences</title>
<link>https://hdl.handle.net/1721.1/159105</link>
<description>The Economic Engineering of Personalized Experiences
Haupt, Andreas A.
Consumer applications employ algorithms to deliver personalized experiences to users in search, e-commerce, online streaming, social media, and elsewhere, impacting how users spend their time and money. This dissertation studies the design of such personalization algorithms and the economic consequences of their deployment.&#13;
&#13;
The first chapter focuses on the impacts of reward signal precision on online learning algorithms frequently used for personalization. Reward signals are precise when individual measurement is accurate and heterogeneity is low. While some algorithms, which we call "risk-averse", favor experiences that yield more precise reward signals and hence favor measurability and homogeneity, others, in the limit, choose experiences independently of the precision of their associated reward signals.&#13;
&#13;
The third chapter analyzes how preference measurement error differentially affects user groups in optimal personalization. If such measurement error is symmetric, welfare maximization requires delivering majority-preferred experiences at a rate beyond their proportion in the user population and hence increasing concentration. However, asymmetric preference measurement errors may arise due to users' actions to reduce measurement error. Participants in a survey of TikTok users state that they engage in such costly actions.&#13;
&#13;
The fifth chapter studies, through the introduction of a new desideratum for market design, how to achieve personalization without infringing on user privacy. Contextual privacy demands that all (preference) information elicited by an algorithm is necessary for computing an outcome of interest in all possible configurations of users’ information. This property is demanding, as it requires that no two pieces of information can jointly but not unilaterally influence the outcome. Algorithms can protect the privacy of users who are queried late and whose information is not used to compute public statistics of the user population, hence achieving the relaxed notion of maximal contextual privacy.&#13;
&#13;
Two brief chapters introduce new models of human-machine interaction. The first examines the design of generative models, while the second proposes stated regret of past consumption as a new data modality and presents a corresponding data collection tool.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159105</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating Links: Building an Educational Platform to Ask Relevant Questions in Education</title>
<link>https://hdl.handle.net/1721.1/159104</link>
<description>Creating Links: Building an Educational Platform to Ask Relevant Questions in Education
García Bulle Bueno, Bernardo
In this thesis, I document the findings and process through which we built an educational platform (JANN) to do research while having a positive impact on a community. Through JANN, we have coordinated more than 100k hours of tutoring sessions and built (to our knowledge) one of the largest databases of educational recordings in the world. Broadly, the contributions here are twofold: first, we demonstrate the research potential that building a platform can offer. Second, using our educational platform, we pursue novel questions in the field of education with granular information that is traditionally inaccessible for research.&#13;
&#13;
After introducing the work and describing the construction of the platform, the first chapter details an RCT where we show the effect of receiving tutoring on Math performance. Second, we document how we built an estimator of emotions using audio. The estimator was further validated on our dataset and then used to show that activating emotions are related to better class quality. Third, we document an RCT where Math tutors were asked to dedicate some time per week to teach Socioemotional learning skills. We show that this had a positive effect on learning. Moreover, it also caused tutors to teach longer Math classes. Students showed more trust in their tutors, and ultimately the classes had a higher prevalence of positive emotions. Finally, we also study causal inference on observational data from another platform. Using Facebook data, we study digital groups, and through a regression discontinuity design we find that joining a group has a positive effect on making new friends and can diversify a person's connections in terms of income. &#13;
&#13;
Overall, we find that building a platform can broaden the granularity of the data one has access to, make research more scalable, and ultimately also have a positive effect on a community.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159104</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence Tools, Curricula, and Agents for Creative Learning</title>
<link>https://hdl.handle.net/1721.1/159100</link>
<description>Artificial Intelligence Tools, Curricula, and Agents for Creative Learning
Ali, Safinah Arshad
Children's early development of creativity contributes to their learning outcomes and personal growth. However, as children enter formal schooling systems, their creativity declines. Although Artificial Intelligence (AI)-powered tools for K-12 learning hold immense potential for reducing barriers to creative expression, access to these AI tools and AI knowledge among K-12 students and educators remains inequitable, particularly for children from groups underrepresented in STEM. In this thesis, I explore how AI, as an emerging creative medium, can be made more accessible to all young creators. I explore two mechanisms of making a mode of creation more accessible: Creative AI literacy materials for diverse classrooms and AI agentic interactions for scaffolding creative expression for diverse learners. &#13;
&#13;
Utilizing literacy as a mode of making Creative AI tools accessible, I outline the design and evaluation of various Creative AI curricula that I have developed for diverse groups of K-12 students and teachers. To adapt AI learning to art classrooms, I co-developed the AI and Art curriculum with creative educators, designed specifically for use in creative classrooms with creative educators and learners. I implemented the curriculum with 94 middle and high school students across six week-long sessions. I report findings from teacher co-design sessions and students’ learning experiences. Teachers designed learning objectives and AI tools for their classrooms. Students gained knowledge and skills in art concepts, AI concepts, and the application of art in AI. Students also demonstrated significant shifts in their attitudes towards using AI in the creative process, and their sense of belonging in both AI and art communities was heightened. I discuss how AI curricula can be adapted to diverse disciplines and how art can serve as a meaningful avenue for students to engage with AI concepts. &#13;
&#13;
Utilizing social interaction from AI agents as a mode of fostering creative expression in children with neurodevelopmental disorders, I designed and applied inclusive child-robot interactions for collaborative creativity, where 32 elementary school children and a social robot collaboratively created picture stories. The robot provided creativity scaffolding during different parts of the creative storytelling process through social interactions such as feedback, question-asking, divergent thinking, and positive reinforcement, while personalizing the scaffolding to meet the unique needs of neurodivergent children. I investigated the impact of the social robot on children’s exhibited creativity and their emergent creative collaborative interactions in storytelling over multiple sessions. Inclusive design practices eliminated creative barriers for children with neurodevelopmental disorders, and the robot's creativity scaffolding interactions positively influenced children’s creative product and creative process in storytelling. I propose Inclusive Co-creative Child-robot Interaction (ICCRI) guidelines for fostering creativity in children with neurodevelopmental disorders, and accommodating diverse creator styles in complex, open-ended creative tasks.&#13;
&#13;
In this thesis, I contribute curricula, learning tools, child-robot interactions, and findings from examining long-term child-AI co-creative interactions. I discuss design implications for integrating AI tools, curricula and agents in creative learning environments. This thesis is a step towards empowering all children with powerful modes of creation, while helping them be responsible creators, thinkers and citizens in an AI-driven future.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159100</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher-Order Interactions in Social Systems</title>
<link>https://hdl.handle.net/1721.1/159095</link>
<description>Higher-Order Interactions in Social Systems
Sarker, Arnab Kumar
The de facto representation of a social network is a graph: individuals are represented as nodes, and relationships between pairs of individuals are represented as edges. This results in a powerful abstraction by which social relationships can be systematically studied to understand emergent population-scale behavior. However, many social interactions occur in groups: three individuals may co-author a paper, a team of employees may collaborate on a task, a single tweet may mention four users. Breaking such interactions into a collection of pairwise relationships can oversimplify the rich social contexts in which these individuals know one another. This thesis explores a different paradigm of social network analysis, namely, using "higher-order" network models such as hypergraphs and simplicial complexes which can explicitly encode co-present contexts between three or more individuals. The first two projects describe how higher-order interactions can differ from pairwise interactions in terms of micro-level content and macro-level structure, respectively. The latter two projects then develop an applied mathematical toolkit for the algebraic topological analysis of higher-order interactions in social networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159095</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modulating the Electrochemistry of Calcium Metal Anodes</title>
<link>https://hdl.handle.net/1721.1/159078</link>
<description>Modulating the Electrochemistry of Calcium Metal Anodes
Melemed, Aaron M.
Rechargeable lithium ion (Li-ion) batteries have been a foundational energy storage technology. However, there are significant technoeconomic, geopolitical, and sustainability concerns regarding the procurement of Li-ion battery components, from transition metals within the cathode to lithium itself. As such, the development of battery chemistries beyond Li is vital to the long-term viability of electrochemical energy storage. Batteries based on calcium (Ca) metal anodes offer a compelling alternative; Ca is the fifth-most abundant element in the earth’s crust at 41,500 ppm (vs. 200 ppm for Li), offering potential improvements in scalability and sustainability. Ca metal also offers attractive electrochemical metrics, with a redox potential 0.17 V more positive than Li and a theoretical volumetric capacity of 2073 mAh/cm³ (vs. 850 mAh/cm³ for graphite and 2062 mAh/cm³ for Li metal). The field of Ca metal batteries is currently in its early stages, however, due to a limited number of electrolytes that can reversibly plate and strip Ca — a requirement for rechargeability. Two important challenges to overcome are (1) the formation of a passivating solid electrolyte interphase (SEI) between Ca and the electrolyte that inhibits Ca²⁺ transport to the anode, and (2) attractive Ca²⁺–anion interactions in the electrolyte that suppress ionic conductivity and hinder Ca electrochemistry. These limitations rendered Ca plating/stripping unattainable until a groundbreaking first demonstration in 2015. In the decade since, only a handful of reversible electrolytes have been reported, reflecting a severely constrained electrolyte design space. This thesis expands upon this design space through interfacial and electrolyte engineering, offering novel techniques to modulate Ca electrochemistry that provide new degrees of freedom for the development of Ca-based batteries.&#13;
&#13;
To begin, the practical assembly and cycling behavior of Ca foil electrodes are examined in a reversible electrolyte system for the first time. In contrast to historical work examining Ca foil in other common battery electrolytes, Ca foils are demonstrated to be electrochemically accessible for both plating and stripping in Ca(BH₄)₂ in tetrahydrofuran (THF). However, the first cyclic voltammetry (CV) cycle reflects persistent, history-dependent behavior from prior handling, which manifests as characteristic interface-derived features. Three exemplar SEIs exhibit this interface-dominated behavior during initial CV cycles, though the interfacial features diminish with continued cycling. These results reveal that long-term cycling behavior is, to a greater extent, governed by the electrolyte, informing ensuing research into electrolyte composition and speciation. &#13;
&#13;
Competitive interactions between Ca²⁺, anions, and solvent molecules are next harnessed to modify the Ca²⁺ coordination environment in this baseline electrolyte. An exemplar dual-salt electrolyte with differing Ca²⁺–anion interaction strengths, Ca(BH₄)₂ + Ca(TFSI)₂ in THF, is systematically altered. Introduction of a more-dissociating source of Ca²⁺ via Ca(TFSI)₂ drives re-speciation of strongly ion-paired Ca(BH₄)₂, generating larger populations of charged species and enhancing Ca plating currents. A critical parameter, the BH₄⁻/Ca²⁺ ratio, is proposed to govern electroactivity. Parasitic TFSI⁻ decomposition prevents Ca plating when the BH₄⁻/Ca²⁺ ratio is less than one. However, Ca plating in a TFSI⁻-containing electrolyte is demonstrated for the first time when the BH₄⁻/Ca²⁺ ratio is greater than one, as BH₄⁻ displaces strongly coordinating TFSI⁻ from the Ca²⁺ coordination environment. These results directly evidence the impact of coordination-shell chemistry on plating activity and indicate that Ca²⁺–BH₄⁻ interactions can unlock electroactivity in the presence of other Ca salts, significantly increasing the Ca electrolyte design space. &#13;
&#13;
Ca²⁺–solvent interactions are next examined as a subtler tool for electrochemical manipulation. The systematic introduction of glymes into the baseline electrolyte is shown to induce differential changes in Ca²⁺ coordination, as stronger glyme coordination displaces THF from the Ca²⁺ coordination environment, weakens Ca²⁺–BH₄⁻ interactions, and prompts BH₄⁻ redistribution. Examination of electrochemically formed SEI indicates that BH₄⁻-facilitated solvent decomposition governs Ca electrochemistry in these systems, as coordinated THF promotes beneficial borate formation in the SEI but coordinated glymes instead favor the formation of Ca²⁺-blocking phases. The link between Ca²⁺ coordination strength and solvent decomposition is corroborated through the quantification of gaseous products. Altogether, these strategies for the modulation of Ca electrochemistry reveal new avenues for electrolyte engineering that will promote further development of Ca-based batteries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159078</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heat transfer and friction for heating and cooling of fluids in pipes</title>
<link>https://hdl.handle.net/1721.1/159005</link>
<description>Heat transfer and friction for heating and cooling of fluids in pipes
Keevil, Charles S.
            (Charles Samuel)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1930; Includes bibliographical references (leaves 135-136).
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159005</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The classification of the indecomposable integral representations of the dihedral group of order 2p</title>
<link>https://hdl.handle.net/1721.1/159004</link>
<description>The classification of the indecomposable integral representations of the dihedral group of order 2p
Leahey, William Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1962; Vita.; Includes bibliographical references (leaf 84).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159004</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual system analysis through use of finite field sine gratings.</title>
<link>https://hdl.handle.net/1721.1/159002</link>
<description>Visual system analysis through use of finite field sine gratings.
Magnuski, Henry Stanley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Vita.; Bibliography: leaves 150-152.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159002</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Data, to Models, and Back: Making Machine Learning Predictably Reliable</title>
<link>https://hdl.handle.net/1721.1/158964</link>
<description>From Data, to Models, and Back: Making Machine Learning Predictably Reliable
Ilyas, Andrew
Machine learning systems exhibit impressive performance, but we currently lack scalable ways to anticipate their successes, failure modes, and biases. This limits our ability to deploy these systems in the appropriate contexts, and to build systems which we can confidently deploy in high-risk settings. Motivated by this state of affairs, this thesis aims to develop design principles for predictably reliable machine learning. Our ultimate goal is to enable developers to know when their models will work, anticipate when they will fail, and understand “why” in both cases. In pursuit of this goal, this thesis combines large-scale experiments with theoretical analysis to form a precise understanding of the ML “pipeline,” from training data (and the way we collect it), to learning algorithms, to deployment. Fully realized, such an understanding would allow us to build ML systems the same way we build buildings or airplanes—safely, scalably, and with a robust grasp of the underlying principles. In this thesis, we focus on four design choices within this pipeline: model deployment (Part I), dataset creation (Part II), data collection (Part III), and algorithm selection (Part IV). For each of these design choices, we use targeted experiments to uncover the corresponding principles that actually underlie the behavior of ML systems. We distill these principles into concise conceptual models which allow us to both reason about existing systems and design improved ones. Along the way, we will revisit, challenge, and refine various aspects of conventional wisdom surrounding ML model development.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158964</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ΔB₀ Field Control in High Field MRI with Local Multicoil Shim Arrays</title>
<link>https://hdl.handle.net/1721.1/158963</link>
<description>ΔB₀ Field Control in High Field MRI with Local Multicoil Shim Arrays
Arango, Nicolas
Local multicoil ΔB₀ shim arrays enable low-cost, simple-to-fabricate, and physically small static magnetic field control in magnetic resonance imaging. This thesis presents frameworks for coil current calculation for homogeneity and for novel selective excitation applications. As MRI RF coils trend towards repositionable and flexible systems for their ease of use and tight-to-the-patient fit, ΔB₀ shim arrays are left behind for lack of rapid, patient-on-the-table calibration. We show an inverse problem approach with physics-based regularization and adaptation to accelerate calibration by more than 50-fold. The numerical tools developed for calibration also proved useful for design, enabling novel upper bounds on ΔB₀ shim performance and new tools for automatic, application- and anatomy-specific local multicoil array design.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158963</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-orthogonal multiple access using guessing random additive noise decoding aided macrosymbols</title>
<link>https://hdl.handle.net/1721.1/158962</link>
<description>Non-orthogonal multiple access using guessing random additive noise decoding aided macrosymbols
Yang, Kathleen
We propose guessing random additive noise decoding-aided macrosymbols (GRANDAM) as a non-orthogonal multiple access (NOMA) method that can detect, error-correct, and decode multiple users in multiple-input multiple-output (MIMO) systems with imperfect channel estimation, symbol-wise asynchronous transmission, and interference. GRANDAM uses both joint multiuser detection and joint error correction decoding to handle multiple access interference (MAI) from the users of interest. Our method avoids the codebook design and iterative decoding techniques associated with other commonly researched NOMA techniques. We introduce the concept of a macrosymbol, constructed from the combination of all user symbols, for the joint detection component of GRANDAM. For the error correction decoding component, we introduce multiple access channel (MAC) codes, which split the channel rate between users and correct errors due to MAI. Each user's information bits are encoded with an independent MAC code, which can be a short, low-rate linear code such as a cyclic redundancy check (CRC) code or a space-time code such as the Alamouti code. We use a soft-detection variant of GRAND, a near-maximum-likelihood (ML) universal decoding algorithm that inverts noise effect sequences from a sequence of symbols to arrive at a codeword, to correct the received sequence of macrosymbols and to ensure that all user codebooks are simultaneously satisfied in the joint decoding process. We show that joint detection and joint decoding at the receiver leads to lower error rates than individual detection and decoding, with performance comparable to an orthogonal multiple access (OMA) system of similar code rate and length.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158962</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-Technology Co-Optimization of Scaled Electronics&#13;
Based on Two-Dimensional Materials</title>
<link>https://hdl.handle.net/1721.1/158961</link>
<description>System-Technology Co-Optimization of Scaled Electronics&#13;
Based on Two-Dimensional Materials
Zhu, Jiadi
Over the past 60 years, the semiconductor industry has focused on developing highly scaled electronic devices and high-density integrated circuits. However, bottlenecks have arisen recently as transistor dimensions approach physical limits and integration density is constrained. This thesis addresses these issues with two-dimensional (2D) materials: it includes the invention of a low-temperature (&lt; 300 °C) metal-organic chemical vapor deposition (MOCVD) method for growing 2D materials on 8-inch wafers and investigations of extreme device scaling and multi-channel transistors. Design-Technology Co-Optimization (DTCO) and System-Technology Co-Optimization (STCO) are employed to rapidly model, evaluate, and optimize device and circuit performance. Moreover, heterogeneous integration and monolithic 3D integration techniques are investigated, addressing challenges in integrating 2D materials with silicon complementary metal-oxide-semiconductor (CMOS) circuits and flexible substrates. This research aims to advance high-density, high-performance, low-power electronics for next-generation integrated systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158961</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven General Purpose Foundation Models for&#13;
Computational Pathology</title>
<link>https://hdl.handle.net/1721.1/158957</link>
<description>Data-Driven General Purpose Foundation Models for&#13;
Computational Pathology
Lu, Ming Yang (Max)
The field of computational pathology has undergone a remarkable transformation in recent years. Researchers have leveraged supervised and weakly-supervised deep learning with varying degrees of success to address a wide range of tasks, including cancer subtyping and grading, metastasis detection, survival and treatment response prediction, tumor site-of-origin identification, mutation prediction, biomarker screening, and more. However, traditional task-specific models often require extensive labeled data and struggle to generalize across diverse pathology tasks. This limitation motivates the exploration of foundation models, which promise a more scalable, versatile solution by learning broad representations that can be adapted to various downstream applications. In this thesis, I will investigate the capabilities and limitations of data-driven foundation models in computational pathology. Specifically, I will explore two frameworks for developing general-purpose encoder models for pathology images: one using paired image-text data, and another leveraging self-supervised learning on large-scale unlabeled images. Additionally, I will examine downstream applications of these foundation models, including zero-shot transfer to gigapixel whole slide images and the development of an interactive multimodal AI assistant for pathologists.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158957</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics and Optical Properties of Lead Halide&#13;
Perovskite Nanocrystals: From Nanorods to Nanocubes</title>
<link>https://hdl.handle.net/1721.1/158955</link>
<description>Exciton Dynamics and Optical Properties of Lead Halide&#13;
Perovskite Nanocrystals: From Nanorods to Nanocubes
Šverko, Tara
Lead halide perovskites, particularly CsPbBr3, have emerged as leading light emitters owing to their spectral purity, brightness, and facile synthesis. Their soft, ionic lattice makes them unusually defect-tolerant but introduces problems with stability. Additionally, dephasing mechanisms and coupling to phonons are not yet well understood in these semiconductors. &#13;
In the first part of the thesis, I investigate highly confined, anisotropic CsPbBr3 nanorods, elucidating the photophysics governing their broad single-particle linewidths. I utilize ensemble and single particle photoluminescence techniques across a wide temperature range in order to pinpoint exciton-phonon coupling mechanisms, structural and surface effects, and spin mixing in these novel materials.&#13;
In the second part of the thesis, I focus on the opposite size regime, where collective behaviour dominates the optical properties. I develop a novel spectroscopic method to pinpoint dephasing mechanisms that could reduce superradiant and coherent emission, in order to promote rational design and future integration of these nanocrystals into quantum information devices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158955</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems for Usable Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158953</link>
<description>Systems for Usable Machine Learning
Zytek, Alexandra
Many real-world decision problems are complex, with outcomes difficult to measure and evaluate. The impact of decisions made in these domains is nuanced and takes a long time to be fully realized. Individual mistakes can lead to significant costs, and computational tools such as ML models must be integrated alongside existing, well-established human workflows. These properties mean that ML solutions must be usable in order to be effective — in other words, developed and deployed in such a way that humans use them in decision-making and outcomes improve. To improve ML usability, developers create ML tools: diverse kinds of interfaces that allow users to understand ML models and their predictions. In this thesis, we use real-world case studies to synthesize generalizable lessons for applying usable ML tools to complex, real-world decision problems. Based on experience developing ML tools for child welfare screening, we propose a formal taxonomy of feature properties related to usability and interpretability. We then discuss the design and development of a system that makes generating ML explanations with such interpretable features more effective: Pyreal, a framework and Python library implementation that uses updated data transformers to generate explanations of ML models and predictions in terms of interpretable features. Motivated by the development and customization effort required to build ML tools for new applications, we then present Sibyl, a configurable and comprehensive system for generating usable ML interfaces for a wide range of applications, along with a case study applying Sibyl to the decision problem of wind turbine monitoring. We next describe Explingo, our system for transforming traditional ML explanations into natural language narratives to further improve the usability of ML outputs.
We conclude by discussing the practical lessons this work demonstrates: the need for usable ML, the challenges specific to these complex applications, ethical questions, and future directions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158953</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Foundations for Pragmatic Data Science</title>
<link>https://hdl.handle.net/1721.1/158952</link>
<description>Causal Foundations for Pragmatic Data Science
Squires, Chandler
A key goal of scientific discovery is the acquisition of knowledge that is practically useful for societal endeavors, such as the development of medicine or the design of fruitful economic policies. In this thesis, I place front and center the role that scientific models play in the process of decision-making, emphasizing the importance of causal models in science, i.e., models which describe the possible effects of actions upon a system. The work explores central topics in this domain, including causal discovery (learning causal models from data), causal representation learning (learning how to coarse-grain observations into causally sensible “macro-variables”), and end-to-end causal inference (the interplay between causal discovery and downstream decision-making).
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158952</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Foundational End-to-End Verification of Systems Stacks</title>
<link>https://hdl.handle.net/1721.1/158951</link>
<description>Techniques for Foundational End-to-End Verification of Systems Stacks
Gruetter, Samuel
Today's software is full of bugs and vulnerabilities. Formal verification provides a promising remedy through mathematical specifications and machine-checked proofs that the implementations conform to the specifications. However, there could still be bugs in the specifications or in the verification tools, which could lead to missed bugs in the software being verified. Therefore, this dissertation advocates for foundational end-to-end verification, a proof-based software development method that can mitigate both of these concerns:&#13;
&#13;
It is end-to-end in the sense that the correctness proofs of individual components are used to discharge the assumptions of adjacent components throughout the whole stack, resulting in end-to-end theorems that only mention the top-most and bottom-most specifications, so that bugs in intermediate specifications cannot invalidate the soundness of the end-to-end statement anymore.&#13;
&#13;
The method is foundational in the sense that the soundness of the proofs relies only on the foundations of mathematics and on the correctness of a small proof-checking kernel, but not on the correctness of other, domain-specific verification tools, because these tools are either proven correct once-and-for-all, or they output proofs that are checked by the kernel.&#13;
&#13;
Ensuring that all the reasoning can be checked by the same small foundational kernel requires considerable effort, and the first part of this dissertation presents techniques to reduce this effort:&#13;
&#13;
Omnisemantics, a new style of semantics that can be used instead of traditional small-step or big-step operational semantics, offers a smooth way of combining undefined behavior and nondeterminism, and it enables forward-simulation compiler correctness proofs for nondeterministic languages, whereas previous approaches must fall back to the much less convenient backward simulations when support for nondeterminism is needed.&#13;
&#13;
Live Verification is proposed: a technique that turns an interactive proof assistant into a programming assistant that displays the symbolic state of the program as the user writes it and allows the user to tweak the symbolic state as long as the tweaks are provably sound. As an additional convenience, instead of stating lengthy loop invariants, the user only needs to give the diff between the symbolic state before the loop and the desired loop invariant, resulting in shorter and more maintainable annotations. Finally, to make Live Verification practical, a number of additional proof techniques are presented.&#13;
&#13;
The second part of the dissertation shows how these techniques were useful in three collaborative case studies: An embedded system running on a verified processor with an end-to-end proof where the software-hardware interface specification cancels out, a cryptographic server with an end-to-end proof going from high-level elliptic-curve math all the way down to machine code, and a trap handler to catch unsupported-instruction exceptions whose correctness proof combines program-logic proofs about C-level functions, a compiler correctness proof, and proofs about hand-written assembly.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158951</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling Algorithmic Problems on Massive Graphs</title>
<link>https://hdl.handle.net/1721.1/158948</link>
<description>Tackling Algorithmic Problems on Massive Graphs
Biswas, Amartya Shankha
As datasets grow increasingly larger, traditional computational models, which require reading the entire input, become impractical due to constraints on time, memory, and randomness. This thesis explores alternative algorithmic approaches for processing massive graphs under these constraints. Specifically, we focus on algorithms for the following graph problems. Motif Counting and Sampling: This involves developing efficient algorithms for counting and sampling small motifs (constant-sized subgraphs) like stars and triangles, which are crucial for applications in biology, chemistry, and social networks. The thesis presents improved methods for both approximate and exact counting and sampling of general motifs. Graph Sparsification and Spanners: The problem of sparsifying graphs involves removing (usually most) edges of the input graph in a way that preserves essential properties such as connectivity and approximate distances. This thesis introduces algorithms for constructing sparse spanning graphs, as well as spanners – sparse subgraphs that approximate distances up to a multiplicative factor. We obtain faster algorithms in parallel settings, and we initiate the study of average-case graph inputs in the sublinear setting, obtaining results beyond the worst-case lower bounds. We investigate both of these problems in different models, including sublinear query access, local computation algorithms (LCAs), and the MPC model, and we discuss the implications of these results for distributed and parallel models of computation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158948</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Lewis Acidic Pnictenium Ions Using Carbone and Capping Arene Ligands for Bond Functionalization</title>
<link>https://hdl.handle.net/1721.1/158947</link>
<description>Design of Lewis Acidic Pnictenium Ions Using Carbone and Capping Arene Ligands for Bond Functionalization
Warring, Levi
Interest in the chemistry of antimony and bismuth is rapidly growing due to the isolation of low-coordinate, subvalent, or Lewis acidic compounds that can mediate reactivity traditionally reserved for their d-block counterparts. Ligand strategies play a key role in the isolation of such species. Anionic ligands with large steric profiles, as well as carbenes, have been widely implemented to stabilize subvalent heavy group 15 element compounds. However, synthetic strategies to prepare Lewis acidic antimony and bismuth complexes remain underexplored. Cationization is one of the most common methods used to enhance the Lewis acidity of heavy group 15 elements by creating a vacant p orbital on the pnictogen atom. Lewis acids are also employed in frustrated Lewis pair (FLP) chemistry to enable intra- and intermolecular reactivity. Carbone ligands, which are neutral, four-electron-donor ligands, offer a unique ability to support highly electrophilic main-group elements. This dissertation investigates the stabilization of heavy pnictenium ions using neutral donor ligands, such as carbodicarbenes and capping arene ligands, and explores their potential in Lewis acid-mediated chemistry. In Chapter Two, the synthesis and characterization of a series of carbodicarbene-pnictenium ions is described. The utilization of strongly donating carbodicarbene ligands enables the isolation of mono-, di-, and tricationic antimony and bismuth species. These ions have multiple-bond character between carbon and antimony/bismuth, representing some of the first examples of stibaalkene and bismaalkene cationic compounds. The Lewis acidity of these ions was assessed using the Gutmann-Beckett method and computationally derived fluoride ion affinities, the latter of which indicates Lewis superacidity for the bis(pyridyl)carbodicarbene-pnictenium trications. In Chapter Three, the reactivity of the bis(pyridyl)-carbodicarbene stibenium trication toward C(sp³)–H and C(sp)–H bonds is demonstrated. 
The Lewis superacidic antimony cation mimics the chemistry of frustrated Lewis pairs in the presence of the sterically encumbered base 2,6-di-tert-butylpyridine to enable C–H bond breaking of acetonitrile and a set of terminal alkynes. Kinetic analyses, in conjunction with density functional theory, support a mechanism by which acetonitrile coordinates to antimony, acidifying the C–H bonds, which can be subsequently deprotonated by the base in solution. The resulting stiba-methylene nitrile and stiba-alkynyl adducts undergo reactivity with elemental iodine to generate iodoacetonitrile and 1-iodoalkynes while reforming a stibenium trication. In Chapter Four, capping arene ligands are coordinated to antimony and bismuth tribromide to afford a series of κ²-bound complexes. Bromide abstraction from these neutral adducts affords ionic compounds. Both the neutral and ionic species have distinctive Menschutkin interactions, whereby the lone pair on the pnictogen atom is oriented toward the π system of the pendant arene. Shortening of the distances between the pyridyl nitrogen atoms and pnictogen atom are observed upon cationization from the neutral adducts. The Lewis acidity of these complexes was assessed using the Gutmann-Beckett method. Notably, acceptor numbers as high as 111 are observed for these ions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158947</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Structure for Efficient and Dexterous Contact-Rich Manipulation</title>
<link>https://hdl.handle.net/1721.1/158946</link>
<description>Leveraging Structure for Efficient and Dexterous Contact-Rich Manipulation
Suh, Hyung Ju Terry
Contact-rich manipulation has proved challenging due to the need to consider the many combinatorial possibilities of making or breaking contact with the surrounding environment. As a result, existing methods have often resorted either to combinatorial optimization that exploits dynamics structure but considers all possibilities exhaustively, or to compute-heavy and inefficient sampling methods that rely on blackbox optimization such as Reinforcement Learning (RL). In this thesis, I aim to show that by combining structured contact smoothing with local gradient-based control and sampling-based motion planning, we can bypass the combinatorial explosion of contact modes while still leveraging structure, achieving highly efficient contact-rich manipulation. To achieve this capability, I first shed light on how RL abstracts contact modes and optimizes difficult landscapes by combining stochastic smoothing and zeroth-order optimization; I then show how following a similar stochastic strategy while using gradients suffers from drawbacks such as empirical bias and high variance. To leverage structure more helpfully, I propose a method for smoothing contact dynamics without relying on stochastic smoothing, bypassing these drawbacks. Using this smoothing scheme, I present a highly efficient and performant local controller based on gradient-based trajectory optimization and model predictive control. Finally, I connect these local control capabilities with global sampling-based motion planners to achieve long-horizon global plans. The proposed method computes contact-rich plans such as dexterous in-hand reorientation and whole-body manipulation much more efficiently than RL while being highly scalable compared to methods that explicitly reason about contact modes. 
These results reduce contact-rich manipulation to kinodynamic motion planning, shifting its true difficulty from the combinatorial explosion of contact modes to combinatorial, highly non-local decisions over motion-planning behaviors.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158946</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quality-Centric Single-Image Procedural Material Generation</title>
<link>https://hdl.handle.net/1721.1/158945</link>
<description>Quality-Centric Single-Image Procedural Material Generation
Li, Beichen
Procedural materials, represented as functional node graphs, are ubiquitous in computer graphics for photorealistic material appearance design. They allow users to perform intuitive and precise editing to achieve desired visual appearances. However, even for experienced artists, creating a procedural material given an input image requires professional knowledge and significant effort. Current inverse procedural material modeling approaches enable the automatic generation of procedural materials from input images. However, the visual quality of the generated materials is fundamentally limited by insufficient high-quality training data from industry-standard procedural materials, reliance on token-space supervision without visual feedback, and the absence of approximation-free node parameter post-optimization. My thesis presents advanced dataset augmentation, model training, and parameter post-optimization algorithms to address these challenges, significantly improving the perceptual match between the generated procedural material and the input image. Furthermore, the methodologies can be applied to other inverse procedural graphics problems to expedite similar artistic creation processes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158945</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Additively Manufactured Quadrupole Mass Filters for Low-Cost and High-Performance Applications</title>
<link>https://hdl.handle.net/1721.1/158944</link>
<description>Development of Additively Manufactured Quadrupole Mass Filters for Low-Cost and High-Performance Applications
Eckhoff, Colin C.
With a growing need for more compact and affordable mass spectrometers, many efforts have been made to miniaturize quadrupole mass filters (QMFs). Unfortunately, these efforts have yielded devices with inadequate performance for practical applications in analytical chemistry. This study reports the successful creation of a low-cost, high-performance QMF by means of additive manufacturing. Vat photopolymerization of a glass-ceramic feedstock was used to create a novel, monolithic structure, and selective electroless nickel-boron plating was used to metallize the structure, forming a completed QMF that is lightweight and inexpensive to produce (20 USD per device). Furthermore, additive manufacturing allows QMF dimensions to be rapidly scaled to the optimal size for a given application, a size larger than that of most prior affordable quadrupole designs. Despite the limited precision of additive manufacturing, optimization techniques can be leveraged to produce high-quality devices with smooth surfaces. As a result, our QMFs achieved mass resolutions up to 164 at 69 Da, with abundance sensitivities sufficient to detect carbon-13 isotopes at lower masses—a level of performance comparable to commercial devices. These results indicate that additive manufacturing, properly employed, can significantly advance the state of the art of QMFs and other mass spectrometry technologies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158944</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast methods for full-wave electromagnetic solvers in MRI</title>
<link>https://hdl.handle.net/1721.1/158943</link>
<description>Fast methods for full-wave electromagnetic solvers in MRI
Guryev, Georgy D.
High static field (3T) MR scanners can produce human tissue images of astounding clarity, but they rely on high-frequency (123 MHz) electromagnetic radiation that generates complex in-tissue field patterns that are patient-specific and potentially harmful. Many such scanners use multiple transmitters to better control field patterns, but the transmitters are then adjusted based on general guidelines rather than optimized for the specific patient, mostly because computing patient-specific fields was presumed far too slow. It was recently demonstrated that the combination of fast low-resolution tissue mapping and fast voxel-based field simulation can be used to perform a rapid patient-specific MR safety check. However, the field simulation still required several minutes, making it too slow to perform the dozens of simulations that would be needed for patient-specific optimization. In this work, we develop a set of numerical acceleration techniques that enable fast field simulations, bridging the gap between the performance of current state-of-the-art full-wave electromagnetic packages and the time requirements dictated by real-time patient-specific field optimization in a clinical setting. These techniques cater to a large range of body sizes and complex coil geometries.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158943</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Shock Waves on a Chip: Generation and Applications</title>
<link>https://hdl.handle.net/1721.1/158942</link>
<description>Weak Shock Waves on a Chip: Generation and Applications
Deschamps, Jude
In conventional laser-shock experiments in solid media, shock waves are typically excited through ablation of a photoacoustic transducer layer deposited onto the sample of interest. Unavoidably, the target materials are damaged. This necessitates changing targets after each exposure, likely degrading shot-to-shot reproducibility and data quality while lowering the throughput of the experiment. Motivated by the need to generate large-amplitude transient strain waves at a high repetition rate, this thesis introduces a novel platform for the non-destructive generation and amplification of acoustic waves with associated strain levels in the percent range — up to the formation of shock waves. The acoustic amplification scheme is first described. Then, owing to the technique's ability to repeatedly load a material with finite-amplitude strain waves, the platform is demonstrated for microscale fatigue testing. Finally, the strain localization of surface acoustic waves is leveraged to transiently modulate a monolayer of a transition metal dichalcogenide deposited on a substrate.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158942</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Structures for Scalable Vertical Gallium Nitride Power Devices</title>
<link>https://hdl.handle.net/1721.1/158939</link>
<description>Novel Structures for Scalable Vertical Gallium Nitride Power Devices
Perozek, Joshua Andrew
Solid state electronic devices have been the backbone of modern power systems for decades. However, as we enter an era fuelled by renewable energy and defined by pervasive electrification, novel power devices must be developed to address the increasingly stringent demands for high power density and efficiency. In this thesis, the theory and fabrication of several new gallium nitride (GaN) power devices will be developed to push beyond current device limitations.&#13;
&#13;
A key advance stems from recognizing that vertical GaN power devices are fundamentally three-dimensional. Fabrication of these devices does not readily benefit from the decades of expertise gained in planar processing within the silicon industry. Instead, we present a new approach to creating vertical fin-based devices that enables self-aligned fabrication of vertical GaN finFETs and related devices. &#13;
&#13;
Within this work, we also explore the scalability of vertical GaN finFETs. Working with 8-inch GaN substrates, we demonstrate that vertical finFETs can be fabricated using a fully CMOS compatible process flow. This enables a scalable pathway to the widespread adoption of GaN by leveraging existing manufacturing capabilities.&#13;
&#13;
As a final look toward the future of GaN devices, we explore methods for surpassing the one-dimensional unipolar limit of GaN through superjunction devices. Superjunction theory, which has been highly successful for Si devices, is applied to GaN, and a new framework for designing such devices is presented. Using our approach to creating vertical fin-based devices, we fabricate record high-aspect-ratio demonstrations of a new class of fin diodes that reveal a promising path toward the next generation of GaN power devices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158939</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Generalizable Systems by Learning Composable Energy Landscapes</title>
<link>https://hdl.handle.net/1721.1/158938</link>
<description>Learning Generalizable Systems by Learning Composable Energy Landscapes
Du, Yilun
How can we construct intelligent embodied agents in the physical world? Such agents should be able to autonomously solve tasks that have not been seen before, subject to external disturbances in the environment, as well as new combinations of factors such as lighting, varying sensor inputs, and unexpected interactions with agents and other objects. An important subgoal towards constructing such intelligent agents is to construct models that can robustly generalize, not only to distributions of tasks similar to ones seen at training time but also to new unseen distributions. This departs from standard machine learning techniques, which usually assume identical training and test distributions. Towards this goal, in this dissertation, we’ll illustrate how we can achieve certain forms of generalization by estimating energy landscapes over possible predictions for each task, with accurate predictions assigned lower energy. This modeling choice formulates prediction as a search process on the energy landscape, enabling zero-shot generalization to new constraints by adapting the energy landscape. In addition, this allows us to generalize to entirely new distributions of tasks in a zero-shot manner by composing multiple learned energy landscapes together. In this dissertation, we first introduce a set of techniques to train energy landscapes and an algebra in which we can compose and discover composable energy landscapes. Next, we illustrate how energy landscapes can be composed in a diverse set of ways, including logical operators, probability distributions, graphical models, constraints, and hierarchical compositions, enabling effective generalization across vision, decision-making, multimodal, and scientific settings.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158938</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>(De)fluorination of Organic Substrates Mediated by Nontrigonal Phosphorus Triamide</title>
<link>https://hdl.handle.net/1721.1/158937</link>
<description>(De)fluorination of Organic Substrates Mediated by Nontrigonal Phosphorus Triamide
Lim, Soohyun
Due to their high electronegativity and small size, fluorine atoms form the strongest single bonds to carbon and impart unique physical, chemical, and physiological properties to organic compounds. Therefore, the number of industrially synthesized products containing fluorine has seen a substantial increase in recent decades. The strategies to access organofluorine compounds include two opposite approaches: 1) (nucleophilic, electrophilic, or radical) fluorination, and 2) selective defluorination of polyfluorinated substrates. Both creating and breaking C−F bonds in selective manners are of great importance, and present challenges of their own. The work herein describes chemical transformations incorporating the cleavage or formation of C−F bonds mediated by a nontrigonal phosphorus triamide. Thanks to the enhanced biphilicity resulting from geometric deformation, the Cs-symmetric tricoordinate phosphorus compound can activate strong covalent bonds.&#13;
&#13;
At the outset, Chapter 1 reviews the existing literature on (de)fluorinative chemical transformations focused on deoxyfluorination and hydrodefluorination, as well as examples of nontrigonal tricoordinate phosphorus compounds and their characteristic reactivity. Combining the two approaches, Chapters 2 and 3 introduce method development for accessing organofluorine compounds using a butterfly-shaped phosphorus triamide as a bond activator. In Chapter 2, the method for deoxyfluorination of aliphatic alcohol substrates via O−H activation by phosphorus, catalyzed by borane Lewis acids, is detailed. The scope of the method covers tertiary alkyl fluorides, which are generally challenging targets, selectively yielding stereoinversion products for chiral substrates. Chapter 3 reports a closed P(III)/P(V) synthetic cycle for the hydrodefluorination of polyfluoroarene substrates that consists of C−F oxidative addition, F-to-H ligand metathesis, and C−H reductive elimination. The overall sequence is analogous to transition metal-catalyzed aryl cross-coupling reactions. Taken together, the methods described in this dissertation highlight the potential of nontrigonal phosphorus compounds as a mediator for the manipulation of strong covalent bonds, useful in the development of synthetic methods that complement existing ones.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158937</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Telecom Band-Compatible Molecular Color Centers for Quantum Networking</title>
<link>https://hdl.handle.net/1721.1/158936</link>
<description>Developing Telecom Band-Compatible Molecular Color Centers for Quantum Networking
Greer, Rianna Bliss
Quantum networking is a new modality of information transmission that will revolutionize the future of telecommunications. However, the realization and widespread use of quantum networking demands low signal loss and distortion over long distances. To achieve this, prospective materials for quantum networking must emit in the optical communications band of fiber optics, defined as 1260 to 1625 nm and commonly known as the “telecom band.” Vanadium dopants in silicon carbide have demonstrated near-infrared emission combined with a spin-photon interface, but these systems lack tunability over emission wavelength, preventing emission in the telecom band. This thesis combines the promising electronic structure of these dopants and the inherent tunability of molecular systems to create a family of luminescent paramagnetic vanadium complexes that can achieve both telecom band emission and generalized fine-tuned control over emission wavelength. Chapters 2 and 3 will outline approaches to target telecom band emission in a series of V_III complexes through a gradual and controlled increase of metal-ligand bonding covalency. This strategy culminates in a series of V_III complexes which tune emission wavelength from 1237 nm to 1424 nm, achieving emission into the telecom band. Chapter 4 will discuss the impact of these strategies on the magnetic properties and spin dynamics of these systems through an analysis of their behavior under high-frequency high-field EPR spectroscopy. This work provides a blueprint for the next generation of molecular spins with optical addressability in the near-infrared regime for applications in quantum networking.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158936</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining the Influence of Host Cell Proteostasis Networks and&#13;
Temperature on Influenza Evolution</title>
<link>https://hdl.handle.net/1721.1/158935</link>
<description>Defining the Influence of Host Cell Proteostasis Networks and&#13;
Temperature on Influenza Evolution
Patrick, Jessica
Viruses accumulate mutations and evolve more rapidly than any domain of life. Not only does the random acquisition of mutations drive this high evolutionary rate, but constant pressure from the host also contributes. As minimalistic pathogens, viruses rely on host machineries to synthesize, fold, and degrade their proteins. These proteostasis machineries can influence the accessible sequence landscape of viral proteins, and thus shape their evolution. Furthermore, the entire viral replication cycle takes place within the host cell. Therefore, the environment of the host, including factors such as temperature, can influence the evolutionary trajectory of viral proteins. The overarching goal of my thesis work is to better understand the influence of the host cell environment, with a particular focus on the proteostasis networks and the temperature of the cell.&#13;
My first project uses deep mutational scanning to elucidate the roles of the host proteostasis networks in defining influenza hemagglutinin’s evolutionary ability. My second project takes a similar approach to investigate how high or low temperatures impact the accessible sequence space of HA. My third project combines both proteostasis network and temperature perturbations to investigate how the host cell environment can impact HA’s ability to escape neutralizing antibodies. My final project leverages the high mutation rate of influenza to study the phenomenon of error catastrophe, and the impact of altered proteostasis network environments on buffering the effect of mutations. Together, these studies clearly define a role for both the host proteostasis networks and the temperature environment in determining influenza’s accessible sequence space, currently underappreciated factors in predicting how viruses evolve to evade selection pressures and a critical component to consider for successful vaccine and drug development as well as pandemic preparedness.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158935</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Designing Efficient Systems and Algorithms for Sparse and&#13;
Quantized Deep Learning Computing</title>
<link>https://hdl.handle.net/1721.1/158928</link>
<description>Co-Designing Efficient Systems and Algorithms for Sparse and&#13;
Quantized Deep Learning Computing
Tang, Haotian
Deep learning models are becoming increasingly complex, expanding from 1D text and 2D images to 3D point clouds, while their size continues to grow exponentially. This trend highlights the need for greater efficiency. This thesis systematically explores efficiency in two resource-intensive domains—autonomous driving and generative AI—by focusing on fundamental model compression techniques: sparsity and quantization, alongside the co-optimization of systems and algorithms. Sparsity is crucial for autonomous vehicle (AV) applications. LiDAR processing, which requires 3D sparse computation, is inefficiently handled by current GPU libraries, creating a performance bottleneck in AV perception. To address this, we propose TorchSparse++, a high-performance GPU system for 3D sparse convolution, achieving 1.7-3.3× speedups over state-of-the-art libraries. Additionally, we introduce BEVFusion, an efficient multi-sensor fusion framework that fuses information in bird’s-eye-view (BEV) space, reducing computation by 1.9× while enhancing accuracy compared to prior methods. Generative AI is constrained by the massive size of models, necessitating quantization for efficient deployment. This thesis presents two GPU systems for accelerating large language models (LLMs): TinyChat for edge LLM deployment and QServe for cloud-based LLM serving. TinyChat boosts edge LLM inference by 3× using activation-aware weight quantization (AWQ). QServe further improves performance with activation and KV cache quantization, enhancing the throughput of NVIDIA TensorRT-LLM by 1.2-2.4× on A100 GPUs. Finally, we introduce HART, an efficient autoregressive image generation method that achieves 4.5-7.7× higher throughput compared to diffusion models while maintaining visual quality. HART achieves this improvement by leveraging quantized, or discrete, visual tokens to capture the high-level structure of images, while a lightweight diffusion model is used for fast inference of finer details.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158928</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building World Models with Neural Physics</title>
<link>https://hdl.handle.net/1721.1/158927</link>
<description>Building World Models with Neural Physics
Ma, Pingchuan
World models learn the dynamics of environments in a data-driven manner, enhancing performance and efficiency in downstream tasks such as control, design, recognition, and generation, thanks to cost-effective simulation and differentiability. A pre-trained world model should ideally (1) accurately simulate ground-truth dynamics, (2) adapt easily to novel configurations, and (3) generalize across diverse physical effects. Previous attempts in this area have either utilized differentiable model-based physics with few parameters exposed or trained for specific scenarios with minimal physical priors integrated. These world models fall short of their objectives, limiting their applicability in real-world deployments and scalability to larger pre-trained world models. In this thesis, we aim to build world models with neural physics, a hybrid neural-physics framework that models the basic dynamics with differentiable physics while learning all additional modules through neural networks. By integrating neural physics, the world models adhere closely to physical principles while efficiently learning diverse effects. The modular structure of neural physics allows world models to generalize to novel configurations simply by installing different pretrained neural modules. We will demonstrate the effectiveness of this novel framework in applications such as reconstruction, robotic control, and scientific discovery.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158927</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Sensing of N-Nitrosodimethylamine and Methane</title>
<link>https://hdl.handle.net/1721.1/158926</link>
<description>Chemical Sensing of N-Nitrosodimethylamine and Methane
Feng, Haosheng
In Chapter 1, an introduction to chemical sensing is presented. Several modalities are introduced, including optical, gravimetric, and chemiresistive, together with brief introductory backgrounds. Subsequently, metrics to assess sensor performance are summarized. Finally, some strategies to combat interferants during chemical sensing are discussed.&#13;
&#13;
In Chapter 2, published work on a luminescent method to determine levels of N-nitrosamines is presented. This work involved the synthesis of five phosphines bearing N-heterocycles, followed by coordination with Cu(I) to give luminescent complexes. Emission spectra spanned the visible range, demonstrating the tuneability of these compounds. The complexes’ interactions with N-nitrosamines were also examined through spectroscopy and crystallography.&#13;
&#13;
In Chapter 3, development of free-volume promoting monomers and catalysts for insertion polymerization is demonstrated. Insertion polymerized material was compared to that synthesized using Ring Opening Metathesis Polymerization (ROMP), showing that the former had superior properties for methane detection through higher surface areas and porosity.&#13;
&#13;
In Chapter 4, the structure-activity relationship of components within a previously published methane sensing assembly was thoroughly examined to identify how changes in humidity levels influenced sensing response. Poly-4-vinylpyridine modification was performed under flow conditions, while the chemical composition of the polyoxometalate (POM) component was also varied. Humidity was determined to most significantly affect the POM and influence the electrical contact between carbon nanotubes and gold.&#13;
&#13;
Finally, Chapter 5 presents several modifications of the parent porous framework outlined in Chapter 3. A soluble monomer bearing adamantyl substituents was successfully synthesized by attachment of isopropyl units. Its propensity to participate in insertion polymerization was then examined. Sulfonation and nitration of the parent polymer I-AntN were also conducted and the products analyzed. Attempts at copolymerization of AntN with CO are also described.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158926</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Scalable Quantum Systems From First-Principles to Large-Scale Control</title>
<link>https://hdl.handle.net/1721.1/158925</link>
<description>Engineering Scalable Quantum Systems From First-Principles to Large-Scale Control
Harris, Isaac B. W.
Color centers in solids are promising platforms for quantum communication, sensing, and computing, featuring highly coherent optical transitions, as well as native electron and nuclear spins that can be used as quantum memories. Existing state-of-the-art demonstrations have shown that multi-qubit control, spin-photon entanglement, and heralded entanglement are possible with devices consisting of a few color centers. However, the path to scaling the number of color centers integrated in these devices to the thousands or millions needed for advanced quantum networking and computing applications remains unclear. In particular, the requirement for highly coherent quantum operations necessitates both operation at cryogenic temperatures and precise classical control signals delivered to each color center. Precise qubit control greatly increases the system complexity, while cryogenic operation limits the amount of power that the system can dissipate. Both factors severely limit the number of color centers that can realistically be included in a single device using existing methods. This work will tackle the scaling problem from a system-level perspective from two directions. Firstly, I will quantify performance trade-offs between coherence, temperature, and optical properties of the group-IV color centers. A novel color center system, the ¹¹⁷SnV⁻ hyperfine color center, will be presented and its advantages compared to traditional group-IV color centers will be explored. Secondly, a method to integrate color centers with application-specific integrated circuits (ASICs) will be demonstrated. The ASICs provide multiplexed control signals and increased control field efficiency, thus decreasing both the wiring complexity and thermal load per qubit. This work will thus pave the way to color center-based devices in which the number of qubits is not limited by the complexity or power dissipation of the control system.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158925</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information-centric Algorithms for Feature Extraction in High-Dimensional Sequential Data</title>
<link>https://hdl.handle.net/1721.1/158923</link>
<description>Information-centric Algorithms for Feature Extraction in High-Dimensional Sequential Data
Jin, Jiejun
Hidden Markov Models (HMMs) are a cornerstone of sequential data analysis, offering a robust framework for modeling observable events influenced by hidden internal states. With applications spanning speech recognition, video analysis, bioinformatics, and financial time series, HMMs enable the prediction and classification of raw data by leveraging their dual-layer stochastic structure: hidden Markov states and observable outputs. However, as real-world data grows increasingly high-dimensional, extracting meaningful features from observations becomes critical to reduce computational complexity while retaining relevant information.&#13;
&#13;
This thesis addresses key challenges in feature extraction for high-dimensional HMMs. Current methods, such as neural networks (NNs), are widely used for nonlinear feature learning but lack mechanisms to prioritize useful features or incorporate known structural constraints. To bridge this gap, this work proposes novel algorithms to decouple representation learning from task-specific objectives and extract features aligned with predefined constraints.&#13;
&#13;
The theoretical foundation, including local information geometry and Hirschfeld-Gebelein-Rényi (HGR) maximal correlation, is introduced in Chapter 2. Chapter 3 details three innovative feature extraction algorithms and their corresponding neural network architectures, highlighting their strengths and limitations. Convergence analyses and tail bounds for these methods are presented in Chapter 4. Numerical simulations validating the efficacy of the proposed approaches are provided in Chapter 5, while Chapter 6 concludes with a summary of contributions and potential future research directions.&#13;
&#13;
This thesis advances the field by offering structured, constraint-aware feature extraction techniques tailored for high-dimensional sequential data, setting the stage for more effective and interpretable inference in HMMs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158923</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Distributed Deep Neural Network Training&#13;
and Fine-Tuning Through Resource Interleaving</title>
<link>https://hdl.handle.net/1721.1/158918</link>
<description>Accelerating Distributed Deep Neural Network Training&#13;
and Fine-Tuning Through Resource Interleaving
Rajasekaran, Sudarsanan
The ever-growing increase in dataset and model sizes of deep learning has created a massive demand for efficient GPU clusters. As the number of GPUs increases, the communication overhead of distributed Machine Learning (ML) training and fine-tuning workloads quickly takes up a significant portion of iteration time. Yet state-of-the-art ML schedulers tend to ignore the communication pattern of ML jobs when placing workers on GPUs. This thesis advocates for communication-aware resource scheduling as a critical approach to optimizing network utilization in ML clusters. We introduce a key idea for accelerating Deep Neural Network (DNN) jobs that interleaves the communication demands of different jobs sharing a network link. To illustrate this concept of interleaving, we first demonstrate how intentionally creating unfairness in bandwidth share between different DNN jobs improves their iteration times. Building on this insight, we present two novel systems designed to minimize network congestion and accelerate DNN training and fine-tuning jobs. The first system, Cassini, achieves interleaving using a centralized approach. In contrast, the second system, MLTCP, achieves the same goal using a distributed approach. Both systems are practical and readily deployable, depending on the service provider’s preference for deploying centralized or distributed solutions. In particular, Cassini is a centralized network-aware job scheduler for ML clusters. Cassini introduces a novel geometric abstraction to consider the communication pattern of different jobs while placing them on network links. To do so, Cassini uses an Affinity graph that finds a series of time-shift values to adjust the communication phases of a subset of jobs such that the communication patterns of jobs sharing the same network link are interleaved with each other. Second is MLTCP, a distributed technique to approximate an interleaved centralized flow schedule. 
At the heart of MLTCP lies a straightforward principle based on a key conceptual insight: by scaling the congestion window size (or sending rate) based on the number of bytes sent at each iteration, MLTCP flows eventually converge into a schedule that reduces network contention. To evaluate these systems, we conduct experiments using real-world DNN models on a testbed with NVIDIA A100 GPUs. Cassini and MLTCP improve the average iteration times by up to 1.6× and 1.9×, respectively, demonstrating their effectiveness in reducing network congestion and accelerating ML workloads.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158918</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Fleet Learning From Heterogeneous Data</title>
<link>https://hdl.handle.net/1721.1/158917</link>
<description>Robot Fleet Learning From Heterogeneous Data
Wang, Lirui
One of the key roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. Similar to humans, robots and embodied agents inherently have to deal with heterogeneous inputs and outputs due to the nature of the perception-action loops across diverse environments. The data formats and distributions collected from these systems and used for training vary across modalities such as color, depth, tactile, and proprioceptive information, and across domains such as simulation, real robots, and human videos. Moreover, fleets of robots and machines ingest massive amounts of streaming data generated by interacting with their environments in a distributed fashion, and teams of robots can co-acquire diverse skills through their experiences in varied settings. The core idea behind my research, fleet learning, is to embrace the heterogeneous nature of robot learning to develop efficient and general algorithms. In this thesis, I will present a few angles toward tackling such challenging problems and application domains, ranging from tokenizing data, aligning representations, and merging policies, to composing skills. We develop insights and theories, often from linear settings, for how fleet learning can lead to more principled and effective use of robotic data and propose algorithmic progress, often through alignments, toward building generalist robotic foundation models. Empirically, we show advanced robotic manipulation capabilities by leveraging data from multimodal sensory inputs and multiple domains. 
In addition to outperforming several previous state-of-the-art methods across simulation and real-world benchmarks, we develop intelligent systems for robotic applications such as package handling in warehouses as well as dexterous tool-use tasks with applications in manufacturing, logistics, and household robotics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158917</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of foundation models for molecular representation in cancer drug discovery and precision oncology</title>
<link>https://hdl.handle.net/1721.1/158915</link>
<description>Application of foundation models for molecular representation in cancer drug discovery and precision oncology
Khokhlov, Khrystofor
Drug discovery is a resource-intensive and time-consuming process, often requiring decades of effort and substantial financial investment, with a high risk of failure. Despite advances in high-throughput screening technologies, the size of chemical space presents a significant challenge: it is not feasible to experimentally screen all potential drug-like molecules. Most commercially available chemical libraries consist of molecules that are synthesized on demand from pre-existing building blocks, further limiting the exploration of novel chemotypes. This thesis aims to explore whether drug discovery could be accelerated by leveraging advances in deep learning (DL) models to identify promising hit candidates and improve the prediction of drug response in cancer. Development of cancer drugs that will be effective on a predictable set of targets remains a major challenge. We are developing a DL model capable of identifying potentially novel cancer drug chemotypes and reliably predicting drug response on cancer cell line targets. Leveraging recent progress in transformer-based architectures and graph neural networks, we use molecular language models, graph models and cell foundation models to embed both molecular and genomic data into low-dimensional subspaces and then use standard machine learning (ML) tools in these low-dimensional spaces to predict the efficacy of the molecules in particular cell lines. We utilize the large-scale drug repurposing and oncology datasets from the PRISM project at the Broad Institute, which provide a wealth of drug repurposing and oncology data, enabling robust training of ML models. We show that these vector embeddings are superior to existing methods, as they enable more accurate drug response predictions. The first part of this thesis is dedicated to development of a deep learning cancer drug discovery model, focused on in silico screening of chemical space to search for cancer drug candidates. 
The second part is focused on development of a precision oncology model, based on a multichannel neural network architecture. Our pipeline involves training single-target models on drug molecular structures, followed by integrating genomic data to enhance biological context and train a hybrid model capable of predicting drug response for novel drug:target pairs. Our results demonstrate that vector embeddings produced by the proposed framework outperform existing approaches, offering a more accurate and efficient means of exploring chemical space. This work highlights the transformative potential of ML/DL methods in drug discovery, enabling targeted, cost-effective exploration of chemical libraries, and advancing the development of precision oncology treatments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158915</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermally Hardened RF GaN HEMTs in Extreme Environments</title>
<link>https://hdl.handle.net/1721.1/158912</link>
<description>Thermally Hardened RF GaN HEMTs in Extreme Environments
Niroula, John
Traditional, room-temperature electronics based on silicon has truly changed the world around us over the past 70+ years. However, many more applications still exist that are limited by the temperature performance of silicon devices (&lt;250°C). This area of high temperature (HT) electronics is a rapidly growing field with critical future applications in geothermal energy, space exploration, hypersonic aircraft, and deep gas/oil drilling, among others. Gallium Nitride (GaN) high electron mobility transistors (HEMTs) are especially well suited for high temperature electronic applications due to their low intrinsic carrier concentration and excellent electrical properties. Despite great progress in HT GaN technology, most demonstrations target logic or mixed-signal applications, and the performance of radio-frequency (RF) GaN devices remains lacking at high temperatures despite the critical need for wireless communication systems and high-speed electronics for these high-temperature applications. In this thesis, we investigate the physics of GaN HEMT devices at high temperatures and design RF transistors that demonstrate record performance at these temperatures.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158912</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Rich Personalized Causal Inference</title>
<link>https://hdl.handle.net/1721.1/158911</link>
<description>Data-Rich Personalized Causal Inference
Shah, Abhin
There is a growing interest in individual-level causal questions to enable personalized decision-making. For example, what happens to a particular patient’s health if we prescribe a drug to them, or what happens to a particular consumer’s behavior if we recommend a product to them? Conducting large-scale randomized experiments to answer such questions is impractical—if not infeasible—due to cost, the level of personalization, or ethical concerns. Observational data offer a valuable alternative, but their lack of explicit randomization makes statistical analysis particularly challenging. In this thesis, we exploit the richness of modern observational data to develop methods for personalized causal inference. In the first part, we introduce a framework for causal inference using exponential family modeling. In particular, we reduce answering causal questions to learning an exponential family from a single sample. En route, we introduce a computationally tractable alternative to maximum likelihood estimation for learning exponential families. In the second part, we leverage ideas from doubly robust estimation to enable causal inference with black-box matrix completion under a latent factor model.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158911</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coarse Modality</title>
<link>https://hdl.handle.net/1721.1/158905</link>
<description>Coarse Modality
Flor, Enrico
One of the early successes of the application of possible worlds semantics to the analysis of natural language is Kratzer’s account of modality. A large part of the subsequent literature on modals has sought to expand the crosslinguistic coverage of that framework, and, in so doing, many new generalizations and constraints have been proposed and re-examined. The present dissertation situates itself within this tradition and makes both an empirical and theoretical contribution. Using the Italian adverb magari as the main empirical source, it will be argued that there exists a previously unnoticed type of modality which is referred to here as “coarse”. Its most evident manifestation is a special type of epistemic possibility, one that comes with an “antievidential” requirement. Antievidential possibility in assertions and questions is discussed in Chapters 1 and 3 respectively. Chapter 2 frames coarse modality as a more general phenomenon that comes about through modification of modal expressions. The theoretical argument of this dissertation is a novel corroboration of Kratzer’s premise semantics approach. It will be argued that the most natural and general account of coarse modality is possible by utilizing the premise set, a powerful resource of the system, in a novel way.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158905</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Interpretation and Management for an Atmospheric Probe Mission to Venus</title>
<link>https://hdl.handle.net/1721.1/158902</link>
<description>Data Interpretation and Management for an Atmospheric Probe Mission to Venus
Apodaca Moreno, Maria Regina
After nearly 40 years without a dedicated U.S. mission to Venus, the Rocket Lab Mission to Venus is planning to launch a small probe to analyze the composition of Venus’ cloud layers. As the probe descends through the atmosphere, it will spend around five minutes in the cloud deck, from 66 km to 48 km above the surface, and roughly 20 minutes total in the atmosphere [French et al., 2022]. The probe’s primary scientific instrument, the Autofluorescence Nephelometer (AFN), will gather data by measuring the light scattering off particles, providing insight into their chemical composition based on refractive index and particle size [Baumgardner et al., 2022]. Unfortunately, Mie scattering theory [Mie, 1908], the physics underpinning the AFN, holds that light scattering for a small solid angle is fundamentally degenerate: different combinations of refractive index and particle size can lead to identical light scattering. This degeneracy limits scientists’ ability to uniquely determine physical parameters of interest, leading some previous authors to rely upon helpful, but perhaps limiting, assumptions that mitigate this degeneracy. Complicating matters still further, the probe’s communication with Earth is subject to a strict data budget, limiting the number of AFN measurements that may be used for analysis to begin with. This thesis addresses two important problems associated with the Rocket Lab Mission to Venus: 1) how to mitigate the light scattering degeneracy with minimal assumptions and 2) how to transmit valuable information within the limited data budget. To address the first problem, I introduce a data retrieval algorithm, based upon Bayesian statistical inference [Lindley, 1965], which combines a physical model of the instrument and a prior probability distribution describing each physical property.
In some cases, this method can estimate the correct particle size and refractive index of a particle as the maximum likelihood value from a single measurement, even as it relaxes certain assumptions that were previously standard in the field, such as a small refractive index range. Using my data retrieval algorithm, I reanalyze the data collected by the Pioneer MultiProbe Mission to Venus’ nephelometers without the need for supplementary data from a different instrument [Ragent and Blamont, 1980]. I also provide new insight into the particle size and refractive index distributions seen by the Pioneer Mission’s small probes, which had not been possible with previous techniques. To address the second problem, I propose a data strategy for limited data missions like the Rocket Lab Mission to Venus. The method developed in this work relies upon Gaussian Mixture Models, which can efficiently represent multiple measurements as
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158902</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Atmospheric and Oceanic Drivers of Atlantic Multidecadal Variability and Predictability</title>
<link>https://hdl.handle.net/1721.1/158897</link>
<description>Investigating the Atmospheric and Oceanic Drivers of Atlantic Multidecadal Variability and Predictability
Liu, Glenn Yu-zu
Despite its numerous impacts across the Earth system, the relative importance of ocean and atmospheric dynamics in generating Atlantic Multidecadal Variability (AMV) remains an open question. This thesis presents three pathways to understanding how oceanic and atmospheric processes generate key spatio-temporal signatures of AMV through a combination of process-based and data-driven approaches. Part 1 (Chapter 2) takes a "bottom-up" approach, building a hierarchy of stochastic models to identify the contributions of vertical entrainment and seasonality in local upper-ocean processes to sea surface temperature (SST) variability. Through this hierarchy, I highlight unrealistic features present in slab ocean models widely used to isolate atmospheric contributions to AMV. On the opposite end of the spectrum, Part 2 (Chapter 3) utilizes a "top-down" data-driven approach where deep neural networks are trained to predict the North Atlantic SST Index in both the Community Earth System Model 1 Large Ensemble (CESM1) and observation-based datasets using atmospheric and oceanic predictors. I apply explainable artificial intelligence techniques to highlight a significant source of multidecadal predictability over the Transition Zone in oceanic predictors such as sea surface salinity (SSS) and sea surface height in the presence of external forcings. Part 3 (Chapter 4) returns to the process-based hierarchy, but applies it to understanding SSS variability. The stochastic salinity model is used to investigate the role of mixed-layer re-emergence, subsurface ocean damping, and SST-evaporation feedback in shaping the pattern and amplitude of AMV.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158897</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Information Sharing for Satellite Navigation and Coordination</title>
<link>https://hdl.handle.net/1721.1/158894</link>
<description>Leveraging Information Sharing for Satellite Navigation and Coordination
Dolan, Sydney
As the number of objects in orbit grows, so does the risk of collisions. The sheer volume of collision warning messages far exceeds the capacity of human analysts, placing a significant burden on satellite operators and underscoring the need for autonomous, decentralized traffic management. Unlike centralized conjunction analysis, decentralized space traffic management distributes coordination across multiple independent nodes, allowing satellites to collaborate directly. This approach could enhance the resilience, speed, and international cooperation of space operations, helping to manage the space environment. For decentralized space traffic management to be viable, satellites must possess an accurate understanding of both the locations and intentions of other satellites. While satellites have precise knowledge of their own state, this accuracy diminishes when predicting the state of others. This gap is due to the limitations of onboard measurement systems and incomplete knowledge of each satellite’s structure, configuration, and maneuverability. Such differences motivate the exploration of information sharing between operators to improve coordination. Sharing information could benefit both individual operators and the broader space community by enabling more accurate trajectory predictions, facilitating formal maneuver negotiations, and enhancing overall orbital safety and efficiency. The main contribution of this thesis is to develop methods for autonomous satellite decision-making. By advancing the state of satellite autonomy, we can enhance high-level decision-making processes, enabling more adaptive and intelligent satellite coordination. This thesis begins by developing a multi-agent reinforcement learning environment to simulate satellite interactions in complex, high-dimensional settings. Then, we relax the assumption of synchronous communication and explore an alternative learning framework that relies on asynchronous communication between satellites.
Our final contribution lies in a game-theoretic model of operator behavior in non-cooperative settings. Space is a competitive environment, and willingness to collaborate is mixed. As a result, we use game theory to derive strategies for maneuver selection and timing.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158894</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Max-Stable Processes,  Measure Transport &amp; Conditional Sampling</title>
<link>https://hdl.handle.net/1721.1/158893</link>
<description>Max-Stable Processes,  Measure Transport &amp; Conditional Sampling
Konomis, Dimitris C.
The modeling of extremes, known as extreme value theory (EVT), aims to understand events characterized by extreme deviations from the mean of a probability distribution. These events are significant in fields such as finance, environmental science, engineering, and insurance. EVT seeks to predict the occurrence and impact of these events, which often have severe consequences. Applications of EVT include modeling extreme market movements in finance, natural disasters in environmental sciences, structural reliability in engineering, and catastrophic event risk management in insurance. Conditional sampling and simulation methods, such as normalizing flows and measure transport, are crucial for estimating extremes at unmonitored sites or under specific conditions, thereby improving our understanding and risk management strategies. The goal of this thesis is to make significant contributions to both extreme value theory and measure transport, as well as to establish a link between the two. First, we develop new Markov chain Monte Carlo algorithms for conditional sampling of max-stable processes. Next, we create models that incorporate physical laws, encoded by partial differential equations, to extend max-stable processes into regions without observations. Third, we design specialized transport map frameworks for distributions with bounded support, enabling accurate and efficient sampling and inference. Finally, we use transport maps parameterized by neural networks to learn and condition the distributions of shortest path statistics in polymer systems, accelerating the prediction of microstructural evolution under various conditions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158893</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical investigations of vortex dynamics: bursting, twist waves, and sensitivity analysis</title>
<link>https://hdl.handle.net/1721.1/158892</link>
<description>Numerical investigations of vortex dynamics: bursting, twist waves, and sensitivity analysis
Ji, Lingbo
Vortical structures are ubiquitous in real-world fluid flows, from the vortices generated by swimming fish to the wakes of aircraft and propellers. They form the backbone of high Reynolds number turbulent flows. Their dynamics are governed by non-linear processes, leading to a range of vortical instabilities that significantly influence engineering applications. Despite decades of research, many questions remain about core mechanisms responsible for the dynamic evolution of vortical structures due to the nonlinearity and complexity of flows at high Reynolds numbers. A particular scenario that lacks systematic investigation is vortices with initial core-size variations, which leads to the phenomena of twist wave propagation and vortex bursting. In this thesis, we first examine straight vortex tubes with initial core-size perturbations at high Reynolds numbers by performing high-fidelity numerical simulations. The differential rotation along the vortex tubes generates twist wave packets that propagate and collide, resulting in a sudden increase in the local core size – the phenomenon of bursting. We analyze the effects of perturbation amplitudes on the detailed evolution at each stage, including the underlying mechanisms for the growth and decay of the bursting structure. The bursting process is associated with significant energy dissipation, which is quantified and compared to that of unperturbed vortex tubes. Meanwhile, vortices in real fluid flows are often nonrectilinear and experience strain from environmental or self-induced effects. We extend our study to curved vortex tubes and investigate the impact of centerline non-rectilinearity on twist wave propagation and the stability of the bursting structure. Additionally, we adopt a relatively recent geometric perspective on vortical flows and analyze the helicity dynamics during the flow evolution. 
To systematically initialize vortex dynamics simulations based on a late-time or time-averaged flow metric, we explore different methods for sensitivity analysis of two-dimensional vortical flows. The sensitivity values obtained are then used in gradient-based optimizations, which show promising pathways for control and optimization of vortical flow applications. Additionally, we present a numerical study of the locomotion of a rotating cylinder pair with periodic gaits in a low Reynolds number flow. We characterize the motion pattern and efficiency of the cylinder pair through a combination of theoretical arguments and numerical simulations, which provides a foundation for potential engineering applications at the microscale. Overall, our findings provide an understanding of the fundamental mechanisms for vortex bursting and associated twist wave dynamics at high Reynolds numbers, explorations of sensitivity analysis for vortical flow applications, along with insights into locomotion at low Reynolds numbers.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158892</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the nature and measurement of variational bias: a developmental perspective</title>
<link>https://hdl.handle.net/1721.1/158887</link>
<description>On the nature and measurement of variational bias: a developmental perspective
Cai, Haoran
Natural selection cannot work with imaginary phenotypes, only those realized by developmental systems. The observed diversity of life on Earth occupies only a subset of conceivable forms in the absence of selection. This is because of the non-linear and discrete nature of genotype-to-phenotype maps as an outcome of the developmental system. Despite that, it is widely accepted in population and quantitative genetic modeling that the phenotypic production from random mutations is isotropic and uniform. Conventional methods linking genetic variants and phenotypic variation often assume that the origin of phenotypic variation is purely due to genetic and environmental factors. Here, in this thesis, I adopt a developmental causation view, which proposes that patterns of variation may emerge as an inherent consequence guided by physico-chemical principles and that this part of nature cannot be fully reduced to genetic factors. The distribution of phenotypic variants that arise from genetic and environmental variation is influenced by the developmental processes that transform the embryonic phenotype into the adult form. This developmental process is subject to constraints that stem from the structure, character, composition, or dynamics of development. We term such a constraint developmental bias. Despite the prevalence of developmental bias, detecting and testing its role remains a challenge. To address this gap, in the thesis, I propose frameworks and showcase examples aimed at identifying developmental bias and testing its implications in shaping phenotypic evolution. Specifically, I answer three questions: (1) How does the central component of the nonlinear genotype-to-phenotype map --- transcriptional regulation --- bias the analyses of gene-gene interactions? (2) How to disentangle the contribution of developmental bias in trait-trait interdependencies? (3) How does expression variability affect gene retention and gene expression evolution following gene and genome duplication?
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158887</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems-Theoretic Approach to Design of Early Concepts for Novel, Complex Systems in Aerospace</title>
<link>https://hdl.handle.net/1721.1/158885</link>
<description>A Systems-Theoretic Approach to Design of Early Concepts for Novel, Complex Systems in Aerospace
Hillman, Alexander P.
The complexity of engineered systems has grown exponentially over the last forty years. One of the main challenges in modern engineering is managing this complexity, particularly as the pace of technological change continues to accelerate across industries. Traditional approaches to generating early concepts for novel, aerospace control-oriented systems typically employ a design-first approach, ignoring critical steps required to truly understand the intent and context of a new system. This tendency also leads to a focus on low-level, highly granular design activities that seek to integrate advanced technologies for technology’s sake. Unfortunately, today’s applied early concept generation methods do not facilitate the effective generation of early system concepts and an initial high-level design for aerospace control systems. To address these shortcomings, this thesis proposes a systematic and rigorous framework to generate early system concepts using Systems-Theoretic Accident Model and Processes (STAMP) principles and a new lens to examine system intent for a novel, complex system. This work also introduces a new level of abstraction for a portfolio-of-systems context and a method to propose an initial design artifact for new systems that is both architecture-agnostic and relevant during the earliest system engineering lifecycle activities. This method, Systems-Theoretic Concept Design, uses a top-down, three-phased approach to conduct mission analysis and determine the intent for a new system within a specific portfolio-level context. Upon building this intent model, the method enables the synchronization of stakeholder mental models through the use of transformation models built using the principles of Systems Theory. Finally, in its last phase, this early design concept generation framework delivers an initial design artifact that is technology- and requirements-agnostic in the language of Systems Theory using the semantics of STAMP.
This initial design artifact is in the form of the Portfolio-of-Systems control structure, a control structure that frames a portfolio’s desired high-level capability as a control problem at a new level of abstraction while enabling analysis and examination of complex interactions across systems that may operate asynchronously or in geographically separated operating environments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158885</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational-Experimental Process Development for Laser Powder Bed Fusion Additive Manufacturing</title>
<link>https://hdl.handle.net/1721.1/158884</link>
<description>Computational-Experimental Process Development for Laser Powder Bed Fusion Additive Manufacturing
Weißbach, Reimar
Laser powder bed fusion (LPBF) additive manufacturing (AM) is instrumental for advances in high-value industries such as aerospace and medical devices. However, widespread adoption is still held back, in part due to challenges with powder handling, identification of process parameters, part qualification and quality control, and low build rates that lead to high part costs. This thesis presents workflows, tools, and understanding for practitioners and researchers seeking to address these challenges, in particular (i) powder spreading, (ii) parameter selection, and (iii) build rate improvement. Cohesive powders (D50 ≤ 20 &#120583;&#119898;) are challenging to spread and therefore not commonly used in LPBF, but promise more stable conditions during laser melting and potentially allow for finer geometrical resolution. Various spreading strategies are explored using an integrated discrete element-finite element (DEM-FEM) framework and a schematic process window for counter-rotational roller spreading is proposed. A new strategy of spreading with a transversely oscillating tool is chosen for experimental implementation and validated using a custom-built mechanized powder spreading testbed. Powder layers are analyzed using X-ray transmission imaging and layer quality is statistically correlated to kinematic spreading parameters. A methodology for performing melt track experiments using high-precision metal templates as well as a machine learning-based automated image analysis tool is presented and applied to melt track scaling studies. Based on single track parameter studies with layer thicknesses and laser spot sizes of up to 600 &#120583;&#119898;, a dimensionless LPBF process window using the normalized enthalpy Δ&#119867; / ℎₛ as well as the Fourier number is developed. A workflow for rapid LPBF build parameter selection is proposed and shown to fabricate nearly fully dense parts (up to 99.99 %) on the first attempt.
Build rate scaling analysis reveals the trade-off between laser spot size and laser scan speed given laser power limitations. Further, LPBF with a standard powder (15 − 45 &#120583;&#119898;) is compared to a fine powder (0 − 25 &#120583;&#119898;) under similar processing conditions. The fine powder exhibits superior melt track stability and continuity, as well as significantly increased melt track cross-sectional area, allowing build rate to be increased by almost 20 %. Finally, to enable better understanding of the underlying thermo-fluid dynamics of the melt pool, an approach for computational model parameter estimation using Bayesian inference is presented and applied to the important model parameter of laser absorptivity. This is within the context of a Smoothed Particle Hydrodynamics (SPH) computational melt pool model, developed collaboratively by researchers at the Technical University of Munich. The diffuse interface approach employed in SPH is validated using a discretization refinement study, showing the sensitivity of physical phenomena characteristic of LPBF, such as the vapor-induced recoil pressure, to computational hyper-parameters. These combined contributions enhance both practical implementation and theoretical understanding of LPBF, ultimately advancing the field of additive manufacturing towards more cost-effective and higher quality LPBF processes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158884</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Thermo-Chemo-Mechanics Framework for the Large-Scale Simulation of Material and Structural Failure in Hypersonic Environments</title>
<link>https://hdl.handle.net/1721.1/158883</link>
<description>A Computational Thermo-Chemo-Mechanics Framework for the Large-Scale Simulation of Material and Structural Failure in Hypersonic Environments
Pickard, Daniel N.
Materials and structures subjected to the extreme conditions of hypersonic flight undergo complex degradation and fracture processes. This thesis presents a theoretical formulation and a computational framework that enables the large-scale simulation of thermochemically fracturing solids exhibiting complex post-fracture interface response. The continuum theory is based on a general thermodynamically-consistent description of the coupled multiphysics problem, and the numerical formulation extends the scalable discontinuous Galerkin (DG)/Cohesive Zone Modeling paradigm to thermo-chemo-fracture mechanics. The approach is distinguished by its unified DG treatment of the coupled problems, which facilitates the analysis of fracture propagation, fracture-dependent heat and mass transfer as well as thermally-activated solid-phase chemical reactions. The framework is verified against two analytical solutions of boundary value problems drawn from thermo-poro-elasticity and thermally-driven delamination. Three-dimensional simulations of a benchmark thermochemically-driven fracture problem illustrate the parallel scalability of the fully-coupled computational framework. We utilize this framework to render models of passive oxidation-induced fracture in ultrahigh temperature ceramics computationally tractable. First, a rigorous constitutive theory is shown to capture the molecular diffusion of oxidant through the reaction product layer using only fundamental transport properties, i.e. without the need for calibration to reaction experiments. The physical processes observed on the diminutive scale of an oxide layer are explicitly resolved, but the approach is limited to microscale analyses by scale separation. We sidestep this limitation by specializing the general theory under specific phenomenological assumptions, thereby yielding a practical model that can reproduce oxidation experiments.
We use this specialized model to analyze oxidation-induced swelling, fracture and delamination in SiC/coating systems, and unveil the coupled thermochemical response as well as fracture morphologies in the vicinity of critical flaws. Then, we conduct a parametric study of three-dimensional coatings that exposes the channeling mechanisms above penny-shaped delaminations of various sizes. The computational analyses identify a transition from decussating to circumferential channel cracking that explains the wide variety of surface channel cracks observed in experiment. The physical mechanisms and fracture morphology regimes are corroborated by a simple structural theory. Finally, cohesive fracture models, splitting methods and thermal solvers are developed specifically for applications to thermally shocked ceramics. Simple and rigorous calibration procedures are proposed which facilitate the direct analysis of fragmentation and comminution in brittle solids subjected to extreme advective heat transfer. The presented examples serve as evidence that the framework can successfully enable three-dimensional, thermochemically-coupled fracture analyses of unprecedented physical fidelity, which furnish new insights into complex hypersonic thermal protection system response.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158883</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the dynamics and interparticle forces of electrostatically stabilized colloidal suspensions</title>
<link>https://hdl.handle.net/1721.1/158882</link>
<description>On the dynamics and interparticle forces of electrostatically stabilized colloidal suspensions
Krucker-Velasquez, Emily
In a broad spectrum of industrial and biomedical applications, the equilibrium and dynamic properties of colloidal suspensions play a pivotal role, with systems ranging from simple gold nanoparticles in electrolyte solutions to complex assemblies like micelles, vesicles, nanocapsules, and dendritic polymers. Typically, these systems are approached through the Derjaguin–Landau–Verwey–Overbeek (DLVO) theory and Poisson-Boltzmann models, frameworks that approximate charged particles as point charges to predict interparticle interactions. While these frameworks have been instrumental for low-concentration, idealized systems, they fall short in capturing critical behaviors in more concentrated regimes. In such environments, overlooked phenomena—such as excluded volume effects and ion-ion correlations—become essential in shaping the colloidal system’s equilibrium and dynamics. By leveraging advanced computational techniques, we systematically interrogate these mesoscale interactions, offering insights that extend beyond the traditional paradigms of mean-field theory and enhance our understanding of colloidal behavior in complex environments. The first part of this work presents the development of efficient algorithms that significantly advance the computational speed of induced polarization calculations within Brownian Dynamics simulations of polarizable colloidal particles. By establishing a new benchmark in simulation methodologies, these algorithms lay the groundwork for exploring complex soft matter systems, enabling deeper insights into the dynamic and equilibrium properties of colloidal suspensions beyond the limitations of conventional theories. Together, these advancements provide a robust computational framework for examining mesoscale interactions in concentrated colloidal systems, where ion correlations, finite ion volumes, and thermal fluctuations critically influence behavior.
The next part of this work focuses on the study of equilibrium properties of charged soft matter systems in crowded environments through the implementation of robust computational techniques. We meticulously examine charge-density correlations and clustering behaviors that arise due to the complex electrostatic interactions between colloidal particles. At high ion concentrations, the system undergoes distinct structural transitions that are modulated by the ionic strength and specific particle characteristics. These transitions are characterized by emergent patterns in the spatial distribution of charges, forming structured clusters that reflect the balance between electrostatic and entropic forces. We further our studies by computing the potential of mean force (PMF) between metallic nanoparticles, a measure of the effective interaction potential that inherently captures how particles interact across various separation distances in an electrolyte. The PMF analysis reveals oscillatory behavior in particle interactions at different concentrations. Our study delivers robust free energy profiles, enabling a more nuanced understanding of the electrostatic forces at play in dense colloidal suspensions. These insights shed light on the mechanisms of charge screening and packing within high-density systems. The final part of this thesis focuses on the study of the non-linear transport properties of concentrated macroions to external electric fields, revealing intricate dependencies on both ionic structure and external electric fields. Our studies reveal how conductivity is modulated by charge density correlations and field strength. A notable disruption of local ionic atmospheres was observed with increasing field strengths, which in turn accelerates ion mobility and significantly alters the transport properties. 
We further advance the investigation into the dynamic response of concentrated macroions and electrolytes by examining their behavior under time-varying electric fields. Through simulations involving frequency sweeps and chirp signals, we discerned that the dynamic response of these concentrated charged soft matter systems is best understood through the lens of two distinct transport regimes—characterized by short- and long-time responses. This bifurcation enables the introduction of a relaxation time scale that captures the intricate coupling between ionic correlations and the macroscopic system response, highlighting the pivotal role of excluded volume effects in densely populated environments. The study provides a detailed framework for manipulating ion transport in concentrated electrolytes and macroions, paving the way for innovations in fields reliant on precise control of electrostatic conditions and ionic mobility.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158882</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection and Localization of Pressure Transients in Water Distribution Systems</title>
<link>https://hdl.handle.net/1721.1/158880</link>
<description>Detection and Localization of Pressure Transients in Water Distribution Systems
Liu, Shiqing
Water distribution systems are critical to urban water supply, but as they age they become increasingly vulnerable to bursts and leaks, leading to significant economic, social, and environmental consequences. The complexity and inaccessibility of underground pipelines present substantial challenges for their maintenance. As a result, the development of real-time monitoring for these systems is essential to reduce water waste and minimize adverse impacts on consumers and surrounding infrastructures. This thesis investigates the effectiveness of continuous pressure monitoring systems in detecting pipe bursts and transient events within water distribution systems. Using PTSNet, a parallel transient simulation Python package, we simulate pipe burst events at each node in a real-world system and examine the pressure-time response at all other nodes. By adding Gaussian noise to the simulation results to mimic real-world background noise, we assess the detection success of pressure signals at each node using a modified CUSUM algorithm. The correlations between detection success and three spatial metrics between the source and sensor are calculated. We show that a spatial metric, the effective number of magnitude-changing junctions along the fastest path (NJFP), has a stronger correlation with detection success than the shortest travel path or the shortest distance. By comparing detection performance for networks with differing topologies (gridded, looped, and branched) and pipe characteristics, we discover that multiple shortest paths (MSP; where pressure waves from different paths arrive almost simultaneously at the sensor) amplify the signal due to transient interference and enhance the detectability of transients. This effect is particularly pronounced in gridded networks. 
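A baseline two-sided CUSUM detector of the kind used here can be sketched as follows (illustrative only: the thesis uses a modified variant, and the reference values, thresholds, and synthetic burst are arbitrary demo choices):

```python
import numpy as np

def cusum_detect(signal, mean, sigma, k=0.5, h=8.0):
    """Two-sided CUSUM change detector. Returns the index of the first
    alarm, or None. k (allowance) and h (threshold) are expressed in
    units of the noise standard deviation and must be tuned."""
    z = (signal - mean) / sigma
    s_hi = s_lo = 0.0
    for i, zi in enumerate(z):
        s_hi = max(0.0, s_hi + zi - k)   # accumulates upward shifts
        s_lo = max(0.0, s_lo - zi - k)   # accumulates downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

rng = np.random.default_rng(1)
pressure = 50.0 + 0.1 * rng.standard_normal(500)   # baseline + Gaussian noise
pressure[300:] -= 1.0                               # sudden drop from a simulated burst
alarm = cusum_detect(pressure, mean=50.0, sigma=0.1)
```

The cumulative sums stay near zero under pure noise and grow rapidly once a sustained pressure shift arrives, so the alarm index approximates the transient arrival time.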
We investigate the capability of a network of fixed monitoring stations to achieve unique localization of pressure transient events using a time-reversal back-propagation algorithm. This algorithm identifies the event source by matching the theoretical and detected arrival time differences at the sensors. A novel time-differences space is constructed, representing the independent shortest time differences from locations along all the pipes to the sensors, based on network information and sensor locations. Pipe sections with unique shortest time differences are identified as uniquely localizable pipes. Effective-NJFP-based probabilities of transient detection with accurate arrival times (error &lt; 0.1s) are derived from these simulation results. The localization performance of the sensor network is evaluated by the probability-weighted total lengths of the pipes that can be uniquely localized.&#13;
We consider sensor placement strategies aimed at maximizing the detection and localization performance of pressure monitoring sensor networks. Detection performance is defined as the total weighted pipe lengths in the network, where the weight of each pipe corresponds to its detection probability. Two problems are addressed. First, to maximize transient event detection performance when only a limited number of sensors is available, we formulate a mixed-integer programming (MIP) optimization model and employ a genetic algorithm to find solutions. The second problem involves determining the minimum number of sensors and their optimal locations to detect transient events across the entire network without a constraint on the number of available sensors. This is formulated as a minimum set cover problem, and an optimal solution is obtained using a mixed-integer linear programming solver. We then turn to maximizing transient localization performance with a limited number of sensors. A genetic algorithm is applied to determine sensor locations, and the solutions obtained by this method provide significantly better localization performance than other approaches. We show differences in sensor placements for detection and localization: sensors are more evenly distributed throughout the network for detection purposes, while for localization, they are more concentrated in areas with longer pipes and simpler network structures. Finally, we present an analysis of two pressure monitoring datasets collected from a real-world water distribution system (SLG network). The first dataset consists of data from 28 sensors with a 100 Hz sampling frequency, collected over 7 to 30 days. We propose a method to identify and analyze noise levels and distributions at each sensor. Using a modified CUSUM algorithm, we detect transients and correlate them across sensors to identify events detected by multiple sensors. 
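The minimum-set-cover view of sensor placement can be sketched with the classic greedy approximation (illustrative toy data; the thesis obtains exact solutions with a mixed-integer linear programming solver):

```python
# Greedy approximation to minimum set cover for sensor placement: each
# candidate sensor location "covers" the set of pipes whose transients
# it can detect; we repeatedly pick the location covering the most
# still-uncovered pipes. (Pipe ids and coverage sets are made up.)

def greedy_set_cover(universe, candidates):
    """candidates: dict mapping sensor location -> set of covered pipe ids."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

pipes = {1, 2, 3, 4, 5, 6}
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}
sensors = greedy_set_cover(pipes, coverage)
```

The greedy rule gives a ln(n)-factor approximation guarantee; an exact MILP formulation replaces the loop with binary selection variables and coverage constraints.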
A transient-magnitude-based clustering method is then employed to group events based on their magnitudes, followed by a localization approach that utilizes the arrival time differences of transients between sensors. The findings indicate that noise levels in real-world monitoring data vary both spatially and temporally and are not independently normally distributed. Additionally, the arrival times detected by the modified CUSUM algorithm may not always accurately reflect the true transient arrival times due to mismatches between the signal characteristics and the tuning of model parameters. Accurate identification of transient arrival times is particularly challenging for slowly changing pressure wave fronts. The second dataset includes pressure monitoring data from 7 sensors, during which 14 active transients with known source locations, times, and magnitudes were generated. We apply the modified CUSUM algorithm to detect transients at the sensors and correlate detection success with spatial metrics. The analysis confirms that the effective NJFP has the highest correlation with detection success, consistent with the simulation results. Additionally, the transient magnitude ratios between sensors and the source are found to be similar to the ratios calculated based on theoretical transmission coefficients when the source and sensor are in close proximity, suggesting that transmission coefficients can be used to estimate transient magnitudes in real networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158880</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observations of Surfzone Vorticity Using Optical Remote Sensing</title>
<link>https://hdl.handle.net/1721.1/158879</link>
<description>Observations of Surfzone Vorticity Using Optical Remote Sensing
Dooley, Ciara Jaya
The surfzone is the dynamic interface between the land and ocean, where waves shoal and break as they reach shallow water near the shore. Currents and circulation patterns in the surfzone transport sediment, nutrients, pollutants, and other materials along and across the coast, and can create hazardous conditions for swimmers (rip currents). However, understanding of the strength and structure of eddies and vortices in the flow field primarily remains limited to numerical models and theory. Here, novel observations of surfzone vorticity at small [O(10m)] and large [O(100m)] spatial scales are presented and related to incident wave conditions and the measured underlying bathymetry. Field experiments were conducted at a sandy beach on the Atlantic Ocean, and nearshore flows were observed using optical remote sensing (coastal imaging) and in situ sensors. Remote sensing algorithms are expanded from previous applications to estimate high spatial resolution two-dimensional surface flows by tracking the motion of naturally occurring foam throughout the surfzone. Estimated currents are correlated with in situ flow measurements, and errors increase as the sea-surface viewing angle becomes more oblique and image quality decreases. Large spatial-scale vorticity estimated using remotely sensed flows increases with alongshore bathymetric inhomogeneity, and complex circulation patterns corresponding to holes and channels in the seafloor persist for days at a time. Small spatial-scale vorticity estimated from a 5-m diameter ring of 14 current meters increases with the directional spread of the incident wave field, consistent with increased vorticity injection from the crest-ends of breaking waves. Small spatial-scale vorticity estimated using remotely sensed flows is spatially variable and correlated with the amount of wave breaking observed at a given location. 
Enhanced vorticity at large and small spatial scales occurs in the inner surfzone, and virtual drifters released into the remotely sensed flow fields demonstrate cross-shore variability in dispersion and mixing. This thesis expands the understanding of vorticity dynamics in the surfzone through unique field observations and provides new tools for coastal research and monitoring through development of remote sensing techniques.
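Estimating vorticity from a gridded two-dimensional surface flow field, as done with the remotely sensed currents, reduces to finite differences of the velocity components; a minimal sketch on a synthetic solid-body rotation (illustrative grid spacing and rotation rate):

```python
import numpy as np

# Vertical vorticity from a gridded 2D flow field (u, v):
#   omega = dv/dx - du/dy
# computed with centered finite differences via np.gradient.
dx = dy = 1.0                        # grid spacing, meters (illustrative)
x = np.arange(32) * dx
y = np.arange(32) * dy
X, Y = np.meshgrid(x, y)             # X varies along axis 1, Y along axis 0

# Solid-body rotation about the domain center: vorticity is uniform, 2*Omega.
Omega = 0.1                          # rotation rate, 1/s
u = -Omega * (Y - y.mean())
v = Omega * (X - x.mean())

dv_dx = np.gradient(v, dx, axis=1)
du_dy = np.gradient(u, dy, axis=0)
omega = dv_dx - du_dy
```

For this linear velocity field the finite-difference estimate is exact, which makes solid-body rotation a convenient check before applying the same stencil to noisy foam-tracking velocities.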
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158879</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two's More Fun than One: How the Presence of Multiple Nutrients Changes Microbial Competition and Foraging in Unexpected Ways</title>
<link>https://hdl.handle.net/1721.1/158877</link>
<description>Two's More Fun than One: How the Presence of Multiple Nutrients Changes Microbial Competition and Foraging in Unexpected Ways
Bloxham, Blox Willow
Microbes exist in incredibly diverse environments with many possible resources (i.e. nutrients) to compete and forage for. To make this complex system tractable, ecologists often study microbes in the presence of a single resource in order to predict and explain what happens with multiple resources. But what gets lost when we do this? Are there phenomena that only emerge in the presence of multiple resources? Here, I explore the ecological implications of three phenomena that each require the presence of at least two resources. First, I show that the diauxic lags that occur when a microbe needs to switch between resources after one is depleted can allow ‘fast-switcher’ microbes to coexist with competitors that exclude them in single-resource environments. Then, I derive a rich temporal niche structure that arises from variations in the order in which resources are depleted in ecosystems with a pulsed resource supply and show that these temporal niches reshape community structure, vastly increasing the expected diversity of microbial ecosystems. Finally, I present a novel differential strategy in which a microbe attempting to intercept a moving source of multiple resources can treat one resource as an attractant and the other as a repellent to significantly increase its chances of successfully intercepting the source as compared to just being attracted to the resources released by the source. Each of these phenomena fundamentally requires the presence of at least two resources and reshapes microbial behavior and ecology. Thus, they collectively highlight the need to carefully consider how characterizations from single-resource environments actually combine to determine what happens in multi-resource environments and what new dynamics must be accounted for in such a bottom-up approach. 
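The effect of a diauxic lag can be sketched with a minimal two-resource growth model (an illustrative toy, not the thesis's model; rates, yields, and initial conditions are made up): depletion of the first resource triggers a lag before growth resumes on the second, so a shorter lag pays off whenever the first resource runs out early.

```python
# Toy diauxic growth: biomass n grows exponentially on resource r1, then,
# after a fixed lag triggered by r1's depletion, on resource r2 (yield = 1).
def simulate(lag, dt=0.01, t_max=6.0, mu1=1.0, mu2=0.8):
    n, r1, r2 = 0.01, 1.0, 1.0
    switch_timer = None
    t = 0.0
    while t < t_max:
        if r1 > 0:
            growth = mu1 * n                 # growth on resource 1
            r1 = max(0.0, r1 - growth * dt)
        elif switch_timer is None or switch_timer > 0:
            switch_timer = lag if switch_timer is None else switch_timer - dt
            growth = 0.0                     # diauxic lag: no growth
        elif r2 > 0:
            growth = mu2 * n                 # growth on resource 2 after the lag
            r2 = max(0.0, r2 - growth * dt)
        else:
            growth = 0.0                     # both resources exhausted
        n += growth * dt
        t += dt
    return n

# A 'fast switcher' finishes consuming r2 before the harvest time,
# while a slow switcher is still stuck in its lag.
fast, slow = simulate(lag=0.5), simulate(lag=3.0)
```

In a serially diluted (pulsed-supply) setting, this final-biomass advantage is what lets fast switchers persist against competitors that would exclude them on either resource alone.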
I conclude with an argument that the case of two resources may be particularly relevant to study due to how much complexity can emerge at just the first step up from one resource to two.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158877</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Based Complex Terrain Navigation Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/158876</link>
<description>Learning-Based Complex Terrain Navigation Under Uncertainty
Cai, Xiaoyi
In complex off-road environments, accurately identifying traversable terrain is crucial for achieving fast and reliable navigation. Existing methods learn terrain properties directly from data via self-supervision to automatically penalize trajectories moving through undesirable terrain. However, challenges remain in properly quantifying and mitigating risk due to uncertainty in learned models and improving model generalization in novel environments. To address these challenges, this thesis presents a unified framework to learn uncertainty-aware, physics-informed traversability models and achieve risk-aware navigation in both in-distribution and out-of-distribution terrain. First, the proposed method efficiently quantifies both aleatoric and epistemic uncertainty by learning discrete traversability distributions and probability densities of the traversability predictor’s latent features. Leveraging evidential deep learning, this work parameterizes Dirichlet distributions with network outputs and proposes a novel uncertainty-aware squared Earth Mover’s distance loss with a closed-form expression that enhances learning accuracy and navigation performance. Second, the proposed method achieves risk-aware navigation by simulating state trajectories with the worst-case expected traversability values to handle aleatoric uncertainty and by penalizing trajectories moving through novel terrain with high epistemic uncertainty. Third, the proposed method improves model generalization by embedding physics priors directly into the mathematical formulation of evidential neural networks and implicitly aligning learned models with physics models through a physics-informed training loss. Finally, through extensive simulation and real-world experiments on wheeled and quadruped robots, it is demonstrated that this work leads to faster navigation with higher success rates when compared to existing risk-aware approaches, even in environments with significant distribution shifts.
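The evidential-deep-learning step of parameterizing a Dirichlet with network outputs can be sketched as follows (illustrative only: the evidence vectors are made up, and the uncertainty measure shown is the standard Dirichlet vacuity rather than the thesis's exact formulation):

```python
import numpy as np

# Evidential sketch: a network outputs nonnegative "evidence" e_k per
# discrete traversability bin; Dirichlet parameters are alpha_k = e_k + 1.
# The Dirichlet mean is the predicted distribution (aleatoric part), and
# low total evidence signals high epistemic uncertainty.
def dirichlet_from_evidence(evidence):
    alpha = np.asarray(evidence, dtype=float) + 1.0
    alpha0 = alpha.sum()
    mean = alpha / alpha0            # expected categorical probabilities
    K = len(alpha)
    epistemic = K / alpha0           # vacuity: 1 with no evidence, -> 0 with data
    return mean, epistemic

# Plenty of evidence: confident prediction, low epistemic uncertainty.
mean_in, u_in = dirichlet_from_evidence([90.0, 8.0, 2.0])
# No evidence (novel, out-of-distribution terrain): uniform prediction,
# epistemic uncertainty saturates at 1.
mean_ood, u_ood = dirichlet_from_evidence([0.0, 0.0, 0.0])
```

A risk-aware planner can then penalize trajectories through cells whose vacuity is high, which is the mechanism the abstract describes for avoiding novel terrain.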
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158876</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Methods to Improve Satellite Attitude Determination and Control with a Focus on Autonomy, Generalizability, and Underactuation</title>
<link>https://hdl.handle.net/1721.1/158874</link>
<description>Computational Methods to Improve Satellite Attitude Determination and Control with a Focus on Autonomy, Generalizability, and Underactuation
McKeen, Patrick
The attitude determination and control system (ADCS) onboard a satellite uses sensors to measure orientation and angular velocity, enabling the satellite to manage angular momentum, counteract disturbances, and point in the desired directions. Many historical ADCS approaches are designed for constant pointing goals, high accuracy sensors, powerful actuators, or larger, high-inertia satellites. Many modern satellites are small satellites (tens of kilograms or less), with lower-cost actuators and sensors, and may have more complicated attitude goals. This dissertation presents a variety of computational approaches to improve ADCS performance by leveraging detailed satellite dynamics modeling and estimation, disturbance inclusion, and trajectory planning, all optimized for efficient onboard computation suitable for small satellites. The proposed framework generalizes ADCS operations, allowing it to adapt automatically to different satellite types, mission requirements, and operational goals, reducing reliance on predefined ground-based commands. This framework can be used in place of standard control laws to make ADCS more autonomous and “hands-off,” calculating its own slews and desaturation while meeting pointing goals, even in cases of underactuation or large disturbances. This generalized and autonomous framework is a contribution of this work, alongside each of its components, which can be individually used in their own right. One key component of this work is a generalized state estimator that integrates a dynamic model of the spacecraft. This estimator demonstrates high accuracy across various satellite configurations, achieving angular error as low as 0.01° in low Earth orbit (LEO) with high-quality sensors (but no star trackers), compared to the typical 1° error of conventional methods. 
The estimator can account for biases, sensor errors, and external disturbances, ensuring robust performance (e.g., 0.1° error in LEO) even with lower-quality sensors (MEMS gyroscopes, plus magnetometers and sun sensors). This adaptability highlights the increased autonomy of the system, as it requires minimal human intervention to maintain high accuracy across diverse mission scenarios. Another major contribution is the integration of disturbance modeling into control laws. By accounting for disturbances directly (either individually or as an all-in-one value tracked by the estimator), rather than through reactive measures like integral control, the proposed methods improve stability and performance, particularly for underactuated systems, improving pointing accuracy by up to 20 degrees. The developed control laws are adaptable to various actuator configurations, disturbance environments, and pointing objectives. This flexibility extends to modifying pointing goals, such as aligning specific vectors rather than requiring a fully specified orientation, enhancing mission adaptability. This work also implements a novel trajectory planning method that generates efficient pointing trajectories for both constant and time-varying goals. The method, based on the Augmented Lagrangian iterated-LQR (ALTRO) approach, creates sequential mission trajectories that optimize performance even under underactuation or disturbance conditions. The planned trajectories are followed by two types of robust closed-loop controllers, applicable across satellite architectures ranging from large weather satellites to 3U CubeSats. By enabling onboard trajectory planning and adaptive control adjustments, this method significantly reduces the need for ground-based planning and interventions, further advancing autonomous operation. The combined framework of estimation, disturbance-aware control, and trajectory planning achieves significantly higher accuracy than traditional ADCS approaches. 
This enables the use of commercial off-the-shelf components in high-performance missions, overcoming the limitations of low-cost sensors and actuators. The proposed methods allow satellites to operate with weaker or fewer actuators, such as magnetic-only control, while still achieving precise pointing, thereby expanding the feasibility of more autonomous, robust, and cost-effective satellite operations.
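Magnetic-only actuation can be illustrated with the classic B-dot detumbling law (a standard textbook law shown for context, not the thesis's more general disturbance-aware controllers; field values, gain, and sample time are arbitrary):

```python
import numpy as np

# B-dot law: command a magnetic dipole opposing the change in the
# body-frame magnetic field measurement,
#     m = -k * dB/dt,
# so the magnetic torque tau = m x B drains rotational kinetic energy.
def bdot_dipole(b_now, b_prev, dt, k=1e4):
    b_dot = (b_now - b_prev) / dt        # finite-difference field derivative
    return -k * b_dot

b_prev = np.array([2.0e-5, 0.0, 3.0e-5])     # body-frame field, tesla
b_now = np.array([1.9e-5, 0.4e-5, 3.0e-5])   # field apparently rotated by body spin
m = bdot_dipole(b_now, b_prev, dt=0.1)
tau = np.cross(m, b_now)                      # resulting control torque
```

Because the torque is a cross product with B, it is always perpendicular to the field, which is precisely the underactuation (no torque about the field axis) that the thesis's control laws are designed to work around.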
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158874</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relevance for Human-Robot Collaboration: Definitions, Systems, Algorithms, and Applications</title>
<link>https://hdl.handle.net/1721.1/158873</link>
<description>Relevance for Human-Robot Collaboration: Definitions, Systems, Algorithms, and Applications
Zhang, Xiaotong
Human-Robot Collaboration (HRC) combines the strengths of human and robotic capabilities to accomplish complex tasks, yielding significant impacts in various domains. To enable seamless interaction in dynamic and unpredictable environments, robots are required to efficiently and accurately perceive their surroundings, align reasoning with human cognition, anticipate key attributes, and generate safe, effective actions to support humans proactively. This thesis introduces relevance, a novel concept inspired by human cognition, to improve the efficiency, safety, and intelligence of HRC. Relevance enables robots to prioritize objects based on their importance to human goals, allowing them to concentrate computational resources on key elements. This focused approach reduces input space for essential algorithms, minimizes processing delays, and enhances safety and adaptability in dynamic environments, facilitating more natural and intuitive collaboration with humans. This thesis systematically explores the concept of relevance, introducing a hierarchical model for relevance quantification that combines scene understanding in cluttered environments with an event-based, multi-modality framework, enabling real-time relevance determination based on human objectives, preferences, spatial-temporal relationships, and constraints. A relevance-based perception strategy further directs models to prioritize key areas, reducing computational and inference times, while two new safety metrics—Critical Collision Probability (CCP) and Average Collision Probability (ACP)—quantify reduced collision risks in Human-Robot Collaboration (HRC). Additionally, a relevance-driven framework integrates relevance quantification with dynamic scene understanding and decision-making, achieving high human objective and relevance prediction accuracy. 
An advanced human intention prediction framework using head orientation, object affordance, and hand movement also enhances precision, accuracy, and F1 scores over baseline models. Results demonstrate that relevance quantification significantly reduces task planning time by 79.56% and inquiries by 80.84%, with a real-world coffee-serving demonstration highlighting its potential for proactive, autonomous assistance. Furthermore, the safe motion generation algorithm reduces collision incidents by 63.76% and collision frames by 44.74%, supporting accurate, safe robotic assistance in dynamic environments. The concept of relevance enhances the efficiency, safety, and intelligence of human-robot collaboration (HRC) within dynamic and unpredictable environments, supporting a deeper integration of robotics into diverse real-world applications. Its potential extends beyond HRC, with promising applicability in autonomous driving and other complex domains where adaptive, context-aware decision-making is essential.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158873</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of terrestrial organic carbon export and preservation in the marine environment</title>
<link>https://hdl.handle.net/1721.1/158872</link>
<description>Mechanisms of terrestrial organic carbon export and preservation in the marine environment
Boehman, Brenna L.
Export of terrestrial carbon from land to sea is a globally important carbon flux that is poorly constrained and has implications for atmospheric carbon levels over modern and geologic timescales. Many factors control the fate of exported carbon and the subsequent impact on carbon budgets, including the timescales of export, the composition of organic matter, and degradation processes. This thesis uses biomarkers, bulk geochemical tools, and incubation studies to interrogate the factors controlling terrestrial carbon export and preservation in the marine environment. The thesis focuses on two globally important river systems that collectively deliver 25% of the total terrestrial carbon flux to the ocean, the Ganges-Brahmaputra (G-B) Rivers and the Amazon River. The first two chapters focus on the G-B Rivers. Utilizing compound-specific biomarker analysis of a high-sedimentation-rate (30 cm/yr) terrestrial archive in the Bay of Bengal, we interrogate (i) timescales of organic carbon export from land to sea, and (ii) basin-scale geochemical responses to rice agriculture expansion. These analyses utilize the radiocarbon ages and stable carbon-13 isotopic composition of lipids produced by Archaea and Bacteria. We identify that ca. 75% of these biomarkers experience millennial scale storage in the G-B basin, in agreement with previously assessed plant-derived compounds, highlighting that an overarching soil stabilization mechanism controls the age of exported terrestrial organic matter. Individual biomarkers and bulk geochemical analysis chronicle the change in methane-derived soil carbon within the basin due to rice paddy expansion, highlighting that 49% of Bangladesh’s methane emissions from 1990-2008 have been abated by soil storage. 
The last two chapters focus on the Amazon River to examine the fate of terrestrial organic carbon in the marine environment, (iii) utilizing geochemical analysis of historical sediments and sediments from a field campaign in 2023, and (iv) utilizing terrestrial and marine endmembers in incubation experiments simulating the dynamic coastal environment. Sediment geochemical and biomarker analyses highlight the preservation of an isotopically distinct terrestrial endmember in the coastal sediments, which has led to at least 50% underestimation of the burial efficiency. Quantitative stable isotope probing incubations using 13C-lignin indicate the dual roles of microbial degradation and photo-degradation, highlight that the microbial communities primarily responsible for lignin degradation in the marine environment are of terrestrial origin, and identify a new ecological role for Bathyarchaeota. This thesis integrates diverse biogeochemical techniques across the terrestrial-marine interface to examine important open questions in globally important carbon budgets, merging isotope geochemistry, microbiology and earth science. The findings contribute to our understanding of the modern carbon cycle and the impact of anthropogenic perturbations of the last decades and into the future.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158872</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reproduction, settlement, and phenology of intertidal barnacles: Implications for larval dispersal</title>
<link>https://hdl.handle.net/1721.1/158871</link>
<description>Reproduction, settlement, and phenology of intertidal barnacles: Implications for larval dispersal
Weinstock, Jane B.
Knowledge of the consequences of ocean warming on marine populations and communities is urgent. Warming oceans are predicted to result in changes to the seasonal timing of reproduction and settlement (phenology); faster development rates and, for crustaceans, smaller larvae; reduced larval dispersal distances; and reduced connectivity between coastal populations. However, these predictions are largely based on laboratory and modelling studies, with little observational research to explore how these interactions unfold in natural ecosystems where temperature variability is pervasive. In this thesis, I investigate the links between reproduction and settlement timing of intertidal barnacles, and I explore the extent to which the timing of these events is explained by environmental and astronomical cycles and by water column conditions. In Chapter 2, I assess the cycles driving Chthamalus fissus reproduction and settlement in Southern California, and I offer a first order estimate of alongshore larval transport. I found that barnacles were reproductively active almost year-round, with clear lunar cyclicality and modest seasonality. Conversely, settlement exhibited little cyclicality on any timescale. Chapters 3, 4, and 5 focus on the effects of temperature on Semibalanus balanoides early life history along a steep temperature gradient in the northwest Atlantic over twenty years of warming. In Chapter 3, I investigate the effects of intertidal temperature on reproduction timing, analyzing separately the processes of fertilization, embryonic brooding, and larval release. In Chapter 4, I estimate larval duration in natural populations, and I measure the impact of temperature on larval duration in the laboratory and field. In Chapter 5, I investigate the effects of water temperature on larval size at settlement. 
I found that warmer nearshore temperatures significantly correlated with shorter brooding times of developing embryos, shorter field-estimated larval duration, and smaller larval settlers. Notably, the interplay between benthic reproduction, pelagic development, and temperature variability across space and time created counter-intuitive patterns in larval duration, size, and likely dispersal. Together, these findings point to the importance of reproductive timing in determining dispersal and population connectivity, and they highlight the need for extensive field measurements to quantify phenology and phenology shifts in benthic systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158871</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radium and Mercury Dynamics in the Arctic: Investigating Terrestrial Inputs, Groundwater Discharge, and Chemical Cycling in a Changing Climate</title>
<link>https://hdl.handle.net/1721.1/158870</link>
<description>Radium and Mercury Dynamics in the Arctic: Investigating Terrestrial Inputs, Groundwater Discharge, and Chemical Cycling in a Changing Climate
Bullock, Emma Jacqueline
The Arctic Ocean is distinctive due to its extreme seasonal variations in temperature and significant terrestrial inputs, including freshwater, carbon, nutrients, and toxins. Of particular concern is mercury (Hg) in its neurotoxic form, methylmercury (MeHg), which is already beginning to adversely affect Arctic human populations and wildlife. However, the region’s harsh conditions and remoteness have made conducting seasonal chemical and hydrological studies challenging. Tracers of boundary inputs, such as the radium (Ra) isotope quartet, offer potential for tracking and quantifying riverine and submarine groundwater discharge (SGD) of species like Hg into the Arctic Ocean. This thesis employs seasonal data and laboratory experiments to investigate the factors influencing terrestrial Ra inputs to the Arctic Ocean, quantifies SGD and associated Hg inputs to an Arctic coastal lagoon, and elucidates the chemical and geological factors influencing Hg cycling in Arctic groundwater.&#13;
&#13;
Using historical and unpublished datasets combined with new laboratory investigations, differences in inputs of riverine Ra isotopes between the North American and Eurasian land masses were identified. The findings revealed higher Ra fluxes from the North American continent, attributed to greater sediment loads and lower organic matter in rivers compared to those on the Eurasian land mass. Subsequently, Ra data from five extensive field campaigns to Simpson Lagoon, Alaska, provided insights into Ra cycling on a more localized scale. These campaigns offered the first seasonal perspective on supra-permafrost SGD along an Arctic coastline, suggesting that SGD fluxes may rival those of rivers along the Beaufort Sea coast. Concurrently collected Hg groundwater concentrations allowed for the development of the first estimates of Hg fluxes from groundwater to the Arctic Ocean. If these estimates hold true along the rest of the Pan-Arctic coastline, they could significantly alter our understanding of microbial MeHg uptake in the Arctic Ocean. Finally, sediment cores from Simpson Lagoon and two other locations along the Beaufort Sea coast were used to examine how changing groundwater conditions, such as changing salinity, temperature, and redox conditions, influence Hg cycling. These experiments, alongside findings from Simpson Lagoon groundwater, indicate that Hg cycling in recently thawed permafrost sediments involves a complex interplay between organic material, metal oxides, and sulfide species, with groundwater conditions and soil carbon content playing crucial roles in Hg mobilization.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158870</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an Artificial Neuroscience: Analytics for Language Model Interpretability</title>
<link>https://hdl.handle.net/1721.1/158869</link>
<description>Towards an Artificial Neuroscience: Analytics for Language Model Interpretability
Gurnee, Robert Wesley
The growing deployment of neural language models demands greater understanding of their internal mechanisms. The goal of this thesis is to make progress on understanding the latent computations within large language models (LLMs) to lay the groundwork for monitoring, controlling, and aligning future powerful AI systems. We explore four areas using open source language models: concept encoding across neurons, universality of learned features and components across model initializations, presence of spatial and temporal representations, and basic dynamical systems modeling.&#13;
&#13;
In Chapter 2, we adapt optimal sparse classification methods to neural network probing, allowing us to study how concepts are represented across multiple neurons. This sparse probing technique reveals both monosemantic neurons (dedicated to single concepts) and polysemantic neurons (representing multiple concepts in superposition) in full-scale LLMs, confirming predictions from toy models. In Chapter 3, we identify and exhaustively catalog universal neurons across different model initializations by computing pairwise correlations of neuron activations over large datasets. Our findings show that 1-5% of neurons are universal, often with clear interpretations, and we taxonomize them into distinct neuron families.&#13;
&#13;
To investigate spatial and temporal representations, we analyze LLM activations on carefully curated datasets of real-world entities in Chapter 4. We discover that models learn linear representations of space and time across multiple scales, which are robust to prompting variations and unified across different entity types. We identify individual "space neurons" and "time neurons" that reliably encode spatial and temporal coordinates. In Chapter 5, we use optimal sparse regression techniques to improve the sparse identification of nonlinear dynamics (SINDy) framework, demonstrating improved sample efficiency and support recovery in canonical differential systems. We then leverage this improvement to study the ability of LLMs to in-context learn dynamical systems and find internal representations which track the underlying system state.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158869</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>OPTASAT: An Open-Source, Flexible Software Framework for Small Satellite Operations</title>
<link>https://hdl.handle.net/1721.1/158868</link>
<description>OPTASAT: An Open-Source, Flexible Software Framework for Small Satellite Operations
Murphy III, Thomas Joseph
The unprecedented growth in access to space has created a corresponding growth in the number of spacecraft and the number of people operating spacecraft. This has meant that many of these operators are operating spacecraft for the first time. Gone are the days when the only operators of spacecraft were national governments, militaries, and massive corporations. The operators of small spacecraft today include many early-career individuals who need tools that enable them to make sound decisions about the behavior of their spacecraft. The tools for operating spacecraft are often overlooked by teams focusing on the spacecraft themselves, but these operating tools are critical for mission success. Spacecraft operations tools have not developed in the same low-cost, widespread fashion as the spacecraft themselves. The best tools for modeling and understanding the situation of a satellite in space remain locked behind high barriers to entry, including high cost, long training, and complex interfaces. In the same way that satellites have gone from the size of automobiles to the size of toasters, the software for operating them needs to go from expensive, complicated, high-performing suites to simple, flexible, approachable options that are accessible to the democratized space operators. New spacecraft operations staff need straightforward, direct interfaces which give them the knowledge of where their spacecraft is, where it will be, and what it will be able to do, as well as when the options at their disposal are viable. Operators also need the capability to adjust their software in whatever ways are necessary to tailor it to the particular parameters of their missions, reflecting the incredible variety of spacecraft and missions that exist today. A gap exists in spaceflight software.
Users need software that can perform their mission planning tasks in the short term and inform them of the upcoming parameters of their spacecraft which concern them, whether this is the spacecraft’s location, solar illumination, orientation, or any other property which is relevant to their particular mission. This software must also allow the users to be aware of the expected output of their sensors, especially imaging sensors, such that they may have an understanding of what they are imaging and what it ought to look like. Finally, this software must be open-source, enabling the user to audit the software and make changes to the software to customize it to their preferences, which may differ from anything the original software developer could have imagined. Such spaceflight software does not yet exist. This dissertation develops and presents OPTASAT, the Open-source Python Tool for Awareness of Spacecraft and Analysis of Telemetry, which provides an extensible, modular interface for incorporation of multiple tools which contextualize spacecraft data in a manner which maximizes usefulness for the operators. A priority is visualization of data to facilitate rapid understanding and distillation of the complexity of a spaceflight operation. This software has been released as a fully-featured, open-source software toolkit which performs the mission analysis components deemed most crucial to those who stand to benefit from it. This software is intended to fulfill the needs of small spacecraft missions. Several particular application cases are studied, including an Earth Sensing mission, an Astronomy mission, and modeling communications for a real laser crosslink mission. These case studies are evaluated for their ability to present the relevant information to the operator. For Earth Sensing, this involves displaying information regarding the spacecraft’s location with respect to the Earth, and enabling the selection of ground targets for imaging.
For astronomy, the relevant information concerns the stars visible in the sky, and the spacecraft’s relationship to sources of interference like the Sun and Moon. For the laser crosslink example, we study the operator’s understanding of the spacecraft as it passes over a ground station and determine the operational configurations available for this communication. OPTASAT fills gaps in the field. OPTASAT presents users with a tool which is flexible and intuitive to use for understanding data from spacecraft in a way that is not currently available in the offerings on the market. Additionally, it takes functionality that is currently available in proprietary paid software and makes it available for free, in an open-source offering that is accessible to everyone. OPTASAT will allow spacecraft operators (especially those operating spacecraft for the first time) to confidently know the state of their spacecraft, enabling them to make the best decisions for their satellites. This will reduce barriers to entry and smooth the learning curve, reducing the overhead for new spacecraft operators. OPTASAT will be yet another step in the ongoing process of making space more accessible to a larger pool of users.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158868</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Hidden Roots of Neoliberal Success in Agrarian Transformation: State Engagement, Farmer Professionalization, and Technological Interdependence in the Senegal River Valley</title>
<link>https://hdl.handle.net/1721.1/158867</link>
<description>The Hidden Roots of Neoliberal Success in Agrarian Transformation: State Engagement, Farmer Professionalization, and Technological Interdependence in the Senegal River Valley
Spielberg, Brian Jonars Besana
Recent scholarship celebrates irrigated rice in the Senegal River Valley (SRV) as a success story. This is remarkable considering the SRV’s history of agrarian transformation, which critics characterize as incoherent, erratic, and self-destructive. How did this turnaround happen? How did good seeds emerge from bad soil? Conventional explanations point to enlightened market-based reforms and technological upgrading following state withdrawal from most agricultural activities. In other words, the SRV is portrayed as a triumph of neoliberalization. This dissertation offers an alternative, additive view. In Paper 1, I situate the SRV’s transformation in broad historical context, showing how notions of development, technological change, and poverty alleviation have evolved and the implications for what strategies are pursued. I illustrate how a popular contemporary development model—appropriate technology (AT) 2.0, an evolution of Schumacher’s 1970s AT 1.0—that valorizes small-scale technologies and market-led interventions is attractive in explaining successes like the SRV, even as it proves ultimately reductive. In Paper 2, I demonstrate how the state, despite policies curtailing its activities and a dominant narrative asserting its disengagement, continues to play an active role in the SRV. By imparting practical skills, such as pump operation, contract negotiation, and bookkeeping, state action helped farmers professionalize. A durable effect is a “we’re in this together” state-farmer mentality. When this relationship is tested, well-respected intermediaries, often religious leaders, intercede. In Paper 3, I show how farmers construct assemblages of resources, skills, and knowledge to achieve their goals. They rely on negotiation skills and social ties with local leaders, appealing to “public interest” couched in religious terms.
In forsaking key aspects of the dominant assemblage to pursue alternatives, farmers exercise their agency and enhance market functioning by permitting flexibility, acknowledging technological interdependencies, and mitigating recurrent risks. This dissertation offers hope that successful agrarian development is possible in challenging, resource-constrained environments. Based on 11 months of fieldwork, I show how state and farmer actions bolstered market reforms, underpinning their success. In centering on-the-ground realities, I move beyond dominant explanations and neat theoretical classifications to reveal underreported but nonetheless fundamental processes and mechanisms through which development occurs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158867</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Singlet exciton fission-enhanced silicon photovoltaics: Interfacial engineering, device design and spectroscopic technique development</title>
<link>https://hdl.handle.net/1721.1/158864</link>
<description>Singlet exciton fission-enhanced silicon photovoltaics: Interfacial engineering, device design and spectroscopic technique development
Nagaya, Narumi
The growing global energy demand, combined with resource and space limitations, necessitates enhancements in crystalline silicon solar cells, which are the current dominant solar technology. However, their efficiencies have only increased incrementally over the past 20 years, as they are starting to approach the theoretical efficiency limit. The main source of loss is thermalization, where energy in excess of the bandgap absorbed by silicon is lost as heat. Singlet exciton fission in organic molecules has been proposed to reduce these losses. By having the organic layer absorb the high-energy light and transfer the triplet excitons generated from the singlet fission process to silicon, the photocurrent in this spectral region can be doubled, with the potential of raising the efficiency from the traditional limit of 29.4% to up to 42%.&#13;
&#13;
The greatest challenge with these devices has been to demonstrate an increase in the silicon photocurrent, a necessary condition to show that the technology is viable. Scientifically, there are three main components to this problem. The first is to successfully couple the triplet excitons to silicon. The second is that not much is understood regarding the exciton and charge carrier dynamics at this interface. Finally, the silicon solar cell architecture should also be considered to extract transferred carriers effectively.&#13;
&#13;
This thesis tackles these three parts through interfacial materials, device architecture, and spectroscopy approaches. Using tetracene as the singlet fission layer and n-doped silicon, we show that defect-induced states in a thin interlayer of hafnium oxynitride that lie near the band edge of silicon are beneficial for triplet exciton transfer. We also identify that triplet-induced electric field-effect passivation is beneficial for the triplet sensitization process of silicon, and design a new bilayer interface consisting of a zinc phthalocyanine donor layer that introduces preferential near-silicon band edge states, and an ultrathin oxide chemical passivation layer. We then study various device architectures, confirming the importance of using a device designed to extract surface charge carriers efficiently, and demonstrating the first enhancements in single-junction silicon solar cell external quantum efficiencies and photocurrent from singlet fission. Finally, we build and use advanced spectroscopy techniques and numerical frameworks to study exciton and charge carrier dynamics in singlet fission-sensitized solar cell materials, confirming that the triplet excitons are contributing to all the positive effects observed in the devices.&#13;
&#13;
These results have shown that singlet fission-sensitized silicon solar cells are a viable technology for enhancing silicon solar cell efficiencies beyond the conventional single-junction limit. This interface remains a rich area for fundamental scientific studies, involving the coupling of molecular dark states to bulk silicon. We hope that the key findings can help direct research efforts towards scalable implementation of this technology, and stress that the fundamental understanding of the interface also has broad implications for other silicon technologies that can benefit from enhanced quantum yields, including photodetectors.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158864</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Advances in Range-Aided Navigation</title>
<link>https://hdl.handle.net/1721.1/158863</link>
<description>Algorithmic Advances in Range-Aided Navigation
Papalia, Alan A.
This thesis contributes to the advancement of range-aided simultaneous localization and mapping (RA-SLAM) through algorithmic developments and real-world demonstrations. Broadly speaking, SLAM is the process by which an agent combines sensor measurements to simultaneously create a map of the world and localize itself within this map. SLAM has been called the ‘holy grail’ of field robotics, and in many instances it is a critical enabling capability for autonomous agents to operate in the real world. RA-SLAM is the specific case of SLAM which incorporates point-to-point distance measurements (e.g., distance measurements between an autonomous underwater vehicle and an acoustic buoy) into the inference process. The ability to leverage such measurements is desirable, as they can help in resolving ambiguities (e.g., am I in hallway A or B?) and the relevant sensors are often low-cost and simple to integrate (and thus pose the potential to be widely deployed). However, there are theoretical challenges that have historically limited the reliability of RA-SLAM approaches. At the root of these challenges is the issue that a single range measurement does not uniquely determine the relative position between two points. In state-of-the-art RA-SLAM formulations, this ambiguity manifests as non-convexity in the maximum a posteriori inference problem. As a result of this non-convexity, standard local-search optimizers are highly dependent on quality initializations to obtain the correct state estimate. To address this issue of reliability, this thesis presents the first certifiably correct algorithm for RA-SLAM. This algorithm, Certifiably Correct RA-SLAM (CORA), is capable of (i) obtaining globally optimal solutions for many real-world RA-SLAM problem instances and (ii) providing certificates of correctness for these solutions.
CORA leverages a novel semidefinite programming (SDP) relaxation of the RA-SLAM problem, which it solves efficiently using the Riemannian Staircase methodology. This methodology allows CORA to typically obtain globally optimal solutions faster than the existing state-of-the-art local solvers. These results expand our understanding of problems suited for efficient global solvers and highlight the key problem structures that appear necessary to develop and deploy such solvers, pointing towards exciting future directions in trustworthy model-based autonomy. We demonstrated the performance of CORA on a range of real-world RA-SLAM datasets, including a set of large-scale multi-agent experiments conducted as part of this work. In these experiments CORA reliably estimates agents’ trajectories in both single- and multi-robot settings. CORA gracefully scales to large problems consisting of multiple agents and tens of thousands of robot poses. These experiments not only validate CORA’s performance, but also fill an existing gap in open-source datasets available to the research community and provide practical insights to guide future deployments of autonomous navigation systems in large, complex environments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158863</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Flood Risks to City Infrastructure Systems&#13;
Utilizing Scalable, Time Sensitive Modeling</title>
<link>https://hdl.handle.net/1721.1/158861</link>
<description>Predicting Flood Risks to City Infrastructure Systems&#13;
Utilizing Scalable, Time Sensitive Modeling
Boukin, Katerina
Flooding is emerging as the most expensive and frequent natural hazard around the world. Floods are highly dynamic in nature and cause physical damage to our built environment, loss of life, economic damage, and major impacts to society. An example is the at-ground road system, which comprises 30-60% of a city’s area in the US and is highly susceptible to flood damage while still needing to serve as evacuation routes for local residents. Similarly, the underground built system is extremely vulnerable to flooding damage as well as life risk to anyone within it. With urban landscapes constantly evolving, accurately predicting flood propagation and extent is imperative to mitigate these risks, especially as floods worsen due to climate change.&#13;
&#13;
Historically, the focus of flood risk assessment in industry and academia has been on the coastal urban environment, assessing the impact of fluvial flooding. This resulted in many risk assessment tools that mostly cater to the effectively unlimited flood water associated with riverine or coastal fluvial flooding. As for rain-driven impacts, common practice simply switched the flood modeling to a pluvial orientation, keeping the rest of the risk tool components identical across the different flood mechanisms. For pluvial flooding, existing urban flood modeling tools such as SWMM and PC-SWMM are limited by their catchment-based approach, neglecting surface runoff dynamics and spatial-temporal flood impacts. Consequently, these tools fail to capture the full extent of rain-driven floods, underestimating their severity and impact on urban environments.&#13;
&#13;
Addressing this gap requires sophisticated simulations that account for rain event characteristics and city morphology, yet such simulations are computationally demanding and require detailed urban data. Currently, flood impact analysis tools lack specificity for pluvial flood risks and do not address the risks to various city systems beyond building damage. As a result, the contribution of pluvial floods to overall flood risks is underestimated, compromising infrastructure resilience. Because flood model results are a critical component of flood risk assessments, accurate spatial-temporal urban flood results will allow the pluvial flood impact assessment to be simplified and the flood damage to the different urban systems to be quantified.&#13;
&#13;
This research aims to develop a scalable and streamlined method to accurately quantify the risks of rain-driven floods to urban infrastructure systems. It addresses three key questions: (1) To what extent does current practice underestimate pluvial flood impacts? (2) What are the impacts of pluvial flooding on pavement systems when incorporating spatial-temporal modeling? (3) What is the significance of modeling pluvial floods using urban underground spaces? Using advanced flood modeling and numerical soil-water infiltration techniques, this research will quantify damages and lifecycle impacts to pavement and underground space systems. The method will provide information on the spatial and temporal distribution of flood damage and will enable scaling up single-element assessments to system-wide impacts. This holistic approach will improve urban flood risk management, supporting informed decision-making and the development of resilient infrastructure systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158861</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examining the placenta’s role in neurodevelopment in the context of maternal obesity</title>
<link>https://hdl.handle.net/1721.1/158860</link>
<description>Examining the placenta’s role in neurodevelopment in the context of maternal obesity
Gunter-Rahman, Fatima M.
The placenta is a key organ determining fetal development and likely contributes to programming of long-term offspring health, in particular neurodevelopment. Various maternal exposures, such as psychosocial stress, diabetes, infection, and high body mass index (BMI) are associated with higher risks of impaired neurodevelopment in the offspring. One third of women in the United States are affected by maternal obesity (MO) during pregnancy, making it one of the most common exposures.&#13;
We profiled the term placental transcriptome in humans using single-nucleus RNA-seq, comparing expression profiles in MO versus lean conditions, in each of the two faces of the placenta separately. On both sides of the placenta across several cell types, MO was associated with upregulation of hypoxia response genes. On only the maternal-facing side, hypoxia gene expression was associated with offspring neurodevelopment outcomes measured at multiple time-points, in the Genetics of Glucose regulation in Gestation and Growth (Gen3G) cohort, an independent pre-birth cohort with bulk RNA-seq from placental tissue. We leveraged Gen3G to determine genes that correlated with impaired neurodevelopment and found these genes to be most highly expressed in extravillous trophoblasts (EVTs). EVTs further showed the strongest correlation between neurodevelopment impairment gene scores (NDIGSs) and the hypoxia gene score. We validated these findings in EVTs in an independent single-cell RNA-seq cohort from second trimester placenta, and found that cultured EVTs have increased NDIGSs in response to exposure to hypoxia. These data suggest that hypoxia in EVTs may be a key process in the neurodevelopmental programming of fetal exposure to MO. Our work opens up new directions of research, such as exploring applications of antioxidants to potentially mitigate some of the offspring consequences associated with MO.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158860</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strange Attitudes on Top</title>
<link>https://hdl.handle.net/1721.1/158857</link>
<description>Strange Attitudes on Top
Močnik, Maša
This dissertation investigates how attitude verbs of belief and desire engage with embedded material of a similar nature. Chapter 1 looks at the (cross-linguistically unusual) Slovenian existential doxastic attitude verb dopuščati (‘allow for the possibility’) and the embedding of epistemic modal verbs under it. Chapter 2 looks at the (overall puzzling) want and its Slovenian counterpart hoteti, and at their behaviour with respect to embedded doxastic attitudes, epistemic adverbs, and epistemic adjectives. Chapter 3 looks at the (cross-linguistically unusual) Koryak variable-force variable-flavour attitude verb ivək (‘think’, ‘allow for the possibility’, ‘say’, ‘suggest’) and at how its apparent bouletic flavour (‘wish’, ‘hope’, ‘fear’) is derived with the help of covert desiderative components inside the embedded clause. Attitude verbs play their standard role as quantifiers over possible worlds (Hintikka 1962); parameters of evaluation are assumed to contain a set of worlds called the information state (Yalcin 2007; a.o.), which the attitude verb modifies and passes to the embedded clause, while the epistemic modal base is taken to be ‘local’, forming a subset of the information state (Mandelkern 2017, 2019a). Some of the overarching theoretical contributions are the introduction of a new parameter of evaluation (the ‘selected state’), which is crucial in modelling embedding under non-universal attitude verbs, and a refined view of epistemic modality. Subjective epistemic modality is proposed to involve a second constraint on the shape of the modal base, whose effect is to strengthen embedded necessity claims and help derive the infelicities observed in chapters 1 and 2. We also address the connection between beliefs and desires in the context of various desire interpretations (wants in chapter 2, hopes and wishes in chapter 3).
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158857</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simple Models for Complex Tropical Dynamics</title>
<link>https://hdl.handle.net/1721.1/158855</link>
<description>Simple Models for Complex Tropical Dynamics
Tuckman, P.J.
Studying Earth's tropics is an essential part of understanding the climate, simulating the Earth system, and predicting the societal impacts of weather. In this thesis, we use a hierarchy of models (including analytically tractable equations, simplified simulations, and full general circulation models) to study tropical phenomena including the Hadley Circulation, the Inter-Tropical Convergence Zone (ITCZ), the South Asian monsoon, Pacific and ENSO seasonality, the Walker Circulation, and the modeling of the tropical energy budget. We begin with an examination of tropical SSTs and the ITCZ under warming, finding that the Hadley cells weaken and tropical SST gradients decrease in a warmer climate. The ocean's subtropical cells strengthen and transport more energy in a warmer climate, further flattening SST gradients. The ITCZ, meanwhile, increases in strength with warming because of the exponential relationship between humidity and temperature, and the presence of a dynamic ocean changes a single-ITCZ with a sinusoidal seasonal cycle to a double-ITCZ with a square wave seasonal cycle. Next, we study the “monsoonal mode,” an energy and precipitation anomaly triggered by the South Asian Monsoon that moves into the West Pacific during Northern Hemisphere autumn. The monsoonal mode is discussed as a possible underlying cause of the seasonality of the Pacific, i.e., the fact that the West Pacific and ENSO both have seasonalities that favor one season despite being on the equator. To show this, ENSO seasonality is examined using simplified simulations and an energy budget of the Central-Eastern Equatorial Pacific. Similar techniques are then used to study ENSO events in warmer climates, and it is found that the Pacific zonal SST gradient and the Walker circulation, which are the sources of ENSO instability, weaken with warming, decreasing the magnitude of ENSO events. Lastly, we assess the energy budget of CMIP6 models.
It is shown that all CMIP6 models have more energy input to the deep tropics than ERA5 reanalysis, and this bias is bigger in the Southern Hemisphere. The hemispheric asymmetry in this bias can be traced back to radiation absorbed by the atmosphere, which is associated with dust (for shortwave radiation) and total column water (for longwave radiation). As a whole, this thesis demonstrates the utility of studying complex problems with simple models and deepens our understanding of Earth's tropics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158855</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ponderomotive Forces in Pilot-Wave Hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/158854</link>
<description>Ponderomotive Forces in Pilot-Wave Hydrodynamics
Evans, Davis J.
Droplets bouncing on a vibrating bath may self-propel (or ‘walk’) via a resonant interaction with their self-induced pilot wave. In pilot-wave hydrodynamics (PWH), the spontaneous emergence of coherent, wave-like statistics from chaotic trajectories has been reported in several settings. Owing to the similarity of PWH to Louis de Broglie’s realist picture of quantum mechanics, the question of how such statistics emerge has received considerable recent attention.&#13;
&#13;
A compelling setting where coherent statistics emerge in PWH is the hydrodynamic analog of the quantum corral. When walking droplets are confined to a circular cavity or ‘corral’, a coherent statistical pattern emerges, marked by peaks in the positional histogram coincident with extrema of the cavity eigenmode. Stroboscopic models that idealize the drop’s bouncing dynamics as being perfectly resonant with their Faraday wave field have proven incapable of capturing the emergent statistics.&#13;
&#13;
In this thesis, we present new experimental and theoretical findings in a variety of pilot-wave hydrodynamic settings where non-resonant bouncing plays a key role in the droplet dynamics and emergent statistics. First, we find that modulations to resonant bouncing influence the stability threshold of a Bravais lattice. Second, we demonstrate that resonant bouncing can be disrupted by the imposition of suboctave driving, which may be used to induce a rearrangement of bound states of bouncing droplets.&#13;
&#13;
We then proceed to an integrated experimental and theoretical study of the hydrodynamic corral, highlighting the role of non-resonant bouncing in the emergent statistics. We first introduce a new experimental method for simultaneously measuring the drop position and pilot wave height. We then report new measurements of the pilot wave and vertical bouncing dynamics. We demonstrate that the complex pilot wave arising in corrals may play the same role as suboctave driving in disrupting resonant walking. Our experimental findings motivate a new theoretical framework that predicts that modulations in the histogram emerge as a consequence of ponderomotive effects induced by non-resonant bouncing. We then connect the ponderomotive drift observed in hydrodynamic corrals to extant theories of quantum mechanics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158854</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Computing from Graphs</title>
<link>https://hdl.handle.net/1721.1/158853</link>
<description>Quantum Computing from Graphs
Khesin, Andrey Boris
While stabilizer tableaus have proven exceptionally useful as a descriptive tool for additive quantum codes, they otherwise offer little guidance for concrete constructions or coding algorithm analysis. We introduce a representation of stabilizer codes as graphs with certain structures. Specifically, the graphs take a semi-bipartite form wherein input nodes map to output nodes, such that output nodes may connect to each other but input nodes may not. Intuitively, the graph’s input-output edges represent information propagation of the encoding circuit, while output-output edges represent the code’s entanglement structure. We prove that this graph representation is in bijection with tableaus and give an efficient compilation algorithm that transforms tableaus into graphs. We then show that this map is efficiently invertible, which gives a new universal recipe for code construction by way of finding graphs with sufficiently nice properties.&#13;
&#13;
The graph representation gives insight into both code construction and algorithms. To the former, we argue that graphs provide a flexible platform for building codes, particularly at small non-asymptotic scales. We construct as examples several constant-size codes and several infinite families of codes. We also leverage graphs in a probabilistic analysis to extend the quantum Gilbert-Varshamov bound into a three-way distance-rate-weight trade-off. To the latter, we show that key coding algorithms (distance approximation, weight reduction, and decoding) are unified as instances of a single optimization game on a graph. Moreover, key code properties such as distance, weight, and encoding circuit depth are all controlled by the graph degree. We give efficient algorithms for producing simple encoding circuits whose depths scale as twice the degree and for implementing logical diagonal and certain Clifford gates with non-constant but reduced depth. Finally, we construct a simple efficient decoding algorithm and prove a performance guarantee for certain classes of graphs. These results give evidence that graphs are generically useful for the study of quantum computing and its practical implementations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158853</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Underwater Semantic Simultaneous Localization and Mapping</title>
<link>https://hdl.handle.net/1721.1/158852</link>
<description>Underwater Semantic Simultaneous Localization and Mapping
Singh, Kurran
Building semantically meaningful object-level maps of underwater environments is crucial for enabling higher-level autonomy, fostering human-robot collaboration, and providing compressed map representations for bandwidth-constrained underwater communications. Localizing against such maps can also improve the positioning accuracy of underwater vehicles by correcting for odometric drift. However, underwater semantic simultaneous localization and mapping (SLAM) has lagged behind analogous terrestrial and aerial semantic SLAM techniques, largely due to the lack of large labeled underwater datasets and the challenging sensor modalities specific to underwater environments. To address these shortcomings, this thesis develops a range of methodologies to advance underwater semantic SLAM capabilities. &#13;
&#13;
First, self-supervised learning and visual foundation models are leveraged to detect and segment underwater objects in an open-set manner, i.e., objects need not be present in the training dataset to be detected. The machinery of the open-set object detection technique breaks several assumptions made by existing closed-set semantic SLAM methods. Thus, new methods for object representation and data association are proposed and demonstrated. A method to localize underwater objects is then developed through an analysis of the geometry of underwater monocular cameras and multibeam sonars. &#13;
&#13;
Finally, a formulation of open-set object-level place recognition as a graph matching problem is introduced. The formulation includes a method for calculating and tracking semantic uncertainty for open-set object detections. Experimental results on both underwater and terrestrial datasets demonstrate that the proposed formulation can be used for real-time accurate open-set object-based place recognition. &#13;
&#13;
In summary, techniques for underwater object detection, localization, and data association are introduced and integrated with probabilistic graphical models for open-set semantic SLAM. The proposed techniques are tested across a wide variety of scenarios, and are shown to generalize to terrestrial settings as well.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158852</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organizational Forms and Practices: Essays on Implications for Frontline Workers and Performance</title>
<link>https://hdl.handle.net/1721.1/158850</link>
<description>Organizational Forms and Practices: Essays on Implications for Frontline Workers and Performance
Scott, Karen MacKenzie
In three essays, this dissertation explores how organizational forms and workforce practices shape frontline work experiences and organizational performance. Using both quantitative and qualitative methods, I explore how frontline workers experience work and what factors shape their performance. In the first essay, I examine how workforce practices in nursing homes relate to organizational performance. Specifically, I evaluate performance on resident health outcomes for both pre-pandemic and COVID-19 conditions. Combining federal and state administrative data sets with non-public data on early COVID-19 spread and mortality, I investigate the degree to which the organization of work for frontline workers predicted resident health as a measure of organizational performance for nursing homes. In a period of global stress on health and care systems, I seek to understand to what extent pre-pandemic predictors of performance remained important. When nurses spent more time with residents, residents experienced better care both before and during the pandemic. Yet contrary to expectation, the role of clinical outsourcing became more relevant during the pandemic, potentially reflecting greater workforce flexibility or targeted COVID-19 workforce support to facilities that outsourced nursing activities before the pandemic. These results depict how environmental changes and alternative performance measures call into question established relationships in the high-performance work systems literature. In the second essay, I use in-depth interviews and field observations to uncover the process of constructing ownership culture in an employee-owned firm. I demonstrate how workers co-create their own control system, supported by a high financial value of ownership, strategic managerial communication, peer pressure, and performance management. 
This critical case challenges the dominant view in the employee-ownership literature that success requires formal worker participation in decision-making. Further, it investigates the “black box” of culture-building in an employee-owned firm. The third essay builds on this understanding by evaluating the stated motives of individual worker-owners in a home care cooperative. The cooperative developed as a pilot initiative with non-profit partners to develop a home care organization that would provide quality jobs and quality care, while integrating immigrant workers. I traced the workers’ justifications for joining and participating in the cooperative. Rather than aligning with expected motives from previous studies or with Worker Center motives, I find that these workers adapted motives to reflect their realities, such as multiple jobs and a lack of labor rights in practice. This analysis emphasizes the decoupling of workers’ experiences from stated organizational goals, underscoring the importance of collecting workers’ perspectives. Taken together, these three essays contribute insights into how frontline workers shape organizational performance by interpreting organizational context, culture, and structure. Results indicate that organizational performance is not merely a function of workplace practices, but rather, directly influenced by frontline workers based on their individual motives and roles in workplace culture. These findings imply that by directly engaging with frontline workers’ motives, organizational leaders and policymakers can design organizations that improve work and performance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158850</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speech Therapy</title>
<link>https://hdl.handle.net/1721.1/158846</link>
<description>Speech Therapy
Hintikka, Kathleen
Words can hurt. Most theorists who are working on speech and harm, even across disciplinary lines, agree to that much. There is less agreement, however, regarding the mechanisms by which speech causes or constitutes harm. By way of paying special attention to the dangers of ordinary, and to varying degrees, “socially acceptable” language, my dissertation, Speech Therapy, is a three-part exploration of the ways that speech plays significant roles in constructing and maintaining unjust conditions of domination and subordination both between social groups and within the broader society, even when its effects are unintended or go unnoticed. The first paper modifies a Gricean account of implicature to accommodate implicature in the interrogative mood. I then argue that interrogative implicature can help us make better sense of certain kinds of common microaggressions. In the second paper, I focus more explicitly on the kinds of harm and subordination that speech can inflict on its targets even when the target is not around to hear it. I argue that all socially significant speech — speech about race, class, etc. — articulated from any standpoint, contains second-personal vocative hails. That is, all socially significant speech, even that not uttered second-personally, contains a second-personal norm that implicates both the speaker and members of the targeted social category, even when no member of the targeted social category is invited as an interlocutor by the speaker. The resulting view is a non-ideal, intersectional, and situated approach to second-personhood. The final paper is about how the things we do with language can alter the epistemic landscape of our communities. I argue that slurs, pejoratives, and misused epithets, a class of terms that I will refer to as demeaning speech, constitute a specific kind of epistemic oppression. My view is not that demeaning speech causes epistemic oppression, but rather that demeaning speech constitutes epistemic oppression. 
The oppression occurs in the mere uttering of these terms; the act of making it such that someone might have their testimony discredited in the future, of inflicting epistemic risk, is itself an epistemic injustice.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158846</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding options for the mechanical characterization of biological materials</title>
<link>https://hdl.handle.net/1721.1/158845</link>
<description>Expanding options for the mechanical characterization of biological materials
Varner, Hannah Martin
The mechanical properties of biological tissues change over time and with disease progression, and they provide important information regarding the limits a tissue can sustain before injury. Therefore, quantifying these properties in biological materials and their synthetic simulants could be instrumental for accurate medical diagnoses, treatment of disease, and prediction of traumatic injury survivability. Conventional methods of mechanical testing, such as uniaxial tension, compression, and nanoindentation, provide highly repeatable and reliable results for the stiff materials for which they were originally developed. However, the same cannot be said when these methods are applied to the characterization of soft and biological materials due to limitations of specimen size, fixturing capabilities, and sample preparation. Volume Controlled Cavity Expansion (VCCE) is a recently developed technique to measure local mechanical properties of soft materials in their natural environment. Through the highly controlled expansion of a fluid bubble at the tip of an injection needle, paired with simultaneous measurement of the resisting pressure, a local signature of a material's mechanical response can be obtained. &#13;
&#13;
This thesis presents the first systematic application of VCCE to biological materials. It begins by presenting a cautionary example of the limitations of soft material testing, focusing on the synthetic silicone and tissue simulant polydimethylsiloxane (PDMS). We find that the wide range of mechanical properties reported in literature are due to biases imparted by different testing methods. We then use VCCE to examine the elastic response of gelatin, whole blood clot and liver tissue, demonstrating with high repeatability that subtle mechanical changes occur within a matter of days as these tissues age. Finally, this work applies VCCE to investigate what happens to these materials after elastic expansion, and throughout a process of controlled damage. Biological materials are found to demonstrate toughening that does not appear in gelatin and PDMS. Because of these observed differences, we caution against using gelatin and PDMS for simulating the behavior of biological materials in extreme loading cases. Combining these findings, this thesis provides evidence that more widespread adoption of VCCE in mechanical testing would provide a path to better understanding of the mechanics of soft and biological materials, with implications in fundamental mechanics research as well as in biological and healthcare applications.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158845</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Order Immersed Finite Difference Methods for Complex Domains with Moving Boundaries and Interfaces</title>
<link>https://hdl.handle.net/1721.1/158843</link>
<description>High Order Immersed Finite Difference Methods for Complex Domains with Moving Boundaries and Interfaces
Gabbard, James
Moving domain boundaries and material interfaces are a hallmark of multiphysics systems such as fluid-structure interaction, alloy solidification, and multiphase flows. Simulating moving interfaces with traditional techniques requires a moving mesh that continuously adapts to the interface, which is costly and places restrictions on the interface motion. Immersed methods avoid these challenges by simulating moving geometries on a stationary Cartesian grid, locally altering the numerical method to account for boundaries and interfaces that are not grid-aligned. Most existing immersed methods have low-order spatial accuracy, requiring fine grids to generate accurate results. High order immersed methods can produce more accurate results at lower resolution, making them a promising tool for 3D simulations with tight error tolerances. However, the majority of available high order immersed methods have been numerical experiments developed for stationary 2D geometries and simple PDEs. In this thesis we demonstrate that high order immersed methods can be extended to complex nonlinear PDEs and moving 3D geometries, both of which are necessary to simulate practical engineering problems. We begin by introducing a boundary treatment that locally approximates PDE solutions with high order accuracy using a weighted least-squares fit, and show that the procedure remains valid for smooth 2D or 3D geometries satisfying a local curvature constraint. This boundary treatment is combined with a high order finite difference method to discretize the Poisson equation with up to sixth order accuracy. We then expand the scope of the method to include PDEs with immersed material interfaces, spatially-variable coefficients, vector-valued unknowns, cross-derivative terms, and nonlinearities. 
These techniques are applied to generate a sixth-order discretization of 2D nonlinear elasticity, demonstrating the applicability of high order immersed methods to complex PDE systems relevant in mechanical engineering. In the second half, we focus on large-scale 3D simulations with moving boundaries. We construct a third order immersed advection discretization with provable stability in one dimension, and show experimentally that the scheme remains stable in 2D and 3D domains. To treat moving boundaries, we introduce a general framework that allows high order immersed methods to maintain their accuracy in both space and time when paired with any explicit Runge-Kutta time integrator. We conclude by presenting results from massively-parallel high order simulations of the 3D advection-diffusion equation with moving boundaries on a multiresolution grid. Taken together, these results demonstrate that high order immersed methods can achieve the scale and complexity necessary to enable practical simulations that are difficult or impossible with traditional mesh-based techniques.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158843</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Floor Plan Design Collaborator: A Data-Driven Approach to Assist Human Architects in Design Exploration</title>
<link>https://hdl.handle.net/1721.1/158842</link>
<description>Floor Plan Design Collaborator: A Data-Driven Approach to Assist Human Architects in Design Exploration
Sung, Woongki
After a long AI winter since the 1980s, artificial intelligence is now experiencing a renaissance due to enhanced computing power and access to vast amounts of data. Today, machines can talk, sing, and draw like human experts. Despite this progress, we are still far from the vision where human designers and AI collaboratively discuss and develop designs. This study argues that a data-driven approach holds great potential in the design process by quickly learning from existing examples and generating new alternatives for exploration. To support this claim, the study presents a generative framework that learns from existing examples and generates new designs. Specifically, the proposed framework employs Bayesian networks to encode site layout data and floor plan examples, generating new design examples through a Markov Chain Monte Carlo (MCMC) sampling procedure. Experiments on real-world examples demonstrate that the framework effectively summarizes the statistical information of given design examples and generates unseen examples based on the learned knowledge. The transparency of the data representation and the inner workings of the proposed framework facilitate an active feedback loop in the iterative learning and generation process between human designers and machines. Observations throughout the study reveal intrinsic limitations and potential improvements of contemporary optimization-based approaches from the perspective of both lateral and vertical design development.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158842</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory Astrophysics Studies of Magnetized Collisionless Shock Precursors and the ³He³He Proton Spectrum at the OMEGA Laser Facility</title>
<link>https://hdl.handle.net/1721.1/158838</link>
<description>Laboratory Astrophysics Studies of Magnetized Collisionless Shock Precursors and the ³He³He Proton Spectrum at the OMEGA Laser Facility
Johnson, Timothy Mark
Laboratory astrophysics enables the study of astrophysical systems in the lab. There are broadly two types of laboratory astrophysics experiments: macrophysics and microphysics. Macrophysics experiments study a scaled down version of an astrophysical system while microphysics experiments create a small volume of matter with the same conditions as an astrophysical system. This thesis details work related to both macrophysics and microphysics laboratory astrophysics experiments. For the macrophysics contribution, collisionless shock experiments were conducted at the OMEGA laser facility using the new gas jet platform. Collisionless shocks are shock waves formed through plasma processes when particle collisions are negligible. These shocks can form as bow shocks in the interaction between the solar wind and planetary ionospheres and can accelerate charged particles to high energies. In the experiment, a CH plasma flow collides with a hydrogen gas jet plasma to create a forming magnetized collisionless shock. Different diagnostics show a moving density jump, strong magnetic fields, and the acceleration of electrons. These observations coupled with magnetohydrodynamics and kinetic particle-in-cell simulations paint a complete physical picture of the forming shock in a configuration similar to the bow shock of Venus. Late-time proton radiographs show a complicated structure which is studied for magnetic turbulence. Turbulence is important in several astrophysical systems, especially collisionless shocks where it dissipates shock kinetic energy and is essential for accelerating charged particles to cosmic ray energies. Magnetic power spectra extracted from proton radiography data show a break in the spectrum between the ion Larmor radius and the ion skin depth for high plasma β, a sign of kinetic turbulence. Large scale particle-in-cell simulations of high β turbulence also have this feature showing that the experimental data are consistent with high β kinetic turbulence. 
For the microphysics contribution, a new proton spectrometer is designed for measurements of the ³He³He proton spectrum. The ³He³He fusion reaction is the last step of the proton-proton I chain which produces the majority of the sun’s power. Previous experiments were not able to measure the ³He³He proton spectrum below 6 MeV. A new proton step range filter (SRF) spectrometer with a larger energy range is designed using a Monte Carlo tool. This tool uses Geant4 and is able to self-consistently apply the instrument response function. The new SRF design is validated and a method for analyzing experimental data using the Monte Carlo code is presented.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158838</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shining a Light on the Nucleus: Photonuclear Measurements from Correlations to Charmonium</title>
<link>https://hdl.handle.net/1721.1/158837</link>
<description>Shining a Light on the Nucleus: Photonuclear Measurements from Correlations to Charmonium
Pybus, Jackson R.
The atomic nucleus is composed of a collection of nucleons (protons and neutrons), which are bound together by the nucleon-nucleon (NN) interaction that originates from Quantum Chromodynamics (QCD). While most nucleons experience the force from the rest of the nucleus as a single net “mean-field” interaction that binds them relatively weakly, a small but impactful fraction are in configurations called “Short-Range Correlations” (SRCs), in which they pair with another nucleon at very short distance to experience strong interactions, significant binding, and high momentum. Hard, high-energy scattering reactions in which an SRC pair is broken apart, knocking both nucleons out of the nucleus, provide the ability to probe the details of these SRC configurations in the nucleus. Previous measurements have had limited statistics and kinematic reach, and the theoretical tools available were insufficient to draw quantitative conclusions regarding the ground-state properties of SRCs. The studies described in this thesis represent the first global analysis of SRC breakup measurements in order to present a unified picture of SRCs within light- to medium-size nuclei. This includes the use of a novel theoretical framework, the Generalized Contact Formalism, which connects scattering cross-section measurements and the ground-state properties of the SRC pair, to quantitatively interpret a variety of electron-scattering measurements. This is brought to culmination by a report on the first measurement of SRC pairs via the use of hard meson photoproduction reactions, which, despite differing significantly from the mechanics of electron scattering, is well-described under a common framework, pointing to a consistent and universal picture of SRCs across reaction channels. 
I also report on the first measurement of J/ψ photoproduction in the near- and below-threshold kinematic region, giving the first insights into the gluonic structure of bound nucleons in the large-x “valence” region and providing constraints on a gluonic “EMC effect”. In addition to these studies, I provide details on the search for Primakoff production of axion-like particles using the photoproduction data taken for this experiment, and I conclude by describing studies of nucleon spin structure measurements that will be performed at the forthcoming U.S. Electron-Ion Collider.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158837</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acoustic scattering of spherical directional waves by smooth and statistically rough solid elastic cylinders</title>
<link>https://hdl.handle.net/1721.1/158835</link>
<description>Acoustic scattering of spherical directional waves by smooth and statistically rough solid elastic cylinders
Mursaline, Miad Al
Realistic sonars radiate spherically spreading waves and have directivity. Therefore, they insonify a target over a finite number of Fresnel zones and span a continuum of oblique incident angles, even when the center of the beam is at normal incidence. These effects strongly influence both the overall scattered pressure levels and resonances. For example, because of the spreading of the beam and associated oblique insonification within the beam, normal modes associated with axially propagating guided waves are excited that would not have otherwise existed for an idealized incident plane wave. This thesis analyzes acoustic scattering by solid elastic cylinders insonified by realistic sonars both theoretically and experimentally. A theoretical model to predict scattering by arbitrary-length cylinders is derived based on the apparent volume flow accounting for the above-mentioned practical sonar properties, namely, spherical spreading and directionality. The formulation is first benchmarked against the formally exact T-matrix solution and tested against previously published laboratory data for finite cylinders. It is found that the formulation outperforms the T-matrix solution in predicting laboratory observations at near-normal incidence. Laboratory experiments are then conducted on arbitrary length smooth cylinders insonified by a directional sonar, with a small number of Fresnel zones excited, to evaluate the theory for monostatic as well as bistatic geometries. The formulation is found to outperform the classical scattering models in predicting the new measurements. For example, resonances associated with axially propagating guided waves excited at broadside incidence observed in the experiments are predicted by the proposed formulation but not by the classical models. The measurements are found to agree well with predictions in terms of overall scattering levels and resonance locations. 
In addition to testing the predictions, the bistatic laboratory observations presented herein substantiate the significant effects on scattering due to the properties of the incident field from practical sonars. The comparison between theoretical and experimental results is then extended for the more complex case involving statistically rough elastic cylinders with one-dimensional Gaussian roughness. The roughness is found to have a considerable impact on all aspects of scattering—overall levels as well as locations and shapes of resonances. General agreement is found between the theoretically predicted and measured ensemble averaged scattered pressure. Both the theory and data reveal two main observations in the ensemble-averaged scattered field: overall scattered pressure levels are seen to decrease, and resonance effects are diminished compared to the corresponding case of smooth cylinders. The effects of various statistical properties of the rough cylinder on the scattered field, namely different root mean square (RMS) roughness for fixed correlation length and different correlation lengths for fixed RMS roughness, are investigated. Finally, the fluctuations of the scattered field are analyzed using the derived formulation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158835</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>More than the sum of parts: deconstructing tissues in their spatial, temporal, and environmental contexts</title>
<link>https://hdl.handle.net/1721.1/158827</link>
<description>More than the sum of parts: deconstructing tissues in their spatial, temporal, and environmental contexts
Tzouanas, Constantine
The human body is composed of ~37,000,000,000,000 cells, exquisitely organized into tissues delivering emergent functions beyond individual cells’ capabilities (e.g., the brain’s seemingly effortless computations, the liver’s wide-ranging chemical processing). In my PhD, I studied how healthy tissues arise from properties and interactions of constituent cells, and how disease outcomes stem from dysregulation of underlying cellular parts. 1) To study how cells’ spatial organization shapes tissue function, I created photochemistry tools to discover gradients in how immune cells combat cancer across a tumor’s core vs. periphery. 2) To then explore spatially-structured tissues, I turned to tuberculosis (TB) granulomas: just centimeters apart, the immune system can kill bacteria in one granuloma or permit years-long bacterial survival in another. Reconciling this paradox, I discovered that bacterial killing needs coordinated signaling across immune cells, but TB-permissive granulomas structurally remodel to inhibit TB spread at the expense of “walling out” immune cells. 3) Connecting disease to lifestyle exposures, I determined tobacco smoking increases TB risk via blood-to-lung migration of TB-permissive cells. 4) Intrigued by past stresses seeding future dysfunction, I studied similar themes in adaptations to high-fat diets, discovering tradeoffs where individual liver cells promote their own survival at the expense of reduced tissue function and increased cancer risk. Through these studies, I dissected tissues and diseases with unprecedented resolution via single-cell multi-omics and mechanistic perturbations, defining the parts, interactions, and causal regulators that underlie tissue (dys)function.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158827</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Manufacture of a Modular Continuous Unit Dose Pharmaceutical Lyophilizer</title>
<link>https://hdl.handle.net/1721.1/158826</link>
<description>Design and Manufacture of a Modular Continuous Unit Dose Pharmaceutical Lyophilizer
Burcat, Steven
Pharmaceutical lyophilization (freeze-drying) enables long-term storage and simplified transportation for aqueous vaccines and protein formulations. Modern industrial pharmaceutical freeze-dryers rely on large batch and open-loop formulation processing, limiting supply chains and resulting in variable-quality products. This work describes the design and manufacture of a modular continuous lyophilization machine for pharmaceutical production. Additionally, the scaling and design methodology outlined in this work enables the development of both smaller systems for laboratory testing and larger machines to fit the needs and requirements of individual facilities. This machine introduces three new technologies to the pharmaceutical freeze-drying process. The first innovation is a continuous-flow lyophilization topology which separates the lyophilization steps spatially rather than temporally. This layout allows product to travel through the system in smaller batches for increased product uniformity and quality control. The second innovation is a weight-based sensor for monitoring residual water content. This sensor enables in-situ monitoring of product during sublimation, and it resolves mass measurements as small as 5 mg. The third innovation is the implementation of a thermal shock method of inducing controlled nucleation. The convective cooling and spatial non-uniformity within the machine allow vials to experience a 40°C temperature drop in less than 30 seconds. This nucleation front starts on the vial walls, rather than at the top surface of the solution in the vial, potentially increasing the water sublimation rate during drying compared to current nucleation methods. The machine designed and built for this work integrates into modern factory processes and can be scaled from the lab bench to a production line. The manufactured prototype demonstrates improvements in the production rate, flexibility, and quality over existing machines.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158826</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and perception of sounds from physical interactions reveals auditory intuitive physics</title>
<link>https://hdl.handle.net/1721.1/158825</link>
<description>Synthesis and perception of sounds from physical interactions reveals auditory intuitive physics
Agarwal, Vinayak
Object interactions – collisions, scraping, and rolling – create many of the sounds that we hear in the world around us. These sounds are generated via lawful physical dynamics. Anecdotally, humans possess some intuitive knowledge of the physical generative processes underlying sound production, but little is known about the extent and nature of this knowledge. This thesis characterizes the auditory perception of physical object interactions, making three main contributions. First, we develop realistic contact sound synthesis tools, in part via large-scale measurements of object acoustics. Second, we show that humans solve the ill-posed problem of inferring object mass and damping by using internalized knowledge of the distribution of object resonances. Third, we provide evidence for “auditory intuitive physics”, in which human listeners derive physical information through sound, maintain it over time in object representations, and compare it across sensory modalities.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158825</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning Beyond Crisis: The Promise of Insurgent Planning in Post-Disaster Mocoa</title>
<link>https://hdl.handle.net/1721.1/158822</link>
<description>Planning Beyond Crisis: The Promise of Insurgent Planning in Post-Disaster Mocoa
Osorio Botero, Juan Camilo
During the evening of March 31, 2017, a catastrophic landslide engulfed the Colombian city of Mocoa, killing at least 335 people in roughly thirty minutes. Seventy others disappeared, over one hundred people were reported injured across 48 neighborhoods, and roughly 1,500 housing units were destroyed. With a total of 22,000 people impacted, this catastrophe was the deadliest disaster to affect Colombia in recent decades. Yet, despite an alignment of major national political commitments, international cooperation, and a multi-million-dollar humanitarian budget, reconstruction plans had not been completed seven years later. Why? As the first comprehensive analysis of the landslide and its aftermath, this dissertation is a novel investigation into the competing forces that ultimately canceled the central reconstruction plan, demonstrating that the disruption caused by the disaster mobilized new actors and new forms of agency. In contrast to the popular perception that such a lack of remediation signals a failure of urban governance, the dissertation speaks to the success of activists who neutralized the government’s reconstruction plan, which they perceived as worsening the circumstances surrounding both the catastrophe and the recovery. Distinguishing between the “landslide” and the “larger disaster,” the dissertation further situates the government’s proposed reconstruction plan within a history of violent extraction, dispossession, and displacement. Framing an original case consisting of fifteen planning vignettes to trace actions, reactions, and counteractions, I expose the reduction of the planning process as crisis urbanism.
My research contributes to our understanding of variability among insurgent planning actors and their invented spaces for engagement in the context of disaster, by defining technocratic resistance as a valid form of dissent inside the government, and by proposing a new device for the study of insurgent planning called transformative spaces enabling local community’s right to plan. Drawing on contemporary debates on anti-crisis, risk and decolonial thought, the dissertation imagines an alternative paradigm for planning beyond crisis that enables radical community action through dissenting grassroots leadership. &#13;
&#13;
Keywords: crisis urbanism, technocratic resistance, insurgent planning, regenerative planning, anti-crisis, risk, decolonial thought
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158822</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Identity-Oriented Systems Engineering Framework for Complex Sociotechnical Systems: A case study of Zero Robotics</title>
<link>https://hdl.handle.net/1721.1/158821</link>
<description>An Identity-Oriented Systems Engineering Framework for Complex Sociotechnical Systems: A case study of Zero Robotics
Zhang, Yiyun
Historical and ongoing discrimination against identity groups defined by race, gender, social class, and other differences leads to persistent inequalities across society, including in socioeconomic status, health systems, political power, and educational opportunity. Technology, however, often entrenches or sustains these hierarchies and further strengthens social inequalities. While there are many frameworks for studying complex systems, a framework focused on advancing social justice while integrating technological and social considerations has been missing. This work introduces the Intersectional Antiracist Technology Framework as a new tool and applies it to an existing complex system, Zero Robotics, in STEM education. STEM education, of increasing importance in a competitive modern world, is one of the most popular methods of cultivating students’ interests and capabilities in solving complex problems. However, disparities in access to quality STEM learning opportunities and inclusion in STEM activities remain significant challenges to promoting social equality. This work builds upon systems engineering tools and uses the Intersectional Antiracist Technology Framework to describe, explain, and evaluate Zero Robotics, an education outreach program designed as an early intervention to draw students into aerospace and related fields. The program aims to serve students across the pipeline and provide them with learning opportunities through interactions with a space robot. It is a prime example of a complex sociotechnical system shaped by both technological and social factors. Through the case study of Zero Robotics, data were collected through interviews, surveys, participant observation, and available documents. Qualitative program outcomes are assessed from student surveys before and after the Zero Robotics competition.
This work is the first attempt to apply the Intersectional Antiracist Technology Framework to an existing complex system that is being managed by the author. The findings from this study demonstrate the insights that can be gained about complex sociotechnical systems by viewing them from multiple stakeholder perspectives and blending information about their technical and social design aspects.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158821</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Marine iodine biogeochemistry: inorganic speciation, redox dynamics and organic complexation</title>
<link>https://hdl.handle.net/1721.1/158819</link>
<description>Marine iodine biogeochemistry: inorganic speciation, redox dynamics and organic complexation
Ștreangă, Iulia-Mădălina
Iodine holds significant importance across various disciplines, including medicine, industrial processes, organic synthesis, paleoclimatology, atmospheric chemistry and modern climate science. The ocean, as a major surficial iodine reservoir and the primary source of this element to the atmosphere, plays a central role in global iodine cycling. Despite significant progress, key aspects of iodine cycling in the marine environment remain poorly understood. This thesis leverages recent advances in high-precision techniques, including liquid chromatography and mass spectrometry, to enhance our understanding of marine iodine biogeochemistry. Detailed analyses of the major inorganic iodine species in seawater, iodide and iodate, were conducted in the oligotrophic waters of the North Pacific and the oxygen minimum zones of the Eastern Tropical Pacific. The observed distributions reflect the impact of both in situ and ex situ processes on dissolved iodine concentrations, offering valuable insights into the prevalence and extent of anoxic conditions within oxygen minimum zones. Iodate formation rates were investigated through surface seawater incubations using iodide-129, a long-lived radioisotope, as a tracer. The experimental results underscore the pivotal role of particles in mediating redox transformations between iodide and iodate, while also emphasizing the significance of iodine species with intermediate oxidation states in these processes. Building on this observation, a significant focus of this thesis is the characterization of dissolved organic iodine in the ocean. Two innovative methodologies for identifying dissolved organic iodine compounds are presented. The first approach focuses on labelling cultures of the cyanobacterium Synechococcus with iodide-129 to generate a diagnostic isotopic pattern in resultant dissolved organic iodine complexes. 
The second approach employs sequential purification and isolation of a target compound from a large-volume seawater sample collected in the North Pacific. Collectively, the findings presented in this thesis significantly enhance our understanding of iodine cycling in the marine environment, offering novel insights into the distribution and composition of both inorganic and organic iodine, as well as the rates and dependencies governing iodine cycling processes. Furthermore, the methodologies introduced here pave the way for future research to elucidate the mechanisms driving iodine redox transformations in seawater, refine the marine distribution of inorganic iodine, and advance the molecular characterization of dissolved organic iodine.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158819</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Right and Left Ventricular Coupling for Optimization of Mechanical Circulatory Support</title>
<link>https://hdl.handle.net/1721.1/158818</link>
<description>Leveraging Right and Left Ventricular Coupling for Optimization of Mechanical Circulatory Support
Lamberti, Kimberly Kate
Mechanical circulatory support devices have the potential for profound impact on cardiogenic shock patients. They enable volume propulsion and pressure gradient generation, first by unloading and later by decoupling native cardiovascular interactions, which reduces cardiac load and energy consumption while increasing organ perfusion in the face of disease. However, there is a potential price: native coupling evolved to optimize blood flow dynamics and the complex interplay between individual cardiovascular components and interposing organs like the lung. Disrupting native coupling with mechanical support risks decompensation if the heart and lung cannot tolerate these changes.&#13;
&#13;
One particularly concerning consequence of altered coupling is that upwards of 40% of patients with left-sided mechanical support face ensuing right heart failure, which requires urgent action and often is associated with even higher mortality rates. We hypothesized that better understanding of right heart function and the mechanisms of right heart (in)tolerance to left-sided support will improve device utility by aiding device selection as well as titration throughout a patient’s clinical course. In particular, we focused on right and left ventricular coupling, which consists of serial coupling across the closed-loop cardiovascular circuit, and parallel coupling that enables intracardiac interdependence and force transmission between the ventricles. Each interaction plays a critical role in a patient’s tolerance to mechanical support and optimal setpoint.&#13;
&#13;
We used a series of controlled porcine experiments to evaluate right and left heart coupling during mechanical support. In each set of experiments, we induced graded models of disease ranging from health to progressive impairment, enabling evaluation of mechanical support across a spectrum of right and left heart states. Through these studies, we improved mechanistic understanding of the differences between right and left heart function, and how those differences dictate the response to left-sided support. Specifically, we found that pulmonary vascular compliance enabled a unique right heart adaptability to varied flow, but that limitations in compliance due to disease yielded right heart intolerance to support. We leveraged the indwelling pump to dynamically alter load in the system, creating a method to rapidly evaluate pulmonary vascular compliance adaptability and therefore predict the need for right-sided support. Finally, we created a metric using device-organ interactions for tracking right-left coupling over time, which can aid optimization of device speed based on relative right and left ventricular volume setpoints. Translation of these findings to the clinic could better inform use of mechanical circulatory support technologies with the goal of improving outcomes for cardiogenic shock patients.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158818</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Long-timescale Behavior of Positive DC Streamer Coronas</title>
<link>https://hdl.handle.net/1721.1/158816</link>
<description>Investigation of Long-timescale Behavior of Positive DC Streamer Coronas
Strobel, Lee R.
Positive DC streamers are filamentary low-temperature discharges that are relevant to many applications, including sterilization, ionic wind generation, agriculture, and atmospheric electricity. Even when excited by a DC voltage, streamers in atmospheric-pressure air typically self-pulsate with a frequency of several kilohertz. The generally accepted explanation for DC streamer self-pulsation is that it is driven by recovery of the electric field near the anode tip, due to electrostatic removal of ionic space charge from the inter-electrode gap over inter-pulse timescales. However, this theory has not been validated, either experimentally or numerically. Most prior works investigating DC streamers have focused on the streamer propagation phase (a few tens of nanoseconds); few have investigated longer timescales, including the bridging of the electrode gap by the streamer and the subsequent current pulse (hundreds of nanoseconds), and the period in between streamer pulses, leading up to initiation of the next streamer discharge (hundreds of microseconds). The work presented in this thesis focuses on investigation of the longer timescales of positive DC streamer development in a tip-to-plane geometry, in particular beyond the streamer propagation phase, through the current flow and inter-pulse phases. This begins with an experimental study to measure the long-timescale development of the electric field inside a streamer corona using the E-FISH laser diagnostic technique. This shows some surprising results, which do not seem to be consistent with the theory of DC streamer self-pulsation being driven by electric field recovery at the anode. The near-anode electric field is not observed to recover during the inter-pulse period; instead, the near-anode behavior seems to be dominated by a persistent glow discharge, and a curious wave-like feature is observed in the electric field, traveling towards the anode on ionic timescales.
This is followed by the development of a 1.5D reduced-order numerical model of a DC streamer, which is optimized for solving over long timescales via a ‘triple-stack’ of transient solvers. The model is able to fully resolve the boundary sheath layers of the plasma and can capture detailed behavior of the cathode sheath development during bridging via the use of a kinetic flux boundary condition for the charged species. This model is first applied to modeling the bridging and current flow phases of streamer development, and its prediction shows a good qualitative match to the behavior of the experimental current pulse. Parameter sweeps show that the streamer current pulse is sensitive to the assumed radial behavior and the rate of electron-ion recombination, but insensitive to the applied boundary conditions or secondary emission. The final section describes an extension of the 1.5D streamer model to simulate the streamer inter-pulse phase and the initiation of a second streamer. It is shown that initiation of a second streamer can be predicted by a fluid model and that radial expansion of positive ions plays an important role; however, it has proven difficult to integrate that effect into the 1.5D model. The model results are consistent with streamer self-pulsation being due to electric field recovery; however, comparison with the results of the E-FISH experiment suggests there may be different mechanisms driving positive DC streamer self-pulsation, depending on whether or not a glow discharge is present on the anode.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158816</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Modeling of Biological Function</title>
<link>https://hdl.handle.net/1721.1/158814</link>
<description>Computational Modeling of Biological Function
Khodaee, Farhan
How biological function emerges from complex molecular patterns is a fundamental question in biology. Addressing it requires a deep exploration of the concepts of genotype and phenotype, which serve as the foundation of this inquiry. This dissertation provides a quantitative, computational approach to dissecting the dynamic relationship between genotype and phenotype. In particular, recent advancements in high-content genotyping methods, such as genome-wide association studies (GWAS) and single-cell RNA sequencing, have provided powerful tools for mapping the molecular basis of biological function, but they have also introduced challenges due to the high dimensionality, vast combinatorial possibilities, and multimodal characteristics of the data. The overarching goal of this dissertation is first to provide a critical discussion of theories of genotype and phenotype as they relate to biological function, and then to propose new methods to map their relationship. Specifically, we present an integrated genetics framework designed to analyze and interpret the manifold of genotypes and their associated phenotypes simultaneously. We applied this approach to develop a multimodal foundation model for human transcriptomics at the cellular level. To further test the capabilities of this method, we apply it to dissect the aging process. The results of this study provide novel concepts and methods for analyzing genetic data along with phenotypic information at higher resolution. Moreover, the results uncover potential cross-tissue biomarkers that are undetectable through conventional gene expression analysis alone. Overall, this study aims to advance our understanding of the dynamic interplay between gene patterns and phenotypic manifestation and demonstrates the potential of computational modeling in uncovering new dimensions of cellular function and complexity.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158814</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking carbon fluxes across ocean interfaces using dissolved gas observations</title>
<link>https://hdl.handle.net/1721.1/158813</link>
<description>Tracking carbon fluxes across ocean interfaces using dissolved gas observations
Traylor, Shawnee Nicole
The cycling and exchange of carbon between Earth’s systems play a pivotal role in regulating climate, yet two major carbon fluxes remain poorly constrained: the biological carbon pump (BCP) and carbon release from Arctic permafrost. This thesis focuses on dissolved gases as tracers and drivers of these processes through both autonomous and field-based observations. It encompasses (i) improvements to sensor-based measurements of O₂, (ii) the use of these measurements to assess the strength of the BCP in two distinct export regimes, and (iii) isotopic approaches to carbon dioxide (CO₂) and methane (CH₄) dynamics at a coastal permafrost site. The first part of the thesis is centered around the NASA EXPORTS campaign and studies the BCP at two contrasting field sites. Using autonomous platforms, we evaluated carbon export at both sites and demonstrated that at the lower-productivity site, a greater proportion of fixed carbon was routed to sinking particulate organic carbon (POC), while the higher-productivity site yielded near-equal proportions of dissolved organic carbon production and sinking POC. These findings underscore the value of autonomous sensors in capturing spatial and temporal variability in oceanic carbon cycling. The second part of this thesis shifts focus to the Arctic, where rapid warming threatens to mobilize the vast (~1,500 Pg) store of carbon currently held in permafrost. This study presents observations from the spring thaw at a coastal Arctic site and demonstrates that even sites with high CH₄ and CO₂ concentrations drew less than 10% of their carbon from ancient permafrost sources. The variability in CH₄ and CO₂ emissions reflects the complex interplay between hydrological changes, primary productivity, and microbial processes. The research highlights the need for regular monitoring of Arctic rivers, which integrate changes in the terrestrial system, as a potential early warning system for abrupt permafrost thaw.
This thesis leverages the fundamentals of dissolved gas geochemistry to examine key climate-relevant biogeochemical cycles across diverse environments that are sensitive to global change. These insights contribute to refining Earth system models and emphasize the need for expanded monitoring to predict future shifts in global carbon cycling and climate dynamics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158813</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lagrangian perspective of mesoscale biophysical interactions in the subtropical ocean</title>
<link>https://hdl.handle.net/1721.1/158812</link>
<description>A Lagrangian perspective of mesoscale biophysical interactions in the subtropical ocean
Jones-Kellett, Alexandra E.
Most of the ocean’s kinetic energy resides at the mesoscale, which includes highly dynamic physical perturbations that persist for months, a biologically relevant timescale for phytoplankton growth and bloom development. Importantly, mesoscale currents and the associated biological responses (i.e., biophysical interactions) are not spatiotemporally static, so they are difficult to characterize. In this thesis, we interpret phytoplankton observations in an objective Lagrangian manner, i.e., with a frame of reference that follows the motion of water parcels experienced by drifting organisms. We build a Lagrangian coherent eddy tracking algorithm that identifies the boundaries of water masses trapped for a month or longer. Using this tool, we assess the variability of the lateral advective properties of eddies across the North Pacific Subtropical Gyre, finding that only half of the remotely sensed eddies identified by the traditional Eulerian sea level anomaly method trap waters for these timescales. We then statistically compare satellite-observed chlorophyll-a anomalies associated with eddies that trap versus those that mix across their boundaries. Lagrangian coherent vortices have more anomalous biological signatures in the gyre, so we argue that the role of leaky eddies in altering biogeochemistry may be underestimated due to lateral dilution. We also highlight substantial regional and seasonal variability in the dominant biophysical interactions within the oligotrophic regime, helping to explain inconsistencies among in situ eddy observations across this region. Lastly, we show how the Lagrangian water mass histories of in situ samples shape the phytoplankton community in the open ocean, quantified with amplicon sequencing and internal genomic standards. In non-eddy waters, we found that cyanobacteria are advantaged over eukaryotic phytoplankton when lateral mixing is minimized for several months.
In or near mesoscale eddies, where vertical perturbations are a source of new nutrients, eukaryotic phytoplankton gene abundance has no dependence on the lateral mixing histories. The results suggest dispersal and niche generation drive phytoplankton variability but in different ways in and outside eddies. This thesis emphasizes how Lagrangian tools reveal mesoscale structures (otherwise invisible with Eulerian reference frames) that trap, transport, and transform ecosystems, generating phytoplankton patchiness and variability in the surface ocean.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158812</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Challenges in Object-Based Robot Navigation and Mapping</title>
<link>https://hdl.handle.net/1721.1/158807</link>
<description>Addressing Challenges in Object-Based Robot Navigation and Mapping
Lu, Ziqi
Developing fully autonomous systems that can safely traverse and interact with the environment has been a long-term objective in robotics. Many relevant tasks, such as planning and mobile manipulation, require the robot to possess an object-level understanding of the ambient world. In particular, it is crucial to maintain a globally consistent object-based map of the environment for these operations. Without external assistance – such as a prior map or a motion capture system – the robot needs to navigate and map the environment using an object-based SLAM system. This thesis is dedicated to addressing several key challenges in developing object SLAM systems. The first challenge arises from the ambiguity of object poses in single-view observations. When an object is observed from a single vantage point, it can often have multiple probable poses due to symmetry, occlusion, or perceptual failures. It is difficult for an object SLAM system to incorporate such ambiguous measurements. To address this issue, we introduce an ambiguity-aware object SLAM method. We use Gaussian max-mixture models to represent and efficiently track the multiple object pose hypotheses, and gradually disambiguate the poses to construct a globally consistent object-level map. The second challenge is the performance degradation of neural networks when deployed in novel robot operating environments, commonly known as the domain gap problem. Specifically, when a pre-trained 6DoF object pose estimator is used in a novel environment, its pose predictions are often corrupted by outliers, and quantifying their uncertainties becomes difficult. Using these noisy predictions with unmodeled uncertainties as measurements in an object SLAM system can lead to significant estimation errors. To mitigate the problem, we propose a SLAM-supported self-training pipeline for domain adaptation of 6DoF object pose estimators.
We exploit robust pose graph optimization (PGO) results to pseudo-label robot-collected images and fine-tune 6D object pose estimators. In particular, we develop an Automatic Covariance Tuning (ACT) method to model pose prediction uncertainties automatically during the PGO process. The third challenge is environmental changes. As changes occur in the scene, such as object insertion, removal, or rearrangement, the robot needs to efficiently detect these changes and update the map accordingly. While detecting and reflecting scene changes is relatively straightforward with handcrafted map representations like point clouds or voxels, it becomes significantly more difficult with learned radiance-field-based scene representations, such as Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) models. In this thesis, we develop a radiance-field-based 3D change detection method to identify 3D object-level scene changes. Our approach can rapidly detect object changes in cluttered environments represented with radiance field models from as few as a single post-change image observation. We also develop efficient update methods for NeRF and 3DGS models to reflect physical object rearrangements, guided by sparse post-change images. By addressing these challenges, this thesis advances the robustness and adaptability of object SLAM systems in real-world environments, paving the way for more reliable and autonomous robotic systems capable of complex interactions with the environment.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158807</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Sustainability in Agriculture and Food Systems</title>
<link>https://hdl.handle.net/1721.1/158806</link>
<description>Essays on Sustainability in Agriculture and Food Systems
Liu, Xinming
Agriculture and food systems face severe challenges from climate change, population growth, and food insecurity. These unprecedented issues leave millions vulnerable to hunger and malnutrition, underscoring the urgent need for a transition toward sustainable agriculture and food systems. The first research stream in this thesis focuses on promoting sustainability in agriculture, particularly through contract farming. In Chapter 2, we model contract farming as a bi-level optimization problem for a farmer and a company. We analytically demonstrate that different contract structures offer varying incentives for farmers to invest in quality-improving efforts, resulting in different levels of quality for agricultural products. Empirical analysis of production-level data supports these model predictions.&#13;
&#13;
The second research stream examines sustainability in food systems, specifically addressing the issue of food waste. In Chapter 3, we explore the impact of online grocery shopping on household food waste. Using large-scale Nielsen Consumer Panel data and instrumental variable analysis, we establish a statistically significant causal relationship, showing that households with a higher frequency of online grocery shopping experience lower waste per capita, a proxy for household food waste. These findings emphasize the role of digital platforms in fostering sustainable consumption and call for continued support for online grocery shopping to mitigate consumer-level food waste. In Chapter 4, we turn to retail-level food waste. We design and implement behavioral interventions aimed at reducing food waste in restaurant kitchens in Ghana. As a Sub-Saharan African country, Ghana faces both food waste and food insecurity. Through a six-week field experiment and a difference-in-differences analysis, we demonstrate that interventions focused on public and private interest led to 9% and 19% reductions in kitchen food waste, respectively. Follow-up surveys and further analyses reveal that this result may be related to the demographic/socioeconomic characteristics of workers (e.g., age and income), their perception of power distance within the management hierarchy, and their satisfaction with restaurant management.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158806</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Freight Distribution During Disasters: Measuring and Improving Operational Performance of Critical Systems</title>
<link>https://hdl.handle.net/1721.1/158803</link>
<description>Freight Distribution During Disasters: Measuring and Improving Operational Performance of Critical Systems
Rana, Shraddha
The frequency and intensity of weather-related natural disasters have increased in the last five decades. Moreover, the US faces more than a third of the disaster-related economic losses globally, the majority of which are from storms. As the demand for distribution of essential freight increases during disasters, physical and operational constraints decrease the capacity of freight distribution systems. Accordingly, public and private-sector stakeholders seek disaster preparedness and response interventions to ensure timely and economical distribution of vital freight to the population in need. The goal of this thesis is to facilitate better strategic and tactical planning that results in higher operational performance of essential freight distribution systems during disasters. We study two critical freight distribution systems, namely, downstream fuel distribution and full truckload transportation of general freight. Truckload transportation plays a vital role in distributing relief supplies during emergencies, and fuel is required for humanitarian operations such as running generators, moving emergency response crews, and evacuating the affected population. We collaborate with the US Federal Emergency Management Agency in response to multiple North Atlantic storms to measure the operational performance of these systems under regular and disaster conditions, and to identify public and private-sector interventions to improve performance during future disasters.
Our research contributes to the disaster modeling and management, fuel distribution, service procurement, and truckload procurement literatures by i) creating a system-level understanding of multi-server tandem cyclic queues with time-limited customers, ii) studying process improvement interventions for disasters, iii) quantifying the magnitude, geographical extent, timing, and duration of the causal effects of disaster conditions and consequent disaster relief activities on transportation procurement prices, iv) using data-driven analysis to design flexible truckload contracts that consider uncertainty in demand, and v) modeling dynamic pricing where the buyer offers the price to service providers. In this thesis, we provide several actionable insights for public and private-sector stakeholders to manage freight distribution during future disasters. We identify which process improvement interventions are best suited for which type of downstream fuel distribution system, and which storage terminals should be prioritized under a limited budget. We also measure how private-sector shippers should account for changes in truckload spot procurement prices during disaster episodes to manage their budgets and operational decisions. Moreover, we offer an alternative dynamic-priced truckload contract solution for public-sector shippers that deal with uncertain episodic demand in response to disasters. We demonstrate the impact of our research by applying it to multiple real-life case studies in the US. Furthermore, our methodologies and results are generalizable to other geographical regions as well as other disaster conditions. Thus, we hope that they will be used by public and private-sector actors to better manage essential freight distribution moving forward.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158803</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Foundation Models in Medical Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/158802</link>
<description>Natural Language Foundation Models in Medical Artificial Intelligence
Palepu, Anil
Over the past decade, the transformative rise of deep learning, particularly large language models (LLMs), has inspired experts across diverse fields, including healthcare, to think deeply about how artificial intelligence (AI) can revolutionize their fields. In this time, general foundation models, rather than narrow and highly specialized task-specific systems, have begun to emerge as the dominant paradigm. In healthcare, AI systems are already seeing widespread implementation in a variety of real-world use cases, perhaps without adequate evaluation and validation. Indeed, their often impressive ability to process natural language, a crucial medium of knowledge and communication in medicine, suggests that many of these modern foundation models may hold immense promise in the healthcare space. However, there exists a need to better study and understand their strengths, limitations, and robustness, particularly in more realistic and clinically relevant settings.&#13;
&#13;
This thesis focuses on two key classes of natural language-driven foundation models --- Contrastive Language Image Pretraining (CLIP) models, and Large Language Models (LLMs) --- and investigates how such models can encode and deliver useful clinical knowledge, for tasks like chest x-ray interpretation, differential diagnosis, history taking, and clinical management. As a whole, this thesis aims to further our collective understanding of the potential of natural language foundation models in medicine, while emphasizing the need for significant further research to address real-world challenges and understand the scopes in which such systems can be implemented safely and efficaciously.&#13;
&#13;
In the first chapter, I provide an overview of some relevant background, including contrastive language-image pretrained models, large language models, and their evaluation in the medical domain. &#13;
&#13;
In chapter 2, we improve the CLIP architecture for chest x-ray interpretation through a novel regularization technique applied during pre-training, and use this model for the zero-shot identification of chest x-ray findings.&#13;
&#13;
In chapter 3, we examine the reliability of CLIP-style models.  First, we evaluate their robustness to shortcut learning to understand the potential protective effects of text self-supervision. Next, we explore how conformal prediction can be used to control zero-shot classification performance and preempt compatible inputs for these CLIP-style models.&#13;
&#13;
In chapter 4, I describe the development of Articulate Medical Intelligence Explorer (AMIE), a conversational diagnostic AI fine-tuned with simulated medical dialogue. We evaluate the diagnostic capabilities of AMIE in two randomized studies with primary care physicians; first, in challenging clinicopathological conference (CPC) cases, and then in virtual text-based objective structured clinical examinations (OSCE).&#13;
&#13;
In chapter 5, we explore AMIE's management reasoning capabilities in two subspecialty domains: genetic cardiovascular disease and breast oncology. In these studies, we design domain-specific assessments for case management and compare AMIE's performance to that of generalists under subspecialist evaluation, as well as study its potential assistive effect.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158802</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Persian Lessons: Islamic Art in America, circa 1876–1925</title>
<link>https://hdl.handle.net/1721.1/158801</link>
<description>Persian Lessons: Islamic Art in America, circa 1876–1925
Goldberg, Roxanne
This dissertation investigates the prehistory of academic Islamic art history in the United States through the lens of American cultural history. It shows that between the US Centennial in 1876 and the inauguration of the Pahlavi dynasty in Iran in 1925, aesthetic theory and American citizenship were debated in the United States through objects identified, regardless of actual provenance, as “Persian.” This cultural phenomenon coincided with the acceleration of the transnational market for Islamic art, including architectural tiles, single-page paintings, and hand-knotted pile carpets. Examining instances of collecting, classifying, displaying, and otherwise handling and beholding Islamic art within different scales of home (family, nation, and international Christianity) and spaces of pedagogy (the living room, commercial gallery, advertisement, schoolroom, voluntary association, museum, and world’s fair), "Persian Lessons" reveals that notions of Persian art were instrumentalized in the service of competing American identities and ideologies in the late nineteenth and early twentieth centuries. &#13;
&#13;
Through an analysis of published writings, museum archives, and government documents, the study shows how the art critic S. G. W. Benjamin, who also served as the first US diplomat to Iran in 1883–85, constructed an ideal of the Persian artist to champion liberal individualism and public art education. An investigation into the presence of Muslim prayer carpets in American Christian homes reveals that Sarkis Nahigian and other diasporic entrepreneurs from the Ottoman Empire became partners to middle-class women, who jointly turned the Oriental carpet into a symbol of obligation to the American nation. Lastly, an examination of visual and textual evidence recasts a collection of more than 20,000 objects—given to the Museum of Fine Arts, Boston, and William Hayes Fogg Art Museum of Harvard University by design pedagogue and museum patron-administrator Denman Waldo Ross between 1888 and 1935—as a tool of “training for citizenship.” Ross regarded Persian textiles and single-page paintings as value-neutral objects for the design education that he believed bolstered participatory democracy. &#13;
&#13;
The fifty-year history that this dissertation covers concludes in the late 1920s and '30s with the establishment of the first official positions in Islamic art history at universities and museums in the United States. "Persian Lessons" thus shows that the founding of Islamic art history as an academic discipline was not simply imported from Europe. Professionalization stabilized a half century of domestic engagement with Persian art as a polysemic guiding light for American culture and society.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158801</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Impacts of Digital Sketching on Concept Generation in Early Stage Design</title>
<link>https://hdl.handle.net/1721.1/158800</link>
<description>Assessing Impacts of Digital Sketching on Concept Generation in Early Stage Design
Das, Madhurima
Digital design tools have become increasingly popular for supporting designers through different steps of the design process because they can simplify or automate components of those steps. Computer Aided Design (CAD) tools have assisted designers with tasks such as modeling and visualizing products prior to production and easily creating engineering drawings for manufacturing. Artificial Intelligence (AI) tools are being explored as collaborators that assist designers with interpreting sketches, assessing user needs, and generating ideas. Digital sketching tools such as tablets are a popular way for designers to easily create drawings in different colors and styles and to produce multiple drafts of a concept by copying and pasting elements from previous sketches. However, introducing new tools into the design process always has broader implications. For instance, using CAD tools too early in the process can lead to design fixation and result in designers thinking a concept is more refined than it actually is because of the high quality and polish of the visualization created. Many researchers are now investigating when and how AI tools are best used in the design process, but all struggle with the ethical implications of choosing appropriate training data and validating results, given the serious risks associated with misuse of AI. This dissertation focuses on one such digital design tool: tablets used for sketching. In an effort to expand the discipline’s understanding of how tablet use for sketching may enhance or detract from the design process, this thesis describes a series of studies investigating differences in ideation sketch attributes between tablets and paper/pen. Several of these sketch attributes have been linked with success in design: for instance, creating more sketches during ideation is linked with better eventual design outcomes.
This work investigates how sketch quality and quantity are affected by the tools used for a short, high-level brainstorming session as well as a more detailed engineering concept generation task. Subsequently, it explores differences in the content and novelty of ideas generated using each medium. Finally, it examines ways in which designers’ ideas evolve throughout the ideation process on both tablets and pen and paper. These aspects of the ideation process are important to understand, especially if the use of tablets leads to different results. The first area of investigation explores differences in sketch metrics, including quantity, quality, and understandability, between different sketching tools. These metrics have been found to be related to longer-term design outcomes and the perceived creativity of concepts, so understanding the effect of the tablet on these sketch metrics can clarify how using a tablet for sketching could enhance or detract from overall design performance. The first study in this section investigates differences between pencil, pen, and tablet sketches during a short concept generation exercise and finds that sketch quality was highest for pencil drawings and lower for pen drawings, but that tablet drawings do not differ significantly in quality from either pencil or pen drawings. Subsequently, a longer, engineering-design-specific concept generation exercise was conducted to compare tablet sketching to pen-and-paper sketching. Here, no differences were found in sketch quantity or understandability between paper and tablet. However, sketch quality, smoothness, and proportion/accuracy were all found to be higher on pen and paper than on tablet. The second area of investigation explores whether using a tablet influenced designers’ ideation patterns.
For instance, does the ability to copy and paste result in designers creating more interrelated ideas during brainstorming instead of exploring a variety of different design directions? No major differences were found in the overall quantity of concept evolution between tablet sketching and pen-and-paper sketching. However, tablet sketches across an ideation session had statistically significantly more concept chaining (related ideas appearing in a row) than paper-and-pen sketches despite having a similar number of related ideas overall. Additionally, concept chaining patterns were different for design prompts that had more than one functional requirement, since not all ideas addressed all parts of the design prompt. However, for these prompts, the results from the primary functional requirement exhibited the same concept chaining patterns, with more chaining present for tablet sketching than paper-and-pen sketching. The final area of investigation explores how designers’ ideas themselves are influenced by the sketching tool used, through explorations of concept novelty and concept evolution. One study investigated novelty differences in concepts generated on tablet vs paper and found no correlation between the sketching tool used and the novelty of concepts generated. A second study was conducted to specifically compare designers’ own understanding of the interrelatedness of their ideas with the interrelatedness that could be assessed from the functional similarity of their sketches. Here, designers’ and reviewers’ assessments were found not to be aligned. In other words, sketches as standalone design artifacts did not communicate the extent of interrelatedness of concepts that was clear to the designer. Furthermore, the sketching tool used (tablet vs paper and pen) did not influence the level of agreement between designer and reviewer assessments.
As such, using a tablet for sketching neither enhances nor detracts from the level of interrelatedness represented in sketches. These results suggest that assessing visual or functional similarity from sketches alone, regardless of the sketching tool used, may be insufficient for understanding all the relationships between a series of concepts as understood by the designer. Overall, these results indicate that using tablets as sketching tools has no clear significant benefit or burden for designers during ideation. Tablet use does not appear to enhance designers’ creative skills when it comes to sketch quantity or novelty, though it did result in lower-quality sketches, which has implications for the perceived creativity of concepts. Tablets were found to exhibit more instances of concept chaining than paper-and-pen sketches, though this trend did not persist when designers assessed their own concepts. Finally, this dissertation demonstrates that it is critical to seek designer input in identifying similarities across sketches, as functional similarity may not be aligned with designers’ own understanding of which of their ideas are related.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158800</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sorption-based atmospheric water harvesting: from atoms to applications</title>
<link>https://hdl.handle.net/1721.1/158799</link>
<description>Sorption-based atmospheric water harvesting: from atoms to applications
Zhong, Yang
Thirteen thousand trillion liters of water in the atmosphere is a natural resource found everywhere on Earth and available to anyone. Sorption-based atmospheric water harvesting (SAWH) is the extraction of water vapor using sorbent materials across a broad spectrum of relative humidity, which opens new avenues to address water scarcity faced by two-thirds of the world’s population. SAWH technologies gained significant attention in 2017 with the development of a solar-powered system utilizing metal-organic framework (MOF) sorbents to extract water from the air. While groundbreaking, this proof-of-concept device produced only a few milliliters of water, far from sufficient to meet even a single person’s daily water needs. A large gap thus remains between laboratory discoveries and real-world applications. This thesis aims to advance the understanding of SAWH technologies from atoms to applications. It begins with a multiscale perspective on SAWH technologies towards real-world applications, addressing knowledge gaps across various length scales. Through this multiscale approach, we developed a framework that can bridge material innovations with device realization. At the molecular scale, the thesis seeks to address a fundamental challenge: the inability to directly observe water sorption processes. To overcome this long-standing challenge, we introduced the use of cryogenic transmission electron microscopy (cryo-TEM) to probe water sorption in nanoporous materials at the single-pore level. This approach allows us to image water sorption and material structures with atomic resolution. Owing to the high resolution and in situ capabilities of cryo-TEM, we resolved a partially water-filled state of MOF crystals and observed that water molecules tend to occupy the centers of pores and fill neighboring pores once adjacent ones are filled.
This technique offers new insights into sorption mechanisms and holds significant potential for the development of new sorbent materials. Building on the material-device-bridging framework, we proposed a dual-stage device architecture inspired by multistage distillation in desalination, where condensation heat from one stage drives desorption in the next, increasing productivity and thermal efficiency. To guide material selection based on operating conditions, a universal thermodynamic model is developed to predict the efficiency of sorbent materials given their sorption isotherms. Additionally, this analysis reveals practical strategies to improve device-level sorption kinetics and heat transfer performance, pushing the technology toward thermodynamic limits. At the global scale, the framework enables the optimization of material deployment tailored to diverse climatic conditions. The real-world impact is further demonstrated through a techno-economic assessment, which illustrates SAWH technology’s competitiveness with bottled and tap water and pathways to further improve its cost-effectiveness. The thesis concludes with an outlook on future opportunities for SAWH technologies and a discussion of their societal and environmental impacts at scale, including their potential role in mitigating climate change.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158799</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference with Survival Outcomes via Orthogonal Statistical Learning</title>
<link>https://hdl.handle.net/1721.1/158798</link>
<description>Causal Inference with Survival Outcomes via Orthogonal Statistical Learning
Xu, Shenbo
The field of causal inference has recently made great strides in incorporating machine learning into confounding adjustment and estimation of heterogeneous treatment effects (HTE). However, important gaps remained for survival outcomes.&#13;
&#13;
First, overlap-weighted effect estimators based on machine learning nuisance models were not available for such outcomes. Thus, researchers wishing to mitigate bias and variance from poor overlap had to accept potential bias from nuisance model misspecification in its place. In Chapter 2, we fill this gap by proposing a class of one-step cross-fitted double/debiased machine learning estimators for cumulative weighted average treatment effects for both survival outcomes and competing risk outcomes. Our approach combines importance sampling, semiparametric theory, and Neyman orthogonality to resolve both model misspecification and lack of covariate overlap between treatment arms in observational studies with censored outcomes. We give regularity conditions for the consistency, asymptotic linearity, and semiparametric efficiency bounds of the proposed estimators. Through simulation, it is shown that the proposed estimators do not require oracle parametric nuisance models. We apply the proposed estimators to compare the effects of two first-line anti-diabetic drugs on cancer outcomes.&#13;
&#13;
Second, a wide range of machine learning methods (or “learners”) for estimating heterogeneous treatment effects were not applicable to estimating effects on survival outcomes, particularly in the presence of competing risks. In Chapter 3, we fill this gap by developing several once-for-all (orthogonal) censoring unbiased transformations that convert time-to-event data into continuous outcomes, such that all HTE learners and oracle rates for continuous outcomes can be borrowed. Our approach not only reduces the pressing need to develop various HTE learners for censored outcomes and especially competing risks, but also fully leverages the state of the art of existing schemes. Through direct application of HTE learners to these transformed continuous outcomes, we obtain consistent estimates of heterogeneous cumulative incidence effects, total effects, and separable direct effects. We provide generic model-free learner-specific oracle inequalities bounding the finite-sample excess risk. The oracle efficiency results depend on the oracle selector and estimated nuisance functions from all steps involved in the transformation. We demonstrate the empirical performance of the proposed methods in simulation studies.&#13;
&#13;
An important application area for causal inference methods, and one which originally motivated my interest in the field, is drug repurposing. In Chapter 4, we apply the methods of Chapter 2 to investigate whether metformin, a diabetes medication, might also have unexpected beneficial effects on cancer. The analysis encountered three major challenges: poor overlap between treatment groups, model misspecification, and pre-cancer death as a competing risk for cancer incidence. To resolve these issues simultaneously, we take balancing-weighted total cause-specific effects, controlled direct effects, and separable effects as causal estimands and develop balancing-weighted double/debiased machine learning estimators for both cumulative incidence functions and restricted mean time lost, with all estimators satisfying Neyman orthogonality. Using the Clinical Practice Research Datalink (CPRD) data, we find that metformin exhibits a preventive direct effect on cancer incidence relative to sulfonylureas. The results also demonstrate the advantage of choosing the average treatment effect for the overlap population as the target quantity.&#13;
&#13;
Finally, just as machine learning helps to automate nuisance model estimation for confounding adjustment and modeling effect heterogeneity, causally informed artificial intelligence (AI) and large language models (LLMs) might help to automate hypothesis generation for drug repurposing and surveillance opportunities. In Chapter 5, we explore this potential by developing a high-throughput screening approach to evaluate available drugs across multiple diseases. The screening methodology aims to identify drug-disease pairs with significant positive signals that could represent promising repurposing candidates, while also detecting pairs with negative signals that might indicate potential safety concerns, both critical aspects of pharmacoepidemiology research. This systematic approach leverages the convergence of expanding healthcare data sources and modern data science advances to establish a data-driven framework for drug repurposing discovery and pharmacovigilance.&#13;
&#13;
To conclude, we discuss the limitations of the proposed methods and provide possible future research directions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158798</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning over Hierarchical Abstractions for Long-Horizon Planning in Robotics</title>
<link>https://hdl.handle.net/1721.1/158796</link>
<description>Reasoning over Hierarchical Abstractions for Long-Horizon Planning in Robotics
Bradley, Christopher P.
We aim to enable robots to act intelligently in complex environments not explicitly designed around them. In order to do so, robots can simplify decision making by forming hierarchical abstractions of their world and planning within those representations. However, in reality, the types of abstractions robots are able to build are often poorly aligned with the planning problems they must solve, which limits how useful those abstractions can be in efficient decision making. For example, autonomous agents struggle in many real-world scenarios, particularly when their environments are large, cluttered with obstructions, or beset by uncertainty. These factors often imply that decisions made at higher levels of abstraction may not be easily refined to low-level plans, leading to backtracking during either search or execution. In this thesis, we consider contributions that improve the efficiency and quality of long-horizon hierarchical planning in robotics. Specifically, we propose approaches that explicitly reason about the imperfections of the abstractions available to robots during planning, and show how those methods can improve performance on a variety of tasks and environments.&#13;
&#13;
There are three primary settings for which we make contributions in this thesis. First, we consider the problem of solving tasks in partially revealed environments, wherein our abstract plans cannot be known to be feasible until we attempt execution because the world is not fully known at planning time. To solve this problem, we first develop a high-level planning representation that recognizes that actions entering unknown space can either succeed or fail with some probability. The first contribution of this work is then to learn to predict the feasibility and cost of actions within that abstraction from visual input. We also describe a method for planning that uses these predictions, and we show experimentally that our approach generates plans that complete tasks in unknown environments significantly faster than heuristic-driven baselines. Next, we discuss work in Task and Motion Planning (TAMP), where the world is fully known, but the problems require interaction with the environment so complex that we must intelligently guide search in order to find plans efficiently. We build upon our work in the first setting by once again learning to predict the outcome and cost of different sub-tasks within a TAMP abstraction. We further contribute a novel method to guide search in this setting toward plans that minimize cost given our learned predictions, and demonstrate the ability to find faster plans than established TAMP approaches both in simulation and on real-world robots. In our final problem setting, we consider solving TAMP problems in real-world, large-scale environments. To do this, we define an approach for constructing tractable planning abstractions from real perception using hierarchical scene graphs, ensuring that when we refine our abstract plans within these representations, the low-level trajectories still satisfy the given task’s constraints.
A major contribution of this work is an approach for planning efficiently in these domains by pruning provably superfluous information from the world model. The unifying aim of the work in this thesis is to develop approaches which enable robots to solve complex tasks in large-scale, real world environments without human intervention. To that end, across all contributions, we demonstrate experimentally on real robots the importance of accounting for imperfections in hierarchical abstraction during planning.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158796</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New tools for Bayesian optimal experimental design and kernel-based generative modeling</title>
<link>https://hdl.handle.net/1721.1/158795</link>
<description>New tools for Bayesian optimal experimental design and kernel-based generative modeling
Li, Fengyi
This thesis develops new computational approaches for two canonical problems in statistics and machine learning: optimal experimental design and generative modeling.&#13;
Optimal experimental design (OED) is important to model development for science and engineering applications and beyond, especially when only a small number of observations can be taken or experiments performed, due to resource limitations. In the Bayesian setting, a useful criterion for the importance of candidate experiments is the expected information gain (EIG) from prior to posterior, or equivalently, the mutual information (MI) between candidate observations and the parameters of interest. Yet estimating EIG for a given design can be quite challenging in nonlinear/non-Gaussian models, and for high-dimensional parameters and observations. &#13;
&#13;
In the first part of the thesis, we introduce new methods for estimating EIG based on transportation of measure. Specifically, we use marginal and conditional density estimates, obtained with semi-parametric transport models, in a Monte Carlo estimator. The density estimates are obtained by solving convex optimization problems. This framework is also compatible with implicit models, where one can simulate from the likelihood or prior but the associated density functions are unknown. We identify the optimal scaling of sample sizes between the "inner" density estimation steps and the "outer" EIG estimation, and demonstrate the efficiency of these choices numerically. If the dimensions of the parameters or observations are high, however, direct density estimation becomes intractable. Here, we use gradient-based information bounds, obtained via log-Sobolev inequalities, to identify optimal projections of the parameters and observations, and then apply our transport-based EIG estimation scheme. &#13;
&#13;
We next study the problem of cardinality-constrained observation selection to maximize MI in non-Gaussian settings, i.e., choosing the most informative subset of k observations from a candidate pool of size n &gt; k. Finding the exact solution to this combinatorial optimization problem is computationally costly, so we resort to greedy approaches based on computationally inexpensive lower bounds for MI. Here we again use log-Sobolev inequalities to construct such lower bounds for certain classes of non-Gaussian distributions, and exploit these lower bounds within the combinatorial problems. We demonstrate that our method outperforms random selection strategies and Gaussian approximations in many settings, including challenging nonlinear design problems with non-additive noise.&#13;
&#13;
In the second part of the thesis, we turn our attention to generative modeling, which can be understood as the problem of drawing new samples from an unknown distribution, from which a fixed sample is available. Our approaches employ kernel-type algorithms based on diffusion maps.&#13;
First, we propose an interacting particle system for generative modeling, based on diffusion maps and Laplacian-adjusted Wasserstein gradient descent (LAWGD). Diffusion maps are used to approximate the generator of the corresponding Langevin diffusion process from samples, and hence to learn the underlying data-generating manifold. LAWGD enables efficient sampling from the target distribution given the generator of the Langevin diffusion process, which we construct here via a spectral approximation with kernels computed from diffusion maps. Our method requires no offline training and minimal tuning, and can outperform other approaches on data sets of moderate dimension.&#13;
&#13;
Second, we propose a generative model combining diffusion maps and Langevin dynamics. Diffusion maps are used to approximate the drift term from the available training samples, which is then implemented in a discrete-time Langevin sampler to generate new samples. By setting the kernel bandwidth to match the time step size used in the unadjusted Langevin algorithm, our method effectively circumvents any stability issues typically associated with time-stepping stiff stochastic differential equations. We demonstrate the performance of our proposed scheme through experiments on synthetic datasets of increasing dimension, and on a conditional sampling problem arising in stochastic subgrid-scale parametrization of a dynamical system.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158795</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Spatial Constraints and Gender Equality: the Impact of COVID-19 Lockdowns on Work-from-Anywhere Dynamics and Gender Equality in Job Searches</title>
<link>https://hdl.handle.net/1721.1/158794</link>
<description>Essays on Spatial Constraints and Gender Equality: the Impact of COVID-19 Lockdowns on Work-from-Anywhere Dynamics and Gender Equality in Job Searches
Labuzova, Tatiana
This dissertation explores the intersection of spatial constraints and gender equality by leveraging the COVID-19 lockdowns as a natural experiment to study the impact of work-from-anywhere (WFA) dynamics on job search behaviors. The introduction of mandatory lockdowns drastically shifted the labor market landscape, prompting an increase in the demand for flexible work formats. Utilizing data from over one million job seekers on a large online employment platform, this research examines how the sudden wide availability of remote work options influenced job search activities differently across genders. A comparison of pre- and post-lockdown data shows that women significantly increased their engagement with geographically flexible job postings, reacting more strongly than men to the rise in remote job opportunities at both the job viewing and application stages. This shift also resulted in a narrowing of the wage gap in positions viewed and applied for during the post-lockdown period compared to pre-lockdown benchmarks. Notably, the study identifies variations in job search behavior among those likely constrained by domestic responsibilities. While differences in job posting views suggest an initial differential impact, such differences vanish at the application stage. Collectively, these results indicate that the pandemic-induced shift towards remote work has contributed to a gender-equalizing effect in the job market, including for those navigating domestic labor constraints. This research not only highlights the transformative potential of WFA arrangements in promoting gender equality but also provides insights into the mechanisms that drive these changes within the labor market. &#13;
&#13;
Keywords: organizational studies, gender inequality, flexible working arrangements, hiring, application processes, decision making, digital platforms.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158794</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems-Theoretic Framework For Safety-Driven Development of System Architectures</title>
<link>https://hdl.handle.net/1721.1/158793</link>
<description>A Systems-Theoretic Framework For Safety-Driven Development of System Architectures
Poh, Justin Wei Siang
Modern complex systems are increasingly expected to exhibit emergent properties such as safety and security even as they become more complex, interconnected, and reliant on software than ever before. Because of this evolution in the characteristics of these systems, the methods available today for developing system architectures no longer provide systems engineers with adequate design support. As a result, it is becoming increasingly challenging for systems engineers to develop system architectures that exhibit emergent properties like safety. This thesis addresses this problem by developing a safety-driven architecture development framework that enables the design of emergent properties such as safety into a system architecture from the beginning. The key idea is that the results from a hazard analysis process known as Systems Theoretic Process Analysis (STPA) should drive design decisions. The framework therefore starts with an initial STPA analysis of the system to determine how unsafe or undesirable behavior could occur. Structured and systematic processes are then provided to help systems engineers use the STPA results to develop the required control behavior of the system and explore possible system architecture options to implement that control behavior. This framework therefore enables systems engineers to make more informed early architectural design decisions driven by safety considerations. This framework is applied to an Urban Air Mobility (UAM) case study to demonstrate that it provides the necessary design support to enable the development and refinement of an air traffic management (ATM) architecture for UAM. When creating a system architecture, assumptions may also need to be made to mitigate the inherent uncertainties and lack of detailed information about the system at that early stage of design. 
However, these assumptions are used as the basis for design decisions, and it is important that they remain valid to avoid flaws in the architecture arising when underlying assumptions become invalid. Thus, this thesis also develops and demonstrates a supporting framework to help identify these underlying assumptions and ensure they remain valid both during system development and after the system is placed into operation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158793</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explicit formulas for weighted orbital integrals for the inhomogeneous and semi-Lie arithmetic fundamental lemmas conjectured for the full spherical Hecke algebra</title>
<link>https://hdl.handle.net/1721.1/158792</link>
<description>Explicit formulas for weighted orbital integrals for the inhomogeneous and semi-Lie arithmetic fundamental lemmas conjectured for the full spherical Hecke algebra
Chen, Evan
As an analog to the Jacquet-Rallis fundamental lemma that appears in the relative trace formula approach to the Gan-Gross-Prasad conjectures, the arithmetic fundamental lemma was proposed by Wei Zhang and used in an approach to the arithmetic Gan-Gross-Prasad conjectures. The Jacquet-Rallis fundamental lemma was recently generalized by Spencer Leslie to a statement holding for the full spherical Hecke algebra. In the same spirit, there is a recent conjectural generalization of the arithmetic fundamental lemma to the full spherical Hecke algebra. This paper formulates another analogous conjecture for the semi-Lie version of the arithmetic fundamental lemma proposed by Yifeng Liu. Then this paper produces explicit formulas for particular cases of the weighted orbital integrals in the two conjectures mentioned above.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158792</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooling with less: Design and simulation of multifunctional building components for a material-efficient, heat-resilient architecture</title>
<link>https://hdl.handle.net/1721.1/158791</link>
<description>Cooling with less: Design and simulation of multifunctional building components for a material-efficient, heat-resilient architecture
Gascón Alvarez, Eduardo
As temperatures rise globally and the demand for housing intensifies, designing affordable buildings for heat resilience and with low carbon emissions becomes crucial. Conventional air conditioning (AC) systems, although often an effective and accessible cooling solution, are energy-intensive and typically fail to consider local climatic and urban contexts. This work focuses instead on the opportunity of designing building components (such as slabs, blocks, roofs, or footings) for multifunctionality, integrating passive strategies and low-energy cooling systems within them in a material-efficient manner. Collapsing multiple functions into a single building component is typically regarded as a strategy that leads to better overall performance and reduced costs compared to implementing each function separately. However, the effectiveness of this strategy in cooling-dominated climates and in the context of the current climate crisis remains underexplored. &#13;
&#13;
The dissertation proposes new designs and evaluation methods for three multifunctional building components: multi-hollowed blocks (ceramic blocks with interior air pockets), shaped chilled slabs (shaped concrete slabs with embedded radiant ceiling systems), and integrated heat sinks (thermally activated concrete footings and roofs). Each component is designed to optimize a specific cooling strategy based on its context within the building and intrinsic material properties - thermal mass, radiant cooling, and ground/radiative cooling. Chapter 2 demonstrates how shape-optimized ceramic blocks can double the heat capacity of existing commercial solutions without additional material or reduce their weight by 33% while increasing the heat capacity by 23%. Chapter 3 presents slab geometries that achieve embodied carbon reductions of up to 50% relative to conventional prismatic floors while reducing operational carbon by 12-14%. Chapter 4 finds that buildings in temperate climates with a Floor Area Ratio (FAR) of up to 4.5 can meet 100% of the cooling demand exclusively through heat dissipation systems integrated into the building’s foundations and roof. Methodologically, this research combines heat transfer theory and analytical models with state-of-the-art shape optimization methods, resulting in a fast and accurate multi-objective simulation framework tailored for early design stages.&#13;
&#13;
This thesis provides, for the first time, validated methods and quantitative results that support the viability of multifunctional building components in cooling-dominated climates, optimizing the shape of walls, blocks, foundations, and roofs to improve their structural and thermal performance simultaneously, reducing their weight and improving buildings’ resilience to heat.  From a climate adaptation perspective, this approach ensures that buildings are ready for extreme heat even when active systems are unavailable due to, for example, a power outage. From a carbon mitigation perspective, the presented results highlight the potential to reduce the whole-life carbon of buildings by shape-optimizing components for enhanced thermal performance and material efficiency.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158791</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrospray Thrusters in Chemical-Electric Multimode Propulsion for Small Satellites</title>
<link>https://hdl.handle.net/1721.1/158790</link>
<description>Electrospray Thrusters in Chemical-Electric Multimode Propulsion for Small Satellites
Bruno, Amelia R.
Propulsion for small spacecraft is typically one of two modes, chemical or electric. These modes offer complementary propulsive performance: chemical propulsion provides high thrust and low specific impulse, while electric propulsion provides the inverse. As such, having access to both modes on the same spacecraft (i.e. multimode propulsion) is extremely useful. Unfortunately, the conventional propellants used by chemical and electric thrusters are highly incompatible, making this particularly difficult on small spacecraft that lack the mass, power, and volume to accommodate two separate propulsion systems. However, recent advancements in green monopropellants -- developed as less-toxic alternatives to hydrazine in chemical monopropellant thrusters -- have created a new family of ionic liquid monopropellants, making them natural propellants for a highly compact form of electric propulsion known as electrospray thrusters. This presents a unique opportunity for a propellant to be shared between two propulsion modes, decreasing the required mass and volume enough to be feasible for small spacecraft. This thesis examines the use of ionic liquid monopropellants in electrospray thrusters for a multimode chemical-electric propulsion system, focusing particularly on ASCENT, a high-maturity monopropellant with flight heritage in chemical thrusters.&#13;
&#13;
In this work, the performance of ASCENT in the MIT ion Electrospray Propulsion System (iEPS) is extensively characterized. Experimental work includes ion plume diagnostics, indirectly and directly obtained performance estimates, temperature-dependent performance estimates, and extended duration firing behavior. Preliminary studies of similar monopropellants are also conducted to assess their use in a multimode system. To support an upcoming technology demonstration flight, a new multimode-compatible iEPS thruster tank is designed, fabricated, and validated. The integration and operation requirements for this thruster in a flight-ready system are defined. Finally, the mission benefits of an ASCENT multimode system for CubeSats are compared against current commercial options using an Earth observation mission case study.&#13;
&#13;
This work finds that an iEPS thruster with ASCENT propellant has a thrust of 9-15 µN, a specific impulse of 600-750 seconds, and a total efficiency of 18-22%, depending on current setpoint. We find that ASCENT is slightly volatile in high vacuum, which causes time-dependent losses in efficiency and specific impulse from gradual propellant evaporation. This volatility may also increase thruster lifetime by mitigating the risk of thruster failure from emitter flooding. This work also identifies a modified version of ASCENT, created when the propellant is exposed to iron. This modified version produces dramatically higher thrust and thrust-to-power compared to standard ASCENT. Additionally, flight-ready configurations of a multimode system are defined for 6U, 12U, and 27U CubeSats. A case study analysis finds that the benefits of a chemical-electrospray multimode system are best realized at the 12U scale and above. Overall, this thesis provides critical insights into the performance, integration, and operation of electrospray thrusters with ionic liquid monopropellants. These results can then be used to enable a multimode propulsion system for small satellites.&#13;
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158790</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Crafting Cannabinoid Capitalism: Health, Sustainability, and Regeneration in the United States</title>
<link>https://hdl.handle.net/1721.1/158789</link>
<description>Crafting Cannabinoid Capitalism: Health, Sustainability, and Regeneration in the United States
Rewegan, Alexander Nicholas
This dissertation offers a critical exploration of cannabis legalization through an ethnographic study of small-scale “legacy” cannabis farmers in Humboldt County, California, as they navigate a complex transition from prohibition to commodity capitalism. I focus on their collective efforts to envision and practice “regenerative agriculture” as a response to both the historical injustices of prohibition and the compounding challenges of climate change. Drawing on history, STS, and the anthropology of food, agriculture, and medicine, I show how the logics of the war on drugs—rooted in carcerality, settler colonialism, and plantation agriculture—structurally and affectively persist in the so-called “post-prohibition” era, frustrating farmers’ efforts to resist monopolization and dispossession. Throughout, I attend to how the pervasive notions of “health,” “sustainability,” and “regeneration” are actively negotiated, modified, and put to use as material and symbolic tools in crafting medicinal, agricultural, and ecological futures. The Introduction weaves a tapestry of themes, histories, and theories that set the stage for the main ethnography. Through a blend of personal narrative, ethnographic vignette, and critical theory, it works to situate cannabis as a fluid and multifaceted object, highlighting people’s ambivalent hopes and cynicisms towards legalization. From alternative farming to molecularized biocapital, it articulates the intersecting influences of climate change, racial capitalism, and Indigenous sovereignties in ongoing projects to commercialize and legalize cannabis in a globally connected United States. 
Chapter One outlines my research methods and provides a social and narrative history of the study’s fieldsite, grappling with the anthropological complexities and complicities of studying working landscapes in a settler colonial “frontier ecology.” Chapter Two unpacks the shifting and embodied subjectivities of both farmers and workers as they reconfigure themselves in service of licensed production, highlighting sociocultural tensions and contradictions, the structural challenges of regenerative gardening, and the labor dynamics that shape these processes. Chapter Three analyzes how the inchoate and social nature of cannabis regulation both hinders and supports regenerative farming, emphasizing financial strain and the ever-pervasive role that surveillance technologies are playing in cannabis governance. Chapter Four shifts to the harvest season, exploring farmers’ collective efforts to market their products through the concept of “drug terroir,” unpacking how their values and practices became entangled with regional efforts to address wildfires and remediate leftover drug war infrastructures. Chapter Five moves off the farm and onto the topic of consumption as it historicizes the growing scientific literature about cannabis and pregnancy, demonstrating how carcerality continues to infiltrate maternal-fetal health science and conceptions of reproduction and health. The dissertation ultimately explores the ways in which American cannabis legalization often regenerates, rather than resolves, the legacies of prohibition and settler colonialism, while at the same time illuminating alternative and promising practices that might challenge these enduring forces.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158789</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decadal to centennial-scale climate interactions across the Indo-Pacific region</title>
<link>https://hdl.handle.net/1721.1/158787</link>
<description>Decadal to centennial-scale climate interactions across the Indo-Pacific region
Wang, Shouyi
An improved understanding of decadal to centennial-scale climate variability is critical for properly attributing recently observed low-frequency changes to internal climate oscillations and/or anthropogenic forcings as well as improving predictability of decadal variability. This thesis investigates ocean and atmospheric circulation changes and associated impacts within the tropical Indo-Pacific, where low-frequency changes in heat and freshwater impact the livelihoods of billions of people. Because the instrumental record is too short to investigate centennial variability, this thesis leverages numerical simulations and records from paleoclimate archives to provide insights into low-frequency tropical dynamics. In Chapter 2, we explore the dynamics that drive Indonesian Throughflow surface transport variability using a series of forced global high-resolution ocean simulations. We show that surface wind changes associated with Pacific decadal variability drive changes in the western boundary currents that modulate the Indonesian throughflow, consistent with mechanisms identified on interannual timescales. This work identifies a relationship between atmospheric circulation and transport through a key low-latitude passageway. Motivated by paleoclimate evidence of multi-year droughts in Southeast Asia, we investigate their potential drivers in Chapter 3 using an ensemble of coupled climate model simulations. These simulations illustrate that Indo-Pacific internal variability dominated Southeast Asian rainfall extremes during the last millennium, although the influence of volcanic eruptions was detectable. We find that multi-year pluvials were driven by both Pacific and Indian Ocean modes, while droughts were driven largely by the Pacific Ocean alone. Our analysis not only quantifies the role of internal and external drivers of Southeast Asian rainfall but also presents a probabilistic analysis framework that may be useful for water resources prediction. 
Lastly, in Chapter 4 we reconstruct the Indian and Pacific Walker circulations and the Indian Ocean Basin Mode by synthesizing tropical records (corals, tree-rings, and speleothems) of past ocean and atmospheric conditions to investigate basin interactions over the past four centuries. Our results demonstrate that Indo-Pacific climate was generally coupled on decadal-centennial timescales throughout this period but was notably decoupled in the early 19th century. Using climate models, we attribute this decoupling to a series of strong volcanic eruptions. Dynamically, we link this inter-basin decoupling to volcanically induced changes in hemispheric temperature gradients, which modulate the teleconnections across the Indo-Pacific. These past disruptions in basin interactions provide context for ongoing and simulated future decoupling under a high emission scenario, as global warming also alters interhemispheric temperature gradients. This thesis sheds light on the complex dynamics that drive ocean-atmosphere variability across the Indo-Pacific on decadal to centennial timescales.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158787</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact-aware and multi-modal robotic manipulation</title>
<link>https://hdl.handle.net/1721.1/158785</link>
<description>Contact-aware and multi-modal robotic manipulation
Zhao, Jialiang
Intelligent robotic manipulation has advanced significantly in recent years, driven by progress in foundational cognitive models, sensor-fusion techniques, and improvements in actuators and sensors. However, most contemporary robotic systems still lack the ability to effectively recognize and understand contact dynamics, which are critical for performing manipulation tasks beyond basic pick-and-place operations. This thesis argues and demonstrates that contact awareness is essential for the successful deployment of robotic systems, not only in structured environments such as factories but also in unstructured settings like domestic households. Achieving contact awareness necessitates advancements in three key areas: the development of improved contact-sensing hardware, the creation of more expressive frameworks for representing and interpreting contact information, and the design of efficient modality-fusion algorithms to integrate these capabilities into robotic action planning. This work addresses these challenges by (1) proposing novel mechanical designs that enable touch sensors to adopt more compact and versatile forms while enhancing their durability and manufacturability, (2) introducing a foundational representation learning framework capable of learning a shared tactile latent representation, which can be transferred across different sensors and downstream tasks, and (3) developing a compositional diffusion-based approach for action prediction that integrates tactile sensing signals with other perception modalities, thereby enabling learning across diverse environments and promoting policy reuse. Along the way, this thesis demonstrates that tactile sensors can be both compact and versatile, challenging common perceptions to the contrary. It also establishes that tactile sensing is indispensable not only for high-precision tasks, such as electronics assembly, but also for everyday activities, including cooking and tool usage.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158785</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formally Verifying Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation</title>
<link>https://hdl.handle.net/1721.1/158521</link>
<description>Formally Verifying Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation
Athalye, Anish
Hardware and software systems are susceptible to bugs and timing side-channel vulnerabilities. Timing leakage is particularly hard to eliminate because leakage is an emergent property that can arise from subtle behaviors or interactions between hardware and software components in the entire system, with root causes such as non-constant-time code, compiler-generated timing variation, and microarchitectural side channels. This thesis contributes a new approach using formal verification to rule out such bugs and build systems that are correct, secure, and leakage-free.&#13;
&#13;
This thesis introduces a new theory called information-preserving refinement (IPR) for capturing non-leakage in addition to correctness and security, implements a verification approach for IPR in the Parfait framework, and applies it to verifying hardware security modules (HSMs). Using Parfait, a developer can verify that an HSM implementation leaks no more information than is allowed by a succinct application-level specification of the device's intended behavior, with proofs covering the implementation's hardware and software down to its cycle-precise wire-I/O-level behavior.&#13;
&#13;
This thesis uses Parfait to implement and verify several HSMs, including an ECDSA certificate-signing HSM and a password-hashing HSM, on top of Ibex and PicoRV32-based hardware platforms. Parfait provides strong guarantees for these HSMs: for example, it proves that the ECDSA-on-Ibex implementation—2,300 lines of code and 13,500 lines of Verilog—leaks nothing more than what is allowed by a 40-line specification of its behavior.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158521</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Deep Probabilistic Models</title>
<link>https://hdl.handle.net/1721.1/158520</link>
<description>Guiding Deep Probabilistic Models
Garipov, Timur
Deep probabilistic models utilize deep neural networks to learn probability distributions in high-dimensional data spaces. Learning and inference in these models are complicated due to the difficulty of direct evaluation of the differences between the model distribution and the target. This thesis addresses this challenge and develops novel algorithms for learning and inference based on the guidance of complex parameterized distributions towards desired configurations via signals from auxiliary discriminative models.&#13;
&#13;
In the first part of the thesis, we develop novel stable training objectives for Generative Adversarial Networks (GANs). We show that under standard unary-discriminator objectives, most of the valid solutions, where the learned distribution is aligned with the target, are unstable. We propose training objectives based on pairwise discriminators that provably preserve distribution alignment and demonstrate improved training stability in image generation tasks.&#13;
&#13;
In the second part of the thesis, we introduce distribution support alignment as an alternative to the distribution alignment objective and develop a learning algorithm that guides distributions towards support alignment. We demonstrate the effectiveness of our approach in unsupervised domain adaptation under label distribution shift. Recent works have shown that under cross-domain label distribution shift, optimizing for distribution alignment is excessively restrictive and causes performance degradation. Our algorithm, which is based on support alignment, alleviates this issue.&#13;
&#13;
In the third part of the thesis, we develop a novel approach to compositional generation in iterative generative processes: diffusion models and Generative Flow Networks (GFlowNets). Motivated by the growing prominence of generative models pre-trained at scale and the high training costs, we propose composition operations and guidance-based sampling algorithms that enable the combination of multiple pre-trained iterative generative processes. We offer empirical results on image and molecular generation tasks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158520</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Technology Platform for Enabling Next-Generation Vacuum Electronic Devices Based on Silicon Field Emitter Arrays</title>
<link>https://hdl.handle.net/1721.1/158519</link>
<description>A Technology Platform for Enabling Next-Generation Vacuum Electronic Devices Based on Silicon Field Emitter Arrays
Karaulac, Nedeljko
As the demand for electronics with better performance and increased functionality continues to escalate, researchers are finding it increasingly difficult to surpass the limitations that solid-state electron transport imposes on conventional transistors. Nanoscale vacuum-channel transistors, in which the electron transport channel is vacuum instead of solid-state, offer a potential alternative device architecture beyond device scaling. Due to their ballistic transport and higher breakdown field, nanoscale vacuum-channel transistors are expected to show better performance in a wide variety of high-frequency, high-power, or harsh environment applications. Silicon field emitter arrays (FEAs) are a proven and mature technology that can be implemented as vacuum transistors, and they could also be used in vacuum integrated circuits. Many of the challenges regarding uniformity, reliability, and lifetime have been addressed in this technology. However, the scalability of the emission current remains a challenge. &#13;
&#13;
In this work, we develop a layout-independent fabrication process for silicon FEAs that improves the scalability of emission current with array size. The fabrication process begins by first fabricating field emitters everywhere across the wafer and then selectively etching field emitters to form individual arrays. Using this process, we present for the first time silicon FEAs with array sizes ranging from 1 μm² to 1 mm², and we obtain emission current ranging from 1 nA to 1 mA, which represents a range of six orders of magnitude. In order to facilitate design of future vacuum integrated circuits, we develop a circuit model for silicon FEAs based on measurements of the transfer and output characteristics. The circuit model is used to demonstrate a proof-of-concept inverter based on a silicon FEA and pull-up resistor that could potentially be fabricated as a vacuum integrated circuit. Lastly, we characterize and model the statistical variation in emission current to determine if it is feasible to build vacuum integrated circuits using the layout-independent fabrication process presented in this work.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158519</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Employing Magnetic Field Records at Key Moments in Planetary Evolution</title>
<link>https://hdl.handle.net/1721.1/158515</link>
<description>Employing Magnetic Field Records at Key Moments in Planetary Evolution
Mansbach, Elias N.
The analysis of the paleomagnetic record in meteorites provides a unique and powerful viewpoint on early solar system and planetary evolution. Indeed, meteorites are the only tangible objects that bore witness to these important events, making their records particularly precious. In this thesis, I present my dissertation work that addresses how the meteoritic paleomagnetic record and additional records from materials provided by return sample missions can be used to study three stages in early solar system and planetary evolution: I) The&#13;
protoplanetary disk; II) The initial melting on the first planetary bodies; and III) The early interior evolution of modern planets. I address Stage I through a paleomagnetic analysis of returned samples from asteroid Ryugu to determine the role of magnetic fields in stellar accretion in the distal solar system (Chapter 2). I address Stage II through a paleomagnetic analysis of the Acapulco primitive achondrite (Chapter 3) and micromagnetic modeling of the ferromagnetic mineral tetrataenite (Chapter 4) to elucidate core formation on small bodies. Lastly, I address Stage III through preparation for paleomagnetic studies of future returned samples from the Perseverance rover to determine the lifetime and properties of the Martian dynamo (Chapters 5 and 6). I end with a brief conclusion and ideas for future work.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158515</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Mechanics for Multi-step Robotic Manipulation Planning</title>
<link>https://hdl.handle.net/1721.1/158511</link>
<description>Leveraging Mechanics for Multi-step Robotic Manipulation Planning
Holladay, Rachel
This thesis focuses on enabling robots to robustly perform complex, multi-step manipulation tasks, like chopping vegetables or wielding a wrench. Completing such tasks requires a robot to plan and execute long sequences of actions, where each action involves many connected, discrete and continuous choices that are critically impacted by constraints relating to force, motion and contact. To tackle this, this thesis contributes models and algorithms that exploit the physics and geometry of the world in order to address the dual challenges of long-horizon decision-making and acting under uncertainty. We apply this in the context of three domains: in-hand manipulation, forceful manipulation and briefly-dynamic manipulation.&#13;
&#13;
First, to reorient a grasped object, we develop a sampling-based motion planner to generate sequences of pushes that slide the object in-hand. We derive an abstraction for pushing to enable the planner to reason about frictional constraints. Second, we focus on forceful manipulation tasks, such as opening a childproof medicine bottle or twisting a nut on a bolt, where the robot's planning choices are impacted by the need to exert force. We define constraints that explicitly consider torque and frictional limits and integrate these into an existing task and motion planning framework. We leverage cost-sensitive planning to enable the robot to generate plans that are robust to uncertainty in the physical parameters. Finally, we frame planning with dynamic actions, like shoveling or toppling, as requiring the robot to reason about both action uncertainty and potential dead ends. We learn a simple action model and formulate a sample-based manipulation planner that guards against dead ends in the face of uncertainty. Throughout this thesis, we validate the practical applicability of our model-based approaches by evaluating them on real robots.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158511</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximation and System Identification Techniques for Stochastic Biomolecular Systems</title>
<link>https://hdl.handle.net/1721.1/158509</link>
<description>Approximation and System Identification Techniques for Stochastic Biomolecular Systems
Grunberg, Theodore W.
Many biomolecular systems can be modeled as chemical reaction networks with a set of relevant species interacting via chemical reactions. When the molecular counts of the species are small, the inherent stochasticity in the occurrence of the reactions plays an important role in the behavior of the system. This stochasticity presents opportunities for system identification, since when a large population of cells is measured, one has many samples from the underlying distribution of the stochastic model. On the other hand, using the stochastic models of chemical reaction networks, given by continuous-time Markov chains with countably infinite state spaces, creates computational and analytical difficulties when performing analysis or system identification. Therefore, one must turn to approximate models: reduced-order models that exploit timescale separation between different sets of chemical reactions, or deterministic and diffusion approximations that replace the continuous-time Markov chain with an ordinary differential equation or a stochastic differential equation, respectively. This thesis makes contributions in both directions, rigorously justifying such approximations as well as developing the theory to perform system identification on the approximate models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158509</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Magnonics in Antiferromagnets and Cavity Spintronic Devices</title>
<link>https://hdl.handle.net/1721.1/158508</link>
<description>Hybrid Magnonics in Antiferromagnets and Cavity Spintronic Devices
Hou, Justin T.
Hybrid dynamic systems combine advantages from different subsystems for realizing information processing tasks in both classical and quantum domains. Magnons, the collective spin wave excitations in magnetically ordered materials, have recently attracted great attention for realizing hybrid dynamic systems. In this thesis, we develop hybrid magnonic systems with reduced complexity, improved scalability, and new functionality. In the first work, by utilizing the van der Waals antiferromagnetic material CrCl3, we realize strong magnon-magnon coupling within a single material, simplifying the design of magnon-magnon hybrid systems, which conventionally require two magnetic materials. Secondly, by utilizing planar microwave resonators, we realize on-chip, lithographically scalable, and circuit-quantum-electrodynamics-compatible magnon-photon hybrid systems. Strong magnon-photon coupling with three orders of magnitude reduction in spin number is demonstrated due to the reduced effective cavity mode volume. Moreover, the on-chip design, featuring substantial coupling strength, enables the integration of spintronic techniques to control the magnon subsystem dynamics via electrical currents. Along this line, in the third work, we theoretically propose a novel microwave oscillator device: a spin-torque-oscillator maser, which combines a spin-torque oscillator with a resonant cavity. This device aims to overcome the limitations of area, power, and linewidth inherent to traditional spin-torque nano-oscillator devices. In the fourth work, we experimentally realize a tunable magnon-photon hybrid system that leverages the spin-torque effect to electrically modulate magnon dissipation. We observe distinct linewidth modulation effects in systems with different cooperativities. Finally, we suggest methods to enhance the efficiency of magnon dissipation tuning while reducing power consumption, thereby laying groundwork for the development of spin-torque-oscillator masers.
This thesis work serves as a foundation for future advancement of hybrid magnonic systems, highlighting their potential for both fundamental research and practical device applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158508</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Representation Learning for Agentic AI Systems</title>
<link>https://hdl.handle.net/1721.1/158506</link>
<description>Multimodal Representation Learning for Agentic AI Systems
Andonian, Alexander
Modern artificial intelligence (AI) is poised to transform the scientific process, from ideation and experimentation to peer review. Many researchers posit that emerging generalist AI “agents” will soon no longer be mere tools, but equal partners in scientific exploration. In this work, we contribute to this evolving landscape through converging lines of research focused on developing and evaluating more efficient and interpretable AI systems, spanning both vision and language domains, and their applications to scientific evaluation and review. Our research focuses on three key areas. First, we introduce a novel framework to enhance the efficiency and robustness of cross-modal representation learning methods. Our approach utilizes progressive self-distillation and soft image-text alignments to model the many-to-many correspondences found in noisy web-harvested datasets. Extensive evaluation demonstrates that our method consistently outperforms CLIP across multiple benchmarks, including improved robustness to natural distribution shifts. We extend this framework to zero-shot open vocabulary detection, introducing augmentation, architectural and self-training strategies for improving vision-text feature alignment. Evaluation on long-tail detection benchmarks demonstrates state-of-the-art performance, with competitive performance for unseen classes, as well as superior transfer to additional datasets. Finally, we present the Review Integrated Scientific Evaluation (RISE) benchmark, a novel framework for assessing AI performance in understanding, critiquing, and providing constructive feedback on scientific manuscripts. Our study compares AI-generated reviews against human expert evaluations, revealing both the promising capabilities and current limitations of AI in scientific peer review. 
The dissertation concludes by proposing future directions for AI-accelerated science, emphasizing the need for collaborative human-AI scientific communities and the development of evaluation methods for higher-level autonomous capabilities in scientific domains. Altogether, this work contributes to the ongoing discourse on AI’s role in scientific research and paves the way for more rigorous integration of AI systems into the scientific process.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158506</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A biogeochemical investigation of Trunk River Lagoon, Falmouth, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/158505</link>
<description>A biogeochemical investigation of Trunk River Lagoon, Falmouth, Massachusetts
Dumit, Diana
This thesis delves into the intricate dynamics of lipid biomarker creation, deposition and preservation within Trunk River Lagoon, encompassing sediments, microbial blooms, and mats. Through a multi-faceted approach, the research uncovers the interplay between natural processes and anthropogenic influences, shedding light on the evolutionary trajectory of this aquatic ecosystem. From ancient sediment records to contemporary microbial communities, each aspect offers unique insights into environmental changes and the implications for interpreting biomarker signals.&#13;
&#13;
In Chapter 2, we employ radiocarbon dating, stable isotope geochemistry, and lipid biomarker analyses on a 2-meter sediment core spanning 3000 years, revealing shifts from a freshwater to a brackish environment with evidence of anthropogenic contamination. In Chapter 3, biomarker analyses on active blooms unveil a diverse microbial community receiving organic inputs from various sources, including sewage. Moreover, lipid analyses reveal rapid sulfurization of organic matter in the water column. In Chapter 4, attention turns to the preservation of biolipids within modern microbial mats. Through detailed analysis, the study reveals primary diagenetic processes such as hydrogenation and sulfurization, highlighting the complexities involved in interpreting biomarker distributions accurately. &#13;
 &#13;
Overall, this research underscores the necessity of comprehensive lipid analysis in modern environments to accurately interpret biomarker distribution and abundance. These findings not only advance our understanding of sedimentary records and biomarker signals but also emphasize the complex interplay between natural processes and anthropogenic influences in shaping contemporary aquatic ecosystems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158505</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Acquisition of Formal Semantics in Statistical Models of Language</title>
<link>https://hdl.handle.net/1721.1/158503</link>
<description>On the Acquisition of Formal Semantics in Statistical Models of Language
Jin, Charles C.
The increasingly impressive performance of recent large language models raises a crucial question: to what extent can such models, trained solely on text, develop an understanding of language grounded in the semantics of the underlying domain? Progress on this question carries significant practical and philosophical implications for the relationship between meaning, understanding, and the capacity to exhibit seemingly intelligent behavior.&#13;
&#13;
This thesis makes two primary contributions. First, it develops a scientifically rigorous approach to studying what statistical models of language can understand about language based on the formal semantics of programming languages. Specifically, it leverages the probing classifiers framework: training small classifiers to find encodings of program semantics within the model's internal representations. A main insight is that the clean separation between syntax and semantics in this domain allows for greater control in experimental design. It introduces two new techniques. The first, semantic probing interventions, is a general methodology for distinguishing whether the probe's measurements reflect (1) that the learned representations of the language model encode semantics or (2) that the probe itself has learned to infer semantics from representations of pure syntax. The second, latent causal probing, is a formal framework for probing that provides a robust empirical methodology for studying whether language models are able to access the latent concepts that underlie the text they observe during training. A key innovation is to create a single structural causal model that unifies (1) the data generation process underlying the text used to train the language model and (2) the steps of a probing experiment. This makes it possible to conduct a causal analysis that intervenes on the data generation process to trace the influence of the latent variables in the training data through the model's internal representations.&#13;
&#13;
The second core contribution of this thesis consists of a series of experimental studies. Specifically, we train a language model on a synthetic grid-world navigation task, then probe the model's learned representations for encodings of the unobserved, intermediate world states. By leveraging the techniques we develop, the results deliver strong empirical evidence that statistical models of language are latent concept learners: capable of inducing the latent variables that underlie the generation of their training data, despite being trained only to model a conditional distribution over tokens.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158503</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Evolution for Parallel and Scientific Computing</title>
<link>https://hdl.handle.net/1721.1/158502</link>
<description>Language Evolution for Parallel and Scientific Computing
Churavy, Valentin
Scientists, working on the biggest problems facing humanity today, write and run large-scale computer simulations. It has been a decades-long dream of both scientists and programming language designers to make the development and use of high-performance computing easier. Many attempts have failed, perhaps because this is a hard problem, perhaps because the social motivation and the required steps to achieve success have not come together, and perhaps because solutions to date have only ever solved part of the problem. This thesis proposes a combination of features necessary to form a solution, starting from a bedrock that combines performance with high-level abstractions in a single language. The language needs to enable composable abstractions, or we are doomed to keep developing the single-shot applications of the past. These abstractions should enable code reuse across different forms of compute architectures, to allow users to keep up with the fluid landscape of accelerators. They should enable code reuse across different mathematical objects such as dense, sparse, and structured matrices. And they should enable code reuse for differentiable programming, to enable integration of techniques like sensitivity analysis and scientific machine learning. With the right methodology, these abstractions can compose with each other and specialize to the domain. I will demonstrate that the combination of high-level array-based abstractions and a low-level performance-portable kernel programming framework forms a potent combination for large-scale scientific computing. I will show its efficacy using real-world scientific codes. Furthermore, I will introduce a differentiable programming framework built on top of a general automatic differentiation engine operating at the compiler level.
The automatic differentiation framework outperforms the state of the art, is capable of synthesizing gradient functions from GPU kernels, and can differentiate a wide variety of parallel constructs. As the infrastructure supporting this language needs to be more sophisticated than that of yesteryear, new problems arise. This thesis solves some of these problems and demonstrates their solutions on a fluid dynamics code used in climate modelling, one of many imaginable applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158502</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Composing Foundation Models for Decision Making</title>
<link>https://hdl.handle.net/1721.1/158501</link>
<description>Composing Foundation Models for Decision Making
Ajay, Anurag
Recent advancements in conditional generative modeling have enabled models like DALLE and GPT-4 to generate high-resolution images and coherent text from brief prompts. However, developing a foundation model for decision-making is hindered by the scarcity and expense of collecting paired visual, language, and action data. To address this challenge, this thesis proposes a scalable alternative: a compositional model architecture that leverages separately trained expert models specializing in language, vision, and action. By reducing the need for extensive paired data collection, this approach maintains efficiency in solving novel decision-making tasks while mitigating the data scarcity problem. Our compositional foundation model employs a large language model for task planning, a video diffusion model to generate detailed video trajectories, and an inverse dynamics model to map videos into actions. We demonstrate the effectiveness of this approach in the context of table-top manipulation tasks. Furthermore, given the application of foundation models across various embodied agents, there is a growing need for systematically evaluating these models’ "common sense" understanding of the world. This evaluation is crucial for the successful deployment of embodied agents in real-world scenarios. To address this need, we introduce the first open-vocabulary benchmark for Embodied Question Answering (EQA). This benchmark assesses the foundation models’ ability to comprehend and reason about the world. In summary, by addressing data scarcity in developing foundation models for decision-making and establishing a benchmark for evaluating the reasoning capabilities of embodied agents, this thesis aims to advance the development of foundation models for decision-making.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158501</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Unified Framework for Visual Recognition and Generation via Masked Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/158500</link>
<description>Towards a Unified Framework for Visual Recognition and Generation via Masked Generative Modeling
Li, Tianhong
Recognition and generation are two key tasks in computer vision. However, recognition and generative models are typically trained independently, which ignores the complementary nature of the two tasks. In this thesis, we present a unified framework for visual data recognition and generation via masked generative modeling, and further demonstrate its superior power to address challenges across various applications. We will begin with MAGE, a novel framework that unifies image generation and recognition while achieving state-of-the-art performance on both tasks. We then extend it into vision-language multi-modal training through ITIT, which utilizes unpaired image and text data to train models capable of high-quality, bidirectional image-text generation – the recognition power enables accurate image-to-text captioning, while the generation power enables realistic text-to-image generation. Moreover, inspired by the synergy between image generation and recognition observed in MAGE, we introduce RCG, a framework that enhances the quality of unconditional image generation to the same level as class-conditional generation, by using representations learned in a self-supervised manner to guide the generative process. Lastly, we introduce Reparo to address the challenge of packet loss in video conferencing with the help of masked generative modeling, enabling the reconstruction of lost video data without traditional error correction methods. This ensures high-quality communication even under conditions of substantial data loss. These works demonstrate the power of the proposed unified framework, to not only push forward the state-of-the-art in individual downstream applications but also to provide robust, versatile solutions adaptable to a wide range of real-world problems in computer vision and beyond.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158500</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-cost Agents with Language Perception and Dynamic Inference</title>
<link>https://hdl.handle.net/1721.1/158499</link>
<description>Low-cost Agents with Language Perception and Dynamic Inference
Pan, Bowen
Designing efficient artificial intelligence agents presents significant challenges, particularly in terms of learning and inference costs. Traditional agents often suffer from high learning expenses due to their limited ability to generalize across diverse tasks and environments. Recent advances in large language models (LLMs) have shown strong generalization capabilities by leveraging high-level abstractions of the world through language. In this thesis, we propose leveraging language as a perceptual representation to enable LLM-based agents to perform vision-language navigation tasks with reduced data collection costs. We demonstrate that language not only facilitates the generation of efficient synthetic data but also serves as a bridge to minimize domain gaps between different environments. However, transformer-based agents are burdened with high inference costs, especially when handling long-horizon visual content. To mitigate this, we introduce two strategies: (1) reducing visual input redundancy through dynamic token selection, and (2) accelerating model inference using a memory-efficient Mixture of Experts (MoE) architecture. Together, these approaches offer a robust framework for enhancing both learning and inference efficiency in LLM agents.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158499</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safe and Ethical Implementation of Intelligent Systems</title>
<link>https://hdl.handle.net/1721.1/158497</link>
<description>Safe and Ethical Implementation of Intelligent Systems
Dai, Zheng
In the year 2024, the prospect of solving human-level tasks using intelligent systems is no longer the subject of science fiction. As these systems play an increasingly critical role in our day-to-day lives, it becomes ever more important to consider the safety and ethics surrounding their implementation. This is a multifaceted challenge spanning multiple disciplines, involving questions at the regulatory, engineering, and theoretical levels. This thesis discusses three projects that span these levels. We first explore the problem of tracing causal influence from training data to outputs of generative models. In our exploration we encounter the phenomenon of unattributability, and consider its scientific and regulatory implications. We next tackle the challenge of designing a high-diversity library of therapeutics that is depleted of dangerous off-target binders using intelligent systems, developing a suite of inference and optimization tools along the way. Finally, we derive universal bounds for the robustness of image classifiers that inform us of how safe these intelligent systems can be in theory. Together, these projects present a multilevel overview of the safe and ethical implementation of intelligent systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158497</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maliciously Secure Computation, Theory and Practice</title>
<link>https://hdl.handle.net/1721.1/158496</link>
<description>Maliciously Secure Computation, Theory and Practice
de Castro, Leo
Data analytics fuels countless innovations and reveals unparalleled insights, and these benefits only grow the more data is amassed. This has resulted in the size of datasets and the compute needed to manage them becoming too resource-intensive for even large companies to handle alone, fueling the rise of cloud computing and outsourced data management. A central problem with this outsourcing is security. How can parties ensure that an untrusted cloud is accurately running the prescribed protocol? More generally, how can two parties collaborate to run a computation over joint inputs, where both inputs remain private while still delivering the correct output? This thesis focuses on answering these questions by constructing secure computation protocols with low communication &amp; computation overhead. The protocols in this thesis include several concretely efficient constructions of private information retrieval, a functional commitment scheme for all functions, and a general two-party secure computation scheme that comes within polylogarithmic factors of the optimal communication and computation complexity. In addition to their efficiency, all protocols presented in this thesis guarantee protection against worst-case, malicious adversaries.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158496</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Spin Dynamics in Magnon and Quantum Spin Systems</title>
<link>https://hdl.handle.net/1721.1/158494</link>
<description>Interactive Spin Dynamics in Magnon and Quantum Spin Systems
Hu, Zhongqiang
Spintronics utilizes the intrinsic spin of electrons to design next-generation electronic devices, reducing power consumption and enabling innovative computing functions. Over the past decades, significant research interest has been directed toward two types of spin-based systems: collective excitations of spins, known as spin waves or magnons, in magnetic materials, and optically active spin defects as represented by nitrogen-vacancy (NV) centers in diamond, leading to the prosperity of magnonics, quantum sensing, and quantum information processing. As the understanding of dynamics in individual spin systems has deepened, recently there has been an increasing interest in the interactive dynamics within hybrid spin systems. This shift in focus reflects an increasing curiosity about how these complex interactions can be harnessed to further advance their microwave and quantum applications. However, several challenges persist, including the limited coherence length of magnons and the restricted frequency range of NV-based magnetometers, which will be tackled in this thesis. We first leverage the chirality of interlayer magnetic dipolar interactions to introduce an easily implementable system—antiparallel aligned magnetic multilayers—for realizing topological magnonic surface states and low-dissipation spin current transport in a tunable manner. We then expand the frequency window of NV-based magnetometers using nonlinear microwave-spin interactions, offering novel functionalities in quantum state control and sensing. We further exploit nonlinear spin dynamics by hybridizing NV centers with magnonic thin films, which not only amplifies the intensity of nonlinear resonance signals that are intrinsic to NV spins, but also enables novel frequency mixings through parametric pumping and nonlinear magnon scattering effects. 
We believe our study of interactive spin dynamics in hybrid systems involving magnons, quantum spin defects, and microwave photons helps optimize these systems for a wide range of applications in both classical and quantum domains.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158494</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling rational agents with limited capability</title>
<link>https://hdl.handle.net/1721.1/158493</link>
<description>Modeling rational agents with limited capability
Jia, Kai
In many scenarios, players exhibit inherent limitations in various aspects of their capability to generate maximally rational play in strategic games. Modeling such capability limitations and elucidating their implications will advance our understanding of the strategic interactions among players. In this thesis, I study two novel settings where players have limited capabilities. I formalize a hierarchy of capabilities and study related equilibrium concepts, computational complexity, solution algorithms, and the impact of varying capabilities on game outcomes.&#13;
&#13;
The first limited-capability setting is limited-perception games. I focus on a class of one-shot limited-perception games. Such games extend simultaneous-move normal-form games by presenting each player with an individualized perception of the true game. Players’ payoffs are determined by the true game hidden from players. The accuracy of a player’s perception is determined by the player’s capability level, with a higher level corresponding to a more accurate perception. I study both capability-oblivious and capability-aware players. A capability-oblivious player does not know they have limited perception and therefore plays the optimal strategy of their perceived game. I present payoff bounds and other predictable behavior of capability-oblivious players in a special class of limited-perception games. A capability-aware player reasons with the set of possible true payoff functions and other players’ perceptions and incentives to maximize their own objective (e.g., the worst-case payoff) based on their limited perception. I present novel formalizations of simultaneous-move equilibria and show the hardness of equilibrium solving. I further present positive results that (i) an approximate equilibrium has a compact, tractable representation; and (ii) a few classes of zero-sum games can be efficiently solved.&#13;
&#13;
The aforementioned efficiently solvable zero-sum games are reduced to solving nonsmooth convex programs. To this end, I present the Trust Region Adversarial Functional Subdifferential (TRAFS) algorithm for constrained optimization of unstructured nonsmooth convex Lipschitz functions. Unlike previous methods that assume a subgradient oracle model, I propose the functional subdifferential, defined as a set of subgradients that simultaneously captures sufficient local information for effective minimization while being easy to compute for a wide range of functions. Intriguingly, the TRAFS design also incorporates game-theoretical thinking. In each iteration, TRAFS solves a zero-sum game between the optimizer and a local approximation of the objective function to guarantee progress. The optimizer has access to step vectors in a local ℓ2-bounded trust region; the local approximation uses the functional subdifferential. TRAFS finds an approximate solution with an absolute error up to ϵ in O(1/ϵ) or O(\sqrt{1/ϵ}) iterations depending on whether the objective function is strongly convex, improving the previously best-known bounds of O((1/ϵ)^2) and O(1/ϵ) in these settings. TRAFS makes faster progress if the functional subdifferential satisfies a locally quadratic property; as a corollary, TRAFS achieves linear convergence (i.e., O(log 1/ϵ)) for strongly convex smooth functions. In the numerical experiments, TRAFS solves twice as many problems compared to the second-best method and is on average 39.1x faster on problems solved by both methods.&#13;
&#13;
The second limited-capability setting is limited-strategy games where a player’s capability limits the strategies available to them. I work with a formalization where a player’s strategy space is defined as programs in a Domain-Specific Language (DSL). A player’s capability limits the size of programs available to that player. I focus on characterizing the impact of player capability on game outcomes. I study a new game model called McDncDa derived from network congestion games. I show that it is computationally hard to determine whether an McDncDa instance is capability-positive (i.e., whether increasing a player’s capability level leads to a better payoff). I then study a parameterized special class of McDncDa called MGMG. I show that MGMG is always capability-positive, and it is socially capability-positive (i.e., the sum of all players’ payoffs always gets better if every player’s capability level is increased by one) if some resources in the game have increasing returns to scale despite the existence of multiple equilibria.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158493</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Sense of Training Large AI Models</title>
<link>https://hdl.handle.net/1721.1/158484</link>
<description>Making Sense of Training Large AI Models
Ahn, Kwangjun
Today, one of the most impressive applications of optimization is the training of large AI models. But currently such models are trained with ad-hoc heuristics at a very large computational cost, mainly due to a lack of understanding of their working mechanisms. In this thesis, we conduct a systematic study of large-model optimization, crucially informed by practical applications. The first part investigates two interesting phenomena regarding optimization of Transformer-based models, one of the most popular architectures for language modeling. We investigate how training Transformer-based models can lead to remarkable properties such as in-context learning, and we further discuss the main challenges associated with Transformer training. The second part of this thesis focuses on understanding the Adam optimizer, one of the most popular algorithms for training large models. We offer a new view on Adam based on an online learning perspective that elucidates the importance of Adam’s algorithmic components. Building on this new perspective, we also prove that Adam achieves the optimal convergence rate in various non-convex optimization settings, both smooth and non-smooth. The third part of this thesis focuses on the unstable convergence phenomenon in training large models. We identify its main characteristics from first principles, and discuss its causes and implications for learning. We then discuss its connection to popular flat minima optimization algorithms, and initiate a formal study of them by defining a formal notion of flat minima, and analyzing the complexities of finding them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158484</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convex Network Flows</title>
<link>https://hdl.handle.net/1721.1/158483</link>
<description>Convex Network Flows
Diamandis, Theo
This thesis introduces a new framework for flow problems over hypergraphs. Our problem formulation, which we call the convex flow problem, only assumes that the constraints on the flows over each edge are in some convex set. The objective is to maximize a sum of concave utility functions---one for the net flow at every node and one for each edge flow---subject to these constraints. This framework not only includes many classic problems in network optimization, such as max flow, min-cost flow, and multi-commodity flows, but also generalizes these problems to allow, for example, concave edge gain functions. As a result, our framework includes applications spanning a number of fields: optimal power flow over lossy networks, routing and resource allocation in ad-hoc wireless networks, Arrow-Debreu Nash bargaining, and order routing through financial exchanges, among others. This problem has a number of interesting properties, including a 'calculus' of flow sets, an equivalent conic form, and a natural generalization of many classic network flow results. &#13;
&#13;
We develop an efficient algorithm for solving the convex flow problem by constructing a particular dual problem that decomposes over the edges of the hypergraph. This dual problem has a number of useful interpretations and admits a straightforward specification: the dual function and its gradient can be evaluated using only simple subroutines which often have closed-form solutions. These subroutines suggest a clean, easy-to-use problem interface, which we provide in the open-source software package ConvexFlows.jl, written in the Julia programming language. We discuss implementation considerations, including how to handle important special cases, and we provide a simple interface for specifying convex flow problems. We show that our solver is significantly faster than the state-of-the-art commercial optimization solver Mosek, even for small problem sizes with limited parallelization. &#13;
&#13;
Finally, we consider the nonconvex flow problem with fixed costs on the edges, i.e., where there is some fixed cost to send any nonzero flow over an edge. We show that this problem has almost integral solutions by a Shapley--Folkman argument, and we provide a simple modification of our original algorithm for this nonconvex problem. We conclude by discussing a number of interesting avenues for future work.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158483</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsimonious Principles of Deep Neural Networks</title>
<link>https://hdl.handle.net/1721.1/158482</link>
<description>Parsimonious Principles of Deep Neural Networks
Huh, Minyoung
At the core of human intelligence lies an insatiable drive to uncover the simple underlying principles that govern the world’s complexities. This quest for parsimony is not unique to biological cognition but also seems to be a fundamental characteristic of artificial intelligence systems. In this thesis, we explore the intrinsic simplicity bias exhibited by deep neural networks — the powerhouse of modern AI. By analyzing the effective rank of the learned representation kernels, we unveil the observation that these models have an inherent preference for learning parsimonious relationships in the data. We provide further experimental results to support the hypothesis that simplicity bias is a good inductive bias for finding generalizing solutions. Building upon this finding, we present the Platonic Representation Hypothesis — the idea that as AI systems continue to grow in capability, they will converge toward not only simple representational kernels but also a common one. This phenomenon is evidenced by the increasing similarity of models across domains, suggesting the existence of a Platonic “ideal” way to represent the world. However, this path to the Platonic representation necessitates scaling up AI models, which poses significant challenges regarding computational demand. To address this obstacle, we conclude the thesis by proposing a framework for training a model with parallel low-rank updates to effectively reach this convergent endpoint.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158482</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Strategic AI Agents for Human-centric Multi-agent Systems</title>
<link>https://hdl.handle.net/1721.1/158481</link>
<description>Building Strategic AI Agents for Human-centric Multi-agent Systems
Jacob, Athul Paul
This thesis addresses the challenge of developing strategic AI agents capable of effective decision-making and communication in human-centric multi-agent systems. While significant progress has been made in AI for strategic decision-making, creating agents that can seamlessly interact with humans in multi-agent settings remains a challenge. This research explores the limitations of current approaches, such as self-play reinforcement learning (RL) and imitation learning (IL), and proposes novel methods to overcome these constraints. Modeling human-like communication and decision making is a crucial first step toward building effective strategic agents. The initial part of the thesis addresses this through two approaches. We start by developing a regret minimization algorithm for modeling actions of strong and human-like agents called piKL, which incorporates a cost term proportional to the KL divergence between a search policy and a human-imitation learned policy. This approach improves reward while keeping behavior close to a human-imitation learned policy, producing agents that predict human actions accurately while improving performance in the benchmark game of no-press Diplomacy. Then, we develop a procedure for modeling populations of agents that communicate with humans using natural language. Our sample-efficient multitask training scheme for latent language policies (LLPs) improves the reward obtained by these policies while preserving the semantics of language in a complex real-time strategy game. Building on these foundations, the second part of the thesis focuses on building strategic agents for human-centric multi-agent domains. The research introduces the DiL-piKL planning algorithm and its extension, RL-DiL-piKL, which regularize self-play reinforcement learning and search towards a human-imitation learned policy. These algorithms enable the training of Diplodocus, an agent achieving expert human-level performance in no-press Diplomacy. 
A significant milestone is reached with Cicero, the first AI agent to achieve human-level performance in full-press Diplomacy, integrating a language model (LM) with planning and reinforcement learning algorithms based on piKL. The final part of the thesis revisits language generation tasks, applying piKL to model pragmatic communication and improving LM truthfulness. It presents Regularized Conventions (ReCo), a model of pragmatic language understanding that outperforms existing best response and rational speech act models across several datasets. Furthermore, a novel approach to LM decoding is introduced, casting it as a regularized imperfect-information sequential signaling game. This results in the equilibrium-ranking algorithm, which consistently improves performance over existing language model decoding procedures.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158481</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Variation in Healthcare across Time and Providers using Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158480</link>
<description>Characterizing Variation in Healthcare across Time and Providers using Machine Learning
Ji, Christina X.
Modeling healthcare decisions and their outcomes is a complex problem. In addition to being affected by patient characteristics, the prognosis can vary depending on when the patient is receiving care, and treatment decisions can vary depending on who makes the decisions. In this thesis, we consider two axes of variation in healthcare: over time and across providers. For both axes, we focus on identifying when variation exists, characterizing the patients who are affected by such variation, and addressing shifts due to this variation. The solutions we propose draw ideas from causality and dataset shift.&#13;
&#13;
In the first part of this thesis, we address these three aspects for variation over time. First, we create an algorithm that can detect when a model is affected by change over time and identify sub-populations where the model is more affected. We use our algorithm to perform a large-scale study of temporal shifts in health insurance claims. We demonstrate changes over time are prevalent in healthcare and examine case studies to better understand the drivers of such changes. Next, we examine how to learn a model that can perform well on current data. As data from the current time period is limited, we consider several methods that can leverage sequences of historical data to learn a good image classification model for the final time step. We build a benchmark for evaluating these methods on sequences constructed from synthetic shifts and validate our conclusions on a real-world dataset.&#13;
&#13;
In the second part of this thesis, we address similar questions for variation across providers. First, we create a statistical approach to test whether significant variation exists across providers. Our approach involves learning a model of treatment decisions with provider-specific random effects. We perform a case study on first-line type 2 diabetes treatment and find significant variation exists across providers. Then, we develop an algorithm for identifying regions of patients with the most disagreement between providers. We formalize this as a causal inference problem, where disagreement is defined by the causal effect of the provider on the treatment decision. We illustrate this algorithm on first-line type 2 diabetes and Parkinson's treatment decisions and uncover regions of variation that align with uncertainty in clinical guidelines.&#13;
&#13;
In the third part of this thesis, we build a tool for examining the effects of variation over time or across providers for individual patients. We use a large language model built on electronic health record concepts to generate patient trajectories. To enable interventions on time and provider, we introduce new tokenizations for these concepts. We also incorporate a structural causal model for patient visits to allow for generation of interventional and counterfactual trajectories. We hope the model in this part of the thesis can be used to answer additional questions about how patient trajectories would change if they were treated during a different time period or by a different provider.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158480</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Interactions between Optical Fields and Atom-like Systems in Integrated Circuits</title>
<link>https://hdl.handle.net/1721.1/158479</link>
<description>Programmable Interactions between Optical Fields and Atom-like Systems in Integrated Circuits
Larocque, Hugo
Photons can interact with a wide variety of quantum systems and their ability to more easily preserve their coherence makes them ideal candidates for transmitting information between remote quantum information processors. Photonic integrated circuits (PICs), which can be manufactured with modern semiconductor fabrication, provide a platform in which such interactions can occur at scale. Implementing integrated devices that enable these interactions in programmable and scalable settings, while preserving sufficient interaction strength, remains a general goal in quantum photonics. Here, we implement device designs and architectures that improve current limits on the programmability and scalability of three types of optical interactions. More specifically, we explore the use of programmable multimode interference as a means for unitary transformations onto a set of optical spatial modes, optical resonators for high-extinction coherent modulators driven by RF signals, and large-scale silicon photonics for interacting with hybrid integrated quantum dot emitters.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158479</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Textiles for Physical Interactions</title>
<link>https://hdl.handle.net/1721.1/158478</link>
<description>Intelligent Textiles for Physical Interactions
Luo, Yiyue
Human-environment interaction is a fundamental aspect of our daily lives, involving the constant use of our sensory and motor systems to extract, process, and communicate information. However, capturing, analyzing, and reproducing these interactions pose significant challenges due to their pervasive, variable, and prolonged nature, as well as their unique character for each individual. Despite these challenges, it is essential to develop systems that can accurately capture and reproduce human-environment interactions for a wide range of applications, including human behavior studies, health monitoring, human-computer interactions, and robot imitation learning. This thesis focuses on developing seamlessly integrated, scalable manufactured sensing and actuating systems, as well as advanced computational pipelines to capture, analyze, and reproduce adaptive ubiquitous physical human-environment interactions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158478</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse and Structured Tensor Programming</title>
<link>https://hdl.handle.net/1721.1/158477</link>
<description>Sparse and Structured Tensor Programming
Ahrens, Willow
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these, and almost all prominent systems, can only handle dense rectilinear grids of values. Real-world tensors are often structured, containing patterns which allow us to optimize storage or computation, such as sparsity (mostly zero), runs of repeated values, or symmetry. Specializing implementations for structure yields significant speedups, but support for structured tensors is fragmented and incomplete. The heart of the problem is coiteration, simultaneously iterating over multiple tensors in a program, where each tensor format may have different internal structure. As each combination of structures requires a unique coiteration algorithm, existing frameworks struggle to abstract over the design space, instead hard-coding support for a few programs and/or a few structures. In this thesis, we build an abstraction for coiteration, enabling us to support both a wide range of programs and diverse tensor structures. We use a language, looplets, to describe the structure of tensors in tensor programs. Looplets allow the compiler to generate code to coiterate over any combination of structured tensor formats. The looplets language decomposes loops over sparse and structured formats hierarchically. This decomposition simplifies compilation, allowing us to capture key mathematical properties (such as x∗0 = 0, which motivates sparsity) with simple term rewriting. Building on looplets, we introduce a new language, Finch, for general structured tensor programming. Finch makes it easier to compute with structured tensors by combining program control flow and tensor structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data so that performance engineers can focus on experimenting with many algorithms. 
Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length-encoding, symmetry, triangles, padding, or blocks. Finch reliably utilizes the key properties of each structure, making it easier to write and optimize structured tensor programs. In our case studies, we show that this leads to dramatic speedups in diverse applications, including linear algebra, image processing, and graph analytics. Our abstracted design makes it easier to extend Finch to new tensor structures and programming models. Finch has been separately extended to support a DSL for symmetry-aware tensor programs and to support real-valued indexing.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158477</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paths to AI Accountability: Design, Measurement, and the Law</title>
<link>https://hdl.handle.net/1721.1/158476</link>
<description>Paths to AI Accountability: Design, Measurement, and the Law
Cen, Sarah H.
Algorithmic systems are increasingly intervening on human interactions and decisions, from selecting the content users see on social media to helping hiring managers choose candidates to interview. In recent years, the falling barrier between humans and AI has sparked fears about AI’s capabilities and elicited questions about the role that algorithms and, increasingly, AI should play in our lives. As society continues working towards answering these questions, this thesis argues that we must construct paths to AI accountability by determining who owes responsibility to whom in the AI ecosystem, upholding these responsibilities, and enforcing them. Pursuing AI accountability allows us to innovate while still acknowledging that AI is a technology developed and wielded by human actors. Furthermore, by focusing on the responsibilities of human actors, this approach builds on existing social and legal frameworks of accountability. Within this vast, multidisciplinary research area, this thesis centers on three aspects of AI accountability: design, measurement, and the law. In Part I, we examine the importance of designing responsible AI systems from the ground up, which involves exploring definitions of responsibility, methods for achieving them, and the ramifications (e.g., trade-offs) of responsible design. As demonstrations of design, we study three different contexts. Each context builds on a notion of responsibility, and we investigate how these notions—which include trustworthiness, fairness, and social welfare—arise and interact. We provide formal definitions of each notion, discuss their implications, and propose interventions for achieving them. In Part II, we turn our attention to AI measurement: quantifying AI behaviors and effects through systematic observations and procedures. 
We illustrate the importance of AI measurement through three case studies: (i) a black-box audit for social media algorithms; (ii) an estimator and experiment design for individual treatment effect estimation in the presence of spillover; and (iii) a user study testing whether users adapt to their recommender systems. In this part, we show how measurement can play a crucial role in compliance testing, analyzing AI behavior, and producing evidence that can inform decision-making (e.g., policy). In Part III, we discuss how the law can align incentives with AI accountability as well as challenges in realizing AI accountability in practice. We center our discussion on two works. The first seeks to fill a gap in the law around AI that arises from AI’s unintuitive and opaque nature, and argues that AI decision-subjects have a substantive right in the age of AI that we term the “right to be an exception.” While the first work studies a gap in the law, the second tackles practical challenges in carrying out the law. It examines how lacking both transparency and access to AI systems can frustrate the ability to monitor, evaluate, and audit AI systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158476</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utilization and Synthesis of Symbolic World Models for Safe, Generalizable, and Efficient Action</title>
<link>https://hdl.handle.net/1721.1/158475</link>
<description>Utilization and Synthesis of Symbolic World Models for Safe, Generalizable, and Efficient Action
Hunt, Nathan
Reinforcement learning with neural networks has proven incredibly flexible at learning to act in diverse environments. Model-based RL techniques have helped to ameliorate the dependence on large quantities of data that these models normally have. However, despite their flexibility, neural world models have several drawbacks. Symbolic world models, in comparison, are easier to verify (e.g. for safety concerns), more compatible with domain-independent planning techniques, and able to be learned or adapted with more limited data. In this thesis, I will demonstrate these advantages of symbolic world models in three projects. The first, VSRL, shows how we can use a symbolic world model to ensure that an RL policy is safe during both training and deployment and promote safe exploration. The second, SPARSER, presents a hybrid domain planner which uses world models in a planning domain description language. It showcases how we can exploit the event structure in the world model to enable more efficient planning. In the final project, PWM, I will explore learning a world model directly from observations and actions gathered from interacting with an environment. We combine symbolic and neural synthesis techniques to enable efficient world model synthesis even from visual observations. Together, these projects demonstrate the versatility and value of symbolic world models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158475</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The growth characteristics of indium antimonide as revealed by chemical etching and x-ray anomalous transmission.</title>
<link>https://hdl.handle.net/1721.1/158472</link>
<description>The growth characteristics of indium antimonide as revealed by chemical etching and x-ray anomalous transmission.
Miller, David Christopher.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Vita.; Bibliography: leaves 122-127.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158472</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photochemistry of organometallic compounds : generation and reactions of 16e- and 17e- intermediates</title>
<link>https://hdl.handle.net/1721.1/158471</link>
<description>Photochemistry of organometallic compounds : generation and reactions of 16e- and 17e- intermediates
Young, Kent Maxwell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1989; Title as it appears in the M.I.T. Graduate List, June 1989: Photochemistry of organometallic complexes--generation and reactions of 16e- and 17e- intermediates.; Includes bibliographical references (leaves 164-166).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158471</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forced vibrations of a single stage axial compressor rotor.</title>
<link>https://hdl.handle.net/1721.1/158462</link>
<description>Forced vibrations of a single stage axial compressor rotor.
Fabunmi, James Ayinde.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158462</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the murine leukemia virus genome.</title>
<link>https://hdl.handle.net/1721.1/158461</link>
<description>Mapping the murine leukemia virus genome.
Faller, Douglas Vincent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1978; Vita.; Bibliography: leaves 182-188.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158461</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental groups of algebraic stacks</title>
<link>https://hdl.handle.net/1721.1/158460</link>
<description>Fundamental groups of algebraic stacks
Noohi, Behrang, 1973-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2000; Includes bibliographical references (p. 57).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158460</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic scattering of neutrons by hexagonal cobalt</title>
<link>https://hdl.handle.net/1721.1/158455</link>
<description>Magnetic scattering of neutrons by hexagonal cobalt
Moon, R. M. (Ralph Marks)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1963; Vita.; Includes bibliographical references (leaves 97-98).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158455</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatiotemporal Signatures of Elastoinertial Turbulence</title>
<link>https://hdl.handle.net/1721.1/158324</link>
<description>Spatiotemporal Signatures of Elastoinertial Turbulence
Yamani, Sami
The addition of small amounts of polymers to a Newtonian solvent makes the fluid viscoelastic, and can lead to significant drag reduction in high-speed flows. The interaction of viscoelasticity and inertia in a dilute polymer solution results in the emergence of unique inertioelastic instabilities. The nonlinear evolution of these instabilities engenders a state of turbulence with significantly different spatiotemporal features compared to Newtonian turbulence, commonly termed elastoinertial turbulence (EIT). We explore EIT by studying the dynamics of low-speed submerged jets of dilute aqueous polymer solutions injected through a nozzle into a tank of quiescent water or polymer solution. In a free shear layer, fluid elasticity has a dichotomous effect on jet stability depending on its relative magnitude, creating two distinct regimes in which elastic effects can either destabilize or stabilize the jet. For small levels of elasticity an inertioelastic shear-layer instability emerges, in agreement with existing linear stability analysis of viscoelastic jets, which is independent of bulk undulations in the column of fluid forming the jet. The growth of this instability near the edge of the jet destabilizes the flow, advancing the transition to turbulence to lower Reynolds numbers and closer to the nozzle compared to a Newtonian jet. Increasing the fluid elasticity merges this shear-layer instability into a bulk instability of the fluid column. In this regime, elastic tensile stresses in the sheared polymer solution act like an “elastic membrane” that stabilizes the flow, delaying the transition to turbulence to higher levels of inertia and greater distances downstream of the nozzle. In a wall-bounded shear layer, a separate investigation shows that fluid elasticity generates a self-sustained inertioelastic travelling wave within the wall boundary layer under flow conditions at which a Newtonian wall jet remains completely laminar. 
The phase velocity of this travelling wave decreases as fluid elasticity increases, resulting in the stabilization of the jet. In the fully-developed turbulent state far from the nozzle, viscoelastic jets exhibit unique spatiotemporal features associated with EIT. The time-averaged angle of jet spreading and the center-line velocity of the jet are self-similar with distance from the nozzle, and the similarity scaling coefficients vary with fluid elasticity. The cascade of turbulent eddies has a universal frequency spectrum independent of fluid elasticity. This spectrum is characterized by a power law with an exponent of −3 that is different from the well-known Kolmogorov law with exponent −5/3 for Newtonian turbulence. EIT also modifies the Lagrangian coherent structures that develop in the turbulent flow. Increasing elasticity generates coherent structures that are larger and more elongated in the streamwise direction, consistent with the suppression of streamwise vortices by EIT. On a larger scale, the elongated coherent structures create a stochastic cycle in EIT that consists of active and hibernating turbulent states with alternating strong and weak turbulent fluctuations. Looking ahead, this new fundamental understanding of EIT can be leveraged to explore the potential of biopolymers as cheap and environmentally-friendly drag reducing agents replacing synthetic polymers made from petroleum oil. Biopolymers are typically semiflexible polyelectrolytes with rheological properties that can be adjusted over a wide range by varying conditions such as the solvent quality and/or the ionic strength. We study aqueous solutions of a typical long chain biomacromolecule (Xanthan gum) in canonical shear and extensional flows and quantify how the rheological properties can be tuned by changing the ionic strength of the solvent. 
In steady shear flow, increasing the biopolymer concentration dramatically increases both the zero shear viscosity and the extent of shear-thinning, while increasing the ionic strength of the solvent decreases both the zero shear viscosity and the level of shear-thinning. In transient extensional flow, increasing biopolymer concentration increases the extensional relaxation time of the solution, while increasing the ionic strength of the solvent decreases this relaxation time. Based on our insights from this rheological characterization, we demonstrate that injecting a high-inertia jet of aqueous biopolymer solution into quiescent environments at different levels of ionic strength can significantly modify the spectral characteristics of the inertioelastic instabilities that develop and lead to a change in the spatiotemporal signatures of elastoinertial turbulence. Our findings lay out a pathway for identifying the most promising biopolymers to serve as biodegradable drag-reducing agents for marine vehicles operating in high-salinity environments, enabling savings in the cost of transport and a future reduction in our carbon footprint.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158324</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale design of bioadhesive platforms for next-generation applications in surgery and healthcare</title>
<link>https://hdl.handle.net/1721.1/158323</link>
<description>Multiscale design of bioadhesive platforms for next-generation applications in surgery and healthcare
Wu, Sarah J.
Bioadhesives—materials capable of adhering to biological tissues—hold significant promise as transformative tools in healthcare, offering the ability to repair tissues with ease and minimal damage. These materials present numerous opportunities in surgery and human-machine interfaces, creating a broad landscape of applications that has captivated clinical and scientific interest alike. Still, there remain open challenges surrounding their reliability, biocompatibility, usability, and versatility. These include weak adhesion with wet tissues, foreign body response, cumbersome application processes, and limited customizability. This dissertation presents a multiscale framework for addressing these obstacles, encompassing design strategies on the molecular, polymer network architecture, macroscale device, and application process levels. The implementation of this framework is demonstrated through the development of two pioneering bioadhesive platforms: (1) a multifunctional patch for minimally invasive surgery, and (2) a 3D printable bioadhesive for fabricating tunable, application-specific devices. Together, these platforms expand the design space for creating robust and versatile tissue repair solutions and biomedical devices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158323</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Material Recovery from Salt-Lakes and Spent Batteries with Membranes and Solvents</title>
<link>https://hdl.handle.net/1721.1/158322</link>
<description>Critical Material Recovery from Salt-Lakes and Spent Batteries with Membranes and Solvents
Foo, Zi Hao
The sustainable extraction and recovery of critical metals such as lithium, cobalt, and rare earth elements are essential for advancing renewable energy technologies, electric vehicles, and modern electronics. This thesis addresses the significant environmental, economic, and logistical challenges associated with traditional methods of extracting these metals from primary sources like spodumene ores and continental salt lakes, and secondary sources like spent battery and magnet leachates. Conventional extraction processes from primary sources are highly energy-intensive, environmentally taxing, and pose substantial water usage concerns. In contrast, while secondary sources such as spent lithium-ion batteries offer a promising avenue to alleviate environmental impacts and secure a stable supply chain, they still pose challenges in terms of high chemical usage and waste acid management. This research focuses on advancing three innovative processes: nanofiltration, electrodialysis, and solvent-driven fractional crystallization, aiming to enhance the efficiency and sustainability of metal recovery from both primary and secondary sources. The thesis findings are supported by direct experimental measurements and extensive computation involving multi-ionic and mixed-solvent activity and fugacity coefficient models, fundamental molecular dynamics simulation, multicomponent continuum dynamics ion transport models across nanofiltration and ion exchange membranes, and techno-economic analysis of membrane and solvent processes. First, advancements in nanofiltration technology are explored to pre-treat salt-lake brines for improved lithium extraction efficiency and purity. Positively charged nanofiltration membranes demonstrate enhanced monovalent selectivity through Donnan exclusion, effectively removing multivalent cations and improving lithium purity in the feed brine. 
Our results show that the Li/Mg selectivity can be enhanced by 13 times with Donnan-enhanced nanofiltration membranes. Our experiments exemplify the Donnan-enhanced membrane’s ability to reduce magnesium concentrations in salt-lake brines to 0.14 % in a single filtration stage. This method not only increases the yield and quality of extracted lithium but also reduces the environmental impact by minimizing additional purification steps. Second, electrodialysis is investigated for the selective recovery of lithium from complex mixtures like battery leachates. This technique leverages ion mobility differences to retain lithium ions while separating other cations. Bipolar membrane electrodialysis further converts lithium chloride into high-purity lithium hydroxide and hydrochloric acid, which can be recycled, thereby supporting a circular economy in battery recycling. Experimental results demonstrate that selective electrodialysis can achieve ∼99 % lithium purity with 68.8 % lithium retention from Ni-Mn-Co battery leachates. The techno-economic analysis projects LiOH production costs between USD 1.1 and 3.6 per kilogram, approximately an order of magnitude lower than prevailing market prices. Third, the use of dimethyl ether (DME) in solvent-driven fractional crystallization is examined as an innovative method for extracting critical metals. DME’s properties allow for efficient water extraction from aqueous solutions, causing the crystallization of metals like cobalt and nickel. Our computational analysis reveals that DME-based solvent-driven water extraction can concentrate an input saline feed to 5.5 M and regenerate over 99 % of the DME using ultra-low-grade heat below 50°C, with a DME/water selectivity ratio of 125. This process ensures high purity and reduces post-processing needs, offering a more environmentally friendly alternative to traditional solvent extraction techniques. 
The findings of this thesis underscore the potential of advanced variants of nanofiltration, electrodialysis, and solvent-driven fractional crystallization technologies in promoting sustainable and economically viable critical metal recovery processes. By addressing the pressing issues of environmental degradation and resource scarcity, this research supports the development of a circular resource economy, where waste materials are continuously reused and recycled, contributing to a sustainable energy future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158322</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Modeling of a Catapulting Magnetic Transmission for Tuning Energy Storage and Release</title>
<link>https://hdl.handle.net/1721.1/158320</link>
<description>Design and Modeling of a Catapulting Magnetic Transmission for Tuning Energy Storage and Release
Thomas, Marcel Adam Craig
The purpose of this work is to generate design rules and models for a catapulting magnetic leadscrew transmission. These rules and models empower scientists and engineers with the ability to tune energy storage and release, and thereby increase the peak specific power (power/mass) of an actuator. This enables rapid design and development of lightweight (&lt; 0.5 kg), high peak power (&gt;200 W) actuators. This has the potential to impact powered exoskeletons and force-controlled robotics for rehabilitation and strength augmentation of explosive movements such as locomotion, jumping, and throwing. This thesis provides the following scientific contributions: (i) the concept of a catapulting magnetic screw actuator, (ii) experimentally validated models that are useful for the design and optimization of the magnetic leadscrew, considering both magnetic and structural aspects, (iii) experimentally validated models of the catapulting event in a magnetic leadscrew, and (iv) use of these models in the context of a practical application, namely powered exoskeletons that may reduce the metabolic cost of walking. First, the catapulting magnetic screw is introduced. An equation of motion is derived and experimentally validated. The equation of motion demonstrates that the potential wells in the magnetic screw create a ripple in the power as a function of time. Then, despite the equation of motion being a nonlinear differential equation with no closed-form solution, bounds on the ripple magnitude and frequency are derived. This gives the slip force and the lead needed to meet a specified tolerance on power as a function of time.&#13;
Then, a model is developed that enables rapid design of a magnetic screw that achieves a desired slip force. This model agrees with finite element analysis to within 10% error as each design parameter is varied over multiple orders of magnitude. Next, given a magnetic screw, a supporting structure must be sufficiently stiff to keep the magnets from sticking together. Models of the magnetic stiffness matrix and structural stiffness matrix, and simplifications thereof, are given to ensure sufficient structural stiffness. Finally, the catapulting event may be too fast for a desired application, so it is shown how nonlinear springs may be used to meet requirements for powered exoskeletons that assist in walking.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158320</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of a Powered Series-Elastic Cycloidal Ankle (CyAn) Prosthesis</title>
<link>https://hdl.handle.net/1721.1/158319</link>
<description>Design and Evaluation of a Powered Series-Elastic Cycloidal Ankle (CyAn) Prosthesis
Du, Lucy W.
The prevalence of major lower limb loss in the United States is projected to increase significantly due to rising rates of diabetes and obesity, highlighting an urgent need for advanced prosthetic solutions [1]. Individuals with lower limb amputations often face increased energy expenditure and secondary musculoskeletal conditions as a result of using conventional prosthetic devices [2]. These challenges underscore the necessity for innovative prosthetic designs that can enhance user mobility and comfort. A promising solution is the powered ankle-foot prosthesis, which has the potential to provide biologically accurate push-off power, thereby offering significant benefits such as improved walking economy, increased mobility, and reduced impact forces on the user’s residual limb. However, existing powered prostheses often lack customization and fail to adequately meet the diverse and specific needs of individual users, which can limit their effectiveness and adoption. This thesis introduces a personalized, optimized, low-profile powered ankle-foot prosthesis, known as the Cycloidal Ankle (CyAn), designed to achieve biological ranges of motion and torque during level-ground walking. The CyAn employs a cycloidal drive transmission and a series carbon fiber spring to mimic tendon-like compliance, which enhances energy storage and return while maintaining a low build height to accommodate a broader range of users. The device is capable of 25° of dorsiflexion and 41° of plantarflexion, and can output at least 130 Nm of torque during walking, corresponding to biological ankle torque during level-ground walking at 1.5 m/s for a 50th percentile male [3]. The CyAn prosthesis uses a cycloidal drive transmission coupled with a series carbon fiber spring. This combination replicates tendon-like compliance and allows for a reduced build height without compromising the prosthesis’s range of motion or mechanical performance. 
The development of the CyAn prosthesis involved a comprehensive mechanical and mechatronic design process, encompassing modeling, optimization of electrical energy consumption, component selection, and benchtop and clinical evaluation. This thesis describes the detailed design and analysis of the CyAn prosthesis, including a parametric model for predicting device performance, fatigue life calculations, and mechanical integrity assessments of device components. Benchtop testing results confirm that the device successfully achieves the targeted performance metrics, demonstrating its capability to replicate natural gait mechanics. The clinical validation study was conducted with 3 participants with unilateral transtibial amputation under 3 different walking conditions: level ground at 1.5 m/s, uphill (+10° slope) at 0.8 m/s, and downhill (-10° slope) at 1.2 m/s. During the experiment, the subjects walked on an instrumented treadmill to regulate the walking speed while force and motion data were recorded. The results of these tests demonstrate the design’s capability to replicate natural gait mechanics and kinetics, and provide insights into further improvements and adaptations. This thesis comprehensively details the mechanical and mechatronic design processes, encompassing modeling, optimization, component selection, and empirical evaluation of the CyAn prosthesis. This thesis presents a first-of-its-kind rotary-powered ankle-foot prosthesis, utilizing a cycloidal drive mechanism and a custom series carbon fiber spring. Compared to existing powered devices, the CyAn offers a lower device mass and increased biomimetic functionality, making it a cost-effective solution for improving mobility and quality of life for transtibial amputees. 
This research establishes a framework for developing customized prosthetic solutions that address the unique needs of individual users, with significant clinical results demonstrating the potential of the CyAn to improve health outcomes by normalizing biomechanics, increasing energy efficiency, and reducing adverse limb loading.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158319</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Generalization of Models on Streets Imagery: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/158316</link>
<description>Towards Generalization of Models on Streets Imagery: Methods and Applications
Alhasoun, Fahad
The domains relevant to urban planning have been disrupted by the proliferation of highly granular city data and the advancements in machine learning. However, machine learning models are susceptible to pitfalls that constrain their deployment in many applications, including domains related to urban settings. Much remains to be addressed between the methods and the applications before we can realize the full potential of machine learning to improve urban life. In this thesis, we focus on the use of streets imagery and classification problems. We motivate the thesis with a case study in which deep learning models are trained to predict street contexts (e.g., residential, park, or commercial) from streets imagery. We then discuss a novel unsupervised domain adaptation method to address the drop in accuracy when models are tested outside the domain of the training data (e.g., a model trained on San Francisco and tested in Boston). We conclude with a proof of concept for a framework to develop more generalized models, beginning with a prototype system for streets imagery collection and labeling, and ending with our approach to generalization: breaking the problem into smaller prediction tasks to aid understanding of the inner workings of the models.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158316</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Case studies in the modeling and control of continuous pharmaceutical manufacturing processes</title>
<link>https://hdl.handle.net/1721.1/158315</link>
<description>Case studies in the modeling and control of continuous pharmaceutical manufacturing processes
Maloney, Andrew John
The pharmaceutical industry employs a myriad of modalities, ranging from small molecules to biologics such as peptides, monoclonal antibodies, bi-specific antibodies, and viral vectors. The manufacturing of these products is as varied as the products themselves. Small molecules are synthesized chemically, i.e., by a series of key chemical transformation, work-up, and recovery steps. Larger molecules can be isolated from naturally occurring sources (i.e., humans, plants, or other microorganisms), or produced via recombinant hosts such as Chinese hamster ovary (CHO), Escherichia coli, or Saccharomyces cerevisiae, with some products requiring both a recombinant host and transient transfection or infection with additional genetic material.&#13;
&#13;
Across these modalities, industry, regulatory agencies, and academia are investigating technologies for improved quality, efficiency, capability, and consistency. Of these technologies, continuous manufacturing (CM) is of particular interest due to its ability to allow for reduced equipment sizing and footprint, improved environmental sustainability, and improved process control. This thesis supports the implementation of continuous pharmaceutical manufacturing through advanced modeling, simulation, and control as described in three independent case studies.&#13;
&#13;
The first work considers the development of a virtual plant for manufacturing of a small-molecule active pharmaceutical intermediate (API) through four chemical transformation, workup, and recovery steps. The plant is used for uncertainty quantification, improved process design, and novel process control strategy development. The second work considers the production of small, globular proteins by the yeast Pichia pastoris. A model for copy number stability is developed and validated using data in the open literature and data generated at MIT. The third work concerns the production of monoclonal antibodies (mAbs) using Chinese hamster ovary cells as a production host. Hardware considerations, lower-level regulatory controls, and advanced process modeling and control for a heavily instrumented mAb manufacturing testbed are discussed. Across this thesis, the benefits of systems-level analysis in the continuous manufacturing of pharmaceuticals are documented and demonstrated.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158315</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inside the App Bureaucracy: The Use of Smartphone Apps in Public Service Delivery Organizations in Pakistan</title>
<link>https://hdl.handle.net/1721.1/158312</link>
<description>Inside the App Bureaucracy: The Use of Smartphone Apps in Public Service Delivery Organizations in Pakistan
Masud, Mohammad Omar
Smartphone apps are being used by governments in developing countries for the monitoring of frontline officials in the delivery of public services. The development literature has expressed doubt about the transformative impact of digital technologies on entrenched bureaucracies in developing countries. While smartphone monitoring apps have improved the speed and reliability of information from the ground, we do not know how the availability of such apps among large numbers of middle- to low-level officials affects work and practice in large government bureaucracies in a developing country. This dissertation examines four in-depth case studies involving the use of smartphone apps in Pakistan: the anti-dengue program in the city of Lahore, the garbage collection agency in Lahore, crime mapping by the Lahore police, and school monitoring by the provincial school department in the province of Punjab (which includes Lahore). Using a detailed analytical framework, I trace the evolution of the smartphone monitoring apps in each case, starting from design and implementation and continuing to the use of their data across multiple levels of the bureaucracy. Using Zuboff’s concept of informating and the literature on accountability and performance in government organizations, I look at how the design, implementation, and use of smartphone monitoring apps and their data bring about changes in workflows and practices among lower echelons of the bureaucracy without any major restructuring or reform, leading to greater responsiveness and performance orientation. The research reveals that low-level officials are responsive to monitoring data because it gives salience to their work and serves as an objective performance measure in a challenging work environment. The research also shows that such behavior is contingent upon how effectively the organizations manage the viewing and sharing of monitoring data, with forums to discuss the data with frontline officials. 
It also points out the importance of effectively managing a smart mobile data infrastructure to sustain emerging workflows and practices.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158312</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-level Design, Fabrication, and Optimization of Sorbent-based Atmospheric Water Harvesting Devices</title>
<link>https://hdl.handle.net/1721.1/158311</link>
<description>System-level Design, Fabrication, and Optimization of Sorbent-based Atmospheric Water Harvesting Devices
Wilson, Chad T.
Sorption-based atmospheric water harvesting (SAWH) has been demonstrated as a promising avenue for addressing the growing problem of water scarcity, especially in arid inland regions where alternative technologies are limited. However, current sorbent materials are often limited in their applicability due to system integration and device design constraints. In this thesis, we present advances in atmospheric water harvesting technologies in both the passive and active design space by leveraging a system-level approach to the modelling and optimization of devices. First, we discuss SAWH device fundamentals in terms of heat, mass, and fluid transport, and identify key components that impact device performance for both passive (solar) and active (electrical/chemical) systems, as quantified by our proposed performance metrics. Next, we develop a coupled heat and mass transport model of a passive, solar-driven atmospheric water harvesting device and quantify the impact of system variables on device operation. We use this model to fabricate an optimal system that efficiently utilizes a hydrogel-salt composite sorbent for record passive water production in the Atacama Desert. Furthermore, we propose an underlying mechanism for the observed system-level degradation of our hydrogel-salt composite and demonstrate successful extension of the sorbent's lifetime in SAWH operation. Additionally, we use our fundamental understanding of SAWH to design an active device for portable use. Highly compact, lightweight, and energy dense, this system operates independently of external environmental conditions and produces more than 2 L/day of potable water. Finally, a generalized topology optimization approach is proposed for sorbent scaffolding structures to further improve system water output while reducing the power consumption and packing volume of atmospheric water harvesting devices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158311</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Object-based SLAM</title>
<link>https://hdl.handle.net/1721.1/158310</link>
<description>Towards Object-based SLAM
Zhang, Yihao
Simultaneous localization and mapping (SLAM) is a fundamental capability for a robot to perceive its surrounding environment. The research area has developed for more than two decades from the original sparse landmark-based SLAM to dense SLAM, and now there is a demand for semantic understanding of the environment beyond pure geometric understanding. This thesis delves into object-based SLAM where the map consists of a set of objects with their semantic categories recognized and their poses and shapes estimated. Such a map provides vital object-level semantic and geometric perception to applications such as augmented reality (AR), mixed reality (MR), robot manipulation, and self-driving. In order to perform object-based SLAM, the sensor measurements have to undergo a series of processes. First, objects are semantically segmented in the sensor measurements. This step is typically done by a neural network. As robots are often required to bootstrap from some initial labeled datasets and adapt to different environments where labeled data are unavailable, it is important to enable semi-supervised learning to improve the robot’s performance with the unlabeled data collected by the robot itself. Second, after the objects are segmented, measurements for each object across different views have to be associated together for downstream processing. Lastly, the robot must be able to extract the object pose and shape information from the measurements without access to the detailed object CAD models which are commonly unavailable. This thesis studies these three aspects of object-based SLAM, namely semi-supervised learning of semantic segmentation in a robotics context, data association for object-based SLAM, and category-level object pose and shape estimation. The thesis closes with a discussion of how these components can be integrated into a full object-based SLAM system in the future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158310</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion Aggregation, Correlated Ion Transport and the Double Layer in Super-Concentrated Electrolytes</title>
<link>https://hdl.handle.net/1721.1/158309</link>
<description>Ion Aggregation, Correlated Ion Transport and the Double Layer in Super-Concentrated Electrolytes
McEldrew, Michael
In the dilute regime, the properties of electrolytes are well known and their mathematical descriptions are well established. The physical picture of dilute electrolytes, in which ions are pristinely solvated, fully dissociated, and immersed in an excess of structureless solvent medium, lends itself naturally to elegant and tidy mathematical descriptions. Owing in large part to their simplicity and physical transparency, these descriptions have guided our intuition about electrolytes for the better part of the last century. However, with the explosion of interest in super-concentrated electrolytes, particularly for electrochemical energy storage applications, theoretical descriptions of electrolytes within this regime are greatly needed. The physical description of super-concentrated electrolytes is completely inverted from that of their dilute counterparts: ions have complex solvation structures, they are only partially dissociated, and they outweigh or even outnumber the solvent. This complex environment imparts unexpected properties to super-concentrated electrolytes. Understanding the origin of these unexpected properties could unlock the key design principles for the next generation of super-concentrated electrolytes. In this thesis, we develop simple, chemically specific, theoretical models of super-concentrated electrolytes. First, we develop a continuum model of the electrical double layer in water-in-salt electrolytes (WiSEs) that unravels the physics behind a potential mechanism for oxidative stability in WiSEs. We find that asymmetric ion solvation leads to very asymmetric water distributions within the double layer. Next, we develop a thermodynamic model of ion aggregation and solvation in super-concentrated electrolytes. The model is deeply rooted in polymer physics and treats the electrolyte as a polydisperse mixture of branched ion clusters. 
In addition to cluster distributions and thermodynamics, our model predicts the onset of a percolating ion network, termed an ionic gel, at a critical salt concentration. We apply our model to two important classes of super-concentrated electrolytes: room temperature ionic liquids (RTILs) and water-in-salt electrolytes (WiSEs). For these classes, our model can be greatly simplified, and it was parameterized and validated by extensive molecular dynamics simulations. Furthermore, we consider the effects of extensive ion clustering and gelation on ion transport, the electrochemical stability window, and the emergence of nano-heterogeneity observed in super-concentrated electrolytes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158309</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiatively Cooled Magnetic Reconnection Experiments Driven by Pulsed Power</title>
<link>https://hdl.handle.net/1721.1/158307</link>
<description>Radiatively Cooled Magnetic Reconnection Experiments Driven by Pulsed Power
Datta, Rishabh
Magnetic reconnection is a ubiquitous process in astrophysical plasmas, responsible for the explosive conversion of magnetic energy into thermal and kinetic energy. In extreme astrophysical systems, such as black hole coronae and neutron star magnetospheres, radiative cooling modifies the energy partition by rapidly removing internal energy. In this thesis, we perform experimental and computational studies of magnetic reconnection in a radiatively cooled regime, previously unexplored in reconnection studies. The Magnetic Reconnection on Z (MARZ) experiments consist of a dual exploding wire array, driven by a 20 MA peak, 300 ns rise time current generated by the Z pulsed-power machine (Sandia National Labs). The load generates oppositely directed supersonic, super-Alfvénic, collisional plasma flows with anti-parallel magnetic fields that generate a reconnection layer (Lundquist number S_L ∼ 100), in which the total cooling rate far exceeds the Alfvénic transit rate [mathematical notation].&#13;
 &#13;
Two- and three-dimensional simulations of the MARZ experiments are performed in GORGON, an Eulerian resistive magnetohydrodynamic code. The simulations demonstrate the generation of a reconnection layer, which radiatively collapses, exhibiting a rapid fall in temperature, strong compression, and an increased reconnection rate consistent with theoretical predictions. The reconnection layer is unstable to the plasmoid instability, generating secondary current sheets separated by magnetic islands. High energy X-ray emission is generated predominantly by the plasmoids. The plasmoids also collapse radiatively, and the reconnection layer recovers a laminar large aspect ratio structure, which does not exhibit further plasmoid generation, indicating stabilization of the original plasmoid instability of the current sheet.&#13;
 &#13;
The experiments confirm numerical predictions by providing evidence of plasmoid formation and strong radiative cooling. Experimental diagnostics directly measure the spatial, temporal, and spectral properties of radiative emission from the reconnecting system. The reconnection layer generates a transient burst of &gt;1 keV X-ray emission, consistent with the formation and subsequent rapid cooling of the layer. Time-gated X-ray images show fast-moving (up to 50 km s⁻¹) hotspots in the layer, consistent with the presence of plasmoids in 3-D resistive magnetohydrodynamic simulations. X-ray spectroscopy shows that these hotspots generate the majority of Al K-shell emission (around 1.6 keV), and exhibit temperatures (170 eV) much greater than those of the plasma inflows and the rest of the reconnection layer.&#13;
 &#13;
The findings in this thesis are of particular relevance to the generation of radiative emission from reconnection-driven astrophysical events, and to the global dynamics of reconnection in strongly cooled systems. The MARZ experiments also provide a novel platform for investigating radiative effects in high-energy-density and laboratory astrophysics experiments, and for validation of radiation magnetohydrodynamic and atomic spectroscopy codes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158307</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-induced States and Phase Transitions in Quantum Materials investigated by Photoemission Spectroscopy and Epitaxial Synthesis</title>
<link>https://hdl.handle.net/1721.1/158272</link>
<description>Light-induced States and Phase Transitions in Quantum Materials investigated by Photoemission Spectroscopy and Epitaxial Synthesis
Choi, Dongsung
In condensed matter physics, the field that studies phases of matter and their transitions, light-induced states and phase transitions have attracted significant attention due to their importance in both fundamental research and applications. This thesis delves into three compelling studies: (1) Floquet-Bloch states (photon-dressed Bloch states) were investigated in graphene. These states are generated by a time-periodic potential of light and are closely related to the topic of Floquet engineering. (2) A light-induced insulator-to-metal transition was observed in Sr₂IrO₄, providing valuable insights into the fundamental characteristics of its ground states. (3) A light-induced topological phase transition (from a Z₂ topological insulator to a trivial insulator) was investigated in Bi-doped (Pb,Sn)Se thin films. For these studies, we employed time- and angle-resolved photoemission spectroscopy (trARPES) and molecular beam epitaxy (MBE). Through in-depth investigation of these phenomena, this thesis seeks to contribute to the broader understanding of light-matter interactions in quantum materials.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158272</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Improve Clinical Decisions and AI Safety by Leveraging Structure</title>
<link>https://hdl.handle.net/1721.1/158271</link>
<description>Learning to Improve Clinical Decisions and AI Safety by Leveraging Structure
Chauhan, Geeticka
The availability of large collections of digitized healthcare data along with the increasing power of computation has allowed machine learning (ML) for healthcare to become one of the key applied research domains in ML. ML for health has great potential in providing clinical decision-making support that can improve quality of care and reduce healthcare spending by easing clinical operations. However, the successful development of ML models in healthcare is contingent on data that is complex, noisy, heterogeneous, limited in labels and highly sensitive. In this thesis, we leverage the unique structure present in medical data along with the availability of external knowledge to guide model predictions. Additionally, we develop differentially private (DP) training techniques using gradient structure to mitigate privacy leakage.&#13;
&#13;
In this thesis, we develop methods on different medical modalities such as multivariate physiological signals of ICU patients, patient discharge summaries, biomedical scientific articles, radiology reports, chest radiography imaging and spoken utterances. We tackle tasks such as forecasting patient states, relationship extraction, disease prediction, medical report generation and differentially private model training. We begin the thesis by offering open source data processing and modeling frameworks, move towards improved interpretability of model predictions to develop clinician trust and finally investigate differentially private ML techniques to protect user data. &#13;
&#13;
First, we show that the use of aggregated feature representations based on clinical knowledge offers model robustness against evolving hospital systems. Second, we leverage external knowledge in the form of clinical concept extraction to significantly improve relationship extraction. Third, we leverage the rich information from reports associated with chest radiographs to develop highly accurate disease severity prediction models using contrastive learning. Fourth, we showcase that the report generation task offers competitive disease prediction capabilities, label efficiency and improved interpretability. Finally, we introduce novel methods for improved privacy-utility-compute tradeoffs for DP pre-training of large speech models. We highlight DP as an important component of model safety, necessitating its development in conjunction with AI safety approaches that will be pertinent in healthcare and beyond.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158271</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Experiments and First-Principles Calculations to Understand and Engineer Metal Exsolution in Perovskites</title>
<link>https://hdl.handle.net/1721.1/158270</link>
<description>Combining Experiments and First-Principles Calculations to Understand and Engineer Metal Exsolution in Perovskites
O'Leary, Willis
Exsolution processing has emerged as a leading new route to fabricate highly active and stable ceramic-supported metal catalysts for a wide variety of applications, including solid oxide fuel cells, solid oxide electrolyzers, catalytic converters, and chemical/fuel production. In exsolution, metal cations are exsolved to the surface of a perovskite oxide solid solution under reducing conditions. The result is a perovskite backbone decorated with partially embedded metallic nanoparticles. The stability and anti-coking properties of exsolved nanoparticles have driven growing interest in exsolution materials. However, even after two decades of intense research, key open questions remain regarding exsolution's precise mechanism and, consequently, how to rationally engineer the properties of exsolution nanoparticles. This thesis aims to address these questions through a combination of experimental work and first-principles atomistic modelling with the long-term goal of accelerating the commercialization of exsolution materials.&#13;
&#13;
We first investigate the impact of perovskite composition on the properties of Ni nanoparticles exsolved from bulk SrTi₀.₉₄Ni₀.₀₆O₃₋δ. We adjust the makeup of the Sr site, adding dopants of varying valence and ionic radii as well as vacancies, and measure how these changes modulate the surface density of the exsolved nanoparticles. We then use density functional theory (DFT) calculations to explain the observed trends, finding that the energetics of cation surface segregation and surface reduction control nanoparticle nucleation kinetics. This work provides valuable new insights into the exsolution mechanism and, for the first time, introduces a quantitative model capable of accurately predicting the experimental exsolution properties of a given perovskite composition from first principles. &#13;
&#13;
Next, we extend this quantitative model to capture the influence of the exsolution conditions on the properties of Ni nanoparticles, this time focusing solely on Ni exsolution from bulk Sr₀.₈La₀.₁Ca₀.₁Ti₀.₉₄Ni₀.₀₆O₃₋δ. We first measure the dependence of nanoparticle density on exsolution temperature and oxygen partial pressure. We rationalize the empirical trends using LaMer nucleation theory and our previously developed model of composition effects. This achievement points towards the first-ever method for first-principles prediction of a generic perovskite composition’s exsolution properties under varying reducing conditions. Thus, we take a major step towards fully in silico design of exsolution materials, greatly increasing their commercial attractiveness.&#13;
&#13;
Finally, we develop a novel, highly efficient DFT methodology to predict Raman signatures of point defects and apply it to interpret the complex Raman spectrum of SrTi₀.₉₄Ni₀.₀₆O₃₋δ. Based on empirical and DFT-derived Raman spectra, we characterize the defect chemistry and local structure of SrTi₀.₉₄Ni₀.₀₆O₃₋δ. Our findings are a vital first step towards using Raman spectroscopy to study and screen exsolution materials. More broadly, our computational methodology supercharges Raman spectroscopy as a tool to characterize local structure in a wide range of technologically relevant material systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158270</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding community changes in ecological systems: a probabilistic and geometric perspective</title>
<link>https://hdl.handle.net/1721.1/158269</link>
<description>Understanding community changes in ecological systems: a probabilistic and geometric perspective
Deng, Jie
Regulating and predicting community changes in ecological systems represent fundamental challenges in science and engineering, particularly in systems subject to constant environmental perturbations (e.g., natural, in vivo, and in situ environments). Consequently, a central goal of ecological research has been to understand the processes of coexistence, invasion, and assembly in open systems that underlie changes in the composition of ecological communities. These changes can be induced by either natural events (e.g., viruses infecting humans) or human actions (e.g., fecal microbiota transplantation). Although previous studies have theoretically explored criteria for successful coexistence, invasion, and assembly under specific or fixed environmental conditions, the variable and often unknown environmental conditions in nature have left these criteria largely untested.&#13;
&#13;
The overarching goal of my PhD thesis is to provide a testable theoretical framework for the dynamics of coexistence, invasion, and assembly under environmental uncertainty (i.e., in nature or open systems). This framework, rooted in the generalized Lotka-Volterra model, adopts a probabilistic and geometric perspective to understand these dynamics. In particular, my thesis comprises three core projects. The first project develops probabilistic system-level measures to quantify the effects of third-party species on the coexistence of a pair (or subset) of species by integrating population dynamics models (i.e., the Lotka-Volterra model) with in vivo experimental data from fruit fly gut microbiota. Additionally, I test general heuristic rules based on the proposed probabilistic measures using in vitro soil and in vivo gut microbial communities. The aim is to predict how non-resident species (invaders) can alter resident communities and to assess the applicability of our probabilistic measures. The second project seeks to unify coexistence and invasion theories within a geometric and probabilistic framework that is testable. This unification enables us to predict and test the impact of interspecific interactions on invasion and exclusion probabilities without requiring detailed model parameterization or extensive datasets. The third project identifies the general principle governing the development of ecological systems under environmental uncertainty, which could assist in regulating or even predicting changes in ecological community compositions. This principle is validated across a broad spectrum of ecological scales, from large mammals to gut microbes, through publicly available data. I believe this thesis will bring us closer to understanding the processes that influence community compositions and their changes, knowledge that holds great potential for advancing bio-conservation, bio-technologies, and bio-medicine.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158269</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cobordism of manifolds with w1, w2 and w4 vanishing.</title>
<link>https://hdl.handle.net/1721.1/158232</link>
<description>Cobordism of manifolds with w1, w2 and w4 vanishing.
Giambalvo, Vincent William.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; Bibliography: leaves 75-77.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158232</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contribution to the methods of measuring stresses below the surface</title>
<link>https://hdl.handle.net/1721.1/158230</link>
<description>Contribution to the methods of measuring stresses below the surface
Safoglu, Recep Ali, 1920-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1947; Vita.; Includes bibliographical references (leaves 183-184).
</description>
<pubDate>Wed, 01 Jan 1947 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158230</guid>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local politics and industrial adjustment : the political economy of Italy in the 1980's</title>
<link>https://hdl.handle.net/1721.1/158229</link>
<description>Local politics and industrial adjustment : the political economy of Italy in the 1980's
Locke, Richard M., 1959-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Title as it appears in the M.I.T. Graduate List, Feb. 1989: "Eppure Si Muove"--the political economy of industrial change in Italy.; Includes bibliographical references (leaves 285-310).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158229</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solvent extraction of cobalt from thiocyanate solutions</title>
<link>https://hdl.handle.net/1721.1/158227</link>
<description>Solvent extraction of cobalt from thiocyanate solutions
Hard, Robert A.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1957; Vita.; Bibliography: leaf 84.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158227</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neutron diffraction study of the field-induced diamagnetism in the semimetal bismuth.</title>
<link>https://hdl.handle.net/1721.1/158222</link>
<description>A neutron diffraction study of the field-induced diamagnetism in the semimetal bismuth.
Collins, Steven Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158222</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neutron diffraction study of the magnetization in dilute Cu (Fe) alloys.</title>
<link>https://hdl.handle.net/1721.1/158220</link>
<description>A neutron diffraction study of the magnetization in dilute Cu (Fe) alloys.
Dickens, Michael Hugh.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1974; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158220</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Xenobiotic metabolism and mutation in diploid human lymphoblasts</title>
<link>https://hdl.handle.net/1721.1/158216</link>
<description>Xenobiotic metabolism and mutation in diploid human lymphoblasts
Crespi, Charles L.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1982; Bibliography: leaves 148-155.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158216</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained stochastic climate simulation</title>
<link>https://hdl.handle.net/1721.1/158214</link>
<description>Constrained stochastic climate simulation
Curtis, David Carleton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1982; Bibliography: leaves 215-226.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158214</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demand-responsive transit : problems and possibilities.</title>
<link>https://hdl.handle.net/1721.1/158213</link>
<description>Demand-responsive transit : problems and possibilities.
Ewing, Reid Harris.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 255-264.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158213</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetics of nitrogen solution from arc plasmas into liquid iron.</title>
<link>https://hdl.handle.net/1721.1/158212</link>
<description>Kinetics of nitrogen solution from arc plasmas into liquid iron.
Esimai, Charles Nduka.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158212</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale Origins of Thermal Transport Phenomena for Hybrid Layered Perovskites</title>
<link>https://hdl.handle.net/1721.1/158207</link>
<description>Nanoscale Origins of Thermal Transport Phenomena for Hybrid Layered Perovskites
Dahod, Nabeel S.
An exciting and fundamentally powerful modern methodology for materials development is the process by which artificial solids are rationally built piece-by-piece from nanoscale “building blocks”. Among the library of nanomaterials currently at the forefront of this pursuit, two-dimensional layered lead halide perovskites (2D LHPs), are of particular interest. These materials, solid crystals composed of alternating layers of atomically thin organic and inorganic subphases, possess novel optical and electronic properties that make them particularly suited for use in devices including solar cells, LEDs, flexible electronics, and even lasers. While significant early strides have been made in investigating charge carrier transport through dynamic models and sophisticated experiments, comparatively little attention has been given to understanding the manner in which the design of these nanostructured solids impacts their macroscopic thermal properties via thermal carrier (phonon) transport. This knowledge, however, is critical to addressing the thermal management constraints necessary to the design of reliable and stable devices. &#13;
To this end, this dissertation seeks to elucidate the thermal stability and fundamental pathways for heat transport within 2D LHP artificial solids. I first present an experimental investigation into the thermal and structural stability of these 2D LHPs near room temperature using differential scanning calorimetry and x-ray diffraction. This analysis reveals near-room-temperature melting transitions isolated to the organic component of the nanomaterials. The existence of such an isolated phase transition indicates the materials behave thermophysically as composites, a hypothesis supported by the effective use of a lever rule in estimating the heat capacity of the materials. &#13;
I discuss the theoretical foundation and experimental construction of a frequency domain thermoreflectance technique to effectively measure the cross-plane thermal conductivity of 2D LHPs. This technique is then utilized to perform the first measurement of the thermal conductivity of 2D LHPs. This experimental study reveals that even in terms of their thermal transport pathways, 2D LHPs can be treated as composite materials. Specifically, lead bromide 2D LHPs exhibit structure-property relationships characteristic of ballistic phonon transport within isolated subphases and diffuse scattering at the organic-inorganic interfaces between layers. &#13;
Finally, I report the first measurements of the vibrational spectrum for 2D LHPs via low frequency Raman spectroscopy. This probe identifies the persistence of bulk-like phonons&#13;
even in the atomically thin 2D LHPs, in addition to identifying coherent acoustic phonons in lead iodide 2D LHPs potentially capable of carrying thermal energy across the organic-inorganic interfaces without scattering. Each of the observations made throughout this dissertation suggests that the thermophysical representation of 2D LHPs as composite materials is a useful framework for understanding their thermal transport properties. That so many material properties can be effectively predicted simply from the bulk properties of the component phases is surprising given both the long-range order of the artificial solids and the sub-nanometer length scale of the individual component layers, and underlines the potential for intelligent engineering of the thermal properties of 2D LHPs without deleterious influences on their sterling optoelectronic properties.
</description>
<pubDate>Sat, 01 Jun 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158207</guid>
<dc:date>2019-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning methods to study structurally heterogeneous macromolecules in vitro and in situ</title>
<link>https://hdl.handle.net/1721.1/158202</link>
<description>Deep learning methods to study structurally heterogeneous macromolecules in vitro and in situ
Powell, Barrett M.
Proteins, RNA, and other biomolecules form complex 3-D structures that dynamically interact to carry out essential biological processes. These macromolecular complexes are often structurally heterogeneous, which is key to executing or regulating their specific biological functions.&#13;
&#13;
To understand the molecular mechanisms underpinning these biological functions, structural biologists aim to determine the 3-D structure of the relevant macromolecule or macromolecular complex. Most such structural insights come from techniques that strip the macromolecule of its cellular context (i.e., in vitro) and, subsequently, report a single average structure. However, recent advances in cryogenic electron microscopy (cryo-EM) provide avenues to determine sets of heterogeneous structures from a single dataset, and simultaneous advances in cryogenic electron tomography (cryo-ET) enable the resolution of macromolecules in their native cellular environment (i.e., in situ).&#13;
&#13;
This thesis describes the conceptualization, implementation, and application of tomoDRGN, a deep learning method developed to resolve structurally heterogeneous macromolecules in situ. TomoDRGN extends the well-characterized cryoDRGN method, which facilitates analysis of heterogeneous structures by cryo-EM, to cryo-ET, where I show it efficiently learns an ensemble of unique 3-D volumes from the structurally heterogeneous dataset provided. I additionally describe the application of tomoDRGN to datasets of diverse macromolecules, highlighting its ability to resolve conformational and compositional heterogeneity and to identify rare yet biologically informative structural states. This thesis also details an approach and protocol for rapid structural characterization of bacterial ribosomes in situ, wherein tomoDRGN facilitates powerful upstream dataset filtration. Finally, this thesis provides a detailed protocol for the characterization of heterogeneous cryo-EM datasets with cryoDRGN and, in doing so, illustrates the types of new insights enabled by the cryoDRGN and tomoDRGN Deep Reconstructing Generative Networks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158202</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The dynamic behavior of an ammonia synthesis reactor</title>
<link>https://hdl.handle.net/1721.1/158114</link>
<description>The dynamic behavior of an ammonia synthesis reactor
Eymery, Jean-Pierre.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1964; Vita.; Includes bibliographical references (leaves 215-217).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158114</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Many meson production in meson-nucleon collision in the Chew-Low formalism</title>
<link>https://hdl.handle.net/1721.1/158110</link>
<description>Many meson production in meson-nucleon collision in the Chew-Low formalism
Tarimer, Niyazi.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1957; Vita.; Includes bibliographical references (leaf 60).
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158110</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformation processes of zirconium</title>
<link>https://hdl.handle.net/1721.1/158109</link>
<description>Deformation processes of zirconium
Rapperport, Eugene J.
            (Eugene John)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1955; Vita.; Bibliography: leaves 99-101.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158109</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pulsed laser ablation of calcified biological tissue : physical mechanisms and clinical applications</title>
<link>https://hdl.handle.net/1721.1/158105</link>
<description>Pulsed laser ablation of calcified biological tissue : physical mechanisms and clinical applications
Izatt, Joseph A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1991; Includes bibliographical references (leaves 192-205).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158105</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>German nuclear dilemmas, 1955-1965</title>
<link>https://hdl.handle.net/1721.1/158104</link>
<description>German nuclear dilemmas, 1955-1965
Kelleher, Catherine McArdle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1967; Vita.; Bibliography: leaves 665-686.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158104</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting torsional fatigue crack growth (Mode III) in turbo-generator shafts</title>
<link>https://hdl.handle.net/1721.1/158100</link>
<description>Predicting torsional fatigue crack growth (Mode III) in turbo-generator shafts
Nayeb-Hashemi, Hamid.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158100</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Approaches for Understanding and Redesigning Enzyme Catalysis</title>
<link>https://hdl.handle.net/1721.1/158059</link>
<description>Computational Approaches for Understanding and Redesigning Enzyme Catalysis
Karvelis, Elijah
The remarkable specificity and catalytic efficiency of many enzymes make them attractive for applications ranging from therapeutics to chemical manufacturing. However, it remains challenging to identify the specific structural and dynamic mechanisms underlying the catalytic power of enzymes, which has limited our ability to re-engineer catalytic properties. In this thesis, I address these shortcomings by developing and demonstrating computational strategies comprising techniques spanning statistical mechanics, machine learning, and protein design, and I apply them to the enzyme ketol-acid reductoisomerase (KARI), whose economic viability for the production of isobutanol would be strengthened by enhancing its activity on one of its two native substrates: 2-acetolactate (ACL). While computational enzyme redesign strategies for increased activity have traditionally focused on decreasing the energetic gap between the enzyme-substrate ground state and transition state, this thesis postulates and evaluates whether a more holistic treatment including the dynamics of complete turnover events could further elucidate properties affecting turnover efficiency and guide the identification of mutants with enhanced catalytic function.&#13;
&#13;
In the first study, we describe a novel redesign strategy for enhanced specific activity (turnover number) based on analysis of enzyme-substrate turnover dynamics. The approach combined statistical mechanical path sampling algorithms and machine learning methods to identify the structural characteristics of enzyme-substrate complexes primed for successful conversion of substrate to product, which were then energetically stabilized by mutating KARI. A subset of candidate mutants was tested using path sampling-based reaction rate constant calculations, and eight mutants were identified with computed improvements in turnover number of up to four orders of magnitude for the isomerization of ACL. Further analysis revealed structural mechanisms by which enhanced activity was attained. In the second study, we examine the effects of these same mutations on the isomerization of KARI's other native substrate: 2-aceto-2-hydroxybutyrate (AHB), and we find that the mutants selected for increased activity on ACL had varied levels of activity on AHB. These variations in mutant activity on AHB were explained by analysis of WT-AHB simulations, which showed that only some of the structural mechanisms related to enhanced ACL catalysis transferred to, and thereby facilitated, AHB catalysis. This thesis highlights the influence of conformational states that are visited during the dynamics of substrate turnover and their role in enzyme catalysis, and it furthermore suggests a framework with which researchers may consider and apply these effects when engineering catalytic function.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158059</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Microporous Polymers for Separations</title>
<link>https://hdl.handle.net/1721.1/158058</link>
<description>Designing Microporous Polymers for Separations
Storme, Kayla R.
In Chapter 1, we investigate the influence of side-chain length and dispersity in ring-opening metathesis polymerization (ROMP) polymers with pore-generating side chains. Macromonomers with four discrete monodispersities are separated and polymerized to produce bottlebrush polymers with monodisperse side chains. Each bottlebrush polymer is fabricated into a free-standing film. Pure-gas experiments are performed to explore the impact of dispersity and side chain length on gas separation performance. &#13;
&#13;
In Chapter 2, we evaluate the mixed-gas performance of a class of bottlebrush polymers described in Chapter 1. Gas sorption, diffusion, and CO₂-induced plasticization are reported. Competitive sorption effects are studied using a 50:50 mixture of CO₂/CH₄. Separation performance at different compositions of CO₂/CH₄ is also explored. &#13;
&#13;
In Chapter 3, we incorporate nitrile functionality into the structure of a family of polymers with rigid, porogenic side chains described in Chapters 1 and 2. Statistical and block copolymers are synthesized to demonstrate the role of grafting density on separation performance and CO₂ plasticization resistance. Sorption experiments are performed to determine improvements to selectivity.&#13;
&#13;
In Chapter 4, we describe the optimized SNAr synthesis of a poly(arylene ether) (PAE) that produces high molecular weight polymers. The synthesis of an analogous PAE with C-H functionality instead of C-F is also reported. Porosity and free volume are investigated in both PAEs. Separation performance is characterized and compared to other polymers with similar structural motifs.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158058</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Game Theoretic Approach to Resilient Space System Design</title>
<link>https://hdl.handle.net/1721.1/158057</link>
<description>A Game Theoretic Approach to Resilient Space System Design
Jones, Michael P.
There is a growing need for space missions that maintain performance in uncooperative or even adversarial environments. Space system designers must account for resilience to non-cooperative interactions in the design process while trading resilience and performance against cost. Prior academic work has considered resilience for system design, especially in the context of environmental factors; however, current literature does not include a space system design methodology that explicitly models non-cooperative, interactive threats to produce a more resilient design. To address this gap, a novel game-theoretic methodology is proposed to capture the interactive nature of non-cooperative systems at the strategic design level. The result is a two-player strategic design game in which the system under design and the threat system are both modeled as rational actors, and design options for the system architecture and threat system architecture are strategic choices for each actor. Performance, resilience, and cost metrics are calculated by an operational-level simulation of system-threat interactions. Tradespace and sensitivity analyses based on the results are used to evaluate the cost premium of adding resilience to the system and to demonstrate the strategic design choices that provide the most cost-effective means of increasing resilience to the modeled threats. The resulting methodology is presented through three case studies demonstrating the applicability of the methodology across multiple space mission applications.&#13;
&#13;
The first case study evaluates a low Earth orbit (LEO) Satellite Communications (SATCOM) system design. The results show that perfect resilience (no drop in performance) to the modeled ground-based jamming threat requires a 224% cost increase and that additional satellites are a more cost-effective means of increasing resilience than fewer, more capable satellites. The second case study focuses on a Global Navigation Satellite System (GNSS) and adds more fidelity to the physical model and the design choices available to both the threat and the system. A medium Earth orbit (MEO) constellation that maximizes resilience to the modeled jamming and kinetic threats consists of 56 satellites in 7 planes, while in LEO this requires 819 satellites in 21 planes. For a LEO GNSS constellation to be more cost-effective than a MEO GNSS constellation with the same level of resilience, the LEO system's first unit cost must be at most 1/10 of the MEO system's first unit cost. Cost-based sensitivity analyses demonstrate how results are influenced by cost model estimates and show how program managers can use this methodology to guide program decisions as cost estimates improve over time. The third case study looks at a non-cooperative GEO mission through an abstracted two-player game environment called GEO Patrol. This case study adds fidelity to the operational interactions by requiring complex decision making. Reinforcement learning is used with over 7 million games of self-play to train a two-hidden-layer neural network to generate actions. Human-in-the-loop experiments verify the simulation results and improve understanding of the underlying system dynamics. Over 20 volunteers played 53 games representing 3 distinct scenarios. The average difference between the volunteer and simulation results is 5.1%, verifying the simulation. The three case studies demonstrate how the methodology can be applied across disparate space missions with varying levels of model fidelity. 
Systems designers can apply the methodology to produce both quantitative and qualitative recommendations to ensure the final system is resilient by design.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158057</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Benefits of angularly-controlled field switching on the pulling-into-step ability of salient-pole synchronous motors</title>
<link>https://hdl.handle.net/1721.1/157987</link>
<description>Benefits of angularly-controlled field switching on the pulling-into-step ability of salient-pole synchronous motors
Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.
Thesis: Sc. D., Massachusetts Institute of Technology. Department of Electrical Engineering, 1931; Includes bibliographical references (leaves [73]-[75]).
</description>
<pubDate>Thu, 01 Jan 1931 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157987</guid>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leptonic and hadronic polarization in semi-leptonic inclusive weak and exclusive electromagnetic interactions</title>
<link>https://hdl.handle.net/1721.1/157979</link>
<description>Leptonic and hadronic polarization in semi-leptonic inclusive weak and exclusive electromagnetic interactions
Raskin, Alan Steven.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1987; Bibliography: leaves 220-223.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157979</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organization of the marginal band of avian erythrocytes</title>
<link>https://hdl.handle.net/1721.1/157978</link>
<description>Organization of the marginal band of avian erythrocytes
Swan, Judith Ann.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1985; Bibliography: leaves 170-182.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157978</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A statistical approach to heavy-ion transfer reactions to the continuum</title>
<link>https://hdl.handle.net/1721.1/157976</link>
<description>A statistical approach to heavy-ion transfer reactions to the continuum
Karp, Joel Steven.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157976</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Proxy Records of Climate and Carbon Cycle Perturbations in the Paleozoic: Integrating Isotope Geochemistry and Sedimentology</title>
<link>https://hdl.handle.net/1721.1/157970</link>
<description>Multi-Proxy Records of Climate and Carbon Cycle Perturbations in the Paleozoic: Integrating Isotope Geochemistry and Sedimentology
Anderson, Noah
Carbonate rocks are a valuable archive of past environmental conditions. To glean robust information from this archive, we must understand how carbonate sediments form, ensure our analytical techniques are optimized, and consider how inherently local deposition of sediments can communicate information about global changes in climate. Chapter 1 proposes a new conceptual model for the formation of ooids that suggests that these small carbonate grains could form while buried in the shallow sediment pile during certain intervals of Earth history. Chapters 2 and 3 calibrate the clumped isotope paleothermometer for calcite, dolomite, and apatite, resolving significant discrepancies in calculated paleotemperatures. Chapter 4 applies clumped isotope thermometry to Early Mississippian strata and demonstrates ~5 °C of global cooling and substantial ice volume expansion coincident with a major perturbation to the global carbon cycle. Chapter 5 examines the extent to which diagenesis and facies- and phase-specific effects drive a major Early Mississippian carbon isotope excursion. In aggregate, this thesis outlines a roadmap for assessing changes to climate and the carbon cycle recorded in Paleozoic carbonate rocks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157970</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence and the US-China Balance of Power</title>
<link>https://hdl.handle.net/1721.1/157968</link>
<description>Artificial Intelligence and the US-China Balance of Power
Chang, Benjamin Angel
How will artificial intelligence affect the US-China balance of power? While a nascent literature debates whether AI may upend strategic stability or revolutionize the nature of warfare, existing discussions suffer from both imprecise conceptualization and scarce data. In three essays, this dissertation evaluates the impact of AI on the nuclear balance, the conventional balance, and long-term US-China competition more generally by focusing on deep learning, generating data through simulation and supply chain analysis.&#13;
&#13;
The first essay defends the focus on deep learning, then presents an end-to-end conceptualization of how its technical qualities translate into usefulness across different categories of modern military tasks, which in turn affect, when contextualized to the particular dyad under study, the strategic balances across different domains of US-China competition. At each analytic layer, the paper condenses deep learning’s effects into several generalizations, tying AI to existing debates in security studies and setting an agenda for future research.&#13;
&#13;
The second essay simulates US-China nuclear war in Python to assess AI’s impact on the strategic balance, focusing on the tracking of mobile platforms on land. It finds that AI reduces the total “effective counterforce area” – the area the United States would have to destroy with nuclear weapons to carry out a splendid first strike – by one to two orders of magnitude. Under low to medium alert, the simulation finds this would enable successful US nuclear counterforce. While countermeasures are available to China, the essay predicts heightened nuclear tensions as a result.&#13;
&#13;
Finally, the third essay exploits supply chain datasets to assess each side’s ability to bring AI-enabled autonomous weapons to bear in future conventional conflicts. I find that control over the production of advanced AI chips by the United States and allies almost certainly means the United States would better exploit such weapons, if they emerged as decisive in modern warfare, within at least the next ten years. Potential Chinese policy responses, such as cannibalizing its civilian sector or substituting with older chips, would likely fail for technical reasons.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157968</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Picophytoplankton of the Northeast U.S. Shelf: Community Composition and Dynamics</title>
<link>https://hdl.handle.net/1721.1/157966</link>
<description>Picophytoplankton of the Northeast U.S. Shelf: Community Composition and Dynamics
Stevens, Bethany Lynn Fowler
Marine picophytoplankton are the most abundant primary producers in the ocean and are expected to be favored by the ongoing effects of climate change. Predicting the response of marine ecosystems to these changes requires mechanistic knowledge of picophytoplankton ecology. This thesis uses a combination of long-term monitoring, cruise data, population models, and high-throughput sequencing to investigate the dynamics of picophytoplankton across scales of space and time that are relevant both to the physiology of the individual cells and to the structure of the Northeast U.S. Shelf (NES), a productive and economically important coastal ecosystem. To identify the drivers of seasonal changes in picophytoplankton abundance, I first estimate daily division and loss rates for a nearshore community of picoeukaryotes over a 16-year period. I compare their cell concentrations, vital rates, and responses to environmental variables to those of the cyanobacterium Synechococcus. Next, to reveal how these dynamics relate to changes in community composition, I analyze 9 years of monthly metabarcoding data and characterize taxonomic variability within the picoeukaryote assemblage. In the second half of this thesis, I explore spatial environmental variability and test the extent to which data from the nearshore observatory are representative of the picophytoplankton communities across the NES. I analyze flow-cytometry data collected from 22 regional research cruises, estimate daily Synechococcus and picoeukaryote division rates from underway data, and describe the distinct depth distributions of the two groups from subsurface samples. The major findings of this thesis are that, across the NES, the picoeukaryotes divide at much higher rates than the more abundant Synechococcus and are subject to greater top-down control from grazing or viral lysis. 
Both groups are light-limited in the fall, temperature-limited in the spring, and undergo earlier spring blooms in warmer offshore waters. For Synechococcus, the relationships between cell concentration, division rate, and environmental parameters are consistent across the continental shelf, while the picoeukaryote community appears to be nutrient-limited farther from shore. Together, this work creates a detailed picture of the various controls on picophytoplankton abundance within a dynamic coastal ecosystem and advances our understanding of how picophytoplankton communities respond to environmental change.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157966</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein Folding, Host Cell Proteostasis, and Viral Evolution</title>
<link>https://hdl.handle.net/1721.1/157965</link>
<description>Protein Folding, Host Cell Proteostasis, and Viral Evolution
Yoon, Jimin
Pandemics and epidemics caused by pathological RNA viruses, such as the 1918 influenza pandemic, the global AIDS epidemic, and the recent coronavirus pandemic, impose a severe burden on global health and the economy. A major challenge associated with developing effective antiviral strategies is the exceptionally high mutation rate of RNA viruses, which endows them with a remarkable capacity to adapt to selection pressures such as antibodies or antiviral drugs. Hence, it is critical to understand the molecular-level factors that can constrain and potentiate viral evolution. While mutations benefit viruses by generating the diversity required for evolution, they also threaten viral viability because the majority of non-conservative amino acid substitutions cause protein folding defects. Mutations that result in substantial protein folding defects cannot be tolerated, regardless of how adaptively beneficial the resultant protein variant otherwise would be. Importantly, in cells, protein folding is assisted by intricate networks of chaperones and quality control factors, termed proteostasis networks. When a substitution on a protein impedes its proper folding, proteostasis network components can triage the defective protein variant to chaperones for folding assistance, or to quality control factors for timely degradation. Interestingly, virtually all RNA viruses rely on their host’s proteostasis network components for viral protein folding. It follows that the host’s proteostasis network could play a prominent role in defining the sequence space accessible to an evolving viral protein. In this thesis, I address how host proteostasis networks shape viral protein evolution. First, I describe how the composition of the host cell’s proteostasis machineries shapes the accessible sequence space of human immunodeficiency virus envelope protein. 
Second, I focus on an important immune-escape variant of influenza nucleoprotein whose fitness depends on host chaperones, and reveal the underlying molecular mechanism of this chaperone dependence. Finally, I demonstrate how chaperone machineries can determine the host range of viruses, and provide potential pathways viruses can evolve to overcome this selection pressure. Overall, elucidating how protein folding and host cell proteostasis affect viral protein evolution would substantially improve our ability to accurately predict RNA virus evolution and host-switching, and may enable design of host-targeted therapeutics that reduce the adaptability of RNA viruses.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157965</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Constraints on Melting Processes in the Earth and Small Rocky Bodies</title>
<link>https://hdl.handle.net/1721.1/157964</link>
<description>Experimental Constraints on Melting Processes in the Earth and Small Rocky Bodies
Hoyos Muñoz, Susana
The compositional and thermal evolution of rocky bodies in the Solar System is determined by the melt generation and crystallization processes in their interiors. This thesis investigates large-scale igneous processes in the Earth, the Moon, and the Angrite Parent Body using a multidisciplinary approach that integrates high-pressure experiments, geochemical analysis, and petrologic modeling. In Chapter 1, I examine the mantle source lithology of Hawaiian pre-shield tholeiitic volcanism through high-pressure equilibrium experiments and geochemical modeling. In Chapter 2, I define the crystallization sequence and petrogenesis of the young mare basalts collected by the Chang'e 5 mission and propose a model for melt generation in the Moon at ~2 Ma. In Chapter 3, I estimate a minimum radius of ~1600 km for the Angrite Parent Body using near-liquidus equilibrium experiments. The implications for planet formation models of a differentiated moon-sized planetesimal accreting in the first 3 Ma of the Solar System are also discussed. In Chapter 4, I introduce a new experimental technique for studying melt migration in volcanoes, which allowed me to observe and describe the mechanisms that control melt migration in the upper crust. Together, these studies advance our knowledge of melt production and crystallization conditions in planetary interiors and provide fundamental insights into the geologic history of the Earth and other rocky bodies in the Solar System.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157964</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical Analysis of Poly(ethylene terephthalate) Film Degradation Kinetics of Engineered IsPETase Variants</title>
<link>https://hdl.handle.net/1721.1/157963</link>
<description>Biochemical Analysis of Poly(ethylene terephthalate) Film Degradation Kinetics of Engineered IsPETase Variants
Zhong-Johnson, En Ze Linda
Plastic production and pollution have become a global crisis, with 79% of waste plastics landfilled in 2015 and only 12% recycled, demonstrating the need for rapid improvements in waste management and recycling technologies. Poly(ethylene terephthalate) (PET) is a major plastic polymer that is heavily investigated for enzymatic recycling. The presence of the ester bond in the polymer allows hydrolysis via serine esterases, such as cutinases and lipases. However, little is known about the surface reaction and how biochemical behavior might differ on a 2D solid surface compared to solution phase. Consequently, traditional solution-phase biochemical models, such as Michaelis-Menten, may not be directly applicable to kinetics of these enzymes, as the catalysis is occurring under a heterogeneous phase. To improve the fundamental understanding of the enzymatic reaction on the surface and derive an appropriate biochemical model for kinetic analysis, this thesis aims to develop a simple kinetic assay of PET biodegradation, identify mutations that positively impact product formation rates, and develop a novel biochemical model to analyze these mutations that fully describes the kinetic profiles observed for these enzymes. I developed a kinetic assay based on spectrophotometric measurements of UV absorbance of the products in the reaction supernatant, as degradation products harboring the benzene ring will absorb between 240 and 280 nm. The method was found to be reliable for obtaining relative measurements of initial reaction rates but cannot be used to determine the absolute concentration of products in the supernatant. I also developed a directed evolution assay of IsPETase using solid PET film substrates and found that mutation T116P improved maximum product accumulation by 30% based on kinetic studies and thermostability, while mutations S238N and S290P improved purification yield and thermostability. 
Finally, my collaborators and I found that the activity of IsPETase is impacted by surface crowding and developed a biochemical model to analyze the kinetic data of mutants. Based on the kinetic model, T116P reduced crowding susceptibility with no impact on activity, resulting in improved macroscopic degradation rates. In conclusion, crowding tendency may become a major property to be targeted for enzyme engineering to improve solid-substrate depolymerases for industrial applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157963</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the spectrum of resonance fluorescence induced by a monochromatic field.</title>
<link>https://hdl.handle.net/1721.1/157904</link>
<description>Measurement of the spectrum of resonance fluorescence induced by a monochromatic field.
Wu, Frederick Yung-Fung.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157904</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on attention and creative thought</title>
<link>https://hdl.handle.net/1721.1/157884</link>
<description>Essays on attention and creative thought
Wang, Jocelyn Yuxing
In the mental life of an ordinary person, creative thoughts, as well as other non-rigid forms of thought, such as mind wandering, are both pervasive and important for our cognitive endeavors. The goal of my dissertation is to provide a theory of these non-rigid forms of thought by understanding some of the cognitive mechanisms that underlie them, as well as to understand how these underlying mechanisms contribute to our epistemic lives more generally in all kinds of reasoning. Chapter 1 (based on co-authored work with Azenet Lopez) begins with a puzzle that arises from research on mind wandering: since during mind wandering we plausibly prioritize the information relevant to the concurrent tasks less, why does mind wandering sometimes improve rather than impair concurrent task performance? I resolve the puzzle by rejecting the standard conception of attention, according to which the more focused one’s attention is, the better it is at improving task performance. I instead argue that certain tasks are better performed with a more diffuse rather than focused mode of attention. I offer a conception of "diffuse attention" that generalizes from external to internal forms of attention and conceptualize mind wandering as an instance of it. Chapter 2 turns to provide an account of creative thinking, which is closely related to mind wandering. I argue that previous accounts in philosophy about the generation of creative thought are incomplete due to overlooking the role of what I call “memory gists”. Memory gists are memory contents that represent more abstract or qualitative features extracted from the specific, surface-level features in the memory representations that were initially encoded in memory. I argue that generating and using memory gists in memory search enables highly creative people to form connections between memory contents that are not usually associated with each other by revealing the commonalities shared in their gists.
Moreover, I argue that different mechanisms underlie online and offline generation of memory gists: the former involves the mode of diffuse attention that I conceptualized in Chapter 1, while the latter involves memory consolidation during sleep or wakeful rest. The active role that memory plays in creative thinking raises some questions about how to conceptualize the function of memory in our epistemic lives more generally. I explore this topic further in Chapter 3, where I reject the traditional view in epistemology that memory merely functions to preserve previously acquired information, such as information acquired through perception. I argue instead that one of the functions of memory is to improve our understanding of what was represented in the contents that we previously acquired. This is possible thanks to the fact that during memory consolidation, our memory system further processes previously acquired information, and generates representations about relationships between different components of the subject under consideration. My work thus contributes to the ongoing project of understanding memory as an active process rather than a mere store of information, and highlights understanding as one of the epistemic values that memory generates.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157884</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on vowel-zero alternations in Hungarian</title>
<link>https://hdl.handle.net/1721.1/157881</link>
<description>Constraints on vowel-zero alternations in Hungarian
Takács, Dóra Kata
I analyze a large set of Hungarian nominal stems whose last vowel alternates with zero in certain contexts (Vago (1980), Siptár &amp; Törkenczy (2000)): e.g. bokor [bokor], bokr-ok [bokr-ok]. I argue that the mechanism underlying these alternations is syncope, departing in this from earlier work (Vago (1980), Abondolo (1988), J. Jensen &amp; Stong-Jensen (1988, 1989), Törkenczy (1995), Abrusán (2005)), which assumes epenthesis or metathesis. My research focuses on which stems fall into this closed group of vowel-zero alternating stems. I show that there is an interaction between phonological processes that repair phonotactically illicit consonant clusters – like voicing assimilation, gemination, and affrication – and vowel-zero alternations. I present a proposal relying on underspecification that correctly predicts that these phonological processes block vowel-zero alternations. The grammar that generates this result includes a ranking schema where the constraint triggering syncope (referred to below as Syncope) is outranked not only by the Markedness constraints that define illicit CC-clusters in Hungarian but also by the faithfulness constraints that are normally violated in the repair of such clusters. The general ranking I will argue for is: (1) Markedness (*CC for various CCs) » Faithfulness to Cs » Syncope » Max V. I also present results from a nonce word experiment, which confirms that Hungarian speakers are aware of the systematic restrictions my analysis characterizes. The broad significance of the work is to document a large-scale conspiracy (Kisseberth (1970)) whereby permissible CC clusters emerge in at least two ways: through direct action of repair processes (assimilation or merger of two Cs into one) and through blockage of the syncope process that could yield the inputs to such repairs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157881</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-optation of B Cell Developmental States in Malignancy and Autoimmunity</title>
<link>https://hdl.handle.net/1721.1/157878</link>
<description>Co-optation of B Cell Developmental States in Malignancy and Autoimmunity
Ramseier, Michelle L.
Transcriptional states provide a useful lens for understanding the diversity of cell identity and function. Cell state is regulated through both cell-intrinsic and -extrinsic mechanisms, and can diversify through tightly regulated transitions. However, perturbations to these regulatory mechanisms facilitate shifts in cell phenotypes and function that, left unchecked, can disrupt homeostasis and drive disease. The ability of dysregulated transcriptional states to integratively represent underlying intrinsic and extrinsic drivers of aberrant cell survival and function highlights their potential as prognostic and therapeutic targets in disease.&#13;
&#13;
Here, we establish the therapeutic significance of cell state through the lens of pathologies that dysregulate B cell development and maturation. B cells develop and mature through tightly regulated cell-intrinsic and -extrinsic transcriptional state transitions restricted by stage-specific survival, proliferative, and apoptotic dependencies. We thus utilize B cell development and maturation as model systems to study how perturbations to cell-intrinsic and -extrinsic regulators can result in the pathologic emergence of aberrant developmental states enabling dysregulated survival and proliferation. In each chapter, we apply single-cell RNA-sequencing to define heterogeneous B cell developmental states in malignancy and autoimmunity, and uncover underlying signaling perturbations linked to their dysfunctional transcriptional regulation. We consider how these aberrant developmental states are driven by mutational perturbations to dysregulated cell-intrinsic signaling in BCR-ABL1 B cell acute lymphoblastic leukemia (B-ALL), cell-extrinsic signaling in CTLA4-deficient T cell-mediated follicular B cell blocks, and integrative mutational and niche-specific survival in mantle cell lymphoma (MCL). Finally, we identify how these aberrant developmental states shift or resolve upon targeted therapeutic intervention in each disease context. Collectively, this work demonstrates how cell states are intrinsically and extrinsically regulated, how they inform aberrant survival in disease, and how they show promise as therapeutic targets.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157878</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Germanium on Silicon Integrated Photonics for the Mid-Wave Infrared</title>
<link>https://hdl.handle.net/1721.1/157877</link>
<description>Germanium on Silicon Integrated Photonics for the Mid-Wave Infrared
Morgan, Rachel E.
This thesis presents the development of a Germanium-on-Silicon (GOS) integrated photonics platform for the mid-wave infrared (MWIR) wavelength range. Integrated photonics applies nanofabrication approaches with optical materials in order to miniaturize and improve the robustness of optical systems. Most integrated photonics development occurs in the near-infrared for telecommunications applications, but there is increasing interest in expanding the technology to other wavelength ranges. The MWIR wavelength range has applications in environmental sensing, industrial monitoring, and communications. This work develops low-loss waveguides, a passive component library, integrated modulators, laser integration design, and systems analysis for the 2-5 µm wavelength range.&#13;
&#13;
Low-loss waveguides are demonstrated with losses of 0.6-2.5 dB/cm without top cladding. A detailed study of top cladding materials is conducted including niobium pentoxide, hafnium dioxide, epitaxial silicon, and others. Of these materials, niobium pentoxide offers the best performance with measured losses as low as 3.49±0.3 dB/cm.&#13;
&#13;
A passive component library is designed based on waveguides with and without top cladding, developing building-block components such as couplers, splitters, ring resonator filters, and Mach-Zehnder interferometers. Air-clad ring resonators demonstrate narrow-bandwidth filtering with a recorded extinction ratio of &gt;20 dB, full-width half max (FWHM) of 0.7 GHz, and unloaded Q factor of &gt;190,000.&#13;
&#13;
Integrated phase shifters are designed based on the plasma-dispersion effect and the thermo-optic effect. A plasma-dispersion-effect modulator is designed for forward-bias operation at 4.6 µm wavelength with a predicted half-wave voltage (Vπ) of 0.5 V, length (L) of 525 µm, voltage-length product (VπL) of 0.027 V·cm, and speed of 58.4 MHz. A reverse-bias plasma-dispersion-effect modulator at 4.6 µm wavelength is designed with a predicted Vπ of 4 V, L of 3 mm, VπL of 1.24 V·cm, and speed of 3.2 GHz. Thermal phase shifters fabricated out of 400 µm long metal wires have a predicted power required for a 2π phase shift (P₂π) of 410 mW without top cladding and a P₂π of 100 mW with Nb₂O₅ cladding.&#13;
&#13;
Designs for flip-chip integration of quantum-cascade lasers (QCLs) are presented. Coupling between the QCL and the GOS waveguides is simulated and input tapers are designed. Test structures for QCL-waveguide coupling and external cavity QCL demonstration are designed. The predicted coupling from the QCL into a GOS air-clad waveguide is 10-30%.&#13;
&#13;
The components designed in this work can be combined to develop photonic integrated circuit (PIC) designs for applications in the MWIR wavelength range. To demonstrate this, design analysis for a gas-sensing lidar transmitter and a MWIR spectrometer are presented. The gas-sensing lidar is predicted to provide a sensitivity to N₂O of &lt;1% with a much smaller mass compared to existing free-space optics satellite lidar transmitter designs. The MWIR spectrometer has a predicted spectral resolution of 0.2 nm.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157877</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Site-specific chemical and topological modifications to augment mRNA therapeutic potential</title>
<link>https://hdl.handle.net/1721.1/157875</link>
<description>Site-specific chemical and topological modifications to augment mRNA therapeutic potential
Aditham, Abhishek J.
Synthetic mRNA has emerged as a promising therapeutic platform for the treatment of a wide variety of diseases. Despite clinical demonstrations of mRNA for SARS-CoV-2 vaccines, mRNAs remain limited in application by their susceptibility to nucleases and overall short expression lifetime in vivo. We investigated the site-specific installation of chemical and topological modifications into therapeutic mRNA to augment their expression in cell cultures and mouse models. We began by developing messenger oligonucleotide-conjugated RNAs (mocRNAs), which are mRNAs ligated to modified oligonucleotides that contain 3’ nuclease-resistant modifications. We show that mocRNAs are subject to slower deadenylation and enhance therapeutic protein expression in cell lines and primary cell cultures. We expanded on this technology by creating mRNAs with chemically branched poly(A) tails, or multitail mRNAs, which increase the density of modifications at the 3’ end of mocRNA and further stabilize mRNA against deadenylation.&#13;
&#13;
In conjunction with increased nuclease resistance at the 3’ terminus, we developed a strategy to enhance translation initiation on circular mRNAs (circRNAs). We developed QRNAs, which are circRNAs that possess an unnaturally linked inverted 7-methylguanosine (m7G) cap. QRNAs substantially outperform conventional circRNAs, given the low translation initiation efficiency of IRES compared to cap-dependent initiation. Ultimately, our studies exploring the chemical and topological space of mRNA demonstrate the value of site-specific chemical and topological modifications for designing future generations of designer mRNA-based therapeutics.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157875</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision measurement of the W boson mass with the CMS Experiment in pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157874</link>
<description>Precision measurement of the W boson mass with the CMS Experiment in pp collisions at √s = 13 TeV
Yang, Tianyu Justin
The mass of the W boson, m_W, is an important fundamental constant of nature, which is also potentially sensitive to a plethora of physics beyond the Standard Model. In this thesis, we discuss the precision measurement of m_W with the CMS detector at the LHC in proton-proton collisions at √s = 13 TeV. The phenomenology of W bosons produced in pp collisions, the CMS detector characteristics, and other relevant factors are examined to justify the overall strategy to measure m_W from the muon transverse momentum and pseudorapidity spectrum [formula] in the W → µν channel with a part of the 2016 data corresponding to an integrated luminosity of 16.8 fb⁻¹. Dedicated studies aiming to reduce systematic uncertainties related to the muon transverse momentum calibration, the muon reconstruction and background rejection efficiencies, and the modeling of the W boson production and decay kinematics are presented. A profiled maximum-likelihood fit of MC templates to observed data incorporating over 4,000 nuisance parameters is employed to extract the central value and the total uncertainty on m_W. The result of this measurement is m_W = 80,360.2 ± 2.4 (stat.) ± 9.6 (syst.) MeV = 80,360.2 ± 9.9 MeV, which is consistent with the Standard Model prediction m_W^SM = 80,354.5 ± 5.7 MeV.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157874</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Just Doing My Job: Normative Dimensions of Social Roles</title>
<link>https://hdl.handle.net/1721.1/157873</link>
<description>Just Doing My Job: Normative Dimensions of Social Roles
Wells, Eliza
“What should I do?” Often, our answers make reference to our social roles: we ask what we should do as lawyers, citizens, or parents. But this confronts us with problems. Consider a would-be whistleblower, a wife challenging the gendered division of household labor, or the conflicted police officer Javert from Les Misérables. These agents feel there is a genuine conflict between morality and the norms of their role. While many philosophers treat social roles as incidental to our moral lives, this dissertation aims to do justice to this experience of roles’ normative force. I argue that doing so prompts revision to orthodox views of role-occupants’ reasons for action, blameworthiness, and responsibility for structural injustice. In Chapter One, I develop a new account of how social roles generate normative reasons for occupants to comply with role norms. I argue that agents’ reasons to comply with their role norms depend on how those norms contribute to functioning social practices. In addition to its claims about the structure of normative reasons, my view delivers a striking upshot in cases of conflict. While popular accounts of role normativity often maintain that moral considerations can cancel roles’ normative force, my project suggests a radically different conclusion: role-occupants have good reasons to comply even with norms that result in conflicts with what they morally ought to do. Social roles generate genuine normative conflicts. While many role-occupants find conflicts between roles and morality distressing, many others seem not to notice that there is a conflict at all. Consider the oft-maligned excuse: “I was just doing my job.” Chapter Two defends an epistemic variant of this excuse. I argue that agents who comply with roles’ deliberative norms may—for good reason—bracket morally relevant considerations. As a result, they may be non-culpably ignorant of wrongdoing. On some views, this can excuse them from blame. 
But even denying that moral ignorance exculpates is compatible with accepting role-occupants’ excuses. Such views often emphasize being motivated by the right reasons. But because role compliance is often justifiable, ignorance need not be blameworthy indifference to the right reasons. The upshot is a novel position in the debate about moral ignorance as an excuse. We might worry that this unduly lets role-occupants off the hook. If, as I argue, role-occupants can have good reasons for acting immorally, and they can sometimes be blameless even when they do act wrongly, does that prevent us from taking those wrongs seriously? I grapple with this problem in Chapter Three. Drawing on theories of structural injustice, I argue that roles’ normative character actually generates responsibilities for justice. Because role performance both affirms and instantiates unjust structures, role-occupants bear responsibilities that can only be discharged by changing what their actions mean and do. This vindicates the widespread but philosophically puzzling view that agents ought to direct efforts towards injustices they participate in intimately, even when they could make a greater impact elsewhere. It also means that role-occupants are not off the moral hook. Ultimately, we each bear responsibility to create a more just social world.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157873</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Amplifying signals in the tumor microenvironment&#13;
for drug development and diagnostics</title>
<link>https://hdl.handle.net/1721.1/157872</link>
<description>Amplifying signals in the tumor microenvironment&#13;
for drug development and diagnostics
Martin Alonso, Maria Carmen
The advent of molecular biology and next-generation sequencing has significantly transformed our understanding of cancer and the delivery of cancer care. These advancements greatly accelerated the pace of biological discovery, leading to the development of targeted therapies and immunotherapies, which have resulted in unprecedented survival benefits for patients. Additionally, they have transformed how disease is diagnosed and monitored, which increasingly involves genomic profiling of circulating tumor DNA (ctDNA) molecules in liquid biopsies such as blood. Despite the promise offered by these innovative therapies and monitoring tools, their broad integration into clinical practice faces important challenges. Widespread adoption of emerging therapies demands enhanced tools to successfully identify therapeutic targets and to restrict their potent activity to cancer cells. At the same time, improving the sensitivity of ctDNA-based tests, which remains limited by the scarcity of ctDNA in blood, holds the key to unlocking the full potential of liquid biopsy across many important clinical applications.&#13;
&#13;
This thesis addresses these critical challenges by combining the unique opportunities posed by secreted molecules in the tumor microenvironment (TME) with engineering principles of signal amplification. Tumor progression is intricately tied to the local and systemic microenvironments, and secreted molecules are key to orchestrating these interactions. Targeting these abundant molecules, which are accessible extracellularly and oftentimes even systemically, offers significant advantages over traditional approaches that target confined, scarce and occult malignant cells within tissues.&#13;
&#13;
In Part I of this thesis, we propose exploiting the catalytic activity of tumor-associated proteases in the local TME to selectively deliver potent therapies to cancer cells while sparing healthy tissues. Leveraging advancements in high-throughput screening and deep learning, we contribute important tools for the effective design of conditional drugs that require the cleavage of a protease substrate to unleash drug cytotoxicity. In Part II, we address challenges in ctDNA detection by introducing liquid biopsy priming agents - DNA-binding proteins and nanoparticles - that transiently attenuate endogenous ctDNA clearance routes. Priming agents synthetically amplify ctDNA levels in blood to greatly improve the sensitivity and the robustness of liquid biopsies. Our approach marks a paradigm shift in how we think about the limit of detection of molecular diagnostics, and holds promise for other circulating biomarkers and for applications beyond oncology. &#13;
&#13;
Collectively, this thesis presents a TME-centric perspective of cancer, coupled with engineering principles of signal amplification, to reframe therapeutic and diagnostic paradigms in oncology, with far-reaching implications across all stages of cancer management.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157872</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural case on adjuncts</title>
<link>https://hdl.handle.net/1721.1/157869</link>
<description>Structural case on adjuncts
Jou, Eunsun
This dissertation investigates how case is assigned to nominal adverbials dubbed durative and multiplicative in Korean. These adverbials express the duration of an event, or the number of times an event is repeated. In transitive, unergative, and unaccusative constructions, the adverbial is marked with accusative case. In psychological predicate constructions, the adverbial is marked with nominative case. Interestingly, in passive and inchoative constructions (grouped together under the term nonactive), the adverbial allows both nominative and accusative case.&#13;
I derive these patterns from a specific model of Voice, and a model of successive-cyclic Dependent Case. I first argue in favor of a Voice system that treats passive and inchoative constructions as syntactically equivalent: whether a nonactive construction is passive or inchoative is determined by the feature specification on Voice (Kallulli 2007). Furthermore, this nonactive Voice head introduces an implicit agent (for passives) or causer (for inchoatives), which can be optionally realized as a PP. This agent/causer at Spec, VoiceP competes with the theme argument to move to Spec, TP. Hence, there are two different structures that can arise in nonactive constructions. The other constructions that do not show case optionality lack this competition. In transitive, unergative, and unaccusative constructions, there is no implicit agent/causer at Spec, TP to compete with the theme argument. In psychological predicate constructions, the experiencer argument introduced at Spec, ApplP acts as an intervener and blocks the theme argument from competing with the implicit agent/causer.&#13;
&#13;
My model of successive-cyclic Dependent Case explains how the different structures result in different case patterns. It is a revised version of Levin’s (2017) original model, whereby case evaluation occurs not only at the end of the syntactic derivation but at the Spell-out of each phase. However, my version of the model involves a more relaxed locality constraint for dependent case assignment. I demonstrate how my model can not only derive the case marking patterns of durative and multiplicative adverbials, but can also account for other case phenomena in Korean such as case stacking and multiple nominative constructions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157869</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing the Intersectional Risks Associated with the Full Life Cycle of the U.S. Housing Stock</title>
<link>https://hdl.handle.net/1721.1/157868</link>
<description>Assessing the Intersectional Risks Associated with the Full Life Cycle of the U.S. Housing Stock
Manav, Ipek Bensu
This work presents the most comprehensive framework to date to assess the intersectional risks associated with design and policy decisions regarding the built environment. This framework is applied to decisions regarding the selection of hazard mitigation measures to apply, households to prioritize in hazard mitigation grant programs, and construction materials to use in efforts to reduce societal greenhouse gas (GHG) emissions.&#13;
&#13;
To study these decisions, a computationally inexpensive method is developed to compute expected damages associated with each individual building in a community with hurricane wind exposure. This method is applied to study the cost burden of expected damages on each individual household. Later, this is integrated into building life cycle assessment (LCA) to incorporate hazard vulnerability into building embodied emissions. Lastly, building LCA is extended to inform the sectoral environmental footprint (SEF) of construction material sectors.&#13;
&#13;
Together, the model results of this work show that expected damages are currently underestimated, socially vulnerable groups are likelier to be priced out of hazard repairs, and ignoring use and end-of-life stages leads to ignoring the largest portion of building life cycle emissions as well as the largest contributors to the SEF of construction materials. By reevaluating the performance of the housing stock under each metric, strategies are proposed to prevent monetary damages, redistribute the cost burden of remaining monetary damages, and couple considerations for climate mitigation and climate adaptation by promoting disaster risk reduction as a pathway towards GHG abatement.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157868</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Habit Formation and Political Persuasion: A Behavioral and Statistical Approach</title>
<link>https://hdl.handle.net/1721.1/157867</link>
<description>Habit Formation and Political Persuasion: A Behavioral and Statistical Approach
Tohidi Kalorazi, Amir
This thesis explores the complex dynamics of human behavior across diverse contexts, integrating perspectives from behavioral science and statistical analysis. The central focus of this study revolves around the analysis of repetitive behavior in various scenarios including shopping, social media use, and news sharing.&#13;
&#13;
The initial study investigates the influence of habits on the in-store shopping experience. By leveraging store closures as a disruptive event, we examine how these closures prompt individuals to alter their purchasing patterns. We propose that such disruptions encourage people to engage in more deliberate decision-making processes, leading them to explore alternatives that they might have previously overlooked due to established habits. Employing a difference-in-differences framework, we estimate the causal impact of habits on brand loyalty. Our findings reveal a significant role of habits, with households exhibiting stronger habits experiencing a temporary disruption in their shopping routines following store closures. Over time, these households appear to develop new habits in different stores, resulting in lasting changes in preferred brands. This suggests that the formation of shopping habits can lead to suboptimal consumer behavior. These insights have practical implications for businesses, including pricing strategies, advertising approaches, and product placement within stores.&#13;
&#13;
The second study introduces an innovative methodology for quantifying habitual behavior in the context of social media usage. Interactions with social media platforms often yield psychological rewards, fostering the development of habitual behaviors driven by cue-response associations. By leveraging entropy as an implicit measure of behavioral regularity, this study aims to uncover the intricate relationship between habit formation and digital routines. Through empirical analyses, we establish the validity of the entropy metric, demonstrating its effectiveness in capturing distinct behavioral patterns beyond mere frequency. Our results highlight the nuanced connection between entropy and future app engagement, indicating a positive association for lower entropy values and a significant decline for excessively irregular patterns. These findings contribute to theoretical understanding of habitual behavior and offer practical insights for managing digital habits. Ultimately, this work advances our comprehension of how habits manifest in the digital realm and provides a robust tool for predicting long-term user behavior.&#13;
&#13;
The third study delves into the intricate interplay between individuals' beliefs and their ability to anticipate the persuasive impact of climate change news articles. The central aim is to determine whether climate change deniers or believers possess varying capacities to predict the persuasive consequences of articles emphasizing climate change severity. Through a series of surveys, we gather predictions about the impact of such articles on climate change deniers. Surprisingly, findings reveal discordant predictions: deniers anticipate a backfire effect among their peers, while climate believers foresee negligible effects. We rigorously test these predictions with a randomized survey experiment involving deniers, uncovering an unexpected positive opinion shift towards climate change after article exposure. Notably, this effect does not translate into discernible changes in stated or revealed support for climate change actions. In the context of the pressing climate challenge, our study offers insights to inform targeted communication and interventions that foster consensus and meaningful action.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157867</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Ethics within Metaphysics</title>
<link>https://hdl.handle.net/1721.1/157866</link>
<description>The Ethics within Metaphysics
Impagnatiello, Michele Odissea
This dissertation consists of three chapters at the intersection of ethics and metaphysics. In the first chapter, I put forward a new theory of personal identity, give arguments for it, and defend it from objections. In the first part, I argue that the two most prominent theories of personal identity, the psychological theory and the physical theory, do not satisfy some constraints on any acceptable theory: that personal identity be all-or-nothing, determinate, principled, and substantive. I then put forward a new theory, the phenomenal theory, on which personal identity is determined by the uninterrupted continuity of a stream of consciousness. I argue that this theory does satisfy all the desiderata, and is therefore a better theory. In the second part, I argue that the phenomenal theory also solves the problem of fission cases, because there are no cases of phenomenal fission. In the third and last part, I consider the objection that, on the phenomenal theory, we do not survive interruptions of consciousness such as sleep; I argue that this objection doesn’t succeed in refuting the theory. In the second chapter, I generalize a debate about laws of nature to the domains of metaphysics and ethics. Patterns in the natural world lead us to the postulation of laws. A metaphysical dispute arises as to whether these laws are mere summaries of the mosaic (as the Humean would have it), or whether they govern the mosaic (as the Anti-Humean would have it). In this paper, I first argue that similarly, patterns in the metaphysical and ethical facts should lead us to the postulation of metaphysical and ethical laws, which are the proper subject of metaphysical and ethical inquiry. Then, I argue that the Humean/Anti-Humean debate also arises when it comes to metaphysical and ethical laws.
Finally, I argue in favor of the Anti-Humean conception of metaphysical and ethical laws, both adapting standard arguments used in the debates about laws of nature, and with new arguments specific to metaphysics and ethics. In the third chapter, I investigate conflicts between ethics and metaphysics. Sometimes, a metaphysical theory has revisionary ethical consequences: for example, some have thought that modal realism entails that there are no moral obligations. In these cases, one may be tempted to reject the metaphysical theory on the grounds that it conflicts with commonsensical ethics. This is an ethics-to-metaphysics inference. My claim is that this inference is in general irrational, and that the fact that a metaphysical theory has highly revisionary ethical consequences is no reason at all to reject the theory. I argue for this claim on the basis of general epistemic principles about the transmission of justification, and what makes for a good argument. Furthermore, I argue that my account can explain why a certain narrow class of ethics-to-metaphysics inferences are rational.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157866</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multidimensional profiling of the Toxoplasma gondii proteome</title>
<link>https://hdl.handle.net/1721.1/157865</link>
<description>Multidimensional profiling of the Toxoplasma gondii proteome
Herneisen, Alice Lydia
Universally, external signals are transduced and propagated in cells by secondary messengers. In the asexual and replicating stages of apicomplexan parasites, these pathways initiate and sustain transitions within the lytic cycle responsible for parasite spread and pathogenesis. Among these early-branching parasitic protists are the etiologic agents of the widespread, persistent, and deadly human diseases malaria (Plasmodium spp.) and toxoplasmosis (Toxoplasma gondii), making the understanding of these parasite signaling pathways of global importance. Although components of secondary messenger signaling pathways are conserved among apicomplexans and higher eukaryotes, 800 million years of divergence from existing model organisms precludes identification of parasite-specific secondary messenger responses or a priori reconstruction of their signaling pathways.&#13;
&#13;
This thesis addresses that gap. I have adapted state-of-the-art proteomics methods to study the proteome of the model apicomplexan T. gondii across multiple dimensions: abundance, stability, time, and space. Chapter 2 describes how I employed thermal proteome profiling to identify the target of an antiparasitic compound, thereby enhancing our understanding of parasite calcium signaling pathways. In a conceptual leap, I applied this method to systematically identify calcium-responsive proteins on the basis of biochemical interactions with this second messenger in Chapter 3. From this analysis, the protein phosphatase PP1 emerged as an unanticipated calcium-responsive phosphatase along with dozens of novel proteins belonging to this critical signaling network.&#13;
&#13;
Signaling pathways communicate to orchestrate complex cellular processes, yet in apicomplexan parasites they are often studied in isolation. In Chapter 4, I identify a node linking three key second messenger pathways in T. gondii: calcium, cyclic GMP, and cyclic AMP. The apicomplexan-specific kinase SPARK regulates the AGC kinases PKG, PKA C1, and PKA C3, which together control transitions within the asexual cycle of this important family of parasites.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157865</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonetic Faithfulness in Phonological Opacity</title>
<link>https://hdl.handle.net/1721.1/157863</link>
<description>Phonetic Faithfulness in Phonological Opacity
Kim, Yeong-Joon
This dissertation presents a novel approach to phonological opacity, which is grounded in new findings regarding substantive restrictions on the patterns of opaque interactions. The central thesis posits that phonological opacity functions to preserve the phonetic properties specified in the input of a phonological operation. Specifically, it argues that inputs are enriched with phonetic auditory features, and surface opacity emerges as a result of processing these enriched inputs. This proposal can be detailed as follows. First, processes that become opaque are initially biased by certain phonetic markedness conditions. Second, these phonetic biases, encoded in the phonetically enriched inputs, are mapped onto the nearest phonologically contrastive sounds to satisfy the requirement of phonetic faithfulness, resulting in surface phonological opacity.&#13;
 &#13;
This hypothesis yields a testable prediction: only phonetically natural processes, which possess an appropriate phonetic markedness condition, can become opaque. The results of typological surveys encompassing 87 counterfeeding and 65 counterbleeding interactions across languages support this prediction, revealing that opacified processes are subject to a narrow range of markedness conditions, such as coarticulatory assimilation (e.g., palatalization) and durational adjustments (e.g., segmental weakening). Other types of phonological processes, particularly non-natural ones, are only rarely, if ever, opacified. This asymmetry in the patterns of phonological opacity underscores that opaque interactions are not independent of phonetic substance. &#13;
&#13;
In addition to this main finding, it is also shown that the current proposal offers additional advantages in explaining phonological opacity. First, it successfully accounts for various non-typical opaque interactions such as feeding opacity and stress misapplications, alongside counterfeeding and counterbleeding interactions. The proposal also integrates various phonological phenomena, such as compensatory lengthening, coalescence, and incomplete neutralization, within the framework. Second, learning simulations using a weighted constraint version of the proposed model demonstrate that intermediate hidden structures, such as phonetically enriched inputs, can be learned when the mappings between abstract inputs and surface representations are established.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157863</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technological Innovation and Integration of Whole Brain Imaging, Olfactory Stimulation, and Correlative Microscopy in Larval Zebrafish</title>
<link>https://hdl.handle.net/1721.1/157835</link>
<description>Technological Innovation and Integration of Whole Brain Imaging, Olfactory Stimulation, and Correlative Microscopy in Larval Zebrafish
Swain, Corban N.
Achieving a deep understanding of the brain is a cross-disciplinary endeavor that requires the investigator to consider biomolecular, electrical, and sensory interactions across time and space at many scales. This is important because a deeper understanding of the brain precedes advancements in efficient computing, generalizable frameworks for learning, and, of critical importance, the understanding and treatment of neurological diseases. Towards this end, this thesis presents novel approaches and technologies for whole-brain imaging, olfactory stimulation, and correlative imaging, i.e., the utilization and registration of multiple imaging modalities within a single sample. The overall objective of this thesis research is not just to create technologies, but to integrate them to enable richer and more contextual understandings of the larval zebrafish's brain. &#13;
In this work, we present novel light field microscopy algorithms that allow us to reconstruct 3D images from 2D micrographs with improved resolution to enable high-frame-rate recordings of whole-brain neural activity. We describe the design and construction of the first known system for multi-directional olfactory stimulation of larval zebrafish with up to ten separate odor channels. We demonstrate an optimized expansion microscopy-compatible immunostaining protocol for whole-mount zebrafish which preserves registration epitopes to move towards the neuron-level alignment of structural and functional data. And, finally, we showcase a set of proof-of-concept experiments and analyses which demonstrate our ability to integrate olfactory stimulation, whole-brain calcium imaging, behavioral recording, and structural staining in individual larvae.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157835</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing resource allocation in large communications satellite constellations</title>
<link>https://hdl.handle.net/1721.1/157833</link>
<description>Optimizing resource allocation in large communications satellite constellations
Pachler de la Osa, Nils
Satellite communications are becoming a key technology for maintaining connectivity in a world driven by information. In recent years, established players (such as SES and Telesat), as well as new competitors (such as SpaceX and Amazon), have proposed constellations able to serve hundreds of thousands of users, using thousands of satellites. While the orbital configuration of each design is different, the next generation of satellite communications relies on highly flexible digital payloads, such as phased array antennas, on-board processing, and adaptive modulation and coding schemes. Several approaches have been proposed to deal with the complexity of the added flexibilities at the spacecraft level. Nevertheless, how to address the flexibilities at the constellation level, critical to operating the next generation of systems, remains an open question. This dissertation develops optimization-based decision-making frameworks for designing and operating the next generation of communication constellations. In particular, novel methods for the Beam Shaping, User Grouping, Satellite Routing, Frequency Assignment, and Gateway Routing problems are proposed, tailored for large non-geostationary orbit constellations with satellites at multiple altitudes, referred to as hybrid systems. The methods leverage optimization to find sets of decisions that maximize capacity and quality of service and minimize necessary ground infrastructure, all while avoiding interference. The proposed methods are then combined, tested, and evaluated using existing constellation designs under representative operational conditions with hundreds of thousands of users. The reported results show that the proposed techniques can double the capacity of these systems, with favorable trade-offs in quality of service and necessary ground infrastructure.
By testing existing designs, it is concluded that the number of satellites and the link quality are the main drivers of performance. Furthermore, the analysis shows that hybrid constellations offer advantages over other designs, thanks to the combination of high-quality links on low-altitude satellites and high coverage on high-altitude satellites. Additionally, this dissertation studies the optimal proportion of satellites across various altitudes in hybrid LEO-MEO constellations. Results show that hybrid constellations are desirable when the costs of MEO and LEO satellites are comparable and interference is minimal.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157833</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Design of Molecular Nanostructures for Exciton Control</title>
<link>https://hdl.handle.net/1721.1/157831</link>
<description>Theoretical Design of Molecular Nanostructures for Exciton Control
Castellanos, Maria A.
Organic semiconductors composed of strongly-coupled chromophores harness control of delocalized excitations, or excitons, via programmed molecular structures. The dynamics of these excitons enable energy and information transfer within molecular networks, positioning chromophore assemblies as ideal candidates for a number of technologies such as solar energy conversion, nanoelectronics, and quantum computing. Despite significant advancements, there exists no universal model that can explain the dependence of exciton photophysics on molecular morphology. This thesis employs mathematical and atomistic models to contribute key physical insights into the interdependencies between chromophore spatial organization and exciton dynamics, shaped by inter-chromophore couplings and interactions with the thermal bath.&#13;
&#13;
In the first part, a Frenkel Exciton-based model is introduced as a strategy for studying exciton evolution between precisely arranged chromophores. In Chapter 2, I develop a novel approach to map unitary quantum computing operations to Hamiltonians describing excitonic circuits in the presence of a model bath. Then, Chapter 3 scales this framework to complex quantum algorithms represented by explicit molecular systems. Finally, Chapter 4 presents an innovative molecular approach for directing exciton flow via geometrical phase in tightly-bound chromophore arrays. &#13;
&#13;
The second part delves into the intricacies of exciton interaction in densely packed molecular systems arranged within DNA scaffolds. Chapter 5 combines molecular dynamics and quantum mechanical calculations, further validated by experimental results, to study the interplay between long-range electrostatic and short-range charge transfer interactions. Chapter 6 then correlates this interplay with geometrical configurations derived from the DNA scaffolding. This thesis culminates in Chapter 7, which introduces a computational pipeline designed to leverage the precise control over excitons afforded by macromolecular frameworks, paving the way for custom-tailored DNA-based excitonic circuits.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157831</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High field dynamic nuclear polarization methods: Microwave sources and mechanisms</title>
<link>https://hdl.handle.net/1721.1/157830</link>
<description>High field dynamic nuclear polarization methods: Microwave sources and mechanisms
Mardini, Michael
In thirty years of active development, dynamic nuclear polarization (DNP) has emerged as a forefront technique for expanding the scope of solid state nuclear magnetic resonance. For the most part, and particularly at high fields, these advances have come with continuous-wave microwave irradiation and the introduction of nitroxide-based biradicals exploiting the cross effect mechanism. In this thesis, I argue that this approach is not necessarily optimal and report progress towards arbitrary-waveform DNP, in the construction of a suitable solid-state microwave source, and the use of narrow-line monoradicals exploiting the Overhauser effect. My colleagues and I have also investigated the Overhauser mechanism through selective deuteration of radicals, leading to a relatively simple modification which yielded a significant increase in Overhauser enhancement. Finally, I detail studies of two unexplored DNP mechanisms in trityl: the three-spin solid effect and resonant mixing. With solid-state microwave sources and Overhauser radicals, DNP is now more accessible as we can achieve reasonable enhancement without the need for a gyrotron. Moreover, as amplifier and resonator technologies continue to develop, it is likely that pulsed DNP will emerge at high fields and overtake continuous-wave DNP in absolute sensitivity enhancement as well.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157830</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for the Study of Galactofuranose in Mycobacteria</title>
<link>https://hdl.handle.net/1721.1/157829</link>
<description>Methods for the Study of Galactofuranose in Mycobacteria
Taylor, Katherine I.
Despite the energy costs associated with deploying furanose sugars over their more stable pyranose counterparts, galactofuranose is prevalent across nature from commensals to human pathogens. However, it is conspicuously absent from human cells, thus establishing its biosynthetic pathways as important potential drug targets. Our knowledge about the biological roles of galactofuranose in cells is hampered by a dearth of methods by which to study it. Access to glycans containing galactofuranose is limited by the unfavorable equilibrium between pyranose and furanose forms of the sugar, leading to low-yielding and synthetically arduous routes to galactofuranose glycans and its high-energy nucleotide sugar donor, used in biochemical experiments to probe the activity and kinetics of galactofuranose glycosyltransferases. Moreover, study of carbohydrate structures within the cell is limited by the lack of methods to selectively modify glycans with functional handles. Finally, study of the biosynthetic machinery for galactofuranose biosynthesis and inhibitors thereof is limited by their relatively weak affinities for their ligands, providing a challenge for selective chemical probes. In this work, we describe three methods to address these challenges and expedite the study of galactofuranose-containing glycans, their biological function, and their biosynthetic machinery. First, we developed a method to produce the rare high-energy sugar donor UDP-galactofuranose in situ for facile preparation of the mycobacterial galactan utilizing the sugar mutase UDP-galactopyranose mutase. We used this method to generate up to 10 milligrams of polymer and demonstrated that it could be selectively functionalized. 
Second, we leveraged the rapidly expanding set of biosynthetic probes of the mycobacterial cell wall to characterize intracellular distances between distinct layers of the cell wall in the Mycobacterium tuberculosis model organisms Corynebacterium glutamicum and Mycobacterium smegmatis using fluorescence resonance energy transfer. We evaluated strains with varying galactan structures and compared our data to previous characterization of the cell wall to assess the method’s utility. Finally, we characterized the kinetics of a mild electrophile, the squaric ester, and assessed its utility in selectively binding and modifying a key galactofuranosyl transferase involved in mycobacterial cell wall biosynthesis. Taken together, these findings present a suite of methods to expedite the exploration of galactofuranose structure and function within a relevant pathogen and lay the groundwork for further study of galactofuranose across other organisms.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157829</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Mechanical Counter Pressure Spacesuits and Compression Garments: Active Pressurization and Design for Mobility</title>
<link>https://hdl.handle.net/1721.1/157827</link>
<description>Engineering Mechanical Counter Pressure Spacesuits and Compression Garments: Active Pressurization and Design for Mobility
Kothakonda, Akshay
Extravehicular activities (EVAs) are an essential and integral part of human exploration of space, with their use ranging from performing scientific experiments on a planetary surface to assembling space stations. Spacesuits must provide an astronaut with the conditions necessary to survive an EVA for several hours and to enable them to carry out these complex tasks. One of the main factors impeding the effectiveness of EVA operations is the stiffness of the spacesuits, which is inherent in gas pressurized suits. While there have been several engineering advances in improving the mobility of gas pressure suits, a mechanical counter pressure (MCP) suit seeks to significantly improve mobility and minimize metabolic workload by replacing gas pressure with contact pressure of a tensioned fabric against the body. Although a marked improvement in the mobility of MCP suits over traditional gas pressurized suits was demonstrated in the 1970s with the Space Activity Suit, engineering challenges remain before such a suit can be used operationally. Applications of the MCP suit concept extend to compression garments for athletic and medical use. This thesis seeks to address some of the fundamental requirements of an MCP suit. These include providing uniform MCP of 29.6 kPa over the body, minimizing mechanical work during suited movements, and enabling easy don and doff. While the thesis focuses on the single degree-of-freedom arm section of the suit, the work can be extrapolated to the entire body. The bidirectional actuation of two-way Shape Memory Polymers (2W-SMP) is leveraged to both provide MCP and allow for easy don/doff. This is achieved by reversing actuation via thermal stimulus. Two designs of the MCP suit, each an assembly of suit fabric, 2W-SMP, and elastomers, are conceived, and analysis is carried out to select the more feasible design. On the selected design, analysis is conducted to select a 2W-SMP with maximum MCP for a given donning effort.
Two types of suit fabrics are analyzed in the design: the woven fabric and the jersey knit fabric. Nonlinear finite element analysis (FEA) models that can be used to analyze the deformation of these fabrics under suited movements have been developed. Results from these simulations are expected to aid in designing the fabric in such a way that it sustains circumferential tension of the 2W-SMP and minimizes mechanical work during movements. While the former would require the fabric to be stiff along the circumferential direction, the latter can be achieved by aligning the compliant axes of a given fabric with the directions of first principal strain. The process of estimating the optimum fabric pattern will be iterative, and the resulting pattern may comprise a composite of different fabric types with varying parameters. Mapping skin Lines of Non-Extension (LoNEs) informs contours for inextensible cables in such a way that they do not impede movements. These cables may form part of the suit's life support system. This thesis focuses on developing tools, such as a methodology for sizing SMPs, fabric models, and LoNEs, and as such does not use those tools to arrive at an optimum suit design. Utilization of these tools towards suit design is one of the future tasks in this work. Additionally, the author expects that future research efforts in this area at large will benefit from these tools. The thesis includes an introduction of the problem and the motivation, background and literature review of relevant concepts, a deep dive into analysis and tests on shape memory polymer materials and their use for compression devices, development of fabric numerical models, and finally a discussion of the work and a summary of contributions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157827</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Future Space Debris Population and Orbital Capacity</title>
<link>https://hdl.handle.net/1721.1/157822</link>
<description>Modeling the Future Space Debris Population and Orbital Capacity
Jang, Daniel
Increased investments and technological advances in satellite manufacturing and launch services have led to a newly vitalized Low Earth Orbit (LEO) environment. Megaconstellations consisting of hundreds to hundreds of thousands of satellites have been proposed, with SpaceX’s Starlink satellite constellation now reaching more than 5400 operational satellites. This denser LEO environment underscores the urgent need for models to predict and manage the risk of collisions and the sustainable use of space. Many models have been proposed over the years to quantify the risk of collisions between resident space objects (RSOs), including the seminal paper by Kessler that described the runaway conditions for which LEO could become unusable. In this thesis, the development of the MIT Orbital Capacity Analysis Tool (MOCAT) is described along with conclusions and insights. MOCAT is a novel open-source approach to evaluating the LEO environment and comprises a Source Sink Evolutionary Model (SSEM) and a Monte Carlo (MC) method. The SSEM simplifies the complex dynamics of space-object interactions into deterministic equations, focusing on the long-term evolution of orbital populations across different altitude shells. The simplified nature of the SSEM allows for computational efficiency, which enables optimization routines such as the exploration of equilibrium solutions for LEO carrying capacity. The improvements to the SSEM in this work, through binning in the physical dimension as well as the inclusion of Delta-V dynamics from the collision dynamics, increase the fidelity of the SSEM. In comparison, MOCAT-MC offers a comprehensive means to simulate the individual interactions between RSOs. The MOCAT-MC tool propagates the orbits of LEO objects, models their interactions including collisions and explosions, and provides insights into the evolving trends of the LEO population.
Of particular note is the computational efficiency of the model, which is essential for managing the complexities inherent in orbital dynamics and the potentially large number of objects centuries into the future. Validation results and a range of simulations, including no-future-launch scenarios and the launch of proposed megaconstellations totaling more than 80,000 active payloads, are explored, resulting in millions of trackable objects. Although far fewer megaconstellations are planned at the higher altitudes, even a small fraction of failures in post-mission disposal or collision avoidance maneuvers results in an outsized effect on orbital debris accumulation. MOCAT-MC is able to simulate Lethal Non-Trackable (LNT) objects, which comprise the vast majority of the orbital population today. This lethal non-trackable population will only grow as more payloads and debris are launched into orbit, increasing the collision rate. The effect of these objects is modeled and discussed. These two models offer different approaches to modeling the future orbital environment, each with its strengths and weaknesses. Validation against existing models in the literature shows the utility of MOCAT in informing future space traffic management and constellation design. The MOCAT tool has been created such that researchers can use a common model that is validated, robust, and efficient, allowing for advancement in our ability to forecast and mitigate the risks associated with the increasing density of LEO while advocating for a more sustainable approach to space exploration and utilization.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157822</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Principles from Cognitive Science to Analyze and Guide Language-Related Neural Networks</title>
<link>https://hdl.handle.net/1721.1/157821</link>
<description>Using Principles from Cognitive Science to Analyze and Guide Language-Related Neural Networks
Tucker, Mycal
Natural language, while central to human experience, is not uniquely the domain of humans. AI systems, typically neural networks, exhibit startling language processing capabilities from generating plausible text to modeling simplified language evolution. To what extent are such AI models learning language in a “human-like” way? Defining “human-like” generally may be an impossible problem, but narrower definitions of aspects of human-like language processing, borrowed from cognitive science literature, afford metrics for evaluating AI models. In this thesis, I borrow two theories about human language processing for such analysis. First, human naming systems (e.g., a language’s words for colors such as “red” or “blue”) appear near-optimal in an information-theoretic sense of compressing meaning into a small number of words; I ask how one might train AI systems that behave similarly. Second, people understand and produce language according to hierarchical representations of structure; I study whether large language models use similar representations in predicting text. Thus, in this thesis, I show how to train and analyze neural networks according to cognitive theories of human language processing. In my first branch of work, I introduce a method for neural network agents to communicate according to cognitively-motivated pressures for utility, informativeness, and complexity. Utility represents a measure of task success and induces task-specific communication; informativeness is a task-agnostic measure of how well listeners understand speakers and leads to generalizable communication; complexity captures how many bits are allocated for communication and can lead to simpler communication systems. All three terms are important for human-like communication. In experiments, training artificial agents according to different tradeoffs between these properties led them to learn different naming systems that closely aligned with existing natural languages.
In my second branch of work, rather than training neural agents from scratch, I probed pre-trained language models and found that some use representations of syntax in making predictions. Humans use hierarchical representations of sentence structure in understanding and producing language, but it is unclear if large language models, trained on simple tasks like next-word prediction, should learn similar representations. I introduce a causal probing method that sheds light on this topic. By creating counterfactual representations of syntactically ambiguous sentences, I measured how model predictions changed for different structural interpretations of the same sentence. For example, I recorded model predictions to ambiguous inputs like “The girl saw the boy with the telescope. Who had the telescope?” with different syntactic structures. For some (but not all) models, I found evidence that they use representations of syntax (e.g., they change their answers to the previous question). Thus, I offer novel insight into pre-trained models and a new method for studying such models for other properties. The two halves of my thesis represent complementary approaches towards more human-like AI; training new models and analyzing pre-trained ones closes an AI development feedback loop. In this thesis, I explain my contributions to both parts of this loop and identify promising directions for future research.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157821</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Kinetics and Nonequilibrium Thermodynamics of Driven Systems: Stochastic Methods and Applications to Single-Molecule Biophysics</title>
<link>https://hdl.handle.net/1721.1/157819</link>
<description>Statistical Kinetics and Nonequilibrium Thermodynamics of Driven Systems: Stochastic Methods and Applications to Single-Molecule Biophysics
Piephoff, D. Evan
Advances in condensed-phase spectroscopy have made it possible to obtain time traces of biomolecules at the single-molecule level of detail. These real-time trajectories provide details that are typically unavailable in ensemble-averaged experiments, such as the effect of conformational dynamics on enzymatic reactions. From a theoretical perspective, it is therefore valuable to develop kinetic approaches for characterizing measurable quantities in order to connect to such single-molecule experiments. In this thesis, we analyze the statistical kinetics and nonequilibrium thermodynamics of driven biomolecular systems, with a particular emphasis on enzymatic processes. Specifically, we focus on kinetic methodology development; analyzing single-molecule fluctuations for mechanistic insight; examining the modulating influence of conformational interconversion on enzyme catalysis; and characterizing the nonequilibrium thermodynamics of generalized biomolecular machines. For enzymatic turnover reactions, it is found that the turnover rate reduces to the celebrated Michaelis–Menten functional form when conformational detailed balance is satisfied. In the presence of non-vanishing conformational currents, we predict and characterize the rich, cooperative behaviors attainable in conformational nonequilibrium. In addition, enzyme turnover fluctuations are analyzed by studying the Poisson indicator, a normalized measure of stochastic variation. A novel pathway analysis framework is extended to nonrenewal processes (i.e., those with correlated inter-event times) and fully reversible processes, accounting for kinetic network complexities, nontrivial event-averaged initial conditions, and the constraints associated with microscopic reversibility.
For a dynamically disordered biomolecular machine involving an observable process coupled to a hidden process, a recently derived time-based fluctuation theorem no longer applies to the observable first-passage time; however, using a stochastic thermodynamics approach to examine fluctuating trajectories, we find that its validity is restored in the absence of hidden flux through the initial state manifold. Thus, the violation of this relation serves as an experimentally verifiable signature of hidden detailed balance breaking. The analysis presented herein provides a novel framework for analyzing a variety of kinetic processes, including enzyme turnover, molecular motor translocation, ion transport, and fluorescence emission.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157819</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on stereospecific diketopiperazine oxidation and applications to the synthesis of complex epidithiodiketopiperazines</title>
<link>https://hdl.handle.net/1721.1/157818</link>
<description>Studies on stereospecific diketopiperazine oxidation and applications to the synthesis of complex epidithiodiketopiperazines
Walker, Katherine L.
I. Introduction and Background on Epidithiodiketopiperazines &#13;
&#13;
A brief history and summary of methods for synthesis of epidithiodiketopiperazines (ETPs) are discussed. Three hypotheses for the mechanism of action of these biologically active natural products are reviewed, and the unified biosynthetic hypothesis that our group disclosed is summarized. The total syntheses of the natural product hyalodendrin are analyzed as a case study of the total synthesis of ETPs, and representative examples of our group’s entries into the synthesis of complex ETPs are examined.&#13;
&#13;
II. Studies on Stereospecific Diketopiperazine C–H Hydroxylation&#13;
&#13;
Mechanistic investigation of the permanganate-mediated hydroxylation reaction of 2,5-diketopiperazines (DKPs) is discussed. The course of the hydroxylation reaction with three permanganate oxidants examined in our total synthesis of naturally occurring epipolythiodiketopiperazines (ETPs) is investigated with respect to the activity of the different oxidants, as well as the stereochemical outcome and the configurational stability of the product diols. An example of a subsequent thiolation was then demonstrated to proceed with retention of stereochemistry, in contrast to the stereoinvertive thiolations previously observed in several total syntheses. The data are supported by computational analyses.&#13;
&#13;
III. Progress Toward the Total Synthesis of (+)-Chetomin&#13;
&#13;
We describe our work toward the total synthesis of ETP natural product (+)-chetomin. Key features of the synthetic progress include a method for construction of the key nitrogen–carbon bond with advanced reaction partners, including protected diols, sulfides, and ETPs, and stereocontrolled thiolation strategies. The challenges remaining to access (+)-chetomin are addressed on model systems.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157818</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Synthetic Proteins Produced via Automated Fast-Flow Peptide Synthesis</title>
<link>https://hdl.handle.net/1721.1/157817</link>
<description>Investigation of Synthetic Proteins Produced via Automated Fast-Flow Peptide Synthesis
Cowfer, Amanda Elizabeth
Flow chemistry techniques and methods have given the broad scientific community high-fidelity access to chemical compounds with minimal effort compared to traditional synthetic techniques. Since the introduction of solid phase peptide synthesis (SPPS), the peptide community has endeavored to combine the convenience of flow chemistry with the iterative steps associated with peptide elongation in SPPS. Nearly one decade ago, members of the Pentelute lab envisioned and developed a flow-based peptide synthesizer, the Automated Fast-Flow Peptide Synthesizer, or AFPS for short. This technology enabled fast, reliable access to short peptide chains, with each coupling taking less than 3 minutes in total, significantly decreasing the labor needed to produce these peptides. However, peptide chains over 50 amino acids remained challenging to produce via AFPS, microwave synthesis, or traditional SPPS batch couplings. With modern research requiring rapid, high-fidelity access to long polypeptide chains, there is an immediate need for peptide synthesis technology that can produce single-domain protein polypeptides in a single shot. Herein, I report on the arduous journey and unmatched teamwork needed to improve the AFPS systems for regular, reliable access to polypeptide chains of more than 200 amino acids in a single working day. In addition, I will highlight the workflow and knowledge needed to take a free polypeptide chain to a fully folded and biologically active protein, equivalent in form and function to its recombinant counterparts. I will discuss the iterative steps my team took to vary both chemical and mechanical control variables to improve per-coupling yield enough to enable access to full-length single-domain proteins. On this journey, we utilized test peptides to validate synthesis quality and later synthesized a suite of full-length single-domain biologically active proteins. I will spend some time focusing on the barnase-barstar binding pair.
Next, I will dive into how I design and build each AFPS synthesizer to improve synthesis outcomes and user-friendliness while retaining the core functionality and customizability that have made the AFPS so successful in the Pentelute lab. I will highlight my role in the renovation of the first generation AFPS system, the “Automatide,” and dive into the key characteristics that set our synthesizers apart from what is currently commercially available. Finally, we report on the synthesis and characterization of several small and very interesting luciferases. Luciferases are proteins that produce bioluminescence when exposed to specific chemical substrates, and for the organisms that produce these enzymes, they play a vital role in mating, defense, and camouflage. In the research arena, luciferases have had broad applications for decades, including detection of environmental contaminants, diagnosis of pathogens, high-throughput screening for drug discovery, understanding protein-protein interactions, and more. Current efforts in the field have focused on the development of small artificial luciferases due to their many advantages over traditional larger luciferases, such as enhanced stability and increased brightness. Herein, we report on the synthesis and characterization of the copepod Gaussia princeps luciferase GLuc (18 kDa), and artificial luciferases picALuc (12 kDa) and LuxSit-I (14 kDa). In addition, we synthesized the mirror-image counterpart of picALuc due to its potential for broad-reaching impact in health and diagnostics; this is the first reported mirror-image bioluminescent luciferase. Finally, we will report on our efforts to develop a split-picALuc protein complementation assay (PCA) using AS-MS technology, which will be the smallest and most versatile split-luciferase reported to date.
In summary, fast-flow peptide synthesis was utilized to produce and investigate several biologically relevant proteins to improve upon existing tools available to the broad chemistry and biology community.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157817</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Efficient Planning for Navigation using Global Information in Large and Uncertain Environments</title>
<link>https://hdl.handle.net/1721.1/157816</link>
<description>Towards Efficient Planning for Navigation using Global Information in Large and Uncertain Environments
Kurtz, Martina Stadler
We would like to enable a team of robots to navigate quickly and efficiently in large and uncertain outdoor environments. We hypothesize that in such environments, global, uncertainty-aware information is necessary to enable high-quality planning. However, most existing systems do not model or plan using global, uncertainty-aware information. For example, many planners assume access to complete global information in the form of full environment maps, or they assume that locally good planning decisions under uncertainty will result in globally good planning outcomes. To enable the use of global information for planning in large and uncertain environments, we must develop models that concisely represent key navigation features of the environment, and build planners that are capable of reasoning efficiently about global information. In this thesis, we design models and planners that use global information in large and uncertain environments to increase the efficiency and quality of planning for navigation. We present four contributions towards using global information for efficient navigation. First, we propose a high-level planning representation that can be learned from previous plans considered in the environment and used online during hierarchical, multi-query robot navigation. Second, we propose a planner for collaborative multiagent navigation in an uncertain environment; the approach uses macro-actions and value function approximations to maintain computational tractability. Third, we develop a robust hierarchical planning system to enable the deployment of the collaborative multiagent planner on a real-world team navigating in a structured, uncertain outdoor environment. Finally, we develop a method for learning uncertainty-aware, single-agent value-function approximations from graph data to increase the efficiency of the collaborative multiagent planner.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157816</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Cleavable Monomers and Cross-Linkers for the Synthesis of Degradable Polymer Architectures</title>
<link>https://hdl.handle.net/1721.1/157815</link>
<description>Design of Cleavable Monomers and Cross-Linkers for the Synthesis of Degradable Polymer Architectures
Cardoso da Costa, Leticia
Degradable materials, with different chemical compositions and various polymer architectures, are desirable for countless purposes, ranging from biological applications to recyclability of plastic waste. The creation of brand-new materials with useful, desired properties and built-in degradability is, however, very difficult. The introduction of labile bonds to already known polymers thus offers a much simpler approach for the manufacture of degradable materials. Here, we report the design of cleavable monomers and cross-linkers for the synthesis of degradable materials with different polymer architectures. The first half of this thesis focuses on the design of new degradable bottlebrush and brush-arm star polymers (BASPs) via ring-opening metathesis polymerization (ROMP). A brief introduction to the recent advances of bottlebrushes and related nanoarchitectures as a promising carrier platform is provided, followed by the current efforts to impart degradability within the nanoparticle in order to modulate its drug release and clearance rate (Chapter 1). After the introduction, we present the synthesis of boronic ester-crosslinked BASPs that selectively disassemble into bottlebrush fragments upon exposure to hydrogen peroxide, which is often elevated in diseased tissue microenvironments. The H2O2-induced disassembly of spirocyclohexyl nitroxide (chex)-containing BASPs induces a change in transverse magnetic relaxivity that can be detected via magnetic resonance imaging (MRI) (Chapter 2). In the next chapter, we present the synthesis of backbone-degradable bottlebrush polymers via the co-polymerization of drug-loaded norbornene macromonomers with a library of tailored silyl ether-based olefins via ROMP. The difference in backbone degradation rates, imparted by the silyl ether substituents, leads to different drug release profiles and therapeutic efficacy in vitro (Chapter 3).
The second half of this thesis focuses on the introduction of degradable bonds into the polymer backbone of vinylic thermosets via radical ring-opening polymerization (rROP). A brief introduction to the current strategies utilized to impart chemical deconstruction to cross-linked polymer networks prepared by radical polymerization is presented (Chapter 4). Lastly, we improve the performance of a consumer good, gel nail polish, by imparting degradability via co-polymerization with a cleavable comonomer. Gel nail polishes, UV-curable (meth)acrylic coatings, display superior mechanical and adhesive properties compared to alternative nail polishes. These properties, however, come at the expense of ease-of-removal. Here, a cleavable bond is introduced into the resulting cured polymer networks via co-polymerization with a cleavable comonomer. This approach does not impact the material’s properties while enabling easy and fast removal under triggered deconstruction (Chapter 5).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157815</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging the Properties of Aprotic Solvents Towards Efficient Electrocatalytic Carbon Dioxide Reduction</title>
<link>https://hdl.handle.net/1721.1/157814</link>
<description>Leveraging the Properties of Aprotic Solvents Towards Efficient Electrocatalytic Carbon Dioxide Reduction
Chu, An T.
Electrochemical carbon dioxide reduction has been studied as a method to sustainably produce valorized hydrocarbons. However, the reaction faces two challenges: low reaction selectivity towards value-added products, and the deleterious reaction of carbon dioxide with the electrolyte to form soluble carbonate species. While both issues are sensitive to the composition of the electrolyte, the reaction has been exhaustively studied in aqueous electrolytes, which offer limited opportunities for further tuning. This thesis describes approaches for overcoming low reaction selectivity and electrolyte carbonation using aprotic solvent-based electrolytes. We leverage the unique solvation environments and equilibrium acidities accessible in such media to overcome key limitations to reaction performance that are intrinsically linked to the use of aqueous electrolytes. We demonstrate key principles for tuning aprotic solvent-based electrolytes towards improving carbon dioxide electroreduction catalysis, establishing the foundation for the development of advanced electrolyte designs.&#13;
&#13;
Chapter 1 details the development of a dimethyl sulfoxide / acetic acid electrolyte which can engender selective carbon dioxide reduction with minimal electrolyte carbonation on gold cathodes. We demonstrate that the key to achieving this balance of properties is operating an electrolyte with a low water content while simultaneously using a non-nucleophilic buffer whose pKa is matched to the carbon dioxide / bicarbonate equilibrium. Under such conditions, the selectivity to carbon monoxide can be driven as high as 90% with only millimolar equilibrium bicarbonate formation: a compromise difficult to achieve in water.&#13;
&#13;
Chapter 2 details the discovery of a new mechanism for ethylene electrosynthesis on a copper catalyst using a dimethyl sulfoxide / phenol electrolyte. Starting from carbon monoxide—a crucial intermediate in the carbon dioxide reduction pathway—we present kinetic evidence that radically altering the solvent environment and proton donor can enable a mechanism involving quasi-equilibrium proton and electron transfer steps prior to a late rate-determining step. By demonstrating that the pathway in dimethyl sulfoxide / phenol has a potential-rate scaling and acid order distinct from those in aqueous electrolytes, we establish a new tunable platform for enabling selective electrocatalysis of hydrocarbon products.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157814</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developments in THz Polaritonics: Towards Integrated Nonlinear THz Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157813</link>
<description>Developments in THz Polaritonics: Towards Integrated Nonlinear THz Spectroscopy
Sung, Eric Rueyhao
The terahertz (THz) polaritonics platform is a compact, waveguide-based platform for the generation, manipulation, and detection of THz waves. The platform uses thin (&lt;100 μm) lithium niobate (LiNbO₃, LN) and lithium tantalate (LiTaO₃, LT) slabs, which can be patterned to control THz propagation. One of the unique features of the platform is that the THz fields can be imaged directly within the slab with subwavelength spatial resolution and subcycle temporal resolution. Both the amplitude and phase of the fields are recorded, which allows the full spatiotemporal evolution of the fields to be visualized. This makes the platform appealing for compact, waveguide-based THz experiments. The work in this thesis aims to develop tools to enable robust, compact THz spectroscopy using the polaritonics platform.&#13;
&#13;
The first phase of my research aims to develop methods for enhanced THz generation in the waveguides. In a typical polaritonics experiment, the optical pump light is focused to a single line which launches THz fields with electric field strengths of approximately 10 kV/cm. Although the fields are sufficiently strong for THz imaging, any nonlinear spectroscopic applications would require the use of much larger THz fields so that the much weaker THz transients that result from multiple interactions with the sample could be reliably detected. To this end, I developed two methods. The first method uses thin LN waveguides with a beveled edge for enhanced narrowband THz generation. The optical pump light is focused onto the bevel, after which it refracts and becomes confined within the waveguide by total internal reflection. This allows the pump beam to repeatedly drive the generated THz field during its multiple back-and-forth traversals within the LN slab. Using this method, we observe a 10-fold enhancement of the THz spectral amplitude at the velocity-matched frequency. The second method combines the tilted pulse front geometry with THz focusing to generate a strong THz field in the time domain. A circular stair-step "echelon" mirror is used to shape the pump pulse into a conical tilted pulse front composed of a series of concentric rings of pump light. When the pump rings are imaged onto a thin LT waveguide, coherent superposition of the focusing THz fields excited individually by each pump ring results in a dramatically enhanced THz field at the focus. When optimized, the method generates THz fields with electric field strengths up to 175 kV/cm, which is roughly 20x larger than what is generated by a single line of pump light.&#13;
&#13;
The second phase of my research focuses on methods for expanding the polaritonics toolset for spectroscopic applications. Previous experiments coupling the THz phonon-polaritons in an LN waveguide to the quasi-antiferromagnetic magnon mode in an adjacent slab of ErFeO₃ relied on the fact that both materials have similar refractive indices, restricting the range of compatible sample materials. Furthermore, the ErFeO₃ layer complicates THz imaging because it strongly absorbs the optical probe light. I investigated two experimental geometries to address these concerns. The first geometry uses a high-reflecting coating sandwiched between the LN slab and the sample material. The coating is designed to reflect the optical probe light, which enables THz imaging in LN by preventing the probe light from entering the sample and greatly expands the range of possible samples. The second geometry uses a slot waveguide to localize the THz field within a low-index slot region, which results in much stronger interactions between the THz fields and a sample inserted into the slot. Using this geometry, the linear THz absorption spectrum of a test sample was measured with good sensitivity and the complex dielectric function was recovered.&#13;
&#13;
The work presented here describes methods for enabling robust integrated THz spectroscopy in the polaritonics platform. The methods, when combined, should also form the basis for future polaritonics experiments that interrogate the nonlinear THz responses of materials.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157813</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring and Exploiting Ribonuclease 1: from Protein Biochemistry to Protein Engineering</title>
<link>https://hdl.handle.net/1721.1/157812</link>
<description>Exploring and Exploiting Ribonuclease 1: from Protein Biochemistry to Protein Engineering
Wralstad, Evans Christian
Ribonuclease (RNase) 1 is a human protein with a remarkable ability to indiscriminately hydrolyze RNA. RNase 1 and its bovine homologue RNase A exhibit ubiquitous expression across tissues, a catalytic efficiency within the diffusion-limited regime, and minimal substrate sequence requirements. RNase A has been a favorite model protein of biochemists for over half a century; due to the high level of sequence conservation between RNase A and RNase 1, many observations made for RNase A have corollaries for RNase 1.&#13;
&#13;
RNase 1 and RNase A are members of the pancreatic-type ribonuclease (ptRNase) superfamily, a class of enzymes that share many biophysical features, including a small molecular weight, high cationicity, and a secretory nature. Historical studies of ribonuclease biochemistry describe these enzymes’ susceptibility to oxidation-induced inactivation. This raises the question: how are these secretory enzymes able to preserve catalytic competency in oxidatively challenging extracellular environments such as blood serum and even epidermal skin?&#13;
&#13;
In Chapter 2 of this thesis, the intrinsic antioxidative capacity of RNase 1 is described. Chemical biology and biomimetic techniques corroboratively implicate two methionine residues as sacrificial antioxidants to protect the enzymic active site, allowing catalysis to persist in the presence of reactive oxygen species. In silico studies suggest evolutionary patterns to install these antioxidative features across the ptRNase superfamily. Sulfur–arene interactions appear to tune the reactivity of methionine residues in a manner consistent with rates of oxidation. These findings highlight an underappreciated role for methionine—to protect catalytic histidine residues—and indicate a means by which ptRNases remain functional in oxidatively challenging physiological environments.&#13;
&#13;
The desirable biophysical features of RNase 1 and the wealth of biochemical knowledge regarding it have also made it a favored model system of protein engineers, as exemplified by RNase S and cyclic RNase-based zymogens, two systems which reversibly attenuate ribonucleolytic activity. In particular, RNase-based zymogens can be activated by exogenous proteases; this schema has biotherapeutic potential, as demonstrated by zymogens which activate in response to viral infection and exert cytotoxic ribonucleolytic activity.&#13;
&#13;
Efforts to establish a zymogen directed toward the coronavirus SARS-CoV-2 are described in two parts of this thesis. In Chapter 3, the main protease 3CLpro of SARS-CoV-2 is enzymologically characterized. This work clarifies reported inconsistencies in enzymological features of this key viral protease and relies on a non-Michaelis–Menten, Bayesian inference-based analytical technique to circumvent some of the causes of the inconsistent prior reports. Then, in Chapter 4, the newfound knowledge of 3CLpro enzymology is applied toward the design of an RNase 1-based, 3CLpro-directed zymogen. The zymogen is inactivated by steric occlusion and conformational distortion of the active site, and site-specific activation by 3CLpro results in a multi-order-of-magnitude increase in ribonucleolytic activity. 3CLpro action upon the zymogen leads to ribonucleolytic turnover of a fluorescent RNA substrate by the activated species, affording signal amplification that enables detection of nanomolar 3CLpro concentrations in a timeframe comparable to rapid antigen detection testing.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157812</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market-Based and Policy-Based Conditional Demand Forecaster for Airline Revenue Management</title>
<link>https://hdl.handle.net/1721.1/157811</link>
<description>Market-Based and Policy-Based Conditional Demand Forecaster for Airline Revenue Management
Lu, Yuxuan
The price transparency brought by the proliferation of online travel agencies and the loosening of fare class restrictions increases the importance of competitive pricing in the practice of airline revenue management. In this dissertation, we propose a novel forecasting framework, the market-based and policy-based conditional demand forecaster (MPCF), to provide airline revenue management systems (RMSs) with dynamically adjusted sell-up probabilities and fare class demand forecasts based on estimated total market demand (market-based) and predicted fare class availabilities of competitors (policy-based) for a future flight departure.&#13;
&#13;
In the MPCF framework, an airline estimates the total market demand for travel on a future departure day and predicts its competing airline’s future fare class availabilities. The estimation of the total market demand allows the forecasting airline to anticipate additional demand when it offers lower fare quotes than its competitors and vice versa; the prediction of a competitor’s policy enables the revision of expected passenger sell-up probabilities to higher fares, conditioned on competitive influences and assumed passenger choice behaviors.&#13;
&#13;
In simulation tests, we assumed a Bertrand-Edgeworth passenger choice model, corresponding to a fully undifferentiated fare environment. Assuming perfect knowledge of the total market demand that has already arrived for a future departure date, MPCF gains 12.69% in revenue compared with existing Q-forecasting in an isolated market. Without that perfect knowledge, so that total market demand must be estimated, MPCF still gains 6.23% in revenue. MPCF leads to higher revenue gains on departures with demand levels that are different from the mean historical demand, demonstrating its benefits in providing dynamic sell-up and forecast guidance according to the predicted policies of competitors. The simulation results confirm the benefits of constructing demand forecasts on predicted market demand and competitors’ policies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157811</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Practical Engineering Design Optimization with Computational Graph Transformations</title>
<link>https://hdl.handle.net/1721.1/157809</link>
<description>Accelerating Practical Engineering Design Optimization with Computational Graph Transformations
Sharpe, Peter D.
Multidisciplinary design optimization (MDO) has immense potential to improve conceptual design workflows for large-scale engineered systems, such as aircraft. However, despite remarkable theoretical progress in advanced optimization methods in recent decades, practical industry adoption of such methods lags far behind. This thesis identifies the root causes of this theory-to-practice gap and addresses them by introducing a new paradigm for computational design optimization frameworks called code transformations. Code transformations encompass a variety of computational-graph-based scientific computing strategies (e.g., automatic differentiation, automatic sparsity detection, problem auto-scaling) that automatically analyze, augment, and accelerate the user’s code before passing it to a modern gradient-based optimization algorithm. This paradigm offers a compelling combination of ease-of-use, computational speed, and modeling flexibility, whereas existing paradigms typically make sacrifices in at least one of these key areas. Consequently, code transformations present a competitive avenue for increasing the adoption of advanced optimization techniques in industry, all without placing the burden of deep expertise in applied mathematics and computer science on end users. The major contributions of this thesis are fivefold. First, it introduces the concept of code transformations as a possible foundation for an MDO framework and demonstrates their practical feasibility through aircraft design case studies. Second, it implements several common aircraft analyses in a form compatible with code transformations, providing a practical illustration of the opportunities, challenges, and considerations involved. Third, it presents a novel technique to automatically trace sparsity through certain external black-box functions by exploiting IEEE 754 handling of not-a-number (NaN) values.
Fourth, it proposes strategies for efficiently incorporating black-box models into a code transformation framework through physics-informed machine learning surrogates, demonstrated with an airfoil aerodynamics analysis case study. Finally, it shows how a code transformations paradigm can simplify the formulation of other optimization-related aircraft development tasks beyond just design, exemplified by aircraft system identification and performance reconstruction from minimal flight data. Taken holistically, these contributions aim to improve the accessibility of advanced optimization techniques for industry engineers, making large-scale conceptual multidisciplinary design optimization more practical for real-world systems.
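The NaN-based sparsity tracing mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy (the probing function, its NumPy interface, and the helper name `trace_sparsity` are illustrative, not the thesis's implementation): because IEEE 754 arithmetic propagates not-a-number values through most operations, poisoning one input with NaN reveals which outputs depend on it.

```python
import numpy as np

def trace_sparsity(f, n_inputs, n_outputs):
    """Estimate the Jacobian sparsity pattern of a black-box function f
    by exploiting IEEE 754 NaN propagation: a NaN injected into one
    input contaminates exactly the outputs that depend on it.
    Caveat: functions that branch on comparisons (which are False for
    NaN) can hide dependencies, so this is a heuristic sketch."""
    pattern = np.zeros((n_outputs, n_inputs), dtype=bool)
    base = np.ones(n_inputs)
    for j in range(n_inputs):
        x = base.copy()
        x[j] = np.nan                 # poison one input at a time
        y = np.asarray(f(x))
        pattern[:, j] = np.isnan(y)   # outputs that "caught" the NaN depend on input j
    return pattern

# Toy black box: output 0 depends only on input 0; output 1 on inputs 1 and 2.
f = lambda x: np.array([x[0] ** 2, x[1] + x[2]])
print(trace_sparsity(f, 3, 2))
```

One probe per input recovers the full pattern without any derivative information, which is what makes the trick applicable to external black-box codes.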
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157809</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Vehicle, Payload, and Trajectory Optimization Framework for Highly-Coupled Aircraft Systems</title>
<link>https://hdl.handle.net/1721.1/157808</link>
<description>An Integrated Vehicle, Payload, and Trajectory Optimization Framework for Highly-Coupled Aircraft Systems
Dewald, Annick J.
A class of highly-coupled aircraft systems is identified in Earth observation applications, where the aircraft design couples tightly with the science instrument design and the operation of both the aircraft and science payload. This dissertation identifies an opportunity to simultaneously optimize the aircraft platform, the science payload, and the operational strategy under one system-level objective function to improve the performance of the total aircraft system. This approach extends the field of multidisciplinary design optimization (MDO), which demonstrates that simultaneously optimizing all the subsystems within a larger system allows the optimizer to leverage the couplings between disciplines, rather than be subject to them, resulting in better performance outcomes [1]. The inclusion of the instrument and trajectory in the optimization problem introduces additional objectives related to the science mission needs. While many methods for multi-objective optimization exist in the field, these methods become intractable given the many objectives present in these complex systems. A methodology is proposed to explore trade-offs between multiple objectives by sweeping through different combinations of weighting terms in a weighted-sum objective function to find Pareto-optimal design points across the design space. These design points are then evaluated within the objective space, a hyperspace where each axis corresponds to a different objective, to understand the performance capabilities with respect to each objective and evaluate trade-offs between objectives. Findings from this objective-space exploration can then be communicated to the science stakeholder to find the design that is best capable of meeting the identified science mission needs. This dissertation then applies this methodology to a series of case studies on a representative science mission.
The science mission objective of these case studies is to reduce uncertainty in predictions of sea-level rise by understanding ice mechanics that drive ice shelf collapse and destabilize previously grounded glaciers.
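The weighted-sum sweep described above can be sketched in a few lines. This is a hypothetical two-objective toy (the objective surrogates and the design grid are invented for illustration, not drawn from the dissertation): each weight combination yields one Pareto-optimal design, and sweeping the weights traces out a set of points on the Pareto front.

```python
import numpy as np

# Toy objectives over a scalar design variable (both minimized).
# The functional forms are invented surrogates for illustration only.
def objectives(x):
    mass = x ** 2                 # e.g., a vehicle-mass surrogate
    science = (x - 2.0) ** 2      # e.g., a science-performance penalty surrogate
    return mass, science

candidates = np.linspace(-1.0, 3.0, 401)   # crude discretization of the design space
f1, f2 = objectives(candidates)

# Sweep weighted-sum combinations; each weight yields one Pareto-optimal design.
pareto_points = [float(candidates[np.argmin(w * f1 + (1.0 - w) * f2)])
                 for w in np.linspace(0.0, 1.0, 11)]
print(np.round(sorted(set(pareto_points)), 2))   # the traced set of Pareto-optimal designs
```

For convex objectives like these, every Pareto point is reachable by some weight; for non-convex fronts the weighted-sum method can miss points, which is one reason the objective-space evaluation step matters.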
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157808</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Opportunities and Limitations of Earth Observation Technology for Environmental Justice Advocacy: A Case Study of Toxic Prisons in the U.S.</title>
<link>https://hdl.handle.net/1721.1/157806</link>
<description>Opportunities and Limitations of Earth Observation Technology for Environmental Justice Advocacy: A Case Study of Toxic Prisons in the U.S.
Ovienmhada, Ufuoma
People of color and other socio-economically marginalized groups in the United States experience a disproportionate burden of environmental challenges such as air pollution and extreme heat; the Environmental Justice (EJ) movement aims to combat these burdens and promote collective well-being. Earth Observation (EO) technology, such as satellites, can be used to monitor air quality, extreme heat, and other quantities relevant to EJ. However, the application of this technology to measuring EJ or supporting EJ advocacy efforts has not been widely explored. Satellite EO systems also historically have not been designed with EJ end users in mind. This application is increasingly pressing as space agencies like NASA seek information on how their data can be used to support underserved communities. This dissertation brings together EO data science, systems engineering, and community engagement to elucidate opportunities and limitations of Earth Observation Technology for Environmental Justice Advocacy. The dissertation is organized into three categories of contributions (Description, Evaluation, and Design/Prescription), each composed of multiple research efforts.&#13;
&#13;
In Description, I apply a three-pronged approach to provide insights on the opportunities and limitations of EO data for EJ. First, along with a team of researchers, I assess peer-reviewed literature on satellite data for environmental justice through a scoping review. The second contribution of this chapter is an interview study with a subset of grassroots EJ actors about how they can use EO data in their domain of EJ activism, which contests the exposure of prisons and incarcerated people to environmental hazards. The third contribution of this chapter is a systems engineering architectural description of NASA’s current satellite EO for EJ ecosystem. Using justice theory as an analytical framework, I reveal limitations of NASA’s current EO for EJ architecture for advancing holistic notions of EJ.&#13;
&#13;
In Evaluation, with support from co-authors, I measure spatiotemporal patterns of air pollution burden, and air and land surface temperature extremes, in prison landscapes across the U.S. These studies contribute to a nascent literature documenting empirical evidence of environmental hazards in carceral landscapes. They also extend the literature on applications of satellite-derived and modeled geospatial data for EJ. In Design/Prescription, first, supported by three years of community engagement with prison EJ activists, I present the design of a GIS decision-support system that features EO data responding to the expressed needs of prison EJ activists. Then, I present two essays that prescribe recommendations for methodological innovations in the design and application of EO technologies and geospatial data for EJ advocacy.&#13;
&#13;
Together, these three chapters demonstrate the immediate relevance of EO and geospatial technologies for prison EJ advocacy, and broader implications for the EO community interested in supporting the aims of the EJ movement more holistically.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157806</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay of Transition Metals and Noncovalent Interactions in C–H Activation Catalysis</title>
<link>https://hdl.handle.net/1721.1/157804</link>
<description>Interplay of Transition Metals and Noncovalent Interactions in C–H Activation Catalysis
Vennelakanti, Vyshnavi
Selective C–H activation is crucial for the synthesis of bioactive molecules and natural products, and plays an important role in the pharmaceutical industry, medicinal chemistry, and the materials industry. While synthetic routes to activate unreactive C–H bonds require harsh conditions and usually show poor selectivity, biological systems, such as non-heme iron enzymes, carry out selective C–H activation efficiently under ambient conditions. These enzymes catalyze a variety of reactions including C–H halogenation, hydroxylation, epoxidation, and ring closures, several of which are mediated with the help of noncovalent interactions such as hydrogen bonds (HBs). Most reactions share a common catalytic pathway involving the formation of a reactive ferryl intermediate that is difficult to characterize experimentally. Computational studies of these enzymes help bridge this experimental gap, advancing our understanding of enzyme mechanism and selectivity.&#13;
&#13;
In this thesis, we study the interplay of noncovalent interactions and transition metals in C–H activation catalysis using quantum mechanical simulations. We employ density functional theory (DFT) and wavefunction theory to perform an extensive computational study of protein HB interactions and transition metal complex (TMC) active sites in non-heme iron halogenases and hydroxylases. Due to the fleeting nature of the ferryl intermediate, experimentalists tend to use vanadyl mimics in order to better understand the ferryl intermediate. However, these metals exhibit distinct electronic structures, motivating us to investigate whether vanadyl mimics are indeed faithful to the native ferryl intermediates. Studying the mechanism of metalloenzymes using first-principles methods is challenging due to the large system sizes. Thus, we also seek to understand C–H activation carried out by 3d TMCs, focusing on the specific case of partial oxidation of methane to methanol. While the oxidation and spin states of the metals in the enzyme active site are well defined through spectroscopic methods, that is not the case with TMC catalysts. Thus, modeling TMC catalysts is accompanied by the twin challenges of identifying the ground spin state and determining the appropriate method to identify the ground state, since properties such as reaction energies and scaling relations are sensitive to the computational method used. Additionally, the ability of TMCs to exist in multiple spin states is often leveraged for practical applications, one such example being spin crossover (SCO) complexes, which exhibit a change in spin state as a function of an external stimulus, such as temperature, and are widely studied due to their increasing use in molecular switches. We curate an experimental data set of 95 Fe(II) SCO complexes and predict SCO behavior using DFT with the aim of identifying the best-performing functional.
This in turn sets the stage to design SCO complexes with tailored properties, such as those that exhibit SCO behavior at room temperature. We expect that the insights from this work can directly guide efforts in biomimetic chemistry as well as both biological and synthetic C–H activation catalysis.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157804</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wearable Gut and Brain Interfaces for Valence Detection and Modulation</title>
<link>https://hdl.handle.net/1721.1/157740</link>
<description>Wearable Gut and Brain Interfaces for Valence Detection and Modulation
Vujic, Angela
Emotion detection interfaces have shown promise in mediating our emotional health through improved diagnosis, self-tracking, social support systems, mindfulness, and biofeedback training; however, the most popular methods falter when distinguishing between positive and negative valence, or lead to privacy or social issues. Brain and gut interfaces can serve as an alternative, but often require complex setups with many electrodes, large datasets, and significant training to achieve benchmark emotion detection performance. I present novel, wearable gut and brain interfaces for valence detection and modulation that are feasible with as few as two electrodes, minimal training, and simple statistical analysis. I coin and define the area of gut-brain computer interfacing (GBCI), while further developing the field of affective brain-computer interfacing (aBCI). I take a novel approach by using the stomach signal and motivational direction models as an alternative to traditional affective modalities and models. I present Joie, a joy-based electroencephalography (EEG) brain-computer interface (BCI); JoyNet, a neural network for joy detection with EEG; and KALM, an EEG, electrodermal activity (EDA), and respiration rate multimodal fusion model. I also present Serosa, a novel electrogastrography (EGG) GBCI that non-invasively records indices of gastric neurons that can be correlated with emotional states, providing a new affect detection modality. This thesis presents findings and innovations in research and application: first, offline affect detection models that contextualize neural with embodied modalities and evaluate how each signal influences affect detection performance. Second, novel real-time interfaces are implemented and evaluated with placebo-controlled laboratory studies. Third, I present a novel neuroethics discussion that uses socioecological models to anticipate harms, and I reflect on the works in this thesis.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157740</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Computation in Decentralized Systems</title>
<link>https://hdl.handle.net/1721.1/157739</link>
<description>Secure Computation in Decentralized Systems
Zyskind, Guy
Decentralized systems like Bitcoin and Ethereum are real-world examples of secure distributed systems deployed at scale. Over the past decade, these systems and others have proven to provide a trust-minimized solution for computing. They ensure the correct execution of code (correctness), maintain the integrity of stored data, and remain consistently available (availability). Additionally, they allow any user to interact without the risk of censorship.&#13;
&#13;
However, while decentralized systems guarantee security properties like integrity, correctness, and availability, they do not provide privacy. In this regard, they are strictly worse than assuming full trust in a centralized server, since any node in the network must see all data. Furthermore, in many of these open systems (also known as 'permissionless' networks), there are no restrictions on who can operate a node. This means that decentralized systems, and public blockchains in particular, cannot operate on private data, greatly limiting the kinds of use-cases they can support.&#13;
&#13;
This dissertation explores solutions to mitigate the privacy concerns associated with modern decentralized systems, focusing particularly on blockchains. The research employs Secure Multiparty Computation (MPC) techniques to address these issues, demonstrating how MPC, which already shares a similar distributed trust threat model, can enhance privacy in decentralized systems. More specifically, this thesis focuses on the following key areas in decentralized systems:&#13;
&#13;
Access Control Mechanisms and Confidential Smart Contracts: The thesis begins by exploring access control mechanisms on blockchains, and from that builds up to the concept of confidential smart contracts -- arbitrary programs that execute both correctly and privately.&#13;
&#13;
Identity Management and Authentication: Building on access control and confidential smart contracts, we examine identity management and authentication within decentralized networks. We develop a highly efficient Threshold ECDSA protocol that runs in the server-aided MPC model.&#13;
&#13;
Perhaps more importantly, we revisit the server-aided MPC model itself, which sits between the dishonest-majority and honest-majority MPC paradigms, and show that a confidential smart contract is a real-world realization of the server in this model. We thus theorize that dishonest-majority MPC protocols in general can be practically improved under this model, and argue that because there is a real-world counterpart, this model is realistic.&#13;
&#13;
An Improved Distributed Point Function (DPF) and ORAM: A major theoretical contribution of this work is a novel three-party Distributed Point Function (DPF) construction. This leads to state-of-the-art Oblivious RAM (ORAM) and Distributed ORAM (DORAM) protocols, which are important building blocks in MPC.&#13;
&#13;
Privacy-Preserving Digital Currencies: Using this DPF construction, we revisit the problem of privacy-preserving digital currencies, proposing a solution in the account model. This approach challenges the current consensus that privacy in blockchains requires a UTXO model.&#13;
&#13;
Secure Inference with private retrieval: Lastly, the thesis explores how Large Language Models (LLMs) can perform secure inference while retrieving data from private, distributed databases. This method represents a step towards building secure decentralized AI systems that respect user privacy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157739</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cyborg Psychology: The Art &amp; Science of Designing Human-AI Systems that Support Human Flourishing</title>
<link>https://hdl.handle.net/1721.1/157738</link>
<description>Cyborg Psychology: The Art &amp; Science of Designing Human-AI Systems that Support Human Flourishing
Pataranutaporn, Pat
As Artificial Intelligence (AI) becomes increasingly integrated into our daily lives, understanding the psychological implications of human-AI interaction is crucial for developing systems that truly support human capabilities. This dissertation introduces “Cyborg Psychology,” an interdisciplinary, human-centered approach to understanding how AI systems influence human psychological processes. Cyborg Psychology also emphasizes applying these insights to design and develop AI systems that support human flourishing. Cyborg Psychology recognizes the complex, non-linear interactions between humans and AI, acknowledging that both can influence and shape each other in dynamic and often unpredictable ways. Informed by human-computer interaction, psychology, and behavioral sciences, this dissertation focuses on understanding AI’s impact on crucial cognitive and behavioral processes, including motivation, critical thinking, self-reflection, confidence, beliefs, biases, and more. In addition, the work presents several AI systems that apply psychological insights to support human cognition and behavior. For example, the “Wearable Reasoner” seeks to enhance human rationality, “Personalized Virtual Characters” aims to support learning motivation, and “Future You” is designed to encourage long-term oriented thinking and behavior. Employing a diverse array of research methodologies, this work proposes a framework for investigating the implications of interaction design choices. The ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157738</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechatronic Design and Evaluation of a Two-Degree-of-Freedom Powered Ankle-Foot Prosthesis with Myoneural Interfacing Capabilities</title>
<link>https://hdl.handle.net/1721.1/157737</link>
<description>Mechatronic Design and Evaluation of a Two-Degree-of-Freedom Powered Ankle-Foot Prosthesis with Myoneural Interfacing Capabilities
Hsieh, Tsung-Han
Recent advancements in neural interfaces and sensing technologies have opened new possibilities for enhanced prosthesis control. The agonist-antagonist myoneural interface (AMI) connects residual muscle pairs to emulate natural dynamics, while electronic osseointegrated prostheses for the rehabilitation of amputees (eOPRA) allow direct measurement of neural signals through implants. Additionally, magnetomicrometry enables precise, real-time measurement of muscle length. These innovations motivate the development of more sophisticated prosthetic designs, including two degrees of freedom (2DoF) ankle systems. &#13;
&#13;
This Ph.D. thesis advanced bionic limb technology through three primary aims. First, a comprehensive characterization study of human-scale actuators was conducted, including brushless motors of different sizes. Using a custom-built dynamometer, the performance of these actuators was evaluated across their full operating range. Building upon this foundation, an innovative bionic ankle-foot prosthesis with enhanced capabilities was designed and fabricated. This advanced prosthetic system achieved biological fidelity in terms of range of motion, torque output, and angular velocity, thus enabling more natural and adaptable gait patterns. To validate the efficacy of the system, a subject with AMI constructs was fitted with the prosthesis and underwent a series of locomotion tasks, including level-ground ambulation and obstacle traversal. &#13;
&#13;
This work pushed the boundaries of bionic limb function and advanced the restoration of natural locomotion after lower limb amputation, providing valuable insights into the potential of combining advanced prosthetic design with neural interfacing techniques.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157737</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the effects of sunlight on the fate of oil spilled at sea</title>
<link>https://hdl.handle.net/1721.1/157736</link>
<description>Quantifying the effects of sunlight on the fate of oil spilled at sea
Freeman, Danielle Haas
Oil spilled at sea is transformed by sunlight-driven photochemical reactions. The transformed oil has different properties and behavior in the environment compared to the fresh oil, resulting in different fates and effects. My work in this thesis was to put numbers on these changes, with the goal of better predicting where oil goes and how it behaves in diverse spill scenarios. First, I focused on how sunlight generates water-soluble compounds from oil, which can lead to the dissolution of oil-derived compounds in seawater (photo-dissolution; Chapter 2). To find out whether photo-dissolution could be an important fate process during an oil spill, I used a combination of experiments and photochemical rate modeling to calculate photo-dissolution rates for the 2010 Deepwater Horizon spill (DwH) in the Gulf of Mexico (GoM). I found that photo-dissolution likely converted ~8% of the floating surface oil to dissolved organic carbon during DwH, a fraction similar in magnitude to other well-recognized fate processes. Moving beyond DwH, I evaluated the sensitivity of oil photo-dissolution and photochemically-altered oil physical properties to temperature. I found that if a spill like DwH had occurred in 5°C water rather than the exceptionally warm 30°C water of the GoM, 7x less oil could have dissolved via photo-dissolution and the viscosity of the remaining insoluble oil could have been 16x higher, resulting in lower entrainment of oil into the water column as small droplets (Chapter 3). The net result is that more oil would stay at the sea surface in a cold-water spill. Finally, I determined photo-dissolution rates for diverse oil products beyond the light crude that spilled during DwH (Chapter 4). I found that oil photo-reactivity could be predicted from oil chemical composition. I also found that photo-dissolution likely affects oil mass balance in spills of light oils forming thin slicks but not in spills of light or heavy oils forming thick slicks. 
Overall, this work advances our understanding of how oil changes in the environment upon sunlight exposure. This information can be applied to better predict, evaluate, and mitigate the effects of oil spilled at sea on marine ecosystems, including humans.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157736</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in Management Science: Applications to Online Retail, Healthcare, and Non-Profit Fundraising</title>
<link>https://hdl.handle.net/1721.1/157734</link>
<description>Advancements in Management Science: Applications to Online Retail, Healthcare, and Non-Profit Fundraising
Zhai, Chen Wen (Sabrina)
Management science is an evolving field that requires novel models and algorithms, combining methods from statistics, optimization, and machine learning. This thesis presents advancements in management science across three domains: revenue management, healthcare, and non-profit funding platforms. The chapters in this thesis develop rigorous algorithms and techniques that are relevant in practice, and present data-driven insights into each of the application areas.&#13;
&#13;
Chapter 2 studies a personalized dynamic pricing problem commonly faced by online retailers. Customers arrive sequentially to the selling platform, and for each arrival the seller must make an immediate pricing decision for that customer. The seller aims to learn the demand as a function of price and customer covariates through price experimentation, while simultaneously earning as much total revenue as possible. Previous work on this topic has adopted a classical online learning setup, where the retailer begins the selling horizon with no information about the problem and gains all knowledge about the demand function from the online selling phase. However, this assumption is often not true in practice. Many retailers already possess some information about their product's demand from market research or previous sales data, and not utilizing this information is clearly suboptimal. The chapter develops a novel framework that allows the seller to incorporate historical data on pricing decisions and realized demand, and moreover enables one to study the effect that certain characteristics of this historical dataset have on online selling performance. Using this framework, a dynamic pricing algorithm is proposed which effectively uses both historical and real-time data, and achieves provably optimal performance. Furthermore, a new distance measure is developed to quantify how close the historical pricing decisions are to being optimal. Using this distance measure, the chapter shows a surprising inverse relationship between this measure and the achievable online performance.&#13;
&#13;
Chapter 3 focuses on applying causal inference techniques to study the treatment efficacy of different antibiotics on patients with urinary tract infection. Up to 50% of women will experience a urinary tract infection (UTI) in their lifetime, making it the third most common indication for antibiotic treatment in the United States. Though national treatment guidelines encourage using one of three antibiotics as the first-line treatment, other second-line and alternative antibiotics are still commonly prescribed in practice. Studies on the efficacy of first-line versus second-line and alternative antibiotics for UTI are limited and dated. The chapter presents a retrospective cohort study using the claims database from Independence Blue Cross to determine the relative efficacy and adverse event rates between different categories of antibiotics. By combining causal inference techniques with automated feature extraction using the Observational Medical Outcomes Partnership (OMOP) common data model, evidence is found that supports the use of guideline-recommended first-line treatments for uncomplicated UTI. Specifically, the rate of treatment efficacy is higher for first-line antibiotics relative to alternatives. Surprisingly, the analysis also finds evidence supporting increased efficacy of first-line agents relative to second-line antibiotics, which are of broader spectrum, although the effect difference is smaller than that between first-line antibiotics and alternatives. This large-scale cohort study, which includes a comprehensive collection of covariates, provides much-needed evidence to support the continued recommendation of first-line drugs for the treatment of UTI. The chapter also suggests the feasibility of performing complex causal inference analyses using automated feature engineering packages for OMOP-formatted datasets.&#13;
&#13;
Chapter 4 studies an online matching problem where sequentially arriving donors must be matched to projects needing funding on peer-to-peer philanthropic crowdfunding platforms such as DonorsChoose.org. Empirical studies have shown that (i) donors have heterogeneous preferences over the projects, and (ii) many return to make more than one donation. Facing such donors, the platform’s aim is to match each donor to one of their preferred projects so as to maximize the total donation without over-funding any projects and without knowing the arrival pattern. Previous work in the literature has not studied the effect of returning donors on algorithm performance. The chapter shows an upper bound on the best achievable worst-case performance of any online algorithm, which reveals the relationship between donor return rate and algorithm performance. Furthermore, numerical analysis shows that a simple known algorithm achieves a performance that improves with the number of returning donors, without differentiating between the original and returning donors. The algorithm is intuitive and straightforward to implement, and the results shed light on the practical value that returning traffic can bring for fundraising platforms.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157734</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory studies of atmospheric photochemistry in indoor and outdoor environments</title>
<link>https://hdl.handle.net/1721.1/157732</link>
<description>Laboratory studies of atmospheric photochemistry in indoor and outdoor environments
Goss, Matthew B.
Secondary organic aerosol (SOA), fine particulate matter formed through indirect photochemical reactions, influences the climate and contributes to air pollution harmful to human health. While these two effects act at different scales, they are governed by similar chemical processes. This work investigates the atmospheric photochemistry of indoor and outdoor environments, giving particular attention to the reactions that lead to SOA formation, notably those involving oxidant and peroxy radical (RO2) chemistry.&#13;
First, this thesis examines the oxidation of dimethyl sulfide (DMS), which represents a large natural source of sulfur to the atmosphere and affects the climate. Using varied chemical conditions across numerous environmental chamber experiments, we characterize aerosol formation from the oxidation of DMS, as well as two related compounds, dimethyl sulfoxide and dimethyl disulfide. We also measure key rate constants crucial to understanding the formation and fate of hydroperoxymethyl thioformate, an important recently-discovered DMS product.&#13;
Second, this work investigates the indoor air quality implications of 222 nm germicidal ultraviolet lamps (GUV222). While these lamps are effective at reducing the spread of airborne pathogens, they lead to the formation of ozone (O3), a harmful air pollutant. Through environmental chamber experiments, we quantify the GUV222-driven production of O3, OH, oxidized products, and SOA, and further demonstrate that GUV222 causes new particle formation. Based on these results, we recommend that GUV222 lights be operated at their lowest effective level.&#13;
Finally, we pivot to examine assumptions embedded within the relationship between chamber experiments and SOA parameterizations in global chemical transport models. We represent historical laboratory experiments in a box model, enabling explicit estimates of the unmeasured RO2 and oxidant chemistry that influences aerosol formation. This work shows that reaction conditions are dynamic, changing within and between experiments, and demonstrates that RO2 isomerization is implicitly built into SOA parameterizations, even without its explicit representation.&#13;
Overall, this thesis connects multiple areas of indoor and outdoor atmospheric photochemistry, improving our understanding of marine organosulfur chemistry, the impacts of GUV222 lamps, and the relationship between laboratory chamber measurements and the modeling of aerosol on a global scale.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157732</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Purification Strategies and Monomer Platforms for Ruthenium-Initiated Living Ring-Opening Metathesis Polymerization</title>
<link>https://hdl.handle.net/1721.1/157730</link>
<description>New Purification Strategies and Monomer Platforms for Ruthenium-Initiated Living Ring-Opening Metathesis Polymerization
Kilgallon, Landon J.
Chapter 1: Introduction to Covalent Capture Purification and Ring-Opening Metathesis Polymerization&#13;
&#13;
In chemical spaces where difficult purifications are commonplace, innovative purification methodologies have been developed to circumvent the limitations of classical physicochemical-property-driven purifications (chromatography, crystallization, distillation, etc.). Covalent capture purification—a type of catch-and-release purification—purifies molecules by selectively capturing them (via a covalent bond) onto a solid support, washing away impurities, and cleaving the product from the support for recovery. In the first half of this chapter, we review literature examples where covalent capture has been implemented for the purification of chemically synthesized molecules, including synthetic peptides, oligonucleotides, oligosaccharides, and small molecules. Ruthenium-initiated ring-opening metathesis polymerization (ROMP) remains an extraordinary tool for polymer synthesis due to its functional group tolerance, the ready availability of monomers and initiators, and the ease with which well-defined polymers can be rapidly synthesized. However, complete removal of ruthenium residues from the product is a difficult task, compounded by an incomplete understanding of initiator decomposition in ROMP. The existing methods for purification of ROMP polymers, which are typically solubility-based, are reviewed, and the promise of covalent capture purification—a reactivity-based purification method—for living ROMP is discussed.&#13;
&#13;
Chapter 2: Covalent Capture Purification for Living Ring-Opening Metathesis Polymerization&#13;
&#13;
Covalent capture purification, a type of catch-and-release purification, facilitates complex molecule purification by partitioning reaction mixtures based on chemical reactivity rather than physicochemical properties. While this purification methodology has proven highly valuable for the purification of synthetic peptides, oligonucleotides, and oligosaccharides, it has not yet been implemented for the purification of synthetic polymers. Ruthenium-initiated living ROMP remains an extraordinary tool for polymer synthesis, but removal of trace ruthenium from the polymeric product remains a difficult task due to the wide scope of polymer compositions, the lack of a complete understanding of initiator decomposition, and the unknown identities of trace ruthenium products generated during ROMP. In this work, we translate covalent capture purification to living ROMP for the first time and demonstrate its use as a general purification method for ROMP polymers. The optimized covalent capture system was used to purify a variety of linear polynorbornenes (up to ~7 kDa) in yields ≥49% and high purities (≥99.6% ruthenium removed).&#13;
&#13;
Chapter 3: Tricyclononenes and Tricyclononadienes as Efficient Monomers for ROMP: Understanding Structure–Propagation Rate Relationships and Enabling Facile Post-Polymerization Modification&#13;
&#13;
Tricyclononenes (TCN) and tricyclononadienes (TCND) represent under-explored classes of monomers for ROMP that have the potential to both advance fundamental knowledge (structure–polymerization kinetics relationships) and serve as practical tools for the polymer chemist (post-polymerization functionalization). In this work, a library of TCN and TCND imides, monoesters, and diesters, along with their exo-norbornene counterparts, was synthesized to compare their behavior in ruthenium-initiated ROMP. To understand the relationship between monomer structure and ROMP propagation rate, density functional theory methods were used to calculate a variety of electronic and steric parameters for the monomers. While electronic parameters (e.g., HOMO energy levels) correlated positively with the measured kp values, steric parameters generally gave improved correlations, indicating that monomer size and shape are better predictors of kp than electronic parameters for this data set. Furthermore, the TCND diester—which contains an electron-deficient cyclobutene that is resistant to ROMP—and its polymer p(TCND) are shown to be highly reactive toward base-catalyzed conjugate addition with thiols, providing a protecting/activating-group-free strategy for post-polymerization modification.&#13;
&#13;
Chapter 4: Safe and Scalable Syntheses of N,N-Dimethyltrifluoromethanesulfonamide (DMTMSA) and Other Trifluoromethanesulfonamide Solvents for High Energy Density Battery Applications&#13;
&#13;
A simple, scalable synthetic methodology for the synthesis of N,N-dimethyltrifluoromethanesulfonamide (DMTMSA) and other trifluoromethanesulfonamide solvents is described. No specialized glassware is required, water is the solvent, and an ice bath is used for cooling. Up to 155 g of DMTMSA is synthesized in a single batch in 92% yield. The optimized process is highly mass efficient (PMI = 9.1), and excess dimethylamine may be recovered (93% recovery, 51% decrease in waste) and recycled via a simple short-path distillation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157730</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of quorum-sensing circuits for metabolic flux control in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/157729</link>
<description>Development of quorum-sensing circuits for metabolic flux control in Escherichia coli
Dinh, Christina V.
Metabolic engineering seeks to reprogram microbial cells to efficiently produce value-added chemicals. Traditionally, this is achieved by overexpressing the production pathway and/or knocking out competing endogenous pathways. However, limitations in some pathways are more effectively addressed through dynamic metabolic flux control, which favors different objectives over the course of the fermentation. This thesis aims to develop autonomous and pathway-independent regulation tools that can be applied to controlling metabolic fluxes in these contexts to improve production. To this end, quorum-sensing (QS)-based circuits were constructed, characterized, and applied to regulating metabolic fluxes in a cell-density-dependent manner. The first tool is a bifunctional QS circuit in which each control module regulates transcription via circuits derived from different QS systems. Characterization showed that the switching dynamics of both circuits can be tuned by varying the expression level of circuit components. To address major limitations in the naringenin and salicylic acid pathways, one module was used to delay transcription of key heterologous genes to overcome enzyme inhibition and growth burden, while the second module controlled expression of CRISPRi components to silence competing endogenous pathways. Application of these regulation schemes resulted in significant production improvements in both pathways. Especially when aiming to dynamically down-regulate enzyme activity, post-translational control can offer faster response dynamics. To develop a post-translational control tool, expression of a protease linker was regulated under a QS circuit, resulting in selective degradation of tagged enzymes. This circuit was applied to regulating phosphofructokinase (Pfk) levels with the ultimate goal of dynamic composition control in co-culture fermentations. 
Application of this control circuit in a naringenin-producing co-culture system resulted in improved composition profiles, which benefited production. Finally, a second post-translational control system that co-localizes proteins in response to cell-density changes was constructed and characterized. Such a system can be applied to actuate changes in reaction rates with minimal dependence on host cell machinery. Overall, this work developed QS-based circuits and showed they can be powerful tools for addressing key limitations in microbial syntheses.
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157729</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifidelity Methods for Design of Transition Metal Complexes</title>
<link>https://hdl.handle.net/1721.1/157725</link>
<description>Multifidelity Methods for Design of Transition Metal Complexes
Janet, Jon Paul
The rational design of materials with tightly controlled properties is crucial to addressing future challenges in energy, electronics, and catalysis. While improvements in computing power have made simulation with density functional theory (DFT) an essential tool in screening new materials, it remains too costly to address truly high-dimensional design spaces. This problem is especially acute for open-shell transition metal (TM) complexes, which are of central importance in homogeneous catalysis and have applications in solar energy and electronics. The space of TM complexes is enormous and poorly characterized, while DFT calculations for these systems are expensive and sensitive to method choice, making it impractical to simulate large numbers of candidates indiscriminately. This makes the search for TM complexes with desired properties a formidable challenge. This thesis addresses this challenge by formulating strategies for materials design that exploit insights from data-driven surrogate models together with first-principles simulations. A framework for data-driven inference of the quantum properties of TM complexes is developed, using artificial neural networks (ANNs) and graph-based molecular representations that facilitate rapid screening while retaining physical meaning such that chemical insights can be extracted. Multiple sources of uncertainty that would limit the application of these methods to TM complexes are addressed. Surrogate models are trained to estimate system-specific DFT uncertainty by including data from DFT calculations with different fractions of exact exchange, and a novel uncertainty metric for data-driven discovery is proposed that quantifies the ability of ANNs to generalize to unseen data based on similarity in the learned latent space. This metric is shown to outperform existing methods. 
The application of these methods to virtual design problems is demonstrated with two case studies: 1) identifying spin crossover complexes from a design space of thousands using an evolutionary strategy, and 2) probabilistic, multiobjective optimization of redox couples over a space of 3 million complexes. The utility of this surrogate-assisted approach is evident, with orders-of-magnitude accelerations obtained over screening purely with DFT. Such strategies open the door for in silico design of some of the most challenging molecular systems at a far greater scale than ever before.
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157725</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interoceptive Interventions: Interfacing with Inner States</title>
<link>https://hdl.handle.net/1721.1/157724</link>
<description>Interoceptive Interventions: Interfacing with Inner States
Jain, Abhinandan
This thesis explores the emerging frontier of Human-Computer Interaction (HCI) that moves beyond traditional interfaces to directly modulate internal bodily processes, emotions, and cognitive states. As HCI progresses towards further integration between human and machine, this thesis investigates novel technologies that interface with interoceptive systems to influence subjective experiences and mental states. In this thesis, I introduce "Interoceptive Interventions" — tools designed to modulate physiological states. These tools interface with and alter internal physiological conditions, thereby influencing emotional and behavioral states.&#13;
&#13;
I present three proof-of-concept wearable prototypes, “Frisson”, “ReCode”, and “Somnia”, grounded in neuroscience theories and evidence from embodied cognition. Frisson is a system designed to elicit aesthetic chills and their downstream cognitive effects; I present experimental evidence of chills’ impact on modulating emotional state, reducing negative beliefs, and ameliorating anhedonia in depression. Next, I present ReCode, a system that modulates baroreceptor activity, causally influences sympathetic activity (the fight-or-flight response), and has a consequent effect on perceived emotion and anxiety ratings. Finally, I present Somnia, a system that stimulates the vestibular system to influence sleep onset. These prototypes target specific pathways to enable on-demand emotion elicitation, emotion regulation, and sleep regulation for users, while also providing potential non-pharmacological interventions for conditions like insomnia, depression, and anxiety.&#13;
&#13;
This thesis aims to make a twofold contribution: First, it introduces a conceptual framework that highlights how interfacing with unconscious bodily processes opens up new possibilities for human-computer interface design. Specifically, by gently actuating core physiological dynamics linked to consciousness and psychology, such tools have the potential to deliver a promising new paradigm for digital wellness interventions. Second, the interoceptive modulation tools developed in this work provide a platform for researchers to experimentally engineer the physiological processes underlying emotions and sleep. This could allow examining causative pathways between physiology and psychology beyond correlational observations and developing interventions for affective and sleep disorders. Researchers and designers can build on this to advance a generation of augmented technologies that empower users to self-regulate the body and the mind.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157724</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Public Interest Computing: a Pluralistic Design Language Foundation for Societal-Machine Alignment</title>
<link>https://hdl.handle.net/1721.1/157721</link>
<description>Public Interest Computing: a Pluralistic Design Language Foundation for Societal-Machine Alignment
Gaikwad, Snehalkumar 'Neil' S.
The proliferation of algorithmic systems, including artificial intelligence (AI), in decision-making contexts necessitates a critical examination of their alignment with societal and environmental values. The reciprocal relationship between these norms and emerging AI technologies calls for a structural conceptualization of algorithmic systems that extends the scale of human-centered considerations. This dissertation introduces “Public Interest Computing, a Pluralistic Design Language,” which enables a novel design space for value-sensitive algorithmic ecosystems, fostering what we term “Societal-Machine Alignment.” The research is structured in three interconnected parts. First, we establish a comprehensive theory of Public Interest Computing, grounded in the planning and capability approach to human development. Second, we present a series of Public Interest Computing systems that instantiate and refine the proposed theoretical framework. These systems, co-designed with communities, demonstrate societal-machine alignment through five key design dimensions. Farm Pulse System exemplifies substantive fairness for at-risk farmers by enabling restorative justice through recourse in climate change adaptation decisions. Boomerang exhibits incentive alignment, promoting equitable designs of reputation systems in AI data markets. The Prototype Tasks System illustrates computationally mediated cognitive alignment, creating a level playing field for workers. The Beyond Boundaries framework enables environmental alignment, providing a platform for public discourse on climate change. Our analysis using Gobo focuses on value alignment, investigating ways to increase human agency in interactions with invisible algorithms on online platforms. Each system serves as an empirical testbed, providing critical design insights that shaped the theory and engineering of Public Interest Computing.&#13;
&#13;
The third part demonstrates the interplay between the developed Public Interest Computing systems and policy by applying the Pluralistic Design Language to real-world scenarios. We illustrate the bidirectional relationship between technology and policy, showing how Public Interest Computing informs policy decisions (“AI for Policy”) and, conversely, how policy shapes the responsible development of AI systems (“Policy for AI”). This symbiotic relationship opens new avenues for evidence-based policymaking, with Public Interest Computing serving as a foundation. By synthesizing the insights gained from this demonstration, we offer a principled approach for future research and practice, paving the way for a more informed and responsible design of algorithmic systems that aligns with societal values and priorities.&#13;
&#13;
Public Interest Computing and its Pluralistic Design Language serve as a guiding lens, leading us towards a future where societal values and algorithmic ecosystems are inherently aligned. Public Interest Computing is not an end in itself but a means for understanding, reflection, and adaptation, ensuring that as technology advances, so does our commitment to aligning it with the greater good.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157721</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluid-fluid displacement in porous-media microfluidics</title>
<link>https://hdl.handle.net/1721.1/157718</link>
<description>Fluid-fluid displacement in porous-media microfluidics
Qiu, Yu
Immiscible fluid-fluid displacement under geometric confinement is a key physical process in large-scale subsurface energy technologies such as geologic carbon sequestration and in small-scale microfluidic techniques. Research over the past few decades has provided improved understanding of fluid-fluid displacement patterns on the macroscopic scale, which range from compact displacement to fractal patterns. Many questions remain, however, regarding how the macroscopic displacement patterns are controlled by the microscale interactions between the fluid interface and the solid surface in systems under geometric confinement, such as microfluidic devices and porous media. This fluid-solid interaction—exacerbated by the roughness inherent to all natural and engineered surfaces—introduces a large energy dissipation near the solid boundary that challenges our ability to interpret laboratory experiments and develop mathematical models. In Part I of this Thesis, we study the motion of a fluid-fluid interface at the scale of a single capillary through mathematical modeling and laboratory experiments. We first develop a phase-field model to simulate two-phase flow with moving contact lines in the partial wetting regime. We construct a self-consistent formulation of fluid-solid surface energy that allows prescribing arbitrary static contact angles. We then propose a formulation to account for nonequilibrium conditions near the contact line and demonstrate the ability of our model to simulate dynamic configurations, from spontaneous imbibition to wetting transition and interface pinch-off. We then experimentally study the shape of a moving interface in a capillary tube prewetted with the invading liquid. For viscously favorable displacements (when the invading fluid is more viscous than the defending fluid), we find a universal behavior of the dynamic contact angle—a macroscopic descriptor of interface shape—which increases monotonically with capillary number. 
In contrast, for viscously unfavorable displacements, we observe a sharp wetting transition in which the dynamic contact angle shoots to 180° over a narrow range of flow rates. Above the transition, a trailing film of viscous defending fluid is left behind the displacement front and the invading fluid propagates along the tube center as a finger. We rationalize the emergence of this sharp, trailing-film type of wetting transition by means of a minimal-ingredients hydrodynamic theory that exhibits bifurcated solutions. In Part II of this Thesis, we investigate the role of surface roughness in two-phase displacements. We do so in a microfluidic device with a precisely controlled structured surface as an analogue for a rough fracture. In the drainage regime, we show that the roughness induces two types of liquid films entrained on the solid surfaces behind the displacement front: the classical Bretherton “thick film”, and a new type of “thin film” that is confined within the roughness. Each type of liquid film is characterized by distinct stability criteria and dewetting dynamics. In the imbibition regime, we show that surface roughness causes the wetting liquid to preferentially advance within the roughness layer. The formation of a leading film stabilizes the displacement front as the flow rate increases, which would otherwise—that is, in a smooth confinement—become fractal. In summary, our work sheds light on the microscale physics and macroscopic pattern formation in rough confinement that may control long-term mixing and reactivity in geological systems and lab-on-a-chip applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157718</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Interventions in Fine-grained Contexts for Habit Formation</title>
<link>https://hdl.handle.net/1721.1/157717</link>
<description>Investigating Interventions in Fine-grained Contexts for Habit Formation
Khan, Mina
Behavior change is important, yet hard to sustain. Habits are automatic responses to specific contextual cues, and can help sustain behavior change. Fine-grained specific contexts are commonly used in habit formation, but interventions in automatically-detected fine-grained contexts have rarely been explored for habit formation. &#13;
&#13;
We investigate habit formation using interventions in fine-grained mobile, physical-world and digital, computer-based contexts, making three key contributions for each: a survey to identify behavior change needs, a prototype system designed to deliver fine-grained context-specific interventions, and a study to investigate habit formation using interventions in fine-grained contexts, compared to interventions in less fine-grained contexts. We use the Self-Report Habit Index (SRHI) and Self-Report Behavioral Automaticity Index (SRBAI) to measure habit formation and habit automaticity, respectively.&#13;
&#13;
For mobile, physical-world behavior change, the survey of needs (N=53 participants) indicated that participants want diverse and personalized behavior change support in diverse and specific contexts. We created a wearable device with on-device deep learning for interventions in personalized and privacy-preserving egocentric visual contexts. In a 4-week pilot study (N=10), interventions in egocentric visual contexts led to a larger percentage increase in average habit formation (SRHI) and automaticity (SRBAI) than interventions in coarse-grained contexts based on time, geolocation, and physical activity. The percentage increase in median habit formation was also greater for the fine-grained egocentric context group, whereas the percentage increase in median habit automaticity was similar between the two groups. For both groups, the habits persisted in the post-study evaluations 1 and 10 weeks later, without interventions.&#13;
&#13;
For computer-usage behavior change, the survey of needs (N=68) indicated that participants want to reduce excessive or unnecessary use, e.g., of social media, and found off-the-screen breaks helpful. We created a Chrome extension to deliver interventions based on specific web activities, and conducted a 6+2-week study (N=31; 6 weeks of interventions and 2 weeks of post-study without interventions). After 6 weeks, interventions in fine-grained website-entry-based contexts led to a larger percentage increase in mean and median habit formation and automaticity than interventions in coarse-grained interval-based or random contexts. After the additional two-week post-study, without interventions, the website-entry group had the largest percentage increase in mean SRHI/SRBAI, whereas the interval-based group had the largest percentage increase in median SRHI/SRBAI. &#13;
&#13;
Qualitative results from both studies indicated that interventions in fine-grained contexts were delivered at more opportune moments and were less disruptive. We discuss the limitations of our research and present a first step towards investigating interventions in fine-grained contexts for habit formation, potentially for sustainable behavior change, without long-term dependence on technology.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157717</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Vegetation Morphology on Turbulence and Bedload Transport</title>
<link>https://hdl.handle.net/1721.1/157716</link>
<description>The Impact of Vegetation Morphology on Turbulence and Bedload Transport
Zhao, Tian
By promoting sediment deposition and retention, aquatic vegetation can contribute to riverbank stabilization, biodiversity, and carbon sequestration. The morphology and distribution of aquatic plants influence the velocity field, turbulence intensity, and sediment transport in wetlands, which in turn impacts erosion and deposition processes. By combining physical and numerical experiments, this thesis quantifies how vegetation geometry impacts turbulence and sediment transport near the bed.&#13;
&#13;
In aquatic canopies, turbulence generated at the stem scale (and, for submerged canopies, also in the canopy shear layer) can contribute to near-bed turbulence. Results of flume experiments using a constant channel-average velocity revealed that bedload transport was predominantly correlated with near-bed turbulence, but was also weakly correlated with near-bed velocity. First, in emergent canopies, if vegetation was not clustered, turbulent kinetic energy (TKE) and bedload transport did not depend on the arrangement and stem diameter(s) and could be predicted from plant biomass and velocity. If vegetation was clustered in patches, TKE and bedload transport decreased with increased clustering and could be predicted from plant biomass, patch geometry, and velocity. Second, for constant channel velocity, submerged canopies could enhance or reduce bedload transport, depending on their degree of submergence. With increasing submergence, H/h (defined as the ratio of flow depth H to canopy height h), the near-bed velocity and TKE decreased, and the source of near-bed turbulence shifted from stem wakes to the shear layer at the canopy top. A model to predict near-bed TKE in submerged canopies was developed and used to explore bedload transport under more realistic conditions with a constant energy slope and flexible vegetation. For a constant energy slope, the denser the canopy and/or the larger the fraction of flow depth occupied by the canopy (decreasing H/h), the more sediment transport was reduced compared to unvegetated beds. This thesis provides essential parameterizations of vegetation for hydrodynamic and morphodynamic models, which can be used to predict the vegetation conditions that promote or diminish erosion, offering a useful guide for river and coastal restoration.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157716</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the non-microbial sources and sinks of dissolved metabolites in seawater</title>
<link>https://hdl.handle.net/1721.1/157715</link>
<description>On the non-microbial sources and sinks of dissolved metabolites in seawater
Germolus, Noah Paul
Dissolved marine metabolites are small (&lt;1000 Da) organic chemicals that remain in seawater when passed through a filter (typically &lt;0.2 µm pore size). Their name implies their biological function: to be produced and consumed by cellular metabolism. These chemicals constitute the flows of the “microbial loop”—the principle that most of the photosynthesized matter in the ocean is exchanged, respired, and restructured by single-celled organisms. Metabolites have critical biological utility, so they are considered extremely labile; estimates of the time each spends outside cells range from hours to days. Their concentrations are drawn down by their consumers to nanomolar and picomolar levels, making measurement difficult. However, improved techniques to measure metabolites simultaneously and at extremely low concentrations open up the question of what happens to metabolites outside the cell membrane. Conventionally, representations of labile DOM exchange networks avoid that question—metabolites’ short lifetimes imply their flows lead from one organism to the next. This thesis begins to interrogate that assumption, asking if there are other processes that could change the seawater exometabolome on time scales that are relevant to microbial life. In Chapter 1 I discuss the ways ambient metabolite pools could be affected by animals, chemistry, and physics. In Chapter 2 I investigate the photolysis of metabolites and examine metabolomic techniques’ suitability for such experiments. In simulated sunlight, 11 of 57 metabolites decayed to some extent in artificial or natural seawater, and tryptophan and kynurenine may decay rapidly in the mixed layer of an oligotrophic ocean. For Chapter 3, I captured five species of migratory zooplankton and measured metabolites in their dissolved excreta. Four species survived the experiment and produced 43 metabolites, many at a rate that should be measurable in field samples. 
Chapter 4 harnesses the previous two chapters, plus a model for physical mixing, to probe a field dataset comprising 60 metabolites from Hydrostation S (south of Bermuda). Based on eight profiles over the course of two days, I posit: (1) copepods alone can supply the entire demand of &gt;20 compounds to the mixed layer; (2) mixing is rapid enough to erase input signatures in the mixed layer; and (3) photochemistry is a slow leak of metabolites to forms whose lability is yet unknown. Chapter 5 reflects on how metabolites break the microbial loop—and suture it together with more ecological richness than with elemental fluxes alone.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157715</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creativity and Justice: Leveraging Creative Learning Principles to Co-Design Just Futures With and For Young People</title>
<link>https://hdl.handle.net/1721.1/157714</link>
<description>Creativity and Justice: Leveraging Creative Learning Principles to Co-Design Just Futures With and For Young People
Trapp, Jaleesa
Young people who live in underserved and under-resourced communities and have access to a creative learning environment are poised to create positive change within their communities. Their lived experiences make them experts on the issues their communities are currently facing, and the creative learning environment offers a space where young people can prototype, improve, and implement solutions. Young people can use their imagination and creativity to seek justice and re-imagine their communities.&#13;
&#13;
This dissertation examines the Youth Activism and Advocacy program, which I designed using a transformative justice framework, in collaboration with the Clubhouse Network, a global network of after-school centers in historically under-resourced communities. Young people in ten communities around the world used their creativity, lived experiences, and civic imagination to develop and sustain social justice campaigns in their communities. This dissertation addresses the following research questions: (1) How might we cultivate and support constructionist learning environments that serve young people from communities that have been marginalized? (2) How might we use computational tools to support creative learning while developing and amplifying social justice campaigns? (3) How might we use Human-Centered Design methods to allow for meaningful participation and engagement from youth who have been marginalized?&#13;
&#13;
While there were multiple pathways into and motivations for engaging in community action projects, all of the young people gained technical, organizational, and leadership skills that can be applied in future education and career pursuits. The outcomes of the Youth Activism and Advocacy program are complex and intertwined, prompting a call to action to further examine how civic engagement and creative learning can broaden participation in STEM and computing fields—and support youth in making a positive impact in their communities, moving them towards greater justice.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157714</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Textile Macroelectronics: Architecting Sensate and Computational Fabrics across Scales</title>
<link>https://hdl.handle.net/1721.1/157711</link>
<description>Textile Macroelectronics: Architecting Sensate and Computational Fabrics across Scales
Wicaksono, Irmandy
Textiles are omnipresent and among the oldest forms of art and culture in human civilization. They serve as our protective skin, the interface between our bodies and the environment, and a medium for self-expression and collective experience. As electronics become more compliant, miniaturized, and low-cost, textiles provide an ideal substrate for technology integration, further driving the era of ubiquitous computing. My research fuses recent advances in functional materials, digital fabrication, hardware systems, and immersive technologies to demonstrate Textile Macroelectronics architecture and develop sensate and computational fabrics across scales.&#13;
&#13;
In this dissertation, I propose a ubiquitous computational textile framework—a synergy between functional device selection, textile structures, fabrication tools, and system architecture—that integrates a distributed network of sensing and computational elements as primitives or raw materials in the manufacturing process of electronic textile products. In the first part of the dissertation, I present several methods, artifacts, and implementations of sensate textiles using functional fibers and digital machine knitting. I argue that to promote the disruption and adoption of sensate textiles and achieve seamless integration, we require a better hierarchical understanding of textile construction and fiber-fabric properties, as well as ways to integrate electronics and functionalities with industrial textile fabrication processes. By controlling functional and common yarn inputs, along with knitting structures and patterns, I can architect fabric forms and aesthetics while tuning their electrical and mechanical properties. With this approach, I have developed a set of custom proxemic and tactile textile interfaces based on capacitive and piezoresistive sensing for musical expression, human-computer interaction, activity recognition, and multi-sensory experiences in various forms such as cloth, footwear, mats, carpets, and large-scale architectural facades.&#13;
&#13;
In the second part of the dissertation, I discuss my work exploring flexible, stretchable, and soft printed circuit technologies, incorporating multi-modal sensing with distributed computation to address scalability issues inherent in large and dense sensate textiles. These efforts have led to unique power, interconnection, and networking paradigms that allow us to transition from application-specific sensate textiles to generic computational fabrics that can be tailored and programmed for various applications. Finally, through these collective and complementary efforts, I aim to demonstrate an ecosystem of fabric artifacts that will lead us toward an Electronic Textile Gaia—a vision where sensing and intelligence are seamlessly interconnected and integrated into the fabric of everyday life, from in-body and on-body to room-scale and architectural textiles, for applications ranging from physiological and physical activity monitoring to interactive media and built environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157711</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave propagation sensors for structural control</title>
<link>https://hdl.handle.net/1721.1/157654</link>
<description>Wave propagation sensors for structural control
Pines, Darryll J. (Darryll John)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaves 166-172).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157654</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global solvability of invariant differential operators.</title>
<link>https://hdl.handle.net/1721.1/157651</link>
<description>Global solvability of invariant differential operators.
Zhang, Weida.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1978; Vita.; Bibliography: leaves 96-97.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157651</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Federal Urban Renewal Program: a financial and economic analysis</title>
<link>https://hdl.handle.net/1721.1/157645</link>
<description>The Federal Urban Renewal Program: a financial and economic analysis
Anderson, Martin Carl.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Industrial Management, 1962; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157645</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A crosscorrelation method for measuring the impulse response of reactor systems</title>
<link>https://hdl.handle.net/1721.1/157641</link>
<description>A crosscorrelation method for measuring the impulse response of reactor systems
Balcomb, J. Douglas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaves 136-137).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157641</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An application of Bernoulli polynomials to the theory of cyclotomic fields.</title>
<link>https://hdl.handle.net/1721.1/157635</link>
<description>An application of Bernoulli polynomials to the theory of cyclotomic fields.
Segal, Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157635</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction of aflatoxin B1 with DNA in vivo in the rat and mouse</title>
<link>https://hdl.handle.net/1721.1/157634</link>
<description>Interaction of aflatoxin B1 with DNA in vivo in the rat and mouse
Croy, Robert George.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Vita.; Bibliography: leaves 153-165.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157634</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specificity and structural characterization of the PDZ domain from DegS, an extracytoplasmic E. coli protease</title>
<link>https://hdl.handle.net/1721.1/157629</link>
<description>Specificity and structural characterization of the PDZ domain from DegS, an extracytoplasmic E. coli protease
Walsh, Nathan P. (Nathan Peter), 1973-
DegS is a membrane-bound bacterial protease that is involved in the extracytoplasmic-stress response. The C-terminal domain has limited homology to PDZ domains and was thought to be involved in regulation or substrate recognition. A model of this PDZ domain was generated from NMR solution studies and homology modeling. Peptide selection studies identified the sequence Tyr-Tyr-Phe (YYF) as a C-terminal motif that binds to the PDZ domain. Possible targets were identified, including many of the outer-membrane proteins (OMPs), which contain both a conserved terminal YxF and internal YYF sequences. The binding of the DegS PDZ domain to a YYF peptide and OMP derivatives was confirmed using microcalorimetry. Because stress signaling can be triggered by over-expression of some of the outer-membrane proteins, I propose that DegS may receive a signal from unassembled OMPs and transmit it to the σᴱ transcription factor by increasing proteolysis of RseA.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February 2002; Includes bibliographical references (p. 87-95).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157629</guid>
</item>
<item>
<title>Photocatalysis in a New Light: A Biohybrid Approach for Improved Reactivity with Tunable, Low-Energy Light Excitation</title>
<link>https://hdl.handle.net/1721.1/157600</link>
<description>Photocatalysis in a New Light: A Biohybrid Approach for Improved Reactivity with Tunable, Low-Energy Light Excitation
Cesana, Paul T.
Since the advent of photoredox catalysis, much thought has been devoted to the development of exciting reaction modalities and the catalysts that perform these reactions. Less attention has been paid to the specific aspects of light absorption as the key step in photocatalytic mechanisms. Natural photosynthetic systems drive the high-energy reactions of photosynthesis with efficient and broadband energy capture. They provide a blueprint toward optimizing these processes in synthetic systems. In photosynthesis, both light capture and reactivity have been optimized by separation into distinct sites. The dominant process by which absorbed sunlight is transferred between these sites is resonance energy transfer, which is highly efficient over long distances. This work highlights that light capture and energy transfer are crucial steps for the design of highly efficient photocatalysts in the future.&#13;
Chapter 1 describes the relevant structures in natural photosynthesis as inspiration for synthetic approaches, the different mechanisms of energy transfer, and examples of photocatalytic systems that harness such excitation transfer processes to improve performance. Chapter 2 reports the synthesis of a biohybrid photocatalyst, inspired by the modular architecture of the photosynthetic apparatus, that conjugates a photosynthetic light-harvesting protein to a transition metal photocatalyst. Spectroscopic investigation found that absorbed photoenergy was efficiently funneled from the light harvester to the photocatalyst. The utility of the biohybrid photocatalyst was demonstrated via an increase in yields for two test reactions, including newly enabled reactivity at red wavelengths where the photocatalyst alone does not absorb. Chapter 3 establishes the power of incorporating nature’s design into non-natural photoenzymatic catalysis, generalizing the approach to other systems and methodologies. Photoenzymes require high-intensity light to function because of the poor absorption properties of their photoactive intermediate. A conjugate composed of a covalently linked photoenzyme and light-harvesting antennae separates light capture from catalysis. Spectroscopic characterization of the conjugate showed efficient energy transfer from the light-harvesting components to the photoenzyme. In the presence of energy transfer, a maximum ~4-fold increase in product yields was observed, as well as newly enabled reactivity. Chapter 4 highlights spectroscopic exploration into emerging molecular catalyst species. Finally, Chapter 5 provides an outlook on the future possibilities of the topics presented herein.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157600</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on the synthesis of bisindole Aspidosperma alkaloids</title>
<link>https://hdl.handle.net/1721.1/157599</link>
<description>Studies on the synthesis of bisindole Aspidosperma alkaloids
Pinto, Taylor
I. Introduction and Background on Aspidosperma Alkaloids&#13;
A brief overview of monoterpene indole Aspidosperma alkaloids is discussed. The biosynthesis of the characteristic pentacyclic core from tryptamine and secologanin is summarized. Some representative examples of total syntheses of Aspidosperma alkaloids are discussed.  Synthetic strategies for the synthesis of bisindole members of the family are also examined.&#13;
&#13;
II. Total Synthesis of (–)-Voacinol, (–)-Voacandimine C, and related congener, (−)-methylenebisdeoxoapodine&#13;
We describe the first total synthesis of complex aspidosperma alkaloids (–)-voacinol and (–)-voacandimine C via a late-stage C7-methylenation strategy inspired by a biogenetic hypothesis. We envisioned rapid access to these natural alkaloids from a common, symmetrical precursor assembled by methylenation of a D-ring-oxidized variant of the structurally related natural product (–)-deoxoapodine. Chemoselective N9-oxidation of a pentacyclic deoxoapodine precursor enabled the synthesis of the corresponding hexacyclic C8-aminonitrile. Stereocontrolled methylenation of a C8-enamine derivative of deoxoapodine, accessed by ionization of the C8-aminonitrile, afforded a symmetrical dodecacyclic bisaminonitrile as a versatile precursor to these bisindole alkaloids. Final-stage, biosynthesis-inspired, controlled reductive opening of the oxolane substructures of this dodecacyclic intermediate provided a unified approach to (–)-voacinol and (–)-voacandimine C, while direct reduction of the same intermediate afforded the structurally related (–)-methylenebisdeoxoapodine.&#13;
&#13;
III. Progress Toward the Total Synthesis of Voacandimine A&#13;
We describe our work toward the total synthesis of bisindole Aspidosperma alkaloid, voacandimine A. Key features of the synthetic progress include two routes for monomer synthesis, two methods for complex fragment assembly to form the bisindole structure, and strategies to address the stereochemistry of the ring fusion.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157599</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetry and its Signatures in Quantum Many-Body Dynamics</title>
<link>https://hdl.handle.net/1721.1/157598</link>
<description>Symmetry and its Signatures in Quantum Many-Body Dynamics
Ogunnaike, Olumakinde
Symmetry has long been a defining feature in our understanding of statistical or many-body systems. By making appeals to universal properties associated with global symmetries and topology, one may describe universal properties of “typical” states and dynamics in equilibrium, even when keeping track of the precise dynamics of a particular many-body system is impossible. This challenge of tracking allowable states and dynamical transitions is only exacerbated for non-equilibrium systems, where one cannot rely on the same notions of typicality. Further, when driven out of equilibrium by external interactions, quantum orders constructed from highly sensitive correlations between states are liable to vanish. Despite these conceptual and practical difficulties, the rise of quantum technologies and accompanying theoretical developments has motivated a surge of interest in dynamical quantum phenomena. The recent developments in the field of quantum many-body dynamics provide satisfactory accounts of many interesting phenomena, including failures of the Eigenstate Thermalization Hypothesis, various dynamical and mixed-state phases of matter, and measurement-induced dynamics and phase transitions. Many of these results are explained for specific systems or within different conceptual frameworks; however, they rarely generalize. In this thesis, I attempt to unify many aspects of quantum many-body dynamics under the same conceptual framework through an investigation of the universal signatures of symmetry in quantum dynamical systems. This is accomplished via a mapping between the averaged dynamics and the low-energy spectrum of an effective Hamiltonian in a “doubled Hilbert space,” composed of two copies of the original space.
This provides a general and versatile framework to qualitatively understand both familiar and novel universal properties of dynamical phenomena like charge diffusion, sub(super)-diffusion of multipole moments in systems with short- and long-range interactions, charge and multipole phase transitions, and even measurement-induced phase transitions. By expanding into a doubled Hilbert space, one may capture the subtleties of non-equilibrium physics, and particularly dynamical phases, within the framework of equilibrium physics and phases. In this work, we examine how to understand various symmetry-constrained dynamical phases and phase transitions through a dual description of symmetry-constrained equilibrium phases and symmetry-breaking transitions in an enlarged Hilbert space.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157598</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Intersection of Physics Modeling and Representation Learning</title>
<link>https://hdl.handle.net/1721.1/157597</link>
<description>Exploring the Intersection of Physics Modeling and Representation Learning
Kitouni, Ouail
Representation Learning has evolved into a multi-purpose tool capable of solving arbitrary problems provided enough data. This thesis focuses on two primary directions: (1) harnessing the power of deep learning for applications in fundamental physics and (2) using physics-inspired tools to improve and shed some light on otherwise large-scale, inscrutable black-box algorithms. We explore a collection of applications that improve different aspects of nuclear and particle physics research across its many stages, from online data selection to offline data analysis. We also tease out how deep learning can open up entirely new avenues of research through the lens of mechanistic interpretability to (re)derive fundamental theory as well as new ways to reinterpret physics measurements. Lastly, we study how physics tools can be useful to better understand the dynamics of deep learning as well as provide a solid foundation for algorithms and training paradigms that expand the frontier of machine learning.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157597</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein spatiotemporal dynamics in gene regulation and disease pathology</title>
<link>https://hdl.handle.net/1721.1/157596</link>
<description>Protein spatiotemporal dynamics in gene regulation and disease pathology
Zheng, Ming
A cell orchestrates the delivery of billions of proteins to the right place at the right time to perform diverse cellular processes. Over the decades, this field has been evolving by integrating advances in microscopy, biochemistry, and molecular biology to unravel the intricate mechanisms governing protein spatiotemporal dynamics as well as their functional consequences. This thesis focuses on the physical motions of proteins at a length scale of tens of nanometers to several microns, where the apparent diffusion and the condensate dynamics of assembly and disassembly are specifically studied. In the studies presented in this thesis, the functional relevance of protein motion is exemplified in the context of gene regulation and disease pathology. We find that the apparent diffusion of transcription factors (TFs) is preferentially partitioned into slowly diffusing states by interacting with RNA, leading to enhanced chromatin occupancy and gene expression (Oksuz et al., 2023). The assembly and disassembly dynamics of transcriptional condensates are coupled to active RNA synthesis, linking gene expression and the spatiotemporal organization of transcriptional proteins in a feedback loop (Henninger et al., 2021). In addition to transcriptional proteins, we find insulin receptors (IRs) are incorporated in dynamic condensates in normal cells to perform metabolic signaling transduction. In insulin-resistant cells, which can arise in chronic diseases such as type 2 diabetes (T2D), IR signaling is dysregulated, associated with diminished IR condensate dynamics of assembly and disassembly (Dall’Agnese et al., 2022). Furthermore, pathogenic signaling reduces the mobility of key proteins, both inside and outside of condensates, that act in many cellular functions. Such reduced protein mobility under diverse pathogenic stimuli, termed proteolethargy, may account for diverse cellular dysregulation seen in chronic disease (Dall’Agnese, Zheng, Moreno et al., 2024).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157596</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Reconstruction Techniques for CUORE: Searching Beyond the Standard Model with Cryogenic Calorimeters</title>
<link>https://hdl.handle.net/1721.1/157595</link>
<description>Advanced Reconstruction Techniques for CUORE: Searching Beyond the Standard Model with Cryogenic Calorimeters
Mayer, Daniel W.
Located within the Laboratori Nazionali del Gran Sasso (LNGS), the Cryogenic Underground Observatory for Rare Events (CUORE) is an experiment primarily searching for neutrinoless double beta decay in ¹³⁰Te. It is the largest operating sub-kelvin cryogenic detector array, instrumenting 988 TeO₂ detector channels at temperatures below 20 mK. CUORE uses the cryogenic calorimeter technique, resolving the thermal signatures from nuclear/particle interactions within crystal absorbers for precise determination of deposited energy. This work establishes methods and analysis techniques to treat CUORE as a segmented detector in aggregate, with a focus on identifying and reconstructing track-like signatures induced by high-energy through-going particles traversing the detector array. Implementations of such high-multiplicity techniques are used to validate that CUORE can resolve the remaining underground flux of muons within LNGS. This result demonstrates CUORE’s unprecedented size and acceptance compared to previous cryogenic calorimeter arrays, and has applications to future searches for neutrinoless double beta decay for which muon-induced backgrounds are non-negligible. Additionally, these methods open up new avenues for CUORE to search for exotic beyond-the-Standard-Model particles and interactions, such as particles with fractional electric charge. If realized in nature, fractionally charged particles (FCPs) could be present within the underground flux of cosmic radiation and would leave faint track-like signatures across the detector. We report on a search for FCPs using the first tonne-year of CUORE’s exposure, finding no excess of FCP track candidates over background, and setting leading limits at 90% C.L. on the possible underground flux of FCPs with charges between 1/24 and 1/5 that of an electron.
Lastly, we introduce differentiable programming methods for the end-to-end training of neural ordinary differential equations to model thermal pulse dynamics within CUORE calorimeter channels. These methods and results improve understanding of detector response, enable improved in situ background characterization, and open novel opportunities for CUORE and future tonne-scale cryogenic calorimeters to search for physics beyond the Standard Model.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157595</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Search for High-Frequency Gravitational Waves with a Modified Axion Detector</title>
<link>https://hdl.handle.net/1721.1/157593</link>
<description>The Search for High-Frequency Gravitational Waves with a Modified Axion Detector
Pappas, Kaliroë Mabelle West
ABRACADABRA-10cm has had great success as a lumped-element axion dark matter pathfinder experiment, with two published axion searches and an extensive background investigation. Now, using the electrodynamics of gravitational waves and a simple change of pickup structures, we have repurposed the ABRACADABRA detector to search for high-frequency gravitational waves in the kHz to MHz range. These higher frequencies may indicate signs of inspiraling primordial black holes or other beyond-the-Standard-Model phenomena. With careful calibration used to distinguish between the two signals, we introduce the first simultaneous search for both axions and gravitational waves using a lumped-element axion detector. In this thesis I present the high-frequency cryogenic ABRACADABRA-10cm detector, the background investigations of the detector, and the design and first data from the ABRACADABRA-10cm high-frequency gravitational wave search.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157593</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radioactive Atoms and Molecules for Fundamental Physics</title>
<link>https://hdl.handle.net/1721.1/157592</link>
<description>Radioactive Atoms and Molecules for Fundamental Physics
Udrescu, Silviu-Marian
The Standard Model (SM) of particle physics and the theory of General Relativity represent two of the greatest achievements in physics in the past century. However, despite their success, many fundamental questions remain unanswered: What is the nature of Dark Matter and Dark Energy? Why is there so little antimatter in the Universe? Why is gravity so weak compared to the other fundamental forces? These questions point to the existence of new phenomena waiting to be discovered. High-precision laser spectroscopy experiments using atoms and molecules have emerged as a fruitful approach for searching for new physics effects. Recently, atoms and molecules containing short-lived radioactive isotopes have been proposed as particularly sensitive laboratories to search for physics beyond the SM, especially at the nuclear level. However, many atoms containing very short-lived isotopes are still out of reach for spectroscopic investigations, while radioactive molecules have been completely inaccessible experimentally until recently.&#13;
&#13;
In this thesis, I will present a series of pioneering experiments aimed at harnessing the power of radioactive atoms and molecules to explore nuclear phenomena, both within and beyond the SM. I will start by describing the first-ever precision laser spectroscopy investigation of a radioactive molecule, radium monofluoride (RaF). I will present measurements of the vibrational, rotational, and hyperfine spectrum of RaF, demonstrating its high sensitivity to minuscule nuclear effects. These experiments allowed the identification of a feasible laser-cooling scheme for RaF and the observation of the effect of the distribution of nuclear magnetization inside the Ra nucleus on the energy levels of RaF. To our knowledge, this is the first time this effect has been observed in a molecule, opening the way for using molecules to benchmark ab initio nuclear theory. Finally, I will present measurements of the ionization potential of RaF, showing its suitability for Rydberg-state studies and precise quantum control using external electric fields.&#13;
&#13;
I will then present the theoretical calculations and the status of an experiment aiming to measure hadronic parity violation using single molecular ions inside a Penning trap. The experiment's goal is to use the external magnetic field provided by the trap to fine-tune molecular energy levels of opposite parity close to degeneracy, thus increasing the signal produced by parity-violating nuclear properties. The sensitivity to the sought-after signal is expected to be increased by more than twelve orders of magnitude compared to atoms. This amplification will allow the observation of yet-to-be-measured parity-violating effects in a molecule. These measurements will be critical to guide our understanding of electroweak nuclear phenomena.&#13;
&#13;
Finally, I will show preliminary results obtained from a novel experiment with the goal of enabling laser spectroscopy studies of atoms and molecules containing radioactive nuclei with lifetimes of 1 ms and below. Such isotopes cannot currently be studied spectroscopically. Using an event-by-event Doppler reconstruction, our approach could overcome most of the challenges encountered by state-of-the-art experimental techniques, allowing us to extend our reach toward unexplored regions of the nuclear chart. Such short-lived isotopes are of great importance for our microscopic understanding of nuclei as well as for constraining the properties of nuclear matter.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157592</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics and Anisotropy in 2D Metal Organochalcogenolate Semiconductors</title>
<link>https://hdl.handle.net/1721.1/157590</link>
<description>Exciton Dynamics and Anisotropy in 2D Metal Organochalcogenolate Semiconductors
Lee, Woo Seok
Silver phenylselenolate (AgSePh) is a novel hybrid organic-inorganic two-dimensional (2D) semiconductor that belongs to the broader class of metal organochalcogenolates (MOCs). Since its blue-emitting excitonic properties were discovered in 2018, AgSePh has attracted attention from the scientific community. From a fundamental science perspective, AgSePh provides an excellent platform for exploring many-body interactions among quasiparticles (such as excitons, phonons, and photons) due to its large exciton binding energy, strong exciton-lattice interactions, and natural photonic cavity structure. From a technological standpoint, its narrow blue emission, bandgap tunability through composition control, chemical robustness, in-plane anisotropy, and low-cost, scalable synthetic methods make AgSePh a promising candidate for photonic and optoelectronic applications. However, we do not yet fully understand how its excitonic properties arise at a fundamental level. The central aim of this thesis is to elucidate the correlation between structure, inorganic composition, organic ligands, and excitonic properties in these novel hybrid 2D semiconductors. First, we present the synthesis and the structural and optical properties of 2D AgEPh (E = S, Se, Te) single crystals, colloidal nanocrystals, and thin films. Importantly, the growth of millimeter-sized single crystalline 2D AgEPh (E = S, Se, Te) enables their crystal structure determination via single crystal X-ray diffraction: AgSPh in P2₁, AgSePh in P2₁/c, and AgTePh in P2₁/c. Second, we explore the underlying mechanism of light emission in AgSePh and AgTePh. Despite having the same crystal structure, these compounds exhibit strikingly different excitonic properties: AgSePh shows narrow photoluminescence (PL) with a minimal Stokes shift, while AgTePh exhibits broad PL with a large Stokes shift.
Using time-resolved and temperature-dependent optical spectroscopy, combined with sub-gap photoexcitation studies, we demonstrate that the exciton dynamics in AgSePh films are dominated by the interaction of free excitons with extrinsic defect states, whereas the dynamics in AgTePh are dominated by intrinsic exciton self-trapping behavior. Third, we study alloying among the AgEPh compounds. We demonstrate that AgSePh and AgTePh form homogeneous alloys with tunable excitonic properties across all compositions, whereas alloys of AgSPh with AgSePh or AgTePh exhibit a miscibility gap. These observations are elucidated by density functional theory calculations and correlated with crystallographic information. Fourth, using polarization-resolved micro-absorption, reflectance, and photoluminescence spectroscopy, combined with GW plus Bethe-Salpeter equation calculations, we reveal multiple low-lying excitons with in-plane anisotropy in AgSePh and AgTePh. This showcases the richness of excitonic physics in these materials, which arises from their low-symmetry crystal structures. Finally, we show that the electronic and excitonic structure of AgSePh can be engineered through organic functionalization, resulting in giant excitonic anisotropy and a completely different absorption spectrum in 2D AgSePh-F₂(2,3). This divergence in excitonic properties is attributed to the semi-1D Ag chains in AgSePh-F₂(2,3), in contrast to the hexagonal 2D Ag network in AgSePh. This finding can be generalized to other blue-emitting 2D AgSePh-R compounds, which exhibit either AgSePh-like or AgSePh-F₂(2,3)-like absorption spectra. Overall, this thesis advances the understanding of structure-composition-excitonic property relationships in these emerging hybrid semiconductors, paving the way for future investigations into this exciting material family.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157590</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Research to Search: Technologies and Techniques of Legal Research, 1880-1980</title>
<link>https://hdl.handle.net/1721.1/157589</link>
<description>From Research to Search: Technologies and Techniques of Legal Research, 1880-1980
Reiss Sorokin, Alex
In 1964, the Ohio State Bar Association (OSBA) embarked on a project to harness computer technology to automate legal research. After three years of investigation, it established the Ohio Bar Automated Research (OBAR) organization and contracted a local computer company, Data Corporation, to develop an electronic legal research service. Despite initial skepticism and mounting costs, these lawyers and technologists managed to launch a working service by 1969. The service – also named OBAR – was available through remote consoles placed in law firms, libraries, and government offices. By 1973, an improved system was relaunched as Lexis, a soon-to-be-national legal information retrieval service. Lexis went on to become a staple of American legal practice while OBAR gradually faded out of the picture. This dissertation tells the story of the OBAR system and its promise of automating legal research. &#13;
What did it take for lawyers to begin using and trusting computer technology for their work? I argue that the automation of legal research required both conceptual and material rearrangement. Legal research was a deeply social activity supported by an intricate infrastructure of people, technologies, and techniques. To be trusted and used, the computer had to be constantly charged with meanings, often contradictory ones. It was presented as a tool that would be integrated into an existing legal research process and a technology that would overhaul legal research. The computer was attributed mechanical qualities, like being objective or operating according to instructions, and human ones, like being sophisticated and capable of conversation. These contradictory meanings, along with the gap between promise and reality, were constantly sewn together as part of the computerized system’s development and marketing process. &#13;
To capture the process of automation, this dissertation traces legal research practices before the computer, the development process of the new technology, and the competing notions of trust and credibility in its early years. The first section traces the splintering of legal research into a distinct task that could be taught, delegated, and automated. In the first chapter, I focus on print legal research technologies and legal research instruction through the first half of the 20th century. I show that innovations in legal research went hand-in-hand with a reallocation of legal work among lawyers and non-legal staff. Examining legal research manuals shows that instruction in the types of law books gradually gave way to a more systematic approach to legal research. The second chapter considers the history of legal research work through an examination of the law office and the distribution of labor within it. It shows that the development of legal research into a distinct task that could be delegated was intertwined with social, professional, and technological developments at mid-century. The third chapter describes how the specter of automation focused bar associations’ attention on legal research practices. It shows that legal research fit into a social and professional setting. Lawyers relied on an array of technologies and personnel to produce answers to legal questions. As a whole, the section argues that three factors joined to make legal research into a distinct task, thus making its automation possible: the development of instructional materials and courses on legal research, the growth and bureaucratization of law firms, and the introduction of women and machinery into the law office in the 20th century.&#13;
Two chapters and two short excursuses make up the second section, which focuses on the development and early adoption of the OBAR system. In chapter four, I examine the entanglement of technological choices and ideals in the process of developing the OBAR system in the 1960s. I show that the focus on direct use by lawyers was meant to cast suspicion on human judgment while touting the computer as an objective and trustworthy tool. Excursus one unpacks OBAR’s promise of an interactive system. It shows that at the same time as the system was likened to human dialogue, it offered a substantially different interaction with court cases, a process that altered the epistemic and social setting of legal research. Chapter five considers the reactions of OBAR’s early users as communication consoles were placed in law firms and libraries across Ohio in the 1970s. Relying on call reports and correspondence, I examine controversies around the system’s accuracy and credibility. Excursus two tells the story of what came out of the system’s promise in light of later developments. Focusing on the chasm that developed between lawyers and technologists in defining the system in the 1970s, it explains how an approach that focused on the system as a product prevailed over an approach that viewed the system as a service to the profession. To become a successful national product, Lexis had to shed its connections to the organized bar and give up any social aspirations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157589</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Terahertz Control and Ultrafast Spectroscopy of Layered Antiferromagnets</title>
<link>https://hdl.handle.net/1721.1/157588</link>
<description>Coherent Terahertz Control and Ultrafast Spectroscopy of Layered Antiferromagnets
Ilyas, Batyr
The central theme of modern condensed matter physics is to understand the emergent phenomena arising from interactions among Avogadro’s number of particles in quantum materials, alongside efforts to control their properties. While powerful transport, thermodynamic, and spectroscopic tools have been developed, they often fall short of revealing the intricate interplay among electronic, spin, orbital, and lattice degrees of freedom. A promising approach involves selectively perturbing one degree of freedom while observing responses in others, made possible by ultrafast lasers with femtosecond time resolution. These advancements not only showcase the capability of ultrafast experiments in understanding complex material properties but also demonstrate the manipulation of ordered phases at ultrafast timescales, thereby opening a laboratory for studying materials in the nonequilibrium regime. This dissertation contributes to the ongoing effort of developing new ultrafast spectroscopy tools, utilizing them to probe lattice, magnetic, and electronic properties, and gaining active control over them. Specifically, it investigates the induction of a new magnetic state with net magnetization using intense low-energy terahertz (THz) pulses in the van der Waals antiferromagnet FePS₃. Critical fluctuations near the phase transition are found to enhance both the magnitude and the lifetime of this new state. Additionally, a broadband two-dimensional (2D) THz spectroscopy technique is developed and employed to study interactions among low-energy collective excitations and to directly identify the phonons that induce the new magnetic phase. Furthermore, time-resolved spectroscopy in the visible and near-infrared spectral range is utilized to detect a bound state between phonon and electronic states in the sister compound NiPS₃, and to capture a magnetostriction effect in FePS₃ using coherent phonon spectroscopy, an effect that was elusive to conventional diffraction experiments. 
Finally, second harmonic generation spectroscopy with microscale spatial resolution is employed to study the multiferroic material NiI₂, demonstrating the persistence of its multiferroic order down to a single atomic layer, a first of its kind. These findings and tools can potentially be extended to frustrated quantum magnets to control their magnetic phases and to detect their collective modes. The 2D nonlinear spectroscopy utilized in this dissertation is gaining attention both theoretically and experimentally as a promising tool for detecting fractionalized spin excitations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157588</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonizing Long-haul Trucking</title>
<link>https://hdl.handle.net/1721.1/157587</link>
<description>Decarbonizing Long-haul Trucking
Jones, Robert
As climate change poses an ever-increasing challenge for the world, the transportation sector has struggled to mitigate its carbon dioxide emissions. The heavy-duty trucking sector bears particular responsibility for this trend. Heavy-duty freight trucks account for approximately 30% of highway transportation emissions even though they represent only about 5.5% of vehicles on the road. Heavy-duty trucks are also the backbone of US freight, as they account for 71% of freight delivered to the American people. The corresponding road freight energy consumption has risen consistently over the last decades and is expected to grow even further in the future. Emissions must be drastically reduced to meet the targets of the 2015 Paris Agreement and to limit global warming. This tension raises a crucial question: how can road freight emissions be substantially reduced in the face of growing transportation demand? For long-haul class 8 trucks especially, this question is difficult to answer. This study seeks to identify potentially competitive powertrain and fuel combinations and to rule out less viable alternatives.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157587</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the physics of intranuclear organization</title>
<link>https://hdl.handle.net/1721.1/157585</link>
<description>On the physics of intranuclear organization
Sood, Amogh
Eukaryotic nuclei achieve precise spatiotemporal organization of their contents and chemistry despite their diverse and crowded chemical milieu and their lack of membrane-bound organelles. It has recently become apparent that cells accomplish this feat by leveraging physical processes such as liquid-liquid phase separation, driven by multivalent macromolecular interactions, to form biomolecular condensates that serve as membrane-less organelles for the precise, vectorial organization of intranuclear contents. In particular, the hierarchical and functional packaging of DNA into chromatin is mediated by phase separation. Epigenetic modifications of histone proteins, which DNA wraps around to form nucleosomes, are key determinants of nucleosomes’ condensability and chromatin’s higher-order structure. Chromatin structure, by regulating access of transcriptional machinery to the genome, in turn has broad implications for cellular processes such as gene regulation and cellular differentiation. Furthermore, there exists a bi-directional feedback between the 1D epigenomic sequence and 3D chromatin structure, as the former is spread and maintained by enzymes with a “reader-writer” functionality that allows them to similarly modify nucleosomes close to each other in sequence but not necessarily in space. Recent advances suggest that chromatin has the properties of a viscoelastic network and exhibits non-trivial dynamics. The dynamics of chromatin structure and the spread and maintenance of epigenetic marks are therefore intimately and inextricably linked, yet poorly understood. Part I of this thesis is devoted to understanding the complex interplay between chromatin structural dynamics and stochastic reaction networks describing histone modifications. 
Furthermore, given the prominent role phase separation plays in intranuclear organization, we devote Part II of this thesis to studying the impact of competition between specific and non-specific interactions on liquid-liquid phase separation coupled to percolation, thereby attempting to elucidate the molecular grammar of phase-separating biomolecules and the evolutionary pressures that shape them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157585</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical and Analytical Methods in Low-Dimensional Strongly Correlated Quantum Systems</title>
<link>https://hdl.handle.net/1721.1/157584</link>
<description>Numerical and Analytical Methods in Low-Dimensional Strongly Correlated Quantum Systems
Peng, Changnan
The study of low-dimensional strongly correlated quantum systems lies at the intersection of intricate theoretical models and practical numerical methods, offering deep insights into condensed matter physics. This thesis explores the application of various numerical and analytical methods to these systems. It addresses universal behaviors and phase transitions, exemplified by the phenomenon of multiversality. Specifically, the transition from a 1D Luttinger liquid to a charge density wave insulator, a transition that is partly Kosterlitz-Thouless and partly Ising in character, is analyzed using both analytical renormalization group calculations and numerical density matrix renormalization group simulations. Additionally, the thesis introduces a statistical smoothing spline method to pinpoint transition points systematically. The work extends to quantum dynamics, presenting a generic theoretical framework for analyzing quantum-classical adiabatic dynamics with learning algorithms. A provably efficient adiabatic learning (PEAL) algorithm with favorable scaling properties is developed. The algorithm is numerically validated on the 1D Holstein model, demonstrating its precision in predicting dynamics. Furthermore, the thesis derives a Hamiltonian lattice formulation for the 2+1D compact Maxwell-Chern-Simons theory, providing an analytical solution that aligns with continuum theories and facilitating future numerical applications. Through these explorations, the thesis underscores the complementary roles of numerical and analytical methods in advancing the understanding of complex quantum systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157584</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonizing the US Power Sector</title>
<link>https://hdl.handle.net/1721.1/157582</link>
<description>Decarbonizing the US Power Sector
Farnsworth, Amanda
As the world’s second-highest national emitter, the US has the opportunity, and the responsibility, to reduce emissions and mitigate the impacts of climate change. The power sector has been identified as the linchpin in our national decarbonization strategy, with high electrification goals for the other sectors. As of 2022, the power sector was responsible for more than a quarter of annual emissions. As electrification increases, the importance of decreasing the emissions and emissions intensity of electricity production grows. This thesis explores the challenges and opportunities of decarbonizing the US power sector. Two models were built to complete this analysis: Ideal Grid (IG), a greenfield capacity expansion and economic dispatch model, and Evolving Grid (EG), a brownfield capacity expansion and economic dispatch model. These models are a novel addition to the current arsenal of publicly available capacity expansion models because they include embodied emissions in addition to the industry-standard consideration of power plant tailpipe emissions from fossil fuel combustion. Nine regions of the contiguous US are represented in these models. First, IG is used to highlight regional decarbonization challenges. Regions with significant land available for variable renewable energy (VRE) buildout and strong wind resources had the cheapest paths to a clean grid. Hydropower resources also play a significant role. At deep decarbonization levels, the need for long-duration energy storage (like pumped hydropower storage) increases. The role of embodied emissions is explored, showing that as fossil-fuel consumption decreases and VRE penetration increases, they become non-negligible. To most effectively reduce system emissions, embodied emissions should be accounted for. Next, fusion is integrated into the model to demonstrate its potential role. Assuming an $8,500/kW CAPEX, fusion is not economically competitive unless a carbon constraint is applied. 
At deep decarbonization levels, however, fusion becomes prominent in all regions. EG shows that intermediary decarbonization goals before 2050 play a pivotal role in determining fusion adoption and overall fleet composition. Lastly, the versatility and value of the presented models are demonstrated by outlining other potential applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157582</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure of hadrons and other potential phases of QCD</title>
<link>https://hdl.handle.net/1721.1/157581</link>
<description>The structure of hadrons and other potential phases of QCD
Schindler, Stella T.
Quantum chromodynamics (QCD) is a mathematical theory describing subatomic particles called quarks and gluons, and the strong force that binds them together into protons and neutrons. This thesis centers on two major thrusts of modern QCD research: (1) uncovering the inner quark and gluon structure of the proton, and (2) mapping out other phases of matter that quarks and gluons form as we vary pressure and temperature. To study these topics, we develop, apply, and combine tools in quantum field theory (analytics), lattice gauge theory (numerics), and phenomenology (comparing theory to experiment). Specifically, we use new and existing techniques to access precision information about the inner structure of the proton, via the study of transverse momentum distributions, energy correlators, and diffractive processes at colliders. Additionally, we develop new analytic and numerical techniques for studying QCD phase structure inspired by non-Hermitian physics, and probe the possibility of new exotic phases near the QCD phase transition.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157581</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonreciprocal phenomena in superconductivity</title>
<link>https://hdl.handle.net/1721.1/157580</link>
<description>Nonreciprocal phenomena in superconductivity
Davydova, Marharyta
This thesis introduces and studies several unusual phenomena that arise in low-dimensional systems in the presence of a magnetic field. The first example that we discuss is nonreciprocal superconductivity, which occurs upon simultaneous breaking of inversion and time-reversal symmetries. Nonreciprocal superconductivity describes certain classes of unconventional superconductors, including certain kinds of mixed-pairing and finite-momentum superconductors. It also occurs in engineered systems exhibiting s-wave pairing-based superconductivity, for which we put forward several simple proposals. We demonstrate several striking observable consequences of nonreciprocal superconductivity. These include current rectification in normal metal-nonreciprocal superconductor junctions and the Josephson diode effect, for which we propose a simple and universally applicable mechanism. With the advent of novel low-dimensional symmetry-breaking materials, such as multilayer graphenes and twisted cuprates, as well as modern experimental possibilities involving engineered systems, nonreciprocal phenomena could eventually become an indispensable tool for revealing the nature of superconducting orders.&#13;
&#13;
The second part of this thesis concerns doped Mott insulators in a magnetic field, described by a triangular-lattice Fermi-Hubbard model in the limit of strong interaction. This is relevant for many novel materials, such as moiré transition metal dichalcogenide bilayers. We predict a new bound state, the spin polaron, formed by binding a doped hole with a magnon (spin flip). Spin polarons have a large effective mass and are spin-3/2 quasiparticles. The mechanism for their formation is kinetic frustration, and therefore their binding energy is proportional to the hopping t, which is the largest energy scale within a single Hubbard band. We then propose a new phase diagram for the triangular-lattice Hubbard model in a magnetic field, as well as multiple experimental signatures. We hope that the prediction of the spin polaron, which has since been experimentally confirmed, will give rise to novel mechanisms for superconductivity and correlated orders in doped Hubbard models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157580</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Dark Matter Halos through the Lens of Machine Learning</title>
<link>https://hdl.handle.net/1721.1/157579</link>
<description>Decoding Dark Matter Halos through the Lens of Machine Learning
Nguyen, Tri V.
Dark matter (DM) constitutes about 85% of the matter in the Universe, yet its particle nature remains one of the greatest outstanding questions in astrophysics. DM halos act as the scaffolding within which galaxies form, but the specific mechanisms through which they influence galaxy evolution are not fully understood, especially at galactic scales. While cosmological simulations and astrophysical surveys have made significant strides in constraining DM properties, upcoming surveys will generate terabytes of complex, high-dimensional data. It is thus imperative to develop new methodologies capable of interpreting and linking this data with theoretical models. Machine learning techniques, coupled with advancements in cosmological simulations, present a transformative opportunity. In this thesis, I conduct a multi-scale investigation into the nature of DM and its role in shaping galaxies by integrating advanced machine-learning techniques with cutting-edge cosmological simulations. First, I employ simulation-based inference and graph neural networks to infer the mass density profiles of DM halos in dwarf galaxies from their stellar kinematics. Next, I develop a generative model using normalizing flows and recurrent neural networks to reconstruct the mass assembly histories of DM halos in cosmological simulations. Furthermore, I utilize variational diffusion models and Transformer-based neural networks to perform point-cloud modeling of satellite populations under alternative DM models. Finally, I create synthetic analogs of the Gaia surveys from Milky Way-like simulations, bridging the gap between simulations and observations. This thesis demonstrates the transformative potential of machine learning techniques to probe DM properties and galaxy formation. The methodologies developed herein provide new avenues for interpreting vast and complex astronomical datasets and offer insights that could lead to a deeper understanding of the fundamental nature of DM.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157579</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring New Frontiers in High Energy Physics: Boosted Resonances Decaying To Quarks, Foundation Models, and Heterogeneous Computing at the CMS Experiment</title>
<link>https://hdl.handle.net/1721.1/157578</link>
<description>Exploring New Frontiers in High Energy Physics: Boosted Resonances Decaying To Quarks, Foundation Models, and Heterogeneous Computing at the CMS Experiment
Krupa, Jeffrey
In this thesis, we introduce machine learning (ML) tools to optimize data taking and analysis at data-intensive scientific experiments, focusing on the CMS experiment at the Large Hadron Collider (LHC). A path to a foundation model for LHC physics is described, where self-supervised learning is enabled through the re-simulation of decaying partons. The first experiments with remote operation of GPUs in LHC experiments are presented. These tools will help equip experiments at the High-Luminosity LHC (HL-LHC) to perform precision measurements and searches for new physics, such as low-mass resonances decaying to quarks. In this context, a search for narrow resonances decaying into quark-antiquark pairs produced with high transverse momentum is presented. The analysis is based on data collected in Run 2 with the CMS detector at the LHC in proton-proton collisions at √&#119904; = 13 TeV. Resonance candidates are reconstructed as large-radius jets and identified using a state-of-the-art jet tagging algorithm. This analysis presents the most sensitive limits for new spin-1 bosons coupling universally to quarks and spin-0 bosons coupling preferentially to heavier quarks. The invariant jet mass spectrum is probed for a potential narrow peaking signal over a smoothly falling background. Upper limits at 95% confidence level are set on the coupling of narrow resonances to quarks as a function of the resonance mass. For masses between 50 and 300 GeV, these are the most sensitive limits to date on all possible mediators. Using conventions for s-channel dark matter mediators, limits are set on dark photons and dark matter in the context of the relic density.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157578</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopic study of emergent electronic phases in transition metal based compounds</title>
<link>https://hdl.handle.net/1721.1/157577</link>
<description>Spectroscopic study of emergent electronic phases in transition metal based compounds
Song, Qian
Antiferromagnets with non-relativistic spin splitting are outstanding candidates as the next generation of spintronic materials owing to their electron-volt (eV) scale spin splitting, ultrafast spin dynamics and nearly vanishing stray fields. Achieving voltage-based control of spin polarization in antiferromagnets is of great interest for realizing energy-efficient and compact devices for information storage and processing. Spin spiral type-II multiferroics exhibit an inversion-symmetry-breaking antiferromagnetic order which directly induces ferroelectric polarization, allowing for symmetry-protected cross-control between spin chirality and polar order. This intrinsic coupling between the magnetic and dipolar order parameters results in record-strength magnetoelectric effects. Two-dimensional materials possessing such intrinsic multiferroic properties have been long sought for harnessing magnetoelectric coupling in nanoelectronic devices. The recent discovery of intrinsic magnetic order in atomically-thin van der Waals (vdW) materials has created new opportunities for the study of collective spin phenomena in free-standing two-dimensional (2D) systems and nanoscale devices. Among possible multiferroic vdW materials, several families have been identified, and of particular promise is the magnetic semiconductor NiI₂. The multiferroic state of NiI₂ is characterized by a proper-screw spin helix with a given handedness, which couples to the charge degrees of freedom to produce a chirality-controlled electrical polarization. We use a suite of optical techniques that reveal an ordered magnetic, polar state persisting down to the ultrathin limit of monolayer NiI₂.&#13;
&#13;
Recent development of the spin-group formalism has identified a new class of magnets with nontrivial spin textures, including even-parity d-, g-, or i-wave altermagnets and odd-parity p-wave antiferromagnets. The chiral magnetic order in NiI₂ breaks Inversion-Time-Reversal-Translation (PTτ) symmetry and Spin-Rotation-Translation (Uτ) symmetry, allowing for spin splitting even in the absence of spin-orbit coupling (SOC). We provide direct evidence that the spin polarization in a spin spiral type-II multiferroic exhibits p-wave (odd-parity) character and directly couples to the spin chirality, enabling electrical control of non-relativistic spin splitting. Our findings represent the first observation of a p-wave antiferromagnet and open a new frontier of voltage-based switching of non-relativistic spin splitting in vdW antiferromagnets.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157577</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Color: Lattice Gauge Theory for Strongly-Coupled Physics</title>
<link>https://hdl.handle.net/1721.1/157576</link>
<description>Beyond Color: Lattice Gauge Theory for Strongly-Coupled Physics
Oare, Patrick R.
Quantum Chromodynamics (QCD) is the prototypical strongly interacting Quantum Field Theory (QFT). It is the interaction that yields the strong nuclear force that binds protons and neutrons together. The underlying mathematical picture of QCD is known exactly: it is an &#119878;&#119880;(3) gauge theory coupled to six flavors of fermions (the quarks). Despite this, it remains difficult to compute QCD observables because QCD is strongly coupled, and the perturbative methods typically used in QFT work only in specific regimes of validity for QCD. The most successful ab initio method for studying QCD is Lattice Gauge Theory (LGT). This formalism computes observables by discretizing spacetime to render the path integral tractable. The primary focus of LGT in the 40 years since its inception has been the study of QCD, as the theory has direct physical relevance to so much of our universe, and the desire to understand QCD has driven many conceptual breakthroughs and advancements in LGT. Despite this focus on QCD, lattice methods find significant utility in studying other strongly-coupled gauge theories, both related and unrelated to QCD. This thesis focuses on applying LGT to strongly-coupled physics inside and outside of QCD and on developing techniques within LGT that may be used to better understand these theories. First, the spectral function reconstruction problem in LGT is considered, and a new reconstruction method is presented. Spectral functions describe the energy states of a theory: bound states, resonances, and continuum thresholds. The presented reconstruction method uses the analytic properties of the retarded Green’s function to constrain the full set of spectral functions that may be reconstructed from LGT data using the Nevanlinna-Pick interpolation problem. Next, two theories are numerically studied using LGT. The first is the Standard Model Effective Field Theory (SMEFT). 
The SMEFT process considered is neutrinoless double &#120573; (0&#120584;&#120573;&#120573;) decay, a hypothetical decay of two neutrons into two protons and two electrons. LGT is used to compute non-perturbative matrix elements for the unphysical &#120587;⁻ → &#120587;⁺&#119890;⁻&#119890;⁻ transition, which contributes to nuclear 0&#120584;&#120573;&#120573; decay, and for the decay of the dinucleon &#119899;⁰&#119899;⁰ → &#119901;⁺&#119901;⁺&#119890;⁻&#119890;⁻. Connections to Effective Field Theory studies of 0&#120584;&#120573;&#120573; decay will also be discussed. Finally, adjoint QCD (QCD₂), the theory of a Majorana fermion coupled to an &#119878;&#119880;(&#119873;) gauge field in the adjoint representation in 1+1 spacetime dimensions, will be studied using LGT. QCD₂ is a well-studied QCD-like theory whose properties have been crucial in the study of confinement. Lattice methods are used to compute the static quark potential, string tensions, and the low-lying spectrum of the theory, providing input that may be used to better understand QCD₂ and the confinement mechanism in general.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157576</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Sample to Answer: Innovations in sample processing and CRISPR-based diagnostics for enhanced clinical translation and field deployment</title>
<link>https://hdl.handle.net/1721.1/157575</link>
<description>From Sample to Answer: Innovations in sample processing and CRISPR-based diagnostics for enhanced clinical translation and field deployment
Arizti Sanz, Jon
The recent (re)emergence and rapid spread of infectious disease agents underscore the urgent need for effective disease prevention and control strategies. Accurate and timely diagnostics serve as the basis for effective disease management, enabling the rapid identification of disease outbreaks, guiding treatment decisions, and informing public health interventions. However, the global diagnostic testing capacity is currently insufficient to effectively respond to emerging infectious disease threats, which has fueled the spread of Lassa, Ebola, SARS-CoV-2, and Zika virus in recent years. Globally, this diagnostic gap is particularly pronounced at the primary care or community level, an essential site for swift and effective response. Widespread, rapid, and user-friendly diagnostic tests are vital components of effective outbreak containment and response strategies, as they enable the rapid identification of new cases, thereby facilitating timely intervention and preventing further pathogen spread. Therefore, addressing the critical need in the global diagnostic testing infrastructure requires the development and deployment of diagnostic tools that are accurate, affordable, and accessible in decentralized settings.&#13;
&#13;
Existing diagnostics fall short in bridging the current diagnostic gap, but recent advances in nucleic acid-based technologies, and CRISPR-based diagnostics (CRISPR-Dx) in particular, have shown significant promise in transforming infectious disease detection. CRISPR-Dx are easily programmable, robust, sensitive, isothermal, and highly specific, but further advances will be required to facilitate their use outside of centralized laboratories. This thesis aims to address this critical gap in global diagnostic testing capacity, focusing on the innovation, validation, and deployment of CRISPR-Dx for infectious diseases. We first developed SHINE, a rapid and sensitive Cas13-based nucleic acid detection platform that operates without the need for nucleic acid extraction. In this first version (SHINEv1), we simplified the CRISPR-Dx workflow, reducing user manipulations and assay time, and enabling automated interpretation of assay results using a companion smartphone application. Next, we made further improvements and thoroughly validated this platform to create SHINEv2, a further streamlined, equipment-free, and easily deployable technology with the ability to discriminate SARS-CoV-2 variants of concern (VOCs). Given the excellent programmability of CRISPR-Dx, we expanded the use of SHINE beyond SARS-CoV-2 to other clinically relevant pathogens. We developed and validated SHINE assays to detect and discriminate species, subtypes, and variants of influenza virus, with important implications for public health and clinical care. We also designed and tested multiplexed diagnostic assays for the detection and differentiation of three tick-borne pathogens in clinical samples. 
Finally, given the inadequacy of existing sample processing methods – and their importance to nucleic acid test deployment – we developed a high-throughput experimental workflow to analyze the effects of chemical reagents on diagnostic assay performance and nuclease activity in patient samples using a commercially available microfluidic platform. Together, the research presented in this thesis contributes to the development of more effective, accessible, and field-deployable diagnostic solutions, thereby enhancing our ability to respond to the global burden of infectious diseases.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157575</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total Synthesis of Verticillin A and Application of Diazene-Directed Fragment Assembly to the Synthesis of Heterodimeric Epidithiodiketopiperazine Derivatives</title>
<link>https://hdl.handle.net/1721.1/157574</link>
<description>Total Synthesis of Verticillin A and Application of Diazene-Directed Fragment Assembly to the Synthesis of Heterodimeric Epidithiodiketopiperazine Derivatives
Knauss, Walker
I. Total Synthesis of (+)-Verticillin A&#13;
&#13;
We report the first total synthesis of (+)-verticillin A, completed in 16 steps. Our initial strategy of late-stage sulfidation on a dimeric substrate produced an undesired diastereomer of the epidithiodiketopiperazine (ETP). We were able to access an ETP with the desired diastereoselectivity by effecting sulfidation on an epimerized, monomeric substrate. In order to install a disulfide with the desired facial selectivity, we developed a stepwise sequence involving stereoselective formation of a C15-benzhydryl disulfide followed by intramolecular sulfidation at C11. Because ETPs are unstable to carbon-centered radicals and irradiation with UV light, we developed conditions to reduce the disulfide and protect the resulting thiols as alkyl sulfides prior to cobalt reductive dimerization and photochemical desulfonylation. Finally, deprotection of the thiols and oxidation delivered the ETP natural product (+)-verticillin A. &#13;
&#13;
II. Synthesis of Heterodimeric ETP Derivatives Using Diazene-Directed Fragment Assembly&#13;
&#13;
We report the development of a novel route to heterodimeric ETP derivatives using diazene-directed fragment assembly. This is the first application of diazene-directed coupling to the synthesis of dimeric diketopiperazine alkaloids. Our group’s initial route to heterodimeric ETP derivatives relied upon reductive cobalt dimerization, which produces a nearly statistical mixture of homo- and heterodimeric products. In contrast to the initial route, the diazene-based approach disclosed herein enables selective heterodimerization. To demonstrate the utility of heterodimeric ETP derivatives, we have synthesized an ETP-diazirine photoaffinity labelling probe, which we hope can be used to study the interactions of ETPs with cellular targets.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157574</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explorations in two dimensional strongly correlated quantum matter: from exactly solvable models to conformal bootstrap</title>
<link>https://hdl.handle.net/1721.1/157573</link>
<description>Explorations in two dimensional strongly correlated quantum matter: from exactly solvable models to conformal bootstrap
Jones, Robert A.
This dissertation presents two projects that touch upon the role of quantum mechanics in classifying phases of matter and their transitions. In the first project, we set out to answer: is it possible to find a lattice model in the Ising universality class that realizes the Kramers-Wannier symmetry in such a way that it squares to 1, rather than to a lattice translation as in the usual Ising model? Using insights from symmetry-protected topological phases of matter, we answer in the affirmative, with the caveat that the symmetry, beyond being non-onsite, actually acts on a Hilbert space that is not a local tensor product. The second project concerns the nature of the Néel-VBS deconfined quantum critical point (DQCP). This is thought to be described by the noncompact CP¹ model, which we argue to be continuously connected to the theory accessed by the 2 + ε expansion for the O(3) nonlinear sigma model (NLSM). To shed light on the nature of the DQCP, we perform conformal bootstrap studies of the O(3) model in 2 &lt; d &lt; 3.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157573</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Squeezing the Quantum Noise of LIGO below the Standard Quantum Limit</title>
<link>https://hdl.handle.net/1721.1/157572</link>
<description>Squeezing the Quantum Noise of LIGO below the Standard Quantum Limit
Jia, Wenxuan
The year 2015 marked the first detection of a gravitational wave signal from a pair of black holes located 410 megaparsecs (1.3 billion light-years) away. Their merger unleashed an immense amount of energy, with the peak emission rate surpassing the combined power of all luminous stars in the observable universe. Unlike stars, the merger of two black holes does not emit electromagnetic radiation like visible light but instead illuminates the universe with gravitational radiation. These waves traveled freely for over a billion years before being captured by the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors. Upon reaching Earth, these waves caused a minuscule length change between the LIGO mirrors, on the order of 10^(−18) m, a thousand times smaller than a proton.&#13;
&#13;
The unprecedented sensitivity of LIGO requires an extremely low noise level. The design of LIGO as an interferometer converts the gravitational-wave signal to an optical signal, which is measured on photodiodes along with other noise sources. One of these is the quantum noise due to the quantum vacuum fluctuations of the light itself. Besides the light, the mirror also has quantum-mechanical features and experiences quantum back-action when we probe it with light. Knowing the position of the mirror very well inevitably perturbs its momentum, which prevents us from making the next measurement of the position precisely. This is fundamental physics dictated by Heisenberg’s uncertainty principle. In the case of continuous measurement, as in LIGO, the quantum back-action leads to an apparent sensitivity limit known as the Standard Quantum Limit (SQL). It tells us how precisely we can measure an object with light.&#13;
&#13;
The SQL applies when using uncorrelated photons, or coherent light such as a laser beam, to measure the object. However, introducing quantum correlations through squeezed light, a technique called squeezing (Chapter 2), can circumvent this limit. Squeezed vacuum, a non-classical state of light, exploits quantum correlations between photon pairs to reduce vacuum fluctuations in one quadrature at the cost of the other. By manipulating the quantum correlation between light and the mirror, the squeezed vacuum can potentially reduce quantum noise below the SQL, a concept explored in frequency-dependent squeezing. This thesis develops a first-principles model of quantum noise in LIGO (Chapter 3) and investigates how squeezing can mitigate it while considering practical factors like optical losses and mode-mismatch (Chapter 4). These theories are constructed with a bottom-up approach. Experimental details on generating and utilizing frequency-dependent squeezing for LIGO are also discussed (Chapter 5), culminating in the observation of LIGO’s quantum noise below the SQL (Chapter 6).&#13;
&#13;
Besides squeezing, increasing optical power can also reduce quantum shot noise. Nevertheless, maintaining high power levels (fractions of megawatts) in LIGO is challenging due to experimental imperfections, such as unintended point absorbers on the mirror coating. This thesis analyzes the thermoelastic distortions caused by these absorbers, which limit achievable optical power in current and future gravitational-wave detectors (Chapter 7).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157572</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast dynamics in quantum materials probed by time-and-momentum-resolved techniques</title>
<link>https://hdl.handle.net/1721.1/157571</link>
<description>Ultrafast dynamics in quantum materials probed by time-and-momentum-resolved techniques
Su, Yifan
The interactions of quasiparticles in quantum materials give rise to intriguing phenomena, including magnetism and superconductivity. However, these interactions are often challenging to understand due to the intertwining of multiple degrees of freedom, such as charge, spin, orbital, and lattice. To fully understand such strongly correlated systems, a suite of experimental techniques that respectively probe various degrees of freedom and simultaneously resolve multiple channels, including energy, momentum, time, and space, is highly desired. This poses a significant challenge for the entire community. In this dissertation, I will focus on a series of experiments performed on quantum material systems utilizing several multi-resolution techniques. Ultrafast electron diffraction (UED) and time-and-angle-resolved photoemission spectroscopy (trARPES) are tools that I co-developed with my colleagues at MIT in the past several years. Supplemented by the time-resolved X-ray diffraction (trXRD) setup at free electron laser facilities around the world, they provide direct access to lattice (UED and trXRD) and electronic (trARPES) structures in quantum materials on an ultrafast timescale of a few hundred femtoseconds. The first part of the dissertation will briefly introduce assorted aspects of ultrafast phenomena as well as the fundamental principles and instrumentation of the several time-and-momentum-resolved techniques. Following the introduction to these time-and-momentum-resolved techniques, the second part of the thesis focuses on the coherent acoustic phonons in quantum materials observed with UED. The crystalline lattice is the building block of any solid-state system and, thus, the most important aspect in condensed matter physics research. The study of coherent acoustic phonons, the fundamental coherent excitation of the lattice, can be traced back to the 1980s when solid-state ultrafast lasers were first developed. 
However, the knowledge about the excitation mechanism was not complete. In this part of the thesis, I will introduce a new pathway for launching coherent acoustic phonons: magnetostriction, and discuss the spin-mediated shear oscillator enabled by this mechanism in a van der Waals antiferromagnet. I will further discuss the original methodology I developed that uses coherent acoustic phonons detected with UED as a picosecond-timescale "lock-in" experiment that senses nano-scale mechanical motions in ultra-thin quantum materials. The last part of the dissertation will focus on charge density wave (CDW) phase transitions in quantum materials. CDWs are systems where a strong interplay between electrons and phonons drives a phase transition that modulates the charge density and is thus accompanied by periodic lattice distortions. In this dissertation, I will focus on systems with multiple interacting CDW orders. These systems are ideal platforms for studying the interplays among multiple order parameters. The suite of probes, including UED, trXRD, and trARPES, offers a comprehensive view of CDW systems from both phononic and electronic perspectives. This part of the thesis will examine a series of CDW materials with multiple CDW orders, including ErTe₃, EuTe₄, and CsV₃Sb₅. Via a series of ultrafast multi-messenger experiments, I will survey various origins and behaviors of CDW interactions and answer longstanding questions about the nature of CDW ground states in these quantum materials. The overarching theme of this dissertation is to establish a paradigm of problem-solving in quantum materials research via a combination of multiple channels acquired from a suite of ultrafast momentum-resolved techniques. Coherent phonons and CDW systems are two of the richest playgrounds in the ultrafast regime. I am going to investigate various cases where an ultrafast laser pulse decodes the intertwined degrees of freedom in quantum materials. 
The insight developed in these case studies may be carried over to other quantum material systems with emergent quantum states, such as superconductivity and magnetic orders.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157571</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Terahertz Spectroscopy for the Manipulation and&#13;
Elucidation of Correlated Quantum Materials</title>
<link>https://hdl.handle.net/1721.1/157570</link>
<description>Ultrafast Terahertz Spectroscopy for the Manipulation and&#13;
Elucidation of Correlated Quantum Materials
Allington, Clifford
Light-matter interactions are at the heart of quantum mechanics. The photoelectric effect, blackbody radiation, and the hydrogen emission spectrum were all experimental observations of light and its interaction with matter that led to the discovery of the quantum mechanical nature of the universe. In modern research, the interaction of light with matter plays a significant role both in understanding the properties of and in controlling various aspects of quantum materials, a class of materials whose macroscopic properties are only understood through quantum mechanics. Quantum materials are often categorized into two classes, topological materials and strongly correlated materials, though the cross-over and interplay between these two aspects is a significant field of study as well. Strongly correlated materials exhibit exotic physical phases such as magnetism, superconductivity, or heavy fermion formation due to the strong interactions of electrons. Many of these properties hold significant promise for application, yet the ability to predict correlated physics from a theoretical standpoint is still at a young stage of development. To this end, experimental efforts to demonstrate and understand the interplay between different degrees of freedom in a material (spin, charge, lattice, and orbital) are essential for progressing in this direction.&#13;
&#13;
In this thesis, a variety of light-matter interactions using ultrafast techniques are explored in a set of quasi two-dimensional strongly correlated materials. These are bulk materials whose properties are strongly founded in the two-dimensional layers stacked on top of one another. A variety of Optical-Pump Terahertz-Probe spectroscopic methods are used to drive a system out of equilibrium while monitoring the low-energy physics in the terahertz (THz) spectral range. This part of the electromagnetic spectrum is essential to understanding many aspects of strongly correlated physics. For example, the charge carriers in a metallic (or photoexcited) material have a strong spectral weight here, and many of the collective modes of insulating phases, such as phonons or magnons, occur at these energies as well. Specifically, the collective modes of two van der Waals antiferromagnets are excited coherently with the use of ultrafast optical pulses. In the antiferromagnet NiPS₃ a new mechanism for launching a coherent magnon is discovered. In the multiferroic antiferromagnet NiI₂, evidence for a new type of quasiparticle, an electromagnon-polariton, is demonstrated in a non-equilibrium sample. Further, preliminary data regarding the measurement of a new type of Kondo hybridization gap (a pseudogap) in the kagome strange metal Ni₃In is reported using the photoexcited dynamics and the Rothwarf-Taylor bottleneck model.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157570</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrumental Effects in 21 cm Cosmology: One-point Statistics and Power Spectrum with the HERA Interferometer</title>
<link>https://hdl.handle.net/1721.1/157569</link>
<description>Instrumental Effects in 21 cm Cosmology: One-point Statistics and Power Spectrum with the HERA Interferometer
Kim, Honggeun
The epoch of reionization (EoR) signifies a critical phase in the universe’s evolution, marking the shift from a predominantly neutral intergalactic medium to the ionized state observed today. A key aspect of studying the EoR involves observing the redshifted 21 cm line emission with radio telescopes. A significant challenge in this endeavor is isolating the faint 21 cm signals from bright foreground emissions and systematics. This collection of works focuses on understanding the impact of instrumental systematic effects on statistical measurements, such as the one-point statistics and power spectrum, using the Hydrogen Epoch of Reionization Array (HERA). First, I investigate for the first time one-point statistics measured from image cubes based on HERA Phase I observations after foreground removal. I highlight the influence of systematics on these measurements by measuring the second and third moments. These analyses show that, despite efforts to mitigate systematics, the residual systematics still cause deviations in the measurements from the expected values. In addition, I evaluate EoR models against observational data, suggesting that the second-moment measurements likely reject the cold reionization model characterized by inefficient X-ray heating. The third moment, which captures non-Gaussianity features of the signals, is significantly diminished by the instrument response and further reduced by the foreground removal process, making it challenging to probe non-Gaussianity. However, there remains the potential to detect some skewness at low redshifts. One potential systematic for HERA involves calibration errors stemming from per-antenna perturbations due to feed misalignment. I have simulated these calibration errors by modeling realistic perturbed primary beams for HERA Phase II observations. 
The chromatic calibration errors are critical since they can cause foreground emission to contaminate the region of Fourier space expected to be dominated by cosmological signals. I then present the work focused on developing a method to mitigate the calibration errors and foreground leakage, thereby recovering the clean EoR window.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157569</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical and core-level X-ray spectroscopy of correlated two-dimensional materials</title>
<link>https://hdl.handle.net/1721.1/157567</link>
<description>Optical and core-level X-ray spectroscopy of correlated two-dimensional materials
Occhialini, Connor Alexander
The intersection of low-dimensionality and strongly correlated electrons in van der Waals (vdW) materials offers a rich landscape of ordered phases and associated excitations for potential applications in nanoelectronics. The coupling between distinct degrees of freedom in correlated materials provides routes to realize novel functional properties, which can be further manipulated by the high tunability intrinsic to vdW materials through, e.g., heterostructures and doping. However, identifying the mechanism of correlated phases poses a fundamental challenge due to coexistent and competing orders. This requires detailed knowledge of the microscopic interactions/excitation spectra, methods to disentangle the individual roles of coexistent orders, and selective probes of symmetry-breaking within different coupled degrees of freedom. In this thesis, we demonstrate the utility and complementarity of resonant X-ray spectroscopy and symmetry-selective optical probes in combination with appropriate external tuning parameters (e.g. strain, pressure, ligand substitution, layer number) for revealing the origin of correlated phases in low-dimensional vdW materials. We first investigate the triangular lattice antiferromagnet NiI₂. Frustrated exchange interactions result in a helimagnetic ground state and spin-induced ferroelectric order, making bulk NiI₂ a type-II multiferroic. Using a combination of optical spectroscopic probes, including Raman, magneto-optics, and second harmonic generation, we demonstrate the persistence of multiferroic order to the single-layer limit. We then aim to resolve the microscopic magnetic interactions and their interplay with the lattice symmetry to identify the origin of the magnetic ground state. 
Towards this goal, we investigate the magnetic ground state and transition temperature versus hydrostatic pressure and layer number, and directly probe the evolution of magnetic/structural orders with resonant magnetic X-ray scattering/structural diffraction, respectively. From these results, we demonstrate the central role of interlayer exchange interactions and their coupling to the structural symmetry in driving the magnetic ground state of NiI₂. We next investigate the broader class of triangular lattice nickel dihalides, NiX₂ (X = Cl, Br, I), to identify the origin of sharp optical excitations, i.e. excitons, in nickel-based vdW magnets. We employ Ni-L₃ edge resonant inelastic X-ray scattering (RIXS) to access a q-resolved and site-specific view into the excitation spectra. We identify the sharp excitons with spin-forbidden intra-configurational multiplets of octahedrally-coordinated Ni²⁺, which become renormalized by Ni-X charge transfer. We also observe a finite dispersion of these excitations, demonstrating a multiplet delocalization that is controlled by the ligand-tuned charge transfer gap in a process analogous to ground state superexchange. These results establish the microscopic origin of these excitons and provide a mechanism to explain their possible coupling to the magnetic order/excitations. Finally, we study the iron-based superconductor FeSe, which displays a rotational symmetry breaking electronic nematic phase in proximity to unconventional superconductivity without magnetic order. To understand the origin of nematicity, we investigate the ordering of the orbital degrees of freedom using X-ray linear dichroism with in-situ uniaxial strain tuning, electronic transport measurements and structural diffraction. We observe a lattice-independent orbital polarization acting as the primary nematic order parameter. 
This resolves the orbital origin of nematicity in FeSe and suggests that anisotropic spin fluctuations are the mechanism of unconventional superconductivity.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157567</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collagen-mimetic peptides for diagnosis and analysis</title>
<link>https://hdl.handle.net/1721.1/157566</link>
<description>Collagen-mimetic peptides for diagnosis and analysis
Borgula, Isabella M.
Collagen, the most abundant protein in the human body, is an essential scaffold for tissue development, regulation, and homeostasis. As a major structural component of the extracellular matrix, collagen is not static. Rather, it is highly diverse and dynamic, actively participating in tissue physiology. Collagen can be a challenging protein to study due to its massive size and heterogeneity across subtypes. A valuable tool to study and better understand collagen is a technology known as collagen-mimetic peptides (CMPs), which are synthetic peptides that mimic the natural structure of collagen. These peptides can be applied to study collagen structure and function, from its macromolecular architecture in tissues to the significance of molecular modifications on its amino acid sidechains. This thesis explores the use of CMPs in diagnostic applications, in which CMPs detect aberrations in native collagen, and in analytical contexts, in which CMPs act as a simplified system to understand collagen biochemistry. Chapter 2 investigates the ability of CMPs to identify collagen remodeling in a mouse model of pulmonary fibrosis, demonstrating their potential as non-invasive diagnostic tools for fibrotic diseases. Chapter 3 analyzes the collagen-rich desmoplastic reaction surrounding pancreatic ductal adenocarcinoma (PDAC) in murine models and human samples, highlighting the utility of CMPs in characterizing tumor microenvironments. Finally, Chapter 4 examines the structural implications of threonine phosphorylation on collagen stability, showcasing the value of CMPs in studying posttranslational modifications. The findings discussed in this thesis lay a foundation for future CMP applications in targeted drug delivery and biomaterials design.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157566</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Metrology with Ytterbium Ions for New Physics Search</title>
<link>https://hdl.handle.net/1721.1/157564</link>
<description>Precision Metrology with Ytterbium Ions for New Physics Search
Kniazev, Evgenii
Modern physics faces a growing discrepancy between the success of the Standard Model and the body of evidence pointing to New Physics beyond it. A powerful method of searching for New Physics is using quantum sensing tools based on Atomic, Molecular, and Optical physics. In particular, modern optical atomic clocks demonstrate unprecedented accuracy and precision. Complementary to high-energy searches with particle colliders, atomic clocks are used to place stringent bounds in tests of fundamental physics. One of the possible candidates for physics beyond the Standard Model is a carrier of a fifth force. Such a hypothetical particle that mediates interactions between leptons and quarks can potentially be detected in a tabletop atomic clock experiment. In particular, isotope shift measurements may be sensitive to the coupling induced by such particles. In this thesis, we describe the efforts to place bounds on this particle using isotope shifts of optical transitions in Ytterbium. We conduct the isotope shift experiment by measuring ions one at a time and in a co-trapped configuration following the protocol of correlation spectroscopy. We study the systematic uncertainty budget for both types of measurements. We apply the King plot method to the isotope shift spectroscopy data and observe King nonlinearity. Using the analysis of the nonlinearity patterns, we determine the significance of a second, currently unidentified source of King nonlinearity.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157564</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergence, Formation and Dynamics of Hot QCD Matter</title>
<link>https://hdl.handle.net/1721.1/157563</link>
<description>Emergence, Formation and Dynamics of Hot QCD Matter
Scheihing Hitschfeld, Bruno Sebastian
Understanding the dynamics of Quantum Chromodynamics (QCD) in quantitative detail is one of the main frontiers in particle physics. While the last century gave us the formulation of the theory of nuclear interactions, QCD, as well as that of the rest of visible matter encoded in the Standard Model of Particle Physics, much remains to be understood. In particular, the hot QCD matter produced in high energy collisions of heavy ions presents a unique challenge to theory and phenomenology due to the vast number of different phenomena that take place in such a collision, and even more so because it is an out-of-equilibrium process. In this thesis, we make progress in two concrete directions in the vast landscape of hot QCD physics. The first one is quarkonium transport inside quark-gluon plasma (QGP), the high temperature phase of QCD. Over the past two decades it has been realized that a significant fraction of quarkonium suppression in high energy heavy ion collisions comes from dynamic dissociation and recombination processes, instead of static screening of the interaction potential as originally proposed by Matsui and Satz. Our contribution is the formulation of the precise correlation functions in QCD at finite temperature that describe the dissociation and recombination processes of heavy quarkonium in QGP, as well as their calculation in weakly coupled QCD and strongly coupled N=4 supersymmetric Yang-Mills theory. We also formulate the Euclidean version of these correlation functions so that they may be calculated using Lattice QCD techniques. In this way, our results provide the necessary ingredients to carry out an analysis of the suppression of ϒ states in heavy ion collisions in terms of the parameters of the QCD Lagrangian.&#13;
The second contribution we make is the development of tools to understand the process of hydrodynamization in QCD kinetic theory and their application to a simplified description where only a subset of the QCD scattering mechanisms are included. By doing this, we learn that the process of hydrodynamization in this theory, and specifically, how memory of the initial condition is lost, follows the recently proposed Adiabatic Hydrodynamization scenario.&#13;
Concretely, hydrodynamization proceeds through a sequential process in which a monotonically shrinking set of low-energy states dominates the dynamics, where the opening of an energy gap relative to the ground state(s) signals the start of each stage of the process. The hydrodynamic attractor is reached when only one low-energy state remains as the ground state, and the system approaches local thermal equilibrium following the adiabatic evolution of this low-energy state.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157563</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetooptical studies of small-gap semiconductors : Hg₁₋ₓCdₓTe and InSb</title>
<link>https://hdl.handle.net/1721.1/157490</link>
<description>Magnetooptical studies of small-gap semiconductors : Hg₁₋ₓCdₓTe and InSb
Weiler, Margaret Horton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157490</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcription of the adenovirus genome during productive infection of HeLa cells.</title>
<link>https://hdl.handle.net/1721.1/157484</link>
<description>Transcription of the adenovirus genome during productive infection of HeLa cells.
Price, Richard Pearsall.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1972; One unnumbered leaf inserted.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157484</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The action of colicins E1 and K on proline transport in isolated membrane vesicles of E. coli.</title>
<link>https://hdl.handle.net/1721.1/157483</link>
<description>The action of colicins E1 and K on proline transport in isolated membrane vesicles of E. coli.
Kabat, Jonathan Peter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1971; Seventeen unnumbered leaves inserted. Vita.; Bibliography: leaves 105-107.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157483</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of fatigue crack initiation and propagation in an age hardenable aluminum alloy.</title>
<link>https://hdl.handle.net/1721.1/157482</link>
<description>Mechanisms of fatigue crack initiation and propagation in an age hardenable aluminum alloy.
Erhardt, Karl Edward.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Vita.; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157482</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symplectic fibrations and weight multiplicities of compact groups</title>
<link>https://hdl.handle.net/1721.1/157481</link>
<description>Symplectic fibrations and weight multiplicities of compact groups
Lerman, Eugene.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1989; Includes bibliographical references (p. 71-72).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157481</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activation of the c-Ha-ras oncogene</title>
<link>https://hdl.handle.net/1721.1/157479</link>
<description>Activation of the c-Ha-ras oncogene
Tabin, Clifford James.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1984; Bibliography: leaves 202-214.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157479</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magneto-optical studies in In[subscript 1-x]Ga[subscript x]As[subscript y]P[subscript 1-y] semiconducting alloys</title>
<link>https://hdl.handle.net/1721.1/157477</link>
<description>Magneto-optical studies in In[subscript 1-x]Ga[subscript x]As[subscript y]P[subscript 1-y] semiconducting alloys
Alavi, Kambiz.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1981; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157477</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk analysis for earthquake-induced ground failure by liquefaction.</title>
<link>https://hdl.handle.net/1721.1/157476</link>
<description>Risk analysis for earthquake-induced ground failure by liquefaction.
Yegian, Mishac K.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 281-292.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157476</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photosynthetic regeneration of ATP using native and immobilized bacterial chromatophores.</title>
<link>https://hdl.handle.net/1721.1/157473</link>
<description>Photosynthetic regeneration of ATP using native and immobilized bacterial chromatophores.
Yang, Ho Seung.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1976; Vita.; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157473</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pressure broadening of infrared absorption lines at moderate densities.</title>
<link>https://hdl.handle.net/1721.1/157472</link>
<description>Pressure broadening of infrared absorption lines at moderate densities.
Wormhoudt, Joda Cornelius.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157472</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Friction and Spectroscopic Probes of New Physics with Trapped Ions</title>
<link>https://hdl.handle.net/1721.1/157355</link>
<description>Surface Friction and Spectroscopic Probes of New Physics with Trapped Ions
Counts, Ian T.
To trap a single ion under vacuum is to control a microscopic, isolated quantum laboratory. This thesis describes two research programs made possible by single-ion trapping of Yb⁺. The first program is a study of a paradigmatic frictional interface: a trapped ion transported along a one-dimensional multistable energy landscape formed by a periodic, corrugated optical potential and a harmonic electric trapping potential. Two regimes of friction behavior are differentiated: single-slip (whereby an ion slips out of a corrugated groove and sticks in its neighboring groove) versus multislip (whereby the ion instead sticks in its next-neighboring or next-next-neighboring groove). By varying transport speed and corrugation depth, experimental signatures of both regimes are measured and used to construct a predictive Boltzmann model. At low enough corrugations, the ion can be expected to tunnel through (in addition to slip over) the potential barriers of the energy landscape, leading to a reduction in friction termed quantum lubricity. Attempts at observing quantum tunneling via static Rabi oscillations are described. No repeatable smoking-gun signature of tunneling was found; the suppression of quantum tunneling behavior is attributed to certain technical limitations of the experimental apparatus, and possible remedies are considered. The second (and largest) research program of this thesis is a probe of new physics via isotope shift spectroscopy. Shift measurements are taken to sub-kHz precision across all five even-numbered isotopes of Yb⁺ along three clock transitions (quadrupolar shifts S₁/₂ → D₅/₂ and S₁/₂ → D₃/₂, and octupolar shift S₁/₂ → F₇/₂). Deviations from theoretical predictions have been found that indicate higher-order Standard Model effects or even beyond-the-Standard-Model physics. Spectroscopic design and shift results, as well as possible theoretical conclusions, are discussed.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157355</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-level design of low-carbon structures</title>
<link>https://hdl.handle.net/1721.1/157347</link>
<description>System-level design of low-carbon structures
Fang, Demi L.
“What is more likely to be associated with a reduction in emissions: switching from concrete to timber, or shortening the spans throughout the building?” While such insights are valuable for mitigating emissions from structural systems during early stages of design, it is difficult to answer these types of questions in current paradigms of performance-driven design. This dissertation makes several original contributions to the system-level design of low-carbon structures. First, a literature-supported network of strategies available to reduce emissions during early-stage structural design is established and evaluated on the bases of literature availability, impact, implementability, and compatibility. Material efficiency and material choice represent two key levers for reducing emissions in structural design, but it is difficult to navigate trade-offs between these strategies at the system level of structural design. Holistic design strategies can help achieve this, but current paradigms of performance-driven design (e.g. deploying rules of thumb, comparing a few design options, and optimization) are limited in their capacity to inform decision-making towards higher-performing designs. There is a particular opportunity to produce these insights using data-driven approaches given the growing quality and quantity of data in the field of low-carbon structural design. In response, this dissertation analyzes both types of data available in the field: wild data (measured from industry) and synthetic data (produced from bottom-up parametric structural models). Data from over 200 fully designed structural systems from a structural engineering firm are analyzed.
This analysis is the first to 1) provide empirical evidence that floors and foundations represent the largest opportunities for carbon reductions and 2) evaluate the relationship between structural material quantities and embodied carbon in structural systems (many analyses evaluate the latter without the former). In a field where material choice is widely perceived as the predominant lever for reducing emissions, these new insights importantly affirm the prominent role of material efficiency in reducing a structural system’s emissions. While the design space of wild data includes a diverse variety of projects, leveraging a synthetic dataset computed from a bottom-up parametric model helps produce insights specific to the design problem at hand. The final contribution of this dissertation is a computational framework that leverages synthetic data to empower decision-making in design. The framework addresses two challenges: 1) extracting decision-making insights from design data, and 2) comparing decision-making across continuous (numerical) and categorical variables, which are typical in most design problems. In this framework, a machine learning model is trained on a provided set of design data to compute gradients across the design space. These gradients are distilled into “influence metrics”, which offer a novel, accessible way to build and supplement intuition on low-carbon design decisions. A few case studies in low-carbon structural design are presented to demonstrate the use of the proposed method with synthetic datasets. By striking a meaningful balance between applying rules of thumb and optimization, the method empowers a paradigm shift from performance-driven design to performance-informed, human-driven design.&#13;
Key words: embodied carbon of structural systems, design decision-making, low-carbon structural design
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157347</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-similar singularity formation and wellposedness theory for compressible fluids and dispersive PDE</title>
<link>https://hdl.handle.net/1721.1/157346</link>
<description>Self-similar singularity formation and wellposedness theory for compressible fluids and dispersive PDE
Cao Labora, Gonzalo
In this thesis, we study different problems related to singularity formation and local wellposedness for fluid equations and dispersive PDE. Regarding singularity formation, we construct radially symmetric smooth self-similar profiles for the compressible Euler equations which exhibit an implosion-type singularity in finite time. This is the first part of the thesis. The second part consists of a non-radial stability analysis around those profiles to show singularity formation for adequate small perturbations of the profile. In particular, this stability analysis allows us to conclude the existence of singularities for periodic initial data, and also to obtain singularity formation for the corresponding equation with dissipation: the compressible Navier-Stokes equations. Moreover, the self-similar profiles constructed are intimately related to dispersive equations, and we show how to use them to prove finite-time singularity formation for some supercritical defocusing NLS equations, using their hydrodynamical formulation. The third part of the thesis studies a different dispersive equation: the Zakharov–Kuznetsov equation, a generalization of the KdV equation to higher dimensions with applications in plasma physics. We improve the local wellposedness in the cylinder in both the deterministic and the probabilistic settings.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157346</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Contract, the Contractor, and the Capitalization of American Building</title>
<link>https://hdl.handle.net/1721.1/157336</link>
<description>The Contract, the Contractor, and the Capitalization of American Building
Spencer, Chelsea Anne
The heroic claims of twentieth-century architects notwithstanding, modern American architecture was built by general contractors. This new type of builder was unknown to US Americans before the Civil War, but by the turn of the twentieth century they commanded a powerful position in the widening gulf between architects and the construction of their buildings. Operating at the critical inflection point between projection and materialization, paper and concrete, contractors appealed to investment-minded clients as fellow businessmen, offering them what neither craft builders nor professional architects could deliver: a completed building, for a fixed price, on a guaranteed schedule.&#13;
&#13;
This dissertation tells the story of how building became contracting in the United States during the long nineteenth century. Known to legal historians as the age of contract, the nineteenth century gave rise to a constellation of juridical and economic ideas that revolved around a vision of social relations modeled on market exchange and possessive individualism. Revealing the ideological and institutional foundations of today’s construction industry, the dissertation shows how nineteenth-century thinking about contract, freedom, value, and risk shaped the architectural building contract, the limits of the architecture profession, the practice of general contracting, and thus the modern relationship between architecture and building.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157336</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling technology pathways and retrofit adoption to achieve city-wide building emissions reduction goals</title>
<link>https://hdl.handle.net/1721.1/157328</link>
<description>Modeling technology pathways and retrofit adoption to achieve city-wide building emissions reduction goals
Berzolla, Zachary M.
Achieving net zero emissions from buildings by 2050 is an unprecedented challenge that will require an all-in effort at the local, state, federal, and international levels. The exact path to this goal in existing buildings varies widely from one community to another; thus, local planning efforts and a bottom-up approach are needed to attain emissions reduction goals. This dissertation lays out a framework for creating technology pathway roadmaps to help cities around the world identify actionable strategies to achieve their building emissions reduction goals. These “technical potential” roadmaps can help policymakers quantify the exact requirements, in terms of retrofits, workforce, and materials, to attain their end goals. The application of these tools in 24 cities around the world is discussed. A sound roadmap is only as good as its implementation, and current retrofit rates lag behind what is necessary to achieve 2050 goals on time. One oft-cited barrier to retrofit adoption is the high upfront cost. This dissertation documents a survey carried out by the author and the resulting model used to quantify households’ willingness to pay for retrofits. Leveraging the willingness-to-pay model enables policymakers to analyze the techno-economic pathways to their goals. Finally, one of the greatest challenges to achieving emissions reduction goals is the timeline of retrofit adoption: under the current business-as-usual retrofitting rate, less than a fifth of the building stock will be retrofitted by 2050. To help policymakers grasp this temporal challenge, this dissertation introduces a novel application of technology diffusion models that can quantify retrofit adoption over time. The tools developed in this dissertation aim to provide communities of all sizes with data-driven insights to meet their ambitious but necessary building-related decarbonization goals in a timely manner.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157328</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Words to Worlds: Bridging Language and Thought</title>
<link>https://hdl.handle.net/1721.1/157326</link>
<description>From Words to Worlds: Bridging Language and Thought
Wong, Lionel Catherine
What do we understand when we understand language? Human language offers a broad window into the landscape of our thoughts. We talk about what we see, believe, and imagine, posing questions and communicating our plans. Language, in turn, stocks our mental inventories with new concepts and theories, communicating ideas that we might not otherwise have discovered by thinking on our own even over the course of a lifetime. How do we make meaning from language, and how, in turn, does the meaning we construct from language draw on the other resources and capacities of human thought, from perception, to mental simulation and decision making? This thesis proposes a computational framework for modeling language-informed thinking, organized into two parts. In the first, I overview the overarching framework that makes up the backbone of this thesis, Rational Meaning Construction, which proposes how natural language can construct arbitrary expressions in a flexible, symbolic, and probabilistic language of thought that supports general inferences. I present examples and experiments demonstrating the range of this theory, modeling how concrete propositions and questions in language can update and query beliefs about many different domains of knowledge. In the second section, I turn to language that communicates more abstract conceptual knowledge – generic background concepts and theories that we can learn from language, and which give us building blocks for representing more concrete beliefs. I present three models that build on the basic premises of Rational Meaning Construction to learn new lexical concepts and theories from language. The first models how we can learn new theories from generic sentences that explicitly communicate or implicitly presuppose abstract knowledge. The second elaborates on this model to also incorporate environmental feedback alongside information from language. 
The third suggests how we can learn the meanings of new words from scratch, with very little linguistic data, using principles of both representational and communicative efficiency to guide learning. I conclude by discussing open questions that this thesis raises about how we learn and understand language, and outline future directions that might make progress on answering them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157326</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low temperature deformation of copper single crystals oriented for multiple slip</title>
<link>https://hdl.handle.net/1721.1/157306</link>
<description>Low temperature deformation of copper single crystals oriented for multiple slip
Saimoto, Shigeo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy, 1964; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157306</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sobolev tests for uniformity on compact Riemannian manifolds,</title>
<link>https://hdl.handle.net/1721.1/157305</link>
<description>Sobolev tests for uniformity on compact Riemannian manifolds,
Giné, Evarist,
            1944-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157305</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of taxing unemployment benefits</title>
<link>https://hdl.handle.net/1721.1/157299</link>
<description>The effects of taxing unemployment benefits
Ellis, W. Philip.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1999; "September 1999."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157299</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>What makes the money go round?</title>
<link>https://hdl.handle.net/1721.1/157298</link>
<description>What makes the money go round?
Rosenblat, Tanya,
            1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 129-133).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157298</guid>
</item>
<item>
<title>Essays on the economics of work and family</title>
<link>https://hdl.handle.net/1721.1/157297</link>
<description>Essays on the economics of work and family
Johnson, John H.
            (John Henry),
            1973-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 117-120).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157297</guid>
</item>
<item>
<title>Essays on international macroeconomics : the real exchange rate, income inequality and the international investment position of small countries</title>
<link>https://hdl.handle.net/1721.1/157296</link>
<description>Essays on international macroeconomics : the real exchange rate, income inequality and the international investment position of small countries
García, Pablo S.
            (Pablo Silva),
            1970-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157296</guid>
</item>
<item>
<title>Essays on crises</title>
<link>https://hdl.handle.net/1721.1/157295</link>
<description>Essays on crises
Dudek, Maciej Konrad,
            1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157295</guid>
</item>
<item>
<title>Social security, pensions, and the retirement decisions of individuals and couples</title>
<link>https://hdl.handle.net/1721.1/157294</link>
<description>Social security, pensions, and the retirement decisions of individuals and couples
Coile, Courtney.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157294</guid>
</item>
<item>
<title>Repeated games with private information</title>
<link>https://hdl.handle.net/1721.1/157293</link>
<description>Repeated games with private information
Amarante, Massimiliano,
            1966-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (leaves 56-59).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157293</guid>
</item>
<item>
<title>The information content of asset prices and emerging market crises</title>
<link>https://hdl.handle.net/1721.1/157292</link>
<description>The information content of asset prices and emerging market crises
Aguiar, Mark.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 91-95).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157292</guid>
</item>
<item>
<title>Essays on market integration and productivity</title>
<link>https://hdl.handle.net/1721.1/157291</link>
<description>Essays on market integration and productivity
Park, Charles C.,
            1968-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, February 1998; Includes bibliographical references (leaves 110-112).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157291</guid>
</item>
<item>
<title>Three essays on search and bargaining models</title>
<link>https://hdl.handle.net/1721.1/157290</link>
<description>Three essays on search and bargaining models
Dasgupta, Sugato.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1999; Includes bibliographical references (leaves 92-94).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157290</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on information production and information use in financial markets</title>
<link>https://hdl.handle.net/1721.1/157289</link>
<description>Essays on information production and information use in financial markets
Solomon, Amit.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1998; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157289</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-Layer Nanoparticles for Cytokine Delivery</title>
<link>https://hdl.handle.net/1721.1/157260</link>
<description>Layer-by-Layer Nanoparticles for Cytokine Delivery
Pires, Ivan S.
In the past decade, cancer immunotherapy has emerged as a promising strategy for cancer treatment. However, immunotherapy has failed to improve responses in certain cancers such as ovarian cancer (OC). The action of cytokines in the tumor microenvironment (TME) is key to regulating immune responses, but dose-limiting toxicities limit the application of cytokines in cancer therapy. One promising approach to improving treatment with cytokines is nanoparticles (NPs), which, when modulated via layer-by-layer (LbL) assembly, can provide many of the desirable characteristics of cytokine-delivery vehicles, including tumor cell targeting, subcellular localization, and improved pharmacokinetics. In this thesis, we address some aspects of NPs that have limited their clinical utility, including manufacturing, control over self-assembly, and mechanistic understanding of their interactions in biological environments. The focus here was on liposomal LbL-NPs coated with a bilayer of poly-L-arginine (PLR) and poly-L-glutamate (PLE). The PLR/PLE coating targets NPs to cancer cell surfaces, which allows for extended extracellular presentation of cargos. This ability is used for targeted delivery of a potent immunostimulant, interleukin-12 (IL-12), to disseminated tumors in metastatic OC. Aspects of the manufacturing of other lipid-based nanocarriers, such as discoidal assemblies and immune stimulating complexes (ISCOMs), are also explored. We show that employing a bottom-up approach to produce lipid-based NPs from mixed micelles allows for greater control over NP self-assembly. With this procedure, we generated ISCOMs co-loaded with monophosphoryl lipid A (MPLA) via a scalable approach for clinical-scale manufacturing of the adjuvant, termed Saponin MPLA NanoParticles (SMNP). Moreover, we discover that this approach allows for precise control over liposome size from 50 nm to 1 µm with minimal polydispersity.
Lastly, by exploiting lipid headgroup charge repulsion, we find that multivalent charged lipids yield discoidal lipid nanoparticles through this approach. Unlike previous attempts to generate lipid-based discs, this new class of NPs, termed charge-stabilized nanodiscs (CNDs), does not require disc-stabilizing agents such as proteins or polymers. CNDs are shown to be promising drug delivery vehicles, especially when coated with PLR/PLE via the LbL technique, where they achieve greater tumor accumulation than LbL-coated liposomes. Regarding the use of LbL-NPs for cytokine delivery via PLR/PLE-coated NPs, we found that covalent conjugation of IL-12 to the liposomal core of LbL-NPs greatly improves targeting and retention of IL-12 in peritoneally disseminated OC tumors, enabling immunological and therapeutic effects not observed with free cytokine treatment. Mechanistic investigations revealed that these LbL-NPs rapidly accumulated in tumor nodules upon intraperitoneal (i.p.) administration, wherein shedding of the LbL coating allowed for gradual release of IL-12-lipid conjugates via lipid extraction by serum proteins present in interstitial fluid. Upon a single dose of IL-12 conjugated to LbL-NPs in an intraperitoneally disseminated, highly metastatic OV2944 (HM-1) mouse model, we observed a dramatic increase in T cell levels within the ascites and the tumor nodules dispersed within the i.p. space, which was not observed with either free cytokine or unlayered IL-12-NPs. When evaluated for effectiveness in this highly aggressive model, two doses significantly enhanced survival compared to even five times (5x) the amount of free cytokine. Remarkably, while the model was non-responsive to checkpoint inhibitor (CPI) therapy with anti-PD1 and anti-CTLA4 alone, combining CPI therapy with LbL-IL-12-NPs achieved complete responses with robust induction of immune memory. The mice were able to rapidly clear rechallenges with fresh cancer cells in the i.p. space. 
Towards the clinical translation of LbL-IL-12-NPs, we demonstrate that LbL assembly is readily performed via microfluidic mixing technology amenable to clinical-scale manufacturing. We also find that the polymer amount used can be titrated to omit time-consuming purification steps, and that the LbL film conformation is key to maintaining therapeutic efficacy, as thicker films hinder IL-12 delivery. Lastly, we uncover that the binding target of PLE on the surface of cancer cells is SLC1A5, a glutamine transporter.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157260</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation of Chemical Kinetic Models including Macromolecules in Multiphase Systems</title>
<link>https://hdl.handle.net/1721.1/157258</link>
<description>Automatic Generation of Chemical Kinetic Models including Macromolecules in Multiphase Systems
Pang, Hao-Wei
Detailed chemical kinetic models are indispensable tools for unraveling the complexities of industrial and environmental chemistry systems. Many important industrial and environmental chemistries involve thousands of species and hundreds of thousands of complex pathways, which are difficult to resolve manually. To address this challenge, automatic mechanism generation software has been developed. Previous studies have demonstrated promising quantitative agreement of automatically generated mechanisms with experimental data. However, these studies have primarily focused on small molecules in single-phase systems, overlooking the complexities of multiphase systems and macromolecules commonly found in industrial and environmental processes. This thesis introduces advancements in three key areas of automatic mechanism generation. Part I extends the current framework of automatic mechanism generation to tackle the longstanding issue of polymer fouling in industrial systems. Two detailed kinetic models are presented: one for anaerobic fouling and the other for aerobic fouling in distillation columns. Modeling innovations are introduced that allow one to construct models including thousands of chemical reactions occurring in the liquid and film phases, vapor-liquid equilibria of hundreds of molecules, transport between the phases, and flows between the trays. All of these factors significantly affect the fouling rate. Most of the critical model parameters are derived from quantum chemistry calculations. The modeling method is validated using experimental film growth measurements made with a quartz-crystal microbalance. These models clarify the mechanistic details of the fouling process. Part II develops machine learning models for predicting thermochemical parameters in gas and liquid phases. A decision tree model based on subgraph isomorphism for gas-phase radical thermochemistry is presented. 
The model demonstrates improved accuracy compared to the existing empirical model and reliable uncertainty estimates for both interpolation and extrapolation tasks. Additionally, the effectiveness of active learning for building models of solvation free energy is explored under various compositions of initial training sets and uncertainty estimation methods for data acquisition. The possibility of aiding data acquisition with unsupervised learning for active learning is also assessed. Part III adds new features and enhances the performance of multiple packages under the Reaction Mechanism Generator software suite, originally developed by the MIT Green Group. New tools are developed to facilitate thermochemical data augmentation, multiphase simulation for automatic mechanism generation, and the automatic implementation of quasi-steady-state assumptions during the simulation of detailed kinetic models. A new species and reaction selection algorithm is developed to enable the automatic generation of mechanisms for molecular growth systems. Various speed improvement techniques are applied to improve both the simulation speed and sensitivity analysis of large-scale detailed kinetic models. By addressing these key areas, this thesis contributes to the advancement of automatic mechanism generation, paving the way for more accurate and efficient modeling of complex chemical systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157258</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From the body to the brain: Studying drug delivery and physiological interactions using MRI</title>
<link>https://hdl.handle.net/1721.1/157253</link>
<description>From the body to the brain: Studying drug delivery and physiological interactions using MRI
Dawson, Miranda
The brain is in continuous communication with the rest of the body. Nerves connect the peripheral and central nervous system, and complex vascular networks selectively permit passage of small molecules with an exogenous origin into the brain parenchyma. Although brain-body interactions underpin a host of cognitive and physiological phenomena, they are often overlooked in studies of brain biology and mental function. We studied aspects of the interaction between brain and body using functional and molecular magnetic resonance imaging (MRI), in combination with other tools. In a first project, we examined properties of the blood-brain barrier (BBB). The BBB is a highly selective collection of endothelial cells and tight junction proteins that restricts passage of extracerebral substances from the blood vessels into the brain tissue. We disrupted and bypassed the BBB to deliver an MRI contrast agent and quantitatively assessed the resulting contrast dynamics. We discovered that individual brain regions display method-independent susceptibility to BBB disruption and washout, suggesting principles for calibrating drug delivery and understanding the propensity for chemical exchange across the BBB. We then used one of the widefield brain delivery techniques to apply a novel contrast agent for the study of the cholinergic system, a neurochemical pathway important for motor control mechanisms in both the central and peripheral nervous systems. Kinetic modeling of probe distributions revealed intrinsic localization of cholinergic enzymes. Finally, we applied related neuroimaging tools to an animal model of substance abuse, a pathology for which brain-body interactions are particularly engaged but underappreciated. We designed a study to investigate the role of the insula, a cortical mediator of peripheral physiological signals, in responses to opioid exposure. 
With molecular imaging approaches, we show the insula shapes drug-dependent brain phenotypes and physiological responses during substance exposure and withdrawal. In all, this work serves as a demonstration of the power of quantitative neuroimaging methods for multifaceted investigation of brain and body relationships.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157253</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering non-equilibrium mechanisms that regulate structure and function of biomolecular condensates using phase-field modeling</title>
<link>https://hdl.handle.net/1721.1/157252</link>
<description>Discovering non-equilibrium mechanisms that regulate structure and function of biomolecular condensates using phase-field modeling
Natarajan, Pradeep
Biomolecular condensates are phase-separated assemblies in living cells that form through cooperative interactions between their constituents such as proteins and RNAs. They are emerging as important organizers of biochemistry in cells and are dysregulated in disease. Understanding the physical principles that shape the form and function of these condensates is a fundamental scientific challenge, which, if addressed, can provide novel therapeutic avenues to improve human health. The equilibrium principles behind condensate formation are well understood. Several studies have investigated the impact of multivalency, protein sequence, protein-RNA interactions, and the role of DNA in modulating biomolecular interactions that drive phase separation. However, the living cell is inherently out of equilibrium, with non-equilibrium reactions that constantly burn ATP and turn over biomolecules. My thesis investigates how the interplay between biomolecular interactions and non-equilibrium reactions that turn over biomolecules affects the structure and function of biomolecular condensates. Phase-field modeling is used for these investigations, as this approach has been historically successful in answering similar questions in other fields such as materials science. Prior work shows that proteins present in biomolecular condensates associated with RNA transcription undergo complex coacervation with the RNA product. The first project in this thesis investigates the interplay between complex coacervation and spatially heterogeneous RNA synthesis on condensate morphology and dynamics using a phase-field model. This simple model exhibits a rich variety of dynamical behaviors and steady states. It also provides a unifying framework to explain diverse experimental observations related to condensate morphology and dynamics such as vacuole formation, aspherical shapes, directed motion, and splitting-fusion behaviors. 
The second project investigates how transcription of messenger RNA (mRNA) by transcriptional condensates is modulated by other RNAs in the vicinity, such as long non-coding RNAs (lncRNAs). Our model reveals that lncRNA transcription in the vicinity can regulate mRNA transcription by altering the protein concentration and lifetime of transcriptional condensates, promoting mRNA transcription from genes expressed at a low level and inhibiting transcription from highly expressed genes. This model provides a unifying framework to reconcile conflicting observations in the literature about transcriptional regulation by lncRNAs. The final project focuses on the fibrillar center of the nucleolus, an important condensate that is involved in ribosome biogenesis. Using a phase-field model that explicitly accounts for rRNA-protein interactions in the nucleolus and the non-equilibrium reaction of rRNA transcription, we show that the coarsening of fibrillar centers is arrested, leading to a preferred size. Altering this size affects rRNA export and processing from the fibrillar centers. These predictions are validated by experiments. Using a combination of experiments and theory, we uncover the non-equilibrium mechanism underlying size control of fibrillar centers and the functional consequences of this size control on rRNA processing.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157252</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular, Genetic, and Process Approaches for Improving Secreted Pharmaceutical Protein Quality in Komagataella phaffii</title>
<link>https://hdl.handle.net/1721.1/157251</link>
<description>Molecular, Genetic, and Process Approaches for Improving Secreted Pharmaceutical Protein Quality in Komagataella phaffii
Yang, Yuchen
Biopharmaceutical products constitute a significant portion of the global bioeconomy. Compared to traditional synthetic small-molecule drugs, recombinant therapeutic proteins offer advantages like enhanced specificity and reduced side effects, and there has been tremendous growth in their innovation thanks to modern DNA technologies and AI-driven algorithms. While mammalian platforms such as Chinese Hamster Ovary (CHO) cells are commonly used for their high production titer and capability for complex post-translational modifications, their high cost of goods manufactured can greatly constrain biopharmaceutical global accessibility. The yeast Komagataella phaffii is a prime candidate for next-generation biomanufacturing for reasons including simpler host biology, reduced time to market, and better sustainability. Nevertheless, product quality, such as size/charge variants and non-human glycosylation, can be of major concern for proteins secreted from this host organism. This thesis explores three different engineering approaches aimed at improving the quality of both aglycosylated and glycosylated proteins, with a particular focus on monoclonal antibodies, the leading class of protein biopharmaceuticals by both sales and innovation. Firstly, we demonstrated significant quality improvements through molecular sequence engineering of aglycosylated monoclonal antibody backbones. By making informed, conservative mutations to two or three amino acid residues, we greatly reduced product-related variants from proteolysis and N-terminal variations. We further showed the comparability between yeast- and CHO-secreted products, providing a framework for rapid product development with this unconventional yeast. Secondly, we applied CRISPR-Cas9 gene editing technology to humanize the glycosylation pathway of K. phaffii. We achieved homogeneous G0 glycosylation on a reporter peptide by resolving a previously unreported synthetic lethality via a transcriptomics-informed approach. 
Key challenges for monoclonal antibody glycosylation were also identified through further comprehensive pathway engineering. Lastly, we examined the performance of glycoengineered K. phaffii strains under varied process conditions. Employing a machine learning algorithm, we improved the desired glycan abundance on a subunit vaccine candidate. The process-robustness of engineered strains suggests the potential of this host as a viable commercial biomanufacturing host.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157251</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microporous Polymer-Metal Organic Framework (MOF)&#13;
Hybrid Materials for Separations</title>
<link>https://hdl.handle.net/1721.1/157250</link>
<description>Microporous Polymer-Metal Organic Framework (MOF)&#13;
Hybrid Materials for Separations
Wu, Wan-Ni
Membrane-based separation holds significant promise for reducing the high energy consumption associated with traditional thermal-based separation processes in the chemical industry. Recent advancements in microporous materials, such as polymers of intrinsic microporosity (PIMs) and metal-organic frameworks (MOFs), have demonstrated performance improvements over conventional polymers. Mixed-matrix membranes (MMMs) have emerged as a potent strategy, combining the processability of polymers with the superior separation properties of MOFs to create high-performance membranes. Additionally, the integration of MOFs into polymers can mitigate stability issues such as plasticization, swelling, and physical aging. This thesis investigates MMMs based on PIM-1 and its derivatives, along with UiO MOFs, for gas and organic solvent-based separations. The studies focus on enhancing polymer–MOF interfacial compatibility, understanding penetrant transport, and addressing key challenges in MMM design and fabrication. A longstanding challenge in MMM fabrication is poor polymer–MOF compatibility, leading to particle agglomeration and non-selective interfacial voids. To address this, the strategy of decorating polymers and MOFs with compatible functional groups was explored. By studying UiO-66-NH2 MOF and carboxylic acid-functionalized PIM-1 (PIM-COOH), it was demonstrated that MMMs with compatible functional groups exhibit enhanced polymer–MOF interaction and plasticization resistance. To further understand transport within these MMMs, self-diffusivities of gases were measured using pulsed-field gradient nuclear magnetic resonance and compared to macroscopic diffusivities obtained from permeation and sorption analysis. 
The PIM–MOF material platform was also extended to solvent-based separations. To understand solvent transport through microporous polymers, intrinsic properties of swollen polymers were obtained both experimentally and computationally, and these properties were correlated with solvent transport metrics. Finally, MMMs composed of PIM-COOH and UiO MOFs with systematically increasing pore apertures were evaluated for their solvent nanofiltration performance. Key challenges such as MOF instability and non-ideal polymer–MOF interfaces were identified. In summary, this thesis delves into the structure-property relationships of microporous materials for gas and solvent-based separations, offering insights that can guide the future design of advanced composite membranes for challenging separations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157250</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of the hif-1-dependent hypoxic stress response by C. elegans</title>
<link>https://hdl.handle.net/1721.1/157246</link>
<description>Regulation of the hif-1-dependent hypoxic stress response by C. elegans
Diehl, Calista Sorine
All aerobic organisms need a way to sense oxygen levels and respond accordingly when in an unfavorable environment. In almost all metazoans, oxygen is both sensed and regulated by the HIF-1 (hypoxia inducible factor) transcription factor that is activated in periods of hypoxia and goes on to regulate hundreds of genes allowing for appropriate adaptations to hypoxia. HIF-1 activation results in changes at the cellular, tissue and whole organism levels such as increases in glycolysis, vascularization and erythropoiesis; HIF-1 is a critical factor in human development as well as progression of numerous diseases including ischemic stroke, COPD and cancer. HIF-1 is negatively regulated by the O2-dependent prolyl hydroxylase EGL-9 (known as EGLN, PHD, or HIF-PH in mammals). In normoxic conditions, EGL-9 uses ambient O2 to hydroxylate HIF-1. Hydroxylated HIF-1 is recognized by the von Hippel-Lindau (VHL-1) tumor suppressor protein, a component of an E3-ubiquitin ligase complex that targets HIF-1 for proteasomal degradation. In hypoxic conditions, EGL-9 is unable to hydroxylate HIF-1; stabilized HIF-1 enters the nucleus to regulate the expression of target genes that coordinate the hypoxia response. Increased activity of HIF-1, produced by either hypoxia or an egl-9(lf) mutation, induces the hypoxic stress response, which coordinates numerous adaptive changes in C. elegans, including retention of eggs in the uterus, decreases in locomotion and defecation rates, and increased resistance to not only hypoxia but also other stresses including oxidative stress and ER stress. By identifying suppressors of the egl-9(lf) mutant phenotype of egg retention, we have identified two independent pathways that regulate aspects of the hypoxic response in C. elegans. 
First, we discovered that loss of the conserved nonsense-mediated decay (NMD) pathway, an RNA surveillance mechanism that degrades aberrant mRNA transcripts with premature termination codons, suppressed the egl-9(lf)-induced changes in egg laying and defecation and caused increased hypoxia sensitivity. Other aspects of the egl-9(lf) phenotype, such as resistance to oxidative stress and changes in locomotion, were not affected by NMD-pathway mutations, indicating that NMD modulates specific aspects of the hypoxia response. Secondly, we found that loss of the neprilysin metallopeptidase, nep-2, suppressed the egl-9 Egl phenotype through the degradation of multiple neuropeptides including the known NEP-2 target SNET-1. Our findings reveal two different pathways that function downstream of egl-9 to regulate aspects of the hypoxic stress response, both providing a new pathway with which to study the neuromuscular control of egg laying using NEP-2, and critically showing the integration of the evolutionarily conserved hypoxic-stress response and nonsense-mediated decay pathways.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157246</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A synthetic biology platform for malaria parasites based on orthogonal transcriptional control</title>
<link>https://hdl.handle.net/1721.1/157237</link>
<description>A synthetic biology platform for malaria parasites based on orthogonal transcriptional control
Cárdenas Ramírez, Pablo
Malaria is responsible for half a million deaths each year in some of the poorest communities around the world. Furthermore, the evolution of drug resistance among malaria parasites threatens to continue this trend. However, our understanding of malaria parasite biology is held back by a lack of tools with which to study the function of their genes. In light of this, we have created systems that control gene expression in the malaria parasite Plasmodium falciparum using bacterial repressor proteins. These are the first tools to reliably control malaria parasite transcription and offer the most robust method of conditional gene expression in Plasmodium parasites to date. We develop automated DNA design software to apply this technology to study essential parasite genes for functional genomics and confirm compound-protein interactions for drug discovery. We hope these tools advance efforts to engineer and control malaria parasites in the future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157237</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Degradation Mechanisms and Applications in Ion Intercalation Materials</title>
<link>https://hdl.handle.net/1721.1/157236</link>
<description>Degradation Mechanisms and Applications in Ion Intercalation Materials
Zhuang, Debbie
Lithium-ion batteries (LiBs) are a pivotal energy storage technology, widely adopted for their high energy density and safety. Macroscopically, LiBs operate at micrometer length scales, but they consist of many active-material nanoparticles that participate in reversible electrochemical reactions to store and release energy. These particles control the crucial processes for energy storage in macroscopic devices, making energy storage in LiBs a process spanning multiple length and time scales. However, despite the ubiquitous application of LiBs across many industries, degradation limits their lifespan, hindering their broader applicability in applications demanding high energy density and extended lifespans, such as electric vehicles (EVs). Dominant degradation occurs at the nanoparticle level and involves various mechanisms, such as the formation of resistive films on the particle surface or surface phase transformations in common LiB materials. The effects of degradation are observed at the macroscopic level through electrochemical responses such as voltage or current measurements. Bridging the gap between microscopic and macroscopic scales to extract particle-level degradation mechanisms from electrode-scale responses is essential for understanding LiB degradation. Such methods can be used to quantify degradation in battery materials for second-life use, to design degradation-resistant materials, and more.&#13;
&#13;
Here, I propose a comprehensive multiscale framework that initially models LiB degradation at the single-particle scale, using nickel-rich materials as an example, and then projects single-particle degradation onto the population scale for both solid-solution and phase-separating materials. Furthermore, I analyze and design improved pulse diagnostics using hybrid pulse power characterization (HPPC) methods to extract physical microscopic degradation mechanisms from electrode-level responses. Overall, I set up a consistent framework modeling degradation from the single-particle to the population level and vice versa in LiBs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157236</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperbolic String Field Theory</title>
<link>https://hdl.handle.net/1721.1/157235</link>
<description>Hyperbolic String Field Theory
Firat, Atakan Hilmi
This thesis develops string field theory whose elementary interactions are parameterized using hyperbolic geometry. We introduce a systematic procedure to characterize its off-shell data: the local coordinates around punctures on Riemann surfaces as a function of complex structure and the vertex regions in the relevant moduli spaces over which the moduli integration is performed. This procedure exploits the relation between hyperbolic geometry and the semi-classical Liouville theory. We demonstrate that the (generalized) hyperbolic three-string vertex is exactly solvable, while the higher-order vertices can be obtained via the conformal bootstrap of Liouville theory in terms of classical conformal blocks and the DOZZ formula. The four-string and tadpole vertices are constructed explicitly using the known expressions of the associated blocks. Our method suggests the existence of a hidden cubic structure within hyperbolic string field theory.&#13;
&#13;
We also take the WKB-like limit of our construction and demonstrate that it can be used to characterize Strebel quadratic differentials on Riemann surfaces. These differentials encode the geometry of polyhedral vertices of classical closed string field theory. The implication is that they can be embedded into the hyperbolic paradigm. The validity of our results in this regime is further confirmed by developing a topology-independent machine learning algorithm characterizing Strebel differentials. Such an algorithm provides an alternative, numerically scalable approach for computing closed string field theory interactions. Finally, our work investigates the open-closed string field theory in the presence of a large number of D-branes. We establish its consistency by solving the relevant geometric version of the Batalin-Vilkovisky master equation using hyperbolic geometry and investigate its limits.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157235</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequence-Dependent &amp; -Independent Effects of Intron-Mediated Enhancement (IME)</title>
<link>https://hdl.handle.net/1721.1/157229</link>
<description>Sequence-Dependent &amp; -Independent Effects of Intron-Mediated Enhancement (IME)
Kowal, Emma J. K.
Introns are ubiquitous features of eukaryotic genes, and their precise removal from pre-mRNA transcripts by the spliceosome is an essential step in gene expression. Genomic deletion of an intron from a gene tends to reduce its expression, and addition of an intron tends to increase it. This phenomenon, termed Intron-Mediated Enhancement (IME), has been observed in many organisms, genes, and introns. IME can act at multiple levels to increase transcription rate, processing rate, export efficiency, translational efficiency, and stability of the processed mRNA. These stimulatory effects range across orders of magnitude depending on the context, and also on the identity of the intron, as has been shown in Arabidopsis thaliana. Presently, little is known about how intron sequence may determine the mode or magnitude of effect on gene expression output in animals. In this study we report the design and execution of several massively parallel reporter assays (MPRAs), interrogating the effect of tens of thousands of synthetic and natural intron sequences on gene expression in the human HEK293T and HeLa cell lines. We observe that even with random internal sequence, most of these introns splice well and trigger IME. In the primary tested context, the average intron stimulates an eight-fold increase in both mRNA and protein output over intronless controls, suggesting that the enhancement is largely at the level of mRNA accumulation. We analyze the sequence features associated with highly enhancing introns and demonstrate that the poly-uridine (polyU) content of an intron is positively correlated with its impact on host gene mRNA and protein level. In a second library of natural intron sequences, we observe that U12-type introns do not stimulate IME, while U2-type introns universally do. Surprisingly, we observe in both MPRAs that the enhancement from random introns is similar to or greater than the enhancement from natural sequences. 
In sum, we have developed a robust experimental platform for interrogating the sequence-activity relationship of IME, and used it to uncover new insights into this unsung sculptor of eukaryotic gene expression.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157229</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brownian dynamics simulation of soft matter with hydrodynamics: methods for constrained systems and shear processing of 2D materials</title>
<link>https://hdl.handle.net/1721.1/157226</link>
<description>Brownian dynamics simulation of soft matter with hydrodynamics: methods for constrained systems and shear processing of 2D materials
Funkenbusch, William Tian
2D materials are a rising class of soft matter with a promising set of unique characteristics. The most ubiquitous 2D material, graphene, for example, possesses large surface areas, tunability, and unique electrical, optical, and catalytic properties while being lightweight, strong, and flexible. This has led to graphene seeing use in separations, biomedical applications, flexible electronics, and more. Meanwhile, synthetic 2D polymers, a relatively new field of study, represent a massive expansion of the design space for 2D materials and their applications. Solution processing of these materials is often an important step for synthesizing or applying them, necessitating knowledge of their behavior in suspensions and in flows. As these materials become more viable, our fundamental understanding of them must increase in tandem. This will inform the design of these materials for our desired applications. However, especially when compared to their 1D counterparts, our understanding of 2D materials is lacking. It is the goal of this thesis to help fill in this gap in knowledge.&#13;
&#13;
In Chapter 1, we discuss the basics of soft matter and methods for simulating it, which form the basis for understanding the work in this thesis. We present and discuss the governing equations for the movement of soft matter particles. We then discuss the simulation methodology and mobility tensor approximations used in this thesis, along with some additional considerations.&#13;
&#13;
In Chapter 2 we study methods for simulating constrained Brownian systems. We compare the current state-of-the-art method for these simulations, GMRES, to a different method called the projected conjugate gradient (PrCG) method. In particular, we compare PrCG and GMRES for rigid bodies, freely jointed chains, and immobile systems. We find that both methods exhibit the same linear computational complexity. We find that PrCG, however, exhibits some notable advantages over GMRES, including lower precomputational and storage burdens, a guaranteed feasible iterate, and trivial extension to new constraint types due to the lack of a preconditioner. We use PrCG to solve a mixed constraint problem with rigid body and immobile particles, comparing to the analytical solution at large separations.&#13;
&#13;
The remainder of this thesis studies the effects of self-attraction on self-avoiding, semi-flexible, athermal 2D materials (sheets) in shear flow. In Chapter 3, we give the background on rheology and 2D materials that is necessary for understanding the remaining chapters. We begin by discussing non-Newtonian fluids, specifically their applications and their effect on the momentum balance presented in Chapter 1. Then, we give a brief introduction to simple shear and discuss how it is implemented in simulations. Finally, we give a brief introduction to 2D materials, their applications, as well as previous experimental, theoretical, and computational work.&#13;
&#13;
In Chapter 4, we model self-interacting, self-avoiding, semi-flexible, athermal sheets in shear flow. We find a rich conformational landscape of four different behaviors --- flat, tumbling, 1D folded, and 2D folded --- which are well-delineated by several dimensionless groups representing the ratios between shear strength and interaction strength, and bending rigidity and interaction strength. We use these dimensionless groups to explain the observed behaviors, explain the folding behavior of 1D folded sheets, and calculate and explain the viscosity of a dilute suspension of these sheets. We use the conformational and rotational properties of the sheet simulations to explain this behavior, demonstrating a new explanation for the non-monotonic rheological properties of 2D materials which does not involve sheet-sheet interactions (which are rare in dilute suspensions) or thermal energy (which is often small in sheet systems). We also study systems with two initially stacked sheets in order to model, for example, shear exfoliation of 2D materials. We find three behaviors --- separating, waltzing, and flipping --- which are characterized by the same dimensionless groups as single sheets. We again explain these behaviors and calculate the viscosity of these sheets, which again shows interesting non-monotonic rheological properties that we explain using the conformational and rotational properties of the sheets.&#13;
&#13;
In Chapter 5, we use simple time-dependent flow protocols to show how the properties of sheets can be controlled. Specifically, we use linear shear annealing simulations to show that the final conformational properties of a sheet suspension can be tuned continuously by varying the quench time. We also use our knowledge of the phase map of sheets to design flow protocols with step changes in shear rate to produce a target state of highly aligned, 1D folded sheets which represents, among other things, our predicted lowest possible viscosity for a sheet suspension.&#13;
&#13;
In Chapter 6, we discuss potential future directions for the sheet model applied in Chapters 4 and 5. Specifically, we discuss loose ends from Chapter 4 and potential extensions of the model. We discuss potential benefits of and complications in exploring these directions.&#13;
&#13;
Finally, in Chapter 7, we summarize the discoveries presented in this thesis and provide concluding remarks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157226</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Lie Theory in the Verlinde Category</title>
<link>https://hdl.handle.net/1721.1/157225</link>
<description>On Lie Theory in the Verlinde Category
Kannan, Arun S.
A symmetric tensor category arises by axiomatizing the basic properties of a representation category of a finite group. A famous theorem of Deligne states that, in characteristic 0, any symmetric tensor category of moderate growth is essentially a representation category of an affine supergroup scheme. This is not true in positive characteristic, with the most fundamental counterexamples being the Verlinde category Verₚ and its higher analogs Verₚn. It seems these categories will play a role in generalizing Deligne's theorem, and therefore, to understand symmetric tensor categories of moderate growth in general, it is important to study affine group schemes in these categories. The first part of the thesis reviews this theory.&#13;
&#13;
In the remainder of the thesis, we approach the study of Verₚ from two perspectives. The first perspective is that, because these categories do not fiber over the category of supervector spaces, they provide examples of new phenomena which do not arise out of (super)algebra or (super)geometry. In particular, we explain how the Verlinde category can be used to provide new constructions of Lie superalgebras, and in particular exceptional simple Lie superalgebras in low characteristic. We also show that in characteristic 5 a new algebraic structure we call a "weak Jordan algebra" arises. Finally, we classify bilinear forms in the Verlinde category Ver₄⁺ and discuss the associated Witt semi-ring, which is a new algebraic structure.&#13;
&#13;
The second perspective is that these categories actually contain the category of supervector spaces, so they must generalize what is already known. We extend the theory of Frobenius kernels to the Verlinde category and use it to prove an analog of the Steinberg tensor product theorem for the group scheme GL(X).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157225</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of microRNA degradation in Caenorhabditis elegans via the E3 ubiquitin ligase EBAX-1</title>
<link>https://hdl.handle.net/1721.1/157215</link>
<description>Regulation of microRNA degradation in Caenorhabditis elegans via the E3 ubiquitin ligase EBAX-1
Stubna, Michael William
microRNAs (miRNAs) are short, ~22-nucleotide noncoding RNAs that base-pair to messenger RNAs (mRNAs) to direct their post-transcriptional repression through their associated Argonaute (AGO) proteins. Animal genomes encode hundreds of miRNAs that, together, regulate a majority of mRNAs and tune spatiotemporal gene expression programs. The production and degradation of many miRNAs occurs in a regulated manner, but molecular pathways of miRNA degradation are relatively poorly understood.&#13;
Some rapidly degraded miRNAs owe their instability to a mechanism termed target-directed miRNA degradation (TDMD), whereby unusual miRNA binding sites with extensive complementarity to the miRNA promote a conformational shift in AGO, leading to the recruitment of an E3 ubiquitin ligase complex containing the substrate receptor ZSWIM8. The subsequent polyubiquitination and proteolysis of AGO liberates the miRNA, rendering it vulnerable to nucleases. TDMD underlies the instability of many miRNAs in diverse cell lines and animals.&#13;
In this work, I probe the biological scope of TDMD as a regulatory mechanism in the nematode Caenorhabditis elegans, which tolerates homozygous loss of the ZSWIM8 ortholog, EBAX-1, and expresses some miRNAs that are subject to rapid, developmentally regulated decay. I have confidently identified at least 22 miRNAs destabilized by EBAX-1 across the worm life cycle. These included the embryonic miR-35–42 family as well as certain stress-responsive miRNAs that together constitute some of the shortest-lived miRNAs in this organism. In mutants of ebax-1, the accumulated miR-35–42 miRNAs excessively repressed predicted target mRNAs and underwent 3′ trimming as they aged, though no consistent signature of 3′ trimming or tailing emerged for EBAX-1-sensitive miRNAs.&#13;
A recent study reports that the destabilization of miR-35 at the embryo-to-L1 transition does not depend on that miRNA’s 3′ region, unlike canonical mammalian TDMD. To test the generality of this result for other EBAX-1-sensitive miRNAs, I assayed the behavior of seed- or 3′-based miR-43 variants in the presence and absence of EBAX-1. Intriguingly, the miR-43 3′ variants showed a substantially reduced propensity to be regulated by EBAX-1. The requirement for 3′ pairing therefore varies between EBAX-1-sensitive miRNAs, raising questions about the molecular features of TDMD trigger RNAs that recruit EBAX-1 when extensive pairing is not crucial.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157215</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The rickettsial effector Sca4 has a conserved interaction with host clathrin and a tick cell specific role in infection</title>
<link>https://hdl.handle.net/1721.1/157209</link>
<description>The rickettsial effector Sca4 has a conserved interaction with host clathrin and a tick cell specific role in infection
Vondrak, Cassandra Joan
Intracellular bacterial pathogens secrete effectors to manipulate the host cell environment, create a hospitable niche, and promote infection. While many effectors interact with specific host machinery to perform a single distinct function, some effectors are capable of interacting with multiple host proteins to carry out multiple functions. Rickettsia species are obligate intracellular bacteria that cause vector-borne diseases that constitute an ongoing public health threat. As Rickettsia spp. have small genomes, and thus a limited coding capacity, multifunctional effectors may be an efficient way to manipulate their host environment. However, relatively few secreted effectors have been characterized in the Rickettsia genus and even fewer have been identified as multifunctional effectors. &#13;
&#13;
In this work, I demonstrate that the rickettsial secreted effector Sca4 interacts with the host endocytic factor clathrin heavy chain. As previous work showed that Sca4 interacts with the host protein vinculin in mammalian cells, this discovery of the Sca4-clathrin interaction makes Sca4 one of the first multifunctional effectors to be identified in a Rickettsia species. When investigating the potential role of the Sca4-clathrin interaction, I found that clathrin promotes the cell-to-cell spread of R. parkeri in mammalian cells by acting in the recipient cell. However, the Sca4-clathrin interaction was found to be dispensable for efficient cell-to-cell spread. I investigated the role of this interaction in the tick arthropod vector and found that the Sca4-clathrin interaction is necessary for the efficient proliferation of R. parkeri in tick cells. These findings show that knowledge of the complete roles of rickettsial secreted effectors in both arthropod vector and mammalian hosts is needed to fully understand rickettsial pathogenesis.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157209</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Medical Devices to Improve Oral Delivery of Biopharmaceuticals</title>
<link>https://hdl.handle.net/1721.1/157196</link>
<description>Engineering Medical Devices to Improve Oral Delivery of Biopharmaceuticals
Sharma, Shonit Nair
The dynamic mechanics of the gastrointestinal (GI) tract, including gut contractions, variable pH, and degradative enzymes, significantly challenge the development of oral delivery systems for biologic drugs by compromising their reliable delivery and therapeutic efficacy. While recent advances in oral delivery systems offer improved absorption through tissue penetration, their clinical translation remains tenuous due to the uncertainty of actuation-based delivery in variable environments and inherent design complexity. Inspired by the compression-based toxin delivery system of the stonefish, we developed a simple oral delivery device that harnesses GI mechanics to reliably actuate and systemically deliver biologic drugs. By synchronizing device actuation with gut contractions, the device and tissue work in tandem to ensure the complete transfer of a loaded therapeutic from the device to the tissue, bypassing physical and biochemical barriers and maximizing absorption. Through ex vivo and in silico experiments, we engineered the geometry of the device to achieve safe and targeted injections in the gut. An ex vivo electromechanical simulation model revealed the effectiveness of gut contractions for device actuation, and extensive in vivo experiments involving minipigs demonstrated comparable biologic drug delivery efficacy to subcutaneous injection. Harnessing the dynamic mechanics of the GI tract to improve oral delivery could transform drug administration and significantly enhance the lives of many patients.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157196</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation of Chemical Kinetic Models for Biofuel Oxidation and Pyrolysis</title>
<link>https://hdl.handle.net/1721.1/157195</link>
<description>Automatic Generation of Chemical Kinetic Models for Biofuel Oxidation and Pyrolysis
Dong, Xiaorui
Biofuels hold great promise for reducing greenhouse gas emissions and boosting engine performance. Modeling biofuel combustion and pyrolysis chemistry with chemical kinetic models allows for high-throughput evaluation of their performance under various conditions. These models, typically comprising hundreds of species and thousands of elementary reactions, provide quantitative predictions for the investigated systems. While manually creating such mechanisms is labor-intensive and requires extensive knowledge, reaction mechanism generation software, such as the Reaction Mechanism Generator (RMG), greatly facilitates model development by automating the selection of relevant species and reactions, as well as estimating their thermochemical, kinetic, and transport parameters. Despite the advances in these software packages and their success in modeling small, simple conventional fuels, their application to novel, under-explored biofuels is often limited due to a lack of knowledge in the relevant chemical space. This gap can be bridged by expanding the software's access to accurate thermochemical and kinetic parameters. However, these data are scarce, and their acquisition, typically via quantum chemical calculations, is challenging on a large scale. &#13;
&#13;
This thesis addresses these challenges by developing automated workflows to enhance the calculation of accurate thermochemical and kinetic parameters, thereby extending the capabilities of RMG for biofuel modeling. First, an automatic thermochemistry calculation workflow is implemented and integrated into the chemical kinetic model development process. The significant improvement in computational capacity enables an iterative approach to model generation and refinement, where the thermochemistry of hundreds of molecules is refined in each iteration. This approach is validated through the modeling of light alkene combustion chemistry, resulting in a model that accurately predicts key combustion properties and outperforms other well-regarded models. This study highlights the necessity of sufficient refinement iterations for a comprehensive exploration of the relevant chemical space and the convergence of critical species and reactions in the chemical kinetic model. This approach is then applied to model less-studied biofuels, such as butyl acetate isomers and tetramethylethylene. By incorporating key kinetic parameters from literature and quantum chemical calculations, along with iteratively refined thermochemistry, the developed models demonstrate strong predictive capabilities. These models agree with experiments conducted after their development and reveal important reaction pathways in the studied systems.&#13;
&#13;
Additionally, this thesis enhances the acquisition of accurate kinetic parameters through the simultaneous development of software, datasets, and data-driven models. RDMC, a cheminformatics software, is developed, featuring toolkits for elementary reaction analysis and end-to-end automated workflows for generating molecular and transition state conformers. A dataset covering nine reaction types relevant to combustion and pyrolysis radical chemistry is created using the RDMC workflow. Concurrently, another radical reaction dataset is curated, covering different reaction types. In total, the two datasets introduce high-quality elementary reaction data for over 11,000 radical reactions. Eventually, a graph neural network is trained on the new dataset for fast kinetic parameter estimation that can potentially benefit chemical kinetic model development.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157195</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPU-Oriented Algorithms for Continuous Energy Monte&#13;
Carlo Neutron Transport</title>
<link>https://hdl.handle.net/1721.1/157192</link>
<description>GPU-Oriented Algorithms for Continuous Energy Monte&#13;
Carlo Neutron Transport
Ridley, Gavin
The advent of graphics processing units (GPUs) has brought computing to new heights with deep learning models, now deployed ubiquitously and touching the lives of many. While GPU hardware may be ideal for deep learning, its full potential in various scientific computing applications has yet to be realized. Often, paradigm shifts in the data formalisms and algorithmic choices used to solve scientific computing problems must take place to fully leverage GPUs. A quintessential example of this shift has been the move towards matrix-free, high-order finite element formulations researched under the Exascale Computing Project. Similar groundbreaking shifts are only starting to take place in continuous energy Monte Carlo (MC) neutron transport simulations. These simulations play a crucial role in designing fission, fusion, and security systems that may play a pivotal role in the transition to a decarbonized world. This work contributes to adapting continuous energy MC neutron transport simulations for the GPU computing era. We first summarize some changes made to other scientific computing applications that led to performance gains on GPUs, which informed our independent development of a CUDA-based version of OpenMC, an open-source continuous energy MC neutron and photon transport code. Fortunately, the historical event-based MC simulation modality developed extensively through the 1980s for vector computers provides an excellent basis for GPU computing. Adapting a full-physics, continuous energy MC neutron transport simulation for GPUs is a feat only completed by a few institutions across the world, so we share some software development tricks that facilitated this task. We then identify a variety of algorithmic optimizations that improved the performance of the baseline CUDA application, and identify areas for further development. 
Based on experience adapting a full-physics continuous energy MC code for GPU, we identify two pieces of the simulation which can be improved for GPU computing: resonance upscatter handling and unresolved resonance modeling. Our new method for modeling resonance upscatter is based on a novel, fundamental observation regarding the resonance upscatter effect. The relative speed tabulation (RST) method developed by other GPU MC researchers can be underpinned by a universal special function we have named the incomplete Faddeeva function, which is closely related to the incomplete Goodwin-Staton integral. Our research develops numerical algorithms for efficient, accurate computation of the incomplete Faddeeva function and identifies some properties of the function. We then present a specialized root-finding algorithm that takes advantage of the structure of the problem to efficiently sample the resonance upscatter effect on GPUs. This obviates the need to rely on RST tables or a zero kelvin pointwise cross section, freeing precious GPU memory while using a GPU-friendly memory access pattern. Continuing in the same direction, we focus on unresolved resonance region (URR) cross-section modeling, which was shown to induce a 30% computational efficiency degradation on GPUs. We review the requirements to model cross sections in the unresolved resonance regime, and provide what is to our knowledge the first rigorous demonstration that URR modeling can be reduced to a one-dimensional probabilistic model in addition to some expectation values of partial cross sections conditioned on the total. Through three asymptotic arguments covering different resonant behavior regimes, we show that the normal inverse Gaussian distribution is the natural choice for modeling the total neutron cross-section distribution.
Rather than inducing a performance degradation, we show the new URR modeling technique in fact outperforms a pointwise infinite-dilute approach when it is used to model the URR region.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157192</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Computing for Building Performance and Design</title>
<link>https://hdl.handle.net/1721.1/157180</link>
<description>Spatial Computing for Building Performance and Design
Weber, Ramon Elias
Accommodating urban population growth while reducing emissions from the built environment poses an unprecedented challenge to the architectural discipline. To enable more sustainable construction, the dissertation proposes a new computational design framework to investigate how building performance, from an environmental and user perspective, relates to spatial design. The dissertation surveys existing computational methodologies for design automation and identifies new opportunities and value propositions for architectural computing in design guidance, feedback, and optimization. It explores methods to generate and optimize structural systems of buildings and interior layouts, with a specific focus on the design of residential buildings. By applying generative design methods to building analytics, new ways of estimating the embodied carbon of a building and the environmental impact of system-level design choices can be explored.&#13;
First, the research demonstrates how generative geometric algorithms can be coupled with structural simulations to accurately predict the structural material quantity and, through that, the embodied carbon of a building in early stages of design. Second, a new method for representing, analyzing, and generating spatial layouts – the hypergraph – is proposed that captures the characteristics of any given floor plan. Unveiling new architectural opportunities through automatic geometry creation, the hypergraph shows potential to improve the quality of residential spaces in terms of environmental performance and access to daylight. Enabling new design tools for architects, it offers creative applications and new collaborative workflows for incorporating new spatial metrics in the design process. Finally, the research provides new quantitative insights into building performance, demonstrating that spatial efficiency can outperform envelope upgrades in terms of carbon emission savings.&#13;
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157180</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nature-Centered Materiomics: Experimental and&#13;
Computational Design</title>
<link>https://hdl.handle.net/1721.1/157176</link>
<description>Nature-Centered Materiomics: Experimental and&#13;
Computational Design
Shen, Sabrina C.
As of the year 2020, the accumulated mass of anthropogenic materials outweighs all living biomass on Earth. Industrial material production simultaneously contributes nearly 30% of global greenhouse gas emissions each year, which, in conjunction with solid waste accumulation and deterioration of ecological processes, threatens the livelihood of current and future generations of both human and non-human species. This is in dramatic contrast with natural materials, which consistently outperform human engineering, yet are invariably produced using abundant, renewable sources of energy and, upon their disuse, decompose to fuel new growth. Nature effectively forms sustainable supply chains with no waste by leveraging both the constituents of materials and their structural organization at multiple scales, architecting common and abundant building blocks into a variety of high-performing composites. In this thesis, we present a nature-centered materiomics approach to emulate this in the design of novel sustainable materials. We leverage both computational and experimental strategies to consider multiple length-scales and time-scales across the processing, structure, properties, and performance of material systems with minimal ecological impact. First, we demonstrate machine learning strategies for harnessing functional geometries in natural materials and show how interpretable models can be leveraged toward novel material design. Next, we develop a platform for the fabrication of tunable biocomposites composed of renewable and biodegradable feedstocks, and consider Bayesian optimization as an approach to guide composite optimization and design. Finally, we extend the fabrication system to hybrid-living materials and demonstrate dynamic bio-welding capabilities in the strongest mycelium-based material in the literature to date. 
Altogether, these contributions enhance multiscale understanding of nature-centered material design and pave the way for future innovations that align human engineering with regenerative material cycles.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157176</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shifting Paradigms: Data-Centric Approach for Marine Statics Correction using Symmetric Autoencoding</title>
<link>https://hdl.handle.net/1721.1/157174</link>
<description>Shifting Paradigms: Data-Centric Approach for Marine Statics Correction using Symmetric Autoencoding
Kanniah, Brindha
Deep learning has demonstrated remarkable performance in a wide variety of domains and is often leveraged for making high-stakes decisions. Parallel to its growing and beneficial presence in other domains, deep learning is gaining a notable reputation for solving challenging problems in geophysics. A key problem - given the escalating energy and geosequestration demands in present times - is marine statics correction. The traditional workflow for correcting marine statics has been based on a model-centric paradigm. This paradigm involves a series of transformations between non-commensurate spaces: first, inversion from seismic data space to velocity model space and second, forward modeling from velocity model space to seismic data space. Statics correction within this paradigm has severe drawbacks, mainly the high compute, time, and labor cost, and inaccuracies stemming from errors in velocity model inversion or from unmet assumptions about subsurface structure. Overcoming these drawbacks was thus the prime motivation for our study, in which we chose to leverage deep learning as the core algorithmic tool to understand the limits of the model-centric paradigm and explore the performance horizons of a different, data-centric, paradigm for statics correction. The main feature of the data-centric paradigm is the direct mapping between commensurate data spaces, eliminating the need for intermediary transformations to and from velocity model space. Initial benchmark tests on the model-centric approach revealed the impact of inaccuracies in velocity model inversion as substantial nonzero timeshifts - exceeding 0.01s, and reaching values as large as 0.04s - for most arrivals in seismic data. These arrival time precision levels are unacceptable for good seismic imaging and time-lapse analysis, underscoring the need for an improved approach to marine statics correction. Consequently, we began our investigations into the data-centric paradigm. 
With the aim of disentangling the effects of varying seawater velocity from coherent subsurface geology in seismic records, we implemented an autoencoder algorithm, named SymAE. Notably, SymAE leverages the permutation symmetry of coherent subsurface information to separate that information from nuisance variations. Once trained, SymAE is able to redatum selected subsurface and water velocity information in its latent space to produce statics-corrected seismic records. Our results show that for training datasets of increasing subsurface complexity, SymAE strongly converges all dynamic timeshifts to zero, aligning perturbed traces to reference traces. Crucially, SymAE delivers the required timeshift precision of 0.01 seconds for all arrivals - an achievement that the model-centric approach falls short of. This notable precision improvement highlights how a streamlined data-centric paradigm outperforms the traditional model-centric paradigm of marine statics correction. This finding is pivotal, as it lays the groundwork for the real-world deployment of SymAE for statics correction in challenging deepwater environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157174</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutronic-Thermal Simulation of Micro Reactor Designs for the Purpose of Analyzing the Impact of Thermal Expansion and Hydrogen Migration in Metal Hydride Moderator</title>
<link>https://hdl.handle.net/1721.1/157171</link>
<description>Neutronic-Thermal Simulation of Micro Reactor Designs for the Purpose of Analyzing the Impact of Thermal Expansion and Hydrogen Migration in Metal Hydride Moderator
Kendrick, W. Reed
The recent increased interest in microreactor designs has presented the opportunity to take advantage of the smaller core dimensions to perform steady-state neutronic-thermal coupled simulation with the inclusion of an additional physics system. This work accomplishes this by adding thermal expansion and zirconium hydride-based hydrogen diffusion to the neutronic-thermal simulation of multiple heat pipe microreactor designs. Microreactors’ smaller cores are inherently characterized by more leakage than gigawatt-scale reactor cores. Including thermal expansion in the coupling system may reveal neutronic or thermal impacts of geometric expansion that have yet to be noted for these smaller-scale geometries; this is the impetus for the work on thermal expansion. The work on hydrogen diffusion is inspired by the common use of zirconium hydride as a moderator in microreactor designs. This material provides a high density of hydrogen with a high melting point but features a well-documented increase in the mobility of hydrogen within the zirconium lattice at high temperatures. Coupling this migration of hydrogen within the neutronic-thermal simulation is performed in order to identify and analyze neutronic and thermal impacts due to the movement of hydrogen within the moderator. Additionally, a heat pipe failure case is simulated for each microreactor geometry studied, aimed at analyzing the impacts of multipipe failure on both thermal expansion and hydrogen diffusion, as well as their downstream neutronic-thermal effects.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157171</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights on Serology, CRISPR Diagnostics, and Machine Learning Architectures for Biological Sequences</title>
<link>https://hdl.handle.net/1721.1/157162</link>
<description>Insights on Serology, CRISPR Diagnostics, and Machine Learning Architectures for Biological Sequences
Siddiqui, Sameed Muneeb
Fueled by technological breakthroughs, advancements in our understanding of infectious agents offer unprecedented potential for their early detection, intervention, and ultimately, eradication. This dissertation focuses on combining cutting-edge immunological, diagnostic, and computational approaches to confront infectious diseases more effectively, with a particular emphasis on SARS-CoV-2. The first two chapters delve into the immunological aspects of SARS-CoV-2, exploring the dynamics of antibody responses during primary infection and reinfection. First, we explore the dynamics of antibody responses during primary infection, revealing a “switch-like” relationship between antibody titer and function. Next, we investigate the humoral immune response following reinfection, identifying specific biomarkers that differentiate between primary infection and reinfection, offering potential tools for monitoring disease spread and understanding immunity. The subsequent chapter shifts focus towards technological innovation in diagnostics, presenting a novel bead-based method for CRISPR diagnostics that leverages a split-luciferase reporter system for enhanced sensitivity and a highly deployable bead-based platform for multiplexed pathogen detection. This work represents a significant advancement in rapid, scalable, and portable diagnostic tools. Finally, the dissertation culminates with a leap into computational biology, introducing ‘Janus,’ a subquadratic state space model designed to efficiently handle large biological sequences. Janus demonstrates superior performance in genomics and proteomics tasks, outperforming existing models with significantly fewer parameters, thus paving the way for more efficient and accurate modeling of protein behavior and other biological processes. Collectively, these works contribute to the broader field of infectious disease research with new immunological insights paired with advances in technological and computational solutions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157162</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cognitive Underpinnings of Legal Complexity</title>
<link>https://hdl.handle.net/1721.1/157156</link>
<description>The Cognitive Underpinnings of Legal Complexity
Martínez, Eric
Across modern civilization, societal norms and rules are codified and communicated largely in the form of written laws. Although principles of communicative efficiency and legal doctrine dictate that laws be comprehensible to the general public, legal documents have long been attested to be incomprehensible to those who are required to comply with them (i.e. everyone). Why? This thesis investigates this question using the tools of cognitive science. Chapter II approaches the question from the comprehender side, documenting the cognitive and linguistic factors that make legal documents difficult to understand for non-lawyers. Corpus analyses reveal that legal contracts are laden with psycholinguistically complex structures at a strikingly higher rate than nine baseline genres of English. Experimental evidence further reveals that some of these structures, such as center-embedded syntax, inhibit recall and comprehension of legal content more than others, suggesting that difficulties in understanding legal content result largely from working-memory limitations imposed by long-distance syntactic dependencies as opposed to a mere lack of specialized legal knowledge. Chapter III extends these results to other legal genres and investigates the cognitive and linguistic profile of law over time. Analyzing every law passed by Congress between 1951 and 2022 with matched texts from four different genres, we find that laws have been, and continue to be, disproportionately laden with psycholinguistically complex structures relative to baseline genres of English, suggesting that top-down efforts to simplify legal texts over this period have largely failed. Chapters IV and V turn to the producer side, investigating why legal actors write in a complex manner in the first place. We find that lawyers likewise struggle to recall and comprehend legal content drafted in a complex register and prefer simplified legal documents to complex documents across virtually every dimension.
We further find that people tasked with writing official laws write in a more convoluted manner than when tasked with writing unofficial legal texts of equivalent conceptual complexity, whereas people editing a legal document do not write in a more convoluted manner than when writing from scratch. From a cognitive perspective, these results suggest law to be a rare exception to the general tendency in human language towards communicative efficiency. In particular, these results indicate law’s complexity to be derived from its performativity, whereby low-frequency structures may be inserted to signal law’s authoritative, world-state-altering nature, at the cost of increased processing demands on readers. From a legal perspective, these findings call into question the coherence and legitimacy of legal theories and principles whose validity rests on the notion of law being comprehensible to laypeople, such as ordinary meaning, fair notice, and modern variants of textualism. From a policy perspective, this work informs long-standing efforts to simplify legal documents for the public at-large, which, despite bipartisan support, have remained largely intractable. Finally, from a field-building perspective, this thesis lays the foundation for a broader interdisciplinary research program that uses insights from cognitive science to inform long-standing and cutting-edge questions of legal doctrine and policy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157156</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The development and application of mass-spectrometry-based tools to monitor proteome remodeling in microbes</title>
<link>https://hdl.handle.net/1721.1/157155</link>
<description>The development and application of mass-spectrometry-based tools to monitor proteome remodeling in microbes
Telusma, Bertina
Outside of controlled laboratory environments, cells are continually sensing and adapting to highly variable environmental conditions in an effort to maintain cellular homeostasis and to maximize fitness in each condition. Although specific stresses elicit distinct cellular responses, the reshaping of the proteome is a central element in most cellular adaptation. This dynamic proteome remodeling involves a highly orchestrated combination of regulated protein synthesis, degradation, and modification, each contributing to the overall goal of matching the capacity of the expressed proteome to the demands of the sensed environment. Although each pathway contributes, whether cells mount a response driven primarily by synthesis or by degradation ultimately hinges on the nature and duration of the stress, as well as the cell type involved. Understanding the balance of these contributions has historically been challenging. As such, there is a need for approaches that can quantitatively resolve the contributions of protein synthesis and protein degradation pathways in a wide array of cellular and environmental contexts.&#13;
Quantitative proteomics via mass spectrometry stands out as a powerful tool for deciphering these questions, as it allows one to simultaneously monitor thousands of proteins. In this work, I leverage the power of quantitative proteomics coupled with metabolic labeling to investigate how microbes remodel their proteome during cellular adaptation. In chapter 2, I describe the development and characterization of these proteomic methods, including a detailed analysis of the variety of metabolic labeling schemes that can be employed in budding yeasts, which facilitate the bulk of my thesis work. In chapters 3 and 4, I apply these methods to the methylotrophic yeast, Komagataella phaffii, which grows robustly on a diverse set of carbon sources. As such, I use K. phaffii as a key case study to explore questions of cellular adaptation. I find that the K. phaffii expressed proteome varies greatly between cells grown in methanol, oleate, or glucose and, interestingly, that proteome remodeling strategies vary in a context-dependent manner. Specifically, I find that autophagic degradation drives proteome remodeling under nitrogen starvation conditions, with selective autophagic degradation of peroxisomes supporting the cells’ transition from methanol media to glucose media. In contrast, I uncover that synthesis and growth-coupled dilution is the primary driver as K. phaffii adapts from methanol media to oleate media. Given the deep proteome coverage enabled by my methods, and my application of these methods in a wide variety of genetic backgrounds (6) and environmental conditions (5), these datasets also serve as a rich resource to identify conditions stimulating degradation of specific proteins, as well as the genetically defined pathways supporting these activities. Finally, in appendices 1 and 2, I highlight how these approaches can be applied across different microbial species to broadly characterize the proteomic consequences of nutrient and genetic perturbations.
Overall, my work highlights how the development and application of powerful quantitative methods provide a global view of how proteome remodeling supports cellular adaptation, reveal deeper insights into pathways supporting turnover of specific proteins, and help to identify potential therapeutic targets to ameliorate protein-turnover related diseases.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157155</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational design of a novel soft-Xray-based turbulence diagnostic in NSTX-U</title>
<link>https://hdl.handle.net/1721.1/157154</link>
<description>Computational design of a novel soft-Xray-based turbulence diagnostic in NSTX-U
Chen, Xiang
Turbulence transport poses a significant challenge in fusion research. The measurement of turbulent fluctuations is critical for comprehending turbulence transport, predicting its behavior, and ultimately controlling it to maximize fusion gain. However, there is a notable scarcity of electron temperature fluctuation diagnostics, including for high density tokamak plasmas and in spherical tokamaks. The ultimate aim of our research is to develop a novel diagnostic tool for temperature fluctuations. Before experimental exploration, conducting a numerical feasibility study is essential for the proposed diagnostic. The high spatial and temporal resolutions that are attainable using Soft X-ray (SXR) imaging make it a promising candidate. The primary objective of the thesis is to assess the feasibility of an electron temperature fluctuation diagnostic based on SXR imaging.&#13;
&#13;
The feasibility study involves gathering fluctuation data and constructing a numerical diagnostic model. This model computes synthetic SXR measurements, which are then reconstructed using tomographic algorithms to derive electron temperature fluctuations. These reconstructions are then compared against the ground truth to assess diagnostic performance. Optimization of performance is achieved by adjusting diagnostic parameters to identify the optimal set for feasibility analysis.&#13;
&#13;
This study consists of two primary parts. First, we utilize a simplified toy model with circular plasma geometry and synthetic fluctuation data abstracted from gyrokinetic simulation fluctuation spectra. We employ a pseudolocal tomography algorithm for reconstruction and demonstrate reliable measurement of electron temperature fluctuations for X-ray detectors with sufficiently high signal-to-noise ratio. Second, we conduct a more comprehensive feasibility study using fluctuation data directly generated from gyrokinetic simulations, in a real (spherical tokamak) NSTX-U configuration with complex plasma geometry. Assumptions from the previous study, such as infinitely thin beam size, are relaxed to assess their impact on reconstruction. Additionally, we enhance the reconstruction algorithm using neural networks, enabling reconstruction of both electron density and temperature fluctuations, as well as cross-phase analysis. Overall, the study confirms the feasibility of the SXR diagnostic given that SXR detectors meet minimum requirements. Furthermore, we explore fluctuations generated from different gyrokinetic simulations, demonstrating the diagnostic's ability to differentiate fluctuations originating from different instabilities under the same configuration.&#13;
&#13;
This research provides a theoretical foundation and guidance for developing a practical SXR-based electron temperature fluctuation diagnostic for experimental use. It outlines the measurable quantities, their limitations, and the minimum requirements for SXR hardware to ensure reliable measurements. This contribution significantly advances our understanding of plasma turbulence transport, addressing a key challenge in fusion research. However, the current study is limited by its use of a simplified emissivity model. Utilizing a more comprehensive model incorporating atomic data could yield more robust conclusions. Additionally, incorporating real hardware parameters would enhance the reliability of the conclusions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157154</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light Water Reactor Loading Pattern Optimization with Reinforcement Learning Algorithms</title>
<link>https://hdl.handle.net/1721.1/157146</link>
<description>Light Water Reactor Loading Pattern Optimization with Reinforcement Learning Algorithms
Seurin, Paul R.M.
In 2023, Commercial Nuclear Power Plants (NPPs) in the USA, comprising Light Water Reactors (LWRs) such as Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs), remained the largest single source of carbon-free energy. They provided approximately half of the nation’s carbon-free electricity and under 20% of total electricity throughout the year. Ensuring the competitiveness of these nuclear assets is crucial for maintaining their role in providing dispatchable clean energy alongside renewable sources. The recent commissioning of Vogtle Units 3 and 4 marked the first new NPPs connected to the grid in over three decades, highlighting the high costs associated with nuclear technology and underscoring the need to improve their economic competitiveness. Optimizing the fuel cycle economics through enhanced core Loading Pattern (LP) is a key strategy to address this challenge. Since the 1960s, optimizing the LP for LWRs has been a major focus in nuclear engineering, but the large search space has posed significant difficulties. Computational methods from Stochastic Optimization (SO) have been used to tackle this issue, yet they often fail to outperform expert-designed solutions preferred by utilities. Deep Reinforcement Learning (RL), a subset of Deep Learning focused on decision-making, has shown promise in surpassing human-expert solutions in fields such as gaming and robotics. This thesis investigates the use of RL to improve automated tools for solving the PWR LP optimization problem, with the goal of developing efficient decision-support tools for core designers to generate more economical loading patterns. We present a novel approach using deep RL to solve the LP problem and compare it with traditional SO-based methods. Our findings indicate that the LP problem benefits from a global search to rapidly identify promising directions, followed by a local search to efficiently exploit these directions and avoid local optima. 
Proximal Policy Optimization (PPO), a type of RL algorithm, adapts its search capabilities with learnable policy weights, making it effective for both global and local searches, which contributes to its superiority over SO-based methods. Additionally, we introduce a new method called PEARL (Pareto Envelope Augmented with Reinforcement Learning) to tackle multi-objective optimization challenges. PEARL demonstrates greater efficiency in identifying Pareto fronts without requiring additional designer intervention, compared to traditional single-objective scaling methods. Finally, we extend PEARL to a novel paradigm called physics-informed RL by integrating statistical techniques and physics knowledge to enhance algorithm performance. As problem complexity increases, classical methods sometimes fail to find feasible solutions. Incorporating physics-informed insights becomes crucial for discovering high-quality and diverse solutions more efficiently. These results highlight the potential of AI advancements in the nuclear field. A deep understanding of AI tools is essential to fully leverage their capabilities. Our approach achieved a cumulative benefit of over $4 million per year per plant compared to using off-the-shelf AI solutions. While further work is needed to translate these theoretical benefits into real reactors, these algorithms promise to enhance the competitiveness of future nuclear fleets. In doing so, they could make a substantial contribution to achieving carbon neutrality by increasing the amount of clean electricity on the grid.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157146</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomy Work: Personhood, Expertise, and Activism of Disabled AI Data Workers in China</title>
<link>https://hdl.handle.net/1721.1/157144</link>
<description>Autonomy Work: Personhood, Expertise, and Activism of Disabled AI Data Workers in China
Wu, Di
This dissertation examines the labor and life of disabled workers in China’s artificial intelligence (AI) data annotation programs. The study draws on 14 months of ethnographic fieldwork, conducted over three years, with disabled activists, disabled workers, employment advocates, tech company staff, and government officials. This is supplemented by five years of my professional experience in disability nonprofits. My primary field site was a disabled people-led NGO founded in 2006, which I refer to as ENABLE. In recent years, ENABLE has developed numerous projects with tech companies to hire people with visual and physical impairments as data annotators for AI systems and to design assistive technologies for the community.&#13;
&#13;
In ENABLE’s case, what appears to be a familiar story of capitalist exploitation of disabled people turns out to be, instead, a story about the struggles of disabled Chinese people over different ways of being, living, and relating. I use the term “autonomy work” to describe disabled people’s labor to make “autonomous” machines (zidonghua) (Chapter 1), build an “autonomous” life (zizhu shenghuo) through work (Chapters 2 &amp; 3), and design tools for “independent” navigation (duli chuxing) (Chapter 4).&#13;
&#13;
I argue that disabled activists seek to construct greater autonomy for their community by reconfiguring social relations in and around technology. I call this mechanism “rerouting.” Instead of a complete departure from asymmetrical power relations, my interlocutors “reroute” the pathways between different human and non-human nodes without changing the nodes per se. They do so within the sociotechnical systems they build, the technological institutions they navigate, the kinship structures they seek to remake through tech work, and the physical terrain they navigate with assistive devices, all in pursuit of multiple forms of autonomy. “Rerouting” contributes to the rich scholarship on the intersection of disability and technoscience by highlighting the effects of disabled people’s unorthodox knowledge and practices that bend the world towards disabled bodies and minds. Furthermore, it specifies a key mechanism through which these effects are realized. Disabled people hack lives, build access, and improvise affordances by reorganizing the pathways between objects, bodies, and environments that were originally designed with other intentions.&#13;
&#13;
With deep knowledge and lived experience of the social issues they advocate for, disabled activists in China approach technology as a puzzle piece, not a magic bullet. They make technology useful for their lives, work, and activism by returning the technical to the social. Rather than displacing the slow work of social movements with neoliberal techno-solutionism, I show that this community-driven technological engagement is part of a larger effort to sustain that very slow work within a shifting political environment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157144</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge and the City: Redefining Islamic Urbanism, 762–1067</title>
<link>https://hdl.handle.net/1721.1/157142</link>
<description>Knowledge and the City: Redefining Islamic Urbanism, 762–1067
Lesoon, Courtney
This study demonstrates that the rapid urbanization of the Islamic world in its first five centuries can be attributed in part to the development of an independent class of city administrators who ensured that urban life thrived even in the most tumultuous of political times. This dissertation subverts existing historical models of urbanism, which were developed for medieval Europe, by excavating a theorization of the city from the political writings of the philosopher al-Farabi (d. 950), who argues that cities require the administrative wisdom of learned men trained in law. To historically corroborate al-Farabi’s theory, which has been cast as utopian, I identify these learned men in the historical record as the ʿulamaʾ. I demonstrate that early Islamic learning was a complex but ordered system—even before its institutionalization—first by articulating its delineations via a praxis of personally conferring and acquiring ʿilm (knowledge). This praxis was, I demonstrate, informed by a widely held view that ʿilm was metaphysically substantiated. The ʿulamaʾ—those marked by ʿilm—inherited their legal authority from the Prophet via the transmission of hadith and thus did not rely entirely on the political vesting of the caliph or amir to carry out Islamic law on the level of the city. I demonstrate that the ʿulamaʾ, with their independent legal authority, served as city administrators via two primary positions—the qadi (judge) and the muḥtasib (officer of public order)—and various other positions delegated by these two offices. Just as the system of early Islamic learning was regularized across the Islamic world, so too was the administration of cities by the ʿulamaʾ. Through city administration, the ʿulamaʾ cultivated favorable living conditions in cities.
Their relative independence from the state allowed for a continuity in city administration—and thus a continuity in urbanism—that survived the many political upheavals that came to define the Islamic world in the tenth and eleventh centuries.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157142</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Architectures of Microbiality: From Diatoms to Diatom Houses</title>
<link>https://hdl.handle.net/1721.1/157141</link>
<description>Architectures of Microbiality: From Diatoms to Diatom Houses
Luo, Xuan
This dissertation examines the diatom—a microorganism found ubiquitously from oceans to kitchen countertops—as a resonant factor in understanding modernity. From the early 19th century to the mid-20th century, rapid advances in optical microscopy dramatically unveiled the microbial world, during which diatoms, owing to their astonishing biological properties, forms, and material possibilities, became the subject of wonderment by a diverse group of naturalists, artists, and architects. I contend that microbial ubiquity constitutes a crucial but often overlooked aspect of environmental history, which architecture has consistently failed to account for. &#13;
	&#13;
The narrative progresses from early fascination with the diatom’s aesthetic potential to geologists’ more serious-minded efforts, laying the foundations for Richard Neutra’s (1892–1970) insistence that the diatom was critical to a new type of modern architecture. Through the case studies of Marquis Panciatichi d’Aragona (1813–1897) and Jacob Whitman Bailey (1811–1857), among others, I explore how diatom-influenced works from the 19th to early 20th centuries underpinned a transboundary dialogue on the microbial and its architectural imaginations. This perusal historicizes the theoretical and technological conditions that made possible the emergence of modernist views of nature, epitomized by Neutra’s philosophy on environmental psychology and realism through his early 20th-century proposal to integrate diatoms into the very fabric of modern living spaces. &#13;
	&#13;
This dissertation examines diatoms within shifting epistemological frameworks, tracking their transition from scientific specimens to motifs in visual culture. It investigates how unseen natural elements were disseminated, accumulated, and manifested into distinctly perceivable forms, revealing that the understanding of diatoms expanded from isolated, object-focused studies to a concern for environmental relationships and geological transitions. Filling the world was not some grand narrative of human will but the disorienting, puzzling, and even frightful everywhere-ness of microbiality. The intersection of diatoms with architecture changed from aesthetics and form to a deeper engagement with sites and land, following the transformative reconception of the thickness of the earth’s surface. This dissertation reveals a condition of knowledge that architecturally and psychologically rewrote nature as an encounter of biological (un)consciousness and technological actualization.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157141</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics and Implications of ROS in Marine Systems</title>
<link>https://hdl.handle.net/1721.1/157137</link>
<description>Dynamics and Implications of ROS in Marine Systems
Taenzer, Lina
The reactive oxygen species (ROS), superoxide and hydrogen peroxide, play critical roles across diverse marine ecosystems, influencing redox chemistry and organismal health. The distribution and concentration of these compounds in the oceans may serve as important controls for various biogeochemical cycles. The contrasting physiological nature of ROS, serving as both integral compounds for cellular processes such as signaling and growth while inducing oxidative cell damage at elevated concentrations, has made interpretation of their roles in organismal and ecosystem health challenging. Despite the potential for these ROS to provide unique insights into the intricate interactions occurring at the interface between life and its surrounding environment, critical gaps in our understanding of these compounds in marine systems exist. In this thesis I explored two aspects of marine ROS. The first part is focused on advancing our understanding of the distribution of superoxide in the sea. As part of a multidisciplinary team, I developed a submersible chemiluminescent sensor (SOLARIS) capable of measuring ROS in situ to ocean depths greater than 4,000 meters. With the use of SOLARIS, I discovered that a broad diversity of sponges and corals are local hotspots of superoxide at depth. Then, I studied the distribution of superoxide in the stratified water column of the Baltic Sea and found large subsurface maxima in the aphotic zone. In the second part of this thesis, I probed the use of hydrogen peroxide as a monitoring agent of organismal health. I measured hydrogen peroxide and bromoform production by two seaweed species exposed to different stressors. An analysis of these signals suggests that hydrogen peroxide could serve as a non-invasive chemical signature for stress in seaweed meadows and farms. Lastly, I characterized hydrogen peroxide associated with different coral species during a Stony Coral Tissue Loss Disease transmission experiment. 
I determined that hydrogen peroxide does not predict infection before lesions are visible, thus hindering its utility as an early-stage signature of disease within corals. Altogether, this thesis extends our perspective on the distribution and controls on ROS in various marine systems and provides a baseline for using ROS dynamics to monitor organismal health.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157137</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Cellular Landscape of the Brain: A Scalable Approach to Comprehensive Microscopy Data Analysis</title>
<link>https://hdl.handle.net/1721.1/157135</link>
<description>Mapping the Cellular Landscape of the Brain: A Scalable Approach to Comprehensive Microscopy Data Analysis
Kim, Minyoung E.
Recent advances in intact tissue processing and imaging have enabled the generation of whole brain microscopy data at subcellular resolution, revealing intricate morphological details of cells at unprecedented scales. Given that cellular morphology is strongly linked to distinct functional states of cells, in-depth morphological analysis of such data offers immense potential for understanding their roles in brain development and disease. However, the lack of scalable computational techniques poses a substantial challenge in achieving comprehensive morphological characterization. To efficiently and accurately analyze cellular morphology, we need to process terabyte-scale three-dimensional (3D) data, which inevitably complicates downstream analysis workflows with existing methods.&#13;
&#13;
To address the challenge, we developed an end-to-end scalable framework that seamlessly strings each step of the analysis pipeline together, enabling comprehensive fluorescence microscopy data analysis. The framework, termed MorPheT (Morphology Phenotyping Tool), serves as an all-in-one solution, offering a suite of analysis modules spanning from image pre-processing to precise cell detection, atlas alignment, morphological phenotyping, and interactive visualizations. MorPheT employs an ensemble method using both supervised and unsupervised approaches to maximize feature learning for unbiased morphological characterization. A novel deep neural network (ALNet) was designed to capture the long-range contextual dependencies inherent in 3D training data during supervised learning. Unsupervised learning leverages complementary features from the supervised approach, demonstrating the powerful synergy of this ensemble method.&#13;
&#13;
We applied MorPheT to two main projects. First, we profiled brain-resident macrophages (BRMs) and created the first fetal mouse brain atlases across multiple developmental stages, revealing distinct regional growth patterns of BRMs throughout development. We also demonstrated MorPheT’s effectiveness in characterizing microglia distribution patterns and morphological properties brain-wide in both control and neurodegeneration mouse brains. In the second project, we investigated cFos+ cells in a memory engram study, showcasing MorPheT’s utility for brain-wide analysis of engram cells. By examining regions hypothesized to hold memory engrams for contextual fear conditioning memory, we identified brain regions where engrams for a specific memory are distributed. Taken together, MorPheT is a powerful tool for cell profiling and mapping across the brain, and we anticipate it will help democratize computational analysis for large-scale microscopy datasets, making advanced analytical approaches more accessible to the broader scientific community.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157135</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compact Capabilities: Developing and Evaluating a Field-Portable Neutron Resonance Capture Analysis System</title>
<link>https://hdl.handle.net/1721.1/157134</link>
<description>Compact Capabilities: Developing and Evaluating a Field-Portable Neutron Resonance Capture Analysis System
Rahon, Jill M.
Technological advances in the thorium fuel cycle and other advanced reactor concepts suggest their possible commercialization for nuclear power use in the next ten years. Although the thorium cycle shares many aspects with the uranium and plutonium fuel cycles, it introduces the requirement for the nondestructive assay of multiple isotopes (²³⁸U, ²³²Th, ²³³U, ²³⁵U, or ²³⁹Pu) in varied concentrations and chemical or physical forms. Current methodologies used for safeguarding the uranium and plutonium fuel cycles are either unsuitable for quantifying many of these isotopes or lack the ability to differentiate between them effectively. This work presents an experimental evaluation of a portable Neutron Resonance Capture Analysis (NRCA) system sensitive to isotopes with neutron capture resonances in the epithermal range (1-100 eV). NRCA is a technique traditionally used for nuclear data collection and nondestructive assay of archaeological materials, typically conducted at large accelerator facilities with beamlines in excess of ten meters. This research miniaturizes the system to a two-meter beamline using a portable deuterium-tritium neutron generator. It builds upon the foundation of a portable Neutron Resonance Transmission Analysis (NRTA) system, utilizing capture gamma rays to generate a signal, in contrast to the neutron transmission measurements of NRTA. The NRCA technique is evaluated in this novel, portable configuration first using nonradioactive samples for optimization and then progressing to depleted uranium and thorium salt samples. Through a research partnership with Pacific Northwest National Laboratory, the technique was tested using highly enriched uranium, ²³³U, and high-assay low-enriched uranium (HALEU) samples. Field portability tests demonstrated its ability to operate safely in field conditions, with operator doses remaining well within occupational limits.
The system was able to identify multiple mid- and high-Z materials by reconstructing their neutron resonance profiles in experiments as brief as 20 minutes. It successfully differentiated between nuclear fuel cycle isotopes in composite samples as small as 2 grams, with limited success in quantifying the areal densities of uranium and thorium. These results suggest that NRCA, especially when used in concert with NRTA and other neutron-interrogation techniques, has the potential to rapidly and nondestructively quantify and characterize isotopes of interest in support of safeguards material accountancy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157134</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Sparse and Low Rank Matrix Optimization for Machine Learning Applications</title>
<link>https://hdl.handle.net/1721.1/157126</link>
<description>Advances in Sparse and Low Rank Matrix Optimization for Machine Learning Applications
Johnson, Nicholas André G.
Numerous fundamental problems in operations research, machine learning, and statistics exhibit natural formulations as cardinality or rank constrained optimization problems. Sparse solutions are desirable for their interpretability and storage benefits. Moreover, in the machine learning setting, sparse solutions exhibit superior model generalization and have a natural interpretation as conducting feature extraction in high-dimensional datasets. On the other hand, since the rank of a matrix is equivalent to the cardinality of the matrix's vector of singular values, rank can be interpreted as the matrix generalization of sparsity. Accordingly, low rank solutions inherit similar desirable properties to sparse solutions while allowing for very flexible modelling capability. Unfortunately, optimizing over cardinality and rank constraints is non-convex and NP-hard in general, which has led to strong reliance on convex relaxations and heuristic methods that yield sub-optimal solutions.&#13;
&#13;
This thesis advances both the theory and application of sparse and low rank matrix optimization, focusing on problems that arise in statistics and machine learning. We develop algorithmic approaches to problems exhibiting cardinality and rank constraints by leveraging techniques from mixed-integer and mixed-projection optimization. The proposed algorithms outperform existing convex relaxations and heuristics. Our rigorous analysis and empirical validation aim to contribute to both the theoretical foundations of optimization and the development of practical tools for complex challenges in statistics and machine learning.&#13;
&#13;
Chapter 2 studies the Sparse Plus Low Rank Matrix Decomposition problem. We present an alternating minimization algorithm that computes high quality feasible solutions and outperforms benchmark methods, scaling to dimension n=10000 in minutes. We additionally design a custom branch and bound algorithm to globally solve problem instances of dimension up to n=25 in minutes. Chapter 3 examines the Compressed Sensing problem, for which we present a custom branch and bound algorithm that can compute globally optimal solutions. Our approach produces solutions that are on average 6.22% more sparse on synthetic data and 9.95% more sparse on real world ECG data when compared to state-of-the-art benchmark approaches. Moreover, our approach outperforms benchmark methods when used as part of a multi-label learning algorithm. Chapter 4 explores the problem of learning a partially observed matrix that is predictive of fully observed side information, an important generalization of the Matrix Completion problem. We reformulate this problem as a mixed-projection optimization problem and present an alternating direction method of multipliers algorithm that can solve problems with n = 10000 rows and m = 10000 columns in less than a minute. On large scale real world data, our algorithm produces solutions that achieve 67% lower out of sample error than benchmark methods in 97% less execution time.
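The flavor of the alternating minimization scheme for sparse-plus-low-rank decomposition can be conveyed with a generic textbook-style sketch (an illustration only, not the thesis's algorithm: the function name, the truncated-SVD update, and the soft-threshold parameter lam are all illustrative choices):

```python
import numpy as np

def sparse_plus_low_rank(M, rank, lam, iters=50):
    """Split M into a rank-limited piece L and a sparse piece S by
    alternating two closed-form updates (a standard heuristic)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-update: best rank-k approximation of the residual M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # S-update: entrywise soft-thresholding of the residual M - L,
        # which zeroes small entries and keeps large outliers.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

Each subproblem is solved exactly in closed form, which is what lets heuristics of this kind scale to large dimensions even when the joint problem is NP-hard.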
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157126</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Risk and Optimizing Resilience of Digital and Physical Supply Chains</title>
<link>https://hdl.handle.net/1721.1/157125</link>
<description>Predicting Risk and Optimizing Resilience of Digital and Physical Supply Chains
Hu, Kevin
A number of disruptions and related challenges have affected the landscape of global supply chains in the past decade. These include the COVID-19 pandemic, geopolitical tensions, and cross-industry cyber breaches, highlighting the need for resilient and adaptive supply chain management. This thesis explores the role of data, machine learning, and analytics in developing predictive risk models to evaluate supply chain-related risks and in optimizing the supply chain to improve resiliency. This thesis focuses on two primary application domains: cybersecurity and the global shipping industry.&#13;
&#13;
Chapters 2 and 3 are motivated by the increasing prevalence of supply chain-related cyber breach incidents such as the SolarWinds breach in 2020. Chapter 2 develops the first predictive model for cyber risk that relies on innovative supply chain features. It utilizes large-scale data from more than 30,000 enterprises and their respective digital supply chain networks. In particular, this chapter develops descriptive features of the local supply chains of these entities, and then leverages these features to develop a supervised ML model for predicting whether an enterprise will experience a data breach incident. The results from this analysis demonstrate that local supply chain characteristics are significant predictors of data breach risk. Additionally, including supply chain features increases predictive power compared to baseline models that rely solely on internal enterprise features.&#13;
&#13;
Chapter 3 introduces an innovative global supply chain network graph and cyberattacker framework for modeling cyberattacker behavior in supply chain network environments. Theoretical analysis of this model proves that certain local supply chain characteristics determine an upper bound on the probability that an enterprise is compromised in this framework. Furthermore, the supply chain graph is calibrated with real data and then used to train an unsupervised reinforcement learning (RL) attacker agent. The agent traverses the supply chain network graph by cyberattacking and compromising nodes with the goal of maximizing its reward. The trained agent is used to produce an unsupervised risk assessment of the company nodes by simulating attacks within the network graph. The assessment, which is validated using public breach data, is competitive with basic unsupervised models and can significantly improve predictive performance when included as a feature for supervised models. An attractive aspect of this innovative modeling approach is that it does not require access to historical breach data needed for supervised models and algorithms, as unfortunately, the currently available data on cyber breaches is very partial and sparse.&#13;
&#13;
Chapter 4 develops a novel methodology for optimizing shipping container scheduling for the last leg in the shipping container global supply chain, called the “drayage trucking” delivery process. The work in this chapter details the drayage trucking process from end to end and highlights key sources of inefficiencies throughout the process. An integer programming (IP) model is introduced to schedule each step in the drayage trucking delivery process to improve efficiency and minimize additional costs that are incurred as a result of inefficiencies in the container delivery schedule, which are known as “accessorial charges”. The IP generates optimized schedules using industry delivery data, which are then compared with historical schedules. The results demonstrate that this approach can significantly decrease costs and improve container scheduling efficiency compared to current industry practices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157125</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformation and its surface expression in stressed planetary materials</title>
<link>https://hdl.handle.net/1721.1/157122</link>
<description>Deformation and its surface expression in stressed planetary materials
Seltzer, Cassandra
This thesis investigates the response of planetary materials to changing stress fields, and resultant signatures of stress in geophysical properties observable from planetary surfaces. When forces change within rocky and icy layers of planetary bodies, constituent materials of these layers adjust on the microscale; energetically favorable alignment of microstructural materials builds across scales to result in deformation, preferred directions for material transport and wave propagation, and heat release. This work therefore explores the relationship between microstructure and stress conditions in order to connect geophysical observations to the underlying forces on subsurface materials, using both experimental and computational methods. The first two chapters investigate two-phase deformation, where a partial melt phase is present between grains of solid materials such as olivine (Chapter 2) or ice (Chapter 3). Chapter 2 finds that in partially molten rocky materials, microstructural melt aligns parallel to the maximum applied stress direction quickly over geological time, while crystallographic orientations require significant strain intervals to reset. This shows that we can use the melt-induced changes to properties in the deforming Earth, for example, as an indicator of short-term stress fields. Chapter 3 applies these findings to the evolution of icy systems through simulated deformation of ice-melt aggregates, suggesting that current seismic studies which do not correct for the orientation of melt may misinterpret deformation at the base of warm ice sheets. The final two chapters center on deformation mechanisms that may shape the properties of icy outer Solar System satellites as they orbit their host planets. Chapter 4 provides novel experimental constraints on meteoritic materials relevant to the cores of icy moons, finding that microstructural brittle deformation, and resultant energy release, occurs even at very small differential stresses. 
Acoustic emissions associated with this brittle deformation are also more energetic at lower confining pressures, indicating that smaller, lower-pressure icy moons might receive enhanced heat from core deformation. The final chapter (Chapter 5) investigates crustal processes on Titan, Saturn’s largest moon. This work models how tidal stresses interact with local topographic stresses to create fracture across Titan’s crust, creating pathways for sediment generation and fluid transport.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157122</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies in biotic persistence and the taxonomic stability of traits over geological time</title>
<link>https://hdl.handle.net/1721.1/157121</link>
<description>Studies in biotic persistence and the taxonomic stability of traits over geological time
Tamre, Erik
It is increasingly recognized in evolutionary biology that biotic processes and pathways can be viewed as being under selection, just as organisms or populations are. This view is particularly relevant when considering the history of the Earth’s biosphere over geological timescales, where the evolution of groups interacts with the evolution of processes in shaping the biosphere over time. This thesis considers a novel selection mechanism proposed to be operating on clades based on their age and tests its presence in marine animals over the Phanerozoic (Chapter 2); it also seeks to understand the interaction between some microbial traits and lineages over geological time, as well as the implications of this interaction for the traits’ longevity. Chapter 3 considers the production of the photoprotective pigment scytonemin, and Chapter 4 considers microbial iron oxidation. In these two chapters, I describe a metric called the “clade fidelity” of a trait to describe its tendency to be associated with certain lineages and vertically inherited within them throughout the trait’s history, and I examine the relationship between a trait’s clade fidelity and its ecological context as well as its evolutionary fate. The case studies in the thesis show that the proposed theoretical frameworks are applicable in practice and carry considerable explanatory power for the understanding of evolutionary processes on a scale of planetary history.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157121</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orbital stability in a classical pilot-wave system</title>
<link>https://hdl.handle.net/1721.1/157120</link>
<description>Orbital stability in a classical pilot-wave system
Liu, Nicholas Z.
The hydrodynamic bouncing droplet system, consisting of millimetric droplets bouncing on a vibrating fluid bath, displays many quantum mechanical phenomena on a macroscopic scale. These phenomena include tunnelling, diffraction and wave-like statistics. This thesis focuses on the features responsible for the quantisation of orbital radii, and rationalises this quantisation in terms of the stability of circular orbits arising in the presence of a rotating frame and a central force. We find that orbital quantisation is most pronounced when the waves generated by each bounce decay slowly. The wave decay rate, in turn, is related to the concept of path memory, the number of prior impacts with the bath that affect the droplet’s future dynamics. We conduct an analytical investigation into the stability of circular orbits using a generalised theoretical framework that allows for an exploration of classical pilot-wave dynamics both inside and outside the experimentally accessible parameter regime. The exploration of parameter regimes beyond those accessible with the hydrodynamic system reveals much richer orbital dynamics. Our novel mathematical approach allows for evaluation of the integrals appearing in the stability problem in terms of Bessel functions of complex order, and thus facilitates asymptotic expansions of the stability problem in various limits. Within the experimental parameter regime, we demonstrate that in a rotating frame, circular orbits destabilise only via resonant instabilities, for which the growing perturbations oscillate at a frequency that is an integer multiple of the orbital frequency. Conversely, in a central force, non-resonant instabilities arise, for reasons detailed herein. Outside the experimental parameter regime, we show how the non-resonant instability leads to counter-intuitive scenarios; for example, circular orbits that are stabilised by increasing memory.
In the limit of vanishing particle inertia, infinite path memory and a linear spring force, we demonstrate the intriguing possibility of infinitely many sharply quantised orbital states, where the allowed orbital radii exist in vanishingly thin intervals, and are stabilised by the combined influence of the time-averaged wave field and spring force. We demonstrate that these sharply quantised orbital states are only stable for higher memory. We then consider the effect of weak external forces on spin states, circular orbits arising in the absence of external forces, and show that the destabilisation of spin states depends in a complex manner on the type of external force applied. Finally, we show that the instability of large circular orbits is related to the in-line speed oscillations of free walking droplets in a manner that is independent of the external force.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157120</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Envisioning Water: Sustainability and Future-Making in Dubai and Los Angeles</title>
<link>https://hdl.handle.net/1721.1/157118</link>
<description>Envisioning Water: Sustainability and Future-Making in Dubai and Los Angeles
Christidi, Nadia
The following dissertation explores how the future of water is being imagined, planned and&#13;
prepared for in two dryland cities – hyper-arid Dubai and semi-arid Los Angeles – as the&#13;
climate changes and as they face increasing pressures to become more ‘sustainable.’ Both&#13;
Dubai and LA are cities that have long been deemed unsustainable, but are aiming to&#13;
become sustainability leaders. Dubai, which relies on energy-intensive desalination and has&#13;
high water consumption, including in ubiquitous urban greening, is investing heavily in&#13;
achieving efficiencies and powering water through clean energy. Los Angeles, which&#13;
sources the majority of its water through aqueduct systems from faraway places where&#13;
water is becoming increasingly taxed, is looking to produce more of its water supply&#13;
locally, and especially through wastewater recycling. Throughout the dissertation, I trace&#13;
the plans, projects, and policies being introduced in this vein to consider how&#13;
‘sustainability’ initiatives play out and get negotiated through the socio-political and&#13;
political economic structures in the two cities to unique effects.&#13;
To get at sustainability’s variegated forms and effects, I first view sustainability as a&#13;
“boundary object” (Star and Griesemer) and “technology of imagination” (Pederson et al.).&#13;
Treating sustainability as a “boundary object” that is shared but viewed differently by&#13;
actors enables me to home in on the interests and forces, sometimes countervailing, that&#13;
shape sustainability projects. Treating it as a technology of imagination allows me to get at&#13;
the imaginative effects that sustainability projects constitute. Second, I consider how these&#13;
interests, forces, and effects emerge from and get mediated through entrenched structures&#13;
like bureaucratic systems, accumulation regimes, and sunken investments, which produce&#13;
a stickiness to infrastructures and infrastructural visions that renders change challenging,&#13;
slow, and incremental. As such, I show, for instance, how Dubai’s highly centralized&#13;
governance structure and foreign-investment development model produce an emphasis on&#13;
sustainability’s enhancement of the city-state’s competitiveness agenda that can belie&#13;
larger eco-realities, while LA’s fragmented institutional, regulatory, and financing scapes&#13;
complicate collaboration on recycling projects which span across and exceed individual&#13;
institutional mandates.&#13;
&#13;
Finally, alongside the municipal projects I focus on, I also look at visions of the future by&#13;
artists, designers, and architects to get at how the arts might provide alternatives that in some cases could help get beyond the stickiness of sustainability as it is currently being&#13;
imagined.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157118</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surgery Exact Triangles in Instanton Theory</title>
<link>https://hdl.handle.net/1721.1/157113</link>
<description>Surgery Exact Triangles in Instanton Theory
Bhat, Deeparaj
The introduction of instanton Floer theory and Donaldson polynomial invariants in the 1980s revolutionised the study of low dimensional topology. Since then, many Floer theories have been introduced with different structural properties and qualitative features. One of these Floer theories, Heegaard Floer theory, is popular due to its computational ease and rich algebraic structure. One of the computational tools absent in other Floer theories is the integer surgery formula, which computes the Heegaard Floer homology of 3-manifolds obtained by surgery along knots. This thesis establishes a new surgery formula in instanton Floer theory. The algebraic language to express this formula is that of the derived category of chain complexes. The first part of the thesis describes this surgery formula, whose statement and proof are inspired by the Atiyah-Floer conjectures. The second part then contrasts with the Heegaard Floer analogue by showing that instanton and Heegaard Floer theory cannot agree over the integers.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157113</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Properties of Colloidal II-VI and III-V Semiconductor Nanocrystals: Single Nanocrystal Photon Correlation Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157111</link>
<description>Optical Properties of Colloidal II-VI and III-V Semiconductor Nanocrystals: Single Nanocrystal Photon Correlation Spectroscopy
Berkinsky, David
Colloidal nanocrystals (NCs), also known as quantum dots, are nanometer-sized semiconductor crystalline structures composed of thousands to tens of thousands of atoms, placing them between the molecular and bulk regimes and allowing them to harness unique qualities from both. Colloidal NCs are used in many applications including light-emitting diodes (LEDs), photovoltaics (solar cells), lasers, transistors, photocatalysis, and many more. In this thesis, I investigate the optical properties of colloidal NCs, specifically InP/ZnSe/ZnS, CdSe/CdS/ZnS, and ZnSe/ZnS NCs, using a combination of ensemble and single NC photon correlation spectroscopic techniques. In the first chapter, I introduce the photophysical properties of colloidal NCs and spectroscopic techniques relevant to my studies. In the second chapter, I determine the dominant photoluminescent line shape broadening mechanisms in single InP/ZnSe/ZnS and CdSe/CdS/ZnS NCs using temperature dependent photoluminescent spectroscopic techniques. In the third chapter, I investigate the coherent emissive properties of single InP/ZnSe/ZnS and CdSe/CdS/ZnS NCs at cryogenic temperatures, demonstrating the longest coherence time measured in a colloidal NC system to date. In the fourth chapter, I develop an ensemble third-order correlation technique to elucidate the average single ZnSe/ZnS NC triexciton efficiency and dynamics. Finally, I propose future directions in the fifth chapter, including a fourth-order correlation technique to resolve absolute energy information on timescales faster than CCD-based spectroscopic techniques, and an open-access photon correlation Monte Carlo toolkit with the aim of filling education gaps and providing the colloidal NC community with a database of analytical tools that will encourage a wider audience to engage with photon correlation spectroscopy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157111</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Synergistic Understanding of Language Processing in Biological and Artificial Systems</title>
<link>https://hdl.handle.net/1721.1/157110</link>
<description>Towards Synergistic Understanding of Language Processing in Biological and Artificial Systems
Hosseini Asl, Eghbal
The faculty of language in the human brain relies on a network of frontal and temporal regions, predominantly in the left hemisphere, often defined as the “language network”. Despite decades of research aimed at uncovering the neural mechanisms underlying activity in this network, a computationally precise account has remained elusive. Over the past five years, artificial neural networks (ANNs) have achieved capabilities in the comprehension and production of language that are indistinguishable from those of humans, and their internal representations bear similarity to activity within the language network. In this thesis, I aim to build a synergistic understanding of language processing in both ANN models and the language network in the human brain by addressing three main questions: 1. When and how do human brains and ANN language models converge or diverge in their representations during language processing? 2. How does the amount of training data affect convergence between the human brain and ANN language models? 3. What computational mechanisms could underlie similarities in language processing between human brains and ANN language models?&#13;
&#13;
To answer the first question, I demonstrate that representational spaces converge between successful ANNs and the human brain, presumably driven by the statistics of their inputs. I show that brain responses to stimuli (sentences) that are represented similarly across multiple successful ANNs are easier to predict from model representations; in contrast, brain responses to sentences that are represented differently across models are challenging to predict, despite high consistency among human participants. Extending these findings to the domain of vision, I suggest that the principle of representation universality may underlie information processing across various domains.&#13;
&#13;
The second question addresses a common criticism of language ANNs: namely, that they are implausible as models of human language processing because they require vastly more training data. Using two complementary approaches, I show that ANNs can build representations similar to those in the human language network even with a “developmentally realistic” amount of training data, approximately 100 million words.&#13;
&#13;
Finally, to answer the third question, I draw inspiration from computational neuroscience to reveal how ANN language models learn a predictive model of linguistic input. By focusing on representational geometry, I demonstrate that ANN models progressively “untangle” the temporal trajectory of a sentence’s representation via straightening—reduction in curvature between adjacent words as the input is passed through the model’s layers. Using this straightening mechanism, the ANN model recasts next-word prediction as a smooth linear extrapolation from the current internal state to a future state. Straightening emerges as a result of model training and scales with model size. Furthermore, the average degree of sentence straightening in the deep layers of the model correlates with corpus-based estimates of sentence surprisal, which are linked to human comprehension difficulty (e.g., as reflected in reading times).&#13;
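The straightening described above can be quantified as discrete curvature along a sentence's representational trajectory (a generic illustration, not the thesis's actual pipeline: the function name and the rows-as-words trajectory layout are assumptions):

```python
import numpy as np

def mean_curvature(traj):
    # traj: array of shape (T, d), one row per word in a given layer.
    steps = np.diff(traj, axis=0)                        # T-1 difference vectors
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    # Angle between consecutive unit steps; 0 means a straight trajectory.
    cos = np.clip(np.sum(steps[:-1] * steps[1:], axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

Straightening then corresponds to this value decreasing across layers, so that the next step becomes a near-linear extrapolation of the previous one.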
&#13;
Collectively, these lines of work provide essential ingredients for building a more computationally precise model of language processing in the human brain, leveraging synergies with artificial neural network language models.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157110</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards depth-resolved multi-cubic-centimeter field of view endoscopic camera for intraoperative nerve identification.</title>
<link>https://hdl.handle.net/1721.1/157108</link>
<description>Towards depth-resolved multi-cubic-centimeter field of view endoscopic camera for intraoperative nerve identification.
Yoon, Yong-Chul
One out of every five peripheral nerve injuries in the United States has an iatrogenic origin. These injuries can cause chronic neuropathies, paresthesia, and varying functional losses.&#13;
To reduce the risk of nerve injury, surgeons meticulously identify and track nerves within the surgical field using white-light magnification. However, small (sub-millimeter diameter)&#13;
and buried nerves are often difficult to identify with this approach. This has motivated a long-standing effort to develop improved nerve visualization technologies that are deployable in both open and minimally invasive surgical workflows. Fluorescence imaging is the most commonly explored strategy, and multiple exogenous fluorophores that bind to nerve-specific targets have been developed. However, fluorescence imaging has several limitations, including a disrupted workflow (due to the need for specialized lighting) and a significant regulatory burden. For these reasons, fluorescence-based nerve visualization has not yet been clinically adopted.&#13;
&#13;
Polarization-based optical coherence tomography (OCT) approaches to nerve visualization would inherently mitigate each of these translational challenges. First, OCT imaging is not affected by room light and thus can be used simultaneously with surgical lighting. Second, OCT is label-free and avoids regulatory pathways associated with new drug development. However, because OCT offers high-resolution, three-dimensional imaging, a surgical OCT system supporting video-rate acquisition of cubic-centimeter fields would require signal capture bandwidths that are several orders of magnitude higher than what is available today. It is unlikely that this gap will be addressed through incremental advances in existing OCT platforms.&#13;
&#13;
In this thesis, we present a radically different OCT platform designed to aggressively reduce signal capture bandwidths while also simplifying the optical and electronic subsystem designs. The proposed approach is contour-looping (CL-) OCT (pronounced “cloaked”). It retains the depth-sectioning capability upon which OCT is based but discards the requirement of comprehensive three-dimensional imaging, which is what drives impractical signal capture bandwidths. As such, CL-OCT defines a strategy for low-bandwidth depth-sectioned imaging that may be sufficient for specific imaging tasks such as nerve identification. Importantly, the CL-OCT platform is compatible with a camera-based (i.e., scan-free) deployment that is advantageous for endoscopic use. In the second component of this thesis, we provide extensive theoretical and experimental studies on how optical amplifiers can be used in OCT to address sensitivity challenges of high-speed surgical OCT platforms like CL-OCT. Together, these lines of research define a new approach to meeting the need for OCT-based solutions for intraoperative nerve identification. This technology, if successfully translated, may lead to a lower incidence of iatrogenic nerve injury.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157108</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive and Prescriptive Trees for Optimization and Control Problems</title>
<link>https://hdl.handle.net/1721.1/157107</link>
<description>Predictive and Prescriptive Trees for Optimization and Control Problems
Kim, Cheol Woo
This thesis introduces novel methods to expedite the solution of a broad range of optimization and control problems using machine learning, specifically decision tree algorithms. In many practical settings, similar optimization and control problems often need to be solved repeatedly. We propose methods to leverage patterns from pre-solved problem instances using machine learning, leading to drastically faster solutions once training is complete. &#13;
&#13;
The thesis is structured into four parts, each tackling a different class of optimization or control problems. In Chapter 2, we propose a machine learning approach to the optimal control of multiclass fluid queueing networks (MFQNETs). We prove that a piecewise constant optimal policy exists for MFQNET control problems, with segments separated by hyperplanes passing through the origin. We use Optimal Classification Trees with hyperplane splits (OCT-H) to learn an optimal control policy for MFQNETs. &#13;
&#13;
In Chapter 3, we study fluid restless multi-armed bandits (FRMABs), deriving fundamental properties and designing efficient numerical algorithms. Using these results, we learn state feedback policies with OCT-H and introduce a novel feature augmentation technique to handle nonlinearities.&#13;
&#13;
In Chapter 4, we propose a machine learning framework for solving two-stage linear adaptive robust optimization problems with binary here-and-now decisions and polyhedral uncertainty sets. We also introduce novel methods to expedite training data generation and reduce the number of different target classes the machine learning algorithm needs to be trained on. &#13;
&#13;
In Chapter 5, we introduce a prescriptive machine learning approach to speed up the process of solving mixed integer convex optimization (MICO) problems. We use a prescriptive machine learning algorithm, Optimal Policy Trees (OPT), instead of more commonly used classification algorithms. We demonstrate that OPT-based methods have a significant advantage in finding feasible solutions compared to classification algorithms.&#13;
&#13;
We test our approach on various synthetic and real-world problems. Using the proposed methods, we can obtain high-quality solutions to a broad range of large-scale optimization and control problems in real time, within milliseconds.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157107</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Memory and fluctuations in chemical dynamics</title>
<link>https://hdl.handle.net/1721.1/157102</link>
<description>Memory and fluctuations in chemical dynamics
Farahvash, Ardavan
This thesis describes the development and application of theories that elucidate both the static and time-dependent responses of various condensed-phase environments to molecular systems. Part I, the cornerstone of this thesis, explores the role of surface vibrations in gas-phase heterogeneous catalysis. Utilizing the Mori-Zwanzig projection operator formalism, I have developed a theory that maps surface vibrations to a generalized Langevin equation (GLE). Two projection schemes are considered. The first scheme projects the motion of the entire solid substrate onto the motion of molecular adsorbates. The second scheme projects onto both the motion of adsorbates and of surface adsorption sites. Through the first approach, I demonstrate that physisorbed species primarily couple with acoustic phonons, while chemisorbed species couple with dispersionless local vibrations. I also use this scheme to examine how phonons affect reaction rates, in ensembles both near and far from thermal equilibrium. Using the second approach, I study how energy is dissipated in simulations of molecule-surface scattering. I demonstrate that phonon confinement effects from nanoscale simulations can significantly impact calculated surface sticking coefficients. Part II considers the role of solvent in adsorption and desorption at liquid-solid interfaces. Specifically, I employ enhanced sampling methods to study a model system of carbon monoxide at a water/platinum interface. Using these methods, I show that the local coordination number around a CO molecule plays a crucial role in the transition states of the adsorption/desorption process, and that CO tends to increase its coordination number before desorbing. Part III develops a machine learning and electronic structure framework for the computationally efficient parametrization of Frenkel Hamiltonians from snapshots of molecular dynamics simulations of organic semiconductors. 
Direct electronic structure calculations on these snapshots encode the nuclear fluctuations of the chromophores in the material and how they couple to excitons, but at enormous cost. I discuss how the strategic application of machine learning methods can drastically reduce the number of electronic structure calculations needed to produce a complete exciton trajectory. Critically, I demonstrate that by decomposing the two-molecule excitonic coupling into interactions between one-molecule transition monopoles, a more accurate and less data-intensive machine learning scheme can be devised.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157102</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-molecule diagnostics to support curative interventions for tuberculosis and HIV</title>
<link>https://hdl.handle.net/1721.1/157100</link>
<description>Single-molecule diagnostics to support curative interventions for tuberculosis and HIV
Dougan, Tyler J.
Tuberculosis (TB) and the human immunodeficiency virus (HIV) are two of the leading causes of death worldwide. Tuberculosis is curable, but because of the difficulties of diagnosing it, many people with TB—and the majority of those killed by it—never begin treatment. HIV can be treated with lifelong medication. But if drug resistance develops or treatment is interrupted, the virus resurges. This could be prevented by an HIV cure that either clears HIV from the body or keeps the virus suppressed without continued therapy. Next-generation diagnostics will play a central role in supporting access to existing TB cures and future HIV cures. In this thesis, I describe the advancement of digital enzyme-linked immunosorbent assay (ELISA) protein detection methods in service of curing these two deadly infectious diseases.&#13;
&#13;
Existing TB diagnostics rely heavily on sputum, which is highly infectious, leading to increased TB cases among health care workers and limiting testing to settings with appropriate biosafety precautions. We developed a multiplexed Single Molecule Array (Simoa) digital ELISA that can diagnose TB from biomarkers in urine. Our assay is highly sensitive, as demonstrated in diverse cohorts totaling approximately 600 individuals.&#13;
&#13;
Simoa is a robust and widely used platform, but its accessibility is limited because it relies heavily on advanced microwell and imaging technology. We developed a new digital ELISA platform, called Molecular On-bead Signal Amplification for Individual Counting (MOSAIC), that performs the final readout step with a flow cytometer, bringing digital ELISA within reach of many hospitals and other health care centers. In addition to reducing instrumentation and cost, MOSAIC also allows for greater sensitivity and higher-order multiplexing than Simoa. It is, to our knowledge, the most sensitive protein measurement technique ever developed, with attomolar limits of detection.&#13;
&#13;
Finally, I describe the application of MOSAIC toward the development of HIV cures and longer-acting antiretroviral medications. These depend on a deeper understanding of the biology of HIV, and when they are ready for clinical trials, will also need highly sensitive tests to characterize the virus-host interactions and determine whether they are working. We developed ultrasensitive Simoa and MOSAIC assays for 20 circulating host and viral proteins and measured them in a cohort of 17 individuals with HIV whose treatment was interrupted, to evaluate which biomarkers could predict when the virus would rebound. Baseline levels of these biomarkers did not predict viral rebound, but changes over time did, highlighting the need for scalable personalized approaches.&#13;
&#13;
HIV and TB are two of the world's deadliest diseases. The next generation of diagnostic technologies, a urine test conducted on expensive instrumentation, and newly identified circulating biomarkers will not in themselves solve these problems. But these more sensitive assays bring us one step closer to the true biology of these diseases, and these advances in accessibility bring this ultrasensitive monitoring one step closer to the clinic.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157100</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in Models and Algorithms for Management Science</title>
<link>https://hdl.handle.net/1721.1/157094</link>
<description>Advancements in Models and Algorithms for Management Science
Yao, Yuanfan (Evan)
Management science is an interdisciplinary field that leverages a variety of analytical techniques to inform effective decision-making within businesses and organizations. It is a dynamic field that is continuously innovating as data becomes increasingly available and businesses leverage new digital technologies. As a result, there is a constant need to develop models and algorithms to address unique decision-making settings. This thesis is composed of three independent chapters, each of which proposes novel modeling insights and algorithmic solutions for real-world problems.&#13;
&#13;
Chapter 2 studies a mathematical model in online resource allocation where a decision-maker must efficiently allocate a scarce resource to patient and impatient customers. This study is motivated by recent advancements in on-demand online platforms (such as Uber and Instacart) where customers who are patient (e.g., can wait a few minutes for a ride) are offered a discounted price. Under this model, we develop a simple resource allocation policy that has provable theoretical guarantees under a competitive ratio analysis and is also easy to use in practice. Our work supports the managerial intuition that offering discounts for patient customers leads to more robust and efficient resource allocation.&#13;
&#13;
Chapter 3 addresses the challenge of organizing a large corpus of documents into an expert-defined labeling scheme without manual annotation or labeled training examples. This work is motivated by a collaboration with a major pharmaceutical company to streamline root cause analysis of deviations in the manufacturing process. In investigating a new deviation, quickly finding related historical deviations is crucial, but such deviation reports are not organized in a way that facilitates this task. This chapter proposes an innovative methodology called Document Classification with Reference Information (DCRI), which crucially leverages the existence of reference information: documents that describe the taxonomy of interest but are not labeled examples themselves. Empirical results show that DCRI can produce highly accurate labels with minimal intervention from subject matter experts. Based on these empirical findings, we develop a mathematical model for the underlying data generating process and propose both numerical and theoretical findings that further justify the DCRI approach.&#13;
&#13;
Chapter 4 studies a novel way of generating insights from black-box classification models by deriving simple conditions under which the model predicts confidently. Existing work on explaining binary black-box classifiers typically studies when the model predicts 1 or 0 without accounting for the confidence (i.e., probability) of the prediction. Our work argues that explaining when a model makes confident predictions is more useful to a practitioner, as such predictions typically correspond to when a model is more accurate and reliable. We define a novel evaluation metric for black-box explainers that emphasizes confident predictions and develop a local-search based methodology to find interpretable lists of if-then rules that optimize for this metric. Evaluation on six real-world datasets suggests that such rule-based explanations are effective at capturing highly confident data points. By targeting highly confident predictions of black-box models, our methodology generates rules that are more useful than existing approaches which only explain a classifier's binary predictions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157094</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultramafic Alteration and the Cooling of Earth and Mars</title>
<link>https://hdl.handle.net/1721.1/157093</link>
<description>Ultramafic Alteration and the Cooling of Earth and Mars
Murray, Joshua
This thesis deals with the influence of 'ultramafic' rocks over the climate of planets. Ultramafic rocks, rich in Mg and Fe, are the most common rocks on Earth but exist primarily in the mantle and rarely outcrop on the surface. They are incredibly unstable under Earth's surface conditions, where they are altered via incongruent reactions which form clay minerals, iron oxides, and ultimately release cations to the ocean. Due to their instability, they play an outsized role in Earth's long-term carbon cycle. My first chapter investigates a hitherto unappreciated mechanism by which ultramafic rocks serve as a carbon sink, through the formation of high-surface-area clays and the resultant burial of organic carbon. I use a combination of mineral weathering models and proxy data to show that this mechanism has contributed to the glaciations of the Palaeozoic (541–252 Ma).&#13;
&#13;
Unlike Earth, igneous rocks on the Martian surface are frequently of ultramafic composition. My second chapter argues that the alteration of these Martian ultramafic rocks was fundamental in the cooling of the planet from a habitable surface with liquid water to a cold and icy planet, largely devoid of an atmosphere. I show that the same high-surface-area clay minerals which bury organic carbon on Earth are prevalent enough on Mars to store the bulk of its initial 1-4 bar atmosphere as adsorbed methane. I postulate that this methane was formed abiotically during hydrothermal alteration of ultramafic rocks, a process which is observed in ultramafic systems on Earth. I show that this framework reconciles the histories of \dc{} and atmospheric loss-to-space on Mars.&#13;
&#13;
My final chapter quantifies the effects of the alteration of ultramafic and mafic rocks across the Taconic orogeny in Newfoundland, Canada. This collision exposed one of the most well-studied ultramafic bodies on Earth, the Bay of Islands ophiolite, and closely preceded global cooling in the Middle-Late Ordovician (470–445 Ma). I present a new method, leveraging both geochemical analysis and modelling of basin sediments, to infer ancient silicate weathering fluxes. I show that the relative weathering rate in this region increased dramatically during the Taconic orogeny. This method could be applied throughout systems with tectonically-driven changes in surface lithology to build a fuller understanding of the forces which modulate Earth's climate. &#13;
&#13;
My work asks as many questions as it answers but tries to honestly portray the uncertainties associated with the application of quantitative methods in noisy, geologic systems. I hope that in trying to meaningfully constrain these processes I plant seeds of inquiry from which myself and others can one day make more concrete statements of the cause and effect between tectonics and climate.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157093</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instability Scaffolding: Enacting Strategic Instabilities to Produce Authentic Premium Wine</title>
<link>https://hdl.handle.net/1721.1/157091</link>
<description>Instability Scaffolding: Enacting Strategic Instabilities to Produce Authentic Premium Wine
Zhang, Alan
Unstable conditions can be a risk to productions, potentially disrupting operations and rendering activities unpredictable. While a common organizational response is to minimize instability, I find instead that producers can also purposefully cultivate it to generate value—through strategic instabilities. My dissertation explores how strategic instabilities are enacted in productions of fine wine, articulating the practices and arrangements that facilitate working with unstable production conditions in productive ways—a process I refer to as instability scaffolding. My data are drawn from a 16-month ethnographic study of two field sites in the California premium wine industry, combined with archival data and industry interviews. In Chapter One, I explain why minimizing certain sources of instability, while potentially more efficient, would be considered inauthentic for premium wine productions. In Chapter Two, I look historically at the California premium wine category, and explain why and how working with instabilities of nature became a basis of its authenticity. This chapter examines the instability scaffolding (i.e., cooperative category framing work) performed in the California wine industry to enable such productions to become commercially viable, and identifies the intra-category mutualism that motivated competitors to support such productions. Chapter Three offers insight into the modern-day operations of a world-renowned fine wine producer. I identify the trajectory management work scaffolding this organization’s achievement of craft authenticity, turning production instability into productive instability so that high-quality wines are produced consistently year after year despite relying on unpredictable activities. Chapter Four explores a regional-level instability scaffolding allowing many producers to keep their operations logistically feasible despite working with unstable conditions. 
I show how vineyard proprietors and contract providers worked together to sustain craft authenticity at scale in the region through a process I theorize as contract custodianship. My dissertation concludes in Chapter Five with a discussion of instability scaffolding more broadly and its implications for further research. I highlight how my research contributes new insights into the multiple ways organizations can leverage complex interactions in production by skillfully engaging with them to express authenticity in productions at commercial scale.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157091</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of Microbial Primary and Secondary Metabolism in the Marine Realm</title>
<link>https://hdl.handle.net/1721.1/157088</link>
<description>Characterization of Microbial Primary and Secondary Metabolism in the Marine Realm
Geller-McGrath, David Edward
This thesis applies meta-omics data analysis to elucidate the ecological roles of marine microorganisms in diverse habitats and includes the development of new bioinformatics tools to enhance these analyses. In my second chapter, I applied genome mining tools to analyze the gene content and expression of biosynthetic gene clusters (BGCs). The analysis of BGCs through large-scale genome mining efforts has identified diverse natural products with potential applications in medicine and biotechnology. Many marine environments, particularly oxygen-depleted water columns and sediments, however, remain under-represented in these studies. Analysis of BGCs in free-living and particle-associated microbial communities along the oxycline water column of the Cariaco Basin, Venezuela, revealed that differences in water column redox potential were associated with microbial lifestyle and the predicted composition and production of secondary metabolites. This experience set the stage for my third chapter, in which I developed MetaPathPredict, a machine learning-based tool for predicting the metabolic potential of bacterial genomes. This tool addresses the lack of computational pipelines for pathway reconstruction that predict the presence of KEGG modules in highly incomplete prokaryotic genomes. MetaPathPredict made robust predictions in highly incomplete bacterial genomes, enabling more accurate reconstruction of their metabolic potential. In my fourth chapter, I performed metagenomic analysis of microbial communities in the hydrothermally-influenced sediments of Guaymas Basin (Gulf of California, Mexico). Previous studies indicated a decline in microbial abundance and diversity with increasing sediment depth. Analysis revealed a distribution of MAGs dominated by Chloroflexota and Thermoproteota, with diversity decreasing as temperature increased, consistent with a downcore reduction in subsurface biosphere diversity. 
Specific archaeal MAGs within the Thermoproteota and Hadarchaeota increased in abundance and recruitment of metatranscriptome reads towards deeper, hotter sediments, marking a transition to a specialized deep biosphere. In my fifth chapter, I developed MetaPathPredict-E, a deep learning-powered extension of MetaPathPredict for eukaryotic metabolism predictions. Eukaryotic metabolism is diverse, reflecting varied lifestyles across eukaryotic kingdoms, but the complexity of eukaryotic genomes presents challenges for assembly and annotation. MetaPathPredict-E was trained on diverse eukaryotic genomes and transcriptomes, demonstrating robust performance on test datasets, thus advancing the study of eukaryotic metabolic potential from environmental samples.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157088</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small molecule motion within and through organic nanomaterials: an anthology</title>
<link>https://hdl.handle.net/1721.1/157087</link>
<description>Small molecule motion within and through organic nanomaterials: an anthology
Kaser, Sam
This thesis chronicles three distinct projects united by the common theme of small molecule motion in organic materials. Chapter 1 provides a holistic introduction to key background for the main body chapters, Chapters 2-4. The work outlined in Chapters 2 and 3 was completed under the supervision of Prof. Julia Ortony and pertains to self-assembled small-molecule aramid amphiphile (AA) nanostructures. AAs are noteworthy within the class of self-assembled amphiphile materials because of their unusual mechanical stability borne of strong intermolecular interactions between aramid units. In Chapter 2, I evaluate local conformational dynamics in different chemical domains of an AA nanoribbon through Electron Paramagnetic Resonance (EPR) spectroscopy. These experiments were enabled by co-assembly of AAs with stable nitroxide radical spin labels into the nanoribbon ensemble. Distinct conformational behavior is resolved between domains, and variable temperature studies enable description of each spin label environment through phase transition characterization and activation energy analysis. Chapter 3 explores AA nanostructure morphology in response to pH changes, i.e. ionization-modulated molecular rearrangements. This chapter is divided into Chapters 3A and 3B. In Chapter 3A, the pH dependency of diammonium headgroup AA nanostructures is correlated with aramid backbone flexibility and intermolecular interactions. These diammonium headgroups are also found to exhibit an aggregation-induced pKa drop, which we leverage in Chapter 3B to induce pH responsiveness in a guanidinium headgroup moiety over a physiologically relevant pH range. Finally, Chapter 4 (completed under the supervision of Prof. Zachary Smith) explores molecular transport of CO₂ gas mixtures through a novel guanidinium-functionalized polymer of intrinsic microporosity (PIM-G) membrane. 
PIM-G shows high permselectivity towards CO₂ over CH₄, N₂, and O₂, and selectivity was further improved by exchanging the polymer’s default Cl− counterion with larger halides. This halide exchange-driven selectivity enhancement occurred without a commensurate drop in CO₂ permeability. This thesis work investigates small molecule motion within novel organic nanomaterials, outlining analytical approaches and structure-property relationships that may be applicable to broad categories of functional materials.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157087</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing Structural and Spin-state Dependent Reactivity of Single Atom Catalysts (SACs) with Systematically Improvable Computational Tools</title>
<link>https://hdl.handle.net/1721.1/157086</link>
<description>Revealing Structural and Spin-state Dependent Reactivity of Single Atom Catalysts (SACs) with Systematically Improvable Computational Tools
Jia, Haojun (贾皓钧)
Efficient catalysts are essential for advancing energy conversion and storage technologies, particularly for challenging reactions such as methane-to-methanol conversion and the oxygen reduction reaction (ORR) important for fuel cells. Single-atom catalysts (SACs), particularly doped graphitic catalysts, have emerged as a promising class of materials. SACs combine the advantages of homogeneous and heterogeneous catalysts, offering tunable active sites and scalability. However, understanding the relationship between the structure of SAC active sites and their reactivity remains challenging due to the limitations of experimental characterizations. Computational modeling provides atomic-level insights into SAC active site configurations and the impact of the metal's local environment on their properties and catalytic activity. This thesis presents a combined effort utilizing computational methods to explore the design and optimization of SACs for methane-to-methanol conversion and the ORR.&#13;
&#13;
In this thesis, we use range-separated hybrid density functional theory (DFT) to compare the energetics and structure of the direct metal-coordinating environment in the presence of 2p (i.e., N or O) and 3p (i.e., P or S) dopants and with increasing finite graphene model flake size to mimic differences in local rigidity. While metal–ligand bond lengths in SACs are significantly shorter than those in transition metal complexes, they remain longer than those in macrocyclic SAC-mimic complexes. Consequently, we observe SACs to simultaneously favor the formation of the metal–oxo while also allowing for methanol release. This reactivity is different from what has been observed for large sets of square planar model homogeneous catalysts. Moreover, modulating the coordination environment near single metal sites by means of codopants, we carry out a large-scale virtual high-throughput screening (VHTS) of transition metal (i.e., Mn, Fe, Co, and Ru) SACs codoped with various elements (i.e., N, O, P, and S) in numerous spin and oxidation (i.e., M(II)/M(III)) states for the challenging conversion of methane to methanol. We identify that the ground-state preference is metal- and oxidation-state-dependent. We observe a weak negative correlation between the oxo formation energy (ΔE(oxo)) and the energy of hydrogen atom transfer (ΔE(HAT)), thanks to the high variability in the coordination environment. Therefore, codoped SACs demonstrate flexible tunability that disrupts linear free energy relationships in a manner similar to that of homogeneous catalysts without losing the scalability of heterogeneous catalysts. Further exploration focuses on codoped Fe and Ru-based SACs for ORR using VHTS and machine learning (ML). The ML models demonstrate superior accuracy in predicting reaction energetics compared to traditional scaling relationships. The findings validate codoping as a powerful strategy for tuning the properties of SACs to achieve enhanced ORR performance. 
Promising catalyst candidates are proposed for experimental validation, showcasing the potential of SACs in overcoming limitations in catalyst design for challenging reactions and providing valuable insights for the rational design of high-performance ORR catalysts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157086</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding divergence in marine protistan communities: from strain diversity to basin biogeography</title>
<link>https://hdl.handle.net/1721.1/157085</link>
<description>Decoding divergence in marine protistan communities: from strain diversity to basin biogeography
Krinos Quinn, Arianna Isabella
Protists (microbial eukaryotes) in the global ocean are critical components of primary productivity and nutrient recycling. Protists are genetically diverse and have distinctive ecological niches based on genetically-driven differences in physiological fitness. A deeper understanding of which dimensions of protistan genetic diversity translate to measurable phenotypic variation is needed to predict the impact of protists on marine biogeochemistry and protists’ environmental change sensitivity. I cultured twelve strains of the coccolithophore Gephyrocapsa huxleyi across temperatures, which revealed strain-specific differences in thermal optima and niche widths. I used traits measured during the experiments to design a Darwin ecosystem model simulation, which demonstrated basin-specific biogeography of thermal optima and niche widths (Chapter 2). For seven of the twelve strains, I sequenced transcriptomes at 3-5 temperatures to assess gene expression variation. Using the RNAseq data, I developed a regression modeling approach to identify proteome allocation model parameters. Combining differential expression analysis, gene abundance normalization, and the regression model to explore the proteome allocation model parameter space, I probed differences in modeled strategies of G. huxleyi strains in response to temperature (Chapter 3). Scalable workflows highlight the challenge and promise of meta-omic data to link community structure to physiology. I developed a pipeline for metatranscriptome analysis and taxonomic annotation to address the lack of tools built specifically for microbial eukaryotes, and created mock communities to assess recovery success in protistan metatranscriptome analysis workflows (Chapters 4 and 5). I applied these tools to a three-year metatranscriptomic dataset from Cape Cod Bay to investigate a recent emergence of a summer coccolithophore population in the 20-year time series, tracking shifts in nutrient physiology to identify potential bottom-up controls (Chapter 6). This dissertation advances approaches to constrain the protistan taxonomic diversity that underlies shifts in global primary productivity and nutrient turnover. Specifically, strains of a single phytoplankton species revealed diversity relevant to a global ecosystem model. Future work will clarify variability in protistan gene content and expression that may underpin both protists’ present ecological niches and their future climate change response.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157085</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dimers, Trimers and their Superpositions in a Bose-Fermi Mixture</title>
<link>https://hdl.handle.net/1721.1/157084</link>
<description>Dimers, Trimers and their Superpositions in a Bose-Fermi Mixture
Chuang, Alexander
This thesis describes experiments on few- and many-body bound states in a Bose-Fermi&#13;
mixture of ultracold ²³Na and ⁴⁰K atoms. We examine the formation of dimers and trimers in&#13;
a balanced, thermal mixture and their evolution into strongly interacting Bose polarons with&#13;
hybridized dimer and trimer character when we instead immerse an impurity concentration&#13;
of K into a dense quantum bath of Na.&#13;
We report a novel direct observation of a heteronuclear halo trimer, consisting of two&#13;
lighter Na atoms and one heavier K atom, alongside the familiar NaK Feshbach dimer, using&#13;
radiofrequency (rf) spectroscopy. We find that in proximity to a Feshbach resonance, the&#13;
trimer feature closely follows the dimer resonance across an order-of-magnitude variation&#13;
in binding energy. We show that the measured binding energies are consistent with our&#13;
theoretical model of the trimer as having the structure of a Feshbach dimer weakly bound&#13;
to one additional boson.&#13;
We then study the fate of impurities interacting with a bosonic quantum bath, the&#13;
paradigmatic Bose polaron scenario. By preparing an initial attractive polaron state, we&#13;
probe previously inaccessible, highly correlated Bose polaron states, again on the repulsive&#13;
side of the Feshbach resonance. Deep within the condensate, the rf spectra no longer exhibit&#13;
discrete dimer and trimer features as before, but are instead dominated by a single broad feature.&#13;
We attribute this to the impurity-boson coupling becoming stronger than the dimer-trimer&#13;
energy splitting, leading to hybridization of dimer and trimer states and, consequently, an effective level repulsion consistent with the spectra we observe. This experiment demonstrates&#13;
the remarkable interplay between polaron physics and bound-state formation in a quantum&#13;
environment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157084</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular characterization of microbial interactions with labile&#13;
dissolved organic matter</title>
<link>https://hdl.handle.net/1721.1/157080</link>
<description>Molecular characterization of microbial interactions with labile&#13;
dissolved organic matter
Halloran, Kathryn H.
Marine microbes produce and consume labile dissolved organic matter (DOM), generating a carbon flux with significant implications for global carbon cycling and microbial ecosystems. Intracellular measurements of biological activity cannot fully capture microbial interactions with dissolved carbon. Better understanding this carbon flux thus requires direct and compound-specific characterization of metabolites, the small organic biomolecules that make up labile DOM. However, these measurements are challenging due to low metabolite concentrations, high ambient salt concentrations, and the complexity of labile DOM. More complete characterization of dissolved metabolites is therefore a standing challenge in the field. This in turn leaves many open questions with respect to the specificity of microbe-DOM interactions and the biotic and abiotic drivers of those interactions. This thesis addresses those challenges and questions. In Chapter 2, I explore the compound-specific uptake of metabolites by the copiotrophic gamma-proteobacterium Alteromonas macleodii, with a focus on metabolites derived from the cyanobacterium Prochlorococcus. I find that Alteromonas grows on 3-methyl-2-oxobutanoic acid, a valine intermediate, but not on the other cognate branched-chain amino acid intermediates. This substrate selectivity is likely driven by transporter specificity. The distinct fate of these structurally similar molecules emphasizes the importance of compound-specific characterization of labile DOM. To expand our ability to make these compound-specific measurements, in Chapter 3 I develop a method for derivatizing carboxylate-, carbonyl-, and phosphate-containing molecules via aniline derivatization, solid phase extraction, and liquid chromatography-tandem mass spectrometry (LC-MS/MS). This method is able to quantify 51 different metabolites dissolved in seawater, 25 of which could not be detected previously, with pM to nM detection limits. 
I verify the utility of this method by applying aniline derivatization to phytoplankton culture filtrates and field samples. Additionally, I show that where dissolved metabolites can be quantified by multiple methods, the measurements obtained by aniline derivatization are in good agreement with measurements yielded by other methods. Finally, in Chapter 4 I use aniline derivatization to characterize the diel variability of labile DOM produced by phototrophic microbes. Here, I apply aniline derivatization to filtrate from cultures of Prochlorococcus grown under 24-hour diel light/dark conditions and sampled every two hours. I find that Prochlorococcus cells not only release metabolites into solution, but also take those metabolites up again, with diel rhythmicity. Together, this thesis shows that microbe-DOM interactions can be remarkably subtle and complex; expands our ability to quantify the metabolites that make up labile DOM; and demonstrates the importance of directly quantifying these dissolved metabolites to fully characterize microbial ecology and carbon cycling in the ocean.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157080</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural insights into microbial one-carbon metabolic enzymes: Ni–Fe–S-dependent carbon monoxide dehydrogenases and acetyl-CoA synthases</title>
<link>https://hdl.handle.net/1721.1/157079</link>
<description>Structural insights into microbial one-carbon metabolic enzymes: Ni–Fe–S-dependent carbon monoxide dehydrogenases and acetyl-CoA synthases
Biester, Alison
Carbon monoxide dehydrogenase (CODH) and acetyl-CoA synthase (ACS) enzymes play crucial roles in the global carbon cycle by catalyzing reversible carbon dioxide reduction and reversible acetyl-CoA synthesis, respectively. In some cases, CODHs are monofunctional, whereas in other cases CODHs form complexes with ACSs and their catalysis is coupled through an internal gas channel between the CODH and ACS active sites. These carbon-fixing enzymes are thought to be among the oldest on Earth, dating back to the last universal common ancestor based on strong conservation of these enzymes between the bacterial and archaeal domains of life. In this thesis, we present structural characterizations of bacterial and archaeal CODHs. Using xenon pressurization, we elucidate gas channel paths in a monofunctional CODH from bacteria through crystallographic studies. This structure provides the first experimental visualization of gas channels in a monofunctional CODH. We compare monofunctional CODH gas channels to the gas channels observed in bacterial CODH/ACS complexes and find that monofunctional CODH gas channels are highly branched compared to those in CODH/ACS complexes, wherein the specificity of the gas channel path is important for active site coupling. In methanogens, CODH and ACS catalysis are coupled, but a complex between these two enzymes had never previously been visualized. The methanogenic CODH/ACS complex has been particularly mysterious because the methanogenic ACS lacks the domain that binds CODH in acetogens. In this work, we use cryogenic electron microscopy to capture the first-ever snapshot of an archaeal CODH/ACS complex. We observe a hydrophobic cavity between the CODH and ACS active sites that is rerouted relative to bacterial CODH/ACSs but conserved with a channel path in the monofunctional CODH. In another cryogenic electron microscopy structure of the archaeal CODH alone, we see that this hydrophobic cavity becomes plugged such that CO cannot leave CODH unless ACS is bound. 
This channel plugging mechanism is conserved with the channel plugging mechanism observed in the acetogenic CODH/ACS complex. This work advances our understanding of how CO is carried to and between active sites in CODH and ACS, and elucidates intriguing similarities between CODH/ACS complexes in acetogens and methanogens.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157079</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and Computational Advancements in Peptidomimetic Ligand Discovery</title>
<link>https://hdl.handle.net/1721.1/157078</link>
<description>Experimental and Computational Advancements in Peptidomimetic Ligand Discovery
Lee, Michael Alan
The use of peptides as therapeutics is a growing area of interest within the pharmaceutical industry for the modulation of protein-protein interactions (PPIs). Peptides inhabit a unique therapeutic space because of their high levels of chemical customization balanced with their potential for high specificity due to a wide variety of potential structures. At the same time, discovery tools for finding peptides that modify PPIs have evolved, including advances in affinity selection techniques and combinatorial chemistry. Specifically, the use of solid phase peptide synthesis for split-and-pool chemistry allows for rapid access to highly diverse (&gt;10⁸ total sequences) compound libraries for use in ligand discovery. A primary technique for in vitro ligand discovery is affinity selection-mass spectrometry (AS-MS), which utilizes tandem mass spectrometry to decode complex mixtures of peptide ligands pulled down from a peptide library through affinity selection. This approach provides unique advantages due to the high levels of chemical customization that can be performed on synthetic peptide libraries, including the incorporation of unnatural amino acids or the modification of library structure through macrocyclization.&#13;
This thesis will focus on the development of experimental and computational tools to analyze affinity selection datasets more efficiently and thoroughly. We demonstrate a synthesis of macrocyclic peptide libraries that increases the diversity of synthetic macrocyclic libraries while utilizing accessible, efficient chemistry for cyclization. These libraries are then used for the discovery of novel ligands to two proteins. Structure-activity relationships are established for one of these ligands and its affinity is matured through the use of focused libraries containing a variety of unnatural amino acids. Additionally, we investigate a variety of resins used for solid phase peptide synthesis, particularly in the synthesis of small domain proteins or difficult peptide sequences.&#13;
Because of the high number of peptides synthesized and pulled down by AS-MS experiments, efficient computational methods are crucial for effective ligand discovery efforts. Here, we discuss two methods of expanding data analysis, the first being a sequence-independent enrichment quantification. AS-MS experiments use the decoded peptide sequence from tandem MS/MS data to nominate potential hit peptides, but that process depends on the efficient fragmentation of a&#13;
target peptide and the quality of the subsequent MS2 spectrum. We utilize techniques to identify putative hits through the comparison of peptide enrichment based only on mass-to-charge ratio without an assigned sequence, allowing for label-free MS1 quantification. The second method utilizes machine learning techniques to rationalize trends in successfully sequenced peptides from AS-MS experiments with respect to target proteins. This approach enables the creation of a ligand “sequence space”, which allows for the incorporation of unnatural amino acids in ligand discovery.&#13;
Overall, this thesis presents a variety of methods to enhance the scope of peptide-based drug discovery. We anticipate this work to accelerate the process of drug discovery through a diversification of peptide structure combined with more powerful computational analytics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157078</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross Section Measurement of Exclusive ϕ-Meson Electroproduction off the Proton at CLAS12</title>
<link>https://hdl.handle.net/1721.1/157071</link>
<description>Cross Section Measurement of Exclusive ϕ-Meson Electroproduction off the Proton at CLAS12
Moran, Patrick
This analysis studies the exclusive ϕ meson electroproduction process ep → e′p′ϕ at CLAS12 in the kinematic region 0.39 ≤ Q² ≤ 8.38 GeV², 1.97 ≤ W ≤ 4.03 GeV, and 0.17 ≤ −t ≤ 7.26 GeV². Cross section σ(Q²,W) and differential cross section dσ/dt (Q²,W,t) measurements are reported. The scaling of the overall cross section was determined to be 1/Q^(6.47±0.97), which is consistent with the Generalized Parton Distribution (GPD) prediction of 1/Q⁶. The ratios of the longitudinal and transverse cross sections, R = σ_L/σ_T, are extracted from the angular decay distributions for four values of Q² and are found to be consistent with the GPD scaling prediction. The mean-square gluonic radius of the proton ⟨b²⟩_g is extracted from the t-dependence of the differential cross sections dσ/dt in the kinematic region 0.12 ≤ x_B ≤ 0.39, the first such measurement in the valence regime.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157071</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Synthesis of Stimuli-Responsive Polymers with Programmable Cleavability</title>
<link>https://hdl.handle.net/1721.1/157070</link>
<description>Design and Synthesis of Stimuli-Responsive Polymers with Programmable Cleavability
Zafar, Hadiqa
Polymers comprise a large portion of modern-day materials, from everyday plastics that we can hold and use, to nanomaterials imperceptible to the naked eye. Applying synthetic chemistry to impart structural changes to established polymers offers a promising path to introduce novel functionalities for applications ranging from biology to sustainability. In particular, this thesis explores the synthesis, characterization, and evaluation of polymeric platforms that rationally incorporate moieties capable of chemical cleavage, enhancing their design and performance. In the first half, we explore advancements to linker design and controlled release of payloads from molecular bottlebrush polymers. The first chapter introduces bottlebrush polymers as nanocarriers for therapeutics, and provides a detailed literature analysis of the synthetic and architectural developments that have been reported for these constructs, as well as outlooks for the future. The second chapter reports the first synthesis of peptide-containing bivalent bottlebrush (co)polymers (BBPs), featuring caspase-3-cleavable peptides linked to fluorogenic probes that provide a “turn-on” signal upon enzymatic cleavage. The impacts of different architectural features of these polymers on enzyme access reveal insights into the interactions of enzymes with BBPs, and provide design criteria for future therapeutic systems leveraging this approach. The third chapter investigates a synergistic approach to treating pancreatic ductal adenocarcinoma (PDAC) with drug-loaded BBPs by leveraging multiple facets of structural modularity, including linker and drug identities and concentration ratios. This mechanism-guided approach to combination therapy is validated with the translation of in vitro studies that identify synergy across axes of both drug release timing and mechanism of action to in vivo validation of enhanced therapeutic efficacy of the combination BBP system. 
The remaining two chapters are a departure from BBPs, instead introducing a novel approach to cleavable comonomers for improving plastic end-of-life sustainability. The fourth chapter thus provides detailed background on the current plastic waste outlook, vinyl polymers and their synthesis, radical ring-opening polymerization, and current approaches to cleavable comonomers and the end-of-life options they offer commodity polymers. The fifth and final chapter reports the first “mixed” cleavable comonomer approach to degradable polymers towards a polyacrylic acid system optimized for biodegradability. A computational model offers parameters for controlling degradation fragment molecular weight and dispersity that are validated experimentally, and the material performance properties of the homopolymer are retained for its cleavable analog. Overall, this thesis leverages structure-activity relationships of cleavable functionalities in stimuli-responsive polymers, and expands the scope under which they can be utilized during their productive lifetime or processed thereafter.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157070</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifunctional Wireless Gut-Brain Neurotechnology</title>
<link>https://hdl.handle.net/1721.1/157069</link>
<description>Multifunctional Wireless Gut-Brain Neurotechnology
Sahasrabudhe, Atharva
The complexity of the brain is well known, in that it uses specially organized neural circuits to interact with the external world. Besides external stimuli, the brain also receives, integrates, and responds to sensory signals emerging from internal organs of the body through the network of the peripheral nervous system. Although these nerve signals are subliminal and cannot be consciously detected or controlled, they play a profound role in maintaining a homeostatic state. Recent evidence also suggests that interoceptive signals can impact higher-level cognitive functions. The anatomical, functional, and molecular details of these brain-body pathways are beginning to be deciphered, but much remains to be uncovered. Cutting-edge neurobiological tools like optogenetics, chemogenetics, and activity-based sensors have revolutionized studies of the brain. However, applying these methodologies to studies of brain-body circuits relies on engineered devices that support these sophisticated functions in peripheral organs as well. Studying interoceptive circuits in a causal fashion in behaving animals thus requires advanced multifunctional implantable neurotechnologies that can be deployed at multiple sites spanning regions in the brain and the peripheral organ of interest. This thesis aims to address this unmet technological need.&#13;
This work presents a collection of advances that overcome the thermomechanical constraints of fiber drawing and allow processing of traditionally non-drawable components. These advances yielded multifunctional probes that allow depth-specific optical, electrical, and pharmacological probing of neural circuits in the brain, while also being compatible with brain-wide functional magnetic resonance imaging techniques. The same underlying design principles have also made possible fiber-based miniaturized electrochemical probes for performing electrocatalytic reactions in the brain to deliver transient, gaseous neurotransmitters, such as NO, through controlled generation and delivery in vivo. Finally, wireless microelectronic fibers were developed that combine the scalability and mechanical versatility of thermally drawn polymer fibers with the sophistication of microelectronic chips, for organs as diverse as the brain and the gut. This approach produces meters-long continuous fibers that can integrate light sources, electrodes, thermal sensors, and microfluidic channels in a miniature footprint. Paired with a custom-fabricated control module, the fibers wirelessly deliver light for optogenetics and transfer data for physiological recording. This technology was validated by modulating the mesolimbic reward pathway in the mouse brain and the anatomically challenging intestinal lumen, demonstrating wireless control of the sensory epithelial cells and vagal afferents that guide the animal’s feeding and reward behaviors.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157069</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracing RNA biography: in situ transcriptome profiling by novel spatial&#13;
omics technologies</title>
<link>https://hdl.handle.net/1721.1/157068</link>
<description>Tracing RNA biography: in situ transcriptome profiling by novel spatial&#13;
omics technologies
Ren, Jingyi (Rena)
Cell state and function are shaped by the spatiotemporal regulation of gene expression. This intricate pattern of gene expression is, in part, attained through the precise regulation of mRNA: its metabolism, transport, and translation within individual cells across spatial and temporal dimensions. Therefore, it is critical to methodically delineate the spatially resolved post-transcriptional regulation within transcriptomes, studying these events at the single-cell and single-molecule level. This capability is important for mapping the complex network of transcriptional and post-transcriptional gene regulatory mechanisms inherent in cells and tissues. Moreover, our understanding of RNA translation in diverse cell types and states will be greatly enriched by the examination of spatially resolved protein synthesis patterns at the genomic scale within heterogeneous cells. Presently, state-of-the-art spatial transcriptomic techniques offer only static snapshots of RNA expression, falling short of capturing RNA dynamics and their controlled translation within subcellular domains. Therefore, our driving question is whether the spatial regulation of the multi-staged RNA life cycle influences cellular state and activity. Thus, an unmet need is to develop new methods capable of spatially tracking not only steady-state RNA expression but also post-transcriptional states. This work is essential in providing a comprehensive picture of spatial RNA dynamics in cellular function and physiology. Filling this gap, I developed a novel in situ sequencing toolbox to study spatially resolved post-transcriptional RNA dynamics at the genomic scale in single cells during my PhD studies. My graduate work has led to the development of two novel in situ profiling technologies: (1) TEMPOmap (temporally resolved in situ sequencing and mapping), which resolves nascently transcribed RNAs in space and time, and (2) RIBOmap (ribosome-bound mRNA mapping), a spatial ribosome profiling method. 
Utilizing these methods, we were able to holistically profile spatial, temporal and single-molecule information of RNA at the transcriptomic and translational levels in single cells. The main contribution of this work is that we established a specialized spatial transcriptomic toolkit specific for capturing the dynamics of mRNA in situ. Applying these technologies, I’ve profiled spatial, temporal and single-molecule information of RNA and single cells at the transcriptomic and translational levels in a range of biological systems, including iPSCs, primary skin cells and intact brain tissues. Specifically, I’ve focused on quantifying key steps in the mRNA life cycle in their spatial context, including RNA synthesis, nuclear export, translation, cytoplasmic translocation, and degradation. My goal was to better grasp the link between gene function and RNA lifespan at a genomic level across different cell types. Notably, we found that (1) different mRNAs are controlled both post-transcriptionally and translationally, with distinct subcellular localizations within cells; (2) in contrast to the previous belief that RNA dynamics solely depend on the primary sequence, they in fact exhibit diverse dynamic behavior for the same RNA species based on cell states, types, and even tissue regions. In primary skin samples, we noted cell-type-dependent alterations in the rates of RNA synthesis, transport, and degradation. Additionally, the translation level varied across cell types and regions within intact mouse brain tissue.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157068</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Procollagen Folding in Health and Disease</title>
<link>https://hdl.handle.net/1721.1/157067</link>
<description>Procollagen Folding in Health and Disease
Yammine, Kathryn Marie
Procollagen is a large, complex, and in many ways unusual protein that is ubiquitous in the human body and in all animals. Decades of research have advanced our understanding of how cells fold and secrete this protein; nonetheless, many questions remain concerning procollagen biosynthesis and how the process can go awry in the case of collagenopathies. Understanding how these mechanisms break down in disease is key to (1) gaining a better fundamental understanding of how these mechanisms function, and (2) developing effective and targeted strategies for disease-modifying treatment. In this thesis, we discuss some of the newly appreciated mechanisms involved in procollagen folding in health and disease. In Chapter 2, we explore the molecular basis of procollagen assembly and uncover a new role for the triple-helical domain sequence in guiding trimer assembly. In Chapters 3 and 4, we develop, characterize, and deploy an expandable human cartilage model to examine the processes of procollagen proteostasis that break down in the cases of the chondrodysplasia-inducing Gly1170Ser and Arg719Cys substitutions in procollagen-II, respectively. In Appendix C, we explore the functional differences between two alternatively spliced forms of the procollagen-II N-propeptide and speculate about the role and importance of aspartate hydroxylation in ocular function and homeostasis. Collectively, the work described in this thesis advances our understanding of the molecular mechanisms involved in procollagen proteostasis in health and disease.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157067</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Spin Patterning in Metal-Organic Frameworks</title>
<link>https://hdl.handle.net/1721.1/157066</link>
<description>Chemical Spin Patterning in Metal-Organic Frameworks
Petry, Stephanie Michelle
Emergent phenomena are ubiquitous and fundamental to life as we know it, serving as vital environmental regulators, such as the production of honey by honeybees. These phenomena occur when individual components of a system interact, generating new collective behaviors. In magnets, interactions between electron spins result in emergent properties with profound fundamental and technological implications. As metal–organic frameworks (MOFs) are highly tailorable materials, this thesis examines the use of MOF platforms to engineer bespoke spin properties. Through deliberate manipulation of magnetic interactions, we engineer custom magnetic materials with unique emergent properties. Our investigation begins with the controlled construction of magnetic interactions in a family of chemically similar but structurally distinct metal–organic materials. Despite sharing the same magnetic components, variations in their structural and magnetic dimensionalities significantly influence their magnetic behaviors. In the following section, we address current experimental challenges in engineering spin frustration within honeycomb lattices. We introduce a novel model for spin frustration on this lattice and employ MOFs to realize this concept. The tailorable nature of this MOF platform facilitates the investigation of how manipulable chemical interactions influence the resulting magnetic properties. The concluding section outlines a synthetic strategy for designing an underexplored magnetic model, leveraging the versatility of MOF synthesis to make new materials from preexisting structures. Highlighting our initial findings, we offer brief insight into the future prospects of this endeavor. These combined studies underscore the remarkable potential of MOF platforms in creating designer magnetic materials, representing significant progress in the field of condensed matter physics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157066</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and dynamics of magnetic domain walls in multi-sublattice magnetic oxides</title>
<link>https://hdl.handle.net/1721.1/157065</link>
<description>Structure and dynamics of magnetic domain walls in multi-sublattice magnetic oxides
Huang, Siying
Spintronics is a field at the intersection of magnetism and electronics that makes use of the electron spin in solid-state devices for data storage and manipulation. A promising future spintronic technology is racetrack memory, in which magnetic domain walls (DWs) encode bits of information and are translated by currents on thin-film racetrack devices. What enables the current-driven motion of the DW is its Néel character, generally stabilized by the Dzyaloshinskii–Moriya interaction (DMI). Fast DW motion on the order of km/s was shown in multi-sublattice metallic systems, overcoming the fundamental limits of ferromagnetic systems through angular momentum compensation of the sublattices. Recently, DMI and even faster DW motion have been observed in thin-film rare-earth iron garnets. However, the net angular momentum in such systems has been shown to be far from angular momentum compensation. Moreover, the mechanism of the DMI in garnets has been shown to be distinct from that in metallic systems, thus requiring further understanding as well. In this thesis, we examine magnetic DWs in such multi-sublattice magnetic oxides. We demonstrate a strong tunability of the DMI, by a factor of 7, through the substrate in Pt/garnet thin films, providing further understanding of the DMI mechanism. For the anomalously fast DW motion, we present an explanation in which the field-like torque counteracts the damping-like torque and increases the spin Hall efficiency. We propose measuring the DW velocity with a transverse field applied to probe this field-like torque, and present experimental evidence. We investigated the DW depinning dynamics in Pt/BiYIG thin films, presenting a phase diagram of this pinning event, which demonstrates the crucial role of minimizing the pinning effect in achieving fast DW velocity. 
In EuIG(110) thin film with strong in-plane anisotropy, we demonstrate bistable Néel DW states interchangeable by in-plane field pulse-driven incoherent DW reversal, from which we extract for the first time the Bloch line energetics. Besides the above DW Néel character stabilization, we also provide a DW position stabilization on the racetracks by the exchange bias effect in Pt/Co/Pt/Co₀.₈Ni₀.₂O thin films. This thesis provides a comprehensive understanding of the stability and dynamics of DWs on racetrack devices based on magnetic oxides, from the aspects of both scientific understanding and technical optimizations, paving a path to future innovation and optimization in racetrack memory device design.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157065</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic and Analytic Methods in Combinatorics</title>
<link>https://hdl.handle.net/1721.1/157064</link>
<description>Probabilistic and Analytic Methods in Combinatorics
Sawhney, Mehtaab
This thesis studies a range of topics across combinatorics, broadly defined. The second chapter addresses a long-standing question of Erdős regarding the existence of high-girth Steiner triple systems. The tools employed fall squarely within the probabilistic method, drawing on recent advances in design theory and the theory of random processes. The third and fourth chapters consider problems within random matrix theory, in particular problems regarding sparse random graphs. The third chapter concerns a question of Vu regarding the singularity of the k-core of a random graph. A sparse Erdős–Rényi graph G(n,d/n) with high probability has large corank due to the presence of isolated vertices. Answering a question raised by Vu at the ICM 2014, the third chapter proves that after iteratively deleting vertices of degree less than k (i.e. forming the k-core), the associated graph is nonsingular with high probability. The fourth chapter answers a long-standing question regarding the spectral distribution of a matrix in which each entry is 1 with probability d/n. In particular, this result gives the first spectral distribution for non-Hermitian random matrices at this level of sparsity and answers a question highlighted by Tikhomirov at the ICM 2022. The fifth and sixth chapters are concerned with discrepancy theory. The fifth chapter provides bounds for online vector balancing by finding a Markov chain on ℝ with integer steps whose stationary distribution is Gaussian. The sixth chapter concerns a famous result of Spencer on finding a low-discrepancy coloring of a set system; this chapter gives the first algorithm for finding such a coloring that runs in nearly input-sparsity time. The seventh chapter concerns effective bounds for special cases of the polynomial Szemerédi theorem. 
In particular, answering a question of Green, this chapter gives effective bounds for sets avoiding the pattern x, x + y² − 1, x + 2(y² − 1) (i.e. Roth’s theorem with a shifted square difference). This is the first polynomial pattern that is not homogeneous and of complexity at least one for which effective bounds have been obtained. Furthermore, this chapter introduces the use of higher-order techniques within the context of degree-lowering. The final chapter concerns a question at the intersection of probabilistic combinatorics and statistical physics. This chapter determines the sharp constant γ such that with high probability a graph G ∼ G(n,1/2) may be split into two equal parts A and B such that each vertex in A has γ√n more neighbors in A than in B. This provides an essentially complete resolution of a question of Füredi and draws on a combination of methods from graph enumeration and Boolean analysis.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157064</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Dynamics on Integrable Lattice Models</title>
<link>https://hdl.handle.net/1721.1/157063</link>
<description>Stochastic Dynamics on Integrable Lattice Models
Nicoletti, Matthew S.
The purpose of this thesis is to present some new results related to the six-vertex and dimer models. One theme is the construction and analysis of Markov processes naturally associated to these lattice models. Certain integrability properties of the six-vertex and dimer models, often related to the Yang–Baxter equation, allow for the construction of associated Markov chains. In some cases, these are measure-preserving Markov chains on configurations of the lattice model. In other cases, they arise via transfer matrices, after choosing a distinguished time coordinate, as a continuous-time degeneration of the “time evolution” of the lattice model itself. It is often the case that the integrability of the underlying lattice model provides powerful tools to study the associated Markov chains or their marginals, which are sometimes Markov chains themselves. In Chapter 2, we construct Markov chains on six-vertex states in the quarter plane Z²≥₀ and the full plane Z². When viewing the six-vertex model as a model of random surfaces, the Markov chain is an example of a surface growth model in the (2+1)-dimensional “Anisotropic KPZ” (or “AKPZ”) universality class. In the Z² case, the translation-invariant Gibbs measures of the stochastic six-vertex model are stationary measures of the process. Using structure-preserving local moves for the dimer model, in Chapter 3 we construct another surface growth model in the AKPZ universality class, which has the dimer model Gibbs measures as stationary distributions. By exactly computing key quantities such as the current, we confirm predictions from the physics literature on the AKPZ universality class, and we confirm the expected hydrodynamic limit PDE of the growth process in special domains known as tower graphs. 
To complement our analysis of the growth process, we analyze the local asymptotics of dimer model correlation functions on tower graphs, and confirm in this case the prediction ([1]) that they converge to those of translation-invariant Gibbs measures. In Chapter 4, we construct a Markov chain generalizing domino shuffling which samples exactly from a recently introduced probability measure on tuples of interacting dimer configurations. Exact sampling is extraordinarily useful for the discovery and numerical investigation of asymptotic phenomena in new models. In Chapter 5, we utilize local moves for a different purpose: we construct deterministic t-embeddings, which are embeddings of a bipartite graph that are compatible with the underlying dimer model. It was recently shown ([2], [3]) that a certain subclass of these, perfect t-embeddings, can ultimately be used to prove “conformal invariance of the model” in the scaling limit. Furthermore, for each local move in the dimer model, there is a corresponding local geometric transformation of t-embeddings ([4]). For Aztec diamond and tower graphs, this allows for an inductive construction of perfect t-embeddings. We utilize the “exact solvability” of the resulting recurrence relations to give exact formulas for the embeddings. We then precisely characterize the global and local asymptotic behavior of the embeddings, and confirm predictions of [3], [5] in these two cases. In Chapter 6, we utilize the Yang–Baxter equation for a colored generalization of the six-vertex model to compute stationary measures for colored interacting particle systems. In several cases, we match our constructions to existing stationary measures, while in other cases we obtain new stationary measures. We provide a new, unified construction and method of proof (of stationarity) for several different interacting particle systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157063</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The algebraic K-theory of the chromatic filtration and the telescope conjecture</title>
<link>https://hdl.handle.net/1721.1/157062</link>
<description>The algebraic K-theory of the chromatic filtration and the telescope conjecture
Levy, Ishan
We develop tools for understanding the algebraic K-theory of categories such as those coming from the chromatic filtration of the stable homotopy category, and apply these tools to improve our understanding of the large scale structure of stable homotopy theory and understand Ravenel's telescope conjecture.&#13;
&#13;
More specifically, in joint work with Burklund, we prove a general devissage result which in particular identifies the algebraic K-theory of certain coconnective ring spectra satisfying suitable regularity and flatness hypotheses with the K-theory of their π₀. Using this and an extension of the Dundas–Goodwillie–McCarthy theorem to (−1)-connective ring spectra, we obtain a formula for the algebraic K-theory of the K(1)-local sphere in terms of the topological cyclic homology of a ring spectrum j_ζ, and in particular find that its algebraic K-groups are not all finitely generated. In joint work with Lee, we extend these computations to understand the algebraic K-theory of the K(1)-local sphere in the stable range using THH, where we observe phenomena such as the failure of Zₚ Galois descent for THH for an extension of j_ζ. In joint work with Burklund, Hahn, and Schlank, we show that this failure of Zₚ-descent also occurs for the T(2)-local TC of this extension. Combining this with the cyclotomic redshift result of Ben-Moshe–Carmeli–Schlank–Yanovski, this implies that the T(2)-local algebraic K-theory of the K(1)-local sphere is not K(2)-local, and hence a counterexample to the height 2 telescope conjecture. We also give similar counterexamples to the height n telescope conjecture for all n ≥ 2 and all primes, and show that Zₚ Galois hyperdescent for chromatically localized algebraic K-theory generically fails.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157062</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Chromatin Organization and Dynamics with Coarse-Grained Modeling</title>
<link>https://hdl.handle.net/1721.1/157061</link>
<description>Understanding Chromatin Organization and Dynamics with Coarse-Grained Modeling
Liu, Shuming
The genome is the blueprint of human life, and it is crucial to understand its organization. Genome organization is hierarchical, with different principles dominating at different scales. At the near-atomistic level, nucleosomes are organized as ordered chromatin fibers or disordered chromatin arrays. Furthermore, chromatin and related proteins can function within condensate environments. Computational modeling provides valuable insights into such complex biological processes. Given the complexity of chromatin and biomolecular condensates, coarse-grained (CG) modeling is essential to reach biologically relevant timescales. We have developed CG models and toolkits to facilitate modeling of chromatin and related proteins. We have also applied CG protein and DNA models to study chromatin folding and phase separation.&#13;
&#13;
In Chapter 1, we begin with an overview of the hierarchical scales of genome organization. We also introduce CG modeling as a powerful tool for understanding chromatin structure and dynamics. In Chapters 2 and 3, we present the development of CG simulation force fields and toolkits. In Chapter 2, we present novel CG force fields trained with contrastive learning. We obtain a new set of hydropathy parameters, trained on a99SB-disp all-atom force field trajectories of intrinsically disordered proteins, that accurately reproduces their average radius of gyration. In addition, we have developed a unified force field that captures the average radius of gyration of both ordered and disordered proteins in the training set. In the future, we will focus on benchmarking our models and existing CG models with condensate simulations, enabling more appropriate selection of CG models for specific conditions. In Chapter 3, we introduce OpenABC, a versatile toolkit designed to streamline the setup of CG simulations, especially condensate simulations. OpenABC incorporates diverse CG force fields within an extensible framework and is built on a simulation platform that supports GPU acceleration, thus speeding up CG simulations. &#13;
&#13;
In Chapters 4 and 5, we shift our focus to applications of CG simulations. In Chapter 4, we discuss the force extension and inter-chain contacts of chromatin fibers. Our CG simulations reveal that the chromatin fiber behaves like an elastic spring under forces of no more than 3 pN, while it dramatically unstacks and unwraps at approximately 4 pN. Meanwhile, inter-chain contacts can help unfold the native two-start fibril-like structures. The study demonstrates that biologically relevant pN-level forces and crowding environments contribute to the absence of 30-nm fibers in vivo. In Chapter 5, we apply Markov state models and non-Markovian dynamics models to study the folding dynamics of tetra-nucleosomes. Tetra-nucleosomes with 10n+5-bp linkers show more diverse structures without dominant native structures, while 10n-bp linkers lead to a funnel-shaped free energy landscape with a strong folding trend. Within the condensate, the transition rates slow down, while the unfolding and folding rates become comparable. These two studies highlight that the intrinsic physicochemical properties of chromatin are fundamental to genome organization in cells.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157061</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Metal–Organic Frameworks for Water Harvesting</title>
<link>https://hdl.handle.net/1721.1/157060</link>
<description>Tailoring Metal–Organic Frameworks for Water Harvesting
Oppenheim, Julius Jacob
Water sorbents enable technologies that have the potential to mitigate water insecurity, meet increasing energy demand, and push towards sustainability. Metal–organic frameworks (MOFs) are candidate sorbents for such technologies as a direct result of their inherent chemical modularity, which facilitates the use of MOF sorbents to adsorb water over a large range of relative humidity (RH). However, the underlying structure–function relationships connecting MOF composition and structure with sorption properties have yet to be explicitly determined. In this thesis, the author explores and defines such structure–function relationships. Chapter 1 introduces the important sorption properties as well as the top-performing MOFs and MOF families. In Chapter 2, the author presents a derivation of the relationship between pore composition and the observable sorption parameters (critical RH, maximum gravimetric capacity, and presence of hysteresis loops). Chapter 3 applies the insights of the preceding chapter to design and synthesize an industrially viable sorbent with high capacity below 30% RH and excellent cycling stability. Chapter 4 further explores these insights, with a focus on the observation that ions contained within the framework pore can greatly increase the hydrophilicity of a framework. In Chapter 5, the author investigates the relationship between pore hydrophilicity and kinetic hysteresis, finding that kinetic limitations arise in sufficiently hydrophilic frameworks. Chapter 6 explores the differences in interaction between a framework and π-backbonding sorbates, for a framework whose water sorption properties have been previously reported. In Chapter 7, the author explores an alternative method for post-synthetic modification, whereby chlorine radical abstraction is utilized to reduce a framework, which may be useful for the synthesis of new sorbents.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157060</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Studies of Interfacial Proton-Coupled Electron Transfer to Molecularly Defined Surface Sites</title>
<link>https://hdl.handle.net/1721.1/157059</link>
<description>Mechanistic Studies of Interfacial Proton-Coupled Electron Transfer to Molecularly Defined Surface Sites
Lewis, Noah B.
Nearly all electrocatalytic reactions and all aqueous electrochemical systems involve either the reductive formation or oxidative scission of a surface–hydrogen bond in an interfacial proton-coupled electron transfer (I-PCET) reaction. Whether as an intermediate step in electrocatalysis or a stoichiometric charging step in a pseudocapacitor, and regardless of whether it is involved in product formation or solution degradation, I-PCET can occur in any protic electrolyte. Despite the integral role of I-PCET reactions in electrochemical energy storage and value-added chemical synthesis, molecular-scale models for I-PCET mechanisms have historically been lacking. The general heterogeneity and dynamism of electrode surfaces make it difficult to identify the relevant surface active sites and therefore nearly impossible to correctly attribute changes in reactivity to surface-based or electrolyte-based effects. In contrast to standard heterogeneous surfaces, graphite-conjugated carboxylate (GC-COOH) electrodes display stable, isolated, unique, and atomically precise active sites. Investigating I-PCET at GC-COOH electrodes therefore introduces unprecedented clarity into the chemical nature of surface–H bonds and eliminates convolution from differences in electrode structure between electrolyte conditions. This thesis utilizes GC-COOH electrodes to explore how two fundamental electrolyte properties, pH and ionic strength, control I-PCET kinetics, with an understanding of both properties’ kinetic dependence leading to new mechanistic insights into I-PCET reactivity.&#13;
 &#13;
Chapter 2 concerns how I-PCET kinetics are controlled by electrolyte pH and how the observed rate dependence informs I-PCET mechanisms. Equilibrium apparent rate constants (kₐₚₚ) for I-PCET were measured to be fastest at both pH extremes and to reach a minimum at pH 10. The lack of pH-independent regions and the asymmetric slopes of the “V”-shaped kₐₚₚ vs. pH dependence observed for I-PCET stand in stark contrast to the established rate–pH dependence and path-dependent mechanism of outer-sphere proton-coupled electron transfer. Such differences highlight the need for an alternative mechanistic model for I-PCET. With these observations, a donor-identity-dependent model for I-PCET is developed. In this model, I-PCET occurs through one of two proton donor/proton acceptor couples: either a hydronium/water couple predominating at low pH and slowing with increasing pH, or a water/hydroxide couple predominating at high pH and slowing with decreasing pH. These studies constitute the first molecular-scale mechanistic understanding of elementary I-PCET reactions.&#13;
&#13;
Chapter 3 investigates how high concentrations of proton-neutral supporting electrolytes affect I-PCET kinetics. We measure proton activity with the reversible hydrogen electrode and I-PCET kinetics with GC-COOH from 1 mol kg⁻¹ to 17 mol kg⁻¹ NaClO₄ in unbuffered perchloric acid, acetate-buffered, and unbuffered sodium hydroxide aqueous electrolytes. While the proton activity of the unbuffered acidic conditions increases drastically across this concentration range, that of the buffered and basic electrolytes changes little. Additionally, a significant decrease in I-PCET rates relative to the rate expected for the measured proton activity is observed for the acidic and buffered electrolytes but not for the basic electrolytes. With these observations we construct a mechanistic model in which I-PCET is not a single step but a multi-step reaction sequence, in which elementary I-PCET is gated by an ion exchange reaction between proton donor/acceptor species and the proton-neutral supporting electrolyte at the electrode–electrolyte interface. These findings demonstrate how the supporting electrolyte can be leveraged as a design parameter to independently control electrolyte pH and the rates of I-PCET-based reactions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157059</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Characterization of Iron-Sulfur Cluster Excited States and their Relevance to Electron Transfer Reactions</title>
<link>https://hdl.handle.net/1721.1/157058</link>
<description>Experimental Characterization of Iron-Sulfur Cluster Excited States and their Relevance to Electron Transfer Reactions
Skeel, Brighton A.
Iron-sulfur (Fe−S) clusters, and cuboidal [Fe₄S₄] clusters in particular, are biologically ubiquitous metallocofactors involved in a diverse array of cellular processes. The electronic structures of these metallocofactors are highly multiconfigurational and characterized by dense manifolds of low-energy excited states. The ground states of these systems have been extensively studied for several decades, and are understood to be the products of a confluence of super-exchange and spin-dependent electron delocalization interactions. On the other hand, our understanding of the excited states of these clusters—many of which are measurably populated at ambient temperature—is minimal, due largely to the fact that describing these states both experimentally and computationally is a daunting task. Here, we simplify this problem first by recognizing that imposing a particular ligand field symmetry (namely 3:1 site differentiation) on a [Fe₄S₄] cluster causes some of its excited states to become degenerate in well-defined ways. With this in mind, we describe the synthesis of an array of 3:1 site-differentiated [Fe₄S₄]¹⁺ clusters, and characterize them by variable temperature (VT) solution NMR spectroscopy and magnetometry. Our global fits of these VT data using a simplified model Hamiltonian have furnished, for the first time, experimental pictures of the excited state manifolds for [Fe₄S₄]¹⁺ clusters, including both their low-energy spin states and alternate valence electron configurations (“valence isomers”). We find that the energy scale associated with both of these phenomena is commensurate with the thermal energy at ambient temperature, and that these alternate valence arrangements and spin configurations are thus relevant to understanding the room temperature reactivity of biological Fe−S systems. 
We find additionally that the primary coordination sphere has a strong influence on the topography of these excited state landscapes, in particular that the donor properties of the ligands binding an [Fe₄S₄]¹⁺ cluster determine its ground state valence electron distribution. Finally, we describe the variable temperature electron transfer self-exchange kinetics for a series of [Fe₄S₄]¹⁺/²⁺ clusters where we have experimentally mapped the excited spin state manifolds, thus taking the first steps toward connecting the excited state manifolds of these metallocofactors to their reactivities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157058</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Positive traces and analytic Langlands correspondence</title>
<link>https://hdl.handle.net/1721.1/157057</link>
<description>Positive traces and analytic Langlands correspondence
Klyuev, Daniil
I describe results obtained with co-authors in two directions.&#13;
&#13;
The first direction is the problem of classification of positive traces on quantized Coulomb branches. In the most general setting, this problem generalizes the classical problem of describing irreducible unitary representations of real reductive Lie groups. We consider the case of Kleinian singularities of type A and provide a complete classification of positive traces.&#13;
&#13;
The second direction is the analytic Langlands correspondence, which is the following. Let X be a smooth irreducible projective curve over C and G a complex reductive group. On one side of this conjectural correspondence there are Gᵛ-opers on X satisfying a certain topological condition (real opers), where Gᵛ is the Langlands dual group. On the other side there is the joint spectrum of certain operators on L²(Bun_G), called Hecke operators, where Bun_G is the variety of stable parabolic G-bundles on X and L²(Bun_G) is a Hilbert space of square-integrable half-densities. We prove most of the main conjectures of the analytic Langlands correspondence in the case when G = PGL₂(C) and X is either a genus one curve with points or P¹ with higher structures at points.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157057</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kazhdan-Laumon Categories and Representations</title>
<link>https://hdl.handle.net/1721.1/157056</link>
<description>Kazhdan-Laumon Categories and Representations
Morton-Ferguson, Calder
In 1988, D. Kazhdan and G. Laumon constructed the Kazhdan-Laumon category, an abelian category A associated to a reductive group G over a finite field, with the aim of using it to construct discrete series representations of the finite Chevalley group G(F_q). The well-definedness of their construction depended on their conjecture that this category has finite cohomological dimension. This was disproven by R. Bezrukavnikov and A. Polishchuk in 2001, who found a counterexample for G = SL₃. Since the early 2000s, there has been little activity in the study of Kazhdan-Laumon categories, despite their being beautiful objects with many interesting properties related to the representation theory of G and the geometry of the basic affine space G/U. In the first part of this thesis, we conduct an in-depth study of Kazhdan-Laumon categories from a modern perspective. We first define and study an analogue of the Bernstein-Gelfand-Gelfand Category O for Kazhdan-Laumon categories and study its combinatorics, establishing connections to Braverman-Kazhdan’s Schwartz space on the basic affine space and the semi-infinite flag variety. We then study the braid group action on Dᵇ(G/U) (the main ingredient in Kazhdan and Laumon’s construction) and show that it categorifies the algebra of braids and ties, an algebra previously studied in knot theory; we then use this to provide conceptual and geometric proofs of new results about this algebra. After Bezrukavnikov and Polishchuk’s counterexample to Kazhdan and Laumon’s original conjecture, Polishchuk made an alternative conjecture: though the counterexample shows that the Grothendieck group K₀(A) is not spanned by objects of finite projective dimension, he noted that a graded version of K₀(A) can be thought of as a module over Laurent polynomials, and conjectured that a certain localization of this module is generated by objects of finite projective dimension. 
He suggested that this conjecture could lead to an alternate proof that Kazhdan and Laumon’s construction is well-defined, and he proved the conjecture in types A₁, A₂, A₃, and B₂. We prove Polishchuk’s conjecture in all types and thereby prove that Kazhdan and Laumon’s construction is indeed well-defined, giving a new geometric construction of discrete series representations of G(F_q).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157056</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear analysis of the pressoreceptor reflex system</title>
<link>https://hdl.handle.net/1721.1/157050</link>
<description>Nonlinear analysis of the pressoreceptor reflex system
Levison, William H. (William Henry), 1936-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Vita.; Includes bibliographical references (leaves 166-170).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157050</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Leningrad Physico-Technical Institute and the birth of Russian physics</title>
<link>https://hdl.handle.net/1721.1/157049</link>
<description>The Leningrad Physico-Technical Institute and the birth of Russian physics
Josephson, Paul Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1987; Bibliography: leaves 461-477.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157049</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty &amp; robustness for single-cell studies</title>
<link>https://hdl.handle.net/1721.1/157032</link>
<description>Uncertainty &amp; robustness for single-cell studies
Shiffman, Miriam
The advent of new technologies capable of measuring molecular profiles at single-cell granularity, across thousands or millions of cells, offers unprecedented insight into the form, function, and circuitry of biological systems. At the same time, these technologies present particular statistical and computational challenges, including noise, sparsity, technical and biological variability, and multilevel sampling regimes. To distill relevant signal from biological phenomena, then, analyses must combine information in a careful and coherent way across cells. In light of these complexities, it is prudent that single-cell analyses incorporate notions of uncertainty and robustness to guide their interpretation and inform future decision making.&#13;
&#13;
This thesis makes two main advances in facilitating coherent, actionable quantification of uncertainty and robustness for single-cell studies. First, we provide a framework for generalizability of differential expression analysis that—unlike common statistical tools (significance, power, standard error)—does not rely on the assumption that the sample in hand and future samples are independent and identically distributed. Instead, we posit an alternate (complementary) lens on generalizability: could dropping a very small fraction of cells meaningfully alter the basic conclusions of differential expression? We develop an accurate and efficient approximation to estimate this dropping-data robustness metric for the key outcomes of differential expression, for independent-observation and pseudobulk analyses. Broadening these gene-level results to a high-level, biologically meaningful summary, we overcome the inherently non-differentiable and combinatorial nature of gene set enrichment analysis to provide an additional approximation for the dropping-data robustness of top gene sets. Applied to public single-cell RNA-seq data of healthy and diseased cells, our metric identifies widespread nonrobustness across genes that extends to high-level nonrobustness of top gene sets. The second part of this thesis provides a full Bayesian framework for reconstructing probabilistic trees of cellular differentiation from single-cell profiles. Namely, motivated by the biology of differentiation and confronted with a lack of existing hierarchical models, we develop a new family of probabilistic trees in which data are generated continuously along branches (and latent cell state evolves smoothly over the tree). We also develop two approaches, focusing on gene-level or cell-level variability, to model measurement noise arising from single-cell RNA sequencing. 
In tandem, we construct a novel Markov chain Monte Carlo sampler over trees, including message passing with variable augmentation to accelerate inference. These techniques recover latent trajectories from simulated single-cell transcriptomes, and make progress toward inferring trajectories, with calibrated uncertainties, from real transcriptomes.&#13;
&#13;
I close by reflecting on common themes relevant to uncertainty and robustness for single-cell studies, including interplay between the continuous and the discrete, the challenge of summarization, the importance of cyclical model criticism, and a possible way forward through differentiable and probabilistic programming.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157032</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lineage-level within-species dynamics in the human facial skin microbiome</title>
<link>https://hdl.handle.net/1721.1/157027</link>
<description>Lineage-level within-species dynamics in the human facial skin microbiome
Baker, Jacob S.
Multiple lineages of a bacterial species can coexist in a community. These extremely closely-related clades originate from the recent immigration of individual cells, whose evolution over short timescales (years) results in minute genomic diversity (~10¹ SNPs/genome). Each has distinct origins, and the mutations they contain can reveal their individual evolutionary and ecological history. However, the difficulty of differentiating coexisting lineages limits the phylogenetic resolution at which community dynamics can be studied. Here, I describe methods to cluster large sets of diverse genomes into lineages and apply them to the observation of natural lineage-level assembly dynamics in the human facial skin microbiome. In Chapter 2, I use new methods to improve lineage-level clustering and delineate 4,055 genomes of C. acnes and S. epidermidis isolates from human facial skin into 167 lineages. In Chapter 3, I use these data to observe natural transmission events and assembly dynamics of the facial skin microbiome. I find that the gain and loss of individual C. acnes and S. epidermidis lineages underlies their apparent stability at the species level, and that these dynamics also change throughout the human lifespan. Lineages of S. epidermidis are replaced in unexpectedly fast cycles, and C. acnes lineages are acquired during developmentally-driven population expansion. By advancing current methods, I enabled the observation of new ecological dynamics at an unprecedented resolution. The dynamics described here will influence the development of therapeutic strains with durable engraftment, and inspire the study of their effects on hosts, such as the immune consequences of lineage-level turnover.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157027</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tangible Telepresence: Distributed and Synchronous Tangible Interfaces for Enhancing Interpersonal Connectedness over Time and Space</title>
<link>https://hdl.handle.net/1721.1/157023</link>
<description>Tangible Telepresence: Distributed and Synchronous Tangible Interfaces for Enhancing Interpersonal Connectedness over Time and Space
Choi, Kyung Yun
In today's hyper-connected world, digital communication technologies have transformed how people maintain relationships across distances. However, constant digital stimuli and the pressure to always be available can lead to overwhelm, stress, and a lack of personal space. This thesis explores the concept of Tangible Telepresence, enhancing connectedness between intimate dyads through gestural engagement and seamless transitions between synchronous and asynchronous communication.&#13;
&#13;
To demonstrate this concept, this thesis introduces TeleTangibles: distributed and synchronous tangible interfaces that expand the bandwidth of interpersonal communication. TeleTangibles allow users to adjust their personal boundaries by moving between real-time and slow-paced communication within their physical space. The design space of TeleTangibles encompasses interaction spaces and expression levels, from abstract to concrete, through different motions and forms, focusing on engaging intimate dyads' nonverbal interactions and their perception of their relationship.&#13;
&#13;
The thesis presents two distinct TeleTangible examples, TelePop and Picto, addressing different aspects of the design space and demonstrating asynchronous communication through recording, replaying, and sharing tangible interactions remotely. Insights from these projects contribute to a deeper understanding of TeleTangibles' design space and the factors influencing their effectiveness in promoting interpersonal connectedness and social presence.&#13;
&#13;
The main contributions of this thesis are threefold: First, it extends real-time synchronous remote interaction to include asynchronous interaction through time-delayed responses, allowing individuals to adjust their levels of connectedness and enabling smooth transitions between interaction modes. Second, it proposes essential functionalities for developing Tangible Telepresence, illustrated through two TeleTangible examples, including recording and replaying interaction history and establishing mutual awareness through shared tangible languages and experiences. Third, it highlights that complex meanings or detailed information are not essential for strengthening connectedness when mutual awareness is established, as users perceive TeleTangibles as various forms of interaction that reduce the pressure of immediate response while confirming each other's status.&#13;
&#13;
This research contributes to the fields of Tangible User Interfaces and interpersonal communication by providing a new approach to expanding remote interpersonal communication media through playful gestural engagement. It offers a timely exploration of the challenges of maintaining social connectedness and respecting personal boundaries in an increasingly digital world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157023</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards A Robust Integrated Urban Mobility System: Public Transit and Ride-Sharing Systems</title>
<link>https://hdl.handle.net/1721.1/157009</link>
<description>Towards A Robust Integrated Urban Mobility System: Public Transit and Ride-Sharing Systems
Guo, Xiaotong
The global pandemic has fundamentally changed lifestyles, impacting how, when, and where people travel within cities. In this post-pandemic world, urban mobility demand patterns are experiencing significant shifts. To manage the growing uncertainty in urban mobility, there is a pressing need to develop a robust urban mobility system. This system must be adaptable to evolving demand patterns while ensuring efficiency and environmental sustainability in transporting large populations. Additionally, the increasing popularity of shared mobility and rapid advancements in autonomous driving technologies are creating new opportunities for innovative approaches to urban transportation systems.&#13;
&#13;
This dissertation delves into the development of a robust and integrated urban mobility system for the future, with a focus on the public transit and ride-sharing systems. While the advent of shared mobility platforms such as Uber and Lyft, along with Autonomous Mobility-on-Demand (AMoD) services like Waymo and Cruise, has revolutionized urban travel, public transit systems remain the backbone of urban mobility. This is attributed to their capacity to move large numbers of people over long distances at relatively low cost and in an environmentally friendly way. Thus, this study aims to enhance the robustness of both public transit and ride-sharing systems and explore ways to seamlessly integrate these two components. The dissertation presents five distinct studies to elaborate on these objectives.&#13;
&#13;
The first three studies focus on the vehicle rebalancing problem, which is one of the most critical strategies in ride-sharing operations. An effective rebalancing strategy can significantly reduce empty miles traveled and reduce customer wait times by better matching supply and demand. While the supply (vehicles) is usually known to the system, future passenger demand is uncertain. The first study proposes a novel approach to better immunize rebalancing decisions against demand uncertainty. This approach, namely the matching-integrated vehicle rebalancing (MIVR) model, incorporates driver-customer matching into vehicle rebalancing problems to produce better rebalancing strategies. For further protection against uncertainty, robust optimization (RO) techniques are introduced to construct a robust version of the MIVR model. Problem-specific uncertainty sets are designed for the robust MIVR model. The second study further explores different approaches for handling demand uncertainty in the vehicle rebalancing problem. There are two ways to handle uncertainty. First, the point-prediction-driven optimization framework involves predicting the future demand and then producing rebalancing decisions based on the predicted demand. Second, data-driven optimization approaches directly prescribe rebalancing decisions from data. In this study, a predictive prescription framework is introduced to this problem, where the benefits of predictive and data-driven optimization models are combined.&#13;
&#13;
Although vehicle rebalancing algorithms could improve system efficiency, there exists a detrimental feedback loop where underserved communities with low demand density are unintentionally discriminated against. To resolve this fairness issue, the third study develops algorithms for vehicle rebalancing that aim to minimize disparity within the system. Grasping the concept of disparity is foundational to understanding fairness in the ride-hailing system. The vehicle rebalancing problem encompasses two critical aspects: upstream demand forecasting and downstream vehicle repositioning. The issues of disparities within both these components are addressed. To reduce disparity in demand prediction, we implement a strategy utilizing a Socio-Aware Spatial-Temporal Graph Convolutional Network (SA-STGCN), aimed at improving demand forecast accuracy while reducing discrepancies in prediction errors across diverse regions. For equitable repositioning of the supply-side vehicles, we introduce a disparity-reducing MIVR system. This system is designed to facilitate a balanced vehicle distribution, ensuring that ride-hailing services are accessible equitably across different areas.&#13;
&#13;
The fourth study focuses on the robustness of public transit systems. Few studies have considered demand uncertainties when designing transit schedules. To better address demand uncertainty issues inherent in public transit systems, this study utilizes the RO framework to generate robust transit schedules against demand uncertainty. A nominal (non-robust) optimization model for the transit frequency setting problem (TFSP) under a single transit line setting is first proposed. The model is then extended to the RO-based formulation to incorporate demand uncertainty. The large-scale origin-destination (OD) matrices for real-world transit problems present computational challenges in solving the optimization problem. To efficiently generate robust transit schedules, a Transit Downsizing (TD) approach is proposed to reduce the dimensionality of the problem.&#13;
&#13;
The last study focuses on the integration of emerging AMoD systems with existing public transit networks. We propose a novel optimization framework to generate the system design of the Transit-Centric Multimodal Urban Mobility with Autonomous Mobility-on-Demand (TCMUM-AMoD) at scale. The system operator (public transit agency) determines the network design and frequency settings of the public transit network, fleet sizing and allocations of the AMoD system, and the pricing for using the multimodal system with the goal of minimizing passenger disutility. Passengers' mode and route choice behaviors are modeled explicitly using discrete choice models. A first-order approximation algorithm is introduced to solve the problem at scale. Using a case study in Chicago, we show the potential to generate integrated urban mobility systems in different demand scenarios.&#13;
&#13;
The final chapter summarizes the whole dissertation and outlines potential avenues for future research directions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157009</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Relationship between Linguistic Representations in Biological&#13;
and Artificial Neural Networks</title>
<link>https://hdl.handle.net/1721.1/157004</link>
<description>The Relationship between Linguistic Representations in Biological&#13;
and Artificial Neural Networks
Kauf, Carina
Research in cognitive neuroscience strives to understand the representations and algorithms that support human cognition, including language. The scientific tools for investigating human-unique capacities, such as language, have long been limited. For example, we do not have the option to learn about the neural circuits that support these capabilities by studying simpler systems than the human brain, such as animal models. However, recent advances in engineering have provided new tools for studying language: artificial neural network language models (LMs), which exhibit remarkable linguistic capabilities and are fully intervenable. In this thesis, I draw on these advances to shed light on language processing in the human brain.&#13;
&#13;
Of course, comparisons between LMs and the human language system face challenges. I argue that in order to evaluate the suitability of LMs as cognitive models of language processing, we need to better understand (i) how linguistic stimuli are encoded in the internal representations of LMs, (ii) how linguistic stimuli are encoded in the language-selective cortex of humans, and (iii) whether and how we can meaningfully relate linguistic representations from these two systems to each other. This thesis work makes progress on all three questions by combining evidence from neuroimaging, behavioral research, and computational modeling. First, I analyze whether LM representations of linguistic stimuli encode information about semantic plausibility. I find that LMs acquire substantial but inconsistent plausibility knowledge and that their judgments are influenced by low-level features of the input, making them good models of human language processing but unreliable models of world knowledge. Then I use fMRI to probe the computations that drive the language network’s response. I find evidence for a generalized reliance of language comprehension on syntactic processing, contra claims that language comprehension relies on shallow/associative processing, and for only a superficial encoding of sentence meaning. Finally, I systematically investigate what aspects of language inputs are critical for LM-to-brain alignment. I find that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and that this alignment is mainly driven by representations of word meanings rather than sentence structure.
Taken together, this thesis provides evidence that the core language network encodes semantic information only superficially, implying that naturalistic human language processing must rely on the interaction of multiple tightly interconnected systems, and argues that – in spite of their limitations – LMs can help improve our understanding of human language processing through the interplay of in-silico modeling and human experiments.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157004</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden Influence in Dynamic Networks</title>
<link>https://hdl.handle.net/1721.1/157002</link>
<description>Hidden Influence in Dynamic Networks
Erhardt, Keeley Donovan
Our world is structured by networks that connect objects, ideas, and people. These networks consist of nodes (entities) and edges (connections) that dynamically evolve, reflecting changes in relationships, the emergence of new entities, and the dissolution of old links. Unlike static networks, which offer a snapshot of connections at a specific time, dynamic networks allow for modeling processes and system-level changes over time. These changes shed light on the evolution of social interactions, digital communications, financial transactions, and other networked data. Leveraging mathematical and statistical models, including neural network techniques, this research delves into the hidden influence that weaves through seemingly unrelated, yet intrinsically connected, entities in online social and financial networks. I begin with a foundational overview of graph learning techniques and the specific models utilized in my work. The body of this dissertation is divided into three core sections. The first examines the orchestration of influence campaigns by state-backed entities on social media, utilizing the influence model to unravel the complex interactions among networked Markov chains based on temporal activity patterns. Next, I quantitatively analyze the shifting geopolitical relationships and digital diplomacy efforts between two nation-states, employing a node representation learning strategy. Lastly, I apply a geometric deep learning framework to uncover connections between cryptocurrency wallets, analyzing transaction patterns and temporal dynamics to identify underlying networks. By introducing innovative approaches that leverage probabilistic and deep learning techniques to analyze dynamic networks, this dissertation contributes valuable insights and methodologies with significant implications for diverse domains such as cybersecurity, financial technology, and communications infrastructure.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157002</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Characterization of a Novel, Low-Cost Method for Measurement of Volatile Organic Compounds</title>
<link>https://hdl.handle.net/1721.1/156999</link>
<description>Development and Characterization of a Novel, Low-Cost Method for Measurement of Volatile Organic Compounds
Gao, Amanda
Measurements of atmospheric pollutants are crucial for improving our understanding of atmospheric chemistry, managing air quality, and estimating exposure to compounds that have profound impacts on human health. Low-cost sensors (LCS), due to order-of-magnitude reductions in power usage, maintenance needs, and purchase cost compared to research-grade reference instruments, have the potential to greatly expand the spatiotemporal resolution of these measurements. While there are several commercially-available LCS that can measure environmental volatile organic compounds (VOCs), an important class of hazardous pollutants, these sensors can only make non-specific “broadband” measurements and have, to date, been underutilized in research.&#13;
&#13;
This thesis describes the development, characterization, optimization, and use of a novel low-cost instrument for measuring environmental VOCs. This instrument utilizes an array of low-cost VOC sensors representing three fundamentally different sensor types. It also takes advantage of user-controlled parameters that achieve greater degrees of differentiation between responses of sensors with the same measurement type. In the first part of this work, we describe the instrument itself, as well as a laboratory study that characterizes sensor responses to environmentally relevant VOCs. Though environmental applications pose unique challenges that cannot be completely addressed in the laboratory, our results demonstrate that this instrument can give quantitative, chemically specific information about VOCs.&#13;
&#13;
The second part of this work is based on measurements made as part of a collaborative indoor air quality campaign, where our low-cost VOC instrument and co-located reference monitors made measurements of realistic indoor VOC sources. Results from an LCS-derived matrix factorization analysis were compared to an independent factor analysis of reference VOC measurements, demonstrating that our uncalibrated low-cost data can provide quantitative and qualitative information about VOC sources and composition. Based on this comparison analysis, we describe a procedure for sensor selection that allows us to evaluate the relative importance of specific sensors or sensor types in providing information about VOC composition and sources, helping future similar LCS array applications to avoid measurement redundancies and minimize material cost. &#13;
&#13;
Overall, the results from this thesis show that this LCS instrument can provide useful, quantitative information about VOC sources and composition at a fraction of the size and cost of a research-grade instrument, opening the possibility of widespread and spatially distributed measurements of VOCs in air quality and chemistry contexts, especially for indoor air.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156999</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A bone-anchored mechanoneural knee prosthesis to enhance control and embodiment</title>
<link>https://hdl.handle.net/1721.1/156995</link>
<description>A bone-anchored mechanoneural knee prosthesis to enhance control and embodiment
Shu, Tony
To maximally utilize the peripheral nervous system for prosthetic control, it is necessary to first understand the compounded errors induced by amputated physiology before developing the appropriate interfacing technologies to extract any latent movement information. Through this work, I develop a foundational approach to amputation interventions and artificial interfaces applied toward neurorobotic control at the transfemoral level. The first part of this dissertation explores the neurophysiological and neuromechanical outcomes of a revisional transfemoral amputation that restores agonist-antagonist muscle dynamics. A within-subjects study is performed to investigate changes in muscular function and cortical activity as a result of the intervention. Through these data, I provide evidence that extant amputated musculature can be modified to restore functionality for the purpose of efferent neurorobotic control. The second part of this dissertation explores a combined implementation of the revisional transfemoral amputation with a bone-anchored, or osseointegrated, transfemoral implant and chronically-implanted intramuscular electrodes. The clinical outcomes of the combined transfemoral platform are quantified through biophysical measurements and measurements of the stability of the implanted hardware to suggest the potential for bidirectional neurorobotic interfacing. The third part of this dissertation compares cohorts of persons with amputation possessing varied muscle architectures and physical interfacing configurations on their ability to produce physiological neurorobotic knee dynamics. Two subjects with the novel transfemoral platform are compared to the other cohorts without individual aspects of the platform, demonstrating unprecedented agility and sustainment of prosthetic embodiment in the process.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156995</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable Computational Modeling of pre-mRNA Splicing for Multiple Eukaryotic Species</title>
<link>https://hdl.handle.net/1721.1/156986</link>
<description>Interpretable Computational Modeling of pre-mRNA Splicing for Multiple Eukaryotic Species
McCue, Kayla M.
One of the key steps in eukaryotic gene expression is pre-mRNA splicing, whereby intronic sequences are excised from immature pre-mRNA transcripts and the remaining exonic sequences are joined together. This process is catalyzed by the spliceosome, a large complex of proteins and RNAs. A variety of RNA sequence features influence this process, including the core splice site (SS) motifs and splicing regulatory elements (SREs), which recruit protein splicing factors. Together these RNA elements and factors form an intricately interconnected regulatory system which is still incompletely understood. In this thesis, I describe SMsplice, an interpretable computational model of splicing that seeks to improve the understanding of how sequence elements influence the splicing pattern of pre-mRNA transcripts in a variety of eukaryotic organisms. SMsplice incorporates three key aspects of the splicing process: scores of potential SS motifs, scores of SS-proximal hexamers representing SREs, and structural preferences of the spliceosome for particular exon and intron lengths. We iteratively learn the SRE scores within this framework and assess performance by comparing the predicted splicing pattern of a transcript to a canonical pattern to calculate the F1 score, the harmonic mean of precision and recall. Our best-performing SRE scores yield performances of 70% in human, 73% in mouse, 86% in zebrafish and Drosophila melanogaster, 83% in silkworm moth, and 85% in Arabidopsis thaliana. Applying SMsplice to multiple organisms enables a variety of evolutionary analyses. Comparing the relative contributions of the SS scores, SRE scores, and the structural preferences revealed an increased dependence on SREs in lineages with longer introns, particularly mammals. Exonic regulatory information flanking real versus decoy SS was on average more discriminative than intronic regulatory information for all metazoans studied. 
In Arabidopsis, intronic and exonic SREs played comparable roles, suggesting a greater role for intronic information in plants compared with animals. Motifs generated from the hexamers with the strongest SRE scores recapitulated known splicing regulator binding sites in multiple organisms, and a majority of the human motifs were significantly associated with splicing quantitative trait loci, including novel as well as known motifs. Furthermore, many of these motifs are common to all of the organisms tested, suggesting that aspects of splicing regulation are deeply conserved. This notion was further supported by the observation that using the SRE scores learned for one organism within the SMsplice model for another organism generally performed well. A notable exception was that SRE scores learned in mammals performed fairly well in non-mammals, but not vice versa, which may reflect the evolution of mammalian-specific splicing regulation alongside the lengthening of introns. This thesis demonstrates the utility of interpretable models of splicing, which allow for comparative analyses of features between organisms.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156986</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contesting Design: Ancestral Technology as Portal to Post-Design(s)</title>
<link>https://hdl.handle.net/1721.1/156980</link>
<description>Contesting Design: Ancestral Technology as Portal to Post-Design(s)
Reynolds-Cuéllar, Pedro
Nowadays, designers and technologists are constantly exposed to increasingly technocentric views of the future, primarily fueled by dominant ideologies—scalability, universal applicability, and profit, among others. Many of these future makers are preparing in the present, often at institutions reproducing these ideologies. However, this established understanding of what technology is and what is worthy of design is currently being challenged. Literature and practice connecting with ways of knowing and doing outside this dominant lens are rising in both technology and design studies. Alternative design programs at higher education institutions, preparing students for a world where technology is de-centered, and grassroots initiatives building futures through Indigenous technology are some of the ways in which these techno-narratives can be contested. This dissertation joins these efforts by foregrounding, and moving into practice, alternative ways to teach design and think about technology.&#13;
I start by exploring the value distribution from participatory design initiatives across participants and introduce a model for longitudinal assessment of these programs. Using the findings and insights from this study, I propose and implement two largely immersive university courses on technology design in close collaboration with rural collectives in Colombia. In contributing to methodological shifts within participatory design, I foreground connections at its intersection with Indigenous research methods. In giving a language to these proposals, I advance the notion of ‘Ancestral Technology’ as an alternate framework to approach technology design. It is a form of world-making (design) that primarily supports cultural cohesion, is rooted in bounded geography, and has a history living through collective memory. As designers and technologists interested in helping build a future outside the techno-centric imaginary, we must connect to the ancestral.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156980</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging the Voltage of Neurons Distributed Across Entire Brains of Larval Zebrafish</title>
<link>https://hdl.handle.net/1721.1/156979</link>
<description>Imaging the Voltage of Neurons Distributed Across Entire Brains of Larval Zebrafish
Wang, Zeguan
Neurons interact in networks distributed throughout the brain. Although much effort has focused on whole-brain calcium imaging, recent advances in genetically encoded voltage indicators (GEVIs) raise the possibility of imaging the voltage of neurons distributed across entire brains. However, due to the high imaging speed and signal-to-noise ratio requirements of GEVIs, microscopy hardware to date has only been able to image the voltage of neurons within subregions of the brain, even for small animals like the larval zebrafish. To address this challenge, this thesis presents a high-speed remote scanning light-sheet microscope capable of imaging GEVI-expressing neurons distributed throughout entire brains of larval zebrafish at a volumetric rate of 200.8 Hz. The microscope combines remote refocusing and an ultrafast dual-camera system to significantly enhance the scanning and acquisition speed of light-sheet microscopy. Using this microscope, we measured the voltage of ~1/3 of the neurons of the larval zebrafish brain, distributed throughout it. We observed that neurons firing at different times during a sequence were located at different brain locations: sequences elicited by a visual stimulus mapped onto locations throughout the optic tectum, while stimulus-independent bursts mapped onto locations in the cerebellum and medulla. Whole-brain voltage imaging may open up new frontiers in understanding the fundamental operation of neural systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156979</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling spatial mapping, memory and their underlying mechanisms in the hippocampal complex</title>
<link>https://hdl.handle.net/1721.1/156973</link>
<description>Modeling spatial mapping, memory and their underlying mechanisms in the hippocampal complex
Sharma, Sugandha
Humans form mental representations of the space and environment around them. This ability is fundamental to tasks such as navigation, spatial reasoning, and understanding the relationships between objects in the environment. Spatial mapping in humans involves several cognitive processes, including perception, memory, and spatial reasoning. Memory plays a crucial role in spatial mapping. As individuals move through an environment, they encode and store information about the spatial layout, which they can later recall to navigate or perform tasks. Further, spatial memory involves similar brain regions as those implicated in sequential episodic memories. Research on human spatial mapping has greatly advanced our understanding of how humans form these mental representations, but leaves us some ways from a complete understanding. In particular, it has been difficult to understand what makes human spatial representations generalizable, enabling few-shot learning of maps of novel spaces; how humans store the vast amount of spatial information (maps) experienced through their lifetimes; and what the connection is between spatial memory and episodic memory in the brain, and why it is significant. In this thesis, I aim to answer these questions. First, I ask whether hierarchical spatial representations form the basis of generalizable spatial representations, leading to efficient exploration of novel spaces. I present a Map Induction framework that uses a compositional hierarchy to represent spaces, and present results on its utility for exploring novel spaces. Second, I ask how humans store the vast amount of information (e.g., compositional map primitives required to form hierarchical spatial representations) experienced through their lifetimes.
I present a neural model called MESH (motivated by the brain’s entorhinal-hippocampal system) that has exponential capacity and shows a gradual decay in retrieval quality with an increase in the number of stored memories, rather than a catastrophic drop. Third, I present Vector-HaSH, a model of the entorhinal-hippocampal circuit that forms an instance of MESH, preserving all its properties. This model unifies general associative memory, spatial memory, and episodic memory, providing a computational hypothesis for the unification of the spatial and episodic memory roles of the hippocampal complex. Overall, this research bridges the computational, algorithmic, and implementation levels of analysis to explain how humans represent and reason about spaces.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156973</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tumor cell-intrinsic signals promoting tolerance and adaptation to oncogenic kinase inhibition</title>
<link>https://hdl.handle.net/1721.1/156951</link>
<description>Tumor cell-intrinsic signals promoting tolerance and adaptation to oncogenic kinase inhibition
Flower, Cameron Timothy
Therapeutics targeting oncogenic kinases have offered longer survival and superior quality of life for cancer patients with particular malignancies compared to the preceding standard of care. However, many patients still fail to show a clinically meaningful response to kinase inhibitors prescribed on the basis of tumor genotype, and nearly all responsive patients eventually develop resistance, limiting the curative potential of these agents. A more complete understanding of the molecular basis underlying therapy failure is required for designing new agents and combinations with improved response rates. In this thesis, I explore these issues using tractable experimental models in which genotype-matched kinase inhibitors fail to kill or durably arrest proliferation of cancer cells, with particular focus on the role of cellular signaling networks.&#13;
In the first part, I have characterized a panel of human lung cancer cell lines harboring genetic gain-of-function alterations of clinically actionable tyrosine kinases (TKs). Using commonly prescribed TK inhibitors (TKIs), I show that TK genetic status generally predicts whether or not a cell line will show any response to genotype-matched TKI (GM-TKI), but is insufficient to predict drug tolerance, the ability of a cell line to sustain proliferation under drug. In drug combination experiments targeting co-mutated pathways, I show that some degree of tolerance to GM-TKI is explained by oncogenic co-mutations, but not across all lines. By leveraging targeted and untargeted mass spectrometry (MS) of endogenous tyrosine-phosphorylated proteins, which enables phosphosite-specific quantification of TK signaling networks, I report several cell line-specific vulnerabilities not predicted to exist at the genetic level, and the consensus observation that sustained activity of SRC family kinases (SFKs), or of the SRC-like kinases ABL1/2, is an important contributor to GM-TKI tolerance in all lines.&#13;
In the second part, I have examined the molecular events underlying drug-induced adaptation, the process by which drug exposure inadvertently drives upregulation of pro-survival signaling pathways. In a collaborative effort, we report the signaling and transcriptional dynamics underlying early adaptation to oncogenic BRAF inhibition in a patient-derived cell line model of human BRAF-mutant melanoma. We show by time-resolved MS of mitogenic signaling networks, computationally integrated with matched mRNA sequencing data, that adaptation to BRAF inhibition in our model system is promoted by early drug-induced compensatory SFK signaling, due in part to accumulation of reactive oxygen species via an impaired NRF2 antioxidant response. This concerted adaptive response promotes sensitivity to SFK inhibition across a panel of patient-derived BRAF-mutant melanoma cell lines and in a mouse xenograft model. The work described in both parts was aided by two MS software solutions I developed: one to automate the generation of targeted acquisition methods for protein phosphosites and pathways of interest, and the other to retain quantitative information from fragment ion spectra with missing values.&#13;
Together, this thesis reports new connections between cell signaling and kinase inhibitor response, and offers the intriguing hypothesis that SFK signaling may be a conserved barrier for maximally effective targeted cancer therapy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156951</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Processes of Stratification Breakdown and Restratification in Antarctic Coastal Polynyas</title>
<link>https://hdl.handle.net/1721.1/156947</link>
<description>Processes of Stratification Breakdown and Restratification in Antarctic Coastal Polynyas
Xu, Yilang
Antarctic coastal polynyas are areas of persistent open water surrounded by sea ice. They are characterized by deep winter mixing due to dense water formation from sea ice production and elevated biological productivity after spring restratification. Antarctic coastal polynyas are diverse in terms of their mixing and stratification patterns, as well as the associated biological productivity. Here, we combine satellite and in situ observations, idealized numerical models, and analytical scaling to investigate the three-dimensional polynya circulation and explore the physical factors that control the winter destratification and springtime restratification in coastal polynyas. The high-resolution coupled model with ice shelf, sea ice, and ocean components qualitatively reproduces the observed coastal polynyas and sea ice fields, as evidenced by satellite measurements. In winter, strong offshore ocean currents driven by offshore katabatic winds carry some newly formed dense water away from the polynya, weakening the destratification rate in the polynya water column. In contrast, coastal easterly winds induce onshore Ekman transport, constrain dense water outflows, and intensify vertical mixing. Moreover, an ice tongue and coastline geometry can modify sea ice and ocean circulations, thus influencing the dense water dispersal pathways and destratification in polynyas. In spring, offshore-originating sea ice meltwater primarily drives polynya restratification in the top 100 m of the water column. Even though ice shelf basal meltwater can ascend to the polynya surface, much of it is mixed over the upper 100–200 m and does not contribute significantly to the near-surface restratification. The surface runoff from ice shelf surface melt could potentially contribute significantly to the near-surface restratification, but its magnitude is poorly constrained. 
This thesis provides a framework for studying mixing and stratification dynamics in Antarctic coastal polynyas and helps to explain the associated variability in their dense water formation and biological productivity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156947</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Modeling for Guiding the Transition to Low-Carbon Logistics</title>
<link>https://hdl.handle.net/1721.1/156946</link>
<description>Quantitative Modeling for Guiding the Transition to Low-Carbon Logistics
Lehmann, Jonas
We propose quantitative models to guide the transition to low-carbon logistics by adopting new vehicle technologies. Freight and logistics systems are key enablers of global economic growth, competitiveness, and access to markets and services. However, freight mobility is a significant and growing source of negative externalities, including greenhouse gas emissions, thus facing increasing pressure to decarbonize. This thesis addresses the sector’s inherent decarbonization complexities, offering decision-support tools and insights across three chapters to decouple freight activity from carbon emissions. &#13;
First, we investigate the operational requirements for leveraging low-emission delivery vehicles in a last-mile distribution system. Specifically, we provide exact and heuristic solution approaches to route goods through a two-echelon multi-modal last-mile distribution system with satellite facilities. These systems can enhance flexibility and agility in serving densely populated, congested urban areas while reducing negative externalities by employing various vehicle types suitable for specific urban environments.&#13;
Second, we study the tactical and strategic implementation of vehicle fleet transitions towards low-carbon technologies under emissions reduction targets. More specifically, we provide a multi-period combinatorial optimization decision-support tool that offers cost-optimal fleet replacement and utilization decisions given a set of decarbonization targets. A case study utilizing fleet and network data from a large U.S. consumer goods company underscores the importance of strategic planning and execution in fleet transitions to leverage network-wide cost benefits and minimize potential excess costs as first movers.&#13;
Third, we investigate the roles of energy choices and cost uncertainties in fleet asset decarbonization. We propose a stochastic programming model to account for uncertainty in fixed and variable costs and the associated risk of stranded assets given the dynamic developments of low-emission technologies. We find that incorporating cost uncertainty captures a broader range of future technology pathways, and dynamically adjusting fleet transition strategies may offer advantages over static, deterministic approaches.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156946</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Staphylococci of the Skin: Consequences for Host Health</title>
<link>https://hdl.handle.net/1721.1/156945</link>
<description>Staphylococci of the Skin: Consequences for Host Health
Khadka, Veda D.
The skin is the body’s largest barrier organ, and as such hosts roughly one million bacteria per square centimeter over its 1.8 m² surface area. As a barrier organ, the skin not only provides a physical layer of defense against these microbes but an immunological one as well. Immune cells present in deeper layers of skin are in constant dialogue with the microbes present on the surface, and these interactions have far-reaching consequences for host health. Here, I interrogate the dynamics of the skin microbiome and the consequences of host-microbe interactions when the skin barrier is damaged. The skin as an external organ is subject to frequent stressors encountered in daily life, and can also be compromised due to genetic factors that weaken the barrier and predispose the host to inflammatory skin diseases. On healthy adults with an intact skin barrier, the skin microbiome is relatively diverse and stable. When the skin barrier is disrupted – either by daily stressors or genetic factors – the composition of the microbiome abruptly shifts to a less diverse state with an abundance of Staphylococci. Staphylococci have been shown to be important modulators of the host immune response and, in health, can improve host barrier repair after damage from wounding or parasitic infection. Much less is known, however, about immune interactions with skin-resident microbes like Staphylococci during barrier damage. In this work, I investigate the skin microbiome dynamics underlying a common inflammatory skin disease, atopic dermatitis (AD). During flares of AD, the pathogen Staphylococcus aureus rises to dominate the skin microbiome, and I show that the relative abundance of S. aureus decreases in patients who are treated with a combination of conventional therapies and dilute bleach baths. Next, I use an animal model to interrogate how the host responds to skin-resident microbes when the skin barrier is damaged. Although the protective effects of skin-resident microbes like S. epidermidis during health have made members of the skin microbiome attractive targets for development into probiotic therapies, I show that common skin microbes ubiquitously delay skin barrier repair. Together, these studies suggest a mechanism by which the skin microbiome can exacerbate disease during barrier damage, such as during AD, and describe the underlying dynamics of the skin microbiome during treatment for AD.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156945</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mucins regulate virulence and colonization factors in Streptococcus pneumoniae</title>
<link>https://hdl.handle.net/1721.1/156937</link>
<description>Mucins regulate virulence and colonization factors in Streptococcus pneumoniae
Bath, Jade Rose
Mucus covers all wet epithelia in the human body, creating a protective barrier for the underlying cell layer, and accommodating trillions of microbes that make up the microbiota. Mucin glycoproteins, the main gel-forming component of mucus, have emerged as multifaceted regulators of microbial physiology and microbial communities. Defects in mucus production or changes in mucin glycosylation are associated with microbial dysbiosis, where the outgrowth of opportunistic pathogens threatens human health. Streptococcus pneumoniae is a ubiquitous opportunistic pathogen, able both to asymptomatically colonize the microbiota of healthy children and adults and to cause invasive disease. The mechanisms by which the body tolerates the presence of S. pneumoniae as part of the microbiota remain largely unknown. In this thesis, I fill this gap by exploring how S. pneumoniae senses and responds to the mucin environment. First, I identify that mucins downregulate a key virulence factor of S. pneumoniae, the cytolytic toxin pneumolysin (PLY). I show that through the regulation of PLY, mucin protects host cells from toxin-mediated killing and modulates inflammatory signals. Second, I identify that mucins downregulate colonization factors in S. pneumoniae, modulating microbe-microbe interactions between nasopharyngeal bacteria. Together, these results uncover novel mechanisms by which mucin tames opportunistic pathogens and provide insight for the development of novel therapeutics to treat S. pneumoniae infection.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156937</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ecological forces affecting microbial eukaryotes in the coastal ocean</title>
<link>https://hdl.handle.net/1721.1/156936</link>
<description>Ecological forces affecting microbial eukaryotes in the coastal ocean
Gomez, Annika L.
Marine microbial eukaryotes (protists) play a central role in the global biogeochemical cycles. Protist communities comprise carbon-fixing eukaryotic phytoplankton, which form the base of the marine food web; heterotrophic protists, which are predators of other microbes; and mixotrophs, which engage in a combination of these nutritional modes. The total abundance of a protistan population at any given time depends on a combination of growth and death rates, which are impacted by nutrient availability (bottom-up control) and predation (top-down control). In this thesis, I investigate the effect of specific top-down and bottom-up forces at fine scales of time, location, and taxonomy, uncovering mechanisms by which nutrient limitation and viral infection affect marine protistan communities. In the first study, I leverage the 93-day Nahant Time Series to examine the dynamics and ecology of viruses infecting marine protists, the majority of which have only been identified by culture-independent means. This study focuses on Nucleocytoviricota, a diverse group of eukaryote-infecting dsDNA viruses with known potential to influence host metabolism and nutrient cycling. I developed a novel metagenomic sequence analysis pipeline that resolves cohesive populations of Nucleocytoviricota based on daily dynamics. Virus populations exhibit rapid and extensive fluctuations throughout the time series, mirroring the dynamics of their hosts. Diversity and structure of populations are indicative of viral ecology, with large networks of overlapping viruses and hosts suggestive of a broad host range for some viruses, while sharp population boundaries suggest viruses with narrow host ranges. In the second study, I investigate the role of bottom-up control, describing the effects of nutrient limitation on phytoplankton sinking velocity. We measure single-cell buoyant mass using a suspended microchannel resonator (SMR). Buoyant mass directly relates to sinking velocity through Stokes’ law. 
We show that sinking velocity can be modulated by nutrient limitations via the accumulation of carbohydrates which increase cell density. These results demonstrate that in addition to cell growth, nutrient limitation can also affect vertical stratification within phytoplankton populations. The combined conclusions of these chapters demonstrate novel mechanisms by which top-down and bottom-up forces shape marine protistan communities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156936</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neural clock underlying the temporal dynamics of an auditory memory</title>
<link>https://hdl.handle.net/1721.1/156934</link>
<description>A neural clock underlying the temporal dynamics of an auditory memory
Bahle, Andrew H.
Imitation is an essential hallmark of intelligent systems. Children imitate the speech, body language and expressions of adults, eventually graduating to creative expressions of their own individual thoughts and ideas. In machine intelligence, large language models have recently demonstrated a striking ability to convincingly imitate written forms of human language, from observation of massive corpora of text. A fundamental question is how these varied intelligent systems achieve such robust imitation. In animals, imitation is accomplished by complex neural circuits in the brain. To perform imitation, animals must first represent the sensory consequences of the action to be imitated and store this representation as a memory. Next, they must recall this sensory memory, evaluating their imitation attempts until a satisfactory match is achieved. In this thesis I study the neural control of vocal imitation in the songbird Taeniopygia guttata, focusing on the first stage of imitation when animals must form a temporally structured sensory memory, or template, of the action to be imitated. In the first chapter, I present work attempting to localize the brain regions involved in the formation of the sensory memory used in imitation. We provide evidence that HVC, a pre-motor region that controls the timing of adult song, is involved in storing the timing of the tutor memory. This work shows how focal cooling can be used to study the formation of temporally structured memories even in the absence of overt behavior. In chapter 2, we ask what neural dynamics support the observed effect of cooling on the imitation. Using freely moving calcium imaging and head-fixed high-throughput electrophysiology, we show that tutoring evokes sparse sequential activity in HVC, reminiscent of its activity during adult production of the vocal imitation. 
This activity was present as early as the very first day of tutoring, perhaps indicating that HVC connectivity is innately predisposed to produce sparse sequential representations of song. In the final chapter, we explore changes in the representation of the tutor song before and after tutoring. We observe the emergence of tutor selective neural responses in HVC after tutoring and quantify this selectivity at the population level and in different cell-types. We further show that this tutor song selectivity is stronger in HVC than any of its auditory inputs, suggesting that tutor song selectivity results from the storage of a tutor memory in HVC itself. Together this work shows how HVC neural dynamics can act as a clock for the storage and recall of an auditory memory and gives insight into how memories containing temporal structure might be stored more broadly.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156934</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The cognitive and neural basis of complex decision-making in the primate brain</title>
<link>https://hdl.handle.net/1721.1/156933</link>
<description>The cognitive and neural basis of complex decision-making in the primate brain
Ramadan, Mahdi F.
A longstanding question at the intersection of comparative psychology, cognitive ethology, and cognitive neuroscience is what cognitive strategies primates use to tackle complex multi-step decisions, and what the neural underpinnings of those strategies are. Traditionally, cognitive experiments come in two broad flavors. In one flavor, sophisticated tasks thought to invoke high-level cognitive strategies are used, but their complexity precludes them from rigorous quantitative modeling, leading to mixed interpretations. In another flavor, very simple tasks are used, which have afforded detailed characterization of behavior and the underlying neurobiology, but are limited in eliciting high-level cognitive strategies. In this thesis, I capitalize on both traditions. In the first chapter, I present a novel multi-step decision-making task that was sufficiently complex to allow for multiple strategies, ranging from basic heuristics to more optimal strategies, but simple enough to accommodate quantitative modeling. I then use a series of human psychophysical experiments to quantitatively show that humans rely on a heuristic hierarchical strategy to solve the task due to attentional constraints, and when uncertain, flexibly revise their decisions in a computationally rational manner. In chapter two, I train two monkeys on the task and find that monkeys also adopt a hierarchical and revision strategy to solve the task, like humans. Monkeys were also able to readily generalize their strategy to novel scenarios and made eye movements that were indicative of simple forms of counterfactual reasoning. However, it was difficult from behavior alone to test whether monkeys were actually using multiple different strategies to solve the task. To investigate this possibility and the underlying neurobiology of hierarchical and revision strategies, in chapter three we conducted high-density neural recordings from monkeys while they performed the task. 
Neural recordings revealed that monkeys were indeed not using one strategy to solve the task, but rather showed the initialization and dynamic progression of two distinct cognitive strategies that monkeys adaptively selected for different scenarios. We find that neural population initial conditions and response dynamics were flexibly modulated to implement these distinct decision-making strategy plans. Finally, we use the neurally inferred strategies to build composite psychophysical models that better capture the monkeys’ behavior. These results point to the importance of detailed neural recordings in combination with quantitative behavioral modeling for understanding primate cognition.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156933</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring connections between seagrass ecosystem services and meadow hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/156931</link>
<description>Exploring connections between seagrass ecosystem services and meadow hydrodynamics
Schaefer, Rachel
Meadows of aquatic vegetation, such as seagrass, modify the flow of water and transport of sediment in the environment. The hydrodynamic drag generated by a seagrass meadow contributes to the numerous ecosystem services it provides, which includes quiescent habitat for other species, wave damping, water quality enhancement, and carbon sequestration. This thesis reports on a series of studies using physical experiments, simulations, and field measurements to relate the interactions between seagrass, waves, currents, and sediment to two ecosystem services, wave dissipation and carbon sequestration.&#13;
&#13;
First, laboratory studies and simulations were used to explore how plants interact with waves and currents with the goal of predicting wave dissipation and turbulence generation. The flexibility of a plant is critical in defining its interactions with the environment. Seagrass plants deflect under currents, which streamlines the plants and reduces the parts of the plants directly experiencing the flow, and sway under waves, which reduces the relative motion between the plants and the flow. These responses, known as reconfiguration, reduce the drag seagrass plants experience compared to a rigid plant of the same length. Laboratory flume and numerical experiments showed that the relative magnitudes of current and wave velocities determine the influence of reconfiguration on drag, and therefore on seagrass-induced wave attenuation and turbulence. For more flexible leaves, defined as having a ratio of drag force to restoring force due to stiffness greater than 100, drag reduction due to current-induced deflection competes against drag augmentation due to lower relative motion, such that enhancing current speeds reduces wave energy dissipation only when the current velocity is less than one-third of the maximum wave velocity. For stiffer leaves, drag augmentation dominates drag reduction, so that adding a current enhances wave energy dissipation. Meanwhile, the measured effects of reconfiguration on plant-generated turbulence were used to propose a hybrid analytical model that predicts turbulence by accounting for the relative contributions of waves and currents.&#13;
&#13;
Second, field experiments were performed in three Massachusetts, USA seagrass meadows to relate spatial patterns in hydrodynamics with spatial patterns in sediment organic carbon. Lower velocities were expected to reduce sediment mobility and thus enhance the deposition and retention of sediment carbon. At a wave-dominated continuous meadow, results showed decreasing sediment carbon accretion rates with increasing wave velocities, which could be predicted by accounting for seagrass-induced wave damping and wave shoaling. However, at a current-dominated lagoonal continuous meadow, sediment carbon increased with increasing tidal velocities. The spatial reduction in sediment carbon at the latter site was attributed to spatial diminishment of sediment supply with increasing distance into the meadow, away from the lagoon inlet. Lastly, in a patchy current-dominated meadow the spatial variability in sediment carbon stocks did not correlate with the spatial distribution of patches. One vegetated patch showed substantially higher sediment carbon than the rest of the meadow, which was attributed to the recent persistence of the specific patch. In addition, preliminary results for a field study comparing different methods of estimating net ecosystem carbon exchange in a seagrass meadow are also presented.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156931</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformation toughening and the martensitic transformation in ZrO2</title>
<link>https://hdl.handle.net/1721.1/156848</link>
<description>Transformation toughening and the martensitic transformation in ZrO2
Coyle, Thomas William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1985; Vita.; Bibliography: leaves 235-251.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156848</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic behavior in economic rivalry</title>
<link>https://hdl.handle.net/1721.1/156847</link>
<description>Strategic behavior in economic rivalry
Fudenberg, Drew.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156847</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information and distortion in filtering theory.</title>
<link>https://hdl.handle.net/1721.1/156845</link>
<description>Information and distortion in filtering theory.
Galdos, Jorge Ignacio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1975; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156845</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Super-resolution Control of Ultracold Dipolar Atoms on a 50-nm Scale</title>
<link>https://hdl.handle.net/1721.1/156809</link>
<description>Super-resolution Control of Ultracold Dipolar Atoms on a 50-nm Scale
Du, Li
Degenerate quantum gases of magnetic atoms such as dysprosium (Dy) and erbium (Er) offer new opportunities for quantum simulation research due to their large spin degree of freedom and long-range dipole-dipole interactions. In this thesis, following an introduction to the fundamental properties of Dy, we describe the design and construction of an experimental apparatus that is capable of producing Bose-Einstein condensates of more than 10⁵ Dy atoms every 10 seconds. &#13;
In addition, we describe two experiments that advance quantum control over the spin, motion, interactions, and dynamics of ultracold dipolar gases.&#13;
&#13;
In the first experiment, we introduce a super-resolution control scheme using a spin-dependent optical potential that localizes Dy atoms on a sub-50 nm scale, a distance more than 10 times shorter than the optical wavelength. With the interatomic distances shortened by a factor of 10, the interatomic dipole-dipole interaction is significantly enhanced. We discuss how this strong and tunable long-range interaction enables the simulation of new classes of many-body Hamiltonians. We experimentally demonstrate the super-resolution technique by creating a bilayer of ultracold Dy atoms and mapping out the atomic density distribution with sub-10 nm resolution. The interlayer dipole-dipole interactions are detected via two out-of-equilibrium experiments.&#13;
&#13;
In the second experiment, we study the suppression of dipolar relaxation, an inelastic process that limits the lifetime of higher spin states, using external optical confinement. By confining ultracold dysprosium atoms in ultrathin optical layers, the magnetic atoms can approach each other only side by side. The interatomic dipole-dipole repulsion provides a protective shield that stops the atoms from tunneling to short range. We observe an order-of-magnitude suppression of inelastic dipolar relaxation losses in the presence of the dipolar shield. This scheme can extend the lifetime of quantum gases of spin mixtures, thereby offering more opportunities for exploring physics such as spin-orbit-coupled Bose gases and dipolar spinor condensates.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156809</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental, geochemical and isotopic insights on melting and degassing behavior on Earth</title>
<link>https://hdl.handle.net/1721.1/156788</link>
<description>Experimental, geochemical and isotopic insights on melting and degassing behavior on Earth
Beaudry, Patrick Claude
This thesis investigates igneous and geothermal processes on Earth through various lenses and at various scales. The processes addressed range from fluid circulation in hydrothermal systems to the formation of the continental crust, onset of subduction, degassing of arc magmas, and mantle melting at mid-ocean ridges. Through this unlikely “medley” of Earth Science problems, parallels are drawn between experiments investigating melting and degassing at high pressure, geochemical and isotopic fingerprints of the Earth’s oldest rocks, and kinetic processes associated with aqueous supercritical fluids. The main lines of investigation rely on the tools of experimental petrology (Chapters 2 and 3) and stable isotope geochemistry (Chapters 1 and 4).  Chapter 1 explores the systematics of methane (CH4) isotopologues in geothermal systems, with a particular focus on the kinetics of isotopic exchange reactions in the vicinity of the supercritical point of water (373°C and 220 bars). This study finds that CH4 isotopologues uniquely record high-temperature processes, given their high closure temperature—i.e. slow equilibration timescales under typical geothermal conditions—combined with the fast timescales associated with supercritical fluids. Chapter 2 describes the development of a new piston cylinder experimental approach to study the solubility and speciation of sulfur (S) in hydrous, oxidized primitive magmas such as can be found in subduction zones. High-pressure experiments demonstrate the coupled behavior of H2O and S, which mutually interact to fix redox conditions. Exsolution of S-rich fluids is found to play an important role in magmatic redox conditions, with apparent preferential loss of oxidized S to a fluid phase, explaining several natural observations from arc environments. 
Chapter 3 confirms the primary nature of a high MgO, high Al2O3 mid-ocean ridge basalt (MORB) glass from the ultraslow spreading Southwest Indian Ridge, identifying its multiple saturation boundaries within the plagioclase lherzolite stability field. This finding validates newly developed quantitative petrogenetic models for MORB, with important implications for our understanding of mantle thermal structure and for the origin of primitive glasses globally found at mid-ocean ridges. Chapter 4 describes the multiple S isotope characteristics of a suite of mafic to felsic rocks from the 4.0–2.9 Ga Acasta Gneiss Complex (AGC) from the Northwest Territories, Canada. These help place constraints on the early Earth S cycle and its relation to tectonic regime. Along with other geochemical indicators, the Acasta rocks appear to record a gradual onset of subduction-like processes, established at least by ~3.3 Ga.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156788</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the Reach of Quantum Enhanced Gravitational-Wave Detectors</title>
<link>https://hdl.handle.net/1721.1/156780</link>
<description>Expanding the Reach of Quantum Enhanced Gravitational-Wave Detectors
Ganapathy, Dhruva
The Advanced LIGO detectors are the most precise displacement sensors ever made, operating at the cutting edge of quantum noise limited sensitivity. The introduction of non-classical squeezed states to reduce quantum shot noise during the third gravitational wave observing run O3 ushered in the era of quantum-enhanced gravitational wave interferometry. This was, however, accompanied by an increase in measurement back-action, in the form of quantum radiation pressure noise, which degraded detector sensitivity at low frequencies below 100 Hz. In the early 2000s, Kimble et al. [1] proposed the use of optical filter cavities to prepare frequency dependent squeezed states which circumvent measurement back-action by suppressing radiation pressure noise at low frequencies while continuing to reduce shot noise across the rest of the gravitational wave signal band.&#13;
&#13;
In this thesis, we explore frequency dependent squeezing for gravitational wave detectors, with an emphasis on optimal filter cavity design and characterization of squeezing in optical systems. We then describe the commissioning of a 300 m filter cavity for the first realization of frequency dependent squeezing in a gravitational-wave interferometer for the fourth gravitational wave observing run O4. Along with significantly enhancing the astrophysical sensitivity of the LIGO detectors, this is also the latest milestone in several decades of research in quantum noise reduction.&#13;
We conclude the thesis by extending frequency dependent squeezing to alternate interferometer configurations by studying the feasibility of detuning the signal cavity of the interferometer to enhance sensitivity to kilohertz signals from neutron star post-mergers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156780</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Wide-Field Infrared Transient Explorer (WINTER): a new near-infrared time-domain survey</title>
<link>https://hdl.handle.net/1721.1/156779</link>
<description>The Wide-Field Infrared Transient Explorer (WINTER): a new near-infrared time-domain survey
Frostig, Danielle
The Wide-Field Infrared Transient Explorer (WINTER) is a new near-infrared observatory for time-domain astronomy, the study of the evolving night sky. The field has exploded in the last two decades at optical wavelengths, but complementary infrared efforts have been limited by available detector technologies. In this thesis, I present the design, build, and early operations of the new WINTER instrument, which was installed on a dedicated 1-meter robotic telescope at Palomar Observatory in June of 2023. &#13;
&#13;
WINTER’s science goals include robotic follow-up of kilonovae from binary neutron star (BNS) and neutron-star black-hole (NSBH) mergers, surveys to study galactic and extragalactic transients and variables, and building up a deep coadded image of the near-infrared sky. The project also helped develop the world’s largest Indium Gallium Arsenide (InGaAs) detectors for cost-effective near-infrared astronomical imaging without cryogenic cooling. The custom camera combines six InGaAs detectors with a novel tiled fly’s-eye optical design to cover a &gt;1 degree-squared field of view with a 90% fill factor. WINTER observes in the Y-, J-, and shortened-H-band filters (0.9-1.7 microns), with a filter tray selecting one filter at a time. &#13;
&#13;
The project is a collaboration between MIT and Caltech, with Caltech leading the data reduction pipeline and observatory site management and MIT leading the instrument and facility hardware and operations. This thesis touches upon all aspects of the instrument, highlighting the major subsystems I directed, including detailed instrument design and modeling, project requirements flowdown, kilonova follow-up science simulations, testing of new detectors alongside the development of custom readout firmware and software, stray-light analysis, robotic scheduling software, and on-sky early operations and science.&#13;
&#13;
Since its installation in 2023, WINTER has been operating robotically each night, with ongoing work to improve the instrument. This thesis presents a snapshot of WINTER's progress as of April 2024, concluding with an update on its current performance and future directions for the project.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156779</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Quantum Information to Cosmic Censorship: Emergent Spacetimes and Their Surfaces</title>
<link>https://hdl.handle.net/1721.1/156778</link>
<description>From Quantum Information to Cosmic Censorship: Emergent Spacetimes and Their Surfaces
Folkestad, Åsmund Schiager
In this thesis, we explore classical and semiclassical gravity from the perspective of the AdS/CFT correspondence. We leverage global methods in General Relativity (GR) together with quantum information- and complexity-theoretic properties of the conformal field theory (CFT) dual to obtain novel results in classical and semiclassical gravity.&#13;
&#13;
In the first part, we obtain a collection of results suggesting that holography enforces a refined version of Cosmic Censorship that potentially can replace the Weak Cosmic Censorship (WCC) conjecture, which has been disproven in Anti-de Sitter (AdS) spacetimes. We show that certain important GR results usually proven assuming WCC can instead be derived from consistency of the AdS/CFT dictionary. We also construct new likely violations of WCC in asymptotically AdS₄ spacetimes, but show that these cannot have a holographic dual; this provides evidence that singularities are better behaved in holographic theories, compared to GR with generic matter. Finally, we show a connection between event horizons and CFT pseudorandomness, and we construct a new measure of the size of a naked singularity. We conjecture that quantum gravity only forbids macroscopic naked singularities, according to this measure.&#13;
&#13;
In the second part, we derive new properties of various extremal submanifolds, with several consequences for AdS/CFT. For example, we provide a physically intuitive explanation for why extremal surfaces are natural boundaries between independent subsystems. We also prove results that constrain far-from-equilibrium dynamics in gravity and CFTs.  Finally, we construct a puzzle showing that geometric states with large entanglement need not correspond to a wormhole, highlighting subtleties in the ER=EPR proposal.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156778</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast spectroscopy and control of correlated quantum materials</title>
<link>https://hdl.handle.net/1721.1/156759</link>
<description>Ultrafast spectroscopy and control of correlated quantum materials
Fichera, Bryan T.
In this thesis, I describe research completed during my Ph.D. on correlated condensed matter systems using ultrafast optics. I begin with a broad overview of this field, focusing specifically on the essential physics involved in ultrafast processes and how that physics may be utilized, in the sense of either spectroscopy or control, to understand correlated systems. I then give a pedagogical introduction to second harmonic generation, both in theory and in practice, before describing results from four projects I completed in my Ph.D.: (i) a technical project concerned with automating polarization rotation in second harmonic generation, (ii) a demonstration that second harmonic generation may be used to differentiate charge density wave domains with opposite planar chirality, (iii) our discovery of an ultrafast reorientation transition in the antiferromagnetic semiconductor CaMn₂Bi₂, and (iv) second harmonic generation evidence for an amplitude-mode electromagnon in CuBr₂. I conclude by reflecting on the progress achieved in correlated electron physics as a result of this work, and by giving my own perspective on the future of this field.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156759</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gyroscopes Orbiting Gargantuan Black Holes: Spinning Secondaries in Extreme Mass Ratio Inspirals</title>
<link>https://hdl.handle.net/1721.1/156758</link>
<description>Gyroscopes Orbiting Gargantuan Black Holes: Spinning Secondaries in Extreme Mass Ratio Inspirals
Drummond, Lisa V.
Large mass ratio binary black hole systems are essential for studying the two-body problem in general relativity and are key sources of low-frequency gravitational waves. These sources will be detectable by the Laser Interferometer Space Antenna (LISA), which is a planned space-based gravitational-wave observatory. At lowest order, the secondary body (smaller black hole) follows a geodesic of the more massive black hole's spacetime. Post-geodesic effects are needed to model the system accurately. Failure to incorporate these effects can introduce bias in tests of general relativity and compromise precision measurement of the larger black hole's properties. One very important post-geodesic effect is the gravitational self-force, which describes the small body's interaction with its own contribution to a binary's spacetime and includes the backreaction of gravitational-wave emission driving inspiral. Another post-geodesic effect, the spin-curvature force, is due to the smaller body's spin coupling to spacetime curvature. Exploiting the large mass-ratio approximation, this thesis presents a suite of mathematical and computational tools for precisely calculating bound orbits and inspiral of spinning bodies around rotating black holes. &#13;
&#13;
In Chapters 3 and 4, we employ a frequency-domain formulation to describe completely general orbits of spinning bodies in curved spacetime. The small body's spin influences orbital frequencies and accumulated phases, which are direct gravitational-wave observables. In Chapter 5, we combine the leading orbit-averaged backreaction of point-particle gravitational-wave emission with the spin-curvature force to construct the trajectory and associated gravitational waveform of a spinning body inspiraling into a Kerr black hole. To achieve this, we use a near-identity transformation (NIT) to rapidly compute trajectories for generic orbit and spin configurations. This efficiency is essential for the high-dimensional, long-duration waveforms of large mass-ratio binary systems.  In Chapter 6, we describe how the framework of Chapters 3 and 4 can be used to generate gravitational wave fluxes for spinning bodies on completely generic orbits and discuss a “shifted geodesic” approximation scheme which could speed up the evaluation of these fluxes. This thesis introduces methods for accurately modeling completely general orbits of spinning bodies in large mass ratio binary black hole systems, enhancing gravitational-wave models for the LISA science program and providing a limit that can be computed precisely as a benchmark for calculations across all mass ratios.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156758</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport Properties of Divertor Edge Plasmas Measured with Multi-Spectral Imaging</title>
<link>https://hdl.handle.net/1721.1/156659</link>
<description>Transport Properties of Divertor Edge Plasmas Measured with Multi-Spectral Imaging
Linehan, Bryan Lee
The transport of heat and particles in the boundary of a tokamak is not sufficiently understood for the purposes of constructing a pilot nuclear reactor. Improving numerical and theoretical understanding is inhibited by traditional boundary diagnostics that provide sparse and inflexible spatial coverage. In this thesis, multi-spectral imaging of helium line ratios (HeMSI) was used to create 2D poloidal maps of Tₑ and nₑ in the TCV divertor. These are the first plasma boundary measurements to provide continuous 2D coverage of Tₑ and nₑ for arbitrary magnetic geometries. These measurements were validated against co-local Thomson scattering measurements in diverted plasmas. HeMSI showed good agreement with Thomson scattering in the common flux region (CFR) of ionizing plasma for both majority helium and majority deuterium plasmas. Having validated this powerful new tool, HeMSI was used to investigate the effects of flux expansion in the TCV divertor for plasmas in the conduction limited regime. Increasing poloidal flux expansion is expected to lower the temperature of the divertor target by increasing the plasma volume and the connection length of the magnetic field line between the core and target. These benefits are observed in the conduction limited regime but not in the partially detached regime. The 2D poloidal maps of Tₑ and nₑ, in concert with other measurements, were used to calculate the ionization rate of He and D, the E × B drift velocity, Spitzer heat conduction, and parallel flow in 2D. This allowed heat transport to be locally resolved into conduction, parallel convection, and drift convection components. Similarly, particle transport was categorized into drift and parallel components. These calculations demonstrate that in relatively cool plasmas (Tₑ &lt; 30 eV), drifts constitute a significant fraction of the heat and particle transport. 
This violates the assumptions of simple two-point modeling and demonstrates the importance of accounting for drifts in modeling. Drifts may explain the boundary’s lack of sensitivity to poloidal flux expansion in the partially detached regime. Lastly, the anomalous heat and particle transport coefficients, χ⊥ and D⊥, were calculated by enforcing local power and particle balance. Values of χ⊥ close to the separatrix (ρ &lt; 1.005) and values of D⊥ were consistent with standard modeling practices. However, χ⊥ measurements sufficiently far into the CFR (ρ &gt; 1.005) exceeded typical modeling assumptions by two orders of magnitude. This implies that boundary codes will underestimate the radial temperature falloff length. This is shown to be true in a comparison of Tₑ measurements to simulations performed with the SOLPS-ITER code. This brings into question the validity of the assumption of diffusive heat transport in the far CFR.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156659</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Entanglement and Chaos in Quantum Field Theory and Gravity</title>
<link>https://hdl.handle.net/1721.1/156658</link>
<description>Entanglement and Chaos in Quantum Field Theory and Gravity
Wei, Annie Y.
In this thesis we explore several questions at the intersection of quantum information theory and quantum many-body physics. We study properties like entanglement and chaos, and we use intuition from discrete, few-body systems to learn about continuum systems. First we study quantum scars, a phenomenon previously studied in chaotic, few-body quantum systems, and we extend the analysis from the case of few-body quantum mechanics to the case of quantum field theory. Next we turn to the study of multipartite entanglement. Inspired by the operational interpretation of bipartite entanglement, we propose a new information-theoretic measure for tripartite entanglement based on subsystem recoverability, and we study this quantity in the vacuum state of (1+1)-D conformal field theory. Then we consider toy models of quantum gravity, where the objective is to construct qubit models that reproduce aspects of holography. We study toy models that consist of putting a lattice gauge theory on a tensor network, and we show how such toy models can be made background-independent. Finally we propose a new tensor network toy model for 3D gravity that features a topologically defined area operator, such that the areas on crossing cuts do not commute.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156658</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illuminating the Cosmos: dark matter, primordial black holes, and cosmic dawn</title>
<link>https://hdl.handle.net/1721.1/156657</link>
<description>Illuminating the Cosmos: dark matter, primordial black holes, and cosmic dawn
Qin, Wenzer
The Λ-CDM model of cosmology has done much to clarify our picture of the early universe. However, there are still questions that Λ-CDM does not necessarily answer: What is the fundamental nature of dark matter? What is its origin? And what causes the intriguing measurements that we are seeing from cosmic dawn? In this thesis, I describe three directions in which I have pushed forward our understanding of how fundamental physics manifests in cosmology. First, I have studied the signatures of exotic energy injection in various astrophysical and cosmological probes, including the Lyman-α forest, the blackbody spectrum of the cosmic microwave background, the power spectrum of the cosmic microwave background, and the formation of the earliest stars in our universe. Second, I have investigated the formation of primordial black hole dark matter in a general model for inflation with multiple scalar fields. Using Markov Chain Monte Carlo methods, I have identified the space of models that can generate primordial black holes while remaining in compliance with observational constraints, and have also shown that future gravitational wave observatories will be able to further constrain these models. Finally, I have developed an analytic description of signals from 21 cm cosmology using methods inspired by effective field theory. This method includes realistic observational effects and has been validated against state-of-the-art radiation hydrodynamic simulations, including those with alternative dark matter scenarios. With these recent efforts, we are advancing the frontiers of dark matter phenomenology and cosmology, thereby paving the way towards illuminating the remaining mysteries of our cosmos and drawing closer to a comprehensive understanding of the universe.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156657</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Algorithms, Optimizations, and Benchmarks for Metric and Graph Clustering</title>
<link>https://hdl.handle.net/1721.1/156653</link>
<description>Parallel Algorithms, Optimizations, and Benchmarks for Metric and Graph Clustering
Yu, Shangdi
Clustering is a fundamental unsupervised machine learning task of detecting groups of similar objects in data. Clustering can be used to identify the underlying substructures of data and can detect essential functional groups, such as people with similar interests, news articles on similar topics, or proteins with similar utilities, which can then be used for various downstream tasks. In this thesis, we focus on efficient clustering algorithms for metric and graph data. Both types of data are common in various applications today. Moreover, in modern applications, the size of both types of datasets and the dimensionality of the metric data are scaling rapidly. We address the challenge of clustering large datasets by designing algorithms with high parallelism that take advantage of modern shared-memory multi-core machines and dynamic algorithms that can efficiently update the result without re-computing from scratch. We also present approximate clustering algorithms that scale to high-dimensional data.&#13;
&#13;
The first part of this thesis studies parallel clustering algorithms for low-dimensional metric data. Clustering algorithms are frequently expected to perform numerous similarity searches, as clustering entails grouping similar objects together. Although many algorithms have been designed for nearest neighbor search, many clustering algorithms require customized nearest neighbor search with special constraints, so we cannot use existing nearest neighbor search approaches off the shelf. We present examples of how to design customized similarity searches for hierarchical agglomerative clustering and density peaks clustering algorithms using optimized tree index data structures.&#13;
&#13;
The second part of this thesis studies parallel clustering algorithms for high-dimensional metric data. We show two approaches for clustering high-dimensional data. The first is to design approximate similarity searches that are customized for a particular clustering algorithm. The second is to convert the metric data into a graph representation and then run graph clustering algorithms on this derived graph. For the first approach, we present an approximate density peaks clustering framework for high-dimensional data using approximate similarity searches. We also show that the framework has good empirical performance with graph-based nearest neighbor search techniques on high-dimensional data. For the second approach, we present an algorithm that clusters high-dimensional metric data by converting the data into a particular graph representation called the triangulated maximally filtered graph and then running the directed bubble hierarchical tree algorithm on the converted graph.&#13;
&#13;
The final part of this thesis studies clustering algorithms for graph data. We present a dynamic graph clustering algorithm that can quickly update the output when the input changes instead of performing a slow recomputation from scratch. We also introduce a benchmarking suite for comprehensively evaluating the quality and speed of parallel graph clustering algorithms on both native graphs and k-nearest neighbor graphs converted from metric data. Our evaluation includes methods tailored to both weighted and unweighted graphs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156653</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Learning Algorithms via Sublinear-Time Methods</title>
<link>https://hdl.handle.net/1721.1/156652</link>
<description>Enhancing Learning Algorithms via Sublinear-Time Methods
Vasilyan, Arsen
Our society increasingly relies on algorithms and data analysis to make critical decisions. Yet, almost all work in the theory of supervised learning has long relied on the following two assumptions: 1. Distributional assumptions: data satisfies conditions such as Gaussianity or uniformity. 2. No distribution shift: data distribution does not change between training and deployment. While natural and often correct, these assumptions oftentimes do not hold, yet they are routinely made when giving theoretical guarantees for supervised learning algorithms. These guarantees can become null and void should one of these algorithms be used in a setting where the assumptions do not hold. Overall, if critical decisions rely on theoretical reliability guarantees, incorrect assumptions can result in catastrophic failure. The first part of this thesis shows how to mitigate this dependence. We introduce and develop testers which can alert a user if some assumptions are not satisfied. Leveraging insights from the area of property testing, the first part of this thesis constructs such testers for a number of well-studied function classes, addressing distributional assumptions and distribution shift. The second part of this thesis shows how insights from sublinear-time algorithms can also be used to make learning algorithms more runtime-efficient. We show that sublinear-time local algorithms, capable of deriving partial solutions by examining only a fraction of the input, can be used as a powerful primitive to resolve problems in learning theory.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156652</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable and Trustworthy AI for Evidence-based Clinical Decision Support in Cancer Care</title>
<link>https://hdl.handle.net/1721.1/156651</link>
<description>Reliable and Trustworthy AI for Evidence-based Clinical Decision Support in Cancer Care
Moon, Intae
The integration of cutting-edge AI methods with real-world clinical data has moved from being a novelty to a necessity in oncology. However, the deployment of AI faces challenges, including the complexity of reliably modeling longitudinal Electronic Health Records (EHR) characterized by missing data and frequent patient drop-outs, patient heterogeneity which leads to disparities in AI performance, and the need for validating AI models' clinical benefits, especially in managing challenging cancer cases. This thesis presents research focused on addressing these challenges: developing a continuous time model-based time-to-event regression framework to improve the prediction of clinically meaningful patient outcomes from irregularly sampled EHR data; utilizing data and algorithm-driven approaches to mitigate AI performance disparity for predicting cancer-associated adverse events across diverse patient demographics; and developing an AI-based decision support tool that integrates genomics and clinical data for evidence-based cancer care, with a focus on improving management of difficult-to-treat cancer cases. This work contributes towards transforming cancer care through reliable and trustworthy AI-driven clinical decision support.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156651</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neuro-Symbolic Learning for Bilevel Robot Planning</title>
<link>https://hdl.handle.net/1721.1/156646</link>
<description>Neuro-Symbolic Learning for Bilevel Robot Planning
Silver, Tom
Decision-making in robotics domains is complicated by continuous state and action spaces, long horizons, and sparse feedback. One way to address these challenges is to perform bilevel planning, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. This thesis proposes the first unified system for learning all the abstractions needed for bilevel planning. Beyond learning to make planning possible, this thesis also considers learning to make planning fast, especially in environments with many objects. A final contribution considers planning to learn, where the robot iteratively plans online to collect additional data and then learns to improve planning. Altogether, the thesis represents a step toward a general-purpose robot that can autonomously synthesize a specialized library of abstractions and plan to solve a very broad set of tasks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156646</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Learning-guided Search for Coordination of Multi-agent Transportation at Scale</title>
<link>https://hdl.handle.net/1721.1/156645</link>
<description>Towards Learning-guided Search for Coordination of Multi-agent Transportation at Scale
Yan, Zhongxia
While transportation is an age-old problem, new technologies for autonomy raise new possibilities and realities for coordination of hundreds or thousands of vehicles and robots: criss-crossing autonomous vehicles, faster/cheaper Amazon delivery, and robot warehouses for storage/sorting/fetching. How do we tackle these new optimization challenges? In this thesis, I highlight multiple levels of decision-making in large-scale transportation problems, ranging from assignment of tasks to collision-free path/motion planning and everything in between (e.g. order of goals, routing, order of crossing, lane changing, continuous acceleration control). As practical solutions must be obtained in limited time, we leverage machine learning policies embodying offline experience to improve decision making. However, as we find in coordination of autonomous vehicles, policy learning alone may accommodate highly nonlinear continuous system dynamics but is insufficient for addressing the combinatorial discrete decisions in high-dimensional multi-agent systems. Thus, we investigate a more effective paradigm for tackling multi-agent transportation problems, which involves 1) identifying or designing well-suited search-based algorithms for the problem settings and then 2) designing machine learning approaches for guiding and accelerating the search algorithm. For problems ranging from vehicle routing problems (VRPs) to multi-agent path finding (MAPF), we find that, while the design of well-suited search-based algorithms is important, deep neural network policies consistently accelerate or improve the solution quality of state-of-the-art search algorithms while eliminating the need for hand-designed search heuristics. With extensive empirical evaluations, we demonstrate that such learned policies often generalize beyond their training distributions to broader problem distributions. 
Finally, we return to the problem of autonomous vehicle coordination to design efficient search algorithms leveraging the structures of crossing orders at intersections with continuous vehicle kinematics, motivating further research in learning-guided crossing order search and semi-centralized coordination of vehicles/robots.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156645</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between Correlation and Topology in Two-dimensional Systems</title>
<link>https://hdl.handle.net/1721.1/156642</link>
<description>Interplay between Correlation and Topology in Two-dimensional Systems
Dong, Zhihuan
A new class of 2d materials, moiré superlattices, has emerged and become one of the most exciting playgrounds for the study of many-body physics. These systems, thanks to their unprecedented tunability, have exhibited a plethora of interesting phenomena in experiments, such as unconventional superconductivity, strange metal behavior, exciton condensation, emergent Kondo lattice physics, quantum Hall ferromagnetism, the (fractional) quantum anomalous Hall effect, and the formation of anomalous Hall crystals. In many of these systems, there is typically a nearly flat low-energy band with non-zero Berry curvature. This generalizes the familiar quantum Hall physics to a broader context, where both dispersion and quantum geometry can be varied. This thesis focuses on novel quantum phases in systems where three ingredients, kinetic energy, interaction, and band topology, all play a role. First, we demonstrate quantum Hall ferromagnetism in a topological band, a simple yet striking example of the crucial role of band topology in the consequences of strong correlations. Motivated by this, we present a fruitful framework for thinking about the effects of interaction within a topologically nontrivial band, known as non-commutative field theory. This development provides an analytical handle on this broad class of challenging problems and settles long-standing puzzles shrouding quantum Hall physics. Most existing studies focus on strong correlation effects on a stage defined by band topology. However, this picture is only justified when the single-particle band gap dominates over the interaction. Going beyond this regime, we study two representative moiré systems: (1) the quantum anomalous Hall effect in transition metal dichalcogenide moiré, and (2) the fractional quantum anomalous Hall effect in multilayer rhombohedral graphene moiré.
In these systems, instead of playing a role on the stage defined by the band topology, the interaction is strong enough to determine the band topology. We identify various mechanisms for interactions to stabilize a Chern band.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156642</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Organizational Economics and Strategy</title>
<link>https://hdl.handle.net/1721.1/156641</link>
<description>Essays in Organizational Economics and Strategy
Quist, Kramer
In these essays, I explore how organizational structure interacts with other features of an organization to influence strategy. In the first paper, I consider how an organization’s cognitive diversity interacts with organizational structure to influence the degree to which the organization chooses to pursue exploratory new ideas. In the second paper, I consider when delegation motivates or demotivates employees. In the final paper, I consider how different types of communication technologies complement different types of organizational structures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156641</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next-Generation Intelligent Portfolio Management</title>
<link>https://hdl.handle.net/1721.1/156635</link>
<description>Next-Generation Intelligent Portfolio Management
Zhao, Zijie
In the fast-paced world of financial technology, the integration of advanced Natural Language Processing (NLP) and Deep Reinforcement Learning (DRL) is transforming portfolio management. This thesis presents a pioneering portfolio management framework that leverages Transformer-based models and Large Language Models (LLMs) to enhance return predictions and sentiment extraction from extensive financial texts, coupled with robust DRL trading agents to optimize portfolio performance. We introduce an adaptive retrieval-augmented framework for LLMs, finely tuned through instruction tuning to align with human instructions and incorporate market feedback. This approach enables dynamic weight adjustments within the Retrieval-Augmented Generation (RAG) module, showcasing the synergy between extracting more accurate underlying sentiment and better capturing stock movements, resulting in more profitable and robust portfolios. Additionally, we address the challenges of applying DRL to stock trading by developing the Hierarchical Reinforced Trader (HRT). This innovative strategy employs a bi-level DRL framework that combines strategic stock selection via a High-Level Controller with effective trade executions managed by a Low-Level Controller. Our results demonstrate significant enhancements in portfolio management, achieving higher Sharpe ratios than the S&amp;P 500 benchmark in bullish markets, while also substantially reducing losses and drawdowns in bearish and volatile market scenarios. Moreover, model interpretability is crucial given the black-box nature of both LLMs and DRL models. Practitioners without a strong machine learning background require clear interpretations of model outputs. To address this, one idea is to consider features univariately, omitting feature interactions to maintain interpretability. The Univariate Flagging Algorithm (UFA) identifies optimal cut points for each feature, flags them, and summarizes them to lower dimensions for each sample.
We further enhance the UFA framework within the Generalized Additive Model (GAM), extending it to a broader framework capable of modeling any data generated by exponential family distributions. Our comparative analysis on various public benchmark datasets demonstrates that our extended framework not only achieves better predictive results than the original UFA but also retains its robustness against missing and imbalanced datasets. In conclusion, this thesis underscores the significant potential of integrating advanced NLP and DRL techniques into portfolio management, setting a new standard for intelligent financial decision-making.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156635</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lab-to-Fab Monolithic 3D Integrated Carbon Nanotube Transistors: Scaling and Reliability</title>
<link>https://hdl.handle.net/1721.1/156634</link>
<description>Lab-to-Fab Monolithic 3D Integrated Carbon Nanotube Transistors: Scaling and Reliability
Yu, Andrew C.
Conventional scaling of silicon integrated electronics can no longer yield improvements that keep pace with the increasing computing demands of abundant-data applications. Moreover, for data intensive computing applications, a majority of system energy is consumed moving data between compute and off-chip memory, which are often physically separate with limited connectivity. This is termed the “memory wall”. A promising solution to this problem is monolithic 3D integration, in which layers of compute and memory are designed and integrated together vertically in the same monolithic 3D nanosystem, connected by ultra-dense, nanoscale interconnects, referred to as interlayer vias (ILVs). This provides significant projected system-level energy-delay benefits beyond conventional 2D physical and equivalent scaling. However, conventional silicon logic and memory technologies are incompatible with such monolithic 3D integration and cannot be used to realize such 3D nanosystems.&#13;
&#13;
In this thesis, I first develop, and then establish within a commercial foundry, a monolithic 3D technology using a back-end-of-line (BEOL) carbon nanotube FET (CNFET) + Resistive RAM (RRAM) stack over silicon CMOS that achieves comparable memory performance (read power, write energy/latency, endurance, retention, multiple bits-per-cell capability) in the same footprint as a conventional RRAM stack using front-end-of-line (FEOL) silicon FET access transistors. This is accomplished through the following: (1) I develop the first CNFET process that is lift-off-free and can scale to advanced process technology nodes, (2) I lab-to-fab transfer and adapt this process from an academic prototype into a commercial CMOS foundry process on 200 mm wafers at a 90 nm technology node equivalent, and (3) I improve the scaling, variation, and reliability of the lift-off-free BEOL CNFET to achieve iso-performance, iso-footprint, and iso-reliability BEOL memory metrics. This process is established within SkyWater Technology Foundry (90/130 nm technology node on 200 mm Si wafers) and an apples-to-apples comparison is made directly versus FEOL Si FET + RRAM fabricated on the same wafers, from the same foundry, at the same node.&#13;
&#13;
Such BEOL CNFET + RRAM technology promises to unlock a large architecture design space with significant system-level energy-delay product (EDP) benefits vs. FEOL Si + RRAM-only designs, e.g., &gt;5× EDP benefits for new iso-footprint, iso-memory-capacity monolithic 3D architectures uniquely enabled by new monolithic 3D physical design. In summary, this thesis experimentally implements and demonstrates foundry monolithic 3D using beyond-silicon nanotechnologies as a complementary integration path for dramatically improving system-level energy-efficiency and performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156634</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning the language of biomolecular interactions</title>
<link>https://hdl.handle.net/1721.1/156633</link>
<description>Learning the language of biomolecular interactions
Sledzieski, Samuel
Proteins are the primary functional unit of the cell, and their interactions drive cellular function. Interactions between proteins are responsible for a wide variety of functions ranging from catalytic activity to cellular transport and signaling, and interactions between small molecules and proteins are the foundation of many therapeutics. However, the experimental determination of these interactions is expensive and relatively slow, limiting the ability to model interactions at genome scale. It is therefore critical to develop computational approaches for modeling these interactions. Unsupervised language models trained on amino acid sequences, namely protein language models, learn patterns in sequence evolution that encode protein structure and function. These protein language models are thus a powerful tool for extracting features of proteins, enabling the adoption of lightweight downstream models. Here, we present novel machine learning techniques for adapting protein language modeling to the prediction of protein interactions at scale, enabling de novo interaction network inference and large-scale drug compound screening. We show that these methods achieve state-of-the-art performance, and allow us to discover new biology and therapeutic candidates. In addition, we introduce methods for efficient training and adaptation of these models, and outline several applications which take advantage of the scale enabled by lightweight models. As a whole, this thesis demonstrates how computational advances in language modeling and the massive growth of data brought about by the sequencing revolution can be leveraged to tackle the genotype-to-phenotype challenge in biology, and lays the groundwork for more widespread adoption of these techniques for proteomic modeling.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156633</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Privacy via Disguising and Revealing</title>
<link>https://hdl.handle.net/1721.1/156632</link>
<description>Flexible Privacy via Disguising and Revealing
Tsai, Lillian
Users have tens to hundreds of accounts with web services that store sensitive data, from social media to tax preparation and e-commerce sites. While users have the right to delete their data (via e.g., the GDPR or CCPA), more nuanced data controls often don’t exist. For example, a user might wish to hide and protect their profiles on an e-commerce or dating app when inactive, and to recover their accounts should they return to the application. However, services often provide only coarse-grained tools that result in all-or-nothing exposure of users’ private data.&#13;
&#13;
This thesis introduces the notion of *disguised data*, a reversible state in which sensitive data is hidden. To demonstrate the feasibility of disguised data, this thesis also presents Edna—the first system for disguised data—which helps database-backed web applications provide new privacy features for users, such as removing their data without permanently losing their accounts, anonymizing their old data, and selectively dissociating personal data from public profiles. Edna helps developers support these features while maintaining application functionality and referential integrity in the database via *disguising* and *revealing* transformations. Disguising selectively renders user data inaccessible via encryption, and revealing restores their data to the application. Edna’s techniques allow transformations to compose in any order, e.g., deleting a previously anonymized account, or restoring an account back to an anonymized state.&#13;
&#13;
With Edna, web applications can enable flexible privacy features with reasonable developer effort and moderate performance impact on application operation throughput. In the Lobsters social media application—a 160k LoC web application with &gt;16k users—adding Edna and its features takes &lt;1k LoC, and decreases throughput 1–7% in the common case. Edna decreases throughput up to 28% when a heavy user who owns 1% of all application data continuously disguises and reveals their account.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156632</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design</title>
<link>https://hdl.handle.net/1721.1/156631</link>
<description>Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design
Wang, Hanrui
Quantum Computing (QC) has the potential to solve classically hard problems with greater speed and efficiency, and recent years have seen exciting advancements in this field. However, significant gaps remain between application requirements and the capabilities of current devices, particularly in terms of software framework support, efficiency, and reliability. To bridge these gaps and fully unleash the power of quantum computing, it is critical to perform AI-enhanced co-design across various technology stacks, from algorithm and program design to compilation and hardware architecture. In this thesis, we aim to develop architectural and system support for quantum computing. To address the software support gap, I will discuss two compilation frameworks—FPQAC and Q-Pilot—designed for the Field-Programmable Qubit Array (FPQA) implemented with emerging reconfigurable neutral atom arrays. This architecture leverages movable atoms for routing two-qubit gates, and we optimize atom movements and gate scheduling for high scalability and parallelism. To enhance reliability, I will introduce QuantumNAS and TorchQuantum, frameworks for quantum program structure (ansatz) design for variational quantum algorithms. QuantumNAS employs an intelligent search engine and utilizes noisy feedback from quantum devices to optimize program structure and qubit mapping tailored to specific hardware, leading to significant resource reduction and reliability improvements. Additionally, I will present QuantumNAT, QOC, and RobustState for noise-aware training of parameters in variational quantum algorithms to ensure high reliability. The DGR framework will also be discussed for addressing drifted and correlated errors in quantum error correction decoding. Furthermore, I will introduce QuEST, which leverages data-driven AI models to predict the reliability of arbitrary quantum circuits on real quantum hardware.
Finally, to close the efficiency gap, I will present SpAtten, an algorithm-architecture-circuit co-design aimed at efficient Transformer-based quantum error correction decoding, and the SpArch accelerator, designed for sparse tensor algebra to enable efficient generation of quantum control signals.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156631</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Fabrication and Assembly for In Situ Manufacturing</title>
<link>https://hdl.handle.net/1721.1/156630</link>
<description>Computational Fabrication and Assembly for In Situ Manufacturing
Nisser, Martin Eric William
Fabrication today relies on disparate, large machines spread across industrial facilities. These are operated by domain experts to construct and assemble artefacts in sequential steps from large numbers of parts. This traditional, centralized mass manufacturing paradigm is characterized by large capital costs and inflexibility to changing needs, complex global supply chains hinged on economic and political stability, and waste and over-manufacturing of uniform artefacts that fail to meet the technical and personal needs of today’s diverse individuals and use cases. Today, these challenges are particularly severe at points of need, such as the space environment. The space environment is remote and unpredictable, and the ability to manufacture in situ offers unique opportunities to address new challenges as they arise. However, the challenges faced in space are often mirrored on Earth. In hospitals, disaster zones, low resource environments and laboratories, the ability to manufacture customized artefacts at points of need can significantly enhance our ability to respond rapidly to unforeseen events. In this thesis, I introduce digital fabrication platforms with co-developed hardware and software that draw on tools from robotics and human-computer interaction to automate manufacturing of customized artefacts at the point of need. Highlighting three research themes across fabrication machines, modular assembly, and programmable materials, the thesis will cover a digital fabrication platform for producing functional robots, a modular robotic platform for in-space assembly deployed in microgravity, and a method for programming magnetic material to selectively assemble.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156630</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Sensing as a Utility using Swept-Source Raman Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/156629</link>
<description>Chemical Sensing as a Utility using Swept-Source Raman Spectroscopy
Persits, Nili
The integration of chemical sensing into everyday life is a decades-old dream that has so far failed to come to fruition. Many sensor technologies have been proposed and developed, but few can claim to be non-destructive, reagent-free, and suitable for multiple applications while also enabling significant scale-up and remaining cost-effective. &#13;
&#13;
This thesis proposes a utility service model for chemical sensing using Swept-Source Raman Spectroscopy (SSRS) that addresses these challenges. First, we introduce the SSRS fiber probe, which enables measuring Raman spectra with a single-point detector and only a few milliwatts of tunable laser excitation. We validate the probe design by monitoring nitrate fertilizer in a hydroponic setup, in environmental water samples, and in growing plants, with sensitivity and resolution equivalent to those of benchtop systems. We further demonstrate the scaling up of SSRS into a sensor network by leveraging readily available data communication optical fiber infrastructure. We showcase a 16-sensor network that uses the laser as a shared resource and develop an engineering-based cost model that supports scaling this network up to dozens of sensors deployed over kilometers. Lastly, we monitor metabolites in a therapeutic-producing cell culture, and use linear regression models and a priori information about our samples to reduce the spectral acquisition time, making this sensor architecture competitive in both performance and cost with existing solutions. These findings represent significant progress towards achieving ubiquitous chemical sensing and facilitating the integration of chemical sensors into everyday life.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156629</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Cryptographically Private and Verifiable Computation through Hardware-Software Co-Design</title>
<link>https://hdl.handle.net/1721.1/156628</link>
<description>Practical Cryptographically Private and Verifiable Computation through Hardware-Software Co-Design
Samardzic, Nikola
Fully Homomorphic Encryption (FHE) and Verifiable Computation (VC) enable offloading computation to untrusted servers with cryptographic privacy and integrity guarantees. Despite their attractive security properties, FHE and VC are not widely adopted because (1) they suffer prohibitive performance overheads, about 10,000× to 1,000,000× over unencrypted and unverified computation, respectively and (2) they are hard to use even for expert cryptographers: porting non-trivial applications takes experts months of manual work.&#13;
This thesis contributes hardware and software techniques to make FHE and VC practical. Specifically, we present a full hardware and software stack for FHE that addresses its performance and usability challenges, consisting of hardware accelerators that erase FHE’s overheads, a redesign of the state-of-the-art FHE scheme to make accelerators more efficient, and an FHE compiler that produces efficient programs from high-level code. We then leverage the commonalities between FHE and VC to design an accelerator that reduces VC overheads.&#13;
F1 and CraterLake are FHE accelerators that improve performance over state-of-the-art by 10,000×. F1 is the first programmable FHE accelerator, and erases most performance overheads for smaller FHE programs. CraterLake builds on F1, and is the first accelerator able to support arbitrarily large FHE programs effectively.&#13;
F1 and CraterLake’s speedups bring with them new bottlenecks, mainly arithmetic efficiency. We present BitPacker, a new implementation of an FHE scheme that keeps encrypted data packed in fixed-size words, enabling near-full arithmetic efficiency in accelerators. BitPacker is the first redesign of an FHE scheme that targets accelerators. On CraterLake, BitPacker improves performance by gmean 59% and up to 3×, and reduces energy by gmean 61%.&#13;
To make the performance we unleashed accessible to non-experts, we contribute Fhelipe, a compiler that abstracts away FHE’s implementation details and hides its complex and restrictive programming interface. Fhelipe translates high-level tensor programs into optimized FHE circuits that can then be executed on CraterLake or a CPU. Fhelipe produces compiled programs that match or exceed the performance of state-of-the-art manual implementations. It also outperforms prior FHE compilers by gmean 18.5× on a wide set of benchmarks.&#13;
While FHE provides data privacy, it does not provide integrity. NoCap is a hardware accelerator that enables practical integrity by speeding up verifiable computation by 40× over state-of-the-art accelerators and by 580× over CPU.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156628</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneous Integration of Spin-Photon Interfaces with a Scalable CMOS Platform</title>
<link>https://hdl.handle.net/1721.1/156627</link>
<description>Heterogeneous Integration of Spin-Photon Interfaces with a Scalable CMOS Platform
Li, Linsen
A central challenge in the development of long-range high-speed quantum networks and fault-tolerant quantum computing is the generation of large-scale entanglement of quantum systems. Color centers in diamond have emerged as a leading quantum information processing platform, satisfying the DiVincenzo criteria for quantum computing and recently enabling quantum advantage in communications. However, it is estimated that general-purpose quantum information processors will require millions to billions of high-quality physical qubits, motivating the need for hardware architectures that are highly scalable by leveraging modern semiconductor integrated systems.&#13;
&#13;
Here, we introduce a scalable quantum information processing hardware architecture in a proof of concept consisting of an addressable and tunable two-dimensional array of tin-vacancy centers, hybrid-integrated onto a foundry-process electronics control chip. We demonstrate the necessary components individually, including scalable, high-yield heterogeneous integration between diamond nanostructures and the foundry control chip; parallel control and measurement; tuning of quantum emitter emission wavelength as well as lifetime; and coherent light correlation with quantum emitters, as a proof of concept for a scalable architecture capable of hosting thousands to millions of qubits. Beyond the experimental demonstration of the architecture, the thesis includes free-space spin-photon interface design, quantum emitter strain engineering, scalable high-quality fabrication technology, theoretical analysis of the general architecture, and AI-assisted quantum resource scheduling, providing a deep discussion of the essential components of the system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156627</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifaceted Understanding of Accreting Neutron Stars and Their Environments</title>
<link>https://hdl.handle.net/1721.1/156625</link>
<description>Multifaceted Understanding of Accreting Neutron Stars and Their Environments
Ng, Wei Chieh (Mason)
Accreting neutron stars are cosmic laboratories featuring some of the most extreme processes in the universe, hosting an accretion disk that supplies material magnetically channeled onto the poles of the neutron star. The emission from these accreting neutron stars peaks in the X-rays, owing to the loss of gravitational potential energy as the accreting material falls into the deep gravitational well of the neutron star. The community has been utilizing X-ray timing and spectroscopy for decades to unravel the mysteries of these objects, with X-ray polarimetry being a recent development that provides two additional observables.&#13;
&#13;
In my thesis, I showcase a multifaceted approach to studying accreting neutron star binaries, employing X-ray timing, spectroscopy, and polarimetry with many X-ray instruments to advance our understanding of the dynamics and evolution of these systems. I have also developed an end-to-end pulsation pipeline tool designed for rapid characterization of new X-ray transients, particularly neutron stars. In the analyses undertaken as part of my thesis, I have incorporated multiple techniques and instruments to develop a comprehensive understanding of the phenomenology of many neutron star systems, such as accreting millisecond X-ray pulsars, ultraluminous X-ray pulsars, ultracompact X-ray binaries, and Z/atoll-state sources. It is through this multifaceted application that we can reveal a holistic description of neutron star binaries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156625</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral-Timing Observations of Disk-Jet Coupling in Black Hole X-ray Binaries</title>
<link>https://hdl.handle.net/1721.1/156623</link>
<description>Spectral-Timing Observations of Disk-Jet Coupling in Black Hole X-ray Binaries
Wang, Jingyi
Accreting black holes are a fundamental tool for understanding accretion and ejection physics, and are ideal laboratories to ultimately test Einstein's general relativity (GR) in the strongest gravity regime in the Universe. High-fidelity GR tests require a precise knowledge of the physical environments in which particles move. The two biggest challenges are how close to the event horizon inspiraling gas reaches, and how relativistic jets are launched. The puzzle piece linking these two challenges together is the nature and geometry of the hot (hundreds of keV) X-ray emitting plasma called the “corona”. X-ray reverberation mapping, where X-rays produced close to the black hole reverberate off inspiraling gas, allows us to map out scales close to the event horizon -- orders of magnitude better than the resolution of our telescopes. Black hole X-ray binaries (BHXBs) are binary systems with a stellar-mass black hole and a companion star. They are usually transients, cycling through phases of quiescence and outburst in which they exhibit different accretion states with distinct spectral-timing features, allowing us to study accretion-ejection physics, or disk-jet coupling, in a single source on a human timescale. In MAXI J1820+070, I discover that the soft reverberation lag becomes longer during the hard-to-soft state transition, several days before the transient radio jet is observed. Together with the discovery that the reverberation lag gets shorter in the hard state while the compact jet becomes weaker, this result suggests a close relationship between the X-ray corona and the radio jet. The corona might be the base of the jet that expands and/or gets ejected during the state transition.
In the "NICER reverberation machine", I expand the sample size of BHXBs where reverberation is detected from 3 to 11, and find that the evolution of the reverberation lag in the hard and intermediate states is a generic feature of BHXBs that should be explained by state transition models. I explore simultaneous modeling of the flux-energy spectrum and cross spectra, and present a proof of concept for applying machine learning to fitting the cross spectra. I also study the BHXB IGR J17091--3624, which exhibits “heartbeat”-like variability in its 2022 outburst, and find that the source began in traditional hard and intermediate states and transitioned into an exotic soft state. I also discover one of the most coherent quasi-periodic oscillations, and find an interplay between the heartbeats and iron emission/absorption lines. These results lead to new insights into the physical nature of exotic variabilities and accretion disk instability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156623</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches that Extend Healthcare: Algorithms &amp; Applications</title>
<link>https://hdl.handle.net/1721.1/156620</link>
<description>Machine Learning Approaches that Extend Healthcare: Algorithms &amp; Applications
Yang, Yuzhe
Modern clinical systems frequently exhibit sporadic patient visits, delayed diagnoses, and unequal care distribution among diverse populations. Often, diseases aren’t identified until they reach advanced stages. The scarcity of specialists and disparities in healthcare access further complicate long-term monitoring, timely intervention, and unbiased assessment. This thesis addresses the above challenges by developing artificial intelligence (AI) and machine learning (ML) algorithms and building practical systems that use these algorithms to solve key problems in healthcare and medicine.&#13;
&#13;
Specifically, on the algorithms front, the thesis introduces principled ML approaches to achieve fair, unbiased, and generalizable AI models, addressing core challenges in real-world medical data which encompass four main axes:&#13;
• Label Scarcity: The thesis presents a novel self-supervised learning scheme that learns periodic and frequency information in data without labels, enabling representation learning for periodic tasks like vital signs estimation with minimal labeling efforts.&#13;
• Data Imbalance: The thesis develops new ML algorithms to address data imbalance in regression, filling the gap in techniques for practical imbalanced regression problems.&#13;
• Domain Generalization: The thesis presents theoretically grounded learning methods that ensure generalization across imbalanced domains and unseen environments.&#13;
• Subpopulation Shifts: The thesis studies learning in the presence of underrepresented subgroups, providing actionable insights for model deployment in real-world settings.&#13;
&#13;
On the applications front, the thesis develops new AI-driven biomarkers and systems for human disease and medicine leveraging the proposed algorithms, enabling discovery and advancing delivery and equity in healthcare:&#13;
• Early Diagnosis Biomarker for Parkinson’s: The thesis presents an AI-based biomarker for Parkinson’s disease that enables early detection years before standard clinical diagnosis, as well as longitudinal progression tracking using nocturnal breathing signals.&#13;
• In-Home Touchless Monitoring of Sleep Posture: The thesis designs novel AI systems for continuous and contactless sleep posture monitoring overnight in the user’s own home using wireless signals.&#13;
• Equitable Medical AI Deployments In The Wild: The thesis establishes best practices for medical imaging AI models that maintain their performance and fairness in deployments beyond their initial training contexts, across diverse populations and unseen sites.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156620</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-Efficient Hardware Architectures for Enhanced Secure Communication Systems</title>
<link>https://hdl.handle.net/1721.1/156619</link>
<description>Energy-Efficient Hardware Architectures for Enhanced Secure Communication Systems
Woo, Jongchan
In the era of digital transformation, the expansion of the Internet of Things (IoT) has been pivotal in driving innovations across various sectors. However, this expansion also brings forth heightened security risks, particularly in the communication between billions of connected devices. This thesis presents significant advancements in secure and reliable communication systems, crucial for addressing these risks within IoT infrastructures. It explores the development and integration of cryptographic solutions designed to enhance both the energy efficiency and reliability of communications. Central to this work is the CERMET framework, which integrates energy-efficient cryptographic techniques with both symmetric (AES) and asymmetric (ECC) encryption methodologies. This framework significantly reduces the energy demands of cryptographic operations, crucial in energy-constrained environments. Additionally, this research repurposes the padding bits of AES to improve error correction capabilities, thereby enhancing the reliability of data transmission across noisy channels. Together with the application of the Guessing Random Additive Noise Decoding (GRAND) decoder, these technologies are unified into a comprehensive system that assures robust security and data integrity. This work not only addresses the critical needs for energy efficiency in IoT but also sets a new benchmark for the security and robustness of communication systems, facilitating a scalable and adaptable solution for various IoT applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156619</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Sepsis Prognosis: Prediction Models and Dissecting Electronic Health Records</title>
<link>https://hdl.handle.net/1721.1/156618</link>
<description>Machine Learning for Sepsis Prognosis: Prediction Models and Dissecting Electronic Health Records
Liao, Wei
Sepsis is the body's extreme response to an infection. It is a life-threatening medical emergency. Given the heavy burden sepsis has posed on the health care system, extensive research in the area has been performed to facilitate sepsis diagnosis. Sepsis prognosis can support the assessment of the likely progression of the disease and thus inform treatment decisions, but it is much less explored. Here I present two approaches to build sepsis prognosis models. First, I introduced the idea of assessing neutrophil function from simple-to-obtain phase microscopy images. I developed an experimental pipeline using measurement of reactive oxygen species generation as a label of neutrophil function. I generated a large neutrophil imaging dataset and explored different deep learning approaches to predict neutrophil activation state. Second, I developed machine learning models to predict sepsis patients' future clinical scores using electronic health records. As part of this effort, I developed a multi-database extraction pipeline to facilitate the electronic health records extraction process. My work demonstrates the potential of using deep learning models to evaluate functional aspects of the immune system and to predict sepsis patients' future states, which could provide significant insight into sepsis prognostic monitoring and is easy to adapt in clinical settings. It is of great significance to understand the input data when developing reliable and generalizable machine learning models for healthcare. It is also increasingly apparent that machine learning models for healthcare can predict patient-sensitive information from data that does not explicitly encode it. However, we lack a clear understanding of the extent of the problem: what types of sensitive information can be predicted, and how this generalizes to different models or datasets. We lack approaches to develop models that can make clinical inferences but do not infer sensitive information.
Critically, we also lack approaches to explain such data encoding. Using electronic health records, I thoroughly investigated the ability of machine learning models to encode a wide range of patient-sensitive information. I developed a strategy to ensure that clinical prediction is minimally based on patient-sensitive information. I presented an approach that can explain feature importance in patient-sensitive information encoding. This set of studies not only allows us to gain a deep understanding of the sepsis patient clinical score prediction model but is also applicable to a variety of machine learning models utilizing time-series data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156618</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cluster Analysis in High Dimensions: Robustness, Privacy, and Beyond</title>
<link>https://hdl.handle.net/1721.1/156617</link>
<description>Cluster Analysis in High Dimensions: Robustness, Privacy, and Beyond
Narayanan, Shyam
Cluster analysis focuses on understanding the cluster structure of data, and is perhaps one of the most important subfields in high-dimensional data analysis. Traditionally, cluster analysis focuses on partitioning data into closely related groups, such as in k-means clustering and learning mixture models. However, one sometimes overlooked part of cluster analysis is analyzing data from a single cluster: this encompasses problems such as mean estimation and covariance estimation, which correspond to learning the location and shape of a cluster, respectively. In this thesis, we study various classic problems in high-dimensional cluster analysis, relating to both identifying several clusters and learning a single cluster. We provide improved algorithms and lower bounds for problems including k-means and k-median clustering, Gaussian mean and covariance estimation, high-dimensional mean testing, and learning mixtures of Gaussians. Importantly, in this thesis we also focus on the socially motivated constraints of robustness, privacy, and explainability, and how they affect the complexity of these problems. In our quest to understand cluster analysis under such socially motivated constraints, we discover the first black-box transformation from robustness to privacy, as well as the first-known statistical separation between some natural models of robust statistics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156617</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and Robotic Path Planning over High Dimensional Categorical Observations</title>
<link>https://hdl.handle.net/1721.1/156616</link>
<description>Inference and Robotic Path Planning over High Dimensional Categorical Observations
San Soucie, John Edward
Advances in marine autonomy, deep learning, and in-situ marine sensing technology have enabled oceanographers to collect vast amounts of spatiotemporally distributed, sparse, high-dimensional categorical data. Statistical models, particularly in streaming and computationally constrained settings, have lagged behind data collection. Recent developments in topic modeling for robotics have highlighted the potential to efficiently extract meaningful relationships from categorical data and adjust robotic path planning based on real-time inference. This dissertation seeks to fill the gap in streaming statistical models for sparse, high-dimensional categorical data, in the context of open-ocean phytoplankton community ecology. We begin by exploring the use of existing topic modeling approaches for plankton community characterization. Topic models are compared to standard ecological techniques for dimensionality reduction. The increased fidelity and expressiveness of the topic modeling approach allows for greater resolution of plankton co-occurrence relationships. By analyzing these relationships and ocean physics in and around a retentive eddy, the source of phytoplankton variability is traced to storm-driven advection on the ocean surface. We conclude that topic models offer unique insights into the causal mechanisms underlying plankton community variability. Next, we turn our focus to the development of a streaming belief model for categorical path planning. Such a model must be capable of predicting in regions without data, and it must be able to process streaming data in a computationally efficient manner. We introduce the Gaussian Dirichlet Random Field model, a novel topic model with spatially continuous latent log-probabilities. In addition to producing a more accurate model than the state of the art in locations with data, the Gaussian Dirichlet Random Field model can interpolate and extrapolate.
The model is initially presented with a batch hybrid Markov chain Monte Carlo inference procedure. We then develop a streaming, fully variational inference approach, called Streaming Gaussian Dirichlet Random Fields, which satisfies both the prediction and efficiency requirements for path-planning belief models. In-silico experiments demonstrate the ability of this model to accurately map latent co-occurrence patterns. Comparisons to a standard Gaussian process on both path-planning and observation-mapping tasks show how the ability of Streaming Gaussian Dirichlet Random Fields to leverage additional categorical observations enables superior performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156616</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Deep Learning with Sparsity: Algorithms, Systems, and Applications</title>
<link>https://hdl.handle.net/1721.1/156615</link>
<description>Efficient Deep Learning with Sparsity: Algorithms, Systems, and Applications
Liu, Zhijian
Deep learning has been used across a broad spectrum of applications, including computer vision, natural language processing, and scientific discovery. However, behind its remarkable performance lies an increasing gap between the demand for and supply of computation. On the demand side, the computational costs of deep neural networks have surged dramatically, driven by ever-larger input and model sizes. On the supply side, as Moore's Law slows down, hardware no longer delivers increasing performance within the same power budget.&#13;
&#13;
In this dissertation, we present our solutions across the algorithm, system, and application stacks to address the demand-supply gap through the lens of sparsity. In Part I, we first develop algorithms, SparseViT and SparseRefine, which identify sparsity within dense input data. We then introduce new sparse primitives, PVCNN and FlatFormer, to efficiently process inputs with sparsity. In Part II, we introduce system libraries, TorchSparse, to optimize existing sparse primitives and effectively translate theoretical savings from sparsity into practical speedups on hardware. In Part III, we apply sparsity to accelerate a wide range of computation-intensive AI applications, such as autonomous driving and language modeling. We conclude this dissertation with a vision towards building more efficient and accessible AI.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156615</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Representation Learning: On Data-scarcity, Uncertainty and Symmetry</title>
<link>https://hdl.handle.net/1721.1/156614</link>
<description>Scalable Representation Learning: On Data-scarcity, Uncertainty and Symmetry
Loh, Charlotte Chang Le
Deep learning has experienced remarkable success in recent years, leading to significant advancements in various fields such as vision, natural language generation, and complex game play, as well as solving difficult scientific problems such as predicting protein folding. Despite these successes, traditional deep learning faces fundamental challenges that limit its scalability and effectiveness. These challenges include the necessity for extensive labeled datasets, the lack of trustworthiness due to model overconfidence, and difficulties in generalizing to new, unseen data. In this thesis, our primary goal is to tackle these issues by introducing novel tools and methods that augment traditional deep learning. We explore various strategies for addressing the main bottlenecks of traditional deep learning, which include incorporating prior known symmetries and inductive biases of the problem, utilizing Bayesian and ensemble methods, and leveraging the abundance of unlabeled data in a representation learning framework. We discuss and demonstrate practical applications of these novel tools in diverse domains including vision, photonics, materials science, and neuroscience.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156614</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Physics-Inspired Generative Models</title>
<link>https://hdl.handle.net/1721.1/156612</link>
<description>On Physics-Inspired Generative Models
Xu, Yilun
Physics-inspired generative models such as diffusion models constitute a powerful family of generative models. The advantages of models in this family come from a relatively stable training process and high capacity. Nevertheless, a number of improvements remain possible. In this thesis, we first delve into improved techniques for training and sampling in diffusion models. The training objectives of diffusion models exhibit high variance when the data distribution is multi-modal. To mitigate this, we propose a training objective that generalizes conventional denoising score-matching and significantly reduces variance in training targets. Alternatively, we introduce a training framework that integrates learnable discrete latents into continuous diffusion models. These latents simplify the learning of diffusion models’ complex noise-to-data mapping. On the other hand, the sampling process of diffusion models generally involves solving differential equations. To expedite the sampling process, we propose a new sampling algorithm that combines the best of previous ODE and SDE samplers, greatly boosting the performance of pre-trained diffusion models. Additionally, our research explores methods to promote diversity in finite samples by introducing mutual repulsion forces in the generative process. In the realm of physics-inspired generative models, many physical processes could be used to develop generative models. We introduce a new family of generative models arising from electrostatic theory, termed Poisson Flow Generative Models (PFGM). PFGM rivals leading diffusion models while showcasing improved sampling robustness. The extended version, PFGM++, places diffusion models and PFGM under the same framework and introduces new, better models. We further present a principled approach to convert physical processes into generative models.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156612</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Learning Control via Structural Policy Priors, Latent World Models and Hierarchical Abstraction</title>
<link>https://hdl.handle.net/1721.1/156611</link>
<description>Efficient Learning Control via Structural Policy Priors, Latent World Models and Hierarchical Abstraction
Seyde, Tim N.
Learning control will enable the deployment of autonomous robots in unstructured real-world settings. Solving the associated complex decision processes under real-time constraints will require intuition, guiding current actions by prior experience to anticipate long-horizon environment interactions and integrating with optimal control to ground action selection in short-horizon system constraints. Ensuring tractability of the underlying learning process is conditional upon maximizing task-aligned information extracted from environment interactions while minimizing the required guidance via human interventions. In this thesis, we develop novel learning control algorithms that enable efficient acquisition of complex behaviors while limiting prior knowledge, direct human supervision, and computational requirements. Our study focuses on learning from interaction through reinforcement learning, combining insights from model-free, model-based, and hierarchical techniques. We design decoupled discrete policy structures to yield memory-efficient agent representations. Our study demonstrates the competitive performance of critic-only agents on continuous control tasks, highlighting accelerated information propagation and exploration benefits. We further leverage hierarchical abstraction over diverse behavior components to enable time-efficient optimization. Our methods jointly learn heterogeneous low-level controller parameterizations via mixture policies for single-agent control while decoupling multi-timescale strategic from reactive reasoning in the context of multi-agent team coordination. We lastly build latent world models for multi-step reasoning and sample efficient interaction selection. Our work employs uncertainty over expected long-term returns for targeted deep exploration and constructs multi-agent interaction models to accelerate competitive behavior learning via self-play in imagination. 
In sum, this thesis develops scalable and efficient robot learning algorithms by addressing representational challenges across layers of abstractions, providing agents with an intrinsic ability to set implicit exploration goals under high-level guidance, and facilitating information propagation in limited data regimes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156611</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motion-robust Machine Learning Methods for Region-of-Interest Tracking and Selective Magnetic Resonance Imaging with External Shim Arrays</title>
<link>https://hdl.handle.net/1721.1/156608</link>
<description>Motion-robust Machine Learning Methods for Region-of-Interest Tracking and Selective Magnetic Resonance Imaging with External Shim Arrays
Zhang, Molin
Fetal motion during imaging presents a significant challenge, resulting in image artifacts and limiting the diagnostic information that can be obtained. Despite the adoption of fast single-shot MRI techniques, capable of acquiring images in less than a second per slice, fetal motion remains problematic, leading to noticeable artifacts between slices. These artifacts, which degrade image quality and impede accurate diagnosis, underscore the vital necessity of robust motion correction techniques in fetal MRI. This thesis presents a novel pipeline aimed at improving the robustness of fetal MRI against fetal motion. Central to this pipeline is the objective of achieving spatially selective Magnetic Resonance Imaging (MRI), focusing exclusively on the region of interest (ROI). It is crucial to emphasize that while the impetus for this thesis stems from fetal motion issues, the techniques developed herein have broader applications beyond this specific domain. The pipeline comprises three interconnected components, each addressed by a novel technique: fetal pose estimation and data augmentation with a diffusion model, a general optimization framework for selective imaging with time-varying shim array fields, and a self-supervised reconstruction method for highly under-sampled temporally related imaging. The proposed pipeline enhances robustness to fetal motion by shortening the acquisition time.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156608</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming with Neural Surrogates of Programs</title>
<link>https://hdl.handle.net/1721.1/156602</link>
<description>Programming with Neural Surrogates of Programs
Renda, Alex
Surrogate programming, the act of replacing programs with surrogate models of their behavior, is being increasingly leveraged to solve software development challenges. Surrogates are typically machine learning models trained on input-output examples of the program under consideration. With surrogate compilation, programmers train a surrogate that replicates the behavior of the original program to deploy to end-users in its place, with the goal of improving performance. With surrogate adaptation, programmers first train a surrogate of a program, then continue to train the surrogate on a downstream task, with the goal of improving the accuracy of the surrogate on the task. With surrogate optimization, programmers train a surrogate of a program, then use the surrogate to optimize the program's inputs, with the goal of optimizing inputs more efficiently than with the original program. These emerging design patterns represent an important new frontier of software development. However, we lack a coherent understanding of the applications and methodology underlying surrogate programming.&#13;
&#13;
In this thesis I investigate three hypotheses about surrogate programming: that surrogate programming can be used to achieve state-of-the-art results on large-scale programming tasks; that there is a small set of methodologically distinct design patterns that can be grouped into a single programming methodology, unifying existing uses of surrogates in the literature; and that we can guide surrogate design using facts derived from the modeled program to train surrogates more efficiently and achieve better performance on downstream tasks.&#13;
&#13;
To argue these hypotheses, I present four sets of contributions. I first present DiffTune, a surrogate optimization based approach to tuning the parameters of a large-scale CPU simulator. I next generalize this approach to identify the three design patterns above, and lay out the common methodology underlying all design patterns. I then present Turaco, a program analysis which allows developers to reason about the training data distribution to use to train a surrogate of a given program. I conclude with Renamer, a neural network architecture which mirrors source programs' invariance over variable renaming in their inputs.&#13;
&#13;
Surrogate programming has the potential to change how developers program large-scale computer systems, by abstracting away much of the complexity to machine learning algorithms. Together, the contributions in my thesis lay the groundwork for a principled understanding of the applications and methodology of surrogate programming.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156602</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphs of Convex Sets with Applications to Optimal Control and Motion Planning</title>
<link>https://hdl.handle.net/1721.1/156598</link>
<description>Graphs of Convex Sets with Applications to Optimal Control and Motion Planning
Marcucci, Tobia
This thesis introduces a new class of problems at the interface of combinatorial and convex optimization. We consider graphs where each vertex is paired with a convex program, and each edge couples two programs through additional convex costs and constraints. We call such a graph a Graph of Convex Sets (GCS). Over a GCS we can formulate any optimization problem that we can formulate over an ordinary weighted graph, with scalar costs on the vertices and edges. In fact, for any fixed choice of the variables in the convex programs, a GCS reduces to a weighted graph where we can seek, e.g., a path, a matching, a tour, or a spanning tree of minimum cost. The challenge in a GCS problem lies in solving the discrete and the continuous components of the problem jointly. By combining the modelling power of graphs and convex optimization, GCSs are a flexible framework to formulate and solve many real-world problems. The graph and the combinatorial goal (e.g., finding a path or a tour) model the high-level discrete skeleton of a problem. The convex costs and constraints fill in the low-level continuous details. The primary contribution of this thesis is an efficient and unified method for solving any GCS problem. Starting from an integer-linear-programming formulation of an optimization problem over a weighted graph, this method formulates the corresponding GCS problem as an efficient Mixed-Integer Convex Program (MICP). This MICP can then be solved to global optimality using common branch-and-bound solvers, or approximately by rounding the solution of its convex relaxation. Importantly, both the formulation of the MICP and its solution are fully automatic, and a user of our framework does not need any expertise in mixed-integer optimization. We first describe the GCS framework and the formulation of our MICP in general terms, without presupposing the specific combinatorial problem to be solved over the GCS. 
We illustrate our techniques through multiple examples spanning logistics, transportation, scheduling, navigation, and computational geometry. Then we focus on the Shortest-Path Problem (SPP) in GCS. This problem is particularly interesting since it generalizes a wide variety of multi-stage decision-making problems and, using our techniques, it can be solved very effectively. We consider two main applications of the SPP in GCS: optimal control of dynamical systems and collision-free motion planning. In these two areas, our techniques either generalize or significantly improve upon algorithms and optimization methods that have been developed for decades and are widely used in academia and industry. Lastly, the techniques introduced in this thesis are implemented in the software packages Drake and gcspy. The former is a large and mature robotics software package that is open source and widely used by the community. The latter is a very simple and lightweight Python package that is also open source. In this thesis, we illustrate the usage of gcspy through multiple basic examples.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156598</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Faster Methods in Bayesian Unsupervised Learning</title>
<link>https://hdl.handle.net/1721.1/156597</link>
<description>Toward Faster Methods in Bayesian Unsupervised Learning
Nguyen, Tin D.
Many data analyses can be seen as discovering a latent set of traits in a population. For example, what are the themes, or topics, behind Wikipedia documents? To encode structural information in these unsupervised learning problems, such as the hierarchy among words, documents, and latent topics, one can use Bayesian probabilistic models. The application of Bayesian unsupervised learning faces three computational challenges. Firstly, existing works aim to speed up Bayesian inference via parallelism, but these methods struggle in Bayesian unsupervised learning due to the so-called “label-switching problem”. Secondly, in Bayesian nonparametrics for unsupervised learning, computers cannot learn the distribution over the countable infinity of random variables posited by the model in finite time. Finally, to assess the generalizability of Bayesian conclusions, we might want to detect the posterior’s sensitivity to the removal of a very small amount of data, but checking this sensitivity directly takes an intractably long time. My thesis addresses the first two computational challenges and establishes a first step in tackling the last one. I utilize a known representation of the probabilistic model to evade the label-switching problem: when parallel processors are available, I derive fast estimates of Bayesian posteriors in unsupervised learning. Generalizing existing works and providing more guidance, I derive accurate and easy-to-use finite approximations for infinite-dimensional priors. Lastly, I assess generalizability in supervised Bayesian models, which can be seen as a precursor to the models used in Bayesian unsupervised learning. In supervised models, I develop and test a computationally efficient tool to detect sensitivity to data removals for analyses based on MCMC.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156597</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strongly Correlated Electronic Matter: Phases and their Transport</title>
<link>https://hdl.handle.net/1721.1/156596</link>
<description>Strongly Correlated Electronic Matter: Phases and their Transport
Musser, Seth W.
This thesis surveys the research in strongly correlated matter that I have done over the course of my PhD. It can broadly be divided into two categories: phases of matter and their transport. My research into phases of matter has been concerned with the low-energy properties of interacting electrons at fractional filling of an underlying lattice. I, along with collaborators, have shown that it is possible to have a continuous quantum phase transition between a metal and a generalized Wigner crystal that breaks the translation symmetry of the lattice. In a subsequent work we were able to support this with an exact bosonization treatment in the quasi-one-dimensional setting. Finally, we have recently considered the possibility of an intervening topological phase where a gap opens, but translation symmetry remains unbroken. In doing so we formalized the concept of a “minimal” topological order and proved a number of results about such orders. My research into transport in strongly correlated systems involved proposing an alternative explanation for magnetoresistance curves in a cuprate metal, and proposing a diagnostic of Hall viscosity in rotating Bose-Einstein condensates using vortex dynamics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156596</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of Supermassive Black Hole Feedback in Galaxy Clusters</title>
<link>https://hdl.handle.net/1721.1/156595</link>
<description>Evolution of Supermassive Black Hole Feedback in Galaxy Clusters
Calzadilla, Michael Samir
Galaxy clusters, the largest gravitationally bound structures in the Universe, are superb laboratories for studying the baryon cycle that governs the evolution of all galaxies. The outer boundary of a galaxy’s circumgalactic medium (CGM) should be the most sensitive probe of inflowing and outflowing material, but it is difficult to observe. Only in galaxy clusters, where the CGM gets hot enough to glow in X-rays, can the entire baryon cycle be observed: from the rapid cooling of the intracluster medium (ICM) that contributes to the CGM and the level of star formation in the central galaxy, to the eventual feeding and triggering of feedback from the largest supermassive black holes (SMBHs) residing in them. This cooling and the subsequent feedback from these active galactic nuclei (AGN) are the primary drivers of the baryon cycle and the evolution of the largest, brightest cluster galaxies (BCGs), quenching their expected levels of cooling and star formation by up to two orders of magnitude. In this thesis, I investigate how the AGN feedback cycle behaved in the early universe, where studies of this kind have only recently become possible. Using multiwavelength observations from radio to optical and X-ray, I study a unique, representative sample of galaxy clusters and their central galaxies and SMBHs spanning almost 10 Gyr of cosmic evolution. By studying how the largest galaxies and black holes in the universe co-evolve in extreme and chaotic environments, we can gain a great deal of insight into how the universe we see around us today came to be.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156595</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent Quantum Phenomena in Magic-Angle Twisted Graphene Superlattices</title>
<link>https://hdl.handle.net/1721.1/156594</link>
<description>Emergent Quantum Phenomena in Magic-Angle Twisted Graphene Superlattices
Park, Jeong Min
Strongly correlated electron systems have attracted considerable interest due to their ability to host a wealth of emergent quantum phenomena. Recently, moiré engineering, also known as twistronics, has emerged as a new approach for creating two-dimensional correlated materials. In this thesis, I designed and studied novel moiré quantum matter based on twisted graphene superlattices. Starting with the discovery of magic-angle twisted trilayer graphene, I established a family of highly tunable materials that display unconventional correlated and superconducting phases. The superconductivity observed in this magic family demonstrates strong coupling and significant violation of the Pauli limit. By integrating transport and thermodynamic measurements, I uncovered the electronic structures behind the correlated phases and the spontaneous breaking of flavor symmetry. Additionally, by merging correlation and topology in magic-angle graphene, a fractional Chern insulator phase was realized at low magnetic fields. These results provide a robust and versatile platform for investigating emergent phenomena in two dimensions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156594</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional quantum algorithms: a mélange of methods for matrix functions</title>
<link>https://hdl.handle.net/1721.1/156593</link>
<description>Functional quantum algorithms: a mélange of methods for matrix functions
Rossi, Zane Marius
The study of quantum algorithms is stymied by a lack of human intuition—many of these algorithms appear to rely on non-intuitive attributes unique to quantum mechanics, and as such 'good' quantum algorithms are often sporadic, non-intuitive, and require bespoke analysis. The quantum algorithmist is up against a triple headwind: they must (1) be delusion-hardened against non-generalizing classical heuristics, (2) have understanding of disparate classical algorithms with which to compare their work, and (3) do this all largely without access to the high-level programming abstractions ubiquitous in classical computer science for over seventy years.&#13;
&#13;
A partial remedy for these problems has emerged with the development of a new class of quantum algorithms, quantum signal processing (QSP) and quantum singular value transformation (QSVT), which have had success in unifying, simplifying, and improving most known quantum algorithms. QSP/QSVT transform the spectrum of linear operators encoded in unitary processes by near arbitrary continuous functions, and this simple ability—computing matrix functions quantum mechanically—has been shown to subsume diverse tasks with comparatively simple complexity analysis.&#13;
&#13;
This thesis argues, through a series of supporting constructions, that QSP and QSVT should not be viewed solely as subroutines for transforming linear systems, but as limited examples among an extensive class of quantum algorithms converting algorithmic problems to simpler algebraic ones. We construct an array of algorithms in this class, which we call functional quantum algorithms, and show that they can, and ought to, be manipulated and combined purely at the level of this algebraic reduction to constitute useful, composite quantum algorithms.&#13;
&#13;
We emphasize three constructions (among a collection of auxiliary results), ordered by complexity: (a) a limited extension of QSP/QSVT-like circuit ansätze to the multivariable matrix function setting, (b) a construction of recursively composable univariate QSP/QSVT-like subroutines, and (c) a construction of modular quantum subroutines (gadgets) that can approximate generic multivariable continuous matrix functions. We provide necessary and sufficient conditions under which these algorithms can be analyzed and combined functionally, i.e., purely at the level of the scalar transformations applied, and show that establishing these assertions requires substantial technical work. Given our constructions' violation of basic assumptions of standard QSP and QSVT, we necessarily provide alternative proof techniques and quantum subroutines of independent interest. Finally, we also situate functional quantum algorithms among existing constructions in classical functional programming, identifying them as instances of monads, suggesting concrete directions for high-level, flexible quantum algorithmic design and analysis.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156593</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illuminating the Nature of Dark Matter through Observation, Simulation and Machine Learning</title>
<link>https://hdl.handle.net/1721.1/156592</link>
<description>Illuminating the Nature of Dark Matter through Observation, Simulation and Machine Learning
Sun, Yitian
Dark matter constitutes 85% of the matter content in the universe, yet its microscopic nature remains elusive. Discovering the nature of dark matter will not only greatly further our understanding of the universe but will almost certainly shed light on what lies beyond the Standard Model of particle physics. In this thesis, I discuss the progress I have made in the hunt for dark matter by proposing direct observation strategies in the present-day universe, building simulations for dark matter energy injection imprints in the early universe, and using Machine Learning to address the unique challenges in both of these tasks. Specifically, I explore using the echoes of astrophysical radio sources to probe axion dark matter, self-consistently simulating dark matter energy injection in the era of reionization, employing simple Neural Networks to improve these early universe simulations, and utilizing Machine Learning-powered inference techniques to tackle the problem of the Galactic Center γ-ray Excess.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156592</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural MMO: Massively Multiagent Simulation and Learning</title>
<link>https://hdl.handle.net/1721.1/156586</link>
<description>Neural MMO: Massively Multiagent Simulation and Learning
Suarez, Joseph
Neural MMO is a massively multi-agent environment for reinforcement learning research. It is designed to push the boundaries of environment complexity while maintaining computational efficiency for academic research. Agents in Neural MMO can forage for a variety of resources, engage in strategic combat with each other, defeat scripted enemies for loot, level up various interdependent professions, acquire tools, weapons, equipment, etc., and exchange items on a global market. Neural MMO was among the first many-agent simulators for reinforcement learning research, and it is still unique among environments today. To my knowledge, no other project provides large agent populations, high per-agent complexity, and efficient simulation at once. These properties make Neural MMO a suitable environment for a variety of research topics in multi-agent learning that would be difficult to explore without such a simulator. The environment can process 128 agents at up to 25x real-time on a single CPU core, totaling 3,000 agent-steps per second. This speed is owed to simulation techniques borrowed from the games industry. In the course of developing Neural MMO, I made several adaptations for the specifics of reinforcement learning, such as the two-layer structure of Neural MMO's observations and actions and the efficient internal data representation. The contributions of this work include these adapted methods as general-purpose tools for designing RL environments. Through my own experiments and from the results of a series of competitions that I hosted on Neural MMO, we have seen agents capable of long-term coherent strategies, multi-tasking across various objectives, and conditioning on specific goals. The largest discovery of this project has been the extent to which standard reinforcement learning methods with limited compute are able to solve complex tasks.
Neural MMO is free and open-source software under the MIT license with comprehensive documentation at neuralmmo.github.io and a 1000+ member community Discord.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156586</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Methodologies for Optimizing Over Probability Distributions</title>
<link>https://hdl.handle.net/1721.1/156585</link>
<description>Scalable Methodologies for Optimizing Over Probability Distributions
Li, Lingxiao
Modern machine learning applications, such as generative modeling and probabilistic inference, demand a new generation of methodologies for optimizing over the space of probability distributions, where the optimization variable represents a weighted population of potentially infinitely many points. Despite the ubiquity of these distributional optimization problems, there has been a shortage of scalable methods grounded in mathematical principles. To bridge this gap, this thesis introduces two complementary lines of work for scalable distributional optimization. The first part of this thesis focuses on optimizing over discrete distributions to generate high-quality samples for probabilistic inference. We present two works that tackle sampling by optimizing pairwise interaction energies defined on a collection of particles. The first work focuses on designing a new family of mollified interaction energies over moving particles, offering a unified framework for constrained and unconstrained sampling. The second work focuses on the scalable optimization of a family of popular interaction energies—maximum mean discrepancy of mean-zero kernels—to generate high-quality coresets from millions of biased samples, obtaining better-than-i.i.d. unbiased coresets. The second part transitions to optimizing over continuous distributions through neural network parameterization, enabling the generation of endless streams of samples once optimized. 
We exploit convexity principles to identify suitable mathematical formulations and scalable optimization algorithms in three contexts: 1) averaging distributions in a geometrically meaningful manner using a regularized Wasserstein barycenter dual formulation; 2) identifying local minima of non-convex optimization as a generative model by learning proximal operators with global convergence guarantees; and 3) solving mass-conserving differential equations of probability flows without temporal or spatial discretization by leveraging the self-consistency of the dynamical system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156585</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Ultra-Resolution Biomolecular Mapping in Cells with Expansion Microscopy</title>
<link>https://hdl.handle.net/1721.1/156584</link>
<description>Toward Ultra-Resolution Biomolecular Mapping in Cells with Expansion Microscopy
Liu, Yixi
To investigate the molecular and cellular foundations of biological functions, achieving nanoscale spatial resolution in biomolecular imaging is essential. Expansion microscopy (ExM), a new kind of super-resolution microscopy, enables this by physically enlarging preserved biological specimens. This allows the investigation of structure-function relationships at nanoscale resolution using conventional diffraction-limited microscopes. ExM involves a series of chemical processes, including anchoring, polymerization, softening, and expansion. Before biomolecules are secured to the gel network, there is a risk that fixation and these chemical steps may alter the integrity and organization of the biomolecules. As resolution increases, previously indiscernible structural changes become visible, highlighting the importance of preserving ultrastructure. In this thesis, we present several ultrastructure preservation methods that minimize perturbations during sample preparation and maintain the integrity and organization of biomolecules. We name the best-performing strategy subzero-temperature expansion microscopy (subExM); it showed improved structure preservation and fluorescent signal intensity. This method holds promise for broadening our understanding of biological systems and paves the way for elucidating how structural variations underpin functional differences across healthy and diseased states.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156584</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Algorithms for Vector Similarities</title>
<link>https://hdl.handle.net/1721.1/156582</link>
<description>Efficient Algorithms for Vector Similarities
Silwal, Sandeep B.
A key cog in machine learning is the humble embedding: vector representations of real-world objects such as text, images, graphs, or molecules whose geometric similarities capture intuitive notions of semantic similarity. It is thus common to curate massive datasets of embeddings by inferencing on a machine learning model of choice. However, the sheer dataset size and large dimensionality are often the central bottleneck in effectively leveraging and learning from this rich data. Inspired by this computational bottleneck in modern machine learning pipelines, we study the following question: &#13;
&#13;
"How can we efficiently compute on large scale high dimensional data?"&#13;
&#13;
In this thesis, we focus on two aspects of this question. &#13;
&#13;
1) Efficient local similarity computation: we give faster algorithms for individual similarity computations, such as calculating notions of similarity between collections of vectors, as well as dimensionality reduction techniques which preserve similarities. In addition to computational efficiency, other resource constraints such as space and privacy are also considered.&#13;
&#13;
2) Efficient global similarity analysis: we study algorithms for analyzing global relationships between vectors encoded in similarity matrices. Our algorithms compute on similarity matrices, such as distance or kernel matrices, without ever initializing them, thus avoiding an infeasible quadratic time bottleneck.&#13;
&#13;
Overall, the main message of this thesis is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156582</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magellan: A Risk-bounded Plan Executive for Autonomous Mobile Agents</title>
<link>https://hdl.handle.net/1721.1/156578</link>
<description>Magellan: A Risk-bounded Plan Executive for Autonomous Mobile Agents
Reeves, Marlyse H.
Autonomous mobile agents are being tasked to perform increasingly complex, coordinated missions that require achieving multiple goals over long horizons and in risky environments. This requires a tight coupling between activity and motion planning: commitments made in an activity plan and its supporting motion plan can strongly influence overall behavior and, if not chosen in coordination, can lead to poor performance or failure. These missions also require robust plan execution, adapting to disturbances on the fly. Current plan executives are myopic in that they focus on a short planning horizon and weakly characterize behavior beyond this horizon. For long-horizon missions, this produces highly suboptimal and brittle behavior. This thesis presents Magellan, a plan executive for long-duration missions with multiple autonomous mobile agents. Magellan employs a receding horizon trajectory optimization approach that avoids myopia by generating precise trajectories over a limited horizon while guiding these motions beyond this horizon using an effective heuristic that considers the constraints and objectives of the full activity plan. Magellan also uses this heuristic throughout execution to continuously monitor for and detect plan infeasibilities early on, while reporting the source of unrecoverable constraint violations to the activity planner for re-planning. We demonstrate that Magellan achieves an order of magnitude improvement in solution quality over the state of the art for multi-agent problems of arbitrary size.&#13;
Autonomous mobile agents often operate in hazardous environments, where safety is essential. These agents can guarantee bounded risk during planning by analyzing their stochastic dynamics. These dynamics are complicated by their non-linearity. Most state-of-the-art methods require a simple closed-form dynamics model to verify plan correctness and safety; however, modern robotic systems often have dynamics modeled by a complex analytical form that is learned from data, such as a deep neural network. Thus, there is a need to perform efficient trajectory planning using learned dynamics models while guaranteeing bounded risk. This thesis also presents LaPlaSS, a novel "generate-and-validate" approach to risk-bounded planning in which a planner generates a candidate trajectory quickly, using an approximate linear dynamics model, and the validator assesses candidate risk against an accurate model. When a candidate takes excessive risk, the validator supplies the planner with additional safety constraints. Key to LaPlaSS is learning a simple, low-dimensional approximate model, used for candidate generation, and an accurate stochastic model, used for validation. LaPlaSS uses a variational autoencoder (VAE) to learn a simple linear model in a higher-dimensional latent space, which it uses to generate candidate trajectories. The VAE also provides an accurate stochastic model, which the validator samples to evaluate candidate risk. We demonstrate that LaPlaSS can generate trajectory plans with bounded risk for a real-world agent with learned dynamics and is an order of magnitude more efficient than the state of the art.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156578</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Triplet Exciton Lifetime in the Stability and Efficiency of Blue Phosphorescent OLEDs&#13;
Or &#13;
Why Even The Brightest OLEDs Have a Dark Side</title>
<link>https://hdl.handle.net/1721.1/156577</link>
<description>The Role of Triplet Exciton Lifetime in the Stability and Efficiency of Blue Phosphorescent OLEDs&#13;
Or &#13;
Why Even The Brightest OLEDs Have a Dark Side
Tiepelt, Jan  Onchoke
Over the past decade, organic light-emitting devices (OLEDs) have evolved from an emerging technology to the dominant one in high-end mobile displays. Active-matrix OLED (AMOLED) displays offer superior color vibrancy, contrast, and form factor. Their thin, lightweight, and flexible nature holds promise for innovative form factors in smartphones, wearables, and solid-state lighting, including curved, foldable, and rollable applications.&#13;
&#13;
Amongst the greatest remaining challenges for high-efficiency blue OLEDs are their stability and their loss of efficiency at high current densities, called roll-off. After the discovery of fluorescent OLEDs by Tang and Van Slyke in 1987, exhibiting a mere 1% efficiency, Baldo et al. introduced highly efficient phosphorescent OLEDs in 1998. This novel technology achieved a quadrupling in efficiency at the cost of a significantly higher energy density stored in devices, owing to long-lived triplet excitons believed to be a crucial source of molecular damage and efficiency loss. To this day, most modern AMOLED displays employ high-efficiency phosphorescent OLEDs in red and green pixels, whereas blue emission is still realized by their much less efficient fluorescent counterparts owing to unacceptably low stability.&#13;
&#13;
Countless architectural and chemical means have been explored to mitigate degradation phenomena in phosphorescent OLEDs. However, due to the interdependence of driving phenomena in OLEDs, the isolation of key parameters is highly challenging and understanding of improvements is often merely empirical. As a consequence, key parameters and the underlying physical mechanisms in the degradation and efficiency loss of phosphorescent OLEDs are not well understood, and the existing challenges remain largely unresolved.&#13;
&#13;
This thesis focuses on the role of triplet exciton lifetime in the stability and efficiency roll-off of blue phosphorescent OLEDs. We develop a toolbox of probing techniques that isolate triplet lifetime experimentally, that is, are ‘orthogonal’ to any other aspect of the device.&#13;
&#13;
We follow a systematic approach of increasing complexity, beginning with an optical system aimed at understanding the fundamental stability behavior of archetypal phosphorescent dye emitter systems. By engineering a plasmonic triplet quenching pathway, we find a cubic dependence of photostability on triplet lifetime for doped emitter systems, showing that triplet lifetime is a dominant factor, more so than reducing triplet density alone.&#13;
&#13;
We then implement an ’orthogonal’ triplet lifetime isolation approach on OLEDs under operation, which we coin external surface plasmon polariton modulation (SPPM). We reveal only a square-root dependence of OLED stability and roll-off on emissive triplet lifetime, strongly suggesting that other excited-state species may play a crucial role.&#13;
&#13;
We combine SPPM with modulation of the emissive layer (EML) position to deconvolute the roles of charge carrier balance and triplet lifetime. We show that for a 15 nm shift of the EML, the impact of charge carrier balance on both stability and roll-off exceeds that of triplet lifetime by a factor of three and four, respectively, highlighting the importance of the former in modern OLED design.&#13;
&#13;
Using magnetic exciton annihilation modulation (MEAM) on phosphorescent OLEDs, we show that non-emissive, ’dark’ excitons play a dominant role in efficiency roll-off and degradation. Dissociating dark triplets, we achieve a 2.5-times decrease in roll-off and degradation as well as a two-times increased dependence on bright triplet lifetime. This finding highlights the key role of typically overlooked dark excitons in phosphorescent OLED operation and in harnessing Purcell enhancement for device stabilization.&#13;
&#13;
Finally, we explore device integration approaches, including extraordinary optical transmission through sub-wavelength hole arrays and transfer-printing of 2D arrays of plasmonic nanopatch antennas onto OLED stacks. Both aim at achieving high Purcell enhancement while maintaining a high device efficiency.&#13;
This thesis elucidates the intricate role of triplet exciton lifetime in the stability and efficiency of blue phosphorescent OLEDs. We gain unique insights from the developed in-situ characterization techniques highlighting the importance of deliberate strategies to manage dark triplet excitons in the quest to stabilize phosphorescent blue.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156577</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Passing Visual Turing Test with Computational 3D Displays and Appearance Modeling</title>
<link>https://hdl.handle.net/1721.1/156576</link>
<description>Towards Passing Visual Turing Test with Computational 3D Displays and Appearance Modeling
Shi, Liang
In Neal Stephenson’s 1992 science-fiction novel "Snow Crash," the term "metaverse" was coined to describe a virtually persistent world in which humans, as avatars, interact with each other and software agents just as they would in the physical world. With the rapid development of virtual and augmented reality devices over the past decade, this is no longer science fiction. To stimulate future development and research, Meta Inc. (previously Facebook) defined a grand challenge termed the visual Turing test in 2022, asking whether a virtual experience can be made so vivid and realistic that a human observer can no longer distinguish it from the real world. Passing this test requires significant technological advancements in areas such as display, rendering, acquisition, sensing, modeling, and simulation. The focus of this thesis is on developing next-generation computational 3D near-eye displays to deliver a more realistic and natural 3D experience, as well as devising inverse procedural material (PM) modeling and capture frameworks that allow non-expert users to bring real-world appearances into a digital twin in a few simple clicks.&#13;
&#13;
Existing near-eye 3D displays rely on the stereoscopic principle, showing parallax images to deceive the brain into perceiving depth from 2D images. This deviates from how humans observe the real world, causing visual discomfort and vision degradation after prolonged use. Computer-generated holography (CGH) is widely regarded as the next-generation technology capable of utilizing light interference and diffraction to produce true 3D graphical projections. However, its adoption has been fundamentally limited by low image quality and vast computational cost. Similarly, PMs have recently emerged as the industry standard for representing physically-based, spatially-varying appearance due to their unlimited resolution, editability, and memory efficiency. Nonetheless, the extensive domain knowledge and labor required to master and author PMs have made it difficult for non-experts to create and operate them.&#13;
&#13;
This thesis exploits recent advances in machine learning, optimization, rendering, and differentiable software infrastructures to develop algorithms and user-friendly tools for synthesizing, compressing, encoding/decoding, and deploying photorealistic 3D holograms, as well as for authoring, editing, and manipulating procedural materials. The ultimate objective is to pass the visual Turing test and democratize the creation of a metaverse for ordinary people.&#13;
&#13;
At the very end of this thesis, an extrapolation of the visual Turing test to physical reproduction is explored: a novel 3D printing system that fabricates painting facsimiles faithful to the originals is discussed, along with the potential of this technology to aid cultural preservation and education.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156576</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lightwave Electronics Based on Nanoantenna Networks</title>
<link>https://hdl.handle.net/1721.1/156575</link>
<description>Lightwave Electronics Based on Nanoantenna Networks
Yeung, Matthew
The evolution of commercial lasers has played a pivotal role in advancing our comprehension of the natural world. Emitting coherent, high-intensity light, lasers offer unparalleled capabilities, reaching peak intensities of terawatts per centimeter squared at repetition rates exceeding a million pulses per second. Such immense peak intensities facilitate electric field amplitudes at the focus that reach the V/nm range, rivaling the electric field strengths within atoms and molecules responsible for atomic-scale electron dynamics. Through the realm of ultrafast nonlinear optics, the manipulation of optical waveforms has paved the way for controlling electrons at optical frequencies [1]–[7]. This breakthrough has led to a deeper understanding of electron dynamics, particularly in the context of light field-driven phenomena in solids, a field commonly referred to as lightwave electronics.&#13;
&#13;
As traditional semiconductor electronics approach their saturation limits in terms of speed and size, researchers are turning to light as a means of encoding information, ushering in the era of integrated photonics. Light, with its minimal absorption and high data propagation rates coupled with low power dissipation, offers an ideal medium for transmitting information. While integrated photonics encodes information within the time-averaged intensity and polarization of light, the harnessing of ultrafast electric field oscillations in light, analogous to the behavior of modern-day high-speed electronics, presents an opportunity to encode information in the sub-cycle field oscillations. In pursuit of electronics operating at optical frequencies, various methods have been explored to realize practical lightwave electronic circuit elements analogous to those in traditional electronics. Pioneering experiments have showcased that optical waveforms can initiate attosecond electron currents across various interfaces, encompassing metal-vacuum [8]–[17], dielectric [18]–[20], and air interfaces [21], [22]. However, a significant hurdle arises from the difference between the characteristic frequencies of optical (PHz) and electronic systems (GHz-THz), making it difficult to integrate optical systems with electronic ones.&#13;
&#13;
This thesis underscores the pivotal role of metallic nanoantennas as a scalable platform for lightwave electronics. It demonstrates how metallic nanoantennas enable control over light field-driven responses through rectification, resonance control (lowering the peak fields required for operation), and polarization control. A comprehensive framework is introduced, serving as the backbone for subsequent chapters. The thesis showcases three nanoantenna variants. The first facilitates shot-to-shot measurements of PHz optical phase on a chip, yielding over 2000 optical phase-sensitive electrons, ideal for compact optical waveform synthesis and interferometry-free frequency combs. The second leverages rectification to demonstrate harmonic frequency mixing with bandwidth surpassing that of the input light, enabling the direct detection of PHz frequencies using multi-cycle light from commercial laser systems and positioning the device as a compact detector for optical oscilloscope-like measurements. The third is an architecture enabling polarization-sensitive rectification, highlighting the flexibility of the nanoantenna platform for PHz-driven electrons.&#13;
&#13;
As is exhibited through the work in this thesis, nanoantennas possess the potential to emerge as a comprehensive solution for lightwave-driven electronics, enabling a new era of high-speed, low-power electronic systems driven by oscillations of light.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156575</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating and protecting high fidelity multi-particle entangled states with atom-photon interactions</title>
<link>https://hdl.handle.net/1721.1/156574</link>
<description>Generating and protecting high fidelity multi-particle entangled states with atom-photon interactions
Ramette, Joshua
We introduce methods based on atom-light interactions to produce multi-particle quantum states of atoms. After reviewing the basic challenges associated with engineering atom-photon interactions and techniques for overcoming these, we first describe “counter-factual” carving, a new method that harnesses curious properties of quantum measurements to generate multi-particle entangled states, with potential applications across many quantum information processing systems. With a cavity of cooperativity C used to enhance the atom-photon interaction, counter-factual carving generates entangled states with infidelities of e^(−C/N), an exponential improvement over previous carving schemes, which scale as (C/N)^(−1). By fundamentally improving the scaling of the entanglement fidelity with C, it reduces the necessary C for generating high fidelity entangled states by several orders of magnitude in practical scenarios. Next, we consider fault-tolerant quantum computing architectures made from local, error-corrected modules linked together via a physically distinct mechanism (such as photonic links) which is generally noisier and slower. Such a modular approach reduces the task of achieving true scalability to designing a unit module of fixed qubit number equipped with a fault-tolerant quantum input/output interface, so that scaling simply involves connecting more identical modules. We show that logical qubits encoded in surface code patches in distinct modules can be fault-tolerantly connected despite substantially elevated noise along their shared interface, helping to solve the crucial problem of network noise inherent when connecting error-corrected modules to form a modular quantum computer. Then, we consider the full suite of interactions between atoms afforded by Rydberg physics: the exchange (XY) and Ising (ZZ) couplings. 
By positioning the atoms in a zigzag pattern and applying combinations of microwave photon dressings within the Rydberg manifold, we show how to implement a model Hamiltonian with tunable XY and ZZ couplings between nearest-neighbor and next-nearest-neighbor terms in a 1D spin chain. Tuning these couplings allows one to access a host of interesting phases and quantum phase transitions, including the exotic deconfined quantum criticality (DQC) phenomenon. We introduce methods for using targeted laser excitations to prepare multi-particle ground states of this model Hamiltonian within targeted total magnetization sectors and numerically simulate our procedure at half-filling at points of DQC within the phase diagram. Finally, on a different note, we consider the dynamical behavior of a discrete state coupled to a finite bandwidth quasicontinuum, in the intermediate regime, where the bandwidth of the quasicontinuum cannot be treated as infinite or zero. This problem has many applications, including understanding how to generate an excited Rydberg superatom W state by driving an ensemble of Rydberg atoms initially in their ground states. We consider the driving to be much stronger than the bandwidth and take the small energy spread of the finite bandwidth quasicontinuum as a perturbation on top of the strong drive. Somewhat counter-intuitively, we find that rather than fully decaying into the continuum at long times, the discrete state instead Rabi oscillates into a superposition of the continuum states even at infinite time, following a quick amplitude loss of small magnitude.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156574</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conditional Mechanical Squeezing of a Micromechanical Oscillator Approaching the Quantum Regime</title>
<link>https://hdl.handle.net/1721.1/156573</link>
<description>Conditional Mechanical Squeezing of a Micromechanical Oscillator Approaching the Quantum Regime
Lane, Benjamin Bret
Optomechanics is a rapidly developing field studying systems with optical and mechanical resonators coupled via radiation pressure. This coupling plays a key role in precision displacement and momentum sensing, such as in gravitational wave detection and atomic force microscopy. Recently the field has entered the quantum regime, with the interaction being used to prepare ponderomotively squeezed states of light, optical-optical entanglement, motional ground states, and more.&#13;
&#13;
In this thesis, I will introduce and demonstrate a conditionally-prepared motional squeezed state of a micromechanical oscillator approaching the quantum regime, with a conditional position and momentum variance of 11.93 and 6.32 times the zero-point fluctuation, respectively. This work was done by performing fast, continuous measurement of the position of a 50 ng oscillator in a detuned optomechanical cavity and applying optimal filtering, namely the Wiener filter, to the noisy measurement record to estimate the conditional state.&#13;
&#13;
I also present a new technique for locking homodyne detectors using a free-space continuously tunable optical field modulator, enabling rapid tomographic measurements of optical states. Lastly, I also include an appendix detailing a method for reducing seismic Newtonian noise of next generation gravitational wave detectors like Cosmic Explorer utilizing simple earthworks, inspired by phononic meta-materials of contemporary optomechanics research groups.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156573</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental Impacts on Simulated Galaxy Properties</title>
<link>https://hdl.handle.net/1721.1/156572</link>
<description>Environmental Impacts on Simulated Galaxy Properties
O'Neil, Stephanie
Galaxies in the Universe come in a variety of shapes, sizes, and colors. The range of properties is produced from a combination of physical processes. Galaxies live within various structures and therefore receive different influences from their environment that alter their evolutionary pathways. Thus, not only does each physical process need to be understood, but we also need to understand the interactions of these processes with each other and with the environment in which they live. In this thesis, I work towards a more complete picture of galaxy evolution by studying how galaxies are affected by the environment in which they evolve. I use cosmological and zoom-in simulations to study the components of galaxies and how they relate to each other. I start with a study on measuring the boundaries of halos. In order to study galaxies, their dark matter halo, and the environment in which they live, it is integral to have a robust size measurement of these objects. Using massive galaxy groups and clusters in the IllustrisTNG simulations, I show how the splashback radius, which defines a halo’s boundary by separating orbiting material from infalling material, varies depending on the measurement method. I then show how selection effects can bias measurements of the splashback radius by examining various populations of galaxies. The more massive and brighter a galaxy is, the more it deviates from the dark matter distribution. This is significant since these are the galaxies that are most easily observed. Motivated by uncovering the underlying cause of galaxy population bias, I present a study of how galaxy properties change with time spent in the cluster. I demonstrate that certain scaling relations, in particular the stellar-to-halo mass ratio, the stellar size, and the metallicity as functions of stellar mass, systematically deviate from the standard relation for galaxies that have spent more time in their cluster. 
I also show that measurements of the splashback radius stabilize after approximately 5 Gyrs, the time it takes for one pass through the cluster. Finally, I run a new suite of zoom-in simulations introducing a new model for self-interacting dark matter that includes a significant amount of elastic, exothermic, and endothermic scattering. This is the first time that a model emphasizing endothermic scattering has been simulated in a galactic environment. I show that this model can produce realistic halos and has the potential to mitigate several of the existing discrepancies between observations of galaxies and simulations of cold dark matter.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156572</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Consensus and Synchronization for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/156559</link>
<description>Efficient Consensus and Synchronization for Distributed Systems
Yang, Lei
Recent interest in decentralized applications calls for distributed systems that replicate their states across a large number of servers communicating over wide-area networks. We propose near-optimal solutions to two fundamental problems in the design and implementation of such systems: consensus and synchronization. First, we propose a universal decomposition of distributed consensus protocols that enables near-optimal throughput and liveness on fluctuating networks. Our key technique is to minimize the amount of communication necessary for participants to safely reach consensus. We design a state-of-the-art information dispersal protocol to achieve that. Second, we propose the first family of efficient rateless error-correcting codes for reconciling set differences. Our codes enable pairs of servers to synchronize system states with near-optimal communication and computation costs. We theoretically analyze these solutions, and implement end-to-end systems to demonstrate strong real-world benefits.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156559</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Learner-centric Tools for Physical Skill-learning</title>
<link>https://hdl.handle.net/1721.1/156557</link>
<description>Designing Learner-centric Tools for Physical Skill-learning
Turakhia, Dishita Girish
Learning physical or psychomotor skills such as throwing a ball or operating a drill is integral to our lives. Whether for excelling in sports, mastering DIY tools, or gaining expertise in operating machinery, we are constantly learning to perfect physical skills, which are often taught by instructors and sometimes self-learned. The landscape of how we learn such skills, however, is undergoing a profound transformation with watershed advances in technology. For example, ubiquitous wearables that allow precision sensing, integrated machine learning models that enable personalized predictions, and augmented and virtual reality environments that promise immersive experiences are all revolutionizing how researchers design tools for personalized skill learning. Furthermore, these enabling technologies which offer self-learning opportunities outside the traditional classroom settings, when combined with generative artificial intelligence tools can pave the way for a future where every learner can self-learn through personalized learning experiences.&#13;
&#13;
Amidst this excitement of innovation in enabling technologies, it is crucial, however, to design learning tools that are grounded in learners’ experiences and cater to the diverse nature of human learning. Each learner is unique in their skill levels, learning speeds, and preferences. Moreover, every learning experience is multifaceted, extending beyond mere skill acquisition to encompass broader facets like cultivating self-motivation, self-efficacy, and creativity. Given this multifaceted nature of human learning, my vision is to leverage technology to design learner-centric tools for comprehensive skill learning.&#13;
&#13;
I fulfill this vision through my doctoral research, which lies at the intersection of Human-Computer Interaction (HCI) and Learning Science (LS). In this work, I have combined HCI principles of tool design with LS frameworks to personalize the learning of physical skills by enhancing motivation, creativity, and self-reflection. I have designed, built, and studied interventions grounded in learners’ and educators’ experiences, thereby challenging prevailing techno-centric approaches that focus solely on skill enhancement. This work particularly investigates three frameworks for skill learning: adaptive, game-based, and reflection-based, targeting motor skills, maker skills, and fabrication skills, respectively. These frameworks are informed by existing LS theories and insights into educators’ teaching strategies. The tools I built using these frameworks also leverage technological advancements in sensing, shape-changing interfaces, computer vision, and generative artificial intelligence.&#13;
&#13;
Specifically, the adaptive learning framework focuses on dynamically adjusting the difficulty of training tasks based on individual skill levels. This personalized approach aims to optimize the learning experience by maintaining an optimal challenge point. The game-based learning approach explores the integration of fabrication with existing digital games, fostering engagement and motivation. Finally, the reflection-based learning approach emphasizes self-directed learning through critical dialogue and prompts aligned with learning goals. The effectiveness of the learning tools built using these frameworks was evaluated through user studies. By examining the deployment of these tools and studying their impact on learners, my research contributes to expanding the design space for learning tools and deepening our understanding of how humans learn with technology. Through these contributions to the fields of HCI and LS, this thesis opens up new ways of designing a learner-centric future of comprehensive skill development.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156557</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Quarton Coupler for Near-Ultrastrong Nonlinear Light-Matter Coupling in Superconducting Circuits</title>
<link>https://hdl.handle.net/1721.1/156556</link>
<description>The Quarton Coupler for Near-Ultrastrong Nonlinear Light-Matter Coupling in Superconducting Circuits
Ye, Yufeng
The interaction between an atom and an electromagnetic mode of a resonator is of fundamental interest and ubiquitous in quantum technologies. Most prior work studies a linear light-matter coupling of the form [formula], where g measured relative to the photonic (ω_a) and atomic (ω_b) mode frequencies can reach the ultrastrong regime [formula]. In contrast, a nonlinear light-matter coupling of the form [formula] has the advantage of commuting with the atomic [formula] and photonic â†â Hamiltonians, allowing for fundamental operations such as quantum-non-demolition (QND) measurement. However, due to the perturbative nature of nonlinear coupling, the state-of-the-art χ/max(ω_a, ω_b) is limited to &lt; 10⁻². In this thesis, we develop the theory of quarton couplers and experimentally demonstrate, for the first time, a near-ultrastrong χ/max(ω_a, ω_b) = (4.852 ± 0.006) × 10⁻² nonlinear coupling of a superconducting artificial atom and a nearly-linear resonator. We also show signatures of light-light nonlinear coupling [formula], and χ/2π = 580.3 ± 0.4 MHz matter-matter nonlinear coupling [formula], which represents the largest reported ZZ interaction between two coherent qubits. Finally, we present a new qubit readout scheme that uses the quarton coupler to enable simulated performance of 5 ns readout time with greater than 99% readout and QND fidelity. Our work reveals a new path for order-of-magnitude improvements of fundamental superconducting qubit operations by engineering nonlinear light-matter couplings in parameter regimes unreachable by existing designs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156556</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Equity and Reliability in Machine Learning</title>
<link>https://hdl.handle.net/1721.1/156553</link>
<description>Advancing Equity and Reliability in Machine Learning
Shanmugam, Divya
The data we have are often not the data we wish to use. This distinction can have serious consequences for the behavior of machine learning models across environments and demographic subgroups. If a disease is systematically underdiagnosed, machine learning models trained on this data risk replicating patterns of underdiagnosis. If the data used to evaluate machine learning models is not representative of data the models encounter during deployment, we risk missing model failures on subsets of the data distribution. If the demographics we use to assess the fairness of machine learning models are excessively coarse, we risk missing significant disparities in algorithmic performance. For domains in which flawed data is common, these systematic differences represent a barrier to the widespread adoption of machine learning systems. In this thesis, we develop methods to encourage machine learning predictions to be reliable and equitable even when the underlying data are not. We approach this goal in three ways. We do so first by taking a data-centric lens, and developing methods to precisely characterize differences between the data we have and the data we wish to have (Chapters 2 &amp; 3). We then adopt a model-centric lens to consider how one might efficiently update models without access to the training data (Chapters 4 &amp; 5). Finally, we provide commentary on standard approaches to the use of race when evaluating machine learning systems (Chapter 6). In sum, this dissertation is a step towards machine learning methodology that is robust to the inevitably unreliable and inequitable data we are able to observe.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156553</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Systemic to Regional: Personal Health and Medical Monitoring Systems that Adapt to Individual Variance</title>
<link>https://hdl.handle.net/1721.1/156551</link>
<description>From Systemic to Regional: Personal Health and Medical Monitoring Systems that Adapt to Individual Variance
Zhu, Junyi
Doctors require biometric sensor data to improve diagnostic accuracy, monitor a patient’s recovery progress, and make informed decisions about further treatment. Advances in electronics and sensing technologies have led to the development of remote monitoring devices, such as for ECG and blood pressure, which can collect biometric data outside of the clinic. However, these forms of systemic biometric signal monitoring only capture limited aspects of one's overall health, lacking detailed information on specific local body regions. In addition, individual patient health conditions are diverse and often complex. Thus, traditional sensing techniques, while effective for the broader population, often do not meet the unique needs of specific patient groups, especially for environments beyond clinic and home.&#13;
&#13;
In this thesis, I present my Ph.D. work on personalized health and medical monitoring systems that adapt to individual variance, including muscle engagement monitoring during unsupervised rehabilitation, upper airway obstruction monitoring for obstructive sleep apnea, as well as device and measurement setup customization based on the patient’s regional body physique and use environment.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156551</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Low-Level Priors from Images for Inference and Synthesis</title>
<link>https://hdl.handle.net/1721.1/156549</link>
<description>Learning Low-Level Priors from Images for Inference and Synthesis
Sharma, Prafull
With the recent advancements in computer vision, scene understanding is critical for both downstream applications and photorealistic synthesis. Tasks such as image classification, semantic segmentation, and text-to-image generation parse the scene in terms of high-level properties of objects and scenes. Along with understanding and creating visual media along these dimensions, it is important to understand low-level information such as geometry, material, lighting configuration, and camera parameters. Such understanding would help us with tasks such as material acquisition, fine-grained synthesis, and robotics. In this thesis, we discuss learning priors over low-level properties to facilitate inference of geometry, static-dynamic disentanglement, and material properties. We present a self-supervised method to construct a persistent representation of geometry and appearance inferred from a single image at test time. This representation can be leveraged to infer static-dynamic disentanglement and can be used for 3D-aware scene editing. We employ representations from a pre-trained visual encoder for selecting similar materials in images. Additionally, we demonstrate fine-grained control over material properties for image editing using pre-trained text-to-image models. This fine-grained control is achieved by maintaining the photorealistic synthesis ability of text-to-image models while learning control based on synthetic rendered images.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156549</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Effective Theories for Deep Learning and Beyond</title>
<link>https://hdl.handle.net/1721.1/156548</link>
<description>Towards Effective Theories for Deep Learning and Beyond
Zhai, Xiyu
Deep learning has been quite successful in the past decade, with fantastic progress made across multiple domains such as computer vision, natural language processing, and reinforcement learning. However, the theoretical understanding of its success is limited, and its behavior constantly defies our traditional theoretical understanding of machine learning. We present our work on deep learning theory and show powerful techniques we developed for studying wide neural networks’ optimization and generalization behavior that help narrow the gap between theory and practice. Inspired by the recent success of transformer architectures like ChatGPT and Sora, we also present our work on the expressive power of vision transformers. In addition, we have been working on new AI theories and algorithms that go beyond deep learning and a new AI programming system that aims to make the implementation of these new ideas possible. It goes beyond the scope of a PhD to finish a demonstration of the new AI, but we aim to show strong evidence of its feasibility.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156548</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies in coordination chemistry</title>
<link>https://hdl.handle.net/1721.1/156417</link>
<description>Studies in coordination chemistry
Haas, Terry E.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1963; Vita.; Includes bibliographical references (leaves [102]-105).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156417</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of fission products in the region of mass numbers 103 to 131</title>
<link>https://hdl.handle.net/1721.1/156416</link>
<description>A study of fission products in the region of mass numbers 103 to 131
Winchester, John W. (John Widmer)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1955; Vita.; Bibliography: leaves 135-138.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156416</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Pro-Apoptotic Function of CED-9 Requires a CED-9-CED-4 Interaction</title>
<link>https://hdl.handle.net/1721.1/156351</link>
<description>The Pro-Apoptotic Function of CED-9 Requires a CED-9-CED-4 Interaction
Tucker, Nolan T.
Apoptosis is a fundamental, conserved process of cell death required for animal development and tissue homeostasis. In Caenorhabditis elegans, apoptosis is inhibited by the BCL-2 homolog CED-9. In addition to its canonical anti-apoptotic role, CED-9 possesses an apparent pro-apoptotic function, because reduction in CED-9 function can result in decreased cell death – e.g., a ced-9(lf) allele enhances the partial cell-death defect of animals with a weak loss-of-function mutation in the pro-apoptotic caspase gene ced-3. This pro-apoptotic function of ced-9 is poorly understood. CED-9 has been thought to inhibit apoptosis by binding to and inhibiting the pro-apoptotic protein CED-4, the C. elegans homolog of APAF-1. In this work we show that mutations in either CED-9 or CED-4 located in their CED-9-CED-4 binding regions as defined by X-ray crystallography cause a defect in executing apoptosis without affecting the anti-apoptotic function of ced-9. We further show that these mutant CED-9 and CED-4 proteins are defective in a CED-9-CED-4 interaction in vivo. These data indicate that the known in vivo CED-9-CED-4 interaction is required for the pro-apoptotic function of ced-9 but is dispensable for its anti-apoptotic function. While caspase-mediated apoptosis is the best characterized form of programmed cell elimination, other conserved mechanisms, such as cell extrusion, also exist. In C. elegans, at least eight cells that normally die by apoptosis are eliminated by cell extrusion in the absence of apoptosis. From an EMS mutagenesis screen, performed on worms lacking caspase activity, searching for survival of an extruded cell (ABplpappap), I isolated a nonsense mutation in the GMP reductase gene, gmpr-1. Interestingly, this allele (n6457) also allows ABplpappap to escape caspase-mediated apoptosis. 
Additionally, I found that RNAi against genes that affect either pyrimidine metabolism (pyr-1) or purine metabolism (gmpr-1 and adss-1) similarly allows ABplpappap to survive but that RNAi against R151.2, which affects both purine and pyrimidine metabolism, does not. gmpr-1(n6457) is suppressed by RNAi against R151.2 or pyr-1 but not by RNAi against the purine metabolism gene adss-1. These results suggest that the ability to maintain a normal physiologic ratio of purines and pyrimidines plays a role in cell extrusion and possibly in apoptosis as well. Given the highly conserved nature of cell-death genes and processes, interactions between human CED-9/BCL-2 family members and APAF-1 as well as nucleotide imbalance might play important roles in human diseases that involve disruptions in apoptosis, such as certain neurodegenerative disorders and cancers.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156351</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing In-hand Dexterous Manipulation via Machine Learning</title>
<link>https://hdl.handle.net/1721.1/156350</link>
<description>Advancing In-hand Dexterous Manipulation via Machine Learning
Chen, Tao
Robots are becoming better at navigating and moving around, but they still struggle with using tools, which severely limits their usefulness for household tasks. Using tools requires dexterously manipulating everyday objects like hammers, scissors, knives, screwdrivers, etc. While simple for humans, manipulating everyday objects remains a long-standing challenge that requires breakthroughs in robotic hardware, sensing, perception, and control algorithms. This thesis proposes machine learning techniques that substantially improve the state-of-the-art performance of dexterous manipulation controllers. It focuses specifically on in-hand object reorientation tasks. Previous works on this problem had limitations like using expensive sensors or hands, only working for a few objects, requiring the hand to face upward, slow object motion, etc. This thesis goes a step further by enabling a low-cost robot hand to dynamically reorient diverse objects in mid-air with the hand facing downward using an inexpensive depth camera. To train such a system, the thesis proposes techniques for robots to learn to reorient objects with a downward-facing hand in the air. It also proposes multiple techniques to improve the time efficiency of the learning algorithms. Additionally, it discusses how to reduce the gap between simulation and reality so that controllers trained in simulation can transfer directly to real systems. Furthermore, the thesis explores the use of tactile sensors in dexterous manipulation. It concludes with a discussion of the current system’s issues and outlines future research directions for dexterous manipulation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156350</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Speed Acoustophoresis for Multiphase Micron-Sized Particle Assembly</title>
<link>https://hdl.handle.net/1721.1/156349</link>
<description>High Speed Acoustophoresis for Multiphase Micron-Sized Particle Assembly
Chai, Lauren Amy
Acoustophoresis assembly is a non-contact assembly method characterized by its ability to assemble micron particles in 2D and 3D mm-scale patterns and its material agnosticism. In this method, acoustic standing wave pressure gradients interact with the particles to generate the acoustic radiation forces that assemble the particles into the desired pattern. However, assembly times are prohibitively slow in applications with small particle sizes and high-viscosity fluids. One mitigation strategy is to increase transducer power, thereby increasing the assembly forces. However, the increased power also increases streaming, a bulk fluid flow which destroys the assembled pattern when large enough and, thus, caps transducer power, setting a floor on particle size and assembly time. This thesis presents a strategy for circumventing the streaming flow disruption: assembling particles during the streaming transient stage. During this stage, the streaming velocity is low enough to allow the assembly of particles whose sizes are otherwise too small when streaming is at steady state. Indeed, assembly experiments in the transient stage show particles assembling into a pattern that otherwise deteriorates as streaming velocity reaches a steady state. Results show that increasing transducer voltage by 50% shortened the time to peak pattern quality by 25-46%. This thesis also presents a model for using the streaming velocity measurement to predict the timing of maximum pattern quality. This thesis also presents a sensitivity analysis of the assembly time model in the literature to explain the discrepancy between the assembly time measurements and the literature model. Here, the central theme is to show where to recoup lost assembly energy and improve assembly rate without increasing streaming. The sensitivity analysis shows that assembly time is most sensitive to sound speed, where a 1% sound speed variation leads to a 4% change in the assembly time. 
Examination of the acoustophoresis standing waves shows that sound speed and vessel width variation contribute to imperfect constructive interference of the standing waves, further reducing assembly time, especially once the model includes resonance. Resonance combined with parameter measurements accounts for the discrepancy between the model and the measurements.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156349</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Complex Problems in Fluid Dynamics: from CFD to Experiments Leveraging ML</title>
<link>https://hdl.handle.net/1721.1/156348</link>
<description>Exploring Complex Problems in Fluid Dynamics: from CFD to Experiments Leveraging ML
del Aguila Ferrandis, Jose
Structures operating in the marine environment are subject to large steady and unsteady forces, typically at high Reynolds number. Given the current limitations of CFD at high Reynolds number and the high expense of conducting experiments at relevant scales, design is constrained by the limited available data, incomplete knowledge of the principal physical mechanisms, and restricted parametric searches. ML offers opportunities to overcome these limitations.&#13;
&#13;
In this thesis we apply ML methods to three engineering problems of high importance to theory and applications:&#13;
&#13;
1. Optimizing Vortex Generators to Reduce Ship Form Drag: In the quest for reducing ship emissions, it is imperative that the fluid mechanics of ship resistance be explored for improving propulsive efficiency. Form drag is a significant part of the resistance of high block coefficient ships and remains a last frontier for ship hull optimization. We explore the optimization of vortex generators (VGs) as a powerful tool for reducing flow separation.&#13;
&#13;
2. Mapping the Properties of Fluid Forces in Vortex Induced Vibrations of SCR Risers: Vortices form in the wake of bluff bodies as a result of flow instabilities that are hard to study parametrically, especially for complex structures such as steel catenary risers (SCR). The resulting vibrations are of theoretical and practical importance. By using experimental and field data we can extract hydrodynamic databases that incorporate known physics, fill the parametric space, and provide new knowledge. By focusing on the SCR vibrations, we demonstrate that we not only extract physics, but can provide accurate predictions as well.&#13;
&#13;
3. Causal Learning of Large Amplitude Ship Motions with Emphasis on Parametric Rolling: Predicting ship motions in severe sea states is complex due to the nonlinear wave-body interactions involved. This section introduces a simulation approach utilizing neural networks trained on stochastic wave elevations from multiple sea states.&#13;
&#13;
The trained networks can predict core vessel motions, including the challenging phenomenon of parametric rolling. Once trained using detailed CFD simulations, these networks provide swift and efficient vessel dynamics predictions. The research also explores the statistics of non-linear motions, aiming for consistent and accurate predictions across different wave conditions. This methodology, influenced by the universal approximation theorem for functionals, represents a significant advancement in addressing engineering challenges.&#13;
&#13;
In summary, these studies emphasize the role of ML as an instrumental tool in advancing marine systems, driving them toward increased efficiency and adaptability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156348</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards reliable organisms: fault-tolerance in unconventional models of computation</title>
<link>https://hdl.handle.net/1721.1/156347</link>
<description>Towards reliable organisms: fault-tolerance in unconventional models of computation
Tan, Andrew K.
Computation is often described in abstract and idealized terms. In this setting, the details of the computer can be neglected: different models of computation are interchangeable for a cost which is at most polynomial in the size of the task at hand. The situation is more complicated for computers composed of imperfect components which are liable to introduce errors into the computation. Early work by John von Neumann showed that computers composed of imperfect components could be used to simulate perfect computation with high probability through the introduction of redundancy at the hardware level, i.e. they could be made fault-tolerant if certain conditions were met. These conditions, however, depend crucially on the details of noisy computational primitives; while the principles of fault-tolerance remain---store and manipulate data redundantly and perform error correction throughout---the constructions must be designed bespoke to the primitives at hand. In this thesis, we present examples of fault-tolerance in less conventional models of computation and investigate cases in which hardware-level fault-tolerant design may be more efficient in spite of the requisite redundancy.&#13;
&#13;
First, we develop new fault-tolerance results in unconventional models of classical and quantum computation.&#13;
In particular, we study a model of formula-based computation over larger alphabets subject to symmetric noise;&#13;
and show that performing computation with large alphabet majority gates results in strictly larger nominal thresholds than achievable using Boolean majority gates. We then move away from a formula-based architecture to study large-alphabet computation based on feed-forward artificial neural networks subject to analog noise. Using the biologically-inspired grid code, we show that fault-tolerant neural network-based computation can be achieved. Then, we envision quantum computation composed of subroutines---rather than gates---developing a method for the error correction of noisy quantum subroutines based on the quantum singular value transform.&#13;
&#13;
Second, we seek a precise understanding of when fault-tolerance is not only possible but preferable. In certain cases, the choice between non-fault-tolerant and fault-tolerant designs is clear: for classical computation, hardware-based fault-tolerance is rarely used due to the existence of cheap and reliable transistors; in quantum computation, fault-tolerance appears to be the most economical route towards large-scale computation owing to the difficulty of reliably manipulating quantum information.&#13;
The trade-off may not be as clear in less conventional models of computation. We make a detailed accounting of the resources required to achieve fault-tolerance in a simple setting which allows us to precisely delineate a regime in which hardware-level fault-tolerance is preferred despite the requisite redundancy.&#13;
&#13;
Lastly, we discuss some connections between the study of fault-tolerance and information theory. In particular, we argue that an algorithm we developed for noisy compression can be understood to be finding optimal encodings over a noisy channel and is therefore helpful for designing good encodings of data for fault-tolerant computation. This is followed by a review of upper bounds on the fault-tolerant threshold based on information theoretic arguments. We suggest that new measures of information are needed if existing upper bounds are to be improved for circuits, and propose a high-level template for this work.&#13;
&#13;
We end with a reflection on the term "organism" and a discussion of the implications of this collection of results.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156347</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Secure Machine Learning Acceleration: Threats and Defenses Across Algorithms, Architecture, and Circuits</title>
<link>https://hdl.handle.net/1721.1/156346</link>
<description>Towards Secure Machine Learning Acceleration: Threats and Defenses Across Algorithms, Architecture, and Circuits
Lee, Kyungmi
As deep neural networks (DNNs) are widely adopted for high-stakes applications that process sensitive private data and make critical decisions, security concerns about user data and DNN models are growing. In particular, hardware-level vulnerabilities can be exploited to undermine the confidentiality and integrity required for those applications. However, conventional hardware designs for DNN acceleration largely focus on improving throughput, energy efficiency, and area efficiency, while hardware-level security solutions are significantly less well understood. This thesis investigates memory security for DNN accelerators, where the off-chip main memory cannot be trusted. The first part of this thesis illustrates the vulnerability of sparse DNNs to fault injections on their model parameters. It presents SparseBFA, an algorithm to identify the most vulnerable bits among the model parameters of a sparse DNN. SparseBFA shows that a victim DNN is highly susceptible to a few bit flips in the coordinates of sparse weight matrices, less than 0.00005% of the total memory footprint for its parameters. Second, this thesis proposes SecureLoop, a design space exploration framework for secure DNN accelerators that support a trusted execution environment (TEE). Cryptographic operations are tightly coupled with the data movement pattern in secure DNN accelerators, complicating the mapping of DNN workloads. SecureLoop addresses this mapping challenge by using an analytical model to describe the impact of authentication block assignments and a simulated annealing algorithm to perform cross-layer optimizations. The optimal mapping identified by SecureLoop is up to 33% faster and 50% better in energy-delay product compared to conventional mapping algorithms. Finally, this thesis demonstrates the implementation of a secure DNN accelerator targeting resource-constrained edge and mobile devices.
This design addresses the implementation-level challenges of supporting a TEE and achieves a low overhead of less than 4% performance slowdown, 16.5% more energy consumption per multiply-and-accumulate operation, and 8.1% of the accelerator area.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156346</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of human sex-linked homologs DDX3X and DDX3Y and phenotypic consequences</title>
<link>https://hdl.handle.net/1721.1/156345</link>
<description>Regulation of human sex-linked homologs DDX3X and DDX3Y and phenotypic consequences
Rengarajan, Shruthi
The X-linked gene DDX3X and its Y-linked homolog DDX3Y comprise one of 17 gene pairs retained on the human X and Y chromosomes during their evolution from ordinary autosomes; both genes are widely expressed in human tissues. Mutations of DDX3X and DDX3Y result in a wide range of sex-dependent phenotypes, necessitating the study of their regulation.&#13;
&#13;
In this thesis, we show that DDX3X is extraordinarily dosage-sensitive, and that perturbation of either DDX3X or DDX3Y expression is buffered: by negative cross-regulation of DDX3X and DDX3Y in 46,XY cells, and by negative auto-regulation of DDX3X in 46,XX cells. In 46,XY cells, knockdown of either DDX3X or DDX3Y by CRISPRi causes transcript levels of the homologous gene to rise. In 46,XX cells, chemical inhibition of DDX3X protein activity elicits an increase in DDX3X transcript levels. This regulation is mediated through mRNA stability and buffers total levels of DDX3X and DDX3Y protein in human cells. Our findings indicate that gene regulatory mechanisms present on ancestral autosomes were retained and modified during the 200-million-year evolution of the human sex chromosomes.&#13;
&#13;
This regulation has key consequences for human diseases. We re-analyzed data from the Cancer Dependency Map to identify genetic dependencies on the broadly expressed regulators on the Y chromosome. We find that DDX3Y is required for the survival of a set of cancer cell lines that present with loss-of-function mutations in DDX3X, uncovering a novel dependency in male tumors. Altogether, this work identifies a regulatory mechanism on the human sex chromosomes that has important consequences for human disease.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156345</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Computer Vision beyond Lₚ Adversaries</title>
<link>https://hdl.handle.net/1721.1/156330</link>
<description>Robust Computer Vision beyond Lₚ Adversaries
Leclerc, Guillaume
Deep learning computer vision systems, integral to technologies such as self-driving cars, facial recognition, and content moderation, require robustness against diverse perturbations to ensure reliability and safety. Examples of such perturbations include variations in lighting conditions, occlusions, or changes in object orientations and backgrounds, all of which can significantly impact the performance of these systems. This thesis investigates the effects of Lₚ-perturbations on neural networks, focusing on models trained to be robust against these disturbances. We demonstrate that such models exhibit behaviors more akin to human vision, which is widely regarded as the benchmark for robust vision, compared to models trained using other methods. However, our measurements indicate that, from the perspective of Lₚ perturbations, these models perform on par with humans. In contrast, in practical vision tasks, humans still significantly outperform these models. This finding suggests that while Lₚ perturbations provide a useful measure of robustness, exploring more sophisticated perturbations may be necessary to achieve a level of robustness closer to that of human vision. In response, the thesis introduces a novel simulation-based framework, 3DB, designed for experimenting with arbitrary and semantically preserving perturbations on physically accurate 3D models directly instead of 2D images. The utility of this framework is demonstrated through its ability to diagnose and enhance the understanding of the behavior of off-the-shelf models. These insights suggest that the framework could be beneficial not only for debugging models but also for generating training data. However, the potential of this application is currently constrained by the lack of publicly available and open datasets of 3D objects. Furthermore, the absence of scalable methods for capturing such detailed representations limits the research community’s ability to collect these datasets. 
To address this limitation, a cost-effective 3D scanning process is proposed. This process facilitates the simultaneous capture of geometry and material properties, which are essential for simulating light behavior and crucial for modeling real-world perturbations within the 3DB framework and other rendering engines. The availability of large-scale, physically accurate datasets of 3D objects, when integrated with differentiable rendering engines, could enable adversarial training with application-specific perturbations. This approach has the potential to significantly narrow the gap between the robustness of neural networks and their biological counterparts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156330</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms &amp; Systems for Differentiable Graphics Programming</title>
<link>https://hdl.handle.net/1721.1/156329</link>
<description>Algorithms &amp; Systems for Differentiable Graphics Programming
Bangaru, Sai Praveen
Differentiable graphics representations are now a centerpiece for learning-based approaches in inverse rendering, novel-view synthesis, data &amp; compute efficient rendering, and even 3D generative models. We see great advances in performance &amp; fidelity when mixing classical wisdom from existing primitives like meshes and textures, and novel primitives like tiny neural networks. This cross-pollination of ML &amp; graphics is key to these advances, but is held back by complications: existing frameworks like PyTorch are ill-suited to graphics programming both due to algorithmic problems, like discontinuities, and system-design problems that lead to poor performance &amp; expressive power. This thesis discusses several original approaches that were developed with generalizability in mind, to allow these approaches to apply broadly to different domains that struggle with the same problems. The first half of this thesis tackles discontinuities in a wide variety of applications, both (i) through a compiler that takes the problem-specific boundary sampling idea and automates it through compiler passes, and (ii) through the warped-area reparameterization method that handles discontinuities by entirely removing the requirement of such problem-specific boundary samplers. We show how this enables lightweight integration with existing renderers by reparameterizing a mesh-based path tracer and a neural SDF renderer to make them fully differentiable. The second half presents the SLANG.D compiler, an industry collaboration that resulted in a high-performance compiler for next-generation differentiable &amp; neural graphics systems. We discuss how the user-centric focus of SLANG.D's automatic differentiation system enables users to write large-scale differentiable graphics pipelines and reuse thousands of lines of existing rendering infrastructure without sacrificing performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156329</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Model Atoms Across Scales</title>
<link>https://hdl.handle.net/1721.1/156328</link>
<description>Learning to Model Atoms Across Scales
Fu, Xiang
The understanding of atoms and how they interact forms the foundation of modern natural science, as well as material and drug discovery efforts. Computational chemistry methods such as density functional theory and molecular dynamics simulation can offer an unparalleled spatiotemporal resolution for observing microscopic mechanisms and predicting macroscopic phenomena. However, many natural processes are extremely complex, requiring highly accurate modeling of many atoms for a considerable period to study. Computational chemistry methods may not be accurate or efficient enough, limiting the applicable domains and scales. Furthermore, discovering new materials and drugs requires novel candidate atomistic structures, which are conventionally based on heuristic or exhaustive search methods. This thesis presents machine learning methods for modeling atoms for tasks across different scales. First, we propose machine learning force fields that can decompose molecular interactions into fast and slow components, and then accelerate molecular simulations through multiscale integration. Second, we propose an end-to-end workflow for learning time-integrated coarse-grained molecular dynamics using multi-scale graph neural networks. Third, we propose diffusion models designed for periodic material structures that can enable the discovery of novel stable materials as well as material inverse design given a target property. The material diffusion model can be further extended to complex metal-organic frameworks with a multi-scale modeling approach.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156328</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unraveling the Reconfiguration Dynamics of Silver Nanowire Networks under Electrothermal Stress</title>
<link>https://hdl.handle.net/1721.1/156320</link>
<description>Unraveling the Reconfiguration Dynamics of Silver Nanowire Networks under Electrothermal Stress
Trebach, Adam Morris Willner
Transparent conducting electrodes (TCEs) are vital to many optoelectronic devices including solar panels and touch-screens. Indium Tin Oxide, the dominant TCE, comes with myriad problems including a shaky supply chain and brittleness. Networks of silver nanowires show many advantages over ITO, but they suffer from problems with durability. In this work, we focus on electrothermal instability and provide new insights into the electrothermal drivers of silver nanowire network failure. First, we borrow the notion of betweenness centrality from graph theory and test its applicability to simulated nanowire network systems. We also test several other centrality measures, some introduced here and designed specifically to describe nanowire networks. We find that betweenness centrality performs as well as or better than the other tested measures in terms of ability to identify network elements least important to conduction. We then execute a combined computational and experimental analysis of several small, computationally tractable silver nanowire networks. We create high-fidelity computational models of these specific systems and calculate electrical and graph theoretic properties to test whether we can predict failure. We find that standard circuit analysis outperforms graph theoretic measures, although neither proves predictive. We find strong evidence that electromigration is the primary driver of failure at contacts between silver nanowires, either by causing failure directly or by providing the kinetic perturbations required to initiate spheroidization. In the process of building our computational models of real nanowire networks, we develop several new image processing techniques that show great promise when applied to the challenging task of automatically extracting non-planar graphs from images.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156320</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Efficient Machine Learning for Computational Imaging</title>
<link>https://hdl.handle.net/1721.1/156319</link>
<description>Data-Efficient Machine Learning for Computational Imaging
Guo, Gavin (Zhen)
This thesis presents a method that improves data efficiency in computational imaging by incorporating prior knowledge from physical models into machine learning algorithms. Our approach optimizes image reconstruction from sparse and noisy datasets by utilizing physical constraints to guide deep learning models. This integration accelerates the imaging workflow, minimizes the need for large datasets, and improves resilience to measurement noise. The key insight is that physical model-based priors can regularize deep learning for more robust performance. Experiments demonstrate how this physics-assisted machine learning technique enables faster, more accurate, and reliable imaging. By facilitating high-quality imaging from limited data, this method has the potential to advance applications in healthcare, material studies, and industrial inspection. One of the highlights of our method is the application of real-time 2D imaging for improving 3D printing. High-performance manufacturing is achieved by training a neural model combined with a system of dynamic equations. The thesis offers a framework that seamlessly integrates physical insights and data-driven methods, enabling advances beyond what either approach could achieve alone.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156319</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model Acceleration for Efficient Deep Learning Computing</title>
<link>https://hdl.handle.net/1721.1/156318</link>
<description>Model Acceleration for Efficient Deep Learning Computing
Cai, Han
Large foundation models play a central role in achieving recent fundamental breakthroughs in artificial intelligence. By simultaneously scaling up the dataset and model size to an unprecedented level, these foundation models demonstrate remarkable performance in many areas such as protein structure prediction, image/video synthesis, code generation, and chatbots. However, their computation and memory costs grow dramatically, making these foundation models difficult to deploy in real-world applications, especially on resource-constrained edge devices. In addition, their prohibitive training cost also significantly hinders the development of new foundation models and raises concerns about the enormous energy consumption and CO2 emissions. To address these concerns, building effective model acceleration techniques is critical to closing the gap between supply and demand for computing. &#13;
&#13;
This thesis will cover three important aspects of model acceleration. First, we will discuss efficient representation learning, including EfficientViT (a new vision transformer architecture) for high-resolution vision and condition-aware neural networks (a new control module) for conditional image generation. Second, we will present hardware-aware acceleration techniques to create specialized neural networks for different hardware platforms and efficiency constraints. Third, we will introduce TinyTL, a memory-efficient transfer learning technique to enable on-device model customization. Through our design, we can significantly boost deep neural networks' efficiency on hardware without losing accuracy, making them more accessible and reducing their serving cost. For example, our model delivers 48.9x higher throughput on A100 GPU while achieving slightly better zero-shot instance segmentation performance than the state-of-the-art model. For conditional image generation, our approach achieves 52x computational cost reduction without performance degradation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156318</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining RF Machine Learning and RF Photonics to Enable New Analog Communications Architectures</title>
<link>https://hdl.handle.net/1721.1/156317</link>
<description>Combining RF Machine Learning and RF Photonics to Enable New Analog Communications Architectures
Davis III, Ronald A.
High-performance communications links and the increasing reliance on artificial intelligence are creating an exponentially increasing demand for higher data rates and computing performance. However, Moore's Law of exponentially growing computing capabilities has slowed, meaning that the traditional computing architecture has reached a bottleneck in processing performance, largely due to data movement. Considerable efforts have been made to create custom hardware to accelerate deep neural network training and inference. Among these efforts are optical neural networks (ONNs), a promising approach that excels at linear operations but struggles with nonlinear implementations and scalability. Here, an LTI Simulation Toolkit has been created to facilitate rapid iterative photonic circuit design and quickly evaluate ONN architectures. The LTI Toolkit interprets the electromagnetic (EM) waves as LTI inputs into a transfer function that obeys the analytical solutions of the photonic components. Thus, this LTI toolkit was a stepping stone to designing the primary result of this work: the ONN architecture called the multiplicative analog frequency transform optical neural network (MAFT-ONN), implementing single-shot matrix products and single-shot nonlinear activations using a single device for all neurons in a layer. We demonstrate its RF signal processing capabilities by experimentally implementing various signal processing operations like matched filters, Wiener filters, and linear signal estimation, as well as a 3-layer DNN for modulation classification of raw RF signals, making this the first recorded analog hardware accelerator to perform deep learning directly on raw RF signals. We also present a system-level analysis to quantitatively show that the MAFT architecture is hundreds of times faster than even the theoretical peak performance of modern digital architectures in communications systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156317</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated photonics for imaging: novel sources, architectures and applications</title>
<link>https://hdl.handle.net/1721.1/156316</link>
<description>Integrated photonics for imaging: novel sources, architectures and applications
de Cea Falco, Marc
Integrated photonics is considered a promising technology to tackle many current technological bottlenecks including data communication, computation, sensing and healthcare. Consequently, significant efforts have been dedicated to the study of integrated optical devices and systems for these applications.&#13;
However, despite its disruptive potential, the use of integrated photonics for imaging applications has only recently started to be considered, and its scope has mostly been limited to Light Detection and Ranging (LIDAR) for autonomous driving. &#13;
This thesis deals with the development and experimental demonstration of integrated optical systems for imaging applications. In particular, in the first part of this thesis we address the realization of arrays of silicon LEDs (both free space coupled and waveguide coupled) in CMOS photonics processes. We show that such sources can be used to realize compact lensless holographic microscopes, and we leverage CMOS integration to demonstrate a novel reflection holography architecture. We also show how silicon waveguide coupled light sources can be used to realize truly monolithic, highly multiplexed refractive index sensors at low cost and small form factor. In the second part of this thesis we explore the scaling limits of traditional Optical Phased Arrays (OPAs) for beam steering applications and study the use of non-uniformly spaced OPAs to circumvent such limitations.&#13;
This thesis lays out novel light sources, architectures and applications that leverage silicon photonics platforms to realize compact, low cost and high performance imaging systems. This serves as a stepping stone toward bringing currently expensive and specialized imaging systems to the consumer market.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156316</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Representation Learning from Synthetic Data</title>
<link>https://hdl.handle.net/1721.1/156315</link>
<description>Visual Representation Learning from Synthetic Data
Fan, Lijie
Representation learning is crucial for developing robust vision systems. The effectiveness of this learning process largely depends on the quality and quantity of data. Synthetic data presents unique advantages in terms of flexibility, scalability, and controllability. Recent advances in generative modeling have enabled the synthesis of photorealistic images and high-quality text, drastically increasing the viability of synthetic data. Despite these advancements, the application of synthetic data for representation learning and visual recognition tasks lags behind, with a noticeable performance gap between models trained on synthetic versus real data. In this thesis we demonstrate our recent efforts to close this gap and utilize synthetic data to train state-of-the-art representation models. We begin by utilizing synthetic texts from large language models to enhance the training of vision-language models. Next, we explore synthetic images generated by text-to-image models, examining the scaling laws applicable to these images when used for supervised model training. We also introduce a multi-positive contrastive loss specifically designed for synthetic images, demonstrating their advantages over real images in representation learning. Finally, we propose a novel framework for training vision models exclusively with synthetic texts and images, which achieves superior performance, surpassing state-of-the-art models trained on real images in tasks including fine-grained classification and semantic segmentation. These works establish a robust foundation for advancing generative models in representation learning and solving key computer vision tasks, and mark an advance in utilizing synthetic data for improved representation learning across the data-centric AI ecosystem.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156315</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An obligate intracellular bacterial pathogen forms extensive and stable interkingdom contacts with the endoplasmic reticulum</title>
<link>https://hdl.handle.net/1721.1/156314</link>
<description>An obligate intracellular bacterial pathogen forms extensive and stable interkingdom contacts with the endoplasmic reticulum
Acevedo-Sánchez, Yamilex
Intracellular pathogens employ various mechanisms to manipulate host membranes, a critical strategy for subverting host pathways and advancing their life cycle. To this end, Rickettsia parkeri directly engages the host plasma membrane to infiltrate target cells and spread to neighboring ones. However, whether this pathogen directly engages other cellular membranes during infection remains unclear. Interestingly, a report showed a single example of rough ER membranes encircling a rickettsial isolate, reminiscent of membrane contact sites (MCSs), where organelle membranes closely interact. MCSs play pivotal roles in maintaining cellular homeostasis and facilitating communication and trafficking within the cell. While vacuolar bacterial pathogens and viruses are known to exploit MCSs, the formation of these sites by cytosolic pathogens has not been extensively investigated. In my research, I demonstrate that R. parkeri interacts with the rough ER. Moreover, I show that this interaction has membrane contact site-like properties. These contacts are stable and extensive and are regulated by VAP proteins. Thus, I propose that R. parkeri forms a direct interkingdom MCS with the ER. Even though the functional implications of this interaction remain unknown at the culmination of my thesis, my findings offer a more nuanced understanding of the cellular landscape during R. parkeri infection, showing that this pathogen engages additional cellular membranes beyond the host plasma membrane. Furthermore, this discovery highlights the importance of studying neglected pathogens like R. parkeri and utilizing them as valuable tools for discovering new aspects of biology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156314</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Considerations For the Deployment of Clinical NLP Systems</title>
<link>https://hdl.handle.net/1721.1/156307</link>
<description>Practical Considerations For the Deployment of Clinical NLP Systems
Lehman, Eric
Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models, trained primarily with general web text, are the right tool in highly specialized, safety-critical domains such as healthcare. A healthcare system attempting to automate a clinical task must weigh all approaches with respect to safety, efficacy, and efficiency. This thesis investigates the challenges and implications of implementing LLMs in clinical settings, focusing on the three considerations listed above: safety, efficacy, and efficiency. We first explore the potential biases that might be introduced into downstream patient safety by using LLMs in a zero- or few-shot setting and find that LLMs can propagate, or even amplify, harmful societal biases in a number of clinical tasks. Then, we examine the privacy considerations of pretraining a language model on protected health information (PHI)-bearing clinical text and find that simple probing methods are unable to meaningfully extract sensitive information from an encoder-only language model pretrained on non-deidentified electronic health record (EHR) notes. Finally, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. We show that relatively small specialized clinical models are substantially more effective than larger models trained on general text used through in-context learning. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We argue that using a clinical text-specific pretrained language model allows for an efficient, effective, and privacy-conscious approach, enabling a tailored and ethically responsible application of AI in healthcare.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156307</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizations of how neural networks learn</title>
<link>https://hdl.handle.net/1721.1/156306</link>
<description>Characterizations of how neural networks learn
Boix-Adsera, Enric
Training neural network architectures on Internet-scale datasets has led to many recent advances in machine learning. However, the mechanisms underlying how neural networks learn from data are largely opaque. This thesis develops a mechanistic understanding of how neural networks learn in several settings, as well as new tools to analyze trained networks. First, we study data where the labels depend on an unknown low-dimensional subspace of the input (i.e., the multi-index setting). We identify the “leap complexity”, which is a quantity that we argue characterizes how much data networks need in order to learn. Our analysis reveals a saddle-to-saddle dynamic in network training, where training alternates between loss plateaus and sharp drops in the loss. Furthermore, we show that network weights evolve such that the trained weights are a low-rank perturbation of the original weights. We observe this effect empirically in state-of-the-art transformer models trained on image and vision data. Second, we study the ability of language models to learn to reason. On a family of “relational reasoning” tasks, we prove that modern transformers learn to reason with enough data, but classical fully-connected architectures do not. Our analysis suggests small architectural modifications that improve data efficiency. Finally, we construct new tools to interpret trained networks. These are: (a) a definition of distance between two models that captures their functional similarity, and (b) a distillation algorithm to efficiently extract interpretable decision-tree structure from a trained model when possible.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156306</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wingsail Design Methodology and Performance Evaluation Metrics for Autonomous Sailing</title>
<link>https://hdl.handle.net/1721.1/156304</link>
<description>Wingsail Design Methodology and Performance Evaluation Metrics for Autonomous Sailing
Cole, Blake Ian Barry
This dissertation explores an innovative approach to the design of aerodynamically actuated wingsails, along with advancements in marine vehicle autonomy focused on vessel tracking and collision avoidance. The first segment of this research introduces a deterministic wingsail design optimization framework that leverages geometric programming to efficiently generate optimal wingsail designs. The proposed methodology is then validated through the development and deployment of a bespoke wingsail data acquisition system called the WingDAQ, which enables quantitative analysis of the unsteady forces affecting wingsail performance in realistic sailing conditions. The results highlight the limitations of traditional aerodynamic models due to unsteady flow conditions, and suggest that lighter wingsails designed with higher lift-to-drag ratios and faster natural frequencies enhance sailing performance. This thesis also introduces a novel instantiation of the unscented Kalman filter (UKF) that is particularly well suited to long-distance surface vessel tracking, and facilitates affordable collision avoidance capabilities on autonomous surface vessels. One such low-cost collision avoidance system is presented, utilizing real-time data from the Automatic Identification System (AIS) and the behavior-based autonomy middleware, MOOS-IvP. Field tests confirm the robustness of this framework, establishing a foundation for the integration of physics-based design optimization and adaptive autonomy for wind-powered marine vehicles.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156304</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Online Collaborative Learning: Designs for Effective In-Situ Discussion and Engagement in Large-Scale Learning Environments</title>
<link>https://hdl.handle.net/1721.1/156300</link>
<description>Enhancing Online Collaborative Learning: Designs for Effective In-Situ Discussion and Engagement in Large-Scale Learning Environments
Almahmoud, Jumana
The challenge of scaling online discussions in education persists, with many platforms struggling to maintain meaningful engagement and effective moderation as course sizes increase. This often results in diminished discourse quality, making it difficult for students to identify relevant threads and for instructors to guide conversations. The new Nota Bene (NB) platform introduces a curated reading experience to tackle these issues by leveraging the design affordances of social annotation tools. Designed to curate and elevate meaningful discourse, NB aligns established pedagogical theories with innovative technology, enhancing collaborative learning through annotations and discussions. It encourages an environment of inquiry and community among students. NB addresses specific educational challenges by integrating features such as multi-layered visibility, real-time interactions, and curated comment prompts. Findings suggest that improving the user experience on NB with these features significantly improves students' engagement with each other and the reading material, as well as discourse quality, supported by pedagogical principles such as social constructivist learning theories.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156300</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parameterized Relaxations for Circuits and Graphs</title>
<link>https://hdl.handle.net/1721.1/156299</link>
<description>Parameterized Relaxations for Circuits and Graphs
Akmal, Shyan
What makes some problems hard to solve, and others easy? In situations where complexity-theoretic hypotheses rule out the possibility of fast algorithms for problems, are there nonetheless instances for which we can evade intractability and still design efficient algorithms? In this thesis, we investigate these questions from the perspective of parameterized relaxations. We consider important computational problems on circuits and graphs, and design fast algorithms for relaxed versions of these tasks that highlight tractable instances of problems which are provably hard in general. &#13;
&#13;
On circuits, we tackle the Majority-SAT problem, a task related to counting solutions to Boolean formulas in conjunctive normal form (i.e., CNF formulas), which has been extensively studied in areas related to probabilistic planning and inference. It has been known since the problem’s introduction in the 1970s that Majority-SAT is complete for the class PP (intuitively, the complexity class of decision versions of counting tasks, believed to contain very difficult problems), and so, under standard conjectures in complexity theory, cannot be solved in polynomial time. We nonetheless show that Majority-SAT can be solved in optimal linear time when its inputs are restricted to be k-CNF formulas (i.e., CNF formulas where every clause has width at most k), for any constant integer k ≥ 1. This is surprising, since most circuit satisfiability problems remain hard even when restricted to 3-CNF formulas. Indeed, prior to our work, it was widely conjectured that Majority-SAT should be PP-complete on 3-CNFs. Beyond overturning this conjecture, we also characterize the complexity of many additional variants of Majority-SAT on bounded-width formulas.&#13;
&#13;
On graphs, we tackle the All-Pairs Connectivity and Disjoint Shortest Paths problems. In the All-Pairs Connectivity (APC) problem, we are given an unweighted, directed graph G on n vertices, and are tasked with computing the maximum flow between each pair of vertices in G. Despite significant research on the problem, the fastest algorithm for APC in dense directed graphs is the naive n⁴⁺ᵒ⁽¹⁾ time approach, which simply runs a fast maximum flow algorithm separately for each pair of nodes. Moreover, the Strong Exponential Time Hypothesis (SETH) implies that APC cannot be solved in truly subcubic time. We consider a relaxation of APC, the k-Bounded All-Pairs Connectivity (k-APC) problem, for integer k ≥ 1, where for each pair of nodes (s,t) in G, we must compute the maximum flow from s to t exactly if the maximum flow value is less than k, but if the maximum flow is at least k we merely need to report that the flow value is “large” instead of computing its exact value. We present an algorithm solving k-APC in Õ((kn)^ω) time, where ω &lt; 2.3716 is the exponent of matrix multiplication. This is subcubic even for small k (evading the SETH lower bound for the general APC problem), and runs in Õ(n^ω) time for all constant k, which is already optimal for the 1-APC problem under conjectures in fine-grained complexity. Before our work, no algorithm was even known for 3-APC that ran faster than simply solving the general APC problem directly. &#13;
&#13;
In the Disjoint Shortest Paths (DSP) problem, we are given a graph G on n vertices, with specified source nodes s₁,...,sₖ and target nodes t₁,...,tₖ, and are tasked with determining if G contains internally vertex-disjoint sᵢ ⇝ tᵢ shortest paths. This problem is NP-hard in general, if k can grow with n. We study k-DSP, the DSP problem parameterized by the number of terminal pairs, for small k. We show that 2-DSP can be solved in optimal linear time over weighted undirected graphs and directed acyclic graphs. Prior to our work, the fastest algorithm for 2-DSP over weighted undirected graphs took O(n⁷) time, and the fastest algorithm over weighted, dense directed acyclic graphs took O(n³) time.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156299</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Cost Electronic Microfluidics for Multiplexed Point-of-Care Biomarker Detection</title>
<link>https://hdl.handle.net/1721.1/156298</link>
<description>Low-Cost Electronic Microfluidics for Multiplexed Point-of-Care Biomarker Detection
Kikkeri, Kruthika
As we have seen in recent years, point-of-care (PoC) systems are vital elements in healthcare, as they can aid in disease detection, monitoring, and treatment, and even inform public policy. However, while qualitative (yes/no) PoC sensors are abundant (e.g., pregnancy tests, rapid COVID-19 tests), low-cost, automated, quantitative PoC platforms are limited. Yet, given the importance of quantitative biomarker detection for nuanced analysis of patient health in chronic and fast-acting diseases, there remains a persistent need for PoC systems capable of cost-effective detection of low-abundance markers in blood. This thesis explores methodologies for the development of an automated, low-cost system for the measurement of protein biomarkers in blood. By prioritizing accessibility, affordability, and automation, I focus on addressing unmet needs in PoC platform development which often prevent translation of these systems to PoC settings. Focusing primarily on cytokine biomarkers, notably IL-6, this thesis proposes modular solutions designed for seamless integration into existing clinical workflows. Chapter 2 introduces a sample-to-answer PoC workflow, consolidating blood testing steps through at-site sample collection, on-chip blood-to-plasma separation, and a bead-based electrochemical assay. Leveraging microfluidics and electronics, this system offers a rapid 30-minute assay time. It was validated by measuring spiked IL-6 concentrations in human blood, with applications demonstrated in CAR-T patient monitoring and small-molecule detection for drug regulation. Chapter 3 introduces Microfluidics via Inkjet-Printing and Xurography (MINX), the first rapid prototyping technique which combines tape-based microfluidics with multiplexed electrodes. MINX was employed to fabricate low-cost PoC biosensors for detecting cytokine biomarkers.
In Chapter 4, this modular fabrication method was extended to create the first integrated PoC system featuring tape-based microfluidic valves for automated fluidic and electrical controls. This MINX PoC platform was validated through detection of IL-6 in human plasma. Finally, Chapter 5 outlines future directions, emphasizing real-time dynamic control to enhance assay tunability. These advancements in PoC platforms hold promise for improving protein biomarker detection accessibility and affordability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156298</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Improving Representational Robustness of Machine Learning Models</title>
<link>https://hdl.handle.net/1721.1/156297</link>
<description>Understanding and Improving Representational Robustness of Machine Learning Models
Ko, Ching-Yun
The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public. In this thesis, we present a systematic study of the understanding and improvement of several machine learning models, including smoothed models and generic representation networks. Specifically, we focus on representational robustness, which we define as the “robustness” (or generally trustworthy properties) in the induced hidden space of a given network. For a generic representation network, this corresponds to the representation space itself, while for a smoothed model, we treat the logits of the network as the target space. Representational robustness is fundamental to many trustworthy AI areas, such as fairness and robustness. In this thesis, we discover that the certifiable robustness of randomized smoothing comes at the cost of class unfairness. We further analyze ways to improve the training process of the base models and their limitations. For generic non-smooth representation models, we find a link between self-supervised contrastive learning and supervised neighborhood component analysis, which naturally allows us to propose a general framework that achieves better accuracy and robustness. Furthermore, we observe that the current evaluation practice for foundational representation models involves extensive experiments across various real-world tasks, which are computationally expensive and prone to test-set leakage. As a solution, we propose a more lightweight, privacy-preserving, and sound evaluation framework for both vision and language models by utilizing synthetic data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156297</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonparametric High-dimensional Models: Sparsity, Efficiency, Interpretability</title>
<link>https://hdl.handle.net/1721.1/156296</link>
<description>Nonparametric High-dimensional Models: Sparsity, Efficiency, Interpretability
Ibrahim, Shibal
This thesis explores ensemble methods in machine learning, a technique that builds a predictive model by jointly training simpler base models. It examines three types of ensemble methods: additive models, tree ensembles, and mixtures of experts. Each ensemble method is characterized by a specific structure: additive models can involve base learners with single or pairwise covariates, tree ensembles use a decision tree as a base learner, and mixtures of experts typically employ neural networks. The focus of this thesis is on considering various sparsity and structural constraints within these methods and developing optimization-based approaches to enhance training efficiency, inference, and/or interpretability.&#13;
&#13;
In the first part, we consider additive models with interactions under component selection constraints and additional structural constraints, e.g., hierarchical interactions. We consider different optimization-based formulations and propose efficient algorithms to learn a good subset of components. We develop two toolkits that are scalable to large numbers of samples and large sets of pairwise interactions.&#13;
&#13;
In the second part, we consider tree ensemble learning. In this setting, we consider a flexible and efficient formulation of differentiable tree ensemble learning. We study flexible loss functions, multitask learning, etc. We also consider end-to-end feature selection in tree ensembles, i.e., we perform feature selection while training tree ensembles. This is in contrast to popular tree ensemble learning toolkits, which perform post-training feature selection based on feature importances. Our toolkit provides substantial improvements in predictive performance for a desired feature budget.&#13;
&#13;
In the third part, we consider sparse gating in mixtures of experts. Sparse Mixture of Experts is a paradigm where a subset of experts (typically neural networks) is activated for each input sample. This is used to scale training as well as inference of large-scale vision and language models. We consider multiple approaches to improve sparse gating in mixture-of-experts models. Our new approaches show improvements in large-scale experiments on machine translation as well as distillation of pre-trained models on natural language processing tasks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156296</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic Tools for Neural Interfacing</title>
<link>https://hdl.handle.net/1721.1/156295</link>
<description>Magnetic Tools for Neural Interfacing
Koehler, Florian
Neurological disorders have a significant impact on millions of people worldwide, leading to personal hardship and high healthcare costs. Our ability to improve these conditions hinges on our understanding of the nervous system and how it is affected. To improve our knowledge, new tools are required to study the brain and improve how we modulate neural activity. Current methods carry significant drawbacks, such as the requirement for highly invasive surgeries, shallow penetration depth, and low spatial resolution. Methods using magnetic fields have the potential to overcome these challenges since magnetic fields have the distinct advantage of passing through the body without attenuation. They can be employed as a signal carrier to deliver stimuli anywhere in the body where magnetosensitivity is introduced. By employing electrochemistry, materials science, molecular biology, and electrical engineering, this thesis aims to explore new means to utilize magnetic fields for neurostimulation and improve upon existing methods. The first project is focused on the investigation of the magnetic field effect. The effect has been studied in magnetopharmacology and is hypothesized to be at the core of how animal species perceive the earth’s magnetic field. We explore whether its underlying radical pair mechanism could be used for magnetic neurostimulation. The second project employs magnetic nanotransducers for neurostimulation. These functionalized, biocompatible magnetic nanoparticles translate magnetic fields into heat or mechanical stimuli that target specific receptors and thus modulate cellular activity. We initially focus on magnetothermal neurostimulation and describe how improvements to the electronics and power delivery of the high-power, high-frequency setup have led to reliable field conditions used to investigate magnetothermally induced nerve growth. 
Then we shift focus to introduce a novel way to genetically target cells for mechanical neurostimulation and show how it can be used to activate cells in vitro.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156295</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smoothness and Adaptivity in Nonlinear Optimization for Machine Learning Applications</title>
<link>https://hdl.handle.net/1721.1/156294</link>
<description>Smoothness and Adaptivity in Nonlinear Optimization for Machine Learning Applications
Li, Haochuan
Nonlinear optimization has become the workhorse of machine learning. However, our theoretical understanding of optimization in machine learning is still limited. For example, classical optimization theory relies on assumptions, like bounded Lipschitz smoothness of the loss function, which are rarely met in machine learning. Moreover, existing theory cannot fully explain why adaptive methods outperform gradient descent in certain machine learning tasks like training transformers. In this thesis, to bridge this gap, we propose more general smoothness conditions that are closer to machine learning practice and study the convergence of popular classical and adaptive methods under such conditions. Our convergence results improve over existing ones and also provide new insights into understanding the role of adaptivity in optimization for machine learning applications. First, inspired by some recent works and insights from deep neural network training, we propose a generalized non-uniform smoothness condition with the Hessian norm bounded by a function of the gradient norm almost everywhere. We develop a simple, yet powerful analysis technique that bounds the gradients along the trajectory, thereby leading to stronger results for both convex and non-convex optimization problems. In particular, we obtain the classical convergence rates for gradient descent (GD), stochastic gradient descent (SGD), and Nesterov’s accelerated gradient method (NAG) in the convex and non-convex settings under this general smoothness condition. In addition, the new analysis technique also allows us to obtain an improved convergence result for the Adaptive Moment Estimation (Adam) method. Despite the popularity and efficiency of Adam in training deep neural networks, its theoretical properties are not yet fully understood, and existing convergence proofs require unrealistically strong assumptions, such as globally bounded gradients, to show the convergence to stationary points. 
In this thesis, we show that Adam provably converges to stationary points under far more realistic conditions. In particular, we do not require the strong assumptions made in previous works and also consider the generalized smoothness condition. However, the above results cannot explain why adaptive methods like Adam significantly outperform SGD in machine learning applications like training transformers, as the convergence rate we have obtained for Adam is not faster than that of SGD. Previous research has empirically observed that adaptive methods tend to exhibit much smaller directional smoothness along the training trajectory compared to SGD. In this thesis, we formalize this observation into a more rigorous theoretical explanation. Specifically, we propose a directional smoothness condition, under which we prove faster convergence of memoryless Adam and RMSProp in the deterministic setting. Notably, our convergence rate is faster than the typical rate of gradient descent, providing new insights into the benefits of adaptivity in training transformers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156294</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Human-Inspired Methods for Extending Advances in Computer Vision to Data- and Compute-Constrained Environments</title>
<link>https://hdl.handle.net/1721.1/156293</link>
<description>Human-Inspired Methods for Extending Advances in Computer Vision to Data- and Compute-Constrained Environments
Brandt, Laura E.
Recent developments in computer vision have often relied on access to big data, powerful compute, or both. City-based systems, such as self-driving cars and airport checkpoints, have benefited greatly from these advances, so much so that automated cars and security checks are beginning to see true deployment in modern society. In contrast, robots and autonomous systems in data- and compute-constrained environments, like remote wilderness regions or off-Earth, are still relying on pre-deep-learning-era computer vision algorithms. Robots in the most challenging of environments - and, correspondingly, the environments most crucial to automate - have been left behind by modern computer vision.&#13;
&#13;
In this dissertation, I discuss several human-inspired methods that I have studied during my time at MIT, with the goal of closing the gap between modern computer vision and data- and compute-constrained environments. I explore methods for collecting good - but small - datasets, upsampling lazily-computed visual estimates to improve their quality, and identifying which image samples (and sample regions) a learned model is 'uncertain' about. Finally, I share sketches of several human-inspired paradigms for leveraging these new tools to make vision models more efficient and generalisable, which I hope can serve as a starting point for future efforts to close the gap and bring modern vision to deployment in data- and compute-constrained environments.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156293</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid Expansion and Specialization of the Bitter Taste Receptor Family in Amphibians</title>
<link>https://hdl.handle.net/1721.1/156282</link>
<description>Rapid Expansion and Specialization of the Bitter Taste Receptor Family in Amphibians
Higgins, Kathleen W.
In classic sensory literature, bitterness is viewed as an early-warning system for toxic or spoiled foods. In vertebrates, bitterness is detected with a family of intronless G protein-coupled receptors called TAS2Rs. TAS2Rs are well studied in the human and mouse mouth, but recent studies have explored additional species and reported extra-oral expression of a subset of receptors. Here, we systematically search 680 vertebrate genomes, identifying 9,291 TAS2Rs. We determine that frogs and salamanders (“batrachians”) have an elevated number of TAS2Rs, and explore several processes that may have contributed to this increase through enhancement of non-allelic homologous recombination. Specifically, we find differences related to location along the chromosome, presence and size of TAS2R clusters, neighboring repeat elements, and proportion of copy-number-constrained genes. We also find that total TAS2R count is metastable between closely related species, although there is evidence of rapid gene turnover and/or gene conversion. Next, we narrow our analysis to five scientifically and ecologically relevant batrachian species, and perform deep RNA sequencing of seven oral and extra-oral tissues. We find that the tongue is the primary site of TAS2R expression, but that a significant number of receptors are expressed extra-orally. Indeed, total genomic TAS2R count is directly proportional to the number of TAS2R receptors that are expressed outside the tongue. This suggests that the expanded TAS2R repertoire in batrachians may relate to tissue-specific specialization. An in vitro functional assay shows that batrachian TAS2Rs are able to respond to a host of classic bitterants, in addition to several novel chemicals. We find that several frogs have liver-expressed receptors for the hepatotoxin aflatoxin, providing a unique example of a toxin detected at its target organ. 
We find that two toad toxins are broadly recognized by receptors in the mouth, intestines, and liver, providing clues about the dietary and post-prandial response to toxin ingestion. We also find that the cane toad has skin receptors recognizing its own skin-expressed toxin, marinobufagenin, raising the possibility of an auto-regulatory pathway.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156282</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elaboration of the Homer1 Recognition Landscape Reveals Incomplete Divergence of Paralogous EVH1 Domains</title>
<link>https://hdl.handle.net/1721.1/156281</link>
<description>Elaboration of the Homer1 Recognition Landscape Reveals Incomplete Divergence of Paralogous EVH1 Domains
Singer, Avinoam A.
In recent decades, the structure-function paradigm has been challenged by the discovery that many proteins contain unstructured regions that are necessary for the proper function of the proteins that contain them. Some of these intrinsically disordered regions are responsible for mediating protein-protein interactions that are critical for cellular processes such as protein localization, supramolecular assembly, and cell signaling. Many of these interaction sites are encoded as modular sequences of 3-10 residues that bind to recognition domains. The ability to systematically identify interactions mediated by these Short Linear Motifs (SLiMs) is instrumental to unraveling the many biological processes regulated by this class of interactions. Characterizing the binding preferences of specific SLiM-binding domains is key both to predicting interactions and to disrupting them for therapeutic purposes. In this study, I apply high-throughput screening of a proteome-derived peptide library to investigate the SLiM-binding properties of the synaptic scaffolding protein Homer1. In doing so, I find that the reported Homer1 motif fails to capture the diversity of sequences that can bind to its EVH1 domain, and I show how sequences flanking the core motif can modulate the affinity and specificity of these interactions. Surprisingly, Homer1 preferentially binds to sequences that simultaneously match the motif of a related EVH1 domain from the Ena/VASP family of actin regulatory proteins. An analysis of EVH1 binding preferences in distantly related orthologs suggests that this overlapping preference originated in the ancestral domain that gave rise to the distinct Ena/VASP and Homer protein families. The incomplete divergence between Ena/VASP and Homer, coupled with the existence of proteomic sequences capable of binding to both, indicates that competition between these families is likely averted through external regulation that extends beyond the binding preferences inherent to each family.
Alternatively, in certain instances, such competition may serve as an advantageous mechanism for molecular decision making, as has been observed with other SLiM binding domains.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156281</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Interpretation of Machine Learning Models</title>
<link>https://hdl.handle.net/1721.1/156277</link>
<description>Automated Interpretation of Machine Learning Models
Hernandez, Evan
As machine learning (ML) models are increasingly deployed in production, there is a pressing need to ensure their reliability through auditing, debugging, and testing. Interpretability, the subfield that studies how ML models make decisions, aspires to meet this need but traditionally relies on human-led experimentation or on human priors about what the model has learned. In this thesis, I propose that interpretability should evolve alongside ML by adopting automated techniques that use ML models to interpret ML models. This shift toward automation allows for more comprehensive analyses of ML models without requiring human scrutiny at every step, and the effectiveness of these methods should improve as the ML models themselves become more sophisticated. I present three examples of automated interpretability approaches: using a captioning model to label features of other models, manipulating an ML model’s internal representations to predict and correct errors, and identifying simple internal circuits by approximating the ML model itself. These examples lay the groundwork for future efforts in automating ML model interpretation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156277</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utilizing in vivo genome-wide CRISPR/Cas9 screens in immunocompetent mouse models to identify novel tumor liabilities and explore mechanisms of resistance to Chimeric Antigen Receptor T-cell therapy</title>
<link>https://hdl.handle.net/1721.1/156275</link>
<description>Utilizing in vivo genome-wide CRISPR/Cas9 screens in immunocompetent mouse models to identify novel tumor liabilities and explore mechanisms of resistance to Chimeric Antigen Receptor T-cell therapy
Koch, Catherine E.
Despite significant advancements in the understanding of cancer and the development of countless novel therapeutic modalities, resistance and relapse remain persistent and pervasive issues across tumor types, even in the setting of cutting-edge treatments. The situation is particularly dire for tumors like glioblastoma (GBM), where most patients, even with standard-of-care treatment, die within one year of diagnosis. This lack of effective treatment for GBM reflects formidable challenges to therapeutic design, including vast intra- and inter-tumoral heterogeneity, highly invasive growth patterns, and a hostile, immunosuppressive tumor microenvironment (TME). Fortunately, with the advent of CRISPR/Cas9 screening platform technologies, mechanisms of treatment resistance and novel therapeutic targets can be efficiently interrogated in an unbiased, high-throughput manner. Furthermore, adaptation of these platforms for use in vivo allows screens to be conducted in models that more accurately recapitulate the complexities of clinical disease. Even more importantly, these results may reveal unique tumor dependencies that would go unnoticed in vitro, potentially leading to more clinically relevant insights.&#13;
This thesis details the results of three genome-wide pooled CRISPR/Cas9 screens conducted in parallel in vitro and in vivo in immunocompetent murine models of GBM or B-cell Acute Lymphoblastic Leukemia (B-ALL). Importantly, each screen is the first of its kind, with no prior genome-wide screens having been conducted in vivo in an immunocompetent murine model of GBM. The two screens conducted in GBM sought to better understand fundamental in vivo-specific dependencies of GBM tumors in order to identify potential novel therapeutic targets or explore mechanisms of resistance to Chimeric Antigen Receptor T-cell (CAR-T) therapy. Prior to conducting these screens, no identified murine GBM models were suitable for in vivo CRISPR/Cas9 screening; thus, a modified model was generated, the details of which are also included in this thesis. Results from these screens highlighted the heme biosynthesis pathway as an in vivo-specific dependency and potential novel therapeutic target. In addition, processes of cell motility and migration were implicated in promoting resistance to CAR-T therapy in GBM. The third screen, conducted in B-ALL, sought to explore mechanisms of resistance to CAR-T therapy and in doing so identified interferon gamma pathway components, including the downstream target Qa-1b, as important mediators of resistance to CAR-T therapy in vivo. Interestingly, for all screens, comparison of in vivo and in vitro results highlighted distinct biological processes with little to no overlap in hits. In addition, while each screen produced unique results, the same iterative screening workflow and a novel, small-pooled genome-wide guide RNA (gRNA) library were utilized. Ultimately, the ability of this screening workflow to produce robust results across different conditions and mouse models highlights its versatility and potential broad applicability.
From a therapeutic standpoint, insights derived from this collection of screens could aid in guiding development of novel treatment strategies for GBM and synergistic CAR-T combination therapies to improve efficacy in the setting of GBM or B-ALL. Additionally, deeper mechanistic exploration of these datasets could yield a better understanding of the complex nature of GBM biology and insight into the function of CAR-T therapy across both hematologic and solid tumor settings.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156275</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alkyl guanidines and nitroguanidines</title>
<link>https://hdl.handle.net/1721.1/156250</link>
<description>Alkyl guanidines and nitroguanidines
Elderfield, Robert Cooley, 1904-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1930; Vita.; Includes bibliographical references (leaves 93-99).
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156250</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The generalized Eilenberg-Moore spectral sequence.</title>
<link>https://hdl.handle.net/1721.1/156247</link>
<description>The generalized Eilenberg-Moore spectral sequence.
Kettner, James Earl.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1972; Vita.; Bibliography: leaves 69-70.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156247</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of cascades in compressible flows</title>
<link>https://hdl.handle.net/1721.1/156244</link>
<description>Design of cascades in compressible flows
Yeh, Hsuan.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1950; Bibliography: leaves 120-122.
</description>
<pubDate>Sun, 01 Jan 1950 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156244</guid>
<dc:date>1950-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of hydrostatic pressure on grain boundary self-diffusion in lead</title>
<link>https://hdl.handle.net/1721.1/156241</link>
<description>The effect of hydrostatic pressure on grain boundary self-diffusion in lead
Pober, Richard L.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1971; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156241</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gene expression studies using retroviral vectors</title>
<link>https://hdl.handle.net/1721.1/156240</link>
<description>Gene expression studies using retroviral vectors
Cone, Roger David.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1986; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156240</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An empirical study of the economics of bargaining.</title>
<link>https://hdl.handle.net/1721.1/156239</link>
<description>An empirical study of the economics of bargaining.
Grubert, Harry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1968; Vita.; Bibliography: leaves 105-108.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156239</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Value of Urban Amenities: Analyzing the Changing Dynamics of Consumer Goods, Mobility, and Green Spaces in Cities</title>
<link>https://hdl.handle.net/1721.1/156131</link>
<description>The Value of Urban Amenities: Analyzing the Changing Dynamics of Consumer Goods, Mobility, and Green Spaces in Cities
Wang, Binzhe
Urban amenities, ranging from diverse consumer goods and services to pleasant environmental settings, reliable public services, and convenient mobility options, make cities appealing places to live. The evolution of these amenities has been a dynamic process influenced by technological advancements, shifts in lifestyles, and market changes. The provision of amenities constantly involves the interplay between the private and public sectors, making it an important question in city planning. This dissertation explores these critical themes through three essays, each utilizing a distinct type of amenity to demonstrate the processes, dynamics, and underlying reasons for the changing landscape of urban amenities. Essay 1 focuses on an everyday amenity, retail stores, in the context of the working-from-home (WFH) transition, highlighting the interdependency of urban density and consumer amenities. I use mobile-device foot traffic data for retail establishments in 5,295 cities and towns, supplemented with online expenditure and retail rental market data, to explore the reorganization of retail activities in the U.S. Essay 2 investigates flexible mobility, an amenity enabled by ride-hailing platforms and prevalent primarily in dense urban areas. I discuss the mobility consequences and economic implications of regulating ride-hailing services, using evidence from the ride-hailing congestion tax in Chicago. Essay 3 explores the creation of urban green spaces, a fundamental environmental amenity. I analyze the relationship between public green spaces and gated green spaces within residential complexes in Xiamen, China, and argue that, in the process of rapid urbanization, local governments may delegate the responsibility of providing green spaces to private developers, leading to the prevalence of supersized gated communities in Chinese cities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156131</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Repurposing Colonialism: Postcoloniality and the Politics of World Heritage</title>
<link>https://hdl.handle.net/1721.1/156105</link>
<description>Repurposing Colonialism: Postcoloniality and the Politics of World Heritage
Janakiraman, Aarthi
Since the turn of the 21st century, countries in the Global South have increasingly sought to inscribe their heritage sites on UNESCO’s World Heritage List. World Heritage listing is considered prestigious because it brings global recognition, secures political legitimacy, boosts tourism, and attracts economic investment. Within the Indian Ocean Region, many nations continue to seek World Heritage status for their colonial heritage, despite a painful history of decolonization coupled with rising nationalist sentiments, raising the question: how does highlighting colonial-era heritage as World Heritage serve postcolonial societies? &#13;
&#13;
Through a transnational comparative study of colonial-era World Heritage sites in South and Southeast Asia, this dissertation examines diverging approaches to postcolonial nation-building using colonial-era heritage. While all forms of heritage are instrumentalized in furthering nation-building agendas, I argue that the production of colonial-era World Heritage serves three distinct uses for postcolonial societies: to signal modernity, manage ethno-racial politics, and preserve elite privilege. Through this triad of uses, I demonstrate how spatial manifestations of colonial power are coopted by different actors and legitimized through global institutions to further the present-day agendas of postcolonial elites. Considering nation-building through the lens of heritage reveals the lived inequities and power structures of postcolonial societies that are preserved through urban conservation.&#13;
&#13;
This dissertation probes the role of colonial-era heritage in postcolonial societies through an in-depth consideration of two comprehensive case studies: the Singapore Botanic Gardens, and the Victorian Gothic and Art Deco ensemble of Mumbai. Using a version of Mukhija’s (2010) ‘N of one plus some’ approach (here, an ‘N of two plus some’ approach), I also examine the Old Town of Galle and its fortifications in Sri Lanka, and Georgetown and Malacca, Historic Cities of the Straits of Malacca, in Malaysia, as secondary cases to add richness and to inspire ways of questioning the primary cases by situating them in relation to others. Drawing from 10 months of fieldwork conducted over a period of two years, I combine evidence from semi-structured interviews with field observations, critical spatial and visual methods, and document and media analysis. Bringing together scholarship on urban design with critical heritage studies, ethnic studies, and international development, I explore a typology of colonial-era heritage practices in constructing the postcolonial nation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156105</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Behind the Waterfront: Enduring Inequities and Illusive Renewals in the Making of Mediterranean Port Cities</title>
<link>https://hdl.handle.net/1721.1/156097</link>
<description>Behind the Waterfront: Enduring Inequities and Illusive Renewals in the Making of Mediterranean Port Cities
Ignaccolo, Carmelo
This dissertation uncovers the problematic legacies of large-scale urban design gestures in Mediterranean port cities. It evaluates lasting tropes, measures socioeconomic effects, and reveals neglected histories. This research challenges waterfront-centric narratives by demonstrating how port cities often reinvent their coastal front while turning their back on adjoining neighborhoods, relegating them to languish in the shadow of new development.&#13;
&#13;
This study carries out a computationally rigorous yet culturally sensitive investigation to expose overlooked legacies behind urban waterfronts. It bridges urban design scholarship with port city literature, critical heritage discourse, and inequality studies to understand what frameworks and analytical methodologies can illuminate hidden-in-plain-sight, yet structurally ingrained, injustices stemming from the physical remaking of port-adjacent neighborhoods.&#13;
&#13;
From a methodological perspective, it employs historical GIS techniques on primary sources collected in more than thirty archives in Barcelona, Marseille, Rome, Naples, and Beirut. It models socioeconomic data and urban morphology features extracted from archival materials spanning a period of 150 years. Finally, it creates new data through participatory mapping initiatives and contextualizes analytical findings with interviews and field observations.&#13;
&#13;
The dissertation adopts a tripartite structure, with the first paper acting as the theoretical frame for two in-depth empirical case studies on Naples (Italy) and Beirut (Lebanon). BEHIND THE WATERFRONT introduces the “behind the waterfront” framework for the study of Mediterranean port cities and proposes a longue durée analysis of governance schemes (power), technical mechanisms (progress), and socioeconomic effects (poverty) shaping water-facing development patterns. THE MASKING OF INEQUITIES IN NAPLES evaluates the intentions and policies behind the late-nineteenth-century urban incision of Naples's historic center and examines its long-term effects and lingering tropes. EXCLUSIONARY TALES IN BEIRUT’S SPACES OF CRAFTSMANSHIP unearths the spatial history of crafts workshops in Beirut's port-facing neighborhoods and situates their recurring displacements in the city's design politics.&#13;
&#13;
Overall, this dissertation demonstrates how spatial injustices have persisted through physical forms, political processes, and socio-cultural milieux in the illusive renewals of Mediterranean coastal neighborhoods. Its findings and interdisciplinary methods reveal the spatial inheritance of contemporary inequities, fostering the adoption of inclusive urban narratives, acknowledging plural pasts, and envisioning reparative futures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156097</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Digital Cities and Digital Nations: Singapore, Thailand, China</title>
<link>https://hdl.handle.net/1721.1/156096</link>
<description>Building Digital Cities and Digital Nations: Singapore, Thailand, China
Stokols, Sol Andrew
Despite critiques of the “smart city,” the term has found new life in many parts of the world, morphing from a corporate marketing effort into an “imaginary” of national development. In the mid-2010s, the idea of a “Fourth Industrial Revolution” predicted that the emergence of 5G connectivity and the Internet of Things (IoT) would enable an even greater extraction of data from physical environments and objects. Around this time, the three countries compared in this dissertation adopted these ideas into their national development plans: Singapore’s Smart Nation (2014), Thailand 4.0 (2016), and Made in China 2025 (2015). These policies also resulted in urban pilot projects including city data platforms, IoT sensor systems, and digital twins. How and why did the “smart city” and “4th IR” resonate with political leaders and national histories in these countries, and how is the trajectory of urban technologies in these contexts co-produced through an interplay between political institutions, culture, and the material effects of the technologies themselves? This dissertation draws on the perspectives of science and technology studies (STS), the political science of late development, and urban theory to understand the implications of these experiments for the future of cities and, more broadly, the future of data capitalism.&#13;
&#13;
The dissertation draws on 10 months of fieldwork across three countries involving interviews with key stakeholders, process tracing of policy and project evolution, archival and policy analysis, site visits, and grounded theory development afforded by these different methods. In addition to serving as testbeds for the nation, pilot projects examined in each country are symbolic showcases shaped by visions of national identity and political dynamics. In Singapore, digital twins and embedding of IoT sensors in biotic environments transform the city into a showroom for the “urban solutions” sector and reinforce its identity as a “city in a garden.” In Thailand, the push for digitization of city data is intertwined with questions of sovereignty in a polity long dominated by its capital city and riven by persistent political unrest. Meanwhile in China, the development of Xiong’an New Area and its digital infrastructure is promoted as demonstrating a “new development concept” driven by indigenous innovation, digital urban services, and greater central control over urban development.&#13;
&#13;
The rise of platform capitalism has been predicated on the value of data as an asset monopolized by private firms. Platform companies, eager for greater control over urban data, have tried to build new digital urban districts, exemplified by Google’s Quayside in Toronto, which failed due to citizens’ fear of more personal data being surrendered to a corporation. However, in the countries I examine in this dissertation, urban data is increasingly seen as a resource for development and public infrastructure. This leads to an effort by a range of stakeholders to claim sovereignty over that data—from nations passing laws on data sovereignty within their territorial borders, to cities and local leaders deploying data platforms as a resource for municipal governance and local development, to firms that seek to profit from the proliferation of urban data and analytical platforms. Urban data has become a crucial albeit contested domain of state infrastructural power. The dissertation offers a new understanding of the transmutation of urban concepts in diverse contexts, and calls for planners and urban scholars to engage in reimagining alternative urban futures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156096</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The nature of buckling in the hyperbolic paraboloid and the influence of edge members on buckling</title>
<link>https://hdl.handle.net/1721.1/156062</link>
<description>The nature of buckling in the hyperbolic paraboloid and the influence of edge members on buckling
Leet, Kenneth.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1964; Vita.; Includes bibliographical references (leaves 144-145).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156062</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Robotic Search and Sampling of Sparse Natural Phenomena</title>
<link>https://hdl.handle.net/1721.1/156011</link>
<description>Adaptive Robotic Search and Sampling of Sparse Natural Phenomena
Todd, Jessica Eve
Autonomous robots are increasingly being used in the field of scientific exploration and data acquisition. Intelligent autonomous robots, capable of online adaptive planning, are seeing wide use in underwater field mapping and agricultural monitoring. The majority of these approaches produce maps of easily observable and widely dispersed phenomena such as temperature, salinity, or tree coverage. However, underwater and planetary science can often involve phenomena that are ‘expensive’ to observe, discrete, and sparsely distributed. For example, coral disease can only be visually detected by an underwater robot when hovering close to the reef, due to light attenuation underwater, putting the robot at risk of collision with obstacles or organisms. Similarly, subsurface water on Mars can only be detected from a landed system on the surface, due to the short range of the detectors. When the operating conditions are resource-constrained, such as by a limited battery life, expensive sensing actions can consume the resource budget, limiting the area that can be explored. The tension between needing to act intelligently to find and measure sparse phenomena and needing to operate within resource constraints leads to challenges for the robot’s autonomous decision-making process in choosing what to sense, where, and when. This thesis aims to address this challenge by combining semantic ‘substrates’ in the environment with hierarchical probabilistic modelling that maps substrate distributions to the underlying phenomena of interest. By using substrates that are detectable over a wide field of view and correlated with sparser and harder-to-find phenomena, a robot can be guided to regions known to be associated with the phenomena of interest. This problem can be formulated as a partially observable Markov decision process (POMDP) referred to as the Discrete Search and Sample problem.
This thesis proposes two algorithmic contributions to the field of adaptive path planning to address two scenarios within this framework. In the first scenario, we assume the robot has prior knowledge about the expected density of discrete targets in the various substrates but is operating without prior knowledge of substrate distributions. We develop a novel multi-altitude planning method, Sparse Adaptive Search and Sample (SASS), for seeking out targets by mixing low-altitude observations of discrete targets with high-altitude observations of the surrounding substrates. By using the prior information about the distribution of targets across substrate types in combination with belief modelling over these substrates in the environment, high-altitude observations provide information that allows SASS to quickly guide the robot to areas with high target densities. In the second scenario, the a priori assumption of substrate-target correlation models is relaxed, and the robot now operates without strong prior knowledge of target density or of the relationship between target and substrate. Drawing inspiration from the species distribution modelling community, a hierarchical probabilistic model is developed using the Integrated Nested Laplace Approximation framework that enables online inference about expected target hotspots using predicted substrate distributions. Model parameters are learned online to build a prediction over the discrete targets, and the model is integrated into an anytime online planner to enable adaptive path planning. Both algorithms are extensively evaluated with both synthetic and real-world datasets. Additionally, in the course of addressing these two scenarios, two novel generative species-substrate models were developed that enable rapid simulation of synthetic worlds with properties derived from real-world data. 
The development of these simulators allows the testing of path planners that aim to exploit natural correlations in spatial distributions that occur in the real world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156011</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalar Scattering Theory and Physics-inspired Optimization for Computational Imaging</title>
<link>https://hdl.handle.net/1721.1/155978</link>
<description>Scalar Scattering Theory and Physics-inspired Optimization for Computational Imaging
Pang, Subeen
This thesis explores the realm of computational imaging, focusing on the critical problems of phase retrieval and optical scattering—essential for accurately extracting physical information from photons. It aims to enhance the understanding and computational efficiency of existing models by addressing the fundamental challenges encountered due to diffraction effects, multiple scattering, and noise. Specifically, the thesis proposes improvements and comprehensive analyses of models related to phase retrieval, such as the Transport-of-Intensity Equation (TIE), and optical scattering approximations, including the Lippmann-Schwinger Equation (LSE).&#13;
&#13;
For phase retrieval, this work introduces mathematical approaches to reduce the TIE's sensitivity to experimental conditions and provides a quantitative comparison with other methods to clarify its applicability. It also explores the adjoint method for solving the TIE, which significantly enhances numerical stability, and discusses the analytical relationship between non-paraxial formulations and conventional phase retrieval methods, deepening our understanding of the field.&#13;
&#13;
In the domain of optical scattering, where information in photons is further encoded via complex light-matter interactions, this thesis examines several models derived from the scalar wave equation, such as the LSE, the Born series, and the beam propagation method. It provides a direct and quantitative analysis of their relationships and numerical stability, highlighting the strengths and weaknesses of these models in various experimental contexts, an analysis that has not been thoroughly discussed in previous studies.&#13;
&#13;
Additionally, the thesis tackles the computational challenges associated with the LSE by proposing numerical strategies and integrating neural networks as a learnable regularization. This approach aims to reduce computational demands while maintaining generalizability across different scattering objects.&#13;
&#13;
Overall, this work contributes to the field of computational imaging by offering a deeper understanding of phase retrieval and optical scattering models, alongside presenting solutions to overcome their limitations. It sets the stage for further theoretical analysis and practical applications in physics, where accurate information retrieval from photons is crucial.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155978</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in System Dynamics for Operations Management:&#13;
Policy, Platforms and Pricing</title>
<link>https://hdl.handle.net/1721.1/155917</link>
<description>Essays in System Dynamics for Operations Management:&#13;
Policy, Platforms and Pricing
López Quirós, Jose Luis
This dissertation explores how system dynamics (SD) can improve traditional operations management (OM) models for work in public policy, for understanding platform markets, and for assessing the implications of price transparency decisions on platform firm performance and consumer behavior.&#13;
&#13;
Chapter 1, co-authored with Edward Anderson and David Keith, creates a roadmap for researchers who study public policy-related OM problems. We review and organize relevant system dynamics literature in both traditional operations management and public policy venues. We identify a set of interesting open questions and the potential SD building blocks for answering them by topic. Leveraging this review, we describe under what conditions system dynamics is most appropriate. We then identify several overarching methodological and domain gaps for future research. Finally, we propose a process for using SD with traditional OM methodologies.&#13;
&#13;
Chapter 2 is joint work with Geoffrey Parker and Edward Anderson. We develop the Value Creation Lens, a theory-grounded framework for understanding the dynamics of platform value creation and growth. We separate a platform’s value into three components: (1) the standalone value of the product, (2) the value of other participants on the platform, and (3) the value created by complementary products from third-party providers. We explore differences in value creation between consumer-facing and business-facing platforms, along with managerial implications.&#13;
&#13;
Chapter 3 studies the effects of a common price obfuscation tactic, namely the use of shrouded (hidden) fees, on consumer behavior and platform firm performance. I develop an SD model based on the Value Creation Lens and use it to understand the competing incentives that lead to price shrouding or transparency in online platforms. I find evidence to suggest that building consumer trust through disclosure is a dynamic attribute that may be dominated by worse-before-better outcomes. The results provide evidence that platform price transparency decisions should differ depending on market and industrial context.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155917</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speaking Up, Speaking Out, and Making Movements: How Employee Activists Raise Social, Political, and Moral Concerns at Work</title>
<link>https://hdl.handle.net/1721.1/155916</link>
<description>Speaking Up, Speaking Out, and Making Movements: How Employee Activists Raise Social, Political, and Moral Concerns at Work
Kessinger, Raquel
This dissertation explores how employee activists raise social, political, and moral concerns at work. To do this, I draw on interviews with employee activists, an archival database of white-collar employee activism events between 2018 and 2022, a three-day participant observation in employee activism training, and employee activist documents. In the first chapter, I examine how employee activists experienced the voice processes inside their organizations as they attempted to raise social, political, and moral concerns. Despite describing companies that valued openness and leaders that encouraged employee voice, employee activists believed internal, individual voice channels were insufficient in addressing their concerns, prompting them to instead engage in collective action and public protests. I explore how internal voice processes broke down when activists raised social, political, and moral concerns as well as the types of social, political, and moral issues activists felt compelled to express. Finally, I examine how societal factors, including political polarization and pressure for companies to grow, fueled this phenomenon. In the second chapter, I explore how employee activists used internal communications tools to mobilize for collective action and to amplify their noisy exits from firms. Here, I describe how employee activists mobilized large-scale collective action quickly, often shortening the time leaders had to respond to their movements. I also examine how employee activists used internal communications tools and external social media to amplify their noisy exit messages, creating artifacts of dissent within their organizations, attracting mainstream media attention, and at times, laying the groundwork for future movements. Finally, I consider how organizational leaders responded to employee activists’ use of internal communication tools by placing new restrictions on these platforms. 
In the third chapter, I consider the direct effects and secondary consequences of employee activism by exploring how employee activists framed leaders’ responses to their contentious activism in ways that either constrained or fueled their movement’s momentum. Here, I examine three categories of outcomes: big wins—when organizational leaders acquiesced to all activist demands, partial wins—when organizational leaders offered some concessions or made meaningful gestures to acknowledge activists’ concerns, and losses—when leaders rejected activists’ demands and doubled down on the business practice in question. Finally, I show that regardless of a movement’s outcome, employee activists sought to build lasting capacity across movements and organizations by using internet technologies to improve resource mobilization for future employee activists.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155916</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Household and Behavioral Finance</title>
<link>https://hdl.handle.net/1721.1/155915</link>
<description>Essays in Household and Behavioral Finance
de Silva, Tim
This thesis contains three chapters on household finance and behavioral finance. The first chapter studies how to balance insurance and incentives in student loans with income-contingent repayment, which insure borrowers against income risk but also can reduce their incentives to earn more. Using a change in Australia's income-contingent repayment schedule, I show that borrowers reduce their labor supply to lower their repayments. I use these responses to estimate a dynamic model of labor supply with frictions that generate imperfect adjustment. My estimates imply that the labor supply responses to income-contingent repayment decrease the optimal amount of insurance but are too small to justify fixed repayment contracts. The second chapter studies the role of risk preferences and frictions in portfolio choice, using variation in the default asset allocation of 401(k) plans. We estimate that absent participation frictions, 94% of investors would prefer holding stocks in their retirement accounts, with an equity share of retirement wealth that declines over the life cycle. We use this variation to estimate a structural life cycle portfolio choice model with Epstein-Zin preferences. Our results suggest that the lack of participation in the stock market is mainly due to participation frictions rather than non-standard preferences (e.g., loss-aversion). The third chapter studies the properties of subjective forecasts relative to econometric forecasts at different forecast horizons. In the context of corporate earnings forecasts, we find that sell-side equity analyst forecasts outperform econometric forecasts in the short run but underperform in the long run. We then decompose these differences in forecasting accuracy into information, forecast bias, and forecast noise. We find that noise and bias strongly increase with forecast horizon, while equity analysts’ information advantage decays rapidly. 
We show that noise increasing with the forecasting horizon generates a mechanical reversal in the sign of the error-revision (Coibion-Gorodnichenko) regression coefficient at longer horizons, independently of over-/underreaction. Finally, we demonstrate that a parsimonious model with bounded rationality and a noisy cognitive default matches the term structures of noise and bias jointly.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155915</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Culture and Coordination</title>
<link>https://hdl.handle.net/1721.1/155910</link>
<description>Essays on Culture and Coordination
Mellody, James C.
Culture is both mechanism and outcome of coordinated social action. First, culture enables people to come together and act in a collective fashion—enabling coordination both within and across teams in organizational settings. Second, coordinated action produces, shifts, and reinforces culture over time. In this dissertation, I examine the relationship between culture and coordination through three studies. In Chapter 1, I examine how employees from different areas of functional expertise can work together to create a shared culture enabling further coordination. Leveraging ethnographic data from an academic research setting, I find that safety professionals enacted a shared culture of safe and sustainable research by teaching researchers how to integrate safety and sustainability into their research, rather than handling compliance tasks for them. In Chapter 2, co-authored with Ray Reagans, we examine how managers and firms can foster cultures that enable individuals from various underrepresented groups to succeed. Organizations face a tradeoff in managing diversity: individuals from different stigmatized groups prefer different diversity cultures because they are represented at different levels within organizations. We find that organizations can align individuals from different groups to perform better under the same culture by focusing on a general sense of individuation, allowing them to move beyond the tradeoff grounded in representation. Specifically, we find that organizations can do this by creating a culture that frames each person as an individual rather than a member of a group and in turn valuing equality of all individuals regardless of background. Finally, in Chapter 3, I examine how individuals online allocate their attention to various cultural tastes. 
I find that, in the online world, freedom of exploration allowing individuals to participate across multiple communities enables connections to form between generic and specialty communities, which would otherwise rely on separate audiences in the offline world. While the internet may not shift the overall distribution of attention away from generic communities toward a greater variety of specialty communities, it enables cross-cutting discussion and engagement across these communities, increasing exposure to diverse tastes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155910</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Closing the Voice Gap: Evidence from a Hospital System’s Empowerment Program</title>
<link>https://hdl.handle.net/1721.1/155905</link>
<description>Closing the Voice Gap: Evidence from a Hospital System’s Empowerment Program
Minster, Arrow
This dissertation explores how empowerment programs within organizations can effectively close workers’ voice gap, or the difference between workers’ desired and actual influence over organizational decision-making. By drawing on a 17-month ethnographic study of a hospital system’s empowerment program, I examined the interactional and cultural processes that guide the justification, sustainability, and efficacy of worker empowerment. Chapter 1 motivates and introduces the case of the hospital system’s empowerment program. Chapter 2 focuses on Coastal Care’s justifications for empowerment, specifically how leaders and managers described the program as valuable and appropriate for their organization. Chapter 3 explores how Coastal Care navigated and overcame the challenges that hindered continuous worker involvement in the program. I identified the importance of scaffolding, or various unscripted practices, which complemented the formal design of the empowerment program. The scaffolding provided informal opportunities for worker involvement in instances when the formal programming failed to do so, ultimately sustaining involvement in the program. Chapter 4 identifies a process and conditions necessary for closing the voice gap via the empowerment program. Although the program legitimated worker power over workplace change, effective empowerment relied on frontline managers actively crafting opportunities for workers to exercise influence. When managers made three moves (prioritizing workers' issues, centering diagnostic dialogues, and engaging with assigning tasks), they mobilized skeptical workers to address departmental processes. Managers variously deployed these strategies as a consequence of their history with the issue: when they were physically close to the issue and when they had not encountered previous failures in resolving it. This dissertation contributes to research on empowerment programs and organizational theories of worker voice and upward influence. 
I bridge these oft-siloed perspectives by identifying formal and informal practices that promote opportunities for worker influence over organizational decision-making.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155905</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Algorithms, Markets, and&#13;
Organizations</title>
<link>https://hdl.handle.net/1721.1/155898</link>
<description>Essays on the Economics of Algorithms, Markets, and&#13;
Organizations
Raymond, Lindsey
This dissertation contains three chapters that study how digitization and increasing reliance on algorithms shape workers, organizations, and markets. In the first chapter, I show how the digitization of public housing records leads to the entry of investors using algorithms. Digitization and entry lead to changes in equilibrium prices and allocation in the US residential real estate market. Consistent with a theoretical model of comparative advantage, I observe shifts in investment patterns for both human and algorithmic investors and changing house prices, particularly for minority homeowners. In the second chapter, I study how hiring algorithm design shapes the effects of algorithms in the labor market. Using data from a professional services firm, I show that incorporating exploration can improve the quality of the interview screening process (as measured by eventual hiring rates), while also increasing demographic diversity, relative to the firm's existing practices. While the adoption of automated approaches to hiring is often associated with decreasing access to opportunity, we show the impact on efficiency and equity depends on algorithm design choices. In the third chapter, joint with Danielle Li and Erik Brynjolfsson, we study the staggered introduction of a generative AI-based conversational assistant using data from 5,000 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14% on average, including a 34% improvement for novice and low-skilled workers but with minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the best practices of more able workers, helps newer workers move down the experience curve, and improves worker learning. Our results suggest that access to generative AI can increase productivity, with large heterogeneity in effects across workers. 
Together, these chapters highlight how the increasing prevalence of algorithmic decision making impacts workers, firms, and markets.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155898</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genomic and physiological adaptation to temperature in the invasive golden star tunicate (Botryllus schlosseri)</title>
<link>https://hdl.handle.net/1721.1/155890</link>
<description>Genomic and physiological adaptation to temperature in the invasive golden star tunicate (Botryllus schlosseri)
Tobias, Zachary John Corey
Because non-indigenous species (NIS) often encounter novel environments during colonization and expansion, species invasions present useful opportunities to investigate the mode and pace of adaptive change in natural populations. In this dissertation, I use the range expansion of the invasive golden star tunicate, Botryllus schlosseri, as a natural experiment to study how a pernicious NIS adapts its thermal physiology on contemporary time scales. In Chapter 2, I applied low-coverage whole genome sequencing (lcWGS) to investigate patterns of population genetic structure and signatures of local adaptation to temperature. In addition to illustrating the potential for rapid adaptation of thermal tolerance at the genomic level, this chapter demonstrated that the molecular basis of thermal adaptation on either coast is distinct, providing valuable evidence for parallel adaptation being driven by divergent molecular means. In Chapter 3, I performed a physiological study to investigate differentiation of post-larval heat tolerance across five populations spanning a major biogeographic break on the east coast of North America. I found that northern populations are more susceptible to heat stress than their southern, warm-exposed counterparts, providing evidence for adaptive shifts of thermal tolerance. Further, by taking advantage of natural temporal variability in temperature, I demonstrated that temperature during development positively affects heat tolerance at later life stages, establishing developmental plasticity of thermal tolerance. In Chapter 4, I extended my physiological investigation to the west coast of North America, comparing post-larval heat tolerance across three populations spanning a 24.3° latitudinal gradient while expanding to include differentiation of cold tolerance in adults. Similar to the east coast, I observed that the two northern populations were more susceptible to heat stress than their southern counterpart. 
For cold tolerance, I observed a pattern of countergradient variation whereby northern populations were better able to maintain cardiac function in the cold than southern populations. This suggests compensatory genetic adaptation to the colder water temperatures at higher latitudes. Overall, my work furthers our understanding of how NIS are able to rapidly shift their thermal physiology in response to novel environments, shedding light on the potential of species more generally to adapt to environmental change on contemporary timescales.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155890</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Entrepreneurship and Finance</title>
<link>https://hdl.handle.net/1721.1/155888</link>
<description>Essays in Entrepreneurship and Finance
Bidanda, Maya
This thesis consists of three chapters on entrepreneurship and finance. The first chapter studies how small employers serve as the origins of entrepreneurs. I find that workers at small employers are more likely to become entrepreneurs in the future and that entrepreneurs who previously worked at successful small employers are more likely to start successful firms. This is consistent with a hypothesis of learning entrepreneurial human capital at work. The second chapter, joint with Alex Martin, studies the impact of access to childcare on self-employment outcomes. We find that parents, particularly mothers with children too young for kindergarten, are more likely to be self-employed and less likely to be in the formal workforce than parents with children old enough for school. When exogenous access to pre-kindergarten childcare is introduced, mothers with access are less likely to be self-employed and more likely to be in formal work. This suggests that labor market barriers in formal work push parents into self-employment. However, self-employment in this setting results in lower wages and difficulty finding future work, so policy should work to correct this push. The third chapter studies the spillover effects of labor displacement from technological innovation in local U.S. labor markets. I find evidence of a decrease in local aggregate demand in economies that are particularly hard hit by labor displacement.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155888</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference from Limited Observations in Statistical, Dynamical, and Functional Problems</title>
<link>https://hdl.handle.net/1721.1/155887</link>
<description>Inference from Limited Observations in Statistical, Dynamical, and Functional Problems
Stepaniants, George
Observational data in physics and the life sciences comes in many varieties. Broadly, we can divide datasets into cross-sectional data which record a set of observations at a given time, dynamical data which follow how observations change in time, and functional data which observe data points over a space (and possibly time) domain. In each setting, prior knowledge of statistical, dynamical systems, and physical theory allow us to constrain the inferences and predictions we make from observational data. This domain knowledge becomes of paramount importance when the data we observe is limited: due to missing labels, small sample sizes, unobserved variables, and noise corruption.&#13;
&#13;
This thesis explores several problems in physics and the life sciences, where the interplay of domain knowledge with statistical theory and machine learning allows us to make inferences from such limited data. We begin in Part I by studying the problem of feature matching or dataset alignment which arises frequently when combining untargeted (unlabeled) biological datasets with low sample sizes. Leveraging the fast numerical methods of optimal transport, we develop an algorithm that gives a state-of-the-art solution to this alignment problem with optimal statistical guarantees. In Part II we study the problem of interpolating the dynamics of point clouds (e.g., cells, particles) given only a few sparse snapshot recordings. We show how tools from spline interpolation coupled with optimal transport give efficient algorithms returning smooth dynamically plausible interpolations. Part III of our thesis studies how dynamical equations of motion can be learned from time series recordings of dynamical systems when only partial observations of these systems are captured in time. Here we develop fast routines for gradient optimization and novel tools for model comparison to learn such physically interpretable models from incomplete time series data. Finally, in Part IV we address the problem of surrogate modeling, translating expensive solvers of partial differential equations for physics simulations into fast and easily-trainable machine learning algorithms. For linear PDEs, our prior knowledge of PDE theory and the statistical theory of kernel methods allows us to learn the Green’s functions of various linear PDEs, offering more efficient ways to simulate physical systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155887</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Assembling Modular Systems: Enhancing Efficiency and Accuracy in Robotic Lattice Assembly</title>
<link>https://hdl.handle.net/1721.1/155885</link>
<description>Self-Assembling Modular Systems: Enhancing Efficiency and Accuracy in Robotic Lattice Assembly
Romanishin, John
This thesis introduces a robotic system designed to advance the concept of modular ‘industrial Legos’, aiming to harness the advantages of modularity in new environments, such as CNC machines and factory automation. This vision intersects with the domain of modular self-reconfigurable robotics and robotically-assembled structures. The work presented here extends from existing research and is particularly focused on three key areas: (1) the development of improved modular connectors between system elements, (2) the advancement of modular linear actuator designs that not only facilitate component rearrangement but also contribute to task performance, and (3) improved performance and efficiency of lattice-assembler robots. All topics are examined in depth, and their relevance to prior work is carefully outlined. Central to any modular system is the design of connectors, which enable the modules to assemble into various configurations. Although extensive research has been conducted in this area, existing connector solutions often fall short in terms of repeatability, affordability, and mechanical strength. This thesis addresses these shortcomings by introducing a new connector that utilizes a kinematic coupling for high repeatability and incorporates a simple, screw-based latching mechanism for strong mechanical connections. Detailed design specifications for these connectors are provided; these designs are both relatively simple to implement and robust enough to withstand the harsh conditions encountered in industrial settings. In CNC and industrial automation, linear actuators often pair with structural elements to form gantry systems, allowing for precise 3D-positioning of tools and end effectors. 
While there are self-contained linear actuators that offer some level of modularity—often termed ‘superficial modularity’—traditional designs generally lack the capacity for what might be called ‘deep modularity’. This refers to the unique ability to seamlessly join two actuators of length L to create a single actuator of length 2L. To address this, the thesis presents a high-strength, extensible modular linear actuator built on magnetic lead screws, capable of general linear motion and of cooperating with other modular elements to rearrange modules. Lastly, this thesis introduces an assembly robot named Belty, equipped with proprioceptive actuators that feature large-diameter, high-torque-density brushless motors and minimal gearing. These actuators offer several advantages, such as efficient motion, back-drivability, force sensing, and impact resilience. These features are especially valuable for assembly tasks within constrained spaces, as they enable simplified assembly algorithms through contact-rich motion primitives (e.g., dragging parts, bumping into objects). Following the hardware discussion, assembly algorithms are detailed to demonstrate the system’s ability to construct a diverse array of structures quickly and energy-efficiently.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155885</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Artificial Intelligence and Future of Knowledge Work</title>
<link>https://hdl.handle.net/1721.1/155883</link>
<description>Essays on the Economics of Artificial Intelligence and Future of Knowledge Work
Munyikwa, Zanele
This dissertation comprises three chapters that dissect the evolving interface between artificial intelligence (AI) and knowledge work, with a particular focus on language and writing-based AI technologies. Chapter 1 describes the scope, scale, and economic value of writing skills within the labor market, presenting descriptive statistics and employing hedonic wage analysis to estimate the salary premiums associated with writing skills. By analyzing administrative data and job postings, it underscores the economic significance of writing proficiency across various professional domains, emphasizing its crucial role in the contemporary workforce. In Chapter 2, attention shifts to the direct influence of generative AI on knowledge work through a randomized field experiment in the copywriting industry. This analysis explores how AI-driven tools not only enhance content creation but also impact productivity, creativity, and workers’ subjective feelings of ownership over their work. The findings illustrate the transformative potential of AI in reshaping creative professions and significantly altering traditional workflows. Chapter 3 investigates the effects of algorithmic resume writing assistance on hiring outcomes for knowledge workers in an online labor market, employing causal text analysis and mediation analysis to uncover the mechanisms through which AI influences hiring decisions. This chapter focuses on text as a mediator, examining how AI-induced adjustments to linguistic properties such as formality and error correction mediate the relationship between AI tool use and hiring outcomes. This examination reveals how subtle changes to linguistic properties can significantly affect job seekers’ success, underscoring the practical benefits and complexities of integrating AI into hiring practices. 
Together, these chapters offer a comprehensive exploration of how AI technologies, particularly those focused on language and writing, are redefining the landscape of knowledge work. By highlighting the interactions between writing skills, technological innovation, and employment, this dissertation sheds light on the critical role of writing in the contemporary job market and the significant impact of AI advancements on professional content creation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155883</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Understanding and Combating Misinformation at Scale</title>
<link>https://hdl.handle.net/1721.1/155882</link>
<description>Essays on Understanding and Combating Misinformation at Scale
Allen, Jennifer
In Chapter 1, I explore the use of crowdsourcing as a potential solution to the misinformation problem at scale. Perhaps the most prominent approach to combating misinformation is the use of professional fact-checkers. This approach, however, is not scalable: Professional fact-checkers cannot possibly keep up with the volume of misinformation produced every day. Furthermore, many people see fact-checkers as having a liberal bias and thus distrust them. Here, we explore a potential solution to both of these problems: leveraging the “wisdom of crowds” to make fact-checking possible at scale using politically-balanced groups of laypeople. Our results indicate that crowdsourcing is a promising approach for helping to identify misinformation at scale.&#13;
&#13;
In Chapter 2, joint with David Rand and Cameron Martel, I extend work on crowdsourced fact-checking to assess the viability of crowdsourcing in an opt-in, polarized environment. We leverage data from Birdwatch, Twitter’s crowdsourced fact-checking pilot program, to examine how shared partisanship affects participation in crowdsourced fact-checking. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While not necessarily indicating that Birdwatch is ineffective for identifying misleading content, these results demonstrate the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.&#13;
&#13;
In Chapter 3, I examine the role of online (mis)information on US vaccine hesitancy. I combine survey experimental estimates of persuasion with exposure data from Facebook to estimate the extent to which (mis)information content on Facebook reduces COVID vaccine acceptance. Contrary to popular belief, I find that factually-accurate vaccine-skeptical content was approximately 50X more impactful than outright false misinformation. Although outright misinformation had a larger negative effect per exposure on vaccination intentions than factually accurate content, it was rarely seen on social media. In contrast, mainstream media articles reporting on rare deaths following vaccination garnered hundreds of millions of views. While this work suggests that limiting the spread of misinformation has important public health benefits, it highlights the need to scrutinize accurate-but-misleading content published by mainstream sources.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155882</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>All Models are Wrong, Simple Models Provide Insight: A Study of Human Manipulation</title>
<link>https://hdl.handle.net/1721.1/155878</link>
<description>All Models are Wrong, Simple Models Provide Insight: A Study of Human Manipulation
West Jr., A. Michael
Humans possess a unique ability to manipulate tools, facilitated by the dexterity of our hands. Unfortunately, millions lose this capability annually due to conditions like limb amputation or cerebrovascular accident. Robotic rehabilitation technologies, including prosthetics and exoskeletons, aim to restore motor function. However, understanding human control strategies is crucial for effective implementation, and studying humans is challenging due to their inherent complexity. Simple models can help untangle this complexity. This thesis delves into the role of simple models in analyzing human neural motor control and perception through the study of upper-limb motor control and hand manipulation. Specifically, this thesis aims to develop a descriptive understanding of human manipulation in unimpaired subjects. To achieve this, we first examine a common simplification of human hand manipulation—the presence of kinematic hand synergies—and stress the importance of studying functional hand manipulation beyond simple grasping. Secondly, we present findings from a motor control study that introduces a simple mathematical model, emphasizing mechanical impedance, that competently describes how humans manage physical interaction. Thirdly, we underscore the significance of mechanical impedance through a perceptual study that highlights humans’ robust ability to perceive limb stiffness. Lastly, we introduce a metric that can objectively quantify manipulation complexity, potentially broadening researchers’ scope in studying complex human manipulation. The insights gained from this work have far-reaching implications, potentially enhancing existing robotic and rehabilitation technologies and guiding the development of new ones. In a field dominated by large complex models fed by big data, this thesis highlights the value of conducting the basic science research required to uncover aspects of human motor control.
A video presentation of this thesis can be found at: https://www.youtube.com/watch?v=u2eCJHqEGww
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155878</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fracture Morphologies in Particulate Suspensions</title>
<link>https://hdl.handle.net/1721.1/155877</link>
<description>Fracture Morphologies in Particulate Suspensions
Lilin, Paul
Films of a colloidal suspension solidify and crack during drying, and discontinuously shear-thickening suspensions shear-jam and fracture under applied stress. In both systems, the morphology and dynamics of fracture reveal the internal stresses in the material: craquelures in old paintings reveal details of the painting technique used; crack patterns of drying drops of blood contain forensic evidence; and growing fractures in dense suspensions constitute a structural signature of shear jamming. In this thesis, we develop an in-depth physical understanding of the fracture dynamics and morphology in two particulate suspensions: Drying sessile drops of colloidal suspensions and discontinuously shear-thickening suspensions subject to air injection. We first highlight distinct modes of stress release in drying drops of colloidal suspensions. As water evaporates from a freshly deposited drop, a thin close-packed particle deposit forms at the edge of the drop and grows inward. Water evaporation from this particle deposit combined with adhesion of the deposit to the substrate causes tensile stresses leading to the formation of regular radial cracks. Depending on whether the deposit then remains attached to the substrate or delaminates, the remaining stresses in the deposit get released in different ways: If the deposit remains attached, stresses are released by the formation of orthoradial cracks that bridge the radial cracks, creating a complex crack pattern. Conversely, no additional cracks form if the deposit delaminates. Instead, the deposit curves and deforms out of plane to relieve drying stresses, creating shapes that resemble blooming flowers. We show how the combination of poroelasticity with fracture mechanics and non-Euclidean plate mechanics captures both the complex crack pattern formation and the out-of-plane deformation as distinct responses to a similar drying stress.
We then probe the fracture and relaxation characteristics of a discontinuously shear-thickening suspension placed in an open three-dimensional container and subject to air injection. X-ray images reveal the growth of an air cavity with shapes ranging from smooth bubbles that rise upwards under the action of buoyancy to sharp cracks that remain attached to the injection nozzle. We show that the shape and the relaxation dynamics of the air cavity are linked to the rheology of the suspension: Sharp cracks relax into bubbles when the shear rate applied to the suspension by the bubble growth decreases below the critical shear rate denoting the onset of discontinuous shear thickening. Our findings suggest ways to tune crack patterns and to infer stresses and material properties from fracture dynamics in both drying suspensions and shear-jamming suspensions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155877</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence in Labor Market Matching</title>
<link>https://hdl.handle.net/1721.1/155874</link>
<description>Artificial Intelligence in Labor Market Matching
Wiles, Emma Benz
In my dissertation, I study three applications of AI in labor market matching. In my first chapter, I show that AI-improved, but not entirely AI-written, resumes make workers more likely to be hired, with no negative downstream implications for employers or match quality. However, in my second chapter, I show that when employers are given entirely AI-written drafts of a job post, the jobs posted are more generic and less likely to result in a hire. Lastly, I provide evidence that non-technical workers can use AI to upskill into data science; however, those skills do not persist in the absence of AI assistance. My first chapter investigates the association between writing quality in resumes for new labor market entrants and whether they are ultimately hired. I show this relationship is, at least partially, causal: in a field experiment in an online labor market with nearly half a million jobseekers, treated jobseekers received algorithmic writing assistance on their resumes. I find that the writing on treated jobseekers’ resumes had fewer errors and was easier to read. Treated jobseekers were hired 8% more often, at 10% higher wages. Contrary to concerns that the assistance takes away a valuable signal, I find no evidence that employers were less satisfied with the quality of work done, using star ratings, the sentiment of their reviews, and their probability of rehiring a worker. The analysis suggests digital platforms and their users could benefit from incorporating algorithmic writing assistance into text-based descriptions of labor services or products without downstream negative consequences. In my second chapter, I study a randomized experiment conducted on an online labor market that encouraged employers to use a Large Language Model (LLM) to generate a first draft of their job post. Treated employers are 20% more likely to post the job and decrease time spent writing their job post by 40%. Among the posted jobs, treated employers receive 5% more applications. 
Despite this, they are 18% less likely to hire. I find no evidence that this is driven by treated employers receiving lower quality applicants. Moreover, despite the large increase in the number of jobs posted, there is no difference in the overall number of hires between treatment and control employers. These results imply that the treatment lowered the probability of hiring among at least some jobs which would have otherwise made a hire. I rationalize these results with a model in which employers with heterogeneous values of hiring can attract better matches by exerting effort to precisely detail required skills. I show how a &#13;
technology that lowers the cost of writing and imperfectly substitutes for effort causes more posts, but lowers the average hiring probability through both marginal posts (as these are less valuable) and inframarginal posts (as the technology crowds out effort and makes the job posts more generic). I provide evidence for these mechanisms using employer screening behavior and the embeddings of the job posts’ texts. In my third chapter, we investigate whether LLMs can be used to help non-technical workers adapt to technology-induced, rapidly changing skill demands by “upskilling” into a more technical skillset. With coauthors at Boston Consulting Group, we run a randomized controlled trial on knowledge workers, who have no data science experience, to test whether workers paired with LLMs are able to perform data science tasks to the level of real data scientists. We give consultants at BCG data science problems, representative of what the data scientist role at the company demands, but which GPT-4 cannot solve on its own. We find that treated workers given access to and training in using ChatGPT are more likely to correctly solve all three tasks, and can perform at the level of real data scientists without GPT-4 on the coding task. These results suggest that LLMs can be used to help workers gain new skills to meet the evolving, more technical demands of the labor market, but that for some types of tasks the work of non-technical workers is not interchangeable with data scientists’.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155874</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Marketing Innovations</title>
<link>https://hdl.handle.net/1721.1/155872</link>
<description>Essays in Marketing Innovations
Li, Keyan
The dissertation consists of three chapters on understanding marketing innovations, including targeted marketing and new product development.&#13;
The first chapter studies a novel targeting problem that many firms face and develops a new method for targeting experimentation. Adaptive learning policies that guide how firms trade off acquiring new information to improve a current targeting policy, versus exploiting the current policy to harvest, typically focus on settings in which customers arrive individually, in a frequent sequence. However, in practice, firms often conduct marketing campaigns in batches, in which they target a large group of customers with personalized marketing actions together. This has an important implication for how firms resolve the tradeoff between acquiring new information and exploiting the current policy. The large number of customers in each batch (campaign) introduces an information externality: the incremental information contributed by a single customer depends upon the assignment decisions for other customers in the batch. We investigate how to optimally acquire and coordinate information in these settings. The algorithm we propose uses Gaussian processes to estimate the value of incremental information, while accounting for the information externality between customers in the same batch. Findings are validated using data from a field experiment.&#13;
The second chapter studies customer demand in a non-market-oriented economy. The economics and marketing literature has primarily focused on market economies and studied factors such as price and advertising when analyzing customer demand. However, in non-market-oriented economies, social factors like corruption can have a significant influence on customer decisions. In particular, this paper focuses on the demand for luxury products, which are widely used for gift-giving and even bribery in emerging markets. One possible mechanism is that when the relative size of non-market-oriented sectors in the local economy increases, luxury products can be used to identify those who have a higher willingness to pay for scarce resources. As a result, the demand for luxury products moves together with the degree of corruption. By leveraging natural experiments of top-down anti-corruption campaigns that temporarily halt this channel, an empirical study is performed using a comprehensive dataset that covers the sales of all cigarette brands and the local social environment in China. The results suggest that these social factors can have an unanticipated impact on the demand for luxury products.&#13;
The third chapter studies how customer search can stifle product innovations. Conventional wisdom suggests that when an incumbent fails to innovate, there is a greater risk to the incumbent of competition from other innovators. I show conditions in which the opposite is true: by delaying innovation, an incumbent can create entry barriers that deter innovation by competitors. Consequently, both competition and innovation are suppressed. The key insight driving these outcomes is that customer search is endogenous, and the absence of innovation today can disincentivize customers from searching in the future. Since customers need to search to discover innovations, when they search less, it both creates entry barriers for competitors and reduces the competitors' incentives to innovate. Postponing innovation can benefit incumbents if it motivates customers to search less, and thus competitors to innovate less. Notably, I show that searching less is a rational customer response.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155872</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/155867</link>
<description>Essays in Financial Economics
Scott, Justin Rand
This thesis contains three chapters exploring the interrelationships among risk premia, the valuation of government debt, and macroeconomic dynamics. The first chapter documents and explores the implications of the risk price puzzle: the empirical disconnect between inflation and risk premium shocks. I show that existing New Keynesian models struggle to rationalize the risk price puzzle with an upward-sloping Phillips curve. To resolve the puzzle, I develop a novel macro-finance model that integrates a two-sector real business cycle framework with the government debt valuation equation, which determines the price level without nominal rigidities. Empirically, the response of inflation to risk premium shocks switches from positive to negative around 1998, mirroring the change in the stock-bond correlation. The model attributes this phenomenon to the changing covariance between shocks to the risk premium and real risk-free rate, which is consistent with the heightened responsiveness of monetary policy to the stock market (“Fed put”).&#13;
&#13;
The second chapter uses a flexible SDF model to infer the values of untraded tax and expenditure claims at the state level. Since state governments do not issue their own currencies, they are precluded from monetizing the value of their debt through inflation. On average, I find that surplus risk generates a gap between the market and fundamental values of state-level debt, although the gap is an order of magnitude smaller than at the federal level. Inconsistent with the prior literature, more stringent balanced budget amendments appear to have no effect on state fiscal capacity. The third chapter documents the impact of risk premium shocks on firm-level outcomes and explores the dimensions of heterogeneity in those responses. I find that investment falls more for firms with higher betas. Risk premium shocks also increase misallocation as proxied by dispersion in marginal product of capital (MPK), although they have no effect on aggregate total factor productivity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155867</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smart Data Analytics for Manufacturing Processes</title>
<link>https://hdl.handle.net/1721.1/155866</link>
<description>Smart Data Analytics for Manufacturing Processes
Mohr, Fabian
The increasing number of powerful data analytics tools in recent years has motivated the development of smart data analytics approaches, i.e., a decision-tree-based super-algorithm that automatically selects the most suitable method based on a systematic interrogation of the dataset. This work introduces two different smart data analytics approaches for the objectives of supervised classification and fault detection, as they arise, for example, in the context of process monitoring schemes for chemical manufacturing processes. For both objectives, a visual representation of the method selection process is presented in the form of a data analytics triangle. The necessary interrogation framework for the model selection process is introduced, and the overall approaches for both objectives are demonstrated in case studies showing strong performance across a variety of supervised classification and fault detection problems. Additionally, predictive modeling techniques are applied to industrial end-to-end biomanufacturing datasets for two monoclonal antibody products to predict critical quality attributes. Different approaches are introduced to consider both second-order and third-order tensorial data combined as possible inputs to the predictive modeling. It is shown that the utilization of the proposed methods is capable of significantly improving the prediction performance if the dataset is analyzed correctly beforehand. Moreover, the ability to predict and classify the cycle life of LiNiMnCo (NMC) battery half-cells based on acoustic measurements capturing degradation events such as grain fracture and gas formation, in combination with the previously introduced data analytics methods, is demonstrated. Lastly, the application of a Kalman Filter model to predict moats around companies in the stock market is explored.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155866</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Symbolic Regression: From Generalized Formulation to Density Estimation and Inverse Problem</title>
<link>https://hdl.handle.net/1721.1/155864</link>
<description>Advances in Symbolic Regression: From Generalized Formulation to Density Estimation and Inverse Problem
Tohme, Tony
In this thesis, we explore the field of Symbolic Regression (SR), a middle ground between simple linear regression and complex inscrutable black box regressors such as neural networks. In essence, SR searches the space of mathematical expressions to find a model that best captures the relationship between inputs and outputs of a given dataset. While SR has not gained mainstream popularity due to its computational intricacy and reliance on heuristics, its potential for generating explicit, concise, and interpretable mathematical models deserves further attention. This work presents a series of advancements in Symbolic Regression, extending its applicability and demonstrating its potential across diverse domains and problem settings. Initially, we introduce GSR, a Generalized Symbolic Regression method that redefines the traditional SR optimization problem to discover analytical mappings from the input space to a transformed output space. The proposed GSR approach achieves promising performance compared to existing SR methods across established benchmark datasets, as well as a more challenging dataset introduced in this study, called SymSet. Next, we delve into the task of recovering underlying partial differential equations (PDEs) from data through the use of the adjoint method. We begin by considering a family of parameterized PDEs encompassing linear, nonlinear, and spatial derivative candidate terms. We then formulate a PDE-constrained optimization problem aimed at minimizing the error of the PDE solution from data, and elegantly derive the corresponding adjoint equations. We showcase the efficacy of the proposed approach in selecting the appropriate candidate terms, thereby discovering the governing PDEs from data. We also compare its performance with a commonly employed method for PDE discovery. Furthermore, we introduce MESSY Estimation, a Maximum-Entropy based Stochastic and Symbolic densitY estimation method. 
The proposed approach infers probability density functions symbolically from samples by leveraging the Maximum Entropy Distribution (MED) principle. We uncover three key contributions: (i) the Lagrange multipliers, inherent in the MED ansatz, can be efficiently computed by simply solving a linear system of equations, (ii) the density recovery task is enhanced through matching more unconventional low-order (symbolic) moments, rather than necessarily matching higher-order (raw) moments, and (iii) the proposed symbolic density estimation framework leads to increased interpretability and better conditioning. Finally, we introduce Invertible Symbolic Regression (ISR), an approach that bridges the concepts of SR and invertible maps. Specifically, ISR seamlessly combines the principles of Invertible Neural Networks (INNs) and Equation Learner (EQL), a neural network-based symbolic architecture for function learning. Demonstrating its versatility, ISR also serves as a symbolic normalizing flow for density estimation tasks. Additionally, we showcase its applicability in solving inverse problems, including a benchmark inverse kinematics problem, and notably, a geoacoustic inversion problem in oceanography aimed at inferring posterior distributions of underlying seabed parameters from acoustic signals. The diverse findings of this thesis not only contribute to advancing the field of Symbolic Regression, but also underscore its versatility and potential across various domains. A shift to explicit symbolic models, as demonstrated in this thesis, could unveil hidden patterns within the plethora of datasets available today, offering new insights and directions in the evolving field of machine learning and data analysis.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155864</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/155861</link>
<description>Essays in Financial Economics
Im, Joanne
This thesis comprises three chapters on climate and international finance.&#13;
Recent pressure on publicly traded (public) firms to divest high greenhouse gas-emitting assets has raised concerns that these assets are flowing to more opaque, privately held (private) firms that may be operating them in more emissions-intensive ways and that sellers are being rewarded for such sales by increased valuations. Whether this is likely to be an important concern depends on the climate and valuation consequences of such asset transfers. These issues are explored in the first two chapters of the thesis.&#13;
The first chapter uses data from fossil fuel power plant operations in the United States and employs a difference-in-differences design to estimate the effects of sales on plant emissions outcomes from 1998 to 2022. I find that eighteen months after sale, changes in power plant unit emissions were statistically indistinguishable from zero at the 5% level relative to comparable plant units that shared technological specifications and were in the same regional electricity area; this was true regardless of whether the buyer was a public or private firm. Then, using data on fossil fuel power plant sale announcements by public firms, I employ an event study methodology to estimate the effect of public-to-private sale announcements on sellers’ market values. I find that, on average, the announcement of a sale to a private firm led to cumulative abnormal returns to the seller’s stock; however, this average was not statistically different from the average return when the announcement was to a public firm.&#13;
The second chapter investigates these questions further from a theoretical perspective. I present a general equilibrium model that predicts what will happen to asset ownership and emissions when public, but not private, firms experience a positive shock to their cost of emitting and there is trade in assets. I find three qualitatively distinct equilibria, one of which is a “greenwashing equilibrium”: the public firm expresses the shock entirely through ownership decisions by selling to private firms, and assets emit more than they would have if trade were suppressed. There are also equilibria with no trade and divestments of assets to public firms.&#13;
The third chapter studies exchange rate determination. I test a set of assumptions that imply the return parity of long-run real bonds denominated in different currency numeraires. The joint hypothesis is rejected in a post-2009 sample of developing and developed market currencies; however, I document a strong relationship between changes in the log of the bilateral real exchange rate and real holding-period bond returns in the direction of parity, contributing to the Meese-Rogoff puzzle on exchange rate determination.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155861</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Epitaxy by In-Situ Grown BN and InGaN Strain Relaxation</title>
<link>https://hdl.handle.net/1721.1/155856</link>
<description>Remote Epitaxy by In-Situ Grown BN and InGaN Strain Relaxation
Liu, Yunpeng
Compound semiconductors have superior material properties over silicon, making them good candidates for optoelectronics and power electronics. As the need for electronics with higher flexibility and smaller size increases, lift-off methods for fabricating freestanding compound semiconductors have been developed. In 2017, a layer transfer technique based on remote epitaxy was introduced to overcome the drawbacks of conventional lift-off methods. By inserting a layer of 2D material between the epitaxial layer and the substrate, the epitaxial layer can be mechanically peeled off because of the van der Waals gap created. At the same time, the thin 2D layer still allows the penetration of the substrate electrostatic potential, which seeds the single-crystalline growth. However, the complex 2D-material transfer processes damage and contaminate the 2D materials. This thesis introduces a novel remote epitaxy process based on MBE in-situ grown boron nitride (BN). An ultrathin (200 nm) single-crystalline freestanding gallium nitride (GaN) film is obtained using this technique. A high-throughput process is demonstrated to produce multiple freestanding membranes from a single wafer and in a single growth without involving time-consuming processes. Substrate recycling is shown without the need for wafer polishing. To emphasize the uniqueness of the ultrathin single-crystalline GaN membrane, a chip-less wireless e-skin based on surface acoustic wave sensors is reported here. This device offers high sensitivity, low power consumption, and long-term sensing of strain and ultraviolet light. The merits of remote epitaxy are not limited to fabricating freestanding membranes. Previously, remote heteroepitaxy was found to be able to relax the misfit strain caused by lattice mismatch. For example, the large lattice mismatch between high-indium-content indium gallium nitride (InGaN) and the GaN substrate impedes the efficiency of the InGaN-based red micro-LED. 
In the last part, a relaxed high-indium-content InGaN film grown on BN is demonstrated. The as-grown film shows 70 percent relaxation, 28 percent indium content, and improved crystalline quality compared with InGaN-on-GaN heteroepitaxy. We envision that this InGaN buffer can serve as the foundation for InGaN-based red LEDs in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155856</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/155853</link>
<description>Essays in Financial Economics
Olshanskiy, Yury
In Chapter 1, I investigate abnormal behavior in individual stocks using two decades of high-frequency U.S. stock market data. I identify hundreds of thousands of short episodes where stocks exhibit “explosive” behavior, deviating from the unit-root null hypothesis. These phenomena span multiple days, differ from typical return movements, and affect a wide range of stocks, including liquid and large-cap stocks. Explosive episodes account for a considerable portion of stocks’ idiosyncratic variance. These are transitional episodes with partial reversal, providing predictable returns and setting them apart from large overnight and high-frequency jumps. I analyze stocks and their susceptibility to explosive behavior in connection with aggregate market fluctuations. While downward explosions tend to cluster among stocks and are more pro-cyclical, upward explosions appear as an idiosyncratic phenomenon. Explosive episodes involve significant buying and selling pressure along with trading volume. To explain explosive price movements, the paper introduces a model involving inelastic buyers, insiders, and competitive sellers. It emphasizes the role of explosions in the price discovery process and addresses the observed reversal. The frequency, severity, and reversal of explosiveness are explained by the expected size of inelastic demands, the knowledge possessed by a representative insider, and the frequency of seeing both in the market. Using short interest dissemination dates, empirical tests validate the model’s predictions, indicating a higher likelihood of explosive behavior in stocks with substantial reported short interest. Chapter 2, joint work with Roman Sigalov, studies the stability of the factor structure by analyzing its variation across different market events. 
We start by documenting variation in distributions, means, volatilities, and correlations in a set of characteristic-managed long-short portfolios on the weeks with large market moves, leading earnings announcements, and FOMC announcements with unexpected shocks to interest rates. This variation manifests in differences in factors extracted using characteristics based on statistical methods that we document using Instrumented PCA. The factor structure shows variation in the factor loadings and in the distribution of factors itself. We propose two ways of capturing event-specific variation in the factor structure. The first method, Treatment-IPCA, estimates orthogonal factors specific to the events we consider. We find significant premia associated with some treatment factors. The second method, Boosted-IPCA, allows us to test the differential importance of firm characteristics in describing the cross-section of stock returns on market events relative to base periods. Chapter 3 explores market making under imperfect competition. Using a dataset on individual-level intraday market making in an option market, I demonstrate a significant level of concentration in liquidity provision across options. I propose a dynamic duopoly market making model wherein inventory distribution shapes agents’ strategic behavior and observed liquidity provision on best quotes. I characterize the solution up to the optimal actions, enabling straightforward numerical solutions under both non-cooperative and cooperative equilibria. Qualitatively, the equilibria differ under various sets of parameters, allowing for a wide range of possible inventory and liquidity dynamics, some of which are non-trivial. Tight capital constraints and a high rate of order arrival lead to violations of a monotonic principle. In particular, this results in observing “resting” market maker behavior when an agent does not provide liquidity. 
Conversely, relaxed constraints lead to a more standard equilibrium where market makers reduce inventory imbalances. Analyzing a grim-trigger non-Markov equilibrium, I find that collusive behavior among market makers increases liquidity prices but reduces their variability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155853</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Attention To Retention: The Informativeness Of Insider's Decision to Retain Shares</title>
<link>https://hdl.handle.net/1721.1/155852</link>
<description>Attention To Retention: The Informativeness Of Insider's Decision to Retain Shares
Voelcker, Gabriel
I show that corporate insiders’ decision to retain shares is pervasive and informative about future firm performance. Insiders file Form 144 with the US Securities and Exchange Commission to report their intention to introduce unregistered stock into their company’s public float. However, the form is not binding—insiders can legally choose to not follow through with a proposed sale at virtually no cost. I document that insiders’ retaining shares is pervasive: for as much as 36% of the proposed sales, insiders choose to retain at least some of the shares (i.e., “Retentions”) after they could have sold them. Retentions are associated with a 4.0% increase in annualized returns versus Sales. Additional analyses suggest that retaining shares is related to private information about the firm’s financial performance and to stock mispricing. Collectively, the results highlight yet another signal that should be accounted for when interpreting insiders’ trading decisions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155852</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examination of two remarkable AAA+ proteases: unraveling substrate-enzyme interactions of the double-ringed ClpAP and direct thermal activation of HslUV proteolysis</title>
<link>https://hdl.handle.net/1721.1/155851</link>
<description>Examination of two remarkable AAA+ proteases: unraveling substrate-enzyme interactions of the double-ringed ClpAP and direct thermal activation of HslUV proteolysis
Shih, Tsai-Ting (Irene)
AAA+ (ATPases associated with diverse cellular activities) proteases are degradation enzymes that, together with other molecular machines including disaggregases and chaperones, are critical to the maintenance of cellular protein homeostasis. Different members of this quality control network swing into action from the moment a nascent polypeptide emerges from the ribosome’s exit channel, facilitating proper folding, to the end of a protein’s lifetime, when damaged and unnecessary proteins are removed by degradation. Cells are subject to varying environmental conditions and stressors that impact the health and integrity of their proteomes, conditions that render these protein-quality control enzymes especially important. One arm of these responses is energy-dependent protein degradation by AAA+ proteases. AAA+ proteases contain two major catalytic components, an ATP-dependent unfoldase and an associated peptidase, which can be genetically fused or exist as two independent proteins docked together. All AAA+ mechanoenzymes use energy released from continual ATP hydrolysis cycles to perform work on other macromolecular substrates. In the case of the ring-shaped unfoldases, their axial channels are lined with pore loops that bind and translocate along the substrate; these translocation power strokes can, in turn, exert an unfolding force on structured parts of the protein, and once unfolding is successful, the remaining polypeptide can be processively translocated into the partner peptidase for proteolysis. Here, I provide evidence regarding how high growth temperatures activate degradation by the AAA+ protease HslUV. Furthermore, I establish an initial map of sequence determinants located within the substrate that provide the ClpAP protease with contacts to enhance its grip, thereby improving the probability of successful unfolding of stable protein structures and thus the efficiency of substrate degradation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155851</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Online Platforms and Human-Algorithm Interaction</title>
<link>https://hdl.handle.net/1721.1/155846</link>
<description>Essays on Online Platforms and Human-Algorithm Interaction
Moehring, Alex
This dissertation contains three chapters that analyze how algorithms on social media platforms influence the content that users engage with and how individuals incorporate algorithmic predictions in their decision-making. In Chapter 1, I study how engagement-maximizing news feed algorithms on social media affect the credibility of news content with which users engage. This allows me to estimate the extent to which engagement-maximizing algorithms promote and incentivize low-quality content. In addition, I evaluate how the ranking algorithm itself can be designed to promote and encourage engagement with high-quality content. In Chapter 2, I analyze how the introduction of a new non-personalized news feed impacts the quantity, quality, and diversity of user engagement on the Reddit platform. I find that this auxiliary feed increases the share of users who engage with news-related content and increases the diversity of engagement, both within news categories and across articles from publishers spanning the political spectrum. In Chapter 3, in collaboration with Nikhil Agarwal, Tobias Salz, and Pranav Rajpurkar, we study human-AI collaboration using an information experiment with professional radiologists. Results show that (i) providing AI predictions does not always improve performance, whereas (ii) providing contextual information does. Radiologists do not realize the gains from AI assistance because of errors in belief updating – they underweight AI predictions and treat their own information and AI predictions as statistically independent.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155846</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on the Role of Individualized Data in Marketing</title>
<link>https://hdl.handle.net/1721.1/155845</link>
<description>Three Essays on the Role of Individualized Data in Marketing
Wang, Yifei
This dissertation consists of three chapters on understanding the opportunities and challenges of using individualized data in marketing, in the context of mobile economy, retailing, and education. &#13;
&#13;
The first chapter investigates the impact of market structure on the use and abuse of individualized data. In particular, I study how a shift in competition affects the nature and type of a firm's intrusion on consumers' privacy. I study this question in the context of Android app markets in China, and measure privacy by examining apps' permission requests. I investigate a 2017 regulation that reduced competition in censored app categories by prohibiting censorship-circumvention tools commonly used to access apps banned by the government. This regulation made banned apps less accessible and reduced the competition faced by permitted apps in censored categories but did not affect apps in uncensored categories. I apply a synthetic difference-in-differences approach to Android permission requests by apps in censored and uncensored categories before and after the regulatory change. I show that reducing competition led to a significant increase in the permission requests by apps in censored categories. Empirically, I show that this increase in privacy-invasive behavior is due to treated apps' efforts to improve consumer engagement and monetize attention. &#13;
&#13;
The second chapter investigates the correlations in an individual customer's willingness to engage in search across different decisions and contexts. I show empirically that the amount of search a customer engages in is correlated across seemingly unrelated tasks. I prove theoretically that this leads to correlations in customer decisions in seemingly unrelated product categories and contexts. I use these theoretical and empirical findings to explain the 'harbingers of failure' phenomenon documented in the recent literature: a series of findings showing that there exist customers who systematically buy new products that fail, across product categories and decision contexts. In this paper I argue that one latent characteristic that could contribute to the effect is a customer's willingness to engage in search, and show how the theoretical and empirical findings on the interrelated search of individual customers can explain the harbingers of failure phenomenon.&#13;
&#13;
&#13;
The third chapter studies how access to digital educational content affects inequality in education. In particular, our analysis uses individualized data on children’s reading behavior from an eBook app to trace out both the short-run and long-run treatment effects of providing free access to digital reading resources to children with different socio-demographic backgrounds. We find that free access to digital content leads to a dramatic and immediate increase in reading time for treated children, and that this immediate effect is much larger for children from less developed regions with fewer educational resources. However, children's reading activities decline quickly after the start of their free access, and this decline is much faster for children from less developed regions. Further evidence suggests that children from more developed regions benefit more from the free access in the long run. Our mechanism analysis further reveals a nuanced complementarity between digital and non-digital education.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155845</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of the drying of solids</title>
<link>https://hdl.handle.net/1721.1/155827</link>
<description>The mechanism of the drying of solids
Sherwood, Thomas K. (Thomas Kilgore), 1903-1976.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1929; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1929 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155827</guid>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Band structure of silver chloride and silver bromide by the augmented plane wave method</title>
<link>https://hdl.handle.net/1721.1/155823</link>
<description>Band structure of silver chloride and silver bromide by the augmented plane wave method
Scop, Peter Michael.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1964; Vita.; Includes bibliographical references (leaf 121).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155823</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tryptophan and indole accumulation by Escherichia coli mutants</title>
<link>https://hdl.handle.net/1721.1/155821</link>
<description>Tryptophan and indole accumulation by Escherichia coli mutants
Lim, Peter Gan Pin.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1964; Vita.; Includes bibliographical references (leaves 54-58).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155821</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of superpower political-military interaction.</title>
<link>https://hdl.handle.net/1721.1/155814</link>
<description>An analysis of superpower political-military interaction.
Platpe, William Allan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1971; Vita.; Bibliography: leaves 332-344.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155814</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development planning in a dual economy.</title>
<link>https://hdl.handle.net/1721.1/155813</link>
<description>Development planning in a dual economy.
Dixit, Avinash K.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1968; Vita.; Includes bibliographies.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155813</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring Microstructural Change via Non-Contact Ultrasonic Property Measurements</title>
<link>https://hdl.handle.net/1721.1/155657</link>
<description>Inferring Microstructural Change via Non-Contact Ultrasonic Property Measurements
Dacus, Benjamin R.
Reactor pressure vessels (RPVs) are irreplaceable components in nuclear reactors that need to be consistently evaluated to ensure the safety of operation. With the push to extend reactor lifetimes in the US to 80 years in order to meet low-carbon energy demands, these RPVs need to be evaluated with non-destructive methods due to the limited number of representative RPV coupons that exist for material evaluation via destructive tests. Transient Grating Spectroscopy (TGS) has been proposed as a non-destructive technique that can evaluate material evolution in reactor systems, but its applicability as an inference model has yet to be demonstrated on RPVs or similar systems. In this thesis, TGS is used to develop an inference model for microstructural evolution in precipitation-hardening alloys in order to understand the applicability of TGS as a non-destructive measure-of-health technique to investigate RPVs in commercial nuclear reactors. This is accomplished by showing the effects of solute clustering, precipitation, and grain growth due to thermal aging and irradiation in binary Cu-3at%Ti, where a clear relationship exists between thermal diffusivity, surface acoustic wave (SAW) frequency, and microstructure. A model RPV alloy, which experiences embrittlement via the same mechanisms as commercial RPV alloys, is investigated with TGS and atom probe tomography (APT) to show how commercial alloy precipitation affects the evolution of thermal diffusivity measured by TGS after thermal aging and irradiation. The results show, in a trend different from that seen in the Cu-3at%Ti alloy, an initial decrease in thermal diffusivity due to nanoscale precipitates effectively scattering lattice phonons, followed by a continuous increase in thermal diffusivity with continued thermal aging. A theoretical model for thermal conductivity is developed using the observed microstructural features as a function of thermal aging time, and its predictions match closely with the experimental results. 
Finally, APT and initial TGS analyses of commercial RPVs show that while promise exists for using TGS to infer RPV health, additional measurements are necessary to reach the required sensitivity. The development of inference models for understanding microstructural evolution in precipitation-hardening alloys via properties measured with TGS shows the potential of TGS to be used in the future as a non-destructive method for evaluating RPV health.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155657</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopy of Topological Materials and their Technological Applications</title>
<link>https://hdl.handle.net/1721.1/155654</link>
<description>Spectroscopy of Topological Materials and their Technological Applications
Nguyen, Thanh
Topological materials have been at the forefront of condensed matter physics and materials science research in the past decade. The concept of topology has greatly shaped our modern understanding of matter, including how band topology can tailor the exotic electronic and structural properties of a material. The prediction and subsequent experimental observation of topological semimetals stimulated an exploration of a substantial number of new materials. These range from verifying new exciting theoretical predictions to examining compilations of vast databases obtained through high-throughput computational and machine-learning approaches. In this thesis, we explore these new research directions by using new spectroscopic techniques to probe topology and other phonon-related phenomena in materials and by investigating potential applications of topological materials in modern-day technologies.&#13;
&#13;
We begin the dissertation by illustrating a chronicle of research into topology in condensed matter physics over the decades, the techniques of x-ray and neutron spectroscopies, and an overview of current progress in modern technologies. The first part of the dissertation describes using x-ray and neutron spectroscopies as a novel probe of topology. In particular, we describe how the nesting of Weyl nodes may lead to exotic phenomena such as anomalies in phonon dispersions and the stabilization of exotic magnetic orders, both of which are measurable using x-ray and neutron probes. We then focus on probing phonon transport in materials using x-ray scattering along two interesting directions. First, we demonstrate a method that overcomes the difficult task of measuring the thermal properties of a thin film on a substrate. By measuring the gradual diffusive lattice relaxation subsequent to laser-induced heating using ultrafast x-ray diffraction, we are able to extract the thermal conductivity of a thin film as well as the thermal boundary conductance across the film-substrate interface, with lateral sensitivity and with information about local defects. Second, we probe a nonequilibrium state of matter termed many-body localization by demonstrating a departure from thermalization of phonons in disordered semiconductor superlattices.&#13;
&#13;
The second part of the dissertation focuses on technological applications of topological materials following their experimental discovery and recent theoretical developments. Topological materials may possess unique material properties that make them applicable, and potentially suitable for large-scale implementation, in certain technological niches based on current necessities. We investigate how topological materials may be part of future thermometric material candidates, as they can acquire large entropy through unique Landau level quantization and a large Berry curvature-induced anomalous Nernst effect. We describe measurements of nonlinear Hall effects in these materials, which highlight their possible use in energy harvesting and terahertz applications. We also describe ideas of using topological materials as future post-copper back-end-of-the-line interconnects in integrated circuits due to favorable resistivity scaling with decreasing size. Finally, we showcase some research into spintronic-related logic and memory applications due to the generation of large spin-orbit torques in these materials, resulting in efficient current-driven switching capabilities.&#13;
&#13;
We conclude by presenting an outlook of future research directions in the field of topological materials and the prospects of their integration into modern-day electronic and thermal technologies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155654</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Method for Cryogenic Fluid Thermal Conductivity Measurements and Thermophysical and Transport Properties of Hydrogen-Helium Mixtures</title>
<link>https://hdl.handle.net/1721.1/155652</link>
<description>A New Method for Cryogenic Fluid Thermal Conductivity Measurements and Thermophysical and Transport Properties of Hydrogen-Helium Mixtures
Hamilton, Ben
New high-field, high-temperature superconducting magnets for fusion applications can operate at temperatures up to 65 K, well above those of low-temperature superconducting magnets. These higher temperatures allow for the consideration of hydrogen as a magnet coolant, either pure or as a mixture with helium. However, no experimental data exist on the thermophysical and transport properties of supercritical hydrogen-helium mixtures in the 15-70 K temperature regime, and theoretical work is limited. Measurement of the transport properties of cryogenic fluids is difficult due to issues related to natural convection. This thesis presents a new method, known as the confined 3ω method for fluids, which simultaneously measures the thermal conductivity and thermal diffusivity of cryogenic fluids. This method is then combined with measurements of density and viscosity to measure the thermophysical and transport properties of hydrogen-helium mixtures from 20-100 K.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155652</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>TEXterity: Tactile Extrinsic deXterity</title>
<link>https://hdl.handle.net/1721.1/155651</link>
<description>TEXterity: Tactile Extrinsic deXterity
Kim, Sangwoon
This thesis introduces the concept of TEXterity (Tactile Extrinsic deXterity) to address challenges in robotic manipulation. Focusing on tactile sensing, TEXterity aims to enhance dexterity by enabling robots to perceive and act upon extrinsic contact between the grasped object and the environment. Identifying interpretability, observability, and uncertainty as key challenges in tactile sensing, this thesis sets out to answer four pivotal questions: &#13;
• Is tactile sensing actually useful?: Chapter 2 explores the practical utility of tactile sensing through an examination of its application in a peg-in-hole insertion task. It demonstrates the advantages of tactile sensing over conventional force-torque sensing. &#13;
• How can we exploit tactile sensing efficiently?: Chapter 3 proposes an efficient approach to exploit tactile sensing by introducing extrinsic contact as an interpretable representation. This method offers scalability and effectiveness in utilizing tactile data. &#13;
• How can we reason about extrinsic contact with tactile sensing?: Chapter 4 develops a novel method for simultaneous estimation and control of extrinsic contact states, addressing uncertainties introduced by physical interactions. It enables robots to reason effectively about extrinsic contact using tactile sensing in a controlled manner.&#13;
• How can we achieve extrinsic dexterity with tactile sensing?: Chapter 5 extends the simultaneous estimation and control method to achieve extrinsic dexterity, showcasing precise in-hand sliding regrasps facilitated by pushing the object against the external environment. &#13;
The conclusion summarizes the key findings, emphasizing the significance of tactile sensing and TEXterity in addressing challenges and advancing robotic manipulation. Strategies to tackle major challenges are outlined, focusing on interpretability, observability, and uncertainty. &#13;
In essence, this thesis lays the groundwork for unlocking the potential of tactile sensing in robotic manipulation, offering insights, solutions, and avenues for future research to propel the field toward achieving TEXterity and further toward human-level dexterity.&#13;
Please see real-world demonstration videos at this url: https://youtube.com/playlist?list=PLyqras4WFJI2q_TQshgVdWsKlxtjKolKL&amp;si=AlSWgcu4BuJSkMCo
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155651</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physiology-inspired deep learning for improved heart failure management</title>
<link>https://hdl.handle.net/1721.1/155650</link>
<description>Physiology-inspired deep learning for improved heart failure management
Schlesinger, Daphne E.
Heart failure is an increasingly prevalent condition, which is associated with significant morbidity and mortality. While there has been profound progress in the development of pharmacotherapy and specialized devices for heart failure in recent decades, challenges remain in disease diagnosis and management. One of the key issues is that central hemodynamics and cardiac mechanics, the quantities that characterize the state of a heart failure patient, are difficult to measure. Deep learning methods have shown promise for addressing problems in clinical medicine but are fundamentally limited by their opacity to interpretation, which inhibits model trust and adoption. In this thesis, we propose physiology-inspired deep learning approaches to improve heart failure management. &#13;
&#13;
Central hemodynamic parameters are typically measured via pulmonary artery catheterization — an invasive procedure that involves some risk to the patient and is not routinely available in all settings. In Chapter 2, we sought to develop a noninvasive method to identify elevated mean pulmonary capillary wedge pressure (mPCWP). We leveraged data from 248,955 clinical records at the Massachusetts General Hospital (MGH) to develop a deep learning model that can infer when the mPCWP &gt; 15 mmHg using the 12-lead electrocardiogram (ECG). Of these data, 242,216 records were used to pre-train a model that generates useful ECG representations. The remaining 6739 records contain encounters with direct measurements of the mPCWP from right heart catheterizations (RHCs), which provide gold-standard hemodynamic measurements. Eighty percent of these data were used for model development and testing (4304 in train, 546 validation, and 540 in the test set), and the remaining records comprise a holdout set (1349) that was used to evaluate the model. We developed an associated unreliability score that identifies when model predictions are likely to be untrustworthy. The model achieves area under the receiver operating characteristic curve (AUROC) scores of 0.80 ± 0.02 (test set) and 0.79 ± 0.01 (holdout set). Model performance varies as a function of the unreliability, where patients with high unreliability scores correspond to a subgroup where model performance is poor: for example, patients in the holdout set with unreliability scores in the highest decile have a reduced AUC of 0.70 ± 0.06. These results demonstrate that the mPCWP can be inferred from the ECG, and the reliability of this inference can be measured. When invasive monitoring cannot be expeditiously performed, deep learning models may provide information that can inform clinical care. &#13;
&#13;
We extended this work in Chapter 3, and developed a Cardiac Hemodynamic Artificial Intelligence monitoring System (CHAIS) that uses single-lead ECG data to infer when cardiac hemodynamics are abnormal. CHAIS is a deep neural network that was trained to detect abnormal cardiac hemodynamics using just lead I of the 5930 paired ECG recordings and RHCs from MGH used in Chapter 2. CHAIS was tested on the internal holdout set of 1439 paired single-lead ECGs and RHCs (858 patients) from MGH and on an external validation set of 4629 paired ECGs and RHCs (2577 patients) from another institution. We also prospectively collected single-lead ECG data using a commercially available wearable ECG monitor, from 83 patients who were scheduled for a RHC at MGH, and used CHAIS to infer if their left atrial pressures would be elevated at the time of their RHC. CHAIS achieves an AUROC of 0.80 for detecting elevated left atrial pressures on the internal test dataset and 0.76 on the external validation set. On patients who wore a wearable ECG monitor before RHC, CHAIS had an AUROC of 0.70; however, when ECG data are available within 1.25 hours before catheterization, the AUROC is 0.875. These results demonstrate the utility of ambulatory cardiac hemodynamic monitoring with a wearable ECG monitor.&#13;
&#13;
Finally, in Chapter 4, we described an approach to directly incorporating knowledge of cardiovascular physiology into a deep learning framework. The framework consists of a neural network encoder, to map from arterial blood pressure (ABP) waveforms to latent cardiovascular parameters, and a mechanistic model which maps from those parameters to hemodynamic waveforms, called Cardiovascular Simulator (CVSim). We trained the model on a synthetic data set, and found that the model achieved a multiclass accuracy of 47.8 percent for placing samples into classes in terms of their left ventricular end diastolic pressure (LVEDP), cardiac output (CO), and left ventricular ejection fraction (LVEF), using clinically relevant thresholds. Here, we also proposed a physiology-inspired trust score, and find that the multiclass accuracy is higher in the subset of samples in the lowest decile with respect to the score, compared to results on the rest of the data. On applying the model to a small clinical data set from patients in intensive care settings at MGH, classification performance in terms of LVEDP, CO, and LVEF was limited, given the various challenges of transfer from synthetic to real clinical data. However, we observed that cardiac mechanical parameters inferred from the clinical data set trended positively with the administration of pharmaceutical agents expected to modulate those parameters. This suggests that the model can glean meaningful information from clinical ABP waveforms and shows promise for future development. &#13;
&#13;
With further clinical testing, the suite of methods described in this thesis have the potential to advance heart failure care by enabling non-invasive central hemodynamic monitoring and minimally-invasive inference of cardiac mechanics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155650</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational prediction of health status from the human gut microbiome and metabolome</title>
<link>https://hdl.handle.net/1721.1/155649</link>
<description>Computational prediction of health status from the human gut microbiome and metabolome
Dawkins, Jennifer
The collection of microorganisms that reside in the gut is termed the gut microbiome and is crucial to overall human well-being. Gut microbiome dysfunction, or dysbiosis, has been implicated in a broad range of diseases including inflammatory bowel diseases (IBDs), cardiovascular diseases, kidney diseases, metabolic diseases, and gastrointestinal infections like Clostridioides difficile infection (CDI). Often, microbiome-linked illnesses arise after the microbiome is disrupted, such as by antibiotic treatment. However, because the microbiome is so diverse and individual-specific, very little is known about the specific microbial changes that may lead to human disease. Thus, it is extremely difficult to predict whether a given disruption to the microbiome will result in disease. &#13;
&#13;
Of the diseases linked to gut microbial dysfunction, dysbiosis is perhaps most prominently linked to CDI. As the most common healthcare-associated infection, CDI is thought to occur when an individual has had both exposure to the C. difficile pathogen and gut dysbiosis caused by a past perturbation, such as antibiotic treatment. Infection recurrence, with an estimated rate of 15.5%, is a particularly insidious problem, and there is currently no reliable method to predict which individuals will recur. There is a need for early prediction of CDI after a perturbation, as this can allow physicians to start or restart more effective treatments immediately and prevent further sickness and risk of death.&#13;
&#13;
Current research into the microbiome and microbiome dysbiosis, including CDI, focuses heavily on identifying microbial taxonomic composition using next-generation sequencing. However, there is growing evidence that the gut metabolome may provide crucial information that cannot be gained from microbial composition alone, as metabolites provide the means by which host cells and microbial cells communicate with each other. Predictive analyses are especially useful for uncovering links between metabolic or microbial composition features and host disease state, as they model all input covariates simultaneously. However, current predictive methods often fall short when applied to the microbiome: simpler methods lack the capability to model this complex system, whereas highly non-linear “black box” methods lack interpretability. When predicting from biological or medical data with the goals of clinical utility and advancement of scientific knowledge, a model that can explain its decisions is crucial for increasing physician trust and uncovering avenues for future investigation. There is a need for interpretable computational models that can learn non-linear relationships between host outcome and paired microbial composition and metabolomic profiles.&#13;
&#13;
This thesis addresses these two challenges. First, we present the analysis, including predictive analyses, of a novel longitudinal study of CDI recurrence in patients, which demonstrates that a small set of metabolites can accurately predict future recurrence. Our findings have clinical utility in the development of diagnostic tests and treatments that could ultimately short-circuit the cycle of CDI recurrence. Second, we present a novel predictive model developed specifically for making interpretable predictions on paired microbial composition and untargeted metabolic profiles. We demonstrate our model’s ability to predict a variety of host disease states accurately while providing clear and biologically compelling explanations of its decisions, thereby demonstrating high clinical and scientific utility.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155649</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling sustainable mineral supply pathways to meet clean energy demand</title>
<link>https://hdl.handle.net/1721.1/155648</link>
<description>Modeling sustainable mineral supply pathways to meet clean energy demand
Bhuwalka, Karan
The adoption of renewable energy technologies hinges on the availability of many critical minerals. To meet the large demand for critical minerals, it is vital to scale up mineral supply in an environmentally and socially responsible way while maintaining low materials costs for key technologies. To guide policy and technology innovation that meets this objective, we need robust approaches for evaluating the availability and costs of materials. However, traditional approaches for assessing material availability or ‘criticality’ do not incorporate price feedback or a structural understanding of how material supply evolves. In this thesis, I build a model that simulates metal demand, mine opening and operation decisions, and mineral reserve development while incorporating price feedback. This model is used to evaluate how factors such as the rate of demand growth, material substitutability, and recycling rates impact materials prices and availability in the long term. The model is then applied to data on real mining projects for two key battery materials: nickel and lithium. Model simulations analyze supply pathways through 2040 to identify strategies that reduce the risk of materials supply constraints impacting clean energy technology deployment. &#13;
&#13;
Results demonstrate that a combination of high mining productivity, development of material substitutes, and high recycling rates reduces the prevalence of availability risks from ~90% to just under 2% for materials experiencing high demand. In the nickel case, results show that environmental regulation can reduce impacts such as supply-chain emissions by 50% but can lead to a 2x increase in nickel prices, with only 70% of baseline nickel demand being satisfied. However, if regulations are combined with innovation that lowers processing costs and market coordination that reduces project development timelines and risks, over 90% of demand is met without price increases. Similarly, in the lithium case, reducing mine development timelines from 8 years to 6 years can increase the percentage of demand satisfied from 82% to 92% by moderating supply shortages and lithium prices.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155648</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Myelination diseases of the central nervous system: Artificial Axons as in vitro models of chemomechanical cues</title>
<link>https://hdl.handle.net/1721.1/155646</link>
<description>Myelination diseases of the central nervous system: Artificial Axons as in vitro models of chemomechanical cues
Yang, Mingyu
Myelination is a key biological process wherein glial cells such as oligodendrocytes wrap myelin around neuronal axons, forming an insulative sheath that accelerates signal propagation down the axon. A major obstacle to understanding myelination is the challenge of visualizing and reproducibly quantifying this inherently three-dimensional process in vitro. To this end, Van Vliet et al. previously developed Artificial Axons (AAs), a biocompatible platform consisting of 3D-printed axon mimics that can be ensheathed by oligodendrocytes in vitro. In this thesis, we advance and apply the Artificial Axon platform to create in vitro models of lesion-like environments to elucidate the mechanisms underlying myelination diseases.  &#13;
&#13;
First, we improve the existing AA platform to investigate how biophysical cues affect myelin wrapping by rat oligodendrocytes. We build a new high-resolution 3D printer (HR-3DP) that can fabricate AAs with sub-kilopascal elastic moduli and &lt;2 µm diameters. These properties are clinically relevant, as prior neuroimaging data from human patients show correlations between demyelinating diseases and changes in brain stiffness, axon diameter, and axon density. An open question is whether these biophysical changes simply act as correlative biomarkers or contribute directly to disease progression. We demonstrate that the extent of myelin ensheathment by rat oligodendrocytes is sensitive to the Young’s modulus, diameter, and density of axons, indicating that each of these biophysical cues may play a causal role in influencing an oligodendrocyte’s propensity to myelinate. We further demonstrate that the responses of oligodendrocytes to pro-myelinating compounds depend on axon stiffness, and that the relative ranking of drug efficacies differs between stiff and compliant axons. These results reinforce the importance of studying myelination in mechanically representative environments, and highlight the importance of considering biophysical cues when conducting drug screening studies for pro-myelinating compounds.&#13;
&#13;
Second, we demonstrate the promise of using AAs to model lesion-like environments using human oligodendrocytes. For example, multiple sclerosis (MS) is a demyelinating disease affecting over one million adults in the United States, characterized by the destruction of myelin through a range of immune-mediated mechanisms. The accumulation of myelin debris and inflammatory cytokines in the brains of MS patients is thought to contribute to a growth-inhibitory environment that impairs myelin repair. We used AAs to model the impact of myelin debris and microglia co-culture on myelin ensheathment, recapitulating in vivo results demonstrating a dose-dependent effect of myelin debris on myelin ensheathment. We further demonstrate the compatibility of the AAs with myelination by human oligodendrocytes derived from induced pluripotent stem cells (iPSCs). In particular, we explore the effect of apolipoprotein E (ApoE) genotype on myelin ensheathment, based on clinical data showing that individuals with the ApoE4 allele exhibit worsened MS prognosis compared to individuals with the ApoE3 allele. Finally, we demonstrate how targeted perturbations to cholesterol metabolism pathways differentially impact ApoE3 vs. ApoE4 human oligodendrocytes. In sum, these results demonstrate the potential of AAs to elucidate the cellular and molecular mechanisms of myelination in the context of human disease.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155646</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Resolution Experimental Measurements and Mechanistic Modelling of Saturated Cryogenic Pool Boiling Heat Transfer</title>
<link>https://hdl.handle.net/1721.1/155628</link>
<description>High-Resolution Experimental Measurements and Mechanistic Modelling of Saturated Cryogenic Pool Boiling Heat Transfer
Chavagnat, Florian
Refueling cryogenic rockets in low Earth orbit has the potential to significantly enhance the duration and reach of future space missions. However, the development of such capabilities is not without challenges, which are exacerbated by the complexity and cost of testing such equipment in microgravity, e.g., a fuel depot placed in orbit. The low boiling point of cryogenic fuels (hydrogen, methane, oxygen) makes them prone to boiling when transferred through superheated pipes or simply stored in fuel tanks, resulting in the presence of a two-phase mixture. Boil-off gas can pressurize components such as fuel lines and tanks, cause two-phase flow instabilities during fuel transfer, or significantly reduce the usable amount of cryogenic fuel. Progress in multiphase computational fluid dynamics (mCFD) can be leveraged to predict the two-phase flow behavior in such cases. However, current boiling models not only offer poor prediction accuracy of the boiling heat flux in most applications but also miss critical boiling characteristics, e.g., the amount of vapor produced.&#13;
This thesis proposes a fully closed formulation of a mechanistic boiling model adapted to saturated cryogenic pool boiling. The model leverages exhaustive measurements of boiling parameters. A new pool boiling setup was designed for that purpose, using pressurized nitrogen as a proxy for cryogenic fuel. The apparatus allows measurement of the typical boiling curves, i.e., boiling heat flux and wall temperature, as well as dry area fraction, triple contact line density, and more fundamental parameters such as bubble lift-off diameter, nucleation frequency, and nucleation site density, among others. The heating surface inclination was varied between 0° (upward-facing horizontal) and 179° (downward-facing horizontal) to probe the effect of buoyancy on the boiling parameters and overall heat transfer.&#13;
The analysis of individual bubbles using both phase detection and shadowgraphy showed the lack of a microlayer. Instead, the large size of the bubble footprint observed experimentally strongly suggested an intense evaporation process at the triple contact line of the bubbles, occurring right after nucleation. Based on this observation, the evaporation rate at the triple contact line has been indirectly estimated on numerous bubbles and appeared consistent with analytical models describing such an evaporation mechanism in other fluids. In the tested operating conditions, the linear evaporation rate was measured at around 5 W/m, accounting for 20% of the boiling heat flux. The amount of heat removed by quenching of the heating surface through transient conduction has also been evaluated using phase-detection recordings and was&#13;
shown to account for an additional 20% of the boiling heat flux. The remaining portion of the boiling heat flux could be explained by neither triple contact line evaporation nor transient conduction during quenching.&#13;
The heat flux partitioning model proposed in this work allows prediction of the quenching heat flux and the triple contact line evaporation, as well as the observed dry area fraction, contact line density, and bubble density during nitrogen boiling. Minimal effect of surface inclination on nucleate boiling heat transfer has been observed, except at departure from nucleate boiling.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155628</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Therapeutic applications of DNA origami-based programmable nanoparticles</title>
<link>https://hdl.handle.net/1721.1/155627</link>
<description>Therapeutic applications of DNA origami-based programmable nanoparticles
Arnold, Olivia Jane Young
DNA origami utilizes complementary Watson-Crick base pairing of DNA to self-assemble highly programmable nanoparticles. These nanoparticles have distinct advantages over other nanoparticle delivery platforms, including polymeric and lipid nanoparticles, in that they offer precise nanoscale-resolution control over the attachment of therapeutic cargo, while other nanoparticle platforms offer control only over average ligand density. In this thesis, we demonstrate the therapeutic utility of DNA origami for cancer and infectious diseases. First, we demonstrate that modulating the nanoscale arrangement&#13;
of an adjuvant enhances the efficacy of cancer vaccines. Second, we demonstrate that this DNA origami nanoparticle can be used as a modular delivery vehicle for infectious disease associated antigens, enabling rapid response during pandemic situations. Third, we demonstrate the conjugation of CD40 ligand, an immune-activating molecule, onto the DNA origami nanoparticle, and describe initial investigations into the diverse spatial arrangements of CD40L and preliminary effects on the immune response. Collectively, these studies illustrate the potential of DNA origami as a therapeutic for various disease areas, as well as its potential as a tool for investigating biological&#13;
receptor-ligand interactions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155627</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automation Framework for Exploration Medicine (AFEM): A path for integrating automation into autonomous emergency care</title>
<link>https://hdl.handle.net/1721.1/155623</link>
<description>Automation Framework for Exploration Medicine (AFEM): A path for integrating automation into autonomous emergency care
Porter, Allison Paige
Long communication latencies in exploration spaceflight will make current real-time support paradigms for urgent medical events infeasible. Further, as mission duration increases for exploration, so too will the probability of adverse low- and high-criticality medical events. The need for in-situ resolution of medical problems will require crewmembers to perform rapid and precise decision-making to both diagnose issues and formulate treatment plans. We posit that integrating automation into the care paradigm can address the challenges to medical care in long-duration spaceflight posed by resource gaps (e.g., training, access to expertise, and tools). However, it is not clear which aspects of the exploration care paradigm are best suited for the integration of automation. Using the lens of Point-of-Care Ultrasound (POCUS), a viable diagnostic tool for exploration medicine due to its portable, low-mass/volume, rapid, and versatile nature, this work proposes a new framework for translating patient care from the hospital to austere and spaceflight environments and explores how automation may enable that transition. We investigated the role of human-automation teams for emergency care in spaceflight through the Automation Framework for Exploration Medicine (AFEM), a process using naturalistic methods with a two-pronged approach: 1) characterizing a candidate task for automation and 2) characterizing the work domain(s) encompassing that task within the human-automation system. To overcome the challenge of characterizing a dynamic system surrounding a task that does not exist in its intended (and inaccessible) use case (i.e., POCUS on Mars), we leveraged existing analogous domains to guide the development of human-automation systems. We conducted in-situ observations in a hospital Emergency Department to understand how clinicians process contextual information in an urgent medical setting to provide care using ultrasound technology. 
We also engaged specialists in semi-structured interviews (based upon human-machine teaming systems engineering methodologies) to identify key procedural information components for automation. Lastly, we developed a Toolkit, grounded in cognitive systems engineering methodologies, that provides a novel framework for identifying domain- and task-specific constraints from analogous environments. A supporting Roadmap provides guidance for experimenters interested in further development of automated and autonomous systems. From this work, we conclude that specific aspects of the care environment which influence the result of a task or process (“Mediating Factors”) from candidate work domains call for distinct, targeted guidance for automation support and are valuable in providing system developers with tunable automation levels and implementation guidelines within and/or between those work domains. Further, our findings elucidated the highest-priority system design requirements for non-expert POCUS end-users regarding transparency, augmenting cognition, and coordination to support generating a common mental model. Finally, the Toolkit and Roadmap scaffold guide system development in integrating automation into this novel ecosystem. This scaffold is well-positioned to be leveraged by other system designers who do not have easy, reasonable, or sufficient access to a unique domain for which they are developing systems. AFEM supports large-scale efforts in preparing for future human exploration missions not only at the level of augmenting exploration medical capabilities, but also at the higher level of developing the structure by which automation and autonomy are integrated into human exploration missions in non-medical domains. 
Evidence-based design practice is directly translatable to automation assistance for medical providers in resource-limited environments, as well as to any situation where a person’s sensory processing, perception, decision-making, or response selection could be aided with automation to accomplish a task.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155623</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing and using defects in high-temperature superconductor cables and magnets</title>
<link>https://hdl.handle.net/1721.1/155616</link>
<description>Characterizing and using defects in high-temperature superconductor cables and magnets
Ibekwe, Richard Tochi
High-temperature superconductor (HTS) cables and magnets are enabling a range of high-current and high-field applications, including compact fusion devices aiming to achieve net energy. Defects in HTS pose manufacturing, cost, and operational challenges. A rigorous understanding of, and predictive capability for, defect-induced behavior at relevant scales have not been established. To address this shortcoming, a cable-level defect characterization experimental platform coupled to high-fidelity computational modeling is developed, and the concept of using defects as a tool for controlling current distribution in HTS cables is explored. The cable (critical current = 438 A at 77.3 K, self-field) comprises a non-twisted 70 cm-long copper former containing a soldered stack of five rare-earth barium copper oxide (REBCO) tapes (each with a critical current of 115.7 A/4 mm-w at 77.3 K, self-field), which can contain a variety of induced defects. Spatially-resolved electric fields are measured with a high-density voltage tap array, and absolute current distribution with six custom-wound embedded Rogowski coils. 3D circuit modeling uses nodal analysis and self-consistently accounts for the magnetic field dependence of critical current. Design, fabrication, calibration, and testing of the cable and instrumentation are described. Model validation results are presented, comparing cables with no defect, one defect, and two defects, and with tapes oriented in two different configurations. A cable containing a model-guided configuration of intentional defects is studied, demonstrating numerically and experimentally the use of defects for forcing more uniform current distributions in REBCO cables, one of many possible applications. The predictive capabilities developed in this work and the demonstration of the use of defects as a tool could enable the design of full-scale HTS cables and magnets that optimize defect tolerance and performance to improve cost, manufacturability, and robustness.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155616</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Wearable Countermeasure for Musculoskeletal Deconditioning in Human Spaceflight</title>
<link>https://hdl.handle.net/1721.1/155615</link>
<description>A Wearable Countermeasure for Musculoskeletal Deconditioning in Human Spaceflight
Bellisle, Rachel F.
In the reduced gravity experienced during human spaceflight, the human body no longer experiences the constant loading provided by Earth’s gravity. Muscles atrophy due to disuse, the spine elongates, which may cause back pain and discomfort, and sensorimotor changes lead to impaired posture and locomotion. The Gravity Loading Countermeasure Skinsuit (GLCS or Skinsuit) is a musculoskeletal deconditioning countermeasure for spaceflight, consisting of a skin-tight garment that applies a vertical load from the shoulders to the feet. The suit aims to mitigate musculoskeletal deconditioning (potentially including spine, muscle, and sensorimotor) during exposure to microgravity by simulating some of the effects of Earth’s gravity (1G). This wearable system is intended to support human health and astronaut performance during future missions to the moon and Mars (where space for large exercise equipment may be unavailable) and to further mitigate microgravity-induced effects in current missions on the International Space Station (ISS). The high-level objectives of this research are 1) to further prepare the GLCS for operational use in human spaceflight and 2) to improve our understanding of GLCS loading and its effects on the musculoskeletal and sensorimotor systems. &#13;
&#13;
In this thesis, the next-generation versions of the GLCS (the Mk-7 and Mk-8 GLCS) were fabricated, and a proof-of-concept instrumented GLCS was developed with integrated fabric strain sensing capabilities. Next, the loading functions (insole load, shoulder load, skin pressure, and fabric strain) and subjective user experience (discomfort, mobility, exertion, and dyspnea) in the GLCS were characterized in six participants. Subjective user experience was additionally measured in one participant in low-Earth orbit on the ISS. Results provide a characterization of GLCS experimental loads in static and dynamic body positions and indicate that the GLCS has sufficient comfort for 60 to 90 minutes of wear during resistance exercise. &#13;
&#13;
Human participant studies in 1G, partial gravity analogs, and the ISS were used to investigate the effects of the GLCS on neuromuscular activity (surface EMG) and sensorimotor function (proprioception and functional performance). Statistically significant differences in metrics of EMG amplitude were observed between suited and unsuited conditions in some muscles during upright standing, hip external rotation, hip extension, and knee flexion, with additional non-significant trends seen in other activities. These results aim to guide the development of exercise procedures for the GLCS countermeasure system and indicate that the GLCS may be able to contribute to the mitigation of muscle atrophy. In sensorimotor tests, the GLCS produced a non-significant decrease in ankle inversion proprioception compared to unsuited conditions, and further investigation is needed. &#13;
&#13;
This thesis investigated the fabrication of the GLCS, the loading function, the user experience, the impacts on two physiological targets (muscle activity and sensorimotor function), and the intersection of these aspects. The results contribute to the development of the GLCS as a proposed musculoskeletal countermeasure for future missions to low-Earth orbit, the moon, or Mars, and support the goal of the GLCS to maintain the safety, health, and performance of astronauts, ensuring that they can complete mission-critical tasks in flight and return safely to Earth.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155615</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Rheometric Techniques and Constitutive Models for Linear and Nonlinear Rheology: Applications to Polymeric Solutions and Colloidal Gels</title>
<link>https://hdl.handle.net/1721.1/155614</link>
<description>Novel Rheometric Techniques and Constitutive Models for Linear and Nonlinear Rheology: Applications to Polymeric Solutions and Colloidal Gels
John Rathinaraj, Joshua David
Complex fluids are used in a range of industries including the consumer goods, automotive, and oil and gas production sectors. Characterizing the rheology, understanding the flow behavior, and modeling the constitutive response of these complex fluids is important for predicting time-varying material performance under various processing conditions, thus helping prevent failures in large-scale industrial operations. However, modeling the rheological behavior of such complex fluids using canonical constitutive laws that relate the state of stress to the deformation history can be challenging due to their broad relaxation spectrum and time-varying (or “mutating”) material response. Part I of this thesis therefore deals with developing constitutive models relevant for polymeric solutions that exhibit broad viscoelastic relaxation spectra, and mutating material systems such as colloidal gels that may exhibit age-dependent viscoelastic properties. Similarly, characterizing the rheological properties over the full range of time or frequency scales relevant to their use can be challenging, especially if the material properties are ‘thixotropic’ and change with time. Thus, part II of this thesis focuses on developing novel rheometric techniques that can robustly characterize time-varying rheological properties of complex fluids and soft solids. In Part I of this thesis, we utilize fractional differential equations, formulated into fractional constitutive models, to describe the strain and rate-dependence of the stress response in complex fluids. This formulation is capable of quantitatively modeling the linear viscoelastic response of a wide range of polymeric solutions and colloidal gels. Subsequently, we incorporate material nonlinearities into the (inherently linear) fractional models using an integral Boltzmann-like framework which combines a frame-indifferent strain measure with a strain-dependent softening or damping function. 
This enables quantitative description of rheological nonlinearities such as shear thinning and normal stress differences. From here, we evaluate analytical expressions for the steady shear viscosity and viscoelastic moduli in terms of the linear relaxation kernel and the parameters of the damping function. Such analytical expressions provide physical and mathematical understanding for empirical relations such as the Cox-Merz rule and the Gleissle mirror relations that are widely used in industrial rheological characterization. In addition to broad viscoelastic response, many complex fluids “mutate”; i.e., they also show more complex time-dependent dynamics due to rheological aging, thixotropy, and continuous yielding behaviors which are not captured by the integral viscoelastic framework discussed above. To explore these complexities, we first study ‘rheological aging’ in a drilling mud formulated from a bentonite dispersion (a discotic colloidal gel). We present a framework for modeling the linear viscoelastic response in the presence of physical aging based on a transformation to a ‘material time’ domain. In this transformed reference domain, we can again quantitatively describe the constitutive relationship between stress and strain-rate measured in these time-dependent clay dispersions using fractional differential equations; however, the spectrum of time constants continuously evolves with the material age. In the final section of Part I, we turn to scientific machine learning techniques, informed by existing rheophysical laws, to formulate a universal differential equation describing the thixotropic and yielding behavior of time-evolving complex fluids. 
We demonstrate, using experimental data from a model discotic colloidal dispersion, that this framework, which incorporates a neural network into an existing physical model comprising coupled fractional differential equations, can accurately learn the effective constitutive relationship governing the thixotropic and yielding behavior of the complex fluid directly from experimental data. The resulting framework can accurately model and predict the full thixo-elasto-visco-plastic (TEVP) response of an aging or mutating system. For an aging or gelling fluid, extracting dynamic rheological properties, such as the storage modulus and loss modulus obtained from small-amplitude oscillatory deformation, can be challenging due to the fast mutation time of the material. Conventional oscillatory techniques employ discrete Fourier transforms, which inherently assume the time signal to be time-translation invariant. This assumption results in systematic errors for mutating materials, and there is thus a need to develop novel rheometric techniques that can rapidly and accurately extract the time-frequency information of mutating material systems. Therefore, in Part II of this thesis, we develop a discrete Gabor transform procedure (a special case of the short-time Fourier transform) that can be implemented in commercial rheometric hardware to robustly extract time-frequency information of aging and thixotropic material systems. It is often challenging to discern whether the resulting time-dependent material response should be attributed physically to thixotropic microstructural mechanisms or to material viscoelasticity. To address this, we augment the Gabor transform with a parallel superposition flow protocol. The resulting deformation history superimposes a nonlinear external drive and probes the material response using a superposed small oscillatory perturbation. 
The resulting ‘pump/probe’ protocol allows us to distinguish between locally-reversible viscoelastic material responses and irreversible thixoplastic effects that can lead to large, time-dependent material deformations even in the absence of elasticity. These new advanced rheometric protocols make use of modern high-resolution electromechanical rheometer systems that are confined to laboratory settings and are not suitable for monitoring the rheological properties of fluids in industrial settings. Therefore, in a final contribution, we develop a novel compact mechanical tuning-fork resonator that can be deployed in the field and can continuously measure both time-independent and time-varying rheological properties of a range of complex fluids.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155614</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous Experimentation to Accelerate Boiling Heat Transfer Research</title>
<link>https://hdl.handle.net/1721.1/155613</link>
<description>Autonomous Experimentation to Accelerate Boiling Heat Transfer Research
Ravichandran, Madhumitha
Boiling heat transfer stands out as a highly efficient method for dissipating heat, utilized across a spectrum of engineering systems ranging from nuclear reactors to computer processing units. Over the course of decades, researchers have heavily relied on direct visualization of boiling phenomena through photography and videography to understand this phenomenon and develop modeling tools. However, due to the complexities of the boiling process, particularly near the surface, extracting essential information from these techniques has proven challenging. The primary impetus behind this thesis stems from the pressing need to advance current knowledge in boiling heat transfer, coupled with the potential offered by advancements in high-resolution imaging, e.g., infrared thermometry, and machine learning. The primary focus of this research work is on creating online platforms capable of real-time measurement of essential bubble dynamic parameters, including nucleation site density, bubble growth time, wait time, departure diameter, dry area fraction and distribution, and contact line density. To this end, a machine learning (ML) methodology was developed to accelerate the analysis of infrared data obtained during boiling heat transfer investigations. Deep neural network models were developed to directly measure bubble growth time, period, and nucleation site density from radiation recorded by a high-speed infrared camera. Further, a comprehensive physical framework utilizing Binary Neural Networks (BNN) supported by direct memory access (DMA) of binary data recorded by an IR camera was established to implement ML prediction models seamlessly in real-time during experimental procedures.&#13;
&#13;
A machine learning-based model was developed to predict the Departure from Nucleate Boiling Ratio (DNBR) using unprocessed, time-dependent infrared radiation distributions without prior knowledge of heat flux, achieving over 95% accuracy. This methodology enabled online and quasi-real-time monitoring of DNBR and estimation of Critical Heat Flux (CHF) during boiling experiments using infrared thermometry, a crucial step toward implementing intelligent, autonomous experiments. Further, leveraging explainable AI models, a hypothesis-free data-driven methodology was developed to elucidate the importance of fundamental boiling parameters in predicting the boiling crisis. The analysis revealed that parameters such as nucleation site density, bubble departure frequency, growth time, and footprint radius are all necessary and equally important.&#13;
&#13;
Finally, high-resolution infrared thermography and the developed online measurement framework were utilized to demonstrate the presence of self-organized criticality (SOC) in nucleate boiling. SOC was observed to emerge from bubble interactions near the wall and was characterized by nucleation site density, average bubble footprint radius, and the product of average bubble growth time and frequency. A new approach to estimate CHF from subcritical heat flux input conditions, alongside relevant critical quantities, was presented, providing insights into the onset of SOC and its implications for boiling heat transfer.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155613</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conventional Precision-Guided Hypersonic Weapons: An Unconventional Threat to Strategic Stability?</title>
<link>https://hdl.handle.net/1721.1/155606</link>
<description>Conventional Precision-Guided Hypersonic Weapons: An Unconventional Threat to Strategic Stability?
Sanchez, Eli
Since the inception of the current hypersonic weapons arms race roughly two decades ago, many analysts and commentators have suggested that hypersonic weapons will increase the likelihood of conflict between the global superpowers. In this thesis, I provide an in-depth analysis of one variety of these concerns: that hypersonic weapons will increase the likelihood of nuclear war and incentivize nuclear arms racing. Two issues have prompted analysts to make this provocative prediction. One is that the United States' efforts to develop highly accurate, conventional hypersonic boost-glide weapons (HBGWs) may produce systems capable of conducting disarming strikes against US adversaries' nuclear retaliatory forces. This would enable the United States to increase its counterforce capabilities unhindered by its obligations under nuclear arms control treaties, and may therefore prompt its adversaries to do likewise.&#13;
&#13;
To evaluate HBGWs' efficacy as counterforce weapons, I assess the vulnerability of silo-based intercontinental ballistic missiles (ICBMs) to conventional hypersonic weapons. I find that such weapons may prove comparable to nuclear-armed ballistic missiles in their efficacy as counter-silo weapons if they achieve the ambitious accuracy goals set forth for US weapons. They may also achieve kill probabilities against silos comparable to those of nuclear ballistic missiles with the levels of accuracy that have been previously reported for hypersonic flight vehicles, though in a manner that would render them highly vulnerable to missile defenses. I therefore recommend that conventional HBGWs with intercontinental ranges be treated as strategically salient weapons and subjected to numerical limits under arms control treaties.&#13;
&#13;
The second issue is that, due to hypersonic weapons' ability to maneuver throughout flight, their intended targets cannot be determined with certainty. States could therefore misinterpret the intent of an attack. Of particular concern are attacks against conventional military forces being mistaken for counter-nuclear strikes or decapitating strikes, or an attack against one country being mistaken by a neighboring country for an attack against itself. Such misunderstandings could escalate conventional conflicts to nuclear war, or initiate hostilities between states not presently at war.&#13;
&#13;
I find that, if states are unable to reliably detect and track incoming weapons, such misunderstandings would be highly plausible in many scenarios. However, if weapons can be reliably detected at least in the vicinity of high-value assets, such as strategic nuclear assets or national leadership, the risk of inadvertent escalation may be very low in most cases. I further find that providing sufficient missile early warning coverage to minimize the risks associated with destination ambiguity is well within the capabilities of the preeminent nuclear powers. However, regardless of the extent of states' early warning coverage, I find that attacks by HBGWs with speeds of roughly Mach 15 and above could still plausibly be mistaken for decapitating strikes if the target nation's leadership believes the weapons may be nuclear-armed. I therefore recommend that states forgo HBGWs with speeds in excess of Mach 15 for conventional warfighting, establish no-fly zones for hypersonic weapons around their adversaries' capital cities, and arm hypersonic weapons exclusively with conventional payloads.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155606</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering translational vaccine delivery systems with the polyphenol tannic acid</title>
<link>https://hdl.handle.net/1721.1/155601</link>
<description>Engineering translational vaccine delivery systems with the polyphenol tannic acid
Janes, Morgan E.
Supramolecular biomaterials, which are capable of spontaneous assembly via diverse non-covalent interactions, represent an exciting frontier in drug delivery due to their ease of formulation and modularity. Tannic acid (TA) is a naturally occurring compound with a demonstrated ability to engage in supramolecular interactions with biological cargo, including proteins, nucleic acids, and cell membranes. In this thesis, we harnessed the broad bio-adhesive capacity of TA to develop scalable, modular systems for vaccine engineering across various applications.&#13;
&#13;
Building upon previous work utilizing TA-containing metal phenolic networks for cell membrane engineering, we fabricated a cell membrane-bound formulation for local drug delivery to dendritic cell (DC) vaccines termed META (Membrane Engineering using Tannic Acid). DC vaccines hold great potential for a spectrum of diseases including cancer and autoimmune conditions, and combination drug delivery is an attractive strategy to manipulate their function and overcome in vivo plasticity. However, DCs are not compatible with particle-based local delivery approaches due to their broad phagocytic capacity. We showed that META preserved DC viability and critical functions such as migration, and then demonstrated the capacity of the system to incorporate and release protein cargoes with varying physical properties alone and in combination. Finally, we showed that META carrying either pro- or anti-inflammatory cargo could influence the carrier cell phenotype accordingly, underscoring the potential of META for the local control of phagocytic immune cells in a next step to advance DC therapies in the clinic.&#13;
&#13;
In the main portion of this work, we used TA to modulate the delivery kinetics of subunit vaccines to enhance their immunogenicity against infectious disease. Subunit vaccines are a well-established and clinically scalable intervention, yet they have achieved limited success for weak or rapidly evolving antigens such as those associated with SARS-CoV-2. Delivery strategies that promote gradual release of subunit vaccines from the site of injection may improve humoral immunity by enhancing the duration of lymph node exposure; however, clinical implementation of this approach is challenging due to poor scalability and high costs. Here, we showed that TA acts as an “adhesive” to mediate deposition and retention of protein antigens at the subcutaneous injection site for over one week. In addition to enhancing the magnitude and duration of vaccine drainage to the lymph nodes, inclusion of TA induced lymph node accumulation of antigen-laden monocyte-derived dendritic cells (moDCs), eliciting durable antibody titers against the receptor-binding domain (RBD) of SARS-CoV-2 and variants of concern in mice. This system, termed TAPER (Tissue-Adhesive Polyphenol-mediated Enhanced Retention), provides many benefits including one-pot synthesis, scalability, low cost, and modularity, which together may open the door to effective and clinically feasible vaccination strategies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155601</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics and Engineering of Moisture-Capturing Hydrogels for Water and Heat Harvesting</title>
<link>https://hdl.handle.net/1721.1/155600</link>
<description>Physics and Engineering of Moisture-Capturing Hydrogels for Water and Heat Harvesting
Díaz-Marín, Carlos D.
Moisture in the air represents a vast freshwater and energy resource that can be leveraged to mitigate critical water scarcity and decarbonization challenges. In particular, moisture sorption offers a pathway for decentralized and geography-independent freshwater production from humidity through sorption-based atmospheric water harvesting. Similarly, the large heat released during moisture sorption can be used for building decarbonization through high-energy-density thermal energy storage. A major bottleneck toward realizing these technologies with the desired performance and scalability lies in the sorbent material, which controllably captures and releases moisture from the air. The last two hundred years have seen the exploration and development of numerous sorbents. However, these sorbents still suffer from performance limitations, limited scalability, and/or high cost, severely preventing their use for widespread atmospheric water harvesting and thermal energy storage. Recently, hydrogel-salt composites have shown potential to overcome these limitations due to their high scalability, low cost, extreme capacity to capture moisture even in arid conditions, and high speed of moisture capture and release. All these properties make hydrogel-salt composites attractive candidates for large-scale sorption. Despite these promising results, major limitations in our fundamental knowledge of hydrogel-salt composites remain. These limitations block further material-level and system-level optimization as well as long-term demonstration in systems that address the pressing water and energy challenges. For instance, there is a critical knowledge gap between our understanding of sorption performance properties and the physical and chemical composition of hydrogel-salt composites.
This knowledge gap has forced the development of novel materials by trial and error, limited material performance, and prevented system-level design and optimization. Additionally, beyond performance, the durability of these materials, which is critical for their practical use, has been understudied, hindering their system-level use. In this thesis, we develop thermodynamic, transport, and degradation descriptions for hydrogel-salt composites and leverage this knowledge to exceed state-of-the-art performance. We first develop and experimentally validate thermodynamic models enabling the prediction and design of the uptake and enthalpy of hydrogel-salt composites, and transport models for the prediction and design of their sorption-desorption kinetics. We then use this knowledge to synthesize hydrogels that can capture and retain maximized amounts of moisture from the air (1.8 kg of moisture per kg of material at 30% relative humidity), exceeding the uptake of previously demonstrated sorbents. Lastly, we study the metal-mediated degradation of hydrogel-salt composites at different conditions and use this knowledge to implement a simple strategy that achieves high material durability (&gt;8 months). Altogether, these results tackle key bottlenecks in the development of hydrogel-salt composites, enabling their careful design for superior performance and long-term durability, which is critical for low-cost operation. This represents a major step toward using moisture captured by sorbent materials to address water scarcity and building decarbonization challenges.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155600</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of the “Stix” Code, a Radio Frequency (RF) Wave Solver, to Investigate RF-Sheath Behavior in Realistic Tokamak Geometry</title>
<link>https://hdl.handle.net/1721.1/155599</link>
<description>Development of the “Stix” Code, a Radio Frequency (RF) Wave Solver, to Investigate RF-Sheath Behavior in Realistic Tokamak Geometry
Migliore, Christina P.
A favorable method to heat plasmas to fusion-relevant temperatures is to use radio frequency (RF) waves in the ion cyclotron range of frequencies (ICRF). However, experiments have shown that this technique produces RF rectified sheaths known to form both on the ICRF antenna as near-field sheaths and farther away from the antenna in the torus as far-field sheaths. These RF sheaths can result in large DC voltages on plasma facing components (PFCs), causing adverse effects such as impurity generation from sputtering of high-Z metallic coatings, edge power dissipation, and hot-spot formation. With many current and upcoming tokamaks relying on ICRF for heating, it is becoming increasingly critical to numerically model RF sheaths for advancing mitigation methods. Due to the size of the RF sheath in comparison to the launched RF wavelength, the RF sheath can be reduced to a boundary condition (BC) on the computational domain boundaries. Traditionally, many electromagnetic (EM) RF solvers use conducting wall BCs that do not include the effects of sheaths, while other codes use overly simplified models that do not capture accurate rectification. Recently, J. Myra et al. (2015) modeled the RF sheath as a non-linear BC using a characteristic sheath impedance, allowing for a more representative calculation of DC enhanced potentials on PFCs.&#13;
&#13;
In this thesis, the novel finite-element plasma RF wave solver called “Stix”, which includes the full non-linear RF sheath BC, is introduced with the goal of investigating RF sheath behavior in realistic tokamak geometry. First, it is demonstrated that Stix can replicate previous analytical and numerical RF sheath cases as verification of the solver. Next, focusing on near-field sheaths, the experimental antenna power phasing study done on Alcator C-Mod is simulated with Stix. Using a magnetic field aligned two-dimensional (2D) slice along the 4 straps of the C-Mod ICRF antenna, it is seen that Stix can reproduce the experimental minimization trend in the enhanced potentials found when varying the strength of the power ratio between the 2 inner straps versus the total 4 straps. Similarly, Stix confirms that the monopole phasing of the 4 straps produces significantly higher potentials, as found in experiment. For the various antenna phasing schemes, an estimate of the erosion rate from the DC voltages shows that even in the lowest-voltage cases the sputtering is notable. To examine the behavior of far-field sheaths, a predictive study is chosen for the upcoming SPARC tokamak. Using a 2D simulation in the poloidal cross-section of SPARC, the effect of varying the strength of wave-particle absorption on the enhanced potentials is investigated. It is found that even in the lowest absorption scenario, the calculated voltages along the vacuum vessel are negligible. This result is discovered to be due to the small magnetic field angles, bn, into the walls as a direct consequence of the poloidal cross-section reference frame. Further investigation through the addition of limiter-like bumps shows that far-field rectification is dominated by the strength of bn. The findings from both the comparison and predictive modeling of RF sheaths done in Stix show the effectiveness such a numerical tool has for furthering the optimization of ICRF heating.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155599</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-sided magnetic resonance sensors for clinical detection of volume status</title>
<link>https://hdl.handle.net/1721.1/155598</link>
<description>Single-sided magnetic resonance sensors for clinical detection of volume status
Sherman, Sydney Elaine
Several pathological processes affect the body's ability to regulate volume status. In each of these disease states, the body loses some ability to regulate fluid balance and maintain a euvolemic state. Deviations from euvolemia have been shown to increase morbidity and mortality. The ability to detect pre-symptomatic changes in volume status would allow for more responsive management of these conditions and prevention of higher-mortality complications. Direct evaluation and quantification of early-stage changes in volume status is not currently a clinical measure. T2 relaxometry, a magnetic resonance technique, may offer a feasible method to quantify volume status. In this work we explore the design of single-sided magnetic resonance sensors for the quantification of volume status, evaluate the clinical performance of the sensor, and elucidate further physiological considerations for fluid diagnostics with clinical MRI.&#13;
&#13;
The primary research question that motivated this thesis is: can a point-of-care relaxometer accurately distinguish muscle interstitial fluid shifts in a single measurement? Several approaches are used to answer this question, including instrumentation development, signal acquisition studies, and clinical studies. We describe the design of a point-of-care, single-sided magnetic resonance relaxometer. The constructed sensor can acquire slice-selective signal from 8 mm above the instrument’s surface with a high signal-to-noise ratio. We detail instrument performance on phantoms, ex-vivo tissue, and human participants. Preliminary observational clinical studies of two cohorts, ‘euvolemic’ athletes and hospitalized end-stage kidney disease patients treated with hemodialysis, were conducted and validate that the instrument can detect signal selectively from the muscle interstitial compartment and distinguish between ‘euvolemic’ adults and those with end-stage kidney disease. We discuss the implementation of multi-exponential fitting of acquired data, which enables analysis of individual muscle tissue compartments. We demonstrate strategies to double signal magnitude and improve T2 fitting accuracy through the simulation and implementation of linear frequency swept adiabatic radiofrequency pulses. These decrease the sensitivity of applied RF pulses to B1 and B0 inhomogeneity. Finally, we explore physiological considerations for the instrument’s clinical implementation with an MRI study of patients with chronic kidney disease (CKD) and healthy control participants. This allows for the evaluation of physiological factors which may affect the sensor’s accuracy in clinical outcome prediction and offers further areas for future study.&#13;
&#13;
The single-sided magnetic resonance sensor and signal acquisition and processing techniques described demonstrate high potential for quantitative clinical assessment of volume status. This work focuses exclusively on ‘euvolemic’ adults or adults with CKD, but the principles demonstrated are agnostic to many underlying disease pathologies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155598</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Modeling of Nanostructured Palladium-Based Hydrogen-Selective Membranes</title>
<link>https://hdl.handle.net/1721.1/155597</link>
<description>Design and Modeling of Nanostructured Palladium-Based Hydrogen-Selective Membranes
Kim, Lohyun
Hydrogen and its isotopes are crucial in various fields, ranging from established applications such as ammonia production to emerging applications such as energy storage, transportation fuel, and nuclear fusion. The emerging applications are projected to grow exponentially, making the separation of hydrogen from other gases a critical step in meeting some of these growing needs. This thesis focuses on the separation needs for nuclear fusion and decalin dehydrogenation.&#13;
&#13;
Nuclear fusion reactors such as the one being developed by MIT and Commonwealth Fusion Systems generate energy through the fusion of hydrogen isotopes (deuterium and tritium). However, due to the scarcity of tritium, minimizing the tritium inventory and recycling tritium is essential to achieve fuel self-sufficiency and scale up fusion. This process requires the separation of fuel gases from helium, a major byproduct of fusion reactions. Pd-based membranes, recognized for high permeance and selectivity, offer a promising solution for this separation with minimal energy, cost, and footprint. However, these membranes have not yet been tailored for nuclear fusion applications. There are no comprehensive models explaining the transport of hydrogen isotope mixtures through Pd; moreover, Pd membranes can be damaged by helium embrittlement caused by tritium decay, and they operate in a limited temperature range of ~600-700 K that places constraints on design given the high temperatures of the reactor exhaust and molten salt blanket (~1000 K).&#13;
&#13;
This thesis aims to develop Pd-based membranes for nuclear fusion for the removal of helium and recycling of deuterium and tritium. The first objective is to develop an integrated hydrogen multicomponent transport model for Pd membranes. Building on this understanding, the next aim is to design nanostructured Pd composite membranes that can remain stable at temperatures of 1000 K and allow for migration of defects due to radiation or helium entrapment to interfaces.&#13;
&#13;
The transport model for hydrogen isotope mixtures through Pd membranes was developed based on the solution-diffusion mechanism. The model additionally accounts for isotopic interactions during transport and all major kinetic steps, including surface reaction and bulk diffusion. The model predictions exhibited good agreement with various empirical measurements in the literature and in the laboratory, validating its accuracy. This model can be integrated with a flow resistance circuit, enabling the identification of major transport mechanisms and the potential behavior of various Pd-based membranes.&#13;
&#13;
Considering the potential operating conditions in the reactor, three types of Pd composite membranes were developed: Pd film, Pd islands, and Pd plug membranes. An ultrathin Pd film membrane was developed by integrating a graphene layer acting as a flexible intermediate layer. The membrane demonstrated a reversible change of gas flow rate by a factor of ~10 in response to hydrogen, attributed to a reversible phase transition of Pd in the presence of hydrogen, implying potential uses for gas sensing and flow control. A membrane with densely-packed Pd islands on graphene was developed using a new fabrication method and exhibited excellent thermal resistance at 1000 K for 100 hours owing to the isolated nature of the islands. Finally, a Pd plug membrane was developed by creating isolated plugs inside the pores of the support layer. The membrane showed hydrogen permeance of ~1×10⁻⁷ mol/(m²·s·Pa) at a temperature of 800 K. The permeance of helium and nitrogen was similar to the baseline leakage of the gas module, indicating negligible leakage of the membrane. By implementing the transport model, the lower bound of hydrogen selectivity against helium was estimated to be ~150 at 800 K. The membrane showed stable performance at 800 K for 100 h.&#13;
&#13;
A second application is the dehydrogenation of decalin, a promising cycloalkane that serves as an effective source and liquid mobile carrier for hydrogen. Dehydrogenation reactions are also important in petrochemical refining. The dehydrogenation process, typically occurring at 500 K, produces hydrogen mixed with decalin, tetralin, and naphthalene, with a low yield necessitating an effective separation process. Compared to current thermal separation methods, integrating membrane technology with the dehydrogenation process can significantly enhance hydrogen production and separation. Nanoporous graphene membranes, notable for their high permeance and selectivity due to atomic thickness and size-selective pores, are particularly promising. This thesis explored the use of nanoporous graphene membranes for separating hydrogen from a mixture of decalin isomers and tetralin at 500 K. The graphene exhibited hydrogen permeance 1000 times higher than typical polymeric membranes, and its size-selective pores enabled much higher hydrogen selectivity compared to Knudsen selectivity. The membrane maintained stable performance at 500 K for 40 days, demonstrating its outstanding thermal stability. The differing permeances of the decalin isomers and tetralin suggest complex impacts of surface interactions and diffusion across the graphene layer.&#13;
&#13;
In conclusion, this thesis developed new composite membranes with Pd and graphene, designed for nuclear fusion reactors and the decalin dehydrogenation process. Empirical and theoretical analyses confirmed their stable and outstanding performance under unique operational conditions. This research facilitates the integration of these advanced membranes into both applications, promising advancements in hydrogen-selective membrane technology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155597</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconstructing the Atomic Number of Cargo X-ray Images using Dual Energy Radiography</title>
<link>https://hdl.handle.net/1721.1/155593</link>
<description>Reconstructing the Atomic Number of Cargo X-ray Images using Dual Energy Radiography
Lalor, Peter
To combat the risk of nuclear smuggling, the U.S. deploys scanning systems at ports of entry. Radiation portal monitors are capable of identifying unshielded nuclear or radiological threats by detecting radiation passively emitted from a container. Radiography systems offer complementary detection capabilities, capable of identifying large, dense objects, such as a heavily shielded nuclear warhead. However, this analysis reveals that a smuggler might be able to evade detection by both passive scanning systems and radiography using a lightly shielded nuclear threat. One avenue to improve the capability to identify concealed illicit materials is through dual energy radiographic systems, which enable a rough elemental analysis of cargo contents due to the atomic number dependence of photon attenuation. This work investigates the capabilities and limitations of atomic number discrimination using dual energy MeV systems, finding that different materials can sometimes produce identical photon transparency measurements. This degeneracy introduces a fundamental ambiguity when differentiating between materials of different atomic numbers, even in systems with perfect resolution and no statistical noise. Despite these limitations, we describe a new and more precise method for approximating the atomic number of radiographic images by introducing a semiempirical transparency model. By incorporating a simple calibration step, this model captures second-order effects such as bulk scattering and uncertainties in the source energy spectrum and the detector response function. This methodology was benchmarked using both Monte Carlo simulations and experimental measurements from commercial scanning systems, demonstrating the ability to obtain accurate material predictions on noisy input images by incorporating an image segmentation step. Furthermore, we show that this approach can be adapted to identify shielded objects using a two-pass procedure.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155593</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine-Guided Biopsy Analysis in Oncology: Facilitating Diagnostic Access and Biomedical Discovery Through Deep Learning</title>
<link>https://hdl.handle.net/1721.1/155590</link>
<description>Machine-Guided Biopsy Analysis in Oncology: Facilitating Diagnostic Access and Biomedical Discovery Through Deep Learning
Landeros, Christian
Data collected from cancer patients encompass a wide range of imaging, molecular, and genetic information in an effort to provide more personalized and targeted care. In this thesis, we outline three case studies examining the role deep learning can play in interpreting and utilizing complex biopsy data. In the first case, we employ neural networks to facilitate the detection of high-risk human papillomavirus (HPV) DNA in a point-of-care diagnostic device, facilitating deployment in resource-limited settings. In the second, we develop a comprehensive computational pipeline for cyclic fluorescent microscopy analysis. We identify key immune cell subpopulations in head and neck cancer biopsies to better understand the tumor microenvironment’s influence on disease progression and treatment response. Our third study tackles the analysis of gigapixel-sized digital histology images from breast cancer biopsies. We establish a two-step approach that i) uses self-supervised learning to encode small-scale histological details into robust representations and ii) applies a transformer model to these representations to evaluate larger-scale histological patterns. Our model successfully distinguished patients with high-risk genetic profiles from histology alone and provided visualization tools to highlight slide regions most closely associated with a high risk of recurrence. In doing so, we set the stage for deep learning to serve as an alternative to expensive genetic testing as well as a tool for illuminating previously unidentified markers of recurrence risk. These studies underscore deep learning’s versatility in oncology, streamlining the integration of complex datasets into clinical workflows and broadening access to state-of-the-art personalized care.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155590</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Dimensional Optimal Path Planning and Multi-Timescale Lagrangian Data Assimilation in Stochastic Dynamical Ocean Environments</title>
<link>https://hdl.handle.net/1721.1/155586</link>
<description>High-Dimensional Optimal Path Planning and Multi-Timescale Lagrangian Data Assimilation in Stochastic Dynamical Ocean Environments
Doshi, Manan
In the ocean domain, opportunities for a paradigm shift in the science of autonomy involve fundamental theory, rigorous methods, and efficient computations for autonomous systems that collect information, learn, collaborate and make decisions under uncertainty, all in optimal integrated fashion and over long duration, persistently adapting to and sustainably utilizing the ocean environment. The ocean is a prime example of multiscale nonlinear multidisciplinary and strongly coupled dynamics where measurements of one variable can be used to infer other fields through their joint dynamic probability density functions. Integrating ocean dynamics with autonomy enables the principled exploration, sustainable utilization, and strong conservation of our oceans. The object of this thesis is to develop theory, algorithms, and computational systems for the high dimensional optimal path planning of autonomous vehicles in the physical space augmented with other dynamical fields, and for the Bayesian nonlinear assimilation of the observations gathered by these vehicles along their trajectory. The resulting high dimensional optimal path planning and generalized Lagrangian Bayesian data assimilation enable the sustained and optimal operation of autonomous vehicles over a long time duration in realistic uncertain ocean settings. With this vision, the vehicles autonomously make decisions to optimally achieve their mission targets in the augmented space of physical and collectible fields, e.g., reach the destination in minimum time while using the minimum energy, harvest the maximum wave-solar-wind energy, or farm the maximum amount of kelp. &#13;
&#13;
To this end, we focus on three specific theoretical and methodological goals: (i) Develop exact differential equations and accurate algorithms to efficiently predict and compute the reachable sets and optimal controls for complex high-dimensional objectives in dynamical fields and showcase these controls in realistic ocean scenarios; (ii) Develop Bayesian theory and schemes to predict Lagrangian field probability densities and rigorously assimilate Lagrangian data collected by moving vehicles or drifters, leveraging the different sensitivity and timescales of the underlying Lagrangian and Eulerian systems; and (iii) Integrate the optimal planning and assimilation to enable learning from the information gathered on-board the vehicles, from Bayesian updates of the optimal controls to the acquisition of knowledge using Bayesian learning. We showcase the theory and methods in a range of ocean applications.&#13;
&#13;
In the first part, we review the theory and schemes to predict joint energy-time, harvesting-time, and energy-harvesting-time reachability fronts and optimal paths using state augmentation. We validate our energy-time algorithms in analytical and representative dynamical fields. We then derive theory to predict reachability fronts across multiple times simultaneously and obtain a closed loop control law allowing vehicles to accomplish their mission even after straying from their initial plan due to forecast errors. The theory and schemes are developed for both backward and forward reachable tubes with time-varying target and start sets. The resulting value functions elegantly capture not only the reachable tubes but also time-to-reach and time-to-leave maps as well as start time versus duration maps. We validate results with analytical solutions and demonstrate wider applications for optimal control in dynamic environments.&#13;
&#13;
In the second part, we develop and implement fundamental schemes for multi-timescale Bayesian data assimilation for coupled dynamical systems, with a focus on Lagrangian-Eulerian systems. We obtain a Gaussian Mixture Model (GMM) - Dynamically Orthogonal (DO) based hybrid filter for Lagrangian and Eulerian stochastic fields and observations. We first showcase the schemes for a coupled system where we analytically validate the performance of the filter. We subsequently demonstrate results by applying the filter for a general coupled chaotic system and for a joint Lagrangian-Eulerian system with a more complex quasi-geostrophic flow. &#13;
&#13;
In the third part, we integrate the schemes developed for the first two parts. We propose coupled methods that allow ocean vehicles to robustly and optimally complete their mission while continuously learning from the new information being collected, updating the Lagrangian and Eulerian fields, their joint probabilities, and the robust optimal control of their future trajectories. We showcase preliminary results using the proposed method.&#13;
&#13;
We conclude by demonstrating several planning and Lagrangian algorithms in data-assimilative ocean simulations and real-time ocean experiments with real data and forecasts. This includes the characterization of residence times and connectivity in the Red Sea, the transport of plastics in the coastal ocean showcasing results for Massachusetts Bay, the subduction pathways of surface waters to intermediate depths in the Alboran Sea, and the Bayesian Eulerian-Lagrangian data assimilation of drifter data in the Balearic Sea.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155586</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vitamin A and glycoproteins of rat corneal epithelium</title>
<link>https://hdl.handle.net/1721.1/155575</link>
<description>Vitamin A and glycoproteins of rat corneal epithelium
Yang, Cha Lee Kim.
Previous work from this laboratory has shown that the uptake of labeled precursors into a single specific glycopeptide in the rat intestinal mucosa was depressed significantly in vitamin A deficiency (De Luca et al., 1970). A very similar pattern to the intestinal glycopeptide pattern was obtained from the rat corneal epithelium; namely, the glycopeptide affected by vitamin A was eluted with 0.4 N LiCl solution by a stepwise column chromatography on DEAE-Sephadex A-50. A specific "peak S" glycopeptide eluted between 0.35 N and 0.42 N LiCl solution was found to be the glycopeptide component most decreased (by approximately 50%) by vitamin A deficiency when a new continuous gradient column chromatography of the same anion exchanger was employed. "Peak S" glycopeptide was further characterized by polyacrylamide (7.5%) gel electrophoresis and gas-liquid chromatography. The affected "peak S" glycopeptide was found to be rich in sialic acid, possibly as an end-sugar. Topical application for one hour in vivo of water-dispersible vitamin A palmitate to corneas of deficient rats resulted in a stimulation of glycoprotein/glycopeptide synthesis upon subsequent incubation in vitro. Upon fractionation of glycopeptide prepared from vitamin A-deficient control and deficient, vitamin A-treated rat corneal epithelium by DEAE-Sephadex A-50 column chromatography, the fraction eluted in 0.2 N LiCl solution showed a marked increase in labeling of 14C-glucosamine into glycopeptides. This stimulated fraction appears to consist of small molecules probably lacking sialic acid. The in vitro stimulation of glycopeptide synthesis in vitamin A-deficient corneal epithelium seems to be confined also to the 0.2 N LiCl fraction. Histologically, corneal epithelium, and particularly the mucus-secreting conjunctival gland showed a strong fluorescent response to fluorescent antibody made against the intestinal glycopeptide affected by vitamin A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1973; Cataloged from PDF of print version of thesis. Vita.; Includes bibliographical references (pages 174-184).
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155575</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The capacitated lot size problems</title>
<link>https://hdl.handle.net/1721.1/155566</link>
<description>The capacitated lot size problems
Matsuo, Hirofumi.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaves 104-107.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155566</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A computational approach to optimal taxation and public pricing policies : the case of Mexico</title>
<link>https://hdl.handle.net/1721.1/155565</link>
<description>A computational approach to optimal taxation and public pricing policies : the case of Mexico
May-Kanosky, Ernesto Gregorio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1984; Bibliography: leaves 144-147.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155565</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preparation and reactions of medium-ring Grignard reagent</title>
<link>https://hdl.handle.net/1721.1/155535</link>
<description>Preparation and reactions of medium-ring Grignard reagent
Fleckenstein, Lee Joseph, 1930-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1958
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155535</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Issues in the detection of gravitational radiation</title>
<link>https://hdl.handle.net/1721.1/155528</link>
<description>Issues in the detection of gravitational radiation
Stephens, Michelle Sharon Eisgruber.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (leaves 126-131).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155528</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some studies in inorganic chemistry.</title>
<link>https://hdl.handle.net/1721.1/155527</link>
<description>Some studies in inorganic chemistry.
Smart, James Conrad.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1974; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155527</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Optimization and Modern Machine Learning: Theory, Computation, and Healthcare Applications</title>
<link>https://hdl.handle.net/1721.1/155509</link>
<description>Integrating Optimization and Modern Machine Learning: Theory, Computation, and Healthcare Applications
Villalobos Carballo, Kimberly M.
Optimization and machine learning are two predominant fields for decision-making today. The increasing availability of data over the past years has facilitated advancements in the intersection of these two domains, which in turn has led to better decision support tools. Optimization has significantly enhanced traditional machine learning models by refining their training methods, and machine learning has improved many optimization algorithms by enabling better decision-making through accurate predictions.&#13;
&#13;
However, integrating optimization theory with modern machine learning methods, such as neural networks and kernel functions, faces two primary challenges. First, these models do not meet the fundamental convexity assumptions of optimization theory. Second, these models are primarily used in tasks with numerous parameters and high-dimensional data, requiring highly efficient and scalable algorithms. This focus on efficiency limits consideration of discrete variables and general constraints that are typical in optimization. This thesis introduces novel algorithms to address these challenges. &#13;
&#13;
The work is divided into four chapters, encompassing rigorous theory, computational tools, and diverse applications. In Chapter 1, we extend state-of-the-art tools from robust optimization to non-convex and non-concave settings, allowing us to generate neural networks that are robust against input perturbations. In Chapter 2, we develop a holistic deep learning framework that jointly optimizes for neural network robustness, stability, and sparsity by appropriately modifying the loss function. In Chapter 3, we introduce TabText, a flexible methodology that leverages the power of Large Language Models for patient flow predictions from tabular data. Lastly, in Chapter 4, we present a data-driven approach for solving multistage stochastic optimization problems via sparsified kernel methods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155509</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Training Human-AI Teams</title>
<link>https://hdl.handle.net/1721.1/155508</link>
<description>Training Human-AI Teams
Mozannar, Hussein
AI systems are augmenting humans' capabilities in settings such as healthcare and programming, forming human-AI teams. To enable more accurate and timely decisions, we need to optimize the performance of the human-AI team directly. In this thesis, we utilize a mathematical framing of the human-AI team and propose a set of methods that optimize the AI, the human, and the interface through which they communicate to enable better team performance. We first show how to provably train AI classifiers that complement humans and can defer the decision to humans when it is best to do so. However, in specific settings, AI cannot autonomously make decisions and thus only provides advice to humans. In that case, we build onboarding procedures that train humans to have an accurate mental model of the AI to enable appropriate reliance. Finally, we study how humans interact with large language models (LLMs) to write code. To understand current inefficiencies, we developed a taxonomy to categorize programmers' interactions with the LLM. Motivated by insights from the taxonomy, we leverage human feedback to learn when it is best to display LLM suggestions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155508</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowing How, Knowing Who, Knowing What to Do</title>
<link>https://hdl.handle.net/1721.1/155507</link>
<description>Knowing How, Knowing Who, Knowing What to Do
Thwaites, Abigail
What I am able to do matters. It matters whether I am able to defuse the bomb: if I am able to defuse the bomb, then I ought to do so. It matters whether I was able to attend my spouse’s birthday party: if I was able to attend, then I should have attended. On the other hand, if I was really unable to attend, then my spouse should not resent me for having failed to do so. This dissertation explores three ways in which our abilities have been thought to matter: 1. What I am able to do bears on what I know how to do. 2. What I am able to do bears on what my options are. 3. Whether I am able to justify an action to another bears on whether that action is permissible. Chapter 1 concerns the relationship between abilities and know-how. Know-how often seems to go hand-in-hand with ability. If I tell you that I know how to swim, you can generally assume that I am able to swim (and vice versa). One naïve way of encoding this relationship is as a simple biconditional: I know how to ϕ if and only if I am able to ϕ. Unfortunately, this naïve account of know-how has historically faced two major objections, one against each direction of its biconditional. This chapter defends a revised version of the naïve ability theory of know-how against these twin objections. Chapter 2 concerns the relationship between abilities and options, where an option is a potential object of the subjective ought. Ordering take-out for dinner is an option for me. Making dinner at home is, too. However, singlehandedly fixing world hunger is not. Why not? Traditional approaches to options impose an ability constraint on options. They say that whatever options are, they must be the kinds of things I’m able to do. In this chapter, I reject the ability constraint on options. I develop a theory of options which does not require abilities. Chapter 3 concerns a particular kind of important ability: justifiability.
Justifiability is central to contractualist theories of ethics such as ex ante contractualism, where an action’s permissibility depends on whether I am able to justify it to those affected. This chapter presents a puzzle for ex ante contractualism. In particular, I argue that whether or not I am able to justify an action to an individual sometimes depends on the description under which I conceive of that individual. In cases involving competing descriptions, ex ante contractualism seems to give conflicting permissibility judgments depending on which description we prioritize. This chapter is dedicated to resolving that puzzle. I argue that attention to second-personal modes of description can help.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155507</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tax Policy, Housing Markets, and Redistribution</title>
<link>https://hdl.handle.net/1721.1/155506</link>
<description>Tax Policy, Housing Markets, and Redistribution
Soltas, Evan J.
Governments make extensive use of tax policy to advance various social objectives, such as redistribution and residential integration, through the housing market. This dissertation consists of three papers, two on such housing-related tax policies and a third on means-tested transfer benefits. An introductory chapter reviews the role of government in the housing market, both in actuality and through the lens of economic theory. In the first paper, I study the impacts and incidence of subsidies for low-income housing development, leveraging new data on competitions for Low-Income Housing Tax Credits and three sources of policy variation. I find that these subsidies add few net units to the housing stock and instead pull investment forward in time. In addition, most of the subsidy accrues to developer profits or is dissipated in the fixed costs of subsidy competition. In the second paper, I study the take-up of a New York City tax incentive for developers to integrate low-income housing into market-rate buildings. I find that, while the incentive has high fiscal costs per unit relative to other housing subsidies, the cost premium is explained by differences in the distributions of low-income units across neighborhoods and may, in some neighborhoods, plausibly be repaid by the social benefits of moving households to opportunity. In the third paper, which is coauthored, we show that in eight U.S. transfer programs, selective take-up among the eligible implicitly enables the government to target people with low consumption and lifetime income. Such "self-targeting" makes automatic transfers undesirable, as the social value of this information generally exceeds the social costs of the ordeals used to reveal it.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155506</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in International Economics and Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/155505</link>
<description>Essays in International Economics and Macroeconomics
Seo, Jaeeun
This dissertation consists of three independent essays in international economics and macroeconomics. The first chapter emphasizes the importance of persistent heterogeneity across workers in the economy's adjustment to sectoral shocks. The second chapter explains the spatial concentration of non-tradable consumption services based on consumers' trip-chaining behavior, which results in geographically correlated consumption location choices. The final chapter provides a novel mechanism by which persistent noise in signals endogenously generates optimism through dynamic learning. &#13;
&#13;
The first chapter develops a sufficient statistics approach to evaluate the impact of sectoral shocks on labor market dynamics and welfare. Within a broad class of dynamic discrete choice models that allows for arbitrary time-invariant heterogeneity across workers, I show that knowledge of steady-state intersectoral worker flows over various time horizons is sufficient to evaluate labor supply responses to shocks as well as their aggregate welfare consequences. I also establish analytically that assuming away persistent worker heterogeneity---a common practice in the literature---necessarily leads to overestimation of steady-state worker flows, resulting in systematic biases in counterfactual predictions. As an illustration of my sufficient statistics approach, I revisit the consequences of the rise of import competition from China. Using US panel data to measure steady-state worker flows, I conclude that labor reallocation away from manufacturing is significantly slower, and the negative welfare effects on manufacturing workers more severe, than predicted by models without persistent worker heterogeneity.&#13;
&#13;
The second chapter shifts the focus to the non-tradable consumption services market, such as restaurants and retail, in Seoul. To understand the spatial concentration of services, we first causally identify positive spillovers across services stores. We microfound these spillovers by incorporating the trip-chaining mechanism---whereby consumers make multiple purchases during their services travel---into a quantitative spatial model that endogenizes the spatial distribution of services. When calibrated to an original survey on trip chaining, this mechanism explains about one-third of the observed concentration. However, unlike standard agglomeration mechanisms, it does not lead to inefficiency, nor does it exacerbate welfare inequality. Finally, we show that spatial linkages of services consumption play a crucial role in shaping the impact of the rise of work from home and of delivery services on the distribution of services.&#13;
&#13;
In the third chapter, I propose a noisy rational expectations model with persistent noise. Firms learn about economic conditions from signals, and the noise in the signals is persistent rather than i.i.d. over time. Firms rationally account for the persistence of noise and update their interpretations of signals based on ex post observations of true economic conditions. I show that this process gives rise to a novel mechanism by which optimism arises endogenously, which in turn amplifies or dampens the effects of underlying shocks. In particular, this model can generate the delayed overreaction in firms' expectations documented in the literature. Moreover, strategic complementarity between firms and the resulting higher-order optimism further strengthen my mechanism. Finally, I distinguish empirically my rational theory of optimism from behavioral theories by exploiting the difference in the degree of overextrapolation between consensus and individual forecasts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155505</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Design for Platform-Enabled Operations</title>
<link>https://hdl.handle.net/1721.1/155500</link>
<description>Information Design for Platform-Enabled Operations
Shah, Sohil
Information design studies how an informed player (planner) can optimally provision information over an uncertain payoff-relevant random variable to influence the actions of less-informed players (agents). Codified through a "signaling mechanism", the informed player can design distributions over informative signals to reveal depending on the value of the uncertain random variable. Through the design of the signaling mechanism, a planner can affect the agents' posterior beliefs of the uncertainty and the agents' consequent actions. While an abstract representation, solving for the optimal signaling mechanism provides valuable real-world intuition into how platforms or public entities should provision information. &#13;
&#13;
However, the practical design of these optimal signaling mechanisms more generally is associated with five key technical challenges. First, the uncertainty set can be continuous-valued, which leads to an uncountably large set of decision variables over which to optimize. Second, the objective of the planner and the response of agents to the induced beliefs can be arbitrarily complex, which can lead to intractable optimization formulations. Third, in dynamic settings, planners with multi-period objectives need to provision information across time, but provisioning information in the present affects what information the planner can provision in the future. This necessitates the use of computationally intensive multiperiod dynamic programming. Fourth, if agents in these dynamic settings are long-run and subject to subgame perfection, agents anticipate what information the planners will adaptively provision in the future, necessitating the solution of a coupled dynamic program. Fifth, planners may find themselves in competition over the provision of information, aiming to gain favor in strategic interactions where both the quality and the content of the information revealed matter. &#13;
&#13;
This thesis presents a study of information design as a means to improve platform operations. We formulate models that address each of the five technical challenges described in the context of a particular practical application. Examples of the practical applications we consider include pandemic management, ride-hailing, and incentivizing research and development. In the first chapter, continuous-valued uncertainty and optimization methods robust to planner preference are addressed. We consider a planner using information design to manage a population of hybrid workers amidst the spread of a disease with uncertain infectious risk. We identify closed-form solutions for the optimal signaling mechanism over the risk for a general class of set-based objectives and we identify computationally efficient algorithms to approximate the optimal signaling mechanism for general Lipschitz-continuous objectives. In the second chapter, dynamicity is addressed. We consider a dynamic model where the planner iteratively provisions information to agents with time-varying preferences. The third chapter addresses long-run agents. We focus on a setting where agents serve a transportation platform affected by surge pricing and we identify the optimal signaling mechanisms that provision information over the timing of the uncertain surge. In the final chapter, we address a competitive information design setting. We consider a Bayesian Stackelberg game with a malicious attacker performing a sequential search over a set of firms under costly inspection. Firms can pay to mitigate the probability that the attacker succeeds should they be chosen as a target. Firms can then also choose how much information to reveal on inspection about the attacker's probability of success. We characterize the equilibrium mitigation and signaling strategies of the firms.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155500</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Learning with Discrete Structures: Statistical and Computational Perspectives</title>
<link>https://hdl.handle.net/1721.1/155499</link>
<description>Statistical Learning with Discrete Structures: Statistical and Computational Perspectives
Behdin, Kayhan
In various statistical tasks it is of interest to learn estimators with discrete structures (e.g., sparsity, low-rank, shared model parameters, etc.)---they are appealing for their interpretability and compactness. However, learning with discrete structures can be computationally challenging. In this thesis, we explore statistical and computational aspects of statistical estimators (some classical and some new) that can be formulated as discrete optimization problems. &#13;
&#13;
In Chapters 2 and 3, we study two well-known problems in high-dimensional statistics: sparse Principal Component Analysis (PCA) and Gaussian Graphical Models. These are notoriously hard optimization problems---we explore computationally friendlier estimators based on Mixed Integer Programming (MIP) under suitable statistical assumptions. We study the statistical and computational properties of our estimators. &#13;
&#13;
In the fourth chapter, we study the multi-task learning problem with sparse linear estimators. Motivated by applications in biomedical sciences, we propose a new modeling framework to jointly learn sparse linear estimators for different tasks by sharing support information. Our theoretical results show that our joint estimation framework can lead to better statistical properties compared to independently fitting models for each task. We develop scalable approximate solvers for our MIP-based formulation. &#13;
&#13;
In the fifth chapter, we study the problem of sparse Nonnegative Matrix Factorization (NMF) using Cutler and Breiman's archetypal regularization. &#13;
&#13;
We explore the utility of our methods in the context of applications in the biomedical sciences, computer vision and computational finance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155499</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Robustness and Interpretability in Learning and Data-Driven Decision-Making</title>
<link>https://hdl.handle.net/1721.1/155498</link>
<description>Efficient Robustness and Interpretability in Learning and Data-Driven Decision-Making
Bennouna, Mohammed Amine
As machine learning algorithms are increasingly developed and deployed in high-stakes applications, ensuring their reliability has become crucial. This thesis introduces algorithmic advancements toward reliability in machine learning, emphasizing two critical dimensions: Robustness and Interpretability.&#13;
 &#13;
The first part of this thesis focuses on robustness, which guarantees that algorithms deliver stable and predictable performance despite various data uncertainties. We study robustness when learning under diverse sources of data uncertainty, including the fundamental statistical error, as well as data noise and corruption. Our work reveals how these different sources interact and subsequently impact data-driven decisions. We introduce novel distributionally robust optimization approaches, each tailored to specific uncertainty sources. Our findings highlight that protection against one source may increase vulnerability to another. To address this, we develop distributional ambiguity sets that provide holistic robustness against all sources simultaneously. In each setting, we demonstrate that our novel approaches achieve “efficient” robustness, optimally balancing average performance with out-of-sample guarantees. Our novel algorithms are applied to various scenarios, including training robust neural networks, where they significantly outperform existing benchmarks.&#13;
 &#13;
The second part of the thesis addresses interpretability, a critical attribute for decision-support tools in high-risk settings, which requires that algorithms provide understandable justifications for their decisions. Our work in this part was motivated by data-driven personalized patient treatment—an increasingly sought-after machine learning application. In this reinforcement learning problem, interpretability is crucial: physicians cannot rely on a black-box algorithm for prescribing treatments. We formally introduce the problem of learning the most concise discrete representation of a continuous state-space dynamic system. In the patient treatment setting, this corresponds to identifying treatment groups based on the evolving features of patients under treatment. Surprisingly, we prove theoretically that it is statistically possible to learn the most concise representation of a dynamic system solely from observed historical sample-path data. We subsequently develop an algorithm, MRL, which learns such a concise representation, thereby enhancing interpretability and tractability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155498</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Role of Identity in Economic and Political Behavior</title>
<link>https://hdl.handle.net/1721.1/155497</link>
<description>Essays on the Role of Identity in Economic and Political Behavior
Ruebeck, Hannah K.
These essays consider the role of various personal and social identities and resulting decision-making in the domains of education, work, and political participation. The first essay studies beliefs about experiencing racial or gender discrimination, or perceived discrimination, and its consequences for worker behavior. Using a large randomized controlled trial (RCT, N=5,000) in a constructed online labor market, I show that perceived racial and gender discrimination has large negative effects on worker retention, future labor supply, and cooperation with managers and that these effects are driven by large psychological costs to interacting with a biased manager. Firms can therefore improve both equity and efficiency by reducing perceived discrimination. I then test whether implementing hiring procedures that reduce the potential for actual discrimination is effective at reducing perceived discrimination. The procedures I test—blinding hiring managers to demographics and using unbiased algorithms—at best moderately reduce rates of perceived discrimination when members of minority groups remain highly under-represented.&#13;
&#13;
The second essay studies childhood confidence, a potential determinant of educational and labor-market behavior when ability is imperfectly observed. This essay documents two main facts in a large, national sample of children whose outcomes are followed for 20 years. First, childhood confidence in math and reading is starkly gendered along stereotypical lines: girls are more likely to be under-confident in math and over-confident in reading, and vice-versa for boys. Second, childhood over- and under-confidence in math strongly predicts adolescent test scores, educational attainment, and majoring or working in STEM. &#13;
&#13;
The final essay studies political efficacy, or beliefs about government responsiveness to citizen preferences and action in an RCT with 6,000 participants. In the context of US climate policy, we test how these beliefs and preferences for government action change when citizens learn about the recent, largest climate bill in US history. Learning about policy progress has small positive effects on political efficacy and small negative effects on preferences for the government to focus on climate policy. These countervailing effects may be why we see no effect of this treatment on citizen climate action. On the other hand, additionally watching a short, fictional narrative about a young, initially apathetic woman who goes on to organize a climate march has large effects on political efficacy and subsequently large effects on donations to climate lobbying groups and revealed interest in climate marches.&#13;
&#13;
JEL Codes: D91, J7, J16
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155497</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/155496</link>
<description>Essays in Macroeconomics
de la Barrera i Bardalet, Marc
The thesis consists of three essays on macroeconomics. In the first essay, I study the price and wage-setting implications of monopsony models with nominal rigidities. I develop a New Keynesian model with wage posting and on-the-job search. I show how wage markdowns are related to the importance of hiring costs, which are estimated to be an order of magnitude larger than in previous calibrations. I show how, at the individual level, both higher monopsony power and higher wage rigidity amplify the price response to idiosyncratic demand shocks. At the aggregate level, the main driver of inflation is not an increase in real wages but rather an increase in the cost of hiring workers. Because firms have difficulty finding workers, they raise prices. In a calibrated model, I show how negative labor supply shocks reduce the real wage when the nominal wage increase is offset by the nominal price increase. In the second essay (joint with Bumsoo Kim and Masao Fukui), we study how the interaction between China’s productivity growth and currency peg to the US dollar affected the labor market and trade imbalance in the United States. Empirically, we document that in response to similar exposure to Chinese exports, countries pegging to the US dollar experienced larger unemployment and trade deficits compared to floating countries. Theoretically, we develop a dynamic model of trade featuring endogenous imbalances and nominal rigidity, and show that Foreign growth may hurt Home welfare and characterize optimal trade and monetary policy in this environment. Quantitatively, we find that China’s currency peg is responsible for 447 thousand manufacturing jobs lost in the US over 2000-2012, one third of the total US trade deficit over the same period, and reduced US lifetime welfare gains from Chinese growth by 32% compared to an economy where an otherwise identically growing China had its currency floating. 
A short-run safeguard tariff may have effectively accommodated China’s currency peg and ameliorated the labor market distortions. In the third essay (joint with Tim de Silva), we explore a novel approach that uses machine learning techniques to solve dynamic stochastic optimization problems. While most traditional approaches require knowledge of a law of motion for exogenous states such as income, we present a methodology that remains agnostic about the data-generating process of the state. Instead of calibrating a model to mimic the dynamics of the state, we only need to observe realizations of that state. Parametrizing the policy function with a neural network, we are able to solve the value function problem without ever knowing the law of motion of the state, which the neural network endogenously learns. We test our approach on the income fluctuations problem and show that our methodology learns the income process when it is an AR(1), and can also solve the problem for an unspecified income process. We then compare the welfare loss from specifying a particular income process against evaluating the policy function without making any assumption on the income process, and we find that the loss from misspecification is negligible. A byproduct of this project is the publication of the Python package nndp, which can be used to solve a wide array of finite-horizon dynamic stochastic optimization problems.&#13;
&#13;
JEL Classification C63, E24, E32, F16, F31, G51, J63
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155496</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Computer-Assisted Design and Analysis of First-Order Optimization Methods and Related Problems</title>
<link>https://hdl.handle.net/1721.1/155495</link>
<description>Advances in Computer-Assisted Design and Analysis of First-Order Optimization Methods and Related Problems
Das Gupta, Shuvomoy
First-order methods are optimization algorithms that can be described and analyzed using the values and gradients of the functions to be minimized. These methods have become the main workhorses for modern large-scale optimization and machine learning due to their low iteration costs, minimal memory requirements, and dimension-independent convergence guarantees. As the data revolution continues to unfold, the pressing demand for discovering faster first-order methods and for rigorous convergence analyses of existing first-order methods has become a central challenge in today’s big data era. Toward that goal, in this thesis, we make advancements in computer-assisted design and analysis of first-order methods and related problems. &#13;
&#13;
The core contribution of this thesis is developing computer-assisted methodologies for analyzing and designing first-order methods using nonconvex quadratically constrained quadratic optimization problems (QCQPs). In this approach, the key idea involves posing the analysis or design of first-order methods as nonconvex but practically tractable QCQPs and then solving them to global optimality using custom spatial branch-and-bound algorithms. &#13;
&#13;
In Chapter 2 of this thesis, we present Branch-and-Bound Performance Estimation Programming (BnB-PEP), a unified methodology for constructing optimal first-order methods for convex and nonconvex optimization. BnB-PEP poses the problem of finding the optimal first-order method as a nonconvex but practically tractable QCQP and solves it to certifiable global optimality using a customized branch-and-bound algorithm. Our customized branch-and-bound algorithm, through exploiting specific problem structures, outperforms the latest off-the-shelf implementations by orders of magnitude, accelerating the solution time from hours to seconds and weeks to minutes. We apply BnB-PEP to several practically relevant convex and nonconvex setups and obtain first-order methods with bounds that improve upon prior state-of-the-art results. Furthermore, we use the BnB-PEP methodology to find proofs with potential function structures, thereby systematically generating analytical convergence proofs. &#13;
&#13;
We next propose a QCQP-based computer-assisted approach to the analysis of the worst-case convergence of nonlinear conjugate gradient methods (NCGMs) in Chapter 3. Those methods are known for their generally good empirical performances for large-scale optimization while having relatively incomplete analyses. Using our computer-assisted approach, we establish novel complexity bounds for the Polak-Ribière-Polyak (PRP) and the Fletcher-Reeves (FR) NCGMs for smooth strongly convex minimization. In particular, we construct mathematical proofs that establish the first non-asymptotic convergence bound for FR (which is historically the first developed NCGM), and a much improved non-asymptotic convergence bound for PRP. Additionally, we provide simple adversarial examples on which these methods do not perform better than gradient descent with exact line search, leaving very little room for improvements on the same class of problems.&#13;
&#13;
In Chapter 4 of this thesis, we develop the nonconvex exterior-point optimization solver (NExOS)---a first-order algorithm tailored to sparse and low-rank optimization problems. We consider the problem of minimizing a convex function over a nonconvex constraint set, where the set can be decomposed as the intersection of a compact convex set and a nonconvex set involving sparse or low-rank constraints. Unlike the convex relaxation approaches, NExOS finds a locally optimal point of the original problem by solving a sequence of penalized problems with strictly decreasing penalty parameters by exploiting the nonconvex geometry. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems, empirically demonstrating that our algorithm outperforms specialized methods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155495</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>He Lāhui Hawaiʻi Ma O Ka Hoʻomomona Hou I Ka ʻĀina: Nā Moʻolelo Lōkahi Ma Ka Mokupuni ʻO Oʻahu (A Hawaiian Nation Through The Restoration Of ʻĀina Momona: Stories Of Unity– Among Ourselves, That Which Feeds, Community, And Spirit–On The Island Of Oʻahu)</title>
<link>https://hdl.handle.net/1721.1/155494</link>
<description>He Lāhui Hawaiʻi Ma O Ka Hoʻomomona Hou I Ka ʻĀina: Nā Moʻolelo Lōkahi Ma Ka Mokupuni ʻO Oʻahu (A Hawaiian Nation Through The Restoration Of ʻĀina Momona: Stories Of Unity– Among Ourselves, That Which Feeds, Community, And Spirit–On The Island Of Oʻahu)
Grande, Aja Oona
This ethnography composed as moʻolelo (stories) documents my personal huakaʻi (cultural excursion, journey) to understand my kuleana (profound sense of duty) as a non-Kanaka ʻŌiwi (non-Native Hawaiian) with intergenerational ties to Hawaiʻi. I draw insight and methodologies from ʻike kūpuna (Hawaiian ancestral knowledge) and Hawaiian studies, as well as studies in anthropology, history, and science, technology &amp; society (STS), to promote a sustainable, self-determinant Lāhui Hawaiʻi (Hawaiian Nation). These mokuna (chapters) are chapters of my life, volunteering and working alongside mahi ʻai (food cultivators), each week for one year, with two non-profits that aim to restore ʻāina momona (abundance of that which feeds) on the island of Oʻahu. Like looking to the roots of a plant for connections below the surface, through nānā i ke kumu (look to the source), I saw my ʻohana (family) as part of community-building in Hawaiʻi: we mālama (actively care for) the people and ʻāina who care and have cared for us, including the Kānaka ʻŌiwi (Native Hawaiians) whose kūpuna (ancestors) have regenerated with their kulāiwi (ancestral homelands) for 100 generations. While I do not trace my biological lineage to Kānaka ʻŌiwi, I learned through my moʻolelo moʻokūʻauhau (genealogy story) that my roots belonged to a wider network of ancestors who have helped to keep the mauli (life spirit) of a Lāhui Hawaiʻi alive. Each mokuna of this dissertation depicts separate legs of my huakaʻi during fieldwork, to hoʻoikaika (strengthen) respective pilina (mutually sustaining relationships) with kuʻu mauli (my life spirit), kānaka (people, such as family and community), ʻāina (glossed in English as “that which feeds”), and akua (gods, the divine). My moʻolelo show how strengthening these pilina leads to lōkahi–a concept of balance and harmony in Hawaiian worldview–for individuals and our Lāhui Hawaiʻi.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155494</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Algorithms for Machine Learning: Efficiency, Estimation Errors, and Beyond</title>
<link>https://hdl.handle.net/1721.1/155490</link>
<description>Large-Scale Algorithms for Machine Learning: Efficiency, Estimation Errors, and Beyond
Wang, Haoyue
Optimization algorithms stand as a cornerstone of machine learning and statistical inference. The advent of large-scale datasets introduces computational challenges, necessitating the pursuit of more efficient algorithms. Modern optimization techniques are usually tailored to particular machine learning problems. These approaches leverage the unique structural characteristics of the problems, resulting in enhanced efficiency compared to existing general-purpose methods. Another key aspect in examining learning algorithms involves understanding the estimation precision of the derived estimator. In some scenarios, while achieving exact optimization on the training set may be impractical, certain straightforward and effective heuristics can demonstrate commendable estimation accuracy within an appropriate statistical framework. In this thesis, we examine a few large-scale algorithms from both optimization and statistics perspectives. In Chapters 2 and 3, we study two continuous optimization algorithms tailored to structural constraints. Chapter 2 focuses on a generalized Frank-Wolfe method for unbounded, cylinder-like constraint sets. Chapter 3 focuses on a coordinate-descent-like method for polyhedral constraints with a small number of extreme points. Both methods achieve state-of-the-art performance due to their awareness of the problem structures. In Chapter 4, we study a variant of linear regression with possible mismatches between predictor-response pairs. We study a simple and efficient heuristic method, and give a rigorous analysis of its estimation error in a statistical setting. In Chapters 5 and 6, we examine two algorithms for decision trees. Chapter 5 studies the computation of optimal decision trees, and introduces a new branch-and-bound method for optimal decision trees with general continuous features. Chapter 6 turns to the analysis of the CART algorithm under a sufficient impurity decrease condition. We prove tight error bounds for signal functions satisfying this condition, and discuss a few function classes that satisfy this condition. Chapter 7 studies a density estimation problem with shape restrictions. We propose a cubic-Newton method framework for the computation, and also study the approximation property of the finite mixture.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155490</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Public and Behavioral Economics</title>
<link>https://hdl.handle.net/1721.1/155487</link>
<description>Essays in Public and Behavioral Economics
Rafkin, Charlie
This thesis examines how psychological forces and non-standard preferences affect poverty and the design of social welfare programs for low-income households. &#13;
&#13;
The first chapter, "Eviction as Bargaining Failure: Hostility and Misperceptions in the Rental Housing Market" (co-authored with Evan Soltas), studies the causes of evictions from rental housing and the welfare impact of policy interventions to address them. Court evictions from rental housing are common but could be avoided if landlords and tenants bargained instead. Such evictions are inefficient if they are costlier than bargaining. We test for two potential causes of inefficient eviction — hostile social preferences and misperceptions — by conducting lab-in-the-field experiments in Memphis, Tennessee with 1,808 tenants at risk of eviction and 371 landlords of at-risk tenants. We detect heterogeneous social preferences: 24% of tenants and 15% of landlords exhibit hostility, giving up money to hurt the other in real-stakes Dictator Games, yet more than 50% of both are highly altruistic. Both parties misperceive court or bargaining payoffs in ways that undermine bargaining. Motivated by the possibility of inefficient eviction, we evaluate the Emergency Rental Assistance Program, a prominent policy intervention, and find small impacts on eviction in an event-study design. To quantify the share of evictions that are inefficient, we estimate a bargaining model using the lab-in-the-field and event-study evidence. Due to hostile social preferences and misperceptions, one in four evictions results from inefficient bargaining failure. More than half would be inefficient without altruism. Social preferences weaken policy: participation in emergency rental assistance is selected on social preferences, which attenuates the program’s impacts despite the presence of inefficiency. &#13;
&#13;
The second chapter, "The Welfare Effects of Eligibility Expansions: Theory and Evidence from SNAP" (co-authored with Jenna Anders), studies the U.S. rollout of eligibility expansions in the Supplemental Nutrition Assistance Program. Using administrative data from the U.S. Department of Agriculture, we show that expanding eligibility raises enrollment among the inframarginal (always-eligible) population. Using an online experiment and an administrative survey, we find evidence that information frictions, rather than stigma, drive the new take-up. To interpret our findings, we develop a general model of the optimal eligibility threshold for welfare programs with incomplete take-up. Given our empirical results and certain modeling assumptions, the SNAP eligibility threshold is lower than optimal. &#13;
&#13;
The third chapter, "Preferences for Rights" (co-authored with Aviv Caspi and Julia Gilman), observes that public discourse about in-kind transfers often appeals to "preferences for rights" — for instance, the "right to health care" or "right to counsel" for indigent legal defense. Preferences for rights are "non-welfarist" if the person values the right per se, holding fixed how the right instrumentally affects others’ utilities. We test for non-welfarist preferences for rights, and their relationship to redistributive choices, with incentivized online experiments (N = 1,800). Participants face choices about allocating rights goods (lawyers, health care) and benchmark goods (bus passes, YMCA memberships) to tenants facing eviction. We implement a share of choices. In two of three experiments, more than half of participants allocate rights goods in ways that are consistent with preferences for rights and dominated if preferences were entirely welfarist. Dominated behaviors are more common with rights goods than benchmarks. In a fourth experiment, those with preferences for rights also exhibit "anti-targeting," where they redistribute lawyers and health care more universally than benchmark goods to recipients whose incomes differ. At least 26% of participants are non-welfarist, while at most 31% are welfarist.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155487</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in International Trade and Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/155486</link>
<description>Essays in International Trade and Macroeconomics
Kim, Bumsoo
The thesis consists of three essays on international trade, macroeconomics and finance. In the first essay (joint with Marc de la Barrera and Masao Fukui), we study how the interaction between China's productivity growth and currency peg to the US dollar affected the labor market and trade imbalance in the United States. Empirically, we document that in response to similar exposure to Chinese exports, countries pegging to the US dollar experienced larger unemployment and trade deficits compared to floating countries. Theoretically, we develop a dynamic model of trade featuring endogenous imbalances and nominal rigidity, and show that Foreign growth may hurt Home welfare and characterize optimal trade and monetary policy in this environment. Quantitatively, we find that China's currency peg is responsible for 447 thousand manufacturing jobs lost in the US over 2000-2012, one third of the total US trade deficit over the same period, and reduced US lifetime welfare gains from Chinese growth by 32% compared to an economy where an otherwise identically growing China had its currency floating. A short-run safeguard tariff may have effectively accommodated China's currency peg and ameliorated the labor market distortions. &#13;
&#13;
In the second essay (joint with Ying Gao and Marc de la Barrera), we study the Federal Reserve's problem of disclosing the models it uses in supervisory stress tests of large banks. Banks argue that nondisclosure leads to inefficiencies from uncertainty, but regulators are concerned that full disclosure can lead to banks gaming the system. We formalize this trade-off in a stylized model where both the regulator and banks have private "models" about a risky asset, and the regulator uses its own model to "stress test" the investment. We show that the regulator may benefit from hiding the model when the bank's model is more precise than the regulator's own model. The key idea is that hiding the regulator's model forces the bank to guess it using the bank's own models, effectively eliciting the bank's private information. We also show that if the regulator can fine-tune disclosure policies, the regulator can approximately enforce the first-best action as if the regulator fully knew all the private information held by banks. The intuition is an application of the Cremer and McLean (1988) information rent extraction result.&#13;
&#13;
In the third essay, I investigate the rationale behind East Asian countries' adoption of currency pegging strategies during periods of accelerated growth and explore the optimal peg level in the context of nominal wage rigidity and financial frictions. By developing a policy framework under wage rigidity, imperfect substitution, and exchange rate determination, I find that while the optimal peg level significantly influences real outcomes, standard economic models struggle to rationalize undervalued currency pegs. Potential justifications for an undervalued peg, such as export-driven learning externalities, are explored through numerical simulations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155486</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Monetary Policy and Growth</title>
<link>https://hdl.handle.net/1721.1/155485</link>
<description>Essays in Monetary Policy and Growth
Halperin, Basil
This thesis is composed of three essays studying both monetary policy and economic growth. The first two chapters study optimal monetary (and fiscal) policy. The third chapter studies the relationship between transformative artificial intelligence, economic growth, and asset pricing.&#13;
&#13;
The first chapter (joint with Daniele Caratelli) studies optimal monetary policy in a world with menu costs. We analytically characterize optimal monetary policy in a multisector economy with menu costs and show that inflation and output should move inversely following sectoral shocks. That is, after negative productivity shocks, inflation should be allowed to rise, and vice versa. In a baseline parameterization, optimal policy stabilizes nominal wages. This nominal wage targeting contrasts with inflation targeting, the optimal policy prescribed by the textbook New Keynesian model in which firms are permitted to adjust their prices only randomly and exogenously. The key intuition is that stabilizing inflation causes shocks to spill over across sectors, needlessly increasing the number of firms that must pay the fixed menu cost of price adjustment compared to optimal policy. Finally, we show in a quantitative model that, following a sectoral shock, nominal wage targeting reduces the welfare loss arising from menu costs by 81% compared to inflation targeting.&#13;
&#13;
The second chapter offers a reexamination of optimal monetary and fiscal policy at the zero lower bound. I make five conceptual points about optimal monetary and fiscal policy at the zero lower bound (ZLB) in representative agent New Keynesian models, using the simplest possible version of such a model. (1) Monetary policy is typically described as facing a time consistency problem at the zero lower bound; but if ZLB episodes are a repeated game rather than a one-shot game – as is empirically realistic – then the time consistency problem can be easily overcome by reputational effects. (2) The ZLB is not special, in terms of the constraint it creates for monetary policy: an intratemporal rigidity, such as the minimum wage or rent control, creates exactly the same kind of constraint on monetary policy as the intertemporal rigidity of the ZLB. (3) Austerity is stimulus: in the representative agent New Keynesian model, fiscal stimulus works through the change in government spending. Promising to cut future spending – committing to austerity – has precisely the same effect on inflation and the output gap as a decision to raise spending today. (4) Fiscal stimulus can be contractionary, when targeted heterogeneously: if fiscal spending is targeted at certain sectors, this can in fact lower inflation and deepen the output gap. (5) Fiscal policy faces a time consistency problem at the ZLB, just as monetary policy does. Overall, I suggest that – in this class of models – the power of monetary policy at the ZLB has been underrated, and the power of fiscal policy has been overrated.&#13;
&#13;
The third chapter (joint with Trevor Chow and J. Zachary Mazlish) studies how asset prices can be used to forecast the pace of development of AI technology. We study the implications of transformative artificial intelligence for asset prices, and in particular, how financial market prices can be used to forecast the arrival of such technology. We take into account the double-edged nature of transformative AI: while advanced AI could lead to a rapid acceleration in economic growth, some researchers are concerned that building a superintelligence misaligned with human values could create an existential risk for humanity. We show that under standard asset pricing theory, either possibility -- aligned AI accelerating growth or unaligned AI risking extinction -- would predict a large increase in real interest rates, due to consumption smoothing. The simple logic is that, under expectations of either rapid future growth or future extinction, agents will save less, increasing real interest rates. We contribute a variety of new empirical evidence confirming that, contrary to some recent work, higher growth expectations cause higher long-term real interest rates, as measured using inflation-linked bonds and rich cross-country survey data on inflation expectations. We conclude that monitoring real interest rates is a promising framework for forecasting AI timelines.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155485</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Human-Multi-Robot Collaborative Visual Exploration in Underwater Environments</title>
<link>https://hdl.handle.net/1721.1/155482</link>
<description>Enabling Human-Multi-Robot Collaborative Visual Exploration in Underwater Environments
Jamieson, Stewart Christopher
This thesis presents novel approaches to vision-based autonomous exploration in underwater environments using human-multi-robot systems, enabling robots to adapt to evolving mission priorities learned via a human supervisor's responses to images collected in situ. The robots model the spatial distribution of various habitats and terrain types in the environment using semantic classes learned online, and send image queries to the supervisor to learn which of these classes are associated with the highest concentration of targets of interest. The robots do not require prior examples of these targets, and learn these concentration parameters online. This approach is suitable for exploration in unfamiliar environments where unexpected phenomena are frequently discovered, such as coral reefs. A novel risk-based online learning algorithm identifies the concentration parameters using the fewest possible queries, enabling the robots to adapt quickly and reducing the operational burden on the supervisor.&#13;
&#13;
I introduce four primary contributions to address prevalent challenges in underwater exploration. Firstly, a multi-robot semantic representation matching algorithm enables inter-robot sharing of semantic maps, generating consistent global maps with 20-60% higher quality scores than those produced by other methods. Next, we present DeepSeeColor, a novel real-time algorithm for correcting underwater image color distortions, which achieves up to 60 Hz processing speeds, thereby enabling improved semantic mapping and target recognition accuracy online. Thirdly, an efficient risk-based online learning algorithm ensures effective communication between robots and human supervisors, and, while remaining computationally tractable, overcomes the myopia which would cause previous algorithms to underestimate a query's value. Lastly, we propose a new reward model and planning algorithm tailored for autonomous exploration, together enabling a 25-75% increase in the number of targets of interest located when compared to baseline surveys. These experiments were conducted with simulated robots exploring real coral reef maps and with real, ecologically meaningful targets of interest.&#13;
&#13;
Collectively, these contributions overcome key barriers to vision-based autonomous underwater exploration, and enhance the capability of autonomous underwater vehicles to adapt to new and evolving mission objectives in situ. Beyond marine exploration, these contributions have value in broader applications, such as space exploration, ecosystem monitoring, and other online learning problems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155482</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Belief is Messy</title>
<link>https://hdl.handle.net/1721.1/155481</link>
<description>Belief is Messy
Pearson, Joshua Edward
Belief is messy; but if we are careful, it can be tamed. This guiding slogan anchors my thesis, which navigates the complexities of belief regarding three central topics: the conditions under which beliefs should be revised, how our inductively justified beliefs are structured, and what our practice of ascribing beliefs reveals about belief itself. In each case, I argue that the reality underlying belief is far messier than usually acknowledged. Despite its messiness, I demonstrate that, so long as we are careful, theorising about belief is not only possible, but fruitful. Chapter 1 argues that if ordinary beliefs about the future are justified despite having a non-zero chance of being false, then belief revision is far weaker and messier than is usually maintained. I outline novel cases concerning sequences of coin flips that display an odd phenomenon: while one is initially justified in believing a coin will not produce a series of n consecutive heads or tails, this belief must be abandoned on observing the first flip of the coin, regardless of whether it lands on heads or on tails. No theory of belief revision can yet capture this phenomenon. I outline a new theory that can. Chapter 2 applies this theory to Stalnaker’s notorious ”Composers” case. Suppose you believe on independent bases that Verdi is Italian, Bizet is French, and Satie is French. You then learn that in fact all three composers are compatriots. Should you be cautious, and become ambivalent as to whether they are all Italian or all French, or bold, and conclude they are all French? Implausibly, approaches to belief revision so far side exclusively with either the cautious or the bold, despite the fact that both reactions look permissible. I demonstrate how the theory in Chapter 1 can maintain that both reactions are indeed permissible, where which one is to be preferred depends just on one’s epistemic temperament. Chapter 3 concerns beliefs formed on the basis of induction. 
I argue that the anti-skeptical claim that such beliefs are justified and can constitute knowledge is only tenable if this knowledge is asymmetric in various surprising and messy ways. For instance, while you can know that all emeralds are green if all the emeralds in your own sample are green, you cannot know this of other possible equally-sized samples, even if they are samples that will be examined by your future self, or by other agents who are just like you. Chapter 4 considers belief ascriptions and the increasingly popular thesis that belief is ‘weak’ — the claim that one can justifiably believe a proposition even if one has low confidence in it. Though proponents of weak belief usually take various felicitous belief ascriptions as supporting their view, matters get messy on considering ascriptions of beliefs in conditionals. I show that proponents of weak belief either cannot consistently apply their preferred methodology when accommodating beliefs in conditionals, or they must deny that beliefs in conditionals can be used in reasoning.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155481</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental Limits of Learning for Generalizability, Data Resilience, and Resource Efficiency</title>
<link>https://hdl.handle.net/1721.1/155479</link>
<description>Fundamental Limits of Learning for Generalizability, Data Resilience, and Resource Efficiency
Blanchard, Moïse
With the advancement of machine learning models and the rapid increase in their range of applications, learning algorithms should not only have the capacity to learn complex tasks, but also be resilient to imperfect data, all while being resource efficient. This thesis explores trade-offs between these three core challenges in statistical learning theory. We aim to understand the limits of learning algorithms across a wide range of machine learning and optimization settings, with the goal of providing adaptable, robust, and efficient learning algorithms for decision-making.&#13;
&#13;
In Part I of this thesis, we study the limits of learning with respect to generalizability and data assumptions following the universal learning framework. In universal learning, we seek general algorithms that have convergence guarantees for any objective task without structural restrictions. While this cannot be achieved without conditions on the training data, we show that in general this can be performed beyond standard statistical assumptions. More generally, we aim to characterize provably-minimal assumptions for which universal learning can be performed, and to provide algorithms that learn under these minimal assumptions. After giving a detailed overview of the framework and a summary of our results in Chapter 2, we investigate universal learnability across a wide range of machine learning settings: full-feedback in realizable online learning (Chapter 3), supervised learning with arbitrary or adversarial noise (Chapter 4); partial-feedback in standard contextual bandits (Chapter 5) and, as a first step towards more complex reinforcement learning settings, contextual bandits with non-stationary or adversarial rewards (Chapter 6). &#13;
&#13;
We investigate the impact of resource constraints in Part II, specifically of memory constraints in convex optimization. The efficiency of optimization algorithms is typically measured through the number of calls to a first-order oracle which provides value and gradient information on the function, aptly referred to as oracle-complexity. However, this may not be the only bottleneck; understanding the trade-offs with the usage of resources such as memory could pave the way for more practical optimization algorithms. Following this reasoning, we make advancements in characterizing achievable regions for optimization algorithms in the oracle-complexity/memory landscape. In Chapter 7 we show that full memory is necessary to achieve the optimal oracle-complexity for deterministic algorithms; hence, classical cutting-plane methods are Pareto-optimal in the oracle-complexity/memory trade-off. On the positive side, we provide memory-efficient algorithms in Chapter 8 for high-accuracy regimes (sub-polynomial in the dimension). In exponential-accuracy regimes, these algorithms strictly improve the oracle-complexity of gradient descent while preserving the same optimal memory usage. These algorithms can in fact be used for the more general feasibility problem for which we give improved lower-bound trade-offs in Chapter 9. These results imply that in standard accuracy regimes (polynomial in the dimension), gradient descent is also Pareto-optimal and reveal a phase transition for the oracle-complexity of memory-constrained algorithms.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155479</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Things Seem: Arbitrariness, Transparency, and Representation</title>
<link>https://hdl.handle.net/1721.1/155478</link>
<description>How Things Seem: Arbitrariness, Transparency, and Representation
Heine, Jessica Anne
The principles at the heart of this dissertation are Arbitrariness and Transparency. A representation is arbitrary with respect to content to the extent that the vehicle of representation (brain states, phenomenal experiences, pen marks, soundwaves, etc.) could have represented different content. A representation is transparent if one can be aware of the content of the representation in virtue of hosting it, without being aware of the representation itself. &#13;
&#13;
Chapter 1 argues that visual phenomenal character is fully arbitrary with respect to the objects of perception. That is, for any visually perceptible set of objects and any visual phenomenal character (any ways things perceptually seem) there could be a veridical perception of exactly those objects with that character. This principle is rejected by almost all contemporary theories of perception, yet rarely addressed directly. Many have taken the apparent inconceivability of a certain sort of “shape inversion” — as compared to the more plausible, frequently discussed “color inversion” — as evidence that the spatial characters of our perceptions are uniquely suited to and/or revelatory of the structure of their objects, such that alleged perceptions of those objects that differed radically in spatial character could not be veridical. I argue that these conclusions are unjustified: I claim that the difficulty involved in constructing coherent “shape inversion” scenarios is attributable to the complex relations among visual and tactile shape experiences, as opposed to relations between shape experiences and worldly shape properties. &#13;
&#13;
There is a consensus that endorsing the arbitrariness of perception — as defended in Chapter 1 — necessitates rejecting the transparency of perception with respect to worldly objects. Chapter 2 attacks that consensus. The consensus requires positing a family of properties whose metaphysical status is much more peculiar than is generally appreciated. These “noumenal” properties are allegedly essential to explaining the veridicality of our perceptions, yet no clear explanation is available for how we can learn about them or why we should postulate them. I argue that they do not exist.&#13;
&#13;
Chapter 3 defends an empiricist constraint on understanding language. I argue that the arbitrariness of language prevents anyone — regardless of intelligence, access to data, etc. — from understanding the meaning of words merely by learning how words relate to other words or other arbitrary symbols. While Chapter 1 argues that perception is arbitrary with respect to objects, perception is not arbitrary with respect to perspective-indexed contents. It may be arbitrary how the pen looks insofar as it is arbitrary who is doing the looking, but it is not arbitrary how the pen looks to me or to you. Given what the pen is like and given what I am like, there is only one way for the pen to look to me. I thus argue that one can only understand language by associating at least some linguistic expressions with perceptual representations of parts of the world described by those expressions. If this view is correct, then all knowledge of the world necessarily relies on foundational knowledge about how the world perceptually seems to the knowers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155478</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Inequality in Cities</title>
<link>https://hdl.handle.net/1721.1/155477</link>
<description>Essays on Inequality in Cities
Weiwu, Laura
This dissertation investigates the equity consequences of one of the largest and most ambitious public works programs in the world—the Interstate highway system—on inequality across groups and the intergenerational outcomes of children. &#13;
&#13;
In my primary dissertation project described in chapters one and two, I quantify how interstate road infrastructure construction during the 1960s increased racial inequality in American cities. As a mechanism for why these disparities arise, I explore the central role of exclusionary institutions that limited access to residential neighborhoods for minority families and how they shape the distributional effects of interstate highway policy.&#13;
&#13;
I develop a quantitative model with rich spatial heterogeneity to capture how in equilibrium, institutions interact with social preferences and the housing market to influence inequality in highway impacts. The model structure crucially enables distinguishing the extent to which segregation is de jure, versus due to economic differences or social homophily, to measure which barriers are most meaningful for the residential choices of Black families. I estimate the model using spatially-granular historical records constructed from restricted Decennial Censuses for 25 cities and archival road network maps that I digitized for 71 cities. The empirical variation for estimation derives from quasi-random placement of interstate routes relative to historical comparison roads and from spatial discontinuities in where racial exclusion prevailed across locations.&#13;
&#13;
With the empirical estimates and the model framework, I find that highways produced commuting benefits that accrued largely to suburban neighborhoods and environmental harms concentrated in central areas, where the Black population resided, leading to their losses from interstate highways. Exclusionary institutions are the primary force behind segregation in the 1960s and account for the majority of the racial gap in interstate impacts. When I eliminate constraints on where Black households are permitted to live, rather than being harmed by highways, the Black population achieves gains. This shift in welfare occurs because they are no longer confined to the urban core, where highway costs outweigh benefits, and the findings highlight the key role of institutions in shaping the disparate incidence of place-specific shocks. &#13;
&#13;
In my secondary dissertation project described in chapter three, I measure the consequences of place-based interventions on children’s long-run outcomes, continuing with the setting of the interstate highway system.&#13;
&#13;
The first step to making this research possible is building an infrastructure of intergenerational linkages encompassing 100 years of economic events. At the U.S. Census Bureau, I have played a leading part in creating novel parent-child linkages that cover the universe of the U.S. population born between 1964 and 1979—on the order of tens of millions of individuals. Construction uses machine learning models, string-comparison methods from natural language processing, and immense administrative tax datasets. &#13;
&#13;
Using these linkages, I document that upward mobility for Black children was depressed during this period, where children from families in the top-quintile of parental income were more likely to enter the bottom-quintile in adulthood rather than remain in the top-quintile. I then investigate how the interstate highway network affected the economic mobility of these households. Place-based policies, such as transportation infrastructure, aim to benefit areas they directly target, and highways do so by improving a neighborhood’s access to employment. However, when a policy is sufficiently large, migration in equilibrium alters the peer composition of both the locations households seek out and the locations they depart from. In the case of the interstate network, as this migration was highly differential by economic status (higher educated, White families left central areas for suburbs with increases in connectivity), inequality was amplified across space—suburban neighborhoods experienced job access increases and enhanced peer externalities while central neighborhoods faced solely declines in peer externalities. These direct and indirect impacts then subsequently influence intergenerational mobility by race.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155477</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance, Stability and Control of Electric Short Takeoff and Landing Aircraft</title>
<link>https://hdl.handle.net/1721.1/155476</link>
<description>Performance, Stability and Control of Electric Short Takeoff and Landing Aircraft
Courtin, Christopher B.
The maturation of distributed electric propulsion (DEP) technologies for aircraft presents an opportunity to develop new aircraft configurations which take advantage of favorable aero-propulsive couplings. One such arrangement is the electrically blown wing, where the slipstream of distributed, electrically-driven propellers arranged along the wing leading edge is used to enhance the wing lifting capability, enabling reductions in required field length or improvements in cruise efficiency compared to aircraft with conventional high lift systems. DEP blown lift enables the development of a new class of electric super-short takeoff and landing (eSTOL) aircraft, which are fixed wing aircraft designed for sufficiently short runways to make them competitive with vertical flight aircraft. The objectives of this thesis are to examine where in the design space of potential aircraft the introduction of DEP blown lift technology offers a weight or fuel burn advantage compared to alternative configurations, and to examine the implications of flight in the low-speed, high-power envelope for vehicle stability and control. An approach is developed for modeling propeller-based distributed electric propulsion based on existing jet flap theory. The theory is advantageous for capturing the impact of propeller diameter on the lift augmentation. Corrections to the effective deflection angle of the jet based on the propeller size are developed. The modeling approach is incorporated into a conceptual design and optimization framework to examine the system-level impacts of distributed electric propulsion, focusing on hybrid-electric configurations. Hybrid DEP is shown to offer fuel burn benefits over conventional fixed-wing configurations for the cases where the design is constrained by takeoff and landing distance requirements. DEP blown lift is shown to enable fixed-wing eSTOL aircraft with takeoff and landing ground rolls of less than 150 ft. 
These aircraft can carry twice the payload for the same design mission and vehicle weight as a vertical takeoff and landing aircraft designed with the same underlying technology. This performance is achieved through a combination of lift augmentation and relatively low wing loading which results in takeoff and landing speeds of approximately 30 kts. On landing approach, gusts and atmospheric turbulence can be a large fraction of total vehicle airspeed. The ability of the aircraft and pilot to reject gusts and track a target trajectory directly affects the ground footprint required, because the expected lateral and longitudinal offset from the touchdown aiming point must be included in any estimate of the total required runway size. The implications of flight in this low-speed, high-power regime on vehicle performance and stability are examined for a case study, the Electra EL-2 aircraft, a piloted hybrid-electric DEP aircraft developed to demonstrate the feasibility of eSTOL operations. The thesis shows the need to use the electric motors as part of the attitude control system of the aircraft at slow speeds to increase the control power in the roll and yaw axes. Flight test results from this aircraft are shown to agree well with the jet flap modeling approach. Finally, a series of candidate flight control laws incorporating the electric motors into the EL-2 vehicle flight control system are presented. Exploratory tests of these control laws by ten pilots in the EL-2 simulator showed the inclusion of the motors in the flight control system improved the pilot handling qualities ratings and achievable landing precision of the aircraft, with the greatest improvements arising from the use of the motors to augment the longitudinal stability based on airspeed feedback.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155476</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Political Economy</title>
<link>https://hdl.handle.net/1721.1/155475</link>
<description>Essays on Political Economy
Molina, Carlos Molina
This dissertation is a collection of three papers on political economy that explore several of the most salient political events in recent decades, including the rise in affective polarization, protests, and high levels of corruption in developing countries. These events can severely affect the success of democratic institutions, which are vital for economic development. In Chapter 1, I study the role of social influence on social media in explaining individuals' preferences for biased news. I design and conduct a field experiment on Twitter to identify this effect and its potential mechanisms. The chapter demonstrates one mechanism through which social media can attenuate the demand for polarizing content: as these platforms amplify the visibility of user interactions, thereby increasing the importance of social image concerns, users adjust their news consumption to be more balanced. In Chapter 2, I investigate the effects of social media on collective action. By exploiting Facebook's release in a specific language as an exogenous source of variation in access to social media where the language is spoken, I show that Facebook, the largest social media platform, has had a significant and sizable positive impact on citizen protests. In Chapter 3, I document evidence of an unexplored yet common form of vote-buying: voter registration shifting. I demonstrate that this practice drastically affects 15% of local elections in Colombia. By employing quasi-experimental methods, I illustrate how the (future) allocation of transfers from central to local governments creates perverse incentives for political candidates to redistribute vote-buying.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155475</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics and Decision Making in Sustainable Operations</title>
<link>https://hdl.handle.net/1721.1/155474</link>
<description>Analytics and Decision Making in Sustainable Operations
Thayaparan, Leann
Sustainable operations has transformed in the past decade, as interest from consumers, companies and regulators has increased. There has been a growing excitement and necessity to leverage the large-scale data collected to improve the modelling and decision making around sustainable operations. In this thesis, we introduce new methodologies to support data-driven sustainable operations, specifically addressing topics around electric vehicles and the COVID-19 pandemic. &#13;
&#13;
In Chapter 2, we consider the problem of electric vehicles (EVs) as distributed storage for the electric grid. While electric vehicles can act as batteries supporting both the home and the electric grid, uncertainty around car usage must be accounted for before such models can be used in practice. We introduce a driver behavior-focused dynamic optimization for the charging and discharging of electric vehicles. We characterize policies that are interpretable to drivers to address distrust of automated discharging of car batteries and prove analytically the regimes under which such policies are optimal. Finally, we work closely with an American EV manufacturer to show the dollar and carbon savings drivers can expect from discharging based on their driving behavior. We do this by clustering drivers based on their driving to derive probability distributions of when and how much drivers use their car to feed into the dynamic optimization.&#13;
&#13;
We further develop the challenge of data-driven decision making in sustainability through Chapter 3. Rather than learning probability distributions as in Chapter 2, we introduce a deterministic approach in which a tree-ensemble model, specifically a random forest, forecasts how much drivers use their EV. This gives rise to a challenge from the predict-then-optimize literature around the tractability of optimizations in which an objective function is determined by a tree ensemble model. In this chapter we introduce an Upper Bounding Method for Optimizing over Tree Ensemble Models, UMOTEM. We demonstrate the scalability of UMOTEM, showing it grows linearly with regard to both the number of trees in the ensemble and those trees' depth. This is a strong improvement over comparable formulations which grow exponentially. We also bound the optimality gap introduced through the approximation, characterizing it using features of the random forest such as leaf separation and in-sample error. We computationally compare our approximation to similar methods, demonstrating that the algorithm captures over 90% of optimality in 2% of the runtime for publicly available datasets. Finally, we demonstrate the use of UMOTEM through two case studies. First, we take the same case as Chapter 2, and show how UMOTEM can be leveraged to optimize the charging and discharging of EVs. Second, we work closely with Oracle Retail to apply UMOTEM to promotion scheduling in order to determine an optimal markdown strategy for a fashion retailer. &#13;
&#13;
In the final chapter of this thesis, we address data-driven decision making in one of the other major operational challenges to affect the globe, the COVID-19 pandemic. We develop a SIR-based model that can account for multiple waves. This model is agnostic to what drives the new waves (new variants, behavior changes,  government policies, etc.) but takes a data-driven approach to identify when infection rates change. We prove analytical guarantees on how fast new waves can be detected. When modelling COVID-19, we show a new wave can be expected to be flagged within half a week. We also show strong computational results on COVID-19, demonstrating improvement over top COVID-19 forecasting models used by the CDC.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155474</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Development Economics</title>
<link>https://hdl.handle.net/1721.1/155473</link>
<description>Essays in Development Economics
Ho, Lisa Yen Zheng
This thesis comprises three chapters. The first essay studies the effects of flexible work arrangements on female labor force participation in West Bengal. The second essay considers the effects of mobile internet on educational outcomes in Brazil. The third essay examines how information about the carbon footprint of different foods affects meal choices in the United States. The first chapter, joint with Suhani Jalota and Anahita Karandikar, studies the causes and consequences of persistently low female labor force participation rates in India. Across the world, several hundred million women say that they want a job but are out of the labor force, often because available opportunities are incompatible with norms about their household roles. In a field experiment with 1,670 households in West Bengal, we offer flexible, short-term data entry jobs which meet households where they are in terms of expectations of women’s domestic responsibilities. We find three sets of results. First, flexibility more than triples job take-up, from 15% for an office job to 48% for a job that women can do from home, while multitasking with childcare, and at the hours they choose. Second, to better understand why employers might hesitate to offer greater job flexibility, we study effects of work-from-home on performance and find negative effects on work quality and speed. Third, flexible jobs act as a labor market gateway for women initially out of the labor force: experience with flexible jobs makes women more likely to accept less flexible and outside-the-home jobs in the future. This gateway effect may be explained by changes in attitudes about appropriate behavior for men and women. Flexibility makes a larger difference to the labor supply of women who hold more traditional pre-intervention attitudes, and work experience in turn shifts women and children’s gender attitudes to become less traditional. 
Thus, flexible work arrangements can both attract women to the labor force and provide a gateway to less flexible jobs. The second chapter, joint with Pedro Bessone and Ricardo Dahis, studies the impacts of Brazil’s staggered mobile internet rollout on children’s educational outcomes. We compare standardized test scores before and after the staggered entry of 3G into Brazil’s 5,570 municipalities using a heterogeneity-robust event-study design. We find no effects of mobile internet on test scores for 5th or 9th grade students and can reject effect sizes of 0.04 standard deviations in both math and Portuguese. Taken together, our results indicate that the arrival of high-speed mobile internet is not sufficient to improve educational outcomes either through direct take-up by individuals or through broader changes to the economy. The third chapter, joint with Lucy Page, examines the adoption and effects of carbon footprint labelling by firms in the food sector. Food systems account for approximately one-third of total greenhouse gas emissions, and simple shifts across food choices can yield large cuts in emissions. In a randomized field experiment with over 200,000 meal kit customers in the US, we find that carbon footprint labels cause customers to choose lower-emission meals, and that the introduction of labels has positive effects on customer retention and company profits. Both the reduction in emissions and the increase in profits are driven by customers with high baseline beef consumption. We find evidence that the labels act through salience rather than knowledge, and that the effects on meal choices depend on whether customers’ values are aligned with the goal to address climate change through behavioral change.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155473</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Machine Learning for Climate Adaptation</title>
<link>https://hdl.handle.net/1721.1/155469</link>
<description>Multimodal Machine Learning for Climate Adaptation
Zeng, Cynthia
Climate change stands as one of the most urgent challenges of our generation, with devastating floods in Pakistan, heartbreaking earthquakes in Turkey, and unprecedented wildfires in Canada. For over a century, meteorology has traditionally relied on solving dynamical equations, but machine learning (ML) is now emerging as a transformative force. This thesis explores how to use machine learning and optimization methods to address issues surrounding climate change adaptation and sustainable development.&#13;
&#13;
The first part of the thesis focuses on developing multimodal machine learning frameworks for extreme weather forecasting. The multimodal ML approach integrates diverse sources and modalities of data, including text-based language, images, and tabular time series. The effectiveness of such an approach is showcased through two distinct case studies in extreme weather forecasting: in Chapter 2, a short-term hurricane forecast with a 12-hour lead time, and in Chapter 3, a long-term flood risk assessment model. Our contributions include the development of a generalizable multimodal ML framework to facilitate a wide range of prediction tasks in meteorology and beyond. Notably, our hurricane forecasting models demonstrate performance comparable to the National Hurricane Center’s top models for 24-hour intensity and track forecasts.&#13;
&#13;
ML-driven weather forecasting models offer two distinct advantages over traditional dynamical models: significant reductions in computational time, enabling real-time, location-specific predictions, and the ability to develop long-term risk models for proactive disaster mitigation rather than reactive responses. Therefore, in the second part of the thesis, we delve into two application domains to envision the transformative force in addressing climate change-induced challenges. In Chapter 4, we introduce an Adaptive Robust Optimization (ARO) framework for designing insurance policies, combining historical and anticipatory risks obtained by machine learning models. In Chapter 5, we develop a real-time machine learning framework for wind forecasting, aimed at adjusting factory production levels to minimize air pollution and its impact on surrounding urban areas. In partnership with OCP Group, the world’s largest phosphate producer, our algorithm is now fully integrated into operational systems and reduces hazardous emission impact by 33-47% annually.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155469</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Communication and Signaling: Evidence, Inference, and Persuasion</title>
<link>https://hdl.handle.net/1721.1/155464</link>
<description>Essays on Communication and Signaling: Evidence, Inference, and Persuasion
Gao, Ying
This thesis contains 3 chapters, each of which explores the implications of partial disclosure in a different context. As disclosures affect the beliefs of key actors in each scenario, optimal information design is an important consideration, and I try to characterize it and identify potential pitfalls in each case.&#13;
&#13;
Chapter 1 considers the disclosure problem of a sender with a large dataset of hard evidence. A sender may have an incentive to drop observations before submitting the data to receivers in order to persuade them to take a favorable action. I predict which observations the sender discloses using a model with a continuum of data, and show that this model approximates the outcomes with large, multi-variable datasets. In the receiver's preferred equilibrium, the sender's strategy relies on imitation: they submit evidence that imitates the natural distribution under a more desirable target state. As a result, it is enough for an experiment to record data on outcomes that maximally distinguish higher states. I characterize these strategies and show that senders with little data or a favorable state fully disclose their data, but still suffer from the receiver’s skepticism, and therefore are worse-off than they are under full information. On the other hand, senders with large datasets can benefit from voluntary disclosure by dropping observations under low states.&#13;
&#13;
In Chapter 2, my coauthors and I study the Federal Reserve's problem of disclosing the models it uses in supervisory stress tests of large banks. Banks argue that nondisclosure leads to inefficiencies stemming from uncertainty, but regulators are concerned that full disclosure can lead to banks gaming the system. We formalize the intuition behind this trade-off in a stylized model where both the regulator and banks have imperfect, private "models" about a risky asset, and the regulator uses its own model to "stress test" the investment. We show that if the regulator uses its model to test the banks' investment, full disclosure is suboptimal, and the regulator may benefit from hiding the model when the bank's model is more precise than the regulator's own model. The key idea is that hiding the regulator's model forces the bank to guess it using the bank's own models, effectively eliciting the bank's private information. We also show that if the regulator can fine-tune disclosure policies, the regulator can approximately induce banks to take the first-best action, through an intuition closely related to the Cremer and McLean (1988) information rent extraction result.&#13;
&#13;
Finally, in Chapter 3 I return to signaling via hard evidence, and analyze communication through a piece of verifiable evidence when the receiver/decision maker is uncertain about where the sender's preferred action lies in relation to their own, that is, the sender's relative bias. In contrast to the known-preference case, fully informative communication is impossible, even when the receiver is certain the sender is informed. The main novelty is that receivers cannot distinguish between senders of opposite preferences who pool by withholding their information when it is unfavorable. Two opposing patterns of partial disclosure emerge. When senders are biased relative to receivers, but just as state-sensitive, nondisclosure is driven by senders with extreme preferences, who choose to withhold slightly unfavorable evidence. The reverse, however, occurs when senders are state-insensitive.&#13;
&#13;
JEL Codes: D80, D82, G28
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155464</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Climate Action</title>
<link>https://hdl.handle.net/1721.1/155463</link>
<description>Essays on the Economics of Climate Action
Page, Lucy
This thesis comprises three chapters, each of them describing an experimental project studying individual action on climate change in the United States. &#13;
&#13;
In the first chapter of my thesis, Hannah Ruebeck and I study whether and how individuals strategically build political movements, focusing on the climate movement in the US. Policy change typically requires bipartisan support in Congress, and bipartisan constituent lobbying might be key to building these legislative coalitions. Recent work in economics shows that political action spreads through networks, so citizens may have externalities on others’ political action and play a role in shaping these citizen coalitions. We use a series of online experiments with 21,000 participants to show three things. First, Democrats, “typical” members of the US climate movement, internalize these externalities and are more likely to email Congress when doing so can inspire others to join them. Second, Democrats are much more likely to try to recruit other Democrats than Republicans, even when they know that all of them believe that climate change is mostly human-caused. Third, this outreach gap is strategic, driven by Democrats' beliefs that cross-party outreach will be relatively ineffective, rather than by a distaste for engaging Republicans in the climate movement. However, widespread affective polarization—dislike of those across the political aisle—still plays an important role here in that Democrats expect cross-party outreach to fail because Republicans will be polarized against them. &#13;
&#13;
In the second chapter of my thesis, Hannah Ruebeck, James Walsh, and I study the role that story-telling can play in shaping political beliefs and engagement. We focus on narratives about policy change, studying how Americans react to information about policy progress and to storytelling about the role that citizen action plays in that policy change. In an online experiment with 6,000 participants, we first show that learning about the policy progress of the Inflation Reduction Act (IRA) somewhat increases political efficacy, or beliefs about government responsiveness to citizen action, but also reduces demand for continuing climate policy. However, pairing information about the IRA with a fictional story that ties policy progress to citizen action yields large increases in political-efficacy beliefs and continuing climate action. We find evidence that storytelling may have strong effects on climate action both because it yields large changes in beliefs and because of its strong emotional effects. &#13;
&#13;
In the final chapter of my thesis, Lisa Ho and I partner with the largest mealkit company in the United States to test the impacts of adding carbon-footprint labels to menus. Food systems account for about one-third of total greenhouse gas emissions, and simple shifts across food choices can yield large cuts in emissions. Menu-based nudges like carbon-footprint labels are an increasingly common tool for encouraging these shifts. In a randomized field experiment with over 200,000 meal kit customers in the US, we find that adding carbon-footprint labels to menus causes customers to choose lower-emission meals, and that the introduction of labels has positive effects on customer retention and company profits. Thus, companies might have incentives to implement green nudges like these.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155463</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Environmental and Healthcare Market Design</title>
<link>https://hdl.handle.net/1721.1/155462</link>
<description>Essays in Environmental and Healthcare Market Design
Russo, Anna
This thesis studies the design of government intervention in environmental and healthcare markets. The first chapter, joint with Karl M. Aspelund, studies how asymmetric information influences the performance and design of government-established markets for conservation. Market mechanisms aim to deliver environmental services at low cost. However, this objective is undermined by participants whose conservation actions are not marginal to the incentive — or “additional” — as the lowest-cost providers of environmental services may not be those with the highest social value. We investigate this potential market failure in the world’s largest auction mechanism for ecosystem services, the Conservation Reserve Program, with a dataset linking bids in the program’s scoring auction to satellite-derived land use. We use a regression discontinuity design to show that three of four marginal winners of the auction are not additional. Moreover, we find that the heterogeneity in counterfactual land use introduces adverse selection in the market. We then develop and estimate a joint model of multi-dimensional bidding and land use to quantify the implications of this market failure for the performance of environmental procurement mechanisms and competitive offset markets. We design alternative auctions with scoring rules that incorporate the expected impact of the auction on bidders’ land use. These auctions increase efficiency by using bids and observed characteristics to select participants based on both costs and expected additionality. The second chapter explores the observation that healthcare is often allocated without prices, sacrificing efficiency in the interest of equity. Wait times then typically serve as a substitute rationing mechanism, creating their own distinct efficiency and distributional consequences. 
I study these issues in the context of the Veterans Health Administration (VA) healthcare system, which provides healthcare that is largely free but congested, and the Choice Act, a large-scale policy intervention that subsidized access to non-VA providers to reduce this congestion. Using variation in Choice Act eligibility in both patient-level and clinic-level difference-in-differences designs, I find that the price reduction for eligible veterans led to substitution away from the VA, an increase in overall healthcare utilization and spending, and reduced wait times at VA clinics in equilibrium. I then use the policy-induced price and wait time variation to estimate the joint distribution of patients’ willingness-to-pay and willingness-to-wait. I find that rationing via wait times redistributes access to healthcare to lower socioeconomic status veterans, but at a large efficiency cost (-24%). This equity-efficiency trade-off is steep: rationing by wait times is an inefficient form of redistribution across a range of equity objectives. By contrast, I find that a coarsely targeted, modest increase in copayments increases consumer surplus by more than the Choice Act, at lower cost to the VA, while disproportionately benefitting low-income veterans. The third chapter, joint with Abigail Ostriker, investigates the effects of regulations designed to correct a wedge between privately- and socially-optimal construction in areas at risk of flooding in Florida. Using a spatial regression discontinuity around regulatory boundaries and an event study around the policy’s introduction, we document that floodplain regulation reduces new construction in high-risk areas and mitigates damages at homes constructed under flood-safe building standards. Embedding these effects in a model of the housing market, we find the policy reduces damages to the socially-efficient level, but incurs higher costs than a first-best corrective tax. 
Improved targeting of the existing policy achieves 94% of first-best welfare gains, or $7,567 per newly-constructed house.&#13;
&#13;
JEL Codes: D4, D6, D8, H2, H5, I1, I3, Q2, Q5
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155462</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low Earth Orbit Spacecraft Slotting: Towards an Implementable Proposal</title>
<link>https://hdl.handle.net/1721.1/155421</link>
<description>Low Earth Orbit Spacecraft Slotting: Towards an Implementable Proposal
Lifson, Miles
The growth in the number of proposed low Earth orbit (LEO) satellites is driven primarily by large commercial communications constellations. The launch of even half of the proposed satellites would result in an order-of-magnitude increase in active spacecraft traffic, with significant implications for LEO operations. This thesis provides a framework for understanding LEO orbital use. Intelligently organizing large constellations to efficiently make use of LEO and avoid hazardous conjunctions between on-station satellites can significantly reduce risk while imposing only a minimal burden on satellite operators. This research demonstrates the design of efficient, mutually compatible orbits and shells, describes analytical tools to assess their benefits, explores trade-offs in policy implementation pathways, and estimates reductions to the collision avoidance burden for operators from the use of cross-operator compatible orbits. The proposed framework supports quantification of the efficiency of orbital shell allocations, the opportunity cost of alternatives, and the amount of remaining uncommitted volume.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155421</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining Key Engineering Parameters to Advance Electrochemical CO₂ Separation Technologies</title>
<link>https://hdl.handle.net/1721.1/155418</link>
<description>Defining Key Engineering Parameters to Advance Electrochemical CO₂ Separation Technologies
Clarke, Lauren E.
Carbon dioxide (CO₂) capture, coupled with utilization or storage, is anticipated to help facilitate society-wide decarbonization by reducing emissions of current thermochemical processes and addressing hard-to-decarbonize sectors. However, current carbon capture approaches remain energy intensive, expensive, and dependent on fossil-fuel derived heat, challenging sustainable, large-scale deployment. Electrochemical technologies for CO₂ separation have recently gained attention, as these methods have the potential to achieve higher efficiencies, directly utilize renewable energy, enable modular devices, and operate at (or near) ambient conditions. Various redox chemistries have been identified and experimentally tested for their ability to drive CO₂ separation; however, open questions remain about the expected performance of these systems and how these conceptual processes can be effectively designed at scale. Furthermore, there are several important and interdependent performance descriptors (e.g., energetic efficiency, faradaic efficiency, and separation capacity or flux) whose complex interplay convolutes determination of the optimal design space.&#13;
&#13;
To address this, my research focuses on developing modeling frameworks to better understand the relationship between material properties, operating conditions, and key system performance metrics within electrochemical CO₂ separation systems. First, I discuss a thermodynamic modeling analysis that explores tradeoffs between the upper bounds on energetic and faradaic efficiencies, and identify key molecular/system properties that balance these competing metrics. Then, I describe a cell-level model that was developed to evaluate the impact of several key variables on energetic penalties (on top of the thermodynamics) for liquid-fed electrochemical reactors (used in “4-stage” system configurations). I also demonstrate how this model can be used to explore pathways towards improving the system energetic efficiency. Finally, I describe a 2D model for gas-fed electrochemical cells (used in “2-stage” system configurations) and demonstrate how electrochemically-generated concentration gradients can induce natural convection that can significantly impact observed cell performance. The results from this model highlight how designing cells that harness natural convection can enhance achievable current densities and, ultimately, CO₂ separation fluxes. Overall, the findings from these collective modeling analyses inform ongoing molecular design and device engineering campaigns, provide insight into key system properties, and establish engineering guidelines for electrochemical CO₂ separation devices across scales.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155418</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivariant quantum connections in positive characteristic</title>
<link>https://hdl.handle.net/1721.1/155417</link>
<description>Equivariant quantum connections in positive characteristic
Lee, Jae Hee
In this thesis, we apply techniques from symplectic Gromov–Witten theory to study the equivariant quantum connections in positive characteristic. The main examples of interest arise from symplectic resolutions. We introduce equivariant generalizations of the quantum Steenrod operations of Fukaya, provide nontrivial computations in the example of the cotangent bundle of the projective line, and explore the relationship with Varchenko's construction of mod p solutions to the quantum differential equation. We then prove the compatibility of the equivariant quantum Steenrod operations with the quantum differential and difference connections. As a consequence, we obtain an identification of our operations for divisor classes with the p-curvature of the quantum connection in a wide range of examples.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155417</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Analysis of a Lithium-ion Convection Battery</title>
<link>https://hdl.handle.net/1721.1/155416</link>
<description>Modeling and Analysis of a Lithium-ion Convection Battery
Gao, Weiran
Lithium-ion batteries (LIBs), crucial to modern portable electronics and increasingly significant in transportation and grid storage, represent the state-of-the-art in energy storage technology due to their high energy density, efficiency, and long cycle life. Despite declining costs and improving energy densities, driven by advancements in materials and manufacturing processes along with expanded market scale, current LIBs often struggle to meet the evolving demands of new applications. Current research predominantly focuses on material innovations, with less attention given to re-engineering cell architectures to address the technological challenges.&#13;
This thesis investigates the "convection battery" cell architecture, a novel approach involving circulating electrolyte through the porous electrodes and separator of a LIB cell to enhance mass and thermal transport. Compared to traditional LIBs, this architecture may enhance ion flux in electrodes, improve safety and maintenance, simplify system design, and ultimately reduce overall costs. Prior studies, including experimental work in our laboratory, have highlighted the benefits of electrolyte flow, yet a comprehensive engineering analysis on this aspect is lacking.&#13;
To bridge this gap, this thesis employs a combination of modeling and analytical techniques to systematically explore the potential advantages and opportunities enabled by the convection battery cell architecture. The first half of the thesis delves into the fundamental mechanisms of electrolyte convection in enhancing mass and thermal transport within a LIB cell, utilizing a convection battery sandwich cell layer model developed from the Li-ION SIMulation BAttery (LIONSIMBA) Toolbox. Through dimensional analysis, I identified conditions under which convection provides the most performance enhancement, alongside exploring the necessary flow rates and performance limitations. &#13;
In the latter half of the thesis, practical implementation aspects are examined, starting with the requisite additional electrolyte to achieve desired transport enhancements. A potential design for the convection battery system is proposed, and COMSOL-based convection battery cell stack models and a system design model were developed to aid the analyses. Through illustrating its utility in two distinct scenarios, I have endeavored to highlight the convection battery's unique value proposition and its potential to broaden the applicability of current LIB technologies. This thesis establishes a foundation for the convection battery technology, highlighting its potential to improve the performance of current LIB systems and to venture into novel application domains. To conclude the thesis, I discuss future research avenues and the design considerations essential for the advancement and realization of the convection battery technology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155416</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Pathway Design for Biopolymer Building Block Production</title>
<link>https://hdl.handle.net/1721.1/155415</link>
<description>Novel Pathway Design for Biopolymer Building Block Production
Bannister, K'yal Rasean
The carbon and energy intensity associated with plastics production from petroleum, combined with the accumulation of plastics waste in the environment, necessitates the development of technologies for the production of renewably-derived, degradable alternatives. Microorganisms can be metabolically engineered to convert renewable feedstocks to plastic building blocks. This thesis aims to design and implement metabolic pathways to industrially relevant hydroxy acids (HAs) and diols with the ultimate goal of using them for sustainable plastics production.&#13;
&#13;
We began by prioritizing bioaccessible HAs for bio-production. Our analysis identified 182 bioaccessible HAs. We prioritized monomers from this list based on novelty, ease of chemical polymerization, maximum theoretical yield, and potential to improve material properties in a biopolymer. 3-Hydroxyisobutyric acid (3HIB) and 3-hydroxy-2-methylbutyric acid (3H2MB) were prioritized based on their high molecular weight polymerization and ability to reduce thermal degradation when incorporated into a biopolymer.&#13;
&#13;
Next, we designed a novel pathway to 3HIB and 3H2MB. This pathway involves the conversion of glucose to various branched acyl-CoAs and ultimately to 3HIB or 3H2MB. As proof of concept, we engineered E. coli for the production of 3HIB and 3H2MB from glucose at titers as high as 66 ± 5 mg/L and 290 ± 40 mg/L, respectively. To our knowledge, this is the first report of 3H2MB bio-production from glucose. We optimized this pathway for 3H2MB production by deleting competing pathways and developing a byproduct recycle. Finally, we investigated mutagenesis of a pathway enzyme to expand the product range of this pathway. Future work should investigate these mutants for the production of other biopolymer building blocks.&#13;
&#13;
Finally, the feasibility of biological pathways to 3-methyl-1,5-pentanediol (3MPD) was investigated. 3MPD is an attractive monomer for the production of degradable, diol-diacid copolyesters. The feasibility-determining step in this pathway involves the hydroxylation of 3-methylpentanol (3MP) to 3MPD by AlkBGT from P. putida. Despite optimizing our system for alkBGT expression, strain growth, and substrate transport, no 3MP conversion to 3MPD was observed. Future work should probe other hydroxylation enzymes.&#13;
&#13;
Overall, this thesis demonstrates the utility of novel pathway design to reach HAs and diols that lead to biopolymers with improved industrial application.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155415</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis Imaging with Vector Sensor Arrays</title>
<link>https://hdl.handle.net/1721.1/155414</link>
<description>Synthesis Imaging with Vector Sensor Arrays
Kononov, Ekaterina R.
Radio astronomy observations at frequencies below 10 MHz could provide valuable science, such as measuring the cosmic dark age signal in the redshifted 21-cm hydrogen absorption line, detecting exoplanetary auroral emissions which lead to inferences about magnetic fields and atmospheres, and characterizing the effects of solar wind and coronal mass ejections on the magnetospheres of solar system planets. Despite their value, few measurements in the sub-10 MHz band have been made because of the technical challenges in conducting these observations. &#13;
&#13;
Parabolic antennas, which are commonly used for radio astronomy, would need to be impractically large to obtain high angular resolution at frequencies below roughly 100 MHz. Instead, to observe frequencies between 10 MHz and 100 MHz, observatories use interferometric arrays of electrically-small antennas such as dipoles. At even lower frequencies, below about 10 MHz, the Earth's ionosphere reflects, attenuates, and distorts radio waves, making radio astronomy in this band only possible from space. However, a spaceborne array would need thousands of electrically-small antennas to reach the sensitivity required for detecting faint astronomical signals, and it would need to be positioned far from the Earth to reduce the impact of Earth-generated radio interference. &#13;
&#13;
The high number of antennas and large distance from the Earth would make such an array expensive to deploy and difficult to operate. For a given performance level, using more efficient antennas would minimize the number needed, and using antennas that are robust to interference would reduce the required distance from Earth. To this end, this thesis considers constructing the array out of vector sensor antennas. These advanced antennas consist of three orthogonal dipoles and three orthogonal loops with a common phase center. They are used for terrestrial applications such as geolocation, and their benefits include direction-finding and polarimetric capabilities. Vector sensors could provide a more efficient means to conduct radio astronomy observations of low frequencies in space but have not been considered for this application previously.&#13;
&#13;
This thesis investigates how an array of vector sensors could be used for space-based radio interferometry. First, we describe a parametric model of vector sensor sensitivity, and we show that vector sensors are twice as sensitive as tripoles, simpler antennas that have been previously considered for similar applications. Next, we derive a polarimetric radio interferometry measurement equation for vector sensors. Then, algorithms for inverting the measurement equation, or synthesis imaging, are developed and demonstrated through several case studies using synthetic and measured data. We show that vector sensors can be 6-10 dB more robust to noise than tripoles when detecting point sources, and that interferometric measurements with vector sensors contain four times more Fisher information than measurements with tripoles. The analyses and tools developed in this thesis contribute to enabling space-based sub-10 MHz radio astronomy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155414</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identification and characterization of rickettsial proteins at the host-pathogen interface</title>
<link>https://hdl.handle.net/1721.1/155408</link>
<description>Identification and characterization of rickettsial proteins at the host-pathogen interface
Sanderlin, Allen Garrett
Bacterial pathogens subvert host cell processes during infection by secreting protein effectors that manipulate host machinery. Cataloging such effectors has enabled a deeper exploration of the molecular basis of disease for numerous pathogens. Members of the Rickettsia genus are obligate intracellular bacteria that pose a growing threat to human health, but their complete dependence on the host cell niche has precluded a thorough investigation of the bacterial factors acting at the host-pathogen interface. Accurately identifying and characterizing these proteins will provide a necessary framework for understanding rickettsial biology and disease.&#13;
&#13;
In this work, I demonstrate that the conserved rickettsial protein RARP-1 is not a bona fide secreted effector, as had been previously suggested. Instead, I found that Rickettsia parkeri RARP-1 localizes to the periplasm where it supports the rickettsial life cycle by promoting host cell invasion and intracellular growth. Motivated by this discrepancy, I developed a cell-selective proteomic screen to identify effectors secreted during R. parkeri infection. In addition to several known secreted effectors, my approach revealed the novel secreted rickettsial factors SrfA–G. Notably, these Srfs include Rickettsia-specific proteins of unknown function that are structurally diverse, variably conserved, and targeted to distinct host cell compartments. I further demonstrate that one of these effectors, SrfD, localizes to the endoplasmic reticulum where it interacts with the host Sec61 translocon. Taken together, this work highlights the elusive nature of rickettsial effectors while offering new ways to probe the unique biology of these bacterial pathogens.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155408</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Foundations of Anti-System Protests in Democracy</title>
<link>https://hdl.handle.net/1721.1/155407</link>
<description>The Foundations of Anti-System Protests in Democracy
Cummings, Peter M. M.
Anti-system protests in democracy are a subcategory of political protests characterized by participants from diverse sectors or groups in society, wide-ranging demands, blame directed at the political system, and a context of representative democracy. This protest subtype raises a critical puzzle: why do protesters in democracy come to blame the political system for their grievances? I argue that common explanatory theories in the protest literature – such as relative deprivation, political opportunity, and resource mobilization – provide relevant background conditions that increase the likelihood of protest onset, but these variables are unlikely to produce anti-system protests in democracy without repeated episodes of political unresponsiveness across sectors. Iterative episodes of unresponsiveness lead sectoral movement organizations to accumulate unresolved demands, to prefer extra-institutional action, and to strategically update by escalating their targets of blame to the political system. With these foundations in place, democracies become vulnerable to anti-system protests, which can then be set off by seemingly insignificant triggers. I test this argument primarily through in-depth, historical analysis of two cases: 1) Chile 2006-2019, a case in which sectoral protests escalated to anti-system protests, and 2) Brazil 2003-2013, a case in which sectoral protests escalated to multi-demand and multi-sector but non-system protests. My project challenges literature that suggests that modern protests come together rapidly due to technological advances, supports literature that urges scholars to view protests as interconnected waves of contention, and adds to literature on blame attribution by showing how and why social movement organizations escalate their targets of blame over time and based on political experiences. Lastly, my project connects to the anti-system politics literature, exploring an alternative anti-system outcome to populism.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155407</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CAG repeat expansions induce cytoplasmic RNA aggregation</title>
<link>https://hdl.handle.net/1721.1/155405</link>
<description>CAG repeat expansions induce cytoplasmic RNA aggregation
Das, Michael R.
Expansions of ‘CAG’ trinucleotide repeats in the genome can cause over a dozen diseases, including Huntington disease and several spinocerebellar ataxias. Short tracts of these ‘CAG’ repeats are benign; however, mutant alleles that harbor an abnormally large number of consecutive ‘CAG’ motifs can result in disease. Mutant RNA molecules harboring an expanded repeat tract can promote pathology through at least two activities. First, these repeat-containing RNA molecules are prone to condensation within the nucleus. This results in the formation of liquid-like nuclear foci. These foci can sequester RNA-binding proteins, resulting in a loss of the RNA-binding proteins’ functions. Second, the expanded repeat tract in the RNA can undergo translation even if the repeat does not lie in a canonical coding sequence, a phenomenon termed repeat-associated non-AUG (RAN) translation. This can result in the production of toxic, repeat-containing proteins.  &#13;
 &#13;
In this thesis, I present evidence that RNA molecules containing an expanded CAG repeat tract can condense in the cytoplasm to form gel-like aggregates. Unlike nuclear foci of CAG repeat RNAs, these cytoplasmic RNA aggregates are associated with RAN translation and with cytotoxicity. The repeat RNAs co-aggregate in the cytoplasm with their cognate repeat proteins. These aggregates sequester several typically nuclear RNA-binding proteins, such as FUS and TDP-43. Inhibiting translation of the repeat RNAs prevents cytoplasmic RNA aggregation, cytoplasmic mislocalization of RNA-binding proteins, and toxicity.&#13;
&#13;
In the second part of this thesis, I present evidence that the cytoplasmic repeat RNA aggregates also contain NEAT1, a lncRNA that typically localizes to the nucleus. CAG repeat RNAs that do not undergo translation, and consequently do not form cytoplasmic RNA aggregates, do not cause NEAT1 mislocalization. Several proteins that bind NEAT1, including FUS, also mislocalize to the cytoplasmic RNA aggregates. Genetic knock-down of FUS is sufficient to induce cytoplasmic accumulation of NEAT1. These results suggest that depletion of FUS from the nucleus (e.g. through sequestration at cytoplasmic aggregates) increases the accumulation of NEAT1 in the cytoplasm.&#13;
&#13;
Ultimately, this work opens new questions into the role of cytoplasmic RNA aggregates in CAG repeat expansion disorders and the role of FUS in nuclear retention of RNAs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155405</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coupled cycling of metals with nitrogen and carbon in marine sediments</title>
<link>https://hdl.handle.net/1721.1/155404</link>
<description>Coupled cycling of metals with nitrogen and carbon in marine sediments
Karolewski, Jennifer S.
Cross-element processes are complex and often understudied components of biogeochemical cycles. In this thesis, I use stable isotopes of carbon, nitrogen, and oxygen as the primary tools to interrogate these complex reactions. First, I report abiotic oxidation of nitrite to nitrate by manganese(III)-pyrophosphate. This reaction can occur even in the absence of oxygen, unlike biological nitrite oxidation. Reaction rates were measured at a range of environmentally relevant pH values (5–8), with the reaction proceeding more quickly at lower pH. The reaction was second order with respect to manganese(III) and first order with respect to nitrous acid. No reversibility of the reaction was observed upon addition of isotopically distinct nitrate. An inverse kinetic isotope effect of +19.9 ± 0.7‰ was calculated, comparable in magnitude and direction to that of biological nitrite oxidation. In natural waters, such as estuaries, this reaction could play an important role in the nitrogen cycle. Next, I report an abiotic reaction between hydroxylamine and manganese(III)-pyrophosphate which forms nitrous oxide, nitrite, and likely dinitrogen gas. In artificial seawater (pH = 8), this reaction proceeds rapidly, with the ratio of products highly dependent on the reactant ratio. A nitrous oxide site preference (SP) of +35.5 ± 0.6‰ was observed, consistent with the isotopic signatures of several marine nitrifying organisms. This suggests that “leakage” of intermediate hydroxylamine from nitrifier cells could react with manganese(III) in a mixed biotic-abiotic process that has so far gone unnoticed. Finally, I performed experiments using carbon-13 labelling to measure rates of anaerobic oxidation of methane (AOM) in cold seep sediments collected at Cascadia Margin. Four forms of oxidized manganese and four forms of oxidized iron were added to treatments in order to evaluate how these altered rates of AOM. 
In contrast to previous work, the addition of metals did not increase rates of AOM above those of an unamended control, and some treatments in fact reduced them. However, energy yields for microbes using metal as an electron acceptor are higher per mole of methane oxidized than those using sulfate, so even at these lower rates, energy yields would have exceeded those of controls. Additionally, doubling times for the archaea performing AOM are long enough that the microbial community may not have been able to adapt on the timescale of the experiment. Overall, the results of this thesis illuminate the need for further study of abiotic and coupled cycling reactions when considering biogeochemical cycles.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155404</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Centralized and Decentralized Approaches to Advanced Air Mobility Traffic Management</title>
<link>https://hdl.handle.net/1721.1/155403</link>
<description>Centralized and Decentralized Approaches to Advanced Air Mobility Traffic Management
Chin, Christopher H.
Advanced air mobility (AAM) is an emerging air transportation concept that leverages new types of aircraft, such as electric vertical take-off and landing (eVTOL) aircraft, to carry passengers or cargo in urban or rural settings. Initial AAM operations will be low-volume, so traffic can be managed with existing air traffic control rules, procedures, and designated routes. However, projected levels of AAM demand will require new traffic management procedures. We focus on three key considerations of AAM traffic management. First, we desire a traffic management system that efficiently utilizes limited airspace and vertiport resources and minimizes delays. Next, given the diverse set of possible AAM applications, we desire a system that is fair across different operators and flights. Finally, we must be cognizant of the level of information sharing required from operators, as AAM operators can have preferences on when, how much, and with whom information is shared. Current concepts of operations envision a federated architecture in which multiple third-party service suppliers manage AAM traffic rather than regulatory agencies like the Federal Aviation Administration. We first consider how a single service supplier can manage traffic. To maximize efficiency, we start with a centralized optimization that requires operators to share full trajectory information. We consider the trade-off between efficiency and alternative fairness metrics, study the impact of operator preferences for fairness, and evaluate how to handle dynamic demand. We then turn to a decentralized setting since AAM operators may be unwilling or unable to share information with a central traffic manager. We develop a decentralized traffic management protocol that requires less information sharing. We show that the protocol with backpressure prioritization maximizes efficiency in one timestep, even with limited information sharing. 
We then consider federated airspace configurations where different regions of airspace can utilize different traffic management methods. We show that ideas from the decentralized protocol can be leveraged to coordinate traffic in a federated setting. The methods developed in this dissertation can help service suppliers manage AAM traffic while directly addressing many considerations of AAM operations, like operator fairness and information sharing.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155403</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the mechanics of growth: A large deformation theory for coupled swelling-growth and morphogenesis of soft biological systems</title>
<link>https://hdl.handle.net/1721.1/155402</link>
<description>Understanding the mechanics of growth: A large deformation theory for coupled swelling-growth and morphogenesis of soft biological systems
Senthilnathan, Chockalingam
Understanding the growth of soft biological systems is crucial in a wide range of applications with extensive societal consequences, such as plastic and reconstructive surgery, curbing the growth of tumors and bacterial colonies, tissue engineering of functional vascular grafts, etc. Solid tumors account for more than 85% of cancer mortality, and bacterial biofilms account for a significant part of all human microbial infections. Mechanics plays a crucial role in determining how these systems grow and acquire their shape (morphogenesis). The overarching theme of this thesis is in elucidating the mechanics of growth and morphogenesis in such soft systems, starting from universal underlying mechanisms. Growing biological systems are a mixture of fluid and solid components and increase their mass by intake of diffusing species such as fluids and nutrients (swelling) and subsequent conversion of some of the diffusing species into solid material (growth). Experiments indicate that these systems swell by large amounts and that the swelling and growth are intrinsically coupled, with the swelling being an important driver of growth. However, most existing theories for swelling coupled growth employ linear poroelasticity, which is limited to small swelling deformations, and employ phenomenological prescriptions for the dependence of growth rate on concentration of diffusing species and the stress-state in the system. In particular, the termination of growth is enforced through the prescription of a critical concentration of diffusing species and a homeostatic stress. In contrast, by developing a fully coupled swelling-growth theory that accounts for large swelling through nonlinear poroelasticity, we show that the emergent driving stress for growth automatically captures all the above phenomena. 
Further, we show that for the soft growing systems considered in this thesis, the effects of the homeostatic stress and critical concentration can be encapsulated under a single notion of a critical swelling ratio. The applicability of the theory is shown by its ability to capture experimental observations of growing tumors and biofilms under various mechanical and diffusion-consumption constraints. We further show our theory is able to model and explain morphogenesis that arises in mechanically confined growth of a wide range of systems. Specifically, it reveals the key role played by the relative timescale of volumetric growth processes (such as cell division and extracellular matrix production) to that of remodelling processes (such as cellular rearrangement and microstructural evolution) in mechanics driven morphogenesis. The key insights from our analysis could build into future work that can elucidate more complicated biological growth and morphogenesis processes such as brain lobe development and embryogenesis. Additionally, compared to generalized mixture theories, our theory is amenable to relatively easy numerical implementation with a minimal physically motivated parameter space.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155402</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ribosome Heterogeneity in Zebrafish Germ and Soma provides insight into Gene Expression during Development and Disease</title>
<link>https://hdl.handle.net/1721.1/155399</link>
<description>Ribosome Heterogeneity in Zebrafish Germ and Soma provides insight into Gene Expression during Development and Disease
Shah, Arish N.
Ribosomes are the micromachines which produce the materials composing the molecular-cellular complexity of organisms. Regulation of gene expression by the translational machinery provides a layer of control over the timing, location, and amount of any given gene product and its associated functions. Protein synthesis during vertebrate development is driven by a common pool of ribosomes of two distinct origins: subunits synthesized by the mother during oogenesis and stored in the egg, and subunits produced after fertilization by the embryo. In most organisms, these two are the same. &#13;
Recently, cell type-specific expression of two zebrafish ribosomal DNA (rDNA) genes was identified (Locati et al. 2017b). One rDNA variant located on chromosome 4 was found to be specifically expressed in eggs (the maternal type), while expression of another rDNA variant located on chromosome 5 was specific to somatic cells (the somatic type). Critically, these rRNAs vary in about 15% of their sequence. Ribosome structural heterogeneity is appealing because it may uncover ribosome-specific functionality shaping the translational control seen in development. Since the variation observed between the zebrafish rRNA types is substantial, it has the potential to affect ribosome biogenesis, structure, and function at different levels. We use this system to investigate the possibility of germ cell-specific ribosome functionalization.&#13;
This thesis contains research assessing the two rDNA gene variants, the transcribed rRNAs, and the two sets of ribosome subunits they compose. We separately characterize maternal and somatic ribosomes using 6–120 hours post-fertilization (hpf) animals. Cryo-EM structure maps of each show compositionally different, yet structurally similar assemblies. Using transgenic labeling of maternal and somatic subunits, we confirm that intersubunit compatibility allows the formation of both cognate and hybrid monosomes. We show primordial germ cells transcribe the somatic rDNA gene upon genome activation, and, unexpectedly, shift transcription to the maternal rDNA gene at 72 hpf. Lastly, we demonstrate maternal ribosome-enriched translation of germ cell-specific mRNA in vivo. Zebrafish germ cells maintain a majority of maternal rDNA gene products at all measured times.&#13;
Our findings solidify the chromosome 4 rDNA variant as a germ cell-specific rDNA gene. This work clarifies the structural, molecular, and cellular consequences of cell type-specific rRNA expression on ribosome heterogeneity. We indicate a germ cell-specific ribosome functionalization and frame the zebrafish dual rDNA gene variant system for future inquiry regarding ribosome biogenesis, translation control, and germ cell development.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155399</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Human-Mouse Neural Crest Chimeras as a Novel Model for Human Melanocyte Biology in Development and Disease</title>
<link>https://hdl.handle.net/1721.1/155398</link>
<description>Human-Mouse Neural Crest Chimeras as a Novel Model for Human Melanocyte Biology in Development and Disease
Singh, Kristin Andrykovich
Limited persistence of xenogenic cells in human-mouse interspecies chimeras is a key obstacle to overcome for the creation of disease models that allow the study of human cells in vivo. The two main barriers to donor cell survival are (1) having the capacity to respond appropriately to the xeno-environment and (2) competing with endogenous host cells. &#13;
&#13;
This work uses human-mouse neural crest chimeras, generated by E8.5 in utero injection of human ESC/iPSC-derived neural crest cells (NCCs) into the gastrulating mouse embryo, as an experimental platform to better understand the barriers restricting interspecies chimerism. Here we present approaches which focus on recapitulating the melanocyte neural crest lineage to pave the way for a novel immune-competent melanoma model:&#13;
&#13;
First, we try adapting human donor cells to the mouse host environment via expression of a single mouse receptor (c-Kit) on human donor cells to rescue an evolutionarily divergent ligand/receptor interaction required for melanoblast proliferation and survival. We find that this extends the persistence of human NCCs but does not lead to postnatal survival, suggesting c-Kit is not sufficient to rescue all aspects of melanocyte biology. In our second approach, we try to improve donor cell postnatal survival by combining a proliferative advantage with pre-lineage biasing. We do this by injecting a genetically defined human melanoma derived from primary melanocytes. We find these cells have the capacity to migrate in utero like primary mouse NCCs and contribute long-lasting donor cells to the dermis of post-natal chimeras. However, the immune-competent mouse hosts are not tolerized to human antigens. In the final approach, we address xenograft rejection as the final barrier to long-lasting interspecies chimerism. To explore whether central tolerance via human NC-derivative contribution to the chimera thymus is feasible, we develop fetal thymic organ culture (FTOC) co-cultures with human NCCs. We find that human NCCs and NCC-mesenchyme have a unique capacity to efficiently engraft onto mouse thymus ex vivo, with preliminary evidence suggesting that partial central tolerance is possible.&#13;
&#13;
The success of these approaches ultimately suggests an updated framework for understanding how human-mouse chimerism barriers fit within the Developmental Hourglass Model and informs how future human-mouse interspecies chimeras may overcome these barriers within immune-competent hosts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155398</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Breaking things so you don’t have to: risk assessment&#13;
and failure prediction for cyber-physical AI</title>
<link>https://hdl.handle.net/1721.1/155397</link>
<description>Breaking things so you don’t have to: risk assessment&#13;
and failure prediction for cyber-physical AI
Dawson, Charles Burke
Before autonomous systems can be deployed in safety-critical environments, we must be able to verify that they will perform safely, ideally without the risk and expense of real-world testing. A wide variety of formal methods and simulation-driven techniques have been developed to solve this verification problem, but they typically rely on difficult-to-construct mathematical models or else use sample-inefficient black-box optimization methods. Moreover, existing verification methods provide little guidance on how to optimize the system's design to be more robust to the failures they uncover. In this thesis, I develop a suite of methods that accelerate verification and design automation of robots and other autonomous systems by using program analysis tools such as automatic differentiation and probabilistic programming to automatically construct mathematical models of the system under test. In particular, I make the following contributions. First, I use automatic differentiation to develop a flexible, general-purpose framework for end-to-end design automation and statistical safety verification for autonomous systems. Second, I improve the sample efficiency of end-to-end optimization using adversarial optimization to falsify differentiable formal specifications of desired robot behavior. Third, I provide a novel reformulation of the design and verification problem using Bayesian inference to predict a more diverse set of challenging adversarial failure modes. Finally, I present a data-driven method for root-cause failure diagnosis, allowing system designers to infer what factors may have contributed to failure based on noisy data from real-world deployments. I apply the methods developed in this thesis to a range of challenging problems in robotics and cyber-physical systems. 
I demonstrate the use of this design and verification framework to optimize spacecraft trajectory and control systems, multi-agent formation and communication strategies, vision-in-the-loop controllers for autonomous vehicles, and robust generation dispatch for electrical power systems, and I apply this failure diagnosis tool on real-world data from scheduling failures in a nationwide air transportation network.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155397</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Behavior Models for Characterizing Trajectories within Terminal Airspace</title>
<link>https://hdl.handle.net/1721.1/155395</link>
<description>Hierarchical Behavior Models for Characterizing Trajectories within Terminal Airspace
Li, Clement
Flight trajectories in terminal airspace, ranging from highly controlled airports to untowered airports, exhibit repeatable patterns. These patterns, or behaviors, may often contain sub-patterns, suggesting that the nature of aircraft behaviors in terminal airspace is hierarchical. While current methods for identifying patterns within historical flight track data use a flat clustering approach, a hierarchical organization may be useful for better representing the multi-scale structure of trajectory patterns. At the same time, the features which distinguish between behaviors and sub-behaviors may vary across behaviors and at different levels in the hierarchy. This research proposes a hierarchical model of behaviors and develops a methodology which uses recursive clustering and feature selection for identifying a behavior tree within an airspace and applies the methodology to airspace at both general aviation and commercial airports. The hierarchical approach enables the identification of sub-behaviors with increased specificity, provides improved understandability of the clustering as a byproduct of dynamic feature selection, and also allows for a more nuanced concept of outliers. The identified behaviors are shown to reflect underlying structure present within the airspace without prior knowledge of that structure. Comparisons between different airports show that the behavior trees and underlying structure vary significantly between airports. The hierarchical behavior models are used in an example application of real-time behavior recognition and trajectory prediction. Additionally, an approach using hierarchical behavior models for identifying anomalous flights is presented.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155395</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring Compliance to the International Telecommunication Union’s Geosynchronous Orbital Assignments</title>
<link>https://hdl.handle.net/1721.1/155394</link>
<description>Measuring Compliance to the International Telecommunication Union’s Geosynchronous Orbital Assignments
Roberts, Thomas González
Satellites in orbit around the Earth obey more than just the laws of physics; a network of rules, regulations, and guidelines govern their activities, too. In geosynchronous orbit (GEO), countries coordinate their satellite missions to prevent harmful interference in the radio-frequency spectrum and ensure equitable access to the popular orbital regime. The International Telecommunication Union (ITU), a specialized agency of the United Nations, administers this coordination process via rules agreed upon by its member states and encoded in the agency’s extensive Radio Regulations. Although the Regulations have governed GEO satellite operations for more than four decades, the system aches for improvement: member states’ governments spend years in a back-and-forth filing process with the ITU in order to be issued assignments that they then regularly fail to adhere to. For the first time in the public domain, this thesis assesses GEO satellite operators’ compliance with the physical portion of their ITU assignments. By algorithmically comparing historical GEO satellite positions with corresponding ITU satellite network filings using a methodology that references only publicly available data, the study reveals that over 20 percent of satellites have been out of compliance with the ITU’s rules at any given time in recent years. Another 15 percent operate outside of their ITU orbital prescriptions, but within unused portions of the geostationary belt. Non-compliance is shown to be unbalanced across the satellite population. Older satellites are less likely to be in compliance than newer ones. Those operated by government agencies are out of compliance more often than those operated by commercial companies. Military satellites, which may be exempted from ITU coordination processes, follow the rules as often as civil government satellites. Among the largest state operators in GEO, China and the United States exhibit higher compliance rates than Russia. 
Further insights can be gleaned from individual compliance assessments for the nearly 1,000 GEO satellites included in the study group. The thesis concludes with recommendations for how space-sharing rule-makers can use the tools introduced in this work to improve the efficiency of orbital assignment processes via the mechanisms available to them at the ITU World Radio Conferences.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155394</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Optical Communication Terminals for Increasing Connectivity on Small Satellites</title>
<link>https://hdl.handle.net/1721.1/155393</link>
<description>Development of Optical Communication Terminals for Increasing Connectivity on Small Satellites
Kammerer III, William John
The development of free-space optical communication (FSOC) systems for space is key to enabling orders of magnitude more bandwidth and throughput for space communication systems. In addition, the recent developments in CubeSat technology can enable significant new small satellite capabilities. This technology has successfully been implemented and demonstrated for optical downlinks on the CubeSat platform, but work remains toward enabling lower-cost terminals as well as developing and demonstrating optical crosslink terminals. In this thesis, advancements are made in the development of CubeSat-scale optical communication terminals through two efforts: the first is hardware demonstration, and the second is modeling tools for and analysis of free-space optical communication terminals for interfacing with constellations. The first effort, hardware demonstration, involves contributions to the NASA-sponsored CubeSat Laser Infrared Crosslink (CLICK) program in designing, building, and operating optical communication terminals in space on both the CLICK-A and CLICK-B/C terminals. The 1.2U CLICK-A downlink terminal, whose goal is to establish a 10 Mbps link to a low-cost portable 28 cm optical ground station called PorTeL (Portable Telescope for Lasercom), operated in space to demonstrate key technologies for risk reduction of the CLICK-B/C crosslink terminal. We present the first results of in-space operation of a microelectromechanical systems (MEMS) fine steering mirror (FSM), used for precision pointing of the space terminal. This mirror, used within a novel optical design, enabled the terminal to correct for an average blind spacecraft pointing error of 8.494 mrad and maintain a total RMS pointing error of 0.164 mrad, equal to −0.194 dB of pointing loss of the 1.3 mrad FWHM transmit beam, after initial blind pointing error correction across three optical downlink experiments. 
Key contributions are also made for the flight build of the 1.5U CLICK-B/C terminals, whose goal is to establish a 20 Mbps intersatellite optical crosslink at separations from 25 to 580 km. The developments primarily focus on the optical bench assembly, upon which all the free-space optics used for pointing control and full-duplex communications on the terminal are mounted. Optical ground support equipment for testing the performance of the terminal is developed and utilized to confirm the as-designed 1/e² beam divergence of 0.120 mrad has been achieved in a prototype assembly of the optical bench. The second modeling effort involves the development of orbit and link modeling tools for designing CubeSat-scale optical communication terminals to communicate with FSOC-enabled proliferated low Earth orbit (pLEO) constellations. In addition to the physical aspects of the laser beam pointing, the architectures used for delivery of CubeSat-collected data to its final destination can significantly impact its utility. An interesting new architecture has recently been enabled with the advent of pLEO constellations, whose networking infrastructure can be implemented as a relay and downlink communication architecture with the potential to dramatically increase link availability time for small satellites. While this infrastructure provides a potential new architecture for satellites in LEO to use for communication, the link dynamics between the satellites that comprise the constellation and external LEO satellites are not intuitive or well studied. The modeling tools developed are used to analyze the link dynamics between satellites in these constellations and external (non-constellation) satellites in typical orbits used by Earth observation missions. This analysis determines the geometric link availability and calculates how long windows of link availability persist. 
The pLEO constellations analyzed include the first generation of SpaceX Starlink, Amazon Project Kuiper, OneWeb, and the Space Development Agency Proliferated Warfighter Space Architecture Tranche 1 Transport Layer. Results from the modeling show which constellation designs are better suited for this type of external link architecture, with certain constellations enabling continuous link availability for external satellites in both sun-synchronous and mid-inclination orbits at range and lateral velocity limits suitable for CubeSat-scale terminals. A link modeling tool is developed to determine the driving design choices for this CubeSat-scale optical crosslink terminal. The results from these orbit and link modeling tools are used in the development of link budgets that support the feasibility of CubeSat-scale optical communication terminals to communicate with FSOC-enabled pLEO constellations. The technological advancements detailed in this thesis encompass the in-space demonstration of technology to reduce pointing loss, optical subassembly design validation for an optical crosslink demonstration, and the development of link modeling capabilities for a novel optical data relay architecture within the field of CubeSat communications. These developments contribute to the further deployment of CubeSat-scale optical communication terminals for both downlinks and crosslinks. This technology has the potential to significantly increase connectivity for CubeSat-scale spacecraft and revolutionize spacecraft operations in low Earth orbit.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155393</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small-Body and Heliophysics Missions using Hybrid Low-Thrust Propulsion</title>
<link>https://hdl.handle.net/1721.1/155391</link>
<description>Small-Body and Heliophysics Missions using Hybrid Low-Thrust Propulsion
Miller, Daniel
Throughout the history of spaceflight, new missions and capabilities have been enabled by the development of increasingly efficient and powerful propulsion systems. However, all of these systems, from the earliest chemical engines to modern electric thrusters, require propellant, thereby reducing the mass budget available for a payload. By harnessing the momentum of reflected photons of sunlight, solar sails offer a propellant-free alternative but are limited by attitude restrictions and their low thrust. Their improvement has also been inhibited by current knowledge of both materials science and structural engineering. In this dissertation, an assessment of a hybrid propulsion system is presented that maximizes the positive traits of its two constituent subsystems. By augmenting solar electric propulsion (SEP) with a solar sail, a spacecraft may be created with lower propellant consumption than SEP alone and greater thrust than a pure sailcraft, while not necessitating the technical development of the most ambitious proposed solar sails. To conduct this study, trajectories are generated to potential heliophysics and small-body targets: high-inclination, heliocentric orbits and interstellar objects (ISOs). For the former, detailed mass budgets are created and a trade study of subsystem size versus mass is conducted to identify the performance necessary to produce a net positive change in launch mass. For the latter, six spacecraft propulsion systems and four launch vehicles are considered in a broad study of mission viability using two separate databases of synthetic target ISOs. The ability of the hybrid low-thrust propulsion configuration to produce lower mean arrival velocities than more conventional alternatives is then determined. In both cases, nonidealized power and propulsion models are used to improve upon the preexisting literature and make a more accurate assessment of the technology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155391</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>YAP and TAZ have Functionally Redundant Roles in Uveal Melanoma</title>
<link>https://hdl.handle.net/1721.1/155388</link>
<description>YAP and TAZ have Functionally Redundant Roles in Uveal Melanoma
Lamboy Rodríguez, Swanny A.
Uveal Melanoma (UM) is the primary ocular malignancy in adults. The primary tumor is treatable, but 50% of patients develop fatal metastases. Most UM are driven by activating mutations in the heterotrimeric G protein alpha subunit paralogs GNAQ or GNA11, whose main downstream effectors are MAPK signaling and the transcriptional activator YAP. Recent zebrafish work established the importance of YAP and de-emphasized the role of MAPK in UM. Here we show that deletion of yap has no significant effect on the incidence, or kinetics, of GNAQQ209L-driven zebrafish UM. Additional experiments revealed the presence of nuclear Taz in the yap null tumors. Our data suggest that this reflects functional redundancy between yap and its paralog, taz, either of which can efficiently drive UM. Furthermore, we show that the tumorigenic effects of YAP and TAZ are TEAD-dependent. To determine the human relevance, multiple YAP- or TAZ-deficient clones were generated for two human UM cell lines, Mel202 and MP41. Deletion of either protein had no consistent deleterious effects on cell survival or proliferation across the two cell lines in vitro. Moreover, deletion of YAP or TAZ did not prevent tumor formation in mice after intracardiac injection, and the clones show high liver tropism, modeling human UM metastases. The liver tumors displayed nuclear YAP and/or TAZ, as appropriate for their genotype, and only low-level, heterogeneous staining for phospho-ERK. We conclude that YAP/TAZ signaling plays the dominant role in both zebrafish and human UM, but most tumors can survive without YAP or TAZ due to the functional redundancy of these two proteins.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155388</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Common Cellular Response to Broad Splicing&#13;
Perturbations</title>
<link>https://hdl.handle.net/1721.1/155387</link>
<description>A Common Cellular Response to Broad Splicing&#13;
Perturbations
Varineau, Jade E.
Cells are constantly experiencing stress that arises from external environmental factors or internal dysfunction. Despite this, cells are remarkably robust, and possess organized cellular response pathways to adapt to stress. Cellular stress responses to disruptions in many steps of the central dogma - from DNA synthesis to post-translational protein processing - are well classified. However, we have yet to establish a unifying mechanism describing how cells respond to disruptions in the process of mRNA splicing. Here, I demonstrate that a p53-stabilizing Mdm2 alternative splicing event is a common feature that arises under broad splicing perturbations. I then demonstrate that the resulting p53 stabilization propagates downregulation of metabolic transcripts. Furthermore, I show that this metabolic alteration is relevant to tissue-specific disorders that arise due to mutations in splicing factors, and other disorders of aberrant p53 stabilization, more broadly. Together, this work elucidates common components of a cellular response to broad splicing perturbations that are similar to other cellular stress responses, and lends insight into molecular mechanisms of splicing-associated developmental disorders.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155387</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Motion of Solid Objects Immersed in Viscoelastic Fluids</title>
<link>https://hdl.handle.net/1721.1/155386</link>
<description>Modeling the Motion of Solid Objects Immersed in Viscoelastic Fluids
Joens, Mary Agnes
Flows of viscoelastic fluids around immersed objects have long been a subject of interest in both experimental and theoretical studies. Such flows are used as benchmarking problems for validation of advanced numerical methods and the predictions of equations of state, as model systems for larger industrial processes, and as coarse-grained models for the movement of microorganisms and other active micro-scale objects (often referred to as ``microswimmers'') through biofluids and other viscoelastic media. Additionally, a better understanding of such flows offers the potential to improve the ways in which microrheology measurements are made and interpreted. This thesis aims to develop a flexible, adaptable framework for describing the motion of geometrically simple objects in weakly elastic fluids by adapting well-established and robust analytical methods. &#13;
&#13;
The first part of this thesis focuses on describing the motion of single spheres immersed in weakly viscoelastic fluids moving in arbitrary, time-dependent one- and two-dimensional trajectories. We show how perturbation methods can be used in conjunction with the Lorentz reciprocal theorem to determine a general relationship between the particle trajectory and the force exerted on it by the surrounding fluid at low Weissenberg numbers, as well as how an inverse relationship can be determined and used to analyze the trajectory of a particle propelled by some external force. Potential applications for this solution methodology are explored in detail, including the development of a framework for analyzing data from active microrheology experiments in the weakly nonlinear regime and the design of different forcing protocols for externally directed spherical microswimmers. &#13;
&#13;
The next part of this thesis focuses on modeling an independently propelled force- and torque-free microswimmer composed of two counterrotating spheres of differing radii. Swimmers of this type, while ineffective in Newtonian fluids, are capable of propelling themselves through nonlinear viscoelastic fluids due to the imbalance of normal stress differences induced in the fluid by the rotating asymmetric body. The propulsion speed of such swimmers in a variety of configurations is determined using perturbation methods and an application of the reciprocal theorem. The impact of varying both the details of the swimmer geometry and the rheological properties of the surrounding fluid on the propulsion speed is described in detail. &#13;
&#13;
The final part of this thesis focuses on the modeling and analysis of a long, slender mechanical resonator in both Newtonian and weakly viscoelastic fluids for use as a continuously deployed rheometer in industrial settings. Much like the motions of spherical particles discussed in the first parts of this thesis, the vibration of an immersed thin cantilever beam is another canonical problem in fluid mechanics, and we again show how well-established methods for analyzing this system can be adapted to better capture the influence of viscoelasticity. Ultimately, we leverage a derived relationship between the free-end displacement of a cantilever beam undergoing some known, externally driven vibration and the hydrodynamic force exerted on it by the surrounding fluid to accurately determine rheological properties of several fluids using both optical and electrical measurements of the free-end displacement of a real mechanical resonator at its resonance frequency.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155386</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Property Prediction and Molecular Discovery through Multi-Fidelity Deep Learning and Computational Chemistry</title>
<link>https://hdl.handle.net/1721.1/155385</link>
<description>Optical Property Prediction and Molecular Discovery through Multi-Fidelity Deep Learning and Computational Chemistry
Greenman, Kevin P.
Optical properties are crucial for the design of molecules for numerous applications, including display technologies and biological imaging. The accurate prediction of these properties has been the subject of decades of work in both physics-based approaches and statistical modeling. Recently, large datasets of both computed and experimental optical properties have become available, along with the advent of powerful deep learning approaches capable of learning meaningful representations from these large datasets. This thesis presents new approaches for predicting optical properties by fusing the experimental and computational data in multi-fidelity models that achieve greater accuracy and generalizability than previous methods. Additionally, it conducts a thorough benchmark of various strategies for handling multi-fidelity data to inform the modeling choices of future practitioners working with optical properties and beyond. Despite the recent increase in available optical property data, the near-infrared (NIR) region of the spectrum remains data-sparse despite its promise in many applications. This thesis demonstrates the shortcomings of existing methods for predicting optical properties in this region of chemical space and recommends best practices for future research in this area. Finally, this thesis highlights successful usage of data-driven optical property prediction for the discovery of novel molecules for specific applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155385</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Deep Learning to Understand and Design Heterogeneous Materials</title>
<link>https://hdl.handle.net/1721.1/155384</link>
<description>Using Deep Learning to Understand and Design Heterogeneous Materials
Yang, Zhenze
The drive to develop materials with superior performance and multifunctional capabilities has underscored the importance of precisely controlling material heterogeneity, establishing it as a pivotal component of materials science and engineering. From a compositional standpoint, the materials-by-design paradigm incorporates multiple constituent phases to combine the strengths of each individual component, giving rise to a general concept known as the "composite", which applies across scales from the microscale to the macroscale. Introducing heterogeneity into materials can also be achieved by integrating structural variations into their design. By altering topological configurations or embedding structural defects, there exists substantial potential for tailoring materials properties and behaviors. However, the compositional and structural heterogeneity in materials complicates the calculation of their properties and extensively expands their design space. As a result, it poses challenges to conventional computational and experimental methods, limiting their effectiveness in fully exploring and exploiting the potential of heterogeneous materials.&#13;
&#13;
In this dissertation, we aim to address these challenges by combining multiscale modeling techniques and machine learning (ML) models to accelerate property calculation and design of heterogeneous materials. We begin by developing artificial intelligence (AI)-based surrogate models for building multiscale "structure-to-physical field" linkages. At the continuum level, our work highlights the prediction of strain/stress fields in complex hierarchical composites through a conditional generative adversarial network, bypassing conventional numerical simulations such as the finite element method (FEM). At the atomic scale, we utilize a graph neural network-based method to bridge structural defects and the distribution of atomic properties in mesoscale crystalline solids. Compared to traditional atomistic molecular dynamics (MD) simulations, our proposed method demonstrates enhanced efficiency and broad applicability.&#13;
&#13;
Apart from addressing forward problems (from structure to property), we also showcase the utility of AI-based methods in addressing the notoriously challenging inverse problem (from property to structure). These inverse problems, characterized by limited information from observed data, present significant challenges due to the absence of governing equations or constitutive relations that can be directly solved with multiscale modeling. We introduce a framework that integrates multiple deep learning (DL) models to tackle a typical inverse problem efficiently, circumventing the need for costly iterative methods and providing new solutions to ill-posed inverse problems where multiple solutions may exist for a single observation.&#13;
&#13;
To further design heterogeneous materials, we employ high-throughput screening techniques to streamline the generation and testing of materials and structures of interest. Our investigations encompass a wide range of materials systems, from 3D graphene-based foams showcasing structural heterogeneity to polypeptide self-assemblies exhibiting compositional heterogeneity. We leverage both atomistic and coarse-grained (CG) MD simulations to generate thousands of different data points for each material and train ML or DL models with these created datasets. This high-throughput approach automates the creation of nanoscale structures, facilitates the assessment of their mechanical properties, and accelerates the identification of promising candidates. We believe that our approach not only significantly accelerates the pace of discovery and optimization in materials science but also opens new avenues for the creation of innovative materials with tailored properties, marking a pivotal advancement in the field of heterogeneous material design.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155384</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic-Scale Design at the 2D/3D Interface using Electron Microscopy</title>
<link>https://hdl.handle.net/1721.1/155383</link>
<description>Atomic-Scale Design at the 2D/3D Interface using Electron Microscopy
Reidy, Kate
Well-controlled nanostructures and their interfaces with surrounding materials are at the core of our most advanced technologies, from superconducting qubits to emerging catalysts. This thesis focuses on elucidating structure, growth, and properties of technologically relevant interfaces between 2D materials and 3D nanoislands, using advanced electron microscopy and spectroscopy techniques. 2D materials represent promising candidates for use in emerging quantum and energy applications, particularly if interfaced in a well-controlled manner with current 3D technological materials, such as conventional metals or semiconductors. At relevant length scales the nanomaterial properties can be altered by the placement of even a single atom or defect in the periodic arrangement of atoms, necessitating the development of new techniques for creating and understanding atomically-precise interfaces. &#13;
&#13;
We begin by proposing general insights for efficient coupling of 2D/3D interfaces at the atomic scale by examining key parameters of growth, including the role of temperature, defects, surface reconstructions, interface chemistry, and thermodynamic equilibrium shapes. We demonstrate how 2D/3D heterostructures grown in ultrahigh vacuum lead to epitaxial, faceted nanoscale islands with ultra-low defect density interfaces. Next, we describe how the interface structure influences local electronic and excitonic properties. We observe reproducible moiré periodicities which vary the electronic charge density, and explain the emergence of an excitonic peak related to the 2D/3D interface via dielectric screening by the 3D contact. We discuss how such understanding of the structure-property-performance relationship allows versatile design of low-dimensional heterostructures for next generation nanoscale devices. &#13;
&#13;
Finally, we extend beyond the 2D/3D metal system to explore further combinations of materials and modes of complex heterostructure design. We fabricate ultrathin multi-element lateral and vertical heterostructures closely linked to future technological integration, and describe how emerging in situ electron microscopy techniques to observe nucleation and growth with high spatial and temporal resolution can unlock fundamental understanding of kinetic and thermodynamic mechanisms, such as growth rates, diffusion phenomena, phase transformation kinetics, strain relaxation mechanisms, and defect formation. The observation of such atomic-scale processes allows us to develop a set of fundamental design rules for building ‘designer materials’ atom-by-atom with tailored properties.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155383</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smoothed Online Learning: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/155382</link>
<description>Smoothed Online Learning: Theory and Applications
Block, Adam B.
Many of the algorithms and theoretical results surrounding modern machine learning are predicated on the assumption that data are independent and identically distributed. Motivated by the numerous applications that do not satisfy this assumption, many researchers have been interested in relaxations of this condition, with online learning being the weakest such assumption. In this setting, the learner observes data points one at a time and makes predictions, before incorporating the data into a training set with the goal of predicting new data points as well as possible. Due to the lack of assumptions on the data, this setting is both computationally and statistically challenging. In this thesis, we investigate the statistical rates and efficient algorithms achievable when the data are constrained in a natural way motivated by the smoothed analysis of algorithms. The first part covers the statistical rates achievable by an arbitrary algorithm without regard to efficiency, covering both the fully adversarial setting and the constrained setting in which improved rates are possible. The second part of the thesis focuses on efficient algorithms for this constrained setting, as well as special cases where bounds can be improved under additional structure. Finally, in the third part we investigate applications of these techniques to sequential decision making, robotics, and differential privacy. We introduce a number of novel techniques, including a Gaussian anti-concentration inequality and a new norm comparison for dependent data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155382</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>First explicit reciprocity law for unitary Friedberg—Jacquet periods</title>
<link>https://hdl.handle.net/1721.1/155381</link>
<description>First explicit reciprocity law for unitary Friedberg—Jacquet periods
Corato Zanarella, Murilo
In the early 2000's, Bertolini and Darmon introduced a new technique to bound Selmer groups of elliptic curves via level raising congruences. This was the first example of what is now termed a "bipartite Euler system", and over the last decade we have seen many breakthroughs on constructing such systems for other Galois representations, including settings such as twisted and cubic triple product, symmetric cube, and Rankin--Selberg, with applications to the Bloch--Kato conjecture and to Iwasawa theory.&#13;
&#13;
This thesis studies the case of Galois representations attached to automorphic representations on a totally definite unitary group U(2r) over a CM field which are distinguished by the subgroup U(r) x U(r). We prove a new ``first explicit reciprocity law'' in this setting, which has applications to the rank 0 case of the corresponding Bloch--Kato conjecture.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155381</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering cytokine immunotherapies via cell surface targeting</title>
<link>https://hdl.handle.net/1721.1/155380</link>
<description>Engineering cytokine immunotherapies via cell surface targeting
Santollani, Luciano
Cancer immunotherapy targets immune cells to trigger a highly specific, long-lasting anti-tumor response. With the clinical success of immune checkpoint blockade and the development of promising next-generation agents, immunotherapy is steadily growing as a key pillar of the oncology clinic alongside surgery, radiation, and chemotherapy. Cytokines, endogenous regulators of immune responses, have long been promising immunotherapy candidates due to their innate ability to modulate lymphocyte behavior. However, translation of cytokines as systemically administered immunotherapies has been severely limited by on-target/off-tissue toxicity. One approach to overcome this challenge is to engineer cytokines for intratumoral retention following local administration to isolate their activity to on-target tissue. In this thesis, we explore an immune cell-based localization strategy by designing, evaluating, and optimizing antibody-cytokine fusions targeting the ubiquitous leukocyte receptor CD45.&#13;
&#13;
First, we engineer and profile an αCD45-IL15 fusion that exhibits significantly diminished receptor-mediated internalization relative to its wild-type counterpart. This extended surface half-life augments downstream pSTAT5 induction and enables both cis and trans signaling between lymphocytes. We demonstrate that this enhanced cell-surface biology is conserved when the approach is applied to another pro-inflammatory cytokine, IL-12. Preliminary experiments additionally suggest conserved internalization behavior between mouse and human CD45. Intratumoral αCD45-cytokine administration at specified doses leads to decoration of leukocytes in the tumor and tumor-draining lymph node (TDLN) while sparing systemic exposure. Biodistribution experiments suggest dose-dependent drainage of CD45-targeted proteins from the tumor through the TDLN and into systemic circulation, allowing for compartment-specific targeting.&#13;
&#13;
In the second part of the thesis, we develop and deeply characterize a two-dose sequential cytokine therapy termed αCD45-Cyt that safely elicits profound anti-tumor immunity. In this paradigm, a single dose of αCD45-IL12 followed by a single dose of αCD45-IL15 is able to eradicate both treated tumors and untreated distal lesions in multiple syngeneic mouse tumor models. Mechanistically, the improved intratumoral and nodal retention driven by CD45 targeting enabled reprogramming of tumor specific CD8+ T cells in the TDLN to exhibit an anti-viral transcriptional signature. Finally, we discuss preliminary data and plans for translating αCD45-Cyt therapy. Altogether, this thesis highlights the power of targeting host immune cells for use in immunotherapy and more broadly discusses the ability of multi-receptor targeting to elicit new signaling biology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155380</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conformal welding of random surfaces from Liouville theory</title>
<link>https://hdl.handle.net/1721.1/155379</link>
<description>Conformal welding of random surfaces from Liouville theory
Yu, Pu
Liouville quantum gravity (LQG) is a natural model describing random surfaces, which arises as the scaling limit for random planar maps. Liouville conformal field theory (LCFT) is the underlying 2D CFT that governs LQG. Schramm-Loewner evolution (SLE) is a random planar curve, which describes the scaling limits of interfaces in many statistical physics models. One of the deepest results in random geometry, discovered by Sheffield (2010), is that SLE curves arise as the interfaces under conformal welding of LQG surfaces. In this thesis, we present some new results on conformal welding of LQG surfaces as well as their applications to the theory of SLE. We first define a three-parameter family of random surfaces in LQG which can be viewed as the quantum version of triangles. Then we prove the conformal welding result of a quantum triangle and a two-pointed quantum disk, and deduce integrability results for chordal SLE with three force points. The second main result concerns the conformal welding of multiple LQG surfaces, where under several scenarios, we prove that the output surfaces can be described in terms of LCFT, and the random moduli of the surface are encoded in terms of the partition functions for the SLE curves. The third part is about the conformal welding of quantum disks with forested boundary, where we prove that this conformal welding gives a two-pointed quantum disk with an independent SLEκ for κ ∈ (4,8). We further extend to the conformal welding of multiple forested quantum disks, where as an application, for κ ∈ (4,8), we prove the existence of the multiple SLE partition functions, which are smooth functions satisfying a system of PDEs and conformal covariance. This was open for κ ∈ (6,8) and N ≥ 3 prior to our work. The conformal loop ensemble (CLE) is a random collection of planar loops which locally look like SLE. For κ ∈ (4,8), the loops are non-simple and may touch each other and the boundary. 
As a second application, we derive the probability that the loop surrounding a given point in the non-simple conformal loop ensemble touches the domain boundary.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155379</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of fate specification in stem cells of the planarian Schmidtea mediterranea</title>
<link>https://hdl.handle.net/1721.1/155376</link>
<description>Regulation of fate specification in stem cells of the planarian Schmidtea mediterranea
Park, Chanyoung
Regeneration requires mechanisms for producing a wide array of cell types. Neoblasts are stem cells of the planarian Schmidtea mediterranea that undergo fate specification to produce over 125 adult cell types. Fate specification in neoblasts can be regulated through expression of fate-specific transcription factors. We utilize multiplexed error-robust fluorescence in situ hybridization (MERFISH) and whole-mount FISH to characterize fate choice distributions of stem cells within planarians. Our analyses reveal that fate choices are frequently made distant from target tissues and in a highly intermingled manner, with neighboring neoblasts frequently making divergent fate choices for tissues of different location and function. These patterns were present across the animal, and at wound sites during regeneration. Furthermore, we find that post-mitotic progenitors are able to migrate to distant tissue targets and to incorporate into tissues within anatomical regions devoid of neoblasts. We propose that anatomical pattern formation is driven not by the location of fate choice, but through the migratory assortment of post-mitotic progenitors from mixed and spatially distributed fate-specified stem cells. &#13;
&#13;
Animal regeneration requires the ability to accurately replace missing and damaged tissues after injury. The planarian Schmidtea mediterranea is capable of whole-body regeneration, even from small fragments, and is able to regenerate all adult tissues. Regeneration in planarians is driven by a pluripotent population of adult stem cells called neoblasts that gives rise to all adult tissues of the animal. Here we assess regeneration after the specific ablation of body pigment cells and find that loss of pigment cells does not elicit a proliferative neoblast response. Quantification of new cell incorporation dynamics by F-ara-EdU labeling and Smedwi-1 protein perdurance reveals that incorporation of new cells remains unchanged during depigmentation and regeneration. In regenerating animals, a decrease in the magnitude of pigment cell turnover allows for the accumulation and restoration of pigment cell numbers during regeneration. Furthermore, we demonstrate that exposure to UV-B radiation does not accelerate regeneration, indicating that potential loss of pigment cell function is not a trigger to initiate tissue-specific regeneration. Taken together, our data suggest a model in which body pigment cells can regenerate passively without the need for an active proliferative response.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155376</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation of TorsinA Interaction Partners</title>
<link>https://hdl.handle.net/1721.1/155375</link>
<description>An Investigation of TorsinA Interaction Partners
Hernandez, Victoria J.
TorsinA is an AAA+ (ATPases associated with a variety of cellular activities) protein that is implicated in the neuromuscular disorder DYT-TOR1A early-onset isolated dystonia. DYT-TOR1A is a heritable form of dystonia characterized by involuntary twisting movements and postures that arise during adolescence. A glutamate deletion towards the C terminus of TorsinA leads to DYT-TOR1A by disrupting TorsinA’s ability to interact with its ATPase activators Lamina Associated Polypeptide 1 (LAP1) and Luminal Domain Like LAP1 (LULL1), rendering TorsinA catalytically inert. TorsinA’s precise role in DYT-TOR1A remains elusive in large part because its function is unknown; a major impediment in understanding TorsinA’s function is the lack of any identified substrate of TorsinA. Here, we performed a TorsinA pulldown from mammalian cells to re-examine TorsinA’s interaction partners. We identified Calnexin, a lectin chaperone in the endoplasmic reticulum (ER), as the most abundant protein associated with TorsinA. Prior studies had identified Calnexin as a binding partner of TorsinA, assuming that Calnexin acts as a folding chaperone for TorsinA. We chose to investigate the interaction between TorsinA and Calnexin in further detail by studying the elements of TorsinA that are necessary for Calnexin binding. We found that while TorsinA N-glycosylation is required for Calnexin binding, terminal mono-glucosylation is not. This finding deviates from Calnexin’s interactions with its canonical substrates, as Calnexin’s lectin domain specifically recognizes mono-glucosylated N-glycans. Furthermore, we found that TorsinA remains associated with Calnexin 3 hours following translation inhibition. This finding again deviates from the Calnexin substrate model, as Calnexin preferentially binds newly synthesized proteins. 
Therefore, we conclude that the interaction between TorsinA and Calnexin likely has functional significance unrelated to TorsinA biogenesis, and that the two proteins may function as co-chaperones. Understanding the function of TorsinA in the context of Calnexin could bring us closer to identifying a substrate, or substrates, of TorsinA, thereby illuminating TorsinA’s role in DYT-TOR1A.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155375</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure and Behavior of Plate Boundary Regions Through the Wilson Cycle</title>
<link>https://hdl.handle.net/1721.1/155374</link>
<description>The Structure and Behavior of Plate Boundary Regions Through the Wilson Cycle
Molitor, Zachary
This thesis explores the geochemical and geophysical properties of plate boundary regions in the Atlantic, East Africa, the New England Appalachians, and subduction zones around the Pacific Ocean. Chapter 1 presents geochemical constraints on the extent of enriched mantle from upwelling mantle plumes relative to the observed extent of topographic swells related to mantle flow. It builds on this data by presenting a new geophysical model of mantle flow around mantle plumes that constrains the viscosity structure of the upper mantle and emphasizes the role of dynamic pressure from flowing mantle in the generation and maintenance of plume swell topography. Chapter 2 presents new experimental constraints on subduction zone melts at 2.4 GPa and temperatures representative of conditions near the top of the subducting slab in the mantle wedge. Our experimental constraints support existing hypotheses that erupted primitive high-magnesian andesites are produced through mantle melting and mixing of melts in the mantle wedge, while also presenting novel constraints on the concentration of water that can be maintained in glass during quenching. Chapter 3 presents a field-based study of low melt fraction migmatites in central New Hampshire. In it, we utilize a unique approach, based on the compaction lengthscale, to calculate the shear viscosity of the migmatite during deformation associated with the Acadian-Neoacadian orogeny and the presence of an orogenic plateau. Chapter 4 presents a detailed macro- and microscale analysis of structures and deformation in southern New England related to contemporaneous strike-slip conjugate faulting in the upper crust. In it, we present new electron backscatter diffraction (EBSD) data and in situ trace element and U-Pb isotopic compositions for monazite and titanite. 
These datasets provide quantitative constraints on the style and conditions of deformation in the weak middle crust beneath an orogenic strike-slip conjugate shear system (in the upper crust). Furthermore, this data constrains the late Paleozoic stress field in New England and the kinematics of collision between Gondwana and Laurasia.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155374</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-based and data-driven modeling of multi-active material electrode batteries</title>
<link>https://hdl.handle.net/1721.1/155373</link>
<description>Physics-based and data-driven modeling of multi-active material electrode batteries
Liang, Qiaohao
As the design of single-component battery electrodes has matured, the battery industry has turned to hybrid electrodes with blends of two or more active materials to enhance battery performance. Leveraging the best properties of each material, these multi-active material electrodes open a new and complex design space that could be more efficiently explored through physics-based simulations, leading to a growing demand for improved battery simulation frameworks capable of accounting for parallel reactions and diffusion pathways, phase transformations, multi-scale heterogeneities, and interactions between individual active materials. However, existing open-source battery simulation frameworks based on porous electrode theory are tailored for single-component electrodes and cannot meet these requirements without extensive modification by users. &#13;
&#13;
In this thesis, I first introduce the implementation of Hybrid Multiphase Porous Electrode Theory (Hybrid-MPET), a newly developed open-source and modular simulation software for batteries with multi-active material electrodes. Building upon the theoretical foundations of the reaction kinetics, transport phenomena, thermodynamics, and electrochemistry of lithium-metal and lithium-ion batteries, Hybrid-MPET models are capable of accurately simulating solid solution, conversion, and multiphase active materials in the hybrid porous electrode at intra-particle and inter-particle scales.&#13;
&#13;
To demonstrate the practicality of this new framework, I next present a series of Hybrid-MPET models for different commercial battery applications with multi-active material electrodes to predict battery performance and explain experimental phenomena. I primarily focus on validating physics-based models against testing datasets of multi-active material electrode batteries powering Medtronic's implantable cardioverter-defibrillators (ICDs). I present a many-particle Hybrid-MPET model for a medium-rate Li/silver vanadium oxide (SVO) battery that predicts the impact of reaction heterogeneity across particle populations on previously unexplained cell voltage transient behavior. As a natural extension, I then develop a Hybrid-MPET model for a high-rate Li/carbon monofluoride (CFₓ)-SVO battery that accurately predicts cell voltage under low-rate background monitoring, high-rate defibrillation pulsing, and post-pulse relaxation by addressing solid-phase diffusion limitations of Li⁺ in SVO. In addition to the rate dependence of active material utilization, my insights are centered around active material interaction in the form of Li⁺ redistribution: I discuss its effect in multi-active material electrode batteries under both near open-circuit equilibrium and far-from-equilibrium conditions, as well as its broader impact on cell operation protocols and design principles. &#13;
&#13;
Lastly, I explore integration of physics-based and machine learning models to more accurately predict pulsing performance of individual Li/CFₓ-SVO cells and capture cell-by-cell performance variance. While the physics-based Hybrid-MPET model still predicts nominal cell performance, machine learning models with an additive or ensemble nature can account for extra information from battery production and early testing data responsible for the observed variability in the battery pulse voltages. The interpretability of the selected machine learning models can also offer insights to support future battery production as well as clues on how to refine physics-based models.&#13;
&#13;
It is hoped that the simulation framework, models, and insights presented in this thesis will not only assist medical battery design, but also accelerate the development of future multi-active material electrode battery applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155373</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cosmos as a Uniting Pillar of Relationships: A Diné &amp; Hózhó Perspective</title>
<link>https://hdl.handle.net/1721.1/155371</link>
<description>The Cosmos as a Uniting Pillar of Relationships: A Diné &amp; Hózhó Perspective
Harvey, Alvin Donel
Outer space is a shared ancestral environment of unbounded cultural value to humanity.  Representing a common past and molding a collective future, our ethics, philosophies, technologies, and dreams of living in the cosmos are having profound effects on our lives on Earth. As access to spaceflight increases, ancient philosophical questions as well as the ethics of our behaviors and designs are returning to guide narratives, strategies, methodologies, and methods in Aeronautics and Astronautics. These complex questions are woven into the central understanding and potential solutions to our greatest challenges of climate change, space travel, and regenerative, not just sustainable, systems. Enduring and sophisticated in their methods of observation, characterization, and design are Indigenous Knowledge Systems, which have addressed these same challenges for eons. Cosmological, ontological, axiological, and epistemological differences and parallels between Western and Indigenous science and engineering are places of critical thought and shared growth. Wholistic approaches in weaving both, seeing and building from the best in each, are key to addressing shared challenges.  To this effort, this thesis seeks to answer the question of how research partnerships between Indigenous people, the Knowledge they carry, and the field of Aeronautics and Astronautics can be mutually beneficial and can be guided by methodologies and methods that create and sustain such relationships to address shared challenges. 
Motivated by ethical and legal needs and by my community responsibility to challenge, enhance, and evolve the field of Aeronautics and Astronautics, I surmised that such a place of mutual benefit can be constructed through 1) drawing from experiences of Indigenous and Western astrophilosophies coming into conflict and building from such differences a shared astrophilosophy that leads to Indigenous astroecology and Indigenous engineering, 2) constructing an understanding of Indigenous Research Methodologies and Methods in Aeronautics and Astronautics, particularly on the bridging of the two epistemologies by a mixed-methods approach. This approach is further contextualized and constructed to be operationalized through my current understanding of Diné Research Methodologies and Methods. I contend that Indigenous Knowledge informing engineering practices can be centered, not integrated, in Aeronautics and Astronautics in ways that enable further research inquiry and application, and finally 3) constructing principles of gathering and meeting with Indigenous communities, examining prior experiences with pedagogy and andragogy that center Indigenous Knowledge in research, and building from shared systems thinking and engineering to frame future work.  Throughout this thesis, I center relationality, the shared Indigenous understanding and importance of good relationships with the universe, as a fundamental principle and process of engineering from an Indigenous paradigm. In a central pursuit of seeking the knowledge to be good relatives, this work contributes to a relational approach to collaboration and research and a reaffirmation that Indigenous Knowledge is science and engineering. From my Diné perspective, I offer this thesis as a practice of good relationality, K’é, in the seeking of Hózhó, the balance and beauty to live together through the relationships made with the cosmos.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155371</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of Orc6 in ORC binding-site switching during helicase loading</title>
<link>https://hdl.handle.net/1721.1/155369</link>
<description>The role of Orc6 in ORC binding-site switching during helicase loading
Driscoll, David
DNA replication is a fundamental cellular process that enables proper maintenance of genomic integrity and cellular identity as cells divide. Initiation of bidirectional DNA replication is a key requirement for complete duplication of the genome. The first step in eukaryotic DNA replication is origin licensing, during which the origin recognition complex (ORC) coordinates loading of two Mcm2-7 hexameric helicases in opposite orientations to form the Mcm2-7 double hexamer. These Mcm2-7 double hexamers mark sites of eventual bidirectional replication initiation upon entering S-phase, when activation of these helicases and recruitment of DNA polymerases leads to new DNA synthesis. In S. cerevisiae, ORC initially binds to a high affinity site at origins of replication to load the first Mcm2-7 complex. ORC is then able to bind a secondary inverted binding site at origins to load a second Mcm2-7 in the opposite orientation to coordinate formation of the Mcm2-7 double hexamer. Previous single-molecule studies show that one ORC can recruit and load both Mcm2-7 helicases at origins. This one-ORC helicase-loading model requires a dramatic change in ORC location to transition between its primary and secondary binding site. Importantly, this must occur without release of ORC from the site of helicase loading. How this binding-site switch is mediated is currently unknown.&#13;
In this thesis, we used single-molecule Förster resonance energy transfer (smFRET) assays to study interactions between ORC and Mcm2-7 during helicase loading. We found that upon recruitment of Mcm2-7 to origin DNA, in addition to the previously described interaction between the ORC C-terminal tier and the Mcm2-7 C-terminal tier, Orc6N forms an interaction with the N-terminal tier of Mcm2-7 (Mcm2-7N). We identified elements of Orc6 required to mediate this interaction, as well as potential mechanisms of inhibition of this interaction by the regulatory kinase CDK. The kinetics of this interaction indicate that Orc6N interacts with Mcm2-7N well before ORC undergoes its binding site switch, consistent with a role of this interaction in retaining ORC at the site of helicase loading during this transition. Additionally, we demonstrate a role for Orc6 in mediating stable second Mcm2-7 recruitment. These findings emphasize the role of protein-protein interactions in enabling ORC to coordinate loading of oppositely oriented Mcm2-7 helicases to enable bidirectional replication and identify new functions for Orc6 during this process.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155369</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and Optimization of Networks in Overload</title>
<link>https://hdl.handle.net/1721.1/155364</link>
<description>Analysis and Optimization of Networks in Overload
Wu, Xinyu
Network overload occurs when the demand of network users exceeds the network capacity. The increasing gap between the growth of network demand and capacity, resulting from the surge in data-intensive machine learning applications and the slowdown in Moore’s law, has led to more frequent and severe occurrences of network overload in recent years. Severe overload induces heavy congestion with high delay and packet loss, impairing network performance. In this thesis, we develop new models and methods to gain an in-depth understanding of network overload in two aspects: (i) designing network control policies to optimize network performance; (ii) quantifying the capability of malicious routing attacks to induce network overload. We first investigate network policy design to optimize multiple network performance metrics including delay, fairness, and stability. We leverage a deterministic fluid queueing model, which regards packets as continuous flows, to overcome the bottlenecks of the classic stochastic models with discrete packets in order to optimize the three metrics. (i) Delay: We explicitly establish the sets of transmission policies that can minimize the average and maximum queueing delay in both single-hop and multi-stage switching networks. We term the policies rate-proportional policies since they require an identical ratio between the ingress and egress rates of different nodes at the same layer of the network. We further generalize them to queue-proportional policies, which asymptotically minimize queueing delay based on the real-time queue backlogs, agnostic of packet arrival rates. (ii) Fairness: We show that existing policies that balance overload in networks with unbounded node buffers may fail when buffer sizes are bounded. We propose a policy that combines Maxweight scheduling and Backpressure routing, which can reach the most balanced queue overload in networks with bounded buffers. 
(iii) Stability: We demonstrate that the introduction of bounded node buffers affects the transmission policies that can guarantee queue stability and avoid queue overload. We derive an explicit set of transmission policies that can guarantee queue stability in both single-commodity and multi-commodity networks with bounded node buffers. We then quantify the capability of network adversaries to induce network overload via routing attacks, where a subset of nodes is hijacked and the adversary can manipulate their packet forwarding. We consider two objectives of routing attacks: no-loss throughput minimization and loss maximization. The first objective attempts to minimize the network’s throughput that is guaranteed to survive, and the second objective attempts to maximize the packet loss due to link overflow given the traffic demand. We start from networks with static routing. We propose exact algorithms in general multi-hop networks for the first objective, and two approximation algorithms with multiplicative and additive guarantees in single-hop networks for the second objective. We then extend our approach to networks with dynamic routing, where nodes that are not hijacked can optimize their routing policies to maximize network throughput in response to routing attacks on hijacked nodes. We show that the two objectives are equivalent in this case, and propose an algorithm based on dynamic programming with performance guarantees when the hijacked nodes are in a chain structure or parallel to each other. We demonstrate the near-optimality of the proposed algorithms through simulations over a wide range of network settings with either static or dynamic routing control, showing their ability to evaluate the potential throughput loss due to overload under malicious routing and to identify the critical nodes that should be protected to reduce the impact of routing attacks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155364</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Targeting PRMT5 impairs cancer proliferation through splicing but promotes state shifts that enable resistance and disease progression</title>
<link>https://hdl.handle.net/1721.1/155362</link>
<description>Targeting PRMT5 impairs cancer proliferation through splicing but promotes state shifts that enable resistance and disease progression
Fowler, Colin E.
Epigenetic regulators play key roles in disease and development. Cancers can hijack developmental processes and mechanisms to grow and survive, including re-activating expression of epigenetic regulators normally confined to developing cells. One such regulator is protein arginine methyltransferase 5 (PRMT5), which methylates proteins involved in transcription, splicing, and translation. PRMT5 is expressed in a wide variety of tumor types, and preclinical studies show that PRMT5 inhibitors (PRMT5i) effectively suppress cancer cell proliferation. Despite this success, resistance to targeted therapies is inevitable. This thesis addresses the mechanism by which resistance to PRMT5 inhibition is established, the clinical consequences of resistance, and the basis for cancer’s dependence on PRMT5. &#13;
&#13;
We establish the first model of resistance to PRMT5i treatment. We show that lung adenocarcinoma (LUAD) cell lines initially respond to treatment, but resistance arises rapidly through Lamarckian induction. The resistant state is stable, but brings with it a number of collateral sensitivities, including sensitivity to the chemotherapeutic paclitaxel. Both PRMT5i resistance and collateral paclitaxel sensitivity depend on stathmin 2 (Stmn2), a microtubule regulator specifically expressed in resistant cells. This observation has direct clinical relevance, as co-treatment with PRMT5i and taxanes was highly effective at killing multiple cancer cell lines, and suppressed the emergence of PRMT5i resistance. Moreover, analysis of patient data showed that STMN2 levels serve as a biomarker to predict patient response to taxanes. &#13;
&#13;
We also addressed the clinical implications of the PRMT5i resistant state. We showed that resistance was marked by transcriptional and chromatin changes that drove LUAD into a more dedifferentiated state, mimicking late-stage disease. Accordingly, transplant experiments showed that resistant lines are more metastatic than their parental counterparts. Mechanistically, short-term PRMT5i treatment triggered widespread chromatin accessibility changes, imbuing the cells with the plasticity to sample, and select for, the chromatin state that provides PRMT5i resistance. Finally, in vivo PRMT5i treatment increased murine lung tumor grade, without decreasing tumor burden. Collectively, these data show that PRMT5i resistance in LUAD is a clear clinical possibility, and that resistance will likely be accompanied by increased disease progression. &#13;
&#13;
Finally, we explored the mechanistic bases for cancer cells’ vulnerability to PRMT5 inhibition. We focused on the prevailing view that the PRMT5-splicing axis is critical, developing a system that allows inducible depletion of a PRMT5 cofactor, CLNS1A, to specifically inhibit PRMT5-mediated methylation of the spliceosomal Sm proteins. We established that many cell lines respond to CLNS1A depletion, showing loss of Sm protein methylation and corresponding loss of cell viability, but others were inherently resistant. In the resistant lines, CLNS1A is fully dispensable, but its depletion sensitized cells to PRMT5 inhibition. Splicing analysis, comparing CLNS1A-dependent and independent lines, established that the degree of sensitivity to CLNS1A depletion reflects the level of splicing defects, and particularly the induction of detained introns. Together, these data suggest that aberrant splicing, specifically detained intron formation, underlies cancer’s vulnerability to PRMT5 inhibition.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155362</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Newforms and Spherical Characters for Unitary Groups</title>
<link>https://hdl.handle.net/1721.1/155359</link>
<description>Local Newforms and Spherical Characters for Unitary Groups
Dang, Gefei
We first prove a smooth transfer statement analogous to Jacquet–Rallis’s fundamental lemma and use it to compute the special value of a local spherical character that appears in the Ichino–Ikeda conjecture at a test vector. Then we provide a uniform definition of newforms for representations of both even and odd dimensional unitary groups over p-adic fields. This definition is compatible with the one given by Atobe, Oi, and Yasuda in the odd dimensional case. Using the nonvanishing of the local spherical character at the test vector, we prove the existence of the representation containing newforms in every tempered Vogan L-packet. We also show the uniqueness of such representations in Vogan L-packets and give an explicit description of them using local Langlands correspondence.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155359</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Likelihood-Free Hypothesis Testing and Applications of the Energy Distance</title>
<link>https://hdl.handle.net/1721.1/155358</link>
<description>Likelihood-Free Hypothesis Testing and Applications of the Energy Distance
Gerber, Patrik Róbert
This thesis studies questions in nonparametric testing and estimation that are inspired by machine learning. One of the main problems of our interest is likelihood-free hypothesis testing: given three samples X, Y and Z with sample sizes n, n, and m respectively, one must decide whether the distribution of Z is closer to that of X or that of Y. We fully characterize the problem’s sample complexity for multiple distribution classes and with high probability. We uncover connections to two-sample, goodness-of-fit and robust testing, and show the existence of a trade-off of the form mn ≍ k/ε^4, where k is an appropriate notion&#13;
of complexity and ε is the total variation separation between the distributions of X and Y. We generalize our problem to allow Z to come from a mixture of the distributions of X and Y, propose a kernel-based test for its solution, and verify the existence of a trade-off between m and n on experimental data from particle physics. In addition, we demonstrate that the family of “classifier accuracy” tests are not only popular in practice but also provably near-optimal, recovering and simplifying a multitude of classical and recent results. Finally, we study affine classifiers as a tool for estimation and testing, with the key technical tool being a connection to the energy distance. In particular, we propose a density estimation routine based on minimizing the generalized energy distance, targeting smooth densities and Gaussian mixtures. We interpret our results in terms of half-space separability over these classes, and derive analogous results for discrete distributions. As a consequence we deduce that any two discrete distributions are well-separated by a half-space, provided their support is embedded as a packing of a high-dimensional unit ball. We also scrutinize two recent applications of the energy distance in the two-sample testing literature.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155358</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Robustness of Neural Network Interatomic Potentials through Sampling Methods and Uncertainty Quantification</title>
<link>https://hdl.handle.net/1721.1/155357</link>
<description>Enhancing Robustness of Neural Network Interatomic Potentials through Sampling Methods and Uncertainty Quantification
Tan, Aik Rui
Neural network interatomic potentials (NNIPs) are a significant advancement in computational materials science and chemistry for their ability to accurately approximate the potential energy surface (PES) of atomic systems with significantly reduced computational costs compared to quantum mechanical methods. Without relying on predefined interaction parameters, NNIPs offer greater flexibility and adaptability than classical force fields, and can be used for atomistic simulations of complex materials and biological systems. However, NNIPs face inherent limitations due to their dependence on diverse training data and limited extrapolative capabilities. This thesis proposes methodologies to address these challenges through analysis of uncertainty quantification (UQ) techniques, introduction of novel data sampling strategies, and development of a structural similarity analysis algorithm to extract physical insights from diverse data sets. First, we examine the efficacy of UQ for single deterministic neural networks, demonstrating that a Gaussian mixture model-based approach can significantly reduce computational demands without sacrificing prediction accuracy and UQ reliability, although it does not significantly outperform the baseline ensemble method. Utilizing insights gained from the UQ analysis, we introduce a PES sampling technique based on adversarial attacks on predicted uncertainties, which samples atomic configurations with maximized uncertainties and mitigates the typical correlation issues associated with molecular dynamics sampling. Additionally, recognizing the limitations of the proposed adversarial sampling method, we introduce an enhanced sampling method using predicted uncertainty as collective variables (CVs) to enable more thorough exploration of under-sampled regions and to reduce confinement within local minima/maxima of energy and uncertainty landscapes. 
Finally, we propose a graph-based method to analyze structural variances in amorphous bulk systems that could be difficult to capture using conventional CVs, and yet can provide physical insights to explain the macroscopic properties of the materials. Overall, the methodologies proposed in this thesis improve the robustness and applicability of NNIPs in atomistic simulations and provide a groundwork for further advancements in this space.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155357</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable Hydrogels for Water Treatment</title>
<link>https://hdl.handle.net/1721.1/155356</link>
<description>Sustainable Hydrogels for Water Treatment
Gokhale, Devashish P.
A major challenge in water treatment today is the elimination of micropollutants, including low-concentration emerging contaminants like PFAS and heavy metals like lead. Existing technologies like activated carbon adsorption fail to eliminate these contaminants due to their chemical diversity and low concentrations (micrograms to nanograms per liter). Yet, it is important to be able to remove micropollutants from water to promote human health, comply with increasingly stringent laws and regulations, and enable technological development. Given that wastewater treatment is responsible for 5% of global greenhouse gas emissions, that space limits the size of treatment processes (such as in large cities), and that resources are constrained (in developing markets, which face the greatest water stress), it is also important to develop methods that eliminate micropollutants sustainably, in low-footprint, resource-efficient processes.&#13;
&#13;
Hydrogels (polymer gels containing a significant quantity of bound water) are an ideal platform for the development of new functional materials to bind and degrade micropollutants. Water-based chemistries and a large library of commoditized precursors allow significant chemical flexibility to design for desired performance. High porosity and water content allow rapid transport of micropollutants, reducing equipment footprint and operating costs. A principal theme in this thesis is the unified design of hydrogel materials at multiple length scales using a combination of theory, simulations, and experiments - from chemistry control to engineering scale up. Special emphasis is laid on filling market niches and addressing real-world problems by means of practical solutions throughout the technology development process.&#13;
&#13;
First, we focus on technologies to sequester micropollutants using hydrogel absorbents. Using rational design principles that combine simulations and experiments, we synthesize hydrogels bearing multiple functionalities, including chemically anchored micelles to bind organic micropollutants, and chelating agents and charged groups to bind metal cations. Hydrogel microparticle absorbents are shown to treat complex contaminated water at environmentally relevant concentrations, including in the presence of background hardness. A proof-of-concept packed bed demonstrates the scalability of this approach. In an academia-industry collaboration, these hydrogel microparticles are tuned to treat amino acid fermentation products, selectively removing impurities without eliminating the target amino acid. The hydrogels achieve performance comparable to commercial adsorbents, while simultaneously being more sustainable and versatile. We also further develop hydrogels as a platform technology for sequestration applications, preparing hydrogel capsules to hold yeast cells for water treatment, aiding in the scalability of biological treatment methods by eliminating the need for secondary steps to remove added microorganisms from water.&#13;
&#13;
Next, we present a solution to degrade organic micropollutants, based on binding single-atom iron inside hydrogel microparticles. Iron is used to catalyze a Fenton reaction, converting added hydrogen peroxide to highly reactive hydroxyl radicals that oxidize hazardous organic micropollutants into safer small molecules. An appropriate chemistry allows us to implement the Fenton reaction, which normally requires an acidic environment, loses catalyst in the form of sludge, is footprint-intensive, and suited to the destruction of relatively high concentration contaminant streams, without suffering from these traditional limitations. We demonstrate the degradation of recalcitrant micropollutants, providing a pathway to their accelerated elimination from the environment.&#13;
&#13;
Finally, we turn to the scale up and application of this technology in real-world settings. Highlighting evolving regulations and improvements in our understanding of water quality, we present a series of case studies focusing on industrial applications such as food &amp; beverage and semiconductor manufacturing, and oil and gas extraction. Of special importance here are the diverse practical considerations and market constraints that must be overcome to promote technology adoption. Within this context, we discuss how technologies like those introduced in this thesis could enable versatile and low-cost effluent treatment for regulatory compliance, and tunable and selective treatment of influents and process streams to enable high-quality manufacturing. We present a method for scaling up the manufacturing of our hydrogel materials, and discuss some recent steps taken towards the commercialization of this technology.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155356</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>In situ electron microscopy of nanomaterials dynamics in heterogeneous phase environments</title>
<link>https://hdl.handle.net/1721.1/155355</link>
<description>In situ electron microscopy of nanomaterials dynamics in heterogeneous phase environments
Lee, Serin
Engineered nanomaterials with desired properties and structures are indispensable for catalyzing the processes of reliable energy conversion and storage systems for a sustainable future. These functional nanomaterials often experience dynamic physical and chemical changes during operating cycles. Investigating these dynamics in real time makes it possible to establish structure-property-performance relationships and optimize materials design. In situ transmission electron microscopy (TEM) is one of the most powerful tools for capturing dynamic changes with high spatial and temporal resolution. More importantly, the reaction or operating conditions of the materials can be mimicked by controlling the external stimuli of in situ TEM experiments, including temperature, electrochemical biasing, and exposure to liquid and gas. In this thesis, nanomaterials dynamics are investigated using in situ TEM under the control of external stimuli, in a heterogeneous phase consisting of solids exposed to a liquid or gas environment.&#13;
First, I developed a temperature-dependent radiolysis model to explain the effect of temperature on electron beam-induced radiolysis in liquid cell TEM. Radiolysis generates radicals whose reactions drive the nucleation and growth of metal nanocrystals, and I used the model to address the temperature-dependent chemical environment and the corresponding kinetics of nanocrystal growth. The results demonstrated that combining microscopy with temperature-dependent modeling of the chemical environment can guide the analysis of thermally controlled liquid cell TEM experiments. Moreover, the approach can be extended to engineering nanocrystal structure in lab-scale synthesis, while acknowledging the differences compared with TEM experiments. Next, nanoscale electrochemistry under a controlled environment is discussed; in particular, the effects of temperature and substrate on electrochemical deposition are explained. In situ TEM results and modeling of the temporal evolution of the ion concentration demonstrated that temperature accelerates the growth rate while also controlling the transition between growth modes. When the 2D material graphene was used as the deposition substrate, transient growth and coarsening occurred alongside classical nucleation and growth during the pulse-on stage, which could be attributed to the intrinsic ability of graphene to hold charge. The results suggested that in situ TEM enables addressing the effects of electrochemical parameters and controlling nanoscale electrochemical phenomena. Finally, I applied the simultaneous acquisition of 2D projection and 3D topographic imaging in an environmental TEM (ETEM) setup to analyze the structural dynamics of supported catalytic nanoparticles during heating and gas exposure. Particle migration and coalescence dominated above an onset temperature that depends on the gas. 3D topography revealed that particles migrate both through the bulk support and across the support surface. 
Degradation of the support during particle migration was also observed in certain gas environments, and in some gas environments particle coalescence occurred via oriented attachment. These results showed that the combination of imaging modes can provide information to explain catalyst degradation during operation. &#13;
This thesis demonstrates that in situ TEM coupled with an understanding of the physical and chemical environments can provide insight into the nanostructure dynamics, which could contribute to revealing the degradation mechanisms of functional materials.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155355</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the gap between microbial and metazoan evolutionary history</title>
<link>https://hdl.handle.net/1721.1/155354</link>
<description>Bridging the gap between microbial and metazoan evolutionary history
Parsons, Chris
While microbial life originated early in Earth history and continues to underpin most modern ecosystems, its evolutionary trajectory remains murky. This is predominantly due to the dearth of diagnostic microbial fossils and biomarkers. Conversely, metazoans (animals) emerged relatively recently and have deposited a thorough archive of their existence in the rock record. The ancient shared ancestry of these two groups presents an opportunity to leverage the rich metazoan fossil record to constrain the timing and mechanisms of microbial evolution. In this thesis, I connect these two disparate groups in three different ways: 1) identification of genes with a history of recent transfer between Bacteria and Metazoa, 2) characterization of a bacterial protein family with specificity for a metazoan substrate, in this case collagen, and 3) using animal fossil age ranges as a proxy for hypothetical microbial lineages on the early Earth. I report a number of novel transfers from Bacteria into Metazoa, Opisthokonta, and Eukarya, and use two of them to constrain the ages of the bacterial phylum Bacteroidetes and the archaeal family Methanosarcinaceae.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155354</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Cellular Redox State and Biomass Synthesis</title>
<link>https://hdl.handle.net/1721.1/155353</link>
<description>Control of Cellular Redox State and Biomass Synthesis
Chang, Sarah Mary
To proliferate, tumors must synthesize sufficient biomass, such as proteins, nucleotides, and lipids. Many nutrients that produce biomass undergo oxidation reactions that require the redox cofactor NAD+ as an electron acceptor. Thus, the cellular redox state, measured by the NAD+/NADH ratio, can constrain the synthesis of oxidized biomass. This dissertation aims to uncover the determinants of the cellular NAD+/NADH ratio and how the cellular redox state governs the biosynthetic capabilities of cancer cells in response to elevated biomass demands. In serine-depleted conditions, which increase the NAD+ demand to support serine synthesis, we find that modulating the NAD+/NADH ratio proportionally alters serine synthesis rates. We uncover that some cancer cells elevate mitochondrial respiration and increase the NAD+/NADH ratio following serine withdrawal while others do not. Increasing mitochondrial respiration is sufficient to elevate the NAD+/NADH ratio and improve serine synthesis and proliferation in serine-depleted conditions. Exogenous lipid withdrawal can also elevate mitochondrial respiration and the NAD+/NADH ratio, leading to increased serine synthesis despite no change in serine demand. Together, we find that the cellular NAD+/NADH ratio is regulated by mitochondrial respiration in a cell- and environment-specific manner, impacting oxidative biosynthesis reactions to determine the proliferative capacity of cancer cells in different nutrient environments.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155353</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Labor Economics</title>
<link>https://hdl.handle.net/1721.1/155351</link>
<description>Essays in Labor Economics
Enriquez, Brandon
This thesis consists of four chapters on the economics of labor markets. The first, joint with Fidan Ana Kurtulus, evaluates the employment effects of the Japanese trade shock of the 1960s-1980s, and whether they were racially disparate. Most importantly, we test whether differential occupational exposure drove racially disparate effects. Using detailed establishment-level data and a shift-share instrumental variables design, we find that the shock caused substantial decreases in overall manufacturing employment and in Black manufacturing operator employment. We find that two-thirds of the decrease in Black operator employment (relative to that of white operators) was due to disparate occupational exposure. Disparate exposure was associated with local anti-Black prejudice. The Japan shock decreased Black income in affected areas, and had persistent effects on Black poverty and joblessness. Taken together, these results show that aggregate sector-level trade shocks can hit minority workers particularly hard when they are concentrated in exposed occupations. The second, joint with Anthony DeFusco and Maggie Yellen, documents facts about wage garnishment. Wage garnishment allows creditors to deduct money directly from workers’ paychecks to repay defaulted debts. We document new facts about wage garnishment between 2014 and 2019 using data from a large payroll processor that distributes paychecks to approximately 20% of U.S. private-sector workers. As of 2019, over one in every 100 workers was being garnished for delinquent debt. The average garnished worker experiences garnishment for five months, during which approximately 11% of gross earnings is remitted to their creditor(s). The beginning of a new garnishment is associated with an increase in job turnover rates but no intensive-margin change in hours worked. The third, joint with Damon Jones and Ernie Tedeschi, is on the employment effects of the Child Tax Credit. 
We estimate the extensive and intensive margin labor supply response to the monthly Child Tax Credit disbursed in 2021 as part of the American Rescue Plan Act. Using Current Population Survey microdata, we compare labor supply outcomes among households who qualify for varying relative increases in household income, as a result of their income level and household size. We do not find strong evidence of a change in labor supply for families receiving the credit. The results are robust to alternative labor supply models, where households respond mainly to cash on hand or to changes in the annual budget set. The final chapter, joint with David Autor, is on the employment effects of pandemic unemployment insurance. In response to the economic devastation caused by the COVID-19 pandemic, the federal government created the Federal Pandemic Unemployment Compensation (FPUC) program, which expanded unemployment insurance benefits by $600 per week. Complementary to prior work on pandemic unemployment benefits, we study the effects of FPUC on firm payrolls. We find that FPUC had large effects on small business employment and reopening. Our estimates suggest that FPUC reduced employment by approximately 1 million workers, or about 0.5 percent of firm payrolls. Our results establish that increases in the unemployment insurance replacement rate are likely to be particularly impactful for small and low-wage firms, both because these benefits have greater incentive effects for low-wage workers and because small firms cannot readily smooth out the uncertainty in filling vacancies. JEL Codes: J01, J15, J65, J22, G51
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155351</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Satellite-based Analysis and Forecast Evaluation of Aviation Contrails</title>
<link>https://hdl.handle.net/1721.1/155350</link>
<description>Satellite-based Analysis and Forecast Evaluation of Aviation Contrails
Meijer, Vincent R.
The climate impact of aviation is currently estimated to account for 3.5% of anthropogenic climate forcing when quantified in terms of effective radiative forcing. Non-CO2 impacts are thought to represent over half of this aviation-induced forcing, with contrails and contrail cirrus being the largest contributor. Contrails (short for condensation trails) are the line-shaped clouds that form as a result of the warmer, moister engine exhaust mixing with the colder ambient air. When the air is ice supersaturated, a formed contrail can persist for multiple hours and become indistinguishable from natural cirrus. Contrails interact with Earth’s energy balance by reflecting incoming shortwave radiation (a cooling effect) and absorbing outgoing longwave radiation (a warming effect), with the net effect estimated as warming. The ice supersaturated regions that allow contrails to persist are known to be horizontally wide but vertically thin, which has motivated the idea of contrail avoidance: small changes in altitude, with minimal associated fuel burn penalty, may be sufficient to avoid areas where contrails can persist. This concept has been extensively studied by means of simulations, which all indicate that the warming avoided far outweighs the costs and climate impacts of the additional fuel burn. However, all of these studies assume that the location of ice supersaturated regions is known perfectly, and they therefore ignore the impact of forecasting accuracy on the effectiveness of contrail avoidance. With recent studies indicating that numerical weather prediction models have limited ability to predict ice supersaturation, there is a pressing need to further evaluate and improve the capability of models to predict contrail persistence. Despite past and ongoing efforts, there is no objectively evaluated implementation of contrail avoidance at the time of writing. 
This doctoral thesis develops an approach to locate contrails in satellite imagery and utilizes these observations to assess existing methods for contrail prediction, as well as to provide improved short-term forecasts. Contrails are found in geostationary GOES-16 ABI imagery by use of a convolutional neural network. Once found, their altitude is estimated from this same imagery by another deep learning algorithm that was trained on contrails collocated in data from the satellite-based CALIOP LIDAR. A dataset for the purpose of contrail forecast evaluation is developed by use of an algorithm that finds the location of aircraft exhaust plumes and contrails within CALIOP data. The resulting data provides observations of persistent contrail formation at the flight-by-flight level. Existing prediction methods that use numerical weather prediction (NWP) data from the ERA5 and HRRR models are compared to a nowcasting method that utilizes the aforementioned contrail detections and altitude estimates. This comparison is performed using traditional forecast evaluation methods, as well as a novel framework developed specifically for assessing the achievable benefits of contrail avoidance. The developed contrail detection algorithm is applied to more than 100,000 GOES-16 ABI satellite images to study contrail coverage over the United States for the years 2018-2020, and how it relates to changes in flight traffic density. This effort represents the first long-term (&gt; 2 years) study of contrail coverage using satellite imagery. It is also the first extensive application of a contrail detection algorithm to geostationary satellite data, such that estimates of contrail coverage are available at a temporal resolution of 10-15 minutes. This is in contrast to previous studies that detected contrails in low-Earth orbit (LEO) satellite imagery, where a particular region is only visible twice a day. 
As a result, it is possible to directly study the diurnal patterns in contrail coverage and how these relate to flight activity. Furthermore, the year 2020 included the unprecedented reductions in flight activity caused by the COVID-19 pandemic, providing a unique opportunity to study how contrail coverage changed with large reductions in flight traffic. The contrails detected in the GOES-16 ABI imagery are then collocated with data from the CALIOP LIDAR aboard the CALIPSO satellite. The resulting dataset provides cross-sectional and altitude information on more than 3000 contrails. This dataset is used to create and assess multiple algorithms for the estimation of contrail altitude from GOES-16 ABI imagery alone. The best performing algorithm is a convolutional neural network that is first trained on CALIOP cirrus data and then fine-tuned on the contrail data. This algorithm has a root mean square error of 570 meters and a coefficient of determination (R2) of 0.76. These results suggest that image-level context is useful for estimating contrail altitude, and that training on contrail data is better than training on cirrus data alone, as was done in previous work. The altitude estimation model also provides an estimate of its uncertainty: the 95% confidence intervals (CIs) derived from the model’s output are shown to contain approximately 95% of the test data points and are narrower (2.0 km on average) than such intervals derived from flight data near the observed contrails (4.4 km on average). Finally, a method to match contrails found in CALIOP data to the flights that formed them is used to create a dataset for evaluating persistent contrail forecasts at the flight-by-flight level. This dataset is used to evaluate the performance of predictions of ice supersaturation from ERA5 and HRRR, as well as a nowcasting approach that relies on the contrail detections and altitude estimates developed here. 
The best performing nowcast approach is able to achieve higher hit rates at low false alarm rates (1 to 15%) than predictions using the ERA5 and HRRR relative humidity with respect to ice. However, the nowcasting approaches have upper limits on their hit rates (&lt; 68%) as their predictions are constrained to areas where contrails are observed. A framework to assess the benefits that can be achieved by using a particular forecasting system for contrail avoidance is developed as well. It shows that existing forecast performance metrics cannot directly be used to evaluate the suitability of a particular prediction method for contrail avoidance, as the forecast needs to be ‘correct twice in a row’: at the original and the deviated aircraft flight level. The results obtained using this new framework indicate that, depending on the relative benefits of avoiding a contrail and the costs of additional fuel burn, the nowcasting approach may lead to a more effective contrail avoidance implementation than is possible with current NWP data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155350</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How much DNA is too much?</title>
<link>https://hdl.handle.net/1721.1/155349</link>
<description>How much DNA is too much?
Barker, Juliet C.
Cells exhibit great diversity in the amount of DNA that they contain, and whole genome duplication events explain much of this diversity. Yet whole genome duplication typically obstructs the proliferation of cells in which it occurs. How cells survive and proliferate after their DNA content is increased is unknown, and is fundamental to understanding the evolutionary success of organisms and cancers. I address this unknown by identifying determinants of cell survival immediately upon whole genome duplication, using budding yeast as a model. I identify the range in number of whole genome duplications (ploidies) that cells can survive. I demonstrate that cell growth accompanies increased ploidy across this range, and that physical determinants alleviating and exacerbating cell surface stress increase and decrease the limit to viable ploidy, respectively. I also identify gene expression changes characteristic of whole genome duplication, immediately as it occurs. In part because recapitulating the physiologic effects implied by this signature does not impact the range of ploidies with which cells survive, I propose that ploidy is inherently limited by the impact that growth in cell size, which accompanies whole genome duplication, has on the integrity of the cell surface.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155349</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Positional Information in Adult Bilaterian Tissues</title>
<link>https://hdl.handle.net/1721.1/155347</link>
<description>Positional Information in Adult Bilaterian Tissues
McMann, Conor L.
How the coordination of cellular and molecular processes guides the formation of complex biological structures and patterns is a multifaceted and longstanding question. In animal development, a single cell is able to become an organism with intricate morphology and organ systems, all arranged in a predictable, repeatable formation. In regeneration, missing tissue is reformed in a manner that recapitulates that which was lost, including shape, orientation, and internal structure. These challenging biological processes require positional information, which influences spatial fate and migration decisions to generate proper tissue formation. In this work, we investigated the mechanisms of positional information across the animal kingdom through the study of different organisms in Bilateria.  &#13;
&#13;
In planarians, positional information is regionally expressed by muscle cells in the form of genes encoding signaling molecules known as positional control genes (PCGs). Positional information in the planarian body is particularly influenced by putative organizer pole cells located at the extreme ends of the anterior-posterior (AP) axis. We identified nr4A as a gene required for the localization of pole cells to the extreme ends of the AP axis. nr4A RNAi caused PCG expression to shift away from the extremes of the AP axis and led to iterative duplication of anterior and posterior anatomy. This work reveals a phenotype in which positional information and the cells that influence positional information are no longer properly coordinated, leading to a state of positional disequilibrium. &#13;
&#13;
Additionally, we sought to understand how the direction of positional information is established in the planarian body. Planarian regeneration involves a mechanism to distinguish anterior-facing wounds that require head regeneration from posterior-facing wounds that require tail regeneration. We found that RNAi of activin-2 interfered with this polarity resolution decision during regeneration, resulting in posterior-facing heads after amputation. We show that activin-2 influences asymmetric gene expression at amputations that is required for proper polarity resolution. This work reveals a role for Activin signaling in establishing proper positional information polarity during planarian regeneration. &#13;
&#13;
We then sought to investigate how positional information functioned in adult tissues more generally in Bilateria. Acoels, a sister clade to the rest of Bilateria, also display a muscle-expressed PCG system that underlies their regenerative ability. This suggests that adult positional information could be evolutionarily ancient in Bilateria and might exist in many extant clades. We characterized regional gene expression in the regenerative axolotl limb and the non-regenerative mouse limb. This analysis revealed that, while developmental signaling factors rarely recapitulate developmental expression patterns in the limb, several developmental transcription factors are indeed expressed regionally as they are in development in limbs of both organisms. Other genes that regulate ECM, transcription, and signaling also display shared regional expression in limbs of both organisms. In both axolotl and mouse, connective tissue plays a primary role in this regional expression. This work provides a comprehensive characterization of regional gene expression in the vertebrate limb and is consistent with a model wherein connective tissue-driven adult positional information is an evolutionarily ancient and well-distributed characteristic of Bilateria.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155347</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Information</title>
<link>https://hdl.handle.net/1721.1/155346</link>
<description>Essays on the Economics of Information
Corrao, Roberto
The thesis is composed of three chapters discussing the economics of information in markets and contracts. The first chapter proposes a theoretical framework that combines information design and mechanism design to analyze markets for mediation services between an informed and an uninformed party. The mediator receives compensation from the informed party and must rely on information that is voluntarily reported. We describe all the outcomes that can be induced via a mediation contract and compare the optimal outcomes when the mediator has the bargaining power (i.e., monopolistic mediation) with those when the informed party has it. The main finding is that mediation contracts often reveal more information with a monopolistic mediator, because such contracts give up some information rents to retain incentive compatibility. Unlike the conventional logic of quality under-provision for physical goods, here the attempt to capture information rents can lead to increased information disclosure. These findings shed light on the controversial matter of whether a monopolistic market for information intermediaries, such as rating agencies for financial securities, is more or less desirable than a competitive one. The second chapter studies the bounds of mediated communication in sender-receiver games in which the sender’s payoff is state-independent. We show that the feasible distributions over the receiver’s beliefs under mediation are those that induce zero correlation, but not necessarily independence, between the sender’s payoff and the receiver’s belief. Mediation attains the upper bound on the sender’s value, i.e., the Bayesian persuasion value, if and only if this value is attainable under unmediated communication, i.e., cheap talk. The lower bound is given by the cheap talk payoff. We provide a geometric characterization of when mediation strictly improves on this bound using the quasiconcave and quasiconvex envelopes of the sender’s value function. 
In canonical environments, mediation is strictly valuable when the sender has countervailing incentives in the space of the receiver’s beliefs. We apply our results to asymmetric-information settings such as bilateral trade and lobbying and explicitly construct mediation policies that increase the surplus of the informed and uninformed parties relative to unmediated communication. This chapter is the result of joint work with Yifan Dai. The third and final chapter studies a principal-agent model in which actions are imperfectly contractible and the principal chooses the extent of contractibility at a cost. If contractibility costs satisfy a monotonicity property—which is implied by costs that come from difficulties in distinguishing actions when writing the contract—then optimal contracts are necessarily coarse: they specify finitely many actions out of a continuum of possibilities. This result holds even if contractibility costs are arbitrarily small. By contrast, costs that are derived from enforcing a contract ex post affect allocations but yield complete contracts. Applying our results to a nonlinear pricing model, we study how changes in consumer demand, production costs, and informational asymmetries affect the optimally coarse set of quality options. This chapter is the result of joint work with Joel P. Flynn and Karthik A. Sastry. JEL Codes: D40, D42, D86
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155346</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/155345</link>
<description>Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
Tagliabue, Andrea
Existing robust model predictive control (MPC) and vision-based state estimation algorithms for agile flight, while achieving impressive performance, still demand significant onboard computation, preventing deployment on robots with tight Cost, Size, Weight, and Power (CSWaP) constraints. Existing imitation learning strategies that can train computationally efficient deep neural network policies from those algorithms have limited robustness and/or are impractical (requiring large numbers of demonstrations or long training times), limiting rapid policy learning once new mission specifications or flight data become available. This thesis details efficient imitation learning strategies that make policy learning from MPC more practical while preserving robustness to uncertainties. First, this thesis contributes a method for efficiently learning trajectory tracking policies from robust MPC, enabling learning of a policy that achieves real-world robustness from a single real-world or simulated mission. Second, it presents a strategy for learning from MPCs with time-varying operating points, exploiting nonlinear models, and enabling acrobatic flights. The obtained policy has an onboard inference time of only 15 &#120583;s and can perform a flip on a UAV subject to uncertainties. Third, it extends the previous approaches to vision-based policies, enabling onboard sensing-to-action with milliseconds-level latency, reducing the computational cost of vision-based state estimation, while using data from a single real-world mission. Fourth, it presents a method to reduce control errors under uncertainties, demonstrating rapid adaptation to unexpected failures and uncertainties while avoiding the challenging reward tuning/design of existing methods. Finally, this thesis evaluates the proposed contributions in simulation and hardware, including flights on an insect-scale (sub-gram), soft-actuated, flapping-wing UAV. 
The methods developed in this thesis achieve the world’s first deployment of policies learned from MPC on sub-gram soft-actuated aerial robots.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155345</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pulsed Laser Deposited Iron Garnet Thin Films for Spintronics</title>
<link>https://hdl.handle.net/1721.1/155342</link>
<description>Pulsed Laser Deposited Iron Garnet Thin Films for Spintronics
Khurana, Bharat
This thesis aims to develop materials for spintronic memories and demonstrate growth of iron garnet superlattices with layering at a sub-unit-cell scale. Iron garnet films are suitable for next-generation spintronic memories because of their low Gilbert damping and their growth with perpendicular magnetic anisotropy (PMA), which together promote fast domain wall dynamics. Additionally, selection of the rare earth ion and substitution of cations in various sublattices of iron garnets enable tuning of their saturation magnetization, magnetocrystalline anisotropy, Gilbert damping, and magnetostriction. Superlattices of garnets provide an opportunity to explore interfacial phenomena. &#13;
&#13;
We first study spin transport in platinum/scandium-substituted terbium iron garnet heterostructures. Epitaxial scandium-substituted terbium iron garnet (TbScIG) thin films were deposited on gadolinium gallium garnet [GGG, (111) orientation] substrates using pulsed laser deposition (PLD). Sc, a nonmagnetic cation, occupies up to 40% of the octahedral Fe sites. Anomalous Hall effect-like spin Hall magnetoresistance measurements were performed to determine the effect of scandium content on spin mixing conductance of the TbScIG|Pt interface. The spin mixing conductance increased significantly with an increase in Sc content. We then study bilayer films of bismuth-substituted yttrium iron garnet (BiYIG)/thulium iron garnet (TmIG) grown on garnet substrates that combine the non-zero interfacial Dzyaloshinskii–Moriya interaction (DMI) of TmIG with moderate damping between that of BiYIG and TmIG, making them useful for the development of high speed spin orbit torque driven magnetic devices. Lastly, we describe superlattices made from rare earth (RE = Tm, Tb, Lu) and bismuth iron garnets (IGs) grown by pulsed laser deposition. Superlattices of TmIG/TbIG show a composition modulation for layers as thin as 0.45 nm, less than one unit cell, and exhibit perpendicular magnetic anisotropy that is qualitatively different from the in-plane anisotropy of the solid solution (Tm,Tb)IG. BiIG/LuIG superlattices exhibit magnetic damping characteristic of the end members rather than the solid solution. Garnet superlattices may provide a playground for exploring interface physics within a vast parameter space of cation coordination and substitution.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155342</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computationally Efficient Optimization of Formation Flying Trajectories, with Solar Radiation Pressure, near Lagrange Points</title>
<link>https://hdl.handle.net/1721.1/155340</link>
<description>Computationally Efficient Optimization of Formation Flying Trajectories, with Solar Radiation Pressure, near Lagrange Points
Hettrick, Hailee E.
One of three main goals established by the National Academy of Sciences in the 2020 Astronomy and Astrophysics Decadal Survey is the investigation of exoplanets residing in habitable zones to help further the understanding of the universe by searching for extraterrestrial life. While indirect detection methods have identified a vast quantity of exoplanets in habitable zones, such methods are inadequate for characterizing exoplanets. In this manner, humanity’s search for other life in the universe requires technological advancements in space-based telescopes and onboard instruments. In particular, the characterization of exoplanets with Earth-like qualities requires imaging capabilities that exceed the current state of coronagraphs. An external occulter, commonly referred to as a starshade, flying in formation with a space-based telescope is a proposed solution to the aforementioned technological limitation. In this proposed mission, the starshade flies tens of thousands of kilometers in front of the telescope along the inertially constant line-of-sight vector with the target star, suppressing its light from the telescope’s pupil, thereby enabling imaging of the exoplanet. While the proposed telescope/starshade formation flying mission addresses the imaging capabilities required for exoplanet characterization, the addition of a second spacecraft introduces some drawbacks. Certain consequences of adding a second spacecraft are unavoidable, such as the inherent costs of production and launch. One drawback, based on initial analyses of the cost of retargeting the formation line-of-sight between imaging phases, may be minimized. The aforementioned cost embodies both time and fuel expenditures, which may adversely affect the potential science yield of the mission. 
While the conclusion from such an analysis is irrefutable when using impulsive control, limiting retargeting maneuvers to impulsive control solutions ignores the naturally occurring dynamics of the formation’s location. The telescope/starshade formation flying mission proposed in the decadal survey intends to operate in the regime of the second Lagrange point (&#119871;₂) of the Sun-Earth system. While Sun-Earth &#119871;₂ provides several advantages from an imaging standpoint, it also offers a rich solution space of naturally occurring dynamics well-known to the restricted three-body problem. This suggests that by employing Dynamical Systems Theory (DST), the telescope and the starshade can exploit fuel-free motion. Moreover, the starshade may be modeled under the influence of solar radiation pressure (SRP), which expands the domain of fuel-free trajectories it can exploit by treating SRP as a means of actuation rather than a perturbation to be corrected. Since the restricted three-body problem has no closed-form solutions, schedulers focused on high-level mission trajectory designs must repeatedly solve the equations of motion for the telescope and starshade numerically, which can quickly become computationally inefficient. To overcome this computational cost and enable the swift execution of a novel scheduler that maximizes science yield while minimizing fuel expenditures by exploiting the naturally occurring dynamics, approximate analytical solutions for both dynamical models are found using approximate analytical techniques for nonlinear systems. The ultimate product of this thesis is the novel top-down scheduler Pathfinder, which is used to demonstrate the various contributions of this work via two sample missions. 
Pathfinder approaches the problem of designing a high-level mission trajectory for the formation line-of-sight by first considering when exoplanets are observable and then using closed-form, approximate analytical solutions of the spacecraft to determine the characteristic orbit for that observation. Two sample missions, using mission parameters from NASA/JPL’s HabEx proposal, demonstrate that the mission can observe over 70 stars in a five-year mission using two different minimum retargeting times while expending less than 40% of the retargeting fuel allocated. Thus, Pathfinder, enabled by DST and the computational efficiency of the approximate solutions, demonstrates the feasibility of ensuring fuel savings while satisfying the science objectives of the mission.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155340</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cation Site Occupancy and Defect Engineering in Perovskite Heterostructures</title>
<link>https://hdl.handle.net/1721.1/155339</link>
<description>Cation Site Occupancy and Defect Engineering in Perovskite Heterostructures
Cho, Eunsoo
Complex oxide thin films and their heterostructures are quintessential materials for next-generation multiferroic and magnetoelectric devices. Cation and anion point defects collectively affect the structural, electrical, and magnetic properties of complex oxides. At the same time, such defects allow one to manipulate the behavior and tailor it to desired characteristics. Furthermore, the formation of atomic defects is highly influenced by the growth condition of these thin films. This thesis assesses how cation and oxygen stoichiometry and growth dynamics can control the material properties of single- and multi-phase perovskite oxide thin films. &#13;
&#13;
We start with the generation of ferroelectricity in an antiferromagnet LuFeO₃ through cation antisite defects and concomitant inversion symmetry breaking. We elucidate how the crystal structure and ferroelectricity depend on cation stoichiometry. We then integrate this antiferromagnet into a self-assembled nanocomposite with a ferrimagnet [formula] in order to demonstrate interfacial magnetic exchange coupling. We discuss how the growth condition affects cation site occupancy and crystal structure. Next, we describe a nontrivial self-assembly mechanism of [formula] and CoOₓ arising from the change in oxygen pressure during deposition. Electrolyte gating of the nanocomposite can modulate the strain state and magnetization by removing oxygen vacancies, expanding the pathway of magnetoelectric coupling. We then discuss a newly discovered layering of oxygen vacancies in perovskite [formula] and identify the crystal structure using both experimental and theoretical approaches. We end by investigating the electronic and magnetic properties of [formula] related to oxygen ligand holes, as well as analyzing the effects of cation substitution and off-stoichiometry in several iron garnets. &#13;
&#13;
All in all, this thesis advances the understanding of the correlation between crystal structure, atomic defects, and multiferroic and magnetoelectric properties. The observations encourage further studies of introducing and improving multiferroicity via defect engineering in underutilized material systems and exploring interfacial phenomena in various types of complex oxide heterostructures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155339</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase-and-Defect Diagrams for Polycrystalline Grain Boundary Segregation</title>
<link>https://hdl.handle.net/1721.1/155337</link>
<description>Phase-and-Defect Diagrams for Polycrystalline Grain Boundary Segregation
Matson, Thomas P.
Defect engineering provides access to a much larger range of material properties and is particularly necessary when designing any high-defect density material such as nanocrystalline (NC) alloys. Traditionally, bulk equilibrium phases have been considered in a decoupled manner from defects, such as solute and segregated atoms, dislocations, and grain boundaries. In recent years, a push has been made to treat defects as “defect states” in a manner analogous to bulk phases so they can be analyzed alongside existing bulk equilibrium phase diagrams – a treatment I refer to here as “phase-and-defect” diagrams. Segregated grain boundaries (GBs) are one such defect phase, and recent progress has indicated that spectral information, which describes the full distribution of available atomic environments, is required to rigorously understand segregated polycrystalline grain boundaries. However, models proposed prior to this work are primarily thermodynamic isotherms, which suffer from several limitations that prevent their use in the development of phase-and-defect diagrams. Existing spectral isotherms often use scalar assumptions to address solute-solute interactions, or are not atomistically informed, and have not been constructed from analytical free energy functions. For this reason, they cannot be used to construct fully spectral phase-and-defect diagrams. Furthermore, existing databases of spectral parameters contain only dilute limit information, limiting the accessibility of spectral segregation predictions at finite concentrations.  In this work, I take the following steps to address this need. First, I present a thermodynamic model that captures the spectral nature of both the segregation and solute interaction energies, and I describe an atomistic, physically motivated method to measure the full spectrum of GB solute interaction energies in a polycrystal. Then, I present the analytical framework for a spectral regular solution model of segregated polycrystals. 
I use this framework to derive a fully spectral free energy function and demonstrate how it can be used to develop a self-consistent phase-and-defect diagram which considers the bulk regular solution and the segregated polycrystalline defect state, and which shows significant improvement of the spectral model over traditional scalar representations. Finally, I develop an accelerated framework for predicting spectral solute-solute interactions, using a modified “bond-focused” local atomic environment (LAE) representation to construct descriptors for nearest neighbor pairs in the GB. I rigorously demonstrate its use for multiple binary alloys, and I then apply this accelerated framework to approximately 200 available embedded atom method (EAM) potentials to construct a large-scale database of spectral parameters for binary alloys beyond the dilute limit. This work makes accessible, for the first time, fully spectral segregation parameters at finite concentrations. Additionally, it provides a framework for incorporating those estimates into existing CALPHAD methodology, allowing the production of phase-and-defect diagrams for segregated polycrystals. In doing so, I hope that this work will improve the community’s ability to engineer stable nanocrystalline alloys and other defect states in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155337</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust spectral representations and model inference for&#13;
biological dynamics</title>
<link>https://hdl.handle.net/1721.1/155336</link>
<description>Robust spectral representations and model inference for&#13;
biological dynamics
Hastewell, Alasdair D.
Current developments in automated experimental imaging allow for high-resolution tracking across various scales, from whole animal behavior to single-cell dynamics to spatiotemporal gene expression. Transforming these high-dimensional data into effective low-dimensional models is an essential theoretical challenge to enable the characterization, comparison, and prediction of dynamics both within and across biological systems. Spectral mode representations have been used successfully across physics, from quantum mechanics to fluid dynamics, to compress and model dynamical data. However, their use in analyzing biological systems has yet to become prevalent. Here, we present a set of noise-robust, geometry-aware mathematical tools that enable spectral representations to extract quantitative measurements directly from experimental data. We demonstrate the practical utility of these methods by applying them to the extraction of defects in signaling fields on membranes, the inference of partial differential equations directly from videos of active particle dynamics, and the categorization of emergent patterns in spatiotemporal gene expression during bacterial swarming.&#13;
&#13;
An additional challenge occurs for complex biophysical processes where the underlying dynamics are only partially determined. We wish to use experimental data directly to infer effective dynamical models that elucidate the system's underlying biological and physical mechanisms. Building on spectral mode representations, we construct a generic computational framework for inferring the dynamics of living systems through the evolution of their mode representations. The framework can incorporate prior knowledge about biological and physical constraints. We apply this framework first to single-cell imaging data during zebrafish embryogenesis, demonstrating how our framework can compactly characterize developmental symmetry-breaking and reveal similarities between pan-embryo cell migration and Brownian particles on curved surfaces. Next, we apply the framework to the undulatory locomotion of worms, centipedes, robots, and snakes to distinguish between locomotion behaviors. Finally, we present an extension of the framework to the case of nonlinear dynamics when all relevant degrees of freedom are only partially observed, with applications to neuronal and chemical dynamics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155336</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Labor and Education Economics</title>
<link>https://hdl.handle.net/1721.1/155335</link>
<description>Essays in Labor and Education Economics
Corradini, Viola
This thesis consists of three chapters on labor economics and the economics of education. The first two chapters study the reasons behind racial disparities in school choices and propose two solutions to alleviate them: providing information about school quality and promoting attendance in racially integrated schools during earlier grades. Differences in school choice by race contribute to unequal access to effective schools and exacerbate school segregation. Conditional on test score and district of residence, Black and Hispanic families consistently opt for schools with fewer white and Asian students, lower average achievement, and lower value-added. The first chapter asks how information about school quality affects this gap. Specifically, I examine the effects of New York City’s introduction of a letter-grade system rating the quality of its high schools. The ratings shifted Black and Hispanic students’ choices more than those of white and Asian students, narrowing racial gaps both in enrollment at high-quality schools and in academic achievement. Using a structural model of school choice and surveys of families, I find that race differences in the response to quality information stem in part from different beliefs and preferences. The model estimates suggest that Black and Hispanic students have less accurate perceptions of school quality, making them more receptive to the grade-based scoring system. In addition, white and Asian students are less influenced by information on school quality because they have strong preferences for other school attributes. Simulations suggest that better quality information narrows racial gaps in choice and achievement. Additionally, simulations indicate that the design of information is important in determining who benefits most from its provision. 
A system that releases coarse quality ratings for high-quality or oversubscribed schools increases test scores among lower achieving students more than perfect information by reducing the competition for high-quality schools from higher achieving students. The second chapter, joint with Clemence Idoux, combines unique survey data and administrative data from New York City to identify the determinants of racial disparities in school choice and shows that attending a more diverse middle school can mitigate racial choice gaps. A post-application survey of guardians of high school applicants reveals that information gaps and homophily in school preferences explain cross-race differences in choice. In turn, instrumental variable estimates show that middle school students exposed to more diverse peers apply to and enroll in high schools that are also more diverse. These effects are consistent across racial groups, particularly benefiting Black and Hispanic students who enroll in higher value-added high schools. Notably, changes in application patterns due to exposure to diverse middle school peers appear driven by changes in the set of known school options and an increased preference for peer diversity. The final chapter, joint with Lorenzo Lagos and Garima Sharma, investigates why workplaces are not better designed for women. In particular, we show that changing the priorities of those who set workplace policies can create female-friendly jobs. Starting in 2015, Brazil’s largest trade union federation made women central to its bargaining agenda. Using a difference-in-differences design that exploits variation in affiliation to the federation, we find that “bargaining for women” increases female-centric amenities in collective bargaining agreements, which are then reflected in practice. These changes lead women to queue for jobs at treated establishments and separate from them less—both revealed preference measures of firm value. 
We find no evidence that these gains come at the expense of employment, wages, or firm profits. Our results suggest that changing institutional priorities can narrow the gender compensation gap. JEL Classification: I20, I21, J52
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155335</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds and Low-Rank Approximation for Controlled Markov Processes</title>
<link>https://hdl.handle.net/1721.1/155334</link>
<description>Bounds and Low-Rank Approximation for Controlled Markov Processes
Holtorf, Flemming
Stochastic processes have captivated scientific interest by balancing conceptual simplicity with the ability to model complex, poorly understood, or even entirely unknown phenomena. Still, the deployment of stochastic process models remains challenging in practice due to their intrinsically uncertain nature, complicating the computation and interpretation of model predictions. This thesis addresses two distinct challenges for the study of Markovian processes and associated control problems: certification and scale.&#13;
&#13;
In the first part of this thesis, we present an algorithmic framework for conservatively answering the question: What is the best performance a controlled jump-diffusion process can attain? Answers to this question, even if conservative, shed light on fundamental limits, allowing us to distinguish situations where intrinsic noise masks poor decisions from situations where any attempt of improvement is futile. We connect infinite-dimensional linear programming over cones of occupation measures to techniques for approximating the solutions of Hamilton-Jacobi-Bellman and Kolmogorov backward equations. The result is a hierarchy of structured sum-of-squares programs that furnishes a sequence of hard, yet computable bounds for common control performance indicators encoding, for instance, operating cost or the probability of failure. These bounds in turn serve as witnesses of fundamental limitations or certificates of optimality and safety.&#13;
&#13;
In the second part of this thesis, we explore the use of the framework developed in part one to shed light on the limits of quantum information technologies by quantifying the performance limits of controlled quantum devices. In the context of open-loop quantum control, our framework improves upon quantum speed limits, such as the Mandelstam-Tamm bound, and provably allows characterization of performance boundaries to arbitrary precision. For closed-loop controlled quantum systems, it constitutes the first ever approach to rigorously bound performance losses induced by continuous measurement.&#13;
&#13;
The third part of the thesis is devoted to the challenge of scale as commonly encountered when studying Markov process models in high-dimensional spaces. Here, the complexity of computing and representing model predictions routinely exceeds available resources and renders interpretation challenging. We develop computational tools based on dynamical low-rank approximation that allow us to extract the dominant characteristic features of processes described by vast, nonlinear matrix-valued differential equations and track their evolution over time at a reduced cost.&#13;
&#13;
The methods developed in this thesis are accompanied by software solutions exploiting the features of the Julia programming language to enable deployment of Markov process models in the context of scientific inquiry and engineering advancement more widely, with greater ease, and rigorous guarantees.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155334</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplexed, scalable, and functionality compatible platforms for 3D spatially resolved proteomic profiling</title>
<link>https://hdl.handle.net/1721.1/155332</link>
<description>Multiplexed, scalable, and functionality compatible platforms for 3D spatially resolved proteomic profiling
Tian, Yuxuan
Extracting and interpreting the network of biomolecular information is crucial for comprehending complex biological systems. Due to the highly dynamic, deeply interconnected network of various biomolecules, it is necessary to characterize the spatial distribution and molecular functionality of biomolecules within the context of their local microenvironment and their interactive role in the whole biomolecular network. Spatial proteomics serves as an indispensable tool for extracting and deciphering cellular and molecular information of proteins, one of the important types of functional biomolecules, within biological systems. However, compared with high-throughput technologies such as single-cell sequencing or spatial transcriptomics, its widespread application is often hindered by the constrained volumetric scalability and multiplexing capability inherent in current approaches.&#13;
Additionally, existing spatial proteomics approaches focus on capturing molecular identity and spatial location, often neglecting functionality. Since proteins, together with other functional biomolecules (glycans, neurotransmitters, lipids, etc.), are major participants in biological processes, their functionality profiles can provide valuable insights into molecular recognition, signal transduction, and functional abnormalities directly related to disease diagnostics and therapeutics. However, the widely used chemical fixation protocols do not support functional detection of proteins, motivating the development of new tissue preservation technologies for the unmet goal of spatially resolved functional detection.&#13;
To address these challenges, two projects were implemented during my PhD to improve the volumetric scalability, multiplexing capacity, and the possibility of spatially resolved functional detection of spatial proteomics. Here, we first present a robust, easy-to-implement 3D proteomic mapping platform with both volumetric and multiplexing scalability. With this platform, multiround, multichannel protein detection can be achieved on millimeter-scale tissues in a unique buffer environment. We showcased two modalities for 3D proteomic analysis: dense labeling, better suited to morphological characterization of cytoarchitectures, and stochastic sparse labeling, compatible with a combinatorial barcoding strategy for enhanced multiplexing scalability. Finally, we demonstrated the potential of our spatial proteomic platform by performing comparative cellular profiling, pathological investigation, and structural characterization across multiple regions of long-banked human brain samples diagnosed with Alzheimer’s Disease and controls. In the second project, we developed a fixative-free technology enabling 3D in situ functional detection of proteins. As a demonstration, we confirmed the mechanical interlocking effect on various types of biological tissues, and verified the entanglement between dense polymer chains and biomolecules. As we predicted, signals of enzymatic activity from endogenous proteases in mouse colon tissues were successfully captured in a 3D, spatially resolved way. Together, we envision that our results will serve as pioneering technologies to enable spatial proteomic profiling with better volumetric scalability, multiplexing capacity, and the possibility of spatially resolved functional detection.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155332</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Screens for non-eukaryotic initiation factor two alpha dependent activation of ATF4 translation</title>
<link>https://hdl.handle.net/1721.1/155330</link>
<description>Screens for non-eukaryotic initiation factor two alpha dependent activation of ATF4 translation
Xu, Acer Eliot
Cells must be able to respond to their environment. Inability to respond correctly to changes has direct consequences on cellular fitness. Eukaryotic cells have developed many different pathways to signal nutrient availability, cellular stress, and general environmental changes in order to maintain homeostasis, accumulate biomass, and divide. Less is known about how these signaling pathways overlap and interact to produce complex cell phenotypes. Here, I investigate these questions using two models of eukaryotic cell stress: the integrated stress response (ISR) and mammalian target of rapamycin (mTOR) pathway in H. sapiens, and the environmental stress response (ESR) and heat shock response (HSR) in S. cerevisiae. I first establish a CRISPRi screening-competent human cell line incapable of activating the ISR. Using genome-wide genetic screens of this cell line, I show that the gene CARS1 represents a previously unknown pathway for activation of ATF4 translation that does not require phosphorylation of the classical ISR effector molecule eIF2α. I also show that mTOR signaling is not necessary for this activation of ATF4, and that this pathway is modulated by the genes METAP2 and OGT. Next, I study overlap between the ESR and HSR to show that the ESR acts as a suppressor of Hsf1 activity in aneuploid yeast. We show that ESR activation in aneuploid yeast is cytoprotective and that active Hsf1 cannot compensate for loss of the ESR. These studies highlight the complexity of regulation between various signaling pathways in eukaryotic cells, define a novel pathway for regulation of cell state, and demonstrate that there are additional unexplored signaling pathways, the mechanisms of which still remain unclear.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155330</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Random geometry in two and three dimensions</title>
<link>https://hdl.handle.net/1721.1/155328</link>
<description>Random geometry in two and three dimensions
Wolfram, Catherine C.
A central theme in random geometry is the interplay between discrete models and continuum ones that appear in scaling limits. Surprising structure and symmetry often arise in these scaling limits, leading to an interplay between combinatorics, probability, complex analysis, and geometry. The dimer model is one of the classical lattice models of statistical mechanics and can be defined in any dimension. In the first half of this thesis, we prove a large deviation principle for dimer tilings in three dimensions. This generalizes a two-dimensional result of Cohn, Kenyon, and Propp, and is one of the first results for dimers in any dimension d &gt; 2. Many ideas and constructions used to study dimers are specific to two dimensions, so our arguments start from a smaller set of tools including Hall’s matching theorem, the qualitative description of the Gibbs property, and a double dimer swapping operation. In the second half of this thesis, we study discrete, geometrically motivated coordinates called shears on the space of circle homeomorphisms up to Möbius transformations. The Weil–Petersson Teichmüller space is a subspace of this which has been of long-term interest in geometry and string theory and has recent connections to SLE curves in probability. We introduce and study natural ℓ2 spaces in terms of shears, and obtain sharp results comparing them to Hölder classes of circle homeomorphisms and the Weil–Petersson class. We also give a preliminary result about i.i.d. Gaussian random shears.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155328</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Post-Synthetic Chemical and Morphological Modifications of Polymer Membranes for Gas Separations</title>
<link>https://hdl.handle.net/1721.1/155327</link>
<description>Post-Synthetic Chemical and Morphological Modifications of Polymer Membranes for Gas Separations
Joo, Taigyu
Microporous polymers have rigid backbone structures or bulky monomer units that inhibit efficient packing of polymer segments, resulting in high-free-volume materials with good diffusion selectivity when processed into films. As a result of these characteristics, microporous polymers became promising candidates for next-generation gas separation membrane materials. Over the past two decades, this potential has sparked extensive membrane material development research, resulting in a library of microporous polymers. However, despite these significant advancements in developing novel polymer structures, gas separation membrane technology has yet to achieve the performance levels and operational stability required to replace traditional gas separation methods such as cryogenic distillation and amine absorption. This gap underscores the need for continued research and development in microporous polymer membranes to fully realize their potential in gas separation applications. Free volume manipulation (FVM) is an innovative post-synthetic modification approach designed to address the permeability–selectivity tradeoff of polymer membranes. This method involves protection/deprotection chemistries of labile functional groups, such as the tert-butoxycarbonyl (tBOC) group, to modify the physical packing structures, and hence the transport properties, of the modified membranes. This thesis combines rational design principles with synthetic chemistry and materials characterization to further develop FVM and to incorporate in situ thermal oxidative crosslinks. The developed FVM approach is demonstrated through its application to amine-functionalized PIM-1 and microporous poly(aryl ether) (PAE) polymers. A comprehensive analysis of transport properties, using various permeation and sorption experiments, reveals that FVM is an effective strategy for enhancing the gas separation performance and stability of microporous polymer membranes.
Furthermore, the chemistries developed from the FVM approach are utilized to systematically understand and provide valuable insights into the roles of free volume, chemical functionality, and crosslinking in physical aging behavior of microporous membranes. This thesis primarily focuses on developing structure–property–performance relationships of FVM-modified membranes, offering strategic directions for the future development of high-performing microporous membranes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155327</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Random and exact structures in combinatorics</title>
<link>https://hdl.handle.net/1721.1/155326</link>
<description>Random and exact structures in combinatorics
Sah, Ashwin
In this thesis I aim to show several developments related to notions of randomness and structure in combinatorics and probability. One central notion, the pseudorandomness-structure dichotomy, has played a key role in additive combinatorics and extremal graph theory. More generally, however, such notions come into play in the study of combinatorial probability and the use of random processes in extremal combinatorics. In a broader view, randomness (and the pseudorandomness notions which resemble it along various axes) can be viewed as a type of structure in and of itself which has certain typical and global properties that may be exploited to exhibit or constrain combinatorial and probabilistic behavior.&#13;
&#13;
These broader ideas often come in concert to allow the construction or extraction of exact behavior. I have chosen three directions along which to study this theme: the singularity of discrete random matrices, thresholds for Steiner triple systems, and improved bounds for Szemerédi's theorem. Each concerns breakthroughs in central questions of the fundamental areas of random matrices, combinatorial designs, and additive combinatorics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155326</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Ricci Flow on Spin Manifolds</title>
<link>https://hdl.handle.net/1721.1/155325</link>
<description>The Ricci Flow on Spin Manifolds
Baldauf, Julius
The first part of this thesis generalizes classical spin geometry to the setting of weighted manifolds (manifolds with density) and provides applications to the Ricci flow. Spectral properties of the naturally associated weighted Dirac operator, introduced by Perelman, and its relationship with the weighted scalar curvature are investigated. Further, a generalization of the ADM mass for weighted asymptotically Euclidean (AE) manifolds is defined; on manifolds with nonnegative weighted scalar curvature, it satisfies a weighted Witten formula and thereby a positive weighted mass theorem. Finally, on such manifolds, Ricci flow is the gradient flow of said weighted ADM mass, for a natural choice of weight function. This yields a monotonicity formula for the weighted spinorial Dirichlet energy of a weighted Witten spinor along Ricci flow. This part is joint work with Tristan Ozuch. The second part of this thesis introduces a functional generalizing Perelman’s weighted Hilbert-Einstein action and the Dirichlet energy for spinors. It is well-defined on a wide class of non-compact manifolds; on asymptotically Euclidean manifolds, the functional is shown to admit a unique critical point, which is necessarily of min-max type, and Ricci flow is its gradient flow. The proof is based on variational formulas for weighted spinorial functionals, valid on all spin manifolds with boundary. This part is also joint work with Tristan Ozuch.&#13;
&#13;
The final part of this thesis studies the Ricci flow on closed manifolds admitting harmonic spinors, providing a new definition of Ricci flow. It is shown that Perelman’s Ricci flow entropy can be expressed in terms of the energy of harmonic spinors in all dimensions, and in four dimensions, in terms of the energy of Seiberg-Witten monopoles. Consequently, Ricci flow is the gradient flow of these energies. The proof relies on a weighted version of the monopole equations, introduced here. Further, a sharp parabolic Hitchin-Thorpe inequality for simply-connected spin 4-manifolds is proven. From this, it follows that the normalized Ricci flow on any exotic K3 surface must become singular.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155325</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Properties and Processing of Chalcogenide Perovskites</title>
<link>https://hdl.handle.net/1721.1/155324</link>
<description>Properties and Processing of Chalcogenide Perovskites
Ye, Kevin W.
There is an unmet need for thin film photovoltaic (PV) technology that features low materials cost, high performance and reliability, and compatibility with low-cost manufacturing. Chalcogenide perovskite materials show promise to meet these criteria but are at an early stage of development. Other promising optoelectronic materials, such as lead halide perovskites, have shown remarkable strides in optoelectronic performance, but they are plagued with issues of toxicity and stability. Chalcogenide perovskites have been proposed as potential replacements for these lead halide perovskites, but there is a dearth of information on their fundamental material properties. Recent theoretical studies have demonstrated that chalcogenide perovskite materials have suitable band gaps for a single-junction solar cell, and these results are backed by experimental studies on bulk sample morphologies. In order to determine the technological potential of this material class, it is important to understand its structure-property correlations, as well as to study it in thin-film form. In this thesis, we focus on materials in the Ba-Zr-S system, particularly BaZrS3 and Ba3Zr2S7. Although the band gaps of these materials have been experimentally determined, there are still many unknown material properties, including absorption coefficients, carrier mobilities, and carrier diffusion lengths. High-quality thin film samples of chalcogenide perovskites will enable measurements of these properties. This thesis has two main goals: 1) to further investigate and characterize bulk properties of these materials for optoelectronic applications and 2) to realize the synthesis of thin films, enabling further measurements and paving the way for device fabrication. We connect optoelectronic performance to both intrinsic and extrinsic material properties, using a suite of characterization techniques.
We also demonstrate the first-of-its-kind synthesis of epitaxial chalcogenide perovskite thin films using molecular beam epitaxy (MBE), and show encouraging results on alloying and defect control. This thesis advances the current knowledge of chalcogenide perovskites in terms of both properties and processing, and sets the stage for realizing chalcogenide perovskite optoelectronics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155324</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Structural Nucleic Acid Delivery Vector Technology</title>
<link>https://hdl.handle.net/1721.1/155323</link>
<description>Development of a Structural Nucleic Acid Delivery Vector Technology
Knappe, Grant Alexander
The delivery of biomacromolecules such as proteins and nucleic acids to specific cells inside the body remains a critical challenge in contemporary biomedical research. Delivery technologies to accomplish this are ideally safe, effective, versatile, and scalable, but current commercial technologies often fall short of these characteristics. Nucleic acid nanoparticles, which are computationally designed nanostructured assemblies of nucleic acid, represent a promising foundational technology for delivery applications. Here, I present several advancements towards developing a delivery platform designed on nucleic acid nanoparticles. After introducing the current challenges in delivery and the current commercial technologies, I detail why nucleic acid nanoparticles offer promise as components of a delivery technology. Then, I describe my thesis work on evaluating the safety of nucleic acid nanoparticles in pre-clinical animal models, on their fabrication with a focus on integrating new chemistries and characterization techniques, and on a proof-of-concept demonstration using nucleic acid nanoparticles to deliver viral antigens in a vaccine formulation. Finally, I summarize where I think the state of the technology is today, where its potential could reach, and what the path to reach that potential is.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155323</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry and representation theory of symplectic singularities in the context of symplectic duality</title>
<link>https://hdl.handle.net/1721.1/155322</link>
<description>Geometry and representation theory of symplectic singularities in the context of symplectic duality
Krylov, Vasily
This thesis studies the geometry and representation theory of various symplectic resolutions of singularities from different perspectives. Specifically, following the ideas of Bellamy, Hilburn, Kamnitzer, Tingley, Webster, Weekes, and Yacobi, we establish a general approach to attack the Hikita-Nakajima conjecture and illustrate this approach in the example of ADHM spaces. We also study minimally supported representations of the quantizations of ADHM spaces and provide explicit formulas for their characters. Lastly, we describe the monodromy of eigenvalues of quantum multiplication operators for type A Nakajima quiver varieties by examining Bethe subalgebras in Yangians and linking their spectrum with Kirillov-Reshetikhin crystals.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155322</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse Fourier restriction for the cone</title>
<link>https://hdl.handle.net/1721.1/155321</link>
<description>Sparse Fourier restriction for the cone
Ortiz, Alexander
In Fourier restriction theory, weighted inequalities allow us to probe the shape of level sets. In this thesis, we describe a new weighted Fourier extension estimate for the cone and its connection with the Mizohata–Takeuchi conjecture. The main result, Theorem 3.1, builds on geometric techniques originally explored by Tom Wolff in this context. The proof uses, as a black box, circular maximal function estimates first proved by Wolff and later generalized by Pramanik–Yang–Zahl in their work on restricted projections.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155321</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic-numeric programming in scientific computing</title>
<link>https://hdl.handle.net/1721.1/155320</link>
<description>Symbolic-numeric programming in scientific computing
Gowda, Shashi
Scientific programming languages should pursue two goals: closeness to mathematical notation, and the ability to express efficient numerical algorithms. To meet these goals simultaneously, languages use imperative surface syntaxes that mimic mathematical notation. However, mimicking does not make them the same: mathematics is declarative, pliable, and caters to exploratory human nature, while algorithms are imperative and must cater to machines. Hence, there is a fundamental limit to this approach, and we leave the expressive power of the symbolic representation on the table.&#13;
&#13;
In this thesis, we ask the question: How can symbolic and numerical modes of computing co-exist, one informing the other? As an answer, we develop a symbolic-numeric system that can trace through numerical code to produce symbolic expressions, and turn symbolic expressions back into high-quality numerical code at staged compilation time. This allows the scientific user to generate code with the full power of algebraic manipulation and to treat numerical code as the symbolic artifact it is. We identified a siloing of symbolic software into three categories, each of which currently reproduces similar forms of symbolic capabilities but cannot share code with the others. Our work demonstrates that this siloing is not essential and that an ecosystem of symbolic-numeric libraries can thrive in symbiosis.&#13;
&#13;
Our system is adaptable to any domain: users can define 1) symbolic variables of any type, 2) the set of primitive (symbolically indivisible) functions in the domain, 3) the propagation of partial information, and 4) pattern-based rewrites and simplification rules. There is a tendency in scientific computing to create a “compiler for every problem”, starting from scratch every time. Every such effort erects its own towers of symbolic and numerical capabilities. A system like ours eliminates this redundancy and lets every scientific user be a “compiler designer” without any prior knowledge of compiler development.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155320</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reversible and Irreversible Effects of Magneto-Ionic Gating: Exchange, Anisotropy, and Magnetization</title>
<link>https://hdl.handle.net/1721.1/155319</link>
<description>Reversible and Irreversible Effects of Magneto-Ionic Gating: Exchange, Anisotropy, and Magnetization
Kossak, Alexander E.
Access to data, and its storage, is becoming a greater concern each year to consumers, companies, and governments. As the volume and generation of data grow, the technology that writes, reads, and secures it must become faster, more compact, and more efficient. Creating energy-efficient data storage and logic devices has therefore become an urgent research focus. Magnetic storage, through voltage-controlled magnetism (VCM), uses orders of magnitude less energy than traditional current-controlled static and dynamic random-access memory. Herein, VCM is explored through the lens of magneto-ionics – a low-power, reversible, and non-volatile approach to the electric field control of magnetism. This thesis advances the understanding and optimization of magneto-ionic systems, focusing on the reversible and irreversible intricacies of the voltage control of magnetic properties. In the initial exploration of Pd/Co/Pd trilayers via solid-state hydrogen-ion gating, the mechanism that toggles magnetic anisotropy is proven and quantitative values for the magneto-ionic modulation of magnetization and effective anisotropy are provided. The simple heterostructure also demonstrates the modulation of novel magnetic properties including spin angular momentum, orbital angular momentum, and proximity-induced moments. This understanding provides the foundation for a systematic optimization of Co/Pd multilayer heterostructures which achieves the highest reversible solid-state efficiency to date. Guided by this understanding, magneto-ionic gating is harnessed to modulate the RKKY interlayer exchange interaction. A small applied bias voltage enables fully reversible, sub-millisecond, 180° voltage-controlled switching of the magnetic order. Moreover, by engineering the magnetic properties of the heterostructure, it enables field-free switching, a highly sought-after device property for magnetic memory.
These findings not only contribute to the fundamental understanding of magneto-ionic phenomena, but also hold promise for the development of next-generation spintronic devices and non-von Neumann computing architectures. By disentangling the complexities of the magneto-ionic manipulation of the exchange, anisotropy, and magnetization of magnetic materials, this thesis paves the way for enhanced functionality and performance in spin-based technologies, offering promising avenues for future research.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155319</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Twisted Gan-Gross-Prasad conjecture for unramified quadratic extensions</title>
<link>https://hdl.handle.net/1721.1/155315</link>
<description>Twisted Gan-Gross-Prasad conjecture for unramified quadratic extensions
Wang, Danielle
The global twisted GGP conjecture is a variant of the Gan-Gross-Prasad conjecture for unitary groups, and considers the restriction of an automorphic representation of GL(V) to its subgroup U(V), for a skew-Hermitian space V. It relates the nonvanishing of a certain period integral to the central value of an L-function attached to the representation.&#13;
&#13;
In this thesis, using a relative trace formula approach, we prove the global twisted GGP conjecture in the unramified case, under some additional local assumptions on the quadratic extension and the automorphic representation. In particular, we reduce the required fundamental lemma to the Jacquet-Rallis fundamental lemma.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155315</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep-Space Exploration Enabled by CubeSats with Staged Electrospray Propulsion</title>
<link>https://hdl.handle.net/1721.1/155314</link>
<description>Deep-Space Exploration Enabled by CubeSats with Staged Electrospray Propulsion
Pettersson, Gustav M.
From their beginnings as an educational experiment, CubeSats have grown into a popular technology development platform and a capable tool for scientific missions. Deep-space CubeSats have been limited to rideshare launches thus far, acting as companions to enhance large missions or using propulsion to reach science orbits near the destination of the primary mission (e.g. to Mars or around the Moon). More CubeSat companions are planned on future high-profile missions and the concept is likely to continue to gain popularity, but it is inherently limited by the primary mission. This thesis looks towards the next step: dedicated scientific deep-space CubeSat mission applications and technology development. The asteroid belt holds unique scientific value because the small bodies in this region are remnants of solar system formation. Because there are over a million known bodies spread across dozens of types, however, it is infeasible to explore a representative sample with large missions. To solve this, swarm missions have been proposed, in which a “mothership” is launched from Earth, transfers to a central location in the asteroid belt, and releases a “swarm” of tens of smaller spacecraft to visit a sample of asteroids. So far, likely because of a combination of weak science motivation and poor technology readiness, no mission of this type has become reality. This thesis addresses the first part with two mission concepts that are strongly motivated by the decadal survey for planetary science and astrobiology: a swarm to visit the members of an asteroid family, and a swarm to visit a broad sample of the largest asteroids. The second part is addressed by leveraging the technologically mature CubeSat platform, which has already solved many of the challenges of swarm missions, and by making progress on some of the remaining limitations.
Two factors that challenge deep-space operations are the high propulsion capability (delta-v) required to impart meaningful orbit change and the falloff of available solar power away from the Sun, e.g. by a factor of five to ten in the main asteroid belt. Practical high-delta-v CubeSat propulsion comes exclusively from electric propulsion, where the mass and volume of the propulsion system are reduced at the expense of higher power requirements compared to, e.g., chemical propulsion. Unfortunately, the power demanded by state-of-the-art electric propulsion options makes them infeasible for many deep-space missions. Electrospray propulsion systems overcome the power challenge thanks to their high efficiency, but are not mature enough to operate as long as necessary to deliver the requisite delta-v. This lifetime limitation can be overcome by dividing the propulsion system into a series of stages, creating a flexible system that scales to large delta-v and uses little power. In this thesis, the first complete CubeSat staged electric propulsion system is analysed, designed, implemented, and tested, to the point where it is ready for an in-space technology demonstration. The analysis shows that propulsion systems based on this new technology can enable deep-space CubeSats capable of several km/s of delta-v in the asteroid belt in the near future. With propulsion well on its way to being solved, the main remaining limitation is the reliance of deep-space missions on ground resources, i.e. the deep-space network (DSN), for communications and navigation. With the number of satellites envisioned, either ground resources need significant expansion, or the method by which the CubeSats are operated requires change. By using the mothership as a communication and navigation relay, the ground resource issue is eliminated.
Some of the proposed missions are likely to be feasible with modifications of technology already available and demonstrated today, but the most ambitious mission variants shown in this thesis will require significant work. After a flexible mothership has been developed for the first mission, a wide range of exciting science missions in the asteroid belt will be possible, and the rapid technology development in the CubeSat sector can be leveraged in a new paradigm of scientific exploration.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155314</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wall rock alteration accompanying Canadian Pre-Cambrian gold mineralization</title>
<link>https://hdl.handle.net/1721.1/155241</link>
<description>Wall rock alteration accompanying Canadian Pre-Cambrian gold mineralization
Beaton, Neil Stewart.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Geology, 1935; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1935 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155241</guid>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intensity measurements in the arc spectrum of copper using the diffraction grating</title>
<link>https://hdl.handle.net/1721.1/155235</link>
<description>Intensity measurements in the arc spectrum of copper using the diffraction grating
Smyth, Harold Thomas,
            1910-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1937; Includes bibliographical references (leaf 74).
</description>
<pubDate>Fri, 01 Jan 1937 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155235</guid>
<dc:date>1937-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of the preparation and chemiluminescent oxidation of certain cyclic hydrazides</title>
<link>https://hdl.handle.net/1721.1/155234</link>
<description>Study of the preparation and chemiluminescent oxidation of certain cyclic hydrazides
Stanley, Lester Nelson,
            1910-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1936; Vita.; Includes bibliographical references (leaves 281-287).
</description>
<pubDate>Wed, 01 Jan 1936 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155234</guid>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of shear deformation on the large deflections of orthotropic circular plates</title>
<link>https://hdl.handle.net/1721.1/155233</link>
<description>Effect of shear deformation on the large deflections of orthotropic circular plates
Smith, Cloyd Virgil,
            1936-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1962; Vita.; Includes bibliographical references (leaves 149-151).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155233</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Biologically Plausible Deep Neural Networks</title>
<link>https://hdl.handle.net/1721.1/155069</link>
<description>Towards Biologically Plausible Deep Neural Networks
Han, Yena
Human intelligence has long been a source of inspiration for developing artificial intelligence. Computational principles and components discovered in the human brain have been successfully applied to artificial systems. These artificial systems are not only useful for engineering tasks but also serve as computational models of the brain, connecting theories to empirical data. Conversely, artificial intelligence, deep neural networks in particular, has contributed to advancing the understanding of the brain. Deep neural networks, when trained adequately, can reproduce behavioral and neural data better than previously developed models. Here we present studies that contribute to this interplay between natural and artificial intelligence. We first investigate invariance, a key computational principle that enables robust visual recognition and efficient generalization to new visual concepts, in human vision. Based on the experimental results, we propose deep neural network architectures that support the observed human behavioral properties in invariant recognition tasks. Next, we introduce a comparison framework for deep neural networks, where ground-truth targets are known, such that interpretations from the comparison can be validated. We explore whether deep neural networks with high functional similarity measures can provide reliable insights into the architectural building blocks of the brain.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155069</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomized Data Structures: New Perspectives and Hidden Surprises</title>
<link>https://hdl.handle.net/1721.1/155068</link>
<description>Randomized Data Structures: New Perspectives and Hidden Surprises
Kuszmaul, William
This thesis revisits some of the oldest and most basic questions in the theory of randomized data structures—questions such as: How efficient is a linear probing hash table? How fast can you maintain a sorted array of numbers? How big does a pointer have to be? With the help of new techniques, along with a willingness to look beyond conventional wisdom, we are able to achieve much stronger bounds for each of these questions than were previously thought to be possible.&#13;
&#13;
Our results also come with a powerful set of tools that span a wide range of problems and settings. Perhaps the most surprising of these tools is a new paradigm for designing efficient dynamic data structures, in which, by ‘tying our hands behind our back’ (i.e., by artificially restricting ourselves to a special class of privacy-preserving data structures), we are able to circumvent decades-old barriers in time/space efficiency. This technique appears three (completely separate) times in the thesis. &#13;
&#13;
Combined, our results overturn a 60-year-old myth on linear-probing hash tables; refute a 30-year-old conjecture and solve a 40-year-old open problem on dynamic sorting; resolve a 20-year-old open problem on dynamic load balancing; settle some of the most basic and fundamental questions from the theory of space-efficient data structures; and answer a 20-year-old question on memory allocation that was left as the central open problem in the first paper on history independence.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155068</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radical Tools to Transform Biomass Sugars</title>
<link>https://hdl.handle.net/1721.1/155064</link>
<description>Radical Tools to Transform Biomass Sugars
Carder, Hayden M.
Carbohydrates are implicated in essential biological processes – cell recognition, cell signaling, mechanical structure, and energy storage – and play essential roles in the potency and selectivity of bioactive natural products and pharmaceutical compounds. While some biomass-derived carbohydrates are extracted on commercial scales (e.g. D-glucose, D-xylose, D-galactose), there are &gt; 500 ‘rare’ sugars that feature prominently and are difficult to isolate from natural sources. Instead, these rare glycosides are typically prepared through multistep chemical syntheses which commonly rely on protecting group manipulations to achieve selective reaction outcomes. Currently, the lack of reliable and selective chemical tools to rapidly access diverse sugar building blocks presents a bottleneck in the synthesis and evaluation of unusual and unnatural sugars. New, selective methods are needed for the expedient synthesis of rare sugars. This thesis describes the development of selective radical reactions to transform unprotected and minimally-protected biomass-derived carbohydrates into diversely functionalized monosaccharides and glycans. We specifically targeted epimerization reactions and radical rearrangements to achieve broad synthetic access to sugar isomers and deoxygenated sugars, respectively.&#13;
&#13;
We have achieved the direct synthesis of rare sugar epimers through site-selective radical epimerization. Mechanistic studies establish that these reactions proceed under kinetic control through two distinct and sequential H atom transfer steps: H atom abstraction and H atom donation. We have developed complementary methods to achieve equatorial-to-axial and axial-to-equatorial diastereoselectivity by employing two compositionally distinct H atom abstraction catalysts. The distinct reactivity profile of the catalysts was found to arise from a change in mechanism: equatorial-to-axial epimerization is achieved by a minimally selective H atom abstraction and diastereoselective H atom donation while axial-to-equatorial epimerization is achieved by a site-selective and diastereoselective H atom abstraction.&#13;
&#13;
Furthermore, we accessed deoxygenated sugars by leveraging a manganese-promoted 1,2 radical migration, which proceeds via a sugar radical intermediate accessed by H atom abstraction. The resulting deoxyketopyranosides feature chemically distinguishable functional groups and are readily transformed into diverse carbohydrate structures.&#13;
&#13;
This work validates the potential for radical intermediates to facilitate the selective transformation of carbohydrates and showcases the step and efficiency advantages attendant to synthetic strategies that minimize reliance upon protecting groups.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155064</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reaction Condition Dependence of Different Overpotential Components in Electrochemical Hydrogen Catalysis</title>
<link>https://hdl.handle.net/1721.1/155063</link>
<description>Reaction Condition Dependence of Different Overpotential Components in Electrochemical Hydrogen Catalysis
Tang, Bryan Y.
The ability to drive redox reactivity at electrode interfaces necessitates polarization away from equilibrium. This manifests as an applied or measured overpotential. However, there are many ways to generate non-equilibrium conditions, and correspondingly overpotential can arise from multiple different sources. Thus, the applied overpotential in an electrocatalytic system is always the aggregate of different overpotential components, each of which can be uniquely affected by different reaction conditions. This thesis seeks not only to isolate these overpotential components, but also to understand how each is affected by changes in reaction conditions.&#13;
&#13;
Chapter 2 explores how the applied overpotential for the hydrogen evolution reaction (HER) can be partitioned into a charge transfer overpotential, which drives proton-coupled electron transfer, and a chemical overpotential arising from increasing surface H activity. Typical experiments provide no information about the relative contributions from these two components in hydrogen evolution catalysis. We employ a Pd membrane double cell to spatially isolate charge transfer and chemical reaction steps in HER catalysis, deconvoluting their overpotential contributions under different reaction conditions. We analyze how pH and the introduction of poisons and promoters affect each component, and find that for a given H2 release rate, only the charge transfer overpotential is affected by reaction conditions. These findings suggest that reaction-condition-dependent HER efficiencies are driven predominantly by changes to the charge transfer kinetics rather than the chemical reactivity of surface H.&#13;
&#13;
Chapter 3 explores how proton consumption necessary for the HER can lead to non-equilibrium interfacial pH environments that differ substantially from the bulk. This pH swing subsequently manifests as a concentration overpotential which superimposes onto the aggregate overpotential in the HER. Using open circuit potential decay transients, we develop and validate a general methodology for temporal isolation of the proton concentration overpotential on Pt. This then allows us to experimentally quantify polarization-induced interfacial pH swings. Using this method, we quantify the impact of buffer strength, supporting electrolyte composition, and the presence of cation exchange polymer overlayers on the polarization-induced pH swings. We find that (1) modest current densities of −30 mA cm⁻² are sufficient to sustain pH swings of &gt; 2 pH units, even for strongly buffered solutions; (2) addition of alkali supporting electrolyte to unbuffered, acidic electrolyte can induce pH swings so large that the polarized electrode environment becomes strongly alkaline; and (3) the presence of a Nafion polymer overlayer containing fixed anionic charges serves to further augment the interfacial pH swing, resulting in a similar pH swing at half the applied current density. The transport characteristics of these systems were analytically modelled, enabling direct calculation of boundary layer thickness and quantitative prediction of the OCP decay transient.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155063</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting Consumer Plastic Formulations to Marine Fates and Impacts</title>
<link>https://hdl.handle.net/1721.1/155060</link>
<description>Connecting Consumer Plastic Formulations to Marine Fates and Impacts
Walsh, Anna Nicole
Solutions to plastic pollution have been impeded by knowledge gaps surrounding plastic’s environmental persistence and implications. To fill some of these gaps, this thesis aims to connect consumer plastic formulations (the specific mixture of polymers and additives) to marine fates and impacts. First, I explored relationships between consumer polyethylene (PE) bag formulations, degradation by sunlight, and dissolved organic carbon (DOC) release to seawater. I found that the bags contained 15-36% inorganic additives, mainly calcium carbonate and titanium dioxide (TiO2). Bags and pure PE produced 3- to 80-fold more DOC during sunlight exposure than dark-leaching, with more DOC generated by the bags than the pure PE. High-resolution mass spectrometry revealed that photo-produced DOC comprised tens of thousands of unstudied chemicals. Additives strongly influenced degradation rates and DOC compositions. Second, I examined the interplay between sunlight and marine microbes on degradation of pure and TiO2-containing cellulose diacetate (CDA) fabrics. I found that sunlight reduced CDA’s average molecular weight (MW) and, ultimately, converted it to CO2. TiO2 accelerated MW reduction 2-fold and conversion to CO2 24-fold. Prior degradation by sunlight expedited microbial degradation in both fabrics. Finally, I assessed inorganic additive compositions in consumer plastics, their potential for liberation by sunlight, and potential impacts on local and global biogeochemistry. Consumer plastics contained ~8% inorganic additives comprising nearly 50 elements. Additive zinc (Zn) isotopic signatures appeared unique relative to other marine sources, which may be evident in the marine Zn isotopic balance. Light exposure accelerated release of elements into water relative to dark-leaching. Based on the most-cited estimate of plastic leakage to the ocean, plastic-derived antimony and Zn may be 3% and 1%, respectively, of natural riverine fluxes and quadruple by 2060.
Proportions in heavily polluted rivers appear even greater. However, plastic leakage estimates span orders of magnitude, translating to high uncertainty in element fluxes. Collectively, this thesis demonstrates that additives and sunlight are overlooked drivers of marine plastic fates and impacts; integrating them into studies and models may transform our understanding of plastic pollution. Furthermore, leveraging the connection between formulation and fate may enable us to reduce environmental impacts using existing materials.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155060</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonsmooth Distillation Models Robust to Convergence Errors: Numerical Methods and Topological Aspects</title>
<link>https://hdl.handle.net/1721.1/155057</link>
<description>Nonsmooth Distillation Models Robust to Convergence Errors: Numerical Methods and Topological Aspects
Martins Cavalcanti, Suzane
Distillation is the most widely used (yet highly energy-intensive) industrial separation method and one of the most well-studied chemical engineering processes. However, engineers still often encounter distillation simulation errors while using state-of-the-art process software such as Aspen Plus and HYSYS. These errors preclude one from converging flowsheets with recycle streams and from successfully utilizing rigorous process optimization methods, which are both essential tasks in designing more energy-efficient and economically viable processes. In this thesis we address these challenges by developing nonsmooth (i.e., non-differentiable) distillation models and equation-solving methods that are robust to a wide range of convergence errors. As demonstrated by our results, nonsmooth functions are a powerful tool due to their ability to automatically switch between different terms, which allows us to describe and adapt to different modes of behavior of a system using a single model.&#13;
	&#13;
To investigate the "dry column" errors often encountered in Aspen Plus, we developed a nonsmooth version of the MESH model, which can be solved with Newton-type methods using exact generalized derivatives obtained with automatic differentiation techniques. This model allows us to simulate distillation columns in which one or more stages operate with a single phase, either superheated vapor or subcooled liquid. By developing continuation methods to simulate the nonsmooth MESH model, we discovered a new class of degenerate bifurcations in distillation columns which are generally observed regardless of the mixture or parameter being varied. These bifurcations are characterized by infinitely many steady states with dry/vaporless stages, and occur at the so-called critical parameter value associated with the first flow rate in the column reaching zero.&#13;
	&#13;
In order to describe the topological structure of these bifurcation curves in a rigorous fashion, we proved a piecewise-differentiable (PCʳ) Rank Theorem that allows us to characterize nonsmooth curves and surfaces as PCʳ manifolds, according to the theoretical framework introduced in this thesis. We also generalized a previous Lipschitz Rank Theorem and applied it to define Lipschitz embedded submanifolds. Further, we developed sufficient and practically verifiable conditions, in terms of the B-subdifferential generalized derivative, that can be applied to the PCʳ MESH model function to theoretically predict the geometric behavior of its level sets that we observed numerically.&#13;
	&#13;
The nonsmooth MESH model overcomes dry column errors for specifications that lead to a feasible state with dry/vaporless stages. To address convergence failure due to column specifications being infeasible, which in general is unpredictable prior to simulation, we developed a second class of nonsmooth, adaptive distillation models. Our modeling strategies return a feasible solution even when one or two specifications are infeasible, by automatically resetting the latter to ensure that all flow rates are within their imposed lower and upper bounds. Additionally, we developed a nonsmooth version of the inside-out algorithm to converge these nonsmooth models reliably from an ab initio starting point, even for highly non-ideal mixtures. With a series of test cases, we demonstrate that our distillation modeling methods outperform Aspen Plus due to their ability to converge both individual columns and flowsheets with recycle under infeasible or near-infeasible specifications, non-ideal thermodynamics, and poor initial guesses.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155057</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>In situ genome sequencing: genomic measurement at the&#13;
convergence of structure and molecular identity</title>
<link>https://hdl.handle.net/1721.1/155056</link>
<description>In situ genome sequencing: genomic measurement at the&#13;
convergence of structure and molecular identity
Reginato, Paul L.
A central aim of contemporary biology is to understand the function and behavior of molecules in the context of whole individual cells. Progress has been largely driven by two families of measurement technology: microscopy and DNA sequencing. Microscopy directly provides rich information on structure and organization, while sequencing directly provides rich information on the identities of molecular species. Both microscopy and sequencing have developed the resolution needed for molecular characterization of cells: respectively, they can resolve the relative organization and identities of single biomolecules at the scale of an entire single cell. The scientific utility of each is now primarily limited by its inability to recapitulate the measurements of the other: since biomolecules exist in a 3D world and function through spatially local interactions, their diverse identities and relative structural organization within a cell are two sides of one coin. Recent technologies, such as highly multiplexed fluorescent in situ hybridization and in situ RNA sequencing, seek to unify the measurement of structure and biomolecular identity, thus enabling molecular mapping of biological samples.&#13;
&#13;
In this thesis, I report the collaborative development of in situ genome sequencing (IGS), the first technology for simultaneously sequencing and imaging genomes in intact biological samples. We applied IGS to spatially localize thousands of genomic sequences in individual human fibroblasts and early mouse embryos, thereby mapping their genome structure. We went on to develop expansion genome sequencing (ExGS), which integrates IGS with expansion microscopy to achieve ~3.5-fold greater spatial and ~14-fold greater genomic resolution in cultured cells. ExGS indicates a scalable path toward in situ genome sequencing with near-complete genomic resolution. I anticipate that these developments in genomic measurement technology will be instrumental for the convergence of structural organization and molecular identity in biological science.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155056</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Engineering of Carbon Fixing Material Systems</title>
<link>https://hdl.handle.net/1721.1/155054</link>
<description>Design and Engineering of Carbon Fixing Material Systems
Lundberg, Daniel James
Through interaction with ambient light, water, oxygen, and physical stress, materials such as coatings, adhesives, and structural composites degrade. Materials commonly fail through the generation and propagation of micro-cracks, with only a minuscule amount of mass degradation often necessitating complete replacement. This degradation leads to a cycle of material production, use, and replacement fed by resource extraction and leading to waste generation. In the United States, for every ton of plastic produced we landfill over three-quarters of that amount each year. In contrast, natural systems such as plants continuously grow, repair, and self-reinforce using carbon dioxide (CO₂) and ambient light as mass and energy sources, respectively. In this thesis, we seek to study and develop mechanisms of growth and self-repair similar to those of living plant systems to create what we deem artificial carbon fixing materials, which incorporate these characteristics under non-biological conditions.&#13;
&#13;
We present both experiment and computation to explore the conversion of atmospheric greenhouse gases such as carbon dioxide and methane into monomeric chemicals such as formaldehyde. A mathematical analysis of CO₂ photoreduction revealed that carbon conversion and material growth rates which rival those of living plants can be obtained with currently employed photocatalytic materials under ambient conditions. This analysis revealed that the rate-limiting step in the production of carbon fixing materials acting on CO₂ is the catalytic conversion of this gas. Further, a high-throughput photocatalytic setup was constructed to analyze 12 common but unique semiconducting nanoparticle photocatalysts based on titanium dioxide, silicon carbide, and tin oxide. Overall, a universal kinetic mechanism was uncovered which accurately characterized the relationship between CO₂ conversion and production selectivity across all photocatalysts.&#13;
&#13;
Methane was also investigated as an ambient carbon source for carbon fixing materials, where the rate and efficiency of this gas’ oxidation with hydrogen peroxide was optimized over transition-metal modified zeolite catalysts. We revealed that iron- and copper-containing catalysts, which are reported as the most active and efficient under industrial conditions—over 30 MPa of methane partial pressure and up to 90 °C—perform poorly at ambient temperature and under one atmosphere of methane due to rapid hydrogen peroxide consumption. Instead, solely iron-containing materials showed both the highest rate and efficiency of methane oxidation under the ambient conditions explored. To the best of our knowledge, these results represent the highest selectivity to formaldehyde to date for both carbon dioxide photoreduction and methane oxidation under the ambient conditions of room temperature and atmospheric pressure. During methane oxidation, intermediate formaldehyde was captured in situ through reaction with urea to produce a urea-formaldehyde polymer. To enhance the efficiency of the reaction system, methane oxidation over the iron-modified zeolite catalyst was coupled with hydrogen peroxide generation by the enzyme alcohol oxidase. Finally, modeling was performed to understand carbon nanotube-based sensors and chemically-responsive materials, which may find use in the detection and quantification of greenhouse gas emission streams and the abatement of such chemical release to the environment.&#13;
&#13;
The experimental and computational work performed in this thesis expands our understanding of the limitations and potential of imbued carbon fixation to augment the lifetime and performance characteristics of material systems. Finally, this work informs the creation of future low-cost and low-carbon-emission materials which we believe can reduce the environmental impact of material production as well as remediate greenhouse gas emission streams.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155054</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Condensations of esters, ethers and alcohols with aromatic hydrocarbons in the presence of aluminum chloride</title>
<link>https://hdl.handle.net/1721.1/155027</link>
<description>Condensations of esters, ethers and alcohols with aromatic hydrocarbons in the presence of aluminum chloride
Sturgis, Bernard Miller, 1911-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1936; Vita.; Includes bibliographical references (leaves 152-158).
</description>
<pubDate>Wed, 01 Jan 1936 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155027</guid>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Factors in the extension of shelf life of ration components</title>
<link>https://hdl.handle.net/1721.1/155026</link>
<description>Factors in the extension of shelf life of ration components
Spiegel, Lawrence Sanford.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Food Technology, 1958; Vita.; Includes bibliographical references (leaves 339-353).
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155026</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure and properties of the trichocysts of paramecium.</title>
<link>https://hdl.handle.net/1721.1/155020</link>
<description>The structure and properties of the trichocysts of paramecium.
Jakus, Marie A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1945; Vita.; Bibliography: leaves 35-36.
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155020</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the forms of copper in copper reverberatory slags.</title>
<link>https://hdl.handle.net/1721.1/155019</link>
<description>On the forms of copper in copper reverberatory slags.
Huang, Pei-Yung.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1945; Vita.; Bibliography: leaves 134-140.
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155019</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic interactions between trade flows and exchange rates : theory and evidence</title>
<link>https://hdl.handle.net/1721.1/155016</link>
<description>Dynamic interactions between trade flows and exchange rates : theory and evidence
Ueda, Kazuo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1980; Bibliography: leaf 127.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155016</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron beam techniques for testing and restructuring of wafer-scale integrated circuits</title>
<link>https://hdl.handle.net/1721.1/154969</link>
<description>Electron beam techniques for testing and restructuring of wafer-scale integrated circuits
Shaver, David Carl.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154969</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the synthesis of metastable A-15 "Nb3Si" by ion implantation.</title>
<link>https://hdl.handle.net/1721.1/154955</link>
<description>On the synthesis of metastable A-15 "Nb3Si" by ion implantation.
Clapp, Mireille Treuil.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154955</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Macromolecules using Machine Learning and Simulations</title>
<link>https://hdl.handle.net/1721.1/154378</link>
<description>Designing Macromolecules using Machine Learning and Simulations
Mohapatra, Somesh
The near-infinite number of possible macromolecules, arising from the combinations of monomers, linkages, and their topological arrangement, contributes to the ubiquity and indispensability of macromolecules. However, such chemical diversity hinders the development of general computational approaches that can be applied to macromolecules. The challenges around representing, comparing and learning over macromolecules are manifold. Current representations provide limited coverage of chemical space, and require significant customization to include non-natural monomers and non-linear topologies. Similarity computation methods are limited to biological macromolecules, incorporate evolutionary bias in scoring, and generally do not extend to unnatural monomers or non-linear topologies. Machine learning models are restricted by descriptors with limited representation capacity. To address these challenges, we developed chemistry-informed representations for the individual monomer unit and the complete macromolecule to capture both the local chemistry and global topology. Chemical similarity computation methods were developed to compare two or more macromolecules, irrespective of monomer chemistry and topology. A wide variety of unsupervised and supervised machine learning methods, selected according to the macromolecule type, data set size, and task, were used to identify patterns in unlabeled data sets, and map macromolecules to properties in labeled data sets, respectively. Using attribution analysis over the pre-trained models, we interpreted the decision-making process of the models. We applied these tools for de novo design, virtual screening, and in silico optimization of macromolecules, mostly followed by experimental validation of predictions, for applications ranging from peptides and glycans, to electrolytes and thermosets.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154378</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods to program and to probe RNA tertiary structure with nucleic acid origami</title>
<link>https://hdl.handle.net/1721.1/154377</link>
<description>Methods to program and to probe RNA tertiary structure with nucleic acid origami
Parsons, Molly F.
Biological structure determination has revolutionized mechanistic understandings, nanotechnology, and drug design. Despite advances in structural determination technologies, from nuclear magnetic resonance to cryo-electron microscopy (cryo-EM), one class of biomolecules has resisted 3D structure characterization. RNA, particularly larger RNAs, often dynamically adopt multiple conformations in a structural ensemble, and this heterogeneity has made 3D structure determination challenging through conventional techniques.&#13;
&#13;
In this thesis, I investigated two avenues for improving RNA 3D structure determination, both leveraging the nanoscale programmability of nucleic acid origami. Nucleic acid origami generally involves folding one long single-stranded nucleic acid, the scaffold, into a target geometry via hybridization with short oligonucleotide "staples." First, we expanded the geometric space accessible to 3D wireframe DNA-scaffolded origami with edges composed of two-helix bundles, optimizing folding conditions and crossover design and analyzing the final folded 3D structures for a new design algorithm. I designed a tetrahedral wireframe DNA origami to capture an engineered tRNA via hybridization at three sites. For this complex, I verified stable, cooperative binding, and characterized the 3D structure with cryo-EM, which confirmed binding at all three sites and yielded a 17-Å resolution reconstruction of the tRNA. I also outlined a high-throughput workflow to probe the unknown tertiary structure of a target RNA with varied designs of DNA origami. Additionally, I studied the design of 3D wireframe RNA-scaffolded origami, characterizing the folded structure for several crossover schemes to evaluate how best to accommodate the A-form helical geometry of RNA for robust designs. The resulting algorithm for designing RNA-scaffolded polyhedra enables precise, covalent anchoring of a target RNA fragment onto a wireframe polyhedron. I tested this anchoring approach to attach a 232-nt HIV-1 RNA fragment to an RNA-scaffolded pentagonal bipyramid as a method to improve cryo-EM characterization. The particles folded into the expected pentagonal bipyramidal geometries, and cryo-EM micrographs suggested anchored target RNA, but the design and data analysis need further refinement to determine a 3D structure for the anchored RNA fragment. These studies together represent proofs-of-concept for stabilizing RNA structures on nucleic acid origami, enabled by the expansion of origami design.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154377</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Crystalline Anisotropy on Solid-state Dewetting</title>
<link>https://hdl.handle.net/1721.1/154376</link>
<description>Effects of Crystalline Anisotropy on Solid-state Dewetting
L'Etoile, Maxwell A.
Solid-state dewetting is the process by which micro- and nano-scale films, wires, and other fabricated structures on a substrate evolve toward geometries which reduce the overall surface free energy of the system. This process, also sometimes referred to as agglomeration, occurs at elevated temperatures and is mediated by surface self-diffusion. Regardless of initial conditions, dewetting eventually leads to the formation of one or more particles whose morphology is determined by the orientational dependence of the constituent material’s surface free energy density.&#13;
&#13;
Subtle differences in initial conditions can determine whether a system dewets into a single particle or many and whether this evolution occurs over the course of minutes, hours, days, or years. Furthermore, the intermediate stages of dewetting behavior can exhibit profound complexity, and many materials systems are prone to a host of morphological instabilities. Although decades of research have steadily increased the extent of our knowledge about solid-state dewetting, a generalizable, predictive understanding of dewetting behavior has remained elusive, in large part because of the difficulty of modeling systems with strong crystalline anisotropy.&#13;
&#13;
The work in this thesis focuses on advancing our understanding of the dewetting behavior of single-crystal materials and consists chiefly of two parallel thrusts: the development of a powerful new method for simulating solid-state dewetting and the use of lithographic patterning to experimentally study dewetting in systems with precisely controlled geometries. We apply these two synergistic approaches to understanding the morphological stability of ruthenium nanowires, the effects of ambient conditions on dewetting nickel (110) films, and the dendritic morphologies which arise at the corners of holes in dewetting films.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154376</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity and Expertise in Representative Governance</title>
<link>https://hdl.handle.net/1721.1/154374</link>
<description>Diversity and Expertise in Representative Governance
Revel, Manon
Representative democracy is a widespread and essential part of democratic governance. Our understanding of it has been largely shaped by the triumph of elections that vest peripheral access to power through episodic polls. How would representative democracy look under different selection rules? In an attempt to reflect on foundational principles on which we could build a renewed case for representative democracy as democratic governance, this thesis explores democratic innovations for selecting representatives and focuses on the interplay between selection mechanisms, epistemic performance, and procedural aspects. The first chapter investigates the optimal number of voters needed to aggregate votes on a binary issue under majority rule. It takes an epistemic view where the issue at hand has a ground truth “correct” outcome and each one of &#119899; voters votes correctly with a fixed probability, known as their competence. Assuming that the best experts, i.e., those with the highest competence, can be identified to form an epistemic congress, this chapter surprisingly shows that the optimal congress size should be linear in the population size, even with expert decision-making. The next chapters delve into the concept of liquid democracy, a governance mechanism in which citizens can either vote directly or delegate their votes to others, and examine the epistemic and procedural performance of this approach, offering insights from both theoretical and empirical perspectives. Taking an epistemic view, the second chapter highlights delegation scenarios where liquid democracy is likely to find the ground truth. It treats delegations as a stochastic process akin to well-known processes on random graphs, such as preferential attachment and multitype branching processes, and relates their dynamics to liquid democracy’s performance. Along the way, it proves new bounds on the size of the largest component in an infinite Pólya urn process, which may be of independent interest. 
The third chapter presents empirical experiments designed to compare liquid democracy with direct democracy, the counterfactual. It validates the theoretical findings of the second chapter, providing evidence that delegation mechanisms align with theoretical expectations. The fourth chapter analyzes delegation dynamics in a real-world setting and explores how liquid democracy functions in scenarios with contentious issues. It reveals insights into the patterns of delegation and the usage of liquid democracy in non-epistemic contexts. The fifth chapter reflects on lottocracy (selection of representatives at random) and proxy democracy (selection based on self-selection and flexible nominations that determine the relative influence of representatives) as two models to select representatives. It investigates the procedural aspects of both selection mechanisms, exploring how inclusion, equality, and legitimacy would be realized under lottocracy and proxy democracy. The sixth chapter, drawing on computational social choice, formulates a unified framework for comparing selection mechanisms. It devises a model in which different selection mechanisms can be formalized and evaluated axiomatically. It classifies selection mechanisms based on whether they are open or closed, flexible or rigid, and direct or virtual, and proposes five axioms: proportionality, diversity, monotonicity, faithfulness, and effectiveness. Throughout, the thesis intertwines insights from mathematics (social choice theory, applied probability, statistics), political philosophy, and empirical analyses to provide a comprehensive exploration of different facets of representative democracy. The interdisciplinary approach reflects the complexity and richness of democratic governance and calls for continued collaboration across disciplines to tackle its challenges and shape its future.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154374</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inertial Hysteresis Couplings</title>
<link>https://hdl.handle.net/1721.1/154373</link>
<description>Inertial Hysteresis Couplings
Wheeler, Charles Michael
This thesis presents the Inertial Hysteresis Coupling (IHC), a new family of variable-slip mechanical couplings/clutches aimed at achieving order-of-magnitude (∼10x) improvements in torque density (torque capacity / coupling diameter) over existing magnetic and fluid options. IHCs leverage combined normal, frictional, and inertial forces acting on sliding mechanical elements to realize this torque density improvement. The new design (a) allows for continuous modulation of these high-torque loads while (b) naturally achieving lockup at maximum engagement and (c) remaining well-suited to forced-convection cooling in high-heat-dissipation scenarios. Additionally, the base IHC design can be modified to achieve “one-way clutching” behavior while still retaining the ability to speed-synchronize (transmit load under partial slip) and achieve lockup. These characteristics make IHCs particularly well-suited to automotive and mobile robotics applications – for example, active control of vehicle differential slip – where high torque density and slip control are both of critical importance.&#13;
&#13;
As the first investigation into IHCs, this research establishes multiple important foundations for analysis, simulation, and design. Starting from first principles, a ground-up model for IHC behavior is derived that encapsulates IHC geometry, relevant coordinate systems/transformations, kinematics, equilibrium equations, thermal loads, etc. Implemented in MATLAB, this model facilitates the selection of IHC parameters via performance projections, sensitivity studies, and a variety of different visualizations and animations. These tools enabled the design and fabrication of a physical IHC prototype, “ihcBENCH.” Through testing of this prototype, the key desired behaviors were successfully demonstrated: linear torque modulation via control of the “clutch angle” βo (max slip torque before lockup = 13.2 Nm, max/min slip torque ratio = 3.8, R² = 0.986); IHC lockup at high clutch engagement angles (βo ⪆ 37°); and the one-way clutching behaviors previously described.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154373</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoparticle Self-Assembly for the Synthesis and Processing of Ordered Nanocomposite Solids</title>
<link>https://hdl.handle.net/1721.1/154372</link>
<description>Nanoparticle Self-Assembly for the Synthesis and Processing of Ordered Nanocomposite Solids
Lee, Margaret Sandra
Nanoparticle self-assembly has emerged in recent years as a promising strategy for generating nanocomposite materials, with a focus on developing methods that are capable of controlling structure and composition at the nanoscale and ideally beyond, as natural nanocomposites have demonstrated how hierarchical ordering of constituent materials leads to enhanced properties in the composite. Nanocomposite tectons (NCTs) are a class of scalable nanoscale building blocks capable of self-assembly into ordered superlattices in solution through the use of dynamic supramolecular binding interactions. However, while dynamic interparticle interactions are key for enabling reversible binding and preventing kinetic traps during the assembly process, they render assembled structures susceptible to dissociation upon changes in the solution environment, limiting their processability outside of these narrow conditions. This work presents various methods for improving NCT superlattice stability and processability into polymer nanocomposites. These methods include the addition of free polymer to the assembly solution as a simple means to controllably increase the stability of nanoparticle superlattices against thermal dissociation in solution, as well as a method for embedding ordered nanoparticle superlattices into a polymer gel matrix, a medium that stabilizes the embedded arrays against disruption while still allowing dynamic lattice manipulation. Further stabilization can be obtained with complete solvent removal to bind nanoparticle arrays within a solid polymer matrix. The NCT design space is expanded by demonstrating a unary NCT-small molecule linker system capable of undergoing a reversible order-to-order phase transition between FCC and BCC, as well as demonstrating how solvent quality can be used to obtain ordered assemblies without the need for thermal annealing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154372</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of ultra-robust supramolecular assemblies  and their application to water treatment</title>
<link>https://hdl.handle.net/1721.1/154371</link>
<description>Design of ultra-robust supramolecular assemblies  and their application to water treatment
Christoff-Tempesta, Ty
Molecular self-assembly offers a powerful bottom-up approach to producing small molecule nanostructures with high surface areas, tunable surface chemistries, and pristine internal order. Conventionally, the dynamic nature of these systems has constrained their use to specific cases in primarily biomedical applications. Here, I present the design of molecular self-assemblies constructed from small molecule aramid amphiphiles to overcome these limitations. Aramid amphiphiles incorporate a Kevlar-inspired domain that imparts strong, cohesive intermolecular interactions between molecules. This design results in the self-assembly of aramid amphiphiles into nanostructures with suppressed dynamic mobility and mechanical properties rivaling silk. By harnessing this stability, I expand the application space of small molecule assemblies by extending them to the solid state, stabilizing unusual metastable nanostructures, and producing stable antifouling surface coatings. Finally, I leverage surface areas near 200 m²/g to design aramid amphiphile-based nanomaterials that treat liters of lead-contaminated water with single milligrams of material. Incorporating durable interactions into supramolecular assemblies offers a route to surmount the limitations of conventional assemblies, enabling customizable nanomaterials for demanding applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154371</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ionic conductivity transitions in antiperovskite ionic conductors</title>
<link>https://hdl.handle.net/1721.1/154370</link>
<description>Ionic conductivity transitions in antiperovskite ionic conductors
Li, Yiliang
High-performance solid electrolytes are critical for realizing all-solid-state batteries with enhanced safety and cycling efficiency. However, the relatively low ionic conductivity of most solid-state electrolytes remains the major challenge. Antiperovskite (AP) electrolytes are recently developed solid-state electrolytes with enhanced ionic conductivity. AP materials also provide an ideal system for studying the relationship between disorder and ionic conductivity, as in AP systems the degree of lattice disorder can be easily tuned through atom substitution and the incorporation of cluster ions. Moreover, some AP compounds exhibit non-linearity in the Arrhenius plot of ionic conductivity: at higher temperatures, the ionic conductivity of AP materials curves upward, while most known solid electrolytes exhibit negative deviations. This thesis presents studies of the mechanisms underlying such ionic conductivity transitions in APs, including octahedral tilt disordering, the paddlewheel mechanism, and the double paddlewheel mechanism, and aims to guide the design of solid electrolytes with high ionic conductivity. Using temperature-dependent synchrotron X-ray diffraction, differential scanning calorimetry, impedance spectroscopy, neutron diffraction, nuclear magnetic resonance, and ab initio molecular dynamics simulation, we show that the 100x increase in ionic conductivity in stoichiometric AP Na₃OCl is related to a phase transition to the cubic phase with increasing octahedral tilt disorder. We then present two novel cluster ion AP electrolytes, Li₂(OH)Cl and Na₂(NH₂)(BH₄), in which the paddlewheel and double paddlewheel mechanisms, combined with high cation vacancy concentrations, result in a more than 100x increase in ionic conductivity through the phase transition. Our study suggests that to achieve high ionic conductivity in solid electrolytes, a structure with octahedral tilt disordering, a high vacancy concentration, and cluster anions with high rotational mobility is beneficial.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154370</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Studies of Bio-Inspired Synthetic Random Heteropolymers</title>
<link>https://hdl.handle.net/1721.1/154369</link>
<description>Computational Studies of Bio-Inspired Synthetic Random Heteropolymers
Hilburg, Shayna L.
Biological heteropolymers, such as proteins and nucleic acids, work together to execute the vast suite of tasks which ultimately enable life. These capabilities, particularly enzymatic function, are incredibly attractive for developing sustainable materials, removing pollutants and contaminants, designing advanced nanomedicines, and countless other applications. However, as proteins typically denature and lose functionality in non-physiological conditions, harnessing their activity in processing steps or end applications which require foreign environments proves difficult. Amphiphilic synthetic polymers can provide a bio-inspired means for augmenting, and even mimicking, bio-macromolecular function. A four component methacrylate-based random heteropolymer (RHP) system has been demonstrated to be especially promising as an easily scalable and broadly effective option for protein-stabilization and protein-mimicry. The statistical distribution of random chains makes analysis of particular molecules and motifs challenging experimentally. In this thesis, we use atomistic molecular dynamics simulations to perform nanoscale characterization of individual sequences to provide insight into how such synthetic polymers behave. We draw from random heteropolymer theory and experimentation to develop a set of computational studies which lead to a comprehensive view of how the polymers assemble, move, and interact in solution. First, we focus on the structure and dynamics of RHPs in aqueous solution, investigating what drives their assembly. We found the polymers to have multiple dynamic modes and heterogeneous surfaces in water, properties which scale predominantly with composition rather than particular sequence motifs. We then study the impact of solution environment, examining polymer response to varied solvent properties. 
Modulation of electrostatic and polar interactions revealed a strong environmental dependence of RHP assembly, with significant activation barriers to remodeling upon changing solvent. Finally, we characterize RHP interactions with other macromolecules, small molecules, and interfaces. These studies demonstrate that such interactions alter not only the driving forces of assembly, but can also introduce high-energy interfaces that may stimulate changes in polymer conformation. Our analyses, leveraging techniques from both polymer physics and protein sciences, enable predictable modification and processing of synthetic polymer assemblies. As native proteins and nucleic acids are themselves heteropolymers, this thesis also provides a synthetic perspective that relays behavioral relationships back to the biomolecules that inspire it.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154369</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the longitudinal dynamics of the human gut microbiome and immune system</title>
<link>https://hdl.handle.net/1721.1/154367</link>
<description>Investigating the longitudinal dynamics of the human gut microbiome and immune system
Bi, Haixin Sarah
This thesis reports three explorations of two human-associated biological systems: the gut microbiome and the adaptive immune system. These two systems are closely intertwined, and engage in significant crosstalk with each other and with the rest of the body. Advances in recent years in high-throughput sequencing technologies have enabled study of these systems with unprecedented depth and comprehensiveness. In this thesis I leverage these tools to uncover novel insights into their composition and function. In the first chapter, I describe a project in which we closely tracked a small cohort of subjects’ microbiomes over a period of up to one year, as they traveled from North America to Africa and back. We then intersected these data with data from African locals and contextualized our results via reanalysis of larger, published datasets on travelers’ microbiomes, to build a fuller picture of what happens to our gut microbes when we travel. In the second chapter, I describe an analysis of a forthcoming microbiome data resource from our lab, in which I examined and quantified the diversity of antiphage defense capabilities at the strain level within the gut microbiome. My results underscore the value of large, cross-sectional datasets to capture underlying strain heterogeneity. Finally, in the third chapter, I describe a longitudinal study of an important component of adaptive immunity, the T cell receptor (TCR) repertoire, in healthy individuals over a time span of up to one year. Our results served as the first reference point for normal temporal fluctuations in TCR repertoire dynamics. Together, this thesis underscores the value of modern sequencing-based observational study of these complex human systems, using both cross-sectional and longitudinal designs.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154367</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering kinetics of immunotherapies and vaccines</title>
<link>https://hdl.handle.net/1721.1/154366</link>
<description>Engineering kinetics of immunotherapies and vaccines
Bhagchandani, Sachin Haresh
The dynamic progression of immune responses to infections &amp; tumors points to the possibility of an optimal temporal window for immune modulation as a key parameter that could influence protective outcomes. Altering kinetics to orchestrate an immune response in resonance with the biological rhythm of innate and adaptive immunity could yield significant returns in improved efficacy and decreased toxicity, all without necessitating the approval of new agents. In this thesis, we explored two distinct strategies to engineer immunotherapy and vaccine kinetics. We show how these kinetics can significantly impact cellular and humoral immune responses, and we carried out detailed investigations into the underlying mechanisms that govern these temporal effects.  &#13;
&#13;
Firstly, imidazoquinolines (IMDs), small molecule agonists of Toll like receptor (TLR)-7 and/or TLR-8, are of great interest as potential anti-cancer therapeutics due to their ability to activate innate immune cells. Nevertheless, safe and effective systemic administration of these compounds in the clinic is an unsolved challenge due to dose-limiting toxicities, poor bioavailability, and severe immune-related adverse events upon intravenous administration. While attempts to deliver them via nanoparticle technologies have improved the potency of IMDs, achieving these outcomes while minimizing acute systemic inflammation has proven difficult.   &#13;
&#13;
Here, we developed a bottlebrush prodrug (BPD) IMD library as a tool to provide a detailed understanding of how the kinetics of drug release impacts safety and tumor immune stimulation. Cylindrical BPDs featuring antibody-like dimensions (~10 nm), coaxial PEG chains, and TLR-7/8 agonists linked through cleavable linkers along their backbone were synthesized using ring-opening metathesis polymerization (ROMP). By tuning the cleavable linker molecular structure, IMD-BPD constructs were identified that allowed for potent stimulation of innate immune cells in tumors while avoiding systemic increases in inflammatory cytokines, reductions in white blood cell counts, or liver toxicity. These BPDs enabled significant reductions in tumor growth in syngeneic tumor models and improved responses to anti-PD-1 checkpoint blockade. Single-cell RNA-sequencing revealed that IMD-BPDs promote dendritic cell activation and reduce immunosuppressive macrophages in the tumor microenvironment, changes that free TLR-7/8 agonists were unable to achieve.   &#13;
&#13;
Secondly, “extended dosing” of vaccines – immunization regimens that prolong exposure to antigen/adjuvant – has shown promise as a strategy to significantly augment humoral immune responses to HIV vaccines. Studies in mice and non-human primates have shown that extended dosing of immunogens can trigger long-lived germinal center responses, promote somatic hypermutation, and increase neutralizing antibody breadth. As one form of extended dosing, escalating-dose (ED) immunization, where a given dose of antigen/adjuvant is administered as 7 injections of increasing dose over 2 weeks (7-ED), has been found to be one of the most effective ways to achieve these effects. This approach provides multifaceted benefits, such as allowing for improved antigen capture on follicular dendritic cells (FDCs) and augmenting follicular helper T cell priming. Furthermore, it results in the generation of long-lived germinal centers (GCs) with a more diverse repertoire of B cell clones that enter these GCs. However, such a multi-dose regimen presents significant feasibility challenges in terms of clinical translation.  &#13;
&#13;
Here we explored the parameter space of “reduced ED” immunizations employing fewer injections, aiming to increase clinical feasibility while retaining much of the immunological benefit of ED dosing compared to traditional bolus immunization. We carried out systematic studies varying the number of doses, dose ratio, and dose intervals, immunizing mice and analyzing the subsequent immune responses for patterns that maximize the size of the antigen-specific GC response. A two-shot reduced ED regimen (2-ED), consisting of dose 1 (20% of the vaccine dose) on day 0 and a second shot (80% of the vaccine dose) after a 7-day interval, elicited prolonged optimal responses that were an order of magnitude greater than those of bolus immunization and enabled antigen capture of the second shot on FDCs, though not reaching the immune response levels elicited by the 7-ED regimen. Computational modeling of the germinal center response indicated that the 7-ED regimen is able to maximize capture of native antigen as a consequence of high antibody titers prior to the final shot. These results suggested that sustained antigen availability at the time of the second immunization in our 2-ED regimen would boost innate inflammation in lymph nodes &amp; improve antigen capture in follicles. Consistent with these predictions, we found that sustained second prime dosing by anchoring the antigen onto alum via a phosphoserine linker significantly improved the magnitude of the antigen-specific GC responses.  &#13;
&#13;
These results pave the way to safer and more potent cancer immunotherapies and vaccines via engineered kinetic approaches to administering these compounds.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154366</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Behavior of the Atmospheric Boundary Layer in the Vicinity of the Gulf Stream Sea Surface Temperature Front</title>
<link>https://hdl.handle.net/1721.1/154364</link>
<description>The Behavior of the Atmospheric Boundary Layer in the Vicinity of the Gulf Stream Sea Surface Temperature Front
Liu, Hanyuan
The evolution of the marine atmospheric boundary layer (MABL) in the vicinity of a sea surface temperature (SST) front is of particular research interest, as the large air-sea temperature and humidity differences at the surface fuel various physical processes inside the boundary layer, causing intense heat and momentum exchange. Such processes make the mesoscale MABL a scenario in which the ocean drives the atmosphere. The dominant mechanisms, although studied intensively, have yet to be fully understood due to the highly turbulent nature of the MABL. Previous studies often relied on satellite-derived SST and wind fields to investigate boundary layer dynamics, yet the coarse spatial and temporal resolution of such methods limits the understanding of MABL evolution on shorter timescales.&#13;
&#13;
In this thesis, a combination of in situ data and model simulations is used to investigate the MABL response to the SST front in the Gulf Stream region on a timescale of one day or less. Analysis of MABL structure is divided into three categories depending on the background wind strength and its direction relative to the front: cold to warm, parallel/weak, and warm to cold. Two mechanisms identified in previous studies, vertical mixing and the thermally induced pressure gradient, and their roles in MABL evolution, are studied quantitatively. A comparison between observations and model simulations allows further analysis of the contribution of moist processes, which were often considered to be of secondary importance in the past even over the ocean. Results show that vertical mixing is responsible for the majority of MABL deepening, while the pressure adjustment’s effect is more significant when the cross-frontal wind is weak. Sensitivity tests conducted in the Weather Research and Forecasting (WRF) model also show that moist processes, including surface latent heat, boundary layer transport of moisture, and cloud formation, further enhance the mixing that drives MABL changes.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154364</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Solution-Processed Stable Silver Nanowire Networks for Transparent Electrodes</title>
<link>https://hdl.handle.net/1721.1/154362</link>
<description>Development of Solution-Processed Stable Silver Nanowire Networks for Transparent Electrodes
Chae, Woo Hyun
The continued use of indium tin oxide (ITO) as a transparent electrode for next generation optoelectronic devices will face severe challenges, including the scarcity of raw materials, high cost of processing, and lack of mechanical flexibility. Emerging transparent electrodes composed of solution-synthesized silver nanowire (AgNW) networks are highly appealing alternatives, but their poor stability prevents market adoption. Thus, there is a need to develop scalable and commercially viable processes for fabricating stable AgNW-based transparent electrodes. This thesis seeks to overcome the failure modes of AgNWs by developing fully solution-based fabrication processes for nanocomposite transparent electrodes incorporating AgNWs and graphene oxide (GO) or reduced graphene oxide (RGO). In particular, electrophoretic deposition and layer-by-layer deposition were actively explored as processing techniques to achieve AgNW networks protected by thin films of GO or RGO. A first process is developed to fabricate a transferable AgNW network protected by GO on both sides, leading to exceptional chemical stability. A use case is demonstrated where the transparent electrode is integrated as a back-contact for a semitransparent organic solar cell, leading to a longer device lifetime. A second process is developed towards forming a conformal coating on AgNWs based on nanosized RGO, leading to enhanced all-around stability. A flexible transparent film heater was thus made, demonstrating its exceptional stability under harsh conditions. It is expected that the processing knowledge and structure-property relationships uncovered in this work can be extended to the integration of other emerging low dimensional materials compatible with solution-based processing, and help bridge the gap between proof-of-concept demonstrations and industrial production of functional nanocomposite materials.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154362</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activity and Stability Design Principles of Transition Metal Compounds for Decarbonization</title>
<link>https://hdl.handle.net/1721.1/154361</link>
<description>Activity and Stability Design Principles of Transition Metal Compounds for Decarbonization
Peng, Jiayu
Enabling sustainability while mitigating the ever-increasing carbon dioxide emissions is one of the most significant challenges of our time. A key element in achieving these goals lies in developing renewable energy technologies (e.g., rechargeable batteries, fuel cells, and water splitting devices) enabled by low-cost, earth-abundant materials. One of the most promising sustainable technologies is transforming earth-abundant molecules and compounds into value-added fuels, chemicals, and materials with electricity converted from solar and wind energy using electrolyzers. Moreover, as such an approach stores energy from intermittent sources in chemical bonds, renewable electricity can be regenerated utilizing fuel cells to meet our energy needs at scale and on demand. Unfortunately, the cost and efficiency of these clean energy technologies have been hampered by the slow kinetics of oxygen electrocatalysis, currently requiring costly noble-metal-based catalysts to drive these electrochemical reactions at practical rates. To tackle this challenge, transition metal compounds, such as oxides and nitrides, have been utilized as low-cost, non-precious catalysts for these sustainable energy technologies. However, the best-known non-precious catalysts to date are still much less active and stable than their precious-metal-based counterparts, particularly in acidic systems. Thus, it is crucial to elucidate the activity and stability design principles (i.e., descriptors) of transition metal compounds for these electrochemical systems. Such design principles can offer a mechanistic understanding of the physical origin of their activity and durability and provide new guiding principles to optimize these compounds for energy storage and chemical transformation. 
In this thesis, we combine electrochemical characterizations, X-ray spectroscopies, and ab initio calculations to develop intrinsic descriptors for tuning the activity and stability of transition metal oxides/nitrides for decarbonization. First, we leverage the inductive effect to design catalysts with optimized electronic structures and redox properties by incorporating highly acidic/electronegative metals into perovskite oxides and constructing hybrid organic-inorganic oxide-based catalysts with stronger tunability in electronic structures than pure oxides. Second, we establish descriptors and mechanistic insights for optimizing the stability of oxides/nitrides in acid by tuning the electronic structures, bonding interactions, and thus, dissolution energetics. Lastly, we examine opportunities to extend such a descriptor-centered approach to a data-driven materials design paradigm.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154361</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Velocity Impact of Metal Microparticles</title>
<link>https://hdl.handle.net/1721.1/154360</link>
<description>High-Velocity Impact of Metal Microparticles
Lienhard, Jasper Z.
The recent development of the laser-induced particle impact test (LIPIT) has enabled direct imaging of individual microparticles as they impact a substrate at velocities of hundreds or thousands of meters per second. This technique allows for precisely controlled high-throughput impact testing, creating new opportunities for the study of material behavior under extreme dynamic conditions. In this thesis, a series of experiments are presented which use the LIPIT to impact metal microparticles onto metal substrates across a range of controlled velocities. An ultrahigh-speed camera records the resulting rapid deformation of the impacting microparticles and impacted substrates, and post-mortem characterization is conducted on each impact site. A wide range of dynamic mechanical behaviors are observed, including jetting, surface oxide break-up, impact-induced melting, particle fragmentation, and hydrodynamic penetration. This research speaks directly to the cold spray additive manufacturing method, in which a stream of solid metal particles is impacted onto a surface to deposit metal coatings. The physics that control the upper and lower bounds of the velocity window over which cold spray can occur are explored. Microparticle impact behavior regimes are measured and mapped as a function of velocity for several different metals, and the role of surface passivation layers in controlling cold spray adhesion is examined. Finally, experiments relevant to the study of impact-induced erosion are presented, which explore the formation of solid and liquid ejecta clouds during impact and examine how kinetic energy is dissipated by melting and substrate material ejection.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154360</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the nature of fluctuations in the edge of I-mode and L-mode plasmas at ASDEX Upgrade</title>
<link>https://hdl.handle.net/1721.1/154358</link>
<description>Understanding the nature of fluctuations in the edge of I-mode and L-mode plasmas at ASDEX Upgrade
Bielajew, Rachel S.
Changes in turbulence at the plasma edge are thought to lead to the formation of the edge transport barrier in tokamak plasmas, which defines the transition from low confinement (L-mode) to high confinement operation such as H-mode or the "improved" confinement regime I-mode. I-mode is of particular interest for future reactor operation because its unique transport barrier in heat but not particles prevents impurity accumulation and keeps the I-mode edge away from instability boundaries which lead to damaging Edge-Localized Modes. The mechanism for the formation of the edge transport barrier is an open question across high confinement regimes. The unique transport barrier formed in I-mode, with separated heat and particle transport channels, must be understood. This thesis explores fluctuations in the edge region of plasmas at ASDEX Upgrade (AUG) to better understand how changes in fluctuations relate to changes in transport between L and I-mode, as well as L-modes across different magnetic configurations (favorable and unfavorable grad B drift). An extensive turbulence diagnostic suite at AUG is used for experimental exploration of edge turbulence, with special focus on measurements of radiated temperature fluctuations from the Correlation Electron Cyclotron Emission diagnostic. &#13;
&#13;
Comparative studies of L-mode and I-mode edge turbulence reveal that the Weakly Coherent Mode (WCM), previously considered a marker of I-mode, is present in both regimes.  The WCM is present across a wide parameter space of collisionality in both L-mode and I-mode.  The presence of the WCM and its electron temperature fluctuation amplitude are not correlated with the quality of the global confinement.  Properties of the WCM are compared in detail between L and I-mode, with a focus on unfavorable grad B drift plasmas.  While some properties of the WCM are consistent between L and I-mode, such as its wavenumbers and radial location, other WCM properties change between the confinement regimes, such as its coupling to a low frequency edge oscillation.  Additionally, studies of matched L-modes in favorable and unfavorable grad B drift magnetic configurations show that the WCM can form in L-modes of both magnetic configurations, and that its onset can occur even at power levels far below the L to I-mode and L to H-mode transition.  While the nature of fluctuations in the edge of the matched plasmas is seen to be dominated by a WCM-like feature in both the favorable and unfavorable configurations, differences in turbulence damping from velocity shear and the resulting differences in turbulence fluctuation amplitude seem to play into the different H-mode power threshold between the configurations.  In addition to these experimental studies, gyrokinetic studies of the L-mode and I-mode are performed at radial locations inside the transport barrier region, with a focus on the outer core.  These gyrokinetic studies reveal that the nature of L-mode and I-mode turbulence is very similar in the outer core in terms of the identities of instabilities present, and the response of turbulent heat flux to changes in gradient drives.  
The fluctuations associated with the I-mode WCM are not inconsistent with the edge transport required for the unique I-mode transport barrier in heat and not particles, however, the presence of the WCM in L-mode shows that this fluctuation is not a unique marker of I-mode.  These findings improve our understanding of I-mode phenomenology overall, which is important for its extrapolation to future reactors.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154358</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Processing and Optical Uses of Van der Waals Layered Materials</title>
<link>https://hdl.handle.net/1721.1/154357</link>
<description>Processing and Optical Uses of Van der Waals Layered Materials
Jo, Seong Soon
This thesis is mainly divided into two parts: processing and optical uses of (1) transition metal dichalcogenides and (2) group IV-VI monochalcogenides. We first study the processing of large-area polycrystalline MoS₂ and TiS₂ films for photonics and microelectronics, focusing mainly on lowering the processing temperature by controlling the oxygen concentration. We identify opposite roles of oxygen during the sulfurization process, acting as a reaction catalyst for MoS₂ formation and as an inhibitor for TiS₂ formation. In MoS₂, O₂ promotes crystallization at lower temperatures (as low as 400 °C). On the other hand, the kinetic barrier to replacing Ti-O bonds with Ti-S bonds increases as oxygen is incorporated into the system. We therefore design the system with a lowered oxygen background, enabling sulfurization temperatures as low as 500 °C and the formation of smooth films by suppressing roughening. Beyond investigating the roles of oxygen during thin-film growth, it is crucial to thoroughly understand the processing and properties of native oxides when designing semiconductor devices. Combining modeling with experimentation, we calculate the oxidation rate and uncover the mechanisms of spontaneous oxidation of bulk single crystals of ZrSxSe₂₋ₓ alloys and MoS₂. ZrSxSe₂₋ₓ alloys oxidize rapidly, and the oxidation rate increases with Se content. Oxidation of basal surfaces is initiated by favorable O₂ adsorption and proceeds by a mechanism of Zr-O bond switching that collapses the van der Waals gaps and is facilitated by progressive redox transitions of the chalcogen. The rate-limiting process is the formation and out-diffusion of SO₂. In contrast, MoS₂ basal surfaces are stable due to unfavorable oxygen adsorption. Furthermore, we investigate the role of various processing parameters involved in thermal and non-thermal oxidation, revealing the growth mechanism.
In the second part, a new switching mechanism for light-controlling-light is examined in IV-VI monochalcogenides and black phosphorus (bP) for photonic integrated circuits. This work is inspired by recent theoretical work suggesting that the in-plane crystal orientation in such materials can be switched through an ultra-fast, displacive (i.e., non-diffusive), non-thermal, and low-power mechanism by strong electric fields, due to in-plane dielectric anisotropy. We use numerical device modeling to study device concepts based on switching the crystal orientation of SnSe and bP in photonic integrated circuits. Furthermore, we experimentally observe a preliminary switching behavior in bulk single-crystal SnSe using single THz pump-probe spectroscopy.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154357</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrahigh frequency characterization of complex materials using transient grating techniques</title>
<link>https://hdl.handle.net/1721.1/154355</link>
<description>Ultrahigh frequency characterization of complex materials using transient grating techniques
Crimmins, Timothy Francis,
            1972-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2000; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154355</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The regulation of brain serotonin concentrations by dietary factors affecting brain tryptophan</title>
<link>https://hdl.handle.net/1721.1/154354</link>
<description>The regulation of brain serotonin concentrations by dietary factors affecting brain tryptophan
Fernstrom, John Dickson.
Daily rhythms occur in the concentrations of tryptophan in rat plasma and brain and of serotonin in rat brain. To determine whether these normally-occurring changes in plasma and brain tryptophan could account for the variation in brain serotonin, we injected rats with different doses of L-tryptophan and measured the responses of the plasma and brain tryptophan and the brain serotonin pools at various times after injection. A dose of 12.5 mg/kg, given at the time of day when plasma and brain tryptophan levels are normally lowest, produced elevations in plasma and brain tryptophan and in brain serotonin which approached, but did not exceed, peak daily concentrations. Thus, changes in plasma and brain tryptophan within the normal dynamic range are capable of producing significant changes in brain serotonin levels. Because the ingestion of carbohydrate produced significant alterations in plasma and brain tryptophan and in brain serotonin, experiments were performed to test the response of these three pools to the consumption of another constituent of food, protein. Following the ingestion of the same carbohydrate diet supplemented with casein, 18% dry weight, plasma tryptophan levels became elevated, but brain tryptophan and serotonin concentrations did not change. Inasmuch as protein contains amino acids that compete with tryptophan for transport into the brain, the influx of these amino acids along with tryptophan into the circulation following protein ingestion may have produced an effective inhibition of tryptophan uptake into the brain. Thus, carbohydrate diets containing an amino acid mixture approximating casein in amino acid composition, but lacking the amino acids thought to compete with tryptophan for transport into the brain (tyrosine, phenylalanine, leucine, isoleucine, and valine), were fed to rats. Following the ingestion of this diet, plasma and brain tryptophan and brain serotonin and 5-hydroxyindoleacetic acid levels all increased. 
If animals were fed the amino acid mixture diet lacking aspartate and glutamate (amino acids thought not to compete with tryptophan for transport) instead of the large neutral amino acids, or the complete amino acid mix diet (including the large neutral amino acids), plasma tryptophan concentrations rose, but no increases in brain tryptophan, serotonin, or 5-hydroxyindoleacetic acid occurred. The concentrations of serotonin and its major metabolite in brain, which appear to be influenced by tryptophan availability to the brain, thus are subject not only to plasma tryptophan levels, but also to the levels of several other amino acids in plasma. These results support the hypothesis that the rate of serotonin synthesis in brain is influenced by tryptophan availability. They demonstrate a remarkable sensitivity of brain serotonin concentrations to changes in brain tryptophan levels within the normal dynamic range. Long-term changes in brain serotonin were also studied in rats consuming a diet containing a naturally-occurring protein very low in tryptophan content (corn protein). In these animals, the plasma and brain tryptophan pools were greatly depressed after five weeks on the diet; brain serotonin concentrations were correspondingly decreased. Similar results were obtained in rats eating smaller than normal quantities of a diet containing a protein with normal amounts of tryptophan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references (pages 98-107).
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154354</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The use of photocatalysis in the development of a method for the determination of the intermediates of gas-phase, metal-catalyzed reactions</title>
<link>https://hdl.handle.net/1721.1/154352</link>
<description>The use of photocatalysis in the development of a method for the determination of the intermediates of gas-phase, metal-catalyzed reactions
Modell, Michael,
            1924-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1964; Vita.; Includes bibliographical references (leaves 167-170).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154352</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The organization of consensus in a large southern Italian city : the social bases of an urban political machine.</title>
<link>https://hdl.handle.net/1721.1/154344</link>
<description>The organization of consensus in a large southern Italian city : the social bases of an urban political machine.
Chubb, Judith,
            1947-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154344</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Excitation of some levels of N14 by electron scattering.</title>
<link>https://hdl.handle.net/1721.1/154341</link>
<description>Excitation of some levels of N14 by electron scattering.
Ensslin, Norbert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1972; Leaf 178 is inverted. Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154341</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical vapor deposition of solid solutions in the system zirconia-yttria.</title>
<link>https://hdl.handle.net/1721.1/154339</link>
<description>Chemical vapor deposition of solid solutions in the system zirconia-yttria.
Ferrao, Luiz Paulo Camargo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154339</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partially ordered sets with hooklengths : an algorithmic approach.</title>
<link>https://hdl.handle.net/1721.1/154325</link>
<description>Partially ordered sets with hooklengths : an algorithmic approach.
Sagan, Bruce Eli.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1979; Vita.; Bibliography: leaves 96-98.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154325</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereospecific total synthesis of beta-lactam antibiotics from peptide precursors.</title>
<link>https://hdl.handle.net/1721.1/154324</link>
<description>Stereospecific total synthesis of beta-lactam antibiotics from peptide precursors.
Christie, Michael Allen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154324</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The interpretation of visual motion.</title>
<link>https://hdl.handle.net/1721.1/154323</link>
<description>The interpretation of visual motion.
Ullman, Shimon.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Bibliography: p. 248-254.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154323</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, Development, and Study of New High-Throughput Biodegradation Test for Polymers</title>
<link>https://hdl.handle.net/1721.1/154321</link>
<description>Design, Development, and Study of New High-Throughput Biodegradation Test for Polymers
Av-Ron, Sarah Halina Mindel
The rise of plastic pollution requires an urgent response. Substituting long-lasting commodity plastics with biodegradable alternatives has been a thoroughly researched solution. Unfortunately, current biodegradation standards do not allow for fast biodegradation measurements on materials, and they present other disadvantages: they require a large amount of material, they lack directions for sample preparation (leading to limited comparability between different biodegradation results), and their experimental setups are expensive in both money and time. This thesis focuses on the design and improvement of an affordable biodegradation test based on the clear-zone assay. First, the test is shown to be high-throughput and is used to generate biodegradability information for over 600 polymer chemistries. This large dataset highlights some relationships between polymer chemistry and biodegradability. Second, an improved design of a clear-zone monitoring robot leads to higher accuracy of the extracted data. In combination with a new sample preparation process using polymer nanoprecipitation, the system enables the determination of biodegradation rates. Finally, a kinetic model capable of reproducing clear-zone assay results is developed, allowing biodegradation rates to be related to kinetic rates.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154321</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infrared Thermometry Analysis of Surface Effects in Subcooled and Saturated Boiling of Water</title>
<link>https://hdl.handle.net/1721.1/154320</link>
<description>Infrared Thermometry Analysis of Surface Effects in Subcooled and Saturated Boiling of Water
Aguiar, Gustavo Matana
In this work, the dynamics of subcooled and saturated flow boiling are studied, focusing on the influence of surface characteristics, visualized through advanced diagnostics capturing micro-scale phenomena. Utilizing specially crafted thin-film infrared heaters, we measured bubble dynamics parameters and correlated them with surface properties. Nano-smooth and rough surfaces, comparable to nuclear reactor fuel claddings, were examined, and a nanoporous layer was applied to mimic fouling, enabling a detailed study of its effect on boiling heat transfer. A range of subcooled flow boiling tests were conducted, encompassing diverse mass velocities and degrees of subcooling. Through this, bubble dynamic parameters were measured across different surfaces, elucidating the effects of surface characteristics. The interplay between liquid heat transfer and bubble dynamics during subcooled flow boiling was another focal point of our investigation. In the saturated flow boiling tests carried out at a single mass velocity transitioning from a subcooled regime to annular flow, we analyzed the correlation between critical heat flux (CHF) and vapor quality across various surfaces. Specific attention was given to the CHF phenomena in slug flow, linking it to the flooding phenomena. In annular flow, the study pivoted towards understanding the transition of CHF mechanisms, from departure from nucleate boiling to dryout, and the consequential impact of disturbance waves. The findings of this study are instrumental in enhancing the accuracy of heat flux partitioning methods. The enriched understanding and insights are anticipated to be integral for developing more refined multiphase computational fluid dynamics codes, crucial for designing efficient heat-dissipating mechanisms in applications such as nuclear reactors.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154320</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>L-dopa metabolism and the regulation of brain polysome aggregation</title>
<link>https://hdl.handle.net/1721.1/154252</link>
<description>L-dopa metabolism and the regulation of brain polysome aggregation
Weiss, Bette Fishman.
Intraperitoneal administration of L-dopa (50-300 mg/kg) caused a marked disaggregation of brain polysomes in immature rats (15-20 g body weight). Polysome disaggregation was maximal between 40 and 60 minutes after injection and not significant at 20 and 120 minutes. Older rats (50 and 100 g body weight) showed a similar response, but required larger doses of the amino acid (500 mg/kg). Disaggregation could not be sustained in 100 g rats by chronic single injections of 500 mg/kg per day for 10 days; disaggregation occurred only if the last dose of L-dopa was 1 hour before sacrifice. Polysome disaggregation after oral L-dopa was achieved in 100 g rats if the dopa administered was very high (3 g/kg) and the time after administration was increased to 90-120 minutes. Brain polysomes may be unique in their response to L-dopa, as neither rat liver nor skeletal muscle showed a change in polysome aggregation after L-dopa injection. To determine the mechanism of L-dopa on polysome disaggregation, the metabolism of L-dopa in the brain was examined. Single i.p. doses significantly elevated tryptophan content in the brain at the three ages of rats examined; hence the effect of L-dopa on polysomes did not result from the unavailability of this amino acid. No other amino acid was depleted after L-dopa administration to account for brain polysome disaggregation. The effects of intraperitoneal L-dopa and related drugs on brain polysomes and dopa metabolism were compared in 50 g rats to determine which subsequent metabolic pathway within the brain was responsible for the polysome disaggregation. At the time after L-dopa when polysome disaggregation occurred, there were increased brain dopa and 3-O-methyldopa levels and a depletion of S-adenosylmethionine. There was a four-fold increase in dopamine and a small, insignificant increase in norepinephrine. 
The disaggregation of brain polysomes in rats after L-dopa administration was not reproduced by administering its metabolite 3-O-methyldopa, nor by giving D-dopa. D-dopa also depleted the brain of S-adenosylmethionine, to synthesize 3-O-methyldopa, but was not converted to catecholamines. Polysome disaggregation did not take place when L-dopa was given after a decarboxylase inhibitor; there was no increase in dopamine when L-dopa was administered after the inhibitor, although the concentration of dopa in the brain was four times the level attained with L-dopa injection alone. These experiments suggested that formation of a catecholamine was an obligatory requirement for polysome disaggregation in the brain after L-dopa. Polysome disaggregation did not occur when dopamine or norepinephrine were administered intracisternally. However, polysome disaggregation was potentiated when 100 mg/kg L-dopa, a dose that alone did not disaggregate polysomes, was given after a monoamine oxidase inhibitor. Blocking one route of catecholamine metabolism by this inhibitor resulted in a much higher level of catecholamines in the brain than expected after this dose of L-dopa alone; the dopamine concentration was equivalent to that after 500 mg/kg L-dopa alone. These observations suggested that the mechanism by which L-dopa disaggregated brain polysomes involved its conversion to dopamine within the majority of brain cells. The responses of brain polysomes and brain tryptophan to L-dopa administration to rats were apparently unrelated, because the dose responses and time courses of these phenomena were different. Tryptophan levels rose before and remained elevated after polysomes were disaggregated. Administration to 50 and 100 g rats of low doses of L-dopa, which could not disaggregate brain polysomes, elevated brain tryptophan content. The increase in brain tryptophan content after L-dopa occurred with a concurrent larger increase in plasma tryptophan level. 
The plasma tryptophan rise was not due to an alteration in plasma insulin level, nor could it be attributed to a decrease in liver or skeletal muscle tryptophan content. Except for increases in tryptophan and dopa, no plasma amino acid level was altered in rats after L-dopa. The increase in brain tryptophan content was no longer seen in 100 g rats treated with chronic single injections of 100 mg/kg per day, and if the last dose of L-dopa was one day before sacrifice, brain tryptophan content declined.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references (pages 140-151).
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154252</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Families of ideals in the ring of power series in two variables.</title>
<link>https://hdl.handle.net/1721.1/154243</link>
<description>Families of ideals in the ring of power series in two variables.
Iarrobino, Anthony A.
            (Anthony Ayers),
            1943-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1970; Vita.; Bibliography: leaf 145.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154243</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A methodology for assessing alternative water acquisition and water use strategies for western energy facilities in the American West</title>
<link>https://hdl.handle.net/1721.1/154234</link>
<description>A methodology for assessing alternative water acquisition and water use strategies for western energy facilities in the American West
Shaw, John J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1981; Bibliography: leaves 264-269.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154234</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive settlements of clay foundations subjected to cyclic loading.</title>
<link>https://hdl.handle.net/1721.1/154230</link>
<description>Predictive settlements of clay foundations subjected to cyclic loading.
Silva-Tulla, Francisco.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1977; Vita.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154230</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Chemically-Defined Platform Materials for Localized Delivery of RNA Therapeutics</title>
<link>https://hdl.handle.net/1721.1/154211</link>
<description>Development of Chemically-Defined Platform Materials for Localized Delivery of RNA Therapeutics
Andresen, Jason Lee
Most nucleic acid therapeutics have been engineered for systemic delivery with accumulation predominantly occurring in the liver. Currently, mRNA and siRNA represent the most clinically advanced nucleic acid modalities which can be used to address underlying under- or over-expression of crucial proteins, respectively. To achieve therapeutic impact, RNA delivery often utilizes lipid nanoparticles (LNP) as non-viral vehicles to encapsulate these cargoes for intracellular uptake and efficacy. However, these drugs demonstrate the broad potential to correct virtually any misregulated protein, and thus should not be confined to efficacy in the liver. A wide array of target organs exist which cannot always be reached through this route of administration. Tissues such as the front of the eye (FotE) or the brain both exhibit biological barriers to systemic drug delivery which could be overcome using localized delivery of RNA. &#13;
&#13;
Significant efforts are required to elucidate the structure-activity relationships needed to develop LNPs for localized delivery. In this thesis, we outline several strategies for optimization of localized RNA delivery in specific tissues of interest. Using a considered screen of variables including the chemical structure of LNP components and the delivery media, we optimize LNPs for delivery of mRNA to corneal epithelial cells present in the FotE. These improvements were translatable across LNPs containing different ionizable lipids, generally considered the most important factor for delivery efficacy. LNP modifications also acted synergistically, combining the improved effects to yield an LNP 26-fold more potent in corneal epithelial cells than benchmark liver-targeting formulations. Another rationally designed LNP library identified the most potent LNP (named MG-LNP) for delivery of either mRNA or siRNA to microglia both in vitro and in vivo after localized injection. As microglia are notoriously difficult to transfect, this represents one of the most potent tools available to screen RNA therapeutics in this cell type. Using these MG-LNPs, we delivered siRNA against the inflammatory transcription factor PU.1 and observed a decrease in the neuroinflammation typical of neurodegenerative diseases. Finally, we developed a novel hydrogel system which can be used as a cell culture scaffold through 3D bioprinting. This hydrogel can be used to encapsulate cells of interest for future efforts in high-throughput screening to allow for improved selection of RNA therapeutics and LNP design for local delivery.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154211</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Dynamically Orthogonal Modeling and Bayesian Learning for Underwater Acoustic Propagation</title>
<link>https://hdl.handle.net/1721.1/154210</link>
<description>Stochastic Dynamically Orthogonal Modeling and Bayesian Learning for Underwater Acoustic Propagation
Hajj Ali, Wael
Sound waves are critical for a variety of underwater applications, including communication, navigation, echo-sounding, environmental monitoring, and marine biology research. However, the incomplete knowledge of the ocean environment and acoustic parameters makes reliable acoustic predictions challenging. This is due to the sparse and heterogeneous data, as well as to the complex ocean physics and acoustics dynamics, multiscale interactions, and large dimensions. There are thus several sources of uncertainty in acoustic predictions. They include the ocean current, temperature, salinity, pressure, density, and sound speed fields, the bottom topography and seabed fields, the source and receiver properties, and finally the model equations themselves. The goals of this thesis are to address these challenges. Specifically, we: (1) obtain, solve, and verify differential equations for efficient probabilistic underwater acoustic modeling in uncertain environments; (2) develop theory and implement algorithms for the Bayesian nonlinear inference and learning of the ocean, bathymetry, seabed, and acoustic fields and parameters using sparse data; and (3) demonstrate the new methodologies in a range of underwater acoustic applications and real sea experiments, showcasing new capabilities and leading to improved understanding.&#13;
&#13;
In the first part, we derive, discretize, implement, and verify stochastic differential equations that (i) capture dominant input uncertainties in the environment (e.g., ocean, bathymetry, and seabed) and in the acoustic parameters (e.g., source location, frequency, and bandwidth), and (ii) predict the acoustic pressure fields and their probability distributions, respecting the nonlinear governing equations and non-Gaussian statistics. Starting from the acoustic Parabolic Equation (PE), we develop Dynamically Orthogonal (DO) differential equations for range-optimal acoustic uncertainty quantification. Using DO expansions for the input uncertainties, we develop the reduced-order DO-PEs theory for the Narrow-Angle PE (NAPE) and Padé Wide-Angle PE (WAPE) stochastic partial differential equations (PDEs). We verify the discretized DO-PEs in new stochastic range-independent and range-dependent test cases, and demonstrate their advantages over state-of-the-art methods for uncertainty quantification and wave propagation in random media. Results show that a single DO-PE simulation can accurately predict stochastic range-dependent acoustic fields and their full non-Gaussian probability distributions, with computational savings of several orders of magnitude when compared to direct Monte Carlo methods.&#13;
&#13;
In the second part, we extend recent nonlinear Bayesian data assimilation (DA) to the inference and learning of ocean-bathymetry-seabed-acoustic fields and parameters using sparse acoustic and oceanographic data. We combine the acoustic DO-PEs with Gaussian mixture models (GMMs) to predict probability densities in the DO subspace, allowing for efficient non-Gaussian estimation of state variables, parameters, and model functions themselves. The joint multidisciplinary estimation is enabled by state augmentation where the ocean-acoustic-bathymetry-seabed states and parameters are fit together to GMMs within the DO subspace. The new GMM-DO ocean acoustic inference system is validated by assimilating sparse data to infer the source depth, source frequency, and acoustic and environment fields and parameters in five new high-dimensional inference test cases based on state-of-the-art oceanographic and geoacoustic benchmarks. We evaluate the convergence to inference parameters and quantify the learning skill. Results show that our PDE-based Bayesian learning successfully captures non-Gaussian statistics and acoustic ambiguities. Using Bayes’ law, it provides accurate probability distributions for the multivariate quantities and enables principled learning from noisy, sparse, and indirect data.&#13;
&#13;
In the final part, we integrate our acoustic DO-PEs and GMM-DO frameworks with the MSEAS primitive equation ocean modeling system to enable unprecedented probabilistic forecasting and learning of ocean physics and acoustic pressure and transmission loss (TL) fields, accounting for uncertainties in the ocean, acoustics, bathymetry, and seabed fields. We demonstrate the use of this system for low to mid-frequency propagation with real ocean data assimilation in three regions. The first sea experiment takes place in the western Mediterranean Sea where we showcase the system’s performance in predicting ocean and acoustic probability densities, and assimilating sparse TL and sound speed data for joint ocean physics-acoustics-source depth inversion in deep ocean conditions with steep ridges. In the second application, we simulate stochastic acoustic propagation in Massachusetts Bay around Stellwagen Bank and use our GMM-DO Bayesian inference system to assimilate TL data for acoustic and source depth inversion in shallow dynamics with strong internal waves. Finally, in the third experiment in the New York Bight, we employ our system as a novel probabilistic approach for broadband acoustic modeling and inversion. Overall, our results mark significant progress toward end-to-end ocean-acoustic systems for new ocean exploration and management, risk analysis, and advanced operations.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154210</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cloud Ecologies: An Environmental Ethnography of Data Centers</title>
<link>https://hdl.handle.net/1721.1/154209</link>
<description>Cloud Ecologies: An Environmental Ethnography of Data Centers
Gonzalez, Steven
This dissertation is a multi-sited ethnography of cloud computing infrastructures and their wider environmental impacts in the northeastern and southwestern United States, Puerto Rico, and Singapore. Informed by participant observation among technicians and ethnographic engagement with the wider communities where data centers are sited, I situate the Cloud’s material, political, economic, and social resonances as local rather than global in scope. What follows is a comparative reckoning of the Cloud’s “metabolic rifts” (Marx 1863), defined by the particular geographic, climatic, economic, political, and social constraints and affordances in each of my field sites. Given that a significant portion of this fieldwork was undertaken during the height of COVID-19, the dissertation’s chapters are interrupted by experimental vignettes I call “precipitations”, which call for theoretical and methodological innovations amid pandemic lockdowns and the near-impossibility of human subjects research.&#13;
In Chapter I, Cloud Temporalities, I document the quotidian rhythms of maintenance, repair, and thermal management in data centers that in turn rehabilitate the masculine subjectivities of technicians. The ‘emic’ temporal formations of uptime (success) and downtime (failure) are linked to the cold that assures the former and the runaway heat that brings about the latter. Through deep analysis of technicians’ behaviors, speech patterns, and discourses, I reveal how temporality (uptime and downtime), temperature (heat), and masculine expertise converge in data centers, introducing the terms “thermotemporalities” and “thermomasculinities”. Chapter II, Cloud Clamor, ethnographically reconstructs the sonic experiences of Arizona residents located in close proximity to data centers. I excavate a shared lexicon of physiological and psychic harms articulated by residents exposed to data center “noise pollution”, tuned to wider sociopolitical disturbances amid the rise of “woke” and “anti-woke” discourses. I situate the experiences of Arizona residents within the larger history of noise regulation in the United States, linking their collective pursuit of “silence” to sonoracism and Allison Martin’s concept of “sonic gentrification.” Additionally, I introduce settler acoustics, a narrative complex in which the Sonoran Desert wilderness is repeatedly cast as “empty” and “barren” and is thus figured as the preferred receptacle for the Cloud’s sonic waste (over suburbia), despite settler histories of dispossession and the ongoing presence of indigenous communities there. In Chapter III, Cloud Hydraulics, I contrast the extreme work environments in the arid data centers of Arizona to their humid counterparts in the tropics of Puerto Rico and Singapore, revealing a temperate bias that pervades computational design and practice. I trace the hydraulic practices I document in data centers to deeper histories of air conditioning, speleology, and architecture.
I turn to figuration to narrate the hydraulic paradox of the Cloud as simultaneously hydrophilic and hydrophobic, as part of the hydrological cycle and vulnerable to the deluges precipitated by hurricanes. Inspired by the work of Mél Hogan and Stefan Helmreich, I introduce “limit ecologies” as a framework for apprehending cloud computing’s turbulent expansion into submarine and extraterrestrial frontiers.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154209</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherence, Dephasing, and Quantum Interference in Colloidal Perovskite Nanocrystals</title>
<link>https://hdl.handle.net/1721.1/154208</link>
<description>Coherence, Dephasing, and Quantum Interference in Colloidal Perovskite Nanocrystals
Kaplan, Alexander E.K.
As the development of photonic quantum technologies continues to be pushed forward, the need for a source of quantum light, the so-called quantum emitter, has been ever-growing. The requirements for a quantum emitter are onerous: indistinguishable photons or entangled photon pairs need to be generated on-demand, which necessitates transform-limited emission. The hunt for ideal quantum emitters is ongoing: while there are many strong candidates, none meet all of the necessary requirements to enable the full capabilities of potential quantum technologies. Recently, colloidal cesium lead bromide perovskite nanocrystals have garnered interest for their potential use as quantum emitters. Because they are novel emitters, perovskite nanocrystals demand further characterization, not only to learn more about the fundamental photophysics of these materials but also to establish their inherent properties as coherent sources of light.&#13;
&#13;
In this thesis, I discuss what it means to be a quantum emitter, what it means for light to behave quantum mechanically, which quantum emitter properties are important, and how to measure those properties. I discuss the intrinsic capabilities of cesium lead halide perovskites and what primes them to be the first colloidal quantum emitter. This thesis chronicles the measurement of Hong-Ou-Mandel (HOM) two-photon interference, the ubiquitous measure of quantum interference, demonstrating HOM visibility in colloidal materials for the first time. I also detail what types of experiments and further work can be done to build on what we know now, and hopefully provide insight into interesting avenues of future scientific inquiry.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154208</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Users on Social Media for Better Content Credibility</title>
<link>https://hdl.handle.net/1721.1/154204</link>
<description>Empowering Users on Social Media for Better Content Credibility
Jahanbakhsh, Farnaz
As misinformation raged in online social spaces and threatened people’s lives and even democracy, platforms rose as the authority on misinformation detection and moderation. However, concerns about freedom of speech and listening rights, the autonomy of individuals in deciding what content to consume, and the misalignment of incentives between users and platforms should give us pause before accepting this centralized moderation as the optimal solution. In this dissertation, I have explored an alternative approach to misinformation moderation: a democratized one that empowers every user to have a say in what content they consider misinforming, make decisions about what they want to do with such content, and help their friends avoid misinformation. I have investigated the following questions: 1) how to alter the design of social media platforms to give users a say in what counts as misinformation, and what a social media platform that provides this empowerment would look like; 2) how this user empowerment changes the accuracy of content that users share; 3) how to design tools that enable this user empowerment on the web and work on all platforms without needing support from them; 4) how to leverage AI not to impose "the truth" on users, but to help amplify users’ assessments; and 5) how to empower users to go beyond labeling content accuracy and to modify and "fix" online content.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154204</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on the physical structure, properties and operation of ionic liquid electrosprays in the pure-ion mode</title>
<link>https://hdl.handle.net/1721.1/154203</link>
<description>Studies on the physical structure, properties and operation of ionic liquid electrosprays in the pure-ion mode
Gallud Cidoncha, Ximo
Electrosprays operating in the pure-ion mode exhibit compelling characteristics for micropropulsion, such as the ability to achieve high specific impulses with high efficiency. Liquid metal and ionic liquid-based electrospray sources are the most well-established types that operate in this mode. Understanding the emission physics of ionic liquids has proven challenging due to their distinct differences from liquid metals, which impose limitations on the stability and current throughput of these sources. Notably, the minor role of space charge, lower surface tension coefficient, and limited conductivity of ionic liquids appear to significantly impact their operational range, restricting them to a narrow set of extracting potentials, requiring sufficient hydraulic impedance, and limiting the range of meniscus sizes to the micrometer scale. The latter constraint poses observational challenges that limit their experimental study.&#13;
&#13;
Electrohydrodynamic numerical modeling has been instrumental in understanding the operational conditions of these sources, although existing models have not been able to reproduce normal experimental situations. The primary contribution of this thesis is the efficient implementation of an electrohydrodynamic model for ionic liquid electrosprays in the pure-ion mode, which can be experimentally validated, and that aims to explore electrode geometrical effects, the current, and the stability of ionic liquid pure-ion electrospray menisci. The model takes into account the specific geometry of the sources and reveals emitted current ranges where the pure-ion regime can be sustained. Current bounds are wider at very small meniscus sizes (&lt;10 &#120583;m), lower critical field for emission, and higher electrical conductivity. The model unveils a potentially universal range of electric fields local to the meniscus which, in the limit of high impedance, axially symmetric configuration, and negligible space charge, can be extended to the stable cone-jet mode. This range begins at the extinction voltage, where in the limit of negligible reservoir pressure a conical geometry reminiscent of the Taylor cone is postulated, and extends until the electric field near the electrode holding the meniscus reaches an electric pressure approximately twice the surface tension of a sphere with the same radius as the meniscus for the geometries tested. The specific value depends weakly on the geometrical details of the sustaining electrodes. Experimental efforts reveal that this limit likely corresponds to a bifurcation into two emitting menisci.&#13;
&#13;
Further observations from the model indicate that the emitted current is likely independent of electric conductivity and the critical field for emission, similar to liquid metal ion sources. The modeling suggests that the emitted flow rate is primarily determined by upstream conditions of the flow; in specific cases involving small meniscus sizes and low beam perveance, space charge also becomes a factor. These upstream conditions encompass the local pressure, influenced solely by hydraulic impedance, surface tension coefficient, meniscus radius, and reservoir pressure, as well as the local electric field near the meniscus anchoring point to the electrode, rather than any coefficient relevant only within the emission region.&#13;
&#13;
Preliminary validation efforts conducted in this thesis suggest a moving meniscus, where the radius and impedance properties of emitting pure-ion menisci may not remain constant during current-voltage excursions in porous emitters.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154203</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modification of Biopolymers Using Palladium Oxidative Addition Complexes</title>
<link>https://hdl.handle.net/1721.1/154202</link>
<description>Modification of Biopolymers Using Palladium Oxidative Addition Complexes
Rodriguez, Jacob J. L.
Chapter 1: Introduction to the Chemical Modification of Biopolymers&#13;
This chapter introduces biopolymers as a discrete chemical entity via a brief historical account of the field of protein biochemistry. Other details, such as the therapeutic role of key biopolymers, chemical modification strategies, palladium organometallic chemistry, and recent developments in Pd-mediated bioconjugation, are provided as well to give context for the following chapters.&#13;
&#13;
Chapter 2: Amphiphilic Biaryl Monophosphine Ligands by Regioselective Sulfonation&#13;
Amphiphilic ligands are valued for their ability to facilitate organometallic reactions in the presence of water. The regioselective sulfonation of a series of commercially available biaryl monophosphines to generate amphiphilic ligands is presented. In this one-step protocol, the temperature and the addition of fuming sulfuric acid were carefully controlled to arrive at sulfonated biaryl monophosphine ligands in high yields with &gt;95% regioselectivity, without the need for chromatographic purification.&#13;
&#13;
Chapter 3: Palladium-Mediated Synthesis of Protein–Polyarene Conjugates&#13;
Catalyst transfer polymerization (CTP) is widely applied to the synthesis of well-defined π-conjugated polymers. Unlike other polymerization reactions that can be performed in water (e.g., controlled radical polymerizations and ring-opening polymerizations), CTP has yet to be adapted for the modification of biopolymers. Here, we report the use of protein–palladium oxidative addition complexes (OACs) that enable catalyst transfer polymerization to furnish protein–polyarene conjugates. These polymerizations occur with electron-deficient monomers in aqueous buffers open to air at mild (≤37 °C) temperatures with full conversion of the protein OAC and an average polymer length of nine repeating units. Proteins with polyarene chains terminated with palladium OACs can be readily isolated. Direct evidence of protein–polyarene OAC formation was obtained using mass spectrometry, and all protein–polyarene chain ends were uniformly functionalized via C–S arylation to terminate the polymerization with a small molecule thiol or a cysteine-containing protein.&#13;
&#13;
Chapter 4: Oligonucleotide Bioconjugation with Bifunctional Palladium Reagents&#13;
Organometallic reagents enable practical strategies for bioconjugation. Innovations in the design of water-soluble ligands and the enhancement of reaction rates have allowed chemoselective cross-coupling reactions of peptides and proteins to be carried out in water. There are currently no organometallic-based methods for oligonucleotide bioconjugation to other biomolecules. Here we report bifunctional palladium(II) oxidative addition complexes (OACs) as reagents for high-yielding oligonucleotide bioconjugation reactions. These bifunctional OACs react chemoselectively with amine-modified oligonucleotides to generate the first isolable, bench-stable oligonucleotide–palladium(II) OACs. These complexes undergo site-selective C–S arylation with a broad range of native thiol-containing biomolecules at low micromolar concentrations in under one hour. This approach provided oligonucleotide–peptide, oligonucleotide–protein, oligonucleotide–small molecule, and oligonucleotide–oligonucleotide conjugates in &gt;80% yield and afforded conjugation of multiple copies of oligonucleotides onto a monoclonal antibody.&#13;
&#13;
Chapter 5: Chemoselective Modification of mRNA Using Palladium Oxidative Addition Complexes&#13;
Therapeutics derived from mRNA rely on non-canonical, covalent modifications to be effective. Chemically altered, non-natural nucleotides are heavily incorporated into therapeutic mRNA, resulting in reduced immunogenicity and prolonged stability which, together, significantly enhance translation. While these alterations to mRNA chemical structure have been extensively studied, larger modifications, like conjugates of mRNA to proteins and other large nucleic acids, are comparatively underexplored. Herein we use chemoenzymatic methods to append in vitro transcribed mRNA with a thiol, allowing for further derivatization including the formation of a palladium mRNA oxidative addition complex (OAC).
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154202</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Tumor-Localized Therapies for Cancer – from Antioxidant Enzymes to Cytokines and Radiation Therapy</title>
<link>https://hdl.handle.net/1721.1/154198</link>
<description>Engineering Tumor-Localized Therapies for Cancer – from Antioxidant Enzymes to Cytokines and Radiation Therapy
Sheen, Allison Mae
Surgery, radiation therapy, and chemotherapy are mainstays of cancer treatment, treating cancer by directly removing or killing cancer cells. In recent decades, alternative treatments such as targeted therapy and immunotherapy have revolutionized the treatment of cancer, improving tumor control and increasing survival in a growing subset of cancer patients. Immunotherapies activate the patient’s immune system to recognize and kill cancer cells, while targeted therapies broadly target molecules or pathways involved in the growth and survival of cancer cells. Despite the success of these therapies, there are still many patients who do not respond and who could benefit from further development of targeted therapies, immunotherapies, and combinations with existing therapies.&#13;
&#13;
Broadly, this thesis focuses on the development and preclinical testing of tumor-localized protein therapeutics for cancer. First, we explore the potential of the antioxidant enzyme catalase to control tumor growth via scavenging of hydrogen peroxide and production of oxygen in the tumor. After engineering approaches to maximize intratumoral catalase exposure, we performed in vitro characterization and in vivo validation of our approaches in syngeneic mouse tumor models, but failed to show therapeutic efficacy or transcriptional profiles consistent with the expected activity.&#13;
&#13;
Cytokines can be potent immunotherapies, inducing activation and proliferation of immune cells critical to anti-tumor immune responses. While traditional cytokine therapies suffer from low efficacy and high systemic toxicity, intratumorally administered and retained cytokines have improved safety and efficacy. We investigated the translation of intratumorally injected, collagen-binding interleukin (IL)-2 and IL-12 from mouse models to pet dogs with spontaneously-occurring tumors, including considerations for combining IL-2 and IL-12 with radiation therapy (RT). RT is a promising strategy to improve the efficacy of immunotherapy or vice versa. We explored this combination in syngeneic mouse tumor models. Relative to immunotherapy alone, combination with RT can improve responses at irradiated tumors and overall survival, but does not improve responses at unirradiated tumors. In canine tumor studies, we find that collagen-anchored IL-2 + IL-12 therapy exhibits a tolerable toxicity profile and induces increased infiltration of immune cells in the tumor and a transcriptional profile consistent with antitumor effector functions. Combination of RT and biweekly cytokine therapy results in dramatic tumor regression in the majority of dogs. These canine tumors are highly similar to certain human tumors, suggesting that collagen-anchored cytokines may be efficacious in cancer patients.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154198</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technology Demonstration of a Megawatt-Class Integrated Motor Drive for Aircraft Propulsion</title>
<link>https://hdl.handle.net/1721.1/154196</link>
<description>Technology Demonstration of a Megawatt-Class Integrated Motor Drive for Aircraft Propulsion
Chen, Yuankang
An integrated compressor-generator concept, in which the electromagnetic rotor elements of a generator share a common rotor structure with the blades of the Low Pressure Compressor (LPC), is proposed as an approach to integrating megawatt-scale generators with a gas turbine. The benefits of this concept include the elimination of windage loss on the generator’s outer rotor surface, which is now used as the LPC hub; a reduction in the overall number of bearings required; and an improvement in system volumetric power density by housing the generator inside the LPC. The lower temperatures of the LPC module provide the most favorable generator operating environment and close access to a source of bleed air for cooling purposes.&#13;
&#13;
An outer-rotor permanent magnet electric machine reduced-order model framework that captures critical structural, thermal, and electromagnetic constraints is developed to identify key enabling technologies for maximizing specific power. A tooth-and-slot stator with an outer-rotor Halbach array architecture is identified as maximizing electric machine specific power with current technology across a range of power levels from 100 kW to 3.6 MW.&#13;
&#13;
A fully air-cooled 1 MW integrated motor drive is developed to demonstrate the viability of the identified architectures and technologies. The motor drive is estimated to have electric machine and power electronics specific powers of 17.1 kW/kg and 20.2 kW/kg respectively, exceeding NASA’s 2030 performance targets for electrified aircraft propulsion. One key differentiator of the motor drive design is the attention afforded to its manufacturability and assembly. This includes additional constraints placed on material selection and the development of design features that minimize the risk of damage to components during assembly. A novel channel-type heat exchanger is developed and experimentally demonstrated to meet the combined structural and thermal performance requirements of the motor drive. Optimum heat exchanger geometry depends strongly on channel surface roughness, as system cooling flow limits constrain its operation to the flow transition regime. Synchronous excitation of the spindle modes via the destabilizing electromagnetic rotor-stator forces is a key challenge for the overhung rotor architecture due to the absence of an effective source of damping. The spindle root must be sufficiently stiffened to ensure the natural frequencies of the spindle modes are above operating frequencies. Motor drive rotordynamic operability is enabled with solid dampers controlled by a novel in-situ damper tuning mechanism, which produces changes in damper stiffness and damping without requiring disassembly and reassembly of the bearing housing module. The demonstrator outcomes are scalable and applicable to a wide variety of applications in transportation, power generation, and industry.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154196</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnosing Band-Mediated Electrochemical Half-Reaction Mechanisms and Identifying the Unique Features Thereof</title>
<link>https://hdl.handle.net/1721.1/154195</link>
<description>Diagnosing Band-Mediated Electrochemical Half-Reaction Mechanisms and Identifying the Unique Features Thereof
Howland III, William C.
Redox catalysis is pervasive in both benchtop and industrial scale chemistry as a means of introducing and interconverting between key functional groups as well as, increasingly, a means to elaborate carbon skeletons. Further development of redox catalysis methods is of central importance as we seek to eliminate the use of environmentally harmful reagents, reduce volumes of chemical waste by improving atom economy, and enable new means of functionalizing small molecule feedstocks and modifying heavily functionalized biomass feedstocks. Despite the centrality of redox catalysis to both contemporary and future chemistry, a unified understanding of its principles has to this point not been synthesized by the community. Redox catalysis is approached from two directions that do not, for the most part, communicate with one another: thermal redox and electrochemical redox. Thermal redox is concerned with the use of purely chemical processes to effect transformations while the application of electrochemical understanding of redox is generally limited to electrolysis in the context of chemical synthesis. This work seeks to bridge these disparate areas of redox chemistry by demonstrating that thermal redox catalysis on heterogeneous catalysts may proceed by electrochemical mechanisms. Furthermore, we aim to illustrate the distinctive characteristics of this reactivity paradigm in the hope that the unique features might be exploited in future redox catalyst design.&#13;
&#13;
In Chapter 2, we show that thermal oxidation of an organic reactant on a doped carbon catalyst with molecule-like active sites appears to proceed by electrochemical half-reactions mediated by the conduction band of the catalyst.&#13;
&#13;
In Chapter 3, we develop mathematical models to rationalize and predict catalyst behavior in non-ideal band-mediated electrochemical catalysis with particular emphasis on the effects of resistivity and the possible modes of interference between half-reactions.&#13;
&#13;
In Chapter 4, we seek to identify manifestations of cooperativity between the grains of polycrystalline platinum in the form of electrical charge transfer between grains of different relative activity towards the constituent half-reactions of aerobic formic acid oxidation.&#13;
&#13;
This thesis is intended to serve as a prompt and a guide to the further exploration of thermal redox catalysis proceeding by band-mediated electrochemical half-reactions. It is hoped that this work represents a substantial step toward the exploitation of the unique capacity of band-mediated mechanisms to catalyze each individual half-reaction on spatially and chemically distinct moieties that may be independently optimized for their specialized purpose and to autonomously and optimally apportion overall driving force between the reactions. More broadly, it is hoped that this work contributes to the application of electrochemical understandings and methods to thermal catalysis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154195</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between Extra-Germinal Center expansion of memory B cells and affinity maturation during the humoral recall response</title>
<link>https://hdl.handle.net/1721.1/154193</link>
<description>Interplay between Extra-Germinal Center expansion of memory B cells and affinity maturation during the humoral recall response
VanBeek, Matthew Christopher
According to the WHO, vaccines have saved more lives than any other medical intervention in history. Current estimates suggest vaccines save millions of lives annually. The importance of vaccines was recently highlighted by effective vaccines that curtailed the SARS-CoV-2 pandemic. Vaccines prime the immune system to mount tailored responses against antigens, like pathogen-derived proteins. However, some viruses can evolve to evade these immune responses. Targeting the regions of viral proteins that are conserved during such evolution is a goal of rational vaccine design. Achieving this goal requires a better understanding of how immune memory is generated and recalled.&#13;
&#13;
Upon vaccination or infection, B cells, and the antibodies they secrete, undergo a Darwinian evolutionary process in the Germinal Center (GC) to generate tailored memory and antibodies. Memory B cells generated in the GC can be recalled in extra-germinal center (EGC) processes to create a large, rapid antibody response. During the recall response, new GCs also form. In this thesis, I explore the EGC’s role in the recall response, and the interplay between GC and EGC processes. The mechanistic understanding that has emerged is pertinent to effectively targeting conserved epitopes, amplifying antibody responses, and providing broad protection against mutable variant pathogens. This thesis uses a variety of computational models describing the GC and EGC processes, aiming to enhance our understanding of the generation of immune memory and its subsequent recall to create antibodies. Computational predictions are also tested against clinical data.&#13;
&#13;
The first study reported here sheds light on why coupled EGC and GC processes may have evolved. During the primary immune response, GCs produce memory B cells with a high affinity for the infecting pathogen or vaccine antigen, but they also generate many low affinity memory B cells. During the recall response, the EGC plays a crucial role by selecting and expanding the best pre-existing memory B cells against the infecting pathogen or booster antigen. Upon infection with a variant, the memory B cells that are expanded are the low affinity B cells produced in the primary GC. Over much longer times, secondary GCs generate tailored high affinity B cells to variant antigens. These results suggest that the GC and EGC evolved to protect us against families of variant pathogens.&#13;
&#13;
This thesis also provides a mechanistic explanation for the surprising observation of increased antibody breadth after a third homologous SARS-CoV-2 vaccine. After the second vaccine dose, effective antigen presentation and epitope masking enables B cells with lower germline affinities to enter the secondary GC. These subdominant B cells target different epitopes than the immunodominant response generated after the first shot. As a result, the second dose generates a broad memory pool that is expanded in the EGC after the third vaccine dose to create a broad antibody response capable of binding to variants like Omicron.&#13;
&#13;
Finally, we build on the studies above to investigate the phenomenon known as Original Antigenic Sin, where a heterologous boost with a related variant antigen can generate an immune response worse than a prime with the variant.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154193</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Entropy Amorphous and Crystalline Li-Garnet Films for Lithionic Applications</title>
<link>https://hdl.handle.net/1721.1/154191</link>
<description>High Entropy Amorphous and Crystalline Li-Garnet Films for Lithionic Applications
Zhu, Yuntong
Safe, reliable, and Li⁺-conductive solid-state electrolytes are considered the key to unlocking the potential of solid-state and hybrid batteries, as well as lithionic devices beyond batteries, such as memristors and environmental gas sensors. The materials class of Li-garnet-type Li₇La₃Zr₂O₁₂₋δ (LLZO) has garnered attention because of its high Li⁺ conductivities, wide electrochemical stability windows, and non-flammability. However, high-temperature sintering (&gt;1050 °C) is generally required to achieve its highly conductive cubic phase (cLLZO), which raises concerns over processing cost, sustainability, and interface stability with adjacent layers, such as cathodes. As an alternative, amorphous LLZO (aLLZO) phases require lower synthesis temperatures than their crystalline counterparts, making this material an attractive option for use in solid-state batteries and beyond. However, to date, their amorphous local structure and structure-transport relationships have not been thoroughly explored. In addition, the low-temperature synthesis routes for stabilizing various aLLZO phases and converting them to cLLZO have rarely been discussed.&#13;
This thesis investigates the structural nature of a new class of high-entropy aLLZO phases and defines their low-temperature synthesis conditions for lithionic applications. First, we explore the synthesis conditions and local structure of aLLZO and contextualize them with existing Li⁺-conductive oxides. These high-entropy glass-ceramic phases contain the highest number of local building units (LBUs) identified so far (≥ 4), including edge- and face-shared LBUs, and do not conform to the traditional Zachariasen glass formation rules. Within the aLLZO structure, we identify Zr and Li as network formers, facilitating the formation of LBU connections via bridging oxygen. A model study is next designed to confirm the role of La as a network modifier: increasing the La concentration in aLLZO promotes local disordering. Moreover, low-temperature synthesis options for aLLZO and cLLZO are explored through crystallization enthalpy analysis and the development of the first Time-Temperature-Transformation (TTT) diagram for LLZO. We confirm the successful synthesis of cLLZO at a record-low temperature of 500 °C, about half the temperature used for classic sintering. Finally, recent advancements and challenges towards the device integration of Li⁺-conductive films are analyzed. This thesis sets the cornerstone for future structure optimization of high-entropy aLLZO glass-ceramics and provides low-temperature synthesis guidelines to assist the integration of aLLZO and cLLZO into lithionic devices.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154191</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Electronic Molecular Materials with Magneto-Optical and Magnetic Properties</title>
<link>https://hdl.handle.net/1721.1/154190</link>
<description>Investigation of Electronic Molecular Materials with Magneto-Optical and Magnetic Properties
Delage-Laurin, Lèo
This thesis is divided into three parts. Part 1 summarizes the design, synthesis, and properties of novel small-molecule and polymeric magneto-optical materials. Part 2 discusses synthetic advances for the development of redox-active pyrrole-fused small molecules and polymers. Part 3 details the synthetic work and investigation of selectively deuterated and perdeuterated BDPA radicals as polarizing agents in Overhauser dynamic nuclear polarization.&#13;
&#13;
In Chapter 1, we introduce the magneto-optical phenomenon known as Faraday rotation and its manifestation in organic materials. Current frontiers in performance, applications, and mechanistic underpinnings of the Faraday effect in organic-chromophore thin films are discussed.&#13;
&#13;
In Chapter 2, we evaluate the Faraday rotation in thin films of metallocene/PMMA composites. We discuss the development of ferrocenium-based optical-quality thin films, their resulting MO responses, and the desirable and unique MO features of C-term materials geared toward applications.&#13;
&#13;
In Chapter 3, we expand on the work established in Chapter 2 by developing functional polymers with magneto-optically active ferrocenium units embedded through their backbone. We observe a 30% increase in Verdet constants over the previously obtained ferrocenium/PMMA thin films.&#13;
&#13;
In Chapter 4, we evaluate the Faraday rotation in thin films of several phthalocyanine and porphyrin derivatives, which exhibit large A-term Faraday responses. We observe maximum Verdet constants greater than those found in competing inorganic materials. The effect of various chemical modifications is discussed.&#13;
&#13;
In Chapter 5, we expand the scope of organic Faraday rotators exhibiting large A-term Faraday rotation via the synthesis of novel liquid crystalline azacoronenes. Significant gains in the Faraday rotation of solid-state samples can be achieved via molecular alignment through the liquid crystalline phase.&#13;
&#13;
In Chapter 6, we investigate the Faraday rotation in thin films of chiral π-conjugated polymers. We discuss the impact of chirality, structural order, and film thickness on the magneto-optical rotation of polymeric systems and report among the largest Verdet constants observed to date in organic thin films.&#13;
&#13;
In Chapter 7, we summarize our synthetic efforts toward the development of redox-active pyrrole-fused small molecules and polymers, expanding the synthetic landscape of molecules akin to those discussed in Chapter 5. The synthesis of novel pyrrole-fused precursors, monomers, and polymers is discussed.&#13;
&#13;
In Chapter 8, we describe the synthesis of selectively deuterated α,γ-bisdiphenylene-β-phenylallyl (BDPA) radicals and their investigation as polarizing agents in Overhauser dynamic nuclear polarization (OE-DNP), shedding light on the mechanism of polarization transfer in OE-DNP in insulators, which has yet to be established.&#13;
&#13;
In Chapter 9, we follow up on the work established in Chapter 8 with the synthesis of perdeuterated BDPA radicals. We establish that the substitution of the ¹H spins with ²H on the phenyl moiety increases the positive enhancement from the BDPA core by 50%.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154190</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Special Purpose End-to-End Satellite Constellation Design Methodology</title>
<link>https://hdl.handle.net/1721.1/154189</link>
<description>Development of a Special Purpose End-to-End Satellite Constellation Design Methodology
Cummings, Andrew T.
Recent improvements in payload delivery to low-Earth orbit (LEO) have significantly improved the capacity to economically deploy satellite constellations. These constellations are profoundly versatile, with the potential to allow researchers to gather large volumes of data from astronomy and Earth-observing missions over an unconstrained mission lifetime. However, many current satellite constellation design methodologies focus on designing satellite constellations with global coverage, rather than constellations with maximized coverage for a smaller, specified area of interest.&#13;
&#13;
This dissertation presents an open-source methodological framework capable of optimizing the design of special purpose satellite constellations for a given mission science objective. We present two case studies to illustrate the framework’s functionality. The first case study is an astronomy constellation with a science objective of maximizing coverage of a list of target stars, so that we can minimize the error of stellar spin-axis inclination retrieval and constrain the spin-axis inclination of FGK spectral-type target stars. The second case study is an Earth-observing constellation designed to minimize revisit time to a specific target area and to achieve better ground sampling precision than existing thermal infrared platforms for monitoring thermal emissions over that area.&#13;
&#13;
Our special purpose framework is robust, modular, and fast, allowing it to serve as an effective reference point for any commercial, scientific, or governmental mission requiring satellite constellation design. We find Pareto-optimal satellite constellation architectures for a thermal infrared Earth-observing mission that achieve a minimum revisit time of 8 days for a ground sampling distance of 6.6 m.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154189</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metallocluster Site-Differentiation and Subsite Specific Heterometal Substitution</title>
<link>https://hdl.handle.net/1721.1/154186</link>
<description>Metallocluster Site-Differentiation and Subsite Specific Heterometal Substitution
Bostelaar, Trever M.
The deployment of metalloclusters in applications such as catalysis and materials synthesis requires robust methods for site-differentiation: the conversion of clusters with symmetric ligand spheres to those with unsymmetrical ligand spheres. However, imparting precise patterns of site-differentiation is challenging because, compared with mononuclear complexes, the ligands bound to clusters exert limited spatial and electronic influence on one another. In Chapter 2, we described a method that used sterically encumbering ligands to bind to only a subset of a cluster’s coordination sites. Specifically, we showed that homoleptic, phosphine-ligated Fe–S clusters undergo ligand substitution with N-heterocyclic carbenes to give heteroleptic clusters in which the resultant clusters’ site-differentiation patterns are encoded by the steric profile of the incoming N-heterocyclic carbene. This method afforded access to every site-differentiation pattern for cuboidal [Fe₄S₄] clusters and was extended to other cluster types in Chapter 3, particularly in the stereoselective synthesis of site-differentiated Chevrel-type [Fe₆S₈] clusters. In Chapter 4, we further utilized the 3:1 site-differentiation of cuboidal [M₄S₄] (M = Fe or Co) clusters to perform subsite specific metal atom substitution at each cluster. Specifically, we showed that the unique metal sites of homometallic clusters of the form [M₄S₄(IMes)₃Cl]+ can be selectively excised by addition of 2 equiv TlTp. Reconstitution with M′Cl2 (M′ = Co, Fe, for M = Fe, Co, respectively) yielded the heterometallic clusters [CoFe₃S₄(IMes)₃Cl]+ and [FeCo₃S₄(IMes)₃Cl]+. The reduced clusters, [M′M₃S₄(IMes)₃Cl], as well as the CO-bound clusters, [M′M₃S₄(IMes)₃(CO)], were also prepared, and a comparative analysis of the properties of all three series of clusters was undertaken. 
Low-valent electronic configurations are accessed in all four clusters, [Fe₄S₄(IMes)₃(CO)], [CoFe₃S₄(IMes)₃(CO)], [Co₄S₄(IMes)₃(CO)], and [FeCo₃S₄(IMes)₃(CO)], and this study further reveals how heterometal substitution modulates the degree of C–O bond weakening.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154186</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrolysis of Molten Polyphosphate Salts Generates P₄ and O₂: Selectivity, Kinetics and Stability Behind a Promising Alternative to Carbothermal Phosphate Reduction</title>
<link>https://hdl.handle.net/1721.1/154185</link>
<description>Electrolysis of Molten Polyphosphate Salts Generates P₄ and O₂: Selectivity, Kinetics and Stability Behind a Promising Alternative to Carbothermal Phosphate Reduction
Licini, Andrew John
Elemental white phosphorus is a vital feedstock chemical for a vast network of industries, but it is currently produced via a carbothermal reduction rife with environmental and efficiency problems. In this work, we use the molten salt electrolysis of sodium polyphosphates to investigate an alternative means of white phosphorus production. In order to collect robust and meaningful data on these traditionally challenging systems, we introduce a comprehensive toolkit of techniques and design elements, such as a sodium reference electrode and graphite pseudoreference couple, working electrode geometries that prevent gas blockage, and inert container and separator materials. Via high-temperature product collection at graphite working electrodes in tandem with electrokinetic analysis, we identify white phosphorus as the major cathodic product, formed with high selectivity, and infer that its formation is rate-limited by generation of a P³⁺ intermediate. Simultaneously, we find that graphite oxidizes much more efficiently as an anode than the coke used in the industrial thermal process, evolving a mixture of carbon dioxide (96%) and carbon monoxide (4%) instead of exclusively carbon monoxide. When the graphite anode is replaced by a metal anode, we also identify oxygen as the major, and ostensibly only long-term, anodic product. While iridium metal anodes require a higher overpotential for oxygen evolution relative to platinum and gold, they also demonstrate vastly superior corrosion resistance, with corrosion rates as low as 0.93 mm yr⁻¹ in Lux-basic melt compositions. Platinum and gold ions released during corrosion also re-reduce in solution to form elemental metal particulates and gaseous oxygen, resulting in roughly 100% oxygen evolution efficiency in all cases and allowing catalyst material to be partially reclaimed. 
Finally, we find that the Lux acidity of the melt has a variety of significant effects on the energetics of melt reactions, simultaneously promoting the reduction of phosphates to phosphorus while suppressing the oxidation of graphite or evolution of oxygen and promoting metal anode corrosion. Taken together, these findings indicate that a viable carbon-free electrogeneration of white phosphorus is feasible under appropriate solvent conditions and electrode material choices, and this electrolysis represents an appealing alternative to legacy processes.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154185</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficiently Searching for Objects Within Large Collections of Images and Video</title>
<link>https://hdl.handle.net/1721.1/154184</link>
<description>Efficiently Searching for Objects Within Large Collections of Images and Video
Moll, Oscar R.
Images and videos are now ubiquitously captured and collected. Cameras in our phones let us capture our day-to-day lives; camera drones help engineers monitor structures and map terrain. Video conferencing is a key enabler of remote work and distributed teams. Video and images are also a key part of robotic perception, including in self-driving cars, which use cameras as key sensors. Social media is increasingly image and video based. Video from these applications often makes its way to cloud storage, either for archival or for more complex uses, such as building datasets for machine learning algorithms used by the application.&#13;
&#13;
These large image and video datasets are enabling a new generation of applications. For example, video data from vehicle dashboard-mounted cameras (dashcams) is used to train object detection and tracking models for autonomous driving systems [82]; to annotate map datasets such as OpenStreetMap with locations of traffic lights, stop signs, and other infrastructure [59]; and to automate insurance claims processing by analyzing collision scene footage [61]. &#13;
&#13;
In parallel to the above trend, deep-learning-based computer vision models have flourished in the last decade. High-quality semantic embedding models are widely available today for images, and it is reasonable to assume a basic level of automated semantic understanding of individual images and video frames for all of the application scenarios above. &#13;
&#13;
Despite these enabling advances, carrying out even simple high-level tasks on image and video data collections is difficult. For example, searching your own data for some ad-hoc object of interest is not always easy: publicly available models are not always accurate enough when used on proprietary datasets, and they are not equally accurate for all searches. Ad-hoc object searches are useful in their own right for data exploration, and these searches are also a key step in other processes, such as labeling data for supervised machine learning. Data labeling is an expensive process involving human input; in practice, this implies labeling only a subset of the data. Labeling presumes we can locate images or videos with the desired label distribution in the training and validation sets. This desired distribution may differ greatly from the natural distribution of the collected data. For example, Tesla runs an extensive suite of checks on their autopilot perception models [44]. These checks correspond to important scenarios, more akin to corner cases discovered over time than to a random sample. &#13;
&#13;
The following are two important bottlenecks for any work with images and video collections: &#13;
• Adapting existing models to our own data and tasks is laborious, requiring some human annotation and testing. &#13;
• The computational demands of the models used are high, so processing large data collections is slow and expensive. &#13;
&#13;
The goal of this thesis is to describe two systems, SeeSaw and ExSample, designed to tackle instances of these problems on large image and video collections, respectively. &#13;
&#13;
SeeSaw [57] helps users find results in queries where the visual semantic embedding used to index the data performs poorly. The high-level approach in SeeSaw is to take user feedback during the search process in the form of bounding boxes around regions of relevance within previously shown results. Behind the scenes, SeeSaw integrates this feedback into future results. This is challenging as SeeSaw must deal with the problem of integrating feedback from small samples of high dimensional data in a way that increases rather than decreases result quality. &#13;
&#13;
Searching for examples in video presents extra challenges. Video data is much larger in volume than image data, as one second of typical video commonly maps to 30 image frames. This multiplier makes video datasets harder to manage, as costs are higher across the board: costs of storage, costs of data IO, compute costs for decompressing it, and finally, any processing itself. At the same time, video data is also more redundant, as consecutive frames are usually very similar in content. This redundancy reduces the utility of each frame individually. &#13;
&#13;
We designed ExSample [55] to tackle these trade-offs between cost and utility that are specific to video. ExSample assumes there is an accurate object detector available, which can detect whether a given frame has an object of interest and where the object is. It uses this information to decide whether results are being found and, importantly, whether they are redundant. This information is used to decide which areas of the data, either single files or segments of long video files, are likely to yield the most new examples in a future sample, and then to pick frames from that area. The statistical methodology used to estimate the probability of new results is based on an intuition similar to that behind estimating the chance of finding new gene variants in a sampled population, new words in a text, or new species in an expedition, a problem known as missing mass estimation. ExSample adapts related ideas to improve sampling from videos in an online setting. &#13;
&#13;
Beyond their common application goal of helping users with image and video searches, both SeeSaw and ExSample also share some design goals: &#13;
1. Accessibility: we want to lower barriers to getting started using the systems. We want users to get started with SeeSaw and ExSample without requiring an extensive prior labeling effort or models that already work well with their data. One target application of the systems is helping users find data they can use for labeling in order to build models for their own data. &#13;
2. Scalability: the exponential growth of data means that for these systems to continue being useful, costs and latencies should scale favorably as the data grows. In particular, expensive processing at query time or repeated wait times/costs that grow linearly with data are undesirable. &#13;
3. Adaptability: SeeSaw and ExSample adapt to users’ unique data and queries. SeeSaw is meant to help users with queries where pre-trained embedding models fall short, adapting to the users’ queries and datasets. ExSample is designed to adapt to different video files and scenarios, for example both moving-camera and static-camera videos, a few large videos or many small ones, and so on. &#13;
&#13;
We describe SeeSaw in Chapter 2 and ExSample in Chapter 3, followed by a discussion of limitations, areas for future work, and ways in which the two systems could work together in Chapter 4.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154184</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Machine Learning Models of Sensory Systems</title>
<link>https://hdl.handle.net/1721.1/154183</link>
<description>Evaluating Machine Learning Models of Sensory Systems
Feather, Jenelle
We rely on our sensory systems to perceive and interact with the world, and understanding how these systems work is a central focus in neuroscience. A goal of our field is to build stimulus-computable models of sensory systems that reproduce brain responses and behavior. The past decade has given rise to models that capture complex behaviors such as image classification, word recognition, and texture perception. Yet, there are known discrepancies between such models and human observers, such as in the architectural components, learning mechanisms, and resulting representations, that must be rectified to obtain complete models of the brain. &#13;
&#13;
This dissertation investigates the representations in contemporary models of sensory systems, focusing on the auditory and visual systems. The first study explores the extent to which deep neural network audio models capture human fMRI responses to sound. Most tested models out-predicted previous hand-engineered models of auditory cortex and exhibited hierarchical brain-model correspondence. The second study investigates the invariances of visual and auditory models of perception using "model metamers", synthetic stimuli that produce the same activations in a model as a natural stimulus. Behavioral experiments on humans using these stimuli reveal that the invariances of most current computational neural network models of perception do not align with human perceptual invariances. Our experiments trace this discrepancy to invariances that are specific to individual models, and provide some guidance for how to eliminate them. The third study uses techniques similar to those used to generate model metamers, but applies them to auditory texture models with the aim of reducing their dimensionality. We found that previous hand-engineered models of auditory texture can be significantly reduced in dimensionality without compromising their ability to capture human perception. The fourth study investigates the representational geometry of neural networks trained with biologically-inspired stochasticity. Together, this work presents ways to compare the representations of neural networks to those of human perceptual systems, and suggests paths for future improvements of these models.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154183</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies Directed Toward the Synthesis of Streptonigrin</title>
<link>https://hdl.handle.net/1721.1/154181</link>
<description>Studies Directed Toward the Synthesis of Streptonigrin
Gomez, Christian
The synthesis of complex pyridines has been a consistent challenge in the field of organic chemistry. Although there are many methods to accomplish this task, few consistently allow the synthesis of highly substituted pyridines without substantial limitations on the resulting substitution pattern. This thesis describes the synthesis of multisubstituted pyridines via both cascade cycloadditions and electrocyclic ring closure reactions, viewed through the lens of streptonigrin total synthesis. Natural product total synthesis offers a way to both test and showcase the true substrate scope of emerging synthetic methods; the pentasubstituted core of streptonigrin provides a window into their applications and limitations.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154181</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Messy Measurement: Approaches to Causal Inference&#13;
With Unobserved Variables</title>
<link>https://hdl.handle.net/1721.1/154178</link>
<description>Messy Measurement: Approaches to Causal Inference&#13;
With Unobserved Variables
Markovich, Zachary
This dissertation focuses on methods for conducting causal inference when an essential variable is unobserved. The first paper provides new methods for causal inference with bundled variables. A bundled variable is one that, rather than being observed directly, is represented as a collection of proxies present in the dataset. The only existing approach to causal inference in this setting is to reduce the dimensionality of the proxies, thereby recovering the missing treatment or moderator. The first paper of this dissertation provides a new method for quantifying the causal effect of the full bundle, thereby sidestepping this missing data problem. The second paper provides a new method for analyzing randomized experiments with non-compliance. Researchers typically attempt to estimate the treatment effect only among compliers in such cases, but compliance status is not directly observed, so instrumental variables methods are used instead of directly conditioning on compliance. I propose an alternative estimator based on upweighting units that are likely compliers. I show that this estimator is asymptotically conservative under weaker assumptions than instrumental variables models require. The final paper (joint with Ariel White) turns to an applied causal inference task and focuses on the effect that minimum wage increases have on the probability of voting. Because receiving a pay raise due to a minimum wage increase is confounded by income and socio-economic status (which goes unobserved), we employ a difference-in-differences design that provides credible causal evidence that minimum wage increases raise the turnout rate of affected workers.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154178</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scenario-based uncertainty quantification for deep space optical communications</title>
<link>https://hdl.handle.net/1721.1/154177</link>
<description>Scenario-based uncertainty quantification for deep space optical communications
Milton, Julia
This thesis develops a framework for evaluating deep space communication architectures under uncertainty and explores the potential of optical communications to meet the future data requirements of deep space science missions. Currently, the Deep Space Network (DSN) relies solely on radiofrequency (RF) communications. As science-driven space exploration missions become more complex, their data requirements are pushing the limits of existing communications infrastructure. Incorporating optical communications into the network offers a way to achieve higher data rates by moving to a higher frequency band without increasing mass and power. However, optical communications require fundamentally different designs and infrastructure, necessitating substantial development and investment.&#13;
&#13;
Aligned with NASA’s goals, the study sets a 100x improvement over current data rates as the performance benchmark for proposed designs. Five architectures, including arrayed ground stations, a relay at the Earth-Sun L4 Lagrange point, and a space-based receiver array in Earth orbit, are evaluated using data rate, availability, data volume, and cost proxy as figures of merit. A statistical model for data rate is developed, incorporating 22 input parameters to estimate laser data transmission using a Pulse Position Modulation protocol.&#13;
&#13;
The study develops a cost proxy model of component costs, revealing the cost-effectiveness of segmented mirror antennas. Evaluating 3000 designs for each planet scenario and architecture, statistical analyses determine the cost-optimal design achieving 100x data rate improvement at the lowest cost.&#13;
&#13;
A Polynomial Chaos Expansion (PCE) is used to calculate the mean and variance of the data rate over a target planet’s orbit in approximately 1/2 the time of a Monte Carlo simulation. Results show that optimal designs vary across planet scenarios, with Venus-optimized designs featuring smaller fields of view and narrower bandpass filters due to higher background light conditions.&#13;
&#13;
Each architecture offers distinct advantages and disadvantages in terms of cost, data rate, data volume, and availability, with arrayed architectures standing out as promising options for high data rates and reduced risk. The space-based receiver array enables average data rates of 256 Mbps from 1.74 AU (corresponding to an average Mars distance), compared to 84 Mbps for a single ground station, with data volumes 8-10 times greater than a single ground station due to high availability and the absence of atmospheric effects. However, the single ground station remains the most economical choice in every scenario. Comparisons show that the baseline optical architecture for a single ground station is more cost-effective than large arrays of RF receivers for achieving similar performance improvements on Mars and Saturn. The nominal design for Mars requires 249 arrayed 12 m antennas at a total cost of $154.38 million, while the single optical ground station costs $97.18 million to achieve the same data rate.&#13;
&#13;
The study also discusses the development and implementation of a novel hybrid RF/optical antenna at the Deep Space Network site in Goldstone, California. The system, featuring a 1.5 m segmented primary mirror, a superconducting nanowire single photon detector, signal acquisition and control cameras, and adaptive optics components, will receive downlink from the first demonstration of deep space optical communications from the DSOC payload on the Psyche spacecraft, scheduled to launch later this year.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154177</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proton dynamics in ultrathin GdO&#119909;H&#119910; films for magneto-ionics</title>
<link>https://hdl.handle.net/1721.1/154176</link>
<description>Proton dynamics in ultrathin GdO&#119909;H&#119910; films for magneto-ionics
Sheffels, Sara
Dynamic tuning of materials properties with simple voltage control is desirable for a variety of applications, from magnetic memory, to neuromorphic computing, to solid state pixels and optical circuit components. Metal oxides can conduct ionic current, allowing their properties and those of adjacent materials to be controlled through voltage control of ion transport and electrochemical reactions. This thesis focuses on protonic defects in gadolinium oxide and gadolinium hydroxide (GdOxHy). Protons are mobile in this oxide at room temperature, and hydrogen incorporation can control a variety of materials properties, making these devices a promising platform for thin film solid state ionic devices with very simple and robust architectures. However, the proton transport and hydrogen storage properties of GdOxHy are not sufficiently well understood to determine the limits and optimal operation conditions for this material platform. Additionally, the mixed ionic and electronic conductivity of the material poses challenges for measuring these properties. This thesis sheds light on proton dynamics in nanoscale GdOxHy films in order to understand proton conductivity and hydrogen storage. The work investigates the effects of hydration and gating on the structure of GdOxHy films and devices, measures the devices' electrical and electrochemical properties, and focuses on applications in magneto-ionics, where voltage control of protons is used to toggle the magnetic properties of a ferromagnetic or ferrimagnetic layer.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154176</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of an Affordable, Precise Irrigation Controller that Lowers the Barrier to Water- and Energy-Sustainable Agriculture</title>
<link>https://hdl.handle.net/1721.1/154164</link>
<description>Design of an Affordable, Precise Irrigation Controller that Lowers the Barrier to Water- and Energy-Sustainable Agriculture
Sheline, Carolyn
With climate change and population growth exacerbating global food insecurity, it has become urgent to establish more water- and energy-efficient means to raise agricultural production. Available techniques to bolster crop productivity, such as solar-powered drip irrigation (SPDI) and precision irrigation, are currently cost-prohibitive for farmers in low-and middle-income countries (LMICs), where food insecurity will be most severe. This thesis demonstrates one method to reduce the barrier to these systems, by pairing them with a Predictive Optimal Water and Energy Irrigation (POWEIr) controller that optimizes irrigation schedules to make efficient use of solar and water resources for maximum crop yield. In doing so, POWEIr also decreases SPDI system costs.&#13;
&#13;
First, this work confirms the hypothesis that scheduling irrigation activity to match the availability of variable solar power enables SPDI cost savings. For a fixed irrigation system, an SPDI full-season operation simulation study was conducted, and the impact of adjusting the pumping load dynamically to match solar power availability was assessed. When evaluated against conventional operation, this process of profile matching enabled a power system lifetime cost decrease of &gt;18% while delivering 100% of the required irrigation for a simulated two-hectare Kenyan tomato farm with over 50 m well depth.&#13;
&#13;
To exploit these cost and reliability benefits, this work proposes the POWEIr controller. The POWEIr controller leverages machine learning and utilizes a small set of inexpensive sensors to optimize irrigation schedules based on solar energy and crop water demand predictions. The performance of the POWEIr controller was evaluated with an experimental SPDI prototype and compared to simulated typical farming practices. For the same irrigation delivered, a six-fold decrease in the required battery capacity was observed. With no batteries, the POWEIr controller still satisfied a greater fraction of the irrigation demand. Overall, compared to typical practice, the controller provided more reliable irrigation using solar power, with minimal battery usage.&#13;
&#13;
High reliability at low cost necessitates that the POWEIr controller’s irrigation schedules are robust to errors in agronomy inputs and weather data. Sensitivity to these errors was assessed by evaluating the impact on simulated irrigation amounts and crop yield. Weather data from an economical $190 station, costing 83% less than a better-equipped research-quality alternative, could be used with negligible consequences for crop yields. This conclusion held across diverse crop and soil types. The crop coefficient was the most significant factor affecting irrigation performance, thereby pointing to the need for calibration of this factor alone. This underscores the POWEIr controller’s capability to accurately optimize irrigation schedules for only essential water use while relying on affordable sensors and minimal calibration.&#13;
&#13;
Finally, the POWEIr controller was piloted on farms in Jordan and Morocco and performance was benchmarked against measured local, conventional drip irrigation practices on similar farms. It provided up to 44% and 43% savings in water use and pumping energy consumption, respectively, for similar crop yields. This result demonstrates the translation of accessible precision agriculture technology from theory to practice and offers tangible evidence of the POWEIr controller’s potential to raise agricultural sustainability.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154164</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Chemical Tool for Analyzing Brain: Electron Microscope-Like Images with Fluorescent Microscope</title>
<link>https://hdl.handle.net/1721.1/154162</link>
<description>Scalable Chemical Tool for Analyzing Brain: Electron Microscope-Like Images with Fluorescent Microscope
Shin, Tay Won
Neuroscientists have long studied complex neural circuits by examining their ultrastructure and molecular features. Electron microscopy (EM) has greatly advanced our fundamental understanding of neurobiology by revealing ultrastructural features through dense labeling of membranous structures, while fluorescence microscopy (FM) has allowed neuroscientists to identify and study specific biomolecules of interest.  However, it would be ideal if such imaging could be achieved with FM (i.e., a conventional light microscope), allowing anyone to identify and localize biomolecules in the detailed ultrastructural context of neural circuits. In this study, we report a novel membrane probe and modified expansion microscopy (ExM) protocol aimed at achieving conventional light microscope images that are comparable to low-resolution EM, enabling the visualization of ultrastructural features in thick mouse brain tissue sections with molecular contrast, and pointing the way towards the possibility of tracing and reconstructing neural circuitry. We demonstrate the ability of this novel strategy, which we call ultrastructure membrane expansion microscopy (umExM), to reveal with light microscopy ultrastructural features that were previously observable only with EM, and to localize biomolecules in their ultrastructural context with a resolution of ~60 nm. Combining umExM with existing fluorescence fluctuation imaging methods (i.e., SRRF), we achieved a resolution of ~30 nm. umExM may enable the routine use of ultrastructure imaging in neurobiology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154162</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Targeting and Manipulating Endogenous Transcriptional Condensates</title>
<link>https://hdl.handle.net/1721.1/154154</link>
<description>Targeting and Manipulating Endogenous Transcriptional Condensates
Lee, Choongman
Biomolecular condensates are membraneless compartments composed of selectively concentrated biomolecules with liquid-like properties. In this thesis, we discuss our discoveries showing that biomolecular condensates are associated with transcription. However, tools to study transcriptional condensates in living cells are limited, owing to the small size of the condensates (tens to hundreds of nanometers) and their dynamic behavior in living cells. Therefore, we develop optogenetic tools to target and manipulate endogenous transcriptional condensates. We find that intrinsically disordered regions of transcription factors can be used to direct cargo to the condensates. Combined with an improved light-induced dimer and its binding partner, this allows us to generate a rapid response with high insertion efficiency. We adopt a proximity-based modification to biotinylate proteins inside the condensates upon blue light exposure, which helps to determine the constituents of the condensates using mass spectrometry. Our approach opens the way to proteome-wide investigation of transcriptional condensates. We conclude by discussing the development and future outlook of the technique.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154154</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporal gene expression and regulation in T4 phage</title>
<link>https://hdl.handle.net/1721.1/154153</link>
<description>Temporal gene expression and regulation in T4 phage
Levine, Mandara Alexis
As the most abundant biological entity in the biosphere, bacteriophages play a critical role in shaping microbial diversity, and thus overall ecosystem health. They are also essential tools in molecular biology, shedding light on fundamental biological concepts. T4 phage, with its complex lifecycle and genetic content, has been instrumental in many such discoveries. However, many questions regarding gene regulation in T4 phage remain unanswered. In this study, we employ end-enriched RNA-seq (Rend-seq) and ribosome profiling to examine T4 RNA and protein synthesis throughout the course of infection, gaining new insights at the transcriptional, translational, and genomic level. At the transcriptional level, we identified transcript boundaries, novel putative promoters, and new potential cleavage sites for the T4 endoribonuclease RegB. At the translational level, we identified many instances of previously unreported changes in translational efficiency over the course of infection, indicating the presence of intricate and uncharacterized mechanisms of regulation. Collectively, transcriptional and translational controls lead to precisely tuned protein synthesis rates during infection, as exemplified by the phenomenon that components of T4 protein complexes are synthesized according to their stoichiometry, a principle that has been observed in organisms during steady-state growth. Finally, we identified and experimentally validated T4’s 290th gene, 61.-1. Though non-essential to T4 in laboratory conditions, this gene has homologs present in a number of other phages and drastically impacts E. coli growth when ectopically expressed. This study provides insights into T4 phage biology, paving the way for further exploration into molecular biology, virology, and biotechnology; our rich data set can be utilized by future studies to answer a diverse array of inquiries.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154153</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conducting polymers for electrochemically mediated separations</title>
<link>https://hdl.handle.net/1721.1/154121</link>
<description>Conducting polymers for electrochemically mediated separations
Ren, Yinying.
Many conventional separation technologies are limited by high energy costs, generation of chemical wastes, and lack of molecular selectivity. This thesis aims to develop novel separation techniques that overcome these drawbacks for the separation of neutral organic compounds from aqueous solutions. Adsorption is one of the most widely adopted technologies for the intended task. Polymeric adsorbents have shown great potential for replacing activated carbon in removing a wide range of toxic organic pollutants from wastewater streams, since they do not suffer from costly regeneration needs and high attrition rates. An electrochemically regenerable polymeric adsorbent based on an intrinsically conducting polymer, polypyrrole (PPy), doped with the anionic surfactant dioctylsulfosuccinate (AOT), denoted PPy(AOT), is developed for mitigating organic pollutants in wastewater. Highly porous PPy(AOT) can be synthesized using a facile electropolymerization protocol, and has an adsorption capacity of greater than 570 mg pollutant/g polymer in its superhydrophobic oxidized state. The hydrophobicity of PPy(AOT), and hence its affinity for organics, can be modulated electrochemically through the reorientation of AOT dopants, which can be exploited to regenerate the adsorbent and use it repeatedly for multiple adsorption/desorption cycles. A combined density functional theory and molecular dynamics approach was used to study the interactions between the adsorbed organic molecules and the surfactant-doped conducting polymer adsorbent, and to elucidate the mechanism of electrochemical modulation of the hydrophobicity and affinity of the sorbent.
The AOT doped polypyrrole (PPy(AOT)) and a polyvinylferrocene/polypyrrole hybrid (PVF-PPy) have complementary hydrophobicity tunability in response to electrochemical modulations: both materials are hydrophobic in their respective neutral states, exhibiting high affinities towards organics; upon application of a mild potential to oxidize PVF-PPy and reduce PPy(AOT), they can be simultaneously rendered hydrophilic, thereby driving desorption of organics and regeneration of the materials. Therefore, the two materials form an attractive pair for an asymmetric electrode system to work in tandem. The asymmetric system can be used in a cyclic fashion, through repeated application of a small potential (close to 0 V) to program the capture of organics from a large volume of feed solution, and a higher potential (above 0.9 V) to stimulate the release of the adsorbed organics into a small volume of desorption solution. The redox-responsive electrode system can be applied for remediation of organic pollution in wastewater as well as recovery of organic products from reaction mixtures. The asymmetric configuration has multiple benefits, including suppression of water parasitic reactions, high energetic efficiency, and selectivity for target organic species. The ability to modulate the hydrophobicity of conducting polymers and physicochemical insights about the material are significant for developing broader applications in drug delivery, sensing, self-cleaning surfaces, microfluidics, and artificial muscles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 123-134).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154121</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic insight on a chimeric Cas9 protein's specificity for DNA target with 5'-NAA-3' PAM</title>
<link>https://hdl.handle.net/1721.1/154120</link>
<description>Mechanistic insight on a chimeric Cas9 protein's specificity for DNA target with 5'-NAA-3' PAM
Nip, Lisa.
Numerous protein variants have been made to expand the repertoire of CRISPR-Cas nucleases that can recognize protospacer-adjacent motifs (PAMs) other than the canonical NGG discovered in wild-type Streptococcus pyogenes. While Cas nuclease engineering has largely yielded proteins with enhanced specificity for NGG and variations on G-containing PAMs, we were able to construct a chimeric Cas protein with consistent specificity for a 5'-NAA-3' PAM by rationally combining the PAM-interacting domain of Streptococcus macacae with the S. pyogenes Cas9 scaffold. We have demonstrated in in vitro incubations that our chimeric protein is capable of cleaving dsDNA with an NAA PAM, but a deeper biochemical understanding of the nature of these new chimeric proteins' binding and cleavage activities is of paramount importance for their practical use. Here, we use the principles of enzyme kinetics to investigate our chimeric protein's efficiency relative to Cas12a and the biophysical mechanism by which our grafted S. macacae segment works synergistically with the S. pyogenes Cas9 scaffold to cleave target DNA with an NAA PAM. We show that SpySmacCas9 does not bind or cleave at rates comparable to Cas12a, but its overall performance rivals that of wild-type SpyCas9 with a new PAM preference.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 62-65).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154120</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shadow suburbanism : Mexican everyday life, fear, and space in Greater Atlanta</title>
<link>https://hdl.handle.net/1721.1/154117</link>
<description>Shadow suburbanism : Mexican everyday life, fear, and space in Greater Atlanta
Arroyo, John C.
            (John Christopher)
Historical, traditional patterns of U.S. migration locate Mexican immigrants and Mexican-Americans in dense urban centers, but recent, rapid demographic shifts reveal suburbs and exurbs - specifically in the South - as the new host destinations for Mexican settlement. This critical "Latinization" phenomenon is evident in highly visible changes to the built environment, as well as in the ways municipal planning and law enforcement agencies adjust their policies to either accommodate or discriminate against Mexican immigrant land uses. My dissertation broadly examines how fear and invisibility influence Mexicans' spatial coping mechanisms and agentic strategies in suburban Atlanta (Gwinnett County) -- one of the fastest-developing and most multiethnic metropolitan counties in the country. I ask: what constraints and/or opportunities do Mexican populations encounter when attempting to reshape transportation networks, housing, and commercial centers in recent high-growth U.S. immigrant gateways to fit their needs? Where is this transformation occurring, and why? How do these spaces hold meaning, and what is that meaning to Mexican populations in a culture of anti-immigrant fear and tension? A case-study approach is supplemented with data triangulated from 145 in-depth interviews; participant observation; and longitudinal (1996-2018) content analysis of material from English and Spanish-language news media, community organizations, and federal, state, and municipal policy documents. My project extends the literature on the contributions and integration challenges of Mexicans in new immigrant destinations in both planning and sociology. It provides an empirical basis to analyze the culture of fear and resulting interiorization through the lens of Latino Urbanism scholarship while simultaneously situating migration theory across multiple spaces and places, based on the experiences of the largest Latinx minority group in the U.S.
I argue that despite an era of pervasive fear and heightened tensions surrounding Mexican communities under the Trump administration, suburbs provide Mexicans flexible opportunities to assert their identity. The dissertation concludes by drawing on Kevin Lynch's Good City Form (1981) and the ideals of immigrant placemaking to propose a theory of "suburban spatial fit" for ethnically diverse suburbs. Where Mexican immigrants live in the U.S. plays a critical role in how they adapt to their host society -- and how their host society reacts to their presence in a policy context. In a 21st-century America defined by exponential Latinx growth and evolving debates about immigration federalism, the case of Gwinnett County illustrates the role of the built environment as both agent and canvas for Latinx expressions of cultural self.
Thesis: Ph. D. in Urban Planning, Policy, and Design, Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2018; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 436-459).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154117</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating brain-wide neural mechanisms using fMRI and novel tools</title>
<link>https://hdl.handle.net/1721.1/154116</link>
<description>Investigating brain-wide neural mechanisms using fMRI and novel tools
Bricault, Sarah Jean.
Defining the neural mechanisms that coordinate behavior requires characterizing the activity dynamics of diverse brain regions and neural circuit elements. My thesis explores such dynamics to help understand brain-wide processing of rewarding and aversive stimuli relevant to decision making. My primary experimental tool is functional magnetic resonance imaging (fMRI), applied in anesthetized and awake rats, and I introduce methodologically significant innovations along with my scientific work. In the first part of my thesis, I investigate the neural bases of responses to intracranial rewarding and aversive stimuli. Comparison of psychometric and fMRI-based measurements identifies a putative site for reward integration in the nucleus accumbens (NAc), and targeted pharmacological inactivation of this region correspondingly distorts the evaluation of reward magnitudes in an operant task. My results dissociate processing of stimuli of opposite valence, by combining rewarding and aversive stimuli in a decision-making task and demonstrating that the two stimuli are processed independently. A limitation of these imaging studies is that they are performed in sedated animals. I therefore introduce a protocol for investigation of brain-wide neural dynamics in awake, paralyzed rats. I characterize intrinsic dynamics of brain function in this preparation, and argue that it constitutes a promising basis for further investigations of behaviorally relevant neural function. In the final part of my thesis, I describe a new tool for perturbation of brain dynamics using image-guided pharmacological interventions. The tool is a conjugate of the inhibitory drug muscimol to a paramagnetic contrast agent. I show that this reagent allows neurophysiological consequences of local inhibition to be characterized in spatial and temporal dimensions, creating a facile basis for assessing the contributions of drug-targeted structures.
My work thus establishes a platform for hypothesis-driven investigation of distributed neural mechanisms involved in a broad range of contexts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from the printed version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154116</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Discovering Metabolite Structures from Mass Spectra</title>
<link>https://hdl.handle.net/1721.1/154037</link>
<description>Machine Learning Methods for Discovering Metabolite Structures from Mass Spectra
Goldman, Samuel Lucas
Small molecule metabolites mediate myriad biological and environmental phenomena across host-microbiome interactions, plant chemistry, cancer biology, and various other processes. Mass spectrometry is often used as an analytical technique to investigate the small molecules present in a sample, measuring both their masses and fragmentation spectra. However, the complexity and high dimensionality of spectral data make it difficult to identify unknown metabolites and their roles, with a large majority of detected metabolites remaining unidentified in public data.&#13;
&#13;
This thesis proposes a suite of new computational methodologies for higher accuracy annotation of small molecule metabolites from mass spectrometry data that integrate chemistry-informed priors with modern deep learning advancements. I begin by decomposing and framing the metabolite annotation pipeline into four key tasks well-fit for supervised deep learning including (A) molecular formula prediction, (B) spectrum-to-molecule property prediction, (C) molecule-to-spectrum prediction, and (D) de novo generation of molecular candidates. To address these various tasks, I first introduce the Molecular Formula Transformer to predict molecular property fingerprints from spectra by changing the tandem mass spectrum input basis from scalar mass values to plausible molecular formula annotations. This method is then extended to an energy-based-model formulation to predict the molecular formula of an unknown molecule from its tandem mass spectrum. Following these initial efforts to learn better representations of fragmentation spectra, I develop new neural networks capable of generating fragmentation spectra from small molecules through two-step autoregressive modeling. I show how this can be accomplished by generating either molecular formula peaks or molecular fragment peaks.&#13;
&#13;
Downstream of metabolite prediction, a separate key question is to identify the function of discovered small molecules. To this end, I study and probe the ability to model enzyme-substrate compatibility from high throughput screens within a single enzyme family. In a final collaborative work, I further demonstrate how a new method for epistemic uncertainty quantification, evidential deep learning, can be applied to molecular property prediction.  Altogether, this work outlines a path forward to a fully neuralized pipeline for the high throughput identification of small molecule metabolites and their functions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154037</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Tools for Measuring and Analyzing Bacterial Gene-Expression Dynamics</title>
<link>https://hdl.handle.net/1721.1/154035</link>
<description>New Tools for Measuring and Analyzing Bacterial Gene-Expression Dynamics
Parker, Mirae Leigh
Messenger RNAs (mRNAs) are essential targets of gene regulation. The cell adapts and grows by changing its gene-expression profile, which it can achieve by manipulating the rates of mRNA initiation and decay and thus changing the relative abundances of transcripts. To understand the biological significance of these transcriptomic changes it is useful to observe how these changes correlate with emergent downstream behaviors and phenotypes. To manipulate and predict transcriptomic changes, it is also helpful to identify the sites of RNA regulation (transcription initiation, termination, and decay). By observing these sites of regulation, and how they change across different environmental and genetic contexts, we can learn to recognize the sequence determinants of these processes and anticipate under which circumstances they will modulate gene expression changes. In order to record transcriptomic histories and facilitate the correlation of these archives with emergent cellular behaviors, I have developed a molecular time capsule (MTC). An MTC is a self-assembling protein capsule which captures highly reproducible snapshots of the full cellular transcriptome for delayed retrieval and analysis. These encapsulated records remain stable, even while the host transcriptome undergoes major remodeling. These records are also cleanly separable from non-encapsulated RNAs originating either from the host cell itself, or from other cells in a heterogeneous population. To facilitate the identification of transcript ends across multiple conditions, I have also developed analysis tools for end-enriched RNA sequencing data (Rend-seq). Rend-seq is a variation of RNA-sequencing that aids in the inference of in vivo 5′ and 3′ transcript ends in addition to determining their abundances. The tools I have developed for processing and identifying transcript ends from these data have been bundled into an open source Python package: rendseq.
This package also powers an interactive website, rendseq.org, which increases the accessibility of published Rend-seq datasets.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154035</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Performance of Consensus Protocols</title>
<link>https://hdl.handle.net/1721.1/154034</link>
<description>Improving Performance of Consensus Protocols
Wan, Jun
Designing an efficient solution for Byzantine Broadcast (BB) is a central problem for many distributed computing and cryptographic tasks. Some of the most important challenges include improving the round complexity and the communication complexity of the protocol, and tolerating strong adversaries.&#13;
&#13;
For round complexity, under the honest majority setting, it has long been known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes n. However, whether we can match the expected constant round complexity in the corrupt majority setting — or more precisely, when f ≥ n/2 + ω(1) — was unknown, where f denotes the number of corrupt nodes. We resolve this long-standing open question and achieve BB in expected constant rounds, even when 99% of the nodes are corrupted by a weakly adaptive adversary. A weakly adaptive adversary can observe messages sent by honest nodes, adaptively corrupt nodes, and inject arbitrary new messages.&#13;
&#13;
Besides a weakly adaptive adversary, it is also important to study the round complexity of BB protocols under a strongly adaptive adversary. A strongly adaptive adversary can examine the original message an honest node would have wanted to send in some round, adaptively corrupt the node in the same round, and make it send a completely different message instead. In the corrupt majority setting, no protocol with sublinear round complexity was previously known. We are the first to construct a BB protocol with sublinear round complexity. Specifically, assuming the existence of time-lock puzzles with suitable hardness parameters and that the decisional linear assumption holds in suitable bilinear groups, we show how to achieve BB in (n/(n−f))² · polylog(λ) rounds with probability 1 − negl(λ), where λ is the security parameter.&#13;
&#13;
Another important metric for a BB protocol is its communication complexity. There have been many attempts to achieve sub-quadratic complexity in several directions, both in theory and practice, each with pros and cons. We initiate the study of another approach: improving the amortized communication complexity over a significantly long sequence of Byzantine Broadcast executions. We achieve optimal amortized linear complexity under honest majority, and amortized quadratic communication complexity under dishonest majority and a strongly adaptive adversary.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154034</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Real-time Earth Observation with Satellite Constellation Crosslinks and Propulsion</title>
<link>https://hdl.handle.net/1721.1/154033</link>
<description>Toward Real-time Earth Observation with Satellite Constellation Crosslinks and Propulsion
Chan, Manwei
The development of remote sensing small satellite constellations has created the potential for high-resolution Earth-observation data to reach end users faster. This work investigates how propulsion and intersatellite links enable constellations to continuously collect and deliver data faster than constellations without these capabilities.&#13;
&#13;
This work has four contributions. The first contribution is a constellation simulation framework that is based on open-source libraries. This simulation framework can propagate satellites and execute propulsive maneuvers. The second contribution is a planning and scheduling algorithm for propulsive maneuvers, target observation times, and optimal data routing paths. The third contribution is the development of high-performance constellation designs with respect to constellation cost and the following metrics: age of information, system response time, and total pass time. The cost model is developed from two separate models: the Small Satellite Cost Model (SSCM) and a launch cost model developed in this work. The fourth contribution is a set of cost-estimating relationships (CERs) that models the trade-off between cost and system performance in terms of the aforementioned metrics.&#13;
&#13;
The new simulation framework of contribution 1 is verified against the industry-standard software Systems Tool Kit (STK). The simulation framework is used to run 21 different constellation designs, 3 different satellite models, and 432 distinct ground targets. These scenarios are run during each of the four seasons to eliminate geometric biases, for a total of 108,864 individual scenario simulations. A single satellite executing the reconfiguration algorithm produces up to a 125% increase in pass time over seven days when compared to an identical satellite without propulsive capabilities. For an access cone with a nadir half-angle of 20°, the reconfiguration algorithm produces a 67% increase in pass time. Comparing the cost of intersatellite link (ISL) and reconfiguration-capable satellites versus (i) only ISL-capable satellites and (ii) a baseline satellite without ISL or reconfiguration capabilities, a Pareto-optimal analysis revealed that 29% of designs had both propulsion and intersatellite link capabilities when optimizing for age of information, 7% when optimizing for system response time, and 33% when optimizing for total pass time. The CERs show that for constellations costing between $150M and $1B (FY24), age of information can be reduced by 32 seconds for every million dollars spent, system response time can be reduced by 35 seconds for every million dollars spent, and total pass time over 3 days can be increased by 2 seconds for every million dollars spent.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154033</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of the Role of Differential Gene Expression and RNA Editing in Drosophila Tonic and Phasic Motoneuron Diversity</title>
<link>https://hdl.handle.net/1721.1/154029</link>
<description>Characterization of the Role of Differential Gene Expression and RNA Editing in Drosophila Tonic and Phasic Motoneuron Diversity
Crane, Andrés B.
How differential regulation of gene expression programs across neuronal subtypes leads to their unique morphology, membrane excitability, and synaptic properties is poorly understood. Protein expression and function within neurons depend on numerous transcriptional and post-transcriptional regulatory steps, which can be controlled at the single-cell level. Using modern single-cell sequencing approaches, the transcriptional and post-transcriptional regulatory programs for many cell subtypes have recently been analyzed. In this thesis, I have compared and analyzed the distinct transcriptomes of the tonic Ib and phasic Is glutamatergic motoneuron subtypes in Drosophila 3rd instar larvae. These neurons display distinct morphological and electrophysiological properties, in addition to being easily accessible for imaging, electrophysiology, and genetic approaches, making them an ideal model system to test how programs of gene regulation can lead to distinct morphological and functional synaptic properties. First, we sequenced RNA from 105 Ib and 101 Is single neurons to define the transcriptome for each neuronal subtype. Comparison of gene expression levels revealed ~800 differentially expressed genes (DEGs) across multiple gene classes, including intracellular Ca²⁺ buffers, signaling ligands and transcription factors, synaptic cleft proteins, and ion channels. Perturbation of these genes produced cell-type-specific morphological and electrophysiological defects, indicating that these genes are important in determining cell-type-specific characteristics of the two neuronal classes. Next, I analyzed how a specific type of post-transcriptional modification, RNA editing, was differentially regulated across the transcriptome in each neuronal subtype. Out of ~14,000 genes in the Drosophila genome, only a small number (324 sites in 215 genes) were canonically RNA edited at a robust level in these motoneurons.
The majority of the edits caused significant missense mutations in protein coding domains, indicating RNA editing could play a direct role in modulating protein function. Editing levels varied considerably between single neurons, although some edits were found in almost every cell, and some sites were edited to ~100% in each cell with editing. Forty-two edits occurred in evolutionarily conserved regions of the encoded proteins, suggesting they may play important roles in protein function. Finally, 26 of the 324 sites discovered were differentially edited between Ib and Is subtypes. Taken together, our data indicate that cell-type-specific programs of transcriptional regulation set initial cell-specific characteristics, but that post-transcriptional and post-translational regulation through RNA editing and other mechanisms act to fine-tune effective expression levels and function for a subset of the proteome. Our transcriptomic and RNA editing data provide a resource for future experiments to determine the role of specific DEGs and RNA editing events in determining cell-type-specific diversity and function of Drosophila larval motoneurons.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154029</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Templated Multiferroic Nanocomposites by Ion-Lithography</title>
<link>https://hdl.handle.net/1721.1/154025</link>
<description>Templated Multiferroic Nanocomposites by Ion-Lithography
Su, Tingyu
Vertically aligned nanocomposites (VANs), formed by phase separation of two different materials, have become a novel platform for studying heteroepitaxy. The perovskite-spinel system is one of the most popular VAN structures for studying multiferroic coupling. For example, ferroelectric BiFeO3 (BFO) and ferrimagnetic CoFe2O4 (CFO) can grow epitaxially on a [001]-SrTiO3 substrate, forming a BFO matrix with embedded CFO pillars. In this thesis, we focus on three topics:&#13;
&#13;
First, the possibility of integrating perovskites on a garnet Gd3Ga5O12 (GGG) substrate is explored. Perovskite candidates YFeO3 and BaTiO3 are integrated with the garnet Y3Fe5O12. The growth mechanism, structural characterization, and magnetic properties of the as-grown films are discussed.&#13;
&#13;
Second, ion-lithography enabled by the VELION FIB-SEM system is developed to template the perovskite-spinel system. Specifically, fin-shaped BFO-CFO nanocomposites are synthesized by pulsed laser deposition (PLD). The nucleation mechanism of CFO on patterned STO substrates, together with atomic-scale characterization, is discussed.&#13;
&#13;
Last, a customized micromagnetics module in COMSOL Multiphysics is built based on the weak formulation of the Landau-Lifshitz-Gilbert (LLG) equation. The magnetic properties of single-crystal CFO are modeled.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154025</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for High Throughput Biological Data</title>
<link>https://hdl.handle.net/1721.1/154024</link>
<description>Machine Learning Methods for High Throughput Biological Data
Murphy, Michael A.
Machine learning is becoming a pivotal tool in the analysis of datasets generated from high-throughput biological omics experiments. However, omics data introduce distinctive algorithmic challenges that set them apart from other domains where machine learning is applied. These challenges encompass issues such as limited data availability, complex noise, ambiguities in representation, and the absence of definitive ground truth for validation. In this thesis, I present three examples of machine learning applications to different omics modalities in which I address these challenges. In my first project, I develop an approach for contrastive representation learning with immunohistochemistry images, which suffer from complex technical and biological noise that renders generic approaches ineffective; I demonstrate how this approach can be combined with noisy labels derived from transcriptomics to derive an effective classifier of cell-type specificity. In my second project, I consider the problem of predicting mass spectra of small molecules: previous methods suffer from a tradeoff between capturing high-resolution mass information and maintaining a tractable learning problem, which I resolve by introducing a novel representation of the output space. In my third project, I perform gene regulatory network inference using a number of different single-cell sequencing platforms and carry out a quantitative comparison of these technologies. In summary, this thesis showcases the difficulties that arise in applying modern machine learning approaches to high-throughput biological measurements and presents empirical case studies of how these difficulties may be overcome.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154024</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Modeling of Disrupted Sensory Processing during Propofol Mediated Unconsciousness</title>
<link>https://hdl.handle.net/1721.1/154020</link>
<description>Statistical Modeling of Disrupted Sensory Processing during Propofol Mediated Unconsciousness
Tauber, John
A critical component of general anesthesia is unconsciousness, during which patients are unaware of their environment. Propofol, the most widely used anesthetic agent, induces slow oscillations in local-field potential (LFP) recordings. Spiking becomes strongly coupled to the phase of the slow oscillations, creating alternating irregular “Up” and “Down” states of high and low activity, respectively. Although it is well established that these changes in LFP and spiking interfere with inter-area communication, we have less understanding of how this disrupts sensory processing at the level of functioning networks. Here we used statistical modeling to investigate sensory processing during propofol-mediated unconsciousness. We utilized a rich experimental dataset containing LFP and spiking stimulus responses simultaneously recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of non-human primates before and during propofol-mediated unconsciousness. Due to the presence of Up and Down states, conventional averaging of trials to estimate a mean sensory response is statistically invalid. We addressed this by developing a Hidden Markov Model (discrete-valued latent process) with point-process observations to estimate the mean stimulus response in each recorded brain area. Our results showed that stimuli occurring during spiking Up states trigger weaker spiking responses than in awake animals in auditory cortex, and little or no spiking responses in higher order areas. In a second study, we further characterized functional properties of Up states in auditory cortex to discern whether they are ‘awake-like’. We designed a State-Space Model (continuous-valued latent process) where stimulus responses were modeled as input-driven changes to the latent process. This structure enabled point-process observations to reflect a variety of response profiles across neurons as well as the ability to estimate responses with single-trial specificity.
Preliminary evidence suggests that stimulus responses during Up states changed heterogeneously across neurons in auditory cortex when contrasted with the awake state. Finally, we introduced a spike-field coherence estimation framework to capture the coupling of LFP and spiking. Our approach treats spiking data as point-process observations of a latent continuous-valued oscillatory process. While preliminary results indicate similar performance to naïve estimators, our framework can form the basis for customized estimators by adding additional structure to the model (e.g., sparsity), which may contextually improve spike-field coherence estimation. In conclusion, our findings suggest that the spiking activity during Up states is neurophysiologically distinct from spiking activity in the awake brain, possibly contributing to the disruption of sensory processing. Moreover, the statistical techniques presented may benefit anesthesia researchers and the broader neuroscience community.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154020</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling a permanent human presence beyond low Earth orbit: wearable radiation protection and enhanced science through virtual reality</title>
<link>https://hdl.handle.net/1721.1/154006</link>
<description>Enabling a permanent human presence beyond low Earth orbit: wearable radiation protection and enhanced science through virtual reality
Paige, Cody Alison
NASA's ambitious plans for human space exploration include returning to the Moon within the next five years and undertaking long-duration, deep-space missions to Mars within the next decade. This thesis encompasses two key areas of research aimed at enhancing the safety and productivity of human exploration beyond low Earth orbit: the development of advanced radiation protection materials and the utilization of virtual reality (VR) for surface exploration. Aligned with NASA's Exploration Systems Development Mission Directorate, the research falls under the exploration capabilities topic of the Advanced Exploration Systems (AES) programs and projects.&#13;
&#13;
The first component of the research focuses on the development of a novel radiation shielding material comprising hydrogen and Boron-10. The material is designed to safeguard astronauts from primary and secondary radiation encountered in deep space, as well as on the Lunar and Martian surfaces. A multi-functional fabric was conceptualized, incorporating the radiation protection material, as well as advanced materials for dust protection and thermal regulation. The material can be integrated into advanced spacesuit designs, enabling astronauts to spend more time performing extravehicular activities while ensuring their safety during deep space travel. Through simulations and modeling using an online radiation simulation tool, I identified the optimal combination of shielding materials and evaluated the dose reduction potential offered by the proposed material. Furthermore, a prototype of the radiation shielding material was successfully manufactured, providing valuable insights for future production processes.&#13;
&#13;
The second component of my thesis centers on the utilization of virtual reality for Lunar and planetary surface exploration. I test the application of VR as a tool for scientists to remotely explore and analyze these extraterrestrial surfaces from Earth. Additionally, VR is leveraged as a training tool to enhance astronauts' understanding of the environments they will encounter during upcoming exploration missions. I conducted field expeditions and data collection in terrestrial analog environments, specifically focusing on Svalbard, Norway, as a Mars-relevant geological site. Using depth- and environmental-sensor data, a high-resolution 3D virtual environment was created, enabling a comparative user study between VR and traditional desktop applications. The results of the study highlighted the superiority of VR in terms of sense of scale, contextualization, and workload reduction. The study concluded that VR holds great potential as a valuable tool for scientific exploration and astronaut training, with recommendations provided for further improvements in data collection methodologies, virtual environment development, and VR application enhancements.&#13;
&#13;
In support of the next steps for the VR platform development for the Lunar surface, I contributed to the development of a low-cost, flight-capable time-of-flight camera for 3D mapping of the Lunar south polar region. By collaborating with NASA Ames, Lunar Outpost, and MIT's Resource Exploration and Science of our Cosmic Environment (RESOURCE) team, I tested and optimized the camera for Lunar conditions, ensuring its viability, as well as developed and tested a flexible concept of operations for its flight on a near-term Commercial Lunar Payload Services mission. This development paves the way for broader access to Lunar surface data and enables a wider range of scientists to contribute to Lunar exploration.&#13;
&#13;
The contributions presented in this thesis significantly advance the goal of safe and collaborative human exploration of the lunar and planetary surfaces. The developed radiation shielding material, along with the application of VR technology, not only mitigate health risks for astronauts but also enhance scientific productivity and understanding of these extraterrestrial environments. These advancements pave the way for a sustainable human presence on the Moon and further exploration of our solar system.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154006</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring the Psychological and Physiological Responses of Humans Experiencing Sacred Architectural Space in XR: a Case Study at the Monastery of Simonos Petra</title>
<link>https://hdl.handle.net/1721.1/154005</link>
<description>Measuring the Psychological and Physiological Responses of Humans Experiencing Sacred Architectural Space in XR: a Case Study at the Monastery of Simonos Petra
Vlavianos, Nikolaos
This dissertation argues that it is possible to characterize, both qualitatively and quantitatively, human emotional responses to sacred architecture, and that XR is an effective way to achieve this characterization. This research is driven by a systematic architectural method devised to Scan, Visualize, Simulate immersive experiences and Test humans in XR (SvStXR). Moreover, using self-reporting questionnaires, sensory data, debriefing, and eye-tracking data, the research shows that specific forms of enrichment of the XR (virtual) environment – aural and visual – have measurable impacts on these experiences. Through a high-quality XR protocol – consisting of the experimental procedure, data collection, and data analysis – the research assesses different parametrically manipulated conditions of light, sound, texture, and scale at the Simonos Petra Monastery in Greece, and compares these assessments to the lived experience of monks in the physical sacred space. These assessments were made possible by introducing novel metrics for data analysis that used the questionnaire data, a GSR Mean Score, and an Accelerometer Sum of Absolute Differences (SAD) score to numerically support qualitative observations influenced by spatial and human factors. Through user studies at three different locations – Simonos Petra, MIT, and the Hellenic College Holy Cross in Brookline, MA – 100 participants were exposed to AthosXR, an immersive experience of the interior of the monastery’s church. This dissertation also has the potential to contribute to future projects attempting to use XR environments as a laboratory for better understanding the impact of architectural space on spatial perception. Beyond its value for architects and designers, this work addresses a number of audiences: XR practitioners and scholars, scientists in environmental studies, and psychologists seeking information about space, its ambient conditions, and human emotions.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154005</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacial engineering and spectroscopy of spin-triplet excitons for singlet fission sensitization of silicon solar cells</title>
<link>https://hdl.handle.net/1721.1/154004</link>
<description>Interfacial engineering and spectroscopy of spin-triplet excitons for singlet fission sensitization of silicon solar cells
Perkinson, Collin Fisher
Silicon solar cells account for 95% of global solar energy production today, but their power conversion efficiencies have improved by barely 2% in 25 years as they approach their thermodynamic limit. Thermalization of high-energy photons is the single largest energy loss process in these devices, accounting for the loss of a third of all incident solar energy. Singlet exciton fission, an energy-splitting process that converts single high-energy photons into multiple lower-energy excited states, can reduce thermalization losses and boost the maximum silicon solar cell efficiency from 29.4% (the Shockley-Queisser limit) to 35%.&#13;
&#13;
While the idea of reducing heating losses in silicon solar cells using singlet fission has been circulating in the community for nearly half a century, the practical realization of such a device has yet to be demonstrated. In fact, despite many attempts, few studies have shown any evidence for energy transfer of spin-triplet excitons (the product states of singlet fission) to silicon. The challenge derives from the quantum mechanical spin properties of triplet excitons, resulting in triplets having vanishingly low ability to emit light, low mobility and small diffusion lengths, and requiring very close spacing with other materials to undergo energy transfer. This latter requirement may be especially limiting for silicon, an indirect-bandgap semiconductor that requires carefully tailored chemical passivation layers to reduce energy trapping at its surface.&#13;
&#13;
In this thesis, I employ interfacial engineering and spin-triplet spectroscopic techniques to explore three different approaches to singlet fission sensitization of silicon: radiative energy transfer, direct triplet transfer, and charge transfer. In the first approach, triplets are harvested by an interlayer containing heavy atoms, enabling the excited states to emit light that is subsequently absorbed by silicon. In the second approach, triplets transfer directly to silicon via short-range electron tunneling through a thin passivating interlayer. In the third approach, triplets are dissociated using an interlayer that facilitates charge transfer to silicon. In each case, particular attention is given to the materials between the singlet fission layer and silicon. These interlayers must be bifunctional, balancing triplet transfer to silicon with passivation of the silicon surface.&#13;
&#13;
In my first project, we identify a need in the radiative energy transfer approach for higher-triplet-energy singlet fission materials. Through a high-throughput computational and experimental search, we discover two new singlet fission materials: dicyanoanthracene and difluorooctafluoroanthracene. These materials have triplet energies that could enable exothermic transfer to silicon via near-infrared-emitting quantum dots. In my second project, we find that subnanometer hafnium oxynitride enables efficient triplet transfer from the well-studied singlet fission material tetracene to silicon, possibly via direct triplet transfer. In my third project, finding evidence that triplet transfer via hafnium oxynitride is enabled by triplet dissociation with silicon, we investigate interlayers deliberately selected to facilitate charge transfer to silicon; and we show that zinc phthalocyanine enables charge-transfer-mediated singlet fission sensitization of silicon. In the final project of this thesis, I explore the potential of singlet fission materials for magnetic field sensing, showing that they can be used for spatial imaging of magnetic fields, as well as to create a first-of-its-kind magnetic-field-switchable singlet fission laser.&#13;
&#13;
The studies in this thesis advance our understanding of the mechanisms of triplet energy transfer to silicon, expand the range of applications for singlet fission, and present interfacial engineering strategies and spectroscopic probes for triplet excitons. I hope that these findings will benefit ongoing efforts to realize efficient singlet fission sensitization of silicon and help to one day push commercial solar cells beyond their conventional thermodynamic limits.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154004</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An in situ Investigation of Sediment Hydrodynamics for Advancing Seabed Mining Plume Monitoring and Modeling</title>
<link>https://hdl.handle.net/1721.1/154003</link>
<description>An in situ Investigation of Sediment Hydrodynamics for Advancing Seabed Mining Plume Monitoring and Modeling
El Mousadik, Souha
Seabed mining for mineral resources, vital to the transition to a metals-based economy, is rapidly gaining momentum and is poised to materialize within the next decade. A pivotal step in this process is establishing seabed exploitation regulations that call for a proactive scientific approach to assess potential disturbances to the benthic environment. A critical environmental concern, in particular, revolves around sediment plumes formed during the extraction of deposits from the top sediment layer of the ocean floor. This thesis seeks to provide timely insights into the behavior of these plumes through an in-depth in situ investigation of the properties of deep-sea sediment.&#13;
&#13;
To achieve this goal, a cutting-edge instrument has been developed to provide real-time measurements of size and settling velocity distributions in the abyssal deep-ocean environment. This instrument was deployed during two pilot deep-sea mining trials at a depth of 4500 meters in the Clarion Clipperton Fracture Zone in the Pacific. Throughout these trials, unprecedented sediment data was collected, offering a unique insight into suspended sediment generated from deep-sea nodule collector vehicle operations.&#13;
&#13;
The suspended sediment grain size and other key particle shape characteristics are observed to be influenced by the nature of the hydrodynamic processes that generate the sediment in suspension. The data collected in situ not only revealed marked differences from previous laboratory-based measurements but also challenged the significance of particle size in dictating sediment settling rates. This implies that ex situ size measurements and single-parametrization-based settling velocity estimates are inadequate for evaluating sinking sediment flux, which is crucial in determining the extent of seabed mining plumes.&#13;
&#13;
The broader implications of these findings extend to monitoring seabed mining sediment plumes, emphasizing the imperative for well-informed in situ measurements rooted in an understanding of the complex hydrodynamic processes at play for a nuanced parameterization of sediment plume transport models.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154003</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studying Particle and Energy Transport using Nuclear and X-ray Diagnostics for Discovery Science and Inertial Confinement Fusion</title>
<link>https://hdl.handle.net/1721.1/154002</link>
<description>Studying Particle and Energy Transport using Nuclear and X-ray Diagnostics for Discovery Science and Inertial Confinement Fusion
Adrian, Patrick J.
Obtaining a fundamental understanding of particle and energy transport in High Energy Density Plasmas (HEDP) is essential for advancing basic plasma science and for correctly modeling the energy balance and ignition threshold in Inertial Confinement Fusion (ICF) implosions. Particle and energy transport in HEDP and ICF have therefore been the subject of extensive analytical and numerical studies over decades. In contrast, only a very limited set of experimental data exists to test these theories. The lack of data is generally due to the dynamic and complex nature of HEDP and ICF implosions, seriously compromising any methods that try to directly relate observables to the transport processes. To address this issue, we present a new set of diagnostics and experimental platforms that were implemented and used to uniquely probe three transport processes relevant to HEDP and ICF implosions: low-velocity ion-stopping power, the ion-electron energy exchange, and ion inter-diffusion. The experiments utilized one-dimensional, shock-driven implosions of D³He gas-filled thin-glass capsules to reach conditions relevant to ICF. In the first set of experiments, low-velocity ion stopping was measured and used to constrain the ion-electron energy exchange cross section at plasma conditions similar to the hot spot in ICF ignition experiments. The data agreed with a quantum mechanical treatment of the ion-electron energy-exchange cross section, which includes the diffraction of electrons [1]. In the second experiment, the mixing between the glass shell and D³He fuel due to plasma inter-diffusion was explored. This experiment utilized enhanced x-ray emission from the glass mixed into the hot spot, and the resulting data was used to validate different ion-diffusion models. The observed x-ray emission was modeled with an ion Fokker-Planck code, indicating that a significant component of the mix is due to kinetic streaming of the different ion species rather than collisional diffusion. 
Overall, these experiments have advanced our understanding of particle and energy transport in HEDP, which is critical for modeling the energy balance and ignition threshold in ICF implosions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154002</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Past, present, and future climate impacts of aviation</title>
<link>https://hdl.handle.net/1721.1/154000</link>
<description>Past, present, and future climate impacts of aviation
Grobler, Carla
Commercial aviation is one of the hardest-to-decarbonize contributors to climate change and accounts for approximately 5% of anthropogenic radiative forcing due to its CO₂ and non-CO₂ emissions. As the industry grows faster than its efficiency improvements, aligning its growth with international climate goals becomes increasingly urgent and challenging. To enable data-driven decision-making, this thesis investigates aviation’s historical emissions, present-day climate impacts, current damages due to marginal emission changes, and potential pathways to reach net-zero climate impacts by 2050.&#13;
&#13;
First, the research develops a consistent bottom-up emissions inventory for commercial civil passenger aviation from 1980-2019, quantifying how emissions patterns such as spatial, temporal, and compositional characteristics have varied over time. Results show that, while fuel consumption increased by 330%, capacity, measured as available seat miles, grew by 560%. This growth has been regionally heterogeneous, with the emissions share over Asia growing from 7% to 23% while the share over North America decreased from 55% to 25%. Over this time, the share of nighttime fuel consumption increased from 30% to 38%, due to an increase in aircraft utilization. Additionally, there has been a shift in emissions composition, with the nitrogen oxides (NOx) emissions index increasing by 20% while the non-volatile particulate emissions index decreased by 70%.&#13;
&#13;
Second, this thesis quantifies aviation’s present-day climate effects using the bottom-up emissions inventory described above. Since contrails are responsible for an estimated 57% of aviation’s effective radiative forcing, we place a particular focus on modeling contrails. Using a medium-fidelity aircraft plume model and ERA5 reanalysis weather data, our results indicate that contrail impacts increased by 460% from 1980 to 2019, outpacing growth in annual fuel consumption. Per flight distance, contrail impacts varied by less than 10% over this time. However, underlying drivers did not remain constant. The fraction of flight segments that cause contrails increased by 32%, while the radiative forcing per distance of contrail decreased by 24%. We additionally quantify for the first time how different assumptions regarding the treatment of contrails mixing with ambient air could change their lifetime by a factor of three.&#13;
&#13;
Third, this thesis calculates the impacts of marginal changes in aviation emissions on climate and air quality, supporting cost-benefit analyses of emissions interventions. Climate and air quality impacts are monetized on a per-unit-emissions basis, using a simplified climate model and sensitivities from the GEOS-Chem global chemistry-transport model. We find that cruise emissions account for 90% of impacts per fuel unit, with 49-81% arising from air quality effects depending on the discount rate. Collectively, NOx, CO₂, and contrails cause 97% of the total impact. By estimating first-order contributions to variance, we find that the equilibrium climate sensitivity, the climate damage function, and the value of statistical life contribute the greatest uncertainty in outputs.&#13;
&#13;
Finally, we evaluate pathways through which alternative energy carriers and avoidance of non-CO₂ impacts could lead aviation towards net-zero climate impacts by the year 2050. We consider three alternative energy carriers: synthetic fuels from biomass; synthetic fuels from green hydrogen and atmospheric CO₂; and the direct use of green liquid hydrogen. We also perform a meta-analysis to evaluate how these alternative fuels could affect the non-CO₂ impacts and how contrails can be avoided through small-scale adjustments in flight altitude. We find that 50% of fleet-wide contrail length can be avoided for a 0.88% fleet-wide fuel burn penalty (5th to 95th percentile range 0 to 2.51). Thereafter, avoiding additional contrails becomes more fuel-costly, with an additional 20% avoidance requiring double the additional fuel. Together with continued efficiency gains, and assuming 50% avoidance of contrails, such an energy transition could reduce annual lifecycle aviation CO₂ emissions by 89-94% compared to year-2019 levels, despite a 2-3-fold growth in demand by 2050. If these costs were passed directly on to passengers, ticket prices would rise by no more than 15% compared to a no-intervention baseline. However, the pathways we identify reduce aviation CO₂-equivalent emissions by only 46-69%; more action is required to mitigate non-CO₂ impacts.&#13;
&#13;
Collectively, this thesis provides perspective on aviation’s past, present, and future climate impacts. The results indicate substantial growth in impacts over the past 40 years and highlight CO₂ and contrails as the largest contributors. Our results also indicate that, through sustained investment in alternative fuels and contrail avoidance measures, future mitigation is possible. However, to meet the global climate goal of net-zero emissions by 2050, the transition needs to start now.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/154000</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>From the atmosphere to the abyss: Tracing organic carbon deposition, cadmium isotopes, and iron cycling using marine sediments</title>
<link>https://hdl.handle.net/1721.1/153994</link>
<description>From the atmosphere to the abyss: Tracing organic carbon deposition, cadmium isotopes, and iron cycling using marine sediments
Tegler, Logan
The marine biological pump refers to the formation and subsequent export of particulate organic carbon from the sunlit zone to the ocean’s interior. The magnitude and attenuation of this flux exert an important control over the air–sea balance of carbon dioxide. This thesis focuses on constraining this flux and the factors that control it, and on developing novel tracers for it. First, I evaluate Holocene carbon depositional fluxes in margin sediments and shed light on seafloor organic carbon (OC) deposition. I find that margins host 19.4 Tmol yr⁻¹ of marine OC and, contrary to the current paradigm, less than 4% of the OC is buried in low-oxygen environments. However, in order to understand how the efficiency of the biological pump may have changed over time, it is necessary to use proxies. In Chapter 3, I examine cadmium isotopes as a potential paleonutrient proxy. I suggest that in addition to biological uptake, Cd isotopes may be influenced by local redox conditions, remineralization, and external Cd additions. In Chapter 4, I measure Cd isotopes in the Mt. McRae shale (2.5 Ga), which was deposited across a purported ‘whiff’ of oxygen believed to reflect the onset of oxygenic photosynthesis. I find that the Cd isotopes are invariant and light during the ‘whiff’ interval. Rather than reflecting no changes in nutrient cycling, I suggest these compositions reflect a source–sink balance between Cd-depleted surface waters and external Cd inputs. Finally, in Chapter 5, we redirect our attention to the Fe cycle. Iron is a limiting nutrient in many ocean regions, which limits the efficiency of the biological pump. We use iron isotopes and Q-mode factor analysis to identify five sources of iron to sites in the South Pacific and Southern Oceans, including dust, a ligand-bound background source, volcanic ash, and two hydrothermal sources. Taken together, this thesis examines elemental interactions and spans temporal scales, from ancient epochs to the modern era. 
While we leverage trace elements as proxies of past marine biogeochemical cycles, we also stress that careful work is needed to apply and analyze them.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153994</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Structure-based Machine Learning Approach for Spatiotemporal Fluctuations in Nuclear Thermal Fluids</title>
<link>https://hdl.handle.net/1721.1/153993</link>
<description>A Structure-based Machine Learning Approach for Spatiotemporal Fluctuations in Nuclear Thermal Fluids
Wang, Yu-Jou
In nuclear energy systems, flow-assisted damage can arise at locations where complex turbulent-structure interactions drive enhanced mechanical loads and the mixing of flows at different temperatures drives thermally induced mechanical failures. Thermal striping is one such phenomenon, characterized by the turbulent mixing of non-isothermal streams, which can induce thermal stress fluctuations and fatigue damage in critical components. The concern over this damage mechanism cannot be fully addressed via plant instrumentation, due to the high frequencies involved as well as the complex interaction between the source of the flow oscillation and the affected locations. High-fidelity models and simulations can play a significant role in predicting the performance of critical components under thermal fatigue. While scale-resolving models are capable of capturing complex unsteady flow features, such models are often computationally prohibitive for industrial applications, making them inapplicable to online monitoring.&#13;
&#13;
This thesis proposes an industrial-scale prognosis machine learning (ML) tool for thermal striping applications. A two-level ML framework based on turbulent coherent structures is developed. In the first level, well-organized coherent structures are extracted by performing proper orthogonal decomposition on local parameters, and a tree-based machine learning model is then used to down-select the reference structures for the field reconstruction. In the second level, a parameterized convolutional neural network is trained to predict the bias introduced by the reference-structure approximation. The two-level design leverages vortex identification and local bias correction techniques, which greatly increase data efficiency during training.&#13;
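As an illustration of the first-level idea only, the following is a minimal proper-orthogonal-decomposition sketch on a synthetic field. The toy data, grid sizes, and mode count are assumptions for the example, not the thesis code or its test cases.

```python
import numpy as np

# Sketch: POD = SVD of the mean-subtracted snapshot matrix; keeping the
# leading modes recovers the dominant coherent structures of the field.
rng = np.random.default_rng(0)
n_space, n_time = 200, 30  # toy grid points x snapshots (assumed sizes)
t = np.linspace(0.0, 2.0 * np.pi, n_time)
x = np.linspace(0.0, 1.0, n_space)[:, None]

# Synthetic field: two coherent "structures" plus small measurement noise.
field = (np.sin(2 * np.pi * x) * np.cos(3 * t)
         + 0.5 * np.cos(4 * np.pi * x) * np.sin(5 * t)
         + 0.01 * rng.standard_normal((n_space, n_time)))

# Extract modes and reconstruct from the k leading (down-selected) ones.
mean = field.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
k = 2
recon = mean + U[:, :k] * s[:k] @ Vt[:k]

rel_err = np.linalg.norm(field - recon) / np.linalg.norm(field)
print(f"relative reconstruction error with {k} modes: {rel_err:.3f}")
```

In the thesis framework the reconstruction bias that remains after this truncation is what the second-level network is trained to correct.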
&#13;
The demonstration of the methodology shows that the proposed framework can successfully capture the fluctuation frequencies and amplitudes of the spatiotemporal fields. Through three test cases, the proposed framework has shown its capability to work with a notably limited training sample size (between 20 and 30 in each instance) in a highly variational setting. A speed-up factor on the order of 10⁶ ∼ 10⁷ was achieved. Based on the vortex identification method, the methodology is expected to be applicable to general phenomena driven by large coherent structures. The framework is also shown to have the ability to identify fatigue-limiting regions across the spectrum of operating conditions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153993</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infinite dimensional flag varieties</title>
<link>https://hdl.handle.net/1721.1/153966</link>
<description>Infinite dimensional flag varieties
Haddad, Ziad Sami.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1984; Bibliography: leaf 61.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153966</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>P-adic L-functions for elliptic curves over CM fields</title>
<link>https://hdl.handle.net/1721.1/153965</link>
<description>P-adic L-functions for elliptic curves over CM fields
Haran, Shai M. J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1983; Bibliography: leaves 43-46.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153965</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning 3D Modeling and Simulation From and For the Real World</title>
<link>https://hdl.handle.net/1721.1/153905</link>
<description>Learning 3D Modeling and Simulation From and For the Real World
Ma, Wei-Chiu
Humans have extraordinary capabilities of comprehending and reasoning about our 3D visual world. With just a few casual glances, we can grasp the 3D structure and appearance of our surroundings and imagine all sorts of “what-if” scenarios in our minds. Existing 3D systems, in contrast, cannot. They lack structural understanding of the world and often break apart when moved to unconstrained, partially-observed, and noisy environments. To alleviate the challenge, this thesis focuses on developing robust computational tools that can effectively perceive, model, and simulate the 3D world from unconstrained sensory data. We investigate the full spectrum of dynamic 3D world understanding: from robot localization to recognition, from static 3D reconstruction to dynamic motion estimation, and from closed-loop simulation to 3D generation. By examining these tasks not only in controlled settings, but also in sparse, noisy, and sometimes even extreme real-world settings, we aim to answer the following two questions: (i) how to robustly model and reason about the visible world that we see; and (ii) how to hallucinate the unseen and imagine novel scenarios in a realistic fashion.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153905</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Neural Networks from Theoretical and Biological Perspectives</title>
<link>https://hdl.handle.net/1721.1/153904</link>
<description>Understanding Neural Networks from Theoretical and Biological Perspectives
Liao, Qianli
Neural networks are an important subject of both biological and artificial intelligence, and one we do not yet fully understand. The field has recently been popularized by the advancement of deep learning, and researchers now know how to make artificial neural networks perform extraordinarily well in practical applications, rivaling human performance on many tasks. However, this extreme pursuit of practical performance is not underpinned by deep understanding, leading to an unbalanced and incomplete, if not precarious, field of intelligence.&#13;
&#13;
Our theoretical and biological understanding of neural networks has fallen far behind their unprecedented proliferation. This creates an awkward situation: we build very capable intelligent systems that we do not understand, compromising our ability to predict, control, and correctly treat such systems. Furthermore, these developments do not directly translate into an understanding of the general concept of intelligence, including, most importantly, our own human intelligence. &#13;
&#13;
With alleviating these issues as one of our goals, my colleagues and I spent several years trying to understand neural networks from both theoretical and biological perspectives, gaining interesting insights and results along the way. This led to a number of published papers, which I compile here to constitute my thesis. Finally, I discuss my preferred future directions of research under the current trends of AI/AGI.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153904</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gluing and Creasing Paper along Curves: Computational Methods for Analysis and Design</title>
<link>https://hdl.handle.net/1721.1/153892</link>
<description>Gluing and Creasing Paper along Curves: Computational Methods for Analysis and Design
Mundilova, Klara
Curved geometries that can be obtained from flat sheets of material have many potential applications in design and engineering. In this thesis, we consider shapes achievable by joining planar patches of material along their curved boundaries, focusing specifically on curved-crease origami as a special case. Our research makes a threefold contribution.&#13;
&#13;
First, we extend the theory behind the computation of shapes consisting of developable patches. Based on classical differential geometry using curvature-based analysis, we consider the gluing of two patches with specified rulings and the pairwise joining of three patches with partial ruling information. We highlight a simplified computational approach for the case when the two joined patches are cylinders or cones, which is also applicable in the discrete setting. Additionally, we show how to compute a crease that connects a general patch with one composed of tangent-continuous cylinders and cones.&#13;
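For reference, the classical ruled-surface description that underlies these patch computations is standard differential geometry (textbook material, not specific to this thesis):

```latex
% A developable patch as a ruled surface with directrix c(u) and
% ruling direction r(u):
x(u,v) = c(u) + v\, r(u),
% Developability (vanishing Gaussian curvature) requires the rulings
% to be torsal, i.e.
\det\bigl(c'(u),\; r(u),\; r'(u)\bigr) = 0.
% Cones (c constant) and cylinders (r constant) satisfy this trivially,
% which is why the cylinder/cone case admits a simplified treatment.
```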
&#13;
Using this theory, we are able to extend the family of shapes that allow a parametric reconstruction. We provide examples of shapes that allow explicit parametrizations, parametrizations using elliptic integrals, and parametrizations that require numerical integration.&#13;
&#13;
Finally, we employ the developed theory to devise algorithmic design strategies for shapes with curved creases. Inspired by artistic origami, we offer a parametric design tool for the construction of origami spirals. Additionally, we consider two strategies that approximate a polyhedral shape with a modular curved-crease design. Finally, we provide a constructive linear subdivision scheme for regular developable planar quad meshes that correspond to a discretized curved-crease shape.&#13;
&#13;
With this thesis, we aim to make curved-crease origami more accessible for interdisciplinary research in various design and engineering contexts.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153892</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effective Teamwork Using a Theory of Mind Over Plans</title>
<link>https://hdl.handle.net/1721.1/153887</link>
<description>Effective Teamwork Using a Theory of Mind Over Plans
Zhang, Yuening
When agents execute a task, their minds focus on the plans, that is, what plans lead to the successful completion of the task, and what plans they are executing with the rest of the team. When there is no guarantee that agents have common knowledge of the task and can observe each other's actions, agents' beliefs about plans can often be incorrect or misaligned, causing them to make uninformed choices of actions that lead to task failure. My research investigates how an agent can be effective within a team, where an agent may be a teammate or a coach.&#13;
&#13;
Previous work has proposed agents that achieve effective coordination by reasoning about a shared flexible plan, assuming common knowledge and full observability of actions, or by reasoning about each other's beliefs about states when those assumptions do not apply. I claim that to be effective in teamwork, agents must reason about the beliefs of their teammates about plans. By recognizing when misconceptions and misalignment in their beliefs about plans might lead to failure and aligning their beliefs as necessary, agents can ensure execution success while minimizing the need for communication.&#13;
&#13;
This thesis provides: (1) A novel modeling framework based on dynamic epistemic logic to represent agents' nested beliefs about plans. This complements existing frameworks that focus on beliefs about states, so that agents can explicitly communicate about plans during coordination. (2) EPike, a computational model for an agent teammate that collaborates with others to execute a task, assuming that agents can observe each other's actions. By planning its actions online in a receding-horizon fashion using an MCTS algorithm, EPike dynamically adapts its actions to its teammates and communicates to align their beliefs. (3) TARS, a computational model for a team coordinator that monitors the team and intervenes when needed to ensure the team's success, when agents may not observe all the actions. By framing the intervention problem as a CC-POMDP, TARS maintains a probabilistic belief about the team's mental state, and only intervenes when the team's risk of failure exceeds a specified threshold. I show the effectiveness of EPike and TARS both in the empirical evaluation and in a VirtualHome simulation testbed.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153887</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Data-Based Perspective on Model Reliability</title>
<link>https://hdl.handle.net/1721.1/153886</link>
<description>A Data-Based Perspective on Model Reliability
Jain, Saachi
Neural networks can fail to generalize to real world data — particularly on subpopulations that might have been mislabelled, corrupted, or underrepresented during training. In such settings, the set of features that a model relies on, or its feature prior, often determines the model’s ultimate reliability. While many factors contribute to a model’s feature prior, recent evidence indicates that the training dataset often plays a pivotal role. This thesis therefore aims to build the foundation for a data-centric perspective on model reliability, by uncovering how the training dataset’s composition affects the model’s feature prior, and thus the mistakes the model tends to make. It advances this objective through two main thrusts: developing scalable tools for identifying model failure modes in large datasets and investigating the impact of pre-training data on the reliability of transfer learning models.&#13;
&#13;
In the first thrust, we develop techniques for uncovering meaningful patterns of model errors, especially in settings where manual exploration is prohibitively expensive. This includes building a framework for generating counterfactual images to debug model behavior as well as introducing a technique for automatically identifying failure modes by distilling them as directions in a latent space. We also propose a data-based approach to mitigate such failures at their source, by isolating training examples that drive a targeted bias.&#13;
&#13;
In the second thrust, we investigate the role of the pre-training data in the transfer learning setting, where a pre-trained model is adapted to a downstream task. Here, we first explore the problem of “bias transfer”, where biases from the pre-trained model can persist even after adapting the model to the downstream task. We then introduce transfer influences, a framework for pinpointing the counterfactual impact of a pre-training datapoint on the final prediction. This framework enables us to isolate (and remove) detrimental points from the pre-training dataset to improve transfer learning performance.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153886</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Broadband Semiconductor Laser Frequency Combs</title>
<link>https://hdl.handle.net/1721.1/153877</link>
<description>Development of Broadband Semiconductor Laser Frequency Combs
Zeng, Tianyi
The quantum cascade laser (QCL) is a compact, mature, and flexible coherent radiation source in both the terahertz (THz) and mid-infrared (midIR) frequency ranges. Many molecules have rotational-vibrational absorption bands in the midIR, making QCL-based dual-comb spectroscopy (DCS) systems ideal tools for field-deployable multi-species detection. Comb bandwidth and power ultimately determine the sensitivity and species distinguishability of the system. While the long-standing development effort in high-power midIR QCLs has led to demonstrations of watt-level midIR QCLs, there is a lack of an integrated, deterministic, broadly applicable, and efficient solution for expanding the comb bandwidth of an arbitrary broadband QCL gain medium.&#13;
&#13;
This thesis reports the complete characterization, design, and fabrication techniques required to transform broadband QCLs into frequency combs (FCs), with a demonstration of a broadband QCL FC spanning up to 113 cm⁻¹ centered at 9.5 μm. These techniques are not strictly limited to the frequency range or the material system of the demonstrated devices, and can be implemented in any other non-linear solid-state photonic system that can benefit from an integrated dispersion engineering solution.&#13;
&#13;
Three major innovations are proposed and implemented, one in each of the aspects mentioned above. In terms of characterization, a novel integrated segmented source-DUT (device under test) structure was implemented for the complete characterization of bias-dependent dispersion. Since the probing source is a self-aligned pulsed laser, the signal-to-noise ratio (SNR) is orders of magnitude higher than in conventional methods. With respect to the design approach, this thesis starts from the original double-chirped mirror (DCM) design heuristic [1], appends extra design rules that account for the guided-mode properties, and conducts parallel hybrid 1D-2D parametric optimization in a vast parameter space. Last but not least, through years of process optimization, we overcame several major challenges and fabricated high-aspect-ratio, low critical-dimension (CD) variation compensator structures as designed. The successful transformation of a broadband QCL from an incoherent state to a comb state verifies the effectiveness of the designed compensator and the precision and robustness of the fabrication process.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153877</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Queries with Disjunctions</title>
<link>https://hdl.handle.net/1721.1/153876</link>
<description>Optimizing Queries with Disjunctions
Kim, Albert
Despite decades of research into query optimization, optimizing queries with disjunctive predicate expressions remains a challenge. Solutions employed by existing systems (if any) are often simplistic and lead to much redundant work being performed by the execution engine. In this thesis, we present a two-pronged approach to address this problem: (1) For single-table queries, we provide a formal analysis of the problem for column-oriented engines, and based on our analysis, we present several algorithms capable of generating optimal predicate evaluation plans. Our algorithms are polynomial-time (unlike existing works), theoretically proven to produce optimal plans, and can be implemented in existing systems with minimal effort. (2) For multi-table queries, we present a novel form of query execution called tagged execution, which groups tuples into subrelations based on the predicates they satisfy and tags them with that information. Query operators can take advantage of the additional context provided by tags during runtime, and this allows them to eliminate much of the redundant work performed by traditional engines and even achieve pushdown optimizations for disjunctive predicates. Both our approaches greatly outperform existing solutions, sometimes by orders of magnitude, and we believe the combination of a theoretical approach (1) and a practical approach (2) helps meet the needs of most users.
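As an illustration of the tagging idea only, the following toy sketch shows tuples being tagged with the disjuncts they satisfy so that a later operator can reuse the tags instead of re-evaluating predicates. The table schema and predicates are invented for the example; this is not the thesis system.

```python
# Hedged sketch of "tagged execution": evaluate each disjunct once per row,
# record which disjuncts a row satisfies, and let downstream operators answer
# the disjunction (and partition work by subrelation) purely from the tags.

rows = [
    {"price": 5, "qty": 100},   # satisfies both disjuncts
    {"price": 50, "qty": 3},    # satisfies neither
    {"price": 50, "qty": 100},  # satisfies "bulk" only
    {"price": 5, "qty": 3},     # satisfies "cheap" only
]
disjuncts = {
    "cheap": lambda r: r["price"] < 10,
    "bulk": lambda r: r["qty"] > 50,
}

# Tag once: each row carries the subset of disjuncts it satisfies.
tagged = [(r, {name for name, pred in disjuncts.items() if pred(r)})
          for r in rows]

# A downstream operator answers "cheap OR bulk" from the tags alone,
# with no repeated predicate evaluation.
matching = [r for r, tags in tagged if tags]
print(len(matching))
```

Grouping rows by their tag sets is what yields the subrelations described above; operators that only need one disjunct can then skip the rows whose tags already rule it out.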
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153876</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ubiquitous Metadata: Design and Fabrication of Embedded Markers for Real-World Object Identification and Interaction</title>
<link>https://hdl.handle.net/1721.1/153872</link>
<description>Ubiquitous Metadata: Design and Fabrication of Embedded Markers for Real-World Object Identification and Interaction
Doğan, Mustafa Doğa
The convergence of the physical and digital realms has ushered in a new era of immersive experiences and seamless interactions. As the boundaries between the real world and virtual environments blur and result in a "mixed reality," there arises a need for robust and efficient methods to connect physical objects with their virtual counterparts. In this thesis, we present a novel approach to bridging this gap through the design, fabrication, and detection of embedded machine-readable markers.&#13;
 &#13;
The vision of mixed reality relies on mobile and wearable devices being aware of the surroundings to enhance real-world experiences with contextual information. For individual object identification, machine-readable tags such as barcodes and RFID labels are typically used. Barcodes, though cost-effective, tend to be obtrusive, less durable, and less secure than RFID labels. Regardless of their type, such tags are usually added to objects after fabrication, rather than being integrated into the original design.&#13;
 &#13;
This thesis attempts to overcome the shortcomings of traditional post-hoc augmentation processes by proposing novel tagging approaches that extract hidden, integrated features of objects and employ them as machine-detectable markers. Our research focuses on the design, implementation, and evaluation of comprehensive systems for embedding and interacting with embedded markers. We categorize the proposed tagging approaches into three distinct categories: natural markers, structural markers, and internal markers. Natural markers, such as those used in SensiCut, are inherent fingerprints of objects repurposed as machine-readable identifiers, while structural markers, such as StructCode and G-ID, leverage the structural artifacts in objects that emerge during the fabrication process itself. Internal markers, such as InfraredTag and BrightMarker, are embedded inside fabricated objects using specialized materials. Leveraging a combination of methods from computer vision, machine learning, computational imaging, and material science, the presented approaches offer versatile solutions for object identification, tracking, and interaction.&#13;
 &#13;
These markers, seamlessly integrated into real-world objects, effectively communicate an object’s identity, origin, function, and interaction, functioning as gateways to "ubiquitous metadata" – a concept where metadata is embedded into physical objects, similar to metadata in digital files. Across the different chapters, we demonstrate the applications of the presented methods in diverse domains, including product design, manufacturing, retail, logistics, education, entertainment, security, and sustainability. Furthermore, we discuss the challenges and opportunities in deploying embedded machine-readable markers at scale and in consumer products, enabling novel interactions in AR, and addressing privacy issues.&#13;
 &#13;
In conclusion, this thesis presents a comprehensive exploration of embedded machine-readable markers as a means to connect real-world objects to virtual worlds to achieve ubiquitous metadata. The thesis forges ahead with a future where objects come alive, environments become interactive, and virtual worlds seamlessly merge with our everyday lives.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153872</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>p-GaN Platform for Next-Generation GaN Complementary Transistors and Circuits</title>
<link>https://hdl.handle.net/1721.1/153863</link>
<description>p-GaN Platform for Next-Generation GaN Complementary Transistors and Circuits
Xie, Qingyun
Gallium nitride (GaN) integrated circuits (ICs) are receiving increasing attention because they offer compactness, reduced parasitics, and higher performance compared to discrete transistors or printed circuit board (PCB) integration. The p-GaN platform exhibits tremendous potential in power ICs and, more recently, in high-temperature (500 °C) digital circuits. While the initial demonstrations offer promising results, several challenges remain. Notably, the lack of a monolithically integrated GaN complementary technology impedes the advancement of GaN power ICs.&#13;
&#13;
This thesis aims to enhance the p-GaN platform (the GaN-CMOS platform, where CMOS denotes complementary metal-oxide-semiconductor) by developing the next generation of GaN complementary technology (p-channel and n-channel field-effect transistors (FETs)). Based on the GaN-CMOS platform, the aggressive scaling of novel complementary transistors (self-aligned-gate p-FET and self-aligned metal/p-GaN-gate HEMT) is pursued. Alternative metallization schemes and a new technology for gate recess in GaN p-FETs are demonstrated. The unique characteristics of the p-FET are revealed through a combination of experimental measurements and TCAD simulations. The p-FET (based on the GaN-CMOS platform) and p-GaN-gate n-FETs are analyzed for high-temperature operation. Lastly, in order to aid the future design of more complex circuits based on the p-GaN platform, a device-to-circuit CAD framework was developed for GaN n-FET circuits and validated at high temperatures up to 500 °C.&#13;
&#13;
To the best of the author’s knowledge, the above results represent the state-of-the-art in GaN complementary technology and GaN electronics based on the p-GaN platform. These findings are expected to deliver wider impact in the areas of power, RF/mixed-signal, and high temperature electronics.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153863</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Blocks for Human-AI Alignment: Specify, Inspect, Model, and Revise</title>
<link>https://hdl.handle.net/1721.1/153862</link>
<description>Building Blocks for Human-AI Alignment: Specify, Inspect, Model, and Revise
Booth, Serena Lynn
The learned behaviors of AI systems and robots should align with the intentions of their human designers. In service of this goal, people—especially experts—must be able to easily specify, inspect, model, and revise AI system and robot behaviors. These four interactions are critical building blocks for human-AI alignment. In this thesis, I study each of these problems. First, I study how experts write reward function specifications for reinforcement learning (RL). I find that these specifications are written with respect to the RL algorithm, not independently, and I find that experts often write erroneous specifications that fail to encode their true intent, even in a trivial setting [22]. Second, I study how to support people in inspecting the agent’s learned behaviors. To do so, I introduce two related Bayesian inference methods to find examples or environments which invoke particular system behaviors; viewing these examples and environments is helpful for conceptual model formation and for system debugging [25, 213]. Third, I study cognitive science theories that govern how people build conceptual models to explain these observed examples of agent behaviors. While I find that some foundations of these theories are employed in typical interventions to support humans in learning about agent behaviors, I also find there is significant room to build better curricula for interaction—for example, by showing counterexamples of alternative behaviors [24]. I conclude by speculating about how these building blocks of human-AI interaction can be combined to enable people to revise their specifications, and, in doing so, create better aligned agents.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153862</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating Dynamical Systems from Data</title>
<link>https://hdl.handle.net/1721.1/153861</link>
<description>Simulating Dynamical Systems from Data
Alomar, Abdullah
The ever-increasing availability of data from dynamical systems offers an opportunity for automated data-driven decision-making in various domains. However, a significant barrier to realizing this potential is the set of issues inherent in these datasets: high dimensionality, noise, sparsity, and confounding. In this thesis, we propose methods to exploit the richness in the structure of such datasets to overcome the above-mentioned problems while undertaking various inference tasks.&#13;
&#13;
Central to these methods is a key factorization characterizing the function governing the dynamics. Specifically, we harness trajectories from different, yet related, dynamical systems. We posit that the function governing the dynamics of each individual system can be factorized into a linear combination of latent separable functions of the state and action. Crucially, these latent functions are shared across the different dynamical systems. This principled factorization structure provides guidance on how to devise theoretically sound methods that perform well empirically across a variety of tasks. These tasks include time series imputation and forecasting, change point detection, reinforcement learning, and trace-driven simulation in networked systems.&#13;
&#13;
Exploiting the principled factorization structure has paved the way for the contributions we make in different tasks. First, we propose and analyze algorithms for mean and variance estimation and forecasting of time series with varying noise models, data missingness patterns, and assumptions on the factorization structure. These algorithms employ variants of the classical multivariate singular spectrum analysis (mSSA) algorithm and establish a link between time series analysis and Matrix/Tensor Completion. Second, we develop and analyze an algorithm for change point detection inspired by the factorization structure and based on the cumulative sum (CUSUM) statistic. This work extends the analysis of CUSUM statistics traditionally done for the setting of independent observations. Finally, we explore the potential gains of considering the factorization structure in simulating Markov Decision Processes (MDPs). We then build upon this approach to accommodate MDPs with time-varying parameters with the specific application of trace-driven simulation in networked systems.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153861</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Graph Grammars: Towards Custom Robots for Every Task</title>
<link>https://hdl.handle.net/1721.1/153850</link>
<description>Robot Graph Grammars: Towards Custom Robots for Every Task
Zhao, Allan
As robots find broader applications outside factory floors, they face an increasing number of challenges. For example, they must accommodate rugged terrain, limited battery capacity, and complex dynamics. Existing robots are largely designed by hand to meet a given set of specifications. While highly capable, these manually-designed robots tend to leave performance on the table. These difficulties have motivated research into automatic robot design tools. However, early tools were often limited in the range of robot topologies they could explore. Current graph-based robot representations can expand the space of possible designs, but it is not always clear how the resulting designs can be fabricated.&#13;
&#13;
To enable efficient design exploration and ensure fabricability, we propose graph grammars as a universal robot design representation. Graph grammars use rewriting rules to incrementally add complexity or select among distinct design alternatives. Because only fabricable components and connections are expressed in the grammar, the generated robot topologies are valid by construction. Through recursion and branching, graph grammars can also generate a large variety of possible designs. To tackle this expansive search space, we propose a specialized learning-based search algorithm called Graph Heuristic Search (GHS). GHS focuses limited simulation resources on the most promising designs. We compare GHS to random search and Monte-Carlo tree search baselines, showing that GHS finds higher-performing designs in less wall-clock time. We combine graph grammars and GHS with other techniques such as differentiable simulation to efficiently optimize multiple types of mobile robots. In doing so, we show that graph grammars are a principled yet general design representation for robot co-design. Their efficiency and versatility bring us one step closer to the dream of generating custom robots for every task.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153850</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simulated Annealing Approach to Designing Optimal Decision Trees for Classification, Prescriptive, and Survival Analysis</title>
<link>https://hdl.handle.net/1721.1/153845</link>
<description>A Simulated Annealing Approach to Designing Optimal Decision Trees for Classification, Prescriptive, and Survival Analysis
Sujichantararat, Suleeporn
A binary decision tree is a highly interpretable machine learning model, as humans can easily understand how a prediction is made by answering a series of binary questions. Earlier work has provided a powerful framework for constructing optimal decision trees by utilizing multiple random warm starts and local search to iteratively improve each warm start until a locally optimal decision tree is found. However, local search does not guarantee global optimality if the number of random warm starts does not exceed the number of local minima. Hence, we propose to incorporate simulated annealing into decision tree construction, since accepting some worse transformations can lead to a better final model. We focus on three problem domains: classification, prescriptive analysis, and survival analysis, producing Optimal Classification Trees with Simulated Annealing (OCT-SA), Optimal Policy Trees with Simulated Annealing (OPT-SA), and Optimal Survival Trees with Simulated Annealing (OST-SA). This approach further improves on OCT, OPT, and OST by probabilistically allowing a tree to move to a tree with a worse objective value. A challenge in designing OCT-SA, OPT-SA, and OST-SA is to define an appropriate simulated annealing cooling schedule that leads to a globally optimal solution in practical runtime. We evaluate OCT-SA, OPT-SA, and OST-SA on widely used real-world benchmark datasets. The evaluation metrics are out-of-sample performance on unseen test datasets: Gini impurity scores for classification, mean reward scores for prescriptive analysis, and local full likelihood scores for survival analysis. The results demonstrate that our simulated annealing framework benefits the decision tree construction process.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153845</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Efficient Machine Learning with Applications to Cardiology</title>
<link>https://hdl.handle.net/1721.1/153841</link>
<description>Data-Efficient Machine Learning with Applications to Cardiology
Raghu, Aniruddh
Deep learning models have demonstrated impressive capabilities in many settings including computer vision, natural language generation, and speech processing. However, an important shortcoming of these models is that they often need to be trained on large datasets in order to be most effective. In domains such as medicine, large datasets are not always available, and thus there is a need for data-efficient models that perform well even in limited data regimes. In this thesis, motivated by this need, we present four contributions to data-efficient machine learning: (1) analyzing and improving few-shot learning, where we study a popular few-shot learning algorithm (Model Agnostic Meta-Learning) and provide insights as to why it is effective, proposing a simplified version that offers substantial computational benefits; (2) improving supervised learning on small clinical datasets of electrocardiograms (ECGs), where we develop a new data augmentation strategy for ECGs that helps boost performance on a range of predictive problems; (3) improving pre-training through the use of nested optimization, introducing an efficient gradient based algorithm to jointly optimize model parameters and pre-training algorithm design choices; and (4) developing a new self-supervised learning pipeline for complex clinical time series, where the design of the pipeline is driven by the multimodal, multi-dimensional nature of real-world clinical time series data. Unifying several of these contributions is the application area of cardiovascular medicine, a setting in which machine learning has the potential to improve patient care and outcomes.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153841</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Language and Logic for Programming and Reasoning with Partial Observability</title>
<link>https://hdl.handle.net/1721.1/153840</link>
<description>A Language and Logic for Programming and Reasoning with Partial Observability
Atkinson, Eric Hamilton
Computer systems are increasingly deployed in partially-observable environments, in which the system cannot directly determine the environment’s state but receives partial information from observations. When such a computer system executes, it risks forming an incorrect belief about the true state of the environment. For example, the Mars Polar Lander (MPL) is a lost space probe that crashed because its control software believed it was on the Martian surface when it was actually 40m high, and as a result, it turned off its descent engine too early. Developers need better software development tools to prevent such accidents.&#13;
&#13;
In this dissertation, I will present programming language tools that enable developers to deliver correct software in partially-observable environments. In particular, I will present belief programming, a new type of programming language in which developers write a model of the partial observability in the environment. The language produces an executable state estimator, which is a function that maps environmental observations to estimates of the environment’s true state. I have implemented the prototype belief programming language BLIMP, and used BLIMP to implement several example programs – including an engine controller for the MPL. I will also present Epistemic Hoare Logic (EHL), a program logic for belief programs that enables developers to reason about properties of the resulting state estimators. I have used EHL to prove that the BLIMP version of the MPL does not have the error that caused the original MPL to crash. Furthermore, I will present semi-symbolic inference, a technique that provides more efficient implementations of belief programming languages. Across a range of benchmarks, my performance experiments show that a semi-symbolic BLIMP implementation achieves speedups of 515x-58,919x over a naïve BLIMP implementation. Together, the contributions of belief programming, EHL, and semi-symbolic inference enable developers to focus on modeling the partial observability in the environment, and provide programming language support for automatically generating and reasoning about efficient state estimators.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153840</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Reconfigurable Vision Models</title>
<link>https://hdl.handle.net/1721.1/153839</link>
<description>Learning Reconfigurable Vision Models
Gonzalez Ortiz, Jose Javier
Over the past decade, deep learning methods have emerged as the predominant approach in a wide variety of fields, such as computer vision, natural language processing, and speech recognition. However, these models have also been notorious for their high computational costs and substantial data requirements. Furthermore, they often present significant challenges to non-technical users, who most often lack the expertise needed to effectively tailor these models to their specific applications.&#13;
&#13;
In this thesis, we tackle these challenges by exploring amortizing the cost of training models with similar learning tasks. Instead of training multiple models independently, we propose learning a single, reconfigurable model that effectively captures the spectrum of underlying problems. Once trained, this model can be dynamically reconfigured at inference time, adapting its properties without incurring additional training costs.&#13;
&#13;
First, we introduce Scale-Space Hypernetworks, a method for learning a continuum of CNNs with varying efficiency characteristics. This enables us to characterize an entire Pareto accuracy-efficiency curve of models by training a single hypernetwork, dramatically reducing training costs. Then, we characterize a previously unidentified optimization problem in hypernetwork training, and propose a revised hypernetwork formulation that leads to faster convergence and more stable training. Lastly, we present UniverSeg, an in-context learning method for universal biomedical image segmentation. Given a query image and an example set of image-label pairs that define a new segmentation task, it produces accurate segmentation without additional training, outperforming several related methods on unseen segmentation tasks.&#13;
&#13;
We empirically demonstrate the validity of our methods in real-world applications, focusing on computer vision and biomedical imaging, where we assess a wide array of tasks and datasets. In all of these works we find that it is not only feasible to train reconfigurable models but that in doing so, we achieve substantial efficiency gains both at training and at inference time.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153839</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis-Aided Development of Distributed Programs</title>
<link>https://hdl.handle.net/1721.1/153838</link>
<description>Synthesis-Aided Development of Distributed Programs
Kuraj, Ivan
Despite many advances in programming models and frameworks, writing distributed programs remains hard. Even when the underlying logic is inherently sequential and simple, addressing distributed aspects results in complex cross-cutting code that undermines such simplicity. While the sequential computation model of programs represents a simple and natural form for expressing functionality, corresponding distributed implementations need to break this model. One of the most challenging aspects, one that impedes separation of concerns, significantly increases the difficulty of reasoning about distributed programs, and consequently complicates their implementation, is the consistency model.&#13;
&#13;
This thesis examines the possibility of using the sequential model for writing distributed programs, characterizes the requirements for making that possible, and presents a synthesis approach that allows programmers to automatically generate distributed implementations from behaviors given as sequential programs and orthogonal specifications of distributed aspects. The end result is a programming system in which programmers define sequential behaviors and separately specify data allocation, reactivity, and the underlying network through orthogonal specifications, as well as integrity as a set of high-level semantic properties. Given such specifications, the system automatically finds an optimal consistency model needed to maintain the given integrity and emits low-level message-passing implementations. The system combines two novel techniques into a two-step process: first, it statically infers optimal consistency requirements for executions of bounded sets of operations, and then it uses the inferred requirements to parameterize a new distributed protocol that relaxes operation reordering at run time when it is safe to do so. We demonstrate the system’s expressiveness and examine run-time performance impact on benchmarks from prior work, as well as on new benchmarks.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153838</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Deep Learning Computing: From TinyML to LargeLM</title>
<link>https://hdl.handle.net/1721.1/153837</link>
<description>Efficient Deep Learning Computing: From TinyML to LargeLM
Lin, Ji
Deep learning has prevailed in various fields and fundamentally changed human society. Efficiency is the key factor in democratizing deep learning and broadening its applications. It is increasingly important as Moore’s law slows down while the model size scaling speeds up. We need efficient algorithms and systems to help us bridge the gap.&#13;
&#13;
In this thesis, we will discuss techniques to improve the efficiency of deep learning by removing redundancies. We study efficient deep learning computing at the two extremes of scaling: tiny machine learning (TinyML) and large language models (LLMs). TinyML aims to run deep learning models on low-power IoT devices with tight memory constraints. We explored a system-algorithm co-design approach to remove redundant memory usage and enable real-life applications on commercial microcontrollers, achieving a milestone ImageNet accuracy of 70% for the first time. We further extend the solution from inference to training and enable on-device learning under only 256KB of memory. Similar to TinyML, the gigantic model sizes of LLMs also exceed the hardware capability of even the most advanced GPUs. We developed post-training quantization schemes for different serving workloads to reduce redundant bits of weights and activations, enabling W8A8 quantization (SmoothQuant) for compute-bound inference and W4A16 quantization (AWQ) for memory-bound inference. We further develop TinyChat, an efficient and Python-native serving system, to realize the speedup from quantization. Finally, we will discuss some domain-specific optimization opportunities, including efficient video recognition with the Temporal Shift Module (TSM) and image generation with Anycost GANs, where we reduce application-specific redundancies with specialized model designs.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153837</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Out of Distribution Database Workloads</title>
<link>https://hdl.handle.net/1721.1/153835</link>
<description>Machine Learning for Out of Distribution Database Workloads
Negi, Parimarjan
DBMS query optimizers are designed using several heuristics to make decisions, such as simplifying assumptions in cardinality estimation, or cost model assumptions for predicting query latencies. With the rise of cloud-first DBMS architectures, it is now possible to collect massive amounts of data on executed queries. This gives a way to improve the DBMS heuristics using models that utilize this execution history. In particular, such models can be specialized to particular workloads; thus, it may be possible to do much better than average by learning patterns, such as that some joins are always unexpectedly slow, or that some tables are always much larger than expected. This can be very beneficial for performance; however, deploying ML systems in the real world has a catch: it is hard to avoid Out-of-Distribution (OoD) scenarios in real workloads. ML models often fail in surprising ways in OoD scenarios, and this is an active area of research in the broader ML community. In this thesis, we introduce several such OoD scenarios in the context of database workloads, and show that ML models can easily fail catastrophically in such cases. These range from new query patterns, such as a new column or a new join, to execution time variance across different hardware and system loads. In each case, we use database-specific knowledge to develop techniques that yield ML models with more reliable and robust performance in OoD settings.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153835</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems and Techniques for Efficient Real-World Graph Analytics</title>
<link>https://hdl.handle.net/1721.1/153832</link>
<description>Systems and Techniques for Efficient Real-World Graph Analytics
M. F. da Trindade, Joana
Graphs are a natural way to model real-world entities and relationships between them, ranging from social networks and biological datasets to cloud computing infrastructure data lineage graphs. Queries over these large graphs often involve expensive subgraph traversals and complex analytical computations. Furthermore, real-world graphs often encode relationships that evolve through time, which are modeled as a temporal graph, i.e., one in which edges are associated with time attributes (such as start time and duration). These real-world graphs are often substantially more structured than a generic vertex-and-edge model would suggest, but this insight has remained mostly unexplored by existing graph engines for graph query optimization purposes. In addition, most temporal graph processing systems remain inefficient as they rely on distributed processing even for graphs that fit well within a commodity server’s available storage. To this end, we present two distinct systems tailored for optimizing large-scale real-world graph processing. The first system, Kaskade, targets the challenges of efficient query evaluation by leveraging structural properties of graphs and queries to infer materialized graph views, speeding up query evaluation by rewriting queries in terms of views it deems beneficial based on input graph and query characteristics. The second system, Kairos, introduces selective indexing, a technique that chooses a subset of target vertices to index based on characteristics of the underlying temporal graphs and input queries. This system further employs a highly-specialized parallel data structure aimed at in-memory storage and fast retrieval of temporal edges. Finally, Kairos is built upon Ligra, the de facto benchmark system in shared-memory parallel graph processing, offering similar advantages and a familiar API to application programmers. Both systems offer speedups of up to 50-60x when compared with alternative baselines, and introduce novel classes of query optimization techniques aimed at efficient real-world graph analytics.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153832</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application-Aware Scheduling Architectures for Mobile Systems</title>
<link>https://hdl.handle.net/1721.1/153831</link>
<description>Application-Aware Scheduling Architectures for Mobile Systems
Balasingam, Arjun
Architects of mobile systems have long optimized schedulers for platform-level objectives, such as system throughput or operating cost. However, these objectives could be at odds with performance indicators that applications or users of the platform might care about. This thesis proposes two-tiered architectures to realize app-aware resource allocation policies for mobile systems. The first level decomposes app-level objectives into platform-level objectives over scheduling rounds. The second level leverages classical schedulers, designed for platform objectives, as building blocks to guide the optimizer toward app-level objectives. We apply this design paradigm to build resource allocation systems and algorithms in two domains: mobility platforms and cellular networks.&#13;
&#13;
Mobius allocates tasks from different customers to vehicles in mobility platforms, which are used for food and package delivery, ridesharing, and mobile sensing. Over rounds, Mobius invokes vehicle routing solvers that maximize task completion throughput to compute schedules that are fair to different customers using the platform. On a trace of Lyft rides in New York City, Mobius computes max-min fair online schedules involving 200 vehicles and over 16,000 tasks, while achieving only 10% less throughput than a classical vehicle routing solver. &#13;
&#13;
Zipper is a radio resource scheduler that fulfills throughput and latency service-level agreements for individual apps connected to a cellular network. Zipper bundles apps into network slices, and leverages classical schedulers that maximize base station throughput to compute resource schedules for each slice that comply with each app's requirements. On a typical workload consisting of video streaming, conferencing, IoT, and virtual reality apps, Zipper reduces tail throughput and latency violations, measured as the ratio of the violation to the app's requested level, by 9x compared to traditional base station schedulers.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153831</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Optical Hardware for Neural Network Inference Acceleration</title>
<link>https://hdl.handle.net/1721.1/153830</link>
<description>Large-Scale Optical Hardware for Neural Network Inference Acceleration
Bernstein, Liane
Artificial deep neural networks (DNNs) have revolutionized tasks such as automated classification and natural language processing. To boost accuracy and handle more complex workloads, DNN model sizes have grown exponentially over the last decade, outpacing improvements in digital electronic microprocessor efficiency. This mismatch limits DNN performance and contributes to soaring data center energy costs. Optical hardware for deep learning (optical neural networks, or ONNs) can theoretically increase DNN processing efficiency; however, the feasibility of large-scale, fully programmable and reconfigurable ONNs has not yet been comprehensively shown in experiments.&#13;
&#13;
This thesis reports our demonstrations of ONNs that classify ~1000-element input vectors using standard DNN layers in inference without hardware modeling or retraining. In a first project, we used digital optical links to replace copper wires for transmitting and copying data to electronic multipliers. Our experimental implementation showed an MNIST classification accuracy within 0.6% of the digital electronic ground truth. We estimated that this 'digital ONN' could reduce energy consumption for long data transfer lengths, but not in tightly packed electronic multiplier arrays. Therefore, in a second project, we expanded upon this work by performing reconfigurable optical multicast and analog optoelectronic weighting to compute DNN layer outputs in a single shot. Our proof-of-concept system yielded an MNIST classification accuracy of 96.7% (boosted to 97.3% with weight fine-tuning) with respect to the ground-truth accuracy of 97.9%. We calculated that a near-term optimized version of this system could lower energy consumption and latency by 1-2 orders of magnitude compared to a state-of-the-art digital electronic systolic array. These findings suggest a paradigm shift towards optoelectronic DNN accelerators with lower resource utilization that could enable the next generation of deep learning.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153830</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Robust And Practical Neural Video-Conferencing</title>
<link>https://hdl.handle.net/1721.1/153826</link>
<description>Towards Robust And Practical Neural Video-Conferencing
Sivaraman, Vibhaalakshmi
Video conferencing systems suffer from poor user experience when network conditions deteriorate because current video codecs cannot operate at extremely low bitrates or under lossy network conditions without frame corruption or video freezes. To tackle the low-bitrate problem, several neural alternatives have been proposed that reconstruct talking-head videos using sparse representations of each frame, such as facial landmark information. However, these approaches produce poor reconstructions in scenarios with major movement or occlusions over the course of a call, and do not scale to higher resolutions. To cope with packet loss, most systems use retransmissions or Forward Error Correction (FEC). Retransmissions are impractical in real-time settings due to their slow turnaround times, while FEC requires extensive tuning to ensure the right level of redundancy. Instead, this dissertation develops a new paradigm for video conferencing using a suite of generative techniques based on super-resolution and attention mechanisms to improve the video conferencing experience across both classes of poor network conditions.&#13;
&#13;
First, we present Gemino, a new neural compression system for video conferencing based on a novel high-frequency-conditional super-resolution pipeline. Gemino upsamples a very low-resolution version of each target frame while enhancing high-frequency details (e.g., skin texture, hair, etc.) based on information extracted from a single high resolution reference image. Such a design overcomes the robustness issues of models that rely on only facial landmarks under extreme motion. Gemino’s design includes a multi-scale architecture that runs different components of Gemino at different resolutions, allowing it to scale to resolutions comparable to 720p. We also personalize the model to learn specific details of each person, achieving much better fidelity at low bitrates. We implement Gemino atop aiortc, an open-source Python implementation of WebRTC, and show that it operates on 1024x1024 videos in real-time on a Titan X GPU, and achieves 2.2-5x lower bitrate than traditional video codecs for the same perceptual quality. &#13;
&#13;
Since Gemino is not designed to leverage high-resolution information from multiple references, we further design Gemino (Attention), a version of Gemino that computes "attention" or a weighted correspondence between regions of different reference frames and the target frame. This attention design is in contrast to the optical flow framework within Gemino that is restricted to merely linear translations from regions of a single reference frame to its target region. Such an attention-based design is, instead, able to combine information across different references and use the best parts of each reference frame to improve the fidelity of the reconstruction.&#13;
&#13;
Lastly, we develop Reparo, a loss-resilient generative codec for video conferencing that reduces the duration and impact of video freezes during outages. Reparo’s compression does not depend on temporal differences across frames, making it less brittle in the event of packet loss. Reparo automatically generates missing information when a frame or part of a frame is lost, based on the data received so far, and the model’s knowledge of how people look, dress, and interact in the visual world.&#13;
&#13;
Together, these approaches suggest an alternate future for video conferencing powered by neural codecs that can operate in extremely low-bandwidth scenarios as well as under lossy network conditions to enable a smoother video conferencing experience.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153826</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating the impulse response of linear shift-variant, image degrading systems.</title>
<link>https://hdl.handle.net/1721.1/153808</link>
<description>Estimating the impulse response of linear shift-variant, image degrading systems.
Filip, Anthony Edmund.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Leaf 137 used twice. Vita.; Bibliography: leaves 137-145.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153808</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inverse optimization applied to fixed charge models</title>
<link>https://hdl.handle.net/1721.1/153804</link>
<description>Inverse optimization applied to fixed charge models
Sempolinski, Dorothy Elliott.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1981; Vita.; Bibliography: leaves 97-98.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153804</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>GAG inhibition of collagen-platelet interaction</title>
<link>https://hdl.handle.net/1721.1/153802</link>
<description>GAG inhibition of collagen-platelet interaction
Silver, Frederick Howard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1977; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153802</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scanning desorption of small molecules from model biological surfaces.</title>
<link>https://hdl.handle.net/1721.1/153800</link>
<description>Scanning desorption of small molecules from model biological surfaces.
Silver, Bruce (Bruce Richard)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1977; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153800</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the propagation of acoustic waves and instabilities in inhomogeneous media</title>
<link>https://hdl.handle.net/1721.1/153798</link>
<description>A study of the propagation of acoustic waves and instabilities in inhomogeneous media
Maling, George C.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1963; Vita.; Includes bibliographical references (leaves 127-129).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153798</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanosatellite Hyperspectral Imaging Performance Modeling for Ocean Color Detection.</title>
<link>https://hdl.handle.net/1721.1/153796</link>
<description>Nanosatellite Hyperspectral Imaging Performance Modeling for Ocean Color Detection.
Payne, Cadence
Earth’s oceans are an integral sub-system of our planet, an invaluable resource, and an informative proxy for understanding human-related climate impact. Ocean color observations are particularly useful for monitoring and modeling phytoplankton, valuable organisms that form the basis of the marine food web, produce an estimated 50-85% of breathable oxygen, and provide the largest and most efficient mechanism for oceanic carbon capture. Monitoring the behavioral response of phytoplankton to the impact of increased anthropogenic input at a scale observable by spacecraft provides information on ocean health at large. More effective space-based monitoring requires increased spectral, temporal, and spatial resolution compared with currently available performance from legacy instruments such as MODIS, MERIS, and SeaWiFS. Data coverage without temporal gaps is necessary for monitoring short- and long-term trends, and high spectral resolution is required for taxonomic species discrimination and identification of in-water optical constituents in turbid, coastal regions. Nanosatellites hosting ocean-sensing hyperspectral imagers may offer gap-filling solutions by providing complementary measurements with high spectral, spatial, and temporal resolution that align spectrally with legacy data.&#13;
&#13;
This work investigates the utility of nanosatellite solutions for targeting the ocean color observational needs of increased spatial coverage and spectral resolution. Two reference nanosatellite architectures, AEROS and HYPSO-1, are evaluated to derive sensor performance with respect to measurement requirements and sensitivities. Each mission hosts an ocean-sensing hyperspectral imaging payload with a unique architecture, and their performance represents a benchmark for nanosatellite solutions. In this thesis, the capabilities of nanosatellite hyperspectral imagers are analyzed by using environment models and developing detailed instrument simulations. Synthetic atmospheric scenes are produced for three regions using Py6S, an open-source radiative transfer model. Model outputs provide top-of-atmosphere spectral radiance across a tradespace of environmental factors and viewing geometries. Regions are selected for their global climate relevance and proximity to the coast, as coastal observations require higher spectral resolution. The three target regions are geographically distributed to represent a diverse set of potential nanosatellite imaging scenes to assess performance for both ideal and non-optimal imaging conditions.&#13;
&#13;
A radiometric performance model is developed to determine the nanosatellite hyperspectral imagers’ signal-to-noise ratio for all generated scenes, enabling the identification of imaging and operational constraints. The imagers’ noise equivalent spectral radiance is derived to determine the imaging sensitivity to input signals and minimal detection limits. Performance is contrasted between the two reference missions, and each mission is evaluated for their compliance with identified measurement needs. These needs are captured by a set of mission, system, and payload requirements derived from community reports, constituent retrieval algorithms, and lessons learned from legacy missions. These requirements are scaled for compatibility with the nanosatellite platform to enable assessments of design limitations and potential opportunities for improvement. Model derivation and results are discussed and design limitations of the nanosatellite platforms are identified.&#13;
&#13;
The results of this thesis demonstrate the challenges of satisfying measurement needs designed for state-of-the-art ocean color imagers with the nanosatellite platform. However, it is found that both the AEROS and HYPSO-1 nanosatellite missions achieve partial compliance with the SNR requirement of 200 for VIS/NIR bands with the implementation of spectral binning. Both missions also achieve partial compliance with the noise-equivalent spectral radiance levels desired for VIS/NIR bands, and the HYPSO-1 mission is fully compliant with the maximum required value. Recommendations for future improvements, including imaging system design modifications that support high SNR in high-priority VIS/NIR and SWIR bands, as well as the necessity of a combined ocean surface-atmospheric radiative transfer model for environmental modeling are provided.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153796</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perception-aware planning for differentially flat robots</title>
<link>https://hdl.handle.net/1721.1/153794</link>
<description>Perception-aware planning for differentially flat robots
Murali, Varun
The central question of this thesis can be stated as follows: “Can we design computationally efficient algorithms that are capable of robustly navigating complex, unstructured environments at operational speeds?” Visual-inertial navigation in perceptually degraded environments is a challenging problem for robotic vehicles. Because camera and inertial measurement unit (IMU) pairings are ubiquitous in consumer electronics, they form an ideal sensor suite for applications on the edge, ranging from large-scale search and rescue and autonomous driving to home robots such as robotic vacuum cleaners. In general, the navigation problem for robots can be written in the form of the sense-think-act framework for autonomy. The “sense” part is typically performed in this context as bearing measurements to visually salient locations in the environment; the “think,” or planning, part then uses the estimate of the ego-state from the sensors and produces a (compactly represented) trajectory from the current location to the goal. Finally, the “act” part, or controller, follows the plan. This division leaves several interesting problems at the intersections of the parts of the framework. For instance, consider the problem of navigating a relatively unknown environment: if future percepts are not carefully planned, the robot may enter a room with very few visual features, degrading the quality of state estimation, which in turn can result in poor closed-loop performance. Quadrotors are a class of robots whose size and weight dictate further constraints on their sensors. These constraints make camera-IMU pairings ideal for this type of aerial vehicle and bring further interesting challenges in terms of computational load for embedded systems. To this end, we first study the problem of modeling visibility on the camera canvas and incorporating these heuristics into an optimal-control problem. 
We then study the problem of optimizing the robot's speed along a path with visible features such that traversal time is minimized and the constraints are satisfied. As we reach the limits of model-based optimization, we employ machine learning to jointly optimize speed and uncertainty along the trajectory in a numerically stable fashion.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153794</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>City-scale Cross-view Geolocalization with Generalization to Unseen Environments</title>
<link>https://hdl.handle.net/1721.1/153793</link>
<description>City-scale Cross-view Geolocalization with Generalization to Unseen Environments
Downes, Lena M.
Cross-view geolocalization, a supplement or replacement for GPS, provides an estimate of an agent's global position by matching a local ground image to an overhead satellite image. It is challenging to reliably match these two sets of images in part because they have significantly different viewpoints. Existing works have demonstrated geolocalization in constrained scenarios over small areas using panoramic cameras, yielding methods that have limited generalization to unseen environments or conditions and that do not quantify uncertainty. This thesis details Wide-Area Geolocalization (WAG) and Restricted FOV Wide-Area Geolocalization (ReWAG) that combine a neural network with a particle filter to achieve global position estimates for a moving agent in a GPS-denied environment while scaling efficiently to city-sized regions in unseen environments and working with either panoramic or non-panoramic cameras. One contribution is a trinomial loss function that enables accurate and computation-efficient localization across city-scale search areas of nearly 300 km^2 in size by improving image retrieval on the off-center image pairs that result from a coarsely discretized satellite image database. Another contribution is a computationally efficient method to incorporate pose information with input image pairs, which improves localization accuracy with non-panoramic cameras and off-center ground images. An additional contribution is the GKL uncertainty measure for localization outputs, which enables detection of particle filter false convergence through characterization of the particle distribution. The final contribution is a demonstration of ReWAG's ability to generalize across different times of day, seasons, weather, and cameras on data collected from a moving car in Cambridge, Massachusetts, as well as the public release of a challenging imagery dataset collected on this vehicle platform. 
WAG and ReWAG reduce localization error from over 1 km to less than 100 m while performing particle filter updates with less than 1% of the computation required by previous approaches.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153793</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoemulsion-Loaded Hydrogels For Advanced Pharmaceutical Formulations</title>
<link>https://hdl.handle.net/1721.1/153792</link>
<description>Nanoemulsion-Loaded Hydrogels For Advanced Pharmaceutical Formulations
Chen, Liang-Hsun
Pharmaceutical formulation plays an important role in transforming a drug substance into the final drug product taken by a patient. It involves processes that combine an active pharmaceutical ingredient (API) and a mixture of inactive excipients into a final drug product with the desired therapeutic effects and physical properties. Among various drug products, oral solid dosage forms are the most preferred product forms dominating the market because of their high patient compliance and wide acceptance. However, conventional oral drug formulations typically require costly multistep manufacturing, and poor bioavailability of hydrophobic APIs remains a persistent challenge in many formulations. To address these limitations, we utilize two promising building blocks, nanoemulsions and hydrogels, to design new materials that enable more efficient and effective formulations of oral drug products with high quality and versatile release.&#13;
&#13;
First, we present a method to encapsulate functional nanoemulsions in alginate capsules for more versatile delivery of lipophilic active ingredients. The nanoemulsions are intrinsically viscous, ensuring the formation of spherical capsules and large encapsulation efficiency. Quantitative release analyses show that the capsule systems possess a tunable, delayed-burst release. The proposed encapsulation methodology can be further generalized to other functional nanoemulsions with various active ingredients, oil phases, nanodroplet sizes, and chemically crosslinked inner hydrogel cores.&#13;
&#13;
Second, we design a new thermogelling methylcellulose (MC) nanoemulsion that can be efficiently formulated into composite solid dosage drug products with well-controlled API nanocrystals embedded in a MC matrix. The composite drug products possess a fast and tunable release performance because of the fast-eroding MC matrix and successful formation of API nanocrystals. Using the versatile thermal processing approach, we formulate the nanoemulsion into various dosage forms (nanoparticle suspension, drug tablet, and oral thin film) in a manner that avoids nanomilling.&#13;
&#13;
Finally, we develop thermogelling hydroxypropyl methylcellulose (HPMC) nanoemulsions as robust templates to formulate oral films loaded with poorly water-soluble drug nanoparticles. The thermally gelled network effectively immobilizes the oil nanodroplets for confined nanoparticle crystallization and avoids potential irreversible nanoparticle aggregation. The oral films possess a tunable immediate release, and a scaling rule is developed for designing the release profiles of oral films.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153792</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-Theoretic Safety Analysis for Teams of Collaborative Controllers</title>
<link>https://hdl.handle.net/1721.1/153787</link>
<description>System-Theoretic Safety Analysis for Teams of Collaborative Controllers
Kopeikin, Andrew N.
Human teams collaborate by establishing roles, changing functional authorities, maintaining team cognition, coordinating, and helping one another close control loops.  These complex interactions are inspiring novel system concepts to improve human-machine and multi-machine collaboration. However, these new systems challenge existing methods to model, analyze, design, and assure their safety. As such, few have been fielded in safety-critical domains like aerospace.&#13;
&#13;
To address this gap, this work develops a rigorous and systematic approach to analyze safety and enable safety-guided design of systems that exhibit collaborative control. It introduces a system-theoretic framework to describe multi-controller interactions. This includes a taxonomy of seven structural dimensions that influence such interactions and nine dynamics observed in collaborative control that are defined using System-Theoretic Accident Model and Processes (STAMP).  An analyzed set of controller interactions in aerospace systems demonstrates the framework and highlights how designers are trying to create more sophisticated systems.&#13;
&#13;
The framework provides the necessary foundation to extend the state-of-the-art in hazard analysis, System-Theoretic Process Analysis (STPA), to systematically address collaboration.  First, a mechanism is developed to incorporate the nine collaborative control dynamics into STAMP control structure models so that they are explicitly considered in hazard analysis.  Second, a process is derived from STPA to identify unsafe combinations of control actions between multiple controllers.  The procedure systematically considers potential issues involving gaps, overlaps, transfers, and mismatches in authority that are found in teams.  It is executed using an abstraction-based algorithm that manages combinatorial complexity and provides automation support.  Third, a method is introduced to identify causal factors from these unsafe control combinations that relate to the collaborative dynamics. The new technique, STPA-Teaming, is applied to a manned-unmanned aircraft teaming case study, where it finds causal factors not identified by a past hazard analysis of the same system. &#13;
&#13;
Finally, a structure is derived from Intent Specification to (1) integrate design and assurance processes, (2) support system modeling and analysis at different levels of abstraction, and (3) trace engineering activities using a means-end hierarchy.  The framework integrates STPA-Teaming into a broader systems engineering approach.  It also leverages the analytical structure of STPA-Teaming to provide novel traceability of its results directly to architectural design decisions. The safety-guided approach is illustrated using the same case study as above.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153787</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Stress-equivalent Spalart-Allmaras Wall Model with Local Boundary Conditions for RANS, DES, and LES</title>
<link>https://hdl.handle.net/1721.1/153786</link>
<description>A Stress-equivalent Spalart-Allmaras Wall Model with Local Boundary Conditions for RANS, DES, and LES
Ursachi, Carmen-Ioana
While high-fidelity, scale-resolving methods in Computational Fluid Dynamics (CFD) are increasingly applied, the cost of these methods remains a significant barrier to their effective use. In this thesis, a new wall model is developed based upon a modified version of the Spalart-Allmaras (SA) turbulence model that lessens the near-wall grid requirements. This is achieved by, below the log layer, making the eddy viscosity approach a constant, non-zero value and the velocity, which has a non-zero slip, vary approximately linearly with distance from the wall while maintaining the same total shear stress. The wall model introduces one parameter which controls the near-wall behavior of the solution. Unlike typical wall models, this method avoids the need to query the interior solution by utilizing a boundary condition which only requires solution information present at the boundary, making it well-suited for unstructured grids and mesh adaptation. The new approach is combined with mesh adaptation and applied to Reynolds-Averaged Navier-Stokes (RANS), demonstrating accurate predictions of quantities of interest such as aerodynamic coefficients, surface pressure and temperature, skin friction, and heat transfer compared with standard RANS-SA, while requiring a substantially coarser near-wall grid to resolve the solution. Additionally, the new wall model and modified turbulence model are applied to Detached Eddy Simulation (DES) in a hybrid RANS/LES framework, where it is demonstrated that the wall model allows for reliable solutions on near-wall grids that are significantly coarser in the wall-normal direction than those typically used for DES. Finally, the wall model boundary condition is applied to wall-stress Wall-Modeled Large Eddy Simulation (WMLES) and shown to produce similar results to the traditional equilibrium model, while avoiding the need to query the interior solution.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153786</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Robust and Autocatalytic Architectures for Human Missions to Mars</title>
<link>https://hdl.handle.net/1721.1/153785</link>
<description>Engineering Robust and Autocatalytic Architectures for Human Missions to Mars
Lordos, George
For decades, Mars architectures were constrained to at most 6 crew members and surface endurances of up to 600 days because the increased mass for larger missions implied significantly higher costs. More recently, reusable rockets and mass manufacturing approaches have begun to transform the human spaceflight landscape, offering increased mass-to-orbit capabilities, dramatically reduced unit costs, and a falling cost of access to space. In view of the above, the approach in this work is to turn crew size from a constraint into the primary design variable and explore the design space for larger architectures that could offer increased value, robustness, and a potential for growth. This requires more elaborate and flexible approaches to modeling the supply and demand for crew time, going beyond the state-of-the-art. The crew time model in this work is based on the parametric modeling of 533 activities and considers economies of scale and productivity effects from specialization, task focus, and learning to estimate the time demanded for logistics, utilization, thriving, and growth. An architecture screening model integrates the high-resolution crew time model with low-fidelity sub-models for mission aspects like food production and habitat construction, all driven by crew size and surface endurance. Mars architectures of 4 to 63 crew members in up to 3 nearby sites for up to 10 years were evaluated using new metrics such as the Lifecycle Cost per Non-Logistics Full Time Equivalent person on Mars per year and the Robustness Composite Indicator (ROCI), finding that dual-site designs with 30 to 42 crew strike a balance between affordability to NASA, mission value, high robustness with a self-rescue capability, and potential for autocatalytic growth. A case study proposes a NASA-led, 36-crew, 10-year Mars mission, with broad participation from Artemis Accords member countries. 
The 27-year program, with an average annual cost of $3.1b peaking at $7b in the mid-2030s, fits within NASA's current deep space exploration budget. This work shifts the focal point for human spaceflight from mass to crew time, supports the study of larger, hitherto unexplored Mars architectures, and finds that larger, multi-site Mars missions could be more cost-effective, robust, and sustainable than traditional concepts. The research also supports future work towards new, larger-scale Mars analog missions, which could improve our understanding of crew productivity and other factors that vary with mission scale.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153785</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task-optimized models of human hearing link perception and neural coding</title>
<link>https://hdl.handle.net/1721.1/153780</link>
<description>Task-optimized models of human hearing link perception and neural coding
Saddler, Mark R.
Hearing allows organisms to derive information about the world from sound. The ears convert pressure waves into patterns of neural activity which the brain can use to make powerful inferences about the source. While extensive modeling efforts in the past few decades have resulted in well-established computational descriptions of peripheral auditory coding, comparatively less is known about how this neural code supports complex auditory behavior. Humans with normal hearing are remarkably adept at recognizing and localizing sounds in noisy environments with multiple competing sources. However, these abilities are fragile and are greatly compromised in listeners with hearing loss or cochlear implants, often leading to frustration and social isolation. Current assistive devices largely fail to aid impaired listeners in noisy environments, and the development of more effective devices is currently limited by an incomplete understanding of which features of neural coding underlie perception. This thesis develops computational models for explicitly linking specific features of peripheral auditory processing and perception. In a series of three studies, we optimized deep artificial neural network models to perform real-world hearing tasks using simulated auditory nerve input. The first study outlined a framework for optimizing models under different conditions to test how perception is constrained by our ears and acoustic environment in the domain of pitch perception. The second study extended the framework to examine the widely debated perceptual role of auditory nerve spike timing in hearing more broadly. The third study explored a practical application of task-optimized models by leveraging intermediate model representations as a perceptual metric for speech enhancement. Collectively, the results link aspects of hearing to environmental and neural coding constraints, illustrating the utility of artificial networks to reveal underpinnings of behavior.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153780</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Performance Analysis for Autonomous Spacecraft Navigation within Satellite Constellations using Intersatellite Optical Communications Links</title>
<link>https://hdl.handle.net/1721.1/153778</link>
<description>Systems Performance Analysis for Autonomous Spacecraft Navigation within Satellite Constellations using Intersatellite Optical Communications Links
Grenfell, Peter
Free-space optical communications (lasercom) is an advanced technology for high-data-rate communications that has experienced rapid development for space applications over the last couple of decades, driven by the increasing bandwidth demands of modern sensing and information technologies. Lasercom has advantages over radio frequency (RF) systems, the primary one being better scalability of terminal data rates against Size, Weight, and Power (SWaP) constraints. A lasercom terminal already has the necessary hardware for optical intersatellite link (OISL) measurements, since it is the same hardware that is needed for communications. Intersatellite measurements can be used to improve the observability of satellite orbits in applications like satellite communications constellations. We perform a systems analysis of the OISL measurement technology to better understand how measurement errors are related to the hardware design. We analyze relativistic effects when modeling the intersatellite light propagation. We expand on previous constellation analyses, in particular navigation via OISLs within LEO mega-constellations like Starlink, Earth navigation constellations like the Global Positioning System (GPS), and notional Lunar &amp; Mars constellations. We estimate the achievable performance in these applications and show that baseline OISL navigation performance is on the order of 0.1-10 m and 0.1-10 mm/s, depending on the application configuration. This is comparable to existing state-of-the-art non-autonomous navigation methods like GPS and radio ground tracking, and at least one order of magnitude better than existing autonomous navigation methods such as optical navigation. Lasercom crosslinks not only enable increased throughput in satellite communications constellations, but they can also be used to enable collectively autonomous, high-precision navigation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153778</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable Robotic Perception: From Outlier-Robust Estimation to Task-Aware Runtime Monitoring</title>
<link>https://hdl.handle.net/1721.1/153776</link>
<description>Reliable Robotic Perception: From Outlier-Robust Estimation to Task-Aware Runtime Monitoring
Antonante, Pasquale
Reliable perception is a key prerequisite for safe operation of robots and autonomous vehicles. The future of the field relies on public trust and provable correctness of behavior in real-world scenarios. Though commonly used, testing and simulation alone are insufficient to ensure correctness and do not provide adequate evidence for safety certification. The current literature lacks a system-wide framework to formally verify the safety requirements of the perception system of an autonomous vehicle. Moreover, current perception algorithms tend to fail in the presence of many outliers and require extensive parameter tuning. This thesis presents a comprehensive exploration of outlier-robust estimation algorithms, perception monitoring, and risk assessment to enhance the robustness and safety of robots and autonomous vehicles.&#13;
&#13;
The first part of the thesis focuses on geometric perception, which is the task of estimating geometric models (e.g., poses) from sensor measurements (e.g., LiDAR scans). Geometric perception is plagued by the presence of outliers (spurious measurements) that compromise the accuracy of the estimated geometric model. Computing robust estimates in the face of outliers has been a central topic in computer vision and robotics. In this thesis, I introduce two unifying formulations for outlier-robust estimation and investigate fundamental limits, practical algorithms, and applications. In particular, I present two outlier-robust estimation algorithms (together with two parameter-free variations) that are able to robustly estimate geometric models in the presence of a high percentage of outliers.&#13;
&#13;
The second part of the thesis focuses on task-aware runtime monitoring of perception systems in high-stakes robotics applications such as autonomous vehicles. Safety and performance are key enablers for autonomous driving: on the one hand we want our autonomous vehicles to be safe, while at the same time their performance (e.g., comfort or progression) is key to adoption. In this thesis I formalize the problem of task-aware runtime monitoring and present a framework that uses the diagnostic information present in the perception system to detect and identify faults at runtime, while assessing the risk they pose to the autonomous vehicle.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153776</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of locus coeruleus norepinephrine in reinforcement learning</title>
<link>https://hdl.handle.net/1721.1/153775</link>
<description>The role of locus coeruleus norepinephrine in reinforcement learning
Drummond, Gabrielle
Norepinephrine (NE) is one of the four main neuromodulators in the brain, exerting widespread influence over almost all cortical and subcortical brain regions, with the exception of the striatum. The locus coeruleus (LC) is a small brainstem nucleus and the primary source of NE in the brain. LC-NE neurons release NE to regulate baseline arousal and to facilitate a variety of sensory-motor and behavioral functions. Dysfunction in the LC-NE system has been implicated in the etiology of ADHD, schizophrenia, anxiety, and stress, as well as in the cognitive decline observed in aging and Alzheimer’s disease. Despite its brain-wide effects and established involvement in CNS disorders, much about even the normal function of the LC-NE system in the brain remains unknown. Here, we explore the role of LC-NE in reinforcement learning by studying the spatiotemporal dynamics of transient LC-NE release during learned behaviors, the impact of this release on behavior, and the effects of LC-NE on task representation and processing in cortical target regions. Using a go/no-go task in which water-restricted mice must push a lever at a go tone to receive a water reward, and refrain from pushing at a no-go tone to avoid an air-puff punishment, we first explored the role of LC-NE in this behavior. Additionally, we used tones of differing intensity to modulate uncertainty on a trial-by-trial basis. We found that LC-NE is important for two aspects of reinforcement learning: promoting task execution under uncertain conditions and facilitating improved performance after a surprising outcome. In line with these results, using both opto-tagging and 2-photon imaging, we found that LC-NE neurons are active during two task epochs, pre-lever press and post-reinforcement. Pre-press, phasic LC-NE neuronal activity scaled with the degree of certainty of the tone identity, while post-reinforcement LC-NE activity scaled with the degree of surprise.
Thus, reward following ‘hits’ on low-intensity go tones elicited higher activity than reward on high-intensity go tones, and punishment following movement after no-go tones (false alarms) elicited the highest activity. Our electrophysiology data also indicated some degree of heterogeneity in LC-NE neuronal activity: some neurons exhibited pre-press activity, some had post-reward activity, few had both, but all LC-NE neurons responded strongly after a false alarm. To further explore this heterogeneity in task encoding, we performed 2-photon imaging of LC-NE axonal calcium activity during the task in two target regions, the motor and prefrontal cortices. We found that pre-press activity is preferentially sent to the motor cortex to facilitate task execution, while post-punishment activity is projected to both regions equally. To determine how LC-NE affects ongoing processing in target regions, we performed high-density single-unit recordings in motor and prefrontal cortices while silencing the LC on a subset of trials. We found that LC-NE activity following an air-puff punishment changed population activity in the cortex such that the population representation of the stimulus on the subsequent trial was more discriminable (i.e., it elicited a larger difference in population trajectories between go and no-go tones). To explore how this false-alarm signal might be sustained to have lasting effects on behavior through the next trial (seconds later), we also studied the role of astrocytes in our reinforcement learning task. Astrocytes are the most abundant non-neuronal cell type in the brain and have been suggested to both reflect and modulate neuronal activity. They operate on multiple timescales and have been shown to be responsive to NE in the brain, and thus present a viable means by which LC-NE learning signals can have lasting impacts on population activity and ongoing behavior.
Using 2-photon imaging of astrocyte calcium, we found that astrocytes show reliable and long-lasting increases in calcium activity following an air-puff punishment. Manipulating astrocyte calcium dynamics in prefrontal and motor cortex abolished the improvement in behavioral performance that typically follows a false alarm, indicating that astrocyte signaling mediates improvement in performance following a surprising outcome. Finally, to determine whether LC-NE is acting directly on astrocytes to mediate reinforcement learning, we performed 2-photon imaging of astrocyte and neuronal calcium in the cortex while chemogenetically silencing LC neurons. We found that without LC-NE activity, cortical astrocytes no longer exhibit the typical sustained increase in calcium following a false alarm. Taken together, these data indicate that LC-NE neurons signal critical components of reinforcement learning, including reward uncertainty and reward prediction error or surprise, and selectively project these signals throughout the brain to facilitate reinforcement learning by altering astrocyte calcium and neuronal population activity in the cortex.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153775</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Branching out: how a planarian neoblast generates cellular diversity</title>
<link>https://hdl.handle.net/1721.1/153773</link>
<description>Branching out: how a planarian neoblast generates cellular diversity
King, Hunter O.
The extraordinary regenerative capacity of planarians is derived from their large population of pluripotent stem cells called neoblasts. Neoblasts are a heterogeneous population of stem cells that begin unspecified but are rapidly specified within a single cell cycle and give rise to every cell in the animal. In this work, we characterize the different types of specified neoblasts in the planarian Schmidtea mediterranea using single-cell RNA sequencing. We also profile their post-mitotic descendants that serve as migratory progenitors. Through these analyses, we uncover many novel neoblast and post-mitotic progenitor types, and the genes that define them. Because neoblast subtypes are often specified by their unique expression of Fate Specific Transcription Factors (FSTFs), we computationally identify planarian transcription factor genes and characterize neoblasts and post-mitotic progenitors by their expression of these putative transcription factors. We find that many cell types can be identified through their unique expression of FSTF modules, which can be used to connect cells with the same fate across neoblast, post-mitotic progenitor, and differentiated cell stages. Through gene inhibition studies, we discover that many of the FSTFs identified for these novel precursor types have a functional role in their specification. We note that transcriptional signatures of some cell-type fates become apparent at different stages in the lifetime of the progenitor and propose a model in which cellular diversity arises at different times for different tissue classes.&#13;
&#13;
To better uncover the gene signatures that define different cell types, we develop methods that use neural networks to learn patterns of gene expression in planarian post-mitotic progenitors. We find that neural network classifiers are powerful predictors of cell type based on gene expression, and that the networks’ learned weights can be examined to uncover gene signatures that define different progenitor types. We find that autoencoders, a class of neural networks designed for the efficient representation of data, can be used to learn gene signatures in cells in an unsupervised manner. We find that cells close together in UMAP space, but belonging to different cell clusters under traditional clustering, are often difficult for the neural networks to distinguish. From this, we hypothesize that cells in different clusters may be transcriptionally similar and that cells spanning continuous UMAP space may exist across continuous gene-expression space, so assigning cells to discrete clusters may not always be appropriate. We propose that autoencoders, which can encode gene-expression signatures through a continuous latent-space encoding, may be more appropriate and can be used to uncover novel gene and cell relationships.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153773</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maintenance and Metalearning of Time Interval Representations</title>
<link>https://hdl.handle.net/1721.1/153772</link>
<description>Maintenance and Metalearning of Time Interval Representations
Ferguson, Alexandra C.
When we perform actions in the world, we estimate what is happening around us. That information goes through a series of transformations in the brain in order to execute an action that meets our goals. For example, we might remember the speed of a car in order to decide when to cross the road. These transformations can be simple, for example based on physical models of speed and time, as in the car example, or they can be complex and built around evolutionary and experience-based statistical regularities in the world. This thesis uses a sensorimotor time production task to investigate different types of transformation and noise that exist between observation and action. First, I propose a task that utilizes memory of a time interval in order to probe memory noise, memory storage, and inference over internal noise. To do this, monkeys perform a delayed time reproduction task. I find that the behavior is consistent with the brain storing the memory as a function of time, and that the inference does not mitigate the internal memory noise. Second, I investigate how estimated prior distributions change when the statistical regularities of the world change. Monkeys perform a blocked time reproduction task, and behavior across policy transitions shows fast adaptation to new policies. I capture this adaptation in an algorithmic model and fit it to behavioral data. Third, I present some preliminary neural data gathered during these tasks, as well as hypotheses for neural implementation. With these experiments, I use a simple task to pick apart the transformations that occur between observation and action.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153772</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Increasing Flexibility in the Design and Operation of Instrument Flight Procedures</title>
<link>https://hdl.handle.net/1721.1/153770</link>
<description>Increasing Flexibility in the Design and Operation of Instrument Flight Procedures
Salgueiro Rodrigues Filho, Sandro
Instrument Flight Procedures determine aircraft departure and arrival trajectories in terminal airspaces. While their main objective is to ensure safe aircraft navigation, flight procedures also have a significant effect on the capacity, efficiency, access, and noise characteristics of airspaces by defining their route structure. The design of procedures is limited by a variety of constraints that restrict achievable aircraft trajectories.  Many constraints originate from safety considerations and can interact in complex ways to limit flight procedure flexibility and system performance.&#13;
&#13;
It is hypothesized that opportunities to increase system flexibility may exist through a better understanding of constraints and the potential to reevaluate them in light of technology improvements. Following a review of constraints and their effect on flexibility, the required geometric separation between flight procedures is identified as a significant constraint in the design of flight procedures and is chosen as the focus of an in-depth study to identify reevaluation opportunities.&#13;
&#13;
The collision risk between procedures is identified as the main driver of their required separation. Through an analysis of modern aircraft navigation performance in normal operations, it is found that the collision risk between procedures is expected to be dominated by the risk due to non-normal events (i.e., deviations), which can be controlled through the use of collision mitigation capabilities. As a result, it is posited that a better understanding of the collision risk between flight procedures under the effect of mitigations represents a key mechanism for identifying how technology improvements may enable the reevaluation of separation. To that end, a model of the mitigated collision risk between flight procedures is developed and presented.&#13;
&#13;
In example applications of the proposed mitigated collision risk model, several potential system improvement paths are identified and discussed that could result in lower separation between procedures and therefore greater flexibility. These include improvements to mitigation technologies, better aircraft navigation reliability, and greater knowledge of aircraft non-normal behaviors that could lead to less conservative assumptions in collision risk modeling. Examples discussed in this thesis include the evaluation of the achievable separation between real procedures at Boston Logan Airport (BOS), which could offer noise benefits, and the evaluation of the achievable separation between future Advanced Air Mobility (AAM) routes. The methods presented in this thesis could be used to support the evaluation of future changes to separation as well as the planning of future mitigation systems.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153770</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic and Neural Circuit Analyses of Sickness and Foraging Behaviors in C. elegans</title>
<link>https://hdl.handle.net/1721.1/153766</link>
<description>Genetic and Neural Circuit Analyses of Sickness and Foraging Behaviors in C. elegans
Madan, Gurrein Kaur
A fundamental goal of neurobiology is to decipher how the brain generates behaviors. Neuromodulation of brain circuits is thought to be critical for animals to generate long-lasting behavioral states crucial for survival. However, our understanding of the specific mechanisms by which neuromodulators alter circuit function to drive context-relevant and stable behavioral states remains limited. In this thesis, we provide new insights into the neuromodulation and neural circuit control of long-lasting behavioral states by focusing on sickness and foraging behaviors in the nematode Caenorhabditis elegans. The defined cell lineage and fully mapped connectome of C. elegans, combined with cutting-edge genetic and imaging tools, give us unprecedented access to the study of these behaviors. In Chapter 1, we review the literature on the neural mechanisms of sickness and foraging behaviors, covering research across model organisms, with an emphasis on findings in C. elegans. In Chapter 2, we show that neuromodulatory pathways linked to stress and satiety are recruited upon pathogen infection to drive sickness behaviors. These pathways include FMRFamide peptides and the TGF-beta/DAF-7 neuromodulator, which are released from distinct neural sources and in distinct combinations to drive state-dependent behaviors. In Chapter 3, we reveal the functional architecture of a neural circuit that enables flexible, sensory-driven control of the persistent behavioral states that underlie C. elegans foraging. In this work, we identify the sensory-processing AIA neuron as the basis of context-appropriate behavioral state transitions. In Chapter 4, we lay a foundation for further investigation of the regulation of foraging behaviors. To do so, we perform genetic and imaging studies on the NSM and I6 neurons, which are part of the nervous system in the foregut (pharynx).
Collectively, these three thesis projects expand our mechanistic understanding of the neuromodulatory and circuit control of long-lasting behavioral states.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153766</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic simulations of colloidal dispersions: sticky and polarizable particles as building blocks for novel materials</title>
<link>https://hdl.handle.net/1721.1/153739</link>
<description>Dynamic simulations of colloidal dispersions: sticky and polarizable particles as building blocks for novel materials
Reed, Kelsey M.
The development of new materials is important for societal well-being. There has been a great deal of effort recently to integrate experiments, theory, and simulation to accelerate materials discovery, and this broad effort is what motivates this thesis. While there are many different types of materials that are suited for particular applications, one class of materials with a high degree of customizability is soft materials composed of colloidal particles. These dispersions can be fabricated to respond to a variety of stimuli, including external fields. There have been decades of work using both experiments and theory to investigate the microstructure and material properties of single-component colloidal dispersions. However, composite systems offer an even greater set of possible new materials, especially if the different interactions at play can be controlled and well understood. While there has been more work recently in the area of composite colloidal dispersions and their applications, there has been little focus on the use of processing to achieve different structures. In this thesis, we use Brownian dynamics (BD) simulations to efficiently explore a novel composite colloidal system and two processing schemes that lead to interesting colloidal materials.&#13;
&#13;
Computation can only help to accelerate materials design if simulations include accurate models at modest computational expense. This thesis involves the development of an expression and numerical scheme for calculation of the stress in dispersions of spherical colloidal particles. We show that the quantity we compute differs significantly from that computed using a simpler model, and we also show that the virial stress is incorrect, as one should expect for long-ranged interactions. These results can be used as another tool in the efforts of the simulation community to more accurately investigate polarizable particles via simulations.&#13;
&#13;
The next part of the thesis involves two different simulation studies of a composite colloidal system consisting of polarizable and attractive particles. Control of both attraction strength and applied field strength can be used to dictate particle interactions, and processing can proceed via several different routes. We explore two promising processing routes; depending on the processing scheme, different anisotropic gel structures can be formed, and their microstructure can be further tuned based on the strength of competing interactions. These results highlight the use of simulations to investigate composite colloidal materials formed under specific processing conditions.&#13;
&#13;
Professor Patrick S. Doyle has certified this thesis on behalf of James W. Swan, who passed away in November 2021. I would like to thank Professor Doyle for advising me in 2022-2023 and for certifying this thesis.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153739</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrative and Scalable Pipeline for Multi-omic Characterization of Biological Tissues</title>
<link>https://hdl.handle.net/1721.1/153738</link>
<description>Integrative and Scalable Pipeline for Multi-omic Characterization of Biological Tissues
Choi, Seo Woo
The study of mammalian organs is essential to understanding their functions and dysfunctions in states of health and disease. Specifically, comprehensive characterization of the mammalian brain is crucial to understanding brain physiology and neurological diseases. Depending on the goal of the study, various brain models, such as those involving mice, postmortem humans, and three-dimensional (3D) organoids derived from human induced pluripotent stem cells (hiPSCs), can be utilized. Regardless of the model used, a large-scale, unbiased study of the brain has yet to be achieved due to the vast complexity and high interconnectedness of the cells in the brain.&#13;
&#13;
For unbiased and holistic approaches to elucidating brain structures and molecular profiles, we must effectively retain biomolecular information throughout the entire volume of tissue. The elucidation of the connections, molecular profiles, and structures within these models is performed with the help of highly specific molecular probes. Revealing the identities of thousands of molecules with intact spatial context, however, is challenging, as repeated exposure to molecular probe attachment conditions damages biomolecular integrity.&#13;
&#13;
To address this challenge, we developed a method to rapidly and effectively preserve biomolecular information inside large-scale biological tissues for holistic and three-dimensional characterization. This technology, termed EPIC-SHIELD, takes a novel approach to formaldehyde fixation to achieve rapid, homogeneous preservation of biomolecules across the tissue volume, which is then complemented with a carefully adjusted polyepoxy crosslinking (SHIELD) that forms a stable tissue-gel hybrid for long-term stabilization of biomolecules with minimal chemical damage. To improve the scalability of information acquisition and processing, we developed a pipeline that involves light-sheet microscopy and nuclei-based co-registration. We achieved multi-omic and multiplexed characterization of thick mouse brain tissues using our pipeline, enabling cell-type analysis and ontogeny studies. Furthermore, effective preservation by EPIC-SHIELD allowed successful characterization of fresh-frozen human brain samples, providing an invaluable opportunity to study postmortem tissues in health and disease. As a demonstration, we compared the molecular compositions and spatial distributions of various cell types and pathological traits in control and Alzheimer’s disease samples. Lastly, we applied our EPIC-SHIELD technology to the unbiased characterization of hiPSC-derived cerebral organoids co-cultured with microglia. Using our technologies, we envision coming a step closer to fully understanding the mechanisms behind brain physiology.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153738</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermally Drawn Piezoelectric Fibers Impart Acoustic Functionality to Fabrics</title>
<link>https://hdl.handle.net/1721.1/153737</link>
<description>Thermally Drawn Piezoelectric Fibers Impart Acoustic Functionality to Fabrics
Yang, Grace Helen
Work on acoustic interactions with fabrics has predominantly focused on their application as sound absorbers. However, the broader spectrum of fabric-acoustics interactions remains relatively unexplored. This thesis investigates how the relationship between fabrics and acoustics can be leveraged through the use of a piezoelectric fiber. This acoustic fiber can be incorporated into traditional fabrics to transform them into microphones and loudspeakers. Various experiments and simulations are devised to understand how fabric properties influence fabric vibrations. Once the fundamental working mechanisms are understood, fabric systems are engineered to optimize the performance of these acoustic fabrics for various applications.&#13;
&#13;
The acoustic fiber is the key technology enabling this work, as fibers are the building blocks of fabrics. Thermal drawing allows for the scalable production of multimaterial fibers with complex architectures that provide functionality. The processing methods are investigated in detail to reliably create fibers with high piezoelectric performance. The fiber can then be woven into textiles while the fabric's structural identity is maintained. As part of the fabric, the fiber transforms traditional textile materials into microphones or loudspeakers with significant sensitivity and efficiency. The vibrations in the fabrics, whether sensed or generated by the fiber, are measured and visualized, providing a deeper understanding of how nanometer- and micron-scale vibrations result from or give rise to audible sound. Several applications offer glimpses of novel possibilities enabled by acoustic fabric systems, including sound direction detection, biometric sensing, and noise control.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153737</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring circadian rhythms, food intake, and their interactions in marine invertebrates</title>
<link>https://hdl.handle.net/1721.1/153736</link>
<description>Exploring circadian rhythms, food intake, and their interactions in marine invertebrates
Berger, Cory Ailin
Circadian rhythms and energy metabolism are critical and interconnected components of animal physiology. Metabolic inputs like time of feeding modulate circadian clocks and behavioral and molecular rhythms. In turn, circadian clocks regulate metabolic processes, allowing animals to optimize energy usage on daily timescales. On longer timescales, animals require physiological responses to tolerate variation in food availability. Most of our mechanistic knowledge of these processes comes from terrestrial mammals and insects, while there are major knowledge gaps for marine invertebrates. My dissertation focuses on the interactions of circadian rhythms and metabolism in three marine invertebrate systems using a combination of behavioral, molecular, and bioinformatic approaches. In Chapter 2, to understand how sensory signals are integrated into circadian clocks, I test the effects of various light and temperature regimes on circadian rhythms in the sea anemone Nematostella vectensis. Misaligned light and temperature cycles severely disrupt behavioral rhythms and substantially alter the rhythmic transcriptome, particularly the expression of genes mediating metabolic processes. This illustrates how interactions between environmental cues shape circadian behavior and physiology. In Chapter 3, I develop a high-throughput behavioral system to study diel vertical migration (DVM) in the copepod Acartia tonsa. DVM is driven by tradeoffs related to food availability, but we do not fully understand how food availability affects this circadian process. Using high-resolution tracking software, I find that Acartia possesses group-level DVM-like circadian rhythms in the lab, and that these swimming rhythms are altered by time-restricted feeding. This illustrates that food availability can impact DVM via effects on circadian clocks. In Chapter 4, I analyze how polar copepods respond to starvation at the molecular level. 
I find that two species with distinct dietary strategies partially share a genetic toolkit to respond to starvation, whereas differences in their starvation responses may reflect different modes of lipid storage. I also use evolutionary analyses to show that starvation response genes are under selective constraint, underlining their importance to organismal fitness. In aggregate, this thesis provides insights into the circadian rhythms of marine organisms, explores how metabolism modulates circadian rhythms, and sheds light on the physiological consequences of food availability in zooplankton.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153736</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convective Dynamics of the Tropical Atmosphere in Three Idealized Approaches</title>
<link>https://hdl.handle.net/1721.1/153733</link>
<description>Convective Dynamics of the Tropical Atmosphere in Three Idealized Approaches
Velez Pardo, Martin
Atmospheric convection organized at large spatial scales significantly influences precipitation patterns and weather events in tropical and subtropical regions, and has a rich, two-way interaction with Earth's climate. Tropical cyclones, mesoscale convective complexes, heat lows, and the rainbands of the Intertropical Convergence Zone are all examples of such large-scale convective organization, but despite their relevance for human life and ecosystems, the mechanisms that govern their formation and many of their characteristics are not fully understood. This work presents three studies of convective dynamics in the tropical atmosphere using idealized frameworks. &#13;
In the first part, we use direct numerical simulations of simple setups of rotating dry convection based on the Rayleigh-Bénard system to study minimal conditions that produce large-scale convective organization, and the spontaneous formation of tropical-cyclone-like structures. We find that the latter form more readily for a particular set of controlling parameters and thermal boundary conditions, characterized by a slow enough rotation rate, asymmetry of the heat fluxes at the boundaries, effective internal cooling, and a dependence of the low-level heat flux on the overlying flow. &#13;
In the second part, we use rotating tank experiments of turbulent convection to further probe some of the findings of the first part, particularly the formation of large-scale cyclonic flows with top-bottom asymmetric, flux-based thermal boundary conditions, in a setup with hot water insulated at the bottom and sides, and cooling freely to the air above. We find large, persistent cyclonic vortices in experiments with a range of governing parameters similar to that of the numerical simulations in the first part, particularly for cases where the convective time scale is shorter than the rotational time scale, that is, for convective Rossby numbers greater than about 1. Our approach in the first two parts seeks to narrow the disciplinary gap between traditional turbulence research and the physics of atmospheric convective organization and tropical cyclones. &#13;
In the third part, we turn to the topic of the mechanisms that drive precipitation in the tropics at scales of tens to a few hundred kilometers. Using cloud-resolving model simulations, we study how rainfall responds to imposed anomalies in the surface temperature, the atmospheric heating rate at different heights, and the pressure gradients that drive the winds near the surface. We find that such forcings lead to self-consistent but different relationships between the amount of rainfall produced and the net heating of the atmosphere, quantified by the Normalized Gross Moist Stability. We show that the spatial extent of the forcings affects how well this relationship can be inferred from horizontally-averaged atmospheric properties. In contrast, we find that the relationship between rainfall and the average relative humidity in the atmosphere falls onto the same curve for all types of environmental forcings considered. As a general contribution, the three parts of this work highlight the fruitfulness of a diverse set of idealized approaches in deriving a more mechanistic understanding of the convective dynamics of the atmosphere.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153733</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Empirical Matching Systems</title>
<link>https://hdl.handle.net/1721.1/153729</link>
<description>Essays on Empirical Matching Systems
Karnani, Mohit
This dissertation is a collection of three papers on empirical methods in matching systems. In Chapter 1, I study the estimation of treatment effects in the context of randomized controlled trials conducted with participants in matching systems. I show how conventional methods fail to account for the interference across outcomes induced by matching systems, and therefore yield invalid estimates of causal parameters. I propose a method that solves the interference problem and apply it in two empirical settings. Chapter 2 studies the relevance of the configuration of on- and off-platform options when centralized matching systems operate alongside a decentralized matching process. In these situations, the existence of off-platform options in a decentralized system can affect the outcomes of participants in the centralized system who seek to be matched to on-platform options. We show this by developing and estimating a structural model that considers the interplay between on- and off-platform options in a matching system. Chapter 3 studies the causal effects of different screening and recruiting policies affecting applicants in the Chilean centralized college match. We show how machine learning methods can enhance these screening and recruiting policies.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153729</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unveiling Roadway Network Safety: Application of Statistical Physics to Crowdsourced Velocity Data</title>
<link>https://hdl.handle.net/1721.1/153728</link>
<description>Unveiling Roadway Network Safety: Application of Statistical Physics to Crowdsourced Velocity Data
Botshekan, Meshkat
In an increasingly mobile world, traffic safety poses stark realities. In 2022, roadway incidents in the U.S. claimed 38,824 lives, surpassing the fatality rate of the COVID-19 pandemic within the country. Despite these alarming statistics, understanding the overarching patterns of traffic safety presents a complex challenge due to myriad influencing factors such as driver behavior, roadway network geometry, weather conditions, and vehicle design.&#13;
&#13;
This study revisits traffic safety from the perspective of statistical physics, positing universal temporal and memory effects to delve into the internal structure of traffic by exploring higher-order statistics. The examination of the internal structure enables the uncovering of near-miss incident risks in congested traffic flow—risks positively correlated with collision risks derived from historic accident records. By integrating the complex dynamics of traffic flow, the near-miss risk is ascertained from the crowdsourced velocity measurements of vehicles, thereby offering a computationally efficient framework with potential for real-time implementation.&#13;
&#13;
We apply this framework to extensive velocity datasets collected anonymously across multiple states in the U.S., enabling the derivation of the spatial distribution of expected near-miss risk on a large scale. Moreover, we assess and compare the reliability and robustness of these networks, merging graph theory with our physics-inspired near-miss risk approach. Our findings consistently reveal patterns across different states, facilitating the identification of the most and least reliable/robust networks. This framework lays the foundation for a real-time, proactive maintenance of roadway networks, a major stride towards creating a safer transportation infrastructure.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153728</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Geospatial Data to Understand Social Mixing in Cities</title>
<link>https://hdl.handle.net/1721.1/153725</link>
<description>Leveraging Geospatial Data to Understand Social Mixing in Cities
Heine, Cate E.
Understanding the impact of dense social interactions in urban areas is crucial for designing inclusive cities, as they contribute to both the benefits and challenges of urban living. Various&#13;
urban planning paradigms and design approaches, such as the 15-minute city, aim to foster social connections and accessibility across communities. However, evaluating the effectiveness of these paradigms is difficult given the complexity of measuring social interactions. In the following projects, I leverage large geospatial datasets that capture human mobility to describe policy and design impacts on social mixing in three different urban contexts.&#13;
&#13;
Study 1 presents a new way to measure social segregation in mobility patterns using sparse, anonymized geolocation data. We identify homophily in the way that Stockholm residents travel&#13;
through cities without performing privacy-invasive and data-intensive home location detection. Study 2 calculates this metric with geolocated Twitter data to analyze the impacts of Paris’s “Zones 30” policy, which aimed to enhance human mobility and social mixing by reducing speed limits and implementing street improvements. We find that areas of the city where the policy was implemented fostered more activity from more unique Twitter users who come from more neighborhoods of the city, but not from more socioeconomically diverse neighborhoods of the city. Study 3 employs call detail record data to measure social mixing on a fine spatial and temporal scale across the city of Stockholm, identifying the types of urban amenities that are located in areas where highly income-diverse groups of people gather. We then leverage quasi-random shocks to the road network due to maintenance-based road closures to identify causally interpretable relationships between increasing access to various types of urban amenities and experienced segregation. Study 4 draws on GPS data from mobile phones to assess the relationship between local living, as advocated for by the “15-minute city” urban planning&#13;
paradigm, and experienced socioeconomic segregation across the US. We find that low-income neighborhoods with high levels of local living see lower levels of experienced diversity, indicating a potential tradeoff between the social and environmental goals of the 15-minute city paradigm.&#13;
&#13;
Collectively, these studies highlight the value of fine-grained mobility data in understanding the types of urban spaces and layouts that foster social interactions between diverse groups of people and those that exacerbate social segregation—questions critical to the design of inclusive, sustainable cities.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153725</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing methods of selective, location-specific drug delivery in the gastrointestinal tract</title>
<link>https://hdl.handle.net/1721.1/153721</link>
<description>Developing methods of selective, location-specific drug delivery in the gastrointestinal tract
Subramanian, Deepak Adarsh
Oral drug delivery remains the most commonly used method of drug administration, owing to greater patient adherence to treatment. However, the efficacy of orally delivered drugs remains suboptimal due to formidable barriers to drug diffusion in the gastrointestinal (GI) tract and subsequent absorption into the bloodstream. Current methods of overcoming these barriers involve encapsulating the drug in pH-responsive or mucoadhesive nanoparticle formulations, which protect the drug cargo from the harsh environment of the GI tract and allow for controlled release at the optimal time and location. However, these formulations generally provide minimal targeting capability (due to the wide spatial range of the pH environment or the widespread presence of mucus), limiting their ability to release the drug in a specific location.&#13;
&#13;
The GI tract is a highly diverse organ system, and location-specific expression of certain targets can be used to improve the localization and retention of drug carriers. One example of these differentially expressed targets is the mucin glycoproteins that compose the mucus layers covering the epithelial surfaces of the GI tract; these mucins are expressed differently in different regions of the GI tract. As such, the work in this thesis primarily aims to identify potential ligands (small molecule and peptide) that can selectively bind to the different mucins, thus allowing for improved targeting and localization within the associated regions of the GI tract.&#13;
&#13;
For both small molecule and peptide ligands, the work in this thesis is presented similarly. First, the starting libraries of small molecules and peptides were chosen and prepared for initial in vitro screening. Next, in vitro screening was performed to select for the most promising “hits” which showed stronger binding and selectivity to the targets when compared to general mucoadhesives. These hits were then exposed to the mucin in a more physiologically relevant ex vivo model, which allowed for further selection of the top hits. These hits were then examined using bio-layer interferometry to measure the binding kinetics and calculate the kinetic affinity. Their performance was evaluated in vivo to demonstrate their ability to localize a bound cargo in a rodent model. Finally, computational methods were used to analyze the results and identify potential mechanisms of ligand-mucin binding. Overall, the work in this thesis will inform the development of further regioselective targeting approaches to improve oral drug delivery efficacy.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153721</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying barriers to recycling plastic packaging in the United States</title>
<link>https://hdl.handle.net/1721.1/153720</link>
<description>Quantifying barriers to recycling plastic packaging in the United States
Ravi, Basuhi
Staggering statistics about plastic waste and attendant concerns have driven multi-stakeholder engagement from entrepreneurs searching for innovative technological solutions and companies pledging circular measures to nations setting ambitious recycling targets. Yet, recycling rates for plastics remain abysmally low (&lt;8%). In this dissertation, I use a supply-demand approach to investigate why we don’t recycle more plastics and quantify barriers to better recovery and recycling outcomes. Recovery system architectures depend on geographic context. I examine plastic waste recovery in the United States (US), the world's largest producer of both total and per capita plastic waste. Of the 38 Mt of post-consumer plastic waste generated in the US in 2021, approximately 42% can be attributed to single-use plastic packaging. I probe two large-volume plastic packaging contexts: PET (polyethylene terephthalate) bottles and flexible packaging made of PE (polyethylene) and PP (polypropylene). Demand for recycled PET has risen sharply due to policy and consumer pressures that target circularity. However, in the US, PET bottle collection rates have not increased in a decade, and the supply of recycled PET is unable to meet rising demand. Therefore, in the PET bottle case study, I quantify the cost of supply-side policy push to increase recovery. I find that deposit return systems invite incentivized consumers into the recycled PET value chain, and recycled content mandates for PET bottles can reduce the net cost burden of a nationwide deposit return system, improve PET bottle recycling rates to &gt;80% (from 24%), and save 7.6 MtCO2eq per year. Unlike PET bottles, demand for mechanically recycled PE from flexible plastic packaging waste is low, but researchers are exploring advanced recycling pathways to divert hard-to-recycle packaging formats from disposal. 
I survey advanced recycling methods proposed in the literature and investigate the techno-economic potential for their market scalability. I find that large-volume products such as fuels or feedstock chemicals are limited by the availability of plastic waste at low costs. On the other hand, high-value end-uses such as fine chemicals are constrained by process yields and unmatched scale of plastic waste generation. I generalize these findings to examine the role of policy in improving the plastics recycling landscape in the US. Lastly, I discuss the petrochemicals value chain and the significance of displacement in realizing the expected environmental benefits of plastics recycling.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153720</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent behaviors in complex microbial ecosystems</title>
<link>https://hdl.handle.net/1721.1/153719</link>
<description>Emergent behaviors in complex microbial ecosystems
Hu, Jiliang
From tropical forests to gut microbiomes, ecological communities harbor diverse and abundant species. Understanding the complex emergent phenomena of diversity, stability, and invasibility in these communities within a unified framework has been a significant challenge. My PhD thesis addresses this knowledge gap by conducting the first direct test of a theory proposing that simple community-level features govern emergent behaviors. Using bacterial microcosms, we demonstrate that as the number of species or the strength of interactions increases, microbial ecosystems transition through three distinct dynamical phases: from stable coexistence, to partial coexistence, to the emergence of persistent fluctuations and alternative stable states in species abundances, confirming theoretical predictions. Notably, high biodiversity and dynamic fluctuations reinforce each other under fixed conditions. By combining theoretical frameworks and microbial community experiments, we establish that community-level features determine invasion outcomes in microbial communities. We find that communities with fluctuations in species abundance are more invasible and diverse than stable ones, leading to a positive diversity-invasibility relationship. As predicted by theory, increasing interspecies interaction strength and the size of the species pool leads to a decrease in invasion probability in our experiments. Although diversity-invasibility relationships are qualitatively different depending on how diversity is changed, we resolve the diversity-invasibility debate by showing a universal positive correspondence between invasibility and survival fraction across all conditions. Communities composed of strongly interacting species can exhibit an emergent priority effect in which invader species are less likely to colonize than species in the original pool. 
However, in this regime of strong interspecies interactions, if an invasion is successful it causes larger ecological effects on the resident community than when interactions are weak. Overall, this thesis uncovers predictable emergent patterns of diversity, dynamics, and invasibility in ecological communities, offering insights into a unified framework for microbial ecology.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153719</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Modeling of Geologic Carbon Dioxide Storage in Faulted Siliciclastic Settings</title>
<link>https://hdl.handle.net/1721.1/153718</link>
<description>Numerical Modeling of Geologic Carbon Dioxide Storage in Faulted Siliciclastic Settings
Saló Salgado, Lluís
Carbon capture and storage (CCS) is a technology where CO₂ captured from point sources or the atmosphere is injected underground for permanent storage. CCS is part of a portfolio of technologies aimed at enabling the transition to a sustainable energy system with net-zero CO₂ emissions. This thesis focuses on geologic CO₂ storage (GCS) in faulted siliciclastic sedimentary basins, and addresses the impact of uncertainty sources on forecasts of CO₂ migration made with physics-based numerical models. A key contribution of this work is the quantification of uncertainty in fault petrophysical properties and the modeling of flow within clay-smeared faults, which play a central role in CO₂ storage effectiveness and safety.&#13;
&#13;
We start by extending a reservoir simulator to increase fidelity in 3D numerical models of GCS. Extensions include a thermodynamic model to calculate PVT properties of CO₂-brine mixtures and relative permeability hysteresis. We then conduct numerical simulations and experiments of CO₂ injection and migration at the meter scale. Our experiments use tanks with transparent panels that allow recreating realistic basin geometries and CO₂ injection protocols. Using direct observations, we find that local measurements reduce model calibration time and that accurate deterministic forecasts are challenging. Next, recognizing the impact of faults on subsurface flow and shortcomings of previous models of fault architecture and hydraulic properties, we propose a probabilistic method to estimate the directional components of the fault permeability tensor. We extend previous efforts by modeling the fault core in 3D, using flow-based upscaling, and quantifying uncertainty. We then move to the field scale and apply this method to forecast fault CO₂ migration in the Miocene section offshore Texas. Faults are common in this area, and it is crucial to understand how they may limit storage capacity. Our modeling results indicate that, due to subsurface structure, stratigraphy and fault hydrogeology, updip CO₂ migration in listric growth faults is unlikely.&#13;
&#13;
Our findings show that quantitative forecasts are uncertain due to limited subsurface knowledge and modeling choices. Efforts to quantify parameter uncertainty and its impact on modeling forecasts appear necessary, a task that requires development of reduced order models accounting for the limitations of simulation data. Forecasts based on numerical models will benefit from history-matching and updating as field data becomes available.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153718</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Broadband acoustical scattering in coastal environments: application to gelatinous organisms and gas microbubbles</title>
<link>https://hdl.handle.net/1721.1/153712</link>
<description>Broadband acoustical scattering in coastal environments: application to gelatinous organisms and gas microbubbles
Kahn, Rachel E.
Broadband acoustical technology has revolutionized our ability to explore, monitor, and operate in the ocean. While strides have been made in numerous physical and biological applications, many outstanding scientific questions remain that are well suited to broadband approaches. Physics-based sound scattering models allow us to interpret and draw quantitative observations from measurements. Such models have been developed and used to assess the biomass of many types of marine organisms of ecological significance, but we lack rigorous scattering models for gelatinous organisms despite their possibly accounting for a significant proportion of global marine biomass. Additionally, acoustical techniques for characterizing microbubble populations have been established for decades, yet little is known about the spectral characteristics of dense microbubble clouds associated with estuarine tidal fronts. These bubbles facilitate air-sea gas exchange and could interfere with acoustical operations in coastal environments; however, the density and size distribution of the bubbles must be known to assess their impacts. This dissertation addresses these deficiencies in our application of broadband techniques. In Chapter 2, a sound scattering model for gelatinous organisms is developed based on the Distorted Wave Born Approximation. The 3-D model is applied to a species of scyphomedusa and verified with laboratory measurements of broadband backscattering from live individuals. The model predicts backscattering levels and broad spectral behavior to within 2 dB. In Chapter 3, a towable instrument is developed for measuring broadband excess attenuation from bubbles, from which the size distribution is inferred. The instrument is tested under breaking waves in a laboratory wave tank and then used to characterize the bubble size distribution in the Connecticut River tidal ebb plume front. 
In Chapter 4, broadband backscattering measurements from the Connecticut River front are used to infer the associated bubble size distribution. Spatial trends in the bubble size distribution are examined within the context of frontal kinematics. An observed disparity between the bubble size distribution measured with excess attenuation and volume backscattering is hypothesized to arise from a sampling bias caused by bubbles concentrated in the upper water column.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153712</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control and Prediction of Polymer Network Structure and&#13;
Connectivity for Sustainability, Stimuli-Response, and Catalytic&#13;
Function</title>
<link>https://hdl.handle.net/1721.1/153708</link>
<description>Control and Prediction of Polymer Network Structure and&#13;
Connectivity for Sustainability, Stimuli-Response, and Catalytic&#13;
Function
Lundberg, David James
Polymer networks find broad application in human life because of their robustness, diverse properties, and economical production. The functions of polymer networks depend on their physical properties, which are dictated at the microscopic level by their chemical structure and connectivity. Engineers and chemists can design polymer networks with new or expanded functions by navigating within known chemistry-connectivity-property relationships; however, it is often the case that new chemistries, structures, or connectivities must be invented, explored, and understood to realize novel functions. &#13;
&#13;
In this thesis, we develop and explore new chemistry-structure-connectivity relationships to design polymer networks with expanded functions related to sustainability, catalysis, and stimuli-responsiveness. In the first half, we leverage the well-known supramolecular self-assembly of PdxL2x-type metal-organic cages (MOCs) to form robust gels. Using isomeric chemistry with distinct connectivity, we fabricate gels with either expanded Pd12L24 or compact Pd2L4 MOCs as network junctions. Homogeneous catalyst moieties can be covalently appended within the endohedral space of Pd12L24 cross-links, allowing for translation into a heterogeneous support with a well-defined local environment that can be reused for multiple rounds of catalysis. We demonstrate this strategy for alcohol oxidation reactions catalyzed by a nitroxide, and intramolecular ring closing of allenols and alkynoic acids catalyzed by a Au(I) phosphine complex. In the latter example, catalyst colocalization within the MOC cross-link increases catalytic activity by over two orders of magnitude. Alternatively, Pd2L4 structures feature a cavity that is known to bind small-molecule guests. We systematically study the influence of guest binding on the physical properties of gels containing Pd2L4 cross-links and demonstrate that this feature can be used to tune bulk dynamics and to expand gel stability and assembly conditions, overcoming a fundamental challenge of this class of materials. Finally, we design materials capable of sol-gel transitions based on tunable guest binding events, enabling new modes of stimuli-responsiveness. &#13;
&#13;
In the second half, we work toward developing predictions of network deconstruction enabled by cleavable comonomer additives (CCAs). While CCA inclusion can enable network deconstruction and facilitate recycling, it can also perturb network connectivity and material properties. Therefore, prediction of the critical minimum CCA loading that enables bulk network deconstruction is highly desired. We first derive a reverse gel-point theory predicated on classic network formation theories to establish an upper bound for the critical CCA loading, which allows us to interpret the merits of the CCA approach relative to less successful strategies (e.g., addition of cleavable cross-linkers). Next, we advance the accurate characterization of copolymerization for systems with reversible propagation, which is representative of the largest class of CCAs within ring-opening metathesis polymerization (ROMP) chemistry. We resolve a critical knowledge gap in the literature arising from erroneous reports of copolymer equations and a perceived limitation of population balance modeling for describing these systems. Using the methods developed, accurate kinetic characterization of these systems can be achieved, which is vital for using stochastic simulations to predict the degradation behavior of CCA-containing materials.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153708</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, Construction, and Validation of Magnetic Particle Imaging Systems for Rodent, Primate, and Human Functional Neuroimaging</title>
<link>https://hdl.handle.net/1721.1/153707</link>
<description>Design, Construction, and Validation of Magnetic Particle Imaging Systems for Rodent, Primate, and Human Functional Neuroimaging
Mattingly, Eli
Non-invasive neuroimaging techniques, such as functional magnetic resonance imaging, have enabled a paradigm shift in the way neuroscientists study the brain. With these techniques, different levels of brain function can be safely localized in humans, allowing the study of diseases. However, due to the sensitivity limitations of existing methods, studies often require averaging across large cohorts (up to hundreds of subjects) to discern significant differences. Magnetic Particle Imaging (MPI) is a new imaging modality that may overcome this sensitivity limitation due to its very strong signal strength paired with a lack of biological background signal and noise. Previously, MPI instrumentation capable of functional neuroimaging had not been developed at the human scale. The goal of this thesis is to demonstrate the feasibility of MPI at this scale. First, the general principles of MPI design are discussed; then functional neuroimaging experiments on the rat brain are shown with up to 6x the sensitivity of 9.4T MRI; and finally the human-scale MPI system design and implementation are presented with an analysis of its measured sensitivity. MPI, with its unprecedented sensitivity, may catalyze a new class of neuroimaging experiments and open the door to using functional neuroimaging for diagnostics in a variety of diseases.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153707</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biological and Biomechanical Effects of Direct Perturbation of Tissue Structure in the Cirrhotic Liver</title>
<link>https://hdl.handle.net/1721.1/153706</link>
<description>Biological and Biomechanical Effects of Direct Perturbation of Tissue Structure in the Cirrhotic Liver
Leaker, Ben D.
Cirrhosis is the scarring that occurs as the common final stage of chronic liver diseases. Although our understanding of the disease has improved substantially over the past several decades, there has been little progress in the treatment of cirrhosis. Drug development has been largely unsuccessful because it is difficult to reverse profound structural changes with a pharmaceutical approach. Consequently, there is a need for innovative approaches to the treatment of cirrhosis that can exert greater influence on tissue architecture. This thesis explores three treatment strategies, each aimed at directly perturbing tissue structure through a distinct mechanism.&#13;
&#13;
First, we investigate microinjury perturbation using a fractional laser. This work revealed an exacerbated injury response in the cirrhotic liver. By investigating the mechanism of this response, we identified a novel ischemic susceptibility related to the microvascular architecture. &#13;
&#13;
We then investigate lytic perturbation through interstitial infusion of collagenase clostridium histolyticum and mechanical perturbation through shockwave disruption. With both techniques, we demonstrated significant reductions in fibrosis with minimal toxicity. &#13;
&#13;
Direct perturbation methods have been underexplored in the search for a treatment for cirrhosis and fibrosis of internal organs in general. Our findings highlight the potential of such an approach and may pave the way for new therapeutic options.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153706</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Long-term Object-based SLAM in Low-dynamic Environments</title>
<link>https://hdl.handle.net/1721.1/153705</link>
<description>Long-term Object-based SLAM in Low-dynamic Environments
Fu, Jiahui
Simultaneous Localization and Mapping (SLAM) is fundamental for autonomous agents to understand their surroundings. Moreover, for advanced robotic tasks, consistent object-level reasoning is critical, especially for activities involving repetitive traversal of the same environment, such as household cleaning and object retrieval. In a changing world, robots should always be able to locate themselves and their targets while maintaining an updated environment map. Traditional SLAM relies on static geometric primitives from observations and lacks semantic understanding. These unordered sets of points, lines, or planes struggle with object-level interpretation, leading to erroneous estimates when the scene changes.&#13;
&#13;
Because objects are the minimal units through which the world functions and evolves, object-aided SLAM is a natural choice. This thesis revolves around long-term object-based SLAM within low-dynamic environments, aiming to bridge the gap between SLAM techniques and high-level robotic applications and to enhance SLAM compatibility with object-level perception. It presents three contributions:&#13;
&#13;
First, we propose a multi-hypothesis approach for the ambiguity-aware adoption of object poses in object-based SLAM. This approach accommodates the inherent ambiguity arising from occlusion or symmetrical object shapes. We design a multi-hypothesis object pose estimator front end in a mixture-of-expert fashion and utilize a max-mixture-based back end to infer globally consistent camera and object poses from a sequence of pose hypothesis sets.&#13;
&#13;
Second, we develop two change detection approaches for offline and online applications, with two novel scene and object representations, PlaneSDF and shape-consistent neural descriptor fields, respectively. Regarding long-term operation, we account for inevitable scene changes over extended periods and the efficiency and scalability of the chosen map representations. Furthermore, we explore cluster- and object-level change detection, following a "divide-and-conquer" strategy to enable more accurate and flexible change detection through local scene differencing.&#13;
&#13;
Last, we propose a neural SE(3)-equivariant object embedding (NeuSE) for long-term consistent spatial understanding in object-based SLAM. NeuSE is trained to serve as a compact point cloud surrogate for complete object models. Our NeuSE-based object SLAM paradigm directly derives SE(3) camera pose constraints compatible with general SLAM pose graph optimization. This realizes object-assisted localization and a lightweight object-centric map with change-aware mapping ability, ultimately achieving robust scene understanding despite temporal environment changes.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153705</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering minimally immunogenic cargos and delivery modalities for gene therapy</title>
<link>https://hdl.handle.net/1721.1/153703</link>
<description>Engineering minimally immunogenic cargos and delivery modalities for gene therapy
Raghavan, Rumya S.
Since the discovery of CRISPR-Cas9 systems, gene therapies have revolutionized the field of molecular biology by introducing functional genes into cells to correct genetic defects or diseases. To date, several gene therapies are pending approval for use in the clinic and have shown promise in the treatment of a variety of genetic disorders including retinal dystrophy, hemophilia, lysosomal storage disorders, and certain types of cancer. However, there are several challenges to using CRISPR-Cas9 in the clinic, including the efficiency and specificity of the gene editing process, the potential for off-target effects, and the immunogenicity of the CRISPR-Cas9 system. Chief among these is the immunogenicity of (1) the therapeutic vector and (2) the cargo. Existing delivery systems trigger immune responses, rendering therapies ineffective and posing considerable risks to patients. Even the cargos themselves, Cas nucleases, have been shown to generate humoral and cellular immunity in the general population. Thus, there is a need for minimally immunogenic cargos and delivery modalities to advance gene therapy to the clinic. &#13;
&#13;
The goal of this thesis is to design and optimize minimally immunogenic (1) vehicles and (2) cargos for translational gene therapy delivery. (1) For the development of gene therapy delivery vectors, previous work has identified endogenous proteins that can form capsids and package nucleic acid. In this work, I focus on the PNMA or Paraneoplastic MA-containing protein family to engineer a delivery system that can form capsids, package nucleic acid, and deliver functional, minimally immunogenic cargo to target cells. (2) For the development of non-immunogenic gene therapy cargos, I engineer existing gene therapy cargos, such as SaCas9 and AsCas12a, to be minimally immunogenic while retaining native functionality. &#13;
&#13;
This work overall highlights the promise of protein engineering to minimize the immunogenicity of delivery systems and gene editing nucleases while optimizing their functionality in vivo. I hope this work will be expanded upon to serve as a foundation for personalized gene therapy medicine.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153703</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the role of mucin gels in disease progression and transmission</title>
<link>https://hdl.handle.net/1721.1/153701</link>
<description>An investigation of the role of mucin gels in disease progression and transmission
Bustos, Nicole A.
Mucus is a biological hydrogel that coats every wet epithelial surface of the body, including the respiratory tract. Within the body, mucus serves as a barrier: a mesh network made of mucin polymers that act as a size and biochemical filter, trapping small molecules, including pathogens. When mucus is cleared from the respiratory tract via exhalations such as coughing or sneezing, it can act as a vessel for infectious material from infected individuals that can be carried to other potential hosts or remain in the environment. The high shear rates associated with violent exhalations cause respiratory mucus to fragment into droplets spanning a range of sizes. Once emitted, droplets are propelled to nearby surfaces or entrained and advected in ambient air flows. Larger droplets may settle quickly to the ground, whereas smaller aerosolized droplets may remain suspended in the air and evaporate over time.&#13;
&#13;
In this thesis, we investigate the role of mucins in these within-host and external disease processes. In mucosal layers within the host, mucus plays an important role in limiting the progression of infectious pathogens. The pathogen’s ability to penetrate mucus, in many cases, determines its ability to reach its target cell and initiate infection before being cleared by the body. We begin by studying the impact of mucins on the transport of virus-sized particles. First, we examine the transport of bacteriophage, a model system for viruses, and nanoparticles of comparable size in reconstituted mucin gels simulating the respiratory tract and intestinal environment. Our findings reveal that phage have different transport abilities tied to their geometry, size, and surface chemistry. In addition, they are relatively unhindered in concentrated mucin gels compared to similar-sized nanoparticles. We show that in different phage populations, diffusive Brownian motion is associated with both Gaussian (normally distributed) and non-Gaussian population-level and particle-level step-size distributions, which consequently impacts the spread and confinement of these particles in mucin gels. Moreover, we establish that the degree of Gaussianity is influenced by mucin type, suggesting that mucin-phage biochemical interactions play a significant role in phages’ mucin-specific transport, as opposed to differences in the mucin network structure.&#13;
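The degree of Gaussianity in particle step-size distributions is commonly quantified with the non-Gaussian parameter; the sketch below uses simulated (not experimental) step data to show that ideal Brownian steps give a value near zero while heavier-tailed steps do not.

```python
import numpy as np

# Non-Gaussian parameter for 1D displacements dx at a given lag time:
#   alpha_2 = mean(dx^4) / (3 * mean(dx^2)^2) - 1
# alpha_2 is 0 for a Gaussian step-size distribution; positive values signal
# heterogeneous transport (e.g., intermittent hopping or trapping in a gel).

def non_gaussian_parameter(dx):
    dx = np.asarray(dx, dtype=float)
    return float(np.mean(dx**4) / (3.0 * np.mean(dx**2) ** 2) - 1.0)

rng = np.random.default_rng(0)
brownian_steps = rng.normal(0.0, 1.0, size=100_000)  # ideal Brownian motion
hopping_steps = rng.laplace(0.0, 1.0, size=100_000)  # heavier-tailed steps

print(non_gaussian_parameter(brownian_steps))  # close to 0
print(non_gaussian_parameter(hopping_steps))   # near 1 for Laplace steps
```

For a Laplace step distribution the fourth-moment ratio is 2, so the parameter converges to 1, making the contrast with Brownian steps easy to see.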
&#13;
Next, we study the effect of mucins and nanoparticles on the fragmentation dynamics of biological fluids. Polymers are known to shift and broaden the droplet size distributions of complex fluids compared to their Newtonian counterparts. Yet the impact of mucus’ main solid component, mucin, along with pathogens, on the size and dispersal of emitted droplets remains incompletely understood. We use varying concentrations of charged nanoparticles to simulate pathogen load in several mucus polymer model systems. Our measurements show that the shear rheology of mucin gels is insensitive to particle load. On the other hand, the extensional rheological properties of mucin gels, including the characteristic relaxation time and thread lifetime, are greatly modified by suspensions of nanoparticles. Accordingly, the average droplet size of sprayed mucin gels increases, and the spatial distribution (depth and clustering) of droplets varies depending on particle charge. By comparison, other mucus polymer models did not recapitulate the same behaviors. These results highlight the importance of incorporating native mucin properties and mucin-pathogen interactions in the modeling of biological fragmentation processes.&#13;
&#13;
The findings of this thesis underscore the importance of pathogen-mucin dynamics across multiple length scales. Integrating mucosal barriers into experimental systems is crucial for understanding the mechanistic and biophysical principles underlying disease transmission and initial host infection.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153701</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Unemployment</title>
<link>https://hdl.handle.net/1721.1/153699</link>
<description>Essays on Unemployment
McKenna, Claire
This dissertation explores the factors that shape individual labor force transitions after job loss. Using mixed methods, I explore how people navigate the aftermath of job loss in the U.S. and the variables that influence this process. In the first essay, I use in-depth interviews with a diverse group of women in Greater Boston to understand how individual trajectories after job loss take shape. These women had been separated from hourly service employment because of the COVID-19 pandemic, and I trace their responses after that separation in terms of intensity of search effort and jobs pursued. I offer a richer understanding of what women do and how they feel in the aftermath of job loss. I also propose a more multifaceted view of the factors influencing unemployed women’s decision-making with respect to their labor market positions and relationships with work. This type of qualitative analysis emphasizes that many women strive for labor market outcomes that align with politicians’ rhetoric about the importance of steady work but often encounter obstacles that set them back. I conclude the chapter with a discussion of policy changes aimed at helping women employed in low-paid work achieve greater stability. In the second essay, I explore institutional influences on individual labor force transitions after job loss in greater depth. Specifically, I explore the role of unemployment insurance (UI), a social insurance program that provides people who have lost their jobs with temporary income to meet basic needs while they job search. Combining linked Current Population Survey data with state administrative sources, I investigate the degree to which preexisting features of state UI programs affected job finding of the non-employed and job quality of the reemployed during the COVID-19 pandemic. During a period of unprecedented federal expansion, I try to understand the degree to which pre-pandemic features of state UI programs remained important. 
The role of interstate variation, particularly the influence of stricter states, is increasingly relevant, as more states grow emboldened to challenge established UI system norms or break with the federal partner. This essay contributes to the small but growing literature that traces disparate labor force outcomes to state UI policy differences. Further, it contributes a new dimension of insight to the vast UI literature by exploring the role of states. Read together, this dissertation contributes insight to issues and debates that are central to work and employment, a field committed to surfacing the labor market’s most pressing challenges and proposing solutions to make work more equitable and humane. Findings show that unemployment can be an upending force in people’s lives, and our public policy has a long way to go before it can adequately address the wide-ranging fallout.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153699</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Innovations in Urban Computing: Uncertainty Quantification, Data Fusion, and Generative Urban Design</title>
<link>https://hdl.handle.net/1721.1/153696</link>
<description>Innovations in Urban Computing: Uncertainty Quantification, Data Fusion, and Generative Urban Design
Wang, Qing Yi
Today, urban computing has emerged as an interdisciplinary field connecting data science and urban planning, reflecting the growing integration of urban life with advanced computational methods. Urban computing has particularly benefited from deep learning owing to the spatiotemporal and multi-modal nature of data emerging from urban systems. Deep learning models have not only boosted predictive accuracy beyond traditional models but also adeptly handled unstructured data. However, the application of deep learning to urban system analysis faces many challenges. Within the vast and complex scope of urban computing combined with deep learning, this dissertation zooms in on three emerging issues: uncertainty quantification, data fusion, and generative urban design, with a focus on transportation systems and urban planning applications. &#13;
&#13;
The first part of this dissertation proposes a framework of probabilistic graph neural networks (Prob-GNN) to quantify spatiotemporal uncertainty. This Prob-GNN framework is substantiated by deterministic and probabilistic assumptions and empirically applied to predict Chicago's transit and ridesharing demand. Prob-GNNs can accurately predict ridership uncertainty, even under significant domain shifts such as the COVID-19 pandemic. Among the family of Prob-GNNs, two-parameter distributions (e.g., heteroskedastic Gaussian) achieve the highest predictive performance, which is 20% higher in log-likelihood and 3-5 times lower in calibration errors compared to the one-parameter baseline distributions.&#13;
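The two-parameter heteroskedastic Gaussian mentioned above is typically trained by minimizing its negative log-likelihood; the sketch below shows that loss with hypothetical numbers, not the Prob-GNN implementation itself.

```python
import math

# Two-parameter (heteroskedastic) Gaussian: the model predicts a mean mu AND a
# per-observation standard deviation sigma, and is trained by minimizing the
# negative log-likelihood below. A one-parameter baseline fixes sigma globally.
# All numbers here are hypothetical, not the Prob-GNN code or data.

def gaussian_nll(y: float, mu: float, sigma: float) -> float:
    return 0.5 * math.log(2.0 * math.pi * sigma**2) + (y - mu) ** 2 / (2.0 * sigma**2)

# A large miss (e.g., a pandemic-era demand collapse) is penalized far less
# when the model can widen sigma for that input than when sigma stays small:
narrow = gaussian_nll(y=40.0, mu=90.0, sigma=5.0)   # overconfident prediction
wide = gaussian_nll(y=40.0, mu=90.0, sigma=40.0)    # calibrated, wide prediction
print(narrow, wide)  # the wide prediction has much lower loss
```

Letting sigma vary per input is what allows such models to stay calibrated under domain shifts like COVID-19.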
&#13;
The second part addresses data fusion, in which a theoretical framework of deep hybrid models (DHM) is created to combine numeric and imagery data for travel behavior analysis. DHM aims to enrich the family of hybrid demand models using deep architectures and to enable researchers to conduct associative analysis of sociodemographics, travel decisions, and generated satellite imagery. Empirically, DHM is applied to analyze travel mode choice using the Chicago MyDailyTravel Survey as the numeric inputs and satellite images as the imagery inputs. DHM can construct latent spaces that significantly outperform classical demand models and deep learning models in predicting aggregate and disaggregate travel behavior. Such latent spaces can also be used to generate new satellite images that do not exist in reality and to compute the corresponding travel behavior and economic information, such as substitution patterns and social welfare. &#13;
&#13;
The last part develops a human-machine collaboration framework for generative urban design and then instantiates the framework with a model trained to generate satellite imagery from a land use text description and a constraint image depicting the unaltered major road networks and natural environments. The trained model can generate high-fidelity, realistic satellite images while retaining control over the land use patterns in generated images with natural language descriptions, producing alternate designs with the same inputs, respecting the built and natural environment, and learning and applying local contexts from different cities.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153696</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of Thermophotovoltaics and Materials for High-Temperature Thermal Energy Storage</title>
<link>https://hdl.handle.net/1721.1/153694</link>
<description>Characterization of Thermophotovoltaics and Materials for High-Temperature Thermal Energy Storage
LaPotin, Alina D.
To achieve a decarbonized electricity system, energy storage capacity must grow significantly to mitigate the intermittency of wind and solar energy. Thermal energy grid storage (TEGS) can meet the low cost per unit energy needed for wind and solar paired with storage to be cost-competitive with fossil fuels. The TEGS concept uses thermophotovoltaics (TPVs) to convert stored heat back to electricity. TPVs have long had the potential to reach high efficiencies, but experimental demonstrations of devices have not met these predictions. Prior work has largely focused on ~0.5 – 0.75 eV devices paired with emitter temperatures &lt;1300 °C. Operating at higher emitter temperature is advantageous because it has the potential to boost TPV efficiency as well as power density, which lowers costs. However, the operation and characterization of devices under &gt;2000 °C light sources and at the view factors required to achieve high power densities present many challenges. &#13;
&#13;
In this thesis, we characterize high-bandgap 1.4/1.2 eV and 1.2/1.0 eV two-junction devices for TEGS. First, we integrate and test devices in a graphite cavity system and discuss challenges for high-temperature systems. Next, we demonstrate the devices with a tungsten emitter across a temperature range of 1900 – 2400 °C. By reflecting ~93% of unusable sub-bandgap light, a 1.4/1.2 eV device reached an efficiency of 41.1%±1% under a 2400 °C tungsten emitter at an electric power density of 2.39 W/cm². A 1.2/1.0 eV device reached an efficiency of 39.3%±1% under a 2127 °C emitter at an electric power density of 1.8 W/cm². Through the combination of high bandgaps paired with high emitter temperatures, multiple junctions, and high back surface reflectivity, devices reached efficiencies up to 9.1 percentage points higher than prior work.  &#13;
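The efficiency figures above follow the conventional TPV definition, electrical power out divided by electrical power plus net heat absorbed by the cell, which is why sub-bandgap reflectivity matters so much; the sketch below uses illustrative numbers, not the thesis's measurements.

```python
# Conventional TPV efficiency: eta = P_el / (P_el + Q_abs), where Q_abs is the
# net heat absorbed by the cell. Sub-bandgap photons reflected back to the
# emitter are recycled rather than lost, so a high back-surface reflectivity R
# raises eta directly. All power densities below (W/cm^2) are illustrative.

def tpv_efficiency(p_el, q_above_gap_loss, p_sub_gap, reflectivity):
    q_abs = q_above_gap_loss + p_sub_gap * (1.0 - reflectivity)
    return p_el / (p_el + q_abs)

# Illustrative: electric output, above-gap heat loss, sub-gap irradiance.
p_el, q_loss, p_sub = 2.4, 1.9, 20.0
print(tpv_efficiency(p_el, q_loss, p_sub, 0.93))  # roughly 0.42
print(tpv_efficiency(p_el, q_loss, p_sub, 0.80))  # drops to roughly 0.29
```

Because most of the emitter's radiation is sub-bandgap at these temperatures, a few percentage points of reflectivity translate into a large efficiency swing.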
&#13;
In the next part of this thesis, we develop a testing platform for the characterization of TPVs at high power density. By reaching a view factor of 0.36 from the cell to the emitter and using a carbon/carbon emitter, this setup tested 1.2/1.0 eV devices under a maximum irradiance of 92.7 W/cm² with a 2350 °C emitter producing electric power densities of 8.26 W/cm². Using this calorimetry platform, we measured a peak efficiency of 38.7%±1.4% under a 2250 °C emitter and at an electric power density of 6.73 W/cm².&#13;
&#13;
This work addresses a number of high-temperature/high-irradiance TPV testing challenges and demonstrates the viability of TPVs to be both an efficient and power-dense heat engine. This can enable grid-scale thermal energy storage and can be used in a variety of applications including portable or stationary power generation and heat recovery. In the last part of this thesis, we address the challenge of creep in graphite components in the TEGS system. We characterize compressive creep rates in graphite insulation and propose mitigation strategies.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153694</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Study of Vaccination Strategies to Change B Cell Immunodominance</title>
<link>https://hdl.handle.net/1721.1/153691</link>
<description>Computational Study of Vaccination Strategies to Change B Cell Immunodominance
Yang, Leerang
B cell immunodominance refers to the phenomenon whereby B cells targeting certain epitopes of a pathogen are preferentially selected from among B cells that could target the diverse epitopes presented on a pathogenic protein. This phenomenon presents a challenge for the development of vaccines against mutable viruses like HIV, influenza, and SARS-CoV-2, which rapidly mutate their immunodominant epitopes and render existing antibodies ineffective. Moreover, because the immune system preferentially targets immunodominant epitopes, functionally important and conserved epitopes are targeted less, which poses a barrier to the generation of broadly neutralizing antibodies. &#13;
&#13;
Computational modeling offers a valuable tool for understanding how immunodominance arises from the stochastic and dynamic processes of the immune response. By simulating the systemic interactions between B cells, helper T cells, antibodies, and antigens, we can gain insight into the underlying mechanisms of immunodominance, thereby informing more effective vaccine design strategies. In this thesis, computational models are developed to investigate strategies for modulating B cell immunodominance in vaccination. Experimental data from collaborators are used to validate model findings.&#13;
&#13;
The first project investigates engineered influenza immunogens designed to elicit cross-reactive B cells targeting the receptor-binding site. A computational model reveals that the efficacy of an immunogen in eliciting cross-reactive antibodies depends on interactions between B cell antigen engagement and T cell-mediated selection within germinal centers. The second project addresses the enhanced efficacy of a third dose of SARS-CoV-2 vaccines against the Omicron variant. Through computational models and human vaccination data, we find that a third dose significantly boosts neutralizing antibody responses by targeting less-mutated, subdominant epitopes. This is facilitated by pre-existing antibodies from earlier doses, which improve antigen availability and partially mask immunodominant epitopes. The third project seeks to optimize 'slow delivery' immunization schemes. Our mathematical models, developed in collaboration with experimentalists, propose a practical way to optimize vaccine delivery kinetics for superior T follicular helper cell and antigen-specific germinal center B cell responses.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153691</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carrots and Sticks in Green Moves: Assessing the Mobility, Environmental, Economic, and Social Impacts of Sustainable Mobility Solutions</title>
<link>https://hdl.handle.net/1721.1/153689</link>
<description>Carrots and Sticks in Green Moves: Assessing the Mobility, Environmental, Economic, and Social Impacts of Sustainable Mobility Solutions
Zheng, Yunhan
In recent decades, urbanization has presented an enduring challenge of our era, intensifying the complexities of travel demand management. In our contemporary landscape, these challenges are exacerbated by uncertainties arising from rapid technological innovation and unforeseen disruptions such as COVID-19. In response to these societal disruptions, various new transportation policy instruments have emerged. However, the impacts of many of these mobility solutions remain largely unexplored. This dissertation investigates the multifaceted impacts of four transportation policy instruments: congestion pricing on ride-hailing services in Chicago, car ownership restrictions in Beijing, the introduction of electric vehicle charging stations in California, and the promotion of remote work across the United States. Through these studies, we aim to illuminate the diverse effects of different types of sustainable mobility solutions on society.&#13;
&#13;
The first study assesses the effects of Chicago’s Ground Transportation Tax, a congestion pricing policy targeting ride-hailing services in downtown Chicago. Although the policy was launched to discourage single ride-hailing trips and stimulate shared ride-hailing trips, as a way to combat congestion and promote sustainable forms of transportation, its effectiveness remains unclear. This study is one of the first to empirically assess the impact of transportation network company (TNC) congestion surcharge policies on urban transportation. We employ a difference-in-differences strategy to assess the policy’s influence on ride-hailing ridership and its implications for equity.&#13;
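The difference-in-differences logic behind this kind of policy evaluation can be sketched in a few lines; the trip counts below are toy numbers, not the study's data.

```python
# Difference-in-differences: compare the before/after change in the treated
# group (e.g., downtown ride-hailing trips after the surcharge) with the same
# change in an untreated comparison group, so common time trends cancel out.
# The trip counts below are toy numbers, not the study's data.

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Downtown trips fall from 10,000 to 7,000 after the policy, while trips in a
# comparison area fall from 10,000 to 9,000 over the same window (a citywide
# trend). The policy-attributable change is the difference of the two changes.
effect = diff_in_diff(10_000, 7_000, 10_000, 9_000)
print(effect)  # -> -2000
```

In practice this comparison is run as a regression with controls, but the identifying subtraction is exactly the one above.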
&#13;
The second study examines Beijing’s car license plate lottery policy, in effect since January 2011 to address traffic congestion and air pollution. Despite the policy, Beijing residents can still buy and register cars in neighboring cities, potentially compromising its effectiveness. We employ a synthetic control method combined with difference-in-differences to quantify this policy leakage. Our results reveal that the leakage led to an additional 443,000 cars sold in neighboring cities (within 500 km of Beijing) from 2011-2013, constituting 35%-40% of the stipulated car growth reduction. Acknowledging this leakage emphasizes the need for a more comprehensive, regionally collaborative approach to Beijing’s urban transportation challenges.&#13;
&#13;
The third study explores the economic impact of Electric Vehicle Charging Stations (EVCS) beyond their primary function of promoting cleaner transportation. Analyzing a dataset covering 5,000 EVCS and 130,000 Points-of-Interest (POI) in California, this study employs a combination of propensity-score-matching and difference-in-differences methods to systematically measure the influence of EVCS on nearby establishments. The findings reveal that a newly-established EVCS within 500 meters significantly increases customer visits and annual spending at nearby businesses, emphasizing the economic advantages of increased EVCS investments. We also find that EVCS benefit businesses not only in developed regions but also in disadvantaged regions, and tend to attract higher-income and exploratory visitors.&#13;
&#13;
The fourth study investigates remote work, whose sustainability impact has garnered great attention amid its widespread adoption during the COVID-19 pandemic. This study examines the impacts of remote work on vehicle-miles traveled (VMT) and transit ridership in the United States from April 2020 to October 2022. Applying an instrumental variable approach, this study finds that remote work significantly reduces state-level VMT and Metropolitan Statistical Area transit ridership, and quantifies the magnitude of these effects. Based on these results, we also measure the impacts of remote work on on-road CO2 emissions and transit fare revenue loss during this period. These insights are crucial for policymakers addressing urban transport and environmental sustainability amid continued remote work trends.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153689</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Mechanisms of Shock-Induced Deformation in Polymeric Systems</title>
<link>https://hdl.handle.net/1721.1/153688</link>
<description>Understanding the Mechanisms of Shock-Induced Deformation in Polymeric Systems
Mikhail, John P.
Shock waves, commonly resulting from ballistic or explosive impact in military scenarios, are highly destructive and pose significant health risks due to their rapid, intense energy transfer. To mitigate these effects, it is crucial to design armor capable of absorbing and dissipating shock wave energy, thus safeguarding the wearer. Traditional materials like wood and metal often fall short of these requirements due to low toughness, high weight, limited flexibility, or manufacturing challenges. Furthermore, even when suitable materials are identified, the physical deformation mechanisms contributing to their dissipative properties are often unclear, making it difficult to generalize the benefits of one material in order to find alternatives.&#13;
&#13;
This thesis focuses on the study of polymeric materials, which often outperform traditional materials in terms of shock energy absorption. The vast design space of polymers affords them a diverse set of physical properties and adaptability to different applications. Researchers can tailor these properties by altering the polymers' type, composition, or structure. In order to obtain detailed mechanistic understanding of the shock response of some polymeric materials, atomic and molecular scale simulations are leveraged and the configurational and vibrational responses of the systems analyzed. The thesis work consists of three major projects: (1) a theoretical study of the statistics of Gaussian polymer chains under harmonic applied fields, (2) a computational study of semicrystalline and crystalline models of polyethylene (PE) undergoing shock deformation, and (3) a computational study of liquid ethanol systems undergoing shock deformation.&#13;
&#13;
The theoretical study of Gaussian chains generalizes previous literature models and is shown to approximate well a variety of applications involving polymer confinement. A wave equation is derived from the force-compression relationship of the Gaussian chains (with the addition of a viscous damping fluid) and is connected to the well-studied Korteweg-de Vries equation. The effects of material properties (dispersion, dissipation, and nonlinear elasticity) on the propagation of a shock wave are demonstrated explicitly.&#13;
&#13;
The computational study of semicrystalline PE considers systems of three different crystallinity fractions; their configurational changes under shock are analyzed, distinguishing the responses of the crystalline and noncrystalline regions of the systems. Physical mechanisms underlying the deformation response of these materials, including crystallographic slip and loss of nematic order, are identified as a function of shock pressure. The prominence of these mechanisms is shown to be a non-monotonic function of lamellar thickness and crystallinity fraction, and they each contribute to shock energy storage.&#13;
&#13;
The computational study of ethanol reveals the configurational and vibrational changes undergone by the liquid in response to elevated shock pressure and temperature. Shifts of vibrational peaks are observed and compared with experimental results from our collaborators. The hydrogen bonding network responds to energy from the shock wave, resulting in structural and dynamical alterations which may have implications for thermal conductivity and reduction of hot spot formation from shocks.&#13;
&#13;
Overall, this thesis presents a multipronged and detailed approach to understanding shock deformation in polymeric and organic systems, offering mechanistic insights and methodological frameworks applicable to a broader range of materials.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153688</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting silos with distributed and private computation</title>
<link>https://hdl.handle.net/1721.1/153685</link>
<description>Connecting silos with distributed and private computation
Vepakomma, Praneeth
Data in today’s world is increasingly siloed across a wide variety of entities with varying resource constraints. The quality of wisdom generated from collaboratively processing such data is substantially better when the data from all these entities is shared among them or centralized at a nodal entity. Such data sharing and centralization is often prohibited by stringent privacy regulations, computational constraints, communication bottlenecks, trade secrets, trust issues, and competition. This necessitates the development of efficient methods for distributed computation that preserve privacy while generating wisdom whose quality is on par with the centralized case. This thesis introduces interdisciplinary methods that tackle several such problems using distributed and private computation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153685</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Flow and Fracture of Antarctic Ice Shelves</title>
<link>https://hdl.handle.net/1721.1/153683</link>
<description>The Flow and Fracture of Antarctic Ice Shelves
Millstein, Joanna D.
This thesis explores the deformation of glacier ice, with an aim of deriving insights into its flow and fracture through satellite remote sensing methods. Glaciers deform under the driving force of their own weight and display dramatic responses to both external forcing, such as changes in climate, and internal forcing, such as variations in stress. Here, we focus on Antarctic ice shelves, the fast-flowing extensions of the ice sheet that impart stabilizing resistive stresses onto the grounded area. Processes contributing to dynamic change through melting, calving, and the flow and fracture of ice can be explored and quantified with empirical relationships derived from mechanical properties. To understand the stability and future projections of glaciers and ice sheets in a changing climate, it is critical that we quantify and calibrate these processes.&#13;
&#13;
In Chapter 2, we leverage modern satellite remote sensing products to gain new insights into the flow of glacier ice. We validate and calibrate the constitutive relation for glacier ice across Antarctic ice shelves using a simple relationship between ice thickness and surface strain rate data. We find that the constitutive relation should employ an exponent n = 4, in contrast to the commonly used n = 3. This finding implies that ice shelves are more sensitive to changes in the stress state than typically assumed. Next, in Chapter 3 we derive a spatiotemporally dense dataset of surface strain rate fields across the Brunt Ice Shelf, the site of active full-thickness fractures, known as rifts. This dataset provides a mechanical framework with which we can analyze dynamic change, allowing us to quantify surface deformation with radar remote sensing. Lastly, Chapter 4 presents a fatigue-crack growth model for active rifts. This empirical framework sets bounds on rift propagation rates over periods of weeks and months and, in doing so, presents a simple parameterization of rift growth rates that can be implemented using observational data. This work provides a promising method to resolve fracture evolution over the period of weeks to years. Ultimately, this thesis uses observational data to validate theoretical models of glacier change, advancing our grasp on the dynamics of Antarctic ice shelves.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153683</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soft, Compliant Tactile Robotic Manipulators</title>
<link>https://hdl.handle.net/1721.1/153681</link>
<description>Soft, Compliant Tactile Robotic Manipulators
Liu, Sandra Q.
When we look to the future of soft robotics and manipulation, we begin to look towards sensory-rich and compliant grasping mechanisms. Not only do we want to capitalize on the significant advantages in safety and adaptability that soft robots have, but we also want to incorporate high-resolution tactile sensors, which will allow soft robots to perform more tasks. One such system is the GelSight sensor, which is low-cost, effective, and high-resolution. However, the integration of these camera-based sensors into compliant manipulators is difficult due to the rigidity of the sensor backing. This thesis explores the design of multiple different compliant high-resolution tactile manipulators, along with some examples of their real-world uses. The first such design incorporates a simple camera-based tactile sensor into an exoskeleton-covered soft robot with vision-based proprioception. A later design integrates full camera-based tactile sensing capabilities into a flexible Fin Ray structure. Finally, the designs culminate in a novel soft-rigid human-inspired robotic hand with continuous tactile sensing which is capable of grasping heavier objects and safely interacting with humans. The incorporation of high-resolution tactile sensors into soft, compliant robots brings us closer to developing new manipulators that could someday match or exceed the capabilities of human hands.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153681</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Isotopes as a Tool for Exploring Exoplanet Atmospheres</title>
<link>https://hdl.handle.net/1721.1/153680</link>
<description>Isotopes as a Tool for Exploring Exoplanet Atmospheres
Glidden, Ana
Isotopes are a powerful tool to understand the formation and evolution of celestial bodies. Isotope ratios have been used extensively in planetary science to help better understand the history of the solar system. In this thesis, I evaluate how we can characterize exoplanet atmospheres using isotopologues. I also extend this to the detectability of novel biosignature gases by way of carbon isotope ratios.&#13;
&#13;
I evaluated the detectability of ¹³CO₂ in the atmospheres of temperate terrestrial and sub-Neptune-sized planets and assessed its suitability as a biosignature gas. While the search for signs of life has focused on terrestrial planets around M dwarf stars, an unambiguous detection of life—including through isotopologues—on a rocky world will likely remain out of reach in the JWST era. Due to their size and their ability to retain H₂-dominated atmospheres with large scale heights, sub-Neptunes are easier to study using transmission spectroscopy than terrestrial planets. I simulated observations of CO₂ isotopologues in the H₂-dominated atmospheres of our nearest (&lt; 40 pc), temperate (equilibrium temperature of 250-350 K) sub-Neptunes with M dwarf host stars. I find Earth-like fractionation of ¹³CO₂ to be distinguishable only if the atmosphere is H₂-dominated with a few percent CO₂. I find that carbon isotopes via CO₂ are observable in the atmospheres of sub-Neptune planets only for the most idealized targets. I extended this work to fully consider the requirements and considerable challenges of using isotopologues as biosignature gases in exoplanet atmospheres. I conclude that isotopologue measurements should be used to evaluate formation mechanisms of planets and exoplanetary systems rather than as potential biosignature gases.&#13;
&#13;
Carbon isotopic composition in planetary atmospheres can inform both the formation location and heating processes within the protoplanetary disk. Through simulations, I assessed the detectability of isotopologues in the atmospheres of exoplanets with JWST and determined the best dozen targets for measuring atmospheric carbon isotope ratios using transmission spectroscopy. The similarity of carbon isotope ratios among our solar system bodies is at odds with simulations, which predict that carbon isotope ratios should vary both radially and axially within a protoplanetary disk due to temperature and UV radiation variations. Measuring the ¹²C/¹³C ratio in planets and their host stars will help to (1) assess whether the homogenization of ¹²C/¹³C is common in planetary system formation and (2) evaluate which mechanism(s) could be responsible.&#13;
&#13;
Finally, TRAPPIST-1 e is one of the best candidates for a potentially habitable terrestrial exoplanet. In preparation for the JWST GTO Exoplanet Transit Spectroscopy team’s observations of TRAPPIST-1 e (PI: N. Lewis), I evaluate which atmospheric scenarios we will be able to constrain with our limited data. The four transits of our GTO Program will permit initial reconnaissance of TRAPPIST-1 e’s atmosphere, though careful consideration is required of which atmospheric scenarios we will be able to support or reject with our expected data. We will be able to evaluate specific atmospheric compositions, such as a low-mean-molecular-weight, primordial H₂-dominated atmosphere, and the presence of strong absorbers, such as CO₂ and CH₄. I find that in the best-case scenario we may see hints of CO₂ and CH₄ in our data, though this is unlikely given the strong chance that the atmosphere has a small scale height and that the atmospheric signal will be entangled with stellar noise from the M dwarf host star. However, we will be able to place additional constraints on the mean molecular weight of the atmosphere and use our observations to inform the best path forward for making conclusive detections of the two most detectable molecular species—CO₂ and CH₄—in the atmosphere of TRAPPIST-1 e.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153680</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>State-space Modeling of Neural Oscillations: Toward Assessing Alzheimer’s Disease Neuropathology with Sleep EEG</title>
<link>https://hdl.handle.net/1721.1/153679</link>
<description>State-space Modeling of Neural Oscillations: Toward Assessing Alzheimer’s Disease Neuropathology with Sleep EEG
He, Mingjian
The recent development of Alzheimer's disease (AD) modifying therapies has led to a heightened need for early diagnostic methods. Studies over the past two decades have highlighted brain waves during sleep as a window of opportunity to assess neural activity impacted by AD neuropathology before clinical symptoms emerge. However, many existing practices in sleep electroencephalography (EEG) analysis are based on visual inspection and the ubiquitous Fourier transform. These approaches face several methodological challenges demonstrated in this thesis, and they have become increasingly limiting as the scientific questions become more complicated, as is the case with studying sleep oscillations in preclinical AD patients.&#13;
&#13;
In this thesis, I developed a set of methods centered around a parametric oscillator model. These new algorithms build on previous ideas of state-space modeling of neural oscillations and enable novel solutions to some fundamental questions in sleep EEG, which are crucial to address for its interpretability in older adults. First, a previous data-driven greedy search method is improved to identify the number of oscillations and characterize them more reliably through time-domain modeling. I applied this updated approach to distinguish between slow and delta oscillations during sleep and observed clearer associations of delta oscillations. Second, a new switching state-space solution extends the application of oscillator modeling to time-varying neural oscillations. I derived a probabilistic detection of sleep spindles that automatically adjusts to individualized spindle characteristics. This method can work on data as short as a few seconds and extracts complex spindle activity, providing highly stable spindle property estimates. Third, a semi-automated pipeline is developed to build realistic four-layer forward models, including the cerebrospinal fluid, for each individual to account for cortical atrophy. A simulation study showed that cortical atrophy alone produces modest reductions (~2 dB) in scalp EEG amplitude while also spreading the source currents, suggesting that individualized head models might need to be employed to study sleep EEG. To this end, a general dynamic source localization solution is derived, with full capacity to accommodate spatial patterns in the cortex while maintaining efficient parameter estimation. A proof-of-concept analysis showed successful localization of multiple simultaneous oscillations in real EEG recordings. This work advances our ability to analyze sleep EEG data from first principles of statistical modeling and signal processing, paving the way for individualized and non-invasive assessment of sleep oscillations in older adults. 
These methods may provide a critical step toward rigorously inferring the underlying neural activity that is likely altered by early AD neuropathology.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153679</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation, Prediction, and Monitoring of Methane Emission from Oil and Gas Development</title>
<link>https://hdl.handle.net/1721.1/153678</link>
<description>Evaluation, Prediction, and Monitoring of Methane Emission from Oil and Gas Development
Li, Yunpo
Human beings are experiencing man-made climate change due to the emission of greenhouse gases, among which methane is a highly potent one, with a 20-year global warming potential 80 times higher than that of carbon dioxide. A systematic approach is needed to evaluate, monitor, and mitigate methane emissions such as those from the oil and gas (O&amp;G) industry (22% of total anthropogenic emissions). In this thesis, I addressed three critical challenges to control such emissions, namely, (i) the uncertainty in a potential O&amp;G methane emission pathway via the groundwater system, (ii) the large population of potentially leaking infrastructural elements, which makes routine inspection inefficient and expensive, and (iii) the intermittent emissions that cannot be captured via periodic surveys. To address (i), groundwater samples were collected from more than 300 sites in O&amp;G-producing Northern Appalachia. Dissolved methane concentration was negatively correlated with the distance to O&amp;G wells in one of our study regions, but this correlation was confounded by topographic variation. Furthermore, dissolved sulfate concentration was negatively correlated with methane concentration and with distance to coal mines, and these correlations were robust even when considering topographic confounding. In conclusion, groundwater methane could be attributed to natural geological sources and sulfate-mediated biogeochemical processes, rather than O&amp;G development. To investigate (ii), Machine Learning (ML) models were used to predict O&amp;G well integrity issues related to methane leakage to guide prioritized sensor allocation. Different ML models (e.g., Random Forest, XGBoost, and Logistic Regression) were compared on a dataset consisting of 1,250 O&amp;G wells, and a test F1 score above 65% was achieved. Furthermore, the most important physical parameters for the prediction were identified, and the geospatial clustering of integrity issues was observed and analyzed. 
These findings could enable prioritized sensor allocation near O&amp;G facilities with high emission risk and inform better design of future O&amp;G wells. To study (iii), inexpensive continuous methane sensors are needed, but such sensors can suffer from signal interference. To address this, an ML signal-deconvolution strategy was proposed, and an experimental apparatus, consisting of mass flow controllers, a gas chamber, and a data logging system, was built to collect data for ML model training and testing. In addition, preliminary tests were conducted to study the influence of humidity and gas flow rate on sensor performance. Lastly, the apparatus is being upgraded to integrate commercially available methane sensors and a temperature control system. Overall, the research in this thesis deepens our understanding of O&amp;G methane emissions and enhances our capability to monitor and mitigate those contributions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153678</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact.AI: Democratizing AI through K-12 Artificial Intelligence Education</title>
<link>https://hdl.handle.net/1721.1/153676</link>
<description>Impact.AI: Democratizing AI through K-12 Artificial Intelligence Education
Williams, Randi
Today's youth are growing up in a world where artificial intelligence (AI) technologies shape how we live, work, play, socialize, and navigate our world. This rapid technological change is already significantly shifting individuals' lives and the opportunities they can obtain. Thus, researchers, educators, and government leaders must consider how to prepare a diverse citizenry to thrive in the emerging age of AI, for example, through outreach initiatives like grade school AI curricula. My thesis delves into K-12 AI literacy, particularly how AI curricula might empower students to see themselves as technosocial change agents, capable of using technology to work toward positive, equitable social change.&#13;
&#13;
First, I explore the question, "What should K-12 youth know about AI?" and introduce a new AI literacy framework, Impact.AI, covering AI concepts, practices, and perspectives that align with a technosocial change agent identity. This framework will inform the development of middle school AI curricula that empower students to become conscious consumers, ethical engineers, and informed advocates of AI. Next, I consider, "How should we design AI curricula for K-12 students and educators?" and share how I iteratively developed AI education tools and curricula that facilitate students' learning about AI as they work on AI projects. Finally, I evaluate how well these frameworks and designed artifacts contributed to students' learning about AI and developing strong AI identities. As AI becomes increasingly prevalent in everyday life, it is essential that all people have the opportunity to both understand and shape the technology.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153676</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Heterogeneous Nucleation Mechanisms in Polyolefins</title>
<link>https://hdl.handle.net/1721.1/153675</link>
<description>Understanding Heterogeneous Nucleation Mechanisms in Polyolefins
Volchko, Nathan William
Polyolefins are among the most widely used semi-crystalline polymers in the modern world, with many of their desirable properties arising from the microstructure formed during crystallization.  While processing conditions have a large impact on crystallization, an additional tool is the use of additives called nucleating agents (NAs), which increase the rate at which small crystallites form and can even promote certain crystal polymorphs over others.  However, much of the current understanding of NAs is based on trial and error, with no predictive power.  Experimental methods to study heterogeneous nucleation are often plagued by confounding factors that lead to uncertainty in the measured kinetics and thermodynamics of nucleation.  Molecular simulations have recently shown promise for investigating the nanoscale time and length scales associated with nucleation, but remain unverified against real materials.  This thesis advances the way heterogeneous nucleation is studied, using a two-pronged approach to studying potential NAs: in the laboratory, and with molecular dynamics simulations.&#13;
&#13;
Two experimental methods were refined to accurately measure kinetic and thermodynamic parameters of polymer + NA pairings.  In the first method, micro-droplets of high-density polyethylene were crystallized on single-crystal substrates, allowing precise control of the NA–polymer interface.  The second method is the current state of the art and consists of micro-droplets of high-density polyethylene containing nanoplatelet additives in an immiscible matrix.  This method sacrifices some control of the nucleation interface in order to expand the set of testable NAs to virtually any material.  Finally, a third experimental method, based on a novel combination of the other two, was introduced to independently validate results obtained by the state-of-the-art method for the first time.  Of broader significance, a thermodynamic efficiency metric was defined to allow comparison of NAs between different polymers in a more robust way than prior metrics.&#13;
&#13;
Molecular dynamics simulations formed the second major tool in studying heterogeneous nucleation.  The effects of three new NAs on the nucleation of an n-alkane were tested, paralleling the laboratory experiments of the most successful NAs.  The same kinetic and thermodynamic parameters obtained in the laboratory with real materials were compared for the first time, demonstrating excellent qualitative agreement.  Molecular mechanisms were also investigated, including epitaxy and strain in the alkane crystal that reduced lattice mismatch. The strength of interaction between the NA and the alkane was quantified in a new way and found to be a key factor in determining nucleating efficiency.&#13;
&#13;
The results of this thesis provide new insight into molecular mechanisms of nucleation, and the improved methods pave the way for high-throughput experiments and simulations to screen the nucleating efficiency of many materials.  With this approach, novel NAs can be more rapidly discovered, and intelligent selection from a database of NAs will allow the production of next-generation materials with tailored crystal morphologies and improved macroscopic properties.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153675</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, fabrication, and application of bioinspired soft photonic materials</title>
<link>https://hdl.handle.net/1721.1/153674</link>
<description>Design, fabrication, and application of bioinspired soft photonic materials
Miller, Benjamin
When one thinks of color, the first thing which comes to mind is often absorption by pigments and dyes or emission by lights and displays. However, there is another mechanism known as structural color, most commonly seen in the swirling colors of a soap bubble. This is a wave phenomenon, where incident and reflected light interfere with each other to selectively reflect certain wavelengths. More complex examples can often be seen in nature, such as the bright coloration of the blue morpho butterfly caused by intricate nanostructures on the wing.&#13;
&#13;
Developing synthetic versions of the structurally-colored materials found in nature has been a longstanding goal of the research community, with many notable successes and potential applications. One particularly interesting area is mechanically-responsive structural color, where the optical properties of the material change when strained. Yet current examples of these materials suffer from a number of drawbacks such as poor optical or mechanical performance, limited colors or patterns, high cost, and slow or low-volume production.&#13;
&#13;
The core of this thesis is the development of a new manufacturing process capable of producing sheets of mechanically-responsive, structurally-colored materials in a tunable, scalable, and affordable way. These are elastic materials which reversibly and predictably change color when stretched or compressed, achieved by combining 19th century research on color photography with recent research on holography. An assortment of sample materials created with this process are thoroughly analyzed.&#13;
&#13;
The thesis then extends this concept, suggesting a wide variety of alternative functional structurally-colored materials that might be created by modifying the manufacturing process in key ways. This is demonstrated by the creation of mechanically-responsive, structurally-colored fibers.&#13;
&#13;
With such a rich design space now accessible, exploring it experimentally becomes challenging. Therefore this thesis also presents a software platform that was developed, allowing the user to create a three-dimensional model of any desired object coated with any photonic structure. The user can then deform the object in real-time and observe the change in visual appearance.&#13;
&#13;
Finally, a number of applications for dynamic structurally-colored materials are demonstrated or discussed, making use of their ability to convert invisible physical forces into visible color change. This spans fields including healthcare, fashion, robotics, and human-computer interaction.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153674</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous experimentation for molecular discovery applications</title>
<link>https://hdl.handle.net/1721.1/153673</link>
<description>Autonomous experimentation for molecular discovery applications
Canty, Richard Benjamin
Automated experimental systems, which provide a means to accelerate scientific research and the discovery of application-driven materials, are specialized and limited in scope. The inability to pivot to new research areas has driven a shift in design to create generalized systems and incorporate decision logic to grant systems autonomy. The incorporation of autonomy enables flexible and adaptable operation but complicates automation. In this thesis, autonomy for operation execution and workflow design was integrated into an automated platform for molecular discovery. This integration required adapting control architectures to handle goal-oriented commands, advancing scheduling strategies to orchestrate concurrent and evolving workflows, and developing modular interfaces to facilitate workflow mutation and the incorporation of real-time decision logic. For control, a master controller orchestrated platform tasks while a separate database provided platform instruments with current information on samples, workflows, and platform resources. This enabled executive control through high-level commands which could be translated into concrete actions by the agent at run-time using current platform information and available instrument capabilities. For scheduling, a greedy algorithm was developed to handle the simultaneous execution of multiple workflows with temporal constraints between tasks and whose tasks and operational details could change. This provided a way to accommodate the adaptability and agency of the platform’s systems, ensure sample integrity, and prevent resource and operational conflicts. Workflows were designed with modular, high-level tasks and were self-contained to allow platform agents to mutate the workflow without requiring knowledge of other workflows or the implementation-level details of other agents’ operations. 
Hardware modules utilized an inverted modular design whereby each task they were programmed to accomplish was injected into their parent controller, rather than every controller enforcing a standard set of operations on its children. This enabled every agent to make flexible use of its decision logic with access to a suite of context-specific commands. Further exploring the ideas of autonomy, an algorithm was developed to assist the platform in selecting alternative reaction conditions to improve reaction yields. The data-driven approach imitates a chemist by considering conditions from related reactions, then evaluating trust in those conditions based on the number and quality of successful reactions similar to these related reactions, in order to determine new conditions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153673</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven ocean modeling using neural differential equations</title>
<link>https://hdl.handle.net/1721.1/153671</link>
<description>Data-driven ocean modeling using neural differential equations
Ramadhan, Ali
Even with today’s immense computational resources, climate models cannot resolve every cloud in the atmosphere or eddying swirl in the ocean. However, collectively these small-scale turbulent processes play a key role in setting Earth’s climate. Climate models attempt to represent unresolved scales via surrogate models known as parameterizations. However, these parameterizations have limited fidelity and can exhibit structural deficiencies. In this thesis, we attempt to develop new data-driven parameterizations of oceanic turbulence with better predictive skill than the existing operational parameterizations. First we develop Oceananigans.jl, a next-generation, fast, and friendly ocean model that runs on CPUs and GPUs, to be able to generate the high-resolution simulation data needed to train the parameterizations (and simulate some other flows at high resolution along the way). We describe in detail the equations being solved and the numerical methods employed by Oceananigans.jl, as well as some validation test cases and selected research results that Oceananigans.jl made possible. We then demonstrate that neural networks embedded in the partial differential equations of fluid dynamics may be trained on highly resolved fluid-dynamical models of unresolved scales and act as data-driven parameterizations in an ocean model. These neural networks overcome limitations of traditional neural networks in fluid dynamical applications in that they can incorporate conservation laws and are stable when integrated for long times. We argue that our approach provides a new route forward to the development of surrogate models for climate science, opening up exciting new opportunities. We then end by transferring the trained neural networks into a hydrostatic large-scale ocean model.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153671</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Airline Recovery using Mixed-Integer Optimization and Supervised Learning</title>
<link>https://hdl.handle.net/1721.1/153670</link>
<description>Large-Scale Airline Recovery using Mixed-Integer Optimization and Supervised Learning
Hizir, Ahmet Esat
Airlines plan their aircraft and crew schedules using operations research methods. However, these schedules are often disrupted due to the irregular nature of flight operations. Airline recovery is the process in which airlines take various actions to adjust and repair their aircraft routes, crew schedules, and passenger itineraries. This process has a sequential structure in practice, where aircraft recovery is followed by crew and then passenger recovery. Although recovery problems are smaller in scope than their planning counterparts, limited solution timeframes prevent airlines from using a full-scale optimization approach.&#13;
&#13;
This thesis proposes fast solution methods that combine mixed-integer optimization and supervised machine learning techniques to find better solutions to large-scale airline recovery problems than those found with other exact and heuristic approaches. Our approach reduces the solution space for a given disruption by adding constraints (cuts) based on the patterns discovered in the solutions to historical disruptions. The model with the added cuts is solved using mixed-integer optimization solvers.  &#13;
&#13;
During the day of flight operations, the available time for airlines to handle disruptions may vary. The overall solution process we propose allows parameter tuning to match the extent of solution space reduction with the available solution time. This feature helps the proposed methods to effectively navigate the trade-off between solution quality and runtime. Our computational studies are conducted using real flight and crew schedules from major US airlines with more than 2,500 daily flights. Experiments demonstrate that our approach can generate solutions of significantly higher quality than benchmark methods.&#13;
&#13;
We use tree-based classification methods to predict recovery decisions. Due to their interpretable structures, we are able to discover insights into the attributes of effective recovery decisions. We demonstrate that these insights can be incorporated into currently used heuristic-based airline recovery processes to improve solution quality by up to 15%.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153670</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Learning of Quantum Systems using Prior Information</title>
<link>https://hdl.handle.net/1721.1/153668</link>
<description>Accelerating Learning of Quantum Systems using Prior Information
Dutt, Arkopal
Towards realizing practically useful quantum devices, the sizes of these devices are being scaled up. An imminent challenge to scalability is ensuring that the resource requirements of learning tasks, which occur as part of device characterization and the execution of quantum algorithms, also scale favorably. In general, resource requirements grow exponentially with the system size, or number of qubits, and at the standard quantum limit (SQL) with respect to the learning error. On the other hand, much is known about the quantum system, which is of known construction and comes with prior experience. A natural question is then: can we accelerate learning of quantum systems using prior information? In this thesis, we describe how prior information can be exploited to reduce resource requirements below those of current baseline methods.&#13;
&#13;
In the first part, we consider the problem of quantum state tomography and identify a class of quantum states that are hard to simulate classically yet learnable with sample complexity growing polynomially in the number of qubits. Our learning algorithm can be used to verify circuits commonly used in quantum advantage experiments. In the second part, we consider the problem of discriminating quantum channels on a physical system under the experimental constraints of limited control and lack of direct readout. After introducing an ancillary measurement system that weakly interacts with the physical system, we show that sequential protocols adapted to this setting outperform multi-shot and parallel protocols, achieving learning rates faster than the SQL. In the third part, we consider the common recurring task of Hamiltonian learning during calibration. We introduce a batch-mode Hamiltonian active learner (HAL) that adaptively proposes informative queries during learning. In our experiments on an IBM quantum device, HAL reduced resources by 95% compared to standard methods and by 33% compared to a sequential active learner. In the fourth part, we consider the problem of estimating the expectation value of a Hamiltonian with respect to a quantum state, which features in many hybrid quantum-classical algorithms for ground-state energy estimation in quantum chemistry. To guide the selection of measurement methods designed for this problem, we propose a benchmark that assesses their performance against a set of common molecular Hamiltonians and common states. Benchmarking on IBM quantum devices reveals that decision diagrams are preferred for near-term quantum hardware. Finally, in the fifth part, we propose a quantum algorithm based on molecular bootstrap embedding for ground-state estimation of large molecular Hamiltonians that could potentially take advantage of access to multiple smaller quantum computers.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153668</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Zeolite Synthesis with Tailored Structure-Directing Agents (SDAs) and High-throughput Platform</title>
<link>https://hdl.handle.net/1721.1/153666</link>
<description>Enhancing Zeolite Synthesis with Tailored Structure-Directing Agents (SDAs) and High-throughput Platform
Kwon, Soonhyoung
The synthesis and applications of zeolites, porous materials with unique structures and properties, have been traditionally constrained by limited methodologies. The conventional techniques rely heavily on laborious trial-and-error processes, involving costly precursors and complex structure-directing agents (SDAs). This often leads to restricted catalyst selections and hinders the full utilization of zeolite’s potential in various industrial applications. Addressing these challenges, this thesis presents an innovative, high-throughput platform for zeolite synthesis, offering a more efficient and versatile approach.&#13;
&#13;
The platform’s foundation is laid in the form of the Organic SDA Database (OSDB), a unique repository constructed using natural language processing and high-throughput binding energy calculations. The OSDB catalogs potential organic SDAs for targeted zeolite frameworks, providing a quantitative metric for comparing their effectiveness. It not only simplifies the SDA selection process but also expands the organic candidate pool, giving researchers better control over zeolites’ physical and chemical properties.&#13;
&#13;
Next, a high-throughput synthesis platform is introduced, designed to test multiple synthesis conditions simultaneously and optimize zeolite production. It vastly reduces the time and labor traditionally associated with zeolite synthesis, offering a practical solution to the laborious trial-and-error approaches of the past.&#13;
&#13;
Following the establishment of the high-throughput platform, the thesis demonstrates its application by replacing conventional, costly, and complex organic SDAs with simpler and more economical ones. This innovative approach ensures the desired physicochemical properties of zeolites without compromising cost-effectiveness.&#13;
&#13;
Additionally, the thesis explores the crystallization of a unique zeolite intergrowth using a single organic SDA. The successful synthesis of a tri-phasic zeolite intergrowth known as MIT-2, composed of CHA/ERI/OFF phases, underscores the versatility of the developed platform.&#13;
&#13;
Finally, the methodology is extended to selectively crystallize specific zeolite phases from a multi-selective organic SDA. This process culminates in the synthesis of phase-pure LTA-family zeolites, exhibiting the highest Si/Al ratios ever reported in hydroxide media. This achievement implies a significant improvement in zeolites’ hydrothermal stability, potentially enhancing their catalytic lifetime and overall performance in targeted reactions.&#13;
&#13;
In summary, this thesis offers an innovative, efficient, and versatile methodology for zeolite synthesis. The proposed high-throughput platform not only revolutionizes the traditional zeolite synthesis process but also opens up new avenues for catalyst selection. This advancement is poised to significantly contribute to the zeolite research field and broader society, by facilitating the design of custom zeolite catalysts for diverse industrial applications.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153666</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-informed Deep Learning and Differentiable Mechanistic Models for Multicomponent Transport Phenomena</title>
<link>https://hdl.handle.net/1721.1/153665</link>
<description>Physics-informed Deep Learning and Differentiable Mechanistic Models for Multicomponent Transport Phenomena
Rehman, Danyal
Many of the critical metals in today’s world occur in abundance across diverse bodies of water. For example, 83% of global lithium reserves are found in salt lakes and geothermal brines. Harvesting these metals, however, is often a separations challenge due to the presence of other solutes with similar solubility products and ionic radii. Membrane-based processes have shown great promise in performing these separations with high fractionation efficacy and energy efficiency, motivating further study. To better optimize these systems and predict performance outside calibration limits, computational methods are essential. Extant methods leverage partial differential equation (PDE)-based continuum models that introduce a multitude of physical assumptions to improve computational tractability. These assumptions typically oversimplify the governing dynamics, posing significant challenges to model generalization at new operating conditions. Moreover, these PDE-based models necessitate tedious characterization procedures that require vast amounts of experimental data for calibration and simultaneously overconstrain regression parameters, further impeding generalization performance. In this thesis, we alleviate many of these concerns through two distinct approaches: (1) the design and analysis of new differentiable mechanistic models for multicomponent membrane transport; (2) the development and validation of physics-informed deep learning surrogate models for transmembrane transport phenomena. In the former, we relax many of the assumptions present in traditional models while introducing more physically representative relationships such as pore size distributions and fictitious image charges. In addition, we combine these efforts with Monte Carlo simulations to rigorously quantify model uncertainty, yielding models that are more accurate than the state-of-the-art. 
Lastly, we propose new hybrid global-local optimization strategies capable of reducing the amount of data required for characterization by over 80%. In the second approach, we develop neural differential equation (NDE)-based models capable of learning the smooth ion rejection profiles typical of membrane-based systems. These methods also leverage the attention mechanism, akin to that used in language models, to elucidate the paired transport relationships that dictate transmembrane ion transport. Further, we adopt transfer learning strategies conditioned on conventional PDE-based models, in conjunction with integrated inductive biases such as charge conservation, substantially narrowing the scope of feasible solutions. Our proposed deep learning frameworks achieve superior generalization performance over state-of-the-art mechanistic models, demonstrating the value of deep learning alternatives to conventional methods.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153665</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Design Tools: Implications on Design Process, Designer Behavior, and Design Outcomes</title>
<link>https://hdl.handle.net/1721.1/153664</link>
<description>Generative Design Tools: Implications on Design Process, Designer Behavior, and Design Outcomes
Saadi, Jana I.
Generative design tools, empowered by recent advancements in computational algorithms, offer the opportunity for human designers and design tools to collaborate in new, more advanced modes throughout various stages of the product design process, facilitating the creation of higher-performing and more complex products. Much of the research focuses on the technical development and application of these tools, while less attention has been paid to how generative design tools are used from the designer’s perspective. The three main contributions of this dissertation are the development of a generative design process, observations of the implications of the use of generative design tools, and an understanding of how designers balance multiple objectives throughout a generative design process. A grounded theory approach based on the experiences of designers was first used to develop a generative design process. Six in-depth interviews were conducted with experienced designers from different disciplines who use commercial generative design tools in their work, detailing the design processes they followed. Qualitative coding and analysis of the interviews was used to generate 161 coded themes describing the design process. Through these themes, a provisional process diagram for generative design and its uses in the early-stage design process is proposed to outline explicit and implicit stages of the design process. Several implications of the use of generative design tools on the design process and designer behavior were developed through additional analysis of the interviews. The early stages of defining tool inputs bring about a constraint-driven process in which designers focus on the abstraction of the design problem. Designers iterate through the inputs to improve both quantitative and qualitative metrics, such as engineering performance and product styling. 
This learning-through-iteration allows designers to gain a thorough understanding of the design problem and solution space. It can also bring about creative applications of generative design tools in early-stage design to provide guidance for traditionally designed products. It was observed that generative design tools primarily allow for quantitative inputs, while qualitative metrics, in particular aesthetics, are considered indirectly by designers. To explore this further, controlled lab experiments were conducted to understand how designers balance quantitative and qualitative objectives while using generative design tools. Thirty-four participants completed two design tasks (with and without generative design tools) with the same qualitative and quantitative objectives. Counterintuitively, designs created in the task without generative design tools had statistically higher quantitative performance than those created with generative design tools. On the other hand, the designs produced with generative design tools displayed greater aesthetic diversity and covered a larger portion of the objective space. Participants also reported being able to focus on the qualitative objectives by delegating the quantitative objective to the generative design tool. This showcases the potential for generative design tools to assist in the design process, leveraging the expertise of both the human designer and the generative design tool to allow for greater consideration of various objectives throughout the design process.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153664</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical properties and design of light-emitting devices based on organic materials and nanoparticles</title>
<link>https://hdl.handle.net/1721.1/46680.2</link>
<description>Physical properties and design of light-emitting devices based on organic materials and nanoparticles
Anikeeva, Polina Olegovna
This thesis presents the detailed experimental and theoretical characterization of light-emitting devices (LEDs) based on organic semiconductors and colloidal quantum dots (QDs). This hybrid material system has several advantages over crystalline semiconductor technology: first, it is compatible with inexpensive fabrication methods such as solution processing and roll-to-roll deposition; second, hybrid devices can be fabricated on flexible plastic substrates and glass, avoiding expensive crystalline wafers; third, this technology is compatible with patterning methods, allowing multicolor light sources to be fabricated on the same substrate by simply changing the emissive colloidal QD layer. While the fabrication methods for QD-LEDs have been extensively investigated, the basic physical processes governing the performance of QD-LEDs have remained unclear. In this thesis we use electronic and optical measurements combined with morphological analysis to understand the origins of QD-LED operation. We investigate charge transport and exciton energy transfer between organic materials and colloidal QDs and use our findings as guidelines for device design and material choices. We fabricate hybrid QD-LEDs with efficiencies exceeding those of previously reported devices by 50-300%. Novel deposition methods allow us to fabricate QD-LEDs of controlled and tunable color by simply changing the emissive QD layer without altering the structure of the organic charge transport layers. For example, we fabricate white light sources with tunable color temperature and a color rendering index close to that of sunlight, inaccessible to crystalline-semiconductor-based lighting or fluorescent sources. Our physical modeling of hybrid QD-LEDs provides insights into carrier transport and exciton generation in hybrid organic-QD devices that are in agreement with our experimental data. 
The general nature of our experimental and theoretical findings makes them applicable to a variety of hybrid organic-QD optoelectronic devices such as LEDs, solar cells, photodetectors and chemical sensors.
</description>
<pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46680.2</guid>
<dc:date>2009-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical theory of the process of consolidation of mud deposits</title>
<link>https://hdl.handle.net/1721.1/153594</link>
<description>Mathematical theory of the process of consolidation of mud deposits
Ortenblad, Alberto.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1926; Vita.
</description>
<pubDate>Fri, 01 Jan 1926 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153594</guid>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A decompositional search algorithm for efficient diagnosis of multiple disorders</title>
<link>https://hdl.handle.net/1721.1/153589</link>
<description>A decompositional search algorithm for efficient diagnosis of multiple disorders
Wu, Thomas Dee.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (p. 229-236).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153589</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical methods of earthquake attenuation</title>
<link>https://hdl.handle.net/1721.1/153584</link>
<description>Statistical methods of earthquake attenuation
Heidari, Massoud.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1987; Bibliography: v. 2, leaves 219-224.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153584</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of sulfur-bearing gases from blast furnace slags</title>
<link>https://hdl.handle.net/1721.1/153583</link>
<description>Evolution of sulfur-bearing gases from blast furnace slags
Agrawal, Balkishan.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153583</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of glucose metabolism using positron imaging and 18F-labeled analogs</title>
<link>https://hdl.handle.net/1721.1/153582</link>
<description>Measurement of glucose metabolism using positron imaging and 18F-labeled analogs
Kearfott, Kimberlee Jane.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1980; Bibliography: leaves 348-372.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153582</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technology Development for the Functional and Structural Analysis of the Brain</title>
<link>https://hdl.handle.net/1721.1/153474</link>
<description>Technology Development for the Functional and Structural Analysis of the Brain
Torres Cabán, Cristina Coralys
Tool development for neuroscience has allowed us to study brain function and structure in great detail. However, as all technologies have limitations, we are still far from fully deciphering brain dynamics and their relationship to anatomical structure. In this thesis, I describe the development of two technologies: genetically encoded potassium indicators for functional studies of the brain, and an approach for constructing a partial connectome of the zebrafish brain. In functional studies, methods to study cell dynamics have relied on the use of electrodes or dyes. Genetically encoded indicators have gained popularity because they are non-invasive and targetable. We focus on potassium because its homeostasis in the brain is proposed to be supported by glia, which modulate neuron activity, and because its dysregulation has been observed in several neurological diseases. I present the design and characterization of a family of genetically encoded potassium indicators, named KRaIONs, built from a potassium binding protein (Kbp) and the mNeonGreen fluorescent protein. These indicators were developed by structure-guided mutagenesis of Kbp’s potassium binding site and by identifying alternative potassium binding proteins from metagenomic databases. In structural studies, efforts to map small organisms’ brains have relied on electron microscopy or cell labeling, which are limited in protein detection, cell-labeling diversity, or imaging resolution. Using the zebrafish model, we propose a strategy for constructing a partial connectome of the larval fish brain by optimizing a version of Brainbow, a method that stochastically labels cells. My work focuses on identifying endogenous markers to detect cell and synapse location and type, which would complement the cell labels. 
To detect all proposed cell labels and endogenous markers in the zebrafish brain, I describe a protocol that uses DNA-conjugated antibodies to enable multiplex imaging at high resolution, with the use of expansion microscopy. Taken together, the work presented in this thesis introduces a method to readout potassium dynamics and my contributions to the zebrafish partial connectome project.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153474</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Differential Effects of Dexamethasone on the Metabolism of Healthy and Diseased Articular Cartilage</title>
<link>https://hdl.handle.net/1721.1/153473</link>
<description>Understanding the Differential Effects of Dexamethasone on the Metabolism of Healthy and Diseased Articular Cartilage
Black, Rebecca M.
The glucocorticoid drug Dexamethasone (Dex) has been proposed as a potential therapeutic to treat or prevent post-traumatic osteoarthritis (PTOA), a disease affecting millions of Americans for which no disease-modifying drug exists. However, current reports often disagree on the effects of Dex on cartilage tissue homeostasis.&#13;
&#13;
The first part of this thesis uses proteomic analyses of a bovine ex vivo cartilage explant monoculture model of PTOA progression to identify inflammation-induced catabolic processes associated with disease progression that were attenuated by the addition of Dex. Some of the matrix fragments and inflammatory cytokines appear to be novel, promising disease biomarker candidates. Dex had an imperfect rescue effect on the observed dysregulation of anabolic and chondroprotective processes.&#13;
&#13;
The second part of this thesis expands the ex vivo cartilage explant monoculture PTOA model to use human donor knee cartilage. Proteins released into the media were clustered by the kinetics of their release over three weeks. This analysis further details the classification of biomarker candidates by their timing of release from cartilage, which was compared to the timing of proteins reported to be in patient synovial fluids after traumatic joint injuries. Experiments here revealed that Dex restored the kinetics of release to many matrix components.&#13;
&#13;
The final part of this work combines healthy human ankle cartilage and bone in a novel osteochondral PTOA model that incorporates disease-relevant tissue crosstalk. Across seven human donors, proteomic analysis of culture media and cartilage tissue revealed that catabolic disease effects such as matrix breakdown and sGAG loss were attenuated by Dex, and that bone metabolism was dysregulated with disease but not with Dex treatment. Regression analysis was used to find disease-specific peptide biomarkers by regressing individual peptide abundances against sGAG loss. Regressing protein abundances against Dex rescue effects on sGAG loss then identified pro-inflammatory humoral proteins and apolipoproteins associated with lower donor-specific Dex efficacy.&#13;
&#13;
Taken together, these results demonstrate the anti-catabolic effects of Dex in human cartilage and osteochondral PTOA model systems, with only minor off-target effects on anabolic processes, and identify methods for better predicting disease and Dex responses.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153473</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated computational and experimental analysis of non-neuronal cell molecular mechanisms contributing to Alzheimer's disease progression</title>
<link>https://hdl.handle.net/1721.1/153472</link>
<description>Integrated computational and experimental analysis of non-neuronal cell molecular mechanisms contributing to Alzheimer's disease progression
Lee, Meelim Jasmine
Alzheimer’s disease (AD) is the most common form of dementia, characterized by symptoms such as memory loss and language disruption. Amyloid beta-containing plaques and hyperphosphorylated tau-containing neurofibrillary tangles (NFTs) are defining characteristics of AD. However, plaques and NFTs are insufficient to robustly predict disease severity, and therapies targeting these disease hallmarks have been largely unsuccessful. Increasingly, there has been a focus on non-neuronal cells, the mechanisms by which they affect neurotoxicity, and their varied roles in disease progression. In this thesis, we apply systems techniques to AD-relevant omics datasets to generate new hypotheses regarding non-neuronal cell contributions to disease progression.&#13;
&#13;
First, we conducted cross-species analysis of human and mouse transcriptomics to identify translatable signatures implicating the innate immune system and specifically the Tyro3/Axl/Mer receptor (TAMR) family. AD mouse models are a major component of preclinical evaluation of therapies. However, amyloid and tau dysregulation are often key features, and fidelity of subsequent disease progression to human AD is limited. In this work, we concurrently analyzed human and AD mouse model transcriptomics to identify signatures present in human data and predictive of mouse outcomes. Towards experimental validation of our computational inferences, we evaluated Protein S effects on microglial response to amyloid. Overall, this work provides a computational framework for rational selection of mouse models and presents preliminary data towards our hypothesis that TAMR ligands preferentially binding Mer could modulate absolute and relative TAMR abundance to bias microglia towards a therapeutically beneficial, phagocytic state. &#13;
&#13;
Second, we applied statistical modeling to analyze matched protein-level and AD patient outcome data to identify oligodendrocyte contributions to patient stratification. We focused on histopathological outcomes to identify a tau/oligodendrocyte basis for patient stratification. Subsequently, we modeled distinct stages of disease progression to identify protein-level clusters covarying with cognition more than plaque burden. Clusters negatively associated with cognition were enriched for oligodendrocyte lineage cell peptides, distinct from those underlying our initial tau/oligodendrocyte patient stratification. Together, these supervised analyses highlight oligodendrocyte lineage cell contributions to patient variability and point towards questions of whether this signature is AD specific, generalizable across neurodegenerative diseases, or present prior to the cellular phase of disease progression.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153472</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Controls on the Cycling and Reactivity of Marine Dissolved Organic Matter</title>
<link>https://hdl.handle.net/1721.1/153470</link>
<description>Chemical Controls on the Cycling and Reactivity of Marine Dissolved Organic Matter
Granzow, Benjamin Nash
Marine dissolved organic matter (DOM) is an actively cycling reservoir of carbon containing thousands of unique compounds. To describe the complex dynamics that govern the biological transformation and decomposition of compounds in this molecular black box, models of DOM reactivity use chemical characteristics, as well as environmental parameters, to describe trends in the turnover time of classes of DOM. In this thesis, I describe two projects that examine hypotheses regarding the turnover of two classes of DOM. In the first project, I test the assumption made by the size–reactivity continuum hypothesis that high molecular weight (&gt; 1 kDa) DOM (HMWDOM) represents a diagenetic intermediate between large labile material and small recalcitrant compounds. Size fractions of HMWDOM were collected using size-exclusion chromatography, and the changes in MW and chemical composition of the fractions were studied using diffusion-ordered spectroscopy. The carbon isotopic values of the size fractions were correlated with the proportion of humic substances in the fractions. Through linear modeling, the apparent radiocarbon ages of the two major components of HMWDOM were determined to be 1-3 yrs and 2-4 kyrs, respectively. Combined with the measurements of the MW distribution, this work demonstrates that HMWDOM is composed of two components with contrasting decomposition pathways in the ocean. HMWDOM therefore cannot be treated as a single DOM pool when incorporated into models of DOM diagenesis.&#13;
&#13;
The second project in this dissertation examines the remineralization of phosphonates, compounds with a direct C-P bond, in the lower euphotic zone using a newly developed fluorescent assay, which measures the activity of carbon-phosphorus lyase. C-P lyase activity (CLA) profiles from the North Pacific Subtropical Gyre (NPSG) showed a sharp activity maximum near the deep-chlorophyll maximum (DCM). High-resolution nutrient measurements suggest that this subsurface CLA maximum is the result of a high nitrate flux at the top of the nitracline. The composition of particulate-P through the euphotic zone was also examined. While phosphonates were not detected in suspended particles, a significant amount of aminoethylphosphonate was measured in sinking material, suggesting eukaryotic material may be an important source of phosphonates to the ocean.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153470</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches for Characterizing ALS Disease Progression</title>
<link>https://hdl.handle.net/1721.1/153469</link>
<description>Machine Learning Approaches for Characterizing ALS Disease Progression
Ramamoorthy, Divya
Amyotrophic Lateral Sclerosis (ALS) is a fatal neurodegenerative disease that is complex in its onset, pattern of spread, and disease progression. This heterogeneity makes it challenging to identify potential therapeutics and to evaluate their effectiveness in slowing progression. At the same time, a better understanding of the heterogeneity of ALS might help identify environmental or genetic modifiers of disease that could be targeted therapeutically. Despite the importance of accurately modeling ALS progression, current computational methods fail to capture the complexity of disease progression. &#13;
&#13;
In this thesis, I describe machine learning approaches to characterizing disease progression in ALS. I first present the development of a Mixture of Gaussian Processes model to learn clusters of ALS disease progression from sparse longitudinal clinical data. I show that our learned trajectories are robust to sparse data, and correlate with alternate clinical measures such as survival. I also demonstrate applications of the method to other neurodegenerative diseases, including Alzheimer’s Disease and Parkinson’s Disease. &#13;
&#13;
Next, I interrogate molecular features that correlate with clinical progression patterns. I longitudinally profile untargeted metabolomics and phosphorylated neurofilament heavy chain for a cohort of 283 individuals across 687 visits. Our results show that a PLSR model can be used to estimate disease severity, including ALSFRS-R and Vital Capacity, from metabolite concentrations. I also show that the distributions of neurofilament levels vary between ALS progression patterns. &#13;
&#13;
Together, these results advance our understanding of disease progression in ALS, with critical implications for clinical trial analysis. These results also advance our biological understanding of the complex molecular changes that are associated with the disease.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153469</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Characterization of RTK Signaling Networks Using Phosphoproteomic Approaches</title>
<link>https://hdl.handle.net/1721.1/153468</link>
<description>Mechanistic Characterization of RTK Signaling Networks Using Phosphoproteomic Approaches
Gerritsen, Jacqueline S.
Cancer is a complex disease, which often stems from aberrant gene expression and protein signaling. To improve the development of novel therapeutics, the underlying mechanisms that malignant cells employ to maintain their function need to be deeply understood. The field of phosphoproteomics has advanced over the past few decades to allow analysis of phosphorylated proteins with increased sensitivity and accuracy. Combining these methods with a systematic mutational strategy that evaluates the network effects of losing the phosphorylatable tyrosines on which a protein of interest depends allows for an unbiased mechanistic characterization of the signaling network. When integrated with various computational tools, these data can provide a predictive model that can inform future targeting strategies in disease.&#13;
&#13;
Here, I present an overview of the value phosphoproteomics adds to cancer research and of the mechanistic insights into protein function we can gain when it is combined with Y-to-F mutational approaches. We successfully applied these approaches to characterize EGFR, a protein often dysregulated in cancer. Furthermore, we experimentally evaluated the use of fluorophores such as GFP in these signaling studies. We also applied these approaches in other settings, evaluating the function of AXL, another RTK that has been associated with acquired resistance to EGFR-targeting tyrosine kinase inhibitors. We applied phosphoproteomic analysis to explore the regulation of EGFR by the phosphatase PTPRJ as a potential indirect mechanism of modulation. Beyond cancer, we applied our phosphoproteomic approach to validate a novel biomarker in Alzheimer’s disease.&#13;
&#13;
Together, these findings highlight how these approaches can result in invaluable mechanistic insight that can propel future drug development efforts.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153468</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing 3D Wireframe DNA Nanoparticles for Programmable Innate Immune Activation</title>
<link>https://hdl.handle.net/1721.1/153467</link>
<description>Designing 3D Wireframe DNA Nanoparticles for Programmable Innate Immune Activation
Du, Rebecca R.
DNA nanotechnology harnesses the predictability and specificity of canonical base pairing interactions to enable the synthesis of complex DNA nanodevices capable of interacting with their environment in a controllable manner. Wireframe DNA origami are a class of highly customizable DNA nanoparticles that can be folded into near-arbitrary 2D or 3D geometries upon which ligands can be attached and organized with nanoscale precision. Due to their versatility, DNA origami have been designed for applications in fields ranging from chemistry to computer science, and there is substantial interest in harnessing DNA nanostructures for therapeutic applications. However, an understanding of how the immune system responds to wireframe DNA nanostructures is currently lacking. Furthermore, it is unclear how controllable design parameters such as scaffold sequence, nanoparticle geometry, or ligand organization influence immune activation.&#13;
&#13;
In my thesis, I begin by describing a method for scalable bioproduction of circular single-stranded DNA that provides a high degree of sequence control and opens up opportunities for scaffold engineering. I demonstrate the design and production of two different scaffolds in both shaker flask and bioreactor setups, then validate the application of this method towards DNA nanotechnology by characterizing the successful folding of wireframe DNA origami from each scaffold. Using nanoparticles folded from one of these engineered scaffolds, I next investigate the molecular mechanisms by which wireframe DNA nanoparticles interact with innate immune receptors. I start by characterizing immunological recognition of unmodified wireframe nanoparticles and find that they induce Type I IFN expression primarily through cGAS-STING, while TLR9 is very minimally activated. I then enhance the ability of these nanoparticles to activate TLR9 by attaching discrete copy numbers of immunostimulatory CpG motifs at precise spatial positions and show that activation levels can be controllably tuned by changing the nanoscale organization of CpGs on the nanoparticle. Finally, I delve further into how design parameters such as nanoparticle geometry, inter-CpG distance, CpG copy number, and spatial presentation can be used to modulate innate immune activation. Overall, this work identifies specific nanoparticle properties that impact immunological recognition and sheds light on design principles that enable the production of immunomodulatory DNA origami.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153467</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tool Development for Studying and Manipulating Peptide-MHC Interactions in a Globally-Representative Manner</title>
<link>https://hdl.handle.net/1721.1/153466</link>
<description>Tool Development for Studying and Manipulating Peptide-MHC Interactions in a Globally-Representative Manner
Huisman, Brooke D.
Major histocompatibility complex (MHC) proteins play a critical role in the adaptive immune system, presenting peptide fragments on the surface of cells for surveillance by T cells. In this way, T cells are able to sense cellular dysfunction associated with disease, such as the presence of pathogen-derived peptides. The ability to assess and predict peptide-MHC binding is, therefore, an important component of understanding and engineering immune responses. Peptide-MHC binding is complex, in part due to the immense diversity on both sides of the interaction: MHCs are encoded by the most polymorphic genes in the body and can bind to a subset of trillions of potential peptides. Further, MHC alleles have not been uniformly studied, which presents challenges when designing therapies for diverse patient populations. In this work, we develop tools to study and manipulate peptide-MHC interactions in a more globally-representative manner. First, we study highly polymorphic class II MHC alleles, utilizing data from high-throughput yeast display screens to train algorithms for antigen prediction. Next, we adapt the yeast display platform to screen user-defined libraries of peptides and apply the approach for optimizing peptides and profiling whole viral pathogens for MHC binding. To further increase the MHC throughput of these approaches, we develop a second-generation platform that opens the pipeline to additional MHC alleles. Finally, we take an orthogonal approach to studying peptide-MHC binding in a representative manner, studying the highly conserved, class Ib MHC HLA-E. We characterize the HLA-E peptide repertoire and train prediction algorithms to identify novel proteome-derived binders. Taken together, these works advance our toolset for studying peptide-MHC interactions across patient populations, with applications in infectious disease, cancer, and autoimmunity.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153466</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A materials-based approach for localized delivery of cancer immunotherapy</title>
<link>https://hdl.handle.net/1721.1/153465</link>
<description>A materials-based approach for localized delivery of cancer immunotherapy
Agarwal, Yash
Cancer immunotherapy provides a promising new alternative to traditional cancer treatment modalities such as chemotherapy and radiation. However, even the most effective therapies show benefit in only a subset of patients when used alone, and so combination therapy may be critical to maximizing anti-tumor responses in the clinic. Inflammatory cytokines such as interleukins 2, 12, and 15 promote potent anti-tumor immunity, but systemically administered cytokines are also highly toxic. &#13;
&#13;
In this thesis, we engineered cytokines with a peptide tag containing multiple phosphoserine (pSer) residues, through in-cell phosphorylation during recombinant expression. Cytokines with pSer tags bind tightly to the common vaccine adjuvant aluminum hydroxide (alum) via ligand exchange. Intratumoral injection of pSer-cytokine-loaded alum led to prolonged retention of the proteins in tumors (&gt;weeks) with minimal side effects. A single dose of alum-tethered interleukin-12 (IL-12) induced significant interferon-γ-mediated T-cell and NK-cell activity in tumors, increased tumor-antigen accumulation in draining lymph nodes, and elicited robust tumor-specific T cell priming. Intratumoral alum/cytokine therapy enhanced responses to checkpoint blockade, promoting cures in distinct poorly immunogenic syngeneic tumors while eliciting control over distant, untreated lesions and metastases. Thus, intratumoral treatment with alum-anchored cytokines presents a safe, tumor-agnostic approach to improve local and systemic anti-cancer immunity.  &#13;
&#13;
This thesis also discusses the potential disadvantages of persistently retained IL-12, along with solutions to circumvent these obstacles while maintaining the high therapeutic-index benefits seen with local delivery of alum-bound cytokines. Overall, our work presents strong proof-of-concept for alum as a powerful delivery vehicle for cancer immunotherapy, and further work could help unlock its true potential for precise spatiotemporal control after local drug delivery.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153465</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approaches to investigating antibiotic efficacy and discovery of treatment strategies against antibiotic tolerance</title>
<link>https://hdl.handle.net/1721.1/153464</link>
<description>Approaches to investigating antibiotic efficacy and discovery of treatment strategies against antibiotic tolerance
Andrews, Ian Wayne
While it is widely understood that antibiotic resistance is challenging the efficacy of our antibiotic therapies, there is a growing appreciation for the clinical importance of other ways that bacteria can survive antibiotic treatment. In particular, antibiotic tolerance describes a set of phenotypes in which genetically susceptible bacteria can survive antibiotic treatment. Thus, there is a need for methods to improve our understanding of currently used antibiotics, as well as for the discovery and development of new antibiotic therapeutic strategies, particularly those focused on addressing antibiotic tolerance.&#13;
&#13;
In this thesis, I present advancements to our understanding of antibiotic efficacy and our methods for discovering treatment strategies against antibiotic tolerant bacterial populations. First, I present an investigation into the metabolic processes that drive beta-lactam lethality, where we showed that drug-induced disruption of anabolic–catabolic homeostasis is an important factor for beta-lactam activity. Next, I describe an approach towards the discovery of antibiotic adjuvants for treating antibiotic tolerance. Finally, I present work towards the development of a method for discovering new antibiotics with activity against antibiotic tolerant bacteria. I describe characterization of compounds that were identified in primary screening efforts, as well as the results of a deep learning approach to this challenge in antibiotic discovery. Together, these projects demonstrate a set of advancements in our understanding of antibiotic activity and towards the challenge of antibiotic discovery.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153464</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Homogenous catalysis: exploratory studies of metal catalyzed oxidation processes,</title>
<link>https://hdl.handle.net/1721.1/153440</link>
<description>Homogenous catalysis: exploratory studies of metal catalyzed oxidation processes,
Klee, Howard William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153440</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lattice dynamics of zinc blende crystals with point impurities.</title>
<link>https://hdl.handle.net/1721.1/153438</link>
<description>Lattice dynamics of zinc blende crystals with point impurities.
Kim, Ch'ang-hyo, 1956-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1972; Vita.; Bibliography: leaves 234-240.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153438</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrical properties and defect diffusion study in alumina and calcia-stabilized zirconia.</title>
<link>https://hdl.handle.net/1721.1/153436</link>
<description>Electrical properties and defect diffusion study in alumina and calcia-stabilized zirconia.
Kitazawa, Kōichi.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153436</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediate logics and equational classes of Brouwerian algebras.</title>
<link>https://hdl.handle.net/1721.1/153435</link>
<description>Intermediate logics and equational classes of Brouwerian algebras.
Kirk, Robert Elefter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Philosophy, 1972; Vita.; Bibliography: leaves 79-83.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153435</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The 3'-end of turnip yellow mosaic virus RNA : application of novel sequencing techniques.</title>
<link>https://hdl.handle.net/1721.1/153433</link>
<description>The 3'-end of turnip yellow mosaic virus RNA : application of novel sequencing techniques.
Silberklang, Melvin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153433</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causes and consequences of pair-bond disruption in socially monogamous species</title>
<link>https://hdl.handle.net/1721.1/153406</link>
<description>Causes and consequences of pair-bond disruption in socially monogamous species
Sun, Ruijiao
Many animals, from crustaceans to humans, form socially monogamous pair-bonds which are maintained during one or more consecutive breeding seasons. However, the ecological consequences of the disruption of monogamous pair-bonds (divorce or widowhood) have rarely been addressed because it is difficult to estimate their rates and demographic impacts. This dissertation investigates the effect of global changes and individual heterogeneity on pair-bond disruption and its consequences for vital rates and life-history outcomes in socially monogamous species. In Chapter 2 and Chapter 4, analyses of long-term demographic datasets reveal different patterns of pair-bond dynamics between the population of wandering albatross (Diomedea exulans) breeding in sub-Antarctica and the snow petrel (Pagodroma nivea) breeding in Antarctica. In wandering albatrosses, divorce is nonadaptive, with no improvement in breeding success, while in snow petrels divorce triggered by breeding failure is adaptive, resulting in higher subsequent breeding success. Widowhood rates are male-biased in wandering albatrosses due to the lower survival rates of females. In both species, remaining single after a pair-bond disruption reduces individual lifetime reproductive success due to missed breeding seasons. Chapter 3 presents a link between individual personality and divorce in wandering albatrosses, demonstrating the important implications of behavior types for the dynamics of social relationships. Personality was measured on a shy-bold continuum linked to individual risk-taking tendencies, with bolder individuals more likely to take risks than shyer individuals. In wandering albatrosses, shyer males exhibit higher divorce rates than bolder males, but no such relationship was found in females. 
Chapter 4 shows that environmental fluctuations can affect the prevalence of pair-bond disruption in snow petrels, with higher rates of pair-bond disruption under unfavorable environmental conditions. Moreover, the findings suggest a potential increase in the prevalence of pair-bond disruption towards the end of the current century. As a whole, this thesis advances our understanding of the demographic effects of pair-bond disruption, which should not be ignored when providing guidelines for the conservation and management of endangered species.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153406</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Four problems in nonlinear Earth system dynamics</title>
<link>https://hdl.handle.net/1721.1/153405</link>
<description>Four problems in nonlinear Earth system dynamics
Arnscheidt, Constantin W.
The Earth system — the interconnected global set of living and non-living systems comprising our planet — has undergone profound changes throughout its history. Some were slow and gradual; others dramatic and seemingly abrupt. The latter include “hyperthermal” global warming events, “Snowball Earth” glaciations, and mass extinctions. Although the details are different, underpinning them all is the nonlinearity of the Earth system: small changes sometimes trigger much larger responses. This thesis addresses four related problems in this subject area.&#13;
&#13;
First, I search for general patterns in extreme climate-carbon cycle disruptions throughout the last 66 million years. I find a persistent tendency towards extreme warming rather than cooling events, and that extremes are strongly non-Gaussian. Through theory and computation, I suggest a simple explanation: the carbon cycle fluctuates more when it is warmer. This could also explain why hyperthermals were often synchronized with changes in Earth’s orbit, and suggests that there will be more extreme hyperthermal-like events in a future warmer world.&#13;
&#13;
Second, I interrogate past global temperature changes for signs of long-term stabilizing feedbacks. One example is the silicate weathering feedback, which is often invoked to explain recovery from large Earth system disruptions, and Earth’s persistent habitability thus far. I show that stabilizing feedbacks indeed controlled temperature on timescales of thousands to hundreds of thousands of years, but not on longer timescales. This supports silicate weathering as an important long-term climate controller, but raises questions about the role of chance in Earth’s continued habitability.&#13;
&#13;
Third, I revisit the problem of how global glaciations (“Snowball Earth” events) are initiated; this happened multiple times in the deep geologic past. Using a simple model of atmospheric radiation and the weathering feedback, I show that the most relevant “tipping point” leading to glaciation may be a critical rate of decrease in effective solar radiation, as could result from volcanic aerosols or the collapse of a methane greenhouse. This also suggests that planets well within the conventional “habitable zone” can still be surprisingly susceptible to glaciation.&#13;
&#13;
Fourth and finally, I consider abrupt catastrophes for life. Earth system disruptions have previously triggered mass extinctions, and empirical work shows that the rate of environmental change may have been the determining factor. Using this observation, the existing theory of “evolutionary rescue”, mathematics surprisingly like the global glaciation problem, and agent-based simulations, I argue that rate-induced collapse in response to environmental change may be a common feature of evolutionary-ecological systems on a vast range of scales.&#13;
&#13;
Beyond pure intellectual interest, I hope that insights from these four contributions will help, however modestly, to enable a flourishing Earth system long into the future.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153405</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed-gas Transport in Microporous Polymer Derivatives for Energy-Efficient Gas Separations</title>
<link>https://hdl.handle.net/1721.1/153404</link>
<description>Mixed-gas Transport in Microporous Polymer Derivatives for Energy-Efficient Gas Separations
Mizrahi Rodriguez, Katherine
Over the past forty years, membrane-based gas separations have emerged as a promising alternative to energy-intensive separation processes such as amine absorption and cryogenic distillation. However, a trade-off between permeability and selectivity as well as issues of decreased performance at high pressures have hindered widespread membrane deployment. In response, there is a growing body of research aimed at developing materials formulations that address these polymer-specific disadvantages. Polymers of intrinsic microporosity (PIMs) were designed as ultrahigh-free-volume materials with inefficient chain packing and high surface areas, resulting in pure-gas performance beyond that of traditional glassy polymers. Hybrid systems such as mixed-matrix membranes (MMMs) also emerged as a means of increasing overall separation performance and reducing plasticization. Despite the surge in available material platforms for membrane-based separations, the performance and fundamental underpinnings of binary and ternary mixed-gas transport in these microporous polymers and MMMs remain underexplored.&#13;
&#13;
This thesis combines synthetic chemistry, materials science, and chemical engineering to develop a platform of functionalized PIM derivatives and to investigate their pure- and mixed-gas transport properties under industrially relevant conditions. Approaches to increase diffusion-based performance in PIMs are developed, including methods to functionalize PIMs with carboxylic acid and amine functionalities, template free volume elements through protection/de-protection chemistries, and fabricate MOF–polymer composites. The effects of polymer functionalization, free volume manipulation, and membrane hybridization on transport are investigated via pure-gas testing and sorption–diffusion analysis, to elucidate structure–property relationships between polymer packing structure and gas diffusion. Polymers with identical backbone structures and varying backbone functionality are subsequently used as a platform to investigate the effects of CO₂ sorption affinity on binary and ternary mixed-gas transport. Among the PIMs considered, amine-functionalized PIM-1 shows a notable increase in mixed-gas selectivity compared to the pure-gas case. The generalizability of this approach is investigated through amine functionalization of a different family of polymers, poly(aryl ether)s (PAEs). Results indicate that amine functionalization can serve as a promising route to increase mixed-gas transport performance while also reducing CO₂-based plasticization. The influence of CO₂ sorption affinity on transport is finally investigated through ternary mixed-gas tests in toxic gas mixtures containing H₂S. Taken together, this thesis derives connections between macromolecular chemistry and complex gas transport performance in PIMs. By developing these structure–property–performance relationships, this work provides context for the potential of PIMs in industrial applications and rational design handles for future development of high-performing membrane solutions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153404</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on the biosynthesis of thiamine</title>
<link>https://hdl.handle.net/1721.1/153375</link>
<description>Studies on the biosynthesis of thiamine
Camiener, Gerald Walter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1959; Vita.; Includes bibliographical references (leaves 147-152).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153375</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of graft polymerization by high energy irradiation</title>
<link>https://hdl.handle.net/1721.1/153374</link>
<description>The mechanism of graft polymerization by high energy irradiation
Hoffman, Allan S.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1957; Vita.; Bibliography: leaves 220-225.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153374</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Managers in politics.</title>
<link>https://hdl.handle.net/1721.1/153370</link>
<description>Managers in politics.
Rosenbloom, David Lee.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1970; Two unnumbered pages inserted. Vita.; Bibliography: p. 283-293.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153370</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision analysis incorporating preferences of groups.</title>
<link>https://hdl.handle.net/1721.1/153368</link>
<description>Decision analysis incorporating preferences of groups.
Kirkwood, Craig W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Questionnaire copy follows leaf 155. Vita.; Bibliography: leaves 158-160.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153368</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A theoretical investigation of optical absorption intensities in transition metal complexes.</title>
<link>https://hdl.handle.net/1721.1/153357</link>
<description>A theoretical investigation of optical absorption intensities in transition metal complexes.
Noodleman, Louis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1975; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153357</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The theory of linear lattices</title>
<link>https://hdl.handle.net/1721.1/153355</link>
<description>The theory of linear lattices
Haiman, Mark D.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1984; Vita.; Bibliography: leaves 113-121.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153355</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel High-throughput Technologies for Applications in Microbiology</title>
<link>https://hdl.handle.net/1721.1/153348</link>
<description>Novel High-throughput Technologies for Applications in Microbiology
Chen, Sijie
Electroporation, a widely used gene delivery technique for bacterial transformation, involves applying an electric field to increase cell membrane permeability. However, the traditional cuvette-based protocol is laborious, time-consuming, low-throughput, and operator-dependent, hindering the progress of bacterial genetic engineering. To address these challenges, this study focuses on developing high-throughput solutions to streamline the process, improve efficiency, and ensure consistent results.&#13;
&#13;
The primary contribution of this research is the development of—to the best of our knowledge—the first automated system for high-throughput bacterial genetic engineering. At the core of this system is a microfluidic device designed to integrate seamlessly with liquid handling robots. By performing electroporation column-by-column, this automated system achieves a 96-well electroporation in just 5 minutes, 30 times faster than performing 96 independent genetic transformations using cuvettes. Through experiments with Escherichia coli NEB10&#120573;, it demonstrates high efficiency and consistency, enabling rapid and reliable genetic manipulation.&#13;
&#13;
The introduction of high throughput electroporation results in other bottlenecks in the process, particularly in the assessment of transformation efficiency through colony counting. Existing automated colony counting solutions often struggle with merged colonies, a common issue in high-throughput plating. To address the need for a high-throughput evaluation solution to assess the efficiency of bacterial electroporation, we introduce MCount. MCount outperforms existing solutions by optimizing contour and regional pairing, resulting in low error rates and minimal reliance on hyperparameters. Moreover, we propose statistical methods that require few labeled or even unlabeled datapoints, ensuring consistently low error rates and facilitating deployment in scenarios with limited labeled data.&#13;
&#13;
Finally, as conventional electroporation methods are limited to relatively small sample volumes (&lt; 200 µL), we present M-tube, a disposable and user-friendly microfluidic device for large volume (&gt; 10 mL) bacterial gene delivery. With minimal fabrication requirements and straightforward operation, M-tube surpasses cuvettes in transformation efficiency and facilitates the creation of transposon mutant libraries.&#13;
&#13;
Collectively, the high-throughput solutions for microbiology overcome the limitations of cuvette-based electroporation, significantly improving efficiency and consistency in bacterial genetic engineering. These advancements pave the way for accelerated research and development in the field, fostering breakthroughs in genetic manipulation and biotechnology applications.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153348</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum sensing and simulation with solid-state defects</title>
<link>https://hdl.handle.net/1721.1/153346</link>
<description>Quantum sensing and simulation with solid-state defects
Wang, Guoqing
Solid-state spin defects have garnered significant interest in recent years due to their potential for various quantum applications, ranging from sensing and simulation to communications. Despite the fast progress of the field in the past decades, numerous challenges remain before these applications can be fully realized in broader scenarios. Among the issues that need to be addressed are deleterious decoherence and low-fidelity control, and the lack of critical capabilities such as vector field sensing at nanoscale and arbitrary frequency field sensing. Additionally, a deeper understanding of the rich dynamics associated with the charge degrees of freedom of various defect states in the material host is necessary.&#13;
&#13;
This thesis aims to tackle these challenges and contribute to the development of better quantum sensors and simulators. The study starts with quantum control based on modulated driving, which allows for the engineering of more flexible driving strength and the control of evolution modes. By utilizing modulated control techniques, we experimentally observe high-order effects beyond the traditional rotating wave approximation and protect Rabi coherence by more than one order of magnitude in an ensemble of NV centers in diamond. Moreover, we exploit the modulated driving to simulate dynamical symmetries and characterize their selection rules using projective measurements of the coherence evolution of the system. The same control technique is shown to further design parallel quantum gates for atom array quantum platforms. &#13;
&#13;
Inspired by the flexible quantum control in different frames, we develop protocols to improve quantum sensing performances. In sensing coherent signals, we develop a protocol to sense vector ac field at nanoscale using a single spin. To broaden the frequency range of quantum sensors, we develop a quantum mixer to convert off-resonant signal fields to resonant ones, achieving the sensing of arbitrary frequency vector fields. To overcome the challenge of degraded coherence in large (nuclear) spin ensembles, we develop a novel unbalanced echo sequence to refocus the dominant noise in the intrinsic hyperfine interaction and quadrupole terms, which extends the dephasing time of a large spin ensemble to the single spin limit. In sensing stochastic signals, we develop a digital noise spectroscopy approach based on Walsh dynamical decoupling sequences to eliminate systematic error due to approximating frequency filters with delta functions. All these protocols are experimentally demonstrated with NV center platforms.&#13;
&#13;
Given that the charge states of defects play an important role in the performance of aforementioned quantum applications due to their influence on the spin state and coherence, we use NV centers to probe the charge dynamics by characterizing the bath spectrum under different optical illumination conditions. With a fast wide-field imaging setup integrated with a SPAD array, we observe the process of charge transport through the redistribution of spin concentration of P1 centers and other spin defects in the bath. The potential tunability of spin concentration in time and spatial domain paves the way for their applications in simulating quantum many-body dynamics such as quantum hydrodynamics.&#13;
&#13;
In summary, this thesis studied the improvement of quantum control and the development of better sensors and simulators with solid-state defects. Although experiments in this thesis are performed with spin defects in diamonds, many of the studied topics are general to most quantum platforms, such as modulated control, dynamical decoupling, coherence protection, noise spectroscopy, etc. Thus, we foresee broader applications of the thesis research outcome both in realizing quantum sensors and networks with solid-state spin defects, and in extending the developed control techniques for characterizing and improving the performance of other quantum platforms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153346</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Organization and Labor Economics</title>
<link>https://hdl.handle.net/1721.1/153343</link>
<description>Essays in Industrial Organization and Labor Economics
Yellen, Maggie
This thesis consists of three chapters, two in Industrial Organization and one in Labor Economics. The first and second chapters study industrial technologies: the first explores how machine learning changes decision-making of heavy duty truck technicians, while the second studies technological switching in the shale industry. The third chapter studies wage garnishment in the United States.&#13;
&#13;
The first chapter (joint with Adam Harris) uses observational data to explore how a predictive algorithm changes human decision-making. Using a novel, rich decision-level data set from the maintenance of heavy-duty trucks, we document how skilled technicians' decision-making is changed by the introduction of an algorithm designed to predict the risk of truck breakdowns. We develop and estimate a model of technician decision-making that accounts for variation in monetary and non-monetary costs. Using an embedded neural network, we flexibly estimate technicians' beliefs about the probability of truck breakdowns both before and after the introduction of the algorithm. Comparing these estimated beliefs with an objective breakdown probability, we find that the algorithm significantly improves technicians' ability to predict breakdowns: the algorithm narrows the gap between actual and optimal costs by 79%. All of this gain comes from decreased repair costs, suggesting that the algorithm primarily helps technicians avoid low value repairs.&#13;
&#13;
The second chapter studies technology in a different setting: the U.S. shale industry. In late 2014, global oil prices dropped precipitously, driving U.S. shale producers out of the market. As the number of new wells completed dwindled, productivity began to rise sharply, beginning a steep climb that continued through 2019. This chapter draws on detailed well-level data from the Bakken shale play in North Dakota to tease apart several classic explanations for these trends, including Schumpeterian creative destruction and technological improvement. I document firm-level jumps from gel-based completions to slickwater after the price shock, with earlier jumps for mid-cap and private firms. However, I find that improved geological targeting (or "high-grading") and slickwater adoption together leave over 70% of the productivity increase unexplained.&#13;
&#13;
The third chapter (joint with Anthony DeFusco and Brandon Enriquez) uses administrative data to investigate wage garnishment in the United States. Wage garnishment allows creditors to deduct money directly from workers' paychecks to repay defaulted debts. We document new facts about wage garnishment between 2014 and 2019 using data from a large payroll processor who distributes paychecks to approximately 20% of U.S. private-sector workers. As of 2019, over one in every 100 workers was being garnished for delinquent debt. The average garnished worker experiences garnishment for five months, during which approximately 11% of gross earnings is remitted to their creditor(s). The beginning of a new garnishment is associated with an increase in job turnover rates but no intensive margin change in hours worked.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153343</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimality of human acoustic perception of natural sounds, and violin acoustics elucidated with a non-corporeal musical instrument made by fully air-coupled finite-element-modeling of the Titian Stradivarius</title>
<link>https://hdl.handle.net/1721.1/153340</link>
<description>Optimality of human acoustic perception of natural sounds, and violin acoustics elucidated with a non-corporeal musical instrument made by fully air-coupled finite-element-modeling of the Titian Stradivarius
Krishnadas, Arun
By Weber’s Law, just-noticeable-differences of a stimulus grow in proportion to the stimulus. From hundreds of measurements of naturally scintillating environmental sound signals, Weber's law is found to be a consequence of attaining the minimum mean-square error, known as the Cramer-Rao lower bound, and maximum Fisher information in intensity estimation. Remarkably, just-noticeable-differences in sound intensity from decades of psychophysical experiments with artificial sound are also found to approximately attain the respective Cramer-Rao lower bounds on intensity resolution obtained from the natural sound intensity fluctuations we measured, indicating that they are approximately optimally adapted to environmental scintillation. This adaptation is also found to maximize general information reception and optimize pattern recognition via homomorphic variance-stabilizing transformation of natural signal-dependent intensity scintillation received to signal-independent noise perceived that can be canceled without loss of signal information. It is consistent with diverse natural processes in random wave generation converging to similar intensity scintillation statistics by the central limit theorem. Sensing resolution adapted to follow Weber's Law thus provides quantifiable advantages to organisms, systems or machines that rely on auditory sensory perception with natural sound to survive or function effectively.&#13;
&#13;
In the second part of this thesis, we look at music perception and build a non-corporeal instrument: a fully air-coupled finite-element-modeled Titian Stradivarius violin employing CT-scan reconstruction. The non-corporeal instrument is trained to play the first two measures of the Sonata No. 1 in G-minor, BWV 1001: II. Fuga. Allegro by Bach, using a constrained optimization approach. The musical piece played is found to be indistinguishable from Itzhak Perlman’s concert performance used to train the instrument. The non-corporeal violin allows us to study two new ways in which design changes may be perceived. Firstly, we implement design modifications important to violin acoustics and show corresponding plucked and bowed string sounds. Secondly, we look at how a violinist would compensate for these design modifications by altering their input actions to obtain the desired sound. The former is advantageous to violin makers who can experiment with changes without making them on their physical instrument. The latter provides a playability metric to classify violins similar to how a player might perceive them.&#13;
&#13;
In addition to perception of violin sounds, we examine acoustic features of a violin such as its efficiency, the ratio of the power entering the instrument to the power radiated as sound. This is found to vary from roughly 15% in the male Baritone voice range to 25% in the female Mezzo-Soprano and Soprano voice range before falling off to 15% beyond. The consequences of these efficiency variations on player compensations for uniform sound across musical notes are shown. Additionally, we show the parts of a violin contributing to radiated sound: f-holes dominate radiation at the low frequencies near the Helmholtz resonance, transitioning to a predominantly body-induced radiation at higher frequencies through the top plate. The non-corporeal violin also allows us to “see” the sound through a complete spatial characterization of acoustic pressures from the surface of the violin out to the far-field in three-dimensional space. The accuracy of our results is demonstrated by validating against experimentally measured structural velocities in the entire spectral range of importance to violin radiation. Experimental matching so obtained is primarily a consequence of the air-coupling together with the implementation of instrument quality wood as an orthotropic viscoelastic material to capture elasticity and damping properties. The high impedance contrast approaches developed here can be applied to related acoustic problems involving geophysical, biological and naval structures in air, water and other fluid media.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153340</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-order Discontinuous Galerkin Methods and Deep Reinforcement Learning with Application to Multiscale Ocean Modeling</title>
<link>https://hdl.handle.net/1721.1/153339</link>
<description>High-order Discontinuous Galerkin Methods and Deep Reinforcement Learning with Application to Multiscale Ocean Modeling
Foucart, Corbin
With the expanding availability of computational power, numerical modeling plays an increasingly pivotal role in the field of oceanography, enabling scientists to explore and understand ocean processes which are otherwise inaccessible or challenging to observe directly. It provides a crucial tool for investigating a range of phenomena from large-scale circulation patterns to small-scale turbulence, shaping our understanding of marine ecosystems, global climate, and weather patterns. However, this same wide range of spatiotemporal scales presents a distinct computational challenge in capturing physical interactions extending from the diffusive scale (millimeters, seconds) to planetary length scales spanning thousands of kilometers and time scales spanning millennia. Therefore, numerical and parameterization improvements have and will continue to define the state of the art in ocean modeling, in tandem with the integration of observational data and adaptive methods. As scientists strive to better understand multiscale ocean processes, the thirst for comprehensive simulations has proceeded apace with concomitant increases in computing power, and submesoscale resolutions where nonhydrostatic effects are important are progressively becoming approachable in ocean modeling. However, few realistic ocean circulation models presently have nonhydrostatic capability, and those that do overwhelmingly use low-order finite-difference and finite-volume methods, which are plagued by dispersive errors, and are arduous to utilize in general, especially on unstructured domains and in conjunction with adaptive numerical capabilities. High-order discontinuous Galerkin (DG) finite element methods (FEMs) allow for arbitrarily high-order solutions on unstructured meshes and often out-compete low-order models with respect to accuracy per computational cost, providing significant reduction of dispersion and dissipation errors over long-time integration horizons. 
These properties make DG-FEMs ideal for the next generation of ocean models, and, in this thesis, we develop a novel DG-FEM ocean model with the above longer-term vision and adaptive multiscale capabilities in mind.&#13;
&#13;
Using a novel hybridizable discontinuous Galerkin (HDG) spatial discretization for both the hydrostatic and nonhydrostatic ocean equations with a free surface, we develop an accurate and efficient high-order finite element ocean model. We emphasize the stability and robustness properties of our schemes within a projection method discretization. We provide detailed benchmarking and performance comparisons for the parallelized implementation, tailored to the specifics of HDG finite element methods. We demonstrate that the model achieves optimal convergence, and is capable of accurately simulating nonhydrostatic behavior. We evaluate our simulations in diverse dynamical regimes including linear gravity waves, internal solitary waves, and the formation of Rayleigh-Taylor instabilities in the mixed layer. Motivated by investigating local nonhydrostatic submesoscale dynamics using realistic ocean simulation data, we develop schemes to initialize and nest the new DG-FEM model within a comprehensive hydrostatic ocean modeling system. Nested within such data-assimilative hydrostatic simulations in the Alboran Sea, we provide a demonstration of our new model’s ability to capture both hydrostatic and nonhydrostatic dynamics that arise in the presence of wind-forced instabilities in the upper ocean layers. We show that such a model can both validate and work in tandem with larger hydrostatic modeling systems, enabling multi-dynamics simulations and enhancing the predictive fidelity of ocean forecasts.&#13;
&#13;
Next, as DG-FEM methods are well-suited to adaptive refinement, we develop a method to learn new adaptive mesh refinement strategies directly from numerical simulation by formulating the adaptive mesh refinement (AMR) process as a reinforcement learning problem. Finite element discretizations of problems in computational physics can usefully rely on adaptive mesh refinement to preferentially resolve regions containing important features during simulation. However, most spatial refinement strategies are heuristic and rely on domain-specific knowledge or trial-and-error. We treat the process of adaptive mesh refinement as a local, sequential decision-making problem under incomplete information, formulating AMR as a partially observable Markov decision process. Using a deep reinforcement learning (DRL) approach, we train policy networks for AMR strategy directly from numerical simulation. The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation (PDE) at hand, nor does it require a pre-computed training dataset. The local nature of our deep reinforcement learning approach allows the policy network to be trained inexpensively on much smaller problems than those on which they are deployed, and the DRL-AMR learning process we devise is not specific to any particular PDE, problem dimension, or numerical discretization. The RL policy networks, trained on simple examples, can generalize to more complex problems and can flexibly incorporate diverse problem physics. To that end, we apply the method to a range of PDEs relevant to fluid and ocean processes, using a variety of high-order discontinuous Galerkin and hybridizable discontinuous Galerkin finite element discretizations. 
We show that the resultant learned policies are competitive with common AMR heuristics and strike a favorable balance between accuracy and cost such that they often lead to a higher accuracy per problem degree of freedom, and are effective across a wide class of PDEs and problems.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153339</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Generation of Entanglement in Ytterbium Clock Atoms and a Novel Interpretation of the Madelung Fluid Theory of Quantum Mechanics</title>
<link>https://hdl.handle.net/1721.1/153337</link>
<description>On the Generation of Entanglement in Ytterbium Clock Atoms and a Novel Interpretation of the Madelung Fluid Theory of Quantum Mechanics
Mendez, Enrique
In this thesis, I present the results of three experiments utilizing entanglement to improve metrology with the optical lattice clock atom Ytterbium-171. Furthermore, I present a novel interpretation of the Madelung Theory of Quantum Mechanics, namely that of a particle-fluid decomposition, and its consequences.&#13;
&#13;
In more detail, in one experiment, we demonstrate near-unitary squeezing of the collective spin state of ~10³ atoms in an optical lattice clock, with a measured metrological gain of 6.5(4) dB limited by readout detection. Without the readout limit, the generated states themselves offer an inferred metrological gain of 12.9(6) dB, limited by the curvature of the Bloch sphere. An interferometer is used to leverage this squeezing into an entanglement-enhanced quantum measurement that improves the precision by 3.7(2) over the standard quantum limit (SQL).&#13;
&#13;
In the third experiment, we utilized the ability to reverse the sign of our squeezing Hamiltonian to circumvent our readout limit as well as the SQL. Essentially, by squeezing, acquiring a phase shift, and unsqueezing, we obtain Signal Amplification by Time-reversed INteraction (SATIN). These results represented the greatest phase sensitivity improvement beyond the SQL, 11.8(5) dB, demonstrated to date in any full Ramsey interferometer. Furthermore, Heisenberg scaling of the sensitivity, ∝ 1/N, was achieved.&#13;
&#13;
The second part of the thesis focuses on the Newtonian-Madelung-Takabayasi Theory of Quantum Mechanics, a classical and gauge-independent picture of quantum mechanics. This formulation emphasizes the importance of a fifth fundamental force, the gradient of the quantum potential, which allows for the stability of matter and helps demonstrate a clear transition from classical to quantum regimes. This fifth force summarizes effects that, in the canonical theory, look like zero-point motion.&#13;
&#13;
In the spin-0 case, most essential consequences are elucidated: notably, energy conservation in configuration space is derived via the Lagrangian and Noether's theorem, and a classical-quantum boundary is defined. The model is used to predict a relativistic clock shift that agrees with canonical theory to first order in v²/c².&#13;
&#13;
The Madelung Fluid theory is applied to a toy quantum field theory (QFT) for the first time. This generates a classical set of equations that contains QFT dynamics, though it has not yet been proven to be entirely equivalent to the starting QFT. Just as the NMT picture revealed a hidden fifth fundamental force, the NMT version of QFT reveals a new fundamental source of radiation. This suggests an axiom we could utilize as a starting point to quantize any classical field theory, possibly including GR.&#13;
&#13;
Lastly, the connection between classical and quantum mechanics is investigated through the Koopman theorem and its importance for understanding fundamental questions about quantum mechanics. The Koopman theorem points to a hypothesis, namely that its converse also holds, which might explain why this Newtonian-Madelung-Takabayasi (NMT) picture exists. If this converse were true, it would tell us that the NMT picture (or some other classical picture) could be fundamental, i.e., mathematically guaranteed, rather than happenstance. However, the details of this relationship are not yet fully understood.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153337</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Organization</title>
<link>https://hdl.handle.net/1721.1/153335</link>
<description>Essays in Industrial Organization
Harris, Adam
This thesis comprises three chapters on the US trucking industry.  The first and second chapters study long-term relationships between shippers and carriers.  The third studies how artificial intelligence changes human decision-making in the context of the maintenance of heavy-duty trucks.&#13;
&#13;
The first chapter (joint with Thi Mai Anh Nguyen) provides evidence on the scope and incentive mechanisms of long-term relationships in the US truckload freight industry. In this setting, shippers and carriers engage in repeated interactions under fixed-rate contracts that leave scope for inefficient opportunism. We show that shippers use the threat of relationship termination to deter carriers from short-term opportunism. Carriers respond to the resultant dynamic incentives, behaving more cooperatively when their potential future rents are higher. While shippers and carriers often interact on multiple lanes, we find evidence that shippers' incentive schemes do not take advantage of this multi-lane scope for certain classes of carriers.&#13;
&#13;
The second chapter (also joint with Thi Mai Anh Nguyen) builds on the first, exploring a market-level tradeoff that informal long-term relationships present. On the one hand, relationships capitalize on match-specific efficiency gains and mitigate incentive problems. On the other hand, the prevalence of long-term relationships can also lead to thinner, less efficient spot markets. We develop an empirical framework to quantify the market-level tradeoff between long-term relationships and the spot market. We apply this framework to an economically important setting—the US truckload freight industry—exploiting detailed transaction-level data for estimation. At the relationship level, we find that long-term relationships have large intrinsic benefits over spot transactions. At the market level, we find a strong link between the thickness and the efficiency of the spot market. Overall, the current institution performs fairly well against our first-best benchmarks, achieving 44% of the relationship-level first-best surplus and even more of the market-level first-best surplus. The findings motivate two counterfactuals: (i) a centralized spot market for optimal spot market efficiency and (ii) index pricing for optimal gains from individual long-term relationships. The former results in substantial welfare loss, and the latter leads to welfare gains during periods of high demand.&#13;
&#13;
The third chapter (joint with Maggie Yellen) uses observational data to study how a predictive algorithm changes human decision-making.  Using a rich decision-level data set from the maintenance of heavy-duty trucks, we document how skilled technicians' decision-making is changed by the introduction of an algorithm designed to predict the risk of truck breakdowns.  We develop and estimate a model of technician decision-making that accounts for variation in monetary and non-monetary costs. Using an embedded neural network, we flexibly estimate technicians' beliefs about the probability of truck breakdowns both before and after the introduction of the algorithm.  Comparing these estimated beliefs with an objective breakdown probability, we find that the algorithm significantly improves technicians' ability to predict breakdowns: the algorithm narrows the gap between actual and optimal costs by 79%. All of this gain comes from decreased repair costs, suggesting that the algorithm primarily helps technicians avoid low value repairs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153335</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Approaches to Discovery and Optimization in Physics: Symbolic Regression, Bayesian Optimization, and Topological Photonics</title>
<link>https://hdl.handle.net/1721.1/153333</link>
<description>Novel Approaches to Discovery and Optimization in Physics: Symbolic Regression, Bayesian Optimization, and Topological Photonics
Kim, Samuel
Computational tools including high-fidelity simulations, optimization algorithms, and more recently, machine learning, have become increasingly important in furthering scientific and engineering innovations as available computational power and computing methodologies have both advanced significantly. However, as our understanding of the world and the systems we study correspondingly increase in complexity, there is still a need for designing novel computational methods. In this thesis, I describe three major innovations in which I use optimization and machine learning to automate scientific discovery, optimization, and inverse design.&#13;
&#13;
First, I propose a neural network-based method to perform symbolic regression and automatically learn the underlying equations from high-dimensional and complex datasets. The neural network-based model can integrate with other deep learning architectures, thus taking advantage of the powerful capabilities of deep learning for the task of scientific discovery. Second, I demonstrate the usage of Bayesian neural networks as a surrogate model in Bayesian optimization to enable global optimization of high-dimensional, non-convex problems including topology optimization of photonic crystals and chemical property optimization of molecules. On these complex tasks, my method is able to outperform more commonly used surrogate models and improve optimization in terms of both sampling efficiency and computational cost of training. Finally, I develop a framework for global optimization and automated discovery of 3D topological photonic crystals using a combination of a low-dimensional level-set parameterization and standard gradient-free optimization algorithms. My approach is able to discover novel 3D photonic crystals in several topology settings requiring no prior knowledge of topological candidates, thus indicating a path towards the automated discovery of novel topological photonic crystal designs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153333</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Driving Emerging Technologies From Concept to Reality: A Case Study of Carbon Nanotubes</title>
<link>https://hdl.handle.net/1721.1/153326</link>
<description>Driving Emerging Technologies From Concept to Reality: A Case Study of Carbon Nanotubes
Ho, Rebecca
The evolution of electronics has transformed nearly every aspect of society and has been fueled by decades of relentless device scaling. However, electronics is facing a paradigm shift as serious obstacles challenge future progress. Continued scaling is growing increasingly difficult and already yields diminishing energy efficiency benefits. At the same time, other obstacles such as data bandwidth bottlenecks, interconnect density limitations, and reliability are limiting computing performance. Therefore, traditional routes to progress are insufficient, and new approaches must be investigated if we are to continue the technological advancement society has come to expect.&#13;
&#13;
One major thrust toward overcoming these obstacles is the search for alternative, beyond-silicon technologies. Yet despite the promise of these emerging nanotechnologies, their nascency has made their integration into practical and useful electronic systems challenging. In my thesis, I aim to tackle this challenge and present a roadmap for how such new and immature nanotechnologies can be leveraged to not only set the foundation for futuristic next-generation hardware, but also realize practical systems that can have an impact today. As a case study, I use carbon nanotubes (CNTs) to demonstrate a realistic roadmap for commercially realizing these next-generation technologies. First, I show, for the first time, that every type of today’s conventional circuitry (digital, analog, and mixed-signal circuits) can be fabricated with CNT field-effect transistors (CNFETs). This provides a pathway for adopting these futuristic technologies today. Second, to show how CNFETs can play a role in the next generation of computing systems, I leverage the unique low-temperature fabrication of CNFETs alongside emerging memory technologies to achieve the finest 3D integration of emerging technologies to date, which I further use to enable a new circuit design technique. Third, to show how CNFETs can enable futuristic electronic systems that can impact application spaces beyond conventional computing, I leverage VLSI-compatible foundry fabrication of CNFETs to realize BioSensor chips capable of detecting and identifying infectious pathogens in liquid. These experimental demonstrations of CNFETs in today’s (conventional circuitry), tomorrow’s (dense fine-grained 3D systems), and futuristic (healthcare diagnostics) applications explicitly demonstrate a practical roadmap for how emerging nanotechnologies can be developed for near-term adoption while providing longer-term motivation for enabling next-generation electronic systems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153326</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of 3D Complex Nanostructures Using Block Copolymer Self-Assembly</title>
<link>https://hdl.handle.net/1721.1/153325</link>
<description>Design of 3D Complex Nanostructures Using Block Copolymer Self-Assembly
Liu, Runze
The continuous pursuit of high-quality complex nanostructures is a requirement of modern nanotechnology. However, conventional 'top-down' lithography is limited by rapidly increasing cost and difficulty when operating on length scales below 20 nm. As an alternative, block copolymer (BCP) self-assembly offers a 'bottom-up' approach for the fabrication of various nanopatterns. By implementing external guidance and modifying molecular architectures, long-range ordered BCP microdomains can be oriented and registered in specific locations during the annealing process. Over the past few decades, BCP self-assembly has become well-established for generating arbitrary complex 2D patterns with feature sizes of a few nanometers.&#13;
&#13;
Despite the remarkable success in creating 2D patterns, efforts to achieve highly ordered 3D nanostructures through BCP self-assembly have continued. In this thesis, a series of novel routes to fabricate oriented nanomeshes (overlaid parallel lines) is explored and evaluated using both experimental and simulation methods. The discovered mechanisms, proposed modeling setups, and polymer architecture design strategies shed light on a path towards fabricating various 3D geometries and even multicomponent nanopatterns through BCP self-assembly.&#13;
&#13;
The first stage involves introducing external guidance to traditional linear diblock copolymers (di-BCPs). A newly designed multi-mechanism directed self-assembly (MMDSA) approach is proposed to construct metallic nanomeshes without requiring pattern transfer or high-resolution lithographic templating. Three mechanisms, including trench wall guidance, edge nucleation, and underlayer guidance, are systematically evaluated through both experiments and dissipative particle dynamics (DPD) simulations. The DPD model, a particle-based method, is reparametrized to accurately reproduce all the experimental findings in the MMDSA experiments and provide an accurate description of the self-assembled phase structures in both bulk and thin film states.&#13;
&#13;
The next motivation stems from the desire to expand the library of self-assembled structures. To this end, an unconventional "A-branch-B" diblock Janus bottlebrush copolymer (di-JBBCP) is synthesized and studied. Di-JBBCP consists of a rigid backbone grafted with alternating A and B sidechains, which allows for effective and efficient microphase separation of the two blocks parallel to the backbone. The phase diagram of di-JBBCP is fully investigated, revealing the bulk-stable perforated lamella, unconventional Frank-Kasper A15 spheres, and hexagonally close-packed spheres, along with cylinder, gyroid, and lamellar morphologies attainable through a simple annealing step.&#13;
&#13;
Finally, taking a step beyond di-JBBCP, we introduce a third block to form a triblock JBBCP with two Janus domains: one perpendicular and one parallel to the backbone. The perpendicular Janus domain enforces a superstructure that intrinsically confines the intralayer self-assembly of the parallel Janus domain. This results in a phase-in-phase structure that gives rise to a mesh-like monoclinic network as well as a tetragonal counterpart with excellent long-range order. Furthermore, under the guidance of a topographical template with appropriate surface conditions, the phase-in-phase structure can be aligned with unique preferred orientations. These findings demonstrate a valuable pathway to bottom-up fabrication of original 3D nanostructures via soft matter, which extends the capabilities of BCP self-assembly.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153325</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection of antiferromagnetic domain reversal in epitaxial insulating oxides</title>
<link>https://hdl.handle.net/1721.1/153324</link>
<description>Detection of antiferromagnetic domain reversal in epitaxial insulating oxides
Churikova, Alexandra
Antiferromagnetic materials have historically played a passive role as biasing layers in spintronics applications due to the challenges in manipulating and detecting the magnetic order. However, substituting antiferromagnets for ferromagnets as active switching elements in data storage devices offers the potential for ultrafast dynamics, improved non-volatility, higher bit packing density, and qualitatively new physical phenomena. These advantages have spurred intensive research into current-induced switching of the magnetic order in metallic and insulating antiferromagnets. Insulating antiferromagnets are promising due to their magnon-mediated long-distance spin transport, THz response, and rich platforms for fundamental study.&#13;
&#13;
Here, we approach the challenge of manipulating antiferromagnetic order in insulating oxides. First, we demonstrate the limitation of purely electrical detection of switching of the antiferromagnetic order with spin-orbit torques from electric current pulses. In epitaxial nickel oxide thin-films interfaced with a heavy metal as a source of spin-orbit torque, we show that electrical signatures that have previously been attributed to magnetic switching arise from localized thermo-resistive heating and electromigration in the heavy metal. Secondly, we exploit the weak ferromagnetic nature of epitaxial hematite thin-films to directly control the antiferromagnetic sublattices with current pulses. To achieve this, we experimentally elucidate the magnetoelastic relaxation mechanism that governs magnetic field-driven antiferromagnetic domain reversal. This approach, with our analytical model for the field-induced response, reveals the role of the film magnetostriction and anisotropy. With this novel framework, we demonstrate deterministic current-driven control of the antiferromagnetic order which we show is driven by predominantly thermo-magnetoelastic torques. This work provides fundamental insight into physical phenomena governing domain reversal, crucial to realizing antiferromagnets as active spintronics device elements.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153324</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-assembled molecular nanomaterials and their applications</title>
<link>https://hdl.handle.net/1721.1/153323</link>
<description>Self-assembled molecular nanomaterials and their applications
Cho, Yukio
Self-assembly of small molecules in water provides a bottom-up strategy to generate highly ordered nanostructures with dimensions on molecular (&lt;10 nm) length scales. However, molecular self-assembled nanomaterials have traditionally been employed mostly in biological applications due to two critical and inherent constraints: 1) The nanostructures lack cohesive interactions for stability outside of an aqueous environment, and 2) Modifying their functionality, often attributed to the surface chemistry of the nanostructures, without disturbing molecular packing is challenging. This dissertation explores a new self-assembling platform called “aramid amphiphiles (AAs)”. These AAs incorporate hallmark features of Kevlar, the renowned material used in bulletproof vests, to enhance cohesive intermolecular interactions and suppress dynamic motion of constituent molecules. In this study, spontaneous self-assembly of AAs with various functionalities into one-dimensional, high-aspect-ratio AA nanomaterials in water has been observed. These nanomaterials not only exhibit remarkable mechanical strength but also maintain stability outside of an aqueous environment, where the initial driving force of self-assembly is typically absent. Leveraging their mechanical robustness and high-aspect-ratio morphologies, these individual nanomaterials can undergo shear alignment to form a hierarchically ordered macroscopic material that is mechanically flexible and stable in air. Furthermore, the enhanced intermolecular cohesion also provides an alternative pathway to circumvent the previously mentioned second constraint. This is accomplished through the use of domain-selective thermal decomposition, a process that strategically removes the thermally labile hydrophilic domains while maintaining the thermally stable internal cohesive domain of the AA nanomaterials.&#13;
&#13;
This dissertation also broadens the potential applications of AA nanomaterials by tailoring the surface chemistry functionality of constituent AAs. For example, the AA nanomaterials can be modified to dissociate lithium salt and exhibit lithium-ion conductivity by incorporating an oligomer form of polyethylene oxide. The resulting AA solid-state membrane, composed of molecular self-assembled nanomaterial, enables a unique feature of reversibility. This feature facilitates easier separation of different components, thus enhancing battery recycling, a technology that is currently economically impractical and technically challenging. Moreover, the AAs can be tailored to self-assemble into high-aspect-ratio, rigid nanotubes with the surface functionality to immobilize metal nanoparticles. Sub-nm diameter metal nanoparticles are highly catalytically active, but they are often not cost-effective to separate from solution due to their size. Assembling these metal nanoparticles onto the AA nanotubes not only maintains the high surface-to-volume ratio necessary for effective catalysis, but also enables the assemblies to be separated from solution and reused through a simple filtration process. This dissertation thereby paves the way for using molecular self-assembly to create advanced, nanostructured, solid-state molecular materials.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153323</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplexed representations of uncertainty by mouse pulvinar-prefrontal projections during goal-directed behaviors</title>
<link>https://hdl.handle.net/1721.1/153321</link>
<description>Multiplexed representations of uncertainty by mouse pulvinar-prefrontal projections during goal-directed behaviors
Leow, Yi-Ning
Processing sensory information to generate decisions and actions is a central component of learned, goal-directed behavior. Even during ongoing sensory-motor processing, our sensory landscape is filtered through our prior expectations and ongoing goals. This active perceptual process hinges on a distributed network of cortical and subcortical areas. The pulvinar, or the homologous rodent lateral posterior (LP) nucleus, is a higher-order visual thalamic nucleus that bridges many of these subcortical and cortical structures. In particular, LP/pulvinar interactions with prefrontal cortices such as the anterior cingulate cortex (ACC) have been implicated in regulating attentional processes. However, the anatomical inputs integrated and the precise information carried by this projection during decision-making and action selection have never been clarified. We address this gap by leveraging genetic tools available in mouse models to examine the role of LP-ACC inputs directly with projection-specific anatomical mapping, axonal calcium imaging with two-photon microscopy in animals viewing visual stimuli passively or performing a decision task, and optogenetic manipulations. We find that LP-ACC axons integrate inputs from a vast network of subcortical and cortical structures that are implicated in attention, visuomotor functions, and spatial cognition. During passive viewing, activity of the LP-ACC projection is dominated by global arousal states while visual information is poorly represented. During a two-alternative graded random dot motion direction discrimination task, LP-ACC activity in individual axons and in the axonal population represents multiple task variables. The activity of single axons ranges from the coding of stimulus coherence and direction in the random dot stimuli to the signaling of different task epochs in individual trials. 
At the population level, we find highly structured representations of task variables: LP-ACC activity jointly represents direction and coherence of visual stimuli in a low-dimensional geometric manifold that facilitates visual decoding. Furthermore, LP-ACC axons dynamically represent the outcome and uncertainty of previous trials, and integrate past and current trial uncertainty throughout the task. These physiological responses influence trial-by-trial behavior, which can be disrupted by optogenetic perturbation of specific trial epochs. Our findings demonstrate that the LP contributes to attention and decision-making by providing a read-out of ongoing uncertainty, integrated over time with behavioral history, to adaptively tune neuronal responses and guide goal-directed behavior on a trial-to-trial basis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153321</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Infrastructure of Peace: Socio-spatial planning in UN Peace Operations</title>
<link>https://hdl.handle.net/1721.1/153320</link>
<description>The Infrastructure of Peace: Socio-spatial planning in UN Peace Operations
Danielak, Silvia
My research examines infrastructure building, and the ‘planning for peace’ embedded therein, in the context of United Nations (UN) peace operations. The installation of solar panels, the repair of roads, and the construction of bridges constitute an important vehicle for conflict transformation and an imaginary for the future of a conflict-affected society. Peace operations’ infrastructure projects have a significant, long-term impact on the built environment and ecology in the places of intervention – a logic that is scarcely articulated as part of peace efforts and remains disjoint from the sustainability discourse to which peacebuilding has turned. My dissertation constitutes a multi-disciplinary inquiry, connecting urban studies and peace studies through an approach informed by historical sociology. I offer an urban planning perspective on peace operations, and specifically on their infrastructure building. Through three case studies, this dissertation explores the ‘infrastructural imaginaries of peace’ – infrastructure as promise, risk, and legacy – pursued through engineering and planning expertise and practice in the UN missions in Cyprus in the 1960s, in Haiti after the mid-2000s, and in Mali after 2013.&#13;
&#13;
The dissertation’s central argument is that peacekeeping operations conduct a significant socio-spatial (re-)organization in pursuit of peace through infrastructure building. The dissertation’s historical perspective on peacekeeping’s involvement in public works highlights that – contrary to the recent uptick in attention to peacekeepers’ ecological footprint and ‘sustainable’ peace efforts – socio-spatial, urban, and environmental aspects have always featured in peace operations, albeit through different paradigms. Furthermore, the recent increased attention to ‘greening’ peacekeeping and to ‘positive legacy’ after missions’ closure reveals an uneasy positioning of peace operations’ infrastructure building between the pursuit of positive and negative peace objectives. These objectives are not easily reconcilable and challenge us to rethink the spatial and temporal dimensions of peace efforts, and the equity planning that might need to gain more traction in peace operations’ infrastructure projects.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153320</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The flow of a high temperature, variable viscosity liquid, at low Reynolds numbers.</title>
<link>https://hdl.handle.net/1721.1/153185</link>
<description>The flow of a high temperature, variable viscosity liquid, at low Reynolds numbers.
Sununu, John Henry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1966; Bibliography: leaves 106-108.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153185</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The eikonal approximation in quantum field theory: applications to the vertex function and light cone physics.</title>
<link>https://hdl.handle.net/1721.1/153184</link>
<description>The eikonal approximation in quantum field theory: applications to the vertex function and light cone physics.
Eichten, Estia Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153184</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>NMR studies of rearrangements of transition metal complexes.</title>
<link>https://hdl.handle.net/1721.1/153181</link>
<description>NMR studies of rearrangements of transition metal complexes.
Eaton, Sandra Shaw.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153181</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactions of isoleucyl-tRNA synthetase from Escherichia coli B.</title>
<link>https://hdl.handle.net/1721.1/153180</link>
<description>Reactions of isoleucyl-tRNA synthetase from Escherichia coli B.
Eldred, Emmett Walter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1972; Vita.; Bibliography: leaves 138-140.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153180</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridged binuclear complexes.</title>
<link>https://hdl.handle.net/1721.1/153179</link>
<description>Bridged binuclear complexes.
Eaton, Gareth Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153179</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive behavior and the pure theory of consumer demand.</title>
<link>https://hdl.handle.net/1721.1/153178</link>
<description>Adaptive behavior and the pure theory of consumer demand.
El Safty, Ahmad el-Sayed Ali.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1972; Vita.; Bibliography: leaves 171-173.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153178</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Age-hardening of a copper-cobalt and a copper-iron alloy</title>
<link>https://hdl.handle.net/1721.1/153175</link>
<description>Age-hardening of a copper-cobalt and a copper-iron alloy
Gordon, R. B.
            (Robert Bruce)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1939; Vita.; Includes bibliographical references (leaves 99-101).
</description>
<pubDate>Sun, 01 Jan 1939 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153175</guid>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of organizational change: some determinants of managerial problem solving and decision making competences</title>
<link>https://hdl.handle.net/1721.1/153119</link>
<description>Dynamics of organizational change: some determinants of managerial problem solving and decision making competences
Nath, Raghu.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Industrial Management, 1964; Includes bibliographical references (leaves 161-165).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153119</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coding and decoding for time-discrete amplitude continuous memoryless channels</title>
<link>https://hdl.handle.net/1721.1/153118</link>
<description>Coding and decoding for time-discrete amplitude continuous memoryless channels
Ziv, Jacob.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1963; Vita.; Includes bibliographical references (leaf [147]).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153118</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On complex connective K-theory,</title>
<link>https://hdl.handle.net/1721.1/153115</link>
<description>On complex connective K-theory,
Williams, Stephen Leon.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1969; Vita.; Bibliography: leaves 88-89.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153115</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The reaction of organolithium reagents with carbon monoxide.</title>
<link>https://hdl.handle.net/1721.1/153112</link>
<description>The reaction of organolithium reagents with carbon monoxide.
Kelly, Edward Gregory.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1969; Vita.; Includes bibliographical notes.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153112</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer recognition of three-dimensional objects in a visual scene.</title>
<link>https://hdl.handle.net/1721.1/153109</link>
<description>Computer recognition of three-dimensional objects in a visual scene.
Guzmán, Adolfo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Vita.; Bibliography: leaves 257-259.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153109</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Landing Security: Risk, Endogeneity, and the Archives of Colonialized Planning in Morocco</title>
<link>https://hdl.handle.net/1721.1/153103</link>
<description>Landing Security: Risk, Endogeneity, and the Archives of Colonialized Planning in Morocco
Elgamal, Asmaa
This dissertation investigates the historical and contemporary relationships between security, development and planning through the lens of collective land tenure in Morocco. Weaving a historical narrative that traces the legal and bureaucratic institutions of Moroccan land management to their colonial roots, I argue that security politics are inextricably tied to the very genesis of development planning. The title of the dissertation, “Landing Security,” reflects both the centrality of land to my analysis as well as its power to secure allegiances, quell anxieties, and fortify state sovereignty. This power does not only lie in the material possession of land or its potential for economic investment, but also in the bureaucratic processes through which land tenure becomes legitimate, stable, and thus uncontestable. “Landing Security” thus refers to land not as an object or a noun, but as a verb capable of transforming relationships and pacifying the violence of state-building and development. In contrast to conceptions of planning as a form of state control, I argue instead that development planning is a form of risk management in which a territorialized understanding of culture – including the relationships of subject populations to their land – is constructed as risk. Risk mitigation, in the form of what I call finding “tolerable levels of endogeneity” – or accepting a certain level of locality and tradition deemed necessary for effective control – then becomes the mechanism through which security logics are embedded in state practice. This translates into an obsession with binding development policy to presumably traditional legal, social, and political institutions, thus producing a manufactured path dependency as a proxy for cultural authenticity. 
I demonstrate, moreover, that the planning rationalities of the protectorate regime continue to guide the management of collective land in the contemporary Moroccan state, now bolstered by the legal and institutional legacies of the colonial regime. Ultimately, I argue, one of the greatest successes of the French colonial tenure in Morocco was its ability to transform its own archive into the new reference point for cultural authenticity within an ongoing project of modern state-building.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153103</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Detection of En-route Convective Weather Avoidance and Development of a Weather Assessment Model</title>
<link>https://hdl.handle.net/1721.1/153094</link>
<description>Data-Driven Detection of En-route Convective Weather Avoidance and Development of a Weather Assessment Model
Price, Rachel E.
Convective weather has significant impacts on aviation operations around the world. It reduces available airspace capacity, and avoidance of hazardous convective systems increases the workload for both pilots and air traffic controllers. Optimizing airspace use in the presence of constantly evolving convective activity is an ongoing research effort. Today, convective weather avoidance models aid controllers in safely and efficiently managing their airspace. These models were built on past examples of aircraft-weather interactions drawn from archived weather observation data and air traffic data. However, the development of existing models was limited by the significant amount of work required to assess convective weather impacts on the final trajectory of each aircraft. This work explores data-driven approaches to weather avoidance modeling and introduces an image-based modeling paradigm.&#13;
&#13;
At present, there is limited ability to examine historical aviation operations and automatically identify when aircraft deviated due to convective weather blocking the intended route. In particular, retrospectively assessing aircraft-weather interactions without the aid of human subject matter experts poses a difficult challenge. Each interaction is different, and simple heuristics have proven to be ineffective at classifying weather encounters. A key contribution of this work is the development of a deviation detection model, which takes in information about a flight’s flown route and its intended route as well as local weather observation data and assigns a behavioral classification. Individual cases are classified as non-deviations, deviations for tactical weather avoidance, or deviations unlikely to be caused by the tactical weather situation. This classifier was developed using a supervised machine learning approach, with training data labeled by crowd-sourced subject matter experts, generally airline pilots. By leveraging transfer learning techniques, a small labeled dataset was sufficient to train the deviation detection classifier.&#13;
&#13;
After the deviation detection classifier was developed, it was used to label a large set of aircraft-weather interactions. The goal of this step was to identify interactions containing useful information on weather avoidance decisions and techniques, which was previously a process carried out by human subject matter experts. This included both non-deviations where the aircraft penetrated convective weather and tactical weather deviations where the aircraft maneuvered to avoid convective weather. The large set of non-deviations and tactical weather deviations was used to develop a deviation prediction model. For the deviation prediction model, only the intended route and local weather observations are considered. The model outputs a prediction as to whether the intended route will be flown or if there will be a deviation for convective weather.&#13;
&#13;
The predictive model developed in this work outperforms existing convective weather avoidance models, and does so based on an entirely machine-labeled training dataset. This demonstrates the value of using a small labeled dataset to effectively label a much larger one, as well as of the image classification framework used to develop the deviation detection classifier. Reducing reliance on human data curation and labeling makes the creation of more specialized predictive models feasible and allows future research to leverage techniques requiring larger datasets than were previously possible.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153094</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Commerce Data: from Theory to Practice</title>
<link>https://hdl.handle.net/1721.1/153093</link>
<description>Learning from Commerce Data: from Theory to Practice
Peng, Tianyi
Enterprises can anticipate substantial benefits from the vast potential of commerce data. However, deploying analytics platforms to extract value from such data poses a significant challenge for many organizations. One major obstacle lies in the ability to effectively learn from commerce data within an environment characterized by noise, non-stationarity, intricate intervention patterns, and the occurrence of operational issues and anomalies at any given moment.&#13;
&#13;
Motivated by these challenges, we address the problem of causal learning in panels with general intervention patterns that may depend on historical data. In this thesis, we present a novel and nearly complete solution to this problem that allows for the rate-optimal recovery of treatment effects. Our work generalizes the outcome model of the difference-in-differences paradigm and expands the applicability of the synthetic-control paradigm. In doing so, we provide a novel de-biasing analysis that addresses low-rank matrix regression with non-random intervention patterns and noise, a non-trivial feature of independent interest. Moreover, this thesis also addresses the challenges of anomaly detection and uncertainty quantification for low-rank matrices with missing entries and general noise, thus enabling robust learning from corrupted data.&#13;
&#13;
On the practical side, our algorithm forms the core of TestOps, a pioneering testing platform co-developed with a USD 100 billion beverage company. TestOps solves a long-standing problem of learning from physical retail experiments, leading to a substantial increase in revenue of millions of dollars per month in Mexico alone, and is being rolled out globally. Our framework has also sparked ongoing collaborations in healthcare, finance, and sustainability, extending its applicability beyond retailing. The outcomes have been consolidated and documented in an open-source Python package available at: https://github.com/TianyiPeng/causaltensor.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153093</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Control of Mechanoneural Interfaces for Neuroprosthetic Limbs</title>
<link>https://hdl.handle.net/1721.1/153092</link>
<description>Design and Control of Mechanoneural Interfaces for Neuroprosthetic Limbs
Song, Hyungeun
Within the past two decades, numerous attempts have been made to fully reconstruct bionic gait via signals derived from the human nervous system. However, human gait has been difficult to emulate due to the high resolution of both efferent and afferent signaling required to replicate coordinated volitional and reflexive motor commands. Even invasive neural interfaces providing functional feedback from the bionic leg have been unable to demonstrate biomimetic gait. To compound the difficulty, there is limited understanding of the fundamental level of afferent feedback necessary to facilitate a high degree of neuroprosthetic integration for human locomotion.&#13;
&#13;
In this thesis, we investigate the impact of preserving afferent feedback in residual limbs on the sensorimotor responses of individuals with below-knee amputations. Additionally, we present a neuroprosthetic framework that fully reconstructs biomimetic gait from neural information generated by individuals with below-knee amputation. We have achieved the level of neuroprosthetic integration necessary to execute versatile gait through a surgically-constructed mechanoneural interface that enhances native muscle afferents within the amputated residuum. Finally, we develop a myoneural actuator technology in a rodent model, enabling the design of a novel mechanoneural interface that allows for the direct modulation of proprioceptive afferents. These advancements have the potential to significantly improve the quality of life for individuals with amputations and further the development of advanced surgical and neuroprosthetic technologies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153092</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Optimizing Nanophase Separation Sintering</title>
<link>https://hdl.handle.net/1721.1/153091</link>
<description>Understanding and Optimizing Nanophase Separation Sintering
Oliver, Christian Edward
Sintering provides a way of producing bulk products from materials below their melting temperatures. This creates an avenue to utilize material systems that are difficult to produce with other traditional manufacturing processes. While originally used to process ceramic components, sintering has become a way to produce metal parts that are conventionally difficult to make, such as parts made from refractory metals. Various sintering methods such as liquid phase sintering and supersolidus liquid phase sintering have been used to enhance densification and reduce sintering time. Nanophase separation sintering (NPSS) was first explored by Dr. Mansoo Park. In this process, a mechanically alloyed powder undergoes phase separation at the surface, and the resulting second phase promotes neck formation and rapid densification. Because NPSS is a solid-state process, parts made via this method can be used at temperatures higher than those at which they were sintered, and it allows high-melting-temperature materials such as tungsten to be sintered quickly at low temperature. Nanophase separation sintering has also been shown to allow the production of bulk nanocrystalline products. To fully harness such a technique, it is necessary to fully understand the thermodynamic and kinetic phenomena surrounding it.&#13;
&#13;
First, we explore how to better understand the critical processes occurring during NPSS. Previous techniques such as the Master Sintering Curve (MSC) assume a single mechanism for sintering, when the reality of a sintering process is more complicated. For this reason, a Kissinger-style analysis for sintering dilatometric data was derived, properly grounded in the combined-stage sintering model. This allowed us to understand how different temperature-dependent mechanisms, represented by densification rate peaks, facilitate the different stages of NPSS. The error introduced by using the traditional Kissinger analysis for sintering dilatometry data was explored, highlighting the necessity of our derivation for accelerated sintering techniques. Finally, the method was evaluated against three separate test cases to ensure the obtained activation energies were in line with established methods.&#13;
&#13;
With this new tool, we re-evaluated the previously explored systems of W-Cr, Cr-Ni, and Ti-Mg. By analyzing the W-Cr system in more detail, it was possible to ascertain that there were two critical sintering events: neck formation and neck redissolution. These results were verified quantitatively using a Kissinger-style analysis and qualitatively through ex-situ SEM imaging. Further confirmation came from examining the Cr-Ni system, which also exhibits NPSS. Finally, we examined the Ti-Mg system, which sintered via NPSS but did not achieve the extensive densification of the previous systems. It is shown that Ti-Mg does not exhibit redissolution and thus does not fulfill the newly understood criteria of NPSS.&#13;
&#13;
With this understanding of NPSS, novel alloy systems were designed to exhibit it. First, the binary Mo-Cr system was explored and shown to fully exhibit NPSS. The Mo-Cr system was used to explore optimizing NPSS, including minimizing the sintering temperature and the amount of sintering aid. Beyond binary systems, the Mo-W-Cr ternary was explored and shown to sinter exceptionally well, similarly to the Mo-Cr binary. The relative impact of the ternary elements is shown and compared to previous NPSS systems. Mechanical testing is performed to assess its usefulness compared to differently processed alloys.&#13;
&#13;
The accumulation of these results verifies the thermodynamic requirements and kinetic mechanisms that define nanophase separation sintering. Through this work, novel alloys, including those beyond the binary systems traditionally examined, can be designed to exhibit NPSS and further optimized for improved processing and usefulness.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153091</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatially-Directed Interfacial Polymerization</title>
<link>https://hdl.handle.net/1721.1/153088</link>
<description>Spatially-Directed Interfacial Polymerization
Chazot, Cécile A. C.
Polymer-based materials and composites are widely used in applications from consumer goods to aircraft, due to their low cost and desirable properties. However, processing of both plastics and composites is often time consuming (taking hours or even days) and requires high temperatures, contributing to their energy footprint. This thesis introduces In Situ Interfacial Polymerization (ISIP) and Interfacial Photopolymerization (IPP), two room-temperature, interfacial polymerization (IP)-based methods for rapid manufacturing of polymers and their composites. First, In Situ Interfacial Polymerization (ISIP) is introduced as a fabrication technique for nanocomposites with tunable morphology. Using ISIP within a CNT network, dense carbon nanotube (CNT)-polymer composite sheets can be obtained in a matter of minutes. Uniform aramid-CNT composite sheets obtained by this method show a two-fold increase in modulus and strength compared to pristine CNT sheets. Fundamental understanding of the balance between capillary flow and reaction-precipitation kinetics is implemented in a first-principles macrokinetics model and enables expansion to porous materials beyond CNTs. By adjusting the transport-reaction balance through monomer selection, a new IP scheme based on aqueous diamines and organic dianhydrides is also introduced as a means to produce nanotextured thermoformable polyetherimide films. While the ISIP technique facilitates composite formation within nanoporous materials, Interfacial Photopolymerization (IPP) enables photopolymerization printing of linear-chain polymers without requiring a scaffold substrate. A prototype system is developed for IPP of poly(acrylonitrile) and the rate and resolution are quantified, showing the potential of IPP as a future photopolymerization 3D printing method for thermoplastics, in contrast to current techniques that are restricted to non-recyclable thermoset polymers.
A macrokinetics model of IPP is developed that balances diffusion/partition and polymerization reaction kinetics to assess build rate, enabling process optimization and material properties tunability.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153088</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robustness of Reinforcement Learning Systems in Real-World Environments</title>
<link>https://hdl.handle.net/1721.1/153087</link>
<description>Robustness of Reinforcement Learning Systems in Real-World Environments
Garau Luis, Juan José
Reinforcement Learning (RL) is recognized as a promising paradigm to improve numerous decision-making processes in the real world, potentially constituting the core of many future autonomous systems. However, despite its popularity across multiple fields, the number of proofs of concept in the literature is substantially larger than the number of reported deployments. This can be primarily attributed to differences between real-world environments and experimental RL setups. On one hand, from a domain-specific perspective, it is challenging to fully characterize concrete tasks and environments in the real world, and training in physical environments may not always be possible. On the other hand, the real world presents several domain-agnostic challenges that make learning more difficult, such as high dimensionality, non-stationarity, or generalizability. Although RL agents have demonstrated effective performance in practical applications, their robustness to these real-world phenomena remains an open challenge.&#13;
&#13;
To move a step forward towards better RL deployability, this thesis investigates different aspects of RL system design, focusing on enhancing robustness in real-world environments. It is composed of three main areas of research:&#13;
&#13;
Firstly, to comprehensively characterize the problem of real-world robustness, I propose an RL roadmap. This identifies key factors that influence the interaction between an RL system and a real-world environment, and it offers a structured approach to addressing the overall problem. I further delve into one specific element of this roadmap, the state space, and present a set of mathematical bounds for the change in mutual information (MI) between state features and rewards during policy learning. By observing how MI evolves during learning, I demonstrate how to identify more effective feature sets, as shown through the study of a practical use case, the Traffic Signal Control problem.&#13;
&#13;
Secondly, I introduce MetaPG, a novel domain-agnostic RL design method that prioritizes robustness in addition to performance. MetaPG is an AutoRL method that automates the design of new actor-critic loss functions, represented as computational graphs, for optimizing multiple independent objectives. Through evolutionary search, MetaPG generates Pareto fronts of new algorithms that maximize and trade off all objectives considered. When applied to a use case aimed at optimizing single-task performance, zero-shot generalizability, and stability on five different environments, evolved algorithms show, on average, a 4.2%, a 13.4%, and a 67% increase in each of these metrics, respectively, compared to the SAC algorithm used as a warm start. Furthermore, MetaPG offers insights into the structure of the evolved algorithms, allowing for a better understanding of their functionality.&#13;
&#13;
Lastly, this thesis focuses on the application of conceptual frameworks and design principles to specific real-world problems in which robustness has been systematically overlooked. I introduce a novel RL system for solving the frequency assignment problem for multibeam satellite constellations. By conducting a comprehensive search over six major design decisions, I identify a design variation that achieves a 99.8% success rate in 100-beam scenarios. However, this variation falls short in handling high dimensionality and non-stationarity. This thesis demonstrates that robustness against these challenges can be obtained through different design variations, which attain an 87.3% success rate in 2,000-beam cases. Additionally, I investigate design trade-offs in another real-world application, molecular optimization, and show that current methods are not well aligned with robustness.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153087</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Irreversible Thermodynamic Formalism of the Electronic Contribution to the Entropy in Metals</title>
<link>https://hdl.handle.net/1721.1/153086</link>
<description>An Irreversible Thermodynamic Formalism of the Electronic Contribution to the Entropy in Metals
Paras, Jonathan S.
We have applied an irreversible thermodynamic formalism to quantify the electronic contribution to the entropy in metals and alloys. We demonstrate the utility of our proposed approach by rationalizing the entropy of metal-insulator, order-disorder, and allotropic phase transitions together with current models of the vibrational and configurational entropy. Measurements of the electronic transport properties in Cu-Ni, Fe-Ni, and Fe-Cr alloys reveal the importance of considering multicarrier transport models when interpreting electronic transport properties. We generalize our approach to include the effects of the electronic entropy on the mixing thermodynamics of these alloy systems and attempt to integrate our results with conventional thermodynamic modeling by appealing to a cluster model for their configurational entropy. Finally, our work questions the idea of "thermodynamic ideality" in these alloys. We find that the entropy of mixing shows a noticeable electronic influence even when the total entropy seems unremarkable. This suggests that the short-range order (S.R.O.) of atoms plays a significant role in both the solid and molten states, even when there are no dominant intermetallic compounds in these alloys.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153086</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport and damage in hydrated coatings – a model soft active composite material</title>
<link>https://hdl.handle.net/1721.1/153085</link>
<description>Transport and damage in hydrated coatings – a model soft active composite material
Zhou, Yu Ren
Active materials are solids which cyclically absorb and release large quantities of mass and/or energy, and encompass a wide variety of materials including lithium-ion battery anodes and biological matter. Four interrelated fundamental processes – transport, volume change, stress, and damage – are present in active materials. This thesis explores the interconnection of these processes using water-based anti-corrosion coatings, water-absorbing polymer films composed of coalesced latex particles, as the model active material system. Free volume theory was applied to latex particle boundary networks to understand the transport that leads to volume swelling. More hydrophilic boundary networks led to higher water diffusivity due to higher free volume at these networks after water swelling. Enhancement of ion diffusivity due to the free volume increase after water swelling explains the correlation between ion transport and water absorption timescales. In-plane stresses in water-based coating films during volume change induced by hydration-drying cycling were explored by in-situ measurement of in-plane stress and ex-situ observation of surface morphology. In-plane stress relaxation was observed during the hydration half-cycle, attributed to the enhanced capacity for polymer chain motion after free volume swelling by absorbed water. In-plane tensile stress was measured at the beginning of the drying half-cycle. It was proposed that surface pit formation was due to preferential plastic deformation of more hydrated regions under in-plane tension during drying, above defects in the latex superlattice with locally faster water diffusivity. To understand stress-induced damage at the coating-steel interface, pull-off measurements were conducted to measure the adhesion force between stainless steel probes and water-based coatings. Slower degradation in adhesion force during NaCl(aq) immersion of coatings with faster water transport was explained by the faster-swelling coating surface moving more rapidly towards the dissolving passivating layer of the probe, thereby slowing the rate at which the gap between the coating and the passivating layer widens. The techniques in this thesis may be applied to automated experiments, which are becoming increasingly feasible with recent computational advances.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153085</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles control of zeolite synthesis, transformations, and intergrowth</title>
<link>https://hdl.handle.net/1721.1/153084</link>
<description>First-principles control of zeolite synthesis, transformations, and intergrowth
Schwalbe Koda, Daniel
Designing new materials that enable sustainable catalysis and separations is essential to fully decarbonize the industrial sector, but materials discovery is hindered by labor-intensive experimentation. Computational methods such as high-throughput screening or machine learning can filter structures and compositions prior to experiments. Nevertheless, finding synthesis routes to realize computationally proposed materials still relies on trial and error. This thesis describes how materials discovery can be streamlined using high-throughput simulations, physics-based representations, and machine learning. In particular, this work analyzes the synthesis, phase transformations, and intergrowth of zeolites, which are industrially relevant catalysts and molecular sieves known for their hard-to-control phase competition and polymorphism.&#13;
&#13;
Part I of this thesis describes the development of methods to simulate and analyze zeolite transformations and template-based synthesis. Diffusionless phase transformations and intergrowths are predicted using graph-based representations of zeolite frameworks. Moreover, a data-driven analysis indicates that structural factors alone are often insufficient to predict synthesis outcomes. To address this knowledge gap, computational tools are developed to simulate template-based synthesis conditions for zeolites. The proposed methods accelerate the prediction of binding energies between molecular templates and zeolite hosts, reducing computational costs by up to two orders of magnitude while increasing the reproducibility of the calculations. These tools are then used to simulate hundreds of thousands of template-zeolite pairs in a high-throughput screening pipeline. The simulation results explain thousands of synthesis outcomes from the past six decades of the zeolite literature and recall archetypical templates purely from physical principles.&#13;
&#13;
Part II of this thesis leverages these methods to design new zeolite synthesis routes, thus enabling the control of phase competition, intergrowth, and catalytic properties of the materials. The computational tools guide the experimental synthesis of zeolites with improved catalytic behavior, including pure-phase frameworks with low-cost templates and disordered structures with tunable polymorph selectivity. Finally, the computational approach maps numerous possibilities for further discovery based on over 35 million data points generated using machine learning, which is used to decode this “zeolite genome”. This work provides a roadmap on how synthesis routes for zeolites can be controlled a priori using theory and computation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153084</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Templated Solid-State Dewetting of Single Crystal Ni Thin Films</title>
<link>https://hdl.handle.net/1721.1/153083</link>
<description>Templated Solid-State Dewetting of Single Crystal Ni Thin Films
Shin, Yoon Ah
Solid thin films are generally metastable in the as-deposited state and will undergo dewetting if they are heated. Dewetting can occur at temperatures well below the film’s melting temperature so that the film remains in the solid-state. Solid-state dewetting of thin films is driven by surface energy minimization and proceeds through retraction of film edges that develop rims that break down via various dewetting phenomena. While dewetting of polycrystalline thin films leads to irregular morphologies, dewetting of single crystal thin films leads to regular morphologies due to the uniformity in crystalline anisotropy. Moreover, for single crystal thin films, dewetting can be guided to form complex nanostructures in a highly reproducible manner by pre-patterning the film. &#13;
&#13;
Lithographic pre-patterning of single crystal thin films also provides a framework for investigation of mechanisms of specific dewetting phenomena. These phenomena include the effects of in-plane crystallographic orientations of film edges on their morphological evolution during dewetting-induced retraction. Also, for a given pre-patterned film, the effects of materials and process parameters, including out-of-plane orientation of the film, initial film thickness, and annealing ambient, on dewetting phenomena can be systematically investigated.&#13;
&#13;
In this work, using pre-patterned single-crystal Ni films on MgO substrates as a model system, mechanisms and means of controlling dewetting phenomena were investigated, paying attention to effects of the above-mentioned parameters. The first part of this thesis focuses on a specific dewetting phenomenon called a fingering instability. The conditions that lead to fingering instead of an alternative phenomenon called rim pinch-off were investigated. It was found that an initial edge roughness leads to the development of fingering and that fingering can be induced by intentionally pre-patterning film edges with periodic roughness. Furthermore, it was found that within a range of periods, templating controls the steady-state finger characteristics (spacing, direction and propagation rate) and leads to the formation of periodic arrays of parallel nanowires. Templated fingering was investigated over a wide range of experimental variables (template wavelengths, in-plane edge orientations, and film thicknesses) and the finger characteristics were determined for each of the variables. The second part of this thesis focuses on how the steady-state finger characteristics are controlled by the wavelength of the patterned roughness. A kinetic model that provides mechanistic insights into the effects of all of the investigated experimental parameters is developed. It is also shown that there are lower and upper wavelength limits for direct templating, and the dewetting behaviors of templated fingers outside the direct-templating regime are reported and analyzed. Lastly, in the third part of this thesis, edge retraction and the pinch-off phenomenon were investigated in different ambient conditions, using two different types of reducing gas (H₂-based and CO-based). A profound effect of the ambient gas on the formation of valleys ahead of the rims on retracting edges, the origin of pinch-off behavior, was discovered, and it is demonstrated that the observed differences can be attributed to changes in surface energy anisotropy.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153083</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-functional flat optics for imaging and sensing</title>
<link>https://hdl.handle.net/1721.1/153082</link>
<description>Multi-functional flat optics for imaging and sensing
Yang, Fan
Flat optics refer to optical devices composed of ultra-thin and light-weight planar optical components, which manipulate light in ways that are not possible using conventional bulky optics. Multiple applications including imaging, beam steering, sensing, and projection can be performed through a single layer flat lens, which facilitates the integration between optical and electronic components. Through the arbitrary manipulation of wavefront, flat optics feature improved optical performance and customized functionality.&#13;
&#13;
In this thesis, we focus on the design and optimization of single- or multi-layer metasurfaces to construct flat optical components for imaging and sensing applications. We propose a variety of device configurations, analytical models, and material choices, and prove the effectiveness of the concepts through experimental demonstrations.&#13;
&#13;
We have developed a design concept for wide field-of-view metalenses for imaging applications. We proposed an analytical solution for the optimum phase profile of a single-layer wide field-of-view metalens, which shows diffraction-limited imaging performance with a near-180° field of view. We further built an algorithm to design metalenses combining wide field-of-view and achromatic features, which show broadband imaging performance over the 1–1.2 μm wavelength range with minimal transverse focal shift.&#13;
&#13;
We have demonstrated a variety of depth sensing techniques using metasurfaces. These include a passive depth sensing mechanism using a stereo camera, and active depth sensing mechanisms based on structured light projection and beam steering. Near-180° 3-D depth sensing has been realized utilizing the wide field-of-view design concept.&#13;
&#13;
We have further combined multiple optical properties into a single flat optical element through polarization multiplexing. Based on the proposed concept, we demonstrated another passive 3-D depth sensing mechanism utilizing a metalens with a double-helix point spread function. The metalens showed a meter-scale depth sensing range with sub-millimeter accuracy. We have further proposed a design concept for a wide field-of-view metalens with extended depth of focus. In contrast to the depth-sensing metalens, it reveals object information over the entire 3–10 mm extended depth range within the near-180° field of view through image deconvolution. We have also proposed a zoom lens with tunable magnification through polarization multiplexing. An unprecedented 10x zoom ratio between the wide-angle and telephoto modes has been experimentally validated.&#13;
&#13;
Lastly, we have realized reconfigurable operation in the mid-infrared band using phase change materials. We demonstrated a zoom lens at the 5.2 μm wavelength leveraging the large refractive index difference of the phase change material Ge₂Sb₂Se₄Te₁ between its amorphous and crystalline states. A similar 10x zoom ratio between the two modes has been validated. We have further demonstrated the design concept of a varifocal lens: the focal length can be tuned continuously over the 4–10 mm range by controlling the temperature profile of the metasurface.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153082</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Enzyme Neofunctionalization in Plant Specialized Metabolism</title>
<link>https://hdl.handle.net/1721.1/153080</link>
<description>Mechanisms of Enzyme Neofunctionalization in Plant Specialized Metabolism
Kim, Colin Younghun
Plants, as sessile organisms, have evolved a plethora of secondary metabolites, along with highly specialized enzymes and unique metabolic pathways that support their biosynthesis. To date, approximately one-third of all existing pharmaceuticals are of plant origin. However, only a fraction of plant natural products and their derivatives have been explored as potential therapeutics. Moreover, the majority of plant natural product biosynthetic pathways remain inadequately studied, and the biocatalytic potential of their biosynthetic enzymes remains underexplored. In this thesis, we explore the molecular mechanisms underlying enzyme neofunctionalization in plant specialized metabolism by examining two enzyme families: the nonheme iron(II)- and 2-oxoglutarate-dependent halogenases and the BAHD acyltransferases. First, we describe the genomic and biochemical basis for the gene duplication and neofunctionalization of flavonol synthase (FLS) that led to the unique emergence of a halogenase in Menispermaceae plants. The discovery of dechloroacutumine halogenase (DAH), which catalyzes the regio- and stereoselective chlorination in acutumine biosynthesis, exemplifies a mode of action for alkaloid diversification. Furthermore, we present the chromosome-level assembly of the Menispermum canadense genome, which provides genomic insights into plant halogenase evolution. Next, we examine the structural and mechanistic basis for the neofunctionalization of coumarin synthase (COSY), which catalyzes the isomerization and lactonization steps in coumarin biosynthesis. This study unveils the emergence of an unconventional catalytic mechanism mediated by a BAHD-family enzyme and sheds light on its recruitment to the evolutionarily new coumarin biosynthetic pathway in eudicots. Overall, this work highlights the remarkable strategies that plants employ to evolve their biocatalytic machinery and expand their specialized metabolism.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153080</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Controlling the Degradation Mechanisms at Cathode-Electrolyte Interfaces in All-Solid-State Lithium-Ion Batteries</title>
<link>https://hdl.handle.net/1721.1/153078</link>
<description>Understanding and Controlling the Degradation Mechanisms at Cathode-Electrolyte Interfaces in All-Solid-State Lithium-Ion Batteries
Kim, Younggyu
All-solid-state Li-ion batteries with Li₇La₃Zr₂O₁₂ solid electrolyte enable higher energy density than conventional batteries with liquid electrolytes because they are compatible with lithium metal anodes. Despite their promise, stability issues at the interface between the cathode and the solid electrolyte must be solved before they can be implemented. The interface needs to be chemically stable at the high temperatures used during sintering, and electrochemical and chemo-mechanical stability at the interface is necessary during battery operation for good cyclability. To study these interfacial stabilities, we developed a model system with a thin-film cathode. The cell design allowed characterization of the interface by interface-sensitive techniques without the need for destructive methods. We studied interfacial degradation between a LiNi₀.₆Mn₀.₂Co₀.₂O₂ (NMC622) cathode and a Li₇La₃Zr₂O₁₂ (LLZO) solid electrolyte. We evaluated thermal stability in controlled gas environments (air, O₂, N₂, humidified O₂, CO₂) to identify contributors to secondary-phase formation and their effect on charge-transfer properties. Li₂CO₃, La₂Zr₂O₇, and La(Ni,Co)O₃ formed at the NMC622|LLZO interface when annealed at 700 °C in air, which increased the interfacial resistance by two orders of magnitude. Sintering in a gas environment free of CO₂ and H₂O (g) was necessary to obtain chemically stable interfaces. Sintering in O₂ gave excellent chemical stability and an interfacial resistance comparable to the lowest values reported in the literature for interfaces with protective coatings. Sintering in N₂ caused oxygen loss at high temperature, but secondary phases did not form. The NMC622|LLZO interface was electrochemically unstable due to the limited oxidation stability of LLZO. Electrochemical degradation at the interface reduced Ni during a potentiostatic hold at 4.3 V vs Li/Li⁺ and formed reduced phases with Ni²⁺ and Co²⁺ during cycling at 80 °C. This electrochemical degradation decreased capacities by increasing the overpotential. The reduction was not observed when the cycling temperature was lowered to room temperature, indicating that the reaction may be kinetically inhibited. Stress from the lattice-parameter changes of NMC622 during cycling caused intergranular cracks in the NMC622 film and delamination at the NMC622|LLZO interface. This chemo-mechanical degradation caused an abrupt capacity decrease by disconnecting Li-ion conduction pathways, so it should be avoided for better cyclability. This understanding of interfacial degradation offers guidelines for designing all-solid-state Li-ion batteries with better interfacial stability.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153078</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Controlling the Surface Chemistry of Oxides to Enhance Catalytic Activity at Elevated Temperatures</title>
<link>https://hdl.handle.net/1721.1/153076</link>
<description>Understanding and Controlling the Surface Chemistry of Oxides to Enhance Catalytic Activity at Elevated Temperatures
Kim, Dongha
Oxides have been extensively used as high-temperature catalysts in electrochemical devices and for gas conversion reactions, including solid oxide fuel cells (SOFC), solid oxide electrolysis cells (SOEC), ethane cracking, CO oxidation, and the oxidative coupling of methane. In high-temperature catalysis, the gaseous reactants undergo chemical or electrochemical reactions at the surfaces of the oxide catalysts. Therefore, understanding and controlling the surface chemistry of oxides is essential for maximizing their catalytic performance. Despite its high importance and scientific interest, our current understanding of the surface chemistry of oxide catalysts remains limited. One reason is that it is not trivial to investigate oxide catalysts in operando: high temperatures (typically 750-900 °C) and reactive gas environments limit the use of many surface analysis tools, such as X-ray photoelectron spectroscopy and scanning probe microscopy. Aiming to fill this gap in our knowledge, this thesis investigates the surface chemistry of various oxide catalysts, including perovskite oxides, fluorite oxides, and amorphous oxides, in different catalytic applications. &#13;
&#13;
This thesis introduces five studies on the surface chemistry of oxide catalysts used in different catalytic applications. The first study investigates the surface degradation of doped lanthanum manganite perovskite oxides, which results from the formation of surface dopant precipitates during their operation as air electrodes in solid oxide fuel/electrolysis cells. The effects of polarization on this surface degradation are explained by two different driving forces for dopant segregation to the surface of the air electrode. This dopant segregation and the resulting surface degradation cause a significant decrease in the activity of the air electrode. In the second study, a method to reverse this surface degradation and regenerate the surface catalytic activity with an applied electrical potential is introduced, and a mechanism for the surface reactivation under anodic potential is proposed. Next, a novel electrochemical method to control the size and number density of Au nanoparticles deposited onto perovskite and fluorite oxides is demonstrated. This method allows one to control the concentration of surface oxygen vacancies and obtain small Au nanoparticles at high density on the oxide surface. Such a fine distribution of metal nanoparticles is important for maximizing catalytic performance in gas conversion reactions such as CO oxidation and the water-gas shift reaction. In the fourth study, the surface chemistry of state-of-the-art amorphous oxide catalysts for the oxidative coupling of methane (OCM) is investigated; the degradation mechanisms of these OCM catalysts and various ways to inhibit such degradation by controlling the phase evolution of the catalyst surface are introduced. The fifth and final study introduces an electrochemical way to perform the OCM reaction using mixed ionic-electronic conductors (MIEC) as a backbone electrode with OCM-active secondary particles deposited onto its surface. Optimizing the dispersion of the OCM-active secondary particles is proposed as a promising route to improving the OCM selectivity of this material.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153076</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating microbial interactions, adaptations, and evolution under extreme environmental conditions in modern microbial mats from Shark Bay, Western Australia</title>
<link>https://hdl.handle.net/1721.1/153075</link>
<description>Elucidating microbial interactions, adaptations, and evolution under extreme environmental conditions in modern microbial mats from Shark Bay, Western Australia
Skoog, Emilie J.
Fossilized microbial mats dating back more than 2 billion years record some of the earliest evidence of life on Earth. These ancient pustular mats preserve evidence of microbial production of extracellular polymeric substances (EPS), revealing the potential importance of EPS in the protection and preservation of these ancient microbial communities. Today, extant pustular microbial mats in Shark Bay, Australia serve as one of the best modern analogues of these ancient systems. These modern microbial communities are able to persist under the extreme environmental conditions (i.e., hypersalinity, periodic desiccation, UV radiation, heavy metal toxicity) that characterize these past and present environments because of the EPS that coat and protect the microbes. The ability to produce, degrade, and modify EPS may advantage certain microbial communities and consequently support synergistic cooperation and influence overall community structure, yet these interactions remain unexplored. Additionally, the exchange of evolutionarily useful genes over billions of years may have enabled the adaptation and evolution of this community to such environmental extremes. This thesis uses metagenomic and biogeochemical analyses to elucidate how modern pustular microbial mats have persisted and adapted to survive extreme environmental conditions for over two billion years.&#13;
&#13;
In this thesis, I merge metagenomic analyses with experimental data to i) analyze the composition of the EPS produced by cyanobacteria enriched from these microbial mats, ii) link these findings with the metagenomic potential of the heterotrophic microbial community to modify and degrade this EPS, and iii) infer potential microbial interactions facilitated by the cycling of EPS within this community. I further explore the presence and cycling of sulfated EPS within this pustular mat and determine the role that sulfated EPS may play in overall microbial community structure and in protection from harsh environmental conditions. Metabolic reconstruction of poorly characterized members of the rare biosphere is then used to i) determine, for the first time, the functional role of these bacteria within the pustular mat community and ii) infer the adaptive mechanisms underlying the stress responses of these organisms within this extreme environment, as well as in other localities where these candidate phyla have been identified. Finally, horizontal gene transfer (HGT) analyses are used to predict microbial interactions and to determine the relative timing of transfer events for evolutionarily useful genes that may confer microbial resistance to a range of environmental stresses. Combining these approaches and techniques enables a better understanding of how microbial metabolism, metagenome plasticity, and microbial adaptation may inform the ecological success and evolution of an ancient microbial community under environmental extremes.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153075</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron Hydrodynamics in Crystalline Solids: Microscopic Origins, Mesoscopic Size Effects, and Macroscopic Observables</title>
<link>https://hdl.handle.net/1721.1/153074</link>
<description>Electron Hydrodynamics in Crystalline Solids: Microscopic Origins, Mesoscopic Size Effects, and Macroscopic Observables
Varnavides, George
Despite the continued scaling down of transistors since the invention of the integrated circuit over seventy years ago, recent trends have deviated from Moore’s predictions due to fundamental physical limitations, including heat dissipation. With data centres already using an estimated 200 terawatt-hours each year, and projected to account for as much as 8% of global electricity demand by 2030, there is a pressing need to design energy-efficient nanoelectronics using bottom-up techniques. As the high-quality single-crystal materials used in nanoelectronic devices approach the micro- and nano-scale, the transport of heat and charge in these devices develops inhomogeneous spatial signatures, which must be understood through spatially resolved transport theories. &#13;
&#13;
One particularly exciting example of emergent transport phenomena with distinct spatial signatures is the recently re-invigorated field of “electron hydrodynamics”. Advances in spatially resolved transport measurements have revealed that electrons in condensed matter can flow collectively, exhibiting fluid phenomena such as channel flow and vortices. These observations confirm theoretical predictions made over half a century ago and violate textbook descriptions that treat electron collisions like those of billiard balls. Unlike everyday fluids, however, the preferred directions in crystals imply that electron fluids exhibit anisotropic and non-dissipative viscous contributions, giving rise to novel electronic transport phenomena. Resistive processes in these hydrodynamic devices occur predominantly at the boundaries, altering the spatial distribution of heat dissipation and thus impacting thermal design. &#13;
&#13;
In this thesis, a multi-scale theoretical and computational framework for understanding hydrodynamic phenomena in three-dimensional crystalline solids is presented. Starting from the smallest length-scales, temperature-dependent electron interactions are predicted from first principles at the Eliashberg level of theory. These calculations suggest that the indirect lattice-mediated electron interaction dominates in bulk semimetals, including those with topological band crossings. These microscopic interaction lifetimes are used as input to solve the mesoscopic Boltzmann transport equation, with an efficient spatially resolved solver developed as part of this thesis. Going beyond the limiting relaxation-time approximation, the importance of these microscopic details for the linearized scattering matrix is evaluated, resulting in a macroscopic viscosity tensor. The additional complexities that these anisotropic and asymmetric viscosity tensors introduce to electron flow are investigated by solving the electronic Navier-Stokes equations in experimentally accessible geometries. Finally, recent experimental observations of the non-monotonic temperature dependence and the directional dependence of electron flow in bulk WTe₂ and PdCoO₂, respectively, are elucidated. The thesis concludes with a discussion of the importance of the presented results in the broader context of the field of electron hydrodynamics and suggests possible future avenues of inquiry.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153074</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogen in aluminum oxide and at the aluminum oxide/aluminum interface: an ab initio thermodynamics and Monte Carlo investigation</title>
<link>https://hdl.handle.net/1721.1/153073</link>
<description>Hydrogen in aluminum oxide and at the aluminum oxide/aluminum interface: an ab initio thermodynamics and Monte Carlo investigation
Somjit, Vrindaa
The thermodynamic stability, wide band gap, and extreme hardness of Al₂O₃ make it indispensable in anti-corrosion coatings, resistive switching devices, and superconducting qubits. However, point defects due to impurities like hydrogen, or to intentionally added dopants, can significantly alter the properties of Al₂O₃ coatings and microelectronics. Moreover, Al₂O₃ is often formed by oxidizing aluminum, resulting in an Al₂O₃/Al interface. The interface structure plays a critical role in these applications, but its buried nature makes experimental characterization challenging. This thesis utilizes ab initio statistical thermodynamics to resolve the Al₂O₃/Al interface structure and the effect of point defects on the properties of Al₂O₃ and the Al₂O₃/Al interface.&#13;
&#13;
In the first part, we identify dopants that reduce the hydrogen permeability of Al₂O₃ coatings for hydrogen pipelines. Using an ab initio thermodynamics model that accounts for the distinct conditions of Al₂O₃ formation and pipeline operation, we demonstrate that silicon and titanium dopants improve Al₂O₃ barriers by suppressing the proton concentration and trapping hydrogen at aluminum vacancy sites.&#13;
&#13;
Following this, we determine the evolution of the atomic and electronic structure of the Al₂O₃/Al interface during oxide growth and in the presence of hydrogen. Using ab initio Grand Canonical Monte Carlo to explore the interfacial configuration space, we find that the interface is atomically sharp and propagates layer-by-layer into the Al. We identify point defects that are crucial to oxide scale growth, and we determine the electronic structure changes that underpin the self-healing property of Al₂O₃/Al coatings and the Schottky barrier height variation in Al₂O₃/Al electronics. We establish that hydrogen promotes the formation of superabundant interfacial aluminum vacancies, which are likely the precursor to blistering of Al₂O₃/Al coatings.&#13;
&#13;
Finally, we focus on the effect of point defects on another important application of Al₂O₃: as an electrolyte in resistive switching devices. We determine that electronegative dopants in Al₂O₃ serve as preferential sites for the formation of conductive oxygen vacancy networks, reducing switching variability of Al₂O₃ resistive switching devices.&#13;
&#13;
This detailed picture of the effects of ionic and electronic defects on Al₂O₃ and the Al₂O₃/Al interface expands our understanding of this fundamental system. The atomistic insight provides routes to engineering advanced barrier coatings, controllable transistor technologies and resistive switching devices, and noise-free superconducting qubits.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153073</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Fluoride/Fluorine Bond Activity for High-Energy Li and Li-ion Batteries</title>
<link>https://hdl.handle.net/1721.1/153071</link>
<description>Tailoring Fluoride/Fluorine Bond Activity for High-Energy Li and Li-ion Batteries
Gao, Haining
Extending the classes of reactions that underlie electrochemical energy storage systems is of fundamental and practical importance to improving mobility, autonomy, medical devices, and electronics. Most cathode materials developed so far are oxide-based: all commercial Li-ion cathodes utilize lithium transition metal oxides, while MnO₂, SOCl₂, and SO₂ are representative examples for Li primary cathodes. In contrast, fluoride-based cathodes are far less investigated, with only a few examples, all with exceedingly high theoretical energy densities, such as carbon monofluoride (CFₓ), transition metal fluorides, and the recently developed perfluorinated gas cathodes (SF₆ and NF₃). This indicates the strong potential for fluoride-based cathodes to surpass current energy density limits. Therefore, to expand the landscape of fluoride redox and provide a new degree of freedom for the design of high-energy cathodes, this thesis examines the parameters controlling fluoride bond redox activity and their implications for Li and Li-ion batteries. &#13;
&#13;
The first part of this thesis targets sulfur−fluorine (S−F) bonds. Using the Li−SF₆ battery as a platform, we demonstrated the dominant effect of electrolyte solvent properties on the nucleation and growth of lithium fluoride (LiF, one of the discharge products). The electrode passivation induced by LiF is mitigated by increasing the fluoride solvation strength of the solvents, resulting in improved Li−SF₆ cell rate capability. Strategies to tune the S−F bond redox activity at the molecular level were investigated next using liquid-phase pentafluorosulfanyl arenes (R-Ph-SF₅), in which one of the F ligands of the SF₆ molecule is replaced by an aromatic group (R-Ph). The ring structure facilitates electron transfer by increasing molecular polarity, while the R functionality alters the S−F bond reduction potential by changing the electron distribution around the –SF₅ group. As a new family of Li primary catholytes, the R-Ph-SF₅ reactants allow for full reactant defluorination and a total of up to 8 e⁻ transferred per molecule, yielding capacities of 861 mAh per gram of reactant and voltages up to ~2.9 V vs. Li/Li⁺. At the cell level, gravimetric energies of 1085 Wh/kg were attained at 50 °C, exceeding all leading primary batteries on an electrode-plus-electrolyte (sub-stack) mass basis. The voltage compatibility of R-Ph-SF₅ and solid CFₓ cathodes further enabled the design of a hybrid battery containing both a fluorinated catholyte and a fluorinated cathode. The hybrid cells reach extraordinarily high cell active-mass loading (~80%) and boost the sub-stack gravimetric energy of Li−CFₓ cells by at least 20%. &#13;
&#13;
The carbon−fluorine (C−F) bonds in perfluoroalkyl groups (RF) were investigated next. The effect of extrinsic factors was examined using liquid perfluoroalkyl iodides (CFIs) as an example system, in which the polarizable iodine supports electrochemical reduction with concerted F⁻ ligand expulsion. C−F bond redox activity was found to be influenced significantly by multiple parameters, including reactant concentration, discharge rate, temperature, and solvent properties (e.g., catholyte viscosity). A maximum of 8 e⁻/C₆F₁₃I, or 8 of the 13 available F, is accessible, but only under ideal conditions (low reactant concentration and rate). Increasing the concentration or rate exacerbates premature cell termination caused by deactivation of intermediates, resulting in &lt;2 e⁻/C₆F₁₃I. This challenge was addressed via molecular design. By replacing the I ligand with an alkene linker connected to a conjugated system, close-to-full defluorination of RF was achieved, yielding up to 15 e⁻ per molecule (or 15 of the 17 available F) at voltages up to 2.6 V vs. Li/Li⁺. In addition to the ring structure and the R substituent, which facilitate charge transfer as observed in R-Ph-SF₅, the alkene linker was found to be essential for propagating the reductive transformation along the RF tail.&#13;
&#13;
Lastly, the Mn−F bond was studied in the context of the electrochemical fluoridation of MnO, the product of which functions as a rechargeable Li-ion cathode. Previous studies showed that a small MnO particle size (&lt;10 nm) is necessary for MnO fluoridation via the LiF splitting reaction. We demonstrated that this limitation originates from LiF rather than from MnO. With electrochemically formed LiF, which is nanocrystalline and in intimate contact with MnO, high MnO utilization (~0.9 e⁻/MnO) is achievable even with a large MnO particle size (~400 nm).&#13;
&#13;
Overall, the central advance of this thesis is the identification of multiple new electrochemical conversion motifs. In addition to developing novel classes of Li primary cathodes with extraordinary electrochemical performance, this work constructs a map of handles for tuning fluoride bond activity, providing a new platform for the design of battery materials for different applications. For instance, in rechargeable batteries, LiF is suggested to play an important role in stabilizing reactive interfaces and improving cycling stability.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153071</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Cardiac Delivery of Antisense Oligonucleotides with Peptidomimetic Targeting Agents</title>
<link>https://hdl.handle.net/1721.1/153070</link>
<description>Improving Cardiac Delivery of Antisense Oligonucleotides with Peptidomimetic Targeting Agents
Antilla, Sarah An-ning
Cardiovascular disease (CVD) is one of the leading causes of death worldwide, both on its own and as a comorbidity of other diseases such as diabetes and atherosclerosis. Unfortunately, CVD trends irreversibly toward heart failure, and current treatments only manage symptoms such as high blood pressure rather than addressing the root biological causes of the disease. Many micro-RNAs (miRNAs) are either over- or under-expressed in CVD, making the regulation of these miRNAs a potential treatment strategy. Here, we investigate the delivery of antisense oligonucleotides (ASOs) to inhibit miRNA-21, a miRNA overexpressed in CVD.&#13;
&#13;
One of the challenges in delivering ASOs and other gene therapies is reaching the desired tissue before the therapeutic is trafficked to the liver or kidneys. Our lab has a platform for discovering peptide-protein interactions, affinity selection-mass spectrometry (AS-MS), with which we can find short peptide or peptidomimetic targeting agents with nanomolar binding affinity for target proteins. Here, I describe a platform for selecting and procuring cardiac-specific proteins or their extracellular domains (ectodomains), in some cases employing the automated fast-flow peptide synthesis (AFPS) technology our lab has developed, which can produce single-domain proteins in hours in a single shot. We aim to discover and validate binders to these targets using AS-MS.&#13;
&#13;
Because generating binders to these targets proved challenging, we began investigating the transferrin receptor (TfR1) and T12, a peptide reported in the literature to bind TfR1. T12 binds TfR1 with an affinity in the low tens of nanomolar, and a conjugate of an anti-miRNA peptide nucleic acid (PNA) with T12 inhibits about 50% of miRNA-21 expression in mouse cardiac tissue at 30 mg/kg, whereas 30 mg/kg of PNA alone shows no significant inhibition of miRNA-21 expression in the heart. To reduce the dose required for efficacy, we synthesized a linear dimer of T12, which exhibits tenfold stronger binding to TfR1. A PNA-T12 dimer conjugate achieves just over 50% inhibition of miRNA-21 expression in cardiac tissue at only 5 mg/kg, outperforming the PNA-T12 monomer conjugate. We also begin to investigate dimer architecture and its effects on the T12-TfR1 interaction. With these promising initial results, we hope to apply this simple peptide targeting platform to other cardiac-specific targets and their discovered binders.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153070</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>DNA sequence design of non-orthogonal binding networks, and application to DNA data storage</title>
<link>https://hdl.handle.net/1721.1/153069</link>
<description>DNA sequence design of non-orthogonal binding networks, and application to DNA data storage
Berleant, Joseph Don
DNA has proven itself a powerful tool in a diverse array of nanotechnology-related domains, including molecular computation, nanostructure fabrication, and data storage. Most DNA-based systems focus on using sets of DNA sequences that are orthogonal to each other, such that each DNA sequence has a dedicated binding partner, its complementary sequence. This design approach reduces the number of interactions that must be considered when predicting how a system will behave, at the cost of reducing the information-gathering ability of each molecular unit.&#13;
&#13;
Relatively little research has attempted to solve the problem of designing promiscuous, or non-orthogonal, DNA sequences, which are characterized by their ability to bind to several distinct partners with variable binding affinities. Yet there are many situations in which this type of dense interaction network can be useful. For example, in neural networks, a node will often take inputs from hundreds or thousands of upstream nodes, allowing it to condense large amounts of information into a single output value. While naturally occurring biological networks often make use of promiscuous binding behavior, the field of molecular computing currently lacks a general-purpose and efficient method for non-orthogonal DNA sequence design.&#13;
&#13;
In this thesis, I describe a novel, robust, and broadly applicable method for designing small or large sets of non-orthogonal DNA sequences. This method takes an arbitrary matrix of pairwise binding affinities, and attempts to design DNA sequences such that the differential binding affinity between any two pairs of sequences is proportional to the difference in the corresponding elements of the matrix. The key innovation of this method is the reformulation of the matrix via a binary embedding, which reduces the design specification to a set of binary strings that permit relatively straightforward sequence design.&#13;
&#13;
Not all matrices permit a binary embedding, and I consider three cases here: when a binary embedding exists, when it is unknown whether one exists, and when one does not exist. When it exists, I show through both simulation and experiment that DNA sequences can be designed with high precision. When it is unknown whether a binary embedding exists, I give novel conditions for determining existence via a weighted-graph representation of the matrix. Finally, when an exact binary embedding does not exist, I develop an alternative method using approximate binary embeddings. To demonstrate the power of this method, I apply it to the task of similarity search in a large, simulated DNA database, where I show that it outperforms the existing state of the art. I hope that this work opens the door to further innovations in designing and applying non-orthogonal DNA sequences in DNA nanotechnology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153069</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>External field effects on electronic and ionic defects in&#13;
functional oxides: Experiments and simulations</title>
<link>https://hdl.handle.net/1721.1/153048</link>
<description>External field effects on electronic and ionic defects in&#13;
functional oxides: Experiments and simulations
Chi, Yen-Ting
Understanding the effects of external stimuli on electronic and ionic defects in functional oxides is key to designing next-generation energy conversion/storage and memory devices. Mechanical strain and high electric fields commonly exist in thin-film functional oxides and significantly affect electronic and ionic defect concentrations, reaction energy landscapes, and transport properties. This thesis focuses on quantifying the influence of external stimuli such as strain and electric field on the defect chemistry of functional oxides, including ABO₃ perovskites and solid oxide fuel cell materials. &#13;
&#13;
We first quantify the nonlinear dielectric response of neutral oxygen vacancies, composed of strongly localized electrons at an oxygen vacancy site, in perovskite oxides of the form ABO₃. Our approach implements a computationally efficient local Hubbard U correction in density functional theory simulations. These calculations indicate that the oxygen vacancy dipole moment correlates strongly with B-site cation reducibility and lattice volume. Next, we select SrTiO₃ as our prototypical perovskite and assess the effects of biaxial strain on the stability of electronic defects at finite temperature. We construct a predominance diagram for free electrons and small electron polarons in this material as a function of biaxial strain and temperature, and we find that biaxial tensile strain in SrTiO₃ can stabilize the small polaron, leading to thermally activated and slower electronic transport consistent with prior experimental observations on this material. These findings also resolve apparent conflicts between prior atomistic simulations and conductivity experiments for biaxially strained SrTiO₃ thin films.&#13;
&#13;
To validate our predictions and to resolve the controversy among earlier studies quantifying strain-induced ionic conductivity enhancement, we develop an experimental platform that enables in situ application of tunable in-plane strain and concurrent impedance measurement on a single sample under controlled temperature and atmosphere. This approach can access a wide temperature range (room temperature to 700 °C) and maintain precise gas control (oxygen partial pressure down to 1-10 ppm without reactive gas), relevant to mixed ionic-electronic conducting oxides. We apply this technique to three important materials in different configurations: a Y-ZrO₂ (YSZ, 9.5 mol% Y₂O₃) single crystal, a Gd-CeO₂ (GDC, 3% Gd) polycrystalline thin film, and a Pr₀.₁Ce₀.₉O₂₋ₓ (PCO10) polycrystalline thin film. Using this platform, we directly observe strain-induced oxygen breathing in the mixed ionic-electronic conductor PCO10 and demonstrate the effect of strain on ion transport and surface exchange efficiency. Tensile strain consistently increases the conductivity in all three materials at all temperatures, with a much more significant effect on PCO10. Finally, tensile strain does not significantly affect the surface oxygen exchange coefficients of PCO10, owing to a cancellation between simultaneous changes in the activation energy and the preexponential factor. &#13;
&#13;
This thesis quantifies the effect of external strain and electric field on defect chemistry in functional complex oxides. Our new methodology, developed framework, and insights into energy conversion/storage and memory materials can be broadly applied to batteries, ferroelectrics, semiconductors, and catalysis.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153048</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematics, Methods, and Models for Data-Driven Rheology</title>
<link>https://hdl.handle.net/1721.1/153047</link>
<description>Mathematics, Methods, and Models for Data-Driven Rheology
Lennon, Kyle R.
While data-driven tools and techniques have revolutionized much of the scientific and engineering landscape, they have yet to make a substantial impact in the field of rheology. Rheological data sets are at once too scarce and too diverse to enable traditional machine learning approaches -- their scarcity a reflection of the time- and material-intensive nature of bulk rheometry, and their diversity a product of the many rheometric protocols and tools used to characterize the hereditary behavior of complex fluids. In this thesis, we explore methods and models that combine domain knowledge curated over the nearly century-long history of rheology with modern advancements in data science and machine learning, whose aim is to maximize the utility of the available rheological data and rheometric tools. Essential to each of the methods and models developed in this thesis is a solid mathematical foundation that elucidates the unique nature of rheological data, without which the machine learning techniques could not take firm hold. These Mathematics, Methods, and Models for Data-Driven Rheology promise to advance the field of rheology, and the engineering of complex fluid and soft solid systems, in several ways.&#13;
&#13;
In Part I of this thesis, we derive a new mathematical construction for asymptotic nonlinearities in simple shear flows, called Medium Amplitude Parallel Superposition (MAPS) rheology. Based on a polynomial expansion of the general time-invariant functional relationship between shear stress and strain (or strain rate) in simple shear flows, MAPS reveals a common embedding for many previously disconnected data sets. This asymptotic framework enables direct comparisons of constitutive model predictions with a variety of experimental data, and facilitates data-driven studies throughout the remainder of this thesis.&#13;
&#13;
In Part II, we first develop a novel data-rich experimental method for weakly nonlinear rheology, which uses three superposed oscillatory tones to obtain high-throughput measurements of a MAPS response function. We present applications of this technique to robust parameter identification within physically motivated constitutive equations, and to data-driven monitoring of rheological transitions within a vitrifying clay dispersion. We next derive an automated method for the analysis of rheological data, based on the longstanding technique of superposing parametrically self-similar data sets. We validate this statistically robust technique, which employs machine learning to automate various types of data superposition tasks, on a broad range of data drawn from the rheological literature.&#13;
&#13;
In Part III, we propose a general modeling framework that encapsulates many well-known viscoelastic constitutive equations. This model formulation incorporates an arbitrary tensor-valued function of the stress and rate-of-deformation tensors into a "generalized nonlinear Maxwell model". The medium amplitude behavior of this model reveals a data-driven framework for constitutive model selection using MAPS rheology, which can accommodate both shear and normal stress data as well as multiple relaxation modes. We then consider a machine learning surrogate for the arbitrary tensor-valued function, and demonstrate that such a machine learning approach can rapidly generate accurate and generalizable models from limited experimental data. By design, these models are highly extensible and directly amenable to three-dimensional simulations of industrially relevant flows, and may therefore facilitate the rapid design and engineering of processes involving complex fluids.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153047</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Resolved THz Magnetospectroscopy: Techniques and Applications</title>
<link>https://hdl.handle.net/1721.1/153046</link>
<description>Time-Resolved THz Magnetospectroscopy: Techniques and Applications
Dastrup, Blake Stephensen
Ultrafast terahertz (THz) spectroscopy has been advanced by a research effort that includes technique development as a central focus. Multiple paradigm-shifting breakthroughs in experimental techniques have emerged in recent decades, including optical pump/THz probe spectroscopy, THz polaritonics, and THz magnetospectroscopy. Motivated by growing interest in integrating THz spectroscopy with an external magnetic field—THz magnetospectroscopy—the work of this thesis has been to develop THz spectroscopy techniques in the service of enabling THz magnetospectroscopy beyond the linear absorption regime. I have approached this goal via two primary routes.&#13;
&#13;
The first is the development of a THz generation scheme in the context of the THz polaritonics platform (waveguide-based THz generation, interaction, and readout), aimed at reaching the THz amplitudes necessary for nonlinear waveguide-based THz magnetospectroscopy. We used a novel noncollinear velocity-matching scheme based on total internal reflection of the 800 nm pump beam, such that the in-plane velocities of the pump and THz are equal. Using this scheme we observe more than 10x enhancement of the THz spectral amplitude, and nearly 100x enhancement of the THz pulse energy. Finite-difference time-domain (FDTD) simulations are presented, which elucidate pump depletion mechanisms and indicate the possibility of further enhancement by using longer pump wavelengths.&#13;
&#13;
The second is the development of a free-space optical pump/THz probe (OPTP) magnetospectrometer. This instrument includes kHz repetition rate single-shot detection, which decreases acquisition time and has enabled some of the first time-resolved measurements of carrier relaxation via cyclotron resonance experiments in bulk Si. In these measurements we observe directly the magnetic field dependence of the carrier relaxation time, which decreases with increasing magnetic field—largely a consequence of Landau-level degeneracy. We also present experimental evidence of a B-field induced metastable multiferroic state in an anomalous Y-type hexaferrite, along with time-resolved measurements of the spin reorientation transition in this material induced by laser heating.&#13;
&#13;
Together, the methods developed represent significant steps towards enabling time-resolved studies of THz-frequency magnetic dynamics.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153046</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Modification of Iron-Sulfur Cofactors</title>
<link>https://hdl.handle.net/1721.1/153045</link>
<description>Chemical Modification of Iron-Sulfur Cofactors
Namkoong, Gil
Iron–sulfur (Fe–S) cofactors are multi-metallic clusters that are found in all living organisms. The presence of multiple metal centers enables these cofactors to perform some of the most challenging chemical transformations in the biosphere, such as activation of strong C–H bonds and reduction of N₂, many of which are pertinent to human health and global biogeochemical cycles. We hypothesize that these unique reactivities of Fe–S cofactors arise at least in part from their electronic structures, which can be described as an ensemble of magnetically coupled, locally high-spin Fe sites that can access electronic states distinct from those available to mononuclear Fe. However, there is a substantial barrier to deciphering the electronic structure of Fe–S cofactors: the complexity associated with spectroscopic analysis using ⁵⁷Fe-specific techniques (e.g., ⁵⁷Fe Mössbauer and electron-nuclear double resonance spectroscopy) due to the large number of Fe sites. This thesis reports strategies to chemically modify complex Fe–S cofactors to address this challenge and to facilitate electronic-structure studies of Fe–S enzymes. In Chapter 1, selected examples of Fe–S enzymes, their chemistry, and their remaining mechanistic questions are described, in addition to a discussion of the advantages and limitations of ⁵⁷Fe-specific techniques. In Chapter 2, we demonstrate that Fe ions in a wide range of [Fe₄S₄] clusters exchange with exogenous Fe²⁺, and exploit this reactivity to develop a facile method for incorporating ⁵⁷Fe into the SAM-binding cluster of a radical SAM methyltransferase, RlmN. In Chapter 3, we show that the two [Fe₄S₄] clusters of BtrN, a Twitch-domain-containing radical SAM enzyme, have different rates of Fe exchange, and by utilizing this difference, we demonstrate that we can selectively label either of the two clusters with ⁵⁷Fe. 
In Chapter 4, we apply a similar Fe exchange method to IspG, an [Fe₄S₄] enzyme involved in bacterial isoprenoid biosynthesis, to show that the Fe sites within the [Fe₄S₄] cluster can be further differentiated by their Fe exchange kinetics, and consequently that site-selective ⁵⁷Fe labeling can be achieved. Using the site-selectively labeled sample, we study a unique π-complex in an inhibitor-bound state of IspG to understand its electronic structure. Lastly, in Chapter 5, extending the site-selective ⁵⁷Fe labeling strategy to the iron–molybdenum cofactor of the Mo-dependent nitrogenase, we report protocols for selectively removing an Fe from the cofactor and incorporating a closed-shell metal, Zn²⁺. We anticipate the modified clusters to serve as ‘simplified’ models for interrogating the complex electronic structure of the iron–molybdenum cofactor.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153045</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Accurate and High-Throughput Kinetics Predictions via Message Passing Neural Networks</title>
<link>https://hdl.handle.net/1721.1/153044</link>
<description>Enabling Accurate and High-Throughput Kinetics Predictions via Message Passing Neural Networks
Spiekermann, Kevin A.
Quantitative estimates for kinetic properties, namely reaction barrier heights and reaction energies, are essential for developing kinetic mechanisms, predicting reaction outcomes, and optimizing chemical processes. While ab initio quantum chemistry methods can be incredibly useful for providing accurate kinetic data, their high computational cost severely limits their utility for large-scale applications. High-quality experimental data are often even rarer, and experimental approaches are less amenable to exploring the vastness of chemical space due to monetary cost, time, and safety considerations. Modern machine learning (ML) techniques offer a promising option since they can quickly provide estimates to narrow the search space for more expensive ab initio or experimental methods. Unfortunately, the paucity of reliable quantitative chemical reaction data to train such models has presented a major hindrance for these data-driven approaches.&#13;
&#13;
Here, this thesis focuses on the intersection of ML and quantum chemistry with the goal of enabling automatic high-fidelity predictions of kinetic parameters. The novel contributions can be grouped into three main categories:&#13;
&#13;
1. Generating large-scale datasets, with an emphasis on high-quality methods and reaction diversity. Although much of the presented work studies reactions in the gas phase, this thesis also contributes a large dataset calculated in many popular solvents.&#13;
&#13;
2. Training various ML models to quickly predict accurate kinetic parameters, which avoids the challenging task of finding transition state structures. Importantly, these models operate on simple input representations and hence are ideal for automated, high-throughput applications.&#13;
&#13;
3. Providing best-practice guidelines and an open-source software package to improve the status quo of ML for chemistry research.&#13;
&#13;
The contributions from this thesis, and from similar work, will be essential for modern high-throughput workflows and the future of automated predictive chemistry.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153044</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Control of Versatile High-Speed and Large-Range Atomic Force Microscopes</title>
<link>https://hdl.handle.net/1721.1/153043</link>
<description>Design and Control of Versatile High-Speed and Large-Range Atomic Force Microscopes
Xia, Fangzhou
Microscopy instruments are important in nanotechnology research for imaging of nanoscale phenomena. Among such tools is the atomic force microscope (AFM) for nanoscale imaging and surface characterization. An AFM scans a micro-cantilever over the sample surface to measure various quantities from the probe-sample interaction. With high-speed imaging, dynamic processes can be visualized to improve fundamental understanding of microscopic interactions. Scientists can use videos, in addition to images, to observe and compare experimental data with theoretical predictions, and verify models without speculating about intermediate dynamics. However, conventional AFMs have limited throughput, allowing only static imaging, and require transparent working environments.&#13;
&#13;
The contributions of this thesis remove such AFM restrictions and enable advanced visualization capabilities. Example applications include visualizing chemical reactions and biological responses in their native environments. To this end, the thesis addresses four main AFM limitations: (i) increasing the low imaging throughput to enable higher temporal resolution imaging, (ii) removing the transparency requirement for AFMs that use optical beam deflection sensing, enabling imaging in harsh opaque liquids, (iii) establishing automation algorithms to reduce the operational overhead associated with experiment setup and controller tuning, and (iv) introducing custom design modifications resulting in affordable AFMs for engineering education.&#13;
&#13;
These new capabilities are primarily enabled with the development of new subsystems. The key components include nano-positioners, cantilever probes, and control algorithms. New generation AFM nano-positioners are designed with high-speed, large-range, or low-cost characteristics for different scanning needs. Coated active cantilever probes are developed for AFM imaging in specialized opaque environments. Multiple algorithms for scanner control, automatic tuning, and image formation are investigated to improve AFM imaging performance. Additional developments to support AFM imaging include high-bandwidth driver electronics, optical systems with vision-based automation, and software implementation for AFM big data processing. Three AFM systems are integrated using these new subsystems for different applications. They include a versatile sample scan AFM for overview-and-zoom imaging in air and liquids, a multi-layer stacked scanner AFM for high-speed and large-range imaging in air, and a low-cost active probe AFM for engineering education. AFM images and videos at 20 frames per second are taken in various environments to verify the new capabilities. These developments have broader impacts in the fields of precision instrumentation, nano-fabrication, and nano-scale process video-rate visualization.
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153043</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Underpinnings of Efficient Chemical-to-Electrical Energy Interconversion in Bipolar Membranes</title>
<link>https://hdl.handle.net/1721.1/153041</link>
<description>Mechanistic Underpinnings of Efficient Chemical-to-Electrical Energy Interconversion in Bipolar Membranes
Toh, Wei Lun
The efficient operation of all electrochemical devices requires an understanding of how electrical and chemical energy can be freely interconverted with minimal losses. One way in which chemical energy can be stored is in the form of ionic gradients. Bipolar membranes (BPMs) are an emerging technology with the unique ability to convert the chemical potential gradient within a pH gradient into an electric potential gradient in the form of a membrane voltage (forward bias) and vice versa (reverse bias). This unprecedented functionality has been utilized to design and construct a suite of electrochemical devices such as water and CO₂ electrolyzers, fuel cells, redox flow batteries, and electrodialyzers that operate with a pH gradient. This in turn has allowed the independent optimization of cathode and anode catalysts, and unlocked new operational modalities such as in operando neutralization for product recovery in CO₂ electrolysis or the extension of the operating voltage window for redox flow batteries, enabling higher energy and power densities. However, despite their utility, the fundamental processes that control the efficiency of chemical-to-electrical energy interconversion in BPMs remain poorly understood, both under open-circuit conditions and under polarization. Hence, the overarching goal of the work in this thesis is to identify these processes and unravel their mechanistic origins in order to develop remediation strategies.&#13;
&#13;
In Chapter 2, we describe how bipolar pairing interactions can arise from the intimate association of fixed charges at the bipolar junction when one of the phases constituting the junction is mobile, leading to attenuation of the membrane voltage and severe overpotential penalties in reverse bias. We find that the use of layered materials as interfacial additives can inhibit these detrimental interactions and significantly improve electrochemical performance.&#13;
 &#13;
In Chapter 3, we reveal that BPMs are subject to a process whereby the crossover of coions is coupled to parasitic neutralization at the bipolar interface, which we term neutralization short-circuiting. We find that for weak electrolyte-containing BPMs, this process buffers the bipolar interface due to the production of the conjugate acid/base, levelling the membrane voltage to a value dictated by the proton affinity of the conjugate acid/base. In addition, this neutralization product can undergo dissociation under reverse bias, reducing the Faradaic efficiency for the water dissociation reaction.&#13;
&#13;
Finally, in Chapter 4, we discover that, for BPM cells containing mixtures of strong and weak electrolytes, the weak electrolyte can impose a significant neutralization overpotential on the strong electrolyte in voltage regions where the former is unreactive. This occurs as the weak electrolyte imposes an ionic blockade by competing for the same fixed-charge sites in the membrane as the strong electrolyte, inhibiting the transport of the latter. We report on the use of advecting polyelectrolytes and thinner ion exchange membranes as materials strategies for overcoming this transport inhibition, and explain its implications for CO₂ electrolyzers and galvanic cells utilizing forward bias BPMs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153041</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the physiological function of Alzheimer’s disease risk gene Abca7 in the central nervous system</title>
<link>https://hdl.handle.net/1721.1/153040</link>
<description>Investigating the physiological function of Alzheimer’s disease risk gene Abca7 in the central nervous system
Beja-Glasser, Victoria
Alzheimer’s disease (AD) is the most prevalent neurodegenerative disorder, and there is currently no cure or preventative treatment. Therefore, understanding what makes certain people more susceptible to AD is critical to developing effective therapeutic strategies. Large-scale genome-wide studies have identified several risk genes for AD, opening a new era of mechanistic studies of AD pathogenesis. One key candidate that increases the risk of AD, especially in African American and Non-White Hispanic populations, is ATP-binding cassette transporter A7 (Abca7). Early in the disease pathogenesis, AD patients who harbor mutations in Abca7 have been shown to have very robust β-amyloid deposition and exhibit impairments in multiple cognitive domains; yet, how Abca7 mutations contribute to increased AD risk remains elusive because its normal role in the central nervous system (CNS) is poorly understood. As this is a widely unexplored research area, the goal of this thesis is to explore the normal function of Abca7 in the intact adult brain, which may provide crucial knowledge to understand its role in AD. I developed several mouse lines for studying Abca7 in vivo (Chapter 1), which will become available to the scientific community: an Abca7-HA reporter, a null Abca7-knockout (KO), and a floxed Abca7 mouse strain. Extending this work using my newly-generated Abca7-KO line, I present the first lines of evidence for (1) the spatiotemporal regulation and subcellular localization of the Abca7 protein in vivo and (2) the identification of genes and gene pathways in which Abca7 is involved (Chapters 2 and 3). Abca7 is ubiquitously expressed throughout neurodevelopment and into adulthood, and localizes to membranes, including synaptic membranes, in the intact postnatal brain (Chapter 2). Most notably, Abca7 is important for GABAergic interneuron cellular pathways in the thalamus. 
Loss of Abca7 reduces the expression of several genes related to GABA synthesis and the levels of GABAergic synaptic vesicle proteins, and produces aberrant anxiety-related behaviors in young and aged mice (Chapter 3). As the closest homolog of Abca7 is Abca1, a key cholesterol and lipid transporter in the body, I also investigated the potential role of Abca7 in neuro-lipid processing in Chapter 4. I did not identify differentially abundant lipid species in the presence versus absence of Abca7 in either young or aged brains. Taken together, this thesis work is the first to provide the groundwork for understanding the role of Abca7 in the normal brain, which will help guide future studies of its link to increased AD risk and pathogenesis.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153040</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging the Photochemistry of N-Nitrosamines for Their Aqueous Detection, and Development of a Novel Hemi-Iptycene for Porous Polymers</title>
<link>https://hdl.handle.net/1721.1/153038</link>
<description>Leveraging the Photochemistry of N-Nitrosamines for Their Aqueous Detection, and Development of a Novel Hemi-Iptycene for Porous Polymers
Beard, Jessica C.
N-Nitrosamines are a class of compounds known for both the potent carcinogenicity of many of its members, and for their widespread occurrence in the human environment. Consequently, the development of methods for their detection is an active area of research. Chapter 1 provides an introduction to the structure, reactivity, and synthetic applications of N-nitrosamines with an emphasis on alkyl N-nitrosamines. The role of N-nitrosamines as water contaminants and the methods for their detection are also discussed.&#13;
In Chapter 2, two synthetic routes are used to synthesize several 9,9-disubstituted-9,10-dihydroacridines (DHAs). In one route, the final synthetic step consists of acid-promoted cyclization of a benzyl tertiary alcohol intermediate; for some substrates, elimination competes with the desired cyclization and leads to reduced yields. N-substituted DHAs, however, can instead be synthesized by Grignard addition to the parent acridinium, and portions of this route were adapted to rapid microwave conditions. The synthesized DHAs were evaluated as potential nitrosamine indicators and were found unsuitable for this application.&#13;
In Chapter 3, the detection of aqueous nitrosamines via photonitrosation of a naphtholsulfonate indicator is described. The initial photoreaction yields an ortho-naphthoquinone-oxime, and the colorimetric response is enhanced by the formation of an intensely green iron(II) complex. The sensitivity and selectivity of the resulting solution-phase assay are evaluated, and efforts towards a solid-supported assay are described. Additionally, the iron complex formed in the assay and the analogous commercial dye Naphthol Green B are characterized by Mössbauer and EPR spectroscopy, and the resulting characterization indicates that both compounds are low-spin iron(II) complexes.&#13;
In Chapter 4, the synthesis of a novel hemi-iptycene diacetylene monomer, whose core structure is derived from triptycene and diphenylfulvene, is described. The hemi-iptycene quinone precursor to the diacetylene monomer is constructed by the one-pot Diels–Alder cycloaddition–dehydrochlorination of 2-chlorotriptycene quinone with diphenylfulvene, which is performed as a neat melt. Other attempted synthetic routes to the quinone are described as well. Lastly, preliminary results from the Sonogashira polymerization of the hemi-iptycene monomer with two dihalide fluorene monomers are described.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153038</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surveillance of SARS-CoV-2: From Sewage to the Clinic</title>
<link>https://hdl.handle.net/1721.1/153037</link>
<description>Surveillance of SARS-CoV-2: From Sewage to the Clinic
Xiao, Amy
Early work has shown that wastewater surveillance could detect SARS-CoV-2 before it became widespread in a population, and that viral levels in wastewater largely paralleled local increases in clinical cases of COVID-19. These studies highlight the potential of wastewater surveillance to provide early warning of emerging outbreaks – as well as the need for improved modeling and quantitation to understand the relationship between viral concentrations in wastewater and active infections in the population. The work described in this thesis seeks to address this need by exploring this relationship at different temporal and spatial scales through a combination of experimental observation and computational modeling. First, we model wastewater SARS-CoV-2 RNA concentrations from the greater Boston area in conjunction with clinical case counts reported for the state of Massachusetts. We find that wastewater SARS-CoV-2 concentrations foreshadow new clinical case counts by 4-10 days, and that wastewater signals are most likely driven by a period of high fecal viral shedding early in infection. Next, we develop quantitative metrics that combine wastewater and clinical data in order to improve the interpretability of wastewater data for making public health decisions. We apply these metrics to the first and second waves of COVID-19 in the greater Boston area and find that wastewater was a leading indicator of clinical cases in the first but not the second wave, likely due to changes in clinical testing capacity. Finally, we describe the development of a campus-wide wastewater surveillance system. We find that wastewater surveillance is often sensitive to single-case resolution, and that student cases are more reliably detected compared to contractors and vendors. We also find notable examples where we observe strongly positive signals in wastewater that are not identified by clinical testing, which poses challenges to administrative action. 
Overall, we demonstrate the strengths and limitations of wastewater surveillance in the MIT dorms compared to twice-weekly nasal swab PCR testing. In all, the work presented in this thesis demonstrates the value of integrating wastewater surveillance with clinical case data to make public health decisions and highlights avenues for future development of this rapidly expanding field.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153037</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effects of DEAD-box Helicase Activity on Biomolecular Condensate Structure and Dynamics</title>
<link>https://hdl.handle.net/1721.1/153036</link>
<description>The Effects of DEAD-box Helicase Activity on Biomolecular Condensate Structure and Dynamics
Coupe, Sebastian
Biomolecular condensates are membraneless organelles which help spatiotemporally organize the biochemistry of the cytoplasm and nucleoplasm. These bodies are composed of specific proteins and RNA which self-associate to drive condensate formation. The strengths, timescales, and valence of the biomolecular interactions which drive condensate formation also have consequences for the properties of these bodies. In the context of subcellular reaction crucibles, condensate dynamics may hold the key to understanding the regulation of their function. Enzymatic processes which control condensate properties are of particular interest to biologists attempting to understand the function of biomolecular condensates, bioengineers looking for control points in manipulating condensates, and physicists interested in active materials. In this work, we study how DEAD-box helicases dictate condensate structure and dynamics via their enzyme-dependent RNA interactions and RNA remodeling activity. First, we show how a DEAD-box helicase’s nucleotide-dependent RNA-binding dynamics can give rise to new effective microscopic structures. These changes in microstructure in turn have dramatic consequences for condensate material state. We then uncover how DEAD-box unwinding can influence RNA structures that form within biomolecular condensates. Changes in helicase activity give rise to changes in condensate composition, time-dependent structure, and time-dependent dynamics. Lastly, we demonstrate that a small molecule which interacts with the active site of LAF-1 promotes condensate aging. This suggests a new mechanism for perturbing condensate properties, which we believe involves restricting protein conformational dynamics. This work uncovers basic principles of DEAD-box-controlled condensate properties as well as more general principles of condensate and hydrogel design. 
It also establishes LAF-1 and DEAD-box helicases as an interesting avenue of active matter research, where activity affects phase behavior and emergent material properties.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153036</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Host-Bacteria-Virus Interactions in Caenorhabditis elegans</title>
<link>https://hdl.handle.net/1721.1/153035</link>
<description>Host-Bacteria-Virus Interactions in Caenorhabditis elegans
Vassallo, Brian Gregory
The factors governing transmissibility of viruses between susceptible hosts are not well understood. The discovery of Orsay virus and the experimental tractability of the C. elegans host enable the study of the determinants of virus transmission. Studies from mice and insects have demonstrated that bacteria influence viral infection and transmission. Here we used Caenorhabditis elegans and Orsay virus to screen a panel of bacteria for their impact on virus transmission rates. We identified that exposure to and ingestion of these bacteria result in divergent bacteria-specific effects on transmission rate. Specifically, we observed that the presence of Pseudomonas aeruginosa and Pseudomonas lurida reduced transmission and infection rates relative to the standard laboratory diet of Escherichia coli OP50, whereas the presence and ingestion of Ochrobactrum species enhanced infection by Orsay virus. We observed that the inhibitory effect of P. aeruginosa and P. lurida was dependent on quorum-sensing pathways and the two-component response regulator gacA. We noted that ingestion of Ochrobactrum into the intestinal lumen was important for its ability to enhance Orsay virus infection. Our data suggest there is a tripartite interaction between Orsay virus, bacteria, and the C. elegans host that can strongly modulate virus transmission in a bacterial species-specific manner.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153035</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bitesize bundles F-actin and influences actin remodeling in syncytial Drosophila embryo development</title>
<link>https://hdl.handle.net/1721.1/153034</link>
<description>Bitesize bundles F-actin and influences actin remodeling in syncytial Drosophila embryo development
Yeh, Anna R.
Actin networks undergo rearrangements that influence cell and tissue shape. Actin assembly and organization are regulated in space and time by a host of actin binding proteins. Despite the importance of these processes, their regulation is not well understood. The Drosophila Synaptotagmin-like protein, Bitesize (Btsz), organizes actin at epithelial cell apical junctions in a manner that depends on its interaction with the actin-binding protein Moesin. Here, I show that Btsz functions in actin reorganization at earlier, syncytial stages of Drosophila embryo development. Btsz was required for the formation of stable pseudo-cleavage furrows that prevented spindle collisions and nuclear fallout prior to cellularization. While previous studies focused on Btsz isoforms containing the Moesin Binding Domain (MBD), we found that isoforms lacking the MBD also function in actin remodeling. Consistent with this, we found that the C-terminal half of BtszB cooperatively bound and bundled F-actin with micromolar affinity, suggesting a direct mechanism for Synaptotagmin-like proteins regulating actin organization during animal development. This is the first work to show that a Synaptotagmin-like protein (Slp) can bind and bundle F-actin, and it provides additional context to past work on how Btsz regulates actin networks.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153034</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Bureaucracy in Sub-Saharan Africa</title>
<link>https://hdl.handle.net/1721.1/153033</link>
<description>Essays on Bureaucracy in Sub-Saharan Africa
Russell, Stuart
Bureaucracies in sub-Saharan Africa frequently depart from the Weberian ideal, but how does the politicization of these bureaucracies affect governance and the implementation of public policy? My dissertation explores this question through three essays. In the first essay, I introduce a unique data set that captures the universe of competitive public procurement contracts in Burkina Faso awarded from 2012 to 2021. I demonstrate patterns consistent with a political finance cycle in which political incumbents politicize the procurement process in order to raise financing for their electoral campaigns. I show that, in the months prior to elections, political incumbents are more likely to award procurement contracts -- and especially ones that are unusually large and indicative of favoritism. In the second essay, I document an important phenomenon that has received little scholarly attention: public sector employees who simultaneously serve as local elected officials. In particular, I show that more than 40% of mayors in Burkina Faso elected in 2016 were bureaucrats. I theorize about the causes and consequences of this phenomenon, providing descriptive evidence of these ideas when possible. In the third essay, together with Mai Hassan and Horacio Larreguy, I study public sector hiring in local bureaucracies in Kenya. When mid-level bureaucrats and local politicians share authority over hiring, we argue that each actor will hire their in-group to their most valued types of positions. We use detailed payroll data to demonstrate that both actors introduce their own unique bias into the hiring process, but the two types of bias are concentrated in different types of government jobs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153033</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Changing protein-DNA interactions promote ORC binding site exchange during replication origin licensing</title>
<link>https://hdl.handle.net/1721.1/153031</link>
<description>Changing protein-DNA interactions promote ORC binding site exchange during replication origin licensing
Zhang, Annie
DNA replication is a fundamental process that ensures the faithful duplication of genetic material in all living organisms. In eukaryotic cells, DNA replication proceeds bidirectionally from origins of replication. At the heart of this process is the eukaryotic replicative helicase, Mcm2-7, which leads the replication process by unwinding the DNA strands at the front of the replication machinery. To prepare for bidirectional replication, the hexameric Mcm2-7 helicases must be loaded onto origins of replication as head-to-head double hexamers in a process named helicase loading. Recent studies revealed that one molecule of the helicase loader, the origin-recognition complex (ORC), sequentially loads two Mcm2-7 hexamers in the precise orientation required for bidirectional replication. ORC initiates this process by binding to a high-affinity DNA binding site to load the first Mcm2-7 hexamer. Subsequently, ORC transitions to bind a secondary, inverted binding site to load the second Mcm2-7 in the opposite orientation. The precise molecular mechanism of this binding site exchange remains poorly understood.&#13;
&#13;
In my thesis, I used single-molecule Förster resonance energy transfer (sm-FRET) to investigate the mechanism of ORC site-switching during helicase loading. I monitored the changing interactions between DNA and ORC or Mcm2-7 and characterized the kinetics of DNA conformational changes. I also identified a series of molecular events that lead to a stepwise decrease in ORC stability on DNA, which facilitates ORC dissociation from its initial DNA binding site during site-switching. In addition, I characterized the first helicase-loading intermediate that slides on DNA, providing an explanation for how ORC reaches the secondary binding sites located at diverse locations relative to the first site. These findings highlight the importance of dynamic protein-DNA interactions in loading oppositely-oriented Mcm2-7 helicases for bidirectional DNA replication.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153031</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for the structure-based design of protein-binding peptides</title>
<link>https://hdl.handle.net/1721.1/153028</link>
<description>Computational methods for the structure-based design of protein-binding peptides
Swanson, Sebastian Robles
The de novo design of peptides that bind to target proteins could enable binding to specific epitopes, inhibition of natural interactions, and targeted degradation of proteins. Despite advances in protein engineering, this remains a challenging task due to the large space of peptide structures and inaccuracies in atomic energy functions. In this thesis, I introduce new computational methods for structure-based design using structural motifs from the Protein Data Bank (PDB). To sample peptide structures in the context of a target protein, I mine tertiary structural motifs (TERMs) from known structures in the PDB to identify surface-complementing fragments or “seeds”. I show that TERM-based seeds can describe known binding structures with high resolution: the vast majority of peptide binders from a non-redundant set of 486 peptide-protein complexes can be covered by seeds. Furthermore, I demonstrate that known peptide structures can be reconstructed with high accuracy from peptide-covering seeds. I develop two methods for combining seeds to sample larger peptide backbone structures. The first method combines seeds that satisfy geometric overlap criteria and the second method identifies loop fragments from the PDB to join spatially proximal seeds. To score peptide structures, I develop statistical potentials that capture distinct features of their interface structures: sequence-structure compatibility and designability. Through a series of computational benchmarks, I show that the statistical potentials can be used to identify seeds predicted to form favorable interface structures. As proof of concept, I use the methods to design peptide binders of multiple target proteins, some of which have no known peptide binder. The designs are structurally diverse and have Rosetta energies that are comparable to natural peptides. For some of the peptides, I show that AlphaFold can accurately predict the designed structure.
Altogether, this work demonstrates the potential of applying structural motifs to the design of protein-binding peptides and highlights important directions for future work.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153028</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Siderophore-Based Drug Repurposing of Platinum Anticancer Agents as Targeted Antibiotics and Investigation of Pt-Induced DNA Damage &#13;
in Gram-Negative Bacteria</title>
<link>https://hdl.handle.net/1721.1/153027</link>
<description>Siderophore-Based Drug Repurposing of Platinum Anticancer Agents as Targeted Antibiotics and Investigation of Pt-Induced DNA Damage &#13;
in Gram-Negative Bacteria
Guo, Chuchu
Bacterial infections are a growing threat to human health. This crisis has been exacerbated by the overuse of broad-spectrum antibiotics, resulting in adverse consequences such as the rapid emergence and spread of antibiotic resistance. Additionally, the use of broad-spectrum antibiotics disrupts the human microbiota, potentially leading to the onset of diseases associated with microbial dysbiosis. Consequently, there is an urgent need for innovative strategies to develop narrow-spectrum antibiotics. One promising approach involves the development of siderophore–antibiotic conjugates (SACs), which aim to achieve targeted antibacterial activity by leveraging siderophores, secondary metabolites utilized by bacteria to acquire the essential transition metal nutrient iron (Fe). This SAC approach exploits the molecular recognition between siderophores and their cognate transport machinery, hijacking siderophore-based Fe acquisition pathways for selective and efficient drug delivery as a “Trojan-horse” strategy. This thesis explores the application of a SAC strategy based on enterobactin (Ent), a native triscatecholate siderophore, to repurpose platinum (Pt) anticancer agents as targeted antibiotics against Gram-negative bacteria, involving four Ent-based conjugates incorporating Pt(IV) prodrugs (Ent–Pt(IV)).&#13;
&#13;
Chapter 1 introduces the fundamental concepts and background relevant to this thesis. It includes a comprehensive discussion of siderophores and antibiotic development using synthetic SACs. This chapter also highlights previous studies conducted by the Nolan laboratory, specifically focusing on the utilization of Ent for SAC development and its broader applications. Furthermore, it explores the role of Pt anticancer agents within the context of this research.&#13;
&#13;
Chapter 2 focuses on repurposing cisplatin, the first FDA-approved Pt anticancer drug, as a targeted antibiotic with enhanced potency. This study demonstrates that through the conjugation of Ent onto the axial ligand of a cisplatin-based Pt(IV) prodrug, the resulting conjugates selectively inhibit growth and induce filamentous morphology in E. coli via the Ent uptake machinery. Ent conjugation facilitates Pt accumulation in E. coli cells while reducing Pt uptake in the human cell line HEK293T. This proof-of-concept study represents the first example of using the SAC approach to repurpose Pt compounds as antibiotics.&#13;
&#13;
Chapter 3 expands the scope of Ent–Pt(IV) conjugates by repurposing oxaliplatin, the third-generation Pt anticancer drug, as a targeted antibiotic against Gram-negative bacteria that express the Ent uptake machinery. This study further investigates the DNA damage induced by Ent–Pt(IV) conjugates based on cisplatin and oxaliplatin. These findings establish a correlation between the antibacterial activities and the levels of DNA damage caused by Pt complexes. Collectively, this study generalizes the Ent-based drug repurposing strategy and provides insights into the cellular consequences of Ent–Pt(IV) conjugates in bacterial cells.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153027</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inheritance, innovation, and adaptation of immune systems: Regulation of the Innate Immunity Sensor Stimulator of Interferon Genes (STING)</title>
<link>https://hdl.handle.net/1721.1/153026</link>
<description>Inheritance, innovation, and adaptation of immune systems: Regulation of the Innate Immunity Sensor Stimulator of Interferon Genes (STING)
Liu, Bingxu
The recognition and defense against pathogen invasion represent a common focus across various living systems. Recent advancements have shed light on the structural similarities observed in multiple innate immune proteins shared between bacteria and humans. These findings suggest the evolutionary origins of certain human innate immune genes from bacteria. Using the innate immune protein STING as an example, which exhibits conserved domains from bacteria to humans, we investigated how the activity of human STING is regulated both similarly and differently compared to its bacterial counterpart.&#13;
&#13;
Similar to bacterial STING, the oligomerization of human STING is sufficient to activate multiple downstream responses. However, unlike bacteria, the unique N-terminal transmembrane domain of human STING forms a proton channel, which plays a role in inducing non-canonical autophagy and inflammasome activation. Additionally, we demonstrated that mammalian cells employ DNAJC13 to prevent the accumulation of aggregated STING following its activation. This mechanism serves to avoid excessive activation of STING and ensure an appropriate response in mammalian cells. Our work, guided by evolutionary information, provides compelling experimental evidence for novel functions of STING and unveils new methods of regulating STING activity. These insights are valuable for guiding STING-based therapies. Moreover, the innovative and adaptive nature of STING in mammalian cells offers new inspiration for engineering proteins derived from different hosts for diverse purposes.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153026</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional genomic and transcriptomic tools for spatial and dynamic phenotypes</title>
<link>https://hdl.handle.net/1721.1/153025</link>
<description>Functional genomic and transcriptomic tools for spatial and dynamic phenotypes
Le, Hong Anh Anna
Biology is driven by complex cellular processes that require precise regulation in time and in space. However, the genetic and molecular factors underlying these behaviors are difficult to study in their native contexts and, as a result, are often not well understood. Although next-generation sequencing and image-based methods have enabled high-throughput profiling of cell states, there is still a need for technologies that systematically probe and measure complex behaviors, including cell non-autonomous and dynamic phenotypes.&#13;
&#13;
In this thesis, we present the development of functional genomic and synthetic biology tools to address this challenge. We first applied optical pooled screening to quantify cell-cell interactions in mixed cultures with primary neurons and reveal functional interaction partners of synaptogenic cell adhesion molecules. Using these screens, we identified differential modulators of excitatory and inhibitory synapse formation, implicating diverse cellular pathways in this process. To increase the throughput of these optical pooled screens, we also built a fluidics platform for automated in situ sequencing. Finally, we leveraged retroviral polyproteins to package cellular RNAs for non-destructive measurements, enabling longitudinal recording of transcriptional states in living cells. Together, this work establishes scalable tools to measure and understand spatial and dynamic cellular phenotypes.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153025</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Transposon-Encoded Protein TnpB is an RNA-Guided Nuclease</title>
<link>https://hdl.handle.net/1721.1/153024</link>
<description>The Transposon-Encoded Protein TnpB is an RNA-Guided Nuclease
Nety, Suchita P.
Programmable RNA-guided systems, such as CRISPR-Cas adaptive immune systems, perform a wide variety of biological functions and have been harnessed for important therapeutic and diagnostic biotechnologies. One CRISPR effector protein, the Cas12 nuclease, is hypothesized to have its evolutionary origins in TnpB, an abundant transposon-encoded protein that contains a predicted nuclease domain. The biochemical properties of TnpB are unknown, but may provide clues to the evolutionary origins of RNA-guided activity.&#13;
&#13;
In this thesis, we investigate the biochemical activity of TnpB, revealing that this protein uses a noncoding RNA as a guide to cleave DNA substrates in a targeted manner. We investigate the biogenesis of the guide RNA, finding that TnpB can process its own mRNA to yield the guide RNA. In the process of studying guide RNA biogenesis, we also identify a potential cis-regulatory mechanism whereby part of the TnpB mRNA downregulates DNA cleavage activity. Next, we study the biochemical diversity of this protein across 59 diverse TnpB orthologs, and identify conserved features of TnpB function. Finally, we evaluate the prospects for TnpB as a human genome editing tool and highlight structural features that govern activity and specificity in human cells. &#13;
&#13;
Ultimately, TnpB is emblematic of a new class of programmable RNA-guided systems, the Obligate Mobile Element Guided Activity (OMEGA) family. This work serves as a window into the diversity of RNA-guided functions in nature and provides a rich source for future biotechnological development.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153024</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical reinforcement and property tuning of adhesive elastomers with polymer-grafted inorganic nanoparticles</title>
<link>https://hdl.handle.net/1721.1/153023</link>
<description>Mechanical reinforcement and property tuning of adhesive elastomers with polymer-grafted inorganic nanoparticles
Desroches, Griffen James
Pressure-sensitive adhesives (PSAs) are a ubiquitous class of soft elastomer adhesives capable of instantaneous bonding to a substrate under light pressure. However, their viscoelastic nature renders them vulnerable to mechanical destruction, degradation, and creep, limiting their application window. Established methods for strengthening the PSA film have relied on stiffening the adhesive component, either by increasing network density via crosslinking bonds or addition of immobilizing filler particles. The result is a decrease in adhesive power, further limiting their utility as adhesive products, and represents a fundamental tradeoff between network bonding strength and wettability. New strategies to increase the effective strength of existing bonding interactions without increasing their number would be of significant interest both for fundamental studies into adhesive nanoscale structure and for applications-driven rational design of materials. In this thesis, we will address this fundamental challenge using multivalent polymer-grafted nanoparticles (PGNPs) to manipulate the nanoscale structure of the PSA such that strain-resisting interactions can be decoupled from flow properties at the bulk scale. A conventional solvent-borne PSA with crosslinking residues was first investigated to understand how multivalent PGNP centers might amplify the effects of crosslinking. Subsequently, a waterborne PSA/water-soluble PGNP methodology was demonstrated to investigate how PGNPs might be used to bridge existing voids in the gel structure of a non-crosslinked adhesive. Lastly, a 3D-printable, solvent-free photocured elastomer resin was prepared with PGNP filler to show how nanocomposite adhesive materials might be processed into functional components and objects beyond simple PSA films.
The effects of various structural and compositional parameters of both PSAs and PGNPs on the final mechanical properties of the film are also discussed at length to demonstrate how a design-of-materials strategy can be applied to these nanocomposite systems to prepare PSA materials with designer properties.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153023</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials At Extremes: Shock-induced Fracture and Phase Transitions</title>
<link>https://hdl.handle.net/1721.1/153022</link>
<description>Materials At Extremes: Shock-induced Fracture and Phase Transitions
Lem, Jet
Owing to the high pressures and strain rates associated with shock waves, their study is of great interest to a host of different fields including, but not limited to, planetary sciences, medical applications, sports science, aerospace engineering, chemistry, and materials science. Most broadly, the study of shock waves can be divided into two categories: the destructive and the constructive. On the destructive side, researchers are investigating the deleterious effects of shock waves in primary traumatic brain injury, earthquakes, and material fracture. On the constructive side, advances in shock wave generation have allowed researchers to leverage the high pressures and high strain rates associated with shock waves in medical practices, such as shock wave lithotripsy used to break up kidney stones, in the high throughput testing of novel materials, and in the generation of exotic states of matter.&#13;
&#13;
Expanding on previous experimental developments, the work presented herein employs laser-induced converging shock waves for the efficient generation of shock waves at the microscale. This technique allows one to conduct hundreds of experiments in a day on small sample volumes, all with a conventional tabletop pulsed laser system. Converging ring shocks were used to drive the insulator-to-metal phase transition in the prototypical Mott-Hubbard system, vanadium sesquioxide V2O3. The phase transition and mechanism were interrogated with Raman spectroscopy and optical photothermal IR microscopy of recovered samples, as well as in situ time-resolved optical reflectivity measurements during shock propagation. Herein we present, to the best of our knowledge, the first permanent pressure-induced insulator-to-metal phase transition in V2O3.&#13;
&#13;
The fracture response of brittle and ductile materials was studied with converging laser-induced surface acoustic waves (SAWs) and high-speed imaging. Borosilicate glass samples subject to converging SAWs demonstrated anomalous fracture-toughness enhancement above a shock pressure threshold. Raman spectra of recovered samples revealed significant structural rearrangements in the amorphous structure of the glass. We hypothesize that these rearrangements allow for nanoscale ductility that provides a mechanism for energy relaxation in shocked silica glasses above a given shock pressure threshold. Additionally, preliminary results regarding SAW-induced melting in metal samples are presented.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153022</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Discovery and Chemical Synthesis of Peptides and Proteins that Cross Biological Barriers</title>
<link>https://hdl.handle.net/1721.1/153021</link>
<description>The Discovery and Chemical Synthesis of Peptides and Proteins that Cross Biological Barriers
Farquhar, Charlotte E.
The precise delivery of therapeutic agents to specific tissues or cells remains a complex task. Cell-penetrating peptides (CPPs) and proteins are used as carrier modules, tissue targeting agents, and probes to better understand disease states. In this work, we first investigate CPPs for peptide-mediated delivery of macromolecules. Despite the existence of thousands of CPP sequences, none have been approved by regulatory authorities for macromolecule delivery. Here, we demonstrate a method for in-cell penetration selection-mass spectrometry (in-cell PS-MS) to discover novel peptides from a synthetic library. These all-D, non-canonical peptides can deliver macromolecular cargo to the cytosol with high peptide stability and low toxicity. In-cell PS-MS introduces a method to discover unnatural synthetic peptides to deliver therapeutically relevant cargo into subcellular compartments.&#13;
&#13;
The chromatographic enrichment of a peptide library through cation-exchange chromatography also generates novel, non-canonical peptide sequences for delivery of macromolecular cargo. We developed a method based on testing fractionated or pooled peptide libraries for the discovery of novel peptides with low toxicity, which can be modified to produce novel CPPs for oligonucleotide delivery with efficacy at nanomolar concentrations.&#13;
&#13;
Cell penetrating peptides have also previously shown efficacy in crossing the blood-brain barrier (BBB), allowing a small, soluble peptide (BTP-7) to cross the BBB and target human glioblastoma (GBM). We demonstrate that conjugation of BTP-7 to camptothecin improves drug solubility in aqueous solutions, retains drug efficacy against patient-derived GBM stem cells, enhances BBB permeability, and enables therapeutic targeting to intracranial GBM, leading to higher toxicity in GBM cells compared to normal brain tissues, and ultimately prolongs survival in mice bearing intracranial xenografts of patient-derived GBM.&#13;
 &#13;
Dipeptide repeat proteins (DPRs) are characteristic of amyotrophic lateral sclerosis and frontotemporal dementia. Synthesis of these proteins by biological or chemical means has previously proved difficult, due to their toxicity and propensity towards aggregation. Herein we report the chemical synthesis of four DPRs using automated fast-flow peptide synthesis. Structural assays demonstrate in vitro aggregation, especially in proline-rich DPRs. Human neuroblastoma cells cultured with arginine-rich DPRs with longer repeat lengths show reduced cell viability, thereby reproducing the cytotoxic property of endogenous DPRs. This research demonstrates the potential of automated flow synthesis for synthesizing difficult-to-access proteins and gives us new insight into the behavior of arginine-rich DPRs at the cell membrane. Collectively, this work demonstrates successful approaches to address the cellular delivery of therapeutic modalities for the future development of novel therapeutics.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153021</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Promotion of Heterogeneous Acid and Base Catalysts for Biomass Upgrading</title>
<link>https://hdl.handle.net/1721.1/153020</link>
<description>Promotion of Heterogeneous Acid and Base Catalysts for Biomass Upgrading
Khechfe, Alexander A.
As the chemical industry decarbonizes and shifts away from petroleum-based feedstocks, renewable lignocellulosic biomass has received interest as an alternative carbon source for chemicals production. Homogeneous acids and bases have been shown to be active for a wide variety of chemical reactions useful in converting bio-based substrates to existing and novel chemicals; however, the costly separations associated with homogeneous catalysts have necessitated the study of solids with similar properties to fulfill the same role. Indeed, many natural or synthetic solid materials contain acidic or basic active sites that can catalyze a variety of organic transformations. In this work, we studied how acid-catalyzed reaction rates can be enhanced by modifying reaction and catalyst conditions beyond the active site itself. We have also leveraged the acid-base pairs on alkaline earth oxides to generate novel monomers for chemically recyclable plastics.&#13;
	&#13;
First, we show how applied electrochemical potentials can be used to enhance Brønsted acid-catalyzed dehydration reactions over a molybdenum oxide film on an yttria-stabilized zirconia pellet. Brønsted acid-catalyzed dehydrations of isopropanol and 2-butanol were found to increase by 2.5× and 1.3×, respectively, upon a +1.5 V polarization of the molybdenum oxide film. We hypothesize that this promotion originates from generation of Brønsted acid sites localized to the three-phase boundary at the catalyst/gas/electrolyte interface and/or acid site strengthening due to electrical polarization. This work demonstrates an alternative handle to promote catalytic turnover, which with further understanding, could be applied toward other Brønsted acid-catalyzed chemistries. &#13;
	&#13;
Next, we sought to understand how solvent and framework polarity affect Lewis-acid catalyzed aldol addition reactions in microporous Hf-BEA zeolites using the self-aldol addition of ethyl pyruvate (EP) as a probe reaction. Initial rates were measured using toluene and acetonitrile as solvents and hydrophobic Hf-BEA-F and hydrophilic Hf-BEA-OH catalysts. Apparent first-order rate constants span two orders of magnitude across the four systems; at 363 K, the highest rates were observed over hydrophobic Hf-BEA-F in toluene, while the lowest rates were observed over hydrophilic Hf-BEA-OH in acetonitrile. Despite the substantial rate constant variation across the four systems, apparent enthalpies for Hf-BEA-F in both solvents and Hf-BEA-OH in acetonitrile were within error of one another (∼70 kJ/mol). Reactions performed using Hf-BEA-OH with toluene featured a higher apparent enthalpic barrier of 83.8 kJ/mol. The differences between the systems are attributed to hydrogen-bonding interactions between the EP molecules and polar silanol nests during catalysis in toluene using Hf-BEA-OH, which hinder EP adsorption to the active site in the hydrophilic framework. These findings show that aldol addition kinetics are not significantly modified by solvent polarity in hydrophobic frameworks beyond site-blocking effects; however, silanol nests in hydrophilic frameworks significantly alter substrate adsorption to the active site.&#13;
	&#13;
Finally, acid-base pairs on supported alkaline earth oxide catalysts were used for the first-reported continuous catalytic synthesis of the α-methylene-δ-valerolactone (MVL), the monomer of the chemically recyclable acrylic plastic poly-MVL, via a gas-phase aldol condensation of δ-valerolactone (DVL) and formaldehyde (FA). In particular, CaO supported on SiO₂ catalyzed the reaction with &gt;95% selectivity to MVL at conversions of ~30% (613 K, 0.4 mol% DVL, 1.2 mol% FA, 5 wt% CaO/SiO₂ catalyst). Selectivity to MVL remained &gt;75% even at DVL conversion between 50-75%. This work shows the first continuous and scalable synthesis of the MVL monomer, which represents an important step toward the large-scale production of this chemically recyclable plastic.&#13;
	&#13;
These results show that reactivity over solid acid and base catalysts can be tuned beyond modification of the catalyst active site alone. Rigorous analysis of these systems can allow for the development of new biomass upgrading schemes with higher reactivities that can eventually lay the groundwork for a more sustainable chemical industry.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153020</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reversible Germline Dynamics of Caenorhabditis elegans During Exposure to Pathogenic Bacteria</title>
<link>https://hdl.handle.net/1721.1/153019</link>
<description>Reversible Germline Dynamics of Caenorhabditis elegans During Exposure to Pathogenic Bacteria
Bollen, Daniel Paul
Little is known of how pathogen infection affects reproductive fitness and fecundity in metazoa. Caenorhabditis elegans is a free-living nematode and well-characterized animal model that encounters pathogenic microbes in its natural environment. How these pathogens affect the reproductive capacity of C. elegans is not well understood.  In this thesis, I examine the effect that exposure to the Gram-negative pathogenic bacteria Pseudomonas aeruginosa has on the reproductive system of C. elegans hermaphrodites.  In Chapter One, I discuss the literature surrounding host-pathogen interactions as well as stress responses as they relate to the C. elegans germline.&#13;
&#13;
In Chapter Two, I identify several processes that arise in the C. elegans gonad upon exposure to pathogenic Pseudomonas aeruginosa.  I show there is a marked reduction in brood size with a concomitant reduction in gonad size and the number of nuclei in the germline. I define two induced processes that contribute to the decrease in the number of germ cell nuclei.  First, I observe that infection with P. aeruginosa leads to the induction of programmed germ cell death. Second, I observe that this exposure induces mitotic quiescence in the proliferative zone of the C. elegans gonad. Importantly, these processes appear to be reversible; when animals are removed from the presence of P. aeruginosa, germ cell death abates, germ cell nuclei numbers increase, and brood sizes recover.&#13;
&#13;
In Chapter Three, I discuss the possible implications of these findings and their potential evolutionary value to the host, as well as avenues for further study.  The reversible germline effects during exposure to P. aeruginosa may represent an adaptive response to improve survival of progeny and may serve to facilitate resource allocation that promotes survival during pathogen infection.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153019</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regularity of the Level Set Equation for Mean Curvature Flow with an Axis of Symmetry</title>
<link>https://hdl.handle.net/1721.1/152962</link>
<description>Regularity of the Level Set Equation for Mean Curvature Flow with an Axis of Symmetry
Hance, Jackson R.
In this thesis we study the regularity of viscosity solutions to the level set equation for mean curvature flow. We describe a set of hypotheses under which we can prove that the level sets of these solutions are C¹,¹ submanifolds of spacetime with well-understood behavior near singular times. We then relate the derivatives of the solution of the level set flow to the solutions of certain evolution equations along fixed level sets. Finally, we carry out this program to show that certain solutions with an axis of symmetry are in fact classical solutions of the level set problem.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152962</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Thick Battery Electrodes and Interdigitated Cell Architectures Via Micro-structured Carbon Nanotube Forests</title>
<link>https://hdl.handle.net/1721.1/152961</link>
<description>Towards Thick Battery Electrodes and Interdigitated Cell Architectures Via Micro-structured Carbon Nanotube Forests
Church, Richard Bertram
The growing demand for electric vehicles and portable electronics has created a significant interest in the scalability, recyclability, and economics of both traditional and emerging battery technologies. Although lithium-ion batteries (LiB) are approaching their theoretical energy density, they will remain a widespread and promising technology as alternative (e.g., Li-metal) chemistries and solid-state architectures are still in relatively early stages of commercial scale-up. In the meantime, LiB performance can be improved by redesigning the cell geometry to incorporate thick 3D electrodes. Using thick electrodes increases the cell-level energy density by minimizing the volume and mass contributions of inactive components, including the current collectors and separator. However, batteries with thick planar electrodes suffer from capacity limitations due to increased mechanical fatigue, Li-ion diffusion distances, and tortuosity. 3D electrode designs compensate for this weakness by providing micro-scale channels within the electrode to enable rapid charge transport and accommodate active material expansion. To meet these criteria, the materials used in 3D electrodes must be mechanically robust, electrically conductive, and processable in a manner enabling precise control over geometry and porosity.&#13;
&#13;
In this thesis we first develop thick 3D “honeycomb” battery electrodes using patterned, vertically aligned carbon nanotubes (VA-CNTs) on metal foils as current collectors. We translate insights from CNT growth on silicon wafer substrates to grow CNT forests over 250 μm tall on thin metal foils (Cu) that are suitable for electrode fabrication. Thick electrodes are then created by coating the CNT forests with Si thin films via low-pressure chemical vapor deposition. Half-cells using monolithic and honeycomb-patterned Si-CNT electrodes were cycled over a range of current densities, demonstrating the electronic connection between the deposited Si and Cu foil via the aligned CNTs. The honeycomb electrodes exhibit large gravimetric (~1750 mAh/gSi) and areal (~20 mAh/cm²) capacities and show reduced capacity fading compared to non-patterned electrodes.&#13;
&#13;
Next, the Si-CNT composites are investigated as a template for a 3D full cell design. Geometrically, compared to a planar electrode of a given energy density, the decreased diffusion distance of a 3D cell results in an improvement in power density, thereby decoupling the inherent tradeoff between energy density and power density experienced by planar cells. The difficulty of producing 3D full cells comes from the need to produce high-conformality electrolyte films that are pinhole-free and demonstrate sufficient ionic conductivity. To address this issue, we utilize an initiated chemical vapor deposition (iCVD) process to deposit conformal poly(hydroxyethyl methacrylate-co-ethylene glycol diacrylate) thin films onto the patterned Si-CNT composites. Doping these copolymer films with lithium salts results in ionic conductivities on the order of ~10⁻⁵ S/cm, which is among the highest conductivities exhibited by conformal electrolyte technologies. To complete a full battery cell, a slurry-based cathode is infiltrated into the iCVD-coated Si-CNT composite electrode. These cells are then soaked in a liquid electrolyte and cycled to demonstrate the first-time use of an iCVD polymer electrolyte in a full cell and a proof-of-concept CNT-based 3D full cell. Lastly, a 2D finite element simulation is presented to predict the theoretical energy and power densities of idealized interdigitated CNT-based full cells.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152961</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Policy Impacts on Air Quality and Climate at the National, Regional and Global Scale</title>
<link>https://hdl.handle.net/1721.1/152960</link>
<description>Energy Policy Impacts on Air Quality and Climate at the National, Regional and Global Scale
Freese, Lyssa M.
The ability of future society to adapt to and mitigate climate change in a just way requires an understanding of the local impacts of policy, as well as the uncertainties in the human and earth systems. This thesis brings together air quality and climate change, quantifying the impacts of relevant policies, developing idealized metrics for quantifying climate phenomena, and improving our capacity to model future emissions scenarios. &#13;
&#13;
The first chapter explores the impacts that shutting down nuclear power would have on the distribution of air quality in the United States, as well as longer-term climate impacts. This is done using an energy grid model (US-EGO) paired with a chemical transport model (CTM), GEOS-Chem, which allows for calculation of the complex chemical responses to changes in emissions of nitrogen oxides (NOₓ) and sulfur dioxide (SO₂), and subsequent health impacts. This work shows that shutting down nuclear power without adequate clean energy ramp-up leads to increased pollution, climate impacts, and subsequent premature mortalities. Even with the renewable capacity expected in the United States by 2030, there is still a net increase in premature mortalities due to air pollution. This poses an issue for a just energy transition, as Black and African American communities in the United States are already disproportionately exposed to pollution from fossil fuel energy sources, so anything that leads to additional emissions from these sources will disproportionately harm these communities.&#13;
&#13;
In chapter two, I build a new approach to modeling the impacts of tropospheric chemical species, with the aim of allowing us to examine large ensembles of social scenarios without the high computational costs of CTMs. Using Green's functions (a form of impulse response function) to represent the transport of Black Carbon (BC) in Southeast Asia (SEA), I define a wide range of policy trajectories to evaluate the implications of early coal retirements in the region. This approach improves our ability to evaluate multiple criteria, such as climate, air quality, energy capacity, and economic impacts. &#13;
&#13;
For the third chapter, I establish a simplified approach for quantifying the impact of CO₂ on temperature in the Antarctic. This work shows that in spite of a negative greenhouse effect at the top of the atmosphere, the Antarctic still warms in response to increased greenhouse gas (GHG) concentrations. This is done through the creation of a single column model, and I propose the use of the surface greenhouse effect rather than that of the top of the atmosphere to estimate the Antarctic temperature response to CO₂.&#13;
&#13;
The fourth chapter uses the approach of the second chapter and concepts of the third chapter to develop an emulator of the spatial temperature response to emissions of CO₂. Current approaches to modeling the temperature response to an emissions scenario are either computationally expensive or low resolution (often just providing the global mean value). In order to better quantify the local impact of varying climate policies, we develop a Green's Function emulator for the temperature response to CO₂ based on CMIP6 model data, thus maintaining the high resolution and underlying complexity of the model while reducing run time by a factor of 10,000,000. This approach can be used to investigate various policies and the time dependence of local temperature change along an emissions trajectory.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152960</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gravitational form factors of hadrons from lattice QCD</title>
<link>https://hdl.handle.net/1721.1/152959</link>
<description>Gravitational form factors of hadrons from lattice QCD
Pefkou, Dimitra Anastasia
The gravitational form factors (GFFs) of hadrons encode the matrix elements of the energy-momentum tensor of QCD. These quantities describe how energy, spin, and various mechanical properties of hadrons are carried by their quark and gluon constituents, and can be constrained using the framework of lattice quantum chromodynamics (QCD). In this thesis, we explore the gravitational structure of hadrons through two main results. The first result [1] is an extraction of the gluon GFFs of the pion, nucleon, &#120588; meson, and Δ baryon as functions of the squared momentum transfer &#119905; in the region 0 ≤ −&#119905; &lt; 2 GeV², as determined in a lattice QCD study at pion mass &#119898;_&#120587; = 450 MeV. By fitting the extracted GFFs, we determine various gluon contributions to the energy, pressure, and shear force distributions of the hadrons. We also obtain estimates for the corresponding gluon mechanical and mass radii, as well as the forward-limit gluon contributions to the momentum fraction and angular momentum of the four hadrons. Our results for the gluon GFFs of the proton were found to be in agreement with the first phenomenological extraction [2]. The second result [3] is a decomposition of the GFFs of the pion between the different quark flavor and gluon contributions, in the kinematic region 0 ≤ −&#119905; &lt; 2 GeV², on a lattice ensemble with quark masses yielding pion mass &#119898;_&#120587; = 170 MeV. From these results, we obtain the renormalization scheme- and scale-independent total GFFs of the pion, which are in agreement with the momentum fraction sum rule and the next-to-leading-order chiral perturbation theory prediction for its &#119863;-term, a fundamental hadron charge related to the internal forces.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152959</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studying Electronic Textures with Coherent Lensless Imaging</title>
<link>https://hdl.handle.net/1721.1/152958</link>
<description>Studying Electronic Textures with Coherent Lensless Imaging
Levitan, Abraham Lewis
X-ray microscopes have opened our collective eyes to the richness of nanoscale texture in systems such as correlated and quantum materials. These microscopes draw their power from the combination of short wavelengths, which provide high resolution, and interaction with atomic resonances, which makes them sensitive to subtle changes in electronic structure. However, x-ray microscopy remains an area where the main limits are technological rather than fundamental. Therefore, major progress is still possible with methodological improvements.&#13;
&#13;
In the past twenty years, research into the use of coherent x-ray light to improve the quality and resolution of x-ray microscopes has exploded. In many cases, using coherent light makes it possible to remove the objective lens from a microscope, replacing it with an algorithmic analysis of the direct scattering data. This can increase the quality and resolution of the resulting quantitative images.&#13;
&#13;
In this thesis, I present the results from a collection of projects aimed at using coherent imaging methods to study the real-space texture of electronic phases of matter with soft x-ray light. I first discuss the methods we developed and implemented to counteract the experimental errors that we found to be ubiquitous in our data, focusing on ptychography, the most commonly used lensless imaging method. Then, I turn to the development of an entirely new single-shot lensless imaging method, randomized probe imaging (RPI).&#13;
&#13;
RPI has proven to be reliable and robust across a broad range of scenarios. The remainder of the thesis is devoted to applications of RPI at a free electron laser and a synchrotron. Also reported are further projects designed to improve the method, as well as attempts to expand our understanding of the mechanisms behind it and its limitations. I sincerely hope that the availability of RPI will help bring x-ray imaging to a broader group of scientists and lead to a better understanding of the nanoscale details of electronic texture.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152958</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and Phenomenological Investigations of the MiniBooNE Anomaly</title>
<link>https://hdl.handle.net/1721.1/152957</link>
<description>Experimental and Phenomenological Investigations of the MiniBooNE Anomaly
Kamp, Nicholas
The 4.8σ excess of electron neutrino-like events reported by the MiniBooNE experiment at Fermilab's Booster Neutrino Beam (BNB) is one of the most significant and longest-standing anomalies in particle physics. This thesis covers a range of experimental and theoretical efforts to elucidate the origin of the MiniBooNE low energy excess (LEE). We begin with the follow-up MicroBooNE experiment, which took data along the BNB from 2016 to 2021. The detailed images produced by the MicroBooNE liquid argon time projection chamber enable a suite of measurements that each test a different potential source of the MiniBooNE anomaly. This thesis specifically presents MicroBooNE's search for vₑ charged-current quasi-elastic (CCQE) interactions consistent with two-body scattering. The two-body CCQE analysis uses a novel reconstruction process, including a number of deep-learning based algorithms, to isolate a sample of vₑ CCQE interaction candidates with 75% purity. The analysis rules out an entirely vₑ-based explanation of the MiniBooNE excess at the 2.4σ confidence level. We next perform a combined fit of MicroBooNE and MiniBooNE data to the popular 3+1 model; even after the MicroBooNE results, allowed regions in [formula] parameter space exist at the 3σ confidence level. This thesis also demonstrates that, due to nuclear effects in the low-energy cross section behavior, the MicroBooNE data are consistent with a [notation]-based explanation of the MiniBooNE LEE at the &lt;2σ confidence level. Next, we investigate a phenomenological explanation of the MiniBooNE excess involving both an eV-scale sterile neutrino and a dipole-coupled MeV-scale heavy neutral lepton (HNL). It is shown that a 500 MeV HNL can accommodate the energy and angular distributions of the LEE at the 2σ confidence level while avoiding stringent constraints derived from MINERvA elastic scattering data.
Finally, we discuss the Coherent CAPTAIN-Mills (CCM) experiment, a 10-ton light-based liquid argon detector at Los Alamos National Laboratory. The background rejection achieved by a novel Cherenkov-based reconstruction algorithm will give CCM world-leading sensitivity to a number of beyond-the-Standard-Model physics scenarios, including dipole-coupled HNLs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152957</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atoms and Molecules Immersed in a Bose-Einstein Condensate</title>
<link>https://hdl.handle.net/1721.1/152956</link>
<description>Atoms and Molecules Immersed in a Bose-Einstein Condensate
Ni, Yiqi
In this thesis work, I demonstrate the creation of dipolar ²³Na⁴⁰K molecules in their absolute ground state directly from Bose polarons, which are dressed fermionic ⁴⁰K impurities immersed in a ²³Na Bose-Einstein condensate. In contrast to Feshbach molecules, Bose polarons at negative scattering length are long-lived in the presence of the BEC. We demonstrate direct photoassociation from Bose polarons to electronically excited molecular states, dark resonance spectroscopy of the absolute molecular ground state, and finally stimulated Raman adiabatic passage (STIRAP) of Bose polarons into the molecular ground state, all in the presence of the Na BEC. This leads to a dense molecular gas at temperatures on the order of 70 nK immersed inside a &#119879;/&#119879;_&#119888; = 0.1 BEC.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152956</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of lightweight prefabricated home foundation manufactured via industrial large scale polymer additive manufacturing</title>
<link>https://hdl.handle.net/1721.1/152955</link>
<description>Design of lightweight prefabricated home foundation manufactured via industrial large scale polymer additive manufacturing
Perez, Alfonso Alexander
The mission of this thesis is to present a novel lightweight, low-cost alternative to concrete foundations for dwelling home construction. Worldwide, UN HABITAT estimates suggest nearly 1.6 billion people lack access to adequate shelter, including 150 million homeless individuals. Home affordability is limited by expensive materials that must be transported to emerging markets, which often lack the necessary infrastructure. Traditional home foundations exhaust local resources and strain local supply chain logistics networks due to their predominantly concrete composition. Concrete is also a major strain on the natural environment, alone accounting for ~8% of global CO₂ emissions. Furthermore, concrete requires substantial water consumption, which is often infeasible in areas that do not have consistent access to alkaline-free water. Despite all these negative impacts, concrete remains common in dwelling home construction due to its load-bearing compressive strength. Without a durable foundation, a dwelling home’s longevity and safety significantly decrease.&#13;
&#13;
This thesis presents an analysis of conventional dwelling home foundation designs and the resulting consequences, including but not limited to weight, cost, and environmental impact. With an analytical understanding of traditional foundations’ strengths and weaknesses, a novel sustainable prefabricated foundation design is presented that is intended to be made entirely of recycled thermoplastic polymers, namely polyethylene terephthalate (PET). The proof-of-concept prototypes of the prefabricated foundation are designed specifically for and made using industrial large scale polymer additive manufacturing. The novel prefabricated foundation is &gt;10x lighter than a traditional concrete pad foundation and consumes ~0 kg of water during production. This foundation design overcomes the negative impacts of traditional concrete foundations while simultaneously exceeding the core dwelling home foundation functional requirements.&#13;
&#13;
Challenges in achieving this goal include: a) additively manufactured anisotropic mechanical properties, b) quasi-static mechanical performance of large scale additively manufactured recycled polymer structures, c) CAD for large scale additive manufacturing of polymers, d) structural finite element analysis of anisotropic large scale additive manufactured polymers, e) thermal distortion during manufacture, and f) design within the constraints of still-maturing large scale additive slicer algorithms. Five nearly full-scale (~8’x2’x1’) prototypes of the prefabricated additively manufactured polymer foundation were produced and subjected to force-displacement testing. These were made from ABS/20% carbon fiber, weighed ~300 lb [135 kg], and were manufactured using a CI BAAM 806 system in under five hours. Force-displacement testing using a Baldwin 300 kN hydraulic system revealed that, at the extreme design load of 5,000 lb on the center column, the displacement was less than 0.5 mm and the failure load was &gt;60,000 lb. It was shown that 1) the prefabricated foundation design is indeed manufacturable, 2) the warping, shrinkage, flash, and layer delamination defects are seemingly inconsequential, and 3) the prototypes provide a static loading safety factor on the order of 50x. This thesis demonstrates that a prefabricated volumetric modular polymer home foundation manufactured by industrial large scale polymer additive manufacturing is possible.&#13;
&#13;
In addition to the novel prefabricated foundation product, this thesis presents a 12-step iterative design process that is useful for the efficient design of structures made via industrial large scale polymer additive manufacturing. The process proved to be a cost-effective way to quickly de-risk subsequent design steps. The production rate (R), structural quality (Q), manufacturing cost (C), design flexibility (F), and process sustainability (S) of industrial large scale polymer AM were studied and presented using a physics-informed causal loop diagram.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152955</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Solid Electrolyte Interphase-Based Descriptors for Lithium Liquid Electrolyte Design</title>
<link>https://hdl.handle.net/1721.1/152954</link>
<description>Quantitative Solid Electrolyte Interphase-Based Descriptors for Lithium Liquid Electrolyte Design
Marques Hobold, Gustavo
Energy density requirements for next-generation batteries make the Li metal anode a key candidate to replace graphite in Li-ion batteries due to Li’s improved capacity (3,860 mAh/g vs. 372 mAh/g). However, Li anodes display lower Coulombic efficiency (CE, &lt;99.9%) than graphite (&gt;99.95%), resulting in faster loss of reversible Li over cycling. This shortfall derives from parasitic Li-electrolyte reactions that lead to the accumulation of inactive Li⁰ and electrolyte-derived byproducts, which result in the formation of a native solid electrolyte interphase (SEI). The SEI is an electrolyte-derived passivating layer that ideally protects Li from continued reactions with the electrolyte and regulates Li⁺ exchange between Li and the electrolyte. Despite the importance of the SEI, its properties and composition are challenging to probe experimentally due to its exceedingly low amount (sub-μmolₗᵢ/cm²/cycle at &gt;99% CE). Given these challenges, the existing understanding of the SEI was built largely on qualitative models, such that knowledge about the optimal chemical composition and functional properties of the SEI has remained remarkably incomplete. To bridge this gap, this thesis focuses on advancing experimental methodologies to precisely quantify (1) functional properties and (2) chemical composition of native SEIs, and to explore their relationships with CE.&#13;
&#13;
First, a key functional property that is regulated by the SEI is studied: the rate of Li⁺ exchange between Li and the electrolyte. Measurements of Li⁺ exchange have historically been challenging to interpret due to inconsistent measurement protocols, leading to an unclear understanding of its relationship with CE and its role in SEI passivation, an often loosely defined property. Here, distinct but self-consistent electrochemical techniques are used to precisely define and quantify Li⁺ exchange across SEIs, revealing that this functional property varies across electrolytes, increasing from low to high CE, ultimately establishing that fast Li⁺ exchange is needed for high CE. These results further revealed a previously overlooked evolution of Li⁺ exchange with cycling that is unique to high-CE electrolytes, underscoring the importance of SEI formation.&#13;
&#13;
Crucially, imparting beneficial properties to the SEI requires exacting knowledge of the chemical phases that ought to be promoted vs. suppressed at high CE. This understanding can be gained by examining high- vs. low-CE SEIs, but has thus far remained largely qualitative due to the lack of quantitative techniques for SEI characterization. Thus, in order to (2) explore SEI chemical compositions that are favored at high CE, the quantitative power of analytical instruments was leveraged to measure byproducts of SEI formation with unprecedented chemical and quantitative resolution. A custom operando GC experiment was developed to measure gas evolution in situ during Li cycling. With sub-nmol/min resolution, these measurements revealed that SEI formation reactions that release CO or CO₂, leaving behind a decarbonylated/decarboxylated byproduct in the SEI, are associated with higher CE. Chemical quantification was extended to post-mortem cells by advancing titration techniques to enable ~nAh resolution of select SEI phases (ROCO₂Li, Li₂C₂, RLi, LiF, P-containing), in addition to the inactive Li⁰. These enabled an unprecedented look into precise SEI compositional breakdown fingerprints that are unique to each electrolyte. In particular, Li₂C₂, a previously unmeasured phase on Li, showed a strong anti-correlation with CE. Titration was further expanded to include Li₂O and to perform a rigorous statistical analysis on the role of oxygenation vs. fluorination on CE—the latter strategy has been the major motif guiding electrolyte design for the past decade. It was found that Li₂O displayed the strongest positive correlation with CE, surpassing the fluorinated LiF. The critical role of Li₂O was further exploited to create a set of fluorine-free electrolytes with &gt;99% CE, revealing SEI oxygenation to be an alternative and underexplored design space for electrolyte discovery.
Altogether, these quantitative SEI-based descriptors reveal new avenues for electrolyte engineering, informed by a fundamental understanding of the “optimal” Li SEI and the associated capacity loss mechanisms at high CE.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152954</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Localization and Lensing of Fast Radio Bursts using CHIME/FRB and its VLBI Outriggers</title>
<link>https://hdl.handle.net/1721.1/152953</link>
<description>Localization and Lensing of Fast Radio Bursts using CHIME/FRB and its VLBI Outriggers
Leung, Calvin
Every two minutes, a luminous, millisecond-duration flash of radio light arrives at Earth from outside the Milky Way. The vast majority of these elusive fast radio bursts (FRBs) are never detected again. FRBs are powerful probes of dark matter and cosmological structure, and offer insights into magnetars: a rare class of neutron stars that produce the strongest magnetic fields in the Universe. However, because FRBs are so fleeting, the field is grappling with much simpler questions: How do magnetars emit FRBs? From what galaxies (and redshifts) do FRBs originate?&#13;
&#13;
Pinpointing FRBs to their host galaxies using the Canadian Hydrogen Intensity Mapping Experiment (CHIME) is perhaps the single most promising path towards uncovering the mystery of FRBs. CHIME detects about 700 FRBs per year but lacks the resolution to pinpoint its bursts. Very-long-baseline interferometry (VLBI) is a solution that uses widely separated telescopes to achieve high angular resolution, but this technique has been limited to following up the small fraction of sources that repeat. In this thesis, I develop key technologies to combine wide-field observations for FRB detection with high angular resolution for FRB localization in one instrument, including high-bandwidth digital instrumentation, a stable reference clock for CHIME, and two telescopes observing in tandem with CHIME over 3000-kilometer baselines. I wrote a VLBI correlator to analyze data from the testbeds, and used the array to successfully pinpoint a one-off FRB with sub-arcsecond precision at the time of detection. This sets the stage for CHIME Outriggers: three dedicated telescopes which will enhance CHIME’s angular resolution to sub-arcsecond scales over CHIME’s entire field of view, pushing FRB science into an era of plentiful and precise localizations.&#13;
&#13;
I also develop a new way to use FRBs as probes of sub-solar mass primordial black holes. By exploiting multi-path interference in gravitational lensing, I conducted a novel search for lensed FRBs. We find that some FRBs exhibit plasma lensing (scintillation), which we attribute to the Milky Way’s interstellar medium, and use our null search to place new constraints on extragalactic primordial black holes as dark matter.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152953</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multipole Symmetries and Many Body Physics</title>
<link>https://hdl.handle.net/1721.1/152946</link>
<description>Multipole Symmetries and Many Body Physics
Lake, Ethan
This thesis studies the many-body physics of systems with multipolar conservation laws: systems which conserve both a total charge and various multipole moments thereof. These conservation laws place constraints on the way in which particles can move, and carry important ramifications for both quantum ground states and non-equilibrium dynamics. Particular emphasis is placed on Hubbard models with dipole moment conservation, which are of special interest because they can be experimentally realized in strongly tilted optical lattices. The bosonic versions of these models are shown to possess unusual insulating Bose-Einstein condensates in their ground states, accompanied by unconventional patterns of spontaneous symmetry breaking. The fermionic versions, on the other hand, are shown to host exotic non-Fermi liquids. We also report progress on understanding the ramifications that multipole conservation has for diffusion, showing that it leads to unusually large dynamical exponents and exponentially localized disorder-free steady states.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152946</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linking Biomolecular Condensates to Disease and Therapeutic Development</title>
<link>https://hdl.handle.net/1721.1/152890</link>
<description>Linking Biomolecular Condensates to Disease and Therapeutic Development
Hawken, Susana W.
The cell is compartmentalized into membrane-bound and membraneless organelles that organize and regulate key cellular functions. Over the past decade, growing evidence supports the notion that membraneless organelles, called biomolecular condensates, compartmentalize biomolecules – proteins and nucleic acids – involved in shared cellular processes through a biophysical process called phase separation. Biomolecular condensates have distinct physicochemical properties dependent on the molecular features and interactions of constituent biomolecules. Disease-associated mutations in individual biomolecules that compose condensates can alter condensate physicochemical properties. In addition, key drug targets have been identified as components of condensates. This thesis examines biomolecular condensates in disease and therapeutic development. We find that condensate-promoting features in condensate-forming proteins can be mapped and leveraged to build a resource cataloging mutations that likely contribute to condensate dysregulation in human diseases (Banani et al., 2022). Pathogenic mutations in condensate-promoting features span diverse disease classes across both Mendelian diseases and cancers. FDA-approved small molecule therapeutics interact with condensates, selectively partitioning into some condensates and not others (Klein et al., 2020). Selective partitioning of small molecules has broad implications for drug therapeutic activity and resistance. These findings demonstrate the need to integrate condensate-based models in our study of disease and therapeutic development – an effort which will generate novel pathogenic mechanistic hypotheses and improved drug design for human diseases.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152890</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ecology and adaptation of denitrifiers across genomes and environments</title>
<link>https://hdl.handle.net/1721.1/152885</link>
<description>Ecology and adaptation of denitrifiers across genomes and environments
Zhang, Irene H.
Although nitrogen comprises 78% of the Earth’s atmosphere, this N2 gas is largely inaccessible to most living organisms, limiting the supply of bioavailable (i.e. fixed) nitrogen to many marine, terrestrial, and aquatic environments. Microbes primarily mediate the many steps of the nitrogen cycle, including losses from anaerobic ammonia oxidation (anammox) and denitrification. Denitrification, the stepwise reduction of nitrate to nitrite, nitric oxide, nitrous oxide, and N2 gas, can be performed by diverse taxa, acts as a major sink of fixed nitrogen within low-oxygen environments, and contributes to emissions of the greenhouse gas nitrous oxide. While canonical studies typically use model complete denitrifiers capable of fully reducing nitrate to N2 gas, sequencing efforts have revealed a diversity of partial denitrifiers capable of only one or a subset of denitrification steps in natural systems. Much remains unknown about the adaptive advantages of partial denitrification, the taxonomic identities and ecological roles of environmental partial denitrifiers, and the radiation of denitrification genes within and across environments. Using a combination of laboratory models, metagenomics, and phylogenetic approaches, we find a predominance of partial denitrifiers within marine oxygen deficient zones, uncover evidence for a rate vs. yield tradeoff between complete and partial denitrifiers, explore the metabolic capabilities of uncultivated putative denitrifiers, and investigate the diversification of denitrifying lineages and genes in a range of natural environments.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152885</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutron Resonance Transmission Analysis of Nuclear Material Using a Portable D-T Neutron Generator</title>
<link>https://hdl.handle.net/1721.1/152884</link>
<description>Neutron Resonance Transmission Analysis of Nuclear Material Using a Portable D-T Neutron Generator
Klein, Ethan Avram
The accurate identification and quantification of nuclear material is important for nuclear security, nonproliferation, and safeguards missions. Existing techniques have limitations in terms of their isotopic specificity, isotopic range, and their capacity to analyze thick or shielded targets. Neutron Resonance Transmission Analysis (NRTA) is an active neutron interrogation technique that addresses these challenges by offering isotope-specificity, applicability to a wide range of mid- and high-Z isotopes, and the ability to assay targets shielded by high-Z material. However, the dependence of NRTA on large beamline facilities to generate high neutron flux and achieve excellent system resolution has prevented the technique from being used for on-site applications. To overcome this limitation, this thesis presents a portable NRTA system design that utilizes a commercial-off-the-shelf D-T neutron generator. The portable system, comprising a neutron source and neutron detector assembly, each weighing less than 40 kg, enables easy transport and setup for on-site applications. The system is capable of achieving sufficient flux and resolution to identify neutron resonances up to approximately 60 eV in under an hour, making it suitable for analyzing high-Z targets with thicknesses of several centimeters and with up to several millimeters of low-Z shielding. The feasibility of the portable NRTA system for a number of nonproliferation and safeguards applications was evaluated by assaying various uranium and plutonium targets. The system successfully assayed both depleted and highly enriched uranium and reactor-grade plutonium targets, accurately determining the enrichment and fissile mass content as validated by chemical analysis. The system was also able to assay combinations of thorium and uranium metal of varying enrichment, as well as small quantities of U-233 oxide under high passive gamma backgrounds, to accuracies under a gram.
These results highlight the potential viability of using a portable NRTA system to support nuclear nonproliferation missions that require the non-destructive assay of a diverse range of nuclear materials, including advanced reactor safeguards for the thorium fuel cycle and the confirmation of special nuclear material for arms control treaty verification.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152884</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Is the Medium Still the Message? Assessing the Role of Video in Political Persuasion</title>
<link>https://hdl.handle.net/1721.1/152883</link>
<description>Is the Medium Still the Message? Assessing the Role of Video in Political Persuasion
Wittenberg, Chloe
Video is a central—and growing—part of the American political landscape. Whether via televised news programs, campaign advertising, or social media posts on YouTube and TikTok, members of the public now have access to a colossal amount of audiovisual content about politics, distributed across myriad platforms. Yet despite the ubiquity of political video in contemporary life, scholars still lack a firm understanding of whether, and in what ways, individuals’ processing of such content differs from their processing of more traditional media, such as text. To answer these questions, I draw on a range of data sources, including eight original survey experiments, spanning over 20,000 U.S. adults and more than 150 unique messages, as well as a large dataset of political posts published on Facebook over a five-year period. In the first part of the dissertation, conducted in collaboration with Ben Tappin, Adam Berinsky, and David Rand, I document an intriguing disconnect between a message's credibility and persuasiveness: although political videos tend to be perceived as substantially more believable, compared to corresponding transcripts, they generate only marginally more attitude change. In the second part of the dissertation, I then assess whether these patterns persist over time. Using a two-wave longitudinal study, I demonstrate that members of the public generally revise their political opinions in response to new information, regardless of whether this information is presented in the form of video or text—with persuasion visible across both modalities immediately after exposure and several days later. Finally, in the last part of the dissertation, I shift my focus from political attitudes to personal engagement. Observationally, I find that political videos tend to attract more interest and attention on social media, compared to text-based news articles. 
However, within a more controlled experimental setting, members of the public do not display a consistent preference for video when deciding what types of political media to consume and share online. Together, these findings offer new insights about the nature of political information processing in an increasingly multimodal media environment.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152883</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization and Generalization of Minimax Algorithms</title>
<link>https://hdl.handle.net/1721.1/152869</link>
<description>Optimization and Generalization of Minimax Algorithms
Pattathil, Sarath
This thesis explores minimax formulations of machine learning and multi-agent learning problems, focusing on algorithmic optimization and generalization performance. The first part of the thesis delves into the smooth convex-concave minimax problem, providing a unified analysis of widely used algorithms such as Extra-Gradient (EG) and Optimistic Gradient Descent Ascent (OGDA), whose convergence behavior was not systematically understood. We derive convergence rates for these algorithms in the convex-concave setting. We show that these algorithms work effectively due to their approximation of the Proximal Point (PP) method, which converges to the solution at a fast rate but is impractical to implement. In the next chapter, we expand our study to nonconvex-nonconcave problems. These problems are generally challenging to solve, as a solution may not be well defined, or even if a solution exists, its computation may not be tractable. We identify a class of nonconvex-nonconcave problems that do have well defined and computationally tractable solutions. Leveraging the concepts developed in the first chapter, we design algorithms to efficiently tackle this special class of nonconvex-nonconcave problems. The final part of this thesis addresses the issue of generalization. In many cases, such as GANs and adversarial training, the objective function for finding the saddle point can be written as an expected value over the data distribution. However, since we often do not have direct access to this distribution, we solve the empirical problem instead, which involves averaging over the available dataset. The final chapter aims to evaluate the quality of solutions to the empirical problem compared to the original population problem. Existing metrics like the primal risk, which are used to assess generalization in the minimax setting, are found to be inadequate in capturing the generalization of minimax learners.
This prompts the proposal of a new metric, the primal gap, which overcomes these limitations. This novel metric is then utilized to investigate the generalization performance of popular algorithms like Gradient Descent Ascent (GDA) and Gradient Descent-Max (GDMax).
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152869</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On The Performance Of The Maximum Likelihood Over Large Models</title>
<link>https://hdl.handle.net/1721.1/152867</link>
<description>On The Performance Of The Maximum Likelihood Over Large Models
Kur, Gil
This dissertation investigates non-parametric regression over large function classes, specifically non-Donsker classes. We present the concept of non-Donsker classes and study the statistical performance of the Least Squares Estimator (LSE) --- which also serves as the Maximum Likelihood Estimator (MLE) under Gaussian noise --- over these classes. (1) We demonstrate the minimax sub-optimality of the LSE in the non-Donsker regime, extending traditional findings of Birgé and Massart '93 and resolving a longstanding conjecture of Gardner, Markus and Milanfar '06. (2) We reveal that in the non-Donsker regime, the sub-optimality of the LSE arises solely from its elevated bias error term (in terms of the bias and variance decomposition). (3) We introduce the first minimax optimal algorithm for multivariate convex regression with a polynomial runtime in the number of samples, showing that one can overcome the sub-optimality of the LSE in efficient runtime. (4) We study the minimal error of the LSE in both random and fixed design settings.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152867</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Learning for Lightweight Machine Learning Inference at the Edge</title>
<link>https://hdl.handle.net/1721.1/152866</link>
<description>Continuous Learning for Lightweight Machine Learning Inference at the Edge
Khani, Mehrdad
With the proliferation of edge devices such as mobile phones, consumer robots, drones, wearables, and IoT devices, the volume of data generated at the edge of the Internet has been increasing exponentially. Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), have the ability to process this data with remarkable accuracy. However, state-of-the-art ML models require substantial computational resources that edge devices typically lack, necessitating a shift to powerful servers in the cloud as hosts for these models. Running these models at the edge is desirable due to benefits such as low-latency results and adherence to data privacy constraints, but is limited by the available computational power and energy consumption of edge devices. Moreover, lightweight models designed for edge devices often exhibit a significant drop in accuracy. Continuous learning offers a potential solution by improving the accuracy of lightweight models through dynamically adapting them to specific scenes or narrow distributions of inputs, which is especially relevant since, in practice, these models do not need to generalize to every possible sample from the distribution.&#13;
&#13;
In this thesis, two key methods are introduced to tackle the challenges in continuous learning systems for edge devices: Model Streaming and Model Reuse. Model Streaming offloads the adaptation process to remote machines with greater computational capacity and updates only a critical subset of model parameters that significantly influence the lightweight model’s performance, reducing the bandwidth needed for model updates. Model Reuse uses an efficient DNN model to dynamically select a suitable lightweight model from a library of historical models designed for similar input distributions, boosting the scalability, responsiveness, and accuracy of continuous learning systems. These methods are applied to practical systems, including MMNet for adaptive neural signal detection in 5G cellular communication systems, AMS for real-time video inference on edge devices, SRVC for efficient video compression, and RECL for responsive, resource-efficient continuous learning for video analytics.&#13;
&#13;
We show how continuous learning can significantly improve lightweight machine learning inference on edge devices. The proposed techniques effectively address the unique challenges posed by resource-constrained edge environments. Practical applications presented in the thesis, such as MMNet, AMS, SRVC, and RECL, demonstrate the real-world effectiveness of these methods. These innovations in continuous learning have the potential to reshape the landscape of edge computing by offering more accurate and adaptable inference capabilities, enabling efficient use of computational resources, reduced latency, and better energy efficiency.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152866</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Firms, Labor Markets and Anti-System Politics: Essays on the Political Economy of Populism</title>
<link>https://hdl.handle.net/1721.1/152865</link>
<description>Firms, Labor Markets and Anti-System Politics: Essays on the Political Economy of Populism
Giannoni, Matias Alberto
The limitations of existing political economy theories about the rise of populism motivate an approach that is centered on firms and employment trajectories as causes of individual frustration. In this dissertation, I explore the political economy causes of the rise of populism in both developed and developing economies. I address the gap in the literature that relates individual economic hardship and populist preferences. I argue that explanations proposing economic changes as causes of anti-system politics, like globalization, trade, or skill-biased technological change, are incomplete if firms are ignored. I test a firm-based theory of anti-system politics in Italy and Brazil, using a combination of original survey data, survey experiments, and quasi-experimental designs. In the last chapter, I focus on the case of France and show that there is a link between labor market characteristics, political detachment, and a preference for outsider candidates that can operate independently of issue-based ideological preferences.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152865</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vacuum Transistors Based on III-Nitrides Field Emitter Arrays with Self-Aligned Gate</title>
<link>https://hdl.handle.net/1721.1/152864</link>
<description>Vacuum Transistors Based on III-Nitrides Field Emitter Arrays with Self-Aligned Gate
Shih, Pao-Chuan
Vacuum electronics are promising candidates for future high-frequency and harsh-environment devices thanks to their scattering-free and radiation-robust vacuum channels. Field-emission vacuum transistors based on silicon and metals have been demonstrated over the past 30 years; however, power consumption and long-term device stability remain issues. To further improve field emission device performance and stability, III-Nitride semiconductors have recently attracted significant attention thanks to their engineerable electron affinities and high bonding energies. Ideally, degenerately-doped n-type semiconductors with low electron affinities can have low work functions, leading to small electron Fowler-Nordheim tunneling barriers and thus low operating voltage and reduced power consumption. Moreover, materials with large bonding energies are expected to be more robust towards ion-bombardment-induced degradation. In spite of the great potential of III-Nitride vacuum transistors, there are still very limited transistor-level demonstrations with performance comparable to Si and metal-based field emitters.&#13;
&#13;
This thesis aims to identify the key challenges of III-Nitride vacuum transistors and demonstrate new approaches to tackle these issues. First, GaN field emission diodes are studied to understand the basic device operation and the long-term stability of GaN emitter tips. Second, self-aligned-gate structures are developed on GaN field emitter arrays to demonstrate vacuum transistors with reduced operating voltages. The device performance is further improved by sharpening the field emission tips and optimizing device geometries. Third, N-polar GaN and AlGaN self-aligned-gate field emitter arrays are also fabricated and their material properties for field emission applications are investigated. Finally, a new technology to demonstrate fully-integrated III-Nitride vacuum transistors is discussed. This thesis work serves as a foundation for future high-frequency (above-100 GHz) and high-power III-Nitride vacuum electronics.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152864</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Rigorously Tested &amp; Reliable Machine Learning for Health</title>
<link>https://hdl.handle.net/1721.1/152863</link>
<description>Towards Rigorously Tested &amp; Reliable Machine Learning for Health
Oberst, Michael Karl
When can we rely on machine learning in high-risk domains like healthcare? In the long-term, we want machine learning systems to be as reliable as any FDA-approved medication or diagnostic test. Building reliable models is complicated by the need for causal reasoning and robust performance. To support decision-making, we want to draw causal conclusions about the impact of model recommendations (e.g., will recommending a particular drug lead to better patient outcomes?). Moreover, we want our models to perform well across different hospitals and patient populations, including those that differ from the hospitals and populations seen during model development.&#13;
&#13;
These objectives run into limitations of what our data can tell us without further assumptions. For instance, we only observe outcomes for the treatments that were actually prescribed to patients, not all possible treatments. Similarly, we do not observe performance on every conceivable hospital where a model might be deployed, but only on the (typically much more limited) data we have access to.&#13;
&#13;
In this thesis, I approach these challenges using tools from causality and statistics, incorporating external knowledge into the process of both model validation and design. External knowledge can come from a variety of sources, including human experts (e.g., clinicians) or gold-standard data (e.g., from randomized trials). First, I introduce methods for assessing and improving the credibility of causal inference, including methods to help domain experts “sanity check” the causal reasoning of models for decision-making, identify under-represented populations in causal analyses, and incorporate limited experimental data to improve the credibility of causal conclusions. Second, I introduce tools for building robust predictive models by incorporating domain knowledge of plausible variation across environments: both estimating the worst-case predictive performance (e.g., accuracy) of models under domain-specific changes in the data generating process, and optimizing models to obtain optimal worst-case performance.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152863</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redistributing the Costs of Volumetric Denial-of-Service Mitigation</title>
<link>https://hdl.handle.net/1721.1/152862</link>
<description>Redistributing the Costs of Volumetric Denial-of-Service Mitigation
DeLaughter, Samuel
Volumetric Denial-of-Service (DoS) attacks pose a severe and exponentially increasing threat to the Internet. Existing mitigations provide valuable stop-gaps but fail to address the root cause, and the overhead they incur is poorly understood. To combat these attacks we present a protocol-agnostic approach to DoS mitigation that moves overhead away from service bottlenecks, towards the network edge and onto attackers themselves. We observe that the vast majority of attacks rely on a small subset of packet types which are individually identical to legitimate packets, but generated far more often by attackers than by regular clients. Making such packets marginally more difficult to generate can significantly reduce flood volumes without harming legitimate clients. We design and implement two novel mitigations in TCP following this approach, to combat the ubiquitous SYN Flood attack. The first is largely a toy example illustrating how simple packet padding can rate-limit bandwidth-constrained attackers, while the second is a more robust approach using miniature proofs-of-work to restrict the common CPU-bound attacker. We also present a rigorous experimental methodology and novel suite of metrics for more accurately evaluating the efficacy and overhead of arbitrary DoS mitigations across changes in attack, client behavior, and network topology. We use this measurement framework to evaluate our proposed mitigations in a controlled network testbed. Both mitigations exhibit negligible overhead, and while their efficacy is context-dependent, they succeed in completely nullifying potentially devastating SYN floods in certain contexts. Beyond our immediate findings in TCP, this work is broadly applicable to the design of DoS-resilient network protocols and internet architectures.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152862</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying the Performance of Network Control Algorithms</title>
<link>https://hdl.handle.net/1721.1/152861</link>
<description>Verifying the Performance of Network Control Algorithms
Arun, Venkat
As networked systems become critical infrastructure, their design must reflect their new societal role. Today, we build systems with hundreds of heuristics but often do not understand their inherent and emergent behaviors. This dissertation presents performance verification, a set of tools and techniques to prove performance properties of heuristics running in real-world conditions. It provides an alternative to queuing and control theory, which are typically too optimistic about performance because of their limited capacity to accurately model real-world phenomena. Overly optimistic analysis can lead to heuristic designs that fail in unexpected ways upon deployment. Rigorous proofs, on the other hand, can not only inspire confidence in our designs, but also give counter-intuitive insights about their performance.&#13;
&#13;
A key theme in our approach is to model uncertainty in systems using non-random, non-deterministic objects that cover a wide range of possible behaviors under a single abstraction. Such models allow us to analyze complex system behaviors using automated reasoning techniques. We will present automated tools to analyze congestion control and process scheduling algorithms. These tools prove performance properties and find counter-examples where widely deployed heuristics fail. We will also prove that current end-to-end congestion control algorithms that bound delay cannot avoid starvation.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152861</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards ML Models That We Can Deploy Confidently</title>
<link>https://hdl.handle.net/1721.1/152859</link>
<description>Towards ML Models That We Can Deploy Confidently
Salman, Hadi
As machine learning (ML) systems are deployed in the real world, the reliability and trustworthiness of these systems become an even more salient challenge. This thesis aims to address this challenge through two key thrusts: (1) making ML models more trustworthy by leveraging what has been perceived solely as a weakness of ML models, namely adversarial perturbations, and (2) exploring the underpinnings of reliable ML deployment. Specifically, in the first thrust, we focus on adversarial perturbations, which constitute a well-known threat to the integrity of ML models, and show how to build ML models that are robust to so-called adversarial patches. We then show that adversarial perturbations can be repurposed to not just be a weakness of ML models but rather to bolster these models’ resilience and reliability. To this end, we leverage these perturbations to, first, develop a way to create objects that are easier for ML models to recognize, then to devise a way to safeguard images against unwanted AI-powered alterations, and finally to improve transfer learning performance. The second thrust of this thesis revolves around ML model interpretability and debugging so as to ensure safety, equitability, and unbiased decision-making of ML systems. In particular, we investigate methods for building ML models that are more debuggable and provide tools for diagnosing their failure modes. We then study how data affects model behavior, and identify unexpected ways in which data might introduce biases into ML models, particularly in the context of transfer learning. Finally, we put forth a data-based framework for studying transfer learning which can help us discover problematic biases inherited from pretraining data.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152859</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement and Modeling for Resource Monitoring</title>
<link>https://hdl.handle.net/1721.1/152858</link>
<description>Measurement and Modeling for Resource Monitoring
Ponce, Eric Andrew
Effectively tracking resource consumption through "smart" metering provides value. Installing such meters, however, is costly, labor-intensive, and could disrupt sensitive, aging distribution networks. Utilizing existing "analog" meters through noninvasive retrofit methods provides a more feasible path to transforming our networks into "smart" ones by significantly reducing material and installation costs. The high-quality flow rate data produced by these retrofits also enables nonintrusive load monitoring (NILM) for applications such as condition-based maintenance.&#13;
&#13;
Distributed sensing technologies such as those used for resource tracking require electrical power that may not be easily available at the installation site. The toroidal current transformer based magnetic energy harvester (CTMEH), which leverages the existence of extensive electrical power grid cabling, provides a potential solution. Its usefulness, however, is limited by the need to thread the power conductor through the toroidal transformer. &#13;
A model for a CTMEH using a split-core toroid is necessary to enable the design of more useful energy harvesting mechanisms necessary for distributed sensing technologies.&#13;
&#13;
The spread of constant power loads (CPLs) connected to rectifiers in power distribution systems poses a potential stability problem that requires a means of analyzing stability under different source and load impedance conditions. Stability analysis methods allow for the design of "smart" loads that can dynamically adjust their operation to prevent instability. Investigation of the currents and voltages in such systems would also aid NILM efforts.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152858</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Parameter Learning in Adaptive Systems</title>
<link>https://hdl.handle.net/1721.1/152856</link>
<description>Fast Parameter Learning in Adaptive Systems
Cui, Yingnan
Adaptive control pertains to the control of dynamic systems exposed to parametric uncertainties in real-time with guarantees of stability and control goals of tracking. A central part of adaptive control is parameter estimation. Real-time adaptive control requires parameter estimation to occur quickly and accurately. This thesis is dedicated to fast parameter estimation in adaptive identification and control of linear dynamic systems with time-invariant and time-varying parameters.&#13;
&#13;
Several algorithms for fast parameter convergence are proposed in this thesis, in both discrete and continuous time, for adaptive identification and control. Three different algorithms are proposed for discrete-time systems: the first introduces a time-varying gain, the second is based on a second-order tuner that combines acceleration and momentum-like terms, and the third combines the first two. In each case, a guarantee of boundedness is established with no requirements of input excitation. With input excitation, parameter convergence is established for constant parameters, with the convergence proven to be exponential when the excitation is persistent. For continuous-time systems, the thesis consists of two contributions, both in the context of adaptive control. The first is a proof of parameter convergence for a new class of adaptive controllers for a class of nonlinear dynamic systems with multiple inputs, with adaptive control in the inner loop and reinforcement learning in the outer loop. This convergence is shown to be exponential when persistent excitation conditions are satisfied. Finally, stability and parameter convergence are established for adaptive control of a class of linear time-varying systems whose states are not accessible for measurement. Here, the thesis introduces yet another adaptive algorithm that combines a standard gradient-descent approach with one based on an integral error. When the time variations are known, it is shown that parameter convergence can occur; when the time variations are unknown, the parameter estimates are shown to converge to a compact set.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152856</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Energy-Aware Image and Transformer Processors for Edge Computing</title>
<link>https://hdl.handle.net/1721.1/152854</link>
<description>Flexible Energy-Aware Image and Transformer Processors for Edge Computing
Ji, Alex
Machine learning inference on edge devices for image and language processing has become increasingly common in recent years, but faces challenges associated with high memory and computation requirements, coupled with limited energy resources. This work applies different quantization schemes and training techniques to reduce the cost of running these models and provide flexibility in the hardware. Energy scalability is achieved through bit width scaling, as well as model size scaling. These techniques are applied to three neural network accelerators, which have been taped out and tested, to enable efficient inference for a variety of applications.&#13;
&#13;
The first chip is a CNN accelerator that simplifies computation using nonlinearly quantized weights by reordering multiplication and accumulation. This modified computation requires additional storage elements compared to a conventional approach. To minimize the area overhead, a custom accumulator array layout is designed. The second chip targets moderately sized Transformer models (e.g., ALBERT) using piecewise-linear quantization (PWLQ) for both weights and activations. Lastly, an energy-adaptive accelerator for natural language understanding based on lightweight Transformer models is presented. The model size can be adjusted by sampling the weights of the full model to obtain differently sized submodels, without the memory overhead of storing multiple models.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152854</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Fidelity of Classic Cardiovascular Metrics in the Context of a Failing and Mechanically Supported Heart</title>
<link>https://hdl.handle.net/1721.1/152849</link>
<description>Investigating the Fidelity of Classic Cardiovascular Metrics in the Context of a Failing and Mechanically Supported Heart
Macleod, Fiona K.
The advent of mechanical circulatory support devices (MCS) has ushered in a new era in the treatment of low cardiac output states. Proper titration of MCS is essential for optimal therapeutic outcomes and relies on monitoring metrics of cardiac state. However, innovation in clinically determining these metrics has lagged behind treatment options. Consequently, many of these metrics rely on assumptions about the cardiovascular environment that are invalidated by the use of MCS, raising concerns about their reliability.&#13;
&#13;
This work used thermodilution, the gold standard method for measuring a patient’s cardiac output, as well as a newly developed MCS-based method, to explore under what conditions traditional metrics are impacted by MCS. Data from porcine studies and clinical trials were used to examine how the validity of both measurement methods can be impacted by changes in a patient’s cardiac state as well as by commonly used medical interventions like mechanical ventilation and pharmacologic treatment, with and without MCS present. Ultimately, we found that thermodilution remains valid only under conditions where beat-to-beat variability in flow is low, a limitation not present in the new MCS-based method.&#13;
&#13;
Translation of this work to the clinic will help inform physicians about when thermodilution measurements should and should not be used to titrate MCS support, given their possible unreliability. What’s more, understanding when and how traditional, “gold standard” metrics, such as thermodilution, fail is essential for validating newer, more reliable metrics like the MCS-based method. Both of these consequences will ultimately improve patient outcomes.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152849</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Gamma Stimulation on Neurological Phenotypes of Alzheimer's Dementia and Down Syndrome</title>
<link>https://hdl.handle.net/1721.1/152847</link>
<description>The Impact of Gamma Stimulation on Neurological Phenotypes of Alzheimer's Dementia and Down Syndrome
Jackson, Brennan
Non-invasive Gamma ENtrainment Using Sensory stimulation (GENUS) at 40 Hz reduced Alzheimer’s disease (AD) pathology such as amyloid and tau levels, prevented cerebral atrophy, and improved behavioral testing performance in mouse models of AD. This thesis work focuses on the translation of this intervention to human patients with AD. Initial pilot studies assessed safety, compliance, entrainment, and exploratory clinical outcomes in patients with mild AD during acute and chronic exposure to GENUS. Additionally, because trisomy 21 is associated with an increased amyloid burden and other biological and cognitive phenotypes shared with AD, we also present an initial investigation into the use of GENUS in adult Ts65Dn mice, a mouse model of Down syndrome (DS). Chronic exposure resulted in significant genetic and immunohistochemical changes related to synapse organization and adult neurogenesis within the hippocampus, as well as an improvement in spatial memory during behavioral testing. Finally, we also share pilot studies in human individuals with DS to show initial safety, compliance, and entrainment data from acute exposure experiments. Overall, GENUS offers a promising, non-invasive method for altering gamma frequency activity in hippocampal and cortical neuronal circuits and improving cognitive performance in the setting of AD and DS.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152847</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relaxing Topological Barriers in Geometry Processing</title>
<link>https://hdl.handle.net/1721.1/152846</link>
<description>Relaxing Topological Barriers in Geometry Processing
Palmer, David R.
Geometric optimization problems are full of topological barriers that hinder optimization, leading to nonconvexity, initialization-dependence, and local minima. This thesis explores convex relaxation as a powerful guide and tool for reframing such problems. We bring the tools of semidefinite relaxation to bear on challenging optimization problems in field-based meshing and unlock polynomial geometry kernels for physical simulation. We bring together frame fields with spectral representation of geometry. We use current relaxation to devise a new neural shape representation for surfaces with boundary as well as a convex relaxation of field optimization problems featuring singularities. Unifying these disparate problems is a focus on how the right choice of representation for geometry can simplify optimization algorithms.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152846</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uninventing Carceral Technology: Four Experiments in Imagining the World More Rigorously</title>
<link>https://hdl.handle.net/1721.1/152845</link>
<description>Uninventing Carceral Technology: Four Experiments in Imagining the World More Rigorously
Barabas, Chelsea Marie
How do we advance social justice in an increasingly datafied world? Against a backdrop of burgeoning social movements, data-driven technologies have become an important terrain of struggle. That’s because the design and implementation of technology is not simply about the creation of software and hardware objects, but the negotiation of practices and possibilities for living life together differently (Suchman 2007). In this dissertation, I examine the role of technology in shaping our collective imaginations of what is possible in the context of the carceral state. By carceral state, I mean the expansive system of state-sanctioned capture, confinement, and control that underpins our current unjust social order. Drawing on rich intellectual traditions such as feminist, Indigenous, and Black studies, I interrogate the default assumptions underlying the design and implementation of data-intensive systems, in order to fundamentally reimagine the role of technology in larger struggles for justice. Ultimately, the aim of this work is to experiment with ways of “un-inventing” (MacKenzie 1993) carceral technology, by reconfiguring the structural, interpersonal, and personal aspects of computation to the point that harmful algorithms are no longer created.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152845</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genomically encoded logic gates and cell-cell communication devices for the implementation of a cryptographic hashing algorithm in living cells</title>
<link>https://hdl.handle.net/1721.1/152844</link>
<description>Genomically encoded logic gates and cell-cell communication devices for the implementation of a cryptographic hashing algorithm in living cells
Padmakumar, Jai Phiroze
Cell-cell communication can be harnessed to distribute tasks across different cells. Applied to biocomputation, communication enables cell populations to carry out operations too complex to be performed in a single cell. Doing so in a predictable manner requires well-characterized genetic parts that can signal between cells and process those signals. In this work, we develop a set of 12 logic gates encoded on the E. coli genome based on phage repressors with superior characteristics to previous sets of transcriptional logic gates. We additionally develop a set of four cell-cell communication devices compatible with the newly designed logic gates. To engineer large circuits, we introduce a method to partition a large genetic circuit across different cells while considering the size of the circuit that can be placed in a single cell and communication between cells. Together, these tools were used to implement a recoded version of the MD5 hashing function, a historically widely used cryptography algorithm. The circuit requires 110 logic gates partitioned across 65 E. coli strains, requiring a total of 0.66 Mb of recombinant DNA introduced onto their genomes with the most complex strain carrying a total of 40 recombinant genes. For each unique strain we constructed, we experimentally verified the strains can sense the appropriate input signals and compute the correct logic function, as indicated by a fluorescent output. To validate each cell communicates the correct signal to the next cell, we use reporter cells to measure the signal from the sender and show the reporter cells respond correctly. This work demonstrates the behavioral complexity that can be achieved with synthetic cellular populations.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152844</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Methods and Perspectives on the Administration of&#13;
American Elections</title>
<link>https://hdl.handle.net/1721.1/152842</link>
<description>New Methods and Perspectives on the Administration of&#13;
American Elections
Jaffe, Jacob
This dissertation uncovers and answers three questions about the administration of American elections. First, why do voters choose one form of voting method over another? Second, how accurately are votes tabulated within the United States? Third and finally, when inaccuracies in the tabulation of votes in U.S. elections are uncovered, how do voters respond? Election administration is a field of increasing partisan rancor in the United States. Donald Trump has claimed that in some cases voting by mail in 2020 was fraudulent and responsible for his defeat. In this context, it is key to understand why voters choose to vote by mail rather than by other methods. Then, to contextualize these claims, it is necessary to measure how many errors are detected in American elections. While tabulation errors are not common, they do occur. Given the importance of political trust and existing rhetoric on the accuracy of vote tabulation, it is important to present voters with situations that reflect the types of errors that occur in real elections. Overall, using a large field experiment, I find that convenience in the form of lowered information costs has a significant impact on the usage of vote by mail. Then, I find that the rates of tabulation errors detected in post-election audits vary dramatically across states, likely due to administrative differences in how re-tabulation is performed. Finally, I present a number of Americans with a survey experiment that varies the information they receive about a post-election audit, including the number of tabulation errors discovered. I find that most Americans continue to trust elections when a relatively small number of errors is reported, though they remain extremely vulnerable to partisan messaging about the trustworthiness of elections.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152842</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Political Origins of Alliances</title>
<link>https://hdl.handle.net/1721.1/152841</link>
<description>The Political Origins of Alliances
Pollmann, Mina Erika
Our understanding of the causes of alliance formation is limited by a lack of theoretical interest in what types of concerns might inhibit alliance formation and insufficient empirical interrogation of cases of alliance formation compared to cases of alliance non-formation. My dissertation seeks to improve our understanding of the causes of alliance formation by applying an original typology of anti-alliance concerns in examining eight cases of successful alliance formation alongside six cases of failed alliance negotiations. Based on the existing alliance literature, I identify the following seven types of anti-alliance concerns: asymmetry, liability, autonomy, provocation, rigidity, entanglement, and defection. I find that asymmetry concerns - the concern that the potential allies do not actually share a mutual threat - were the most significant (coded as high in 9 out of the 14 cases) across successful and failed alliance negotiations, and that asymmetry and entanglement concerns - the concern that the potential ally will force the state to support it in a “costly and unprofitable enterprise” - were the most prevalent (coded as high or low in 10 out of the 14 cases) across successful and failed alliance negotiations. The relationship between the type and/or number of high concerns and alliance formation is weak to nonexistent. Furthermore, in four out of the six most similar paired comparisons of a failed alliance negotiation to a successful alliance negotiation, the different outcomes (i.e., failure versus success) were better explained by different domestic political conditions (i.e., more unfavorable versus more favorable) rather than by differences in the strategic soundness of the proposed alliance (i.e., more strategically unsound versus more strategically sound). 
An interesting policy implication from a deeper dive into the negotiation processes is that while autonomy, rigidity, and entanglement concerns appear negotiable between the potential signatories, asymmetry, liability, and provocation concerns appear non-negotiable. Defection concerns were not prevalent enough for meaningful analysis of negotiability, though I speculate that defection concerns, when present, are so prohibitive that alliance negotiations do not even begin.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152841</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the Deeply Virtual Neutral Pion Electroproduction Cross Section off the Proton at 10.6 GeV using the CLAS12 Detector</title>
<link>https://hdl.handle.net/1721.1/152836</link>
<description>Measurement of the Deeply Virtual Neutral Pion Electroproduction Cross Section off the Proton at 10.6 GeV using the CLAS12 Detector
Johnston, Robert
Deeply virtual exclusive reactions provide unique channels to study both transverse and longitudinal properties of the nucleon simultaneously, allowing for a 3D image of nucleon substructure. This thesis discusses work towards extracting an absolute cross section for one such exclusive process, deeply virtual neutral pion production, using 10.6 GeV electron scattering data off a proton target from the CLAS12 experiment in Jefferson Lab Hall B. This measurement is important as exclusive meson production has unique access to the chiral-odd Generalized Parton Distributions, and is also a background for other exclusive processes such as Deeply Virtual Compton Scattering, making the determination of this cross section crucial for other exclusive analyses.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152836</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Migrant Capitalism:&#13;
Emigration, Remittances, and Urbanization in Punjab</title>
<link>https://hdl.handle.net/1721.1/152835</link>
<description>Migrant Capitalism:&#13;
Emigration, Remittances, and Urbanization in Punjab
Meer, Ayan Hassan
This dissertation addresses a classical question in the social sciences: how does capitalism relate to international emigration, and conversely, how does emigration shape the form capitalist development takes in a given polity? To answer this question, this dissertation employs extensive archival and field-based research, as well as geospatial analysis, drawing on a number of empirical cases centered in or related to the Indian state of Punjab. This region, in fact, concentrates the extreme inequalities of India’s recent development. On the one hand, it contains some of the wealthiest regions in the country, buoyed by remittances from the diaspora; on the other, an agrarian economy in decline, as the historic 2021 protests have attested. Punjabi farmers, once perceived as the favored beneficiaries of state-led development policies, are now mired in a social and ecological crisis of falling incomes, high levels of indebtedness, and frequent crop failures, in the context of India’s uneven urbanization.&#13;
&#13;
To conceptually systematize the type of political economy borne out of these emerging migrations, this dissertation contributes to literature in development studies and geography by introducing the concept of “migrant capitalism”—construed in three distinct yet related typologies. In the first instance, it refers to the super-exploitation of immigrant workers, in labor and accumulation regimes where severe wage compression is the only way to protect profitability from market risks and environmental uncertainty. Secondly, migrant capitalism addresses the entrepreneurial strategies of migrants, who use emigration and the consequent remittances to diversify their income and/or invest, given the increased liberalization of capital flows across the Global South. Finally, migrant capitalism attends to the accumulation strategies of migration industries, the firms and “migration infrastructures” that facilitate emigration, and whose geographical distribution in Punjab belies specific territorial production complexes as part of its urbanization trajectory.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152835</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Joule Heating and Pore Pressure Evolution, Magmatic Intrusions at Continental Rifts, and Microseisms in Yellowstone Lake</title>
<link>https://hdl.handle.net/1721.1/152833</link>
<description>Joule Heating and Pore Pressure Evolution, Magmatic Intrusions at Continental Rifts, and Microseisms in Yellowstone Lake
Smalls, Paris Todd
The goal of this thesis is to investigate research topics related to geothermal resources. Chapter 1 introduces the thesis and motivates the need to study research topics in relevant geothermal environments, and to develop new technologies to economically extract heat from these resources. In Chapter 2, microseism events in Yellowstone National Park are studied. Yellowstone is located in one of the most seismically active volcanic calderas in the world and is a widely studied area for investigating physical (e.g., faulting) and chemical (e.g., hydrothermal venting) relationships in geothermal systems. Chapter 3 describes dike intrusion modeling research I conducted early in my graduate studies, including the feedback mechanisms between magma injection at plate spreading centers and topographic development. Chapter 4 represents a shift in my research interests from science to engineering problems relevant for geothermal heat extraction. In this chapter, I describe the design process for developing a novel experimental approach to study the effects of applying a high voltage to saturated rock specimens under in-situ states of stress. This novel “Electric Rock Fracturing” experimental set-up is used in Chapter 5, the final chapter of this thesis, to study the effects of high-voltage application on the temperature, electrical conductivity, and permeability of brine-saturated Berea Sandstone rock specimens.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152833</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perspectives on Geometry and Optimization: from Measures to Neural Networks</title>
<link>https://hdl.handle.net/1721.1/152831</link>
<description>Perspectives on Geometry and Optimization: from Measures to Neural Networks
Suárez Colmenares, Felipe
This thesis explores geometrical aspects of matrix completion, interior point methods, unbalanced optimal transport, and neural network training. We use these examples to illustrate four ways in which geometry plays key yet fundamentally different roles in optimization.  &#13;
&#13;
The first part explores the benign properties of exploiting the intrinsic symmetries in matrix completion. In the second problem, we study the emergence of Fisher-Rao flows in entropic linear programs and explore their relationship to interior point methods. The third problem concerns unbalanced optimal transport. Inspired by a Lagrangian formulation of curvature for curves of measures, we present an algorithm for interpolation in Wasserstein-Fisher-Rao space. Lastly, we study the non-convex dynamics of neural network training for large step sizes and show that a simplified model of a two-layer neural network exhibits a phase transition and a self-stabilizing property known as the "edge of stability".
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152831</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Domain-Specific Generative Methods</title>
<link>https://hdl.handle.net/1721.1/152830</link>
<description>Developing Domain-Specific Generative Methods
Lewis, Kathleen M.
Generative AI is a field that is rapidly developing and growing in scale. As research in this area shifts to building on large-scale foundation models and powerful architectures, careful thought has to go into adapting these models to new domains and tasks. The work in this thesis demonstrates novel approaches to adapting large-scale generative models and architectures to specific applications in virtual try-on, conceptual art, and domain-specific image classification. In addition to the technical contributions, this thesis explores broader open questions about domain-specific generative models; for example, how can we carefully construct our training data to mitigate bias? What do human-in-the-loop methods for creative generative AI look like in practice? To what extent are large-scale vision-language models useful for traditionally image-only tasks?
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152830</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Inclusivity in Collaborative Technology</title>
<link>https://hdl.handle.net/1721.1/152828</link>
<description>Designing Inclusivity in Collaborative Technology
Park, Soya
Are Collaborative Tools Effective for Low-Status Workers? My thesis aims to provide answers to this question. Prior research in social science and organizational science argues that collaborative tools should be effective since technology reflects the social values of its users, and users will adapt the technology they use to make it effective for their needs. What I have found is that low-status users are not utilizing collaborative tools as much due to social burdens. I endeavor to comprehend why such a disconnect exists and to design systems that facilitate collaboration across different status levels.&#13;
&#13;
My focus is on designing inclusive collaborative technology that caters to users of all statuses. I will refer to this as "status-aware interfaces." Instead of leaving low-status workers to navigate tools primarily designed for high-status individuals, I aim to provide assistance tailored to their unique challenges, making collaborative technology effective for low-status workers as well. &#13;
&#13;
This dissertation develops these ideas through three systems: TaskLight, CollaboRanger, and Who2chat, each of which results in specific design implications for creating status-aware interfaces.&#13;
&#13;
TaskLight enables low-status individuals to take on coordination work during collaboration, thereby allowing them to gain improved agency. CollaboRanger allows participants from different status dynamics who embrace different norms to bridge gaps and resolve conflicts. Who2chat uses nudges and enables social signaling to reduce social anxiety for low-status individuals interacting with users with higher status.&#13;
&#13;
My work suggests that existing collaborative systems are indeed not effective for low-status individuals and should be redesigned to meet their social needs. These implications are even more relevant in the future of work, where society becomes more distributed and job markets change swiftly, leaving low-status individuals more isolated and perplexed. Designing effective systems for low-status workers will ensure that technology is inclusive and supports those who are sometimes not heard or cared for.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152828</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Compute Congestion for Low Latency Datacenter RPCs</title>
<link>https://hdl.handle.net/1721.1/152827</link>
<description>Mitigating Compute Congestion for Low Latency Datacenter RPCs
Cho, Inho
Latency-sensitive applications in recent datacenter workloads, such as interactive machine learning inference, high-frequency algorithmic trading, cloud gaming, and interactive AR/VR applications, impose stringent latency requirements. These applications rely heavily on low-latency RPCs as an essential building block, often executed in mere microseconds through parallel computations and in-memory operations. Given the high fan-out RPC traffic patterns typical of these applications, it’s imperative to minimize tail latency to keep end-to-end latency within its service-level objectives (SLOs).&#13;
&#13;
With the innovations in datacenter networks and the end of Dennard scaling, congestion is now moving from networks to compute resources. This thesis introduces two systems, Breakwater and LDB, designed to mitigate and diagnose compute congestion, each targeting different sources of tail latency. Breakwater aims to alleviate CPU congestion and lock contention during intermittent server overload, while LDB furnishes developers with a tool to diagnose the functions causing high tail latency with low overhead.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152827</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods and models of screening genomic variants</title>
<link>https://hdl.handle.net/1721.1/152824</link>
<description>Methods and models of screening genomic variants
Frangieh, Chris J.
Genomes are the basis of human biology and human disease. Understanding the role of each gene in a healthy or diseased phenotype requires an intervention that causally links genotype to phenotype. Advances in RNA-guided endonucleases have enabled such pooled screens in human cells. I first consider a model to understand drivers of immune evasion in a pooled knockout screen conducted in an in vitro model of metastatic melanoma. Next, I discuss strategies for scaling these screens to encompass a larger set of genes from the human genome. Finally, I explore how next-generation genome editors can move beyond knockout screens to identify the biological role of any sequence at any location in the human genome.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152824</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>System and Reactor Design, and Materials Testing, for Efficient Thermochemical Solar Fuel Production in Temperature/Pressure Swing Redox Cycles</title>
<link>https://hdl.handle.net/1721.1/152823</link>
<description>System and Reactor Design, and Materials Testing, for Efficient Thermochemical Solar Fuel Production in Temperature/Pressure Swing Redox Cycles
Patankar, Aniket Sanjay
Zero-emission fuels such as hydrogen and its carriers are essential for decarbonizing hard-to-electrify sectors like industrial process heat, long-distance transportation, and seasonal energy storage. Green hydrogen can also decarbonize chemical processes in the fertilizer, petrochemicals, and metal refining industries. While electrochemical water splitting has a high technology readiness level (TRL), its cost is higher than future targets, especially when coupled with transient renewables. &#13;
&#13;
In this thesis we consider solar thermochemical hydrogen (STCH) with redox cycles using non-stoichiometric metal oxides. This pathway uses renewable heat, including concentrated solar, coupled with thermal energy storage for continuous, round-the-clock green hydrogen production. High-temperature heat (&gt; 1300℃) is used to reduce metal oxides like doped ferrites or ceria in a low oxygen environment. The reduced metal oxide is then used to split water at ~ 800℃ and atmospheric pressure. The large temperature and pressure swings severely penalize heat-to-fuel conversion efficiency if measures like heat recovery and pressure cascading are not implemented. Demonstrated state-of-the-art STCH systems have ~7% heat-to-fuel efficiency. &#13;
&#13;
In this thesis we present a roadmap for achieving 40% heat-to-fuel STCH efficiency. At the core of our innovation is the novel Reactor Train System (RTS). This system consists of multiple sealed and insulated STCH reactors that move between reduction and oxidation zones. The RTS is modular, scalable and compatible with any high-temperature heat source and thermal energy storage. Our system, designed around the RTS, achieves high efficiency by: &#13;
&#13;
i. Heat recovery from reactors between reduction and oxidation temperatures via a radiative counterflow heat exchanger (&gt; 75% effectiveness). &#13;
ii. Radiative waste heat recovery in the oxidation zone for on-site electricity production to drive oxygen removal and hydrogen separation. &#13;
iii. Staged oxygen removal whereby the oxygen partial pressure in a reactor is reduced gradually as it traverses the reduction zone. This reduces power consumption and pump capex by 70%.  &#13;
iv. Replacing mechanical pumps by thermochemical oxygen pumping (TcOP), wherein a second redox cycle is used to absorb oxygen released during the STCH reduction. TcOP increases hydrogen yield per cycle, enabling 40% heat-to-fuel efficiency.&#13;
v. Using composites of ceria and metal ferrites as the redox material. The composites combine the large oxygen carrying capacity of ferrites with superior kinetics and stability of ceria. &#13;
&#13;
The work involved the development of comprehensive and computationally efficient models for the design and optimization of the RTS that couple radiative heat exchange between reactors; heat and mass transfer and defect chemistry within the porous redox material; and energy and mass integration among components to minimize entropy generation. New high-fidelity modeling tools were developed for transient radiative heat transfer in reactors whose porous redox materials have low optical thickness, highly anisotropic scattering, and non-uniform morphology. Our method, GREENER (Generalized Radiation ExchangE factors and NEt Radiation method), achieves the same accuracy as Monte Carlo ray tracing at several orders of magnitude lower computational cost. &#13;
&#13;
In this thesis we also proposed and tested novel composite redox materials containing ceria and magnesium ferrite. Composite pellets were fabricated, and an STCH test system was assembled in which the pellets were cycled in an infrared furnace at STCH-relevant conditions. Our results show that ceria-ferrite composites are a promising class of redox materials that can improve both efficiency and productivity compared to state-of-the-art single-phase materials.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152823</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Property, Place, and Politics: Essays on the Political Economy of Land in South Africa</title>
<link>https://hdl.handle.net/1721.1/152822</link>
<description>Property, Place, and Politics: Essays on the Political Economy of Land in South Africa
Lekalake, Rorisang
This dissertation explores theories on the distinctiveness of landed property as a source of rootedness for political attitudes and behavior. The first paper addresses prevailing theories on homeownership’s civic virtues through the case of South Africa’s free housing program. Drawing on a large survey from the country’s most populous province, I find the anticipated correlation between owner-occupancy and higher participation. Moreover, my results indicate that housing beneficiaries are more open than renters to punishing poorly-performing incumbents and may therefore be an untapped interest group. The second paper tests whether emotional attachments to residential property distinguish it from other asset classes through a novel survey experiment of 1,282 urban South Africans. Contrary to expectation, participants expressed higher average disapproval of selling an inherited transport business than of either residential or commercial property. Furthermore, sentimental value did not translate into stronger opposition to government expropriation. The third paper proposes a theory linking mass displacement to enduring resentment, which I test through the case of apartheid-era removals to segregated enclaves known as Bantustans. While respondents from a 2004 survey with a personal or family history of displacement were more likely to hold land-related grievances, they reported lower hostility toward white South Africans. Moreover, my analysis of 2000-2019 electoral returns shows that residents of relocation sites in KwaZulu-Natal province were more likely to vote for former Bantustan authorities than those in surrounding areas. Taken together, the dissertation challenges a number of conventional wisdoms on the relationship between property and voter behavior. Furthermore, it contributes to the study of the political economy of land in the Global South by focusing on residential, rather than productive, property.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152822</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive AUV-assisted Diver Navigation for Loosely-Coupled Teaming in Undersea Operations</title>
<link>https://hdl.handle.net/1721.1/152813</link>
<description>Adaptive AUV-assisted Diver Navigation for Loosely-Coupled Teaming in Undersea Operations
O'Neill, Brendan
Human divers face immense challenges in the undersea domain due to constraints on life support, sensory input, and mobility. These challenges make even simple tasks difficult, and navigation between points of interest is chief among them. Nevertheless, humans have progressively used creativity, innovation, and research to explore the Earth’s oceans at greater depths and in greater spatial and temporal detail. Autonomous underwater vehicles often lack the tools, dexterity, or flexibility to manage specific tasks or unforeseen circumstances; however, advances in inertial navigation, computation, and acoustic communication enable them to perform tasks outside human capability. Acoustic modem technology allows for flexible and reliable underwater communication. We propose algorithms for cooperative navigation between a diver and an autonomous underwater vehicle as a pathway toward complex undersea human-robot teams. This thesis identifies the communication, software, and algorithmic tools that enable loosely-coupled cooperative navigation between an autonomous underwater vehicle and a diver without a surface presence. Divers present new challenges for cooperative navigation owing to their unique motion profiles and a pace that varies from diver to diver. By leveraging the vehicle’s sensor suite, acoustic modem technology, and nonlinear least-squares state estimation, we enable enhanced diver localization and navigation without a surface presence. Adaptation to environmental impacts is explored through measured ocean currents as well as updates to the diver’s motion model based on state estimation analysis. These adaptations produce more efficient diver transits with fewer heading changes. In addition, maneuvering strategies for autonomous underwater vehicles are explored to assess their impact on diver localization accuracy.
Experimental validation is shown through surface platforms as proxies for the autonomous underwater vehicle and diver, demonstrating the localization accuracy within a few meters for experiments under various operating conditions. These contributions provide a foundation for undersea human-robot teams to engage in complex tasks with greater efficiency through their combined strengths.
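The range-based, nonlinear least-squares localization described above can be illustrated with a minimal Gauss-Newton sketch. The `localize` function, the beacon layout, and the noise-free ranges below are illustrative assumptions for a static 2-D fix, not the thesis's actual estimator, which fuses a motion model and acoustic measurements over time:

```python
import numpy as np

def localize(beacons, ranges, x0, iters=10):
    """Estimate a 2-D position from range measurements to known
    reference positions via Gauss-Newton nonlinear least squares."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - beacons                    # (m, 2) offsets to beacons
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = dists - ranges                     # residuals
        J = diffs / dists[:, None]             # Jacobian of range w.r.t. x
        # Gauss-Newton step: solve min ||J * step + r|| in least squares
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Example: three reference fixes and noise-free ranges to the point (4, 3)
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
truth = np.array([4.0, 3.0])
ranges = np.linalg.norm(beacons - truth, axis=1)
est = localize(beacons, ranges, x0=[1.0, 1.0])
```

With exact ranges the iteration converges to the true position in a few steps; in practice, noisy acoustic ranges and diver motion would call for a recursive or batch estimator rather than this static solve.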
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152813</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aerosol carbon nanotube production via scalable microplasma synthesis of catalyst nanoparticles</title>
<link>https://hdl.handle.net/1721.1/152812</link>
<description>Aerosol carbon nanotube production via scalable microplasma synthesis of catalyst nanoparticles
Sawyer, William J.
The exceptional properties of individual carbon nanotubes (CNTs) have long indicated their potential for a range of practical applications. Yet the challenge of synthesizing ordered assemblies of high-quality CNTs limits the ability to fully translate those properties into macroscale structures. Floating catalyst chemical vapor deposition (FC-CVD) has emerged as the most promising process for large-scale production, with an increasing number of commercial uses. However, FC-CVD still requires improvements in control (i.e., reliability and CNT size) and quality (i.e., defects and impurities) to overcome the current trade-off between CNT quality and process intensity and enable the full potential of CNT-based materials.&#13;
&#13;
This thesis describes the design, construction, and implementation of a lab-scale system that achieves end-to-end control of catalyst generation and aerosol CNT growth for the purpose of understanding these processes and assessing potential scalability. First, a stand-alone microplasma reactor is designed and fabricated, and used for synthesis of iron-carbon aerosols from a ferrocene vapor precursor. The microplasma approach achieves precise particle diameter control in the 1–5 nm range and aerosol concentrations an order of magnitude higher than previously published approaches; this is explained by a charge-mediated formation mechanism enabled by the μs-scale residence time. The influence of operating conditions on process stability and run-time is investigated, and a dielectric gradient focusing technique is developed to reduce variability and extend the lifetime of operation. Second, an FC-CVD system is built and integrated with the microplasma reactor and used to explore CNT synthesis on iron-carbon catalyst aerosols. Controlling the temperature, gas chemistry, and flow conditions at which the catalyst aerosol and carbon precursor streams mix is shown to be critical for enabling CNT nucleation, controlling CNT diameter, and limiting iron and amorphous carbon impurities. Synthesis of highly graphitized single-wall CNTs is demonstrated over a range of operating conditions by a Pareto front analysis, with production rates of ~1 mg/hr. Based on these findings, an outlook is presented on the limiting factors and criteria for the scale-up of high-quality CNT production.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152812</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Universal Tensor Abstraction and its Application to and Implementation within Block-Based Compression</title>
<link>https://hdl.handle.net/1721.1/152808</link>
<description>A Universal Tensor Abstraction and its Application to and Implementation within Block-Based Compression
Ray, Jessica Morgan
Data with spatial relationships are often represented in modern programs using multidimensional arrays or other tensor structures, along with the associated metadata necessary to track the location of each piece of data. For example, a program can represent a video as a multidimensional array and a frame within the video as another multidimensional array with a timestamp that gives the location relative to the start of the video. Current programs cannot natively associate this location-based metadata with the arrays, causing the burden of tracking location to fall onto the user. This quickly becomes arduous in domains that have numerous arrays with spatial relationships spread across them. The task becomes further complicated when domains have multiple ways to represent the data, such as using projections, permutations, refinement, and coarsening. One such domain, block-based compression, has this type of heterogeneous, spatial data throughout, leading to overly complex implementations. &#13;
&#13;
Block-based compression forms the core of many common image and video standards such as JPEG, H.264, H.265, and H.266. The fundamental data unit in block-based compression, the block, represents everything from a video down to an individual pixel, all of which need to maintain their location relative to other blocks in a program. Due to the lack of support for this spatial data, each implementation largely starts from scratch, leading to inconsistencies in data representation and data access.&#13;
&#13;
This dissertation provides a critical look at the association between location and tensors, and defines a core abstraction called the Universal Tensor abstraction (UniTe). UniTe mathematically describes what it means to associate tensors with location and quantify spatial relationships across multiple tensors in a single program.&#13;
&#13;
While UniTe itself is not tied to a particular domain, this dissertation also provides a practical look at implementing UniTe in the context of block-based compression. An initial library implementation highlights the overhead incurred from UniTe due to computing spatial relationships and underlying array indices at the innermost level of computations.&#13;
&#13;
To combat this overhead, this dissertation also presents two different domain-specific languages, CoLa (Compression Language) and SHiM (Staged Hierarchical Multidimensional arrays), and their accompanying compilers built around UniTe. CoLa and SHiM show that it is possible to remove the overhead and achieve performance parity with hand-implemented C code, while also providing users with an intuitive way to represent and utilize spatial data.
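The core idea of natively associating location metadata with tensors can be conveyed by a toy wrapper class. This is a hypothetical illustration of location-carrying arrays, not the actual UniTe abstraction or its API; the class, method names, and example are all assumptions:

```python
import numpy as np

class LocatedArray:
    """A toy tensor that carries a location (origin offset) with its data,
    so spatial relationships need not be tracked by hand."""
    def __init__(self, data, origin):
        self.data = np.asarray(data)
        self.origin = tuple(origin)  # absolute location of data[0, 0, ...]

    def view(self, starts, shape):
        """Take a sub-block by *absolute* coordinates; the view remembers
        where it lives, and relative indexing is computed automatically."""
        rel = tuple(s - o for s, o in zip(starts, self.origin))
        idx = tuple(slice(r, r + n) for r, n in zip(rel, shape))
        return LocatedArray(self.data[idx], starts)

    def offset_to(self, other):
        """Spatial relationship between two blocks as an origin delta."""
        return tuple(a - b for a, b in zip(self.origin, other.origin))

# A "frame" located at row 16, col 32 of a larger picture,
# and a block within it addressed by absolute coordinates
frame = LocatedArray(np.arange(64).reshape(8, 8), origin=(16, 32))
block = frame.view(starts=(18, 34), shape=(4, 4))
```

Without the wrapper, the subtraction of origins in `view` and `offset_to` is exactly the bookkeeping that block-based compression code must repeat by hand for every block level.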
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152808</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Science in Investment Management</title>
<link>https://hdl.handle.net/1721.1/152807</link>
<description>Data Science in Investment Management
Singh, Manish
In this thesis, titled "Data Science in Investment Management," we aim to explore the applications of data science and artificial intelligence across various dimensions of investment management, offering innovative solutions and insights to the industry. This thesis is composed of several parts, each addressing a different aspect of investment management and leveraging data science techniques to deliver valuable insights.&#13;
&#13;
In the first part, for industries and cryptocurrencies, we develop a dynamic classification system that groups stocks according to quantified similarities drawn from a wide variety of structured and unstructured data features. With the availability of big data, we were able to use artificial intelligence (AI) methods to extract relevant information about companies from various data sources and learn their future similarity as perceived by the market. In the second part, we study capital and portfolio management for fusion-energy and biopharmaceutical investments. By leveraging computational techniques such as the portfolio approach, we provide novel insights into optimal financing strategies for high-risk, high-reward ventures like fusion research and biopharmaceutical investing. We also quantify the impact of clinical-trial results on companies' stock prices, which can aid biopharma investors in risk management. Given the increasing interest in ESG investing, we study the excess returns of ESG investing. We also develop a measure of the impact of biopharmaceutical companies' products on patient lives, which can attract ESG funds to those companies. The next part of the thesis investigates real-time psychophysiological analysis of financial risk processing, offering a deeper, data-driven understanding of human behavior in investment decision-making. In the following part, we focus on explainable machine learning for the important problem of consumer credit risk. In the final part, we conclude with a discussion of the future of artificial intelligence and data science in finance.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152807</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Quest for Ideal Quantum Amplifiers</title>
<link>https://hdl.handle.net/1721.1/152803</link>
<description>The Quest for Ideal Quantum Amplifiers
Peng, Kaidong
Faithful amplification and detection of weak signals are vital to a wide range of research fields, from quantum computing and dark matter detection to metrology and space communications. The increasingly complex systems for computing, metrology, and communication require more robust, performant, and scalable components. As a prominent example, quantum computing holds the potential to solve computational problems intractable for classical computers and advance healthcare, energy, climate, finance, and cybersecurity. However, the algorithmic complexity, finite qubit coherence, and imperfect control require quantum computers to scale to millions of physical qubits while maintaining low hardware error rates to impact real-world applications, necessitating quantum error correction and fast, high-fidelity, and simultaneous readout of a large number of qubits in each error correction cycle.&#13;
&#13;
Quantum amplifiers are critical front-end quantum hardware for faithfully amplifying single-photon-level readout signals above the orders-of-magnitude-larger ambient and electronics noise at room temperature. However, existing quantum amplifiers face a performance-scalability tradeoff and thus struggle to meet the demands of large-scale, information-critical quantum systems. This thesis aims to develop next-generation quantum amplifiers that simultaneously achieve optimal noise performance, scalability, directionality, and processor integrability. We invent a new class of amplifiers, Floquet-mode traveling-wave parametric amplifiers (TWPAs), that solves the long-standing performance-scalability tradeoff and theoretically offers broadband directional amplification with over 99.9% quantum efficiency across a wide bandwidth. Furthermore, we experimentally demonstrate a low-loss Floquet TWPA using our recently developed planar implementation architecture, offering advantages such as 100x less measured material loss than conventional approaches and compatibility with aluminum superconducting-qubit fabrication. This architecture will enable direct on-chip integration of quantum amplifiers with superconducting quantum processors, reducing the hardware infrastructure overhead and energy dissipation of large quantum systems. In addition, in our quest for ideal quantum amplifiers, we develop a general broadband isolation scheme, in conjunction with our Floquet TWPA implementation, as a promising avenue toward realizing nonreciprocal broadband amplifiers, which will significantly improve system-level efficiency and unlock new opportunities in experimental science.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152803</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Materials based on the Kagome Lattice</title>
<link>https://hdl.handle.net/1721.1/152802</link>
<description>Quantum Materials based on the Kagome Lattice
Kang, Min Gu
A central concept in condensed matter physics is the “emergence” of new macroscopic properties from complex microscopic interactions between many constituents. Quantum materials refer to a subset of condensed matter systems whose emergent properties defy description by classical physics or low-level quantum mechanics. Quantum materials exhibit truly diverse and fascinating properties, ranging from high-temperature superconductivity and topologically protected boundary currents to the fractionalization of elementary particles. The exotic properties discovered in quantum materials not only inspire and guide theoretical advancements – for example, the discovery of the Kondo effect in impurity spin systems led to the development of renormalization group theory – but also hold great promise for the next generation of quantum technologies – for instance, the discovery of topological insulators opens up the possibility of realizing novel spintronic and magnetoelectronic devices. Designing new quantum materials, understanding their exotic properties, and harnessing them for the development of new quantum technologies represent a central mission of condensed matter physics.&#13;
&#13;
Our strategy for designing new quantum materials and realizing new emergent quantum properties is based on a specific lattice geometry known as the kagome lattice. Insulating kagome lattices have been studied since the 1980s as a platform for novel quantum magnetic states. In the metallic case, the symmetry of the kagome lattice protects rich singularities in its electronic structure, including Dirac fermions at the Brillouin zone corner K, van Hove singularities at the zone edge M, and a flat band across the entire Brillouin zone. Once properly combined with other perturbations, these electronic singularities give the kagome lattice rich potential to realize diverse emergent quantum phenomena at the intersection of topological and strongly correlated physics. These tantalizing opportunities have been theoretically recognized and extensively explored since the early 2000s. However, a proper experimental realization of the kagome lattice electronic structure and related quantum phenomena had long remained elusive, despite being highly desired.&#13;
&#13;
This dissertation summarizes my and my collaborators’ contributions to the experimental realization of kagome lattice physics over the past six years. Together with parallel efforts from several other research groups worldwide, our work has played a central role in initiating, establishing, and advancing a new research field, “quantum materials based on the kagome lattice”. Our achievements are twofold. First, by employing a suite of synergistic band structure probes, we conducted in-depth investigations of the electronic structure of newly discovered 3d transition metal-based kagome lattice materials and demonstrated the long-sought realization of the kagome lattice electronic structure, namely Dirac fermions, flat bands, and van Hove singularities, near the Fermi level. Second, by combining these electronic singularities with the spin-orbit coupling, magnetism, and strong electronic interactions inherent in the 3d-kagome metals, we realized diverse topological and correlated quantum phenomena on the kagome lattice, thereby confirming numerous theoretical predictions made since the early 2000s. With these advancements in the experimental realization of kagome lattice physics, kagome lattice materials are now established as one of the most versatile and promising platforms for quantum materials research. It has been a great privilege to contribute to this field and to witness how it started, gained interest, spread globally, and became a major pillar of the community during my Ph.D. journey.&#13;
&#13;
I start the thesis with an essential introduction to the kagome lattice physics in Chapter 1. Chapter 2 summarizes a catalogue of kagome lattice materials discovered thus far, along with an overview of the physics studied in them. The subsequent chapters describe the details of our research, namely the realization of the kagome lattice electronic structure and associated quantum phenomena in various 3d kagome lattice materials. These include the realization of massive Dirac fermions and intrinsic anomalous Hall conductivity in a ferromagnetic kagome metal Fe3Sn2 (Chapter 3); the observation of coexisting surface and bulk Dirac fermions as well as flat bands in an antiferromagnetic kagome metal FeSn (Chapter 4); the realization of the topological flat band in a nonmagnetic kagome metal CoSn (Chapter 5); measurements of the quantum geometry associated with the kagome flat band (Chapter 6); observation of the sublattice-decorated van Hove singularity and its connection to charge order in CsV3Sb5 (Chapter 7); and observation of the intricate competition between charge order and superconductivity in the kagome metal series (K,Rb,Cs)V3(Sb,Sn)5 (Chapter 8). I conclude the dissertation by outlining the remaining goals and challenges in this research area and potential extensions to other lattice geometries in Chapter 9. It is my sincere hope that this dissertation will serve as a groundwork, guiding and inspiring future studies of kagome metals and relevant systems.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152802</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Microtransit: Design and Operations of Reservation-based Systems</title>
<link>https://hdl.handle.net/1721.1/152801</link>
<description>Toward Microtransit: Design and Operations of Reservation-based Systems
Cummings, Kayla
Static public transit infrastructure remains unresponsive to ever-changing mobility landscapes, contributing to transit ridership decline and compounding urban congestion. Simultaneously, the rapid growth of ride-sharing suggests that flexible, on-demand mobility services meet a critical need of urban travelers, despite their high fares. This context creates opportunities to design new hybrid microtransit services that shepherd the digital capabilities and operating flexibility of ride-sharing into the realm of high-capacity public transit. In this thesis, we discuss new strategic optimization frameworks for microtransit planning and operations. We develop decomposition algorithms to achieve insights at scale, and we evaluate new decision-making prototypes for transportation planners over case studies based on real-world data. We examine how these new microtransit technologies enable widespread coverage, rapid welfare gains for travelers, sustainable financial health for transportation operators, and reduced environmental footprints. Furthermore, we introduce new optimization frameworks for decision-making in other reservation-based, platformed systems, namely for server installation under demand uncertainty in cloud data centers. &#13;
&#13;
In Chapter 2, we examine opportunities for transit agencies to outsource mobility services to mobility-on-demand providers. We formulate a fare-setting model via large-scale, mixed-integer, non-convex optimization to jointly set discounted fares across the integrated network, subject to commuters' travel choices. We formulate a novel two-stage decomposition with a new solution approach combining tailored coordinate descent, parsimonious second-stage evaluations, and interpolations using special ordered sets. We learn that alliance priorities inform optimal fare designs: flat fares decrease total vehicle miles, while geographically informed discounts improve passenger happiness. The model improves system utilization and lowers prices for low-income and long-distance commuters, and our revenue allocation mechanism aligns profit-oriented private operators with public transit priorities. &#13;
&#13;
In Chapter 3, we optimize employee driver itineraries for door-to-door, reservation-based transportation services that capture vehicle rerouting capabilities following operating disruptions. We formalize the problem via two-stage stochastic optimization with a tight, network-based recourse model. Our activated Benders decomposition algorithm exploits linking relationships between the first-stage and second-stage problems to accelerate and strengthen Benders cuts. Using data from a major paratransit platform, we show that our algorithm scales to real-world instances, outperforming several benchmarks in terms of computational times, solution quality, and solution guarantees. From a practical standpoint, the model mitigates operating costs by strategically adding slack to driver itineraries, creating flexibility and robustness against supply and demand uncertainty.  &#13;
&#13;
In Chapter 4, we optimize the design and operations of a microtransit system that relies on reference lines and performs on-demand deviations in response to passenger demand. Our two-stage stochastic optimization model for microtransit network design leverages a novel subpath-based representation of microtransit operations in a load-expanded network to streamline on-demand deviations between checkpoint stops. We develop a double decomposition algorithm combining Benders decomposition and subpath-based column generation using a tailored label-setting algorithm. Our method scales to large practical instances based on Manhattan taxi data, with up to 100 candidate lines and hundreds of stops. Comparisons with transit and ride-sharing benchmarks suggest that microtransit can promote efficient, equitable, and sustainable mobility: high demand coverage, low operating costs, high level of service, higher accessibility, and limited environmental footprint. &#13;
&#13;
Chapter 5 addresses an online resource allocation problem to optimize cloud data center configurations under demand uncertainty. We propose an integer optimization formulation for an offline hardware installation problem to maximize demand coverage under capacity constraints, reflecting real-world data center operations. To handle the online dynamics, we adjust the formulation to reserve capacity for a single-sample approximation of the dynamic decision-making problem, as opposed to multi-scenario sample average approximation. This tractable approach is accompanied by (i) theoretical results showing that single-sample approximation provides strong performance guarantees, as well as (ii) computational results using real-world data showing the cost benefits. The proposed solution has been deployed in Microsoft data centers worldwide to support data center managers’ decision-making for the rack placement process and alleviate inefficiencies in data center operations.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152801</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asymptotics, exact results, and analogies in p-adic random matrix theory</title>
<link>https://hdl.handle.net/1721.1/152799</link>
<description>Asymptotics, exact results, and analogies in p-adic random matrix theory
Van Peski, Roger
This thesis is a compilation of exact results regarding p-adic random matrices and Hall-Littlewood polynomials, and asymptotic results proven using these tools. Many of the results of both types are motivated and guided by analogies to existing results in classical random matrix theory over R, C or H, but often exhibit probabilistic behaviors which differ markedly from these known cases. Specifically, we prove the following:&#13;
&#13;
(1) We show exact relations between products and corners of random matrices over Qₚ and Hall-Littlewood processes, which are direct analogues of the classical relations between singular values of real or complex random matrices and type A Heckman-Opdam hypergeometric functions. (2) We prove that the boundary of the Hall-Littlewood t-deformation of the Gelfand-Tsetlin graph is parametrized by infinite integer signatures, extending results of Gorin and Cuenca on boundaries of related deformed Gelfand-Tsetlin graphs. (3) In the special case when 1/t is a prime p we combine this with the aforementioned relations between matrix corners and Hall-Littlewood polynomials to recover results of Bufetov-Qiu [BQ17] and Assiotis [Ass20] on infinite p-adic random matrices. (4) Using the above relation between matrix products and Hall-Littlewood polynomials, together with explicit formulas for the latter, we obtain exact product formulas for the joint distribution of the cokernels of products A₁, A₂A₁, A₃A₂A₁, … of independent additive-Haar-distributed matrices Aᵢ over the p-adic integers Zₚ. This generalizes the explicit formula for the classical Cohen-Lenstra measure on abelian p-groups. (5) We give an exact sampling algorithm for products of corners of Haar GLₙ(Zₚ)-distributed matrices, and show by analyzing it that the singular numbers of such products obey a law of large numbers and their fluctuations converge dynamically to independent Brownian motions. (6) We classify left GLₙ(Qₚ)-invariant stochastic processes on the (discrete) homogeneous space GLₙ(Qₚ)/GLₙ(Zₚ) with independent increments. We consider the one with smallest jumps, a p-adic analogue of multiplicative Brownian motion on GLₙ(C)/U(n), and show that the singular numbers of the matrix evolve as independent Poisson jump processes which are forced to remain ordered by reflection off the walls of the type A Weyl chamber. 
(7) As N and time go to ∞, we show that this process converges to a stationary limit, with density explicitly expressed in terms of certain intricate exponential sums. The proof uses new Macdonald process computations, which feature a symmetric function incarnation of the explicit solution to the inverse moment problem for abelian p-groups shown recently by Sawin and Wood [SW22b]. (8) We prove that this reflected Poisson walk is universal, governing dynamical local limits for the singular numbers of p-adic random matrix products at both the bulk and edge, and may thus be viewed as a p-adic analogue of the extended sine and Airy processes. (9) Extrapolating this process to general real p &gt; 1, we analyze the limit as p → 1. We prove a law of large numbers, a central limit theorem relating it to stationary solutions of certain SDEs, and a bulk limit to a certain explicit stationary Gaussian process on R. Unlike most previously studied limits of Macdonald processes, the latter exhibits scaling exponents characteristic of the Edwards-Wilkinson universality class in (1+1) dimensions, which may be seen as a reflection of the locality of interactions between singular numbers, which differs markedly from classical random matrix theory.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152799</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Handheld MRI for Point-of-Care and Educational Applications</title>
<link>https://hdl.handle.net/1721.1/152795</link>
<description>Handheld MRI for Point-of-Care and Educational Applications
Kuang, Irene A.
Magnetic Resonance Imaging (MRI) is a powerful, non-invasive imaging modality for visualizing the body’s internal anatomy and providing contrast between soft tissues. However, the reach of clinical MRI scanners is limited by their expensive infrastructure (millions of USD), including radiofrequency (RF) shielded rooms and liquid helium-cooled superconducting magnets. Permanent magnet arrays present a low-cost and portable alternative for point-of-care and educational MR applications, at the cost of reduced image quality. This work focuses on the following contributions: (1) the development of a novel magnet topology, referred to as the “spokes-and-hub” configuration, which incorporates a computationally efficient equivalent charge magnetic field analysis technique using surface charges of bar magnets arranged in oppositely polarized rings; (2) the optimization of dithered RF pulses through the utilization of a microcontroller and inexpensive hardware; (3) the design of gradient encoding fields specific to the spokes-and-hub magnet; and (4) the reconstruction of images obtained from phantoms. Finally, these contributions are summarized into a framework for the comprehensive imaging system design demonstrated in this thesis, which will also allow for future iteration, scaling, and advancement of spokes-and-hub magnet design.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152795</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Visible-Light Platforms, Devices, and Systems for Augmented Reality and Beyond</title>
<link>https://hdl.handle.net/1721.1/152794</link>
<description>Integrated Visible-Light Platforms, Devices, and Systems for Augmented Reality and Beyond
Notaros, Milica
Augmented-reality head-mounted displays have many wide-reaching applications in defense, medicine, etc. However, current commercial head-mounted displays are bulky, heavy, and indiscreet. Moreover, these current displays are not capable of producing holographic images with full depth cues; this lack of depth information results in users experiencing eyestrain and headaches that limit long-term and wide-spread use of these displays. Here, to address these limitations, VIPER (Visible Integrated Photonics Enhanced Reality), a novel integrated-photonics-based holographic display, is developed and demonstrated. The VIPER display consists of a single transparent chip that sits directly in front of the user's eye and projects 3D holograms.&#13;
&#13;
First, this VIPER display concept is proposed. Second, the first transparent 300-mm-wafer foundry platform on glass for visible-light integrated photonics is developed. Third, a novel passive optical-phased-array-based architecture and holographic image encoding methodology are developed and used to demonstrate a large-scale passive version of the VIPER display. Fourth, to enable compact and efficient modulation for dynamic encoding of the VIPER display, liquid-crystal material is integrated into the VIPER platform and used to demonstrate the first integrated visible-light liquid-crystal-based phase and amplitude modulators. Fifth, these liquid-crystal-based modulators are leveraged to demonstrate the first actively-tunable visible-light integrated optical phased arrays. Sixth, these liquid-crystal-based components are used to develop and demonstrate a novel active version of the optical-phased-array-based VIPER pixel. Seventh, the architecture for the full active VIPER display is developed and used to demonstrate dynamic video display functionality.&#13;
&#13;
Finally, applications beyond augmented reality are presented, including chip-based underwater communications, 3D printers, trapped-ion systems, and optical tweezers.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152794</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Profiling Host Respiratory Responses to SARS-CoV-2 Infection</title>
<link>https://hdl.handle.net/1721.1/152793</link>
<description>Profiling Host Respiratory Responses to SARS-CoV-2 Infection
Miao, Vincent N.
Single-cell genomics has revolutionized the granularity with which we can dissect cellular phenotypes in both health and disease. Global atlases of homeostatic tissues have revealed rare cell populations critical to organ function and provide comprehensive genomic reference maps for the scientific community, and profiling of non-homeostatic tissues—across oncology, inflammation, autoimmunity, infectious disease and beyond—has revealed insights into the drivers of disease pathogenesis and potential avenues for prophylactic or therapeutic intervention. Over the past three years, we have studied SARS-CoV-2 infection through the lens of single-cell genomics, querying pre-existing single-cell datasets, developing in vitro models of disease, validating animal models of infection, and profiling human samples with our clinical collaborators, all the while adding to the vast trove of COVID-19 knowledge generated amidst a global pandemic. In the early days of the pandemic, we identify putative targets of SARS-CoV-2 infection in human and non-human primate tissues and discover that the SARS-CoV-2 entry receptor ACE2/Ace2 is an interferon-stimulated gene in human but not murine nasal epithelia. We then profile nasopharyngeal swabs taken from individuals upon hospital admission, stratifying epithelial responses to infection based on ensuing peak disease severity. We identify a muted anti-viral response to infection within the nasal epithelia as a correlate of severe but not mild or moderate disease progression. Finally, we turn to non-human models of SARS-CoV-2 infection to profile the lower respiratory response to viral perturbation across multiple acute time points, pinpointing cellular populations and attributes that are associated with enhanced protection in the lower respiratory tract. 
Together, our work highlights the use of single-cell genomics in uncovering tissue-based host-pathogen interactions and provides a framework for rapidly and systematically assessing the immune response to emerging pathogenic threats.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152793</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatially precise in situ transcriptomics in intact biological systems</title>
<link>https://hdl.handle.net/1721.1/152792</link>
<description>Spatially precise in situ transcriptomics in intact biological systems
Sinha, Anubhav
Biological tissues are composed of cells of diverse types and states that are spatially organized with nanoscale precision over extended length scales to perform coordinated functions. Understanding patterns of gene expression in tissues is needed to understand complex cellular behaviors in health and disease. Current methods for highly multiplexed RNA imaging are limited in their spatial resolution and lack precise subcellular landmarks, limiting the ability to localize transcripts to nanoscale and subcellular compartments. Here, I describe the development of targeted expansion sequencing (targeted ExSeq), a multiplexed RNA imaging method that enables efficient detection of a predefined set of transcripts in three-dimensional tissues with nanoscale resolution. Targeted ExSeq integrates expansion microscopy, an approach for volumetric super-resolution tissue imaging through the principle of physical magnification, with targeted in situ sequencing, an approach for imaging RNA transcripts of interest. Targeted ExSeq was used to spatially map layer-specific cell type organization in the primary visual cortex in the mouse brain, nanoscale RNA localization within dendrites and dendritic spines in the mouse hippocampus, and position-dependent states of tumor and immune cells in a human metastatic breast cancer biopsy. The approach for targeted RNA in situ sequencing was adapted to whole-mount RNA imaging of intact preimplantation mouse embryos. By integrating immunofluorescence and in situ sequencing with live-embryo mechanical measurements, a spatially-resolved multimodal map of preimplantation embryogenesis was generated to study self-organization in development, finding early lineage segregation events and progressive mechanical softening. Joint measurements demonstrated that early lineage segregation events have differential mechanical and morphological properties that align with distinct developmental programs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152792</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Demonstration of Quantum Low Probability of Intercept for Ultra-Secure Communication</title>
<link>https://hdl.handle.net/1721.1/152789</link>
<description>Experimental Demonstration of Quantum Low Probability of Intercept for Ultra-Secure Communication
Heyes, Jane E.
Secure communication systems utilize encryption with shared keys between the sender and receiver of the encrypted message. For added security, the encrypted message can be hidden within a significant amount of noise so that the eavesdropper could not even extract the actual encrypted message, let alone decrypt it. However, such a system, called low probability of intercept (LPI), also uses a shared key which is susceptible to security failure caused by key disclosure, just like encryption systems. A quantum version of LPI (QLPI) operates entirely differently: its key is transient and not shared, and its security is based on the quantum no-cloning theorem. In this work, we will present a tabletop proof-of-concept experiment to demonstrate QLPI, achieving error-free secure communication at an internet-compatible rate of 0.5 Gbps over the equivalent of 50 km of telecom fibers.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152789</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conformal Methods for Efficient and Reliable Deep Learning</title>
<link>https://hdl.handle.net/1721.1/152788</link>
<description>Conformal Methods for Efficient and Reliable Deep Learning
Fisch, Adam
Deep learning has seen exciting progress over the last decade. As large foundation models continue to evolve and be deployed into real-life applications, an important question to ask is how we can make these expensive, inscrutable models more efficient and reliable. In this thesis, we present a number of fundamental techniques for building and deploying effective deep learning systems that are broadly based on conformal prediction, a model-agnostic and distribution-free uncertainty estimation framework. We develop both theory and practice for leveraging uncertainty estimation to build adaptive models that are cheaper to run, have desirable performance guarantees, and are general enough to work well in many real-world scenarios. Empirically, we primarily focus on natural language processing (NLP) applications, together with substantial extensions to tasks in computer vision, drug discovery, and medicine.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152788</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-Inspired Deep Learning for Inverse Problems in MRI</title>
<link>https://hdl.handle.net/1721.1/152787</link>
<description>Physics-Inspired Deep Learning for Inverse Problems in MRI
Singh, Nalini M.
We demonstrate the power of combining the forward image acquisition model with deep learning solutions for inverse problems in magnetic resonance imaging (MRI), from individual network layers to the network architecture design and inference procedure.&#13;
&#13;
First, we propose neural network layers that combine image space representations with representations in Fourier space, where MRI data is acquired. These layers can be used as drop-in replacements for standard image space convolutions in a variety of network architectures and yield higher quality reconstructions across a wide range of MR imaging tasks.&#13;
&#13;
Next, we demonstrate a deep learning framework for rigid-body motion correction in MRI, where the forward imaging model informs both the network architecture and the inference procedure. Our method incorporates potentially unknown motion parameters as inputs to the network and then optimizes them for each test example. The optimization is performed via an objective function that forces the reconstructed image and estimated motion parameters to be consistent with the acquired data. This approach reduces the joint image-motion parameter search used by most motion correction strategies to an inference-time search over motion parameters alone, greatly simplifying the complexity of the optimization problem to be solved for a novel image. Our hybrid method achieves the high reconstruction quality metrics that characterize deep learning solutions while retaining the benefits of explicit model-based optimization – in particular, the ability to reject examples where the network produces poor reconstructions. Experiments demonstrate the advantages of this combined approach over purely learning or model-based reconstruction techniques.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152787</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximate Bayesian Modeling with Embedded Gaussian Processes</title>
<link>https://hdl.handle.net/1721.1/152784</link>
<description>Approximate Bayesian Modeling with Embedded Gaussian Processes
Chen, Rujian
Bayesian inference plays an important role in uncertainty quantification and decision making in science and engineering applications. However, the presence of complex model components can significantly impact model accuracy and computational feasibility of Bayesian inference methods. We propose the embedded Gaussian process framework to address these challenges. The embedded GP model captures the uncertainty of complex physical models and incorporates it in posterior inference where a joint distribution of all uncertain quantities, including the complex physical models, is learned from data. We show that under appropriate conditions, inference can be done efficiently through various Gaussian process related properties and computational techniques. Compared to previous deterministic modeling approaches, our proposed framework leads to improved model fit and decision making capabilities, which we demonstrate through an application to Bayesian inference and experimental design for a large-scale off-shore oil production system featuring complex fluid dynamical processes.&#13;
&#13;
Gaussian processes are increasingly employed in science and engineering applications to provide efficient approximations to complex computational models. Recent literature continues to expand and generalize Gaussian process surrogate models, including the introduction of generalized real or simulator observations. Despite the increasing applications, theoretical properties and statistical implications of GP approximations on data analysis are not yet fully understood. We contribute advances in this front by studying asymptotic properties of the approximate posterior in GP surrogate models with generalized observations. We prove conditions and guarantees for consistent approximate inference in terms of posterior expectations and KL-divergence. Our convergence results provide a family of consistency guarantees for downstream prediction, estimation and decision making.&#13;
&#13;
Finally, we study the problem of hyperparameter optimization in probabilistic latent variable models. Although efficient algorithms are available for many classes of popular models, they cannot handle fully general models due to the appearance of intractable quantities that must be computed. In general, hyperparameter optimization is well known to be a challenging problem for complex or high-dimensional models without special model features. This is the case, for example, for the embedded GP model, where GP hyperparameters can have a strong impact on inference properties. We develop a new particle-based approximate algorithm which is both simple to implement and can handle general models, including hard cases that cannot be solved by the aforementioned specialized optimizers. We show that our algorithm outperforms state-of-the-art optimizers on a number of simulation experiments. We then demonstrate our method on the embedded GP model for the large-scale oil production system. Besides generating substantial improvements to model performance, our method allows us to infer simulator mismatch characteristics from data in greater detail, a capability which was not possible before.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152784</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolic and genetic factors guiding hematopoietic cell fate</title>
<link>https://hdl.handle.net/1721.1/152783</link>
<description>Metabolic and genetic factors guiding hematopoietic cell fate
Do, Brian Tshao
Proper organismal development requires precise control over cellular identity, and a wide range of diseases is characterized by altered differentiation states. Understanding the environmental factors that interact with gene expression networks to regulate cell state holds immense potential for both basic scientific understanding and therapeutic advancements to treat cancer and other diseases. Of particular interest is the role of nutrient availability and metabolism in modulating cell states, as metabolites play dual roles as both signaling cues and physical substrates. However, the specific metabolic cues that maintain or alter cell state in physiologically relevant contexts have not been fully characterized. Moreover, we lack a mechanistic understanding of how metabolic perturbations translate into specific context-appropriate decisions across diverse cell types. &#13;
&#13;
In this dissertation, we examine how metabolism influences the acquisition of context-specific cell states in normal and transformed hematopoiesis. Guided by metabolism-focused small molecule screens, we find that nucleotide depletion and imbalance initiate differentiation-related cell state changes specifically by impairing DNA replication, rather than as a general consequence of impaired proliferation or reduced viability. Genetic screens in three cell types reveal that while different cellular contexts have divergent responses to depletion of metabolic enzymes and other pathways, inhibition of nucleotide metabolism and DNA replication are shared paths to differentiated cell states across all three model systems. Surprisingly, checkpoint signaling is not required for these effects. Instead, by tracking early transcriptional and epigenetic changes, we find that DNA replication stress appears to preferentially activate primed regulatory loci with pre-existing chromatin accessibility. In the presence of differentiation blockades, this creates a synthetic cell state where features of the progenitor gene regulatory program are retained even as the maturation program emerges, whereas in untransformed cells we observe the acceleration of an otherwise normal maturation process. Transcriptional and epigenetic changes begin while cells are still in S phase but do not depend on which genomic regions are being actively replicated, suggesting that a previously uncharacterized signaling or sequestration mechanism could enable replication stress to trigger maturation programs in trans; we identify the Mediator kinase module as a potential link. Finally, we catalog additional context-specific metabolic and genetic determinants of differentiation that are specific to replication stress. 
Together, the studies presented in this dissertation shed light on how nucleotide depletion and DNA replication stress accelerate cell state transitions across multiple hematopoietic cell types, with implications for how such processes may underlie developmental and disease states and be harnessed for therapeutic benefit.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152783</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The ecology of prophages at the microscale</title>
<link>https://hdl.handle.net/1721.1/152781</link>
<description>The ecology of prophages at the microscale
Szabo, Rachel E.
Microbial communities shape nearly every habitat on Earth. Delineating the principles by which microbial collectives assemble, function, and evolve across systems is fundamental to transforming microbial ecology from an observational science to a predictive one. In this thesis, I argue that temperate bacteriophages – the pervasive viruses that can reproduce in either a predatory or mutualistic manner with their bacterial hosts – must be explicitly considered in order to develop a more complete framework for microbial community development. However, our understanding of the impacts of temperate phage induction and maintenance in complex communities is limited. In my first study, I investigated the ecological processes controlling community development at the microscale, which is approximately the scale at which many microbial communities assemble in nature. Using synthetic resource particles as scaffolds for the assembly of discrete, microscale ecosystems, I characterized the complex marine microbial communities that grew on hundreds of individual particles. I found that these microscale communities diverge both taxonomically and functionally, with prophage induction, especially among founding community members, emerging as one factor significantly associated with this variability. However, the ecological modulators of lysogeny-lysis transitions in complex communities remain largely unknown. Therefore, in my second study, I explored the extent to which prophage dynamics may be influenced by chemical cues serving as proxies for the densities and identities of surrounding community members – namely, quorum sensing autoinducers. Through a large-scale genomic survey of prophages and their hosts across the bacterial domain, I estimated the extent to which prophages co-opt quorum sensing systems by encoding their components as auxiliary genes. 
Collectively, this work suggests that variability in community assembly at the microscale – driven, in part, by the dynamics of temperate phages – may underlie patterns in the diversity and functionality of larger-scale microbial ecosystems.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152781</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Re-Innovation Nation: The Strategic and Political Logic of Technology Transfer Policy in Rising China</title>
<link>https://hdl.handle.net/1721.1/152780</link>
<description>Re-Innovation Nation: The Strategic and Political Logic of Technology Transfer Policy in Rising China
Minnich, John David
This project examines China’s efforts to accelerate its economic rise using technology extraction policies, defined as measures that condition foreign access to China’s market on technology transfers to domestic firms. Using original data, I address two puzzling and previously unexamined patterns in China’s technology transfer behavior. First, China’s use of tech extractors rose sharply in the years after it joined the World Trade Organization (WTO) in 2001, only to fall equally dramatically after 2015. Second, within this broad-based increase after 2001, there was surprising variation across industries in China’s use of these tools. China issued tech extractors liberally in many high value-added industries, including high-speed rail, aircraft manufacturing, and renewable energy technology. At the same time, it used them sparingly in others, such as battery technology, precision measurement equipment, and semiconductor design and fabrication.&#13;
&#13;
What explains variation in China’s use of technology extraction policies? I argue that national power and regime security concerns lead China to pursue tech extraction in strategic industries, but that China is constrained in doing so, even in the most strategically vital sectors, by its bargaining power over foreign firms. China’s leverage is weakest when policy enforcement capacity is low and when it sits in the middle of global value chains in an industry, such that most imports consist of foreign inputs to be processed locally for re-export elsewhere. In these industries, China relies more on foreign firms to drive export growth and employment than they rely on it as a final market. This limits China’s bargaining power, constraining the use of tech extractors.&#13;
&#13;
I evaluate these arguments using a combination of statistical analysis of an original industry-level dataset on technology extractors from 1995-2020 and detailed qualitative case studies of tech extraction in three strategic industries. My data, based on manual analysis of several hundred Chinese language regulations, reveal that strategic industries account for 85 percent of the increase in the use of tech extractors after 2001. However, I find that China is more than twice as likely to use these policies in strategic industries in which it is downstream of value chains as a final market than in those in which it is intermediate. Case studies of tech extraction efforts in wind turbine technology, semiconductors, and aviation illuminate the relationship between enforcement capacity and the rise of tech extractors, value chain position and the non-use of these tools in some strategic sectors, and the causes behind the decline of technology extraction policies after 2015.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152780</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical aspects of optimal transport</title>
<link>https://hdl.handle.net/1721.1/152775</link>
<description>Statistical aspects of optimal transport
Stromme, Austin J.
Optimal transport (OT) is a flexible framework for contrasting and interpolating probability measures which has recently been applied throughout science, including in machine learning, statistics, graphics, economics, biology, and more. In this thesis, we study several statistical problems at the forefront of applied optimal transport, prioritizing statistically and computationally practical results. We begin by considering one of the most popular applications of OT in practice, the barycenter problem, providing dimension-free rates of statistical estimation. In the Gaussian case, we analyze first-order methods for computing barycenters, and develop global, dimension-free rates of convergence despite the non-convexity of the problem.&#13;
&#13;
Extending beyond the Gaussian case, however, is challenging due to the fundamental curse of dimensionality for OT, which motivates the study of a regularized, and in fact more computationally feasible, form of optimal transport, dubbed entropic optimal transport (entropic OT). Recent work has suggested that entropic OT may escape the curse of dimensionality of un-regularized OT, and in this thesis we develop a refined theory of the statistical behavior of entropic OT by showing that it attains truly dimension-free rates of convergence in the large-regularization regime and automatically adapts to the intrinsic dimension of the data in the small-regularization regime. We also consider the rate of approximation of entropic OT in the semi-discrete case, and complement these results by considering the problem of trajectory reconstruction, proposing two practical methods based on both un-regularized and entropic OT.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152775</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Miniature bioanalytical assays utilizing mechanical actuation of microspheres</title>
<link>https://hdl.handle.net/1721.1/152766</link>
<description>Miniature bioanalytical assays utilizing mechanical actuation of microspheres
Hanhauser, Emily B.
An estimated 70% of medical decisions rely on diagnostic results, yet half of the global population lacks access to diagnostic testing. Geography contributes to this gap, with decreased access found in decentralized settings, which can lack the physical and labor infrastructure necessary for gold standard bioanalytical assays. In such settings, lateral flow affinity assays remain the technique of choice, due to their fast time-to-result, ease-of-use, and deliverable geometry. However, these assays suffer in sensitivity, with limits of detection 10-1000x higher than their gold standard counterparts. Capitalizing on transport advantages at small scales, many microfluidic devices enable high sensitivity assays in a compact form, but their translation to the field is often challenging due to their complexity and the required supporting instrumentation. As a result, gaps remain in diagnostics suited to the realities of operating in decentralized settings.&#13;
&#13;
Inspired by challenges demonstrated during the height of the COVID-19 pandemic, this thesis introduces an affinity assay platform that utilizes simple mechanical, nonfluidic actuation of microspheres on a sensing surface to quantify bioanalytes in liquid samples. The assay integrates two processes: capture of analytes during bead sedimentation, and analysis based on bead interactions with a sensing surface using optical microscopy and image processing. The assay can be performed in a standalone device using minimal manual steps and in less than 30 minutes. Combined with the single-molecule sensitivity demonstrated by previous bead-based assays, this platform has the potential to enable high sensitivity bioanalysis with the ease-of-use profile required for decentralized settings.&#13;
&#13;
In the first part of this thesis, we describe a framework for predicting bead-analyte capture over a range of bioanalytes to aid in design across applications. We apply this method to design and analyze our proposed capture method, and to examine and predict transport advantages that arise from settling microspheres. In the second part, we investigate multiple simple actuation mechanisms that lead to bioanalyte quantification based on bead-surface interactions. While nearly all mechanisms show initial efficacy, thermal diffusion is selected for further development, demonstrating picomolar limits of detection with a model assay; bead sliding is also explored for its potential versatility and simplicity of imaging. As nonspecific binding of beads to the surface limits sensitivity, we also theoretically model and experimentally investigate surface coating techniques to minimize this effect and demonstrate preliminary enhanced assay sensitivity using a zwitterionic coating.&#13;
&#13;
Finally, using a simple device prototype, we demonstrate the ability of our assay to quantify cardiac troponin I (cTnI), an established biomarker for cardiac injury. Because elevated levels can indicate heart attack or other critical conditions, measuring cTnI accurately and rapidly is essential. cTnI is quantified via immunoaffinity techniques, with current-generation high-sensitivity lab assays reaching a 0.001 ng/mL limit of detection (LOD) in 30 minutes to one hour, and bedside point-of-care (POC) devices reaching a 20x higher LOD in only 15 minutes. The instrumentation requirements of these assays make them expensive and challenging to perform in decentralized settings. As the global leading causes of death continue to transition from infectious to chronic diseases, increasing access to cTnI diagnostics will be crucial to improving health outcomes.&#13;
&#13;
Our results show that our integrated assay can detect cTnI at 0.01 ng/mL in buffer and at 0.1-1 ng/mL in 10x diluted serum in a 1% BSA buffer in under 30 minutes. This is a clinically useful result: our assay shows LODs 5-25x higher than current bedside POC assays and on par with previous-generation high-sensitivity lab assays (0.1-1 ng/mL), but in a format that requires a single manual transfer step and no specialized or dedicated instrumentation. The performance of the current assay is limited by sensing surface variability, which could be improved by optimization of surface chemistry and blocking reagents. Overall, the platform presented in this thesis could enable quantification of bioanalytes at sensitivities approaching current standard methods but in a user-friendly, high-throughput, distributable and rapid format, an important step toward filling the gap in technology for decentralized diagnostics and for other monitoring applications, such as those in water and food safety.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152766</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soft Robotic Platforms for the Simulation of Cardiovascular Disease and Device Development</title>
<link>https://hdl.handle.net/1721.1/152765</link>
<description>Soft Robotic Platforms for the Simulation of Cardiovascular Disease and Device Development
Rosalia, Luca
The advancement of safe and effective medical devices critically hinges on the utilization of high-fidelity models of human disease. The closer these models emulate the intricacy of pathophysiological phenomena driving diseases and their effects, the greater their potential impact on clinical medicine. This thesis delves into the exploration of soft robotics in disease modeling, a promising avenue for faithfully replicating the biomechanical aspects of human disease in a patient-specific manner. In addition to supporting the development and evaluation of medical devices, high-fidelity models have the potential to elucidate pathophysiological mechanisms of disease onset that are not yet fully understood, support personalized clinical decisions and interventions, and be utilized as pedagogical tools for medical education and training. This thesis demonstrates the efficacy of soft robotics in recapitulating cardiovascular diseases, specifically aortic stenosis (AS) and heart failure with preserved ejection fraction (HFpEF), which together affect over 40 million people worldwide.&#13;
&#13;
This work first describes the development of a tunable and biomimetic soft robotic aortic sleeve that can re-create the anatomy and hemodynamics of AS with high fidelity. In a preclinical swine model of AS and through a combination of invasive monitoring and 4-dimensional flow imaging techniques, this thesis demonstrates the ability of the aortic sleeve to recapitulate clinically relevant hemodynamics of AS and mimic the complex transvalvular blood flow patterns associated with this disease. The utility of the aortic sleeve in the re-creation of patient-specific AS hemodynamics is showcased using a 3D-printed hemodynamic in vitro model of AS. Through the miniaturization of the soft robotic aortic sleeve, this work then describes the development of a chronic small animal model of AS, which was leveraged to investigate ventricular remodeling and plasticity due to the cessation of the biomechanical stimuli induced by the aortic sleeve.&#13;
&#13;
This thesis then presents the design and development of a soft robotic cardiac sleeve that can tunably recapitulate loss of cardiac compliance associated with HFpEF. In a patient-specific in vitro model of ventricular remodeling and, separately, in an acute porcine model, this work demonstrates that cardiac filling capacity can be finely modulated by varying the actuation level of the cardiac sleeve, enabling the recapitulation of the hemodynamic aberrations of HFpEF. The computational platforms leveraged for the design and optimization of these soft robotics-driven models of AS and HFpEF, including lumped-parameter, finite element, and computational fluid dynamics models, are also described.&#13;
&#13;
The final part of this thesis demonstrates the utility of these platforms for the design and evaluation of device-based solutions for AS and HFpEF. Specifically, it describes an in silico study focused on designing a pulsatile-flow mechanical circulatory support device for HFpEF, an in vitro investigation of patient-specific hemodynamics following transcatheter aortic valve replacement (TAVR), as well as a demonstration of the use of the soft robotic porcine model of HFpEF hemodynamics for medical device testing.&#13;
&#13;
By elucidating the first applications of soft robotics in disease modeling, this dissertation holds the potential for profound impact by enhancing our understanding of disease mechanisms, aiding in medical education, and offering potential for innovative and personalized therapeutic solutions for patients with AS and HFpEF. In the future, it may propel the evolution of soft robotic models for other cardiovascular conditions and beyond the cardiovascular field, catalyzing further advancements in preclinical and translational research.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152765</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Learning from Uncurated Data</title>
<link>https://hdl.handle.net/1721.1/152764</link>
<description>Robust Learning from Uncurated Data
Chuang, Ching-Yao
The field of machine learning has witnessed a growing interest in learning from uncurated data, which involves training models from data that has not been carefully curated or labeled. However, this type of data is typically noisy, incomplete, and riddled with errors, making it challenging for machine learning algorithms to learn effectively. This thesis focuses on the development of robust learning methods that can effectively leverage uncurated data while being resilient to the inherent noise and errors in the data. Specifically, we investigate the robustness of contrastive learning, a prominent self-supervised representation learning technique that contrasts semantically similar and dissimilar pairs of samples. First, we delve into the fundamental challenge inherent in learning from unlabeled data. We find that eliminating false negatives and encouraging hard negatives notably enhance downstream performance and training efficiency. Subsequently, we shift our focus to the omnipresent noise within the dataset. We pay particular attention to the emergence of false positive pairs, a phenomenon particularly prevalent in multimodal contrastive learning settings. In the final segment of our study, we consider the efficient eradication of biases from large-scale models. It is observed that, when models are pretrained on biased, uncurated data, they frequently inherit numerous inappropriate biases, which consequently lead to skewed predictions. In an effort to rectify this, we devise a debiasing algorithm that operates independently of any data or training requirements. Throughout the dissertation, the common thread tying these three components together is a robust and comprehensive approach to mitigating the unique error types associated with unlabeled, noisy, and biased data respectively, offering substantial contributions to the realm of machine learning research.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152764</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-bounded Programming using Constrained, Hierarchical, Stochastic Shortest Path Problems</title>
<link>https://hdl.handle.net/1721.1/152763</link>
<description>Risk-bounded Programming using Constrained, Hierarchical, Stochastic Shortest Path Problems
Hong, Sungkweon
The escalating demand for autonomous systems that enhance human lives has underscored a pressing need to ensure their safety. However, guaranteeing safety in the face of extensive decision-making and uncertain scenarios has proven to be a formidable challenge. We argue that an autonomous agent designed for safety-critical and large-scale domains must successfully overcome the following challenges: 1) the agent must be able to make decisions by adhering to risk bounds to ensure safety; 2) it needs to understand and make decisions over hierarchical tasks to address the issue of scalability; 3) the agent should be able to generate contingencies so that there are predefined backup plans for various possible scenarios; and 4) it must ensure timely planning to prevent catastrophic consequences in time-critical and safety-critical applications. This thesis addresses all four needs by first framing a constrained and hierarchical stochastic shortest path problem (HC-SSP) and then solving it using an anytime algorithm.&#13;
&#13;
In this thesis, we present an executive named Zeppelin, which employs a divide-and-conquer approach to solving HC-SSP, leveraging the hierarchical structure of the problem to generate solutions in an anytime fashion. Zeppelin consists of two main components: ACDC, a risk-bounded conditional planner responsible for addressing individual tasks, and RHCP, a coordinating planner that decomposes hierarchically structured tasks into individual tasks and resolves them through ACDC calls.&#13;
&#13;
To solve individual tasks while adhering to Zeppelin’s anytime property, ACDC employs a two-stage approach. First, it solves a Lagrangian dual of the planning problem to find a feasible solution swiftly. Then, it incrementally updates the solution by iteratively generating candidates, thereby achieving the anytime property. A key idea underlying both stages is the reformulation of the constrained problem into a series of unconstrained problems, allowing ACDC to leverage off-the-shelf state-of-the-art solvers.&#13;
&#13;
RHCP employs an iterative resource allocation strategy, providing a systematic approach for decoupling hierarchically structured tasks. To effectively guide the search towards promising resource allocations, RHCP leverages the outputs of ACDC, specifically lower and upper bounds on the solution quality. By incorporating these bounds, RHCP can direct the search toward resource allocations that hold the potential for better solutions.&#13;
&#13;
We demonstrate Zeppelin in the context of commercial aircraft operations by deploying it as a mission manager for automated decision-making processes from departure to arrival. The experiment demonstrated that Zeppelin could generate an initial safe plan in just 1/20 of the computation time required by the state-of-the-art method. In addition, Zeppelin successfully achieved a near-optimal solution with a 1.6% optimality gap with minimal additional computation time, demonstrating its anytime property. This practical application not only highlights the capabilities of Zeppelin but also facilitates the identification and bridging of gaps between theoretical foundations and real-world applications.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152763</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing Entanglement and Symmetry Breaking Orders via Spectroscopies and Machine Learning</title>
<link>https://hdl.handle.net/1721.1/152761</link>
<description>Probing Entanglement and Symmetry Breaking Orders via Spectroscopies and Machine Learning
Liu, Tongtong
Quantum materials are essential components in the development of advanced technologies, including magnetic-field sensors, energy-related technologies, and quantum computers. In particular, the search for highly entangled quantum materials is crucial, because entanglement is a resource in quantum information applications. A key step towards finding and fabricating highly entangled materials is to develop experimental and theoretical methods to characterize entanglement. In large-scale solid-state systems, experimental characterization relies on spectroscopies, including X-ray and neutron spectroscopies. Among the different conceptual and mathematical formalisms of entanglement, multipartite entanglement has gained significance due to its accessibility through local probe techniques such as spectroscopies. Resonant inelastic X-ray scattering (RIXS) is an advanced X-ray spectroscopic technique that can probe collective excitations arising from charge, spin, and orbital degrees of freedom, which makes it suitable for characterizing multipartite entanglement. RIXS also exhibits potential beyond current understanding: with exceptional precision, it can measure four-point correlations beyond the capability of other spectroscopic techniques, which inspires new entanglement probes. This dissertation covers several aspects of probing entanglement and symmetry breaking orders using both spectroscopies and machine learning. In the first part, on probing entanglement using spectroscopies, we introduce a theoretical proposal for using RIXS to probe entanglement. We propose a new RIXS technique that can extract four-point correlations beyond the scope of the spin and charge structure factors. We verify our method using computational RIXS spectra and theoretically propose multipartite entanglement witnesses based on the four-point correlations for general fermion systems.
Building upon the theme of extracting information from materials using spectroscopies, we further present two theoretical works that predict symmetry breaking orders in two-dimensional systems, which can be directly visualized using spectroscopic techniques. (1) We investigate local signatures of quantum Hall ferroelectric and nematic states arising near impurities that can be observed via Scanning Tunnelling Microscopy (STM). (2) We study charge orders at fractional fillings in twisted transition metal dichalcogenide (TMD) bilayers that can be observed directly via STM. The second part concerns the prediction of magnetic orders using machine learning. We present a machine-learning model based on the Euclidean equivariant graph neural network (E3NN), which preserves crystallographic symmetry and is trained to predict magnetic orders (ferromagnetic, antiferromagnetic, and non-magnetic) and magnetic propagation vectors (zero or nonzero) with crystal structures as input. The descriptor used has the advantage of encoding general crystal structures of any space group while retaining all spatial information; this characteristic holds significant potential for advancing materials science studies.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152761</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric Computing beyond the Laplacian</title>
<link>https://hdl.handle.net/1721.1/152758</link>
<description>Geometric Computing beyond the Laplacian
Wang, Yu
The Laplace–Beltrami operator, or Laplacian, is the central object of study in shape analysis, geometry processing, and scientific computing. Motivated by the ubiquity of the Laplacian in geometric computing algorithms, this thesis initiates a program for systematically designing novel algorithms by replacing the Laplacian with better or even optimal alternative operators. Our approach is based on the observation that Laplacian-based algorithms can yield suboptimal results for geometric computing tasks, often due to, e.g., the inability to capture extrinsic geometry and/or the lack of a problem-specific metric under which the Laplacian is defined. We bridge the gap by proposing geometric computing algorithms built on operators or PDEs other than the ordinary Laplacian. Borrowing insights from optimal control and inverse PDE problems, we propose efficient numerical schemes to search for the metric or conformal structure whose associated (generalized) Laplacian is optimal for a given task, or explicitly design operators as suggested by modern spectral geometry. Concretely speaking, to represent an arbitrary diffeomorphism or injective map that is possibly non-conformal, we search for a generalized Laplacian or elliptic PDE that accounts for quasi-conformal deformation and satisfies a prescribed Cauchy boundary condition; for geometric data interpolation, we search for the generalized Laplacian whose behavior best approximates a higher-order variational problem; to design neural networks that directly operate on triangle meshes, we learn finite-element kernels that assemble the operators from data; and for extrinsic shape analysis, we consider an alternative operator, the Dirichlet-to-Neumann operator, the Schur complement of a higher-dimensional Laplacian with the interior marginalized out. In addition, we develop discrete models of inverse elliptic problems, resembling core properties of the continuous counterparts.
In extensive experimental evaluations, our formulations significantly improve over state-of-the-art algorithms for foundational problems in geometry, ranging from computing injective maps to interpolation on geometric domains.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152758</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing for Deep Engagement</title>
<link>https://hdl.handle.net/1721.1/152749</link>
<description>Designing for Deep Engagement
Ramsay, David Bradford
Flow state represents the quality of meaningful experience: an effortless depth of attention that is often undermined in our interrupt-driven, modern society. In this thesis, I present four novel interventions to promote states of deep engagement.&#13;
&#13;
Evaluating whether one of these interventions has a meaningful impact on flow state is difficult.  The bulk of my work, then, focuses on the methodological challenges of flow state research.  Herein I tackle three weaknesses in our ability to make strong, generalizable predictions about the causal link between environmental stimuli and flow states: (1) I discuss advancing how we represent the environment (specifically for aural stimuli) using phenomenological principles; (2) I advance the state-of-the-art in how we represent and measure flow bio-behaviorally (with the goal of integrating physiology into our judgements); and (3) I evaluate methodological weaknesses in current experimental flow work.  To do this, I present experimental work on models of auditory attention, new wearables and survey instruments for flow estimation, and an experiment that compares flow as measured in lab and at home across varying task structures.&#13;
&#13;
This thesis contributes a suite of state-of-the-art psychophysiological and behavioral hardware tools designed to inform inference about flow in-the-wild; it also contributes two unique, open-source, naturalistic datasets collected with them.  Combined with time-aware, probabilistic representations of cognition, this work sets the stage for a precise and explicit bio-behavioral definition of flow states that will improve our ability to understand their relationship to our environment.  In so doing, it points to an improved approach for social psychology more generally.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152749</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Full Posterior Inference for Uncertainty-Aware Robot Perception</title>
<link>https://hdl.handle.net/1721.1/152738</link>
<description>Scalable Full Posterior Inference for Uncertainty-Aware Robot Perception
Huang, Qiangqiang
Robot perception is crucial for both fully autonomous systems, like self-driving cars, and human-centric devices such as mixed reality glasses. While advances have been made in perception problems like simultaneous localization and mapping (SLAM) and visual localization, the quest for self-diagnosable, robust systems capable of operating in large, complex environments continues.&#13;
&#13;
This thesis aims to improve self-diagnosis and robustness in robot perception by promoting continuous uncertainty reasoning in localization and mapping, particularly under limited and ambiguous world observations. We investigate scalable and expressive approximations for posterior distributions in SLAM, overcoming the limited expressivity of Gaussian approximations for representing commonly encountered non-Gaussian posteriors. We harness the sparsity in factor graphs for scalability and utilize diverse density approximations to enhance expressivity. In advancing SLAM algorithms, we make three contributions that provide unprecedented accuracy in describing posterior distributions, especially in highly non-Gaussian situations: 1) real-time inference of marginal posteriors by blending Gaussian approximation and particle filters, 2) incremental inference of the joint posterior through learning normalizing flows on the Bayes tree, and 3) reference solutions to full posterior inference via nested sampling. Additionally, we develop a streaming platform that connects mobile devices and servers through web applications to conduct live demos of object-based SLAM, featuring the sharing of mapping results among online peers and continuous visualization of localization and mapping uncertainty.&#13;
&#13;
We also introduce a novel application of full posterior inference for uncertainty-aware robot perception, focusing on evaluating camera pose localizability to pinpoint visual localization challenges in 3D scenes. By employing this framework, we optimize fiducial marker placements in 3D environments, boosting localization rates by 20%.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152738</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Interactions With Strategic Users: Incentives, Interplay, and Impact</title>
<link>https://hdl.handle.net/1721.1/152735</link>
<description>Algorithmic Interactions With Strategic Users: Incentives, Interplay, and Impact
Fallah, Alireza
The societal challenges posed by machine learning algorithms are becoming increasingly important, and to effectively study them, it is crucial to incorporate the incentives and preferences of users into the design of algorithms. In many cases, algorithms are solely designed based on the platform's objectives, without taking into account the potential misalignment between the platform's goals and the interests of users. &#13;
&#13;
This thesis presents frameworks for studying the interactions between a platform and strategic users. The central objective of the platform is to estimate a parameter of interest by collecting users’ data. However, users, recognizing the value of their data, demand privacy guarantees or compensations in exchange for sharing their information. The thesis delves into various aspects of this problem, including the estimation task itself, the allocation of privacy guarantees, and the potential vulnerabilities of these guarantees to the platform's power. &#13;
&#13;
In particular, in the first part of this thesis, we formulate this question as a Bayesian-optimal mechanism design problem, in which an individual can share her data in exchange for a monetary reward but at the same time has a private heterogeneous privacy cost which we quantify using differential privacy. We consider two popular data market architectures: central and local. In both settings, we establish minimax lower bounds for the estimation error and derive (near) optimal estimators for given heterogeneous privacy loss levels for users. Next, we pose the mechanism design problem as the optimal selection of an estimator and payments that elicit truthful reporting of users' privacy sensitivities. We further develop efficient algorithmic mechanisms to solve this problem in both privacy settings. Moreover, we investigate the case that users have heterogeneous sensitivities for two types of privacy losses corresponding to local and central privacy measures.&#13;
&#13;
In the second part, we study a different aspect of the data market design: the optimal choice of architecture from both users' and the platform's point of view. The platform collects data from users by means of a mechanism that could partially protect users' privacy. We prove that a simple shuffling mechanism, whereby individual data is fully anonymized with some probability, is optimal from the viewpoint of users. We also develop a game-theoretic model of data sharing to study the impact of this shuffling mechanism on the platform's behavior and users' utility. In particular, we uncover an intriguing phenomenon that highlights the fragility of provided privacy guarantees: as the value of pooled data rises for users, the platform can exploit this opportunity to decrease the provided privacy guarantee, ultimately leading to reduced user welfare at equilibrium.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152735</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Algorithms, Hardware Architectures and Circuits for Deep Learning Accelerators</title>
<link>https://hdl.handle.net/1721.1/152734</link>
<description>Efficient Algorithms, Hardware Architectures and Circuits for Deep Learning Accelerators
Wang, Miaorong
Deep learning has permeated many industries due to its state-of-the-art ability to&#13;
process complex data and uncover intricate patterns. However, it is computationally&#13;
expensive. Researchers have shown in theory and practice that the progress of deep&#13;
learning in many applications is heavily reliant on increases in computing power, which&#13;
in turn drive rising energy demand. That may impede further advancement in&#13;
the field. To tackle that challenge, this thesis presents several techniques to improve&#13;
the energy efficiency of deep learning accelerators while adhering to the accuracy and&#13;
throughput requirements of the desired application.&#13;
&#13;
First, we develop hybrid dataflows and co-design the memory hierarchy. That&#13;
enables designers to trade off data reuse among different data types across the&#13;
storage elements provided by the technology, achieving higher energy efficiency. Second, we&#13;
propose a weight tuning algorithm and accelerator co-design, which optimizes the&#13;
bit representation of weights for energy reduction. Last, we present VideoTime3, an&#13;
algorithm and accelerator co-design for efficient real-time video understanding with&#13;
temporal redundancy reduction and temporal modeling. Our proposed techniques&#13;
enrich accelerator designers’ toolkits, pushing the boundaries of energy efficiency for&#13;
sustainable advances in deep learning.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152734</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Data-Driven Signal Separation and Interference Mitigation in Radio-Frequency Communication Systems</title>
<link>https://hdl.handle.net/1721.1/152733</link>
<description>Machine Learning for Data-Driven Signal Separation and Interference Mitigation in Radio-Frequency Communication Systems
Lee, Cheng Feng Gary
Single-channel source separation for radio-frequency (RF) systems is a challenging problem relevant to key applications, including wireless communications, radar, and spectrum monitoring. This thesis addresses the challenge by focusing on data-driven approaches for source separation, leveraging datasets of sample realizations when source models are not explicitly provided. To this end, deep learning techniques are employed as function approximators for source separation, with models trained using available data. Two problem abstractions are studied as benchmarks for our proposed deep-learning approaches. Through a simplified problem involving Orthogonal Frequency Division Multiplexing (OFDM), we reveal the limitations of existing deep learning solutions and suggest modifications that account for the signal modality for improved performance. Further, we study the impact of time shifts on the formulation of an optimal estimator for cyclostationary Gaussian time series, serving as a performance lower bound for evaluating data-driven methods. The thesis also introduces the “RFChallenge” as a benchmarking platform, aimed at addressing the gap in current literature for a comprehensive comparison of emerging machine learning solutions for RF signal separation. Finally, we explore an alternative approach of using deep learning to train a library of individual signal models that can be used together for subsequent inference tasks. While showing promise as a scalable strategy for the problem, our preliminary findings uncover the practical limitations of such methods. Ultimately, this thesis seeks to provide insights into judicious choices of data-driven solution architecture based on the signal structures under consideration. Our findings aim to stimulate further research at the intersection of machine learning and RF system design, contributing to the development of next-generation wireless technology through data-driven methodologies.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152733</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Association Algorithms and Representations for Robust Geometric Perception</title>
<link>https://hdl.handle.net/1721.1/152732</link>
<description>Data Association Algorithms and Representations for Robust Geometric Perception
Lusk, Parker C.
Data association is a fundamental requirement of geometric estimation in robotics. Identifying correspondences between measurements and models enables estimation processes to incorporate more data, in general leading to better estimates. However, sensor data is replete with noise and spurious measurements, making data association considerably more challenging. This thesis addresses data association problems that arise in realistic robotic perception, thus enabling robust geometric estimation. The first contribution of this thesis is the introduction of a scalable algorithm that efficiently identifies pairwise correspondences in high-outlier scenarios without initial data alignment. By modeling the pairwise association problem using a weighted graph, large complete subgraphs of highly consistent associations can be found without sacrificing information through thresholding, unlike previous methods. The second contribution is the introduction of a novel representation for lines and planes using the affine Grassmannian manifold. This thesis looks beyond points to higher-order geometric abstractions and provides a means for robustly aligning line and plane landmarks without an initial guess. When applied to lidar-based localization and loop closure, which face challenges in registering sparse point clouds, higher accuracy and success rates are achieved compared to typical representations. The third contribution of this thesis is to extend pairwise data association to multiway data association, wherein multiple pairs of associations are jointly analyzed to improve their accuracy and to ensure their consistency. By leveraging insights from the spectral graph clustering literature, this thesis develops an algorithm that is computationally efficient and provides accurate solutions with guaranteed global consistency.
The final contribution of this thesis is to develop a multiway association algorithm that is capable of operating directly on pairwise affinities, unlike previous work which assumes the availability of pairwise binary permutation matrices. By delaying pairwise decision making until many pairwise affinities can be analyzed together, higher accuracy associations can be made. Taken together, these contributions improve the robustness of data association, allowing reliable geometric estimation in the presence of uncertainty.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152732</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Problem Solving in Engineering with Multiple Solution Methods</title>
<link>https://hdl.handle.net/1721.1/152730</link>
<description>Problem Solving in Engineering with Multiple Solution Methods
Li, Hao
One of the key challenges of engineering education is developing students’ ability to navigate and solve problems that have multiple solution paths. Accomplishing this requires a better understanding of the process of solving these moderately- and ill-structured problems, which we pursued using two approaches.&#13;
&#13;
First, we performed problem solving experiments with students (two preliminary studies and a main study). One preliminary study found that expert problem solvers tended to start solving problems with simpler methods compared to novices. The other preliminary study found that students using reasoning and intuition had better outcomes than students who "dived in" to detailed analysis. The main experiment was conducted to explore student problem-solving activity in a more open-ended way. Here, the subject population consisted of 72 undergraduate and graduate students recruited from the author's institution. The participants were given a problem with a well-defined goal but no well-defined method. After attempting to solve the problem, the participant was given a short questionnaire. The results were coded to extract the method used and the approximate time used for each method. Student performance was compared against school year, the choice of method, and the number of methods used. No significant differences in performance were found between students in different years. However, it was found that students who either 1) used simpler methods (methods with lower solve time) or 2) used more than one method tended to perform better than average, though the results were not statistically significant. Additionally, survey results were analyzed to understand the reasons for students' method choices.&#13;
&#13;
Second, we built a mathematical model to describe the behavior of a problem solver with multiple methods at their disposal. Each solution method was modeled with a fixed solve time, and the problem solver may switch between methods. We start with a basic model with two solution methods, and additional complexities are successively added. Next, we present two versions of the model: using Markov and Poisson processes to describe the method transition behavior. Two optimization problems are presented: one whose objective is to maximize the solve probability given a time limit, and one whose objective is to minimize the average solve time to achieve problem solving success. We give analytic solutions for the solve probability and average solve time for the case with two methods. We also present conditions for which switching methods is beneficial. It was found that whenever there existed sufficiently short methods for solving a problem, using multiple methods (i.e. switching methods) can improve the problem solving outcome. The model and experiment are then matched, and the results are used to develop a framework of strategies for teaching students to solve problems with multiple solution methods.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152730</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cosmic Echoes of the Early Universe: From Primordial Black Holes to Gravitational Waves</title>
<link>https://hdl.handle.net/1721.1/152727</link>
<description>Cosmic Echoes of the Early Universe: From Primordial Black Holes to Gravitational Waves
Geller, Sarah R.
Our framework for describing physics on quantum scales, the Standard Model of particle physics, is both gloriously successful and self-evidently incomplete. Generations of particle physicists have hoped to fill the gaps in our framework by looking for new physics in a few specific ways. For instance, it was hoped that collider experiments would reveal evidence of supersymmetry and that this would elucidate the nature of dark matter. Though these experiments have made numerous important discoveries, they have excluded or failed to furnish evidence for many previously leading theories, leaving the biggest questions unanswered. In the face of these unknowns, we are driven to search for new physics in new ways, and it is precisely in this context that the era of observational and theoretical cosmology is coming into its own. In this thesis, we investigate gravitational and particle phenomena in the early universe. The power of this cosmological approach lies in the fact that, in the early universe, temperatures and pressures were so extreme that now-rare high energy processes occurred frequently, and all forms of matter underwent dramatic changes as the universe cooled and expanded, leaving a record of these processes in a background of gravitational waves, cosmic neutrinos, large-scale structure, and the cosmic microwave background. Thus, the early universe functions as a laboratory for studying rare phenomena in which both ultra-small and ultra-large scale physics come into play, providing a window into otherwise inaccessible physical regimes. In this thesis, we aim to think globally about cosmological models. Rather than study the idiosyncratic features of a particular model, we have tried to identify generic features within large classes of well-motivated theoretical models to determine the testable predictions of these models that are within the range of existing and forthcoming experimental sensitivities.&#13;
&#13;
This thesis is structured as follows: In Chapter 1, we present, for completeness, some background material and motivation for the questions we will address in the body of the thesis. In Chapter 2, we consider inflationary models that incorporate realistic features from high-energy physics—including multiple interacting scalar fields and nonminimal couplings to the spacetime Ricci scalar—that could produce primordial black holes (PBHs) with masses in the range required to address the present-day dark matter abundance. In Chapter 3, we perform a Markov Chain Monte Carlo (MCMC) analysis of a simple yet generic multifield inflation model characterized by two scalar fields coupled to each other and nonminimally coupled to gravity, fit to Planck 2018 cosmic microwave background (CMB) data. Chapter 4 proposes a formalism to describe relativistic hydrodynamics in spherical symmetry of a mixture of collisionless particles and a thermal equilibrium radiation gas, extending the formalisms of Refs. [213, 162] to include the effects of neutrino decoupling during the radiation dominated epoch on the formation and predicted abundance of supermassive black hole (SMBH) seeds. In App. A, we present a detailed discussion of the evolution of adiabatic and isocurvature modes in multifield inflationary models. App. B delves into the supergravity embedding of the multifield inflationary model discussed in Chs. 2 and 3. In App. C we derive exact field-space trajectories of non-minimally coupled two-field inflation with a potential featuring a near-inflection point. App. D discusses the gravitational waves induced by scalar perturbations at second order, while App. E investigates the subtleties of adiabatic mode evolution during a transient period of ultra-slow-roll inflation. In Apps. F and G we present the derivations and definitions underlying the results of Ch. 4. Finally, we give concluding remarks and discuss ongoing and upcoming research directions in Ch. 5.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152727</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preparing urban mobility for the future of work: impacts and adaptation</title>
<link>https://hdl.handle.net/1721.1/152726</link>
<description>Preparing urban mobility for the future of work: impacts and adaptation
Caros, Nicholas S.
The unexpectedly rapid rise of remote work in recent years has upended longstanding travel patterns. Commuting to work is no longer a routine trip with a fixed destination; millions of people have suddenly been granted the flexibility to choose their own work location and travel schedule when working remotely. Urban transportation systems designed and operated to serve regular commuters are struggling to meet the evolving mobility needs of their communities and are facing substantial demand shortfalls as a result. Moreover, there is evidence of a latent desire for new mobility services and land use policies that allow remote workers to take full advantage of the flexibility offered by remote work.&#13;
&#13;
This dissertation takes a three-step approach to addressing the issues presented by remote work through transportation policy. First, it creates the conceptual infrastructure needed to support interdisciplinary remote work research that can be translated into evidence-based policy. This infrastructure includes a common taxonomy of remote work stakeholders and arrangements, a map of the relationships between stakeholders, and a conceptual framework for describing and classifying individual remote work studies.  Several examples demonstrate how the taxonomy and framework can be used to develop comprehensive research findings that facilitate the design of remote work policy.&#13;
&#13;
The second step is collecting and analyzing extensive primary data related to remote work arrangements and associated travel behavior. New questions were added to a monthly national survey, allowing the identification of unanticipated aggregate and disaggregate trends. One of the most important findings is that approximately one-third of all remote work takes place outside of the home, at other work-friendly third places such as coffee shops and libraries. Many personal factors are found to be predictive of the choice of work location, including household characteristics such as the presence of roommates, employer remote work policies, and attitudes towards colleagues. An extended example of modeling the commuting frequency, mode choice, departure time, and destination of commutes to third places demonstrates how this rich source of data can be used to inform travel demand modeling for remote workers. These new models, which leverage zero-one inflated beta regression and mobile phone records to predict individual commuting patterns, are then applied to the City of Chicago to estimate the impact of remote work on carbon emissions from commuting. The study finds that overall carbon emissions are reduced by 31% relative to a 2019 baseline, and that commutes to third places are responsible for 16% of all commuting-related emissions. &#13;
&#13;
The third step is applying the insights and predictive models generated from the previously collected data to optimize urban mobility systems for remote work. The studies in this section of the dissertation tackle challenges faced by different remote work stakeholders: shared mobility platforms, public transit agencies, and shared workplace providers. For shared mobility platforms, a new type of ride-pooling service that leverages the destination flexibility of remote workers and other customers is shown to lead to more efficient passenger-vehicle matching and thus reduce vehicle distance traveled. A case study using ride-hailing data from Manhattan estimates that when a quarter of passengers have flexible destinations, overall travel can be reduced by 4.8%.&#13;
The matching algorithm also allows shared mobility platforms to cooperate with employers and shared workplace providers to offer an all-inclusive mobility and workplace service. Employer incentives for employees to work at the same location as their team members are found to reduce the efficiency of passenger-vehicle matching and lead to longer trips.&#13;
&#13;
To help public transit agencies respond to remote work, a new transit capacity flexibility model is developed. It allows agencies to evaluate the capacity of the network under different levels of passenger flexibility and changing destination preferences. The capacity flexibility model, which is the first such model that is tractable for network-sized problems, is then solved for the Boston rapid transit system. It demonstrates that the Boston network can accommodate 13% fewer passengers when commuting demand partially shifts from the downtown core to neighborhood centers as a consequence of remote work.&#13;
&#13;
Many governments are exploring opportunities to build new shared workplaces due to substantial interest in working at third places among remote workers. Shared workplaces can also address some of the social issues presented by remote work, such as social isolation and fewer interaction opportunities. The third study in this section proposes an integer programming model for selecting optimal shared workplace sites under social objectives. It finds that the distribution of shared workplaces varies significantly depending on the objective, and proposes a multi-objective framework for generating solutions with a balanced set of social benefits.&#13;
&#13;
To conclude, the potential applications of this research are discussed and an extensive agenda for future remote work and urban mobility research is presented.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152726</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion Source Development for IsoDAR and Multi-Messenger Astrophysics with KamLAND</title>
<link>https://hdl.handle.net/1721.1/152724</link>
<description>Ion Source Development for IsoDAR and Multi-Messenger Astrophysics with KamLAND
Smolsky, Joseph
Neutrinos can answer many open questions in physics. Doing so will require developing new technology, collaborating effectively across experiments, and using existing data in new ways. This work presents contributions to these efforts with the IsoDAR, SNEWS, and KamLAND collaborations. The results are a new ion source for a sterile neutrino search, an upgraded alert system for supernova neutrino detection, and the first flux limits on MeV electron antineutrinos from galactic X-ray binaries.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152724</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating and Reprogramming RNA Folding with Molecular Probes</title>
<link>https://hdl.handle.net/1721.1/152722</link>
<description>Investigating and Reprogramming RNA Folding with Molecular Probes
Allan, Matthew F.
Ribonucleic acid (RNA) performs versatile, essential functions in all known organisms and viruses. These functions require the RNA to fold into specific secondary and tertiary structures, and in many cases to switch among multiple structural states. Predicting and experimentally determining RNA structures remains challenging, particularly because of this propensity of one RNA sequence to form a heterogeneous ensemble of structures.&#13;
&#13;
This thesis investigates two related problems in RNA folding. First, a method of nucleic acid origami is developed in which four divergent RNA sequences are reprogrammed by short antisense oligonucleotides (ASOs) to fold into six different 3D wireframe polyhedra. How each type of structural feature in the polyhedra affects the stability of local base pairs is revealed using dimethyl sulfate mutational profiling with sequencing (DMS-MaPseq). Second, the structural ensembles formed by the genome of SARS coronavirus 2 (SARS-CoV-2) are investigated using DMS-MaPseq. In particular, a method for detecting long-range RNA:RNA interactions using ASOs is developed and applied to pinpoint an interaction between the frameshifting stimulation element (FSE) and a sequence of RNA over one kilobase downstream, which occurs in nearly half of the RNA molecules. This technique is expanded to reveal long-range interactions in three additional coronaviruses, suggesting that this type of RNA structure is more common than previously thought.&#13;
&#13;
Overall, this thesis uses ASOs and mutational profiling to reprogram and investigate RNA folding. A wide variety of RNA sequences prove amenable to these techniques, which enable the creation of synthetic RNA structures and the characterization of natural ones including long-range RNA:RNA interactions. The results of this study enable future investigations on developing RNA origami for research and therapeutic applications and on the roles of distant RNA elements in regulating ribosomal frameshifting in coronaviruses.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152722</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Société Française des Urbanistes and the Invention of Urbanism</title>
<link>https://hdl.handle.net/1721.1/152717</link>
<description>The Société Française des Urbanistes and the Invention of Urbanism
El Hayek, Chantal
This dissertation demonstrates that it was the Société Française des Urbanistes&#13;
(SFU) that invented urbanism in the interwar period, rooting it in Henri Bergson’s&#13;
theories of creative evolution and Paul Vidal de la Blache’s principles of human&#13;
geography. Scholars have historically overlooked this contribution, and do so even today.&#13;
They define urbanism generically, mostly describing a positivist science of spatial&#13;
organization, incorporating infrastructural, hygienic, and social engineering systems.&#13;
Rectifying this misconception, I reveal how this group of practicing architects and&#13;
theorists—attempting to offset the erosive effects of commercialism on cities—forged, in&#13;
1911 in Paris, a reformist alliance founded on faith in metaphysics and social science. In&#13;
coining the term urbanisme, SFU established the field based on principles that defied&#13;
positivist notions of urban development and deterministic ideas of human evolution.&#13;
I analyze SFU’s spatial schemes and written oeuvres, in concert with&#13;
contemporaneous scholarship on urban theory, geography, and philosophy, to contend&#13;
that Bergson’s anti-positivist discourse on time and consciousness is central to our&#13;
understanding of urbanism and its origins. Besides establishing the professional, legal,&#13;
and academic foundations of urbanism in France, SFU engaged in a global urban reform&#13;
campaign, drawing up restructuring schemes for cities in Europe, North and South&#13;
America, the Eastern Mediterranean, North and East Africa, and Southeast Asia. They&#13;
scripted numerous architectural treatises, essays, and legal texts, and organized&#13;
international conferences to debate methods of reforming post-WWI cities. This&#13;
formidable production had a profound impact on the cities and subsequent generations of&#13;
planners who grappled with the problem of mitigating industrialization’s negative&#13;
outcomes. The dissertation charts the group’s social networks by tracing the genealogy of&#13;
ideas by Western thinkers that influenced SFU’s conception of urbanism. It displays the&#13;
ways in which SFU applied these ideas in distinctive settings, revealing the cultural&#13;
influences these planners exerted on administrators and policy makers. Ultimately, the&#13;
dissertation shows that SFU established urbanism as a “scientific art” of territorial&#13;
development, emphasizing inventiveness and individual experience and seeking to&#13;
reconcile the conditions of the modern city with the allegedly timeless features that&#13;
characterized the pre-industrial landscape: spiritualism, nature, tradition, and art.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152717</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polycrystalline Grain Boundary Solute Segregation at Finite Sizes and Temperatures</title>
<link>https://hdl.handle.net/1721.1/152714</link>
<description>Polycrystalline Grain Boundary Solute Segregation at Finite Sizes and Temperatures
Tuchinda, Nutth
Engineering defect chemistry allows novel pathways for controlling bulk material behavior, especially for high-defect-density materials like nanocrystalline alloys. With this concept, solute segregation at grain boundaries can be employed as a defect design tool to stabilize such systems against coarsening. However, the understanding and availability of grain boundary data are still limited for polycrystals that contain a profuse spectrum of grain boundary sites, especially for the effects of grain size and temperature on grain boundary site spectra. This thesis aims to fill the literature gap by providing a framework for studying grain size dependences (when grain boundary and triple junction volume fractions become finite) and solute segregation at finite temperatures where vibrational entropy becomes non-negligible. &#13;
&#13;
The thesis first demonstrates extractions of triple junction segregation spectra, which show characteristics of the defect type vis-à-vis grain boundaries. The developed size-dependent isotherms suggest that triple junctions can show a strong contrast in local solute content compared to adjacent grain boundaries, although the bulk size effect can become negligible beyond a grain size of approximately 20 nm, owing to the low junction volume fraction, depending on the system chemistry and grain geometry. An analysis from an Al-based hybrid quantum mechanical/molecular mechanical dilute segregation energy dataset shows a periodic chemical trend for a cluster of solute elements with a junction segregating tendency. A further site environment analysis of the junction-preferred solute elements suggests both chemical and elastic contributions that influence the spectrality of solute segregation and thus the energetic contrast observed.&#13;
&#13;
Aside from grain junctions, temperature can play a major role in the stability of nanocrystals. This thesis demonstrates estimations of dilute site-wise vibrational segregation entropy spectra in polycrystals within the harmonic approximation. The majority of the systems calculated show a positive site-wise energy and entropy correlation in agreement with the literature for simplified boundaries. The framework is extended in combination with data-science methods to create a dilute segregation energy-entropy spectral database, allowing multiscale translation from site-wise segregation energy-entropy spectra to bulk McLean averages often reported in the literature and used in alloy design. The outcome demonstrates that some of the average segregation entropies reported could overestimate vibrational effects in grain boundary segregation due to the interplay between configurational and vibrational entropy in spectral grain boundary solute segregation, suggesting the importance of considering grain boundary spectrality (as opposed to McLean-type models) in interpreting grain boundary data from polycrystals.&#13;
&#13;
It is hoped that this thesis can provide a better understanding of size and temperature effects in grain boundary solute segregation and serve as a pathway toward a full spectral grain boundary genome for defect-enabled material designs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152714</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying functional groups in microbial communities based on ecological patterns</title>
<link>https://hdl.handle.net/1721.1/152708</link>
<description>Identifying functional groups in microbial communities based on ecological patterns
Shan, Xiaoyu
Recent developments in sequencing technologies have greatly advanced our understanding of the structure and function of microbial communities in various ecosystems. In microbial communities, a metabolic function is often performed by a group of multiple species (i.e., a functional group) at the same time. However, identifying these functional groups remains a major challenge for structure-function mapping in microbiome studies. Instead of relying on annotation-based methods that are highly biased toward a few model microorganisms, here I tackle this challenge by developing a novel annotation-free approach. In chapter two, I develop the mathematical framework behind the new approach – which we call EQO – and show its power by applying it to a few existing microbiome datasets. I show that, based solely on the patterns of statistical variation in species abundances, EQO identifies functional groups in soil, ocean, and animal gut microbiomes. The following two chapters discuss an application of this method, which has led to the discovery of a potential new form of interaction between bacteria in animal guts, and an unexpected finding in the lab regarding the ecological dynamics of phage-plasmids in marine bacterial populations. In chapter three, I show how applying EQO to an aquaculture dataset leads us to identify potential pathogen-inhibiting groups of bacteria in an animal-associated microbiome. Guided by the computational prediction, I successfully isolate a member of this group that is a novel species with a broad spectrum of interactions against various Vibrio pathogens. By synthesizing and secreting polysaccharides, the novel species causes limited dispersion and reduced virulence of Vibrio. My efforts to understand the ecology of marine bacteria also lead me to study the role of widely distributed phage-plasmids.
Combining mathematical models and experimental evidence, I show that loss-of-function mutations and segregational drift recurrently drive productive infections of phage-plasmids within marine bacterial populations. Together, this thesis provides a simple yet powerful approach to abstract functional groups from the taxonomic composition of complex microbiomes. As a useful hypothesis-generating tool, this approach will pave the way for more mechanistic studies of microbiomes in the future.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152708</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Navigating Climate and Urban Change: Development Patterns, Conflicts, and Equity in a Sub-Arctic and Post-Socialist City</title>
<link>https://hdl.handle.net/1721.1/152704</link>
<description>Navigating Climate and Urban Change: Development Patterns, Conflicts, and Equity in a Sub-Arctic and Post-Socialist City
Durova, Aleksandra O.
As climate change impacts on settlements escalate, cities strive to adapt and enhance resilience. However, as another form of urban restructuring, climate adaptation and resilience projects have inadequately addressed vulnerability and the risk of reproducing inequitable outcomes. Simultaneously, urbanization dynamics reshape cities, intensifying vulnerability through patterns of urban growth and housing and land development. This dissertation examines urban restructuring and its implications for equity, focusing on how and why residential development patterns shape uneven socio-spatial risk and vulnerability.&#13;
&#13;
Centering its inquiry on Yakutsk (Russia), this dissertation merges the contexts of a sub-Arctic city at the forefront of climate change concerns and a post-Socialist city undergoing rapid urban change and associated landscape transformations. Drawing on historical perspectives on urbanization, this dissertation provides insights into the interplay between socialist legacies, their reconfigurations, and the reproduction of risk and vulnerability. This study engages with views of urban change as a conflict-laden process, placing socio-environmental transformations at its center. Adopting a constructivist grounded theory approach, this dissertation analyzes data collected through semi-structured interviews, a resident survey, and secondary sources.&#13;
&#13;
By examining four distinct development patterns and citywide dynamics, this study reveals that socio-spatial vulnerability and risk are a product of development conflicts surrounding growth, housing, and infrastructure provision. The state generates and mediates these growth, redevelopment, and infrastructure conflicts and reproduces risk and uneven vulnerability through different processes. These processes include facilitating risky land conversions, excluding certain areas from collective protection, shifting responsibility for protection onto households, infringing on development rights, and supporting displacement with insufficient compensation. The processes arise from governance dynamics, such as top-down technocratic planning aligned with market interests, inadequate consideration of risk in spatial development, and a lack of housing market regulation. The cases expose physical manifestations of socialist legacies and conflicts surrounding their transformations. Elements of state-led urbanism and recentralization of power, distinct from socialism but echoing the past, also contribute to vulnerability reproduction. The findings also highlight the need to reimagine planning approaches, within both traditional realms and climate adaptation, to aim for more equitable transitions in urbanization.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152704</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on Rental Housing Markets in the United States</title>
<link>https://hdl.handle.net/1721.1/152703</link>
<description>Three Essays on Rental Housing Markets in the United States
Preis, Benjamin J.
What do rental markets look like in the United States today? Over the course of three papers, I investigate rental housing markets in ten of America’s most populous cities: Boston, MA; Columbus, OH; Dallas, TX; Kansas City, MO; Minneapolis, MN; Nashville, TN; Omaha, NE; Philadelphia, PA; Seattle, WA; and Washington, DC. I compare the locations of landlords and rentals, examine the extent to which rental markets have concentrated ownership, and consider the difficult identification problem facing researchers who try to use administrative data to identify rental properties in the United States.&#13;
&#13;
In the first paper, I geolocate rental properties and landlords in eight cities. I find that the median landlord has a mailing address within 10 miles of their rental property, and that a majority of landlords with a residential mailing address are located within the same region as their rental properties. Landlords with residential mailing addresses are located in neighborhoods that are whiter, richer, and have more college graduates than the neighborhoods in which they own properties. I also find that many landlords are located far away from their rental properties, in superstar cities and throughout the country. I use a network-science approach to identify the core locations of landlords, which I call the “landlord market area.”&#13;
&#13;
In the second paper, I identify landlords who have significant market shares in a given city or neighborhood. I use machine learning to deduplicate different landlord records, a required step in the age of corporate landlords. I find that many neighborhoods have moderate and high levels of ownership concentration. Higher levels of concentration are correlated with higher rent levels among the cities I study. I use an instrumental variable approach to investigate the interaction between city-wide wage increases and ownership concentration, finding that neighborhoods with higher levels of concentration see larger rent increases.&#13;
&#13;
In the third paper, I investigate the most common methods to identify rental properties in the United States. Most studies heretofore have relied on tax assessment databases to identify rental properties, yet I find that the most common approaches to do so are overinclusive of some types of units, and underinclusive of others. I compare these methods to the rental properties identified by rental registries and to American Community Survey estimates. I identify best practices with regard to rental registries, and interrogate when different approaches are more suitable than others.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152703</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the role of clay minerals in the preservation of organics and soft tissues</title>
<link>https://hdl.handle.net/1721.1/152696</link>
<description>Probing the role of clay minerals in the preservation of organics and soft tissues
Hall, James
Interactions between organics and various types of minerals have been well documented and studied both experimentally and in nature. Clays, which are aluminum silicate minerals, have been shown to be exceptional at adsorbing and/or incorporating organics. These minerals have also been the focus of efforts to understand how exceptionally preserved fossils are formed, particularly in the Ediacaran and Cambrian; however, it is debated whether these clays are original to the sediments preserving these organisms. Sedimentary rocks containing Fe-rich clay and amorphous phases will be sampled for return by the Mars Sample Return Campaign to search for preserved organics; however, the ability of these clay and amorphous phases to preserve organic carbon has yet to be tested. In this thesis, I seek to create a more thorough understanding of how the clay minerals present within these two systems can aid in organic preservation.&#13;
&#13;
I begin this thesis by examining two morphotypes of erniettomorph fossils (soft-bodied Ediacaran organisms with unknown phylogenetic placement) preserved in a clay-rich siliciclastic deposit from the Woods Canyon Formation in Nevada. From the data collected to characterize both morphotypes, we concluded that the clay minerals present were likely important in the preservation of the organics, and that better three-dimensional preservation was due to the abundance of large-grained quartz within the organism pre-burial. For the remaining chapters of my thesis, I examined organic preservation in a simulated Martian lacustrine system. I begin by incubating basaltic sediments and a 5% pCO2 headspace with solutions containing a range of organic molecules that have plausible abiotic sources. I subsequently expand on these experiments by amending the solutions with Fe2+. Finally, I use the same solution conditions (with and without added Fe2+) and incubate microbial EPS with the sediments. The results from these experiments allow us to create a model for the organo-mineral associations that would occur on early Mars, and for the types of organo-mineral associations and textures that would be associated with abiotically vs. biologically produced organic carbon.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152696</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained Information Exchange Message Passing Algorithm in Probabilistic Graphical Models</title>
<link>https://hdl.handle.net/1721.1/152695</link>
<description>Constrained Information Exchange Message Passing Algorithm in Probabilistic Graphical Models
AlHajri, Mohamed Ibrahim
Probabilistic graphical models combine probability and graph theory into a powerful multivariate statistical modeling approach. While there is an extraordinary range of graphical model types in the broader literature, we focus on the Hidden Markov Model (HMM) and the Linear Dynamical System (LDS). These two models are ubiquitous in applications, including Kalman filtering and the decoding of LDPC codes, which underpin modern cellular wireless systems. Message passing algorithms exploit the independence and factorization structure within these graphical models to develop analytically tractable, computationally efficient, and exact inference algorithms. Posterior inference using a message-passing algorithm depends heavily on the information exchange between the nodes of the graph, and unconstrained message sizes yield exact inference of the posterior distribution. Despite its usefulness and ubiquitous applications, the unconstrained information exchange message-passing algorithm has numerous shortcomings, including prohibitive computational complexity and unaffordable communication complexity. These shortcomings can limit the effectiveness and practicality of the algorithm in the era of big data.&#13;
&#13;
This thesis presents a comprehensive analysis of a novel constrained information exchange message-passing algorithm. A cornerstone of this algorithm is the modified sum product algorithm, an innovative approach that identifies and prioritizes critical information fragments for particular inference tasks.&#13;
&#13;
The thesis further delves into the algorithm’s utility in various posterior inference scenarios, encompassing index-specific and index-free posterior inferences. While the former scrutinizes either singular or multiple posterior inferences at designated indexes, the latter ensures that all nodes use identical compression matrices, catering to both single and multi-step posterior inferences.&#13;
&#13;
This algorithm illuminates the balance between algorithmic performance and resource utilization in data-driven applications. By reducing computational and communication complexities, it offers an efficient solution for high-dimensional data handling, particularly when resources are constrained. In essence, the proposed algorithm enhances the scalability, practicality, and efficiency of probabilistic graphical models, propelling advancements in data-driven research and decision-making across a multitude of domains.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152695</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integral Equation-Based Inverse Scattering and Coil Optimization in Magnetic Resonance Imaging</title>
<link>https://hdl.handle.net/1721.1/152694</link>
<description>Integral Equation-Based Inverse Scattering and Coil Optimization in Magnetic Resonance Imaging
Cruz Serrallés, José Enrique
One trend in Magnetic Resonance Imaging (MRI) over the years has been to steadily increase the static magnetic field strength, and hence the frequency of operation, resulting in higher available signal-to-noise ratio that can be traded for shorter scan times and increased image quality. In the ultra-high field regime (≥7T), since the radiofrequency wavelength is comparable to the dimensions of the body, quasi-static approaches cannot be used to simulate the interactions between electromagnetic fields and biological tissue, which can result in unwanted energy deposition hot spots and in decreased image quality. The electrical properties (EP) of tissue (permittivity and conductivity) influence these interactions and the RF field distributions inside the body. Although undesirable from the point of view of coil and pulse design, this dependence on EP opens the door to new imaging modalities using the same MR data. In this thesis, I detail how we applied highly accurate integral equation formulations to the tasks of 3D electrical properties estimation (inverse scattering) and parallel transmit (pTx) coil array optimization. I also present novel regularization strategies that are ideally suited for these inverse problems. Finally, I discuss how we validated these approaches with numerical examples, and the efforts that we undertook to estimate the electrical properties of a phantom using data from an MR scanner.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152694</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Precision Oncology: A Predictive and Causal Lens</title>
<link>https://hdl.handle.net/1721.1/152693</link>
<description>Towards Precision Oncology: A Predictive and Causal Lens
Hussain, Zeshan
Precision oncology promises personalized care for each patient based on a holistic view of their data. However, several methodological and translational advances are required for successful implementation of this vision in the clinic. These include building temporal models to predict a patient’s survival outcomes in response to therapy, validating these methods with experimental data from Randomized Controlled Trials (RCTs), quantifying the uncertainty in the predictions, and finally, exploring how these elements can be woven together into a clinical decision support tool. In this thesis, I explore each of these aspects in turn: i) first, I build different models of clinical time-series data, with a focus on prediction of survival outcomes and forecasting of core biomarkers, ii) next, I design methods to give additional “context” for these models, including uncertainty quantification of causal estimates and validation of these estimates using RCT data, and iii) finally, I study how these elements affect treatment decision-making via a controlled user study of a decision support tool prototype.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152693</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Geometry of Neural Network Representation Spaces</title>
<link>https://hdl.handle.net/1721.1/152692</link>
<description>Modeling the Geometry of Neural Network Representation Spaces
Robinson, Joshua David
Neural networks automate the process of representing objects and their relations on a computer, including everything from household items to molecules. New representations are obtained by transforming different instances into a shared representation space, where variations in data can be measured using simple geometric quantities such as Euclidean distances. This thesis studies the geometric structure of this space and its influence on key properties of the learning process, including how much data is needed to acquire new skills, when predictions will fail, and the computational cost of learning. We examine two foundational aspects of the geometry of neural network representations.&#13;
&#13;
Part I designs and studies learning algorithms that take into account the location of data in representation space. Focusing on contrastive self-supervised learning, we design a) hard instance sampling strategies and b) methods for controlling what features models learn. Each produces improvements in key characteristics, such as training speed, generalization, and model reliability.&#13;
&#13;
Part II studies how to use non-Euclidean geometries to build network architectures that respect symmetries and structures arising in physical data, providing a powerful inductive bias for learning. Specifically, we use geometric spaces such as the real projective plane and the spectraplex to build a) provably powerful neural networks that respect the symmetries of eigenvectors, which is important for building Transformers on graph structured data, and b) neural networks that solve combinatorial optimization problems on graphs such as finding big cliques or small cuts, which arise in molecular engineering and network science.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152692</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Freshness for Monitoring and Control over Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/152691</link>
<description>Information Freshness for Monitoring and Control over Wireless Networks
Tripathi, Vishrant
In this thesis, we study the optimization of general information freshness metrics in wireless networks, with the goal of applying our theoretical results to problems in real-time monitoring and control. We make contributions in three directions.&#13;
&#13;
First, we consider the optimization of general cost functions of Age of Information (AoI). Here, we develop computationally efficient scheduling algorithms for optimizing information freshness in both single-hop and multi-hop wireless networks. We further develop an online learning formulation when the cost functions of AoI are unknown and propose a new online learning algorithm for this setting called Follow-the-Perturbed-Whittle-Index.&#13;
&#13;
Second, we consider weighted-sum AoI minimization. In this setting, we study how correlation impacts information freshness. We also propose a near-optimal distributed scheduling protocol for AoI minimization, called Fresh-CSMA, which has provable performance guarantees.&#13;
&#13;
Third, we apply our theoretical results to problems in multi-agent robotics and monitoring – both via simulations and practical system implementations. We use simulations to demonstrate significant performance improvements in the collection of time-varying occupancy grid maps using multiple robots via the Whittle Index framework. Further, to demonstrate the benefits of our theoretical contributions, we built a real system (WiSwarm) for mobility tracking using a swarm of UAVs communicating with a central controller over WiFi. Our experimental results show that, compared to the standard IEEE 802.11 MAC layer with TCP/UDP, our system reduces AoI by factors of 109x/48x and improves tracking accuracy by factors of 4x/6x, respectively.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152691</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Responsive Space Environments: New Paradigms in Blending Virtual and Physical Exploration through Human-Robot Operations</title>
<link>https://hdl.handle.net/1721.1/152689</link>
<description>Responsive Space Environments: New Paradigms in Blending Virtual and Physical Exploration through Human-Robot Operations
Haddad, Don D.
As humanity ventures further into space, the integration of robotics and virtual environments in space systems becomes increasingly pivotal, augmenting our capacity to explore, interact with, and study remote environments. This research delves into the use cases, benefits, and challenges associated with human-robot operations in virtual space analog environments, proposing innovative visualizations that work in tandem with automation to address challenges in mission planning and control, leveraging techniques akin to video game interfaces. In the pursuit of modeling and capturing the essence of distant environments, this research embarks on an investigation of cutting-edge 3D reconstruction techniques, expertly combined with high-definition rendering pipelines, to synthesize virtual environments sourced from real-world information and examine their role in planning and executing planetary exploration missions. This synthesis process becomes especially critical in remote settings, like the Moon, Mars, and asteroids, where data transmission costs are high and efficiency is paramount in space exploration.&#13;
&#13;
Accordingly, this dissertation also presents the design and analyzes the performance of AKALL (Azure Kinect à la Luna), a software application for 3D imaging developed during this research that underwent rigorous testing on the SSERVI Lunar regolith testbed at NASA Ames Research Center. This software is designed to operate within an isolated Docker container and was integrated to operate a repurposed commercial imaging payload within Lunar Outpost’s Mobile Autonomous Prospecting Platform (MAPP) lunar rover, an integral component of the upcoming Intuitive Machines mission (IM-2), slated to land at the Moon’s south pole in early 2024.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152689</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Upper and Lower Bounds for Sampling</title>
<link>https://hdl.handle.net/1721.1/152686</link>
<description>Upper and Lower Bounds for Sampling
Lu, Chen
This thesis studies the problem of drawing samples from a probability distribution. Despite the prevalence of sampling problems in applications, the quantitative behavior of sampling algorithms remains poorly understood. This thesis contributes to the theoretical understanding of sampling by giving upper bounds and, more importantly, lower bounds for various sampling algorithms and problem classes. On the upper bound side, we propose new sampling algorithms, motivated by the perspective of sampling as optimization [JKO98], and give convergence guarantees for them. We also obtain state-of-the-art convergence results for the popular Metropolis-Adjusted Langevin Algorithm. On the lower bound side, we establish the query complexity of strongly log-concave sampling in all constant dimensions. Our lower bounds rely on simple geometric constructions, which we hope will aid similar results in high dimensions.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152686</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry and transport in development</title>
<link>https://hdl.handle.net/1721.1/152684</link>
<description>Geometry and transport in development
Romeo, Nicolas François
Shape changes are some of the most visually striking features of biological development at all scales of living matter. With recent advances in high-resolution microscopy, it becomes possible to track the morphology and motion of systems ranging from organelles to every single cell within specific tissues or even whole organisms. This enables quantitative physical modeling to understand the phenomena driving and controlling the emergence of spatial patterns and organization in development. &#13;
&#13;
In the first half of this thesis, we consider two problems arising in fruit fly oogenesis. In a small organ known as the egg chamber, we apply ideas from continuum and statistical mechanics to explain the nonlinear dynamics and regulation of nuclear envelope shape and of a cytoplasmic transport event known as 'nurse cell dumping'. In particular, these results show how biological and physical mechanisms can cooperate to enable or regulate developmental processes.&#13;
&#13;
In the second half of this thesis, we consider zebrafish embryogenesis, during which thousands of cells collectively migrate to lay out the organism's body plan. Here, a direct physical modeling approach is hampered by the exploding complexity of three-dimensional many-body dynamics, which obfuscates the identification of relevant degrees of freedom. In this context, we investigate ways to translate cell trajectories into lower-dimensional representations and to capture the essential ordering principles of collective cell organization. By leveraging model inference techniques, the resulting representation of the collective cell dynamics enables a compact characterization of developmental symmetry breaking.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152684</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Koszul duality and the bar spectral sequence</title>
<link>https://hdl.handle.net/1721.1/152683</link>
<description>Koszul duality and the bar spectral sequence
Zhang, Adela YiYu
The bar spectral sequence for algebras over a spectral operad relates Koszul duality phenomena in several contexts. In this thesis, we apply this classical tool to the Koszul dual pair given by the (non-unital) E∞-operad and the spectral Lie operad over Fₚ. The bar spectral sequence for E∞-algebras yields the structure of operations on mod p Topological André-Quillen cohomology and the homotopy groups of spectral partition Lie algebras, building on the work of Brantner-Mathew. In the colimit, the unary operations are Koszul dual to the Dyer-Lashof algebra. On the other hand, the bar construction against certain spectral Lie algebras models labeled configuration spaces by a theorem of Knudsen. The associated bar spectral sequence yields new results on their mod p homology at low weights, as well as interesting patterns of universal differentials. We also record an attempt with Andrew Senger on detecting these differentials via deformation of the bar comonad.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152683</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Career Crafting in the Future of Work with Data-driven Tools</title>
<link>https://hdl.handle.net/1721.1/152682</link>
<description>Empowering Career Crafting in the Future of Work with Data-driven Tools
Saa, Isabella Loaiza
The rapid advancement of technology, particularly in the realm of A.I., is causing significant shifts in work dynamics and career paths. Automation driven by A.I. is reshaping tasks and job requirements, pressuring workers to constantly upskill to remain relevant in a rapidly changing professional landscape. Predictions suggest a need for around 12 million occupational transitions in the U.S. by 2030. Low-skill workers face the greatest challenges in reskilling, while the evolving skill landscape lacks clear guidance for workers. Organizations must prioritize helping employees adapt to technological progress to ensure retention and productivity. To address these issues, it is essential to develop tools that empower the workforce to identify skill gaps and navigate career changes. This dissertation delves into the transformative effects of technology on work and careers, exploring the interplay between A.I. advancements, changing career trajectories, and the need for effective tools to support workers. The work reviews the impact of technology on labor, examines evolving career concepts, introduces the philosophy of Value-Sensitive Design (VSD) for creating decision-making tools, and presents prototypes aimed at assisting individuals in making informed career choices, ultimately contributing valuable insights to the evolving world of work.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152682</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Housing Dynamics in the Face of Shocks</title>
<link>https://hdl.handle.net/1721.1/152681</link>
<description>Housing Dynamics in the Face of Shocks
Sandoval Olascoaga, Sebastian
This thesis explores housing dynamics in the context of shocks, examining their impact on the housing sector, the well-being of communities, and the development of their citizens. It investigates the localized effects of extreme weather events on communities and individuals, the public health repercussions of the expiration of COVID-19 pandemic-induced eviction moratoria, and the influence of house flipping practices on neighborhood stability and housing affordability. This study sheds light on the critical role of housing stability in overall quality of life and societal progress, highlighting the pressing need for informed decision-making and policy formulation in the face of evolving challenges. The findings present implications for public health, climate resilience, neighborhood stability, and housing outcomes, contributing to the existing knowledge and paving the way for comprehensive housing systems that foster individual and societal well-being and prosperity.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152681</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protocols and Devices for Scalable Spin-Photon Quantum Networks</title>
<link>https://hdl.handle.net/1721.1/152680</link>
<description>Protocols and Devices for Scalable Spin-Photon Quantum Networks
Chen, Kevin C.
One of the central goals in quantum information science is constructing a quantum network useful for quantum communication, sensing, and computation. Its realization crucially depends on the efficient distribution of entanglement. A promising approach entails connecting quantum nodes via photons, which are naturally resilient against decoherence, and storing quantum bits in atomic memories. Among these, solid-state spin qubits in diamond are particularly promising candidates for memory storage in a quantum repeater network. However, experimental efforts thus far have been largely stymied by the absence of efficient and scalable spin-photon interfaces.&#13;
&#13;
To address these challenges, we propose a photonic integrated circuit architecture with heterogeneously integrated emitter-nanocavity systems for faithfully transferring photonic qubits onto diamond color centers. This hybrid platform offers arbitrary photonic routing, phase stability, and reconfigurability to achieve high-fidelity and high-efficiency local and remote entanglement generation. Subsequently, we report our experimental efforts in realizing a cavity-enhanced optical interface with tin vacancy centers in diamond and in characterizing a heterogeneously integrated emitter-cavity system in a silicon nitride photonic integrated circuit. The on-chip components allow for additional control over both the spin and optical degrees of freedom necessary for achieving spin-photon entanglement. As an outlook, we discuss how the experimental results in this thesis and ongoing efforts pave the path towards additional quantum network applications, such as realizing a quantum random access memory.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152680</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation Learning Through the Lens of Science: Symmetry, Language and Symbolic Inductive Biases</title>
<link>https://hdl.handle.net/1721.1/152679</link>
<description>Representation Learning Through the Lens of Science: Symmetry, Language and Symbolic Inductive Biases
Dangovski, Rumen Rumenov
In this thesis, we explore representation learning, a key technique in machine learning and artificial intelligence that has led to remarkable advancements in fields such as speech, vision, language perception and generation, as well as solving complex scientific problems like protein folding. Despite its success, the prevailing method of end-to-end supervised learning faces challenges, including the need for large datasets, non-interpretable classifications, and difficulties in transferring representations.&#13;
&#13;
To address these limitations, we adopt a scientific perspective, focusing on machine learning tasks that are particularly affected by these issues, and developing benchmarks inspired by scientific principles. Our approach centers on the identification and development of novel inductive biases (assumptions made by the learning algorithm to improve generalization) based on symmetry, language, and symbolic properties. These inductive biases prove beneficial for both solving scientific problems using machine learning and enhancing representation learning methods.&#13;
&#13;
We term this methodology “Representation Learning through the Lens of Science” and demonstrate its effectiveness in various applications. Finally, we discuss the limitations of our approach and propose directions for future research to further refine and expand upon the concepts introduced in this thesis.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152679</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Compositional Generalization of AI Systems</title>
<link>https://hdl.handle.net/1721.1/152678</link>
<description>Enabling Compositional Generalization of AI Systems
Li, Shuang
A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. Despite their impressive performance, current AI systems fall short in this area and are often unable to solve tasks that fall outside of their training distribution. The work contained in this thesis aims to bridge this gap by incorporating compositionality into deep neural networks, thereby enhancing their ability to generalize and solve novel and complex tasks, such as generating 2D images and 3D assets based on complicated specifications, or enabling humanoid agents to perform a diverse range of household activities. The implications of this thesis are far-reaching, as compositionality has numerous applications across fields such as biology, robotics, and art production. By significantly improving the compositional ability of AI systems, this research will pave the way for more data-efficient and powerful models in different research areas.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152678</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and Systems for Scalable Multi-Agent Geometric Estimation</title>
<link>https://hdl.handle.net/1721.1/152675</link>
<description>Algorithms and Systems for Scalable Multi-Agent Geometric Estimation
Tian, Yulun
Collaborative geometric estimation, which enables multiple agents to construct globally consistent geometric models of the environment (e.g., maps and robot poses) from noisy local measurements, is a crucial capability for multi-agent systems. However, achieving scalable collaborative estimation in the real world is challenging. On one hand, solving the underlying geometric optimization problems is hard due to the coupling among agents and poor numerical conditioning. On the other hand, real-world communication networks impose operational constraints (e.g., in the form of available bandwidth) that need to be accounted for during deployment.&#13;
&#13;
This thesis develops algorithms and systems toward enabling scalable collaborative geometric estimation, with a focus on tackling the aforementioned technical challenges. The first part of this thesis considers geometric estimation under a fully distributed communication architecture, in which agents directly communicate with each other without relying on a central server. To this end, this thesis presents distributed pose graph optimization algorithms with the goals of achieving certifiable global optimality and convergence under asynchronous communication. Leveraging the developed algorithms, this thesis then develops a complete system for distributed simultaneous localization and mapping (SLAM), and demonstrates the proposed system in large-scale urban environments where up to 8 ground robots traverse a total distance close to 8 km. The second part of this thesis tackles geometric estimation under a server-client architecture, where a server coordinates communication during collaborative optimization. To this end, this thesis presents a communication-efficient solver that enables large-scale collaborative mapping with significantly reduced communication. Furthermore, specialized solvers for collaborative rotation averaging and translation estimation are developed, which exploit spectral graph theoretic methods to achieve fast convergence. These algorithmic contributions, together with open-source code and datasets, facilitate the development of scalable multi-agent perception systems in complex environments.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152675</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytical and computational methods for non-Gaussian reliability analysis of nonlinear systems operating in stochastic environments</title>
<link>https://hdl.handle.net/1721.1/152670</link>
<description>Analytical and computational methods for non-Gaussian reliability analysis of nonlinear systems operating in stochastic environments
Guth, Stephen Carrol
Statistical problems arise in many areas of ocean engineering, such as ship seakeeping, structural reliability, and design, and can be caused both by uncertainty in materials and construction, and by the inherently stochastic nature of random waves. Due to the dynamical nonlinearity of these systems, the statistical response is typically non-Gaussian with highly non-trivial distribution tails, making the problem of risk analysis a very hard one. However, accurate estimates of responses and loads are important, both to avoid catastrophic failures like capsize or hull breach and to make long-term plans such as maintenance schedules, fatigue calculations, and route selections. These statistics are especially important during the design phase, when the statistical performance of different hull configurations or structure designs is an important factor in choosing the final design. Further, the most important events to accurately estimate are also often the most difficult: extreme events that live in the unlikely tails of statistical distributions.&#13;
&#13;
Traditional direct experiments and simulations, which have trouble accounting for these types of random parameters or random forcing, typically must be repeated many times in order to characterize the statistical results engineers are interested in. Unfortunately, this “iron law of Monte Carlo techniques” quickly leads to economically infeasible requirements on the number of experiment or simulation repeats, especially requirements for resolving distribution tails and the likelihood of rare events. These costs are bad enough when evaluating a single hull configuration, but become ruinous when comparing the performance of different hull choices in all but the most restricted conditions. In this thesis, we develop a set of analytical and computational techniques to replace expensive long-time Monte Carlo simulations with more economical, carefully designed short-time simulations combined with analytical formulas.&#13;
&#13;
This thesis is divided into four parts. In the first part, we develop a theory for designing stochastic wave-structure interaction simulations. This theory is designed to reduce simulation costs by replacing long-time steady-state simulations with short-time wave-episode simulations, without sacrificing the fidelity of the statistical calculations, i.e. with theoretical guarantees for statistical convergence. In the second part, we apply ideas of data-driven reduced order modeling to the pairs of wave-episodes and associated responses, and demonstrate how we can recover accurate statistics with data from only a small number of simulations. Importantly, we use machine learning techniques well suited to learning from the parametrized wave-episodes, and well suited to recovering statistical quantities. In the third part, we develop ideas based on active learning and optimal experimental design to further improve data efficiency by using intermediate simulation results to refine later experiment parameters. These active learning techniques are based upon high quality uncertainty quantification in order to choose experimental designs that target regions of design space with two properties: regions with low model certainty, and regions that are likely contributors to tail risks. In the final part, we analytically examine the effects of intermittent random loads on material fatigue lifetimes according to the Serebrinsky-Ortiz fatigue model. The analytical predictions for fatigue lifetime statistics have important differences compared to the traditional linear models, e.g. the rainflow counting algorithm, especially in the long tail of early failure risks in the presence of intermittent extreme event loads, and make direct use of the statistics calculated in the first three parts.&#13;
&#13;
The four parts of this thesis fit together, first by calculating the statistical responses and loads of nonlinear ocean systems subjected to given sea conditions, and then by exploring the fatigue effects of those known statistical loads. In particular, each part of this thesis takes especial care to resolve the distribution tails of statistical quantities of interest. Extreme events, whether rogue waves or slamming loads, can have a disproportionate impact on the performance of naval vessels and marine structures, and so the techniques presented in this thesis are carefully designed to maximize accurate resolution of non-Gaussian tail risks, without multiplying simulation costs and data requirements endlessly. Looking forward, the natural next step is to apply these techniques to tow tank experiments, where costs are highest and data the most reliable. The application of these techniques will naturally lead to risk averse design and optimization, with orders of magnitude lower cost for data acquisition and avoidance of over-engineering.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152670</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Data Fusion for Estimating Electricity Access and Demand</title>
<link>https://hdl.handle.net/1721.1/152664</link>
<description>Multimodal Data Fusion for Estimating Electricity Access and Demand
Lee, Stephen J.
Electric power is a key enabler for economic development; nevertheless, 770 million people live without electricity access and 3.5 billion have unreliable connections. There is general consensus that the global community is off-track from realizing the United Nations’ Sustainable Development Goal #7 (SDG7) target of “universal access to affordable, reliable and modern energy services” by the year 2030. Under the International Energy Agency’s (IEA) central “Stated Policies Scenario,” 670 million people are expected to still be without electricity access in 2030.&#13;
&#13;
Simultaneously, we as a global community are off-track from achieving the Paris Agreement ambitions to limit global warming to 1.5 degrees Celsius compared to preindustrial levels. A 2021 U.N. report notes that national mitigation pledges for 2030 will collectively produce only one-seventh of the emissions reductions necessary to achieve the 1.5 degree goal. While electricity and heat together comprise 31.9% of all greenhouse gas (GHG) emissions globally, the electric power sector is expected to play a significant role in virtually all credible pathways towards climate stabilization: power sector emissions must be cut to near-zero by mid-century, and the power sector must also expand to electrify and therefore decarbonize a larger share of total energy use. The IEA’s “Net Zero by 2050” roadmap for net zero emissions models that electricity demand for “emerging market and developing economies” will need to exceed double the electricity demand in “advanced economies” by mid-century. Our development and climate imperatives both rest upon electricity demand in low- and middle-income countries.&#13;
&#13;
This dissertation attempts to push the state-of-the-art with regard to understanding, estimating, and forecasting electricity demand in underserved contexts. We present four technical chapters towards these ends.&#13;
&#13;
First, we assess the importance of accurately estimating aggregate demand levels by performing sensitivity analyses using technoeconomic optimization models. We find that efforts to improve methods for demand forecasting are essential to prospects for right-sizing system designs. Over the domain of aggregate demand values modeled, the average cost of service provision ranges from $0.13/kWh to $0.37/kWh. This nearly three-fold difference demonstrates the critical influence of economies of scale and improved grid utilization on cost. We additionally find that characterizing building-level consumer type diversity plays a critical role in the outcome of high-resolution infrastructure plans. For our “central demand case,” we show that modeling a diversity of consumer types results in least-cost plans that are 9% less costly than modeling demand assuming there is only one customer type. When comparing supply technology shares for cost-optimal designs, modeling consumer type diversity decreases prescribed grid extension shares from 89% to 77%.&#13;
&#13;
In our second technical chapter, we apply machine learning systems for probabilistic data fusion to the problem of forecasting annual electricity demand at the country level for all African countries. We provide a novel set of probabilistic forecasts for the continent while addressing missing data issues and employing a rigorous framework for cross-validation and backtesting of model results.&#13;
&#13;
In our third technical chapter, we show how machine learning systems for probabilistic data fusion can be used to estimate electricity access rates at building-level resolution in low-access countries. Estimating electricity access is a key component of understanding electricity demand because aggregated consumption statistics only reflect demand from buildings with electricity access. Without access information, there is significant ambiguity when attempting to attribute aggregated consumption values to individual buildings. We train and evaluate our model using data describing electrified and non-electrified buildings in Rwanda, and we achieve state-of-the-art results relative to existing methods in the literature. For our test set in Rwanda, our method achieves an accuracy score of 80.7% while the closest published baseline in the literature achieves 70.9%. Our system additionally enables explicit uncertainty quantification and has the potential to be scaled across the whole African continent.&#13;
&#13;
In our final technical chapter, we develop novel methods for estimating building-level electricity demand. Challengingly, ground truth metered consumption datasets in low-access countries are often accompanied only by noisy geolocation data. This issue is exacerbated by the fact that meter and building connections reflect many-to-many relationships. There may be many electricity meters residing within a single building, and there may also be many buildings connected to a single meter. While our consumption data is logged at the meter level, machine learning features of interest can only be extracted at the building level. Because standard supervised machine learning models cannot express this complexity, we develop an application-tailored model based on a neural network (NN)-embedded probabilistic graphical model (PGM) for probabilistic data fusion. The PGM-based approach allows us to explicitly define potential relationships between meters and nearby buildings, while the NN models employed enable us to effectively extract information from multimodal features at the building level. As a result, our model reflects a principled approach to training and running building-level demand estimation models using only meter-level ground truth information. We also make a few additional contributions: we provide probabilistic building-level output; we train and test in Rwanda, a country for which building-level estimates are not currently available; and we provide demand estimates for commercial and industrial consumers in addition to residential consumers. From a methodological standpoint, ours is the first machine learning model that embeds and trains NNs within PGMs employing Markov chain Monte Carlo (MCMC) sampling algorithms for inference. This application serves as an example of the novel combination of these individually important classes of algorithms.&#13;
&#13;
Taken together, the methods and studies presented in this dissertation enable the improved deployment of continuous electricity infrastructure planning across all low- and middle-income countries worldwide. We hope the research community continues to catalyze progress towards enabling continuous planning methodologies and mapping efficient pathways for achieving our global climate and development goals.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152664</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nothing to See, Nothing to Say, and Noting How Much&#13;
Three Essays on Information and Behavior</title>
<link>https://hdl.handle.net/1721.1/152661</link>
<description>Nothing to See, Nothing to Say, and Noting How Much&#13;
Three Essays on Information and Behavior
Cashman, Matthew
I present three essays that examine information flows and behavior. The first examines the effect of sequential play in Public Goods Games in cases where players move one after another but do not see each other’s moves. Even with no information flow—when there is nothing to see of others’ decisions—order of play affects contributions to the public good. The second essay considers pre-play socializing and its effects on coordination games and hold-up games. Pre-play small talk results in better outcomes even when players talk before they know they will be playing a game—before they have anything of strategic relevance to say. The third essay presents a novel quantitative, empirical means of measuring the flow of memes through minds. Most ways of learning what other people know rely on strong prior commitments about which questions are the right ones to ask. Using cloze completion tasks, I outline a principled, content-agnostic method of estimating how much information from a given text is stored in a reader’s mind.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152661</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Antimalarial Afterlives: Medicine for a Planetary Age from Madagascar</title>
<link>https://hdl.handle.net/1721.1/152660</link>
<description>Antimalarial Afterlives: Medicine for a Planetary Age from Madagascar
Robbins, Gabrielle Lydia Marie
This historical ethnographic dissertation investigates the politics of health and medicine in central Madagascar. Focusing on the island’s Covid-19 response, which revolved around the commercialization of “homegrown” pandemic therapeutics, I examine what made rapid development of “Made in Madagascar” medicines possible over the longue durée, as well as the social, political, religious, and ecological effects of this pivot to medical self-sufficiency. The island’s Covid drugs used a medicinal plant, Artemisia annua, long cultivated as a cash crop for exported antimalarial drug ingredients. But as artemisia was “repositioned” from a global antimalarial drug to a domestic Covid-19 treatment, I argue that social sciences of medicine must expand a “biographical approach” to pharmaceuticals with attention to drugs’ “afterlives” as therapeutic entities for novel diseases. I further suggest the frame of drugs’ afterlives can make clear how drug-making is also place-making, as artemisia stems become village building materials and important substances like firewood for daily life in cultivation hubs.&#13;
&#13;
In order to fully understand Madagascar’s pandemic response, I also trace the arc of self-sufficiency politics on the island from the late 1800s to the present. I argue that the meaning of “self-sufficiency” is not stable; rather, multiple ideas about acceptable dependence overlap through time: colonial-era demands for fiscal self-sufficiency; socialist-era politics of isolationist centralized industry; and pandemic-era demands for resource sovereignty amid unstable global supply chains. This multiplicity yields significant social complexity in the present.&#13;
&#13;
I then draw on ethnographic data to analyze domestic medical industries’ sensory, religious, ecological, and political dimensions during overlapping Covid-19 and climate crises on the island. I argue that in contemporary Madagascar, drug-making offers therapy just as much as drug-taking, which forces expanded understandings of how medicines work and upon whom. This case study of medicine-making in Madagascar then concludes with a call for social studies of medicine to more fully integrate attention to religion and ecology in analyses of pharmaceuticals and medical politics.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152660</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Switching Dynamics in Ferroelectric Hf₀.₅Zr₀.₅O₂ Devices: Experiments and Models</title>
<link>https://hdl.handle.net/1721.1/152658</link>
<description>Switching Dynamics in Ferroelectric Hf₀.₅Zr₀.₅O₂ Devices: Experiments and Models
Kim, Taekyong
Ferroelectric Hf₀.₅Zr₀.₅O₂ (FE-HZO) has breathed new life into the field of ferroelectric research, boasting exceptional physical properties, such as compatibility with existing semiconductor processes, highly scalable thickness, and prominent FE properties. As a result, this intriguing material has garnered extensive attention for applications in ultra-scaled Si MOSFETs, memory devices, energy-efficient hardware for convolutional computation, and RF devices. However, despite intense research, there is still controversy about the FE switching dynamics, a crucial factor in designing ferroelectric device applications.&#13;
&#13;
This thesis pursues a fundamental understanding of the switching dynamics in FE-HZO structures, founded on accurate dynamic measurements with meticulous experimental design considerations. Towards this, low-parasitic FE-HZO structures have been fabricated and characterized over a broad range of frequencies using large-signal and small-signal analysis. In large-signal analysis, a Finite-Difference implementation of the Nucleation Limited Switching model (FD-NLS) is introduced, which accurately predicts the FE circuit dynamics across a wide range of time scales. Additionally, a thorough analysis of the imprint effect, a critical reliability issue in FE devices, is provided. In small-signal analysis, a physically meaningful small-signal equivalent circuit model is developed that describes impedance measurements well over a full bias range and 7 orders of magnitude of frequency, all the way into the GHz regime. Moreover, this work sheds light on the underlying physics of the circuit elements.&#13;
&#13;
The findings in this thesis will contribute to the design and modeling of diverse FE-HZO devices for a wide range of applications, adding valuable knowledge to the field of FE-HZO research.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152658</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shadows, Mirrors, and Snow: Decoding Multibounce Light Transport in Time-of-Flight Photography</title>
<link>https://hdl.handle.net/1721.1/152657</link>
<description>Shadows, Mirrors, and Snow: Decoding Multibounce Light Transport in Time-of-Flight Photography
Henley, Connor A.
Time-of-flight photography is a new imaging modality that combines the precise time-of-flight measurements of lidar with the wide field of view and high angular resolution of photography.  The result of this combination is that time-of-flight cameras by nature capture videos of light as it flows through the photographed scene.  Unlike regular photographs, time-of-flight photographs expose the sequence of how light interacts with a scene.  Furthermore, the fixed and finite speed of light in free space tightly couples the dimensions of space and time, such that it is typically straightforward to determine precisely where and when observed scattering events occur.&#13;
&#13;
In this thesis we present five new imaging methods that leverage the unique advantages of time-of-flight photography to analyze light transport within a scene and, through this analysis, infer scene properties.  First, we show that shadows observed in light that has scattered exactly twice before returning to the camera can be used to infer the geometry of surfaces that are hidden from view.  Second, we show how the time-of-flight of these two-bounce returns can be used to retrieve the visible scene’s geometry.  Third, we propose an algorithm to retrieve the spatially varying BRDF of visible surfaces from two-bounce returns.  Fourth, we show how multi-bounce returns measured with a time-of-flight camera can be used to unambiguously detect and localize specular surfaces like mirrors and windows.  Finally, we show how time-resolved measurements of light that scatters many times within snow before returning to the camera can be used to infer the snow’s density and grain size, and the concentrations of impurities within the snow.&#13;
&#13;
Our hope is that introducing several useful and practical applications of time-of-flight photography will spur interest and motivate further research into new applications and the fundamental principles of this exciting new technology.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152657</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Patient-Specific Models of the Heart to Study Cardiac Growth and Remodeling</title>
<link>https://hdl.handle.net/1721.1/152653</link>
<description>Building Patient-Specific Models of the Heart to Study Cardiac Growth and Remodeling
Fan, Yiling
Cardiac growth and remodeling (G&amp;R) are intricate processes that occur in response to various physiological and pathological conditions to maintain the optimal function of the heart. These processes involve complex molecular, cellular, and structural alterations that influence the size, shape, and function of the heart. Research efforts, utilizing in vitro and in vivo models, have shed light on the mechanisms of G&amp;R at the cellular and tissue levels. Clinically, we have access to a wealth of sophisticated information, ranging from 0D (e.g. pressure, heart rate, and ECG) to 4D (e.g. dynamic MRI and CT) data about our cardiovascular system. However, a significant challenge lies in the interpretation of these data, as they provide macroscale information and how this links to microscale mechanisms remains poorly understood. In consequence, a vast amount of clinical data, particularly imaging data, are employed qualitatively rather than quantitatively. Furthermore, many microstructural discoveries have not yet been fully leveraged for the improvement of diagnostic accuracy and therapeutic strategies.&#13;
&#13;
The main goal of this thesis is to develop multiscale patient-specific simulations to connect microscale insights and clinically accessible macroscale information. I propose a growth characterization workflow utilizing in vivo cardiac MRI, kinematic growth modeling, and an inverse finite element method to quantify tissue-level growth. Tested on in vivo data from swine models, this approach showcases the potential of non-invasive measurement of microstructural growth properties. It also highlights two potential areas of improvement: a more granular growth analysis, currently hindered by the simplified kinematic growth model, and a more efficient inverse analysis process, catering to the time-sensitive nature of clinical practice. To address this, I create a multiscale model using a microstructurally motivated growth theory. The model enables investigation of changes in different tissue components (e.g. myocytes and collagen) during the process of G&amp;R and its subsequent reversal. In addition, I develop a data-driven surrogate model that can efficiently generate dynamic patient-specific simulations of the left ventricle with mean nodal error below 3 mm. The surrogate model achieves a speed increase of 4 orders of magnitude compared to the traditional finite element model, making it ideal for inverse studies in fast-paced clinical settings.&#13;
&#13;
In summary, this thesis introduces simulation frameworks that effectively quantify G&amp;R properties, bridge the mechanobiological insights of G&amp;R across different scales, and facilitate the generation of personalized models. These advancements represent progress towards the realization of cardiac digital twins in the realm of clinical practice.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152653</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shared Control Based on Predictability Analysis With Application to Human-SuperLimb Collaboration</title>
<link>https://hdl.handle.net/1721.1/152652</link>
<description>Shared Control Based on Predictability Analysis With Application to Human-SuperLimb Collaboration
Song, Hanjun
This thesis presents data-driven quantitative methodologies based on predictability analysis to construct shared control in human-robot collaborations. The methods are applied to assist hemiplegic patients in a bimanual eating task (i.e., eating with a fork and a knife) with a Supernumerary Robotic Limb, or SuperLimb for short. The core idea of the study is to assign situations with high predictability to the robot’s autonomy and situations with low predictability to the human’s intervention or manual control. Furthermore, we find a method to improve predictability by seeking useful information that robots are not able to observe.&#13;
&#13;
The first phase of the study utilized correlation data analysis and principal component analysis (PCA) to measure the predictability of hand movements during the bimanual task. Correlation data analysis reveals that a few degrees of freedom (DOF) of the hand holding the knife are not significantly correlated with any other DOF, indicating independent, voluntary movements that are difficult to predict. On the other hand, a few DOF are strongly correlated with the fork movements and therefore predictable. Based on the analysis, a hybrid SuperLimb is designed so that the independent DOF are powered and manually controlled by human body movements, while the predictable DOF are automatically controlled by robot actuators.&#13;
&#13;
In the second phase of the study, Extended Lipschitz Analysis (ELA) is developed as another method of measuring predictability. ELA evaluates the demonstration data by computing Lipschitz quotients and informs the level of predictability of the desired action independent of the prediction model. The Lipschitz quotients tend to be very high in some situations due to the information discrepancy between humans and robots and lower when humans and robots share information. Therefore, ELA provides a method of lowering Lipschitz quotients by seeking missing information and adding a new variable to the input space. Unpredictable situations where the Lipschitz quotients are high even after the new variable is added are assigned to the human and the remaining situations with small Lipschitz quotients are assigned to the robot.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152652</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Personal Software with Reactive Databases</title>
<link>https://hdl.handle.net/1721.1/152650</link>
<description>Building Personal Software with Reactive Databases
Litt, Geoffrey
Spreadsheets and relational databases can simplify the creation of a variety of software, particularly for end-users who are less familiar with programming. This thesis extends techniques from those tools in three novel ways. First, we show how existing real-world web applications can be extended without doing traditional programming, using a spreadsheet view. Second, we show how text documents can be gradually enriched into personal software tools using similar techniques. Finally, we demonstrate a new reactive relational data architecture for building complex applications with rich interactions and stringent performance requirements. Together, these projects empower both end users and application developers with simpler tools for developing software.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152650</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robots as Social Catalysts: A Multidisciplinary Framework for Designing Embodied Social Agents that Foster Long-term Human Collaboration and Connection</title>
<link>https://hdl.handle.net/1721.1/152644</link>
<description>Robots as Social Catalysts: A Multidisciplinary Framework for Designing Embodied Social Agents that Foster Long-term Human Collaboration and Connection
Chen, Huili
As artificial intelligence (AI) devices become more common in our homes, concerns about their potential harm to human-human connections arise accordingly. This dissertation aspires to study the responsible design of embodied agents as social catalysts to purposefully enhance human-human interactions. It aims to shed light on the following three overarching research questions. Can we become more socially connected and collaborative with one another through the facilitation of a socially embodied agent? What social capabilities do these embodied agents need to acquire as social catalysts? What approaches should we take to design, develop and evaluate computing systems that enable positive social interactions between a human group and an embodied agent responsibly? &#13;
&#13;
To investigate the three questions, this work proposes a multidisciplinary framework for the holistic design and evaluation of embodied social agents intended to foster human-human connection and collaboration. It argues that robots need to possess three social capabilities: social-affective perception, context awareness, and social adaptation. These capabilities are elaborated in detail within the framework, together with a comprehensive, iterative process for their design, evaluation, and enhancement. This process needs to be grounded in theories and findings in psychology, and employ a mixed-methods integrative approach that involves computing, social sciences, and interaction design. &#13;
&#13;
A case study centered on parent-child reciprocal interaction is conducted to demonstrate and evaluate this proposed framework, highlighting the unique complexities and possibilities of multi-person human-robot interaction. The case study aims to facilitate enriching adult-child exchanges essential for children's development while overcoming various technological and methodological challenges posed by young children as a user group. A series of studies and experiments were conducted in this dissertation to examine all key aspects of the long-term multi-person human-agent interaction (M-HAI). These aspects include understanding the dynamics of human-human interaction, modeling social-affective dynamics in human-human interaction, introducing design guidelines for long-term M-HAI, and designing and evaluating adaptive M-HAI.&#13;
&#13;
In summary, this dissertation provides insights into the potential of designing embodied social agents as social catalysts within human groups. It invites future exploration into the possibilities and challenges of machine-catalyzed group interactions, emphasizing both technical and ethical considerations. As sociable intelligent devices—from personal voice agents at home to autonomous vehicles—rapidly proliferate, humans increasingly interact with AI agents in an ecology composed of other humans and other intelligent machines. Accordingly, this work helps advance the social sophistication of intelligent machines that live with humans in this emergent human-agent ecology, as well as the understanding of the social and behavioral mechanisms underlying this ecology.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152644</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling Image Synthesis with Emergent and Designed Priors</title>
<link>https://hdl.handle.net/1721.1/152643</link>
<description>Controlling Image Synthesis with Emergent and Designed Priors
Chai, Lucy
Image synthesis has developed at an unprecedented pace over the past few years, giving us new abilities to create synthetic yet photorealistic content. Typically, unconditional synthesis takes in a tensor of random numbers as input and produces a randomly generated image that mimics real-world content, with little to no way of controlling the result. The work contained in this thesis explores two avenues of obtaining controllable content from image generative models using emergent and designed priors. Emergent priors leverage the capabilities of a pre-trained generator to infer how the world operates, simply by training on large quantities of data. On the other hand, designed priors use built-in constraints to enforce desired properties about the world. Using emergent priors, we can control content by discovering factors of variation and compositional properties in the latent space of synthesis models. We further add coordinate information and camera inputs as designed controls to generate continuous-resolution and 3D-consistent imagery.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152643</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Politics of Scale and Scaling in Chinese Governance and Venture Capitalism</title>
<link>https://hdl.handle.net/1721.1/152642</link>
<description>The Politics of Scale and Scaling in Chinese Governance and Venture Capitalism
Wong, Jamie Jing-Men
Drawing on three cumulative years of fieldwork I conducted in China with start-up entrepreneurs, venture capital investors (VCs), and local government officials, this dissertation investigates the intersection between Chinese governance, venture capitalism, and "big data"-driven technologies. Through a parallel study of how scale and scaling feature in China's nation building and the venture capitalist project, I elaborate on the notion of the "weight of scale": the simultaneous duality of scale as a resource and as a burden. I reveal how the impact of data-driven technologies such as artificial intelligence and machine learning extends beyond domain-specific applications, and show how sociotechnical imaginaries influenced and informed by these technologies crucially lend scientific authority to ways of configuring and organizing society. I highlight how "successful models that mostly fail"—modes of operation involving massive trial-and-error with only a few spectacularly favorable outcomes—have spread from VC to Chinese political life. Overall, this dissertation tells the story of how American VC domesticated Chinese investors and how China eventually came to domesticate the VC format to govern.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152642</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bringing the Water-Efficiency Benefits of Precision Irrigation to Resource-Constrained Farms Through an Automatic Scheduling-Manual Operation Irrigation Tool</title>
<link>https://hdl.handle.net/1721.1/152638</link>
<description>Bringing the Water-Efficiency Benefits of Precision Irrigation to Resource-Constrained Farms Through an Automatic Scheduling-Manual Operation Irrigation Tool
Van de Zande, Georgia D.
As global populations increase and freshwater supplies decrease, improving farmers’ adoption of water-efficient irrigation equipment and practices is crucial. This aim is particularly imperative in resource-constrained regions like East Africa (EA) and the Middle East and North Africa (MENA), where existing precision irrigation solutions—which are designed to achieve high water efficiencies—often do not meet the needs of farmers. In these regions, farmers prefer their current manual practices, or they may not be able to easily purchase, install, or maintain traditional precision irrigation equipment. This work aims to bring the water-efficiency benefits of precision irrigation to resource-constrained farmers by understanding and meeting their specific needs.&#13;
&#13;
First, this work sought to elucidate the differences between the diverse types of EA farmers and to understand if opportunities exist for new irrigation products targeted to these farmers. An interview-based market assessment was conducted to reveal distinct market segments and each segment’s values regarding irrigation systems. Then, a techno-economic feasibility analysis was conducted to reveal which irrigation methods and energy sources would be most promising for each segment. Four market segments were found: the traditional smallholder, the semi-commercial smallholder, the medium-scale contract farmer, and the remote farmer. The remainder of this thesis focuses on the medium-scale contract farmer who would value low-cost prediction capabilities and solar-powered drip irrigation systems optimized for profit. The identified opportunities for innovation in this work can guide irrigation designers as they develop new systems that directly serve farmers’ needs.&#13;
&#13;
The second aim of this work targeted medium-scale contract farmers in EA and a similar segment of MENA farmers. Functional requirements were proposed for a tool that could address the efficiency needs of these farmers while integrating into their current manual practices. To meet these requirements, a design concept for an automatic scheduling and manual operation (AS-MO) user experience (UX) was proposed. Storyboards and a prototype demonstration of the AS-MO UX were evaluated by farmers and key market stakeholders in Kenya, Jordan, and Morocco. Farmers in Kenya and Jordan in particular valued the proposed UX because they want increased efficiency on their farms without installing automatic valves for cost and complexity concerns. Interviewees provided feedback on how to improve the tool’s design in future iterations.&#13;
&#13;
Finally, this work describes functional AS-MO tool prototypes that were installed on a farm in Jordan and a farm in Kenya. To understand how this tool performs under real farm conditions, these prototypes were designed to deliver a long-term AS-MO UX to study participants. The prototype monitored local weather conditions, generated water-efficient schedules using an existing scheduling theory, and notified users’ phones when they should manually open or close valves. The irrigation practices of participants using the AS-MO prototype were compared to conventional practices. After 11 weeks of use, study participants demonstrated successful daily use of the prototype. Irrigation events were measured in the field, showing that users correctly confirmed 93% of the scheduled events using the tool’s interface. Further, of the irrigation events that did occur, a majority of their durations fell within 15% of the scheduled duration. Results from this work and feedback from study participants can continue to improve the design of the proposed AS-MO tool and its UX. If adopted at scale, this tool could increase the adoption of water-efficient irrigation practices on resource-constrained farms that are not served by existing precision irrigation technology, improving food security and sustainable agriculture in EA and MENA.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152638</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incidence Problems for Slabs</title>
<link>https://hdl.handle.net/1721.1/152635</link>
<description>Incidence Problems for Slabs
Tammen, Sarah
In this thesis, I prove incidence estimates for slabs, which are formed by intersecting small neighborhoods of well-spaced hyperplanes in R^d with the unit cube [0, 1]^d. My work is an analogue of a theorem of Guth, Solomon, and Wang, who proved a version of the Szemerédi–Trotter theorem for thin tubes that satisfy a certain strong spacing condition. My proof uses induction on scales and the high-low method of Vinh, along with new geometric insights.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152635</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>View Synthesis for Visuomotor Policy Learning</title>
<link>https://hdl.handle.net/1721.1/152632</link>
<description>View Synthesis for Visuomotor Policy Learning
Lin, Yen-Chen
Visuomotor policy learning is the problem of teaching machines how to use visual information to determine how to interact with their environment. Recent approaches have harnessed deep learning models to demonstrate impressive results in multi-modal and multi-task generalization. However, these models often lack a comprehensive understanding of the 3D world as they are primarily trained on large-scale RGB image datasets. In this thesis, we present a new framework that equips visuomotor policies with a view synthesizer. This generative model has the ability to envision novel viewpoints and perspectives of the 3D environment. Unlike training a visuomotor policy solely on real-world data, a view synthesizer can produce coherent views of a 3D scene in a controllable manner. This capability assists the policy in utilizing symmetries present in robotic tasks through learned and designed utilization. Learned utilization expands the training dataset of the visuomotor policy to implicitly encourage the emergence of symmetric properties through learning. On the other hand, designed utilization integrates symmetric properties into both the policy’s input representations and its model architectures to explicitly establish symmetric properties. We demonstrate that the proposed systems exhibit improved sample efficiency and generalization compared to visuomotor policies that lack the capability for view synthesis.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152632</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paper-based Molecular Technologies for Faster, More Accessible Infectious Disease Diagnostics</title>
<link>https://hdl.handle.net/1721.1/152611</link>
<description>Paper-based Molecular Technologies for Faster, More Accessible Infectious Disease Diagnostics
Yee, Emma Hiu-Yan
A lack of rapid, widely accessible methods for diagnosing infectious diseases greatly hinders effective disease control in all parts of the world, but especially in resource-limited communities. Resource-limited areas frequently are the hardest hit by infectious diseases, but often cannot support the cost and infrastructure needs of current diagnostic methods. Technologies that utilize low-cost materials and easy-to-use methods could help address this issue. Diagnostic tests made from cellulose paper that use colorimetric readouts—where a visible color appears if disease biomarkers are present—have shown promise in reducing device cost, equipment and training needs. However, their ability to support a variety of diagnostic methods has not been extensively assessed. The objective of this thesis was to investigate the ability of paper-based, colorimetric diagnostic tests to detect protein and nucleic acid biomarkers of infectious disease in clinically relevant matrices.&#13;
&#13;
Protein affinity reagents, such as antibodies and engineered binder proteins, were integrated into paper devices, and antibody stabilization methods were developed for room-temperature bioactive paper device storage. It was demonstrated that paper-based tests with colorimetric detection methods could detect both protein and DNA infectious disease biomarkers in complex, bodily fluids like saliva. Colorimetric detection was performed either via enzymatic amplification or eosin photopolymerization amplification, a visible light-initiated radical polymerization method for colorimetric signal amplification in diagnostic assays.  To better understand the eosin photopolymerization reaction mechanism and its diagnostic applications, a reaction-diffusion model that closely mimicked paper diagnostic assay conditions was developed based on a proposed reaction mechanism. After experimental validation, the model and experimental results were used to inform future use of the colorimetric signal amplification method.&#13;
&#13;
Investigating the diagnostic performance of paper-based colorimetric tests enabled the development of diagnostic assays with clinically relevant sensitivity for tuberculosis, periodontal disease, and COVID-19. Cellulose paper microfluidic devices were shown to be versatile platforms that can support different detection reactions and capture chemistries, and are compatible with many sample types. The main findings of this research have enabled a better understanding of how paper-based colorimetric diagnostic tests can be applied to better meet disease control needs.
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152611</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Character sheaves on symmetric spaces</title>
<link>https://hdl.handle.net/1721.1/152602</link>
<description>Character sheaves on symmetric spaces
Grojnowski, Ian.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1992; Includes bibliographical references (leaf 29).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152602</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the action of organoalkali metal compounds on butadiene.</title>
<link>https://hdl.handle.net/1721.1/152601</link>
<description>A study of the action of organoalkali metal compounds on butadiene.
Letsinger, Robert Lewis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1945; Vita.; Bibliography: leaf 106.
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152601</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A dual resonance model for meson-nucleon scattering.</title>
<link>https://hdl.handle.net/1721.1/152596</link>
<description>A dual resonance model for meson-nucleon scattering.
Hanson, Andrew Jorgen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: 156-160.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152596</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Error bounds for parallel communication channels.</title>
<link>https://hdl.handle.net/1721.1/152595</link>
<description>Error bounds for parallel communication channels.
Ebert, Paul M. (Paul Michael)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152595</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inelastic electron scattering from F19 and Ca40.</title>
<link>https://hdl.handle.net/1721.1/152594</link>
<description>Inelastic electron scattering from F19 and Ca40.
Hallowell, Paul Lincoln.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152594</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model for pion-nucleon scattering based on the Bethe-Salpeter equation.</title>
<link>https://hdl.handle.net/1721.1/152592</link>
<description>A model for pion-nucleon scattering based on the Bethe-Salpeter equation.
Zia, R. K. P. (Royce King-Ping)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1968; Vita.; Bibliography: leaves 185-188.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152592</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport theory of dense gas systems with attractive forces.</title>
<link>https://hdl.handle.net/1721.1/152590</link>
<description>Transport theory of dense gas systems with attractive forces.
Hamer, Norman David.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1971; Vita.; Bibliography: leaves 243-245.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152590</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bosonic Quantum Hall States from Rapidly Rotating Bose-Einstein Condensates</title>
<link>https://hdl.handle.net/1721.1/152583</link>
<description>Bosonic Quantum Hall States from Rapidly Rotating Bose-Einstein Condensates
Shaffer, Airlia
Rotating Bose-Einstein condensates (BECs) were explored as a platform for studying quantum Hall physics. This work relies on the equivalence between neutral particles under rotation and charged particles in a magnetic field to create rotational analogs of quantum Hall states of bosons.&#13;
&#13;
A novel geometric squeezing protocol is presented for creating states with arbitrary, controllable Landau level occupation. Lowest Landau level (LLL) Landau gauge condensates are observed for the first time using this method, enabled by in situ imaging.&#13;
&#13;
In the flat bands created through rotation, interaction energy dominates kinetic energy, facilitating the study of the interaction-driven dynamics of Landau gauge quantum Hall states of bosons. These states crystallize into a periodic array of superfluid droplets, at a rate that demonstrates the onset of dynamics in the LLL. In addition, the random phase of the superfluid array across separate iterations of the experiment suggests that this state shares some properties with a supersolid.&#13;
&#13;
For moderate rotation frequencies, where there are no distinct flat bands, the hydrodynamic instability of the condensate toward a patterned density distribution is observed. Microscopically, the instability to pattern formation results from the synthetic-magnetic-field-induced minimum in the collective excitation spectrum crossing zero. The depth and position of this minimum are controlled through rotation, and so too are the emergent pattern anatomies. Density-depleted regions of the condensate are full of quantized vortices, which enter the cloud, causing it to turbulently evolve into a vortex liquid.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152583</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Cavity-Coupled Rydberg Atom Array Platform for Quantum Computing</title>
<link>https://hdl.handle.net/1721.1/152582</link>
<description>A Cavity-Coupled Rydberg Atom Array Platform for Quantum Computing
Rudelis, Alyssa
Neutral atom systems have long been the test bed for complex quantum physics. Recently, much of the focus in quantum research has shifted from fundamental science to applications in quantum computation. Although several different hardware platforms have made strides in their capabilities in this direction, each has its own impediments to scaling system size: both physically in terms of qubit number and temporally in terms of code cycles before decoherence. Specifically in neutral atom systems, the ability to non-destructively read out atomic states on timescales much faster than atomic decoherence is lacking. By pairing the geometric reconfigurability and engineered strong interactions of neutral atom Rydberg arrays with strong optical coupling to high-finesse cavities, we can build a new quantum architecture that overcomes many of the limitations of other hardware systems. In this dissertation, we lay out the case for coupling Rydberg atom arrays to cavities, discussing the connections from atomic physics to quantum computing and the fundamental physics that gives optical cavity systems an advantage over other current quantum computer implementations. We then describe the design, testing, and implementation of such a system. Our system simultaneously accommodates Rydberg excitation, reconfigurable optical tweezer arrays, selective atomic state addressing, and strong coupling to an optical cavity. We discuss in detail the risks and technical considerations of installing such a system in ultra-high vacuum, including the discovery of a new material failure mechanism for high-reflectivity mirrors. Finally, we outline concrete future steps to demonstrate proof-of-principle surface code error correction in our system, paving the way to fault-tolerant quantum computation with neutral atoms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152582</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Tools to Physically Magnify Biological Substrates for Clinical Applications</title>
<link>https://hdl.handle.net/1721.1/152581</link>
<description>Developing Tools to Physically Magnify Biological Substrates for Clinical Applications
Aronson, Jenna L.
This dissertation will cover two technologies that have been developed to provide enhanced nanoscale resolution of clinical pathology and liquid biopsy samples using conventional confocal microscopes. Biology is based on nanoscale building blocks, biomolecules, which interact over nanoscale distances. Diseases, especially in their earliest stages, are associated with subtle changes in the presence and organization of biomolecules in cells and tissues. Expansion microscopy (ExM), a method of physical specimen expansion that preserves nanoinformation, and thus enables molecular mapping on conventional microscopes, is spreading rapidly through biology, with many hundreds of experimental papers and preprints to date. Some early studies have also shown ExM to physically magnify small changes found early in a disease, making them more obvious to clinical investigators.&#13;
&#13;
The first technology presented here is decrowding expansion pathology (dExPath), which can expand proteins away from each other in human brain pathology specimens, including formalin-fixed paraffin-embedded (FFPE) clinical specimens. Immunostaining of dExPath-expanded specimens reveals, with nanoscale precision, previously unobserved cellular structures, as well as more continuous patterns of staining. This enhanced molecular staining results in observation of previously invisible disease marker-positive cell populations in human glioma specimens, with potential implications for tumor aggressiveness. dExPath results in improved fluorescence signals even as it eliminates lipofuscin-associated autofluorescence. Thus, this form of expansion-mediated protein decrowding may, through improved epitope access for antibodies, render immunohistochemistry more powerful in clinical science and, perhaps, diagnosis.&#13;
&#13;
In the second technology presented in this document, we ask whether ExM could be adapted into a practical clinical diagnostic tool for early disease by extending it to the imaging of liquid biopsies: easily obtained, minimally invasively extracted specimens from patients (e.g., blood, saliva). Ultimately, this protocol holds exciting potential for non-invasive, longitudinal insight into early and subsequent changes in brain diseases.&#13;
&#13;
The final unit of this dissertation will cover other projects and experiments that did not make it into publications or otherwise did not go as planned. I will also share larger lessons about experimental design and project planning that I have learned throughout my development as a graduate researcher. I will conclude by sharing the experiences throughout my tenure at MIT through which I discovered and cultivated my passion for biotech entrepreneurship.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152581</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clearance systems at brain borders</title>
<link>https://hdl.handle.net/1721.1/152580</link>
<description>Clearance systems at brain borders
Murdock, Mitchell
Submitted to the Department of Brain and Cognitive Sciences on May 8th, 2023 in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Neuroscience.&#13;
&#13;
The glymphatic movement of fluid through the brain powerfully clears metabolic waste. We observed multisensory 40 Hz stimulation promotes the influx of cerebrospinal fluid and the efflux of interstitial fluid in the cortex of the 5XFAD mouse model of Alzheimer’s disease, which was associated with increased aquaporin-4 polarization along astrocytic endfeet, dilated meningeal lymphatic vessels, and amyloid accumulation in cervical lymph nodes. Inhibiting glymphatic clearance abolished the removal of amyloid by multisensory 40 Hz stimulation. Using chemogenetic manipulation and a novel genetically encoded sensor for vasoactive intestinal peptide (VIP), we found VIP+ interneurons facilitate glymphatic clearance by regulating arterial pulsatility. Our findings establish novel mechanisms to recruit the glymphatic system to remove brain amyloid.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152580</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Novelty Detection in Songbird Auditory Cortex</title>
<link>https://hdl.handle.net/1721.1/152579</link>
<description>Predictive Novelty Detection in Songbird Auditory Cortex
Happ, Michael Liu
In order to make sense of complicated sensory landscapes, the brain privileges the processing of novel stimuli. Detecting novelty is therefore a fundamental problem for the brain to solve. And it turns out to be complicated, as stimuli can be completely novel, or novel only relative to certain contexts or expectations. To better understand how the brain detects both types of novelty, we studied an auditory region of the avian brain that performs both absolute and relative novelty detection. We introduce a predictive model, called the Agnotron, that is capable of performing both kinds of novelty detection with the same circuit mechanism. Armed with predictions made by the Agnotron, we perform experiments to test for the existence of Agnotron-like circuitry in the brain. While we fail to find evidence that the various novelty signals in this brain area are produced by the same mechanism, we do find support for predictive circuitry for some novelty signals. We continue with an advanced investigation of one absolute novelty signal in particular, known as the Song-Specific Adaptation (SSA). After recapitulating classical results with state-of-the-art technology, we report novel phenomena that rule out predictive circuit mechanisms for the SSA. Taken together, our results suggest that predictive mechanisms can explain some novelty signals in the avian brain, but not the SSA, which appears to have a simpler feed-forward mechanism of generation.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152579</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural language models and human linguistic knowledge</title>
<link>https://hdl.handle.net/1721.1/152578</link>
<description>Neural language models and human linguistic knowledge
Hu, Jennifer
Language is one of the hallmarks of intelligence, demanding explanation in a theory of human cognition. However, language presents unique practical challenges for quantitative empirical research, making many linguistic theories difficult to test at naturalistic scales. Artificial neural network language models (LMs) provide a new tool for studying language with mathematical precision and control, as they exhibit remarkably sophisticated linguistic behaviors while being fully intervenable. While LMs differ from humans in many ways, the learning outcomes of these models can reveal the behaviors that may emerge through expressive statistical learning algorithms applied to linguistic input. &#13;
&#13;
In this thesis, I demonstrate this approach through three case studies using LMs to investigate open questions in language acquisition and comprehension. First, I use LMs to perform controlled manipulations of language learning, and find that syntactic generalizations depend more on a learner's inductive bias than on training data size. Second, I use LMs to explain systematic variation in scalar inferences by approximating human listeners' expectations over unspoken alternative sentences (e.g., "The bill was supported overwhelmingly" implies that the bill was not supported unanimously). Finally, I show that LMs and humans exhibit similar behaviors on a set of non-literal comprehension tasks which are hypothesized to require social reasoning (e.g., inferring a speaker's intended meaning from ironic statements). These findings suggest that certain aspects of linguistic knowledge could emerge through domain-general prediction mechanisms, while other aspects may require specific inductive biases and conceptual structures.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152578</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Gates with Superconducting Fluxonium Qubits</title>
<link>https://hdl.handle.net/1721.1/152577</link>
<description>Novel Gates with Superconducting Fluxonium Qubits
Ding, Leon
Over the past two decades, superconducting qubits have emerged as a leading platform for gate-based quantum computation. Despite tremendous technological advancements, errors accumulating during gate operations are still a major bottleneck toward building a robust quantum computer. In general, these errors may be reduced by both increasing qubit coherences and improving gate design.&#13;
&#13;
In this thesis, we develop the fluxonium qubit for superconducting quantum computing, a relatively new qubit with advantages in coherence. We first outline the design and simulation of these and other qubits, including a procedure to minimize flux noise in flux-tunable qubits. We then introduce a new fluxonium architecture containing fluxonium qubits coupled via a transmon coupler (FTF, for fluxonium-transmon-fluxonium) and demonstrate novel high-fidelity gates, achieving up to 99.99% fidelity for single-qubit gates and 99.9% for two-qubit gates on the same device. We show that this coupling scheme has advantages for scalability, ZZ reduction, and performance. These results mark a technological milestone for fluxonium qubits and contribute to the ultimate goal of error-corrected universal quantum computing with superconducting qubits.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152577</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Legacy of the First Galaxies: Exploring Ancient Stars in the Milky Way</title>
<link>https://hdl.handle.net/1721.1/152576</link>
<description>The Legacy of the First Galaxies: Exploring Ancient Stars in the Milky Way
Brauer, Kaley
In the first several hundred million years after the Big Bang, the first stars and galaxies transformed the Universe. These ancient systems launched the creation of every galaxy we see today, including the Milky Way. Ever since, for the last 13 billion years, the Milky Way has grown through galaxy mergers. Several of these mergers were with other similarly-sized galaxies, and possibly a hundred of these mergers were with small dwarf galaxies. The smallest dwarf galaxies accreted by the Milky Way, the ultra-faint dwarfs (UFDs), are relics of the first galaxies in the Universe and provide important insight into early galaxy formation and chemical enrichment. Currently, though, accreted UFDs are poorly understood and we lack ways to identify stars that accreted from UFDs. By utilizing a suite of simulations of 35 forming Milky Way-mass galaxies, I find that chemical tagging with r-process elements and clustering in kinematic phase space can help us identify stars that accreted together from these dwarf galaxies. Kinematic clustering only identifies recently accreted UFDs, so we recommend chemical tagging as the more robust method to identify these stars. I also present an analytic model of collapsar enrichment that can self-consistently explain the observed scatter in r-process chemical elements of old stars. I am expanding on these studies with highly-resolved hydrodynamic simulations of the earliest dwarf galaxies, the Aeos simulations. The methodology and initial results for these simulations are also presented.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152576</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetoresistance and Thermoelectricity in Quasi-Low Dimensional Semimetals</title>
<link>https://hdl.handle.net/1721.1/152575</link>
<description>Magnetoresistance and Thermoelectricity in Quasi-Low Dimensional Semimetals
Zhu, Junbo
The emergent properties and low energy excitations of low dimensional condensed matter systems are profoundly affected by the interplay of strong magnetic field and the anisotropic ionic potential of the crystal lattice. Detailed information of the underlying physics of such systems can be probed by the macroscopic responses observed in electric and thermoelectric transport experiments. In this thesis we present a comprehensive study of the electronic bands and transport properties of the quasi-1D layered material ZrTe₅. For vacancy doped crystals, we report a transport, thermodynamic, and spectroscopic study with a focus on elucidating the connections between its band structure and unusual thermoelectric properties. For crystals in the stoichiometric limit, we report the discovery of a commensurate resonance effect in the angular magnetoresistance which can be understood as 1D to 2D crossovers of quasiparticle motion.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152575</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Algorithms and Non-Equilibrium Dynamics in Many-Body Systems</title>
<link>https://hdl.handle.net/1721.1/152574</link>
<description>Quantum Algorithms and Non-Equilibrium Dynamics in Many-Body Systems
Zanoci, Cristian
Understanding the physical features of quantum many-body systems is a challenging endeavor spanning across multiple branches of physics. In this thesis, we explore the properties of one- and two-dimensional quantum systems in the context of quantum algorithms and non-equilibrium dynamics. &#13;
&#13;
In the first part, we construct quantum algorithms that sample from classical Gibbs distributions. More specifically, a quantum state can encode the entire probability distribution, such that a measurement in the computational basis yields an unbiased sample. We show that in the case of Ising models, such states can be prepared in a time that scales quadratically with the linear system size. The state preparation is based on adiabatic evolution under a local quantum Hamiltonian. This approach achieves a polynomial speedup over classical local Markov chain algorithms. By exploring the physical origin of this speedup, we connect the quantum phases of the Hamiltonian governing the adiabatic evolution to the complexity of the sampling problem. Moreover, we show that some of these phases exhibit unusual characteristics, such as robust exponentially degenerate ground states and fracton excitations with restricted mobility. We relate these properties to the subsystem symmetries of the quantum Hamiltonian. &#13;
&#13;
In the second part, we develop an open system framework for studying the thermalization and transport properties of systems coupled to external baths. We first identify the conditions under which they can be cooled to low temperatures. Next, we use the baths to impose temperature and chemical potential gradients across the system and study the emergent steady states. In particular, we are interested in the temperature-dependence of the transport coefficients. Furthermore, we find exact analytical solutions for these quantities in a class of Sachdev-Ye-Kitaev models, which provide additional insights about the non-equilibrium dynamics at late times. Finally, we establish a relationship between diffusion and quantum chaos by showing that the diffusivities are upper bounded by the chaos propagation rate at all temperatures.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152574</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel X-Ray and Antinucleus Searches for Dark Matter</title>
<link>https://hdl.handle.net/1721.1/152573</link>
<description>Novel X-Ray and Antinucleus Searches for Dark Matter
Roach, Brandon Michael
Over a century of cosmological observations suggest that only one-fifth of the matter density of the Universe resides in the familiar subatomic particles of the Standard Model (SM). The remaining eighty percent is known as dark matter (DM), whose presence has so far only been inferred from its gravitational effects on SM particles at cosmic scales. This dissertation describes indirect searches for DM decaying or annihilating into Standard Model particles, particularly x-rays and antinuclei.&#13;
&#13;
A variety of DM candidate particles are expected to decay or annihilate into x-ray photons, which can be detected by space-based telescopes. For example, keV-scale sterile neutrinos arise in many new-physics scenarios, and their decay would produce a distinctive x-ray line. I describe three searches for x-ray line emission from decaying sterile-neutrino DM using data from the NuSTAR x-ray observatory, thereby setting world-leading constraints on the decay rate of this DM candidate in the mass range 6-40 keV. &#13;
&#13;
Low-energy cosmic antinuclei are also a powerful probe of DM. In particular, low-energy antideuterons are expected to be a nearly background-free channel for DM detection, owing to their suppressed production in cosmic-ray collisions. The General Antiparticle Spectrometer (GAPS) balloon experiment will employ a novel exotic-atom-based detection technique to achieve world-leading sensitivity to low-energy antinuclei. The GAPS experiment will contain a large-area tracker consisting of more than 1100 lithium-drifted silicon [Si(Li)] detectors, which serve as the antinucleus stopping target, x-ray spectrometer, and charged-particle tracker. I describe the x-ray testing procedure used to validate the performance of these detectors for flight. This testing also validates that thick, large-area Si(Li) detectors can be mass-produced and operated at temperatures as high as -40 degrees C, with potential applications throughout nuclear physics, particle physics, and astrophysics.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152573</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alternative splicing and RNA editing of the Complexin C-terminus regulates neurotransmitter release in Drosophila</title>
<link>https://hdl.handle.net/1721.1/152572</link>
<description>Alternative splicing and RNA editing of the Complexin C-terminus regulates neurotransmitter release in Drosophila
Brija, Elizabeth A.
Chemical synaptic transmission is an essential and highly regulated step in neuronal communication. Changes in the strength of synaptic transmission mediate several forms of plasticity associated with learning and memory. Functional regulation of Complexin (Cpx) provides an entry point for altering presynaptic output since the protein controls spontaneous and evoked neurotransmitter release through its effects on SNARE complex assembly. The Drosophila cpx locus undergoes alternative splicing to produce two isoforms (Cpx7A and Cpx7B) that differ in the C-terminal ~20 amino acids of Cpx. Although PKA phosphorylation of Cpx7B C-terminal residue S126 enhances spontaneous release and synaptic growth in Drosophila, the more abundant Cpx7A does not undergo PKA phosphorylation, but is subject to RNA editing that produces three alternative C-terminal domain residues (N130S, N130D, N130G). Edit variant N130S contains a phospho-competent residue located in a similar C-terminal region to the Cpx7B PKA phosphorylation site, but the functional significance of Cpx7A RNA editing in regulating neurotransmission and structural plasticity is unknown.&#13;
&#13;
In this thesis, I characterized the role of alternative splicing and RNA editing in Cpx function. I found that the Cpx7A and Cpx7B splice isoforms have largely redundant roles in regulating neurotransmitter release despite significant expression level differences. Single-cell RNAseq data revealed that multiple Cpx7A RNA editing variants are co-expressed, indicating editing acts stochastically to generate a range of edited Cpx proteins within individual cells. To determine if RNA editing alters Cpx7A function, I compared synaptic transmission and growth properties in cpx null mutants rescued with unedited or edited Cpx7A transgenes. N130S variants displayed a dramatic reduction in spontaneous fusion clamping compared to unedited Cpx7A. In addition, N130S functions dominantly when co-expressed with unedited Cpx7A, suggesting the abundance of edited proteins within single neurons can fine-tune their baseline neurotransmission. N130S displayed altered subcellular localization, suggesting an altered ability of the edited protein to tether to synaptic vesicles. Additionally, casein kinase 2 was found to phosphorylate the N130S variant. Together, these findings indicate Cpx7A and Cpx7B have redundant roles in controlling baseline neurotransmission, while differential RNA editing of Cpx7A alters Cpx’s clamping properties and functionally changes presynaptic output by enhancing spontaneous neurotransmitter release.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152572</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional and ultrastructural investigation of mouse and human dendritic spines</title>
<link>https://hdl.handle.net/1721.1/152571</link>
<description>Functional and ultrastructural investigation of mouse and human dendritic spines
Vardalaki, Dimitra
Dendritic spines are the physical site of the majority of excitatory synaptic connections in the mammalian brain. Their complex morphological attributes and protein composition have inspired decades of theoretical and experimental work on how classes of dendritic spines differentially sculpt input processing and plasticity. The technical challenge of tethering physiological measurements of synaptic strength to spine morphology and protein expression in specific cell types leaves many open questions for the field. Here, we combine super-resolution protein imaging and patch-clamp electrophysiology to investigate the formation of nascent connections in the adult mouse cortex, and we develop a new method to apply these techniques to human neurons. In the first project, presented in Chapter 2, we used super-resolution protein imaging and patch-clamp electrophysiology to identify filopodia as the structural substrate for silent synapses in adult neocortex. Of 2,234 spiny synapses from adult mouse layer 5 pyramidal neurons, a surprisingly large fraction (~25%) lacked AMPA receptors. These putative silent synapses were located at the tips of thin dendritic protrusions that lack the distinct head of conventional spines, known as filopodia, which were an order of magnitude more abundant in adult cortex than previously believed (comprising ~30% of all dendritic protrusions). Physiological experiments revealed that filopodia do indeed lack AMPAR-mediated transmission, but they exhibit NMDAR-mediated synaptic transmission. We further showed that functionally silent synapses on filopodia can be unsilenced via Hebbian plasticity, recruiting new active connections into a neuron’s input matrix. In the second project, presented in Chapter 3, we developed Patch2MAP to perform super-resolution imaging of proteins localized in the 3D morphology of any cell type in human tissue (or in any other species) without the need for exogenous protein expression. 
Our method, which combines patch-clamp electrophysiology with epitope-preserving magnified analysis of proteome (eMAP), further allows for correlation of physiological properties with subcellular protein expression. We applied Patch2MAP to individual spiny synapses in human cortical pyramidal neurons and demonstrated that electrophysiological AMPA-to-NMDA receptor ratios correspond tightly to the respective protein expression levels. Taken together, the combination of protein imaging and physiological measurements expands our understanding of how the interplay of structure and protein content of spiny synapses shapes synaptic input, and opens new avenues for a comprehensive investigation of synaptic function in humans.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152571</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of Genome Organization</title>
<link>https://hdl.handle.net/1721.1/152570</link>
<description>Dynamics of Genome Organization
Grosse-Holz, Simon Benedikt
A human cell contains about 2 m of DNA, packed into a nucleus of diameter ~10 μm. The three-dimensional structure of this packing has been the subject of intense investigation essentially since the discovery of DNA itself, with an explosion of the field over the past 15 years, following the advent of chromosome conformation capture techniques. The fourth dimension---time---however, has remained elusive, and the dynamics underlying the organization of the genome are much less well understood. In this thesis I present my contributions to our understanding of these dynamics, working towards a full four-dimensional characterization of genome organization. First, by pulling on a genomic locus in live cells, we revealed the rather liquid-like material properties of chromatin and dispelled the idea that chromatin in interphase forms a gel. Second, by tracking genomic elements known to act as boundary elements for loop formation, we quantified the dynamics of chromatin loops in live cells. My contribution to both projects lay in the development and application of novel data analysis, modeling, and inference methods, implementations of which have been made available to the community for future use. Finally, we devised a simple scaling argument to reconcile the orthogonal observations of chromosome structure, dynamics, and mechanics. In sum, these contributions further our understanding of the dynamical behavior of chromatin in living cells and provide valuable tools and directions for future research.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152570</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elastic and inelastic dipolar scattering</title>
<link>https://hdl.handle.net/1721.1/152569</link>
<description>Elastic and inelastic dipolar scattering
Barral, Pierre
Quantum gases offer experimentalists and theorists a versatile and controllable platform to study quantum physics. From simple individual atoms, physicists can build complex systems that show new phenomena. Most of those emerging phenomena arise from interactions. The strength of the interactions is an essential parameter for a quantum gas experiment. If the interactions are too weak or lead to rapid losses, the experiment cannot succeed.&#13;
&#13;
The dipolar interaction has many features that make it attractive in the physicist's toolbox. Molecules usually feature strong dipolar interactions, but we use dysprosium, which is the most magnetic atom and is easier to manipulate than molecules. However, the magnetic interaction between dysprosium atoms must still be made stronger to be useful. Moreover, dipolar interactions can lead to detrimental losses called dipolar relaxation. We developed and demonstrated two new experiments to address both issues.&#13;
&#13;
The first utilizes the repulsive character of the side-by-side dipolar interaction to shield the atoms and prevent dipolar relaxation. The second demonstrates a new technique to place the atoms, in a controlled way, closer together than ever before in a cold-atom experiment. We create a novel bilayer system with dysprosium and observe a much stronger dipolar interaction. This thesis also gives detailed derivations for understanding dipolar scattering in 3D and 2D geometries.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152569</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-equilibrium physics: from spin glasses to machine and neural learning</title>
<link>https://hdl.handle.net/1721.1/152568</link>
<description>Non-equilibrium physics: from spin glasses to machine and neural learning
Zhong, Weishun
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales. These complex behaviors can be utilized for various information processing tasks such as error correction, learning, and optimization. Despite the empirical success of utilizing these systems for intelligent tasks, the underlying principles that govern their emergent intelligent behaviors remain largely unknown. In this thesis, we aim to characterize such emergent intelligence in disordered systems through statistical physics. We chart a roadmap for our efforts in this thesis based on two axes: learning mechanisms (long-term memory vs. working memory) and learning dynamics (artificial vs. natural). We begin our exploration from the long-term memory and artificial dynamics continent of this atlas, where we examine the structure-function relationships in feedforward neural networks, the prototypical example of neural learning. Using replica theory, information theory, and optimal transport, we study the computational consequences of imposing connectivity constraints on the network, such as distribution constraints, sign constraints, and disentangling constraints. We evaluate the performances based on metrics such as capacity, generalization, and generative ability. Next, we explore the working memory and artificial dynamics corner of the atlas and investigate the non-equilibrium driven dynamics of recurrent neural networks under external inputs. Then, we move to the working memory and natural dynamics island and study the ability of driven spin-glasses to perform discriminative tasks such as novelty detection and classification. Finally, we conclude our exploration at the long-term memory and natural dynamics kingdom and investigate the generative modeling ability in many-body localized systems. Throughout our journey, we uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems. 
We hope that our investigation into the emergent intelligence of seemingly disparate learning systems can expand our current understanding of intelligence beyond neural systems and uncover a wider range of computational substrates suitable for AI applications.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152568</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Optics and Mechanics in Gravitational-Wave Detectors</title>
<link>https://hdl.handle.net/1721.1/152567</link>
<description>Quantum Optics and Mechanics in Gravitational-Wave Detectors
Whittle, Chris
Gravitational-wave detectors like Advanced LIGO probe perhaps the most cataclysmic events since the Big Bang, involving objects tens to hundreds of times more massive than the Sun, yet they remain at the whim of minute quantum fluctuations. Pushing our reach further into the cosmos demands a mastery over these quantum effects. In recent years, we have entered the era of quantum-enhanced gravitational-wave detection, wherein the injection of squeezed states has been demonstrated as an effective technique to suppress high-frequency vacuum fluctuations. As gravitational-wave detectors continue to improve, operating at higher powers with more squeezing and reduced classical noises, radiation pressure noise is increasingly becoming a limiting factor at low frequencies. Frequency-dependent squeezed sources circumvent this by appropriately rotating the quadrature of the injected squeezing so as to confer sensitivity improvements across the entirety of the gravitational-wave detection band.&#13;
&#13;
In this thesis, we study the use of frequency-dependent squeezing in gravitational-wave detectors. We offer the first demonstration of a frequency-dependent squeezed source operating at frequencies useful for gravitational-wave detectors. To achieve this, we commissioned and operated a long, extremely high-finesse optical cavity with a high degree of stability, compatible with the stringent requirements of the next iteration of LIGO: Advanced LIGO+.&#13;
&#13;
At the same time, gravitational-wave detectors are just now reaching the sensitivities required to observe quantum effects on the kilogram-scale of the test masses. We use the superb displacement precision of Advanced LIGO to suppress the differential motion of the test masses to within 10% of the ground state.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152567</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Profiling and Mechanisms of Cerebrovascular Function in Health and Neurodegeneration</title>
<link>https://hdl.handle.net/1721.1/152566</link>
<description>Molecular Profiling and Mechanisms of Cerebrovascular Function in Health and Neurodegeneration
Garcia, Francisco J.
The unmet medical need for therapies that treat neurological disorders stems in part from our limited understanding of the underlying biological mechanisms and from inefficient strategies for delivering drugs to the brain. The cerebrovasculature is essential for proper brain function, as it tightly regulates blood flow and supplies the necessary nutrients. Furthermore, the presence of a blood–brain barrier (BBB) provides protection to vulnerable neurons but poses a challenge for drug delivery. In neurodegeneration, BBB breakdown and vascular impairments are hallmarks that precede the onset of disease-specific phenotypes. Efforts to understand the basic biology of the cells that comprise the cerebrovasculature, as well as the changes that occur in disease, have made significant progress with the advent of single-cell technologies. Here we characterize molecular profiles of the cell types that comprise the human cerebrovasculature using both ex vivo fresh tissue and post mortem in silico sorting of human brain tissue samples. Using single-nucleus RNA sequencing (snRNA-seq), we profile cerebrovascular nuclei across 11 subtypes, including endothelial cells, mural cells, and perivascular fibroblasts. We uncover human-specific expression patterns along the arteriovenous axis and determine previously uncharacterized cell type-specific markers. Next, we use these human-specific signatures to study changes in cerebrovascular cells from patients with Huntington’s disease (HD), which reveal activation of innate immune signaling in vascular and glial cell types and a concomitant reduction in the levels of proteins critical for maintenance of blood–brain barrier integrity. Lastly, using an adeno-associated virus (AAV) approach in combination with a promoter specific to brain endothelial cells (CLDN5), we develop an AAV vector for effective gene therapy delivery to the cerebrovasculature. 
We demonstrate that a single dose of gene therapy targeting the cerebrovasculature to lower huntingtin levels via a microRNA-mediated mechanism is sufficient to delay the progression of Huntington’s disease in vivo. Altogether, this work provides both a comprehensive molecular atlas for future studies of the cerebrovasculature in health and disease and a tool for the development of novel therapeutic strategies for neurological disorders.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152566</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exoplanetary Systems in Technicolor</title>
<link>https://hdl.handle.net/1721.1/152565</link>
<description>Exoplanetary Systems in Technicolor
Berardo, David
As the number of confirmed extra-solar planets surpasses 5000, our ability to characterize exoplanets has advanced in step, in an era of high-precision photometry afforded by instruments such as the James Webb Space Telescope. With increased precision, longer baselines of observation, and access to multi-instrument data sets, the study of exoplanets has reached a level of detail never before possible. In this thesis, I focus on the analysis of several multi-planet transiting exoplanet systems observed with the Kepler, K2, Spitzer, TESS, and Hubble instruments. Using data from these instruments, I present a detailed study of the bright HIP41378 five-planet system, extracting the orbital periods of two long-period planets, refining transit center measurements, and detecting Transit Timing Variations (TTVs) of one of the innermost planets in the system, which hints at the presence of an undetected sixth planet.&#13;
&#13;
Beyond individual systems, I also consider two observable properties of exoplanets at a population-level scale, in the context of treating planets as 3D objects. Measuring the shape of a planet provides us with insight towards their internal structure, formation, and atmosphere, and allows us to disentangle observational degeneracies between surface features and their orbital parameters. First, I expand the study of rotation-induced oblateness to a sample of almost 400 planets, quantifying the detectability of oblateness for these planets in the context of future high-precision observations with the James Webb Space Telescope and other observatories. Next, I examine the observational effects of tidal deformation by the host star of nearly 200 exoplanets, and how current and future uncertainties in the measurement of a planet’s density allow for such an effect to be detectable. Additionally, I report on the analysis of a large dataset of the TRAPPIST-1 system obtained using the STIS instrument on the Hubble Space Telescope. In keeping with the theme of planets in 3D, this analysis aims to search for extended neutral hydrogen exospheres around the TRAPPIST-1 planets, which would appear as transit-like signals in the Lyman-α emission of the host star.&#13;
&#13;
Finally, I turn to the stars themselves, around which exoplanets orbit, considering the effects of non-homogeneous stellar surfaces on exoplanet observations. I investigate how spatially and time varying surface features of a star contribute to baseline trends that must be removed when analyzing exoplanet transits, and assess the feasibility of empirically extracting precise spectra of star spots, which directly impact the precision of measuring planetary emission spectra. Through these studies, I aim to disentangle higher-order effects and biases which will become prevalent as the quality of observations improves, and to contribute to a more complete understanding of exoplanetary systems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152565</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ordering of Curving Interfaces</title>
<link>https://hdl.handle.net/1721.1/152564</link>
<description>Ordering of Curving Interfaces
Frank, John R.
Curved surfaces are fundamental parts of living systems. This thesis examines how materials can order on curving interfaces, resulting in shape changes and pattern formation. Many phenomena that are well-studied in flat space display new behavior when lifted onto a deformable surface: liquid crystals buckle membranes into peaked shapes, diffusing particles can sense curvature and localize patterns, and anisotropic growth can form branching structures over many scales.&#13;
&#13;
The systems I study include fluid membranes and growing solids. My framework connects the study of liquid crystals to cytoskeletons of living cells, and provides tools for understanding the machinery of vesicles as well as the remodeling of entire cells. Orientational order plays a central role on these surfaces. Topological defects in an orientation field are an area of intense historical and ongoing interest. This work was published in a paper with my coauthor and advisor Mehran Kardar.&#13;
&#13;
I show that curvature modifies diffusion and can change the spatial patterns generated by Turing instabilities. Turing patterns have been studied extensively on flat substrates. To lift this patterning mechanism onto the highly curved shapes of living systems, we apply tools from perturbation theory and differential geometry to analytically compute modifications to the Laplacian and its normal modes on curved surfaces. This extends the framework of differential geometry to understand chemical concentrations diffusing on biological interfaces. In this thesis, I expand upon a paper I published with my coauthors Jemal Guven, Mehran Kardar, and Henry Shackleton.&#13;
&#13;
I conclude with initial results from a new cellular automaton of anisotropic solid growth, which generates tree-shaped morphologies. This suggests that branching structures in botanical trees may result from a simple, universal growth process. Topological defects naturally appear at the branch points of these structures in simulations and in nature. By expanding biophysics from its historical focus on the molecular realm to include macroscopic living solids, we may eventually learn to save our global forests and engineer growing structures on Earth and beyond.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152564</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Goals, Play, and Cognitive Pragmatism: A study of flexible&#13;
human minds</title>
<link>https://hdl.handle.net/1721.1/152563</link>
<description>Goals, Play, and Cognitive Pragmatism: A study of flexible&#13;
human minds
Chu, Junyi
Few phenomena in childhood are as compelling or mystifying as play. While many animals play, human play is distinguished by the sheer diversity of goals that we pursue, even as adults. Yet the seeming inutility of play belies one of the hallmarks of intelligence: a remarkably flexible ability to reason and plan in novel situations. What kind of mind generates and pursues so many goals, and has so much fun in the process? In this dissertation, I suggest that answering this question requires us to go beyond current accounts of rational action and exploration. To map out the path forward I present three lines of research involving behavioral experiments with young children (ages four to six years) and adult comparisons. In study one I find that adults and children endorse speculative conjectures, even when implausible or lacking evidence, because we primarily evaluate novel proposals based on how well they answer our questions. In study two I demonstrate that children at play spontaneously take unnecessarily costly actions and pursue prima facie inefficient plans, even though they minimize costs when achieving similar goals in non-play contexts. Finally, study three demonstrates that adults and children value their goals from the moment they are chosen: participants stick with their goals even when less costly alternatives are available. On their own, each study contributes novel empirical findings and theoretical insights to their respective literatures in explanation, play, and planning. Taken together, however, they suggest a broader conclusion: that humans treat goals as valuable constraints for reasoning and decision-making. By paying attention to the goals we adopt and the problems we make for ourselves, we may explain much more of the richness and flexibility of the human mind.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152563</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the role of thalamic activity in visual cortical plasticity</title>
<link>https://hdl.handle.net/1721.1/152562</link>
<description>Investigating the role of thalamic activity in visual cortical plasticity
Wang, Joyce
Experience-dependent plasticity is an integral process by which the brain learns to respond to relevant stimuli in the world. The visual cortex is a well-studied locus of such learning, exemplified by the simple yet robust phenomenon of stimulus-selective response plasticity (SRP). In SRP, neurons in the cortex alter their activity profile to a visual stimulus as it becomes familiar with repeated exposure. It requires the mechanisms of long-term potentiation (LTP), the basis for solidifying memories in the brain, which makes SRP a compelling blueprint for the basic principles of how learning might work in general.&#13;
&#13;
Recently, deeper investigations into SRP have revealed that it requires circuit changes that extend beyond a simple model of LTP, including the requirement of inhibitory interneuron activity but not LTP onto the classic input to layer 4 of the visual cortex. Such differences may be clarified by analyzing the dorsal lateral geniculate nucleus (dLGN) of the thalamus, which is the primary source of visual information for the cortex. In the following sections, we test the hypothesis that SRP expression is driven by the mode of activity of dLGN neurons, which are in turn driven by feedback from layer 6 of visual cortex.&#13;
&#13;
In this thesis, I describe two complementary approaches for investigating the role of thalamic activity in visual cortical plasticity. In Chapter 1, I provide a broad overview of the dLGN and our current understanding of SRP. In Chapter 2, I describe how we used calcium fluorescence imaging of single dLGN neurons to track the changes that occur in this population over time. In Chapter 3, I summarize an experiment which used extracellular electrophysiology to compare the differences in dLGN spiking activity to familiar vs. novel stimuli. In short, no differences were found in the activity of dLGN cells in response to familiar vs. novel stimuli using either method, a novel finding that I seek to place into broader context in Chapter 4. I discuss how these findings update our potential models of SRP circuitry and describe the remaining questions to be answered by future research.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152562</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of language in the minds and brains of children</title>
<link>https://hdl.handle.net/1721.1/152561</link>
<description>Development of language in the minds and brains of children
Olson, Halie A.
One of the most remarkable aspects of early childhood is language development: a process that epitomizes the interplay between nature and nurture in the human brain. Language skills blossom during the second year of life, but they continue to be shaped by children’s experiences as they grow and encounter new words and linguistic structures through formal education and their own pursuits. In this thesis, I present three lines of research that aimed to characterize and explore various influences on language processing in children, using both neural and behavioral measures. The first set of studies (Chapters 2-3) involved the development and validation of a functional magnetic resonance imaging (fMRI) task designed to measure language-evoked activation in the brains of awake toddlers, an under-studied age group in fMRI research that represents a critical period of time in language development. Using this task in adults, we found no difference in canonical language regions’ responses to speech in dialogue compared to monologue. Ongoing work in toddlers suggests that we can measure language-evoked activation with our approach, which will enable us to characterize language network function for the first time in awake toddlers using fMRI, as well as better understand how social context may or may not impact language processing. The second study (Chapter 4) investigated how children’s personal interests impact language processing in the brain. In both neurotypical and autistic children with strong interests, activation in canonical language regions was significantly greater when they listened to personalized stories about their interests than when they listened to generic stories, pointing to the importance of content on the brain’s response to language in childhood. Finally, the last study (Chapters 5-6) implemented a remote, randomized controlled trial intervention to examine the impact of audiobooks paired with instructional support on children’s language skills. 
Using vocabulary measures tailored to the books children read, results suggest that struggling readers improved only when audiobooks were paired with instructional support. Together, these studies introduce novel approaches to measuring language in the minds and brains of children, and explore how factors such as interest and exposure may impact language processing during development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152561</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-level models of language comprehension in the mind and brain</title>
<link>https://hdl.handle.net/1721.1/152560</link>
<description>Multi-level models of language comprehension in the mind and brain
Gauthier, Jon
What are the mental and neural representations that drive language understanding and acquisition? This thesis presents a two-part suite of methods for addressing these questions, rooted in the idea that representational claims must assert both a content (what a mental representation is about) and a computational role (why it is there, and what concrete function it serves in bringing about language behavior).&#13;
&#13;
The first part of the thesis explores the computational role of syntactic and semantic representations in language acquisition and use. I instantiate a theory of syntactic bootstrapping, demonstrating through computational simulations how correspondences between the syntactic behaviors of words and their meanings can be exploited to efficiently construct a lexicon. This modeling work recapitulates classical dynamics of language learning exhibited by children acquiring their first language, and more broadly presents an expanded view of the computational role of these representational systems.&#13;
&#13;
The second part of the thesis addresses the neural side of these questions. I take a critical view on the present model-based cognitive neuroscience of language, arguing that some popular evaluation paradigms are limited in the types of claims about representational content they can safely support. I then present two case studies of a path forward, both exploiting measures drawn from modern large language models (LLMs). The first designs controlled interventions on LLMs’ internal representational contents, and tests the consequences of these interventions in a brain mapping evaluation. We apply this method in an fMRI brain decoding study, which reveals findings about the time-course of human syntactic representations. The second study integrates an LLM into a structured model of auditory word recognition, which is designed from the start for model interpretability. I apply this model to explain EEG data recorded as subjects listened to naturalistic English speech. The model enables us to discover distinct neural traces of how humans recognize and integrate the meanings of words in real time. I conclude by discussing the implications of these findings for the mental computations that drive online language comprehension.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152560</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Monte Carlo Sampling of Lattice Field&#13;
Theories</title>
<link>https://hdl.handle.net/1721.1/152559</link>
<description>Efficient Monte Carlo Sampling of Lattice Field&#13;
Theories
Yunus, Çağin
Monte Carlo Sampling of Quantum Field Theories suffers from many inefficiencies. These inefficiencies, among other things, make determination of the QCD phase diagram and calculation of the correlation functions numerically difficult. As a step towards eventually overcoming these issues, the problem of infinite variance due to exceptional configurations in fermionic Lattice Field Theories and the probability distributions of two-point functions in bosonic Lattice Field Theories are investigated. In the context of four-fermion interactions, a family of discrete Hubbard-Stratonovich sampling schemes are developed to avoid exceptional configurations. It is then shown that, while these sampling schemes work in principle, the estimates of uncertainties are unreliable. To overcome this limitation, a reweighting method is developed and shown to be efficient and reliable for the models investigated. As a study of the probability distributions of the correlators in a simple model, the probability distributions of the two-point functions are exactly calculated for interacting O(N) models in the disordered phase. It is shown that, by utilizing the probability distribution of the two-point function, improved estimators of the mean can be constructed. Taken together, these techniques show that the statistical properties of Monte Carlo sampling in simple LQFTs can be exploited to improve calculations of physical quantities and lay the groundwork for future applications to phenomenologically relevant QFTs such as QCD.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152559</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Flexible Mind of a Worm: the atlas of brain-wide representations of behavior in C. elegans</title>
<link>https://hdl.handle.net/1721.1/152558</link>
<description>The Flexible Mind of a Worm: the atlas of brain-wide representations of behavior in C. elegans
Kim, Jungsoo
The primary function of the brain is to generate behaviors and motor outputs. However, how neurons across the brain encode quantitative features of an animal’s behavior remains largely unknown. In this work, we built a new system to record and extract brain-wide activity in freely-behaving C. elegans. We built non-linear probabilistic models to explain how individual neurons encode distinct behavioral features. Utilizing neuronal identity information of the recorded neurons, we created the first-ever atlas describing how each defined neuron class in an animal’s brain encodes its behavior. Furthermore, we examined how these encodings change over varying behavioral states and identified key nodes in the connectome that could flexibly change encoding. Overall, this work provides a new view of how neurons across the brain of an animal encode its behavior.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152558</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resolving the Mysteries of Highly Irradiated Planets: Observations and Simulations</title>
<link>https://hdl.handle.net/1721.1/152557</link>
<description>Resolving the Mysteries of Highly Irradiated Planets: Observations and Simulations
Mehrle, Nicholas
Modern exoplanet science has an observational bias towards short-period planets. Among other things, these planets tend to be highly irradiated, either thermally, resulting in high equilibrium temperatures, or through high-energy FUV/X-ray radiation. The resulting planets exhibit a diverse array of physical characteristics unlike those seen on Earth. I present a collection of works broadly encompassed by the theme of understanding highly irradiated planets and a set of new techniques I develop to further the analysis of these strange worlds. First I discuss observations of Upsilon Andromedae b, a non-transiting planet whose atmosphere I have observed for the first time, and Venus, Earth’s twin sister that turned out so different. Each of these observations is enabled by a new method I introduce for that class of analyses. I then present my work on radiation-hydrodynamics simulations of atmospheres subject to intense high-energy radiation, for which I have developed a new simulation code with a unique purpose.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152557</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Novel Quantum Physics Using Ytterbium-171 in An Optical Cavity</title>
<link>https://hdl.handle.net/1721.1/152556</link>
<description>Exploring Novel Quantum Physics Using Ytterbium-171 in An Optical Cavity
Li, Zeyang
In this thesis, I present the development of a cavity quantum electrodynamics (CQED) system using single or multiple ensembles of ytterbium-171 atoms and its use for quantum metrology and quantum information science investigations.&#13;
&#13;
We develop and study a unified theoretical framework that describes CQED spin systems. We unify the two major roles of cavity light: the measurement of the atomic state and the catalyst for generating entanglement. The obtained model agrees well with the experimental results. We utilize this framework to implement and optimize a variety of quantum metrological applications.&#13;
&#13;
With optimized parameters guided by the theoretical model, we achieve a near-unitary spin squeezing in the ground state manifold of ytterbium atoms. We observe a metrological gain of 6.5(4)dB, while the inferred metrological gain without measurement limitation can reach 13dB. In a second experiment, we coherently transfer the entanglement from the ground state manifold to the optical clock transition for its 10⁵ times faster phase accumulation and higher relative accuracy compared with an rf-clock. We infer a 4.4dB improvement in performance, which is the first demonstration of entanglement-assisted optical clock operation.&#13;
&#13;
We also implement a time-reversal-based quantum metrology protocol. We demonstrate that this method benefits practical quantum metrology since it improves the signal-to-noise ratio by amplifying the signal rather than reducing the noise. Notably, it is insensitive to the measurement noise, the dominant limitation in previous experiments. With the time-reversal protocol, we observe a 12.8(9)dB metrological gain and a record-high 11.8(5)dB gain in phase sensitivity.&#13;
&#13;
We further extend this approach to quantum information science. We explore the out-of-time-ordered correlators (OTOCs), a benchmark of how fast quantum information “scrambles” into the whole quantum many-body system. We demonstrate that the time-reversal method can efficiently use the quantum scrambler’s exponentially fast dynamics as a way to improve the signal.&#13;
&#13;
Altogether, we have built and upgraded this lab’s apparatus to perform complicated quantum experiments. We can coherently and uniformly prepare and initialize the atomic states, and use the cavity to generate quantum entanglement or undo it within the atomic ensemble. We not only improve the attainable performance of precision measurement but also extend the investigation of quantum metrology to the field of quantum information science.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152556</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stoner magnetism and Berry phase in quantum materials</title>
<link>https://hdl.handle.net/1721.1/152555</link>
<description>Stoner magnetism and Berry phase in quantum materials
Dong, Zhiyu
Two-dimensional solids often exhibit carrier bands with Berry phase in k space, resulting in carriers behaving like spinning objects and generating orbital magnetization in position space. This thesis explores the impact of orbital magnetization arising in this way on the correlated electron phases. The effect of Berry phase is particularly interesting for magnetic phases with spin and valley polarization originating from Stoner instability, such as those seen in moiré graphene and other narrow-band systems. Despite recent advances in the field, these questions remain largely unexplored, and this thesis aims to address this gap in research. Interesting physics arises due to an interplay between two distinct effects: geometric phases in k space due to band Berry curvature and geometric phases in position space arising for spin-polarized carriers traversing a spin texture. This results in an interaction that we term the “chiral interaction,” a form of an emergent spin-orbital interaction that arises solely from electron exchange, in the absence of microscopic spin-orbit couplings. The chiral interaction, in contrast to microscopic spin-orbit coupling, respects the SU(2) spin rotation symmetry and exhibits other interesting characteristics. In this thesis, we establish the existence of this interaction through a general symmetry argument and microscopic calculations, and investigate its consequences. Specifically, we explore the emergence of chiral edges that support spin excitations propagating without back-scattering and the occurrence of skyrmions, topologically protected particle-like objects stabilized by the chiral interaction in the ground state of the system.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152555</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Trainability and Expressivity of Quantum Machine Learning Models</title>
<link>https://hdl.handle.net/1721.1/152554</link>
<description>The Trainability and Expressivity of Quantum Machine Learning Models
Anschuetz, Eric R.
Research over the last few decades has provided more and more evidence that precise control of many-body quantum systems yields a method of computation more powerful than what is achievable using conventional models of computation. This culminated in recent years with experimental demonstrations on quantum devices of computational tasks on the verge of classical intractability.&#13;
&#13;
These current generation quantum devices are, however, too noisy and too small to perform any meaningful error correction. These limitations motivate the study of hybrid quantum-classical algorithms as potential practical use-cases of these devices in the near term. This thesis is concerned with studying potential use-cases of these hybrid algorithms, determining limitations of algorithms constructed via this framework, and giving provable guarantees on the performance of such algorithms. Specifically:&#13;
&#13;
1. We consider quantum-classical hybrid algorithms which are framed as optimization problems with quantum-evaluated loss functions. We show that when the operations implemented on the quantum device are drawn from a certain problem-independent distribution, the loss landscapes (in expectation) exhibit a phase transition in trainability. We argue that the trainable phase is typically unachievable, and thus that such algorithms are not practical to implement.&#13;
&#13;
2. We sharpen these arguments and show similar behavior for local, shallow, variational quantum algorithms. We also study the impact of noise on such algorithms when they are in the trainable phase, and show that such noise is capable of making otherwise trainable algorithms untrainable in a statistical query setting.&#13;
&#13;
3. We give efficient classical algorithms to simulate certain variational quantum algorithms that circumvented the assumptions of our previous results and were known to be trainable, demonstrating that care must be taken to strike a balance between quantum implementability and classical intractability.&#13;
&#13;
4. We prove unconditionally that certain quantum neural networks are more expressive than a wide class of classical neural networks and demonstrate that quantum contextuality is the resource for this separation. We also give arguments (along with numerical evidence) that such models are efficiently trainable, thus showing that there exists a regime where hybrid quantum-classical algorithms outperform their purely classical counterparts.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152554</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Black Holes to the Big Bang: Astrophysics and Cosmology with Gravitational Waves and their Electromagnetic Counterparts</title>
<link>https://hdl.handle.net/1721.1/152553</link>
<description>From Black Holes to the Big Bang: Astrophysics and Cosmology with Gravitational Waves and their Electromagnetic Counterparts
Biscoveanu, Andrea Sylvia
The growing catalog of gravitational-wave signals from compact object mergers has allowed us to study the properties of black holes and neutron stars more precisely than ever before and has opened a new window through which to probe the earliest moments in our universe’s history. Population-level measurements of the masses and spins of compact objects can reveal how these systems form and evolve. Multimessenger observations of compact object mergers can shed light on the properties of the electromagnetic counterparts of these systems, such as short gamma-ray bursts and kilonovae. Finally, observations of the stochastic gravitational-wave background can constrain early-universe physics inaccessible with other means.&#13;
&#13;
In this thesis, I demonstrate how we can leverage such observations of gravitational waves and their electromagnetic counterparts to learn about astrophysics and cosmology. The first part focuses on methods for facilitating the detection of electromagnetic counterparts and the simultaneous analysis of gravitational-wave and electromagnetic data for mergers including a neutron star. I then transition to a detailed study of black hole spin, including characterizing the measurability of spin in individual systems with current gravitational-wave detectors and presenting novel population-level analyses. This work is complemented by the development of new methods for increasingly detailed gravitational-wave data analysis. Such analyses will be critical to the astrophysical interpretation of the growing catalog of compact-object binaries and will enable the future detection of the cosmological stochastic gravitational-wave background.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152553</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing In Situ Instrumentation to Monitor Anthropogenic Change</title>
<link>https://hdl.handle.net/1721.1/152512</link>
<description>Developing In Situ Instrumentation to Monitor Anthropogenic Change
Colson, Beckett Casper
To predict and mitigate anthropogenic impacts on the ocean, we must understand the underlying systems that govern the ocean's response to inputs (e.g., carbon dioxide, pollutants). Analytical models can be used to generate predictions and simulate intervention strategies, but they must be grounded in empirical observations. Unfortunately, there exists a technological gap: in situ instrumentation is often lacking or nonexistent for key parameters influenced by anthropogenic inputs. While discrete bottle samples can be collected and analyzed for these parameters, their limited spatiotemporal resolution constrains scientific inquiry. To help fill the technological gap, this dissertation presents the development of instrumentation for the ocean inorganic carbon system and microplastics. The first few chapters present the development process of CSPEC, a deep-sea laser spectrometer designed to measure the ocean carbon system through alternating measurements of the partial pressure of carbon dioxide (pCO₂) and dissolved inorganic carbon (DIC). CSPEC uses tunable diode laser absorption spectroscopy (TDLAS) to measure the CO₂ content of dissolved gas extracted via a membrane inlet. Chapter 2 derives membrane equilibration dynamics from first principles, thus enabling informed design decisions. The analytical results showed that cross-sensitivity to other dissolved gases can be introduced by the equilibration method, regardless of the specificity of the gas-side instrumentation. A new method, hybrid equilibration, leverages the membrane equilibration dynamics to improve time response without incurring cross-sensitivity. Chapter 3 presents POCO, a surface pCO₂ instrument that employs TDLAS and a depth-compatible membrane inlet. Through laboratory and field-testing, POCO demonstrated that hybrid equilibration overcame the gas flux limitation of deep-sea membrane inlets.
Chapter 4 presents CSPEC, which successfully mapped the carbon system near different hydrothermal features at 2000 m in Guaymas Basin, becoming one of the first DIC instruments field-tested at depth. Chapter 5 introduces impedance spectroscopy for quantifying microplastics directly in water. Microplastics were successfully counted, sized, and differentiated from biology in the laboratory: a step toward in situ quantification. The analytical tools and measurement systems presented in this dissertation represent a significant step towards increasing the spatiotemporal resolution of carbon system and microplastic measurements, thus enabling broader scientific inquiry in the future.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152512</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational and experimental methods for CRISPR-based saturation mutagenesis screens</title>
<link>https://hdl.handle.net/1721.1/152504</link>
<description>Computational and experimental methods for CRISPR-based saturation mutagenesis screens
Hsu, Jonathan Yee-Ting
Genetic variation is a powerful framework for functional characterization of the human genome. The emergence of CRISPR technology has enabled the efficient and diverse installation of genetic variation in situ, leading to its widespread use in functional genomics. The application of high-throughput CRISPR saturation mutagenesis screens for the functional interrogation of the coding and non-coding genome holds great promise in accelerating our understanding of how static DNA sequences encode and influence dynamic processes in human development and disease.&#13;
&#13;
In this thesis, we focus on the development of computational and experimental methods for CRISPR-based saturation mutagenesis screens. First, we developed CRISPR screening uncharacterized region function (CRISPR-SURF), a deconvolution framework for the analysis of CRISPR saturation mutagenesis screens. Drawing inspiration from the field of signal processing, we propose the modeling of CRISPR perturbations across an underlying genomic regulatory signal by means of a convolution operation and apply CRISPR-SURF for the discovery of non-coding regulatory elements involved in gene regulation. Second, we developed PrimeDesign to facilitate the rapid design of prime editing (PE) guide RNAs and demonstrate its utility by using recommended designs to install pathogenic variants in human cells. Complementing PrimeDesign, we developed pegPool as a high-throughput pooled screening strategy for prime editing guide RNA (pegRNA) optimization. We demonstrate the generalizability of pegPool by assessing a total of &gt;18,000 pegRNA designs, with up to 210 designs in a single pool, to identify high efficiency pegRNA constructs targeting genomic sites. Finally, we developed multiplexing of site-specific alterations for in situ characterization (MOSAIC) as a rapid non-viral method for saturation mutagenesis screens at single-nucleotide and codon resolution. Using MOSAIC, we demonstrate in situ saturation mutagenesis of the BCR-ABL1 oncogene to identify drug resistant variants and IRF1 untranslated region (UTR) to map non-coding regulatory elements involved in transcriptional initiation.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152504</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Sustainable and Inclusive Cities: Analyzing the Impact of Planning Paradigms in the US</title>
<link>https://hdl.handle.net/1721.1/152503</link>
<description>Building Sustainable and Inclusive Cities: Analyzing the Impact of Planning Paradigms in the US
Salazar-Miranda, Arianna
The three papers in this dissertation study how different urban planning paradigms—normative ideas used by planners to shape the built environment—can support more sustainable development patterns. To investigate this topic, I analyze large-scale, high-resolution data using various analytical methods. The first paper examines the sustainability implications of the 15-minute city model, which emphasizes local living. Using large-scale GPS data from US cities, the study examines the relationship between trip length, access to nearby amenities, and segregation. The results suggest that less restrictive zoning rules could make it easier for people to access nearby amenities without traveling long distances. However, such policies also run the risk of increasing the social isolation of the poor. In the second paper, I investigate the consequences of developing suburban neighborhoods using the garden city model, a historical paradigm emphasizing urban form as a key driver of neighborhood well-functioning. I develop and validate a methodology to measure the key attributes of the garden city model at scale and over time by inferring it from neighborhood layouts. Combining neighborhood design measurements with data on individual mobility and emissions, I demonstrate that residents of neighborhoods designed using the garden city model are more sedentary, more socially isolated, and produce more greenhouse gas emissions due to longer commutes induced by the street network. Finally, the third paper tests the idea that neighborhood form persists over time in the context of the United States Housing Corporation, the first housing initiative funded by the federal government in 1918. Comparing neighborhoods that were planned but canceled with others that were planned and constructed, the study shows that street shape and block configuration persist via path dependence, while other urban design features like the composition of blocks do not. 
Overall, these papers highlight the critical link between physical form and sustainability in urban planning.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152503</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examining coral reef ecosystem dynamics using microorganisms and metabolites</title>
<link>https://hdl.handle.net/1721.1/152493</link>
<description>Examining coral reef ecosystem dynamics using microorganisms and metabolites
Becker, Cynthia Carroll
Microorganisms and metabolites are foundational to the success and productivity of biodiverse and economically important coral reef ecosystems and are also tightly connected. Metabolites are small organic compounds produced by reef organisms and are the chemical currencies exchanged by unicellular microorganisms (bacteria and archaea) within the seawater. Although central to reef biogeochemical cycling, we still lack fundamental information on the dynamics of these components of reefs. In this dissertation, I analyzed microorganisms in Caribbean coral reef habitats over temporal, spatial and reef health gradients as well as metabolites in a spatial reef study. In Chapter 2, I applied a rapid sequencing methodology to corals afflicted with the lethal stony coral tissue loss disease and identified specific microorganisms which were biological indicators of the disease. In Chapter 3, I investigated the dynamics of microorganisms over short temporal tidal and diurnal cycles, as well as spatially across US Virgin Island (USVI) coastal habitats. In these habitats, I found tidal cycles were driving changes in microbial communities within mangroves, but diurnal patterns were more important in reef habitats. In Chapter 4, I examined reefs over a longer temporal scale by contributing to the building of a 7-year time-series of USVI reef ecology and found that reef water microorganisms were predictive of hurricane and stony coral tissue loss disease impacts. Finally, in Chapter 5, I combined analyses of untargeted and targeted metabolomics, microbial taxa, and functional genes from metagenomics across 300 km of reefs in Florida, in addition to microorganisms in healthy and diseased corals. With this unprecedented combination of ‘omics datasets, I found that biogeographic zones, environmental features, and underlying habitat characteristics were related to microbial and metabolite features in the reef ecosystem. 
Further, I identified microorganisms and metabolites which were characteristic of specific reef biogeographic zones. Collectively, my work advances our understanding of the dynamics of microorganisms and metabolites in biodiverse coral reef habitats across natural temporal and spatial gradients and in the face of unprecedented stress and disturbance.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152493</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Biological Pathways</title>
<link>https://hdl.handle.net/1721.1/152489</link>
<description>New Biological Pathways
Borrajo, Jacob
Through synthetic biology, our species is now learning to give biology instructions by using microscopic – often designed – biological components, allowing biology to conduct highly specialized forms of work previously unseen in nature. In this thesis, I propose and develop three new biological pathways which can perform three different categories of work: (i) information retrieval, (ii) information storage, and (iii) information editing.&#13;
&#13;
For information retrieval, I propose repurposing viral capsid proteins to perform non-destructive transcriptomic measurements. We demonstrate that this approach allows for live-cell transcriptomics, and we longitudinally measure the transcriptional responses of the same living human cells after stimulation with TNFa.&#13;
&#13;
For information storage, I propose and develop trans-splicing as a strategy to barcode the introduction of genetic elements en masse, and show that cell transcriptomes can be reliably barcoded for facile information storage.&#13;
&#13;
For information editing, I propose and develop a new RNA splicing machine – the splice editor – which can edit long stretches of mRNA sequences. I demonstrate that this CRISPR/Cas13 guided editor can perform exon replacement, which may one day lead to a new class of therapeutics.&#13;
&#13;
Altogether, this thesis showcases three new biological pathways, and demonstrates that living biological systems can be instructed to perform various kinds of complex, biological work.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152489</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integral Quadratic Constraints and Safety Certificates for Uncertainty Characterization and Control Safety-Aware Filtering of Proximity Operations Between Satellites</title>
<link>https://hdl.handle.net/1721.1/152485</link>
<description>Integral Quadratic Constraints and Safety Certificates for Uncertainty Characterization and Control Safety-Aware Filtering of Proximity Operations Between Satellites
Garcia Burgos, Axel
Techniques in robust optimization and formal verification are used (1) to examine the stability and robust performance of a satellite controller that considers six-dimensional, uncertain state, and often unmodeled dynamics during rendezvous and proximity operations, and (2) to explore the synthesis of control Lyapunov/barrier functions (CLFs/CBFs) using neural networks and stochastic gradient descent to provide safety-aware filtering for fuel-optimal control policies. A linear quadratic regulator controller for a servicer satellite (Servicer) is analyzed via the dissipativity inequality principle and quadratic constraints. This method allows the capture of unmodeled dynamics to reduce system uncertainty of proximity operations among the Servicer, client satellite (Client), and unsafe regions (e.g., obstacles). The same controller is implemented with a finite time horizon (i.e., as a model predictive controller) to filter out unsafe control output during an autonomous inspection of a Client. This framework mitigates collision risk based on worst-case bounds from integral quadratic constraints (IQCs), miss distance, Mahalanobis distance, and Probability of Collision (Pc) metrics. Innovative deterministic reachability methods based on integral quadratic constraints and neural Lyapunov functions are compared and connected. The novel contributions of this work focus on formulating mathematical safety guarantees, modeling controller output, and reducing uncertainty in system performance when designing fuel-optimal and safe maneuvers of the Servicer around the Client while avoiding unsafe regions in LEO.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152485</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embracing the Uncertain Future: Three Papers of Uncertainty in Analysis, Planning, and Policy-Making</title>
<link>https://hdl.handle.net/1721.1/152477</link>
<description>Embracing the Uncertain Future: Three Papers of Uncertainty in Analysis, Planning, and Policy-Making
Engelberg, Daniel L.
Uncertainty is inseparable from long-range planning. In striving for just, equitable, and sustainable futures, we are always confronted with the limits of our own understanding. Looking far into the future, or even trying to properly assess the ground beneath our feet, often reveals much more about what we are unsure of than what we can predict with confidence. However, uncertainty is still largely treated as subordinate in urban planning, either not grappled with at all or applied as an addendum to mean trend forecasting. This dissertation seeks to invert the traditional approach by placing uncertainty at the center of planning within three stages of the planning process: simulation analysis, planning, and policy making. The objective of this three-paper dissertation is then to examine how uncertainty interacts with three stages in the urban planning and policy making process, and to suggest how centering uncertainty can improve planning. The first paper considers the analysis of policy options under uncertainty in land use and transportation simulation. This paper demonstrates the applicability of scenario discovery, a research design for decision making under deep uncertainty, in land use and transportation models. I find that scenario discovery performs marginally better in identifying robust strategies relative to more circumscribed approaches, but significantly enhances insights regarding adaptive policy making. The second, lead-authored paper asks what impact uncertainty has on the climate policy disposition of municipal elected officials. We sent a survey to elected officials in cities with populations greater than 100,000, querying their degree of climate policy uncertainty as well as their propensity to support climate policies. Using a structural equation model with a novel latent variable measure of climate uncertainty, we demonstrate that uncertainty diminishes propensity for climate policy.
My final paper delves into the use of scenario planning to support racial equity planning. From the literature on equity in scenario planning and my own experience, I develop a novel framework for using scenario planning to promote racial equity. This framework builds on the five types of racial equity, a six-stage hybrid scenario process, and the three outcomes of public sector scenario planning: organizational learning, organizational strategy, and community learning. Using this framework, I assess the inclusion of equity in the Delaware Valley Regional Planning Commission's Dispatches from Alternative Futures scenario plan. This plan successfully raises racial equity as a concern for the future of the Philadelphia region. However, the stakeholder group was not sufficiently diverse for full deliberative justice, and the scenario planners do not utilize tools that can assess the distributional outcomes of scenarios and policies. Neither epistemic nor restorative justice was a significant part of the scenario plan, leaving open the possibility for more radically co-designed scenarios for racial equity in the future.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152477</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Antibody Titer Assessment of the Biomanufacturing Process using Nanofluidic Binding Assays</title>
<link>https://hdl.handle.net/1721.1/152457</link>
<description>Continuous Antibody Titer Assessment of the Biomanufacturing Process using Nanofluidic Binding Assays
Rohskopf, Zhumei
Online monitoring of monoclonal antibody (mAb) product titers throughout biologics process development and production enables the implementation of quality-by-design biomanufacturing, rapid bioprocess decision-making, and process optimization. Intermittent sampling and analysis are presently utilized to maintain long-term perfusion culture. However, this increases the risk of introducing external contaminants into the cell culture and the analyzed analytes. Sensors used in in-line probe-based approaches must withstand harsh clean-in-place and sterilization-in-place procedures. Analytical instruments that provide real-time monitoring and continuous feed-flow processing, and are therefore suitable for direct integration into a perfusion bioreactor, are ideal for improving overall process robustness. Online implementation of conventional analytical methods, including high-performance liquid chromatography (HPLC) and turbidimetric analysis, typically necessitates interfacing with an automated sampling system capable of online sampling and fractionation, which adds to the cost, risk of failure, and mechanical complexity of the system.&#13;
&#13;
The objective of this study is to create a nanofluidic device for online monitoring of monoclonal antibody (mAb) titers based on ligand-binding. This system has a small footprint, straightforward operation procedures, and minimal complex data analytics requirements. During manufacturing and process development, this nanofluidic platform delivers direct titer measurement in the bioreactor as well as functional information on the binding activity, enabling immediate modification of process parameters to improve biomanufacturing yield and preserve the desired product quality.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152457</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Study of DNA Delivery from Self-Assembled Nanolayered Films</title>
<link>https://hdl.handle.net/1721.1/152453</link>
<description>Mechanistic Study of DNA Delivery from Self-Assembled Nanolayered Films
Wang, Sheryl
The delivery of nucleic acids to modulate gene expression levels can enable highly specific and durable therapeutic effects. Formulation to protect nucleic acid cargoes and direct them to target tissues is critical to the success of these promising therapies. Delivery of nucleic acids, such as plasmid DNA, must overcome both systemic and local cellular barriers. Surface-mediated gene delivery bypasses systemic trafficking obstacles by localizing DNA release within the cellular microenvironment. Local delivery has several advantages including increased efficacy at the target site and reduced off-target effects. Layer-by-layer (LbL) self-assembly is a promising method to incorporate DNA in nanolayered thin film surface coatings for controlled, localized release. Although the potential for DNA delivery via layer-by-layer films has been reported, details of the mechanism and factors influencing the success of delivery have not been explored.&#13;
&#13;
In this thesis, we designed LbL-assembled DNA multilayer films for localized gene delivery. We present a mechanistic investigation of factors impacting in vitro DNA transfection efficacy. Using pre-formed DNA-polymer complexes as a model system, we identified relative polymer to DNA content in polyplexes as a key driver of effective transfection. We then explored the impact of LbL assembly parameters on DNA multilayer film composition and release kinetics, and how these subsequently influence transfection efficacy in vitro. Finally, we characterized the film releasate to elucidate how cells interact with DNA multilayer films. Rapid release of DNA complexed with polymer was found to enable the greatest transfection efficiency. The findings described here will contribute to rational design of more effective LbL films for DNA delivery.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152453</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization and Engineering of Transposons for Genome Editing</title>
<link>https://hdl.handle.net/1721.1/152452</link>
<description>Characterization and Engineering of Transposons for Genome Editing
Ladha, Alim
Since the human genome was sequenced in 2000, we have rapidly elucidated previously unappreciated associations between our genetic code and human disease. Driven by the rapid development of new molecular technologies, there has been increased desire to treat these genetically rooted diseases by directly manipulating DNA. Using genome editing tools, primarily based on bacterial CRISPR systems, scientists can treat the genome like a text file, adding, deleting, or changing small stretches of DNA. However, the ability to cut-and-paste large fragments of DNA into a defined location in the genome has remained elusive. In the first part of this thesis, we characterize and engineer multiple genome editing systems to address the problem of DNA insertion and, more broadly, problems in human health.&#13;
&#13;
First, we functionally characterize a system of unknown function, a type V-K CRISPR-associated transposase from the cyanobacteria Scytonema hofmanni (ShCAST). We demonstrate that ShCAST performs self-sufficient targeted DNA insertion and can be reconstituted in bacteria for genome editing with efficiency of up to 80% without selection. We then go on to characterize transposon homing, a key mechanism in the natural lifecycle of CAST systems, which enables these mobile elements to navigate to the sites in which they are found. We show that type V-K systems use a non-canonical CRISPR RNA (crRNA) to perform this task. Surprisingly, the distinct type I-B CAST uses a dedicated sequence-specific DNA binding protein for homing to an attachment site.&#13;
&#13;
Next, we engineer a non-long terminal repeat (LTR) retrotransposon, R2 from the silkworm Bombyx mori (R2bm), as a site-specific DNA insertion tool in human cells. We show that R2bm can be used to perform DNA insertion into human 28S ribosomal DNA (rDNA) repeats and validate this strategy for delivering a functional transgene. We also demonstrate that R2bm’s target site can be changed through association with a reprogrammable DNA-binding protein like SpCas9, enabling functional correction of a truncated protein in the human genome.&#13;
&#13;
In early 2020, while we were harnessing R2 for DNA insertion, the COVID-19 pandemic broke out and general laboratory activities were shut down. In the second part of this thesis, we develop a new chemistry for detection of SARS-CoV-2 called STOPCovid to address the urgent need for diagnostics in the COVID-19 pandemic. STOPCovid aims to simplify liquid handling for sensitive nucleic acid detection in low-complexity and point-of-care settings. We demonstrate that STOPCovid has a sensitivity of 93.1% and specificity of 98.5% using over 400 patient samples.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152452</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Artificial Photosynthesis: Yeast-Inorganic Hybrid System</title>
<link>https://hdl.handle.net/1721.1/152451</link>
<description>Towards Artificial Photosynthesis: Yeast-Inorganic Hybrid System
Pandit, Shalmalee Dhananjay
Artificially photosynthetic systems aim to store solar energy and chemically reduce carbon dioxide. These systems have been developed to use light to drive processes for carbon fixation into biomass and/or liquid fuels. We have developed a hybrid biological system that combines genetically controlled generation of products with the photoactivity of a semiconductor system. In this work, we show the development of an inorganic-biological hybrid system that utilizes otherwise toxic cadmium waste to favor reduced product formation in yeast. We build on a system we recently developed to collect and remediate heavy metals from solution through genetic engineering of the yeast Saccharomyces cerevisiae. Yeast are genetically engineered to produce hydrogen sulfide that reacts with the cadmium ion to nucleate cadmium sulfide nanoparticles on the yeast cell wall. We show these nanoparticles can act as a light harvesting semiconductor complex that shares properties with the light-dependent reactions of photosynthesis. That is, upon light activation of the cadmium sulfide nanoparticles, favorable electron transfers fill resulting electron holes that alter the redox state of the yeast, creating metabolic conditions that promote reduced product formation. This could be a step towards developing an artificially photosynthetic organism, as the resultant conditions drive carboxylation of the five-carbon tricarboxylic acid (TCA) cycle metabolite alpha-ketoglutarate to form the six-carbon metabolite citrate. This incorporation of CO2 into yeast biomass allows for more carbon efficient ethanol production and illustrates how an engineered light-driven system can be used to increase the formation of a common biofuel. More generally, this study shows how nanoparticle semiconductor systems can be combined with metabolic engineering to enable formation of specific products.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152451</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The regulation of bacterial virulence by mucin glycans</title>
<link>https://hdl.handle.net/1721.1/152450</link>
<description>The regulation of bacterial virulence by mucin glycans
Werlang, Caroline Andrea
Mucus is a major ecological niche for the microbiota, forming a habitat in which beneficial microbes can thrive while preventing outgrowth and infection by potentially pathogenic microbes. The virulence-disrupting activity of mucus is enabled by mucin, a glycoprotein that forms the mucus gel. Upon sensing the O-glycans displayed by mucin, opportunistic pathogens can alter their gene expression and are less likely to engage in virulence behaviors like biofilm formation and toxin production. Therefore, mucin gels can be considered a sophisticated bioactive material with the ability to manipulate microbial behavior and phenotypes, select for beneficial organisms, and protect host epithelial surfaces. These properties motivate the creation of mucin-inspired materials that can reproduce the protective effects of mucin in patients with mucosal disorders or infections.&#13;
&#13;
This thesis reports the production of bioactive mucin-inspired materials that can disarm the virulence of pathogenic bacteria. As a model system, we looked at the oral cavity, where salivary mucus prevents the opportunistic pathogen Streptococcus mutans from causing cavities. First, we identified mucin glycans as the structural components of mucus that were essential for preventing biofilm formation, quorum sensing, toxin production, and genetic competence. Next, we leveraged this knowledge to create glycan-bearing materials that selectively prevent Streptococcus mutans biofilm formation. Finally, we reproduced this workflow in a second niche, where we identified glycans that can prevent biofilm formation and epithelial cell killing by the vaginal pathogen Gardnerella vaginalis. Together, these results present a roadmap for the development of mucin-inspired antivirulence materials for various niches. With the rise of clinical antimicrobial resistance, these materials, which treat infections by hijacking nutrient sensing pathways rather than directly killing bacteria, present a novel strategy for treating infections that can target pathogens while leaving the commensal microbiome intact.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152450</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Locating a Black Planning Tradition and Spatializing Black Nationalism</title>
<link>https://hdl.handle.net/1721.1/152448</link>
<description>Locating a Black Planning Tradition and Spatializing Black Nationalism
Williams, Darien Alexander
This dissertation explores the Black planning tradition and how Muslims, particularly those in Black nationalist organizations, utilize newspapers and land to critique urban planning practice and offer alternative models of planned organization and development. The first essay in this three-essay series explores the use of political art in Muhammad Speaks in remapping Black life for Muslims and non-Muslims during the Great Migration. The second essay draws on economic theological doctrine and newspaper advertisements to map the economic footprint of the Nation of Islam. The third essay interrogates the history of a single parcel of land to understand the diversity of actors involved in simultaneously shaping the local built environment and global religious nationalist movements toward self-determination. This work expands scholarly understanding of alternative models for collective decision-making and resistance tactics and provides histories of Blackness, Muslimness, and planning practice as they intersect.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152448</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New roles for intermediaries: the case of community-owned solar energy development</title>
<link>https://hdl.handle.net/1721.1/152446</link>
<description>New roles for intermediaries: the case of community-owned solar energy development
Chun, Jungwoo
This dissertation explores the ways that intermediary organizations of various kinds have helped to promote community-owned solar energy development in the United States. These organizations play a much wider range of facilitative roles than most other mediating organizations have traditionally played in the public sector over the past several decades in America. Until recently, energy development in the United States has been dominated by private investors and government regulators, particularly those focused on the construction and operation of investor-owned fossil fuel facilities. Renewable energy is now economically competitive: in 2020, about 20% of all electricity produced in the United States, including small-scale solar, was generated from renewable sources, up by 9% from 2019 (EIA 2022). As a result, new investors and new regulators have inserted themselves into the energy development process. Given its ease of installation, solar energy is an attractive option, and many communities are seeking to own and operate new facilities; but, despite its increasing financial competitiveness and recent technology improvements, it is often unclear to many communities, particularly those that are under-resourced, what steps they must take to own and operate solar facilities of their own. Based on interviews with 28 intermediary organizations and surveys of more than 300 members of community solar cooperatives in 15 states, I have been able to determine that intermediary organizations involved in promoting community-owned solar energy development have moved beyond the purely “neutral” facilitating roles that other public dispute mediators have traditionally played.
Instead, they have been successful by focusing on (1) enhancing community understanding of the solar energy status quo (i.e., production and distribution options); (2) informing project design and financing efforts; (3) providing procedural guidance; (4) helping to promote changes in local, state, and federal policies and programs; and (5) supporting local organizational and political capacity-building. Intermediaries in the community solar context include community-owned cooperatives and energy justice organizations, along with a variety of governmental support offices. From the data I have collected, I think it is fair to say that community solar-focused intermediaries are working primarily to empower under-resourced and under-represented communities by helping them move toward community ownership. They empower communities by helping them apply technical information, mobilize networks and partnerships, and build local movements to promote distributed, decentralized, and democratized energy systems.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152446</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The crystal chemistry of the homologous series Pb₃₊₂nSb₈S₁₅₊₂n (the plagionite group).</title>
<link>https://hdl.handle.net/1721.1/152396</link>
<description>The crystal chemistry of the homologous series Pb₃₊₂nSb₈S₁₅₊₂n (the plagionite group).
Kohatsu, Judith Jenkins.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Vita.; Bibliography: leaves 131-132.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152396</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The agrarian roots of authoritarian modernization in Brazil, 1880-1930</title>
<link>https://hdl.handle.net/1721.1/152395</link>
<description>The agrarian roots of authoritarian modernization in Brazil, 1880-1930
Reis, Elisa Maria da Concei.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1980; Bibliography: leaves 285-315.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152395</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An empirical test of the perfect market model hypothesis: an analysis of the reaction of stock returns to quarterly earnings data.</title>
<link>https://hdl.handle.net/1721.1/152203</link>
<description>An empirical test of the perfect market model hypothesis: an analysis of the reaction of stock returns to quarterly earnings data.
Kozel, Peter Patrick.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1973; Number 229 used twice in paging. Vita.; Bibliography: leaves 202-206.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152203</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of normal mode analysis to ocean acoustic tomography</title>
<link>https://hdl.handle.net/1721.1/152202</link>
<description>Applications of normal mode analysis to ocean acoustic tomography
Romm, Joseph J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1987; Bibliography: leaves 102-105.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152202</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimum mean square error without coding.</title>
<link>https://hdl.handle.net/1721.1/152199</link>
<description>Minimum mean square error without coding.
Cohn, David Leslie.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1970; Vita.; Bibliography: leaves 143-144.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152199</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Biology Tools to Study Bacterial Cell Surface Glycans</title>
<link>https://hdl.handle.net/1721.1/152136</link>
<description>Chemical Biology Tools to Study Bacterial Cell Surface Glycans
Marando, Victoria M.
Cell surface glycans are ubiquitous and serve as the first point of contact between a cell and the surrounding environment. Many of the carbohydrate-mediated interactions that occur at this interface regulate key signaling processes such as cell-cell recognition, communication, and adhesion. Bacterial glycans in particular play critical roles in maintaining cellular structure and are implicated in infections and pathogenesis. Understanding the molecular determinants of these important biological functions is critical for both fundamental and translational research. Despite the need to better understand these important biological structures, methods for probing glycan structure and function remain limited. Glycans are incompatible with common strategies for studying other biomacromolecules, which often exploit chemoselective reactions for covalent modification, capture, or imaging. Unlike the amino acid residues that constitute proteins, glycan building blocks are composed primarily of polyol isomers and lack the distinguishing reactivity required for selective labeling. Moreover, unlike protein synthesis, glycan biosynthesis is not templated, making perturbation through genetic manipulation often convoluted. Finally, the molecular complexity of glycan composition presents an added challenge: unlike the 20 canonical amino acids used in proteins, bacteria use more than 600 distinct monosaccharide building blocks. To address this open challenge, we developed novel chemical biology tools to study bacterial cell surface glycans. We have established a new, generalizable strategy for chemoselective glycan modification to enable the study of specific bacterial cell wall glycans. Our method relies on the direct incorporation of reactive glycan building block surrogates by cell surface glycosyltransferases, a technique termed “biosynthetic incorporation”.
We first validated this approach by labeling the arabinan (Chapter 2), which enabled several important downstream applications, including assay development and controlled cell surface perturbation (Chapter 3). We then demonstrated the generalizability of this approach by developing probes for mannose-containing glycans using this strategy (Chapter 4). In this work, we have also targeted modification of the cell wall assembly enzymes themselves (Chapter 5), in addition to the structures they produce. Ultimately, we envision that the chemical biology tools developed in this work will be useful for both answering fundamental biological questions and towards efforts to develop new antibiotics.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152136</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affinity Maturation of Peptides to Bind Protein-Protein Interfaces</title>
<link>https://hdl.handle.net/1721.1/152135</link>
<description>Affinity Maturation of Peptides to Bind Protein-Protein Interfaces
Ye, Xiyun
The use of peptides as inhibitors of protein-protein interactions (PPI) is an attractive strategy for developing therapeutics. Understanding the structure-activity relationships of peptide-based inhibitors is crucial for optimizing their activity. To guide this optimization, combinatorial peptide libraries with label-free affinity selection platforms have been developed to study protein-ligand interactions. Applying this platform, several multi-alanine-substituted analogs with picomolar binding affinity were discovered for the oncogenic protein MDM2. In addition, tolerance to alanine substitutions for peptide ligands of the 12ca5 antibody and the 14-3-3 regulatory protein was characterized.&#13;
&#13;
With the understanding of residue contributions to PPI, we have developed affinity maturation strategies to improve binding affinity from micromolar to low nanomolar. The kidney, parathyroid gland, and choroid plexus express the aging-related transmembrane protein α-Klotho, a co-receptor in the fibroblast growth factor 23 receptor complex. Reduced α-Klotho levels indicate chronic kidney disease and other age-related diseases. The long-standing difficulty in detecting or labeling α-Klotho has prevented us from understanding its biological functions. Here we describe branched multimeric peptides that recognize α-Klotho with high affinity and selectivity in the biological milieu. The branched peptides are prepared in a single-shot synthesis by parallel automated fast-flow synthesis in under one hour. The branched α-Klotho-binding peptides show improved affinity relative to the monomeric versions and can be used to label Klotho for live imaging in kidney cells.&#13;
&#13;
In addition, we describe the discovery of peptides that recognize α-Klotho with high affinity and selectivity by applying in-solution, size exclusion-based affinity selection-mass spectrometry (AS-MS). N-terminal small-molecule modifications on these peptides lead to substantial improvements in binding. After two rounds of AS-MS, the affinity-matured peptides have at least 2300-fold increased binding affinity to α-Klotho compared to the initially reported peptide Pep-10. The lead peptide binders were utilized to enrich Klotho from cell lysates and to label Klotho in live kidney cells. Our results further support the utility of in-solution, label-free AS-MS protocols to deliver peptide-based binders to target proteins of interest with high affinity and selectivity, resulting in functional probes for biological studies.&#13;
&#13;
Human papillomavirus (HPV) infections account for nearly all cervical cancer cases, the fourth most common cancer in women worldwide. High-risk variants, including HPV16, drive tumorigenesis in part by promoting the degradation of the tumor suppressor p53. This degradation is mediated by the HPV early protein 6 (E6), which recruits the E3 ubiquitin ligase E6AP and redirects its activity towards ubiquitinating p53. Targeting the protein interaction interfaces between HPV E6 and E6AP is a promising modality to mitigate HPV-mediated degradation of p53. Herein, we designed a covalent peptide inhibitor, termed ‘reactide’, that mimics the E6AP LXXLL peptide by targeting cysteine residue 58 in HPV16 E6 with quantitative conversion and selectivity. This reactide provides a starting point for chemical intervention for HPV-driven cancers.&#13;
&#13;
Covalent molecules have found widespread applications as activity-based probes and as irreversible inhibitory drugs. Currently, there is no rapid, label-free, and highly diverse affinity selection method to enrich reactive peptides from unnatural chemical space. Herein, we describe an AS-MS method to identify peptide inhibitors with reversible warheads. The method uses mixed disulfides to build reversible peptide-protein conjugates that can unambiguously enrich crosslinked molecules for MS/MS sequencing. Using this approach, we optimized a peptide that irreversibly inhibits the viral oncoprotein HPV16 E6 with nanomolar potency and specificity. This approach should enable rapid enrichment to identify new classes of highly selective covalent inhibitors for diverse molecular targets.&#13;
&#13;
Lastly, a systematic study on the tolerance of non-canonical amino acids with varying chirality, steric effects, and backbone structures has been presented. Overall, these studies highlight the potential of peptide-based inhibitors and affinity selection platforms in drug discovery.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152135</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Data-Driven Models to Understand Transition Metal Catalyst Energy Landscapes and Metal-Organic Framework Stability</title>
<link>https://hdl.handle.net/1721.1/152134</link>
<description>Using Data-Driven Models to Understand Transition Metal Catalyst Energy Landscapes and Metal-Organic Framework Stability
Nandy, Aditya
The selective partial oxidation of methane to methanol has been a "Holy Grail" challenge for well over half a century. Computational high-throughput virtual screening (HTVS) with first-principles density functional theory (DFT) can play a valuable role in unearthing design rules for scalable and viable synthetic analogues that preserve the selectivity and activity observed only in enzymes. Single-site catalysts represent the most promising synthetic analogues to these enzymes, often enabling atom economy, tunability, and selectivity not possible with bulk heterogeneous catalysts. Single-site catalysts with 3d transition metals can access a range of spin and oxidation states. Because the relative energetics of reactive intermediates on the methane-to-methanol energy landscape depend strongly on oxidation and spin state, linear free energy relationships (LFERs) that are invoked during HTVS to simplify catalyst screening cannot be readily used. As an alternative approach, the absence of universal scaling relations between intermediate energetics provides an opportunity for non-linear machine learning (ML) models that can be used over a larger space of candidate materials. Rather than relying on linear relationships between quantities, ML models can be trained to directly predict catalyst reactivity on the basis of chemical composition and applied to thousands of compounds. In this thesis, we first study methane oxidation on transition metal complexes. We quantify the limits of the LFERs that are typically used for catalyst screening. We demonstrate that LFERs systematically fail to predict individual reaction energies as well as relationships between reaction energies. We also show that there is no “one-size-fits-all” line that successfully predicts scaling behavior across distinct electron configurations. When these LFERs fail, we use ML models to harness deviations from scaling to design catalysts with increased reactivity as quantified by turnover frequencies.&#13;
&#13;
Metal-organic frameworks (MOFs) are heterogeneous materials that have strong analogies to single-site transition metal complexes. For over two decades, MOFs have been developed for various applications in gas separations, sensing, and catalysis. In practice, we must activate a MOF and remove solvent from its pores to render it porous and usable. Simultaneously, the MOF must also be stable under these thermal conditions. Although the tailored metal active sites and porous architectures of MOFs are promising for separations, sensing, and catalysis applications, a lack of understanding of how to improve their stability limits their use. MOFs vary in their coordination geometries, pore sizes, coordination chemistry, metal identity, and oxidation states, which challenges the development of structure-activity relationships that generalize over various families of MOFs. In the second part of this thesis, we harness the hybrid nature of MOFs to quantify their chemistry beyond simple pore size descriptors. We adapt molecular graph-based featurizations that were successful for screening single-site transition metal complexes and generalize them to MOFs. With our new featurization, we highlight that hypothetical MOF databases are severely biased and lack metal diversity. Instead of relying on these databases to construct data-driven models, we develop workflows to mine the extant experimental literature and develop data-driven models to predict MOF stability from experimental data. Our models chart a path forward for stable MOF design. We show how we can improve the stability of existing MOFs and develop a new in silico database of “ultrastable” MOF structures.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152134</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing Exciton Dynamics in Metal–Organic Frameworks</title>
<link>https://hdl.handle.net/1721.1/152133</link>
<description>Probing Exciton Dynamics in Metal–Organic Frameworks
Wan, Ruomeng
Bound electron–hole pairs, known as excitons, are key carriers of energy during a material’s interaction with light. Therefore, extending understanding of and control over exciton dynamics can potentially break the bottlenecks that currently limit solar energy harvesting technologies. However, design principles for accessing desired exciton transfer and conversion pathways remain unclear, as these dynamics are often entangled with a mixture of interactions both among excitons and with the surrounding environment. To address this challenge, efforts in this thesis center on developing strategies for tuning exciton dynamics using metal–organic frameworks (MOFs) as a scaffold. Independent synthetic handles within MOFs will be mapped onto the structural and electronic variables that impact exciton–exciton, exciton–vibrational, and exciton–photon interactions. Chapter 1 introduces the fundamental principles underlying these three types of interactions experienced by excitons in MOFs. The perception of MOFs will be expanded beyond the traditional image of their crystal structure to capture MOFs’ nuclear degrees of freedom, dielectric environment, and macroscopic morphology, factors that are further explored in the following chapters. Chapter 2 demonstrates that MOFs’ metric structure is suited for tuning exciton–exciton interactions that occur via resonant dipole coupling. Due to the angular sensitivity of these interactions, a paddlewheel pillared MOF structure effectively blocks parasitic singlet energy transfer from a pyrene donor to a porphyrin acceptor by anchoring their transition dipoles at 90° to each other. Chapter 3 brings MOFs’ nuclear degrees of freedom into consideration, and identifies spectral footprints left by excitons’ coupling to the intramolecular vibrations of the organic building units.
These vibronic signatures are applied as a spectral handle for extracting the trend of excitonic coupling strength in a series of perylene diimide (PDI)-based MOF-74 analogs with different metal ions (Mn2+, Ni2+, Zn2+). A correlation between the excitonic coupling strength and the metal ions’ polarizability points to the inorganic moieties’ potential role in mediating the dielectric environment experienced by the excitons. Chapter 4 zooms out of the light–MOF interactions occurring at the molecular level to highlight the influence of the macroscopic morphology of MOF crystallites. A dipole orientation-dependent waveguide effect detected in bichromophoric MOF microplates reveals that excitons’ transition dipoles in MOFs can be inherently aligned with the reflecting surfaces of their surrounding microcavity, which enables potential control over excitons’ interaction with the waveguide modes confined in the crystal. Broader implications and future possibilities unlocked by these observations will be discussed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152133</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Sensitivity and Resolution of Solid State Nuclear Magnetic Resonance and Dynamic Nuclear Polarization</title>
<link>https://hdl.handle.net/1721.1/152131</link>
<description>Advances in Sensitivity and Resolution of Solid State Nuclear Magnetic Resonance and Dynamic Nuclear Polarization
Golota, Natalie C.
Traditional structural biology methods such as X-ray crystallography and solution-state nuclear magnetic resonance (NMR) cannot provide atomic-level resolution of insoluble, noncrystalline proteins. Such proteins include amyloid fibrils implicated in numerous neurodegenerative diseases. While solid-state NMR using magic angle spinning (MAS) has yielded high-resolution structures of amyloid fibrils, including polymorphs of amyloid-β (Aβ), its sensitivity is inherently limited, requiring undesirable time and resource commitments. Dynamic nuclear polarization (DNP) is a powerful method of enhancing solid-state NMR sensitivity. However, this sensitivity gain comes at a significant loss of spectral resolution due to sample conformational heterogeneity at the cryogenic temperatures required for efficient electron-nuclear polarization transfers. This hinders acquisition of site-specific structural assignments in complex systems. Thus, it is critical to improve DNP resolution by advancing to higher magnetic fields and faster MAS frequencies. Unfortunately, at high fields, the efficiency of the most widely applied DNP polarization mechanisms decreases, as does the availability of high-power microwave sources. The work presented in this thesis seeks to address the instrumentation limitations and improve the DNP methods that presently limit the utility and power of MAS DNP at high fields.&#13;
&#13;
This thesis first describes the mechanism of Overhauser Effect (OE) sensitivity enhancements in insulating solids. We demonstrate the generation of strong positive OE with less than 200 mW of microwave power. We also employ selective deuteration to elucidate the role of individual hyperfine-coupled protons on the BDPA radical. This work provides a basis for the improved development of high field DNP radicals with fluctuating hyperfine interactions.&#13;
&#13;
The continued expansion of MAS DNP at high field and with fast MAS rotors requires improvements in the efficiency of coupling microwave irradiation into the sample. We provide a comprehensive discussion of the effect of the radio frequency (RF) coil on the transverse microwave coupling efficiency in 1.3 mm and 0.7 mm rotor systems. When the ratio of the coil pitch to the microwave wavelength is ~0.5, the coupling efficiency is significantly reduced, as is the case for a typical 1.3 mm or 0.7 mm RF coil at microwave frequencies between 460 and 593 GHz. To address this, we introduce axial microwave coupling schemes for 3.2-, 1.3-, and 0.7-mm rotors and demonstrate theoretical improvements in the electron Rabi field of &gt; 60% in 3.2 mm rotor systems and up to a factor of 8 improvement in 0.7 mm rotor systems. We further provide experimental results on MAS spinning stability in 3.2 mm rotors at 95 K using the modified axial bearing required for axial irradiation schemes.&#13;
&#13;
While the first two sections describe sensitivity enhancements under DNP, the later chapters focus on sensitivity enhancements leveraged via MAS frequencies &gt; 90 kHz and ¹H-detected MAS NMR. We demonstrate the first ¹H-detected MAS NMR study of the arctic mutant of Aβ₁₋₄₂, which is implicated in the pathogenesis of early-onset familial Alzheimer's disease. Despite resolution limitations in the sample as a result of limited MAS frequency and sample heterogeneity, we determine that the core fibril structure of E22G Aβ₁₋₄₂ is monomorphic, with a suggested conserved structure relative to that of the wild-type fibril.&#13;
&#13;
To further reduce homogeneous contributions to the solid-state linewidth, we introduce the fabrication and use of 0.7 mm diameter diamond rotors. Diamond's superior material strength, thermal conductivity, and microwave transparency make it the optimal MAS rotor material. First, we describe the mechanism of material ablation and characterize the effects of pulse energy, irradiation scheme, and pulse number on the achievable taper angle in high-aspect-ratio holes. We then apply a dual-sided axial machining strategy to fabricate 0.7 mm diamond rotors, and further demonstrate stable operation up to 124 kHz in addition to ¹H-detected MAS NMR results. Overall, the areas of focus in this thesis describe several resolution and sensitivity advancements that, when combined in the future, could provide sufficient sensitivity and resolution with which to study ex vivo amyloid plaque samples and other exogenous biomedically relevant samples.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152131</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protonic All-Solid-State Electrochemical Device as an Artificial Synapse for CMOS-Compatible Neuromorphic Computing</title>
<link>https://hdl.handle.net/1721.1/152130</link>
<description>Protonic All-Solid-State Electrochemical Device as an Artificial Synapse for CMOS-Compatible Neuromorphic Computing
Ryu, Seungchan
The development of artificial intelligence (AI) has changed the overall landscape of information technology.[1] However, conventional computing hardware is energetically unfavorable for handling multifarious AI tasks because the frequent data transfer between physically separated microprocessors and data storage units results in the ‘memory wall’ bottleneck, leading to high energy consumption.[2] In contrast, the human brain is energy efficient because it fuses computing and storage functionalities in the same place. Inspired by the human brain, neuromorphic computing using non-volatile memristors has been proposed as a novel computing hardware paradigm to enable energy-efficient computing.[3,4] This thesis develops a highly energy-efficient and CMOS-compatible protonic electrochemical artificial synapse employing nanoporous gadolinium (Gd)-doped ceria (Gd:CeO₂) as an inorganic solid proton electrolyte. Gd:CeO₂ was carefully chosen based on its high proton conductance, doping effect, and fabrication compatibility with current semiconductor technology.[5-8]&#13;
&#13;
We synthesize the Gd:CeO₂ thin-film electrolytes using pulsed laser deposition. By tuning the growth temperature and oxygen partial pressure, we obtain thin-film electrolytes with various structures, ranging from preferentially oriented dense films to nanoporous structures. We investigate proton conduction through these thin films using electrochemical impedance spectroscopy, and show that nanoporous structures can improve the proton conductance by more than 5 orders of magnitude. We attribute the enhanced proton conduction to water layer formation on the surfaces of the porous columnar structures. We proceed to fabricate electrochemical memristors using the Gd:CeO₂ electrolyte, PdHX as the gate/proton reservoir, and tungsten oxide (WO₃) as the switching channel, and characterize the performance of these devices. We examine the effect of electrolyte thickness on device operation. We show that devices built on the porous Gd:CeO₂ electrolyte can perform highly energy-efficient computing as protonic artificial synapses. The devices require ~1 fJ/(µm²×nS) per synaptic event, which is energetically efficient compared to conventional non-volatile memristors[9,10] and comparable to the human brain. Our analysis of the thermodynamic and kinetic contributions to the computing energy shows that the high proton conductance in the nanoporous structure reduces the ohmic drop in the electrolyte and is the main contributor to the small computation energy. This study shows that Gd:CeO₂ electrolyte-based memristors provide a promising route to CMOS-compatible, energy-efficient computing for AI applications.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152130</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Stereoselective Hydrofunctionalization Reactions Enabled by Dual Copper (I) Hydride and Palladium Catalysis</title>
<link>https://hdl.handle.net/1721.1/152128</link>
<description>Development of Stereoselective Hydrofunctionalization Reactions Enabled by Dual Copper (I) Hydride and Palladium Catalysis
Knippel, James Levi
Copper(I) hydride (CuH) complexes are versatile reagents for the reduction of carbonyl compounds, Michael acceptors, and olefins. Further, the hydrocupration of unsaturated species (olefins, dienes, alkynes, etc.) offers convenient access to catalytic amounts of stereodefined organometallic nucleophiles. These in situ generated organocopper species can be subsequently trapped by a variety of electrophiles for productive bond-forming reactions. This dissertation describes several new CuH-catalyzed hydrofunctionalization reactions, particularly those which rely on dual CuH- and Pd-catalysis. Chapter one offers a synopsis of the history of CuH catalysis and the seminal discoveries in dual CuH- and Pd-catalyzed reactions. Chapter two describes the CuH-catalyzed C2-allylation of benzimidazoles. Chapter three details the discovery of the dual CuH- and Pd-catalyzed hydroalkenylation of olefins. In chapter four, the development of a dual CuH- and Pd-catalyzed synthesis of Z-dienes is recounted. The final chapter covers the hydroalkenylation of vinyl metalloids, including vinyl silanes and vinyl boronate esters.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152128</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination Chemistry of Fe–S Clusters Supported by N-Heterocyclic Carbenes</title>
<link>https://hdl.handle.net/1721.1/152126</link>
<description>Coordination Chemistry of Fe–S Clusters Supported by N-Heterocyclic Carbenes
Brown, Alexandra C.
Iron–sulfur clusters are ubiquitous in Nature and carry out some of the most challenging multielectron redox reactions in the biosphere while utilizing three primary building blocks: Fe²⁺, Fe³⁺, and S²⁻. Studying synthetic Fe–S clusters aids in understanding the underlying properties of Fe–S clusters that enable this reactivity by providing Fe–S clusters with unusual coordination spheres that are amenable to multiple modes of structural and spectroscopic characterization. Synthetic Fe–S cluster chemistry is hindered by poor control over the coordination sphere of metalloclusters as compared to mononuclear complexes: the large size of metalloclusters means changing the ligand at one metal site often has little effect on the neighboring sites. Here, we introduce a strategy based on the remote steric profiles of ligands on adjacent metal sites (here, monodentate N-heterocyclic carbenes) to obtain [Fe₄S₄] clusters for which subsequent reactivity can be localized to a single Fe site. That is, the steric bulk of di-aryl NHC ligands enables isolation of [Fe₄S₄] clusters in which three of the Fe centers are coordinated to NHCs, such that further ligand-exchange reactivity is localized to the unique Fe site. Following the establishment of this site-differentiation strategy, we demonstrate its application to several outstanding problems in Fe–S cluster chemistry. First, we demonstrate that Fe–S clusters are able to bind and activate π-acidic ligands like CO, resolving the disconnect between the reactivity of Fe–S clusters and the typical reactivity of high-spin, mid-valent Fe centers by showing that Fe–S clusters can access low-valent states which have sufficient π-basicity to activate CO.
We expand this chemistry to electronically tunable aryl isocyanide ligands and demonstrate that Fe–S clusters can access multiple electron configurations with varying capacity for π-backbonding, highlighting the importance of Fe–Fe interactions within an Fe–S cluster for tuning the Fe valences and binding π-acidic ligands. We next synthesize an [Fe₄S₄] cluster supported by a bulkier NHC ligand and demonstrate that it can be reduced to reveal a three-coordinate Fe site with no apparent affinity for N₂. Driven by the lack of N₂ affinity in the [Fe₄S₄] cluster—in contrast to [MoFe₃S₄] clusters—we next explore binding of CO at [MoFe₃S₄] clusters to understand the effects of Mo incorporation on intracluster bonding; comparisons between these clusters and analogous [Fe₄S₄]–CO clusters reveal that the [MoFe₃S₄]–CO clusters exhibit attenuated changes to their structures and spectra over redox events. This suggests that Mo may increase the covalency within the cluster, potentially making access to these low-valent states more facile. Lastly, we introduce three studies aimed at modeling biological Fe–S cluster chemistry: revealing reversible homolytic Fe–C bond cleavage at Fe–S clusters to release alkyl radicals and modeling its effects on the selectivity of radical reactions, synthesizing alkene- and alkyne-bound Fe–S clusters, and abstracting Fe²⁺ from [Fe₄S₄] clusters to access the first synthetic [Fe₃S₄]⁺ clusters.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152126</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Flow Synthesis of Biomacromolecules</title>
<link>https://hdl.handle.net/1721.1/152125</link>
<description>Automated Flow Synthesis of Biomacromolecules
Callahan, Alex
Chemically synthesized biomacromolecules are powerful tools that enable modern biological study, but certain classes remain under-investigated. Existing techniques to access phosphorodiamidate morpholino oligomers (PMOs), mirror image proteins (D proteins), and peptide nucleic acids (PNAs) are functional but require individualized attention and time-intensive effort. As a result, unlike easily accessible biopolymer classes, widespread application of PMOs, D proteins, and PNAs in biotechnological workflows remains rare.&#13;
&#13;
In this work, we report the development of strategies for rapid, reliable access to these synthetically challenging classes of biomacromolecules. Existing synthetic pipelines face two primary complications: long timelines and unpredictable synthetic inefficiency. These issues arise from chemical properties inherent to the biopolymer backbone, for example, sequence-dependent aggregation of proteins and rapid side reactions of PNAs. This thesis reports the development of new production pipelines that address these issues with improvements to chemical synthesis techniques and purification and handling workflows.&#13;
&#13;
Generalized production pipelines were developed for PMOs, proteins, and PNAs. We demonstrate the rapid synthesis of PMOs with a custom-designed automated fast-flow instrument. With this instrument, chemical and process variables of PMO synthesis were optimized, resulting in a platform that enables same-day PMO synthesis. No longer limited by week-long lead times, we demonstrated the on-demand preparation of PMOs to treat underexplored disease areas, including rare Duchenne muscular dystrophy (DMD) subtypes and SARS-CoV-2. Continuing the investigation of biopolymers from automated flow peptide synthesis (AFPS) to enable therapeutic discovery, we developed an expedited mirror image phage display (MIPD) platform. MIPD is the premier technique for generating D-peptide ligands; however, the widespread application of this technique is hindered by the individualized attention required to prepare the mirror-image D-protein substrates. We demonstrate that automated flow protein synthesis addresses these challenges and delivers mirror image proteins for screening using a standardized, rapid format. These results show the value of rapid, general access to synthetic proteins, and we further refined their preparation with a novel purification scheme termed Folding Selection. We demonstrate that the minor chemical modifications present on synthetic side products result in substantially altered physicochemical properties and that simple bio-purification techniques can separate them from the native protein in hours. With this strategy, we demonstrate the production of nine functional synthetic proteins in under ten hours each. Finally, we developed a streamlined AFPS pipeline for producing peptide-PNA conjugates without tedious individualized synthetic optimization. With this platform, we rapidly prepared PPNAs and showed their utility as therapeutics against SARS-CoV-2.&#13;
&#13;
In summary, multidisciplinary improvements to biopolymer synthesis pipelines were explored. We envision that the routine production of biopolymers for study with AFPS pipelines stands to free up significant numbers of person-hours otherwise spent on troubleshooting challenging synthetic schemes.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152125</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular color centers</title>
<link>https://hdl.handle.net/1721.1/152124</link>
<description>Molecular color centers
Laorenza, Daniel William
The inherent atomistic precision of synthetic chemistry enables bottom-up structural control over quantum bits, or qubits, for quantum information science. While molecular platforms may be used to create bespoke qubits for applications from nanoscale sensing to secure communication, such systems generally lack ground state spins capable of non-thermal initialization and single spin readout, hindering their integration with existing quantum technologies. To overcome this shortcoming, we developed general design principles for transition metal-based molecular spins wherein the ground state spin can be initialized and read out with optical light, termed optical addressability, as a first step towards single spin addressability. With an inverse electronic structure design approach, we initially illustrated this optically addressable ground state in a series of tetravalent chromium, Cr(IV), ions coordinated to strong field aryl ligands. We then improved the coherent spin properties of these systems using symmetry control over the crystallographic host material, providing additional design criteria for developing qubits that may function in intrinsically noisy environments. Building upon these initial demonstrations, we illustrated that the electronic structure of these systems may be readily tuned for targeted applications through modification of the ligands in a series of tetraalkyl Cr(IV) molecules. The amalgamation of this work, along with recent developments controlling the photophysical properties of metal-based spin centers, provides a platform to create second-generation molecular color centers that may be designed to address specific challenges or sensing tasks for quantum information science.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152124</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexing Carbon Nanomaterials and Reactive Metal Species for Selective Chemical Sensing and Tunable Catalysis</title>
<link>https://hdl.handle.net/1721.1/152123</link>
<description>Complexing Carbon Nanomaterials and Reactive Metal Species for Selective Chemical Sensing and Tunable Catalysis
Luo, Shao-Xiong Lennon
This thesis highlights strategies for functionalizing carbon nanomaterials with reactive metal species for applications in chemical sensing and electrocatalysis. In Chapter 1, we begin with an introduction to chemiresistive sensing using functionalized carbon nanotubes (CNTs). This introduction summarizes the design, fabrication, characterization, and evaluation of carbon nanotube-based chemiresistive sensors. Potential strategies for optimizing sensitivity and selectivity are discussed, and typical applications of CNT-based chemiresistive sensing are surveyed. In Chapter 2, we report the synthesis of pentiptycene polymer/single-walled carbon nanotube complexes and their applications in the selective detection of benzene, toluene, and o-xylene using chemiresistive and quartz crystal microbalance-based methods. In Chapter 3, we report a method to effectively immobilize transition metal selectors in close proximity to the SWCNT surface using pentiptycene polymers containing metal-chelating backbone structures. We have identified sensitive, selective, and robust copper-based chemiresistive ammonia sensors displaying low parts per billion detection limits. We have added these hybrid materials into the resonant radio frequency circuits of commercial near-field communication (NFC) tags to achieve wireless detection of ammonia at physiologically relevant levels, offering a non-invasive and cost-effective approach for early detection and monitoring of chronic kidney diseases. In Chapter 4, we report that iptycene-containing poly(arylene ether)s (PAEs) limit the growth of palladium nanoparticles (Pd NPs) and stabilize their dispersion. SWCNT-based chemiresistors and graphene field-effect transistors (GFETs) using these PAE-supported small Pd NPs are sensitive, selective, and robust sensory materials for hydrogen gas under ambient conditions.
In Chapter 5, we describe chemiresistors based on SWCNTs containing small and highly reactive copper-based nanoparticles in sulfonated pentiptycene poly(arylene ether)s (PAEs). The sensors show exceptional sensitivity to trace hydrogen sulfide in wet air with a low-ppb detection limit, high selectivity over a wide range of interferants, and month-long stability under ambient conditions. In Chapter 6, we report a SWCNT-based chemiresistor catalyst combination that can detect ppb levels of ethylene in air, driven by the chemoselectivity of the catalytic transformation. The utility of this ethylene sensor is demonstrated in the monitoring of senescence in red carnations and purple lisianthus flowers. In Chapter 7, we report SWCNT-based chemiresistive sensors based on a catalytic system comprising a copper complex and TEMPO cocatalyst, enabling the sensitive, selective, and robust detection of trace ethanol in air. In Chapter 8, we report the synthesis of carbon-nanomaterial-based metal chelates that enable effective electronic coupling to electrocatalytic transition metals. The defined ligands on the graphene surfaces enable the formation of structurally precise heterogeneous molecular catalysts. We demonstrate that the densely functionalized metal-chelated carbon nanomaterials are effective heterogeneous catalysts in the oxygen evolution reaction with low overpotentials and tunable catalytic activity.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152123</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical integration between cytoplasmic networks during cytokinesis</title>
<link>https://hdl.handle.net/1721.1/152121</link>
<description>Mechanical integration between cytoplasmic networks during cytokinesis
Pelletier, James F.
Self-replication of cells involves nuclear separation and cytokinesis, both of which depend on the mechanical nature of cytoplasm. The cytoplasm is a multicomponent and active gel, solid enough to integrate forces over the cell, but liquid enough to permit dynamic organization. This thesis addresses force generation and cytoplasmic organization during cytokinesis at two levels: how do the emergent mechanics of cells facilitate cytokinesis, and how do these mechanics emerge from their molecular composition and interactions? These broad questions were addressed in two systems: eggs of the frog Xenopus laevis, and JCVI-syn3.0, a genomically minimized bacterium derived from Mycoplasma mycoides. The X. laevis system enables essentially undiluted cytoplasmic extracts and is arguably the most physiological non-living system, and JCVI-syn3.0 is arguably the least complex living system, so together they bridge the interface between non-living molecules and living cells. In each system, this thesis characterizes mechanically relevant molecules and then develops a continuum model for the mechanics. In the X. laevis system, the most abundant cytoplasmic components – microtubules, F-actin, intermediate filaments, endoplasmic reticulum, mitochondria, and cytosol – are imaged in order to characterize flows involved in nuclear positioning and cytoplasmic organization. While the system had been conceptualized as a solid network of microtubules immersed in a liquid cytosol, progress reported in Chapters 2 and 3 shows the system behaves as a continuum and develops a new model of nuclear positioning based on surface forces and internal stresses. In the M. mycoides system, cell morphology dynamics are imaged using a microfluidic chemostat, and the genetic basis for normal morphology is determined using an approach agnostic to known gene function.
While previous models had considered the cytoskeletal protein FtsZ as the primary driver of cell division, progress reported in Chapter 4 shows cell morphology is more complex and depends on five additional genes of unknown function, which likely affect membrane mechanics. In both systems, this thesis shows that a mechanically integrated view of the cell is both necessary and enabling for understanding cytokinesis and physiology.
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152121</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Site-selective Labeling of the Nitrogenase Iron-Molybdenum Cofactor</title>
<link>https://hdl.handle.net/1721.1/152119</link>
<description>Site-selective Labeling of the Nitrogenase Iron-Molybdenum Cofactor
Badding, Edward D.
Nitrogenases are enzymes known to catalyze the kinetically challenging, and biologically important, reduction of N₂ to NH₃. The mechanism of these enzymes, and in particular, the chemistry that occurs at the catalytic cofactor of the Mo nitrogenase, the iron-molybdenum cofactor (FeMo-co), has been studied for decades. A challenge in understanding its unique reactivity is knowing how the valence electrons of FeMo-co are distributed and coupled, and how those change during catalysis. Because the large number of metal sites present within FeMo-co gives rise to a complex set of spectroscopic responses, correlating that information to a specific metal site within the three-dimensional structure is a substantial challenge. My thesis is focused on addressing this problem by incorporating ⁵⁷Fe site-selectively within FeMo-co—specifically its terminal Fe site (Fe1). Spectroscopic analysis of the site-selectively labeled Mo nitrogenase in its resting state informed on the valence and spin orientation of the Fe1 site, and as a result, ruled out multiple proposed spin-coupling schemes for the entire cluster. Characterization of the oxidized resting state and the first intermediate of nitrogen fixation provided insight into the cofactor’s redox chemistry, and established the utility of using this methodology to study other states of FeMo-co. Finally, the methodology to site-selectively label FeMo-co was expanded to manipulate its chemical composition by substituting the Fe1 site with Co²⁺. Incorporation of this new artificial metallocofactor into the Mo nitrogenase and its subsequent characterization revealed that, within the same charge state, CoFeMo-co is EPR active for states that are EPR silent in the WT enzyme. This work opens the door for studying these states using advanced EPR techniques or magnetic Mössbauer&#13;
spectroscopy.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152119</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of Nitrogenase Cofactors bound to Nitrogenase Carrier Proteins</title>
<link>https://hdl.handle.net/1721.1/152118</link>
<description>Study of Nitrogenase Cofactors bound to Nitrogenase Carrier Proteins
Srisantitham, Suppachai
Nitrogenases employ Fe-S clusters as catalytic cofactors to perform the kinetically challenging reduction of N₂ to NH₃. The unique reactivity of nitrogenase cofactors is derived in part from the complexity of their cluster composition, featuring eight metal sites, at least seven of which are Fe. Understanding how these cofactors operate requires knowledge of their electronic structure in the resting state, and how it changes during catalysis. However, spectroscopically studying nitrogenase cofactors is challenging since it is difficult to resolve the signals arising from the large number of Fe sites. To overcome this challenge, we have developed a post-biosynthetic modification method to selectively label specific sites of nitrogenase cofactors with ⁵⁷Fe. As a test case, we showed that the terminal Fe sites of the nitrogenase L-cluster can be selectively labeled with ⁵⁷Fe in near quantitative yield. Using this technology together with site-directed mutagenesis, we obtained the first concrete evidence that the L-cluster binds to the nitrogenase carrier protein NifX through its His³⁵ residue. In addition, we extended the use of both the site-selective labeling strategy and NifX as a protein platform to conduct a comparative electronic structure study of two nitrogenase catalytic cofactors (FeMo-co and FeV-co). We found that the valence distribution across the Fe₇ fragment is very similar for both cofactors in the dithionite-reduced state. Overall, this thesis reports the development and utilization of site-selective ⁵⁷Fe labeling in conjunction with the carrier protein NifX to study the electronic structure of nitrogenase cofactors.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152118</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A blood exchange method to study circulation kinetics of tumor cells in the blood</title>
<link>https://hdl.handle.net/1721.1/152117</link>
<description>A blood exchange method to study circulation kinetics of tumor cells in the blood
Miller, Alex Brandon
Blood is an essential compartment for tumor cell trafficking. In solid epithelial-derived tumors, blood serves as the major vehicle for metastasis, whose principal cell is the circulating tumor cell (CTC). These cells originate from the primary tumor and are shed at low concentrations into the blood, where they travel to distant sites to initiate metastatic lesions. In liquid, blood-borne cancers, the blood is a major reservoir of disease and allows cells to move from the bone marrow, where disease typically initiates, to other sites throughout the body.&#13;
&#13;
While the general steps of these processes are known, there is a lack of evidence in the field regarding the physical properties defining tumor cell trafficking through the blood. Several studies have estimated vastly conflicting half-life times for CTCs, ranging from seconds to hours. However, these studies are limited in that they typically involve monitoring the decay in concentration of injected in vitro cultured cells, rather than that of native, unprocessed tumor cells. Measuring the blood concentration of these injected in vitro cultured cells over time is insufficient to extrapolate the two defining variables that underlie the concentration of CTCs: half-life time and generation rate. Studying these parameters is crucial to understanding the nature of the metastasis of cancer throughout the body, which remains the leading cause of cancer deaths.&#13;
&#13;
Our lab has developed a technology capable of detecting genetically fluorescent CTCs longitudinally directly from the bloodstream of mice in real-time. The system combines a surgical cannulation technique of the jugular vein and carotid artery with a lab-built optofluidic platform, which uses laser-based detection on a microfluidic chip, to enumerate and capture CTCs from un-anesthetized mice. By setting two of these in sequence, we aim to develop a method for transferring unprocessed CTC-containing blood between animals and monitoring the resulting blood concentrations to elucidate the circulatory kinetics of tumor cells in the blood. &#13;
&#13;
In this thesis, we begin by developing a system to determine circulation properties of CTCs. Using a series of real-time CTC detection platforms, we create a model to describe how the exchange of blood between healthy and tumor-bearing mice allows us to extrapolate circulation properties of the cells. Next, we apply this platform to study the circulation kinetics of CTCs from several models of solid-tumor disease. Finally, we use these techniques to study the kinetics of leukemia cells. By varying the tumor and treatment status of donor and recipient animals, we assess how the tumor cells themselves and the microenvironment of the bone marrow impact tumor cell clearance. We discover that E-selectin, a vascular adhesion molecule, prevents cell turnover between tumor compartments and enables relapse cells to escape circulation more quickly. Altogether, this work provides a novel method to assess circulation kinetics of tumor cells and identify features that regulate the clearance of tumor cells from the blood.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152117</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure- and Composition-Performance Relationships of Electrically Conductive Metal-Organic Frameworks, Conjugated Porous Organic Polymers, and Fused Aromatics</title>
<link>https://hdl.handle.net/1721.1/152116</link>
<description>Structure- and Composition-Performance Relationships of Electrically Conductive Metal-Organic Frameworks, Conjugated Porous Organic Polymers, and Fused Aromatics
Chen, Tianyang
Combining charge transport with permanent porosity and structural modulability, electrically conductive metal-organic frameworks (MOFs) have drawn increasing attention due to their potential use in a variety of applications, including electrochemical energy storage (EES). Although fully conjugated porous organic polymers (POPs) generally exhibit lower electrical conductivities and crystallinity, they are built solely from earth-abundant elements and are lightweight, and thus offer great potential for EES applications. Fused aromatic materials would be among the most promising electrode materials for EES were they not poorly conductive.&#13;
In this thesis, the author explores structure- and/or composition-property relationships of electrically conductive MOFs, conjugated POPs, and fused aromatic materials, with a focus on their potential use in energy-related applications. Chapter 1 first introduces recent developments of electrically conductive MOFs and conjugated POPs, with particular attention paid to their structure-property relationships and their applications as electrode materials for EES. The remaining part of Chapter 1 summarizes the use of organic electrode materials for EES, emphasizing two major obstacles. Focusing on composition-property relationships, Chapter 2 demonstrates continuous fine-scale tuning of band gaps over 0.4 eV and of electrical conductivity over four orders of magnitude in a series of highly crystalline binary alloys of two-dimensional electrically conducting MOFs. To probe structure-property relationships, Chapter 3 reveals the construction of compositionally constant Ni-based MOFs and conjugated coordination polymers with different structural dimensionality, including closely π-stacked one-dimensional chains, aggregated two-dimensional layers, and a three-dimensional framework, based on 2,3,5,6-tetraamino-1,4-hydroquinone and its various oxidized forms. These compositionally constant materials exhibit distinct electronic properties caused by different dimensionality and supramolecular interactions between structural motifs. Chapter 4 presents polymeric tetraoxa[8]circulenes as a new family of porous organic polymers with light-switchable and tunable semiconducting properties. Chapters 5 and 6 focus on the use of conducting fused aromatic materials as electrodes for EES. Chapter 5 describes the design and synthesis of all-organic, fused aromatic materials that store up to 310 mAh g–1 and charge in as little as 33 seconds.
This performance stems from abundant quinone/imine functionalities that decorate an extended aromatic backbone, act as redox-active sites, engage in hydrogen bonding, and enable a delocalized high-rate energy storage with stability upon cycling. Chapter 6 demonstrates that a small fused aromatic molecule whose high electrical conductivity, high capacity for redox charge storage, and complete lack of solubility in any practical solvent allow it to reversibly intercalate Li+ ions and function as a competitive cathode material for Li-ion batteries, even as a neat material.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152116</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Deactivation-Resistant Catalysts for Pd-Catalyzed C–N Cross-Coupling Reactions</title>
<link>https://hdl.handle.net/1721.1/152115</link>
<description>Development of Deactivation-Resistant Catalysts for Pd-Catalyzed C–N Cross-Coupling Reactions
Reichert, Elaine C.
Chapter 1: Introduction to Pd-Catalyzed C–N Cross-Coupling: Rational Biarylphosphine Ligand Design Enhances the Reactivity of Difficult Substrates&#13;
&#13;
Transition-metal-catalyzed C−N cross-coupling reactions are an important class of transformations with applications in a variety of fields, and Pd-based catalysts are among the most effective for these reactions. However, certain classes of compounds, including very bulky substrates as well as those containing coordinating functional groups, can be very difficult to couple. The choice of the supporting ligand on Pd plays an important role in the efficiency of a given reaction, since the ligand is necessary to facilitate the productive coupling reaction and hinder the formation of off-cycle Pd species. Correspondingly, the Buchwald lab has enabled challenging transformations by rationally designing new biarylphosphine ligands.&#13;
&#13;
Chapter 2: Development of an Aryl Amination Catalyst with Broad Scope Guided by Consideration of Catalyst Stability&#13;
&#13;
A new dialkylbiaryl monophosphine ligand, GPhos, that supports a palladium catalyst capable of promoting carbon–nitrogen cross-coupling reactions between a variety of primary amines and aryl halides, was developed; in many cases, these reactions can be carried out at room temperature. The reaction development was guided by the idea that the productivity of catalysts employing BrettPhos-like ligands is limited by their lack of stability at room temperature. Specifically, it was hypothesized that primary amine and N-heteroaromatic substrates can displace the phosphine ligand, leading to the formation of catalytically dormant palladium complexes that reactivate only upon heating. This notion was supported by the synthesis and kinetic study of a putative off-cycle Pd complex. Consideration of this off-cycle species, together with the identification of substrate classes that are not effectively coupled at room temperature using previous catalysts, led to the design of a new dialkylbiaryl monophosphine ligand. An Ot-Bu substituent was added ortho to the dialkylphosphino group of the ligand framework to improve the stability of the most active catalyst conformer. To offset the increased size of this substituent, we also removed the para i-Pr group of the non-phosphorus-containing ring, which allowed the catalyst to accommodate binding of even very large α-tertiary primary amine nucleophiles. In comparison to previous catalysts, the GPhos-supported catalyst exhibits better reactivity both under ambient conditions and at elevated temperatures. Its use allows for the coupling of a range of amine nucleophiles, including (1) unhindered, (2) five-membered-ring N-heterocycle-containing, and (3) α-tertiary primary amines, each of which previously required a different catalyst to achieve optimal results.&#13;
&#13;
Chapter 3: Pd-Catalyzed Amination of Base-Sensitive Five-Membered Heteroaryl Halides with Aliphatic Amines&#13;
&#13;
A versatile and functional-group-tolerant method was developed for the Pd-catalyzed C–N cross-coupling of five-membered heteroaryl halides with primary and secondary amines, an important but underexplored transformation. Coupling reactions of challenging, pharmaceutically relevant heteroarenes, such as 2-H-1,3-azoles, are reported in good-to-excellent yields. High-yielding coupling reactions of a wide set of five-membered heteroaryl halides with sterically demanding α-branched cyclic amines and acyclic secondary amines are reported for the first time. The key to the broad applicability of this method is the synergistic combination of (1) the moderate-strength base NaOTMS, which limits base-mediated decomposition of sensitive five-membered heteroarenes that ultimately leads to catalyst deactivation, and (2) the use of a GPhos-supported Pd catalyst, which effectively resists heteroarene-induced catalyst deactivation while promoting efficient coupling, even for challenging and sterically demanding amines. Cross-coupling reactions between a wide variety of five-membered heteroaryl halides and amines are demonstrated, including eight examples involving densely functionalized medicinal chemistry building blocks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152115</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An alum particle-based platform to enhance and investigate humoral immune responses to immunization</title>
<link>https://hdl.handle.net/1721.1/152110</link>
<description>An alum particle-based platform to enhance and investigate humoral immune responses to immunization
Rodrigues, Kristen A.
Slow delivery of vaccines has been shown to amplify humoral immune responses compared to traditional bolus immunization, but clinical translation of frequent repeated immunizations is challenging. In this thesis, we investigate the use of peptide linkers containing consecutive phosphoserine residues (pSer) designed to mediate ligand exchange interactions between pSer-conjugated antigens and aluminum hydroxide (alum) particles in order to mediate slow delivery of antigen from the injection site depot to the draining lymph nodes (dLNs). We optimized this pSer/alum platform for a SARS-CoV-2 RBD antigen and an HIV envelope trimer MD39, systematically modulating characteristics of both the antigen and pSer linker to maximize on-target, vaccine-relevant responses. These optimized pSer-antigen designs elicited robust antigen-specific germinal center B cell and serum antibody responses in mice, with synergistically amplified humoral immunity when co-anchored with the phosphate-containing molecular adjuvants CpG and SMNP. Based on the prolonged retention of antigen at the injection site under physiological conditions with the pSer/alum approach, we next applied a fluorescence resonance energy transfer-based approach to track antigen stability longitudinally. A substantial fraction of antigen remains intact after three weeks at the injection site with the optimized alum-anchored, pSer-conjugated MD39 trimer. The pSer/alum approach promoted significantly improved antigen delivery to the follicular dendritic cell (FDC) network in the dLNs compared to soluble MD39, with most antigen in the dLN FDCs intact through at least day 28. The pSer modification approach employed here provides a simple and robust strategy to prolong antigen availability in a clinically translatable vaccine regimen.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152110</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural investigations of adenosylcobalamin-dependent enzyme maturation</title>
<link>https://hdl.handle.net/1721.1/152109</link>
<description>Structural investigations of adenosylcobalamin-dependent enzyme maturation
Vaccaro, Francesca A.
Metalloenzymes utilize metallocofactors, ranging from single metal ions to complicated metallic clusters, to catalyze a wide range of challenging chemical reactions that are critical for life. Incorporation of these metallocofactors often relies on proteins known as metallochaperones that transport, modify, and/or insert the metallocofactor for its target metalloenzyme. This thesis focuses on the maturation of methylmalonyl-CoA mutase (MCM). In humans, MCM is the only known adenosylcobalamin (AdoCbl)-dependent enzyme; mutations or deletions of MCM or any metallochaperones involved in its maturation lead to methylmalonic aciduria, an inborn error of metabolism. The final step of MCM’s maturation involves an adenosyltransferase (ATR), which catalyzes the adenylation reaction to form AdoCbl and then delivers AdoCbl to MCM, and a G-protein chaperone, which facilitates AdoCbl delivery by the ATR through GTP hydrolysis. In addition to the human system, there are two bacterial systems used to understand the maturation of MCM: a homologous three-component system from Methylobacterium extorquens and an analogous two-component system from Cupriavidus metallidurans. The two-component system consists of the natural fusion protein IcmF, which contains the AdoCbl-dependent isobutyryl-CoA mutase and its corresponding G-protein chaperone, and the analogous ATR. This thesis provides structural and biochemical characterizations of these two model systems to understand how the metallochaperones, specifically the G-protein chaperones, enable efficient mutase maturation. We present the crystal structure of a minimal system consisting of the G-protein chaperone, MeaB, and the Cbl-binding domain of the MCM from M. extorquens. This structure trapped an active conformation of the G-protein chaperone, revealing the first snapshots of the 180° rotation of one protomer needed to complete the nucleotide binding site and perform GTP hydrolysis.
We also present mutagenesis and solution state data for IcmF from C. metallidurans that characterize its nucleotide- and cofactor-state dependent oligomerization, important for cofactor loading and unloading. Using cryogenic electron microscopy, we obtain structural data on IcmF that show that the monomeric G-protein domains of IcmF dimerize to resemble the active conformation of dimeric MeaB, and that this “active” conformation of the G-protein domains physically props open the mutase domains to enable AdoCbl loading. Finally, we present the crystal structure of C. metallidurans ATR and use this structure in computational docking studies with C. metallidurans IcmF to probe the potential interfaces within a G-protein:ATR:mutase complex. Overall, this work deepens our understanding of the function of G-protein chaperones in the maturation of AdoCbl-dependent mutases and sets the stage for further studies of metallochaperones and their roles in metalloenzyme maturation. Collectively, these studies have implications in both human health and in biotechnology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152109</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The synthesis and study of phenyl substituted anthracenes.</title>
<link>https://hdl.handle.net/1721.1/152091</link>
<description>The synthesis and study of phenyl substituted anthracenes.
Koepsell, Don George.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152091</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organizational and bureaucratic politics in Soviet defense decisionmaking : a case study of the Soviet air defense forces</title>
<link>https://hdl.handle.net/1721.1/152087</link>
<description>Organizational and bureaucratic politics in Soviet defense decisionmaking : a case study of the Soviet air defense forces
Lepingwell, John William Rix.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1988; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152087</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fermentation product recovery by supercritical fluid extraction : microbiological and phase equilibrium aspects</title>
<link>https://hdl.handle.net/1721.1/152085</link>
<description>Fermentation product recovery by supercritical fluid extraction : microbiological and phase equilibrium aspects
Willson, Richard C. (Richard Coale)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1988; Title as it appeared in M.I.T. Graduate List, June 1988: Supercritical fluid extraction of fermentation products. Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152085</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rotational and vibrational energy transfer in methane</title>
<link>https://hdl.handle.net/1721.1/152084</link>
<description>Rotational and vibrational energy transfer in methane
Foy, Bernard Ryan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1988; Bibliography: leaves 119-123.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152084</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Well-Dressed Spacecraft: Textiles for Cosmic Dust Metrology</title>
<link>https://hdl.handle.net/1721.1/152029</link>
<description>The Well-Dressed Spacecraft: Textiles for Cosmic Dust Metrology
Cherston, Juliana Mae
I envision cosmic grains that may have traveled light years to meet, in a microscopic blitz, the commonplace textile! This work brings the first electronic textiles to Low Earth Orbit, tracking advanced fabric sensor characterization from a tabletop laser accelerator, to a warehouse-scale electrostatic accelerator, to the walls of the International Space Station. &#13;
&#13;
There is, increasingly, direct place-sharing between the specialized scientist and the lay explorer – a coexistence of scientific and humanistic pursuit that is muted in the specialized laboratories of the 20th century. When scientific and humanistic infrastructure are not gracefully coevolved, they clash. A key design vision for the current work is that opportunities exist to more deeply unify the infrastructural demands and desires of the explorer with the architectures that enable scientific inquiry.&#13;
&#13;
For example, for decades, spacecraft and spacesuits have leveraged textile substrates as their outermost protective skins. The primary contribution of this work is the introduction of piezoelectric and charge sensitive fabric skin for sensing hypervelocity space dust, while simultaneously offering enhanced protective capabilities. Dust kinematic estimates can in turn suggest the grain's likely origin. From space webs on asteroids and 'sensory conductors' on extravehicular spacesuits to future 'textile telescopes' for sensing interstellar dust, I introduce a suite of additional conceptual avenues that together map out a landscape of opportunity for scientists and explorers alike.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152029</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Membranas</title>
<link>https://hdl.handle.net/1721.1/152028</link>
<description>Membranas
L'Huillier Chaparro, Nicole
Organized around an installation called Membranas, this dissertation explores alternative logics and modes of socialization through improvisatory encounters. Membranas is an infrastructure that stimulates call and response exchanges between humans, the wind, vibrations in the air, and a machine. It does this by shifting away from conventional notions of sound and music through the creation of several interactive sculptural elements that activate an experience: vibrational and sonic organs contained in the installation, a set of membrane sensors in the form of flags that perceive sounds and vibrational activity, and a vibrational membrane microphone based on a soft accelerometer elastic sensor to be used outdoors. Membranas is a performative interface that establishes a continuous testbed for exploring resonance as an inclusive force that stimulates collectivity and the sense of interconnectivity among participants. This work emerges as a way of putting into practice ideas within La Membrana, an organizational conceptual apparatus that stimulates vibrational ways of speculating about how to rearrange the social.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152028</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Earth Observation-Informed Modeling to Inform Sustainable Development Decision-Making</title>
<link>https://hdl.handle.net/1721.1/152016</link>
<description>Using Earth Observation-Informed Modeling to Inform Sustainable Development Decision-Making
Reid, Jack
This work aims to demonstrate the viability of a methodology for supporting local, sustainable development decision-making through the development of clearer linkages between environmental modeling and societal impact, with a particular emphasis on the use of earth observation data. To accomplish this, it explores the efficacy and difficulties of collaboratively developing a systems-architecture-informed, multidisciplinary GIS decision support system for sustainable development applications that makes significant use of earth observation data. &#13;
&#13;
This is done through the development and evaluation of decision support systems (DSSs) for two applications: (1) mangrove forest management and conservation in the state of Rio de Janeiro, Brazil; and (2) coronavirus response in six regions around the world. In both cases, the methodology involves the application of the System Architecture Framework, which includes analyzing the stakeholders to inform the design of the DSS in question. Other components of the methodology are developing the DSS through a collaborative process with stakeholders; pursuing targeted analyses; and evaluating the usefulness of both the DSS and the development process through interviews, workshops, and other feedback mechanisms.&#13;
&#13;
All of this takes place under the umbrella of the Environment, Vulnerability, Decision-Making, Technology (EVDT) Framework for combining remote observation and other types of data to inform decision-making in complex socio-environmental systems, particularly those pertaining to sustainable development. As the name suggests, EVDT integrates four models into one tool: the Environment; Human Vulnerability and Societal Impact; Human Behavior and Decision-Making; and Technology Design for earth observation systems including satellites, airborne platforms and in-situ sensors. The data from each of these domains is used by established models in each domain, which are adapted to work in concert to address the needs identified during the stakeholder analysis. The capabilities provided by this framework will improve the management of earth observation and socioeconomic data in a format usable by non-experts, while harnessing cloud computing, machine learning, economic analysis, complex systems modeling, and model-based systems engineering.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152016</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Centering Communities in Research and Technology Design</title>
<link>https://hdl.handle.net/1721.1/152013</link>
<description>Centering Communities in Research and Technology Design
Calacci, Dan
Large companies hold an increasingly large monopoly on data and information about community behavior, making contexts ranging from instrumented city neighborhoods to online platforms more legible than ever to private firms. Meanwhile, many communities lack the tools and methods they need to access, use, and make sense of their own data, creating an information asymmetry that restricts their ability to understand their own conditions and relationships with new technologies. What methods, tools, and policies can help communities leverage their own data and disrupt this concentration of power? This dissertation investigates how community-focused computational research can help communities understand their own behaviors and the impact of new technologies in three domains: experienced urban segregation, online platform design, and algorithmic management. First, the study on urban segregation highlights how individual mobility behaviors can contribute to city-wide measures of income inequality, and provides an example of how large-scale data analysis can prioritize community agency. The second study concentrates on online platform design, and demonstrates how communities can understand their relationship to new online platforms by studying the racialized use of a hyper-local "neighborhood watch" social network. The third study explores how community co-research and design can help communities interrogate their relationship to algorithms by presenting a worker-led algorithmic audit of a delivery platform's pay system. In addition to their field-specific contributions, each study demonstrates a distinct epistemic and ethical approach to knowledge production. Using design lessons from each study, this dissertation also examines various approaches to community-focused research, discusses their main legal and logistical obstacles, and offers strategies for overcoming them.
These contributions offer key ways that research and technology design can center communities, examples of how this practice can engage and empower worker groups and neighborhoods, and concrete policy and organizing strategies to make community-focused research more common and accessible.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152013</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations of Cognitive, Affective, and Communicative Systems for Neurodiverse Individuals</title>
<link>https://hdl.handle.net/1721.1/152012</link>
<description>Foundations of Cognitive, Affective, and Communicative Systems for Neurodiverse Individuals
Johnson, Kristina T.
Individuals with complex neurodevelopmental differences, including some individuals with autism and/or genetic disorders, often struggle with cognition, communication, emotion regulation, motor coordination, and social interaction. At the same time, these individuals offer untapped insight into fundamental questions of human nature: How do we learn? How and why do we communicate? Why do we do what we do? In many respects, these individuals possess some of the greatest need and potential for scientific inquiry and advancement; yet they are difficult to bring into clinics, almost impossible to assess using a single modality, and often cannot provide accurate verbal indicators of how they are feeling or what they are thinking. Moreover, the population is small, heterogeneous, and geographically distributed. There is inconsistent terminology to describe this group, few standardized assessments that are sensitive and accessible for this population, and unequal access to support, technology, research, and funding. Lastly, there is an overwhelming focus on what they cannot do.&#13;
&#13;
This thesis investigates how individuals with complex neurodevelopmental differences can and do thrive. Here, I present the design, framework, and implementation of systems that can elucidate the ways in which neurodiverse individuals are learning and growing across cognitive, affective, and communicative domains. These systems include custom-built hardware, software, mathematical models, novel methodologies, publicly released datasets, and multiple scientific studies. Each study is built around strengths-based research questions – asking what these individuals can do – and personalized, naturalistic study paradigms.&#13;
&#13;
Specifically, I introduce a multi-modal, customizable learning platform, SPRING, designed to motivate skill development for this population while automating data collection and feedback. I also describe a personalized, naturalistic fMRI study probing the neural correlates of special interests in children both with and without autism. Finally, I present Commalla, a system of remote data acquisition and individualized speech signal processing that seeks to characterize how individuals with few spoken words communicate using non-speech vocalizations. This research aims to build the foundation for a transdisciplinary center for the study and advancement of these exceptional individuals, with the intention of expanding our understanding of human cognition, motivation, and language.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152012</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Biomedical and Bioinspired Fiber Devices via Thermal Drawing</title>
<link>https://hdl.handle.net/1721.1/152011</link>
<description>Engineering Biomedical and Bioinspired Fiber Devices via Thermal Drawing
Lee, Youngbin
Fibers are ubiquitous elements in fields ranging from conventional textiles and optics to smart textiles and electronics. Among fiber fabrication methods, the thermal drawing process has a unique strength in the scalable production of multimaterial fibers. Building on the varied properties of multiple materials within a single fiber, multifunctional fibers have been studied for diverse applications such as sensing, batteries, computing, and biomedical devices. In this thesis, I develop approaches to expand the use of thermally drawn fibers in biomedical fields by overcoming limitations of the thermal drawing technique and by integrating functional components into thermally drawn fibers. First, through the development of a thermally drawable photoresist, I combine thermal drawing and photolithography to produce scalable multimaterial fibers with broken axial symmetry. Microscale patterns along the shaft extend the functional point of biomedical devices from the tip to all surfaces by customizably exposing the internal functionalities of multifunctional fibers. Second, biocompatible magnetic soft robots are designed using thermally drawn exoskeleton fibers. By integrating a specially formulated magnetic composite into thermally drawn fibers and applying strain treatment, the soft robots exhibit bioinspired locomotion for biomedical applications under simplified magnetic fields. Lastly, artificial muscle fibers are advanced into a selectively controllable form with rapid response. Liquid metal integrated into thermally drawn bimorph fibers serves as an internal heat source that allows for selective actuation of closely spaced muscle fibers with rapid response speed. This thesis can serve as a foundation for expanding fiber-based applications of thermal drawing in biomedical fields.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152011</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lifelong Personalization for Social Robot Learning Companions: Interactive Student Modeling Across Tasks and Over Time</title>
<link>https://hdl.handle.net/1721.1/152010</link>
<description>Lifelong Personalization for Social Robot Learning Companions: Interactive Student Modeling Across Tasks and Over Time
Spaulding, Samuel Lee
Early language and literacy skills are important foundations for learning and form the basis of later academic success. Motivated by a growing scientific consensus that language learning requires engaging students cognitively, affectively, and socially, this thesis advances work to develop “social robot learning companions” that engage with and adapt to students across different language/literacy tasks to provide long-term, scalable, and personalized learning assistance. Personalized student modeling helps promote learning and engagement, but sophisticated modeling relies heavily on student interaction data. In order to elicit useful amounts of personalized student data, researchers have increasingly employed “long-term” interaction designs, which occur over distinct sessions at different times.&#13;
&#13;
This thesis broadens the scope of single-task “long-term personalization” to “multi-task personalization” across different tasks. Both “long-term” and “multi-task” personalized interaction designs are mirrored by an associated shift in algorithm and model design: continual learning, which accounts for the temporal sequence in which data is received, and transfer learning, which accounts for the task in which data originates, using data from a ‘source’ task to learn a model in a different ‘target’ task. The combination of these paradigms, which I call “lifelong personalization”, could lead to flexible personalized models that can better adapt to individuals over time and across tasks.&#13;
&#13;
This thesis is a presentation and evaluation of continual and transfer learning methods, focusing on their impact on accuracy and data efficiency of personalized student models, and on student learning and engagement. To facilitate this research, I have developed a unified robotic game system for studying lifelong personalization over two different educational games, each emphasizing certain language and literacy skills. The robot’s behavior in each game is backed by a flexible Gaussian Process-based approach for rapidly learning student models from interactive play in each game, and a method for transferring each game’s learned student model to the other via a novel instance-weighting protocol based on task similarity. By evaluating new methods for flexible, adaptive student personalization within a suite of custom-designed games for promoting students’ language/literacy skills, this thesis contributes both algorithmic and human-centered insights for the future of educational human-robot interactions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152010</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational and Statistical Methods for Analysis of Spatial Transcriptomics Data</title>
<link>https://hdl.handle.net/1721.1/152008</link>
<description>Computational and Statistical Methods for Analysis of Spatial Transcriptomics Data
Cable, Dylan Maxwell
Spatial transcriptomics technologies are an emerging class of high-throughput sequencing methodologies for measuring gene expression at near single-cell resolution at spatially defined measurement spots across a biological tissue. We show how measuring cells in their native environment has the potential to identify spatial patterns of cell types, cell-to-cell interactions, and spatial variation in cellular behavior. However, several technical challenges necessitate the development of appropriate statistical methods, including additive mixtures of single cells, overdispersion, and technical platform effects across technologies. The key contributions of this thesis include developing a statistical framework accounting for these challenges to identify cell types within spatial transcriptomics datasets. We extend this approach to a general regression framework that can, accounting for multiple replicates, learn cell type-specific differential gene expression (DE) across many scenarios including DE across spatial regions and due to cell-to-cell interactions. We apply our framework to a metastatic tumor clone and discover an association between immune cell localization and an epithelial-mesenchymal transition of cancer cells. We also extend our approach to identify cell types from spatial transcriptomics data in an unsupervised manner, drawing from our other algorithms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152008</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanics of Composite Hydrogels</title>
<link>https://hdl.handle.net/1721.1/152007</link>
<description>Mechanics of Composite Hydrogels
Song, Jake
Composite hydrogels – which consist of fillers in a swollen polymeric matrix – are ubiquitous structural components of biological materials, biomedical devices, food, and consumer products. The mechanical properties of composite hydrogels are pivotal for their respective applications, and are governed by physical processes which remain unexplored. This thesis explores the physical processes that underlie (i) structure formation in composite hydrogels, viewed from the perspective of the self-assembly of patchy particles; (ii) linear viscoelastic stress relaxation in composite hydrogels, as a manifestation of athermal relaxation dynamics; (iii) nonlinear compression stiffening in composite hydrogels, arising from complex particle dynamics that arise during compression; and (iv) design methods for composite hydrogels, as functional materials with programmable morphologies and stimuli-responsivity. These results provide a framework for understanding the mechanics of composite hydrogels in nature and technology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152007</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing</title>
<link>https://hdl.handle.net/1721.1/152006</link>
<description>Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing
Jeong, Sooyeon
Globally, more than 264 million people of all ages are affected by depression, which has become a leading cause of disability. Several interactive technologies for mental health have been developed to make various therapeutic services more accessible and scalable. However, most are designed to engage users only within therapy and intervention tasks. This thesis presents social robots that deliver interactive positive psychology interventions and build rapport with people over time as helpful companions to improve psychological wellbeing. Two long-term deployment studies explored and evaluated how these robotic agents could improve people’s psychological wellbeing in real-world contexts. In Study 1, a robotic coach provided seven positive psychology interventions for college students in on-campus dormitory settings and showed significant association with improvements in students’ psychological wellbeing, mood, and motivation to change. In Study 2, we deployed our robots in 80 people’s homes across the U.S. during the COVID-19 pandemic and evaluated the efficacy of a social robot that delivers wellbeing interventions as a peer-like companion rather than an expert coach. The companion-like robot was shown to be the most effective in building a positive therapeutic alliance with people and resulted in enhanced psychological wellbeing, improved readiness for change, and reduced negative affect. We further explored how traits, such as personality and age, influence the intervention outcomes and participants’ engagement with the robot. The two long-term in-the-wild studies offer valuable insights into design challenges and opportunities for companion AI agents that personalize mental health interventions and agent behaviors based on users’ traits and behavioral cues for better mental health outcomes.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152006</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Into the Wild: Deploying Brain and Physiological Sensing in Natural Environments to Enhance Wake and Sleep Cognitive Behavioral Studies</title>
<link>https://hdl.handle.net/1721.1/152004</link>
<description>Into the Wild: Deploying Brain and Physiological Sensing in Natural Environments to Enhance Wake and Sleep Cognitive Behavioral Studies
Bernal Cubias, Guillermo Román
This thesis presents two experimental platforms for performing cognitive behavioral studies in natural settings: one for wake time and one for sleep. The equipment utilized today in behavioral and sleep labs is not very accessible, comfortable, portable, or simple to operate. The systems documented in this dissertation demanded the creation of novel wearables, sensors, signal processing, communications, and machine learning solutions that vastly outperformed current systems.&#13;
&#13;
The first platform introduced here is Entwine, a toolkit for behavioral researchers to create VR experiments. The first half of this toolkit includes Unity modules to help create a VR behavioral experiment. These modules are meant to lower the barrier of entry rather than replace Unity development, and they can be built on or modified by the user. I present a study that is able to identify the spatiotemporal dynamics between the autonomic nervous system (HR, EDA) and the central nervous system (Frontal and Parietal cortices) during a high cognitive demand task. I also explored how such a system can help measure and test the field of vision to evaluate retinal and early afferent visual pathways.&#13;
&#13;
The second contribution of this dissertation is the Fascia Ecosystem, which reinvents sleep studies using three key technologies. First, the Fascia Sleep Mask uses fabric-based sensing to collect polysomnogram-like data in a soft sleep mask. Second, the Fascia Hub lets a researcher or scientist give the patient audio and visual feedback and stimulation. This helps with sleep and dream research by allowing for interventions to be made. Finally, the machine learning API provides real-time sleep staging, spindles, and slow-wave saliency maps in the Fascia Portal, where sleep researchers can view patient signals and store experiment data. The presented work streamlines cognitive study procedures by introducing two novel solutions that will be shared with the scientific community. I have shown through user studies that these prototypes are easy to use and have the ability to significantly enhance cognitive research, diagnosis, and understanding of sleep structure.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152004</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The dynamics of attention in digital ecosystems</title>
<link>https://hdl.handle.net/1721.1/152002</link>
<description>The dynamics of attention in digital ecosystems
Epstein, Ziv
With more than half of the population of earth now using social media, it has become a critical tool for individuals to gain information about the world around them and to connect with others, and for researchers to build a broad understanding of human behavior. However, myopic design patterns of these platforms, as well as the broader attention economy, have enabled the proliferation of misinformation and undermined collective intelligence. These situating factors motivate a guiding research question for the emerging field of platform design: how does attention operate in digital ecosystems, and how can we design platforms to mitigate misinformation and promote collective intelligence? This dissertation addresses key aspects of this question by arguing that attention operates in a two-stage process (“Try” and “Buy”) on social media. The Try phase (stage 1) involves initial exposure to content, and the Buy phase (stage 2) involves engagement with content conditional on having been exposed to it in the first place. Due to the difficulties of measuring stage 1 exposure in standard survey experiment settings, most research has focused only on psychological determinants and design interventions for stage 2, neglecting the crucial role attention plays online. To understand the attentional dynamics of stage 2, I will discuss studies on scalable accuracy prompts that “move the spotlight of attention” towards people’s existing but latent capacity for discerning truth from falsehood. To understand the attentional dynamics of stage 1, I will discuss how social influence can impact the content users are exposed to in social media environments, via a large-scale field experiment with AI-generated hybrid animals (“GANimals”). To directly measure attentional exposure to content in addition to engagement, I introduce a new research tool called Yourfeed.
Yourfeed is a digital environment that mirrors the attentional context of social media, and tracks both dwell time and engagement for each piece of content. Together, these contributions provide empirical evidence for how attention operates on social media, and highlight a suite of design interventions to promote high-quality interactions with these platforms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152002</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Science and Art of Human and Artificial Intelligence Collaboration</title>
<link>https://hdl.handle.net/1721.1/152001</link>
<description>The Science and Art of Human and Artificial Intelligence Collaboration
Groh, Matthew
While artificial intelligence (AI) appears to be surpassing the performance of human experts on a wide variety of games and real-world tasks, these algorithms are prone to systematic and surprising failures when deployed. In contrast to today’s state-of-the-art algorithms, humans are highly capable of adapting to new contexts. The different strengths and weaknesses of humans and AI motivate a guiding research question for the emerging field of human-AI collaboration: When, where, why, and how does the combination of human problem solving and AI systems lead to a hybrid system that surpasses (or fails to surpass) the performance of either humans or the machine alone? This dissertation addresses various dimensions of this guiding question by conducting large-scale, digital experiments across three distinct tasks and domains: deepfake detection, dermatology diagnosis, and Wordle. First, the experiments in deepfake detection examine the similarities and differences between human and machine vision in identifying visual manipulations of people’s faces in videos and identify important performance trade-offs between hybrid systems and human-only or AI-only systems for deepfake detection. Second, the experiments in dermatology diagnosis reveal that non-visual information is often essential for diagnosing skin disease, diagnostic accuracy disparities across skin color exist in image-only store-and-forward teledermatology, and clinical decision support based on a fair deep learning system can significantly increase physicians’ diagnostic accuracy in this experimental setting. Third, the experiment on Wordle demonstrates that digitally mediated expressions of empathy can counteract the negative effect of anger on human creative problem solving.
In addition to these digital experiments, this dissertation presents two algorithmic audits on clinical dermatology images to reveal where systematic errors arise in state-of-the-art algorithms, examines how context influences automated affect recognition, and proposes methods for more effectively incorporating context in applied machine learning. Together, these contributions provide empirical evidence for why human-AI collaborations succeed and fail across a variety of tasks and domains, insights into how to design human-AI collaborations more effectively, and a framework for when and where hybrid systems should rely on human problem solving.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152001</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endless Ecosystems: Designing a World Without Waste</title>
<link>https://hdl.handle.net/1721.1/152000</link>
<description>Endless Ecosystems: Designing a World Without Waste
Lee, Nicolas Alexander
The widespread environmental damage caused by anthropogenic activities over the last century represents a direct threat to both humans and non-humans. While the ethical dimensions of non-human conservation are frequently discussed in modern discourse on environmental sustainability, the deterioration of non-human ecosystems poses an immediate crisis through the loss of irreplaceable ecosystem services such as the fixation of CO2 from the atmosphere, the production of oxygen, and the filtration of fresh water. Even if drastic measures are taken to curb climate change, there is no guarantee that this will prevent the decline of non-human species. This dissertation proposes that designers who specify the source of physical media are uniquely positioned to impact the actions of non-humans, and that they should seek to maximize the degree to which they leverage ecosystems. Counter to modern sustainability paradigms of net-zero initiatives and conservation through non-intervention, this work examines how non-human ecosystems have achieved immense levels of sustainable productivity with minimal waste for hundreds of millions of years and argues that human systems can achieve such sustainability by embedding themselves within non-human systems of material production and decomposition. Endless Ecosystems is a framework for sustainable design bounded by the synthesis and decomposition of matter by non-human ecosystems. A set of tools that enable this methodology is presented and applied across several case studies in prototypical designs for technology in the built environment. The environmental impacts of these methods are quantified and explored through the application of life-cycle assessment and Emergy evaluation. Through these cases and their analyses, this framework is broadly applied as a strategy for sustainable design with the ability to empower ecosystem resource cycles rather than deplete them.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152000</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>How do we design robots equitably?: Engaging design justice, design fictions, and co-design in human-robot interaction design and policymaking processes</title>
<link>https://hdl.handle.net/1721.1/151999</link>
<description>How do we design robots equitably?: Engaging design justice, design fictions, and co-design in human-robot interaction design and policymaking processes
Ostrowski, Anastasia K.
Robots are only beginning to engage in our social spaces, and we are faced with perplexing and complex new questions and challenges: how will these technologies create or further entrench inequities in society, which stakeholders have a say in technology or policy development, and whose voices are heard in technology design? These questions have received limited engagement in human-robot interaction, where there is a pressing need to understand how human-centered design methodologies and frameworks, such as participatory design, co-design, and Design Justice, can be further developed to promote more equitable robot design and policy design processes. Here, I explore areas that should be considered and incorporated in equitable innovative technology design, including who designs the technology, how we support co-designers in the design of technologies, and how we can leverage participatory design, co-design, and Design Justice principles to support equitable robot design and policy making. Overall, I consider how we can support and create equitable design boundaries, spaces, and processes in the design and policy making of robotics through the presentation of three sets of studies. The first set of studies focuses on empowering users through co-design of robots. The second set turns a reflexive lens onto human-robot interaction roboticists and designers, mapping the context of human-robot interaction design processes and the field’s attention to ethics, equity, and justice. The third set of studies explores ways of creating spaces to support equitable technology design and policy making through design workshops and a design justice pedagogy summit.
From this work, I present design guidelines for more equitable co-design practices in human-robot interaction, tools to support roboticists and other innovative technology designers in engaging with equity and justice through their design processes, and further promote discussion around the policy and design ecosystems of innovative technology. While this dissertation specifically focuses on robot design, the methodologies and processes developed are broadly applicable to other innovative and often disruptive technologies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151999</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Conversational Agent for Dynamic Procedural Interactions</title>
<link>https://hdl.handle.net/1721.1/151990</link>
<description>A Conversational Agent for Dynamic Procedural Interactions
Colón-Hernández, Pedro
How-To questions (e.g., “How do I cook rice?”, “How do I write a check?”, or “How do I send pictures to my family from my iPhone?”) are some of the most common questions asked of search engines and presumably of conversational agents as well. Answers to How-To questions should generally take the form of a procedure: step-by-step instructions that users perform in sequence. However, people find reading instructions cognitively demanding and often prefer that another person guide them through a procedure. Prior work in automating procedural guidance concentrates either on how to communicate instructions or on how to reason about procedural knowledge to extract states of entities. In this work, we present an end-to-end procedural voice guidance system that automatically generates and presents step-by-step instructions to users through a conversational agent. This system overcomes three significant challenges: generating a contextual knowledge graph of the procedure, ordering necessary information through reasoning on that graph and converting it to procedural steps, and finally constructing a conversational system that delivers the procedure in a way that is easily followed by users. Our approach improves upon the current state-of-the-art in conversational agents, which often hand off the interaction to a web search. We demonstrate that our system can be utilized for end-user guidance, and that a contextual commonsense inference system can be used for procedural knowledge graph generation and ultimately procedural step generation. We also show that reasoning for procedural step generation is essential for the task. Lastly, we show that combining our knowledge-driven system (both its steps and its contextual commonsense assertions) with a large language model (LLM) provides more accurate and reliable procedural guidance in tasks that the LLM may have trouble recalling or that were created after its training.
This work opens up paths to perform contextual graph-based reasoning for story-based applications and helps inform the design of future conversational agents within the domain of procedural guidance.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151990</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacing with Dreams: Novel Technologies and Protocols for Targeted Dream Incubation</title>
<link>https://hdl.handle.net/1721.1/151983</link>
<description>Interfacing with Dreams: Novel Technologies and Protocols for Targeted Dream Incubation
Haar Horowitz, Adam Jedidiah
Scientific research into relationships between sleep physiology and waking cognition has progressed dramatically in the past few decades, but research on the basic science, function, and health consequences of dream phenomenology has not. This dissertation describes research into novel devices and protocols for improving dreamer control of dreams: their content and related function. Set within historical and cultural contexts for the import of dreaming, this thesis seeks to reorient science and everyday health practices toward a renewed respect for the contributions of the dreaming mind to overall wellbeing and cognition.&#13;
&#13;
This thesis research emphasizes the use of a scientifically reliable dream incubation protocol (Targeted Dream Incubation: TDI) to influence dream content across multiple sleep stages, as well as reviewing historical context of dream science and cultural practices. Along with a wonderful team, I have developed a wearable electronic system (Dormio) and associated protocol (TDI) that cause experimental subjects to dream specifically and reliably of a chosen theme, using stimuli presented in pre-sleep wake and N1 sleep in combination with serial awakenings. The experimental work of this thesis centers on four experiments utilizing TDI. Our first experiment demonstrates that TDI does, indeed, induce cue-related N1 dreaming and can be used to augment creativity across a range of tasks related to incubated dream themes. Still further analysis shows that our TDI protocol can be used to augment creative self-efficacy, the belief that one has the ability to produce creative outcomes. Our second experiment shows that TDI can be used to influence daydreams as well as night dreams, and enables controlled comparisons of these brain states. This experiment employs and validates a non-contact version of TDI, in which dream incubation is effective simply via timed audio cues on a Dormio web interface without any wearable sleep staging device. Pushing beyond bench science to clinical practice, a third experiment in collaboration with PTSD-focused psychiatrists examines the capacity for TDI to influence subjects’ levels of self-efficacy regarding nightmares and dreaming (the belief one can control one’s dream content, a key predictor of successful nightmare therapies). We show TDI can significantly increase self-efficacy, reducing feelings of helplessness and nightmare-related complaints. Finally, a fourth pilot experiment extends our dream incubation protocols into the REM state, opening up avenues for influencing novel mnemonic and affective REM dream-related functions.&#13;
&#13;
The results demonstrate the potential bench science and clinical relevance of our suite of dream incubation protocols and technologies. We identify new opportunities for interfacing with human cycles of memory, mind-wandering, emotional adaptation and creative cognition across the full 24 hour spectrum of thought. Our dream incubation system has spawned a series of experiments, both scientific and artistic, and been an impetus for the first conference and first collection of scientific papers on the new field of Dream Engineering. Beyond describing the creation and validation of dream incubation tools, this thesis explores applications and implications of incubating dreams, maps out methods of community building that could bring pluralistic perspectives into dream research, and extends our published writings on the ethics of this research in order to outline an appropriate future for this emerging field.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151983</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming Diffusion Limitations in Encapsulated Cell Therapy for Type 1 Diabetes</title>
<link>https://hdl.handle.net/1721.1/151982</link>
<description>Overcoming Diffusion Limitations in Encapsulated Cell Therapy for Type 1 Diabetes
MacIsaac, Corina N.
Type 1 diabetes is a life-threatening condition characterized by the body’s inability to produce sufficient insulin to properly regulate blood glucose levels. Despite advances in glucose monitoring and insulin delivery systems, the pancreatic islet is still functionally superior at regulating blood glucose levels. Pancreatic islet transplantation requires immune suppression, but islet encapsulation would eliminate this requirement. A hydrogel encapsulation barrier, made from alginate, physically shields transplanted cells from rejection by the immune system. However, encapsulated islets rely completely on diffusion for nutrient delivery, which can be limited by the encapsulation material, the naturally large size of pancreatic islets, and the fibrosis around the capsules from the foreign body response.&#13;
&#13;
In this thesis, we address these diffusion limitations to improve the long-term survival and function of encapsulated islets. First, we explore the use of peptide-modified alginates to improve the function of diffusion-advantageous dissociated islets. Next, we analyze the potential benefit of increasing vascularization at the surface of the alginate capsule. Although the islet must remain avascular to be protected from the immune system, the diffusion analysis illustrates that vascularization at the capsule surface could increase oxygen delivery. This analysis motivated the engineering of an endothelial cord construct that promotes vascularization around the implanted capsules. We demonstrate that these engineered vascularized constructs result in a significant improvement in curing diabetes in mice. Altogether, this work addresses some of the diffusion limitations in cell encapsulation systems and results in the increased survival and function of encapsulated islets in vivo.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151982</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Environmental and Economic Systems Analysis of Land Use Decisions in the Massachusetts Cranberry Industry</title>
<link>https://hdl.handle.net/1721.1/151977</link>
<description>An Environmental and Economic Systems Analysis of Land Use Decisions in the Massachusetts Cranberry Industry
Jaffe, Caroline Adair
This dissertation presents an analysis of the environmental and economic impacts of land use decisions in the Massachusetts (MA) cranberry industry. Cranberry farming is culturally and economically important in Southeastern MA but faces ongoing, compounding challenges that make it increasingly difficult for farms to stay profitable: heightened competition from Midwestern farms using modern farming techniques, fluctuating cranberry prices, an aging farmer population, and climate change. &#13;
&#13;
These factors have led many farmers to consider new options for their farmland including undergoing farm renovations, selling their land to developers, or partnering with conservation organizations to restore their farmland to its native wetland state. Given the scale of the cranberry industry and the amount of ecologically valuable yet vulnerable land at stake, farmers, local governments, and environmental advocacy groups alike need higher quality information and tools to inform land use decisions.&#13;
&#13;
Building on the science of cranberry bog restoration and incorporating perspectives from across the cranberry industry, this thesis applies ecosystem service modeling to geospatial data to quantify environmental outcomes in this decision space and make an economic argument for restoration. In the first section of the thesis, I conduct stakeholder interviews and a literature review to identify socioeconomic contextual issues and industry stakeholder objectives. Next, guided by the Environment-Vulnerability-Decision-Technology framework, I use open-source ecosystem service models applied to public satellite imagery to model and analyze the environmental and economic impacts of different land use scenarios and identify priority restoration areas. Finally, I present and evaluate the results of this modeling work in a web-based decision-support tool that allows stakeholders to interact with and explore different land use outcomes.&#13;
&#13;
Integrating tools from diverse disciplines, this research presents novel, spatially-explicit analysis on the impacts of land use decisions in the MA cranberry region and identifies areas where restoration could generate environmental and economic synergies. With this work, I aim to deliver practical data tools that will incentivize sustainable decision-making, address knowledge gaps in the MA cranberry industry, and contribute to broader discussions around natural climate solutions and agricultural land retirement, two topics of urgent and relevant interest in the fight against climate change.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151977</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>CRISPR Biosensors for Resource-limited Nucleic Acid Detection</title>
<link>https://hdl.handle.net/1721.1/151976</link>
<description>CRISPR Biosensors for Resource-limited Nucleic Acid Detection
Najjar, Deborah Anne
To achieve a healthier relationship with our environment and each other, we must continue to improve our ability to rapidly and sensitively monitor pathogens both inside and outside the body. Unfortunately, biological sensing has lagged behind electronic and chemical sensors in both cost and accessibility, primarily due to the need for specialized equipment, sterile workspaces, and sensitive reagents to operate biosensors. The COVID-19 pandemic has only emphasized the need for more sensitive, rapid, and decentralized biosensing solutions that can provide in-the-moment data for personal and public health-related decision making. Recent advances in CRISPR-based biosensors have allowed for a new class of diagnostics with sequence-specific nucleic acid detection capabilities that can provide a rapid response without the need for traditional laboratory infrastructure.&#13;
&#13;
The research presented in this dissertation aims to further characterize and expand the applicability of CRISPR-based biosensor systems for resource-limited contexts and non-specialist users. Contributions include a minimally instrumented implementation of a CRISPR-based SHERLOCK assay for rapid and decentralized point-of-care detection of SARS-CoV-2 RNA and variants, a microfluidic platform for SARS-CoV-2 antibody and CRISPR-based RNA detection through multiplexed electrochemical sensing, and a field-deployable magnetic bead-based waterborne pathogen concentration and CRISPR-based detection system for environmental monitoring. This work also examines how local and indigenous knowledge figures into the conceptualization, collection, and utilization of environmental data within local monitoring programs and considers how novel biosensing tools could generate data at the appropriate resolution for community monitoring needs.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151976</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Discovery of Hidden Cues in Photographs</title>
<link>https://hdl.handle.net/1721.1/151975</link>
<description>Computational Discovery of Hidden Cues in Photographs
Swedish, Tristan
Images of everyday scenes often contain hidden information that can be extracted to localize objects outside the view of the camera and to see around corners. For example, we show that it is possible to look at the shadows cast by an object on a table, such as a teapot, and reconstruct an image of the surrounding room. We describe how to identify and make use of these hidden cues, such as shadows, reflections, and other subtle changes in an image caused by the interaction of light with objects in a scene that are not in the direct line of sight. We use the term computational discovery to describe techniques that can be used to uncover these cues and reveal hidden information.&#13;
&#13;
Despite incredible advances in computer vision in recent years, cameras are limited to a single viewpoint of a scene, requiring invasive multi-camera setups or active imaging modalities to solve many perception tasks today. Prior work has identified hidden cues that are present in photographs of certain environments, but these methods often require human insight to identify cues, and extensive calibration to make use of them. In order to address the limitations found in prior work, we propose an end-to-end machine learning framework to identify hidden cues. More generally, we show that object localization is approximately equivalent to localizing a point light source, and describe how this insight can be used to identify situations when object localization is possible. Furthermore, we show that physically-based "inverse rendering" can be used to estimate how light travels within a scene, turning objects, like coffee cups or picture frames, into "object cameras". Physical models are quite fragile to small errors in estimated scene parameters. As such, we suggest reconstruction methods that make use of the uncertainty in scene parameters to improve robustness.&#13;
&#13;
The thesis suggests a number of other interesting ways hidden cues may be used in combination with imaging systems. This work could inspire future cameras that incorporate the environment itself as part of the imaging system, blurring the line between observer and subject.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151975</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale Modeling of the Mechanical Behavior of Clay</title>
<link>https://hdl.handle.net/1721.1/151969</link>
<description>Multiscale Modeling of the Mechanical Behavior of Clay
Zhu, Hejian
The goal of this thesis is to develop a better understanding of the macroscopic engineering properties of clay through a bottom-up modeling approach, involving simulations at three length scales. At the nanoscale, molecular dynamics simulations were carried out to quantify the Potential of Mean Force (PMF) for water-mediated interactions between pairs of primary clay particles using a free energy perturbation method. We compare results for two commonly occurring clay minerals, illite and Na-smectite, both with 2:1 sheet structures, and nanotubular imogolite (1:1). Illite particles exhibit a well-defined potential well at 11 Å separation, enabling particle aggregation in face-face configurations, while Na-smectite (with lower surface charge density) exhibits net repulsion for separations less than 16 Å. In both cases, the free energy is affected by exclusion of counterions in the interlayer space, and is characterized by an oscillatory component of free energy at different solvation states. Imogolite tubes can also aggregate when counterions are distributed within the hollow tube structures. We then simulate mesoscale aggregation by coarse-graining primary particles, equilibrating monodisperse particle assemblies using NPT simulations. The illite particles are represented as single-site ellipsoidal particles using the Gay-Berne potential to approximate the PMF results. The equilibrated assemblies are characterized by a lognormal distribution of particles that aggregate in well-aligned face-face configurations with mean stack size &#119898; = 3 ∼ 7. We study the impact of confining pressure and pressure history on the particle arrangement. Higher confining pressure results in larger stack sizes, and in an increase of the overall level of preferred particle orientation.
We demonstrate that mesoscale assemblies show compression properties (in terms of the compression indices in the &#119890; − ln &#119901; and ln &#119890; − ln &#119901; spaces) similar to those observed macroscopically in experimental tests, with small elastic recovery of strains during unloading and reloading. We also study the quasistatic stress-strain response of the NPT-equilibrated mesoscale systems through a sequence of strain-controlled NVT relaxations. The results exhibit non-linear, inelastic, and hysteretic behaviors that are also observed in macroscale experimental results. Elastic properties estimated from the stress-strain data also exhibit trends similar to published experimental studies on pure clay minerals. The stiffness properties are consistent with power-law functions (exponent &#119899; = 0.2 − 0.6) of confining pressure. We study the evolution of mesostructures by analyzing the geometric parameters (including the stack size distribution, fabric tensors, interstack pair correlation, etc.) in order to establish the relation between mesoscale and macroscopic behaviors. The potential energy is sub-divided into intra- and four inter-stack components based on the components of the interstack pair correlation functions. We develop an analytical model to predict the elastic properties of mesoscale particle assemblies. The model contains a strain energy formulation for specific particle configurations based on the five geometric parameters describing the mesostructures of the assemblies, and a perturbation formulation that specifies changes of mesostructure under small affine transformations. We obtain good agreement between the analytical solutions and numerical estimates. Both indicate that orthotropic symmetry provides a reasonable representation of the mesoscale assemblies.
The analytical model provides a basis to characterize the constitutive behavior of clay particle assemblies from a multiscale perspective, and a general framework that has potential for broader application to other engineering materials.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151969</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extrusion Printing of Carbon Nanotube Inks, from Rheology to Electronics</title>
<link>https://hdl.handle.net/1721.1/151943</link>
<description>Extrusion Printing of Carbon Nanotube Inks, from Rheology to Electronics
Owens, Crystal E.
Printed electronics rely on the deposition of conductive liquid-based inks into continuous lines and films, typically on polymeric substrates. Among candidate conductive fillers for use in electronic inks, carbon nanotubes (CNTs) have high conductivity, low density, processability at ambient temperatures, and intrinsic mechanical flexibility, showing their potential to serve as a material of choice in the manufacturing of electronics. However, printed CNT structures have been limited to date in electrical conductivity by manufacturing constraints, typically necessitating nonconductive modifiers to render ink suitable for extrusion, as well as by imperfect CNT quality and low concentration. The goal of this thesis is to surpass the limitations of current manufacturing processes using CNTs in ad-hoc mixtures, and instead to explore printing and electronic properties in relation to CNT-based solution composition and rheology in order to lay the framework for intelligent, fit-for-purpose ink design. In Part One, a short overview of CNT ink rheology is presented, particularly focusing on elastoviscoplastic yield stress behavior, observing scaling laws for flow behavior, and denoting rheometric signatures of ink phase, solution quality, and printability. With further attention to rheometry, a modified vane tool is introduced that has an optimized fractal cross section to improve measurements of this class of slip-prone yield stress fluids.&#13;
 &#13;
In Part Two, printing methods and ink designs are developed in coordination to realize three demonstrations of CNT artifact production in 2D and 3D shapes, with the objective in all cases of maximizing electrical performance while attaining “good enough” geometric resolution. By using an aqueous CNT ink with moderate yield stress (2 Pa) for feature fidelity and considering wetting interactions between the ink and a substrate, the printing of thin CNT lines onto paper and polymer substrates is achieved to create flexible electronics, exhibiting conductivity up to 10 kS/m and specific conductivity tailorable from 0.004 to 140 S.m^2/kg. As a demonstration of the process, printed lines serve as interconnects to power embedded LEDs while flexing; as contact sensors; and as interdigitated capacitors for liquid imbibition. Next, taking inspiration from wet fiber-spinning processes, a method of extruding CNT inks into a coagulating liquid bath of non-solvent is introduced, generating rapidly solidifying, lightweight fibers with specific conductivity up to 7,000 S.m^2/kg, comparable with copper (6,600 S.m^2/kg), and conductivity up to 200 kS/m. Harnessing a fluid mechanical coiling instability observed during immersed printing, extensible CNT coils for strain sensing are produced. Third, a family of inks is created with yield stress in the range of 500 Pa and CNT concentrations around 15% to print arrays of cold cathode field electron emitters with freestanding miniature conical shapes having sub-100-micrometer tip diameters onto addressable gridpoints of a PCB. By modifying the ink further to increase extensional viscosity, individual 100-micrometer-scale freestanding cylinders are fabricated that surpass state-of-the-art CNT field emission performance.&#13;
 &#13;
Finally, in Part Three, to overcome fundamental limitations of the conductivity of CNT-based macrostructures, a method is developed to enhance the conductivity of existing CNT structures using copper electrodeposition. By understanding the role of CNT wettability for homogeneous nucleation of metal, CNT-copper composites are formed with final conductivity up to 2,000 kS/m for a lightweight composite with density less than 0.8 g/cm^3, yielding a specific conductivity of 2,500 S.m^2/kg. The contributions presented in this thesis provide the means to develop new electronics using CNT-based conductors, and guidelines drawn from the properties of yield stress fluids apply more broadly to the design of solution-processable conductive inks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151943</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimization Approach to Certified Manipulation</title>
<link>https://hdl.handle.net/1721.1/151936</link>
<description>An Optimization Approach to Certified Manipulation
Aceituno, Bernardo
The goal of this thesis is to explore the problem of contact-rich robotic manipulation from an optimization perspective. We plan to study the interplay between contact mechanics, geometry, and machine learning to synthesize manipulation plans with varying theoretical properties. More specifically, we propose a quasi-dynamic mechanics model for contact-trajectory optimization and apply it to solve long-horizon manipulation problems in conjunction with randomized planning. We also discuss a machine learning pipeline to solve this problem from video demonstrations, leveraging novel tools from differentiable optimization and learning. Finally, we aim to explore the issue of certification for planar manipulation tasks in the frictionless plane. We propose a theory of certification that enables us to generate long-horizon manipulation plans that are robust to bounded pose uncertainty. The desired outcome of these techniques is to validate them over a wide range of standard manipulation tasks in 2D environments. Our current results demonstrate the ability of model-based approaches at synthesizing high-quality manipulation plans with varying properties, such as optimality, convergence, robustness, and computation speed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151936</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design optimization of two-way spanning concrete systems for low-carbon, context-informed construction</title>
<link>https://hdl.handle.net/1721.1/151935</link>
<description>Design optimization of two-way spanning concrete systems for low-carbon, context-informed construction
Hartwell, Ashley J.
The built environment is one key sector that can be readily targeted for sustainable design intervention, as it accounts for approximately 40% of global CO₂ emissions. Within these emissions, the embodied carbon of construction materials is substantial. Two broad strategies to reduce embodied carbon are materials substitution and geometric optimization. This dissertation focuses on the latter, presenting strategies to reduce embodied carbon in floor systems, which are generally characterized by inefficiently shaped sections, typically designed with little consideration for carbon impacts, and optimized for conditions that favor western construction economics. &#13;
&#13;
This dissertation addresses these challenges in several ways. The first is by employing structural analysis and design space exploration techniques to rapidly locate feasible concrete slab designs, for flat and waffle typologies, that outperform code-prescribed rules of thumb with respect to carbon, mass, and material cost. Second, this work integrates first-principles structural engineering and building physics knowledge into multi-objective optimization workflows to formalize the study of filler slabs, a lost-formwork, two-way spanning concrete typology with the potential to reduce both the embodied carbon and energy use of a building. Finally, this work considers context-informed fabrication of waffle and filler slabs, evaluating the tradeoffs between mass customization and manufacturability in these systems with emerging and scalable digital fabrication techniques for concrete construction.&#13;
&#13;
Research outcomes of this thesis include generalized knowledge about how to achieve carbon efficiency in two-way spanning floor systems and a demonstration of how computational tools can be leveraged to discover high-performing design typologies. Results indicate that, with accessible changes in construction practices and utilization of computation, savings as large as 35% of embodied carbon for a residential building of average span can be achieved with minimal additional capital expenditure and optimally designed typologies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151935</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of the Koopman Operator: Novel Methods and Formulations for Lifted Linear Models</title>
<link>https://hdl.handle.net/1721.1/151934</link>
<description>Applications of the Koopman Operator: Novel Methods and Formulations for Lifted Linear Models
Ng, Jerry
The analysis and control of nonlinear dynamic systems is an active research field due to the ubiquity of nonlinear systems in the physical world. However, handling these systems is significantly more difficult than handling their linear counterparts, for which a host of methods and techniques are available. It has been shown that, through the use of the Koopman Operator, these nonlinear systems can be lifted to a higher-dimensional state space in which the system's dynamics behave linearly.&#13;
&#13;
In this thesis, we explore the use of existing methods for constructing the Koopman Operator on unexplored classes of nonlinear systems, such as systems with segmented dynamics and exogenous inputs. Unlike when modeling these systems with a hybrid or switched framework, the lifted linear models based on the Koopman Operator allow for easy application of model predictive control. &#13;
&#13;
We then discuss methods for constructing the Koopman Operator itself. Specifically, we alleviate the pitfalls of current data-driven construction methods through a data-driven formulation of Direct Encoding, which is based on integration and differs significantly from the state of the art.&#13;
&#13;
Lastly, the use of the Koopman Operator in relation to deep learning is considered. Utilizing the aforementioned data-driven method, we demonstrate improvements to standard applications of Deep Koopman. In addition, we demonstrate a novel training method enabled by Direct Encoding. Through this method, we are able to accurately model the stable subspace of a system containing both stable and unstable subspaces, unlike with standard Deep Koopman methods. We show that the resultant model can be used to estimate the borders between subspaces.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151934</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulation of unknown objects via contact configuration regulation</title>
<link>https://hdl.handle.net/1721.1/151933</link>
<description>Manipulation of unknown objects via contact configuration regulation
Taylor, Orion Thomas
In this thesis, we present an approach to robotic manipulation of unknown objects through the regulation of an object's contact configuration: the location, geometry, and mode of all contacts between the object, robot, and environment. A contact configuration constrains the forces and motions that can be applied to the object. As such, the ability to predict and regulate the contact configuration facilitates dexterous manipulation. With this as our guiding principle, we develop a joint estimation and control framework to reactively manipulate unknown objects in the gravity plane.&#13;
&#13;
We begin by building a model to describe the interactions between a polygonal object, the robot, and the environment.  This is accomplished by deriving the kinematic and wrench constraints associated with the geometric and frictional properties of each contact.&#13;
&#13;
Our estimator generates the wrench constraints, the contact mode/geometry, and the object's shape/pose, using a combination of tactile and (limited) visual feedback. There are two separate modules: the friction estimator, which infers the friction constraints and contact mode from the measured force; and the kinematic estimator, which infers the contact geometry and the object's shape/pose from tactile and visual feedback. &#13;
&#13;
The controller regulates the system's pose along the admissible motion directions, while simultaneously regulating the end-effector contact wrench to maintain the desired contact mode and geometry. The motion and wrench control objectives are balanced through a combination of a high-level controller, which synthesizes the kinematic and wrench constraints; and an impedance layer, which executes the motion. &#13;
&#13;
We implement this estimation and control framework on our manipulation platform, and demonstrate that it allows the robot to reactively execute a wide variety of manipulation tasks. These include basic motions, like reorienting an object, sliding it along the ground, or performing a regrasp; as well as more advanced primitives, like using a wall as a support to reorient an object, or regulating the contact geometry between the object and the ground. Finally, we conduct ablation studies to understand the contributions from visual and tactile feedback in our manipulation framework.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151933</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution spatio-temporal quantification of fish predator-prey interactions over ecosystem scales with multispectral underwater sensing and optimality of human visual perception with natural daylight</title>
<link>https://hdl.handle.net/1721.1/151924</link>
<description>High-resolution spatio-temporal quantification of fish predator-prey interactions over ecosystem scales with multispectral underwater sensing and optimality of human visual perception with natural daylight
Pednekar, Shourav
Marine ecosystems face simultaneous pressures from human activities, ocean industrialization, global warming, and changing habitats. Continuous monitoring of marine biodiversity and ecosystem processes is needed to assess the survivability of individual fish species under such conditions. The increasing use of computer modeling and simulations based on significantly under-sampled data of the marine environment, however, leads to unconstrained and potentially unstable predictions of key processes. To address this issue, we demonstrate a technology enabling synoptic quantification and distinction of multispecies fish population densities over ecosystem scales with continuous spatial and temporal resolution. This enables high-resolution quantification of predator-prey interactions in space and time over ecosystem scales. We present an example of an event in the Barents Sea where a massive predatory cod swarm of approximately 1.9 million individuals attacks a defending, coherently moving, linear capelin prey structure extending over 14 km and containing approximately 23 million individuals. Capelin are a keystone species of the Arctic ecosystem. Cod are their primary predator, but cod populations have collapsed everywhere except the Nordic Seas due to overfishing, causing significant changes in ecosystem balance in those regions. We provide high-resolution spatial density images, finely sampled over time, of cod convergence on capelin prey, estimated capelin consumed, capelin survived, and satiated cod predators, quantifying the detailed spatio-temporal dynamics of predation. From these we estimate that 58% of the entire capelin group was consumed by the swarming cod within 4 hours; the detailed imagery of behavioral shoal structure shows that capelin in the highest-density regions have the highest probabilities of survival.
Other interactions we quantified between predatory juvenile cod and pre-spawning capelin groups indicate that a variety of behavioral mechanisms with varying levels of efficiency are at work for both the predators and prey over the large scales observed here. These observations are made with multispectral ocean acoustic remote sensing, which enables instantaneous imaging of fish populations over thousands of square kilometers with average spatial resolution on the order of 100 m and temporal resolution of about 1 minute. Wide-area species classification and simultaneous population density estimation of individual species employ sensing frequencies at or near fish swimbladder resonance, where the large differences across fish species are discernible. Such synoptic imaging at areal rates roughly 10^4 to 10^6 times greater than conventional methods may lead to more stable prediction of key ecosystem processes and has broad applications in remotely classifying fish populations, studying ecosystem functions, and assessing species sustainability.&#13;
&#13;
Patterns in light intensity contain vital information for organisms that utilize visual sensory perception for survival in their environments. Psychophysical experiments on visual intensity discrimination with artificial light sources over the past century have shown that the smallest detectable change in light intensity, termed the just-noticeable difference, grows roughly in direct proportion to the stimulating intensity, approximately following Weber’s law of perception. The potential advantages of Weber’s law in the context of sensing and pattern recognition, however, have not been quantified given the natural intensity scintillation of environmental light. Here we find Weber’s law to be a consequence of attaining the theoretical minimum mean-square error possible, the Cramér-Rao lower bound, in resolving the intensity of naturally scintillating light. We first obtain the statistics of environmental light signals, which we find naturally scintillate with a standard deviation proportional to mean intensity. Given our natural scintillating light intensity data, we find log-transformed intensity and Fechnerian-transformed intensity are equivalent to variance-stabilizing-transformed intensity. We then find that intensity resolution following Weber’s law is statistically optimal in pattern recognition by simple matched-filter correlation and maximizes information reception by homeomorphically transforming signal-dependent intensity scintillation to signal-independent Gaussian noise, which can be canceled without loss of signal information. We show that just-noticeable differences in light intensity obtained from psychophysical experiments with artificial light approximately attain the Cramér-Rao lower bound on intensity resolution expected from our observed natural light intensity scintillation. Human intensity resolution is in this manner approximately optimally adapted to the statistical properties of natural light scintillation, with Weber’s law as a consequence.
Along these lines, the same kind of variance-stabilizing transformation is used in the first part of the thesis for acoustic sensing of fish, because the measured acoustic intensity data exhibit scintillation whose statistics are governed by the central limit theorem.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151924</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetics of Heat and Mass Transfer Near the Liquid-Vapor Interface</title>
<link>https://hdl.handle.net/1721.1/151919</link>
<description>Kinetics of Heat and Mass Transfer Near the Liquid-Vapor Interface
Vaartstra, Geoffrey
Evaporation and condensation can be seen in our daily lives but also play a key role in technologies that drive our modern society, such as power generation, distillation, refrigeration, and thermal management for buildings and electronics. Over the past century, great advances have been made in improving the performance of evaporators and condensers; yet the risk of overall system performance being bottlenecked by these components remains. Recently, state-of-the-art materials and micro-nanofabrication techniques have been applied to develop high-performance prototypes to prevent such a bottleneck. These advances are pushing toward the fundamental limit of evaporation/condensation in which the kinetics of heat and mass transport at the liquid-vapor interface become rate-limiting. As we approach this regime, experimental validation of our fundamental understanding of these kinetics ensures the accuracy of computationally-efficient models suitable for engineering design. These efforts will aid further innovation of the thermofluid systems which are critical to our modern society.&#13;
&#13;
A century of research on the kinetics of liquid-vapor phase change has provided a plethora of knowledge on the topic, yet the literature contains many discrepancies in theoretical treatment and conflicting experimental results. In this thesis, we seek to bridge the gap between kinetic theory and practical thermofluid engineering using computational analysis, and then experimentally validate the application of the kinetic model to condensation heat transfer. We first apply theory and a high-accuracy numerical technique to evaluate computationally-efficient models for evaporation/condensation rates. We quantify the accuracy of the Schrage equation—an approximation commonly used to predict heat fluxes in thermofluid engineering—and identify an existing moment-based model that ought to be used instead. Next, we fabricate and test an ultrathin, freestanding, nanoporous membrane designed to achieve high experimental sensitivity to the properties of the liquid-vapor interface. As a supplement to that experiment, we use highly-accurate direct simulation Monte Carlo calculations to validate the dusty-gas model. We demonstrate that this model accurately and efficiently predicts gas transport in our experimental system and state-of-the-art membranes that could be used for high-selectivity membrane separation processes.&#13;
&#13;
Finally, we carefully design an experimental setup to observe high-rate dropwise condensation under a microscope with strict measures to prevent contamination. We achieve unprecedented sensitivity to kinetics near the interface and our results validate the kinetic theory for condensation. Further, these experiments show that the accommodation coefficient of water is at least 0.5 and likely quite close to 1, indicating nearly ideal behavior of the interface.&#13;
&#13;
This thesis advances our fundamental understanding of the kinetics of heat and mass transfer near the liquid-vapor interface and provides guidelines for using models that can ultimately lead to better-performing components in power generation, desalination, and thermal management systems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151919</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gradient-based dimension reduction for Bayesian inverse problems and simulation-based inference</title>
<link>https://hdl.handle.net/1721.1/151914</link>
<description>Gradient-based dimension reduction for Bayesian inverse problems and simulation-based inference
Brennan, Michael Cian
Inference is a pervasive task in science and engineering applications. The Bayesian approach to inference facilitates informed decision making by quantifying uncertainty in parameters and predictions, but can be computationally demanding. This thesis focuses on Bayesian methods for inverse problems governed by partial differential equations and for simulation-based (likelihood-free) inference: in both settings, the high dimensionality of model parameters and/or data can render naïve posterior exploration intractable. We address this challenge by developing gradient-based methods that discover and exploit several notions of low-dimensional structure in inference, and then linking these dimension reduction methods to inference algorithms that employ measure transport. Dimension reduction substantially decreases the computational burden of accurate inference in high-dimensional problems, and can also reveal interpretable structure that provides qualitative insights. Our contributions are grouped into three primary topics, as follows.&#13;
&#13;
Low-dimensional subspaces. First, we propose an iterative framework for solving high-dimensional Bayesian inference problems using transport maps or flows that act only on low-dimensional subspaces. We provide a principled way of identifying such subspaces by minimizing an upper bound on the Kullback–Leibler divergence between the current approximation and the target (posterior) distribution. This approach thus focuses the expressiveness of a transport map along the directions of most significant discrepancy from the target and can be used to greedily build deep compositions of maps, where low-dimensional projections of the parameters are iteratively transformed to match the posterior. We prove weak convergence of the generated sequence of distributions to the posterior and demonstrate the benefits of the framework on an array of challenging high-dimensional inference problems. &#13;
&#13;
Low-rank conditional structure. Second, we explore the notion of low-rank conditional structure: summarizing conditioning variables with low-dimensional projections. We show how such summaries can be derived by minimizing a tractable gradient-based bound on mutual information, and then develop a framework that uses low-rank conditional structure in the posterior distribution, or in the joint distribution of parameters and observations, to improve approximate inference using measure transport. Our approach exploits the link between component functions of a triangular (Knothe–Rosenblatt) transport map and specific marginal-conditional distributions. Rather than approximating the target distribution globally, as in many current methods, we discover low-dimensional structure in each of these marginal-conditional distributions separately and assemble the results into a naturally sparse triangular transport map. We evaluate our approach on two nonlinear Bayesian inverse problems involving elliptic partial differential equations (steady-state Darcy flow and the Helmholtz equation), in an amortized inference setting.&#13;
&#13;
Score-ratio matching. Both of the preceding contributions rely on diagnostic matrices built from evaluations of gradients of the posterior or joint log-density. Our final thrust broadens the applicability of gradient-based dimension reduction to problems where such gradients are not available. We modify score-matching methods to estimate score ratios that enable our gradient-based diagnostic matrices to be computed more effectively. In particular, we propose a tailored score-network parameterization and a regularization method that exploit the presence of the low-dimensional structure we seek. We demonstrate the effectiveness of the proposed method on inference problems related to groundwater modeling and energy market modeling.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151914</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coarse-grained models for prediction, uncertainty quantification, and extreme event statistics of turbulent flows in engineering and geophysical settings using physics-consistent data-driven closures</title>
<link>https://hdl.handle.net/1721.1/151911</link>
<description>Coarse-grained models for prediction, uncertainty quantification, and extreme event statistics of turbulent flows in engineering and geophysical settings using physics-consistent data-driven closures
Charalampopoulos, Alexis-Tzianni
Modeling and analysis of turbulent fluid flows remains one of the most challenging areas of fluid mechanics: integration of the full equations is associated with extreme computational cost, while their simplification inevitably introduces important model errors. In this work we aim to answer three questions: How can we use the governing equations and available datasets to formulate physics-constrained data-driven closures that will provide accurate coarse-grained evolution equations? Can we formulate the corresponding physics-constrained closures for the quantification of uncertainty? Can high-order statistics and statistics of extreme events be computed from data-augmented coarse-scale or reduced-order models? These questions are motivated by real-world problems, such as multiphase fluid flows as well as climate modeling.&#13;
&#13;
To address these questions we adopt a statistical formulation of the governing equations and employ machine-learning ideas to formulate physics-informed closure schemes, non-local in space and time, for turbulent, possibly multiphase, fluid flows found in engineering and geophysical settings. The generic form of the systems we study includes a linear operator, external forcing, and a bilinear, energy-preserving operator. We use the predictions of neural networks to integrate, over a coarse grid, the evolution of turbulent anisotropic fluid flows transporting bubbles that act as passive inertial tracers.&#13;
&#13;
To answer the first question we develop closures for the mean flow field that we complement with physical constraints, which follow from the energy-preserving character of the bilinear operator. Next, we proceed to the second question by formulating the second-order moment equations, for which we also derive data-driven closures complemented with appropriate physical constraints. We utilize recurrent and convolutional layers to capture both temporal and spatially non-local effects. The addition of the physical constraints not only improves the performance of the resulting closures but also stabilizes the coarse-grained equations in cases that are otherwise unstable. The approach is tested both for closing the mean equation and the covariance evolution equation in a second-order statistical framework. The closure schemes for turbulent fluid flows are complemented with a method that aims to predict high-order moments of bubble cluster deformation from second-order statistics. This is achieved with the introduction of a hybrid quadrature method of moments appropriate for finite-dimensional dynamical systems. We demonstrate the resulting closures and assess their generalizability across different Reynolds numbers and flow configurations.&#13;
&#13;
To answer the third question we machine-learn non-intrusive correction operators that take as input imperfect, i.e., coarse-scale, climate model outputs that typically have discrepancies due to low resolution. An important challenge in this case is the chaotic character of the underlying dynamics, which makes machine learning of the correction operator an intractable task. To overcome this obstacle we design a new approach based on nudging, a popular method for data assimilation in geophysical modeling, to create consistent training datasets between the input and the output of the correction operator. We illustrate that the resulting non-intrusive correction operator is able to correct inaccuracies of the coarse-scale model attractor, making it consistent with the target attractor. This allows us to obtain statistics of extreme climate events, such as hurricanes and atmospheric rivers, using as input only inexpensive coarse-scale simulations. The introduced approach thus paves the way for the parsimonious study of climate-change scenarios and the effectiveness of possible measures or policies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151911</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bioinstrumentation and Statistical Methods for Investigating Host-Microbial Interactions</title>
<link>https://hdl.handle.net/1721.1/151909</link>
<description>Bioinstrumentation and Statistical Methods for Investigating Host-Microbial Interactions
Kim, Hyungseok
Metabolic interactions between hosts and their associated microbiota, the latter referred to as the microbiome, contribute to the host phenotype and nutrient cycling within the ecosystem. There is broad diversity in the type and complexity of interactions for a given host and environment. Quantitative and qualitative resolution of the associations between hosts and the microbiome remain a key challenge in modern microbiology.&#13;
&#13;
The universal mechanism of interaction is the diffusive exchange of metabolites. In the first part of this thesis, I propose a microbial co-culture assay (“porous microplate”) that spatially controls diffusion-mediated metabolite exchange. Using the model host alga Phaeodactylum tricornutum, I describe bacterial responses to algal metabolites in the porous microplate. I extend the findings to provide insight into how different bacterial species partition host nutrients.&#13;
&#13;
The host-microbiota relationship requires proximity between the organisms, and it is strengthened by physical attachment. In the second part of this thesis, I utilize a microfluidic electrokinetic platform to characterize bacterial surfaces and their envelope components. The motivation for this is to ascertain the influence of surface charge on physical attachment. The results indicate that bacterial surface charge is correlated with the ability to attach to the algal host and the production of extracellular polymeric substances.&#13;
&#13;
Lastly, I introduce a multivariate analysis technique to visualize microbial community structure. I explain how statistical hypothesis testing can be addressed simultaneously with reducing the data’s dimensionality. I verify the technique’s performance by comparing it to an existing dimensionality reduction method.&#13;
&#13;
Taken together, the combined microfluidic and data analysis approaches developed can help bridge several technological gaps in microbial ecology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151909</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mine to Table: Technology and Policy Strategies for Sustainable Mineral Supply Chains in the Low-Carbon Energy Transition</title>
<link>https://hdl.handle.net/1721.1/151907</link>
<description>Mine to Table: Technology and Policy Strategies for Sustainable Mineral Supply Chains in the Low-Carbon Energy Transition
Ryter, John
With rapidly increasing demand due to the low-carbon energy transition, the metals and mineral extraction industry has seen a flurry of new policies over the last several years seeking to direct or capitalize on new investment. This rapidly increasing demand is coupled with a need for industry decarbonization, prompting availability, price, and environmental concerns simultaneously. To identify mechanisms to address these concerns, this work models material and economic relationships between the mining, refining, manufacturing, and recycling components of mineral supply chains. We consider unit processes and environmental impacts to establish the material processing improvements and policy mechanisms most capable of supplying the low-carbon energy transition while limiting environmental impacts. There are three main lines of inquiry. &#13;
&#13;
First, we assess the environmental benefits of recycling and its reduction of mine production by developing a dynamic, economically-informed simulation model for the copper supply chain. The primary environmental benefit of recycling is its implied reduction of mine production. However, we find that increases in recycling only displace, on average, ~0.5 tons of mine production per ton increase in scrap supply, due to slow mine response rates and interim increases in demand owing to excess commodity supply. We find supply chain evolution pathways maximizing displacement of mine production, such as the inclusion of recyclables on major futures exchanges. However, even in best-case scrap supply scenarios, CO₂e emissions from the copper supply chain increase 25% by 2040 relative to 2018. With simultaneous global adoption of current best practices, 2040 CO₂e emissions 10% below 2018 are possible, though still well short of 2°C emissions targets. &#13;
&#13;
Second, we assess the impacts that regional supply chain variations and disruptions have on future supply chain behavior and emissions. We expand the copper supply chain model to enable regional, alloy-level, and scrap grade-level consumption granularity, investigating each copper supply chain actor’s response to China’s solid waste import ban and the COVID-19 pandemic. We demonstrate that the economic changes associated with China’s solid waste import ban increase primary refining within China, offsetting the policy's intended environmental benefits of decreased copper scrap refining and instead generating a cumulative increase in CO₂-equivalent emissions of up to 13 Mt by 2040. Increasing China’s refined copper imports reverses this trend, decreasing CO₂e emissions in China (up to 180 Mt by 2040) and globally (up to 20 Mt). We test sensitivity to supply chain disruptions using GDP, mining, and refining shocks associated with the COVID-19 pandemic, showing that the results translate to disruption effects, and that slow mine response rates make mines more resilient to perturbations than recycling streams. &#13;
&#13;
Finally, we attempt to quantify the market-based and technological mechanisms available for mitigating supply and price risks, particularly for byproduct/coproduct commodities. Here we introduce GLOMBO (GLObal Materials modeling using Bayesian Optimization), a generalized economics-informed material flow model that captures the dynamics of key mineral commodities with minimal training data. Building upon established material flow and economic modeling techniques, we apply Bayesian optimization to fit global historical demand, supply, and price. We develop individual, material-specific, economics-informed MFA models for commodities covering the vast majority of annual mineral consumption. These commodities also act as host metals to the bulk of the minor metals, and we develop an ancillary model to assess their byproduct commodities. We then use these models to demonstrate methods for supply risk assessment that rely wholly on empirical data, and investigate the key drivers of price risk. Given the volatility of the resulting byproduct model, we conclude with a mine-level case study on the mechanisms and rates at which byproduct commodity prices impact mine operation, using net present value optimization methods.
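As a rough illustration of the fitting approach described above, the sketch below calibrates a toy one-commodity supply-price model to a synthetic historical series. Random search stands in for the Bayesian optimization used in GLOMBO, and the model form, parameter names (`e`, `s`), and all numbers are hypothetical, not taken from the thesis.

```python
import random

# Hypothetical one-commodity toy: price responds to the demand/supply
# imbalance with elasticity e, and mine supply chases price with lag-one
# sensitivity s. Model form and parameter names are NOT from GLOMBO.
def simulate(e, s, demand, p0=100.0, q0=100.0):
    prices, supply = [p0], [q0]
    for t in range(1, len(demand)):
        q = supply[-1] + s * (prices[-1] - p0)   # supply reacts to last price
        p = prices[-1] * (demand[t] / q) ** e    # price reacts to imbalance
        supply.append(q)
        prices.append(p)
    return prices

def sse(params, demand, observed):
    e, s = params
    return sum((m - o) ** 2 for m, o in zip(simulate(e, s, demand), observed))

random.seed(0)
demand = [100.0 * 1.02 ** t for t in range(10)]   # 2%/yr demand growth
observed = simulate(0.5, 0.8, demand)             # synthetic "history"

# Random search over the parameter box stands in for Bayesian optimization.
best = min(((random.uniform(0, 1), random.uniform(0, 2)) for _ in range(5000)),
           key=lambda p: sse(p, demand, observed))
print(round(best[0], 2), round(best[1], 2))       # best fit (true: 0.5, 0.8)
```

A real Bayesian optimizer would replace the random sampler with a surrogate model and an acquisition rule, but the calibration target — minimizing error against historical series — is the same.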
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151907</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Dynamic Decision Making: Algorithms, Structures, and Complexity Analysis</title>
<link>https://hdl.handle.net/1721.1/151899</link>
<description>Data-Driven Dynamic Decision Making: Algorithms, Structures, and Complexity Analysis
Xu, Yunzong
This thesis aims to advance the theory and practice of data-driven dynamic decision making, by synergizing ideas from machine learning and operations research. Throughout this thesis, we focus on three aspects: (i) developing new, practical algorithms that systematically empower data-driven dynamic decision making, (ii) identifying and utilizing key problem structures that lead to statistical and computational efficiency, and (iii) contributing to a general understanding of the statistical and computational complexity of data-driven dynamic decision making, which parallels our understanding of supervised machine learning and also accounts for the crucial roles of model structures and constraints for decision making.&#13;
&#13;
Specifically, the thesis consists of three parts.&#13;
&#13;
Part I of this thesis develops methodologies that automatically translate advances in supervised learning into effective dynamic decision making. Focusing on contextual bandits, a core class of online decision-making problems, we present the first optimal and efficient reduction from contextual bandits to offline regression. A remarkable consequence of our results is that advances in offline regression immediately translate to contextual bandits, statistically and computationally. We illustrate the advantages of our results through new guarantees in complex operational environments and experiments on real-world datasets. We also extend our results to more challenging setups, including reinforcement learning in large state spaces. Beyond the positive results, we establish new fundamental limits for general, unstructured reinforcement learning, emphasizing the importance of problem structures in reinforcement learning.&#13;
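To give a flavor of how such a reduction can work, the sketch below implements inverse-gap weighting, a standard rule in this line of work for converting a regression oracle's reward estimates into an exploration distribution over actions. The estimates, the number of actions, and the parameter `gamma` are illustrative; this is a generic textbook scheme, not the thesis's exact algorithm.

```python
def inverse_gap_weighting(predicted_rewards, gamma):
    """Turn a regression oracle's reward estimates into an action
    distribution: near-greedy actions get high probability, and clearly
    suboptimal actions get probability shrinking with their estimated gap."""
    K = len(predicted_rewards)
    best = max(range(K), key=predicted_rewards.__getitem__)
    probs = [0.0] * K
    for a in range(K):
        if a != best:
            gap = predicted_rewards[best] - predicted_rewards[a]
            probs[a] = 1.0 / (K + gamma * gap)
    probs[best] = 1.0 - sum(probs)   # greedy action takes the remaining mass
    return probs

# Example: 4 actions; a larger gamma concentrates the distribution on the
# empirically best action while never fully abandoning exploration.
est = [0.1, 0.5, 0.4, 0.2]
for gamma in (1.0, 100.0):
    print([round(x, 3) for x in inverse_gap_weighting(est, gamma)])
```

The key property is that exploration of each action scales inversely with its estimated gap to the best action, which is what lets offline regression guarantees transfer to the bandit setting.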
&#13;
Part II of this thesis develops a framework that incorporates offline data into online decision making, motivated by practical challenges in business and operations. In the context of dynamic pricing, the framework allows us to rigorously characterize the value of data and the synergy between online and offline learning in data-driven decision making. The theory provides important insights for practice.&#13;
&#13;
Part III of this thesis studies classical online decision-making problems in new settings where the decision maker may face a variety of long-term constraints. Such constraints are motivated by societal and operational considerations, and may limit the decision maker’s ability to switch between actions, consume resources, or query accumulated data. We characterize the statistical and computational consequences brought by such long-term constraints, i.e., how the complexity of the problem changes with respect to different levels of constraints. The results provide precise characterizations on various intriguing trade-offs in data-driven dynamic decision making.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151899</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biophysical Investigations of the Cytosolic Iron-Sulfur Cluster Assembly Pathway Late Acting Proteins</title>
<link>https://hdl.handle.net/1721.1/151895</link>
<description>Biophysical Investigations of the Cytosolic Iron-Sulfur Cluster Assembly Pathway Late Acting Proteins
Vasquez, Sheena L. S.
The cytosolic iron-sulfur ([Fe-S]) cluster assembly (CIA) pathway matures cytosolic and nuclear iron-sulfur-dependent proteins found in eukaryotic cells. [Fe-S] clusters are involved in life-sustaining processes such as DNA replication and repair, transcription, translation, and nucleotide and amino acid biosynthesis. The Drennan lab is interested not only in understanding proteins that utilize these clusters to carry out their functions, but also in understanding how CIA proteins come together to assemble and deliver [Fe-S] clusters to target proteins within the cell. Upstream proteins in the CIA pathway, from fungi such as Saccharomyces cerevisiae and Chaetomium thermophilum to humans, are postulated to deliver a mature [4Fe-4S] cluster to Nar1, an intermediate protein in the pathway. Nar1 is then proposed to deliver the cluster to Met18, Cia2, and Cia1. Met18, Cia2, and Cia1 are referred to as ‘targeting proteins’ in the CIA pathway in eukaryotes. They form a protein complex (the CIA targeting complex, or CTC) that appears to be responsible for the delivery and installation of iron-sulfur clusters into apo-target proteins, such as the [4Fe-4S] cluster protein Leu1. However, how the CTC recognizes Nar1 and Leu1 for receiving and delivering a cluster is unclear. In addition, regulation of Met18 in the absence of CIA proteins has not been explored. In this thesis, we explore these questions by using biophysical techniques to study the Cia1-Cia2-Nar1 complex, Met18, and the Cia1-Cia2-Leu1 9-mer peptide complex from C. thermophilum and S. cerevisiae.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151895</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Investigation of Subcooled Flow Boiling and CHF at Prototypical Pressures of Light Water Reactors</title>
<link>https://hdl.handle.net/1721.1/151891</link>
<description>Experimental Investigation of Subcooled Flow Boiling and CHF at Prototypical Pressures of Light Water Reactors
Kossolapov, Artyom
Boiling crisis is an important phenomenon that affects the performance and safety of pressurized water reactors (PWRs). Accurate predictions of the boiling crisis are difficult to make because they require a clear understanding of the physical mechanisms leading to the crisis, combined with accurate models of nucleate flow boiling heat transfer. High-resolution, in situ experiments performed at prototypical pressures of light water reactors (LWRs) are needed to elucidate the phenomenon of the boiling crisis and to inform the development of boiling models suitable for LWRs. In the present thesis, we developed a high-pressure flow boiling experiment together with a new phase detection technique, allowing us to investigate high-pressure flow boiling with high spatial and temporal resolution. We explored aspects of bubble departure, microlayer and triple contact line evaporation, boiling parameters (i.e., nucleation site density, bubble departure frequency, wait and growth times), heat flux partitioning, and departure from nucleate boiling. The results reveal that in high-pressure flow boiling, bubbles depart by sliding immediately after nucleation, with no adhesion force holding a bubble in place. Drag, buoyancy, and inertia were identified as the only forces governing the bubble departure process. We demonstrated that the bubble microlayer disappears entirely for pressures above 3 bar, its disappearance explained by the decrease in bubble growth rate at higher pressures. The depletion of the microlayer was also analyzed, revealing that both thermal and hydrodynamic effects could be responsible for the depletion process. The analysis of triple contact line evaporation showed that it cannot account for more than 20% of the total heat flux removed from the boiling surface. Temporal boiling parameters (i.e., bubble departure frequency, wait and growth times) vary considerably between nucleation sites and nucleation events.
The distribution of bubble departure frequency is particularly intriguing: it not only follows a power law, but also reveals an abundance of nucleation sites with extremely low departure frequencies (i.e., on the order of a few hertz). The analysis of the three major heat flux partitioning mechanisms (i.e., evaporation, forced convection, and transient conduction) reveals that these mechanisms can account for only 40% to 60% of the total heat flux at the boiling surface, suggesting that either the modeling of these mechanisms does not accurately describe the realistic boiling scenario, or another heat transfer mechanism should be introduced to account for the missing heat flux.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151891</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of CDPK1 targets identifies a trafficking adaptor complex that regulates microneme exocytosis in Toxoplasma</title>
<link>https://hdl.handle.net/1721.1/151884</link>
<description>Analysis of CDPK1 targets identifies a trafficking adaptor complex that regulates microneme exocytosis in Toxoplasma
Chan, Alex Wai
Apicomplexan parasites use Ca²⁺-regulated exocytosis to secrete essential virulence factors from specialized organelles called micronemes. Ca²⁺-dependent protein kinases (CDPKs) are required for microneme exocytosis; however, the molecular events that regulate trafficking and fusion of micronemes with the plasma membrane remain unresolved. &#13;
&#13;
In this thesis, I describe the discovery and characterization of a regulator of microneme exocytosis in Toxoplasma gondii. In the first chapter, I introduce T. gondii as a model apicomplexan for studying the motile stages of its asexual cycle. In the second chapter, I discuss combining sub-minute-resolution phosphoproteomics and bio-orthogonal labeling of kinase substrates in T. gondii to identify 163 proteins phosphorylated in a CDPK1-dependent manner. In addition to known regulators of secretion, I identify uncharacterized targets with predicted functions across signaling, gene expression, trafficking, metabolism, and ion homeostasis. In the third chapter, I describe the functional characterization of a target of CDPK1, the putative activating adaptor HOOK. In other eukaryotes, HOOK homologs form the FHF complex with FTS and FHIP to activate dynein-mediated trafficking of endosomes along microtubules. I show that the FHF complex is partially conserved in T. gondii, consisting of HOOK, an FTS homolog, and two parasite-specific proteins (TGGT1_306920 and TGGT1_316650). CDPK1 kinase activity and HOOK are required for the rapid apical trafficking of micronemes as parasites initiate motility. Moreover, parasites lacking HOOK or FTS display impaired secretion of microneme proteins, leading to a block in the invasion of host cells. Taken together, our work provides a comprehensive catalog of CDPK1 targets and reveals how vesicular trafficking has been tuned to support a parasitic lifestyle.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151884</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capturing Tacit Knowledge of Experts through the Study of Visual Attention: Applications for Human Expertise and AI</title>
<link>https://hdl.handle.net/1721.1/151880</link>
<description>Capturing Tacit Knowledge of Experts through the Study of Visual Attention: Applications for Human Expertise and AI
Armengol Urpí, Àlex
Tacit or implicit knowledge is know-how that humans cannot convey explicitly; it is difficult to verbalize and hence challenging to transfer to others in words. Tacit knowledge is usually gained from experience and internalized unconsciously through implicit learning. Since tacit knowledge is not consciously accessible, it is commonly seen as a “mysterious” part of expertise that can only be transferred from one person to another through close interaction, coaching, mentoring, and observation of expert behavior. Examples of daily activities based on tacit knowledge include riding a bike, recognizing a face, writing a persuasive thesis, or speaking a native language. This thesis explores new methods for tacit knowledge extraction using visual attention-based human-computer interfaces.&#13;
&#13;
Earlier studies suggest that eye gaze is particularly suited to studying the unconscious component of expertise. For this reason, this research focuses on developing new interfaces that track the visual attention of experts while they perform tasks in which they excel. This thesis is divided into two main sections. In the first part, we develop novel human-computer interfaces that track visual attention and enhance existing interfaces based on gaze tracking alone. We do this by exploiting brain activity in addition to eye gaze. First, we leverage neural mechanisms of visual attention to improve the accuracy of a commercial eye tracker through the analysis of electroencephalography (EEG) waves. Our hybrid system combines EEG and eye-tracking modalities to overcome the accuracy limitations of the gaze tracker alone. We integrate EEG and gaze data to efficiently exploit their complementary strengths by driving a Bayesian probabilistic decoder that estimates the region in the visual field gazed at by the user. This demonstrates that the intrinsic accuracy limitations of camera-based eye trackers can be corrected with the integration of EEG data. Then, we show why visual attention and gaze can be decoupled by developing an interface that tracks peripheral attention using EEG waves. Our novel approach can detect peripheral (or covert) spatial attention by using single-frequency phase-coded stimuli that elicit the corresponding steady-state visually evoked potentials (SSVEPs). This opens opportunities for attention-tracking applications with a greatly increased number of targets in the visual field.&#13;
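The intuition behind fusing two noisy modalities can be sketched in its simplest Bayesian form: two independent Gaussian estimates of the same quantity are combined by precision weighting. The noise variances and positions below are purely illustrative, and the actual decoder in this work estimates a region in the visual field rather than a single coordinate.

```python
# Minimal sketch of the idea behind a hybrid decoder: fuse two noisy
# estimates of gaze position (camera-based tracker and an EEG-derived
# estimate) by precision weighting — the Gaussian special case of a
# Bayesian probabilistic decoder. All numbers are illustrative only.
def fuse(mu_gaze, var_gaze, mu_eeg, var_eeg):
    """Posterior mean and variance for two independent Gaussian measurements."""
    precision = 1.0 / var_gaze + 1.0 / var_eeg
    mu = (mu_gaze / var_gaze + mu_eeg / var_eeg) / precision
    return mu, 1.0 / precision

# Tracker reports 2.0 deg with variance 1.0; the EEG-based estimate reports
# 1.0 deg with variance 0.25. The fused estimate is pulled toward the more
# precise cue, and its variance is smaller than either input's.
mu, var = fuse(2.0, 1.0, 1.0, 0.25)
print(mu, var)  # → 1.2 0.2
```

This is why adding an independent EEG-derived estimate can only tighten the posterior: the fused variance is always below the smaller of the two input variances.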
&#13;
In the second part of this thesis, we exploit our previously developed interfaces to track the visual attention of experts performing image classification tasks. First, we create images with a hidden asymmetry that is not consciously (or explicitly) recognized by the experts. However, their visual attention patterns reveal that the asymmetry is unconsciously internalized, because the attention metrics are skewed toward the image regions most relevant for categorization. This demonstrates that we can capture insights about experts' tacit knowledge by tracking their visual attention. We then show that the expertise of subjects who received feedback extracted from their own attention patterns was significantly enhanced compared to that of subjects who did not. We refer to this as cognitive reinforcement. This research opens the door to new ways in which human expertise can be enhanced, exploited, and transferred. Finally, we utilize human attention maps captured during image exploration and labeling to feed a CNN-based image classification model. We demonstrate that when few images are available for training, the model fed with human attention maps, in addition to images and labels, significantly outperforms the baseline model. These results illustrate that experts' tacit knowledge can be exploited to enhance the performance of human experts as well as AI systems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151880</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of Channel Curvature on Drag, Mixing, and Stratification in Estuaries</title>
<link>https://hdl.handle.net/1721.1/151875</link>
<description>Impacts of Channel Curvature on Drag, Mixing, and Stratification in Estuaries
Bo, Tong
Estuaries often have sinuous planforms, and channel curvature can lead to distinct flow processes in bends, e.g., secondary circulation and flow separation. An integrated approach combining field observations, idealized modeling, and realistic modeling is used to understand how curvature-induced flow processes affect hydrodynamic drag, salinity mixing, and stratification in estuaries. In the North River (MA, USA), a sinuous, tidally-dominated estuary, drag is observed to be much greater than typically found in straight channel estuaries, and data analysis points to links between the high drag and curvature-induced processes. Idealized models and a realistic North River model are developed to investigate the mechanisms of drag increase in sinuous estuaries. Two key processes are found to dominate. First, flow separation leads to low-pressure eddies on the lee side of bends and thus creates bend-scale form drag. Second, curvature-induced secondary circulation transports higher momentum fluid from the surface toward the bed. Consequently, the near-bed shear and bottom stress are enhanced compared with a logarithmic velocity profile. The form drag due to flow separation and enhanced bed stress due to secondary circulation combine to increase the drag in the North River by a factor of 2-5 compared to the expected values. In addition to increasing the drag, channel curvature also affects the salinity distribution, mixing, and stratification. During ebb tides, secondary circulation in bends interacts with the salinity field to create bottom salinity fronts upstream of bend apexes. Intense mixing occurs at these curvature-induced fronts and leads to overall decreased stratification in sinuous estuaries compared to straight channels. In addition, flow separation in bends and at channel constrictions can create sharp lateral salinity gradients through differential advection during flood tides, and the resulting baroclinic forcing influences secondary circulation. 
Surface convergence fronts are generated at bends and constrictions as secondary circulation interacts with the laterally sheared flow, resulting in intensified mixing near the fronts. This thesis advances our understanding of how flow curvature affects the hydrodynamics, salinity, and mixing in estuaries with complex topographic features found in natural systems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151875</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale Optimization for Robust Multi-Class Prediction and Resource Allocation</title>
<link>https://hdl.handle.net/1721.1/151872</link>
<description>Large-scale Optimization for Robust Multi-Class Prediction and Resource Allocation
Gupta, Samarth
In this thesis we develop optimization-based methods to deal with uncertainty arising from data, first in the context of robust multi-class prediction and second for prescriptive analytics for medical resource allocation.&#13;
&#13;
In the first part, we make progress on training robust multi-class classifiers using error-correcting output codes (ECOC). We propose linear and non-linear integer programming (IP) formulations for the codebook design problem. By making connections with graph theory, such as edge clique cover and graph coloring, we develop tractable solution approaches to both the linear and non-linear IP formulations while maintaining low optimality gaps, estimated using Plotkin's bound.&#13;
&#13;
We provide extensive computational experiments on datasets with few classes, including MNIST and CIFAR10. In the nominal setting, our IP-generated compact codebooks outperform commonly used large codebooks. Furthermore, in the adversarial setting, our IP-generated codebooks achieve non-trivial robustness. This is surprising for three reasons: (1) we do not employ any adversarial training; (2) most other codebooks (except Dense) do not exhibit any robustness even when they use more than twice the number of columns; (3) the robustness that we obtain is not simply a consequence of large network capacity. On datasets with many classes, such as CIFAR100, Caltech-101, and Caltech-256, we leverage transfer learning to overcome the associated computational expense. We provide experiments under two different settings: first when the source classifier is nominally trained, and second when it is adversarially trained. ECOC-based classifiers achieve better classification performance than multiclass CNNs in both settings. These experiments indicate that our large-scale discrete optimization approaches for designing ECOC-based classifiers can be extremely useful for robust operation of modern urban systems.&#13;
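The error-correcting mechanism behind ECOC robustness can be illustrated with nearest-codeword decoding: each column of the codebook is the target labeling for one binary classifier, and a test point is assigned to the class whose codeword is closest in Hamming distance to the classifiers' outputs. The 4-class, 6-column codebook below is a made-up example, not one generated by the thesis's IP formulations.

```python
from itertools import combinations

# Hypothetical 4-class codebook with 6 binary columns; each column is the
# target labeling for one binary classifier. Not a codebook from the thesis.
codebook = [
    (0, 0, 0, 1, 1, 1),   # class 0
    (0, 1, 1, 0, 0, 1),   # class 1
    (1, 0, 1, 0, 1, 0),   # class 2
    (1, 1, 0, 1, 0, 0),   # class 3
]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def decode(bit_predictions):
    """Nearest-codeword decoding: predict the class whose codeword is
    closest in Hamming distance to the binary classifiers' outputs."""
    return min(range(len(codebook)),
               key=lambda c: hamming(codebook[c], bit_predictions))

# The minimum pairwise distance d_min governs robustness: up to
# floor((d_min - 1) / 2) flipped binary outputs are still decoded correctly.
d_min = min(hamming(u, v) for u, v in combinations(codebook, 2))
print(d_min, decode((0, 1, 1, 0, 1, 1)))  # → 4 1 (one bit flipped from class 1)
```

This is one source of the adversarial robustness discussed above: an attacker must flip several binary outputs, not just one, to change the decoded class.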
&#13;
In the second part of this thesis, we shift our focus from robust prediction to developing a new approach for prescriptive analytics. We make progress on the problem of uncertainty-informed medical resource (vaccine) allocation to a set of different sub-populations to control the spread of a pandemic such as COVID-19. Here, we tackle two major challenges: (1) developing a principled data-driven approach to model and estimate uncertainty in the parameters of a compartmentalized epidemiological model based on a system of ordinary differential equations (ODEs); and (2) developing tools to solve a large-scale, non-linear optimization problem constrained by ODE dynamics with uncertain parameters.&#13;
&#13;
We provide a data-driven approach to generate a tractable scenario set by estimating the posterior distribution of the model parameters using Bayesian inference with Gaussian processes. Using the scenario set, we provide nominal and stochastic (i.e., uncertainty-informed) formulations for optimal vaccine allocation. We develop a parallelized solution algorithm to efficiently solve both the nominal and stochastic optimization problems. Importantly, our scenario-set estimation procedure, optimization formulations, and solution approach are all flexible in that they are not limited to any particular class of ODE models. We provide experiments with two different non-linear epidemiological ODE models under different setups. Our computational experiments indicate that accounting for uncertainty in key epidemiological parameters can improve the efficacy of time-critical allocation decisions by 4-8%.
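A minimal sketch of the stochastic formulation's logic: draw a scenario set of uncertain parameters, then pick the allocation that minimizes expected infections across scenarios. The two-group SIR model, the grid search, and every number below are illustrative stand-ins for the thesis's GP-based posterior sampling and parallelized ODE-constrained solver.

```python
import random

# Toy two-group SIR (equal group sizes, shared mixing) with uncertain
# per-group transmission rates. All numbers are illustrative, not from the
# thesis; the scenario set stands in for GP-based posterior samples.
def total_infections(beta, v, days=120, dt=1.0, gamma=0.1):
    S = [1.0 - v[0], 1.0 - v[1]]   # v = fraction of each group vaccinated
    I = [0.001, 0.001]
    for _ in range(int(days / dt)):
        prevalence = (I[0] + I[1]) / 2
        for g in (0, 1):
            new = dt * beta[g] * S[g] * prevalence   # forward-Euler step
            S[g] -= new
            I[g] += new - dt * gamma * I[g]
    return (1.0 - v[0] - S[0]) + (1.0 - v[1] - S[1])  # total attack size

random.seed(1)
scenarios = [(random.uniform(0.15, 0.45), random.uniform(0.05, 0.20))
             for _ in range(20)]                      # sampled (beta0, beta1)

budget = 0.6   # total vaccine supply, in units of one group's population
best = min((x / 20 for x in range(13)),               # v0 grid: 0, 0.05, ..., 0.6
           key=lambda v0: sum(total_infections(b, (v0, budget - v0))
                              for b in scenarios))
print(best)    # expected-infection-minimizing share for the high-beta group
```

In the thesis's setting the grid search is replaced by a large-scale non-linear solver, but the structure — an outer allocation decision evaluated against an inner set of ODE scenarios — is the same.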
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151872</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intragastric Mechanical Systems for Dysmotility Diagnosis and Obesity Treatment</title>
<link>https://hdl.handle.net/1721.1/151871</link>
<description>Intragastric Mechanical Systems for Dysmotility Diagnosis and Obesity Treatment
Jia, Zixun
Obesity represents one of the critical non-communicable global epidemics of our time, affecting over 40% of the U.S. population, and its impact continues to expand. The gastrointestinal (GI) tract is the main site of food digestion and nutrient absorption and is recognized to have a significant role in the signaling of satiety. This thesis aims to address two major healthcare challenges stemming from the obesity epidemic: dysmotility evaluation, and intervention to stimulate satiety as a mode of therapy. Specifically, dysmotility, often associated with metabolic derangements including diabetes mellitus and obesity, can be challenging to evaluate with precision. Satiety induction also remains a major challenge due to functional requirements including minimal invasiveness and long-term efficacy. This thesis presents the fundamental mechanics, materials development, and electronic underpinnings of novel interventions that address both diagnostic and therapeutic unmet needs.&#13;
&#13;
Clinical evaluation of GI motility is currently limited to radiographic and nuclear methods that provide information on gastric emptying rate. While high-resolution manometry is used to evaluate motility in narrow tubular organs, such as the esophagus and rectum, there is no analogous system for gastric motility evaluation. To address this, I have developed a motility mapping platform capable of 3D pressure distribution mapping within the stomach. This incorporates the development of new materials for sensors that can operate within the dynamic range of forces experienced in the GI tract, the mechanical framework to support engagement with the gastric wall, and the electronics to support sensing and recording of the contractile forces. The platform has also been validated in the esophagus and rectum and compared to Food and Drug Administration (FDA)-approved high-resolution manometry devices. This new motility mapping system could revolutionize the understanding, diagnosis, and treatment of poorly understood conditions such as functional dyspepsia by enabling three-dimensional in vivo characterization of motility in the stomach.&#13;
&#13;
Current intragastric balloon therapy may be limited by a lack of persistent weight loss, as static balloons have been associated with plateauing of weight loss in large mammals. We hypothesized that the plateauing is associated with gastric accommodation of the static balloon and loss of stimuli. To address this challenge, I developed a novel endoscopically administered gastric resident device that supports dynamic satiety induction to approximate the natural satiety induction process associated with episodic meal ingestion. The device expands before meals to occupy the gastric cavity, then shrinks to a minimal volume after the meal. I have developed two gastric residency and dynamic expansion mechanisms, based on motorized and balloon approaches. I conducted preliminary evaluation of the system in vitro and in vivo using a swine model. This minimally invasive system enables dynamic satiety induction to support weight loss.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151871</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Epitaxy on 2D Materials for Bottom-up Heterointegration with Low-defects and Membrane Production with High-throughput</title>
<link>https://hdl.handle.net/1721.1/151868</link>
<description>Advanced Epitaxy on 2D Materials for Bottom-up Heterointegration with Low-defects and Membrane Production with High-throughput
Lu, Kuangye
Conventional epitaxy has significantly advanced the semiconductor industry, driving remarkable progress in various application fields such as electronics and optoelectronics. However, several limitations of epitaxial techniques have impeded the development of next-generation electronic and optoelectronic devices. These limitations encompass the absence of cost-effective methods for producing functional membranes with high throughput, the need to reduce the high costs of non-silicon semiconductor wafers, and the challenge of effectively integrating multiple functional semiconductor layers without detrimental effects from defects or interfacial states caused by lattice mismatch and disparate thermal expansion coefficients.&#13;
&#13;
In this thesis, novel epitaxy techniques and an in-depth investigation of their underlying principles are introduced to tackle these limitations inherent in conventional epitaxy techniques, thus paving the way for the production of high-quality epitaxial membranes as well as their heterogeneous integration in a cost-effective and high-throughput manner.&#13;
&#13;
Firstly, a unique mechanism of relaxing misfit strain in lattice-mismatched heteroepitaxial systems is observed through the implementation of remote heteroepitaxy, which involves conducting heteroepitaxy on graphene-coated substrates. The slippery graphene surface facilitates spontaneous relaxation of misfit strain and a reduction of misfit dislocations in the epilayers, while the substrate's atomic potential, penetrating through the graphene layer, preserves the single-crystalline properties of the epilayers. This provides a new pathway toward the heterogeneous integration of largely lattice-mismatched systems with minimized dislocation density, which could eventually broaden the material spectrum for advanced electronics and photonics.&#13;
&#13;
Subsequently, a high-throughput layer transfer technique based on remote epitaxy, with two-dimensional (2D) materials grown directly on wafers as an interlayer, is presented. This approach enables a pristine amorphous 2D-on-wafer template for epitaxy, addressing issues of degraded or contaminated semiconductor wafer surfaces after standard 2D materials growth or transfer processes. Consequently, it enables a scheme to produce multiple freestanding membranes from a single wafer without sacrificial layer etching or wafer polishing. Moreover, atomic-precision exfoliation at the 2D interface allows wafer recycling for subsequent membrane production, with the potential for substantial cost reduction in manufacturing processes involving non-silicon wafers.&#13;
&#13;
Additionally, we demonstrate remote epitaxy and nanopatterned epitaxy of InP, along with large-scale flexible membrane exfoliation and InP wafer recycling. By employing ultra-low temperature boron nitride growth, we successfully implement these advanced epitaxy and layer transfer techniques on InP substrate, despite its low dissociation temperature and weak ionicity. This approach paves the way for new opportunities in InP thin film-based optoelectronics and novel heterostructures at a significantly reduced cost.&#13;
&#13;
Lastly, we delve into the intricacies of remote epitaxy by elucidating the respective roles and impacts of the substrate material, the 2D layer, the 2D-substrate interface, and the epitaxial material on the electrostatic coupling between these materials, which governs cohesive ordering and can lead to single-crystal epitaxy in the overlying film. By exploring various material systems and processing conditions, we demonstrate that the rules of remote epitaxy vary significantly depending on the ionicity of the material systems as well as the 2D-substrate interface and the epitaxy environment. These studies lay the theoretical foundation for all of the novel epitaxy-on-2D techniques investigated in this thesis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151868</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamical Reduced-Order Models for High-Dimensional Systems</title>
<link>https://hdl.handle.net/1721.1/151865</link>
<description>Dynamical Reduced-Order Models for High-Dimensional Systems
Charous, Aaron
Advances in computational power have brought the possibility of realistically modeling our world with numerical simulations closer than ever. Nevertheless, our appetite for higher fidelity simulations and faster run times grows quickly; we will always grasp at what is just beyond our computational reach. No matter if we seek to understand biological, chemical, or physical systems, the bottleneck of scientific computing is almost always the same: high dimensionality. The goal of reduced-order modeling is to reduce the number of unknowns in a system while minimizing the loss of accuracy in the approximate solution. Ideal model-order reduction techniques are optimal compromises between computational tractability and solution fidelity. While there are plenty of such techniques to choose from, their widespread adoption remains to be seen due to several persistent challenges. Many methods are intrusive and difficult to implement, lack traditional numerical guarantees such as convergence and stability, and cannot adapt to unforeseen dynamics. We seek to promote the adoption of reduced-order models (ROMs) by creating non-intrusive, efficient, and dynamically adaptive algorithms that maintain the essential features and numerical guarantees of their full-order counterparts.&#13;
&#13;
In this thesis, we derive and apply algorithms for dynamical reduced-order models. Many model-order reduction approaches project dynamical systems onto a fixed subspace obtained from either a simplification of the original equations, a set of known functions such as orthogonal polynomials, or a reduced basis of full-order simulations computed offline. However, if the true system exits the span of the prescribed subspace, such approaches quickly accumulate large errors. In contrast, dynamical ROMs adapt their subspaces as the system evolves. Geometrically, this amounts to integrating a dynamical system along a nonlinear manifold embedded in a full-order Euclidean space. We develop schemes that not only change subspaces at each discrete time step, but that change the subspace in between time steps for improved accuracy. Even further, our numerical schemes automatically detect when the dynamics depart the nonlinear manifold and may jump to a new nonlinear manifold that better captures the system state. For concreteness, we focus on a reduced-order modeling technique called the dynamical low-rank approximation (DLRA), a discrete analogue to the dynamically orthogonal (DO) differential equations. The DLRA evolves a low-rank system in time (or range) as an approximation to a full-rank system, and in contrast to many methods, the DLRA does not require an offline stage where full-order simulations are computed. It is also agnostic to the source of high dimensionality, whether it be the high resolution required, the large domain, or the stochasticity of the problem. These features make it a versatile tool suitable for a wide variety of problems. 
We evaluate, verify, and apply our new dynamical reduced-order models and schemes to a varied set of dynamical systems, including stochastic fluid flows and waves, videos and their dynamic compression, realistic ocean acoustics and underwater sound propagation with dynamic coordinate transforms, and stochastic reachability and time-optimal path planning.&#13;
&#13;
The majority of this work is devoted to new adaptive integration schemes for the DLRA. We start by introducing perturbative retractions, which map arbitrary-rank matrices back to a manifold of fixed-rank matrices. They asymptotically approximate the truncated singular value decomposition at a greatly reduced cost while guaranteeing convergence to the best low-rank approximation in a fixed number of iterations. From these retractions, we develop the dynamically orthogonal Runge-Kutta (DORK) schemes, which change the subspace onto which the system's dynamics are projected in between time steps. The DORK schemes are improved by using stable, optimal (so) perturbative retractions, resulting in the so-DORK schemes. They are more efficient, accurate, and stable than their predecessors. We also introduce gradient-descent (gd) retractions and the gd-DORK schemes, which tend to converge rapidly to the best low-rank approximation by recursively applying retractions. The DORK schemes may be made rank-adaptive and robust to rank overapproximation either with a pseudoinverse or by changing the gauge of the integration scheme. While the pseudoinverse technique accumulates slightly more error, it preserves mode continuity, a feature that changing the gauge lacks. Next, we derive an alternating-implicit (ai) linear low-rank solver, which is used to create the ai-DORK schemes. The ai-DORK schemes are a general-purpose family of implicit integration schemes that have the same algorithmic complexity as explicit schemes (provided some conditions on the dynamics), which vastly broadens the scope of problems that can be solved with the DLRA. This relieves stringent time-step restrictions and enables the DLRA to handle stiff systems. Furthermore, we develop a piecewise polynomial approximation using adaptive clustering in order to handle non-polynomial nonlinearities in reduced-order models.
We thoroughly test these numerical schemes on well-conditioned and ill-conditioned matrix differential equations; data-driven dynamical systems including videos; Schrödinger's equation; a stochastic, viscous Burgers' equation; a deterministic, two-dimensional, viscous Burgers' equation; an advection-diffusion partial differential equation (PDE); a nonlinear, stochastic Fisher-KPP PDE; nonlinear, stochastic ray tracing; and a nonlinear, stochastic Hamilton-Jacobi-Bellman PDE for time-optimal path planning. We find that the reduced-order solutions may be made arbitrarily accurate using rank-adaptive dynamical schemes that automatically track the true rank of the full-order simulation, and nonlinearities may be well-approximated by dynamically increasing the number of stochastic clusters, all at a greatly reduced computational cost.&#13;
&#13;
In addition to DORK schemes, we create a tailor-made low-rank integration scheme for the narrow-angle parabolic wave equation called the low-rank split-step Fourier method. Acoustic simulations are often bottlenecked by the Nyquist criterion, which insists that we sample spatially at least twice per wavelength. To address this, our low-rank split-step Fourier method has an algorithmic complexity that scales sublinearly in the number of classical degrees of freedom, enabling vastly larger computational domains and higher frequencies. We demonstrate its efficacy on realistic ocean acoustics problems in Massachusetts Bay with sound speed fields obtained from our high-resolution ocean primitive equations modeling system. In comparing the low-rank and full-rank simulations, we demonstrate that the dynamical low-rank method captures the full-rank features including three-dimensional acoustic energy propagation in complex ocean fields with internal waves and rapidly varying bathymetry. &#13;
&#13;
Lastly, with tools from machine learning, we introduce learnable and automatically differentiable coordinate transforms. The compressibility of a system heavily depends on the choice of coordinates, and frequently a coordinate system is chosen for its simplicity rather than its efficiency. Our novel coordinate transforms are determined in a hands-off manner by minimizing a cost function that includes the environmental data expressed in terms of the non-constant coefficients and initial conditions of a PDE. Not only do we automatically obtain Jacobians and Hessians of the transforms, we also find coordinate systems that reduce the rank of solutions to PDEs. This improves the accuracy of the DLRA for the same cost as a typical low-rank simulation, and it accelerates the convergence in rank to the full-order solution. The coordinate transforms also enable low-rank domain decomposition, which is particularly useful in ocean acoustics where the water-seabed interface is discontinuous. We demonstrate this methodology on a first-order PDE with advection and a second-order PDE, the parabolic wave equation, using two examples. We first show acoustic propagation along a three-dimensional wedge and compare the accuracy of solutions computed in the original and transformed coordinate systems. We then show acoustic propagation in a realistic ocean environment over Stellwagen Bank in Massachusetts Bay with a dynamic coordinate transform.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151865</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Market Design</title>
<link>https://hdl.handle.net/1721.1/151837</link>
<description>Essays in Market Design
Çelebi, Oğuzhan
This thesis analyzes existing allocation mechanisms and proposes new mechanisms for  two-sided matching markets, with a particular focus on the role played by diversity preferences and affirmative action.&#13;
&#13;
In the first chapter, "Diversity Preferences, Affirmative Action and Choice Rules", I introduce a framework to analyze diversity preferences and their effect on the affirmative action policies and choice rules adopted by institutions. I characterize the choice rules that can be rationalized by diversity preferences and demonstrate that the rule used to allocate government positions in India cannot be rationalized. I show that if institutions evaluate diversity using the marginal (not cross-sectional) distribution of identities, then choices induced by their preferences cannot satisfy the substitutes condition, which is crucial for the existence of competitive equilibria and stable allocations. I characterize a class of choice rules that satisfy the substitutes condition and are rationalizable by preferences that evaluate diversity and quality separately, and I identify the preferences that induce some widely used choice rules. The framework and results presented in this chapter provide a systematic way of evaluating the diversity preferences behind the choices made by institutions. &#13;
&#13;
In the second chapter, "Adaptive Priority Mechanisms" (coauthored with Joel Flynn), we ask how authorities that care about match quality and diversity should allocate resources when they are uncertain of the market they face. Such a question appears in many contexts, including the allocation of school seats to students from various socioeconomic groups with differing exam scores. We propose a new class of adaptive priority mechanisms (APM) that prioritize agents as a function of both scores that reflect match quality and the number of assigned agents with the same socioeconomic characteristics. When there is a single authority and preferences over scores and diversity are separable, we derive an APM that is optimal, generates a unique outcome, and can be specified solely in terms of the preferences of the authority. By contrast, the ubiquitous priority and quota mechanisms are optimal if and only if the authority is risk-neutral or extremely risk-averse over diversity, respectively. When there are many authorities, it is dominant for each of them to use the optimal APM, and each so doing implements the unique stable matching. However, this is generally inefficient for the authorities. A centralized allocation mechanism that first uses an aggregate APM and then implements authority-specific quotas restores efficiency. Using data from Chicago Public Schools, we estimate that the gains from adopting APM are considerable.&#13;
&#13;
In the third chapter, "Best Response Dynamics in Boston Mechanism", I introduce and analyze a dynamic process called the Repeated Boston Mechanism (RBM), where the Boston Mechanism (BM) is used for multiple periods, and students form their application strategies by best responding to the admission cutoffs of the previous period. If students are truthful in the initial period, the allocation under RBM converges in finite time to the student optimal stable matching (SOSM), which is the Pareto-dominant equilibrium of BM and the outcome of the strategy-proof Deferred Acceptance Mechanism. If some students are sincere and do not strategize, then the allocation converges to the SOSM of a market in which sincere students lose their priorities to sophisticated ones. When students are not truthful in the first period but best reply to some initial admission cutoffs, the allocation converges to the SOSM if students are initially optimistic about their admission chances but may cycle between allocations Pareto-dominated by the SOSM if they are pessimistic. These results provide a foundation for the earlier characterizations of equilibria of BM and are in line with the observations of non-equilibrium play in BM in real-world markets.&#13;
&#13;
JEL Classification Codes: D47, D61
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151837</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-Aware Optimization and Data-Driven Methods for Low-Carbon Power Systems</title>
<link>https://hdl.handle.net/1721.1/151836</link>
<description>Physics-Aware Optimization and Data-Driven Methods for Low-Carbon Power Systems
Haider, Rabab
The US electricity sector is undergoing a transformation with aggressive targets to achieve 100% carbon pollution-free electricity by 2035. To achieve this objective while maintaining a safe and reliable power grid in the presence of intermittent renewable generation, new operating paradigms of computationally fast and accurate decision making in dynamic and safety-critical environments are needed. To this end, this thesis focuses on answering three questions: How can we enable dynamic (fast + frequent) decision making for safety-critical applications in the presence of integer constraints? How do we coordinate distributed grid-edge devices across multiple timescales and ownership boundaries? How do we develop and evaluate algorithms without access to real-world data? To address these questions, this thesis proposes two physics-aware optimization frameworks that coordinate grid-edge resources towards meeting three goals: improving grid efficiency, ensuring grid operability, and supporting clean energy directives.&#13;
&#13;
First, we propose Grid-SiPhyR (Sigmoidal Physics-Informed Rounding; pronounced as: ‘cipher’), a physics-informed machine learning framework for end-to-end learning to optimize for combinatorial problems, and apply it to the dynamic grid reconfiguration problem. Grid-SiPhyR employs a novel physics-informed rounding approach to tackle the mixed integer nature of dynamic reconfiguration while satisfying salient safety-critical operating constraints. Offline training of the unsupervised framework on representative load and generation data makes dynamic decision making via the online application of Grid-SiPhyR computationally feasible. Second, we propose a physics-aware distributed coordination architecture for grid-edge devices, upon which two grid services are developed. We first develop a hierarchical coordination approach for voltage regulation to coordinate slow-timescale utility-owned devices with fast-timescale solar generation towards managing grid voltages. We then develop a load ramp mitigation service to coordinate the actions of distributed storage resources to provide aggregated support at the bulk level. Lastly, we address the third question through the development of synthetic datasets with representative load and generation characteristics.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151836</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Based Algorithms for Improving Forecasting in Subsurface Energy Resources</title>
<link>https://hdl.handle.net/1721.1/151832</link>
<description>Machine Learning Based Algorithms for Improving Forecasting in Subsurface Energy Resources
Alolayan, Omar S.
Energy is an essential human need that is necessary for maintaining and improving quality of life. It also plays a crucial role in sustaining and developing the world economy. Accurate production forecasting models are important for governments, organizations, and companies as they enable them to make informed decisions and develop returns on investments. However, forecasting models for subsurface energy resources still face several challenges, such as mathematically ill-posed problems and a lack of reliable data. Machine learning offers new methods to develop better forecasting models due to its capacity to learn desired behavior from interacting with an environment of interest, as well as its ability to create optimal non-linear mappings between input and output data.&#13;
&#13;
In this research work, we reformulate the history matching problem from a least-squares mathematical optimization problem into a Markov Decision Process to develop a method in which reinforcement learning can be utilized to solve the problem. This method provides a mechanism where an artificial deep neural network agent can interact with the reservoir simulator and find multiple different solutions to the problem. Such a formulation allows the problem to be solved in parallel by launching multiple concurrent environments, enabling the agent to learn from all the environments at once and achieving a significant speed-up.&#13;
&#13;
Additionally, we use deep neural networks to generate more accurate shale gas production forecasts in counties with a limited number of sample wells by utilizing transfer learning. By using transfer learning, we provide a way of transferring the knowledge gained from other deep neural network models trained on adjacent counties into the county of interest. This research project uses data from more than 6000 shale gas wells across 17 counties from the Texas Barnett and Pennsylvania Marcellus shale formations to test the capabilities of transfer learning. The results reduce the forecasting error by between 11% and 47% compared to the widely used Arps decline curve model.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151832</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation of Polymer Support Structures In Metal Directed Energy Deposition</title>
<link>https://hdl.handle.net/1721.1/151818</link>
<description>An Investigation of Polymer Support Structures In Metal Directed Energy Deposition
Kurfess, Rebecca Ann
Metal directed energy deposition (DED) can create complex components and has a high deposition rate compared to other metal additive manufacturing (AM) processes. As a result, DED is of interest to die and mold, energy, and aerospace industries, among others. However, the design space of DED is limited: overhangs steeper than 20° and freestanding bridge geometries are typically difficult or impossible to manufacture without support structures. The difficulty of support deposition and removal in DED necessitates that DED manufacturing of large components is restricted to geometries that do not require supports. The use of a dissimilar material, such as a polymer, as a support would enable lower cost, easily removable supports. The suitability of polymers as substrates in DED has not been explored due to two key unknowns: (1) the effect of the metal DED process on a polymer substrate and (2) the effect of a polymer substrate on the deposited metal. This research investigates the viability of polymers as supports in laser blown-powder DED, providing guidelines for polymer selection and print strategy to avoid detrimental polymer degradation, unsafe combustion conditions, and negative impacts of using a dissimilar substrate on the deposited metal DED component.&#13;
&#13;
An understanding of combustion in the polymer/DED interaction due to laser interactions with the polymer was developed, and a tradeoff between polymer degradation and metal deposition quality was discovered. For successful DED deposition to occur, the polymer must have a high absorptivity: the polymers that facilitated deposition of 316L stainless steel in this research had absorbances greater than 2 absorbance units. Polymers with high-temperature fillers, such as glass fibers or carbon fibers, were shown to be effective in mitigating the extreme thermal conditions experienced by the polymer during deposition. Degradation of polymers was measured in a series of single-bead experiments, and a series of thermal models was developed to show the influence of DED parameters and polymer material properties on the penetration of heat into the polymer substrate. Both the experimental measurements and thermal model predictions indicated degradation on the order of 1 mm, an acceptable level of degradation. An understanding of the effect of a polymer substrate (CF ABS) on the hardness, microstructure, and porosity of a deposited metal (316L stainless steel) was established. Porosity of the metal was observed due to the entrapment of gas from polymer degradation in the molten deposited metal. Carbon from the polymer migrated into the molten metal, causing carbide formation and increasing the hardness of the deposited metal by approximately 70% compared to the expected value. To mitigate these effects, specimens were fabricated with an interlayer cooling time, lowering the overall temperature of the deposited component and decreasing the time spent by the component at higher temperatures. The mitigation strategy was proven to reduce hardness to the expected level for 316L stainless steel manufactured with DED.
Additionally, the introduction of an interlayer cooling time prevented much of the gas due to polymer degradation from infiltrating the metal component, reducing porosity from gas entrapment and cutting overall porosity from 8% to 4%. &#13;
&#13;
The above findings were integrated to produce a bridge component using a polymer support structure. Overall, this research provided a methodology for selecting polymer materials, print parameters, and print strategies to enable the deposition of 316L stainless steel on CF ABS, laying the foundation for polymer support structures in metal DED.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151818</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hybrid Discrete and Continuum Framework for Multiscale Modeling</title>
<link>https://hdl.handle.net/1721.1/151813</link>
<description>A Hybrid Discrete and Continuum Framework for Multiscale Modeling
Chantharayukhonthorn, Maytee
From the agricultural and industrial products that use granular ingredients, to the geological systems below our feet, granular systems surround us. Developing a computational framework to simulate granular materials is thus a gateway into understanding and improving many aspects of our lives. However, the very properties that make them beautiful and interesting systems also make them difficult to simulate. They span many length scales: individual grains can be measured on the scale of micrometers yet are the building blocks of kilometer-scale geological phenomena. They are also multiphase: an hourglass, for example, has a liquid-like region of flowing grains; a gas-like region of grains that are falling, dilute, and colliding; and a solid-like region of static grains that collect at the bottom.&#13;
&#13;
This work builds upon nascent hybridization work that combines two simulation methodologies: discrete methods and continuum methods. Discrete methods model every single grain and are thus accurate; however, they scale poorly. By contrast, continuum methods can be faster by greatly reducing the degrees of freedom represented. However, they can lose accuracy due to homogenization of system behavior. The hybrid method is able to utilize both a discrete representation for complex, constitutive behavior and a continuum representation for larger scale regions of simplified behavior. The method can homogenize discrete grains into continuum, enrich continuum regions into discrete grains, and then couple these systems in a hybrid zone. The hybrid method thus bridges both the variation in length scale and the multiphase nature of grains.&#13;
&#13;
In this study we present work on all components of the hybrid method. First, we introduce new granular packing methods capable of generating ad hoc granular assemblies that can meet user-defined criteria. Second, we discuss new enrichment and homogenization operators that conserve mass and momentum while also preserving higher-order properties. Finally, we discuss a higher order hybrid coupling, which better represents the two disparate simulation methods at the grid level. With these updates to the hybrid method, we demonstrate the ability to simulate large length and time scale granular systems in geometries of academic and industrial relevance. Transient and steady state behavior comparable to that of discrete methods is shown with the computational speed of continuum methods. We also show the ability to simulate systems too large for pure discrete methods. In summary, we demonstrate that the hybrid method is able to accurately simulate granular systems and is computationally efficient and robust enough to open the door to once-intractable system scales.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151813</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Autonomy and Maneuvering Simulation of an Unmanned Underwater Vehicle Near a Moving Submarine Using Actively Sampled Gaussian Process Surrogate Models</title>
<link>https://hdl.handle.net/1721.1/151812</link>
<description>Real-time Autonomy and Maneuvering Simulation of an Unmanned Underwater Vehicle Near a Moving Submarine Using Actively Sampled Gaussian Process Surrogate Models
Hammond, Brady M.
Unmanned Underwater Vehicle (UUV) maneuvering simulators have severe limitations on modeling UUV motion near a moving submarine because they are not capable of determining the complex, turbulent, hydrodynamic interactions in real time. Potential flow solvers are typically fast enough, but they neglect viscosity, which plays a critical role in control, introducing large inaccuracies. On the other hand, Computational Fluid Dynamics (CFD) accurately models these hydrodynamic interactions, but a simulation of a single UUV in one specific configuration typically takes hours or days to complete. Therefore, it is not practical for real-time applications. To bridge this gap, a machine learning framework based on actively sampled Gaussian Process (GP) regression is developed to create a reduced-order model (ROM) that predicts the hydrodynamic interactions in real time using a minimum number of expensive simulations.&#13;
&#13;
We show that the introduced active learning framework, called Non-Myopic MultiFidelity (NMMF) active learning for GP regression, significantly and parsimoniously accelerates the convergence of the surrogate model by combining low-cost, low-fidelity potential flow simulations to explore the domain with optimally selected high-fidelity CFD simulations as training data to improve the model accuracy. It is shown that the resulting GP regression model accurately and efficiently captures the hydrodynamic interactions between the UUV and the moving submarine. Based on the developed algorithms, we are able to define operating envelopes that outline regions where the UUV safely overcomes the hydrodynamic interactions, as well as regions where the UUV is overpowered and collides with the submarine. This approach also enables us to develop new autonomous protocols that compensate for the hydrodynamic interactions by adjusting the desired UUV heading and speed, enabling the UUV to safely stay on the desired course. A sensitivity analysis confirms the robustness of the presented control strategies. The developed ideas pave the way for control algorithms in complex environments, such as turbulent boundary layers, which were previously impossible to navigate in real time.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151812</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Velocity measurements of diffraction patterns on scattering of helium nozzle beam from (001) LiF cleavage plane by time of flight technique using metastable atoms.</title>
<link>https://hdl.handle.net/1721.1/151787</link>
<description>Velocity measurements of diffraction patterns on scattering of helium nozzle beam from (001) LiF cleavage plane by time of flight technique using metastable atoms.
Wang, James Chung-Fang.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151787</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spherical functions on GLn over p-adic fields.</title>
<link>https://hdl.handle.net/1721.1/151784</link>
<description>Spherical functions on GLn over padic fields.
Luks, Eugene Michael.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; On t.p. "n" is subscript.; Bibliography: leaf 52.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151784</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metal complexes related to vitamin B6 catalysis and tetraaza-[14], [15], and [16]-macrocycles - potential models for porphyrins and corrins.</title>
<link>https://hdl.handle.net/1721.1/151783</link>
<description>Metal complexes related to vitamin B6 catalysis and tetraaza-[14], [15], and [16]-macrocycles - potential models for porphyrins and corrins.
Weinstein, Georgia Nan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151783</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating semantic descriptions from drawings of scenes with shadows.</title>
<link>https://hdl.handle.net/1721.1/151740</link>
<description>Generating semantic descriptions from drawing of scenes with shadows.
Waltz, David L.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Lacking leaf 40. Vita.; Bibliography: leaves 296-300.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151740</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies in production functions.</title>
<link>https://hdl.handle.net/1721.1/151738</link>
<description>Studies in production functions.
Mukhopadhyay, Swapna.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151738</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of velocity dependence of collision-broadening cross sections using saturation spectroscopy.</title>
<link>https://hdl.handle.net/1721.1/151734</link>
<description>Determination of velocity dependence of collision-broadening cross sections using saturation spectroscopy.
Mattick, Arthur Thomas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1975; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151734</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Schubert geometry of flag varieties and Gelfand-Cetlin theory</title>
<link>https://hdl.handle.net/1721.1/151733</link>
<description>Schubert geometry of flag varities and Gelfand-Cetlin theory
Kogan, Mikhail, 1974-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2000; Includes bibliographical references (p. 63-65).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151733</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Delocalized Photonic Deep Learning on the Internet's Edge</title>
<link>https://hdl.handle.net/1721.1/151701</link>
<description>Delocalized Photonic Deep Learning on the Internet's Edge
Sludds, Alexander
Machine learning has become ubiquitous in our daily lives, providing unprecedented improvements in image recognition, autonomous driving, and conversational AI. To enable this improvement, the size of machine learning models has grown exponentially, requiring new hardware that scales accordingly. CMOS electronics, the workhorse of computing for the last half century, has hit a fundamental barrier to further improvement, limited by the high energy and bandwidth cost of metallic interconnects. In this thesis I will demonstrate how we can build systems that make use of the physics of photonics and electronics to enable computing on lightweight edge devices at scales that were previously infeasible by orders of magnitude.&#13;
&#13;
First, we consider a system where all metallic interconnects above the digital logic are replaced by optical fan-out. I propose a freely scalable digital optical neural network accelerator which replaces all non-local metallic wires in a digital systolic array with free-space optical interconnections enabled by fan-out and receiverless photodetectors.&#13;
&#13;
For the primary contribution of my thesis I explore making use of photonics to enable faster edge computing. Advanced machine learning models are currently impossible to run on edge devices such as smart sensors and unmanned aerial vehicles owing to constraints on power, processing, and memory. I introduce an approach to machine learning inference based on delocalized analog processing across networks. In this approach, named Netcast, cloud-based “smart transceivers” stream weight data to edge devices, enabling ultraefficient photonic inference. I demonstrate image recognition at ultralow optical energy of 40 attojoules per multiply (&lt;1 photon per multiply) at 98.8% (93%) classification accuracy. I reproduce this performance in a Boston-area field trial over 86 kilometers of deployed optical fiber, wavelength multiplexed over 3 terahertz of optical bandwidth. My work allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (&gt;100 watts) cloud computers.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151701</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI-Based Speech Assessment of Cognitive Impairment Disorders</title>
<link>https://hdl.handle.net/1721.1/151700</link>
<description>AI-Based Speech Assessment of Cognitive Impairment Disorders
Haulcy, R'mani
Previous research has shown that speech can be used to detect cognitive impairment in patients with dementia and other neurodegenerative diseases. These diseases produce cognitive deficits that lead to changes in the acoustic and linguistic content of the speech produced by the patients.&#13;
&#13;
In this thesis, we analyze the speech of subjects with Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), and Primary Progressive Aphasia (PPA). We show that AD subjects can be distinguished from healthy controls with 85.4% accuracy and that the Mini-Mental State Examination scores of the subjects can be predicted with a root mean squared error of 4.56, using sentence embeddings. We present the Crowdsourced Language Assessment Corpus (CLAC), a corpus that we created to provide the community with a collection of audio samples from various speakers that can be used to learn a general representation for speech from healthy subjects, as well as complement other health-related speech datasets.&#13;
&#13;
We present a novel, language-agnostic approach for measuring the quality of repetition in a recording, a method that was inspired by the need to automatically quantify the impaired repetition abilities that characterize the speech of people with the logopenic variant of PPA (lvPPA). A subset of the CLAC corpus was used as healthy controls and we demonstrated the feasibility of our approach by using it to distinguish between healthy and lvPPA speakers with impaired repetition with 85.7% accuracy. Lastly, we compare standard linguistic features to more advanced sentence embeddings by using a variety of feature extraction methods to extract features from picture description and monologue data for four different FTD/PPA variants. We show that all variants can be distinguished from healthy controls with &gt;= 90% accuracy using transformer-based sentence embeddings.&#13;
&#13;
We hope that the work presented in this thesis will contribute to the goal of using artificial intelligence to improve human health, clinical trial design, and drug development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151700</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Function Follows Form: An Exploration of Robotic Embodiment through Geometry</title>
<link>https://hdl.handle.net/1721.1/151699</link>
<description>Function Follows Form: An Exploration of Robotic Embodiment through Geometry
Chin, Lillian Tiffany
Current robot designers often treat a robot’s body as merely a vessel to transport the brain, limiting the potential scope of “embodied intelligence” that is unique to robotics. This cognitivist approach to robot design is in line with critiques from disability studies, which argue that assuming a single normative body ignores the diversity of human body-environment interactions, unfairly limiting human potential.&#13;
&#13;
This thesis aims to use framings from disability studies to better inform robotic design practice. By examining interactions between a body and its environment — whether a human body or a robotic one — we gain a better understanding of the necessary materials and tools needed to make that interaction a successful one. I am particularly interested in using geometry as a design tool for robot bodies, especially given the potential of mechanical metamaterials in robotic design.&#13;
&#13;
After first discussing bodies, materials and geometry’s impact on form in both disability studies and robotics, I demonstrate the power of this approach through two main case studies. First, I introduce AuxBots, a set of modular robots that use a motorized auxetic shell to control their volumetric expansion. Since the AuxBot’s expansion was achieved purely through geometry, I had significant control over the design space. I created AuxBots optimized for speed and expansion ratio (390% volume expansion in 0.2 seconds) or for force output (135 N max, or 76x strength-weight ratio, one of the highest ratios of any modular robot). I was able to explicitly make trade-offs between stiffness, speed, and force as a direct consequence of how the geometric parameters affect the overall system behavior.&#13;
&#13;
My second example is the technique of fluidic innervation, a sensorization strategy that patterns empty air channels within the struts of a 3D printed metamaterial. When the metamaterial deforms, these air channels will have their volume changed, which, by the Ideal Gas Law, can then be measured using an off-the-shelf pressure sensor. This combines perception directly with the underlying geometry of the metamaterial itself, since the channels follow the same structural elements that form the overall metamaterial. When complex metamaterials (e.g., handed shearing auxetics) are sensorized, fully proprioceptive soft robotic systems can be created, as structure, actuation, and sensing are embedded into one single geometry.&#13;
&#13;
These examples demonstrate not only the power of using geometry as a design tool, but also the potential for combining social science approaches for robotic development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151699</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Computational Techniques with Physics for Applications in Accelerated MRI</title>
<link>https://hdl.handle.net/1721.1/151698</link>
<description>Combining Computational Techniques with Physics for Applications in Accelerated MRI
Arefeen, Yamin Ishraq
Magnetic Resonance Imaging (MRI) non-invasively measures high-resolution images of soft tissue contrast in human anatomy without any ionizing radiation or injection of contrast agent. However, MRI incurs large costs to patients and researchers due to expensive equipment and slow imaging times. Significant research effort in the MRI field aims to reduce costs by developing techniques that increase information per unit time acquired by the scanner. This thesis presents methods that combine our knowledge of MRI physics with modern computational techniques to design algorithms that improve acquisition efficiency.&#13;
&#13;
We first propose SPARK, a machine learning method for reconstructing images from accelerated structural MRI acquisitions, trained from just a single scan. SPARK exploits calibration regions to train neural networks that correct a physics-based input reconstruction, improving performance at smaller calibration sizes and synergizing with a wide range of techniques. We next introduce Latent Signal Models for time-resolved MRI reconstruction. Latent Signal Models trains neural networks to approximate the Bloch equations and inserts the models directly into the MRI reconstruction problem. This enables fast optimization through a proxy for the Bloch equations and yields fewer degrees of freedom than linear models. Third, we explore Cramér-Rao bound optimization of sequences for quantitative MR parameter mapping. Auto-differentiation through simulations computes the necessary gradients for optimization. Finally, we propose an optimization scheme that designs radio-frequency pulse amplitudes for reduced heating in fetal MRI, while maintaining signal-to-noise and contrast-to-noise ratios.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151698</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-Efficient Security Solutions for Next-Generation Embedded Systems</title>
<link>https://hdl.handle.net/1721.1/151696</link>
<description>Energy-Efficient Security Solutions for Next-Generation Embedded Systems
Maji, Saurav
The proliferation of embedded systems and the Internet of Things (IoT) has opened up new possibilities for various applications. However, with these advancements come heightened security risks, making IoT security a major concern. Typically, embedded systems operate in resource-constrained environments with low power and area budgets, necessitating security solutions that can adapt to such conditions with minimal overhead. This thesis presents research that demonstrates the implementation of efficient security solutions for resource-constrained embedded systems across different threat models and applications. Specifically, this work focuses on three broad applications. Firstly, it proposes improving the security of implantable medical devices through dual-factor authentication that incorporates human responses alongside cryptographic security. Secondly, this thesis examines the side-channel security vulnerabilities of embedded neural network implementations, which are mitigated by the development of a side-channel-secure neural network accelerator with improved defenses. The thesis further explores fault attacks on neural network implementations, leading to the development of the first ASIC demonstration of a fault-attack-resistant neural engine. Finally, this work delves into anti-counterfeiting in agriculture and develops an interdisciplinary solution that combines materials engineering and computer science. The solution involves the use of silk tags attached to individual seeds, assigning them unique identities using Physical Unclonable Functions. Although each application comes with unique challenges, all solutions prioritize security and are tailored specifically for resource-constrained embedded systems with minimal resource overheads.
The findings of this thesis are a step toward developing security solutions for resource-constrained embedded systems by integrating appropriate algorithmic and architectural innovations with interdisciplinary solutions.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151696</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-State Flow Control for Ion Electrospray Propulsion</title>
<link>https://hdl.handle.net/1721.1/151694</link>
<description>Solid-State Flow Control for Ion Electrospray Propulsion
MacArthur, Jonathan V.
Electrospray propulsion offers high-efficiency, low-mass propulsive capabilities, based on ion extraction from ionic liquid propellants, to small spacecraft that exceed the capabilities of conventional chemical propulsion. Two shortcomings of such propulsion systems are their lack of lightweight propellant management, due to the restrictive sizing and weight criteria for small spacecraft, and the lack of a truly homogeneous porous emitter material. A lack of fluid control in these thrusters can lead to excess propellant delivery to the surface of the thruster emitter, flooding and potentially shorting the entire thruster. The use of mechanical valves and flow regulators would outweigh the benefits of having such a small electric thruster in the first place. Previous work on an electrowetting valve provided a single-use activation barrier to prevent propellant from flooding the thruster's emitter prior to launch and shorting the thruster entirely. The next reasonable evolution in this propellant management area is to attain fully active control of the ionic liquid propellant in a manner analogous to conventional mechanical valves in pressurized systems. Open microfluidics are microfluidics where one surface of the liquid is unconstrained at all times. Electrowetting provides the ability to manipulate liquid wetting characteristics actively with the application of an electric potential. By combining the transport of liquids via capillary flow in open microchannels with the ability to manipulate liquid contact angle via electrowetting, a solid-state electrostatic liquid flow controller may be designed and built.&#13;
 &#13;
This thesis aims to make three primary contributions that improve on the aforementioned shortcomings: first, analysis of performance envelopes for an electrowetting flow control device; second, design, fabrication, and testing of the devices to compare with model predictions; and third, creation of a new emitter material to couple with the flow controller in order to mitigate uncontrolled flow to the emitter tips. This research will shed light on the many variables at play in achieving active flow control in electrowetting open microfluidics. These analyses will grant insight into designing and fabricating micro-devices to be tested in the lab in order to verify the models. A new porous emitter material will aim to address issues with emitter uniformity that are seen in current materials, also helping to mitigate tip flooding. A solid-state propellant management device may be coupled with future electrospray thrusters based on this thesis research.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151694</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive and Responsive Design Under Uncertainty for Resource-Constrained Small Satellites</title>
<link>https://hdl.handle.net/1721.1/151691</link>
<description>Adaptive and Responsive Design Under Uncertainty for Resource-Constrained Small Satellites
Fifield, Michael G.
The space environment that satellites face is uncertain and challenging for survival and operation. Traditionally, satellite design methods mitigate the effects of uncertainty through the use of ample margin. However, robust designs often sacrifice significant nominal performance in exchange for this reduced sensitivity to uncertainty. Small satellites in particular are limited in size, weight, and power (SWaP) and do not have the luxury of resources for ample design margin. They can ill-afford the performance sacrifice of robust design. As SmallSats continue to decrease in size - even down to the hundreds of grams - the need grows for design techniques that offer resilience under uncertainty without the inevitable sacrifice of performance that comes with robust design.&#13;
&#13;
In this thesis, a methodology is presented to mitigate the effects of uncertainty in the space environment with the ability to adapt the satellite’s behavior during operation. Compensation for uncertainty on-orbit allows for dynamic allocation of margin on an as-needed basis, reducing the performance loss while improving the ability to maintain operation. The methodology covers two phases. First, design prior to operation enables provisioning of resources to plan for and provide the capability for passive and active dynamic mitigation of predicted uncertainty. Second, reprogramming of dynamic behavior in operation allows for optimal mitigation of actual uncertainty. The resultant designs balance improved resilience in the face of uncertainty with minimal overdesign and sacrifice of performance.&#13;
&#13;
The methodology is applied to a novel SmallSat concept, WaferSat - a SWaP-constrained satellite etched on a 300 g silicon wafer using microelectromechanical systems (MEMS) production. Optimization of active and passive dynamic compensation is utilized to mitigate effects of thermal uncertainty with limited sacrifice of payload power. Multiple design families - with the same available payload power (isoperforming) and confidence of operating temperature constraint satisfaction (isofeasible) - are identified utilizing different combinations of responsive and adaptive mitigation techniques.&#13;
&#13;
Application of the methodology is expanded to a second system, DiskSat, a similar but larger, more thermally complex system. A detailed comparison of the continuum between responsiveness and adaptability is made, demonstrating the Pareto-set of isoperforming and isofeasible designs between two mitigation methods. Design for the balance of active and passive uncertainty mitigation over multiple constraints is explored, highlighting implementation considerations.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151691</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Practical Fixed-Wing Aircraft with Electroaerodynamic Propulsion</title>
<link>https://hdl.handle.net/1721.1/151685</link>
<description>Towards Practical Fixed-Wing Aircraft with Electroaerodynamic Propulsion
Brown, Arthur
Electroaerodynamic (EAD) propulsion is a novel means of generating thrust via collisions between ions and neutral molecules. EAD thrusters have no moving parts and are therefore almost silent, which may make them useful for aircraft propulsion in applications where silence is valuable. Previous research has shown that EAD propulsion for fixed-wing, heavier-than-air aircraft is feasible. The goal of this thesis is to determine whether a fixed-wing EAD aircraft can be practical; i.e., with sufficient payload, range/endurance, and flight performance to be of interest in some initial application.&#13;
&#13;
Two initial applications are identified, both of which may benefit from low noise: surveillance and last-mile package delivery. Nominal mission requirements are developed. Three aircraft design case studies are presented: an uncrewed aircraft powered by unducted EAD thrusters for a surveillance mission, a family of uncrewed aircraft powered by multistaged ducted (MSD) EAD thrusters for a package delivery mission, and an uncrewed MSD-powered monoplane for a surveillance mission. MSD thrusters are more powerful and efficient than equivalent unducted EAD thrusters, in part because the duct contributes to thrust. Multidisciplinary design optimization frameworks, including models for thruster performance, aerodynamics, structures, weights, and power electronics, are developed as part of the case studies. &#13;
&#13;
Excess thrust for climb is the driving requirement for EAD fixed-wing flight: a practical aircraft requires more thrust than a feasible one, in order to climb. The MSD surveillance monoplane and package-delivery aircraft can fly their nominal missions, including climb requirements. However, they require improvements in three technological areas relative to today’s state of the art: efficient ion generation methods, low-pressure-loss thruster electrodes, and lightweight power converters. The package delivery mission also requires improvements in battery specific power. Plausible technological development paths in all four areas are identified.&#13;
&#13;
EAD propulsion for surveillance and package delivery aircraft can be practical if the requisite technological improvements can be obtained. The technologies’ identification, as well as the parameters by which improvement is quantified, is a key contribution of this thesis. Future work should focus on demonstrating the technological improvements, enabling the development of a fixed-wing, heavier-than-air EAD aircraft with practical capabilities.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151685</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Myeloid Cell Phenotype Using Cell Surface-Adhered Microparticles for Therapeutic Applications</title>
<link>https://hdl.handle.net/1721.1/151683</link>
<description>Engineering Myeloid Cell Phenotype Using Cell Surface-Adhered Microparticles for Therapeutic Applications
Kapate, Neha
Cell-based therapies present a new frontier for treating previously untreatable diseases. Living cells can innately overcome biological barriers, respond in real-time to biological stimuli, interact with specific cell types, and provide a canvas for further cellular engineering. The crucial role of the innate immune system, and particularly myeloid cells, in the dysregulated biological processes in numerous diseases has come into focus, motivating the development of myeloid cell therapies. The polarization of myeloid cells between classically activated, pro-inflammatory states and suppressive, anti-inflammatory states has myriad effects within the local environment, including metabolic modulation, production of cytokines, and activation of responding adaptive immune cells. As adoptively transferred cells can readily alter their phenotype based on their microenvironments, it is critical to develop a method for controlling cell phenotype in vivo.&#13;
&#13;
In this thesis, I develop a biomaterials approach for tuning myeloid phenotype, specifically differentiating monocytes and macrophages, for pre-clinical applications as cell therapy. I investigate how different myeloid cell phenotypes can be engineered and sustained using cell surface-adhered microparticles, termed “backpacks.” I delve into designing backpacks that load various drug molecules to promote anti- or pro-inflammatory phenotypes. I assess the effect of these microparticles on durability of phenotypic activation and other cellular functions in vitro. Next, I apply this platform to study immune-modulation and therapeutic effect in several disease models. I assess treatment with anti-inflammatory backpacks adhered to monocytes in a mouse model of progressive multiple sclerosis to determine immunomodulatory effects and therapeutic efficacy. Then, I scale up the fabrication of backpack-macrophages and apply this treatment in a clinically relevant porcine model of traumatic brain injury. Finally, I use backpacks to polarize monocytes in the opposite direction, toward a pro-inflammatory phenotype, demonstrating the utility of backpacks as a platform technology. I assess treatment of monocytes with pro-inflammatory microparticles in a mouse model of breast cancer to assess tumor microenvironment remodeling and effect on tumor burden. Altogether, this work provides a biomaterials-based approach to tune myeloid cell phenotype ex vivo, for precise control of cell phenotype in vivo.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151683</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the depths of unconsciousness with multifunctional neurotechnology</title>
<link>https://hdl.handle.net/1721.1/151682</link>
<description>Probing the depths of unconsciousness with multifunctional neurotechnology
Garwood, Indie C.
Innovation in the interrelated fields of anesthesia and psychiatry demands an improved understanding of the mechanisms behind altered states of consciousness. Studying the brain's functional equilibrium, and how it can be disrupted, has motivated increasingly high resolution and multifunctional neurotechnology. Inferring the dynamic structure of neuronal signaling from high dimensional data requires concomitant computational advances. This thesis focuses on how the intersection of neuroscience, engineering, and statistics can be leveraged to unravel the mechanisms behind altered consciousness induced by high-dose ketamine. Although ketamine has been indispensable to medical practice since 1970, the neurobiological mechanisms behind its unique behavioral effects are not fully understood. I hypothesized that ketamine’s inhibition of N-methyl-D-aspartate receptors (NMDARs) leads to a systemic restructuring of both chemical and electrical neuronal signaling which ultimately disrupts consciousness. Systematically testing this hypothesis required the ability to probe electrochemical signaling across the behavioral spectrum spanning cognition and unconsciousness. To enable this study, I first developed multifunctional fiber-based neurotechnology capable of simultaneously recording and modulating cortical and deep brain electrochemical signaling in non-human primates. Second, I developed a state-space model framework for characterizing the structure of neural activity and its dynamic response to neuromodulation. Using these developments, I found that ketamine's systemic alteration of electrochemical signaling results in rigidly structured neural activity that disrupts communication between brain areas, resulting in loss of consciousness. This work furthers our understanding of the neural dynamics that define unconsciousness, while also empowering systems neuroscience with an integrated, generalized toolbox for characterizing neuropharmacology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151682</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transfer Learning For Spoken Language Processing</title>
<link>https://hdl.handle.net/1721.1/151674</link>
<description>Transfer Learning For Spoken Language Processing
Khurana, Sameer
This thesis develops transfer learning paradigms for spoken language processing applications. In particular, we tackle domain adaptation in the context of Automatic Speech Recognition (ASR) and Cross-Lingual Learning in Automatic Speech Translation (AST). &#13;
&#13;
The first part of the thesis develops an algorithm for unsupervised domain adaptation of End-to-End ASR models. In recent years, ASR performance has improved dramatically owing to the availability of large annotated corpora and novel neural network architectures. However, the ASR performance drops considerably when the training data distribution does not match the distribution that the model encounters during deployment (target domain). A straightforward remedy is collecting labeled data in the target domain and re-training the source domain ASR model. However, it is often expensive to collect labeled examples, while unlabeled data is more accessible. Hence, there is a need for unsupervised domain adaptation methods. To that end, we develop a simple but effective adaptation algorithm called the Dropout Uncertainty-Driven Self-Training (DUST). DUST repurposes the classic Self-Training (ST) algorithm to make it suitable for the domain adaptation problem.&#13;
&#13;
The second part of the thesis develops a transformer neural network encoder that embeds speech from several languages into a shared, semantically aligned joint speech-text embedding space. To learn the multimodal semantic embedding space, we propose a teacher/student learning framework in which we fine-tune a pre-trained multilingual speech encoder (student) using semantic supervision from a pre-trained multilingual semantic text encoder (teacher). We show that by building multilingual speech-to-text translation technology using the semantic representations learned by our speech encoder, we can achieve significant zero-shot cross-lingual task transfer from high-resource spoken languages seen during training to low-resource spoken languages unseen during training.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151674</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodality: Models, Algorithms, and Applications</title>
<link>https://hdl.handle.net/1721.1/151667</link>
<description>Multimodality: Models, Algorithms, and Applications
Boussioux, Léonard
As global sustainability challenges intensify, the demand for innovative, interdisciplinary solutions that leverage diverse data sources and analytical methods is surging. We investigate combining operations research and artificial intelligence to address urgent sustainability and healthcare concerns by developing adaptable, universally applicable frameworks.&#13;
&#13;
This thesis delves into multimodality by simultaneously using distinct data types, such as tables, images, time series, and free text. We formulate versatile methodologies that can be applied to various tasks, from tropical cyclone forecasting and biodiversity tracking to healthcare operations, with minimal adaptation. We mimic human abilities to comprehend and connect different data types by incorporating artificial intelligence and optimization in data-driven strategies. Our contributions include the development of generalizable data pre-processing, feature extraction, and data fusion pipelines that facilitate large-scale multimodal data processing in complex real-world scenarios. Notably, our tropical cyclone forecasting models demonstrate performance comparable to the US National Hurricane Center's top models for 24-hour intensity and track forecasts.&#13;
&#13;
Moreover, we build integrated predictive-to-prescriptive data-driven frameworks connecting operations research and artificial intelligence. In support of multimodality, we introduce innovative tools that ensure model reliability and performance in critical situations. We explore adaptive robust ensemble modeling to augment planning and decision-making under uncertainty. Our predictive and prescriptive models have been effectively implemented in factories, museums, and hospitals to tackle sustainability and public health issues, including air pollution management, ecosystem preservation, and rare tumor segmentation. Our pollution management models considerably reduced harmful emissions at the OCP Safi Site, Morocco's largest chemical industrial plant, while cutting unnecessary costs. In addition, our tumor segmentation models match medical doctors' expertise while offering substantial time savings.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151667</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Experience in Temporal Situational Context: Method of Matching and Modeling in Design</title>
<link>https://hdl.handle.net/1721.1/151655</link>
<description>Visual Experience in Temporal Situational Context: Method of Matching and Modeling in Design
Peng, Wenzhe
Adhering closely to the phenomenological approach, a computational design system needs to incorporate visual experience to efficiently craft compelling human-centric visual designs. However, the computation of visual experience, which includes perception, cognition, emotion, and action, is challenging due to its subjective, non-deterministic, and unconscious nature. Recognizing that the temporal situational context, or an individual’s perceived environment over time, can provide insights into their cognitive state and yield a more consistent visual experience than static contexts, I argue that by incorporating temporal situational context, we can better match and model visual experiences, leading to effective and empirically grounded computational phenomenological design systems. Applications include the development of experience-sensitive spatial design systems, human-centric human-computer interaction designs, and improved film pre-production quality and efficiency.&#13;
&#13;
To incorporate visual experience in design, this thesis proposes a versatile computational representation method for temporal situational context called the Temporal Framed Scene Graph (TFSG), which is examined in two projects. The first project investigates the modeling of human behavior in an augmented reality exhibition using a recurrent graph network, with behavior represented in TFSG format. The second project, considering video an effective medium for conveying the visual experience of a scene, utilizes TFSG-facilitated visual experience matching for shot planning and set design. Its effectiveness is assessed using quantitative and qualitative tests in real-world filming scenarios. The project results further support the thesis argument, showing that TFSG effectively captures visual experience and provides a valuable foundation for exploring the matching and modeling of visual experience in design, leading to more efficient and human-centered design pipelines. This thesis contributes to fields including design studies, praxeology, cinematography, and AI, by presenting (1) A versatile representation of temporal situational context that computationally describes the visual experience in a scene. (2) A method that helps with film pre-production and human-centered spatial design through visual-guided optimization. (3) A context-driven multi-model system for modeling human behavior in an AR exhibition.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151655</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Additive Manufacturing Towards Electronically-Active Surfaces</title>
<link>https://hdl.handle.net/1721.1/151653</link>
<description>Additive Manufacturing Towards Electronically-Active Surfaces
Saravanapavanantham, Mayuran
Ubiquitous and imperceptible integration of optoelectronic devices into the world around us would allow for novel modes of energy harvesting, communications, sensing, information display and computing. To date, owing to the availability of foundries and scalable processing modalities, this has been achieved via fabrication of discrete elements which are then deterministically positioned throughout the world via pick-and-place assembly. Alternatively, availability of large-area, ultra-thin and continuous elements would enable seamless integration of electronics onto surfaces around us much like a second skin.&#13;
&#13;
Thin-film electronics, often fabricated with sub-micron device-functional layer thicknesses, present an avenue towards such mechanically imperceptible, large-area and continuous integration of electronics onto any surface of choice – a paradigm which we refer to as Active Surfaces. Herein, we report on developing transferable large-area ultra-thin organic photovoltaics, decoupling their manufacturing from the final integration, thereby allowing the electrification of any surface of interest. In particular, we discuss scalable manufacturing methods to fabricate fully-printed ultra-thin photovoltaic modules, describe their integration onto light-weight and high-strength composite fabrics, present the development of equivalently ultra-thin encapsulation films, introduce an approach to solution-coat ultra-thin substrates which can subsequently be used as the releasable carrier for devices, and highlight further advances necessary to translate this into a commercially viable technology.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151653</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surpassing Local Optimality in Geometry Processing</title>
<link>https://hdl.handle.net/1721.1/151652</link>
<description>Surpassing Local Optimality in Geometry Processing
Zhang, Paul
Geometry processing is a diverse field studying questions such as how to represent a shape, how to modify a shape, and how shapes respond to perturbation. These questions lie at the heart of many applications in computational design, simulation, fabrication, and animation. We approach these problems through the lens of geometric optimization, where solutions are acquired by defining application-specific objective functions paired with geometry-dependent constraints. Generically, these problems can be solved with Newton’s method, which converges to initialization-dependent local optima. In this thesis, we examine whether problem-specific knowledge can be leveraged to employ more advanced optimization techniques and obtain better results. We specifically explore the use of convex relaxation, variable augmentation and sum-of-squares programming to target cross-field based quad meshing, hexahedral mesh quality enhancement, and algebraic collision detection. With these tools, we manage to avoid shallower local minima and sometimes to reach or even surpass globally optimal solutions. We conclude with some principles by which these methods can be generally applied to other parts of geometry processing or optimization.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151652</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Continuous-Time Pipeline ADC with Reduced Sensitivity to Clock Jitter</title>
<link>https://hdl.handle.net/1721.1/151650</link>
<description>A Continuous-Time Pipeline ADC with Reduced Sensitivity to Clock Jitter
Mittal, Rishabh
With the advent of the fifth-generation (5G) standard for cellular networks, direct RF receivers are becoming popular in applications such as cellular base stations. Such systems require analog-to-digital converters (ADC) with a high dynamic range over a large digitization bandwidth (&gt; 500 MHz). For high-speed high-resolution ADCs with an upfront sampler, the clock jitter poses a fundamental bottleneck for the maximum achievable signal-to-noise ratio (SNR). In applications requiring 10-12 bit resolution for 1 GHz digitization bandwidth, the clock jitter values must be no more than a few tens of femtoseconds. This poses significant design challenges for the clock generator.&#13;
&#13;
The continuous-time (CT) pipeline ADC is an emerging architecture that combines the benefits of a discrete-time pipeline ADC and a continuous-time ∆Σ ADC architecture. In this thesis, we explore the clock jitter sensitivity of the CT pipeline ADC. We derive the SNR limitations in a CT pipeline ADC and propose a new CT pipeline ADC design with improved tolerance to clock jitter. We also present a design methodology for the delay line and propose a novel inductor-less delay line that provides good amplitude and phase matching between the stage 1 signal path and the sub-ADC-DAC path from DC to 1.6 GHz to minimize the signal leakage in the first stage residue.&#13;
&#13;
A prototype ADC was fabricated in a 16-nm FinFET process. The ADC achieves 61.7/60.8 dB (low/high frequency) SNR over 1-GHz bandwidth. The active area is 0.77 mm² and the ADC consumes 240 mW. The Schreier figure-of-merit (FOMS) is 157.9 dB, which is among the best reported for state-of-the-art continuous-time ADCs with digitization bandwidths greater than 500 MHz.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151650</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning Emulators for Accessible Climate Projections</title>
<link>https://hdl.handle.net/1721.1/151644</link>
<description>Deep Learning Emulators for Accessible Climate Projections
Lütjens, Björn
Climate change has shifted from a purely scientific topic to a deeply politicized issue. To combat climate change we need to create mutual understanding on the links between policies, global warming, and city-scale impacts. Climate models have been incredibly helpful in generating this causal understanding, but running them requires supercomputers and is only accessible to a minority of researchers. &#13;
&#13;
This thesis explores how emulating climate models with deep learning can make them more accessible and, at the same time, raise novel challenges in deep learning on physical, long-term time-series, and high-dimensional data. This dissertation shows that deep learning can decrease runtime in dynamical models, increase accuracy in local climate projections, and generate visualizations of climate impacts. Specifically, this thesis contributes a hybrid model, called multiscale neural operator, that corrects fast low-resolution simulations by learning a hard-to-model parametrization term. This cuts runtime complexity from quadratic to quasilinear, which can result in a 1000x faster model on selected equations in multiscale dynamics. This thesis also contributes satellite imagery of the future that visualizes climate data using physically-consistent deep generative vision models.&#13;
&#13;
The thesis contributions are framed in an envisioned online tool that rapidly emulates the city-scale impacts of various climate policies. In the future, such an emulator could accelerate local climate risk analyses, attribution of extreme events, and the understanding of causal links between impacts and policies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151644</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Privacy-Preserving Video Analytics</title>
<link>https://hdl.handle.net/1721.1/151637</link>
<description>Privacy-Preserving Video Analytics
Cangialosi, Francis
As video cameras have become pervasive in public settings and accurate computer vision has become commonplace, there has been increasing interest in collecting and processing data from these cameras at scale ("video analytics"). While these trends enable many useful applications (such as monitoring the mobility patterns of cars and pedestrians to improve road safety), they also enable detailed surveillance of people at an unprecedented level. Prior solutions fail to practically resolve this tension between utility and privacy, as they rely on perfect detection of all private information in each video frame—an unrealistic assumption.&#13;
&#13;
In this dissertation, we present Privid, a privacy-preserving video analytics system that aims to provide both a meaningful guarantee of privacy and an expressive, general query interface that is amenable to a wide range of analysts. In particular, Privid's privacy definition does not require perfect detection of private information, and its query interface allows analysts to provide their own arbitrary (untrusted) machine learning (ML) processing models.&#13;
&#13;
The key takeaway from our evaluation is that Privid can provide a practical balance between privacy and utility: across a variety of queries over both real surveillance videos and a simulated city-wide camera network, Privid protects the appearance of all people with differential privacy, and maintains accuracy within 79-99% relative to a non-private system.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151637</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Production of Ideas</title>
<link>https://hdl.handle.net/1721.1/151632</link>
<description>Essays on the Production of Ideas
Kim, Soomi
Old ideas serve as critical inputs in the production of new ideas. In order to generate knowledge, innovators “stand on the shoulders of giants,” the great thinkers who came before, whose ideas serve as the foundation to build on. In this dissertation, I rely on rich empirical data in biomedical settings to identify factors that drive or hinder this cumulative process of knowledge production. The first essay focuses on how knowledge workers innovate in new domains without giants, where there are only a few existing ideas to build on. Using the setting of structural biology, I explore how a new technological tool—the automation of analogical reasoning—allowed innovators to import knowledge from an adjacent domain, bypassing the need to build knowledge from the ground up. In the second essay, I turn to how institutions can shape innovative outcomes, particularly when the shoulders of giants rest on a weak foundation. I document that poor communication among different institutional parties of the patent system likely led to the prevalence of biomedical patents based on erroneous or fraudulent science, reducing incentives for innovation. Finally, in the third essay, I highlight the role of private-sector policies—specifically, insurance design—in steering the direction of firms’ R&amp;D efforts in drug development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151632</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Probing of Nonlinear Dynamics in Quantum Materials: Beyond Linear Response Probes</title>
<link>https://hdl.handle.net/1721.1/151631</link>
<description>Ultrafast Probing of Nonlinear Dynamics in Quantum Materials: Beyond Linear Response Probes
Ergeçen, Emre
The interactions between the spin, orbital, and charge degrees of freedom of electrons give rise to a variety of exotic phenomena and emergent phases of matter in quantum materials. However, understanding these phenomena is often hindered by the lack of experimental probes that are sensitive to multiple degrees of freedom. Linear response probes, such as scattering and electron transport measurements, are sensitive to a single electronic degree of freedom and cannot provide insights into electron-electron coupling, which can result in hidden orders and spectral features still under debate.&#13;
&#13;
Recent advances in ultrafast laser technology have made it possible to study the dynamical behavior of solids that remains hidden to equilibrium probes. Novel spectroscopy techniques, such as coherent phonon spectroscopy, transient absorption spectroscopy, second harmonic generation microscopy, and high field THz excitation spectroscopy, are capable of probing the properties of quantum materials that are not accessible with linear response techniques. By employing these techniques, this thesis visualizes different types of collective excitations in quantum materials and unveils new physics that only exists under non-equilibrium conditions. The results presented in this thesis contribute to the understanding of the mechanisms underlying exotic states of matter and have the potential for applications in the development of novel devices.&#13;
&#13;
This thesis focuses on the use of novel ultrafast spectroscopy techniques to study the collective behavior and emergent phases of matter in quantum materials. In Chapter 2, we unravel an exceptionally strong coupling between lattice vibrations and orbitals in a van der Waals antiferromagnet, NiPS3. In Chapter 3, we show that this exceptionally strong phonon-orbital coupling can lead to a strong effective coupling between spins and lattice in FePS3, which is not detectable with conventional probes, such as Raman and X-ray scattering. In Chapter 4, by using nonlinear optical microscopy techniques, we discover a two-dimensional multiferroic, NiI2. In Chapter 5, using high electric field excitations in the THz domain, we observe the phononic analog of the Cherenkov effect, where electrons propagating faster than the speed of sound directionally emit acoustic phonons. Finally, we provide an outlook on the prospective trajectories and viable experiments that could pave the way for further advancements and breakthroughs in the realm of ultrafast spectroscopy of quantum materials.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151631</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lon degrades stable substrates slowly but with enhanced processivity, redefining the attributes of a successful AAA+ protease</title>
<link>https://hdl.handle.net/1721.1/151628</link>
<description>Lon degrades stable substrates slowly but with enhanced processivity, redefining the attributes of a successful AAA+ protease
Kasal, Meghann
Protein quality control (pQC) in all cells is mediated by macromolecular machines that make new proteins, ensure that newly translated polypeptides are properly folded, and degrade aberrant proteins. Mechanoenzymes belonging to the AAA+ (ATPases associated with diverse cellular activities) superfamily are present in all domains of life and harness energy from chemical fuels to perform mechanical work by promoting conformational changes in other biological macromolecules. Mechanical work is required to unfold proteins and disassemble aggregates during pQC, as well as during other biological processes, such as unwinding DNA during replication and transporting cellular cargo along cytoskeletal filaments. The AAA+ protease Lon contains an AAA+ module fused to a C-terminal peptidase domain and an N-terminal auxiliary domain, which is implicated in substrate recognition and allosteric regulation. Lon plays a major role in pQC by performing ‘housekeeping’ degradation of unfolded and misfolded proteins. Additionally, Lon recognizes and degrades natively folded proteins containing a recognition tag (degron). Detailed biochemical and biophysical characterization of Lon has lagged behind that of other AAA+ proteases; thus, the underlying mechanical features of Lon-catalyzed degradation were largely unknown prior to this work.&#13;
&#13;
In Chapter 2 of this thesis, I provide the first single-molecule characterization of Lon mechanical unfolding and translocation using a bacterial Lon homolog from Mycoplasma. This work reveals that Lon: (i) is a ‘powerful’ AAA+ protease given its ability to degrade very stable model substrates; (ii) has a conserved stepping mechanism shared by two related AAA+ proteases despite differences in their velocities, ATPase rates, domain architectures, and substrate contacts; and (iii) exhibits high processivity and frequently completely degrades multi-domain substrates. Importantly, despite being a slower unfoldase and translocase than other well-characterized AAA+ proteases, Lon is more persistent and successful at degrading targeted substrates. Based on this work, I propose that, because of its cellular niche, Lon balances degradation of a very large substrate repertoire consisting of both unfolded and highly stable proteins, and Lon has likely been ‘tuned’ during evolution to maximize its proteolytic success to a greater extent than the other bacterial AAA+ proteases. In the final chapter, I propose several future directions for continued research on Lon both in vitro and in vivo, aimed at identifying novel substrates and modes of regulation and further elucidating the mechanistic features of degradation.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151628</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical Characterization of Glycan Assembly Pathway Enzymes</title>
<link>https://hdl.handle.net/1721.1/151626</link>
<description>Biochemical Characterization of Glycan Assembly Pathway Enzymes
Bernstein, Hannah
Glycans and glycoconjugates are found in all domains of life and are known to mediate many important and diverse biological processes. Glycan biosynthesis is not templated, and complex glycans are instead assembled en bloc in highly specific and tightly regulated enzyme-catalyzed assembly pathways that occur at the membrane interface. Despite the critical roles of glycans, our understanding of the glycome is quite incomplete. This thesis describes a biochemical approach that may be used to provide greater insight into both glycan structure elucidation as well as the structural determinants of substrate specificity for these glycan-biosynthesizing enzymes.&#13;
&#13;
Phosphoglycosyl transferases (PGTs) initiate glycan assembly pathways by catalyzing the transfer of a phospho-sugar from a nucleotide-diphosphate (NDP)-sugar to a polyprenol phosphate (PrenP), yielding a Pren-PP-sugar product that is then elaborated by sequential glycosyl transferases (GTs), flipped across the membrane, and transferred to the final target. Previous work has defined two PGT superfamilies, which are defined by different membrane topologies and catalytic mechanisms, and characterized the requirements of the PrenP substrate, but less is known about the structural determinants of substrate specificity with respect to the NDP-sugar substrate. This thesis presents the methods used to synthesize and purify a panel of candidate UDP-sugar substrates that can be deployed in various binding or activity studies in order to investigate the structure-activity relationship of PGT enzymes and NDP-sugar substrates in vitro. This library is then used to biochemically validate the reported substrate specificity profile of several monotopic PGTs.&#13;
&#13;
The next section of this thesis focuses on GT enzymes, which are also known to demonstrate incredible selectivity with respect to the NDP-sugar substrate. Because GTs adopt one of only a few different folds, elucidation of the structural determinants of substrate specificity could have wide-reaching impacts. The substrate specificity of GTs in the Pgl pathway of two Campylobacter concisus strains is probed as a model system. Finally, the utility of this biochemical approach is exemplified through its application in identifying the first two monosaccharides of the exopolysaccharide (EPS) glycan from Bacillus subtilis, which had not been previously known.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151626</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultra-scaled III-V Vertical Tunneling Transistors</title>
<link>https://hdl.handle.net/1721.1/151619</link>
<description>Ultra-scaled III-V Vertical Tunneling Transistors
Shao, Yanjie
In the quest to reduce the power consumption of transistors, charge carrier transport mechanisms other than thermionic emission over an energy barrier have received considerable attention. Among all possible mechanisms, quantum mechanical tunneling has emerged as one of the most promising, and the design and demonstration of Tunnel Field-Effect Transistors (TFETs) has been an object of great interest in the past few years. In spite of intense research and promising simulation predictions, the results to date have been disappointing: the combination of high drive current and sub-thermionic switching characteristics has never been achieved. Are we facing a fundamental barrier?&#13;
&#13;
This thesis is dedicated to exploring the limit of TFETs in terms of device scalability, high-current potential, and sharp switching capability. We focus on the most promising group III-V semiconductor heterojunction structure, the broken-band GaSb/InAsSb system, in a vertical nanowire (VNW) TFET configuration. We first develop a new technology for ultra-scaled GaSb/InAsSb VNW fabrication, reaching a diameter as small as 5 nm. We then build VNW Esaki diodes, demonstrating record-high tunneling current density and ideal scaling behavior. Furthermore, we have fabricated ultra-scaled VNW TFETs which show that a combined high tunneling current and steep subthreshold swing is indeed achievable. Finally, we discuss opportunities and challenges of all-III-V complementary TFET logic. The findings in this thesis demonstrate a potential technology platform for future ultra-low-power digital electronics.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151619</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Channel Comparison Methods and Statistical Problems on Graphs</title>
<link>https://hdl.handle.net/1721.1/151616</link>
<description>Channel Comparison Methods and Statistical Problems on Graphs
Gu, Yuzhou
Initially driven by channel coding, information theory has developed a large collection of tools for measuring and comparing effectiveness of information channels. These tools have found applications in various fields such as statistics, probability, and theoretical computer science. This thesis explores several applications of these tools to statistical problems related to graphs.&#13;
&#13;
Part I focuses on information channels and channel comparison methods, including f-divergences, strong data processing inequalities, and preorders between channels. While these theories have been well-established for binary memoryless symmetric (BMS) channels, there remains much to discover for channels with larger input alphabets. We develop a theory of q-ary input-symmetric (FMS) channels, generalizing the theory of BMS channels. We demonstrate that while FMS channels exhibit more complex behavior than BMS channels, some properties of BMS channels can be extended to FMS channels. Furthermore, we perform tight analysis on contraction properties of the Potts channels, the simplest examples of FMS channels.&#13;
&#13;
In Part II, we apply the information-theoretic methods established in Part I to solve problems related to random graph models with community structures. The random graph models include the stochastic block model (SBM) and its variants, which hold significance in statistics, machine learning, and network science. Central problems for these models ask about the feasibility and quality of recovering hidden community structures from unlabeled graphs. By utilizing the relationship between random graphs and random Galton-Watson trees, we demonstrate that many important problems on these graphical models can be reduced to problems on trees. We apply various channel comparison methods to solve these tree problems, demonstrating that different methods are effective for different problems and that selecting the correct tool for a problem is crucial. Problems we study include (for SBMs) weak recovery, optimal recovery algorithms, and the mutual information formula, and (for broadcasting on trees) reconstruction, robust reconstruction, uniqueness of belief propagation fixed points, boundary irrelevance, computation of limit information, and so on.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151616</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconducting Nanowire Technology for Microwave and Photonics&#13;
Applications</title>
<link>https://hdl.handle.net/1721.1/151613</link>
<description>Superconducting Nanowire Technology for Microwave and Photonics&#13;
Applications
Colangelo, Marco
Quantum computing and quantum communication are innovative technologies promising to revolutionize several aspects of our societal landscape. However, early cutting-edge experiments are rapidly approaching significant scalability roadblocks. As the qubit count increases, superconducting quantum processors require an increasing number of control and readout electronic devices, which are incompatible at scale with the performance of dilution refrigerators. Photonic-based platforms struggle with integration issues due to operational, design, and heterogeneous material compatibility.&#13;
&#13;
In this thesis, we demonstrate that superconducting nanowires have the potential to drive a major leap in the scalability of these and other architectures. We show that the exotic microwave properties of superconducting nanowires enable cryogenic devices at microwave frequencies with an ultra-compact footprint. We introduce microwave directional couplers and resonators featuring a footprint reduction of up to 200 times, making them suitable for on-chip integration with superconducting quantum processors and in any application needing cryogenic microwave signal processing. &#13;
&#13;
Furthermore, we engineer the nanowire properties to overcome the metrics trade-offs of single-photon detectors. We demonstrate an all-in-one nanowire detector with record performance, imaging capability, and photon-number resolution, all in the same design. Our device can be used to scale experiments needing many high-performance detectors.&#13;
&#13;
Finally, we demonstrate single-photon detectors integrated on lithium-niobate-on-insulator with state-of-the-art performance. We also introduce integrated array technology on silicon-on-insulator. Our nanowire technology can be heterogeneously integrated on-chip with current quantum photonic platforms, removing the need for out-coupling to fiber-coupled detectors.&#13;
&#13;
In conclusion, superconducting nanowires have the potential to become a comprehensive solution for scaling classical and quantum architectures.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151613</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Player Capability and Locally Sub-Optimal Behavior in Strategic Games</title>
<link>https://hdl.handle.net/1721.1/151607</link>
<description>Player Capability and Locally Sub-Optimal Behavior in Strategic Games
Yang, Yichen
Game theory has a profound influence across many different disciplines, including economics, social science, logic, and computer science. Research in game theory has surfaced many interesting phenomena on how strategic players interact in various game settings. In this thesis, I consider two topics in game theory; the research in both surfaces and characterizes interesting phenomena in how strategic players interact in game-theoretic settings.&#13;
&#13;
The first topic is the emergence of locally suboptimal behavior in finitely repeated games. Locally suboptimal behavior refers to players playing suboptimally in some rounds of the repeated game (i.e., not maximizing their payoffs in those rounds) while maximizing their total payoffs in the whole repeated game. The emergence of locally suboptimal behavior reflects some fundamental psychological and social phenomena, such as delayed gratification, threats, and incentivized cooperation. The central research question in this part is when locally suboptimal behavior can arise from rational play in finitely repeated games. To this end, we prove the first necessary and sufficient condition that provides a complete mathematical characterization of when locally suboptimal behavior can arise for 2-player finitely repeated games. We also present an algorithm for the computational problem of, given an arbitrary game, deciding if locally suboptimal behavior can arise in the corresponding finitely repeated games. This addresses the practical side of the research question.&#13;
&#13;
The second topic is the impact of player capability on game outcome. Varying player capabilities can significantly affect the outcomes of strategic games. Developing a comprehensive understanding of how different player capabilities affect the dynamics and overall outcomes of strategic games is therefore an important long-term research goal in the field. We propose a general framework for quantifying varying player capability and studying how different player capabilities affect game outcomes. We introduce a new game model based on network congestion games and study how player capabilities affect social welfare at Nash equilibria in this context. The results in this part surface an interesting phenomenon that in some situations, increasing player capabilities may deliver a worse overall outcome of the game. We characterize when such phenomena happen for the games we study. &#13;
&#13;
We further extend the new game model introduced above with incomplete information on player capability and multi-round play. We establish (algorithmic) game theoretic properties in these extensions, regarding the existence of different types of equilibrium solutions and the complexity of finding equilibrium solutions. These extensions model aspects of interactions between strategic agents that lead to phenomena such as concealment and deception.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151607</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Volumetric Mapping for Medical Imaging and Geometry Processing</title>
<link>https://hdl.handle.net/1721.1/151604</link>
<description>Volumetric Mapping for Medical Imaging and Geometry Processing
Abulnaga, Sayed Mazdak
Mapping is the task of computing a transformation from a source shape to a target domain. It is a central problem in medical imaging and computer graphics. Most methods for this task apply only to two-dimensional (2D) surfaces. The neglected task of volumetric (3D) mapping, a natural extension relevant to shapes extracted from medical imaging, simulation, and volume rendering, presents unique challenges that do not appear in the 2D case. In this thesis, we propose methods for mapping volumes represented as tetrahedral meshes. We are motivated by problems in medical imaging using magnetic resonance imaging (MRI) of the placenta to study placental health and function. This application presents a challenging problem setting relevant to mapping tasks in computer graphics and geometry processing as a whole. We propose an automatic segmentation method to extract placental shapes from MRI. To alleviate interpretation issues of placental MRI, we propose a volumetric parameterization to map the placenta to a standardized representation and enable visualization of local anatomy and function. To tackle the more general problem of computing a map between shapes, we propose a method to compute symmetric correspondences between an arbitrary class of highly dissimilar geometric shapes. Finally, we combine our proposed approaches into a shape-and-image mapping framework to find dense correspondences of placental shapes. Together, these works can be used to assess localized placental function through MRI, which is necessary to develop biomarkers of fetal health. We conclude by discussing the potential of this work in future clinical research studies to improve fetal-maternal health.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151604</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Domain and User-Centered Machine Learning for Medical Image Analysis</title>
<link>https://hdl.handle.net/1721.1/151598</link>
<description>Domain and User-Centered Machine Learning for Medical Image Analysis
Hoebel, Katharina Viktoria
The utilization of diagnostic imaging in the United States and worldwide is steadily growing. Combined with a shortage of trained staff, this results in an increased and unsustainable workload for radiologists. Consequently, there is a high clinical need to automate cognitively challenging tasks, such as analyzing and interpreting medical images, to lighten the burden on radiologists and avoid a further increase in healthcare expenditure. Machine learning (ML), including deep learning (DL), offers a potential solution, as these algorithms can learn to automatically recognize subtle patterns from large amounts of data and augment clinical decision-making.&#13;
&#13;
Despite the high enthusiasm for ML algorithms, concerns regarding their readiness for clinical deployment are impeding their clinical translation. In this thesis, we address three fundamental challenges to the translation of ML algorithms into clinical care settings.&#13;
&#13;
First, algorithms must perform robustly in routine clinical care settings. We demonstrate how appropriate image preprocessing improves the stability of handcrafted radiomic features extracted from brain MRIs. Second, the selected network design must be appropriate for a specific task. Here, we illustrate the advantages of shifting from a strictly discrete (ordinal) model of disease severity distribution to a continuously valued one. We introduce a generalized framework that can recover information lost by discretizing continuous variables into discrete training labels. Furthermore, disagreements in the labels generated by different annotators can be caused by individually varying decision thresholds. Therefore, we present the first design and demonstration of two methods that enable the joint learning of annotators’ ordinal classification and their individual biases for a latent, continuously valued target variable like disease severity. Lastly, the performance of ML algorithms needs to be evaluated in a clinically meaningful manner. We address the disconnect between the subjective quality perception of clinical experts and the metrics that are typically used to evaluate performance. Furthermore, we identify criteria that experts use to evaluate the quality of automatically generated segmentations and describe their thought processes as they correct them.&#13;
&#13;
Based on the lessons from our work, we conclude with concrete recommendations for developing robust and trustworthy ML tools for medical imaging.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151598</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fourier decoupling for convex sequences</title>
<link>https://hdl.handle.net/1721.1/151597</link>
<description>Fourier decoupling for convex sequences
Fu, Yuqiu
We study the decoupling theory for functions on R with Fourier transform supported in a neighborhood of a convex sequence [formula], where [formula] and &#119892; : [0, 1] → R is a &#119862;² function satisfying &#119892;′(&#119909;) &gt; 0, &#119892;′′(&#119909;) &gt; 0 for every &#119909; ∈ [0, 1]. We utilize the wave packet structure of functions with frequency support in a neighborhood of an arithmetic progression.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151597</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New degrees of freedom in integrable models with q-Hahn weights and their applications to symmetric functions and probability.</title>
<link>https://hdl.handle.net/1721.1/151596</link>
<description>New degrees of freedom in integrable models with q-Hahn weights and their applications to symmetric functions and probability.
Korotkikh, Sergei
We present three groups of results about integrable lattice models constructed from orthogonality weights of q-Hahn polynomials. First, we establish that the q-Hahn orthogonality weights appear as matrix coefficients in certain isomorphisms between tensor products of representations of the quantum affine sl₂ algebra. This allows us to find new integrable degrees of freedom in q-Hahn models by constructing an integrable vertex model on a square lattice with weights coming not from an R-matrix, as is usually the case, but from our isomorphisms.&#13;
&#13;
Second, we use the partition function of our new vertex model to construct a generalization of t=0 Macdonald symmetric functions, which we call inhomogeneous spin q-Whittaker polynomials. Using integrability, we are able to extend several classical properties of symmetric functions to our generalization; in particular, we prove analogues of the Cauchy and dual Cauchy identities. Moreover, we are able to characterize spin q-Whittaker polynomials by vanishing at certain points, which leads to the discovery of interpolation analogues of q-Whittaker and elementary symmetric polynomials.&#13;
&#13;
Finally, we introduce a (colored) stochastic version of our vertex model and prove explicit integral expressions for q-deformed moments of its (colored) height functions. Following known techniques, our stochastic model can be interpreted as a q-discretization of the Beta polymer model with three families of integrable parameters, and we extend the known results about Tracy-Widom large-scale fluctuations to this generalization of the polymer model.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151596</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Homotopically nontrivial area contracting maps</title>
<link>https://hdl.handle.net/1721.1/151595</link>
<description>Homotopically nontrivial area contracting maps
Kumanduri, Luis
In this thesis, we investigate the existence of area contracting maps in a given homotopy class. In particular, we give various generalizations of Guth’s h-principle for k-dilation of maps between spheres to other manifolds. This includes all maps from highly connected domains and maps to highly connected targets with a homology vanishing condition. We give necessary and sufficient conditions for a homotopy class of maps &#119883;ᵐ → &#119884;ⁿ between closed oriented manifolds to have representatives with small &#119896;-dilation for &#119896; = &#119899;, &#119899; − 1 when &#119896; &gt; (&#119898; + 1)/2.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151595</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pyrometallurgical Oxide-Sulfide Anion Exchange for &#13;
Improved Material Separation and Metal Production</title>
<link>https://hdl.handle.net/1721.1/151594</link>
<description>Pyrometallurgical Oxide-Sulfide Anion Exchange for &#13;
Improved Material Separation and Metal Production
Stinn, Caspar R.
Efforts to decarbonize, reduce water consumption, and respond to increasing feedstock complexity motivate the search for new processing pathways in mining, recycling, and metal production. Currently, environmentally taxing and economically burdensome chemical separations are required to produce pure compounds amenable to materials manufacturing and metal production. An alternative approach is the development of chemical pretreatments that enable low cost, environmentally sustainable physical separations instead. This can be accomplished via pyrometallurgical anion exchange chemistry. Revisiting separation pathways provides an added benefit: new separation chemistries can enable improved downstream processes for metal production without direct greenhouse gas emissions. Sulfur-based routes are expected to be particularly versatile. However, kinetic and thermodynamic unknowns currently hinder the deployment of sulfur-based separation chemistries.&#13;
&#13;
Herein, thermodynamic modeling and oxide sulfidation kinetic measurements enable, for the first time, the establishment of an integrated thermodynamic, kinetic, and mass transport framework for pyrometallurgical oxide-sulfide anion exchange. Selective sulfidation via this methodology is established to be a low cost, sustainable pretreatment that enables high performance, environmentally friendly physical separations in place of legacy chemical approaches. This approach is shown to be effective across a range of modern materials processing challenges, including rare earth separation, lithium ion battery recycling, metal slag recycling, and commodity mineral processing. Separation metrics achieved through pyrometallurgical selective sulfidation exhibit order of magnitude improvements over state of the art hydrometallurgical pathways. Life cycle and technoeconomic assessments reveal that these benefits come at a fraction of the cost and environmental impact of incumbent pathways.&#13;
&#13;
Furthermore, novel sulfide products from pyrometallurgical oxide-sulfide anion exchange are found to be amenable to metal production via simple vacuum thermal treatments. Aluminothermic reduction via reactive vacuum distillation is shown to enable manufacturing of a range of alloys from sulfide feedstocks without direct greenhouse gas emissions, including aluminum-manganese, aluminum scandium, ferronickel, ferrochromium, and iron-rare earth alloys. Together, sulfide-based processing pathways are found to unlock new synergies in metal separation and reduction for a sustainable materials future.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151594</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Traversing catalytic contexts for interrogation and design of carbon conversion electrocatalysts</title>
<link>https://hdl.handle.net/1721.1/151593</link>
<description>Traversing catalytic contexts for interrogation and design of carbon conversion electrocatalysts
Zeng, Joy Shuang
Driving chemical reactions with voltage provides an opportunity to perform thermodynamically difficult reactions at mild temperatures and pressures. One useful chemistry to perform electrochemically is CO₂ conversion. Converting CO₂ into value-added chemicals could be one strategy for abating atmospheric carbon, and electrochemistry is well-suited to provide the driving force required for CO₂ conversion reactions that tend to be highly endergonic. However, amidst the physical complexity of electrified interphases, both mechanistic inquiry and rational catalyst design remain challenging. In this work, we leverage concepts from fields outside of electrocatalysis to establish new strategies for both the interrogation and design of promising electrocatalysts.&#13;
&#13;
We first discuss strategies for interrogating electrocatalytic reaction mechanisms. This is described in the context of CO₂ reduction reaction (CO₂RR) to carbon monoxide (CO) at immobilized metal tetrapyrroles. We detail how collecting and quantitatively analyzing reaction rate data over a wide range of reaction conditions illuminated new details of the CO₂RR reaction mechanism at cobalt phthalocyanine (CoPc). Such mechanistic analysis strategies are often used in heterogeneous thermocatalysis, and this work sets a precedent for also using them in electrocatalysis. We also report a robotic system that automates collection of reaction rate data. We report how the robotic system was used to expand our CO₂RR mechanistic analyses to additional metal tetrapyrroles such as cobalt tetraphenyl porphyrin (CoTPP). Together these works establish a foundation for applying more rigorous kinetic analyses in the space of electrocatalysis. &#13;
&#13;
We next discuss strategies for designing electrocatalytic active sites. We show how a new electrochemical C–C bond formation catalyst was developed by sequentially electrifying known hydroformylation catalysts. We show that electrification of a known organometallic catalyst leads to mechanistically distinct, voltage-driven reactivity. This work pioneers the design principle of using known reactivity from thermal catalysis as an experimental starting point for developing new electrocatalysts.&#13;
&#13;
Together, these works provide new interdisciplinary approaches for interrogating and designing electrocatalytic interphases. In the context of carbon conversion, this work has contributed insight into the reaction mechanisms of known CO₂RRs and demonstrated new catalysts that further upgrade common products of CO₂RR for greater value-add.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151593</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Elucidation and Therapeutic Improvement of Anti-CTLA-4 Therapies</title>
<link>https://hdl.handle.net/1721.1/151592</link>
<description>Mechanistic Elucidation and Therapeutic Improvement of Anti-CTLA-4 Therapies
Lax, Brianna Marie
Anti-CTLA-4 antibodies have successfully elicited durable tumor regression in the clinic; however, long-term benefit is limited to a subset of patients for select cancer indications. The incomplete understanding of their mechanism of action has hindered efforts at improvement, with conflicting hypotheses proposing either antagonism of the CTLA-4:B7 axis or Fc effector-mediated regulatory T cell (Treg) depletion governing efficacy. Here we report the engineering of a non-antagonistic CTLA-4 binding domain (b1s1e2) that depletes intratumoral Tregs as an Fc fusion. Comparison of b1s1e2-Fc to 9d9, an antagonistic anti-CTLA-4 antibody, allowed for determination of the separate contributions of CTLA-4 antagonism and Treg depletion to efficacy. Despite equivalent levels of intratumoral Treg depletion, 9d9 achieved more long-term cures than b1s1e2-Fc in MC38 tumors, demonstrating that CTLA-4 antagonism provided additional survival benefit. Consistent with prior reports that CTLA-4 antagonism enhances priming, treatment with 9d9, but not b1s1e2-Fc, increased the percentage of activated T cells in the tumor-draining lymph node (tdLN). Treg depletion with both constructs was restricted to the tumor due to insufficient surface CTLA-4 expression on Tregs in other compartments to elicit Fc effector-mediated Treg depletion. Through intratumoral administration of diphtheria toxin (DT) in Foxp3-DTR mice, we show that depletion of both intratumoral and intranodal Tregs provided even greater survival benefit than 9d9, consistent with Treg-mediated restraint of priming in the tdLN. Lastly, we engineered a CTLA-4-targeted enzyme fusion as a potentially translatable approach for combined intratumoral and intranodal Treg depletion. Preliminary data suggest that CTLA-4 targeting increases local Treg death as a result of proximal enzymatic activity, but further characterization remains to be done. 
Overall, our data demonstrate that anti-CTLA-4 therapies require both CTLA-4 antagonism and intratumoral Treg depletion for maximum efficacy, but that future therapies capable of depleting intranodal Tregs could show superior efficacy even in the absence of CTLA-4 antagonism.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151592</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous First-Principles Design of Transition Metal Complexes</title>
<link>https://hdl.handle.net/1721.1/151591</link>
<description>Autonomous First-Principles Design of Transition Metal Complexes
Arunachalam, Naveen
Designing novel transition metal complexes (TMCs) holds immense potential for advancing sustainability and chemical synthesis. While high-throughput virtual screening (HTVS) workflows and density functional theory (DFT) have emerged as powerful tools for discovering new TMCs, exhaustive design space exploration remains a formidable challenge due to the combinatorial growth of possible complexes with respect to ligands, metals, oxidation states, and ligand field symmetries. Machine learning models trained on databases of TMC structures and their computationally derived properties offer a promising avenue for rapidly and accurately evaluating the molecular properties of novel TMCs. However, the extrapolation error of these models in new regions of chemical space depends on the chemical space spanned by training data, making it crucial to carefully design training data to optimize the performance of HTVS workflows.&#13;
&#13;
This thesis presents a comprehensive exploration of transition metal complex design, encompassing fundamental chemical insights and HTVS improvements derived from the systematic enhancement of database coverage, such as expanded coverage of ligand field symmetries, metal identities, and similarity to experimentally synthesized complexes. In addition, this work introduces interactive web-based tools for the design of TMCs and metal-organic frameworks, allowing users to explore and provide feedback on machine learning models. These tools facilitate collaboration between researchers and promote the iterative improvement of models by incorporating user feedback, leading to more effective and efficient exploration of the vast chemical space of TMCs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151591</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Decision Making in Operations Management</title>
<link>https://hdl.handle.net/1721.1/151589</link>
<description>Data-Driven Decision Making in Operations Management
Gong, Xiaoyue
Encouraged by the plethora of advances in artificial intelligence (AI) in the past decade, this thesis studies the lengths to which new technologies can push various business operations, in theoretical understanding and practical performance alike. Toward this goal, this thesis develops data-driven decision-making methods for a selection of challenging emerging problems in supply chain and other business operations. &#13;
&#13;
In the first module of the thesis (Chapters 2 and 3), we develop reinforcement learning methods with provable optimality guarantees for inventory management problems. The challenge in the inventory problems we study is that the demand distribution varies over time according to natural cyclic patterns (such as weekly sales cycles), and we are in the online setting, where we have no prior knowledge of the demand distribution and no access to prior data. These inventory models have been carefully studied for decades in the offline setting, where the cyclic demand distribution is known beforehand; however, very few results have been attained in the online setting. The complexity of the problem motivated us to introduce reinforcement learning. We design a reinforcement learning algorithm that achieves an optimal regret bound for the inventory models with unknown cyclic demands studied in these chapters.&#13;
&#13;
In the second module of the thesis (Chapter 4), we study online assortment optimization for reusable resources. E-commerce platforms like Amazon and Expedia constantly endeavor to recommend more favorable assortments of products and services to their customers. The choice of assortment influences customer purchasing decisions, and can thus significantly impact the platform’s revenue. We consider assortment optimization with reusable resources, which means that the product returns to the inventory once the customer has finished using it. Reusability arises in major applications including cloud services, physical storage, and make-to-order service. The unpredictability of the usage times means that planning ahead becomes more challenging. We show that a simple greedy policy is 1/2 competitive for online assortment optimization with reusable resources. This means that on average, the greedy policy earns at least half the revenue of a clairvoyant optimal policy which has access to much more information. This result is surprising because the greedy policy does not take into account the customer or usage time distributions, both of which are necessary to solve for the optimal policy. &#13;
&#13;
In the third module of the thesis (Chapter 5), we develop practical solutions for the cloud service supply chain at Microsoft Azure. The cloud computing industry boomed in the past few years as digitization continues to take place globally and as remote work becomes more of a norm. A main challenge faced by cloud service providers is to deploy cloud server hardware under demand uncertainty, without incurring unnecessarily large operational costs. We formulate the underlying optimization problem as a two-stage stochastic program. We then develop exact Benders-type algorithms that exploit the structure of the second stage problem. We test our proposed algorithms with numerical experiments based on real production traces from Microsoft Azure, which demonstrate noticeable advantages of our algorithms over existing heuristics used in production. Given the large scale of the problem, our deployment policy could potentially lead to savings of hundreds of millions of dollars per year.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151589</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Modeling and Design of Sparse Deep Neural Network Accelerators</title>
<link>https://hdl.handle.net/1721.1/151571</link>
<description>Systematic Modeling and Design of Sparse Deep Neural Network Accelerators
Wu, Yannan
Sparse deep neural networks (DNNs) are an important computation kernel in many data and computation-intensive applications (e.g., image classification, speech recognition, and language processing). The sparsity in such kernels has motivated the development of many sparse DNN accelerators. However, despite the abundant existing proposals, there has not been a systematic way to understand, model, and develop various sparse DNN accelerators. &#13;
&#13;
To address these limitations, this thesis first presents a taxonomy of sparsity-related acceleration features to allow a systematic understanding of the sparse DNN accelerator design space. Based on the taxonomy, it proposes Sparseloop, the first analytical modeling tool for fast, accurate, and flexible evaluations of sparse DNN accelerators, enabling early-stage exploration of the large and diverse sparse DNN accelerator design space. Across representative accelerator designs and workloads, Sparseloop achieves over 2000× faster modeling speed than cycle-level simulations, maintains relative performance trends, and achieves ≤ 8% average modeling error. &#13;
&#13;
Employing Sparseloop, this thesis studies the design space and presents HighLight, an efficient and flexible sparse DNN accelerator. Specifically, HighLight accelerates DNNs with a novel sparsity pattern, called hierarchical structured sparsity, with the key insight that we can efficiently accelerate diverse degrees of sparsity (including dense) by having them hierarchically composed of simple sparsity patterns. Compared to existing works, HighLight achieves a geomean of up to 6.4× better energy-delay product (EDP) across workloads with diverse sparsity degrees, and always sits on the EDP-accuracy Pareto frontier for representative DNNs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151571</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Learning in Sequential Games</title>
<link>https://hdl.handle.net/1721.1/151570</link>
<description>Near-Optimal Learning in Sequential Games
Yu, Tiancheng
Decision making is ubiquitous, and some problems become particularly challenging due to their sequential nature, where later decisions depend on earlier ones. While humans have been attempting to solve sequential decision making problems for a long time, modern computational and machine learning techniques are needed to find the optimal decision rule. One popular approach is the reinforcement learning (RL) perspective, in which an agent learns the optimal decision rule by receiving rewards based on its actions.&#13;
&#13;
In the presence of multiple learning agents, sequential decision making problems become sequential games. In this setting, the learning objective shifts from finding an optimal decision rule to finding a Nash equilibrium, where none of the agents can increase their reward by unilaterally switching to another decision rule. To handle both the sequential nature of the problem and the presence of the other learning agents, multi-agent RL tasks require even more data than supervised learning and single-agent RL tasks. Consequently, sample efficiency becomes a critical concern for the success of multi-agent RL.&#13;
&#13;
In this thesis, I study arguably the most fundamental problems of learning in sequential games: &#13;
&#13;
1. (Lower bound) How many samples are necessary to find a Nash equilibrium in a sequential game, no matter what learning algorithm is used? &#13;
2. (Upper bound) How to design (computationally) efficient learning algorithms with sharp sample complexity guarantees? &#13;
&#13;
When the upper and lower bounds match, (minimax) optimal learning is achieved. It turns out that exploiting the structure of sequential games is the key to optimal learning. In this thesis, we investigate near-optimal learning in two types of sequential games: &#13;
&#13;
1. (Markov games) All the agents can observe the underlying states (Chapter 2) and, &#13;
2. (Extensive-form games) Different agents can have different observations given the same state (Chapter 5). &#13;
&#13;
To achieve near-optimal learning, a series of novel algorithmic ideas and analytical tools will be introduced, such as &#13;
&#13;
1. (Adaptive uncertainty quantification) Sharp uncertainty quantification of the value function estimations to design near-optimal exploration bonus (Chapter 3), &#13;
2. (Certified policy) A non-uniform and step-wise reweighting of historical policies to produce approximate Nash equilibrium policies (Chapter 4), &#13;
3. (Balanced exploration) Achieving optimal exploration of a game tree based on the size of its subtrees (Chapter 6), &#13;
4. (Log-partition function reformulation) Re-interpreting classical algorithms as computing gradients of a log-partition function (Chapter 7), &#13;
&#13;
which may be of independent interest.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151570</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Innovation Challenges in NASA’s Planetary Program and a Policy Framework for Sustainable and Equitable Space Resource Utilization</title>
<link>https://hdl.handle.net/1721.1/151566</link>
<description>Innovation Challenges in NASA’s Planetary Program and a Policy Framework for Sustainable and Equitable Space Resource Utilization
Nasr, Maya
The overall goal of this research is to systematically study planetary technology innovation, its challenges, and paths forward in the space sector, from institutional, strategic, policy, and legal vantage points. Part I of this thesis delves into the challenges and opportunities for innovation in planetary technology at NASA. Six technology case studies were analyzed to understand NASA’s enterprise architecture and its technology investment, development, and maturation frameworks, uncovering management and program challenges for efficient development and integration of innovative planetary technologies. The research identified policy, structural, and cultural challenges, highlighting the need for a fundamental shift in philosophy to incorporate new technology and risk into calls for proposals. The research also assessed the difficulties faced by NASA’s Jet Propulsion Laboratory (JPL) and suggested changes to its enterprise architecture. The Chaotic 2.0 architecture was found to be the most flexible, and a pain point analysis was conducted. An implementation strategy was proposed, and a future-proofing analysis was conducted to outline future phases of JPL’s enterprise architecture. Overall, the research provided valuable insights and recommendations for enhancing technology innovation and management within NASA and the broader space sector.&#13;
&#13;
Part II of this thesis proposes a sustainable and equitable policy framework for space exploration and natural resource utilization. The research reviewed existing policies, laws, and guidelines, identifying gaps and inadequacies in space resource governance. Drawing lessons from resource governance on Earth and from historical policies, the research recommended best approaches to policy and governance for space resources. These approaches were adapted to the unique circumstances of space, resulting in an improved plan for international management of space resources as multinational exploration and in-situ resource utilization (ISRU) increase.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151566</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of Neuromuscular Electrical Stimulation as a Bone Loss Countermeasure on a Long Duration Mars Mission</title>
<link>https://hdl.handle.net/1721.1/151554</link>
<description>Evaluation of Neuromuscular Electrical Stimulation as a Bone Loss Countermeasure on a Long Duration Mars Mission
Abitante, Thomas
During spaceflight, astronauts experience bone loss, in part, due to the absence of the skeletal loading normally experienced on Earth. The current ISS exercise regimen is not expected to sufficiently reduce the risk of fracture on a long duration mission to Mars (&gt;6 months), and additional exercise cannot be added because exercise is time-consuming, incurs high caloric demands, and future spacecraft will be unable to accommodate the current machines. Therefore, to minimize the risk of musculoskeletal injuries, additional skeletal loading should be introduced via non-exercise-based countermeasures. The purpose of this research is to examine the potential efficacy, practicality, and implementation of Neuromuscular Electrical Stimulation (NMES), a therapy that induces involuntary muscle contractions, to help mitigate astronaut bone loss resulting from microgravity. &#13;
&#13;
To accomplish this, we first produced a finite element analysis model of the femur, to determine the strain at the proximal femur during NMES contractions of the thigh muscles. This strain was compared to the strain produced during other activities that have been investigated for bone loss in order to infer efficacy. Second, we examined how healthy individuals, and subsequently different muscle characteristics, produce force and fatigue with NMES in order to inform both initial and progressive regimen design. Last, we performed a metabolic analysis, examining the metabolic cost of repetitive NMES to the thighs and lower legs, and comparing this cost to that of walking at various speeds. Determining the potential caloric cost of a NMES therapy will aid in determining feasibility as well as regimen design. &#13;
&#13;
The results of this research show that NMES can create strains at the proximal femur comparable to simple, non-exercise activity such as walking. The bouts of repetitive contractions would initially need to be a maximum of 5 to 10 minutes in duration to minimize fatigue and maximize the force of each contraction, and would incur only a modest caloric expenditure, minimizing the risk of a negative caloric balance when used as a supplement to exercise. Over the course of the day, hundreds of loading cycles could be added to the femur, as well as the tibia, increasing the skeletal loading, further reducing bone loss, and subsequently reducing the risk of fracture upon landing on Mars.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151554</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analog-to-Digital Converters for Secure and Emerging AIoT Applications</title>
<link>https://hdl.handle.net/1721.1/151539</link>
<description>Analog-to-Digital Converters for Secure and Emerging AIoT Applications
Chen, Ruicong
AI algorithms based on convolutional neural networks (CNNs), coupled with their high computational requirements, have stimulated the development of novel energy-efficient hardware. Analog neural networks (ANNs) with in-memory computing (IMC) using resistive random-access memory (RRAM) are promising architectures for reducing latency and increasing energy efficiency in IoT devices. However, interface circuitry, including the analog-to-digital converters (ADCs) between RRAM and digital components, is becoming the bottleneck of RRAM-based ANNs. To address this challenge, a direct hybrid encoding for signed expressions (HESE) SAR ADC is proposed to increase the sparsity of the ADC output.&#13;
&#13;
In addition to performance requirements, the security of IoT devices is of paramount importance. An attacker can perform an ADC power side-channel attack (PSA) to expose confidential information by tapping into the power supply of the ADC. This attack exploits the strong correlation between the ADC digital output codes and the ADC power supply using a neural-network-based PSA. Previous works have implemented current equalizers or noise injection to protect ADCs from PSAs. However, a current equalizer introduces a large area and energy overhead for the ADC, which is not ideal for IoT applications, and the previous work with noise injection only protects against probing of the CDAC supply. To overcome these limitations, two secure ADCs are proposed to improve both energy efficiency and security, making them more suitable for real-world applications.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151539</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Decision Making for Healthcare Operations: Models and Implementation</title>
<link>https://hdl.handle.net/1721.1/151523</link>
<description>Optimal Decision Making for Healthcare Operations: Models and Implementation
Na, Liangyuan
Healthcare systems face financial and operational challenges to provide high-quality patient care. Advances in machine learning and scalable optimization have led to opportunities for utilizing analytics to improve healthcare operations. However, only a limited number of models have been extensively deployed in practice due to the complex nature and strict regulations of healthcare systems. This thesis aims to develop and deploy practical analytics models that support strategic, tactical, and operational decision making in healthcare systems.&#13;
&#13;
A large part of the thesis involves close collaborations with Hartford HealthCare (HHC), the largest hospital network in Connecticut, spanning seven hospitals with $5 billion in annual revenue. In Chapter 2, we optimize nurse staffing at the Emergency Department (ED) of Hartford Hospital. We develop a two-phase methodology: (a) a robust optimization model to allocate aggregate staffing levels, followed by (b) mixed integer optimization models to schedule each nurse. Then in Chapter 3, we develop machine learning models to predict eight patient operational outcomes related to discharge, mortality, and intensive care for all inpatients at seven hospitals. We build an online daily pipeline from data extraction to prediction-driven decision support.&#13;
&#13;
More importantly, we implement our models in two-module end-to-end software deployed in large-scale production, supporting the daily decision making of over 400 users, including doctors, nurses, and managers, across seven hospitals at HHC. The nurse scheduling module provides a labor-free process from input collection to schedule output, improving patient coverage and nurse satisfaction at reduced cost. The patient outcome prediction module is deeply integrated into medical providers’ daily workflow, identifying timely discharges and patient exacerbation. HHC reports better staff workflow and patient care, together with substantial benefits in reduced length of stay and increased financial margins.&#13;
&#13;
In the final part of the thesis, we collaborate with a mobile health (mHealth) application, HeartSteps, designed to reduce sedentary behavior and promote physical activity in individuals with hypertension. In Chapter 4, we develop innovative batch off-policy learning methods to optimize the app’s digital intervention by sending anti-sedentary messages to users. Our interpretable decision-tree-based policy improves treatment effects and guides future clinical trials.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151523</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Algorithmic Learning and Uncertainty Quantification</title>
<link>https://hdl.handle.net/1721.1/151512</link>
<description>Essays on Algorithmic Learning and Uncertainty Quantification
Vijaykumar, Suhas
The thesis consists of three essays. The first, titled “Localization, Convexity, and Star Aggregation,” develops new analytical tools based upon the offset Rademacher complexity for studying stochastic optimization in non-convex domains, including statistical prediction and model aggregation problems. Using these tools, I show that a simple procedure called the star algorithm can recover near-optimal convergence rates for non-parametric logistic regression in non-convex models.&#13;
&#13;
The second essay, titled “Kernel Ridge Regression Inference,” introduces a new technique for deriving sharp, non-asymptotic, uniform Gaussian approximation for partial sums in a reproducing kernel Hilbert space, which is then applied to construct uniform confidence bands for the widely-used kernel ridge regression algorithm.&#13;
&#13;
The third and final essay, titled “Frank-Wolfe Meets Metric Entropy,” uses ideas from asymptotic geometry to derive new dimension-dependent and domain-specific lower bounds for conditional gradient algorithms, a class of optimization procedures including the popular Frank-Wolfe algorithm and many of its variants. Such algorithms have found extensive use in machine learning and high-dimensional statistics, motivating a more thorough analysis of their limitations in high-dimensional problems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151512</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scattering at threshold in massive wave propagation and ionization</title>
<link>https://hdl.handle.net/1721.1/151508</link>
<description>Scattering at threshold in massive wave propagation and ionization
Sussman, Ethan W.
In this thesis, we examine the transition between Legendrian regularity and ellipticity at infinity for two PDEs, the Schrödinger–Helmholtz equation with an attractive long range potential and the Klein–Gordon equation. In the former case, the transition occurs as the spectral parameter &#119864; → 0⁺, and thus describes the ionization of Hydrogenic atoms, and in the latter case the transition occurs at null infinity, where the phase in the asymptotic tails becomes singular. Using novel microlocal tools, we work in the variable coefficient setting throughout.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151508</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights into the substrate specificities, interactions and regulatory mechanisms of bacterial glycoconjugate biosynthesis enzymes</title>
<link>https://hdl.handle.net/1721.1/151506</link>
<description>Insights into the substrate specificities, interactions and regulatory mechanisms of bacterial glycoconjugate biosynthesis enzymes
Anderson, Alyssa J.
Monotopic phosphoglycosyl transferase enzymes (MonoPGTs) are membrane-bound enzymes that initiate the assembly of prokaryotic glycoconjugates essential for bacterial survival and proliferation. MonoPGTs belong to an expansive superfamily with a diverse and richly annotated sequence space; however, limited biochemical characterization of this enzyme superfamily has been performed. To close the gap between sequence annotation and biochemical characterization of these critical enzymes, this thesis presents methods for characterizing the substrate specificities, interactions, and regulatory mechanisms of monoPGTs.&#13;
&#13;
Substrate specificities of monoPGTs are largely unknown; therefore, connecting the sequences of monoPGTs to the composition of the final glycoconjugate produced remains a significant challenge. In Chapter 2, structural, sequence, and biochemical analyses are combined to identify co-conserved sequence “fingerprints” that predict the substrate specificity for a subset of monoPGTs that utilize UDP-N,N’-diacetylbacillosamine. The methodology described is generalizable and may be applied to identify sequence motifs that serve as fingerprints for monoPGTs of differing substrate specificities. &#13;
&#13;
Evidence suggests glycoconjugate biosynthesis machinery forms complexes to centralize glycoconjugate production in the cell. However, the protein-protein interactions that comprise these complexes are poorly understood. In Chapter 3, I explore interactions among a subfamily of monoPGTs known as large (Lg) PGTs in near-native membrane mimetic systems. In this work, I characterize LgPGT dimerization using detergent-free solubilization and chemical crosslinking. This approach to studying protein-protein interactions of membrane-bound enzymes can be broadly applied to characterize interactions among glycoconjugate biosynthesis enzymes as a whole. &#13;
&#13;
The mechanisms that govern monoPGT regulation remain enigmatic. In Chapter 4, activity-based protein profiling (ABPP) probes are implemented as protein-centric, membrane-protein-compatible tools that lay the groundwork for understanding the activity and regulation of the monoPGT superfamily within a cellular proteome. Robust, covalent labeling at the active site of various representative monoPGTs in unfractionated cell membrane fractions is demonstrated using 3-phenyl-2H-azirine probes.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151506</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Liouville Properties and Dimensionality Bounds for Harmonic and Caloric Functions</title>
<link>https://hdl.handle.net/1721.1/151505</link>
<description>Liouville Properties and Dimensionality Bounds for Harmonic and Caloric Functions
Gui, Feng
Classical Liouville type theorems claim that solutions to certain elliptic or parabolic PDE are trivial provided some generic constraints about the function and the underlying space. When the solution space is not trivial, one can ask whether it is a linear space with finite dimension. In this thesis, we study several Liouville properties in geometric analysis. First, we prove a Hamilton type and a Souplet-Zhang type gradient estimates which imply a strong Liouville theorem for ancient f-caloric functions with certain growth assumption on smooth metric measure spaces. Second, we generalize Colding-Minicozzi’s result to estimate the dimension of polynomial growth f-caloric functions. We apply some of these results to gradient shrinking Ricci solitons. Lastly, we prove a dimensionality bound for exponential growth solutions to a parabolic type equation on an infinite strip.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151505</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein-based Degrader Strategies against Oncogenic RAS</title>
<link>https://hdl.handle.net/1721.1/151504</link>
<description>Protein-based Degrader Strategies against Oncogenic RAS
Cheah, Keith M.
Targeted therapies have emerged as a promising cancer treatment strategy by inhibiting proteins specific to cancer cells while preserving healthy cells. RAS proteins play a crucial role in cancer development and are associated with increased tumor growth and invasion. Mutations in RAS genes, especially KRAS G12D, are present in a large proportion of human cancers. However, these proteins have been deemed "undruggable" due to the absence of binding pockets for small-molecule drugs, and the significant challenge of delivering protein-based inhibitors across the cell membrane to reach intracellular RAS.&#13;
&#13;
This thesis focuses on the development of novel protein-based degrader strategies against KRAS G12D specifically. We first developed a generalizable solubilization strategy to address the low aqueous solubility of proteins that have undergone bioreversible esterification, a permeation strategy that involves raising the cationicity and hydrophobicity of the protein. We then engineered a cell-permeable KRAS-G12D-targeting degrader that consists of an esterified protein-based KRAS-G12D binder, R11.1.6, conjugated to a small-molecule ligand of the VHL E3 ligase, VL1. We confirmed the cytosolic entry of esterified R11.1.6-VL1 and demonstrated efficacy in human cancer cell lines through in vitro studies. Although modest efficacy in RAS degradation and growth inhibition was observed, this strategy presents a novel paradigm for targeting previously undruggable proteins.&#13;
&#13;
Finally, we build on previous work on intracellularly expressed KRAS-G12D-targeting biodegraders. Unlike the cell-permeable degrader described above, which includes a small-molecule component, biodegraders are fully protein-based constructs consisting of R11.1.6 conjugated to an E3 ligase itself. They can thus be genetically expressed in cells, eliminating the need for transmembrane delivery. The development of degraders has largely been limited to a trial-and-error approach, with little understanding of the effects of specific design components like linker length, linker rigidity, and target affinity. We utilized high-throughput fluorescence-based screening and regression modeling to determine the relative importance and effect of such design components on RAS degradation, offering several rational design principles that will inform the future development of RAS-targeting biodegraders.&#13;
&#13;
Overall, these findings offer a valuable contribution to the ongoing efforts in developing targeted therapies against RAS and potentially enabling RAS to become a more druggable target.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151504</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Mechanisms Underlying Belief Updating with&#13;
Applications in Wisdom of Crowds</title>
<link>https://hdl.handle.net/1721.1/151503</link>
<description>Essays on Mechanisms Underlying Belief Updating with&#13;
Applications in Wisdom of Crowds
Zhang, Yunhao
This dissertation consists of three chapters on understanding how people update their beliefs after learning about others’ opinions and how we can leverage belief-updating to improve Wisdom of Crowds.&#13;
&#13;
In Chapter One, I propose a new Revealed Expertise (RE) algorithm that uses the “RE measure”, which is a scaled amount of belief updating given numerical advice (i.e., the group mean), as a proxy for prior variance to better reflect the relative expertise of each agent in a crowd. The intuition, which I confirm both theoretically and empirically, is that those who are less swayed by the group mean tend to be more accurate in their initial judgment. Therefore, using inverse-variance weighting with the RE measures as the variance inputs outperforms the existing wisdom-of-crowds methods by over-weighting the more accurate initial judgments in the aggregation. Crucially, I demonstrate that while self-reported confidence reflects one’s feeling of uncertainty given one’s available information, advice-taking reveals the amount of information one has and has not taken into account in their initial judgment. Therefore, the RE algorithm is able to successfully identify the experts, even when self-reported confidence fails. &#13;
&#13;
In Chapter Two, I develop a boundedly rational model to characterize the relationship among stated confidence, uncertainty, expertise, and advice-taking. The semi-Bayesian belief-updating model I develop reconciles two important empirical phenomena. First, I demonstrate that even though agents can state a high confidence (i.e., low first-order uncertainty), they may put a large weight on the advice in belief-updating if their estimate of their stated confidence is imprecise (i.e., large second-order uncertainty due to their lack of information). Second, I show that the distance effect (i.e., the weight on advice tends to decrease as the distance between the initial estimate and the advice increases), a widely documented empirical pattern in advice-taking, can be a consequence of people updating their beliefs following a semi-Bayesian updating heuristic given their cognitive limitations.&#13;
&#13;
In Chapter Three, I propose an experimental paradigm to examine the role of (preference-based) motivated reasoning in biased advice-taking. In an incentivized task assessing the accuracy of nonpolitical news headlines, we find partisan bias in advice-taking. We then adjudicate between two possible mechanisms for this biased advice-taking: a preference-based account, where participants are motivated to take less advice from counter-partisans because doing so is unpleasant; versus a belief-based account, where participants sincerely believe co-partisans are more competent at the task (even though this belief is incorrect). To do so, we examine the impact of a substantial increase in the stakes, which should increase accuracy motivations (and thereby reduce the relative impact of partisan motivations). We find that increasing the stakes does not reduce biased advice-taking, and hence no evidence that the bias is driven by preference. Instead, in two follow-up experiments, we show evidence that the belief-based account is the main driver of the biased advice-taking.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151503</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Development and Labor Economics</title>
<link>https://hdl.handle.net/1721.1/151500</link>
<description>Essays in Development and Labor Economics
Sharma, Garima
This thesis comprises three chapters studying labor markets in developing countries. The first two chapters examine two sources of gender gaps in the labor market — gender differences in employers’ monopsony power over their workers, and the possibility that the decision-makers who design workplaces do not prioritize women’s needs when doing so. The final chapter focuses on a different population, of the poorest Indian households, and studies whether a “big-push” program providing these households with a large asset transfer can durably lift them out of poverty.&#13;
&#13;
The first chapter examines the extent and sources of gender differences in employers' monopsony power over their workers in Brazil. I exploit establishment-level demand shocks induced by the end of the Multi-Fiber Arrangement to show that women are substantially less likely than men to separate from an employer that lowers their wage. The implied gender difference in monopsony power would generate an 18pp gender wage gap among equally productive workers, explaining over half the raw gender wage gap. To study the source of this gender difference in monopsony power, I build and estimate a discrete choice model wherein employers can have more monopsony power over women either because women strongly prefer their current employer, or because they have fewer good employers than men. Of the 18pp monopsony gender gap, I find that 10 points are attributable to women’s stronger preference for their specific employer, and 8 points to the fact that good jobs for women are highly concentrated in the textile sector. Surprisingly, I show that this concentration is itself largely a product of amenities/disamenities present in different sectors, rather than gender-specific comparative advantage. My findings demonstrate that although the textile industry provides women desirable jobs, this desirability confers its employers with higher monopsony power. By contrast, desirable jobs for men are not similarly concentrated.&#13;
&#13;
The second chapter (joint with Viola Corradini and Lorenzo Lagos) investigates why workplaces are not better designed for women. In particular, we show that changing the priorities of those who set workplace policies can create female-friendly jobs. Starting in 2015, Brazil’s largest trade union federation, the Central Única dos Trabalhadores (CUT) made women central to its bargaining agenda. We use a difference-in-differences design to compare establishments negotiating with CUT-affiliated unions to those negotiating with non-CUT unions. We find that “bargaining for women” increases female-centric amenities in collective bargaining agreements as well as in practice. These changes cause women to queue for jobs at treated establishments and separate from them less—both of which are revealed preference measures of firm value. We find no evidence that the gain in amenities comes at the expense of either men or women's employment or wages, or of firm profits. Our results thus suggest that changing institutional priorities can narrow the gender compensation gap.&#13;
&#13;
The final chapter (joint with Abhijit Banerjee and Esther Duflo) studies the long-run effects of a "big-push" program that provides a large asset transfer to the poorest Indian households. The program is premised on the idea that the poor are stuck in a poverty trap, which implies that a one-time capital grant that makes very poor households substantially less poor ("big push") can set off a virtuous cycle that takes them out of poverty. In a randomized controlled trial that follows these households over ten years, we find that the program improves poor households' well-being over the long run, increasing their consumption by 0.6 standard deviations (SD), food security by 0.1 SD, income by 0.3 SD, and health by 0.2 SD. These effects grow for the first seven years following the transfer and persist until year ten. One main channel for persistence is that treated households take greater advantage of opportunities for income gains that arise naturally over time, such as by diversifying into lucrative wage employment and migration.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151500</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transparent Value Alignment: Foundations for Human-Centered Explainable AI in Alignment</title>
<link>https://hdl.handle.net/1721.1/151499</link>
<description>Transparent Value Alignment: Foundations for Human-Centered Explainable AI in Alignment
Sanneman, Lindsay
Alignment of autonomous agents' values and objectives with those of humans can greatly enhance these agents' ability to act flexibly to safely and reliably meet humans' goals across diverse contexts from space exploration to robotic manufacturing. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate their objectives comprehensively, accurately, and in forms that are readily usable for agent planning. Value alignment is an open challenge in artificial intelligence that aims to address this problem by enabling agents to infer human goals and values through interaction. Providing humans with direct and explicit feedback about this value learning process through approaches for explainable AI (XAI) can enable humans to more efficiently and effectively teach robots about their goals. In this thesis, we introduce the Transparent Value Alignment (TVA) paradigm which captures this two-way communication and inference process and discuss foundations for the design and evaluation of XAI within this paradigm. &#13;
&#13;
First, we introduce the Situation Awareness Framework for Explainable AI (SAFE-AI), which provides a rigorous approach for comprehensively determining a user's informational needs for their given role and context, identifying which XAI techniques can be applied to meet these needs or gaps in the current state-of-the-art, and evaluating explanation quality. We also review other human factors literature related to cognitive workload and trust in automation and discuss how these constructs additionally inform the design and evaluation of XAI systems. &#13;
&#13;
Next, we propose four metrics for assessing the alignment of reward functions between humans and autonomous agents (i.e., "reward alignment"). These metrics can be applied to study alignment in scenarios where the human's ground truth reward function is not necessarily directly accessible, as is the case in many real-world settings. We also validate these metrics through a human-subject experiment and a subsequent factor analysis. Findings from this factor analysis indicate the existence of two components comprising the overall reward alignment between humans and agents: feature alignment, which captures how similar a human's reward features and weights are to an agent's, and policy alignment, which captures how similar human and agent policies are for a given reward function.&#13;
&#13;
We also present a series of human-subject experiments which study the efficacy of a broad range of reward explanation techniques across multiple domains. These experiments consider variable reward complexity (defined as the number of features in the reward function), variable task complexity (defined as the number of tasks the human must perform simultaneously when the explanation is provided), and variable team complexity (defined as the number of agents performing the set of required tasks). The results from these experiments together suggest a trade-off between providing users with direct and complete information about the agent's reward function through XAI and increasing their workload. Abstraction-based explanations were a promising approach for balancing these factors, but results also indicated the importance of selecting appropriate abstractions for the particular domain, context, and user. Scenarios with higher team complexities (a larger number of agents) were also subjectively assessed more positively than those with lower team complexities, indicating that in terms of interpretability, simpler decoupled agent plans for larger numbers of agents may be preferable to more complex agent plans for fewer agents. &#13;
&#13;
Finally, we discuss how the TVA problem framing could be applied to real-world domains in the future through a set of case studies. In particular, we highlight findings from a study of key players in the industrial robotics ecosystem in Europe which identified the importance of developing improved robot interfaces and easier-to-program systems for robotics in manufacturing. We also discuss the applicability of TVA to space mission planning, which is informed by observations of the tactical planning process for the Mars Curiosity rover at the NASA Jet Propulsion Laboratory (JPL). Lastly, we discuss how TVA could be applied to human-autonomy teaming scenarios such as search-and-rescue mission planning.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151499</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Robust and Efficient Framework for Slice-to-Volume Reconstruction: Application to Fetal MRI</title>
<link>https://hdl.handle.net/1721.1/151495</link>
<description>A Robust and Efficient Framework for Slice-to-Volume Reconstruction: Application to Fetal MRI
Xu, Junshen
Volumetric reconstruction in the presence of motion is a challenging problem in medical imaging. When imaging moving targets, many modalities are limited to fast 2D imaging techniques that provide cross-sectional snapshots (2D images) of the subject in an attempt to "freeze" in-plane motion. However, inter-slice movement results in slice misalignment in 3D space, i.e., each image being an independent slice that fails to form a coherent volume for diagnosis and analysis. To this end, slice-to-volume reconstruction (SVR) has been proposed to reconstruct a high-quality 3D volume from misaligned 2D observations by performing inter-slice motion correction and super-resolution reconstruction. Existing SVR algorithms, however, have a limited capture range of slice motion and are time-consuming, particularly when producing high-resolution volumes.&#13;
&#13;
This thesis proposes a motion-robust and efficient machine learning framework for SVR, motivated by the application of magnetic resonance imaging (MRI) in assessing fetal brain development. We first introduce a slice-to-volume registration transformer that models input slices as a sequence and performs inter-slice motion correction by simultaneously predicting rigid transformations of all images in 3D space. We then reformulate the reconstruction problem using implicit neural representation, where the underlying volume is represented by a continuous function of 3D coordinates. This resolution-agnostic approach allows efficient reconstruction of high-resolution volumes. Finally, we extend this method to data that suffer from non-rigid motion by introducing an implicit motion field that captures slice-dependent deformation. These advances together enable robust and efficient 3D reconstruction and visualization in fetal MRI, benefiting diagnosis and downstream analysis. Additionally, the proposed framework has the potential for broader clinical implications in various applications that involve similar volumetric reconstruction problems.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151495</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanosecond Pulsed Plasmas in Dynamic Combustion Environments</title>
<link>https://hdl.handle.net/1721.1/151492</link>
<description>Nanosecond Pulsed Plasmas in Dynamic Combustion Environments
Pavan, Colin A.
Plasma assisted combustion (PAC) is a promising technology for extending combustion operating envelopes with a low energy cost relative to flame power. It has been investigated for use in various situations, particularly those where combustion is being performed near flammability limits imposed by equivalence ratio, residence time, etc. While the fundamental processes allowing plasma to modify combustion dynamics have been well studied, there are still many unresolved questions in determining the relative contribution of different actuation pathways in different situations (thermal enhancement, kinetic enhancement or transport-induced effects) and how the plasma will evolve and interact with the flame in a dynamic combustion environment. &#13;
 &#13;
The plasmas being used for PAC are typically non-equilibrium and are often produced by the nanosecond repetitively pulsed discharge (NRPD) strategy. The development of these discharges is highly dependent on both the applied voltage and the gas environment (composition, temperature, flow field, etc.). As the plasma affects the combustion, so too does the combustion affect the plasma structure and energy deposition pathways. This two-way coupling means that the plasma’s ability to modify the combustion, and the mechanisms by which it achieves these effects, will vary as the environment changes due to combustion dynamics. This impact of the combustion on the plasma has received considerably less attention than the other direction of interaction, especially in environments with transient or propagating flames. The first main objective of this thesis is to explore the development of NRPDs in dynamic combustion environments and in particular how the plasma develops on the timescales of transient combustion (many accumulated pulses). This is performed first in a laminar, mesoscale platform to probe the interaction in detail, and the important insights are later shown to be relevant to high power systems of practical interest. &#13;
 &#13;
While the impact of the plasma on the flame has been considerably better studied and the fundamental processes are well understood, there are still hurdles that must be overcome before PAC systems can begin to be designed and implemented for use outside of the laboratory. The development of versatile and flexible engineering models of the impact of the plasma will be necessary to allow system designers to make predictions about combustor operation when plasma is applied. The second main objective of this thesis is to develop such an engineering model and demonstrate its predictive capabilities across a variety of configurations. The model is developed for a laminar mesoscale platform and is shown to correctly predict the impact of the plasma in several different configurations, indicating a path forward towards physics-informed design of PAC systems. The model also provides important physical insight into the impact of the plasma on the flame, such as the role of pressure waves in disturbing the flame dynamics, even when considering uniform DBD discharges.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151492</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Allocating Scarce Resources: Modeling and Optimization</title>
<link>https://hdl.handle.net/1721.1/151489</link>
<description>Allocating Scarce Resources: Modeling and Optimization
Gilmour, Samuel
There are countless settings in which an authority must choose how to allocate scarce resources among a set of recipients. Deceased-donor organs must be allocated to patients, public school spaces to students, and public housing to residents. When resources are extremely scarce, it is particularly important for the authority to build an allocation system that achieves an acceptable trade-off between efficiency and equity. This thesis contributes several models and tools that both support the design of new allocation systems and extract insight from existing ones. In both cases, efficiency and equity take center stage.&#13;
&#13;
Chapter 2 considers the problem faced by an authority who allocates resources according to a scoring system. A scoring system is based on two foundations: a scoring rule, which is a function that computes scores for each recipient-resource pair based on some observable properties that relate the pair, and an allocation procedure, which determines the allocation using only the scores. We introduce a model that allocates a set of resource types among a set of patient types according to a scoring system, before presenting several optimization formulations and heuristics that directly optimize the scoring rule while scaling to practical problem sizes. We also show how a scoring rule of high quality in the type-based model can fail in a setting where individual recipients and resources exhibit within-type variation in properties, and suggest approaches that perform well when allocating individuals.&#13;
&#13;
Moving away from the specific setting of a scoring system, Chapter 3 shows that the ability for recipients to choose whether to accept or decline the offer of a resource can act as a hidden source of inequity in an allocation system. We formulate several game-theoretic models based on two groups of recipients, selective and non-selective, who display different propensities to accept or decline the offer of a resource. We define the notion of an equilibrium in these models and provide numerical experiments showing that inequity can arise directly as a result of the disparity in selectiveness between recipients.&#13;
&#13;
Chapter 4 studies a mass screening program for SARS-CoV-2 that was implemented in Greece during 2021, in which the Greek National Public Health Organization allocated a finite supply of mandatory self-tests among different segments of the population. We develop a novel compartmental model to describe the dynamics of the COVID-19 pandemic in Greece, placing particular focus on the testing procedures. We fit the model to detailed data to quantify the overall effectiveness of the program in reducing hospitalizations and deaths, and also to understand the effects of several operational decisions. We conclude that self-testing is an extremely important intervention to consider for pandemic preparedness.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151489</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretations of Machine Learning and Their Application to Therapeutic Design</title>
<link>https://hdl.handle.net/1721.1/151487</link>
<description>Interpretations of Machine Learning and Their Application to Therapeutic Design
Carter, Brandon M.
We introduce a framework for interpreting black-box machine learning (ML) models, discover overinterpretation as a failure mode of deep neural networks, and discuss how ML methods can be applied for therapeutic design, including a pan-variant COVID-19 vaccine.  While ML models are widely deployed and often attain superior accuracy compared to traditional approaches, deep learning models are functionally complex and difficult to interpret, limiting their adoption in high-stakes environments.  In addition to safer deployment, model interpretation also aids scientific discovery, where validated ML models trained on experimental data can be used to uncover biological mechanisms or to design therapeutics through biologically faithful objective functions, such as vaccine population coverage.&#13;
&#13;
For interpretation of black-box ML models, we introduce the Sufficient Input Subsets (SIS) method that is model-agnostic, faithful to underlying functions, and conceptually straightforward.  We demonstrate ML model interpretation with SIS in natural language, computer vision, and computational biological settings.  Using the SIS framework, we discover overinterpretation, a novel failure mode of deep neural networks that can hinder generalizability in real-world environments.  We posit that overinterpretation results from degenerate signals present in training datasets.  Next, using ML models that have been calibrated with experimental immunogenicity data, we develop a flexible framework for the computational design of robust peptide vaccines.  Our framework optimizes the n-times coverage of each individual in the population to activate broader T cell immune responses, account for differences in peptide immunogenicity across individuals, and reduce the chance of vaccine escape by mutations.  Using this framework, we design vaccines for SARS-CoV-2 that have superior population coverage to published baselines and are conserved across variants of concern.  We validate this approach in vivo through a COVID-19 animal challenge study of our vaccine.  This thesis demonstrates distinct ways model interpretation enables ML methods to be faithfully deployed in biological settings.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151487</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Segmentation Pipelines for Medical Imaging using Deep Learning</title>
<link>https://hdl.handle.net/1721.1/151486</link>
<description>Improving Segmentation Pipelines for Medical Imaging using Deep Learning
Patel, Jay Biren
One of the most important steps in the clinical workflow is the segmentation of medical imaging, which can be used for a variety of clinical decision-making tasks such as disease diagnosis and treatment response evaluation. Manual segmentation of 3D medical imaging (such as computed tomography (CT) or magnetic resonance imaging (MRI)) by a clinical expert can be too time-consuming to be feasible in a routine clinical workflow, and can moreover be susceptible to human errors and inconsistencies. In recent years, deep learning (DL) based methods have exhibited human-level performance for a variety of computer vision tasks, making them an attractive choice for researchers aiming to automate the segmentation of medical imaging. This thesis considers two medical imaging scenarios and examines how fully automatic image segmentation via DL can enhance downstream clinical tasks. &#13;
&#13;
The first scenario evaluates the clinical workflow for diagnosing incidental adrenal masses on CT. Despite standardized reporting systems and strict guidelines for defining an adrenal mass, there exists significant inter-rater variability for this task. To enable objective and reproducible characterization of the adrenal gland, this thesis develops the first DL method for segmentation and classification on CT. Using a large-scale retrospectively acquired dataset, this method is used to identify potential missed detections by radiologists, and the clinical implications of these findings are discussed. &#13;
&#13;
The second scenario focuses on the treatment response assessment of metastatic brain tumor patients on MRI. Due to the large number of metastases a patient can have, standard radiographic analyses track only a select few target lesions through the course of therapy in order to assess the efficacy of a treatment. With this paradigm, smaller non-target lesions may be neglected or even missed due to the lack of quantitative emphasis. To that end, a pipeline is developed to automatically segment brain tumor metastases on MRI and output standard response assessment metrics. With the prevalence of longitudinal imaging data available for brain metastases patients, a secondary model is formulated to improve the detection and segmentation of micro-metastases by utilizing known prior time point information.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151486</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Volumetric Optical Imaging of Tissue Microstructure for Grading of Dysplasia In Vivo</title>
<link>https://hdl.handle.net/1721.1/151485</link>
<description>Volumetric Optical Imaging of Tissue Microstructure for Grading of Dysplasia In Vivo
Cannon, Taylor M.
Optical imaging offers unique advantages in medical diagnostics. In particular, high resolution, fast imaging speed, and strong depth penetration make optical coherence tomography (OCT) an attractive modality for non-perturbatively investigating structural changes in tissue that accompany disease progression on the microscale. Although clinical interpretation of OCT images is generally qualitative, the scattered light intensity signal underlying these images may be analyzed further to calculate sub-resolution sample properties, accessing additional functional information in tissue. Developing meaningful ways to compute and interpret these properties could enable earlier detection of structural disease-associated tissue changes, such as increases in sizes and densities of epithelial nuclei with the progression of esophageal dysplasia. In this thesis research, we investigate the application of novel signal processing methods and custom OCT hardware to uncover biomarkers of dysplasia based on these pathological changes in cell nuclei. Further, we apply our methods to evaluate patients during upper gastrointestinal endoscopy using a custom imaging device designed for enhanced sensitivity to these dysplastic changes, potentiating a more sensitive screening paradigm towards earlier detection of esophageal cancer.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151485</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Econometrics, Causal Inference, and Machine Learning</title>
<link>https://hdl.handle.net/1721.1/151478</link>
<description>Essays on Econometrics, Causal Inference, and Machine Learning
Singh, Rahul
The traditional tools of econometrics may be inadequate for modern data sets, for example the 2020 US Census, which will be deliberately corrupted by the Census Bureau in the interest of privacy. Meanwhile, the modern tools of machine learning may be inadequate for the traditional goals of policy evaluation, which are to measure cause and effect and to assess statistical significance. &#13;
&#13;
In this dissertation, I develop tools for flexible causal inference, weaving machine learning into econometrics and solving unique problems that arise at their intersection. Specifically, I work in three domains at the intersection between econometrics and machine learning: (Chapter 1) causal inference with privacy protected data, (Chapter 2) rigorous statistical guarantees for machine learning, and (Chapter 3) simple algorithms for complex causal problems. &#13;
&#13;
JEL: C81, C45, C26.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151478</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Theory and Algorithms for Convex Optimization with Non-Standard Structures</title>
<link>https://hdl.handle.net/1721.1/151475</link>
<description>New Theory and Algorithms for Convex Optimization with Non-Standard Structures
Zhao, Renbo
Optimization models and algorithms have long played central and indispensable roles in the advancement of science and engineering. In recent years, first-order methods have played important roles in tackling applications arising in machine learning and data science, due to their simplicity, reasonably fast convergence rate, and low per-iteration computational cost. However, there exist many important applications that violate the fundamental assumptions on which existing first-order methods are based — specifically, the objective function, despite being convex, is neither Lipschitz nor has a Lipschitz gradient on the feasible region. The purpose of this thesis is to propose new optimization models for these “non-standard” problems, develop new first-order methods to solve these models, and analyze the convergence rate of these methods.&#13;
&#13;
In the first chapter, we present and analyze a new generalized Frank-Wolfe method for the composite convex optimization problem min_{x∈ℝⁿ} f(Ax) + h(x), where f is a θ-logarithmically-homogeneous self-concordant barrier, A is a linear operator and the function h has a bounded domain but is possibly non-smooth. We show that our generalized Frank-Wolfe method requires O((δ₀ + θ + R_h) ln(δ₀) + (θ + R_h)²/ε) iterations to produce an ε-approximate solution, where δ₀ denotes the initial optimality gap and R_h is the variation of h on its domain. This result establishes certain intrinsic connections between θ-logarithmically homogeneous barriers and the Frank-Wolfe method. When specialized to the D-optimal design problem, we essentially recover the complexity obtained by Khachiyan (1996) using the Frank-Wolfe method with exact line-search.&#13;
&#13;
In the second chapter, we present and analyze a new away-step Frank-Wolfe method for the convex optimization problem min_{x∈𝒳} f(Ax) + ⟨c, x⟩, where f is a θ-logarithmically-homogeneous self-concordant barrier, A is a linear operator, ⟨c, ·⟩ is a linear function and 𝒳 is a nonempty polytope. We establish the global linear convergence rate of our Frank-Wolfe method in terms of both the objective gap and the Frank-Wolfe gap. This, in particular, settles the question raised in Ahipasaoglu, Sun and Todd (2008) on the global linear convergence of the away-step Frank-Wolfe method specialized to the D-optimal design problem.&#13;
&#13;
In the third chapter, we propose a generalized multiplicative gradient (MG) method for a class of convex optimization problems, which, roughly speaking, involves minimizing a 1-logarithmically-homogeneous function over a “slice” of a symmetric cone. This problem class includes several important applications, including positron emission tomography, D-optimal design, quantum state tomography, and Nesterov’s relaxation of boolean quadratic optimization. We show, via the machinery of Euclidean Jordan algebra, that this generalized MG method converges with rate O(ln(n)/k), where n denotes the rank of the symmetric cone.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151475</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimentation and Control in Online Platforms</title>
<link>https://hdl.handle.net/1721.1/151474</link>
<description>Experimentation and Control in Online Platforms
Zheng, Andrew
Decision-making in many online platforms is naturally modeled as control of a large-scale dynamical system. In particular, these are typically offline control problems: the platform collects fine-grained offline datasets, either via an experiment or logging of some incumbent policy, and hopes to use this data to evaluate new control policies or improve existing ones. This thesis explores the statistical challenges involved in learning about policies in such an environment, where sample efficiency is paramount.&#13;
&#13;
One ubiquitous problem is that of experimentation under "Markovian" interference, where interventions on some experimental units impact other units through modifications to the shared system state (such as a limited inventory). The best existing estimators for this problem are largely heuristic in nature. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator, the Differences-In-Q’s (DQ) estimator, which achieves a striking bias-variance tradeoff: DQ can have exponentially smaller variance than off-policy evaluation, while incurring bias that is only second order in the impact of the intervention. In the process, we introduce new techniques for achieving practical bias-variance trade-offs in off-policy evaluation more generally. Chief among DQ’s advantages is its effectiveness in practice. Over the course of a six-month engagement, we implemented DQ on Douyin’s internal experimentation platforms. In the process, we demonstrated that DQ dominates state-of-the-art alternatives, and adapts readily to a variety of practical experimental settings and concerns.&#13;
&#13;
When more sophisticated experimental designs are available, a common alternative is to choose units of experimentation that are sufficiently coarse so as to eliminate interference. ‘Region-split’ experiments on online platforms, where an intervention is applied to a single region over some experimental horizon, are one example of such a setting. Synthetic control is the state-of-the-art approach to inference in such experiments. The cost of these experiments is high since the opportunity cost of a sub-optimal intervention is borne by an entire region over the length of the experiment. More seriously, correct inference requires assumptions limiting the ‘non-stationarity’ of test and control units, which we demonstrate to fail in practice. So motivated, we propose a new adaptive approach to experimentation, dubbed Synthetically Controlled Thompson Sampling (SCTS), which robustly identifies the optimal treatment without the non-stationarity assumptions of the status quo, and minimizes the cost of experimentation by incurring near-optimal, square-root regret.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151474</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atom-by-atom designs of metal oxide catalysts for the oxygen evolution reaction</title>
<link>https://hdl.handle.net/1721.1/151467</link>
<description>Atom-by-atom designs of metal oxide catalysts for the oxygen evolution reaction
Lunger, Jaclyn R.
The demand for sustainable energy sources has never been more pressing, and developing innovative, efficient, and eco-friendly energy technologies is essential. The oxygen evolution reaction (OER) holds great promise, as it enables the production of green chemicals, hydrogen gas, and pure metals electrolytically from their oxides, with oxygen as the sole byproduct. However, OER suffers from slow reaction kinetics and a lack of efficient artificial catalysts. In contrast, enzymes such as photosystem II exhibit high efficiency in performing this reaction, a result of millions of years of biological evolution. Thus, the goal of this research is to design optimal metal oxide catalysts for OER that rival the activity of enzymes.&#13;
&#13;
To identify highly efficient catalysts, a systematic approach surpassing current trial-and-error methods is necessary. Complex metal oxides offer a promising class of materials for catalyst design, with billions of potential local atomic structures and active site environments. Therefore, this research combines machine learning, site-level descriptor approaches, and high-throughput virtual screening using density functional theory (DFT) to design optimal metal oxide catalysts for OER. The goal is to identify atom-by-atom design principles that can enhance the efficient production of metals and hydrogen.&#13;
&#13;
In this study, we develop and curate a large open-source dataset of perovskite surfaces with substitutions up to quaternary and facets up to (555) to discover "enzyme-like" active sites. Additionally, we extend our understanding of H-OER to M-OER, a reaction that enables clean metal production analogous to clean hydrogen production, and develop electronic descriptors for screening metals with favorable electrolysis kinetics. Our research offers a comprehensive road-map for the theoretical, atom-by-atom design of catalysts for OER, bridging the gap between current heterogeneous catalysts and enzymes. By utilizing machine learning, DFT, and site-level descriptors, we propose a more targeted and effective approach to catalyst development. Ultimately, this research will contribute to the advancement of sustainable energy technologies and help address the global energy challenge.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151467</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Gestural Making: A framework for exploring the creative potential of gestures, materials, and computational tools</title>
<link>https://hdl.handle.net/1721.1/151465</link>
<description>Computational Gestural Making: A framework for exploring the creative potential of gestures, materials, and computational tools
Pinochet, Diego
The emergence of digital computation in design reinforced the traditional view that ‘to design’ is ‘to think,’ ‘to represent’ is ‘to plan,’ and ‘to make’ is ‘to fabricate.’ Under this computational design trichotomy, the uniqueness of the gesturing hand to sense, communicate, grasp, shape, and interface with the world has been traditionally overlooked, relegating making to a peripheral stage of the creative process where, apparently, no intellectual development occurs. I argue that hand gestures have the power to blur the limits imposed by the computational trichotomy, reframing design as an integrated process in which representing, thinking, and making are intertwined and inseparable. In this dissertation, I start from the assumption that ‘to make’ equals ‘to design,’ and propose a ‘computational gestural making’ framework to capture the potential of the interaction between human gestures, intelligent machine behavior, and material context. I explore the creative power of the thinking hand through the development of fabrication tools embedded with machine learning algorithms focusing on the interactive, material, and performative aspects of the making process. The scope of this doctoral research centers on establishing a Computational Gestural Making framework that (1) establishes a model for Human, Machine, and Material interaction; (2) outlines the development and assessment of a gesture-based framework for interactive design and fabrication as a method for computational gestural making; and (3) applies the proposed framework in case studies to assess the means and the effectiveness by which computational gestural making emerges as an alternative way of designing, embracing the uniqueness of the thinking hand as an agent for creating original and authored work.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151465</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence-Aided Synthesis and Characterization of 2D Materials</title>
<link>https://hdl.handle.net/1721.1/151462</link>
<description>Artificial Intelligence-Aided Synthesis and Characterization of 2D Materials
Lu, Ang-Yu
Semiconductor chips serve as the fundamental building blocks of modern electronics and form the core of artificial intelligence systems. However, as the technology node approaches the physical limits of Si, significant channel scattering, current leakage, and performance degradation will prevent further device scaling. To address this challenge, two-dimensional (2D) materials have emerged as promising candidates for next-generation transistors, to maintain the pace of Moore's Law—doubling the number of transistors every 18 months. The integration of AI and automation in material science has recently drawn significant attention, offering the potential to expedite and enhance material development processes. This thesis aims to develop an autonomous platform to accelerate 2D material synthesis with four distinct projects. First, we employ named entity recognition (NER) and extractive question-answering (EQA) models to extract experimental recipes, including categorical and numerical data, illustrating how to trace the trajectories within a single material and between two different materials. Additionally, we use generative language models to summarize and generate synthesis recipes for knowledge connections and knowledge transfers in the synthesis of 2D materials. Second, we explore the correlations between growth parameters and provide growth windows for high-quality hBN using a Gaussian process. Third, we demonstrate cost-effective automated synthesis and characterization systems for CVD-grown graphene by upgrading existing equipment and adopting open-source software and hardware solutions. Moreover, we propose an integrated autonomous platform that combines robotics, multiphysics simulations, machine learning, and automated synthesis and characterization systems for 2D material synthesis.
Finally, we systematically investigate the connections between PL signatures and Raman modes employing statistical analysis, convolutional neural networks, interpretable models, and support vector machines, delivering comprehensive insights into the physical mechanisms linking PL and Raman features. This thesis may serve as a potential framework for developing and discovering novel materials for next-generation electronics.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151462</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Product Returns Management in Online Retail</title>
<link>https://hdl.handle.net/1721.1/151442</link>
<description>Product Returns Management in Online Retail
Ibragimov, Marat
In Chapter 1, my coauthors and I study the problem of predicting the product return rate using the products’ visual information. In online channels, products are returned at high rates. Shipping, processing, and refurbishing are so costly that a retailer’s profit is extremely sensitive to return rates. Using a large dataset from a European apparel retailer, we observe that return rates for fashion items bought online range from 13% to 96%, with an average of 53% – many items are not profitable. Because fashion seasons are over before sufficient data on return rates are observed, retailers need to anticipate each item’s return rate prior to launch. We use product images and traditional measures available prelaunch to predict individual item return rates and decide whether to include the item in the retailer’s assortment. We complement machine-based prediction with automatically extracted image-based interpretable features. Insights suggest how to select and design fashion items that are less likely to be returned. Our illustrative machine-learning models predict well and provide face-valid interpretations – the focal retailer can improve profit by 8.3% and identify items with features less likely to be returned. We demonstrate that other machine-learning models do almost as well, reinforcing the value of using prelaunch images to manage returns.&#13;
&#13;
In Chapter 2, I consider customer search and product returns on the individual level. Previous research has focused on linking customers’ purchase and return decisions. However, online retailers also have access to information that precedes the purchase decision – customer search. I demonstrate that customer search information provides important insights about product returns. Using data from a large European apparel retailer, I propose and estimate a joint model of customer search, purchase, and return decisions. I then provide theory and data indicating that using search filters, viewing multiple colors of a product, spending more time, and purchasing the last item searched are negatively associated with the probability of a return. Finally, I use the proposed model to optimize the product display order on the retailer’s website.&#13;
&#13;
Chapter 3 extends and reinforces the results obtained in the previous chapters. In it, I study the assortment planning problem in the presence of frequent product returns. I develop a deep-learning model of customer search, purchase, and return. The model is based on a transformer framework and allows the recovery of important relations in the data. I use the estimated model to demonstrate that retailers could identify successful and unsuccessful products and modify the assortment accordingly. The modified assortment would increase the retailer’s sales while at the same time decreasing returns. Lastly, I provide qualitative insights on which products are most likely to be unsuccessful in online retail.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151442</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensorless Wavefront Correction Algorithms for Free-Space Optical Communications</title>
<link>https://hdl.handle.net/1721.1/151441</link>
<description>Sensorless Wavefront Correction Algorithms for Free-Space Optical Communications
Čierny, Ondrej
Free-space optical communications (FSOC) technology facilitates high-throughput wireless links across large distances with low size, weight, and power (SWaP) terminals. However, it is difficult to design reliable, low-cost FSOC terminals for long-range links through the atmosphere. Even in clear conditions, the effects of air turbulence along such links usually necessitate active wavefront correction via adaptive optics (AO). Conventional AO algorithms rely on direct wavefront sensing, an approach that is high in cost and SWaP and usually degrades in strong atmospheric scintillation. Sensing methods that are more tolerant to scintillation have been developed, but they are often more challenging to implement and further increase cost and SWaP.&#13;
&#13;
Sensorless wavefront correction algorithms, such as stochastic parallel gradient descent (SPGD), are preferable in terms of cost and SWaP, and have been used in FSOC terminals as methods to optimize the received signal strength indicator (RSSI). A key challenge with such algorithms, however, is that their convergence rate degrades as more atmospheric modes are optimized. This can lead to an inadequate correction rate due to limited bandwidth of the AO element and cause link interruptions. To maintain a sufficient link margin in such conditions, correction algorithms with better convergence properties are needed.&#13;
&#13;
This thesis focuses on the development and testing of a new non-stochastic algorithm for multimodal wavefront correction and a more general analysis of the circumstances where sensorless algorithms attain adequate performance for FSOC, including in strong scintillation. An end-to-end simulation environment is built to compare SPGD with the developed non-stochastic algorithm over a range of atmospheric conditions and hardware configurations. We show that in identical conditions, the non-stochastic algorithm either improves the link margin by 2–3 dB or relaxes the AO element bandwidth requirement by a factor of 2–3 compared to SPGD. Finally, the simulation results are validated in the laboratory under simulated atmospheric turbulence and compiled into a useful design tool for predicting sensorless wavefront correction performance.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151441</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Sensing and Integrated Systems Frameworks for Decision Support in Sustainable Development</title>
<link>https://hdl.handle.net/1721.1/151437</link>
<description>Remote Sensing and Integrated Systems Frameworks for Decision Support in Sustainable Development
Lombardo, Seamus
Local leaders in sustainable development face challenging decisions due to complex environmental phenomena, intersecting socioeconomic factors, diverse stakeholders, data scarcity, and constrained financial resources. Decision Support Systems (DSS) software can aid these stakeholders by improving understanding of systems dynamics and interrelated societal factors. However, flaws in existing DSS development and functionality often produce DSS that do not meet the objectives of local stakeholders, leading to DSS disuse. This research implements a novel DSS development process to address these issues in two case studies: flood resilience in Pekalongan, Indonesia, and natural resource management for the Yurok Tribe in California.&#13;
&#13;
First, the System Architecture Framework (SAF) uses stakeholder interviews as inputs to translate stakeholder objectives into DSS functions and forms. Targeted satellite remote sensing (SRS) of permanent water, shoreline change, and mangrove trends is conducted in Pekalongan, and forest trends and above-ground biomass are analyzed for the Yurok Tribe. Classification analyses achieve high overall accuracy (&gt;= 84%) and trend analyses have correlations to high-resolution data at a significance level of &#120572; &gt; 0.05. The Environment-Vulnerability-Decision-Technology (EVDT) integrated modeling framework is used to integrate local infrastructure and land use data towards insights for environmental impact mitigation decisions and community aid allocation. DSS user evaluations with Boston-area (n = 20), Indonesian (n = 37), and Yurok Tribe (n = 9) users are conducted to assess DSS utility and verify the mapping of SRS analyses to specific stakeholder decisions and economic metrics.&#13;
&#13;
High user information-relevancy (&lt;= 94%) and information-sufficiency (&lt;= 81%) ratings, 5 specific decisions mapped to the SRS analyses via dedicated stakeholder interviews, and 57 actionable comments from user studies, provide strong support for the use of SAF and user studies to improve DSS usefulness and accessibility. Higher understanding scores achieved by DSS users compared to control-briefing users on environmental (p = 0.0012), socioeconomic (p = 0.0093), and policy (p = 0.0043) questions, analyses of integrated SRS and local data that provide concrete insights for stakeholder decisions (such as inundation trends for agricultural adaptation budget allocation and forest trends for carbon sequestration project management), and positive stakeholder comments regarding DSS capabilities, support the theory that SRS data and EVDT can improve DSS functionality. &#13;
&#13;
Demonstrating the utility of a novel DSS design process in overcoming previous roadblocks to DSS use in sustainable development is this work’s core contribution. SAF to target stakeholder objectives, integration of accessible SRS analyses and local socioeconomic data via EVDT for actionable insights, and user studies to gather stakeholder feedback are the core elements of this novel design process. The DSS developed also provide tangible benefits to users, with local stakeholders expressing a strong desire for DSS institutionalization. Future work includes ensuring DSS longevity and the application of the DSS design process to other relevant case studies. Overall, this research collaborates directly with communities to confront environmental impacts, address challenging decisions, and advance sustainable development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151437</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Horizons of Artificial Intelligence in Quantum Computation</title>
<link>https://hdl.handle.net/1721.1/151436</link>
<description>Horizons of Artificial Intelligence in Quantum Computation
Kiani, Bobak T.
The potential emergence of practical quantum computers has guided research into their potential applications, particularly in the context of artificial intelligence. Motivated by the success of deep neural networks in classical machine learning, a prevailing hope is that such success will translate to so-called quantum variational algorithms or quantum neural networks inspired by their classical counterparts.&#13;
&#13;
Contemporary deep learning algorithms are primarily developed using a series of heuristics, which often lack rigorous proofs to justify their efficacy. Due to the opaque nature of these algorithms, providing definitive assurances regarding their performance remains a formidable challenge. Though this complexity extends to the quantum analogues of deep learning, a growing body of literature has identified a set of theoretical tools to better understand the reasons why classical machine learning models are so effective in real-world tasks. We use these tools to investigate these quantum analogues in an effort to partially address the question of when and under what conditions we can anticipate success.&#13;
&#13;
We primarily study the learnability of quantum machine learning algorithms via tools from statistical learning theory, quantum mechanics, random matrix theory, and group theory. Our findings indicate that careful consideration must be given to the design of quantum machine learning algorithms in order to achieve reasonable levels of success. In fact, some of our results reveal that random or unstructured methods in quantum machine learning are prone to various challenges, including issues related to trainability or the absence of significant advantages over the best classical algorithms. Throughout the thesis, we offer several examples of how to potentially introduce structure into these algorithms to partly remedy these issues.&#13;
&#13;
Furthermore, we explore the reverse question of how quantum computing can inform and enhance classical machine learning. We investigate the incorporation of unitary matrices into classical neural networks, which leads to a more efficient design for these unitary neural networks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151436</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brain-wide representations of behavior spanning multiple timescales and states in C. elegans</title>
<link>https://hdl.handle.net/1721.1/151435</link>
<description>Brain-wide representations of behavior spanning multiple timescales and states in C. elegans
Atanas, Adam
Changes in an animal’s behavior and internal state are accompanied by widespread changes in activity across its brain. However, how neurons across the brain encode behavior and how this is impacted by state is poorly understood. We recorded brain-wide activity and the diverse motor programs of freely-moving C. elegans and built probabilistic models that explain how each neuron encodes quantitative features of the animal’s behavior. By determining the identities of the recorded neurons, we created, for the first time, an atlas of how the defined neuron classes in the C. elegans connectome encode behavior. Many neuron classes have conjunctive representations of multiple behaviors. Moreover, while many neurons encode current motor actions, others encode recent actions. Changes in behavioral state are accompanied by widespread changes in how neurons encode behavior, and we identify these flexible nodes in the connectome. Our results provide a global map of how the cell types across an animal’s brain encode its behavior.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151435</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection and Characterization of Hot Super-Earth Exoplanets</title>
<link>https://hdl.handle.net/1721.1/151434</link>
<description>Detection and Characterization of Hot Super-Earth Exoplanets
Essack, Zahra
Hot super-Earths are exoplanets characterized by short orbital periods and surface temperatures high enough to melt silicate rock. These planets lack any solar system analog, and are the most accessible small, rocky exoplanets available to study today. In this thesis, I develop a holistic understanding of hot super-Earth exoplanets through laboratory experiments, modeling, and telescope observations, with a focus on three main research areas: surface characterization, atmospheric characterization, and planet detection. &#13;
&#13;
Surface characterization is explored through laboratory experiments to help explain the observed brightness (high geometric albedos) of some hot super-Earths. We design high-temperature experiments to create and measure the reflectivity of lava and quenched glasses, and find that these surface materials have low albedos. This implies that the high albedos of some hot super-Earths are likely a result of a highly reflective atmosphere or an evolved high-albedo surface, allowing us to constrain the parameter space of possible reflected light sources.&#13;
&#13;
Atmospheric characterization is explored through modeling of planetary atmospheric escape and transmission spectra, to determine detectable observational signatures that can be used to probe the composition and evolution of rocky vapor atmospheres around hot super-Earths. We find that ground-based high-resolution spectrographs have the best capabilities for detecting the sodium resonance doublet absorption feature in transmission spectra of a sample of hot super-Earths. We identify K2-141 b as the best planetary target for future sodium observations, and constrain the currently available target list for validating models of the surface and atmospheric evolution of highly irradiated rocky planets.&#13;
&#13;
Finally, the planet detection component of the thesis uses photometric data and precise radial velocity measurements to discover, and measure the mass of, the exoplanet TOI-1075 b, one of the most massive super-Earths discovered to date. The composition and bulk density of TOI-1075 b suggest that the planet has no substantial atmosphere, despite its mass and size, challenging existing theories on planetary atmospheric evolution. Studying the TOI-1075 system in the broader context of short-period planetary systems is necessary for testing planet formation and evolution theories and density-enhancing mechanisms, and for future surface and atmospheric characterization studies of the planet via emission spectroscopy.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151434</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Localization and Structure Learning in Reverberant Underwater Acoustic Environments</title>
<link>https://hdl.handle.net/1721.1/151431</link>
<description>Data-Driven Localization and Structure Learning in Reverberant Underwater Acoustic Environments
Arikan, Toros
Passive localization and tracking of a mobile emitter, and the joint learning of its reverberant 3D environment, are important yet challenging tasks in the shallow-water underwater acoustic setting. A typical application is the monitoring of submarines or other man-made emitters with a small, surreptitiously-deployed receiver array. This task can be rendered more difficult by obstacles such as seamounts or piers, which can occlude the line of sight from the emitter to the receivers. Furthermore, the underwater acoustic domain is complex and difficult to model, and a good signal-to-noise ratio is not assured. We view these complexities as features that can be leveraged for improved localization performance, using global optimization and neural network methods. We develop a multi-stage optimization and tracking architecture that precisely maps the reflective boundaries in the environment, and thereby uses the non-line-of-sight reflected arrivals for robust and accurate localization. Each stage of this architecture establishes domain knowledge such as synchronization and occluder estimation, which are inputs for the following stages of more refined algorithms. Within this framework, we introduce a 2D neural network boundary estimation method that outperforms the existing methods in the literature, and is robust to the large time delay estimation errors that are common in the application domain. We analyze the performance and reliability of this holistic framework, both in simulation and in real-life reverberant water-tank testbeds that model the shallow-water underwater acoustic setting. The results are encouraging for the future development of better-performing localization methods with novel capabilities, using data-driven learning algorithms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151431</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Artificial Intelligence with Programmable Silicon Photonics</title>
<link>https://hdl.handle.net/1721.1/151430</link>
<description>Accelerating Artificial Intelligence with Programmable Silicon Photonics
Bandyopadhyay, Saumil
Advances in the fabrication of large-scale integrated silicon photonics have sparked interest in optical systems that process information at high speeds with ultra-low energy consumption. Photonic systems, which have historically been used for optical telecommunications, have recently been demonstrated to accelerate tasks in quantum simulation, artificial intelligence, and combinatorial optimization.&#13;
&#13;
This thesis reports work towards the goal of realizing large-scale programmable photonic systems for information processing: 1) we develop deterministic error correction algorithms for programmable photonic systems, whose capabilities are believed to be limited by fabrication error, showing that these systems can be programmed to implement accurate linear matrix processing suitable for deep neural networks at scales of up to hundreds of channels; 2) we describe a new paradigm for coupling large numbers of optical channels to photonic circuits with exceptionally high alignment tolerance, enabling the use of high-volume, low-precision electronic pick-and-place equipment for photonic assembly; and 3) we design, fabricate, and demonstrate the first single-chip, end-to-end photonic processor for deep neural networks. This fully-integrated coherent optical neural network (FICONN), which monolithically integrates multiple optical processor units for matrix algebra and nonlinear activation functions into a single chip, implements single-shot coherent optical processing of a deep neural network with sub-nanosecond latency. On-chip, in situ training of a deep neural network is demonstrated on this system, obtaining high accuracies on a vowel classification task comparable to that of a digital system. Our results open the path towards integrated, large-scale photonic processors for low-latency inference and training of deep neural networks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151430</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven approaches for complex systems:&#13;
leveraging machine learning, materials science, and&#13;
manufacturing for new biomedical technologies</title>
<link>https://hdl.handle.net/1721.1/151419</link>
<description>Data-driven approaches for complex systems:&#13;
leveraging machine learning, materials science, and&#13;
manufacturing for new biomedical technologies
Verheyen, Connor Anthony
Many research efforts to advance human health and well-being involve interdisciplinary problem spaces and complex, poorly-understood systems. This thesis integrates both computational and experimental approaches to advance our understanding and control of complex systems at the interface of machine learning, materials science, and manufacturing. Specifically, I demonstrate the data-driven description of supervised machine learning for biomedical engineering tasks, the data-driven design of optimized soft granular biomaterials, and the proof-of-concept development of a transcatheter additive manufacturing platform. &#13;
&#13;
In Part 1, I develop custom software for high-resolution, multifactorial machine learning (ML) experiments. I iteratively apply this workflow to a set of diverse ML problems from the biomedical engineering (BME) domain to generate massive meta-datasets covering each phase of the hierarchical ML optimization and evaluation process. Then, I describe the underlying patterns and heterogeneity in these rich datasets and delineate empirical guidelines for the rigorous and reliable adoption of machine learning for BME problems. &#13;
&#13;
In Part 2, I leverage the insights from Part 1 to develop a flexible and robust data-driven modeling pipeline for complex soft materials. The pipeline can be applied after each round of experimentation to build predictive models, extract key design rules, and generate data-driven design frameworks. I use this integrated, stepwise approach to optimize the structures, properties, and performance profiles of soft granular biomaterials for injection- and extrusion-based biomedical applications. &#13;
&#13;
In Part 3, I leverage the optimized materials from Part 2 to develop a novel microgel-based transcatheter additive manufacturing technology. I obtain proof-of-concept data for the platform's critical features, including controlled transcatheter material delivery to distant target locations, rapid in situ structuration of arbitrary 3D constructs, and reliable scaffold stabilization to ensure long-term implant integrity. Together, this work paves the way for minimally-invasive, patient-specific, in situ biofabrication.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151419</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Data-driven Algorithm and Mechanism Design in Online Advertising Markets</title>
<link>https://hdl.handle.net/1721.1/151400</link>
<description>Automated Data-driven Algorithm and Mechanism Design in Online Advertising Markets
Liang, Jason Cheuk Nam
Modern automated-bidding (or autobidding) ecosystems have fueled the prevalence of programmatic advertising, which accounted for 90% of total digital ad transaction volume and more than $120 billion in ad spend in 2022. In an autobidding ecosystem, online advertisers convey high-level goals to ad platforms using levers presented by the platforms, and the platforms subsequently run automated algorithms to procure ads on advertisers' behalf. While autobidding significantly simplifies and scales up ad procurement processes, it also brings about new challenges: for advertisers, the simplification of ad procurement comes at the cost of information dilution, as advertisers no longer have access to granular procurement details; for ad platforms, booming advertiser activity has incentivized growing sophistication in advertising campaigns and objectives.&#13;
 &#13;
My thesis is dedicated to addressing two themes from a data-centric perspective: how should advertisers effectively interact with ad platforms in limited-information environments? And how should ad platforms design procurement mechanisms to achieve revenue and advertiser welfare goals under complex advertiser objectives and behaviors?&#13;
 &#13;
The thesis first addresses the advertisers' problem by exploring how advertisers can utilize levers presented by ad platforms, such as budgets or target return-on-investment (ROI), to optimize ad procurement objectives. We analyze the effectiveness of standard platform levers, and then present efficient data-driven algorithms for an advertiser to optimize over lever decisions under limited information, where the procurement algorithm and selling mechanisms in the autobidding ecosystem are treated as a black box. Then, the thesis explores an ad platform's problem of designing selling mechanisms against advertisers with complex objectives. In particular, this part of the thesis concerns strategic advertisers who may manipulate selling mechanisms by submitting corrupted information to achieve better long-term rewards, as well as constrained advertisers who are subject to financial restrictions. We first design data-driven pricing algorithms to maximize long-term platform revenue in the presence of different advertiser types. Then, we also investigate how ad platforms can augment standard ad auctions with machine-learned advice to improve worst-case welfare guarantees on the individual advertiser level when advertisers are financially constrained and adopt arbitrary strategies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151400</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Modeling with Guarantees</title>
<link>https://hdl.handle.net/1721.1/151388</link>
<description>Generative Modeling with Guarantees
Quach, Victor
Language models have become ubiquitous in natural language processing, leveraging large amounts of unlabeled data and fine-tuning for downstream tasks. However, concerns have been raised regarding the accuracy and trustworthiness of the text generated by these models. In parallel, differential privacy has emerged as a framework to protect sensitive information while allowing machine learning algorithms to learn from it. Nevertheless, the trade-off between statistical guarantees and utility poses challenges for many applications. Therefore, this thesis aims to develop techniques that balance guarantees and utility, focusing on improving the reliability of generative models while preserving their flexibility.&#13;
&#13;
First, we propose a framework that enables the generation of text conditionally using hard constraints, allowing users to specify certain elements in advance while leaving others open for the model’s prediction. By facilitating interactive editing and rewriting, this framework provides users with precise control over the generated text.&#13;
&#13;
Next, we introduce conformal prediction methods for generating predictions under soft constraints, ensuring statistical correctness. These methods produce valid confidence sets for text generation while maintaining high empirical precision.&#13;
&#13;
Finally, we explore the balance between privacy and utility in data release by relaxing the notion of guarantees from differential privacy to a definition based on guesswork. We present a learning-based approach to de-identification, addressing the challenges of privacy preservation while still enabling effective data utilization. &#13;
&#13;
The effectiveness of our proposed methods is demonstrated through a range of tasks, including text infilling, radiology report generation, and X-ray classification. These tasks showcase the utility of our techniques in various practical scenarios.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151388</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and Computational Electrochemistry to Move Toward Plastics Circularity</title>
<link>https://hdl.handle.net/1721.1/151384</link>
<description>Experimental and Computational Electrochemistry to Move Toward Plastics Circularity
Maalouf, Joseph
This thesis work is grounded primarily in the goal of leveraging the potent yet finely tunable nature of an electrochemical driving force to tackle key issues in augmenting the chemical recyclability of plastics, namely the synthesis of plastic monomers and the deconstruction of existing plastics. Increasing plastic circularity will be crucial to decarbonizing the 400 Mt of plastic generated annually and mitigating the associated climate and environmental effects of producing plastics on this scale. While the key chemical reaction involved in the synthesis of plastics is the polymerization of monomers, the goal of this thesis is to demonstrate that electrochemistry – both experimental and computational – has a role to play in the synthesis of novel plastic monomers, in addition to enabling new potential decomposition pathways for the plastics in use today. This thesis can be broken down into three parts: (1) the experimental demonstration of sustainable synthesis of circular monomers using electrochemistry; (2) the computational study of organic redox mediators with the potential for polystyrene deconstruction; and (3) the implementation of data-driven models to improve the throughput of computational screening.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151384</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Municipal Finance</title>
<link>https://hdl.handle.net/1721.1/151382</link>
<description>Essays in Municipal Finance
Jensen, Jonathan
In the first chapter I explore property tax limits in Massachusetts. Since the 1870s, 46 states have placed limits on the ability of local governments to collect property taxes. It is unclear whether these policies prevent excessive spending or constrain efficient investment in public goods and services. Using referendum data from Massachusetts, I find that easing property tax limits leads to an increase in educational spending and government salaries, and a decrease in pension underfunding. Using municipal bond spreads and house prices as a proxy for the efficiency of increased municipal investment, I find that property tax limits inefficiently constrain local governments in Massachusetts. Easing these constraints leads to a 10-bps reduction in bond spreads and a 3% increase in house prices. The effects are stronger when tax referendums are large, and when the resulting revenue is used for education.&#13;
&#13;
In the second chapter I explore the long-term causes and consequences of property tax reform throughout the United States. I find that increases in real estate value are associated with a higher probability of property tax reform. I find that following reforms, local revenues fall, the state share of education spending increases, and state income taxes increase, while total state and local taxes remain the same. I identify potential winners and losers from property tax reform. I find that ex-ante low-income and low-education-spending counties benefit while high-income and high-education-spending counties are worse off. I find no evidence that state governments are more or less efficient at service provision than local governments.&#13;
&#13;
In the third chapter (joint with Fiona Paine), we investigate the impact of cybersecurity risk and cyber attacks on municipal finances. Cyber attacks are estimated to cost billions of dollars per year. We use a dataset of municipal ransomware attacks merged with hand-collected IT investment data and municipal bond data. Following a ransomware attack, municipal bond yields fall by 10 bps and IT investment as a share of total town expenditure increases by 0.23%. We investigate potential channels leading to decreased yields following an attack. We find evidence that being hacked reduces cyber risk by disciplining municipalities to move closer to the optimal level of IT spending.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151382</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of high-low method to distance problems</title>
<link>https://hdl.handle.net/1721.1/151380</link>
<description>Application of high-low method to distance problems
Zhang, Lingxian
In this thesis, I used the high-low method to study incidence problems arising from variants of the distance set conjectures. &#13;
&#13;
In the complex space, my collaborator Sarah Tammen and I proved analogues of the incidence estimates of Guth, Solomon and Wang for tubes obeying strong spacing conditions, and we used one of our new estimates to resolve a discretized variant of Falconer's distance set problem in C². &#13;
&#13;
In the real space, I proved an incidence estimate for special boxes in R⁶, and used the estimate to derive a bound for a discretized variant of Falconer's problem in R³.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151380</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Influence of Cellular Redox State on Mitosis</title>
<link>https://hdl.handle.net/1721.1/151368</link>
<description>The Influence of Cellular Redox State on Mitosis
Sapp, Kiera Marie
To divide, cell cycle machinery must be tightly regulated. This is done through a complex and dynamic network of well-characterized protein kinases, ensuring cells only transition to subsequent cell cycle stages when specific checkpoints have been passed. This regulation prevents cells with insufficient biomass, incomplete DNA replication, or misaligned chromosomes from dividing into two daughter cells. Here, we show direct regulation of the key mitotic regulator, Aurora kinase B, by changes in mitochondrial metabolism during mitosis. In early mitosis, an increase in mitochondrial membrane potential coincides with a robust reduction in cellular redox status, preventing disulfide bond-mediated activation of Aurora kinase B as a determinant of anaphase onset. In cells lacking the ability to alter their redox state due to deficient mitochondrial respiratory function, deficient Aurora kinase B activity leads to mitotic abnormalities and death. This work identifies a novel mechanism by which cellular metabolism regulates cell cycle progression and has important implications for human diseases such as cancer, as errors in cell division are directly linked to aberrant proliferation.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151368</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Economic Policy Design</title>
<link>https://hdl.handle.net/1721.1/151365</link>
<description>Essays on Economic Policy Design
Sturm, John
This thesis comprises three chapters, each studying the design of a different economic policy. The first chapter proposes a new industrial policy designed to prevent coordination failures. The second chapter characterizes the design of efficient economic sanctions. The third chapter studies income taxation in the presence of household heterogeneity.&#13;
&#13;
The first chapter proposes a new way that industrial policymakers can design targeted subsidies in the presence of multiple equilibria. Inefficient multiplicity is common in models of agglomeration, even under policies designed according to the standard, Pigouvian rule. I propose a “super-Pigouvian” policy-setting rule that simultaneously addresses externalities and selects the efficient equilibrium. The main idea behind this policy is to compensate households for not only their actions’ direct effects but also their indirect effects through influences on other households’ behavior. After demonstrating the theoretical properties of this policy, I quantify its potential effects in an empirical application to South Korea. In a calibrated, dynamic model of structural transformation, super-Pigouvian policy achieves moderate welfare gains compared to the worst equilibrium supported by Pigouvian policy.&#13;
&#13;
The second chapter studies the design of international trade sanctions. Specifically, I ask what trade taxes (tariffs and export taxes) a sanctioning country can use to decrease the economic welfare of a trading partner at the least economic cost to itself. My main result draws a close connection between this problem and the well-understood problem of designing trade taxes for terms-of-trade manipulation. This connection has several useful implications for sanction design: First, small sanctions increase welfare in the sanctioning country. Second, sanctions target the same goods as terms-of-trade manipulation, but with greater intensity. Third, sanctions ignore elasticities of demand and supply in the sanctioning country. Finally, sanctions treat imports and exports asymmetrically. &#13;
&#13;
The third chapter (joint with André Sztutman) considers how income taxes should be designed when one accounts for heterogeneity in households’ tax responses. We address this question by providing a test that passes if and only if there exists a weighted utilitarian planner for whom taxes are locally optimal. This test incorporates standard sufficient statistics—such as the shape of the income distribution and mean elasticities of taxable income—as well as a novel ingredient: the variance of elasticities conditional on income. Theoretically, we show that the test fails when these variances are sufficiently high. Empirically, we find they are indeed large in a panel of US tax returns. We thereby conclude, without taking a stance on redistributive preferences, that there are welfare-improving tax reforms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151365</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nearby cycles and the cohomology of shtukas</title>
<link>https://hdl.handle.net/1721.1/151364</link>
<description>Nearby cycles and the cohomology of shtukas
Salmon, Andrew
V. Lafforgue constructed Langlands parameters from automorphic forms for any reductive group over a function field using excursion operators.  Our aim is to give a general approach for proving certain local-global compatibilities satisfied by these Langlands parameters.  The main consequence for the Langlands correspondence is to show that Lafforgue's construction is compatible with Lusztig's theory of character sheaves at a given point of a smooth curve over a finite field.  Namely, using the theory of character sheaves, one attaches a torus character and a two-sided cell to an irreducible representation of a reductive group over a finite field.  If our automorphic form lives in an isotypic component determined by this irreducible representation, we show that the torus character and two-sided cell determine the semisimple and unipotent parts of the image of the tame generator under the Langlands correspondence, respectively.&#13;
&#13;
One key step is showing that nearby cycles commute with pushforward of certain perverse sheaves from the stack of global shtukas to a power of a curve.  The main technical ingredient is the notion of what we call $\Psi$-factorizability, where nearby cycles over a general base are independent of the composition of specializations chosen, and the $\Psi$-factorizability statements we make give some answers to a question raised by Genestier-Lafforgue.  To compute the action of framed excursion operators, we instead compute in monodromic affine Hecke categories.  Ultimately, this reduces certain questions in the global function field Langlands program to questions in local geometric Langlands.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151364</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Heparan Sulfate Proteoglycan Perlecan Regulates Axonal and Synaptic Stability</title>
<link>https://hdl.handle.net/1721.1/151363</link>
<description>The Heparan Sulfate Proteoglycan Perlecan Regulates Axonal and Synaptic Stability
Guss, Ellen Jane
Heparan sulfate proteoglycans (HSPGs) form essential components of the extracellular matrix (ECM) and basement membrane (BM) and have both structural and signaling roles. Perlecan is a secreted ECM-localized HSPG that contributes to tissue integrity and cell-cell communication. In this thesis, I identify a role for Drosophila Perlecan in the maintenance of larval motoneuron axonal and synaptic stability. In Chapter 1, I discuss known roles for Perlecan and other HSPGs in animal development, with a focus on their functions within the nervous system. In Chapter 2, I describe how loss of Drosophila Perlecan causes alterations in the axonal cytoskeleton and breakage of axons, followed by synaptic retraction of neuromuscular junctions. These phenotypes are not prevented by blocking Wallerian degeneration and are independent of Perlecan’s role in Wingless signaling. Overexpression of Perlecan in motoneurons cannot rescue synaptic retraction phenotypes. Similarly, removing Perlecan specifically from neurons, glia, muscle, fat body, or hemocytes does not cause synaptic retraction, indicating the protein is secreted from multiple cell types and functions non-cell autonomously. Within the peripheral nervous system, Perlecan predominantly localizes to the neural lamella, a specialized ECM surrounding nerve bundles. Loss of Perlecan disrupts neural lamella structure, with reduced ECM thickness observed for the colocalized Viking protein, a Drosophila type IV Collagen homolog. In addition, Viking shows abnormal accumulation and aggregation at sites along the neural lamella that are associated with axonal breakage and exit from their usual boundary within the nerve bundle. Entire nerve bundles degenerate in a temporally coordinated manner across individual hemi-segments during the late stages of larval development. 
These observations indicate disruption of neural lamella ECM function triggers axonal destabilization and synaptic retraction of motoneurons, revealing a role for Perlecan in axonal and synaptic integrity during nervous system development.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151363</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adjoint Methods and Inverse Modeling for Process Variation Analysis in Silicon Photonics</title>
<link>https://hdl.handle.net/1721.1/151362</link>
<description>Adjoint Methods and Inverse Modeling for Process Variation Analysis in Silicon Photonics
Zhang, Zhengxing
With the recent growing interest and development in the field of integrated photonics, manufacturing process variation studies are required to launch the many photonic applications to mass production and commercial use. Specifically, we need models that predict the impact of process variations on photonic circuits, and extraction of process variation information from fabrication measurements, so that we can use these models and information for robust design that achieves high performance and yield given manufacturing limitations.&#13;
&#13;
Current studies in the area of process variation in silicon photonics are emerging but are still limited. A general problem in both modeling the impact of variations and extraction from measurement is the cost of time, either the computational cost from simulation, or the cycle length of fabrication process. Therefore, here we explore some of the powerful tools that can shorten the simulation or experiment time, and provide alternative, efficient, or economical approaches for process variation studies, while still maintaining accuracy. In particular, this thesis discusses two groups of methods for analysis: adjoint methods for the modeling of variation impact, and inverse modeling for measurement data analysis.&#13;
&#13;
In the first half of the thesis, we make necessary extensions to the adjoint methods specifically for the analysis of process variation in silicon photonics. We introduce spatial sampling and frequency interpolation techniques to enable wavelength dependent adjoint-based sensitivity analysis for photonic components, and illustrate the implementation using the example of a Y-branch with line edge roughness variation. We also present a case study that builds a compact generative model using sensitivity analysis results, which can be used in circuit-level variation analysis. These wavelength dependent extensions and compact model derivations achieve less than 5% error compared to the original adjoint method for this example case, while largely reducing the disk storage cost or sampling procedure. We also compare the adjoint analysis with ensemble simulation results, and despite failing at large variation and higher-order effects, the adjoint method is still able to capture the general trends and some of the most important features in the problem, which provides an efficient approach for the modeling of process variation impact. &#13;
&#13;
For the second half of the thesis, we present two case studies to showcase the usage of inverse modeling in measurement data analysis. In the first study, we extract spatial variations from characterization measurements of silicon nitride ring resonators with different design parameters. We integrate the potential extraction error into the model using a Bayesian framework, in order to reduce the impact of measurement noise. In the second study, we consider the impact of process variations in integrated optical phased arrays, and validate the simulated theory with experimental measurement of the far-field beam profiles. Due to the nonlinearity of the problem, inverse modeling methods are needed to separate different sources of process variations in the validation. We develop algorithms that tackle the overfitting, scalability, and ambiguity issues of the inverse problem, and show agreement between extracted variation and simulation prediction. For both of these studies, we show that alternative approaches for variation extraction exist besides the conventional test structure approach, and they can be an accessible and economic candidate for process variation observation and measurement.&#13;
&#13;
Together, these methods and techniques provide tools that speed up process variation studies in silicon photonics. Robust design based on these models and information about process variation will provide the road to high performance and yield for industry level production of integrated photonic applications.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151362</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dzyaloshinskii-Moriya Interaction and Local Exchange Variation in Rare-Earth Transition-Metal Ferrimagnets</title>
<link>https://hdl.handle.net/1721.1/151361</link>
<description>Dzyaloshinskii-Moriya Interaction and Local Exchange Variation in Rare-Earth Transition-Metal Ferrimagnets
Suzuki, Daniel Hiroshi
Spintronics utilizes the spin of electrons in addition to its charge to manipulate magnetism in thin films. Research in spintronics has the potential to usher in a new wave of technologies, particularly in next-generation racetrack memory storage. These technologies rely on the motion of chiral spin textures such as domain walls or skyrmions in order to read and write data. Rare-earth (RE) transition-metal (TM) ferrimagnetic heterostructures are especially promising candidates due to their minimal stray fields and vanishing angular momentum near compensation, and have already exhibited high-speed current-induced domain wall motion and ultra-small skyrmion stability at room temperature.&#13;
&#13;
The stability of the chiral spin textures necessary for racetrack and skyrmionic memory is governed by an antisymmetric magnetic exchange interaction known as the Dzyaloshinskii-Moriya Interaction (DMI). Unlike the symmetric Heisenberg interaction that favors collinear spin alignment and gives rise to ferromagnetism and antiferromagnetism, the DMI favors canting of spins perpendicular to its neighbors. While DMI has been extensively studied in ferromagnetic systems, little work has been done to quantify its strength in ferrimagnetic systems. In this thesis the compositional dependence of a number of static and dynamic magnetic properties of RE-TM amorphous ferrimagnets necessary for the design of chiral spin texture-based technologies is investigated. We develop a simple method for determining magneto-optical Kerr angles that informs efficient design of magneto-optical Kerr effect microscopes. Additionally, we observe a significant variation in both RE and TM average atomic moment as a function of composition well described by a local environment model as well as substantial variation in magnetization as a function of RE-TM film thickness. Finally, current-induced domain wall motion is used to characterize the spin-transport properties and DMI of RE-TM films.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151361</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Computation and Experimentation for Accelerated Understanding of Electrode Microstructure in Redox Flow Batteries</title>
<link>https://hdl.handle.net/1721.1/151360</link>
<description>Combining Computation and Experimentation for Accelerated Understanding of Electrode Microstructure in Redox Flow Batteries
Tenny, Kevin M.
Improving grid energy storage is crucial for integrating renewable energy options and reducing anthropogenic carbon emissions. Redox flow batteries (RFBs) are a promising long-duration energy storage technology, whose modular energy and power scaling complements diurnal, weather-dependent electricity generation. Pumped from external tanks, liquid-phase electrolytes containing redox active species are dispersed through the RFB reactor to undergo an interconversion of oxidative states, releasing electrons to pass through external loads before the electrolytes return to their respective tanks. The electrodes are encased in the RFB reactor and fulfill several critical functions for successful operation: They facilitate advection through the porous media, afford active sites for redox events, and transfer liberated electrons through a solid matrix. Consequently, electrode topologies influence multiple transport scales that require precise metering to ascertain performance metrics. However, most electrode analysis relies on experimentation or modeling to analyze structures that engender favorable performance, but when combined, the two can provide a deep analysis of electrode structure/function relationships.&#13;
&#13;
In this dissertation, I first provide an assessment of the U.S. long-duration energy storage market, outlining the industry competitiveness and attractiveness, as well as discussing headwinds for RFBs seeking to enter the market. This work then focuses on exploring multiple electrode topologies, both experimentally and computationally, drawing relationships between fluid dynamic and electrochemical functions in diverse electrolyte environments. Further, I examine a promising potentiodynamic electrochemical method that can improve our understanding of flow cell performance. Next, I demonstrate the progression of high- and low-dimensional macrohomogeneous models and their applications for screening structural and operational benchmarks, culminating in regressive models for targeted domain optimization. I conclude with parameter sweeps across artificially designed electrodes, revealing key microstructural features that lead to augmented functionality before offering my perspective on various business and research directions for the field.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151360</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics under Variability, Volume, and Velocity with Applications to Sustainability and Healthcare</title>
<link>https://hdl.handle.net/1721.1/151352</link>
<description>Analytics under Variability, Volume, and Velocity with Applications to Sustainability and Healthcare
Digalakis, Vasileios
Analytics, machine learning, and optimization provide unique opportunities to harness the massive amounts of data that are available and positively impact some of the most pressing challenges of our time, including climate change and improved healthcare operations. The classical paradigm of analytics, which assumes a dataset is centrally collected and readily available to analyze, is shifting. Modern data science problems present new complexities, including variability (i.e., changing phenomena due to various types of uncertainties), large volumes of data or decisions or both, and data arriving dynamically with high velocity. &#13;
&#13;
This thesis advances two strands of large-scale analytics. The first is methodological, focusing on the development of predictive and prescriptive machine learning and optimization methodologies, primarily mixed-integer and robust, for problems that exhibit the aforementioned characteristics. The second is applied, and encompasses collaborations with various industry partners in the sustainability and healthcare operations spaces, seeking to reap the benefits of large-scale analytics in these settings. &#13;
&#13;
In Chapters 2 and 3, we introduce the framework of slowly varying machine learning, which provides a tool to deal with variability in an interpretable way. In Chapter 2 in particular, our methodology enables the estimation of sparse linear regression models where the underlying regression coefficients are allowed to vary slowly and sparsely under some graph-based temporal or spatial structure. In Chapter 3, we take a step toward the stabilization of decision tree models even under new trends in the training data. In Chapter 4, we introduce the backbone method, a general, heuristic framework that scales interpretable machine learning techniques to ultra-high dimensional datasets hence tackling the volume characteristic. Chapter 5 develops a mixed integer optimization- and machine learning-based approach for the problem of frequency estimation in data streams, addressing settings where large amounts of data arrive dynamically with high velocity. Finally, in Chapter 6, we present a robust optimization- and machine learning-based framework that guides a 1 billion USD investment in solar panels and batteries by a leading fertilizer producer, with the aim of decarbonizing a significant portion of their production pipeline and reducing operational costs. Our model’s forecast indicates that this decarbonization effort will be profitable, thus emphasizing that investing in renewable energy can be a financially viable option, rather than an expensive luxury that developing nations cannot afford while industrializing their economies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151352</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization Methods for Machine Learning under Structural Constraints</title>
<link>https://hdl.handle.net/1721.1/151349</link>
<description>Optimization Methods for Machine Learning under Structural Constraints
Chen, Wenyu
In modern statistical and machine learning models, structural constraints are usually imposed for model interpretability as well as model complexity reduction. In this thesis, we present scalable optimization methods for several large-scale machine learning problems under structural constraints, with a focus on shape constraints in nonparametric statistics and sparsity in high-dimensional statistics.&#13;
&#13;
In the first chapter, we consider the subgradient regularized convex regression problem, which aims to fit a convex function between the target variable and covariates. We propose novel large-scale algorithms, based on proximal gradient descent and active set methods, and derive novel linear convergence guarantees for our proposed algorithms.  Empirically, our framework can approximately solve instances with n=100,000 and d=10 within minutes.&#13;
&#13;
In the second chapter, we develop a new computational framework for computing log-concave density MLE, based on smoothing techniques in combination with an appropriate integral discretization of increasing accuracy. We establish convergence guarantees of our approaches and demonstrate significant runtime improvements over earlier convex approaches.&#13;
&#13;
In the third chapter, we focus on Gaussian Graphical Models, which aim to estimate a sparse precision matrix from iid multivariate Gaussian samples. We propose a novel estimator via ℓ₀ℓ₂-penalized pseudolikelihood. We then design a specialized nonlinear Branch-and-Bound (BnB) framework that solves a mixed integer programming (MIP) formulation of the proposed estimator. Our estimator is computationally scalable to p~10,000, and provides faster runtime compared to competing ℓ₁ approaches, while leading to superior statistical performance.&#13;
&#13;
In the fourth chapter, we further look into improving the BnB framework for sparse learning problems with the ℓ₀ℓ₂ penalty and general convex smooth losses. We present a novel screening procedure within the BnB framework to fix relaxed variables to 0 or 1 with guarantees. Our experiments indicate that this screening procedure can significantly reduce the runtimes of BnB solvers.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151349</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Design of Solid Rocket Powered Vehicles Including Exhaust Plume Radiant Emission</title>
<link>https://hdl.handle.net/1721.1/151348</link>
<description>Integrated Design of Solid Rocket Powered Vehicles Including Exhaust Plume Radiant Emission
Mathesius, Kelly J.
For aircraft and rockets where vehicle visibility is a concern, exhaust plume radiant emission is an important aspect of solid rocket powered vehicle performance. However, it is often not considered during the design phase, despite significant couplings with other vehicle disciplines, especially propulsion. Considering plume radiant emission during the design phase is important for ensuring vehicle design constraints and objectives can be met while accounting for the coupling of plume radiant emission with other disciplines.&#13;
&#13;
Technology gaps exist for integrating exhaust plume radiant emission in solid rocket powered vehicle design. Typical modeling approaches are computationally expensive, and rely on CFD and complicated integration schemes that are not well-suited for fast, iterative vehicle design. Existing data for solid rocket motor exhaust plume radiant emission is limited in the open literature, and does not include measurements for small, low-thrust motors or propellants containing the burn rate suppressant oxamide. Few design guidelines exist for the integrated consideration of exhaust plume radiant emission in solid rocket motor design.&#13;
&#13;
This thesis provides advancements and solutions for these technology gaps to enable design phase consideration of exhaust plume radiant emission. The effects of chamber pressure and propellant oxamide content on exhaust plume infrared radiant emission were measured for small, low-thrust, end-burning solid rocket motors. Static fires utilized motors that were operated at approximately 1 MPa to 2 MPa with ammonium perchlorate composite propellants that were doped with 0 or 8% oxamide. An end-to-end differentiable model for exhaust plume radiant emission was developed and implemented in the flexible AeroSandbox design optimization framework. The developed model shows reasonable agreement with measurements from this work and results from other studies, and is robust over eight orders of magnitude of plume radiant intensity.&#13;
&#13;
The model is used to explore the couplings between vehicle thrust, chamber pressure, oxamide content, and exhaust plume radiant intensity for a small (&lt; 3 kg), low-thrust (5 N - 20 N), fast (&gt; 100 m/s) solid rocket powered aircraft concept. For this class of vehicles, it was found that a large range of radiant intensities can be achieved for a given thrust requirement by varying the motor oxamide content and chamber pressure. Additionally, the effects of motor size scale on the progression of afterburning kinetics and plume radiant emission are explored and quantified; for sufficiently small motors and plumes, it was found that the excess fuel in the plume remains largely unburnt, which reduces the plume radiant intensity.&#13;
&#13;
The experimental data, practical modeling tools, and design guidelines developed in this thesis support the design phase consideration of exhaust plume radiant emission in solid rocket motor design. For vehicles where visibility is important, considering exhaust plume radiant emission during vehicle design enables a better understanding of motor design and performance tradeoffs and supports improved motor performance.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151348</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Simulation of Many-body Systems with Superconducting Qubits</title>
<link>https://hdl.handle.net/1721.1/151347</link>
<description>Quantum Simulation of Many-body Systems with Superconducting Qubits
Karamlou, Amir H.
The study of interacting many-body quantum systems is central to the understanding of a wide range of physical phenomena in condensed-matter systems, quantum gravity, and quantum circuits. However, quantum systems are often hard to study analytically, and the classical computing resources required for simulating them scale exponentially with the size of the system. In this thesis, we discuss utilizing superconducting quantum circuits as a well-controlled quantum platform for probing the out-of-equilibrium dynamics and the properties of many-body quantum systems. We use a 3×3 array of superconducting transmon qubits to study the dynamics of a particle under the tight-binding model, and probe quantum information propagation by measuring out-of-time-ordered correlators (OTOCs). Using a 4×4 qubit array, we probe entanglement across the energy spectrum of a hard-core Bose-Hubbard lattice by extracting correlation lengths and entanglement entropy of superposition states generated in particular regions of the spectrum, from the band center to its edge. The results presented in this thesis are in close quantitative agreement with numerical simulations. The demonstrated level of experimental control and accuracy in extracting the system observables of interest is extensible to larger superconducting quantum simulators and will enable the exploration of larger, non-integrable systems where numerical simulations become intractable.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151347</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Behavioral Macroeconomics and Mechanism Design</title>
<link>https://hdl.handle.net/1721.1/151338</link>
<description>Essays in Behavioral Macroeconomics and Mechanism Design
Flynn, Joel P.
This thesis is in two parts. The first part of the thesis, "Essays in Behavioral Macroeconomics," is motivated by the simple observation that the macroeconomy is complicated; many households and firms interact across myriad markets in ways that change over time. This part of the thesis studies, empirically and theoretically, the microeconomic foundations and macroeconomic implications of hypotheses inspired by these complications: that people adopt simplified and misspecified narratives to understand the world; and that people will only pay attention to the macroeconomy when it is important to them.&#13;
&#13;
In the first chapter, "The Macroeconomics of Narratives" (coauthored with Karthik A. Sastry), we study the macroeconomic implications of narratives, or beliefs about the economy that affect decisions and spread contagiously. Empirically, we use natural-language-processing methods to measure textual proxies for narratives in US public firms' end-of-year reports (Forms 10-K). We find that: (i) firms' hiring decisions respond strongly to narratives, (ii) narratives spread contagiously among firms, and (iii) this spread is responsive to macroeconomic conditions. To understand the macroeconomic implications of these forces, we embed a contagious optimistic narrative in a business-cycle model. We characterize, in terms of the decision-relevance and contagiousness of narratives, when the unique equilibrium features: (i) non-fundamental business cycles, (ii) non-linear belief dynamics (narratives "going viral") that generate multiple stable steady states (hysteresis), and (iii) the coexistence of hump-shaped responses to small shocks with regime-shifting behavior in response to large shocks. Our empirical estimates discipline both the static, general equilibrium effect of narratives on output and their dynamics. In the calibrated model, we find that contagious optimism explains 32% and 18% of the output reductions over the early 2000s recession and Great Recession, respectively, as well as 19% of the unconditional variance in output. We find that overall optimism is not sufficiently contagious to generate hysteresis, but other, more granular narratives are.&#13;
&#13;
In the second chapter, "Attention Cycles" (coauthored with Karthik A. Sastry), we document that, in aggregate downturns, US public firms’ attention to macroeconomic conditions rises and the size of their input-choice mistakes falls. We explain these phenomena with a business-cycle model in which firms face a cognitive cost of making precise decisions. Because firms are owned by risk-averse households, there are greater incentives to deliver profits by making smaller input-choice mistakes when aggregate consumption is low. In the data, consistent with our model, financial markets punish mistakes more in downturns and macroeconomically attentive firms make smaller mistakes. Quantitatively, attention cycles generate asymmetric, state-dependent shock propagation and stochastic volatility of output growth.&#13;
&#13;
In the third chapter, "Strategic Mistakes" (coauthored with Karthik A. Sastry), to study the equilibrium implications of decision frictions, we introduce a new class of control costs in continuum-player, continuum-action games in which agents interact via an aggregate of the actions of others. The costs that we study accommodate a rich class of decision frictions, including ex post misoptimization, imperfect ex ante planning, cognitive constraints that depend endogenously on the behavior of others, and consideration sets. We provide primitive conditions such that equilibria exist, are unique, are efficient, and feature monotone comparative statics for action distributions, aggregates, and the size of agents' mistakes. We apply the model to make robust equilibrium predictions in a monetary business-cycle model of price-setting with planning frictions and a model of consumption and savings during a liquidity trap when endogenous stress worsens decisions.&#13;
&#13;
The second part of this thesis, "Essays in Mechanism Design," studies two contentious issues in the allocation of resources in the modern economy: How should we account for diversity when we allocate resources in two-sided matching markets? How should digital goods and information be priced and regulated?&#13;
&#13;
In the fourth chapter, "Priority Design in Centralized Matching Markets" (coauthored with Oguzhan Celebi), we observe that in many centralized matching markets, agents' property rights over objects are derived from a coarse transformation of an underlying score. Prominent examples include the distance-based system employed by Boston Public Schools, where students who lived within a certain radius of each school were prioritized over all others, and the income-based system used in New York public housing allocation, where eligibility is determined by a sharp income cutoff. Motivated by this, we study how to optimally coarsen an underlying score. Our main result is that, for any continuous objective function and under stable matching mechanisms, the optimal design can be attained by splitting agents into at most three indifference classes for each object. We provide insights into this design problem in three applications: distance-based scores in Boston Public Schools, test-based scores for Chicago exam schools, and income-based scores in New York public housing allocation.&#13;
&#13;
In the fifth chapter, "Adaptive Priority Mechanisms" (coauthored with Oguzhan Celebi), we ask how authorities that care about match quality and diversity should allocate resources when they are uncertain of the market they face. Such a question appears in many contexts, including the allocation of school seats to students from various socioeconomic groups with differing exam scores. We propose a new class of adaptive priority mechanisms (APM) that prioritize agents as a function of both scores that reflect match quality and the number of assigned agents with the same socioeconomic characteristics. When there is a single authority and preferences over scores and diversity are separable, we derive an APM that is optimal, generates a unique outcome, and can be specified solely in terms of the preferences of the authority. By contrast, the ubiquitous priority and quota mechanisms are optimal if and only if the authority is risk-neutral or extremely risk-averse over diversity, respectively. When there are many authorities, it is dominant for each of them to use the optimal APM, and each so doing implements the unique stable matching. However, this is generally inefficient for the authorities. A centralized allocation mechanism that first uses an aggregate APM and then implements authority-specific quotas restores efficiency. Using data from Chicago Public Schools, we estimate that the gains from adopting APM are considerable.&#13;
&#13;
In the sixth and final chapter, "Nonlinear Pricing with Under-Utilization: A Theory of Multi-Part Tariffs" (coauthored with Roberto Corrao and Karthik A. Sastry), we study the nonlinear pricing of goods whose usage generates revenue for the seller and of which buyers can freely dispose. The optimal price schedule is a multi-part tariff, featuring tiers within which buyers pay a marginal price of zero. We apply our model to digital goods, for which advertising, data generation, and network effects make usage valuable, but monitoring legitimate usage is infeasible. Our results rationalize common pricing schemes including free products, free trials, and unlimited subscriptions. The possibility of free disposal harms producer and consumer welfare and makes both less sensitive to changes in usage-based revenue and demand.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151338</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Environmental and Health Risk</title>
<link>https://hdl.handle.net/1721.1/151337</link>
<description>Essays on the Economics of Environmental and Health Risk
Ostriker, Abigail J.H.
Individuals face many types of risk, including to their property, finances, and health. In this thesis, I study a variety of strategies for mitigating those risks, in the context of health shocks and natural disasters. In the health context, risk of disease onset is mitigated in some cases with screening programs, with governments recommending an age to begin screening. In the environmental context, risk of physical destruction is mitigated with property insurance, which can be purchased from the federal government (for floods) or private insurers (for wildfires). In the first two essays, I study the effectiveness of government policies recommending screening ages and regulating floodplain development via rules embedded in the National Flood Insurance Program. In the third essay, I investigate how higher property insurance prices affect housing markets. All three essays highlight the importance of government policies and insurance products for mitigating personal risk.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151337</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low complexity regions in Biological Matter: Sequences, higher-order assembly, and evolution.</title>
<link>https://hdl.handle.net/1721.1/151336</link>
<description>Low complexity regions in Biological Matter: Sequences, higher-order assembly, and evolution.
Jaberi-Lashkari, Nima
From the fibrous silk of a spider to liquid-like nucleoli, biological matter plays important roles in organisms. How does biological matter assemble from component parts, carry out important organismal functions, and evolve in the tree of life? Low complexity regions (LCRs) of proteins are simple protein sequences which have been appreciated to play outsized roles in defining the structures underlying biological matter. However, most LCR sequences do not have known functions. Here I will present my work trying to answer the questions above through developing a unified understanding of LCRs across species. I will present a framework for understanding how LCR copy numbers, sequence relationships, and composition give rise to biological structures. In this thesis I will present work that uses this framework to identify novel putative biological assemblies and previously unknown scaffolds underlying biological structures, and to shed light on the evolution of biological matter. Together, this work demonstrates that a unified understanding of LCRs can provide novel insight into the structures underlying biological matter.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151336</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Double affine galleries</title>
<link>https://hdl.handle.net/1721.1/151335</link>
<description>Double affine galleries
Tao, James H.
We develop a theory of galleries for double affine hyperplane arrangements. A gallery is an infinite sequence of chambers, indexed by an ordered set, which is maximal with respect to a finiteness condition on the multiset of wall-crossings. &#13;
&#13;
We study the possible order types of galleries. We also use galleries to define a double affine Bruhat order which generalizes the one introduced by Braverman, Kazhdan, and Patnaik, and studied by Muthiah and Orr. We prove an analogue of the classical characterization of the Bruhat order in terms of subexpressions of reduced expressions, and we define an analogue of the Demazure product. &#13;
&#13;
We also study tours, which are certain finite sequences of chambers. Using the previous results, we show that tours form a category which behaves similarly to the category of generalized galleries defined in the classical setting. We construct a functor from tours to schemes, whose image consists of double affine analogues of Demazure varieties. We show that the colimit of this functor recovers the double affine flag variety at the level of sets, but we do not think that the colimit of schemes is well-behaved. Instead, we describe a different way of equipping the colimit set with a ringed space structure, and we conjecture that this ringed space is a scheme. &#13;
&#13;
Our main result is that the category of tours (with fixed start and end chambers, and subject to certain constraints) is contractible. We call this result 'homotopical deletion' because it generalizes the Coxeter deletion lemma.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151335</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An optimization perspective on log-concave sampling and beyond</title>
<link>https://hdl.handle.net/1721.1/151333</link>
<description>An optimization perspective on log-concave sampling and beyond
Chewi, Sinho
The primary contribution of this thesis is to advance the theory of complexity for sampling from a continuous probability density over R^d. Some highlights include: a new analysis of the proximal sampler, taking inspiration from the proximal point algorithm in optimization; an improved and sharp analysis of the Metropolis-adjusted Langevin algorithm, yielding new state-of-the-art guarantees for high-accuracy log-concave sampling; the first lower bounds for the complexity of log-concave sampling; an analysis of mirror Langevin Monte Carlo for constrained sampling; and the development of a theory of approximate first-order stationarity in non-log-concave sampling.&#13;
&#13;
We further illustrate the main tools in this work—diffusions and Wasserstein gradient flows—through applications to functional inequalities, the entropic barrier, Wasserstein barycenters, variational inference, and diffusion models.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151333</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Coherent Categorification of the Asymptotic Affine Hecke Algebra</title>
<link>https://hdl.handle.net/1721.1/151332</link>
<description>A Coherent Categorification of the Asymptotic Affine Hecke Algebra
Propp, Oron Yehonatan
Kazhdan–Lusztig categorified the affine Hecke algebra H in terms of equivariant coherent sheaves on the Steinberg variety. Recently, Ben-Zvi–Chen–Helm–Nadler have applied the formalism of categorical traces to construct a "coherent Springer sheaf" on a moduli stack of Deligne–Langlands parameters whose endomorphism algebra recovers H. In this thesis, we extend these results to Lusztig's asymptotic affine Hecke algebra J. Using work of Bezrukavnikov–Ostrik, we construct an "asymptotic coherent Springer sheaf" on an "asymptotic" moduli stack of Deligne–Langlands parameters whose endomorphism algebra identifies with J. We show that a certain restriction of the coherent Springer sheaf identifies with this asymptotic coherent Springer sheaf and induces Lusztig’s homomorphism φ on endomorphism algebras. Next, following a conjecture of Qiu–Xi, we consider a category of equivariant coherent sheaves on the square of the G_m-fixed points in a Springer fiber. We identify its 2-categorical class with a summand of the asymptotic coherent Springer sheaf, and deduce that it categorifies the corresponding block of J. We then construct a family of functors from the mixed affine Hecke category categorifying φ. Finally, we show that the universal trace functor for the mixed affine Hecke category is right t-exact with respect to an exotic t-structure, and sends monoidal duals of connective objects to coconnective objects. To this end, we construct an explicit complex computing the 2-categorical class map for certain monoidal categories over quotient stacks. We then deduce a (co)connectivity statement for the 2-categorical classes associated to Bezrukavnikov–Riche’s braid group action for the Springer resolution. In particular, we obtain that the coherent Springer sheaf lies in cohomological degree 0 (i.e., is a sheaf rather than a complex), resolving a conjecture of Ben-Zvi–Chen–Helm–Nadler and Zhu. 
As a consequence of the proof, we partially resolve another conjecture of Qiu–Xi, showing that J embeds in a K-group of equivariant vector bundles on the square of a finite set.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151332</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Development of Damage-Tolerant Active Materials</title>
<link>https://hdl.handle.net/1721.1/151331</link>
<description>Design and Development of Damage-Tolerant Active Materials
Kim, So Yeon
As the world faces an energy and resource crisis, there is a growing need to develop more efficient energy production, conversion, and storage technologies for sustainability. However, these technologies often require materials that can cyclically absorb, convert, and release a significant amount of free energy during service, which can result in dynamic behaviors similar to metabolism in biological systems. Such “active” materials encompass a wide range of materials, including nuclear fusion reactor walls and battery electrodes, and can be prone to structural instabilities and abrupt failure. This thesis aims to enhance the damage tolerance of active materials by designing multi-phase composites incorporating secondary phases that can “proactively” divert damage into more benign forms.&#13;
&#13;
The specific focus is on fusion structural metals subject to 14.1 MeV fast neutron radiation, which produces transmutation helium (He) that embrittles grain boundaries. The strategy proposed is to incorporate nano-phases that can absorb and store helium in their bulk lattice, forming a “helide” compound. To identify potential helide formers, this thesis first defines and validates a metric for evaluating helium-absorbing capability and performs a large-scale computational screening. Second, the thesis develops a machine-learning model that can predict the wettability of nano-phases by metals to assess the manufacturability of such multi-phase composites and conduct matrix-dependent down-selection. Third, the thesis experimentally verifies that the designed nano-phases can absorb and store &gt; 10 at% He within their lattice interior, thereby reducing both the size and number density of helium bubbles at grain boundaries. Lastly, the thesis examines the collateral effect of nano-phases on the phase evolution of metal matrices and provides a guideline for redesigning composites considering this effect.&#13;
&#13;
Together, the results suggest that incorporating the identified helide formers at a concentration of 1–2 vol% can delay the development of critical helium damage in fusion structural materials. This study demonstrates that the atomic energy landscape associated with critical damage carriers can be adjusted even up to a few electron-volts by simply incorporating secondary phases, enabling damage-tolerant active materials.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151331</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapidly-Deployable Materials Processing Approaches for Energy Applications and Chemical Separations</title>
<link>https://hdl.handle.net/1721.1/151330</link>
<description>Rapidly-Deployable Materials Processing Approaches for Energy Applications and Chemical Separations
Patil, Jatin Jayesh
Anthropogenic CO₂ emissions can be mitigated through a combination of deploying and improving methods to generate, store, and save energy worldwide. These methods rely on materials advancements, but due to the short timescales available to actually implement solutions, will require rapid deployment that should leverage existing infrastructure to make a timely and sizeable impact. My thesis focuses on a few case studies of materials systems relevant to solar energy and chemical separations to improve their performance using existing and rapidly-deployable systems. I first present and analyse the case of the instability of metal nanowire networks, where my work investigates and demonstrates design principles for encapsulants which are economically viable and deployable with widely-available vapor deposition equipment. Next, I investigate the relative advantage of carbonaceous membranes for chemical separations and electrochemical applications by exploring the properties of graphene oxide membranes. We examine coal tar pitch-derived graphene oxide to understand means of improving the permeance and chemical stability of graphene oxide membranes without reducing their nanofiltration capacity. Furthermore, I investigate laser annealing with a CO₂ laser cutter as a means to modify membrane materials to make them conductive for electrochemical applications such as redox-flow batteries and dye degradation. Finally, the thesis concludes with an outlook for future work, where I discuss the implications of rapidly-deployable processing innovation, in addition to how it can contribute to waste valorization and a circular economy.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151330</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Long-Term Relationships and Networks</title>
<link>https://hdl.handle.net/1721.1/151324</link>
<description>Essays on Long-Term Relationships and Networks
Nguyen, Thi Mai Anh
This thesis comprises three chapters on long-term relationships and networks.  The first and second chapters study long-term relationships between shippers and carriers in the US truckload freight industry.  The third chapter studies properties of learning and information aggregation on social networks.&#13;
&#13;
The first chapter, joint with Adam Harris, provides evidence on the scope and incentive mechanisms of long-term relationships in the US truckload freight industry. In this setting, shippers and carriers engage in repeated interactions under fixed-rate contracts that leave scope for inefficient opportunism. We show that shippers use the threat of relationship termination to deter carriers from short-term opportunism. Carriers respond to the resultant dynamic incentives, behaving more cooperatively when their potential future rents are higher. While shippers and carriers often interact on multiple lanes, we find evidence that shippers' incentive schemes do not take advantage of this multi-lane scope for certain classes of carriers.&#13;
&#13;
The second chapter, joint with Adam Harris, builds on the first, exploring a market-level tradeoff that informal long-term relationships present.  On the one hand, relationships capitalize on match-specific efficiency gains and mitigate incentive problems. On the other hand, the prevalence of long-term relationships can also lead to thinner, less efficient spot markets. We develop an empirical framework to quantify the market-level tradeoff between long-term relationships and the spot market. We apply this framework to an economically important setting—the US truckload freight industry—exploiting detailed transaction-level data for estimation. At the relationship level, we find that long-term relationships have large intrinsic benefits over spot transactions. At the market level, we find a strong link between the thickness and the efficiency of the spot market. Overall, the current institution performs fairly well against our first-best benchmarks, achieving 44% of the relationship-level first-best surplus and even more of the market-level first-best surplus. The findings motivate two counterfactuals: (i) a centralized spot market for optimal spot market efficiency and (ii) index pricing for optimal gains from individual long-term relationships. The former results in substantial welfare loss, and the latter leads to welfare gains during periods of high demand.&#13;
&#13;
The third chapter proposes a novel learning model on social networks that captures settings where individuals interact frequently on multiple, relatively short-lived topics. In this model, each period features a new draw of nature and multiple rounds in which information arrives, gets aggregated, and diffuses through network links. The repetitive nature of interactions across periods allows for a separation between learning about the environment and aggregating information about the current state. A class of empiricist learning rules achieve convergence of learning on all networks. On clique trees, these learning rules further achieve strong efficiency in information aggregation. The paper also presents a converse to the positive efficiency result and identifies distinct reasons why efficiency is hard to obtain in general circumstances, even though convergence of learning holds generally.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151324</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online Combinatorial Optimization for Digital Marketplaces</title>
<link>https://hdl.handle.net/1721.1/151321</link>
<description>Online Combinatorial Optimization for Digital Marketplaces
Susan, Fransisca
Digital marketplaces have access to a large amount of user data, which presents new opportunities for learning and data-driven decision-making. However, there are two fundamental challenges associated with the use of data in digital marketplaces. First, many decision-making processes in digital marketplaces involve evaluating and experimenting with a large number of options. While companies may be comfortable with conducting A/B testing when there are only two options, this becomes impractical when there are many more options to consider. Second, machine learning models often rely heavily on data that may not always be accurate or reliable, leading to poor predictive performance and, in turn, poor decision-making. In addition, real-time decision-making is often required. &#13;
&#13;
This motivates the main theme in my thesis, which is to develop effective and efficient online algorithms that take advantage of structures and predictive information in time-varying combinatorial environments, facilitating decision-making in uncertain situations. Examples of such problems include assortment optimization, product ranking, and bid optimization for online advertising.&#13;
&#13;
The thesis overall investigates online combinatorial optimization in digital marketplaces with various applications, covering general combinatorial problems, non-parametric choice models, constrained bid optimization in auctions, and fairness-constrained assortment optimization in four parts. In the first chapter, we address the problem of making real-time decisions in a time-varying combinatorial environment where the decision maker needs to balance optimizing their decision and learning about the underlying environment. We propose a unified framework that transforms robust greedy approximation algorithms into their online counterparts, even with non-linear objective functions. This framework is applicable in both full-information and bandit feedback settings, obtaining $\sqrt{T}$ and $T^{3/4}$ regret respectively. In the second chapter, we focus on the problem of learning non-parametric choice models on digital platforms in an active learning setting. This method involves influencing the data collection process to obtain more favorable data for estimation, in contrast to using only offline data or A/B testing, which might result in limited data sets. &#13;
&#13;
In the third and fourth chapters, we incorporate constraints into our optimization problems, which complicate the decision space while it remains large. Specifically, in the third chapter, we propose a bidding strategy for budget-constrained advertisers participating on multiple platforms with different non-IC auction formats. Our proposed non-linear value-pacing-based strategy is optimal in the offline setting and has no-regret in the online setting. Lastly, in the fourth chapter, we incorporate fairness constraints into an assortment optimization problem in digital marketplaces with diverse demographics. We aim to maximize the total market share across groups subject to the condition that the market share of each group meets a predetermined threshold, while also considering legal issues that prevent personalization. We present optimal approximation algorithms for both the offline and online settings.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151321</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Parallel Algorithms and Library for Spatial Clustering and Computational Geometry</title>
<link>https://hdl.handle.net/1721.1/151320</link>
<description>Fast Parallel Algorithms and Library for Spatial Clustering and Computational Geometry
Wang, Yiqiu
This thesis presents novel parallel shared-memory multi-core algorithms, implementations, and frameworks for efficiently solving large-scale spatial clustering and computational geometry problems. The primary focus is on designing theoretically-efficient and practical algorithms that can handle the increasing demand for faster processing speeds in spatial data sets. &#13;
&#13;
In the first part of the thesis, we introduce new parallel algorithms and a framework for spatial clustering. We design new parallel algorithms for exact and approximate DBSCAN, which match the work complexity of the best sequential algorithms while maintaining low depth. Extensive experiments demonstrate that our algorithms achieve massive speedup over existing algorithms and can efficiently process large-scale data sets. We also present new parallel algorithms for hierarchical DBSCAN (HDBSCAN) and Euclidean minimum spanning tree (EMST), including several theoretical results and practical optimizations. Furthermore, we propose a method to generate a dendrogram from the minimum spanning tree (MST) of the HDBSCAN or EMST problem. The EMST also solves single-linkage clustering. Lastly, we also design a framework for implementing parallel grid-based clustering algorithms. &#13;
&#13;
The second part of the thesis introduces our contributions to parallel algorithms and a library for computational geometry. We contribute to three problems in computational geometry: a new parallel reservation-based algorithm that can express both randomized incremental convex hull and quickhull algorithms; a sampling-based algorithm to reduce work for the smallest enclosing ball problem; and a parallel batch-dynamic data structure for the dynamic closest pair problem. We also introduce ParGeo, a library for parallel computational geometry that provides various parallel geometric algorithms, data structures, and graph generators. Our experimental evaluations show significant speedups achieved by our proposed algorithms across different problems. &#13;
&#13;
Overall, this thesis demonstrates that parallel shared-memory multi-core algorithms, implementations, and frameworks can efficiently solve large-scale spatial clustering and computational geometry problems both in theory and practice.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151320</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamical statistics for power series and polynomials with restricted coefficients</title>
<link>https://hdl.handle.net/1721.1/151311</link>
<description>Dynamical statistics for power series and polynomials with restricted coefficients
Ang, Yan Sheng
In this thesis, we study statistical properties and related results in two different dynamical settings. In the first part, we consider a family of fractals arising as limit sets of pairs of similitudes; these fractals are closely related to power series with all coefficients equal to ±1. Motivated by the Julia–Mandelbrot correspondence, we construct a natural measure in the parameter space satisfying analogous properties for this family. Viewing the natural measure as an average root-counting measure, we establish its asymptotics and angular equidistribution. We also prove an anti-concentration inequality for the limit sets, and use this to bound the variation of the number of roots of the typical random power series from its expected value.&#13;
&#13;
In the second part, in joint work with Jit Wu Yap, we consider pairs of polynomials with rational coefficients of bounded height. In the generic case, we control the structure of the Julia sets and some notions of arithmetic complexity at most places. Using this, we prove that the average number of common preperiodic points of the two polynomials goes to 0 as height increases. We also obtain lower and upper bounds for the essential minimum of the sum of canonical heights of the two polynomials.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151311</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The restricted projection problems</title>
<link>https://hdl.handle.net/1721.1/151310</link>
<description>The restricted projection problems
Gan, Shengwen
Given a non-degenerate smooth curve &#120574; : [0, 1] → R³, its tangent directions form a one-dimensional family of lines and its normal directions form a one-dimensional family of planes. In this thesis, I study the orthogonal projections of any set &#119860; ⊂ R³ onto these restricted families of lines and planes.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151310</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Dendritic Polymers as a Modular Drug Delivery Platform for Avascular Tissues</title>
<link>https://hdl.handle.net/1721.1/151302</link>
<description>Development of Dendritic Polymers as a Modular Drug Delivery Platform for Avascular Tissues
Johnston, Brandon Michael
Avascular tissues, such as articular cartilage, meniscus, intervertebral discs, and cornea stroma, pose a number of challenges for disease treatment. Systemic therapies will not reach the target tissue at efficacious levels. Local therapies are rapidly cleared from the tissue due to turnover of biological fluids, and repeated administration causes tissue damage and poor patient compliance. One method of improving the residence time of treatments is by utilizing drug delivery systems that target and bind to the extracellular matrix (ECM) with high affinity. The ECM is composed of a dense mesh of collagen and highly anionic proteoglycan chains. As such, small, cationic nanocarriers have been studied to electrostatically bind to avascular tissues and deliver therapeutics for extended periods.&#13;
&#13;
Recently, cationic poly(amido amine) dendrimers partially modified with poly(ethylene glycol) chains (PEG-PAMAM conjugates) have shown promise in electrostatic-based drug delivery to articular cartilage to treat osteoarthritis (OA), a debilitating disease of synovial joints affecting millions of people. Currently, there are no cures for OA. Though PEG-PAMAM conjugates have shown promise in delivering biologics to treat the disease, and have great potential for delivery to other avascular tissues, the impact of PEGylation on the surface charge presentation and drug delivery properties of these conjugates had not been well studied.&#13;
&#13;
The first part of this thesis developed an experimental technique to probe how PEG chain length and chain density impact the charge presentation of PEG-PAMAM conjugates. This technique facilitated quantification of important characteristics like non-covalent interactions between PEG and PAMAM and the number of cationic amines on the dendrimer surface accessible to the physiological environment. A number of drug delivery properties to articular cartilage, such as binding kinetics, diffusion, biocompatibility, and pharmacokinetics, were probed using ex vivo, in vitro, and in vivo models, and a mechanistic understanding of how PEG influences these properties was achieved. Increasing accessible charged amines by reducing PEG chain density or chain length increased electrostatic binding strength while longer PEG chains improved binding reversibility. By controlling binding strength and reversibility, specific delivery profiles could be achieved and fine-tuned.&#13;
&#13;
The second part of this thesis improved the bioconjugation of proteins to PEG-PAMAM conjugates and expanded the platform to other biologics and tissues. A robust scheme using click chemistry was introduced, improving loading efficiency while conserving protein bioactivity. Protein content influenced drug delivery properties to articular cartilage in a predictable manner, and improved joint residence times of proteins 20-fold. An anabolic protein and anti-catabolic protein, both with OA therapeutic potential, were loaded onto PEG-PAMAM conjugates and delivered to an in vivo OA model. Preliminary results show sustained delivery of the anti-catabolic protein relieved pain while both proteins showed signs of reduced OA severity compared to free proteins. Finally, the platform was explored in the meniscus in the context of controlling intra-articular trafficking, where the partitioning of conjugates between cartilage and meniscus was dependent on the protein corona of the nanoparticle. These findings inform optimization of PEG-PAMAM conjugates for future drug delivery applications to avascular tissue, and provide the preclinical development crucial for translation.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151302</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanochemical Understanding of Metal-Coordinated Polymers Using Simulation and Experiment</title>
<link>https://hdl.handle.net/1721.1/151299</link>
<description>Mechanochemical Understanding of Metal-Coordinated Polymers Using Simulation and Experiment
Khare, Eesha
Metal-coordination bonds have the capacity to reform after rupture, thereby enabling dynamic, tunable, and reversible (self-healing) mechanical properties. Several biological organisms, such as marine mussels (Mytilus) and marine worm jaws (Nereis virens), have been found to take advantage of these unique properties of metal-coordinated complexes to produce load-bearing materials with complex mechanical functions. Inspired by these biological materials, metal-coordination bonds have been incorporated into synthetic materials to produce a range of mechanical properties. However, efforts in engineering such metal-coordinated materials have been highly empirical, limiting the full design potential of these bonds. Developing an understanding of the relationship between microscopic metal-coordination bond properties and the resulting macroscopic mechanical properties of metal-coordinated materials would enable a priori prediction and optimized utilization of coordination bonds to build materials with advanced mechanical functions. &#13;
&#13;
This dissertation systematically characterizes metal-coordinated polymers and proteins with the aim of developing a mechanistic understanding of the relationship between microscopic bond chemistry and resulting macroscopic dynamic mechanical properties. We begin with a well-studied model system using an idealized polymer network where individual metal-coordination complexes control the macroscopic relaxation dynamics of the network. We use metadynamics simulations to show that the free energy landscape of metal-coordination bonds can be related to the macroscopic dynamic relaxation of these bonds in ideal polymer hydrogels as measured through experimental rheology. We then expand beyond single coordination complexes and use single molecule force spectroscopy to show that clusters of coordination bonds in model metal-coordinated protein dimers can rupture cooperatively, thereby synergistically increasing the rupture strength of the proteins. We resolve this rupture behavior mechanistically by using steered molecular dynamics simulations to show that metal-coordination bond rupture is highly heterogeneous and follows several rupture pathways, even with the same initial conditions. This indicates that metal-coordination bonds may have evolved in natural materials for primarily dissipative functions.&#13;
&#13;
The above insights are subsequently evaluated within the context of the Nvjp-1 protein, a major component of the Nereis worm jaw with high amounts of metal coordination. We find that increasing the quantity of metal ions makes the protein more compact, whereas increasing the spatial distribution of metal ions increases the protein toughness. We then briefly demonstrate how machine learning methods can be developed for similar systems to predict materials properties. The methodology and insights developed in this thesis have important implications for understanding the molecular mechanisms of metal-coordination bond-based stabilization of proteins and polymers and the a priori design of new metal-coordinated materials with desired mechanical properties.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151299</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rational design strategies for redox flow batteries</title>
<link>https://hdl.handle.net/1721.1/151298</link>
<description>Rational design strategies for redox flow batteries
Neyhouse, Bertrand J.
Global decarbonization of the energy sector necessitates development of storage technologies to mediate the inherent intermittency of renewable resources. Electrochemical systems are well-positioned to support this transition with redox flow batteries (RFBs) emerging as a promising grid-scale platform, as their unique architecture offers decoupled energy / power scaling, simplified manufacturing, and long service life. Despite these favorable characteristics, current embodiments remain prohibitively expensive for broad adoption, motivating the development of new electrolyte formulations (e.g., redox molecules, supporting salts, solvents) and reactor materials (e.g., electrodes, flow fields, membranes) to meet performance and cost targets for emerging applications. While many next-generation materials offer performance improvements, they must carefully balance complex tradeoffs between power / energy density, cycling stability, energy efficiency, and capital costs. This multifaceted parameter space frustrates the articulation of unambiguous design criteria, as the relationships between constituent material properties and cell performance metrics are not yet well-understood.&#13;
&#13;
In this thesis, I address these knowledge gaps by advancing theoretical and experimental methods that support the rational design of RFBs. First, I develop an in-line electrochemical sensor to measure electrolyte concentrations during RFB operation, providing insight into the dynamics of flow cell cycling. Second, I introduce an analytical zero-dimensional modeling framework for describing RFB cycling behavior, enabling facile simulation of charge / discharge behavior and device performance metrics. I then further simplify this approach by deriving closed-form expressions, facilitating the use of spreadsheet models for cycling simulations. Third, I apply these models to assess design tradeoffs for two-electron materials, demonstrating marked performance limitations. Finally, I leverage the analytical zero-dimensional framework to develop a new technique—compositionally unbalanced symmetric cell cycling—for characterizing crossover rates in redox flow cells. Broadly, the methods developed in this work have the potential to advance foundational understanding in RFB design and operation, leading to more rigorous selection criteria for candidate materials and ultimately supporting more robust, cost-competitive, and durable grid-scale energy storage.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151298</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/151297</link>
<description>Essays in Financial Economics
Cole, Allison
In Chapter 1, joint work with Bledi Taski, we pose the question: how do workers value retirement benefits relative to wages, and what impact do these benefits have on firm hiring? We find that dollars paid in employer contributions to 401(k) plans have nearly double the effect on a firm's recruiting success as dollars paid in wages. However, the effect is driven primarily by high-income and higher-age occupations. We use two novel instruments to identify the results: 1) IRS mandated non-discrimination testing of retirement plans and 2) corporate policies of national wage setting. We then develop and estimate an on-the-job search model which shows that the average worker requires only a 0.25 percentage point increase in employer contribution dollars to offset a 1% decrease in wages. Again, retirement valuations are positively correlated with salary. We confirm the channel in an online survey setting: participants are willing to give up total pay for a higher employer match rather than a non-matching employer-sponsored 401(k). The results imply that 80% of firms could improve their probability of a job offer being accepted by increasing 401(k) contributions.&#13;
&#13;
Chapter 2, joint work with Jonathan Parker, Antoinette Schoar, and Duncan Simester, documents the share of investable wealth that middle-class U.S. investors hold in the stock market over their working lives. This share rises modestly early in life and falls significantly as people approach retirement. Prior to 2000, the average investor held less of their investable wealth in the stock market and did not adjust this share over their working life. These changes in portfolio allocation were accelerated by the Pension Protection Act (PPA) of 2006, which allowed employers to adopt target date funds (TDFs) as default options in retirement saving plans. Young retail investors who start at an employer shortly after it adopts TDFs have higher equity shares than those who start at that same employer shortly before the change in defaults. Older investors rebalance more to safe assets. We also study retirement contribution rates over the life-cycle and find that average retirement saving rates increase steadily over the working life. In contrast to what we find for investment in the stock market, contribution rates have been stable over time and across cohorts and were not increased by the PPA.&#13;
&#13;
In Chapter 3, I use administrative data on very small businesses (median 5 employees) to measure the effects of the Paycheck Protection Program (PPP). Firms that applied for PPP increased employment by 7.5% relative to similar firms that did not apply. The positive effects on employment occur primarily in industries that were less affected by COVID-19: industries with more employees able to work remotely, those with fewer hourly workers, and essential businesses. Novel data on hiring show that PPP worked as intended by preserving employment matches. My estimates imply a cost of approximately $270,000 per job-year at small firms.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151297</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in electroaerodynamic thrusters for&#13;
aircraft propulsion</title>
<link>https://hdl.handle.net/1721.1/151296</link>
<description>Advances in electroaerodynamic thrusters for&#13;
aircraft propulsion
Gómez Vega, Nicolás
An electroaerodynamic (EAD) thruster is a propulsion system for small aircraft that is mechanically simple, has no moving parts, and is nearly silent. EAD thrusters produce ions from atmospheric air and accelerate the ions across two electrodes separated by an air gap: collisions of ions with neutral molecules result in momentum transfer to the neutral air, generating an ionic wind and a thrust force. EAD has been demonstrated to be a feasible form of propulsion for airships and, recently, for airplanes. A major challenge that has to be overcome is that EAD thrusters have a low thrust density (thrust per unit volume or frontal area) compared to conventional propulsion systems, such as propellers. &#13;
&#13;
This thesis focuses on thruster physics and explores different techniques to improve the thrust density and/or efficiency of EAD thrusters. Four studies are conducted to achieve these goals. The first one is a study of “decoupled” EAD thrusters with a dielectric barrier discharge ion source, in which the ionization and ion acceleration processes are separated. This is different from alternative EAD architectures using corona discharges, in which these processes cannot be independently controlled. By using benchtop and thrust-measurement tests, it is found that the current and thrust produced by these decoupled devices scale with the DC voltage and gap distance in the same manner as the ideal space-charge limited current in a thin ion slab. Similarly, the results show that current is mostly affected by the power draw of the ion source instead of by the ion source parameters independently.&#13;
&#13;
The second study involves reverse emission, a critical non-ideal effect that increases power consumption and lowers the sparking voltage without contributing to thrust. This work shows that reverse emission is caused by a gas discharge in the ion-collecting electrode, primarily at its two ends. Several techniques to mitigate this discharge are identified; all of these consist of modifying the electrode geometry to weaken the electric field at the tips. If reverse emission is mitigated, it is possible to achieve substantial improvements in power consumption, maximum thrust, and noise signature.&#13;
&#13;
The third study is a theoretical investigation of multistaged ducted (MSD) thrusters containing several serially-stacked EAD stages enclosed in a duct. The duct also includes an inlet and a nozzle and is hypothesized to provide a thrust component similar to that in ducted fans. Combining momentum theory with relevant models for the EAD stage performance, it is shown that MSD thrusters have the potential to provide order-of-magnitude improvements in thrust density and thrust-to-power ratio with respect to single-stage devices.&#13;
&#13;
The fourth study involves an implementation of multistaged EAD thrusters to both establish their performance and validate the predictions from theory. Single-stage experiments suggest that stages with small gap distances are advantageous as they provide a high force on the fluid per unit volume. By stacking multiple stages in series, it is found that the thrust density can be significantly increased as compared to single-stage thrusters: 10 stages provide a factor of 5.6 increase in thrust at the maximum voltage tested. However, these improvements occur with diminishing returns due to increasing pressure losses as more stages are added. The theoretical models are found to be consistent with the experimental data, being able to capture the effects of all the physical parameters tested.&#13;
&#13;
The work in this thesis provides a pathway for developing EAD thrusters that could deliver a high thrust density at a practical thrust-to-power ratio, potentially enabling EAD-propelled aircraft to perform useful missions.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151296</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Economic Theory</title>
<link>https://hdl.handle.net/1721.1/151295</link>
<description>Essays in Economic Theory
Lanzani, Giacomo
The first chapter of this thesis considers an agent who posits a set of probabilistic models for the payoff-relevant outcomes. The agent has a prior over this set but fears the actual model is omitted and hedges against this possibility. The concern for misspecification is endogenous: If a model explains the previous observations well, the concern attenuates. We show that different static preferences under uncertainty (subjective expected utility, maxmin, robust control) arise in the long run, depending on how quickly the agent becomes unsatisfied with unexplained evidence and whether they are misspecified. The misspecification concern’s endogeneity naturally induces behavior cycles, and we characterize the limit action frequency. This model is consistent with the empirical evidence on monetary policy cycles and choices in the face of complex tax schedules. Finally, we axiomatize, in terms of observable choices, this decision criterion and the speed at which the agent adjusts their misspecification concern.&#13;
&#13;
The second chapter offers an axiomatization of risk models where the choices of the decision maker are correlation sensitive. By extending the techniques of conjoint measurement to the nondeterministic case, we show that transitivity is the vN-M axiom that has to be relaxed to allow for these richer patterns of behavior. To illustrate the advantages of our modeling choice, we provide a simple axiomatization for the salience theory model within our general framework. This approach leads to a clear comparison with popular preexisting models, such as regret and reference dependence, and lets us single out the ordering property as the feature that brings salience theory outside the prospect theory realm. This chapter is published in the Quarterly Journal of Economics, vol. 137.&#13;
&#13;
The third chapter proposes a model of non-Bayesian social learning in networks that accounts for heuristics and biases in opinion aggregation. The updating rules are represented by nonlinear opinion aggregators from which we extract two extreme networks capturing strong and weak links. We provide graph-theoretic conditions on these networks that characterize opinions’ convergence, consensus formation, and efficient or biased information aggregation. Under these updating rules, agents may ignore some of their neighbors’ opinions, reducing the number of effective connections and inducing long-run disagreement for finite populations. For the wisdom of the crowd in large populations, we highlight a trade-off between how connected the society is and the nonlinearity of the opinion aggregator. Our framework bridges several models and phenomena in the non-Bayesian social learning literature, thereby providing a unifying approach to the field. This chapter is the result of joint work with Simone Cerreia-Vioglio and Roberto Corrao. &#13;
&#13;
JEL codes: D9
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151295</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Unemployment</title>
<link>https://hdl.handle.net/1721.1/151294</link>
<description>Essays on Unemployment
Cohen, Jonathan Palm
This thesis examines the experiences of unemployed workers. The chapters use administrative, survey, and self-collected data to understand how unemployment and the government insurance that covers it impact workers. I argue that unemployment insurance increases subsequent unemployment less than previously thought, and many earnings-relevant skills would likely hold steady during any additional unemployment. Collectively, the chapters show that the costs of unemployment insurance expansions are smaller than the existing consensus suggests.&#13;
&#13;
The first chapter, written jointly with Geoffrey Schnorr, identifies the employment effects of approving additional unemployment insurance claims whose eligibility is in doubt. To be eligible for unemployment insurance in the United States and most other countries, workers must have lost their job through no fault of their own. We use administrative data on the universe of unemployment insurance claimants in California since 2002 to estimate how adjusting the leniency of these criteria would affect claimants' subsequent employment. By comparing claimants whose applications are as-good-as randomly assigned to relatively strict government employees with those assigned to relatively lenient government employees, we isolate the causal effect of approving the marginal cases. Empirically, we find that unemployment duration increases by approximately two weeks. Based on a theoretical model of optimal UI, we map this employment effect to the efficiency cost of unemployment insurance. We find that the efficiency cost of approving these marginal cases is much lower than the efficiency costs for other types of UI benefit expansions.&#13;
&#13;
The second chapter, written jointly with Andrew Johnston and Attila Lindner, documents how survey-based skills evolve as workers remain unemployed. In addition to the psychological and financial costs that unemployment imposes on workers themselves, a central concern to policymakers is that unemployment may also harm long-term productivity if workers lose skills while unemployed. We provide direct evidence that this does not appear to be the case for a set of general skills using a linked panel of skill elicitations and administrative employment records among newly unemployed German workers in the late-2000s. We validate that the measured skills are meaningful: baseline measurements predict prior earnings and panel measurements change around certain significant life events. Despite large falls in both subjective well-being and reemployment earnings as workers remain unemployed, we find that all of the earnings-relevant cognitive and noncognitive skills in the survey remain constant. We provide evidence that the lack of observed skill depreciation cannot be explained by various types of measurement error.&#13;
&#13;
The third chapter, written jointly with Peter Ganong, aggregates the existing academic literature measuring the effect of UI benefit generosity on unemployment duration. Building on the first chapter, we hand-collect a comprehensive dataset of published studies that estimate this effect. While the theoretical and empirical consensus is that UI benefit generosity increases unemployment duration, we argue the magnitude is overstated due to publication bias: statistically insignificant or negatively-signed estimates are much less likely to be published. Correcting for publication bias decreases the overall average elasticity of unemployment duration with respect to UI benefit generosity by one-fifth to one-half. Aggregating evidence across studies, the predicted elasticity is approximately 0.3 in a policy regime similar to the typical U.S. state. However, the elasticity is larger by a factor of two or more under typical potential benefit duration extensions that occur during recessions.&#13;
&#13;
JEL Classification: J24, J64, J65
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151294</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Living With Crisis: Family, Labor, and Environment in Flint, Michigan</title>
<link>https://hdl.handle.net/1721.1/151293</link>
<description>Living With Crisis: Family, Labor, and Environment in Flint, Michigan
Sobrino, Elena
In 2014, residents of a small American city were exposed to dangerous levels of lead and bacteria in the drinking water supply and forced to grapple with long-term harm to their health and water infrastructure. The Flint water crisis, as this event became known, transpired in a majority Black municipality in the American midwestern Rust Belt. Flint has a long history of powerful union and civil rights movements but is now notorious for high unemployment rates, blighted neighborhoods, and an eroded tax base and population. Living with Crisis: Family, Labor, and Environment in Flint, Michigan examines the Flint water crisis as the precipitate of multiple ongoing crises in the region. These crises called forth efforts to salvage, redefine, or dissolve various social, economic, and ecological relationships following the exit of the automobile industry in the last decades of the twentieth century from its once dominant role in subsidizing civic life in Flint. This dissertation traces how the resulting shifts in power carried over into everyday struggles in the uneven fallout of the water crisis, a crisis which has been indefinite and resistant to closure. The water crisis was profoundly shaped by the paternalist policy landscape of Michigan at the time, most notably cuts in state revenue sharing and the implementation of emergency management. The overlap of these two policies produced a scarcity in which Flint residents effectively had no democratic decision-making power over city budgets and infrastructure. Emergency management approached financial distress as a justification for takeover by unelected professionals with no accountability to residents. This system disproportionately disciplined majority Black municipalities like Flint without addressing the root causes of income and racial inequality. 
This dissertation shows how Flint residents made sense of the conditions that led to the water crisis and made their own definitions of economic security and wellbeing by referring to values of reciprocity. Concepts of debts both moral and financial came out of norms, histories, and relationships within churches, families, nested levels of government, corporations, and unions. Between 2019 and 2021, I lived in Flint and conducted fieldwork with a wide range of interlocutors from churches, neighborhood groups, volunteers at bottled water distributions, the United Auto Workers union, local academics, and activists. Juxtaposed, their stories offered a composite picture of crises in social infrastructure, labor, and environment. At the same time, residents had compelling reasons to be skeptical of external evaluations of Flint that were one-dimensionally negative. Portrayals of decline and ruin fed authoritarian modes of governance and engendered feelings of distance and pity that positioned Flint as an object of charity and extraction simultaneously. This dissertation argues that in a context oversaturated with conflicting meanings of crisis, effects of the water crisis that may seem wholly disempowering, such as widespread sentiments of distrust and fatigue, are in fact important aspects of living with crisis that many residents believe merit attention, not simply elimination. By focusing on local struggles to balance recognition of harm against unwanted stigmatization, this dissertation argues for understanding acts of resistance and acts of survival as integrated rather than oppositional political practices.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151293</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis, Characterization, and Theory of Polymer Gels to Elucidate Topology-Property Relationships</title>
<link>https://hdl.handle.net/1721.1/151289</link>
<description>Synthesis, Characterization, and Theory of Polymer Gels to Elucidate Topology-Property Relationships
Beech, Haley Katherine
Crosslinked polymer networks and gels are pervasive in daily life, with applications ranging from tires to contact lenses to precision drug delivery. For any application, it is desirable to engineer the material based on fundamental principles and predictive theories prior to synthesis or production. This is challenging because end-linked polymer gels are filled with defects, specifically loops, dangling ends, and unreacted chains that significantly impact the elasticity of the network. Many classical theories do not account for these defects and rely on untested molecular assumptions that lead to inaccurate predictions. &#13;
&#13;
To address this gap, experiments were designed to relate gel topology to key properties: equilibrium swelling, gel point, chain conformation, and fracture toughness. Equilibrium swelling data demonstrated that gels with more loops reach a higher degree of swelling due to fewer elastic constraints, and led to a revised Flory-Rehner swelling theory which accurately captures this behavior. Gel points measured during both bond forming and bond breaking processes deviated as gels became more dilute, indicating a departure from truly random percolation and suggesting kinetic effects should be considered when modelling gelation. Small angle neutron scattering was used to measure single chain conformations within a gel, indicating that elastic chains stretch to create a space-spanning network, with increased stretching as gels become more dilute. Fracture toughness data showed that gels with more loops have a lower fracture toughness but a larger strain at break due to the effective extension of average chain length, which led to an update to the Lake-Thomas fracture theory that accounted for defects. &#13;
&#13;
A new conceptual understanding of fracture as a process guided by the intrinsic reactivity of the constituent strands was demonstrated, further validating the revised fracture theory. Weak and strong mechanophores were used as crosslinkers in an end-linked network, where the tearing energy was shown to correlate directly with the force-coupled reactivity of the linker. Tearing energy of mixed-reactivity networks demonstrated that depercolation of the fracture zone was the necessary criterion for failure. These experiments collectively enabled quantitative improvements to classical network models, provided evidence to verify molecular assumptions, and deepened conceptual understanding of network properties, enhancing predictive material design capabilities. Finally, a theoretical model for crosslinking with side reactions was updated and experimentally validated, enabling the study of nonideal networks common in industrial applications.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151289</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracellular Vesicle Capture and microRNA Detection</title>
<link>https://hdl.handle.net/1721.1/151287</link>
<description>Extracellular Vesicle Capture and microRNA Detection
Juthani, Nidhi
Cancer is one of the leading causes of death in the United States, and there has been much focus on earlier detection through the discovery of novel, easily accessible biomarkers via liquid biopsies. Extracellular vesicles have shown promise as a noninvasive biomarker for disease diagnosis and monitoring, and have become a treasure trove of information because they have been found to carry proteins, DNA, mRNA, and microRNA, as well as surface markers indicative of their cell of origin. Thus, developing methods to profile extracellular vesicles and interrogate their contents is a growing area of research and has the potential to develop into a non-invasive diagnostic platform, a liquid biopsy.&#13;
&#13;
The aim of this thesis is to develop a system to capture extracellular vesicles and profile the miRNA patterns present within them. First, we develop various amplification strategies in hydrogel particles for microRNA detection, including a colorimetric detection platform that can be translated to point-of-care settings for a liquid biopsy. We also explore other amplification strategies for increased miRNA detection sensitivity, including precipitation-based enzymatic signal amplification and strand displacement amplification.&#13;
&#13;
Then we develop methods for extracellular vesicle lysis and miRNA detection using a one-pot lysis and miRNA capture method. Extracellular vesicles were isolated from matched diseased and normal donor serum. Using rolling circle amplification, we performed multiplexed miRNA detection and quantification from serum extracellular vesicles. Calibration curves based on rolling circle amplification yielded miRNA copy number estimates in agreement with absolute quantification reported in other studies in the literature.&#13;
&#13;
Finally, we tune hydrogel particle porosity and use novel functionalization techniques to capture extracellular vesicles based on their surface markers. We explored the use of the thiol-acrylate Michael addition reaction for antibody conjugation and optimized it for extracellular vesicle capture. Using these porous, antibody-functionalized hydrogel particles, we captured extracellular vesicles from breast cancer serum and matched healthy serum using two surface markers, paving the way for multiplexed extracellular vesicle surface marker characterization. Porous hydrogel particles have the potential to considerably enhance the workflow for exosome capture and profiling experiments through multiplexing, fewer sample preparation requirements, and their customizable nature, hence furthering extracellular vesicle research.&#13;
&#13;
Drawing on insights from the MIT Sloan Management program, the commercialization potential and current market landscape for extracellular vesicles were analysed. Extracellular vesicles have shown tremendous potential in the fields of therapeutics and diagnostics, and for furthering academic research. The market study shows that the field is growing rapidly, with continued investment from venture capital for new companies, as well as corporate acquisitions by legacy players as they look to enter the field.&#13;
&#13;
The work presented in this thesis employs the various benefits of hydrogels for biomolecule detection, namely their biocompatibility, solution-like kinetics, nonfouling nature, and tunable chemistry. We believe that this work can be leveraged to improve upon and develop new technologies for extracellular vesicle capture and analysis, yielding more insights into this promising biomarker and eventually leading to earlier and more accurate diagnosis of disease.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151287</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymer Electrolyte Discovery via Rational Design and High Throughput Methods</title>
<link>https://hdl.handle.net/1721.1/151286</link>
<description>Polymer Electrolyte Discovery via Rational Design and High Throughput Methods
Stolberg, Michael A.
Storage of electrical energy is a cornerstone in the global endeavor to lower greenhouse gas emissions. In particular, electrochemical energy storage in the form of batteries can enable the electrification of transport through electric vehicles, as well as aid the transition to renewable energy generation such as wind and solar by stabilizing the grid and mitigating intermittency. Lithium-ion batteries, a pioneering technology that enabled portable electronics, are seeing increased use in transportation and grid-scale applications due to their high energy density and greatly decreased production costs over the past decade. However, current lithium-ion batteries are approaching their theoretical energy density and must adhere to higher safety standards as they see use in larger-scale formats. The next generation of cheaper, safer, and more energy-dense batteries will be enabled by advances in electrolytes, which are the focus of this work.&#13;
&#13;
In this thesis, we focus on solid polymer electrolytes, which have the potential to enable more energy-dense batteries, and display improved safety compared to the highly flammable and toxic liquid electrolytes in use today. We detail our work in two main areas: the rational design of highly dissociative ionenes, and the development of a high throughput platform to increase the scale and speed of polymer electrolyte research. In the former, we investigate the impact of anion dissociation energy on ion conduction in solid polymer electrolytes via a novel class of ionenes prepared using acyclic diene metathesis polymerization of highly dissociative, liquid crystalline, fluorinated aryl sulfonimide-tagged ("FAST") anion monomers. These polyanions form well-ordered lamellae that are thermally stable and provide anionic channels for ion hopping. Electrochemical impedance spectroscopy and differential scanning calorimetry experiments, along with nudged elastic band calculations, suggest that cation motion in these materials operates via an ion hopping mechanism, which is enabled by the highly dissociative nature of FAST anions. In parallel, we developed a high throughput platform to accelerate electrolyte research. We detail the engineering problems and solutions which resulted in an estimated 100X increase in sample throughput with vastly less researcher effort. The platform is then leveraged in two case studies: first, by performing the largest one-to-one comparison of lithium and sodium ion conduction in poly(ethylene oxide) to date; and second, by employing the platform in a machine learning-guided Bayesian optimization system to explore and optimize the ionic conductivity of electrolytes based upon poly(caprolactone). This work sets the stage for continued automation and data-driven design of polymer electrolytes for safer and more energy-dense batteries.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151286</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogen Distribution in Metal Hydrides and its Effect on Reactor Physics Calculations</title>
<link>https://hdl.handle.net/1721.1/151285</link>
<description>Hydrogen Distribution in Metal Hydrides and its Effect on Reactor Physics Calculations
Trainer, Amelia
Efficient and trustworthy simulations of nuclear reactor systems require that neutron-material interaction data be accurate, convenient to use, and readily available. When describing these interactions, the corresponding data is highly dependent on factors such as neutron energy, material temperature, and material composition. This work focuses on the scattering behavior of thermal neutrons, which are defined to have relatively low energy (typically less than 1-10 eV). Thermal neutrons are particularly interesting since, unlike their higher-energy counterparts, they interact at speeds too low to disregard the molecular bonds that hold a target atom in place. When scattering, thermal neutrons may interact with the vibrational, rotational, and translational modes of a material, thus causing the interaction data to be specific not only to the target atom but also to the material in which the atom resides. In this work, the role of vibrational excitation and de-excitation in thermal neutron scattering is explored as it pertains to metal hydrides. Exciting and de-exciting vibrations corresponds to an energy exchange, where a neutron may deposit energy into the material, thus creating a vibration (phonon), or gain energy by destroying a phonon. This change in energy characterizes inelastic scattering.&#13;
&#13;
In order to prepare inelastic thermal neutron scattering data, the phonon distribution of the target material must be known. The phonon distributions of commonly used moderators are often published in nuclear data libraries. However, these published distributions rely on many assumptions. For instance, these phonon distributions are often assumed to be independent of temperature and, for metal hydrides, independent of stoichiometric variation. The goal of this work is to generate phonon distributions that account for both of these variables (temperature and composition), and to illustrate the effect they have on reactor calculations. The material of interest for this study is zirconium hydride which, despite its variable hydrogen concentration, is commonly described using a phonon distribution that corresponds to a single temperature and a single hydrogen concentration. Approximating a range of different hydrogen concentrations and temperatures by a single distribution has not yet been sufficiently verified as an acceptable assumption. Phonon distributions corresponding to various material temperatures and hydrogen concentrations are generated via density functional theory and molecular dynamics, and their effects are tested in a series of reactor calculations. &#13;
&#13;
Thermal scattering data often involves storing notoriously large files and, therefore, accounting for different material compositions can bring an overwhelming burden through storage requirements. To address this limitation, the “phonon sampling method” is developed, which allows for thermal scattering data to be represented using &lt; 1% of the storage that existing methods require, while resulting in a minor but noticeable increase in calculation time.&#13;
&#13;
Using the hydrogen- and temperature-dependent ZrHₓ phonon distributions developed in this study, alongside the phonon sampling method, alterations to the overall shape of the phonon distributions are shown to noticeably affect reactor behavior, including the multiplication factor &#119896; and flux distributions.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151285</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication of Electrospun Anti-fouling Membranes for Emulsified Oil-in-Water Separation</title>
<link>https://hdl.handle.net/1721.1/151284</link>
<description>Fabrication of Electrospun Anti-fouling Membranes for Emulsified Oil-in-Water Separation
Song, Chen
The separation of emulsified oils from water represents a significant challenge for water reclamation and clean-up, because such droplets remain suspended for long periods of time. Membrane technology offers a solution to such separations, but its major drawback is the requirement for frequent cleaning to remove foulants, which reduces membrane lifetime and incurs additional economic and environmental costs. Therefore, there is strong motivation to design membranes with better robustness against fouling. There are generally two approaches to reducing the detrimental effects of membrane fouling. One is to improve the effectiveness of foulant removal during membrane regeneration, so that a larger proportion of the original permeate flux can be recovered. The other is to improve the resistance against foulant deposition, so that the decline in permeate flux is less severe. In this work, we pursue both approaches to improve the anti-fouling properties of membranes used for the treatment of emulsified oily wastewater.&#13;
&#13;
We first investigate the feasibility of using thermal treatment to improve the effectiveness of foulant removal. We identified P84, a commercially available copolyimide, as a promising candidate due to its excellent thermal stability. The separation performance of fibrous membranes made of P84 was examined, and the effectiveness of different foulant removal methods was evaluated. Electrospun P84 membranes offer significant advantages over other materials for the separation of oil-in-water emulsions. In particular, the decline in permeate flux due to fouling is comparable to that observed for more oleophobic polymers like polyacrylonitrile (PAN), and less severe than that observed for several other commonly used polymers. More importantly, the P84 membranes also exhibit essentially 100% flux recovery through thermal treatment for at least three cycles, with no influence on the separation performance of the membranes. This work demonstrates the feasibility of using thermal treatment as an effective method to regenerate polymeric membranes with good thermal stability. &#13;
&#13;
Next, we look into strategies to improve the resistance against foulant deposition. We present a simple approach to fabricate liquid-infused membranes (LIMs) with the potential to eliminate membrane fouling. Fluorinated silane was attached to electrospun membranes made of a fluorinated polymer, and the membranes were then prewetted with lubricants made of perfluoropolyethers. The LIMs with fluorinated silane attached exhibit higher permeate flux and better selectivity compared to the LIMs without the attachment, and better fouling resistance compared to the membranes without the infused liquid. A parametric study also shows that operating at the highest pressure at which the membrane can still maintain its permeation selectivity, and choosing a less viscous infused liquid, contribute to higher permeate flux. This work demonstrates the feasibility of using LIMs to selectively permeate a dispersed phase and their potential to eliminate foulant deposition.&#13;
&#13;
Lastly, we further study the transport mechanism of the dispersed oil phase through LIMs to better understand the factors affecting the permeate flux and to later modify the LIMs more effectively to promote high flux. We used confocal laser scanning microscopy (CLSM) to construct three-dimensional images of the sequence of events responsible for coalescence of oil droplets and the formation of oil channels within the LIM. We show that the key to anti-fouling behavior is the pore wall's higher affinity for the infused liquid than for the permeating oil phase. Using image analysis, we find that the rate at which oil channels are opened within the membrane and the number of open channels ultimately govern the permeate flux through the LIMs. Oil concentration in the feed affects the capture of oil droplets and the subsequent coalescence of oil, which in turn affects the channel-opening dynamics. The channel-opening dynamics also depend on the viscosity of the infused liquid and the operating pressure. This work offers insight into the selective permeation of a dispersed liquid phase through a LIM and provides operating guidelines to promote higher flux.&#13;
&#13;
Overall, this thesis presents two new approaches to ameliorate the detrimental effects of membrane fouling during the treatment of emulsified oily wastewater. The contributions from this thesis improve the competitiveness of membrane separation techniques for the treatment of emulsified oily wastewater.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151284</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inductive and Deductive Synthesis for Database Applications</title>
<link>https://hdl.handle.net/1721.1/151281</link>
<description>Inductive and Deductive Synthesis for Database Applications
Feser, John Killian
Program synthesis is a promising method for building efficient, flexible software by deriving low-level implementations from high-level specifications. In this thesis, I use programming-languages techniques to develop systems for synthesizing high-performance, specialized software and to build better general-purpose program-synthesis algorithms. I describe two new synthesis systems. First, I present a full-featured, synthesis-based pipeline for generating database implementations that are specialized to query workloads. This project shows that synthesis is a promising approach for building systems software, but building efficient synthesizers is still difficult, and in general a new synthesizer must be built for every new language. To address this need, I present a new, general-purpose inductive synthesizer, and show that it offers state-of-the-art performance on several challenging tasks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151281</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Development Economics</title>
<link>https://hdl.handle.net/1721.1/151280</link>
<description>Essays on Development Economics
Sverdlin Lisker, Diana
Developing countries face numerous challenges that can impede their economic growth and development. These challenges manifest in several areas, including firms, governments, and households. Firms operating in these environments often encounter significant constraints, such as limited access to credit, poor contract enforcement, political instability, and inadequate infrastructure for transportation. For governments, resources are typically scarce, and information asymmetries can be high due to the prevalence of informal economies. Meanwhile, households often struggle to access quality education and healthcare, leading to lower levels of human capital. Taken together, these factors can significantly hinder opportunities for economic progress.&#13;
&#13;
This thesis focuses on three critical issues in developing countries, with a special emphasis on Mexico. The first chapter investigates the prevalence of small firms in Mexico's retail sector. It argues that high transport costs result in smaller effective market sizes, leading to the proliferation of smaller and lower-quality firms. In the second chapter, a framework is developed to organize the existing literature on social protection. It reviews and discusses the design and implementation of redistribution and income support programs in an environment with a large informal sector. Lastly, the third chapter examines the impact of expanding low-cost private healthcare access in Mexico. It finds a decrease in public healthcare usage, but the pattern of which visit types are substituted away implies efficient sorting of patients and suggests that expanding low-cost alternatives can improve access and the allocation of care. By examining these issues, I aim to shed light on some of the challenges faced by firms, governments, and households in developing countries and discuss paths forward.&#13;
&#13;
JEL Classification: I130, I380, R110
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151280</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Corporate Finance and Financial Markets</title>
<link>https://hdl.handle.net/1721.1/151270</link>
<description>Essays on Corporate Finance and Financial Markets
Yu, Jiaheng
This thesis consists of three chapters.&#13;
&#13;
Chapter 1 studies the informational role of trade credit and the accounts receivable financing market. I hand-collect new data on the contracts of accounts receivable based loans and trade credit terms. I find that sellers experiencing payment delays are primarily financed through accounts receivable based loans. These loans are 2-4% per year more expensive than buyers' borrowing rates and require a 20% average haircut on invoice value. Seller moral hazard that leads to bad-quality products is a determinant of payment delays, and although difficult to observe in existing data, can be uncovered from terms of accounts receivable based loans. Lenders help improve the quality of sellers: sellers who successfully receive credit experience a 5% decline in receivable days and have higher sales and longer relationships with buyers. I propose and structurally estimate a trade credit model that incorporates accounts receivable financing. In the model, the buyer trades off the financial cost and the incentive effect of trade credit and learns from the lender's loan decisions. I show through counterfactual analyses that regulatory limits on payment delays increase the presence of bad products and lower output, while subsidizing accounts receivable financing may increase output at relatively low expense.&#13;
&#13;
In Chapter 2, joint work with Rodney Garratt and Haoxiang Zhu, we study the design of Central Bank Digital Currencies. Banks of different sizes respond differently to interest on reserves (IOR) policy. For low IOR rates, large banks are non-responsive to IOR rate changes, leading to weak pass-through of IOR rate changes to deposit rates. In these circumstances, a central bank digital currency (CBDC) may be used to provide competitive pressure to drive up deposit rates and improve monetary policy transmission. We explore the implications of two design features: interest rate and convenience value. Increasing the CBDC interest rate past the point where it becomes a binding floor increases deposit rates but leads to greater inequality of market shares in both deposit and lending markets, and can reduce the responsiveness of deposit rates to changes in the IOR rate. In contrast, increasing convenience from sufficiently high levels increases deposit rates, causes market shares to converge, and can increase the responsiveness of deposit rates to changes in the IOR rate.&#13;
&#13;
In Chapter 3, joint work with Jingxiong Hu, we study the effect of “guaranteed close” on the informativeness of market close prices. Passive investment strategies that trade at market close have incurred high transaction fees charged by the primary exchanges. Investment banks undercut the exchanges by executing client orders at close prices set on the exchanges yet charging lower fees. While providing liquidity, banks trade on the order flow information. Using a quasi-experimental shock – an NYSE close auction fee cut – we find that banks’ trading activities improve the informativeness of close prices and reduce the cost of passive investment strategies. To explain this finding, we propose a model where dual trading improves price discovery. A bank contributes to price discovery by trading on the informativeness of the orders it receives relative to the market. The implications of our model apply generally to scenarios with multiple trading venues where venue operators trade on order flow data.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151270</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three applications of the regularity method in combinatorics</title>
<link>https://hdl.handle.net/1721.1/151266</link>
<description>Three applications of the regularity method in combinatorics
Berger, Aaron
Szemerédi’s regularity lemma is one of the central tools in extremal graph theory and additive combinatorics. We present three versions of this lemma tailored for three applications. The first application considered is the graph removal lemma, which states that for any graph &#119867; on &#119896; vertices and all &#120598; &gt; 0, there is some &#120575; &gt; 0 such that any graph &#119866; on &#119899; vertices with fewer than &#120575;&#119899;ᵏ copies of &#119867; can be made &#119867;-free by deleting at most &#120598;&#119899;² edges. The dependence of &#120575; on &#120598; in this lemma is of much interest. The current record is due to Fox, who showed that one can take &#120575;⁻¹ to be an exponential tower [mathematical notation] of height &#119874;ₖ(log(&#120598;⁻¹)). We give a new proof of these tower-log bounds, simplifying the iterative step by replacing Fox’s mean-entropy density with a new concept called edge budgets. &#13;
&#13;
The remaining two applications of regularity we present are popular differences results. For a compact abelian group &#119866;, a corner in &#119866; × &#119866; is a triple of points (&#119909;, &#119910;), (&#119909;, &#119910; + &#119889;), (&#119909; + &#119889;, &#119910;). The classical corners theorem of Ajtai and Szemerédi implies that for every &#120572; &gt; 0, there is some &#120575; &gt; 0 such that every subset &#119860; ⊂ &#119866;×&#119866; of density &#120572; contains a &#120575; fraction of all corners in &#119866;×&#119866;, as &#119909;, &#119910;, &#119889; range over &#119866;. However, in general one cannot take &#120575; to be polynomial in &#120572;. Generalizing a result of Mandache from the finite field setting, we show that for any &#119860; ⊂ &#119866; × &#119866; of density &#120572;, there is some “popular difference” &#119889;₀ ≠ 0 such that &#119860; contains an &#120572;⁴ fraction of all corners as &#119909;, &#119910; vary over &#119866; but &#119889; = &#119889;₀ is fixed. &#13;
&#13;
We conclude with a similar result for a more general class of two-dimensional patterns. The following combinatorial conjecture arises naturally from recent ergodic-theoretic work of Ackelsberg, Bergelson, and Best. Let &#119872;₁, &#119872;₂ be &#119896; × &#119896; integer matrices, &#119866; be a finite abelian group of order &#119873;, and &#119860; ⊆ &#119866;ᵏ with |&#119860;| ≥ &#120572;&#119873;ᵏ. If &#119872;₁, &#119872;₂, &#119872;₁ − &#119872;₂, and &#119872;₁ + &#119872;₂ are automorphisms of &#119866;ᵏ, is it true that there exists a popular difference &#119889; ∈ &#119866;ᵏ ∖ {0} such that&#13;
&#13;
 #{&#119909; ∈ &#119866;ᵏ : &#119909;, &#119909; + &#119872;₁&#119889;, &#119909; + &#119872;₂&#119889;, &#119909; + (&#119872;₁ + &#119872;₂)&#119889; ∈ &#119860;} ≥ (&#120572;⁴ − &#119900;(1))&#119873;ᵏ. &#13;
&#13;
We show that this conjecture is false in general, but holds for [equation] with &#119901; an odd prime given the additional spectral condition that no pair of eigenvalues of &#119872;₁&#119872;₂⁻¹ (over the algebraic closure [notation]) are negatives of each other. In particular, the “rotated squares” pattern does not satisfy this eigenvalue condition, and we give a construction of a set of positive density in [notation] for which that pattern has no nonzero popular difference. This is in surprising contrast to three-point patterns, which we handle over all compact abelian groups and which do not require additional spectral conditions.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151266</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orbit and Attitude Control for (non-) Rotating Space-Based Telescopes Utilizing Reflectivity Control Devices</title>
<link>https://hdl.handle.net/1721.1/151264</link>
<description>Orbit and Attitude Control for (non-) Rotating Space-Based Telescopes Utilizing Reflectivity Control Devices
Cabrales Hernandez, Alejandro D.
Satellite-based imaging has allowed for advancements in knowledge of Earth's environment, the Solar System, and the cosmos that were otherwise not possible using ground-based counterparts. It is no surprise that scientists call for advancements in telescope technology that allow for longer-lasting and larger apertures to improve the quantity and quality of data. An increase in the aperture's diameter and fuel capacity coincides with an increase in satellite size and mass that may be incompatible with current and proposed launch systems. Mission lifetime limitations due to propellant are of particular concern for satellites operating in unstable orbits such as the Sun-Earth Lagrange points. Therefore, there exists a need for novel methods that allow space telescopes to reduce fuel usage as well as satellite volume and mass. &#13;
&#13;
To reduce fuel usage and increase mission lifetime, reflectivity control devices (RCDs), devices capable of regulating the effective force produced by solar radiation pressure on a surface, are utilized in conjunction with the dynamics around the Sun-Earth Lagrange points to provide a method of fuel-free orbit and attitude control. Additionally, RCDs produce lower actuator disturbances compared to traditional spacecraft actuators, leading to a reduction in line-of-sight jitter. To address satellite volume and mass limitations, Rotating Synthetic Aperture (RSA) telescopes are analyzed as a potential technology that enables larger apertures due to the reduction in mirror surface area compared to traditional satellites. RSA satellites consist of a thin strip aperture which is rotated about an axis normal to the aperture plane. As the satellite rotates, multiple images are taken and combined to recover the full image as if it had been taken with a circular aperture. This dissertation presents a methodology for achieving fuel-free orbit and attitude control via reflectivity control devices; this enables long mission lifetimes, large aperture sizes, and low disturbances for non-rotating and rotating space-based apertures.&#13;
&#13;
Although reflectivity control devices have been demonstrated in orbit and extensively studied, no previous method has shown the ability to obtain full six-degree-of-freedom control with RCDs as the satellite's only actuators. An allocation algorithm for utilizing RCDs that can be continuously switched from a specular reflective state to an absorptive state is presented, along with an analysis of the impact of satellite attitude on the control envelope of an aggregate RCD configuration. Regarding the operation of space telescopes, the field-of-regard region, or the region in which a satellite with RCDs can maintain combined orbit and attitude control, is derived for both RSA-like and non-rotating telescope configurations. An optimization scheme for the placement of RCD cells in a given configuration to maximize the control authority over different attitudes is also presented. Additionally, this work develops a dynamically similar testbed to allow for the testing of pointing control algorithms for rotating synthetic apertures. The testbed serves as a method of testing conceptual RSA satellites at lower orbital regimes where the RCDs are not able to provide control, but which still correspond to regions of interest for the operation of RSA satellites for Earth science observation. The derivation of scaling laws for the testbed and hardware-based results for RSA satellites ranging from low Earth orbit to medium Earth orbit are demonstrated.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151264</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Materials for Extreme Environment Applications by&#13;
Laser Powder Bed Fusion</title>
<link>https://hdl.handle.net/1721.1/151261</link>
<description>Development of Materials for Extreme Environment Applications by&#13;
Laser Powder Bed Fusion
O'Brien, Alexander D.
As the desire for greener, more efficient energy production creates an urgent push for advances in reactor and turbine designs, rapid innovation must be achieved in the field of extreme environment materials. Recent technological progress in metal additive manufacturing has enabled new techniques in materials design that might prove essential in closing current gaps. In this study, I identify two critical advanced energy systems that may be improved with materials produced by metal additive technology, namely fusion reactor vacuum vessels and jet engine turbine blades, and explore the practical usage of laser powder bed fusion to improve production of relevant materials.&#13;
&#13;
First, material enhancements are considered in the context of increasing survivability for full-power operation of the ARC tokamak fusion reactor. Based on evaluation of relevant properties for four candidate vacuum vessel materials, Inconel 718 is identified as the most likely selection for construction of an initial ARC pilot plant in the short term. Maximizing the lifetime of such a vessel will require tailoring properties to increase resistance to neutron effects and, especially, to enhance mechanical properties around 800℃. Both targets are expected to be addressable by the formation of a 718-based metal matrix composite, which is enabled by additive manufacturing. Based on rapid evaluation of various ceramics in Al-based composites, SiC is selected as a promising candidate, so an Inconel 718 composite reinforced with 2 vol% SiC is produced by laser powder bed fusion. Microstructural analysis reveals breakdown of SiC and in-situ formation of silicides and carbides, which result in decreased porosity and grain size. Room-temperature mechanical tests show good strengthening over base Inconel 718 with low loss in ductility. Improvement in high-temperature ductility is achieved over the unreinforced material, but the effects appear inadequate to merit use over wrought Inconel 718. An additional composite is then developed using 2 vol% ZrB2 as the reinforcing material. Microstructural results for this composite follow a similar trend to SiC and verify the capability for reducing porosity. Room-temperature mechanical testing shows higher strength and lower ductility than the SiC composite. However, elongation at failure is found to increase drastically around 800℃, reaching more than 8x that of printed unreinforced Inconel. These results suggest high potential for ARC implementation.&#13;
&#13;
Second, the use of functionally graded printing is discussed as a potential method of establishing improved oxidation resistance for the use of niobium for turbine blade applications. To enable future studies of this, a process flow is developed to establish niobium printability. The viability of high-throughput single-layer testing with fixed-depth wells is first assessed for rapid, material-efficient parameterization. Promising conditions are then transferred to multi-layer printing, and further iteration is conducted to minimize surface agglomeration and allow continuous build-up. Finally, a Gaussian regression code is applied to recommend optimum conditions. The resulting print quality is found to achieve the first scalable printing of niobium powder in a commercial powder bed fusion system, which is necessary to enable the exploration of techniques such as functional grading.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151261</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collagen Anchoring Agonist Antibodies for Cancer Immunotherapy</title>
<link>https://hdl.handle.net/1721.1/151256</link>
<description>Collagen Anchoring Agonist Antibodies for Cancer Immunotherapy
Palmeri, Joseph Robert
While traditional cancer interventions such as surgery, radiation, and chemotherapy are aimed at killing or removing the tumor cells themselves, immunotherapies instead seek to establish long-lasting, robust antitumor immune responses. One approach that has shown promising results in preclinical mouse models is the use of agonist antibodies targeting costimulatory, or activating, receptors on effector immune cells, particularly CD8⁺ T cells. Translation of these therapeutics into the clinic has been hampered by severe, sometimes fatal, on-target, off-tumor toxicities. Thus, the field at large has shifted focus to developing agonist antibodies with tumor-restricted activity. To that end, we developed collagen-anchored agonist antibodies, an approach we have previously validated with collagen-anchored cytokines. When injected directly into the tumor, these collagen-anchoring therapies are preferentially retained in the tumor microenvironment (TME), enhancing efficacy while limiting systemic toxicities.&#13;
&#13;
We first attempted to engineer a generalizable antibody-anchoring platform by constructing fusions of IgG binding domains (IgGBs) to collagen binding domains. However, due to the weak affinity of existing IgGBs and rapid in vivo exchange with endogenous IgG, this platform underperformed at retaining agonist antibodies in the TME.&#13;
&#13;
We then pivoted to constructing direct agonist antibody fusions to collagen binding domains, demonstrating that this strategy is generalizable to a range of antibody therapeutics. In vivo, we tested agonist antibodies targeting 4-1BB and CD28 fused to the collagen binding domain LAIR (α4-1BB-LAIR and αCD28-LAIR, respectively) in a range of monotherapy and combination therapies. We observed that while combination treatment of α4-1BB-LAIR with an antitumor antibody (TA99) displayed only modest efficacy in the B16F10 murine melanoma model, simultaneous depletion of CD4⁺ T cells during treatment boosted cure rates to over 90% of mice. We elucidated two mechanisms of action for this synergy: αCD4 eliminated tumor-draining lymph node Tregs, enhancing priming and activation of CD8⁺ T cells, and TA99 + α4-1BB-LAIR supported the cytotoxic program of these newly primed CD8⁺ T cells within the TME. Replacement of αCD4 with αCTLA-4, a clinically approved antibody that enhances T cell priming, produced equivalent cure rates while additionally generating robust immunological memory. Together, my thesis work demonstrates that collagen anchoring is an effective strategy to improve the therapeutic index of agonist antibody therapies and furthermore uncovers a fundamental two-step approach to designing effective cancer immunotherapy combinations.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151256</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical Alkene Epoxidation Using Water as the Oxygen Source</title>
<link>https://hdl.handle.net/1721.1/151248</link>
<description>Electrochemical Alkene Epoxidation Using Water as the Oxygen Source
Chung, Minju
Alkene epoxidation is a crucial chemical functionalization reaction that produces key chemical intermediates for the synthesis of various commercial end products. The current methods of producing epoxides are associated with a number of challenges, including the use of energy-intensive and hazardous oxidants such as chlorine or peroxides. Alternative processes that use molecular oxygen as a reagent are available, but they often require elevated temperatures and pressures, which pose significant safety concerns. To improve the safety and efficiency of epoxide production, there is a need for new methods that can overcome these limitations and enable the production of epoxides in a more sustainable and safe manner. From this perspective, this thesis work explores electrochemical alkene epoxidation using water as the oxygen source, delving into both fundamental and practical aspects of chlorine-mediated and direct approaches.&#13;
&#13;
First, we discuss the mechanism of chlorine-mediated ethylene oxidation in saline water. We demonstrate that electrochemically generated chlorine selectively oxidizes ethylene to chloroethanol, which converts to ethylene oxide in alkaline aqueous electrolyte. Through detailed electrochemical kinetic studies, we reveal that the chlorine-mediated oxidation mechanism differs significantly between the presence and absence of ethylene, indicating that it is not a simple merging of the chlorine evolution and chemical chlorohydrin processes.&#13;
&#13;
Next, we focus on understanding the effects of tuning single-atom dopants on manganese oxide for electrochemical direct epoxidation, employing operando X-ray absorption spectroscopy to probe oxidation state and structural changes under cyclooctene epoxidation conditions. &#13;
&#13;
Finally, we present a sustainable and selective method for propylene epoxidation using an oxidized palladium-platinum alloy catalyst. This catalyst demonstrates remarkable Faradaic efficiency and rate toward epoxidation at ambient temperature and pressure, outperforming previously reported electrocatalysts for direct epoxidation. The reaction mechanism is investigated using a multi-faceted approach, including kinetic rate measurements, probe substrate analysis, and substrate-based descriptor assessment. This work advances sustainable epoxide synthesis, which currently has a significant energy and environmental footprint.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151248</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Role of Inner Kinetochore Higher-Order Assembly in Kinetochore Function</title>
<link>https://hdl.handle.net/1721.1/151246</link>
<description>Investigating the Role of Inner Kinetochore Higher-Order Assembly in Kinetochore Function
Sissoko, Gunter Badji
Cells are the basic building blocks of life. As a result, every organism depends on cell division to survive and reproduce. To divide, eukaryotes use a genetic program called the cell cycle, which coordinates the steps required to grow a cell and divide its contents into two daughter cells. Each cycle, eukaryotic cells duplicate their genomes during Synthesis (S) phase and separate the two copies of the genome during Mitosis. To mechanically separate those copies, cellular structures undergo dramatic rearrangements. The genome is packaged into discrete chromosomes that can be manipulated easily. Simultaneously, microtubules reorganize into the mitotic spindle, which generates the forces that move chromosomes. To connect chromosomes to spindle microtubules, the cell builds protein complexes called “kinetochores” on regions of the chromosomes known as “centromeres.” The inner kinetochore resides on chromosomal DNA throughout the cell cycle. As mitosis approaches, post-translational modifications trigger recruitment of the outer kinetochore, which binds to microtubules and harnesses the forces they generate to move chromosomes. &#13;
&#13;
To avoid DNA damage, most organisms assemble kinetochores at a single location on each chromosome that is designated by the histone H3 variant CENP-A. CENP-A nucleates kinetochore assembly at centromeres but also easily mislocalizes to non-centromeric DNA. Interestingly, non-centromeric CENP-A typically does not recruit kinetochores; kinetochores only assemble at loci with a very high density of CENP-A nucleosomes. Consistent with this, imaging and the large copy numbers of kinetochore proteins at individual kinetochores suggest that inner kinetochore proteins form higher-order assemblies. These findings suggest that interactions between CENP-A-associated inner kinetochore modules have a role in proper kinetochore assembly. In my thesis work, I investigated how higher-order assembly of the inner kinetochore protein CENP-T, which recruits the outer kinetochore in vertebrates, impacts CENP-T’s function.&#13;
&#13;
To test the impact of higher-order assembly on CENP-T, I used artificial oligomerization to generate CENP-T assemblies with different numbers of CENP-T molecules. With this approach, I generated kinetochore-like particles that recruit a complete complement of outer kinetochore proteins and interact with microtubules similarly to endogenous kinetochores. By comparing oligomers generated with two unrelated strategies to monomers, I found that CENP-T only robustly recruits the outer kinetochore when oligomerized. This behavior can be recapitulated in vitro with recombinant proteins and no other cellular factors. Furthermore, my results suggest that outer kinetochore recruitment increases gradually with the number of CENP-T molecules in an oligomer. &#13;
&#13;
Our findings are some of the first direct evidence of the important role of inner kinetochore higher-order oligomerization in kinetochore function. They suggest that oligomerization of the inner kinetochore regulates where kinetochores assemble by restricting outer kinetochore recruitment to regions with a high density of inner kinetochore modules. This result opens the door to further work on the role of higher-order assembly at kinetochores. Furthermore, my work has revealed a method by which native kinetochore-like particles can be generated for in vitro biophysical analysis, and my artificial oligomerization approaches will be valuable for studying other pathways that involve higher-order assemblies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151246</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging the Linear Response Theory in Sensitivity Analysis of Chaotic Dynamical Systems and Turbulent Flows</title>
<link>https://hdl.handle.net/1721.1/151245</link>
<description>Leveraging the Linear Response Theory in Sensitivity Analysis of Chaotic Dynamical Systems and Turbulent Flows
Sliwiak, Adam Andrzej
The linear response theory (LRT) provides a set of powerful mathematical tools for analyzing a system’s response to controllable perturbations. In applied sciences, LRT is particularly useful in approximating parametric derivatives of observables induced by a dynamical system. These derivatives, usually referred to as sensitivities, are critical components of optimization, control, numerical error estimation, risk assessment, and other advanced computational methodologies. Efficient computation of sensitivities in the presence of chaos has been a major and still unresolved challenge in the field. While chaotic systems are prevalent in several fields of science and engineering, including turbulence and climate dynamics, conventional methods for sensitivity analysis are doomed to failure due to the butterfly effect. This inherent property of chaos means that any pair of infinitesimally close trajectories separates exponentially fast, triggering serious numerical issues.&#13;
&#13;
A promising new method, known as the space-split sensitivity (S3) method, addresses the adverse butterfly effect and has several appealing features. S3 directly stems from Ruelle’s closed-form linear response formula involving Lebesgue integrals of input-output time correlations. Its linearly separable structure combined with the chain rule on smooth manifolds enables the derivation of ergodic-averaging schemes for sensitivities that rigorously converge in uniformly hyperbolic systems. Thus, S3 can be viewed as an LRT-based Monte Carlo method that averages data collected through regularized tangent equations along a random orbit. Despite the recent theoretical advancements, S3 in its current form is applicable only to systems with one-dimensional unstable manifolds, which makes it impractical for real-world models.&#13;
&#13;
In this thesis, we extend the concept of space-splitting to systems of arbitrary dimension, develop generic linear response algorithms for hyperbolic dynamical systems, and demonstrate their performance using common physical models. In particular, this work offers three major contributions to the field of nonlinear dynamics. First, we propose a novel algorithm for differentiating ergodic measures induced by chaotic systems. These quantities are integral components of the S3 method and arise from the partial integration of Ruelle’s ill-conditioned expression. Our algorithm uses the concept of quantile functions to parameterize multi-dimensional unstable manifolds and computes the time evolution of measure gradients in a recursive manner. We also demonstrate that the measure gradients can be utilized as indicators of the differentiability of statistics, and might dramatically reduce the statistical-averaging error in the case of highly oscillatory observables. Second, we blend the proposed manifold description, the algorithm for measure gradients, and the linear decomposition of the input perturbation to derive a complete set of tangent equations for all by-products of the regularization process. We prove that all the recursive equations converge exponentially fast in uniformly hyperbolic systems, regardless of the choice of initial conditions. This result is used to assemble efficient one-step Monte Carlo algorithms applicable to high-dimensional discrete and continuous-time systems. Third, we argue that the effect of the measure gradient could be negligible compared to the total linear response if the model is statistically homogeneous. Consequently, one could accurately approximate the sought-after sensitivity by evolving in time a single inhomogeneous tangent that is orthogonal to the unstable subspace everywhere along an orbit. This drastically reduces the computational complexity of the full algorithm.&#13;
&#13;
Every major step of the theoretical and algorithmic developments is corroborated by several numerical examples. These examples also highlight aspects of the underlying dynamical systems (e.g., ergodic measure distributions, Lyapunov spectra, and spatiotemporal structures of tangent solutions) that are relevant in the context of sensitivity analysis. This thesis considers different classes of chaotic systems, including low-dimensional discrete systems (e.g., the cusp map, baker’s map, and multi-dimensional solenoid map), ordinary differential equations (Lorenz oscillators), and partial differential equations (the Kuramoto-Sivashinsky and 3D Navier-Stokes systems).
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151245</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supercharging Programming through Compiler Technology</title>
<link>https://hdl.handle.net/1721.1/151243</link>
<description>Supercharging Programming through Compiler Technology
Moses, William S.
The decline of Moore’s law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain experts to learn how to customize programs to efficiently leverage the latest platform-specific APIs and data structures, instead of working on their intended problem. For example, a researcher hoping to use machine learning on climate code must write a corresponding derivative simulation, understand and implement linear algebra routines, and performance-engineer their simulation to run on multiple cores and nodes. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated.&#13;
&#13;
This thesis demonstrates this approach through several real-world and composable compilers that I built for a variety of domains, including parallelism, automatic differentiation, scheduling, portability, program search, and tensor arithmetic. These domains are critical to both scientific computing and machine learning. Individually, the integration of domain knowledge into each of these compilers enables (often asymptotic) performance and usability benefits. Operating on a common compiler representation, however, enables these benefits to compound and provide greater performance than any domain-specific optimization in isolation.&#13;
&#13;
The research in this thesis contains joint work with Charles E. Leiserson, Tao B. Schardl, Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, Sven Verdoolaege, Andrew Adams, Albert Cohen, Qijing (Jenny) Huang, Ameer Haj-Ali, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek, Valentin Churavy, Lorenzo Chelini, Ruizhe Zhao, Ludger Paehler, Jan Hückelheim, Sri Hari Krishna Narayanan, Michel Schanen, Johannes Doerfert, Paul Hovland, Ivan R. Ivanov, Jens Domke, and Toshio Endo.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151243</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Theory and Practice in Parallel Clustering</title>
<link>https://hdl.handle.net/1721.1/151241</link>
<description>Bridging Theory and Practice in Parallel Clustering
Shi, Jessica
Large-scale graph processing is a fundamental tool in modern data mining, with wide-ranging applications in domains including social network analysis, bioinformatics, and machine learning. In particular, graph clustering, or community detection, is an important problem in graph processing that addresses tangible problems including fraud and threat detection, recommendation and search system design, and the detection of functional contributions of proteins and genes in biological systems. At its core, identifying the underlying substructures of a graph can indicate essential functional groups, such as people with similar interests, news articles on similar topics, or proteins with similar utilities, which can then be synthesized for a variety of applications. However, as the need to analyze larger and larger data sets increases, graph processing poses a major computational challenge, and designing scalable algorithms that can handle billions of edges while maintaining fast performance and high quality becomes crucial.&#13;
&#13;
This thesis addresses the challenges of designing highly scalable graph clustering solutions by bridging theory and practice in parallel algorithms. The thesis takes a multi-faceted approach: first, we develop algorithms with strong theoretical guarantees, which often translate to significant performance improvements in practice, and second, we use performance engineering techniques implemented on top of these theoretically efficient algorithms to achieve fast implementations on real-world data sets. The results are highly scalable and provably efficient algorithms for a broad class of computationally challenging graph clustering problems, and the first practical solutions to a number of problems on graphs with hundreds of billions of edges. Some of the implementations in this thesis are used in production environments in industry, and have significant real-world impacts.&#13;
&#13;
The first part of this thesis studies the efficient counting and enumeration of small subgraphs, including small cycles and cliques, which has applications in clustering metrics and graph statistics. We design new theoretically efficient parallel algorithms for exact and approximate butterfly (four-cycle), five-cycle, and k-clique counting, and demonstrate significant performance improvements over the prior state-of-the-art small subgraph counting implementations. This part of the thesis also provides algorithms for low out-degree orientations, which are crucial as subroutines in our counting and enumeration algorithms to reduce the required work. Notably, we are the first to report four-clique counts for the largest publicly available graph with over two hundred billion undirected edges. We also explore the batch-dynamic setting, in which graph properties are maintained over batches of multiple edge updates applied simultaneously, and present a novel parallel batch-dynamic data structure that we leverage in a wide variety of classic graph processing applications, including the k-core decomposition, low out-degree orientations, and k-clique counting. Importantly, our algorithm is the first parallel batch-dynamic algorithm for k-clique counting to achieve polylogarithmic span, or a polylogarithmic longest chain of sequential dependencies.&#13;
&#13;
The second part of this thesis addresses a class of problems relating to the discovery and classification of dense substructures within a graph, focusing on hierarchical decompositions that reveal structural properties of the underlying graph with different notions or levels of density. We address bi-core decomposition and butterfly peeling, which are specialized algorithms for bipartite graphs, and we study k-clique peeling and nucleus decomposition, which generalize classic decomposition algorithms to higher order structures. This part leverages our subgraph counting algorithms to present new theoretically efficient parallel algorithms for these decomposition problems, and shows that many of these problems are P-complete, suggesting that solutions that take polylogarithmic span are unlikely. We also explore approximation algorithms as a result, and conduct thorough experimental evaluations comparing speed and accuracy trade-offs between our exact and approximate implementations.&#13;
&#13;
The final part of this thesis focuses on highly scalable graph clustering algorithms that are effective in practice and give good quality clusters compared to ground truth data, considering a broad class of classification tasks. We study classic graph clustering algorithms including correlation clustering, modularity clustering, and hierarchical agglomerative clustering. This part develops heuristic and approximation algorithms for classic graph clustering objective functions, and additionally demonstrates important relationships between graph clustering algorithms and their counterparts in pointset clustering.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151241</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collaborating at the Tower of Babel: The Meaning of Cooperation and the Foundations of Long-Term Exchange</title>
<link>https://hdl.handle.net/1721.1/151239</link>
<description>Collaborating at the Tower of Babel: The Meaning of Cooperation and the Foundations of Long-Term Exchange
Volvovsky, Hagay
This dissertation is the first to propose and validate a general process by which exchange partners arrive at a shared understanding of what actions constitute cooperation (vs. defection) in changing and complex environments, thereby making cooperation possible. Achieving cooperation in the face of incentives to defect is essential in organizations and markets. Past research has focused on the payoff structure and the resulting risk of defection as determinants of cooperative outcomes but has failed to explain how exchange partners arrive at shared understandings of what actions constitute cooperation and defection, and why exchanges featuring the same payoff structure sometimes have different cooperative outcomes. I resolve this puzzle and explain how and when exchange partners can coordinate on the meaning of cooperation. I do so by advancing and testing a theory of shared coordination frameworks – developed through long-term exchange – that help exchange partners reach common interpretations of cooperation when unanticipated events inevitably occur. I then provide theoretical clarification for the prevalence of long-term exchange by demonstrating the causal primacy of shared frameworks in actors’ decisions to exchange with long-term partners. I validate these propositions using a novel experimental platform that is the first to manipulate participants’ coordination frameworks and disentangle their effect from the risk of defection and other correlates of long-term exchange. The results indicate that shared coordination frameworks dramatically increase the likelihood of successful cooperation in complex exchanges.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151239</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tectonostratigraphy of the Shyok Suture Zone in Ladakh, NW India.</title>
<link>https://hdl.handle.net/1721.1/151238</link>
<description>Tectonostratigraphy of the Shyok Suture Zone in Ladakh, NW India.
Martin, Craig R.
This thesis explores the timing and nature of the tectonic collision between India, Eurasia, and an intra-oceanic arc in the western Himalaya. Chapter One presents the results of a geological mapping campaign in the Shyok-Nubra confluence region of Ladakh, in northwest India. It provides lithological descriptions of the major geological units in the Shyok suture zone, and also compiles tectonostratigraphic correlations with exposures in northern Pakistan to develop a tectonic reconstruction of the southern Eurasian margin in Jurassic-Paleocene time. Chapter Two presents U-Pb zircon dates that constrain the age of the significant tectonic boundaries between the Eurasian margin and the Kohistan-Ladakh arc. Chapter Three presents paleomagnetic results that constrain the latitude of the Kohistan-Ladakh arc in the Paleocene. Chapter Four presents paleomagnetic results that constrain the latitude of the Eurasian margin in the Late Cretaceous and discusses the paleogeography of the India-Eurasia collision. This thesis constrains the age of the Shyok suture zone using multiple lines of evidence and shows that the Shyok suture zone records final closure of the ocean between India and Eurasia in the Eocene.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151238</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Games meet Concurrency: Algorithms and Hardness</title>
<link>https://hdl.handle.net/1721.1/151237</link>
<description>Games meet Concurrency: Algorithms and Hardness
Coulombe, Michael Joseph
Since the turn of the 21st century, seeing the decline of Moore’s Law on the horizon, the pursuit of continued software performance gains has led to the prominence of computer architectures with high degrees of parallelism and memory cache hierarchies. However, there are still many challenges to designing efficient algorithms and understanding the complexity of fundamental problems in these new models of computation. Given the similarities between concurrent systems of multiple agents and multiplayer games, this thesis analyzes a spectrum of models connecting these three fields and bridges the gaps between them by building upon techniques from the growing literature studying the complexity of games through gadget motion planning frameworks.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151237</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations of Machine Learning: Over-parameterization and Feature Learning</title>
<link>https://hdl.handle.net/1721.1/151236</link>
<description>Foundations of Machine Learning: Over-parameterization and Feature Learning
Radhakrishnan, Adityanarayanan
In this thesis, we establish and analyze two core principles driving the success of neural networks: over-parameterization and feature learning.  We leverage these principles to design models with improved performance and interpretability on various computer vision and biomedical applications. &#13;
&#13;
We begin by discussing the benefits of over-parameterization, i.e., using increasingly large networks that can perfectly fit training data.  While prior work characterized the benefits of over-parameterized networks for supervised learning tasks, we show that over-parameterization is also beneficial for unsupervised learning problems, such as autoencoding.  The ubiquitous advantage of using increasingly larger networks suggests that infinitely large networks should yield the best performance.  Remarkably, under certain conditions, training infinitely wide networks simplifies to training classical models known as kernel machines using the Neural Tangent Kernel (NTK).  We showcase the practical value of the NTK by deriving and using it for matrix completion problems such as image inpainting and virtual drug screening.  Additionally, we use the NTK connection to provide theoretical guarantees for deep neural networks.  Namely, we construct interpolating infinitely wide and deep networks that are Bayes optimal, or consistent, for classification.&#13;
&#13;
While the NTK has been a useful tool for understanding properties of deep networks, it lacks a key component that is critical to the success of neural networks: feature learning.  In the second part of this thesis, we identify and mathematically characterize the mechanism through which deep neural networks automatically select features, or patterns in data.  We show that neural feature learning occurs by re-weighting features based on how much they change predictions upon perturbation.  Our result explains various deep learning phenomena such as spurious features, lottery tickets, and grokking.  Moreover, the mechanism identified in our work provides a backpropagation-free method for feature learning with any machine learning model.  To demonstrate the effectiveness of this feature learning mechanism, we use it to enable feature learning in kernel machines. We show that the resulting models, referred to as Recursive Feature Machines, achieve state-of-the-art performance on tabular data.  &#13;
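The perturbation-based re-weighting mechanism described above can be illustrated with a deliberately simplified sketch (the thesis's Recursive Feature Machines operate on kernel machines; the ridge model and all names below are illustrative assumptions, not code from the thesis): estimate each feature's influence by how much a small perturbation of that feature changes predictions, which for a linear model recovers the magnitude of its weight.

```python
import numpy as np

def ridge_fit(X, y, reg=1e-3):
    # Closed-form ridge regression: w = (X^T X + reg*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

def feature_influence(predict, X, eps=1e-4):
    # Influence of feature j = mean |change in prediction| per unit perturbation.
    base = predict(X)
    infl = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        infl[j] = np.mean(np.abs(predict(Xp) - base)) / eps
    return infl

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 1.0 * X[:, 2]      # only features 0 and 2 matter
w = ridge_fit(X, y)
infl = feature_influence(lambda Z: Z @ w, X)
print(infl.round(2))                    # influence is ~[3, 0, 1, 0, 0]
```

Re-scaling the inputs by these influences and refitting would iterate the feature-selection step; here the irrelevant features receive near-zero influence while the predictive ones are weighted by their effect on the output.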
&#13;
Overall, this thesis advances the foundations of machine learning and provides tools for building new machine learning models that are computationally simple, interpretable, and effective.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151236</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Do Mandated Risk Disclosures Affect Corporate Risk-Taking?</title>
<link>https://hdl.handle.net/1721.1/151226</link>
<description>Do Mandated Risk Disclosures Affect Corporate Risk-Taking?
Yoon, Rachel
I examine whether mandated risk disclosures affect corporate risk-taking. I argue that mandated risk disclosures influence corporate risk-taking by mitigating risk-related agency conflicts and informing managers about their firms’ risks. Using the 2005 risk factor disclosure mandate as a setting, I predict and find that firms susceptible to shareholder-debtholder conflicts reduce their risk-taking after the mandate. I also show that firms whose managers underestimate the volatility of future outcomes reduce their risks after the mandate. To further investigate the mechanisms through which firms change risks, I exploit granular operational data from a sample of U.S. power plants. After the mandate, power plants with a high propensity for shareholder-debtholder conflicts or with inaccurate forecasts reduce exposure to risks by making several operational changes—geographically diversifying their operations, holding more fuel stock, expanding their supplier bases, etc. Collectively, my findings suggest that mandated risk disclosures influence corporate risk-taking behavior by reducing shareholder-debtholder conflicts and prompting managerial learning.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151226</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Structured Document To Structured Knowledge</title>
<link>https://hdl.handle.net/1721.1/151225</link>
<description>From Structured Document To Structured Knowledge
Qian, Yujie
Structured documents, such as scientific literature and medical records, are rich resources of knowledge. However, most natural language processing techniques treat these documents as plain text, neglecting the significance of layout structure and visual signals. Modeling such structures is essential for a comprehensive understanding of these documents. This thesis presents novel algorithms for extracting structured knowledge from structured documents.&#13;
&#13;
First, we propose GraphIE, an information extraction framework designed to model the non-local and non-sequential dependencies in structured documents. GraphIE leverages structural information through graph neural networks to enhance word-level tagging predictions. In evaluations across three extraction tasks, GraphIE consistently outperforms a sequential model that operates solely on plain text.&#13;
&#13;
Next, we delve into information extraction in the chemistry domain. Scientific literature often depicts molecules and reactions in the form of infographics. To extract these molecules, we develop MolScribe, a tool that translates a molecular image into its graph structure. MolScribe integrates symbolic chemistry constraints within an image-to-graph generation model, demonstrating robust performance in handling diverse drawing styles and conventions. To extract reaction schemes, we propose RxnScribe, which parses reaction diagrams through a sequence generation formulation. Despite being trained on a modest dataset, RxnScribe achieves strong performance across different types of diagrams.&#13;
&#13;
Finally, we introduce TextReact, a novel method that directly augments predictive chemistry with text retrieval, bypassing the intermediate information extraction step. Our experiments on reaction condition recommendation and retrosynthetic prediction demonstrate TextReact’s efficacy in retrieving relevant information from the literature and generalizing to new inputs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151225</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Turbulent Dynamics Under Two Ideal Invariants: Dynamic Phase Alignment in Plasmas and Nonionized Fluids</title>
<link>https://hdl.handle.net/1721.1/151221</link>
<description>Turbulent Dynamics Under Two Ideal Invariants: Dynamic Phase Alignment in Plasmas and Nonionized Fluids
Milanese, Lucio M.
Turbulent dynamics in the presence of two invariants is poorly understood in both plasmas and non-ionized fluids. The celebrated Kolmogorov model of turbulence considered energy as the only invariant of the system, but it was subsequently discovered that a second invariant exists, namely (generalized) helicity. We present results of numerical studies of turbulence in low-&#120573;ₑ plasmas at scales below the electron skin depth and turbulence in non-ionized fluids governed by the Navier-Stokes equations. In both systems, the dynamics is dominated by the presence of energy and (generalized) helicity as two exact invariants. We show that, in the two systems, both invariants are subject to a forward cascade, and we demonstrate that this joint cascade is made possible by a strong scale dependence of the Fourier phase alignment angle between fluctuations of electric and magnetic potential (in low-&#120573;ₑ plasmas) and between fluctuations of velocity and vorticity (in Navier-Stokes turbulence). This phenomenon, termed dynamic phase alignment, thus acquires importance as a mechanism regulating the dynamics in the presence of two invariants, arising from their conservation in the joint direct cascade, regardless of the details of the physical interactions.&#13;
&#13;
We further investigate the role of magnetic reconnection in the inverse transfer of magnetic energy in a low-&#120573; kinetic system. We show that the merging of magnetic islands into progressively larger structures can lead to energy being transferred from small to large scales within a model that captures finite electron skin depth and finite ion Larmor radius effects as well as Landau damping. We explore the effects of Landau damping and of the generation of plasmoids on the system dynamics.&#13;
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151221</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Dynamic Supply Chains and Service Delivery Systems</title>
<link>https://hdl.handle.net/1721.1/151220</link>
<description>Essays on Dynamic Supply Chains and Service Delivery Systems
Paine, James Edward
The field of Operations Management, together with the closely related fields of Operations Research and Industrial Engineering, focuses intensely on addressing real-world problems associated with the design and management of product and service delivery systems in a human context. System Dynamics is a framework to understand, design for, and manage change emerging from both structural and behavioral features, and is uniquely suited to address policy questions in socio-technical supply chain contexts. Using System Dynamics, Operations Management, and Supply Chain Research methods, this work expands on existing toolsets and theory and provides policy insights in dynamic supply chain and service delivery systems.&#13;
&#13;
Chapter 1 presents a methodological contribution to the System Dynamics and Supply Chain Research communities by developing a novel framework for supply chain models that combines three classic methods: co-flow differential equation structures, spot price discovery, and multinomial logistic choice modeling. Chapter 2 applies this framework to build a structural theory explaining the simultaneous surge in food insecurity alongside surges in food surplus and purposeful disposal at the beginning of the COVID-19 pandemic in the United States. Utilizing this structural theory, this chapter further illustrates policies that could help mitigate these stresses. Chapter 3 continues the theme of managing a behaviorally driven multi-echelon supply chain subject to shocks. Utilizing a simulated environment, different policy features implied by parallel streams of Operations Management and Supply Chain literature are directly tested. These include policies that range from myopic, limited-information decision rules to more modern, but data-intensive, machine learning methods.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151220</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Biomedical Innovation</title>
<link>https://hdl.handle.net/1721.1/151219</link>
<description>Essays on Biomedical Innovation
Greenblatt, Wesley H.
What scientists choose to study is fundamental to determining what innovations are produced.  Yet, rather than being solely determined by a scientist's idiosyncratic preferences, this choice is shaped in important ways by the incentives, organizational environment, and supporting institutions that surround a researcher.  This dissertation consists of three essays that use biomedicine as a setting to explore how institutions and organizational environment can shape the rate and direction of innovation.&#13;
&#13;
In the first essay, I examine how knowledge certification by professional medical society clinical practice guidelines shapes the use of extant knowledge.  Employing a difference-in-differences approach, I find that, after inclusion in a guideline, affected subfields of knowledge grow in size and scientific impact compared to controls carefully matched on observables.  Rather than the aperture of subsequent innovation narrowing, subfields shift towards exploration as they become more translational, more intellectually distant, more disruptive, and build on more diverse and less established prior research.&#13;
&#13;
In the second essay, I investigate how exposure to frontier research in a training program can alter the career trajectories of potential innovators.  I study the careers and innovative output of physicians who applied to the Associate Training Programs of the National Institutes of Health (NIH) during the turbulent period surrounding the Vietnam War.  I find program participants entered research-focused positions at higher rates and garnered more publications, citations and grant funding than synthetic controls.   In particular, the direction of their research efforts was durably imprinted with a distinct "translational" style of biomedical research that was characteristic of the NIH at the time.&#13;
&#13;
In the third essay, I study how a specific institution, grant peer review, shapes scientific risk taking.  While risk is an inherent aspect of innovation, those projects with high degrees of risk may be more likely to lead to breakthrough innovation yet may face challenges in winning the support necessary to be carried out.  I analyze R01-equivalent grants from the NIH, and after carefully controlling for investigator, grant and institution characteristics, find that grants with high levels of risk taking are renewed at lower rates.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151219</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Health Economics and Applied Econometrics</title>
<link>https://hdl.handle.net/1721.1/151213</link>
<description>Essays in Health Economics and Applied Econometrics
Cheng, Alden
Healthcare expenditures account for roughly 20 percent of the US economy, and this number is likely to remain high in the near future as baby boomers age into Medicare. Yet, in spite of high healthcare spending, quality of care varies widely across providers. Reasons for these discrepancies have yet to be explored fully, and given the increasing proliferation of rich healthcare datasets and econometrics tools, this presents an exciting avenue for research.&#13;
&#13;
In the first chapter of this thesis, I consider one potential explanation for why competition has not driven low-quality nursing homes out of the market or induced them to increase investments in quality: information frictions. I start by showing that there is huge variation in nursing home quality in California: value-added estimates (validated using a distance-based IV strategy) indicate that one standard deviation higher quality is associated with a 2 percent lower risk-adjusted 90-day mortality rate. Yet, despite the high stakes for consumers, structural demand estimates reveal that demand for quality is very low, with patterns of heterogeneity suggesting that information frictions are the most likely explanation. Finally, in counterfactual simulations, I find that eliminating information frictions may reduce nursing home deaths by 8 to 28 percent, and potentially even more if supply-side responses are taken into account.&#13;
&#13;
In the second chapter, I study the estimation of regression discontinuity and regression kink designs with multiple running variables (respectively, MRD and MRK), single-dimensional versions of which have been used for program evaluation in a wide range of settings. In MRD and MRK designs, individuals are assigned different treatments based on whether their values on several observed running variables exceed known thresholds, which aligns with eligibility criteria for numerous healthcare programs (for example, Medicaid eligibility depends on both an income and an asset threshold). However, there is relatively little econometric work studying MRD and essentially none studying MRK, so applied work often analyzes each running variable separately using single-dimensional methods. Here, I propose an MRD and MRK estimator that improves upon this applied practice in two ways: first by increasing precision, and second by allowing the researcher to estimate heterogeneous treatment effects. I establish theoretical results for this estimator, and demonstrate its performance in simulations as well as two empirical applications.&#13;
&#13;
In the final chapter, I study consumer behavior in the cigarette market. I find that consumers are less responsive to sales taxes compared to excise taxes, in line with a theory of tax salience (since only excise taxes are included in posted prices), but that consumers also appear to learn over time. These results highlight that when consumers are imperfectly informed, notionally equivalent policies may have very different effects as a consequence of small differences in the way they are implemented.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151213</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Climate Change Adaptation Planning and Decision Making for Transit Infrastructure</title>
<link>https://hdl.handle.net/1721.1/151212</link>
<description>Climate Change Adaptation Planning and Decision Making for Transit Infrastructure
Martello, Michael Vincent
Sea level rise (SLR) and associated increases in the frequency and intensity of coastal flooding pose a significant threat to coastal communities and the critical infrastructure upon which they rely. In coastal cities, rail transit systems are particularly sensitive to such increases in coastal flood exposure. Transit agencies, infrastructure managers, and planners are increasingly cognizant of these risks, yet current literature and practice lack methods for quantifying the costs of flooding for rail transit infrastructure. Consequently, current literature lacks methods for valuing transit-specific investments in climate change adaptation and provides little guidance on how transit agencies can decide between potential adaptation strategies, given prevailing uncertainties and their institutional priorities. &#13;
&#13;
This thesis aims to address these gaps in literature and practice by first developing a coupled hydraulic and hydrodynamic model to assess the severity of coastal flood exposure (i.e., flood depths) for rail rapid transit systems (inclusive of their underground spaces) under present and future SLR conditions. Relying on novel transit-specific depth-damage curves collected via structured expert judgement, we translate event-specific flood depth estimates into direct damage costs via a novel flood damage cost estimation framework, considering uncertainty and variability in damage outcomes. We next estimate how expected annualized losses (EAL) increase over time with uncertain future SLR. Recognizing that future  flood damage costs are less consequential than present damage costs, we propose a novel fair market value (FMV) discounting approach and demonstrate flood risk-related cash flows exhibit low correlation to the market and can therefore be discounted near the risk-free rate. Applying this finding in conjunction with the flood damage cost estimation model, we value investments in climate adaptation projects and explore the benefits of flexibly implementing proposed projects via real options analysis (ROA). Lastly, moving beyond valuation, we explore the tradeoffs inherent in planning adaptation investment across a rail rapid transit system given institutional priorities and objectives via a multi-criteria decision analysis (MCDA) framework.&#13;
&#13;
Applying these methods to the Massachusetts Bay Transportation Authority (MBTA) rail rapid transit network, we find unprotected tunnel portals responsible for a disproportionate volume of tunnel inflow. Hydraulic connectivity of the central tunnel network allows flooding from one ingress location to propagate across the network. Consequently, we find effective adaptation measures protecting the central tunnel must function cohesively as a system to ensure a uniform level of protection. Absent adaptation, we observe coastal flood risk (measured by EAL) has more than doubled from 2008 to 2022 and is expected to grow 16% annually over the next decade under all SLR scenarios. Through a case study considering a regional adaptation pathway proposed by Climate Ready Boston, we demonstrate significant benefits to MBTA rail transit infrastructure, particularly when investments follow a flexible implementation pathway. Lastly, through a Blue Line case study, we find implementation of shore-based measures to be more favorable than transit-specific adaptation alternatives, when considering sampled institutional priorities.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151212</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Electrochemical Nature of Non-Faradaic Catalysis at Interfaces</title>
<link>https://hdl.handle.net/1721.1/151211</link>
<description>The Electrochemical Nature of Non-Faradaic Catalysis at Interfaces
Wesley, Thejas S.
Electric fields are known to play an important role in many catalytic transformations throughout chemistry and biology. In principle, oriented electric fields could also be used to control thermochemical catalysis at solid-liquid interfaces. Yet, the role of electrical polarization in non-faradaic heterogeneous catalysis remains poorly understood, primarily for two reasons. &#13;
&#13;
First, it is difficult to monitor electrical polarization at interfaces that are not connected to macroscopic wires. Such “distributed” interfaces, which include solid catalysts (e.g., supported metal nanoparticles) distributed throughout packed-bed or stirred-tank reactors, underpin virtually all of heterogeneous catalysis. Second, mechanistic studies into the role of electric fields in heterogeneous catalysis have been hampered by the lack of a conceptual framework that rigorously captures the impact of electric fields on interfacial reaction kinetics in terms of experimentally measurable quantities. Yet, an experimentally accessible rate theory that rigorously captures electrostatic effects would enable deriving new molecular insight into the nature of interfacial reaction mechanisms from measurements of polarization-dependent reaction rates. &#13;
&#13;
Herein, we tackle both challenges in turn. In the first part of this thesis (Chapters 2 and 3), we develop and apply electrochemical methods to study non-faradaic catalysis at distributed (i.e., dispersed in solution), supported catalysts. In Chapter 2, we show how equilibrated spontaneous ion and/or electron transfer reactions from redox active species in solution may be harnessed to electrochemically polarize distributed catalytic interfaces in a predictive and controllable fashion during catalysis. We apply these methods to Pt/C-catalyzed ethylene hydrogenation in aqueous and aprotic organic solutions, and discover that electrochemical polarization plays a key role in governing the rate of this non-polar, non-faradaic reaction across vastly disparate reaction media. In addition, in Chapter 3, we apply infrared and ambient-pressure X-ray photoelectron spectroscopies to demonstrate that spontaneous polarization prevails even for metal nanoparticles supported on a nonconductive oxide host. Although such materials preclude wired connections to control the degree of interfacial polarization, our studies demonstrate that nonconductive catalysts may be electrochemically polarized in a predictable way simply by varying the solution composition (e.g., pH).&#13;
&#13;
In the second part of this thesis (Chapters 4 and 5), we investigate the molecular and thermodynamic origins of polarization effects in non-faradaic catalysis. In Chapter 4, we formally synthesize longstanding concepts across physical electrochemistry and chemical kinetics in order to clarify the role of electrostatics in catalysis at interfaces. Specifically, we recapitulate well-established concepts regarding the thermodynamics of adsorption at solid-liquid interfaces, and apply these results within the framework of classical, non-ideal transition state theory to analyze polarization-dependent surface reaction kinetics. This analysis shows how the electric field-dependence of interfacial reaction kinetics arises from partial charge transfer to form surface intermediates and transition states (i.e., the electrosorption valencies). This partial charge transfer endows non-faradaic elementary reactions with electrochemical character, and augments the integer charge equivalents transferred in faradaic elementary reactions.&#13;
&#13;
In Chapter 5, we build upon the studies performed in Chapter 2 with experiments investigating the polarization-dependent reaction kinetics of Pt/C-catalyzed ethylene and trans-2-butene hydrogenation. Our observations are all consistent with a model in which polarization of the Pt surface away from the local potential of zero free charge induces polarization-driven adsorption of polar solvent or charged ions near the interface that competes with the olefin for available catalyst sites. In this model, olefin adsorption on the surface displaces polar solvent and charged ions from the interface, which is compensated in an obligatory fashion by spontaneous partial charge transfer from the redox buffer. These results show that even non-faradaic elementary reactions involving negligibly polar species are electrochemical in nature because the surface reaction physically induces motion of polar or charged species near the interface during catalyst turnover.&#13;
&#13;
Taken together, the results of this thesis reveal that non-faradaic elementary surface reactions are in fact electrochemical in nature. It is our hope that the approaches and framework advanced here will provide a useful roadmap for further experimental investigations into the electrochemical nature of reactions at interfaces.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151211</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pseudo-determinism</title>
<link>https://hdl.handle.net/1721.1/151209</link>
<description>Pseudo-determinism
Grossman, Ofer
A curious property of randomized algorithms for search problems is that different executions on the same input may return different outputs, due to differences in the internal randomness used by the algorithm. We would like to understand how to construct randomized algorithms that, while still harnessing the power of randomness, return the same output on each execution with high probability when run multiple times on the same input.&#13;
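The contrast can be made concrete with the classic toy problem of generating a prime (an illustrative sketch, not an algorithm from this thesis): an ordinary randomized search returns a different prime on each run, whereas a variant that confines its randomness to the primality test returns one canonical prime with high probability.

```python
import random

def is_probably_prime(n, trials=20):
    # Miller-Rabin randomized primality test (error probability <= 4**-trials).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    # Ordinary randomized search: different runs return different primes.
    while True:
        c = random.randrange(2 ** (bits - 1), 2 ** bits) | 1
        if is_probably_prime(c):
            return c

def pseudo_deterministic_prime(bits):
    # Randomness is confined to the primality test, so with high probability
    # every run returns the same canonical answer: the least prime >= 2**(bits-1).
    c = 2 ** (bits - 1)
    while not is_probably_prime(c):
        c += 1
    return c

print(pseudo_deterministic_prime(16))  # same output on every run, w.h.p.
```

The harder settings studied below (NC matching, log-space search, streaming) require much subtler techniques, but the contrast between `random_prime` and `pseudo_deterministic_prime` captures the definition.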
&#13;
We first show a pseudo-deterministic NC algorithm for finding matchings in bipartite graphs. As a corollary, we also show a pseudo-deterministic NC algorithm for constructing DFS trees in graphs.&#13;
&#13;
We then show a reproducible algorithm for problems in search-RL. That is, we show an algorithm for problems in search-RL such that the output depends on only &#119874;(log &#119899;) of the random bits used by the algorithm. We also show a fast pseudo-deterministic log-space algorithm for finding paths in undirected graphs; the algorithm is much faster than deterministic log-space algorithms for the problem.&#13;
&#13;
Next, we investigate pseudo-determinism in the context of streaming algorithms. We show both lower and upper bounds for some classic streaming problems. Most notably, we show that the problem of approximate counting in a stream (for which the well-known algorithm of Morris gives an &#119874;(log log &#119899;)-space algorithm) has no pseudo-deterministic algorithm using space &#119900;(log &#119899;). &#13;
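Morris's counter can be sketched in a few lines (a generic illustration of the classic algorithm, not code from the thesis): the counter stores only an exponent x and increments it with probability 2^-x, so x ≈ log₂ n fits in O(log log n) bits while 2^x − 1 remains an unbiased estimate of the true count n.

```python
import random

def morris_count(n_events):
    """Morris's approximate counter: store only an exponent x,
    so O(log log n) bits suffice to count n events approximately."""
    x = 0
    for _ in range(n_events):
        if random.random() < 2.0 ** -x:  # increment with probability 2^-x
            x += 1
    return 2 ** x - 1  # unbiased: E[2^x] = n + 1

# Averaging independent runs reduces the (large) variance of a single estimate.
runs = [morris_count(10_000) for _ in range(30)]
print(sum(runs) / len(runs))  # typically within a few thousand of 10,000
```

A single run has high variance, which is exactly why the output varies between executions; the lower bound above says this variability cannot be removed pseudo-deterministically without paying Ω(log n) space.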
&#13;
Finally, we examine an extension of pseudo-determinism to the context of interactive proofs.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151209</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Public Finance and Information Economics</title>
<link>https://hdl.handle.net/1721.1/151205</link>
<description>Essays on Public Finance and Information Economics
Medeiros Sztutman, André
This thesis comprises three chapters on public finance and information economics. The first focuses on the interaction of imperfect information in labor markets and the tradeoffs the government faces when setting non-linear taxes. The second focuses on the role of heterogeneity in elasticities in affecting those tradeoffs. The third focuses on the imperfect information in financial markets and on how to design disclosure rules to increase the size of gains from trade in lending markets when these markets are adversely selected. &#13;
&#13;
The first chapter asks how optimal taxes are affected by reputation building and imperfect information in labor markets. To answer that question, I build a model of labor markets with incomplete and asymmetric information where job histories play a crucial role in transmitting information about workers' productivity, which allows us to better understand the efficiency and distributive consequences of imperfect monitoring and screening in labor markets, and the tradeoffs the government faces when setting taxes. Optimal taxes are described by generalized versions of standard redistributive and corrective taxation formulas, which depend crucially on labor wedges: the marginal contribution to output relative to the increases in lifetime earnings that result from supplying one extra unit of labor at each period. Using data from the Health and Retirement Study, I find that the corrective component of taxes is likely to be large, especially at the top of the income distribution. &#13;
	&#13;
The second chapter (joint with John Sturm) asks how income taxes should account for heterogeneity in elasticities of taxable income. We address this question with a test that passes if and only if there exists a weighted utilitarian planner for whom taxes are locally optimal. Our test incorporates standard sufficient statistics and a novel ingredient: the variance of elasticities conditional on income. Theoretically, we show that the test fails when these variances are sufficiently high. Empirically, we find they are indeed large in a panel of US tax returns. We thereby conclude, without taking a stance on redistributive preferences, that there are welfare-improving tax reforms. &#13;
	&#13;
The increasing availability of data in credit markets may appear to make adverse selection concerns less relevant. However, when there is adverse selection, more information does not necessarily increase welfare. The third chapter (joint with Robert M. Townsend and Nicole Immorlica) provides tools for making better use of the data that is collected from potential borrowers, formulating and solving the optimal disclosure problem of an intermediary with commitment that seeks to maximize the probability of successful transactions, weighted by the size of the gains of these transactions. We show that any optimal disclosure policy needs to satisfy some simple conditions in terms of local sufficient statistics. These conditions relate prices to the price elasticities of the expected value of the loans for the investors. Empirically, we apply our method to data from the Townsend Thai Project -- a long panel dataset with rich information on credit histories, balance sheets, and income statements -- to evaluate whether it can help develop rural credit markets in Thailand, finding economically meaningful gains from adopting limited information disclosure policies.&#13;
&#13;
JEL Classification: H2, D8, J2, I3, G2.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151205</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Labor Economics</title>
<link>https://hdl.handle.net/1721.1/151204</link>
<description>Essays in Labor Economics
Uccioli, Martina
This dissertation studies two distinct issues in the field of labor economics: the labor supply of new mothers and firms' adjustments to changing labor costs. In both cases, I study the effect of labor market policies, both because they provide quasi-exogenous variation in otherwise endogenous variables of interest, and because of the intrinsic interest in studying the welfare implications of specific policies that governments have direct control over.&#13;
&#13;
The first two chapters, written jointly with Ludovica Ciasullo, consider how maternal labor supply is impacted by working conditions, and how it in turn affects intrahousehold bargaining and task allocation within the household. &#13;
&#13;
In the first chapter we study which work arrangements new mothers choose when allowed to do so, and whether these work arrangements affect their labor supply choices. We exploit the Australian 2009 Fair Work Act, which explicitly entitled parents of young children to request a (reasonable) change in work arrangements. Leveraging variation in the timing of the law, the timing of childbirth, and the bite of the law across different occupations and industries, we establish two main results. First, when allowed to request a change in work arrangements, new mothers ask for regularity in their schedule. Second, with regular schedules, working mothers' child penalty declined from a 47 percent drop in hours worked to a 40 percent drop. For the most exposed mothers, the Fair Work Act led to both a doubling in schedule regularity and a 30 percent decrease in the child penalty in hours of work.&#13;
&#13;
After establishing that an increase in schedule regularity leads to an increase in maternal labor supply, in the second chapter we study how this translates into division of labor within the household. First, we document that at baseline children bring a 40% increase in their parents' active time -- that is, total time spent on paid work, housework, or parenting -- and that this increase falls disproportionately on mothers, by a 2-to-1 ratio. Second, by exploiting the improvement in maternal labor market conditions brought about by the Australian 2009 Fair Work Act, we show that this gendered allocation of time is not affected by improved labor market prospects for women. Finally, we show that mothers who work longer hours reduce housework, but not time spent directly with children, mitigating concerns that maternal participation in the labor market comes at their children's expense.&#13;
&#13;
The third chapter, written jointly with Andrea Manera, focuses on how labor costs -- via the stringency of labor regulations -- influence firms’ innovation choices. We study the impact of employment protection legislation (EPL) on firms’ innovation through an event-study analysis of labor market reforms occurring in Europe over 2000-2016. Data from the Community Innovation Survey reveal that substantial drops in EPL for temporary workers prompt a reallocation of innovation towards the introduction of new products, away from process innovation aimed at cutting labor costs. Among innovative firms, the share of product innovators increases by 15% of the pre-reform value, while the share of firms specializing in process innovation falls by 35%. We develop a theoretical framework of directed technical change to rationalize our findings.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151204</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards causality in gene regulatory network inference</title>
<link>https://hdl.handle.net/1721.1/151203</link>
<description>Towards causality in gene regulatory network inference
Wu, Alexander Po-Yen
Understanding the coordination of biomolecules that underlies gene regulation is key to gaining mechanistic insights into cellular functions, phenotypes, and diseases. Advances in single-cell technologies promise to unveil mechanisms of gene regulation at unprecedented resolution by enabling measurements of genomic and/or epigenetic features for individual cells. However, unlocking insights from single-cell data requires algorithmic innovations. &#13;
&#13;
This thesis introduces a series of methods for uncovering gene regulatory relationships underlying cellular identity and function from single-cell data. Firstly, we present a framework for enhancing the detection of statistical associations in small-sample-size settings for gene regulatory network inference. We then describe the use of single-cell genetic perturbation screens for determining the causal roles of critical regulatory complexes, focusing specifically on their applications for revealing mechanistic insights into the mammalian SWI/SNF family of chromatin remodeling complexes.&#13;
&#13;
To bridge the gap between methods that identify statistical associations from observational data and those that infer causal relationships using interventions, we also introduce a new category of techniques that extends the econometric concept of Granger causality to complex graph-based dynamical systems, such as those found in single-cell trajectories. In particular, we describe a graph neural network-based generalization of Granger causality for single-cell multimodal data that enables the detection of noncoding genomic loci implicated in the regulation of specific genes. We then demonstrate how we use this approach to link genetic variants to gene dysregulation in disease, focusing on its applications to schizophrenia etiology. Lastly, we present an extension of this graph-based Granger causal framework that leverages RNA velocity dynamics for causal gene regulatory network inference and enables inquiries into the role of temporal control in gene regulatory function and disease.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151203</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Subcellular Resolution Mapping of the Human Brain</title>
<link>https://hdl.handle.net/1721.1/151202</link>
<description>Scalable Subcellular Resolution Mapping of the Human Brain
Guan, Webster
A detailed understanding of the anatomical and molecular architectures of cells and their system-wide connectivity is essential for interrogating system function and dysfunction. Extensive efforts have been made toward characterizing cells through various omics approaches, and these have established invaluable databases yielding new insights. However, we still lack technologies for mapping the connectivity as well as the molecular details of individual cells in the human nervous system at both microscopic and human brain-wide scales. This thesis aims to develop chemical and computational technologies as part of a fully integrated technology platform for simultaneously extracting spatial, molecular, morphological, and connectivity information of individual cells from the same brain at single-fiber resolution. We accomplished this by seamlessly integrating new chemical, mechanical, and computational tools to enable 3D multi-scale proteomic reconstruction of human organ tissues.&#13;
&#13;
To address the challenge of slow and nonuniform fluorescent labeling of tissues in 3D, we developed tissue-hydrogel transformation technologies known as ELAST and mELAST that transform previously weak and sometimes brittle brain tissue into tough, stretchable, and elastic tissue-gel hybrids. In the case of mELAST, the tissues can also be reversibly expanded to enhance transparency and magnification to visualize finer structures. Through repeated thinning via cyclic dynamic compression loading, these tough tissues can be stained significantly faster and more uniformly than passively stained, conventionally preserved samples. Furthermore, their toughness allows them to be de-stained and re-stained many times to enable highly multiplexed spatial omics.&#13;
&#13;
To understand the antibody transport mechanisms and potential improvements to compression staining protocols, we developed a computational model for solute transport during high-strain dynamic loading of elastic tissue-gels. This fundamental study showed that while thinning was the main transport mechanism, increases in convective flow within the tissue could also have significant effects on staining uniformity. Using the model, we identified the best ways to modulate material properties, reaction kinetics, and the cyclic compression loading schedule for enhanced staining uniformity, while also identifying theoretical methods to significantly improve transport over typical protocols by increasing convection.&#13;
&#13;
Because our human brain mapping pipeline still requires tissue slicing due to chemical and optical limitations, we developed a computational pipeline termed UNSLICE to reconstruct the 3D connectivity of neural fibers across multiple brain slabs at the macroscopic (whole-brain) and microscopic (single-axon) scales. By using blood vessels, astrocytic fibers, and neuronal fibers including axons, single-fiber-resolution reconstruction of sliced tissues can be achieved using UNSLICE, enabling tracing and downstream connectivity analysis. Using the combined technology platform, we analyzed a case study of Alzheimer’s Disease (AD) pathology at multiple scales, from overall cytoarchitecture to individual synapses. Finally, we demonstrated, for the first time, the feasibility of scalable neural connectivity mapping in the human brain, establishing a path for probing brain connectivity and its alterations in disease.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151202</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifunctional soft robotic devices and cardiac benchtop models for improved therapy delivery and development</title>
<link>https://hdl.handle.net/1721.1/151201</link>
<description>Multifunctional soft robotic devices and cardiac benchtop models for improved therapy delivery and development
Mendez, Keegan Leigh
The goal of this thesis research is to improve patient-specific therapy by designing and developing (1) a suite of multifunctional implantable therapy delivery devices with dynamic drug delivery capabilities, and (2) a high-fidelity benchtop model that recreates cardiac anatomy and physiology. Part 1 involves the development of implantable, soft robotic devices that can dynamically respond to their local in vivo environment to trigger or modify therapy delivery according to patient-specific anatomy and patient-specific physiological signals. Part 2 involves the development of benchtop models of the left atrium and left atrial appendage that enable device testing, development, and procedural training, and replicate (patho)physiological hemodynamics and motion. Taken together, this thesis research aims to create new therapeutic approaches that allow for patient-specific therapy delivery in response to each patient’s precise needs, and to support the development of future therapeutic approaches through the creation of new simulators that enable more personalized device testing and validation before deployment in a clinical setting.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151201</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Computer Programs: Computational and Cognitive Perspectives</title>
<link>https://hdl.handle.net/1721.1/151200</link>
<description>Understanding Computer Programs: Computational and Cognitive Perspectives
Srikant, Shashank
In this thesis, I study the understanding of computer programs (code) from two perspectives: computational and cognitive. I ask what the human bases of understanding code are, and attempt to determine whether computational models trained on code corpora (also known as code models) share similar bases.&#13;
&#13;
From the computational perspective, I start by proposing a framework to test the robustness of the information learned by code models (chapter 2). This establishes a baseline measure for how well models comprehend code. I then describe techniques for improving the robustness of these models while retaining their accuracy (chapter 3). I then propose a way forward for code models to learn and reason about concurrent programs from their execution traces (chapter 4). In doing so, I also demonstrate the limitations of heuristics developed over the past four decades for detecting data races in concurrent programs, highlighting the need for evaluating these heuristics further.&#13;
&#13;
From the cognitive perspective, I study how our brains comprehend code, using fMRI to analyze programmers’ brains (chapter 5). I show that our brains encode information about comprehended code similarly to how code models encode that information (chapter 6). I show how the framework I develop in chapter 2 can be used to automatically generate stimuli for experiments in psycholinguistics and cognitive neuroscience (chapter 7), which can improve our understanding of how our minds and brains comprehend programs. Finally, I propose a probabilistic framework which models the mechanism of finding important parts of a program when comprehending it (chapter 8).
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151200</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Feedback Effects of Transient Nuclear Systems Using Monte Carlo</title>
<link>https://hdl.handle.net/1721.1/151199</link>
<description>Modeling Feedback Effects of Transient Nuclear Systems Using Monte Carlo
Kreher, Miriam A.
Monte Carlo neutron transport is the gold standard for accurate neutronics simulation of nuclear reactors in steady state because each term of the neutron transport equation can be directly tallied using continuous-energy cross sections, rather than requiring approximations in energy, angle, or geometry. However, the time-dependent equation includes time derivatives of the flux and of delayed neutron precursors, which are difficult to tally. While it is straightforward to explicitly model delayed neutron precursors, and thus solve the time-dependent problem in Direct Monte Carlo, this approach is so costly that the practical length of transient calculations is limited to about 1 second. In order to solve longer problems, a high-order/low-order approach was adopted that uses the omega method to approximate the time derivatives as frequencies. These frequencies are spatially distributed and provided by a low-order Time Dependent Coarse Mesh Finite Difference diffusion solver. While this scheme has previously been applied to prescribed transients, thermal feedback is now incorporated to provide a fully self-propagating Monte Carlo transient multiphysics solver which can be applied to transients several seconds long.&#13;
&#13;
Several recently developed techniques are used in the implementation of the proposed coupling approaches. Firstly, underrelaxed Monte Carlo, which is a steady-state technique that stabilizes the search for temperature distributions, is applied to find initial conditions. Secondly, tally derivatives are a Monte Carlo perturbation technique that can identify how a tally will change with respect to a small change in the system. Test problems of varying complexity are carried out in flow-initiated transients to show the versatility of these methods.&#13;
&#13;
Overall, this multi-level, multiphysics, transient solver provides a bridge between high fidelity Monte Carlo neutronics and the fast multi-group diffusion methods that are currently used in safety analysis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151199</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of German peasant emigration to the United States 1815-1914: a reexamination of some behavioral hypotheses in migration theory.</title>
<link>https://hdl.handle.net/1721.1/151153</link>
<description>Aspects of German peasant emigration to the United States 1815-1914: a reexamination of some behavioral hypotheses in migration theory.
Inoki, Takenori.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1974; Vita.; Bibliography: leaves 253-274.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151153</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ferritin species and metabolism in striated muscle.</title>
<link>https://hdl.handle.net/1721.1/151143</link>
<description>Ferritin species and metabolism in striated muscle.
Vulimiri, Lakshmi.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151143</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effective stack heights for tall stacks.</title>
<link>https://hdl.handle.net/1721.1/151035</link>
<description>Effective stack heights for tall stacks.
Weil, Jeffrey Charles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151035</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laser spectroscopy of alkaline earth oxide flames and deperturbation of diatomic molecular spectra.</title>
<link>https://hdl.handle.net/1721.1/151024</link>
<description>Laser spectroscopy of alkaline earth oxide flames and deperturbation of diatomic molecular spectra.
Gottscho, Richard Alan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151024</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fatty-acyl amidases from the slime mold Dictyostelium discoideum specific for bacterial lipopolysaccharide</title>
<link>https://hdl.handle.net/1721.1/151021</link>
<description>Fatty-acyl amidases from the slime mold Dictyostelium discoideum specific for bacterial lipopolysaccharide
Verret, Charles Joseph Reynold.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1982; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151021</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The application of Friedel and Craft condensation to non-benzenoid hydrocarbons</title>
<link>https://hdl.handle.net/1721.1/151018</link>
<description>The application of Friedel and Craft condensation to non-benzenoid hydrocarbons
Prasad, Ram.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemistry, 1922
</description>
<pubDate>Sun, 01 Jan 1922 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151018</guid>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the secondary motion induced by oscillations in a shear flow.</title>
<link>https://hdl.handle.net/1721.1/151017</link>
<description>On the secondary motion induced by oscillations in a shear flow.
Benney, David J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1959; Vita.; Includes bibliographical references (leaf 54).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151017</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On a driving mechanism for galactic spirals.</title>
<link>https://hdl.handle.net/1721.1/151014</link>
<description>On a driving mechanism for galactic spirals.
Feldman, Stuart Irwin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Bibliography: leaves 125-127.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151014</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of particle size on the properties of casting slips.</title>
<link>https://hdl.handle.net/1721.1/150971</link>
<description>The effect of particle size on the properties of casting slips.
Kocatopcu, Şahap Şefkati.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, Division of Ceramics, 1945; Vita.; Bibliography: leaves 91-156.
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150971</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abstract analysis and optimization of Scheme</title>
<link>https://hdl.handle.net/1721.1/150966</link>
<description>Abstract analysis and optimization of Scheme
Ayers, Andrew Edward.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references (p. 171-176).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150966</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation on dehydration and canning of carp</title>
<link>https://hdl.handle.net/1721.1/150964</link>
<description>Investigation on dehydration and canning of carp
Bose, A. N.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Food Technology, 1946; Vita.; Bibliography: leaves 1-3.
</description>
<pubDate>Tue, 01 Jan 1946 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150964</guid>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>General equilibrium and government action under uncertainty.</title>
<link>https://hdl.handle.net/1721.1/150963</link>
<description>General equilibrium and government action under uncertainty.
Fischer, Joram.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1973; Vita.; Bibliography: leaves 204-208.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150963</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Problems in confirmation theory.</title>
<link>https://hdl.handle.net/1721.1/150962</link>
<description>Problems in confirmation theory.
Teller, Paul Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Humanities, 1969; Vita.; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150962</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An illusion of power; military conscription as a dilemma of liberal democracy in Great Britain, the United States, and France.</title>
<link>https://hdl.handle.net/1721.1/150961</link>
<description>An illusion of power; military conscription as a dilemma of liberal democracy in Great Britain, the United States, and France.
Feldman, Elliot J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1973; Bibliography: leaves 897-915.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150961</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low temperature lattice instability in single and polycrystalline ZrV2.</title>
<link>https://hdl.handle.net/1721.1/150960</link>
<description>Low temperature lattice instability in single and polycrystalline ZrV2.
Levinson, Mark.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150960</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Patterns and processes driving chromosome organization</title>
<link>https://hdl.handle.net/1721.1/150915</link>
<description>Patterns and processes driving chromosome organization
Abraham, Sameer
In this thesis, we focus on different approaches to studying the organization of chromosomes within the nucleus of cells. We employ both genomic analysis of chromosome interaction maps and polymer simulations to answer various questions that are relevant to the field.&#13;
&#13;
The first two chapters of this thesis are centered around genomic analysis of interaction maps generated from Chromosome Conformation Capture (3C) technologies. We begin by analyzing data generated using a novel Micro-C protocol and assess its performance in comparison to established Hi-C. New computational tools are developed to extract, quantify, and compare patterns detected in both techniques. We find that Micro-C can accurately recapitulate the patterns of interactions found in Hi-C. In addition, evidence for nucleosome-scale structure is detected in the data.&#13;
&#13;
Following this, the scope of the meta-analysis is expanded. We compare over 70 different human Hi-C and Micro-C libraries that vary in the biochemical parameters used in data generation. We extract trends that relate the protocol parameters to the observed patterns of enrichment found in the data. We find that libraries generated with a high degree of fragmentation are better at capturing fine-scale organization, while those with larger fragments excel at capturing larger patterns and structures.&#13;
&#13;
In the final chapter, we explore the dynamic changes in chromosome organization through the early stages of cell division. We analyze experimental Hi-C data from DT-40 chicken cells and uncover the role of Condensin in disassembling interphase chromatin structure during prophase. We develop a model for prophase condensation and explore different interactions between loop-extruding Cohesins and Condensins. We find that non-trivial interactions between these complexes are required to accurately capture the dynamics of the data. Our findings extend the model of loop extrusion and highlight the role of interactions between SMC complexes in organizing chromosomes.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150915</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A broadened HLA ligandome uncovers new immunotherapy targets for pancreatic cancer&#13;
+&#13;
A prime editor mouse to model a broad spectrum of somatic mutations</title>
<link>https://hdl.handle.net/1721.1/150914</link>
<description>A broadened HLA ligandome uncovers new immunotherapy targets for pancreatic cancer&#13;
+&#13;
A prime editor mouse to model a broad spectrum of somatic mutations
Ely, Zackery A.
Pancreatic cancer is a lethal malignancy recalcitrant to immune checkpoint blockade and other immunotherapies. A subset of tumors is computationally predicted to harbor potentially immunogenic peptides for MHC class I (MHC-I) presentation, but the nature, expression, and immunogenicity of these peptides have yet to be determined. The only prior study of the pancreatic cancer immunopeptidome focused on profiling MHC-I-associated peptides (MAPs) from canonical proteins in bulk tumor samples; however, non-malignant cell populations comprise most of the pancreatic tumor mass, obscuring the identity of MAPs that derive specifically from cancer cells. In the second chapter of this thesis, I resolve this challenge through extensive profiling of patient-derived organoids with whole-genome sequencing, RNA sequencing, and immunopeptidomics. These data enable a proteogenomics approach that tailors MAP identification to each individual patient sample. Harnessing this platform, my colleagues and I uncovered a diverse cohort of MAPs derived from somatic mutations and transcript isoforms that are functionally unexpressed in most or all healthy tissues. These include MAPs derived from novel, unannotated open reading frames (nuORFs) present within long noncoding RNAs, processed transcripts, and 5’ and 3’ untranslated regions. We found that cytotoxic T cells specific to nuORF-derived MAPs can be readily generated from peripheral blood mononuclear cells of healthy donors. This result highlights the immunogenicity of nuORF-derived MAPs and establishes them as promising targets for immunotherapies in pancreatic cancer.&#13;
&#13;
In Chapter 3, I report the development of a genetically engineered mouse model (GEMM) for performing prime editing in vivo. This system represents a rapid alternative to traditional cancer mouse models, which often take months or years to develop. Through a Cre-inducible prime editor enzyme encoded in the mouse germline, prime editor GEMMs can mediate rapid and precise engineering of most cancer mutations, including many that are challenging or infeasible to achieve with other CRISPR technologies. We demonstrate the utility of this system by mediating secondary Kras mutations and common Trp53 hotspot mutations in model-derived pancreatic organoids. Finally, we model lung and pancreatic cancer in vivo using lentiviral delivery of prime editing guide RNAs or orthotopic transplantation of prime-edited organoids. We anticipate that prime editor GEMMs will accelerate preclinical functional studies of cancer-associated alleles that are challenging to model by traditional approaches.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150914</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-particle mechanism for unconventional superconductivity with potential applications in moiré materials</title>
<link>https://hdl.handle.net/1721.1/150913</link>
<description>Three-particle mechanism for unconventional superconductivity with potential applications in moiré materials
Crépel, Valentin (Valentin Didier Marie Claude)
A novel electronic mechanism for superconductivity is introduced, wherein pairing between two conduction electrons is mediated by the virtual motion of a third one originating from the valence band. Relying on interband polarization rather than singularities near the Fermi level, this mechanism bypasses the limitations that other electronic models for superconductivity encounter at low charge carrier concentrations. As a result, it is able to capture the full crossover from Bardeen-Cooper-Schrieffer behaviors to the Bose-Einstein condensation of pairs, which has recently attracted a lot of experimental attention. Furthermore, the pairing potential obtained is non-retarded and non-perturbative in the Coulomb interaction strength, leading to strong coupling behaviors and, in particular, to large critical temperature to Fermi energy ratios. Finally, we argue that this mechanism should apply to some moiré materials, but also to ZrNCl and WTe₂, for which it predicts an experimentally observable spin-triplet order parameter.&#13;
&#13;
Faithfully describing two of the most important classes of unconventional superconductors – dilute superconductors with large critical temperature to Fermi energy ratios and spin-triplet superconductors – the so-called “three-particle” mechanism for superconductivity studied throughout this thesis appears pivotal to future research on high-&#119879;꜀ and topological superconductors.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150913</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Proxy Paradox: Explaining (Lack of) Control Over State-Sponsored Proxy Armed Groups</title>
<link>https://hdl.handle.net/1721.1/150912</link>
<description>The Proxy Paradox: Explaining (Lack of) Control Over State-Sponsored Proxy Armed Groups
Plana, Sara Cristina
This dissertation uncovers and answers three puzzles about how states that sponsor non-state armed groups abroad (proxies) control them. The central puzzle of state-proxy relations asks: why would a proxy defy the directives of its state sponsor and risk losing support? The second puzzle is that state sponsors do not always use all the tools of control at their disposal to influence their proxies. In assessing the effectiveness of these tools, this dissertation revealed its third puzzle: some tools of control that state sponsors employ—namely promises and threats—sometimes do not work as intended. The theory developed in this dissertation makes sense of these puzzles. It argues that the goal a state sponsor pursues through a proxy against their shared enemy determines how dependent the state is on bolstering or limiting its proxy’s capability, affecting both which tools of control it will use and how effective those tools are at motivating and restraining proxies. Challenging the view of an unavoidable tradeoff between control and effectiveness, this dissertation shows that some tools of control bolster and others hamper proxy capability, and that not all states are interested in maximizing the proxy’s effectiveness—because sometimes states sponsor proxies for limited aims that require limiting that effectiveness. Ultimately, the dissertation finds that state sponsors have an easier time motivating proxies than restraining them. State sponsors can use carrots and sticks to motivate proxies to conduct costly actions, but state sponsors struggle to make the sticks necessary to restrain proxies credible and severe enough to deter them from tempting misbehavior. 
These findings are based on multi-method case studies of relationships between the United States and armed groups in the Syrian civil war from 2013 to 2020, combining descriptive statistics of original datasets of these groups’ compliance records and process-tracing of interviews with direct actors and thousands of pages of news and statements from the US and group representatives. Alongside shedding light on oft-secretive state-proxy relations, this dissertation informs scholarly and policy debates about military intervention into civil wars, international wartime cooperation, and conflict escalation and resolution.&#13;
&#13;
Note: The views expressed in this publication are those of the author and do not necessarily reflect the official policy or position of the Department of Defense or the U.S. government. The public release clearance of this publication by the Department of Defense does not imply Department of Defense endorsement or factual accuracy of the materials.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150912</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The steric courses of chemical reactions.</title>
<link>https://hdl.handle.net/1721.1/150897</link>
<description>The steric courses of chemical reactions.
Klemperer, Walter George.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1973; Fifteen unnumbered leaves inserted. Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150897</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two-dimensional nonrecursive digital filters.</title>
<link>https://hdl.handle.net/1721.1/150896</link>
<description>Two-dimensional nonrecursive digital filters.
Fiasconaro, James Gerard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150896</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The formation of nitric oxide from organic nitrogen contained in fossil fuels.</title>
<link>https://hdl.handle.net/1721.1/150894</link>
<description>The formation of nitric oxide from organic nitrogen contained in fossil fuels.
Flagan, Richard C., 1947-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Number 97 omitted from paging. Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150894</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fractional parentage coefficients for f-shell electrons</title>
<link>https://hdl.handle.net/1721.1/150885</link>
<description>Fractional parentage coefficients for f-shell electrons
Nielson, Clair Worley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1962; Vita. Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 132-133).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150885</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of food phytates on the absorption of radioactive calcium in human beings</title>
<link>https://hdl.handle.net/1721.1/150884</link>
<description>The effect of food phytates on the absorption of radioactive calcium in human beings
Bronner, Felix.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Food Technology, 1952; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150884</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The geometry of homomorphism-valued inner products.</title>
<link>https://hdl.handle.net/1721.1/150882</link>
<description>The geometry of homomorphism-valued inner products.
King, Paul Lewis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Bibliography: leaf 37.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150882</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconductivity in niobium films.</title>
<link>https://hdl.handle.net/1721.1/150877</link>
<description>Superconductivity in niobium films.
Siuta, Vincent Paul.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1966; Bibliography: leaves 76-79.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150877</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering topology and correlation in epitaxial thin film kagome metals</title>
<link>https://hdl.handle.net/1721.1/150770</link>
<description>Engineering topology and correlation in epitaxial thin film kagome metals
Han, Minyong
Along with the Landau paradigm of phase transitions, topology has been recognized as an important metric of classifying condensed matter systems, offering successful descriptions for systems with nontrivial energy gaps or protected band degeneracies. As much as topology has risen in significance, the study of correlation has also provided crucial insights for a number of many-body phenomena realized when the inter-particle interactions dominate the kinetic energies of individual constituents. The kagome lattice is a generalized lattice model whose characteristic atomic arrangement produces both topological Dirac bands and correlated flat bands. In search of its material realizations, kagome metals, a class of intermetallics containing the two-dimensional kagome networks of transition metal atoms, have shown promise in faithfully manifesting the original lattice model in their bulk electronic structures. &#13;
&#13;
In this thesis, we present our works on engineering topology and correlation in two representative kagome metals, FeSn and Ni₃In, stabilized in epitaxial thin film form. We characterize via transport, thermodynamic, and spectroscopic probes the emergent quantum phenomena arising from the key band structure singularities and further exploit various thin film tuning parameters to manipulate them. With systematic control of chemical potential and spin structure, we elucidate the pivotal roles of the lattice-driven Dirac and flat bands in generating magnetic instabilities and topological edge modes. We also demonstrate how the kagome spectrum reconstructs upon intense inter-kagome hybridization or broken crystallographic symmetry, eventually giving rise to new types of flat bands and their derived anomalies. To trace the origins of these effects, we incorporate the films into spintronic or ion-battery devices and monitor their responses under different conditions. These results establish an important framework for designing topological and correlated electronic states and for driving them toward a regime suitable for functional device applications.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150770</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable Physics-informed Machine Learning Methods for Scientific Modeling and Data Analysis</title>
<link>https://hdl.handle.net/1721.1/150769</link>
<description>Interpretable Physics-informed Machine Learning Methods for Scientific Modeling and Data Analysis
Lu, Peter Yucheng
With the recent advancement of modern machine learning methods, there are now many exciting opportunities to use machine learning in scientific research, including for modeling and data analysis. Machine learning has the potential to become an indispensable tool for scientific discovery, but it is often difficult to directly apply to scientific problems. Especially in the case of deep learning approaches, machine learning methods are often lacking in interpretability, robustness, out-of-distribution generalization, and data efficiency—all qualities that are necessary for many scientific and engineering applications. In this thesis, we will illustrate several approaches for addressing these issues using a variety of applications. First, we develop a physics-informed framework for partially observed system identification, showing how combining an encoder with a sparse symbolic model allows us to reconstruct unobserved hidden states as well as the exact governing equations. Then, we design a physics-informed deep representation learning architecture for analyzing spatiotemporal systems and demonstrate its ability to extract interpretable physical parameters, corresponding to uncontrolled variables, from time-series data. Finally, we use tools from optimal transport theory and manifold learning to develop a robust non-parametric method for discovering conservation laws, showing the advantage of using geometric machine learning methods to solve scientific problems. By designing physics-informed architectures and adapting representation learning methods for scientific applications, we can overcome many of the difficulties that are currently preventing machine learning from playing a more important role in scientific discovery and create more useful computational tools for scientists and engineers trying to analyze, understand, and model their data.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150769</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The First Laboratory Searches for Low-Mass Axion Dark Matter</title>
<link>https://hdl.handle.net/1721.1/150768</link>
<description>The First Laboratory Searches for Low-Mass Axion Dark Matter
Salemi, Chiara P.
Multiple astrophysical and cosmological observations have shown that the visible matter described by the Standard Model of particle physics is only a small fraction of the energy density of the universe. We believe that there is about five times as much matter that is ‘dark’. The dark matter is likely comprised of massive particles that interact very little or not at all with other matter. Despite this lack of interaction, the ubiquity of dark matter has allowed it to have profound effects on the history of our universe—including seeding the formation of structures such as the galaxy in which we live.&#13;
&#13;
One of the most well-motivated dark matter candidates is the axion, a hypothetical particle that is predicted by the solution to another long-standing mystery in physics, the strong CP problem. The Standard Model predicts that CP symmetry should be violated by the strong force. However, precision measurements have shown that strong interactions conserve CP symmetry to better than one part in 10¹⁰. At present, the most viable solution to this strong CP problem introduces a new particle, the axion. Within a wide range of parameters, the axion also satisfies all of the requirements to be dark matter.&#13;
&#13;
This dissertation presents the first direct search for low-mass axion dark matter, using an innovative lumped-element detection method. In a lumped-element detector, a strong magnetic field interacts with the field of dark matter axions around us, inducing an effective current. This effective current is read out via a superconducting LC circuit and measured with high-sensitivity quantum sensors. The entire device must be kept only barely above absolute zero in order to reduce backgrounds that could mask a signal.&#13;
&#13;
The prototype experiment that is the primary focus of this thesis, ABRACADABRA-10 cm, set world-leading limits on axion dark matter. Over the course of two month-long physics runs from 2018 to 2020, it excluded axions with masses of 0.31–8.8 neV and couplings [formula]. This thesis will cover the lifetime of the experiment from design to construction to analysis. &#13;
&#13;
The success of ABRACADABRA-10 cm has now set the stage for the DMRadio program, a series of larger detectors that will be capable of finding or definitively excluding axion-like particles and QCD axions over a wide range of masses below 1 μeV. I present initial sensitivity and design studies for the upcoming two generations of DMRadio, DMRadio-50 L and DMRadio-m³. I also discuss the path towards a future, large-scale experiment, DMRadio-GUT, which would probe QCD axions at GUT-motivated masses.&#13;
&#13;
The dark matter community is coalescing around the goal of probing the entire axion dark matter parameter space over the next couple of decades. The effort is driven by new ideas along with advances in cryogenic, magnet, and quantum sensing technology. The 1 L, 1 T ABRACADABRA-10 cm prototype experiment has formed the basis of a major component of the worldwide effort to find or exclude axions. The work in this dissertation represents the opening of new parameter space in the search for dark matter.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150768</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Sensitivity Nitrogen-Vacancy Center Magnetometry: from DC to GHz</title>
<link>https://hdl.handle.net/1721.1/150766</link>
<description>High-Sensitivity Nitrogen-Vacancy Center Magnetometry: from DC to GHz
Alsid, Scott T.
In the past 15 years, quantum sensing of magnetic fields using nitrogen-vacancy (NV) center ensembles in diamond has matured into an established discipline, with several proof-of-principle demonstrations in condensed matter physics, biological systems, electronics testing, and geomagnetism. However, despite the prospect of comparable magnetic field sensitivity, the performance of NV bulk magnetometers sensing low-frequency fields continues to lag behind that of their atomic vapor cell and superconducting counterparts by about three orders of magnitude. Detection of GHz-frequency fields compares even less favorably, with an additional three orders of magnitude reduction in demonstrated sensitivity.&#13;
&#13;
This thesis presents two experimental thrusts designed to improve an NV-ensemble magnetometer by optimizing diamond processing and sensor construction. We first investigate the irradiation and annealing process of diamond samples by developing an NV-charge state spectral decomposition technique, which we use to examine the creation dynamics and diffusion of monovacancies in the diamond lattice. We also examine the behavior of the spin coherence timescales under increasing electron irradiation doses.  We then construct an NV-ensemble magnetometer by implementing and expanding upon the best techniques used for diamond growth, microwave delivery, optical excitation and readout, and pulse-control sequences. We use this sensor to demonstrate the most sensitive NV-based bulk magnetometer reported to date in both broadband and narrowband operation, sensing fields in the kHz and GHz frequency range. These experimental efforts demonstrate a deeper understanding and improvement in NV-ensemble magnetometry, opening up new application spaces in several frequency bands of interest.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150766</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disentangling the EMC effect: from free to bound nucleon structure</title>
<link>https://hdl.handle.net/1721.1/150765</link>
<description>Disentangling the EMC effect: from free to bound nucleon structure
Segarra, Efrain Patrick Dai
Deep inelastic scattering experiments using electron beams on fixed targets allow for the study of the structure of nucleons. While the structure of the free proton has been well measured, the free neutron structure and the modification of nucleon structure in the nuclear environment, the EMC effect, remain open questions. Studies suggest that the EMC effect may arise due to strongly-interacting nucleon-nucleon systems, referred to as short-range correlations. The BAND experiment at the Thomas Jefferson National Accelerator Facility seeks to probe the bound proton structure when in a strongly-interacting state, using spectator-tagged deep inelastic scattering. This thesis presents the results of the BAND experiment, demonstrating that the proton is modified when strongly interacting with the spectator neutron in deuterium. This thesis also explores efforts to describe inclusive and tagged deep inelastic scattering data within a uniform theoretical framework to explore nucleon structure modification effects and extract the free neutron structure.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150765</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fermion pairing and correlations under a quantum gas microscope</title>
<link>https://hdl.handle.net/1721.1/150764</link>
<description>Fermion pairing and correlations under a quantum gas microscope
Hartke, Thomas Richard
Understanding interacting quantum systems is a central goal of modern physics, with applications ranging from quantum chemistry to nuclear physics to the design of novel superconductors. This thesis describes the creation of exotic phases of quantum matter built from ultracold fermionic atoms trapped in an optical lattice, experimentally realizing the Fermi-Hubbard model. These fermionic atoms develop strong correlations as they tunnel and interact in the lattice potential. Using a novel bilayer quantum gas microscope, we detect these correlations by imaging arrays containing thousands of atoms, revealing the exact location and spin of each fermion.&#13;
&#13;
We apply these techniques to directly reveal the formation of a Mott insulator of fermions with strong repulsion and the crossover to a gas of local pairs with strong attraction. At intermediate attraction, we observe correlated fermion pairs extending over multiple lattice sites, and these pairs interact to form a long range ordered state with charge-density-wave correlations. In a repulsive lattice gas, we detect local doublon-hole quantum fluctuations within the Mott insulator, which in turn establish long range magnetic order. Leveraging the technique of full density imaging, we implement model-independent thermometry of the strongly interacting system using the fluctuation-dissipation theorem, a fundamental relation guaranteed by statistical physics. Finally, we invent and realize a novel method to coherently manipulate and entangle fermion pairs within large two-dimensional arrays, with applications including robust quantum information storage and hybrid analog-digital quantum simulation. These experiments realize fundamental phenomena in condensed matter physics, and in addition demonstrate a robust platform for future exploration of the behavior of strongly interacting fermions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150764</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the Frontiers of the Standard Model with Lattice QCD</title>
<link>https://hdl.handle.net/1721.1/150763</link>
<description>Probing the Frontiers of the Standard Model with Lattice QCD
Grebe, Anthony Valenti
The Standard Model of particle physics provides a beautiful description of our universe at the most fundamental level and has been tested more thoroughly than any other scientific theory in history. Nevertheless, there are still unanswered questions lying at the frontiers of the Standard Model, ranging from composite structure formation to the violation of symmetries to the nature of the elusive neutrino. Answering these questions requires relating experimentally relevant hadrons to the fundamental Standard Model interactions of the quarks and gluons that compose them, made difficult by the fact that quantum chromodynamics is non-perturbative at low energies. In the face of the breakdown of perturbation theory, lattice QCD – a discretization of the Feynman path integral on a four-dimensional space-time grid – is the only known method for studying quarks and gluons at the energies relevant for hadron formation and structure.&#13;
&#13;
This thesis will use lattice QCD to investigate the proton charge radius, CP violation, and neutrinoless double-beta decay. The electric charge radius of the proton, relevant for electron scattering and for hydrogen spectroscopy, has historically been a subject of debate due to conflicting experimental measurements. This work performed an exploratory lattice QCD calculation of this charge radius directly from the dynamics of quarks and gluons constituting the proton using a novel method designed to reduce a systematic uncertainty that had plagued many previous calculations. CP symmetry, the combination of charge conjugation and parity, is violated in rare quark decays, and isolating the CP-violating phase δ in the quark sector of the Standard Model requires a theoretical understanding of the initial and final hadronic states involved in the decays. The light-cone distribution amplitude of the pion computed in this work from lattice QCD is one of the theoretical inputs required to extract δ from B → ππ decays. The quest to resolve whether the neutrino is Majorana or Dirac has led to experimental searches for neutrinoless double-beta decay, but interpretations of experimental results depend on nuclear matrix elements of the isotopes in question. This work describes the first exploratory lattice QCD calculation of this nuclear matrix element for the nn → ppee transition. In all these cases, lattice QCD provides the necessary theoretical input to understand experimentally relevant physics directly from the fundamental interactions of the Standard Model, advancing the frontiers of our understanding of the universe.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150763</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Measurements of the Vector Boson Scattering Production and Searches for Charged Higgs Bosons at the Large Hadron Collider</title>
<link>https://hdl.handle.net/1721.1/150762</link>
<description>Precision Measurements of the Vector Boson Scattering Production and Searches for Charged Higgs Bosons at the Large Hadron Collider
Hu, Miao
This thesis presents measurements of cross sections of vector boson pair production in association with two jets and a search for charged Higgs bosons (H±) decaying into a top and a bottom quark.&#13;
&#13;
The data samples used for the measurements are LHC proton-proton collisions recorded with the CMS detector during 2016–2018 at √&#119904; = 13 TeV, corresponding to an integrated luminosity of 137 fb⁻¹. The measurements are performed in the fully leptonic decay modes [formula] and [formula], where ℓ, ℓ′ = e, μ, for the WZ and W±W± productions; and also in the fully hadronic decay modes VV → 4q and ZV → 2&#120584;2q, where V = W, Z, for the [formula], WZ, and ZZ productions. In the leptonic channel, differential fiducial cross sections as functions of the invariant masses of the jet and charged lepton pairs, as well as of the leading-lepton transverse momentum, are measured for [formula] production, and, for WZ production, also as a function of the invariant mass of the jet pair. All measurements are consistent with the standard model predictions. An observation of electroweak production of WZ boson pairs is reported with an observed (expected) significance of 6.8 (5.3) standard deviations. In the hadronic channel, an expected significance of 3.9 standard deviations is reported for the total electroweak production of VV boson pairs. In both channels, constraints are obtained on the structure of quartic vector boson interactions in the framework of effective field theory.&#13;
&#13;
The search for H± is performed in the all-jet final state using LHC proton-proton collision data recorded with the CMS detector in 2016 at √&#119904; = 13 TeV, which corresponds to an integrated luminosity of 35.9 fb⁻¹. No significant excess is observed above the expected background. Model-independent upper limits at 95% confidence level are set on the product of the H± production cross section and branching fraction for H± mass hypotheses up to 3 TeV in two production mechanisms. These results are interpreted using different minimal supersymmetric extensions of the standard model, and results combining this with a search in leptonic final states are also reported.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150762</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topology, Symmetry and Mechanics: Deciphering and Controlling information flows in a living cell</title>
<link>https://hdl.handle.net/1721.1/150761</link>
<description>Topology, Symmetry and Mechanics: Deciphering and Controlling information flows in a living cell
Liu, Jinghui
Living organisms collect, preserve and transform information on complex spatiotemporal bases. In a living cell, for instance, signaling proteins are capable of forming patterns on length scales tens of thousands of times the molecular size. During force-generating processes such as cell division, both the spatial and temporal aspects of protein patterning convey essential physiological outcomes. &#13;
&#13;
While many advances focusing on the molecular complexity of such chemomechanical interactions have been made in recent years, it remains unclear to what extent they can be described and even predicted in the language of a physicist: that is, whether the structure and dynamics of cellular information flows can be deciphered through system-level topology and symmetry signatures, rather than molecular and kinetic specificities. Taking a step further, with emerging experimental tools that allow quantitative control over molecular interactions, the engineering of information flows towards violation of system-level physical symmetry remains an open pursuit. &#13;
&#13;
In this thesis, I present a series of studies of the chemomechanical Rho-actomyosin signaling process that takes place in P. miniata starfish egg cells. In Chapter 1, I review this model system for its molecular components and physiological functions, highlighting the need for novel order parameters to characterize the complex biochemical and mechanical changes. In Chapter 2, I show that the statistics and dynamics of topological defects embedded in Rho chemical patterns draw an unexpected parallel to classical and quantum turbulent fluids. In Chapter 3, I further demonstrate a bosonic symmetry between braided topological defects as well as the emergence of pair-scattering virtual particles on the cell membrane during signaling. In Chapter 4, I develop an optogenetic tool that recruits a Rho-activating enzyme and uses light to quantitatively control surface contraction waves that override wild-type guiding cues and violate pole symmetry. In Chapter 5, I discuss the use of vibrational sound microscopy for non-invasively probing active fluctuations in the force-generating cell cortex. Finally, I conclude in Chapter 6 by discussing the investigation of novel room-temperature physics in biological systems, combining advanced biological tools with a condensed-matter theoretical approach.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150761</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>First Step into A New Physics Realm: Search for the Majorana Nature of Neutrinos in the Inverted Mass Ordering Region</title>
<link>https://hdl.handle.net/1721.1/150760</link>
<description>First Step into A New Physics Realm: Search for the Majorana Nature of Neutrinos in the Inverted Mass Ordering Region
Fu, Zhenghao
The search for neutrinoless double-beta decay (0&#120584;&#120573;&#120573;) is the only way to prove the Majorana nature of neutrinos and is thus a major area of interest for neutrino physics. Discovering 0&#120584;&#120573;&#120573; and measuring its half-life will be the first solid evidence for physics beyond the Standard Model (BSM) and lead to a plethora of new theoretical and experimental investigations.&#13;
&#13;
This dissertation contains both theoretical and experimental work. The theoretical calculation of the non-perturbative nuclear matrix element for 0&#120584;&#120573;&#120573; is done using lattice quantum chromodynamics (LQCD) to interpret the experimental data. Preliminary results for the &#119899;⁰&#119899;⁰ → &#119901;⁺&#119901;⁺&#119890;⁻&#119890;⁻ process, covering both long-range (light Majorana neutrino exchange) and short-range (heavy Majorana neutrino exchange) contributions, are obtained.&#13;
&#13;
The experimental work focused on KamLAND-Zen 800, one of the leading efforts in the field, with data from an exposure of 970 kg·yr of ¹³⁶Xe. Machine learning methods are employed to discriminate background events in the data and to generate new simulations for future study. With no 0&#120584;&#120573;&#120573; signal excess over the background expectation, statistical properties are extracted by a Bayesian analysis utilizing a Markov chain Monte Carlo (MCMC) algorithm. A lower limit on the 0&#120584;&#120573;&#120573; half-life of [formula] is set at 90% C.I., corresponding to an effective neutrino mass range of 38.4–160.0 meV; this constitutes the first search in the inverted mass ordering region.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150760</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the Deeply Virtual Compton Scattering Cross Section from the Proton at 10.6 GeV using the CLAS12 Detector</title>
<link>https://hdl.handle.net/1721.1/150759</link>
<description>Measurement of the Deeply Virtual Compton Scattering Cross Section from the Proton at 10.6 GeV using the CLAS12 Detector
Lee, Sangbaek
Deeply Virtual Compton Scattering (DVCS) is an exclusive process that produces a real photon when a lepton scatters from a quark inside a nucleon or a nucleus. Measurement of the DVCS cross section enables the study of Generalized Parton Distributions (GPDs), which play a central role in understanding the QCD dynamics inside a hadron. Thus, the quark and gluon origin of the nucleon spin and mass can be probed, and three-dimensional images of the target nucleon or nucleus can be realized. This thesis presents a cross section analysis of DVCS from the proton in the presence of its background, the Bethe-Heitler (BH) process.&#13;
&#13;
The CEBAF Large Acceptance Spectrometer for operation at 12 GeV beam energy (CLAS12) collaboration collected electron-proton scattering data in fall 2018 using a liquid hydrogen target and the 10.6 GeV polarized electron beam from the Continuous Electron Beam Accelerator Facility (CEBAF). The CLAS12 detector is a nearly hermetic fixed-target detector located in Hall B of Jefferson Lab in Newport News, Virginia.&#13;
&#13;
The experimentally determined BH-DVCS cross section is in good agreement with a theoretical prediction based on a phenomenological model. The kinematic dependence of the cross section is reported over a wide range. The short-term plan to utilize the results presented here for a thorough tomography study and the long-term plan for GPD studies at future facilities such as the Electron-Ion Collider (EIC) are discussed.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150759</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamics of Biological Active Matter</title>
<link>https://hdl.handle.net/1721.1/150758</link>
<description>Thermodynamics of Biological Active Matter
Li, Junang
Consuming a fuel at the individual particle level, active matter shows rich physical behaviors from collective motion to phase separation, and serves as a focal point for addressing fundamental questions in nonequilibrium physics, ecology, and animal behavior. Despite the diverse phenomena observed in active matter, they all share a common feature: breaking the fundamental symmetry of time reversal. Understanding this arrow of time is essential to unveil the complex behaviors of active matter, separating them from their equilibrium counterparts. In this thesis, I develop different metrics to quantify this elementary asymmetry between forward and backward processes and demonstrate what new physics we can learn from irreversibility by applying these metrics to various biological systems. In Chapter 2, I illustrate three distinct methods of measuring irreversibility and verify them on a simple toy model. Equipped with these ideas, in Chapter 3, I apply them to two biological systems at different scales. In the first example, I show how a dissipation time-scale can be extracted from the fluctuations of cortical granules embedded in a starfish oocyte. In the second example, I probe material properties of the living crystal formed by starfish embryos. In Chapter 4, I switch gears and propose a data-driven approach to quantifying irreversibility from complex high-dimensional patterns, which serves as a dynamical order parameter. Lastly, in Chapter 5, I go beyond irreversibility and give a few examples of other aspects of thermodynamics we can learn from active matter.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150758</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring fundamental physics and astrophysics with gravitational-wave sources</title>
<link>https://hdl.handle.net/1721.1/150757</link>
<description>Exploring fundamental physics and astrophysics with gravitational-wave sources
Ng, Kwan Yeung (Ken)
In this thesis, I investigate the new physics that can be learnt from current and future gravitational wave (GW) observations. With the current observations, I study how multiple binary black hole (BBH) spin measurements can be combined within a hierarchical Bayesian framework to search for ultralight bosons (e.g., axions). This search relies on the process called superradiance, in which boson clouds form around fast-spinning black holes (BHs), reduce the BH spins to characteristic values, and lead to an exclusion region in the parameter space of BH mass and spin. Based on the second GW catalog released by the LIGO-Virgo Collaboration, the measurements of BH masses and spins highly disfavor bosons with masses in the range [1.3, 2.7] × 10⁻¹³ eV and decay constants &#119891;&#119886; ≳ 10¹⁵ GeV. With the future observations, I explore the science goals of the next-generation ground-based GW detectors, namely Cosmic Explorer and Einstein Telescope. These detectors may observe stellar-mass (&#119978;(10−100) &#119872;⊙) BBHs up to &#119911; ∼ 100, and hence tracing the cosmic merger rate history may become feasible. There are two intriguing BBH populations beyond &#119911; ∼ 10 that are unreachable by the current detectors. The first possible population consists of astrophysical BBHs originating from the remnants of Population III (Pop III) stars, whose merger rate density may peak at &#119911; ∼ 10 according to current astrophysical predictions. The second possible population is the assembly of primordial black hole (PBH) mergers, which are relics of overdensities in the early Universe and whose merger rate density increases monotonically with redshift. I perform mock data challenges to quantify the individual redshift measurements of high-redshift BBHs and reconstruct the merger rate density of each subpopulation with hierarchical analysis.
Even with the redshift measurements alone, the next-generation detectors will provide precise measurements of the abundance of Pop III BBHs and PBHs with a few months of data, paving a new avenue for high-redshift GW astronomy.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150757</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exotic Dark Matter in the Early Universe</title>
<link>https://hdl.handle.net/1721.1/150756</link>
<description>Exotic Dark Matter in the Early Universe
Ridgway, Gregory
The fundamental nature of dark matter remains a mystery. As crucial evidence for its existence comes from its effects on early universe observables, perhaps clues to its fundamental nature also reside in its behavior in the early universe. In this thesis, I explore two scenarios in which dark matter is capable of exotic behavior in the early universe. In the first scenario, I consider particle dark matter that is able to decay or annihilate into standard model matter. I describe a code package, DarkHistory, that quickly and accurately calculates the effects such annihilations and decays have on the evolution of the ionization levels, matter temperature, and spectrum of photons in the early universe. I then use DarkHistory and measurements of the Ly&#120572; forest to place constraints on the decay lifetime and annihilation rates of dark matter. In the second scenario, I consider dark matter that consists of dark quarks and gluons. In the specific model I consider, the confinement phase transition is of first order, leading to the formation of bubbles. The dynamics of these bubbles and their interactions with the dark quarks dramatically modifies their present-day abundance.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150756</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of gas exchange in a membrane oxygenator.</title>
<link>https://hdl.handle.net/1721.1/150748</link>
<description>An analysis of gas exchange in a membrane oxygenator.
Buckles, Richard G. (Richard George)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1966; Bibliography: p. 373-386.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150748</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Durable goods and transactions costs : theory and evidence</title>
<link>https://hdl.handle.net/1721.1/150747</link>
<description>Durable goods and transactions costs : theory and evidence
Eberly, Janice Caryl.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1991; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150747</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and construction of an excitation system for a superconducting alternator.</title>
<link>https://hdl.handle.net/1721.1/150745</link>
<description>Design and construction of an excitation system for a superconducting alternator.
Keim, Thomas Alan.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150745</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave effects of a free surface piercing hydrofoil.</title>
<link>https://hdl.handle.net/1721.1/150743</link>
<description>Wave effects of a free surface piercing hydrofoil.
Kern, Edward Clarence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Ocean Engineering, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150743</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Segmented approximation and analysis of stochastic processes.</title>
<link>https://hdl.handle.net/1721.1/150735</link>
<description>Segmented approximation and analysis of stochastic processes.
Akant, Adnan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150735</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Synthesis of Polymer Networks and Branched Polymers for Triggered Deconstruction of Self-Assembly</title>
<link>https://hdl.handle.net/1721.1/150724</link>
<description>Design and Synthesis of Polymer Networks and Branched Polymers for Triggered Deconstruction of Self-Assembly
Husted, Keith E. L.
The accumulation of plastic waste is becoming a global crisis. While many plastics can technically be recycled, robust plastics that are held together by permanent covalent crosslinks (thermosets, which make up 20% of all plastics) cannot generally be reprocessed or broken down. Generalizable strategies for the deconstruction and/or reprocessing of thermoset materials are therefore severely lacking and highly desirable.&#13;
&#13;
Here, we report the first generalizable approaches to enable both deconstruction and reprocessing of existing thermosets, using an unrecyclable industrial thermoset, polydicyclopentadiene (pDCPD) as a model material. We demonstrate that statistical incorporation of small amounts of molecular additives into the backbone strands of thermosets enables complete material deconstruction under mild and chemoselective conditions, without adversely affecting the parent material’s original manufacturing workflow, mechanical properties, or appearance. The deconstruction products can be functionalized and re-incorporated into virgin material to produce partially recycled material of equivalent performance to the parent.&#13;
&#13;
Next, we show that structural modification of these molecular additives to include a crosslinking site enables equivalent material deconstruction compared to first generation additives, but with improvements in material thermomechanical performance. Alternative structural modification to include multiple functionality is shown to enable four-fold reductions in the minimum required additive loading for material deconstruction. Theoretical bases for both of these phenomena are provided to corroborate experimental observations.  &#13;
Next, we show that by facilitating bond exchange rather than cleavage, direct reprocessing of permanently crosslinked thermosets is achieved. We report the first example of self-healing pDCPD, in addition to the first reported Si-O exchange of bifunctional silyl ethers, consequently adding a new and accessible reaction to the toolbox of dynamic covalent chemistry. &#13;
&#13;
Finally, we deviate from thermosets and present a new strategy for the orthogonal tuning of thermal and mechanical properties of self-assembled graft copolymer silicones. We show that small structural variations to aliphatic pendant groups along copolymer backbones of free-standing self-assembling graft copolymers underpin enormous changes in material morphology and thermomechanical properties. Notably, we also introduce highly ordered body-centered-cubic morphologies which behave as soft plastic crystals, and lamellar morphologies which behave as robust and transparent, yet unentangled and uncrosslinked thermoplastics.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150724</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Power Cyclotrons: The Bridge Between Beyond the Standard Model Physics, Computation, and Medical Applications</title>
<link>https://hdl.handle.net/1721.1/150723</link>
<description>High Power Cyclotrons: The Bridge Between Beyond the Standard Model Physics, Computation, and Medical Applications
Waites, Loyd
The IsoDAR cyclotron is a 60 MeV cyclotron designed to output 10 mA of protons in order to serve as a driver for a neutrino experiment. Coupling the high flux generated by the IsoDAR system with a kiloton neutrino detector will provide sterile neutrino exclusion searches covering anomalous regions indicated by short-baseline experiments. Simultaneously, the coupling of a high-power target and kiloton detector allows for the investigation of dark matter candidates, namely axion-like particles. We have shown that nuclear excitations within the IsoDAR target create a unique opportunity to produce axions and detect monoenergetic peaks with the nearby kiloton detector. Beyond this, the high power produced by the IsoDAR cyclotron can be used for applications beyond particle physics. The IsoDAR cyclotron accelerates and extracts H₂⁺, which allows the beam to be split downstream, a versatile and important development to alleviate the problem of producing high-power targets for the medical isotope community. This thesis presents a proposal for producing certain highly needed medical isotopes, including Ac-225, at rates more than an order of magnitude higher than are available at present.&#13;
&#13;
In developing the proof of principle of this state-of-the-art cyclotron, the results in this thesis focus on two points related to the production and transport of ions to the accelerator. The first step was to construct an H₂⁺ ion source with the necessary excellent emittance parameters and low contamination of non-H₂⁺ ion species. In this thesis, we report results from a multi-cusp ion source that meets these requirements and produces a record high-purity, low-emittance H₂⁺ current. The second step was to design a radio-frequency quadrupole (RFQ) that will allow for gentle bunching of the high current before injection. This is the first use of axial direct injection with a compact cyclotron. This thesis also reports the first application of machine learning to RFQ design. These tools enable the high currents required by the IsoDAR cyclotron, leading to important impacts on the accelerator, medical, and physics communities.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150723</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficiently Learning, Testing, and Simulating Quantum Many-Body Systems</title>
<link>https://hdl.handle.net/1721.1/150722</link>
<description>Efficiently Learning, Testing, and Simulating Quantum Many-Body Systems
Soleimanifar, Mehdi
This thesis focuses on quantum information and quantum computing, and their applications in studying quantum many-body systems. A remarkable interplay between computer science and quantum physics in the past few decades has revealed that precise control and manipulation of interacting quantum systems enables us to process information and perform computations that go beyond the reach of conventional digital computers. This novel form of information processing has also resulted in a conceptually new toolkit for tackling fundamental questions about the physics of quantum many-body systems. This thesis studies new features of interacting quantum systems through the lens of computational complexity and information theory. We will see how using these new features in turn allows us to develop efficient classical and quantum algorithms for learning, testing, and simulating quantum many-body systems. Below are the main results of this thesis:&#13;
&#13;
1.	We develop an algorithm for reliably testing the amount of entanglement in a pure many-body quantum state. This algorithm tests whether a quantum state is a matrix product state of certain bond dimension in the property testing model. We provide both upper and lower bounds on the number of identical copies of the quantum state required by this algorithm.&#13;
&#13;
2.	We prove that a quantum information quantity, known as the entanglement spread, satisfies an area law in the ground state of any gapped local Hamiltonian with an arbitrary geometry. This new feature of ground-state entanglement is obtained using a connection to the seemingly different problem of finding the communication complexity of testing bipartite states.&#13;
&#13;
3.	We devise an algorithm for learning the local Hamiltonian that governs the interactions in a quantum many-body system. This algorithm uses the results of local measurements on the thermal state of the system, and provably only requires a number of samples that scales polynomially with the number of particles.&#13;
&#13;
4.	A quasi-polynomial time algorithm is developed that estimates the quantum partition function at temperatures above the phase transition point. We also study different characterizations of the thermal phase transition by connecting the exponential decay of correlations to the analyticity of the free energy in the high-temperature phase.&#13;
&#13;
5.	We rigorously bound the improvement that low-depth quantum circuits can provide over methods based on product states in estimating the ground-state energy of local Hamiltonians.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150722</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developments in Complex Systems Science with Applications to Political Systems and Pandemic Response</title>
<link>https://hdl.handle.net/1721.1/150720</link>
<description>Developments in Complex Systems Science with Applications to Political Systems and Pandemic Response
Siegenfeld, Alexander F.
The standard assumptions that underlie most conceptual and quantitative frameworks do not hold for many complex physical, biological, and social systems. Complex systems science clarifies when and why such assumptions fail and provides alternative frameworks for understanding the properties of complex systems. We review some of the basic principles of complex systems science and provide a mathematical formalism for complexity profiles. We also illustrate general modeling principles using examples from pandemic response, including how pandemics can be stably eliminated with a combination of social distancing measures and travel restrictions, and how bad science led to bad policy regarding the use of face masks. Applications to democratic elections are also described. We define the concepts of negative representation and electoral instability, demonstrating that United States presidential elections underwent a transition from a stable to an unstable regime in the 1970s and have since become increasingly unstable. We also consider the implications of geographic political polarization for multi-scale electoral systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150720</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chiral Phases on the Lattice</title>
<link>https://hdl.handle.net/1721.1/150719</link>
<description>Chiral Phases on the Lattice
DeMarco, Michael Austin
While chiral quantum field theories (QFTs) describe a wide range of physical systems, from the standard model to topological quantum matter, the realization of chiral QFTs on a lattice has proved difficult due to the Nielsen-Ninomiya theorem and the possible presence of quantum anomalies. In this thesis, we use the connection between chiral phases of matter and chiral QFTs to define chiral QFTs on a lattice, allowing a huge class of exotic field theories to be simulated numerically. Our work builds on the ‘mirror fermion’ approach to the problem of defining chiral theories on a lattice, which defines chiral field theories as the edge modes of chiral phases. We begin by reviewing the deep connections between chiral phases of matter, chiral field theories, and anomalies. We then develop numerical treatments of an &#119878;&#119880;(2) chiral field theory, and provide a semiclassically solvable definition of Abelian 2+1 chiral topological orders. This leads to an exactly solvable definition of chiral &#119880;(1) SPT phases with zero correlation length, which we use to extract the edge chiral field theories exactly. These zero-correlation-length models are vastly simpler than previous approaches to defining chiral field theories on the lattice.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150719</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Fairness in Sequential Decision Making</title>
<link>https://hdl.handle.net/1721.1/150718</link>
<description>Algorithmic Fairness in Sequential Decision Making
Sun, Yi
Machine learning algorithms have been used in a wide range of applications, and there are growing concerns about the potential biases of those algorithms. While many solutions have been proposed for addressing biases in predictions from an algorithm, there is still a gap in translating predictions to a justified decision. Moreover, even a justified and fair decision could lead to undesirable consequences when decisions create a feedback effect. While numerous solutions have been proposed for achieving fairness in one-shot decision-making, there is a gap in investigating the long-term effects of sequential algorithmic decisions. In this thesis, we focus on studying algorithmic fairness in a sequential decision-making setting.&#13;
&#13;
We first study how to translate model predictions to fair decisions. In particular, given predictions from black-box models (machine learning models or human experts), we propose an algorithm based on the classical learning-from-experts scheme to combine predictions and generate a fair and accurate decision. Our theoretical results show that approximate equalized odds can be achieved without sacrificing much regret. We also demonstrate the performance of the algorithm on real data sets commonly used by the fairness community.&#13;
&#13;
In the second part of the thesis, we study whether enforcing static fair decisions in the sequential setting could lead to long-term equality and improvement of disadvantaged groups under a feedback loop. In particular, we model the interaction between algorithmic decisions and the underlying distribution using a Markov Decision Model with general transition functions. We propose a new metric that measures the distributional impact of algorithmic decisions as measured by the change in the distribution’s center, spread, and shape. This metric categorizes the impact into within-group impact and between-group impact, where within-group impact measures how policies impact the distribution within a group, and between-group impact measures how policies impact the distributions of two population groups differently. Our results show that there is generally a trade-off between utility and between-group impact for threshold policies, and that common fairness constraints could lead to "backfire effects" where the impact on groups could be disparate.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150718</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perceive, Predict, and Plan: Robotic Expeditionary Science in Oceanic Spatiotemporal Fields</title>
<link>https://hdl.handle.net/1721.1/150717</link>
<description>Perceive, Predict, and Plan: Robotic Expeditionary Science in Oceanic Spatiotemporal Fields
Preston, Victoria Lynn
An improved understanding of our ocean would allow us to characterize the largest habitable biosphere on planet Earth, quantify the geochemical processes that control Earth’s climate, and develop responsible regulations for controlling the natural resources stored in its depths. Expeditionary science is the art of collecting in situ observations of an environment to build approximate models of underlying properties that move us towards this understanding. Robotic platforms are a critical technology for collecting observations of the ocean. Depth-capable autonomous underwater vehicles (AUVs) are commonly used to build static maps of the seafloor by executing pre-programmed surveys. However, there is growing urgency to generate rich data products of spatiotemporal distributions that characterize the physics and chemistry of the deep ocean biogeosphere. In this thesis, the problem of charting dynamic deep sea hydrothermal plumes with depth-capable AUVs is investigated. Effectively collecting samples of geochemical plumes using the operationally preferred strategy of pre-specifying surveys requires access to a dynamics model of the advective currents, bathymetric updrafts, and turbulent mixing at a hydrothermal site. In practice, however, access to this information is unavailable, imperfect, or only partially known, and so a model of plume dynamics must be inferred from observations and subsequently leveraged to improve future sampling performance. As most in situ scientific instruments yield point-measurements, considerable uncertainty is placed over the form of the dynamics in purely data-driven solutions. Challenges related to planning under uncertainty for geochemical surveys in the deep ocean are addressed in this thesis by embedding scientific knowledge as a strong inductive prior for tractable model learning and decision-making. 
Algorithmic contributions of this thesis show how plumes can be perceived from field data, their fate predicted far into the future (e.g., multiple days), and informative fixed trajectories planned that place an AUV in the right place at the right time. Scientific assessment of observational data collected with AUV Sentry during field trials in the Guaymas Basin, Gulf of California, is interwoven with algorithmic analyses, demonstrating how intelligent perception, prediction, and planning enable novel insights about hydrothermal plumes.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150717</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of Edge Fluctuations of Negative Triangularity Plasma on TCV using a New Gas-Puff Imaging Diagnostic</title>
<link>https://hdl.handle.net/1721.1/150716</link>
<description>Studies of Edge Fluctuations of Negative Triangularity Plasma on TCV using a New Gas-Puff Imaging Diagnostic
Han, Woonghee
Successful operation of magnetic confinement fusion devices requires thorough analysis of edge plasma turbulence, which plays a key role in dealing with the exhaust heat and particle loads on the wall materials. Edge plasma turbulence differs depending on various plasma conditions, including plasma shape, e.g., the triangularity (δ). Negative δ is of great interest, as it is known to exhibit a substantial increase in confinement compared to the usual positive δ, D-shaped plasmas. Recent experiments in the Tokamak à Configuration Variable (TCV) and DIII-D tokamaks have correlated the confinement improvement with a reduction of fluctuations within the plasma core, but, until now, relatively little was known about the effect of δ on plasma edge dynamics.&#13;
&#13;
This thesis explores edge turbulence in negative δ plasmas on TCV using Gas-Puff Imaging (GPI). GPI measures spatially-resolved edge fluctuations by imaging atomic spectral-line emission from a local neutral gas puff. In collaboration with the TCV team at EPFL, a new GPI system was installed at TCV and has been operational since 2018. The most prominent features appearing in the GPI images are blobs: intermittent, large-amplitude turbulent structures that are filamentary (elongated along the magnetic field) in the scrape-off layer (SOL). Estimation of the size, speed, and frequency of blobs allows evaluation of particle fluxes leaving the magnetically confined plasma boundary. Traditional approaches to blob analysis in GPI images, including conditional average sampling and cross-correlation techniques, are limited to providing only averaged characteristics of blobs. These limitations were tackled in this work by implementing a machine learning method with standardized models. The models were trained with synthetic GPI images and demonstrated excellent performance in predicting blob contours for real GPI data.&#13;
&#13;
Using the GPI diagnostic, together with probe measurements, first-wall interaction is found to be completely suppressed in sufficiently negative δ, for both limited and diverted L-mode plasmas in TCV. This phenomenon can be explained by blobs being ejected along the reduced connection length, which is intrinsic to negative δ plasmas. In addition, edge fluctuations for negative δ plasmas were investigated in high density plasmas, near the density limit. The suppression of first-wall interaction for sufficiently negative δ is maintained at high densities, and the density limit appears to be similar for positive and negative δ. Furthermore, the blob-tracking method was applied for a detailed blob-by-blob analysis to compare cross-field particle transport in positive and negative δ plasmas. This revealed that plasmas with smaller δ tend to have less frequent blobs, most of which have large area and low radial speed, leading to a lower cross-field particle transport. All in all, this work provides experimental evidence of reduced edge turbulence in negative δ via analysis of blobs in the GPI measurements on TCV, strengthening the prospects of negative triangularity plasmas as a potential reactor solution.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150716</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physicochemical Interactions at Interfaces: Mass and Charge Transfer at Chemically Reacting Interfaces</title>
<link>https://hdl.handle.net/1721.1/150715</link>
<description>Physicochemical Interactions at Interfaces: Mass and Charge Transfer at Chemically Reacting Interfaces
Lake, John R.
Aqueous multiphase interfaces that undergo chemical reactions are crucial to much of human technology and are relevant to many large-scale industrial activities. Using rationally designed surfaces to enhance aqueous physicochemical interactions at these critical interfaces provides a unique approach to alleviating limitations in many applications. From this perspective, this thesis comprises four distinct studies that interrogate interfaces undergoing a chemical reaction, with a focus on understanding the mass or charge transfer occurring at these interfaces to enable more effective transport during these reactions. First, the impact of microscale texture on gas-evolving electrochemical electrodes is investigated, highlighting design considerations that are distinctly different from those of non-gas-evolving electrochemical systems. Next, the inactivation of active surfaces by adhered gas bubbles is investigated for gas-evolving electrochemical systems. In many industrially relevant scenarios, the passivating effect that adhered bubbles have on a process is detrimental to its efficiency and overall function. The findings presented are contrary to the current prevailing understanding, with important implications for the future design of these systems. Next, when gas bubbles are intended to react in bulk aqueous solutions, their tendency to minimize their surface energy has interesting consequences, relevant for a host of transport, absorption, and mineralization processes. A new method for the absorption of gas bubbles in liquid absorbents is presented, using carbon dioxide absorption by an alkaline absorbent as a model system with applications for sustainability. 
Finally, a study is presented on the transient impacts of gas depletion in the electroreduction of carbon dioxide into useful products, to better inform how these systems can limit the effects of poor mass transport of reactant gas to their reacting electrodes. Taken together, these studies demonstrate how fundamental interfacial engineering principles can be applied to enhance a variety of chemical processes involving multiphase aqueous reactions. Given the prevalence of these interfaces, the work has wide-ranging relevance, from power generation and chemical conversion to global decarbonization.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150715</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Terahertz Spectroscopy of Collective Excitations in Correlated Materials</title>
<link>https://hdl.handle.net/1721.1/150714</link>
<description>Ultrafast Terahertz Spectroscopy of Collective Excitations in Correlated Materials
Belvin, Carina Aiello
Strongly correlated materials exhibit a multitude of exotic phases of matter. Originating from strong interactions between electrons, the properties of these systems are governed by a delicate balance among the lattice, charge, orbital, and spin degrees of freedom, leading to complex ground state phases. The lowest-energy collective excitations of a particular ground state serve as a fingerprint of the phase of matter, and revealing such collective modes can therefore provide crucial information for understanding the underlying microscopic interactions. A powerful means of experimentally observing collective excitations in solids is through ultrafast spectroscopy. In this technique, an intense, ultrashort “pump” laser pulse drives the material out of equilibrium, and a subsequent weaker “probe” pulse detects the altered state of the system. By tailoring the parameters of the pump pulse, one can either lightly perturb the material to measure its equilibrium properties or drive it into an entirely new phase of matter markedly different from its original state. In either regime, the pump pulse can coherently generate collective excitations, which can be mapped in both the frequency and time domains.&#13;
&#13;
In this thesis, we use ultrafast terahertz spectroscopy to investigate collective modes in strongly correlated materials. The terahertz spectral range is the natural energy scale of elementary collective excitations in solids, and thus terahertz spectroscopy is an ideal method for detecting such modes. To drive the system out of equilibrium, we employ pump pulses in the near infrared that are chosen to lie above, below, or resonant with specific electronic transitions in the material. This enables us to observe the dynamics of collective modes after various types of perturbations, which can offer insight into the nature of complex phases and the couplings between different degrees of freedom. In this work, we utilize this approach to reveal new electronic soft modes of the charge-orbital order in magnetite, the interplay between excitons and magnons in the van der Waals antiferromagnet NiPS₃, the crossover between distinct optical generation mechanisms of a coherent magnon in NiPS₃, and the dynamics of coherent electromagnons in the van der Waals multiferroic NiI₂.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150714</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphene-Based Nanodevices in the Superconducting and Strongly Correlated Regimes</title>
<link>https://hdl.handle.net/1721.1/150713</link>
<description>Graphene-Based Nanodevices in the Superconducting and Strongly Correlated Regimes
Rodan-Legrain, Daniel
The ability to isolate and manipulate high-quality few-atoms-thick materials represents a major advance in condensed matter physics. Assembling these ultra-thin materials into van der Waals heterostructures, i.e., artificial meta-materials with atomically sharp interfaces, markedly increases the rich variety of physical properties accessible in 2D systems. &#13;
&#13;
Graphene, a carbon-based 2D hexagonal lattice, possesses exceptional properties that since its discovery in 2004 have attracted wide attention from the scientific and engineering communities. In this work, I present a series of experiments via two different approaches, i.e., proximity effect and twist angle design, to induce superconductivity and strong correlations in graphene-based systems—two phenomena that do not intrinsically occur in this material.&#13;
&#13;
In the first part of this thesis, graphene is flanked by two superconductors and inherits their superconducting properties by proximity effect. Initially, the underlying microscopic mechanism of this phenomenon is investigated using planar tunneling spectroscopy. Then, a superconductor-graphene-superconductor junction is coupled to a superconducting circuit to create and manipulate the first graphene-based transmon qubit.&#13;
&#13;
In the second part of this dissertation, the electronic properties of graphene-based systems are engineered by controlling the relative twist angle between the atomic planes. In particular, when two graphene sheets are stacked on top of each other near the “magic angle,” θ ≈ 1.1°, nearly flat bands develop, featuring superconductivity and correlated insulating states. I begin by showing that local electrostatic control over the different electronic phases of magic-angle twisted bilayer graphene (MATBG) enables the creation of versatile hyper-tunable quantum devices. I also present low-temperature transport experiments to demonstrate the emergence of two exotic electronic phases in MATBG previously observed in other strongly correlated systems: nematicity and strange metal behavior. Next, I discuss local electronic compressibility measurements, evidencing that the low-temperature correlated phases originate from a high-energy state with an unusual band population sequence. Then, I describe nano-optics studies probing plasmonic collective excitations in MATBG. Last, the study of a novel 2D moiré system beyond MATBG, i.e., twisted bilayer-bilayer graphene, is discussed.&#13;
&#13;
The contributions of this thesis pertain to graphene-based superconducting devices and the rich phase diagram of MATBG, and may find applications in next-generation superconducting electronics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150713</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lepton-Nucleus Constraints for Neutrino Interactions and Oscillations</title>
<link>https://hdl.handle.net/1721.1/150712</link>
<description>Lepton-Nucleus Constraints for Neutrino Interactions and Oscillations
Papadopoulou, Afroditi
Currently running and forthcoming precision neutrino oscillation experiments aim to unambiguously determine the neutrino mass ordering, the charge-parity-violating phase in the lepton sector, and the possible existence of physics beyond the Standard Model. For these experiments to succeed, lepton-nucleus interactions must be modeled in unprecedented detail. In this thesis, expertise in both neutrino and electron cross-section modeling and analysis was leveraged to make fundamental and critical improvements to our understanding of these interactions. The work takes a significant step toward this high-precision measurement era with three complementary approaches. Cross sections are reported using neutrino data sets from the MicroBooNE liquid argon time projection chamber detector at Fermi National Accelerator Laboratory, as well as electron scattering data from the CLAS detector at Thomas Jefferson National Accelerator Facility. Furthermore, modeling developments for the commonly used GENIE event generator are presented.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150712</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-pressure studies of atomically-thin van der Waals materials</title>
<link>https://hdl.handle.net/1721.1/150710</link>
<description>High-pressure studies of atomically-thin van der Waals materials
Pimenta Martins, Luiz Gustavo
Two-dimensional (2D) materials, and moiré superlattices formed by certain stacking configurations of 2D crystals, represent a new frontier for quantum matter research due to the emergent properties associated with their reduced dimensionality and tunability. To glean insight into the physics of these atomically thin van der Waals materials, their properties have been extensively studied by tuning external parameters such as temperature, electrostatic doping, magnetic field, and strain. However, one external tuning parameter has not been used systematically in studies of these systems: pressure. The relative scarcity of high-pressure studies of atomically thin materials is due to experimental challenges, e.g., loading micron-sized samples into the similarly micron-sized pressure chamber. In this thesis, I address these issues and investigate 2D materials and moiré heterostructures via high-pressure optical-spectroscopic experiments using diamond anvil cells (DACs), with two main goals: (i) investigating the synthesis of novel 2D materials; and (ii) tuning and probing the electronic properties of 2D materials and moiré heterostructures.&#13;
&#13;
To address the first point, I present experiments detailing the first evidence for the formation of a hard, transparent, sp³-containing 2D phase by compression of few-layer graphene, providing robust corroboration for the existence of 2D diamond. For the second point, I present two studies. In the first, I report on electronic-band tuning and multivalley scattering at high pressures in monolayer MoS₂ and WS₂, revealed by double-resonance Raman spectroscopy. The ability to probe modifications in the band structure and multivalley scattering as a function of strain should advance our understanding of multivalley phenomena in transition metal dichalcogenides such as superconductivity, valley coherence, and valley transport. In the second study, I detail the pressure tuning of minibands in MoS₂/WSe₂ heterostructures revealed by moiré phonons: Raman-silent q ≠ 0 phonons from the individual layers activated by the moiré potential. In this work, we establish moiré phonons as a sensitive probe of the miniband electronic structure and its modifications under hydrostatic strain in this system, which is poised to be essential in understanding the emergent phenomena observed in similar moiré systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150710</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-Matter Interactions with Photonic Quasiparticles</title>
<link>https://hdl.handle.net/1721.1/150709</link>
<description>Light-Matter Interactions with Photonic Quasiparticles
Rivera, Nicholas
The interactions of matter with electromagnetic fields underlie very many physical phenomena. The physics of these interactions is greatly simplified by their weakness, enabling us to understand them largely at the lowest order in various parameters (e.g., field strength, atomic size, fine-structure constant). This understanding is challenged by recent experiments coupling light to collective electromagnetic excitations in solids ("photonic quasiparticles"), whose strongly confined electromagnetic fields can interact strongly with matter. &#13;
&#13;
This thesis describes how the rules of light-matter interactions are altered when bound and free electrons interact with photonic quasiparticles, and some applications that result. In the first major part of the thesis, I develop effects arising from the linear optical properties of these excitations, in perturbative and non-perturbative regimes of QED, which give rise to new schemes for generating entangled photons, for X-ray sources, and even for high-energy particle detectors. The second major part develops the new physics arising from the nonlinear optical properties of these photonic quasiparticles, focusing particularly on new non-perturbative nonlinear dissipation and gain phenomena. As an application, I show how these high-order nonlinearities may enable, for the first time, the deterministic, steady-state generation of large optical Fock and sub-Poissonian states.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150709</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing New Physics with Spectroscopy of Trapped Ions</title>
<link>https://hdl.handle.net/1721.1/150708</link>
<description>Probing New Physics with Spectroscopy of Trapped Ions
Hur, Joonseok
Dark matter, a missing piece in our understanding of the Universe, remains dark despite abundant evidence for its existence and concerted experimental searches for candidate particles. In recent decades, experiments with atomic systems, driven by unprecedented developments in precision, have been providing tests of the Standard Model (SM), our current understanding of the Universe, and probing physics beyond the SM, including dark matter. In particular, it has been proposed that a new hypothetical elementary boson, a dark-matter candidate, can violate an SM prediction: the linearity of measured isotope shifts (ISs) mapped onto graphs called King plots [1, 2, 3]. The prediction can be tested purely experimentally. If a violation is observed, however, possible new-physics contributions have to be distinguished from higher-order SM corrections originating from nuclear physics. &#13;
&#13;
This thesis reports IS spectroscopy experiments with laser-cooled, trapped, singly ionized ytterbium (Yb⁺) ions to search for new physics through the proposed method. The King-plot nonlinearities observed for the optical clock transitions in Yb⁺, with significance up to 240 standard deviations (&#120590;), and their implications for the new boson and for nuclear physics are presented. In particular, there is a dominant, common source of nonlinearity originating from nuclear charge distributions, and a small, second source of unknown origin with 4.3&#120590; significance. Pattern analysis of the nonlinearity in the King plots has been developed as a method for identifying or removing the sources of the observed nonlinearity. Atomic and nuclear structure calculations translate the measured nonlinearity patterns into bounds on new-boson interactions between subatomic particles as well as information on nuclear properties. The atomic structure calculations performed for Yb⁺ are described in detail. An outlook and future work are discussed, including measurements of more transitions and isotopes and improvements in experimental precision.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150708</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereoselective and Economical Methods for Chemical Synthesis of Essential Medicines</title>
<link>https://hdl.handle.net/1721.1/150706</link>
<description>Stereoselective and Economical Methods for Chemical Synthesis of Essential Medicines
Mear, Sarah Jane
Executive Summary: Innovation in synthetic chemistry enables pharmaceutical research and development by allowing the exploration of diverse chemical space during drug discovery, and by encouraging the development of economical and sustainable solutions in active pharmaceutical ingredient (API) manufacturing. For a given API, a single stereoisomer often displays preferable chemical and pharmacological properties including potency, stability, solubility, or toxicity. Methods for diastereo- or enantioselective synthesis are highly desirable, for exploration of biological properties of different stereoisomers, or for driving efficiency during drug manufacturing. In this thesis, synthesis of small molecule drugs emtricitabine, lamivudine, bedaquiline, and diazepam was investigated. Stereoselective strategies and cost-of-goods reduction more broadly are described. Lastly, a perspective on reproductive health safety in the chemical laboratory is presented.&#13;
&#13;
Chapter 1: Diazotization of S-Sulfonyl-Cysteines. We report the synthesis of enantiomerically enriched &#120573;-thio-&#120572;-hydroxy and &#120572;-chloro carboxylic acid and ester building blocks by diazotization of S-sulfonyl-cysteines. Within these pharmaceutically relevant building blocks, the thiosulfonate protecting group demonstrated resistance to oxidation and attenuation of sulfur’s nucleophilicity. The key transformation was optimized by a 2² factorial design of experiments, highlighting the unique reactivity of cysteine derivatives in comparison with aliphatic amino acids.&#13;
&#13;
Chapter 2: Synthesis of Emtricitabine and Lamivudine by Chlorotrimethylsilane–Sodium Iodide-Promoted Vorbrüggen Glycosylation. By simply adding water and sodium iodide (NaI) to chlorotrimethylsilane (TMSCl), promotion of a Vorbrüggen glycosylation en route to the essential HIV drugs emtricitabine (FTC) and lamivudine (3TC) is achieved. TMSCl–NaI in wet solvent (0.1 M water) activates a 1,3-oxathiolanyl acetate donor for N-glycosylation of silylated cytosine derivatives, leading to cis oxathiolane products with up to 95% yield and &gt;20:1 d.r. This telescoped sequence is followed by recrystallization and borohydride reduction, resulting in rapid synthesis of (±)-FTC/3TC from a tartrate diester.&#13;
&#13;
Chapter 3: Diastereoselectivity is in the Details: Minor Changes Yield Major Improvements to the Synthesis of Bedaquiline. Bedaquiline is a crucial drug in the global fight against tuberculosis, yet its high price places it out of reach for many patients. Herein, we describe improvements to the key industrial lithiation-addition sequence that enable a higher yielding and therefore more economical synthesis of bedaquiline. A focus on reproducibility and mechanistic understanding led to optimized conditions that double the previously reported yields of racemic bedaquiline simply by changing the lithium amide base and including a salt additive. We anticipate facile implementation of these improvements on manufacturing scale that will increase throughput of this essential medication.&#13;
&#13;
Chapter 4: Synthesis of a Key Precursor to Benzodiazepines by Copper Hydride Reduction of 2,1-benzo[c]isoxazole. Benzodiazepines are used broadly for the treatment of anxiety disorders and for general anaesthesia. Herein we describe a new method for the synthesis of the diazepam precursor 2-amino-5-chlorobenzophenone by N,O-reduction of 5-chloro-3-phenylbenzo[c]isoxazole using copper hydride. The desired compound is prepared in &gt;80% isolated yield by optimizing reaction parameters to prevent overreduction of the product. We outline future directions, including continuous flow processing and purification by recrystallization.&#13;
&#13;
Chapter 5: A Call for Increased Focus on Reproductive Health within Lab Safety Culture. The approach to reproductive health and safety in academic laboratories requires increased focus and a shift in paradigm. Our analysis of the current guidance from more than 100 academic institutions’ Chemical Hygiene Plans (CHPs) indicates that the burden to implement laboratory reproductive health and safety practices is often placed on those already pregnant or planning conception. We also found inconsistencies in the classification of potential reproductive toxins by resources generally considered to be authoritative, adding further confusion. In the interest of human health and safe laboratory practice, we suggest straightforward changes that institutions and individual laboratories can make to address these present deficiencies: Provide consistent and clear information to laboratory researchers about reproductive health and normalize the discussion of reproductive health among all researchers. Doing so will promote safer and more inclusive laboratory environments.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150706</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensitivity and Memory in Physics and Biology</title>
<link>https://hdl.handle.net/1721.1/150705</link>
<description>Sensitivity and Memory in Physics and Biology
Owen, Jeremy A.
Living cells exhibit striking abilities—for example, their astonishing sensitivity to small perturbations, as well as their remarkable memory of transient events and stimuli. Since living things are out-of-equilibrium, the quest to understand how these abilities arise raises very basic problems in nonequilibrium physics. In Part I, I focus on dissipation and its role as a constraint on nonequilibrium behavior.&#13;
&#13;
In Part II, I present a set of simple, universal identities and inequalities constraining the sensitivity properties of nonequilibrium systems. For a large class of perturbations, I find an equilibrium-like expression for sensitivity, whereas for other perturbations, I am able to bound the response in terms of measurable thermodynamic quantities. Applied to biochemical networks, these results extend and unify a patchwork of prior biophysics results on the energetic costs of sharp biochemical switches, accurate sensors, and molecular discrimination—revealing their common origin in the perturbation theory of Markov chains.&#13;
&#13;
In Part III, I turn to the question of how cells remember. In eukaryotes such as ourselves, memory of cell type—nerve, muscle, blood, and so on—is maintained in complex cellular networks whose kinetic details we know incompletely, and which perhaps even involve chemical dynamics coupled to the polymer dynamics of chromatin. I explore a simple biophysical model inspired by a class of these systems—involving the spreading of covalent modifications on chromatin—with the aim of uncovering qualitative features that can imbue them with the capacity for memory. We find that limitation of the modifying enzymes relative to their substrates dramatically stabilizes memory.&#13;
&#13;
Together, these findings highlight the role of size and architecture in constraining functional behaviors, and begin to suggest the possible form of a future precise, but qualitative, theory relating structure to the functional capacities of nonequilibrium systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150705</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Squeezed Vacuum Injection in Advanced LIGO: Enhancing Gravitational-Wave Detection Using Quantum States of Light</title>
<link>https://hdl.handle.net/1721.1/150704</link>
<description>Squeezed Vacuum Injection in Advanced LIGO: Enhancing Gravitational-Wave Detection Using Quantum States of Light
Tse, Maggie
This Thesis describes the first use of squeezed vacuum states in the direct measurement of gravitational waves with the Advanced LIGO detectors. During the third observing run, O3, from April 1, 2019 to March 27, 2020, squeezing improved the sensitivity of the LIGO interferometers to gravitational-wave signals above 50 Hz by up to 3 dB, increasing the expected detection rate by 40–50%. This achievement is the culmination of decades of research to implement squeezed vacuum states in gravitational-wave detectors. This Thesis focuses on the squeezing performance of the LIGO Livingston L1 detector and the commissioning challenges that had to be overcome to make squeezing an integral part of the detector.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150704</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Closer Look at Classical Measurement, an Algorithm for Deliberation in Rodents, and a Conjecture on Intertemporal Choice</title>
<link>https://hdl.handle.net/1721.1/150703</link>
<description>A Closer Look at Classical Measurement, an Algorithm for Deliberation in Rodents, and a Conjecture on Intertemporal Choice
Theurel, David Francisco
In this three-part thesis, Part I is an examination of the measurement process in classical Hamiltonian mechanics. This part is concerned with the tradeoff that exists, when measuring any observable of a system, between the disturbance inflicted upon the system and the information that can be extracted. The main result takes the form of a Heisenberg-like precision-disturbance relation: measuring an observable leaves all compatible observables undisturbed but inevitably disturbs all incompatible observables. The magnitude of the disturbance (the analogue of ℏ) is found to be proportional, in a sense that is made precise, to one’s initial uncertainty in the ready-state of the apparatus—a quantity that relates to the temperature of the apparatus.&#13;
&#13;
Part II of this thesis develops a model of the computations taking place in the deliberative decision-making system of rodents, during wakefulness and sleep, with a focus on the role of the hippocampus (HPC). In this model, the medial prefrontal cortex performs high-level planning and then tasks HPC with fleshing out the details of the plan as needed. We describe this planning task of HPC as an optimal control problem, which allows us to draw insights from the powerful mathematics of optimal control theory. The model makes novel testable predictions, provides insights into memory consolidation during sleep, and offers a paradigm capable of accommodating a wide range of observed phenomena, such as the theta rhythm, the slow oscillation, spindle oscillations, sharp wave-ripples, θ-sequences, forward and reverse SWR-sequences, the formation and strengthening of episodic memories, and the need for two modes of operation—online and offline.&#13;
&#13;
The two parts described above are the main content of this thesis. Part I falls within the purview of classical theoretical physics, while Part II falls in that of computational neuroscience. The two may seem unrelated; however, while each part is self-contained, I see the two as connected. Part III of this thesis is my attempt to provide an outline of a bigger picture, which sees the foregoing as lines of inquiry towards the same far-reaching conjecture—one which has had a strong pull on my imagination during my PhD, and which I hope to be able to address in the future. This conjecture is that the probability calculus of quantum mechanics holds a kind of normative status for a class of decision problems involving intertemporal choice under uncertainty—a class of problems of great importance to artificial intelligence, brain sciences, economics, and, I argue, to physics too.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150703</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of X-ray Instrumentation for Dark Matter Searches with Cosmic-ray Antiparticles</title>
<link>https://hdl.handle.net/1721.1/150702</link>
<description>Applications of X-ray Instrumentation for Dark Matter Searches with Cosmic-ray Antiparticles
Rogers, Field Rose
Approximately 85% of the mass in the Universe is composed of dark matter. The scientific consensus is that this dark matter consists of some unidentified fundamental particle, but despite compelling evidence of abundant dark matter and precise observation of its gravitational effects, the nature of this material remains a mystery. The effort to reveal the fundamental properties of dark matter is a central and unifying theme of modern particle physics and astrophysics. Indirect dark matter detection centers on cosmic-ray signatures of possible dark matter annihilation or decay to Standard Model particles in the Galaxy. Relative to terrestrial experiments, such indirect searches benefit from the large size of the Galaxy and can probe broad classes of dark matter models, including those that, due to low cross sections, evade both production at colliders and direct detection. The challenge for indirect dark matter detection lies in disentangling possible dark matter signatures from the large and often uncertain cosmic-ray fluxes arising from other astrophysical sources. At the same time, cosmic-ray particles, which are themselves agents of Galactic evolution, provide unparalleled probes of the Galactic environment and dynamics. This dissertation describes two complementary approaches to the interconnected worlds of indirect dark matter detection and cosmic-ray physics: first, searching for rare cosmic-ray species local to Earth, and second, using astrophysical techniques to observe the effects of cosmic-ray particles in remote regions of the Galaxy.&#13;
&#13;
The General Antiparticle Spectrometer (GAPS) is an upcoming balloon mission to search for signatures of dark matter annihilation or decay in low-energy (&lt;0.25 GeV/&#119899;) cosmic-ray antinucleus fluxes. The goal of GAPS is to deliver 1) a precision cosmic antiproton spectrum in an unexplored low-energy region; 2) a first detection of cosmic antideuterons, a signature of new physics essentially free of astrophysical background; and 3) leading sensitivity to cosmic antihelium-3. To identify rare antinuclei out of the trillions of particles expected in flight, GAPS pioneers a novel exotic-atom-based particle identification technique, which relies on &gt;10 m² of lithium-drifted silicon (Si(Li)) detectors to capture an incoming antinucleus into an exotic atom and measure the resulting X-ray and nuclear annihilation products. This dissertation details the development, noise performance, and tracking capabilities of the large-area, high-temperature Si(Li) detectors developed for the GAPS mission. Their performance is precisely characterized using a semiconductor noise model and has been shown to be stable over time. In addition, the GAPS sensitivity to cosmic-ray antiprotons is demonstrated in this work using a full instrument simulation, event reconstruction, and models of solar and atmospheric effects. With its large geometric acceptance, GAPS will detect ∼500 cosmic antiprotons per flight, producing a precision spectrum extending to lower energies than any previous measurement. This measurement will be sensitive to models of dark matter, evaporating primordial black holes, and cosmic-ray propagation. It will also validate the exotic-atom particle identification technique prior to the other GAPS analyses.&#13;
&#13;
Beyond local detection in high-altitude particle detectors, cosmic-ray populations can be probed remotely via the electromagnetic radiation they produce as they propagate through the Galaxy. Recent evidence points to enhanced particle populations of uncertain origin in the Galactic Center region. These unexplained particles challenge the conventional models of cosmic-ray propagation used to predict the local fluxes critical for dark matter detection. Using X-ray observations of the giant molecular cloud Sagittarius B2, new upper limits are set on low-energy cosmic-ray populations near the Galactic Center. The limits are comparable to predictions from hydrogen ionization measurements, supporting the observation of elevated Galactic Center cosmic-ray populations.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150702</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dipolar Shielding and Sub-wavelength Bilayers in a Quantum Gas of Dysprosium</title>
<link>https://hdl.handle.net/1721.1/150701</link>
<description>Dipolar Shielding and Sub-wavelength Bilayers in a Quantum Gas of Dysprosium
Cantara, Michael Alan
In our paper Can the Dipolar Interaction Suppress Dipolar Relaxation?, we used magnetic dysprosium atoms and an optical lattice to engineer the dipolar suppression of dipolar relaxation in an ultracold gas prepared in an excited Zeeman level. The atoms were confined to ultra-thin layers, with their magnetic moments aligned perpendicular to the trap such that the dipolar interaction was purely repulsive. In this configuration we observed an order-of-magnitude extension of the lifetime of higher-lying states, opening up new possibilities for quantum simulation with multiple spin species of highly magnetic atoms. Theoretical efforts applying Fermi’s golden rule numerically corroborated the observed suppression factors, including the fascinating dependence of dipolar relaxation on both the external magnetic field and the optical trap confinement.&#13;
&#13;
In our paper Atomic physics on a 50 nm scale: Realization of a bilayer system of dipolar atoms, we utilize dysprosium’s disparate Clebsch-Gordan coefficients to produce two independent optical lattices capable of tuning the interlayer separation down to 50 nm. The head-to-head orientation of the layers comprising the bilayer provides a new avenue of research for dipolar physics. Since the dipole-dipole interaction scales as 1/&#119903;³, the reduction in interlayer separation leads to an approximately 500× enhancement of the dipole-dipole interaction. With our new subwavelength tool we explored dipolar coupling across the layers by observing both the transfer of thermal excitation and the transfer of a dipole oscillation in the harmonic trap from one layer to another. The Born approximation, applied to a dipolar interaction potential, yields excellent agreement with the observed interlayer thermalization rate. The observed interlayer transfer of a dipole oscillation in the harmonic trap results in an in-phase oscillation rather than the mean-field-predicted out-of-phase oscillation; this can be partly ascribed to the friction force observed in the interlayer thermalization experiment, but may benefit from further exploration (e.g., of correlated interlayer density fluctuations). The super-resolution provided by our bilayer is remarkably flexible, limited only by the width of the layers, such that the interlayer separation can be tuned arbitrarily small if sufficient localization of the layers is achieved. Such a tool will hopefully prove highly useful for the exploration of dipolar physics on a previously unattainable length scale.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150701</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic and Statistical Uncertainties in the Characterization of Gravitational-Wave Sources</title>
<link>https://hdl.handle.net/1721.1/150700</link>
<description>Systematic and Statistical Uncertainties in the Characterization of Gravitational-Wave Sources
Huang, Yiwen
With the increasing number of gravitational-wave (GW) detections made by LIGO-Virgo-KAGRA during the first three observing runs, the field of GW astrophysics has a growing need for prompt and precise parameter estimation (PE) of GW sources. To better understand the limitations of current PE practices and to develop a more robust approach to analyzing future GW sources, this thesis explores the impact of various aspects of data analysis, including priors, waveform models, noise characterization, and instrumental calibration errors, on the final PE results. This thesis demonstrates that in the case of marginal signals, the choice of priors can greatly impact the PE results and the subsequent astrophysical interpretation, especially when population-informed priors are not yet available. As detector sensitivity improves, two other sources of systematic error become increasingly relevant: waveform approximants and instrumental calibration errors. In the second half of the thesis, we conclude that current waveform approximants for neutron star-black hole mergers are unlikely to introduce systematic errors comparable to the statistical uncertainties for sources detectable at current and near-future detector sensitivity. As for calibration errors, we show that they will not impede the standard siren measurement of the Hubble constant in the coming decades. This thesis examines the impact of various data analysis choices on the final results with a breadth of PE runs unparalleled in the previous literature, and can continue to provide valuable guidance for PE analyses with future generations of GW detectors.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150700</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent behaviors in microbial communities</title>
<link>https://hdl.handle.net/1721.1/150699</link>
<description>Emergent behaviors in microbial communities
Lee, Hyunseok
The physics of systems around us is often emergent, shaped not only by the fundamental constituents but also by their interactions, structures, and symmetries at larger scales. Microbial communities, which play indispensable roles in nature, are complex active systems that naturally push us to the unexplored frontier of emergent phenomena. During my PhD, I studied how some behaviors of microbial communities can be simple at the emergent level despite the underlying complexity at the microscopic level. First, I demonstrated that competition for resources may lead to the experimentally observed simplicity in community assembly. Even without microscopic information on the traits of microbes, the assembly of trios and larger communities is often predictable from collections of pairwise competitions. Second, I demonstrated that slow mutants can take over expanding fronts and that the resulting large-scale spatial pattern can be predicted without microscopic information. Overall, my work illustrates that, beyond qualitative explanations, we can make precise predictions for the behaviors of microbial communities without any information about the microscopic details of the systems. This lens of emergent behavior allows us to discover simple descriptions of microbial communities at large scales, unhindered by their complexities at small scales.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150699</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating Apollo: Flight Simulation Technology 1945-1975</title>
<link>https://hdl.handle.net/1721.1/150698</link>
<description>Simulating Apollo: Flight Simulation Technology 1945-1975
Tylko, John
This dissertation examines how flight simulation technology rapidly evolved in the post-World War II period, enabling new regimes of aircraft and missile flight, and eventually becoming “the very heart and soul of the NASA system” during the Apollo program. At the intersection of automatic control systems, stability and control, and analog computing, aeronautical engineers developed simulators that functioned as “a fictitious airplane in the electrical realm,” permitting real-time solution of the dynamic equations of motion governing flight vehicles. The aeronautical engineering profession transitioned from a focus on wind tunnels to the use of computer simulation to model flight vehicle performance for both aircraft and spacecraft, enabling the achievement of “faster, farther, and higher” flight domains in the Cold War.&#13;
&#13;
Flight simulation became a foundational capability permitting engineers to explore design options, test algorithms, and exhaustively validate software in a purely virtual world, or in a hybrid virtual and physical world, well before the airplanes and spacecraft they were designing first experienced flight. This dissertation focuses on the creators of the new technology of simulation – the engineers, the test pilots, the astronauts, the flight controllers – who utilized simulators as prediction machines to design flight vehicles and anticipate how they would perform increasingly complex missions. This dissertation explores the influence of institutions, engineering cultures, and patrons in shaping the new technology of flight simulation. The Apollo mission simulations created an artificial reality of such high fidelity that it became the lens through which astronauts and flight controllers would experience the actual lunar missions. The detailed choreography of the Apollo missions, instantiated in software, procedures, checklists, flight plans, and mission rules, was refined through extensive simulations.&#13;
&#13;
The Apollo simulations precisely orchestrated how future spaceflights would be conducted, creating an artificial reality of a future historical event. This dissertation explores how the presence of the human operator ultimately shaped the question of what constitutes realism and simulation, and who gets to decide. As NASA prepares to return to the moon, the astronauts who will fly Artemis will face many of the same challenges that the Apollo astronauts faced in determining what constitutes realism in simulating their lunar landing. However, the presence of new actors, driven by goals related to privatization, commercialization and proprietary technology could change this outcome in a society that is still grappling with boundaries between real and virtual flight.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150698</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Studies of Magnetic Field Generation and Saturation Mechanisms in Laser-Driven Plasmas</title>
<link>https://hdl.handle.net/1721.1/150697</link>
<description>Experimental Studies of Magnetic Field Generation and Saturation Mechanisms in Laser-Driven Plasmas
Sutcliffe, Graeme D.
A new tri-particle mono-energetic backlighter based on laser-driven implosions of DT³He gas-filled capsules has been implemented at the OMEGA laser. This platform, an extension of the original D³He backlighter platform, generates 9.5 MeV deuterons from the T³He reaction in addition to 14.7 and 3.0 MeV protons from the deuterium and helium-3 reactants. The monoenergetic 14.7 and 3.0 MeV protons have been used with success at OMEGA and the NIF for both radiography and stopping-power studies. There are several advantages of having a third particle to diagnose plasma conditions: an extra time-of-flight-separated radiograph and an improved ability to discern between electric and magnetic fields. This new backlighter is well-suited for NIF experiments, where large fields and plasma densities often preclude useful 3.0 MeV proton data. The advantages are demonstrated with radiographs of OMEGA plasmas with magnetic and electric fields, and in hohlraum geometries where 3.0 MeV proton data is often inadequate for field reconstruction.&#13;
&#13;
Two magnetic field generation phenomena, the Biermann battery and the electron Weibel instability, are studied in the context of laser-driven planar geometries. First, an experiment on the dynamics and scaling of the saturation of spontaneously generated magnetic fields in laser-produced, high-thermal-&#120573; HED plasmas is reported. The spatially resolved magnetic fields are numerically reconstructed from proton radiography data, leading to a quantitative physics picture of field saturation with a scaling of B ∼ 1/&#119871;ᴛ for a convectively dominated plasma, a regime where the temperature gradient scale length modestly exceeds the ion skin depth (&#119871;ᴛ/&#119889;ᵢ &gt; 1). Second, experimental observations of electron-scale structures in an expanding high-energy-density (HED) plasma generated with a modest-intensity (∼2×10¹⁴ W/cm², ∼1 ns) laser are presented. The observed structures have wavelengths (∼150-220 μm) and growth rates (∼0.4-1.0 ns⁻¹) consistent with an electron-driven Weibel instability where the anisotropy in the electron distribution is small, &#119860; ∼ 0.002. This instability is found to be a better match to the observed phenomena than other typical field-generation mechanisms found in HED plasmas, including counter-streaming ion Weibel and magnetothermal instabilities. These observations experimentally demonstrate for the first time that the electron Weibel instability must be considered alongside other magnetic field generation and amplification mechanisms in expanding ablation plasmas, which are ubiquitous in HED research. They also provide physics insight into the generation of magnetic fields in large-scale astrophysical plasmas. Additionally, inspection of the magnetic power spectrum shows a possible scaling match to analytic gyrokinetic predictions, |&#119861;&#119896;|² ∝ &#119896;⁻¹⁶⁄³, at scales below the electron Larmor radius. Third, experiments pushing towards the resistive dissipation regime of magnetic field saturation are presented and previewed with FLASH MHD simulations.&#13;
&#13;
Connections of self-generated magnetic fields to inertial confinement fusion are made. Experiments using small-scale gas-filled hohlraums on OMEGA to study both the hydrodynamic stability of the gas-ablation interface and fields in the hohlraum plasma are showcased.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150697</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Understanding of Lithium-Oxygen Batteries Using Atomistic Simulations</title>
<link>https://hdl.handle.net/1721.1/150696</link>
<description>Improving Understanding of Lithium-Oxygen Batteries Using Atomistic Simulations
Crabb, Emily
Developing novel battery technologies is required to electrify hard-to-decarbonize industries. One such novel technology, lithium–oxygen batteries, has great potential for electric aviation but is hampered by our lack of detailed knowledge of the processes occurring inside the batteries. Thus, the projects in this thesis work to further our fundamental understanding of these battery systems using atomistic simulations.&#13;
&#13;
In the first project, I explored one simulation methodology, ab initio molecular dynamics, commonly used to model battery systems. I examined the coordination environment of lithium ions in different solvents and compared the computational results to experimental data. I found that the computed properties depended heavily on the starting configuration of the system, which illustrates the importance of both the equilibration method and sufficient independent sampling for extracting experimentally relevant quantities from ab initio molecular dynamics simulations. Such details are often poorly documented or left unjustified in the literature, so this work indicates a need for increased attention to them to ensure ab initio molecular dynamics studies are reproducible, physically accurate, and thus useful.&#13;
&#13;
In the second project, I utilized classical molecular dynamics to explore a wider range of properties for systems of lithium salts in twelve different solvents. This work combined a dedication to accuracy, as I compared the computational results to experimental data, with innovative ways of measuring ionic transport. I examined how solvent metrics that combine relatively easy-to-measure properties, such as solvent donor number and viscosity, correlate with the atomistic lithium transport mechanisms that are quite difficult to measure experimentally but readily accessible computationally. The goal is to eventually enable the prediction of these transport mechanisms, and thus a deeper atomistic understanding of the system, from a few simple experiments. To my knowledge, this is the first time such solvent metrics have been examined in relation to ionic transport mechanisms in small-molecule liquid solvent systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150696</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum transport in strongly interacting, ultracold Fermi gases in box potentials</title>
<link>https://hdl.handle.net/1721.1/150695</link>
<description>Quantum transport in strongly interacting, ultracold Fermi gases in box potentials
Patel, Parth
Transport of strongly interacting fermions is crucial for systems as varied as high-&#119879;&#119888; superconductors, twisted bilayer graphene, nuclear fission, and neutron stars. In this thesis, I will describe the experiments we performed to measure the transport properties of a strongly interacting atomic Fermi gas. This system features interactions as strong as quantum mechanics allows and one of the highest pairing strengths, with a superfluid transition temperature on the order of the Fermi temperature. Moreover, it is also scale-invariant, making its properties directly relevant for systems with many orders of magnitude higher densities. We trap these atoms in a uniform box potential made from repulsive laser light, the key experimental advancement that makes the transport experiments presented here possible. Here, we observe a very low, universal, Heisenberg-uncertainty-limited diffusion of both sound and heat by studying the propagation of sound waves and the conduction of heat in a uniform gas. Similar to a growing number of high-&#119879;&#119888; superconductors, we observe anomalous transport properties, such as the viscosity and thermal conductivity, that cannot be explained by Fermi-liquid theory. We show the temperature dependence of all non-zero transport properties, which constitutes a complete characterization of transport phenomena in the spin-balanced, strongly interacting Fermi gas. Our findings inform theories of fermion transport, with relevance for hydrodynamic flow of electrons, neutrons, and quarks.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150695</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resonant Bodies: Pauline Oliveros, David Tudor, and Music Mediated, 1950–1980</title>
<link>https://hdl.handle.net/1721.1/150694</link>
<description>Resonant Bodies: Pauline Oliveros, David Tudor, and Music Mediated, 1950–1980
Downey, Walker P.
In this project, I follow the intertwined trajectories of Pauline Oliveros (1932–2016) and David Tudor (1926–1996) to understand how the contemporary genre of “sound art” evolved out of experimental music in the postwar United States. Substantially expanding the scope of Oliveros and Tudor’s legacies, long understood in purely music-historical terms, I argue that their engagements with electronic media—including magnetic tape, do-it-yourself circuitry, and biomedical devices—dramatically shaped their approaches to composition, performance, and listening in the Sixties and Seventies, yielding spatialized and participatory relationships to sound that sat uncomfortably within music’s definitional limits. Through their technological experimentation, Oliveros and Tudor arrived at transformed understandings of what “liveness,” presence, and agency might mean in the context of electronics, and developed new models of embodied sonic experience that resonated with, and contributed to, postwar trends in installation and performance art. As I show, these models of practice carried Oliveros and Tudor into museum spaces circa 1980, exerting a noted influence on younger artists working with sound.&#13;
&#13;
Existing accounts of Oliveros and Tudor have tended to compartmentalize their respective engagements with electronics, relegating this exploration to historically specific arcs of their careers; I argue, to the contrary, that a concern for electronic media and their practical affordances influenced the entirety of these artists’ developmental arcs between 1950 and 1980, serving to shape their philosophies of perception, corporeality, and sonic materiality. I further reposition Oliveros and Tudor relative to one another by emphasizing the significance of their friendship, repeat collaborations, and circuit of mutual influence, which scholarship to date has generally ignored. I intervene into an active yet fraught body of literature around sound art by demonstrating that this ill-defined field of practice did not issue from a coherent and unified point of historical origin but was rather constructed in piecemeal fashion from a variety of practices and commitments. Sound art emerged as musicians like Oliveros and Tudor situated themselves in new venues and collaborative networks, as curators and theorists worked to claim and capture their practices, and, most importantly, as mediation restructured the musical work’s associated protocols (of composition, performance, notation, and live presentation), and rewrote its very ontology, such that it could render problematic the boundaries between disciplines, beg new critical vocabularies, and broker its entry into the museum.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150694</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Through Iron &amp; Ice: Searching for Sterile Neutrinos at the IceCube Neutrino Observatory</title>
<link>https://hdl.handle.net/1721.1/150693</link>
<description>Through Iron &amp; Ice: Searching for Sterile Neutrinos at the IceCube Neutrino Observatory
Diaz, Alejandro
Despite the rapid progression in our understanding of neutrinos over the last half century, much is left unknown about their properties. This leaves neutrinos as the most promising portal for Beyond Standard Model (BSM) physics, and neutrinos have already provided fruitful surprises.&#13;
&#13;
A number of neutrino experiments in the last three decades have observed anomalous oscillation signals consistent with a mass-squared splitting of Δ&#119898;² ∼ 1 eV², motivating the existence of and search for sterile neutrinos. On the other hand, other experiments have failed to see such a signal.&#13;
&#13;
In this thesis, we present two analyses. The first is an update to the sterile neutrino global fits with the inclusion of recent experimental data. We find that the 3+1 model provides a better fit to the global data set compared to the null hypothesis, with an improvement of Δ&#120594;² = 51 from the addition of only 3 degrees of freedom, corresponding to 6.6&#120590;. While a substantial improvement, we also find an irreconcilable tension between the data sets of 5.1&#120590;, calculated using the parameter goodness-of-fit test. This motivates the exploration of expanded models: a 3+2 model and a 3+1+Decay model. In the 3+2 model, we find negligible improvement to the fit, and an even worse tension of 5.5&#120590;. In the more exotic 3+1+Decay model, we find the tension reduced to 3.6&#120590;. While a substantial improvement compared to the 3+1 model with the introduction of only one additional parameter, the tension is still too large to assuage concerns.&#13;
&#13;
The second analysis presents the results of an expanded IceCube sterile neutrino search. A previous sterile neutrino search found no evidence for sterile neutrinos, with a p-value of 8%. Of the three sterile mixing angles, &#120579;₁₄, &#120579;₂₄, and &#120579;₃₄, only &#120579;₂₄ was fitted, as &#120579;₁₄ was negligible and &#120579;₃₄ = 0 was considered a conservative assumption. We present results of an analysis in which &#120579;₃₄ is included in the fitted model. Both a frequentist and a Bayesian analysis were conducted, with fits done in terms of the mass-squared splitting Δ&#119898;²₄₁ and the mixing matrix parameters |&#119880;&#120583;₄|² and |&#119880;&#120591;₄|². The frequentist analysis finds a best fit at Δ&#119898;²₄₁ = 5.0 eV², |&#119880;&#120583;₄|² = 0.04, and |&#119880;&#120591;₄|² = 0.006, with a p-value of 5.2% assuming Wilks’ theorem with 3 degrees of freedom. Pseudoexperiments indicate a smaller p-value of 2.7%. The Bayesian analysis finds a similar best fit point at Δ&#119898;²₄₁ = 5.0 eV², |&#119880;&#120583;₄|² = 0.02, and |&#119880;&#120591;₄|² = 0.006, with a Bayes factor indicating a “Very Strong” preference for this sterile hypothesis over the null hypothesis.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150693</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Controlled Collisions and Magnetic Trapping of Ultracold NaLi Molecules</title>
<link>https://hdl.handle.net/1721.1/150692</link>
<description>Quantum Controlled Collisions and Magnetic Trapping of Ultracold NaLi Molecules
Park, Juliana
²³Na⁶Li is a fermionic molecule with weak singlet-triplet mixing, making it a suitable system in which to study the triplet rovibrational ground state (&#119886;³Σ⁺, &#119907; = 0, &#119873; = 0). It is notable for both its non-zero electric (0.175 Debye) and magnetic (2&#120583;&#119861;) dipole moments and its small two-body scattering rate, as predicted by the universal model for cold collisions. Additionally, ²³Na⁶Li is the lightest bi-alkali molecule, and the theoretical simulation of its collisions is relatively feasible compared to heavier molecules, which makes it a promising benchmark system for theoretical quantum scattering calculations.&#13;
&#13;
This thesis describes three experiments and a numerical and theoretical study of ²³Na⁶Li molecules in the triplet ground state. The first two experiments concern molecular Feshbach resonances: in spin-polarized ²³Na⁶Li+²³Na collisions and in ²³Na⁶Li+²³Na⁶Li collisions. The first experiment focuses on the spectroscopic study of Feshbach resonances in the two possible spin-polarized ²³Na⁶Li+²³Na collision channels from near 0 to 1400 Gauss. This allows us to learn about the molecular interaction potential surface and intermediate collision complexes, to benchmark theory, and to control reactive collisions. The second experiment reports an unpredicted &#119901;-wave Feshbach resonance in ²³Na⁶Li+²³Na⁶Li collisions and its interpretation. The resonance occurs for molecules in the lower stretched hyperfine state near an open-channel degeneracy. The collisional loss rate is enhanced by more than two orders of magnitude from the &#119901;-wave universal value at the background to near the 2D unitarity limit.&#13;
&#13;
In addition to the search for magnetically tunable resonances, ²³Na⁶Li molecules in the triplet potential are suitable for magnetic trapping. The third experiment describes building an improved experimental setup that allows magnetic trapping of ²³Na⁶Li molecules and the study of various collisions in the magnetic trap via quantum state control of molecules and atoms. The molecular density is a factor of 10⁵ higher than that previously reported for magnetically trapped ultracold molecules, and the temperature is ≈ 1 &#120583;K. This condition enables observation of both atom-molecule and molecule-molecule collisions in the ultracold regime and sympathetic cooling of ²³Na⁶Li by evaporative cooling of ²³Na in the magnetic trap.&#13;
&#13;
Lastly, this thesis presents the numerical and theoretical approach to finding a window for an all-optical creation of molecules using Raman transitions from ²³Na and ⁶Li atoms to ²³Na⁶Li molecules. All-optical creation of molecules in which the magnetic association step near a Feshbach resonance is eliminated is expected to broaden the horizon of ultracold molecules to a larger pool and to eliminate the need for high magnetic fields.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150692</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Studies of Neutral Particle Effects on Edge Transport Barriers in Tokamaks Using the Lyman-alpha Measurement Apparatus</title>
<link>https://hdl.handle.net/1721.1/150691</link>
<description>Experimental Studies of Neutral Particle Effects on Edge Transport Barriers in Tokamaks Using the Lyman-alpha Measurement Apparatus
Rosenthal, Aaron Michael
Prediction capabilities remain limited for the tokamak edge density profile due to small spatial scales, complex plasma transport, and non-negligible effects of neutral particles. To better quantify the hydrogenic neutral particle source, a one-dimensional, absolutely calibrated pinhole camera system was installed on the DIII-D tokamak to measure edge Lyman-alpha (Ly-&#120572;) emission. Ly-&#120572; emission from hydrogenic isotopes can be used to infer hydrogenic neutral density and ionization rate profiles. To provide a high spatial resolution measurement in a compact footprint, the camera utilizes advanced engineering and manufacturing techniques including 3D printing, high-stability mirror mounts, improved filtering components, and a novel alignment procedure. Absolutely calibrated, spatially resolved Ly-&#120572; brightness measurements utilize a bright, isolated line with low parasitic surface reflections and enable quantitative comparison to the edge density profile to study radial particle transport.&#13;
&#13;
Ly-&#120572; measurements coupled to advances in inference techniques from the Aurora code, a 1.5-dimensional flux-surface-averaged particle transport and radiation forward model, allow calculation of diffusion and convection profiles in the steep density gradient region. Quantitative edge ionization source measurements and plasma profiles during H-mode steady state and dynamic events, such as the edge localized mode (ELM) cycle and edge gas puff modulation, are investigated. Experiments show quantitative evidence of a 1 m s⁻¹ inward particle pinch and a diffusion coefficient of 0.05 m² s⁻¹ in the steep gradient region of the density pedestal. Near the last-closed flux surface, there is evidence of saturated gradient behavior, which suggests limits to the 1.5-dimensional diffusive-convective model. Inboard and outboard ionization source measurements show a significant asymmetry, confirming strong poloidal variation of the ionization source. Main ion transport profiles and experimental neutral particle measurements allow quantitative evaluation of the processes forming the density pedestal. Furthermore, experimental main ion transport profiles offer the opportunity to constrain modern 2D edge codes, benchmarking their modeling of current devices and thereby improving their predictive capabilities for future burning plasma devices.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150691</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electronic Structure and Emergent Orders in Correlated Nickelates</title>
<link>https://hdl.handle.net/1721.1/150689</link>
<description>Electronic Structure and Emergent Orders in Correlated Nickelates
Li, Jiarui
Strongly correlated quantum materials encompass a class of materials with novel functionalities and exotic physical properties that escape conventional quantum mechanical description. Rare earth nickelate compounds have been established as archetypal correlated quantum materials with a rich diversity of ground-state properties that emerge from the interplay between the charge, spin, orbital, and lattice degrees of freedom. Despite decades of experimental scrutiny, their ground-state properties remain debated and elusive. This doctoral work presents a study of the ground-state electronic and magnetic structures of nickelate compounds using a combination of soft X-ray techniques based on scattering, spectroscopy, and imaging. The research efforts in this thesis unravel the physical properties of nickelates from three aspects. In the first part, we unveiled that nanoscale electronic phase separation phenomena in nickelates are closely tied to phase transition criticality, highlighting that scale-invariant inhomogeneity is an essential ingredient for the ground-state description. In the second part, we discovered a macroscopic chiral polarization of the non-collinear magnetic structure in rare earth nickelates, which indicates a possible ferroelectric polarization via magneto-electric coupling. In the third part, we seek to understand the ground-state electronic landscapes in nickelates via carrier doping. We identified the doping-dependent electronic properties of electron-doped RENiO₃₋ₓ and observed a sudden collapse of ordered magnetism, which may provide a new mechanism for solid-state magnetoionic switching and new applications in antiferromagnetic spintronics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150689</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Processes for Polymer Modification and Pharmaceutical Synthesis</title>
<link>https://hdl.handle.net/1721.1/150687</link>
<description>Robust Processes for Polymer Modification and Pharmaceutical Synthesis
Ahlqvist, Grace Putka
Chapter 1: Continuous Dimethyldioxirane Generation for Polymer Epoxidation&#13;
Post-polymerization modification of commodity polymers yields new applications for materials already produced industrially. Incorporation of small amounts of epoxides into unsaturated polymers such as polybutadiene expands their use for grafting and compatibilization applications, but controlled epoxidation of these polymers in a safe, scalable manner presents a challenge. Herein we describe the development of a reactor for the continuous flow generation and use of dimethyldioxirane and its application to the low-level epoxidation of unsaturated polymers. A continuous stirred tank reactor prevents reactor clogging by allowing solid precipitates to settle, enabling the pumping of a homogeneous solution of oxidant. Modification of relative concentrations, flow rates, and temperatures achieves variable epoxidation levels. This method has been demonstrated on gram scale.&#13;
&#13;
Chapter 2: Large-Scale Synthesis of Molnupiravir from Cytidine&#13;
Molnupiravir (Lagevrio®, MK-4482, EIDD-2801) is an orally bioavailable medication for COVID-19. We report the development of a supply-centered and chromatography-free synthesis of molnupiravir from cytidine, consisting of a selective enzymatic acylation followed by transamination to yield the final drug product. Both steps have been successfully performed on a decagram scale: the first step at 200 g and the second step at 80 g. Overall, molnupiravir has been obtained in a 41% isolated yield over two steps, compared to a maximum 17% isolated yield over five steps in the patented route.&#13;
&#13;
Chapter 3: Diastereoselectivity Is In the Details: Minor Changes Yield Major Improvements to the Synthesis of Bedaquiline&#13;
Bedaquiline fumarate (Sirturo®) is a crucial medicine in the global fight against tuberculosis, yet its high price places it out of reach for many patients. Herein, we describe improvements to the key industrial lithiation-addition sequence that enable a higher-yielding and therefore more economical synthesis of bedaquiline. Prioritization of mechanistic understanding and multi-lab reproducibility led to optimized reaction conditions that feature an unusual base-salt pairing and afford a doubling of the yield of racemic bedaquiline. We anticipate that implementation of these improvements on manufacturing scale will be facile, thereby substantially increasing the accessibility of this essential medication.&#13;
&#13;
Chapter 4: Progress Towards a Diazepam Precursor for Telescoped Flow Synthesis&#13;
Diazepam (Valium®) is an important medication for the treatment of various central nervous system symptoms and disorders. On-demand flow synthesis of this molecule presents many benefits, but previously reported telescoped syntheses use expensive, advanced starting materials. We demonstrate proof of concept for the synthesis of a crucial diazepam precursor from commodity chemicals. Our synthesis uses a directed metallation approach to selectively acylate a protected aniline in up to 39% yield in two minutes in continuous flow. We anticipate that further optimization of this reaction will enable a fully telescoped synthesis of diazepam from commodity chemicals.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150687</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent Times in Holographic Duality</title>
<link>https://hdl.handle.net/1721.1/150686</link>
<description>Emergent Times in Holographic Duality
Leutheusser, Samuel Aaron Wehlau
In holographic duality an eternal AdS black hole is described by two copies of the boundary CFT in the thermal field double state. In this thesis we provide explicit constructions in the boundary theory of infalling time evolutions which can take bulk observers behind the horizon. The constructions also help to illuminate the boundary emergence of the black hole horizons, the interiors, and the associated causal structure. A key element is the emergence, in the large N limit of the boundary theory, of a type III₁ von Neumann algebraic structure from the type I boundary operator algebra and the half-sided modular translation structure associated with it. A by-product is a concept called causal connectability, which is a criterion for any two quantum systems (which do not need to have a known gravity dual) to have an emergent sharp horizon structure.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150686</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric Squeezing of a Degenerate Fermi Gas</title>
<link>https://hdl.handle.net/1721.1/150685</link>
<description>Geometric Squeezing of a Degenerate Fermi Gas
Wilson, Cedric Chinua
Quantum simulation with ultracold gases provides a highly customizable platform for the study of many-body physics, shedding new light on important physical systems. Magnetic fields (or, equivalently in the case of a uniform, static field, rotation) are especially important parameters for many-body systems including nuclear matter, neutron stars, superfluid helium, and clean conductive samples exhibiting the quantum Hall effect. This thesis details the construction of a new experiment to study rapidly rotating quantum gases, and two experimental results from the new apparatus.&#13;
&#13;
The procedure of geometric squeezing utilizes a rotating, elliptical harmonic trap to realize the squeezing Hamiltonian for guiding center motion. We first outline the observation of a geometrically squeezed state of a rapidly-rotating Bose-Einstein condensate entering the lowest Landau level. We also measure its Hall response, analogous to the &#119864; × &#119861; drift of charged particles in crossed electric and magnetic fields.&#13;
&#13;
We then detail the realization of a geometrically squeezed state of a rapidly rotating, non-interacting atomic Fermi gas and the measurement of its Hall drift velocity. The Fermi gas shrinks down in one direction to a size limited by the width of the highest occupied Landau level. In the orthogonal direction it expands exponentially, at a local speed given by the local Hall drift velocity. We examine the physics away from the rapidly rotating regime and find that it is well described by the phase space evolution of the Fermi surface under the influence of Coriolis and centrifugal forces, in direct analogy to the cranking model for rotating nuclei.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150685</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ring-opening Metathesis Polymerization for the Creation of Responsive Colloids and Surfaces</title>
<link>https://hdl.handle.net/1721.1/150684</link>
<description>Ring-opening Metathesis Polymerization for the Creation of Responsive Colloids and Surfaces
He, Qilin
Ring-opening metathesis polymerization (ROMP) is a well-controlled living polymerization method and has been widely used to synthesize various polymer materials. This thesis investigates the use of ROMP in the creation of responsive materials, including photonic polymer colloids and surface-immobilized polymer brushes.&#13;
&#13;
In Chapter 1, an introduction is provided to the fundamental concepts relevant to this thesis, including an overview of ROMP, the applications of ROMP for synthesizing polymers with photonic crystal properties, photonic polymer particles, surface-tethered polymer brushes, and surface-initiated ROMP.&#13;
&#13;
In Chapter 2, photonic ellipsoidal particles are created from the self-assembly of dendronized bottlebrush block copolymers (den-BBCPs), which are synthesized by ROMP. The surface energy of these polymer particles is precisely controlled by the design of surfactants that have selective affinity to each block of den-BBCPs. These ellipsoidal particles can be further functionalized with magnetic nanoparticles, resulting in magnetically switchable structural color.&#13;
&#13;
In Chapter 3, Janus photonic particles are prepared to expand the functionality of the previous photonic ellipsoidal particles. Poly(4-vinylpyridine)-co-styrene is used as the second phase of the Janus particle, allowing for functionalization with acidic magnetic nanoparticles and antibodies. The antibody-functionalized particles can be used for the detection of Salmonella bacteria through a novel agglutination assay.&#13;
&#13;
In Chapter 4, a new strategy, termed grafting-to &amp; from, is developed for growing thick and stable polymer brushes through surface-initiated ROMP. This strategy combines the advantages of the traditional grafting-to and grafting-from methods and is used to grow responsive polymer brushes on a glass surface, creating a polymer coating that is responsive to various chemical warfare agents, including sarin and mustard gas.&#13;
&#13;
In Chapter 5, a sterically hindered cyclobutene is synthesized as a potential ROMP monomer and its polymerization reactivity is explored. This cyclobutene is further epoxidized to give a highly reactive cyclobutane epoxide, whose reactivity at high temperature is investigated.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150684</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Documents to Data: The Emergence of National Biometric Identification Systems in the 20th and 21st Centuries</title>
<link>https://hdl.handle.net/1721.1/150683</link>
<description>From Documents to Data: The Emergence of National Biometric Identification Systems in the 20th and 21st Centuries
Spektor, Michelle
In recent decades, states around the world have increasingly incorporated biometrics – measurements of fingerprints, faces, or other parts of the body – into ID cards, passports, databases, and other national identification systems for their own citizens. How did biometric identification transform from a technology primarily aimed at criminals, colonial subjects, and groups at society’s margins into a technique increasingly preferred by states for identifying and governing their whole citizenries? And how did this shape state-citizen relationships?&#13;
&#13;
To answer these questions, the dissertation considers two recent developments – the UK’s rejection and Israel’s acceptance of proposed national biometric systems in the early 2000s – and contextualizes them within the longer shared history of British and Israeli biometric practices since 1904. By showing how the technological designs, forms of state governance, and notions of individual and national identity inscribed in prior British and Israeli biometric infrastructures influenced subsequent ones, the dissertation argues that national biometric systems are not just technological projects of citizen data collection. They are also political projects of constituting the nation. They link measurements of the body with notions of national belonging – a link that underwrites the exercises of state power that these systems afford, the politics of inclusion and exclusion they entail, and their legitimacy (or illegitimacy) as tools of governance.&#13;
&#13;
Based on archival research, oral history, and ethnographic interviews in the UK and Israel, the dissertation’s five chapters examine how the design and implementation of six different British and Israeli biometric infrastructures biometrized identity and engendered different forms of biometric statecraft between 1904 and 2017. The purposes of these prior systems ranged from eugenics research and measuring population health in the UK in 1904, to colonial policing in Mandatory Palestine between 1918 and 1948, Jewish religious protocols for identifying the dead in the Israeli military since 1974, and surveilling workers from Occupied Palestinian Territories since the 1990s. The chapters also examine the ways social movements influenced the technological design and policy frameworks of proposed national systems in the UK and Israel in the early 2000s. Each of these biometric systems was informed by, and brought into being, different forms of statecraft, state-citizen relationships, and politics of national inclusion and exclusion.&#13;
&#13;
Prior British and Israeli biometric systems technologically and culturally influenced, but did not wholly determine, the UK’s rejection and Israel’s acceptance of national biometric systems in the early 2000s. While new biometric developments in Israel connected biometric identity with emerging forms of Israeli national identity, biometrics in the UK retained their associations with eugenics, colonialism, and national exclusion. British and Israeli technological histories thus serve as sources of insight into current policy questions about biometric systems. These histories show that biometrics have never been neutral, and how past systems’ politics of inclusion and exclusion might endure in future ones. State biometric systems – even as they move through time, place, and political context, and take on new purposes, meanings, and technological forms – raise questions about who is to be included in the system, and therefore in the nation as a whole.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150683</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven study of major disruption prediction and plasma instabilities across multiple tokamaks</title>
<link>https://hdl.handle.net/1721.1/150682</link>
<description>Data-driven study of major disruption prediction and plasma instabilities across multiple tokamaks
Zhu, Jinxiang
The use of nuclear fusion energy via magnetic-confinement tokamaks is one of a few encouraging paths toward future sustainable energy. Along the way, scientists need to learn to avoid plasma disruptions: these sudden and unexpected plasma terminations still represent one of the key challenges for tokamak devices. Forecasting plasma instabilities and disruptions using first-principles models has been demonstrated to be extremely difficult, due to the complexity of the problem and the high non-linearity of the system. To date, the prediction of disruptions and plasma instabilities has been studied through two main approaches: data-driven versus physics-driven (or model-based). On the one hand, recent statistical and machine learning (ML) approaches based on experimental data have shown attractive results for disruption prediction, even in real-time environments. Different tokamak devices have different operational spaces, spatiotemporal scales for physics events, and plasma diagnostics. Therefore, most of these data-driven approaches were developed and optimized specifically for one device and did not show promising cross-device predictive ability. In addition, the complexity of these data-driven models limits their physics interpretability. Recent Deep-Learning (DL) based disruption prediction studies demonstrate the potential for acquiring a general representation of experimental data that can be used in cross-machine applications. On the other hand, model-based studies seek to identify event chains that can lead to disruptions through early event detection, which can help operators to avoid plasma instabilities and disruptions. However, the extrapolation ability of physics-based models to new devices, especially to new physics regimes, is still unclear.&#13;
&#13;
This thesis demonstrates the application of data-driven methods to plasma instability and disruption prediction via four major contributions. First, through explorative data analysis of thousands of shots on the C-Mod, DIII-D and EAST tokamaks, the advantage of a sequence-based disruption prediction model was shown. Based on this finding, a new Hybrid Deep-Learning (HDL) general disruption predictor was developed using the C-Mod, DIII-D and EAST databases; it achieves state-of-the-art performance on all three machines with only limited hyperparameter tuning. Dedicated cross-machine disruption prediction studies using this HDL model demonstrated that significantly boosted accuracy on the target machine was achieved by training on 20 disruptive shots and thousands of non-disruptive shots from the target machine, combined with hundreds of disruptive shots from other devices. In addition, by comparing the predictive performance of each individual numerical experiment, the disruptive shots from multiple devices were found to contain device-independent knowledge that can inform predictions for disruptions occurring on a new device, while non-disruptive shots were found to be machine-specific. Second, cross-regime disruption prediction on multiple tokamaks using the HDL model demonstrated that data-driven disruption predictors trained on abundant Low Performance (LP) discharges work poorly in the High Performance (HP) regime of the same tokamak, a consequence of the distinct distributions of the tightly correlated disruption-related signals in the two regimes. Moreover, the cross-machine experiments suggested that matching operational parameters among tokamaks strongly improves cross-machine accuracy. 
Given these conclusions, a scenario-adaptive strategy that works for all data-driven models was proposed for next-generation tokamaks, such as ITER and SPARC, highlighting the importance of developing baseline-scenario discharges of future tokamaks on existing machines to collect more relevant disruptive data. Third, the HDL model was upgraded to an integrated ML model that can predict major disruptions as well as multiple unstable events in tokamak plasmas, which facilitates the physics interpretation of output from black-box data-driven models and enables disruption avoidance by responding to early unstable plasma events. Enhanced cross-machine ability and improved warning times were also observed using the integrated ML model. Finally, among the different plasma unstable events, the &#119899; = 1 tearing mode (TM) is considered one of the most important disruption precursors, and the ability to predict it is strongly desirable for ITER and SPARC. In the final part of this thesis, an empirical boundary for the &#119899; = 1 tearing mode (TM) is developed via data-driven methods and verified on thousands of DIII-D discharges. The fitted boundary is a linear function of plasma equilibrium parameters such as collisionality, poloidal beta, and the MHD risk factor (a combination of the normalized electron temperature profile width, q95, and elongation). The boundary yields a value related to the probability of TM onset and achieves 88% shot-by-shot accuracy in offline analysis of DIII-D data. Preliminary cross-machine analysis of TM onset prediction shows potential applicability of the empirical boundary to C-Mod and EAST data as well, but the relative importance of the individual parameters differs between devices. 
This suggests the existence of different trigger mechanisms for the TMs, implying that the boundary could be generalized using data from different tokamaks representing different trigger mechanisms to improve its extrapolability. Finally, this new proximity metric for the &#119899; = 1 TM onset has been incorporated in real time into the DIII-D plasma control system (PCS), and results from real-time experiments will be discussed.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150682</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defects and Ionic Transport at Reduced Temperatures -- Electric Field and Optical Control on Nanoscopic Spatial Scales</title>
<link>https://hdl.handle.net/1721.1/150681</link>
<description>Defects and Ionic Transport at Reduced Temperatures -- Electric Field and Optical Control on Nanoscopic Spatial Scales
Defferriere, Thomas
The ability to study and control oxygen ion transport in metal oxide systems at reduced temperatures (&lt;400°C) is key to the design of new generations of energy conversion/storage, memory, and functional device applications, but has remained a significant challenge in the field of Solid-State Ionics. In this thesis, electric-field- and optical-based tools were developed that are capable of manipulating and quantifying the ionic transport properties of model oxygen-ion-conducting metal oxide systems at these temperatures.&#13;
&#13;
First, we show that ionic mobilities near room temperature in a model mixed-conducting thin film of Pr0.1Ce0.9O2-δ are sensitive to frozen-in defect concentrations. This was achieved by exposing a thin film to different thermal histories and quantifying the quenched-in defect concentrations by measuring the film’s optical absorption, related to the oxidation state of the Pr ion. A dynamic current-voltage technique was applied to isolate the oxygen ion mobility in the quenched-in state. A 13-fold increase in ionic mobility with an increase in oxygen nonstoichiometry from 0.032 ± 0.001 to 0.042 ± 0.001 was observed at 60°C. We discuss how non-obvious entropic effects can lead to these ionic mobility–defect concentration trends, contrary to expectations, and how being able to control and quantify these trends can ultimately aid in elucidating the origins of variations seen in nano-ionic devices. &#13;
&#13;
Next, we demonstrate how applied electric fields can be used to reversibly redistribute oxygen ions between two mixed-conducting metal oxide thin films (Pr0.1Ce0.9O2-x/Ce0.15La1.85CuO4+y). Field-induced changes in resistance in each layer are correlated with respective changes in defect concentrations and defect chemical properties. We demonstrate for the first time the importance of defect chemical models in interpreting the origin of the resistance changes induced by ion exchange under field in nano-ionic devices. In turn, information on the electronic transport properties of the respective layers and their defect formation energetics is obtained from these models. These findings highlight the unique opportunities such bilayer studies offer in investigating the defect chemistry of metal oxide films near ambient temperatures. Dynamic current-voltage measurements applied to these bilayer device structures were successful in isolating the oxygen ion mobility within the dominant layer, which was, in turn, correlated with the variations in defect concentration of the layer. The results, when compared to those of thermal annealing studies, showed good agreement in trends, thus confirming the viability of such experimental tools for controlling and extracting ionic transport properties in a bilayer device. Characteristic evolutions of the dynamic current-voltage curves at lower sweep rates were also identified in these bilayer systems and were assigned to the solid ion exchange process. This broadens the capabilities of the technique to measuring the rate-controlling ion transfer kinetics occurring under field between two metal oxide films. &#13;
&#13;
Finally, we show, for the first time, how above band gap illumination can be used to modulate ion transport across interfaces in metal oxides. This was demonstrated through selective changes in grain boundary ionic transport in a model oxygen solid electrolyte Gd0.03Ce0.97O2-δ and by ruling out the impact of optical heating and gas atmosphere. The observed changes are assigned to the modulation of the local grain boundary space charge potentials and ionic charge carrier depletion zones. Models to describe the observed response are developed and are supported by a combination of impedance spectroscopy (IS) and intensity-modulated photocurrent spectroscopy (IMPS).
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150681</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Dynamics of Black Hole Systems and the Ringdown of Black Hole Spacetimes</title>
<link>https://hdl.handle.net/1721.1/150680</link>
<description>Modeling the Dynamics of Black Hole Systems and the Ringdown of Black Hole Spacetimes
Lim, Halston Brandon
Fortunately, by the time Advanced LIGO-Virgo started observing gravitational waves from merging black holes in 2015, theoretical models had already been developed which would greatly contribute to the interpretation of that data. Thanks to numerical relativity, the merger of isolated near-equal mass ratio binary black holes, and the gravitational waves they emit, has been well understood. However, to capitalize on future observations made by current and planned detectors, much work will be needed to expand theoretical models towards all kinds of gravitational-wave sources, including binaries with arbitrary mass ratios and spins, and binaries that interact with their astrophysical environments.&#13;
&#13;
This thesis explores how to model a variety of gravitational wave sources using semi-analytic techniques, including black hole perturbation theory and post-Newtonian theory. First, we describe work to predict and characterize the ringdown gravitational waves from misaligned binary black hole mergers. Working in the large-mass-ratio limit, we use Teukolsky's equation to calculate the worldline for plunging bodies with varying orbital geometries. Perturbations about Kerr spacetime are a linear superposition of quasinormal modes, and we calculate the amplitudes of these modes as excited by a plunging body. The key result is that the mode amplitudes can be cleanly mapped from kinematic angles describing the plunge geometry. Next, we use this mapping to construct a ringdown waveform model consisting of quasinormal modes. Using a white Gaussian noise model, we conduct parameter estimation on the ringdown waveform and demonstrate how the mode amplitudes can be measured. Finally, we investigate the post-Newtonian orbital dynamics of hierarchical black hole triples. We find that, for certain triple systems where the tertiary is much more massive than the inner binary, post-Newtonian three-body resonances can substantially modify the orbital evolution.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150680</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chaos and Thermalization in Quantum Many-Body Systems and Gravity</title>
<link>https://hdl.handle.net/1721.1/150679</link>
<description>Chaos and Thermalization in Quantum Many-Body Systems and Gravity
Vardhan, Shreya
In this thesis, we explore the process of thermalization in chaotic quantum many-body systems with the help of concepts and techniques from quantum information theory. We identify a universal dynamical process in the Heisenberg evolution of operators known as void formation, and use it to provide a new characterization of information spreading in chaotic systems. We also develop a technique called the equilibrium approximation, which allows us to express information-theoretic quantities in pure states evolved to late times in chaotic quantum many-body systems purely in terms of equilibrium quantities, and to do so in a way that is consistent with unitarity. This technique allows us to calculate correlation measures such as entanglement entropy or logarithmic negativity, as well as measures of information recovery from subsystems, in chaotic systems ranging from spin chains and quantum field theories to black holes. For evaporating black holes, the equilibrium approximation for entanglement entropy provides a systematic derivation of certain recent prescriptions for addressing Hawking’s information loss paradox, and explains their physical origin. The equilibrium approximation for logarithmic negativity and Petz map fidelity leads to surprising new predictions for entanglement structure and information transfer between a black hole and its radiation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150679</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of fault failure parameterization and bulk rheology on earthquake rupture</title>
<link>https://hdl.handle.net/1721.1/150592</link>
<description>Effects of fault failure parameterization and bulk rheology on earthquake rupture
Bolotskaya, Ekaterina
Different physical processes associated with material failure affect earthquake rupture nucleation and propagation. These processes include frictional sliding along faults of different roughness, fracturing of the intact rock sections connecting preexisting planes of weakness (fault step-overs) or cemented fault segments, fault branching and "wing-crack" formation, inelastic deformation in the fault damage zone, and others. Due to the computational complexity of earthquake models, especially when complex fault geometries and heterogeneity of properties are also considered, all of these failure-related processes are usually combined into simplified mathematical representations: the failure law prescribed along the fault and the bulk rheology.&#13;
&#13;
In this work we explore both aspects. We study the effect of different failure parameterizations on earthquake cycle characteristics, earthquake nucleation, and dynamic rupture propagation. We start with simplified spring-slider models to understand the stability, slip regimes, and characteristics of the different phases of the earthquake cycle under different failure laws for a single-degree-of-freedom dynamic system. We then build 2D finite element models of earthquake rupture nucleation and propagation to gain insight into how different failure laws affect the different phases of the earthquake cycle when a fault dimension is added. The rheological part of this work explores the effect of off-fault plastic deformation on elongated earthquake rupture characteristics, their steady-state velocity, and energy balance. We use a 2.5D spectral element method that approximately accounts for 3D effects while modeling a 2D geometry, and explore the dependence of earthquake rupture rise time, steady-state slip rate and speed, plastic energy release rate, etc., on the initial stress state and the plastic properties of the bulk material.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150592</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of liquid phase resistance to gas absorption in a packed column</title>
<link>https://hdl.handle.net/1721.1/150589</link>
<description>The mechanism of liquid phase resistance to gas absorption in a packed column
King, C. Judson
            (Cary Judson),
            1934-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Vita.; Includes bibliographical references (leaves ).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150589</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inflation in Chile, a quantitative analysis</title>
<link>https://hdl.handle.net/1721.1/150579</link>
<description>Inflation in Chile, a quantitative analysis
García D'Acuña, Eduardo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1964; Vita.; Includes bibliographical references (leaves 232-235).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150579</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The synthesis and reactions of hydroxyalkyl substituted cyclooctatetraenes</title>
<link>https://hdl.handle.net/1721.1/150578</link>
<description>The synthesis and reactions of hydroxyalkyl substituted cyclooctatetraenes
Rugen, Donald Frederick,
            1923-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1952; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150578</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finite-difference calculation method for large elastic-plastic dynamically-induced deformations of general thin shells.</title>
<link>https://hdl.handle.net/1721.1/150577</link>
<description>Finite-difference calculation method for large elastic-plastic dynamically-induced deformations of general thin shells.
Leech, John W.
            (John Warner)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1967; Vita.; Bibliography: p. 112-117.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150577</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximate nonlinear estimation.</title>
<link>https://hdl.handle.net/1721.1/150576</link>
<description>Approximate nonlinear estimation.
Phaneuf, Roger Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1968; Two blank pages included in paging. Vita.; Bibliography: p. 239-242.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150576</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two issues in public key cryptography : RSA bit security and a new knapsack type system</title>
<link>https://hdl.handle.net/1721.1/150575</link>
<description>Two issues in public key cryptography : RSA bit security and a new knapsack type system
Chor, Benny.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1985; Bibliography: leaves 68-71.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150575</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Marine Parasites in Island-like Disturbed Habitats</title>
<link>https://hdl.handle.net/1721.1/150565</link>
<description>Marine Parasites in Island-like Disturbed Habitats
Dykman, Lauren N.
Parasites are taxonomically and functionally diverse members of biological communities, and can play key roles in species interactions, community structure, and ecosystem functioning. Because of their reliance on host species, parasites are theorized to be particularly sensitive to disturbances that alter host diversity and abundance, especially in isolated habitats, which present challenges to introduction and establishment. In this thesis, I investigate habitat isolation and disturbance as drivers of parasite diversity, with an emphasis on parasite life history strategies related to colonization and persistence. I focus on an island-like, frequently disturbed habitat, deep-sea hydrothermal vents at 9°50'N on the East Pacific Rise, to explore the boundaries of parasite persistence in an extreme environment. First, I analyze recovery in the vent community for 11 years after a catastrophic eruption in 2006 to test successional hypotheses in a new setting with distinct fauna and a chemosynthesis-based food web. Second, I compare parasite diversity at isolated, disturbed vents to marine ecosystems that are similarly isolated but undisturbed (atoll sandflat) and both well connected and undisturbed (kelp forest). Overall, parasite diversity within host species was not significantly lower at vents, but the vent community had many fewer parasite species because there are few vertebrate predator species (fish). Parasites with indirect (multi-host) life cycles were relatively diverse in the disturbed environment, which contradicts theoretical expectation. To explore this further, I investigate the three-host life cycles of trematodes, the most diverse and abundant parasite taxon at vents. All life stages of the trematode life cycle were discovered in vent fauna, and several taxa were traced across multiple life stages via morphology and genetics.
Finally, I use a computational model to investigate how different parasite strategies (colonization capability and impact on hosts) contribute to parasite success under a range of disturbance conditions in island habitats. Parasites that reduce host reproduction reached higher densities than parasites that cause mortality across all disturbance frequencies explored, and disturbance facilitated the evolution of more virulent parasites. These studies demonstrate that life history traits and the ability to adapt allow diverse parasite taxa to persist in isolated, ephemeral environments.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150565</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating Structure-Property Relationships for Targeted Materials Mechanical Design</title>
<link>https://hdl.handle.net/1721.1/150564</link>
<description>Elucidating Structure-Property Relationships for Targeted Materials Mechanical Design
Lew, Andrew J.
The ability to control mechanical properties has been of interest since time immemorial. Advancements in understanding the connection between structure and property have allowed for the intelligent design of materials with specific, desired properties. However, this task is immensely non-trivial due to the vast complexity of structure-property space. While nothing theoretically prevents the acquisition of materials with unprecedentedly extreme or precisely tuned mechanical properties, in practical terms many of our current strategies for materials design run up against intractable limits of time and resources. As a result, much of the exciting potential for nanotechnology, bioinspired structures, and hierarchical architectures remains unfulfilled.&#13;
&#13;
In this dissertation, we leverage three ways of clarifying structure-property space for the effective design of materials with targeted mechanical properties. We start by taking inspiration from nature, leveraging structures that have emerged over millions of years of evolution as a basis for further design, to experimentally obtain novel hierarchical nanomaterials with extreme mechanical properties. We subsequently use physics-based molecular simulations to interrogate the precise mechanisms underlying some of the most mechanically robust materials in nature, focusing specifically on biomineral fracture. We then utilize modern advances in machine learning to enhance our exploration of structure and property, treating first the problem of crystalline fracture and subsequently other mechanical properties of compliance and buckling in a material-agnostic manner.&#13;
&#13;
Finally, the combination of all three approaches in concert allows for a farther-reaching design paradigm than limiting ourselves to any single perspective in isolation. As a result, we are able to achieve inverse design across a variety of structure-property spaces, from the controlled fracture of graphene, to the non-destructive characterization and design of hardness, to the development of hierarchical materials with specific, experimentally verified stress-strain behavior. Importantly, the strategy employed here is scalable across structures and properties, and will compound in efficacy with further advancements in key fields like additive manufacturing, computational simulation, and artificial intelligence.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150564</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total Synthesis of Oligomeric Cyclotryptamine Alkaloids</title>
<link>https://hdl.handle.net/1721.1/150563</link>
<description>Total Synthesis of Oligomeric Cyclotryptamine Alkaloids
Scott, Tony Z.
I. Total Synthesis and Stereochemical Assignment of (–)-Psychotridine&#13;
&#13;
We report the first enantioselective total synthesis and stereochemical assignment of (–)-psychotridine. The application of our diazene-directed assembly of enantiomerically enriched cyclotryptamines afforded a highly convergent synthesis of the pentameric alkaloid, allowing its detailed structural assignment. Highlights of the synthesis include the introduction of four quaternary stereocenters with complete stereochemical control in a single step via the photoextrusion of three molecules of dinitrogen from an advanced intermediate and metal-catalyzed C–H amination reactions in challenging settings.&#13;
&#13;
&#13;
II. Iterative, Diazene-Directed Total Synthesis of (+)-Quadrigemine H, (+)-Isopsychotridine C, (+)-Oleoidine, and (+)-Caledonine&#13;
&#13;
We describe the unified enantioselective total synthesis of the polycyclotryptamine natural products (+)-quadrigemine H, (+)-isopsychotridine C, (+)-oleoidine, and (+)-caledonine. Our bioinspired synthesis leverages the modular, diazene-directed assembly of whole cyclotryptamines to iteratively introduce C3a−C7' quaternary linkages on an advanced heterodimeric intermediate with full stereochemical control at each quaternary linkage. We developed a strategy for iterative aryl-alkyl diazene synthesis using increasingly complex oligomeric hydrazide nucleophiles and bifunctional cyclotryptamines bearing a C3a leaving group and a pendant C7 pronucleophile. The utility of our method is demonstrated by the first total synthesis of the heptamer (+)-caledonine and the hexamer (+)-oleoidine. Additionally, enabled by our fully stereoselective total synthesis and the acquisition of expanded characterization data, we provide the first complete stereochemical assignment of the pentamer (+)-isopsychotridine C and confirm that the tetramer (+)-quadrigemine H is identical to the alkaloid called (+)-quadrigemine I.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150563</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating photophysics in colloidal semiconductor&#13;
quantum dots through photon-correlation methods</title>
<link>https://hdl.handle.net/1721.1/150561</link>
<description>Investigating photophysics in colloidal semiconductor&#13;
quantum dots through photon-correlation methods
Sun, Weiwei
Colloidal semiconductor quantum dots (QDs) exhibit interesting photophysical properties due to their quantum-confined nature. Owing to their bright and stable photoluminescence (PL), they have been widely used in a variety of applications, including lasing, light-emitting diodes (LEDs), solar cells, biological imaging, and, most recently, quantum information science. Numerous efforts have been made to optimize these QDs to improve their performance, all of which require the establishment of an efficient feedback loop between optical properties and synthesis. Spectroscopic studies of their PL properties inform us about the nature of QD emission and point to new directions for the future design and optimization of QDs.&#13;
&#13;
In this thesis, I will use photon-correlation-based spectroscopy, a powerful toolkit with both high temporal and high spectral resolution, to investigate the exciton photophysics inside these QDs and build a complete understanding of the exciton dynamics in cesium lead halide and CdSe/CdS quantum dots. In the first two chapters, I will build a foundation for understanding semiconductor quantum dots, quantum emitters, and single-particle spectroscopic techniques. In the next two chapters, I will use photon-correlation Fourier spectroscopy (PCFS) to investigate single-particle optical coherence properties, to demonstrate cesium lead halide perovskite quantum dots (PQDs) as next-generation quantum emitter materials for quantum information science, and to investigate their dephasing mechanisms at cryogenic temperatures. In the following chapter, I will switch gears to CdSe/CdS quantum dots to develop the first all-optical deterministic blinking-control method and explain the mechanisms behind it. Finally, I will present a few interesting and unexplored ideas on understanding the nature of fine structure states in PQDs and on the integration of nanophotonic cavities to achieve true transform-limited single quantum emitters and large-scale generation.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150561</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>TANDEM BENZANNULATION-CYCLIZATION STRATEGIES FOR THE SYNTHESIS OF HIGHLY SUBSTITUTED INDOLES</title>
<link>https://hdl.handle.net/1721.1/150560</link>
<description>TANDEM BENZANNULATION-CYCLIZATION STRATEGIES FOR THE SYNTHESIS OF HIGHLY SUBSTITUTED INDOLES
Faialaga, Nathan H.
Tandem benzannulation-cyclization strategies were developed for the synthesis of highly substituted indoles. The benzannulation strategies involved the reaction of vinylketenes (or aryl ketenes) with ynamides or with ynehydrazides to afford highly substituted phenols via a pericyclic cascade mechanism. The vinylketenes were generated via the Wolff rearrangement of diazo enones or via the 4π electrocyclic ring-opening of cyclobutenones. Two cyclization approaches were studied: (1) an intramolecular acid-promoted cyclization, and (2) a Fischer indole cyclization. The first approach required the development of a thermal benzannulation protocol and was applied in a concise total synthesis of (-)-herbindoles A-C and (+)-trans-herbindole A. The second approach required the development of a new benzannulation variant that employed ynehydrazides as the ketenophile.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150560</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical and Biological Processes at the Middle Atlantic Bight Shelf-Break Front</title>
<link>https://hdl.handle.net/1721.1/150558</link>
<description>Physical and Biological Processes at the Middle Atlantic Bight Shelf-Break Front
Hirzel, Andrew Joseph
The Middle Atlantic Bight (MAB) is a highly productive ecosystem, supporting several economically important commercial fisheries. Chlorophyll enhancement at the MAB shelf-break front has been observed only intermittently, despite numerous studies that suggest persistent upwelling at the front. High-resolution cross-frontal transects were collected during three two-week cruises in April 2018, May 2019, and July 2019. Chapter 2 focuses on applying a novel method of classifying planktonic images taken by a Video Plankton Recorder to enable processing of the large volumes of data collected with the instrument. Chapter 3 investigates cross-frontal trends by temporally averaging in both Eulerian and frontally-aligned coordinates. For April 2018, transient chlorophyll enhancement was seen at the front in individual transects and within the frontally-aligned mean transect, but not within the Eulerian mean transect. The Eulerian mean for May 2019 showed chlorophyll enhancement as a result of frontal eddies, which are further explored in Chapter 4. No frontal enhancement was observed in July 2019. The frontal eddies observed in May 2019 were simulated using an idealized model, which showed that upwelling occurred within both frontal eddies despite their opposite rotational directions. This result was consistent with nutrient enhancement observed within the centers of both eddies. Biological enhancement within each eddy was observed, which may have been a result of advection from source waters and/or a local response to upwelled nutrients. The influence of frontal variability and frontal eddies on nutrients and plankton at the front argues for the necessity of 3-D models to fully explain frontal behavior and its effects on biological responses.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150558</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Optimization for Classical and Quantum Condensed Phase Systems</title>
<link>https://hdl.handle.net/1721.1/150557</link>
<description>Molecular Optimization for Classical and Quantum Condensed Phase Systems
Shen, Yizhi
Condensed phase phenomena remain a theoretical challenge to thoroughly understand and elucidate due to the close interactions among a large number of microscopic degrees of freedom.&#13;
Such deviation from non-interacting ideality necessitates an effective resolution of the constrained fluctuations and strong correlations in condensed phase systems, which can be methodically achieved using non-Euclidean optimization tools. This thesis is devoted to the optimization-based development of molecular simulations that facilitate our understanding of the static and dynamical properties of many-body systems.&#13;
&#13;
&#13;
Chapter 1 introduces the background on simulating condensed phase systems and sets up the overall scope of the thesis. Chapter 2 provides an initial exposure to a few fundamental connections between functional minimization on manifolds and essential properties of many-body systems, for example statistical and spectral ones.&#13;
&#13;
Chapter 3 considers methods adept at treating representative classical condensed-phase systems. We start with phenomenological spin models on a lattice and turn our attention to atomistic interfaces, including aqueous electrolyte-electrode and polymer-protein composites. We discuss proficient schemes to implement and process our molecular simulations, allowing us to elucidate (a)typical structural-dynamical fluctuations arising from heterogeneities native to these classical systems.&#13;
&#13;
Chapter 4 considers methods capable of studying correlated quantum condensed-phase systems. In particular, we explore the theoretical and numerical underpinnings behind non-parametric simulation schemes that utilize the error-mitigating technique of quantum subspace expansion. We focus on the emergent scenario in which the subspace is generated by a real-time evolution implemented efficiently on quantum hardware. The practical advantages of the schemes are highlighted through a demonstration of their fast and accurate extraction of spectral information.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150557</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bioorthogonal Reagents: Design, Synthesis, and Reactivity</title>
<link>https://hdl.handle.net/1721.1/150556</link>
<description>Bioorthogonal Reagents: Design, Synthesis, and Reactivity
Abularrage, Nile S.
The development of reactions that fit the stringent criteria for click chemistry and bioorthogonal chemistry has enabled discoveries and applications in fields ranging from materials science to chemical biology. These reactions must proceed rapidly and selectively under mild conditions, enabling the chemistry to be performed in complex systems without perturbing the normal function of the system. The prototypical bioorthogonal reactions are the copper(I)-catalyzed azide–alkyne cycloaddition (CuAAC), the strain-promoted azide–alkyne cycloaddition (SPAAC), and the tetrazine ligation. This thesis focuses on the development of new bioorthogonal reactions and reagents.&#13;
&#13;
In part 1, 5-membered cyclic Diels–Alder dienes are studied. This study ranges from the development of 4H-pyrazoles as bioorthogonal reagents to an exploration of the physical organic chemistry that accelerates or impedes the reactivity of 5-membered cyclic dienes in Diels–Alder reactions. I show that 4H-pyrazoles can react rapidly as Diels–Alder dienes upon the induction of hyperconjugative antiaromaticity and predistortion. These dienes can also be stabilized against biological nucleophiles, allowing their use in biological systems.&#13;
&#13;
In part 2, a new cyclooctyne is developed for SPAAC. This cyclooctyne, ABC, is activated by increased strain and electronic activation, and it reacts rapidly with azides and diazo compounds. This reactivity is a result of noncovalent interactions between the dipole and dipolarophile in the transition state: an n→π* interaction or a hydrogen bond. In addition to its fast reactivity, ABC can be prepared in three steps, and this work introduces a new synthetic route for the cyclooctyne scaffold.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150556</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modes of adiabatic flow in the entrance region of an annulus with an inner rotating cylinder</title>
<link>https://hdl.handle.net/1721.1/150547</link>
<description>Modes of adiabatic flow in the entrance region of an annulus with an inner rotating cylinder
Astill, Kenneth Norman.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1961; Includes bibliographical references (leaves 107-110).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150547</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of volatile amines derived from haddock</title>
<link>https://hdl.handle.net/1721.1/150536</link>
<description>An investigation of volatile amines derived from haddock
King, Frederick J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Food Technology, 1960; Vita.; Includes bibliographical references (leaves 87-101).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150536</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stress-strain-time phenomena in sands under triaxial testing conditions</title>
<link>https://hdl.handle.net/1721.1/150529</link>
<description>Stress-strain-time phenomena in sands under triaxial testing conditions
Soteriades, Michael C.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1954; Vita.; Bibliography: leaves 179-184.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150529</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The simulation of political interaction in multiple purpose river basin development.</title>
<link>https://hdl.handle.net/1721.1/150527</link>
<description>The simulation of political interaction in multiple purpose river basin development.
Bulkley, Jonathan W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1966; Bibliography: p. 210-215.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150527</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A transduction analysis of phage-chromosome relationship.</title>
<link>https://hdl.handle.net/1721.1/150525</link>
<description>A transduction analysis of phage-chromosome relationship.
Rothman, June Lynn.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150525</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting scientific information from interplanetary spacecraft radio tracking data.</title>
<link>https://hdl.handle.net/1721.1/150521</link>
<description>Extracting scientific information from interplanetary spacecraft radio tracking data.
Friedman, Louis Dill.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1971; Leaf 135 used twice. Vita.; Bibliography: leaves 90-99.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150521</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new configuration for a liquid helium temperature cryocooler</title>
<link>https://hdl.handle.net/1721.1/150518</link>
<description>A new configuration for a liquid helium temperature cryocooler
Crunkleton, James Alan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1987; Bibliography: leaves 265-268.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150518</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the non-vanishing of a function</title>
<link>https://hdl.handle.net/1721.1/150512</link>
<description>On the non-vanishing of a function
Levinson, Norman,
            1912-1975.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mathematics, 1935; Vita.; Includes bibliographical references (leaf [15]).
</description>
<pubDate>Tue, 01 Jan 1935 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150512</guid>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>III-V waveguides and couplers for integrated optics</title>
<link>https://hdl.handle.net/1721.1/150507</link>
<description>III-V waveguides and couplers for integrated optics
Dagli, N.
            (Nadir)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Bibliography: leaves 180-183.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150507</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational limitations for small depth circuits</title>
<link>https://hdl.handle.net/1721.1/150504</link>
<description>Computational limitations for small depth circuits
Håstad, Johan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1986; Bibliography: p. 66-68.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150504</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance scaling of a novel Modular Hybrid Propulsion System for aircraft flight performance flexibility and enhancement</title>
<link>https://hdl.handle.net/1721.1/150469</link>
<description>Performance scaling of a novel Modular Hybrid Propulsion System for aircraft flight performance flexibility and enhancement
Yang, David
            (Aerospace engineer),
            author.
A novel propulsion system concept, the Modular Hybrid Propulsion System (MHPS), is introduced that has the potential to enable a step-change in aircraft performance and flexibility over traditional carbon-fuel propulsion systems. The MHPS describes a hybrid propulsion system in which the electrical components are interchangeable on a mission-to-mission basis. Scale-agnostic performance relationships are developed for the MHPS concept for a conventional takeoff and landing (CTOL) aircraft. Takeoff, climb, cruise, and payload performance metrics are formulated in terms of hybridity (proportion of electric power output and energy storage) and modularity levels (proportion of total aircraft mass that is interchangeable). For takeoff, increasing the power hybridization value decreases takeoff distance when the electric motor (EM) specific power is greater than the carbon-fuel (CF) engine specific power for a parallel hybrid configuration. For a ratio of EM specific power to CF specific power of 3, increasing the power hybridization from 0 to 1 decreases takeoff distance by 73%. Cruise scaling indicates that the relationship between energy hybridization and cruise time depends on the battery-to-fuel specific energy ratio and the EM-to-CF engine efficiency ratio. Specifically, the product of the two ratios controls whether or not increasing energy hybridization will lead to an increase in cruise time. When the product of the ratios is less than 1, increases in energy hybridization decrease cruise time. If the product of the ratios is greater than 1, the behavior depends on the battery and fuel mass proportions: there is either an optimum energy hybridization value for cruise time, or increases in energy hybridization increase cruise time monotonically. The delineation between the two trends depends on the relative amount of energy storage (battery + fuel) mass on the aircraft.
Specifically, the use of carbon fuel reduces aircraft mass in flight, which is an added benefit when maximizing cruise time. This makes carbon-fuel usage more beneficial than battery usage even when the product of the battery-to-CF specific energy ratio and the EM-to-CF efficiency ratio is equal to 1. It is found that the greater the potential fuel mass, the larger the product.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-187).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150469</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lithiation-induced phase transitions in alloying anodes for thin film lithium-ion batteries</title>
<link>https://hdl.handle.net/1721.1/150464</link>
<description>Lithiation-induced phase transitions in alloying anodes for thin film lithium-ion batteries
Miao, Jinghui,
            author.
As a result of the ever-increasing demands for miniaturized autonomous devices, high-performance batteries compatible with micro-systems have been attracting researchers' attention. Amorphous silicon (a-Si) and germanium (a-Ge), which store lithium through alloying processes rather than through intercalation, are among the top candidates as anodes for thin film Li-ion batteries. This thesis explores different types of lithiation-induced phase transitions and develops corresponding kinetic models in amorphous Si and Ge films using a framework consisting of electrochemical, structural and analytic approaches. The first section of this thesis covers the initial lithiation process of a-Si. Potentiostatic techniques reveal a kink feature in the temporal evolution of current, indicating an interface propagation mechanism for the irreversible phase transition. The rate-limiting step for propagation of the interface between unlithiated Si and the lithiated alloy is further determined to be the diffusion of Li through the lithiated phase, based on quantitative analyses of film-thickness dependence in potentiostatic tests. The second section deals with the reversible lithiation of a-Si beyond the first cycle, often assumed to be governed by simple diffusion into single-phase a-Si. We show that reversible lithiation proceeds through phase transitions between amorphous phases with different stoichiometries. Using a two-step potentiostatic technique and the Johnson-Mehl-Avrami-Kolmogorov model, it is shown that these amorphous-to-amorphous transitions occur through three-dimensional nucleation and growth processes. This conclusion is supported by TEM observations for which phase contrast is achieved through preferential high-energy electron-beam induced sputtering of Li. Instead of a complete transition at fixed voltage, reversible phase transitions in a-Si occur in a step-by-step nucleation fashion. 
The last section focuses on phase transitions during reversible lithiation of a-Ge, using techniques similar to those used for the studies of a-Si. The only crystalline phase, Li₁₅Ge₄, is found to coexist with two amorphous alloy phases over a wide voltage range during lithiation. The formation of this crystalline phase is found to be highly constrained by kinetic barriers, and is very sensitive to structural evolution, such as cracking, in the early cycles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from PDF version of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available. Thank you. The images contained in this document are of the best quality available"--Disclaimer page.; Includes bibliographical references (pages 114-129).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150464</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sterile neutrino searches at the IceCube Neutrino Observatory</title>
<link>https://hdl.handle.net/1721.1/150460</link>
<description>Sterile neutrino searches at the IceCube Neutrino Observatory
Axani, Spencer Nicholas Gaelan,
            author.
The IceCube Neutrino Observatory is capable of performing a unique search for sterile neutrinos by exploiting a matter-enhanced resonant neutrino oscillation phenomenon. As atmospheric muon neutrinos pass through the dense material within the Earth, neutral-current elastic forward scattering is predicted to induce a transition into a sterile state. This thesis presents two 3+1 sterile neutrino analyses that search for spectral differences in the reconstructed energy and zenith direction of muon neutrino events, indicative of a transition into a sterile state. The first search probes the parameter space of [delta]m²₄₁ and sin²(2[theta]₂₄) with relevant sensitivity to the global best-fit region for a 3+1 sterile neutrino hypothesis. The second search performs a scan through sin²(2[theta]₂₄) and sin²([theta]₃₄) in the oscillation-averaged-out region of high [delta]m²₄₁ ([delta]m²₄₁ &gt;~ 10 eV²). The analyses are performed using an improved event selection, which extracts 305,891 well-reconstructed muon neutrino events with a sample purity above 99.9% from eight years of IceCube data. Novel simulation techniques, along with updated calibration and a re-assessment of the systematic uncertainties, are also discussed. The first analysis finds a best-fit sterile hypothesis point at [delta]m²₄₁ = 4.47 eV² and sin²(2[theta]₂₄) = 0.10, consistent with the no-sterile hypothesis at the 8% confidence level. The second analysis finds a best-fit sterile hypothesis at sin²(2[theta]₂₄) = 0.40 and sin²([theta]₃₄) = 0.006, consistent with the null hypothesis at the 19% confidence level.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Physics, 2020; "The following pages, 92-94, were not included in the original document submitted to the MIT Libraries. This is the most complete copy available"--Disclaimer page. Cataloged from PDF version of thesis. Supervised by Janet M. Conrad.; Includes bibliographical references (pages 217-239).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150460</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insurgent armies : explaining military loyalty after rebel victory</title>
<link>https://hdl.handle.net/1721.1/150459</link>
<description>Insurgent armies : explaining military loyalty after rebel victory
Martin, Philip Andrew,
            author.
Why do armed movements that win civil wars sometimes build states that control their military forces, but sometimes do not? Under what conditions do field commanders within these winning coalitions commit to centralized military hierarchies, or defect and maintain parallel armed networks? Despite the importance of obedient militaries for building strong states and durable peace, the logic of military loyalty after victory in civil war has not been studied in a systematic or comparative fashion. To explain variation in ex-rebel commander behavior across and within states where armed movements seize power, my argument highlights the centrality of wartime institution-building. Threats to insurgent group survival cause variation in two types of institutions: leadership bodies that bind field commanders to political elites, and local governance systems that tie field commanders to rebel-controlled communities. These institutions shape postwar outcomes through two intervening mechanisms: a) ex-rebel commanders' expectations of reciprocity from political rulers, and b) ex-rebel commanders' abilities to mobilize supporters outside of the regular army hierarchy. If collectivized leadership institutions exist, ex-rebel commanders are more likely to trust rulers' promises and remain loyal. Otherwise, commanders' calculi will hinge on their local governance records. Ex-rebel commanders who governed well during war can tap into local support networks and resist central state oversight. To develop and test this theory, the dissertation examines the war-to-peace transitions of victorious armed movements in Africa. First, I use comparative historical analysis based on original interviews and archival materials to analyse the divergent trajectories of winning rebels in Zimbabwe and Côte d'Ivoire. Second, I test observable implications of my argument at the sub-national level in Côte d'Ivoire. 
For this analysis, I draw on original community-level survey evidence based on interviews with key informants in a representative sample of localities in rebel-ruled territory. Finally, I illustrate the generalizability of the argument through a medium-N analysis of winning armed movements in Africa between 1974 and the present. The project contributes to debates about war and state formation, and the design of coercive institutions in fragile states. It also challenges conventional wisdom about the positive effects of rebel institution-building for peacebuilding.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150459</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrasonic imaging methods for quantitative musculoskeletal tissue assessment and improved prosthetic interface design</title>
<link>https://hdl.handle.net/1721.1/150456</link>
<description>Ultrasonic imaging methods for quantitative musculoskeletal tissue assessment and improved prosthetic interface design
Ranger, Bryan James,
            author.
For persons living with lower extremity amputation, the prosthetic socket -- the cup-like interface connecting the residuum to the prosthesis -- is considered the most critical component. It must be custom-made and tailored to each individual user, and if not fitted properly it can significantly hinder quality of life. As an alternative to conventional fabrication practices that involve subjective input from a clinician, computational modeling-based socket design practices have emerged. Despite early successes, their clinical implementation and potential for broad accessibility are limited because they rely on expensive imaging technologies and robotic indentation devices. Medical ultrasound imaging, a cost-effective modality that can be used at the bedside, is a promising and clinically viable solution. For ultrasound to become a viable scanning method for this application, technological development was necessary to allow three-dimensional acquisition of (1) limb geometry and (2) mechanical tissue properties. Toward this goal, we first present the design of a novel multi-modal imaging system for rapidly acquiring volumetric ultrasound imagery of human limbs. Second, we present the results of two studies that evaluate the use of ultrasound indentation and shear wave elastography (SWE) to characterize tissue biomechanics: the former investigating how SWE is affected by transducer force, and the latter presenting a novel approach for constitutive parameter identification using a combination of finite element analysis (FEA), indentation, and SWE. Finally, we demonstrate that SWE may be performed using a non-contact approach, allowing human limb data to be collected under discrete, transducer-independent loading conditions. The techniques and results presented in this thesis highlight the potential of ultrasound imaging for improved prosthesis design, as well as, more broadly, for quantitative musculoskeletal tissue assessment in a variety of clinical applications.
Specifically, data may be directly incorporated into computational prosthetic socket design practices that are in development in the Biomechatronics Group.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 379-400).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150456</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rewiring neural conduits : engineering neuromuscular tissues for bidirectional neuroprosthetic interfacing</title>
<link>https://hdl.handle.net/1721.1/150455</link>
<description>Rewiring neural conduits : engineering neuromuscular tissues for bidirectional neuroprosthetic interfacing
Srinivasan, Shriya,
            author.
Contemporary technological approaches to address limb loss and neuromuscular dysfunction consist of synthetic, mechanical devices which lack an intimate bidirectional interface with nervous tissues. On the therapeutic front, the current amputation paradigm disrupts neuromuscular architecture, discards sensory organs, and provides no anatomical or prosthetic replacement. This precludes the generation of afferent sensory feedback, which is critical for sensory integration, motor planning, peripheral and central neurological health, and myoelectric prosthesis control. Utilizing a paradigm of coevolution, I simultaneously engineer neuromuscular anatomy and bioelectronics to enable seamless, bidirectional neuroprosthetic interfacing. In this dissertation, I describe the design and preclinical validation of the regenerative agonist-antagonist myoneural interface (AMI) and myodermal interface (MI), which are reconstructive surgical models to restore musculotendinous and cutaneous sensory feedback, respectively. Then, through case-control studies, the functional outcomes of human subjects who have undergone below-knee and above-knee amputations incorporating native AMIs are compared to standard amputation controls. The effect of AMI amputation on sensorimotor neuroplasticity is investigated through anatomical and functional neuroimaging. These preclinical and clinical evaluations demonstrate a) the production of graded efferent and afferent signals, b) the maintenance of peripheral limb volume and central sensorimotor substrates, c) improvements in phantom sensation, phantom pain, and neuroprosthetic controllability, and d) decreased dependence on compensatory visuomotor circuitry. To address challenges with functional electrical stimulation (FES) of neuromusculature, employed for prosthetic feedback and control, I develop a closed-loop functional optogenetic stimulation (FOS) system for peripheral neuromuscular control.
This system demonstrates greater accuracy, biomimetic orderly recruitment of fibers, and minimized fatigue during cyclic movements as compared to FES. Spanning from animal models to human implementation, this dissertation presents 1) a model for designing new surgical techniques for afferent/efferent signaling, 2) a characterization of the physiology following clinical translation, and 3) a recursive application of these lessons to the design of neural interfaces back at the bench. In summary, the results of this work steer a shift of the clinical amputation paradigm towards one that performs strategic rewiring of neuromuscular constructs to enable improved neurological health and neural interfacing.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 208-242).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150455</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing transition metal dichalcogenide alloys for applications to integrated photonics</title>
<link>https://hdl.handle.net/1721.1/150439</link>
<description>Developing transition metal dichalcogenide alloys for applications to integrated photonics
Li, Yifei
We explore the potential of transition metal dichalcogenides (TMDs) as phase-change materials for photonic integrated circuits (PICs). We measure the near-infrared (NIR) optical properties of bulk crystal telluride TMDs and sulfide TMDs. We find that telluride TMDs have large optical density and large optical contrast, but their optical loss is too high. The sulfide TMDs have lower loss, but their phase-change energy is much higher. We further propose designing sulfide TMD alloys that are thermodynamically adjacent to phase boundaries between competing crystal structures, to realize martensitic (i.e., displacive, order-order) switching. We report large-area thin film synthesis of 1T TiS₂ and high-Ti-content, single-phase 2H alloy Mo₁₋ₓTiₓS₂ thin films at temperatures as low as 500 °C using a scalable two-step method of metal film deposition followed by sulfurization in an H₂S gas furnace. We demonstrate different roughening processes for each case and optimize the morphology.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150439</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Characterization of Organically Bound Copper in the Marine Environment</title>
<link>https://hdl.handle.net/1721.1/150438</link>
<description>Molecular Characterization of Organically Bound Copper in the Marine Environment
Babcock-Adams, Lydia
Marine microbes require copper (Cu) for a variety of key enzymes and can therefore experience limitation when concentrations are low. However, when Cu concentrations are too high, it becomes toxic, causing decreased cell growth and even cell death. Laboratory culture experiments have shown that a diverse array of microbes produce organic ligands that complex Cu (CuL) and buffer the free ion concentration, which is the most bioavailable fraction. In this way, the microbes impose a control on the speciation of Cu, decreasing the toxic effects of Cu and making seawater conditions favorable for growth. Studies have shown that CuL complexes produced in laboratory cultures have complexation strengths similar to those found in seawater samples, which suggests a biological source of CuLs in seawater, where dissolved Cu is almost entirely bound by organic ligands. However, information about individual CuL complexes is lacking, which limits our understanding of the sources, sinks, and cycling of dissolved Cu. In order to fill this gap in knowledge, molecular-level information about CuL complexes produced in culture and found in seawater must be obtained. To investigate this, liquid chromatography (LC) was coupled to two mass spectrometers (MS): an inductively coupled plasma (ICP) MS and an electrospray ionization (ESI) MS. By using data supplied by both techniques, the molecular characteristics of CuLs were determined in laboratory cultures of the marine diatom Phaeodactylum tricornutum and the cyanobacterium Synechococcus, and the distribution of CuLs was investigated in natural seawater samples along a transect from 56°N to 20°S along 152°W through the north and central Pacific Ocean. The CuLs identified in laboratory cultures had molecular formulae and fragmentation patterns characteristic of linear tetrapyrroles, a group of organic compounds commonly found in biological systems. 
This identification was further supported by absorbance and nuclear magnetic resonance spectroscopy. The distribution of CuLs in the Pacific Ocean showed a highly dynamic and complex mixture of ligands, closely tied to biological cycles.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150438</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How deformation influences the flow and fracture of glacier ice</title>
<link>https://hdl.handle.net/1721.1/150437</link>
<description>How deformation influences the flow and fracture of glacier ice
Ranganathan, Meghana
Most of the mass loss from the Antarctic Ice Sheet (AIS) occurs by dynamic flow of ice from the interior of the ice sheet to the margins, where the ice flows into the ocean, ultimately breaks apart into icebergs, and melts. Due to anthropogenic shifts in the climate system, many glaciers in the AIS are accelerating, increasing the contribution of the AIS to global sea-level rise. Understanding, and subsequently projecting, the behavior of these Antarctic glaciers is necessary to constrain the impacts that climate shifts will have on the Earth system and on communities around the world. To this end, the essential knowledge concerns the physical processes governing the flow and fracture of ice, some of which are unknown and most of which are under-explored. This thesis seeks to illuminate these processes. I take a three-pronged approach to this question: harnessing satellite and field observations, developing theory, and improving ice flow models to represent more completely the feedbacks that affect ice flow and fracture.&#13;
&#13;
In the first section of this thesis, I develop a novel technique to estimate the ice-rock interface conditions and ice viscosity simultaneously from satellite observations. Applying this method, I find that ice is less viscous in the regions of glaciers that deform the fastest. In the next section, I consider the mechanisms causing this reduction of ice viscosity. First, I evaluate the magnitude of heating by viscous dissipation and show that in many regions of ice streams, shear heating may create temperate zones from which meltwater drains to the bed. Second, I find that changes to the ice microstructure likely play a significant role in setting rates of ice flow and fracture. In the final section, I propose a framework for including these new processes in ice flow models and construct a method for dynamically evaluating these parameters within ice sheet models. As a result of this work, we have a more complete view of the drivers of accelerating ice mass loss and a path forward for modeling future ice flow more accurately, which will improve projections of future sea-level rise.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150437</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport and Beyond: Efficient Optimization over Probability Distributions</title>
<link>https://hdl.handle.net/1721.1/150436</link>
<description>Transport and Beyond: Efficient Optimization over Probability Distributions
Altschuler, Jason M.
The core of classical optimization focuses on the setting where decision variables are vectors in Rⁿ. However, modern applications throughout machine learning, applied mathematics, and engineering demand high-dimensional optimization problems where decision variables are probability distributions. Can such optimization problems be solved efficiently? This thesis presents two interrelated lines of work in this direction through the common thread of Optimal Transport. A unifying theme is the optimization of joint probability distributions with constrained marginals.&#13;
&#13;
Part I of this thesis considers Optimal Transport and other optimization problems over joint distributions with two constrained marginals. Such tasks are fundamental in alignment problems, matrix problems, graph problems, and more. Chapters 2-4 establish near-linear runtimes for approximation algorithms for several classical problems under this umbrella: Optimal Transport, Minimum-Mean-Cycle, Matrix Balancing, and Matrix Scaling. Two recurring key themes are the use of entropic regularization for exploiting separability of optimization constraints, and the use of probabilistic inequalities for obtaining dimension-free convergence bounds. A dictionary is presented that unifies these various problems, which were historically studied in disparate communities.&#13;
&#13;
Part II of this thesis considers Multimarginal Optimal Transport (MOT) and other optimization problems over joint distributions with many constrained marginals. Despite the syntactic similarities with the problems in Part I, these problems require fundamentally different algorithms and analyses. The key issue limiting the many applications of MOT is that, in general, MOT requires time exponential in the number of marginals k and their support sizes n. Chapters 5-6 develop a general theory of what "structure" makes MOT solvable in time polynomial in n and k. We demonstrate this general theory on applications in diverse fields ranging from operations research to data science to fluid dynamics to quantum chemistry. Chapter 7 dedicates special attention to the popular MOT application of Wasserstein barycenters--resolving the complexity of this problem and uncovering the subtle dependence of the answer on the dimension.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150436</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lost in Starvation: How the interplay between physiology and ecology impacts bacterial persistence in a patchy landscape</title>
<link>https://hdl.handle.net/1721.1/150435</link>
<description>Lost in Starvation: How the interplay between physiology and ecology impacts bacterial persistence in a patchy landscape
Ledieu-Dherbécourt, Elise
Heterotrophic marine bacteria navigate a heterogeneous landscape of resources. Bacterial populations cycle through periods of feasting when attached to nutrient particles and periods of famine when foraging between hotspots; thus, bacterial physiological states and ecological processes are intertwined. The stress response to nutrient limitation appears within three hours, while the encounter time to new particles has been estimated to happen on the scale of days. Therefore, it is unclear how the phenotypic changes undergone by bacteria during starvation affect their ability to search for and acquire nutrients. Here, we quantified the physiological responses of the marine heterotroph Vibrio coralliilyticus to carbon and nitrogen starvation and its subsequent success at foraging in a landscape of resource particles. We compare the foraging success of different bacterial populations in terms of the minimum number of particles needed for ten percent of such a population to encounter any particle. We parametrize a model of bacterial foraging during starvation superposing multiple Poisson processes using measurements of viability, motility, attachment, and renewed growth observed for Vibrio coralliilyticus over several days of carbon and nitrogen limitation. We find that motility loss, bacterial persistence, and reductive cellular division are key behaviours determining foraging success. While motility loss increases the number of particles required for successful foraging in a population, bacterial persistence relaxes that constraint. Heightened reductive division accelerates the speed at which the first ten percent of the initial bacterial population achieves a particle encounter. This work provides a quantitative estimate of the influence of nutrient-limited phenotypes on bacterial foraging success in a marine environment.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150435</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution, Evolvability, Expression and Engineering</title>
<link>https://hdl.handle.net/1721.1/150434</link>
<description>Evolution, Evolvability, Expression and Engineering
Vaishnav, Eeshit Dhaval
This thesis describes how to build machines (Engineering) that answer questions about: (a) Evolution &amp; Evolvability and (b) Expression.&#13;
&#13;
In the first part of this thesis, I present a framework for understanding and engineering biological sequences, and solving sequence→function problems by building ‘Complete Fitness Landscapes’ in sequence space. This framework for measuring, modelling and designing biological sequences is built around the idea of learning an ‘oracle’ (typically a deep neural network model that takes a sequence as input and predicts its corresponding function) to traverse these ‘Complete Fitness Landscapes’. Here we develop a (promoter sequence)→(gene expression) oracle and use it with our framework to design sequences that demonstrate expression beyond the range of naturally observed sequences. We also show how our framework can be used to detect signatures of selection on a sequence, and to characterize robustness and evolvability.&#13;
&#13;
The second part of this thesis describes two frameworks for inferring from single-cell and spatial gene expression measurements: ATLAS (A Tool for Learning from Atlas-scale Single-cell datasets) and insi2vec (a framework for inferring from spatial multi-omic and imaging measurements).
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150434</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small Satellite Closed Ecosystems as Enabling Platforms for Low-Cost In-Space Biological Research</title>
<link>https://hdl.handle.net/1721.1/150432</link>
<description>Small Satellite Closed Ecosystems as Enabling Platforms for Low-Cost In-Space Biological Research
Haughwout, Christian Alexander
Over the past two decades, the popularity of small spacecraft has taken off, with nearly 200 CubeSats launched in 2019 alone. The objectives for these CubeSats have typically been Earth observation, communication, technology demonstration, or education. Very few CubeSats are launched with biological research payloads, which are typically hosted on larger spacecraft due to the size, weight, and power (SWaP) requirements of these experiments. A closed ecosystem, in which a combination of organisms exists in symbiotic balance within an environment sealed against mass transfer with the outside world, offers the possibility of conducting long-duration, low-cost biological research on small spacecraft by eliminating the need for consumable supplies, such as food, as well as complex equipment such as feeding robots or waste scrubbers. The motivating mission studied in this work, called SCAMPI (Self Contained Arthropod Module for Permanent Inhabitation), is a 3U CubeSat for demonstrating in low Earth orbit (LEO) the ability to host a closed ecosystem with a volume of 475 millilitres. This work develops novel elements including the mechanical design of the ecosystem container, the design of the thermal management system needed to maintain a survivable temperature range, a trade study and selection of the instruments and components used to monitor and characterize the payload, and an evaluation of the economic case for closed ecosystem research as compared with traditional approaches to astrobiological research.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150432</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable Agri-Food Supply Chains: Consumer Demand and Company Sourcing Practices</title>
<link>https://hdl.handle.net/1721.1/150430</link>
<description>Sustainable Agri-Food Supply Chains: Consumer Demand and Company Sourcing Practices
Lee, Yin Jin
Agri-food supply chains link food production to processing, trading, distribution, and consumption across the globe. However, food production can have severe environmental and social impacts. Rising societal awareness of issues regarding supply chain sustainability both promises opportunities for and assigns responsibilities to companies involved in the supply chain. Companies may seek to win new customers by selling certified sustainable goods. They may also implement other sustainable sourcing practices (SSPs) besides certification to manage their suppliers’ sustainability performance. This dissertation presents three studies that address some knowledge gaps about consumer demand for certified sustainable products and companies’ sustainable sourcing practices, with new findings drawn from large-scale, real-world datasets.&#13;
&#13;
While survey-based research indicates that consumer demand for certified sustainable goods is large, this is not reflected in actual market demand. This gap suggests that either the demand predictions were incorrect, or that the marketing strategies used in the real world failed to tap into the potential demand. The first and second studies of this dissertation (Chapter 2 and 3) evaluate different marketing strategies by examining the actual market demand for certified Fair Trade and organic coffee based on consumer purchases at grocery stores across the US. Both studies use discrete choice models with random coefficients to characterize consumer demand in terms of consumers’ willingness-to-pay, price sensitivity, and substitution behavior.&#13;
&#13;
The first study compares the demand for Fair Trade, organic, and dual-label Fair Trade and organic coffees, focusing on consumers who bought some certified coffee (9.3% of all coffee consumers). In aggregate, consumers (i) preferred products that were both Fair Trade and organic to products that were only Fair Trade or only organic, and (ii) showed equal preference between single-label Fair Trade and organic products. The results encourage companies that are choosing between the labels to invest in both Fair Trade and organic labels instead of just one.&#13;
&#13;
The second study compares consumer demand for premium-priced and regular-priced Fair Trade and organic coffees relative to conventional (unlabeled) coffee. The study found that consumers were more sensitive to the prices of both premium and regular certified coffee than to the prices of their conventional counterparts. Even consumers who spent most of their coffee budget on premium certified coffees were more likely to choose the regular conventional category over the regular certified category in response to an increase in the price of their preferred premium certified coffee. Companies making and selling certified products would need to do more than matching prices and using sustainability certification labels to increase their market competitiveness in a traditional retail setting dominated by conventional products.&#13;
&#13;
Companies want to know what SSPs other companies are using to inform their own strategies. Stakeholders, such as governments and non-profit organizations, need to know the prevalence of different SSPs used by companies in multiple supply chain stages to understand the development of sustainable supply chains in an industry. The third study (Chapter 4) addresses these knowledge gaps by examining the mixes of SSPs used by 171 companies in the palm oil industry. The study determined how “hands-on” or “hands-off” the companies are, and how the companies’ SSPs depend on their supply chain stages—retailers, manufacturers, or processors and traders (PTs). Hypotheses about the relationship between companies’ SSPs and their supply chain stages were based on theories commonly applied in sustainable supply chain management. Regression analysis was applied to data on companies’ SSPs collected from their websites and reports in 2018. The two most popular practices were certification and supplier code of conduct. On average, companies used two hands-off practices regardless of supply chain stage. In contrast, companies used fewer hands-on practices on average the more downstream their supply chain stage was, decreasing from PTs, to manufacturers, and then to retailers. The variations agree with what is expected from theory. The results highlight the prevalence of hands-off practices and indicate that PTs likely lack hands-on support from their downstream customers. Stakeholders are encouraged to collaborate and develop solutions that can provide more support to PTs and to suppliers further upstream.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150430</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonon hydrodynamic transport at elevated temperature</title>
<link>https://hdl.handle.net/1721.1/150429</link>
<description>Phonon hydrodynamic transport at elevated temperature
Ding, Zhiwei
For over half a century, phonon hydrodynamic transport was deemed exotic and relevant only at extremely low temperatures. In this work, by combining theoretical and experimental approaches, we predict and confirm the existence of phonon hydrodynamic transport in graphite above 200 K. More specifically, we introduce a direction-dependent definition of normal and Umklapp scattering, which gives an improved description of mode-specific phonon dynamics. By extending the classical Fuchs-Sondheimer solution, we develop a first-principles framework to study phonon hydrodynamics under the size effect with mode-by-mode phonon scattering details. We unambiguously reveal Poiseuille heat flow by studying the variation of heat flow with graphite ribbon width, and identify for the first time the phonon Knudsen minimum, an unusual phenomenon unique to the hydrodynamic regime, which can be observed up to 90 K. Using a sub-picosecond transient grating technique, we directly observe second sound in graphite at a record-high temperature of 200 K. With the enlarged grating-period window, we report for the first time the dispersion of the thermal wave, whose velocity increases with decreasing grating period. Our experimental findings are well explained by the interplay among “three fluids”: ballistic, diffusive, and hydrodynamic phonons. We believe our study may stimulate further work on discovering more material systems with significant phonon hydrodynamic features, as well as new research into understanding and manipulating phonon transport in the hydrodynamic regime.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150429</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Mixed Multinomial Logit Models</title>
<link>https://hdl.handle.net/1721.1/150428</link>
<description>Learning Mixed Multinomial Logit Models
Hu, Yiqun
The multinomial logit (MNL) model is widely used to predict the probabilities of different outcomes. However, the standard MNL model suffers from several issues, including, but not limited to, population heterogeneity, the restrictive independence of irrelevant alternatives (IIA) assumption, and insufficient model capacity. To alleviate these issues, mixed multinomial logit (MMNL) models were introduced. MMNL models are highly flexible: McFadden and Train [2000] showed that they can approximate any random-utility-based discrete choice model to an arbitrary degree of accuracy under appropriate assumptions. In addition, MMNL removes other limitations of standard MNL models, including lifting the IIA assumption, allowing correlation in unobserved utility factors over time, and, most importantly, reducing the chance of model misspecification when modeling real-world applications, where the data composition is often heterogeneous.&#13;
&#13;
Despite its importance and versatility, the study on the learning theory of MMNL is limited and learning MMNL models remains an open research topic. In this thesis, we will tackle this learning problem from two different perspectives. First, inspired by the recent work in Gaussian Mixture Models (GMM), we aim to explore the polynomial learnability of MMNL models from a theoretical point of view. Next, we present an algorithm that is designed to be more applicable and utilizes the rich source of data available in the modern digitalization era, yet still yielding ideal statistical properties of the estimators.&#13;
&#13;
Chapter 2 studies the polynomial learnability of MMNL models with a general number K of mixtures. This work aims to extend current results that apply only to 2-MNL models. We analyze the existence of ϵ-close estimates using tools from abstract algebra and show that there exists an algorithm that can learn a general K-MNL model, if identifiable, with probability at least 1−δ, using a polynomial number of data samples and a polynomial number of operations (in 1/ϵ and 1/δ), under some reasonable assumptions.&#13;
&#13;
In Chapter 3, motivated by the Frank-Wolfe (FW) algorithm, we propose a framework that learns both mixture weights and component-specific logit parameters with provable convergence guarantees for an arbitrary number of mixtures. Our algorithm utilizes historical choice data to generate a set of candidate choice probability vectors, each being ϵ-close to the ground truth with high probability. The convex hull of this set forms a shrunken feasible region with desired properties for the linear subproblems in FW, which subsequently enables independent parameter estimation within each mixture and, in turn, leads to convergence of the mixture weights. This framework also resolves the issue of unbounded parameter estimates present in the original FW approach. Complexity analysis shows that only a polynomial number of samples is required for each candidate in the target population.&#13;
&#13;
Extensive numerical experiments are conducted in Chapter 4, including both simulations and case studies on the well-known Nielsen Consumer Panel Data, to demonstrate the effectiveness of our framework in recovering the true model parameters and/or learning realistic component-level parameters, as compared to the original FW framework.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150428</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Control of the Spin-Phonon Interaction to Develop a New Generation of Molecular Quantum Bits</title>
<link>https://hdl.handle.net/1721.1/150427</link>
<description>Chemical Control of the Spin-Phonon Interaction to Develop a New Generation of Molecular Quantum Bits
Amdur, M. Jeremy
Molecular magnetism is a fascinating field critical to the development and understanding of new technologies such as molecular quantum bits and quantum memories. Implementing these materials is fundamentally limited by the rapid decay of populations of electronic spins back to magnetic equilibrium. Because of the small energy scale of paramagnetic interactions, the time a non-equilibrium state can be maintained is governed by the interaction of the electronic spin with the low-energy phonon bath of its matrix. Therefore, controlling the strength of this interaction is critical for a new quantum technology. In the weak spin-phonon coupling regime, we can design technology where quantum states can be maintained, even at high temperatures. In the strong spin-phonon coupling limit, we can design systems where the properties of the magnet are controlled externally by manipulation of the matrix. There is a large gulf between these lofty goals and our current technological capabilities. Bridging the gap requires a deep understanding of magnetostructural correlations and of the impact that tuning molecular handles has on the strength of the spin-phonon interaction. This thesis adds new key insights into the nature of the spin-phonon interaction and highlights how it can be controlled along three unique axes. Chapter 1 introduces the field of quantum information science and the unique and powerful potential of transition metal molecular electronic spins for the field. Chapter 2 highlights the manipulation of local molecular vibrations to minimize the spin-phonon interaction and maximize quantum coherence at higher temperatures. We use these results to establish design principles for creating new high-temperature quantum materials. Chapter 3 discusses progress on direct phonon engineering of electron spin systems. We design frameworks of interacting molecular qubits with specific spin topologies to create bulk entangled systems with a desired phonon structure.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150427</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-based Correlation Analysis Between Laser Speckle and Surface Size Distribution</title>
<link>https://hdl.handle.net/1721.1/150316</link>
<description>Learning-based Correlation Analysis Between Laser Speckle and Surface Size Distribution
Zhang, Qihang
Extracting quantitative information about highly scattering surfaces from an imaging system is challenging because the phase of the scattered light undergoes multiple folds upon propagation, resulting in complex speckle patterns. One specific application is the drying of wet powders in the pharmaceutical industry, where quantifying the particle size distribution (PSD) is of particular interest. A non-invasive, real-time monitoring probe for the drying process is required, but no suitable candidate exists for this purpose. In this thesis, we develop a theoretical relationship from the PSD to the speckle image and describe a physics-enhanced autocorrelation-based estimator (PEACE), a machine learning algorithm for speckle analysis that measures the PSD of a powder surface. This method solves the forward and inverse problems together and enjoys increased interpretability, since the machine learning approximator is regularized by the physical law. Moreover, we utilize an engineered intensity pupil to boost the sidelobe intensity more than 30-fold and propose a learning-based model to estimate particle sizes from a single snapshot. This reduces the data collection time from 15 s to 0.25 s, broadening the method's applicability to manufacturing industries that require a real-time refresh rate.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150316</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Dynamic Primitives Hypothesis: a Descriptive Model of Human Physical Interaction</title>
<link>https://hdl.handle.net/1721.1/150315</link>
<description>A Dynamic Primitives Hypothesis: a Descriptive Model of Human Physical Interaction
Hermus, James
Physical interaction is a key aspect of activities of daily living. These tasks require simultaneous regulation of both force and motion. For example, even a task as simple as opening a door presents a challenge to the development of prosthetics, exoskeletons, and human-robot interaction. Motor neuroscience has reported systematic patterns of motion during free reaching and force during static posture. However, similar results do not extend to physical interaction. A descriptive model is required.&#13;
&#13;
The paradox of human performance: despite large feedback delays and many degrees of freedom, humans are incredibly dexterous and excel at physical interaction with complex objects. To account for this performance, we hypothesize that motor behavior, with and without physical interaction, is constructed from a limited set of primitive dynamic behaviors, including oscillations, submovements, and mechanical impedance. We propose not only that these "building blocks" exist but that their connectivity is important; we model it as a Norton equivalent network.&#13;
&#13;
This thesis is composed of four components that systematically investigated this hypothesis. (1) Through the study of crank turning we presented evidence for dynamic primitives. (2) To test the hypothesis, a method to estimate impedance during crank turning was developed. (3) When kinematic redundancy was substantial, a dynamic primitive-based control resolved redundancy without compromising performance. (4) This hypothesis led to the development of an experiment which falsified a common assumption, that humans can directly regulate force during motion.&#13;
&#13;
While it is fundamentally hard to prove hypotheses in human motor control, the hypothesis of dynamic primitives can descriptively account for systematic patterns in constrained motion. Furthermore, the value of this hypothesis was demonstrated in robotics by simplifying the management of kinematic redundancy and force regulation.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150315</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Context and Participation in Machine Learning</title>
<link>https://hdl.handle.net/1721.1/150314</link>
<description>Context and Participation in Machine Learning
Suresh, Harini
ML systems are shaped by human choices and norms, from problem conceptualization to deployment. They are then used in complex socio-technical contexts, where they interact with and affect diverse populations. However, development decisions are often made in isolation, without deeply taking into account the deployment context in which the system will be used. And they are typically hidden from users in that context, who have few avenues to understand if or how they should use the system. As a result, there are numerous examples of ML systems that in practice are harmful, poorly understood, or misused.&#13;
&#13;
We propose an alternate approach to the development and deployment of ML systems that is focused on incorporating the participation of the people who use and are affected by the system. We first develop two frameworks that lend clarity to the human choices that shape ML systems and the broad populations that these systems affect. These inform a prospective question: how can we shape new systems from the start to reflect context-specific needs and benefit justice and equity? We address this question through an in-depth case study of co-designing ML tools to support activists who monitor gender-related violence. Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that prioritize sustainable partnerships and challenge power inequalities. Then, we consider an alternative paradigm where we do not have full control over the development lifecycle, e.g., where a model has already been built and made available. In these cases, we show how deployment tools can give downstream stakeholders the information and agency to understand and hold ML systems accountable. We describe the design of two novel deployment tools that provide intuitive, useful, and context-relevant insight into model strengths and limitations. The first uses example-based visualizations and an interactive input editor to help users assess the reliability of individual model predictions. The second, Kaleidoscope, enables context-specific evaluation, allowing downstream users to translate their implicit knowledge of "good model behavior" for their context into explicitly-defined, semantically-meaningful tests.&#13;
&#13;
This dissertation demonstrates several ways that context-specific considerations and meaningful participation can shape the development and use of ML systems.  We hope that this is a step towards the broader goal of building ML-based systems that are grounded in societal context, are shaped by diverse viewpoints, and contribute to justice and equity.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150314</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Traveling Salesman Problem for Systems with Dynamic Constraints</title>
<link>https://hdl.handle.net/1721.1/150313</link>
<description>The Traveling Salesman Problem for Systems with Dynamic Constraints
Adler, Aviv
The Traveling Salesman Problem (TSP) is a foundational problem in the fields of theoretical computer science and optimization in which an agent is tasked with visiting a set of n target locations (in any order) in the shortest amount of time, either on a graph or in a space. As this problem is well-known to be NP-hard, it is usually solved using heuristics or approximation algorithms. An important variant of the TSP is the Dynamic TSP (DTSP), in which the targets exist in a space where the agent’s trajectory must satisfy dynamic constraints (for instance, limited ability to accelerate). The DTSP arises naturally in many robotic motion planning problems, particularly in exploration, surveillance, and reconnaissance, and is generally not amenable to standard TSP approximation algorithms. An interesting and important question, known as the Dynamic Stochastic TSP (DSTSP), asks: if the target points are distributed randomly, how does the length of the shortest tour (either in expectation or with high probability) grow with the number n of targets? This problem has been studied for a variety of common vehicle models, as well as certain broader classes of dynamic control systems.&#13;
&#13;
In this thesis, we present a novel proof that extends known DSTSP order-of-growth results to a wider variety of dynamic systems, in particular to manifold workspaces, as well as two novel algorithms that achieve a constant-factor approximation of the optimal tour with high probability. These new proofs and algorithms furthermore allow us not only to study the order of growth of the tour length but also, for the important subset of ‘symmetric’ dynamics, to give explicit constant factors and to tightly characterize the relationship between the dynamics, the target point distribution, and the optimal tour length. Finally, we extend these results to the non-stochastic adversarial case, in which the target points are chosen to maximize the length of the optimal tour.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150313</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetic temperature of structures for resilience, instability and failure analysis of building systems</title>
<link>https://hdl.handle.net/1721.1/150310</link>
<description>Kinetic temperature of structures for resilience, instability and failure analysis of building systems
Keremidis, Konstantinos
This thesis aims to address the urgent need for quantitative resilience assessment of buildings, which, due to the perils of global warming, are expected to be subject to extreme hazards. To obtain the key input for resilience calculations in the form of structural fragility, we redefine structural mechanics within the context of statistical physics and atomistic simulations in a molecular dynamics (MD)-based framework. At the core of the approach, potentials of mean force for two-body, three-body, and four-body interactions are derived to define the energy states between mass points discretizing structural members. An original potential parameter calibration procedure is proposed to link our methodology to classical continuum mechanics and experiments.&#13;
&#13;
At the interface between structural mechanics and statistical physics lie the thermodynamic ensembles, which dictate the conservation of macroscopic properties in dynamic systems. Moving beyond the classical engineering ensemble of choice, the energy-conserving microcanonical (NVE) ensemble, we explore the concept of structural thermalization in the canonical (NVT) ensemble. To that end, we invoke the equipartition theorem of statistical physics and introduce, by analogy with the kinetic theory of gases, the kinetic temperature of structures. Structural thermalization is realized by connecting the momentum balance equations to an outside bath reservoir maintained at a reference temperature history through the Nosé-Hoover thermostat. Following the Zeroth Law of Thermodynamics, a structure is recognized to be in (thermal) equilibrium as long as its kinetic temperature attains the bath temperature, whereas it is out of equilibrium when the open system (structure plus bath) exhibits a sustained temperature difference. In that case, the structure has exhausted its fluctuation-dissipation capacity, which, for structures, is indicative of progressive failure and instability. The use of the kinetic temperature as an order parameter is illustrated for numerous applications, ranging from the buckling of rods to the wind and fire response of buildings, all the way to determining the fragility curves required for assessing the resilience of buildings. It is suggested that the proposed order parameter become an integral part of the structural engineering toolbox for resilience studies of buildings and structures.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150310</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>RNA-level controllers for programmable gene expression in mammalian cells</title>
<link>https://hdl.handle.net/1721.1/150304</link>
<description>RNA-level controllers for programmable gene expression in mammalian cells
Ilia, Katherine
Synthetic biology is a burgeoning field that aims to design circuits using biomolecular components in order to equip cells with desired functionality for a variety of applications, such as medicine, biofuel production, and environmental health. As the field matures, engineers are taking on challenges that require the integration of regulatory logic in complex environments. While there has been significant progress in synthetic biology, there remains a need for genetic devices that allow precise control over endogenously and exogenously expressed genes. Here, through two examples, we demonstrate that RNA is well-suited for engineering compact, programmable biomolecular tools with sense-and-actuate functionalities. In the first half of this thesis, we develop a microRNA-based strategy to precisely overwrite the expression level of an endogenous gene of interest, thereby insulating the expression level of this gene from interference by the endogenous transcriptional unit. We incorporate this strategy into genetic controllers and leverage live-cell imaging to develop a versatile strategy for probing the role of transcription factor dynamics in cell fate transitions and for enforcing transcription factor levels during those transitions. In the second half of this thesis, we engineer programmable single-transcript RNA sensors in vivo, in which adenosine deaminases acting on RNA (ADARs) autocatalytically convert target hybridization into a translational output. This system amplifies the signal from editing by endogenous ADAR through a positive feedback loop. This topology confers high dynamic range, low background, minimal off-target effects, and a small genetic footprint. We anticipate that these approaches have extensive applications in cell- and RNA-therapeutics as well as in basic research, illustrating the potential of programmable RNA-based controllers.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150304</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and biochemical investigations into the Neisseria gonorrhoeae ribonucleotide reductase</title>
<link>https://hdl.handle.net/1721.1/150303</link>
<description>Structural and biochemical investigations into the Neisseria gonorrhoeae ribonucleotide reductase
Levitz, Talya S.
Ribonucleotide reductase (RNR) is the only known enzyme that converts ribonucleotide substrates to their deoxyribonucleotide products necessary for DNA synthesis and repair. For class Ia RNRs, this chemically-difficult process is tightly regulated at many levels including through allosteric regulation, which occurs in many organisms through oligomeric state changes. These quaternary shifts allow for radical transfer to the active site in the presence of allosteric activator ATP (active state) and prevent that radical transfer in the presence of allosteric inhibitor dATP (inactive state). &#13;
&#13;
The essential reaction that RNR catalyzes, along with its distinct active and inactive states, makes RNR both a current anticancer drug target and a promising antibiotic target. Differences in the composition of inactive states between human and some bacterial RNRs further elevate RNR as a potential selective antibiotic target. This thesis structurally and biochemically characterizes the ribonucleotide reductase enzyme from Neisseria gonorrhoeae, the causative agent of gonorrhea. N. gonorrhoeae is a prevalent public health concern presenting a current and imminent risk for multi- and fully-drug-resistant strains. Here, we biochemically characterize and present an inactive structure of the N. gonorrhoeae RNR, which is the first structure of an RNR from a pathogenic organism. We present and characterize compounds that selectively inhibit the N. gonorrhoeae RNR in vitro and in vivo. We also present an LC-MS/MS assay for use in characterizing the activity of a diverse array of RNRs under different nucleotide conditions.&#13;
&#13;
This thesis additionally explores the use of the chameleon, an automated cryo-electron microscopy (cryo-EM) specimen preparation instrument, to generate cryo-EM specimens of the N. gonorrhoeae RNR suitable for high-resolution reconstructions, as traditional blot-based plunging techniques were unable to yield intact particles for reconstruction. General optimization of chameleon conditions for difficult cryo-EM samples is discussed, with the inactive N. gonorrhoeae RNR structure presented as a case study. This work will open the door for future research on the N. gonorrhoeae RNR as a potential antibiotic target and for use of the chameleon to generate high-resolution reconstructions for a broad range of difficult cryo-EM samples.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150303</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Quantitative Assessment of Advanced Take-off Trajectories&#13;
for Supersonic Transport Noise Reduction</title>
<link>https://hdl.handle.net/1721.1/150302</link>
<description>A Quantitative Assessment of Advanced Take-off Trajectories&#13;
for Supersonic Transport Noise Reduction
Voet, Laurens Jozef Amandine
This thesis (a) establishes the design trades and limitations of supersonic transport propulsion systems in terms of take-off noise reduction, (b) identifies the attributes, quantifies the potential, and assesses the impact of advanced take-off trajectories designed for noise reduction, and (c) formulates a reduced-order model to scale these results for supersonic transport of different size classes and cruise Mach numbers.&#13;
&#13;
The propulsion system design trades established in this thesis show that clean-sheet engines do not enable supersonic transport to meet current subsonic transport noise limits when using conventional take-off trajectories. The impact of derivative engines on the cumulative noise levels is found to be small (1.4 EPNdB). In fact, regardless of whether a clean-sheet or derivative engine is selected, a Mach 1.4 business jet is shown to exceed the current cumulative noise limit by at least 15.5 EPNdB. Further noise reduction of the jet noise dominant engines is prevented by the fan size constraint imposed to limit wave drag during supersonic cruise.&#13;
&#13;
Advanced trajectories are proposed to reduce take-off noise by capitalizing on excess engine thrust and improved aerodynamic efficiency at higher take-off speeds. These novel trajectories use (i) automatic continuous control of thrust and high-lift devices, (ii) increased take-off speed, and (iii) reduced cut-back altitude, compared to conventional trajectories currently used for subsonic transport. For the aircraft examined, although these trajectories reduce the 65 dB-A community noise contour area by 63.8%, they only reduce cumulative certification noise by 10.6 EPNdB, which is insufficient to meet current subsonic transport noise limits. Additionally, the advanced trajectories with the lowest community noise do not yield the lowest certification noise, which warrants re-examination of supersonic transport noise standards. In contrast, engine NOx standards are representative for supersonic transport using advanced take-off trajectories and thus do not need to be modified, as the impact of these trajectories on the mass of NOx emissions during climb-out is small (16.1%).&#13;
&#13;
Last, a first-of-its-kind reduced-order model for supersonic transport take-off noise scaling shows that, as cruise Mach number increases, supersonic transport take-off noise levels increase while the thrust cut-back noise reduction potential decreases. This scaling rule enables equally stringent standard setting for noise certification of supersonic transport across a broad range of size classes and cruise Mach numbers.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150302</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calibration and Utilization of High-Fidelity Two-Qubit Operations</title>
<link>https://hdl.handle.net/1721.1/150299</link>
<description>Calibration and Utilization of High-Fidelity Two-Qubit Operations
Greene, Amy
Over the past two decades, impressive strides have been made in the field of quantum computing. Quantum advantage has been reported, and there is now an ecosystem of cloud-based quantum processors and companies interested in using them. However, high error rates continue to limit circuit depth, such that solving real-world problems with today’s quantum computers remains a challenge. For quantum computing with superconducting qubits, two-qubit gates are a major source of those errors.&#13;
&#13;
In this thesis, we calibrate high-fidelity CZ and CPhase gates for flux-tunable transmon qubits. We develop a new technique for mitigating coherent errors in two-qubit gates called quantum measurement emulation (QME). We use this technique to implement a novel operation called density matrix exponentiation (DME), which has applications in quantum machine learning and universal simulation. These protocols contribute to the understanding and mitigation of errors in two-qubit gates. They are a step towards fault-tolerant universal quantum computing with superconducting circuits.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150299</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Confidentiality Under Nondeterminism for Storage Systems</title>
<link>https://hdl.handle.net/1721.1/150298</link>
<description>Verifying Confidentiality Under Nondeterminism for Storage Systems
Ileri, Atalay Mert
Storage systems must often store confidential data for their users. It is important to ensure that the confidentiality of the stored data is maintained in the presence of bugs and malicious adversaries. This thesis tackles this problem using formal verification, a technique that involves proving that a software system always satisfies certain requirements.&#13;
&#13;
There are numerous challenges in specifying what it means for a system to be confidential and in proving that a system satisfies that specification: nondeterministic behavior, indirect leakage of data, system complexity, and others. Nondeterminism in particular creates unique challenges by making probabilistic leakage possible. This dissertation introduces the following to address these challenges:&#13;
&#13;
Two novel confidentiality specifications for storage systems with nondeterministic behavior: data nonleakage and relatively deterministic noninfluence. Both definitions accommodate discretionary access control and intentional disclosure of the system metadata.&#13;
&#13;
Two techniques accompanying these specifications: sealed blocks and nondeterminism oracles. These techniques address the challenges encountered in proving the confidentiality of such systems and reduce the proof effort required. They are formalized and implemented in two frameworks: DiskSec and ConFrm. Both frameworks contain metatheory to help developers prove that their implementations satisfy the specification.&#13;
&#13;
The first confidential, crash-safe, and formally verified file systems with machine-checkable proofs: SFSCQ and ConFs. SFSCQ uses data nonleakage and ConFs uses relatively deterministic noninfluence as their confidentiality specifications. Both are implemented and verified in Coq.&#13;
&#13;
An evaluation shows that relatively deterministic noninfluence incurs a 9.2x proof overhead per line of implementation code. Experiments with multiple benchmarks show that our systems perform better than the FSCQ verified file system but worse than the ext4 file system.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150298</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wanting to Do What's Right</title>
<link>https://hdl.handle.net/1721.1/150290</link>
<description>Wanting to Do What's Right
Grant, Lyndal Jennifer
It may seem obvious that good people want to do what is right. But moral philosophers disagree about whether it is virtuous to be motivated to do what is right as such. Some, inspired by Kant, argue that wanting to do what is right as such is always morally praiseworthy. Others claim that such a desire amounts to a kind of moral fetishism.&#13;
&#13;
This dissertation lays the groundwork for a new way of thinking about what it is to want to do what is right as such. The central task (which is the topic of chapter 1) is to provide a new account of moral fetishism that allows us to maintain what I take to be the natural view: it is not always wrong to want to do what is right as such (though it sometimes is). I argue that whether wanting to do what is right as such is virtuous or morally fetishistic depends on the deeper structure of the agent’s motivations. What makes the fetishist a fetishist, I argue, is that they want to do what is right whatever rightness might be. By contrast, the good person’s desire to do what is right is conditional on their substantive conception of right action being at least approximately correct. This account allows us to resolve seemingly conflicting intuitions about cases of wanting to do what is right, and also suggests a more general account of how the contents of our desires depend on our beliefs together with further features of our underlying motivational states.&#13;
&#13;
Chapter 2 takes a deeper dive into the nature of desire contents, providing an independent, disposition-based argument for a thesis on which my account of moral fetishism depends: that two people can both want p, but in wanting p, nonetheless have desires with different contents. Chapter 3 then shows how my account of moral fetishism creates trouble for prominent theories of moral worth. The upshot is that any adequate account of moral worth will need to place additional constraints on the content of the desires that ultimately explain why the agent acts as she does.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150290</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D Spatial Perception with Real-Time Dense Metric-Semantic SLAM</title>
<link>https://hdl.handle.net/1721.1/150288</link>
<description>3D Spatial Perception with Real-Time Dense Metric-Semantic SLAM
Rosinol, Antoni
3D Spatial Perception is the ability of an agent to perceive and understand the three-dimensional structure of its environment, including its position and orientation within that environment. This ability is essential for autonomous robots to navigate and interact with their surroundings, since it enables robots to perform a wide variety of tasks, such as obstacle avoidance, path planning, and object manipulation.&#13;
&#13;
To provide robots with a detailed and accurate representation of the surrounding environment, this thesis first proposes the use of a map representation that is geometrically dense, photometrically accurate, and semantically annotated. We define these maps as metric-semantic maps, and provide algorithms to build such maps in real-time. Metric-semantic maps allow both humans and robots to have a shared understanding of the scene, while providing the robot with sufficient information to localize, plan shortest paths, and avoid obstacles along the way. We then present a novel 3D representation that abstracts a dense metric-semantic map into higher-level concepts, such as rooms, corridors, and buildings, and also encodes static objects and dynamic entities. We define such representations as 3D Dynamic Scene Graphs (DSGs), and also provide algorithms to build 3D DSGs. Finally, we show how these approaches can be combined to form a Spatial Perception Engine capable of building both metric-semantic maps and 3D DSGs from visual and inertial data. We also demonstrate the effectiveness of 3D DSGs for fast semantic path-planning queries, which can be used to direct robots using natural language commands.&#13;
&#13;
In addition to the algorithms presented in this thesis, we open-source our code and datasets for the research community to use and explore. We believe that the algorithms and resources provided in this thesis open up exciting new possibilities in the field of 3D spatial perception, and we hope to inspire further research in this area, with the ultimate goal of creating fully autonomous robots that are able to navigate and operate in complex environments.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150288</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power Sources for Sensors and Robots in Remote and Inaccessible Environments</title>
<link>https://hdl.handle.net/1721.1/150286</link>
<description>Power Sources for Sensors and Robots in Remote and Inaccessible Environments
Zhang, Ge
The continuous shrinkage in size and drop in cost of electronic devices have greatly enlarged the application space for sensors and robots. They are being deployed ever further, into environments that were previously too remote or too confined to access. Harsh environmental conditions and extreme size constraints pose considerable challenges in creating suitable power sources for these devices. This thesis works towards addressing the energy demand of sensors in such inaccessible environments and, by extension, exploring their potential applications. Specifically, we approach this goal from two directions.&#13;
&#13;
On one hand, we developed and optimized thermal energy harvesting systems to power wireless sensor nodes in remote locations, such as deserts and underground environments, where solar power is not reliable. We combined theory and experiments to improve the efficiency of these thermal energy harvesters by optimizing material properties and device configurations and by employing nonlinear thermal devices such as thermal diodes.&#13;
&#13;
On the other hand, we created picoliter-sized Zn-air batteries to provide on-board power for microscopic sensors that can enter highly confined spaces such as blood vessels and brain tissue. We overcame material and fabrication difficulties to create batteries with linear dimensions on the order of 10 μm, providing a remarkable energy density of 2.75 μJ/pL. These batteries can power other micro-electronic devices and be released into solutions as colloids. Solving the energy problem paves the way for cell-sized autonomous sensors that can collect information non-invasively in narrow channels like blood vessels and digestive tracts. Hence, we explored theoretically the possibility of using micro-robots to detect leaks inside tubes, revealing one corner of the tremendous potential of microscopic machines.&#13;
&#13;
Finally, we investigated the theory and reaction mechanisms of 2D polymers, which can guide the synthesis of low permeability materials for sealing leaks.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150286</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lifelong, Learning-Augmented Robot Navigation</title>
<link>https://hdl.handle.net/1721.1/150281</link>
<description>Lifelong, Learning-Augmented Robot Navigation
Doherty, Kevin J.
Simultaneous localization and mapping (SLAM) is the process by which a robot constructs a global model of an environment from local observations of it; this is a fundamental perceptual capability supporting planning, navigation, and control. We are interested in improving the expressiveness and operational longevity of SLAM systems. In particular, we are interested in leveraging state-of-the-art machine learning methods for object detection to augment the maps robots can build with object-level semantic information. To do so, a robot must combine continuous geometric information about its trajectory and object locations with discrete semantic information about object classes. This problem is complicated by the fact that object detection techniques are often unreliable in novel environments, introducing outliers and making it difficult to determine the correspondence between detected objects and mapped landmarks. For robust long-term navigation, a robot must contend with these discrete sources of ambiguity. Finally, even when measurements are not corrupted by outliers, long-term SLAM remains a challenging computational problem: typical solution methods rely on local optimization techniques that require a good “initial guess,” and whose computational expense grows as measurements accumulate.&#13;
&#13;
The first contribution of this thesis addresses the problem of inference for hybrid probabilistic models, i.e., models containing both discrete and continuous states that we would like to estimate. These problems frequently arise when modeling, e.g., outlier contamination (where binary variables indicate whether a measurement is corrupted), or when performing object-level mapping (where discrete variables may represent measurement-landmark correspondence or object categories). The former application is crucial for designing more robust perception systems. The latter application is especially important for enabling robots to construct semantic maps; that is, maps containing objects whose states are a mixture of continuous geometric information and discrete semantic information. The second contribution of this thesis is a novel spectral initialization method which is efficient to compute, easy to implement, and admits the first formal performance guarantees for a SLAM initialization method. The final contribution of this thesis aims to curtail the growing computational expense of long-term SLAM. In particular, we propose an efficient algorithm for graph sparsification capable of reducing the computational burden of SLAM methods without significantly degrading SLAM solution quality. Taken together, these contributions improve the robustness and efficiency of robot perception approaches in the lifelong setting.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150281</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Error behavior and optimal discretization of chaotic differential equations</title>
<link>https://hdl.handle.net/1721.1/150280</link>
<description>Error behavior and optimal discretization of chaotic differential equations
Frontin, Cory
In this thesis, the simulation of chaotic systems is considered. For many chaotic systems, we desire to make estimates of mean values of quantities of interest, and in this case, the effect of chaos is to introduce behavior that naturally lends itself to statistical, rather than deterministic, description. When simulating chaotic systems using discrete versions of governing differential equations, chaos thus introduces statistical errors alongside discretization errors. These statistical errors are generally one of two types: transient spin-up error before the system reaches the attractor (i.e., the stationary distribution of long-run states) and sampling error due to finite-time averaging of trajectories on the attractor.&#13;
&#13;
In this work, we first propose an error model to describe the expected absolute errors on the attractor of a chaotic ordinary differential equation system. This error model implies optimal choices of timestep and sampling time to minimize the simulation error, including discretization error and sampling error, given some computational budget. Adding a model for the spin-up error allows the description of the optimal choice of timestep, sampling time, and spin-up time. Next, we develop a small-sample Bayesian approach that allows the estimation of the discretization and sampling errors using only a small number of simulation results with distinct timesteps and sampling times on the attractor.&#13;
&#13;
We then extend the approach to spatiotemporally chaotic partial differential equation systems, which introduce error due to spatial discretization in addition to temporal discretization errors and statistical errors. Finally, we augment the small-sample approach with corrections for non-negligible spin-up transient behavior, then embed the resulting small-sample method in a naive explore-exploit algorithm. Using this algorithm, we demonstrate that, given a fixed total computational budget, such an approach can yield chaotic simulations that achieve near-optimal estimates without strong prior knowledge of the system's behavior. In addition to this near-optimal discretization, the method allows an a posteriori estimate of the simulation error in the final result after the exploitation stage.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150280</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing predictive tools for solvent effects on thermodynamics and kinetics</title>
<link>https://hdl.handle.net/1721.1/150278</link>
<description>Developing predictive tools for solvent effects on thermodynamics and kinetics
Chung, Yunsie
Solvents are ubiquitous in many industrially, environmentally, and medically relevant chemical systems. Solvents can strongly affect species thermochemistry and reaction kinetics, and a different solvent choice can lead to completely different reaction outcomes and phase equilibria. Accurate prediction of solvent effects is thus crucial to the design and optimization of chemical processes and the construction of liquid-phase kinetic models. Ab initio methods such as quantum chemistry can be used to compute solvation effects, but their high computational cost makes them unsuitable for large-scale applications. Furthermore, existing methods are largely limited to predictions at room temperature. Data-driven approaches like group contribution and machine learning can provide fast estimates, but a lack of quality data is a major bottleneck for these approaches.&#13;
&#13;
This thesis presents several new models that provide fast and accurate predictions of solvent effects on thermodynamics and kinetics across a wide range of chemical space and temperatures. The approaches employed in this work center on combining fundamental thermodynamic relationships, quantum chemistry, and machine learning. An extensive set of quantum chemical data is generated with ab initio methods and used to train machine learning models. Thermodynamic equations and correlations are used to make predictions for different properties and conditions based on available or calculable data. The devised models and methods provide accurate estimates of temperature-dependent solvation free energy, solvation enthalpy, and solid solubility. Predictions can be made up to the critical point of a solvent, allowing one to simulate gas-liquid and solid-liquid equilibria over the entire temperature range. Various quantum chemistry and COSMO-RS levels of theory are compared to identify an efficient and reliable computational workflow for the calculation of liquid-phase rate constants. After establishing the optimal workflow, large-scale COSMO-RS calculations are performed and a machine learning model is developed to predict kinetic solvent effects on neutral reactions. The performance of all models is thoroughly evaluated by direct comparison with experimental data compiled from numerous public sources. The presented tools need only molecular identifiers or easily obtainable data as inputs and hence are ideal for automated, high-throughput applications.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150278</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Macrophage-hitchhiking Anisotropic Microparticles for Therapeutic and Diagnostic Applications</title>
<link>https://hdl.handle.net/1721.1/150276</link>
<description>Macrophage-hitchhiking Anisotropic Microparticles for Therapeutic and Diagnostic Applications
Wang, Li-Wen
Cell therapies represent a major paradigm shift in biotechnology and medicine due to their transformative potential in treating previously incurable diseases. A variety of cells have been applied in cell therapies, including stem cells, tissue-specific cells, and hematopoietic cells. In particular, immune cells, a subset of blood cells, have gained significant attention owing to their inflammation-homing ability as well as their inherently critical roles in disease progression and tissue regeneration. The success of immune cell-based therapies in the clinic has fueled efforts in immune cell engineering. Several approaches have been taken to functionalize immune cells, among which biomaterial-assisted cellular platforms, marrying the strengths of biomaterials and leukocytes, have become a new pillar of immune cell engineering. In my thesis work, I provide a brief overview of cell therapies in the clinic, followed by two projects on biomaterial-assisted cellular platforms employing anisotropic microparticles and macrophages, a type of innate immune cell. Specifically, I developed and engineered discoidal microparticles that can hitchhike on the macrophage surface but resist phagocytosis due to their anisotropic morphology. This approach takes advantage of the inflammation-homing capability of macrophages and enables stable loading of therapeutic and imaging agents in the extracellular space for therapeutic and diagnostic applications.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150276</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designer Porous Carbon Electrodes for Redox Flow Batteries</title>
<link>https://hdl.handle.net/1721.1/150275</link>
<description>Designer Porous Carbon Electrodes for Redox Flow Batteries
Wan, Charles Tai-Chieh
Porous carbon electrodes play an important role in electrochemical systems by providing electrocatalytic sites to facilitate faradaic reactions, distributing reactants and products, and conducting electrons and heat. The microstructure and surface chemistry of such electrodes are of particular relevance to redox flow batteries (RFBs), as, during operation, dissolved active species are forced through the porous media and react on their internal surfaces. RFBs hold promise for long-duration energy storage due to their decoupling of energy and power, long service life, and scalability. However, contemporary RFBs are prohibitively expensive, incentivizing further cost reduction. Decreasing reactor cost by increasing power output is a promising chemistry-agnostic approach to bridging the economic gap. While commercial electrodes are functional, their properties are suboptimal for existing and emerging RFB chemistries. This thesis seeks to advance porous electrodes and electrocatalysts with property profiles tailored for RFBs through three interrelated approaches.&#13;
&#13;
The first section introduces non-solvent induced phase separation (NIPS) as a facile and versatile method to engineer interconnected porous microstructures with property sets unattainable in present-day commercial fiber-based electrodes. Combining spectroscopic characterization and fluid dynamic measurements with electrochemical evaluation using common aqueous redox couples (i.e., Fe²⁺⸍³⁺, V²⁺⸍³⁺, and VO²⁺/VO₂⁺), structure-function relations are developed for RFB electrodes. By tuning the synthesis protocol, the electrochemically accessible surface area, permeability, and through-plane porosity distribution can be systematically varied, offering new pathways to high-performance materials.&#13;
&#13;
The second section advances the interfacial engineering of electrode surfaces. A synthetic method to generate dense and planar carbon films creates opportunities to extract fundamental rate constants on bottom-up designed materials using conventional electroanalytical approaches. Next, I show that oxidative chemical vapor deposition of nanometric poly(3,4-ethylenedioxythiophene) coatings onto porous electrodes enhances electrochemical performance (i.e., for Fe²⁺⸍³⁺), illuminating pathways to tailorable surfaces. Finally, I show how biomass from refuse streams such as food waste and wood byproducts can be used to obtain high surface-area electrocatalysts with desired surface functionalities without post-processing, suggesting next-generation electrode materials from low-cost and sustainable feedstocks.&#13;
&#13;
The third section explores mathematical frameworks to quantify the effectiveness factor of high-surface-area electrocatalysts as a function of particle and reactant properties, applying both Tafel and Butler-Volmer kinetic descriptions to reaction-diffusion processes within a porous sphere. From this analysis, I identify design principles for electrocatalyst sizing based on desired utilization efficiency and find markedly lower catalyst utilization for the micron-scale porous electrocatalysts often employed to enhance aqueous RFB performance, motivating the need for hierarchically structured materials to mitigate diffusional pore-scale losses. This treatment is then integrated into a catalyst layer model for polymer electrolyte fuel cell cathodes to investigate the effect of nanoscale ohmic losses arising in porous-carbon-supported platinum catalysts used for oxygen reduction. While RFBs are the primary focus of this thesis, the methods and findings described are generalizable to convection-driven electrochemical reactors that benefit from engineered transport layers.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150275</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Investigations of Exciton Dynamics in 0D and 2D Hybrid Semiconductor Nanomaterials</title>
<link>https://hdl.handle.net/1721.1/150274</link>
<description>Ultrafast Investigations of Exciton Dynamics in 0D and 2D Hybrid Semiconductor Nanomaterials
Powers, Eric R.
The development of hybrid organic-inorganic semiconductor nanomaterials represents a major advance towards highly tunable and efficient optoelectronic devices. These materials are composed of multiple chemical constituents arranged in nanoscale geometries, and they exhibit emergent properties such as quantum confinement and enhanced Coulombic interactions. Photoactive nanomaterials have potential applications in lighting, solar energy, lasing, and quantum information. However, their complex chemistry and nanometer-scale dimensionality present a barrier to understanding and controlling their structure-function relationships. In this thesis, I utilize ultrafast pump-probe spectroscopy to study the fundamental properties of hybrid semiconductor nanomaterials. This technique allows the electronic and vibrational dynamics of these materials to be investigated with femtosecond time resolution, providing new insights into their underlying physics.&#13;
&#13;
I first employ impulsive vibrational spectroscopy to study exciton-phonon interactions in silver phenyl selenolate (AgSePh), a recently discovered hybrid 2D semiconductor. Combining this time-domain Raman technique with frequency-domain non-resonant Raman scattering, I measure the vibrational landscape of the system and identify a subset of vibrational modes that couple strongly to excitonic transitions. Density functional theory calculations are then utilized to pinpoint the specific atomic displacements that couple to the excitonic transition. An investigation of temperature-dependent photoluminescence reveals the connection between atomic-scale dynamics and macroscopic properties, showing that the 99 cm⁻¹ mode strongly impacts emission frequency and linewidth in AgSePh. Finally, temperature-dependent impulsive vibrational spectroscopy is employed to probe vibrational anharmonicity in this material. These findings provide a roadmap for controlling exciton-phonon coupling in hybrid organic-inorganic semiconductors.&#13;
&#13;
Next, I study charging and neutralization processes in lead halide perovskite nanocrystals (quantum dots) with transient absorption spectroscopy. A temperature-dependent study of CsPbBr3 nanocrystals reveals that charging occurs in a significant fraction of photoexcited nanocrystals (&gt;10%), which then exhibit a microsecond neutralization lifetime at cryogenic temperatures. The temperature-dependent dynamics are modeled to extract a ~100 meV energy barrier to re-neutralization. Additionally, photocharging shows a sublinear fluence dependence, excluding Auger recombination as the initiator of this process, in contrast with traditional nanocrystal systems. These results help illuminate the underlying mechanism of charging in perovskite nanocrystals, guiding future synthetic advances to improve nanocrystal performance in lighting and quantum information applications.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150274</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoporous Graphene Membranes for Health and Environmental Applications</title>
<link>https://hdl.handle.net/1721.1/150272</link>
<description>Nanoporous Graphene Membranes for Health and Environmental Applications
Chow, Chun Man
Separation processes are found in many diverse applications, including health (drug purification, sterilization) and the environment (CO2 capture, water treatment, resource recovery). Compared to thermal-based separation systems, membranes are modular and have the potential for more efficient separations without the need for extreme temperatures and large equipment. However, conventional polymeric membranes can suffer from fouling and poor stability at high temperatures and in harsh chemical environments, are subject to a permeability/selectivity trade-off, and remain hard to precisely control and engineer at the molecular level. These limitations call for the development of new membrane materials to yield significant performance improvements. The emergence of 2D nanomaterials allows for the creation of atomically thin membranes such as nanoporous graphene (NPG) and offers the opportunity to enhance chemical stability as well as increase both permeability and selectivity by significantly reducing membrane thickness and controlling the pore structure. Despite significant progress in theoretical and experimental work on NPG membranes, challenges remain to be addressed before NPGs can be deployed, particularly the control of leakage through defects, the limited experimentally supported understanding of transport, and the exploration and design of NPG systems for various health and environmental applications. This thesis extends the theoretical understanding of transport across graphene composite membranes and demonstrates how differences in the scaling of transport rates with pore size for viscous flow, gas effusion, dilute solute diffusion, and ion transport, together with the interplay between the graphene and support structure and the selective pore size distribution, influence leakage and selectivity. This thesis also explores how non-linear current-voltage relationships can arise in ultra-thin membranes due to induced charge effects. These models enable us to estimate permeation rates without the need for computationally intensive simulations. Furthermore, this thesis presents the application of NPG to hemodialysis, desalination, and ion separations. For dialysis, using system-level modeling of the device and its interaction with the body, we show that current dialyzers are membrane mass-transfer limited for protein-bound uremic toxins (PBUTs) and establish performance targets for NPG membranes to enhance PBUT removal. These targets are translated into the novel design and fabrication of a hierarchically supported NPG composite membrane. We also investigate the desalination performance achievable by practical NPGs with pore size distributions and ways to surpass the polymeric permeability/selectivity trade-off limit, and develop a novel experimental/analysis procedure to study simultaneous ion transport across NPG and strategies to enhance selectivity for the recovery of rare earth elements.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150272</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Control over Nuclear Spins</title>
<link>https://hdl.handle.net/1721.1/150264</link>
<description>Optical Control over Nuclear Spins
Xu, Haowei
The blossoming of quantum information science and technology in the past decades has been facilitated by the development of various qubit platforms. A qubit system that simultaneously has a long coherence time, fast operation, and large scalability is highly desirable. In particular, nuclear spins have been considered ideal quantum information carriers thanks to their exceptionally long coherence times, exceeding minutes and even hours at room temperature. However, the application of nuclear spins is hindered by their small energy scales and weak interactions with external fields.&#13;
&#13;
Light-matter interaction has attracted intense interest in recent years. Developments in both classical and quantum optics provide unprecedented opportunities for optical approaches. In condensed matter physics and materials science, optical approaches provide great flexibility in characterizing material properties, driving excitations, and even triggering phase transitions in materials. Meanwhile, light-matter interactions are widely used in quantum science; for example, spontaneous parametric down-conversion can be used to create entangled photon pairs. If nuclear spins could be manipulated with optical approaches, a number of potential applications would follow.&#13;
&#13;
However, an efficient interface between nuclear spins and optical photons is still lacking, hindered in particular by the formidable gap between nuclear spin frequencies (10³ ∼ 10⁶ Hz) and optical frequencies (∼ 10¹⁵ Hz). Previous works on optical control over nuclear spins rely on ancillary electron spins. In this thesis, we propose an opto-nuclear quadrupolar (ONQ) effect, whereby two-color optical photons can coherently couple with nuclear spins without the need for ancillary electron spins. Hence, several limitations due to the presence of electron spins, such as shortened nuclear spin coherence time, can be eased. Moreover, the frequencies of the optical lasers can be arbitrary in practice, so they can be fine-tuned to minimize material heating and to match telecom wavelengths for long-distance communications.&#13;
&#13;
Following the introduction to the mechanism, we suggest several applications of the ONQ effect. We focus on applications in quantum technologies, including using nuclear spins as a quantum memory to store quantum information carried by optical photons and as a quantum transducer between microwave/radio-frequency and optical photons. We also discuss how laser cooling of nuclear spin excitations can be realized via the ONQ effect.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150264</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vapor Transport Deposition for Perovskite Solar Cells</title>
<link>https://hdl.handle.net/1721.1/150261</link>
<description>Vapor Transport Deposition for Perovskite Solar Cells
Wassweiler, Ella Louise
As perovskite solar cells move closer to commercialization, vapor transport deposition (VTD) has emerged as a potential route for large-scale film growth of the photoactive and charge transport layers of these solar cells. As a low-cost alternative to thermal evaporation, VTD can deposit organic and inorganic perovskite precursor materials either sequentially or via co-deposition. Co-deposition can benefit film formation by increasing the deposition speed and improving film conversion. However, current co-deposition techniques can struggle to produce high-quality perovskite films, hampered by the limited thermal stability of organic precursors and by the difficulty of effectively navigating the broad deposition parameter space that controls perovskite film growth.&#13;
&#13;
Here, we use methylammonium lead iodide (MAPbI3) as an archetypal perovskite to identify degradation patterns and determine conditions for high-quality film growth. We show that material degradation during sublimation affects methylammonium iodide precursor powders and their contribution to the formation of MAPbI3 films differently than degradation caused solely by material transport through a high-temperature zone. By identifying degradation products and tracking film formation, we determine the degradation components that most affect MAPbI3 film performance. With these considerations, we design and construct a custom VTD system for co-deposition of perovskites with a wide range of deposition parameters available for studying film growth and optimizing performance. Additional investigation systematically builds from deposition on glass to an initial demonstration of solar cells. With these results, we give recommendations for improved VTD reactor design and a systematic description of conditions that aid in the optimization of high-performing VTD perovskite solar cells.&#13;
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150261</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Biogeochemistry of Methane Isotopologues in Marine and Lacustrine Sediments</title>
<link>https://hdl.handle.net/1721.1/150257</link>
<description>The Biogeochemistry of Methane Isotopologues in Marine and Lacustrine Sediments
Lalk, Ellen
Methane is a globally significant greenhouse gas and energy resource, and it is both a product and a reactant of microbial metabolisms. Multiple sources and sinks of methane can be challenging to distinguish from each other, complicating the understanding of methane budgets and of the role microbes play in mediating Earth’s carbon cycle. The relative abundances of methane isotopologues (e.g., ¹²CH₄, ¹³CH₄, ¹²CH₃D, and ¹³CH₃D) record process-based information about the formation conditions, transport, and fate of methane, and in select environments can serve as a temperature proxy. This geochemical tool is herein applied to methane from marine and lacustrine sediments to test assumptions about the prevailing mechanisms of its formation and consumption in these settings. &#13;
&#13;
This thesis describes 1) three studies of biogeochemical insights gained by quantifying the relative abundance of the clumped methane isotopologue ¹³CH₃D in samples from marine and lacustrine sediments, and 2) one foray into method development to improve the quantification of methane in these environments. Chapter 2 presents a global survey of marine gas hydrates in which isotope-based temperatures are used to assess whether linkages between methane sources and seepage-associated seafloor features match putative geologic models. Chapter 3 describes two kilometer-scale profiles of methane isotopologues from marine sediments, where the relationship between expected sediment temperature and isotope-based temperature is used to evaluate the temperature limit of microbial processing and abiotic re-equilibration mechanisms. Chapter 4 reports the largest set of methane isotopologue data from ebullition in a single lake basin, which is used to gauge the relative importance of aerobic and anaerobic methane oxidation at the study site and to recommend a general sampling strategy for constraining methane source signatures in similar lake settings. Chapter 5 explains the development of a method to quantify the in situ concentration of methane based on ratios of dissolved gases, and its comparison to four other methane quantification methods for surface sediments from marine cold seeps. The findings from this research contribute to ongoing efforts to understand the sedimentary carbon cycle and microbial activity in remote environments.&#13;
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150257</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Interfaces: From Biointerfaces to Mineralization</title>
<link>https://hdl.handle.net/1721.1/150253</link>
<description>Active Interfaces: From Biointerfaces to Mineralization
Leon, Victor Julio
Novel technologies that can help adapt to and mitigate climate change are needed. In this thesis, we develop three different technologies that tackle climate change from different angles. First, a fundamental study is conducted on a novel boiling droplet self-propulsion phenomenon that occurs when a surface is coated with a thin oil film. A scaling model is developed that predicts the velocity of the self-propelling droplet, and the model is used to estimate the increase in critical heat flux, which typically limits the efficiency of heat transfer in power plants. Second, we design a nano-engineered surface that increases the capture rate of CO₂ bubbles by a surrounding reactive liquid by 100x compared to normal surfaces. On a system level, our surface is 1000x more volumetrically efficient (1 kgCO₂/s/m³) than conventional CO₂ capture towers currently used at power plants (1 gCO₂/s/m³), paving the way towards smaller-scale, modular CO₂ capture units that could be used on smaller point sources like cars and buildings. Finally, we design nanometric, high-performance dielectric films to reduce the required cleanings of algae photobioreactor walls by 3x, which would increase the efficiency of the conversion of captured CO₂ to value products using algae. By applying low voltage (~1V) and power (~1nW), the electrostatic interaction between the algae cell and the dielectric surface can be modulated to either enhance or inhibit the cell adhesion rate. As such, the films are also used to pattern cells on surfaces. Overall, this thesis makes contributions towards the adaptation to and mitigation of climate change by improving the performance of current technologies (increasing the critical heat flux of boiling droplets), improving CO₂ capture (nano-engineered bubble capture), and improving CO₂ conversion to value products (nanometric dielectric films for algae photobioreactors).
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150253</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>V for Venous Pressure</title>
<link>https://hdl.handle.net/1721.1/150249</link>
<description>V for Venous Pressure
Jaffe, Alex T.
Cardiovascular disease is the world’s leading cause of death. Striving to decrease cardiovascular-related deaths and improve quality of life, medical treatments have advanced, and society has become better informed about leading a healthy lifestyle. In parallel, our ability to monitor patients with cardiovascular disease has progressed substantially. Yet many measurements still require invasive means for clinically acceptable accuracy. Medical ultrasound imaging noninvasively provides images of the heart and major blood vessels in real time while emitting no harmful radiation and costing relatively little to operate. In this thesis, force-coupled ultrasound imaging techniques are developed to create accurate and noninvasive methods to measure central venous pressure and central arterial pressure.&#13;
&#13;
Force-coupled ultrasound imaging of blood vessels is a process that outputs ultrasound images containing at least one segmented blood vessel of interest and assigns a force to each image. Data are acquired with a force-coupled ultrasound probe, an ultrasound probe that measures the force applied. Three data processing steps then proceed automatically in order: (1) ultrasound images are synchronized with force data, (2) the blood vessel of interest is detected within the force-coupled ultrasound images, and (3) the blood vessel is segmented in each relevant image. Central arterial pressure is estimated calibration-free through force-coupled ultrasound imaging of the carotid artery combined with inverse finite element modeling [formula]. Collapse force, the force necessary to completely occlude a vein in a particular anatomical location, of the internal jugular vein is shown to predict central venous pressure with high accuracy at MIT [formula] over a limited range of venous pressures in healthy subjects, and at Massachusetts General Hospital [formula] over a broader range of venous pressures in heart failure intensive care unit patients. Additionally, central arterial and central venous pressure are simultaneously estimated through force-coupled ultrasound imaging and inverse finite element modeling of the carotid artery and internal jugular vein in the same ultrasound viewing window.&#13;
&#13;
The proposed force-coupled ultrasound imaging techniques are well-suited to improve how central venous pressure is measured in patients with decompensated heart failure. These methods also have potential to provide a more accurate and facile central venous pressure measurement for compensated and early stage heart failure and a more informed central arterial pressure measurement for cardiovascular disease in general.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150249</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling the Bio-Nano Interface via Engineered Layer-by-Layer Nanoparticles for Treatment of Biofilm-Based Infections</title>
<link>https://hdl.handle.net/1721.1/150246</link>
<description>Controlling the Bio-Nano Interface via Engineered Layer-by-Layer Nanoparticles for Treatment of Biofilm-Based Infections
Deiss-Yehiely, Elad
Biofilm-associated infections are among the largest health threats, causing over half a million deaths and costing society billions of dollars each year. Biofilms are associated with 65-80% of human infections and are implicated in antibiotic resistance. Biofilms are sessile, extracellular matrix-enclosed microbial masses that survive hostile environments. This self-produced matrix is a viscous meshwork of extracellular polymeric substances (EPS) that attenuates antibiotic penetration due to its mechanical and physicochemical properties. The standard of care for biofilm infections is prolonged, high-dose antibiotic treatment, which is mostly ineffective and results in harmful side effects. Coupled with a dry antibiotic pipeline, there is a need for new strategies to effectively deliver high concentrations of antibiotics through the biofilm.&#13;
&#13;
Nanoparticle (NP)-based drug delivery systems represent an exciting solution for biofilm eradication. NPs can be loaded with antibiotics, targeted to the site of interest, and used to deliver high local therapeutic concentrations. Furthermore, next-generation NP systems can be designed with desired surface functionalities to enhance efficacy. Indeed, the layer-by-layer (LbL) platform technology, in which oppositely charged polymers are sequentially adsorbed onto charged colloidal NP templates, offers a unique fabrication method to assemble modular, uniform panels of NPs with distinct surface chemistries. This thesis investigates three distinct NP surfaces designed to enhance desired cellular or matrix interactions for therapeutic benefit.&#13;
&#13;
The first part of this thesis describes the development of a tunable family of pH-responsive LbL NPs with enhanced penetration and permeation throughout biofilms. A key design consideration was that positively charged NPs bind the EPS matrix and facilitate permeation; however, they are toxic and rapidly cleared in vivo. To conserve the positive surface charge while mitigating the accompanying toxicity and clearance, a panel of pH-responsive charge-reversing polymers that respond to the acidic biofilm microenvironment was synthesized. When these polymers were layered onto LbL NPs, an increased charge-reversal rate resulted in increased penetration and accumulation throughout the biofilm. Moreover, tobramycin-loaded charge-reversing NPs produced a three-fold reduction in the bacterial burden of P. aeruginosa biofilms as compared to free drug, demonstrating a potential high-efficacy treatment.&#13;
&#13;
The second part of the thesis investigates interactions of distinct NP surfaces with three main biofilm components: the alginate, psl, and pel polysaccharides. A new panel of LbL NPs with varied surface chemistries was assembled from readily available polymers, and interactions with biofilms produced by inducible alginate, psl, or pel P. aeruginosa mutants were analyzed by confocal microscopy. NPs with carboxylated surfaces showed increased colocalization with biofilms produced by mutants expressing alginate as compared to NPs with sulfated surfaces. These results are clinically significant, as alginate is well characterized to be overexpressed in the airways of Cystic Fibrosis patients infected with P. aeruginosa. These results lay the foundation for designing new NP drug delivery vehicles with the potential to increase antimicrobial delivery throughout the biofilm via controlled biofilm interaction.&#13;
&#13;
The penultimate chapter describes investigations of how surface presentation of cellular binding epitopes can be optimized for eukaryotic cellular association. While not related to biofilm interactions, this study examined novel strategies to control the bio-nano interface through rational design of NP surfaces. These studies demonstrated that a bottlebrush polymer architecture layered onto NPs binds cell surface receptors more avidly than covalently conjugated and traditionally layered NP constructs. The bottlebrush structure affords binding ligands physical distance from the surface, thereby maximally presenting them to the cell. The thesis ends with a description of the optimization of two in vivo models of biofilm-associated infection, namely excisional wounds and airways, for testing the efficacy of systemically administered charge-switchable NPs.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150246</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Engineering for Viral Vector Manufacturing</title>
<link>https://hdl.handle.net/1721.1/150243</link>
<description>Systems Engineering for Viral Vector Manufacturing
Nguyen, Tam Ngoc Thanh
Gene therapies have emerged in recent years as a great promise for treating a variety of otherwise difficult-to-treat diseases. Recombinant adeno-associated virus (rAAV) is the current vehicle of choice for in vivo gene therapy owing to its reduced toxicity and robust, long-term transgene expression. Rapidly rising interest in both the academic and private sectors has led to increasing demand for gene-therapy products, with more than 200 ongoing global clinical trials. This large demand has created a need for manufacturing large quantities of rAAVs. However, current state-of-the-art viral vector manufacturing methods still fall short of meeting current and future demands. Transient transfection via triple transfection of plasmids is the most common manufacturing platform at the clinical stage due to its flexibility in changing the plasmids and transgene.&#13;
&#13;
Triple transfection suffers from low yields and a low fraction of full capsids in the harvest. The low percentage of full capsids (1) leads to high doses, which could trigger immunogenicity reactions in patients, and (2) requires extensive purification, which increases product development cost. In order to advance gene therapy development and improve patient access to these therapeutics, manufacturing must be advanced.&#13;
&#13;
At the current state of process development in rAAV vector manufacturing, batch mode is still the method of choice, and evaluating the impact of system inputs on the outcome still relies on empirical approaches. Statistical and empirical approaches, such as design of experiments, can be straightforward for optimizing processes but only within a defined parameter space, whereas mechanistic models can provide physical insight into and understanding of the process, multivariable interactions and dynamics to guide novel production design, and extrapolated hypotheses that can be tested. While rAAV production is still mostly performed in batch mode, continuous processing can greatly reduce the footprint of scale-up and provide a much higher degree of flexibility, yet it remains unexplored for transient transfection to produce rAAV. In this thesis, I present a step-by-step model-based approach to creating novel methods to produce rAAV vectors, culminating in a continuous manufacturing platform.&#13;
&#13;
A quantitative analysis of the system via a mechanistic, mathematical model is useful to organize and exploit information obtained from existing data, gain understanding of process dynamics, and predict responses to various process inputs. This thesis constructs a mechanistic, single-cell, triple-transfection model based on AAV replication biology that enables the comprehensive understanding of viral vector production kinetics and facilitates design of processes.&#13;
&#13;
As derived from mechanistic understanding of the viral production process, the rapid saturation of the rate of viral capsid synthesis combined with the late onset of viral genome replication could be responsible for the low ratio of full capsids at harvest. This thesis designs a multi-stage transfection strategy in fed-batch that extends the viral capsid production timeline to improve the ratio of full to empty capsids while keeping the total plasmid dosage constant. The multi-stage transfection method improved the ratio of full capsids in the harvest while preserving the genome titer. &#13;
&#13;
Continuous processing provides great advantages over batch and fed-batch operations in flexibility and footprint reduction at scale-up. This thesis develops a perfusion bioreactor platform to produce rAAV at high cell density, leveraging understanding from the mechanistic model and multi-stage transfection. The continuous process enables high cell density transfection and reduces genome titer yield per unit weight of plasmid.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150243</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximation of Large Stiff Acausal Models</title>
<link>https://hdl.handle.net/1721.1/150242</link>
<description>Approximation of Large Stiff Acausal Models
Anantharaman, Ranjan
Simulations drive mission-critical decision making in many fields, but are prone to computational intractability, which severely limits an engineer’s productivity whilst designing practical systems. In particular, simulating systems with widely separated timescales is prone to a computational problem called stiffness. In this thesis, we aim to alleviate these issues by the use of approximate models called surrogates, which match the full system to high fidelity whilst being inexpensive to simulate. We introduce a general, easy-to-automate, data-driven method to generate surrogates, called Continuous-Time Echo State Networks (CTESN), that can capture multiple widely separated timescales. We comment on its implementation and then propose an active learning scheme for adaptively choosing training points. We then present several examples and case studies of stiff high-dimensional acausal systems from diverse domains: heating, ventilation and cooling (HVAC) systems, quantitative systems pharmacology models, and electrical circuits, where we accelerate their simulation by multiple orders of magnitude. We then deploy these surrogates in the context of many downstream tasks, such as global optimization, predicting non-linear system response, and global sensitivity analysis, accelerating all tasks by two orders of magnitude. Lastly, we show that our surrogate modeling architecture can also be used as a universal adaptive filter.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150242</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified View of Protein Low-complexity Regions (LCRs) Across Species</title>
<link>https://hdl.handle.net/1721.1/150241</link>
<description>A Unified View of Protein Low-complexity Regions (LCRs) Across Species
Lee, Byron
Low-complexity regions (LCRs) in proteins play a role in a variety of important cellular processes spanning different fields of biology, such as transcription, extracellular structure, and stress response. LCRs have been shown to vary in amino acid composition and structure, and can act as interacting domains capable of forming phase-separated higher-order assemblies. However, we lack a unified view of LCRs that incorporates all of the information in their sequences, features, relationships, and functions.&#13;
&#13;
In this thesis, I present a unified view of LCRs by 1) co-developing a framework based on the features and relationships of LCRs which are important in their roles as versatile interacting and phase-separating domains and 2) seeing whether this framework may provide a more general understanding of the functions of LCRs in proteins. Using the systematic dotplot matrix approach that we developed, we define LCR type/copy relationships for proteins across the proteome. Based on these definitions, we show the importance of K-rich LCR copy number for the RNA polymerase I subunit RPA43 for both assembly in vitro and localization in cells, demonstrating how principles of LCR copy number can relate these two processes. Moreover, by mapping regions of LCR sequence space to higher-order assemblies, such as the nucleolus, metazoan extracellular matrix and plant cell wall, we relate LCR functions across different fields and suggest that LCR functions may be unified in their roles in higher-order assemblies. Using this unified view, we uncover scaffold-client relationships among E-rich LCR-containing proteins in the nucleolus and discover TCOF1 as a self-assembling scaffold of the nucleolar fibrillar center. We go on to uncover previously undescribed regions of LCR sequence space with signatures of higher-order assemblies, including a teleost-specific T/H-rich sequence space. Thus, this work provides a framework that can unify the disparate functions of LCRs and enables discovery of how LCRs encode higher-order assemblies of organisms.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150241</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The impact of intratumor heterogeneity on anti-tumor immunity</title>
<link>https://hdl.handle.net/1721.1/150240</link>
<description>The impact of intratumor heterogeneity on anti-tumor immunity
Nguyen, Kim Bich
Cancer immunotherapies, notably checkpoint blockade therapy (CBT), have proven efficacy in controlling cancer growth, with a fraction of patients exhibiting durable responses. However, the majority of patients do not respond to CBT, and the molecular determinants of resistance have not been fully characterized. Many of these therapies depend on cytotoxic CD8+ T cells, which recognize specific tumor cells and kill them. Specific recognition is bestowed by the presence of neoantigens, derived from mutated sequences arising during tumorigenesis. Neoantigens are attractive targets because they are specific to tumor cells and have not been subjected to central immune tolerance mechanisms. However, there is mounting evidence from clinical data that the clonal status of neoantigens impacts the anti-tumor immune response. High intratumor heterogeneity (ITH), where the majority of neoantigens are expressed subclonally, is correlated with poor clinical response to CBT and weaker anti-tumor immunity. The mechanism by which ITH blunts tumor-reactive CD8+ T cells is still unknown. &#13;
&#13;
To study the effect of clonal versus subclonal neoantigen expression on the resulting neoantigen-specific responses, we developed a transplantable murine cancer model to characterize the immune response against a defined set of neoantigens expressed either clonally or subclonally, modeling low or high ITH, respectively. We find that clonal expression of a weakly immunogenic neoantigen together with a relatively strong neoantigen increased the immunogenicity of tumors. Mechanistically, we found that clonal neoantigen expression allowed cross-presenting dendritic cells to acquire and present both neoantigens. Dual neoantigen presentation was associated with a more mature DC phenotype and a higher stimulatory capacity. &#13;
&#13;
These data suggest that clonal neoantigen expression can induce more potent anti-tumor responses through interactions between more mature dendritic cells and T cells, and they highlight the therapeutic potential of targeting subclonal neoantigens using vaccination and of modulating the tumor immune microenvironment to increase response to CBT. The model we developed is being used to recapitulate the more complex subclonal neoantigen architectures found in patients, to identify and refine the rules that determine the quality and structure of anti-tumor immune responses. Identifying these rules and the mechanisms that drive them will inform therapeutic strategies to circumvent poor anti-tumor immune responses.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150240</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Overlapping Finite Element Formulation</title>
<link>https://hdl.handle.net/1721.1/150239</link>
<description>A Unified Overlapping Finite Element Formulation
Lee, Sungkwon
The use of finite element analysis has now become ubiquitous in engineering practice. However, obtaining an adequate mesh for a geometrically complex analysis domain is still challenging in traditional finite element analysis since the elements can be severely geometrically distorted and thus perform poorly.&#13;
&#13;
The AMORE meshing scheme and the overlapping finite elements have been recently proposed to easily obtain an effective mesh for a geometrically complex domain. The AMORE scheme, which stands for automatic meshing with overlapping and regular elements, fills the analysis domain mostly utilizing undistorted finite elements and discretizes the regions not filled by these elements using overlapping finite elements which are quite distortion insensitive.&#13;
&#13;
This thesis proposes a new unified overlapping element formulation that applies easily to any of the basic elements, including the three-dimensional elements. The new formulation is based on the earlier formulation but no longer requires the calculation of the Shepard functions used in meshless schemes. The new one-, two-, and three-dimensional overlapping elements contain no spurious zero-energy modes, pass the patch test, and show good distortion insensitivity.&#13;
&#13;
Given that the previous studies on overlapping elements focused on solving two-dimensional linear elastic problems, we demonstrate the use of the new elements and AMORE not only in two-dimensional but also in three-dimensional linear structural analyses. Particularly, the thesis includes the first study on the modal and mode superposition solutions using overlapping finite elements. Illustrative example solutions show that the new discretizations are very promising in reducing the meshing effort in linear structural analyses.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150239</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning, Sensing, and Control for Contact-rich Robotic Manipulation with Quasi-static Contact Models</title>
<link>https://hdl.handle.net/1721.1/150237</link>
<description>Planning, Sensing, and Control for Contact-rich Robotic Manipulation with Quasi-static Contact Models
Pang, Tao
Human-level manipulation is not limited to the fingertips. We use our palms, arms, body, and legs to make and break contact with the manipuland whenever and wherever necessary. In contrast, the vast majority of robotic manipulators are artificially divided into a small end effector that makes all the interactions and a hefty arm that must always remain collision-free.&#13;
&#13;
One big reason behind this unsatisfying limitation on robotic manipulators is the lack of planning algorithms that can efficiently reason about contact-rich interactions. Navigating the non-smooth landscape of contact dynamics constraints is perhaps the biggest challenge faced by model-based contact-rich planners. Existing methods either descend along the gradients of smoothed contact dynamics and get stuck in local minima, or search globally through contact mode transitions and get overwhelmed by the exponentially many modes as the problem gets more complex. Drawing lessons from the recent empirical success of reinforcement learning in contact-rich manipulation, we hypothesize that smoothing and global search are both necessary for contact-rich planners to succeed. In this thesis, we propose a model-based contact-rich planner which searches globally under the guidance of a novel contact dynamics model. The model is quasi-static, convex, differentiable and amenable to smoothing, all of which are features designed to improve the planner's efficiency. Our method can generate complex dexterous manipulation action plans with less than 1 minute of online computation on a regular desktop computer. The plans can also transfer directly to robotic hardware under certain conditions.&#13;
&#13;
In addition to planning, a complete manipulation pipeline also needs contact force sensing and feedback control. Therefore, this thesis also studies the viability of estimating external contact forces from only joint torque measurements, and explores how to use feedback to keep the magnitude of contact forces bounded in an accidental collision.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150237</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bearingless Slice Motor and 6-DoF Position Sensor for Extracorporeal Pump</title>
<link>https://hdl.handle.net/1721.1/150233</link>
<description>Bearingless Slice Motor and 6-DoF Position Sensor for Extracorporeal Pump
Kant, Krishan
Bearingless slice motors perform the function of a conventional motor with a suspended rotor, and therefore require no bearings or other mechanical support. Such combined motor and suspension operation requires a motor design that can generate force as well as torque in order to stabilize all 6 degrees of freedom. The slice configuration of such motors provides passive stability in the axial and tilt directions, so no closed-loop control is required for these degrees of freedom. In this thesis, the intended application for the bearingless motor is an extracorporeal blood pump. Extracorporeal blood pumps are used to maintain blood flow as a temporary ventricular assist device and/or blood oxygenator. Bearingless pumps can help to avoid blood damage caused by friction and heat generation due to seals and bearings. These pumps are also single-use disposables, so reducing the cost of the pump elements reduces the operational cost.&#13;
&#13;
This thesis presents the design, implementation and testing of a novel bearingless blood pump. Two bearingless motors are considered here: a split-teeth flux reversal motor and an interior permanent magnet motor. A novel bearingless motor design with a magnet-less rotor is derived from the conventional flux reversal motor. The stator of this motor contains magnets which enhance the airgap flux and thereby make the motor more power efficient. The new bearingless motor can also generate force on the rotor independent of the rotor angle, which simplifies the radial suspension control design. Despite its magnet-less rotor, this motor has passive magnetic stiffness, torque and force capability comparable to motors with permanent magnets in the rotor.&#13;
&#13;
An interior permanent magnet (IPM) motor is the other motor designed and tested in this thesis. This motor uses a rotor redesigned from an earlier motor version so that the motor can operate stably at higher speeds. It was found in the earlier motor that cogging torque can generate significant disturbances in the radial positions and render the suspension unstable at higher speeds. Therefore, the cogging torque became an important design parameter for the new IPM rotor.&#13;
&#13;
Another important part of the thesis is the design and development of a 6-degrees-of-freedom position sensor. The major challenge was to obtain all the measurements while accessing only the bottom surface of the rotor. The sensor is composed of two parts: the first measures the radial X/Y positions, and the other measures the Z, theta_X, theta_Y and theta_Z motions. Both work on the principle of eddy currents. The radial X/Y position sensor has a linear variable differential transformer (LVDT) type structure, where the difference in the induced voltages yields the position measurement. This LVDT sensor uses a conductive aluminium target glued to the bottom surface of the rotor. As the rotor moves, the target motion varies the induced voltages in the coils, which yields the position estimates. Since this sensor is placed under the rotor, the radial and axial motions are strongly coupled; however, an appropriate target design minimizes the coupling. The 4-DoF Z, theta_X, theta_Y and theta_Z position sensor uses a printed circuit board (PCB) with 8 coils, driven by inductance-to-digital converters that measure the inductances of the coils. The conductive target is designed such that the measured inductance variation in these 8 coils provides position measurements in the 4 DoF. The radial sensor is the most critical for bearingless operation; it achieves 1 kHz bandwidth and better than 1.2 um resolution in the X and Y DoF.&#13;
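The differential readout principle described above can be sketched in a few lines (a hypothetical minimal example; the actual sensor's target geometry, excitation, demodulation, and calibration are far more involved):

```python
# Ratiometric LVDT-style readout (a hypothetical minimal sketch, not the
# thesis's actual sensor processing). The difference of the two induced
# coil voltages, normalized by their sum, varies monotonically with
# target displacement, and normalization rejects common-mode drift.
def lvdt_position(v1, v2, gain=1.0):
    """Estimate displacement from the two secondary-coil voltages."""
    return gain * (v1 - v2) / (v1 + v2)

print(lvdt_position(1.0, 1.0))  # 0.0: target centered
print(lvdt_position(1.2, 0.8))  # positive: target displaced one way
```

Dividing by the sum of the voltages makes the estimate insensitive to overall excitation amplitude, which is one reason the differential structure is favored.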
&#13;
Both motors are integrated with a pump and tested in a closed-loop flow circuit. The flux reversal motor is tested with water, and the IPM motor is tested with water and also with blood flowing through an oxygenator.&#13;
&#13;
The motor stator coils are driven with custom designed current-controlled switching amplifiers. The amplifiers are designed to achieve a high current control bandwidth. They are also designed and adapted to minimize interference with the position sensors.&#13;
&#13;
All the components required for a blood pump are designed and developed in this thesis. The design and implementation of the individual components pose various challenges; a further major challenge is to integrate and operate them together as a blood pump without mutual interference. All of these challenges are addressed in the thesis, and successful operation of the complete blood pump system is demonstrated.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150233</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphene-based Biochemical Sensing Array: Materials, System Design and Data Processing</title>
<link>https://hdl.handle.net/1721.1/150232</link>
<description>Graphene-based Biochemical Sensing Array: Materials, System Design and Data Processing
Xue, Mantian
Graphene and other two-dimensional materials have garnered significant attention as potential biochemical and chemical sensors due to their unique physical and electrical properties. However, their use has been limited by significant device-to-device variation resulting from non-uniform synthesis and fabrication processes. To overcome this challenge, we have developed a bioelectronic sensing platform comprising thousands of integrated sensing units, custom-designed high-speed readout electronics, and machine-learning-based inference. This platform has demonstrated reconfigurable sensing capability in both the liquid and gas phases, with highly sensitive, reversible, and real-time responses to potassium, sodium, and calcium ions in complex solutions. Additionally, using a biomimetic "dual-monolayer" construct, we have observed nature-like specific interactions with the CXCL12 ligand and HIV coat glycoprotein in 100% human serum. Furthermore, the platform is capable of providing highly distinguishable fingerprints of relevant biomarkers in breath. Machine learning models trained on multi-dimensional data collected by the multiplexed sensor array are used to enhance the sensing system’s functionality. In summary, our bioelectronic sensing platform represents an end-to-end, versatile, robust, and high-performing solution for the detection of biochemical species, with potential applications in health monitoring and disease diagnosis.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150232</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Estimation from Dependent and Adversarial Data</title>
<link>https://hdl.handle.net/1721.1/150231</link>
<description>Statistical Estimation from Dependent and Adversarial Data
Dagan, Yuval
This thesis studies learning and estimation from data that is not independent, but rather falls into one of the following categories: (1) data with strong correlations, such as social network correlations and data over a spatial or temporal domain; and (2) adversarial time series data, where the algorithm can possibly influence future data points in an adversarial manner. I will define mathematical models and learning problems that aim to capture these scenarios and describe polynomial-time algorithms to solve them. For (1), I will formulate the learning problem as one of learning Ising models and present algorithms for learning Ising models in different contexts. For (2), I will use the formulation of adversarial streaming algorithms by Ben-Eliezer and Yogev [2020] and present a tight analysis.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150231</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards General-purpose Vision via Multiview Contrastive Learning</title>
<link>https://hdl.handle.net/1721.1/150229</link>
<description>Towards General-purpose Vision via Multiview Contrastive Learning
Tian, Yonglong
Representation learning plays a key role in building robust and general-purpose vision learners, and is a long-standing problem. It becomes increasingly interesting with the continuing explosion of data in our era. &#13;
However, most previous approaches are based on task-specific design strategies that do not generalize. This thesis instead proposes and studies multiview contrastive learning, which is based on a simple mathematical principle: discriminating between samples from the joint distribution and samples from the product of marginals. We first introduce the general framework of multiview contrastive learning (MCL). We demonstrate that this simple framework can handle various representation learning problems and often advances the state of the art. We then try to understand the role of view selection in multiview contrastive learning from an information-theoretic point of view, arriving at an "InfoMin" principle that connects to minimal sufficient statistics and information bottlenecks. This principle is further demonstrated by supervised contrastive learning, which rivals or even beats supervised cross-entropy training on standard image classification benchmarks. In the last part, we discuss other applications (such as knowledge distillation) and improvements of multiview contrastive learning (e.g., how to improve its efficiency on uncurated data).
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150229</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simple, Fast, Scalable, and Reliable Multiprocessor Algorithms</title>
<link>https://hdl.handle.net/1721.1/150219</link>
<description>Simple, Fast, Scalable, and Reliable Multiprocessor Algorithms
Jayanti, Siddhartha Visveswara
In this thesis, I identify simplicity, speed, scalability, and reliability as four core design goals for multiprocessor algorithms, and design and analyze algorithms that meet these goals.&#13;
&#13;
I design the first scalable algorithm for concurrent union-find. Our algorithm provides almost-linear speed-up, performing just [formula] work when p processes execute a total of m operations on an instance with n nodes. I furnish the algorithm with a rigorous, machine-verified proof of correctness, and prove that its work-complexity is optimal amongst a class of symmetric algorithms, which captures the complexities of all known concurrent union-find algorithms. The algorithm is lightning quick in practice: it has improved the state-of-the-art in model checking [Bloemen] and spatial clustering [Wang et al.], and is the fastest algorithm for computing connected components on both CPUs and GPUs [Dhulipala et al., Hong et al.].&#13;
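For readers unfamiliar with the underlying data structure, the classical sequential union-find with path compression and union by rank can be sketched as follows (this is only the textbook structure the thesis parallelizes, not the concurrent algorithm itself):

```python
# Classical sequential union-find (a background sketch, not the thesis's
# lock-free concurrent algorithm). Path compression plus union by rank
# gives near-constant amortized time per operation.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Locate the root, then point every node on the path at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same component
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra  # attach the shallower tree under the deeper
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2); uf.union(3, 4)
print(uf.find(0) == uf.find(2))  # True: 0 and 2 are connected
print(uf.find(0) == uf.find(3))  # False: separate components
```

The concurrent version must make `find` and `union` safe under simultaneous access by p processes while preserving this work bound, which is the hard part addressed in the thesis.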
&#13;
I introduce concurrent fast arrays, which are linearizable wait-free arrays that support all operations, including initialization, in just constant time. As an application, I design the first fixed-length fast hash table, which supports constant time initialization, insertions, and queries.&#13;
&#13;
I define సామాన్య జాగృతి (generalized wake-up), which generalizes the information propagation problem called wake-up. I prove fundamental hardness results about this problem, and through reductions, show that any linearizable queue, stack, priority queue, counter, or union-find object's work complexity must increase with process count; these lower bounds are robust to both randomization and amortization. This thesis includes the original results in Telugu with Sanskrit abstract, along with their English translation.&#13;
&#13;
I design optimal complexity locks for real-time and persistent memory systems. Our abortable queue lock is the first abortable lock to achieve O(1) amortized RMR complexity for both cache-coherent (CC) and distributed shared memory (DSM) systems. It additionally provides "abortable first-come-first-served" fairness and supports "fast aborts". Our recoverable queue lock is the first recoverable lock to achieve the optimal O(log p / log log p) worst-case RMR complexity on both CC and DSM persistent memory systems. Both locks are innovations on our newly devised standard lock, whose design simplifies and unifies several previously known techniques.&#13;
&#13;
This thesis also emphasizes rigorous guarantees for concurrent algorithms. I devise a novel universal, sound, and complete "tracking" technique for proving linearizability and strong linearizability of concurrent algorithms. My collaborators and I have used this technique to give machine-verified proofs of correctness for multicore queue, union-find, and snapshot algorithms.&#13;
&#13;
Finally, I prove and experimentally validate that asynchronous "HOGWILD!'' Gibbs Sampling, a technique born from machine learning practice, can be used to accurately estimate expectations of polynomial and other statistics of graphical models satisfying Dobrushin's condition.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150219</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing accelerated key-value store: From SSDs to datacenter servers</title>
<link>https://hdl.handle.net/1721.1/150217</link>
<description>Implementing accelerated key-value store: From SSDs to datacenter servers
Chung, Chanwoo
Efficient management of storage is a primary concern in all systems dealing with Big Data. In the modern era, flash-based solid-state drives (SSDs) are widely adopted in computer systems, slowly replacing hard disk drives. Because much of the data generated and collected today is not well structured, key-value stores have become one of the most important building blocks in datacenters thanks to their simple interface. Key-value stores are often used as an internal engine for other databases.&#13;
&#13;
This thesis explores whether a modern flash-based SSD augmented with near-storage computation can be re-designed to provide a cheaper and more power-efficient solution for maintaining various key-value services in the cloud. The thesis explores a new type of storage device, called a key-value SSD (KV-SSD), that exposes a key-value interface instead of the legacy block interface to the host machine.&#13;
&#13;
The two alternative power- and cost-efficient solutions proposed to replace existing key-value store (KVS) components, LightStore and PinK, are both based on KV-SSDs. LightStore is a new storage architecture based on a group of network-attached KV-SSDs without storage host servers. LightStore primarily targets large objects and emulates other types of data stores using application-side adapters. Compared to existing storage server-based solutions, LightStore is up to 2.3X more space-efficient and 7.4X more energy-efficient. PinK is a novel LSM-tree design for KV-SSDs with software and hardware techniques that provide bounded tail latency and design flexibility. The PinK prototype reduces read latency and 99th-percentile latency by 22% and improves read throughput by 44% compared to the LightStore prototype, and shows 42-73% better latency and 37% better throughput than a commercial hash-based prototype. A proposed future design based on smart SSDs, block-based SSDs with an accelerator, shows how smart SSDs can help existing software KVSs on hosts. We believe these alternatives for running various types of key-value stores in datacenters would drastically reduce storage management costs.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150217</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundational Integration Verification of Diverse Software and Hardware Components</title>
<link>https://hdl.handle.net/1721.1/150216</link>
<description>Foundational Integration Verification of Diverse Software and Hardware Components
Erbsen, Andres
The engineering of computer systems is distinguished by a long-standing tradition of building on quicksand.&#13;
Even the most venerable and critical systems have a history of serious bugs and security vulnerabilities.&#13;
Human fallibility continues to prevail.&#13;
&#13;
Computer-checked mathematical proofs of software correctness have emerged as a promising method to rule out large classes of bugs.&#13;
However, the appropriate notion of correctness for a computer-systems component is exceedingly difficult to specify correctly in isolation, and unrelated verification of adjacent components does not rule out bugs due to their interactions. Therefore, I argue for (1) centering systems-verification efforts around interface specifications within a proof assistant, (2) proving both clients and implementations of an interface, and (3) using these results to prove an integrated-correctness theorem stated without referencing the internal interfaces.&#13;
&#13;
I present a serious (several-year, several-person) exploration of what formally proven computer-systems development would look like if this practice were standard, culminating in precedent-setting case studies involving embedded implementations of networked software and elliptic-curve cryptography. Whole-system correctness theorems spanning from application behavior to hardware designs are proven by instantiating correctness proofs of compilers, translation validators, processor implementations, and mathematical theories. For example, RISC-V machine code for a public-key-authenticated Ethernet server is proven to always eventually satisfy a trace predicate.&#13;
&#13;
Specifications of imperative languages within the system are modeled using an underappreciated technique that we call omnisemantics.&#13;
Choosing an inductively defined weakest-precondition predicate transformer as the semantics of a language allows unspecified behavior to be encoded using rules with universally quantified premises, greatly simplifying compiler-correctness proofs and program-logic construction.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150216</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale programmable silicon photonics for quantum and classical machine learning</title>
<link>https://hdl.handle.net/1721.1/150215</link>
<description>Large-scale programmable silicon photonics for quantum and classical machine learning
Prabhu, Mihika
Photonic technologies provide many unique physical advantages including ultra-high bandwidths, energy-efficient operations, and low coupling to environmental noise. Furthermore, recent advances in foundry-based manufacturing platforms have enabled the emerging field of integrated systems photonics. In contrast to their bulk optics counterparts, these systems can co-integrate dense ensembles of active photonic and electronic components on a single wafer with high phase stability and small device footprints. Initial demonstrations of each element in the integrated photonics stack—sources, processors, and detectors—motivate the development of wafer-scale photonic integrated circuit implementations, which are poised to form a key building block for fundamental advancements in computing, communications, and sensing.&#13;
&#13;
The first part of this thesis will discuss the development and early system-level demonstrations of linear programmable nanophotonic processors in the silicon-on-insulator platform for applications in quantum and classical machine learning and information processing. Using our developed processor architecture, we then present a nanophotonic Ising sampler for noise-assisted combinatorial optimization. Subsequently, we present a novel, foundry-compatible platform for integrating telecommunication-wavelength artificial atom quantum emitters directly in silicon photonic circuits. Finally, we report a capacity analysis of a structured interferometric receiver implemented with a silicon photonic processor for detection of optical signals in photon-sparse communication links.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150215</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Satisfiability Algorithms and Connections between Algorithms and Circuit Lower Bounds</title>
<link>https://hdl.handle.net/1721.1/150213</link>
<description>Satisfiability Algorithms and Connections between Algorithms and Circuit Lower Bounds
Vyas, Nikhil
In this thesis we study satisfiability algorithms and connections between algorithms and circuit lower bounds. We give new results in the following three areas:&#13;
&#13;
Oracles and Algorithmic Methods for Proving Lower Bounds: We give an equivalence between relativizing circuit lower bounds (circuit lower bounds which hold with respect to all oracles) and the existence of uniform circuits for a problem we call the MISSING-STRING problem. This connection allows us to (a) prove new time hierarchy results and (b) reduce various open problems, such as whether there exists an oracle B such that [formula], to circuit lower bounds for the MISSING-STRING problem. We also give new oracles which show that the "algorithms to lower bounds" framework of Williams does not relativize.&#13;
&#13;
Circuit Lower Bounds from #SAT Algorithms: Williams' paradigm for lower bounds gives circuit lower bounds from circuit satisfiability algorithms. We build upon this paradigm and study lower bounds that can be obtained from algorithms which count the number of satisfying solutions of a circuit, i.e., #SAT algorithms. Informally, we show that #SAT algorithms for a circuit class C imply lower bounds for the class of functions which can be written as “sparse symmetric” functions of C. This allows us to show that NQP (nondeterministic quasi-polynomial time) is not contained in the class of [formula] circuits.&#13;
&#13;
Complexity of k-SAT and its variants: k-SAT is a canonical NP-complete problem for k ≥ 3, and tremendous effort has been devoted to finding faster algorithms for it and to understanding its complexity. We study the time complexity of k-SAT and its variants, such as average-case k-SAT and Unique k-SAT. We give new algorithms for various average-case variants of k-SAT that are faster than the best known algorithms for worst-case k-SAT. We also give a fine-grained reduction from k-SAT to Unique k-SAT which shows that their time complexities are tightly linked.
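As background on the style of randomized k-SAT algorithm this line of work builds upon, here is a sketch of Schöning's classic local-search algorithm (shown only for illustration; it is not one of the thesis's new algorithms):

```python
import random

# Schöning's randomized local search for k-SAT (background sketch, not a
# thesis algorithm). A formula is a list of clauses; each clause is a
# list of signed ints: literal v means variable v is true, -v false.
def schoening(clauses, n_vars, tries=200, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        # Random restart: fresh uniformly random assignment (index 0 unused).
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]
        for _ in range(3 * n_vars):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign  # satisfying assignment found
            # Flip a random variable from a random unsatisfied clause.
            lit = rng.choice(rng.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
f = [[1, 2], [-1, 3], [-2, -3]]
a = schoening(f, 3)
print(a is not None)
```

For 3-SAT this walk gives expected running time roughly (4/3)^n, far better than exhaustive search over 2^n assignments; the average-case algorithms in the thesis beat such worst-case bounds on random formulas.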
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150213</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeled and Unmodeled Approaches for Quantification of the Cardiac Autonomic Nervous System</title>
<link>https://hdl.handle.net/1721.1/150211</link>
<description>Modeled and Unmodeled Approaches for Quantification of the Cardiac Autonomic Nervous System
Birjiniuk, Jonathan
The autonomic nervous system (ANS) is the branch of the nervous system that regulates involuntary functions within the body, such as blood pressure control (the baroreceptor reflex) and arousal. Tracking ANS activity in patients may provide information about traumatic brain injury, heart failure, stroke, diabetes, and sports performance: information that can ultimately aid clinical decision making. In this work, we develop a non-invasive model-based approach and characterize a large class of unmodeled approaches for quantifying the autonomic nervous system, with the goal of improving interpretability.&#13;
&#13;
Autonomic activity may be tracked in real-time through appropriate modeling of the baroreceptor reflex. We propose a causal, parametric beat-to-beat model relating systolic blood pressure (input) to heart rate (output). This model is validated on data from 13 nonsmoking adult males without any history of cardiopulmonary disease, subjected to both pharmacological blockade and postural changes. The model tracks the expected effects of changing posture (P &lt; 0.01) and sympathetic blockade (P &lt; 0.05) on autonomic balance. In many cases, model parameters also exhibit greater sensitivity to changes in autonomic activity and balance than autonomic indices derived from the power spectral density of heart rate variability.&#13;
&#13;
Heart rate variability (HRV) is a measure of the beat-to-beat variation of instantaneous heart rate that is commonly used as a measure of autonomic nervous system function. Using data collected from animal experiments, we show that it is related to heart rate (HR) and not an independent measure. In New Zealand White rabbits (N=8), HR was titrated using a combination of electrical stimulation of the right cervical vagus nerve (simulating increased parasympathetic activity, decreasing heart rate) and the administration of intravenous propranolol (decreasing sympathetic activity through beta-adrenergic blockade, decreasing heart rate). Analyses of HRV against HR over the entire animal population as well as for each individual animal reveal a decaying exponential relationship between HRV and HR in mammals with an intact ANS.&#13;
&#13;
Interpretation of HRV metrics is limited due to a lack of subject-level quantification of the variance implicit in both the random process that generates HRV as well as the methods by which HRV metrics are computed. We formulate a method for quantifying the variability of HRV metrics computed in the frequency domain, and then apply this method to the aforementioned human dataset. In the best case, the coefficients of variation for total power, low frequency power, and high frequency power metrics were found to be 28%, 32%, and 39%, respectively, for subjects before autonomic blockade. This level of variance translates to requiring over 170 minutes' worth of data, collected under identical physiological conditions, in order to bound the estimated total power within 10% of the true value with probability 90%. Our results suggest that HRV metrics are highly variable and need to be reported with corresponding measurements of uncertainty. In addition, population estimates and inferences that are derived from HRV metrics need to adjust for subject-level variability.
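Under a simple independence and normal-approximation assumption (a back-of-the-envelope illustration, not the thesis's exact derivation), the number of spectral estimates that must be averaged to bound the error can be sketched as:

```python
import math

# Illustration only (assumed model, not the thesis's method): if
# independent power estimates have coefficient of variation cv, the mean
# of n estimates lies within a fraction eps of the true value with
# probability p when n >= (z * cv / eps)^2, where z is the two-sided
# normal quantile for p (z = 1.645 for p = 0.90).
def estimates_needed(cv, eps=0.10, z=1.645):
    return math.ceil((z * cv / eps) ** 2)

for cv in (0.28, 0.32, 0.39):
    print(cv, estimates_needed(cv))
```

Multiplying the required number of estimation windows by the window length (several minutes of stationary recording each) is how total-data requirements on the order of hours, like the 170 minutes reported above, arise.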
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150211</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics-Enabled Quality and Safety Management Methods for High-Stakes Manufacturing Applications</title>
<link>https://hdl.handle.net/1721.1/150209</link>
<description>Analytics-Enabled Quality and Safety Management Methods for High-Stakes Manufacturing Applications
Wilde, Joshua
Quality management is a critical aspect of the management of manufacturing processes, particularly in industries where product reliability and safety are paramount. With increased digitization and automation, there is growing potential for analytical tools combined with ubiquitous data to aid the transition from quality management practices based on expert intuition and qualitative insights to more data-driven decision making. To assist in bridging the gap between this potential and current implementation practices, this thesis develops new methods for analytics-enabled quality and safety management.&#13;
&#13;
Chapter 2 focuses on the problem of detecting clinically-relevant quality variation in pharmaceutical manufacturing of biologic drugs. Currently, both pre-market clinical trials and post-marketing studies focus on variability in safety outcomes due to individual patient-drug factors. However, the inherent complexity of biologic drug manufacturing and distribution raises potential risks that temporal variability in these systems could also impact clinical outcomes. The chapter describes a data-driven signal detection method using Hidden Markov models designed to monitor for manufacturing lot-dependent changes based on reported clinical outcomes. The method is tested on three lot sequences from a major biologic drug. The results suggest correlated lot-to-lot variability in two of the three, possibly related to changing manufacturing and supply chain conditions that may impact per-lot adverse event (AE) rates.&#13;
&#13;
Chapter 3 explores the problem of creating structured access to unstructured quality data captured in free-text documents. Though operator reports and logs are ubiquitous in many manufacturing processes, one of the main barriers to their effective use in decision making is that unstructured data are often unclassified, which makes trend identification and other actionable analyses challenging. This chapter describes a machine learning and optimization-driven methodology to classify unstructured text in process environments into a known taxonomy of categories without access to an existing labeled training set. To accomplish this, the proposed method leverages information from existing reference documentation and formulates a linear program to select a set of key words that distinguish the categories from each other. Results from three test datasets with ground-truth labels indicate that the method delivers strong classification accuracy, both in absolute terms and relative to alternative methods.&#13;
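The classification step can be caricatured as simple keyword scoring (a hypothetical minimal sketch; in the thesis, the distinguishing keyword sets are selected by a linear program from reference documentation rather than supplied by hand):

```python
# Hypothetical minimal keyword-scoring classifier (illustration only;
# the thesis's method optimizes the keyword sets via a linear program).
# Each category is scored by how many of its keywords appear in the text;
# the highest-scoring category wins.
def classify(text, keywords_by_category):
    words = text.lower().split()
    scores = {cat: sum(words.count(kw) for kw in kws)
              for cat, kws in keywords_by_category.items()}
    return max(scores, key=scores.get)

kw = {"electrical": ["voltage", "short", "wiring"],
      "mechanical": ["bearing", "vibration", "shaft"]}
print(classify("bearing vibration detected on main shaft", kw))
```

The key difficulty the thesis addresses is choosing keyword sets that separate the categories well without any labeled training examples, which is what the linear-program formulation provides.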
&#13;
Chapter 4 focuses on a quality test for an optical transceiver module, a high-tech hardware product, manufactured by an industrial partner. Currently, human experts review all test logs for quality problems. This chapter proposes a two-stage machine learning classification model that is able to automatically pass the vast majority of tested products and drastically reduces the need for manual review. Assessment on out-of-sample real test data suggests that the two-stage model can reduce the manual review burden on the operator by 75-99% while on average satisfying the requirement to limit the number of passed defective modules.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150209</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Moist Baroclinic Instability and Macroturbulence of the Midlatitude Atmosphere</title>
<link>https://hdl.handle.net/1721.1/150203</link>
<description>Moist Baroclinic Instability and Macroturbulence of the Midlatitude Atmosphere
Kohl, Matthieu
Water and its change of phase greatly enrich the dynamics of the midlatitude atmosphere and challenge us to extend our theories of baroclinic instability and macroturbulence beyond dry adiabatic dynamics. Two specific phenomena in which latent heating plays a key role and that are poorly understood form the central focus of this thesis.&#13;
&#13;
Past research has identified a special class of storms, dubbed Diabatic Rossby Vortices (DRVs), which derive their energy from latent heating rather than baroclinic effects and as such fall outside the traditional understanding of midlatitude storm formation. DRVs have been implicated in extreme and poorly predicted forms of cyclogenesis along the east coast of the US and the west coast of Europe, and have recently emerged as the dominant mode of instability in an idealized GCM with climate warming. While we have a good theoretical understanding of dry cyclogenesis, our understanding of DRV formation and propagation, as well as their growth rates and length scales, remains poor. In chapters 2 and 4 of my thesis, a fluid dynamical theory is developed for DRVs, both in terms of simple conceptual models of moist instability and in terms of the potential vorticity dynamics of finite-amplitude storms. In particular, the dispersion relation for the growth rate and length scale of DRVs is derived analytically, and it is shown that DRVs grow faster than both dry and moist baroclinic waves in the limit of a convectively-neutral stratification.&#13;
&#13;
Latent heating also makes upward motion stronger than downward motion, and this asymmetry has important implications for the distribution of precipitation and its extremes. Current theories based around small-amplitude modes greatly overestimate the change in asymmetry with warming. In chapter 3, we develop a toy-model that takes into account adjustment of the atmosphere to a state of moist macroturbulence and show that it better reproduces the slow increase in the asymmetry from winter to summer over the seasonal cycle in reanalysis and with climate warming in idealized simulations.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150203</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Airline Dynamic Offer Creation Using A Markov Chain Choice Model</title>
<link>https://hdl.handle.net/1721.1/150196</link>
<description>Airline Dynamic Offer Creation Using A Markov Chain Choice Model
Wang, Kevin K.
Customers have become accustomed to real-time dynamic pricing and offer recommendations in retailing and e-commerce. Airline pricing and revenue management is falling behind with its one-size-fits-all approach, in which all customers are offered the same static offers and prices. The airline industry has developed a roadmap toward dynamic offer creation, which will enable airlines to dynamically bundle and price a set of offers customized to a shopping request.&#13;
&#13;
Realizing this vision requires advancements in both distribution and science. On the distribution side, these advancements will come with the New Distribution Capability (NDC). On the science side – which is the focus of this dissertation – little progress has been made despite years of research. Along this roadmap, entirely new price optimization problems and revenue opportunities arise, for which no known solutions exist. For example, the optimization of both ancillary bundle offers and branded fare families becomes increasingly relevant as the airlines gain additional dynamic pricing capabilities.&#13;
&#13;
We develop a novel approach to solve these dynamic offer creation problems using a Markov chain choice model (MCCM). We discover that it is particularly suited to solving the bundle pricing problem, which also has potential applications in other industries. We also extend the MCCM to solve joint pricing and assortment optimization problems and estimate MCCM parameters from realistic datasets that an airline could collect through price experimentation.&#13;
&#13;
In simulations, we find that our model can increase both ancillary and total revenue over an a la carte baseline pricing model that mimics current industry practice. The MCCM has attractive properties – the resulting offer sets and prices discourage purchases of less profitable offers and nudge customers toward more profitable ones. Our model proposes offers that are most relevant to the customer.&#13;
&#13;
Our research also identifies challenges of using the MCCM in practice. We find that parameter estimation becomes challenging with limited data and large offer sets. The assortment optimization problem is also more challenging to solve, given the number of possible offer sets. Our results contain important takeaways about the relative revenue benefits of offer set pricing and offer set selection, and when bundling is most effective.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150196</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specification and verification of sequential machines in rule-based hardware languages</title>
<link>https://hdl.handle.net/1721.1/150194</link>
<description>Specification and verification of sequential machines in rule-based hardware languages
Bourgeat, Thomas
The design of correct hardware is an important concern in the age of information, where more and more companies are designing chips tailored to their workloads. This raises two well-known problems: how to specify what is a correct design, and, once the notion of correctness is set, how to prove that a given design is correct.&#13;
&#13;
Standard practice relies on a mix of techniques. It uses testing to run concrete scenarios to verify a concrete property: “this specific test passed,” but this gives only weak overall correctness guarantees. The other technique is hardware formal verification, which phrases correctness as custom temporal-logic formulae and checks that a concrete design satisfies those properties by solving a large set of corresponding Boolean equations.&#13;
&#13;
This thesis addresses the hardware-verification question from a different angle: we intend to mechanically formalize the specifications and correctness arguments that architects make in their minds when they design machines, and we encode them in an interactive theorem prover. For this, we address three challenges: (1) We build an expressive framework in which we can express both synthesizable designs and abstract specifications, and we connect and navigate between them in the proof assistant. (2) We enforce several strict language restrictions, allowing us to side-step previous difficulties in specifications and proofs. (3) After acknowledging that we need a modular methodology to keep the verification effort under control, we showcase previously ignored difficulties in specifying complex sequential machines modularly. We introduce generalized specifications and develop various proof techniques to prove that concrete designs are instances of their modular specifications. We finally apply our methodology to prove modularly the correctness of a family of pipelined processors, independently of its memory.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150194</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Template-directed Assembly of Silk in Advanced Materials for Food Security</title>
<link>https://hdl.handle.net/1721.1/150193</link>
<description>Template-directed Assembly of Silk in Advanced Materials for Food Security
Sun, Hui
The ability to provide sufficient, safe and nutritious food to the global population, which is projected to reach 9.7 billion by 2050, is becoming a major challenge for agriculture, especially under the pressure of climate change. Motivated by the pressing need to ensure food security for the fast-growing population, this thesis is dedicated to bringing biomaterials-based innovation to agriculture and food production, by developing new ways of nanofabrication with structural biopolymers (e.g. silk proteins) to generate macroscopic yet nanostructured functional materials that can be interfaced with food and plants. Inspired by the way living systems regulate the disorder-to-order transition of biomacromolecules to achieve complex materials, a template-based directed assembly approach is developed to guide hierarchical materials growth from disordered molecules all the way up to the macroscale, a process hereby termed templated crystallization. Using silk fibroin as an example, templated crystallization refers to the employment of organic templates (specifically, ordered peptide seeds) to drive a phase transformation of silk molecules from disordered to ordered conformations, followed by directed assembly of the reconfigured fibroin chains into higher order structures (e.g. β-sheeted nanofibrils). Modulation of the relative concentration of silk fibroin and peptide seeds, the silk fibroin molecular weight, and pH allows for precise control over nanofibril morphologies and mechanical properties. More importantly, silk polymorphs can be engineered by varying the peptide seeds used. Further, integration of the bottom-up templated crystallization with rapidly scalable top-down manufacturing enables generation of macroscopic nanostructured materials with potential applications in information encryption, surface functionalization, and printable three-dimensional scaffolds of customized architecture and controlled anisotropy.
In particular, by integrating silk polymorph design with physical unclonable functions (PUFs), a cryptographic protocol based on PUF tags made of silk microparticles is developed for authentication of agricultural goods (e.g. seeds). Finally, nanostructured silk membranes composed of vertically aligned silk nanotubes and nanopillars are fabricated by template wetting-guided material assembly, with applications in anti-fouling, oil-water separation and as packaging materials with improved gas barrier properties.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150193</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Tools to Quantitatively Probe Mycobacterium tuberculosis-induced Host Phagosomal and Signaling Responses</title>
<link>https://hdl.handle.net/1721.1/150188</link>
<description>Development of Tools to Quantitatively Probe Mycobacterium tuberculosis-induced Host Phagosomal and Signaling Responses
Solomon, Sydney Leigh
Globally, tuberculosis (TB) remains a leading cause of infectious death, second only to COVID-19 in 2021. With over ten million new infections occurring every year, there remains an urgent need for an improved vaccine to reduce transmission and protect against TB complications. Additionally, the growing prevalence of multi-drug resistant strains of Mycobacterium tuberculosis (Mtb) necessitates new modalities for host state modulation to promote bacterial clearance. During infection with Mtb, host macrophages engulf the bacteria and sequester them in the phagosome. This interaction involves the recognition of Mtb by pattern recognition receptors (PRRs) leading to downstream signaling and cytokine production. While there are several hypotheses about how an infected cell controls Mtb, specific mechanisms have yet to fully explain how phagosomal composition and immune activation during infection influence Mtb clearance. Current methods to assess these areas of study lack specificity and application to live, virulent Mtb infection.&#13;
&#13;
This thesis develops and extends new tools to assess host-cell state during infection with Mtb. First, I sought to decipher the composition of the Mtb-containing phagosome through pathogen-mediated proximity labeling. This led to the development of three strategies for chemically conjugating heterologous proteins and enzymes. Future work will apply these strategies to functionalize the surface of Mtb with proximity ligases to identify the host proteins directly interacting with the bacteria. Next, I investigated the relationship between Mtb activation of PRRs and bacterial clearance. I utilized high-resolution single-cell approaches to measure signal activation via pathway phosphorylation and cytokine production in both directly Mtb-infected and bystander macrophages. Through modulation of these pathways, I decoupled inflammatory and antimicrobial capacity, leading to the hypothesis that productive antimicrobial responses require another signal in addition to PRR activation and phagocytosis. Collectively, this work contributes new tools vital for the identification of desirable host-cell states to promote the design of new vaccine correlates of protection and host-directed interventions.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150188</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>State Space Methods Using Biologically-Relevant Generative Models to Analyze Neural Signals</title>
<link>https://hdl.handle.net/1721.1/150187</link>
<description>State Space Methods Using Biologically-Relevant Generative Models to Analyze Neural Signals
Beck, Amanda M.
Neural oscillations have long been recognized for their mechanistic importance in coordinating activity within and between brain circuits. Co-occurring broad-band, non-periodic signals are also ubiquitous in neural data and are thought to reflect the characteristics of population-level neuronal spiking activity. Identifying oscillatory activity distinct from broadband signals is therefore an important, yet surprisingly difficult, problem in neuroscience. Commonly-used bandpass filters produce spurious oscillations when applied to broad-band noise and may be ill-informed by canonical frequency bands. Curve-fitting procedures have been developed to identify peaks in the power spectrum distinct from broadband noise. Unfortunately, these ad hoc methods are prone to overfitting and are difficult to interpret in the absence of generative models to formally represent oscillatory behavior. Similarly, broadband power spectrum log-log slope or “1/f” curve-fitting methods have been developed to identify excitatory-inhibitory balance in the LFP or ECoG, but are not defined in terms of a generative model. Here we present three novel methods that utilize generative models to (1) identify and characterize neural oscillations distinct from broad-band noise, (2) apply this oscillatory structure to improve cortical source signal estimates inferred from scalp-level EEG recordings, and (3) identify and characterize excitatory and inhibitory neurotransmitter contributions to LFP signals.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150187</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating the Problem of Non-uniqueness in Fluid-flow Modeling</title>
<link>https://hdl.handle.net/1721.1/150179</link>
<description>Mitigating the Problem of Non-uniqueness in Fluid-flow Modeling
Al Nasser, Saleh Mohammed
Modeling fluid flow in porous media is a valuable and essential tool for developing underground resources such as hydrocarbon reservoirs, groundwater aquifers, or CO₂ sequestration projects. If done accurately, the modeling can provide a reliable forecast of future fluid behavior. Accurate modeling, and consequently reliable forecasting, requires knowing both the properties of the porous medium and the correct solutions to the physics equations describing the macroscopic fluid flow. Capturing the behavior of the observed data therefore often requires discretizing the porous medium into a large number of grid cells. Because the data are spatially sparse, however, the fluid flow cannot be modeled uniquely. In this thesis, we study ways to make the modeling of fluid flow in porous media less non-unique by exploring different model and data spaces. By reducing the number of grid cells, we quantitatively demonstrate the possibility of producing more accurate representations of reservoirs. Also, through resolution matrix analysis and the use of Shannon information entropy, we developed a method to acquire data adaptively for an optimum survey design. Additional data sets from self-potential or seismic surveys have complemented the fluid flow data in different joint inversion methods. Using self-potential data allows the detection of fractures with higher confidence. The seismic data were used in a cross-gradient joint inversion scheme to constrain the inversion of fluid flow data. The joint inversion helped achieve around a 16% reduction in the seismic velocity root-mean-square error (RMSE) and an almost 26% decrease in the permeability RMSE.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150179</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward High-throughput, Quantitative Platforms to Identify the Targets of Small Molecules</title>
<link>https://hdl.handle.net/1721.1/150178</link>
<description>Toward High-throughput, Quantitative Platforms to Identify the Targets of Small Molecules
Henry, Catherine Campbell
Target identification is a major challenge in probe and drug discovery. Current binding assays are unable to detect interactions between unoptimized probes and difficult targets, such as transcription factors. Here, we developed generalizable, high-throughput platforms that can rapidly identify the mechanism(s) of action of small molecules emerging from high-throughput screening (HTS) campaigns. Specifically, this project established a solid phase method to rapidly modify small molecules with moieties of interest using isocyanate-based chemistries. This chemical method can be used to quickly generate photoaffinity labeling analogs of small molecules that can be used in a covalent ELISA and mass spectrometry workflow to determine whether small molecules bind to a target of interest and identify off-target binders. Additionally, we created a synergistic critical path for assessing the mechanism of action and on-target activity of small molecules through the generation of an on-target transcriptional profile and application of the L1000 gene-expression platform. Together, these workflows and chemical tools will enable high-throughput studies of small molecule-protein interactions with a wide range of affinities and abundances and facilitate prioritization of small molecules that bind and modulate the function of difficult targets.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150178</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effective Learning in Non-Stationary Multiagent Environments</title>
<link>https://hdl.handle.net/1721.1/150177</link>
<description>Effective Learning in Non-Stationary Multiagent Environments
Kim, Dong Ki
Multiagent reinforcement learning (MARL) provides a principled framework for a group of artificial intelligence agents to learn collaborative and/or competitive behaviors at the level of human experts. Multiagent learning settings inherently pose much more complex problems than single-agent learning because an agent interacts both with the environment and with other agents. In particular, multiple agents learn simultaneously in MARL, leading to natural non-stationarity in the experiences encountered and thus requiring each agent to adapt its behavior with respect to potentially large changes in other agents' policies. This thesis aims to address the non-stationarity challenge in multiagent learning through three important topics: 1) adaptation, 2) convergence, and 3) state space. The first topic answers how an agent can learn effective adaptation strategies concerning other agents' changing policies by developing a new meta-learning framework. The second topic answers how agents can adapt and influence the joint learning process such that policies converge to more desirable limiting behaviors by the end of learning, based on a new game-theoretical solution concept.&#13;
Lastly, the third topic answers how state space size can be reduced through knowledge sharing and context-specific abstraction such that the learning complexity is less affected by non-stationarity. In summary, this thesis develops theoretical and algorithmic contributions that provide principled answers to the aforementioned topics on non-stationarity. The algorithms developed in this thesis demonstrate their effectiveness in a diverse suite of multiagent benchmark domains, including the full spectrum of mixed-incentive, competitive, and cooperative environments.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150177</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-NUM: Utility Maximization in Stochastic Queueing Networks</title>
<link>https://hdl.handle.net/1721.1/150175</link>
<description>Learning-NUM: Utility Maximization in Stochastic Queueing Networks
Fu, Xinzhe
Network Utility Maximization (NUM) studies the problem of allocating traffic rates to network users in order to maximize the users’ total utility subject to network resource constraints. We propose a new paradigm of utility maximization in stochastic queueing networks where the utility functions are unknown in advance but function values corresponding to the traffic rate decisions are observable after the traffic reaches the destination. The paradigm is called Learning-NUM, as it requires learning the utility functions while at the same time making intelligent network control decisions. We study various problems under the Learning-NUM paradigm where the goal is to design policies that maximize the total utility obtained by the end of a finite time horizon &#119879;, with performance measured by regret, defined as the expected gap in total utility with respect to the optimal dynamic policies that may have knowledge of the utility functions.&#13;
&#13;
For problems under the Learning-NUM paradigm, it can be shown that the expected utility of any policy is upper bounded by &#119879; times the value of a static optimization problem that can be considered as a fluid version of the Learning-NUM problem. Therefore, to achieve a low regret, the Learning-NUM problem amounts to learning the solution to the static optimization problem and translating the solution to network control policies based on the information available in stochastic queueing networks. Different from traditional online decision-making problems, Learning-NUM problems involve integration of learning and network control and dealing with feedback delay and unknown constraints.&#13;
&#13;
We start by considering Learning-NUM problems with linear utility functions in bipartite networks, where the corresponding static optimization problems are linear programs. We propose a priority-based network control policy using the extreme-point structure of the linear program, which, combined with an algorithm that learns the optimal extreme point, achieves logarithmic regret.&#13;
&#13;
Second, we study Learning-NUM problems with concave utility functions in bipartite networks. We extend an algorithm for stochastic convex optimization with bandit feedback to deal with the unknown constraints in Learning-NUM problems. The algorithm, combined with Join-the-Shortest-Queue routing, is shown to achieve order-optimal &#119874;˜(√ &#119879;)-regret for Learning-NUM problems in bipartite networks.&#13;
&#13;
Next, we study general Learning-NUM problems in multi-hop networks. The networks may contain time-varying channels and interference constraints that model wireless environments. We use observations on function values to construct estimates on the gradients of the utility functions. The gradient estimates are then plugged into a first-order primal-dual framework, which is further embedded in a parallel-instance paradigm to deal with the feedback delay. We show that the resulting policy achieves regret that grows sublinearly with &#119879;.&#13;
&#13;
Finally, we study the minimum delay routing problem in stochastic queueing networks through the lens of Learning-NUM. We model the problem under the Learning-NUM paradigm by interpreting the delay function, i.e., the function that maps the routing schemes to the steady-state delay of the network, as the utility function. Using the idea of gradient estimation through observations of function values, we propose a method to efficiently identify the minimum-delay routing scheme even when the delay functions are unknown.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150175</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of Hybrid Hemodynamics from Mechanical Support Devices in Cardiogenic Shock</title>
<link>https://hdl.handle.net/1721.1/150172</link>
<description>Optimization of Hybrid Hemodynamics from Mechanical Support Devices in Cardiogenic Shock
Goffer, Efrat Marcus
Cardiovascular mechanical circulatory support (MCS) offers the promise of forward blood flow maintenance and distal tissue perfusion without taxing the failing heart. However, there are no firm determinants of device initiation and titration, and demonstration of definitive clinical benefit remains elusive. In part this is due to limited understanding of pathophysiologic interplay and impact.&#13;
&#13;
We hypothesized that MCS use cannot be optimized without appreciation of its coupling with aortic dynamics – extending the concept of ventriculo-vascular coupling in native circulation to machine-augmented support. In both controlled porcine studies and a mock cardiovascular flow-loop with material properties, pressures, and flows that match human conditions, we examined the relative impact of the following MCS devices, alone and in combination: arterial unloading in the form of aortic counterpulsation; ventricular unloading and decoupling in the form of a transvalvular impeller pump; and cardiopulmonary bypass in the form of extracorporeal membrane oxygenation.&#13;
&#13;
This coupling paradigm allowed us to generate heatmaps of multiple hemodynamic metrics that define the shock and MCS-supported states, and a framework by which to appreciate MCS with adjunctive pharmacologic and mixed mechanical modalities. Indeed, optimum support was defined by the balance of these metrics, which can best be reduced to the matching of ventricular load with vascular compliance for optimization of ‘Hybrid Flows’ – flow patterns that emerged as the cumulative sum of native heart and MCS contributions.&#13;
&#13;
Translation of this work to the clinic could better inform MCS initiation, titration, and weaning and contribute to improving outcomes for cardiac failure and shock.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150172</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Interpretability and Transparency of Black-Box Models</title>
<link>https://hdl.handle.net/1721.1/150171</link>
<description>Techniques for Interpretability and Transparency of Black-Box Models
Zhou, Yilun
The last decade witnessed immense progress in machine learning, which has been deployed in many domains such as healthcare, finance and justice. However, recent advances are largely powered by deep neural networks, whose opacity hinders people's ability to inspect these models. Furthermore, legal requirements are being proposed that mandate a level of model understanding as a prerequisite to deployment and use. These factors have spurred research into increasing the interpretability and transparency of these models.&#13;
&#13;
This thesis makes several contributions in this direction. We start with a concise but practical overview of the current techniques for defining and evaluating explanations for model predictions. Then, we observe a novel duality between definitions and evaluations of various interpretability concepts, propose a new way to generate explanations and study the properties of these new explanations. Next, we investigate two fundamental properties of good explanations in detail: correctness -- whether the explanations are reflective of the model's internal decision making logic, and understandability -- whether humans can accurately infer the higher level and more general model behaviors from these explanations. For each aspect, we propose evaluations to assess existing model explanation methods and discuss their strengths and weaknesses. Following this, we ask the question of what instances to explain, and introduce the transparency-by-example perspective as an answer to this question. We demonstrate its benefits in revealing hidden properties of both image classifiers and robot controllers. Last, the thesis identifies directions for future research, and advocates for a tighter integration of model interpretability and transparency into the ecosystem of trustworthy machine learning research that also encompasses efforts such as fairness, robustness and privacy.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150171</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suppression of the Ubiquitin Ligase Function of FBXW7 Accelerates Metastatic Progression of Pancreatic Ductal Adenocarcinoma</title>
<link>https://hdl.handle.net/1721.1/150167</link>
<description>Suppression of the Ubiquitin Ligase Function of FBXW7 Accelerates Metastatic Progression of Pancreatic Ductal Adenocarcinoma
Cervantes Jaramillo, Grissel
Pancreatic ductal adenocarcinoma (PDAC) is the most lethal common malignancy because it is usually diagnosed at an advanced/metastatic stage. Dysregulation of protein stability and degradation has been associated with uncontrolled proliferation and genomic instability, promoting cancer progression to metastasis. One of the major regulators of protein degradation is the tumor suppressor FBXW7, the substrate recognition subunit of the SCF E3 ubiquitin ligase, which is frequently dysregulated in many cancers.&#13;
&#13;
The function and clinical significance of FBXW7 in pancreatic cancer have been studied in some detail. Pancreatic cancer patients with low FBXW7 expression levels have a poorer probability of survival than patients with high FBXW7 expression levels. Furthermore, Fbxw7 mutations and loss cooperate with KrasG12D to accelerate PDAC formation at high frequency, showing that Fbxw7 is an important tumor suppressor in Kras-driven pancreatic cancer. However, the impact of Fbxw7 expression and its substrates on pancreatic cancer progression to metastasis remains poorly understood.&#13;
&#13;
Here, we demonstrate that Fbxw7 loss accelerates progression and metastatic potential of pancreatic cancer in KrasG12D/+; Trp53-/- PDAC models, in both immunocompromised and immunocompetent hosts. We explore the impact of different Fbxw7 mutants on tumorigenesis, finding that the hotspot mutant R465 recapitulates the phenotype seen with complete loss of function of Fbxw7. Finally, we examined global proteomic changes upon loss of Fbxw7 to better understand mechanistically the role of Fbxw7 in PDAC progression to metastasis. This study addresses novel facets of PDAC metastasis and has the potential to identify novel therapeutic strategies for advanced and metastatic disease.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150167</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Lottery Ticket Hypothesis: On Sparse, Trainable Neural Networks</title>
<link>https://hdl.handle.net/1721.1/150166</link>
<description>The Lottery Ticket Hypothesis: On Sparse, Trainable Neural Networks
Frankle, Jonathan
In this thesis, I show that, from an early point in training, typical neural networks for computer vision contain subnetworks capable of training in isolation to the same accuracy as the original unpruned network. These subnetworks—which I find retroactively by pruning after training and rewinding weights to their values from earlier in training—are the same size as those produced by state-of-the-art pruning techniques applied after training. They rely on a combination of structure and initialization: if either is modified (by reinitializing the network or shuffling which weights are pruned in each layer), accuracy drops.&#13;
&#13;
In small-scale settings, I show that these subnetworks exist from initialization; in large-scale settings, I show that they exist early in training (&lt; 5% of the way through). In general, I find these subnetworks when the outcome of optimizing them becomes robust to the sample of SGD noise used to train them; that is, when they train to the same convex region of the loss landscape regardless of data order. This occurs at initialization in small-scale settings and early in training in large-scale settings.&#13;
&#13;
The implication of these findings is that it may be possible to prune neural networks early in training, which would create an opportunity to substantially reduce the cost of training from that point forward. In service of this goal, I establish a framework for what success would look like in solving this problem and survey existing techniques for pruning neural networks at initialization and early in training. I find that magnitude pruning at initialization matches state-of-the-art performance for this task. In addition, the only information that existing techniques extract is the per-layer proportions in which to prune the network; in the case of magnitude pruning, this means that the only signals necessary to achieve state-of-the-art results are the per-layer widths used by variance-scaled initialization techniques.
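As a hypothetical illustration (not code from the thesis), the prune-after-training-then-rewind procedure described above can be sketched per layer with NumPy; the function name and the dictionary-of-arrays interface are assumptions made for this sketch:

```python
import numpy as np

def magnitude_prune_with_rewind(trained_weights, early_weights, sparsity):
    """Sketch of prune-then-rewind: prune the smallest-magnitude weights
    of the trained network (per layer), then reset the surviving weights
    to their values saved from early in training."""
    masks, rewound = {}, {}
    for name, w_final in trained_weights.items():
        # Keep the (1 - sparsity) fraction of largest-magnitude weights.
        k = int(w_final.size * (1.0 - sparsity))
        threshold = np.sort(np.abs(w_final), axis=None)[-k]
        mask = (np.abs(w_final) >= threshold).astype(w_final.dtype)
        masks[name] = mask
        # Rewind surviving weights to their early-training values.
        rewound[name] = mask * early_weights[name]
    return masks, rewound
```

The rewound subnetwork, not the pruned final weights, is then retrained in isolation; reinitializing instead of rewinding is the control that, per the abstract, causes accuracy to drop.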
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150166</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating, Controlling, and Understanding Lithium-ion Battery Models</title>
<link>https://hdl.handle.net/1721.1/150165</link>
<description>Simulating, Controlling, and Understanding Lithium-ion Battery Models
Berliner, Marc Dylan
Lithium-ion batteries are widespread in consumer electronics, electric vehicles, and grid storage. Developing better batteries and more intelligent battery management systems is an active area of research – operating batteries safely, maximizing their lifetimes, and inventing new materials are essential for their continued proliferation. To create the next generation of batteries, researchers must tackle challenging, multidisciplinary problems with an enormous design space.&#13;
&#13;
Experimentally testing batteries can be expensive and time-consuming. Extensive analyses usually involve dozens of batteries that may be tested continuously over several weeks or months. Efficient physics-based and data-driven modeling can significantly reduce the cost and time requirements of battery development. Still, physically accurate simulations face numerous technical and theoretical barriers stemming from difficulties in measuring and analyzing battery internals during cycling.&#13;
&#13;
This thesis focuses on simulating, controlling, and understanding rigorous physics-based lithium-ion battery models. The first Part of this thesis presents PETLION, an open-source, high-performance implementation of the Porous Electrode Theory (PET) model. PET typically contains several hundred nonlinear differential-algebraic equations (DAEs) after discretization. This package is designed from a systems engineering perspective to be robust and highly efficient, about 100–1000x faster than other available implementations of PET while maintaining the same physical accuracy. PETLION is the cornerstone for the following Parts of this thesis, permitting deep analyses of PET which would otherwise be prohibitively expensive. &#13;
&#13;
The second Part of this thesis investigates a mixed continuous-discrete (hybrid) approach for fast charging of batteries in real-time. Traditional fast charging problem setups perform optimal control with a reduced-order/empirical model to find the optimal current profile, maximizing the capacity subject to safety and degradation constraints. Instead, a hybrid charging procedure is proposed, simultaneously solving the battery system of equations and the embedded solution to the constraint-based control problem. Here, the fast-charging current profile is found via direct simulation, which dynamically switches between active path constraints. Novel op
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150165</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards biosensor-assisted directed evolution of myo-inositol&#13;
oxygenase</title>
<link>https://hdl.handle.net/1721.1/150164</link>
<description>Towards biosensor-assisted directed evolution of myo-inositol&#13;
oxygenase
Nash, Jennifer Kaczmarek
Biosensors are powerful tools that leverage transcriptional regulation mechanisms to modulate gene expression in response to a variety of stimuli. This can allow for metabolic pathway regulation, high throughput screening, and many other applications. In this work, we focus on their uses in high throughput screening, the roadblocks that prevent their effective operation and the ways in which we overcame these challenges, and an example of the application of a biosensor in an optimized system. The intended application of the biosensor for this work was the directed evolution of myo-inositol oxygenase (MIOX), the enzyme that catalyzes the penultimate, rate-limiting step in the metabolic production of glucarate from glucose in E. coli. Here, we develop a biosensor that recognizes glucuronate, the direct product of MIOX, and optimize our system towards screening a library of MIOX genes for an improved enzyme variant.&#13;
&#13;
Biosensors for fructuronate and glucuronate were developed and investigated for their ability to indirectly and directly detect glucuronate levels via the UxuR and ExuR transcription factors (TFs), respectively. Ultimately, due to its ability to directly detect glucuronate, the ExuR biosensor was selected for application to high throughput screening. This biosensor was characterized via exogenous glucuronate addition and endogenous glucuronate production from MI, the substrate of MIOX, and from glucose, the initial substrate of the glucarate pathway.&#13;
&#13;
In characterizing the biosensor, we found that it was strongly impacted by experimental conditions, as varying fluorescent output was observed when glucuronate was produced from differing MIOX homologs. It appeared that the capacity of the biosensor was limited when burden was imposed on the system. A variety of rational engineering approaches were attempted and evaluated with regard to their ability to yield a more consistent biosensor output. Finally, the optimal biosensor configuration and experimental setup were applied to successfully detect the difference between two homologs that differed in productivity.&#13;
&#13;
This work investigated the challenges that prevent biosensors from successfully aiding high-throughput screening. Even with these challenges, we were able to demonstrate the ability of the biosensor to distinguish between enzyme variants of differing productivity levels, indicating its promise should these barriers be fully overcome.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150164</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Autism Spectrum Disorder: Fragile X syndrome and Rett syndrome</title>
<link>https://hdl.handle.net/1721.1/150163</link>
<description>Modeling Autism Spectrum Disorder: Fragile X syndrome and Rett syndrome
Colvin, Steve
Autism Spectrum Disorder (ASD) is a collection of developmental disabilities characterized by outward features, such as impaired socialization skills and repetitive behaviors, and often associated with broad health complications that together carry lifelong impact. Unfortunately, ASD symptoms occur at the nexus of higher-order cognitive functions, which arise through inordinately complex cellular and molecular processes. This pleiotropic nature means that, in many cases, ASD remains poorly understood at the mechanistic level. Tremendous efforts are underway to define the etiology of ASD, but there must be equally rigorous attention to the systems employed to model these genetics in order to ensure accurate insights into their pathophysiology and treatment options.&#13;
&#13;
We sought to extend the current state of animal models for two of the most prevalent monogenic causes of ASD and investigate their therapeutic potential. Fragile X syndrome is a trinucleotide repeat disorder where repeat expansion in the FMR1 5’ untranslated region triggers its methylation and represses its transcription. Yet we were unable to reproduce this methylation when we aggressively expanded the repeat length of mouse Fmr1, suggesting fundamental differences in species biology that may limit the utility of mice for studying treatment in Fragile X. We also initiated a project to target Rett syndrome - a highly debilitating condition typically caused by de novo loss-of-function mutations in MECP2 - with emerging RNA-based gene therapy approaches. We theorized that the A-to-I converting REPAIR system was capable of recoding all Rett nonsense mutations into tryptophan, and created a proof-of-concept tryptophan model that lacked any discernible Rett phenotypes. We also validated REPAIR editing in vitro as the first step towards a treatment to reverse Rett syndrome in patients. Together, these models contribute valuable insights into the relationship between genes and mechanisms, and provide a path forward for potential therapeutic development.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150163</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clonal architecture and genetic regulation of the developing mammalian cerebral cortex</title>
<link>https://hdl.handle.net/1721.1/150161</link>
<description>Clonal architecture and genetic regulation of the developing mammalian cerebral cortex
DeGennaro, Ellen M.
The mammalian brain is a remarkably intricate organ that starts out as a single layer of epithelial cells. Over the course of development, this single layer of cells grows into many layers of neurons and glia that differentiate into genetically-defined subtypes residing in different regions of the brain with different mature functions. While development is ongoing, it is challenging to study individual cells as they divide, differentiate, and migrate through the growing layers of tissue. However, it is possible to learn about this process indirectly through the use of multiple in vitro and in vivo models of mammalian brains. This thesis examines how brains develop at the single-cell level by first establishing a new technique to study the relationship between cell lineage and developmental gene expression in the mouse brain, as well as the gyrencephalic (folded) ferret brain, which is more structurally similar to the human brain. In a subsequent chapter, recently published in Developmental Cell, I show how we utilized specific patient mutations in the gene KIF26A to elucidate the developmental roles of this gene in cell migration, axon and dendrite growth, and apoptosis. For the final chapter, published in Neuron in 2021, I describe our work investigating changes in gene regulatory programs that are important for brain development over the course of mammalian and human-specific evolution.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150161</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring quantum geometry and quantum sensing&#13;
with spin defects in diamond</title>
<link>https://hdl.handle.net/1721.1/150142</link>
<description>Exploring quantum geometry and quantum sensing&#13;
with spin defects in diamond
Li, Changhao
Recent years have witnessed rapid development in the field of quantum information science. Quantum technologies promise to revolutionize many fields, ranging from fast computation and simulation to precise sensing and secure communication. While the performance of quantum devices in the current noisy intermediate-scale quantum (NISQ) era is still limited, exploring intriguing applications of various quantum systems has been an active research area. Among the many physical platforms, solid-state spin defects, such as the nitrogen-vacancy (NV) center in diamond, have attracted considerable attention thanks to their good controllability and long coherence times even at room temperature.&#13;
&#13;
In this thesis, we present our efforts to demonstrate promising quantum applications based on the NV system. With microwave and optical pulses, we achieve good control over the NV center system, which enables us to engineer desired Hamiltonians and probe quantum states.&#13;
In particular, the geometry of a quantum state is characterized by the quantum geometric tensor (QGT), which finds applications in simulating topological materials and in quantum metrology. We experimentally measure the QGT of an engineered Hamiltonian in a single NV center using a weak periodic modulation method. Based on this measurement, we reveal the existence of a tensor monopole that characterizes a tensor gauge field in parameter space. Furthermore, we find that the QGT plays a significant role in quantum multi-parameter estimation: it not only quantifies the precision limit when estimating parameters but also relates to the attainability of precision bounds.&#13;
&#13;
Thanks to the sensitivity studied above, NV centers have emerged as powerful quantum sensors to detect various signals, ranging from electromagnetic fields to temperature, with high sensitivity and spatial resolution. Detecting biological or chemical signals is more challenging, but a promising avenue is to transduce them into magnetic noise that the NV center is sensitive to. We demonstrate this strategy by measuring the rotational Brownian motion of magnetic molecules. Then, exploiting the dipolar interaction between NV centers and magnetic molecules, we design a hybrid sensor that is capable of detecting the SARS-CoV-2 virus RNA with ultrahigh sensitivity and low false negative rate. Finally, we will show that the charge state of NV centers is sensitive to surface modification of nanodiamond, which enables us to design alkali ion sensors with the help of chemical engineering.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150142</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating the Jacobi Iteration for Solving Linear Systems of Equations using Theory, Machine Learning, and High Performance Computing</title>
<link>https://hdl.handle.net/1721.1/150137</link>
<description>Accelerating the Jacobi Iteration for Solving Linear Systems of Equations using Theory, Machine Learning, and High Performance Computing
Islam, Mohammad Shafaet
High fidelity scientific simulations modeling physical phenomena typically require solving large sparse linear systems of equations which result from the discretization of a partial differential equation (PDE) by some numerical method. The solution of these linear systems often takes a vast amount of computational time. Solving these linear systems efficiently requires the use of massively parallel hardware with high computational throughput (such as GPUs), as well as the development of linear solver algorithms which respect the memory hierarchy of these hardware architectures to achieve the best performance.&#13;
&#13;
This thesis offers two key components towards the development of a memory efficient linear solver algorithm tailored towards high performance computing (HPC) systems. Firstly, starting with the Jacobi iteration (a parallel linear solver algorithm well-suited for HPC), we develop a family of relaxation schemes which greatly improve the convergence of the method. These schemes, termed Scheduled Relaxation Jacobi (SRJ) schemes, provide acceleration for both symmetric and nonsymmetric linear systems of equations. In the symmetric case, a data informed heuristic is developed to aid scheme selection in a practical implementation without user intervention. Secondly, we develop a high-performance GPU implementation of the Jacobi iteration method. The main characteristic of the linear solver is that it utilizes on-chip shared memory for improved memory efficiency. This is enabled by the unstructured swept rule, an algorithm for space-time decomposition which enables efficient stencil computations in parallel on unstructured grids. The shared memory Jacobi linear solver demonstrates improved performance over a classical GPU implementation which relies solely on global memory for solving two-dimensional unstructured problems.&#13;
&#13;
These contributions provide the basis for an efficient GPU linear solver for the solution of (potentially unstructured/nonsymmetric) linear systems arising from PDEs. This provides a step towards efficient simulation of physical phenomena, and augmenting the role of simulation in scientific discovery and the engineering design process.
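As a hedged illustration of the baseline method described above (not the thesis's GPU or SRJ implementation), the relaxed Jacobi iteration can be sketched in NumPy; a single fixed relaxation factor omega stands in here for a full SRJ schedule, which would cycle omega through a sequence of values:

```python
import numpy as np

def jacobi(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Relaxed Jacobi iteration for A x = b:
    x_{k+1} = x_k + omega * D^{-1} (b - A x_k), where D = diag(A).
    SRJ schemes accelerate convergence by scheduling omega over the
    iterations; this sketch uses one fixed omega."""
    D = np.diag(A)                    # diagonal of A (must be nonzero)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                 # residual
        x = x + omega * r / D         # relaxed Jacobi update
        if np.linalg.norm(r) < tol:
            break
    return x
```

Each update touches only the matrix diagonal and a matrix-vector product, which is why the iteration parallelizes so naturally on GPUs; convergence is guaranteed for diagonally dominant systems.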
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150137</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>QUiLT (Quantitative Ultrasound in Longitudinal Tissue Tracking): Stitching 2D images into 3D Volumes for Organ Health Monitoring</title>
<link>https://hdl.handle.net/1721.1/150136</link>
<description>QUiLT (Quantitative Ultrasound in Longitudinal Tissue Tracking): Stitching 2D images into 3D Volumes for Organ Health Monitoring
Chen, Melinda
Volumetric property maps of organs (e.g., stiffness) are clinically significant and valuable in diagnosing the onset and monitoring the progression of a multitude of diseases such as autosomal dominant polycystic kidney disease (ADPKD) and chronic liver disease (CLD). Unlike 2D property maps, 3D property maps allow for precise, consistent, and accurate longitudinal comparison because they eliminate the variabilities associated with the underlying image acquisition protocol. 3D organ reconstructions provide a holistic view of the organ and serve as a foundation for generating 3D property maps. Existing methods for reconstructing 3D images of organs include MRI and CT. Ultrasound emerges as a viable alternative due to its low cost, portability, and ability for repeated use.&#13;
&#13;
This thesis presents a workflow for generating 3D images of the kidney and liver from 2D ultrasound images. We augment a 1D ultrasound probe with a set of sensors that allow for its localization in 3D space. The pose information and 2D images are combined to generate accurate organ volumes, specifically for the kidney and liver. In ex vivo studies, our method reconstructs renal volumes with an accuracy that is comparable to the current clinical gold-standard, CT. The liver, however, poses additional challenges due to its size and location underneath the ribcage – multiple partial scans are required to capture the entire organ. These scans need to be co-registered to generate a complete volume. Unfortunately, the liver parenchyma lacks features, such as edges and corners, that are used by conventional image registration techniques. To circumvent this, we propose a set of quantitative features based on the liver vasculature and develop an algorithm to determine their correspondences in 3D space. The features serve as internal landmarks that are used to perform deformable registration on the partial scans to create a whole-organ view. We validate the proposed approach with simulated and in vivo data; the method consistently performs as well or better than existing registration methods. Our method aims to provide an easy and cost-effective way for clinicians to monitor organ property changes over time, thereby paving the way for early diagnosis and prevention of diseases.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150136</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Detailed Kinetic Models for Large Pyrolysis Systems</title>
<link>https://hdl.handle.net/1721.1/150135</link>
<description>Generating Detailed Kinetic Models for Large Pyrolysis Systems
Payne, A. Mark
Detailed kinetic models have been able to accurately predict the behavior of many complex chemical systems. The benefits of such models are numerous, ranging from the ability to predict system behavior under conditions not amenable to experiments to the fact that the mere process of generating such models often leads to the discovery of new reaction pathways. Despite this utility, to date these models have mostly been applied to smaller systems of 10 heavy atoms or fewer. This is because, as the size of the molecules grows, the number of possible isomers and thus reactive pairs grows combinatorially. Furthermore, refining these models often involves high-accuracy quantum chemistry calculations that are expensive for larger species. If these challenges can be overcome, though, generating detailed kinetic models for larger systems promises to provide valuable insights into complex systems, such as the pyrolysis of heavy oil or biomass. In this work, we show that advances in automatic mechanism generation software, quantum chemistry methods, and ever increasing amounts of computational power have made the prospect of generating detailed models for larger systems possible. We were able to generate a detailed kinetic model for the pyrolysis of a 3-component hydrocarbon mixture with the largest species containing 18 heavy atoms. Despite the size of the molecules, the generated model was able to predict experimental data for this system. We also discuss aspects of refining these models with quantum chemistry calculations, specifically calculating species thermochemistry. We showed that many of the methods for correcting these calculations, including bond-additivity corrections and isodesmic reaction approaches, yield similar results, despite some claims to the contrary. Finally, we collected experimental data necessary to validate detailed kinetic models for the pyrolysis of kerogen. 
As part of this work, we discussed the challenges of collecting such data, and showed the suitability of modern methods and instrumentation towards this task. With this, it is likely that detailed kinetic models will be increasingly used to study larger systems, though this work will likely involve fully-detailed model compound studies in tandem with approaches to reduce the combinatorial complexity of large systems without much loss in accuracy.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150135</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical Tools for Discontinuous Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/150134</link>
<description>Mathematical Tools for Discontinuous Dynamical Systems
Billingsley, Matthew Ryan
Nonsmooth equations and discontinuous dynamical systems can be used to model systems in a variety of applications, including modeling of certain physical systems and optimization problems. However, theoretical results are somewhat limited for these classes of problems, restricting their use in practical applications. In order to develop the necessary theory, existing theoretical results for other nonsmooth systems can be extended to these important classes of problems.&#13;
&#13;
In this thesis, theoretical results and software implementations are developed for certain classes of nonsmooth and discontinuous systems. First, a method for evaluating lexicographic directional derivatives of functional programs is developed, extending existing methods to programs containing conditional branches and loops, along with a software implementation of the theoretical results. Next, well-posedness and sensitivity results are established for a class of discontinuous ordinary differential equations derived from overarching differential-algebraic equations using directional differentiation. Then, a new nonsmooth formulation of Hamiltonian dynamics is developed for Hamiltonian systems with nonsmooth potential energy, using lexicographic differentiation to derive a system of discontinuous differential equations, together with supporting theoretical results. Finally, using this nonsmooth formulation of Hamiltonian dynamics, a new Hamiltonian Monte Carlo method is developed for systems with nonsmooth target probability densities; a software implementation of this method has been developed in Julia.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150134</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Automated Reaction Kinetics with Message Passing Neural Networks</title>
<link>https://hdl.handle.net/1721.1/150133</link>
<description>Towards Automated Reaction Kinetics with Message Passing Neural Networks
Pattanaik, Lagnajit
Predictive chemistry holds great promise to accelerate scientific discovery and innovation. An approach towards predictive chemistry involves decomposing systems into kinetic mechanisms consisting of elementary reactions and quantitatively describing each of those reactions. Incredibly, the immense progress in computational methods and compute power now allows the calculation of thermodynamic and kinetic parameters at an accuracy necessary for predictive chemistry. Unfortunately, real systems can consist of tens of thousands of elementary reactions, so it is infeasible to calculate these parameters using traditional, labor-intensive computational methods.&#13;
&#13;
This thesis focuses on computing kinetic parameters by both automating and accelerating the computational pipelines used to generate them, relying on modern machine learning frameworks—specifically, message passing neural networks—to facilitate these calculations.&#13;
&#13;
Noting that in the framework of automated kinetic parameter calculation, transition state search is a key bottleneck, this thesis first devises a method to generate transition state geometries with deep learning. The new method achieves improvements in both accuracy and speed compared to existing alternatives. This thesis next investigates a fundamental limitation of message passing neural networks in capturing tetrahedral chirality and proposes several fixes to address this limitation. While generating a single transition state structure is an important goal, accurate calculation of kinetic parameters often requires investigating multiple conformations. Hence, this thesis builds a generative framework to predict multiple low-energy conformations directly from the molecular graph. The method is demonstrated for stable species conformer generation and outperforms existing baselines. Integrating all the developed models together, this thesis next develops an end-to-end pipeline to generate transition state conformers directly from the atom-mapped reaction SMILES. While most of the presented work investigates reactions in the gas phase, reactions in the condensed phase require additional solvation corrections. Therefore, this thesis constructs a large dataset of solution free energies across a range of solvents. It then develops a model to predict relevant conformations of the solute for any given solute-solvent pair.&#13;
&#13;
The tools developed in this thesis will become an integral part of modern computational chemistry pipelines. Undoubtedly, the future of automated predictive chemistry will heavily rely on these and similar deep learning models for fast and accurate parameter estimation.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150133</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A First Complete Approach to Address Model Error in Computational Turbulent Heat Transfer</title>
<link>https://hdl.handle.net/1721.1/150130</link>
<description>A First Complete Approach to Address Model Error in Computational Turbulent Heat Transfer
Wiser, Ralph
The challenge of turbulent heat flux has plagued the CFD modeling community for decades. A systemic dearth of adequate data has forced modelers to heavily rely on intuition and ad hoc reasoning to justify modeling choices. Many turbulence modelers believe that error in CFD temperature prediction stems from the simple turbulent heat flux model known as the Reynolds analogy. In particular, the Reynolds analogy is assumed to be inadequate for non-unity Prandtl fluids. However, engineers continue to use the Reynolds analogy in everyday heat transfer simulations, since no other approach has proven to be more general.&#13;
&#13;
In the context of nuclear reactor engineering and licensing, a complete understanding and quantification of simulation errors is required. Since engineers continue to use the Reynolds analogy for CFD heat transfer, which has widely accepted limitations, most users in nuclear vendor companies and regulatory bodies have little confidence in CFD temperature predictions. This lack of confidence heavily limits the uptake of CFD in nuclear engineering applications. Consequently, the benefits of CFD, including reduced cost and project timelines, have not been fully realized in the nuclear reactor engineering industry.&#13;
&#13;
This thesis focuses on a holistic understanding of the sources of model error in turbulent CFD heat transfer simulations. While the Reynolds analogy is not a perfect model for turbulent heat flux, we find that other sources of error are more consequential. These include model error due to poorly resolved turbulent structures, model error due to the form and coefficients that represent turbulence anisotropy, model error in the buoyancy production of turbulence, and boundary layer model error. This understanding derives from dozens of simulations of legacy experiments, as well as comparison with a new DNS database developed by collaborators at other universities.&#13;
&#13;
A first complete approach is proposed to address the important sources of model error in computational turbulent heat transfer. The methodology includes quantifying the model error in the turbulent momentum transfer model, which has a dominant effect on the temperature error. In addition, a new model for predicting buoyancy production of turbulence is included in the methodology. No special treatment of the turbulent heat flux is applied, i.e., the Reynolds analogy is retained. The components of the methodology are separately tested, then the complete method is assessed in a range of flow cases at different Prandtl numbers. This complete approach for temperature model error will increase confidence in CFD temperature predictions and could lead to higher CFD uptake in nuclear design. Further, it could contribute to regulatory acceptance of CFD in nuclear reactor licensing, which vendors see as a crucial step enabling licensing of advanced reactors such as liquid metal cooled and molten salt cooled reactors.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150130</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental Impacts of Future Aviation Propulsion Systems</title>
<link>https://hdl.handle.net/1721.1/150129</link>
<description>Environmental Impacts of Future Aviation Propulsion Systems
Prashanth, Prakash
Aviation is an integral part of modern society and economy. A fundamental challenge facing the aviation sector in the coming decades is to enable the 3.8% projected growth in air traffic per year and the associated benefits while simultaneously reducing aviation’s impact on the environment in terms of air quality and climate.  This thesis improves the scientific understanding of the atmospheric impacts attributable to aviation gas turbine emissions and the means of mitigating them with a focus on the propulsion system. Specifically, this thesis addresses aspects of 1) aerosol formation from aviation-attributable NOx and SOx, 2) the technical extent to which the air quality and climate impacts of aviation can be minimized, and 3) how propulsion system design for supersonic commercial aircraft in the future would impact the environment.&#13;
&#13;
I first address how emissions of aerosol precursor species – NOx and SOx from aircraft gas turbine engines – result in aerosol formation. I quantify the contribution of the different pathways to the formation of secondary inorganic aerosol and their associated impact on radiative forcing (RF) and population exposure to pollutants at the surface. A key finding is that 47% of the aviation NOx emissions-attributable aerosol RF is due to sulfate aerosol formed through the NOx-sulfate pathway, in which oxidants derived from aviation NOx emissions drive the oxidation of SOx emissions to sulfate aerosol. Moreover, 88% of this sulfate-related RF through the NOx-sulfate pathway is due to the oxidation of non-aviation SOx, highlighting the coupling between aviation and non-aviation emissions. Furthermore, I show that aviation emissions of NOx are responsible for ~95% of aviation-attributable population exposure to particulate matter (PM2.5) and ozone.&#13;
&#13;
I then undertake the notional design of an aircraft system to assess whether it is technically feasible to have an aircraft system with net-zero climate impact and &gt;95% reduction in air quality impacts relative to the present. The identified system relies on (1) an aviation fuel with low lifecycle greenhouse gas (GHG) emissions; (2) an aircraft design which accommodates post-combustion emissions control devices to enable a 96% reduction in emissions of NOx; (3) operational strategies for contrail avoidance; and (4) atmospheric CO2 removal with geological storage at small scale (1% of geological storage potential) to address GHG emissions which are otherwise prohibitively expensive to avoid. The proposed system reduces the combined climate and air quality impacts by 99% for a 16-22% increase in direct operating costs (excluding invested capital costs of aircraft and required infrastructure).&#13;
&#13;
I then consider the environmental impacts that may arise from the addition of new capability, in the form of commercial supersonic transport (SST), to the current system. Prior development of propulsion systems for SSTs has relied on derivative engines. I quantify the impact that constraints imposed by such a derivative engine design have on its performance relative to a clean-sheet design. Accounting for technology improvements, the clean-sheet design results in a 4% lower specific fuel consumption (SFC) than the derivative engine, with the SFC improvements being most sensitive to the ability to design low-NOx combustors, followed by turbomachinery efficiency. A fleet of 140 supersonic business jets using the derivative or clean-sheet engines results in ~13 mDU of column ozone depletion per billion available seat-kilometers in 2035.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150129</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Mechanical Properties and Porosity in Laser Powder Bed Fusion by Spatial Manipulation of Feedstock Composition</title>
<link>https://hdl.handle.net/1721.1/150128</link>
<description>Tailoring Mechanical Properties and Porosity in Laser Powder Bed Fusion by Spatial Manipulation of Feedstock Composition
Lettiere, Bethany Rose
Additive manufacturing (AM) is capable of forming complex geometries, can build custom products, and offers greater versatility than many conventional manufacturing processes. Metal AM has revolutionized the design and production of rocket engines, enabled lightweight robotics and drones, and created heat exchangers that improve the efficiency of energy conversion systems. Typically, the feedstock for metal AM is a single alloy powder, a pure metal powder, or a stochastic mixture of alloys. The design space of AM can be expanded by techniques for local manipulation of the powder feedstock, thereby enabling the tailoring of composition or microstructure gradients to improve mechanical or thermal properties. Current methods for local manipulation of the feedstock incorporate hopper- or vacuum-based deposition with a spatial resolution on the order of 500 &#120583;m and typically require a secondary recoating step after depositing the material, further limiting the resolution and fidelity of property control. In this thesis, a new hybrid AM technique combining inkjet deposition followed by laser powder bed fusion (LPBF) is studied, and the technique is demonstrated in exemplary contexts.&#13;
&#13;
To begin, a rapid experimental workflow is developed, whereby etched metal substrates are used as templates for spreading a thin layer of metal powder; spreading is preceded by inkjet deposition onto the wells and is followed by laser scanning to form a solid material with tailored characteristics. This workflow facilitates an understanding of the influence of additive concentration, laser power, and scan speed on the process. Using this workflow, the thesis: (1) assesses the resolution (∼ 200 &#120583;m) and limitations of the hybrid inkjet-LPBF process; (2) demonstrates the fabrication of porous stainless steel using a polymer-based ink as the deposited additive, achieving spatially controlled porosity upwards of 8% locally with pore sizes of 250-800 &#120583;m² and a spatial resolution of ∼ 400 &#120583;m; (3) demonstrates spatial tailoring of the hardness of stainless steel using an ink with carbon black as the additive, achieving a local increase in the microhardness from a control of 178 HV to 196.6 HV with a spatial resolution of 250 &#120583;m. In the case of porosity control, a scaling model of thermal effects, combined with an understanding of the melt pool’s flow and stability, rationalizes the experimental outcomes and the process parameter space. This thesis validates the multi-material inkjet-laser process for high-resolution, spatially graded metal AM. These findings lay the foundation for a hybrid inkjet-LPBF system for developing high-precision components and for rapid alloy development in metal AM.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150128</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments on biomarker preservation</title>
<link>https://hdl.handle.net/1721.1/150127</link>
<description>Experiments on biomarker preservation
Mojarro, Angel
Determining how fossils form, and the nature of the organic matter that becomes fossilized and persists through geologic time, are continuing challenges. This is because decay begins immediately after senescence and diagenetic transformations typically continue progressively over millions of years. To this end, in this dissertation, I utilize various biological and geochemical techniques to investigate processes associated with body fossil preservation in Holocene-age concretions that have encapsulated partially-to-fully decomposed fish (capelin - Mallotus villosus). Here I focus on two environments that have produced highly contrasting preservation endmembers, which serve as approximate experiments on biomarker preservation. Furthermore, this dissertation includes a series of laboratory modeling experiments to determine the stability of ribonucleic acid (RNA) under early Mars-like conditions, and ground-truthing experiments which complement ongoing analyses conducted by the Sample Analysis at Mars (SAM) instrument in Gale Crater. The objective of this dissertation is to investigate the biological and early diagenetic transformation of organic matter within macrofossils and analog experiments, to elucidate those processes, biotic or abiotic, which determine subsequent biomarker preservation. This dissertation consists of five research chapters that address: (I) preservation biases, (II) mechanisms of concretion formation, (III) microbial decay communities and the role of the environment in preservation, (IV) the role of metal catalysis in RNA polymer stability, and (V) the analysis of carbonaceous chondrite meteorites as putative analogs for organics present on Mars today.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150127</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of nutrient sensing in the mTORC1 pathway</title>
<link>https://hdl.handle.net/1721.1/150126</link>
<description>Evolution of nutrient sensing in the mTORC1 pathway
Liu, Grace Y.
The mTORC1 pathway regulates growth and metabolism in response to nutrient availability. In mammals, the mTORC1 pathway monitors the concentration of certain amino acids using dedicated nutrient sensors, which bind directly to their cognate metabolites. Unlike other components of the mTORC1 pathway, which are present from yeast to human, the known nutrient sensors are poorly conserved in lower eukaryotes and may have emerged as specialized innovations. A goal of mTORC1 biology is to understand the evolutionary mechanisms that allow a highly conserved core pathway to adapt to the diverse nutritional niches that animals occupy. How does the mTORC1 pathway add new layers of regulatory sophistication to accommodate animals with divergent diets and lifestyles? Do organisms acquire novel nutrient sensors under environmental pressure? If so, where do those sensors come from?&#13;
&#13;
In this thesis, we discover a new species-specific S-adenosylmethionine (SAM) sensor and use its evolutionary history to pry open the structural logic of the mTORC1 pathway. We show that the sensor, the Drosophila melanogaster protein Unmet expectations (Unmet, formerly CG11596), is an “evolutionary intermediate,” caught between its ancestral enzymatic function and a recently acquired role in the mTORC1 pathway. Unmet interacts with the fly GATOR2 (dGATOR2) complex, a core component of the pathway, to inhibit dTORC1 during methionine starvation. This inhibition is directly relieved by SAM, a proxy for methionine availability. Unmet expression is elevated in the ovary, a methionine-sensitive niche, and flies lacking Unmet fail to maintain the integrity of the female germline under methionine restriction. By tracing Unmet’s incorporation into the mTORC1 pathway, we show that Unmet was an independent methyltransferase before it was captured by flexible loops on the GATOR2 complex. These data suggest a general mechanism in which the mTORC1 pathway assimilates new sensors by using evolvable modules on core complexes to co-opt proteins with ligand-binding capabilities. We discuss how similar principles can be used to build artificial sensors for the mTORC1 pathway and explore how repurposing ancient enzymes enables the mTORC1 pathway to rapidly adapt to metabolic niches across evolution.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150126</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semi-Cooperative Planning in Mixed Human-Autonomous Environments</title>
<link>https://hdl.handle.net/1721.1/150124</link>
<description>Semi-Cooperative Planning in Mixed Human-Autonomous Environments
Buckman, Noam
Autonomous vehicles have made immense progress towards deployment on public roads, yet navigating safely on roads with both human drivers and autonomous vehicles presents a challenge for even the most advanced systems. New methods and platforms are needed for developing and evaluating socially-compliant planning algorithms for autonomous vehicles. In this thesis, we propose a semi-cooperative autonomy framework that considers the underlying social utility of human agents within the vehicle's trajectory planning and motion control. In addition, we present a new robotic platform for deploying and evaluating semi-cooperative autonomy in a safe, laboratory setting. &#13;
&#13;
In this thesis, we combine concepts from social psychology with game-theoretic planning algorithms to develop semi-cooperative autonomous planners. Beginning with a single autonomous vehicle, we present Iterative Best Response with Imagined Shared Control, an algorithm that considers the Social Value Orientation (SVO) of each human driver while achieving desirable game-theoretic equilibria. The semi-cooperative framework is then applied to larger-scale systems: a socially-compliant intersection manager for mixed human-autonomy traffic and a study of how SVO affects vehicle traffic flow. In addition, we present a visibility-aware trajectory optimization algorithm for proactive motion planning around blind spots, which incorporates a model of human driver uncertainty into a semi-cooperative trajectory planner. We demonstrate the efficacy of these algorithms in simulations of human and autonomous vehicles and study the effect of human personality on algorithm performance.&#13;
&#13;
Second, we introduce the MiniCity, a 1/10th scale city environment consisting of realistic urban scenery, intersections, and multiple fully autonomous 1/10th scale vehicles with state-of-the-art sensors and algorithms. We describe how the MiniCity robotic platform is used in the development of semi-cooperative autonomy, from evaluating algorithm performance to developing new intelligent traffic systems. First, we use the MiniCity to evaluate vehicle autonomy, measuring both the impact of upstream perception on downstream vehicle performance and the efficiency of semi-cooperative intersection managers.&#13;
Second, we use the MiniCity's human-in-the-loop driver interface to collect user preferences for co-designing a shared controller for driving through intersections. Finally, we present a novel end-to-end infrastructure-based failure detection algorithm, FailureNet, which is trained and deployed on autonomous vehicles in the MiniCity. In all these, the MiniCity provides a safe and scalable environment for developing interactive algorithms, bringing us closer to fully deploying socially-compliant autonomy on mixed human-autonomous roads.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150124</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Order Fulfillment Algorithms for Online Retail</title>
<link>https://hdl.handle.net/1721.1/150123</link>
<description>Order Fulfillment Algorithms for Online Retail
Chen, Pin-Yi
We study an order fulfillment problem in an online-retail setting where the retailer’s fulfillment system includes warehouses that hold inventory (fulfillment centers), and a transportation network composed of node facilities and transportation arcs. To minimize the total transportation costs for order fulfillment, the online retailer should plan its transportation capacities properly in advance and execute according to the transportation plan by making smart fulfillment decisions. To fulfill a customer order, the online retailer must decide from where to source the inventory needed for the order, as well as how to route the order to the customer to satisfy a delivery time commitment. In this research we focus on the latter decision, namely choosing the route for each order. The online retailer has full control of its transportation system in both planning and execution; in addition, the online retailer can rely on third party carriers to transport some of its orders.&#13;
&#13;
We design an order fulfillment algorithm that makes immediate routing decisions for incoming orders. To determine the route to assign to an order, we compare the costs of all feasible routes. For routes that use retailer-controlled resources, we must account for the opportunity costs associated with these resources. We propose to do this with the dual values from a transportation quadratic program, which accounts for the uncertainty of the network flows to estimate the opportunity costs of depleting resources in the transportation network. The dual values are updated periodically with the latest system state, including resource capacities and demand forecasts. We numerically test our algorithm on a realistic network with inputs inspired by actual data from our industry collaborator. We compare our algorithm with several benchmark algorithms, including an LP-based algorithm that mimics the algorithm in the retailer’s current operating system. The experiments show a 50% reduction in the mean percentage difference in shipping cost from the hindsight solution as compared to the LP-based algorithm.&#13;
&#13;
In addition to the fulfillment algorithm, we formulate a capacity planning problem that determines the optimal level of transportation resources in the network for a given demand forecast. When demand deviates substantially from the planned capacity, adding or removing planned resources becomes a crucial cost-saving mechanism. Motivated by this, we propose ad hoc truck controllers that make online capacity modification decisions and test them on small examples.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150123</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Waters and Welfare: Rivers, Infrastructure, and the Territorial Imagination in Grand Ducal Tuscany, 1549–1609</title>
<link>https://hdl.handle.net/1721.1/150118</link>
<description>Waters and Welfare: Rivers, Infrastructure, and the Territorial Imagination in Grand Ducal Tuscany, 1549–1609
Murphy, Caroline Elizabeth
Over the central decades of the sixteenth century, the Tuscan ducal state formed in the midst of a flood crisis. A cooling climate, excessive rainfall, and deforestation, among other meteorological and anthropogenic changes now associated with the Little Ice Age, caused rivers and streams to more frequently and violently brim over and lay waste to urban and rural property. Under dukes Cosimo I de’ Medici and his sons Francesco I and Ferdinando I, the Tuscan government founded specialized offices and appointed staffs of technicians and bureaucrats to rectify this disorderly aquatic topography. Studying the administrative and cartographic records they produced, in concert with environmental legislation, utopian development proposals, and manuscript and print treatises on architecture, engineering, and political economy, this dissertation explores the arduous practical and intellectual work of alluvial planning on the novel scale of the territorial state in the decades before alluvial hydraulics coalesced as a branch of the physical sciences.  &#13;
 &#13;
Moving from the muddy labors of architects and engineers dispatched to mitigate flooding and project alluvial laws across ducal dominions, to the grandiose ideations of a new class of technocrats who proposed ambitious schemes for transforming intractable waterways into useful systems of commercial infrastructure, this dissertation argues that the problems of water elicited novel ways of imagining territory as a design problem. For the arrayed actors engaged in ordering this space, absolutist forms of planning emerged in the sixteenth century as attractive solutions for grappling with environmental crisis and securing state welfare in an increasingly interconnected and competitive world. &#13;
&#13;
Beyond revealing the much earlier legacies of improvement ideologies and projects for infrastructurally-enabled capitalist circulation most often associated with Enlightenment Europe and global modernity, this research demonstrates that early political economy, as it developed in Renaissance Italy, was conceived as a fundamentally architectural enterprise. Challenging a prevailing tendency to view early modern territorial states as abstract or conceptual entities—relations of sovereignty, bundles of laws and rights crystallizing in bounded, Euclidean space—this research shows how in the crucible of the sixteenth century, states were also conceived as material creations to be physically sculpted at scale.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150118</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Charged interfaces and their applications in energy sustainability</title>
<link>https://hdl.handle.net/1721.1/150115</link>
<description>Charged interfaces and their applications in energy sustainability
Panat, Sreedath
Energy sustainability is one of the most important challenges of the present time. In this thesis we investigate some critical sustainability challenges and develop advanced mitigation approaches for various energy systems. In photovoltaic systems, dust accumulation on solar panels is a global challenge that reduces operational efficiency, and current water-based cleaning methods impose a huge water footprint and cost on solar energy. In crude oil extraction, a significant waste byproduct forms as a nanoscale water-in-oil emulsion due to the mixing of underground water and crude oil. To separate the water and oil phases, toxic chemical demulsifiers are added, which, along with refinery effluents, reach waterbodies and harm local ecosystems. In postcombustion CO₂ capture systems, the absorption of CO₂ from flue gas into a sorbent liquid is capital-intensive: the need for a large surface area of interaction necessitates the installation of prohibitively expensive absorption towers and the use of environmentally unfriendly chemicals such as amines. In this thesis, we investigate advanced methods that leverage interfacial charge to make these renewable and non-renewable energy systems more sustainable and efficient. First, we demonstrate a novel approach based on active electrostatic charge induction for charging and electrostatically repelling dust from solar panels, showing that more than 99% of the lost power output can be recovered without consuming a single drop of water. Second, we develop a non-Laplacian space-charge emitter electrocoalescer that applies a nearly 8-fold stronger electric field than traditional electrocoalescers across a water-in-oil emulsion to polarize and coalesce the droplets. We thus demonstrate phase separation of nanoscale water-in-oil emulsions at timescales relevant to crude oil processing systems, while completely eliminating the use of toxic demulsifiers. 
Finally, we introduce mist-scale droplets to significantly enhance the interfacial area of interaction between flue gas and the sorbent liquid, together with an electrostatic space-charge injection approach to charge and collect the CO₂-laden mist droplets at nearly 100% efficiency. Overall, our approach achieves more than 95% CO₂ absorption with a 2.6-fold reduction in carbon capture capital cost.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150115</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical and Computational Modeling of Injection-induced Seismicity</title>
<link>https://hdl.handle.net/1721.1/150113</link>
<description>Mathematical and Computational Modeling of Injection-induced Seismicity
Alghannam, Maryam
It has long been recognized that pumping fluids into or out of the Earth has the potential to cause earthquakes. Some of the earliest field evidence dates to the 1960s, when earthquakes were turned on and off by water injection in Rangely, Colorado. More recently, induced seismicity has been reported worldwide in connection with many subsurface technologies, including wastewater disposal, natural gas storage, enhanced geothermal systems, and hydraulic fracturing. As a result, there has been growing public concern around the world about the potential seismic hazard and environmental impact of subsurface energy technologies. Understanding the physical mechanisms that lead to induced seismicity is essential to efforts to mitigate the risk associated with subsurface operations. As a first step in this thesis, we develop a spring-poroslider model of frictional slip as an analogue for induced seismicity, and analyze conditions for the emergence of stick-slip frictional instability—the mechanism for earthquakes—by carrying out a linear stability analysis and nonlinear simulations. We find that the likelihood of triggering earthquakes depends largely on the rate of increase in pore pressure rather than its magnitude. Thus, the model explains the common observation that abrupt increases in injection rate increase the seismic risk. Second, we perform an energy analysis using the same spring-poroslider model to shed light on the partitioning of the energy released into frictional and radiated energy—since the latter is associated with the overall size of the earthquake and its potential for damage to man-made structures. Two key elements of the analysis are: (1) incorporating seismic radiation within the model using a precisely defined viscous damper, and (2) partitioning the energy supplied by fluid injection into dissipated and stored energy in the fluid and skeleton. 
The analysis shows how the rate of increase in pore pressure controls the radiated energy, stress drop, and total slip of the earthquake. Third, we study the effect of heterogeneity on the dynamics of frictional faults. In particular, we develop an objective (frame-indifferent) formulation of frictional contact between heterogeneous surfaces at a small scale, and introduce the notion that friction is a function of the states of the two surfaces in contact, each representing roughness and microstructural details for the surface. We then conduct dynamic simulations of a spring-slider model and show that heterogeneous Coulomb friction alone is capable of reproducing the transitions in complex frictional behavior, from stable creep to regular earthquakes and slow slip. This thesis, as a whole, enhances our understanding of the mechanics of fluid-injection-induced earthquakes and suggests strategies that mitigate or minimize the seismic risk associated with a wide range of subsurface operations, from hydraulic fracturing and geothermal energy extraction to wastewater injection and geologic CO₂ sequestration.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150113</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mass Transfer and Chemical Interactions in Subduction Zones</title>
<link>https://hdl.handle.net/1721.1/150112</link>
<description>Mass Transfer and Chemical Interactions in Subduction Zones
Codillo, Emmanuel Avila
Subduction zones are important sites of material recycling on Earth, with volatiles playing key roles in mass transfer processes and magma formation. This thesis investigates outstanding questions associated with a continuum of interrelated processes that occur as oceanic plates descend in subduction zones by integrating petrological and geochemical constraints from exhumed high-pressure rocks and erupted arc magmas, high pressure-temperature laboratory experiments, and thermodynamic calculations. Chapters 2 and 3 investigate the fluid-mediated reactions between mafic and ultramafic rocks at conditions relevant to the slab-mantle interface and show that Mg-metasomatism of mafic rocks to form chlorite-rich assemblages is favored and is likely more pervasive in subduction zones than in oceanic settings. Contrary to common belief, talc is unlikely to form in high abundance in ultramafic rocks metasomatized by Si-rich slab-derived fluids. This means that talc-rich assemblages formed via Si-metasomatism along the slab-mantle interface are less likely to play prominent roles in volatile transport, in facilitating slow-slip events, and in controlling the decoupling-coupling transition of the plate interface. Chapter 4 experimentally investigates the phase equilibria, melting, and density evolution of mélange rocks that formed by mixing and fluid-rock interactions. Results show that melting of mélanges is unlikely to occur along slab-tops at pressures ≤ 2.5 GPa. Accordingly, diapirism into the hotter mantle wedge would be required to initiate melting. The density contrast between mélanges and the overlying mantle would allow for buoyancy-driven diapirism at relatively low pressures, and melting could subsequently occur in the hotter mantle wedge during ascent. However, diapir buoyancy may be limited at higher pressures due to the formation of abundant garnet, especially in mélange rocks with peraluminous compositions. 
Chapter 5 experimentally investigates the compositions of melts and mineral residues from melting of a mantle wedge hybridized with small amounts of mélange rocks to simulate an end-member scenario where solid mélange diapirs dynamically interact with the mantle wedge. Results from laboratory experiments show that melting of a mélange-hybridized mantle wedge can produce melts that display compositional characteristics similar to arc magmas. Finally, Chapter 6 presents new interpretations on the evolution of slab-to-mantle transfer mechanisms from subduction initiation to arc maturity. Analyses of published magma compositions from global arcs reveal that melting of mélange plays an increasingly important role in magma formation as slab-tops cool and arcs mature over time. This trend is attributed to the deepening of the decoupled plate interface during subduction where mélange zones can form more extensively and contribute to the melting process more significantly. Taken together, this thesis highlights (i) the dynamic connection between mechanical mixing of different lithologies and fluid-rock interactions along the slab-mantle interface, (ii) how these processes modify the petrophysical and geochemical properties of subducted materials, and (iii) how these processes collectively influence the mechanisms of slab-to-mantle transfer, elemental cycles, and the formation of arc magmas worldwide.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150112</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear ion transport at high electric currents in shock electrodialysis and ion-intercalation memories</title>
<link>https://hdl.handle.net/1721.1/150111</link>
<description>Nonlinear ion transport at high electric currents in shock electrodialysis and ion-intercalation memories
Tian, Huanhuan
This thesis studies nonlinear ion transport at high electric currents in two applications: shock electrodialysis (shock ED) for ion separation, and ion-intercalation memories for in-memory computing. Both studies relate to the concept of concentration polarization (CP) in electrochemistry, which describes the formation of an ion concentration gradient in an electrolyte adjacent to a charge-selective interface. CP leads to linear concentration profiles at small currents in an ideal electrolyte; however, CP in shock ED and ion-intercalation memories can be highly nonlinear. It is well established that CP at over-diffusion-limiting currents in charged porous media can lead to the formation of a deionization shock, which is the key physics behind shock ED. The main contribution of this thesis to shock ED is the modeling of multicomponent ion separation. In addition, this thesis proposes that CP in multiphase ion-intercalation materials can lead to phase redistribution and can be used for non-volatile memories. A general phase-field model has also been developed in this thesis to study multiphase CP behaviors. The two studies are described in detail below. &#13;
&#13;
Shock ED is a new electrochemical method for water treatment, developed by the Bazant group, that utilizes deionization shocks generated by over-diffusion-limiting currents. Previous models have qualitatively described the desalination of binary electrolytes, but they cannot quantitatively predict experiments or explain the selective ion removal from multicomponent electrolytes recently observed in experiments. In this thesis, a depth-averaged model is developed for shock ED. Compared with previous homogenized models, the depth-averaged model includes the effect of the pore-scale ion distribution by integrating over the pore cross-sections. The model applies to multicomponent electrolytes and any electrical double layer (EDL) thickness; captures the phenomena of electroosmosis, diffusioosmosis, and water dissociation; and incorporates more realistic boundary conditions. The model agrees well with experiments on binary electrolytes and can explain the selective removal of multivalent cations from electrolytes. In addition, this thesis includes experiments on the removal of trace amounts of lead from dilute solutions of model tap water by shock ED, showing removal of approximately 95% of dissolved lead (to safe levels below 1 ppb), compared to 40% of sodium ions, at 60% water recovery and at an electrical energy cost of only 0.01 kWh/m³. These experimental results are quantitatively consistent with the predictions of our depth-averaged model. In summary, this thesis significantly improves the understanding of ion separation in shock ED and can guide the optimization and scale-up of the process for industrialization. &#13;
&#13;
Resistive switching (RS) memories (including two-terminal memristors and three-terminal synaptic transistors) with multiple nonvolatile states tunable by electric fields are promising for in-memory computing. In recent years, ion-intercalation materials have attracted increasing attention for RS because of their ion-modulated electronic conductivity. In 2020, the Rupp and Bazant groups developed LTO memristors made from LTO4 or LTO7 nanofilms sandwiched between ion-blocking Pt electrodes. The RS behavior of such systems cannot be explained by existing models such as the dielectric breakdown model. In this thesis, an interfacial RS mechanism, multiphase polarization (MP), is proposed to explain the LTO memristors and other similar systems made from multiphase ion-intercalation nanofilms sandwiched between ion-blocking electrodes. First, a preliminary 1D model is developed to analyze the switching time, switching energy, and resistance ratio for MP-induced RS. Next, a general phase-field model is developed for coupled ion-electron transport with surface effects such as non-neutral wetting conditions, dynamic contact angles, and surface charge. Then, 2D MP with complex surface conditions is simulated and compared with the simple 1D model. This thesis also compares the MP mechanism with other RS mechanisms and discusses the prospects of MP-based memristors. This work is inspired by the memory application but, as a general theoretical study, should also benefit other fields.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150111</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism and Biological Scope of Target-Directed MicroRNA Degradation</title>
<link>https://hdl.handle.net/1721.1/150110</link>
<description>Mechanism and Biological Scope of Target-Directed MicroRNA Degradation
Shi, Charlie Y.
MicroRNAs (miRNAs) are small, ~21 nucleotide RNAs that guide Argonaute (AGO) proteins to complementary target RNAs using base-pairing interactions. In canonical miRNA-mediated repression, AGO recruits repressive trans-acting factors to bound targets to promote their deadenylation, decapping, and subsequent decay. Although knowledge surrounding miRNA biogenesis and effector functions is extensive, the understanding of how miRNAs decay is still relatively nascent. Recent global measurements of miRNA stabilities in cells of mice and flies concluded that most miRNAs are very stable, with half-lives ranging from many hours to days. However, a minority of miRNAs were also observed to be labile, with half-lives of a few hours. Prior to our work, it was not known how this rapid turnover is specified and actuated, although a number of miRNA-degradation processes and associated effectors had been proposed. One particular pathway discovered in 2010, named “target-directed miRNA degradation” (TDMD), was shown to cause miRNA degradation in the presence of unusually highly complementary target RNAs, and held particular explanatory appeal. However, neither the mechanism nor the biological scope of this pathway, beyond a handful of examples, was definitively known.&#13;
&#13;
We searched for factors involved in TDMD, and unexpectedly found that the ZSWIM8 Cullin-RING ligase was required for all known instances of TDMD that we tested. We then showed that, contrary to the prior model, TDMD proceeds by a mechanism in which, in the presence of a highly complementary target RNA, ZSWIM8 polyubiquitinates AGO and marks it for destruction, thereby exposing the normally protected miRNA to degradation. By perturbing ZSWIM8, we were able to confidently identify dozens of miRNAs likely undergoing targeted degradation in cells of flies, mice, and worms. Our data showed that TDMD was able to quantitatively explain the half-lives of most rapidly turned over miRNAs in cells of mice and flies, implying that this regulatory mode is pervasively used in bilaterian animals.&#13;
&#13;
To interrogate the biological functions mediated by TDMD, we knocked out ZSWIM8 in mice and found that mutants died soon after birth with small body size, cyanotic appearance, and defects in heart and lung development. These aberrations were accompanied by accumulation of ~50 miRNAs across various tissues – in many cases resulting in significant repression of their targets. This work is consistent with the possibility that TDMD mediates important biological functions in animals, and further work will be needed to clarify the precise molecular and cellular bases of the phenotypes.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150110</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and Dynamics of Associative Polymer Gels</title>
<link>https://hdl.handle.net/1721.1/150108</link>
<description>Structure and Dynamics of Associative Polymer Gels
Rao, Ameya
Associative polymers have attracted wide attention for applications spanning biomedicine, soft robotics, and rheological modification due to their unique viscoelastic and stimuli-responsive properties imparted by their dynamic bonds. However, predicting the response of associative polymer networks remains a challenge due to the interplay between chain and bond dynamics on various timescales, which governs their properties such as self-diffusion, relaxation, and creep.  &#13;
&#13;
In this thesis, a combined experimental and computational approach was used to provide new insight into the structure and dynamics of associative networks on various length and time scales, with the goal of understanding the effects of key molecular parameters on their macroscopic behavior. The first part of this thesis studied molecular relaxation and self-diffusion of a model associative network formed by artificial coiled-coil proteins with a well-defined architecture. The combination of forced Rayleigh scattering (FRS) and neutron spin-echo (NSE) spectroscopy provided evidence for several regimes of gel relaxation behavior across a wide length scale range, including subdiffusive caging and two distinct regimes of apparent superdiffusion before terminal Fickian diffusion. The submolecular relaxation dynamics were further probed by varying the strand length and chain concentration, illustrating changes in segmental motion that reflected the underlying network design. Importantly, segmental relaxation rates were found to collapse onto a master curve when rescaled by the static inter-junction spacing measured by neutron scattering, indicating self-similar dynamics even in networks with different chain architecture and concentration. Furthermore, the presence of two distinct superdiffusive regimes on intermediate length scales suggested the existence of multiple origins for anomalous diffusion in associative systems, reflecting a complex interplay between distinct molecular states not captured by current theories.&#13;
&#13;
To obtain further insight into the molecular mechanisms underlying associative network dynamics, a coarse-grained bead-spring model was developed and implemented via Brownian dynamics simulations. Associative polymers were conceptualized as linear chains containing regularly spaced stickers interacting with a mean-field background, obviating the need to explicitly model multi-chain interactions and reducing computational cost. The simulations demonstrated the coexistence of multiple diffusive modes, termed walking and hopping, that give rise to the superdiffusive behavior seen experimentally. Importantly, the two superdiffusive regimes were found to occur by distinct mechanisms, with the lower regime occurring due to a transition between multiple walking modes (irrespective of the hopping mode) and the upper regime occurring due to the onset of molecular hopping. Molecular hopping was shown to be important only in kinetics-limited systems, where the sticker association/dissociation dynamics are slower than the intrinsic Rouse relaxation of the network strands. The simulations were also used to probe the effects of molecular parameters such as the sticker density, chain concentration, and association/dissociation kinetics on self-diffusion. Notably, the results demonstrated the importance of loops in enabling chains with high sticker density to hop, whereas the mean-field prediction of purely Fickian diffusion on all length scales was recovered at high chain concentration, where the hopping mode was effectively suppressed. Analytical theories were formulated to predict the characteristic walking and hopping diffusivities from the chain topological statistics and dissociation/association timescales, finding qualitative agreement with simulation. &#13;
&#13;
Finally, the last part of the thesis explored the possible contribution of multi-chain correlations toward network dynamics on different length scales, effects that are not captured by single-chain conceptualizations commonly used. Structural characterization of a model associative protein gel using small-angle and ultra-small-angle neutron scattering provided evidence for a previously unobserved static correlation length larger than the inter-junction spacing, indicating inhomogeneity in the chain density distribution in the gel. Self-diffusion measurements suggested a caging effect induced by this large-scale correlation length in governing a transition between distinct slow and fast diffusive modes. Finally, a comparison to the single-sticker dissociation time inferred from tracer diffusion measurements supported the single-chain mechanisms of walking and hopping as previously conceptualized, with the step size of the slow mode commensurate with the length of the bridging strands and the transition timescale to the fast mode consistent with the onset of hopping via dissociation of all stickers on a chain.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150108</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Current-induced Dynamics of Easy-Plane Antiferromagnets</title>
<link>https://hdl.handle.net/1721.1/150107</link>
<description>Current-induced Dynamics of Easy-Plane Antiferromagnets
Zhang, Pengxiang
Antiferromagnetic memory devices are expected to be very fast, stable, dense, and energy-efficient, making them promising for the next generation of non-volatile random-access memory. However, in antiferromagnets, it has been challenging to accurately understand the current-induced dynamics, especially the spin-orbit-torque switching dynamics. To realize a practical antiferromagnetic memory device, this challenge must be overcome. &#13;
&#13;
In this PhD thesis, I present a systematic and quantitative study of a model material, the collinear easy-plane antiferromagnetic insulator α-Fe2O3 covered by Pt, addressing non-spin-orbit-torque switching mechanisms, magnon spin transport, the long-anticipated damping-like-torque switching, and a method to quantitatively characterize the spin-orbit torques. I also discuss the damping-like-torque switching of the non-collinear easy-plane antiferromagnetic metal Mn3Sn and the handedness anomaly of its switching direction.&#13;
&#13;
These studies deepen the scientific understanding of spin-orbit-torque dynamics in antiferromagnets and pave the way toward real-life applications of antiferromagnetic memory devices.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150107</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of a toxin-antitoxin system in the arms race between bacteria and phage</title>
<link>https://hdl.handle.net/1721.1/150105</link>
<description>The role of a toxin-antitoxin system in the arms race between bacteria and phage
Guegler, Chantal Katrin
All domains of life face the fundamental challenge of responding to infection by pathogens. In bacteria, the need to survive predation by phages, or bacterial viruses, has driven the evolution of numerous highly sophisticated anti-phage elements, including restriction modification and CRISPR-Cas systems. These defense mechanisms, in turn, have influenced the emergence of phage anti-defense systems, highlighting the intense, ongoing evolutionary arms race between phages and their bacterial hosts. Toxin-antitoxin (TA) systems are widespread phage defense systems in bacteria comprising a bacteriostatic or bactericidal toxin and an antitoxin that normally keeps the toxin inactive. Following phage infection, the toxin is activated, or liberated from its cognate antitoxin, to block viral replication. However, the precise activating signals of phage-defensive TA systems, the targets of toxins following activation, and the mechanisms that phages can use to overcome TA-mediated defense are largely unknown.&#13;
&#13;
In this work, I characterize the role of an E. coli toxin-antitoxin system called toxIN, containing the endoribonuclease toxin ToxN and the short, repetitive RNA antitoxin toxI, in the arms race between phages and bacteria. Using RNA sequencing to track ToxN activity during T4 infection, I show that, following phage infection, ToxN is liberated from toxI and cleaves T4 mRNAs, thereby disrupting T4 translation and the assembly of new viral particles. I then demonstrate that host transcription shutoff is the activating trigger for toxIN. Because T4 robustly inhibits host transcription, cells cannot replenish the antitoxin toxI, which is rapidly degraded. Next, by isolating toxIN-resistant clones of T4, I demonstrate that T4 can overcome ToxN by increasing the expression of a small phage protein encoded by the gene tifA. TifA is a bona fide antitoxin for ToxN that is conserved in the genomes of many T4-like phages, including the coliphages T2, T6, and RB69. Interestingly, TifA is an RNA-binding protein that forms ribonucleoprotein complexes with ToxN, suggesting that TifA may sequester RNA-bound ToxN to prevent cleavage of phage transcripts during infection. Taken together, these results reveal the native targets and activation mechanism of an inducible, phage-defensive TA system and uncover a unique mechanism by which phages can overcome TA-mediated defense.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150105</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Challenges for Ultra-High-Temperature Energy Storage with Thermophotovoltaics</title>
<link>https://hdl.handle.net/1721.1/150104</link>
<description>Design Challenges for Ultra-High-Temperature Energy Storage with Thermophotovoltaics
Kelsall, Colin Clancy
Decarbonization of the electricity grid is an important part of mitigating the extent and effects of human-caused climate change. Some of the most promising carbon-free and renewable energy resources, like wind and solar, have intermittent production that could be addressed, in part, with widespread deployment of electrical energy storage. There are few storage technologies, however, that are suited to this task, partially due to their high cost, rare constituent materials, and geographic constraints. This thesis investigates several pressing design challenges for a new electrical energy storage technology, termed Thermal Energy Grid Storage (TEGS), with the potential for low cost and deployment at scale. TEGS stores electricity as heat in graphite blocks at ultra-high temperatures (&gt;2000°C) and can extract that heat as electricity, on demand, using a thermophotovoltaic (TPV) heat engine. Thermophotovoltaic systems convert thermally emitted light from a high-temperature heat source to electricity using a photovoltaic cell. By operating at extremely high temperatures and utilizing multi-junction PV cells typically intended for solar energy conversion, high conversion efficiencies can be achieved (i.e. &gt; 50%) at low cost. When operating at such high temperatures, however, sublimation of the thermal emitter material (i.e. tungsten) and deposition on the cell surface can cause significant performance degradation. To prevent this contamination process, a layer of gas can be blown across the cell, effectively sweeping the evaporated material away before it gets to the surface. This thesis examines the relevant design parameters of this Sweeping Noble Gas Curtain (SNGC) system with the goal of developing a functional and scalable TPV generator for implementation in the TEGS ultra-high-temperature thermal battery. 
This work consists of several analytical and numerical models predicting deposition behavior under different geometric and flow conditions, experimental validation of these models, a scalable design for an integrated SNGC-TPV system, a summary of the numerous challenges and solutions devised for testing this system above 2000°C, and finally a proof-of-concept demonstration showing the remarkable efficacy of the SNGC approach. The demonstrated long-term durability enhancement of thermophotovoltaic devices is a critical step towards the economic viability of such systems and their potential for deployment at scale.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150104</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>VirtualHome: Building Socially Intelligent Agents via Simulation</title>
<link>https://hdl.handle.net/1721.1/150100</link>
<description>VirtualHome: Building Socially Intelligent Agents via Simulation
Puig Fernández, Xavier
While remarkable progress has been made in building autonomous agents that can help us with complex tasks, these have typically been studied in isolated environments. To build agents that can be deployed in the real world, we need to study them in more human-centric settings, where they can interact with other humans and agents. Moreover, for agents to assist us effectively, they need to be able to operate in these environments while understanding our intentions and beliefs, and learn to coordinate with us effectively and safely.&#13;
&#13;
This thesis investigates the development of such assistive agents through simulation environments. In the first part of the thesis, we introduce VirtualHome, a multi-agent platform for simulating human activities in household environments, and introduce a knowledge base of daily human activities that can be executed in the simulator. Then, we present agents that can perform different tasks in the environment given human descriptions or demonstrations of the activity. Finally, we study agents that can perform activities together with other humans in the environment. We propose a framework to simulate humans in the environment at scale and propose two challenges, Watch-And-Help and Online-Watch-And-Help, to benchmark the performance of different assistive agents, and test their effectiveness in assisting real humans performing activities in VirtualHome. Together, the methods and tools presented in this thesis provide a way to study assistive agents in simulation, allowing these agents to be developed safely and at scale before they are deployed in the real world.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150100</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Aware Planning and Probabilistic Prediction for Autonomous Systems under Uncertain Environments</title>
<link>https://hdl.handle.net/1721.1/150099</link>
<description>Risk Aware Planning and Probabilistic Prediction for Autonomous Systems under Uncertain Environments
Han, Weiqiao
This thesis considers risk aware planning and probabilistic prediction for autonomous systems under uncertain environments. Motion planning under uncertainty seeks trajectories with a bounded probability of collision with uncertain obstacles. Existing methods to address motion planning problems under uncertainty are either limited to Gaussian uncertainties and convex linear obstacles, or rely on sampling based methods that need uncertainty samples. In this thesis, we consider non-convex uncertain obstacles, stochastic nonlinear systems, and non-Gaussian uncertainty. We utilize concentration inequalities, higher order moments, and risk contours to handle non-Gaussian uncertainties. Without considering dynamics, we use RRT to plan trajectories together with SOS programming to verify the safety of the trajectory. Considering stochastic nonlinear dynamics, we solve nonlinear programming problems in terms of moments of random variables and controls using off-the-shelf solvers to generate trajectories with guaranteed bounded risk. Then we consider trajectory prediction for autonomous vehicles. We propose a hierarchical end-to-end deep learning framework for autonomous driving trajectory prediction: Keyframe MultiPath (KEMP). Our model is not only more general but also simpler than previous methods, and it achieves state-of-the-art performance in autonomous driving trajectory prediction tasks.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150099</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tangled Circuits: Characterizing Errors in Experimental Superconducting Quantum Processors</title>
<link>https://hdl.handle.net/1721.1/150098</link>
<description>Tangled Circuits: Characterizing Errors in Experimental Superconducting Quantum Processors
Samach, Gabriel Orr
As progress is made towards the first generation of error-corrected quantum computers based on physical quantum bits (qubits), researchers require robust techniques for designing, operating, and characterizing coupled multi-qubit systems in the laboratory, and for understanding the errors which arise in such systems. This doctoral thesis is structured around three interconnected bodies of technical work which span the field of superconducting quantum information science. In Part II, we consider the design, simulation, and measurement of high coherence quantum bits mediated by tunable coupler elements, a fundamental building block of extensible quantum processors based on superconducting Josephson circuits. In Part III, we consider the calibration of high fidelity single- and two-qubit gate operations, and we show how these operations were harnessed to perform a demonstration of Density Matrix Exponentiation, a deep Trotter-like quantum algorithm. In Part IV, we consider an array of techniques for the characterization, verification, and validation of quantum computing hardware, and we put forth a novel quantum characterization technique for reconstructing the dynamic loss channels of multi-qubit systems, known as Lindblad tomography. Framing the dissertation on each end, Parts I and V offer a complementary account of quantum computing grounded in feminist science and technology studies, situating quantum computing as a historical, social, and material-semiotic enterprise, complicating the narrative of progress which animates our work in the laboratory.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150098</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recording and Reprogramming Neuroimmunity in Cancer</title>
<link>https://hdl.handle.net/1721.1/150094</link>
<description>Recording and Reprogramming Neuroimmunity in Cancer
Tabet, Anthony
Adult and pediatric glioblastomas are incurable and universally lethal, with a median survival under 15 months. Despite this overwhelming unmet medical need, therapies developed over the last three decades have been unable to extend patient lifetime beyond a few months. These therapies, which include the small molecule chemotherapies carmustine and temozolomide, photodynamic therapy, tumor-treating fields, and others, attempt to treat brain tumors by disrupting processes important for the proliferation of cancer cells. A brain tumor, however, is not just a collection of cancer cells. Neurons and immune cells in the tumor microenvironment are reprogrammed to support tumor growth and are critical in disease progression. Yet therapies targeting these two compartments have been slower to enter the clinic. Much remains unknown about how cancer cells reprogram their neuronal and immune microenvironments, and how this program changes cellular phenotype to support cancer proliferation.&#13;
&#13;
Over the last five years, new studies suggest that disrupting the underlying neuroimmune profile of tumors can inhibit progression. Targeting tumor-associated neurons and immune cells, which are indispensable for cancer proliferation, instead of simply disrupting cell division in a subset of cancer cells is a promising therapeutic strategy. Yet it remains challenging to perform biologically precise experiments in vivo, and there is no toolkit available for chronic recording or modulation of these cell signaling cascades.&#13;
&#13;
In this thesis, we deploy implantable or injectable materials which transmit signals the nervous and immune systems are responsive to. These materials can modulate both systems within a murine brain tumor chronically in vivo. Soft, polymer-based hydrogel bioelectronic neural interfaces are used to deliver electrical, optical, or chemical stimuli for bidirectional interfacing with the neuron-cancer axis. Engineered cytokines tethered to a nanoparticle-based adjuvant allow for safe modulation of the tumor immune compartment with significant survival benefit. Together, these technologies comprise a toolkit which can be used to study how neurons and immune cells contribute to brain tumor progression, enabling us to precisely interrogate the neuro-immune-cancer axis chronically in vivo.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150094</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments and Simulations of Autonomous Microscale Robotics</title>
<link>https://hdl.handle.net/1721.1/150093</link>
<description>Experiments and Simulations of Autonomous Microscale Robotics
Yang, Jingfan
Sub-millimeter microscale machines capable of navigating inaccessible spaces and remote locations are steadily approaching reality, with a rich literature emerging on externally actuated and supervised agents. In comparison, progress is slow towards autonomous, intelligent microscale agents. This thesis builds towards fundamental aspects of this pursuit, tackling unanswered questions in robotic functionalities, fabrication techniques, applications, and control. Specifically, (i) I expanded upon the cleanroom-free autoperforation technology to allow facile metal patterning on 2D material surfaces, with which I fabricated mobile electronic microparticles; (ii) Based on experimental observations of autoperforated micro-architectures, I designed and theoretically validated an electrical circuit which integrates real-time access to memory, sensing, and actuation with compatibility with additive technologies and materials as well as significantly reduced design complexity; (iii) I built an in silico modeling toolbox which predicts the performance of a user-defined glucose-responsive insulin (GRI) in animals and humans. I demonstrated the model’s applicability to aiding the design of microrobotic delivery and monitoring systems circulating in the human body, as well as to the investigation of the unsuccessful clinical translation of a unimolecular GRI; (iv) Lastly, I explored collective intelligence, in the form of emergent self-oscillation, among a group of simple, unassuming microparticles. I studied the counter-intuitive order arising from intentional breakage of the collective’s symmetry, and harnessed the stable periodic mechanical motion for the generation of oscillatory electrical currents as well as cyclically driving microrobotic loads. 
These advances pave the way towards microscale machine intelligence – either through on-board integration of functionalities or through collective behavior – which enables sophisticated microrobotic tasks without external supervision or manipulation.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150093</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation of Full-Wave Effects on Lower-Hybrid Wave Propagation, Absorption, and Current Drive</title>
<link>https://hdl.handle.net/1721.1/150092</link>
<description>An Investigation of Full-Wave Effects on Lower-Hybrid Wave Propagation, Absorption, and Current Drive
Frank, Samuel
Lower hybrid current drive (LHCD) has been used as a radio-frequency (RF) heating and current drive actuator in tokamaks for over 40 years. However, despite being one of the first experimentally demonstrated sources of RF current drive, LHCD has never been consistently predicted by numerical simulation using coupled raytracing Fokker-Planck (FP) models. One frequently cited source of this discrepancy is the breakdown of the WKB approximation upon which raytracing is based, and "full-wave" effects such as diffraction and interference that raytracing cannot resolve. However, this claim has never been thoroughly investigated. In this work, we replace the raytracing model for wave propagation and damping in LHCD simulations with TORLH, a semi-spectral full-wave model that directly solves for the wave fields, in order to assess the validity of raytracing. Using groundbreaking TORLH simulations, we demonstrate that in most modern tokamak core scenarios LHCD raytracing/FP calculations closely match full-wave/FP results. We found that previous discrepancies between raytracing and full-wave results can in many cases be attributed to improper controls and a lack of self-consistency in the full-wave/FP iteration. Our simulations of Alcator C-Mod, DIII-D, and the Experimental Advanced Superconducting Tokamak (EAST) suggest full-wave effects are unlikely to be of particular importance in reactor-like scenarios where the LH wave is single-pass damped, or even in modern experiments where the lower-hybrid wave is weakly damped.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150092</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of a Quasi-Passive Variable Stiffness Ankle-Foot Prosthesis to Improve Biomechanics Across Walking Speeds</title>
<link>https://hdl.handle.net/1721.1/150091</link>
<description>Design and Evaluation of a Quasi-Passive Variable Stiffness Ankle-Foot Prosthesis to Improve Biomechanics Across Walking Speeds
Rogers-Bradley, Emily
Currently there are an estimated 875,000 people with major lower limb loss in the United States, with numbers projected to increase 1.6-fold by 2050 due to increasing prevalence of diabetes, obesity, and related dysvascular conditions [1]. Lower limb amputation often leads to secondary conditions such as knee pain, knee osteoarthritis, osteopenia, back pain, postural changes, and general deconditioning [2]. For people with transtibial (below-knee) amputation, prevalence of knee osteoarthritis in the contralateral limb is 17x higher than in the general population, with 27% of people with unilateral amputation developing knee osteoarthritis [3]. This large increase in incidence is likely due to insufficient push-off power from the prosthesis and increased limb loading on the contralateral side [4].&#13;
&#13;
This thesis presents an ankle-foot prosthesis which increases energy storage and return, increases peak power, and decreases contralateral limb loading in a low-mass, quasi-passive device. This is achieved by automatically adjusting prosthesis stiffness to maximize energy storage across walking speeds. A novel quasi-passive variable stiffness ankle-foot prosthesis is presented with high-resolution stiffness adjustment from 352–479 N·m/radian, corresponding to biological ankle quasi-stiffness during level-ground walking from 0.75–1.5 m/s for a 50th percentile male. This thesis presents the development of a novel mechanism for varying the bending stiffness of leaf springs, in which independently controlled lockable linear actuators constrain the sliding of parallel leaf springs relative to a mechanical ground to control bending stiffness. The detailed device design and analysis of the variable stiffness ankle-foot prosthesis are described, including a parametric model for approximating device stiffness, contact stress analysis, fatigue life calculations, and bolted joint analysis. The benchtop testing results demonstrate that the device successfully achieves the targeted stiffness range, device mass, and structural integrity.&#13;
&#13;
A study was conducted with 7 participants with unilateral transtibial amputation to evaluate the kinetic and kinematic effects of the variable stiffness prosthesis during walking compared to a passive energy storage and return foot. During the experiment, subjects walked on an instrumented treadmill at speeds of 0.75 m/s, 1.0 m/s, 1.25 m/s, and 1.5 m/s while force and motion data were recorded. Results from the clinical study demonstrate a 15.5 - 19.3% greater peak ankle angle with the variable stiffness ankle compared to a passive control, 5.4 - 14.8% greater peak joint power, 10.5 - 23.7% greater energy return, and a 4.0 - 6.7% lower contralateral limb knee external adduction moment across walking speeds.&#13;
&#13;
This thesis presents the first variable stiffness ankle-foot prosthesis of its architecture, utilizing a novel locking parallel leaf spring mechanism for stiffness control. The prosthesis has a lower device mass than existing powered and quasi-passive devices, and extends biomimetic functionality beyond standard passive prostheses. The thesis presents significant clinical results demonstrating the benefits of such a device on the biomechanics and energetics of people with transtibial amputation while walking. By normalizing biomechanics, increasing energy storage and return, and decreasing contralateral limb loading and unwanted knee external adduction moment, the device has the potential to improve health outcomes in people with transtibial amputation. Because it is low mass, low power, and lower cost than fully powered devices, it also has the potential to expand access to high performance prosthesis technology.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150091</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytical and Numerical Studies of the Generation and Propagation of Nonlinear Water Waves by a Wavemaker</title>
<link>https://hdl.handle.net/1721.1/150090</link>
<description>Analytical and Numerical Studies of the Generation and Propagation of Nonlinear Water Waves by a Wavemaker
Seixas de Medeiros, João
The ability to predict the nonlinear effects of extreme wave events is paramount for the preparation of modern experimental tests in large wave basins. Currently, linear theory is routinely used, hence nonlinear wave-wave interactions between propagating waves are often ignored. In addition, the nonlinear interaction between evanescent modes and the generated waves has not been considered. For realistic large wave basins, there is a need for a robust and efficient nonlinear numerical wave tank capable of effectively simulating such nonlinear wave generation and propagation from general wavemaker kinematics. This thesis addresses both issues through new analytical and numerical developments.&#13;
&#13;
We perform a theoretical multiple-scales analysis of the nonlinear interaction between a progressive wave and its evanescent modes, deriving a new set of (2 + &#119873;ₑ) partial differential equations (PDEs) that govern their coupled evolution, alongside the appropriate boundary conditions, where &#119873;ₑ is the number of evanescent modes considered. This nonlinear evolution is governed by three dimensionless parameters: (a) wave steepness &#120598;, (b) wavelength relative to water depth &#119896;₀h, and (c) a newly developed wavemaker parameter &#120583;, which measures the relative importance of evanescent modes at the onset of generation. Through a numerical study of these PDEs, we conclude that evanescent modes cause an increase in amplitude and a phase shift of the progressive wave, and that, for sufficiently high &#120598; and &#120583;, such nonlinear effects cannot be ignored. We also find that the evanescent modes cause an increase in the progressive phase speed close to the wavemaker.&#13;
&#13;
For realistic large basins, we develop a novel three-dimensional, fast, and robust high-order boundary element method (HBEM) to solve for the general time-history of nonlinear broadband steep waves generated by wavemakers of arbitrary shape. This numerical method uses a new expansion up to order &#119872; of the free-surface and wavemaker, which demands the calculation of a series of high-order normal derivatives of the velocity potential. To guarantee accuracy and convergence we must: (a) solve for these derivatives in integral form through recursive use of Green's third identity, while (b) imposing a &#119862;¹-continuity condition on the velocity potential at the intersection between the free-surface and wavemaker to suppress known singular solutions. We show excellent results up to &#119872; = 4.&#13;
&#13;
Finally, we use HBEM to solve for the nonlinear evolution of progressive waves due to the presence of evanescent modes, as done in the earlier analytical work. HBEM predicts the same amplitude increase and phase shift of the progressive wave, corroborating the existence of a resonant condition near the wavemaker. We conclude by investigating the importance of the evanescent modes in accurately predicting a nonlinear wave focusing event. We show that the nonlinear wave-wave interactions cause an appreciable shift of the focusing position and a reduction of the wave height during focusing. When the nonlinear interactions of the evanescent modes are included, their effect on the focusing event is comparable to those of the other wave-wave nonlinear interactions.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150090</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributions and perturbations of the marine dissolved cobalt cycle in a changing ocean</title>
<link>https://hdl.handle.net/1721.1/150086</link>
<description>Distributions and perturbations of the marine dissolved cobalt cycle in a changing ocean
Chmiel, Rebecca Jane
Cobalt is a necessary nutrient for many marine phytoplankton, but its hybrid-type nature results in small marine inventories that make it one of the scarcest bioactive trace metals in the oceans. This study examines the marine dissolved cobalt cycle in two regions: the Pacific Ocean and Antarctic coastal seas. In the North Pacific, elevated cobalt stoichiometries among phytoplankton were linked to nitrogen, iron and phosphate stress protein biomarkers at the boundaries of oceanographic provinces and upwelling zones, providing insight into the flexibility of cobalt stoichiometry. In both regions, perturbations to the marine cobalt cycle were either predicted or observed: in the equatorial Pacific, the dissolved cobalt inventory was predicted to increase by up to 28% due to the expansion of oxygen minimum zones in a warmer ocean, while in the Antarctic, melting ice shelves have the potential to shift the nutrient regime from iron limitation towards zinc and vitamin B12 limitation, resulting in higher cobalt demand and a lower dissolved cobalt inventory. When the global cobalt cycle was estimated throughout four of Earth’s systems (the lithosphere, biosphere, hydrosphere and the anthroposphere – the human environment), the cobalt flux through the anthroposphere was found to be only one order of magnitude smaller than the inventory of the entire hydrosphere (10⁹ mol Co yr⁻¹ and 10¹⁰ mol Co, respectively). This reveals that the marine cobalt inventory is vulnerable to anthropogenic perturbation through human mining, use and disposal of cobalt if appropriate pollution abatement, disposal and recycling infrastructure is not established. In light of observed and predicted changes to cobalt biogeochemistry, this research suggests that the marine cobalt cycle is particularly vulnerable to anthropogenic perturbation from both global climate change and pollution due to its low ocean inventory and interconnection with other nutrient biogeochemical cycles.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150086</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Control for Uncooperative Networks</title>
<link>https://hdl.handle.net/1721.1/150083</link>
<description>Optimal Control for Uncooperative Networks
Liu, Bai
Modern networks are complex and may include uncooperative components that cannot be fully controlled or observed. However, classic network optimization theory focuses on network models with all nodes being observable and controllable. In this thesis, we focus on developing optimal control algorithms for uncooperative networks with stochastic, non-stochastic or adversarial dynamics.&#13;
&#13;
We start by stabilizing uncooperative networks with stochastic dynamics, including external arrivals and uncooperative actions. Such networks can be characterized by overlay-underlay structures, where the network controller can only observe and operate on overlay nodes, while the underlay nodes are not controllable and have only limited observability. We propose the Tracking MaxWeight* (TMW*) algorithm, which does not require direct observations of underlay nodes and operates only on overlay nodes. TMW* maintains virtual queues that track the dynamics of the underlay nodes using estimates of the underlay queue backlogs. The controller makes control decisions based on the virtual queues. We show that TMW* is throughput optimal. We further extend our analysis to the setting in which the estimates of the underlay state are erroneous and show that, as long as the errors scale sub-linearly in time, TMW* preserves throughput optimality.&#13;
&#13;
Next, we consider uncooperative networks with non-stochastic or even adversarial dynamics, in which the external arrivals and underlay actions cannot be captured by any stochastic process. Even worse, there might exist an adversary that controls the underlay nodes. The adversary can observe the actions of the network controller and plan its actions accordingly to maximize disruption to the network. We first extend the existing adversarial network models by introducing a new maliciousness metric that constrains the dynamics of the adversary, and characterize the stability region. We show that TMW* is also throughput-optimal for networks with non-stochastic and adversarial dynamics. We also discuss the impact of estimation errors and show that TMW* remains throughput optimal if the errors scale sub-linearly in time.&#13;
&#13;
We then turn to the network utility maximization (NUM) problem for uncooperative networks. The network dynamics, such as packet admissions, external arrivals and control actions of underlay nodes, can again be stochastic, non-stochastic or adversarial. We propose the Tracking Drift-plus-Penalty (TDP*) algorithm that only operates on the overlay nodes and does not require direct observations of the underlay nodes. We rigorously analyze the tradeoffs between the average utility and queue backlog. We show that as long as the network is stabilizable, TDP* can solve the NUM, i.e., reaching the maximum utility while preserving stability.&#13;
&#13;
However, applications of NUM are still limited, as they require the utility to be a function of packet admissions. Finally, we optimize scheduling for uncooperative networks with general objective functions. We assume that there exists a scheduling algorithm that optimizes certain metrics but requires instantaneous access to network state information, which is not always available. A naive approach is to make decisions directly with delayed information, but we show that such methods may lead to poor performance. Instead, we propose the Universal Tracking (UT) algorithm, which can mimic the actions of arbitrary scheduling algorithms under observation delay. We rigorously show that the performance gap between UT and the scheduling algorithm being tracked is bounded by constants. Our numerical experiments show that UT significantly outperforms the naive approach in various settings.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150083</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermoelectric Energy Conversion: First-principles&#13;
Simulations, Energy Harvesting and Deep Cooling Systems</title>
<link>https://hdl.handle.net/1721.1/150081</link>
<description>Thermoelectric Energy Conversion: First-principles&#13;
Simulations, Energy Harvesting and Deep Cooling Systems
Xu, Qian
Thermoelectric devices can directly convert heat into electricity and electrical power into cooling or heating without moving parts. The energy conversion efficiency is dominated by the materials’ figure of merit zT, which is proportional to the electrical conductivity and the square of the Seebeck coefficient, and inversely proportional to the thermal conductivity. Researchers worldwide have made great progress in finding materials with higher zT through trial-and-error experiments during the past decades. This thesis contributes to understanding thermoelectric transport via first-principles simulations and to expanding thermoelectric technologies through the development of power generators using human metabolic heat and low-temperature freezers for vaccine storage.&#13;
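The proportionalities described above correspond to the standard definition of the dimensionless figure of merit (a well-known relation, not written out explicitly in the abstract):&#13;
&#13;
```latex
% Thermoelectric figure of merit:
% S = Seebeck coefficient, \sigma = electrical conductivity,
% \kappa = thermal conductivity, T = absolute temperature.
zT = \frac{S^{2}\,\sigma\,T}{\kappa}
```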
&#13;
First, we present density functional theory simulations of all thermoelectric transport properties of SiGe alloys over wide ranges of compositions, doping levels, and temperatures. This is the first time we are able to simulate realistic thermoelectric materials from first principles without fitting parameters. Surprisingly, the phonon drag effect, usually assumed to be significant only at low temperatures, still contributes 10-20% of zT at 1100 K in SiGe alloys. The favorable comparison between our calculations and reported experiments brings us closer to predicting the transport properties of practical thermoelectric materials. Second, we present a flexible thermoelectric generator (f-TEG) design based on bulk thermoelectric materials, featuring multifunctional copper electrodes that enable flexibility, efficient heat concentration, and dissipation. The f-TEG reaches a record-high power density of 48 µW/cm² and can power an LED for reading in a completely dark room at 17.5°C when worn on the wearer’s forehead. This high-performance, low-cost f-TEG offers broad prospects for low-power monitors or control systems in healthcare, agriculture, transportation, and industrial automation. Lastly, we develop a small, affordable compressor-thermoelectric hybrid freezer that can achieve an ultra-low temperature (ULT) of -70°C. Our ULT freezers can be readily commercialized into lightweight, portable, energy-efficient, and competitively priced products, which would facilitate transporting and storing clinical products around the world and make vaccines much more accessible to people in remote areas.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150081</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-layer interactions in atmospheric, cryospheric, and stellar dynamics</title>
<link>https://hdl.handle.net/1721.1/150080</link>
<description>Layer-layer interactions in atmospheric, cryospheric, and stellar dynamics
Shah, Kasturi
Layer-layer interactions comprise some key scientific unknowns and uncertainties in regional and large-scale planetary climate models. My dissertation studies their behaviour with an aim to develop explanations for and improve representations of observed phenomena. I focus on three different systems: stratosphere-troposphere coupling, ice-sediment mechanics, and turbulence in velocity-layered stellar interiors.&#13;
&#13;
Three distinct manifestations of stratosphere-troposphere coupling are studied in this dissertation. First, evidence suggests that ozone depletion has driven changes in southern hemisphere tropical tropospheric width, motivating study of the width of the stratospheric tropics and potential metrics. In the second chapter, directly measured chemical tracers form a useful basis for stratospheric tropical width estimates and two new tracer-based metrics are developed. These metrics are applied to assess relationships between tropospheric and stratospheric width changes and to study stratospheric transport processes.&#13;
&#13;
Second, unanswered questions about the extent of stratospheric modulation and disruptions to it confound efforts to distinguish anthropogenic sources of trace gases, such as continued production that violates the Montreal Protocol, from natural processes. The third chapter studies seasonal synchronizations between the dominant mode of lower stratospheric variability on one-to-five-year timescales and advective contributions to temporal changes in N₂O and CFC-11. These synchronizations influence the abundance of chemical species in the atmosphere from the stratosphere to the surface.&#13;
&#13;
Third, the close association between lower stratospheric temperatures and upwelling facilitates inferring the injection of trace gases and aerosols into the stratosphere. In the fourth chapter, a regional study finds that the response of lower stratospheric temperatures above the Asian summer monsoon anticyclone obeys quasigeostrophic theory. Lower stratospheric monsoon temperatures from both radiosondes and reanalysis have been cooling over the observational record, suggesting enhanced upwelling through the monsoon anticyclone and increased stratospheric dehydration.&#13;
&#13;
The second system this dissertation focuses on is layered viscous flow. The fifth chapter presents a theoretical and experimental study of two-layer viscous flows on inclined surfaces, commonly found in environmental and industrial flows. The general model reveals a rich variety of flow regimes for different modes of release. These phenomena are likely to underlie more complex examples of layered flows, for which the model and analytical inroads revealed here provide a basis for generalization.&#13;
&#13;
The sixth chapter is motivated by rapid ice flow, including ice streams and glacier surge events, which are quasiperiodic episodes of fast flow. Mechanisms for surge initiation remain unknown and debated. Here, evidence that many surge-type glaciers are underlain by layers of fine, deformable sediment spurs a stability analysis of the ice-sediment interface. The assessment reveals positive feedback loops and instability mechanisms between the subglacial sediment and overlying ice.&#13;
&#13;
The third layered phenomenon this dissertation addresses is stellar turbulence. Shear-driven turbulence in stratified stellar interiors provides an important source of vertical transport of heat, momentum, and chemical tracers. Motivated by numerical evidence of anisotropic flows, the seventh chapter presents a multiscale analysis that identifies turbulent behaviour regimes.&#13;
&#13;
The eighth and final chapter concludes this dissertation by presenting an outlook and possible directions for future research.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150080</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Reconstructing Biological History from Genomic Data</title>
<link>https://hdl.handle.net/1721.1/150079</link>
<description>Algorithms for Reconstructing Biological History from Genomic Data
Kim, Younhun
In this thesis, we study several problems in computational biology surrounding a central theme: inferring temporally-spaced events from noisy measurements. The first half studies two theoretical problems for explaining the history of human populations at different scales. First, we present sample complexity results for learning population structures given pairwise coalescence data. Second, we study pedigree reconstruction, proving that there is a sample-efficient algorithm for reconstructing a “family tree” from a population-wide collection of genomic information.&#13;
&#13;
The second half of the thesis concerns models for the microbiome and practical algorithms that emphasize scalability and interpretability. We present work on strain tracking, in which one is asked to reconstruct a time-series profile of bacterial strain ratios from shotgun-sequenced reads. We present an algorithm designed to scale to large data, discuss some real-world considerations that make the problem particularly challenging, and present empirical results. Last but not least, we present collaborative work on dynamical systems modeling of the microbiome, in which we discuss how one can learn a large, yet interpretable, Lotka-Volterra model from time-series measurements of the microbiome.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150079</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instance-Optimized Data Structures for Membership Queries</title>
<link>https://hdl.handle.net/1721.1/150074</link>
<description>Instance-Optimized Data Structures for Membership Queries
Vaidya, Kapil Eknath
We are near the end of Moore’s law, and hardware growth has stagnated. Modern data processing systems need to continuously improve their performance to match the enormous growth of data. Data structures and algorithms such as sorting, indexes, filters, hash tables, and query optimization are the fundamental building blocks of these systems and dictate their performance. Traditional data structures and algorithms provide worst-case guarantees by making no assumptions about the data or workload. Thus, the resulting data processing system gives adequate performance in the average case but may not be optimal for a particular use case. In this thesis, we look at how to redesign membership query data structures so they can automatically adapt to an individual use case. These instance-optimized data structures act as drop-in replacements for their counterparts in systems and improve their performance without any significant overhaul of the system or labor-intensive manual tuning.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150074</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Networks, Polarization, and Voting: Models for Information Aggregation in Social Settings</title>
<link>https://hdl.handle.net/1721.1/150073</link>
<description>Networks, Polarization, and Voting: Models for Information Aggregation in Social Settings
Jin, Yan
Social networks, voting, and polarization fall into the realm of the instrument, process, and consequence of information aggregation in social settings. These are both classical topics that have motivated studies from various disciplines and active areas in need of new models, as novel phenomena, demands, and proposals continue to emerge. In this thesis, we study three models for social information aggregation inspired by these three topics respectively.&#13;
&#13;
In the first chapter, we consider how to detect corruption when each network node’s true identity is only locally known. In this model, each vertex reports on the types - truthful or corrupt - of its neighbors, where truthful nodes report the true types and corrupt nodes report adversarially. We show that detecting corruption in this model admits a linear-time algorithm, while the minimal number of nodes the corrupt party needs to control in order to hide all corruption is hard to approximate to any multiplicative factor, assuming the Small Set Expansion Hypothesis.&#13;
&#13;
In the second chapter, we propose a geometric opinion dynamic model where a strong form of polarization in high-dimension emerges: public opinions not only radicalize on each issue, but also correlate across issues. We demonstrate that this type of polarization could arise as an unintended byproduct of influencers’ natural effort to promote a product or an idea. We analyze this mechanism with one or more influencers, sending messages strategically, heuristically, or randomly, and examine the computational aspects of optimal influencing strategy and its effect on polarization.&#13;
&#13;
The third chapter considers whether distributed election procedures can aggregate to good social choice outcomes when voters delegate strategically. We model liquid democracy as a game where voters with continuous-valued preference peaks choose between delegating and, at a cost, learning about policies and voting directly. We derive the pure-strategy coalition-proof Nash equilibria (cpNE) and show that the equilibrium delegation network varies with the learning cost. When the cost is low, all voters delegating to the median is a cpNE. As the learning cost increases, new forms of cpNE emerge, in which extreme voters delegate inward and moderate voters delegate outward to the nearest incentivized voters.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150073</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Deployable Robust Text Classifiers</title>
<link>https://hdl.handle.net/1721.1/150071</link>
<description>Towards Deployable Robust Text Classifiers
Xu, Lei
Text classification has been studied for decades as a fundamental task in natural language processing. Deploying classifiers enables more efficient information processing, which is useful for various applications, including decision-making. However, classifiers also present challenging and long-standing problems. As their use increases, expectations about their level of robustness, fairness, accuracy, and other metrics increase in turn.&#13;
&#13;
In this dissertation, we aim to develop more deployable and robust text classifiers, with a focus on improving classifier robustness against adversarial attacks by developing both attack and defense approaches. Adversarial attacks are a security concern for text classifiers: a malicious user takes a sentence and perturbs it slightly to manipulate the classifier’s output. To design more effective attack methods, we focus first on improving adversarial sentence quality – unlike existing methods that prioritize misclassification and ignore sentence similarity and fluency, we synthesize these three criteria into a combined critique score. We then outline a rewrite-and-rollback framework for optimizing this score, achieving state-of-the-art attack success rates while improving similarity and fluency. We focus second on computational requirements. Existing methods typically use combinatorial search to find adversarial examples that alter multiple words, an approach that is inefficient and requires many queries to the classifier. We overcome this problem by proposing a single-word adversarial perturbation attack. This attack only needs to replace a single word in the original sentence with a high-adversarial-capacity word, significantly improving efficiency while keeping the attack success rate similar to that of existing methods.&#13;
&#13;
We then turn to defense. Currently, the most common approach for defending against attacks is training classifiers using adversarial examples as data augmentation, a method limited by the inefficiency of many attack methods. We show that training classifiers with data augmentation through our efficient single-word perturbation attack can improve the robustness of the classifier against other attack methods. We also design in situ data augmentation to counteract adversarial perturbations in the classifier input. We use the gradient norm to identify keywords for classification and a pre-trained language model to replace them. Our in situ augmentation can effectively improve robustness and does not require tuning the classifier.&#13;
&#13;
Finally, we explore the vulnerability of a very recent text classification architecture – prompt-based classifiers – and find them to be vulnerable to attacks as well. We also develop a library called Fibber to facilitate adversarial robustness research.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150071</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights from geodynamic models into ice flow, mantle magmatism, and their interactions</title>
<link>https://hdl.handle.net/1721.1/150070</link>
<description>Insights from geodynamic models into ice flow, mantle magmatism, and their interactions
Clerc, Fiona
In this thesis, I use geodynamic models to study processes within the Earth's mantle and cryosphere.&#13;
&#13;
I begin by quantifying previously unconsidered sources of magmatic CO₂. In Chapter 2, I predict how small concentrations of CO₂ found in passively upwelling mantle throughout ocean basins may generate low-degree carbonate melting. I find the flux of CO₂ segregated by these melts rivals the flux from mid-ocean ridges. In Chapter 3, I model how the deglaciation of the Yellowstone ice cap caused a reduction in mantle pressures and enhanced melting 19-fold. I predict the additional melting segregates a globally-significant mass of CO₂, potentially playing a role in positive feedbacks between deglaciation and climate. I suggest enhanced melting may be important in other magmatically-active, continental settings undergoing rapid deglaciation -- for instance, under the collapse of the West Antarctic Ice Sheet (WAIS).&#13;
&#13;
This thesis next explores glaciological factors controlling WAIS stability, associated with the fracturing of ice sheet margins supported by floating ice shelves. The Marine Ice Cliff Instability posits that ice cliffs above a critical height collapse under their own weight, initiating runaway ice sheet retreat. In Chapter 4, I model the formation of marine ice cliffs as an Antarctic ice shelf is removed. I show that over ice-shelf collapse timescales longer than a few days (consistent with observations), ice cliffs composed of intact ice are more stable, undergoing viscous flow rather than brittle fracture. I next investigate interactions between viscous and brittle processes, guided by observations on a modern Antarctic ice shelf. In Chapter 5, I model deformation at the McDonald Ice Rumples (MIR), formed as the Brunt Ice Shelf is grounded into a bathymetric high. The MIR are characterized by concentric folds intersected by radial fractures, implying viscous and brittle behavior, respectively. I interpret these features to constrain ice rheology and strength. More broadly, this final chapter highlights how leveraging glaciological observations as natural experiments places constraints on the phenomenological laws which govern ice and (analogously) mantle flow.&#13;
&#13;
In summary, jointly developing models of both ice and mantle flow better constrains the dynamics of each system (solid Earth and cryosphere) and their interactions.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150070</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Untangling the complexity of nature: Machine-learning for accelerated life-sciences</title>
<link>https://hdl.handle.net/1721.1/150069</link>
<description>Untangling the complexity of nature: Machine-learning for accelerated life-sciences
Yaari, Adam U.
The fundamental understanding of living processes is one of the main pillars of modern medicine and technology. Biological mechanisms are convoluted, stochastic systems that remain poorly understood despite centuries of rigorous scientific work. In recent years, machine-learning (ML) has resurfaced as a powerful framework for identifying patterns of interest in complex datasets. Yet the impact of such methods remains limited in the broad context of life-sciences. This work optimizes the utility of ML to accelerate research on fundamental biological problems. First, we propose a paradigm shift from siloed data curation to multi-purpose cohorts at scale, even in the most restrictive case of human experimentation. The potential of this approach is revealed through the Brain TreeBank, a multi-modal dataset of naturalistic language aligned to intracranial neural recordings. The TreeBank provides the resolution and breadth required to probe the spatio-temporal dynamics of language context dependence and representation in the brain. Second, we argue for the importance of ML interpretability in accelerating the understanding of biology. We develop an explainable general-purpose tool for modeling discrete stochastic processes at multiple resolutions with output certainty estimation. We demonstrate the utility of the method by modeling patterns of somatic mutations across the entire cancer genome and extend it to map mutation rates in 37 types of cancer. The confidence intervals and increased sensitivity of the method identify sets of mutations that likely drive cancer growth in both coding and noncoding regions of the genome. Broadly, this work demonstrates how computational approaches can overcome unique challenges in biological data and how biological problems can drive advances in computational methodologies.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150069</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>From genetics to disease: Algorithms to decode somatic mutations</title>
<link>https://hdl.handle.net/1721.1/150068</link>
<description>From genetics to disease: Algorithms to decode somatic mutations
Sherman, Maxwell A.
A long-standing goal of biology is to understand how the 3 billion bases of DNA in each human cell contribute to molecular, cellular, and, ultimately, organism function. Somatic mutations, which arise in cells during the course of life, are natural experiments that can be leveraged to provide insight into this profound question. This thesis develops computational methods to identify somatic mutations and infer their phenotypic relationships from population-scale genome sequencing. The methods are developed and applied in the context of two human diseases, autism spectrum disorder and cancer. First, we develop a suite of computational tools to detect somatic copy number variants that likely arose during early embryonic development. We apply this tool set to establish that such CNVs contribute substantially to the risk of developing autism spectrum disorder in a small number of carriers. We next develop a general purpose method for modeling discrete stochastic processes at multiple resolutions. We demonstrate the utility of the method by modeling patterns of somatic mutations across the cancer genome. We finally extend and apply the aforementioned method to map somatic mutation rates in 37 types of cancer and identify sets of mutations that likely drive cancer growth in both coding and noncoding regions of the genome. Broadly, this work demonstrates how the unique challenges of biological data can both inform and benefit from computational research.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150068</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a robust ratiometric sensor and a long-term memory genetic toggle switch</title>
<link>https://hdl.handle.net/1721.1/150067</link>
<description>Design of a robust ratiometric sensor and a long-term memory genetic toggle switch
Kwon, Ukjin
Synthetic biology is an interdisciplinary field that combines principles of engineering, biology, physics, and other disciplines to design and construct novel biological parts, devices, and systems that do not exist in nature. It has the potential to revolutionize many industries, including medicine, agriculture, and energy production. Despite its potential, synthetic biology is still mainly confined to laboratory experimentation because the design process is complex and may not consistently yield reliable outcomes when applied to real-world settings. These challenges can be attributed, in part, to a lack of modularity and the inherent stochasticity of biological systems. In the realm of synthetic biology applications, such as diagnosis, multi-input sensors and long-term memory devices that can remember sensory outputs for later analysis are essential. However, these have proved difficult to achieve. In this thesis, through mathematical modeling and stochastic analysis, I present a proof-of-concept design for a robust ratiometric sensor, which is a type of multi-input sensor that is especially useful for sensing relative biomarker concentrations for in-gut diagnostics. Additionally, I provide design principles for a long-term, yet reversible, memory device, which can be toggled between two stable states that can be maintained for a long time despite stochastic fluctuations. These two circuit designs can be widely applied in many diagnostic applications, such as through engineered bacteria in the gut, and at the same time they offer insight into how natural systems may perform similar tasks.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150067</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dendritic cell dysfunction restrains cytotoxic T cell responses against cancer</title>
<link>https://hdl.handle.net/1721.1/150065</link>
<description>Dendritic cell dysfunction restrains cytotoxic T cell responses against cancer
Zagorulya, Maria
Although immune checkpoint blockade (ICB) therapy can induce durable survival benefits in patients with advanced cancer, most patients do not respond. ICB acts by reinvigorating pre-existing anti-tumor immune responses, and responders are often characterized by the presence of a T cell infiltrate in tumors. However, T cell infiltration does not always correspond to ICB efficacy. Tumor-reactive T cells can acquire persistent dysfunctional states, which are resistant to ICB reinvigoration. Increasing evidence suggests that T cell dysfunction can arise during T cell priming. Dendritic cells (DCs) play a key role in priming tumor-reactive T cells, indicating that DC-derived signals could regulate the functional quality of anti-tumor T cell responses. In this work, we investigated how cancer-associated suppression of DCs could lead to dysfunctional anti-tumor T cell responses.&#13;
&#13;
First, we explored tissue-specific mechanisms that could mediate lung tumor-specific T cell dysfunction, previously found to be induced during T cell priming in the lung tumor-draining lymph node (tdLN). We determined that the T cell dysfunction was caused by regulatory T cell (Treg)-mediated suppression of DC stimulatory capacity. Suppression required direct contact between Tregs and DCs and was specifically associated with the presence of clonally-expanded T helper type 1 (TH1)-like Tregs. TH1-like Tregs were induced in response to elevated levels of interferon-gamma (IFNγ) in the lung tdLN. Administration of IFNγ-blocking antibody could counter the tissue-specific enrichment in IFNγ, repolarize Tregs and restore cytotoxic T cell responses against lung cancer. &#13;
&#13;
Next, we examined longitudinal changes in anti-tumor immunity associated with the observed decline in ICB efficacy in later-stage tumors. We found that ICB resistance at later timepoints was accompanied by T cell dysfunction and a decline in stimulatory DCs in both the tumor and tdLN. Treatment with Poly(I:C) could enhance T cell and DC responses at later timepoints, providing a clear rationale for combination immunotherapy using Poly(I:C) and ICB.  &#13;
&#13;
Our work demonstrates that distinct tissue-specific and temporal elements can suppress DC ability to support productive anti-tumor immunity. Counteracting these mechanisms of DC dysfunction has the potential to enhance cytotoxic T cell responses and help better leverage the potential of anti-tumor immunity for long-term disease control.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150065</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electro-chemo-mechanics of solids: application to all-solid-state&#13;
batteries, polyelectrolyte gels, and actuators.</title>
<link>https://hdl.handle.net/1721.1/150064</link>
<description>Electro-chemo-mechanics of solids: application to all-solid-state&#13;
batteries, polyelectrolyte gels, and actuators.
Narayan, Sooraj
In several solid systems, including batteries and various electrochemically-based actuators, charged ions are transported across a solid host material under the influence of electric fields and concentration gradients, which can cause local volumetric expansion and contraction. Since stresses and strains are transmitted very effectively in solids, the chemical expansion leads to significant stress generation in the host material. These stresses also influence the movement of ions, and thus electrochemistry and mechanics are highly coupled in such systems. The overarching theme of this thesis is the theoretical formulation and numerical simulation of such systems. This thesis is divided into two major parts: Part 1. All-solid-state lithium metal batteries: While all-solid-state batteries, which use solid electrolytes (SE), are safer than conventional liquid electrolyte based systems, they are currently plagued by major challenges leading to cell failure, such as lithium filament growth through the SE, fracture of the SE, and decohesion of the anode and the SE. In order to aid and advance our understanding of the mechanisms that lead to these various modes of failure, we have mathematically modeled the electrodeposition and attendant large viscoplastic deformation of lithium at the anode-SE interface of an all-solid-state lithium metal battery (ASSLMB). Through numerical finite-element implementation of our model, we have studied the deleterious effects of plating-and-stripping of lithium around interfacial chemical or mechanical defects in ASSLMBs. Our simulations reveal the role of charging/discharging current levels, cell stack pressure, and other mechanical constraints of the system on possible failure mechanisms of such cells. Part 2. Polyelectrolyte polymers: Ionizable polymeric gels which mechanically respond to electrostatic/chemical stimuli are useful in artificial muscles, artificial skin, and drug delivery applications (among others).
We have formulated a thermodynamically-consistent, fully-coupled, theoretical electro-chemo-mechanical framework accounting for large deformations, electrostatic influence on charged species, and simultaneous cross-diffusional transport of multiple mobile species. We have suitably specialized this general framework to model: (i) ionic polymer-metal composites, (ii) ionotronic devices, and (iii) polyelectrolyte gels. Using the finite-element simulation capabilities that we developed for each case, we have successfully validated our models against experiments from the literature. We also show many applications of these materials in technologically-relevant actuators, which demonstrate the utility of our numerical modeling capabilities as a tool for designing ionotronic devices and electro-chemo-mechanical actuators.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150064</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>In-home Gait Health Monitoring using Machine Learning and Ambient Sensing</title>
<link>https://hdl.handle.net/1721.1/150061</link>
<description>In-home Gait Health Monitoring using Machine Learning and Ambient Sensing
Hahm, Katie S.
Gait health assessments are commonly used for clinical evaluations of neurocognitive diseases and general well-being. Gait parameters such as step time variability and ground reaction forces are essential in distinguishing pathological and normal gait; changes in gait can be observed years before a patient is diagnosed with diseases such as Parkinson’s disease. Despite its importance, the current standard of practice is to perform gait assessments in clinical settings. These measurements are often elevated and do not accurately reflect gait patterns in daily life. There is a need for gait monitoring in the daily environmental setting.&#13;
&#13;
Internet-of-Things technology creates an opportunity to bridge this disparity in gait measurements by bringing sensors into the home environment. An ambient floor sensing approach is presented which uses the vibrations in the floor naturally created by footfalls to monitor gait. A few accelerometers are used to measure these signals, lowering the sensor density for easier deployability while minimizing privacy concerns.&#13;
&#13;
This thesis comprises four parts. First, signal processing algorithms are developed to extract relevant features from the floor vibrations. Second, temporal gait asymmetry is estimated using an adapted Gaussian mixture model approach. Third, the footfalls are localized through a tracking machine learning approach. Lastly, the tibial acceleration is estimated using machine learning. Experiments were performed to evaluate the performance of these estimations for single-person and two-person walking scenarios.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150061</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conformable Ultrasound Face Patch for Cavitation-enhanced Transdermal Cosmeceutical Delivery</title>
<link>https://hdl.handle.net/1721.1/150057</link>
<description>Conformable Ultrasound Face Patch for Cavitation-enhanced Transdermal Cosmeceutical Delivery
Yu, Chia-Chen
Increased consumer interest in healthy-looking skin demands a safe and effective method to increase transdermal absorption of innovative therapeutic cosmeceuticals. However, permeation of small-molecule drugs is limited by the innate barrier function of the stratum corneum. In this thesis, we develop a conformable ultrasound face patch (cUFP) that closely adheres to the facial skin to aid in the penetration and absorption of cosmeceuticals across the epidermis and dermis layers of the skin. A novel face mask design with piezoelectric transducers embedded in a soft elastomer substrate is fabricated and investigated, starting from a single-element prototype and subsequently a full two-dimensional array. Localized pockets of fluid coupling medium are created between the cUFP and skin. This approach provides sufficient reservoir space for inertial cavitation, convective mixing, and microjet formation with the application of ultrasound. Multiphysics simulation models are used to guide parameters for device design, and the modeling outputs are verified with electrical and mechanical characterization results. Subsequently, acoustic spectrum analysis and high-speed videography are conducted to elucidate a holistic understanding of the cavitation mechanism generated by intermediate-frequency sonophoresis. Together, the simulation model, electromechanical and acoustic characterization, and cavitation visualization help guide the operating parameters for the in vitro permeation study, which is conducted to test the efficacy of the cUFP in enhancing transdermal permeation. The final system demonstrates a 26.2-fold enhancement in niacinamide transport in a porcine skin model in vitro with a 10-minute ultrasound treatment, indicating the suitability of the device for short-exposure, large-area application of sonophoresis for patients and consumers suffering from skin conditions, damaged skin barriers, and premature skin aging.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150057</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flux-Independent Uncertainty Propagation of Nuclear Cross Section Data Using the Windowed Multipole Formalism</title>
<link>https://hdl.handle.net/1721.1/150056</link>
<description>Flux-Independent Uncertainty Propagation of Nuclear Cross Section Data Using the Windowed Multipole Formalism
Meyer, Isaac
This thesis encompasses work in the propagation of resolved resonance range nuclear cross section uncertainties, primarily for eigenvalue calculations. The first portion explores the use of resonance parameter uncertainty data in the generation of multi-group cross section libraries. Multigroup cross section uncertainties are fundamentally flux- and temperature-dependent, but are often used as general-purpose data across multiple applications. Traditional methods of handling resonance parameter uncertainty data do not account for the impact of the change in flux shape due to the change in resonance parameters. A single resonance model is developed to indicate the temperature and dilution regions where the approximate methods may not perform well. Additionally, a new approach for using the region-dependent multigroup uncertainties is proposed. The temperature-dependent multigroup covariances and region-wise propagation method are applied to a partial boiling water reactor (BWR) assembly calculation. The second portion involves the determination of Windowed Multipole (WMP) parameter uncertainty from continuous energy cross section uncertainty data. WMP parameters present a path forward for flux-independent uncertainty propagation in the resolved resonance region. While a library of WMP parameters exists that matches the mean value behavior of the Evaluated Nuclear Data File (ENDF) library, uncertainty information is only available for nuclides for which resonance parameter uncertainty is available. A least squares approach is developed to convert continuous energy cross section uncertainties to WMP parameter uncertainties. The approach is used to generate WMP covariances for various nuclides, highlighting nuclides for which no WMP covariances were previously available, such as ¹⁶O and ⁵⁶Fe.
These covariances are used along with sensitivities produced by the CLUTCH method in OpenMC to calculate resulting eigenvalue uncertainties in a partial BWR assembly as well as a plutonium nitrate solution criticality benchmark. This new method of producing WMP uncertainties enables the creation of a WMP parameter covariance library that includes all the cross section uncertainty information available in ENDF.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150056</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transposable elements and the regulatory logic of hematopoietic differentiation</title>
<link>https://hdl.handle.net/1721.1/150055</link>
<description>Transposable elements and the regulatory logic of hematopoietic differentiation
Najia, Mohamad Ali
The temporal regulation of gene expression by transcription factors, chromatin modifiers and cis-regulatory elements is central to establishing cellular identity and function. Understanding this regulatory logic is critical for deriving select cell types in vitro for translational applications. The human hematopoietic system has long been a model system and an important source for adoptive cell therapies, yet our understanding of the regulatory mechanisms that elicit commitment toward distinct hematopoietic lineages is continuously evolving. &#13;
&#13;
In this thesis, I describe several studies on transposable elements (TEs) as natural and engineered sources of regulatory innovation that contribute to, and aid in the investigation of, dynamic cellular processes. Toward this end, I built comprehensive genome-wide enhancer-gene maps spanning the human hematopoietic system and identified that TEs in the human genome contribute to the transcriptional networks regulating lymphoid cells. De-repression of TEs in hematopoietic stem cells, enacted via modulation of TE chromatin silencing machinery, facilitates the development of natural killer (NK) cells during lymphoid differentiation. Specifically, knockout of the H3K9 methyltransferase EHMT1 or transcriptional co-repressor TRIM28 induced NK-fated progenitors that ultimately generated NK cells with diverse effector properties. We further leveraged TEs by repurposing the packaging function of the MLV gag polyprotein to create a non-destructive reporter of the transcriptional states of living cells, enabling the measurement of dynamic transcriptional processes. Through engineering and scientific inquiry, I established the utility of TEs as synthetic biology tools, furthering our understanding of hematopoietic lineage decisions and highlighting that modulation of TEs can be enabling for hematopoietic cell engineering.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150055</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentering Russia: Art and Empire, 1900-1973</title>
<link>https://hdl.handle.net/1721.1/150054</link>
<description>Decentering Russia: Art and Empire, 1900-1973
Bonin, Christianna
This dissertation examines the representation and theorization of art historical and geographical peripheries in late imperial Russia and the Soviet Union, from the last decades of tsarist rule through the Soviet Union’s ascent as a leader of global decolonizing movements (1900-1973). Previous scholarship on cultural practices in the Soviet Union’s northern, eastern, and southern border regions has tended to uncritically reproduce the regime’s Marxist, anti-colonialist position, in turn foregrounding its emancipatory rhetoric and occluding the imperialistic aspects of how it managed its peoples. Focusing on representations of imperial and Soviet borderlands, as well as the training of artists from these regions, I instead reveal the Soviet era to have been a continuation of, rather than a rupture from, tsarist-era ideas about the cultural and political control of colonized peoples.&#13;
&#13;
Drawing on archival research in Russia, Ukraine, and Kazakhstan, this dissertation reveals new trajectories of art practice across a century shaped by economic and social upheaval, violently imposed national borders, and shifting approaches to art education and display. Chapter One focuses on Russian artist Konstantin Korovin’s (1861-1939) depictions of the empire’s borderlands for international exhibitions, revealing how and why the cultural, political, and intellectual distinctions between “peripheries” and “centers” emerged in the late Russian Empire. Chapter Two analyzes Suprematist artist Olga Rozanova’s (1886-1918) paintings, collages, and embroideries, demonstrating that the socioeconomic restructuring of peasant populations and craft workshops in Ukraine contributed to the emergence of avant-garde art practices, thus challenging the prevailing view of the canonical avant-garde as a metropolitan, Russia-centered movement. Chapter Three turns to artmaking in Soviet Kazakhstan from the mid-1930s to the early 1970s to consider the place of an imagined “East” in Soviet cultural politics. The case of Kazakh artist Abylkhan Kasteev (1904-1973) demonstrates that to be an artist from the “peripheries” was to contend with one’s alleged backwardness—an imposed identity that members of a younger, Cold War generation of Kazakh artists would agitate against by building their own south-to-south connections with the decolonizing “Third World.”&#13;
&#13;
By identifying how artists’ historical agency split along the lines of medium, particularly via critical distinctions of “craft” from “fine art,” these case studies form a basis for theorizing how the Soviet Union represented its “others” and thus reproduced itself. If postcolonial literature has often framed the “Orient” as a Western construction, the history narrated in this dissertation is an opportunity to re-examine such power relations in a modernizing yet non-capitalist and non-Western context. In treating borderlands as contested historical sites of representation and identity formation, this dissertation provides a foundation for further decolonial studies of art in these regions today, locating new centers in previous peripheries.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150054</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing Interfacial Reactions and Charge Transfer Kinetics in Electrochemical Energy Storage and Conversion</title>
<link>https://hdl.handle.net/1721.1/150053</link>
<description>Revealing Interfacial Reactions and Charge Transfer Kinetics in Electrochemical Energy Storage and Conversion
Zhang, Yirui (Mechanical engineer)
Climate change demands the development of clean energy technologies. Renewable energy sources such as solar or wind energy are intermittent, and it is necessary to develop advanced energy storage and conversion devices to complete the sustainable ecosystem, where Li-ion batteries and fuel cells promise a bright future. Batteries and fuel cells also play essential roles in electrifying transportation in place of internal combustion engines. Central to these electrochemical systems is the electrode-electrolyte interface, where (electro)chemical surface reactions or intercalation reactions occur, and its thermodynamic and kinetic properties determine the energy density, power density, and lifetime of the electrochemical devices. However, the molecular structures at the interface and how they promote or suppress the desired reactions remain unclear. Furthermore, a microscopic-level understanding of reaction mechanisms and electrochemical processes is still lacking, hindering the rational design of electrode-electrolyte interfaces to improve the performance of electrochemical devices.&#13;
&#13;
This thesis focuses on the fundamental understanding of the interfacial (electro)chemical reaction mechanisms at the molecular level and charge transfer kinetics at the electrified interfaces. First, an in situ Fourier-transform infrared spectroscopy (FT-IR) method was developed to examine the parasitic reactions between carbonate electrolytes and lithium nickel, manganese and cobalt oxides (NMC) in Li-ion batteries, and unique evidence for dehydrogenation reactions on Ni-rich NMC was revealed, which accounted for interface impedance build-up and battery capacity fading. Based on the proposed mechanism, strategies to suppress battery degradation were further demonstrated and discussed. Next, the kinetic mechanism for Li-ion intercalation was investigated, and experimental evidence from a charge-adjusted electrochemical method showed that ion intercalation occurs by coupled ion-electron transfer (CIET), which governs the current-dependent maximum capacity and power density of intercalation batteries. Further, the thesis extended in situ interface characterization and kinetic models to electrocatalytic reactions central to fuel cell technologies. Protic ionic liquids with different acid dissociation constants in an interfacial layer were found to enhance the oxygen-reduction reaction (ORR), attributed to strengthened hydrogen bonds between ORR products and ionic liquids, revealed by in situ surface-enhanced infrared absorption spectroscopy (SEIRAS) and density functional theory (DFT) calculations. Promoting hydrogen bonding between interfacial water molecules also facilitated proton-coupled electron transfer (PCET) kinetics, resulting in favorable hydrogen evolution reaction (HER) kinetics in controllable organic confinements.
This thesis has laid a solid foundation for the rational design of electrochemical interfaces employing the physical chemistry of electrodes and electrolytes, for next-generation electrochemical storage devices with improved energy and power density and cycle life.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150053</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Pricing Algorithms for Airline RM: Theoretical Properties and Competitive Implications</title>
<link>https://hdl.handle.net/1721.1/150051</link>
<description>Continuous Pricing Algorithms for Airline RM: Theoretical Properties and Competitive Implications
Szymański, Bazyli
After decades of development of revenue management (RM) systems, airlines have until recently been limited to a set of fixed fare classes and, in turn, price points, to distribute their fare products. The advent of IATA’s New Distribution Capability will allow airlines to move past these limitations and quote any fare from a continuous range. In theory, such continuous pricing could increase revenues by extracting more of the consumer surplus, through its ability to offer more granular fares closer to customer willingness-to-pay (WTP). This thesis provides a theoretical assessment and extensions of existing algorithms for continuous pricing, and discusses the market implications of moving away from fixed price points through competitive simulations.&#13;
&#13;
On the theoretical side, we contribute to the understanding of the existing bid price algorithms for single fare quote continuous pricing with static and dynamic optimization, two widely accepted approaches in the airline industry. For the static algorithms, in a simplified single-leg, single-period setup we prove the uniqueness and convergence of classless continuous Probabilistic Bid Price (ProBP) bid prices as well as the convergence of class-based continuous ProBP to classless ProBP bid prices, answering the question about the equivalence of the two optimizers. As we show, however, these results do not carry over to a realistic setup with multiple time periods.&#13;
&#13;
The dynamic optimization approach with classless Unbucketed Dynamic Programming (UDP) is built on a theoretically more established mathematical formulation. However, in its popular implementation it suffers from an inherent Poisson variance assumption, often considered a major limitation given that demand variance in reality is typically higher than the mean. To address that shortcoming, building on earlier work for traditional class-based RM, we introduce classless UDP with batch arrivals, where multiple customer arrivals are assumed within each DP time slice, leading to higher demand variance in the DP formulation. We show that when actual demand variance matches the batch arrival DP input, the new approach is an improvement over the standard classless UDP: the use of batch arrivals leads to higher airline revenues and flatter bid prices, even though the approach only emulates higher variance and the simulated passengers do not arrive in batches.&#13;
&#13;
Through PODS simulations, we show that through increased responsiveness to demand fluctuations and the ability to quote a closer-to-optimal price, an airline implementing continuous pricing can see revenue gains as high as 3% without affecting the revenues of traditional RM competitors, but that these gains are highly sensitive to the airline's price elasticity estimates. To address this sensitivity, we developed competitor adjustment for continuous pricing: a heuristic that adjusts the elasticity estimates used for fare quotation based on day-to-day deviations in competitor fares. When limited to price increments, the method was shown to bring further sustained revenue improvements of up to 0.5% for the implementing airline by capitalizing on the differences between conditional and maximum WTP, all while not negatively affecting competitor revenues.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150051</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Interfacial Thermal Transport with Molecular Dynamics: The Challenge of Making Accurate Comparisons to Experiment</title>
<link>https://hdl.handle.net/1721.1/150050</link>
<description>Modeling Interfacial Thermal Transport with Molecular Dynamics: The Challenge of Making Accurate Comparisons to Experiment
Wyant, Spencer Thomas
Interfacial heat transport is an increasingly important factor controlling the dissipation of heat in many modern electronic, optical, and magnetic devices – driven by their continued miniaturization and corresponding increase in interfacial density. Molecular dynamics (MD) simulations are a highly versatile theoretical tool that can be used to predict, understand, and rationalize interfacial heat transport. However, significant improvements are needed to make MD-based methods accurate enough to predict thermal boundary conductance (TBC) values – the primary quantity characterizing interfacial heat transport – that are consistent with experimental values. In this work, I investigate ways of improving MD simulations using three example interface systems: Ge-GaAs, Al-Al2O3, and AlN-GaN.&#13;
&#13;
First, accurate interatomic potentials are generated for each system using either pure machine-learned interatomic potentials (MLIPs) or a hybrid approach in which MLIPs are combined with a Taylor-expansion potential. Key tradeoffs between these approaches are assessed, with a particular focus on speed, stability, and vibrational accuracy. Next, I explore how key choices in the setup and analysis of TBC calculations can affect the comparison to experimental values. From a Landauer perspective, the results show that a 4-probe definition of TBC can exceed the maximum transmission limit, consistent with new experimental measurements of the Ge-GaAs TBC. After discussing the advantages and disadvantages of using nonequilibrium or equilibrium molecular dynamics (NEMD vs. EMD) to predict TBC, a new protocol is presented that attempts to mitigate the effects of noise by analyzing EMD data in a mode-specific fashion. While providing qualitatively better TBC results, the protocol unfortunately exhibits significant parameter sensitivity, at least with the current data.&#13;
&#13;
Using this protocol and the MLIPs developed in this work, I then predict temperature-dependent TBC values for Al-Al2O3 and Ge-GaAs interfaces, and compare them to experimental measurements. For Al-Al2O3, significant effort is made to establish a correspondence between the modelled atomistic structure and the experimental sample, resulting in a plausible O-terminated and Al-terminated structure. Surprisingly, the predicted TBCs of these two structures differ significantly, with correspondingly different behavior in their mode-level contributions. Unfortunately, neither set of TBC results exhibits good agreement with experimental measurements, nor do the TBC predictions for Ge-GaAs, which are hypothesized to be impacted by finite-size effects.&#13;
&#13;
Finally, an applied case is explored whereby 15N/14N isotopic disorder was introduced in an AlN-GaN interface in an attempt to enhance TBC, motivated by a prior work. Using a lattice-dynamics-based descriptor, it is shown that isotopic disorder does enhance “mode overlap” between different sides of the interface, which in concept can enhance interfacial heat flow. However, by performing NEMD calculations with the accurate MLIP, and paying close attention to how the temperature drop is extracted, the results demonstrate that no TBC enhancement occurs and that, rather, TBC deteriorates with the addition of isotopic disorder in AlN-GaN. The significant difference between this result and that of the prior work may be partially attributed to the use of a more accurate potential.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150050</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Scalable Structured Data from Clinical Text</title>
<link>https://hdl.handle.net/1721.1/150049</link>
<description>Towards Scalable Structured Data from Clinical Text
Agrawal, Monica
The adoption of electronic health records (EHRs) presents an incredible opportunity to improve medicine both at the point-of-care and through retrospective research. Unfortunately, many pertinent variables are trapped in unstructured clinical note text. Automated extraction is difficult since clinical notes are written in their own jargon-heavy dialect, patient histories can contain hundreds of notes, and there is often minimal labeled data. In this thesis, I tackle these barriers from three interconnected angles:  (i) the design of human-AI teams to speed up annotation workflows, (ii) the development of label-efficient modeling methods, and (iii) a re-design of electronic health records that incentivizes cleaner data at time of creation.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150049</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A small molecule-guided structure-function exploration of mitochondrial aldehyde dehydrogenase</title>
<link>https://hdl.handle.net/1721.1/150047</link>
<description>A small molecule-guided structure-function exploration of mitochondrial aldehyde dehydrogenase
Xu, Sophia Ye
Plants have been used as medicines in cultures across the globe for millennia. The pharmacological activity of plants is often derived from specialized metabolites, also called natural products. Despite this wealth of medical tradition, modern medicine was unable to precisely characterize the wealth of chemistry and individual bioactivities locked in whole plant tissues until recently. In the last four decades, almost 50% of all new FDA-approved small molecule drugs in the United States have been natural products or natural product derivatives, thanks to the advent of modern tools and techniques that confer the dual abilities to isolate and characterize individual molecules from whole plants, as well as precisely describe their effects on human physiology. The goal of this thesis is to describe the mechanism of action of one plant-based pharmacotherapy: the observed enhancement of human small aldehyde metabolism by kudzu flowers. First, we select mitochondrial aldehyde dehydrogenase 2 (ALDH2) as a representative enzyme for small aldehyde metabolism. We subsequently identify one kudzu flower-derived isoflavone, kakkalide, that is able to enhance the activity of ALDH2, and we characterize the binding and kinetics of this interaction. Finally, while we were trying to visualize the kakkalide:ALDH2 complex using X-ray crystallography, we serendipitously discovered a novel structure of ALDH2 complexed with NAD+ and polyethylene glycol (PEG) that provides insights into the dynamic mechanism of ALDH2 catalysis, informing both basic biology and downstream development efforts. Overall, this thesis aims to describe the biochemical and structural basis for ALDH2 activity enhancement by kakkalide, and provides new insight into the mechanism of action of ALDH2.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150047</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reshaping concrete: Empowering development through low-carbon structural design</title>
<link>https://hdl.handle.net/1721.1/150043</link>
<description>Reshaping concrete: Empowering development through low-carbon structural design
Ismail, Mohamed Abdelbagi
Less Economically Developed Countries (LEDCs) are struggling to meet the demand for affordable housing in their growing cities. There are several reasons for this, but a major constraint is the high cost of construction materials. In LEDCs, material costs can constitute up to 90% of the total cost of residential construction. Importantly, structural systems for multi-story housing in LEDCs are nearly always constructed in reinforced concrete and typically use structurally inefficient typologies like prismatic beams and flat slabs despite their structural material waste. This is because their construction mimics the materially inefficient practices of the More Economically Developed Countries (MEDCs), which were developed to reduce labor costs rather than material costs. The mounting use of steel-reinforced concrete structures in LEDC cities also raises concern about the environmental costs of construction; construction accounts for 20-30% of LEDC carbon emissions. &#13;
&#13;
This dissertation addresses these challenges with a set of strategies for the design and analysis of materially efficient concrete elements that can reduce the economic and environmental costs of urban construction. Developed to meet the constraints of LEDCs, structural elements are optimized to reduce the embodied carbon associated with the concrete and reinforcing steel while resisting the same loads as a standard building structure. These strategies include a novel approach to 3D shape parameterization, as well as a decoupled analytical engineering analysis method that accounts for the key failure modes and code-based constraints of reinforced concrete design. This method is verified through design examples that show how the embodied energy and carbon of concrete floor systems can be reduced by over 60% through shape optimization. Prototypes are fabricated and load tested to verify the efficacy of the structural design method, and to illustrate its ability to generate fabricable designs in an LEDC construction context. Finally, in order to broaden its potential impact, this research involves the development of accessible tools and methods for designers and stakeholders in LEDCs. The thesis presents an open-source toolset for the design and analysis of complex concrete elements and an intuitive design interface to communicate the performance and visual impact of materially efficient structures in real-time during early-stage design phases. This is especially useful in designs with exposed structural systems, allowing structural performance to play a key role in the final architecture.&#13;
&#13;
This thesis presents several strategies for the design of efficient concrete structures that allow us to build far more with far less, reducing the environmental and economic costs of construction while responding to the needs of LEDCs. These methods may enable the design of concrete elements for multiple performance criteria such as structural behavior, acoustic transmission, and thermal mass. They can also enable an accessible design practice through machine learning, real-time iterative workflows, and visualization tools that include the end user in the architectural design process.&#13;
&#13;
Keywords: reinforced concrete, shape optimization, digital fabrication, stress testing, machine learning,&#13;
development, sustainability
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150043</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Benaki Museum in Interwar Greece: Constructing Greek Art &amp; the Greek Nation After the Fall of the Ottoman Empire</title>
<link>https://hdl.handle.net/1721.1/150042</link>
<description>The Benaki Museum in Interwar Greece: Constructing Greek Art &amp; the Greek Nation After the Fall of the Ottoman Empire
Courcoula, Alexandra
In this dissertation, I investigate the formation of the Benaki Museum, founded in Athens, Greece in 1931, which exhibited a wide variety of Ottoman art and material culture. The museum, which continues to be a pre-eminent institution in Greece today, was largely the work of the Greek collector and founder Antonis Benakis (1873 – 1954) and its first director and curator Theodore Macridy Bey (1872 – 1940). By analyzing the museum’s early acquisition policy, its curatorial strategies, and its publications, I argue that it contributed to the formulation of Greek national narratives at a pivotal moment in Greek history: the museum opened to the public in the years after the crushing Greek defeat in the Greco-Turkish War of 1919 – 1922, which signaled the failure of Greece’s expansionist vision, the final dissolution of the Ottoman Empire and the creation of the Turkish Republic. The consequent Exchange of Populations between Greece and Turkey (1923), also contributed to the country’s ethnic homogenization and brought over a million Ottoman-Christian refugees from Asia Minor into Greece.&#13;
&#13;
I argue that the Benaki Museum constructed a highly ideological image of Ottoman art and of the recent Ottoman past. It portrayed Ottoman art as an integral and venerable part of Greek culture, and, in doing so, overtly rejected some of the Orientalist narratives inherent in earlier and more dominant constructions of Greek nationalism. These narratives saw the classical period as the pinnacle of Greek cultural and aesthetic achievement and rejected the Ottoman period as one of cultural decline.&#13;
&#13;
At the same time, I also argue that the museum forged a Greek national history of the Ottoman past. Benakis and Macridy portrayed the material culture of Greeks of the Ottoman Empire – which were products of a multiethnic society – as distinctly “national,” culturally pure and largely free of Turkish influence.&#13;
&#13;
Moreover, the museum, which was built following a period of major territorial and demographic changes, made telling statements about who constituted the Greek nation and what constituted Greek culture. Importantly, the museum insisted on the notion that the art of Asia Minor, from regions only recently forfeited to the newly founded Turkish Republic, was an integral part of Greek culture.&#13;
&#13;
Finally, I underline Benakis’ and Macridy’s reliance on Ottoman bodies of knowledge, as well as exchanges with intellectuals in Republican Turkey who were also promoting the aesthetic appreciation of their shared Ottoman past. I thus highlight intellectual continuities and connections that are largely ignored in scholarship on modern Greece, in large part due to the very success of national narratives created by institutions like the Benaki Museum.&#13;
&#13;
I elaborate on these arguments over the course of three chapters, each of which is dedicated to one of the Ottoman-period collections in the Benaki Museum. I first engage with the museum’s large collection of Islamic ceramics and textiles produced in Ottoman Asia Minor. I then turn to the collection of late-Ottoman ecclesiastical objects, brought to Greece by the refugees of the Exchange of Populations. Finally, I turn to the museum’s collection of Ottoman-period Greek folk costumes.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150042</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Social Media: Misinformation, Attention, and Digital Advertising</title>
<link>https://hdl.handle.net/1721.1/150040</link>
<description>Understanding Social Media: Misinformation, Attention, and Digital Advertising
Siderius, James
Online platforms have fundamentally changed the dynamics of social interactions and information transmission. In this thesis, I explore recent trends in social media through models and experiments of user behavior, platform algorithms and incentives, and policy initiatives. I focus on the social consequences of new communication technologies, their intended and unintended societal consequences, and how to steer them in more socially beneficial directions.&#13;
&#13;
In recent years, social media has become a breeding ground for misinformation, but the reasons misinformation spreads are still imperfectly understood. First, I discuss the role of social media in the propagation of misinformation and how latent platform algorithms may exacerbate its influence, and analyze various policies to correct misinformation spread. Technological advances stemming from social media have also enabled users to systematically access a deluge of information; yet, it is unclear to what extent this technology has actually helped to better inform users. In the second part of the thesis, to characterize the landscape of digital content, I propose a model of content creation and consumption on digital platforms where users have limited attention, and discuss related experiments on the role of algorithmic ranking in user engagement. Lastly, we observe that business models of online platforms drive much of the content creation and algorithmic choices of platforms, and ultimately impact human-machine interactions. The final part of the thesis discusses the various business models of media platforms, their implications for consumer welfare, and possible remedies.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150040</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local official and polluter accountability in China's environmental inspections</title>
<link>https://hdl.handle.net/1721.1/150039</link>
<description>Local official and polluter accountability in China's environmental inspections
Wu, Mengying
Poor environmental practices can lead to high levels of ambient air pollution that are detrimental to human health and cause premature death. Even if environmental regulations are well designed, incomplete implementation at the local level imposes substantial social costs. Although the problem is widely documented, solutions to inadequate enforcement have not been comprehensively studied. Using the context of China's environmental inspections, I examine which interventions effectively reduce air pollution in Chinese cities and close the enforcement gap. I am particularly interested in interventions that aim to alter the behavior of polluting company managers and local environmental bureaucrats.&#13;
&#13;
In this thesis, I evaluate China's "new" hybrid approach, environmental inspections, that combines top-down scrutiny with bottom-up reporting. First, I investigate the dynamic responses of polluters to central scrutiny, which is a sharp, temporary increase in regulatory enforcement. Using data from China that links the intensity of environmental policing to high-frequency air pollution data, I show that crackdown over short (one-month) periods results in a sharp (35-39%) reduction in weekly average pollution around coal power plants. Pollution gradually reverts to prior levels after crackdowns end. The pace of reversion is faster for firms that outrank the city government, suggesting that hierarchical ties to China's central authorities limit a firm's accountability to the local environmental protection bureau.&#13;
&#13;
Second, I present a full account of the citizen complaints received during the environmental inspection and evaluate the incremental effectiveness of the complaint channel on plants’ environmental performance. I build a novel data set that includes all complaint entries filed by citizens. I describe the frequency of complaints received by a wide range of polluters during the campaign, from small barbecue stalls to large aluminum smelters, suggesting that citizens focus on the most salient pollution sources in their immediate surroundings. Engaging citizen informants during crackdowns is not associated with larger pollution reductions and has no lasting effect, especially for the outranking firms. I further explore the effectiveness of such an approach in a repeated version, as the same program is conducted nationwide two years after the initial round. I find diminishing effects of top-down scrutiny as plants learn and update their beliefs on the seriousness of the campaign. However, direct central attention to these outranking firms during the lookback round may prolong the environmental inspection effect.&#13;
&#13;
Third, I ask what drives citizen engagement in a centrally initiated monitoring program in an authoritarian regime. I identify city and plant characteristics that predict the number of per capita complaints. Cities with poor environmental performance at the outset receive a greater number of complaints during both rounds of the environmental inspection. However, citizens cannot identify and report egregious plants that polluted more in the baseline period. In addition, the willingness of citizens to file complaints is contingent on the environmental effectiveness of the original round. At the city level, the number of air-related complaints received per capita during the lookback round will decrease if measured air pollutants return to their baseline pollution levels following the conclusion of the original round.&#13;
&#13;
This dissertation empirically documents the limits of China's highly centralized, state-led approach to improving environmental governance through enforcement crackdowns and engaging citizen complaints.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150039</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monolithic Integration of Fluidics, Electronics, and Photonics using CMOS Foundry Processes</title>
<link>https://hdl.handle.net/1721.1/150038</link>
<description>Monolithic Integration of Fluidics, Electronics, and Photonics using CMOS Foundry Processes
Kim, Jaehwan
In the past decade, the fabrication capabilities of CMOS (complementary metal-oxide-semiconductor) foundries have been successfully applied to next-generation sequencing systems to simultaneously lower the cost, miniaturize the footprint, and improve the throughput of the instrument. By utilizing CMOS foundries, designers are given access to patterns with nanometer precision, a suite of readout and data interface circuit libraries, and most importantly the capability to mass-produce their designs. The demonstrations so far have mainly focused on utilizing the sensing capabilities of CMOS, either electrochemical sensing by ion-sensitive field effect transistors or fluorescence detection by a CMOS photodiode array. Moreover, nanofluidic structures within the patterning precision of modern CMOS foundries have functions useful for molecular sensing, such as concentration, separation, or fluorescent signal enhancement by volume confinement. In this thesis, we leverage the precision and scalability of CMOS to fabricate nanofluidic devices alongside photodetectors and readout circuits, utilizing a XeF2 sacrificial etch process from CMOS MEMS fabrication. A packaging approach using wirebonding and low-cost thermoplastic microfluidics is developed for gaining fluidic and electrical access to the CMOS nanofluidic chip. Low-noise and single photon sensitive photodetectors are presented as a means for optical detection of fluorophores. Lastly, capitalizing on this monolithic integration capability, co-design of an integrated nanopore with readout amplifier circuit is performed using multiphysics and circuit simulation tools. With these results, this thesis aims to lay the groundwork for designing and fabricating low-cost, miniaturized, and high performance biomolecule sensing systems.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150038</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Generation and Detection of Squeezed Microwave Photons Realized using Traveling-Wave Parametric Amplifiers</title>
<link>https://hdl.handle.net/1721.1/150037</link>
<description>The Generation and Detection of Squeezed Microwave Photons Realized using Traveling-Wave Parametric Amplifiers
Qiu, Jack Yanjie
Squeezing the electromagnetic vacuum is an essential metrological technique used to reduce quantum noise in applications spanning gravitational wave detection, biological microscopy, and quantum information science. In circuit quantum electrodynamics, Josephson parametric amplifiers play a crucial role in quantum-limited amplification and squeezed microwave generation. In this thesis, we develop a dual-pump, broadband Josephson traveling-wave parametric amplifier (JTWPA) to demonstrate non-degenerate four-wave mixing using a dual-dispersion-engineered JTWPA and investigate its squeezing performance. Furthermore, the thesis extends the existing JTWPA design to a lower frequency spectrum in the hundreds of MHz regime and demonstrates broadband parametric amplification with a large gain. Capable of multiplexed readout and improved signal-to-noise ratio, the new JTWPA can be utilized in a wide range of applications in condensed matter and astrophysics.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150037</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the electrical behavior of gases at high pressures</title>
<link>https://hdl.handle.net/1721.1/148723</link>
<description>A study of the electrical behavior of gases at high pressures
Kusko, Alexander,
            1921-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1951; Vita.; Bibliography: leaves 60-60c.
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148723</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>"And tree" computer data structures for supervisory control of manipulation,</title>
<link>https://hdl.handle.net/1721.1/148720</link>
<description>"And tree" computer data structures for supervisory control of manipulation,
Hardin, Philip A.
            (Philip Arnold)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Vita.; Bibliography: leaves 225-227.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148720</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some results on (C0) semi-groups and the Cauchy problem,</title>
<link>https://hdl.handle.net/1721.1/148719</link>
<description>Some results on (C0) semi-groups and the Cauchy problem,
Packel, Edward W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1967; Vita.; Bibliography: leaves 72-73.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148719</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an economic theory of income distribution.</title>
<link>https://hdl.handle.net/1721.1/148717</link>
<description>Towards an economic theory of income distribution.
Blinder, Alan S.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1971; Vita.; Bibliography: leaves 216-222.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148717</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Results on Brown-Gitler type spectra</title>
<link>https://hdl.handle.net/1721.1/148715</link>
<description>Results on Brown-Gitler type spectra
Goerss, Paul Gregory.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1983; Bibliography: leaves 178-180.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148715</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rationalizable economic behavior and strategic choice</title>
<link>https://hdl.handle.net/1721.1/148714</link>
<description>Rationalizable economic behavior and strategic choice
Bernheim, Bert Douglas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1982; Includes bibliographies.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148714</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a High Specific Power Electric Machine for Turboelectric Propulsion</title>
<link>https://hdl.handle.net/1721.1/148615</link>
<description>Design of a High Specific Power Electric Machine for Turboelectric Propulsion
Dowdle, Aidan Patrick
The benefits of turboelectric propulsion for aviation, in which a gas generator core electrically drives motor-powered propulsors, are limited by the mass and losses of the electric components introduced into the drivetrain. These propulsion systems are predicted to result in a 15% fuel savings provided that megawatt-class electrical machines (EMs) and power electronics (PEs) are available with power-to-mass ratios exceeding 13 kW/kg and 16 kW/kg, respectively.&#13;
&#13;
This thesis proposes an integrated prime mover concept enabled by the material choices and cooling technology available today. In this concept, an outer rotor, tooth-and-slot Halbach array is integrated with the low pressure compressor of a low fan pressure ratio aeroengine. The specific power of the integrated compressor generator is estimated to be 14.8 kW/kg, exceeding the NASA 2030 goal of 13 kW/kg for a standalone electric machine for aviation applications.&#13;
	&#13;
Relative to a standalone, optimized electrical machine, co-optimization of the EM, PEs, thermal management system, and turbomachine rim suggests a 38% increase in system specific power.&#13;
&#13;
Based on these findings and supported by 2D and 3D finite element analysis, a 19.7 kW/kg, megawatt-class, air-cooled tooth-and-slot Halbach array electrical machine demonstrator is conceived. A detailed design study together with risk mitigation experiments of key components are carried out, setting the stage for megawatt-class, high power density, and high efficiency electrical machines for aerospace applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148615</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Nucleic Acid-Based Sensors and Actuators</title>
<link>https://hdl.handle.net/1721.1/148610</link>
<description>Developing Nucleic Acid-Based Sensors and Actuators
Gayet, Raphaël Vincent
As the field of synthetic biology matures, engineers are tackling increasingly ambitious problems that require the integration of regulatory logic in complex environments. Nucleic acids are attractive molecules for designing sense-and-respond modules: they are ubiquitous, information-rich and interact with each other through simple rules. Here, through two examples, I show that nucleic acids are particularly suited to create programmable molecular tools, in which inputs and outputs are defined independently from each other. In the first half of this thesis, I describe the development of a strategy to design nucleic acid-responsive materials using the CRISPR-associated nuclease Cas12a as a user-programmable sensor and material actuator. I exploit the programmability of Cas12a to actuate hydrogels containing DNA as an anchor for pendant groups or as a structural element. This versatile approach improves on the sensitivity of current DNA-responsive materials while enabling their rapid repurposing toward new sequence targets. In the second half of this thesis, I describe how to engineer programmable single-transcript RNA sensors in vivo, in which adenosine deaminases acting on RNA (ADARs) autocatalytically convert target hybridization into a translational output. This system amplifies the signal from editing by endogenous ADAR through a positive feedback loop. This topology confers high dynamic range, low background, minimal off-target effects, and a small genetic footprint. I envision that the approaches described here have broad applications from basic science to advanced diagnostics and therapeutics, illustrating the great potential of programmable nucleic acid-based controllers.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148610</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algebraic structure of some groups of recursive permutations</title>
<link>https://hdl.handle.net/1721.1/148435</link>
<description>Algebraic structure of some groups of recursive permutations
Kent, Clement Fisher,
            1927-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1960; Vita.; Includes bibliographical references (leaves 101-102).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148435</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Burning velocities of the hydrogen peroxide decomposition flame</title>
<link>https://hdl.handle.net/1721.1/148430</link>
<description>Burning velocities of the hydrogen peroxide decomposition flame
Kehat, Ephraim.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Vita.; Includes bibliographical references (leaves 106-109).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148430</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear shell effects in fission : yields of krypton isotopes from the deuteron and alpha bombardment of heavy nuclei</title>
<link>https://hdl.handle.net/1721.1/148428</link>
<description>Nuclear shell effects in fission : yields of krypton isotopes from the deuteron and alpha bombardment of heavy nuclei
Kaplan, Morton,
            1958-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1960; Vita.; Includes bibliographical references (leaves 150-156).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148428</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The transfer of spectral energy in non-linear dispersive systems.</title>
<link>https://hdl.handle.net/1721.1/148424</link>
<description>The transfer of spectral energy in non-linear dispersive systems.
Newell, Alan C.,
            1941-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1965; Bibliography: leaf 318.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148424</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of the division cycle of human cancer cells (HELA) using a new system for continuous synchronization</title>
<link>https://hdl.handle.net/1721.1/148422</link>
<description>Studies of the division cycle of human cancer cells (HELA) using a new system for continuous synchronization
Thilly, William George.
HeLa S3 cells have been induced to divide synchronously in suspension culture through more than ten generations. Cellular DNA and RNA have been determined at hourly intervals throughout the cell cycle defining the pattern of nucleic acid accumulation in HeLa cells. Cellular activity of the enzymes thymidine kinase, alkaline DNase, catalase, acid phosphatase and lactic dehydrogenase has been determined as a function of the division cycle. The method of synchronization developed here allows the continuous synchronous growth of mammalian cells in suspension culture. This system facilitates studies of cell cycle events since it provides a large amount of cellular material with minimal maintenance time required.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1971; Includes bibliographical references (pages 211-215, 216a-216g).
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148422</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guanidinium Compounds: Synthesis, Oxoanion Binding, and Cellular Delivery</title>
<link>https://hdl.handle.net/1721.1/148290</link>
<description>Guanidinium Compounds: Synthesis, Oxoanion Binding, and Cellular Delivery
Orgren Calabretta, Lindsey
The delivery of biological molecules into cells has been an issue of importance in both chemical biology and drug discovery. One method used to transport biologics into cells is the cell-penetrating peptide (CPP). This arginine-rich peptide forms strong interactions with the cell surface through bidentate guanidinium–oxoanion hydrogen bonds. Depending on conditions, this interaction guides the uptake of the CPP and its cargo through direct translocation or endocytosis.&#13;
&#13;
In Chapter 1, I summarize literature that is relevant to this thesis.&#13;
&#13;
In Chapter 2, I describe the synthesis and characterization of a small molecule, 1-guanidino-8-amino-2,7-diazacarbazole dichloride (GADAC), that displays high binding affinity to carboxylate, phosphate, and sulfate in water. GADAC is also fluorescent and displays a pH-mediated increase in quantum yield. The uptake and fluorescence of GADAC are observed in human melanoma cells via epifluorescence microscopy. Thus, the GADAC scaffold shows promise as a potential cell-uptake promoter and fluorescent reporter of biologics.&#13;
&#13;
In Chapters 3 and 4, I explore alternative amino acids for use in CPPs. I studied the ability of canavanine, a δ-oxa-analog of arginine, to partition into octanol in the presence of anionic lipids as a proxy for its cell-penetration ability. I observed that canavanine partitions less effectively than arginine, indicating it may not be an effective CPP alternative.&#13;
&#13;
In contrast, I synthesized and performed anion-mediated partitioning on Nα-methylated arginine derivatives and observed increased octanol uptake compared to unmethylated arginine. This increased uptake correlates with a decrease in topological polar surface area (TPSA) and indicates that an Nα-methylated CPP could be a cell-uptake promoter with increased efficacy.&#13;
&#13;
Lastly, in Chapter 4, I describe the synthesis of biaryl-bisguanidines. These guanidines are inspired by axially constrained organometallic catalyst ligands and have applications in oxoanion binding as dications and organometallic catalysts as dianions. I detail initial forays into determining the binding affinities of the guanidines to oxoanions through NMR titration experiments, which were hampered by changing ionic strength of the solutions.&#13;
&#13;
Appendices describe the synthesis of photocaged phosphinothioesters for the traceless Staudinger ligation and attempts to install a diazo moiety site-selectively at the N-terminus of a peptide or protein.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148290</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable methods for immune repertoire sequencing</title>
<link>https://hdl.handle.net/1721.1/148289</link>
<description>Scalable methods for immune repertoire sequencing
Holec, Patrick
Our immune system is a complex network of cells and proteins designed to identify and eliminate cancer and infection. At the center of this lies the interaction between the T cell receptor (TCR) and antigens presented through peptide-major histocompatibility complexes (pMHCs). Our ability to detect disease relies on the heterogeneity found on both sides of this equation, yet technologies to characterize either antigen presentation or TCR diversity at a repertoire scale are rarely economical or scalable.&#13;
&#13;
In this thesis, we discuss two separate advances in immune repertoire sequencing. In the first aim, we describe a high-throughput tool to identify antigenic peptides capable of being presented on class I MHCs. This platform utilizes yeast surface display as a means to separate binding peptides from non-binding peptides. We then use this technique to screen for antigenic peptides derived from multiple pathogen proteomes across eighteen MHC alleles, resulting in a catalog of new antigens to further characterize. In the second aim, we explore integrated experimental and computational approaches for sequencing TCR repertoires. To do this, we focus on algorithmic approaches to recover TCRα and TCRβ pairings in noisy datasets. By developing a new Bayesian framework, we are able to increase pairing efficiency and computation speed. Finally, we describe how this framework primed the development of a novel T cell sequencing strategy. This method, spatial TCR deconvolution, amplifies TCR transcripts from cells lysed in solution and uses barcoded beads as molecular beacons to record spatial proximity. Preliminary results suggest the barcoded transcripts produced have the potential to recover TCR pairings at scale.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148289</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches to Multi-Modal Data Integration and Translation in Single-Cell Biology</title>
<link>https://hdl.handle.net/1721.1/148287</link>
<description>Machine Learning Approaches to Multi-Modal Data Integration and Translation in Single-Cell Biology
Yang, Karren
Building a complete picture of cell state requires measuring different properties of the cells, such as their gene expression and morphology, and understanding 1) how these properties relate to each other, 2) how they change over time, and 3) how they are affected by different perturbations. It is often difficult to collect this information through experimentation alone. High-throughput single-cell assays such as single-cell RNA-sequencing are destructive to cells, making it difficult to observe the same cells at other time points or with different measurement tools.&#13;
&#13;
In this thesis, I develop new machine learning methodology to integrate and translate between single-cell data. In the first half, I develop methods based on generative modeling, representation learning and optimal transport to learn mappings between cells collected at different time points. In the second half, I develop methods based on generative modeling and representation learning to map between different data modalities, including both observational measurements and interventions. Overall, this body of work progresses towards the larger goal of complete cell models that predict cell state under different measurements, time points, and perturbations.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148287</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Versatile Inference Algorithms using the Bayes Tree for Robot Navigation</title>
<link>https://hdl.handle.net/1721.1/148286</link>
<description>Versatile Inference Algorithms using the Bayes Tree for Robot Navigation
Terán Espinoza, Antonio
Robotic satellite operations are an integral component of future space missions, such as on-orbit servicing, in-space robotic assembly, and orbital debris mitigation. A key requirement shared among such space missions is the capability to carry out robust and autonomous close proximity operations between the involved agents. Scenarios involving unknown and uncooperative target objects require online estimation capabilities to acquire target information, such as inertial properties for target motion prediction, in order to enable the subsequent mission phases.&#13;
&#13;
This inspection can be formulated as a simultaneous localization and mapping (SLAM) problem, where localizing the inspector with respect to the target requires acquiring a map against which to perform relative navigation. Current state-of-the-art space inspection approaches are either prohibitively expensive for online operation or compute partial solutions by segmenting the problem into incremental algorithms and batch formulations. The objective of this work is to alleviate the complexity issues that arise in the information fusion steps of the estimation process and that prevent a full solution of the problem from being computed incrementally and online.&#13;
&#13;
For such resource-constrained systems, state-of-the-art approaches offer focused inference solutions to deal with the computational bottlenecks of complex SLAM problems. While the majority of these methods center on exploiting the conditional independence structure of the problem’s model, they operate directly on graphs instead of their underlying tree decompositions. A Bayes tree, the tree decomposition associated with a factor graph, plainly exposes this sought-after conditional independence structure in the form of cliques and, furthermore, is the actual data structure used by the inference algorithms.&#13;
&#13;
This work makes use of the readily available conditional independence property to explore focused inference approaches directly on the Bayes tree for resource-constrained incremental smoothing and mapping. It elucidates the impact that graph-centered resource-constrained methodologies have at inference time, and presents a simplified approach that unifies many inference strategies (e.g., filtering, fixed-lag smoothing, incremental smoothing and mapping) under simple clique- and tree-based operations. A proof of concept is presented to highlight the advantages and the versatility obtained by reasoning on the tree instead of a graph when exploring the performance/quality tradespace, with the added benefit of doing so while avoiding the need to modify the problem’s original model. The key insights obtained from this analysis are then leveraged to develop novel factors that incorporate the estimation of the target object’s inertial properties into the SLAM formulation, obtaining a real-time and incremental solution to the space inspection problem.
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148286</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic planning for robotic assembly of building structures</title>
<link>https://hdl.handle.net/1721.1/148285</link>
<description>Algorithmic planning for robotic assembly of building structures
Huang, Yijiang
This thesis develops the algorithmic foundations for applying automated planning techniques to program robots to assemble discrete spatial structures. Benefitting from the robot’s capacity for moving, positioning, and holding elements precisely, robotic assembly aims to neutralize the cost and time impact of increasing demand for non-standard, customized designs using programmable robotics and automated processes. Programming robots to assemble structures requires us to reason about the construction sequence and the robotic motions. The critical planning challenge is satisfying both stiffness constraints that limit the deformation of the structure and geometric constraints that ensure the robot does not collide with the structure. Current planning approaches either require a significant amount of human intervention or do not scale to the size and geometric complexity demanded by construction. As we shift from mass production in manufacturing to mass customization in construction, we need versatile planning tools that can adapt to different structural typologies, off-load tedious human programming work, and involve human expertise when relevant.&#13;
&#13;
This thesis addresses this need by proposing a unified algorithmic framework to formulate and solve assembly planning problems. Our investigations are grounded on three broad classes of assembly planning problems: (1) spatial extrusion, (2) pick-and-place assembly, and (3) robotic assembly with multiple tool changes. For each class of assembly problems, we propose scalable, efficient planning algorithms and test them with simulated and real-world case studies. This thesis demonstrates how algorithmic planning can provide us with a much smoother transition between an assembly design and its final execution on the robot. Based on these sound foundations of the "forward-evaluation" of robotic constructability in various contexts, we finally attempt to "close the loop" - deriving a metric to measure constructability and using it to guide the performance-driven exploration of a discrete design catalog.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148285</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the magnetic moment of the [lambda]⁰ hyperon</title>
<link>https://hdl.handle.net/1721.1/148183</link>
<description>Measurement of the magnetic moment of the [lambda]⁰ hyperon
Li, Kelvin Kui-Yat.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1964; Vita. On the t.p., "[lambda]" is the original Greek letter.; Includes bibliographical references (leaves 107-109).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148183</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some effects of water and of oxygen on rates of reactions of food components</title>
<link>https://hdl.handle.net/1721.1/148178</link>
<description>Some effects of water and of oxygen on rates of reactions of food components
Karel, Marcus.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Food Technology, 1960; Vita.; Includes bibliographical references (leaves 223-224).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148178</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anion exchange study of the oxidation states of the halogens, including astatine</title>
<link>https://hdl.handle.net/1721.1/148177</link>
<description>Anion exchange study of the oxidation states of the halogens, including astatine
Kahn, B.
            (Bernd)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1960; Vita.; Includes bibliographical references (leaves 76-80).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148177</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete representations of random signals</title>
<link>https://hdl.handle.net/1721.1/148174</link>
<description>Discrete representations of random signals
Jordan, Kenneth L.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Vita.; Includes bibliographical references (leaves 130-132).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148174</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic and mechanistic studies of the osmium-catalyzed asymmetric dihydroxylation of olefins</title>
<link>https://hdl.handle.net/1721.1/148170</link>
<description>Synthetic and mechanistic studies of the osmium-catalyzed asymmetric dihydroxylation of olefins
Fleming, Paul Robert,
            1964-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1993; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148170</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the conjugate locus of a Riemannian manifold.</title>
<link>https://hdl.handle.net/1721.1/148167</link>
<description>On the conjugate locus of a Riemannian manifold.
Dos Santos, Nathan Moreira.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; Bibliography: leaf 44.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148167</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of stress and strain under drained loading conditions.</title>
<link>https://hdl.handle.net/1721.1/148163</link>
<description>Prediction of stress and strain under drained loading conditions.
Hagmann, Alfred Josef.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1971; Vita.; Bibliography: leaves 74-80.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148163</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The interregional flow of funds in the United States, 1955-1958</title>
<link>https://hdl.handle.net/1721.1/148160</link>
<description>The interregional flow of funds in the United States, 1955-1958
Kane, Edward J.
            (Edward James),
            1935-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1960; Vita.; Includes bibliographical references (leaves 203-207).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148160</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of thickening kaolin suspensions</title>
<link>https://hdl.handle.net/1721.1/147967</link>
<description>The mechanism of thickening kaolin suspensions
Fuerstenau, Maurice C.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1961; Vita.; Includes bibliographical references (leaves 91-93).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147967</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The servo-analysis of postural reflexes</title>
<link>https://hdl.handle.net/1721.1/147963</link>
<description>The servo-analysis of postural reflexes
Johnson, Avery Remington.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Vita.; Includes bibliographical references (leaves 91-95).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147963</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sublimation at low pressures</title>
<link>https://hdl.handle.net/1721.1/147961</link>
<description>Sublimation at low pressures
Ulmer, Johann Konrad,
            1519-1600.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Vita.; Includes bibliographical references (leaves 218-220).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147961</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pressure dependence of the Néel temperature in CoO and NiO, measured with a new dilatometer</title>
<link>https://hdl.handle.net/1721.1/147960</link>
<description>Pressure dependence of the Néel temperature in CoO and NiO, measured with a new dilatometer
Janusz, Theodore P.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1960; Vita.; Includes bibliographical references (leaves 56-57).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147960</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High oxidation state metal centers as sites for nitrogen fixation and reduction to ammonia</title>
<link>https://hdl.handle.net/1721.1/147959</link>
<description>High oxidation state metal centers as sites for nitrogen fixation and reduction to ammonia
Vale, Michael Gerard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1992; Includes bibliographical references (leaves 238-246).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147959</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the growth of semiconductor-based epitaxial and oxide films from low energy ion beams</title>
<link>https://hdl.handle.net/1721.1/147953</link>
<description>On the growth of semiconductor-based epitaxial and oxide films from low energy ion beams
Vancauwenberghe, Olivier P. J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1992; Vita.; Includes bibliographical references (leaves 205-212).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147953</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy transformation and vertical flux processes over the northern hemisphere</title>
<link>https://hdl.handle.net/1721.1/147950</link>
<description>Energy transformation and vertical flux processes over the northern hemisphere
Jensen, Clayton Everett,
            1920-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Meteorology, 1960; Vita. Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 267-269).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147950</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/147915</link>
<description>Three Essays in Financial Economics
Kazemi, Maziar M.
This dissertation contains three chapters concerned with questions in empirical and theoretical asset pricing.&#13;
&#13;
The first chapter, Intangible Investment, Displacement Risk, and the Value Discount, explores how the composition of assets in place and growth opportunities affect risk premia. Firms with growth opportunities in the form of intangible investments exposed to displacement risk have larger expected returns than firms with growth opportunities in the form of tangible investments. I develop a production-based asset pricing model showing that a firm's exposures to priced productivity and displacement risk depend on multiple firm characteristics. None of these characteristics alone can capture the firm's total exposure. Empirically, intangible investment positively predicts returns, and firms undertaking more intangible investment are more exposed to proxies for displacement risk. I develop six proxies to measure displacement risk shocks: three based on sorting firms into portfolios and three based on aggregate variables. A portfolio double-sorted on two key firm characteristics, the book-to-market ratio (including intangible capital) and the difference between the intangible and tangible investment rates, produces large excess returns that existing models cannot explain. This double-sort can explain the decline of the Value Premium.&#13;
&#13;
The second chapter, Identification of Factor Risk Premia, (joint with Peter Hansen) develops a novel statistical test of whether individual factor risk premia are identified from return data in multi-factor models. We give a necessary and sufficient condition for population identification of individual risk premia, which we call the kernel-orthogonality condition. This condition is weaker than the standard rank condition commonly assumed for linear factor models. Under misspecification, our condition ensures point identification of the risk premium with minimal pricing error. We show how to test this restriction directly in reduced-rank models. Finally, we apply our test methodology to assess identification of risk premia associated with consumption growth and intermediary leverage.&#13;
&#13;
In the third chapter, Do Skilled Managers Improve Welfare? (joint with Ali Kakhbod), we consider a simple equilibrium model of active fund managers and consumers. Both managers and consumers are rational, and manager skill is measured by value added and not simply alpha. Positive value-added managers do not necessarily increase consumer welfare. In our model, this is only true when the manager provides a hedge for the benchmark asset. The reason for this surprising result is that the negative correlation makes the manager and benchmark complementary “goods”. Managers need to be sufficiently skilled to offset the demand-induced price increases enough to improve welfare.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147915</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering Biological Mechanisms of Immunomodulatory Biomaterials for Encapsulated Cell Therapies</title>
<link>https://hdl.handle.net/1721.1/147912</link>
<description>Uncovering Biological Mechanisms of Immunomodulatory Biomaterials for Encapsulated Cell Therapies
Facklam, Amanda L.
Biomaterials are used in a variety of therapeutics including vaccines, engineered tissues, and cell therapies. Biomaterials enable a range of functionalities such as localized delivery, sustained release, and responsiveness. In the context of cell therapies, biomaterials can protect encapsulated cells from immune attack while allowing for nutrient and oxygen exchange. While this approach holds great potential, the immune response to biomaterials remains a major challenge to the field. Upon implantation of a material, the immune system will initiate the foreign body response, a cascade of inflammatory activity resulting in material fibrosis. For encapsulated cell therapies, biomaterial fibrosis can result in diminished cell functionality or even cell death. To address this challenge, it is critical to design biomaterials that can modulate the host immune response to mitigate fibrosis. In this thesis, we characterize the effect of biomaterial properties on immune responses after implantation. First, we describe how physical properties of alginate capsules can affect the success of encapsulated cell therapy. We find that capsules with lower permeability to IgG and higher strength enable longer encapsulated islet cures in diabetic mice. Furthermore, we show that differences in islet cure lengths were largely dependent on differential capsule immune responses. Next, we describe the effects of E9, an anti-fibrotic biomaterial coating, on macrophage behavior. We find that E9 downregulates CD86 surface expression when immobilized on a biomaterial surface. In addition, E9 downregulates the secretion of several cytokines including MCP-1 and VEGF and upregulates the secretion of IL-1β from macrophages. Next, we describe our work identifying the functional protein targets of E9 to gain further insight into its mechanism of action. We find that macrophage migration inhibitory factor and thioredoxin bind E9 and may have roles in its anti-fibrotic activity.
Through this work, we identify macrophage proteins and signaling pathways involved in the mechanism of action of E9, leading to an improved understanding of the foreign body response. Overall, by characterizing the effect of material properties on immune responses, we enable rational design of next-generation immunomodulatory biomaterials.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147912</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic biology tools for sensing and evolving microbes: CRISPR-diagnostics and transposon-mediated genome re-wiring</title>
<link>https://hdl.handle.net/1721.1/147906</link>
<description>Synthetic biology tools for sensing and evolving microbes: CRISPR-diagnostics and transposon-mediated genome re-wiring
English, Max Atti
Mobile genetic elements (MGEs) have played a fundamental role in the evolution of complex organisms, driving innovation through competition, collaboration and co-option. Transposons, in particular, are an ancient family of MGEs whose diverse functions have provided a rich source of DNA-binding and nuclease domains for their cellular hosts, and more recently for biological engineering technologies. In this thesis, I re-purpose transposons and their evolutionary descendants as synthetic biology tools for two distinct applications: new platforms for directed genome evolution, and CRISPR-based nucleic acid sensors. Prokaryotic CRISPR-Cas adaptive immune systems evolved in part from ancestral transposon domains, and the inherent programmability of their RNA-guided nucleases has underpinned their use as specific and sensitive in vitro diagnostics. Here, I present our efforts to expand the use of these CRISPR-based sensors to control the large-scale properties of smart biomaterial systems, and thereby enable programmed cargo release and the development of low-cost, paper-fluidic diagnostic devices. Transitioning to in vivo applications for MGE-derived tools, I describe the development of an engineered, autonomous transposon platform for continuous genome-wide mutagenesis and dynamic regulatory network re-wiring. I use this platform to study the impacts of transposon functionalization on the evolution of parallel E. coli populations towards diverse carbon source utilization and antibiotic resistance phenotypes. Through the implementation of barcode-based tracking and longitudinal next-generation sequencing, I then re-construct transposon lineages within the genomes of host cells and investigate the impact of environmental complexity and genetic contingency on host-transposon interactions. 
Moving forwards, we envision this directed genome evolution platform being used to discover and optimize strains for biopharmaceutical applications, and as a well-defined testbed to study the role of MGEs in the emergence and re-wiring of complex natural gene regulatory networks.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147906</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering plant-microbe communication for synthetic symbioses</title>
<link>https://hdl.handle.net/1721.1/147905</link>
<description>Engineering plant-microbe communication for synthetic symbioses
Toth, Tyler Daniel
Current agricultural practices will not be able to keep pace with a growing population in a changing climate.  A better understanding and increased reliance on the symbiotic relationships between plants and their resident microbiome will be crucial in addressing this challenge. In addition, these microbes provide a promising location for genetic engineering efforts and should be viewed as an integral part of an engineerable agricultural system. This holistic view means that desired functions can be performed in the host most suitable for the task, reducing toxicity due to resource limitations, and potentially easing regulatory concerns. Attaining this view, however, requires synthetic forms of communication between plants and microbes that are orthogonal to native signaling pathways, able to diffuse through the complex soil environment, readily produced from common cellular precursors, and easily sensed at low concentrations.&#13;
&#13;
In this thesis, I review the genetic parts and regulators available for use in plants and the types of natural and engineerable plant-microbe communication. I then highlight the potential use of plant-to-microbe communication to control gene expression in engineered soil bacteria – specifically to ensure that the energy-intensive expression of nitrogenase only occurs when microbes are near a plant. Finally, I describe the creation of an engineered form of microbe-to-plant communication. We engineered plants with the ability to sense and respond to bacterial quorum signals separate from native responses. In addition, we show that the p-coumarate homoserine lactone (pC-HSL) sensors can respond to pC-HSL biosynthesized by Pseudomonas putida grown in proximity to the plants. To the best of our knowledge, this is the first demonstration of engineered microbe-to-plant communication using a small molecule. We were also able to place the biosynthesis of pC-HSL under the control of various sensors and use an engineered consortium to perform logical operations on multiple environmental inputs. This engineered form of synthetic symbiosis lays the foundation for using microbe-to-plant communication to perform tasks such as monitoring soil nutrients, sensing pathogens, or detecting environmental contaminants.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147905</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Terrible timing: The causes and consequences of problematic work schedules</title>
<link>https://hdl.handle.net/1721.1/147903</link>
<description>Terrible timing: The causes and consequences of problematic work schedules
Kowalski, Alexander Marion
In an attempt to mitigate uncertainty stemming from volatile customer demand while keeping labor costs low, organizations in a host of industries frequently adjust when work occurs without taking employee input into account. The practices they use to do so often produce problematic schedules, or unstable and unpredictable hours that workers feel are out of their control and that negatively impact their lives, on and off the job. Drawing on a mix of quantitative and qualitative data from a multi-year study of a large U.S. retailer’s supply chain division, this dissertation shows that the consequences of problematic schedules are real but not inevitable. In Chapter 1, I use detailed time-keeping records from 20,000+ hourly workers across multiple business functions to construct a multidimensional measure of schedule quality. I find variation in workers’ exposure to problematic schedules, even after controlling for job, workplace, and worker characteristics, which I attribute to variation in the scheduling practices used by frontline managers. A crucial facet of job quality, schedules thus stratify workers in the same organization, and this is due not only to the work they perform but also to managerial discretion. In Chapter 2, I use interviews and fieldwork to document how frontline managers in the retailer’s e-commerce fulfillment centers (FCs) go about the complex task of scheduling. I find that despite pressures for conformity, management teams in each FC use distinct bundles of scheduling practices, which each have predictable consequences for FC performance. The bundles emanate from and reinforce the local organizational cultures in which managers are embedded, making some FCs better places to work than others. In Chapter 3, I combine my measure of schedule quality with workers’ employment histories, finding that problematic schedules are associated with substantial increases in job exit. 
As a whole, I show that scheduling practices that at first appear cost-effective actually raise turnover and reduce performance. At the same time, managers are not totally constrained by industry, technology, or company policy in how they schedule work hours—even in highly uncertain environments, they can implement scheduling practices that are better for workers while remaining competitive.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147903</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping and modeling macrophages in tuberculosis</title>
<link>https://hdl.handle.net/1721.1/147899</link>
<description>Mapping and modeling macrophages in tuberculosis
Peters, Joshua M.
Immune cells polarize to diverse transcriptional and functional states to cause or resolve disease. Understanding this cellular diversity is critical to identifying new clinical interventions. Macrophages in tuberculosis granulomas are an archetype of these phenomena. Pinpointing the cellular states and features required for effective responses to vaccination or Mycobacterium tuberculosis infection remains challenging. Recent advances have dramatically increased the scale and depth of profiling transcriptional states and their pathological implications. &#13;
&#13;
In this thesis, we describe several studies that collectively advance our definition of macrophages and their dynamics in tuberculosis and lung diseases more broadly. We first characterize the single-cell transcriptomic profiles of macrophages within tuberculosis granulomas from non-human primates. We introduce and apply a framework for identifying possible signaling cues contributing to these states using ex vivo-generated macrophage models. To extend these findings, we identify conserved gene programs underpinning macrophage states across other lung diseases in humans. Similarly, we then describe macrophage and other immune cell states after Bacillus Calmette–Guérin (BCG) vaccination in non-human primates and identify molecular features associated with protective responses. Lastly, we conclude by exploring how profiling and modeling ligand and genetic perturbations in macrophages can provide increased cellular and organismal understanding of potential interventions. Overall, these findings demonstrate experimental and analytical frameworks for analyzing immune cell states across biological models and inform our understanding of macrophage states associated with tuberculosis disease and vaccination.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147899</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Throughput Screening for Small Molecule Interactions with Nucleic Acid Binding Proteins</title>
<link>https://hdl.handle.net/1721.1/147898</link>
<description>High Throughput Screening for Small Molecule Interactions with Nucleic Acid Binding Proteins
Wilson, Robert M.
Nucleic acid binding proteins are critical nodes in almost all cell signaling networks. Cell network architecture depends on highly regulated nucleic acid binding to determine cell identity, developmental trajectory, environmental response, and homeostasis. These proteins govern all aspects of cancer, from oncogenesis to metastasis, as well as treatment resistance and tumor recurrence.  Our ability to manipulate these entities in cells, and therefore our ability to understand their function precisely or to intervene in pathological processes, is extremely limited without genetic manipulation.&#13;
&#13;
To address this challenge, small molecule binding screens were implemented against transcription factor and RNA-binding protein targets. The MYC oncoprotein was successfully inhibited by an indirect strategy through stabilization of its obligate interacting partner MAX in an inactive form. A small molecule that binds to MAX shifts the equilibrium of the MYC/MAX system to favor transcriptionally repressive MAX homodimers, effectively reducing MYC transcriptional activity. Assays against the MYC target LIN28B were developed for further indirect inhibition of MYC; however, an unexpected biological interaction with cellular reporters prevents further biological characterization. Instead, biophysical secondary assays were implemented and expanded to include a panel of RNA-Binding Proteins implicated in cancer and neurodegeneration. Validation of assay positives demonstrates not only several starting points for chemical probe development, but also the fundamental chemical tractability of protein domains involved in RNA binding. Together, this work generates new chemical matter for inhibiting previously intractable targets and identifies a strategy for systematic discovery of inhibitors for protein-RNA interfaces.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147898</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Role of RNA-Binding Proteins in Tumor Response to DNA Damage-Inducing Chemotherapy</title>
<link>https://hdl.handle.net/1721.1/147897</link>
<description>Investigating the Role of RNA-Binding Proteins in Tumor Response to DNA Damage-Inducing Chemotherapy
Bird, Molly A.
Despite immense progress in the biological understanding and treatment of cancer over the past several decades, cancer remains a leading cause of mortality worldwide. Even with the emergence of molecularly-targeted therapies and immunotherapy, cytotoxic agents that induce DNA damage remain first-line drugs in the clinic. Unfortunately, the majority of patients will develop resistance to DNA-damaging chemotherapy and relapse. Recent studies indicate that RNA-binding proteins (RBPs) appear to play a particularly important role in cancer, and may contribute to chemoresistance, but the field is still at an early stage and nothing has yet progressed towards translational utility. This is at least partially due to a lack of systematic discovery tools and tractable human-relevant mouse models for in vivo screening. To address these challenges, we first created a computational platform, Transite, that systematically infers RBPs influencing gene expression through changes in RNA stability. As a proof of principle, we applied Transite to RNA expression data from cancer patients with naïve or chemoresistant tumors, identified several RBP regulators of the DNA damage response, and identified hnRNPC as a new modulator of chemotherapeutic resistance. Second, to enable the identification of new RBPs that affect chemosensitivity in high-grade serous ovarian carcinoma (HGSOC) we performed a targeted CRISPRi screen in vitro against 579 RBPs to identify those that cause sensitivity or resistance to cisplatin, oxaliplatin, paclitaxel, or olaparib. The screen implicated several known and novel RBPs in affecting sensitivity to chemotherapy in HGSOC, including the RNA-modifying enzyme ADAR1, which we subsequently validated. As an initial step towards in vivo testing and validation of the CRISPRi screen, we have devised methods to improve the engraftment and aggressiveness of an orthotopic HGSOC CRISPRi-functionalized xenograft mouse model. 
This involved serial in vivo transplantation studies to generate tumor and ascites-producing lines, several of which became more aggressive with subsequent passaging in mice. With further optimization, these lines can be used to enable in vivo screening of novel RBP targets that emerged from our in vitro CRISPRi screen. Taken together, the efforts described in this thesis serve as a critical resource to enable the identification of RBP-targeted combination chemotherapy treatments that could enhance therapeutic responses to anti-cancer therapy.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147897</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Syncretic Modernism: Articulations of Painting in Turkey (1910s-1940s)</title>
<link>https://hdl.handle.net/1721.1/147896</link>
<description>A Syncretic Modernism: Articulations of Painting in Turkey (1910s-1940s)
Demir, Duygu
This dissertation is a critical history of modern art in Turkey, focusing on two generations of artists that came to articulate the parameters of painting in the late Ottoman Empire and the Turkish early republic. I follow the trajectories of a selection of artists who became leading figures in the Turkish art world over the course of their careers: These painters contributed to the artistic discourse not only through their work, but also through their positions as teachers, gatekeepers, and tastemakers. While their artistic trajectories set these two generations apart, they had just as much in common: they all rose up against what they defined as the academicism of their teachers, formed generational alliances, opened exhibitions, served as missionaries of the new nation, attempted to find perpetual principles for a Turkish art, struggled to find their artistic identities, and got labeled as European imitators. Pressed between divergent expectations from the outside as well as the inside, they oscillated between the search for the universal and striving for the local. &#13;
&#13;
Oil painting had been a marker of modernity and social emancipation already in the Ottoman period; it was a vessel in attaining a level of civilization contemporaneous with the rest of the world (read Europe), a crucial tool in the ongoing pursuit of technological modernity, and also a mode of self-expression. In the early years of the republic, the Turkish intelligentsia strove for a cultural synthesis that would take its forms from the West, whereas its content would be determined by local sources. Yet, painting is neither a technology that can be borrowed, nor simply a manifestation of cultural essence but a repository for both, and much more; it resists being delineated into strict categories of form and content. The task of this project is to chart the different ways in which the late-Ottoman and Turkish painters who embraced the medium of painting attempted to position themselves within this conundrum, oscillating between emulation and invention. Charting how this predicament manifested itself in the work of these artists and the discourse it generated is also revelatory of the paradox of Turkish modernization itself, with implications for our understanding of modernisms around the world and for emerging theories of the tensions between modernist art and the modernization of nation-states. Attending to the specific historical, political and aesthetic realms of this thirty-year period, this dissertation analyzes how the two generations of painters negotiated these challenges. In this dissertation, I read the paintings, exhibition histories, institutional shifts, artist testaments, articles and reviews that shaped Turkish painting over the period in question as articulations of a complex system, presenting a counter-history of one modernism among many. This, I argue, strove for synthesis but ultimately remained syncretic—a strategic amalgam of Turkey’s highly polyglot reality that refused to be smoothed into a synthetic whole.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147896</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods, Models, and Machine Learning Approaches for Understanding Pathogen-Specific Humoral Immunity</title>
<link>https://hdl.handle.net/1721.1/147895</link>
<description>Methods, Models, and Machine Learning Approaches for Understanding Pathogen-Specific Humoral Immunity
Zohar, Tomer
The humoral immune response comprises vast libraries of polyclonal antibodies capable of recognizing a myriad of targets and directing a spectrum of innate immune functions. The complex heterogeneity in antibody profiles across both populations and diseases makes defining mechanisms of protection difficult. Understanding these mechanisms and the factors that influence them is essential to defining immunity and helps inform the design of vaccines and therapeutics. Thus, in this thesis, I describe five studies that present the development of experimental and computational methods, and machine learning approaches for investigating the mechanisms, dynamics, and determinants of pathogen-specific humoral immunity.&#13;
&#13;
The first study introduces an assay for probing antigen-specific, antibody-mediated primary monocyte phagocytosis that is capable of capturing subsequent downstream functions. The second study describes a machine learning approach for defining the correlates of upper and lower respiratory protection against RSV and methods for evaluating vaccine designs. The third study uses machine learning methods to uncover signatures of humoral protection against SARS-CoV-2. The fourth study presents a method for longitudinally modelling humoral immunity that was used to investigate the temporal dynamics of antibody features across individuals with varying COVID-19 severity. Finally, the last study describes a genome-wide association screen of pathogen-specific polyclonal antibody characteristics and functions that was then validated with transcriptomics data. Ultimately, the methods described in this thesis present new approaches for investigating underlying phenomena related to pathogen-specific humoral immunity.&#13;
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147895</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Intratumoral Cytokine Therapies for Cancer</title>
<link>https://hdl.handle.net/1721.1/147894</link>
<description>Engineering Intratumoral Cytokine Therapies for Cancer
Lutz, Emi A.
Immunotherapies enable effective, long-lasting anti-tumor immunity for some cancer patients. Cytokines are key signaling proteins that help activate and sustain critical immune cells during this process. Unfortunately, traditional cytokine therapies suffer from low efficacy and high systemic toxicity. As one strategy to improve therapeutic index, intratumorally administered and retained cytokines have been demonstrated to improve both safety and efficacy. However, further research on intratumoral cytokine therapies is needed to uncover optimal design strategies and considerations when eliciting strong, localized cytokine exposure. Towards this goal, we first test whether intratumoral administration and retention is an effective strategy for type I interferons (IFNs). Significantly enhancing the tumor retention of IFNα and IFNβ, by anchoring these IFNs to co-injected aluminum-hydroxide (alum) particles, greatly improved their tolerability and efficacy. The improved efficacy of alum-anchored IFNs could be attributed to sustained pleiotropic effects on tumor cells, immune cells, and non-hematopoietic cells. Alum-anchored IFN therapies were curative upon combination with either anti-PD-1 or interleukin-2 (IL-2). However, only the anti-PD-1 combination led to protection against tumor rechallenge, demonstrating that overstimulation of cytokine signaling can dampen the memory response. Second, we investigate design criteria for intratumorally administered IL-2 fused to tumor-specific nanobodies. Using yeast surface display, we develop IL-2 fusions with a range of affinities to the tumor-specific EIIIB domain of fibronectin. Such IL-2 fusions enabled strong anti-tumor efficacy, provided both intratumoral administration and sufficient affinity to EIIIB. Third, we explore intratumoral therapies that activate the cGAS-STING pathway, which leads to type I IFN production. Specifically, we design DNA-based agonists of cGAS that delay tumor growth in mice.
Together, this thesis furthers our understanding of how to effectively elicit localized cytokine responses at the tumor for cancer immunotherapy.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147894</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopy of diatomic rare-earth oxides : YbO and EuO</title>
<link>https://hdl.handle.net/1721.1/147740</link>
<description>Spectroscopy of diatomic rare-earth oxides : YbO and EuO
McDonald, Steve, 1950-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1985; Includes bibliographies.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147740</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rotor wake behavior in a transonic compressor stage and its effect on the loading and performance of the stator</title>
<link>https://hdl.handle.net/1721.1/147737</link>
<description>Rotor wake behavior in a transonic compressor stage and its effect on the loading and performance of the stator
Durali, Mohammad.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147737</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic programming algorithms for specially structured sequencing and routing problems in transportation.</title>
<link>https://hdl.handle.net/1721.1/147736</link>
<description>Dynamic programming algorithms for specially structured sequencing and routing problems in transportation.
Psaraftis, Harilaos Nicholas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Ocean Engineering, 1979; Bibliography: p. 249-252.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147736</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication &amp; coordination between components of the ClpAP degradation machine</title>
<link>https://hdl.handle.net/1721.1/147731</link>
<description>Communication &amp; coordination between components of the ClpAP degradation machine
Zuromski, Kristin L.
AAA+ (ATPases associated with diverse cellular activities) proteases are present in all kingdoms of life. These molecular machines perform energy-dependent regulated proteolysis. The bacterial enzyme ClpA is a double-ring hexameric AAA+ unfoldase/translocase that functions with the tetradecameric ClpP peptidase to degrade proteins that are damaged, unneeded, or require degradation for regulation. ClpA has two distinct, stacked rings, termed D1 and D2, constructed from hexamerization of subunits each containing two AAA+ modules. ClpA’s twelve AAA+ modules hydrolyze ATP and participate in the overall degradation process, but how the modules in D1 and D2 work together to power ATP-dependent degradation is not well understood. Further, the mechanisms governing ClpA’s dynamic interactions with its partner peptidase, ClpP, and with its adaptor protein, ClpS, remain unclear. Here, I present experiments that interrogate the coordination between components of ClpAP(S) to elucidate how these multiple proteins work together to form an efficient, regulated protease. &#13;
&#13;
In Chapter I, I provide an overview of AAA+ protein mechanism, with an emphasis on specific features of ClpA(PS) to lay a foundation for the following chapters. I introduce a ClpA subunit crosslinking strategy in Chapter II and use this method to examine how ATP hydrolysis is coordinated between (i) modules in each of the D1 and D2 rings, and (ii) between the two rings. In Chapter III, I probe the contributions of the conserved structural loops in the D1 and D2 rings that line ClpA’s central channel during ClpAP degradation. I also interrogate the substrate delivery mechanism by the ClpS adaptor in this chapter, revealing distinct roles for pore loops in D1 and D2 during this handoff. I describe a ClpA-ClpP crosslinking experiment in Chapter IV to test a structural hypothesis that ClpA must rotate on ClpP during substrate translocation. Finally, in Chapter V, I provide a broader context for how the results described in Chapters II, III, and IV improve the field’s understanding of the division of labor and coordination of mechanical work in the ClpAPS degradation machine and suggest future areas of study to further elucidate mechanistic aspects of ClpA and other AAA+ proteins.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147731</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalysis, Synthesis, and Materials in Support of Chemical Understanding and Global Health</title>
<link>https://hdl.handle.net/1721.1/147730</link>
<description>Catalysis, Synthesis, and Materials in Support of Chemical Understanding and Global Health
McGeough, Catherine P.
Chapter 1: Skipped polyenes featuring high (E)-selectivity and varying methyl substitution patterns are synthesized using a nickel-catalyzed cross-coupling reaction between allyl trifluoroacetates and vinyl bromides. The utility of this cross-electrophile coupling is showcased in part by the synthesis of the RST fragment of the marine ladder polyether, maitotoxin. Construction of this fragment is particularly challenging due to the alternating methyl substitution pattern.&#13;
&#13;
Chapter 2.1: A two-step route to MK-4482 (EIDD-2801, 1) was developed consisting of an esterification and hydroxamination of cytidine. The selective acylation and direct amination eliminate the need for protecting and activating groups and proceed in overall yield of 75%, a significant advancement over the reported yield of 17%. The step count is reduced from five transformations to two, and expensive uridine is replaced with the more available cytidine.&#13;
&#13;
Chapter 2.2: Molnupiravir (MK-4482, EIDD-2801) is a promising orally bioavailable drug candidate for treatment of COVID-19. Herein we describe a supply-centered and chromatography- free synthesis of molnupiravir from cytidine, consisting of two steps: a selective enzymatic acylation followed by transamination to yield the final drug product. Both steps have been successfully performed on decagram scale: the first step at 200 g, and the second step at 80 g. Overall, molnupiravir has been obtained in a 41% overall isolated yield compared to a maximum 17% isolated yield in the patented route. This route provides many advantages to the initial route described in the patent literature and would decrease the cost of this pharmaceutical should it prove safe and efficacious in ongoing clinical trials.&#13;
&#13;
Chapter 3: Tea is the second most consumed beverage worldwide (after water); however, approximately 90% of the tea leaf is not soluble in water and is therefore disposed of. Extracted cellulose from tea leaf waste fiber (TLWF) could be utilized around the globe as a locally sourced absorbent in the manufacturing of menstrual pads. In this chapter, an FDA-friendly procedure for cellulose extraction from TLWF is developed, and attempts to modify the texture of this material are described.&#13;
  &#13;
Chapter 4: The approach to reproductive health and safety in academic laboratories requires increased focus and a shift in paradigm. Our analysis of the current guidance from more than 100 academic institutions’ Chemical Hygiene Plans (CHPs) indicates that the burden to implement laboratory reproductive health and safety practices is often placed on those already pregnant or planning conception. We also found inconsistencies in the classification of potential reproductive toxins by resources generally considered to be authoritative, adding further confusion. In the interest of human health and safe laboratory practice, we suggest straightforward changes that institutions and individual laboratories can make to address these present deficiencies: Provide consistent and clear information to laboratory researchers about reproductive health and normalize the discussion of reproductive health among all researchers. Doing so will promote safer and more inclusive laboratory environments.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147730</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Copper(I) Hydride-Catalyzed Transformations of π-Electrophiles</title>
<link>https://hdl.handle.net/1721.1/147725</link>
<description>Copper(I) Hydride-Catalyzed Transformations of π-Electrophiles
Tsai, Erica Y.
The studies presented in this dissertation concern the development of new methods for copper-catalyzed transformations of π-electrophiles. The first part focuses on the development of broadly applicable protocols for the syntheses of enantioenriched homoallylic alcohols (Chapter 2) and homopropargylic amines (Chapter 3). The second part describes a method for accessing synthetically relevant β,γ-unsaturated acceptors (Chapter 4).&#13;
—&#13;
PART I&#13;
Chapter 2: Regio- and Enantioselective CuH-Catalyzed Allylation of Ketones Using Terminal Allenes&#13;
An efficient method for the copper-catalyzed allylation of ketones is described using widely available terminal allenes as allylmetal surrogates. Homoallylic alcohols bearing a wide range of functional groups are obtained in high yield and with good regio-, diastereo-, and enantioselectivity. Mechanistic investigations implicate the in situ formation of isomeric copper(I) allyl complexes which undergo addition to ketones with exclusive branched regioselectivity to afford the major isomer of the product. A stereochemical model is provided to explain the high diastereo- and enantioselectivity of this process.&#13;
&#13;
Chapter 3: Asymmetric Synthesis of Homopropargylic Amines by CuH-Catalyzed Coupling of Imines and Enynes&#13;
A novel method for the synthesis of chiral homopropargylic amines is detailed. Aromatic and aliphatic N-phosphinoyl aldimines possessing a variety of functional groups are coupled with both terminal and internal enynes under mild conditions. The resulting homopropargylic amines are produced in high yields, with moderate diastereoselectivities and generally high enantioselectivities.&#13;
—&#13;
PART II&#13;
Chapter 4: Regio- and Stereoselective Synthesis of β,γ-Unsaturated Compounds by CuH-Catalyzed 1,6-Semireduction&#13;
A practical and highly selective preparation of β,γ-unsaturated compounds is reported. This method relies on the CuH-catalyzed 1,6-semireduction of easily accessible α,β,γ,δ-doubly unsaturated acceptors. With the commercially available wide bite-angle ligand Xantphos, formation of undesired isomers and overreduction products is avoided in the majority of cases. The scope of accessible products includes β,γ-unsaturated esters, amides, sulfones, and nitriles with a variety of functional groups. Due to the high volatility of many products, careful purification is required to ensure high yields.&#13;
—&#13;
Thesis Supervisor: Stephen L. Buchwald&#13;
Title: Camille Dreyfus Professor of Chemistry
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147725</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low Temperature Solder Demountable Joints for Non-Insulated, High Temperature Superconducting Fusion Magnets</title>
<link>https://hdl.handle.net/1721.1/147592</link>
<description>Low Temperature Solder Demountable Joints for Non-Insulated, High Temperature Superconducting Fusion Magnets
Mouratidis, Theodore
Every one to two years, an operating tokamak fusion reactor requires maintenance and replacement of its internal components due to neutron damage. Previous solutions involve sectioning, assembling, and disassembling these components through the ports between the toroidal field coils. To improve reactor reliability and simplify access, it is highly desirable that the toroidal field coils be demountable; this would also reduce reactor downtime for the tokamak and related magnetic fusion concepts. Compared to low temperature superconductors, high temperature superconductors (HTS) have large cryogenic stability margins. In addition, the high thermal stability and improved passive quench protection of the non-insulated coil are advantageous, as is the low voltage operation, which eliminates the need for the high voltage electrical insulation required in insulated cable coils. By using HTS in a non-insulated coil, a highly adaptable geometry presents itself for the inclusion of hundreds of demountable joints between coil turns. The joints have stringent requirements: they must be low resistance to minimize power dissipation and to reduce the non-insulated coil radial current to &lt; 0.5% of the operating current in the constrained geometry, thus ensuring that, when accounting for joint variation from coil to coil, the toroidal field ripple limit of 0.5% is not exceeded. While pure indium compression joints have demonstrated the low resistances required (∼ nΩ), they are not easily demountable in this environment, and deformation of the joint region is possible. Because Pb37Sn63, with a melting temperature of 183°C, is used to solder the HTS tapes into the base metal plates of the coil, a lower temperature solder is proposed for the joints in order to maintain the integrity of the principal tape matrix in the coil while retaining the benefits of a solder-based joint for obtaining low resistance.
This thesis addresses whether the electrical resistance requirements of these joints can be met, and the associated challenges.&#13;
&#13;
A novel vacuum pressure impregnation (VPI) method was developed to couple superconducting tape stacks using three low temperature solder candidates: In52Sn48 (MP = 118°C), In100 (MP = 156.6°C), and Ga100 (MP = 29.8°C). To predict joint resistances, a finite element model is built and validated against experimental ideal joints; a 10-tape ideal joint shows a 2% discrepancy, and a 40-tape ideal joint a 32% discrepancy. The VPI solder joints were experimentally tested at 77 K; a distributed voltage tap system was used to infer the effective resistances to exit a superconducting stack, cross the solder layer, and enter the second superconducting stack. The normalized total joint experimental resistivities (0.54–0.68 µΩ·cm²) show good agreement with the model. Nonlinearity in joint I-V traces is modelled and explained as a result of preferential HTS tape filling; the low current slope indicates the geometric resistance of interest in this thesis. The solder joint layer is then analyzed microstructurally; as a demountable joint is heat cycled, there is continual intermetallic growth between the liquid solder and the solid substrate. For liquid In52Sn48 solder and copper, this growth is quantified across the three parameters of time, temperature, and solder joint thickness. This provides a critical joint lifetime and an appropriate starting joint thickness of ∼100 µm. Thermal cycling of soldered superconducting tape stacks is then performed, simulating the heat applied to the tapes in the joint vicinity during demounting and mounting; this results in diffusion of the external tape copper layer into the bulk solder and oxygen out-diffusion from the YBCO layer.
Through investigation of Ic, n, and R at 170°C (the operating temperature for In100), it is found that nickel-electroplated tapes show higher levels of degradation than unplated tapes; in the latter, cases of simultaneously little to no Ic degradation and strong n degradation are observed, indicating decoupling of the two parameters. Finally, using the finite element model and the experimental results, joint resistances for a possible array of ARC fusion reactor scenarios are predicted; for operating conditions of B = 2 T, T = 10 K, the joint resistivities are 40 nΩ·cm² for In100 and 124 nΩ·cm² for In52Sn48. Using a realistic turn-to-turn joint area of 100 cm by 2 cm, the electrical joint requirements of low power dissipation and a 0.5% radial current limit can be satisfied in the constrained geometry with low stresses (&lt;150 MPa) in the joint regions. This thesis validates the use of a vacuum pressure impregnation process to couple superconducting stacks with low temperature solders, showing the viability of achieving the joint resistances required at the operating conditions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147592</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Provable Instantiations of Correlation Intractability and the Fiat-Shamir Heuristic</title>
<link>https://hdl.handle.net/1721.1/147579</link>
<description>Provable Instantiations of Correlation Intractability and the Fiat-Shamir Heuristic
Lombardi, Alex
Interactive proof systems, introduced in a seminal work of Goldwasser, Micali, and Rackoff, have become one of the most powerful and flexible tools in cryptography and computer science at large. They have directly led to some of the biggest breakthroughs in theoretical cryptography, complexity, and quantum computation. They are also at the center of a revolution in practical cryptography, particularly in the context of blockchains and cryptocurrencies. &#13;
&#13;
However, despite their importance, our understanding of cryptographic proofs is surprisingly limited. The central problem studied in this thesis is the following question: &#13;
&#13;
Can we remove interaction from interactive proofs?&#13;
&#13;
Even though this question sounds almost paradoxical, Fiat and Shamir (1986) proposed (and Blum extended) a heuristic methodology for removing interaction from a huge class of interactive proofs. This methodology is ubiquitous and essential for practical applications, but for over thirty years, we had no proof of its security, even for a single non-trivial case.&#13;
&#13;
The main goal of this thesis is to give a solid theoretical foundation for the Fiat-Shamir transformation by developing general-purpose tools, techniques, and abstractions for characterizing its security. We propose a two-step methodology for obtaining provable instantiations that relies on the notion of correlation intractability, a hash function security property requiring that it be computationally infeasible to find pre-specified input-output correlations in the hash function.&#13;
&#13;
Using this methodology, we obtain various new results in cryptography, touching on areas such as non-interactive zero knowledge, delegation of computation, the insecurity of parallel repetition, and the cryptographic hardness of computing Nash Equilibria in game theory.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147579</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning Tools for Next-Generation Connectomics</title>
<link>https://hdl.handle.net/1721.1/147578</link>
<description>Deep Learning Tools for Next-Generation Connectomics
Mi, Lu
In recent years, the field of connectomics has witnessed exciting developments. Efficient algorithms are being developed to reconstruct nanoscale maps of neural tissue from large-scale images, giving us a better understanding of how neural tissue computes. &#13;
However, our ability to build powerful tools for the next generation of connectomics depends on navigating an inherent accuracy vs. speed vs. scalability trade-off.&#13;
&#13;
This thesis addresses this trade-off by introducing four deep learning tools and techniques applied to the acquisition, reconstruction, and modeling stages of connectomics pipelines. First, we propose a way to speed up image acquisition using a learning-guided electron microscope (EM). Second, we propose a faster and more scalable 3D reconstruction algorithm, cross-classification clustering (3C), for large-scale connectomics datasets. Third, we introduce a cross-modality image translation technique mapping fast X-ray images to EM images with enhanced segmentation quality. Finally, we introduce a technique to bridge the gap between structural and functional data with connectome-constrained latent variable models (CC-LVMs) of the unobserved voltage dynamics of the whole-brain nervous system. We hope these advanced applications of deep learning techniques will help address the performance and accuracy trade-offs of next-generation connectomics studies.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147578</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modality and Time in Logical Context</title>
<link>https://hdl.handle.net/1721.1/147577</link>
<description>Modality and Time in Logical Context
Staniszewski, Frank
This thesis argues for a theory of neg-raising that is unified with existing theories of free choice (Fox, 2007; Bar-Lev, 2018) and negative polarity items (Crnič, 2014b, 2019b), as motivated by a case study of polarity-sensitive 'until'. I argue that this unification of neg-raising and polarity-sensitivity falls out as a natural consequence of the systems developed in this earlier work, combined with the novel assumption that the neg-raising predicates 'want', 'should', and 'supposed to' express underlying existential quantificational force, which is disguised on the surface as the result of obligatory strengthening in positive sentences due to the lack of a dual in the lexicon.&#13;
&#13;
The primary empirical motivation for this view comes from a novel test that is able to diagnose the underlying existential meaning of the modal items. The test requires a negative presupposition trigger, like 'no longer', which creates a downward-entailing environment in which strengthening need not apply, but which also triggers an upward-entailing presupposition, in which the basic existential meaning can be revealed. I argue that further results of this test suggest a typology of neg-raising modals that is predicted by the analysis. I also present a study (joint work with Rachel Stacy and Athulya Aravind), in which we examine predictions of the analysis in the domain of language acquisition. We argue that the proposal receives support from experimental evidence suggesting that a population of children go through a stage of development in which all modals undergo strengthening in a similar manner to the neg-raising modals examined in this thesis.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147577</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics in DNA-Chromophore Assemblies</title>
<link>https://hdl.handle.net/1721.1/147576</link>
<description>Exciton Dynamics in DNA-Chromophore Assemblies
Hart, Stephanie M.
Excitonic systems have the potential to provide new materials for light harvesting, computing, and imaging. Such applications require control over the spatiotemporal evolution of excited states. Natural light harvesting systems achieve the required control through precise molecular placement; however, it has been challenging to emulate multi-chromophore organization in synthetic systems. Towards mimicking these natural architectures, we use DNA-chromophore assemblies to generate excitonic circuits with nanometer-scale precision over chromophore placement and orientation. First, in a bioinspired light-harvesting system, we explore the role of molecular parameters in directing electronic energy. Using time-resolved spectroscopy, we show how chromophore placement within a tunable DNA scaffold can be used to independently control both electronic coupling and system-bath coupling, and furthermore identify their roles in mediating exciton transport. We then extend this framework to identify how scaffold configurations are capable of steering formation of symmetry-breaking charge transfer states, paving the way towards the design of DNA machinery with dual light-harvesting and charge separation capabilities.&#13;
&#13;
Second, we examine applications of this platform for imaging. Using vibrational wavepackets, we detect a previously unknown dark state in a cyanine fluorophore and identify the associated structural mode coupling likely responsible for its formation. We then incorporate this fluorophore into a DNA double crossover tile for easy photophysical tunability by creating a strongly coupled dimer. Through a sequence-dependent mechanical force induced by the surrounding DNA, we demonstrate that distortions can be used to construct a toolkit of fluorophores visible at the single-molecule level, suggesting potential use as an imaging probe. Lastly, we use higher order DNA origami to control exciton dynamics in singlet fission sensitizers. Collectively, this systematic investigation and control over excitons and their dynamics with DNA structures offers design principles for solar conversion and sensing applications at the nanoscale.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147576</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visible and Infrared Light Detection Using 2D Materials</title>
<link>https://hdl.handle.net/1721.1/147575</link>
<description>Visible and Infrared Light Detection Using 2D Materials
McVay, Elaine
Two-dimensional (2D) materials such as graphene and transition metal dichalcogenides (TMDs) open up the potential for pushing electronic devices, from transistors to photovoltaics, to the atomic limit. In particular, many transition metal dichalcogenides such as tungsten diselenide (WSe₂) have properties such as a large absorption coefficient, ambipolar nature, and a bandgap near ~1.34 eV that make them suitable for solar cell applications. With respect to longer wavelength detection, the application of 2D crystals and other ultrathin materials such as ALD-grown Hafnium Zirconium Oxide (Hf₀.₅Zr₀.₅O₂) opens up the possibility of designing fast (&lt;1 ms thermal time constant) thermal detectors while still maintaining high specific detectivity. This thesis focuses on developing novel light detectors and harvesters based on 2D materials, including multilayer Tungsten Diselenide (WSe₂) solar cells, ultrathin bolometers, and Bernal-stacked bilayer graphene photoconductor devices for hyperspectral imaging within the 10 µm - 20 µm band.&#13;
&#13;
Specifically, Schottky-junction thin-film Platinum (Pt)/WSe₂/Gold (Au) solar cells were shown to exhibit large improvements in short circuit current and open circuit voltage due to antireflection coating effects, surface doping, and surface trap passivation. A single-absorber solar cell with an open circuit voltage (Voc) of 380 mV and a short circuit current density (Jsc) of 10.7 mA/cm² was demonstrated, thanks to the absorber coating. Shifting focus to infrared detectors, this work demonstrates suspended 50 nm Al₂O₃/10 nm TiN/10 nm HZO/10 nm TiN/100 nm SiO₂ films that act as pyroelectric detectors with thermal time constants down to 0.625 ms. In addition, a Hafnium Zirconium Oxide (Hf₀.₅Zr₀.₅O₂)-gated MoS₂ transistor was shown to have temperature coefficient of resistance (TCR) magnitudes &gt; 0.03 K⁻¹ when biased in subthreshold, making this technology competitive with state-of-the-art vanadium oxide bolometers. In parallel, this work characterizes the temperature-dependent IV characteristics and noise performance of novel metal-nanogap-metal thermomechanical bolometers, which have been predicted to attain TCR magnitudes &gt; 2.0 K⁻¹ and have an experimentally demonstrated TCR down to -0.39 K⁻¹. Finally, this thesis demonstrates through simulation and early-stage experiments that, using just a few programmed states, data obtained from a Bernal-stacked bilayer graphene device can be used to reconstruct chemical absorption lines that allow accurate fingerprinting of chemical species.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147575</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Analysis of High-Stability THz Molecular Clock System</title>
<link>https://hdl.handle.net/1721.1/147572</link>
<description>Design and Analysis of High-Stability THz Molecular Clock System
Kim, Minah
Miniaturized frequency references with high stability are crucial for applications such as navigation and wireless networking. Recently, chip-scale molecular clocks (CSMCs) have achieved excellent stability performance by using a rotational-mode transition of gaseous carbonyl sulfide (¹⁶O¹²C³²S). Its low-cost implementation and robustness against external electrical/magnetic fields make a CSMC an attractive candidate for a high-stability clock. However, even though an invariant OCS transition frequency is used as the reference, non-idealities such as the tilted baseline of spectroscopic probing and the input offsets of dc amplifiers lead to a frequency error between the actual transition frequency (&#119891;₀) and the detected transition frequency. Since these non-idealities are susceptible to environmental variations, they affect the long-term stability of the clock. In addition, the short-term stability of a CSMC is limited by the spectroscopic signal-to-noise ratio.&#13;
&#13;
In this work, the effects of noise and environmental variations on clock stability were analyzed to provide guidance for the design and optimization of CSMCs. In addition, a dual-loop CSMC is demonstrated to address the issues in previous CSMCs and further improve stability performance. The prototype chip, implemented in 65-nm CMOS technology, achieves an Allan deviation of 2 × 10⁻¹¹ at a 10,000-s averaging time with 71-mW power consumption. This demonstrates that CSMCs can provide outstanding stability performance while maintaining their cost, complexity, and power consumption advantages.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147572</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Listening with generative models</title>
<link>https://hdl.handle.net/1721.1/147570</link>
<description>Listening with generative models
Cusimano, Maddie
This thesis extends classic traditions in perception by leveraging contemporary tools to build and apply rich generative models that describe what we hear. First, I present a hierarchical Bayesian auditory scene synthesis model to address the perceptual organization of sound into sources and events. We aimed to bridge between classical auditory scene analysis phenomena and everyday sounds, asking whether common generative principles could explain auditory scene analysis in both cases. We tested the model by having it listen to a variety of auditory scene analysis illusions and found that its judgments matched those of human listeners. Applied to everyday sounds, the model infers valid perceptual organizations. Also, due to its interpretability, the model's failures with everyday sounds were informative: they reveal the necessity of peripheral representations of periodicity, a more expressive model of spectra, and sources that compose multiple sound-generating processes. The next projects address alternative scene analysis problems of everyday physical understanding from sound. We developed methods for the ecological sound synthesis of a set of common object interactions: brief impact sounds and sustained scraping and rolling sounds. Our synthesis combines physical simulation from perceptually relevant variables with a statistical model of material. Listeners perceive our synthesized sounds to be realistic and as conveying various physical variables. I discuss future directions for developing inference for these physics-inspired models, learning sound synthesizers, and generating illusions. Given the variety of structured latent-variable generative models investigated through these projects, I conclude by exploring how multiple world models might interact in perception.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147570</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boolean-Valued Models and Their Applications</title>
<link>https://hdl.handle.net/1721.1/147566</link>
<description>Boolean-Valued Models and Their Applications
Wu, Xinhe
Boolean-valued models generalize classical two-valued models by allowing arbitrary complete Boolean algebras as value ranges. The goal of my dissertation is to study Boolean-valued models and explore their philosophical and mathematical applications.&#13;
&#13;
In Chapter 1,  I build a robust theory of first-order Boolean-valued models that parallels the existing theory of two-valued models. I develop essential model-theoretic notions like "Boolean-valuation", "diagram", "elementary diagram", and prove a series of theorems on Boolean-valued models, including the (strengthened) Soundness and Completeness Theorem, the Löwenheim-Skolem Theorems, the Elementary Chain Theorem, and many more.&#13;
&#13;
Chapter 2 gives an example of a philosophical application of Boolean-valued models. I apply Boolean-valued models to the language of mereology to model indeterminacy in the parthood relation. I argue that Boolean-valued semantics is the best degree-theoretic semantics for the language of mereology; in particular, it trumps the well-known alternative, fuzzy-valued semantics. I also show that, contrary to what many have argued, indeterminacy in parthood entails neither indeterminacy in existence nor indeterminacy in identity, though it is compatible with both.&#13;
&#13;
Chapter 3 (a collaboration with Bokai Yao) gives an example of a mathematical application of Boolean-valued models. Scott and Solovay famously used Boolean-valued models on set theory to obtain relative consistency results. In Chapter 3, I investigate two ways of extending the Scott-Solovay construction to set theory with urelements. I argue that the standard way of extending the construction faces a serious problem, and offer a new way that is free from the problem.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147566</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Certifiable Outlier-Robust Geometric Perception</title>
<link>https://hdl.handle.net/1721.1/147564</link>
<description>Certifiable Outlier-Robust Geometric Perception
Yang, Heng
Geometric perception is the task of estimating geometric models (e.g., object pose and 3D structure) from sensor measurements (e.g., LiDAR scans, neural network detections) and priors (e.g., object 3D models). Geometric perception is a fundamental building block for robotics applications ranging from intelligent transportation to space autonomy.&#13;
&#13;
The ubiquitous existence of outliers (measurements that convey little or no information about the models to be estimated) makes it theoretically intractable to perform estimation with guaranteed optimality. Despite this theoretical intractability, safety-critical robotics applications still demand trustworthiness and performance guarantees from perception algorithms.&#13;
&#13;
In this thesis, I present certifiable outlier-robust geometric perception, a new paradigm for designing tractable geometric estimation algorithms that enjoy rigorous performance guarantees, i.e., they return an optimal estimate with a certificate of optimality for a majority of problem instances, but declare failure and provide a measure of suboptimality for worst-case instances. In particular, I present two general-purpose algorithms in this paradigm: (i) an estimator that uses graph theory to prune gross outliers and leverages graduated non-convexity to compute the optimal model estimate with a high probability of success, and (ii) a certifier that employs sparse semidefinite programming (SDP) relaxation and a novel SDP solver to endow the estimator with an optimality certificate or escape local minima otherwise. The estimator is fast and robust against up to 60%–99% random outliers in practical perception applications, and the certifier can compute high-accuracy optimality certificates for large-scale problems beyond the reach of existing SDP solvers. I showcase certifiable outlier-robust perception on robotics applications such as scan matching, satellite pose estimation, and vehicle pose and shape estimation.&#13;
&#13;
In addition, this thesis shows that outlier-robust geometric estimation enables self-supervised geometric perception, the first general framework to learn a feature descriptor for correspondence matching without any ground-truth geometric models.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147564</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Firms in Integrated Urban Models: Agglomeration Economies and the Dynamics of Employment Size Decisions</title>
<link>https://hdl.handle.net/1721.1/147563</link>
<description>Firms in Integrated Urban Models: Agglomeration Economies and the Dynamics of Employment Size Decisions
He, He
Jobs are key determinants of urban phenomena ranging from daily trip patterns to urban structure. Despite their importance, the representation of jobs and firms in integrated urban models is limited. Existing approaches are exceedingly static, often lack theoretical underpinnings, and rarely account for the impact of agglomeration economies.&#13;
&#13;
I propose an agent-based dynamic programming structural model of firms’ job creation and lay-off decisions. It models the evolutionary trajectory of firm sizes rather than discrete jumps between presumed steady states. Firms are forward-looking rational agents that attempt to follow the employment size adjustment trajectory maximizing the present value of all future profits in the face of a stochastic adjustment process. I model firms’ decision-making as a continuous-time Markov decision process, solved via dynamic programming. To estimate the model’s parameters, which are firm-specific, I formulate a hierarchical Bayesian estimation procedure, repeatedly sampling from the posterior distributions of the hyperparameters using a nested Gibbs and Metropolis-Hastings sampling algorithm.&#13;
&#13;
With a panel micro-dataset of businesses in the Greater Boston Area, I apply the model to explore the heterogeneous impacts of agglomeration economies for manufacturing, professional services, and food and accommodation services firms. The empirical findings broadly align with urban economic theory. However, uniquely, the dynamic structural model enables me to distinguish between benefits that increase productivity and those that reduce labour market friction. Overall, I find that employment size adjustments are more costly for more skills-intensive sectors. Finally, using the estimation results from Boston, I examine the estimated impacts of a major urban rail line investment – the Green Line extension – in terms of job creation and gross production increase, and the cost of labour market frictions in terms of firms’ foregone profits.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147563</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polynomial Structure in Semidefinite Relaxations and Non-Convex Formulations</title>
<link>https://hdl.handle.net/1721.1/147562</link>
<description>Polynomial Structure in Semidefinite Relaxations and Non-Convex Formulations
Yuan, Chenyang
Semidefinite relaxation is a powerful tool for approximating otherwise intractable non-convex problems, but it tends to run into scalability issues on large-scale instances. The goal of this thesis is to explore the power of semidefinite relaxations and to address these scalability issues for special classes of problems with polynomial structure.&#13;
&#13;
In the first part of this thesis, we consider semidefinite relaxations of functions on quadratic maps, with applications to approximating permanents of positive semidefinite (PSD) matrices and products of quadratic forms; these problems can be interpreted as generalizations of MaxCut. The optimization problems and their convex relaxations have a product structure that is crucial in the analysis of approximation quality. We show that these problems are all connected by a unified analysis which recovers tight approximation factors. This leads to better approximation bounds on the permanent of PSD matrices, intermediate relaxations trading off accuracy with computational power, and constant factor approximation bounds for maximizing concave objectives on the image of quadratic maps.&#13;
&#13;
In the second part, we study the global landscape of low-rank sum of squares problems, using a non-convex Burer-Monteiro formulation to decrease the computational cost but with the risk of getting stuck in local minima. We show that in the univariate case where the SDP solution is guaranteed to be rank-2, this formulation does not have spurious local minima. This is in contrast to previous work showing that for general SDPs, in addition to genericity conditions, the rank has to be roughly the square root of the degree of the polynomial for there to be no spurious local minima. We also show that with a particular choice of basis, the gradient can be computed in near-linear time using Fast Fourier Transforms (FFTs). This enables very fast first-order methods, scaling to polynomials with millions of variables.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147562</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning inside the prediction function</title>
<link>https://hdl.handle.net/1721.1/147561</link>
<description>Learning inside the prediction function
Alet i Puig, Ferran
Many recent achievements in machine learning have followed different variations on a single recipe: we pick a supervised training dataset and assume there exists a function mapping inputs to outputs. We then leverage the expressivity of deep learning (together with few but carefully chosen inductive biases for each domain) and train a neural network to approximate this unknown function. In this thesis, we show that this single-function, single-neural-network approach can be too constraining and instead suggest spawning per-point models. This allows us to encode inductive biases in flexible ways and model expressive, structured generative models of the data distribution.&#13;
&#13;
First, we present Tailoring: a novel way of encoding inductive biases by optimizing unsupervised objectives inside the prediction function. This ensures the structure is imposed both at training and test time. Furthermore, its generality allows applications in domains as diverse as physics time-series prediction, adversarial defenses, and contrastive representation learning. We also propose Noether Networks, which automatically discover these inductive biases, in the form of conservation laws.&#13;
&#13;
Finally, we propose Functional Risk Minimization (FRM), an alternative framework to the standard Empirical Risk Minimization (ERM) setting in which loss functions act in function space rather than output space. We show how learning in this new framework can be made efficient and can lead to improved performance compared to the standard ML setting.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147561</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Better Hardness via Algorithms, and New Forms of Hardness versus Randomness</title>
<link>https://hdl.handle.net/1721.1/147560</link>
<description>Better Hardness via Algorithms, and New Forms of Hardness versus Randomness
Chen, Lijie
One central theme of complexity theory is the rich interplay between hardness (the existence of functions that are hard to compute) and pseudorandomness (the procedure that converts randomized algorithms into equivalent deterministic algorithms). In one direction, from the classic works of Nisan-Wigderson and Impagliazzo-Wigderson, we know that certain hardness hypotheses (circuit lower bounds) imply that all randomized algorithms can be derandomized with a polynomial overhead. In the other direction, a decade ago, Williams proved that certain circuit lower bounds follow from non-trivial derandomization.&#13;
&#13;
In this thesis we establish many new connections between hardness and pseudorandomness, strengthening and refining the classic works mentioned above. &#13;
&#13;
• New circuit lower bounds from non-trivial derandomization. Following Williams’ algorithmic method, we prove several new circuit lower bounds using various non-trivial derandomization algorithms, including almost-everywhere and strongly average-case lower bounds against ACC0 circuits and a new construction of rigid matrices. &#13;
&#13;
• Superfast and non-black-box derandomization from plausible hardness assumptions. Under plausible hardness hypotheses, we obtain almost optimal worst-case derandomization of both randomized algorithms and constant-round Arthur-Merlin protocols. We also propose a new framework for non-black-box derandomization and demonstrate its usefulness by showing (1) it connects derandomization to a new type of hardness assumption against uniform algorithms and (2) (from plausible assumptions) it gives derandomization of both randomized algorithms and constant-round doubly efficient proof systems with almost no overhead, such that no polynomial-time adversary can find a mistake.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147560</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid Visual Object Learning in Humans is Explainable by Low-Dimensional Image Representations</title>
<link>https://hdl.handle.net/1721.1/147557</link>
<description>Rapid Visual Object Learning in Humans is Explainable by Low-Dimensional Image Representations
Lee, Michael Jinsuk
How humans learn to recognize new objects is an open problem. In this thesis, we consider one class of theories for how this is accomplished: humans re-represent incoming retinal images in a stable, multidimensional Euclidean space, and build linear decoders in this space for new object categories from image exemplars. &#13;
&#13;
In Part I, we empirically characterize human learning behavior over a battery of different learning subtasks, and find humans rapidly learn new objects from a small number of examples. We then build neurally-mechanistic, end-to-end models of object learning based on recent advances in image-computable models of ventral stream representations. We point to shortcomings of these models, including the fact that none of them actually match the human ability to learn new objects from few examples. &#13;
&#13;
In Part II, we analyze this few-shot learning failure from a theoretical perspective, and show that a geometric property of image representations — variation in directions orthogonal to the one needed to linearly solve the task — slows learning. Given this observation, we motivate the hypothesis that current models of visual processing represent images along a much higher number of dimensions, relative to humans.&#13;
&#13;
In Part III, we identify (and remove) these hypothesized excess dimensions by developing the "perceptual alignment" method, where we combine a classical approach in experimental psychology — inferring internal stimulus representations using measurements of human similarity judgements — with deep learning methods, and create new, lower-dimensional, image-computable representations which capture patterns of human similarity judgements. Finally, we show models based on these new representations predict the ability of humans to few-shot learn across a variety of object domains. They also successfully predict the inability of humans to learn tasks based on representational dimensions that are present in baseline models but absent in perceptually aligned ones. Taken together, this thesis shows specific, neurally-mechanistic models based on a simple theory of learning are strong accounts of how humans rapidly learn new objects.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147557</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating secondary atmospheric aerosols using chemically-speciated observations and targeted model development</title>
<link>https://hdl.handle.net/1721.1/147552</link>
<description>Investigating secondary atmospheric aerosols using chemically-speciated observations and targeted model development
Pai, Sidhant J.
Fine particulate air pollution (PM₂.₅) has wide-ranging influence on global climate (through radiative scattering, cloud formation, etc.) and human health (through the increased incidence of respiratory illness, etc.). Studies have shown that a major fraction of global PM₂.₅ (also called fine aerosol) is formed dynamically in the atmosphere from volatile gas-phase precursors that are emitted by both anthropogenic and biogenic sources. This class of aerosol is called secondary aerosol. Due to the numerous uncertainties associated with simulating their atmospheric formation and fates, Earth science models have historically struggled to accurately represent secondary aerosols, and continue to demonstrate significant bias when compared to observational datasets. The goal of this doctoral thesis is to better constrain the sources and atmospheric fates of a few key secondary particulate species, with the intention of improving the model representation of these aerosols. With these overarching objectives in mind, this thesis spans a series of four projects that use chemically-speciated observational constraints and targeted model development to conduct (1) a comparative study of global organic aerosol schemes using airborne observations; (2) an exploration of atmospheric ammonia oxidation as a source of secondary aerosol and nitrous oxide; (3) an investigation of compositional constraints from surface, aircraft and satellite measurements to improve PM₂.₅ source-attribution over India; (4) a model evaluation of global PM₂.₅ exposure guidelines that highlights the importance of non-anthropogenic sources and proposes a chemically-speciated paradigm for PM₂.₅ measurement and source-apportionment. In aggregate, these projects contribute to a body of scientific literature that can be leveraged to inform air quality management efforts around the world.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147552</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photoporomechanics: a new technique to explore grain-scale mechanisms for fluid-driven fractures in granular media</title>
<link>https://hdl.handle.net/1721.1/147550</link>
<description>Photoporomechanics: a new technique to explore grain-scale mechanisms for fluid-driven fractures in granular media
Meng, Yue
Multiphase flow through granular and porous materials exhibits complex behavior, the understanding of which is critical in many natural and industrial processes like infiltration of water into the vadose zone, water dropout in fuel cells, and geological carbon dioxide storage. While fluid–fluid displacement in rigid porous media has been studied in depth, the understanding of the interplay between multiphase flow and granular mechanics remains an ongoing challenge.&#13;
&#13;
Photoelasticity has been used for decades as an experimental technique to quantify the internal stresses within solid bodies, providing numerous microscopic observations in assemblies of circular disks, including contact forces and force-chain lengths and orientations, which are essential for gaining a deeper understanding of the macroscopic behavior of granular systems. In this Thesis, we extend this technique to produce millimeter-size, residual-stress-free, spherical photoelastic particles that form quasi-2D granular assemblies with connected pore space, thus permitting for the first time the visualization and quantification of effective stress in coupled granular-fluid systems. We hereby refer to this novel experimental method as photoporomechanics.&#13;
&#13;
We employ photoporomechanics to study fluid-induced deformation and fracture of granular media, with a focus on its underpinning grain-scale mechanics. For cohesionless granular packs, we uncover two distinct states of the granular pack: a ‘fluidized’ friction-dominated region behind the propagating fracture tips, and a ‘solidified’ elasticity-dominated region ahead of the fracture tips. We then extend the experimental system to study cohesive granular packs, and provide direct observation of the tensile effective stress in the circumferential direction (hoop stress) behind the invasion front, and the compressive effective stress in the radial direction ahead of the invasion front. In each case, we develop macroscopic mathematical models that explain the transition from a fluid-like to a solid-like state underpinning the fracturing process, a phenomenon that plays a key role in real-world processes, such as the drying of superhydrophobic surfaces, the venting of methane from lake and marine sediments, and the formation of desiccation cracks in soils.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147550</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Efficient Wide-Range Radio-Frequency Power Generation System</title>
<link>https://hdl.handle.net/1721.1/147549</link>
<description>Techniques for Efficient Wide-Range Radio-Frequency Power Generation System
Zhang, Haoquan
Industrial radio-frequency (rf) power applications, e.g. plasma generation for semiconductor processing, are often characterized by the delivery of rf power, at a specific frequency or within a narrow band, with high peak power levels and wide overall power ranges, into loads with varying impedances.&#13;
&#13;
To meet the evolving demands in the industry, typical goals for the power generation system include operation over a wide load impedance range and wide output power range, while exhibiting very fast dynamic responses and maintaining high peak and average efficiencies. However, meeting all of these metrics simultaneously has not been possible to date, and efficiency is often sacrificed in order to meet the other requirements, yielding solutions with high electricity costs, low thermal robustness, and excessive power ratings.&#13;
&#13;
This work explores an rf power generation system based on switched-mode power amplifiers (PAs), with associated PA system architecture, control techniques, and implementation optimizations, to achieve both high performance and good efficiency for the target applications.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147549</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Improving the Acquisition and Reconstruction Of Spatio-Temporal Magnetic Resonance Imaging</title>
<link>https://hdl.handle.net/1721.1/147548</link>
<description>On Improving the Acquisition and Reconstruction Of Spatio-Temporal Magnetic Resonance Imaging
Iyer, Siddharth Srinivasan
Magnetic Resonance Imaging (MRI) is a non-invasive but slow imaging modality that provides unparalleled flexibility in acquiring multiple forms of soft-tissue contrast. Recently, there has been a lot of interest in mapping the inherent magnetization properties of the underlying human tissue and in temporally resolving the acquired data. Broadly classified as spatio-temporal MRI, these methods yield unprecedented details of the human anatomy and function, improving clinical diagnostic performance and prognosis. However, such methods are inherently high-dimensional, resulting in encoding-intensive data acquisition processes and computationally-intensive reconstructions. This begets long acquisition and reconstruction times, making such methods difficult to integrate into clinical workflows. This thesis aims to improve the acquisition and reconstruction times of spatio-temporal MRI to enable its use in clinical and neuroscientific settings.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147548</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planetary Science Meets Chemistry: Studying Potential Biosignature Gases in Terrestrial Exoplanet Atmospheres</title>
<link>https://hdl.handle.net/1721.1/147547</link>
<description>Planetary Science Meets Chemistry: Studying Potential Biosignature Gases in Terrestrial Exoplanet Atmospheres
Huang, Jingcheng
As more and more exoplanets are discovered, searching for biosignature gases is becoming one of the crucial ways to find extraterrestrial life. Biosignature gases are gases produced by living organisms that can accumulate to detectable levels in the atmosphere. Once detected, such a gas can be attributed to signs of life on the planet. So far, only a few molecules have been studied as potential biosignature gases. A recent paper proposes that we should systematically evaluate All Small Molecules (ASM) as possible biosignature gases. This thesis summarizes my work in identifying and studying three new potential biosignature gases in terrestrial exoplanet atmospheres. In my research, I use various approaches, from simple Henry's law to our comprehensive photochemistry code and transmission spectra model, to study the biosignature potential of ammonia (NH₃) and methanol (CH₃OH). I also developed a simplified chloride steady-state chemical model to examine whether hydrogen chloride (HCl) is a good bioindicator in an H₂-dominated atmosphere. First, we find that NH₃ in a terrestrial planet's atmosphere is generally a good biosignature gas, primarily because terrestrial planets have no significant known abiotic NH₃ source. NH₃ can accumulate in the atmosphere only if life is a net source of NH₃ and produces enough NH₃ to saturate the surface sinks. Second, we consider CH₃OH a poor biosignature gas in terrestrial exoplanet atmospheres due to the enormous production flux required to reach its detection limit. Although CH₃OH can theoretically accumulate on exoplanets with CO₂- or N₂-dominated atmospheres, such planets' small atmospheric scale height and weak atmosphere signals put them out of reach for near-term observations. Finally, although HCl has many advantages as a potential bioindicator, we find it is not suitable because it cannot accumulate to detectable levels on an exoplanet with an H₂-dominated atmosphere orbiting an M5V dwarf star.
The extremely high water solubility of HCl means that wet deposition can efficiently remove it from the atmosphere, preventing HCl from accumulating to detectable levels in the atmosphere. Overall, my thesis aims to improve our understanding of biosignature gases and provide more diverse research methods and a more comprehensive framework for future work.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147547</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quasi-One-Dimensional van der Waals Lattices with Diverse Magnetism:  New Platforms Towards Ultrathin Magnetic Nanowires</title>
<link>https://hdl.handle.net/1721.1/147545</link>
<description>Quasi-One-Dimensional van der Waals Lattices with Diverse Magnetism:  New Platforms Towards Ultrathin Magnetic Nanowires
Qu, Yi
One-dimensional (1D) or quasi-1D van der Waals (vdW) magnets, which feature covalently bonded spin chains or ladders separated by weak vdW interactions, could potentially offer twofold benefits for the current field of 1D magnets. On the one hand, the bulk crystals of these phases are more ideal 1D magnets because large vdW gaps effectively prevent any inter-chain or inter-ladder exchange couplings. This allows for the study of unique 1D magnetic fluctuations coupled with 1D characteristic transport behaviors. On the other hand, the vdW gaps open up possibilities to exfoliate these 1D vdW magnets for magnetic nanowire production. These nanowires could then be used to investigate 1D confinement effects and for densely packed spintronics. In this thesis, efforts to synthesize new quasi-1D vdW magnets, control their bulk magnetism, and efficiently exfoliate them into high-quality nanowires are detailed. Chapter 1 reviews the definitions and fundamental physics of 1D magnets, discusses limitations with the current routes to access 1D magnets, and introduces lessons from the breakthroughs in two-dimensional (2D) magnets, which serve as the starting point of this thesis work. Chapter 2 demonstrates the first exfoliation strategy developed for the quasi-1D vdW magnet CrSbSe₃ and presents the properties of the resulting nanowires. The exfoliated CrSbSe₃ nanowires have high aspect ratio, well-defined crystallinity, smooth surfaces, and high stability, and exhibit stronger coercivity compared to bulk CrSbSe₃ due to the stronger shape anisotropy therein. Chapters 3 and 4 are concerned with expanding the library of quasi-1D vdW magnets and efficiently controlling their bulk magnetic properties. Chapter 3 demonstrates that substituting Se in CrSbSe₃ with S switches the overall magnetic ordering from ferromagnetic (FM) to antiferromagnetic (AFM) and discusses the metamagnetic transition and strong spin-phonon coupling in the resulting AFM spin-ladder phase CrSbS₃.
Chapter 4 demonstrates that Bi alloying into the Sb sites in CrSbSe₃ and CrSbS₃ is an efficient strategy to enhance the magnetic anisotropy without altering the original 1D structural features or magnetic ground state. This offers an independent dimension to finely tune the magnetic behaviors of these quasi-1D vdW magnets.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147545</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dolomite as a paleoenvironmental archive</title>
<link>https://hdl.handle.net/1721.1/147543</link>
<description>Dolomite as a paleoenvironmental archive
Wilcots, Julia
Dolomite [CaMg(CO₃)₂] is abundant in the rock record and preserves, in its chemical composition, information about the environment in which it formed. The scarcity of dolomite forming in modern oceans and evidence for dolomite replacing primary calcium carbonate (CaCO₃) in ancient and modern successions have prompted the idea that most geologic dolomite formed through diagenetic alteration of CaCO₃, a process that weakens the link between the isotopic signals held by the primary carbonate and properties of the depositional environment. Alternatively, seas on ancient Earth may have supported dolomite formation in depositional settings, making dolomite an exciting and useful paleoenvironmental proxy. The question of how ancient dolomite formed — the longstanding “dolomite problem” — therefore has broad implications for our ability to use carbonate rocks to reconstruct surface environments throughout time. Each chapter of this thesis describes one approach to tackling this “dolomite problem,” spanning spatial scales from the nanometer to the kilometer and temporal scales from a singular moment in geologic time to the last two billion years of Earth history. In chapter one, I quantify the relative abundance of dolomite across the North American rock record and relate changes in abundance to primary environmental variables. Dolomite abundance displays time-dependent covariation with indicators of primary paleoenvironmental conditions, suggesting that changes in dolomite abundance reflect changes in ancient depositional – not diagenetic – environments. In chapter two, I map dolomite crystal orientations in one exquisitely-preserved ∼574 Ma ooid to reveal how fabric-preserving dolomite formed. In chapter three, I analyze dolomites deposited between ∼810–717 Ma, asking whether their carbonate geochemistry preserves primary environmental conditions.
The petrographic and geochemical data presented in these final two chapters provide evidence for primary dolomite precipitation in shallow water depositional settings and show dolomites preserve a geochemical archive of those settings. Each chapter in this multi-scale approach to the “dolomite problem” reinforces the same ideas: dolomite precipitated in depositional environments in deep time, the mechanisms responsible for dolomite formation in Earth history may be similar to those responsible for dolomite formation today, and dolomite can and does serve as a paleoenvironmental archive.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147543</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-Bounded Dynamic Scheduling of Temporal Plans</title>
<link>https://hdl.handle.net/1721.1/147542</link>
<description>Risk-Bounded Dynamic Scheduling of Temporal Plans
Wang, Andrew J.
Real-world activities often have uncontrollable durations, which poses a scheduling challenge for temporal planners: despite this uncertainty, we need to meet deadlines and coordination constraints. Missing these requirements can be costly, but it may be impossible to guarantee complete success. Instead, we aim to control the risk of scheduling failure, so we consider scheduling problems where activity durations are modeled with probability distributions. Then, by specifying a maximum allowable probability of failure, called a chance constraint, we gain access to solutions that are sufficiently safe. It is also known that reacting to duration outcomes throughout plan execution is key to avoiding failure. Thus, the goal of this thesis is to produce dynamic scheduling policies for chance-constrained temporal plans.&#13;
&#13;
Our strategy is to build on prior art in chance-constrained static scheduling. First, we characterize more rigorously the reformulation of the original problem into that of risk allocation. This separates the probabilistic condition from the temporal conditions, but also introduces conservatism. Second, we generalize the static solution’s conflict-directed hybrid algorithm to produce dynamic policies. Due to the chance constraint, we still employ nonlinear programming (NLP) to generate risk allocations, but now we leverage dynamic controllability (DC) algorithms to generate scheduling conflicts. However, those conflicts’ resolutions are disjunctive constraints, which require combinatorial search and not just an NLP. So third, we map selected clauses into a form identical to that solved by the conflict-directed algorithm for static schedules. Our algorithmic architecture thus wraps the chance-constrained static solution within another layer of conflict discovery and resolution.&#13;
&#13;
We evaluate our approach on lunar construction and car-sharing scenarios, which exemplify real-world complexity in coordinating parallel threads. We demonstrate that moving from chance-constrained static to dynamic policies dramatically increases the problem sizes we can schedule, by at least a factor of 10. Additionally, our strategy for reallocating risk, based on discovered conflicts, solves an additional 10% of the benchmark scenarios over that achieved by uniform risk allocation. Finally, we show that our conflict-directed approach’s runtime is an order-of-magnitude faster than solving a full encoding.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147542</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supporting Well-Defined Platinum Group Metals in Metal-Organic Frameworks for Heterogeneous Catalysis</title>
<link>https://hdl.handle.net/1721.1/147541</link>
<description>Supporting Well-Defined Platinum Group Metals in Metal-Organic Frameworks for Heterogeneous Catalysis
Payne, Michael T.
For the many chemical industrial processes which require highly selective transformations, homogeneous catalysts are paramount. The effectiveness of molecular organometallic catalysts in particular can be largely attributed to the precise tools available for synthesis and characterization of the ligands and metal complexes, which allow for fine control over electronics and sterics to optimize substrate binding, reaction kinetics, and catalyst stability. Platinum group metals (PGMs; Ru, Os, Rh, Ir, Pd, Pt) enjoy special distinction among transition metal catalysts for their high selectivity and activity for a number of reactions of interest such as cross-coupling, oligomerization, and hydrofunctionalization. Such homogeneous catalysts still find widespread application at the industrial scale, but the steep cost of these metals places stringent requirements on homogeneous systems for metal recovery. Heterogenization of PGMs thus poses a substantial benefit by enabling efficient catalyst reuse, but it remains a challenge to translate the well-defined nature of molecular catalysts (and thus their high selectivity/activity) to a solid support.&#13;
&#13;
In this thesis, I describe the development of methodologies for producing well-defined single-site PGM catalysts on a solid support using metal-organic frameworks (MOFs). MOFs are attractive platforms for the heterogenization of PGM catalysts because, in addition to high porosity and crystallinity, MOFs are structurally tunable at the molecular level, allowing fine synthetic control over active sites and their local environments. Chapter 2 of this thesis describes the development of complementary methodologies for generating well-defined PGM scorpionates on a MOF composed of Tpm-based linkers. Detailed structural studies of Pd(II) installed in the C₃ᵥ scorpionate sites reveal an unusual Pd(II)-Napical interaction between the bidentate Pd complex and the third uncoordinated pyrazole arm of the ligand (Pd-Napical = 2.501 ± 0.067 Å). Chapter 3 describes a new synthetic strategy, termed “tetrazine click-grafting”, which enables facile post-synthetic installation of phosphorus ligands onto MOFs. Lastly, Chapter 4 reports the catalysis of MOF-supported Rh phosphines prepared via tetrazine click-grafting for the hydroformylation of vinylsilanes and simple alkenes, with remarkable selectivity for linear aldehydes.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147541</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Political Economy of Inequality, Wealth, and Money</title>
<link>https://hdl.handle.net/1721.1/147540</link>
<description>Essays on the Political Economy of Inequality, Wealth, and Money
Wolters Freiheyt, Lukas
The increasing gap between those at the top and those at the bottom of the wealth distribution in Western market economies is a worrying development as it hampers equality of opportunity and distorts democratic representation. This dissertation improves our understanding of both the causes and consequences of increasing inequality by making a number of theoretical and empirical contributions. Theoretically, this dissertation provides new insights into the political foundations of wealth inequality by explaining the political economy roots of unbanked America, and by highlighting how welfare policies play a crucial role in determining whether households invest in the types of assets that build long-term wealth. Concerning the consequences of increasing inequality, this dissertation further shows how political donations and lobbying work conjointly to allow moneyed interests to shape U.S. politics. Empirically, I demonstrate that popular access to banking in the U.S. started diverging from the rest of the Western world as early as the 19th century, much earlier than most scholars assume; I use large confidential micro-data on millions of Europeans to construct granular household-level measures of economic risk and social protection, which go far beyond the country-level measures standard in most of the literature; with my two co-authors, I further provide the first data set on moneyed interests in U.S. politics that covers the universe of lobbying reports and campaign finance filings from 1999. Taken together, my dissertation not only contributes to theoretical knowledge in political science and political economy and offers new empirical evidence on inequality but hopefully also provides those who seek a world less unequal and more financially inclusive with insights into how to get there.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147540</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Political Behavior of Economic Informality and Public Goods</title>
<link>https://hdl.handle.net/1721.1/147539</link>
<description>Essays on the Political Behavior of Economic Informality and Public Goods
Gao, Ying
This dissertation examines the relationship between economic informality, political behavior, and public goods provision. In three papers, it explores how seemingly nonpolitical everyday social institutions and collective action rooted in economic informality help extend the reach of the state and advance public policies in developing societies. I document these complementary mechanisms using microdata and policy variations in Indonesia, an important case given its recent urbanization, democratic status, and the world’s fourth largest and highly diverse population. Does informal housing (or slums) cause political marginalization? In the first paper, I distinguish between the housing informality of infrastructure and tenure insecurity, and posit that the former generates collective interests and demand for the state. Using a panel survey of Indonesian households from 1993 to 2014 with approximately 30,000 observations, I find that those in housing lacking piped water and sanitation access are significantly more likely to speak the national language, express support for vote buying, yet show lower ethnic trust. In light of theoretical knowledge on historical urbanization and political identity formation, the results suggest that social contexts afforded by informal housing can produce clientelism alongside attitudes of political integration. I expand on the implications of the connected political and social behaviors of informality by assessing its effects on public policies. I develop a theory of community-driven development’s impact on informal settlement leaders. A field survey of 258 formal and informal leaders in urban communities under Indonesia’s National Slum Upgrading Program and a comparison group reveals modest effects. The final paper tests the role of quotidian social groups in the setting of Covid-19’s economic distress and public health urgency. 
In a sample of 1,085 Indonesian workers across informal employment sectors, I ask if and why vulnerable informal workers may comply with costly restrictions. Survey experiments varying information regarding a hypothetical but realistic lockdown policy demonstrate that information endorsement by workers’ membership associations significantly boosts compliance. Taken together, this dissertation challenges the scholarly pessimism around growing urban informality and contributes to the study of comparative political economy and urban politics of development.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147539</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive Health Monitoring with Radio Waves — In Body and In Home</title>
<link>https://hdl.handle.net/1721.1/147538</link>
<description>Passive Health Monitoring with Radio Waves — In Body and In Home
Zhang, Guo
Current health care is primarily in-clinic, episodic, and semi-empirical. With the development of intelligent devices such as smartphones and smartwatches, and of more cutting-edge devices such as in-body devices and contactless in-home sensors, we are beginning to see a paradigm shift in health care. The new paradigm can be summarized under the framework of digital health: health care is becoming more embedded in daily life, using more continuously collected data, and making more data-driven decisions. This thesis discusses three of our research projects in digital health. The first details our system for deep in-body communication and localization using a backscatter scheme, which addresses the critical challenge of near-zero-power continuous in-body monitoring. The second describes our work on digital biomarkers developed from passive measurements of unscripted daily gait speed collected by our contactless in-home sensors, showing how this new form of continuously collected health data has the potential to transform the way we assess Parkinson’s disease severity, motor fluctuation, and progression. The final project discusses the application of a wireless non-contact monitoring system for patients with COVID-19, which can be used to remotely monitor their acute and long-term physiological and behavioral symptoms. Together, these three studies of continuous monitoring suggest innovative new directions for the future of digital health.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147538</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of DNA double strand break-mediated neurotoxicity in neurodegenerative disease</title>
<link>https://hdl.handle.net/1721.1/147534</link>
<description>Mechanisms of DNA double strand break-mediated neurotoxicity in neurodegenerative disease
Welch, Gwyneth M.
Neurons are highly susceptible to DNA damage accumulation due to their large energy requirements, elevated transcriptional activity and long lifespan. DNA double strand breaks (DSBs) are also linked to neurodegeneration and senescence. However, it is not clear how DSB-bearing neurons influence processes of neuroinflammation and neurodegeneration. Here, we characterize DSB-bearing neurons from the CK-p25 mouse model of neurodegeneration using single-nucleus, bulk, and spatial transcriptomic techniques. DSB-bearing neurons enter a late-stage DNA damage response marked by NFκB-activated senescent and antiviral immune pathways. In humans, Alzheimer’s disease pathology is significantly associated with immune activation in excitatory neurons. Spatial transcriptomics reveals that regions of CK-p25 brain tissue dense with DSB-bearing neurons harbor signatures of inflammatory microglia, which are ameliorated by NFκB knockdown in neurons. Inhibition of NFκB in DSB-bearing neurons also reduces microglia activation in organotypic mouse brain slice culture. In conclusion, DSBs activate immune pathways in neurons, which in turn adopt a senescence-associated secretory phenotype to elicit microglia activation. These findings highlight a novel role for neurons in the mechanism of disease-associated neuroinflammation.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147534</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flows, Submodularity, Sparsity, and Beyond: Continuous Optimization Insights for Discrete Problems</title>
<link>https://hdl.handle.net/1721.1/147524</link>
<description>Flows, Submodularity, Sparsity, and Beyond: Continuous Optimization Insights for Discrete Problems
Axiotis, Kyriakos
In this thesis we build on connections between discrete and continuous optimization. In the first part of the thesis we propose faster second-order convex optimization algorithms for classical graph algorithmic problems. Our main contribution is to show that the runtime of interior point methods is closely connected to spectral connectivity notions in the underlying graph, such as electrical conductance and effective resistance. We explore these connections along two orthogonal directions: Making manual interventions to the graph to improve connectivity, or keeping track of connectivity so as to make faster updates. These ideas lead to the first runtime improvement for the minimum cost flow problem in more than 10 years, as well as faster algorithms for problems like negative-weight shortest path and minimum cost perfect matching. &#13;
&#13;
In the second part of the thesis, we investigate efficient optimization algorithms for problems relevant to machine learning that have some discrete element, such as sparse or low-rank structure. We introduce a new technique, called adaptive regularization, which eliminates the sparsity performance degradation caused by ℓ₂ projections onto structured non-convex domains, like the set of sparse vectors or low-rank matrices. This improves the sparsity guarantee of one of the best-known sparse optimization algorithms, iterative hard thresholding (IHT).
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147524</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the representation of smoke and its implications for air quality and climate</title>
<link>https://hdl.handle.net/1721.1/147523</link>
<description>Investigating the representation of smoke and its implications for air quality and climate
Carter, Thérèse (Tess)
Smoke from biomass burning (both wildfires and prescribed and agricultural burns) is important for atmospheric chemistry and composition, air quality, and climate. These impacts are associated with substantial societal implications such as large detrimental health burdens, lost work and school days, and diminished visibility and ability to use the outdoors. However, there are large uncertainties in the magnitude and characteristics of smoke, stemming from considerable unknowns in all parts of the fire system, and thus in how smoke is represented in models. This thesis aims to address many of these uncertainties with a multipronged approach using models and observations across scales.&#13;
&#13;
The scope of the research completed herein is introduced and described in Chapter 1. Chapter 2 focuses on how smoke emissions uncertainties carry through to air quality and radiative impacts with an emphasis on North America using four commonly used smoke inventories, a chemical transport model, and observational constraints, including surface networks, aircraft, and satellites. We show that two of the inventories (GFED4s and GFAS) direct the model closest to observations. While most air quality and climate studies only use one smoke inventory, we find that there is a large range across the inventories in health-relevant surface smoke concentrations and climate-relevant direct radiative effects. Chapter 3 investigates carbonaceous aerosol and its absorption properties from fires in two large fire source regions, the western US and Africa, using observations from three aircraft campaigns focused on fires. We find that smoke from African fires is more absorbing than that in the western US and thus that global climate models need to represent regional heterogeneity in absorption properties. We also show that a 1-day whitening lifetime of brown carbon matches observations well and substantially decreases the warming contribution of biomass burning. Chapter 4 expands the model representation of non-methane organic gases (NMOGs) from fires and investigates how important fires are for atmospheric reactivity. This is the first global estimate of the impact of fire on atmospheric reactivity. Chapter 5 focuses on two quantifiable human levers (human-ignited wildfires and agricultural fires) on smoke particulate matter under 2.5 microns in the US. We calculate that these two human drivers account for over 80% of important health metrics (population-weighted exposure and premature mortality) associated with fires, suggesting large mitigation potential of smoke impacts. Finally, Chapter 6 summarizes the work completed in this thesis.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147523</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inverse Design of Random Emitters in Nanophotonics</title>
<link>https://hdl.handle.net/1721.1/147522</link>
<description>Inverse Design of Random Emitters in Nanophotonics
Yao, Wenjie
Incoherent light from random emitters, such as thermal radiation, are very common in nature. However, modeling such random emitters may be challenging, as it naively requires Maxwell’s equations to be solved for all emitters to obtain the total response, which becomes computationally intractable in conjunction with large-scale optimization (inverse design). In this work, we present a trace formulation of random emitters that can be efficiently combined with inverse design, even for topology optimization over thousands of design degrees of freedom.&#13;
&#13;
We begin with a trivial case where the emitter is at a single location with random orientations, which leads to computing the local density of states (LDOS). In a previous work, a shape-independent upper limit was derived for LDOS, but simple geometries such as bowties are 2-3 orders of magnitude away from this limit. By computational optimization of air-void cavities in metallic substrates, we show that the LDOS can reach within a factor of ≈ 10 of the upper limits, and within a factor of ≈ 4 for the single-polarization LDOS, demonstrating that the theoretical limits are nearly attainable.&#13;
&#13;
We then study the more general case where emitters are distributed randomly in space. We present several examples of incoherent-emission topology optimization (TopOpt), including tailoring the geometry of fluorescent particles, a periodically emitting surface, and a structure emitting into a waveguide mode.&#13;
&#13;
Finally, we employ our trace formulation for inverse design of nanopatterned surfaces that maximize spatially averaged surface-enhanced Raman (SERS) spectra from molecules distributed randomly throughout a material or fluid. This leads to radically different designs than optimizing SERS emission at a single known location, as we illustrate using several 2D design problems addressing effects of hot-spot density, angular selectivity, and nonlinear damage. We obtain optimized structures that perform about 4× better than coating with optimized spheres or bowtie structures and about 20× better when the nonlinear damage effects are included.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147522</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Deployed Quantum Networks with Solid-State Defect Centers</title>
<link>https://hdl.handle.net/1721.1/147514</link>
<description>Techniques for Deployed Quantum Networks with Solid-State Defect Centers
Bersin, Eric
The past decade has seen tremendous progress towards the development of quantum networks, wherein quantum states are transmitted over long distances for applications in distributed quantum computing, quantum-enhanced metrology, and quantum key distribution. In particular, recent results have demonstrated the fundamental building blocks of "quantum repeaters": network nodes containing quantum memories that can store, process, and retransmit photonic qubits. Such repeaters are key to deploying scalable quantum networks that can realize the full range of quantum networking applications. However, work in this area has typically been confined to small numbers of low-yield devices, operating in single laboratory environments. Moving from delicate, proof-of-principle physics experiments to robust, practical systems requires advancements on a number of fronts, ranging from fundamental materials science and qubit development to high-level quantum-compatible communications infrastructures.&#13;
&#13;
Here, we pursue a full-stack approach towards deployable quantum networks, specifically with solid-state defect centers as quantum memories. We investigate single qubit registers, studying creation techniques and multi-spin architectures that might enhance qubit performance. Next, we propose architectures at the device and repeater levels for improving the ability of a network to take advantage of high-performance qubits. Finally, we develop the classical infrastructure necessary for realizing quantum networks across real-world fiber links, concluding with a demonstration of photon-to-spin quantum state transfer across a 50 km deployed network in the Boston area. Together, these efforts represent a significant step in realizing scalable, memory-enabled quantum networks.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147514</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Drawing Apocalypse: Architectural Representation in the Nuclear Age and the Imagination of the End</title>
<link>https://hdl.handle.net/1721.1/147513</link>
<description>Drawing Apocalypse: Architectural Representation in the Nuclear Age and the Imagination of the End
Keller, Eliyahu
The use and development of atomic weapons have been determining factors in shaping the history of the twentieth century. Throughout the Cold War, and primarily in the United States, images of the unprecedented devastation of Hiroshima and Nagasaki were coupled with visual and literary fictions projecting the aftermath of a world-ending atomic war. These fostered an atmosphere in which the possibility of the world’s end was not only real but imminent: a nuclear apocalyptic cloud that permeated almost all facets of cultural production, including that of architecture.&#13;
&#13;
This dissertation examines the ways in which nuclear apocalyptic thinking, imagination, and discourse contaminated speculative architectural representation in the decades that followed World War II. Stepping away from the pragmatic responses proposed by architects, planners, and institutions to the nuclear threat, it first theorizes the role of architectural images in the construction of apocalyptic fictions and visualizations. It then studies the speculative architectural propositions, representations, and narratives produced by three US-based architects: Paolo Soleri, Raimund Abraham, and Lebbeus Woods.&#13;
&#13;
Focusing on the representational paradoxes that apocalyptic thinking and imagination suggest—namely, the impossibility of representing an all-encompassing end—the dissertation explores the threat of nuclear apocalypse not as a problem to be solved by architecture, but rather as a discursive and historical lens through which architectural thinking can be reframed. To that end, the dissertation places the works examined within a constellation of nuclear-age discourses, cultural artifacts, and historical events, and interprets them through the lens of nuclear criticism: a field of literary critique that explores the relationship between the human potential for self-annihilation and cultural production. With that, the dissertation investigates how the imagination of a nuclear apocalypse fostered a kind of end-times preoccupation within speculative architectural production, and the ways in which these propositions both drew from and participated in the construction of nuclear apocalyptic culture during and after the decades of the Cold War. By doing so, the dissertation addresses a gap in architectural thinking in relation to an epistemology of disaster and unearths an apocalyptic current within postmodern architectural production that has yet to be addressed. Out of this investigation, the works examined emerge as testimonies to the limits of architectural imagination at a moment in history when the possibility of the future itself was at stake.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147513</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Evaluating Human Sound Localization in the Natural Environment</title>
<link>https://hdl.handle.net/1721.1/147512</link>
<description>Modeling and Evaluating Human Sound Localization in the Natural Environment
Francl, Andrew
Humans locate sounds in their environment to avoid danger and identify objects of interest. In a ten-minute bike ride, a person might take note of a car approaching from behind, a tree where a bird is singing, and pedestrians walking from around a blind corner. &#13;
&#13;
Research on human sound localization has greatly advanced our understanding of binaural hearing but leaves us some way from a complete understanding. In particular, it has been difficult to assess human sound localization in ways that align with human experience on an everyday basis. This thesis aims to more closely align research methods and modeling approaches with the natural sound localization tasks that humans perform in the real world.&#13;
&#13;
In the first study, we show that a model trained to localize sounds in naturalistic conditions exhibits many features of human spatial hearing. But when trained in unnatural environments without reverberation, noise, or natural sounds, the model’s performance characteristics deviate from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can reveal the real-world constraints that shape perception.&#13;
&#13;
In the second study, we ran a behavioral experiment to evaluate human sound localization in a naturalistic setting with natural sounds and identified specific sounds that are difficult for humans to localize. We assessed whether the model of sound localization from the first study could predict the accuracy with which individual sounds are localized. We found that the model predicted human localization accuracy well above chance. However, the model biases were distinct from those evident in humans, suggesting room for future improvement.&#13;
&#13;
In the third study, we constructed a model that uses a biologically inspired learning approach to localizing sounds, relying on self-motion cues from head movements to learn representations of sound locations. We show that this strategy can learn a representation that enables accurate decoding of sound location without having access to the ground truth location for sounds during training.&#13;
&#13;
In the fourth study, we used a model of human speech perception as a perceptual metric to improve speech denoising. We found that while this perceptual metric improved denoising over standard approaches, a simple model of the cochlea performed similarly, suggesting much of the benefit of this approach may be in using a frequency-based overcomplete representation of the signal.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147512</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Measuring Climate Change Damages and Adaptation</title>
<link>https://hdl.handle.net/1721.1/147508</link>
<description>Essays on Measuring Climate Change Damages and Adaptation
Vilgalys, Max Aidas
Through changes in average temperature, precipitation patterns, and extreme weather events, climate change is already causing severe ecological and economic damages. Further warming is expected to have a profound effect on the functioning of ecological and human systems worldwide. While it is a top priority to limit carbon emissions and mitigate future climate change, it is also essential to prepare for damages from climate change in the remainder of this century. Research is needed to understand these impacts, and whether it is possible to adapt to these changes.&#13;
&#13;
In this thesis, I measure damages and adaptation to recent climate change in three essays. First, in joint work with Sylvia Klosin, I develop a novel debiased machine learning approach to measure continuous treatment effects in panel settings. We demonstrate benefits of this estimator over standard machine learning or classical statistics approaches. We apply this estimator to measure the degree of damages from climate change in U.S. agriculture, and find that extreme heat is significantly more damaging than linear models suggest. In the second essay, I measure the degree of adaptation to extreme heat in U.S. agriculture using flexible modeling of weather variables and a debiased machine learning estimator. I demonstrate that my double machine learning approach works well in high-dimensional settings. Applying this estimator to the past thirty years of crop yields, I find evidence of considerable adaptation to extreme heat. Finally, I examine the equity of adaptation to increasing wildfire risk in California. I study how electric utilities’ power shutoff decisions correlate with community socioeconomic status and health risk factors.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147508</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interferometric, acousto-optic modulated diffuse correlation spectroscopy @ 1064 nm (AOM-iDCS) toward higher sensitivity, non-invasive measurement of cerebral blood flow</title>
<link>https://hdl.handle.net/1721.1/147506</link>
<description>Interferometric, acousto-optic modulated diffuse correlation spectroscopy @ 1064 nm (AOM-iDCS) toward higher sensitivity, non-invasive measurement of cerebral blood flow
Robinson, Mitchell Burrows
Continuous, bedside monitoring of cerebral blood flow in patients at risk for neurovascular complications has the potential to decrease morbidity and mortality. While measures of systemic physiology can be used to infer cerebral perfusion, a technology that directly and continuously measures cerebral blood flow (CBF) is needed to properly manage treatment. Diffuse Correlation Spectroscopy (DCS) is an established optical technique that enables continuous, non-invasive, and direct measurements of CBF. The effectiveness of DCS in measuring CBF is hampered in adults by extracerebral contamination and limited depth sensitivity.&#13;
&#13;
The goal of this dissertation is to extend the usefulness of DCS in the adult population through the development of new techniques, including the use of longer wavelengths (1064 nm), acousto-optic modulation, and heterodyne detection to enhance CBF sensitivity and reduce extracerebral contamination. In each domain of improvement, we develop theory to describe the detected optical signals, advance hardware to enable measurements, and characterize the performance of the developed systems in phantom and human subject experiments. Each improvement (wavelength extension, ultrasound sonification, heterodyne detection, and time-resolved detection) has its own advantages in terms of depth sensitivity, signal-to-noise ratio, and hardware/software complexity. As such, the techniques have the potential to be mixed and matched to increase sensitivity to CBF and dramatically improve the SNR of the measurement, enabling non-invasive, bedside measurements.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147506</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing, Improving, and Verifying Machine Learning Model Robustness</title>
<link>https://hdl.handle.net/1721.1/147505</link>
<description>Probing, Improving, and Verifying Machine Learning Model Robustness
Xiao, Kai Yuanqing
Machine learning models turn out to be brittle when faced with distribution shifts, making them hard to rely on in real-world deployment. This motivates developing methods that enable us to detect and alleviate such model brittleness, as well as to verify that our models indeed meet desired robustness guarantees.&#13;
&#13;
This thesis presents a set of tools that help us detect model vulnerabilities and biases. This set comprises, on the one hand, a suite of new datasets that allow us to obtain a finer-grained understanding of model reliance on backgrounds. On the other hand, it includes 3DB, a framework that leverages photorealistic simulation to probe model vulnerabilities to more varied distribution shifts.&#13;
&#13;
In addition to identifying these vulnerabilities, we discuss interventions that can make models more robust to distribution shifts, including using more training data. As we demonstrate, indiscriminately using more auxiliary data is not always beneficial, and we thus develop dataset projection, a method to choose the "right" auxiliary data to use.&#13;
&#13;
Finally, we show how to efficiently and formally verify that our models are robust to one of the most well-studied types of distribution shift: pixel-wise adversarial perturbations.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147505</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Moving Experiences: Traveling Museum Exhibitions and the Infrastructures of Cultural Globalization</title>
<link>https://hdl.handle.net/1721.1/147503</link>
<description>Moving Experiences: Traveling Museum Exhibitions and the Infrastructures of Cultural Globalization
de Silva, Nushelle
In the wake of two world wars, traveling museum exhibitions were touted as a model for advancing peace, broadening the views of diverse publics by compelling museums across borders to share their treasures. In this dissertation, I examine how a global infrastructure for museum exchange was established by two organizations dedicated to cultural peacebuilding: the United Nations Educational, Scientific, and Cultural Organization (UNESCO), and the International Council of Museums (ICOM). I argue that while these ambitions spatially reorganized museums in the latter half of the twentieth century to prioritize object exchange over accumulation, the uneven globalization facilitated by exhibitions still augments rather than alleviates the coloniality of museums. &#13;
&#13;
UNESCO and ICOM led the charge to instate international administrative standards for circulating museum exhibitions in increased quantities, encompassing packing solutions, border inspections, climate requirements, and risk management. I examine their efforts to establish uniform practices across museums through increasingly standardized paperwork: manuals for professional practice, exhibition loan and insurance agreements, object condition and facility reports, and customs labels. These documents were critical interfaces for negotiating the definition of art and determining parameters for a homogenized global museum interior optimized for exchange. These standards shored up the power of dominant institutions, sanctioning their situated practices and the conservation needs of their specific object collections as universally applicable.&#13;
&#13;
Instead of augmenting scholarship on how museums developed their collections, I attend to how museums developed relationships of exchange. Rather than trace the itineraries of the individual objects that traveled, then, I examine the forms of paperwork that authorized their mobility and the spatial normalizations these documents instigated: the reconfiguration of registration and storage facilities to enable object movement; the relocation of border inspections to museum premises; the establishment of climate and building security standards. While availing itself of the infrastructures of globalized trade, this emerging governmental apparatus of exhibition circulation was discursively constructed as an instrument of conservation. Traveling exhibitions persuasively guide our appraisals of art. As this dissertation demonstrates, administrative practices play a determinative role in object mobility and must be untangled to address inequitable cultural representation in the museum.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147503</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-dimensional Integral Boundary Layer Method for Viscous Aerodynamic Analysis</title>
<link>https://hdl.handle.net/1721.1/147502</link>
<description>Three-dimensional Integral Boundary Layer Method for Viscous Aerodynamic Analysis
Zhang, Shun
Viscous aerodynamic analysis is crucial for aircraft design in terms of understanding key performance metrics such as drag. However, despite advances in computational fluid dynamics (CFD) in the past few decades, a physics-based three-dimensional (3D) viscous analysis suitable for aircraft preliminary design remains a challenge. To that end, the integral boundary layer (IBL) method is a promising candidate primarily for its superior computational efficiency and aerodynamic design insights, as evidenced by its success in existing two-dimensional (2D) applications. This thesis aims to develop a reliable off-the-shelf 3D IBL method through contributions in both the physical and numerical modeling aspects.&#13;
&#13;
First, this thesis presents novel closure modeling strategies for 3D IBL and develops a new set of closure models, which were lacking in previous 3D IBL methods. Original 3D boundary layer data sets have been generated and form the basis for data-driven closure modeling in this work. New neural network regression models with embedded constraints are proposed for constructing 3D IBL closure and for identifying important parameters. Moreover, a model inversion formulation is devised for automated data-driven calibration of the turbulence shear stress transport model in the IBL context. Numerical studies demonstrate the effective boundary layer modeling by the proposed closure models through comparison against higher-fidelity reference solutions and previous 3D IBL formulations.&#13;
&#13;
Second, the proper stabilization scheme is explored for the numerical discretization of the 3D IBL equations. On the one hand, difficulties have been identified for a rigorous stabilization formulation guided by conventional characteristic analysis. On the other hand, heuristically defined numerical stabilization schemes are shown to be ill-posed by the numerical examples in this work. Instead, an intermediate fix to the numerical discretization is tailored for 3D IBL based on its underlying conservation principles. This fix is observed to produce well-behaved solutions, as shown in the numerical results throughout this thesis. &#13;
&#13;
Finally, this work develops the flow transition prediction capability that is missing from existing 3D IBL methods. Two numerical treatments of free transition are proposed and compared in detail: transition fitting versus transition capturing. Owing to its implementation convenience, solution robustness, and interface resolution, the transition capturing approach is demonstrated to be effective on both 2D and 3D test cases, and is hence recommended for 3D IBL transition modeling.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147502</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensory Encounters in the Age of Computation</title>
<link>https://hdl.handle.net/1721.1/147500</link>
<description>Sensory Encounters in the Age of Computation
Lee, Crystal
This dissertation investigates the process of curating, cleaning, visualizing, circulating, and manipulating data to understand the persuasive force of visual information in multimodal media. From the history of haptic interfaces to the data practices of social media communities across the US and China, this thesis uses historical and ethnographic methods to understand how users of quantitative information encode norms about gender, ability, and race in data visualizations and search interfaces. This critical scholarship complements projects with engineering colleagues at CSAIL to build more inclusive data representation systems. Drawing on work in feminist technoscience, disability studies, and the history and anthropology of computing, this dissertation weaves together different forms of HCI research to ask what work can or should be done by data representations across computational media.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147500</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Misinformation, Persuasion, and News Media on Social Networks</title>
<link>https://hdl.handle.net/1721.1/147492</link>
<description>Misinformation, Persuasion, and News Media on Social Networks
Hsu, Chin-Chia
Social media platforms have become a popular source of news and information for a large segment of society. Users can receive information, share digital content, or follow online publishers for the latest news. However, the recent proliferation of misinformation has affected people’s perception of the veracity of online information and, in turn, their social behavior. In this environment of real and false information, this dissertation studies two aspects of user behavior through the lens of persuasion: (1) sharing online news, and (2) consumer choice of news media.&#13;
&#13;
The first part focuses on the dissemination of online news on social media platforms such as Twitter. I propose two frameworks: the first focuses on non-strategic agents, and the second adopts a game-theoretic setting.&#13;
&#13;
In the first model, agents choose to share news based on whether it can move their followers’ beliefs closer to their own in the aggregate, and the current size of news spread, without considering news spreading in the future. I describe the dynamics of news spread arising from individual decisions and uncover the mechanisms that lead to a sharing cascade. I elucidate an association between the news precision levels that maximize the probability of a cascade and the wisdom of the crowd.&#13;
&#13;
The second model concerns a binary vote and rational agents who share news to make their followers cast the same vote as they do while strategically speculating on others’ sharing decisions and news spread at the steady state. I characterize the underlying news spread as an endogenous Susceptible-Infected (SI) epidemic process and derive agents’ sharing decisions and the size of the sharing cascade at the equilibrium of the game. I show that lower credibility news can result in a larger cascade than fully credible news provided that the network connectivity surpasses a connectivity limit. I further delineate the relationship between cascade size, network connectivity, and news credibility in terms of polarization and diversity in prior beliefs.&#13;
&#13;
The second part of this dissertation investigates how subscribers with diverse prior beliefs choose between two ideologically opposing news media (intermediaries) that are motivated to influence public opinion through their roles of news verification and selective disclosure. The news media may access some news about the state of the world, which may or may not be informative, and they can choose whether to verify it. The news media then decide whether to disclose the news, aiming to persuade their subscribers to take the optimal action about the state based on their own beliefs. I show that centrists choose to subscribe to the intermediary with the opposing view, thereby exhibiting anti-homophily. By contrast, extremists exhibit homophily and prefer the intermediary whose ideology aligns with theirs.&#13;
&#13;
This dissertation contributes to the growing literature on people’s behavior of news consumption by offering game-theoretic frameworks built on a persuasion motive.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147492</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Epitope-Paratope Interactions of Emerging to Endemic Viruses</title>
<link>https://hdl.handle.net/1721.1/147487</link>
<description>On Epitope-Paratope Interactions of Emerging to Endemic Viruses
Miller, Nathaniel L.
For clinicians, scientists, and public health officials, managing the spectrum of viruses across the world is a challenge. The addressable viral spectrum ranges from emerging viruses such as Nipah virus (NiV), which appear sporadically as local zoonotic outbreaks with high mortality, to continually evolving endemic viruses such as influenza A (flu). Meanwhile, SARS-CoV-2 is currently transitioning from an emerging to an endemic virus with unknown evolutionary potential. Though highly differentiated, these three viruses present a common challenge to society in the need to continually prepare for and rapidly respond to new outbreaks, novel variants, and persistent antigenic drift. Through dedication of resources and tools toward this end, society can prevent significant annual morbidity.&#13;
&#13;
Viruses present numerous surfaces known as epitopes that are recognized by the human host immune system upon infection. Infected hosts develop antibodies that bind virus epitopes via the antibody’s paratope domain in an epitope-paratope interaction (EPI). EPIs are key aspects of the dynamic interplay between the virus and the host immune response to neutralize the virus. Studying EPIs enables us to better understand viral pathogenesis, map viral evolution due to antigenic pressure, and rationally develop vaccines and therapeutics.&#13;
&#13;
In this thesis, we present a series of EPI analyses of NiV, Flu, and SARS-CoV-2 that enrich our understanding of these viruses and serve as the basis for therapeutic antibody design, optimization, and repurposing. In particular, our work focuses on antibody epitope complexity, such as epitopes that include N-glycans or epitopes for which allostery and epistasis resulting from higher-order protein structure contribute to antibody escape. We additionally examine the roles of epitope overlap and orthogonality within polyclonal responses to a given antigen. Through these investigations, we develop EPI analytical methods and tools with applications across the viral spectrum. We apply these methods to identify repurposing targets for an anti-HIV antibody, chart antigenic space for the SARS-CoV-2 receptor-binding and N-terminal domains, and rapidly model escape of the Omicron BA.1 variant from therapeutic antibodies in the days immediately following its emergence.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147487</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Solution of the Energy Minimization Problem for Codes of Codimension 1</title>
<link>https://hdl.handle.net/1721.1/147485</link>
<description>A Solution of the Energy Minimization Problem for Codes of Codimension 1
Zhang, Yichi
In this thesis, we study the space of quasicodes using Delsarte’s linear programming bound to solve the energy minimization problem on codes. Our main contribution is that we give the optimal code of codimension 1 for any potential function. We also investigate the polytope of quasicodes, for which we give a symmetry and a list of vertices in some special cases.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147485</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of language in broader human cognition: evidence from neuroscience</title>
<link>https://hdl.handle.net/1721.1/147484</link>
<description>The role of language in broader human cognition: evidence from neuroscience
Ivanova, Anna Alexandrovna
Many philosophers, psychologists, biologists, computer scientists, and linguists have argued that language processing serves as a foundation for human cognition. However, evidence from neuroscience has shown that language might rely on specialized cognitive mechanisms that are distinct from many aspects of human thought. In this thesis, I use cognitive neuroscience to test the limits of the brain’s functional specialization for language processing. In Chapter 1, I describe how evidence from neuroscience can illuminate the relationship between language and other cognitive functions. In Chapter 2, I investigate activity in the brain’s language network in response to computer code, an input that shares many structural similarities with natural language. I find that, despite these similarities, the language network responds weakly or not at all during computer code comprehension; instead, this process elicits responses in brain areas of a distinct, domain-general multiple demand network. In Chapter 3 and Chapter 4, I study the language network’s responses to pictures of objects and events during semantic tasks, which, like language comprehension, require access to conceptual information. I show that the language network does not respond during an object semantics task and that its responses to event semantics are not causally important for performing the task. In Chapter 5, I describe a set of brain regions that respond to semantic demand regardless of stimulus type (sentences vs. pictures) and show that they are distinct from both the language network and the domain-general multiple demand network. Finally, in Chapter 6, I discuss the implications of my work for a neuroscience-informed account of the mechanisms underlying human cognition and language use. 
My work establishes that language processing mechanisms are largely distinct from mechanisms that support the processing of non-linguistic structure and meaning, even for closely matched inputs, and helps further delineate the functional architecture of the human mind.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147484</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Spiritual Turn: Modern Sufism and the Study of Islamic Art and Architecture</title>
<link>https://hdl.handle.net/1721.1/147483</link>
<description>The Spiritual Turn: Modern Sufism and the Study of Islamic Art and Architecture
Guermazi, Iheb
This dissertation constructs a genealogy of modern Sufi discourse on art and architecture. It traces the history of a chain of writers and artists, connected through a spiritual and intellectual line of transmission, who developed a particular reading of their world: its values, its cultures, and its arts. Divided into five chapters, it follows the classic structure of a Sufi Silsila: a chain of master-disciple relationships. Each chapter is thus built around one Muslim master and one European disciple and analyzes the contribution each of them made to the Sufi aesthetic discourse. The Spiritual Turn argues that the work of this intellectual lineage finds its roots in a 19th-century Sufi reformist movement led by Emir ‘Abd al-Qadir (1808-1883), who proposed to unveil the common esoteric origins of Christianity and Islam. The Algerian anti-colonial leader and Sufi master hoped that such an ecumenical project could spare his co-religionists further military confrontations with Western powers. Organized chronologically, the dissertation then follows the slow, rhizomatic evolution of a 19th-century political stance into a rather well-structured art theory a century later. The narrative focuses on the different intellectual collaborations between Arab Sufi mystics and a group of European converts to Islam. Together, they believed in a possible mystical alternative modernity and argued for an esoteric understanding of aesthetics that would replace materialist and positivist modern art and architectural theories. The 1976 London World of Islam Festival was the culmination of this intellectual lineage. This event, considered the largest exhibition of Islamic art and cultures ever organized in Europe, was mainly curated by Western converts to Islam directly attached to ‘Abd al-Qadir’s spiritual school.
The meaning these curators ascribed to the exhibited artworks, the textual interpretations, and historical framing they provided were rooted in a purely esoteric understanding of art and architecture. This history is an opportunity to examine alternative aesthetic hermeneutics emanating from non-western, non-positivist or anti-modern perspectives. The dissertation thus uses the Sufi aesthetic discourse as one modern instance where the classical and inherently Western modes of interpretation were challenged in favor of a mystical reading of art and architecture.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147483</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision-Making Under Uncertainty: From Theory to Practice</title>
<link>https://hdl.handle.net/1721.1/147482</link>
<description>Decision-Making Under Uncertainty: From Theory to Practice
Baek, Jackie
The surge of data and technological advances over the past decade has immensely increased the use of algorithms to automate decisions for a plethora of problems. This thesis focuses on developing data-driven methodologies for sequential decision-making under uncertainty. Specifically, we develop solutions to address practical issues that can arise when operationalizing mathematical models, ranging from general methodologies to applications in healthcare and revenue management.&#13;
&#13;
First, we study an issue of fairness that arises in online learning. In online learning, it is well-known that good strategies must explore; but exploration is associated with a cost, stemming from playing actions that are eventually revealed to be sub-optimal. We study how this cost of exploration is distributed amongst groups in a bandit setting. We leverage the theory of axiomatic bargaining, and the Nash bargaining solution in particular, to formalize what might constitute a fair division of the cost of exploration across groups. On the one hand, we show that any regret-optimal policy strikingly results in the least fair outcome: such policies will perversely leverage the most 'disadvantaged' groups when they can. More constructively, we derive policies that are optimally fair and simultaneously enjoy a small 'price of fairness'. We illustrate the relative merits of our algorithmic framework with a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups.&#13;
&#13;
Next, we study the classical problem of minimizing regret for multi-armed bandits. For this problem, several existing policies are provably asymptotically optimal, but it is well-known that their empirical performance can vary greatly. We develop a new policy, dubbed TS-UCB, which combines ideas from two prominent policies for multi-armed bandits: Thompson sampling and the upper confidence bound. We show that TS-UCB achieves materially lower regret on a comprehensive suite of synthetic and real-world datasets, and we establish optimal regret guarantees for TS-UCB for both the K-armed and linear bandit models.&#13;
&#13;
Lastly, we study a decision-making problem in a revenue management setting. We study the network revenue management problem, an online allocation problem in which products are sold to a stream of arriving customers, where each product consumes a subset of capacity-constrained resources. We show that certain network structures can be exploited to improve both theoretical and empirical performance over existing, 'one-size-fits-all' approaches. Specifically, we study instances with a matroid sub-structure, which can be motivated by several classical supply chain constraints involving postponement and process flexibility. We prove that our policy improves over existing theoretical guarantees under this structure, and these results are empirically supported by numerical simulations.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147482</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Behavioral Strategies and Neural Mechanisms of Dynamic Foraging</title>
<link>https://hdl.handle.net/1721.1/147479</link>
<description>Behavioral Strategies and Neural Mechanisms of Dynamic Foraging
Le, Nhat Minh
In dynamic environments, humans and animals constantly use reward and error signals to guide action selection. As diverse strategies may be used in the task, it is unclear how neural processing of reward, value, and action information might change across different behavioral states. This thesis develops techniques for behavioral and neural dissection of cortical contributions to dynamic foraging behavior. Using a combination of state-space behavioral modeling, large-scale widefield and single-cell imaging technologies, and encoding models of neural responses, we show and quantify the rich behavioral repertoire of rodents, state-dependent cortical processing of trial outcomes, and diverse, distributed encoding of value and action-switching information at the single-cell level.&#13;
&#13;
Analysis of rodent behavior revealed mixtures of behavioral modes during dynamic foraging, characterized by different switch offsets, sharpness, and exploration rates. We developed a new computational approach, the block Hidden Markov Model, to characterize and identify these discrete states of behavior in blocks of trials. These states can be accurately decoded as sub-regimes of model-free or inference-based behavior. Widefield imaging and unsupervised analysis of cortical activity during the behavior revealed distinct cortical activation modes, corresponding to the frontal, motor, visual, and retrosplenial regions, that have different dynamic representations of rewards and errors. Dissecting single-neuron responses in these candidate regions with three-photon imaging across cortical depths revealed specialized processing of reward-related variables. We found widespread representation of outcome, value, and switching in all four regions, with an enriched representation of outcome in the retrosplenial cortex (RSC), and of action values in the anterior cingulate cortex (ACC). Using the block Hidden Markov Model, we found that outcome representation was enhanced in high-efficiency behavioral states but only weakly represented in low-efficiency states. Optogenetic perturbations of outcome information in the frontal and RSC neural clusters decreased the frequency of these high-efficiency states, demonstrating their causal role in the expression of efficient switching behavior.&#13;
&#13;
Together, these computational methods and experiments provided new tools for quantification of dynamic foraging behavior, and revealed specialized and state-dependent distribution of outcome, value and switch representations in key cortical regions. These insights lead to important hypotheses about cortical-subcortical interactions during reward-guided behavior.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147479</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diverse Behavior Prediction through Deep Hybrid Models</title>
<link>https://hdl.handle.net/1721.1/147476</link>
<description>Diverse Behavior Prediction through Deep Hybrid Models
Huang, Xin
Predicting future motions of agents is a crucial task for autonomous vehicles. This task is challenging due to multi-modal traffic agent behaviors such as maneuvers. In addition, predictors are often required to generate a limited number of prediction samples to cover diverse behaviors, due to the time complexity of processing these samples for downstream tasks.&#13;
&#13;
Existing model-based prediction methods leverage hybrid reasoning techniques to predict qualitatively representative agent motions from a large prediction space, yet they often assume simple agent dynamics and fail to account for scene context. Recently, learning-based approaches have demonstrated great success in learning complicated agent dynamics and scene context through deep neural networks to produce accurate trajectories in complex traffic scenes. On the other hand, they often rely on a black-box deep neural network and fail to explore the structure of the problem.&#13;
&#13;
In this thesis, we propose deep hybrid models by unifying the power of model-based hybrid reasoning algorithms and learning-based models to predict a small set of accurate trajectory samples that cover qualitatively representative agent maneuvers. Our approach offers several advantages compared to existing model-based and learning-based predictors. First, it handles evolving intent over time by learning an accurate hybrid model representing evolving discrete maneuvers and continuous trajectories, and sampling a set of hybrid trajectory sequences through importance sampling based on learned proposal distributions. Second, it handles ambiguous maneuvers by learning a latent space of qualitative maneuvers that mimics human concepts of qualitatively representative maneuvers. Third, it generates samples that support multiple downstream tasks, including autonomous planning and driver warning, by adding a task-informed loss that leverages the specification of the task when the additional task information is given. &#13;
&#13;
We train and validate our models on large-scale public driving benchmarks, including the Argoverse forecasting dataset and the Waymo open motion dataset. We present extensive qualitative and quantitative experimental results to demonstrate the advantage of our predictor over state-of-the-art model-based and learning-based baselines in terms of accuracy, diversity, and task performance.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147476</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gromov Witten Invariants of Blow Ups of P² using Logarithmic Geometry</title>
<link>https://hdl.handle.net/1721.1/147470</link>
<description>Gromov Witten Invariants of Blow Ups of P² using Logarithmic Geometry
Lo, Chun Hong
We show that Parker’s recursion for computing Gromov-Witten invariants of blow ups of P² can be derived using logarithmic Gromov-Witten theory and punctured maps. We extend the recursion to compute Gromov-Witten invariants with Hodge class insertions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147470</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effective Modeling in Medical Imaging with Constrained Data</title>
<link>https://hdl.handle.net/1721.1/147469</link>
<description>Effective Modeling in Medical Imaging with Constrained Data
Hsu, Tzu-Ming Harry
Data for modern medical imaging modeling are constrained by high physical density, complex structure, insufficient annotation, heterogeneity across sites, long-tailed distributions of findings/conditions/diseases, and sparsely presented information. In this dissertation, to utilize the constrained data effectively, we employ various computationally driven and clinically driven techniques, including cross-modal learning, deep reinforcement learning, transfer learning, federated learning, surrogate endpoint modeling, and clinical knowledge infusion. These techniques are demonstrated in a variety of applications, such as risk stratification for pancreatic cancer patients, COVID-19 severity risk assessment, cross-modal X-ray image and report retrieval, X-ray finding report generation from an image, orthopantomogram finding summarization, and real-world federated learning benchmarking.&#13;
&#13;
In disease risk stratification applications, we develop an end-to-end body composition assessment system that quantifies fat and muscle amounts from 3-dimensional imaging studies with a two-step approach. The resulting body composition ratios for various tissues are then used to stratify risks in pancreatic cancer or COVID-19 patients. In the pancreatic cancer cohort, muscle loss is shown to be a good indicator of mortality risk; and in COVID-19 patients, visceral fat is more correlated with severity than body mass index is, despite the latter being the current go-to indicator.&#13;
&#13;
Following clinical applications related to body composition analysis, we take advantage of large-scale chest X-ray/report datasets to investigate how the association of the textual modality and the imaging modality can assist modeling. We explore the task of retrieval across radiographs and medical reports by learning a joint embedding space, and find that the retrieval performance can benefit from even a small amount of supervision. On the task of medical report generation, we attempt to describe clinical findings in a chest X-ray as radiologists do. While past works only consider language fluency but not clinical efficacy, we include both in our modeling process. The resulting models turn out to be, unsurprisingly, better at describing diseases and findings, which we identify to be a key trait for an AI system that aims to augment clinicians in their workflows.&#13;
&#13;
We then look at finding summarization from orthopantomograms, or panoramic dental X-rays. The goal of the summarization is to localize teeth in the permanent dentition and tag them with labels of the six potential findings. To combine the modeling process with existing dental knowledge, we propose a new form of annotation that is quick to provide -- a set of 32 binary labels indicating the existence of each tooth. This annotation is used in a novel objective function for the system to optimize and is shown to improve finding summarization accuracy despite its simplicity compared to the pixel-wise supervision typically used in this task.&#13;
&#13;
Finally, we turn to inspect federated learning, a learning paradigm that lets medical institutions collaboratively learn an AI model without exposing private patient data. As a precursor to medical imaging, we gather two large, real-world-scale natural image classification datasets, aiming to describe the impact of data heterogeneity on the performance of existing federated learning algorithms. Our results show that extreme data heterogeneity can greatly impair algorithms' ability to classify visual patterns in federated learning setups, and that the two novel solutions we bring to the table can somewhat alleviate the performance drop. We believe these conclusions extend to medical imaging problems.&#13;
&#13;
To conclude the dissertation, we provide remarks on other important aspects that researchers in medical AI must consider before landing their applications in clinics, as well as some exciting yet under-explored research tracks in medical imaging. While the objective of this dissertation is to provide extensive coverage of methods that more effectively model medical imaging tasks when the available data are constrained, our explorations are not exhaustive. We hope the several research topics showcased in this dissertation inspire further research and fuel explorations down the line, ultimately benefiting humanity on a civilization scale.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147469</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information and Incentives in Online Platforms</title>
<link>https://hdl.handle.net/1721.1/147466</link>
<description>Information and Incentives in Online Platforms
Meigs, Emily
This thesis studies the impact of information and the design of services for online platforms in three settings: traffic routing, network games, and competition between streaming platforms. In the first part of this thesis, Chapters 2 and 3, we study game play in routing and network games, where it is reasonable to assume agents do not initially know their payoff functions. Specifically, in Chapter 2 we examine the outcome of learning dynamics in traffic routing where the latency functions are unknown. We show that the combination of selfish routing and learning dynamics converges to the full-information Wardrop equilibrium; this supports the study of the Wardrop equilibrium even in settings where information must be learned over time. In Chapter 3 we use analogous learning dynamics in a different setting: network games where the agents’ personal utility functions are not known, as may arise in games of local public goods provision or firm competition. We show that the combination of best response and learning dynamics converges to the Nash equilibrium.&#13;
&#13;
In the second part of the thesis, Chapter 4, we study the problem of sharing information in traffic routing. We investigate whether a routing platform, for example Google Maps or Waze, should share full information, no information, or partial information. We characterize the optimal information strategy in a two-stage setting, where the platform is also learning the road conditions from its users. We then extend the intuition to an infinite-stage setting and find an information scheme that achieves a lower cost than full information.&#13;
&#13;
In the final chapter of the thesis, we study bundling and pricing strategies for streaming platforms, for example Netflix or Hulu. We investigate why so many streaming platforms are succeeding in the market. We first study the setting where a market leader creates a new product and has a monopoly on the market. We show that in this case it is sometimes optimal for the platform to bundle its goods. Once another firm enters the market, however, unbundling becomes the unique optimal strategy.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147466</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Healthcare Delivery in Resource-Limited Settings</title>
<link>https://hdl.handle.net/1721.1/147464</link>
<description>Optimizing Healthcare Delivery in Resource-Limited Settings
Gibson, Emma
In resource-limited settings, critical diagnostic testing services are frequently provided through hierarchical networks composed of healthcare facilities, which collect diagnostic samples (e.g. blood, nasal swabs) from patients, and centralized medical laboratories, which analyze these samples. The first part of this thesis focuses on diagnostic sample transportation systems, which are used to move samples and test results between locations within these networks.&#13;
&#13;
In Chapter 2, we describe the design and implementation of a low-cost information-sharing system that allows healthcare workers to report daily sample volumes at each facility within the network using a simple text-based interface accessible on any standard mobile phone. The feasibility and effectiveness of this system were assessed in a field trial at 51 healthcare facilities in Malawi, which achieved high rates of participation and accuracy.&#13;
&#13;
In Chapter 3, we propose an optimized sample transportation system that uses data reported by healthcare facilities to generate efficient routes for sample couriers on a daily basis. This system was implemented in three districts in Malawi, where it reduced average transportation delays by 25% and decreased the proportion of unnecessary trips by 55%.&#13;
&#13;
In Chapter 4, we evaluate operational strategies for the deployment of Point-of-Care (POC) testing at healthcare facilities in Malawi. We develop a mixed-integer model to optimize the allocation of POC instruments to strategic locations within the diagnostic network in order to maximize the benefits of viral load monitoring services for people living with HIV. Our analysis indicates that the most cost-effective POC deployment policies combine targeted POC testing of high-risk patients with capacity-sharing strategies such as near-POC testing.&#13;
&#13;
In Chapter 5, we study survival analysis models, which are frequently used to analyze health outcomes and identify risk factors associated with morbidity and mortality. We present a new Globally Optimized Survival Trees algorithm that leverages mixed-integer optimization and local search techniques to generate interpretable survival tree models. We demonstrate that this algorithm improves on the accuracy of existing survival tree methods, particularly on large datasets.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147464</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of Human Behavioral Models for Engineering Applications</title>
<link>https://hdl.handle.net/1721.1/147463</link>
<description>Study of Human Behavioral Models for Engineering Applications
Yu, Suhyoun
Input from and interaction with humans are quickly becoming an inseparable part of designing critical systems: electrical power grids are evolving into smart grids that allow for two-way communication between utilities and their users, and the demand for seamless human-robot collaboration in warehouses and urban rescue missions is ever-increasing. As such, proper understanding and modeling of human decision-making behavior should be an integral part of designing critical systems. Unfortunately, in the traditional engineering context, humans have been assumed to be rational agents that behave the way they ought to, rather than the way they actually do.&#13;
&#13;
This thesis lays the groundwork for incorporating a more general model of human decision making in engineered systems to improve the quality of interaction between a system and its users, and as a result, its overall performance and reliability. We investigate three computational principles known to influence human behavior: noisy utility maximization, discounting, and the probability weighting principle from Prospect Theory. We evaluate their individual and combined effects in the context of a naturalistic spatial planning task that requires sequential decision-making and presents challenges that arise across many contexts. All of the model selection analyses we apply show that a significant majority of human subjects’ trajectories are best explained by models with discounting and, in particular, probability weighting.&#13;
&#13;
We conclude with a simulation of a simplified 2D urban rescue mission, which demonstrates that these more realistic assumptions about human behavior reduce the cost to complete the collaborative mission by ∼13%. The results reinforce the hypothesis that a more nuanced model of human behavior is critical for systems that interact heavily with humans: models that better complement real human behavior enhance the overall efficiency and performance of a given system.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147463</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining the Precision and Sequence Determinants of Protein Synthesis Rates</title>
<link>https://hdl.handle.net/1721.1/147460</link>
<description>Defining the Precision and Sequence Determinants of Protein Synthesis Rates
Taggart, James Christopher
Protein synthesis drives cellular proliferation and produces the primary effectors of biological function. Understanding how protein synthesis rates are controlled is therefore a central question of quantitative biology. In this work, I explore both the degree of precision in protein production and the mechanisms by which protein synthesis rates are encoded in genome sequences. First, I explore whether proportional synthesis of protein complex subunits, a phenomenon widely observed in bacterial protein synthesis, is a strategy adopted by eukaryotes. Using ribosome profiling, I observe proportional synthesis in the budding yeast Saccharomyces cerevisiae and, through chromosomal duplications, show that the precision in these protein synthesis rates is hard-coded without widespread negative feedback regulation. Proportional synthesis is also seen in large abundant complexes conserved in higher eukaryotes. I additionally present work to unify this understanding of proportional synthesis with evidence of post-translational buffering of protein complex subunit abundance derived from mass spectrometry. In the second half of this thesis, I turn my focus towards understanding how RNA decay and processing rates, two poorly understood processes central to the quantitative tuning of protein synthesis, are encoded in bacterial mRNAs. Using high-resolution RNA end-mapping techniques in combination with genetic perturbations to stabilize intermediates of RNA decay, I generate a global map of positions of endonuclease cleavage in Bacillus subtilis. Coupling this approach with knockouts of specific endonucleases, I greatly expand and refine the known set of targets of RNase Y, its specificity factor YlbF, and RNase III. Through this, I capture sequence and structural features recognized by RNase Y, the primary endonuclease initiating RNA decay in B. subtilis. I additionally provide evidence for a novel RNA 5′ end trimming activity of unknown origin.
Finally, I present a system for massively parallel interrogation of the sequence-function relationship of B. subtilis mRNA processing sites. Using this approach, I uncover key sequence determinants of cggR-gapA and glnRA operon processing by RNase Y. Taken together, this work provides insight into the degree of precision achieved in gene expression across life and reveals mechanistic details of how protein synthesis can be tuned through RNA decay in a model Gram-positive bacterium.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147460</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Tidal Dispersion in a Salt Marsh Estuary</title>
<link>https://hdl.handle.net/1721.1/147458</link>
<description>Mechanisms of Tidal Dispersion in a Salt Marsh Estuary
Garcia, Adrian Mikhail Palaci
Dispersion in estuaries sets the length of salinity intrusion and the horizontal mixing rate of waterborne constituents, including larvae, nutrients, sediments, and contaminants. While bulk calculations of dispersion are readily estimated using traditional field measurements, the mechanisms contributing to the total dispersion are difficult to identify because they require high temporal and spatial resolution to measure. Recent advances in field techniques and numerical modeling have enabled the isolated study of various mechanisms contributing to dispersion, many of which vary on tidal time-scales and over small spatial scales. The objective of this thesis is to use a combination of high-resolution field measurements and numerical modeling to determine the mechanisms of dispersion that maintain the salt balance in the North River (Marshfield, MA), a tidally-dominated salt marsh estuary with complex topography. First, a field campaign was conducted to determine the dispersion associated with the out-of-phase exchange between tributary creeks and the main channel. Then, numerical simulations of an idealized estuary were conducted and a novel quasi-Lagrangian approach was applied to analyze the sources of dispersive salt fluxes throughout the estuary. A second field campaign was conducted to evaluate the spatial variability of shear dispersion, particularly near regions of abrupt topographic variations.&#13;
&#13;
The key result from this thesis is obtained through the first application of the theoretical moving plane framework of Dronkers &amp; van de Kreeke (1986), which confirms quantitatively that all landward salt flux at a fixed location must result from spatial correlations in velocity and salinity within a tidal excursion of the fixed location. Based on this result, the sources of the landward salt flux can be directly identified based on the spatial and tidal variations of shear dispersion, which can vary strongly due to its dependence on the local tidal currents, along-channel salinity gradient, and bathymetry. This thesis identifies and quantifies various mechanisms of topographically-induced tidal dispersion and thus highlights the dominant role of topography in controlling the processes that contribute to mixing and transport in short, tidally-energetic estuaries.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147458</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pd-Catalyzed Cross-Coupling, High Throughput Experimentation, and Machine Learning</title>
<link>https://hdl.handle.net/1721.1/147457</link>
<description>Pd-Catalyzed Cross-Coupling, High Throughput Experimentation, and Machine Learning
Xu, Jessica
Chapter 1: Monophosphine Ligands Promote Pd-Catalyzed C–S Cross-Coupling Reactions at Room Temperature with Soluble Bases.&#13;
&#13;
The Pd-catalyzed cross-coupling of thiols with aromatic electrophiles is a reliable method for the synthesis of aryl thioethers, which are important compounds for pharmaceutical and agricultural applications. Since thiols and thiolates strongly bind late transition metals, previous research has focused on catalysts supported by chelating, bisphosphine ligands, which were considered less likely to be displaced during the course of the reaction. We show that by using monophosphine ligands instead, more effective catalysis can be achieved. Notably, compared to previous methods, this increased reactivity allows for the use of much lower reaction temperatures, soluble bases, and base-sensitive substrates. In contrast to conventional wisdom, our mechanistic data suggest that the extent of displacement of phosphine ligands by thiols is, firstly, not correlated with the ligand bulk or thiol nucleophilicity, and secondly, not predictive of the effectiveness of a given ligand in combination with palladium.&#13;
&#13;
Chapter 2: Practical Machine Learning for Exploring Multidimensional Reaction Spaces&#13;
&#13;
Machine learning (ML) methods have the potential to leverage high throughput experimentation (HTE) data to solve problems in organic chemistry. One such problem is the identification of potentially successful reactions for library generation. This work focuses on Pd-catalyzed C–N cross-coupling, where the combinations of reaction components can easily number in the millions or billions. To address this practical problem, the work herein identifies state-of-the-art HTE-friendly coupling conditions, generates a dataset appropriate to the targeted reaction space, and demonstrates the out-of-scope predictivity of trained models. These methods could enable chemists to quickly filter out unlikely reactions in silico prior to library screening, potentially saving many hours of tedious and expensive experimental work.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147457</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactive Peptides for Site-Selective Cysteine and Lysine Bioconjugation</title>
<link>https://hdl.handle.net/1721.1/147456</link>
<description>Reactive Peptides for Site-Selective Cysteine and Lysine Bioconjugation
Dieppa-Matos, Diomedes
Achieving catalyst-free, site-selective modification of proteins in water is a significant challenge in chemical biology. Issues of residue specificity, site-selectivity, reagent stability, and reaction rate are pervasive. To address these challenges, we developed a matched peptide pair termed the reactive peptide interface (RPI). This interface consists of two peptides: a nucleophilic, cysteine-containing peptide and an electrophilic, perfluoroarylated peptide. The unique sequences of these peptides enhance the rate of the nucleophilic aromatic substitution reaction between them. Favorable non-covalent interactions between the two peptides could facilitate the rapid reaction rate and the site-selectivity of the system through molecular recognition. This peptide interface enables rapid Cys arylation with a rate constant k = 152 ± 3 M⁻¹ s⁻¹ and enabled the site-selective modification of a miniprotein and an antibody in cell lysate.&#13;
&#13;
Developing selective lysine chemistries has progressed more slowly, in part due to lysine’s lower reactivity compared to cysteine. The diversity present in synthetic peptide libraries has been used to discover small peptide motifs that react preferentially with a specific electrophile. These relatively small motifs have shown promise in the site-selective labeling of biomolecules under mild aqueous conditions. We take this approach to discover peptide sequences that enhance the reactivity of an embedded lysine, and report various lysine-containing motifs that enable selective lysine acylation with moderately reactive oxygen esters. This conjugation approach forms stable amide bonds and allows for conjugation of cargo at lysine.&#13;
&#13;
Finally, peptide library diversity greatly increases the probability of finding a hit candidate with the desired properties in a screen. We turn to mRNA display, an in vitro selection technique that enables the screening of trillions of peptide sequences for desired functions. We describe our efforts to establish an mRNA display platform in our laboratory to discover novel binders and reactive sequences.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147456</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeking Safety: The Cognitive Foundations of Civilian Behavior during Violence</title>
<link>https://hdl.handle.net/1721.1/147455</link>
<description>Seeking Safety: The Cognitive Foundations of Civilian Behavior during Violence
Milliff, Aidan J.
When people are confronting violence, what determines the strategies they choose in pursuit of safety? Why do some people flee violence, while others try to fight back, adapt, or hide? Individual choices during conflict have real political consequences—from violence escalation to migration crises—but the way people make decisions about safety is poorly understood. I develop and test a political psychology theory, situational appraisal theory, that explains individual behavior by focusing on variation in appraisals of how controllable and predictable violent environments are. I apply situational appraisal theory to explain behavior in two distinct decision-making arenas. One section of the dissertation uses the theory to explain the choices of Indian Sikhs confronting violence during the 1980s–1990s Punjab crisis and 1984 anti-Sikh pogroms. With an original method for combining qualitative analysis and machine-learning measurement to study oral histories, I analyze testimony from Sikh survivors of violence and show that individual appraisals of control and predictability influence strategy selection. People who perceive “low” control over violent threats prefer strategies that avoid rather than approach those threats. People who perceive “low” predictability in threat evolution prefer costly, drastic strategies instead of moderate risk-monitoring options. I bolster these findings by analyzing thirty additional first-hand interviews with survivors of the same violence. Evidence from India shows that situational appraisals explain variation in behavior among demographically similar people, and also provide new leverage to explain change in survival strategies over time. In a subsequent section, I then turn to decision-making in international security to evaluate the generalizability of my theory beyond India. 
I use a 3,000-person foreign policy survey experiment that manipulates control and predictability appraisals before eliciting preferences for responses to a U.S.–China military crisis. Results from the survey experiment demonstrate the utility of situational appraisal theory beyond the context of pogrom violence. The dissertation concludes by exploring the policy implications of situational appraisal theory for humanitarian and conflict stabilization operations. I advocate for policies that focus more directly on increasing civilians’ sense of certainty and predictability in order to promote adoption of less drastic, less destabilizing strategies of survival during conflict.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147455</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Dual Perspective on Computational Complexity</title>
<link>https://hdl.handle.net/1721.1/147454</link>
<description>A Dual Perspective on Computational Complexity
Chapman, Brynmor
One central aim of theoretical computer science is to understand the limits of computation.  On the one hand, algorithms give upper bounds on the resources required for a computation.  They are intuitive, well-studied, and well-understood.  On the other hand, proofs of hardness, or lower bounds as they are often called, seem to be elusive and relatively poorly understood.  Perhaps the most famous conjectured lower bound is P ≠ NP, which informally states that there are problems whose solutions can be efficiently verified but not efficiently computed.  P ≠ NP is so widely believed that the world economy relies critically on its truth.  Indeed, complexity theorists generally believe the much stronger Exponential Time Hypothesis (ETH).  However, despite decades of work, these conjectures remain open, as do much weaker conjectures such as P ≠ PSPACE.&#13;
&#13;
The aforementioned discrepancy between what is known and what is believed is fairly representative of the state of the art in complexity theory, which invites the question: is it inherent?  Are lower bounds somehow inherently harder to prove than upper bounds?  There is a specific, provable sense in which the answer is yes.  There are indeed proof barriers, which show that broad classes of proofs have no hope of proving many of the commonly studied lower bounds.  These barriers have led to a fair amount of pessimism in complexity theory.  The aim of this thesis is to study lower bounds from an algorithmic perspective, in order to use well-studied algorithmic techniques to advance complexity theory.  It covers three facets of this perspective.  It presents a way to bypass the Natural Proofs Barrier of Razborov and Rudich, discusses dualities between upper and lower bounds, and disproves longstanding conjectured lower bounds.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147454</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Format Abstractions for the Compilation of Sparse Tensor Algebra</title>
<link>https://hdl.handle.net/1721.1/147453</link>
<description>Format Abstractions for the Compilation of Sparse Tensor Algebra
Chou, Stephen
Tensors are commonly used to represent data in many domains, including data analytics, machine learning, science, and engineering. Many highly-optimized libraries and compilers have been developed for efficiently computing on dense tensors. However, existing libraries and compilers are limited in their ability to support real-world applications that work with sparse tensors, which contain mostly zeros. In particular, there exist countless specialized formats for storing sparse tensors in memory, each suited to specific types of applications and data. Since different formats often use very different data structures to store nonzeros, though, computing with sparse tensors stored in different formats can require vastly dissimilar code that is difficult to implement by hand and non-trivial to generate automatically. Existing libraries and compilers must therefore limit the set of computations and formats that they directly support, sacrificing usability and performance as a result.&#13;
&#13;
In this dissertation, I describe how to build a compiler that supports efficiently computing on sparse tensors that may be stored in a wide variety of formats. I first show how many commonly-used sparse tensor formats—from array-based formats like CSR, COO, and DIA to formats that store nonzeros using pointer-based data structures like linked lists, BSTs, and C-trees—can all be expressed as compositions of per-dimension formats. I further show how such per-dimension formats can be precisely defined by implementing a common set of abstractions that capture how their underlying data structures store nonzeros in memory and that capture how these data structures can be efficiently accessed or constructed. I then demonstrate how, with such specifications of per-dimension formats at hand, a compiler can generate code to efficiently compute on tensors that are stored in any of the aforementioned—and countless other—formats. We have implemented our technique in the TACO sparse tensor algebra compiler, which is the first compiler to generate code that computes any basic tensor algebra expression with sparse tensors that may be stored in arbitrary formats. Our technique generates code that has performance competitive with, if not better than, equivalent code in hand-optimized libraries and frameworks.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147453</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk and Uncertainty in Healthcare Finance, Investment Management, and Asset Pricing</title>
<link>https://hdl.handle.net/1721.1/147452</link>
<description>Risk and Uncertainty in Healthcare Finance, Investment Management, and Asset Pricing
Ben Chaouch, Zied
Measuring and managing risk and uncertainty has been an ongoing challenge for academics and practitioners in finance and economics. At its core, the challenge requires understanding both the randomness in the underlying process and the way humans respond to it when making decisions. This thesis focuses on these challenges from three different perspectives.&#13;
&#13;
First, from the point of view of healthcare finance, this thesis addresses key questions that arise during the drug development and drug deployment processes. We begin by developing a systematic, quantitative, transparent, and reproducible framework that incorporates the patient’s risk and uncertainty preferences into the regulatory and decision-making process, both theoretically and empirically, to improve the design and regulation of clinical trials. Then, we consider both the development and deployment of vaccines for emerging infectious diseases. Using the COVID-19 pandemic as a case study, we develop a quantitative method to simulate and evaluate various vaccine allocation strategies when the supply of vaccines is subject to stochastic shocks. We conclude this part of the thesis by proposing and analyzing the viability of a portfolio approach aimed at improving the risk/return trade-off of investing in mRNA vaccine candidates for 11 emerging infectious diseases. Vaccine development is challenging not only due to the high scientific risk of developing a compound, but also due to the uncertainty in the occurrence of epidemics, leading to a lack of financial incentives for pharmaceutical firms to invest in vaccine research and development.&#13;
&#13;
The second part of the thesis dives into the field of empirical asset pricing. While multi-factor models are routinely used by finance academics and practitioners to understand and quantify the risk exposures of an asset, more than 150 factors have been proposed in the asset pricing literature, constituting a “factor zoo”. This thesis develops linear and nonlinear techniques to construct latent factors from a set of 150 well-known risk factors using different types of autoencoders. We then compare the performance of these latent models to classical multi-factor models on various test assets.&#13;
&#13;
The final part of the thesis explores an investor’s risk profile and behavioral biases in the investment management landscape and aims to understand how different market participants and different types of individuals compare along the dimensions of risk aversion and investment style. To this end, we survey a large pool of individual investors, financial advisors, and institutional investors over three years about their investment decisions under various historical and hypothetical scenarios.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147452</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches for Equitable Healthcare</title>
<link>https://hdl.handle.net/1721.1/147451</link>
<description>Machine Learning Approaches for Equitable Healthcare
Chen, Irene Y.
With the proliferation of clinical data and algorithms to improve clinical care, researchers are increasingly concerned about the equity and fairness of the resulting machine learning models. Because the observational data we collect can be noisy, incomplete, and biased, seemingly straightforward implementations of existing methods for clinical intervention or for better understanding human knowledge can lead to inaccurate and inequitable clinical algorithms. To begin to address these challenges, we need new tools to tackle the bias that can arise when modeling data. In this work, we present machine learning approaches for auditing, ameliorating, and preventing bias in the machine-learning-for-healthcare model development process. In particular, we focus on case studies that can provide actionable insights.&#13;
&#13;
In this thesis, we present several examples of machine learning approaches towards equitable healthcare and recommend changes based on the results of the corresponding experiments. Questions of equity and bias can be thought of in terms of the different steps of the model development pipeline. We argue that these model development steps can be made more equitable and unbiased when they 1) mitigate algorithmic bias that may occur from biased data collection or model development, and 2) address known existing systemic health disparities.&#13;
&#13;
We present four case studies of machine learning approaches towards equitable healthcare, and demonstrate these approaches on real clinical tasks. First, we decompose the sources of discrimination and provide empirical estimation techniques. We present results on applying these techniques to the tasks of intensive care unit mortality prediction and salary prediction. Second, we consider the predictive analytics of health insurance providers, namely predicting the likelihood of hospitalization and the likelihood of high-risk pregnancy. We apply the same discrimination decomposition techniques towards practical steps for mitigating algorithmic discrimination. Third, we study the task of clustering interval-censored time-series data. We develop a deep generative model, called SubLign, to learn the latent delayed entry alignment value for each time-series as well as the heterogeneous progression patterns across the population. We evaluate our model in the context of synthetically generated data. Next, we study the task of disease subtyping for the improved understanding of disease progression. We present results on clustering patients with conditions including heart failure and Parkinson’s disease. Finally, we study an example of using machine learning on an understudied problem that affects underserved patients: early detection of intimate partner violence. We develop a model that predicts the likelihood of eventual intimate partner violence self-reporting and radiology injury labeling from radiology reports. We conclude with a discussion of how machine learning can continue to address equity and bias in healthcare.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147451</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Moderate Modal Metaphysics</title>
<link>https://hdl.handle.net/1721.1/147449</link>
<description>Moderate Modal Metaphysics
Webber, Mallory
This dissertation is about metaphysical modality: What is possibly or necessarily true? And how do these truths get to be true? It is also about a vice of moderate answers to these questions.&#13;
&#13;
Chapter 1 ('Actualism, Modal Reductionism, and Contingentism') concerns how modal truths get to be true. It's metaphysically necessary that I'm human, even at worlds without me. But how do these worlds get to boast of this truth? Chapter 1 provides an answer: All truths of possible worlds owe their truth to actual individuals and how they are. It's true of any possible world that, necessarily, I'm human because, actually, I'm essentially human.&#13;
&#13;
Chapter 2 ('Higher-Order Contingentism and the Problem of Incompossible Indiscernibles') concerns what is possibly or necessarily true. Specifically, it considers whether it's possible that incompossible individuals bear relations to one another. Whether incompossibles can be related matters to higher-order contingentism, the view that it's contingent which properties exist. Take two doppelganger knives that would have been constituted by the same handle, but different blades. According to higher-order contingentism, if neither knife exists, properties that would single either knife out also do not exist. The knives are indiscernible. If it is really true that, possibly, the knives are indiscernible, the higher-order contingentist denies the being constraint, the constraint that, necessarily, only existing individuals exemplify properties. Chapter 2 resolves this tension.&#13;
&#13;
The emerging metaphysical picture is one in which actuality is special. This raises a powerful arbitrariness worry: If actual individuals are the sole determiner of modal truths, and if which individuals actually exist is arbitrary, then which modal truths there are will also be arbitrary.&#13;
&#13;
Chapter 3 ('Tolerating Arbitrariness') explores arbitrariness as a theoretical vice. Arbitrary theories inexplicably distinguish between things "on a par." But not all arbitrary theories are, themselves, on a par. Some arbitrary theories have relatively shallow inexplicability, and they ought to be tolerated. By focusing our attention on different levels of explanation, we open the door for arbitrariness in our metaphysics.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147449</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trajectory Planning for Flights in Multiagent and Dynamic Environments</title>
<link>https://hdl.handle.net/1721.1/147446</link>
<description>Trajectory Planning for Flights in Multiagent and Dynamic Environments
Tordesillas Torres, Jesus
While efficient and fast trajectory planners in static worlds have been extensively proposed for UAVs (Unmanned Aerial Vehicles), a 3D real-time planner for environments with static obstacles, dynamic obstacles, and other planning agents remains an open problem. The dynamic nature of these environments demands high replanning rates, making this problem especially hard on computationally limited platforms. Existing state-of-the-art planners reduce the computational complexity at the expense of more conservative results by relying on three main simplifications or assumptions: First, the collision avoidance constraints are imposed using the Bernstein and B-spline polynomial bases, which do not tightly enclose a given interval of a polynomial trajectory. Second, multiagent planners usually make centralized and/or synchronized computation assumptions, which lead to poor scalability with the number of agents or can degrade the overall performance. Finally, position and yaw are decoupled when optimizing perception-aware trajectories, which produces highly conservative results.&#13;
&#13;
This thesis addresses the aforementioned limitations with the following contributions: First, it presents the MINVO basis, a polynomial basis that generates the simplex with minimum volume enclosing a polynomial curve, thereby reducing the conservativeness of the obstacle avoidance constraints. Leveraging the MINVO basis, this thesis then proposes a tractable way to avoid dynamic obstacles by imposing linear separability constraints between the polyhedral enclosures of the intervals of the trajectories. This is then extended to multiagent scenarios, and a decentralized and asynchronous obstacle avoidance algorithm among many replanning agents is presented. Real-time perception-aware planning is achieved by implicitly imposing the underactuated dynamics of the UAV through the Hopf fibration while jointly optimizing the full pose. Finally, a reduction of two orders of magnitude in the computation time is obtained by learning a policy that imitates the optimization-based planner. These proposed contributions are extensively evaluated in simulation, showing up to 32 agents planning in real time, and in real-world experiments, showcasing flights up to 5.8 m/s in unknown dynamic environments with only onboard computation.
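As an illustrative sketch (not code from the thesis): the linear separability constraints described above amount to asking whether a separating hyperplane exists between two convex point sets, which can be posed as a small feasibility linear program. The function name and the toy points below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def separating_hyperplane_exists(P, Q):
    """Feasibility LP: find (w, b) with w.x + b <= -1 for every x in P
    and w.x + b >= +1 for every x in Q. Such a unit margin can be found
    (by rescaling w, b) exactly when the convex hulls of P and Q are disjoint."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    d = P.shape[1]
    # Stack both constraint families as A_ub @ [w; b] <= -1
    A_ub = np.vstack([np.hstack([P, np.ones((len(P), 1))]),
                      np.hstack([-Q, -np.ones((len(Q), 1))])])
    b_ub = -np.ones(len(P) + len(Q))
    # Zero objective: we only care whether the constraints are satisfiable
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.success
```

In the thesis the two point sets would be the vertices of the MINVO polyhedral enclosures of two trajectory intervals; the sketch above only shows why the constraint is linear in the hyperplane parameters.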
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147446</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scientific Machine Learning for Dynamical Systems: Theory and Applications to Fluid Flow and Ocean Ecosystem Modeling</title>
<link>https://hdl.handle.net/1721.1/147444</link>
<description>Scientific Machine Learning for Dynamical Systems: Theory and Applications to Fluid Flow and Ocean Ecosystem Modeling
Gupta, Abhinav
Complex dynamical models are used for prediction in many domains, and are useful to mitigate many of the grand challenges being faced by humanity, such as climate change, food security, and sustainability. However, because of computational costs, the complexity of real-world phenomena, and limited understanding of the underlying processes involved, models are invariably approximate. The missing dynamics can manifest in the form of unresolved scales, inexact processes, or omitted variables; as the neglected and unresolved terms become important, the utility of model predictions diminishes. To address these challenges, we develop and apply novel scientific machine learning methods to learn unknown dynamics and discover missing dynamics in models of dynamical systems.&#13;
&#13;
In our Bayesian approach, we develop an innovative stochastic partial differential equation (PDE)-based model learning theory and framework for high-dimensional coupled biogeochemical-physical models. The framework uses only sparse observations to learn rigorously both within and outside of the model space, as well as in the space of states and parameters. It employs Dynamically Orthogonal (DO) differential equations for adaptive reduced-order stochastic evolution, and the Gaussian Mixture Model-DO (GMM-DO) filter for simultaneous nonlinear inference in the augmented space of state variables, parameters, and model equations. A first novelty is the Bayesian learning among compatible and embedded candidate models, enabled by parameter estimation with special stochastic parameters. A second is the principled Bayesian discovery of new model functions, empowered by stochastic piecewise polynomial approximation theory. Our new methodology not only seamlessly and rigorously discriminates between existing models, but also extrapolates out of the space of models to discover new ones. In all cases, the results are generalizable and interpretable, and associated with probability distributions for all learned quantities. To showcase and quantify the learning performance, we complete both identical-twin and real-world data experiments in a multidisciplinary setting, for both filtering forward and smoothing backward in time. Motivated by active coastal ecosystems and fisheries, our identical-twin experiments consist of lower-trophic-level marine ecosystem and fish models in a two-dimensional idealized domain with flow past a seamount representing upwelling due to a sill or strait. Experiments have varying levels of complexity due to different learning objectives and flow and ecosystem dynamics. 
We find that even when the advection is chaotic or stochastic from uncertain nonhydrostatic variable-density Boussinesq flows, our framework successfully discriminates among existing ecosystem candidate models and discovers new ones in the absence of prior knowledge, along with simultaneous state and parameter estimation. Our framework demonstrates interdisciplinary learning and crucially provides probability distributions for each learned quantity, including the learned model functions. In the real-world data experiments, we configure a one-dimensional coupled physical-biological-carbonate model to simulate the state conditions encountered by a research cruise in the Gulf of Maine region in August 2012. Using the observed ocean acidification data, we learn and discover a salinity-based forcing term for the total alkalinity (TA) equation to account for changes in TA due to advection of water masses of different salinity caused by precipitation, riverine input, and other oceanographic processes. Simultaneously, we also estimate the multidisciplinary states and an uncertain parameter. Additionally, we develop new theory and techniques to improve uncertainty quantification using the DO methodology in multidisciplinary settings, so as to accurately handle stochastic boundary conditions, complex geometries, and the advection terms, and to augment the DO subspace as and when needed to capture the effects of the truncated modes accurately. Further, we discuss mutual-information-based observation planning to determine what, when, and where to measure to best achieve our learning objectives in resource-constrained environments.&#13;
&#13;
Next, motivated by the presence of inherent delays in real-world systems and the Mori-Zwanzig formulation, we develop a novel delay-differential-equations-based deep learning framework to learn time-delayed closure parameterizations for missing dynamics. We find that our neural closure models increase the long-term predictive capabilities of existing models, and require smaller networks when using non-Markovian over Markovian closures. They efficiently represent truncated modes in reduced-order models, capture effects of subgrid-scale processes, and augment the simplification of complex physical-biogeochemical models. To empower our neural closure models framework with generalizability and interpretability, we further develop neural partial delay differential equations theory that augments low-fidelity models in their original PDE forms with both Markovian and non-Markovian closure terms parameterized with neural networks (NNs). For the first time, the melding of low-fidelity models and NNs with time-delays in the continuous spatiotemporal space, followed by numerical discretization, automatically provides interpretability and allows for generalizability to computational grid resolution, boundary conditions, initial conditions, and problem-specific parameters. We derive the adjoint equations in the continuous form, thus allowing implementation of our new methods across differentiable and non-differentiable computational physics codes, different machine learning frameworks, and also non-uniformly-spaced spatiotemporal training data. We also show that there exists an optimal amount of past information to incorporate, and provide methodology to learn it from data during the training process. Computational advantages associated with our frameworks are analyzed and discussed. 
Applications of our new Bayesian learning and neural closure modeling are not limited to the shown fluid and ocean experiments, but can be extended to other fields such as control theory, robotics, pharmacokinetic-pharmacodynamics, chemistry, economics, and biological regulatory systems.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147444</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Modifications of Iron Oxide Nanoparticles for Magnetic Imaging and Diagnosis</title>
<link>https://hdl.handle.net/1721.1/147443</link>
<description>Surface Modifications of Iron Oxide Nanoparticles for Magnetic Imaging and Diagnosis
Zhang, Juanye
Contrast agents are commonly used in magnetic resonance imaging (MRI) for their ability to selectively highlight or darken image brightness from different tissue components depending on their chemical nature. This work explores and discusses the potential of colloidal iron oxide nanoparticles as non-toxic MRI contrast agents for diagnosis.&#13;
&#13;
Novel single-nanometer iron oxide (SNIO) nanoparticles exhibited contrast enhancement in T1-weighted MRI superior to that of commercially used gadolinium-based contrast agents, with unprecedentedly fast and clean renal clearance for inorganic colloidal nanoparticles. An approach for size determination based on small-angle X-ray scattering (SAXS) was developed, and the resulting understanding of particle size distribution led to the successful development of alkyne-functionalized nanoparticles allowing for conjugation with biological cargos, creating a novel vehicle for drug delivery.&#13;
&#13;
SNIO–CBP, the product of conjugating functionalized SNIO nanoparticles with a nature-inspired cyclic collagen-binding peptide (CBP) through a newly developed preparative copper-catalyzed alkyne–azide coupling (CuAAC), exhibited binding affinity towards type I collagen, a natural indicator of fibrosis abundant in the extracellular matrix. SNIO–CBP was able to quickly detect hepatotoxin-induced and diet-induced non-alcoholic steatohepatitis (NASH) in mice through T1-weighted MRI upon intravenous administration.&#13;
&#13;
New formulations of hydrophilic iron oxide nanoparticles were explored. Tailored for quantitative ultra-short time-to-echo contrast-enhanced (QUTE-CE) MRI, catechol-coated iron oxide with 10 nm mean diameter was optimized; carboxymethyl dextran as a new ligand system was developed on SNIO and formation of nanoparticle clusters was observed, implying its potential for the emergent preclinical technique of magnetic particle imaging (MPI).
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147443</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and predicting responses of ecological communities to perturbations</title>
<link>https://hdl.handle.net/1721.1/147442</link>
<description>Understanding and predicting responses of ecological communities to perturbations
Medeiros, Lucas P.
Ecological communities—systems formed by several interacting species—are subject to external perturbations due to changing environmental conditions. Perturbations ranging from antibiotics in microbial communities to warming in coral reefs may displace species abundances or alter community composition. Understanding and predicting how perturbations change these two attributes of ecological communities remains a major challenge in ecology. Indeed, solving this challenge is critical to minimize the harmful impacts of perturbations on the biodiversity and functioning of communities. In this thesis, I introduce new theoretical approaches to understand and predict changes in species abundances and community composition following perturbations and apply these approaches to different empirical communities. In particular, this thesis advances current knowledge in three main ways. In Chapters 2 and 3, we explore Lotka-Volterra population dynamics at equilibrium and demonstrate that subsets of species that can tolerate a larger amount of perturbations on model parameters (i.e., structural perturbations) are more likely to persist. We then show how these theoretical results help us to predict the persistence of different subsets of species in communities of competing herbivores and in microbial communities. In Chapter 4, we again leverage Lotka-Volterra dynamics at equilibrium and show that communities that recover faster from perturbations on abundances tend to tolerate larger amounts of structural perturbations. This theoretical connection between the two indicators allows us to assess a community’s response to perturbations using a single indicator, as we illustrate with microbial communities. Finally, in Chapters 5 and 6, we develop a data-driven approach to infer the sensitivity of species abundances to perturbations under non-equilibrium population dynamics. 
Using model-generated abundance time series, we demonstrate that our approach captures how both the sensitivity of different species and the contribution of species to whole-community sensitivity change over time. We then apply our approach to rocky intertidal and plankton communities to illustrate how and when different species are more likely to be sensitive to perturbations and to contribute to whole-community sensitivity. By combining theoretical developments with empirical data, this thesis moves us one step closer to a comprehensive and practical theory on the responses of communities and their species to perturbations.
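As a hedged illustration of the equilibrium setting referenced above (a generic Lotka-Volterra sketch, not code or parameters from the thesis; the three-species numbers are invented):

```python
import numpy as np

# Hypothetical 3-species competitive community: dN/dt = N * (r + A @ N)
r = np.array([1.0, 0.8, 0.6])                  # intrinsic growth rates
A = np.array([[-1.0, -0.3, -0.2],              # interaction matrix with
              [-0.2, -1.0, -0.3],              # self-limitation on the diagonal
              [-0.1, -0.2, -1.0]])

# Feasible equilibrium: N* = -A^{-1} r (all species coexist when N* > 0)
N_star = -np.linalg.solve(A, r)

# Response to abundance perturbations: the Jacobian at N* is diag(N*) @ A;
# the community recovers at the rate of its slowest-decaying mode
J = np.diag(N_star) @ A
recovery_rate = -np.max(np.linalg.eigvals(J).real)
```

The thesis relates this kind of recovery rate from abundance perturbations to the amount of structural (parameter) perturbation a community can tolerate; the sketch only shows the two quantities being compared.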
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147442</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Defense of Impermissivism</title>
<link>https://hdl.handle.net/1721.1/147433</link>
<description>A Defense of Impermissivism
Balin, Allison K.
This dissertation is a defense of Impermissivism, which is the thesis that there is never more than one rational response to a single body of evidence. &#13;
&#13;
Permissivism is the view that there is sometimes more than one rational response to a single body of evidence. It faces a troubling arbitrariness objection. One variety of Permissivism—Standards Permissivism—does not appear to face this objection. In Chapter One, I argue that the normative structure of Standards Permissivism is under-explored: there is no good explanation for why a subject is rationally required to adhere to her standard. Without such an explanation, the view both fails to sidestep the arbitrariness objection and fails to be plausible. After reviewing the various alternatives, I argue that the only way for a Standards Permissivist to explain the requisite normative force is by adopting the “Belief Account” of standards. I then explore some upshots of my argument: in particular, my argument has the consequence that Standards Permissivism is a form of Unacknowledged Permissivism.&#13;
&#13;
Unacknowledged Simple Permissivism can be understood as an alternative to Standards Permissivism in that both are potential responses to the arbitrariness objection. In Chapter Two, I argue that we should regard Unacknowledged Simple Permissivism as the more attractive option for an arbitrariness-avoidant Permissivist. This is partly because—as I argue in Chapter One—the Standards Permissivist is already saddled with the major drawback of Unacknowledged Simple Permissivism: that there are no acknowledged permissive cases. I argue that Unacknowledged Simple Permissivism is false by presenting a variant of the original arbitrariness objection, but as applied to an ideal agent. In order to make this argument, I argue for a certain restriction on views about higher-order evidence: in particular, higher-order evidence that your doxastic state is rational should not affect the rational status of that doxastic state.&#13;
&#13;
In Chapter Three, I argue that even with the best available account of the normative force of standards, a new sort of arbitrariness worry arises for the Standards Permissivist. In particular, this arbitrariness arises when a subject mistakenly forms a belief that is inconsistent with her standard in a permissive case. I argue that the Standards Permissivist lacks the resources to provide a satisfactory solution in such a case. The view therefore faces an arbitrariness objection of its own.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147433</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Few Shot Learning for Rare Disease Diagnosis</title>
<link>https://hdl.handle.net/1721.1/147431</link>
<description>Few Shot Learning for Rare Disease Diagnosis
Alsentzer, Emily
Rare diseases affect 300-400 million people worldwide, yet each disease has very low prevalence, affecting no more than 50 per 100,000 individuals. Many patients with rare genetic conditions remain undiagnosed due to clinicians' lack of experience with the individual diseases and the considerable heterogeneity of clinical presentations. Machine-assisted diagnosis offers the opportunity to shorten the diagnostic delays for rare disease patients. Recent advances in deep learning have considerably improved the accuracy of medical diagnosis. However, much of the success thus far is contingent on the availability of large annotated datasets containing thousands of examples per condition for training machine learning models. Machine-assisted diagnosis of rare diseases presents unique challenges; approaches must learn from limited data and extrapolate beyond the training distribution to novel genetic conditions.&#13;
&#13;
The goal of this thesis is to develop few-shot learning methods that can overcome the data limitations of deep learning approaches to diagnose patients with rare genetic conditions. Motivated by the need to infuse external knowledge into models, we first develop novel graph neural network methods for subgraph representation learning that encode how subgraphs (e.g., a set of patient phenotypes) relate to a larger knowledge graph. To address the issue of data scarcity, we next develop a framework for simulating realistic rare disease patients with novel genetic conditions and demonstrate how these simulated patients are similar to real rare disease patients. Finally, we leverage these advances to develop SHEPHERD, a few-shot method for diagnosis of patients with rare genetic conditions in the Undiagnosed Diseases Network. SHEPHERD reasons over biomedical knowledge via geometric deep learning to learn generalizable representations of rare disease patients. SHEPHERD can operate at multiple facets throughout the rare disease diagnosis process: performing causal gene discovery, retrieving “patients-like-me” with the same causal gene or disease, and providing interpretable characterizations of novel disease presentations. Our work illustrates the potential for deep learning methods to rapidly accelerate molecular diagnosis and shorten the diagnostic odyssey for rare disease patients.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147431</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integration of amino acid signals by the mTORC1 pathway</title>
<link>https://hdl.handle.net/1721.1/147429</link>
<description>Integration of amino acid signals by the mTORC1 pathway
Valenstein, Max L.
To regulate growth, cells must integrate a broad array of environmental cues to coordinate anabolic and catabolic processes. The mechanistic target of rapamycin complex 1 (mTORC1) kinase pathway is the primary eukaryotic pathway responsible for integrating these diverse signals and coordinating cell growth programs. When activated by environmental cues that favor cell and organismal growth, mTORC1 phosphorylates substrates to promote anabolic programs, such as protein synthesis, and to inhibit catabolic programs, including autophagy. Aberrant mTORC1 signaling has been implicated in a variety of human disease states, including cancer, diabetes, and epilepsy, as well as normal physiological processes such as aging. Accordingly, the molecular basis for mTORC1 activation is of significant fundamental and pathophysiological interest.&#13;
&#13;
Nutrients, such as amino acids and glucose, are critical mTORC1 activators that signal through the heterodimeric Rag GTPases. When nutrients are abundant, the Rag GTPases recruit mTORC1 to the lysosome for activation. Several multi-protein complexes, including the antagonistic GATOR1 and GATOR2 complexes, control the Rag GTPases and are themselves regulated by nutrient sensors, which detect the presence or absence of specific metabolites. While the function of the GATOR1 complex has been revealed, GATOR2 remains enigmatic despite its essential role as an integrator of nutrient signals that promotes mTORC1 activation. This thesis describes a structural and biochemical characterization of the GATOR2 complex, which sheds light on its role in the mTORC1 signaling pathway. We used cryo-electron microscopy to determine the three-dimensional structure of GATOR2, revealing that it adopts a large, cage-like architecture formed by a novel mode of interaction between zinc-binding domains. The oligomeric scaffold of GATOR2 is decorated by WD40 β-propellers, which interact with nutrient sensors and the GATOR1 complex. These results suggest a model in which dynamic and competitive protein-protein interactions, and not ubiquitin transfer, transduce amino acid signals to mTORC1. This work provides a foundation for understanding the critical, integrative role that GATOR2 plays in the regulation of mTORC1 activation.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147429</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Urban Planning and Religious Practice: Three Challenges</title>
<link>https://hdl.handle.net/1721.1/147428</link>
<description>Urban Planning and Religious Practice: Three Challenges
Manouchehrifar, Babak
Religion in urban planning is conventionally viewed as a non-spatial, pre-theoretical, or extra-legal phenomenon. This view has been questioned recently by research in religious and pluralism studies and by the increasing religious diversity and activism in Western and non-Western cities. Yet, the challenge remains that urban planners usually do not understand how to address the religious concerns and practices of urban communities without compromising their statutory and political responsibilities. In this dissertation, I take up three aspects of this challenge.&#13;
&#13;
First, I analyze the conceptual and practical connections between religion, secularism, and urban planning in liberal democracies to argue that understanding religion in urban planning entails understanding religion’s constitutive other: secularism. This paper questions the assumption of religious indifference as an adopted disciplinary ethos in planning, arguing that this assumption has made it more difficult for planners to confront the ways that the spatial structures of cities are being reshaped by religious and deep cultural differences. It has also prevented planners from addressing the consequences of a secular process of power for the organization of social life in urban communities.&#13;
&#13;
Second, I evaluate the conception of “religion” incorporated in past international development initiatives. I analyze developmental efforts led by the United States in the Philippines (1898), Albania (2003), and Iraq (2003) to argue that Protestantism has been viewed as the normative template or the “gold standard” against which other religious practices are measured as free, modern, and civil. This view has dragged North American planners working on international development into the age-old missionary conceit of “good vs. bad religion” and diverted their attention away from working with local communities to address developmental challenges.&#13;
&#13;
Third, I recognize that religion and urban planning intersect with each other on firm ground, rather than in thin air. I thus propose a theory – i.e., a “weak theory” – of how urban planners can approach religion as lived and experienced in the dynamic interplay of everyday practices, i.e., as “lived religion,” rather than as mere belief, pathology, or ideology. This approach, I argue, invites planners to employ ethnography and examine the actual lived situations (in courtrooms, planning offices, or public meetings) wherein competing conceptions of “lived religion” surround specific substantive planning issues, e.g., zoning or public health deliberations.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147428</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic Analysis of Bacterial Food Perception and its Influence on Foraging Behavior in C. elegans</title>
<link>https://hdl.handle.net/1721.1/147425</link>
<description>Genetic Analysis of Bacterial Food Perception and its Influence on Foraging Behavior in C. elegans
Boor, Sonia
The ability to adapt to changes in food conditions is critical for organismal homeostasis and survival. In this thesis, I explore the genetic and neuroendocrine mechanisms by which C. elegans evaluates bacterial food conditions and accordingly alters development and behavior. In Chapter One, I discuss the relationship between C. elegans and its bacterial diet. The influence of food on behavior suggests the presence of a gut-“brain” axis that senses and communicates information about nutritional state to the nervous system to elicit a behavioral response.&#13;
&#13;
In Chapter Two, I characterize a gain-of-function allele of scd-2, the C. elegans Anaplastic Lymphoma Kinase (ALK) gene ortholog, scd-2(syb2455), which I designed based on an oncogenic mutation in ALK. While animals with loss-of-function mutations in scd-2 are dauer-formation defective, scd-2(syb2455) animals enter dauer regardless of food conditions. In Chapter Three, I report that SCD-2 also regulates the food-dependent feeding and foraging behaviors known as dwelling and roaming; scd-2(syb2455) animals roam more and scd-2 loss-of-function animals roam less than wild type. Additionally, in contrast to wild-type animals, which express the gene encoding the TGF-β signaling ligand DAF-7 exclusively in the ASI chemosensory neurons, scd-2(syb2455) animals constitutively express daf-7 in both the ASI and ASJ neurons. The expression of daf-7 in the ASJ neurons drives roaming in these animals. I demonstrate that daf-7 expression in the ASJ neurons is also affected by food; ingested food in the pharynx inhibits daf-7 expression in the ASJ neurons through SCD-2 signaling. From these data we propose a positive-feedback loop that regulates roaming behavior: in the absence of ingested food, active SCD-2 induces daf-7 expression in the ASJ neurons to promote roaming, which further reduces food consumption. To further investigate how daf-7 neuroendocrine signaling responds to nutritional state, in Chapter Four I describe a screen for satiety signals using the nutritional state-dependent daf-7 expression in the ASJ neurons in males as a readout for communication along the gut-“brain” axis. This screen yielded a loss-of-function allele of che-3 and a gain-of-function allele of pdfr-1. In Chapter Five, I discuss future directions for investigating how C. elegans interacts with its food environment.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147425</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>MFSD7C is an ATP Transporter that Supports Bacterial Killing by Alveolar Macrophages in a  Lipid-Rich Microenvironment</title>
<link>https://hdl.handle.net/1721.1/147424</link>
<description>MFSD7C is an ATP Transporter that Supports Bacterial Killing by Alveolar Macrophages in a  Lipid-Rich Microenvironment
Brown, Douglas R.
Major Facilitator Superfamily Domain-Containing 7C (MFSD7C) is a solute carrier with an unknown substrate. In humans, mutations in MFSD7C cause Fowler syndrome, which is characterized by aberrant development of the fetal vasculature in the brain, resulting in hydranencephaly-hydrocephaly. Although most Fowler syndrome patients do not survive past birth, those who do often experience recurrent respiratory infections. We now show that Mfsd7c-deficient alveolar macrophages in mice have an impaired ability to counter infection. This is demonstrated to be due to their altered metabolic state, characterized by heightened glycolysis and reduced lipid oxidation. When Mfsd7c-deficient alveolar macrophages encounter bacteria with glucose available as a nutrient, their deficiency in bacterial killing is abrogated. We then examine the purified protein MFSD7C and perform a candidate substrate screen using a thermostability-shift assay. We identify ATP as a putative substrate for MFSD7C and validate that 32P-labeled ATP can be absorbed by E. coli expressing the protein. We also determine that heme increases the rate of ATP uptake, prompting us to examine the effects of heme on ATP transport in alveolar macrophages. We find that heme rapidly and dramatically increases the rate of ATP export by Mfsd7c, providing insight into how this protein may modulate thermogenesis and metabolism as previously observed.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147424</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Assembling Peptide Nanofibers RADA16 and IEIK13 for Rapid Hemostasis</title>
<link>https://hdl.handle.net/1721.1/147422</link>
<description>Self-Assembling Peptide Nanofibers RADA16 and IEIK13 for Rapid Hemostasis
Bittner, Colin
Blood loss and trauma remain a significant cause of mortality for both military personnel and civilians. This mortality can be reduced through the development of hemostatic bandages that reduce bleeding and prolong life in the prehospital setting. Many successful hemostat materials exist, but none yet meets all the criteria of the ideal hemostat: easy to apply, no negative side effects, long-term shelf stability, thermal stability, cost-effective, and bioabsorbable. The self-assembling peptide nanofibers RADA16 and IEIK13 provide a unique possibility to create a hemostat that meets all of these criteria. These short peptides self-assemble into nanofibers that entangle and gel in solution. Through the use of layer-by-layer (LbL) assembly, we can apply these hemostatic peptides to a bioabsorbable substrate in thin conformal layers to produce a bandage that is lightweight and stable for long periods at room temperature.&#13;
&#13;
In this thesis, we developed and optimized a new LbL system using the self-assembling peptide IEIK13 and validated it alongside a RADA16-based system. Dipping LbL on flat substrates was optimized and used to validate stability and film growth. These findings were translated into a spray LbL process, allowing for coating of a three-dimensional gelatin sponge substrate. These spray LbL methods were optimized to produce robust hemostatic films. We examined possible mechanisms by which RADA16 and IEIK13 accelerate hemostasis, examining their effects on the activity of several clotting factors as well as on clot formation. While some effects on the activity of individual clotting factors were observed, in vitro testing using whole blood showed no significant differences in clot formation. Finally, we developed entirely new LbL systems through the addition of adhesive molecules, catechol functional groups and chitosan, to improve interaction between the coated dressings and wound tissue. We were able to show that the application of these LbL films produced a significant increase in tissue adhesion. These findings lay the groundwork for the development of a successful hemostatic bandage based on LbL films containing self-assembling peptide nanofibers.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147422</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Voltammetric Methods Augmented with Physical Models and Statistical Inference</title>
<link>https://hdl.handle.net/1721.1/147421</link>
<description>Voltammetric Methods Augmented with Physical Models and Statistical Inference
Fenton Jr., Alexis M.
Increasing adoption of low-cost renewable energy technologies can enable global sustainability goals. However, the intermittency of variable resources inhibits broad deployment, necessitating a range of energy management systems including rechargeable batteries (e.g., redox flow batteries (RFBs), lithium-ion batteries). RFBs are particularly attractive because their system architecture enables decoupled energy capacity and power output—along with long service life and simplified maintenance—but their present-day costs remain prohibitively high. A promising pathway to economically competitive RFBs is the use of redox-active organic compounds (RAOs), which can be functionalized to improve battery performance (e.g., cell voltage, energy density). However, state-of-the-art RAOs often decompose during operation, shortening battery lifetime. Understanding and mitigating this decay is thus crucial; corresponding efforts typically rely on ex situ and post mortem analyses to elucidate the decay pathway(s) which, while often successful, may be time-consuming and expensive. These processes may be streamlined by incorporating more real-time studies using in situ or operando electrochemical methods such as voltammetry, a powerful technique able to accurately estimate the composition of an electrolyte solution in an automated fashion. However, proposed voltammetric routines usually do not leverage physical models, meaning they may perform poorly when confronting conditions not included in training data.&#13;
&#13;
In this thesis, I seek to advance voltammetric analyses to evaluate the behavior of degrading RAOs by developing physics-informed protocols that leverage statistical inference. I first construct an algorithm that utilizes physical models and Bayesian inference to correctly identify RAOs using multiple techniques in several independently prepared multicomponent solutions in near-real-time (&lt; 5 min). I subsequently estimate the degree to which an electrolyte is charged, as well as the total RAO concentration, with high accuracy (&lt; 4 % average error), in real-time (&lt; 1 min), and in an automated fashion. Finally, I present a protocol that jointly evaluates dissimilar voltammetry techniques to improve the initial compound identification protocol. Through these developments, I lay a foundation for advanced in situ or operando voltammetric methods to aid in understanding, and consequently mitigating, RAO decay; this, in turn, may accelerate the development of new electrochemical technologies for a sustainable energy economy.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147421</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High throughput measurement and perturbation of tissues and tissue-derived cellular models</title>
<link>https://hdl.handle.net/1721.1/147420</link>
<description>High throughput measurement and perturbation of tissues and tissue-derived cellular models
Kummerlowe, Conner Samuel
Human tissues are composed of trillions of cells whose states and interactions drive health and disease. Deciphering which cells and interactions are associated with any given disease is challenging due to the vast complexity of a tissue. Successfully doing so requires a suite of tools for measuring and modeling tissues. First, tools for comprehensively measuring tissues, such as single-cell RNA-sequencing (scRNA-seq), can identify the disruptions to the molecules, pathways, and cells in a tissue that are correlated with disease. Due to the unbiased nature of these profiling methods, such measurements generate many hypotheses to test in order to identify the factors that cause disease. In vitro cellular models that recapitulate tissue biology, such as patient-derived organoids, provide a platform for systematically testing these hypotheses. However, such experiments are difficult to scale, requiring the development of new technologies for perturbing complex model systems at scale.&#13;
&#13;
Here, we first demonstrate the value of comprehensively measuring tissues by applying scRNA-seq to map the epithelial and immune correlates of disease in Zambian adults with Environmental Enteropathy (EE). In doing so, we reveal key aspects of the biology of this neglected disease, including the presence of surface mucosal cells in EE, an increase in WNT/β-catenin signaling in the EE epithelium, and a more cytotoxic phenotype in EE T cells. Through this work, we generate new hypotheses for therapeutic and nutritional intervention in EE.&#13;
&#13;
Next, we provide a new method for testing hypotheses in cellular models at scale by perturbing models with pooled perturbations whose effects we computationally deconvolute. We developed this “compressed screening” approach in the U2OS cell line with a high-content imaging (Cell Painting) readout and a bioactive small molecule perturbation library. We then applied this method to identify novel microenvironmental factors that modify RNA state in pancreatic ductal adenocarcinoma (PDAC) organoids.&#13;
&#13;
Altogether, the work in this thesis falls within a framework for understanding human biology by comprehensively measuring tissues to generate new hypotheses and then systematically testing these hypotheses by perturbing tissue-derived cellular models at scale. This framework provides a promising path for understanding human diseases and developing new therapeutics.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147420</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and Optimization in Modern Retail</title>
<link>https://hdl.handle.net/1721.1/147419</link>
<description>Learning and Optimization in Modern Retail
Foncea Araneda, Patricio Tomas
The general topic of this thesis is the application of optimization and statistical inference methods to practical industry problems in the domains of supply chain, demand estimation, assortment optimization, and experimentation. We develop new methodologies that improve the practice of operations management for retailers and wholesalers, testing and implementing them in partnership with industry collaborators.&#13;
&#13;
We begin by tackling the problem faced by an online retailer that receives orders sequentially and must decide from which of its warehouses to ship each item in the order. Each warehouse has limited inventory, and the retailer must balance the immediate cost of shipping against future inventory availability. We formulate this as an online optimization problem and propose a novel primal-dual algorithm with provable performance guarantees that is robust to the demand process and does not require any explicit forecast.&#13;
&#13;
We then turn our attention to the problem of learning customer preferences using aggregated demand from multiple products and stores. Although we rely on traditional choice model techniques, the novelty of our approach is the inclusion of a low-rank term in the utility model that aims to capture non-observable characteristics as latent features. This not only improves the overall fit of our demand estimation model, but also helps address the endogeneity of our regressors without the need for instrumental variables. Once we have a demand model in place, we can use it to forecast customer behavior and make decisions that bring positive predicted outcomes. We develop algorithms that solve the assortment optimization problem when using complex choice models, as well as the problem of optimally allocating shelf space. These algorithms are efficient and scalable, and we show how our industry collaborator is currently implementing them in their business operations.&#13;
&#13;
Finally, we study the problem of measuring the effect of interventions in large-scale retail. We design an experimentation platform for a large wholesaler that applies promotions at the store level. We apply a generalized version of synthetic control to find treatment effects and show that these estimates are more reliable and accurate than those obtained with current methodologies used by industry practitioners.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147419</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Chemistry Meets Machine Learning: Autonomous Computational Workflow for Chemical Discovery</title>
<link>https://hdl.handle.net/1721.1/147418</link>
<description>Quantum Chemistry Meets Machine Learning: Autonomous Computational Workflow for Chemical Discovery
Duan, Chenru
Automation has been revolutionizing our modern society since the first industrial revolution, and a similar revolution is ongoing in the computational sciences. Quantum chemistry software and modern computers have developed to a stage where virtual high-throughput screening (VHTS), i.e., running thousands of calculations in parallel, becomes possible. This provides great opportunities for developing automated workflows that utilize the increasing computing power to generate large-scale data sets. Together with machine learning (ML) models trained on these data sets as either surrogate function approximations or generative models, accelerated chemical discovery of functional molecules and materials is achieved. Current automation workflows, however, are far from perfect. Namely, they produce too many unfruitful results and suffer severely from method selection bias, especially on challenging chemical spaces such as transition metal chemistry. These problems prevent automated workflows from providing the efficiency and accuracy needed for chemical discovery.&#13;
&#13;
In this Thesis, we introduce intelligent ML-based decision-making models into automation workflows. We build the first set of classifiers to predict the likelihood of calculation success, which on-the-fly monitor and terminate already running calculations if they are predicted to fail with high confidence. These classifiers are extremely transferable and stay accurate (i.e., &gt;95%) during the whole geometry optimization process, saving more than half of the computational resources. We develop the first semi-supervised learning classifier to identify strong static correlation in a system, achieving state-of-the-art performance for this task. Therefore, we can pre-determine which systems require more expensive (yet more accurate) correlated wavefunction theory calculations, thus improving overall data accuracy without adding unnecessary computational cost. We also propose an approach that utilizes the consensus among multiple density functional approximations (DFAs) to discover robust (i.e., DFA-insensitive) candidate compounds, which are in much better agreement with experimentally observed leads. Lastly, we build a DFA recommender that selects the DFA with the lowest expected error relative to the reference in a system-dependent manner, achieving the accuracy needed for inorganic chemical discovery. All these ML-based decision-making models are integrated into workflows for VHTS. We anticipate that these “smart” computational workflows are key to autonomous chemical discovery.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147418</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Interviews and Matching</title>
<link>https://hdl.handle.net/1721.1/147417</link>
<description>Essays on Interviews and Matching
Shayani, Joseph N.
This thesis contains three essays on the topic of quantifying the impact of interviews in a matching market. The first two essays are empirical and use novel preference and matching data from the Canadian Residency Matching Service (CaRMS), and the third essay presents a formal identification result. In the first essay, I measure the impact of interviews on employers' preferences, and in the second essay, I measure the impact of reducing interviews on match outcomes. Both essays require me to quantify employers' pre-interview information about their post-interview preferences, but employers observe information unobservable to the econometrician. To address this econometric challenge, I estimate a joint structural model of interview offers and post-interview ranks in which unobservables may be correlated across the two periods, and thereby I use the information contained in post-interview preferences to correct for employers' additional pre-interview information. The third essay presents a non-parametric identification result that formalizes the possibility of using selection (e.g., interview-offer) data and binary outcome (e.g., job-offer) data jointly to correct for the role of unobservable factors in selection.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147417</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Phonon Thermal Transport</title>
<link>https://hdl.handle.net/1721.1/147414</link>
<description>Machine Learning for Phonon Thermal Transport
Chen, Zhantao
Efficient generation, transport, conversion, and storage of energy are essential to support our modern society and combat global climate change. As one of the major energy carriers, phonons play an indispensable role in various energy-related processes. The past decades have witnessed continuous development in phonon measurement and computation techniques. However, at least two significant challenges impede our further exploration of phonon thermal transport. The first challenge is the difficulty of acquiring certain phonon properties in a streamlined way. Key quantities like the density of states (DOS) are nontrivial to compute, incurring high computational costs, and nontrivial to measure, requiring complicated experimental setups. Second, there is a lack of techniques to detect phonon frequency-based information. Frequency-based information, such as relaxation time and interfacial transmittance, contains rich microscopic insight and governs measurables like thermal conductivity and interfacial thermal conductance. However, it is generally beyond the reach of existing measurement techniques. This thesis demonstrates how machine learning can address these challenges by 1) predicting the phonon DOS from simple atomic-structure information using symmetry-aware neural networks and 2) extracting frequency-based phonon information from time-resolved diffraction measurements with scientific machine learning.&#13;
&#13;
First, we present the direct prediction of phonon DOS using only atomic species and positions as input. We apply symmetry-aware Euclidean neural networks (E(3)NN) to preserve crystallographic symmetries and overcome data scarcity. The predictive model reproduces essential features of the phonon DOS and generalizes to materials with atom types absent from the training set. We further exemplify our method’s potential by predicting alloy systems without any additional computational cost and by filtering high-phononic-specific-heat materials out of more than 4,000 candidate materials within one hour. Second, we reveal microscopic phonon transport in heterostructures with a machine-learning-augmented experimental framework. By taking advantage of ultrafast electron diffraction (UED), with its dual temporal and reciprocal-space resolution, we employ advanced scientific machine learning to recover the frequency-dependent interfacial transmittance, with possible extension to the relaxation time of each layer. We demonstrate its capability by analyzing thin heterostructures beyond the reach of conventional experimental methods and reconstructing unprecedented details of real-space, real-time, frequency-resolved phonon dynamics across an interface.&#13;
&#13;
While the presented topics are closely related to phonon systems, the proposed methods in this thesis can be readily transferred to predict other continuous properties and learn the previously “unmeasurable” properties from experiments. Furthermore, we expect the thesis work to enable a deeper understanding of the fundamental connections between symmetry, structure, and elementary excitations in condensed matter and material physics.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147414</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Computational Design of Shape and Control for Rigid Robots</title>
<link>https://hdl.handle.net/1721.1/147406</link>
<description>Towards Computational Design of Shape and Control for Rigid Robots
Xu, Jie
Designing a high-performing robotic system for given tasks traditionally requires labor-intensive search for the optimal configuration of its hardware shape and/or software controller. The underlying coupling of the hardware shape and the software control of a robot results in an enormous parameter space involving both discrete parameters (i.e., the topological structure of the robot) and continuous parameters (i.e., the morphological dimensions of each robot link and the control parameters); optimizing over this space traditionally requires significant expert knowledge from roboticists and many manual design iterations. Intending to automate the robot design process, computational robot design has attracted increasing attention from robotics, graphics, and artificial intelligence researchers. However, building a general computational robot design process is extremely hard due to several challenging problems, including but not limited to representation, performance evaluation, and optimization. This thesis identifies some key challenges in the computational robot design pipeline and proposes corresponding solutions. We first take manipulator design as an example to present our robot design representations for both discrete and continuous robot shape parameters. With the proposed robot representations, we then explore the corresponding robot optimization techniques. In this part, we first introduce how we leverage differentiable simulators to efficiently optimize the robot control policy with the robot configuration fixed. Next, we delve into the more complicated co-design problems requiring optimization of both the shape and control of a robot. We present two novel algorithms for optimizing discrete shape parameters and continuous shape parameters, respectively. 
Finally, we step further toward the more realistic multi-objective robot design problems and present our solutions for finding a set of Pareto-optimal robot designs trading off multiple different objectives and tasks. We conclude this thesis by envisioning an ultimate computational design pipeline and discussing open research directions toward this ultimate goal.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147406</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular CRISPR-diagnostics for Infectious Diseases</title>
<link>https://hdl.handle.net/1721.1/147399</link>
<description>Modular CRISPR-diagnostics for Infectious Diseases
Thakku Venkateswaran, Sri Gowtham
Infectious diseases continue to represent a significant fraction of worldwide disease burden. A critical part of lowering this burden entails effective testing strategies, as underscored by the ongoing COVID-19 pandemic. The goals of diagnostics, screening, and surveillance have demanded distinct yet innovative approaches to testing. In this work, I present two novel approaches for multiplexed nucleic acid detection (CARMEN and WATSON) that build on the existing CRISPR-diagnostic method SHERLOCK. SHERLOCK combines traditional amplification (PCR or isothermal) with target-specific CRISPR-Cas13 detection and enables new, modular assay designs. With CARMEN, we employ a droplet microfluidic platform to perform thousands of parallel SHERLOCK reactions in nanoliter droplets. CARMEN’s potential for high-throughput infectious disease screening is demonstrated through the design of detection assays for large panels of clinically relevant viral and bacterial pathogens, as well as resistance markers. With WATSON, we maximize the sensitivity of SHERLOCK by targeting multiple tiled regions across a single pathogen genome. The clinical significance of WATSON is demonstrated by applying it to the detection of plasma circulating cell-free DNA in tuberculosis, highlighting its potential as a liquid biopsy test for infectious disease management.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147399</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gene transfer agents promote survival and DNA repair during stationary phase for Caulobacter crescentus</title>
<link>https://hdl.handle.net/1721.1/147398</link>
<description>Gene transfer agents promote survival and DNA repair during stationary phase for Caulobacter crescentus
Gozzi, Kevin Robert
Gene transfer agents (GTAs) are prophage-like entities found in many bacterial genomes that cannot propagate themselves and instead package ~5-15 kbp fragments of the host genome that can then be transferred to related recipient cells. Although suggested to facilitate horizontal gene transfer in the wild, no clear physiological role for GTAs has been elucidated. Here, I demonstrate that the α-proteobacterium Caulobacter crescentus produces bona fide GTAs. The production of Caulobacter GTAs is tightly regulated by a newly identified transcription factor, RogA, that represses gafYZ, the direct activators of GTA synthesis. Cells lacking rogA or expressing gafYZ produce GTAs harboring an ~8.3 kbp fragment of the genome that can, after cell lysis, be transferred into recipient cells. Notably, I found that GTAs promote the survival of Caulobacter in stationary phase and following DNA damage by providing recipient cells a template for homologous recombination-based repair. This function may be broadly conserved in other GTA-producing organisms and explain the prevalence of this unusual horizontal gene transfer mechanism. I also found that GTAs act to strongly activate the SOS-independent DNA damage response. Using a combination of biochemical and genetic techniques, I characterize the central regulator DriD and identify that DriD specifically binds ssDNA as a direct ligand. ssDNA acts as an allosteric regulator, inducing a conformational change that activates DriD and allows DriD to promote transcription at target promoters involved in surviving DNA damage. DriD may serve as a model for other WYL-domain-containing proteins, a poorly understood class of proteins commonly found associated with CRISPR loci across bacteria.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147398</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instance-Optimized Database Indexes and Storage Layouts</title>
<link>https://hdl.handle.net/1721.1/147396</link>
<description>Instance-Optimized Database Indexes and Storage Layouts
Ding, Jialin
For any modern database system, its physical design, which is composed of both the storage layout of the data itself and auxiliary data structures such as indexes, is a critical piece of maintaining high performance in the face of increasing data volumes. Existing physical design components are general-purpose: they achieve adequate performance for the average use case but don’t achieve optimal performance for any individual use case. These physical design components expose numerous configuration knobs that users must manually tune to achieve better performance for their individual use case, but tuning complex systems is labor-intensive, and poor tuning can result in degraded performance and increased costs.&#13;
&#13;
In this thesis, we explore how database systems can maximize performance while minimizing manual effort through instance-optimization, which is the process of designing systems that are able to automatically self-adjust in order to achieve the best performance for a given use case. We leverage instance-optimization to introduce novel designs for database indexes and data storage layouts that outperform existing state-of-the-art indexes and data layouts by orders of magnitude. We also demonstrate how to incorporate multiple instance-optimized database components into an end-to-end analytic database system that outperforms a well-tuned commercial cloud-based analytics system by up to 3×.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147396</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alpha thalamocortical networks during propofol general anesthesia and disorders of consciousness</title>
<link>https://hdl.handle.net/1721.1/147395</link>
<description>Alpha thalamocortical networks during propofol general anesthesia and disorders of consciousness
Zhou, David Wei
Alpha (8-12 Hz) rhythms are a fundamental feature of awake electroencephalography (EEG), thought to be generated by circuits connecting the cortex and thalamus. These rhythms provide a functional architecture for cortical activity underpinning cognitive and sensory processing. Attenuation of alpha power has been linked to clinical states of unconsciousness. General anesthesia and disorders of consciousness (DoC) offer experimentally accessible conditions in which to study alpha disruptions using scalp and intracranial EEG recordings. To produce novel signatures of posterior alpha loss in DoC, we analyzed coherent networks in EEG of patients during recovery from DoC. To map thalamocortical networks involved in propofol-induced unconsciousness, we conducted coherence analysis of alpha networks in intracranial EEG recorded in patients with pharmacologically refractory epilepsy and performed probabilistic tractography analysis of thalamocortical fibers in a matched cohort of healthy subjects. We found a posterior alpha network in recordings of clinical EEG and intracranial EEG that is lost during states of unconsciousness. We also found that propofol anesthesia induces alpha in medial regions of the frontal cortex and the medial temporal lobe after loss of consciousness. The cortical source of propofol-induced alpha is structurally connected to the mediodorsal nucleus of the anterior thalamus, whereas regions that generate waking alpha are connected to the pulvinar nucleus of the sensory thalamus. Our findings suggest that posterior alpha coherence is a unified signature of conscious brain states and that frontal alpha may contribute to cognitive impairment during anesthesia and sedation.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147395</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Epistemology: Essays on What to Think and What to Do</title>
<link>https://hdl.handle.net/1721.1/147389</link>
<description>Practical Epistemology: Essays on What to Think and What to Do
Schilling, Haley
Sometimes, we need evidence in order to act. A jury needs "proof beyond a reasonable doubt" in order to convict a defendant of a crime. A teacher needs to read a student’s essay in order to assign a grade. A babysitter needs to know that the sandwich does not contain peanuts, in order to give it to a child with a peanut allergy. The FDA needs "substantial evidence" of the efficacy of a new drug in order to approve it. This dissertation explores the relationship between ethics and epistemology, evidence and practical deliberation, and what to think and what to do.&#13;
&#13;
Chapter 1 develops an account of “proof beyond a reasonable doubt,” a standard that is vexingly difficult to pin down. Legal proof is knowledge on the basis of trace evidence that the defendant is guilty. This epistemic norm generalizes to all of our responses and reactive attitudes — and is a challenge to orthodox knowledge norms.&#13;
&#13;
Chapter 2 considers a central issue in the ethics of AI — algorithms are often opaque. This essay characterizes a class of applications for which there is a special moral demand for transparency: algorithms that give people what they deserve, on the basis of what they have done. Explainability is important to assure that the algorithms follow the requisite epistemic norms.&#13;
&#13;
Chapter 3 considers the pragmatic encroachment thesis, the claim that whether S knows p depends on the practical, as well as the epistemic, features of her deliberative context. The essay argues that the ordinary knowledge ascriptions that often motivate the thesis can just as easily undermine the thesis, and then develops a contextualist knowledge norm that can account for the data. &#13;
&#13;
Chapter 4 explains how to set significance levels, based on practical considerations. Scientists should set significance levels based on the value of the posterior credences that would result from updating on different results of significance tests.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147389</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards machine learning models robust to adversarial examples and backdoor attacks</title>
<link>https://hdl.handle.net/1721.1/147387</link>
<description>Towards machine learning models robust to adversarial examples and backdoor attacks
Makelov, Aleksandar
In the past decade, machine learning has succeeded spectacularly on many challenging benchmarks. However, are our machine learning models ready to leave this lab setting and be safely deployed in high-stakes real-world applications? In this thesis, we take steps towards making this vision a reality by developing and applying new frameworks for making modern machine learning systems more robust. In particular, we make progress on two major modes of brittleness of such systems: adversarial examples and backdoor data poisoning attacks.&#13;
&#13;
Specifically, in the first part of the thesis, we build a methodology for defending against adversarial examples that is the first one to provide non-trivial adversarial robustness against an adaptive adversary.&#13;
&#13;
In the second part, we develop a framework for backdoor data poisoning attacks, and show how, under natural assumptions, our theoretical results motivate an algorithm to flag and remove potentially poisoned examples that is empirically successful. We conclude with a brief exploration of preliminary evidence that this framework can also be applied to other data modalities, such as tabular data, and other machine learning models, such as ensembles of decision trees.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147387</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consequences and Limits of Cell-Cell Communication in Airway Immune Responses</title>
<link>https://hdl.handle.net/1721.1/147386</link>
<description>Consequences and Limits of Cell-Cell Communication in Airway Immune Responses
Allon, Samuel Jonathan
The airways are an important barrier tissue, where successful immune responses can protect our bodies from viruses, bacteria, fungal spores, and environmental toxins. Conversely, when the immune system fails to respond or responds inappropriately in the airways, it can lead to diseases ranging from deadly pneumonia to chronic allergies. Cell-cell communication is an important aspect of how airway immune responses are coordinated, yet it is not fully known which airway cell types send intercellular signals, under what circumstances, and what the consequences of those intercellular signals are. RNA sequencing, including single-cell RNA sequencing, has proven to be a valuable tool for answering all three of these unresolved questions.&#13;
&#13;
Here, we use a combination of RNA sequencing and single-cell RNA sequencing to address these questions in three different contexts. First, we examine the airway mast cells that are overabundant in Type 2 inflammatory diseases, such as chronic rhinosinusitis. We attribute this overabundance of mast cells to a combination of in situ proliferation and immigration into the tissue. We then fully describe the subtypes of mast cells present, including which intercellular signals each subtype appears capable of sending. Second, we examine airway epithelial cells in the upper and lower airways to transcriptionally evaluate their susceptibility to SARS-CoV-2 infection. We identify the cell types with the highest expression of the SARS-CoV-2 receptor, ACE2, and provide evidence that ACE2 mRNA is upregulated during interferon responses. Finally, we investigate whether airway epithelial cells send secondary signals to one another after sensing immune-relevant signals like cytokines. To do this, we devise a new method for systematically detecting secondary responses in vitro and apply the method to airway epithelial cells grown in two different formats.&#13;
&#13;
Collectively, our work confirms that cell-cell communication influences nearly every aspect of airway immune responses, while also illustrating the limits of our current technologies, of RNA sequencing as an analytical tool, and of cell-cell communication itself.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147386</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcription factor antagonism regulates heterogeneity in embryonic stem cell states</title>
<link>https://hdl.handle.net/1721.1/147385</link>
<description>Transcription factor antagonism regulates heterogeneity in embryonic stem cell states
Hu, Sofia
Gene expression heterogeneity underlies cell states and contributes to developmental robustness. While heterogeneity can arise from stochastic transcriptional processes, the extent to which it is regulated is unclear. Here we characterize the regulatory program underlying heterogeneity in murine embryonic stem cell (mESC) states. We identify differentially active and transcribed enhancers (DATEs) across states. DATEs regulate differentially expressed genes and are distinguished by co-binding of Kruppel-like transcription factors Klf4 and Zfp281. In contrast to other factors that interact in a positive feedback network stabilizing mESC cell-type identity, Klf4 and Zfp281 drive opposing transcriptional and chromatin programs. Abrogation of factor binding to DATEs dampens variation in gene expression, and factor loss alters the kinetics of switching between states. These results show that antagonism between factors at enhancers results in gene expression heterogeneity and formation of cell states, with implications for the generation of diverse cell types during development.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147385</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Structured World Models From and For Physical Interactions</title>
<link>https://hdl.handle.net/1721.1/147384</link>
<description>Learning Structured World Models From and For Physical Interactions
Li, Yunzhu
Humans have a strong intuitive understanding of the physical world. We observe and interact with the environment through multiple sensory modalities and build a mental model that predicts how the world would change if we applied a specific action (i.e., intuitive physics). This dissertation presents my research that draws on insights from humans and develops model-based reinforcement learning (RL) agents. The agents learn from their interactions and build predictive models of the environment that generalize widely across a range of objects made with different materials. The core idea behind my research is to introduce novel representations and integrate structural priors into the learning systems to model the dynamics at different levels of abstraction. I will discuss how we can make structural inferences about the underlying environment. I will also show how such structures can make model-based planning algorithms more effective and help robots to accomplish complicated manipulation tasks (e.g., manipulating an object pile, pouring a cup of water, and shaping deformable foam into a target configuration). Beyond visual perception, touch also plays a vital role in how humans perform physical interactions. I will discuss how we bridge the sensing gap between humans and robots by building multi-modal sensing platforms with dense tactile sensors in various forms (e.g., gloves, socks, vests, and robot sleeves) and how they can lead to more structured and physically grounded models of the world.&#13;
&#13;
This dissertation consists of three parts. In Part I, we show how we can learn world models at different levels of abstraction and how the learned models allow model-based planning to accomplish challenging robotic manipulation tasks both in simulation and in the real world. Part II investigates the use of a learned structured world model for physical inference that infers the causal relationships between different components within the environment and performs state and parameter estimations. Part III goes beyond the previous two parts, which assume only vision as input, by considering touch as an additional sensory modality. I will discuss the novel tactile sensors we developed and how they can be used in understanding hand-object and human-environment physical interactions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147384</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power Electronics Meet Piezoelectrics: Converters, Components, and Miniaturization</title>
<link>https://hdl.handle.net/1721.1/147383</link>
<description>Power Electronics Meet Piezoelectrics: Converters, Components, and Miniaturization
Boles, Jessica Danielle
As converters and controllers of electrical energy, power electronics are the lifeblood of many exciting emerging technologies in transportation, manufacturing, information technology, and more. However, while integrated circuits have seen miniaturization and expanded performance characterized by Moore’s Law, power electronics often remain the bulkiest, lossiest, and costliest components in the systems they serve. Miniaturization of power electronics is fundamentally bottlenecked by passive components, particularly magnetics (i.e., inductors and transformers), which pose inherent size and performance challenges at small scales. &#13;
&#13;
This thesis explores how we can leverage an alternative passive component technology – piezoelectric components – to eliminate magnetics and unlock a new era of miniaturization for power electronics. Piezoelectrics offer several potential size, performance, and manufacturability advantages to power electronics, but harnessing these advantages requires fundamental re-evaluation of both power electronic circuits and piezoelectric components themselves. Accordingly, this thesis presents the following recent advances: (1) Dc-dc converter circuit topologies and operating sequences capable of efficiently utilizing piezoelectrics as sole passive components; these converter implementations demonstrate the efficiency viability of piezoelectric-based power electronics and provide their highest experimental efficiencies to date. (2) Piezoelectric component design tools for efficiency and power handling density; this framework enables maximal utilization of piezoelectric components and reveals them to have favorable scaling characteristics to small sizes. (3) An experimental demonstration of dramatic miniaturization offered by piezoelectrics; this prototype piezoelectric component has nearly an order of magnitude lower volume than a competing magnetic component design.&#13;
&#13;
These are important steps in realizing the theoretical advantages of piezoelectrics in power electronics, positioning them to revolutionize what is possible for computing, wireless communication, robotics, biomedical devices, renewable energy, and beyond.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147383</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microscale Energy Transport in Lead Halide Perovskites</title>
<link>https://hdl.handle.net/1721.1/147382</link>
<description>Microscale Energy Transport in Lead Halide Perovskites
Brenes, Roberto
Energy transport is of paramount importance for the operation and design of semiconductor devices. Lead halide perovskites, an emerging class of semiconductors for optoelectronic applications, exhibit significant phenomena that can enhance or disrupt lateral energy transport, such as photon recycling and microscale heterogeneity. Understanding and quantifying these energy transport mechanisms is critical for scaling perovskite photovoltaic device areas. In this thesis, we explore how photon recycling affects energy transport at both the macroscale and the microscale, and develop a framework to quantify carrier diffusion anisotropy and grain boundary effects in optical microscopy measurements. First, we quantify the enhancement due to photon recycling at the macroscale for state-of-the-art perovskite films. We find that even with finite nonradiative recombination, benefits from photon recycling can be achieved when nonradiative lifetimes and light-emitting diode (LED) electroluminescence efficiencies exceed 2 μs and 10%, respectively. Next, we demonstrate that processes such as nonlinear recombination and photon recycling can have a significant impact on measured mean-squared-displacement (MSD) profiles at the microscale, especially for excitonic materials with short radiative lifetimes. Additionally, we find that film microstructure can lead to unique transport profiles that strongly depend on the material boundary behavior and the differences between the domain feature size and the energy carrier diffusion length. Finally, we develop a framework to analyze experimental energy carrier diffusion maps accounting for the diffusion tensor and material microstructure to overcome the shortcomings of MSD models. We use this framework to study anisotropy in lead halide perovskite single crystals and polycrystalline thin films.
By globally fitting the unnormalized data over the diffusion map, we quantify both carrier transport and recombination and, importantly, reveal anisotropy in the diffusion tensor for CH₃NH₃PbI₃ polycrystalline films. This framework paves the way for understanding anisotropic energy transport in heterogeneous materials.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147382</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring charge to spin conversion in novel materials for efficient spintronics</title>
<link>https://hdl.handle.net/1721.1/147379</link>
<description>Tailoring charge to spin conversion in novel materials for efficient spintronics
Safi, Taqiyyah S.
The charge-to-spin conversion efficiency is the crux of the future of spintronics and electronics as it enables the generation and employment of both spin and electric current in the same device. Usually, this conversion efficiency is an intrinsic material property, which cannot be easily modified without invoking chemical or structural changes in the underlying system. This thesis explores materials with nontrivial band structures as potential active spintronics materials and introduces a novel method of tunable spin-to-charge conversion, which will provide extra flexibility in spintronics device design. First, we show the successful modulation of charge-spin conversion efficiency (by more than 5×) via the metal-insulator transition in a quintessential strongly correlated electron compound, vanadium dioxide (VO₂). We then show that by engineering the chemical order in the newly discovered magnetic Weyl semimetal (MWSM) Co₂MnGa, we can control the spin-charge conversion. Tailoring this interconversion efficiency will lead to energy-efficient and highly tunable memory and computing devices.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147379</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microscopic Physics of Electrical Double Layers</title>
<link>https://hdl.handle.net/1721.1/147377</link>
<description>Microscopic Physics of Electrical Double Layers
de Souza, J. Pedro
The electrical double layer exists at the phase boundaries of electrolyte solutions, where counterions from solution preferentially accumulate to screen surface charges. Due to the ubiquity of electrolytes, the electrical double layer plays a central role in many fields in science and engineering, including colloid science, electrochemistry, biology, membrane science, and tribology. Across these fields, mathematical models of the double layer have been used to analyze and predict the behavior of electrochemical interfaces in contact with electrolyte solutions. Even so, the standard continuum approaches and assumptions that are applied usually fail to describe the microscopic arrangement and structuring of ions and solvent in the electrical double layer, limiting their predictive power.&#13;
&#13;
In this thesis, I develop mathematical models to predict the microscopic structure of ionic solutions at charged interfaces, relevant for a wide set of problems including membrane transport, electrochemical capacitors, ionic liquid electrolytes, bioseparations, electrowetting, cement cohesion, and general colloidal stability. The continuum mathematical models I derive for the electrical double layer capture electrostatic correlations in electrolytes containing multivalent ions, the molecular-level layered structures in ionic liquids and concentrated electrolytes, interfacial orientational ordering of common polar liquids such as water, and the effects of electrolyte confinement in pores down to the nanoscale. These effects are not captured in applications of standard continuum theories for dilute electrolyte solutions, but are essential in accurately describing the equilibrium and nonequilibrium properties of electrolytes at charged interfaces. The key feature of the theories explored in this thesis is the inclusion of microscopic physics using formulations of non-local electrostatics, which encode additional microscopic length scales of discrete molecules, ions, and confinement geometry into the theory.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147377</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Studies of Instability-Triggered Intersonic Surface Detachment Waves in Soft Material Sliding</title>
<link>https://hdl.handle.net/1721.1/147375</link>
<description>Dynamic Studies of Instability-Triggered Intersonic Surface Detachment Waves in Soft Material Sliding
Du, Huifeng
Spatially constrained soft structures under dynamic perturbation may evolve into a variety of organized morphological patterns such as wrinkles and random folds. These surface transformations are usually triggered by system bifurcation instabilities and regulated by energy redistribution. Among them, elastomeric materials sliding on smooth surfaces generate separation pulses due to tangential stress gradients. When the material slides at a speed much lower than those of elastic surface waves, the process is dominated by surface adhesion and relaxation effects known as Schallamach waves. In contrast, fast-traveling separation pulses at the sliding interface exceeding the Rayleigh and shear wave velocities have been theoretically conjectured but not experimentally validated. Moreover, the highly dynamic nature of the problem requires a combination of different methods to understand the instability generation mechanisms and evaluate the system responses. Therefore, the purpose of this research is to advance the understanding of the dynamic behavior of instability-triggered detachment waves, and to establish a methodology for more quantitative analyses of the wave propagation, energy transformation, and its physical impacts on the surrounding media.
Through a synergistic effort combining analytical studies, numerical simulation, and experimental observations, we established a framework that addresses: i) the mechanisms governing the formation and evolution of separation pulses induced by frictional contact; ii) the necessity and effectiveness of our combined approaches for the highly nonlinear, multi-scale dynamic problem; iii) quantitative analysis of the transient wave properties of intersonic surface detachment, and the transformation of energy into other forms (acoustic radiation) during wave propagation; and iv) important implications of this work and insights into how the new understanding could shift the landscape of structural design for applications involving soft material contact.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147375</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntax and Prosody of Coordination</title>
<link>https://hdl.handle.net/1721.1/147367</link>
<description>Syntax and Prosody of Coordination
Wu, Danfeng
This thesis investigates the syntactic and prosodic properties of what I call correlative coordination: coordination in which each junct (i.e., conjunct or disjunct) has an overt coordinator (e.g., either…or…). I argue that the coordinators are focus-sensitive operators, and each coordinator has two positions. Only the higher position is semantically interpreted, and the lower position is semantically vacuous. These findings dovetail with previous proposals in other empirical domains, suggesting that perhaps all focus-sensitive operators have two positions in a sentence, and are interpreted in the higher position (e.g., Lee 2004, Cable 2007, Hole 2015, 2017, Hirsch 2017, Quek &amp; Hirsch 2017, and Bayer 2018). The results reported here also suggest that the coordinator, traditionally considered to be the head of coordination (e.g., or and but), may not be the actual head, but just the daughter of a junct. A covert abstract Junction head takes all the juncts as its sister, and projects to the coordinated phrase. This is identical to Al Khalaf’s (2005) analysis of coordination, but supported here by different types of evidence.&#13;
&#13;
At the same time, correlative coordination does not always look like it has the syntax that I argue for. When the coordinators seem to be higher than where I claim, I argue that ellipsis has occurred to obscure the actual size of coordination and the position of the coordinators.&#13;
&#13;
In my syntactic theory of coordination, ellipsis is a veil that obscures the underlying syntax of coordination. In the second part of this thesis, which studies the mapping from syntax to prosody, I put ellipsis in the spotlight, and ask if elided material is truly silent. In a prosodic experiment that studies ellipsis in coordination, I argue that elided material has prosodic representation, despite being silent. I confirm previous experimental results that there is a close correspondence between syntax and prosody in coordination. Because the syntactic structure of coordination is recursive, this means that the prosodic structure may also be recursive, and replicate the dominance relations in syntax.&#13;
&#13;
I further argue that there is a close syntax-prosody correspondence, even when the coordinated phrases contain elided material. This has implications for the prosodic representation of silent material. An important assumption in the literature on syntax-prosody mapping is that silent material (e.g., null heads and their projections, and perhaps traces, etc.) does not have prosodic representation (Nespor &amp; Vogel 1986; Chen 1987; Truckenbrodt 1999; Elfner 2015). Viewing this assumption in light of my experimental results, it appears that there may be a dichotomy of silent material: while null heads and their projections do not have prosodic representation, elided material does. Assuming a derivational account of the syntax-prosody mapping, a possible interpretation of these results is that prosodic structure is created at a point when material to be elided is not yet deleted, leaving effects of deleted material in prosody. But at this same moment of creation of prosodic structure, vocabulary insertion has already occurred, so that the syntax-prosody mapping can ignore phonologically null elements.&#13;
&#13;
Because ellipsis has prosodic effects, we may be able to detect elided structure not just based on syntactic-semantic evidence, but also based on prosodic evidence. I demonstrate this with another prosodic experiment that argues for the presence of ellipsis in correlative coordination based on subtle phonetic effects in prosodic boundaries. In doing so, I follow the tradition of drawing evidence for syntactic claims from prosody (e.g., Bresnan 1971 and Clemens &amp; Coon 2018).
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147367</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of laser-driven particle acceleration for the development of tunable ion sources for applications in high energy density science</title>
<link>https://hdl.handle.net/1721.1/147365</link>
<description>Investigation of laser-driven particle acceleration for the development of tunable ion sources for applications in high energy density science
Simpson, Raspberry
Since the invention of chirped pulse amplification by Donna Strickland and Gérard Mourou in 1985, laser technology has evolved such that we can create short pulses of light (10⁻¹⁵ − 10⁻¹² seconds) with high peak powers (10¹⁵ watts) in small, focused spots (∼a few microns). A prolific area of research that has emerged over the last two decades is the use of these high-intensity lasers to drive particle beams. Possible applications of these particle sources include isotope production for medical applications, proton cancer therapy, and fusion energy schemes. This thesis focuses on laser-driven proton acceleration and adds to the existing foundation of work in the area by investigating new empirical relationships, conducting new measurements of the accelerating electric field responsible for laser-driven proton acceleration, and developing a new data analysis methodology using machine learning. This work first examines laser-driven proton acceleration in the multi-picosecond regime (&gt;1 ps) at laser intensities of 10¹⁷ − 10¹⁹ W/cm². This is motivated by recent results on laser platforms like the National Ignition Facility-Advanced Radiographic Capability laser and the OMEGA-Extended Performance laser, which have demonstrated enhanced accelerated proton energies when compared to established scaling laws. A detailed scaling study was performed on the Titan laser, which provided the basis for a new analytical scaling presented in this thesis. In addition, high-repetition-rate (HRR) lasers that can operate at 1 Hz or faster are now coming online around the world, opening a myriad of opportunities for accelerating the rate of learning on laser-driven particle experiments. To unlock these applications, HRR diagnostics combined with real-time analysis tools must be developed to process experimental measurements and outputs at HRR.
Towards this goal, this thesis presents a novel automated data analysis framework based on machine learning and proposes a new methodology based on representation learning to integrate heterogeneous data to constrain parameters that are not directly measurable. Taken together, these thrusts enable a new preliminary framework for enhanced analysis of complex HRR experiments and a foundational step towards realizing the goal of tunable laser-driven particle sources.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147365</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new way to do epidemic modeling</title>
<link>https://hdl.handle.net/1721.1/147364</link>
<description>A new way to do epidemic modeling
Dandekar, Raj Abhijit
Coronavirus disease 2019 (COVID-19), caused by the virus SARS-CoV-2, led to a global pandemic, with more than 500 million confirmed global cases and approximately 6 million deaths in more than 50 countries. Since the outbreak of this pandemic, a number of modeling frameworks have been used to analyze various aspects of the pandemic, such as prediction of infected and recovered case counts, hospitalizations, travel restrictions, reopening, and non-pharmaceutical interventions. These frameworks can be divided broadly into the following categories: (a) compartment models, which are interpretable but cannot capture complex effects, and (b) agent-based models, which can capture varying degrees of complexity but are generally non-interpretable.&#13;
&#13;
In this thesis, we introduce another category for epidemic modeling, rooted in Scientific Machine Learning (SciML), which combines the interpretability of ODEs with the expressivity of neural networks. We thus aim to retain the interpretability of compartment models along with the complexity of agent-based models using the SciML modeling paradigm. Using such a framework, we tackle a wide variety of application-based problems, including:&#13;
&#13;
• How quarantine control policies shaped the outbreak evolution in different countries around the world.&#13;
• The effect of early reopening in the Southern and West Central US states, and how it led to an exponential growth of infected cases in the USA during June–August 2020.&#13;
• Virtual virus spread through Bluetooth tokens, and how it can be used to obtain real-time estimates of the pandemic.&#13;
&#13;
Towards the end, we analyze the robustness of the proposed SciML methodology and provide a general set of guidelines for training such models in other domains.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147364</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein Synthesis and Bioconjugation for Design of Antimicrobial Conjugates</title>
<link>https://hdl.handle.net/1721.1/147363</link>
<description>Protein Synthesis and Bioconjugation for Design of Antimicrobial Conjugates
Saebi, Azin
New antimicrobial approaches are needed to combat the threat of antibiotic resistance. Gram-negative bacteria are of significant clinical concern, in part due to their cell wall structure that restricts the transport of most antibiotics. Here we describe the development of two chemical strategies that are well positioned to expand non-traditional antibiotic approaches to breach the Gram-negative cell wall, particularly in Pseudomonas aeruginosa. The first strategy, chemical synthesis of proteins, enables access to proteins equipped with modifications and conjugation handles that are valuable for unlocking their therapeutic potential. We used this approach to produce a synthetic antipseudomonal bacteriocin and demonstrated that its physical and biological characteristics were comparable to its biological counterpart. The second strategy, protein-protein conjugation, is a powerful approach for the development of targeted antibiotics. We developed a protein-protein conjugation method based on Pd-mediated cross-coupling chemistry to perform conjugation reactions under mild and dilute conditions that are amenable to most proteins. Together, these strategies have the potential to generate new and non-traditional antibiotics.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147363</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Surface Coverage of Reaction Intermediates in Heterogeneous Electrocatalysis</title>
<link>https://hdl.handle.net/1721.1/147362</link>
<description>The Role of Surface Coverage of Reaction Intermediates in Heterogeneous Electrocatalysis
Jung, Onyu
The population of reaction intermediates is intimately related to the kinetic landscape and the overall rate of any multi-step reaction. For efficient interconversion of chemical and electrical energy, it is critical to understand the role of catalyst surface coverage and how it relates to two important mechanistic parameters, reaction orders and Tafel slopes. On heterogeneous catalysts, lateral interactions between neighboring intermediates can lead to non-linear correlations between reaction conditions and the overall reaction rate, and complicate the mechanistic analysis. To avoid this mathematical challenge, an empty or a fully saturated surface coverage of intermediates is often assumed in the field of heterogeneous electrocatalysis, but this idealized assumption can lead to predictions far from empirical observations. Herein, we show that fractional coverage of reaction intermediates is key to understanding kinetic behaviors of energy conversion reactions and developing design principles for optimal catalysts.&#13;
&#13;
In Chapter 2, we report activation-controlled hydrogen evolution reaction (HER) data from pH 1 to 12 on Au and Pt catalysts and find that reaction orders in hydronium are between 0 and 1 across the pH range. Observed reaction orders and Tafel slopes that depart from the customary values are rationalized with a fractional metal–H coverage model. HER kinetics are coverage-dependent, and we propose that the sluggish proton-coupled electron transfer kinetics from H₂O vs. H₃O⁺ is the reason for HER efficiency loss in alkaline electrolytes.&#13;
&#13;
In Chapter 3, we build upon the fractional coverage model in Chapter 2 to show that anything in the electrolyte, such as buffering species and supporting electrolyte ions, can adsorb on the surface and influence interfacial HER kinetics. The reaction order in sodium phosphate buffer at pH 7 is 0.6 across more than an order of magnitude in phosphate buffer concentration. Our data suggest that a fractional coverage of adsorbed dihydrogenphosphate manifests in non-integer reaction orders.&#13;
&#13;
Chapter 4 explores proton donor control as a complementary strategy to tune the reaction rate of the HER. As a model system, we study the donor-dependent selectivity of CO2 reduction catalysis on Au for which the HER is a parasitic reaction. We identify acetate buffer in dimethylacetamide as a stable nonaqueous medium where we can explicitly tune the proton donor identity and activity. By decreasing the proton activity in the electrolyte, we dramatically suppress HER to improve the selectivity of CO vs H2 production from 20 to 80%.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147362</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>GaN Electronics for High-Temperature Applications</title>
<link>https://hdl.handle.net/1721.1/147361</link>
<description>GaN Electronics for High-Temperature Applications
Yuan, Mengyang
Gallium nitride is a promising candidate for harsh-environment electronics, thanks to its excellent material properties, which have given rise to high-performance (room-temperature) transistors for RF, power, MEMS, and mixed-signal applications. Previous work on high-temperature (HT) electronics has typically been limited to two aspects, namely, the high-temperature robustness of discrete transistors and basic circuit building blocks, which are mainly combinational logic. While these studies offer a strong indication of the potential of GaN transistor technology for HT applications, the development of HT (500 °C) GaN-ICs is still at an early stage due to the low degree of complexity and integration demonstrated so far.&#13;
A major challenge in the realization of GaN HT-robust sequential logic circuits or more complex systems is the lack of a scalable technology.&#13;
&#13;
This thesis aims to advance the integration technology of GaN HT electronics by demonstrating a comprehensive HT (500 °C) enhancement-mode (E-mode) GaN-on-Si technology from device to circuit perspectives: (1) a scalable device technology based on p-GaN-gate AlGaN/GaN HEMTs with high uniformity, optimized for HT operation and demonstrated to offer robust performance at least up to 500 °C with the help of in-house developed packaging technology and a characterization platform, (2) compact modeling of monolithically integrated enhancement/depletion-mode HEMTs up to 500 °C, (3) robustness-driven circuit design based on GaN technology, and (4) demonstration of GaN-based combinational and sequential building blocks, including inverters, NAND and NOR gates, ring oscillators, ROM, SRAM, D latches, and D flip-flops, operational up to 500 °C.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147361</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Effective Tools for Debugging Machine Learning Models</title>
<link>https://hdl.handle.net/1721.1/147360</link>
<description>Towards Effective Tools for Debugging Machine Learning Models
Adebayo, Julius
This thesis addresses the challenge of detecting and fixing the errors of a machine learning (ML) model—model debugging.&#13;
&#13;
Current ML models, especially overparametrized deep neural networks (DNNs) trained on crowd-sourced data, easily latch onto spurious signals, underperform for small subgroups, and can be derailed by errors in training labels. Consequently, the ability to detect and fix a model’s mistakes prior to deployment is crucial.&#13;
&#13;
Explainable machine learning approaches, particularly post hoc explanations, have emerged as the de facto ML model debugging tools. A plethora of approaches currently exist, yet it is unclear whether these approaches are effective.&#13;
&#13;
In the first part of this thesis, we introduce a framework to categorize model bugs that can arise as part of the standard supervised learning pipeline. Equipped with the categorization, we assess whether several post hoc model explanation approaches are effective for detecting and fixing the categories of bugs proposed in the framework. We show that current approaches struggle to detect a model’s reliance on spurious signals, are unable to identify training inputs with wrong labels, and provide no direct avenue for fixing model errors. In addition, we demonstrate that practitioners struggle to use these tools to debug ML models in practice.&#13;
&#13;
With the limitations of current approaches established, in the second part of the thesis, we present new tools for model debugging. First, we introduce an approach termed model guiding, which uses an audit set—a small dataset that has been carefully annotated by a task expert—to update a pre-trained ML model’s parameters. We formulate the update as a bilevel optimization problem that requires the updated model to match the expert’s predictions and feature annotations on the audit set. Model guiding can be used to identify and correct mislabelled examples. Similarly, we show that the approach can also remove a model’s reliance on spurious training signals.&#13;
&#13;
The second debugging tool we introduce uses the influence function of an estimator to help identify training points whose labels have a high effect on an ML model’s disparity metric, such as group calibration.&#13;
&#13;
Taken together, this thesis makes advances towards better debugging tools for machine learning models.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147360</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Synthesis over Noisy Data</title>
<link>https://hdl.handle.net/1721.1/147359</link>
<description>Program Synthesis over Noisy Data
Handa, Shivam
I present a new framework and associated synthesis algorithms for program synthesis over noisy data, i.e., data that may contain incorrect or corrupted input-output examples. I model the process that produced the noisy dataset as the selection of inputs and a hidden program from an input source and program source, followed by the application of a noise source to the correct outputs from the hidden program to obtain the noisy dataset. This model makes it possible to formulate the problem of noisy program synthesis as an optimization problem over the loss of a candidate program on the noisy dataset and the complexity of the candidate program.&#13;
&#13;
I present a noisy program synthesis algorithm based on finite tree automata. Results from an implemented system running this algorithm on problems from the SyGuS 2018 benchmark suite highlight the algorithm’s ability to successfully synthesize programs in the face of noisy data.&#13;
&#13;
I extend the noisy program synthesis framework to formally define the concepts of an optimal loss function and the convergence of a program synthesis algorithm to a correct program. Working with these concepts, I present optimal loss functions and convergence results for a wide range of program synthesis problems in the text manipulation domain, including results that characterize optimality and convergence properties of noise sources and loss functions used in experiments with the implemented synthesis algorithm. These results provide insight into the reasons for the success of the presented technique and can help enable the development of effective loss functions and noisy program synthesis algorithms in a range of contexts.&#13;
&#13;
I also present a new noisy program synthesis algorithm that uses an abstraction refinement based optimization process to synthesize programs. The presented experimental results demonstrate the significant performance improvements that this new technique can deliver. Building on this abstraction refinement technique, I present new noisy program synthesis algorithms that can work with both noisy inputs and noisy outputs as well as domain specific languages that include infinite sets of constants.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147359</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>I. Kinetic Modeling of Surface Reactions&#13;
II. Computational Design of Organic Semiconductors</title>
<link>https://hdl.handle.net/1721.1/147358</link>
<description>I. Kinetic Modeling of Surface Reactions&#13;
II. Computational Design of Organic Semiconductors
Kim, Changhae Andrew
In Part I of this thesis, we propose moment closure methods to simulate the chemical kinetics of surface reactions. For systems with static disorder in the rate constants and short-range correlation in the densities of reactants, we propose the half heterogeneous pair approximation (HHPA). Combining the intuitions of the mean-field steady state (MFSS) method and the pair approximation (PA), we consider representative pairs of sites in a self-consistent bath of the average pairwise correlation. Preaveraging over the static disorder in one site of each pair makes HHPA efficient enough to simulate systems of several species and calibrate rate constants. For systems with long-range dynamic correlation, we propose the use of machine learning (ML) to construct system-specific moment closures. Using the lattice Lotka-Volterra model (LLVM) as a model system, we trained feedforward neural networks (FFNNs) on kinetic Monte Carlo (KMC) results at select values of rate constants and initial conditions. The ML moment closure (MLMC) gave drastic improvements in the simulated dynamics and descriptions of the dynamical regimes throughout the parameter space.&#13;
&#13;
In Part II of this thesis, we propose new design principles to enhance the efficiencies of organic light-emitting diodes (OLEDs). In particular, we are interested in thermally activated delayed fluorescence (TADF) and triplet-triplet annihilation (TTA), which convert the nonemissive triplet excitons into emissive singlet excitons. First, we introduce a simple four-state model of TADF. The model predicts that it is possible to realize adiabatic singlet (S₁) and triplet (T₁) states with fast T₁ → S₁ intersystem crossing (ISC) and S₁ → S₀ radiative decay. Using molecular dynamics (MD) and the time-dependent density functional theory (TDDFT), we consider conformational variation as a means to sample the parameter space, and then we examine the potential of direct optimization to maximize the TADF rate. Second, we investigate the role of ISC in enhancing the efficiencies of TTA upconverters. We present computational evidence that the limit-breaking TTA efficiencies of certain annihilators might be attributed to the T₂ → S₁ ISC. Furthermore, we propose strategies to enhance this ISC and provide experimental support of enhanced efficiencies.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147358</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Loss of Integration Vs. Unity in Diversity: American Anthropology in the 1970s and 1980s and the Founding of the Society for Cultural Anthropology</title>
<link>https://hdl.handle.net/1721.1/147354</link>
<description>The Loss of Integration Vs. Unity in Diversity: American Anthropology in the 1970s and 1980s and the Founding of the Society for Cultural Anthropology
Kapsalakis, Lauren
This dissertation historically explores the founding of the Society for Cultural Anthropology (SCA) by investigating the intellectual, political, institutional, social, and financial factors that shaped the form it came to take. It emphasizes the historical contingency through which academic institutions come into existence, highlighting that there was nothing inevitable about the changes that spurred the SCA into existence. As an example of the challenges of doing recent American history of the 1970s and 1980s in a discipline not well historicized, its primary contribution is to contextualize the social, institutional, and intellectual changes in anthropology in a dual sense: relating changes within the discipline to the surrounding social and political context. It charts changes in the discipline of anthropology and the outside world by chronicling generational conflicts between different cohorts of scholars over the direction and identity of anthropology. Through discussion of the formation of the SCA, the dissertation also highlights factors arising in the 1970s and 1980s that led to a sense of the failure or impossibility of anthropology as an integrative, holistic, and comparative discipline, a change that was particularly distressing to earlier generations of American anthropologists. These changes included the fragmentation of subdisciplines, the abandonment of four-field anthropology, and new politically aggressive approaches, all of which fomented a sense of crisis in American anthropology in the 1960s that continues to influence the discipline to the present.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147354</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Arithmetic transfers, modularity of arithmetic theta series and geometry of local-global Shimura varieties at parahoric levels</title>
<link>https://hdl.handle.net/1721.1/147352</link>
<description>Arithmetic transfers, modularity of arithmetic theta series and geometry of local-global Shimura varieties at parahoric levels
Zhang, Zhiyu
Firstly, we introduce a method to establish the conjectured arithmetic modularity of arithmetic theta series on unitary Shimura varieties at general levels, which is based on modifications and uniformizations over vertical fibers. We carry out the method at maximal parahoric levels and obtain new arithmetic modularity results. We study the mod &#119901; geometry of related Shimura varieties and Rapoport–Zink spaces via natural stratifications, in particular their irreducible components. We also formulate and prove some local modularity questions.&#13;
&#13;
Secondly, we formulate some semi-Lie and group versions of arithmetic transfer conjectures at maximal parahoric levels, relating central derivatives of orbital integrals of explicit test functions to arithmetic intersection numbers of special cycles. The formulation involves a way to resolve the singularities of the relevant moduli spaces via natural stratifications and to modify derived cycles. These conjectures appear naturally in the context of W. Zhang’s relative trace formula approach to the arithmetic Gan–Gross–Prasad conjecture (and its &#119901;-adic analogs) for unitary groups, as well as Y. Liu’s work on Fourier–Jacobi cycles. We also obtain regular integral models for these Shimura varieties to do arithmetic intersection theory.&#13;
&#13;
For any unramified quadratic extension of &#119901;-adic local fields (&#119901; &gt; 2) and any maximal parahoric level, using the modification method towards arithmetic modularity, by a local-global method and double induction we establish these arithmetic transfer identities, under mild assumptions on the &#119901;-adic field. In particular, we establish the arithmetic fundamental lemma for all &#119901;-adic fields (&#119901; &gt; 2). We introduce the relative Cayley map as a natural reduction tool.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147352</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visuo-Tactile Perception for Dexterous Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/147351</link>
<description>Visuo-Tactile Perception for Dexterous Robotic Manipulation
Bauza Villalonga, Maria
In this thesis, we develop visuo-tactile perception to enable general and precise robotic manipulation. In particular, we want to study how to effectively process visual and tactile information to allow robots to expand their capabilities while remaining accurate and reliable.&#13;
&#13;
We begin our work by focusing on developing tools for tactile perception. For the task of grasping, we use tactile observations to assess and improve grasp stability. Tactile information also allows extracting geometric information from contacts, a task-independent feature. By learning to map tactile observations to contact shapes, we show that robots can reconstruct accurate 3D models of objects, which can later be used for pose estimation.&#13;
&#13;
We build on the idea of using geometric information from contacts by developing tools that accurately render contact geometry in simulation. This enables us to develop a probabilistic approach to pose estimation for novel objects based on matching real visuo-tactile observations to a set of simulated ones. As a result, our method does not rely on real data and yields accurate pose distributions.&#13;
&#13;
Finally, we demonstrate how this approach to perception enables precise manipulations. In particular, we consider the task of precise pick-and-place of novel objects. Combining perception with task-aware planning, we build a robotic system that identifies in simulation which object grasps will facilitate grasping, planning, and perception, and selects the best one during execution. Our approach adapts to new objects by learning object-dependent models purely in simulation, allowing a robot to manipulate new objects successfully and perform highly accurate placements.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147351</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>When Voice Leads to Exit: Democracy, Development, and Private Provision</title>
<link>https://hdl.handle.net/1721.1/147348</link>
<description>When Voice Leads to Exit: Democracy, Development, and Private Provision
Read, Blair
For many citizens, social welfare brings them in close contact with their governments. Education, in particular, is a central part of citizens’ lives. From a young age, it seeks to provide students with cognitive skills and human capital, alongside lessons of political and cultural socialization. As a core component of social welfare, governments have typically monopolized the production and maintenance of education; as a popular and pro-poor policy, governments have provided schooling in greater quantities as citizen voice has increased through democratization and democratic competition. Yet despite the centrality of education in the state’s social welfare portfolio, education across the Global South is increasingly provided privately. In this dissertation, I offer a theory of private social welfare expansion, explaining why political elites facilitate the emergence of private welfare markets. I trace the rapid growth of private welfare in the Global South to the pressures of electoral competition. In the late 20th century, as elections became more competitive, politicians faced increased pressure to provide welfare, yet had limited time frames between elections and scarce fiscal resources. I demonstrate that politicians supported private sector social welfare projects to meet this demand. Drawing on evidence from India’s primary education sector and using a school census of nearly 1.2 million public and private schools constructed since independence, I show that electoral competition prompts private welfare expansion and thwarts state efforts to centralize control over education. These results have implications for scholars and policymakers seeking to understand the scope and breadth of welfare institutions, and the role of government and markets in securing and ensuring social welfare. My findings challenge the idea that private alternatives to government services exist only during a transitional period between state absence and state consolidation. 
Instead, private welfare expansion follows a political logic that may not diminish as state capacity grows.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147348</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Deniable Computation and Sublinear Graph Algorithms</title>
<link>https://hdl.handle.net/1721.1/147347</link>
<description>On Deniable Computation and Sublinear Graph Algorithms
Mossel, Saleet
This thesis studies deniable computation and sublinear time graph algorithms. &#13;
&#13;
Deniable Computation. We define and construct Deniable Fully Homomorphic Encryption (FHE) based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables encrypted data stored in the cloud to be processed securely without decryption while maintaining deniability of the encrypted data, as well as the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption. Our constructions achieve compactness independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the detection probability achieved by the scheme. The running time of our encryption algorithm depends on the inverse of the detection probability; thus the scheme falls short of achieving simultaneously compactness, negligible deniability, and polynomial encryption time. Moreover, we introduce the notions of Encryption with Deniable Edits and Encryption with Invisible Edits and give constructions under minimal assumptions: in the public-key setting we only require the existence of standard public-key encryption, and in the symmetric-key setting we only require the existence of one-way functions. An encryption scheme that supports deniable edits allows a user who owns a ciphertext c encrypting a large corpus of data m under a secret key sk to generate an alternative but legitimate-looking secret key sk_{c,e} that decrypts c to an "edited" version of the data. Whereas encryption with deniable edits enables a user to modify the meaning of a single ciphertext, the goal of encryption with invisible edits is to enable ongoing modifications of multiple ciphertexts.&#13;
&#13;
Sublinear Graph Algorithms. We consider the problem of approximating the arboricity of a graph G = (V, E), which we denote by arb(G), in sublinear time in the adjacency list model, where the arboricity of a graph is the minimal number of forests required to cover its edge set. We design a sublinear time algorithm that outputs an O(log^2 n) estimate of the arboricity, with high probability. Furthermore, we present a sublinear time algorithm in the adjacency list model that allows one to sample multiple edges independently from a distribution that is pointwise close to the uniform distribution over edges in the graph. If one knows the number of required samples q in advance, the overall cost of sampling q edges using our algorithm is sublinear in q, which is strictly preferable to the cost resulting from q invocations of the best algorithm for sampling a single edge.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147347</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resolving Signal Transduction in Complex Biological Environments</title>
<link>https://hdl.handle.net/1721.1/147346</link>
<description>Resolving Signal Transduction in Complex Biological Environments
Srinivasan, Shwetha
Cell surface receptors reside in complex biological environments along with their numerous extracellular ligands to transduce information across the membrane bilayer. These receptors regulate major functions including cell proliferation, facilitation of neuronal transmission, and cell growth and metabolism, but their aberrant activity leads to debilitating disorders such as cancer, diabetes, and paralysis. While signal transduction has been understood in great detail for ion channels and G-protein coupled receptors (GPCRs), transmembrane signaling in enzyme-linked receptors has so far remained elusive due to the presence of only a single transmembrane helix in their structure for signal propagation. In this work, we explore signal transduction in the epidermal growth factor receptor (EGFR), the most prominent member of the enzyme-linked category of cell surface receptors. By isolating full-length receptors using nanodiscs produced with cell-free expression, we discovered ligand-induced conformational coupling between the EGFR extracellular and intracellular domains using single-molecule Förster Resonance Energy Transfer measurements. Furthermore, we disentangle the role of the complex environment around EGFR in its transmembrane signaling by (1) uncovering the specific tasks of the active components of the plasma membrane; (2) ascertaining ligand-specific EGFR conformations that determine its downstream signaling and cellular processes; and (3) illustrating the effect of EGFR phosphorylation in mediating interactions with intracellular signaling proteins.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147346</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Charge Carrier Balance in Lead Halide Perovskite Light Emitting Diodes</title>
<link>https://hdl.handle.net/1721.1/147345</link>
<description>Charge Carrier Balance in Lead Halide Perovskite Light Emitting Diodes
Chua, Matthew R.
Charge balance in light emitting diodes (LEDs) is a critical determinant of their performance. Charge balance is directly related to the External Quantum Efficiency (EQE), and is thought to have additional effects on device endurance and stability.&#13;
&#13;
In this thesis I present a series of experiments and simulations to study the charge balance in Lead Halide Perovskite (LHP) emitter LEDs. These LHP LEDs, particularly near-infrared emitters, have demonstrated high EQE and promising lifetime. However, for many other emission wavelengths their performance and endurance are lacking. I first present a modified simultaneous Photoluminescence-Electroluminescence (PLEL) measurement, along with two methods of calibrating the absolute photon flux absorbed, namely short-circuit current and absorption from reflection. This allows the determination of absolute PL efficiency in the device relative to EL efficiency, to obtain an absolute measure of charge balance. Our measurements suggest that in the prototypical high-efficiency NIR ITO/ZnO/LHP/TFB/MoO3/Al planar structure, the charge efficiency &#120578;&#119862;&#119861; is 0.83-0.91, consistent with the high observed EQE of 16-17%.&#13;
&#13;
Next, a different hole transport layer, poly-TPD, is introduced and the device studied relative to the original TFB case. We find that although poly-TPD devices have charge balance approximately 1.3x worse than TFB devices, this is insufficient to account for the 3x difference in device EQE when considering the relative PL quantum yields (PLQY) of the emitter under partially-complete stack measurements. By performing PL quantum yield measurements in situ on a device under hybrid electrical-optical excitation, we find that the LHP in-device PLQY is increased significantly and transiently by electrical excitation, which allows us to resolve the contradictory charge balance and EQE results.&#13;
&#13;
These experiments are supported by transfer-matrix-based optical simulations that allow us to account for the possible optical effects encountered in the different PLQY measurement conditions. Optical simulations are also used to convert the observed external PLQY to the Internal Quantum Efficiency, &#120578;&#119876;&#119884;, for a standard comparison across all measurements.&#13;
&#13;
This thesis demonstrates the effectiveness of the quasi-DC PLEL measurement in determining absolute charge balance in an LHP device. This method can be further applied to a broader range of LHP devices to gain greater understanding of the effects that charge transport layers and LHP emitters have on each other when used to create electrically pumped light-emitting systems.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147345</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Balancing Exploration and Exploitation: Task-Targeted Exploration for Scientific Decision-Making</title>
<link>https://hdl.handle.net/1721.1/147344</link>
<description>Balancing Exploration and Exploitation: Task-Targeted Exploration for Scientific Decision-Making
Flaspohler, Genevieve Elaine
How do we collect observational data that reveal fundamental properties of scientific phenomena? This is a key challenge in modern scientific discovery. Scientific phenomena are complex—they have high-dimensional and continuous state, exhibit chaotic dynamics, and generate noisy sensor observations. Additionally, scientific experimentation often requires significant time, money, and human effort. In the face of these challenges, we propose to leverage autonomous decision-making to augment and accelerate human scientific discovery.&#13;
&#13;
Autonomous decision-making in scientific domains faces an important and classical challenge: balancing exploration and exploitation when making decisions under uncertainty. This thesis argues that efficient decision-making in real-world, scientific domains requires task-targeted exploration—exploration strategies that are tuned to a specific task. By quantifying the change in task performance due to exploratory actions, we enable decision-makers that can contend with highly uncertain real-world environments, performing exploration parsimoniously to improve task performance.&#13;
&#13;
The thesis presents three novel paradigms for task-targeted exploration that are motivated by and applied to real-world scientific problems. We first consider exploration in partially observable Markov decision processes (POMDPs) and present two novel planners that leverage task-driven information measures to balance exploration and exploitation. These planners drive robots in simulation and oceanographic field trials to robustly identify plume sources and track targets with stochastic dynamics. We next consider the exploration-exploitation trade-off in online learning paradigms, a robust alternative to POMDPs when the environment is adversarial or difficult to model. We present novel online learning algorithms that balance exploitative and exploratory plays optimally under real-world constraints, including delayed feedback, partial predictability, and short regret horizons. We use these algorithms to perform model selection for subseasonal temperature and precipitation forecasting, achieving state-of-the-art forecasting accuracy.&#13;
&#13;
The human scientific endeavor is poised to benefit from our emerging capacity to integrate observational data into the process of model development and validation. Realizing the full potential of these data requires autonomous decision-makers that can contend with the inherent uncertainty of real-world scientific domains. This thesis highlights the critical role that task-targeted exploration plays in efficient scientific decision-making and proposes three novel methods to achieve task-targeted exploration in real-world oceanographic and climate science applications.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147344</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From elastic electrodes to fabric systems</title>
<link>https://hdl.handle.net/1721.1/147341</link>
<description>From elastic electrodes to fabric systems
Marion, Juliette
Textile-based electronics hold promise for wearable sensors and devices, as well as flexible and conformable electronics. In recent years, research has focused on realizing specific devices or even system-level functions within a single fiber. Through thermal drawing, significant developments in fiber functionality have been achieved over the last couple of decades. However, certain essential properties of device-containing fibers still require attention. In particular, yarns in an electronic textile must withstand large strains and bending while maintaining their functional integrity. Metal electrodes are necessary to connect devices over hundreds of meters of fiber, yet they preclude fiber devices from having the elasticity necessary to sustain weaving, knitting, and daily use.&#13;
&#13;
In this thesis, we investigate the use of in-fiber structural elasticity as a means to build elasticity into thermally drawn functional fibers. We explore thermal drawing of elastomers and show how the drawing stress can affect the fibers’ mechanical properties. We propose two approaches to achieve metal electrodes with high elasticity (&gt; 10%): buckling of a metal microwire within a cavity, or twisting of a fiber to yield helical metal electrodes. We then examine the mechanical constraints involved in weaving and knitting, and enrich the list of principles for fiber design to limit friction and kinking. We finally devise a method to connect micro-devices in fibers via buckled electrodes, and demonstrate two fabric systems: a woven, three-dimensional optical antenna, and a knitted garment able to detect localized changes in skin temperature.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147341</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Concurrent Planning with  Heterogeneous Robot Teams for Timed Goals</title>
<link>https://hdl.handle.net/1721.1/147340</link>
<description>Hybrid Concurrent Planning with  Heterogeneous Robot Teams for Timed Goals
Chen, Jingkai
Robotic techniques in mechanical design, perception, and planning have been improving at a dramatic rate. However, these advances for single autonomous systems will not be sufficient for many real-world applications, in which robots need to complete tasks collaboratively given their different capabilities, while avoiding conflicts over competing energy resources and occupied space, as in a cooperative truck-and-drone delivery system or a smart warehouse. In these scenarios, we need not only to assign timed goals to robots but also to plan how all the robots achieve these goals.&#13;
&#13;
Planning and coordinating robot teams to complete multiple timed goals over a long horizon is challenging: it requires (1) reasoning over hybrid systems with both discrete and continuous specifications; (2) coordinating multiple robots and timed goals; and (3) optimizing towards high-quality solutions. As a result, the solution space of this problem is huge and features combinatorial and nonlinear behaviours, and finding feasible solutions is nontrivial, let alone finding high-quality ones.&#13;
&#13;
There are two streams of research that address this problem: (1) hybrid activity planning, which plans both the symbolic and numeric parts of a system; and (2) multi-robot motion planning, which aims at coordinating collision-free motions of a large fleet of vehicles. While the first line of research considers only limited numeric parts, multi-robot motion planning lacks the ability to reason over high-level activities. Few of these approaches can completely solve our problem.&#13;
&#13;
In this thesis, I address this problem by adopting a two-stage hierarchical planning framework, which combines the strengths of both lines of research mentioned above. This planning framework divides the planning procedure into two stages: (1) high-level hybrid activity planning: optimally plan the activities of all the robots using centralized algorithms, while partially considering robot dynamics, such as approximating nonlinear dynamics as linear; (2) low-level multi-robot motion planning in support of activities and deadlines: given the planned activities, plan safe, executable control trajectories for all the robots in a decoupled way.&#13;
&#13;
This two-stage design is efficient, while being reasonably effective and complete: (1) to be efficient, the high-level planner properly approximates robot dynamics, and the low-level planner decouples the planning problem to single-activity subproblems and only coordinates them on demand; (2) the high-level planner is guaranteed to generate optimal decisions over activities and the low-level planner grounds these planned activities with an optimization purpose; (3) to mitigate the incompleteness due to the hierarchy, we provide rich information for making high-level decisions.&#13;
&#13;
Under this framework, this thesis has three major contributions: (1) an optimal hybrid activity planner, cKongming, that represents all the possible robot trajectories as a temporal hybrid flow graph and further encodes and solves it as a Mixed Integer Linear Program (MILP). Key to cKongming’s efficiency and effectiveness is its graph representation, which combines the hybrid flow graphs of the PDDL-K planner Kongming, to be effective, and the adaptive-duration action representation of PDDL 2.1 planners, to be efficient. (2) A scalable, effective multi-robot motion coordinator for activity plans, which extends priority-based single-goal coordination, where priorities are specified between agents, to handle multiple activities with temporal constraints. Key to achieving this is specifying priorities between individual activities rather than between the complete activity plans of different robots. (3) An assembly planner that automatically plans multiple high-dimensional manipulators for assembly tasks. The planner is built under the same two-stage hierarchical planning framework. The activity planner simplifies the hybrid planner to a temporal planner and leverages an analogy to the Vehicle Routing Problem with Time Windows to achieve efficiency. The task-directed motion planner tailors our general motion coordinator by allowing more aggressive single-activity replanning and look-ahead for future reachability. In our demonstration, we plan assemblies for up to three robots and 23 objects in a couple of minutes.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147340</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of surface and bulk interactions: A computational approach to sustainable energy</title>
<link>https://hdl.handle.net/1721.1/147339</link>
<description>Design of surface and bulk interactions: A computational approach to sustainable energy
Jana, Asmita
Computational tools have proved effective in the search for and design of better materials for a more sustainable planet. In this thesis, we look at two specific applications in two different sectors: industrial and transportation. Separations account for around half of the energy used in the industrial sector, and to reduce this energy use, there are efforts to convert some of the more energy-intensive separation strategies, like distillation, to ones that use less energy, like membrane-based technologies. The first part of the thesis looks at the specific case of air, or O₂/N₂, separation. Using a classical molecular dynamics (MD) framework to model gas permeation across a nanoporous graphene membrane template, we observed increased selectivity resulting from increasing adsorption energy differences alone. Using density functional theory calculations, we confirm that some transition metal oxides possess the adsorption energies needed to operate as adsorption-based pore-flow membranes, providing suitable motivation to examine such membranes as a viable option for air separation.&#13;
&#13;
In the transportation sector, there have been efforts to decrease the weight of automobiles while retaining their strength, to decrease fuel consumption and the corresponding exhaust emissions. In this work, we consider carbon fibers (CFs) as a candidate material and use MD simulations to explore the processing and chemical phase space through a framework of CF models to identify their effects on elastic performance. We find that density, followed by alignment and functionality of the molecular constituents, dictates the CF mechanical properties more strongly than their size and shape. We then propose a previously unexplored fabrication route for high-modulus CFs, achieved by generating high-density CFs, which leads to CFs with isometric compressive and tensile moduli, enabling potential applications under compressive loading. Finally, using this framework and by defining a parameter that quantifies crosslinking, we demonstrate that increasing the fraction of methyl functional groups increases the crosslinking and the elastic modulus.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147339</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering K-12 Students to Understand and Design Conversational Agents: Concepts, Recommendations and Development Platforms</title>
<link>https://hdl.handle.net/1721.1/147337</link>
<description>Empowering K-12 Students to Understand and Design Conversational Agents: Concepts, Recommendations and Development Platforms
Van Brummelen, Jessica
Conversation influences nearly every aspect of human life, and has done so for ages. Recently, people have begun to converse with technology. This method of technology interaction has rapidly become prominent, and raises unique questions about human-computer interaction. For instance, how do such human-like, relational interactions affect people’s trust of computer systems? Researchers have started to investigate such questions with respect to adults, finding correlations between trust and anthropomorphism of agents. However, very little research investigates children’s perceptions of these devices, and even less investigates how interventions might change these perceptions. This is despite evidence that educational interventions have previously changed how people perceive and trust other technologies, and despite conversational technology being uniquely positioned to appeal to children, influence them relationally, and potentially spread misinformation.&#13;
&#13;
This dissertation presents educational interventions for K-12 students, which aim to encourage healthier understanding and relationships with conversational agents. This includes conversational agent curricula, development platforms and conceptual frameworks. Through studies with children and parents from Western, Industrialized, Educated, Rich and Democratic (WEIRD) and non-WEIRD countries, I found different subsets of the participants’ perceptions of agents changed differently through the activities. For instance, after learning to program agents, participants from non-WEIRD countries felt agents were more competent, more dependable and more like authority figures than those from WEIRD countries did. Children consistently felt agents were warmer and more humanlike than parents did. When participants discussed their trust of agents, I found they frequently mentioned where agents obtained their information, what agents do with the information they are given and how agents are programmed. I also found participants most often mentioned learning something when discussing why their trust changed.&#13;
&#13;
These studies, as well as a systematic literature review and an analysis of various agent development platforms, informed the creation of a pedagogical framework of forty foundational conversational agent concepts, seventeen conversational agent design recommendations and thirteen conversational agent K-12 pedagogy recommendations. For instance, I recommend designing agents with more task-orientation in general, while considering the end user audience. I also recommend informing end users about the trustworthiness of agents through agent design and educational interventions. This is to increase transparency and allow end users to calibrate their trust accordingly. Educators may increase agent transparency through teaching the foundational agent concepts in the framework, which fall under the categories of natural language understanding; conversation representation; dialog management; data access and conversation context; and human-agent interaction. With conversational agents becoming increasingly ubiquitous, it is increasingly important for users of this technology—including and especially children—to be able to calibrate healthy perceptions and levels of trust towards it. This research aims to empower children to do this through a conversational agent design and pedagogy framework.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147337</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pursuing Mid-level Perception from Casual Videos</title>
<link>https://hdl.handle.net/1721.1/147336</link>
<description>Pursuing Mid-level Perception from Casual Videos
Zhang, Zhoutong
This thesis aims to summarize a series of explorations around a central theme: How can we learn mid-level perception from collections of casually shot videos? To avoid readers’ disappointment, I would like to be frank at the start: the contents within are only starting steps towards solving the problem. Specifically, a major part of this thesis addresses the problem of recovering depth and ego-motion despite the dynamics in the video, which is only part of the mid-level perception problem.&#13;
&#13;
Why those in particular? First of all, they are the pillars of 3D understanding for an agent that can move and interact with the dynamic world. In a narrower sense, this corresponds to the "mid-level vision" in Marr’s perception theory, where 2.5D sketches are recovered from processed image signals. If we add the flexibility of motion, then the task also includes recovering the ego-motion, i.e., the trajectory of the viewer through time. In addition, depth and ego-motion recovery have the potential to help solve other mid-level vision tasks. In this thesis, we show that we can solve the video version of the checkershadow illusion [1] when both the observer and the checker are moving simultaneously. This is done by building a 3D representation of the scene that is split into persistent and transient effects, which is only possible with the recovered depth and ego-motion.&#13;
&#13;
Recovering depth and camera ego-motion from videos with unrestricted object motion and ego-motion is quite challenging. The first chapter of the thesis introduces the problem, with brief reviews of past works, and demonstrates how and why they fail to solve the problem robustly. The second chapter addresses a partial form of the problem: for a video with given camera ego-motion, how to recover reliable depth maps even when there is significant object motion in the scene. The third chapter addresses the full problem, presenting a solution that jointly recovers depth and camera ego-motion for casually shot videos.&#13;
&#13;
It remains to ask: why the ambitious title? Why not a more specific one and end the thesis here? Perhaps a bit unconventionally, I would like to think of this thesis as a starting milestone for a topic I feel committed and excited to pursue, rather than an end, a mere wrap-up of what I did for my graduate studies. Therefore, the last chapter, named "Video Canonicalization", is dedicated to an ongoing pursuit that aims to provide a structure helpful for analyzing different works and clarifying design dimensions for solving mid-level vision problems using videos. Some parts of this chapter may seem half-baked, with rudimentary experiments and examples that merely aim to prove the concept. Hopefully these will mature into future projects that better bear the title.&#13;
&#13;
Finally, I would like to cite, though not in its exact form, Patrick Winston’s remarks when I entered MIT: "There’s only one thing I can promise you after your journey at MIT: you will find the thing you are truly excited about, which will drive you for the future. If not, I’ll come to you and you will be in trouble with me." I’m really glad that this turned out to be true, but sad that he will never come to us even if it wasn’t.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147336</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instabilities and flow-induced structures in anisotropic systems</title>
<link>https://hdl.handle.net/1721.1/147335</link>
<description>Instabilities and flow-induced structures in anisotropic systems
Zhang, Qing
The natural world is full of patterns that spontaneously emerge across length scales and material properties. Examples range from microscopic dendritic snowflakes to macroscopic sand ripples and intricate river networks. Central to the formation of patterns is the concept of an instability; complex forms spontaneously develop when a system is driven out of equilibrium. Nature leverages instabilities to ‘fabricate’ complex structures that maximize performance using minimal resources, exploiting the self-amplification of small perturbations. The potential to use instabilities in practical applications, for example to engineer or assemble structured materials, is barely exploited due to the notorious difficulty and limited available strategies to control the self-amplified and non-linear growth that characterizes instabilities. In this thesis, we focus on fluid instabilities due to the adaptivity of fluids to their environments, and establish novel strategies to tune the growth morphology of interfacial instabilities and flow-induced structures at different length scales.&#13;
&#13;
At the macroscale, we induce a morphology transition from the generic dense-branching growth characterized by repeated tip-splitting of the growing fingers to dendritic growth characterized by stable fingertips in the presence of anisotropy in the viscous-fingering instability. This instability arises when a less viscous fluid displaces a more viscous one in a confined environment. When the growth environment is rendered anisotropic by engraving a lattice of channels on a Hele-Shaw cell, we show that the morphology transition and the global symmetry of the dendrites can be controlled by tuning the viscosity ratio between the two fluids or the degree of anisotropy set by the lattice topography. We further exploit a material with shear-enhanced anisotropy where the anisotropy is intrinsic to the fluid, a lyotropic chromonic liquid crystal (LCLC) in the nematic phase. For high enough flow velocities, the tumbling behavior of LCLC solutions can be suppressed, which results in a flow-alignment of the material. This microscopic change in the director field macroscopically enhances the liquid crystal anisotropy to induce the transition from dense-branching to dendritic growth. &#13;
&#13;
Microscopically, we discover the emergence of flow-induced defects and structures in LCLC solutions. Pure-twist disclination loops form in a range of shear rates as a consequence of the low twist elastic constant of LCLC solutions. We demonstrate that the size of the pure-twist disclination loops is governed by the balance between nucleation and annihilation forces, which can be tuned by controlling the flow velocity. Strikingly, at lower shear rates, chiral periodic double-twist structures spontaneously emerge, even though the LCLC is achiral. We show that the mirror symmetry breaking is triggered at regions of biaxial-splay deformations that are unstable and evolve into the energetically cheaper double-twist elastic mode. Our results reveal a novel path to structural chirality in an achiral system. &#13;
&#13;
The control gained over the pattern morphology and structure formation from fluid instabilities can open pathways to harnessing unstable growth to design programmable microstructures in materials and to control assembly and flow of biological systems in microfluidic devices.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147335</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Inference and Regeneration via Active Learning</title>
<link>https://hdl.handle.net/1721.1/147330</link>
<description>Program Inference and Regeneration via Active Learning
Shen, Jiasi
Software now plays a central role in numerous aspects of human society. Current software development practices involve significant developer effort in all phases of the software life cycle, including the development of new software, detection and elimination of defects and security vulnerabilities in existing software, maintenance of legacy software, and integration of existing software into more contexts, with the quality of the resulting software still leaving much to be desired. The goal of my research is to improve software quality and reduce costs by automating tasks that currently require substantial manual engineering effort.&#13;
&#13;
I present a novel approach for program inference and regeneration, which takes an existing program, learns its core functionality as a black box, builds a model that captures this functionality, and uses the model to generate a new program. The new program delivers the same core functionality but is potentially augmented or transformed to eliminate defects, systematically introduce safety or security checks, or operate successfully in different environments. &#13;
&#13;
This research enables the rejuvenation and retargeting of existing software and provides a powerful way for developers to express program functionality that adapts flexibly to a variety of contexts. For instance, one benefit is enabling new development methodologies that work with simple prototype implementations as specifications, then use regeneration to automatically obtain clean, efficient, and secure implementations. Another benefit is automatically improving program comprehension and producing cleaner code, making the code more transparent and the developers more productive. A third benefit is automatically extracting the human knowledge crystallized and encapsulated in legacy software systems and retargeting it to new languages and platforms, including languages and platforms that provide more powerful features.&#13;
&#13;
In this thesis, I present two systems that implement this approach for database-backed programs.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147330</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained Inventory Optimization on Complex Warehouse Networks</title>
<link>https://hdl.handle.net/1721.1/147328</link>
<description>Constrained Inventory Optimization on Complex Warehouse Networks
Spantidakis, Ioannis
Online retailers increasingly face the problem of optimizing the inventory allocation of various products across a large network of warehouses.  In most practical cases, the demand for these products is unknown, and product-level inventory available for distribution across the different warehouses is very limited.&#13;
 &#13;
We first consider the problem of inventory allocation of multiple products, across a network of warehouses. This is a problem commonly faced by large fashion e-retailers. The objective is to minimize the overall shipment cost and to speed up deliveries to customers accounting for inventory constraints on the various products and capacity constraints of warehouses. We propose a multi-period, multi-product newsvendor formulation as well as an efficient solution algorithm that balances the tradeoff between overage and underage costs across time periods. We also establish the rate of convergence of the algorithm. Furthermore, and in collaboration with a fashion e-tailer, we perform a case study showing a reduction of 9% in inventory costs relative to the retailer’s current method.&#13;
 &#13;
We then turn our attention to inventory optimization across a network with cross-fulfillment. Optimizing inventory in such networks is intractable; we resolve this by introducing a tractable algorithm. We introduce the concept of Fulfillment Rules to capture the fulfillment priorities of the retailer while at the same time allowing a tractable approach to the inventory allocation problem that works for both continuous and discrete demand distributions.&#13;
 &#13;
In the final chapter of the thesis, we tackle the issue of high dimensional data in the context of classification settings. We develop a new dimensionality reduction algorithm called Supervised Approach for Feature Engineering (SAFE), which is an alternative to Principal Component Analysis (PCA). SAFE finds uncorrelated, lower dimensional features in order to best explain differences among classes. This allows us to improve the speed and accuracy of the classification task.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147328</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometrically and Temporally Consistent Robot Perception</title>
<link>https://hdl.handle.net/1721.1/147325</link>
<description>Geometrically and Temporally Consistent Robot Perception
Lin, Muyuan
Perception algorithms rely on the capability to reconcile an ideal mathematical model of an object with imperfect data. Despite having been investigated extensively, robot perception remains challenging in the presence of spurious outliers, limited training data, and computational constraints. To this end, we investigate geometric and temporal consistency for modeling and inference in perception problems.&#13;
&#13;
In this dissertation, we begin our work with the development of an end-to-end neural network model for 6D object pose estimation. The model applies a 3D fully-convolutional network to extract geometric features and enforces pairwise consistency of features via spectral convolution on a compatibility graph. We then develop a graph-theoretic framework for the hypothesis pruning problem. Specifically, we provide a planted clique perspective which draws the connection between a statistical model and robust estimation. This perspective leads to the design of a learning heuristic that is efficient and generalizable. Finally, we leverage temporal consistency for the 6D pose tracking of unknown objects in a video sequence. Our algorithm estimates the optical flow to produce temporally stable motion propagation, and optimizes scene structures and 6D object pose jointly. Instead of directly propagating pose estimation using frame-to-frame correlation, we register a target object in a new frame with a globally consistent model of the scene. Our method is shown to be efficient and accurate for 6D pose tracking.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147325</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genesis, dynamics, and dissipation of turbulent magnetic fields</title>
<link>https://hdl.handle.net/1721.1/147309</link>
<description>Genesis, dynamics, and dissipation of turbulent magnetic fields
Zhou, Muni
Astronomical observations indicate that coherent, dynamically important magnetic fields are ubiquitous in the Universe. However, neither the origin problem (what physical mechanisms generate the initial "seed" magnetic fields) nor the dynamo problem (how magnetic fields are amplified and sustained by turbulent plasma motions) is well understood. In addition, the role played by these fields in determining the material and thermodynamic properties of cosmic plasmas, which strongly impact various astrophysical phenomena, is still unknown. This thesis follows the "life cycle" of cosmic magnetic fields and addresses problems including magnetogenesis, the formation of large-scale magnetic fields, the magnetized turbulent cascade through kinetic scales, and the plasma heating that ultimately ensues.&#13;
&#13;
We first demonstrate in a fully kinetic framework the generation of seed magnetic fields through the Weibel instability under a generic large-scale shear flow. The resulting spontaneous plasma magnetization confirms kinetic plasma processes as a plausible cause of magnetogenesis and suggests that cosmic plasmas are thereby ubiquitously magnetized. This work sets the stage for studying whether such microscopic seed fields with filamentary morphology, under the joint action of their own nonlinear evolution and background turbulence, can contribute to the formation of macroscopic magnetic fields. We address this question by studying the dynamics of a large ensemble of interacting magnetic flux tubes, which resembles a wide range of astrophysical systems in addition to the Weibel seed fields. The emergence of large-scale magnetic structure from small-scale turbulence is identified as a consequence of the reconnection of magnetic flux tubes, leading to the inverse transfer of magnetic energy. In the last part of the thesis, we investigate the ensuing strongly magnetized plasma turbulence in the kinetic range (below the ion gyroscale). We find that the self-organization and dynamics of the magnetic fields give rise to intermittency, which determines the turbulent spectrum, and to efficient phase mixing around current sheets, which leads to electron heating. These results advance a first-principles understanding of the origin and dynamics of cosmic magnetic fields and their implications for astrophysical phenomena.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147309</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Ultrafast Energy Transfer across the Photosynthetic Membrane of Purple Bacteria with Near-Native Systems</title>
<link>https://hdl.handle.net/1721.1/147304</link>
<description>Understanding Ultrafast Energy Transfer across the Photosynthetic Membrane of Purple Bacteria with Near-Native Systems
Fiebig, Olivia C.
Purple bacteria are able to capture sunlight and convert it to chemical energy through charge separation with almost 100% quantum efficiency. This remarkable efficiency is due to the arrangement of their pigments into pigment-protein complexes that are further arranged into a larger antenna network within the membrane environment. Peripheral light-harvesting complex 2 (LH2) surrounds the core light-harvesting complex 1 (LH1), which in turn encircles the reaction center (RC). Energy absorbed by LH2 is transferred to LH1 and then finally the RC, where it is converted to chemical energy through charge separation. While the energy transfer dynamics of the photosynthetic light-harvesting complexes in purple bacteria have been studied for decades, two key parameters have been largely ignored: first, the ability of purple bacteria to maintain efficiency in fluctuating environmental conditions, and second, the impact of the lipid membrane environment on the energy transfer pathways. In this thesis, I explore these two parameters by investigating the energy transfer dynamics within the purple bacterial antenna network at each step. First, I study intra-complex dynamics by comparing the energy transfer rates within structural and spectral variants of LH2. Second, I investigate inter-complex energy transfer dynamics between LH2 complexes to directly resolve and understand the pair-wise energy transfer dynamics within the membrane. Finally, I investigate the influence of lipid environment on LH1 to RC energy transfer. These studies are made possible by the use of model membrane nanodiscs, through which I reconstruct the antenna network piece-by-piece to resolve key energy transfer steps in a controlled, near-native environment. 
Overall, the results show that the energy transfer dynamics within the purple bacterial antenna network are robust to fluctuating environmental conditions, with the membrane itself acting as an active participant in the optimization of these energy transfer processes. Furthermore, these studies reveal the utility of nanodiscs in disentangling the many competing parameters within the membrane that influence protein behavior and energy transfer dynamics.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147304</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bioastronautics: Biological Modeling of the Neural Response to Space Radiation and its interaction with Alzheimer's Disease Risk Genes</title>
<link>https://hdl.handle.net/1721.1/147303</link>
<description>Bioastronautics: Biological Modeling of the Neural Response to Space Radiation and its interaction with Alzheimer's Disease Risk Genes
Hinshaw, Robert G
The exotic radiation permeating interplanetary space remains the least understood and perhaps most challenging intrinsic barrier to a sustained human presence beyond Earth. As the coming decade brings the promise of continuous lunar habitation and even the first voyage to another planet, spacefaring endeavors must contend with this ever-present threat to human health. On Earth, as medical advances continue to extend human life and radiation becomes an increasingly potent tool for medical diagnostics and treatment, we are increasingly confronted by our lack of knowledge of the long-term health effects and disease interactions of low-dose (and, in the case of cancer radiotherapy, not-so-low-dose) radiation exposure. &#13;
&#13;
The goal of this dissertation is to explore and develop both new and existing in vitro neural models for radiation research – particularly in the context of neurodegenerative disease. First, we adapt an existing 3D in vitro model of Alzheimer’s disease, which captures the beta-amyloid (Aβ) and phosphorylated tau protein aggregation characteristic of the disease, for use with the particle accelerator facility at the NASA Space Radiation Laboratory (NSRL). We show consistent changes in Alzheimer’s protein pathology and marked differences in RNA transcription dependent on culture genotype and radiation type. Then we expand on that model, leveraging the flexibility of in vitro systems to incorporate a microglial migration assay and to better model the low-fluence nature of the space radiation environment while still retaining compatibility with the NSRL accelerator. We then show proof-of-concept for an accelerator-compatible, rapid-differentiation neuron-astrocyte coculture model derived from human induced pluripotent stem cells in which we investigate the impact of allelic variants in the APOE gene on the interaction between radiation exposure and Alzheimer’s-relevant proteins and RNA transcription. Finally, we explore the development of an ex vivo organotypic brain slice culture model and the challenges to and benefits of adapting such a platform for use with microbeam accelerators.&#13;
&#13;
This work develops approaches to improve both the biological fidelity and the irradiation fidelity of these model systems with the overall goal of creating better tools for investigating the long-term neural consequences of radiation exposure.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147303</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Cryptographically Verifiable Database Management System</title>
<link>https://hdl.handle.net/1721.1/147302</link>
<description>Towards a Cryptographically Verifiable Database Management System
Xia, Yu
Database-as-a-Service (DBaaS) offerings, such as Amazon Redshift and Microsoft Azure SQL Database, are becoming increasingly popular. These services provide high performance and on-demand elasticity without heavy maintenance costs. However, as with all online applications, DBaaS is prone to malicious attacks ranging from server compromises to cheating providers.&#13;
&#13;
We believe that database security is more than just data privacy.&#13;
&#13;
Existing secure DBMSs focus on the security and privacy of data but overlook semantic properties, such as the correctness and ACID properties of transactions. Enforcing these properties is crucial to the functionality of applications. If these guarantees do not hold, catastrophic losses could result. A hacker compromising the server gains complete control of the operating system. The hacker can tamper with the data, perform arbitrary computation, violate transaction properties, or return wrong results to the client to pursue external incentives like financial benefits. Protecting data privacy does not eliminate all the incentives to initiate attacks. For example, the hacker can short the stock price of the data owner while forcing the server to run wrong transactions and return incorrect results, potentially creating business chaos. Besides the correctness of the transactions and results, ACID properties are also critical. For example, two cryptocurrency exchanges went bankrupt due to hackers double-spending their coins through isolation-level attacks.&#13;
&#13;
To address this issue, this dissertation presents Litmus, a database management system that can provide verifiable proofs of transaction correctness and semantic properties, including atomicity and serializability. Litmus features a co-design of both the database and the cryptographic parts.&#13;
&#13;
We evaluate a proof-of-concept prototype of Litmus on the YCSB and TPC-C benchmarks. We show that under certain cryptographic assumptions, Litmus can verifiably process up to thousands of transactions per second (txn/s). Our results show a promising practical direction considering that PayPal processes on average 115 txn/s and VISA 2,000-4,000 txn/s. The proof is about tens of kilobytes per verification batch and verifies in a constant time of a few hundred seconds. Moreover, Litmus can be extended to verify consistency as well.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147302</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical Formulations in Conceptual Design, Analysis, and Optimization</title>
<link>https://hdl.handle.net/1721.1/147300</link>
<description>Mathematical Formulations in Conceptual Design, Analysis, and Optimization
Norheim, Johannes J.
Engineering design problems are often multidisciplinary in nature, giving them a natural decomposition into subproblems. The field of multidisciplinary design, analysis, and optimization (MDAO) has developed methods to integrate these subproblems and generate standardized mathematical formulations. In the detailed design phase of engineering, multiple technical reasons beyond the nature of the discipline justify the decomposition. However, in the early phase, during conceptual design, when the models involve equations or constraints on elementary functions with highly heterogeneous structures, multiple decompositions are often possible. Furthermore, the decompositions often involve inversions of the elementary functions, leading to a restructuring of the design equations. Applying MDAO methods to specific decomposition choices can lead to mathematical formulations with fewer variables and constraints or equations, which are often desirable properties. This thesis explores how to restructure the conceptual design equations and generate different mathematical formulations based on the same underlying model. For this purpose, we develop a new graph-based mathematical formalism, which generalizes many of the existing MDAO methods through a set of atomic transformations. We find desirable problem structures by solving different combinatorial optimization problems related to the well-known feedback arc set and assignment problems. Solving these optimization problems appears to take exponential time on average, but makes it possible to find optimal structures for problems with 30 variables in less than 1 second. Finally, we apply the restructuring method to three conceptual design problems from the aerospace and naval engineering domains, with up to 40 sizing relationships. In all but one case, we find feed-forward structures. Although the restructured problems have the desired properties, new and unexpected numerical challenges arise in some cases.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147300</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed Evolution of Glycan-Binding Proteins</title>
<link>https://hdl.handle.net/1721.1/147298</link>
<description>Directed Evolution of Glycan-Binding Proteins
Ward, Elizabeth M.
Glycan-binding proteins (GBPs) are commonly used reagents for the study of glycans. They do not require specialized equipment or time-consuming experimental methods, making them widely used tools for basic research and clinical applications. Existing glycan recognition reagents, antibodies and lectins, are limited, and discovery or creation of reagents with novel specificities is time-consuming and difficult.&#13;
&#13;
This thesis details the generation of novel GBPs from a small, hyperthermostable DNA binding protein by directed evolution. A yeast surface display method for evolution of GBPs was developed and used to generate GBPs for the recognition of mammalian glycans sialic acid and the cancer-associated disaccharide Thomsen-Friedenreich (TF) antigen. Characterization of these proteins shows them to have specificities and affinities on par with currently available lectins. The proteins can be functionalized to create reagents that prove useful for glycoprotein blotting and cell staining applications.&#13;
&#13;
Carbohydrate-protein interactions are often low affinity. Naturally occurring GBPs often oligomerize to make multivalent interactions with glycan ligands, increasing the avidity of the interaction. Fusion of the evolved GBPs to the coiled-coil trimerization domain of the lectin surfactant protein D (SP-D) leads to the formation of a trimeric GBP. These trimers are properly folded, stable, and have increased binding affinity compared to monomeric GBPs. Generation of trimeric Sso7d-based GBPs is a strategy for increasing the functional affinity of the evolved proteins, thereby making the proteins useful for a wider range of applications.&#13;
&#13;
The overall goal is to create GBPs for glycans that currently lack reagents for their study. One area that can benefit from more GBP reagents is bacterial glycobiology. Many pathogens have glycans involved in virulence. One such organism is Campylobacter jejuni. The N-linked protein glycosylation pathway in Campylobacter jejuni is needed for pathogenicity of the organism, as loss of glycosylation decreases adhesion to and invasion of intestinal epithelial cells and differentially modulates inflammatory responses in a gut-immune co-culture model. Future application of the developed GBP evolution platform to bacterial glycans will have great impact on the field of bacterial glycobiology, providing powerful tools for studying the interactions of human pathogens, commensals, and symbionts with their hosts, together with novel diagnostic and analytical reagents.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147298</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimization Centered Approach to Multifidelity Aircraft Design</title>
<link>https://hdl.handle.net/1721.1/147291</link>
<description>An Optimization Centered Approach to Multifidelity Aircraft Design
Karcher, Cody Jacob
A common thread across all of engineering is the concept of design optimization. When a new product is in development, it is the job of the design team to ensure the final result performs well as measured by some set of evaluative criteria, and design decisions are made throughout the process in an effort to maximize this final performance objective. Parallel to this idea is the notion of numerical optimization, where various mathematical methods are used to construct closed-form optimization problems that are then solved using computing resources. The past few decades have seen extensive effort at applying numerical optimization to design optimization given their similar nature, and in aircraft design these efforts are primarily categorized as techniques of Multi-Disciplinary Analysis and Optimization (MDAO). For a number of compelling historical reasons, the MDAO community tends to assume the presence of large integrated analysis models that are pieced together in an MDAO framework before being integrated with numerical optimization. But recent work has begun to challenge this approach, instead assuming that aircraft design problems are fundamentally compatible with a fast and efficient form of optimization called Geometric Programming (GP) and adapting all analysis models to fit this form. Both the analysis-centered (MDAO) and optimization-centered (GP) approaches have considerable merit, but to date remain fundamentally incompatible. This work takes the best elements from both approaches, the efficient mathematical structure from GP and the natural ability to integrate existing black-box analysis models from traditional MDAO, and develops two new optimization algorithms that enable an optimization-centered, multifidelity approach to aircraft design. The new algorithms are benchmarked against a current state-of-the-art algorithm using a set of example problems, and the new design approach is applied to two representative aircraft design case studies.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147291</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low temperature heterogeneous integration of germanium on silicon</title>
<link>https://hdl.handle.net/1721.1/147290</link>
<description>Low temperature heterogeneous integration of germanium on silicon
Postelnicu, Eveline
Data communication relies on interconnects between circuit elements on a chip as well as between chips. As feature sizes decrease for silicon (Si) complementary metal oxide semiconductor (CMOS) transistors, electrical interconnects face increasing delay and increasing power consumption. Photonic interconnects can ameliorate issues with power consumption by reducing heat dissipation and allowing for higher bandwidths and higher degrees of multiplexing. Germanium (Ge)’s absorption in the telecom wavelength range (1.3-1.55 µm), its transparency in the mid- to longwave infrared (mid-LWIR) range, and its ease of integration on a Si backbone make it a perfect candidate for photodetectors (PDs) for telecom optical interconnects and waveguides for mid-LWIR optical sensors. However, optical interconnects are still larger than electrical interconnects, motivating the move to vertical integration of photonic devices. In order to achieve vertical integration, interconnects must be integrated in the back-end-of-line (BEOL) of a CMOS process flow. BEOL integration introduces strict temperature requirements of T&lt;450°C to prevent metal diffusion and oxidation of devices already on-chip.&#13;
&#13;
First, this thesis tackles BEOL-compatible active devices such as Ge-on-Si PDs for the near-IR wavelength range. We developed a BEOL-compatible process flow for Ge-on-Si epitaxy. Investigating strain in our low-temperature Ge led to the discovery of a residual compressive strain over an extended temperature regime compared to prior work. Based on this residual compressive strain, a model is proposed for misfit dislocation nucleation in Ge-on-Si epitaxy. Point defects and dislocation kinetics prevalent in low-temperature growth of Ge are characterized and analyzed via post-processing annealing. The impact of these defects and processing conditions on Ge-on-Si photodetector device performance is also explored. We found that acceptor-like point defects identified via Hall effect measurements are responsible for conductivity-type conversion via low-temperature post-processing. These point defects impact the internal quantum efficiency of Ge-on-Si PDs, while dislocations appear to dominate device leakage current. We also found that effective sidewall passivation, a remote heterojunction structure, and post-processing annealing of Ge-on-Si PDs result in the lowest reported dark current density for Ge-on-Si PDs, 160 nA/cm². Lastly, BEOL-compatible passive Ge devices for mid-LWIR sensing applications are investigated. Room-temperature-evaporated amorphous Ge on FZ-Si waveguides demonstrates a transmission loss of 15 dB/cm at a 9.79 µm wavelength, which is a promising start for room-temperature-evaporated Ge waveguides.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147290</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemistry, Transport and Function of Ionic Phases in the Solid Electrolyte Interphase on Lithium Metal Anodes</title>
<link>https://hdl.handle.net/1721.1/147289</link>
<description>Chemistry, Transport and Function of Ionic Phases in the Solid Electrolyte Interphase on Lithium Metal Anodes
Guo, Rui
Lithium (Li) metal anodes in liquid electrolytes suffer low Coulombic efficiency (&lt; 99.9%) arising from the chemically inhomogeneous nature of the native solid electrolyte interphase (SEI), which impedes smooth Li plating and leads to excessive electrolyte consumption. Despite much attention paid to engineering Li interfaces of late, there is still limited understanding of the desired chemical composition and structure of an improved Li SEI. One major challenge has been a lack of empirical data on individual SEI phases present at a metallic Li interface, where the chemistry, transport properties and functions have not been well resolved. Consequently, the field still relies on the properties of bulk analogue materials typically invoked to understand SEI behavior. &#13;
&#13;
To address this challenge, this thesis adopted an approach of deconstructing the multiphasic native SEI by building single-component SEI models on metallic Li and probing the transport properties and electrochemistry of individual phases in typical battery electrolytes. In the first part of this thesis, a single-component SEI of lithium oxide (Li2O) was synthesized directly onto Li foils by controlled metal-gas reactions, generating model Li2O SEI with nanoscale thicknesses (20–100 nm) commensurate with the native SEI derived from electrolytes. The model Li|Li2O electrodes serve as a platform for further chemical and electrochemical characterizations. In particular, electrochemical impedance spectroscopy (EIS), combined with interface modeling, was used to extract transport properties (i.e., ionic conductivity, diffusivity, charge carrier concentration and activation energy barriers) of the Li2O SEI in a carbonate electrolyte. The Li2O SEI was also studied as a function of synthesis conditions, revealing microstructural sensitivities that can be tuned to modulate transport behaviors, and was further compared with similarly deconstructed Li|LiF interfaces to isolate chemistry-specific differences. Our results showed that the ionic conductivity of the Li2O model SEI was several orders of magnitude higher than reported values obtained from bulk pellet measurements, revealing dramatically different chemical and microstructural environments between bulk materials and SEI phases.&#13;
&#13;
While chemical evolution of the SEI has been widely recognized in aging processes of Li-ion batteries, pinpointing the chemical origins by tracing them to specific SEI phases has been experimentally challenging. In the second part of this thesis, the single-component model interfaces were further used to study the chemical reactivity between individual SEI phases and battery electrolytes. The degree of interaction between SEI and electrolytes was examined by EIS in the lower-frequency range, and further characterized by X-ray photoelectron spectroscopy (XPS) and X-ray absorption near-edge spectroscopy (XANES). Contrary to some conventional wisdom that ionic phases are stable, our findings shed light on the fact that the ionic SEI phases, particularly in certain electrolytes, can undergo dynamic chemical evolution. These changes can then significantly influence transport through the interfaces in ways that decrease the stability of the SEI. More broadly, this work may have direct implications for ex situ modification approaches for Li: it is critical to examine the reactivity of such interfaces with the electrolyte to ensure that modified interfaces are truly protective and stable.&#13;
&#13;
Although the transport properties and chemical reactivity of individual SEI phases (i.e., Li2O and LiF) were acquired from single-component SEI studies, there still exists a knowledge gap in understanding the structure-related effects of SEI phases. One particular component of interest is LiF, which has been widely reported as beneficial to an improved SEI; however, the functionality of nanostructured LiF phases in the native SEI remains vague. The last part of this thesis examined if and how LiF influences Li deposition under well-defined and experimentally observable conditions. To do so, nanoscale LiF particles with tunable sizes (30–300 nm) were synthesized on Cu electrodes via controlled electrochemical reduction of fluorinated gases. The impact of LiF phases on the overpotential and morphology of Li deposition was studied in battery electrolytes using cyclic voltammetry combined with electron microscopy. Our findings suggested that the total surface area of ex-situ-formed LiF particles has a strong positive correlation with the reduction of the Li plating overpotential. Hence, this work implied that the morphology of LiF particles in the SEI can affect Li nucleation without having significant impacts on the reversibility of Li.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147289</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal distributed control and estimation for systems with spatiotemporal dynamics</title>
<link>https://hdl.handle.net/1721.1/147288</link>
<description>Optimal distributed control and estimation for systems with spatiotemporal dynamics
Arbelaiz Mugica, Juncal
Traditional optimal control and estimation synthesis often relies on the implicit assumption that sufficient computational power and bandwidth are available to implement centralized communication architectures. However, this is often not the case in large-scale spatially distributed systems, in which relying only on spatially localized information transfer might be preferred or even imperative for control and estimation. Motivated by this challenge, this dissertation analyzes optimal distributed control and estimation synthesis for systems with spatiotemporal dynamics. Emphasis is placed on the state estimation problem for dynamical systems with shift-invariances over unbounded spatial domains and with distributed noisy measurements. Dimensional analysis and scaling arguments are used to define physically interpretable dimensionless groups, which are found to be related to the measurement signal-to-noise ratio and to decentralization measures. The branch point locus is introduced as a useful tool to systematically explore the sensitivity of the spatial localization of the feedback operator to the relevant dimensionless groups.&#13;
&#13;
After some background is provided and mathematical preliminaries are introduced, the information structures of the optimal (in the sense of error variance minimization) state estimator for infinite-dimensional spatially invariant systems over &#119871;² (R) spaces are analyzed. The optimal state estimate is described by a spatially invariant distributed-parameter Kalman-Bucy filter. Its Kalman gain operator is a spatial convolution. Hence, the spatial decay of its kernel determines the information structures of the filter. In the problem set-up considered, such kernel exhibits asymptotic exponential spatial decay, which implies that estimation of the state at each spatial site heavily relies on local measurements. The role that the statistical properties of the noise processes perturbing the plant play in such spatial localization is analyzed. Under certain assumptions, it is shown that noise can make the optimal estimator rely more heavily on local information. A matching condition for which the filter gain is completely decentralized is found. Two case studies illustrate the theoretical results: i) optimal estimation of a diffusion process over the real line, and ii) optimal estimation of a linearized Swift-Hohenberg equation over the real line. Second, a similar analysis and spatial localization results for systems over Sobolev spaces are presented. Typically, these are necessary to synthesize filters for plants with higher order temporal dynamics, such as elastic waves. Third, the optimal estimation problem with hard information constraints for spatially invariant systems over &#119871;² (R) spaces is studied. Based on insights from analyzing the information structures of the distributed-parameter Kalman-Bucy filter, a convex functional optimization is proposed to design an optimal information-constrained filter gain for a subclass of spatially invariant plants. 
Such a gain operator minimizes estimation error variance, is stabilizing, and is compactly supported in space (i.e., the filter is enforced to share information only locally). The size of such support is defined a priori and determines the communication burden of the filter. The method is applied to design an optimal information-constrained Kalman-Bucy filter for a diffusion process over the real line, for which performance-locality trade-offs are discussed. Finally, conclusions are drawn and related future research directions are discussed.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147288</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Micro/Nanostructured Surfaces for Thin-Film Condensation Heat Transfer Enhancement in Steam Power Plants</title>
<link>https://hdl.handle.net/1721.1/147287</link>
<description>Scalable Micro/Nanostructured Surfaces for Thin-Film Condensation Heat Transfer Enhancement in Steam Power Plants
Zhao, Yajing
Steam power plants, which contribute to over 50% of energy production globally, rely on condensers to control system-level energy efficiency. Due to the high surface energy of common heat exchanger materials, the vapor condenses by forming a continuous liquid film with low thermal conductivity (filmwise condensation), hindering heat transfer from the vapor side to the condenser surface. Hydrophobic surfaces achieved by either chemical methods (e.g., coating treatment) or physical methods (e.g., structure design) have shown great promise in enhancing condensation heat transfer by promoting dropwise condensation. However, the short lifetime and high fabrication cost of most of these hydrophobic surfaces remain a challenge for long-term and large-scale industrial applications. A promising solution to enhancing condensation heat transfer in a robust and scalable manner is to control the thickness and thermal conductivity of the condensate film, which we term thin-film condensation. This can be achieved by sandwiching a thin layer of porous metal wick between a hydrophobic membrane and the condenser surface to confine the condensed liquid, forming a thin liquid-metal composite film that significantly improves the effective thermal conductivity of the condensate-filled porous media.&#13;
&#13;
In this work, we designed, fabricated, tested, and demonstrated thin-film condensation heat transfer using commercially available materials and scalable approaches. First, we proved the concept using biphilic, microchannel-assisted hierarchical copper surfaces made of commercially available copper foams and copper meshes. Condensation heat transfer on the hierarchical copper surfaces was enhanced by up to 2x compared to conventional filmwise condensation, even with flooding on the surface due to defects in the mesh and the coating. Then, we investigated electrospinning as a potential approach to customize hydrophobic membranes for the thin-film condenser surfaces. The key benefit of the hydrophobic membrane in the surface design is to generate capillary pressure through micro/nanoscale pores, which acts as the driving force for the condensate flow in the metal wick. We conducted a parametric study on the effects of several key fabrication parameters on the pore size of the electrospun membrane, using a fractional factorial design. The solution feeding rate was found to be the most impactful parameter on the membrane pore size and should be prioritized during membrane optimization. A heat and mass transfer model was developed to predict the heat transfer performance of the thin-film condenser surfaces made of electrospun membranes and porous copper wicks. Upon careful design of the surface structures, an over 5x heat transfer enhancement is expected on these thin-film condensers, which is comparable to state-of-the-art dropwise condensation. Finally, a techno-economic analysis was conducted on the thin-film condensers. The result shows that the additional material for the condenser tube modification costs less than 10% of the condenser cost.
However, with the expected 5x steam-side condensation heat transfer performance, thin-film condensers will be able to increase power plants' output by 2-6%, equivalent to an over $10B value proposition for steam power plants across the globe.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147287</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Resilient Public Transportation System: Effective Monitoring and Control under Service Disruptions</title>
<link>https://hdl.handle.net/1721.1/147286</link>
<description>Toward a Resilient Public Transportation System: Effective Monitoring and Control under Service Disruptions
Mo, Baichuan
Urban public transit is an important component of transportation systems and plays a critical role in providing mobility in many metropolitan areas. However, with aging systems, continuous expansion, and near-capacity operations, transit systems are susceptible to unplanned delays and service disruptions caused by equipment, weather, passengers, or other internal and external factors, resulting in great inconvenience for passengers and economic loss for operators. Ensuring good service provision during service disruptions is important for public transit management. &#13;
&#13;
Resilience is an important concept related to incidents. It usually refers to the ability of an entity to return to its initial conditions after it is disturbed. Since monitoring, control, and planning are the three major tasks for public transit system management, we define the resilience of a public transit system as the ability to monitor, control, and plan for incidents (service disruptions) in ways to mitigate congestion, improve travel efficiency, and reduce safety risks. &#13;
&#13;
This dissertation focuses on the first two tasks to improve the resilience of public transit operations in light of disruptions that regularly take place. Specifically, we aim to 1) understand the impact of unplanned incidents on public transit (PT) systems (i.e., monitoring); and 2) design mitigating strategies to relieve incident impacts (i.e., control). The specific topics we cover in the thesis can be categorized by a two-by-two matrix. The first dimension considers short-term (e.g., less than a couple of minutes) vs. long-term (e.g., more than 1 hour) incidents, while the second dimension considers monitoring and control tasks. Five different studies under the umbrella of this two-by-two matrix are presented. &#13;
&#13;
The first study evaluates transit system performance under random short-term service suspensions using a bulk-service queue model. We prove that under random suspensions, headways can be represented as the difference between two compound Poisson exponential variables. Assuming no vehicle overtaking, we approximate the headway as a zero-inflated truncated normal distribution to obtain a closed-form moment generating function (MGF). Based on the MGF, we derive the system stability conditions and the mean and variance of queue length and waiting time at each station with analytical formulations. The second study provides an empirical analysis of the impact of service disruptions. We use a real-world train collision incident at the Chicago Transit Authority (CTA) system to analyze the impact of unplanned long-term incidents on the system's demand, supply, and passenger behavior. We also propose a redundancy index to quickly identify alternative capacity in CTA under service disruptions. The third study proposes a probabilistic method to infer passengers' behavior (e.g., waiting, switching to another line, transferring to a bus) under disruptions. The main contribution is a probabilistic model to recognize whether an observed smart card record (e.g., a transfer to a bus stop) is normal behavior or due to the incident. This model allows us to extract the actual behavioral responses and outperforms the typical rule-based methods. The fourth study proposes a station-based path recommendation model to reduce the total system travel time during disruptions. We use a robust optimization-based formulation to address the demand uncertainty. The closed-form robust counterpart is derived. To tackle the lack of an analytical formulation of travel times due to passengers being left behind, we propose a simulation-based first-order approximation to transform the original problem into a linear program and solve it iteratively with the method of successive averages. 
The fifth study proposes an individual-based path recommendation model with the objective of minimizing total system travel time and respecting passengers’ path choice preferences. Passengers’ behavior uncertainty in path choices given recommendations and travel time equity are also considered in the formulation. We model the behavior uncertainty based on passenger’s prior preferences and the posterior path choice probability distribution with two new concepts: epsilon-feasibility and Gamma-concentration, which control the mean and variance of path flows in the optimization problem. We show that these two concepts can be transformed into linear constraints using Chebyshev’s inequality. The individual path recommendation problem with behavior uncertainty is efficiently solved using Benders decomposition. Finally, we use a post-adjustment heuristic to address equity requirements.&#13;
&#13;
Future research directions and potential applications of the work are discussed in the last chapter.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147286</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure-property correlations in compositionally complex ferroelectrics</title>
<link>https://hdl.handle.net/1721.1/147285</link>
<description>Structure-property correlations in compositionally complex ferroelectrics
Kumar, Abinash
Compositional control in functional oxides can significantly alter material properties and lead to the emergence of unconventional behavior. Ferroelectrics with switchable polar ordering, for instance, are influenced by local compositional variation, thus requiring an understanding of structure-property relationships. My dissertation work focuses on using atomic scale characterization with an aberration-corrected scanning transmission electron microscope (STEM) to determine these structure-property correlations in two compositionally complex ferroelectric systems: compositionally heterogeneous relaxors and stoichiometry-controlled orthoferrites.&#13;
&#13;
Relaxor ferroelectrics show exceptional electromechanical response, energy storage capacity, and electrocaloric properties, finding applications ranging from ultrasound imaging to electrical energy storage systems. These materials show nanoscale chemical and structural heterogeneity that make finding the origin of their exceptional properties a seemingly intractable problem. Here, I utilize STEM to quantify various types of nanoscale heterogeneities and determine their correlation with the nanoscale polar domain structure in a prototypical relaxor ferroelectric system Pb(Mg₁/₃Nb₂/₃)O₃-PbTiO₃ (PMN-PT). I determined three types of heterogeneities: chemical order, oxygen octahedral tilt order, and oxygen octahedral distortion order. These heterogeneities vary with Ti content and show spatial correlation with low-angle domain walls, indicating their role in breaking down the long-range polar order and stabilizing the nanoscale domain structure essential for the relaxor response. I identified monoclinic-like distortions, which are related to both the Ti content and electromechanical response. The connection among heterogeneities in local chemistry, structure, and polarization is revealed through comprehensive STEM characterization. These experimental results also validate theoretical models proposed since the discovery of relaxors.&#13;
&#13;
Further, due to the extensive demand for miniaturization of modern electronic devices, relaxor ferroelectrics are desired in their thin film forms. This requires a detailed study of the growth and characterization of compositionally heterogeneous relaxor ferroelectric thin films. Thin films also make it possible to investigate nanoscale domain structures under different conditions, such as epitaxial strain. With chemical and structural heterogeneities determining polar structure in bulk relaxors, their spatial distribution and behavior need to be determined to relate structure with properties in thin films. I determine the distribution of local chemistry and its role in the evolution of polar structure across thickness in PMN-PT relaxor thin films grown with pulsed-laser deposition (PLD). Further, I quantify the effect of epitaxial compressive strain in terms of the evolution of polar structure and chemical heterogeneities to explain the structural origin of decreasing relaxor behavior with strain. With an increase in epitaxial strain, the size of polar domains and chemically ordered regions increases along the film growth direction, yet the coupling among local chemistry and structure on a unit cell basis diminishes, resulting in a poor relaxor response in highly strained films.&#13;
&#13;
The second ferroelectric system of interest is orthoferrites. These have been shown to be potential single-phase multiferroic materials with the coexistence of both ferroelectric and magnetic ordering. These materials exhibit unconventional ferroelectric behavior despite their centrosymmetric structure in bulk form. The orthoferrite YFeO₃ (YFO) in thin film form even shows composition-dependent ferroelectricity. Y-rich YFO thin films reveal ferroelectric behavior while Fe-rich films show no ferroelectric response, with the structural origins of such behavior yet to be determined. Here, I use STEM to probe the atomic structure and elemental distribution in stoichiometry-controlled YFO thin films grown by PLD. I identify Y_Fe antisites in Y-rich YFO thin films that are absent in Fe-rich films. These antisites modify the structure and break the local symmetry, stabilizing the ferroelectric ordering. In addition, planar defects such as antiphase boundaries (APBs) are also found in Y-rich YFO, which show large structural relaxations. Despite the Y-rich composition, these APBs also host Fe_Y antisites, exhibiting bi-stable polar distortions. Density functional theory predicts that the formation energy and polarization switching barrier reduce by a factor of three at these APBs, thus leading to changes in local properties. In the Y-rich YFO films, APBs show significantly lower density than Y_Fe antisites, indicating that the ferroelectric response arises predominantly from Y_Fe antisites. These results reveal that defect engineering achieved via stoichiometry control allows tuning properties in functional orthoferrites.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147285</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Calculating Aesthetic Value: Formal Models of Description and Evaluation for Aesthetic Systems</title>
<link>https://hdl.handle.net/1721.1/147284</link>
<description>Visual Calculating Aesthetic Value: Formal Models of Description and Evaluation for Aesthetic Systems
Haridis, Alexandros
This dissertation presents research on visual calculating with shape grammars that bridges the gap between seeing as an open-ended aesthetic process and the mathematical formality of computing methods in architecture and other areas of design. The work extends across two areas of research.&#13;
&#13;
First, by taking as background developments in “mathematics of shapes”, it introduces new approaches to the structural description and geometric computation of form. A framework is developed for working with point-set-free topological descriptions and for analyzing structural properties of computations, such as the continuity of rules involving incompatible shape descriptions. Further, it introduces a new approach to studying “construction lines” and “registration marks” as point-line arrangements with their own algebraic, geometric, and combinatorial properties. In addition to individual contributions to the area of mathematics of shapes, this work illustrates a broader methodological direction for linking architecture and design with mathematics and computing, whereby visual observation and spatial intuition in the former are the starting point for developments in the latter.&#13;
&#13;
Second, as part of a recent direction toward assimilating aesthetic theory into visual calculating, an aesthetic evaluation system is developed for producing aesthetic responses related to the form of aesthetic objects. While the system is formulated in general terms, it uniformly assimilates aesthetic responses that are based on formal notions of unity (order) and variety (complexity/diversity), measured individually or in terms of some ratio that interrelates them. Expanding on the topic of aesthetic value systems, the last part of the dissertation discusses the role of aesthetic value judgements in educational programs connecting computing with design areas and why they are a necessary component in any pursuit of models of human-level “general” intelligence.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147284</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Assessment of Community-Scale Electrodialysis Desalination Systems and Improved Scale Mitigation through Pulsed Operation</title>
<link>https://hdl.handle.net/1721.1/147281</link>
<description>An Assessment of Community-Scale Electrodialysis Desalination Systems and Improved Scale Mitigation through Pulsed Operation
Al-Rashed, Rashed A.
Globally, rapidly developing water scarcity issues stress the need for efficient desalination practices. This thesis demonstrates electrodialysis (ED) as a potentially cost-effective high recovery desalination technology for brackish water desalination and presents novel methods of increasing system recovery through pulsed operation. &#13;
&#13;
A high-recovery (80%) pilot electrodialysis reversal (EDR) system was designed, built, and tested against a typical low-recovery (40%) commercial reverse osmosis (RO) system in India. Cost and energy consumption breakdowns were calculated and presented for both systems to determine the suitability of these technologies for different use cases. Pumping energy was identified as the primary source of energy consumption in both systems, and a set of recommendations was made to reduce it, including operational changes and more careful pump selection. Additionally, sensitivity analyses were performed to determine the effects of increasing water and power costs. In situations sensitive to upfront costs, the commercial RO system’s significantly lower capital cost is appealing despite its higher operational costs. The pilot EDR system’s lower energy and feed water consumption make it a promising cost-effective option as power and water costs increase, most notably in situations with higher or varying target salinities.&#13;
&#13;
The maximum water recovery ratio of brackish water electrodialysis systems is limited by scale formation, defined as the attachment of inorganic compounds to membrane surfaces. In an electrodialysis system, selective transport of ions through membranes results in the formation of concentration polarization (CP) near membrane surfaces, which worsens membrane scaling and energy dissipation in the process. The application of a non-stationary pulsed electric field can be an effective approach for suppressing CP; however, the performance of pulsed electrodialysis (PED) heavily relies on the appropriate selection of pulsing parameters. The effects of these parameters on desalination rate and energy consumption were first investigated in the absence of scale-precipitating components. The theoretical and experimental results indicate that pulsed operation postpones the limiting condition in ED, allowing for higher input voltages. The energy savings gained from suppressing concentration polarization through PED compensate for the inefficiencies introduced by the longer desalination time, resulting in specific energy consumption comparable to that of conventional ED (CED). This parametric understanding of PED provides the guidelines required to tune the process according to the desired desalination objectives.&#13;
&#13;
To assess the scale mitigating benefits of PED, a series of consecutive nine-day batch experiments using synthesized brackish water with high scaling propensity were performed. Pulsing parameters were selected using insight gained from the aforementioned parametric understandings, and two different pulsing frequencies (0.5 and 5 Hz) were examined to evaluate the effects of pulse/pause durations on various steps of salt formation kinetics. The extent of membrane scaling and the type of formed salt crystals were identified by observing the evolution of system pressure drop and energy consumption over time, combined with microscopic and spectroscopy analyses of membranes extracted from the stack at the end of each nine-day experiment. The observed results indicate that membrane scaling decreased in pulsed operation compared to conventional ED. Low-frequency pulsing was effective in decreasing the transport rate of scale precipitating ions to the concentrate channel. Finally, a novel hybrid pulsed-conventional operation is theorized, with the intent of leveraging the benefits of pulsing early in a batch to control supersaturation of the boundary layers and switching to conventional operation later in a batch to minimize desalination time.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147281</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Query-Driven Adaptive Sampling</title>
<link>https://hdl.handle.net/1721.1/147280</link>
<description>Query-Driven Adaptive Sampling
Ayton, Benjamin James
Automated information gathering allows exploration of environments where data is limited and gathering observations introduces risk, such as underwater and planetary exploration. Typically, exploration has been performed in service of a query, with a unique algorithm developed for each mission. Yet this approach does not allow scientists to respond to novel questions as they are raised. In this thesis, we develop a single approach for a broad range of adaptive sampling missions with risk and limited prior knowledge. To achieve this, we present contributions in planning adaptive missions in service of queries and in modeling multi-attribute environments. &#13;
&#13;
First, we define a query language suitable for specifying diverse goals in adaptive sampling. The language fully encompasses objectives from previous adaptive sampling approaches, and significantly extends the possible range of objectives. We prove that queries expressible in this language are not biased in a way that avoids information. We then describe a Monte Carlo tree search approach to plan for all queries in our language, using sample-based objective estimators embedded within tree search. This approach outperforms methods that maximize information about all variables in hydrocarbon seep search and fire escape scenarios. Next, we show how to plan when the policy must bound risk as a function of reward. By solving approximating problems, we guarantee risk bounds on policies with large numbers of actions and continuous observations, ensuring that risks are only taken when justified by reward. &#13;
&#13;
Exploration is limited by the quality of the environment model, so we introduce Gaussian process models with directed acyclic structure to improve model accuracy under limited data. The addition of interpretable structure allows qualitative expert knowledge of the environment to be encoded through structure and parameter constraints. Since expert knowledge may be incomplete, we introduce efficient structure learning over structural models using A* search with bounding conflicts. By placing bounds on likelihood of substructures, we limit the number of structures that are trained, significantly accelerating search. Experiments modeling geographic data show that our model produces more accurate predictions than existing Gaussian process methods, and using bounds allows structure to be learned in 50% of the time.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147280</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Logic of Subtractives or, Barely anyone tried almost as hard as me</title>
<link>https://hdl.handle.net/1721.1/147272</link>
<description>The Logic of Subtractives or, Barely anyone tried almost as hard as me
Baron, Christopher
This dissertation is about the meaning and distribution of the modifiers "almost" and "barely." We advocate for an analysis in which they are, across their uses, modifiers of quantifiers, encoding set subtraction; "barely," but not "almost," additionally contributes negation. Both modifiers remove elements from the arguments of quantifiers that they modify, and require exhaustification for their licensing as modifiers. We start with subtractive modified quantificational determiners, like "almost every" and "barely any." We then push this analysis further, showing that it extends very naturally to degree constructions like comparatives and equatives, and captures the facts better than other options. This extension also provides an argument for the idea that all natural language scales are dense. Numeral constructions like "almost one hundred" and "barely one hundred" appear to complicate the idea that "almost" and "barely" are in complementary distribution, but we argue this shows there is more than meets the eye in such constructions. We offer a theory of numeral constructions that captures the overlapping distribution. Finally, we suggest that subtractives are evidence in favor of a view of exhaustification in which its contribution is presuppositional, rather than truth-conditional.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147272</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A computational framework for emotion understanding</title>
<link>https://hdl.handle.net/1721.1/147271</link>
<description>A computational framework for emotion understanding
Houlihan, Sean Dae
The organizing principle of this thesis is that human emotion understanding reflects a model-based solution to a large class of ill-posed inverse problems. To interpret someone's expression, or predict how that person would react in a future situation, observers reason over a logically- and causally-structured intuitive theory of other minds. For this work, I chose a domain that is perceptually and socially rich, yet highly constrained: a real-life high-stakes televised one-shot prisoner's dilemma.&#13;
&#13;
In the first set of studies, I illustrate that forward predictions play a critical role in emotion understanding. Intuitive hypotheses about what someone is likely to feel guide how observers interpret and reason about expressive behavior. By simulating human causal reasoning as abductive inference over latent emotion representations, a parameter-free Bayesian model captured surprising patterns of social cognition.&#13;
&#13;
In the second set of studies, I formalize emotion prediction as a probabilistic generative model. Mental contents inferred via the inversion of an intuitive theory of mind generate the basis for inferring how others will evaluate, or 'appraise', a situation. The Inferred Appraisals model extends inverse planning to simulate how observers infer others' reactions, in the terms of utilities, prediction errors, and counterfactuals on rich social preferences for fairness and reputation. I show that the joint posterior distribution of inferred appraisals provides a powerful method for discovering the latent structure of the human intuitive theory of emotions.&#13;
&#13;
In the third set of studies, I build a stimulus-computable model of emotion understanding. This work emphasizes the importance of testing whether computational models can use emotion-relevant information in service of social cognition. I suggest that building computer systems that approach human-level emotional intelligence requires generative models, where inferred appraisals function as latent causal explanations that link behavior, mental contents, and world states.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147271</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transclave Economy: Immigrant Business Survival in An Era of Pandemic</title>
<link>https://hdl.handle.net/1721.1/147269</link>
<description>Transclave Economy: Immigrant Business Survival in An Era of Pandemic
Park, Soyoung
The ethnic enclave economy – the spatial clustering of immigrant enterprises where immigrant owners employ workers of their own ethnic or migration background – has been portrayed as a protected labor market where immigrant business owners and workers build beneficial or exploitative relationships. Scholars’ characterizations of the ethnic enclave economy have been dichotomized. Optimists argue that it allows immigrants to avert labor market discrimination and racism in a host society. Specifically, immigrant workers can find entry-level jobs with ethnic enterprises despite their limited socioeconomic capital, while employers take advantage of easy access to cheap, loyal workforces. Pessimists, in contrast, claim that this enclave effect is insignificant. They point out that workers are underpaid and that the heavy reliance on ethnic ties hampers employers from innovation and expansion. Over the last several decades, the persistent debate over the positive functions of the enclave economy (the so-called enclave effect) has expanded our knowledge of the social mobility of immigrants in a receiving society.&#13;
&#13;
Despite its significant contribution, a limitation of this debate is that scholars assume this economic ecosystem is static rather than fluid. The majority of enclave economy articles capture the earlier stage of the developmental trajectory of the immigrant economy, which is dominated by a single-ethnicity group and small mom-and-pop businesses. Under this premise, they examine the economic performance of the immigrants in that exceptional temporal context. Consequently, they pay little attention to the constantly changing nature of the enclave economy and interpret the ethnic enclave as an equilibrium place where socioeconomic conditions (e.g., ethnic diversity, immigration law, economic vibrancy) are stable. However, in highly globalized urban settings, the enclave economy undergoes consistent ethnic diversification, stratification, and spatial reconfiguration as a result of the socioeconomic changes in a host society and the inflow of people from different countries, who maintain continuing connection to their home societies. &#13;
&#13;
By utilizing a mixed methodology, including geostatistical analysis, interviews, surveys, and longitudinal ethnographic fieldwork from 2020 to 2021, this dissertation reveals how the enclave economy has developed into a multiethnic contested place where immigrants from different backgrounds cooperate and compete. To highlight the variable nature of the enclave economy, I incorporate the transnationalism framework and propose the term transclave economy. I argue that the transclave economy is developed by the transnational inflows of labor, capital, and heterogeneous culture into an immigrant economy. In this variable system, the enclave effect should be understood as a fluid capability whose function is contingent on the time and context in which each enclave economy participant is situated. The framework is applied to the case of nail salons in New York City, where the largest cosmetology service cluster is located and the majority of workers and owners are immigrant women. Ultimately, this dissertation highlights the enclave economy as a system of becoming rather than a system of being, an increasingly important perspective for understanding multicultural city environments.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147269</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Touch for Manipulation</title>
<link>https://hdl.handle.net/1721.1/147267</link>
<description>Interactive Touch for Manipulation
Wang, Shaoxiong
Towards helping people in daily life, robots need to better interact with our physical world and inevitably make contact with various objects. Touch provides contact geometry and force information during interactions, which can be challenging to observe through vision due to occlusions or inherent limitations.&#13;
&#13;
This thesis focuses on how to let robots leverage touch for manipulation through interactive means. We demonstrate several hardware platforms equipped with tactile sensing and integrated perception and control frameworks to apply interactive touch to real-world manipulation tasks. (1) We use touch for manipulating deformable objects like cables, using real-time tactile feedback during sliding. The robot can slide and pull the cable in different directions based on the tactile feedback to prevent the cable from falling. (2) We perform tactile exploration for learning the physical features of unknown objects. The extracted physical features are further applied to predict the forward model and swing up the in-hand object to a target pose by dynamic motions. (3) We embed tactile sensing with active rollers and design a 6-DoF roller grasper for better in-hand tactile dexterity. We demonstrate that the tactile-enabled roller grasper can robustly perform manipulation tasks for various objects, such as planar object reorientation, rolling along cables with tension, picking and singulating thin objects, etc. We hope applying interactive touch for manipulation can lead us closer to intelligent robot automation and the transformation of our physical world.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147267</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Roles of the Mcm2-7 Tails in Replication Initiation</title>
<link>https://hdl.handle.net/1721.1/147266</link>
<description>The Roles of the Mcm2-7 Tails in Replication Initiation
Gangaramani, Paritosh
DNA replication is a complex biological reaction that spans two stages of the eukaryotic cell cycle. In G1 phase, preparation for replication begins when two copies of the replicative helicase, the Mcm2-7 complex, are loaded onto DNA. However, these helicases are loaded in an inactive state, and are not activated until the following S phase. As cells enter S phase, the action of two kinases, DDK and S-CDK, facilitates the recruitment of two key helicase activators, Cdc45 and GINS. These proteins stably associate with Mcm2-7 to form the active replicative helicase complex called the CMG (Cdc45-Mcm2-7-GINS) complex. CMG complex formation is followed by DNA unwinding, replisome assembly, DNA synthesis, and finally replication termination. The Mcm2-7 helicase is involved in every step of this process, from the earliest stages of replication initiation to the final step of replication termination. &#13;
&#13;
The Mcm2-7 helicase is a heterohexameric complex comprised of six related subunits. Each subunit contains an AAA+ ATPase domain, a large OB-fold domain, and extensions of varying lengths on each terminus. Importantly, Mcm2, Mcm4, and Mcm6 contain long unstructured N-terminal tails, which are unrelated to each other and whose role in replication initiation is not well understood.&#13;
 &#13;
In the following thesis, I describe the use of reconstituted helicase loading and helicase activation reactions to determine the specific contributions of the three Mcm2-7 N-terminal tails in key steps of replication initiation. Using this approach, I identified unique roles for each of the tails. First, I discovered that a 23 amino-acid region within the Mcm2 N-terminal tail is important for helicase loading. Second, I identified a role for the rest of the Mcm2 tail in facilitating the activation of the helicase by promoting the DDK phosphorylation of the Mcm4 and Mcm6 tails. Third, I found a unique role for the Mcm4 tail in maintaining DDK specificity of the helicase activation reaction thereby preventing unregulated replication initiation. Finally, I observed that the DDK phosphorylation sites on the Mcm4 and Mcm6 tails serve unique functions and cannot compensate for each other.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147266</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Embedding Methods for the Accurate Ground and Excited Electronic Structure of Large Molecular Systems</title>
<link>https://hdl.handle.net/1721.1/147265</link>
<description>Quantum Embedding Methods for the Accurate Ground and Excited Electronic Structure of Large Molecular Systems
Tran, Henry Khoa
The class of quantum embedding methods has shown great promise in achieving high accuracy simulations of large molecular and material systems by dividing the system into smaller fragments. However, some of their successes are limited to model systems and molecules in minimal basis sets, including methods based on the Schmidt Decomposition such as density matrix embedding theory (DMET) and bootstrap embedding (BE).&#13;
&#13;
Understanding photochemistry and reaction mechanisms requires simulations of excited states. We used DMET to target excited states for the first time, accurately calculating the first excited state in a variety of systems. We then adapted BE to start from an unrestricted bath, which better models excited states. From this, BE could predict ionization energies (IEs), electron affinities (EAs), and singlet-triplet gaps with accuracy on par with popular quantum chemical methods. Both of these successes allowed us to study the band gaps of graphene quantum dots and organic polymers, where BE converges to within 0.1 eV of the desired quantum chemical method.&#13;
&#13;
None of these calculations can truly model real systems if they are performed in minimal basis sets. The theory behind BE is therefore adapted for extended basis sets through careful orbital localization using intrinsic atomic orbitals. The BE matching condition is restricted to well-localized orbitals, which allows BE to converge to at least 99% of the correlation energy in basis sets up to cc-pVDZ. For the troublesome cases involving diffuse and polarization effects, pair natural orbitals (PNOs) were reformulated for the embedding framework, and BE with PNOs (PNO-BE) demonstrated faster convergence to the correct correlation energy than simply increasing the fragment size. PNO-BE captures over 99% of the correlation energy in systems where electronic effects are expected to span multiple atoms and provides a way to improve BE without letting the fragment size blow up exponentially.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147265</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum simulation of many-body dynamics</title>
<link>https://hdl.handle.net/1721.1/147262</link>
<description>Quantum simulation of many-body dynamics
Seetharam, Kushal
Quantum computers and simulators have the potential to improve our understanding of physics, material science, chemistry, and biology by providing a window into the dynamics of quantum many-body systems that appear in these fields. In addition to growing our knowledge of fundamental science, an increased understanding of these systems could lead to technological innovations in energy, industrial processes, and medicine. There are, however, several different quantum hardware platforms and simulation modalities that can be used to perform quantum simulations of many-body dynamics. This thesis seeks to uncover guidelines to a seemingly simple question: how do we answer useful questions using quantum simulators? Answering this involves learning which questions are good to ask quantum simulators, which questions should be asked of which platforms, and how we should ask each question (digital, analog, or hybrid simulation). We develop intuition for these guidelines by exploring three quantum simulation contexts: Bose-Fermi mixtures, dissipative spin chains, and nuclear magnetic resonance (NMR) spectroscopy experiments.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147262</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of wave-filament scattering in the scrape-off layer during lower hybrid current drive</title>
<link>https://hdl.handle.net/1721.1/147259</link>
<description>Impact of wave-filament scattering in the scrape-off layer during lower hybrid current drive
Biswas, Bodhisatwa
Lower Hybrid Current Drive (LHCD) is an efficient means to control the current profile in a tokamak. The performance of LHCD is sensitive to the phase-space trajectory of the wave as it propagates and damps in the plasma. These trajectories are scattered by turbulent density fluctuations in the scrape-off layer (SOL), which is hypothesized to be the reason for poor agreement between LHCD experiments and simulations (which do not account for scattering). This thesis investigates the impact of scattering processes on LHCD, and attempts to bridge this discrepancy. Two novel scattering models are developed. The filament-refraction model generalizes beyond the weak-turbulence assumption made in previous works. The full-wave/statistical model further generalizes beyond the Wentzel-Kramers-Brillouin (WKB) limit. This latter approach results in excellent agreement with experimental observations.&#13;
&#13;
The filament-refraction model couples realistic SOL turbulence profiles to a ray-tracing code. Synthetic, 3-D turbulence profiles that mimic SOL turbulence are generated and coupled to the ray-tracing/Fokker-Planck codes GENRAY/CQL3D. In contrast with previous scattering models that employ the weak-turbulence approximation, this approach accounts for the spatial coherency of filamentary turbulence. As a result, the extent of scattering is shown to be greater in filamentary turbulence, leading to a significant modification of the current profile in the Alcator C-Mod tokamak. &#13;
&#13;
The full-wave/statistical approach employs a Mie-scattering technique to treat a single wave-filament interaction in full-wave formalism. The radiative transfer approximation is employed to treat scattering from a statistical ensemble of filaments in a turbulent layer in slab geometry. This approach extends beyond ray-tracing, and retains all single-filament full-wave effects while remaining computationally inexpensive. Notably, it is found that LH waves can asymmetrically scatter in wave-number phase-space due to full-wave interactions with spatially coherent density fluctuations. Coupling to GENRAY/CQL3D reveals current profiles that are robustly monotonic and peaked on-axis, in much better agreement with experiment. In contrast, simulations without scattering result in current profiles that are non-monotonic and peaked off-axis.&#13;
&#13;
The full-wave/statistical model is self-consistently coupled to GENRAY, allowing for LHCD simulations in the multi-pass regime. This multi-scale model couples local full-wave scattering physics to a global ray-tracing solver. Across multiple C-Mod discharges, simulations show that scattering plays a significant role in determining the current profile. Phase-space broadening due to scattering significantly broadens the power deposition profile, allowing for increased on-axis current and the mitigation of current valleys and off-axis peaks. These effects saturate for sufficiently intense SOL turbulence, and parametric scans suggest LHCD in C-Mod exists in this saturated regime. Furthermore, the asymmetric scattering effect is shown to significantly affect the current profile. At low and moderate densities, good agreement is found with experimental Motional Stark effect and Hard X-ray measurements. At high densities, the same general trends are found. However, uncertainties relating to Ohmic current, SOL collisionality, and parametric decay make comparisons with experiments challenging.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147259</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding heat transport at interfaces for thermal management of electronics</title>
<link>https://hdl.handle.net/1721.1/147258</link>
<description>Understanding heat transport at interfaces for thermal management of electronics
Zhang, Lenan
The discovery and development of two-dimensional (2D) materials offer new opportunities for high-performance nanoscale electronics. However, new material systems involve new device architectures, which leads to new challenges in both electronic and thermal design. While significant progress has been made to understand and engineer the electrical properties of 2D devices, the thermal problems remain relatively poorly understood. Since many 2D electronics can reach very high power density (&gt;10⁴ W/cm²), the dense vertical integration of multilayers within a few nanometers leads to a significant temperature rise (&gt;150 ℃), which becomes the bottleneck of device performance. These thermal challenges are associated with two critical thermophysical properties of 2D materials, i.e., thermal expansion and interfacial thermal transport. In addition, to address the thermal management of 2D electronics, novel cooling approaches informed by insights gained from 2D thermal interfaces are in high demand.&#13;
&#13;
This thesis presents a systematic study of the thermal expansion and thermal transport of van der Waals (vdW) bonded 2D interfaces, and develops highly efficient thermal management solutions based on two-phase cooling. First, we developed for the first time a purely experimental approach to accurately measure the thermal expansion coefficients (TECs) of various 2D materials. Our measurements confirmed the correct physical range of 2D monolayer TECs and hence resolved discrepancies of more than two orders of magnitude in the literature. Second, we investigated the thermal transport across various 2D interfaces. In particular, we elucidated the role of the vdW interaction in the anisotropic thermal transport of substrate-supported 2D monolayers and identified an optimal vdW interaction toward the maximum total heat transfer. We also explored the twist-angle dependence of 2D interfacial thermal transport, observing that, depending on the material system, the thermal transport of 2D materials can exhibit either strong or weak twist-angle dependence, which creates a new degree of freedom to manipulate heat at the atomic level. Lastly, with this fundamental understanding of 2D thermal interfaces, we designed and optimized a liquid-vapor thin-film evaporator based on microstructured surfaces, enabling high-performance thermal management of 2D electronics. This thesis provides a holistic understanding of the fundamental thermal properties of 2D materials and interfaces, which are critical to address the thermal crisis of 2D electronics. We believe the simulation, experimental, and design approaches developed in this thesis can serve as a guideline for next-generation 2D electronics with unprecedented reliability and performance.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147258</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward and inverse problems in mechanics: from a single to thousands of interacting bodies</title>
<link>https://hdl.handle.net/1721.1/147257</link>
<description>Forward and inverse problems in mechanics: from a single to thousands of interacting bodies
Mowlavi, Saviz
Mechanics is the branch of physics that characterizes how bodies deform in response to forces, which can involve two categories of problems. In forward problems, one seeks to predict the response of the system given full knowledge of its physical and geometric properties. In inverse problems, some of the physical or geometric properties of the system are unknown and are either to be identified through experiments, or to be designed to optimize a desired objective. Although the physical laws governing the deformation of single elastic bodies have been known for over a century, forward problems involving thousands of interacting elastic bodies still elude simple and accurate models, while inverse problems involving even a single elastic body lack effective solution methods. &#13;
&#13;
In this thesis, we investigate forward and inverse problems in systems ranging from a single elastic body to thousands of interacting ones. In the first part, we derive analytically a model for the contact force between elastically anisotropic bodies. We then implement this contact model into a computational framework for the forward dynamics of systems composed of hundreds of interacting bodies, which we leverage to showcase examples where the elastic anisotropy of each body affects the macroscopic behavior of the system. In the second part, we derive a homogenized continuum model to predict the forward dynamics of granular materials consisting of millions of interacting elastic particles, such as sand, with a particular focus on the accurate description of the onset and arrest of flow in response to external loading variations. Besides its predictive abilities, this model also sheds light on the physical mechanisms responsible for various unique features of avalanches and landslides such as their large initial acceleration. In the final part, we propose a topology optimization framework for the inverse problem of identifying hidden voids or rigid inclusions in an elastic body using measurements of the surface deformation in response to a prescribed surface loading. This framework combines recent advances in machine learning with level-set methods and the equations governing the deformation of single elastic bodies. We demonstrate the effectiveness of our method in identifying the number, locations, and shapes of hidden voids and rigid inclusions in elastic and hyperelastic materials.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147257</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gluing Z₂-Harmonic Spinors on 3-Manifolds</title>
<link>https://hdl.handle.net/1721.1/147255</link>
<description>Gluing Z₂-Harmonic Spinors on 3-Manifolds
Parker, Gregory J.
This thesis develops the analytic foundations of a gluing result for Z₂-harmonic spinors on 3-manifolds. Z₂-harmonic spinors are a singular version of classical harmonic spinors and naturally arise as limits of sequences of solutions to generalized Seiberg-Witten equations. The gluing problem studied here addresses the reverse question in the case of the two-spinor Seiberg-Witten equations on a 3-manifold. This thesis is divided into two parts.&#13;
&#13;
Part I: This part constructs model solutions for the two-spinor Seiberg-Witten equations in a neighborhood of the singular set of a Z₂-harmonic spinor, and analyzes the linearized Seiberg-Witten equations at these model solutions. The model solutions are shown to converge to a given Z₂-harmonic spinor in a suitable sense, and the linearization is shown to be invertible with near-uniform control on suitable function spaces. The proofs rely on a detailed analysis of degenerating families of elliptic operators.&#13;
&#13;
Part II: This part studies the deformation theory of Z₂-harmonic spinors. For a fixed smooth singular set, the deformation theory of Z₂-harmonic spinors is obstructed by the infinite-dimensional cokernel of the semi-Fredholm Z₂-Dirac operator. It is shown that the component of the first variation of the Z₂-Dirac operator with respect to deformations of the singular set in this obstruction space is an elliptic pseudo-differential operator of order 1/2. Consequently, the resulting deformation theory is Fredholm up to a loss-of-regularity phenomenon which may be addressed by Nash-Moser theory.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147255</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resetting public policy? Democracies, Dictatorships, and Policy Change</title>
<link>https://hdl.handle.net/1721.1/147254</link>
<description>Resetting public policy? Democracies, Dictatorships, and Policy Change
Simison, Emilia
Despite regime transitions raising expectations of policy change, the empirical evidence linking regime type and policy is not conclusive, and transitions from authoritarianism to democracy, and vice versa, often fail to lead to policy change. In part, such inconclusiveness and mixed outcomes reflect the use of varying measures and econometric techniques. However, they also reflect a lack of theoretical clarity regarding the mechanisms that link regime type and policy and how they operate in specific contexts. I claim that the heterogeneous effect of regime type changes on policy depends on 1) how the space for contestation changes with a given regime transition; and 2) how visible a policy is – or becomes. The combination of these two factors determines which mechanisms linking regime type to policy are likely to be triggered and affect the evolution of policy, as well as how that happens.&#13;
&#13;
I support these claims by an in-depth comparative historical analysis of the evolution of housing and financial policy across regime types in Argentina and Brazil since the 1960s. Using extensive archival resources, public records, historical media, and interviews with key actors, I study policymaking across authoritarian and democratic regimes that differed in terms of their space for contestation, analyzing the mechanisms through which changes in such space affect policy areas with different levels of visibility.&#13;
&#13;
I advance a nuanced understanding of the relationship between regime type and policy and, especially, of the conditions and ways in which policy change takes place —or not— following a change in regime type. I contribute to our empirical knowledge of policymaking under authoritarian regimes and to our theoretical understanding of how, and under which conditions, these different causal mechanisms operate. Such understanding is necessary to refine our theoretical expectations and articulate our empirical findings. It also helps us in implementing desired policies, identifying the potential policy threats of democratic backsliding, and recognizing the potentials and limits of democratization. Such recognition will enable us to promote the achievement of those potentials and to value the benefits democracy brings, even if they do not include all our ideal policy outputs.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147254</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcriptome-wide pseudouridine profiling reveals modification of critical E. coli mRNAs</title>
<link>https://hdl.handle.net/1721.1/147253</link>
<description>Transcriptome-wide pseudouridine profiling reveals modification of critical E. coli mRNAs
Schaening Burgos, Cassandra
Pseudouridine (Ψ) is a ubiquitous RNA modification, present in the tRNAs and rRNAs of species across all domains of life, and is now known to be present in the mRNAs of diverse eukaryotes. However, the modification had yet to be identified in bacterial mRNAs, and its functional roles remain largely unknown.&#13;
&#13;
In Chapter 1, I provide an overview of the structure and function of pseudouridine, focusing on the properties that can impact the structure, translation, and interactome of the mRNAs that contain it. I give a brief review of the enzymes that carry out tRNA and rRNA modification, which are likely to also modify mRNAs. I also summarize current knowledge about mRNA modifications in bacteria.&#13;
&#13;
In Chapter 2, I report the discovery of pseudouridines in E. coli mRNA, located in coding sequences as well as 5′ and 3′ untranslated regions, and estimate that there are between 100 and 150 pseudouridine sites in mRNA under the growth conditions profiled. By testing the mRNA modification capacity of all 11 pseudouridine synthases, I identify RluA as the predominant mRNA-modifying enzyme, modifying the majority of high-confidence sites. Additionally, the enzymes RluC and RluD also carry out reproducible modification of a few mRNAs. Using RNA structure probing data to inform secondary structure prediction, I show that all mRNA targets of RluA share a common sequence and structural motif, which also occurs in its canonical tRNA and rRNA targets. A significant mRNA target of RluA is the 5′ UTR of a transcript encoding the components of the type 1 pilus. Knocking out RluA led to upregulation of these genes, and an increase in cell motility.&#13;
&#13;
In Chapter 3, I discuss the possible functional consequences of mRNA pseudouridylation, and discuss the potential roles of Ψ in pilus expression in greater depth. Additionally, since I discovered sparse but reproducible modification by RluC and RluD, I compare the known structure and specificity of these three mRNA:Ψ synthases, and discuss the potential basis for their different pseudouridylation landscapes in mRNA.&#13;
&#13;
Overall, this work identifies pseudouridine in mRNAs encoding critical proteins and demonstrates the capacity of Ψ to regulate the transcripts that contain it. In doing so, it expands the known bacterial epitranscriptome and provides insight into the ways RNA modifications are leveraged for gene regulation.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147253</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Aided Biological Discovery and Design</title>
<link>https://hdl.handle.net/1721.1/147252</link>
<description>Machine Aided Biological Discovery and Design
Saksena, Sachit Dinesh
Advances in biotechnology and the life sciences are primarily driven by biologists conducting rigorous experimentation. However, biology is often too complex – with intractable combinatorial search spaces and functional landscapes – to comprehensively explore, understand, and engineer via iterative biological experimentation. Next-generation sequencing technologies have made it possible to measure biology in high-throughput, giving observational insight into these complexities. Further, in recent years, it has become possible to both manipulate biological systems with fine-grained control and directly synthesize large libraries of DNA molecules with specified sequences, providing unprecedented ability to engineer biology. We explore the thesis that computational methods that are built with experimental considerations and trained on carefully selected high-throughput experimental data can drive advances in the life sciences by making accurate predictions that can then be used to iteratively generate hypotheses and design biological sequences for further experimental validation.&#13;
&#13;
To test our thesis about the value of computational methods, we introduce and apply computational approaches for modeling cellular differentiation trajectories, identifying non-specific antibodies, and designing diverse libraries of biological sequences that reflect desired objectives. First, we introduce a generative machine learning model for inferring cellular developmental landscapes from cross-sectional sequencing of in vitro differentiation time-series. We validate this model with ground-truth experimental lineage tracing experiments, and we show its ability to conduct in silico simulations of cellular differentiation trajectories with perturbations. Next, we present a computational framework for using sequencing data from therapeutic discovery campaigns to identify non-specific antibody therapeutics in large candidate pools. We show that this approach bypasses and outperforms costly combinatorial affinity selection experiments and allows the use of only single-target selection data to identify pairwise non-specificity. Finally, we introduce an algorithm for the rational design of high-diversity synthetic antibody libraries using machine learning models and stochastic optimization. We show how this can be used to develop large libraries optimized for targets or developability characteristics, leading to more promising candidates from affinity selection.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147252</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and Reactivity of Phosphorus-Containing Heterocycles and Tetrahedranes</title>
<link>https://hdl.handle.net/1721.1/147250</link>
<description>Synthesis and Reactivity of Phosphorus-Containing Heterocycles and Tetrahedranes
Riu, Martin-Louis Y.
3,5-Diphenyl-2-phosphafuran (DPF) was synthesized by treating trans-chalcone with dibenzo-7&#120582;³-phosphanorbornadiene EtOPA (A = C₁₄H₁₀, anthracene), a source of ethoxyphosphinidene, followed by formal elimination of ethanol. DPF is a potent diene and readily reacts with dienophiles at room temperature. Mild heating of the corresponding ethylene adduct results in the retro-Diels-Alder reaction.&#13;
&#13;
MesN₂PA (Mes = mesityl), a synthon of mesitylphosphaazide (MesN₂P) and anthracene, was synthesized by treating [Ph₃BPA][Na(OEt₂)₂] with [MesN₂]OTf (OTf = CF₃SO₃⁻). MesN₂PA reacts with alkynes and phosphaalkynes to form the corresponding [3+2] phosphaazide-(phospha)alkyne cycloadducts and anthracene. Mesitylphosphaazide transfer likely proceeds via a 1,3-dipolar cycloaddition reaction, followed by anthracene elimination.&#13;
&#13;
cis-Macrocyclic diphosphine (PhPA)₂ was prepared by treating [EtOP₂A₂]AlCl₄ with phenylmagnesium chloride (2 equiv). X-ray diffraction analysis of the corresponding nickel dichloride complex shows the rigid, bowl-shaped cavity of (PhPA)₂.&#13;
&#13;
Tri-tert-butylphosphatetrahedrane (ᵗBuC)₃P was prepared via the dehydrohalogenation of fluorophosphine (ᵗBuC)₃P(F)H. The phosphatetrahedrane core was confirmed spectroscopically and by X-ray diffraction analysis. Hydrogen-hydrogen bonding interactions between neighboring tert-butyl groups of (ᵗBuC)₃P were computationally investigated and contribute approximately −6 kcal/mol of stabilization.&#13;
&#13;
Synthetically useful quantities of (ᵗBuC)₃P were obtained using an improved synthesis based on fluoride-induced trimethylsilyl chloride elimination from chloro(trimethylsilyl)phosphine (ᵗBuC)₃P(TMS)Cl. Despite the incorporation of phosphorus, (ᵗBuC)₃P remains highly reactive and cage-opens to the corresponding cyclobutadiene when treated with catalytic triphenylborane. The proposed reactive intermediate was trapped by styrene and ethylene to form [4+2]-cycloadducts.&#13;
&#13;
(ᵗBuC)₃P also functions as a spring-loaded phosphinidene synthon for nickelcatalyzed group transfer to unactivated alkenes, leading to phosphiranes, three-membered rings that contain a phosphorus atom. Deprotection of the corresponding phosphiranes was achieved by the addition of triflic acid to form a P−H bond and [ᵗBu₃C₃]OTf, demonstrating that (ᵗBuC)₃P can also be viewed as a ‘PH’ synthon.&#13;
&#13;
Tetrahydrofuran (THF) solutions of triphosphatetrahedrane HCP₃ were generated by combining [Na(THF)₃][P₃Nb(ODipp)₃] (Dipp = 2,6-diisopropylphenyl), bromodichloromethane, and INb(ODipp)₃(THF). Removal of solvent under reduced pressure led to a black material that corresponds to a polymerized form of HCP₃. X-ray diffraction analysis of a cationic iron complex of HCP₃ confirmed the tetrahedral nature of the CP₃ core. Computational studies suggest that triphosphatetrahedrane is the least strained tetrahedrane with a mixed carbon-phosphorus core.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147250</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling Key Challenges to Guide Clinical Decisions in Cardiovascular Diseases</title>
<link>https://hdl.handle.net/1721.1/147244</link>
<description>Tackling Key Challenges to Guide Clinical Decisions in Cardiovascular Diseases
Dai, Wangzhi
Machine learning models in healthcare have been widely studied in a number of contexts ranging from clinical risk stratification to image-guided diagnosis and prognostication. Nevertheless, key challenges remain from both clinical and technical perspectives. In the case of prediction models, for example, predicting the occurrence of rare clinical events is often challenging, mainly because of extreme class imbalance in the training data. Estimating treatment effect, on the other hand, is hindered by the fact that the common support assumption is not a priori guaranteed to be valid in non-randomized data. This thesis develops and applies approaches that address these challenges in order to obtain clinically useful insights.&#13;
&#13;
In the first part of the thesis, we tackle these obstacles in the context of Acute Coronary Syndrome (ACS) - a condition where blood flow to the heart suddenly becomes compromised. We use a contrastive Variational Autoencoder (contrastive-VAE), an approach that models both the majority and minority classes as having shared latent properties, to address the following challenges: 1) Predicting rare adverse clinical outcomes after ACS; 2) Quantifying common support for estimating the effect of therapies for ACS; and 3) Causal feature selection for estimating individual treatment effects (ITE). For the first challenge, we demonstrate that generative oversampling with a contrastive-VAE significantly improves the discriminatory ability of predictive models relative to other traditional methods like SMOTE (Synthetic Minority Oversampling Technique). Similarly, for the problem of common support estimation, we show that a contrastive-VAE can effectively model the overlap between multiple treatment groups, yielding a quantitative estimate of the common support for the individual treatment effect and concomitant confidence intervals for the ITE estimate.  Lastly, by modeling the joint distribution of patient features, treatments, and outcomes, we demonstrate that one can effectively identify a subset of patient features that are most important for ITE estimation, and that this smaller subset yields more precise ITEs with smaller confidence intervals.&#13;
&#13;
In the second part of the thesis, we turn to a challenging clinical problem that uses ultrasound imaging for diagnosis and prognostication. Cardiac ultrasound (or echocardiography) plays a central role in the diagnosis and management of patients with suspected aortic stenosis (AS) - a disorder where one of the valves in the heart does not fully open. A complete echocardiographic study is typically performed by a trained sonographer who acquires videos of multiple views of the heart, and echocardiographers (cardiologists who specialize in the analysis of echocardiograms) interpret these videos, yielding clinically useful information. To facilitate the acquisition and interpretation of echocardiographic data, we developed a deep learning model that uses a single echocardiographic view (as opposed to all of the acquired views) to diagnose severe AS. We trained and evaluated a model based on spatial-temporal convolutions that accurately identifies two key indicators of severe AS: a large mean gradient over the valve (0.88 AUC) and a narrowed aortic valve area (0.78 AUC). Our approach might enable early detection of severe AS by non-specialists.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147244</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biological Life and the Partiality Relation</title>
<link>https://hdl.handle.net/1721.1/147243</link>
<description>Biological Life and the Partiality Relation
Ravanpak, Ryan
The first chapter defends an account of the metaphysics of identity which combines two sensible claims in the personal identity literature. The first is a Parfitian thesis that we are persons whose persistence is tied to the appropriate continuity of certain psychological activities. The second is an Animalist thesis that we are human animals whose persistence is tied to the continuation of biological functioning such as respiration and metabolism. Supporters of the former often argue against the latter and vice versa. I argue that both are true on the grounds that there is good reason to believe that psychological activities of the human animal count as forms of biological functioning. I then motivate the substantive thesis that we are neither human animals nor persons essentially. What we are essentially is a broader thing—an organism—which can be a human animal, person, or both, but need not be either of them.&#13;
&#13;
The second chapter considers diachronic questions about when an organism at one moment persists at the next. I claim that the persistence of a kind of event—a biological life—is a crucial piece of the persistence of an organism, and that the appropriate continuation of biological activities is necessary and sufficient for the persistence of biological life. I offer a performance-centered account of “appropriate biological functioning” which can be applied to biological activities such as digestion, breathing, perception, and feeling. It depends on two forms of “functional continuity”. The first, intra-functional continuity, consists in chains of causal dependence between token instances of the same function-type. The second, inter-functional continuity, consists of chains of causal dependence between token instances of distinct function-types. I suggest that organisms are best conceived as systems consisting of a set of distinct biological activities which are connected to one another by both the intra-functional and inter-functional continuity relations.&#13;
&#13;
In the third chapter, I argue for the thesis that one significant source of the relation of partiality comes from degrees of biological connectedness and continuity between organisms. I argue that this account fares better than a competing account of the source of partiality which relies on psychological connectedness and continuity. I then answer a skeptical challenge about why biological connections and continuity generate the relation of partiality. Although I am not an egoist, I end the chapter by suggesting that my position may make egoism more tolerable than it would be otherwise.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147243</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microscopic Strain Localization and Damage in Multi-phase Alloys</title>
<link>https://hdl.handle.net/1721.1/147242</link>
<description>Microscopic Strain Localization and Damage in Multi-phase Alloys
Kang, Jiyun
Identifying the critical factors behind microscopic strain localization and damage in multiphase alloys is challenging. In addition to distinct phase-specific properties, extra complexity arises from the broad spectrum of phase morphologies and spatial distributions. Multiple deformation mechanisms can also occur simultaneously, which makes it difficult to spatially and temporally resolve their contributions to strain heterogeneity. In this thesis, a comprehensive correlative approach is developed to address these challenges, utilizing in situ scanning electron microscopy, in situ synchrotron X-ray diffraction, and various mapping techniques to analyze microstructure and micro-strain evolution. We reveal the governing microstructural mechanisms that control strain localization and damage in two of the most widely studied multi-phase alloys: an (α+β) titanium alloy and an (α+α’+γ) transformation-induced plasticity (TRIP)-assisted steel. This thesis focuses especially on the effects of local texture, mechanical twinning, and mechanically induced phase transformation. First, we study the influence of local crystallographic orientation in the two-phase titanium alloy. Quantitative analyses of the local strain distribution demonstrate that boundaries between soft and hard α grains are the most prone to strain localization. Second, we explore the role of mechanical twinning in modulating strain localization mechanisms during deformation and propose its potential use in retarding damage development. The last focus of the thesis is the effect of mechanically induced martensitic transformation in a multi-phase quenching and partitioning (QP) steel. Our in situ tracking of metastable retained austenite reveals strong influences of the neighborhood microstructure on its mechanical stability and post-transformation behavior. Based on our findings, micromechanically guided microstructure design strategies to better optimize the properties of these alloys are discussed.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147242</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Embrittlement: In-Situ Explorations of Hydrogen Effects Near the Boundaries</title>
<link>https://hdl.handle.net/1721.1/147241</link>
<description>Beyond Embrittlement: In-Situ Explorations of Hydrogen Effects Near the Boundaries
Yan, Haoxue
Despite tremendous research efforts over the past decades, hydrogen embrittlement (HE) still limits the use of structural metals in hydrogen environments today. As the demand for hydrogen clean energy equipment increases, it is increasingly pressing to extend our understanding of embrittlement mechanisms. The debate among currently proposed HE mechanisms arises from the dependence of hydrogen effects on microstructural factors, as well as the difficulty of directly detecting hydrogen behavior. This thesis first designs in situ experiments that combine different scanning electron microscopy-based characterization capabilities with hydrogen charging and detection methods to establish correlations between local hydrogen distribution, microstructure, and defect activity. With these new techniques, we focus especially on the dependence of hydrogen effects on boundaries and on the crystalline structures of the two adjoining grains. We hypothesize that hydrogen preferentially distributes near boundaries, and that this behavior depends on the neighboring crystalline structures. As a result, the stress variations caused by boundary segregation lead to defect activities that diminish with hydrogen content. It is also hypothesized that the presence of boundaries, especially those that act as deep trap sites, will lead to complex phase transformation pathways and hydrogen evolution throughout. To this end, we first present experimental findings as well as quantitative assessments of hydrogen-defect interactions resulting from hydrogen segregation. A microstructure-design-based approach to increasing HE resistance is also proposed. We then study the role of phase boundaries during hydride decomposition and reformation and discuss the effectiveness of thermal treatments for preventing HE. 
Lastly, utilizing our understanding of hydrogen preferential distribution and its effects on defect behaviors, we propose a new engineering approach to utilize hydrogen in microstructure control.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147241</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving supervised machine learning for materials science</title>
<link>https://hdl.handle.net/1721.1/147240</link>
<description>Improving supervised machine learning for materials science
Gong, Sheng
Despite the widespread application of machine learning models in materials science, in many cases their performance is not sufficiently accurate to meet the needs of materials design. In this thesis, we propose and apply a series of strategies to examine and improve the performance of machine learning models for specific materials problems. First, we examine whether current deep representation learning models for atomistic systems can capture human knowledge of crystal structures, and find that current graph neural networks can capture knowledge of local atomic environments but cannot capture the periodicity of crystal structures. As an initial solution, we propose hybridizing human knowledge with deep representation learning models, and find that this hybridization can lead to large improvements in predicting vibrational properties of materials. Then, for situations where the datasets of target materials properties are small while large relevant materials datasets exist, we propose using transfer learning and multi-fidelity learning to transfer information between the large and small datasets and facilitate the learning of target properties. We use experimentally measured formation enthalpy and lattice thermal conductivity as case studies to examine the usefulness of information transfer and to understand where and why information transfer helps. For situations where expansion of datasets is necessary, we propose using active learning/Bayesian optimization to sample the materials space efficiently and mitigate bias; as a case study, we apply Bayesian optimization to find the optimal laser processing parameters for poly(acrylonitrile) sheets as porous carbon electrodes. Finally, when generation of data is time-consuming, we propose using machine learning to accelerate materials experiments and simulations. To this end, we develop a framework that uses graph neural networks to predict the charge density distribution of materials. 
The machine learning models developed in this thesis not only deepen human understanding of where and how machine learning can be used to facilitate materials development, but also lead to the discovery of new materials systems, new processes, and new insights, such as new candidate thermoelectric materials, new processes for lasering poly(acrylonitrile), and new insights into the evaluation of the stability of materials.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147240</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Delivery of Multi-Scale Payloads to Tissue-Specific Targets in Plants</title>
<link>https://hdl.handle.net/1721.1/147239</link>
<description>Precision Delivery of Multi-Scale Payloads to Tissue-Specific Targets in Plants
Cao, Yunteng
Agrochemical delivery is crucially important in modern agriculture to ensure healthy crop growth and productivity, and therefore food security, particularly under current pressures including deteriorating growing conditions associated with climate change (e.g., extreme weather, the spread of plant diseases and pests, lower soil quality), an ever-increasing human population, scarcity of arable land, and limited resources. However, conventional practices suffer from low efficiency and significant payload loss to the environment, conflicting with societal and environmental sustainability requirements. There is therefore a dire need for new techniques for precise, efficient delivery.&#13;
&#13;
This thesis studies the use of biomaterials and drug delivery principles to engineer the precise deployment of payloads in plants. Specifically, the thesis designs a novel silk-based biomaterial and fabricates a microneedle-like device capable of delivering a variety of payloads, ranging from small molecules to large proteins, into specific loci of various plant tissues. Precise sampling of plant sap is also demonstrated by tuning the material composition. Silk-based microneedles further show minimal wounding responses, activation of gibberellic acid (GA₃) responses post-injection of GA₃-loaded microneedles, and promotion of bolting and inhibition of flower formation by GA₃ in the Arabidopsis thaliana mutant ft-10. This method proves more efficient and effective in delivering GA₃ than foliar spray. Potential applications of silk-based microneedles in agriculture are also confirmed by the successful deployment of GA₃ in several crops. In addition, hollow microneedles are fabricated using silk fibroin assembly and inorganic nucleation at their phase fronts, providing new tools to bridge the biotic/abiotic interface by interrogating pathways for biomolecule transport in plants and enabling early-stage detection of bioaccumulation of environmental contaminants, such as cadmium and arsenic.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147239</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning sustainable cities: Coordinating accessibility improvements with housing policies</title>
<link>https://hdl.handle.net/1721.1/147229</link>
<description>Planning sustainable cities: Coordinating accessibility improvements with housing policies
Basu, Rounaq
Emerging mobility services (such as mobility-on-demand and micromobility) have expanded the range of travel options available to individuals and offered ways to improve access to various opportunities. Unlike mass transit services, emerging mobilities can be implemented and experimented with rather rapidly. As a result, they are also likely to induce relatively rapid changes in travel behavior and location choices. Several cities across the world are experimenting with ‘car-lite’ policies that aim to reduce auto ownership and use (and emissions) with the help of emerging mobilities, transit improvements, and/or urban design. Therefore, it becomes important to understand the near-term effects of emerging mobilities on neighborhoods through the lenses of vehicle ownership and residential location choice over the first few years of change. This is especially important given the gentrification patterns we have observed in neighborhoods where transit improvements or extensions have been implemented (often referred to as ‘transit-induced gentrification’). Will we observe similar patterns of accessibility-induced gentrification with emerging mobilities as well? If so, how can we, as planners, seek to mitigate these undesirable but consequential side-effects of car-lite policies?&#13;
&#13;
In my dissertation, I introduced necessary methodological extensions to a state-of-the-art land use-transport interaction (LUTI) model that enable better modeling of the interdependencies between various choices and tradeoffs of housing and mobility. Applying this improved LUTI model to the city-state of Singapore, I conducted quasi-static analyses and agent-based microsimulations of ‘what-if’ scenarios regarding how households react to accessibility changes. In addition to looking at neighborhood-level car-lite pilot programs that improve non-auto accessibility, I also explored vehicle restriction policies that seek to ban private vehicles.&#13;
&#13;
I found that private vehicle restrictions alone, without complementary non-auto accessibility improvements, can reduce accessibility and social welfare, even in a transit-rich place like Singapore. Solely imposing a blanket ban on private automobiles to accelerate the transition to a sustainable mobility future will likely do more harm than good. Evidence of accessibility-induced gentrification, to varying degrees, was found in all of the Singaporean neighborhoods I explored. Lower-income and less auto-dependent neighborhoods seem to be more prone to accessibility-induced gentrification, thereby suggesting that non-auto accessibility improvements alone may not guarantee equitable outcomes. I then explored two housing policies – upzoning and parking restrictions – as possible strategies to mitigate the gentrification side-effects. Both policies appeared to have limited value by themselves because, at times, they could accelerate gentrification or reduce social welfare. However, they became much more effective policy instruments when combined with affordability constraints (such as income restrictions and price discounts), so that the accessibility and welfare benefits of car-lite policies could be equitably distributed across residents. I also tested the generalizability and transferability of my findings through various sensitivity analyses, robustness checks, and implementation in a more auto-dependent context separate from Singapore.&#13;
&#13;
This dissertation is expected to contribute to our understanding of the effects of emerging mobilities on three fronts. From a conceptual perspective, this study can demonstrate how emerging mobilities can lead to inequitable urban development in the absence of carefully designed market regulations. From a policy perspective, we can learn about the effectiveness of some housing and mobility policies in mitigating these undesirable outcomes while enhancing targeted outcomes. From a methodological perspective, the study contributes to the creation of a state-of-the-art integrated urban model that can be used to explore near-term market dynamics in reaction to new transportation technologies.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147229</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated, single-cell analysis of transcriptional phenotype and clonotypic identity</title>
<link>https://hdl.handle.net/1721.1/147228</link>
<description>Integrated, single-cell analysis of transcriptional phenotype and clonotypic identity
Morgan, Duncan M.
Single-cell RNA sequencing (scRNA-seq) currently affords the ability to resolve whole transcriptomes of single cells with substantial throughput and has revolutionized studies of gene expression. Recent technical advances have enabled the matching of T cell receptor (TCR) and B cell receptor (BCR) variable region sequences to the transcriptional profiles of the same cells, but methods to perform analysis of the resulting data, which would enable the analysis of antigen-specific T and B cell phenotypes in their clonotypic context, remain limited. In this thesis, we develop strategies to integrate the analysis of single-cell transcriptional states with the analysis of TCR and BCR clonotypes. We demonstrate how these strategies can be used to develop actionable biological insights across a diverse spectrum of immunological contexts.&#13;
&#13;
In the first part of this thesis, we profile CD8+ T cells recovered from murine tumors established in either the flank or the lung. Using scRNA-seq, we observe that common tumor-reactive clonotypes in these tumors exhibit phenotypic skewing dependent on the tissue site of tumor growth. We demonstrate that this phenotypic skewing is established during T cell priming in either the mediastinal or inguinal lymph node and results in a lack of responsiveness to immune checkpoint blockade (ICB) therapy in lung tumors. We show that gene expression signatures associated with this phenotype are present in sequencing data generated from patients with non-small cell lung cancer, suggesting that inadequate T cell priming may contribute to ICB resistance in human patients. &#13;
&#13;
In the second part of this thesis, we analyze T cells recovered from the esophageal biopsies, duodenal biopsies, and peripheral blood of patients with the allergic disease eosinophilic esophagitis (EoE) with scRNA-seq. We identify a clonally expanded, pathogenic effector Th2 (peTh2) cell phenotype that is associated with EoE. This phenotype demonstrates features of an antigen-specific T cell response, including convergence of TCR sequences. It also exhibits an association with the homing marker GPR15, which is upregulated among peTh2 clonotypes in the peripheral blood that were simultaneously detected in the esophagus. Additional investigations further support that peTh2 cells in EoE likely possess specificity for food allergen-derived epitopes and exhibit enhanced esophageal homing potential associated with expression of GPR15.&#13;
&#13;
In the last part of this thesis, we develop a methodology that enables the recovery of paired, full-length BCR sequences from 3′-barcoded scRNA-seq libraries. The method is simple, cost-efficient, and can be applied retrospectively to archived samples. We first establish the accuracy of this method. We then apply it to reveal clonal relationships among both duodenal plasma cells in patients undergoing diagnostic screening for EoE and antigen-specific B cells elicited by vaccination in rhesus macaques.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147228</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Modeling Spectrum of Data-Driven Decision Making</title>
<link>https://hdl.handle.net/1721.1/147227</link>
<description>The Modeling Spectrum of Data-Driven Decision Making
Meng, Xianglin
Data-driven decision-making has become an essential part of modern life by virtue of the rapid growth of data, massive improvements in computing power, and great progress in academic research. The techniques used fall broadly along a spectrum from model-based to applied, depending on problem complexity and data availability.&#13;
&#13;
This thesis studies three settings that span the modeling spectrum in the contexts of digital agriculture, cell reprogramming, and pandemic policymaking. First, we investigate the problem of learning good farming practices in the framework of multi-armed bandits with expert advice. We extend the setting from finitely many experts to any countably infinite set and provide algorithms that are provably optimal. Second, we explore optimizing perturbations for cell reprogramming in batched experiments. Building upon multi-armed bandit algorithms, we propose an active learning approach that integrates deep learning and biology-based analysis. We numerically demonstrate the success of our method on gene expression data. Finally, we model the impacts of nonpharmaceutical interventions during the coronavirus disease 2019 (COVID-19) pandemic. We develop an agent-based model in order to overcome the limitations of observational data. We show that the trade-off between COVID-19 deaths and deaths of despair, dependent on the lockdown level, only exists in the socioeconomically disadvantaged population. Our model establishes effective measures for reducing disparities during the pandemic.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147227</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Structure Learning, Inference, and Analysis with Probabilistic Programs</title>
<link>https://hdl.handle.net/1721.1/147226</link>
<description>Scalable Structure Learning, Inference, and Analysis with Probabilistic Programs
Saad, Feras Ahmad Khaled
How can we automate and scale up the processes of learning accurate probabilistic models of complex data and obtaining principled solutions to probabilistic inference and analysis queries? This thesis presents efficient techniques for addressing these fundamental challenges grounded in probabilistic programming, that is, by representing probabilistic models as computer programs in specialized programming languages. First, I introduce scalable methods for real-time synthesis of probabilistic programs in domain-specific data modeling languages, by performing Bayesian structure learning over hierarchies of symbolic program representations. These methods let us automatically discover accurate and interpretable models in a variety of settings, including cross-sectional data, relational data, and univariate and multivariate time series data; as well as models whose structures are generated by probabilistic context-free grammars. Second, I describe SPPL, a probabilistic programming language that integrates knowledge compilation and symbolic analysis to compute sound exact answers to many Bayesian inference queries about both hand-written and machine-synthesized probabilistic programs. Third, I present fast algorithms for analyzing statistical properties of probabilistic programs in cases where exact inference is intractable. These algorithms operate entirely through black-box computational interfaces to probabilistic programs and solve challenging problems such as estimating bounds on the information flow between arbitrary sets of program variables and testing the convergence of sampling-based algorithms for approximate posterior inference. 
A large collection of empirical evaluations establishes that, taken together, these techniques can outperform multiple state-of-the-art systems across diverse real-world data science problems, which include adapting to extreme novelty in streaming time series data; imputing and forecasting sparse multivariate flu rates; discovering commonsense clusters in relational and temporal macroeconomic data; generating synthetic satellite records with realistic orbital physics; finding information-theoretically optimal medical tests for liver disease and diabetes; and verifying the fairness of machine learning classifiers.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147226</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asymmetries in Presupposition Projection: Processing and Acquisition</title>
<link>https://hdl.handle.net/1721.1/147224</link>
<description>Asymmetries in Presupposition Projection: Processing and Acquisition
Chen, Sherry Yong
In this dissertation, I aim to provide novel evidence to shed light on the projection problem of presuppositions. The focus is on identifying the underlying patterns of how presuppositions project out of two binary connectives, if and or, by disentangling semantic presuppositions from additional processes. A series of behavioral experiments examine how English-speaking adults process presuppositional sentences in real time, and what children in the preschool age range know about presupposition projection in these constructions, finding a host of evidence in favor of a family of theories that take an asymmetric view of presupposition projection. &#13;
&#13;
The dissertation is organized into three main parts. The first part focuses on the processing of presupposition projection in adults. The real-time processing of presupposition projection out of binary connectives shows an asymmetric pattern, as reflected by differences in response-time latencies associated with the left argument compared to the right argument. The second part investigates preschool-aged children’s knowledge of presupposition projection out of if-conditionals and disjunctions. Results reveal that at the age of 5, children’s behaviors exhibit an environment-based asymmetry, much like what we observed in adults’ processing signature: when the embedded presupposition from the antecedent environment is not globally satisfied, it receives much lower endorsement rates compared to the consequent environment. 6-year-olds have an even more sophisticated command of presupposition projection out of if-conditionals, in that they can also recruit presupposition-cancelling mechanisms so as to avoid presupposition failure in a nearly adult-like manner. The third part of the dissertation is on presupposition strengthening. The experimental results in the previous two chapters provide substantial evidence pointing toward the asymmetric view of presupposition projection across two binary connectives. But the asymmetric view crucially predicts a conditionalized presupposition for the right argument, which is sometimes too weak. I defend the notion of pragmatic strengthening by addressing a challenge posed by Mandelkern (2016a, 2016b), where the classic notions of pragmatic strengthening do not appear to be applicable, yet a stronger, non-conditional presupposition arises contrary to the asymmetric view’s predictions. 
Building on crucial insights from Fox (2019), I argue that this non-conditional presupposition does not in fact come directly from presupposition projection out of the conditional assertion, but is the presupposition of an accommodated question that is salient in the context. The question-based explanation can supplement the asymmetric theories, thereby removing the motivation to opt for an alternative theory that treats the non-conditional presupposition p as the basic one.&#13;
&#13;
Ultimately, I defend an asymmetric view of presupposition projection, as advocated by Satisfaction Theory and Trivalent Logics. The experimental findings provide novel empirical support that corroborates the predictions of these theories: for presuppositions projected out of binary connectives, the basic, semantic pattern of projection is asymmetric in nature, with a stronger presupposition projected from the left argument than the right argument. These findings together with a better understanding of the general pragmatic principles that affect discourse structure, provide further empirical and theoretical challenges that will need to be addressed by opposing theories.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147224</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Video Ads on Social Media Platforms</title>
<link>https://hdl.handle.net/1721.1/147223</link>
<description>Understanding Video Ads on Social Media Platforms
Cao, Yiqun
This dissertation consists of three chapters that explore the effects of creative elements in video ads on social media platforms and investigate how these ads impact consumer behavior in both normal times and times of crisis.&#13;
&#13;
The first chapter performs a comprehensive exploration of video ads on social media platforms using large-scale observational data. Understanding what makes video advertising effective in boosting performance is a crucial task, given firms’ heavy investment and its ubiquitous presence in our daily lives. It is also a demanding task due to the limited availability of video ad data and the complexity of video features. We first conduct unsupervised clustering to provide a taxonomy of video ad features and to understand the commonalities and differences that designers favor when creating video ads. We find that speech-heavy videos tend to have a lower presence of text and a higher presence of people. Second, we perform a feature importance ranking after constructing meaningful and representative creative features. We run multilevel linear models of selected video and campaign features on ad performance outcomes to gain insights into the effectiveness of video elements. We observe that text and its early appearance worsen view-related outcome metrics, whereas the presence of people and their early appearance improve view-related metrics. Third, we explore the heterogeneous effects of basic video elements on advertising performance across different platforms, industries, and campaign objectives.&#13;
&#13;
The second chapter follows up on the results of the previous chapter and investigates how algorithmic optimization interacts with whether the firm features the product early or late in the ad. Advertising algorithms seem sophisticated in achieving a single specific objective, such as views or conversions for digital video ads. However, the algorithm’s focus on only one goal may present problems for advertisers who hope the ad can achieve multiple goals, such as building awareness, raising interest, and boosting conversions. Using a field experiment, we find that ad algorithms can effectively achieve the prescribed video objective: for example, changing the campaign objective from views to clicks significantly improves the probability of clicks while decreasing the probability of short-duration video views. The mechanism is that the target audience changes as the campaign goal changes, because optimization algorithms always try to find the "right" audience to maximize the specified performance metrics. We find that the algorithm’s single-minded pursuit of a specified objective can be moderated by how quickly content about the product is revealed. Our results suggest that in advertising markets where algorithms are programmed to narrowly fulfill one objective, advertisers need to tailor content to engage users in a way that helps them achieve multiple objectives.&#13;
&#13;
The third chapter investigates how people’s responses to digital ads changed with fluctuations in the COVID-19 pandemic. As new variants continue to emerge, the world has experienced multiple waves of COVID-19. Given the virus’s high mutation rate, people may have to live with COVID-19 and its variants for quite a while. The main research question addresses how people’s responses to online ads, in the form of views and conversions, change with the ups and downs of the pandemic’s severity, specifically its perceived severity as reflected by stay-at-home behavior. We use a difference-in-differences identification strategy and a fixed-effects model. The main results show that ad conversions increase as the pandemic situation becomes more severe. We find the effect stems not just from people replacing offline shopping with online shopping, but also from the psychological impact of COVID-19. Since people are likely to coexist with COVID-19 for the foreseeable future, our research helps firms and businesses better understand consumer behavior and better adjust to future changes in the pandemic situation through their marketing practices.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147223</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Applications For Neurological Diseases</title>
<link>https://hdl.handle.net/1721.1/147222</link>
<description>Machine Learning Applications For Neurological Diseases
Gold, Maxwell P.
Neurological conditions affect the brain and other parts of the nervous system. This includes neurodegenerative diseases like Huntington’s Disease, psychiatric conditions like schizophrenia, and brain cancers like glioblastoma. These conditions are particularly challenging to study because they affect such a vital and complex organ system, making it difficult to understand disease etiology and to develop high-quality model systems. &#13;
&#13;
Because of these challenges, experiments studying neurological diseases typically either contain very few patient samples or are collected from imperfect model systems. Machine learning approaches have proven helpful for processing these types of datasets and identifying relevant biological signals. In this thesis, I detail five examples of the utility of machine learning methods for analyzing neurological disease data. Some chapters focus primarily on the development of novel machine learning methods, while others discuss the implementation of established algorithms leading to significant advancements in our understanding of the given disease.&#13;
&#13;
Chapter 2 details a novel gene set scoring algorithm that significantly improves upon existing methods. This new approach is particularly useful for analyzing single-cell transcriptomics assays, which are becoming increasingly common in neurological disease studies. In Chapter 3, I describe how multi-omic integration of ATAC-Seq, ChIP-Seq, and RNA-seq data revealed a novel population of cycling cells relevant to Huntington’s Disease models. In Chapter 4, I discuss an improved multi-commodity flow algorithm for omics data integration and highlight its utility for understanding drug effects in glioblastoma. Chapter 5 highlights how clustering and the Prize-Collecting Steiner Forest algorithm led to a better understanding of proteomic subtypes in medulloblastoma tumors. Lastly, Chapter 6 expands upon the work in Chapter 5, and details how I used computational approaches to determine that some medulloblastoma tumors contain cells recapitulating cerebellar granule neuron development.&#13;
&#13;
In summary, this thesis showcases the value of machine learning techniques for analyzing the small, complicated datasets typically found in neurological disease experiments. Throughout this work, I emphasize the importance of collecting and integrating multiple types of biological data to get a more complete understanding of these conditions.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147222</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Modeling of Elastic and Transformation Incompatibility at Grain Boundaries in Shape Memory Materials</title>
<link>https://hdl.handle.net/1721.1/147221</link>
<description>Computational Modeling of Elastic and Transformation Incompatibility at Grain Boundaries in Shape Memory Materials
Wang, Zhiyi
Shape memory alloys (SMAs) and zirconia-based ceramics (SMCs) find a wide range of applications in various fields due to their unique properties, such as superelasticity and the shape memory effect. Desirable superelastic properties of shape memory materials are realized to their maximum extent in single crystalline structures due to the absence of internal constraints. By contrast, in polycrystalline forms, superelasticity is significantly compromised by severe premature intergranular fracture originating at grain boundaries. This limitation has drawn significant research interest in developing microstructures that can preserve the properties of single crystals while avoiding the production cost and manufacturing limitations of single-crystal processing.&#13;
&#13;
The overarching goal of the thesis is to improve our understanding of the competition between martensitic transformation, grain boundary constraints, and intergranular fracture in shape memory materials through comprehensive computational modeling. To this end, we developed a finite-element based framework for modeling martensitic transformation at the continuum level, incorporating details of the micromechanical information. A single-crystal model is implemented to provide a full mechanistic three-dimensional description of both the anisotropic elastic and martensitic transformation stress-strain response, including the non-Schmid behavior observed in some types of SMCs. We used the geometrically nonlinear theory of martensite to identify all possible transformation systems in SMAs and SMCs, based on knowledge of the lattice parameters of the single crystal. In the case of SMCs, the model was calibrated against data obtained from compression tests of zirconia micropillars in previously published literature. We conducted finite element simulations to obtain detailed information on the nucleation and evolution of martensite variants and the stress distribution at grain boundaries in both SMAs and SMCs. The simulation results also provide insights into the competing mechanisms of elastic and transformation incompatibility leading to severe stress concentration at grain boundaries. We identified grain boundary configurations which result in very large stress concentrations at very low deformations due to elastic incompatibility, as well as others where the elastic incompatibility is relatively low and stress concentrations only occur at large transformation strains. We also showed how this approach can be used to explore the misorientation space for quantifying the level of elastic and transformation incompatibility at grain boundaries in both SMAs and SMCs.
In addition, we investigated the correlation between different types of incompatibilities and grain boundary characteristics. In the particular case of SMAs, we explored the role that a coincident site lattice (CSL) may have in affecting grain boundary incompatibilities. We demonstrated that grain boundaries with low CSL order exhibit low elastic incompatibilities in Cu-based SMAs, as previously suggested from experimental observations. However, high CSL order grain boundaries result in incompatibilities that are commensurate with those exhibited by random grain boundary configurations. This approach could be used to identify misorientations that reduce or minimize grain boundary incompatibilities, thus extending the superelastic range of the material.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147221</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differentiable Multiscale Molecular Simulations</title>
<link>https://hdl.handle.net/1721.1/147220</link>
<description>Differentiable Multiscale Molecular Simulations
Wang, Wujie
Multiscale molecular simulation is a critical tool to understand matter. The multiscale picture of simulations views observables at different granularities, providing explanation and prediction of phenomena across a wide range of spatial and temporal scales. Recently, data-driven modeling has shown great success in improving the predictive power of molecular simulations, from modeling of electronic structures to macroscopic phenomenology. Much of this success is built on deep, end-to-end differentiable models trained on high-quality big datasets with gradient-based optimization. To fully exploit the power of data-driven multiscale simulations, this thesis explores the application of differentiable algorithms to multiscale molecular modeling. Specifically, I introduce algorithms in three problem domains where differentiable modeling shows great promise: 1) differentiable graph-based force field construction for multiscale molecular simulations; 2) end-to-end differentiable molecular dynamics for learning and control based on coarse-grained observables; 3) differentiable and generative scale-hopping between fine-grained and coarse-grained dynamics. The algorithms introduced in this thesis bridge the gap between scales for data-driven modeling, opening possibilities for more powerful and predictive multiscale models.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147220</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Selective Ion Separations Using Shock Electrodialysis</title>
<link>https://hdl.handle.net/1721.1/147219</link>
<description>Selective Ion Separations Using Shock Electrodialysis
Alkhadra, Mohammad Ayman
Agricultural development, extensive industrialization, and rapid growth of the global population have inadvertently been accompanied by environmental pollution. At the same time, the chemicals and energy industries face a number of difficult challenges in the selective extraction of high-value ionic species from dilute aqueous solutions. Important examples include the selective removal of trace pollutants, such as toxic heavy metal ions, from industrial effluents; the fractionation of chemically and physically similar elements, such as lanthanides for catalytic processes; the extreme deionization of wastewaters, such as radioactive process water from nuclear power plants; and the recovery of lithium compounds and valuable metals for applications in mining and electronic waste recycling. These emerging trends have motivated the search for new principles and methods for improved ion separations. Electrochemical methods in particular have attractive features such as compact size, modularity, chemical selectivity, broad applicability, and reduced generation of secondary waste.&#13;
&#13;
In this thesis, we investigate the emerging electrokinetic approach known as “shock electrodialysis” (shock ED) and its use in selective ion separations. Although the principles of deionization by shock ED have been established in previous work, the possibility of selective ion separations has only recently been discovered, and this capability is explored in depth in this thesis. The first major thrust is an extensive and comprehensive review of electrochemical methods for water purification, ion separations, and energy conversion. The review begins with an overview of conventional electrochemical methods, which drive chemical or physical transformations via Faradaic reactions at electrodes, and proceeds to a detailed examination of the two primary mechanisms by which contaminants are separated in nondestructive electrochemical processes, namely electrokinetics and electrosorption, with special attention given to emerging methods such as shock ED. The second major thrust is the design of processes and operating conditions to demonstrate the broad applicability of shock ED for selective ion removal from contaminated water. We developed several design concepts to control the selective separation of cations, anions, and small, charged hydrocarbons based on electric charge. The third major thrust is the examination of new types of materials in shock ED, including ceramics, clays, and ion exchange resins, several of which enable operation under extreme conditions (e.g., high temperature, high radiation, chemically harsh or reactive contaminants). This study led to the development of shock ion extraction (shock IX), a new hybrid process that combines shock ED and ion exchange, enabling greater ion removal and selectivity, sustained over longer periods of time, than either shock ED or ion exchange alone.&#13;
&#13;
From a fundamental perspective, the novel electrokinetic mechanisms explored in this thesis are shown to have broader implications in deionization, water purification, and metals refining. For the field of chemical engineering, this work demonstrates shock-based methods as an energy-efficient and sustainable route to process intensification, and it paves their way for practical implementation in industry.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147219</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Incentive Designs to Improve Market Performance</title>
<link>https://hdl.handle.net/1721.1/147218</link>
<description>Essays on Incentive Designs to Improve Market Performance
Aronoff, Daniel Joseph
This dissertation is composed of three chapters. Each essay describes and models the structure of a market, identifies an inefficiency in outcomes, and proposes an incentive scheme to reduce the inefficiency. &#13;
 &#13;
First Chapter: Coordination Problems, Leverage Constraints and Smart Contract Solutions for the Repo Market, co-authored with Robert M. Townsend. We identify two inefficiencies in the US treasury repo market and propose solutions. In the first part we develop an empirically founded model of the repo market which displays multiple equilibria. In the second part we explain how the interaction of two distinct financial regulations has caused the unintended consequence of limiting the capacity of broker-dealers to intermediate repo transactions. In the third part we propose smart contract designs that address each inefficiency. The first contract structures the timing of contract commitments to induce a welfare maximizing equilibrium. The second contract replaces a chain of repo contracts with a single contract that reduces the impact of repo on broker-dealer balance sheets, which increases the volume of repo a broker-dealer can intermediate before hitting a regulatory bound. The replacement contract preserves the allocation of risk and payoffs. Our contracts can be implemented when money and treasuries are appended to programmable electronic ledgers. &#13;
   &#13;
Second Chapter: Conservation Priorities and Environmental Offsets: Markets for Florida Wetlands, co-authored with Will Rafey. We estimate a model for an existing decentralized market in Florida wetlands in which land developers purchase offsets from producers, who enhance wetlands. Offset values are aggregated over environmental attributes at fixed exchange ratios to preserve regional  “no net loss”. Producers incur a sunk cost and are awarded offsets over time. We find substantial private gains from trade. We estimate a local flood externality generated by wetlands, which is not covered by current regulations, and which is exacerbated by a spatial separation of the urban locations of impact from the rural locations of offset production. We propose a localized Pigouvian tax on offset transactions which would have prevented $800 million of new flood damage over a 20-year span while preserving more than two-thirds of the private gains from trade.&#13;
&#13;
Third Chapter: ADESS: A Proof-of-Work Blockchain Protocol Modification to Deter Double-Spend Attacks, co-authored with Isaac Ardis. A principal vulnerability of a proof-of-work ("PoW") blockchain is that an attacker can re-write the history of transactions by forking a previously published block and building a new chain segment containing a different sequence of transactions. If the attacker’s chain has the most cumulative mining puzzle difficulty, nodes will recognize it as canonical. We propose a modification to PoW protocols, called ADESS, that contains two novel features which increase the cost of launching a double-spend attack. The first innovation enables a node to identify the attacker chain by comparing the temporal sequence of blocks on competing chains. The second innovation is to penalize the attacker by requiring it to apply exponentially increasing hashrate in order to make its chain canonical. For any value of transaction, there is a penalty setting in ADESS that renders a double-spend attack unprofitable. We provide code to implement ADESS on the Ethereum Classic blockchain.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147218</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soft Skills and Hard Work: Organizing as a Political Behavior Rooted in Relational Labor</title>
<link>https://hdl.handle.net/1721.1/147215</link>
<description>Soft Skills and Hard Work: Organizing as a Political Behavior Rooted in Relational Labor
Nahmias, Gabriel Magnus
What qualities of individuals make them willing and able to organize? Healthy representative democracies depend on citizens consistently overcoming collective action problems. This quality makes organizing - systematic efforts by activists to recruit others and invest in their political engagement - a critical democratic practice. Existing explanations for organizing's emergence tend to focus on political organizations and available opportunity structures. However, organizing is a labor-intensive form of political advocacy which depends on the recruitment activity of individual activists. As a result, addressing what makes individuals choose to do the work of recruitment can help expand our understanding of the conditions that will produce an active and engaged citizenry. I therefore evaluate how a potential organizer's disposition, skills, and positionality uniquely shape their willingness and capacity to recruit compared to engaging in alternative forms of political activity. To this end, I draw on interviews, experiments, and original surveys in the United States and South Africa, as well as cross-national data from 57 countries. Perhaps by centering those who bring others into the political process, we can better understand how to protect and strengthen our democracies.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147215</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous Navigation without HD Prior Maps</title>
<link>https://hdl.handle.net/1721.1/147214</link>
<description>Autonomous Navigation without HD Prior Maps
Ort, Teddy
Most fielded autonomous driving systems currently rely on High Definition (HD) prior maps both to localize and to retrieve detailed geometric and semantic information about the environment. This information is necessary to enable safe operation of many downstream driving components, including prediction, planning, and control. However, this requirement has raised issues with scalability, confining autonomous systems to small test regions where such detailed maps can be maintained. Furthermore, the reliance on HD maps can prevent autonomous vehicles from realizing human-like flexibility to both explore new areas and successfully navigate in rapidly changing environments or weather conditions. In this thesis, we present MapLite, an autonomous navigation system that uses only Standard Definition (SD) prior maps, in conjunction with onboard perception, to directly infer the necessary HD map online. We also explore the use of a Localizing Ground Penetrating Radar (LGPR) for precise localization using stable underground features that are robust to changing weather conditions. Together, these methods can reduce the requirement for HD prior maps and bring autonomous navigation closer to human levels of flexibility and robustness.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147214</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anatomy of an Attitude</title>
<link>https://hdl.handle.net/1721.1/147213</link>
<description>Anatomy of an Attitude
Bondarenko, Tatiana Igorevna
This thesis investigates the syntax and semantics of attitude and speech reports. It is concerned with three questions: (i) what kinds of meanings can tensed embedded clauses have? (ii) how are they integrated into the argument structure of verbs? (iii) why does their distribution sometimes depend not only on the argument structure, but on the embedding environment as well?&#13;
&#13;
Chapter 1 provides a brief summary of my proposal and outlines the assumed framework.  Chapter 2 examines clauses that combine with nouns like 'claim' and 'situation' in Buryat, Korean and Russian. It argues that displacement in attitude and speech reports comes from a projection ContP in the left periphery of embedded clauses (cf. Kratzer 2006, Bogal-Allbritten 2016, a.o.), and clauses differ in whether they have ContP and thus displacement. I also argue for equality semantics of displacement (Moulton 2009, Elliott 2017): clausal embedding does not involve semantics of a universal modal. Chapter 3 examines conjunction and disjunction of embedded CPs, and shows that interpretations of these structures, as well as the impossibility of true CP conjunction follow from my proposal. Chapter 4 investigates how nominalized and bare CPs are integrated with verbs in Buryat, Korean and Russian, and shows that they systematically receive different interpretations. I argue that this difference emerges because while nominalized CPs are arguments of verbs, bare CPs are always modifiers. I show that the integration path of an embedded CP, as well as its internal structure, matter for whether it is transparent for extraction or behaves like an island. Chapter 5 examines two types of polarity subjunctives in Russian: embedded subjunctive clauses whose ability to occur with certain verbs depends on the entailment properties of the environment. I argue that the subjunctive particle activates alternatives which have to be acted upon by a higher focus operator, and intervening projections, including ContP, can change the set of alternatives that the operator receives, making the sentence L-analytic and thus ungrammatical (Gajewski 2002, a.o.) in some cases but not in others.  Chapter 6 summarizes the findings of the thesis, and the typology of tensed embedded clauses that it gives rise to.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147213</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The evolution and diversity of noncanonical microbial nitrogen metabolisms</title>
<link>https://hdl.handle.net/1721.1/147211</link>
<description>The evolution and diversity of noncanonical microbial nitrogen metabolisms
Schwartz, Sarah L.
Microbial nitrogen cycling underpins Earth’s most basic ecology. The environmental and commercial significance of nitrogen biogeochemistry has prompted substantial study of the modern nitrogen cycle, but much remains unknown about the evolution of nitrogen biochemistry through deep time. This thesis explores the evolution and nature of divergent or noncanonical nitrogen metabolisms. The first section of this work explores divergent enzymes of denitrification. I report a novel cytochrome-type nitrite reductase (nirS) structure within members of Phylum Chloroflexi, which includes a cytochrome superfamily domain in addition to functional domains conserved in Proteobacterial nirS. Phylogenetic domain mapping reveals that this gene resulted from a chimeric domain fusion in ancestral Chloroflexi. I also identify an underreported variant of nitric oxide reductase (eNOR) within Chloroflexi MAGs, and provide a detailed phylogeny of this enzyme variant, revealing much broader diversity than previously reported. Next, I explore the evolution of microbial cyanide metabolism. Cyanide-degrading enzymes have been extensively studied for bioremediation and biotechnology, but the extant diversity of such enzymes remains underexplored. Additionally, while cyanide is hypothesized to play a central role in prebiotic chemistry, few biological data constrain the age and emergence of cyanide metabolisms. I provide a comprehensive analysis of the distribution and evolution of the Class I nitrilases, a subfamily specialized for hydrogen cyanide (HCN) reduction. Gene trees reveal that cyanide-reducing nitrilases originated in bacteria and were transferred into eukaryotes, refuting earlier eukaryotic origin hypotheses. Molecular clock analyses indicate that this enzyme subfamily shares an ancestor that emerged 1-2 billion years ago in the Paleo- to Mesoproterozoic.
I also analyze the constrained prokaryotic distributions of other nonhomologous nitrile-reducing enzymes, thiocyanate hydrolases and nitrile hydratases. Finally, I detail the evolution of promiscuous nitrogen metabolism in nitrogenases. Though highly specialized for dinitrogen reduction, molybdenum and vanadium nitrogenases have been shown to reduce off-target nitrogenous substrates, including HCN. Using ancestral sequence reconstruction, I compare the sequence space of predicted ancestral and extant nitrogenase substrate channels. The results indicate that the predicted highest-likelihood ancestral states for key residues are not represented in extant sequence space, and the physicochemical types of these high-likelihood residue combinations are only rarely represented in divergent nitrogenases. These data suggest that ancestral nitrogenases likely had alternative substrate channel compositions, possibly reflecting selection in early Earth environments. Together, these data suggest that the diversity and age of microbial nitrogen metabolisms are currently underestimated, and further study of these pathways should shed light on large-scale patterns of microbial evolution and ecology.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147211</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and controlling resource loading in bacterial genetic circuits</title>
<link>https://hdl.handle.net/1721.1/147210</link>
<description>Modeling and controlling resource loading in bacterial genetic circuits
Barajas, Carlos
Synthetic biology is an emergent interdisciplinary research field whose aim is to engineer biological systems with novel functionalities. Through the addition of synthetic genes, biomolecules are produced that create networks of interactions, referred to as genetic circuits, which endow cells with sensing, computation, and actuation capabilities. For example, bacteria can be engineered to recognize and kill cancer cells once in the bloodstream, to neutralize radioactive waste, or to sense stress levels and release drugs once in the human gut. Despite tremendous progress, it is still often a challenge to engineer cells in such a way that they behave as predicted. On the one hand, synthetic gene activation causes non-physiological burden on cellular resources that cells are unable to adjust to. This leads to physiological changes and growth rate imbalances that subtly affect the function of the engineered cell. On the other hand, the mathematical models that we use to design genetic circuits often miss relevant physical aspects needed to accurately predict a circuit’s dynamics. One of these aspects is spatial heterogeneity inside the cell. This thesis addresses both of these challenges through a combined theoretical and experimental effort.&#13;
&#13;
In the first part of the thesis, I introduce a feedforward controller that increases ribosome level upon activation of a gene of interest (GOI) to compensate for the burden that GOI activation places on the cell. The controller increases ribosome level by activating a modified SpoT enzyme with sole hydrolysis activity, which lowers ppGpp level and thus de-represses ribosomes. That is, the controller increases the availability of resources once they are demanded by GOI activation by actuating the cell’s endogenous ribosome regulation system. Without the controller, activation of the GOI decreased growth rate by more than 50%. With the controller, we could activate the GOI to the same level without a growth rate decrease. A cell strain armed with the controller in co-culture enabled persistent population-level activation of a GOI, which could not be achieved by a strain devoid of the controller. The feedforward controller is a tunable, modular, and portable tool that for the first time keeps growth rate constant despite synthetic gene activation.&#13;
&#13;
In the second part of the thesis, I model spatial heterogeneity inside bacterial cells and propose a simple modeling framework, useful for design, that accounts for spatial information. I start with a generic partial differential equation (PDE) model that describes the rate of change of species concentration as a function of location inside the cell. Then, I exploit the time scale separation between diffusion and the chemical reaction dynamics to derive a reduced-order model consisting solely of ordinary differential equations (ODEs). I then apply this result to study enzymatic-like reactions that are used to model most of the cell’s important processes, including transcription, translation, and their regulation, highlighting significant differences with traditional models. This new model provides a general and simple framework to capture spatial heterogeneity in bacterial cells and thus improves upon the predictive power of current models used to design genetic circuits.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147210</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Morphogenetic forces planar polarize LGN/Pins in the embryonic head during Drosophila gastrulation</title>
<link>https://hdl.handle.net/1721.1/147208</link>
<description>Morphogenetic forces planar polarize LGN/Pins in the embryonic head during Drosophila gastrulation
Camuglia, Jaclyn M.
During embryonic development, epithelial tissues grow and undergo dramatic changes to adopt complex shapes. Coordinated cell behaviors, including oriented cell divisions, are crucial to the proper development of epithelia. How forces are transmitted across the developing embryo, and how these forces, along with genetic programs that are active at this time, contribute to coordinated cell behaviors remains a long-standing question in biology. This work addresses how forces transmitted through the early Drosophila embryo result in the planar cell polarity of Pins/LGN localization and cell division orientation. I demonstrate that this striking planar cell polarity and division orientation depend on forces that result from morphogenetic movements on the other side of the embryo (~100μm away). The oriented divisions I describe are aligned with the global elongation and tissue flow observed in the embryo. I separately show the effect of force generation on Pins recruitment by disrupting adherens junctions and actomyosin contractility, by laser-induced tissue isolation, and, finally, by disrupting mesoderm invagination. Overall, the results show how mechanical forces are integrated across an embryo to control cell division via a novel mechanism that involves force-dependent planar cell polarity of Pins/LGN. To my knowledge, this is the first in vivo example where mechanical force has been shown to polarize Pins to mediate division orientation, and it further serves as an example of the integration of mechanical force and genetic programming during development.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147208</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The low temperature specific heat of a linear polymer</title>
<link>https://hdl.handle.net/1721.1/147172</link>
<description>The low temperature specific heat of a linear polymer
Isaacs, Leslie Laszlo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1960; Vita.; Includes bibliographical references (leaves 59-60).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147172</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the recognition of speech by machine</title>
<link>https://hdl.handle.net/1721.1/147166</link>
<description>On the recognition of speech by machine
Hughes, George W.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Vita.; Includes bibliographical references (leaves 142-149).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147166</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatiotemporal chaos in the nonlinear three wave interaction</title>
<link>https://hdl.handle.net/1721.1/147155</link>
<description>Spatiotemporal chaos in the nonlinear three wave interaction
Chow, C. C.
            (Carson C.)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (leaves 146-153).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147155</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-signal behavior of a parametric magnetogasdynamic generator</title>
<link>https://hdl.handle.net/1721.1/147152</link>
<description>Large-signal behavior of a parametric magnetogasdynamic generator
Lewis, Arthur T.
            (Arthur Thomas),
            1891-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Vita.; Includes bibliographical references (leaves 126-128).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147152</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Customizing Multifunctional Bidirectional Neural&#13;
Interfaces through Fiber Drawing</title>
<link>https://hdl.handle.net/1721.1/147140</link>
<description>Customizing Multifunctional Bidirectional Neural&#13;
Interfaces through Fiber Drawing
Antonini, Marc-Joseph
Understanding neurophysiological phenomena underlying complex mental and neurological conditions demands tools capable of delivering and receiving a diversity of neuronal signals over an extended period of time. Fiber drawing enables the fabrication of biocompatible multifunctional flexible fibers that record and modulate neural activity. However, constraints on the thermomechanical properties of materials have prevented the fiber integration of metals and low-loss polymer waveguides for concurrent electrical and optical neuromodulation. To address this challenge, three fabrication approaches based on fiber drawing were introduced. Each method delivered multifunctional probes featuring a low-loss transparent waveguide for optical stimulation, low-impedance metallic electrodes for electrophysiological recording, and a microfluidic channel for drug delivery. These probes successfully recorded optically evoked and spontaneous neural activity in mice for several weeks and were shown to be compatible with a mechanical microdrive for depth-specific recording, and with magnetic resonance imaging for anatomical and functional imaging studies. The multifunctionality of the probe was then leveraged to enable the translation of photopharmacology, a method that attaches optical switches to chemicals or proteins, to in vivo experiments. This approach enabled the reversible optical control of place preference behavior in freely moving mice. Finally, a fiber-based closed-loop neuroprosthesis was developed to bidirectionally interface with the gastrointestinal tract of swine. It was shown to successfully modulate the musculature to generate coordinated peristaltic waves and address dysmotility in the esophagus and the stomach. Taken together, the findings of this thesis are anticipated to provide a platform for the development of multifunctional fiber-based probes and their application in the brain and peripheral circuits as investigational tools and therapeutic devices.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147140</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and applications of peptide barcoded nanoparticles for high-throughput screening of mRNA delivery materials in vivo</title>
<link>https://hdl.handle.net/1721.1/147139</link>
<description>Development and applications of peptide barcoded nanoparticles for high-throughput screening of mRNA delivery materials in vivo
Rhym, Luke Hyunsik
mRNA lipid nanoparticles (LNP) present a promising class of therapeutics, with broad applications in protein replacement therapy, gene editing, immunotherapy, and vaccines, owing to their versatility and precise nature. While recent years have seen dramatic improvements in the safety and efficacy of mRNA therapeutics, their functional delivery to target tissues and cells in vivo remains challenging, partly due to the lack of predictive power of in vitro assays and the low-throughput and costly nature of in vivo screening approaches. Thus, there is still a need for safe, specific, and potent mRNA delivery materials, as well as higher throughput in vivo screening methods.&#13;
&#13;
In this work, we developed a novel in vivo nanoparticle screening platform that relies on LC-MS/MS-based detection of peptide barcodes translated from barcoded mRNAs in transfected cells, allowing for a readout of functional delivery that is directly proportional to protein production effected by each nanoparticle within a pooled library. We showed that this approach has high sensitivity and accuracy in both cultured cells in vitro and in tissues in vivo and demonstrated the applicability of this approach to in vivo screening of LNPs by developing and optimizing the formulation of a biodegradable LNP, RM133-3-21, for potent mRNA delivery to the liver. We then screened a large library of ionizable lipids for their ability to deliver mRNA to the lung and optimized both the structure and formulation of the lead compound. The resulting LNP, C15-21, is highly potent and is able to transfect up to 80% of lung endothelial cells after a single dose. In addition, we demonstrated that C15-21 is able to efficiently deliver Cas9 mRNA and sgRNA for targeted gene disruption in the lung, resulting in up to 7.5% gene editing in lung endothelial cells. Finally, we also developed materials and formulations that show high specificity for splenocytes in vivo.&#13;
&#13;
Taken together, the work presented in this thesis contributes to the field of mRNA therapeutics by increasing the throughput of LNP testing in vivo and by introducing novel delivery materials.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147139</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preparation and some reactions of an α-lactam</title>
<link>https://hdl.handle.net/1721.1/146931</link>
<description>Preparation and some reactions of an α-lactam
Lengyel, István.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1964; Vita.; Includes bibliographical references (leaves 32-33).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146931</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New models of magnetic interactions for bound magnetic polarons in dilute magnetic semiconductors</title>
<link>https://hdl.handle.net/1721.1/146921</link>
<description>New models of magnetic interactions for bound magnetic polarons in dilute magnetic semiconductors
McIntyre, Cynthia R.
            (Cynthia Roberts)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1990; Includes bibliographical references (leaves 75-76).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146921</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the effect of turbulence on sedimentation</title>
<link>https://hdl.handle.net/1721.1/146726</link>
<description>A study of the effect of turbulence on sedimentation
Dobbins, William Earl.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1964; Includes bibliographical references (leaves 138-139).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146726</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chemistry of cyclotrigermanes and digermenes</title>
<link>https://hdl.handle.net/1721.1/146725</link>
<description>The chemistry of cyclotrigermanes and digermenes
Batcheller, Scott A.
            (Scott Allan),
            1961-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1993; Vita.; Includes bibliographical references (p. 497-505).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146725</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of normal surface vibrations on sand-solid friction,</title>
<link>https://hdl.handle.net/1721.1/146720</link>
<description>The effect of normal surface vibrations on sand-solid friction,
Gordon, Samuel J.
            (Samuel James)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146720</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suppression of spin wave instabilities in ferrimagnets.</title>
<link>https://hdl.handle.net/1721.1/146717</link>
<description>Suppression of spin wave instabilities in ferrimagnets.
Hartwig, Curtis Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1966; Lacking p. 44.; Bibliography: p. 103-105.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146717</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations of the coherence properties of scattered light.</title>
<link>https://hdl.handle.net/1721.1/146715</link>
<description>Investigations of the coherence properties of scattered light.
Goldstein, John Cecil.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 156-162.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146715</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacing living plants with nanomaterials for in planta sensing and plant biotechnology applications</title>
<link>https://hdl.handle.net/1721.1/146710</link>
<description>Interfacing living plants with nanomaterials for in planta sensing and plant biotechnology applications
Salim Lew, Tedrick Thomas.
Plants represent an important biological system which can self-repair, grow autonomously, and generate energy from sunlight through photosynthesis. As sessile organisms, they have evolved complex internal and inter-organism signaling pathways with distinct structures to thrive in ever-changing and at times unpredictable conditions. While humans have used plants as a source of food and raw materials for millennia, we have only considered living plants as a technology in recent years. A major advance in this area is the emergence of plant nanobionics, a field which interfaces living plants with nanotechnology to impart the former with novel and non-native functionalities. The broad vision of plant nanobionics is to use rationally-designed nanoparticles to engineer a wide array of plant-based technologies, not genetically limited to specific plant species, to potentially replace the myriad devices in our everyday lives stamped out of plastic, containing circuit boards and consuming power from the electrical grid. Nanomaterials are ideal candidates for interfacing with plants due to their unique physical and electronic properties, which can be rationally modified to complement and augment the biological properties of living plants. In addition, engineered nanomaterials can serve as nanocarriers to deliver biomolecular cargo, as well as nanosensors to translate the invisible plant internal signaling into an optical readout easily interpreted by portable electronics.; This thesis investigates the underlying mechanisms of interaction between nanoparticles and plant membranes, and explores the diverse plant biotechnology and agricultural applications enabled by interfacing living plants with nanomaterials. We first study the influence of nanoparticle physical properties on their ability to traffic into plant cells and organelles. We show that the uptake and localization of nanoparticles in plant cells are governed mainly by their surface charge and size.
An experimentally-verified thermodynamic model, termed Lipid Exchange Envelope Penetration (LEEP), was formulated to guide the rational design of nanoparticles to target specific subcellular compartments within the plant cell. Leveraging these design principles, we synthesize a new class of single-walled carbon nanotube (SWNT)-based nanocarriers to selectively deliver plasmid DNA into the chloroplasts of mature living plants. The nanoparticle-mediated delivery platform can protect and safely deliver genes into the chloroplasts of both model and agriculturally-relevant crops without mechanical aid. The nanocarrier approach enables strong transient expression of reporter proteins in Arabidopsis thaliana, wild-type watercress (Nasturtium officinale), spinach (Spinacia oleracea) and tobacco (Nicotiana benthamiana) plants. We further design another class of SWNT nanocarriers to deliver genes into pollen grains, the male gametophyte of flowering plants. The difficulty of delivering exogenous DNA into pollen grains, due to chemically inert cell walls, has hindered their wide application in agricultural biotechnology. Utilizing the LEEP model, we engineer imidazolium-functionalized SWNTs to efficiently traffic past the pollen membranes. These findings provide insights for the rational design and refinement of nanocarriers for plant biotechnology applications.; This thesis also explores the application of nanomaterials as optical nanosensors to study the plant defense signaling pathways. In this study, we employ near-infrared fluorescent SWNT nanosensors developed using the Corona Phase Molecular Recognition (CoPhMoRe) concept to modulate nanoparticle-analyte binding. The nanosensor platform can capture the fast dynamics of wound-induced H₂O₂ signal propagation in real time non-destructively, enabling interfacing of the plant defense network with portable electronics at a standoff distance.
We find that the H₂O₂ concentration profile post-wounding follows a logistic waveform for six plant species: lettuce (Lactuca sativa), arugula (Eruca sativa), spinach (Spinacia oleracea), strawberry blite (Blitum capitatum), sorrel (Rumex acetosa), and Arabidopsis thaliana, ranked in order of wave speed from 0.44 to 3.10 cm/min. Our findings highlight the utility of a new type of nanosensor probe that is species-independent and capable of real-time, spatial and temporal biochemical measurements in plants.; Lastly, we demonstrate the development of a plant nanobionic sensor for selective and sensitive detection of arsenic in the belowground environment. These optical nanosensors were embedded in plant tissues to monitor the internal dynamics of arsenic taken up by the plants via the roots. A rational design of SWNT nanosensors with high selectivity and intensity modulation towards arsenite is discussed. We exploit the natural ability of an arsenic-hyperaccumulating fern to engineer a nanobionic sensor capable of detecting down to 0.2 ppb level of arsenic, well below the regulatory limit in drinking and irrigation water. These demonstrations highlight the potential of nanomaterials for the creation of future plant nanobionic devices, as well as for the development of species-independent tools for agricultural biotechnology applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146710</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging through optical multimode fiber: towards ultra-thin endoscopy</title>
<link>https://hdl.handle.net/1721.1/146703</link>
<description>Imaging through optical multimode fiber: towards ultra-thin endoscopy
Lee, Szu-Yu
Optical imaging in biomedicine provides pathophysiological information with high resolution, high speed, and minimal invasiveness. Endoscopy in particular has revolutionized healthcare diagnosis and treatment as well as biological research by offering visual access to otherwise unreachable remote tissues. However, existing endoscopic modalities face fundamental limitations in their designs that prohibit miniaturization to below a few millimeters in diameter, which would enable imaging through any natural or artificial lumen and thus unprecedented opportunities. This predicament, together with unmet medical needs such as deep-brain imaging, imaging-guided needle biopsy, and imaging-guided micro-surgery, has motivated new and scalable endoscope designs, including the concept of utilizing a single optical multimode fiber (MMF) as a stand-alone image conduit. MMF is attractive as an optical waveguide owing to its ultra-small footprint, high data throughput, low cost, and flexibility. Nevertheless, the mode mixing and dispersion effects inherent to MMF are technical barriers to its ability to relay clear images; optical propagation through even a short length of MMF scrambles an image completely.
The focus of this dissertation research is therefore to study waveguide physics of MMF and to innovate powerful computational methods as compensatory strategies that enable high fidelity imaging and sensing through the fiber: We developed numerical simulation toolboxes and experimental measurement systems to characterize bi-directional light transport through MMF; By modeling the light transmission through MMF and sample interaction with matrix operations, we demonstrated three-dimensional (3D) label-free multi-modal imaging based on computational reconstruction; To facilitate multi-spectral and broadband operations with MMF, we established a parametric dispersion model for efficient fiber calibration across a broad spectrum; The spatio-temporal modes within the MMF can be conveniently leveraged for depth sensing, where we created a high-resolution and long-range axial profiling system using MMF; Finally, we showed a proximal MMF calibration method for implementing flexible MMF-based endoscopes by exploiting the waveguide physics and numerical optimization.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146703</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamically-driven Advances in Efficient and Cost-Effective Desalination and Brine Concentration</title>
<link>https://hdl.handle.net/1721.1/146699</link>
<description>Thermodynamically-driven Advances in Efficient and Cost-Effective Desalination and Brine Concentration
Bouma, Andrew Thomas
Global water resources face a number of challenges. Growing global population and rising standards of living have led to increased water demand for domestic use, agricultural irrigation, and industrial processes. The effects of climate change have resulted in changes to historical patterns of rainfall and water supply. Severe and lasting water shortages are becoming more common and widespread, so that existing water infrastructure cannot provide stable resources in some regions. Alternative water sources, such as seawater desalination, brackish water desalination, and zero liquid discharge desalination, can help bridge this gap. However, to avoid amplifying the climate crisis, carbon emissions associated with desalination and brine concentration must be minimized. As a result of the rising use of desalinated water and the inherently large energy cost associated with desalinating seawater, developing efficient desalination technologies has become a major focus of water research. This work develops improved metrics, technoeconomic models, and technological advances to raise the efficiency and cost-effectiveness of desalination and brine concentration technologies.&#13;
&#13;
First, evaluating technological improvements and new technologies relies on the ability to fairly and accurately quantify the value of said improvements. However, accurately evaluating and comparing the energy consumption of desalination plants that use different forms and grades of energy is difficult. To fully capture the thermodynamic and economic cost of energy, and to fairly compare desalination systems that use different grades of input energy, energy consumption must be compared not at the point where energy enters the desalination plant itself, but as primary energy entering a power plant in a coproduction arrangement. The first section of this work investigates a variety of metrics for comparing the energy and exergy consumption attributable to desalination in coproduction plants, evaluates 48 different power-water coproduction systems, and compares the primary energy consumption of multi-effect distillation (MED) and reverse osmosis (RO) from a thermoeconomic perspective. The entropy generation at the RO membrane and in the MED effects are derived in similar terms, which enables a comparison of the overall heat transfer coefficient in an MED system to the permeability of an RO membrane. RO is shown to outperform MED in energy efficiency because of a balance of material costs, transport coefficients, and cost of energy.&#13;
&#13;
Second, technoeconomic principles from the first section are applied to a case study. This work evaluates the technoeconomic feasibility of collocating a seawater reverse osmosis desalination plant with an existing nuclear power plant, specifically the 2.2 GWe Diablo Canyon Nuclear Power Plant on California's central coast. This work shows that at a collocated plant, the sharing of seawater intake and outfall structures, reduced power costs due to reductions in transmission costs, and potential additional cost savings from economies of scale could enable desalination plants to produce water at half the cost of other stand-alone desalination alternatives. This work is the first to show that collocated RO and nuclear power have strong coupling that results in a significant economic advantage over seawater desalination at other sites. These advantages are not unique to the Diablo Canyon site and should be applicable to dozens of existing nuclear power facilities.&#13;
&#13;
Third, this work evaluates newly developed brine concentration technologies, specifically low-salt-rejection reverse osmosis (LSRRO) and osmotically assisted reverse osmosis (OARO). A variety of technology configurations, including single- and multi-staged systems, are investigated and optimized. Systems are separately designed for both minimal energy consumption and minimum system size, resulting in a design envelope that contains all cost-optimal designs. This work improves on existing literature by simulating designs in realistic form factors and using probable membrane parameters. Evaluation of exergy destruction provides insight into system operation and optimization. This work shows that the novel semi-split OARO configuration improves on both split-feed and split-brine OARO configurations, improving both energy consumption and membrane usage compared to existing designs, and extending the operating range of standalone systems. LSRRO systems are likely to have smaller system sizes than OARO systems, although specific membrane costs will determine whether this translates to cost benefits.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146699</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Ab-initio Quantum Chemistry Techniques to Hypersonic Flows for Plasma Blackout Alleviation</title>
<link>https://hdl.handle.net/1721.1/146685</link>
<description>Application of Ab-initio Quantum Chemistry Techniques to Hypersonic Flows for Plasma Blackout Alleviation
Sabo, Kevin M.
Plasma blackout in hypersonic flows produced by weakly-ionized air plasma sheaths is a phenomenon that attenuates or completely blocks communications with vehicles flying in a high-enthalpy flow environment. The ability to mitigate this issue is valuable for platforms ranging from re-entry vehicles to in-atmosphere hypersonic vehicles, especially as the aerospace industry pushes further into hypersonic flight.&#13;
&#13;
This thesis asks how effective electrophilic compounds are at reducing the number density of electrons within a hypersonic plasma sheath environment. This reduction in the electron number density lowers the critical plasma frequency, thus allowing for the successful propagation and transmission of electromagnetic signals.&#13;
&#13;
This work presents three contributions. First, a framework is proposed for the construction of viscous hypersonic chemistry models using ab initio quantum chemistry techniques, which accounts for both spin-adiabatic and spin-nonadiabatic chemical processes. Next, an analysis is performed on the pressure-dependence of the chemical reactions that are typically seen in hypersonic flow environments, showing that pressure-dependent chemical rate coefficients are a key aspect in these environments and thus should not be ignored. Lastly, a first-principles approach to assessing the feasibility of such a system and the resulting sizing requirements is presented, including a method for identifying and calculating the predominant chemical timescales which govern the electron quenching process. Together, these contributions provide a fundamental way to model and solve the problem of alleviating plasma blackout using electrophilic compounds.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146685</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiperiod adaptive control of the wellhead price of natural gas; a Bayesian decision theoretic approach.</title>
<link>https://hdl.handle.net/1721.1/146359</link>
<description>Multiperiod adaptive control of the wellhead price of natural gas; a Bayesian decision theoretic approach.
Mehta, Cyrus Rustam.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1973; Vita.; Bibliography: leaves 143-146.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146359</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of large-eddy breakup devices in a turbulent flow</title>
<link>https://hdl.handle.net/1721.1/146357</link>
<description>Effects of large-eddy breakup devices in a turbulent flow
Balakumar, Ponnampalam.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1986; Bibliography: leaves 214-223.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146357</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phagocytosis promotes programmed cell death and is controlled by Rac signaling pathway in C. elegans</title>
<link>https://hdl.handle.net/1721.1/146356</link>
<description>Phagocytosis promotes programmed cell death and is controlled by Rac signaling pathway in C. elegans
Reddien, Peter W.
            (Peter Walthour),
            1974-
Programmed cell death is important in development, homeostasis, and disease. In the nematode Caenorhabditis elegans, four genes, egl-1, ced-9, ced-4, and ced-3, control the execution of cell death and define a molecular pathway for cell death conserved in humans. Seven genes control the engulfment of cell deaths and define two partially redundant pathways: ced-1, ced-6, and ced-7 in one pathway and ced-2, ced-5, ced-10, and ced-12 in the other. ced-3 encodes a defining member of a family of cysteine proteases termed caspases. We performed a mutational analysis of the ced-3 caspase-encoding gene, identified residues within the CED-3 protein important for caspase function in vivo, and determined that a limited amount of cell death can occur in the complete absence of CED-3 protease activity. We discovered a role for engulfment in promoting cell death and found that in the absence of engulfment, cells could occasionally recover from the initial stages of death that are triggered by the CED-3 caspase. Our results support a new view of cell death in which engulfing cells actively promote the death process rather than simply remove dead cells. We characterized an engulfment pathway and found that ced-2 encodes an adaptor protein similar to human CrkII that physically interacts with the previously identified CED-5 DOCK180 protein and that ced-10 encodes a Rac-like GTPase. ced-10 acts downstream of ced-2 and ced-5 within engulfing cells to control the extension of cell surfaces around dying cells.; (cont.) We found that ced-10 Rac is the primary Rac gene required for engulfment but acts redundantly with the mig-2 Rac-like gene and independently from ced-2 and ced-5 to control neuronal migration and axon pathfinding. We suggest Rac genes can be differentially utilized and regulated for different developmental events. We identified two new genes that promote programmed cell death from a genetic screen. 
The first, dpl-1, encodes a protein similar to human DP, which is the heterodimerization partner for the transcription factor E2F. The second, mcd-1, encodes a novel Zn finger-containing protein. dpl-1 and mcd-1 act downstream of the cell death inhibitory gene ced-9 to promote cell death.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2002; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146356</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Micromechanical modelling of the fracture behavior of second-phase reinforced cementitious materials</title>
<link>https://hdl.handle.net/1721.1/146353</link>
<description>Micromechanical modelling of the fracture behavior of second-phase reinforced cementitious materials
Huang, Qiong Joan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1990; Includes bibliographical references (leaves 140-149).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146353</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher asymptotics of the complex Monge-Ampère equation and geometry of CR-manifolds</title>
<link>https://hdl.handle.net/1721.1/146352</link>
<description>Higher asymptotics of the complex Monge-Ampère equation and geometry of CR-manifolds
Lee, John Marshall.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1982; Bibliography: leaves 78-79.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146352</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in unemployment and economic activity</title>
<link>https://hdl.handle.net/1721.1/146351</link>
<description>Essays in unemployment and economic activity
Bean, Charles Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1982; Includes bibliographies.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146351</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of amino acid transport across the lysosomal surface by the mTORC1 pathway</title>
<link>https://hdl.handle.net/1721.1/146299</link>
<description>Regulation of amino acid transport across the lysosomal surface by the mTORC1 pathway
Kedir, Jibril F.
Multicellular eukaryotes readily adjust their growth in response to environmental cues. A central nutrient-sensing pathway crucial for this process requires the mechanistic target of rapamycin complex 1 (mTORC1), which integrates various inputs such as amino acids, growth factor signaling, and energy levels. Amino acids promote the localization of mTORC1 to the lysosomal surface, where it can then be activated in response to growth factor availability. While the Rag GTPases mediate the recruitment of mTORC1 to the lysosomal surface, they also have a mutually exclusive, nutrient-regulated interaction with SLC38A9, a lysosomal amino acid transporter. How the mutually exclusive mTORC1-Rag and SLC38A9-Rag interactions are achieved, and how they alter the function of each component, is not completely understood.&#13;
&#13;
We found that the Rag GTPases are necessary to promote SLC38A9 transport independent of mTORC1 kinase activity. Moreover, we determined the structures of Raptor-Rag-Ragulator and SLC38A9-Rag-Ragulator at 3.2 Å and 3.6 Å resolution, respectively, and generated separation-of-function mutants of RagA. Using these constructs, we show that perturbation of Rag GTPase binding to SLC38A9, but not mTORC1, causes the accumulation of a distinct set of non-polar, mostly essential amino acids (tyrosine, leucine, phenylalanine, and isoleucine). We also show that “inactive” Rags, competent to bind SLC38A9, promote its transport activity in proteoliposomes reconstituted with wild-type SLC38A9. We believe the purpose of this regulation is to efflux amino acids from the lysosome during starvation.&#13;
&#13;
Overall, we identified the mechanism by which mTORC1 regulates the efflux of essential amino acids from lysosomes: the Rag-Ragulator complex. This work ascribes to the Rag-Ragulator complex an alternative function independent of its ability to convey the availability of nutrients to mTORC1. Furthermore, we show that this direct gating mechanism plays a role in regulating the efflux of essential amino acids from lysosomes of mouse hepatocytes in vivo during periods of starvation and refeeding. Thus, the studies described in this thesis provide new insights into the role of the Rag GTPases through genetic, biochemical, and structural techniques. Further mechanistic insights into Rag-Ragulator binding to SLC38A9 will reveal how the Rag GTPases regulate its transport function, along with that of other interactors, in various physiological contexts.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146299</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On intraseasonal variability in the tropics: tropical cyclones, the Madden-Julian Oscillation, and equatorial waves</title>
<link>https://hdl.handle.net/1721.1/146298</link>
<description>On intraseasonal variability in the tropics: tropical cyclones, the Madden-Julian Oscillation, and equatorial waves
Lin, Jonathan
This dissertation addresses some aspects of tropical intraseasonal variability, which is dominated by tropical cyclones, equatorial waves, and the Madden-Julian Oscillation (MJO). Because these phenomena have significant societal impacts on sub-seasonal time scales, it is important to understand how they interact with the large-scale atmosphere. The first part of this thesis develops Forecasts of Hurricanes using Large Ensemble Output (FHLO), a large-ensemble, probabilistic tropical cyclone forecast model. FHLO incorporates state-dependent forecast uncertainty by sampling the internal variability of ensemble numerical weather prediction models. It is shown that including state-dependent forecast uncertainty can lead to significant improvements in pointwise wind speed forecasts at lead times longer than around 3 days. The second part of this thesis addresses how tropical disturbances interact with the stratosphere. A linear framework in which a convecting, quasi-equilibrium troposphere is coupled to a dry, passive stratosphere is developed. It is shown that smaller-scale waves are strongly damped by the stratosphere, while slower-propagating waves, such as Rossby waves and the MJO, are less affected. Excitation of the barotropic mode by the stratosphere and surface friction is also analyzed. In particular, it is found that surface friction can excite the barotropic mode far away from the equator, though the poleward extent of the barotropic mode is strongly controlled by how much energy leaks into the stratosphere. The last part of this thesis extends the linear framework to include non-zero zonal wind in the stratosphere, to understand how stratospheric circulations, such as the Quasi-Biennial Oscillation (QBO), can influence the strength of the MJO. It is found that the tropospheric barotropic mode can be phase-shifted by stratospheric winds, but only under unrealistic forcings at the tropopause.
Upward wave radiation is found to be stronger under easterly than westerly winds in the stratosphere, because of increased upward energy flux by Kelvin waves. The effect of the stratosphere on cirrus clouds is also investigated. It is shown that dynamical modulation of lower stratospheric clouds, and anomalous advection of upper-tropospheric ice clouds, can explain why the MJO is stronger under easterly than westerly phases of the QBO.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146298</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Drivers of Disruption: How Jakarta's Mobility Platform Drivers Understand, Transform and Resist the Algorithms that Manage Them</title>
<link>https://hdl.handle.net/1721.1/146296</link>
<description>Drivers of Disruption: How Jakarta's Mobility Platform Drivers Understand, Transform and Resist the Algorithms that Manage Them
Qadri, Rida
Leveraging embedded fieldwork with mobility platform drivers in Jakarta, this dissertation shows how dreams of technologically-enabled disruption fall apart on the streets of Global South cities. Through the case of the platform companies Grab and Gojek, the three essays narrate the domestication of the digital as it is implicated in the local. In doing so, I bring focus to the varied infrastructures (human, physical, relational, social) that underlie technological interventions. Paper 1 sketches out how the introduction of mobility platforms in Jakarta gave rise to unique architectures of ‘distributed worker solidarity’, showcasing the resilience of informal institutions through moments of technological change. Paper 2 examines the role played by these informal mutual aid networks in mediating precarity for platform workers in Jakarta during COVID-19, underscoring the importance of worker relationships as scaffolding for platformization. Paper 3 interrogates the encounter between the abstracted, data-driven algorithmic assumptions underpinning mobility platforms and the drivers' contextual, embodied, situated knowledge practices, to empirically demonstrate the limitations of algorithmic solutions in a city like Jakarta. Through this work I hope to re-articulate what technological disruption means by shifting the vantage point from the boardroom to the streets, the apps, the motorbikes; in other words, to the places where ‘disruption’ is lived.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146296</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanosystems: From the Lab to the Fab</title>
<link>https://hdl.handle.net/1721.1/146295</link>
<description>Nanosystems: From the Lab to the Fab
Srimani, Tathagata
Exponential improvements in computing performance have impacted and improved nearly every aspect of our lives: from education to transportation to healthcare. And with continued gains, applications which were once science fiction – from fully-autonomous vehicles to personalized healthcare – will soon be a reality. Yet at the exact moment these next-generation applications are poised to once again revolutionize our lives, gains in computing performance are slowing. The conventional approaches relied on to improve computing – mainly relentless physical and equivalent scaling of devices – are reaching fundamental limits, and while progress will undoubtedly continue, the rate of gains has already slowed dramatically over the last decade. Therefore, to enable these next-generation applications, new approaches to computing systems are required. Rather than rely on a single approach, coordinated advances across the system stack – from new technologies to new system architectures – are required to overcome today’s challenges. This is embodied by “NanoSystems”, which use emerging nanotechnologies to realize new system architectures to enable new applications. &#13;
&#13;
Yet however intellectually compelling nanosystems may be, the problems facing computing today are very real and very current. Unfortunately, despite their promise, nanosystems were exclusively of academic interest: all nanosystem demonstrations were fabricated in academic labs, and many challenges prohibited nanosystems from transferring into industry and thus into the real world.&#13;
&#13;
This thesis addresses this critical problem. By demonstrating the world’s first adoption of nanosystems within industry, this thesis provides both a specific path forward and a general approach for transforming promising nanosystems in theory into practical systems that can impact our daily lives. To transform nanosystems from the “lab” to the “fab”, this thesis must address challenges that span the entire stack: from low-level material optimizations, to semiconductor device engineering, to circuit and system design, up to architectures and application implementation. As a case study, this thesis focuses on carbon nanotubes and monolithic three-dimensional integration as the specific implementation of a nanosystem, yet the lessons and conclusions from this work are applicable to a broad set of emerging nanotechnologies and nanosystems. Beyond technology, this thesis shows unequivocally that nanosystems should, and can, be transferred from academic “labs” into commercial “fabs”, providing a realistic and feasible path forward for computing to continue to improve and revolutionize the world we live in.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146295</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-Molecule Protein Sequencing (I) and Genetically Dominant mRNA Therapies to Combat Viral Evolution (II)</title>
<link>https://hdl.handle.net/1721.1/146294</link>
<description>Single-Molecule Protein Sequencing (I) and Genetically Dominant mRNA Therapies to Combat Viral Evolution (II)
Choueiri, Alexi Georges
Proteins serve critical structural and dynamic functional roles at the cellular level in all living organisms. Understanding the contribution of proteins to biological function is critical and rests on having appropriate technologies for their quantification and identification. The central dogma of molecular biology, information flow from DNA to RNA to protein, has been studied for decades, as these molecules are critical to cell function and diversity. The advent of polymerase chain reaction (PCR) amplification of nucleic acids was pivotal in advancing high-throughput molecular interrogation and analysis of DNA and RNA at the whole-genome and transcriptome level. In contrast, the study of proteins has lagged technologically, since there is no equivalent of PCR to amplify and detect low-copy-number proteins. Instead, protein sequencing and identification methods have relied on ensemble measurements from many cells, which mask cell-to-cell variations¹. While some researchers have turned to transcriptomics as a proxy for the protein composition of cells, gene expression at the transcriptomic level correlates only weakly with the proteomic profile, owing to variability in the translational efficiency of different mRNAs and the difference between mRNA and protein lifetimes². In addition, post-translational modifications introduce significant variability in protein abundance and primary sequence with respect to the transcriptome. Vital biological processes such as synaptic plasticity, metabolic signaling pathways, and stem cell differentiation all depend on protein expression. Many diseases also originate from genetic mutations that are in turn translated into a single or a set of aberrant proteins. Diseases such as cancer and neurodegeneration tend to involve mutations of unclear origin and polygenic interactions.
They can be best understood and addressed at the proteomic level, since their pathology is directly related to disrupted proteostasis at the cellular level³,⁴. The lack of technology for high-resolution protein-level analyses represents a significant gap in advancing important biological research. To address this issue, we propose a technology for single-molecule identification and sequencing of proteins, enabling high-resolution interrogation of the proteome and ultrasensitive diagnostics critical for early detection of diseases. The technology outlined will involve single-molecule detection via labeling and imaging of amino acids, the building blocks of proteins, that are sequentially isolated and immobilized from the protein N-terminus using a novel chemical design called ClickP, thus removing recognition interference from neighboring amino acids.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/146294</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a statewide transportation system planning process: an application to California.</title>
<link>https://hdl.handle.net/1721.1/145878</link>
<description>Design of a statewide transportation system planning process: an application to California.
Mead, Kirtland Chase.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Vita.; Bibliography: leaves 348-351.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145878</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analytical investigation of air-traffic in the vicinity of terminal areas.</title>
<link>https://hdl.handle.net/1721.1/145870</link>
<description>An analytical investigation of air-traffic in the vicinity of terminal areas.
Odoni, Amedeo R.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Lacking l. 102. Vita.; Bibliography: leaves 161-164.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145870</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conformally invariant quantum fields</title>
<link>https://hdl.handle.net/1721.1/145869</link>
<description>Conformally invariant quantum fields
Baez, John Carlos.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1986; Bibliography: leaves 90-91.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145869</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advocacy groups versus state power : creating global politics of the environment</title>
<link>https://hdl.handle.net/1721.1/145867</link>
<description>Advocacy groups versus state power : creating global politics of the environment
Ozeroff, Harry Cleveland.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1999; Includes bibliographical references (v. 2, p. 633-667).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145867</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operational quantum resource theories : unified framework and applications</title>
<link>https://hdl.handle.net/1721.1/145865</link>
<description>Operational quantum resource theories : unified framework and applications
Takagi, Ryuji (Physicist)
A major goal of quantum information science is to understand the relation between the properties of quantum features and the enhancements to information processing tasks enabled by them. In particular, precise quantitative descriptions of quantum phenomena have become increasingly important not only for theoretical interest but also from a practical point of view, as recent technological advances have provided access to systems on small scales, in which quantum effects play major roles. As a platform to offer such quantitative treatments, quantum resource theory has been developed. This is an operationally motivated framework, which systematically deals with quantification and manipulation of quantum effects by considering the quantities of interest as precious "resources" that cannot be freely created by the given sets of operations. This thesis develops quantum resource theories from two perspectives. The first part advances the framework of general resource theories, which encompass various types of quantum phenomena such as quantum entanglement, quantum superposition, and many others. We find that common structures universally shared by a wide class of resources can be extracted by employing operational viewpoints. Specifically, we consider fundamental operational tasks in quantum information theory -- state/channel discrimination, resource distillation/dilution, implementation of unitary evolution -- and establish quantitative connections between the resource contents and their operational capabilities. Our general results contribute to building a unified picture of quantum resources, allowing us to gain a deeper understanding of the characterization of quantum mechanics. The second half of the thesis applies the resource theory to specific settings such as continuous-variable systems, systems with conserved additive quantities, and communication via quantum channels. 
We show that operational perspectives offered by resource theories provide effective ways of quantifying the underlying resources and concise arguments to solve concrete problems of interest, suggesting the further potential of resource theory as a useful theoretical tool. The resource objects considered in this thesis span from quantum states to quantum measurements and channels, extending the consideration beyond static resource theories that have been a major focus in the field, and paving the way for the development of dynamic resource theories.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-229).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145865</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two-nucleon short-range correlations in light nuclei</title>
<link>https://hdl.handle.net/1721.1/145795</link>
<description>Two-nucleon short-range correlations in light nuclei
Cruz Torres, Reynier.
Understanding the nucleon-nucleon (NN) interaction is a fundamental task in nuclear physics, as NN-interaction models are a crucial input to modern nuclear structure calculations. While great progress has been made toward understanding this interaction, the available state-of-the-art models predict significantly different behaviors at short distances and high momenta (scale-and-scheme dependence), where two-nucleon Short-Range Correlations (SRCs) dominate the nuclear wave function. Thus, SRCs are a unique tool to constrain the NN interaction, and vice versa. SRCs are naturally occurring high-local-density NN pairs that, as a result of their short-distance (r ≲ 1 fm) repulsive interaction, fly apart with high momenta, hence populating momentum states above the Fermi level (k ≳ k[subscript F] ≈ 250 MeV/c). The study of SRCs also has significant implications for other fields, such as the astrophysics of neutron stars and the behavior of cold atomic gases. This thesis describes experimental and phenomenological studies of the short-distance / high-momentum structure of the NN interaction through the study of SRCs, and vice versa. Experimentally, I report the first measurement of the ³He and ³H(e, e'p) reactions in Hall A of the Thomas Jefferson National Accelerator Facility, in kinematics in which the measured cross sections should be sensitive to the underlying nucleon momentum distributions in the range 40 to 500 MeV/c. The resulting cross-section ratios and absolute cross sections were compared to momentum-distribution ratios and precise cross-section calculations, respectively. Phenomenologically, I report the generalization of the Contact Formalism to nuclear systems (the Generalized Contact Formalism, GCF), which exploits scale separation and universality to describe nucleons at short distances and high momenta.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 193-205).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145795</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of W and Z boson production cross sections in proton-proton collisions at √s = 5.02 TeV and √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/145794</link>
<description>Measurement of W and Z boson production cross sections in proton-proton collisions at √s = 5.02 TeV and √s = 13 TeV
Brandt, Stephanie Akemi.
Measurements of W and Z boson production cross sections in pp collisions at √s = 5.02 TeV and √s = 13 TeV are presented. Data were collected by the CMS experiment at the LHC during low-pileup data-taking periods in 2017. The corresponding integrated luminosities are 299.1 ± 5 pb⁻¹ (√s = 5.02 TeV) and 199.3 ± 4 pb⁻¹ (√s = 13 TeV), with an average number of pileup interactions μ = 3 (μ = 2). Cross sections and cross-section ratios are reported, with final states in the electron and muon channels.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 153-160).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145794</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental modulation of microbial communities</title>
<link>https://hdl.handle.net/1721.1/145793</link>
<description>Environmental modulation of microbial communities
Abreu, Clare I. (Clare Isabel)
Microbial communities are crucial to the health of all ecosystems, and their vast diversity constitutes the majority of species on Earth. A single sample of such a community immediately reveals both its complexity and the challenges to understanding its form and function. First, a sample provides a snapshot of a community's current state, but fails to explain how hundreds to thousands of species might emerge and remain together. Second, microbial communities are in constant flux, with species abundances changing in response to biotic interactions with each other as well as to abiotic environmental conditions, making it difficult to surmise which forces drive community dynamics. In this thesis, I describe the "bottom-up" approach my colleagues and I use to build microbial communities in the laboratory. By using tractable experimental microcosms and tuning particular abiotic parameters while holding others constant, we discover the effects of environmental changes on community structure. Furthermore, we find that we can verify simple theoretical predictions about how the environment changes interactions and, by extension, community composition. We employ and modify a simple phenomenological model, the Lotka-Volterra interspecific competition model, to make predictions about the effects of increasing mortality, increasing temperature, and environmental fluctuations. We verify these predictions with a diverse set of bacterial species engaged in pairwise competition, and use these pairwise results to successfully predict the outcomes of communities of three or more species. Despite the fact that we knew little about the species beyond simple attributes such as their growth rates, the model was generally successful, indicating universal behaviors in response to environmental changes. Our results can provide intuition for experimentalists tuning laboratory conditions, as well as for scientists studying the effect of the environment on field communities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 155-166).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145793</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring strongly interacting gapless states : cuprates, pair density waves, and fluctuating superconductivity</title>
<link>https://hdl.handle.net/1721.1/145792</link>
<description>Exploring strongly interacting gapless states : cuprates, pair density waves, and fluctuating superconductivity
Dai, Zhehao, Ph. D. Massachusetts Institute of Technology.
We study the physical properties of pair density waves (PDW) and fluctuating PDW, and use them to build an effective theory of the strongly interacting pseudogap phase in cuprate high-temperature superconductors. In Chapter 2, we study how the Fulde-Ferrell state, the simplest form of PDW, responds to incident light. The collective motion of the condensate plays a key role; gauge invariance guides us to the correct result. From Chapter 3 to Chapter 7, we construct a pseudogap metallic state by considering quantum fluctuating PDW. We analyze a recent scanning tunneling microscope (STM) discovery of period-8 density waves in the vortex halo of a d-wave superconductor. We put it in the context of the broader pseudogap phenomenology, and compare the experimental results with various PDW-driven models and a charge density wave (CDW) driven model. We propose experiments to distinguish these different models. We present the Bogoliubov bands of PDW, discuss fluctuating PDW from the general perspective of fluctuating superconductivity, and discuss how Bogoliubov bands evolve when the superconducting order parameter is fluctuating. We compare theoretical predictions with existing experiments on angle-resolved photoemission spectroscopy (ARPES), infrared conductivity, diamagnetism, and lattice symmetry breaking. The material presented here is based on Ref. [38, 41, 40]. Ref. [39] is not discussed in this thesis but was completed during my time at MIT.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-161).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145792</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel equivalence method for high fidelity hybrid stochastic-deterministic neutron transport simulations</title>
<link>https://hdl.handle.net/1721.1/145790</link>
<description>A novel equivalence method for high fidelity hybrid stochastic-deterministic neutron transport simulations
Giudicelli, Guillaume Louis.
With ever-increasing available computing resources, the traditional nuclear reactor physics computation schemes, which trade off spatial, angular, and energy resolution to achieve low-cost, highly tuned simulations, are being challenged. While existing schemes can reach few-percent accuracy for the current fleet of light water reactors, thanks to a plethora of astute engineering approximations, they cannot provide sufficient accuracy for evolutionary reactor designs with highly heterogeneous geometries. The decades-long process to develop and qualify these simulation tools is also out of step with the fast-paced development of innovative new reactor designs seeking to address the climate crisis. Enabled by those computing resources, high-fidelity Monte Carlo methods can easily tackle challenging geometries, but they lack the computational and algorithmic efficiency of deterministic methods. However, they are increasingly being used for group cross-section generation. Downstream, highly parallelized 3D deterministic transport can then use those cross sections to compute accurate solutions at the full-core scale. This hybrid computation scheme makes the most of both worlds to achieve fast and accurate reactor physics simulations. Among the few remaining approximations is neglecting the angular dependence of group cross sections, which leads to an over-estimation of resonant absorption rates, especially for the lower resonances of ²³⁸U. This thesis presents a novel equivalence method based on introducing discontinuities in the track angular fluxes, with a polar dependence of discontinuity factors, to preserve the polar dependence of the neutron currents and remove the self-shielding error. This new method is systematically benchmarked against the state-of-the-art method, SuPerHomogenization, in three different approaches to obtaining equivalence factors: a same-scale iterative approach, a multiscale approach, and a single-step non-iterative approach. 
For the iterative and multiscale approaches, both methods show remarkable agreement with a reference Monte Carlo solution on a wide array of test cases, from 2D pin cells to 3D full-core calculations. The self-shielding error is eliminated, significantly improving the predictive capabilities of the scheme for the distribution of ²³⁸U absorption in the core. A single-step non-iterative approach to obtaining equivalence factors is also pursued, and was shown to be adequate only with the novel discontinuity-factor-based method. This study is largely enabled by a significant optimization effort on the 3D deterministic neutron transport solver. By leveraging low-level parallelism through vectorization of the multi-group neutron transport equation, by increasing the memory locality of the method of characteristics implementation, and with a novel inter-domain communication algorithm enabling a near halving of memory requirements, the 3D full-core case can now be tackled with only 50 nodes on an industrial-sized computing cluster rather than the many thousands of nodes on a TOP20 supercomputer used previously. This thesis presents fully resolved solutions to the steady-state multi-group neutron transport equation for full-core 3D light water reactors, and these solutions are comparable to gold-standard continuous-energy Monte Carlo solutions.
Thesis: Ph. D. in Computational Nuclear Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145790</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Monte Carlo framework for nuclear data uncertainty propagation via the windowed multipole formalism</title>
<link>https://hdl.handle.net/1721.1/145789</link>
<description>A Monte Carlo framework for nuclear data uncertainty propagation via the windowed multipole formalism
Alhajri, Abdulla&#13;
            (Abdulla Abdulaziz)
A new framework has been developed that calculates the uncertainty in calculated quantities, such as K[subscript eff], reactivity coefficients, multigroup cross sections, and reaction rate ratios, that arise due to uncertainties in the underlying nuclear data. This framework relies on first-order uncertainty analysis using sensitivity methods. The major innovation in the proposed framework is the use of the windowed multipole formalism for calculating the sensitivities. The use of the windowed multipole formalism provides a natural, physics-inspired binning strategy for the sensitivity coefficients, while also aiding in the statistical convergence of the calculated sensitivity tallies. Additionally, our framework improves on existing methods by fully accounting for temperature effects. The proposed method allows for identifying exactly the resonances and parameters that are driving the uncertainty, and thus provides guidance to nuclear data evaluators and experimenters on how to reduce the uncertainty in the most efficient manner. Calculating the uncertainty requires two key pieces of information: the windowed multipole sensitivity coefficients and the windowed multipole covariance matrix. A sensitivity coefficient calculation algorithm based on the CLUTCH-FM methodology was implemented in OpenMC. Several methods for obtaining the windowed multipole covariance matrix from the resonance parameter covariance matrix were explored, and ultimately an approach based on random sampling was selected. Along the way, an analytical benchmark was developed for the purposes of validating the framework, as well as the implementation. This analytical benchmark consists of a solution to the forward and adjoint neutron transport equations. The windowed multipole covariance matrix was calculated for three isotopes: ²³⁸U, ¹⁵⁷Gd, and ²³Na.
The uncertainty in K[subscript eff] due to the uncertainty in the ²³⁸U and ¹⁵⁷Gd cross sections was calculated for two criticality safety benchmarks, and a beginning-of-life PWR model. The uncertainty of several reaction rate ratios due to the uncertainty in the ¹⁵⁷Gd cross section was also calculated for the PWR model. The resonances of ²³⁸U and ¹⁵⁷Gd that have the largest contribution to the uncertainty were identified for the criticality safety benchmarks.
Thesis: Ph. D. in Computational Nuclear Science &amp; Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 225-228).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145789</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The American jeweled watch industry</title>
<link>https://hdl.handle.net/1721.1/145758</link>
<description>The American jeweled watch industry
Measday, Walter Sparks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Engineering, 1955; Bibliography: leaves [317]-323.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145758</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the variety of special divisors and moduli,</title>
<link>https://hdl.handle.net/1721.1/145754</link>
<description>On the variety of special divisors and moduli,
Lax, R. F.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Bibliography: leaves 109-111.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145754</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A theoretical and experimental treatment of peristaltic pumping and its relation to ureteral function.</title>
<link>https://hdl.handle.net/1721.1/145752</link>
<description>A theoretical and experimental treatment of peristaltic pumping and its relation to ureteral function.
Weinberg, Steven Louis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1970; Bibliography: leaves 135-137.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145752</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed convection in vertical rod bundles</title>
<link>https://hdl.handle.net/1721.1/145747</link>
<description>Mixed convection in vertical rod bundles
Symolon, Paul D.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145747</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of the energy efficiency and economic viability of expanded magnesium utilization</title>
<link>https://hdl.handle.net/1721.1/145744</link>
<description>An analysis of the energy efficiency and economic viability of expanded magnesium utilization
Kenney, George Brian,
            1951-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1979; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145744</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cortico-thalamic interactions for head direction coding</title>
<link>https://hdl.handle.net/1721.1/145603</link>
<description>Cortico-thalamic interactions for head direction coding
van der Goes, Marie-Sophie Helene
The ability to orient oneself within an environment is critical for spatial navigation and thus for survival and reproductive success. Orienting depends on interactions between multiple brain areas carrying information about head and body movements as well as external sensory cues for reference. To adapt to changing environments and correct for error, head direction (HD) representations must be flexible. However, the circuit mechanisms and dynamics underlying how HD is modulated are largely unknown. Retrosplenial cortex (RSC) is a key region for spatial cognition and exhibits dense interconnection with visual and motor areas, the hippocampal formation, and diverse thalamic nuclei. Of these, the anterodorsal thalamus (ADn) is the major avenue by which HD is routed to cortex. In other cortical areas, cortico-thalamic loops have been shown to perform transformations on incoming sensory inputs for learning and behavioral output. In this thesis I test the hypothesis that interactions between RSC and ADn provide a circuit substrate for flexible HD computations. In the first part, I describe experiments using simultaneous tetrode recordings in behaving mice. Through neural decoding, I show that the RSC HD representation is synchronous with that of ADn, not only during the visually-guided HD reference update, but also in darkness. This coordination is supported by strong feedforward functional connectivity in the thalamo-cortical direction, suggesting that visually-guided adaptations likely emerge upstream of ADn, where angular velocity is integrated. In the second part, I confirm that ADn is devoid of recurrent excitatory connectivity, contrary to previously proposed attractor network architectures. My results suggest that inhibition, originating in the thalamic reticular nucleus, likely plays a fundamental role in the control of ADn HD.
Finally, I provide evidence, at the single cell level, of how long-range synaptic inputs are functionally targeted to specific dendritic domains in RSC. I speculate that this connectivity logic, together with different dendritic integration rules, may underlie high-dimensional tuning and combine visual and thalamic HD to represent global HD references. Altogether, this work suggests that ADn-RSC interactions alone cannot account for flexible HD coding, but are embedded in a network that constructs this spatial cognitive representation through transformation of multiple sensory signals.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145603</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Diverse Forms of Human and Machine&#13;
Intelligence</title>
<link>https://hdl.handle.net/1721.1/145602</link>
<description>Combining Diverse Forms of Human and Machine&#13;
Intelligence
Campero Nuñez, Andres
Artificial Intelligence algorithms never operate in isolation but are always part of broader processes that often involve humans, other computer algorithms, incentive structures, and interfaces which modulate the interaction between them. This thesis takes this perspective and considers these broader processes by studying specific combinations of three forms of intelligence: symbolic artificial intelligence, neural artificial intelligence, and human intelligence. First, I examine diverse forms of Neuro-Symbolic AI through three pipelines consisting respectively of neural perception with symbolic reasoning, symbolic inputs with neural reasoning, and a dual-integration that learns representations which are simultaneously symbolic and neural (Chapter 2); second, the AI research community as a Human-Symbolic combination through the presentation of a taxonomy of AI models, tasks, and datasets (Chapter 3); and third, a specific form of Human-AI intelligence, observing that a human in combination with GPT-3 can perform an HTML code generation task better than either humans or computers alone (Chapter 4).
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145602</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Optical Tools and Techniques Toward a Functional Connectomic Understanding of C. elegans</title>
<link>https://hdl.handle.net/1721.1/145601</link>
<description>Development of Optical Tools and Techniques Toward a Functional Connectomic Understanding of C. elegans
Orozco Cosio, Danielle Marie
Optical methods to study C. elegans behavior and neural activity are popular and well-established in the field, but many of the most commonly used optical tools leave much to be improved upon. In this research I sought to establish the utility of new optical tools in C. elegans to address the shortcomings of commonly used ones. First, I present the properties and demonstrate the functionality of an improved near-infrared negative calcium ion indicator, NIR-GECO2, for imaging olfactory-stimulated and optogenetically evoked neural activity. Next, I present the properties and demonstrate the functionality of a tool to strategically arrange GCaMP in clusters, STARC, to enable imaging of compartmentalized neural activity in regions dense with neural projections. Finally, I present a novel computational and RNA fluorescent in-situ hybridization-based method for unique identification of C. elegans neurons which allows for experiments to be performed in any C. elegans strain and can be flexibly applied to new applications and organisms. When appropriately combined with existing methods, these tools and techniques enable experiments that can push the field of C. elegans systems neuroscience towards a functional connectomic understanding of the neural control of the animal’s behavior.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145601</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interrogating the infant mind with fMRI</title>
<link>https://hdl.handle.net/1721.1/145600</link>
<description>Interrogating the infant mind with fMRI
Kosakowski, Heather Lynne
My dissertation research focused on uncovering the functional organization of higher-level perceptual and cognitive processes in infants’ brains. For two decades scientists have attempted to use functional magnetic resonance imaging (fMRI) to study infant perceptual and cognitive function. However, it remains unclear whether the basic structure of adult cortical organization is present in young infants. Chapter 2 describes innovations to improve infant fMRI techniques, including the design of MRI-safe infant headphones and a size-adaptive 32-channel infant head coil. These improvements resulted in higher quality data that were less corrupted by motion artifacts. Previous fMRI studies of higher-level visual cortex in awake infants observed preferential responses to faces [5,6] and scenes [5]. In Chapter 3 I show that with increased power, we can detect infants’ (2-9 months) face-selective responses in the fusiform face area (FFA), scene-selective responses in the parahippocampal place area (PPA), and body-selective responses in the extrastriate body area (EBA), and that these responses cannot be easily explained by the low-level visual features present in the stimuli. With these same data, I test different theories of cortical development and find evidence that face-selective responses in infant FFA, superior temporal sulcus (STS), and medial prefrontal cortex (MPFC) emerge in parallel, indicating that as soon as infants can detect and perceive a face, they also attribute social meaning to that face. My thesis then interrogates the origin of human-unique music and speech perception. In Chapter 5 I show that infants’ (2-11 weeks) cortical response to music and speech cannot be explained by the spectrotemporal modulation statistics of those two auditory categories. In sum, my dissertation research finds that signatures of cortical organization of higher-level perceptual processing in adults are already present in young infants.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145600</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Corrosion and Corrosion Prevention Technology: Revisiting the Fundamentals and Looking Forward</title>
<link>https://hdl.handle.net/1721.1/145599</link>
<description>Corrosion and Corrosion Prevention Technology: Revisiting the Fundamentals and Looking Forward
Effendy, Surya
Corrosion science is a well-established field with over a century's worth of research and development. This venerable state of affairs is in line with the importance of the topic, which is indispensable in a global economy so dependent on metals as structural and commercial goods. Nevertheless, to an external observer, it seems as if most major scientific developments in the field are in the distant past. The current state of affairs, in which empirical testing dominates corrosion research, only serves to strengthen this impression. The dominance of empirical approach reflects a certain state of mind in the community, namely, that corrosion science is a complex topic which does not lend itself well to quantitative analysis. This belief comes at a cost, since empirical findings tend to be difficult to generalize, resulting in a slow and resource-intensive development of corrosion-resistant and corrosion-prevention technologies. &#13;
    &#13;
From another perspective, given recent developments in theoretical, computational, and experimental techniques, it seems unusual that major theories in corrosion science have remained as they were several decades ago. Indeed, I believe that the time is ripe to reassess corrosion research in some depth, beginning with fundamental corrosion science, moving into corrosion prevention technology, and finally into the application of modern methods for corrosion research. Naturally, no single person can discuss the entirety of corrosion science and corrosion prevention technology, and this thesis represents only a small number of key insights that I have developed in the past four years. &#13;
    &#13;
I explore anode / cathode separation and differential aeration as examples of well-established ideas which prove more complex than they are conventionally understood in the field. I show that significant evidence cited in favor of anode / cathode separation can be explained using the more parsimonious localized corrosion hypothesis. Follow-up experimental work indicates that additional electrochemical phenomena dominate the response of both hypotheses, and establishes autocatalysis and electrophoresis as potential key unmodeled phenomena in the field. &#13;
    &#13;
I then delve into the design of anti-corrosion coatings, which suffers from the aforementioned excessive focus on empirical analysis. This section of the thesis is divided into two components, the former focusing on the formation and evolution of osmotic blisters, while the latter focuses on the interaction between the coating and the electrolyte it is immersed in. The former leads to fundamental, unifying insights on the failure of anti-corrosion coatings, with three dimensionless numbers governing the rupture, delamination, and visibility of blisters, while the latter leads to a useful tool which can be used to quantify species mobility and coating hydrophilicity using relatively simple measurements. The models presented in both works have been validated against experimental data, and promise to accelerate the pace of coating development in the field.&#13;
    &#13;
Finally, I return to basic considerations of data requirement in the design of anti-corrosion coatings and propose a physics-based kernel method for reducing the data requirement by several orders of magnitude. The method uses electrochemical impedance spectroscopy (EIS), a defect-on-coating equivalent circuit, machine learning, and high-throughput experimentation to allow the exploration of the large design space of anti-corrosion coatings. As an illustrative example of the application of EIS and the manipulation of equivalent circuits from first principles, I present a combined experimental / theoretical work on the flow behavior of redox flow batteries (RFBs), which reveals the self-similarity of fibrous carbon electrodes and leads to a useful tool for flow analyses.&#13;
    &#13;
To round off the topic of data requirement, I analyze fundamental concepts in EIS data inversion, which can be used to extract additional key features relevant to the prediction of coating failure. This theoretical work dissects the key assumptions made in EIS data inversion, identifies failures in said assumptions, and builds a more coherent algorithm which outperforms existing programs designed to fulfill the same task. With the tools and discoveries made in this thesis, I hope to embolden future corrosion scientists and engineers to foray beyond the current qualitative paradigm of corrosion science.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145599</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cause, Composition, and Structure in Language</title>
<link>https://hdl.handle.net/1721.1/145598</link>
<description>Cause, Composition, and Structure in Language
Qian, Peng
From everyday communication to exploring new thoughts through writing, humans use language in a remarkably flexible, robust, and creative way. In this thesis, I present three case studies supporting the overarching hypothesis that linguistic knowledge in the human mind can be understood as hierarchically-structured causal generative models, within which a repertoire of compositional inference motifs supports efficient inference. I begin with a targeted case study showing how native speakers follow principles of noisy-channel inference in resolving subject-verb agreement mismatches such as "The gift for the kids are hidden under the bed". Results suggest that native speakers' inferences reflect both prior expectations and structure-sensitive conditioning of error probabilities consistent with the statistics of the language production environment. Second, I develop a more open-ended inferential challenge, completing fragmentary linguistic inputs such as "____ published won ____." into well-formed sentences. I use large-scale neural language models to compare two classes of models on this task: the task-specific fine-tuning approach standard in AI and NLP, versus an inferential approach involving composition of two simple computational motifs; the inferential approach yields more human-like completions. Third, I show that incorporating hierarchical linguistic structure into one of these computational motifs, namely the auto-regressive word prediction task, yields improvements in neural language model performance on targeted evaluations of models’ grammatical capabilities. I conclude by suggesting future directions in understanding the form and content of these causal generative models of human language.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145598</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing System Models of Brain Processing via Integrative Benchmarking</title>
<link>https://hdl.handle.net/1721.1/145597</link>
<description>Advancing System Models of Brain Processing via Integrative Benchmarking
Schrimpf, Martin
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision or language. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. This thesis argues that it is time for our field to take the next step: build system models that capture neural mechanisms and supported behaviors within an entire domain of intelligence. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural and behavioral experiments in the primate visual ventral stream and the human language system, as well as direct neural (causal) perturbation experiments in inferotemporal cortex. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (∼50% explained variance), but also discover key relationships: models’ brain scores are predicted by their object categorization performance in vision (but only up to 70% ImageNet accuracy), and by their next-word prediction performance in language. The better models predict internal neural activity, the better they match human behavioral outputs, with architecture substantially contributing to brain-like representations. Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and predict primate temporal processing, as well as models that require only a fraction of supervised synaptic updates. Taken together, the integrative benchmarks and system models presented here are first steps to modeling the complexities of brain processing in entire domains of intelligence.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145597</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding computation through low-dimensional dynamics with recurrent neural networks</title>
<link>https://hdl.handle.net/1721.1/145596</link>
<description>Understanding computation through low-dimensional dynamics with recurrent neural networks
Pollock, Eli Barton
One way to understand the brain is in terms of the computations it performs that allow an organism to survive in the world. Models of cognition and behavior can be useful for describing the computations that might be performed, but often provide little insight into how they are realized in neural network models. Addressing this disconnect requires tools for better understanding neural representations and how they are used for cognitive computations. Here, I present work towards developing a dynamical systems framework for neural computation, using a recurrent neural network (RNN) model.&#13;
&#13;
To begin, I propose an analysis-by-synthesis method that uses local constraints on population activity to create RNNs that can solve a task. I demonstrate this method by creating networks that implement variations of a ring attractor, a classic model of representation in neural circuits. The first variation produces a specific drift-diffusion process over the attractor, similar to that proposed in a working memory model. As a second variation, I introduce an input that controls the speed of the network’s dynamics, creating a model for how contextual inputs can enable flexible behavior. Third, I explore ring attractor networks with dynamics of varying complexity.&#13;
&#13;
Next, I provide a more detailed analysis of the relationship between neural representations and network connectivity. Again using a network synthesis technique, I show that RNNs with a wide variety of synaptic weight configurations can produce nearly identical ring attractors. To define the theoretical boundaries of the space containing all such networks, I identify underlying sources of variability as well as common features. In doing so, I develop a framework for relating the geometry and dynamics of constrained network states to the features of network connectivity.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145596</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dopaminergic modulation of basolateral amygdala for fear extinction</title>
<link>https://hdl.handle.net/1721.1/145595</link>
<description>Dopaminergic modulation of basolateral amygdala for fear extinction
Flick, Katelyn
The extinction of conditioned fear responses is crucial for adaptive behavior, and its impairment is a hallmark of anxiety disorders such as post-traumatic stress disorder. During fear extinction learning, a new memory is formed in basolateral amygdala (BLA) reward neurons, which inhibits the BLA fear memory. However, the neurological nature of the teaching signal that instructs the formation of fear extinction memory in BLA reward neurons is unknown. Recent work has identified a population of VTA dopamine neurons that signal shock omission during extinction, but a downstream target for that activity has not been identified. In this thesis, I demonstrate the role of dopamine signaling in driving fear extinction in distinct BLA neuronal populations. We identify sources of dopamine and modes of dopaminergic action in the BLA by showing that BLA fear and reward neuronal populations receive topographically divergent inputs from VTA dopamine (DA) neurons and differentially express dopamine receptors. Optogenetic activation of the VTA DA projections to BLA reward and fear neurons accelerated or impaired fear extinction, respectively. We found that dopamine D1 receptor (DrD1) expression in BLA reward cells is necessary for fear extinction. Furthermore, overexpression or optogenetic activation of DrD1 in BLA reward neurons accelerates fear extinction. Lastly, we record dopamine activity in the BLA, find that DA activity is time-locked to freezing cessation in BLA reward cells, and determine that this activity is correlated with the change in behavior observed during extinction. Together, this thesis demonstrates that dopamine activity bidirectionally drives fear extinction through distinct patterns of activity at BLA fear and extinction neurons.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145595</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formation of β-lactams and related compounds. II. Alkyl phosphates and thiophosphates</title>
<link>https://hdl.handle.net/1721.1/145449</link>
<description>Formation of β-lactams and related compounds. II. Alkyl phosphates and thiophosphates
Roth, Roy William.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1955; Vita.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145449</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic studies of active site mutants of mercuric reductase : stereochemical studies of a fluoroacetate halidohydrolase</title>
<link>https://hdl.handle.net/1721.1/145445</link>
<description>Mechanistic studies of active site mutants of mercuric reductase : stereochemical studies of a fluoroacetate halidohydrolase
Au, Karin G.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1986; Bibliography: leaves 175-177.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145445</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>DEPLESYN : a three dimensional, discontinuous synthesis, diffusion-depletion code.</title>
<link>https://hdl.handle.net/1721.1/145443</link>
<description>DEPLESYN : a three dimensional, discontinuous synthesis, diffusion-depletion code.
Chin, Ronald Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145443</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aero-thermal-mechanical interactions in ultra high-speed micro gas turbines</title>
<link>https://hdl.handle.net/1721.1/145215</link>
<description>Aero-thermal-mechanical interactions in ultra high-speed micro gas turbines
Tiralap, Aniwat.
Aero-thermal induced mechanical response of engine components in an ultra high-speed micro gas turbine engine system is assessed. Scaling down gas turbine engines for high performance requirements dictates substantial thermal-induced effects on engine operation due to the high temperature gradients relative to those in conventional large gas turbine engines. Experiments indicate that sustainable operation is limited by the mechanical response of the shaft-bearing housing system. It is hypothesized that this is due to thermal-induced mechanical deformation of the shaft-bearing housing that results in bearing clearance variation that differs from the design intent. An unsteady CFD conjugate heat transfer computation of the flow and temperature distribution in the engine system is first implemented; this is followed by determining the corresponding mechanical deformation of engine components based on finite element analysis. The computed result shows that at the beginning of the engine start-up process, radial expansion of the shaft is larger than that of the bearing housing, resulting in a smaller bearing clearance. Toward steady-state operation, a larger bearing clearance is observed. The computed results and experimental observation are in agreement, thus confirming the hypothesis. The key controlling non-dimensional parameters characterizing the aero-thermal-mechanical interaction and response are identified using a reduced order model that yields thermal-induced mechanical deformation in agreement with the unsteady computations. For geometrically similar engine systems, the controlling thermal and structural parameters consist of: (1) the shaft fin parameter, (2) the housing fin parameter, (3) the ratio of heat diffusivity of the housing to that of the shaft, (4) three cooling flow parameters, and (5) the ratio of the coefficient of thermal expansion of the housing to that of the shaft.
The non-dimensional parameters serve as a guideline for developing strategies for controlling bearing clearance within an acceptable margin, including selecting shaft and housing materials with appropriate properties as well as tailoring the cooling flow. An approximate scaling rule for thermal-induced shaft-bearing housing clearance variation in engines of various sizes is formulated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Supervised by Choon Sooi Tan. Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 123-125).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145215</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Digital Liminality : computational tools for 'beyond average' creative thinking</title>
<link>https://hdl.handle.net/1721.1/145213</link>
<description>Towards Digital Liminality : computational tools for 'beyond average' creative thinking
Mothersill, Philippa
            (Philippa Jane).
Renowned designer Kenya Hara writes: "Creativity is to discover a question that has never been asked". These questions are commonly discovered early in the design process; an often ambiguous and liminal experience where new information is explored and considered in non-obvious ways to reveal unexpected associations. 'Intelligent' digital technologies such as machine learning are increasingly employed in tools used in the early phases of the design process. These computational techniques undeniably surpass humans at quickly generating numerous designs and calculating 'optimised' responses, but their average-driven approaches are limited when it comes to embracing the serendipity that can inspire creative breakthroughs. How can we develop digital tools to augment this liminal period of the creative process and help designers discover unexpected ideas?; This dissertation explores this question through three new 'Beyond Average' systems that integrate ambiguity and serendipity into digitally-enabled design tools: the Reframe creative prompt tool that juxtaposes language from a designer's notes in surprising ways to provoke new associations between concepts in their project; the Looking Sideways inspiration exploration tool that presents a diverse range of content for each search query and suggests connections between the concepts discovered; and the digitally-augmented Design Daydreams ideation table and post-it note that seamlessly connects the physical and digital content that designers use in their creative processes. These systems were informed by field research and interviews with expert designers and their impact on the design process was evaluated through several interventions in which creative practitioners, entrepreneurs and technologists used the Beyond Average tools to inspire new ideas for their projects. 
These interventions highlighted that the creative disruptions these tools provoke cannot exist alone; they must be situated in a larger creative process that accommodates serendipitous interjections and unanticipated ideas. Overall, this research demonstrates how embedding liminality into digital tools creates a space within the design process for serendipitous inspiration and helps designers apply these innovative ideas, pointing towards new questions to consider as we design the future of our creative work.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2020; "February 2020." Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 102-111).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145213</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and evaluation of a reaction-force series elastic actuator configurable as biomimetic powered ankle and knee prostheses</title>
<link>https://hdl.handle.net/1721.1/145212</link>
<description>Design and evaluation of a reaction-force series elastic actuator configurable as biomimetic powered ankle and knee prostheses
Carney, Matthew Eli.
All commercial powered leg prostheses have been, up to this point, one-size-fits-all designs, and of those existing systems, none has yet managed to fully achieve the biological walking range of motion, torque, and power. Yet no human body is the same as the next. A configurable prosthesis potentially offers improvements in battery run-time, prosthesis mass, acoustic noise, user comfort, and even enables sport and economy modes within the same fundamental hardware. In this thesis, a reaction-force, series-elastic actuator (RFSEA) is presented that is capable of achieving biomimetic ankle and knee kinetics and kinematics during level-ground walking across a range of body masses, heights and walking styles. The platform is configurable to inertial load by swapping a simple-to-manufacture flat-plate composite spring that allows tuning the actuator dynamics to match different user requirements. The RFSEA also comprises a high-torque, high-pole-count drone motor that directly drives a ball screw with a tunable, low-gear-ratio lead. The design enables a high dynamic range, providing a closed-loop, torque-controlled joint that can demonstrate arbitrary levels of impedance. This control fidelity is important to support smooth control in free-space and high-inertial output conditions, such as the swing and late-stance phases of walking, respectively. A simulation framework is presented that defines mechatronic design specifications for the motor, spring, and gear-reduction components. The optimization procedure clamps output joint dynamics to subject-specific biological gait data, and searches for minimum electric energy solutions across the motor, gear-reduction and spring component space. A second optimization procedure then searches for optimal linkage and spring geometry to best approach the design targets as constrained by the availability of discrete drivetrain components. 
In this thesis, ankle and knee designs are presented with optimized components using biological joint data from a non-amputee subject walking at 2.0 m/s with a body mass of 90 kg. For these designed biomimetic joints, system specifications are verified using bench test evaluations and preliminary human gait studies.; With a minimum viable actuator mass of 1.4 kg, the platform has a nominal torque control bandwidth of 6 Hz at 82 Nm, a repeated peak torque capacity of 175 Nm, peak demonstrated power over 400 W (with theoretical limits over 1 kW), a 110-degree range of motion, as well as torque and power densities of 125 Nm/kg and 286 W/kg, respectively. Configured as an ankle-foot prosthesis, there are 35 degrees of dorsiflexion and 75 degrees of plantar flexion, and as a knee the full 110 degrees of flexion are available to enable activities on varied terrain such as stairs and inclines. Walking dynamics are evaluated with a finite state-machine ankle controller piloted by N=3 subjects with below-knee amputation walking at 1.5 m/s on an instrumented treadmill and one subject walking on stairs. In preliminary experiments, net positive work of 0.2 J/kg, peak joint torque of 1.5 Nm/kg, and peak mechanical power of 4.3 W/kg all fall within one standard deviation of the intact-limb biological mean. Configured as an ankle-foot prosthesis, the system mass is 2.2 kg including battery and electronics, and as a knee the system mass is 1.6 kg, making the RFSEA platform the lightest, most adaptable, and most biomimetic leg system yet published.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-190).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145212</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural language input for a computer problem solving system</title>
<link>https://hdl.handle.net/1721.1/145210</link>
<description>Natural language input for a computer problem solving system
Bobrow, Daniel G.
            (Daniel Gureasko),
            1935-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1964; Vita.; Includes bibliographical references (leaves 124-127).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145210</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and computational study of electromagnetically driven flow due to the passage of a current between two electrodes</title>
<link>https://hdl.handle.net/1721.1/145209</link>
<description>Experimental and computational study of electromagnetically driven flow due to the passage of a current between two electrodes
Kang, Tae Wook.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1987; Bibliography: leaves 286-291.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145209</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of nonrecursive approximations to digital filters with discontinuous frequency responses.</title>
<link>https://hdl.handle.net/1721.1/145208</link>
<description>Design of nonrecursive approximations to digital filters with discontinuous frequency responses.
Siegel, Joseph,
            1979-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Vita.; Bibliography: leaves 167-170.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145208</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Inference and its Application to Network Localization and Navigation</title>
<link>https://hdl.handle.net/1721.1/145185</link>
<description>Decentralized Inference and its Application to Network Localization and Navigation
Liu, Zhenyu
Decentralized inference is important for complex networked systems and enables numerous applications such as network localization and navigation (NLN), Internet-of-Things (IoT), and smart cities. This thesis establishes a theoretical foundation of decentralized inference for networks with limited sensing and communication capabilities. In the considered network, each node aims to infer, in real time, an evolving state based on local observations and on messages exchanged with its neighbors. The objectives of the thesis include: (i) designing message encoding strategies that maximize inference accuracy; (ii) establishing connections between information- and estimation-theoretical quantities; and (iii) characterizing the impact of the sensing and communication capabilities of the network on the inference accuracy.&#13;
&#13;
First, we investigate a system of two nodes connected via a Gaussian channel. For such a system, we design a real-time strategy for generating the encoded messages exchanged between the nodes and derive conditions under which such a strategy provides optimal inference accuracy. Building on an information-theoretic perspective of Kalman–Bucy filtering in centralized settings, we derive a relationship between Shannon information and Fisher information for decentralized inference. Then, based on results for two-node systems, we characterize the behavior of decentralized inference error in multi-node networks with general channel models. We establish both necessary and sufficient conditions on the sensing and communication capabilities of the network for the boundedness of the mean-square error over time. We show that, in addition to Shannon capacity, anytime capacity plays a critical role in characterizing the impact of the network’s communication capability on the inference accuracy.&#13;
&#13;
This thesis deepens the understanding of decentralized inference in complex networked systems; uncovers connections among estimation, information, and control theories; and provides guidelines for designing decentralized inference algorithms and network operation strategies in applications such as NLN and IoT.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145185</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Validation of Melt Probe Models for the Exploration of Ocean Worlds</title>
<link>https://hdl.handle.net/1721.1/145182</link>
<description>Experimental Validation of Melt Probe Models for the Exploration of Ocean Worlds
do Vale Pereira, Paula
Ocean Worlds, where large volumes of water may exist under a layer of ice, are strong candidates for the first discovery of extraterrestrial life. Jupiter’s moon Europa is currently believed to have at least twice as much water as Earth. A key remaining challenge is developing a probe that can traverse tens of kilometers of ice with temperatures ranging from ~100 K to 273 K to reach the oceans of Europa. Initial steps have been taken to develop analytical and numerical models of the thermal and physical dynamics of ice penetrators in cryogenic environments, but experimental validation of these models has been limited. &#13;
&#13;
This thesis presents the design and the experimental results of probes that use melting as the descent mechanism in cryogenic (79 K) vacuum ice. The probes are designed to monitor power, temperature, and descent depth and speed. The melt probes are initially operated in vacuum with internal tether spools, allowing experimental confirmation that, under these conditions, water vapor refreezes on contact with ice to close the hole behind the probe and create a pressurized pocket with liquid water around the probe. These sub-scale melt probes are tested with power input levels ranging from 496 W to 1135 W, and achieve descent speeds between 5.3 cm/h and 59 cm/h, with total travel in ice ranging from 87 cm to 202 cm. &#13;
&#13;
The relationship between the empirical total power and descent speed is then analyzed using both analytic and high-fidelity numerical models. The numerical models enable the separation of inefficiencies in probe performance from inaccuracies in the analytical models, resulting in a better understanding of the range of applicability of the classical models and also clearly demonstrating the types and importance of thermal waste in melt probe designs. Fundamentally, this set of validated models shows that the performance of cryobots that are typical of flight concepts (high-aspect-ratio and fast-moving) can be predicted by analytical models to within 5% error. This error is significantly smaller than other environmental uncertainties of Ocean Worlds, such as the thickness and composition of their icy shells.&#13;
&#13;
In order to determine the feasibility of a mission to Europa’s ocean, it is critical to be able to calculate the duration of the cryobot’s journey through the ice shell. Several example cryobot designs are considered and the time-to-ocean is calculated for each. Using a Monte Carlo simulation, the fastest cryobot design considered presents a median time-to-ocean of 3.4 years and a 90th percentile time of 6.8 years, while the slowest of the designs considered completes the journey with a median time of 6.4 years and a 90th percentile time of 13.8 years.&#13;
&#13;
Finally, these tools are applied to evaluate a selection of different cryobot system architectures focusing on two key figures of merit: time to ocean and payload volume. A roadmap for the development of ice descent technologies is then recommended that can help make a mission to Europa’s ocean viable.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145182</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination among proteins, lipids and water in membrane fusion and fission probed by solid-state NMR</title>
<link>https://hdl.handle.net/1721.1/145176</link>
<description>Coordination among proteins, lipids and water in membrane fusion and fission probed by solid-state NMR
Sutherland, Madeleine
For enveloped viruses such as HIV, influenza, and coronaviruses to enter host cells, the viral and cell envelopes must be fused together. Then, to exit the cell, a new virus particle will pinch off a piece of the host cell membrane. The two membranes must be physically separated to release the virus particle. This process is called “scission” in the context of viral exit and “fission” when simply discussing membrane division.&#13;
&#13;
Viral membrane remodeling proteins catalyze membrane fusion and fission to bring about viral entry and exit from host cells – vital processes in the life cycle of enveloped viruses. The exact mechanism by which membrane remodeling proteins catalyze membrane fusion and fission is not yet fully understood. The membrane remodeling proteins may take any combination of the following actions to facilitate fusion and fission:&#13;
1. Directly altering the local curvature of the membrane&#13;
2. Altering the line tension at Lo/Ld phase boundaries&#13;
3. Forming protein clusters to act collectively on the membrane&#13;
4. Altering the local composition of the membrane at the site of fusion or fission&#13;
5. Physically disrupting the lipid assemblies, i.e. by inserting protein domains into the membrane and creating membrane defects&#13;
&#13;
In this work, we examine the extent to which HIV’s membrane fusion protein gp41, and Influenza A’s membrane fission protein M2, utilize these five processes to carry out their functions. We use solid-state NMR techniques to probe intermolecular interactions along those lines, with an emphasis on curvature and clustering. The contributions of both proteins and lipids to these processes will be examined. The goal is to better understand the mechanisms of HIV entry and influenza release, which can hopefully guide the development of better therapeutics and vaccines.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145176</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging HSF1 chemical-genetic tools to elucidate mechanisms of proteostasis</title>
<link>https://hdl.handle.net/1721.1/145175</link>
<description>Leveraging HSF1 chemical-genetic tools to elucidate mechanisms of proteostasis
Sebastian, Rebecca Michelle
Maintenance of protein function depends on an extensive network of chaperones, quality control factors, and trafficking mechanisms collectively termed the proteostasis network. This network assists folding and maintains optimal protein localization and concentration. While the components and organization of this network are generally well-established, our understanding of how protein folding problems are identified, how the network components integrate to successfully address challenges, and which components of the proteostasis network can solve what types of biophysical issues remains immature. Cytosolic proteostasis is dynamically regulated by the master transcriptional regulator Heat Shock Factor 1 (HSF1), which induces expression of cytosolic chaperones in response to proteotoxic stressors. Previous work in our lab enabled precision regulation of HSF1 activity through constitutively active or constitutively dominant-negative variants. My graduate work has focused on applying these HSF1 chemical-genetic tools to approach various problems in the field of metazoan proteostasis.&#13;
&#13;
First, I examined the interplay between stress response pathways regulating chaperone expression and post-translational modification by the small ubiquitin-like modifier SUMO2/3. This work led to the identification of a critical role for communication between the heat shock response and protein SUMOylation throughout the stress response. Moreover, I identify a novel role for SUMO2/3 conjugation as a rapid response that prevents protein aggregation during initial proteotoxic stress. Next, I used deep-mutational scanning to identify HSF1 as a critical non-oncogenic component capable of tuning the fitness of dominant-negative mutations within the oncogene tumor protein 53 (TP53). Within the context of basal temperatures, the impact of HSF1 activation was beneficial and supportive of destabilizing mutations within the DNA-binding domain. I also discovered the unexpectedly opposing impacts of HSF1 on client mutational tolerance at permissive versus restrictive temperatures, revealing that the role of HSF1 in supporting or inhibiting TP53 mutations is environment dependent. Altogether, my results highlight the multifaceted roles of HSF1 in addressing client proteostasis.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145175</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional and Pathological States of the Protein Tau Investigated with Solid-State NMR</title>
<link>https://hdl.handle.net/1721.1/145173</link>
<description>Functional and Pathological States of the Protein Tau Investigated with Solid-State NMR
Dregni, Aurelio J.
The microtubule-associated protein tau is expressed at high levels in human brain, and its binding and stabilization of microtubules is thought to play key roles in the maintenance of neurons. However, tau is found to pathologically aggregate into specific fibrillar structures in a number of neurodegenerative diseases, such as Alzheimer’s disease, which is the most common neurodegenerative disease in humans. Each disease is characterized by a specific fibril structure, conserved between patients of the same disease, and distinct between diseases. These pathological fibrillar aggregates of tau appear to spread in a prion-like manner throughout connected pathways in the brain and the extent of tau aggregates in brain is well correlated with observed neurodegeneration. Together, these findings indicate that these distinct tau structures are in some way fundamentally tied to each disease, and they may be part of the causative chain of each disease.&#13;
&#13;
No mechanistic treatment for these tauopathies is currently available, prompting detailed study of the tau protein itself. It is unclear how the protein tau can aggregate into so many distinct but conserved structures: knowledge of how the protein can be made to aggregate into specific structures in vitro would reveal what specific insults may cause aggregation in each disease. Recently, the rigid cores of the pathological fibrils from human brain have been well characterized by cryo-electron microscopy; however, little is known about the “fuzzy coat”, the disordered remainder of the protein that surrounds the small fibril core. The structure and dynamics of the disordered fuzzy coat may play key roles in modulating interactions between these pathological fibrils and the cellular milieu, as well as any drug molecule applied to the fibrils. In addition, due to mixing of two protein isoforms, the Alzheimer’s disease fibril structure has primary sequence heterogeneity immediately outside of the rigid core. How these isoforms are mixed and the effects of this mixing are poorly understood, and these may play key roles in making AD the most common neurodegenerative disease in humans. Finally, the mechanism by which healthy tau binds to and stabilizes microtubules is poorly understood, and a detailed understanding of tau’s functional role is required if we are to avoid interfering with its normal function.&#13;
&#13;
Solid-state NMR is uniquely suited to studies of the protein tau in its heterogeneously dynamic states. In this thesis, I describe in detail the theory behind magic-angle spinning solid state NMR, and then present five manuscripts in which I have developed and applied solid-state NMR experiments to answer the specific biological questions I pose above by studying a diverse array of states of the protein tau. I hope that research built on these studies may someday help to design mechanistic therapies that can slow or even prevent these terrible and yet very common neurodegenerative diseases.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145173</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and Biochemical Characterization of Glycyl Radical Enzymes Abundant in Mammalian Gut Microbiota</title>
<link>https://hdl.handle.net/1721.1/145172</link>
<description>Structural and Biochemical Characterization of Glycyl Radical Enzymes Abundant in Mammalian Gut Microbiota
Backman, Lindsey Richelle Fernandez
The glycyl radical enzyme (GRE) superfamily uses a post-translationally installed glycyl radical to catalyze difficult chemical transformations involved in a variety of anaerobic microbial pathways. In Chapter 1, I will provide an overview of the GRE superfamily and the fascinating reactions they are known to catalyze thus far. Although GREs are one of the most abundant enzyme families in the human gut microbiome, a majority of these enzymes still remain uncharacterized, limiting our understanding of how they function at the molecular level. To address this need, Chapters 2 and 3 of my dissertation focus on structurally and biochemically characterizing two newly identified gut microbial GREs, hydroxyproline dehydratase (HypD) and isethionate sulfite-lyase (IslA), from the pathogen Clostridioides difficile and the disease-associated bacterium Bilophila wadsworthia, respectively. Overall, our work on HypD and IslA enabled us to propose enzymatic mechanisms for these new GREs; we can now take structure-based approaches to designing enzyme inhibitors that could lead to treatments for bacterial infections and associated diseases. Next, in Chapters 4 and 5 of my thesis, I present structural, biophysical, and biochemical experiments that provided insight into the mechanism by which the most abundant GRE in the human gut microbiome, pyruvate formate-lyase (PFL), uses the spare-part protein YfiD to restore activity upon oxygen damage in Escherichia coli. Lastly, in my concluding chapter, I provide my perspective on how we can use newly developed computational tools, such as AlphaFold and bioinformatic approaches, to prioritize and design targeted experiments that will hopefully reveal exciting new discoveries about the GRE superfamily. 
Overall, my research on GREs and their associated proteins will not only add to the field’s knowledge of these critical metabolic enzymes, many of which could be antibiotic targets and have promising environmental and industrial applications, but will also expand our basic understanding of the biochemical reactions that govern bacterial-host interactions in gut microbiomes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145172</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Methods for Studying Phonon Dynamics</title>
<link>https://hdl.handle.net/1721.1/145167</link>
<description>Computational Methods for Studying Phonon Dynamics
Rohskopf, Andrew
In solids and molecules, atoms vibrate about their respective equilibrium positions at finite temperature. This thermal motion can be understood as a superposition of the structure’s normal modes, often termed phonons, which are collective motions of atoms vibrating at certain frequencies. These vibrational modes play important roles in a variety of material properties and physical phenomena such as chemical reactions, phase transitions, mass/ion diffusivity, thermal conductivity, electrical conductivity, and any property which is affected by atomic vibration. It is therefore important to study the dynamics and energy transfer processes of normal modes in a variety of systems, so that we may better understand and engineer a wide variety of phenomena. Traditional methods for studying molecular vibrations do not represent a complete framework for studying phonon transport in all solids, however, because of two problems.&#13;
&#13;
Problem 1: Traditional interatomic potentials cannot accurately model phonon/vibrational properties.&#13;
&#13;
Problem 2: The traditional physical picture of heat transfer by phonons is limited to crystalline solids.&#13;
&#13;
We formulate five questions whose investigation will help solve these problems. First, we investigate Question 1: Why do traditional potentials fail for phonons? Answering this question is the first step in solving Problem 1. We then apply the knowledge gained here to answer Question 2: How can we make fast &amp; accurate potentials for phonons? While the answers to these first two questions represent a major contribution to Problem 1, they do not provide a physical picture. To solve Problem 2, we begin with Question 3: How do modes interact? Here, we investigate a physical model that allows phonon interactions to be simulated. From there, we propose Question 4: How do modes transport heat? Here we seek an understanding of how modes transport heat in disordered solids. Finally, using the knowledge gained here, we investigate Question 5: What determines temperature-dependent thermal conductivity behavior in disordered solids? Traditional pictures of phonon transport based on kinetic theory have difficulty explaining the constant or even increasing thermal conductivity as a function of temperature for disordered solids, so explaining this phenomenon is the first step toward realizing a new physical picture.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145167</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Density-shift Immunomagnetic Separation for Pathogen Retrieval from Complex Media</title>
<link>https://hdl.handle.net/1721.1/145166</link>
<description>Density-shift Immunomagnetic Separation for Pathogen Retrieval from Complex Media
Strawser, Mary Claire
In industries like food production, pathogenic bacteria detection is required for producers to confidently send products to market without the risk of costly product recalls. However, rapid, reliable detection remains an unmet need. Conventional methods for detecting bacteria rely on time-consuming culture, and emerging rapid tests struggle to achieve high sensitivity when the sample has a background of nontarget cells, proteins, and detritus. Reducing heterogeneity, removing inhibitory substances, and concentrating target cells by sample preparation could enable more rapid detection of pathogens from complex media, such as food samples. &#13;
&#13;
This thesis describes a novel immunomagnetic bead-based sample preparation method called density-shift immunomagnetic separation (DIMS). The DIMS method uses beads to capture target pathogenic bacteria and spatially separate them from both the background components of the sample and the unbound magnetic beads by centrifugation through a density media bi-layer, and then the spatially separated target bacteria are magnetically concentrated to a surface where they can be directly imaged. The purified, concentrated target bacteria can be analyzed in downstream analyses. In the first part of this thesis, the theoretical framework of DIMS is described and criteria are developed to select an immunomagnetic bead and density media to enact DIMS. In the second part of the thesis, procedures were developed to implement DIMS in a laboratory setting, using commercially available components. In addition, models for centrifugal separation and magnetic concentration were described. In the third part of the thesis, DIMS is demonstrated in a bead-bead system that models viable but non-culturable bacteria behavior. Capture of target particles was achieved with a probability of detection 50% limit of detection (&#119871;&#119874;&#119863;50%) of 10 target particles per milliliter. In the fourth part of the thesis, we implemented DIMS in a system with Salmonella enterica in buffer and simulated spinach rinse contaminated with Escherichia coli. DIMS in buffer with S. enterica yielded an &#119871;&#119874;&#119863;50% of 90 CFU/mL. We demonstrated multiple detection techniques: The captured bacteria were plated, fluorescently imaged, and observed during miniature culture to identify colony formation at a single cell level. Using DIMS, pathogenic bacteria can be isolated and observed in less than two hours, a significant improvement over FDA-regulated testing that can take up to seven days to return a positive result.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145166</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Against the Grain: A History and Policy Analysis of Rice, Water and the Edible Landscape in Egypt</title>
<link>https://hdl.handle.net/1721.1/145152</link>
<description>Against the Grain: A History and Policy Analysis of Rice, Water and the Edible Landscape in Egypt
Lasheen, Eman Abdelhalim
Water stress is putting enormous pressure on agriculture worldwide. With the rise of ‘more crop per drop’ approaches to agriculture, countries are crafting policies that aim to balance irrigation and food production. However, these policies do not always account for the larger socioeconomic and ecological implications they help produce. In this dissertation, I explore the history of rice cultivation in Egypt and its regulatory context under water stress conditions as a tool for legitimizing and promoting specific claims to water use over others. I use the case study of rice as a lens to examine the role of power in agri-food planning and water rationalization. Using mixed qualitative and historical research methods, I trace four historical vignettes that showcase the interplay between rice cultivation and shifting local, regional, and international power modes. Findings of this dissertation indicate that in addition to limited water resources, the making of Egypt’s edible landscape is a function of shifting power dynamics and political interests, with adverse ecological and socio-economic implications. These interests vary from purely calorific to more complex political and economic ones, shaping ‘the edible landscape’ along the way. I argue that this edible landscape is also constantly reshaped through alternative power dynamics, represented in this case by informal collaborations between rice farmers and rice researchers as intermediary agents with interest in preserving the nation’s riziculture.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145152</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning State and Action Abstractions for Effective and Efficient Planning</title>
<link>https://hdl.handle.net/1721.1/145150</link>
<description>Learning State and Action Abstractions for Effective and Efficient Planning
Chitnis, Rohan
An autonomous agent should make good decisions quickly. These two considerations --- effectiveness and efficiency --- are especially important, and often competing, when an agent plans to make decisions sequentially in long-horizon tasks. Unfortunately, planning directly in the state and action spaces of a task is intractable for many tasks of interest. Abstractions offer a mechanism for overcoming this intractability, allowing the agent to reason at a higher level about the most salient aspects of a task. In this thesis, we develop novel frameworks for learning state and action abstractions that are optimized for both effective and efficient planning. Most generally, state and action abstractions are arbitrary transformations of the state and action spaces of the given planning problem; we focus on task-specific abstractions that leverage the structure of a given task (or family of tasks) to make planning efficient. Throughout the chapters, we show how to learn neuro-symbolic abstractions for bilevel planning; present a method for learning to generate context-specific abstractions of Markov decision processes; formalize and give a tractable algorithm for reasoning efficiently about relevant exogenous processes in a Markov decision process; and introduce a powerful and general mechanism for planning in large problem instances containing many objects. We demonstrate across both classical and robotics planning tasks, using a wide variety of planners, that the methods we present optimize a tradeoff between planning effectively and planning efficiently.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145150</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty-Based Design Optimization and Decision Options for Responsive Maneuvering of Reconfigurable Satellite Constellations</title>
<link>https://hdl.handle.net/1721.1/145138</link>
<description>Uncertainty-Based Design Optimization and Decision Options for Responsive Maneuvering of Reconfigurable Satellite Constellations
Lowey, Charlotte E.
There are many time-sensitive mission applications for persistent satellite coverage, including dynamic and unpredictable events such as natural disasters, oil spills, extreme weather events, or geopolitical conflicts, which may progress rapidly and require frequently-updated information to coordinate the ground response. Reconfigurable satellite constellations can provide on-demand regional coverage by maneuvering orbits to focus passes over the area of interest. In contrast, traditional satellite constellations cannot maneuver to pass over specific ground locations, meaning that achieving persistent coverage spanning all possible locations of interest globally results in a requirement for thousands of satellites. This would present prohibitive costs for many applications, as well as contributing to worsening issues of space traffic management and congestion in Low Earth Orbit (LEO).&#13;
&#13;
Incorporating reconfigurability into constellation design allows for responsive maneuvering of satellites into repeating ground tracks (RGTs) over a location of interest, simultaneously reducing the required constellation size by improving the utilization of individual satellites and providing flexibility in the achievable ground coverage. Past work on reconfigurable constellations (ReCon) demonstrated average cost savings of 20-70% compared to iso-performance static constellations, although the complexity of the solution space for the design optimization process limited the maximum size of constellations that could be evaluated.&#13;
&#13;
In this thesis, a probabilistic performance metric is developed to compare constellation designs, adopting principles of reliability-based design optimization to quantify the confidence level that reconfigurable designs will outperform iso-cost static alternatives and by what margin of performance. The results show that 74.2% of reconfigurable designs outperform iso-cost static designs with a confidence level of 90% or higher, and with a margin of at least 10% improvement in the level of performance achieved. Computational intensity of the model presents the major constraint upon the size and complexity of simulation cases that may be modelled, so variance reduction techniques are applied to lower the standard error of mean performance in the output, allowing for a reduction in optimization size and runtime while maintaining the same level of error in the predicted results. Decision options for the operational phase of a reconfigurable constellation are presented and assessed to characterize how satellite operators must weigh mission priorities to evaluate trade-offs between propellant conservation and improved coverage of high-value targets.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145138</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Aeromagnetic Compensation Models for Airborne Magnetic Anomaly Navigation</title>
<link>https://hdl.handle.net/1721.1/145137</link>
<description>Advanced Aeromagnetic Compensation Models for Airborne Magnetic Anomaly Navigation
Gnadt, Albert Reuben
Using the earth's magnetic anomaly field for navigation of aircraft has shown promise as a viable alternative to the Global Positioning System (GPS) and other navigation systems. An airborne magnetic anomaly navigation (MagNav) system collects real-time magnetic field data and uses predetermined magnetic anomaly maps of the earth to estimate location by aiding an inertial navigation system (INS), which continually drifts. MagNav has the benefits of being passive, globally available at all times and in all weather, and not reliant on sight of land or stars. Since the magnetic field strength of a dipole decreases with the inverse cube of distance, MagNav is also nearly unjammable. A corrupting magnetic source must be flying alongside or in the aircraft to be effective.&#13;
&#13;
This magnetic physics has other implications, though. In particular, the magnetic components of the aircraft itself interfere with the desired magnetic measurements that are required to navigate. Magnetic measurements are a linear superposition of multiple magnetic fields. When the measured data contains magnetic signals from both the (desired) earth field and (undesired) aircraft field, it is difficult to separate the two signals. Previous work has proven the viability of MagNav using exceedingly clean magnetic measurements taken by geo-survey aircraft. The most significant outstanding challenge for real-world, operational MagNav is handling corruption of the measured magnetic signal by magnetic sources from aircraft components.&#13;
&#13;
In this thesis, several approaches to enable high-accuracy MagNav, despite receiving corrupted magnetic field measurements, are explored. These approaches can be split into four groups: linear aeromagnetic compensation, nonlinear aeromagnetic compensation, online aeromagnetic compensation, and covariance-adaptive filtering. The first two approaches evaluate different models that aim to improve on the state-of-the-art linear model used for removing aircraft interference. The last two approaches focus on making adjustments within the navigation algorithm in real-time based on the (corrupted) data provided. Performance is compared against the state-of-the-art compensation and navigation approach, which show that these advanced linear and nonlinear models can benefit MagNav when only corrupted magnetic field measurements are available. Each model and additional tools for aeromagnetic compensation and airborne magnetic anomaly navigation are publicly available in the MagNav.jl Julia software package.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145137</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Labor and Migration: Essays on Opportunities, Vulnerabilities, and Worker Agency in Emerging Markets</title>
<link>https://hdl.handle.net/1721.1/145136</link>
<description>Labor and Migration: Essays on Opportunities, Vulnerabilities, and Worker Agency in Emerging Markets
Khan, Mahreen
Creating suitable employment opportunities while ensuring safe working conditions is one of the most significant challenges facing labor markets of emerging economies in the Global South. Workers in these countries are amongst the most vulnerable and at-risk populations, whether they choose to remain and work in their countries of origin or migrate to other destinations. My dissertation focuses on studying labor market characteristics in the context of two contemporary phenomena confronting populous, low-income countries, namely, large scale labor migration and employment relations in global supply chains. In the first chapter, I estimate the local labor market and socio-economic spillover effects of large-scale migration from Bangladesh on non-migrant households living in migrant-prone regions. My results show a significant, positive but relatively small impact on hours worked and household income with limited effects on other socio-economic outcomes. In the next chapter, I address the health and economic risk exposure caused by the COVID-19 pandemic for low and middle-income countries as a result of their exposure to migration. We find that exposure to migration is a strong predictor for spatial variation of the effects of COVID-19. Finally, in my third essay, I study the effectiveness of worker-management committees to meaningfully engage worker voice that can help to address non-compliance with health, safety, and labor issues in factories engaged in low-wage manufacturing work. I find that worker-management committees with union representation and fair electoral processes have a positive, significant effect on addressing such compliance issues. However, the effectiveness of these structures is limited by the broader institutional context of the states in which they operate. My research deepens our understanding of the challenges facing labor markets in developing countries with important implications for future policy measures in these contexts.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145136</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Role of Metrics in Innovation</title>
<link>https://hdl.handle.net/1721.1/145135</link>
<description>Essays on the Role of Metrics in Innovation
Wu, Jane Yajie
This dissertation consists of three essays studying the role of metrics in the process of innovation. Scientific and technical metrics are trusted as objective and consistent arbiters of knowledge, and as a result, are typically taken as given without much question. Yet at the same time, these metrics are chosen at a given point in time under imperfect information. The motivation of this work is to understand how such metrics influence the ideas production process, and ultimately, who benefits from innovative effort. In the first essay, I define and delineate the role of metrics in innovation from other forms of quantification in organizations. I synthesize prior work to develop a typology of mechanisms that metrics can involve, highlighting how metrics are used at different junctures in the innovation process. The second essay explores the impact of introducing a new metric on the rate and direction of innovation. I study the setting of US automotive safety, finding that the introduction of the side impact dummy as a metric reduced overall fatalities but also led to disproportionate benefits for occupants similar to the metric itself. Moreover, firms responded heterogeneously, suggesting that metrics can profoundly affect the innovation trajectories of firms. In the third essay, I analyze whether it is possible to move firms away from a metric that has become a key focusing device for R&amp;D within an industry. I use a policy shock to estimate the effects of the “removal” of watts as a metric within the domestic vacuum cleaner industry. I find that rather than investing in new metrics, firms reduce their R&amp;D in the focal area and shift efforts to adjacent, unregulated product areas.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145135</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Household Finance and Small Business Credit</title>
<link>https://hdl.handle.net/1721.1/145134</link>
<description>Essays on Household Finance and Small Business Credit
Kim, Olivia S.
Chapter 1 examines whether closing disparities in credit access between spouses can help reduce consumption inequality in the household. The 2013 reversal of the Truth-in-Lending Act increased the borrowing capacity of secondary earners in equitable-distribution states but not in community-property states, where division-of-property laws superseded the policy change. Using a matched difference-in-differences design and administrative financial-transaction records measuring the credit and consumption of each spouse, I show that this reversal closed the credit gap between spouses by increasing secondary earners’ credit card limits. In turn, spouses shared consumption more equally, reducing their pre-reversal consumption gap. Delinquency rates were not measurably impacted, suggesting that household financial standing did not worsen. These results are consistent with a model of joint decision-making under limited commitment, in which credit causes a shift in marital bargaining power.&#13;
&#13;
Chapter 2 explores the investment decisions of small business owners when their child goes to college using the linked financial accounts of small businesses and their owners. By comparing small business owner households with college-entering aged children to otherwise similar households with near college-entering aged children, I show that small business owners respond to the increase in education spending by downsizing business production and liquidating the business. These results suggest that business owners’ family financial decisions affect the real economy as business owners struggle to separate business capital demands from personal finances.&#13;
&#13;
Joint work with Natalie Cox and Constantine Yannelis in Chapter 3 uses notches in the loan guarantee rate schedule for Small Business Administration loans to estimate the elasticity of bank lending volume to loan guarantees. We show significant bunching in the loan distribution on the side of the size threshold that carries a more generous loan guarantee. The excess mass implies that increasing guarantee generosity by one percentage point of loan principal would increase per-loan lending volume by $19,000. Placebo results indicate that bunching disappears when the guarantee notch is eliminated. We conclude that lending is highly sensitive to loan guarantees, and thus, federal guarantee programs have the potential to increase lending levels when borrowing is inefficiently low.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145134</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Asset Pricing</title>
<link>https://hdl.handle.net/1721.1/145133</link>
<description>Essays in Asset Pricing
Jaffard, Pierre
This thesis is devoted to better understanding market dynamics and asset pricing anomalies.&#13;
&#13;
In Chapter 1, which is co-authored with Andrea Hamaui, we study the effect of investors’ market expectations on asset pricing. Given traditional stock returns factor modelling and the prominence of the market factor, beliefs about market returns represent a natural primitive for expectations of stock prices. As the desire to increase market exposure generates excess demand for high beta assets from constrained investors, we connect mutual funds’ expectations to the beta (or low vol) anomaly. We show that the beta anomaly is particularly strong for stocks purchased by over-optimistic mutual funds. On the empirical side, we first introduce a mutual fund-level measure of market expectations and confirm the model’s predictions for asset prices.&#13;
&#13;
In Chapter 2, which is co-authored with Andrea Hamaui, we study mutual funds’ trading behavior. In particular, we introduce the concept of "core" vs "satellite" holdings and we characterize positions depending on their longevity and interim return in a fund’s portfolio. We show that core positions are relatively protected from selling in times of distress, as managers consolidate their portfolio. Next, we show that this theory has implications for asset prices and liquidity: core positions incur less downward contemporaneous price pressure as a result of outflows and are relatively more liquid. A behavioral model rationalizes those findings and validates the use of interim return and longevity as proxies for the "coreness" of a position.&#13;
&#13;
In Chapter 3, I develop a three-period asset pricing model with heterogeneity in firms’ size and a government that introduces a policy distortion. I find that large firms can better hedge the political uncertainty associated with this policy change through lobbying, which leads them to earn lower expected returns. I provide two strands of empirical evidence consistent with the model predictions. The first one looks at the behavior of blue versus red industries around the unexpected results of the 2016 US Presidential election. The second one forms a political risk factor using a matching procedure, and shows that lobbying is indeed associated with a lower exposure to this factor.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145133</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Synthetic Strategies for Discrete Macromolecules: Enabling Exploration of Structure-Property Relationships in Biological and Materials Applications</title>
<link>https://hdl.handle.net/1721.1/145132</link>
<description>Efficient Synthetic Strategies for Discrete Macromolecules: Enabling Exploration of Structure-Property Relationships in Biological and Materials Applications
Wang, Wencong
Discrete macromolecules possess precisely controlled features such as chemical composition, molecular connectivity, stereochemistry, and conformation. Compared to traditionally synthesized disperse polymers, these defined molecular features of discrete polymers offer an opportunity to explore the structure-property relations of designed polymeric systems under different applications, such as drug delivery systems and elastic materials. In this thesis, a brief introduction to the current state of macromolecular synthesis with discrete structures is first provided, including iterative linear growth (ILG) strategies, iterative exponential growth (IEG) strategies, and their direct comparison. Moreover, a critical analysis of various IEG systems is then introduced, with a focus on the orthogonal chemistries of their IEG cycles, structural parameters of molecular features, and advanced structures and applications. After introducing the fundamentals of discrete polymer synthesis, several works focusing on biological applications of discrete polymers are first discussed in Chapters 2 to 5. Through this IEG methodology, we prepared two series of IEG macromolecules with different stereochemistry and varied distance between stereogenic centers. After polymerizing these macromolecules using ring-opening metathesis polymerization (ROMP) to generate bottlebrush polymers with uniform arms, we evaluated their interactions with biological systems both in vitro and in vivo, which provides an optimized route for the design of future biomaterials (Chapter 2). After exploring these uniform macromolecules in the form of disperse bottlebrush polymers, we further conjugated them with site-specifically modified proteins to generate uniform polymer-protein conjugates through azide-alkyne cycloaddition reactions. 
These discrete conjugates eliminated manufacturing variables originating from polymer dispersity and offered more molecular features to the systems, such as absolute configurations of the polymer backbone, structural space between stereogenic centers, and sidechain functionalities. We believe that the application of discrete IEG polymers to form polymer-protein conjugates opens a new toolbox for understanding the impact of the structure of their conjugated polymers on the biological performances of proteins (Chapter 3-5). Lastly, we incorporated unimolecular PDMS-functionalized spiropyran (SP) force probes into randomly-crosslinked PDMS elastomers to evaluate the force distribution in polymer networks (Chapter 6).
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145132</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery of microenvironment drivers of cell states, plasticity and drug response</title>
<link>https://hdl.handle.net/1721.1/145131</link>
<description>Discovery of microenvironment drivers of cell states, plasticity and drug response
Navia, Andrew Warren
Cell state can be influenced by both intrinsic and extrinsic factors with functional consequences. Illustratively, in cancer, intrinsic genome level alterations or extrinsic microenvironmental immune cell activity can drive tumorigenesis. Similarly, in viral infections like COVID-19, the responses of infected and uninfected cells can impact clinical course. The recent emergence of single-cell genomic technologies like single-cell RNA sequencing (scRNA-seq) now enables us to characterize systematically and comprehensively the roles of intrinsic versus extrinsic responses in driving disease sequelae. Here, we apply these technologies to identify tumor cell states and their microenvironmental dependencies, as well as to define infected cells and their supportive peripheral cells. Further, we establish new model systems based on in vivo interactions to nominate potential therapeutic targets.&#13;
&#13;
Specifically, in pancreatic cancer, we refine a previously established basal and classical phenotype dichotomy and build on it by describing an intermediate state with a distinct supportive microenvironment. By understanding in vivo secreted factors from tumor and peripheral cells, we more accurately recapitulate cell-cell interactions ex vivo, allowing us to establish RNA state specific models. These ex vivo models suggest tumor cell plasticity that may play a role in evading therapeutic pressure. Collectively, this work uncovers novel cancer biology, improves modeling of said biology and nominates therapeutic targets informed by system level interactions. Meanwhile, in COVID-19, we identify cell types prone to infection and tie disease severity to intrinsic epithelial immune responses. We associate clinical course with distinct immune environments, with severe cases harboring inflammatory macrophage populations and equivalent or elevated viral RNA load. We also identify viral targets, nominate mechanisms of viral entry, and find immune response trends in patients with severe disease.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145131</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding cation catalytic effects in electron transfer reactions at molecular scale</title>
<link>https://hdl.handle.net/1721.1/145130</link>
<description>Understanding cation catalytic effects in electron transfer reactions at molecular scale
Hpone Myint, Kyaw
Catalytic effects of spectator counterions on the rate of electron transfer (ET) have been well documented in both homogeneous and heterogeneous ET reactions. The cation-independent reaction pathway has been found to be 300 times slower than the corresponding cation (potassium) facilitated rate. This cation specificity is usually observed in electron transfer involving high-valent, anionic redox couples; for cationic redox centers, the change in reaction rate is relatively insignificant. Moreover, the observed cation specificity appears to be independent of the shape or geometry of the redox centers, as well as the shape or identity of the electrode.&#13;
&#13;
Despite the ubiquity of cation catalytic effects in aqueous ET reactions, the mechanism behind this effect is not yet fully understood. Two mechanisms have been proposed in the literature for the cation-specific effects: (1) an indirect and (2) a direct pathway. In the indirect pathway, cations modify the extended hydrogen-bonding structure of water in the solution, thereby modifying the reorganization energy associated with the electron transfer. In the direct pathway, cations pair up with the redox centers to modulate the local solvation characteristics, which in turn changes not only the reorganization energy but also the coupling between the redox centers.&#13;
&#13;
In Ch. 2, we quantified the indirect effects from the cations using various statistical tools. Moreover, we developed basic intuitions for the likely causes of cation specificity using model redox centers. Our analyses in this chapter reveal that cations exert no significant change in water’s hydrogen bond geometry outside their first few solvation shells. More importantly, in the Marcus picture, collective electrostatics fluctuations drive ET, and we found that the cations have no effect on electrostatics fluctuations of the bulk solvent. These findings indicate that cations exert no significant indirect effect on these ET reactions, and the direct effects are the likely cause of the observed change in ET rate. We then investigated the role of ion pairing in cation-specific effects, and found that more highly charged anions tend to pair more strongly with cations and this ion pairing significantly affects the local electrostatics fluctuations around the anionic redox centers, likely causing the observed changes in experimental rates.&#13;
&#13;
In Ch. 3, we applied the intuitions obtained in Ch. 2 to a real, ferri/ferrocyanide redox couple as a test case. Contrary to our expectation, ferri/ferrocyanide displays no cation-specific trend in outer-sphere reorganization energy for both the homogeneous and heterogeneous ET cases. Further investigation into the effects of redox charge distribution reveals that charge distribution has a significant effect on the outer-sphere reorganization energy trend. We found that as we transition from a very concentrated to a more scattered charge distribution, the trend in outer-sphere reorganization energy slowly disappears. This implies that for some redox centers, the cation specificity is likely caused by changes in both reorganization energy and cation-induced coupling values. We can use this as a general guideline for designing better redox centers in the future. Preliminary coupling calculations on representative MD configurations indicate that the coupling values are highly sensitive to orientation and number of explicit solvent molecules included in the calculations. This means an intuitive and qualitative explanation for the cation-specific coupling trend is out of reach for the ferri/ferrocyanide redox couple, and a quantitative explanation for the observed experimental ET rate shift can be obtained by calculating ensemble averaged coupling values for each cation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145130</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Synthesis and Application of 1,4-Dithiins, Thianthrenes, and Other Sulfur-Rich Scaffolds</title>
<link>https://hdl.handle.net/1721.1/145129</link>
<description>The Synthesis and Application of 1,4-Dithiins, Thianthrenes, and Other Sulfur-Rich Scaffolds
Etkind, Samuel I.
This thesis describes the synthesis and applications of various sulfur-rich compounds, focusing on 1,4-dithiins, thianthrenes, and new sulfur-rich macrocycles.&#13;
&#13;
In Chapter 1, we offer a review of the properties, general synthesis, and materials applications of 1,4-dithiins and thianthrenes. Additionally, the installation of sulfur atoms in macrocyclic compounds and the benefits thereof are detailed.&#13;
&#13;
In Chapter 2, we design a macrocyclic anion receptor containing electroactive 1,4-dithiin units. The binding affinities for various anions are assessed, as well as the response of the receptor when oxidized in the presence of anionic guests.&#13;
&#13;
In Chapter 3, we disclose the utility of thianthrene-based compounds as electrolytes for symmetric batteries. A range of compounds with bipolar redox-activity are designed and synthesized, and the solvent, supporting salt, and cycling conditions are optimized to fabricate a battery with long cycle life. Implications of this work in the development of storage systems for renewable energy are discussed, as well as further prospects for the field.&#13;
&#13;
In Chapter 4, we develop a sulfurous analog to pillar[n]arene macrocycles in which the bridging methylene groups have been exchanged for sulfurs. The gram-scale production and derivation of the scaffold is explored.&#13;
&#13;
In Chapter 5, we discuss our approach towards porous organic cages. Rational design of a macrocycle, as well as attempts at its production and isolation, are disclosed. Though we were unable to produce our desired scaffold, we have synthesized useful intermediates that enable access to sulfurous cyclotriveratrylene analogues.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145129</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>CityScope: An Urban Modeling and Simulation Platform</title>
<link>https://hdl.handle.net/1721.1/145128</link>
<description>CityScope: An Urban Modeling and Simulation Platform
Noyman, Ariel
Current mass-urbanization trends create vast opportunities alongside new challenges to cities worldwide. Immigration, climate change, technological disruptions, inequality, and health concerns are only some of the questions urban decision-makers are facing today. As these challenges grow, traditional urban processes are rendered insufficient, as they trail behind rapidly expanding cities and technological disruptions.&#13;
&#13;
In this dissertation I investigate a new urban process, which couples data-driven and evidence-based decision-making with human-centric and participatory planning. I explore this new urban process through the design, development and deployment of CityScope: an urban modeling, simulation, and decision-making platform. From collaborative allocation of refugee-housing in Germany, through crowd-sourced mapping of public safety in Guadalajara, to mass-transit co-creation in Boston, CityScope helps to build agency amongst the ‘have-nots’, who traditionally were excluded from the urban process.&#13;
&#13;
I report on a series of lab experiments and real-world deployments of CityScope through four themes: Insight: CityScope as an urban observatory, using real-time spatial data and urban dynamics analytics; Transformation: CityScope as an iterative, collaborative, and real-time Urban Human Computer Interaction system; Prediction: CityScope for urban forecasting and simulation of implicit aspects in the built environment; and Consensus: CityScope for collaborative decision-making with diverse stakeholders and communities. Finally, I describe how CityScope supported, enhanced, and occasionally replaced traditional urban decision-making, affecting both the urban process as well as its outcomes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145128</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplicative Structures on Moore spectra</title>
<link>https://hdl.handle.net/1721.1/145127</link>
<description>Multiplicative Structures on Moore spectra
Burklund, Robert
In this article we show that S/8 is an E₁-algebra, S/32 is an E₂-algebra, S/&#119901;ⁿ⁺¹ is an Eₙ-algebra at odd primes and, more generally, for every &#119945; and &#119899; there exist generalized Moore spectra of type &#119945; which admit an Eₙ-algebra structure.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145127</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrability in random conformal geometry</title>
<link>https://hdl.handle.net/1721.1/145126</link>
<description>Integrability in random conformal geometry
Ang, Jie Jun
Liouville quantum gravity (LQG) is a random surface arising as the scaling limit of random planar maps. Schramm-Loewner evolution (SLE) is a random planar curve describing the scaling limits of interfaces in many statistical physics models. Liouville conformal field theory (LCFT) is the quantum field theory underlying LQG. Each of these satisfies conformal invariance or covariance. This thesis proves exact formulas in random conformal geometry; we highlight a few here.&#13;
&#13;
The Brownian annulus describes the scaling limit of uniform random planar maps with the annulus topology, and is the canonical annular &#120574;-LQG surface with &#120574; = √(8/3). We obtain the law of its modulus, which is as predicted from the ghost partition function in bosonic string theory.&#13;
&#13;
The conformal loop ensemble (CLE) is a random collection of loops in the plane which locally look like SLE, corresponding to the scaling limit of all interfaces in several important statistical mechanics models. We derive the three-point nesting statistic of simple CLE on the sphere. It agrees with the imaginary DOZZ formula of Zamolodchikov (2005) and Kostov-Petkova (2007), which is the three-point structure constant of the generalized minimal model conformal field theories.&#13;
&#13;
We compute the one-point bulk structure constant for LCFT on the disk, thereby proving the formula proposed by Fateev, Zamolodchikov and Zamolodchikov (2000). This is a disk analog of the DOZZ constant for the sphere. Our result represents the first step towards solving LCFT on surfaces with boundary via the conformal bootstrap.&#13;
&#13;
Our arguments depend on the interplay between LQG, SLE and LCFT. Firstly, LQG behaves well under conformal welding with SLE curves as the interfaces. Secondly, LCFT and LQG give complementary descriptions of the same geometry.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145126</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher-order Fourier analysis with applications to additive combinatorics and theoretical computer science</title>
<link>https://hdl.handle.net/1721.1/145125</link>
<description>Higher-order Fourier analysis with applications to additive combinatorics and theoretical computer science
Tidor, Jonathan
Fourier analysis has been used for over one hundred years as a tool to study certain additive patterns. For example, Vinogradov used Fourier-analytic techniques (known in this context as the Hardy-Littlewood circle method) to show that every sufficiently large odd integer can be written as the sum of three primes, while van der Corput similarly showed that the primes contain infinitely many three-term arithmetic progressions.&#13;
&#13;
Over the past two decades, a theory of higher-order Fourier analysis has been developed to study additive patterns which are not amenable to classical Fourier-analytic techniques. For example, while three-term arithmetic progressions can be studied with Fourier analysis, all longer arithmetic progressions require higher-order techniques. These techniques have led to a new proof of Szemerédi's theorem in addition to results such as counts of k-term arithmetic progressions in the primes.&#13;
&#13;
This thesis contains five results in the field of higher-order Fourier analysis. In the first half, we use these techniques to give applications in additive combinatorics and theoretical computer science. We prove an induced arithmetic removal lemma first in complexity 1 and then for patterns of all complexities. This latter result solves a central problem in property testing known as the classification of testable arithmetic properties. We then study a class of multidimensional patterns and show that many of them satisfy the popular difference property analogously to the one-dimensional case. However, we prove that a surprising spectral condition necessarily appears in higher dimensions that is not present in the one-dimensional problem.&#13;
&#13;
In the second half of this thesis, we further develop the foundations of higher-order Fourier analysis. We determine the set of higher-order characters necessary over [mathematical notation], showing that classical polynomials suffice in the inverse theorem for the Gowers Uᵏ-norm when k≤p+1, but that non-classical polynomials are necessary whenever k&gt;p+1. Finally, we prove the first quantitative bounds on the U⁴-inverse theorem in the low-characteristic regime p&lt;5.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145125</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a high-fidelity biorobotic cardiovascular in vitro simulator</title>
<link>https://hdl.handle.net/1721.1/145122</link>
<description>Development of a high-fidelity biorobotic cardiovascular in vitro simulator
Park, Clara
The complex motion of the beating heart is accomplished by the spatial arrangement of contracting cardiomyocytes with varying orientation across transmural layers, which is difficult to imitate in organic or synthetic models. The anatomical details of intracardiac structures (such as papillary muscles, chordae tendineae, ventricular trabeculae, valves, moderator bands) are highly complex and challenging to replicate using current manufacturing methods. In this thesis, I propose a biorobotic hybrid heart that preserves organic intracardiac structures and mimics cardiac motion by replicating the cardiac myofiber architecture of the left ventricle. The heart model is composed of organic endocardial tissue from a preserved explanted heart with intact intracardiac structures and an active synthetic myocardium that drives the motion of the heart. The active soft tissue mimic is then coupled to the organic endocardial tissue in a helical fashion to achieve the complex three-dimensional fiber architecture. The resulting biorobotic hybrid heart simulates the contractile motion of the native heart with a faithful representation of endocardial tissue anatomy. &#13;
&#13;
This heart model is connected to a mock circulatory loop to represent the human circulatory system, where pulsatile flow and hemodynamic parameters such as flow and pressure in the heart and vasculature are recapitulated using our biorobotic heart as the active pump. Additional cardiac parameters such as heart contractility, heart rate, flow resistance and compliance can be adjusted to recreate physiological and pathological hemodynamics. We demonstrate a biorobotic cardiovascular in vitro simulator that recapitulates internal cardiac structures, ventricular motion and hemodynamics. We then mimic a pathological condition (acute mitral regurgitation) with the heart model and demonstrate various interventions (such as surgical repair, replacement, and minimally invasive repair procedures) with collaborating cardiac surgeons. Overall, the biorobotic cardiovascular in vitro simulator may be used as a high-fidelity cardiovascular benchtop model for the development of intracardiac devices, thus reducing the overall number of animals used in preclinical and regulatory testing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145122</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power and Punishment: Architecture and Violence in the Italian Renaissance</title>
<link>https://hdl.handle.net/1721.1/145115</link>
<description>Power and Punishment: Architecture and Violence in the Italian Renaissance
Winston, ElDanté C.
The traditional narrative of a humanistic Renaissance, with its tropes of classical ornament, courtly manners, and artistic geniuses, has clouded the study of Italian Renaissance architecture. Over the course of three chapters, this dissertation challenges this narrative, reexamining the city, architecture, and architectural spaces across the complex milieu of the fifteenth and sixteenth centuries, starting with the concept of the ‘ideal’ city. The most truthful prescription of the paragon city is found in the combined text and images of military architectural treatises with their geometrically defined city walls. Reflective of its chaotic time, it is a paragon city under the jurisdiction of a ruler whose primary authority is the right to judge and dispense punishment. Because of the authoritarian overtone, the military architectural treatise has not been given the same consideration as its civic counterpart. The marginalization of military architecture has resulted in the exclusion of certain types of buildings from the history of Italian Renaissance architecture. The rocche and castles built during the Renaissance, misclassified as military architecture, have an underlying medieval heritage that has resulted in their omission from the broader discourse of Italian Renaissance architecture. Though fortified, these structures are no different from the classically clothed villas of the wealthy, more commonly examined and discussed. The conventional focus on patronage and magnificence excludes the actual sociopolitical environment, one of power, violence, justice, and execution—each regularly on display in the main piazzas of Italian cities. Violent threats to those in power demanded swift punishment that often resulted in the public execution of the offender. The public space of the piazza is understood as a space of authority and control: a gateway to power where certain kinds of violence were deemed acceptable. 
The exclusion of violence from Renaissance architectural history promotes a bias inherent in the traditional narrative. The resolution is a more inclusive narrative, one that acknowledges that the Renaissance is more complex and complicated than ideal.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145115</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Mental Illness and Belief Formation</title>
<link>https://hdl.handle.net/1721.1/145111</link>
<description>Essays on the Economics of Mental Illness and Belief Formation
Ridley, Matthew
This thesis consists of three chapters. Chapters 1 and 2 study questions relating to the economics of mental illness, while Chapter 3 contributes to the literature on the behavioral economics of belief formation. &#13;
&#13;
The first chapter studies why people who live in poverty are disproportionately affected by mental illness. Gautam Rao, Frank Schilbach, Vikram Patel and I review the interdisciplinary evidence of the bidirectional causal relationship between poverty and common mental illnesses---depression and anxiety---and the underlying mechanisms. Our review shows that mental illness reduces employment and therefore income and that psychological interventions generate economic gains. Similarly, negative economic shocks cause mental illness, and anti-poverty programs, such as cash transfers, improve mental health. A crucial step toward the design of effective policies is to better understand the mechanisms underlying these causal effects.&#13;
&#13;
In the second chapter, I study discrimination against people with common mental illnesses in labor market settings -- one important mechanism through which mental illness may (indirectly) cause lower employment and income. In an online experiment, I find that people pay to avoid depressed or anxious coworkers in a simple communication-based problem-solving task---paying as much to avoid them as they do to work with the college-educated. A model of earnings-maximizing statistical discrimination with correct beliefs cannot explain these preferences: depressed or anxious coworkers are equally productive when exogenously assigned. Instead, I find evidence that discrimination is driven by incorrect beliefs about such coworkers as well as an increase in costly effort when working with them. A major motivation for tackling discrimination is often to encourage revelation of mental illness (thereby perhaps improving access to treatment or support); however, I find that people pay to hide mental illness in my setting even when insulated from rejection or any financial consequence of discrimination.&#13;
&#13;
In the third chapter of my thesis, John Conlon, Malavika Mani, Gautam Rao, Frank Schilbach and I study social learning between spouses using an experiment in Chennai, India. We vary whether individuals discover information themselves or must instead learn what their spouse discovered via a discussion. Women treat their ‘own’ and their husband's information the same. In sharp contrast, men's beliefs respond less than half as much to information that was discovered by their wife. This is not due to a lack of communication: husbands put less weight on their wife's signals even when perfectly informed of them. In a second experiment, when paired with mixed- and same-gender strangers, both men and women heavily discount their teammate's information relative to their own. We conclude that people have a tendency to underweight others' information relative to their own. The marital context creates a countervailing force for women, resulting in a gender difference in learning (only) in the household.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145111</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Health Economics</title>
<link>https://hdl.handle.net/1721.1/145110</link>
<description>Essays in Health Economics
Devlin, Aileen Marie
This thesis consists of three chapters on various health economics topics. &#13;
&#13;
The first chapter investigates how physicians respond to Medicare reimbursement rate changes. The Affordable Care Act increased reimbursements in four states differentially across physicians. I run a difference-in-differences (DID) analysis comparing physicians with higher reimbursement increases to those with lower increases. I focus on office-based physicians and find that physicians with higher reimbursement increases were more likely to continue providing office-based care, while physicians with relatively lower increases were more likely to move all their provision to non-office facilities (e.g., hospitals), which suggests vertical integration. I also find that physicians who remain office-based exhibit a positive supply response. My findings suggest that Medicare reimbursement directly affects both access to care and vertical integration.&#13;
&#13;
The second chapter, joint with Annetta Zhou, explores the impacts of urgent care clinics (UCCs). They can divert patients out of more expensive emergency departments, but they also decrease the hassle costs of care, potentially increasing aggregate utilization. We use a stacked DID to investigate the impact of clinic openings in Massachusetts. We find that opening UCCs decreases emergency department visits among people living near the UCC---on average, every additional four UCC visits are offset by one fewer emergency department visit. We find evidence of a small diversion out of retail clinics, but no evidence of diversion out of other outpatient providers. We investigate differential impacts based on the ownership of the clinics and find that hospital-affiliated UCCs divert more patients out of emergency departments. &#13;
&#13;
The third chapter describes how exposure to telemedicine affects subsequent utilization in Medicare from 2006 to 2016. After a first telemedicine visit, patients were somewhat persistent with it, which I benchmark against persistence with in-person care from a unique provider. In a matched sample, telemedicine was roughly half as persistent as in-person care with a unique provider. However, telemedicine patients whose providers continued providing any telemedicine, a group likely to retain access, were quite persistent. If telemedicine were an experience good, then exposure should lead to persistent utilization in the absence of constraints. My results thus suggest supply constraints were a limiting factor.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145110</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication, Information, and Learning</title>
<link>https://hdl.handle.net/1721.1/145109</link>
<description>Communication, Information, and Learning
Clark, Daniel
The first chapter of this thesis presents robust neologism proofness, an equilibrium refinement that applies to both cheap-talk and costly signaling games. Robust neologism proofness eliminates equilibria that can be undone by a certain kind of credible communication from the sender to the receiver. We show that robust neologism proof equilibria exist both in monotonic signaling games and signaling games where the sender can give a transfer to the receiver. We apply robust neologism proofness to various examples and compare it with other equilibrium refinements, and show that in monotone-concave-supermodular signaling games with transfers, robust neologism proofness selects the sender-optimal separating equilibria.&#13;
&#13;
The second chapter studies justified communication equilibrium (JCE), a different equilibrium refinement for signaling games with cheap-talk communication. A strategy profile must be a JCE to be a stable outcome of non-equilibrium learning when receivers are initially trusting and senders play many more times than receivers. In the learning model, the counterfactual "speeches" that have been informally used to motivate past refinements are messages that are actually sent. Stable profiles need not be perfect Bayesian equilibria, so JCE sometimes preserves equilibria that existing refinements eliminate. Despite this, it resembles the earlier refinements D1 and NWBR, and it coincides with them in co-monotonic signaling games.&#13;
&#13;
The third chapter studies principal-agent settings where the principal has private information, both the principal and agent take actions, and the agent's action is subject to moral hazard. Unlike past work focusing on explicit contracts, we allow the principal to propose contracts that give them flexibility in their choice of future actions. We develop an adaptation of sequential equilibrium called contracting equilibrium for our principal-agent games, and prove its existence. In environments where the principal's type and agent's action are complements, the condition of payoff-plausibility characterizes the outcomes that survive robust neologism proofness as well as the strongly justified communication equilibrium outcomes. The principal-optimal safe outcomes, which are analogs of the sender-optimal separating outcomes of signaling games, are always payoff-plausible contracting equilibrium outcomes. They also provide an important payoff benchmark: Every principal type must obtain a weakly higher payoff from every payoff-plausible equilibrium.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145109</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Objective System Optimization of a Mars Atmospheric ISRU Plant</title>
<link>https://hdl.handle.net/1721.1/145095</link>
<description>Multi-Objective System Optimization of a Mars Atmospheric ISRU Plant
Hinterman, Eric Daniel
The Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) represents the first time that NASA is demonstrating In-Situ Resource Utilization (ISRU) on the surface of another planetary body. MOXIE produces oxygen from atmospheric CO2 on Mars. It was developed for NASA’s Mars 2020 Rover and produces oxygen with greater than 99.6% purity through solid oxide electrolysis. MOXIE is a small fraction of the scale that would be necessary to produce oxygen for use as a propellant for a human Mars mission, assuming that the empty oxygen tank on a Mars ascent vehicle would be filled from a scaled-up MOXIE system.&#13;
&#13;
MOXIE is a small prototype of an ISRU system that would be capable of supporting a crew of six astronauts on Mars. It is unclear, however, how to optimally scale MOXIE and what specific challenges a scaled-up version might face. This dissertation focuses on taking the lessons learned from MOXIE and determining the optimal way to scale it to a full-size system. Specifically, this dissertation defines a systems architecture for an extensible MOXIE system, called the Big Atmospheric MOXIE (BAM), based on the development of a detailed optimization model. The primary subsystems of interest are the solid oxide electrolysis (SOE) stack, the compressor, the liquefaction system, and the heat exchanger. The model has been validated with data from scaled-up SOE cell testing, past MOXIE experiments, and components used in industry.&#13;
&#13;
By understanding the scalability and extensibility of key subsystems in the MOXIE system, it is possible to design a larger, optimized systems architecture model for BAM to support the first human missions to Mars. Producing this optimized, validated systems design of a scaled-up atmospheric ISRU plant for Mars has never been done before under these parameters and is the primary goal of this dissertation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145095</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple Simultaneous Optical Links for Space-Based Platforms</title>
<link>https://hdl.handle.net/1721.1/145093</link>
<description>Multiple Simultaneous Optical Links for Space-Based Platforms
Aguilar, Alexa C.
Space-based Free Space Optical Communications (FSOC), or lasercom, offers key advantages over Radio Frequency (RF) communications in bandwidth, Size, Weight, and Power (SWaP) savings, and unregulated spectrum. Theoretical and demonstrated lasercom systems have shown higher data rates for similar or equal SWaP compared to their RF counterparts. New space-based network architectures, such as the broadband constellations currently being deployed by SpaceX and Telesat, among others, leverage optical intersatellite links to increase total system throughput and reduce the number of ground stations, which lowers overall system costs. Beyond LEO, the Artemis program infrastructure includes an optical communication relay between the Orion capsule and Earth, with eventual plans to expand to lunar orbiters for continuous surface coverage. Despite the performance advantages and increasing adoption across applications, state-of-the-art RF communication systems currently outperform lasercom systems in part because of optical communication systems’ inability to support multiple simultaneous links. Techniques such as frequency reuse, access methods, and dynamic beam forming enable RF communication systems to work around bandwidth limitations and establish simultaneous links with other nodes within a network (e.g., multiple ground stations, user terminals, etc.). This work looks at extending this capability to laser communication systems, evaluates the technology required to support multiple simultaneous optical links, and quantifies the impact of multi-user lasercom within a network configuration. We develop a model to simulate the performance of such a system and verify it against existing models and data. The model is then applied to LEO and deep-space network scenarios, analyzing different access methods, network configurations, and terminal technologies such as fiber amplifiers versus photonic integrated circuits. 
We perform trade studies to identify the limitations and constraints of the proposed approach. We then make architecture recommendations for each scenario based on key performance parameters. For example, we find that for the LEO case, a swarm of four 6U CubeSats can achieve a total system throughput of 12 Gbps with wavelength division multiple access in a mesh network configuration. Additionally, by using a photonic-based transceiver instead of a fiber-based one, an additional ~2.5X mass savings can be achieved.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145093</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Models and Policy Learning for Online Marketplaces</title>
<link>https://hdl.handle.net/1721.1/145092</link>
<description>Scalable Models and Policy Learning for Online Marketplaces
Kumar, Madhav
This dissertation contains three essays on designing scalable models and policy learning methods for online marketplaces. The underlying theme across all chapters is the development of data-driven practical solutions that help improve business operations and customer experiences in e-commerce.&#13;
&#13;
The first chapter offers a new perspective on creating promotional bundles in cross-category retail. A scalable approach is designed that efficiently leverages historical purchases and consideration sets to learn heuristics for complementarity and substitutability using machine learning-based embeddings. Subsequently, thousands of candidate bundles are created based on these heuristics and their effectiveness is tested using a field experiment. Offline policy learning is applied to the experimental data to optimize the retailer’s bundle design policy. The optimized policy is robust across product categories, generalizes well to the retailer's entire assortment, and provides an expected improvement of 35% in revenue over the baseline policy.&#13;
&#13;
The second chapter investigates the impact of algorithmic pricing on consumer behavior. The adoption of algorithmic pricing by an online retailer led to considerably higher price volatility. Analysis of detailed clickstream data, complemented with lab experiments, suggests that consumers become more price sensitive when exposed to frequently changing prices caused by algorithms. Furthermore, it shows that a key mechanism driving this behavior is price salience. This finding is economically consequential because even if implementing algorithmic pricing is profitable, it triggers unintended side effects that modify consumer behavior in ways that undermine those gains.&#13;
&#13;
The third chapter augments choice models and recommendation systems with consumer consideration sets. Recommendation systems are commonly used in online marketplaces to suggest relevant items (products in the case of e-commerce, content in the case of social media, and music/movies in the case of entertainment platforms) to users. In the case of online retail, these systems typically use historical purchases to learn consumer preferences and then predict what consumers are likely to buy next. The suggested method enhances the learning of consumer preferences by flexibly incorporating consumers' historical consideration sets along with purchases in a sequential deep learning model. The search-augmented recommendation system better captures consumers’ latent preferences, more accurately predicts future actions, and substantially outperforms strong baselines. Finally, we show that these gains are distributed across the entire spectrum of consumers and not concentrated among a small subset of high-usage consumers.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145092</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximum Entropy Optimization: a General Approach to Study Ordered and Disordered Proteins Reveals Key Features of Protein Phase Separation</title>
<link>https://hdl.handle.net/1721.1/145090</link>
<description>Maximum Entropy Optimization: a General Approach to Study Ordered and Disordered Proteins Reveals Key Features of Protein Phase Separation
Latham, Andrew P.
Liquid-liquid phase separation drives the formation of biological condensates that play essential roles in transcriptional regulation and signal sensing. The molecular factors that dictate these condensates' stability and spatial organization are not fully understood, and it remains challenging to predict their microstructures. Computational modeling could provide high-resolution structural characterizations of these condensates and help uncover physicochemical interactions that dictate their stability. However, the presence of both ordered and disordered domains in these proteins places a high demand on the model accuracy. In this thesis, we develop systematic methods to parameterize force fields from experimental data, and apply them to study the phase separation properties of chromatin regulator proteins.&#13;
&#13;
In chapter 2, we develop a highly efficient maximum entropy approach to fit SAXS data by introducing minimal biases to a coarse-grained protein force field, the associative memory, water mediated, structure and energy model (AWSEM). In chapter 3, we present a generic algorithm to improve the accuracy of coarse-grained IDP models using a diverse set of experimental measurements. It combines maximum entropy optimization and least squares regression to systematically adjust model parameters and improve the agreement between simulation and experiment. In chapter 4, we present an algorithm to derive a coarse-grained force field, MOFF, that can model both ordered and disordered proteins with consistent accuracy. We further apply MOFF to study the phase behavior of HP1, an essential protein for posttranslational modification and spatial organization of chromatin. The force field successfully resolves the structural differences between two HP1 homologs, despite their high sequence similarity. We carry out large-scale simulations with hundreds of proteins to determine the critical temperature of phase separation and uncover multivalent interactions that stabilize higher-order assemblies. In chapter 5, we combine MOFF with a chemically accurate DNA model to study the phase behavior of chromatin regulators that are crucial for heterochromatin organization and their interactions with DNA. Notably, a layered organization was observed in condensates formed by mixing HP1, histone H1, and DNA. This layered organization may be of biological relevance as it enables cooperative DNA packaging between the two chromatin regulators.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145090</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Directed Evolution in Mammalian Cells</title>
<link>https://hdl.handle.net/1721.1/145089</link>
<description>Continuous Directed Evolution in Mammalian Cells
Hendel, Samuel Joseph
Directed evolution is a powerful methodology for the creation of new biomolecules with user-desired functions. Most directed evolution experiments are performed in vitro, in bacteria, or in yeast, even when the evolved biomolecule is intended to function in mammalian cells. As a result, the functions of biomolecules evolved in these environments are often derailed in the complex mammalian cellular environment. The development of highly efficacious methods for directed evolution in mammalian cells has severely lagged behind similar methods in single-celled organisms, owing to the relative difficulties of both mammalian cell culture and genomic engineering in mammalian cells. In this thesis, I describe the development and subsequent application of a high-throughput, adaptable, virus-based continuous directed evolution method that uses the mammalian cell to simultaneously mutagenize, express, and select an evolving gene of interest. This platform functions by making adenoviral propagation in mammalian cells dependent upon the activity of a virally encoded gene of interest, which is continuously mutagenized by a highly error-prone engineered adenoviral polymerase. We demonstrated the platform’s efficacy in proof-of-principle evolution experiments by evolving a transcription factor to be insensitive to a small-molecule inhibitor. We then engineered selection circuits for evolving endogenous human G-protein coupled receptors, wherein viral replication is coupled to an endogenous signaling pathway. We also engineered selection circuits for evolving exogenous CRISPR systems, wherein viral replication is coupled to an exogenous transcriptional couple. For both selection circuits, we demonstrated selection pressure sufficient to drive a directed evolution campaign through viral replication assays.
Finally, we highlight a wide range of biomolecules for which directed evolution in mammalian cells would be impactful but was out of reach before the development of virus-based continuous evolution methods.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145089</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genera via Deformation Theory and Supersymmetric Mechanics</title>
<link>https://hdl.handle.net/1721.1/145087</link>
<description>Genera via Deformation Theory and Supersymmetric Mechanics
Wilson, Araminta Amabel
We study naturally occurring genera (i.e. cobordism invariants) from the deformation theory inspired by supersymmetric quantum mechanics. First, we construct a canonical deformation quantization for symplectic supermanifolds. This gives a novel proof of the super-analogue of Fedosov quantization. Our proof uses the formalism of Gelfand-Kazhdan descent, whose foundations we establish in the super-symplectic setting.&#13;
&#13;
In the second part of this thesis, we prove a super-version of Nest-Tsygan’s algebraic index theorem, generalizing work of Engeli. This work is inspired by the appearance of the same genera in three related stories: index theory, trace methods in deformation theory, and partition functions in quantum field theory. Using the trace methodology, we compute the genus appearing in the story for supersymmetric quantum mechanics. This involves investigating supertraces on Weyl-Clifford algebras and deformations of symplectic supermanifolds.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145087</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in MacroFinance</title>
<link>https://hdl.handle.net/1721.1/145084</link>
<description>Essays in MacroFinance
Gong, Feixue
I consider how debt structure affects asset prices and firm behavior. &#13;
&#13;
In Chapter 1, I use a multi-state, general-equilibrium model with collateralized financial promises to study how allowing an asset to back multiple financial contracts (i.e., tranching) affects price bases. &#13;
A basis emerges when one asset can be tranched to issue more derivative securities than can be backed by another asset.&#13;
This theory correctly predicts that inclusion in the CDX index increases the underlying CDS basis.&#13;
&#13;
In Chapter 2, I study nonfinancial firms’ use of secured and unsecured debt for financing. I find that firms with a higher fraction of their assets pledged respond more strongly to contractionary shocks, but show no difference in response to expansionary shocks. I then use a simple model to show that firms endogenously arrive at these different levels of secured and unsecured debt due to differences in their expected future investment opportunities.&#13;
&#13;
In Chapter 3, I use debt covenant violations to study the resolution of defaults and firm behavior. I find that the way violations are resolved has substantial implications for firm behavior: resolutions that preserve the lenders' rights lead to lower investment and debt issuance. I also find that violation outcomes are not solely determined by borrower health. In particular, lenders have considerable discretion when deciding how to resolve a violation. Harsh lenders punish violators by disproportionately sending them into worse resolutions, and, conditional on the resolution received, firms with harsh lenders issue less debt and have lower investment. I also provide evidence that firms dealing with multiple lenders face significant coordination frictions when trying to resolve violations, likely due to the presence of cross-default clauses.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145084</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perception and Control Methods for Improving the Autonomy of Off-Road Robots</title>
<link>https://hdl.handle.net/1721.1/145074</link>
<description>Perception and Control Methods for Improving the Autonomy of Off-Road Robots
Bégin, Marc-André
Despite their growing use in outdoor agriculture, mobile robots remain challenging to operate under off-road conditions. The agricultural robot designed as part of the present work, nicknamed RoverII, offers a good case study since it was developed for field-surveying tasks on dairy farms and needs to operate autonomously on uneven terrain and under light conditions that are challenging for visual perception. This thesis aims to improve the autonomy of off-road robots by addressing two key problems encountered with RoverII. First, RoverII requires a suspension to traverse uneven terrain quickly. As a result, the control of its robotic manipulator is complicated by the dynamic coupling with the suspension. Based on Lyapunov’s stability theory, this thesis contributes RaPID, a tuning procedure for the PID control of suspended manipulators that guarantees the stability of the system. The algorithm applies to any serial manipulator mounted on a flexible base and does not rely on linearization. To facilitate its adoption among control designers, an open-source implementation of RaPID is shared with the robotics community. Second, robot Visual Odometry (VO) is particularly sensitive to road vibrations and fast maneuvers. These effects are even more prominent in low-light settings where underexposure, motion blur, or image noise can degrade VO performance depending on the exposure parameters selected by the camera. This thesis contributes VO-AutoExpose, an auto-exposure algorithm that, unlike existing ones, predicts exposure parameters maximizing VO performance by fully leveraging the camera’s photometric response function and by explicitly balancing motion blur and image noise effects. Together, these features allow VO-AutoExpose to outperform state-of-the-art auto-exposure algorithms in challenging light conditions.
Finally, while a majority of works in the VO literature rely on standard datasets, the experimental validation used in the present work directly compares the performance of different auto-exposure algorithms by accurately repeating the same camera trajectory multiple times on a custom positioning table.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145074</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phasing of Ground-based Optical Arrays for Space Applications</title>
<link>https://hdl.handle.net/1721.1/145073</link>
<description>Phasing of Ground-based Optical Arrays for Space Applications
Allan, Gregory W.
Delivery of optical power to targets in space has many applications, including optical communication, active sensing, laser propulsion, and the removal of space debris. However, the large monolithic optics required to obtain high directivity are costly and difficult to slew at the rates required to track an object in low earth orbit (LEO). Coherent combination of a large number of small transmit elements will enable high power and directivity without the drawbacks associated with large monolithic optics. The challenge of coherent combination is matching the phase of the light from all elements in the optical array in the presence of phase disturbances from several sources. Changes in temperature in optical fibers and in opto-mechanical systems cause slow phase drifts, while mechanical and acoustic vibrations cause rapid variations in relative optical path length. Transmission through a turbulent atmosphere adds yet another phase disturbance that must be compensated.&#13;
&#13;
Sensing and correction of these disturbances is a complex endeavor, made even more difficult by the unique challenges of illuminating an object in LEO. Both point-ahead offset for a fast-moving target at long range and the presence of laser speckle in the light reflected from the target complicate atmospheric wavefront sensing. To address these challenges, we propose a novel optical phased array architecture based on a combination of three techniques: (i) atmospheric phase is sensed using a distributed Shack-Hartmann Wavefront Sensor (SHWFS) using light reflected from the target, and corrected using a calibrated feedforward phase offset, (ii) internal phase disturbances are sensed and corrected using Digitally Enhanced Heterodyne Interferometry (DEHI), and (iii) static misalignments and slow drifts not sensed by DEHI or the SHWFS are corrected using a Stochastic Parallel Gradient Descent (SPGD) algorithm.&#13;
&#13;
In this work we study the effects of optical speckle on SHWFS performance in conditions of weak turbulence. We develop a method for modeling these effects across a range of target and sensor geometries. We find that wavefront sensing based on reflected light from a rough target moving at orbital velocity in LEO is feasible provided the target is small relative to the diffraction limit of the SHWFS lenslets.&#13;
&#13;
We then analyze the performance of our proposed array architecture using a notional design for a laser system for de-orbit of space debris: an array of 2000 elements with an overall diameter of 9.6 m. Analytical models of the phase control dynamics in the DEHI and feedforward systems are developed, and used to estimate the residual phase error present on the transmit array. The diffraction pattern of the array is simulated, and a power budget is developed, showing that sufficient power is delivered to the target to achieve laser ablation thrust for debris objects in LEO.&#13;
&#13;
We validate our control system architecture using a laboratory demonstration of a three-element array. We develop a phase measurement system based on DEHI, implemented in an FPGA. Low-order atmospheric disturbances are simulated using a tip-tilt mirror, and measured using a quad-diode position sensor for correction with feedforward control. Quasi-static errors are corrected with SPGD, and the beam is steered to the desired target location. The system successfully corrects all three sources of phase disturbance. Internal disturbances are corrected to better than λ/120 RMS and all phase errors to better than λ/40 RMS.&#13;
&#13;
This thesis proposes and demonstrates a path towards the creation of large-scale optical transmit arrays for de-orbit of space debris and other applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145073</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constructing Entrepreneurial Networks: Evidence from a Mentoring Program</title>
<link>https://hdl.handle.net/1721.1/145072</link>
<description>Constructing Entrepreneurial Networks: Evidence from a Mentoring Program
Poskanzer, Ethan J.
Entrepreneurs’ social networks are important for explaining why some entrepreneurs are successful when so many others fail. While researchers have identified many impacts of entrepreneurs’ positions in social networks, important questions remain regarding how entrepreneurs’ networks come to be. In this dissertation, I study how entrepreneurs construct social networks, using a U.S. accelerator’s mentoring program as a strategic research site. In the first essay, I use an experiment to study how entrepreneurs can be successfully matched to mentors. Even though every entrepreneur had access to the same mentors, different matching processes led some entrepreneurs to form longer-lasting and more beneficial connections than others. This was driven by matching those entrepreneurs to mentors who were better fits for their needs rather than to higher-quality mentors. These results suggest that how entrepreneurs are matched to advisors can affect a network intervention’s effectiveness and that connecting with advisors who can offer particular, localized help is a more salient friction for entrepreneurs than accessing advisors in general. The second essay examines why entrepreneurs form homophilous social networks. While no evidence indicates that entrepreneurs disproportionately initiate homophilous relationships, such relationships are more likely to be maintained, leading to homophilous networks over time and indicating that these relationships are more beneficial to entrepreneurs. This pattern is unbalanced by gender: male mentors are disproportionately supportive of male entrepreneurs, leading to gender inequality in referral attainment. Together, these results indicate that the match between an entrepreneur and each of their particular contacts is crucial, and that the process of selecting particular relationships affects which entrepreneurs have productive networks.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145072</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Augmented Machine Learning and Optimization for Marketing</title>
<link>https://hdl.handle.net/1721.1/145071</link>
<description>Augmented Machine Learning and Optimization for Marketing
Zhu, Yuting
This dissertation consists of three essays exploring how to augment machine learning and optimization methods for marketing management.&#13;
&#13;
The first essay considers an augmentation of a deep-learning-based recommender system for sales force management. Helping new salespeople succeed is critical for many organizations. We develop a deep-learning-based recommender system to help new salespeople recognize suitable customers, leveraging historical sales records of experienced salespeople. One challenge is how to learn from experienced salespeople’s own failures, which are prevalent but often do not show up in sales records. We develop a parsimonious model to capture these “missing by choice” sales records and incorporate the model into a neural network to form an augmented, deep-learning-based recommender system. We validate our method using sales force transaction data from a large insurance company. Our method outperforms common benchmarks in prediction accuracy and recommendation quality, while being simple, interpretable, and flexible. We demonstrate the value of our method in improving sales force productivity.&#13;
&#13;
The second essay explores an augmentation of a large-scale linear programming method for targeting with constraints. Personalization, which aims to target different marketing actions to different customers, has attracted broad attention in both academia and industry. While most research has focused on training personalization policies without constraints, in practice, many firms face constraints when implementing these policies. For example, firms may face volume constraints on the maximum or minimum number of actions they can take, or on the minimum acceptable outcomes for different customer segments. They may also face fairness constraints that require similar actions for different groups of customers. These constraints can introduce difficult optimization challenges, particularly when the firm intends to implement personalization policies at scale. Traditional optimization methods face challenges solving large-scale problems that contain either many customers or many constraints. We show how recent advances in linear programming can be adapted to the personalization of marketing actions. We provide a new theoretical guarantee on how the proposed method scales relative to state-of-the-art benchmarks (primal simplex, dual simplex and barrier methods). We also extend existing guarantees on optimality and computation speed, by adapting them to accommodate the characteristics of personalization problems. We implement the proposed method, and compare it with these benchmark methods on feasibility, computation speed, and profit. We conclude that volume and similarity (fairness) constraints should not prevent firms from optimizing and implementing personalization policies at scale.&#13;
&#13;
The third essay studies collective search in an organization. In this essay, we build a two-member, two-period model to show that when a group of people with different preferences search and make a decision together, they can benefit from committing ex ante to the number of products to search when the search cost is very small or relatively large. The underlying mechanism is that, because of the preference divergence between group members, they tend to search fewer products and thus have lower expected utility in group search than in single-agent search, and committing to the number of products to search can help mitigate this preference divergence problem. If consumers can observe product prices before search and the firm sets product prices endogenously, the firm can benefit from letting consumers commit ex ante to the number of products to search if consumers search as a group and their search cost is small. We also consider several extensions to show the robustness and boundary conditions of our findings.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145071</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An ORC Flip Enables Bidirectional Helicase Loading</title>
<link>https://hdl.handle.net/1721.1/145064</link>
<description>An ORC Flip Enables Bidirectional Helicase Loading
Gupta, Shalini
Cells must duplicate their genetic material faithfully in every cell cycle to maintain cellular identity and ensure viability. The first step in DNA replication is the licensing of all potential origins by loading two inactive Mcm2-7 replicative DNA helicases around the origin DNA in a head-to-head double hexamer. This conformation prepares the helicases to initiate bidirectional replication upon entry into S-phase. Interactions between three proteins and the Mcm2-7 helicase guide eukaryotic helicase loading. The origin-recognition complex (ORC) recognizes and binds origin DNA and recruits Cdc6. Mcm2-7, in complex with a third protein, Cdt1, associates with ORC/Cdc6/DNA. Cdc6 and Cdt1 are sequentially released to allow loading of an oppositely-oriented second Cdt1-bound Mcm2-7 hexamer that forms the Mcm2-7 double hexamer. Multiple mechanisms have been proposed to explain how two oppositely-oriented helicases are loaded at origins. Bulk in vitro helicase-loading experiments showed that two ORC binding sites are required, suggesting that two ORC proteins load the two Mcm2-7 helicases.&#13;
&#13;
In this thesis I present experiments that investigate interactions between ORC and Mcm2-7 to build a detailed model for ORC-guided helicase loading. Consistent with previous single-molecule studies, one DNA-bound ORC is sufficient for helicase loading. Each Mcm2-7 is recruited by an identical, short interaction with ORC that begins precisely when the corresponding Mcm2-7 arrives on DNA. Further, I detected and monitored a novel interaction of ORC with the initially recruited Mcm2-7 that was previously identified through structural studies. This interaction explains how ORC can be retained on the DNA during a binding-site transition, or an “ORC flip”. The ORC flip consistently occurs between the two Mcm2-7 recruiting interactions, ensuring that the same ORC molecule recruits both Mcm2-7 molecules, but in opposite orientations. Strikingly, the duration of the initial ORC-Mcm2-7 interaction coincides exactly with the presence of Cdt1 in the complex, suggesting that Cdt1 release triggers ORC release from DNA and flipping. Together, these experiments reveal an intricate and carefully coordinated series of events that allow a single ORC to load two helicases in opposite orientations.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145064</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Fluoride Molten Salt Thermophysical Properties with Transient Grating Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/145063</link>
<description>Evaluating Fluoride Molten Salt Thermophysical Properties with Transient Grating Spectroscopy
Robertson, Sean Gunn
Accurate molten salt thermophysical properties are required to optimize the efficiency and safety of molten salt-based energy technologies. Unfortunately, existing thermal conductivity data for molten fluorides are plagued by large uncertainties. In addition, the data display a curious positive temperature coefficient. Our understanding of heat transfer in simple fluids suggests that thermal conductivity should decrease as a function of temperature. Unaccounted-for contributions from convection and radiation may result in the observation of erroneous positive temperature coefficients. Interestingly, negative temperature coefficients have been observed in non-fluoride salts when measured using Transient Grating Spectroscopy (TGS). This work has shown that convection and radiation are insignificant for most TGS regimes. Therefore, it was hypothesized that TGS measurements of fluorides could finally resolve the discrepancies between theory and experiments.&#13;
&#13;
The design and validation of a first-of-a-kind fluoride-salt-compatible TGS setup is presented. Demonstration of system performance is achieved through the acquisition and comparison of sound speed and thermal diffusivity data in lithium chloride (LiCl). The system has subsequently been used to measure the thermal conductivity of fluorides (FLiNaK). Results show a flat to slightly increasing thermal conductivity as a function of temperature (0.7086 + 0.0002 ⋅ T(°C) +/- 0.08 W/m-K). In addition to thermal conductivity, sound speed data as a function of temperature (2998 – 1.24 ⋅ T(°C) +/- 27 m/s) has also been obtained for the first time in FLiNaK. The use of accurate sound speed data in theoretical models of thermal conductivity provides better, but not complete, agreement with the results from TGS.&#13;
&#13;
The continued existence of a positive temperature coefficient poses more questions than it answers. Three new hypotheses are presented, namely, the influence of vapor pressure in multi-constituent salts, uncertainty surrounding existing heat capacity data, and the validity of neglecting diffusive contributions from theoretical models. Ultimately, this work suggests that fluoride molten salt thermal conductivity is weakly temperature-dependent. Reliable property data has far-reaching consequences for the sizing of heat exchangers, the response to accident scenarios, and, more broadly, the safe and efficient deployment of molten salt-based energy technologies.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145063</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations into Message Passing Neural Networks and Polymer Fouling</title>
<link>https://hdl.handle.net/1721.1/145062</link>
<description>Investigations into Message Passing Neural Networks and Polymer Fouling
Forsuelo, Michael
This thesis covers two investigations: high-fidelity molecular property prediction with Message Passing Neural Networks (MPNNs), and nanoscale polymer fouling experimentation with Quartz Crystal Microbalances (QCMs).&#13;
&#13;
Message Passing Neural Networks are promising deep learning architectures for chemical property prediction. The state-of-the-art MPNN known as Chemprop is adapted in this thesis for the prediction of infrared spectra. A novel loss function and architectural extensions are identified that improve spectral prediction performance relative to the original architecture. These extensions are collected into the software package Chemprop-IR. The Chemprop-IR architecture holds promise for the prediction of experimental infrared spectra with little to no peak shift within the fingerprint region. Uncertainty estimation frameworks within Chemprop are also extended in this thesis. A pretraining procedure and expanded readout structure are identified that improve the accuracy of predicted properties and estimated aleatoric uncertainties while maintaining scalable predictivity.&#13;
&#13;
Process fouling is a pervasive problem in ethylene plants. Foulant, or the undesired accumulation of material, has been identified throughout the process units of ethylene plants. Fouling can result in a multitude of challenges in plant production and operation, including reduced thermal separation efficiencies and increased process safety concerns. Identifying the root causes of the various forms of process fouling is essential to identifying the appropriate mitigation strategies. In particular, polymer fouling within ethylene plants is probed experimentally by Quartz Crystal Microbalance with Dissipation monitoring (QCM-D). QCM-D is used to assess the potential for ethylene plant products to form polymer foulant or support the growth of pre-existing polymer foulant at approximate process conditions. The effects of molecular oxygen and antioxidant inhibitors on foulant growth dynamics are also addressed.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145062</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning the Electrochemistry of Degradation and Safety in Graphite Porous Electrodes for Lithium-ion Batteries</title>
<link>https://hdl.handle.net/1721.1/145061</link>
<description>Learning the Electrochemistry of Degradation and Safety in Graphite Porous Electrodes for Lithium-ion Batteries
Das, Supratim
Lithium-ion batteries have become the centerpiece of portable technology and electric transportation, as well as grid stabilization for intermittent renewable sources. These varied applications involve varying requirements for safety, lifetime, and energy/power density. To optimally design these systems for each application, researchers face a very large design space, which requires extensive and costly experimentation or computationally heavy modeling to explore.&#13;
&#13;
Specifically, for designing batteries with better lifetime and long-term capacity retention, relying on experiments alone can take weeks to months and thousands of cells to yield robust insights for process improvement. Data-driven and physics-based modeling, when done rigorously, can help inform experimentation, reducing time and cost requirements. However, modeling battery degradation is challenging: not only is it hard to visualize in operando, but it also affects cell performance at multiple scales, from single particle to porous electrode to battery pack. Insights obtained from experimentation at one scale often transfer poorly to predictions at other scales, limiting applicability.&#13;
&#13;
This thesis is a small part of a collaboration between MIT, Stanford, Purdue, and the Toyota Research Institute to develop data-driven models for predicting battery performance and degradation, called D3BATT: Data-Driven Design of Lithium-ion Batteries. We adopt simultaneous ‘bottom-up’ (first principles) and ‘top-down’ (statistical analyses of experiments) approaches to inform theory formulation at multiple scales. This thesis develops a multiscale ‘bottom-up’ approach to understanding battery degradation: First, we use experiments designed on simple systems to study the electrochemistry of key graphite degradation mechanisms, such as solid-electrolyte interphase (SEI) growth and lithium plating, at the single-particle scale. This gives us robust kinetic and thermodynamic parameters that are invariant with scale. Second, we extend the single-particle theory to the porous electrode scale to capture the effects of multi-particle interactions and macroscopic electrode and electrolyte properties. This is done using the Multiphase Porous Electrode Theory (MPET) software, developed in the Bazant Group at MIT. Third, by simulating various cycling protocols (such as slow and fast charging, full depth-of-discharge vs. shallow formation cycling, and open-circuit storage), we can compare the predictions with those of data-driven models obtained from statistical analyses of cell data. This informs the porous electrode model of the key mechanisms relevant at the cell scale, and gives reliable estimates of electrode-scale parameters that could not have been obtained from single-particle models. As an example, we apply the informed porous electrode degradation model to battery formation cycling, and explain what makes a ‘good’ formation cycling protocol. Model improvement is an ongoing effort in the research group as new experimental data come to light. This work can be applied to a multitude of cycling scenarios and battery chemistries to assist experimental design.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145061</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Engineered Proteins in Redox Biology and&#13;
Biomarker Detection Assay Development</title>
<link>https://hdl.handle.net/1721.1/145060</link>
<description>Applications of Engineered Proteins in Redox Biology and&#13;
Biomarker Detection Assay Development
Hao, Yining
Engineered proteins are versatile tools that have been applied in assay development for various purposes. They have been made into genetically encoded biosensors/probes or affinity agents for biomarker detection.&#13;
&#13;
This thesis explored a few topics using assays developed with engineered proteins. The genetically encoded hydrogen peroxide generator, D-amino acid oxidase (DAAO), was used to study hours-long intracellular hydrogen peroxide (H₂O₂) generation. This study established that the primary responder to cytosolic H₂O₂ is peroxiredoxin 1, and that H₂O₂-induced apoptosis initiates before the collapse of the Prx/Trx/TR antioxidant network. Then, a genetically encoded FRET sensor was used to design a high-throughput screening assay that identified three small-molecule drugs, from over 600 compounds, that can mediate toxicity through H₂O₂.&#13;
&#13;
This thesis also explored the applications of engineered proteins in diagnostic assay development. I engineered binders against various targets from gram-positive and gram-negative pathogenic bacteria, two of which have been tested and showed binding to Salmonella whole cells. The engineered binders were also used to develop a SARS-CoV-2 rapid test. In this project, Sikes lab members developed, as a team, a paper-based assay to detect the SARS-CoV-2 nucleocapsid protein and successfully validated the assay with patient samples. Subsequently, I improved the thermostability of the reporter binder protein used in the assay by switching the fusion partner of the binder to a thermally stable protein. I also identified the bottleneck in the development of an epigenotyping assay and provided insight for future directions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145060</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-throughput functionalization of the Toxoplasma kinome uncovers a novel regulator of invasion and egress</title>
<link>https://hdl.handle.net/1721.1/145059</link>
<description>High-throughput functionalization of the Toxoplasma kinome uncovers a novel regulator of invasion and egress
Smith, Tyler A.
Protein kinases regulate fundamental aspects of eukaryotic cell biology, making them attractive chemotherapeutic targets in parasites like Plasmodium spp. and Toxoplasma gondii. To systematically examine the parasite kinome, we developed a high-throughput tagging (HiT) strategy to endogenously label protein kinases with an auxin-inducible degron and fluorophore. Hundreds of tagging vectors were assembled from synthetic sequences in a single reaction and used to generate pools of mutants to determine localization and function. Examining 1,160 arrayed clones, I assigned 40 protein localizations and associated 15 kinases with distinct defects. The fitness of tagged alleles was also measured by pooled screening, distinguishing delayed from acute phenotypes. A previously unstudied kinase, associated with delayed death, was shown to be a regulator of invasion and egress. I call the kinase Store Potentiating/Activating Regulatory Kinase (SPARK), based on its impact on intracellular Ca2+ stores. Despite homology to mammalian PDK1, SPARK lacks a lipid-binding domain, suggesting a rewiring of the pathway in parasites. HiT screening extends genome-wide approaches into complex cellular phenotypes, providing a scalable and versatile platform to dissect parasite biology.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145059</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cohering with the Crowd: How Audiences Shape the Quasi-Scientific Process of Entrepreneurship</title>
<link>https://hdl.handle.net/1721.1/145056</link>
<description>Cohering with the Crowd: How Audiences Shape the Quasi-Scientific Process of Entrepreneurship
Friis, Simon C.A.T.
Important aspects of entrepreneurship can be usefully understood as a quasi-scientific process in which entrepreneurs develop theories of value and test those theories through experimentation. Unlike academic scientists, however, entrepreneurs often develop and test theories in collaboration with an audience. The impact of audiences on the quasi-scientific process is brought into sharp relief on the livestreaming platform Twitch.tv, where entrepreneurs compete in a cultural market for the scarce attention of viewers.&#13;
&#13;
The first essay examines how theories of value constrain strategic choice and valuation. I ask: why are some combinations of product categories more appealing to audiences than others? A prominent line of work, drawing on prototype theory, posits a universal penalty for category-spanning offerings. I clarify the limitations of this approach, focusing in particular on its inability to explain change. I introduce theoretical coherence (the extent to which a combination of product categories coheres with a theory of value) as an alternative standard for understanding the appeal of categorical combinations. I develop and validate an empirical framework that uses word embedding models to study theoretical coherence and find that theoretical coherence is able to explain the appeal of product-category combinations not easily addressed by prototype theory.&#13;
&#13;
The second essay examines why successful experimentation requires the effective collaboration of audiences and how this in turn limits the strategic opportunities of entrepreneurs. Experimentation is traditionally thought to improve entrepreneurial outcomes because it avoids costly commitment and allows entrepreneurs to pivot to more attractive product markets. I develop and test a theory that recognizes the costs experiments impose on audiences. My theory implies that successful experimentation involves a tradeoff between two types of commitment. On the one hand, an entrepreneur can invest in developing a better prototype, thereby increasing the audience’s willingness to test the prototype. On the other hand, an entrepreneur can focus on developing their relationship with their audience, thereby increasing the audience’s tolerance for crude prototypes. I find that Twitch streamers who invest more in developing relationships with their audience experience fewer penalties from experimentation but get trapped in less attractive product markets.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145056</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic modeling and Bayesian inference via triangular transport</title>
<link>https://hdl.handle.net/1721.1/145049</link>
<description>Probabilistic modeling and Bayesian inference via triangular transport
Baptista, Ricardo Miguel
Probabilistic modeling and Bayesian inference in non-Gaussian settings are pervasive challenges for science and engineering applications. Transportation of measure provides a principled framework for treating non-Gaussianity and for generalizing many methods that rest on Gaussian assumptions. A transport map deterministically couples a simple reference distribution (e.g., a standard Gaussian) to a complex target distribution via a bijective transformation. Finding such a map enables efficient sampling from the target distribution and immediate access to its density. Triangular maps comprise a general class of transports that are attractive from the perspectives of analysis, modeling, and computation. This thesis: (1) develops a general representation for monotone triangular maps, and adaptive methodologies for estimating such maps (and their associated pushforward densities) from samples; (2) uses triangular maps and their compositions to perform Bayesian computation in likelihood-free settings, including new ensemble methods for nonlinear filtering; and (3) proposes parameter and data dimension reduction techniques with error guarantees for high-dimensional inverse problems.&#13;
&#13;
The first part of the thesis explores the use of triangular transport maps for density estimation and for learning probabilistic graphical models. To construct triangular maps, we represent monotone functions as smooth transformations of unconstrained (non-monotone) functions. We show how certain structural choices for these transformations lead to smooth optimization problems with no spurious local minima, i.e., where all local minima are global minima. Given samples, we then propose an adaptive algorithm that estimates maps with sparse variable dependence. We demonstrate how this framework enables joint and conditional density estimation across a range of sample sizes, and how it can explicitly learn the Markov properties of a continuous non-Gaussian distribution. To this end, we introduce a consistent estimator for the Markov structure based on integrated Hessian information from the log-density. We then propose an iterative algorithm for learning sparse graphical models by exploiting a corresponding sparsity structure in triangular maps. A core advantage of triangular maps is that their components expose conditionals of the target distribution. Hence, learning a map that depends on both parameters and observations enables efficient sampling from the posterior distribution in a Bayesian inference problem. Crucially, this can be done without evaluating the likelihood function, which is often inaccessible or computationally prohibitive in scientific applications (as with forward models given by stochastic partial differential equations, which we consider here). In the second part of this thesis, we propose and analyze a specific composition of transport maps that directly transforms prior samples into posterior samples. We show that this approach, termed the stochastic map (SM) algorithm, improves over other transport-based methods for conditional sampling by reducing the bias and variance of the associated posterior approximation. 
We then use the SM algorithm to sequentially estimate the state of a chaotic dynamical system given online observations, a nonlinear filtering problem known in geophysical applications as “data assimilation” (DA). We show that when the SM algorithm is restricted to linear maps, it reduces to the ensemble Kalman filter (EnKF), a workhorse algorithm for DA; with nonlinear updates, however, the SM algorithm substantially improves on the performance of the EnKF in challenging regimes.&#13;
&#13;
Finally, we extend the use of transport for high-dimensional inference problems by developing a joint dimension reduction strategy for parameters and observations. We identify relevant low-dimensional projections of these variables by minimizing an information-theoretic upper bound on the error in the posterior approximation. We show that this approach reduces to canonical correlation analysis in the linear–Gaussian setting, while outperforming standard dimension reduction strategies in a variety of nonlinear and non-Gaussian inference problems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145049</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Locus of Learning and Innovation</title>
<link>https://hdl.handle.net/1721.1/145048</link>
<description>Essays on the Locus of Learning and Innovation
Fu, Carolyn Jiaming
This dissertation comprises three essays that explore the locus of learning and innovation in organizations and their environments, and the resulting impact on firm strategy.&#13;
&#13;
The first essay examines how firms can learn from a diverse set of sources. The returns to social learning are expected to be low in contexts rife with interdependencies, where practices from one context may be incompatible with those from another. Using a computational simulation, this essay shows that centralized social learning proves surprisingly robust to this challenge, and that to resolve it, firms should counterintuitively double down on social learning.&#13;
&#13;
The second essay examines how firms can leverage the market for systemic innovation. This is a task typically understood to be best accomplished through vertical integration, where the firm can easily explore alternative resource combinations. However, using a qualitative archival analysis of an opera company and a computational simulation, this paper shows that specialized producers’ reputational concerns lead them to undertake systemic innovation, the value of which a firm can in fact undercut by trying to vertically integrate.&#13;
&#13;
The third essay examines how firms should take audience learning into account during experimentation. While firm learning calls for its experiments to be conducted as separately as possible from their core activities, audience learning (especially in cultural markets) often necessitates their integration in order to properly value an innovation. This essay uses a qualitative archival analysis of a ballet company to elucidate the challenge this creates for experimentation, and how firms can address this by strategically choosing their audience, and periodically replacing their experimentation units.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145048</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organometallic Chemistry in Fe–S Clusters</title>
<link>https://hdl.handle.net/1721.1/145047</link>
<description>Organometallic Chemistry in Fe–S Clusters
Ye, Mengshan
The organometallic chemistry of Fe–S clusters has emerged as a new area in Fe–S enzyme biochemistry. In particular, alkylated [Fe₄S₄] clusters are now thought to be important intermediates in several classes of enzymes, including radical S-adenosyl-L-methionine (SAM) enzymes and enzymes involved in terpene biosynthesis. Due to the transient nature of these intermediates, full characterization of the geometric and electronic structures of enzymatic [Fe₄S₄]–alkyl species has not been possible. To address these challenges, we have synthesized a series of organometallic [Fe₄S₄] clusters to explore the electronic structures and reactivities of Fe–S clusters in the presence of an Fe–C bond. We first prepared novel 3:1 site-differentiated, alkylated or arylated [Fe₄S₄] clusters with chelating iminophosphorane ligands in various oxidation states. In-depth spectroscopic and computational analysis of these clusters suggests that alkylated/arylated Fe sites exhibit partial or complete valence localization in each redox state. By systematically tuning the donicities of the alkyl or aryl ligands, we demonstrated that the extent of valence localization and the ground spin state are sensitive to the ligand-field strength. In addition to electronic structure studies, we also prepared alkylated [Fe₄S₄] clusters with 1,3-dimesitylimidazol-2-ylidene (IMes) ligands in several redox states and found that when the alkylated Fe site has a coordination number greater than four, the Fe–C bond is dramatically weakened and undergoes rapid homolysis to release alkyl radicals. In attempts to trap radical-releasing intermediates, we discovered that the alkyl group migrates from Fe to S on a [Fe₄S₄]³⁺ cluster and migrates back upon one-electron reduction. Overall, this thesis elucidates ligand-field effects on the electronic structures of [Fe₄S₄] clusters, addresses the intermediacy of organometallic species in radical SAM enzymes, and connects organoiron and organosulfur chemistry.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145047</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling the Properties of Polymer Metal-Organic Frameworks and Cages Through Polymer Ligand Design</title>
<link>https://hdl.handle.net/1721.1/145046</link>
<description>Controlling the Properties of Polymer Metal-Organic Frameworks and Cages Through Polymer Ligand Design
Pearson, Matthew A.
Chapter 1: Synthetic and Design Considerations in Polymer Metal Organic Frameworks and Cages&#13;
&#13;
Strategies for developing polymer-tethered MOF/MOC hybrid materials are discussed. Emphasis is placed on the impacts of synthetic strategy and polymer ligand design on the properties and applications of the end material.&#13;
&#13;
Chapter 2: PolyMOF Nanoparticles: Dual Roles of a Multivalent polyMOF Ligand in Size Control and Surface Functionalization&#13;
&#13;
A simple strategy to access functional MOF nanoparticles in one pot is reported using a ligand possessing a polymer block for surface functionalization and a coordination block with tunable multivalency for size control. This strategy produces uniform polyMOF-5 and polyUiO66 nanoparticles with sizes down to 20 nm, displaying exceptional structural and colloidal stability. &#13;
&#13;
Chapter 3: Radical PolyMOFs: A Role for Ligand Dispersity in Enabling Crystallinity&#13;
&#13;
Reported here is the synthesis of polyMOF ligands featuring MOF-forming linkers on their sidechains using common radical polymerization techniques: reversible addition fragmentation chain transfer (RAFT) polymerization and free radical polymerization (FRP). High-dispersity ligands prepared through FRP formed crystalline polyMOFs while low-dispersity RAFT ligands required the addition of free H2bdc to yield crystalline materials analogous to MOF-5 and UiO-66, suggesting that ligand dispersity is a key design parameter for polyMOF synthesis.&#13;
&#13;
Chapter 4: Mixed Ligands as a General Strategy for Tuning the Properties of polyMOFs and the Synthesis of MTV-polyMOFs&#13;
&#13;
A strategy of mixing free linker with a step-growth polymer ligand containing MOF-forming linkers is investigated as a means to tune the properties of polyMOFs, resulting in polyMOFs with superior N2 and CO2 uptake. The strategy is further studied by combining distinct MOF-forming polymer ligands to create MTV-polyMOFs, presenting a method for incorporating low-dispersity polymer ligands with complex architectures into polyMOF lattices without the addition of small molecule components.&#13;
&#13;
Chapter 5: Polymer Metal Organic Cages from RAFT Polymerization&#13;
&#13;
Here, a RAFT polymer ligand for the synthesis of Cu-paddlewheel, isophthalic acid-based bulk polyMOC powders is developed. The crystallinity and morphology of the polyMOCs can be tuned by the addition of varying amounts of free isophthalic acid. These polymer/MOC hybrids represent a unique morphology and provide a platform for the synthesis of more complex hybrids.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145046</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Similarity Metrics for Biological Data: Algorithmic developments for high-dimensional datasets</title>
<link>https://hdl.handle.net/1721.1/145044</link>
<description>Similarity Metrics for Biological Data: Algorithmic developments for high-dimensional datasets
Narayan, Ashwin
Advances in experimental methods in biology have allowed researchers to gain an unprecedentedly high-resolution view of the molecular processes within cells, using so-called single-cell technologies. Every cell in the sample can be individually profiled — the amount of each type of protein or metabolite or other molecule of interest can be counted. Understanding the molecular basis that determines the differentiation of cell fates is thus the holy grail promised by these data.&#13;
&#13;
However, the high-dimensional nature of the data, replete with correlations between features, noise, and heterogeneity means the computational work required to draw insights is significant. In particular, understanding the differences between cells requires a quantitative measure of similarity between the single-cell feature vectors of those cells. A vast array of existing methods, from those that cluster a given dataset to those that attempt to integrate multiple datasets or learn causal effects of perturbation, are built on this foundational notion of similarity.&#13;
&#13;
In this dissertation, we delve into the question of similarity metrics for high-dimensional biological data generally, and single-cell RNA-seq data specifically. We work from a global perspective — where we find a distance function that applies across the entire dataset — to a local perspective — where each cell can learn its own similarity function. In particular, we first present Schema, a method for combining similarity information encoded by several types of data, which has proven useful in analyzing the burgeoning number of datasets which contain multiple modalities of information. We also present DensVis, a package of algorithms for visualizing single-cell data, which improve upon existing dimensionality-reduction methods that focus on local structure by accounting for density in high-dimensional space. Lastly, we zoom in on each datapoint, and show a new method for learning &#119896;-nearest neighbors graphs based on local decompositions.&#13;
&#13;
Altogether, the works demonstrate the importance — through extensive validation on existing datasets — of understanding high-dimensional similarity.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145044</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>K-stability of Log Fano Cone Singularities</title>
<link>https://hdl.handle.net/1721.1/145043</link>
<description>K-stability of Log Fano Cone Singularities
Huang, Kai
In this thesis, we define the &#120575;-invariant for log Fano cone singularities, and show that the necessary and sufficient condition for K-semistability is &#120575; ≥ 1. This generalizes the results of C. Li and K. Fujita. We also prove that on any log Fano cone singularity of dimension &#119899; whose &#120575;-invariant is less than (&#119899;+1)/&#119899;, any valuation computing &#120575; has a finitely generated associated graded ring. This shows that a log Fano cone is K-polystable if and only if it is uniformly K-stable. Together with earlier works, this implies the Yau-Tian-Donaldson Conjecture for Fano cones.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145043</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetric structures in the weak and strong Bruhat orders</title>
<link>https://hdl.handle.net/1721.1/145042</link>
<description>Symmetric structures in the weak and strong Bruhat orders
Gao, Yibo
The weak and strong Bruhat orders are classical and rich combinatorial objects, with connections to Schubert calculus, Lie algebras, hyperplane arrangements, sorting networks and so on. In this thesis, we study various new symmetries within these structures, including the balance constant and the hull metric property of the weak order, and the self-dual intervals and boolean elements in the strong order. Much of the work involved is joint with Christian Gaetz.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145042</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Roles for cell surface glycans in guiding human pluripotent stem cell fate</title>
<link>https://hdl.handle.net/1721.1/145030</link>
<description>Roles for cell surface glycans in guiding human pluripotent stem cell fate
Mahbuba, Deena-Al
Cell-surface polysaccharides are a fundamental component of the stem cell microenvironment. They are known to modulate developmental signals critical for pluripotency and differentiation. Nevertheless, the architecture of the cellular glycocalyx, and how these structures direct the fate of human pluripotent stem (hPS) cells, have not been fully explored. I addressed this gap with a focus on a critical glycan of the hPS cell niche, heparan sulfate (HS). HS is a heterogeneous, long-chain, cell-surface polysaccharide. The spatial distribution and ultrastructure of this information-rich signaling polysaccharide are poorly defined. In this work, I aim to understand the interplay between HS organization and developmental signal transduction in the hPS cell microenvironment. We discovered that the HS of hPS cells has a dynamic ultrastructure that undergoes changes during lineage-specific differentiation. These changes also correlate with the cells’ ability to bind specific growth factors. While variations in HS sequence were thought to be the primary driver of alterations in HS-mediated growth factor signaling, our findings indicate a role for HS ultrastructure in its ability to recruit growth factors in the stem cell niche. To advance the understanding of its roles in human development, we next engineered an HS-deficient cell line derived from hPS cells. In parallel, I set out to develop a synthetic, modular, surface-based cell separation strategy that can isolate or enrich cells of interest in a rapid and label-free way. I applied this strategy to isolate genetically engineered HS-deficient hPS cells after CRISPR modification by engaging the cell-surface HS with a small-peptide-presenting synthetic surface. These HS-deficient hPS cells enable further investigation of the role of HS in human development. We showed that the multi-lineage commitment of hPS cells depends on HS.
Moreover, lack of HS hinders proper neuronal projection and synaptic vesicle formation in hPS cell-derived neurons, suggesting a specific role for HS in human neural development. Taken together, these results indicate that HS has a highly dynamic ultrastructure that modulates the cell fate choices of hPS cells, specifically neuronal connection formation. This work paves the way to a better understanding of HS’s role in early human development.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145030</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organophosphorus-Catalyzed Reductive Functionalization of Nitrocompounds via P(III)/P(V) Redox Couple</title>
<link>https://hdl.handle.net/1721.1/145029</link>
<description>Organophosphorus-Catalyzed Reductive Functionalization of Nitrocompounds via P(III)/P(V) Redox Couple
Li, Gen
Nitro compounds are widely available synthetic building blocks that present a strategic opportunity to serve as direct precursors for the construction of nitrogen-containing molecules of increasing complexity and value. Using geometrically distorted organophosphetanes as catalysts and hydrosilanes as terminal reductants, nitro compounds (nitroarenes, nitromethane, and nitroalkanes) undergo reductive O-atom transfer reactions involving the P(III)/P(V)=O couple. Boronic acids were then employed as coupling partners to intercept the nitrene reactivity of the oxazaphosphorane intermediates derived from nitro compounds, enabling direct reductive C–N coupling. This organophosphorus-catalyzed nitro deoxygenative platform was further developed into a tandem C–N coupling/cyclization sequence, yielding a variety of N-functionalized azaheterocycles (oxindoles, indoles, quinoxalinediones, and benzimidazoles). Besides boronic acids, anilines were also introduced as an exogenous coupling partner for novel cross-selective intermolecular N–N bond-forming reactivity, affording various hydrazines with excellent chemoselectivity and functional group tolerance. The works herein not only expand the reactivity of low-cost, environmentally benign organophosphorus compounds as platforms for catalytic reductive O-atom transfer, but also detail the mechanism of reactions catalyzed via the P(III)/P(V)=O redox couple, providing further precedent for the catalytic potential of organophosphorus compounds in reaction classes heretofore dominated by transition-metal catalysis and suggesting opportunities for organophosphorus catalyst optimization.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145029</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large genus bounds for the distribution of triangulated surfaces in moduli space</title>
<link>https://hdl.handle.net/1721.1/145025</link>
<description>Large genus bounds for the distribution of triangulated surfaces in moduli space
Vasudevan, Sahana
Triangulated surfaces are compact Riemann surfaces equipped with a conformal triangulation by equilateral triangles. In 2004, Brooks and Makover asked how triangulated surfaces are distributed in the moduli space of Riemann surfaces as the genus tends to infinity. Mirzakhani raised this question in her 2010 ICM address. We show that in the large genus case, triangulated surfaces are well distributed in moduli space in a fairly strong sense. We do this by proving upper and lower bounds for the number of triangulated surfaces lying in a Teichmüller ball in moduli space. In particular, we show that the number of triangulated surfaces lying in a Teichmüller unit ball is at most exponential in the number of triangles, independent of the genus.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145025</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Random surface interpretations of two-dimensional Liouville quantum gravity and Yang-Mills theory</title>
<link>https://hdl.handle.net/1721.1/145024</link>
<description>Random surface interpretations of two-dimensional Liouville quantum gravity and Yang-Mills theory
Park, Minjae
The theory of random surfaces (or "sums over surfaces") has its historical roots in quantum gravity, string theory, statistical physics, and combinatorics. This thesis explores random surfaces in two settings: one related to Liouville quantum gravity, and one related to Euclidean Yang-Mills theory in two dimensions.&#13;
&#13;
The first part introduces a specific regularization of Liouville quantum gravity surfaces. It also establishes the Polyakov-Alvarez formula on non-smooth surfaces with Brownian loops in place of the zeta-regularized Laplacian determinant. Consequently, "weighting by a Brownian loop soup" changes the so-called central charge of the regularized random surfaces, as expected in the physics literature. This result justifies a definition of Liouville quantum gravity surfaces in the supercritical regime, where the central charge is greater than 1.&#13;
&#13;
The second part describes continuum Wilson loop expectations on the plane as sums over surfaces, an example of gauge-string duality. In contrast to the Gross-Taylor expansion, our weight is explicit as ±Nᵡ, where χ is the Euler characteristic, for any of the gauge groups U(N), SO(N), and Sp(N/2). Building on the well-established continuum theory in two dimensions, we provide a probabilistic treatment of Wilson loop expectations, leading to various applications such as an alternative proof of the Makeenko-Migdal equation and a connection with a random walk on permutations.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145024</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications and limits of convex optimization</title>
<link>https://hdl.handle.net/1721.1/145023</link>
<description>Applications and limits of convex optimization
Hamilton, Linus
Every algorithmic learning problem becomes vastly more tractable when reduced to a convex program, yet few can be simplified this way. At the heart of this thesis are two hard problems with unexpected convex reformulations. The Paulsen problem, a longstanding open problem in operator theory, was recently resolved by Kwok et al. [40]. We use a convex program due to Barthe to present a dramatically simpler proof with an accompanying efficient algorithm that also achieves a better bound. Next, we examine the related operator scaling problem, whose fastest known algorithm uses convex optimization in non-Euclidean space. We expose a fundamental obstruction to such techniques by proving that, under realistic noise conditions, hyperbolic space admits no analogue of Nesterov’s accelerated gradient descent. Finally, we generalize Bresler’s structure learning algorithm from Ising models to arbitrary graphical models. We compare our results to a recent convex programming reformulation of the same problem. Notably, in variants of the problem where one only receives partial samples, our combinatorial algorithm is almost unaffected, whereas the convex approach fails to get off the ground.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145023</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bringing Redox Flow Batteries to the Grid: Techno-economic Modeling for Chemistry-Informed Design of Redox Flow Batteries</title>
<link>https://hdl.handle.net/1721.1/145016</link>
<description>Bringing Redox Flow Batteries to the Grid: Techno-economic Modeling for Chemistry-Informed Design of Redox Flow Batteries
Rodby, Kara E.
Energy storage solutions will be crucial to decarbonizing the power sector by offsetting renewable intermittency while providing additional resiliency, flexibility, and value. With the grid comprising an array of services that vary in their technical attributes and value propositions, a portfolio of storage solutions is needed to support the many functionalities.&#13;
&#13;
The redox flow battery (RFB) is one electrochemical storage technology that is particularly competitive, on a capital cost basis, at longer durations (&gt; 4 hours) due to its unique decoupling of energy and power facilitated by an open architecture. This architecture also enables long-term cost savings by allowing for targeted component maintenance. RFBs can host a vast range of chemistries, whose choice affects the upfront and long-term cost and performance of the system, providing a wide design space. Clever chemistry design can enable efficient use of targeted maintenance; for example, the state-of-the-art vanadium RFB uses a symmetric chemistry that allows for efficient, continuous recovery of crossover losses (often the fastest form of capacity decay in RFBs).&#13;
&#13;
Despite these techno-economic benefits and decades of research, RFBs have seen minimal adoption; even within the subset of battery deployment specifically, RFBs are dwarfed by lithium-ion batteries. Limitations to RFB deployment include: a lack of demand for long duration storage, perceived risk of financing such large-scale systems of a relatively nascent technology, and high upfront costs. With little time left to develop cost-effective solutions to combat climate change, and without the high-value beachhead markets lithium-ion leveraged to drive down costs and risk while improving performance, RFBs will benefit from deeper upfront due diligence – via means like techno-economic modeling – to inform efficient research and development toward cost-competitive systems. &#13;
&#13;
In this thesis, I utilize capital and levelized cost models to explore the design space of RFB chemistries. By considering chemistry-specific cost and performance attributes, particularly incorporating capacity loss and recovery over time, these models help set techno-economic benchmarks for grid viability. Supply chain studies also probe other relevant considerations regarding the scalability of these chemistries. Finally, I exemplify how such studies can drive research efforts via experimental studies.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145016</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Execution and Optimization of Flow Chemistry on a Robotic Platform with Integrated Analytics</title>
<link>https://hdl.handle.net/1721.1/145015</link>
<description>Automated Execution and Optimization of Flow Chemistry on a Robotic Platform with Integrated Analytics
Nambiar, Anirudh Manoj Kumar
The development, optimization, and characterization of chemical processes for the synthesis of organic compounds, which play a key role in society as medicines and materials, is currently an expensive and laborious enterprise. These inefficiencies are driving efforts to develop automated, data-rich experimentation (DRE) platforms and methods designed to maximize the amount of useful data generated per unit time and raw material expended. In this thesis, a modular robotic platform for continuous flow synthesis was utilized for machine-assisted organic reaction development. An improved version of the platform was built with new capabilities including a Cartesian robot for fast and reliable pick-and-place, integrated process analytical technology (PAT) such as LC-MS and FT-IR spectroscopy for online reaction monitoring, and closed-loop feedback optimization of reaction conditions using a Bayesian optimization algorithm. In the first case study, algorithmic reaction optimization helped partially automate the specification of critical process parameters (both continuous and categorical) for a computer-proposed and human-refined synthetic route. A representative multistep synthesis involving 3 reactions (including a heterogeneous hydrogenation) and 1 separation was chosen. In multistep flow processes where downstream residence times are physically constrained by upstream flow rates, the modular reactor volumes of the robotic platform were leveraged to introduce an independent degree of freedom. Deployment of multiple PAT tools facilitated thorough process understanding, and workflow automation helped accelerate experimentation and reduce its manual burden. In the second case study, the platform’s toolkit was further expanded with the addition of an LED array to perform photochemistry. This new capability enabled the development of two photochemical steps that lead to an important class of drugs.
Bayesian optimization aided in optimizing continuous variables including residence time and stoichiometry, and characterizing the effect of critical process parameters. Finally, the design of data-rich dynamic flow experiments, where continuous reactors are operated under controlled transients in input variables, was computationally studied and experimentally validated. Mathematical modeling using transport equations and a parametric analysis helped identify a simple criterion to guide the design of dynamic trajectories. Sinusoidal dynamic experiments designed using the criterion were executed on the robotic platform with two and three simultaneously varying inputs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145015</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplexed transcriptional control strategies for biosynthesis from mixed substrates in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/145014</link>
<description>Multiplexed transcriptional control strategies for biosynthesis from mixed substrates in Escherichia coli
Ni, Cynthia
Metabolic engineering reprograms microbes to produce value-added chemicals. Microbial production has the potential to use renewable feedstocks, such as conventional waste streams. Metabolic engineers already contend with the metabolic burden of recombinant production pathways; utilizing complex input streams only further complicates allocating cellular resources appropriately for biosynthesis. This thesis aims to develop transcriptional control strategies that sense and respond to changing feedstock conditions for biosynthesis and demonstrate the ability to produce a product of interest from mixed substrate feeds.&#13;
&#13;
We constructed a galacturonate biosensor with the galacturonate-responsive transcription factor, ExuR, from Bacillus subtilis and determined the best performer from a selection of biosensor variants. After establishing no interactions with the native regulatory system of the Escherichia coli host, we applied the biosensor to control expression of a biosynthetic pathway. We confirmed that the biosensor activated transcription in the presence of galacturonate, eliminating the need for a chemically-inducible control system.&#13;
&#13;
A second, gluconate-responsive biosensor was constructed with GntR, from B. subtilis. The two biosensors were shown to be orthogonal, and each was used to control the expression of a novel D-glycerate biosynthetic pathway from its cognate substrate. We demonstrated D-glycerate production from single and mixed substrate feeds and showed that mixed substrates in different ratios resulted in fairly consistent titers.&#13;
&#13;
Finally, the pairwise orthogonality of various AHL-based QS systems was characterized to establish multiple controllers for autonomous, cell density-activated expression. We determined that the las and tra systems demonstrated minimal crosstalk, which agreed with literature findings.&#13;
&#13;
This work demonstrates the engineering of expression controllers and a production strain for biosynthesis from mixed substrate feeds.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145014</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations of Iron–Nitrogen Bonding at Synthetic Iron–Sulfur Clusters</title>
<link>https://hdl.handle.net/1721.1/145013</link>
<description>Investigations of Iron–Nitrogen Bonding at Synthetic Iron–Sulfur Clusters
Sridharan, Arun
The coordination chemistry of nitrogenous ligands at high-spin iron is investigated through the synthesis and characterization of a terminal imido complex of a cuboidal tetra(μ3-sulfido)tetrairon (Fe4S4) cluster. Structural, spectroscopic, and computational studies establish a dynamic interplay between Fe–N, Fe–S, and Fe–Fe interactions, tuned by the Fe centers’ locally high-spin electron configurations and the pseudo-C3v symmetry of the imido-bound metal site. Reaction with 1,4-cyclohexadiene affords clean conversion to the corresponding [Fe4S4] anilido complex, which was characterized in two oxidation states. Oxidation and reduction of the [Fe4S4] terminal imido complex afford a four-membered redox series spanning the 1−/0/1+/2+ charge states. A combined spectroscopic and theoretical analysis of this redox series using variable-temperature 1H nuclear magnetic resonance spectroscopy reveals that oxidation events are primarily centered at the imido fragment, which takes on an iminyl ([NAr]•−) or triplet nitrene ([NAr]2•) configuration depending on the charge state of the molecule. Two-electron oxidation of [Fe4S4]+ anilido complexes results in formal aminyl radical reductive elimination, generating aniline, hydrazine, and/or azoarene as byproducts along with the corresponding [Fe4S4]2+ solvento adduct. Addition of 1,2-diarylhydrazine to the [Fe4S4]+ solvento adduct results in N–N binuclear oxidative addition to afford the [Fe4S4]2+ anilido complex, which upon further reaction with 1,2-diarylhydrazines catalyzes their disproportionation to aniline and azoarene. The development of new synthetic platforms for investigating the reaction chemistry of iron–sulfur clusters is described: [MFe3S4] cluster cubanes are prepared, varying the identity of the heterometal (Mo or V) and the steric profile of its supporting ligand, for application in studies of dinitrogen binding and functionalization.
The discovery of a new tridentate ligand topology for cuboidal [Fe4S4] clusters incorporating 1,1′-biphenyl-3,3′-diyl linkers is reported, along with strategies for metalating its tris(phosphine) derivative.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145013</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for handling nonsymmetric cones in interior point algorithms</title>
<link>https://hdl.handle.net/1721.1/145003</link>
<description>Techniques for handling nonsymmetric cones in interior point algorithms
Kapelevich, Lea
Conic programs that seek to minimize a linear function over an intersection of symmetric (self-dual and homogeneous) cones are amenable to highly efficient primal-dual interior point methods, which are implemented by many popular off-the-shelf conic solvers.  On the other hand, many useful conic sets cannot be modeled exactly or can be modeled more efficiently using cones that are not symmetric. Algorithms for nonsymmetric cones have been implemented in significantly fewer solvers. Practically efficient, self-concordant barrier functions have not been previously suggested for many useful nonsymmetric cones. For the nonsymmetric cones with known barriers, there is little published work on how to implement numerically stable and computationally fast barrier oracles for interior point methods.&#13;
&#13;
We begin this thesis by describing the interior point algorithm we implement in the solver Hypatia for exotic cones. The exotic cones are a broad class of cones (including symmetric and nonsymmetric cones) that admit a small set of oracles needed by Hypatia's algorithm. We justify a number of practical algorithmic enhancements from an empirical standpoint. We derive new logarithmically-homogeneous, self-concordant barrier functions for several useful nonsymmetric cones. In Chapter 3, these are barriers for cones derived from spectral functions on Euclidean Jordan algebras while in Chapter 5, these are barriers related to sum-of-squares polynomial cones. We show that using these cones with our new barriers is computationally favorable in comparison to alternative formulations. We show how to evaluate the oracles needed by Hypatia's algorithm for these barriers and others in a computationally efficient and numerically stable fashion throughout Chapters 3-5.&#13;
&#13;
In the final two chapters, we derive efficient techniques for calculating information related to convex conjugates of the barriers for seven nonsymmetric cones. This information is not used by Hypatia, but is necessary for the alternative algorithm by Dahl and Andersen (2021) that is implemented by the solver MOSEK. We implement the stepping procedure described by Dahl and Andersen (2021) in Hypatia and make some empirical comparisons between MOSEK's algorithm and Hypatia's.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145003</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Urban Building Energy Modeling to Develop Carbon Reduction Pathways for Cities</title>
<link>https://hdl.handle.net/1721.1/145002</link>
<description>Using Urban Building Energy Modeling to Develop Carbon Reduction Pathways for Cities
Ang, Yu Qian
Cities have been the nexus of economic activity and growth, but they have an insatiable appetite for energy. In response to the challenges and potential impact of climate change, cities and municipalities around the world are developing climate action plans to reduce carbon emissions and enhance the resilience of their built environments. However, policymakers require a data-driven method to identify the most impactful, economical, and feasible strategies – and further translate these to actionable policy levers. This research serves to democratize and facilitate the wider use of urban building energy models in cities and municipalities.&#13;
&#13;
First, key applications and use cases of urban building energy modeling (UBEM) are identified, and a minimum viable UBEM is introduced for each use case. This framework streamlines computational requirements, data, and calibration needs, promoting more rapid development and utilization of UBEMs. Second, a web-based framework to rapidly generate UBEMs for carbon reduction technology pathways is developed, subsequently piloted in the City of Evanston, and found to significantly reduce the time and resources needed for developing and utilizing UBEMs. The approach was further validated in collaboration with policymakers and researchers in eight cities – viz. Braga (Portugal), Cairo (Egypt), Dublin (Ireland), Florianopolis (Brazil), Kiel (Germany), Middlebury, VT (USA), Montreal (Canada), and Singapore. Finally, conventional UBEMs typically only incorporate building properties and characteristics. This dissertation also presents an exploratory approach – using supervised and unsupervised data science and machine learning methods – to integrate building properties with socio-economic census data for better inference and understanding of energy use in cities.&#13;
&#13;
Each approach is documented with the relevant results compared against conventional modeling workflows and/or validated through real-world urban case studies. The major contribution is the development and validation of methods and frameworks that can rapidly and automatically generate UBEMs to help cities and municipalities develop impactful carbon reduction pathways.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145002</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Objective Optimization for Public Policy</title>
<link>https://hdl.handle.net/1721.1/144994</link>
<description>Multi-Objective Optimization for Public Policy
Papalexopoulos, Theodore P.
Operations research has a storied history of tackling complex problems in public policy, ranging from vaccine distribution to the efficient design of public utility markets. The advent of "big data" analytics, machine learning, and scalable optimization has only expanded the field's impact, unlocking new research directions and application areas. What makes public policy a challenging domain is a combination of three factors: (i) policymakers must balance multiple objectives that often exist in tension, e.g., tradeoffs in efficiency and fairness; (ii) there are many stakeholders, with often disparate value judgments on how to best balance said objectives; and (iii) those stakeholders may not be technically fluent in analytics.&#13;
&#13;
This thesis develops multi-objective optimization methodologies to support policymakers in designing more efficient, fair, and inclusive policies. We apply our techniques to a range of problems in transplantation policy and public education. A core theme of our work is the need for interpretable decision-support tools, e.g., interactive applications and tradeoff curves, which are crucial in translating abstract policy tradeoffs into actionable insights. Our goal is to provide stakeholders, even those without technical expertise, with an understanding of the range of achievable policy outcomes, so that they can more effectively engage in the policymaking process. We emphasize applications of our work to real-world problems, including an extensive collaboration with the United Network for Organ Sharing (UNOS) to help develop a new national lung allocation policy, which is slated for implementation in 2023.&#13;
&#13;
Chapter 2 addresses a long-standing debate about geographic equity in organ allocation, by using multi-objective optimization to compare efficiency/fairness tradeoffs under different geographic distribution schemes. Chapter 3 introduces a novel optimization-based framework for "ethics-by-design" in scarce resource allocation, aiming to combine data modeling, stakeholder input, and ethical theory into a unified approach for policy development in this area. Chapter 4 details our collaboration with UNOS policymakers to apply this framework towards the design of a new national lung allocation policy. Finally, Chapter 5 presents an empirical analysis of school assignment mechanisms for public school districts, investigating tradeoffs between satisfying student preferences and minimizing bus transportation costs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144994</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Airflow in Interior Spaces: Implications on Comfort and Health</title>
<link>https://hdl.handle.net/1721.1/144992</link>
<description>Airflow in Interior Spaces: Implications on Comfort and Health
Kongoletos, Johnathan J.
The International Energy Agency projects that rising income and greater access to air conditioning equipment in many developing countries will increase CO₂-equivalent emissions, energy consumption, and urban heat island effects. India exhibits these traits, where new building trends, hot climatic conditions, increasing social aspirations, and rapid population growth are likely to spread the adoption of air conditioning. Within India, while air-conditioners are attainable, the financial costs of acquisition and operation preclude their use by the population’s most vulnerable. To reduce the need for air conditioning and increase available building options, low-cost and socially-acceptable options are necessary to reduce productivity losses and excess mortality.&#13;
&#13;
This work presents the results of long-term temperature monitoring within four occupied homes, builds a model for understanding the influence of material choice, and evaluates that model on the basis of a reduction in peak indoor air temperature and energy savings as compared to an equivalent air conditioner. Results from the occupied homes show a peak reduction in inside air temperature of 8.2 °C during the summer months relative to informal housing in the same community. Further, using scale models and input from the Ramdev Nagar community in Bhuj, the impacts of operational airflow changes are quantified with a focus on next-day thermal comfort. Applicable outside of India, the techniques can be used concurrently with active cooling systems to reduce energy consumption or extend capacity. Targeting near-term implementation in India, this work focuses on tangible improvements spanning construction and operation.&#13;
&#13;
Shifting towards offices using chilled beams, this work presents data on the impact of ceiling fans on chilled beam performance, both in steady-state and in transient situations, to address discomfort in conference room settings where rapid changes in cooling performance are required or capacities are exceeded. This work extends that theme to propose improved thermostat placement, targeting the reliability of thermostat readings as a proxy for thermal comfort.&#13;
&#13;
Finally, this work examines the dispersion of bioaerosols within a classroom environment. Via simulations, it quantifies the performance of different ventilation approaches and offers generalizable recommendations for contaminant control at the breathing zone.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144992</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics in Perovskite CsPbBr₃ Semiconductor Nanocrystals</title>
<link>https://hdl.handle.net/1721.1/144988</link>
<description>Exciton Dynamics in Perovskite CsPbBr₃ Semiconductor Nanocrystals
Shcherbakov-Wu, Wenbi
Semiconductor nanocrystals, or quantum dots, have attracted much attention over the past three decades due to their potential applications in optoelectronic devices, such as displays, sensors, photovoltaics, and as single-photon sources. This class of materials differs from traditional bulk semiconductors in the tunability of their optical and electronic properties, which are determined by their size as well as the material composition. In the past decade, a new class of semiconductors, perovskites, has shown extraordinary photophysical properties; their low-dimensional counterparts, all-inorganic perovskite nanocrystals (CsPbBr₃ NCs), have since shown promising performance in optoelectronic devices. However, their fundamental photophysical properties, as well as energy transport properties within the NC solids, are still under active debate.&#13;
&#13;
Here, I measure both the temperature-dependent (35 K - 300 K) absorption and photoluminescence (PL) spectra of zwitterionic ligand-capped CsPbBr₃ NCs with four different edge lengths (L = 4.9 - 13.2 nm). The excitonic transitions observed in the absorption spectra can be explained with an effective mass model considering the quasicubic NC shape and non-parabolicity of the electronic bands. We observe a temperature-dependent Stokes shift; while the trend is similar to the Stokes shift observed in both MAPbBr₃ and CsPbBr₃ single crystals, it does not approach zero at cryogenic temperatures, pointing to an additional contribution intrinsically present in the NCs. Surprisingly, the effective dielectric constant determined from the best fit model parameters is independent of temperature.&#13;
&#13;
Next, I apply time-resolved photoluminescence microscopy (TPLM) to visualize exciton transport in CsPbBr₃ NC films with various surface treatments. We show that, in all samples, exciton diffusivity exhibits a striking excitation power dependence. Through fluence-, repetition-rate-, and temperature-dependent measurements, we demonstrate that this behavior does not arise from exciton-exciton annihilation or sample heating effects. Further investigation reveals that upon photoexcitation, CsPbBr₃ NCs transform into a transient configuration wherein exciton transport becomes faster. This reconfiguration memory persists for microseconds, long after electronic relaxation.&#13;
&#13;
Finally, I present preliminary TPLM measurements on CsPbBr₃ NC superlattices at cryogenic temperatures. We observe interesting power-dependent and temperature-dependent trends. While the exact transport mechanism in this regime is still unclear, the findings point to new transport phenomena that require further investigation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144988</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microscopic characterization of macroscopic colloidal gel rheology</title>
<link>https://hdl.handle.net/1721.1/144982</link>
<description>Microscopic characterization of macroscopic colloidal gel rheology
Cho, Jae Hyung
When attracted to one another, colloids, particles ranging in size from a few nanometers to a few microns suspended in a liquid, form a colloidal gel. The squishiness of a colloidal gel stems from its elastic space-spanning network of aggregated particles in a viscous liquid, which allows the gel to resist deformations like a solid under low stress, but to flow like a liquid under high stress. Owing to such mechanical versatility, colloidal gels are found in every corner of our lives as personal care products, dairy products, pharmaceuticals, and construction materials. Colloidal gels composed of functionalized particles are utilized for novel energy storage devices and biomedical applications. Engineering the mechanical behaviors of colloidal gels, however, remains a challenge due to our limited understanding of the link between microscopic particle interactions and macroscopic rheological properties. The state of thermodynamic nonequilibrium and the structural disorder of the network due to kinetic arrest of the attractive particles call for comprehensive investigation of colloidal gel rheology. In this thesis, we develop a better physical understanding of key rheological characteristics of a model colloidal gel via optical microscopy and rheometry. We employ differential dynamic microscopy to quantify the thermal fluctuations of the gel network across multiple length and timescales and rotational rheometry to characterize macroscopic strain and stress responses under shear. Use of the two complementary techniques enables us to show how the elasticity, the viscoelasticity, and the viscoplasticity of the gel on macroscopic scales arise from the microscopic structure and dynamics of the gel network while addressing different stages of the system from its gelation to yielding or fluidization. Our findings suggest ways to systematically control the deformation and the flow of colloidal gels by tuning particle interactions or by adjusting external loadings.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144982</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonsmooth Methods for Process Integration</title>
<link>https://hdl.handle.net/1721.1/144971</link>
<description>Nonsmooth Methods for Process Integration
Nielsen, Caroline J.
Process integration is a promising method to improve sustainability and reduce waste in chemical processes by recovering excess resources such as heat, water, or other materials. However, calculating the maximum amount of resource that can be reused is challenging because resource sinks can only take in a resource if it is of sufficiently high quality. As a result, most current integration methods are either limited and heuristic or use large superstructure formulations that must assess all possible matches between the resource sources and sinks.&#13;
&#13;
Therefore, this thesis presents new computational methods for maximizing resource recovery that use nonsmooth functions to compactly describe the resource that is available at different qualities. This work can be divided into three main contributions that improve process integration for systems with different resources and assumptions:&#13;
&#13;
1. A generalized approach to process integration that uses a system of two nonsmooth equations to describe optimal reuse for a wide variety of resources, including multiple resources simultaneously,&#13;
&#13;
2. An extension of this general approach to more complex mass and water systems with multiple contaminants that can limit their reuse,&#13;
&#13;
3. A nonsmooth optimization formulation that applies our integration approach to design variable-temperature cogeneration systems that convert process waste heat into electricity.&#13;
&#13;
By utilizing nonsmooth equations, each of these contributions exhibits improved scaling compared to other integration methods, with a number of equations or constraints that remains the same regardless of the size and complexity of the system. In addition, unlike other methods, our approaches have the flexibility to either determine resource requirements or the process variables that achieve a given target.&#13;
&#13;
This thesis describes the formulation and implementation of each of these nonsmooth approaches and applies them to a wide range of example applications. These applications include carbon-constrained energy planning, hydrogen conservation networks, water recovery from petroleum refining with multiple contaminants, and the design of improved cogeneration systems for sulfuric acid and cement production processes. The results from these examples show the flexibility and scalability of our approaches and the breadth of improvements they can provide. Together, our contributions increase the applicability of computationally efficient process integration methods to improve the sustainability of a wide range of chemical processes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144971</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental design and analysis for high-parameter spatial omics</title>
<link>https://hdl.handle.net/1721.1/144970</link>
<description>Experimental design and analysis for high-parameter spatial omics
Baker, Ethan Alexander García
Recent decades have witnessed the dawn of an era of molecular biology as a data science. Technological advancements have enabled high-parameter molecular measurements (“omics”) to progress from bulk data, averaged across entire tissues or organisms, to single-cell measurements, which enable exploration of the vast diversity between individual cells, to a new frontier, spatial omics, which enables rich molecular measurements of single cells in their native tissue context. Spatial omics methods complement the rich history of histopathology to unlock new avenues to explore the spatial components of tissue biology. However, high-parameter spatial omics data present unique statistical and computational challenges, and substantial work is required to ensure that findings of spatial omics experiments are actionable in the laboratory and clinic. Here, I examine the underlying statistical properties of spatial biology to propose a general framework for experimental design in spatial omics, introduce an approximate generative model of tissue structure, and demonstrate methods for semantic segmentation and community detection in spatial omics. Finally, I share an outlook on how, when considered jointly, these contributions represent first steps towards optimal experimental design for spatial omics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144970</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Affinity Threshold for Maximum Efficacy in Anti-PD-1 Cancer Immunotherapy</title>
<link>https://hdl.handle.net/1721.1/144968</link>
<description>An Affinity Threshold for Maximum Efficacy in Anti-PD-1 Cancer Immunotherapy
Cowles, Sarah Clare
Monoclonal antibodies (mAbs) targeted to the programmed cell death protein 1 (PD-1) remain the most prevalent cancer immunotherapy, both as a monotherapy and in combination with additional therapies. Despite the extensive success of anti-PD-1 mAbs in the clinic, the experimental relationship between binding affinity and functional potency for anti-PD-1 antibodies in vivo has not been reported. Two widely used FDA-approved anti-PD-1 antibodies, nivolumab and pembrolizumab, have similar single-digit nanomolar equilibrium binding constants (KD). Anti-PD-1 antibodies with higher and lower affinity than nivolumab or pembrolizumab are entering the clinic and show varied pre-clinical efficacy. Here, we explore the role of broad-ranging affinity variation on efficacy within a single lineage in a syngeneic immunocompetent mouse model.&#13;
&#13;
Using yeast surface display and affinity maturation, we engineered a panel of murine anti-PD-1 antibodies with varying affinity (ranging from KD = 20 pM - 15 nM). A combination of equilibrium and kinetic sorting strategies was used to both improve and reduce the affinity of a parental anti-PD-1 clone, 29F.1A12. We characterized the affinity of the antibodies using a number of methods, including ELISA, off-rate bead assays, and bio-layer interferometry, to confirm the wide range of affinity. We also characterized the internalization rate and pharmacokinetic clearance rate of our antibodies to confirm a consistent drug profile aside from mPD-1 affinity. Using these experimentally-determined rates and literature values, we developed a physiologically-based pharmacokinetic (PBPK) model to complement the in vivo results and highlight the direct relationship between dose, affinity, and PD-1 target saturation in the tumor.&#13;
&#13;
Through in vivo efficacy studies using the MC38 murine adenocarcinoma model, we found that there is an affinity threshold required for maximum efficacy at a given dose of anti-PD-1 immunotherapy. We demonstrate that efficacy can be rescued by increasing the dose of a low-affinity clone. Conversely, we show that for a given affinity, there is a dose threshold required to achieve maximum efficacy. Ultimately, we conclude that the anti-PD-1 affinity/dose relationship supports a clear receptor-occupancy mechanism of action.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144968</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gallium Nitride Remote Epitaxy</title>
<link>https://hdl.handle.net/1721.1/144966</link>
<description>Gallium Nitride Remote Epitaxy
Qiao, Kuan
Silicon-based electronic devices and integrated circuits have been the backbone of most modern technology, but the demand for smaller and faster electronics has pushed the material toward its physical limitations. Compound semiconductor materials are growing in importance because of their ability to offer superior performance across a wide range of applications in optoelectronics and power electronics. Unfortunately, the adoption of these new materials has been hindered by the lack of cost-effective epitaxial substrates and the difficulties in integrating with existing processes. Additionally, the rigid wafers pose major challenges to future flexible electronics and the heterogeneous integration of dissimilar materials.&#13;
&#13;
To overcome these limitations, a new layer transfer technology based on remote epitaxy is developed. Remote epitaxy takes advantage of the atomic thickness of the two-dimensional (2D) material so that the wafer covered by 2D material can still be the substrate for epitaxial growth. At the same time, the weak van der Waals interaction between the 2D material and the epitaxial layer allows the epitaxial layer to be mechanically exfoliated precisely at the 2D material interface. In this work, the mechanism of remote epitaxy is systematically investigated. This work demonstrates that the strength of remote interaction between the substrate and epitaxial layer through 2D material is determined by the ionicity of the bulk materials and the thickness of the 2D interlayer. The 2D interlayer is transparent to such remote interaction unless it possesses periodic polar bonds.&#13;
&#13;
The remote epitaxy of gallium nitride (GaN) is studied in depth. A freestanding GaN thin film with a threading dislocation density of 2.1×10⁷ cm⁻² and an electron mobility of 254 cm²/(V·s) is demonstrated. The thin film is then transferred onto a host substrate, and the original substrate can be reused without refurbishment. The process demonstrated in this work can significantly reduce the production cost of compound semiconductor devices by reusing the expensive wafers. The free-standing crystalline thin films obtained by remote epitaxy can also be monolithically integrated to accommodate existing processes or create novel heterogeneous structures.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144966</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Spatial Economics</title>
<link>https://hdl.handle.net/1721.1/144957</link>
<description>Essays in Spatial Economics
Levy, Antoine Boris
This thesis is composed of three essays, showing how nationwide economic causes exert distinct local and aggregate effects across regions, depending on the geographic distribution of exposure to these common shocks, and on spatial interactions between locations.&#13;
&#13;
The first chapter, building upon administrative data covering the universe of dwellings in France, documents the presence of home bias in investment (a negative effect of distance on individual investors' lumpy portfolio allocation decisions). I explore its consequences for the equilibrium supply of housing in a spatial equilibrium framework combined with a frictional portfolio choice. Using quasi-experimental evidence from a location-specific French investment tax credit targeted at individual landlords, I document a substantial causal impact on transactions, new construction, investor returns, and inward migration. Long-distance individual investor involvement rises in treated cities, and the policy has stronger effects in locations more open to outside capital.&#13;
&#13;
The second chapter, in collaboration with Jacob Moscona, studies how exogenous differences in local population density lead regions to specialize in different kinds of manufacturing industries. We show theoretically and empirically that a country's economic geography -- in particular, the distribution of population across space -- is an important source of comparative advantage, as countries with higher population-weighted population density specialize in sectors that benefit from agglomeration. After estimating substantial variation within the US in the extent to which manufacturing sectors sort into dense locations, we find that countries with higher population-weighted density disproportionately export in sectors with high "density affinity".&#13;
&#13;
The third chapter explores electoral behavior under regionally differentiated exposure to common campaign pledges. Using quasi-random spatial variation across municipalities, and an instrumental variables strategy exploiting formulaic real estate assessments established in the 1970s, I show that a promise to repeal a broad-based housing tax accounted for a substantial share of Emmanuel Macron's electoral success in the 2017 French presidential election. In high-frequency data, the timing of the promise coincided with a significant increase in voter information search, in Macron's polling intentions, and in his market-based predicted chances of victory. The results highlight the crucial role of spatial distributive policies, even in elections marked by ideological polarization around non-economic issues.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144957</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Healthcare Delivery innovation: The role of Information and Communication Technology</title>
<link>https://hdl.handle.net/1721.1/144956</link>
<description>Essays on Healthcare Delivery innovation: The role of Information and Communication Technology
Bronsoler Nurko, Ari
With a primary interest in how innovative approaches can improve the quality of healthcare delivery, this dissertation focuses on the crucial role that widely available technologies and improved organizational schemes can have on health outcomes. In the first paper, I analyze the effect of introducing common group chat apps on heart attack treatment coordination across hospitals. I document a large effect among hospitals that have a higher survival gap relative to the specialized centers they send patients to: survival rates increase by 29% (12 percentage points) and transfers by 85% (5 percentage points). In the second paper, in collaboration with Jonathan Gruber and Enrique Seira, we implement a novel deniers randomization evaluation of a private one-stop-shop model of care for one of the world’s deadliest health problems, diabetes. We estimate enormous impacts of the private supplement, which increases the share of those treated who are under control by 69%. The returns to private care do not appear to reflect more productive delivery but rather greater attachment to medical care. Finally, the third paper presents a review of the medical and economic literature on the adoption of health information and communication technology (HICT). We find that HICT improves clinical outcomes and lowers healthcare costs, but (i) the effects are modest so far, (ii) it takes time for these effects to materialize, and (iii) there is much variation in the impact. Our own analysis of a novel labor-market-level dataset finds an increase in employment after HICT adoption, with managers benefiting more, and no negative effects on income.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144956</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Labor and Finance</title>
<link>https://hdl.handle.net/1721.1/144949</link>
<description>Essays in Labor and Finance
Seegmiller, Bryan
In chapter 1, I quantify the economic value that firms of different productivity levels derive from their labor market power by estimating the effect of unanticipated firm-level labor demand shocks on wages and employment at publicly listed U.S. firms. Productive firms face lower labor supply elasticities on average, and still lower elasticities for skilled workers, who are disproportionately employed at more productive firms. Using a dynamic wage posting model in which firms face upward-sloping labor supply and adjustment costs in hiring, I estimate that firms in the top and bottom quartiles of labor productivity pay 62% and 94% of marginal product, despite the fact that adjustment costs temper the exercise of labor market power. Markdown differentials can explain three-fifths of the average spread in log labor shares between high- and low-labor productivity firms, and the evolution of these differentials can explain most of the change in the aggregate labor share in the 1991–2014 period. Holding constant equilibrium labor demand, I estimate that about a third of capital income for the typical firm stems from wage markdowns. Aggregate wage markdowns are worth two-fifths of total capital income.&#13;
&#13;
In chapter 2, joint work with Leonid Kogan, Dimitris Papanikolaou, and Larry Schmidt, we construct new technology indicators using textual analysis of patent documents and occupation task descriptions that span almost two centuries (1850–2010). At the industry level, improvements in technology are associated with higher labor productivity but a decline in the labor share. Exploiting variation in the extent to which certain technologies are related to specific occupations, and using a combination of public-use and confidential administrative data, we show that technological innovation has been largely associated with worse labor market outcomes—wages and employment—for incumbent workers in related occupations. Panel data on individual worker earnings reveal that less educated, older, and more highly paid workers experience significantly greater declines in average earnings and earnings risk following related technological advances. We reconcile these facts with the standard view of technology-skill complementarity using a model that allows for skill displacement.&#13;
&#13;
In chapter 3, I show that stocks with similar characteristics but different levels of ownership by financial institutions have returns and risk premia that comove very differently with shocks to the risk-bearing capacity of financial intermediaries. After accounting for observable stock characteristics, excess returns on more intermediated stocks have higher betas on contemporaneous shocks to intermediary willingness to take risk and are more predictable by state variables that proxy for intermediary health. The empirical evidence supports the predictions of asset pricing models featuring financial intermediaries as marginal investors who face frictions that induce changes in their risk-bearing capacity. This suggests that such models are useful for explaining price movements not only in markets for complex financial assets, but also within asset classes where households face comparatively low barriers to direct participation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144949</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interior point and outer approximation methods for conic optimization</title>
<link>https://hdl.handle.net/1721.1/144941</link>
<description>Interior point and outer approximation methods for conic optimization
Coey, Christopher Daniel Lang
Any convex optimization problem may be represented as a conic problem that minimizes a linear function over the intersection of an affine subspace with a convex cone. An advantage of representing convex problems in conic form is that, under certain regularity conditions, a conic problem has a simple and easily checkable certificate of optimality, primal infeasibility, or dual infeasibility. As a natural generalization of linear programming duality, conic duality allows us to design powerful algorithms for continuous and mixed-integer convex optimization.&#13;
&#13;
The main goal of this thesis is to improve the generality and practical performance of (i) interior point methods for continuous conic problems and (ii) outer approximation methods for mixed-integer conic problems. We implement our algorithms in extensible open source solvers accessible through the convenient modeling language JuMP. From around 50 applied examples, we formulate continuous and mixed-integer problems over two dozen different convex cone types, many of which are new. Our extensive computational experiments with these examples explore which algorithmic features and what types of equivalent conic formulations lead to the best performance.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144941</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Public Finance and Environmental Policy</title>
<link>https://hdl.handle.net/1721.1/144939</link>
<description>Essays in Public Finance and Environmental Policy
Schwarz, Patrick
This thesis consists of three chapters in environmental economics and public finance. The first chapter tests for adverse selection on private information in the market for flood insurance. Using detailed flood insurance policy microdata and newly available estimates of flood risk, I develop a measure of excess flood risk not used by the insurer to price insurance policies. I then use this measure as the basis for an unused observables test described in Finkelstein and Poterba (2014) and find that the market in certain areas that FEMA categorizes as low-risk, which together make up 40% of all flood insurance policies, is adversely selected. My results hold even after conditioning on a rich set of demographic and housing characteristics and measures of flood risk information and salience, suggesting that at least part of the selection is directly risk-based.&#13;
&#13;
The second chapter studies the distributional properties of federal disaster aid and how barriers to take-up vary across the income distribution. Using applicant-level microdata from FEMA’s primary emergency relief program from 2002 to 2019, I find that lower-income households receive more aid in expectation, both conditional on applying and conditional on receiving aid. This is consistent with the idea that lower-income households experience relatively greater uninsured necessary expenses following disasters. Despite this, I also find suggestive evidence that the non-monetary costs of applying for aid are highest among lower-income households, and estimate that program take-up sharply declines among the poorest households. Access to disaster recovery field offices, which provide assistance with filing disaster aid applications, significantly increases take-up among lower-income households but not higher-income households, implying that they increase the targeting efficiency of aid along the income distribution.&#13;
&#13;
The third chapter, written with Faraz Hayat, investigates whether firms’ input decisions are sensitive to the value-added tax (VAT) they pay on purchases. The study uses rich VAT return data from Pakistan combined with variation in VAT rates on electricity from a quasi-experiment. In contrast to the standard theory that intermediate VATs have no impact on production decisions, we find that electricity demand is responsive to VAT rates. Our results are not driven by changes in tax evasion in response to changing VAT rates. Rather, we provide evidence that frictions in the VAT refund system explain our results, as many of the firms we study collect no VAT on their output and therefore rely on the government to reimburse their VAT credits.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144939</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Copper(I) Hydride-Catalyzed Asymmetric Olefin Hydrofunctionalization Reactions</title>
<link>https://hdl.handle.net/1721.1/144930</link>
<description>Development of Copper(I) Hydride-Catalyzed Asymmetric Olefin Hydrofunctionalization Reactions
Feng, Sheng
The work described in this dissertation focuses on developing copper(I) hydride (CuH)-catalyzed enantioselective hydrofunctionalization reactions of olefins. The first chapter highlights a method for CuH-catalyzed asymmetric hydroamination of strained trisubstituted alkenes, including cyclobutenes and cyclopropenes. The second chapter presents an approach for accessing enantioenriched α-quaternary carboxylic acids through CuH-catalyzed hydrocarboxylation of allenes. The third chapter demonstrates the enantioselective hydrocarbamoylation of alkenes enabled by dual CuH and Pd catalysis.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144930</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision-making under uncertainty for electric power system operation and expansion planning</title>
<link>https://hdl.handle.net/1721.1/144926</link>
<description>Decision-making under uncertainty for electric power system operation and expansion planning
Barbar, Marc
Decision-making under uncertainty is required in a multiplicity of situations in power system operation and capacity expansion planning. This thesis investigates the drivers and impact of uncertainty on power system infrastructure planning and proposes several methods to design and operate a power system at a granular modeling level. The focus of the thesis is on Emerging Markets and Developing Economy countries, specifically India and Nigeria. However, the work presented in this document can be adapted to other situations, in the power sector or elsewhere, that share similar traits. Moreover, incorporating uncertainty in generalized optimization models often yields inaccurate results due to the lack of precision in representing the problem. This thesis carefully examines a set of situations and presents appropriate decision-making under uncertainty frameworks that yield meaningful results. The thesis is divided into three parts: drivers of uncertainty, the impact of uncertainty, and accounting for uncertainty in electricity resource design, with applications to Emerging Markets and Developing Economy countries.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144926</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-dimensional computational imaging from diffraction intensity using deep neural networks</title>
<link>https://hdl.handle.net/1721.1/144925</link>
<description>Multi-dimensional computational imaging from diffraction intensity using deep neural networks
Kang, Iksung
Diffraction of light can be found everywhere in nature, from sunlight rays fanning out from clouds to the multiple colors reflected from the surface of a CD. This phenomenon describes any change in the path of light due to an obstacle and is of particular significance because, properly exploited, it allows us to see transparent (or pure-phase) objects, e.g. biological cells under visible-wavelength light or integrated circuits under X-rays. However, cameras only measure the intensity of the diffracted light, which makes the camera measurements incomplete due to the loss of phase information. Thus, this thesis addresses the reconstruction of multi-dimensional phase information from diffraction intensities with a regularized inversion using deep neural networks for two- and three-dimensional applications. The inversion process begins with the definition of a forward physical model that relates a diffraction intensity to a phase object and then, where applicable, incorporates this physics as a prior into the deep neural networks. In this thesis, two-dimensional wavefront aberrations are retrieved for high-contrast imaging of exoplanets using a deep residual neural network, and transparent planar objects behind dynamic scattering media are revealed by a recurrent neural network, both in an end-to-end training fashion. Next, a multi-layered, three-dimensional glass phantom of integrated circuits is reconstructed under the limited-angle phase computed tomography geometry with visible-wavelength laser illumination using a dynamical machine learning framework. Furthermore, a deep neural network regularization is deployed for the reconstruction of real integrated circuits from far-field diffraction intensities under the ptychographic X-ray computed tomography geometry with partially coherent synchrotron X-ray illumination.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144925</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vector magnetometry using cavity-enhanced microwave readout of solid-state spin sensors</title>
<link>https://hdl.handle.net/1721.1/144924</link>
<description>Vector magnetometry using cavity-enhanced microwave readout of solid-state spin sensors
Eisenach, Erik Roger
Robust, high-fidelity readout is central to quantum device performance. Overcoming poor readout is therefore an increasingly urgent challenge for devices based on solid-state spin defects, particularly given their rapid adoption in quantum sensing, quantum information, and tests of fundamental physics. However, in spite of experimental progress in specific systems, solid-state spin sensors still lack a universal technique for high-fidelity readout. One leading research avenue is to engineer state-of-the-art microwave delivery systems that improve the coherent control of large spin ensembles as they are manipulated for readout. Another is to develop novel readout techniques that go beyond measuring optical fluorescence signals, which are often difficult to detect and unique to only some solid-state spin systems. In this thesis, I discuss these two approaches, beginning with the design of a three-dimensional microwave resonator that overcomes the many shortcomings of conventional microwave delivery systems, which limit the readout fidelity of devices employing large spin systems. Next, I demonstrate a novel readout technique that provides high-fidelity, room-temperature readout of an ensemble of nitrogen-vacancy centers via strong coupling to a dielectric microwave cavity. This strong collective interaction allows the spin ensemble’s microwave transition to be probed directly, thereby overcoming the optical photon shot noise limitations of conventional fluorescence readout. Applying this technique to magnetometry, I first build a proof-of-concept magnetometer capable of measuring magnetic fields along a single vector axis, with a sensitivity better than the optical shot noise limit of the system. I then expand on the initial demonstration by building a prototype capable of measuring three-dimensional dynamic vector fields with high sensitivity.
While the current device performance is limited by technical noise, the method promises what has long been elusive for quantum sensors based on solid-state spin ensembles: a clear path to readout at the spin-projection limit.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144924</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Patterns, Short Cycles and Long Shortest Paths in Graphs</title>
<link>https://hdl.handle.net/1721.1/144923</link>
<description>Finding Patterns, Short Cycles and Long Shortest Paths in Graphs
Dalirrooyfard, Mina
This thesis is about finding useful structures in a graph using fast algorithms, or showing that no such fast algorithms exist using popular fine-grained hypotheses from the field of Fine-Grained Complexity. These structures can be any small fixed-size pattern, or more specific larger structures such as the longest shortest path in a graph, the length of which is the graph's diameter. Finding these structures has many applications, from protein-protein interactions in biology to anomaly detection in networks.&#13;
&#13;
We start with the problem of finding fixed-size patterns in graphs as subgraphs, known as Graph Pattern Detection or Subgraph Isomorphism. Despite many efforts, there are no fast algorithms for detecting many patterns, so we focus on finding the sources of hardness in detecting different patterns. One of our results is that one such source is the appearance of cliques (complete graphs) in the pattern, which can make the pattern hard to detect.&#13;
&#13;
We then move to patterns that are not necessarily fixed-size but are either paths or cycles. The size of these patterns is often represented by popular parameters such as the diameter, radius, and girth of the graph.&#13;
&#13;
We focus on computing the diameter (longest shortest path) of a graph, and more specifically on approximating the diameter, since computing it exactly is known to be hard. There is a folklore 2-approximation algorithm for the diameter that works in linear time, and we show that this algorithm is optimal conditioned on the Strong Exponential Time Hypothesis (SETH). Our result shows that any better-than-2 approximation algorithm for the diameter requires superlinear time. Moreover, we give a series of time-accuracy trade-off lower bounds, completing a line of recent works.&#13;
&#13;
The next pattern we discuss is a cycle, and more specifically it is the shortest cycle of a graph, the length of which is known to be the girth. We give the first 2-approximation algorithm for computing the girth in directed graphs in subquadratic time, improving the previous best approximation factor (in subquadratic time) which was 3.&#13;
&#13;
Finally, we go beyond the standard versions of these distance problems, as many applications call for more specific notions. For example, we might only be interested in the longest shortest path among specific pairs of vertices (a variant of the diameter). Hence we consider two variants. First, we assume that we are given two subsets &#119878; and &#119879; of the vertex set of the graph, and we are asked to compute distance parameters such as the diameter and radius by only considering the pairs of nodes in &#119878; × &#119879;. These problems are called &#119878;&#119879;-distance problems, and when &#119878; and &#119879; are non-overlapping and cover the entire vertex set, they are called bichromatic distance problems. We give a comprehensive study of the approximation of &#119878;&#119879; and bichromatic distance parameters.&#13;
&#13;
Second, we consider a “symmetric” distance measure in directed graphs called min-distance. We give big improvements in approximating min-diameter and min-radius in general graphs and in directed acyclic graphs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144923</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Deeper Understanding of Neural Language Generation</title>
<link>https://hdl.handle.net/1721.1/144922</link>
<description>Towards a Deeper Understanding of Neural Language Generation
He, Tianxing
In recent years, the field of language modelling has witnessed exciting developments. In particular, thanks to large-scale data, powerful model architectures, and high-speed parallel computing devices, researchers are able to train language models that generate realistic text. However, our understanding of these powerful language models remains shallow. Which aspects of a language model are good, and which need to be improved? These are the key questions behind this thesis.&#13;
&#13;
This thesis comprises a set of behavior analyses of language models (LMs), with a focus on generation. We also propose methods to alleviate some of the identified problems. The four high-level topics are: (1) the general sampling behavior of an auto-regressive LM, with a closer look at the popular sampling algorithms; (2) whether the LM is vulnerable to adversarial attacks, and how to make it more robust; (3) the LM’s ability to remember knowledge learned from data and, relatedly, the best way to expose this learned knowledge; and (4) how to obtain more fine-grained control over the model’s generation.
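For topic (1), the popular sampling algorithms in question are typically truncation schemes such as top-k and nucleus (top-p) sampling; the abstract does not name them, so the following nucleus-sampling sketch is illustrative rather than the thesis's own formulation:

```python
import math
import random

def nucleus_sample(logits, p=0.9, rng=random):
    # Nucleus (top-p) sampling: keep the smallest set of highest-probability
    # tokens whose cumulative probability reaches p, renormalize, and sample.
    probs = [math.exp(x) for x in logits]
    total = sum(probs)
    probs = [q / total for q in probs]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass  # uniform draw over the kept probability mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if acc >= r:
            return i
    return kept[-1]
```

With p = 0.5 and a sharply peaked distribution, only the top token survives truncation, so sampling becomes greedy; larger p admits more of the tail.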
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144922</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vehicle autonomy under the Arctic ice: environmental adaptation through model-aided machine learning</title>
<link>https://hdl.handle.net/1721.1/144919</link>
<description>Vehicle autonomy under the Arctic ice: environmental adaptation through model-aided machine learning
Viquez R., Oscar Alberto
The use of autonomous vehicles has been growing across the globe, driven by their ability to meet the diverse needs of industry and scientific applications. Terrestrial and aerial uncrewed vehicles typically benefit from high-throughput communication systems which enable accurate positioning and operator input; Autonomous Underwater Vehicles (AUVs), however, generally require a higher degree of autonomy as they must rely on much more limited communication links and lack access to global navigation satellite systems (GNSS) while underway. This distinction becomes especially important in hazardous environments like the Arctic Ocean, where surface ice may impede an AUV from breaching to regain access to position and controller updates. Instead, underwater vehicles in ice-covered environments require a higher level of autonomous decision-making, and rely on a combination of self-contained sensors and acoustic positioning networks for navigation – but the latter generally rely on a deterministic conversion of acoustic travel times to ranges, failing to capture the natural variability of the acoustic environment.&#13;
&#13;
This dissertation demonstrates the application of physics-based machine learning techniques as an alternative to deterministic solutions for environmental adaptation in unmanned vehicle autonomy. This is achieved by gradually incrementing the complexity of the adaptation problem: first, the tasks of behavior identification and riverbed characterization are tackled with a classification approach; next, an embedded acoustics model is used in place of the conventional linear model for acoustic positioning, and a feature design approach is employed to improve the performance of this embedded range estimation; last, a pseudo-tomographic approach based on neural network techniques is proposed as a complement to compressive sensing, to enable exploratory environmental adaptation onboard AUVs.&#13;
&#13;
The improvements to acoustic positioning are validated against data collected in the Beaufort Sea in March of 2020, where the presence of the Beaufort Lens combines with the surface ice covering the Arctic Ocean to create an ideal setting in which to demonstrate the importance of environmental adaptation. These capabilities may impact monitoring efforts in the area, which has seen increased interest from fishing, trade and military operations, and is of significant importance to understanding climate conditions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144919</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infrastructural Landscapes: The Technopolitics of Watershed Planning in Asia</title>
<link>https://hdl.handle.net/1721.1/144915</link>
<description>Infrastructural Landscapes: The Technopolitics of Watershed Planning in Asia
Tang, Dorothy
This three-essay dissertation examines the technopolitics of watershed management in shaping infrastructure and landscapes in Asia. Technopolitics describes the co-production of power and technologies and how these processes manifest in the physical and social world. Using this conceptual framework, each essay analyzes a different watershed rationale, to understand how the politics of infrastructure restructure urban processes, development, and governance.&#13;
&#13;
Essay 1 analyzes China’s Sponge City movement and how stormwater management produces uneven development in Guangzhou and Shenzhen. Examining the role of green infrastructure technologies in mediating central mandates and local enforcement, I found that the technological basis of runoff standards at the national level shapes implementation strategies of local governments. While a systematic watershed approach is most ecologically effective, a fragmented parcelized approach is easier to quantify, faster to implement, and conducive to private financing. Thus, local officials turn to neoliberal urban development to meet the central government’s environmental goals instead of prioritizing ecological performance.&#13;
&#13;
Essay 2 studies how geopolitics and water security shaped colonial Hong Kong’s transboundary freshwater infrastructure. Through two major water emergencies in the 1960s, I illustrate how uncertainty over Hong Kong’s sovereignty during the Cold War produced competing freshwater infrastructure systems: reservoirs for water autonomy and aqueducts for integration with Mainland China. Moreover, the ecological impact of reservoir construction forced the British colonial government to temporarily include saline water in the freshwater supply, exacerbating a crisis of governance in 1967. This colonial legacy of freshwater provision continues to influence contemporary debates over self-sufficiency and integration in its environmental politics and land supply controversies.&#13;
&#13;
Essay 3 examines transnational infrastructure projects in the Mekong River Basin and the promise of regional infrastructure for economic development. I trace the history of transnational planning of the Mekong Project (1957-75) and the Greater Mekong Subregion Economic Corridors (1992-) to understand how planned infrastructure projects shape regional politics. The imaginary of the Mekong Region began with the functional region of a watershed and evolved into an economic region of linear corridors. Both plans remain largely unbuilt, but their political work endures in the regional dynamics and developmental ambitions of Mekong riparian nations.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144915</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robots Making Satellites: Advancing In-Space Manufacturing Through On-Orbit Robotic Assembly</title>
<link>https://hdl.handle.net/1721.1/144906</link>
<description>Robots Making Satellites: Advancing In-Space Manufacturing Through On-Orbit Robotic Assembly
Uzo-Okoro, Ezinne E.
On-orbit robotic assembly is a critical and necessary advancement for in-space manufacturing. On-orbit assembly missions typically involve humans-in-the-loop and use large custom-built robots to service existing modules. Introducing modularized small satellites (SmallSats) for use cases such as rapid reconstitution of satellite constellation nodes and inspection of damaged assets in Low Earth Orbit (LEO) can accelerate on-orbit robotic assembly capabilities. This thesis introduces and explores the potential of a novel approach: the on-orbit autonomous assembly of CubeSats. &#13;
&#13;
The case for small-part robotic snap assembly using CubeSats and low-cost robots and the economic feasibility of the concept are presented. The research gaps which currently limit the on-orbit assembly of CubeSats are (1) the lack of standardization of electromechanical CubeSat modules to be compatible with Commercial-Off-The-Shelf (COTS) robotic assembly hardware, and (2) the assessment and modification of hardware to enable autonomous assembly. Standardization of electromechanical CubeSat modules requires compatibility with low-cost end-effectors. End-effectors must accurately detect and grasp CubeSat components and assemble them using snap assembly attachment mechanisms. Addressing the research gaps, in this work, the robotic assembly of a 1U CubeSat using modular components and COTS robot arms is demonstrated through analyses, simulations, and prototype development. After upgrading the robot system, an XYZ-axis cartesian robot sized at 300 mm x 300 mm x 500 mm is trained and tested using CubeSat subsystem modules for use in a relevant space environment. &#13;
&#13;
The potential for decreasing the lead time for integration and assembly of CubeSats and improving cost savings via more efficient packing volumes and processes motivate the implementation of the proposed on-orbit work. A demonstration mission in which the cartesian robot and SmallSat components are enclosed in free-flying “spacecraft lockers” of approximately 24 inches x 36 inches x 12.5 inches is proposed. CubeSats could be assembled within and deployed from the proposed lockers. The lockers and the CubeSat snap-assembly modules could both have propulsion capability. It is expected that the proposed mission will demonstrate an unprecedented improvement in the build and deployment cycle of SmallSats by reducing the response time from the current minimum of 35 days to less than 10 hours.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144906</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Barium isotope cycling in the marine environment: Pathways of fractionation and implications for paleoceanographic applications</title>
<link>https://hdl.handle.net/1721.1/144900</link>
<description>Barium isotope cycling in the marine environment: Pathways of fractionation and implications for paleoceanographic applications
Middleton, Julien Thomas
Removal of particulate organic carbon (POC) from sunlit surface waters into the deep ocean represents a climatically important sink of atmospheric carbon dioxide (CO₂), linking the biogeochemical cycling of POC to CO₂-driven climate change. As POC is not well preserved in the sediment record, other proxies, including the chemistry of barium (Ba) in the ocean and through the sedimentary record, offer an avenue to investigate oceanic carbon export through Earth’s history. This thesis seeks to constrain the controls on the formation, cycling, and isotopic signature of the main particulate phase of marine barium, the mineral barite (BaSO₄), through its inception in the water column, during deposition, and ultimately into the rock record. To that end, I characterize the depth, spatial region, and general controls on particulate Ba formation in the South Pacific Ocean through shipboard experimentation and find that particulate Ba forms mainly in the surface of the Polar Frontal Zone in the presence of large particles and microbial activity. Next, I characterize the effect of ion exchange on BaSO₄, a process previously unstudied under marine conditions, in a laboratory setting. Ion exchange occurs rapidly between dissolved Ba and BaSO₄ and imparts a characteristic net offset between the Ba isotope compositions of the dissolved and solid phases, which arises through a combination of Ba isotope fractionation during both precipitation and dissolution. Finally, I investigate the role of ion exchange in marine settings using co-located pore fluids and sedimented BaSO₄. Modeling of natural samples produces results that are consistent with the laboratory study, suggesting that this mode of isotopic fractionation impacts Ba isotopes in the environment and must be accounted for when applying Ba-based climate proxies.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144900</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Principles of Substrate Recognition and Unfolding by the ClpAP and ClpXP AAA+ Proteases</title>
<link>https://hdl.handle.net/1721.1/144898</link>
<description>Structural Principles of Substrate Recognition and Unfolding by the ClpAP and ClpXP AAA+ Proteases
Kim, Sora
Protein degradation is a key regulatory mechanism that controls protein homeostasis in all cells. Found in all domains of life, proteases of the AAA+ (ATPases associated with diverse cellular activities) superfamily perform targeted protein degradation of specific substrates. All AAA+ proteases consist of a hexameric AAA+ unfoldase and a compartmentalized peptidase. AAA+ proteases harness the energy of ATP hydrolysis to mechanically unfold and translocate substrates through the axial channel into the peptidase chamber for degradation. Degradation by AAA+ proteases is carefully regulated by several mechanisms, including selective recognition of specific peptide sequences in the substrate (called degrons), substrate-tethering sequences (called enhancement- or e-tags), and adaptor proteins. &#13;
My thesis examines the structural basis of substrate recognition and degradation by the bacterial AAA+ proteases ClpAP and ClpXP, which are composed of either the double-ringed ClpA or the single-ring ClpX unfoldase in complex with the ClpP peptidase. Using covalently crosslinked ClpA–ClpP complexes, I interrogate the symmetry mismatch between the ClpA hexamer and the ClpP heptamer interface to establish that rotation of ClpA relative to ClpP is not required for proteolytic function, contrary to the prediction of structure-based models. Next, I present cryo-EM structures of ClpAP in complex with its adaptor ClpS and an N-degron substrate, revealing the degron-like binding of the ClpS NTE in the ClpA channel and an altered, ‘tucked’ pore-1-loop conformation. I also investigate the function of ClpA D1 pore 2 loops in adaptor-independent and ClpS-assisted degradation. Next, using engineered fusion proteins, I show that a Pro-Pro dipeptide (found in the ClpS junction) near a folded domain is sufficient for protection from ClpAP. Finally, I investigate the redox-sensitive mechanism of ClpXP degradation of the bacterial global transcription regulator FNR and demonstrate that the N-terminal e-tag is a ClpX-tethering motif and the C-terminal recognition signal is a pore-binding tag. Crystal structures of apo (oxidized) and [4Fe-4S]-bound holoFNR reveal how oxygen-induced conformational changes alter exposure of these two signals to selectively target the apo form for degradation. In summary, these studies identify distinct elements within enzymes, adaptors, and substrates that contribute specific functions during the multistep process of degradation by AAA+ proteases.&#13;
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144898</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-informed machine learning techniques for edge plasma turbulence modelling in computational theory and experiment</title>
<link>https://hdl.handle.net/1721.1/144897</link>
<description>Physics-informed machine learning techniques for edge plasma turbulence modelling in computational theory and experiment
Mathews, Abhilash
Edge plasma turbulence is critical to the performance and operation of magnetic confinement fusion devices. Drift-reduced Braginskii two-fluid theory has for decades been widely applied to model boundary plasmas with varying success. Towards better understanding edge turbulence in both theory and experiment, a custom-built physics-informed deep learning framework constrained by partial differential equations is developed to accurately learn turbulent fields consistent with the two-fluid theory from partial observations of electron pressure. This calculation is not otherwise possible using conventional equilibrium models. With this technique, the first direct quantitative comparisons of turbulent field fluctuations between electrostatic two-fluid theory and electromagnetic gyrokinetic modelling are demonstrated with good overall agreement found in magnetized helical plasmas at low normalized pressure.&#13;
&#13;
To translate these computational techniques to experimental fusion plasmas, comprehensive 2-dimensional diagnostics operating on turbulent time scales are necessary. For this purpose, a novel method to translate brightness measurements of HeI line radiation into local plasma fluctuations is demonstrated via a newly created deep learning framework that integrates neutral transport physics and collisional radiative theory for the $3^3 D - 2^3 P$ transition in atomic helium. Using fast camera data on the Alcator C-Mod tokamak, this thesis presents the first 2-dimensional time-dependent experimental measurements of the turbulent electron density, electron temperature, and neutral density in a fusion plasma using a single spectral line. With this experimentally inferred data, initial estimates of the 2-dimensional turbulent electric field consistent with drift-reduced Braginskii theory under the framework of an axisymmetric fusion plasma with purely toroidal field are calculated. The inclusion of atomic helium effects on particle and energy sources are found to strengthen correlations between the electric field and electron pressure while broadening turbulent field fluctuation amplitudes which impact E x B flows and shearing rates.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144897</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights from biomolecular condensates into disease and drug development</title>
<link>https://hdl.handle.net/1721.1/144894</link>
<description>Insights from biomolecular condensates into disease and drug development
Afeyan, Lena K.
The cell concentrates and compartmentalizes proteins and nucleic acids into diverse phase-separated biomolecular condensates. The study of condensates has yielded myriad fundamental insights into cell biology, ranging from a better understanding of cellular organization, to the exploration of novel mesoscale functions resulting from the emergent properties of these liquid-like compartments. A consideration of the role of condensates in disease and drug development could facilitate similar paradigm shifts, with early evidence suggesting that condensate dysregulation may be a feature of many diseases, and that therapeutics can modulate condensates. In the studies presented in this thesis, we developed and validated a strategy for nominating patient mutations across the spectrum of disease that may cause condensate dysregulation, providing nominated mutations as a resource to the biomedical community for the acceleration of the study of condensates in disease. Further, we tested the hypothesis that small molecule therapeutics can concentrate into condensates, showing that clinically important cancer therapeutics display differential partitioning and that condensate partitioning can affect therapeutic activity (Klein et al., 2020). Lastly, we have expanded upon this condensate partitioning work to show that antisense oligonucleotides (ASOs), nucleic acid-based therapeutics targeting RNA, partition into and modulate certain condensates, and that specific chemical modifications can alter this partitioning behavior. Ultimately, by considering the implications of a condensate model in disease and drug development, this thesis aims to leverage recent insights into biomolecular condensates to facilitate the development of novel disease mechanistic and therapeutic hypotheses.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144894</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Establishment of MITF and TAZ as major determinants of uveal melanoma</title>
<link>https://hdl.handle.net/1721.1/144892</link>
<description>Establishment of MITF and TAZ as major determinants of uveal melanoma
Phelps, Grace B.
Uveal melanoma (UM) is a cancer that arises from transformed melanocytes that exist in the uvea of the eye. Although UM is relatively rare, it is extremely deadly, due to the lack of treatment options for the metastatic disease. In contrast, cutaneous melanoma (CM), which is derived from transformed skin melanocytes, has multiple approved therapies, which have not worked in UM patients. Thus, there is great need to gain a deeper understanding of UM, and how it differs from CM, to inform effective therapeutic strategies. The vast majority of UM patients present with activating mutations in the GNAQ/11 pathway, which signals through PLCβ4-MAPK and YAP, whereas CM is primarily driven by activation of the MAPK pathway. It has been well established that CM tumors are dependent on MITF, the master melanocyte transcription factor. Here, in stark contrast, we establish that MITF serves as a tumor suppressor in an oncogenic GNAQ/11 UM zebrafish model. Moreover, we show that resulting MITF-deficient tumors are more de-differentiated and down-regulate PLCβ4-MAPK signaling, but retain active YAP. Furthermore, we establish that YAP, and not PLCβ4, is sufficient to drive MITF-wildtype tumorigenesis. Thus, our data de-emphasize the role of the PLCβ4-MAPK pathway in UM. We further show that YAP is surprisingly wholly dispensable for oncogenic GNAQ tumorigenesis in an autochthonous zebrafish model, but resulting tumors display active TAZ, a YAP paralog. Moreover, we establish that TAZ is an extremely potent oncogene in UM using autochthonous zebrafish models, even more so than YAP. Furthermore, we show that higher TAZ expression, but not YAP expression, significantly correlates with decreased UM patient survival. This suggests that the role of TAZ does not entirely mirror that of YAP in UM. Lastly, we provide molecular characterization of the MITF-deficient state, and develop a system in which to screen UM therapies in vivo. 
Overall, our work establishes a tumor suppressor role for MITF, de-emphasizes the role of the MAPK pathway, and further underscores the importance of YAP/TAZ signaling in UM.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144892</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcriptional regulators in stem cell biology</title>
<link>https://hdl.handle.net/1721.1/144889</link>
<description>Transcriptional regulators in stem cell biology
Wilson, Molly M.
Adult stem cells support tissue regeneration by both self-renewing and differentiating into specialized cell types. Analogous self-renewal and differentiation processes enable cancer stem cells to promote tumorigenesis. Transcriptional modulation, enacted by proteins including epigenetic regulators and transcription factors, determines cell fate. However, the functions of transcriptional regulators of stemness may differ in normal and transformed cells. In this thesis, I describe two transcriptional regulators of stemness and their deployment in adult stem cells and cancer. The first study focuses on the epigenetic regulator B-cell-specific Moloney murine leukemia virus integration site 1 (BMI1). BMI1 promotes stemness and tumorigenesis in many tissues through transcriptional repression of the Cdkn2a locus as well as other functions. Notably, BMI1 supports melanoma progression by activating epithelial-mesenchymal transition (EMT) programs, suggesting that its role in melanocytes may also be Cdkn2a-independent. We show that BMI1 is required for melanocyte stem cell (MeSC) maintenance and function. RNA sequencing of MeSCs established that BMI1 loss leads to derepression of Cdkn2a and developmental transcription factor genes. Additionally, we observe downregulation of glutathione S-transferases, suggesting that reactive oxygen species (ROS) may accumulate in melanocyte lineage cells in the absence of BMI1. Antioxidant treatment in mice partially rescued melanocytes, indicating that BMI1-dependent ROS homeostasis may support melanocyte expansion. These data demonstrate that BMI1 has unique functions in MeSC biology that contrast with its role in melanoma. In the second study, we identify the transcription factor GLI-similar 2 (GLIS2) as a regulator of mammary stem cell (MaSC) differentiation. In the mammary epithelium, EMT drives formation of primary cilia, which promote MaSC self-renewal. We show that primary cilia enable MaSC function in part through inhibiting GLIS2. 
Glis2 deletion in mice led to MaSC expansion and increased self-renewal. Conversely, GLIS2 hyperactivity in mammary cells suppressed stemness and tumorigenesis. Furthermore, human claudin-low breast cancer cells form primary cilia, and sequencing experiments demonstrated that GLIS2 is inactive in claudin-low breast tumors, indicating that GLIS2 regulation is relevant to normal and transformed mammary cells. Collectively, this work highlights roles for transcriptional regulators in self-renewing cells and elucidates how stem cell program activation induces context-dependent molecular consequences in normal and transformed tissues.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144889</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling of Boundary Transport and Divertor Target Heat Flux - Implications for Advanced Divertor Concepts</title>
<link>https://hdl.handle.net/1721.1/144888</link>
<description>Modeling of Boundary Transport and Divertor Target Heat Flux - Implications for Advanced Divertor Concepts
Ballinger, Sean Bozkurt
Tokamaks are currently being designed and built to achieve net positive unharnessed fusion energy, an important milestone on the path to electricity production. Experimental trends predict an additional challenge in these upcoming devices: a decrease in the area of the metal wall on which the plasma deposits significant heat flux, increasing the likelihood of melting damage. The heat deposition area is proportional to a parameter called the heat flux width, which decreases with increasing poloidal magnetic field and average plasma pressure. In devices designed to achieve physics breakeven such as ITER and SPARC, the heat flux width is predicted by some estimates to be less than 1 millimeter. It is therefore crucial to develop methods to more accurately predict the heat flux width and to mitigate large heat fluxes. Data from the Alcator C-Mod tokamak are particularly relevant in the effort to predict conditions in SPARC, as both are designed to use a higher magnetic field than other major tokamak experiments. Before this work, the relationship between the heat flux width and edge profiles of plasma density and temperature in C-Mod was unknown. Studies with plasma edge simulation codes were limited to a small number of discharges at a time, with many model settings being ad hoc and difficult to evaluate for general applicability. Simulations of C-Mod had a much shorter outer divertor leg compared to SPARC, making it difficult to use detachment studies in C-Mod to speculate on detachment in SPARC. Finally, there was only a rough idea of edge plasma conditions in SPARC, and it was not known whether detachment would even be feasible. This thesis uses data from Alcator C-Mod and simulations with the UEDGE code to investigate heat flux width scalings, detachment, and advanced divertor concepts to inform the design of next-generation tokamaks that can produce significant fusion energy while remaining safe against heat flux damage.&#13;
&#13;
This thesis begins by augmenting a C-Mod heat flux width database (containing ~300 discharges) with midplane density and temperature profile data. Detailed analysis finds that the outer target heat flux width depends on the edge plasma pressure, but fails to find a clear dependence on edge gradients. The scaling of the heat flux width with the edge pressure varies by confinement mode and is used to confirm predictions of the heat flux width of 0.2-0.4 mm in SPARC and 0.4-0.6 mm in ITER H-mode scenarios.&#13;
&#13;
The UEDGE code is then used to simulate the edge of Alcator C-Mod plasmas. 75 discharges from the heat flux width database are successfully modeled in UEDGE using a fully automated process that matches experimental midplane density and temperature profiles. The resulting heat flux width in UEDGE is then compared to experimental measurements, and it is found that the UEDGE and experimental values are correlated but that UEDGE overestimates the heat flux width by an average factor of 1.8. The UEDGE-modeled discharges are modified to include single-particle drift effects and (separately) to remove flux limits. These changes do not significantly improve the UEDGE heat flux width match to experiment but demonstrate the capability of this framework to evaluate which settings in the UEDGE model improve agreement with experiment over the large range of edge plasma conditions included in the C-Mod database.&#13;
&#13;
One particular C-Mod attached H-mode discharge is then simulated in UEDGE, and a good match is achieved to experimental data at the midplane and outer target simultaneously with full drift effects included in the model. This discharge is also simulated with a ~2x longer outer divertor leg, an important component of advanced divertor concepts that could enable better high heat flux handling. Detachment is found to occur when a nitrogen impurity is introduced at a fixed fraction of 3.5% of the main ion density in the real C-Mod geometry, while with the longer leg, detachment occurs at a significantly lower fraction of 2.4% nitrogen. This bodes well for the SPARC design, which features a long outer leg.&#13;
&#13;
Finally, a full-power SPARC H-mode scenario is directly simulated with UEDGE. It is found that detachment is possible at the high heat fluxes and small heat flux width predicted for SPARC and that the heat flux at the targets can remain significantly reduced with a carbon impurity fraction around 1%. This value is not a prediction of the detachment threshold in SPARC due to the use of bifurcated attached and detached solutions obtained at low power, but is encouraging when compared to the detachment thresholds in C-Mod UEDGE simulations. This study confirms that detachment is a promising solution to mitigate high heat fluxes in the SPARC full-power scenario.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144888</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noncanonical recognition and degradation of a stable soluble protein by AAA protease FtsH</title>
<link>https://hdl.handle.net/1721.1/144887</link>
<description>Noncanonical recognition and degradation of a stable soluble protein by AAA protease FtsH
Morehouse, Juhee P.
AAA+ (ATPases associated with various cellular activities) proteolytic machines help maintain and adjust the cellular proteome in response to stress or changes in nutrients. AAA+ proteases bind degradation targets and utilize ATP-powered conformational changes in the AAA+ unfoldase ring to denature and translocate the substrate polypeptide into an associated, sequestered protease chamber for degradation. Of the five AAA+ proteases in Escherichia coli, FtsH is unique in being genetically essential, in its localization to the membrane, and in its function in degrading both membrane and cytosolic proteins. Prior in vitro characterization suggested that FtsH only degrades meta-stable proteins despite its ability to extract protein substrates from the membrane for degradation. These results motivated me to reinvestigate the determinants in a substrate required for effective FtsH unfolding. In this thesis, I present experiments that first test the hypothesis that FtsH may unfold and degrade a more stable protein in vitro with a sufficiently long degradation tag (degron) and then explore noncanonical recognition as one mechanism that may be employed in FtsH-dependent degradation.&#13;
&#13;
In Chapter I, I review our current understanding of AAA+ protease structure and function, especially as it pertains to FtsH to provide background for the later chapters. In Chapter II, I test the hypothesis that a long degron may be required for FtsH to successfully bind and unfold E. coli dihydrofolate reductase (DHFR), a stable protein which was previously found to resist FtsH degradation. Strikingly, I find that detergent-solubilized FtsH can degrade DHFR in vitro with or without an appended degron. I then show that FtsH recognition of DHFR is noncanonical and not dependent on unstructured terminal degrons but suggest a model for how FtsH may unfold DHFR by engaging an internal site in a partially unfolded intermediate. In Chapter III, I test the hypothesis that FtsH may bind another stably folded soluble protein, cyclopropane fatty acid synthase (CFAS), at an internal site. In Chapter IV, I propose future directions that may further enrich our understanding of FtsH-DHFR degradation and experimental approaches that can be applied to assess the kinetics of assembly/disassembly of other enzyme-substrate complexes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144887</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Quantitative Understanding of Compartmentalized NADPH Metabolism in Cancer Cells</title>
<link>https://hdl.handle.net/1721.1/144886</link>
<description>Toward Quantitative Understanding of Compartmentalized NADPH Metabolism in Cancer Cells
Moon, Sun Jin
Reduced nicotinamide adenine dinucleotide phosphate (NADPH) is an essential molecule in living organisms by virtue of its function as an electron donor, driving reductive biosynthesis and protecting cells against oxidative stress. Over the past decades, extensive research has studied key NADPH regeneration pathways and investigated whether targeting these pathways constitutes an effective cancer therapy. However, fundamental questions remain unanswered: the pool sizes and dynamics of cytosolic and mitochondrial NADPH are unknown, as is how the two pools interact through varying metabolic processes.&#13;
&#13;
Assessment of compartmentalized NADPH redox states is important because cytosolic and mitochondrial NADPH levels are known to differ, as NADPH cannot permeate intracellular membranes, yet it is transported from one compartment to the other by various mechanisms such as metabolite shuttles. NADPH pool sizes can influence metabolic processes to varying extents, and targeting compartment-specific NADPH-dependent enzymes may result in selective and variable responses. Moreover, various metabolic reactions that rely on NADPH are reversible, meaning that altered NADPH redox states can influence metabolic pathway directions in one compartment first, followed by opposite changes in another compartment. Thus, an improved understanding of compartmentalized NADPH pool sizes, interactive dynamics, and affected metabolic pathways can inform the design of effective cancer therapies that target NADPH metabolism.&#13;
&#13;
In this thesis, I investigate compartmentalized NADPH metabolism in cancer cells by developing a compartmentalized NADPH dynamics and metabolism analysis platform that incorporates genetically encoded NADPH biosensors to explore mitochondrial NADPH dynamics and analyzes NADPH-mediated metabolism using 13C-glucose isotopic tracers and mathematical models. First, using NADPH sensors, we observed that the mitochondrial NADPH pool decreased in response to mitochondria-specific oxidative stress, whereas cytosolic NADPH was minimally influenced. Second, the oxidative pentose phosphate pathway activity and TCA cycle intermediate turnover rates increased in response to the decrease of mitochondrial NADPH caused by mitochondrial oxidative stress. Third, utilizing a kinetic model of the mitochondrial antioxidant network, we calculated the mitochondrial NADPH/NADP+ ratio and documented activation of an indirect NADPH shuttle system that maintained the mitochondrial NADPH pool. Lastly, we found that compartmentalized NADPH dynamics varied among different cancer cell lines and that perturbing compartment-specific NADPH pools led to cell line-specific growth inhibition in vitro. Altogether, our mitochondrial NADPH sensor and integrated approach yielded findings that enhance our insight into compartmentalized NADPH metabolism and can help advance more selective anticancer therapies.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144886</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Product and host engineering for low-cost manufacturing of therapeutic proteins in the yeast Komagataella phaffii</title>
<link>https://hdl.handle.net/1721.1/144884</link>
<description>Product and host engineering for low-cost manufacturing of therapeutic proteins in the yeast Komagataella phaffii
Dalvie, Neil Chandra
The COVID-19 pandemic revealed a global need for affordable, accessible biologic medicines such as prophylactic vaccines and monoclonal antibodies, particularly in low- and middle-income countries (LMICs). Therapeutic proteins, which can be stored as liquids, are more amenable to distribution in LMICs than newer modalities like therapeutic RNAs that require cryogenic storage. Currently, most therapeutic proteins are manufactured in either bacterial cell processes, which require bespoke purification processes to separate the product from cellular lysates, or in mammalian cell processes, which require long development timelines, stringent sterility requirements, and expensive feedstocks, all of which contribute to high costs. &#13;
 &#13;
Alternative hosts such as yeasts have the potential to shorten development timelines, lower manufacturing costs, and increase global manufacturing capacity. Yeasts, like bacteria, grow to high cell densities on inexpensive feedstocks, and, like mammalian cells, can secrete products into the extracellular space. One yeast, Komagataella phaffii (Pichia pastoris), is currently used for manufacturing of insulin and subunit vaccines in LMICs, and has been approved by the USFDA. Wider adoption of alternative hosts such as K. phaffii has been hampered by bespoke manufacturing challenges for unique therapeutic proteins such as subunit vaccines, and by low upstream titers of complex therapeutic proteins such as monoclonal antibodies.&#13;
&#13;
In this thesis, we explored two engineering strategies to improve the manufacturing of therapeutic proteins in K. phaffii. First, we engineered the product sequences of several subunit vaccine antigens and monoclonal antibodies to improve quality and secreted titer without sacrificing therapeutic function. Second, we developed tools for genome engineering in K. phaffii and applied these tools to engineer strains with improved secreted productivity. These engineered strains and product sequence modifications will enable rapid development of manufacturing processes for a wide range of therapeutic proteins.&#13;
&#13;
Lastly, we applied product engineering and host strain engineering to enable low-cost manufacturing of a subunit vaccine candidate for COVID-19. Design of the drug substance for manufacturability enabled rapid technology transfer and scale up, and the vaccine candidate is currently in clinical trials. This success illustrates the immediate impact of manufacturing in K. phaffii in LMICs, where the need for therapeutic protein interventions is greatest.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144884</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An integrated approach to enable rapid scalable upstream production of subunit vaccines with Pichia pastoris (Komagataella phaffii)</title>
<link>https://hdl.handle.net/1721.1/144883</link>
<description>An integrated approach to enable rapid scalable upstream production of subunit vaccines with Pichia pastoris (Komagataella phaffii)
Biedermann, Andrew M.
Recent experience with the COVID-19 pandemic motivates the development of vaccine designs and manufacturing technologies tailored to enable widespread access in low- and middle-income countries (LMICs). Due to limited supply, vaccines targeting SARS-CoV-2 were initially distributed primarily to high-income countries. The distribution of vaccines which were eventually made available to LMICs was complicated by logistical challenges such as the need to maintain cold-chain integrity for available vaccine modalities. These technological limitations restricted vaccine access, ultimately driving up excess deaths in LMICs and increasing global pandemic risk by enabling SARS-CoV-2 to rapidly accumulate potentially harmful mutations in an unprotected population. Resolving underlying vaccine supply and distribution challenges for LMICs will be essential to controlling the COVID-19 pandemic and to enable better response to future pandemic threats but will require improved vaccine design and manufacturing.&#13;
&#13;
Subunit vaccines produced with the yeast, Komagataella phaffii, could significantly improve access to vaccines in LMICs, owing to their potential for rapid development timelines, high productivity in existing manufacturing capacity, thermostability, and strong efficacy. In this thesis, we present an integrated approach to improve vaccine design and manufacturing in K. phaffii. In the first part, we demonstrate that K. phaffii strains engineered to eliminate the need for methanol-feeding enable improved production of SARS-CoV-2 RBD antigen. Obviating the need for methanol improved cell health and enabled production of clinical material in a 1200 L bioreactor, larger than would have been possible with traditional methanol-feeding. The benefits of methanol-free engineering appear to be generalizable to other proteins of interest. In the second part, we present a novel “modular blending” approach to media development. This new method enabled the design of a soluble medium with 2x higher productivity than our previous best defined production medium and highlighted the importance of lipid supplementation and carbon metabolism for optimal heterologous protein production in K. phaffii. Finally, viral antigens typically require multimeric display to induce strong immune responses, but common nanoparticle display technologies are difficult to produce. We present initial design and experimental work towards the secreted production of novel protein nanoparticles tailored for optimal production in K. phaffii.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144883</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrocatalytic Conversion of Carbon Dioxide to Value-Added Chemicals</title>
<link>https://hdl.handle.net/1721.1/144882</link>
<description>Electrocatalytic Conversion of Carbon Dioxide to Value-Added Chemicals
Corbin, Nathan
Closing the anthropogenic carbon cycle is an urgent issue facing society. Carbon capture and utilization (CCU) is an attractive strategy because it can help offset the cost of, and possibly enable profiting from, capturing and storing carbon dioxide (CO2). In this thesis, avenues for electrocatalytically upgrading carbon dioxide are investigated. A comprehensive review on heterogeneous molecular catalysts for aqueous electrocatalytic CO2 reduction was written to synthesize the literature and discern common trends regarding the effects of various aspects of heterogeneous molecular catalysts, including the metal center, ligands, axial coordination, electrode grafting, and stability. The review also highlighted key areas where experimental and computational efforts could improve to achieve greater accuracy and comparability of results, including catalyst aggregation, potential referencing, and demetallation. To further expand upon the recommendations of this review, an analysis of various sources of error in electrochemical experiments was conducted to quantitatively assess how different error sources impact mechanistically relevant kinetic parameters (e.g. Tafel slope, order dependence). Simple physical models were constructed to model the errors, which could then be used to provide guidelines on how to properly manage error when performing electrochemical kinetic measurements.&#13;
&#13;
Electrocatalytic carboxylation of organic molecules with CO2 provides another pathway to generate high-value chemicals. We developed a methodology to perform electrocatalytic carboxylation without requiring sacrificial anodes. The key design principle involves adding a salt with an inorganic cation that can suppress nucleophilic side reactions, which lead to lower carboxylic acid yields. This methodology was demonstrated to work for a variety of aliphatic, aryl, and benzylic organic halides. We also discovered an essential protecting mechanism by which the carboxylate product mitigates deactivation of the cathode by insoluble carbonate salts. Hydrogenolysis of the carbon-halide bond represents a major competing electrochemical reaction. The reactive hydrogen for this reaction was found to originate from the electrolyte solvent. Mechanistic studies revealed solvent deprotonation is the primary pathway toward the hydrogenolysis product, and a computational descriptor for solvent deprotonation was found to correlate strongly with carboxylation selectivity.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144882</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Reducing Beam-Induced Damage in Electron Microscopy</title>
<link>https://hdl.handle.net/1721.1/144870</link>
<description>Techniques for Reducing Beam-Induced Damage in Electron Microscopy
Abedzadeh, Navid
Imaging biomolecules in their natural state at an atomic scale resolution is crucial to the understanding of such molecules. Under the banner of Quantum Electron Microscopy (QEM), two novel electron microscopy schemes have been proposed to achieve nanometer-scale resolution while practically eliminating beam-induced damage to biological samples. The first approach is based on a quantum mechanical principle known as interaction-free measurement (IFM) and the second approach is known as multipass transmission electron microscopy, a type of phase contrast imaging in which the probe electrons transmit through a thin sample multiple times. I have been involved in the development of major components of the IFM scheme such as electron mirrors and diffractive electron mirrors for lossless splitting of incident electron beams. Furthermore, I have made major progress in developing various components of the simplest form of multipass microscopy which could be performed in a scanning electron microscope. I developed a theoretical framework to understand the effects of beam shift and hydrocarbon contamination in multipass microscopy and the limit they place on the choice of sample in this scheme. This experiment would be the first demonstration of contrast enhancement due to multipassing in an electron microscope.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144870</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capital cost evaluation of advanced reactor designs under uncertainty and risk</title>
<link>https://hdl.handle.net/1721.1/144869</link>
<description>Capital cost evaluation of advanced reactor designs under uncertainty and risk
Stewart, W. Robb
Decarbonization incentives present an enormous market opportunity for the nuclear industry. To stay within the 1.5°C limit by 2050, across a range of scenarios, the IPCC forecasted a 2.5x average increase in global nuclear generation. High capital costs and the risk of overruns and delays may prevent the industry from realizing this new potential. Recent nuclear projects in the U.S. and Europe have averaged cost overruns more than 2x the original estimate. However, new reactor architectures are under development, and they may solve some or all elements of the cost problem. These new architectures leverage modularization, intra-plant learning rates, passive safety, advanced construction, and advanced manufacturing in their endeavor to lower capital costs. This thesis systematically analyzes the cost saving potential of these strategies for eight unique reactor architectures. The aim was to understand which architectures and strategies have the greatest potential to reduce cost and risk to cost overruns, and then, apply that understanding to help utilities, policymakers, and other stakeholders in investment decision making and long-term planning. &#13;
&#13;
Total capital costs include direct costs, indirect costs, and financing costs, each based on several uncertain parameters and processes. The first section of this thesis presents a bottom-up methodology for estimating direct and indirect costs of nuclear plants to estimate an overnight cost that does not include the construction schedule and its impact on cost. The method scaled a set of reference costs from the Economic Energy Data Base for nuclear projects. The methodology featured novel components and technologies, such as standalone steel containments, e-beam welded vessels, steel plate composites, and structural and system modules. Of these technologies, advanced construction techniques such as steel plate composites were not effective in reducing the overnight direct costs, but advanced vessel manufacturing technologies were highly impactful especially for small- and multi-module architectures with heavy use of steel vessels with cost reduction up to 9% of overnight cost. Passive safety systems usually required new, expensive structures that offset the cost reduction in other systems including electrical and safeguards systems. Modularization did not substantively reduce the overnight construction costs, but it increased the impact of learning-by-doing for sequentially deployed plants. Learning-by-doing was one of the most effective cost reduction strategies, reducing capital costs 30-45%. &#13;
&#13;
The second section of this thesis presents a methodology to estimate the construction duration of a reactor architecture given the specific shape factor constraints to that architecture, local labor supply, and set of site activities and ordered dependencies. Interest accrued during construction is significant for nuclear projects because they have very long construction durations. Large reactor architectures were very sensitive to the local labor market conditions and experienced long delays (40-50%) when the labor pool was insufficient. Small reactors were more robust to labor conditions with negligible delays. Both large and small architectures can, in theory, deliver projects in 30-40 months depending on the level of structural modularization. Modularization can dramatically reduce the construction duration and associated financing costs for both large and small reactors 65-75% which was 15-25% of all total costs. &#13;
&#13;
Capital cost estimation is a highly uncertain process, particularly for nuclear projects, and the literature contains conflicting guidelines for scaling the costs of certain components. Further, the proposed modularization, learning, and indirect cost models are subject to considerable uncertainty as well. The third section of this thesis quantified the uncertainty of the overnight cost estimates. Indirect costs were a significant source of uncertainty for all reactor architectures. Learning-by-doing cost reductions were also a large contributor to uncertainty for all architectures, but especially for the multi-module plants where there was intra-plant learning. Input data, specifically commodity volumes such as concrete and steel, were a large source of uncertainty as well. Typical first-of-a-kind costs had +/- 10% uncertainty at the 95% confidence interval, and +/- 20% for 10th-of-a-kind projects. &#13;
&#13;
The final section reviews the specific construction delay drivers for the Vogtle Units 3 &amp; 4 project. This case study provided input data for supply chain delay and change order risks. Human error risks were also modeled using data from other large megaprojects. Change order and human error risk were the dominant drivers of construction delays, with supply chain delays having a small effect. Most of the cost and schedule risk was mitigated after the first-of-a-kind project. Smaller projects were not immune to the construction schedule risks and delays seen in large nuclear projects, but they were able to overcome the risks at a smaller absolute project cost. Therefore, for equal-size capacity deployments (i.e. several small reactors vs. fewer large reactors), smaller reactor architectures had narrower cost distributions and lower risk to cost overrun. Smaller reactor cost escalation extended 10-11% ($700-800/kWe) above the median cost overrun, but larger reactor escalations extended 16-24% ($1,100-2,200/kWe) above the median cost overrun. However, these lower risks did not usually translate to lower median costs. By providing cost and risk-to-cost-overrun estimates for the leading nuclear power plant architectures, this thesis work can better inform utilities and policymakers on the economics of nuclear energy and the most effective technological pathways to improve its attractiveness.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144869</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simultaneous, Large Multi-Gene Delivery for Implementation of Fluorescent Reporter Spatial Multiplexing to Image Signaling Pathways</title>
<link>https://hdl.handle.net/1721.1/144868</link>
<description>Simultaneous, Large Multi-Gene Delivery for Implementation of Fluorescent Reporter Spatial Multiplexing to Image Signaling Pathways
Johnson, Shannon L.
In order to study intricate, multifaceted biological systems, one must be able to measure numerous cellular activities in real time. Today, many intracellular, genetically encoded reporters provide a readout in living cells using fluorescent proteins. Spectral multiplexing has been attempted by engineering reporters to each have a unique spectrum that does not overlap with other reporters, to allow simultaneous recording of multiple signals in a single physiological cascade.  However, the problem with spectral multiplexing is the large degree of spectral overlap arising from the broad shape of each reporter’s spectrum, which leads to a blended signal that masks which data belong to which molecule.  This dissertation describes the concept of spatial multiplexing to circumvent the limitations of spectral multiplexing, the challenges of designing a system to deliver multiple reporters simultaneously in culture, and the attempt to teach resilience in developing tools for understanding the nervous system in a 14-week course.&#13;
  &#13;
By fusing a fluorescent reporter to a pair of self-assembling peptides or a self-assembling RNA-protein pair, reporters could be stably clustered within cells at random points, distant enough to be resolved by a microscope, but close enough to spatially sample the relevant biology. These clusters, called signaling reporter islands (SiRIs), can be modularly designed and permit a set of fluorescent reporters to be efficiently adapted for simultaneous measurement of multiple points in a signaling pathway within single cells.  SiRIs for indicators of second messengers and kinases were created to image up to five signals at once in a single living neuron.  This introduces the need to express multiple genes in the same cell, but the probability of multiple plasmids entering the same cell for simultaneous expression is lowered as the number of plasmids increases.  Therefore, means of expressing multiple SiRI genes in a single plasmid backbone were evaluated.  Lastly, a record is shared of the process used to develop and implement a digital learning experience for teaching the place of failure in research and development.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144868</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of High-Power High-Frequency Coreless Transformer Systems</title>
<link>https://hdl.handle.net/1721.1/144865</link>
<description>Design of High-Power High-Frequency Coreless Transformer Systems
Schemmel, Daniel E.
This thesis concerns advances in the design of coreless power transformers as well as associated power electronics, which can also add extra functionality. Compared to traditional core-based transformers that require large, heavy and costly magnetic cores to guide magnetic flux between windings, the coreless power transformer employs extra resonant coils to provide near perfect magnetic coupling between input and output windings. The four-coil implementation of a high-frequency coreless power transformer was recently introduced along with basic analysis and a 100 W working demonstration device. While the development of this new coreless transformer was a significant achievement, a number of issues remain for it to be practical in electric grid applications. A key topic addressed is a means to optimize the design as well as establishing a universal metric that quantifies coreless transformer performance. Other topics include the integration of the transformer as a component of a larger system through the development of required power electronics to drive the transformer and magnetic shielding to provide control of external magnetic fields. The overall objective of this thesis is to provide guidance on the design of coreless transformer systems and to extend coreless transformer technology to higher powers.&#13;
&#13;
More specifically, through the design and construction of different four-coil coreless transformers, topics associated with the system level design of coreless transformers are addressed in the context of DC-to-DC systems. This includes the design, construction, and testing of three transformers with ratings of 1 kW at 150 kHz, 1 kW at 300 kHz, and 40 kW at 300 kHz. Efficiencies of these transformers ranged from about 93% to over 98%. A new design algorithm based on particle swarm optimization is applied to optimize the required 21 parameters in the transformer design space. An important metric for the optimization process is shown to be the transformer S₂₁ scattering parameter. This work also includes implementation of a suitable H-bridge inverter with zero-voltage switching and full-bridge rectification to produce a DC output. In addition, topics such as compensation, magnetic shielding, and transformer stacking with synchronization are presented in detail.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144865</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming challenges of fundamental electrochemical kinetic studies under dilute-reagent conditions</title>
<link>https://hdl.handle.net/1721.1/144862</link>
<description>Overcoming challenges of fundamental electrochemical kinetic studies under dilute-reagent conditions
Williams, Kindle Shea
Electrochemical reactions show promise for decarbonizing the chemical industry through distributed, local chemical synthesis; replacement of hazardous, atom-inefficient, and emissions-heavy precursors and processes with benign reagents and electrical potential; the storage of intermittent renewable energy in chemical bonds; and the direct conversion of CO2 into value-added chemicals. Before electrochemical technologies can be deployed, however, they must go through many stages of development. Different electrochemistries have different needs prior to deployment. In this thesis, we focus on lab-scale, fundamental kinetic studies of select electrochemistries – particularly those chemistries for which dilution or mixing phenomena are of practical concern.&#13;
&#13;
We first discuss electrochemical CO2 reduction – a chemistry that is very well studied in the lab environment, but has yet to reach deployment stage. In particular, we address the questions: what happens if we do not have access to a pure CO2 gas feed? Is electrochemical CO2 reduction tolerant to common CO2 impurities such as N2 and O2? Through mechanistic study, we demonstrate that neither the rate nor the mechanism of CO2 reduction is affected by the presence of O2, but that the cathodic co-reduction of O2 at relevant potentials represents a parasitic current – an energetic trade-off for tolerance to feed impurity which should be considered in technoeconomic analyses of CO2 reduction systems.&#13;
&#13;
Next, we consider mixtures not in the gas phase but in the liquid phase: blended aqueous-nonaqueous electrolytes. Such mixtures are often used to bring organic substrates into contact with water as a co-reactant, for example in O-atom transfer chemistries. We describe and measure complexities of working in such systems, such as solvent components that behave nonideally and ill-defined potential scales. We report the first measurement of the water dependence of nonaqueous alkaline hydrogen evolution, accounting for such nonidealities. In doing so, we propose molecular explanations for water’s nonideal behavior and hypothesize many roles water can play in blended electrolytes. Finally, we use such a blended electrolyte to develop an approach to benzylic C-H oxidation for the production of commodity chemicals.&#13;
&#13;
We hope this work represents a useful step toward the understanding and deployment of these relevant electrochemical systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144862</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation and Analysis of Chemical Kinetic Mechanisms</title>
<link>https://hdl.handle.net/1721.1/144861</link>
<description>Automatic Generation and Analysis of Chemical Kinetic Mechanisms
Johnson, Matthew S.
Many important processes in the world are controlled by chemical kinetics, from the combustion of fuels in engines and the production of polymers to the electrochemistry of batteries and biological processes. However, many if not most overall chemical processes do not occur in a single-step reaction between reactants and products and can involve hundreds of different elementary reactions and intermediates. In many cases, how well we can resolve and parametrize these elementary reactions and intermediates controls our ability to predict the behavior of the associated process. These systems of species, reactions, and their associated parameters are usually referred to as detailed kinetic mechanisms. Creating detailed kinetic mechanisms, however, requires us to determine both what reactions can happen in a given system and how fast they occur. This can be incredibly tedious and challenging to do by hand, so it is often more practical to use automatic mechanism generators such as the Reaction Mechanism Generator (RMG) software. RMG allows us to build a workflow for generating and refining these mechanisms: we run RMG to generate a mechanism, analyze the mechanism to determine important parameters, improve those parameters based on quantum chemistry calculations, experiments, and literature, integrate the new data into RMG's estimators, and rerun RMG to generate a new mechanism. &#13;
&#13;
This thesis presents a number of improvements to different aspects of this workflow, along with applications of it. New, faster, and more advanced techniques and software are presented for analyzing chemical kinetic mechanisms. Improvements are presented for RMG's algorithm for selecting species and reactions to include in the mechanism. Improved techniques for generating, refining, and computing phenomenological rate coefficients for pressure-dependent networks are also presented. Also presented is the RMG-database, which manages estimation within RMG, and a new machine learning-based algorithm for estimating the rate coefficients of reactions. Lastly, an application of this workflow to generate a mechanism for the combustion and pyrolysis of methyl propyl ether, and the extension and application of RMG to model the solid electrolyte interphase in lithium batteries, are presented.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144861</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High resolution, in-situ studies of seawater carbonate chemistry and carbon cycling in coastal systems using CHANnelized Optical System II</title>
<link>https://hdl.handle.net/1721.1/144860</link>
<description>High resolution, in-situ studies of seawater carbonate chemistry and carbon cycling in coastal systems using CHANnelized Optical System II
Ringham, Mallory
Study of the marine CO₂ system is critical for understanding global carbon cycling and the impacts of changing ocean chemistry on marine ecosystems. This thesis describes the development of a near-continuous, in-situ dissolved inorganic carbon (DIC) sensor, CHANnelized Optical System (CHANOS) II, suitable for deployment from both mobile and stationary platforms. The system delivers DIC measurements with an accuracy of 2.9 (laboratory) or 9.0 (field) μmol kg⁻¹, at a precision of ~4.9-5.5 μmol kg⁻¹. Time-series field deployments in the Pocasset River, MA, revealed seasonal and episodic biogeochemical shifts in DIC, including two different responses to tropical storm and nor’easter systems. Towed surface mapping deployments across Waquoit Bay, MA, highlighted the export of DIC from salt marshes through tidal water. High resolution (&lt;100 m) data collected during ROV deployments over deep coral mounds on the West Florida Slope revealed a much wider DIC range (~1900 – 2900 μmol kg⁻¹) across seafloor and coral habitats than was observed through the few bottle samples collected during the dives (n = 5, 2190.9 ± 1.0 μmol kg⁻¹). These deployments highlight the need to investigate deep sea biogeochemistry at high spatial resolution in order to understand the range of environmental variation encountered by benthic communities.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144860</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Seismic Waves for Imaging the Earth</title>
<link>https://hdl.handle.net/1721.1/144859</link>
<description>Learning Seismic Waves for Imaging the Earth
Sun, Hongyu
This thesis studies imaging Earth’s interior with seismic wavefields for seismic exploration and monitoring, and shows applications of deep learning in solving challenges in seismic imaging with either active or passive seismic data. For active data, we develop deep-learning methods to extrapolate missing low-frequency waves from band-limited seismograms. Low-frequency waves are essential to mitigate the cycle-skipping problem of full-waveform inversion (FWI), but data below ∼ 3 Hz are missing due to the band-limited characteristic of conventional artificial sources. Here we train convolutional neural networks to computationally extrapolate low-frequency data from band-limited recordings so that FWI can start from the extrapolated low-frequency data. We also extend the method to elastic FWI, where the cycle-skipping phenomenon is more severe compared to acoustic FWI due to the short S-wave wavelength. Additionally, involving real seismic data in training may reduce the generalization error for a network trained only on synthetic data. We thus develop a semi-supervised learning method and train generative adversarial networks with real data without real labels. Both synthetic and field examples show that the extrapolated low frequencies can successfully initiate FWI from rough initial models. Furthermore, we show that extrapolated low frequencies may be used to increase the investigation depth of surface-wave inversion for near-surface characterization. Moving from active to passive data, we develop deep-learning methods to extract accurate Green’s functions from realistic noise environments. Seismic interferometry by cross-correlation of ambient noise may introduce spurious events in correlograms if the source distribution is inhomogeneous. Extremely long (from days to months) noise recordings are usually required for a reliable retrieval with high signal-to-noise ratio. 
We therefore propose a deep-learning method to overcome the spatial limitation of passive sources for the universal application of seismic interferometry and the temporal limitation of noise recording length for real-time monitoring. Collectively, we find that deep neural networks can learn to generate seismic waves under the regression framework of machine learning. We conclude that machine learning is a powerful complement to traditional computational approaches and may provide new insights into the imaging of the Earth’s structure and dynamics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144859</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Stability of PbS Quantum Dot Solar Cells</title>
<link>https://hdl.handle.net/1721.1/144858</link>
<description>The Stability of PbS Quantum Dot Solar Cells
Sponseller, Melany C.
Solution-based lead sulfide (PbS) quantum dots (QDs) may enable lightweight solar cells that facilitate high-throughput module manufacturing, simplify module installation, and bring light-harvesting functionality to unconventional or weight-restricted surfaces. Proven long-term stability is critical for QD solar cells to reach commercial viability, since solar cells deployed in the field must typically operate for years with minimal loss in efficiency. However, the operational stability and associated aging processes of QD solar cells are not fully understood. Existing measurements of QD solar cell lifetimes are often conducted under non-standardized stress conditions and primarily highlight relative improvements over baseline devices rather than underlying causes of performance degradation.&#13;
&#13;
In this work, we systematically investigate aging processes that lead to QD solar cell performance evolution during short-term shelf storage and long-term continuous operation. First, we analyze the role of short-term air exposure in improving the initial efficiency of PbS QD solar cells. We show that brief air exposure treatments elicit multiple oxidation processes that benefit PbS QD solar cell performance in the near-term. In particular, post-fabrication air exposure is necessary to heal the QD-top electrode interface following thermal evaporation of electrodes onto QDs.&#13;
&#13;
Next, we characterize the operational stability of PbS QD solar cells under systematically tuned stress conditions such as continuous illumination, heating, environment, and electrical bias. We demonstrate conventional QD solar cells are capable of operating lifetimes approaching 1000 hours or longer if actively cooled, but photothermal instabilities likely limit their current utility for applications requiring sustained operation above room temperature. We also provide evidence of electrode migration-induced degradation in hole transport layer-free QD solar cells, highlighting the need to consider diffusion barrier properties when evaluating next-generation hole transport materials.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144858</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping light-matter interactions for free-electron radiation and photonic computing</title>
<link>https://hdl.handle.net/1721.1/144857</link>
<description>Shaping light-matter interactions for free-electron radiation and photonic computing
Roques-Carmes, Charles
Nanophotonics has become over the past decades a paramount technology, enabling, among other things, the design of novel light sources, detectors, and devices controlling the polarization, spectral, and angular distribution of light. A landmark of nanophotonics is the design of nanostructured materials (metasurfaces, photonic crystals, single resonators, etc.) to tailor the interaction of light with matter, either by shaping light propagation at the nanoscale, or by controlling emission from atoms and molecules. In this thesis, we propose two avenues in which nanophotonics can be leveraged to enhance light-matter interactions, with applications in free-electron radiation and in implementing Monte Carlo sampling algorithms in photonic circuits. We present a framework to model, tailor, and enhance radiation from free electrons and other high-energy particles interacting with nanophotonic structures. We then describe the construction of an experimental setup to record spectrally-resolved light emission from free electrons interacting with nanophotonic structures. We utilize this setup to demonstrate nanophotonic enhancement of coherent and incoherent cathodoluminescence. We also present methods to realize fast and efficient sampling of Gibbs distributions of arbitrary Ising models with recurrent photonic circuits. These methods are experimentally demonstrated in a photonic integrated circuit on small-scale Ising models. Lastly, we propose future research developments based on those findings and possible research avenues at their intersection.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144857</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing the Role of Mucin O-Glycans in Regulating Microbial Virulence</title>
<link>https://hdl.handle.net/1721.1/144856</link>
<description>Analyzing the Role of Mucin O-Glycans in Regulating Microbial Virulence
Takagi, Julie S.
Mucus is the principal ecological niche for the human microbiota, acting as an interface between the trillions of microorganisms residing in the mucosal surfaces of the body and the underlying epithelial cells. Here, mucus plays a fundamental role in host defense, maintaining complex communities of microorganisms and preserving host health. Defects in the production or glycosylation of mucins, the gel-forming component of mucus, are associated with microbial dysbiosis and infection; however, owing to the complexity of mucins, the molecular motifs and mechanisms underlying mucins’ protective functions remain poorly understood. In this thesis, I fill this gap by investigating how mucins and their associated glycans influence microbial physiology in the opportunistic fungal pathogen Candida albicans and the Gram-negative bacterial pathogen Vibrio cholerae. First, I purify and characterize libraries of mucin-attached glycans and establish their role in influencing C. albicans pathogenicity, group behavior, and cross-kingdom interactions. Next, I determine the genetic mechanism of the glycan-mediated response and identify individual synthesized glycans that are sufficient for mucins’ virulence attenuation. Lastly, I investigate the expression and production of virulence factors in the enteric pathogen V. cholerae and demonstrate that mucin O-glycans, specifically Core 2-derived glycans, inhibit the major determinants of infectivity. Collectively, the work presented in my thesis provides a framework for characterizing the biochemical signals underlying mucins’ protective function and provides insight for the development of novel therapeutic and diagnostic tools for treating and preventing infection.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144856</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Activation Landscape of Pro-Apoptotic BAK Through the Discovery of Human BH3-Only and Non-Native Peptide Binders</title>
<link>https://hdl.handle.net/1721.1/144855</link>
<description>Exploring the Activation Landscape of Pro-Apoptotic BAK Through the Discovery of Human BH3-Only and Non-Native Peptide Binders
Aguilar, Fiona
BAK is one of the two pro-apoptotic members that form part of the BCL-2 protein family. Previous work has shown that binding of certain BH3-only proteins such as truncated BID (tBID), BIM, and PUMA to pro-apoptotic BAK leads to mitochondrial outer membrane permeabilization (MOMP), release of cytochrome c, and ultimately cell death. The BH3 binding event leads to a series of conformational changes that promote the conversion of BAK from monomer to dimer and subsequently to oligomers that disrupt membranes in a process referred to as activation. Putative intermediate crystal structures, crosslinking data, and in vitro functional tests have provided insights into the activation event, yet the sequence-function relationships that make some, but not all, BH3-only proteins function as activators remain largely unexamined. &#13;
&#13;
In this thesis, I address the question using three methods: 1) computational design, 2) yeast surface-display screening of candidate BH3-like peptides, and 3) structure-based energy scoring. I identify ten new binders of BAK that span a large sequence space. Among the new binders are two peptides from human proteins BNIP5 and PXT1 that promote BAK activation in liposome assays and induce cytochrome-c release from mitochondria in HeLa cells. These new activators expand current views of how BAK-mediated cell death can be triggered. I show binding and kinetics measurements and solved crystal structures of BAK-peptide complexes, including complexes for two inhibitors and one activator. Results reveal a high degree of similarity in binding geometry, affinity, and association kinetics between peptide activators and inhibitors, including peptides described previously and those identified in this work. Here, I propose a free energy model for BAK activation that is based on the differential engagement of BAK monomers and the BAK activation transition state that integrates observations described in this thesis and previous reports of BAK binders, activators, and inhibitors.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144855</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Analysis of Methods to Eliminate Oscillatory Behavior in Bioreactors for Viral Vaccine Manufacturing</title>
<link>https://hdl.handle.net/1721.1/144853</link>
<description>Design and Analysis of Methods to Eliminate Oscillatory Behavior in Bioreactors for Viral Vaccine Manufacturing
Schickel, Kaylee Christine
Continuous stirred tank reactors, or CSTRs, are among the most commonly used reactors in the biomanufacturing sector. This is because CSTRs offer a relatively simple design, operate by maintaining a constant and predictable outlet concentration, and have been utilized and studied long enough for the field to have developed great familiarity with them. One major drawback of single-CSTR applications arises with certain cell-virus systems that exhibit the von Magnus effect. Through the von Magnus effect, oscillations in cellular and viral states arise, with individual states sometimes spanning orders of magnitude within a single run. This work demonstrates two successful methods for reducing and eliminating this periodicity: designing and applying a proportional feedback controller for setpoint tracking, and staging multiple CSTRs in series. Additionally, oscillatory behavior can be avoided through novel reactor designs. One such design, a novel hollow fiber bioreactor, is explored in this work. This design constrains larger cellular and viral species within a hollow fiber lumen, where viral infection of the cells can take place along the length of the reactor. Fresh media is provided to the system through the extracapillary space, with smaller waste and nutrient molecules able to pass through the membrane to maintain the health of the cells. With this design, we are able to achieve stable outlet concentrations that can be optimized. Finally, as an extension to these ideas, population balance models were analyzed to allow the tracking of individual populations of cells within a reactor - particularly those of a certain age group. As cells age, they may become less productive or more likely to exhibit genetic decomposition. Thus, tracking cell ages provides avenues to design new systems that may be able to filter such aging cell populations out, thereby potentially increasing the overall productivity of the system. 
The groundwork for this is laid by providing solution strategies for population balance models, including numerical approaches as well as direct analytical solutions. Both approaches are analyzed here in great depth.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144853</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Harish-Chandra Bimodules in Complex Rank</title>
<link>https://hdl.handle.net/1721.1/144847</link>
<description>Harish-Chandra Bimodules in Complex Rank
Utiralova, Aleksandra
In this work we study Harish-Chandra bimodules in the setting of the Deligne categories Rep(Gₜ). We classify all pairs (χ, ψ) of central characters of U(gₜ) for which there exists a non-zero Harish-Chandra bimodule over gₜ, such that the two copies of the center Z(U(gₜ)) act via characters χ and ψ. Thus, we answer Question 3.25 posed in Pavel Etingof’s paper Representation Theory in Complex Rank II.&#13;
&#13;
We also construct a family of Harish-Chandra bimodules that interpolate simple finite dimensional bimodules in the classical case. It turns out that they have finite K-type, which is a non-vacuous condition for the Harish-Chandra bimodules in Rep(Gₜ). The full classification of (simple) finite K-type bimodules is yet unknown.&#13;
&#13;
This construction also yields some examples of central characters χ of the universal enveloping algebra U(gₜ) for which the quotient Uₓ is not simple, and, thereby, it allows us to partially solve Problem 3.23 posed in Representation Theory in Complex Rank II.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144847</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The C-Propeptide in Collagen Proteostasis</title>
<link>https://hdl.handle.net/1721.1/144838</link>
<description>The C-Propeptide in Collagen Proteostasis
Li, Rasia (Chichi)
Collagen folding is initiated at the C-terminal propeptide (C-Pro) domain, a globular domain wholly distinct from collagen’s characteristic triple helix. The C-Pro domain is responsible for assembling three collagen strands into the correct orientation and stoichiometry and nucleating folding of the most abundant protein in the human body. While the function of the C-Pro domain in guiding collagen assembly is well accepted, its role in proteostasis has only recently been appreciated. In Chapter 2, we demonstrate that the highly-conserved N-glycan in the collagen-I C-Pro domain is critical for maintaining collagen proteostasis under challenging conditions. Specifically, the N-glycan facilitates interaction between procollagen and ER lectin chaperones to ensure proper folding of misfolding-prone collagen variants or wild-type collagen under proteostatic stress. In Chapter 3, we present progress towards understanding the molecular mechanisms of collagen assembly. We previously showed that collagen-I C-Pro assembly patterns are guided by Ca²⁺-dependent non-covalent assembly of all C-Pro trimers, followed by covalent immortalization by interchain disulfide bonds. While prior work focused on the C-Pro domains in isolation, Chapter 3 explores unanswered questions about collagen assembly using full-length procollagen constructs. We show that regions of procollagen beyond the C-Pro domain play an unexpected role in defining the ability of triple-helical domains to homotrimerize. These results also yield fresh insights into the still unknown molecular features that promote procollagen heterotrimerization. Collectively, the work described in this thesis advances our understanding of the critical roles of the C-Pro domain in collagen proteostasis.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144838</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic Design of Optical Emitters</title>
<link>https://hdl.handle.net/1721.1/144837</link>
<description>Synthetic Design of Optical Emitters
Ginterseder, Matthias
Humanity’s ability to control and harness the power of light lies at the core of what defines much of our lives today, with a great wealth of future technologies on the horizon. Colloidal emitters, such as quantum dots (QDs) and small molecules, act as chemical platforms with extraordinary capabilities in shaping light-matter interactions. Yet, the stringent requirements placed on these species by increasingly sophisticated applications demand continuous synthetic improvement of existing emitters, as well as the development of conceptually new classes of emitters.&#13;
&#13;
In the first part of this thesis, I detail a new approach to the precursor chemistry of indium arsenide (InAs) QDs based on the redox chemistry of In. The judicious combination of an As(III) and an In(I) precursor yields an atom-economical redox couple employing safe and commercially available compounds. A pre-equilibrium based on the disproportionation of In(I) to In(III) and In(0) confers robustness and flexibility to the particle growth. The emission of these InAs-based QDs is shown to cover much of the near infrared (NIR) and shortwave infrared (SWIR), opening up new pathways to sensing and imaging technologies.&#13;
&#13;
In the second part, I describe the development of a versatile class of surface ligands for lead halide perovskite (LHP) QDs of CsPbBr3. CsPbBr3 QDs have seen tremendous development in recent years, positioning them as candidate emitters for quantum optical applications. Carefully constructing binding groups and backbones tailored to the LHP surface furnishes a class of dicationic quaternary ammonium (Diquat) ligands. The influence of these ligands leads to effective electronic passivation and modulation of phonon coupling, observed in the form of narrowed emission linewidths, bulk-like Stokes shifts, mitigated inhomogeneous lineshape broadening, and an increased fraction of photons emitted into the coherent channel. &#13;
&#13;
In the final chapter, I translate emissive defects found in hexagonal boron nitride (hBN) matrices to small molecule emitters. By leveraging the covalent and two-dimensional nature of hBN, defect motifs comprising as few as three atoms could potentially be embedded in a molecular framework while retaining their defining characteristics. A concise synthetic scheme covering multiple defect-derived structures is provided, opening the door to novel rationally designed emitters.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144837</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation and Optimization in Online Marketplaces</title>
<link>https://hdl.handle.net/1721.1/144834</link>
<description>Estimation and Optimization in Online Marketplaces
Li, Hanwei
The emergence of e-commerce business models (such as Airbnb and Amazon) brings opportunities and challenges to their operations. This thesis studies several estimation and optimization problems within the online platform domain, using data-driven approaches in operations management.&#13;
&#13;
The thesis consists of three components. Motivated by the unique setting of Airbnb, in the first work, we consider a game-theoretical setup in which each seller on the platform provides a single-unit product and competes with one another on price. We investigate sellers’ optimal pricing decisions and the platform’s optimal assortment display policy. We find that the platform should display the entire assortment to all the customers when demand is sufficiently high. Moreover, we propose a tabulation algorithm and a mixed-integer programming formulation to effectively solve for the sellers’ and the platform’s optimal decisions. Additionally, in the optimal display policy, we incorporate constraints to guarantee a certain degree of seller and customer fairness on both system and individual levels.&#13;
&#13;
The second work is also closely related to marketplaces like Airbnb, where we estimate and optimize the impact of photo layout. We apply Resnet50, a convolutional neural network model, to build two separate, supervised learning models to evaluate the image quality and room types posted by Airbnb hosts. Then, we characterize the overall impacts of photo layout by room type, photo quality, and display order. To address the estimation challenges in the Airbnb setting, we propose a novel pairwise comparison model to consistently estimate the impact of photo layout. Our estimation results suggest that the cover image has a substantially greater impact than non-cover photos. A high-quality bedroom cover image leads to the most significant increase in demand. The counterfactual analysis shows the potential impact when adopting the optimal photo layouts.&#13;
&#13;
In the third work, we collaborate with a global online fashion retailer, Zalando, to optimize large-scale price discount decisions. We address Zalando’s local and global business challenges by applying a three-step process. In the first step, we cluster products into groups that behave similarly and pre-solve the aggregated problem. In the second step, we decompose the problem using Lagrangian relaxation into a problem for each product (SKU) and provide an efficient way to identify the Lagrange multipliers. Finally, we optimize decisions for individual products, addressing local business constraints. For this new approach, which was implemented as part of Zalando’s price discount decision process, we provide results from offline tests and field experiments to demonstrate its benefit.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144834</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of Presynaptic Ca²⁺ Channel Abundance at Active Zones Through a Balance of Delivery and Turnover</title>
<link>https://hdl.handle.net/1721.1/144830</link>
<description>Regulation of Presynaptic Ca²⁺ Channel Abundance at Active Zones Through a Balance of Delivery and Turnover
Leopold Cunningham, Karen
Voltage-gated Ca²⁺ channels (VGCCs) mediate Ca²⁺ influx to trigger neurotransmitter release at specialized presynaptic sites termed active zones (AZs). The abundance of VGCCs at AZs regulates neurotransmitter release probability (Pᵣ), a key presynaptic determinant of synaptic strength. Although biosynthesis, delivery and recycling cooperate to establish AZ VGCC abundance, experimentally isolating these distinct regulatory processes has been difficult. In this thesis, I describe how the AZ levels of Cacophony (Cac), the sole VGCC mediating synaptic transmission in Drosophila, are determined. I also analyzed the relationship between Cac, the conserved VGCC regulatory subunit α2δ, and the core AZ scaffold protein Bruchpilot (BRP) in establishing a functional AZ. I find Cac and BRP are independently regulated at growing AZs, as Cac is dispensable for AZ formation and structural maturation, and BRP abundance is not limiting for Cac accumulation. Additionally, AZs stop accumulating Cac after an initial growth phase, whereas BRP levels continue to increase given extended developmental time. AZ Cac is also buffered against moderate increases or decreases in biosynthesis, whereas BRP lacks this buffering. To probe mechanisms that determine AZ Cac abundance, intravital FRAP and Cac photoconversion were used to separately measure delivery and turnover at individual AZs over a multi-day period. Cac delivery occurs broadly across the AZ population, correlates with AZ size, and is rate-limited by α2δ. Although Cac does not undergo significant lateral transfer between neighboring AZs over the course of development, Cac removal from AZs does occur and is promoted by new Cac delivery, generating a cap on Cac accumulation at mature AZs. Together these findings reveal how Cac biosynthesis, synaptic delivery, and recycling set the abundance of VGCCs at individual AZs throughout synapse development and maintenance.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144830</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inside the Moral Nexus: On Wrongs, Rights, and Normative Powers</title>
<link>https://hdl.handle.net/1721.1/144828</link>
<description>Inside the Moral Nexus: On Wrongs, Rights, and Normative Powers
Raty, Anni Aliisa
Common sense morality recognises a distinction between doing something that is wrong, and wronging someone in particular: littering is wrong, but stealing from your neighbour or trampling their flowerbed wrongs them in particular. The difference is that wronging someone is interpersonal or relational in a way that “mere” wrongdoing is not.&#13;
&#13;
In the first chapter, titled “Wrongs without Rights?”, I consider the relation between wronging someone and violating that person’s rights. Most moral philosophers take it for granted that whenever you wrong someone, you violate that person’s rights. Most also take it for granted that someone’s having a claim right against you is equivalent to your being under a duty that you owe to that person in particular—a directed duty, as they are often called. This chapter challenges these orthodox ideas and suggests that to wrong someone is, in the first instance, to violate a directed duty that you owe to that person. I also suggest that directed duties do not always correspond to rights. So wronging someone does not always involve a rights violation. &#13;
&#13;
Consent is a normative power that allows us to alter the duties that others owe to us. By giving consent, we can make it so that something that would otherwise wrong us does not. In chapter 2, “The Normative Power of Uptake”, I argue that morally transformative consent requires the consent-recipient’s uptake or acceptance. Consent can’t be given unilaterally by the consent-giver; it requires the recipient’s cooperation. &#13;
&#13;
Chapter 3 is titled “A Joint Decision Account of Consent”. In this chapter I develop a novel account of consent as a kind of joint decision. According to the view I propose, consent is a joint decision that results in the recipient being released from a duty owed to the consent-giver. I argue that this view of consent is better suited to the project of sexual ethics than some of the alternatives: unlike some other views of consent, it does not portray consent as one-sided acquiescence to someone else’s pursuits. The account also guides us to ask important questions about the appropriate ways to negotiate sexual consent.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144828</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploration of Planetary Bodies with Electrospray Thrusters</title>
<link>https://hdl.handle.net/1721.1/144825</link>
<description>Exploration of Planetary Bodies with Electrospray Thrusters
Jia-Richards, Oliver
The exploration of planetary bodies such as asteroids can provide insight into the development of the solar system and targets for future in situ resource utilization. However, the current paradigm of using a single, monolithic, spacecraft limits the number of asteroid visits to one every few years. By using fleets of standardized small spacecraft, the frequency of asteroid visits could be dramatically increased while simultaneously decreasing the cost per visit. With the development of miniaturized spacecraft systems beginning to mature, attention also needs to be given to methodologies for operating these small spacecraft.&#13;
&#13;
Electrospray propulsion is a promising technology for high-Delta-v propulsion of small spacecraft due to its mechanical simplicity and scalability. However, methodologies for characterizing the propulsion system thrust on orbit have so far been underdeveloped, and are required for continued development of electrospray thrusters. In addition, the use of electrospray thrusters during operations around or on an asteroid can have further implications. First, the low thrust density of electrospray propulsion, relative to monopropellant or cold-gas propulsion, likely constrains many propulsion architectures to have a single thrust axis with respect to the spacecraft body, complicating the trajectory design process. Second, the ability to operate electrospray thrusters in a bipolar configuration opens up new mission possibilities that leverage intentional charging of the parent vehicle as well as the surface of the planetary body in order to create electric forces for actuation.&#13;
&#13;
This thesis resolves three technical challenges associated with the application of electrospray thrusters to potential planetary exploration missions. Numerical and analytical approaches for inferring the thrust output of a propulsion system based on a simple orbital maneuver are developed. These approaches allow for characterization of the propulsion system performance, including quantification of the uncertainty in the thrust output. The controllability of an underactuated spacecraft during proximity operations is also established, and an analytical maneuver library is derived in order to guide the spacecraft through different maneuvers. Finally, the application of electric forces for a novel form of actuation on the surfaces of atmosphere-less planetary bodies is analyzed in order to enable a small vehicle to anchor to the surface of a rotating asteroid or potentially achieve levitation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144825</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Make Decisions in Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/144824</link>
<description>Learning to Make Decisions in Robotic Manipulation
Dai, Siyu
In order for human-assisting robots to be deployed in real-world settings such as household environments, challenges in two major scenarios remain to be solved. First, for common tasks that the robot conducts day-to-day, the execution of motion plans needs to ensure the safety of surrounding objects and humans. Second, to handle new tasks that some customers might occasionally demand, robots need to be able to learn novel tasks efficiently with a minimal amount of human supervision. In this thesis, we show that machine learning methods can be applied to solve challenges in both scenarios. In the first scenario, we propose learning-based p-Chekov, a chance-constrained motion planning approach that utilizes data-driven methods to obtain safe motion plans in real time. By pre-training a collision risk estimation model off-line instead of conducting online sampling-based risk estimation, learning-based p-Chekov is able to significantly improve the planning speed while maintaining the chance-constraint satisfaction performance. In the second scenario of learning new tasks, we first propose empowerment-based intrinsic motivation, a reinforcement learning (RL) approach that allows robots to learn novel tasks with only sparse or binary reward functions. Through maximizing the mutual dependence between robot actions and environment states, namely the empowerment, this intrinsic motivation helps the agent to focus more on the states where it can effectively “control” the environment during exploration instead of the parts where its actions cause random and unpredictable consequences. Empirical evaluations in different robotic manipulation environments with different shapes of the target object demonstrate that this empowerment-based intrinsic motivation approach can obtain higher extrinsic task rewards faster than other state-of-the-art solutions to sparse-reward RL tasks. 
Another approach we propose in the second scenario is automatic curricula via expert demonstrations (ACED), an imitation learning method that leverages the idea of curriculum learning and allows robots to learn long-horizon tasks when only provided with a handful of demonstration trajectories. Through moving the reset states from the end to the beginning of demonstrations as the learning agent improves its performance, ACED not only learns challenging manipulation tasks with unseen initializations and goals, but also discovers novel solutions that are distinct from the demonstrations. In addition, ACED can be naturally combined with other imitation learning methods to utilize expert demonstrations in a more efficient manner and allow robotic manipulators to learn novel tasks that other state-of-the-art automatic curriculum learning methods cannot learn. In the experiments presented in this thesis, we show that a combination of ACED with behavior cloning allows pick-and-place tasks to be learned with as few as one demonstration and block stacking tasks to be learned with twenty demonstrations.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144824</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategies for High-Performance Solid-State Photon Upconversion</title>
<link>https://hdl.handle.net/1721.1/144821</link>
<description>Strategies for High-Performance Solid-State Photon Upconversion
Lin, Ting-An
Photon upconversion, a process that converts multiple low-energy photons into a single higher-energy photon, has promising applications such as photovoltaics, bio-imaging, and photochemistry. Among the techniques capable of achieving photon upconversion, manipulating the excited states of organic molecules is especially attractive for practical applications thanks to its capability of operating with low-intensity incoherent light sources. The performance in the solid state, however, is unsatisfactory for applications due to weak optical absorption, internal losses, and the fundamental limit of the upconverting process—triplet-triplet annihilation (TTA)—itself. In this thesis, we investigate strategies to tackle the limitations of solid-state photon upconversion. First, optical absorption is enhanced by embedding an archetypical solid-state infrared-to-visible upconverter into an optical cavity, which results in a 74-fold enhancement in absorption and a two-orders-of-magnitude reduction in required excitation intensity, down to subsolar flux. A charge-exciton hybrid system is also explored as a second approach to enhance absorption. With the detailed mechanism further investigated, the optimized device exhibits 0.04-fold lower excitation intensity without external optical structures. Next, we dive into the internal loss pathways within an upconverter. Consisting of an absorbing and an upconverting layer, solid-state upconverters suffer from back transfer and material aggregation. Here, we demonstrate that a bilayer structure with the absorbing layer diluted into a host material can simultaneously mitigate these losses, resulting in 7 times higher efficiency and 6 times lower excitation intensity. Finally, we explore the very interior of photon upconversion—the potential to achieve TTA efficiency beyond its fundamental limit by utilizing high-lying non-emissive excited states. 
The experimental results support our concept as a design rule for further developing limit-breaking TTA molecules. With these strategies for developing high-performance solid-state photon upconverters, we look forward to further advancement in modern technologies that benefit from photon upconversion.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144821</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Applied Economics</title>
<link>https://hdl.handle.net/1721.1/144815</link>
<description>Essays in Applied Economics
Wong, Michael B.
The first chapter of this dissertation, which is jointly written with Mayara Felix, asks: Who are the winners and losers as businesses increasingly rely on outsourced workers instead of direct employees to perform services such as security and cleaning? How large are the benefits of domestic outsourcing to the economy compared to its costs? To answer these questions, we leverage Brazil's unexpected legalization of outsourcing in 1993, which sharply increased outsourcing among security guards. Using regional variation in the pre-legalization court permissiveness, we find that outsourcing legalization persistently increased total employment of security guards and benefited younger entrant workers. The average wage of security guards also did not fall and may have mildly increased. However, outsourcing legalization also generated a wave of occupational layoffs that reallocated incumbent workers to lower-wage firms. These facts are explained by a model wherein outsourcing both creates productive efficiencies and reduces worker bargaining power. Seen through the model, our estimates imply that one to five years of the annual efficiency gains from outsourcing legalization would exceed the total earnings losses of laid-off incumbents.&#13;
&#13;
The second chapter analyzes how the introduction of a digital currency to a barter community in Toronto affected the volume of trade. The community initially banned cash, but subsequently introduced a digital token that could be transferred among users and redeemed at designated local stores for retail goods. Using comprehensive transactions data, I show that a large monetary expansion persistently increased transaction volume by 70% by enabling monetized trade. However, when token redemption was suddenly halted at a subset of stores, a run on the token ensued and both money-mediated and barter trade volume fell. The findings are most consistent with the predictions of a search-theoretic model wherein money functions as a medium of exchange.&#13;
&#13;
The final chapter studies the effects of partial public housing privatization in Hong Kong. Between 1998 and 2006, Hong Kong’s Tenants Purchase Scheme (TPS) sold a large share of public rental housing to sitting tenants but limited the resale and leasing of sold units. Although TPS did not reallocate housing across households, my quasi-experimental estimates reveal that TPS reduced household sizes and increased household incomes in treated estates. These effects are explained by the relaxation of household-size-contingent income limits and unit allocation rules. TPS therefore benefited well-off tenants, but reduced affordable housing availability for low-income residents.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144815</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stable and Unstable Shock Formation of the Burgers-Hilbert Equation</title>
<link>https://hdl.handle.net/1721.1/144811</link>
<description>Stable and Unstable Shock Formation of the Burgers-Hilbert Equation
Yang, Ruoxuan
The study of singularities has been an important part of the analysis of PDEs. One key type of singularity is the shock. In many cases the shock has a self-similar structure. Recently, the modulated self-similarity technique has achieved success in fluid dynamic equations. In this thesis, we apply this technique to establish finite-time shock formation of the Burgers-Hilbert equation. The shocks are asymptotically self-similar at a single point. The shocks can be stable or unstable, both of which have an explicitly computable singularity profile, and the shock formation time and location are described by explicit ODEs. For the stable shock, the initial data lie in an open set in the &#119867;⁵-norm, and the shock profile is a cusp with Hölder 1/3 continuity. For the unstable shock, the initial data lie in a co-dimension 2 subset of the &#119867;⁹ space, and the shock profile is of Hölder 1/5 continuity. Both cases utilize a transformation to appropriate self-similar coordinates, the quantitative properties of the corresponding self-similar solution to the inviscid Burgers’ equation, and transport estimates. In the case of the unstable shock, we additionally control the two unstable directions by Newton’s iteration.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144811</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model for Set-Based Design at the System-of-Systems Scale with Approaches for Emergent Properties</title>
<link>https://hdl.handle.net/1721.1/144809</link>
<description>A Model for Set-Based Design at the System-of-Systems Scale with Approaches for Emergent Properties
Page, Jonathan Edward
Large projects with long time horizons and substantial capital investments challenge designers and engineers. Inevitably, the project’s large scale leads to competing requirements for the design team to balance. Further, the prolonged lifetimes of these projects create uncertainty regarding the project’s value retention from delivery until decommissioning or disposal.&#13;
&#13;
Set-based design (SBD) offers a solution to the first issue. The method allows designers to intelligently canvass a more extensive solution space using sets and ranges of characteristics to define the design instead of specific instances. The practice of SBD continues to grow, but few examples exist of projects at scale outside of theoretical studies.&#13;
&#13;
Flexibility offers a solution to the latter issue. Designing for flexibility accounts for this uncertainty by intentionally integrating options in the architecture of a project. This practice gives future managers the right, but not the obligation, to execute these options. Unfortunately, however, flexibility is often treated as an emergent property.&#13;
&#13;
This research presents a case study of an instantiation intersecting these principles in the design of a naval vessel. It uses an action research approach to develop and execute an SBD process at a large scale that integrates specific emergent properties as sets of alternatives. It shares the philosophical basis of the process, the team structure formed, the steps of the method, and the documentation created to capture the knowledge. It contributes generalizable knowledge for teams considering establishing their own SBD methods, including starting small to test the process before growing, establishing the first sets, and managing communications.&#13;
&#13;
This research contributes to the development of integration by intersection by sharing how we integrated our sets and what conditions allowed their intersection. It offers a special case of practical point design within SBD called benchmarks, which acted as virtual prototypes and created reusable design knowledge for future efforts. The research could not conclude that treating emergent properties as sets suited the SBD method, but it provides insight towards answering that question in future work, especially regarding how to form those sets so that they intersect appropriately with others.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144809</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Aspects of Military Readiness</title>
<link>https://hdl.handle.net/1721.1/144804</link>
<description>Modeling Aspects of Military Readiness
Paynter, Jonathan
During peacetime, military performance assessment focuses on combat readiness. This thesis focuses on applying tools from operations research to inform and optimize strategic design decisions as well as operational decisions related to military readiness. In particular, we use a variety of optimization techniques to determine how to enhance equipping and personnel readiness and to quantify important trade-offs between personnel readiness and leader development. &#13;
&#13;
Chapter 2 focuses on helicopter maintenance scheduling and is motivated by Department of Defense investment in predictive analytics for component health. We develop an index-style decision policy for integrating signal-based pre-emptive component repairs with the recurring time-based preventive maintenance tasks for the overall, multi-component system. The results highlight that the predictive model generating the component health signal must have exceptionally low false positive rates, 5% or less for use-case settings, or the pre-emptive repair decision policy will actually hurt equipment readiness. Chapter 3 models the impact of career path design policy on personnel readiness. To develop leaders for future assignments, the military implements career path design policy that restricts the sequencing and timing of an individual's assignments. Overly restrictive policy can hurt personnel readiness even when the overall system has enough personnel for every assignment. We develop a mixed integer linear programming formulation and a column-generation inspired algorithm to determine specific changes to the career path design policy that enhance readiness. For a specific U.S. Army officer career field we show how a small change in career path design policy can provide a 9% increase in personnel readiness. Chapter 4 considers the U.S. Army's recently updated assignment process that includes a matching market for the thousands of officers moving to new jobs every year. When there are more available jobs than officers, a personnel manager assesses personnel readiness to decide which jobs enter the market, and then assignments are determined by a deferred acceptance algorithm to maximize applicant satisfaction. We develop a mixed integer formulation that combines these decisions and can be used to generate a Pareto frontier between personnel readiness and applicant satisfaction. 
Then, we develop a tractable solution approach for finding an approximate Pareto frontier using a local search algorithm. We use data from the U.S. Army's 2020 assignment market to show how a 2% decrease in readiness provides room for a 10-20% increase in officer assignment satisfaction.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144804</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Hierarchical Probabilistic Multimodal Data Fusion</title>
<link>https://hdl.handle.net/1721.1/144802</link>
<description>Advances in Hierarchical Probabilistic Multimodal Data Fusion
Dean, Christopher L.
Multimodal data fusion is the process of integrating disparate data sources into a shared representation suitable for complex reasoning. As a result, one can make more precise inferences about the underlying phenomenon than is possible with each data source used in isolation. In this thesis, we adopt a Bayesian view of multimodal data fusion, which formulates reasoning as posterior inference over latent variables. Within the Bayesian setting we present a novel method for data integration that we call lightweight data fusion (LDF). LDF addresses the case where the forward model for a subset of the data sources is unknown or poorly characterized. LDF leverages the remaining data sources to learn an inverse model suitable for posterior inference that combines both types of data. Additionally, we develop a multimodal extension to hierarchical Dirichlet processes (mmHDPs) where, in contrast to the setting for LDF, we lack observation-level correspondences across modalities and the data arise from an implicit latent variable model. Finally, we develop a novel representation for Dirichlet process and HDP mixture models that enables parallelization during inference and extends to more complex models including mmHDPs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144802</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrical Monitoring of Electromechanical Systems</title>
<link>https://hdl.handle.net/1721.1/144801</link>
<description>Electrical Monitoring of Electromechanical Systems
Green, Daisy Hikari
Electromechanical systems provide the world’s backbone for generating and using energy. Electromechanical systems can also experience an innumerable set of failures, causing induced wear and wasted energy, or eventually a complete failure of a critical piece of equipment or system. Degradation or other faults are often associated with subtle but observable changes in electrical consumption. A nonintrusive load monitor (NILM) is a convenient tool for electrical monitoring, in which all loads connected downstream of an electrical panel are monitored with a single set of current and voltage sensors. If collated in a useful way, nonintrusive electrical data can make diagnostic information more easily attainable and improve the efficient operation of critical machines.&#13;
&#13;
Ensuring correct nonintrusive identification of load operation is a challenge under varying operating conditions and fault scenarios. Most nonintrusive load monitoring research assumes that data is static over time. Also, ground truth labels are a scarce resource in industrial scenarios. Thus, a pattern classifier must train on a limited dataset not representative of long-term operation. This thesis employs an understanding of the physics and time-dependency behind changing load behavior to inform pattern classification. New statistical feature extraction techniques are presented for loads with time-varying operation. Results are demonstrated with laboratory experiments and case studies from NILM installations onboard various marine microgrids.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144801</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>End-to-end Learning for Robust Decision Making</title>
<link>https://hdl.handle.net/1721.1/144800</link>
<description>End-to-end Learning for Robust Decision Making
Amini, Alexander Andre
Because the physical world is complex, ambiguous, and unpredictable, autonomous agents must be engineered to exhibit a human-level degree of flexibility and generality — far beyond what we are capable of explicitly programming. Such realizations of autonomy are capable of not only reliably solving a particular problem, but also anticipating what could go wrong in order to strategize, adapt, and continuously learn. Achieving such rich and intricate decision making requires rethinking the foundations of intelligence across all stages of the autonomous learning lifecycle. &#13;
&#13;
In this thesis, we develop new learning-based approaches towards dynamic, resilient, and robust decision making for autonomous systems. We advance robust decision making in the wild by addressing critical challenges that arise at all stages, stemming from the data used for training, to the models that learn on this data, to the algorithms needed to reliably adapt to unexpected events during deployment. We start by exploring how we can computationally design rich, synthetic environments capable of simulating a continuum of hard-to-collect, out-of-distribution edge cases, amenable for use during both training and evaluation. Building on this rich data foundation, we then create efficient, expressive learning models together with the algorithms necessary to optimize their representations and overcome imbalances in under-represented and challenging data. Finally, with our trained models, we turn to the deployment setting, where we should still anticipate that our system will be faced with entirely new scenarios that it has never encountered during training. To this end, we develop adaptive and uncertainty-aware algorithms for estimating model uncertainty and exploiting its presence to realize generalizable decision making, even in the presence of unexpected events.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144800</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resonant Spatial Light Modulation: Optical Programming and Sensing at the Fundamental Limit</title>
<link>https://hdl.handle.net/1721.1/144799</link>
<description>Resonant Spatial Light Modulation: Optical Programming and Sensing at the Fundamental Limit
Panuski, Christopher L.
Fast, energy-efficient, and compact manipulation of multimode optical signals is required for technologies ranging from brain imaging to quantum control, yet remains an open goal for present-day spatial light modulators (SLMs), active metasurfaces, and optical phased arrays. Here, we develop wavelength-scale, high-finesse photonic crystal cavity arrays as a solution to this problem. &#13;
&#13;
Specifically, we demonstrate nanosecond- and femtojoule-order spatial light modulation enabled by four key advances: (i) near-unity vertical coupling to high-finesse microcavities through inverse design, (ii) scalable fabrication of photonic crystal circuits by optimized, 300 mm full-wafer processing, (iii) picometer-precision resonance alignment using automated, closed-loop “holographic trimming”, and (iv) out-of-plane cavity control via a high-speed µLED display. Combining each, our approach weds the latest advances in incoherent and coherent optics to open a previously inaccessible regime of programmability: near-complete spatiotemporal control with a &gt;MHz modulation bandwidth per diffraction-limited mode. Simultaneously operating wavelength-scale modes near the space- and time-bandwidth limits, this work approaches the fundamental limits of multimode optical control.&#13;
&#13;
In developing this technology, we also analyze the fundamental limits of light-matter interaction in these remarkable optical microcavities that continue to drive modern science. Operated in reverse, our device constitutes a high-spatial-resolution focal plane array. Surprisingly, we discover that the fundamental limits of these sensors are ultimately dictated by refractive index variations induced by statistical temperature fluctuations. We present the first theoretical and experimental characterization of the associated thermal noise limits in wavelength-scale microcavities, develop a new class of optical sensors operating at this fundamental limit, and analyze noise cancellation techniques to enable continued development in quantum optical measurement, precision sensing, and low-noise integrated photonics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144799</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>NMR studies of quantum thermalization</title>
<link>https://hdl.handle.net/1721.1/144798</link>
<description>NMR studies of quantum thermalization
Peng, Pai
Quantum thermalization is the generic process by which a quantum system reaches its equilibrium. Quantum thermalization lies at the interface of various distinct fields, ranging from condensed matter physics and quantum cosmology to quantum information science. The converging efforts from different fields open up avenues for the development of quantum many-body physics and quantum technology.&#13;
&#13;
This thesis presents a study of quantum thermalization in solid-state nuclear spin systems using nuclear magnetic resonance techniques. By developing RF control sequences, novel states, observables, and Hamiltonians are created to explore and characterize various quantum thermalization phenomena. &#13;
&#13;
Leveraging the intrinsic disorder, a novel method to detect spin dynamics at the single-site level is introduced. The method is applied to study hydrodynamics emerging from thermalizing quantum systems. In an interacting integrable system, the coexistence of ballistic energy transport and diffusive spin transport is observed. &#13;
&#13;
With accurate Hamiltonian engineering RF sequences, the thermalization of driven systems is discussed. Exponentially slow thermalization is observed experimentally by measuring the prethermal energy autocorrelation. Beyond the prethermal energy, an even more robust prethermal conserved quantity is discovered. The result suggests a Floquet phase may exist beyond the prethermal regime.&#13;
&#13;
To understand many-body localized (MBL) systems, an algorithm to compute local integrals of motion (LIOMs) is designed. From LIOMs, various localization lengths are extracted and their critical behavior is studied.&#13;
&#13;
To further improve the Hamiltonian engineering sequences, deep reinforcement learning (DRL) techniques are adopted. The sequences designed by DRL show better decoupling performance than the previously best-known sequence. Beyond that, a new and advantageous pattern is discovered in the DRL sequences, which serves as a useful building block for more complex Hamiltonian engineering sequences.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144798</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power Line Communication for Low-Data-Rate Energy Control</title>
<link>https://hdl.handle.net/1721.1/144797</link>
<description>Power Line Communication for Low-Data-Rate Energy Control
Aderibole, Adedayo Olumayowa
Peaks in electricity demand shape power system operations at various scales. In large buildings, fees based on monthly demand peaks can comprise 30–70% of electricity bills. In sections of distribution grids, demand peaks can stress transformers, increasing the risk of equipment failure. Thermostatically controlled loads (TCLs) – electrical loads that regulate temperature – such as air conditioners and heat pumps are key drivers of demand peaks. Therefore, analogously to the way a good driver is aware of neighboring cars, TCLs can coordinate with other TCLs within a building or section of a distribution grid to reduce demand peaks. Although schemes that enable groups of TCLs to limit their peak demand abound, in most cases these schemes require a reliable communication infrastructure to properly coordinate TCLs or similar loads. However, inadequate attention has been paid to developing novel communication strategies, or tailoring existing ones, to the requirements of the demand-leveling control of TCLs. Accordingly, this thesis takes a holistic approach to developing a low-cost, wide-coverage, and reliable communication infrastructure tailored to the nuances of the demand-leveling control of TCLs. &#13;
&#13;
Although power lines were not originally designed for communication, their low cost, widespread availability, and ease of connection make them attractive for facilitating the coordination of TCLs for peak demand control. Additionally, in contrast to wireless communication media, power lines are more robust against the risk of cyberattacks and extreme environmental conditions. Unfortunately, loads and electrical switchgear can easily interfere with power line communication (PLC) in both direct and subtle ways. However, due to the infrequent switching needs of TCLs and the long physical time constants associated with their thermal loads, the demand-leveling control of TCLs requires only low data rates for communication. This work leverages the low-data-rate communication requirement to develop a suite of signaling techniques that increase PLC reliability and facilitate the effective coordination of TCLs for peak demand shaving.&#13;
&#13;
First, this thesis presents hardware that enables TCLs to communicate with one another using the power lines in a building or facility. Next, signaling techniques that substantially enhance PLC reliability by responding creatively to the unique requirements imposed by low-data-rate communication are presented. These techniques, which include chirp spread spectrum, distributed repeating, and time division multiple access, leverage so-called “quasi-peak” regulations to improve the reliability of low-data-rate PLC. By way of illustration, this thesis demonstrates a novel low-data-rate PLC system’s ability to facilitate effective demand-leveling control of TCLs in a large building or facility, using a 24-floor apartment building as a case study. The results show that the proposed low-data-rate PLC system provides a low-cost, wide-coverage, and reliable communication infrastructure for peak demand shaving.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144797</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online Information-Aware Motion Planning with Model Improvement for Uncertain Mobile Robotics</title>
<link>https://hdl.handle.net/1721.1/144796</link>
<description>Online Information-Aware Motion Planning with Model Improvement for Uncertain Mobile Robotics
Albee, Keenan Eugene Sumner
Mobile robots increasingly interact with unstructured, uncertain environments while performing useful tasks. To navigate their surroundings, these robots rely on models of their own motion which themselves might not be perfectly known. Some of these uncertainties are considered within the context of the model, like unknown inertial properties; others are simply considered beyond the model's fidelity and are treated as disturbance terms. &#13;
    &#13;
Accounting for these uncertainties when performing robotic motion planning provides a richer approximation of reality, unlocking tools such as robustness or chance-constrained safety guarantees, improving motion planning optimality, and more. However, the current literature largely divides between approaches which are purely robust (or probabilistically robust) and those which consider motion planning's ability to gather information but lack robustness. Moreover, these information-aware approaches rarely consider information content about the robot's own system model. Both robustness and information-awareness are desirable and can even be complementary: model information gained online is useful for performing model improvement that can aid in replanning and adjusting robustness guarantees on-the-fly. However, model-improving motion planning methods that are real-time, online-updateable, and equipped with robustness guarantees are lacking.&#13;
  &#13;
This thesis bridges this research gap by creating online model information-aware motion planning with control robustness guarantees. A motion planning algorithm, RATTLE, is introduced, consisting of an online-updateable planning and control hierarchy capable of receiving model updates en route. Model parametric information-aware local planning and a robust online-updateable control strategy are demonstrated, as well as techniques to interpret model uncertainty and to guide the learning tradeoff.&#13;
  &#13;
This approach can be particularly appealing for space robotic systems which demand safe execution but which also desire optimal, fuel-conserving solutions. As such, two space close proximity operations use cases serve as guiding scenarios to demonstrate the capabilities of RATTLE: microgravity cargo maneuvering and autonomous on-orbit rendezvous with uncertain tumbling targets. A fully-fledged autonomous rendezvous framework, TRACE, is introduced in the context of RATTLE for the latter scenario. Simulation and on-orbit results on the International Space Station are presented for both scenarios using the Astrobee robotic free-flyers, highlighting each scenario's uncertainty sources, the benefits of information-aware planning, and real-time resource-constrained hardware performance. The first known fully autonomous rendezvous with an uncertain tumbling target and the first known microgravity information-aware planning demonstration are conducted on-orbit. Tying together RATTLE's contributions, these results provide a concrete hardware implementation example of the benefits and challenges of online information-aware planning coupled with model updating for mobile robotic systems operating with various uncertainty sources.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144796</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing differential scanning calorimetry as a retrospective dosimetry method for the verification of uranium enrichment activities</title>
<link>https://hdl.handle.net/1721.1/144794</link>
<description>Assessing differential scanning calorimetry as a retrospective dosimetry method for the verification of uranium enrichment activities
Connick, Rachel Clare
Fissile material produced for non-peaceful purposes may require some level of verification that it has all been accounted for in a nuclear disarmament scenario. Physical signatures from the uranium enrichment process can support and validate declared production. This thesis proposes a method to measure the signature from radiation effects in uranium enrichment equipment, using a retrospective dosimetry approach. Proof-of-concept experiments are performed to demonstrate sufficient sensitivity for the application, based on polytetrafluoroethylene (PTFE) and the thermal analysis techniques differential scanning calorimetry (DSC) and fast scanning calorimetry (FSC). The DSC-PTFE system shows statistically significant sensitivity in the dose ranges expected. The FSC-PTFE system demonstrates reduced precision compared to the DSC-PTFE system, but it also shows potential for extracting additional information about the radiation, like the particle type and energy. To improve precision in the FSC-PTFE system, additional data reduction and error propagation methods are developed, with a specific focus on the FSC "hook." These are demonstrated using an energy-conservation model to calculate the mass of the sample through the magnitude of the hook. The DSC-PTFE and FSC-PTFE systems demonstrate enough potential to justify further investigation into radiation effects as physical signatures. Through these methods, histories of unsafeguarded enrichment facilities can be reconstructed, and fissile material produced therein can be confidently transitioned to peaceful uses.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144794</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Approaches to Nonparametric Causal Inference</title>
<link>https://hdl.handle.net/1721.1/144792</link>
<description>Algorithmic Approaches to Nonparametric Causal Inference
Cohen, Peter L.
This thesis presents procedures for performing inferences of causal parameters across an array of contexts including observational studies, completely randomized designs, paired experiments, and covariate-adaptive designs.  First, we discuss an application of convex optimization to conduct directional inference and sensitivity analyses in matched observational studies.  We design an algorithm which maximizes the signal-to-noise ratio while accounting for unobserved confounding.  We analyze the asymptotic distributional behavior of the algorithm's output to develop asymptotically valid hypothesis tests for causal effects.  The resulting procedure achieves the maximal design sensitivity over a broad class of procedures.  Second, we examine the role of feature information in drawing high-precision inferences of effects in completely randomized experiments.  We construct a calibration technique based around linear regression which constructs imputation estimators with upper bounds on the asymptotic variance of the estimator.  We show that this calibration procedure is applicable to any imputation estimator which may be semiparametrically efficient and automatically certifies that the resulting nonlinear regression-adjusted estimator is at least as asymptotically precise as the difference in means, a feature that was previously not guaranteed for nonlinear regression-adjusted estimators under model misspecification.  Third, we introduce Gaussian prepivoting: an algorithmic technique to construct test statistics for which randomization inference remains asymptotically valid even when symmetries underlying the randomization hypothesis are violated under the null.  We demonstrate that randomization tests based upon prepivoted statistics are finite-sample exact under sharp nulls while they asymptotically control the probability of false rejection under weak nulls.
This allows for the formation of confidence regions for treatment effects with simultaneous interpretations as exact confidence regions for homogeneous additive treatment effects and asymptotic confidence regions for heterogeneous additive effects; thereby unifying Fisherian and Neymanian inference for many experimental designs including rerandomized experiments.  Fourth, we construct a nested hierarchy of resampling algorithms which exploit probabilistic structure in superpopulation, fixed covariate, and finite population models to facilitate nonparametric inference for a wide variety of statistics in completely randomized designs.  The resampling algorithms extend the classical bootstrap paradigm by leveraging modern results on regression-adjustment and optimal transport to achieve significant gains under fixed covariate and finite population models.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144792</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Corporate Finance Theory and Dynamic Games</title>
<link>https://hdl.handle.net/1721.1/144782</link>
<description>Essays on Corporate Finance Theory and Dynamic Games
Sun, Jian
This dissertation consists of three chapters.&#13;
&#13;
In Chapter 1, I study the optimal algorithmic disclosure in a lending market where lenders use a predictive algorithm to mitigate adverse selection. The predictive algorithm is unobservable to borrowers and uses a manipulable borrower feature as input. A regulator maximizes market efficiency by disclosing information about the statistical properties of variables embedded in the predictive algorithm to borrowers. Under the optimal disclosure policy, the posterior belief consists of two disjoint regions in which the borrower feature is more relevant and less relevant in predicting borrower quality, respectively. The optimal disclosure policy differentiates posterior lending market equilibria by their equilibrium data manipulation levels. Equilibria with more data manipulation hurt market efficiency, but also discourage lenders’ use of the borrower feature. Equilibria with less data manipulation benefit from this discouraged use and generate more efficient market outcomes. Unconditionally, the borrower feature is used less intensively under optimal disclosure.&#13;
&#13;
In Chapter 2, joint work with Mehmet Ekmekci, Leandro Gorno, Lucas Maestri and Dong Wei, we study a dynamic stopping game between a principal and an agent. The agent is privately informed about his type. The principal learns about the agent’s type from a noisy performance measure, which can be manipulated by the agent via a costly and hidden action. We fully characterize the unique Markov equilibrium of this game. We find that terminations/market crashes are often preceded by a spike in (expected) performance. Our model also predicts that, due to endogenous signal manipulation, too much transparency can inhibit learning. As the players get arbitrarily patient, the principal elicits no useful information from the observed signal.&#13;
&#13;
In Chapter 3, joint work with Dan Luo, we study SPACs in a continuous-time delegated investment model. The sponsor has an increasing incentive to propose unprofitable projects to the investor over time; in response, the investor exerts more stringent screening based on her information. The screening helps curb the sponsor’s moral hazard, but also dampens the disciplining effect of partial alignment in incentives. When the investor’s information is sufficiently noisy, the second effect dominates, so giving the investor the control over investment approval reduces everyone’s welfare.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144782</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capillary-Driven Condensation for Heat Transfer Enhancement in Steam Power Plants</title>
<link>https://hdl.handle.net/1721.1/144780</link>
<description>Capillary-Driven Condensation for Heat Transfer Enhancement in Steam Power Plants
Cruz, Samuel Steven
Condensation is a phenomenon that is ubiquitous in nature and used to effectively transfer heat in many important industrial applications including thermal management of electronics, steam power generation, and natural gas processing. Industry relies mainly on filmwise condensation, where the condensate forms an insulating, thick liquid film on the condensation surface, posing a large thermal resistance. For almost a century, much work has explored developing surfaces that promote the nucleation, growth, coalescence, and effective shedding of mobile condensing droplets from surfaces, called dropwise condensation, which is known to enhance heat transfer by up to an order of magnitude. However, the requirement for ultra-thin coatings has hampered the wide adoption of this form of condensation, as thin hydrophobic coatings degrade over time in various industrial applications. In this thesis, we model, fabricate, optimize, and experimentally demonstrate a proof-of-concept for a novel condensation approach which we term capillary-driven condensation. The method uses a hierarchical structure consisting of a hydrophobic porous membrane attached on top of a wicking structure that is firmly bonded to the condenser surface. The wicking structure and the membrane can be separately tailored to maximize the fluid flow in the wick and its effective thermal conductivity, as well as the maximum capillary pressure that the membrane can sustain to push fluid through a viscous pressure drop in the porous wick to an exit port. The geometry can be optimized to reduce the thermal resistance of the structure, as well as maximize the amount of condensate that can be removed passively by capillarity. To demonstrate the viability of this condensation method, we fabricated the proposed structure with highly-defined geometry utilizing silicon microfabrication techniques.
The result is a surface which is able to constrain a thin film of condensate within a high-thermal-conductivity wicking structure while the top condensation surface appears dry, despite sustaining condensation rates above those of filmwise condensation. The thickness of this layer and the geometry of the membrane can be rationally designed to maximize the heat transfer coefficient even beyond dropwise condensation. Heat transfer measurements indicate a potential range of enhancement of ≈ 40% to ≈ 400%, which is to be confirmed by more sensitive experiments that reduce error. The results from this thesis show a proof-of-concept and support the promise of capillary-driven condensation surfaces for various heat transfer applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144780</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Finance and Climate Risk</title>
<link>https://hdl.handle.net/1721.1/144779</link>
<description>Essays on Finance and Climate Risk
Sastry, Parinitha
This thesis consists of three chapters on climate risks and financial markets. The first chapter studies how residential mortgage contracts distribute flood risk exposures across banks, households, and the government flood insurer. I find that banks offload flood risk to the government through flood insurance contracts, and to households through higher required down payments. This credit rationing shifts the composition of mortgages in flood zones towards richer and higher credit quality borrowers. The second chapter, joint with David Thesmar, Augustin Landier, and Jean-Francois Bonnefon, characterizes investors’ moral preferences in a parsimonious experimental setting, where we auction stocks with various ethical features. We find strong evidence that investors seek to align their investments with their social values (“value alignment”), and find no evidence of behavior driven by the social impact of investment decisions (“impact-seeking preferences”). The third chapter proposes a simple structural model to study substitution patterns within the class of safe and liquid assets at the extreme short-end of the yield curve. Demand system estimates suggest that treasury securities and financial commercial paper are nearly perfect substitutes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144779</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Information Technology, Human Capital, and the Future of Work</title>
<link>https://hdl.handle.net/1721.1/144778</link>
<description>Essays on Information Technology, Human Capital, and the Future of Work
Steffen, Sebastian
This dissertation contains three essays concerning the economics of information technology, human capital, and the future of work. In the first essay, ’Occupational Change: Automation and Reskilling Risks’, I develop a methodology to study occupational skill demands and estimate the returns to skills by leveraging novel data from over 200 million online job postings from 2010 until 2020. I find large heterogeneity in skill returns across industries and identify potential (re)skilling investment opportunities for workers.&#13;
&#13;
In the second essay, ’Digital Resilience: How Work-From-Home Feasibility Affects Firm Performance’, I build on the methodology and data from the previous chapter to measure how feasible it is for firms to shift their workforce to remote work. Using these data, I then causally identify how much remote work practices aided firms’ resilience against the Covid-19 pandemic, as measured by sales, net income, stock market returns, and volatility. The findings highlight that firms need to strategically manage the labor composition and digitization of their organizations, and consider that work-from-home practices, besides their many other advantages, are an effective way to hedge against operational risks.&#13;
&#13;
In the final essay, ’Treating the Symptoms or the Cause? Substantive and Symbolic Talent Acquisition in Response to Data Breaches’, I use the data from the first chapter to study firms’ hiring responses to data breaches. Advancing the theory of substantive and symbolic IT adoption to complementary human capital acquisitions, I find that firms significantly increase their hiring of cybersecurity as well as public relations and legal workers after suffering a breach. I also find that public scrutiny can serve as an effective mechanism to shift firms’ hiring investments toward substantive, rather than symbolic, measures. Given the increase in the volume and severity of cyberattacks, these results provide important and timely insights into firms’ responses and incentives to more substantively safeguard their data.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144778</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Technology and Work</title>
<link>https://hdl.handle.net/1721.1/144769</link>
<description>Essays on Technology and Work
Tuhkuri, Joonas
This thesis consists of four papers on technology, work, skills, and personality using novel large-scale data and methods. The first paper (Chapter 1, with Johannes Hirvonen and Aapo Stenhammar) presents novel evidence on the effects of advanced technologies on employment, skill demand, and firm performance. The main finding is that advanced technologies led to increases in employment and no change in skill composition. Our main research design focuses on a technology subsidy program in Finland that induced sharp increases in technology investment in manufacturing firms. Our data directly measure multiple technologies and skills and track firms and workers over time. We demonstrate novel text analysis and machine learning methods to perform matching and to measure specific technological changes. To understand our findings, we outline a theoretical framework that contrasts two types of technological change: process versus product. We document that the firms used new technologies to produce new types of output rather than replace workers with technologies within the same type of production. The results contrast with the ideas that technologies necessarily replace workers or are skill biased. &#13;
&#13;
The second paper (Chapter 2, with Ramin Izadi) investigates which personality traits and skills help workers to deal with a changing environment. Labor markets are in constant change. This paper documents how responses to labor-market shocks vary by individuals’ psychological traits. We construct measures of cognitive ability, extraversion, and conscientiousness using standardized personality and cognitive tests administered during military service to 79% of Finnish men born 1962–1979. We analyze establishment closures and mass layoffs between 1995 and 2010 and document heterogeneous responses to the shock. Extraversion is the strongest predictor of adaptation: the negative effect of a mass layoff on earnings is 20% smaller for those with one standard deviation higher scores of extraversion. Conscientiousness appears to have no differential impact conditional on other traits. Cognitive ability and education predict a significantly smaller initial drop in earnings but have no long-term advantage. Our findings appear to be driven directly by smaller dis-employment effects: extraverted and high cognitive-ability individuals find re-employment faster in a similar occupation and industry to those they worked in before. Extraversion’s adaptive value is robust to controlling for pre-shock education, occupation, and industry, which rules out selection into different careers as the driving mechanism. Extraverts are slightly more likely to retain employment in their current establishment during a mass layoff event, but the retention effect is not large enough to explain the smaller earnings drop. &#13;
&#13;
The third paper (Chapter 3, with Ramin Izadi) explores how different dimensions of personality predict school vs. labor-market performance, and how the value of these traits changed over time. We answer these questions using data that includes multidimensional personality and cognitive test scores from mandatory military conscription for approximately 80% of Finnish men. We document that some dimensions of noncognitive skills are productive at school, and some dimensions are counterproductive at school but still valued in the labor market. Action-oriented traits (activity, sociability, and masculinity) predict low school performance but high labor market performance. School-oriented traits, such as dutifulness, deliberation, and achievement striving, predict high school performance but are not independently valued in the labor market after controlling for school achievement. We further document that the labor-market premium to action-oriented personality traits has rapidly increased over the past two decades. To interpret the empirical results, we outline a model of multidimensional skill specialization. The model and evidence highlight two paths to labor-market success: one through school-oriented traits and formal skills, and one through action-oriented traits and informal skills. &#13;
&#13;
The fourth paper (Chapter 4) analyzes the impact of manufacturing decline on children. To do so, it considers local employment structure—characterizing lost manufacturing jobs and left-behind places—high-school dropout rates, and college access in the US over 1990–2010. To establish a basis for causal inference, the paper uses variations in trade exposure from China, following its entry to the WTO, as an instrument for manufacturing decline in the US. While the literature on job loss has emphasized negative effects on children, the main conclusion of this research is that the rapid US manufacturing decline decreased high-school dropout rates and possibly increased college access. The magnitudes of the estimates suggest that for every 3-percentage-point decline in manufacturing as a share of total employment, the high-school dropout rate declined by 1 percentage point. The effects are largest in the areas with high racial and socioeconomic segregation and in those with larger African American populations. The results are consistent with the idea that the manufacturing decline increased returns and decreased opportunity costs of education, and with sociological accounts linking the working-class environment and children’s education.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144769</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capturing Distributions over Worlds for Robotics with Spatial Scene Grammars</title>
<link>https://hdl.handle.net/1721.1/144763</link>
<description>Capturing Distributions over Worlds for Robotics with Spatial Scene Grammars
Izatt, Gregory
Having a precise understanding of the distribution over worlds a robot will face is critical to most problems in robotics. This distribution informs mechanical and software design specifications, provides strong priors to perception, and quantifies the real-world relevance of simulation and lab testing. However, representing and quantifying this distribution is an open and difficult problem, as these worlds can vary in myriad continuous and discrete ways. This thesis is concerned with a particular class of probabilistic procedural models – spatial scene grammars – that are tailored to describe hybrid discrete-and-continuous distributions over environments with varying numbers, types, and spatial poses of objects. We develop a spatial scene grammar formulation that is sufficiently expressive to capture the structure of practically relevant environments, but is carefully restricted to remain amenable to various forms of probabilistic inference. We show that we can sample diverse scenes from these grammars, even under the presence of constraints on scene contents and object poses; that we can parse scenes with this grammar model via a novel set of mixed-integer parsing techniques to achieve detailed scene understanding and part-level outlier detection; and that we can fit unknown parameters in the model to data via an approximate expectation-maximization algorithm.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144763</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon Nanotubes for Space Electronics: Enabling New Applications with Emerging Technologies</title>
<link>https://hdl.handle.net/1721.1/144762</link>
<description>Carbon Nanotubes for Space Electronics: Enabling New Applications with Emerging Technologies
Kanhaiya, Pritpal Singh
Physical scaling of silicon-based field-effect transistors (FETs) yields diminishing returns while also becoming increasingly challenging. This has motivated the search for beyond-silicon technologies based on materials such as carbon nanotubes (CNTs) and transition metal dichalcogenides (TMDs). However, relying on new materials alone is insufficient to realize next-generation electronics. Therefore, we must coordinate advances across the entire computing stack, whereby we leverage new materials and device architectures to enable new circuits and systems, and ultimately realize exciting new applications. In this thesis, as a case study, we use CNT-based electronics, a promising technology projected to provide orders-of-magnitude energy-delay-product (EDP) improvement versus conventional silicon-based digital VLSI systems. I experimentally demonstrate new three-dimensional (3D) device and circuit architectures leveraging the unique low-temperature processing of CNTs, demonstrate the first CNT-based SRAM arrays, and realize new applications with CNT-based radiation-tolerant electronics to drive future space missions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144762</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Supervised Learning for Speech Processing</title>
<link>https://hdl.handle.net/1721.1/144761</link>
<description>Self-Supervised Learning for Speech Processing
Chung, Yu-An
Deep neural networks trained with supervised learning algorithms on large amounts of labeled speech data have achieved remarkable performance on various spoken language processing applications, often setting the state of the art on the corresponding leaderboards. However, the fact that training these systems relies on large amounts of annotated speech poses a scalability bottleneck for the continued advancement of state-of-the-art performance, and an even more fundamental barrier for the deployment of deep neural networks in speech domains where labeled data are intrinsically rare, costly, or time-consuming to collect.&#13;
&#13;
In contrast to annotated speech, untranscribed audio is often much cheaper to accumulate. In this thesis, we explore the use of self-supervised learning---a learning paradigm where the learning target is generated from the input itself---for leveraging such easily scalable resources to improve the performance of spoken language technology. Specifically, we propose two self-supervised algorithms, one based on the idea of "future prediction" and the other based on the idea of "predicting the masked from the unmasked," for learning contextualized speech representations from unlabeled speech data. We show that our self-supervised algorithms are capable of learning representations that transform high-level properties of speech signals such as their phonetic contents and speaker characteristics into a more accessible form than traditional acoustic features, and demonstrate their effectiveness in improving the performance of deep neural networks on a wide range of speech processing tasks. In addition to presenting new learning algorithms, we also provide extensive analysis aiming to understand the properties of the learned self-supervised representations, as well as disclosing the design factors that make one self-supervised model different from the other.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144761</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nano Vacuum Channel Devices for Electronics and Ultrafast Nanophotonics</title>
<link>https://hdl.handle.net/1721.1/144760</link>
<description>Nano Vacuum Channel Devices for Electronics and Ultrafast Nanophotonics
Turchetti, Marco
Recent years have seen a surge of interest in nano vacuum channel (NVC) devices due to their low power requirements, radiation hardness, integrability, and ultrafast switching times.  Planar NVC devices are ideal candidates for electronics that need to operate in harsh environments such as space. Moreover, recent work, some of which is discussed in this thesis, has demonstrated a rectified, field-driven current response from planar NVCs that extends to petahertz-scale frequencies. Such petahertz electronic devices enable field-resolved measurements of ultrafast phenomena and the capability to decode information stored directly on the optical field waveform.  In this thesis, state-of-the-art nanotechnology techniques are leveraged to develop a reliable nanofabrication process to pattern planar NVC devices using metallic and refractory materials. Their emission properties in response to both electrical and optical fields are investigated through simulation and testing. Finally, their use for electronics and optoelectronics applications is demonstrated and discussed. In particular, this thesis focuses on their use for building NVC devices for radiation-resistant logic, and for the development of novel optical-field processing techniques such as field sampling to perform time-domain spectroscopy with attosecond resolution. The results from this thesis have direct application in many fields, from metrology to communication to information processing, and represent an important contribution to the development of radiation-resistant and petahertz electronics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144760</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic Molecular Models for the Oxygen Reduction Active Sites in Heteroatom-Doped Graphitic Electrocatalysts: Linking Heterogeneous and Homogeneous Electrocatalysis</title>
<link>https://hdl.handle.net/1721.1/144759</link>
<description>Synthetic Molecular Models for the Oxygen Reduction Active Sites in Heteroatom-Doped Graphitic Electrocatalysts: Linking Heterogeneous and Homogeneous Electrocatalysis
Marshall-Roth, Travis
The development and deployment of low-temperature fuel cell technologies require the creation of low-cost, platinum-free catalysts for the oxygen reduction reaction (ORR) that are highly active, selective, and durable. To this point, the advancement of potential replacement catalysts such as metal- and nitrogen-doped carbon (M-N-C) materials has been hampered by a lack of detail regarding the structures of the M-N4 active sites that facilitate the ORR, coupled with a paucity of characterization of the electrochemical behavior and properties of these sites. In this work, we present four studies on the structure, spectroscopy, and the electrochemical properties and reactivity of several Fe-N4 complexes as potential model complexes for the sites in iron- and nitrogen-doped carbon (Fe-N-C) catalysts and as platforms for elucidating catalyst design principles relevant to both homogeneous non-aqueous and heterogeneous aqueous ORR electrocatalysis.&#13;
&#13;
In Chapter 2, we synthesize a functional structural molecular platform for modeling the ORR active sites in Fe-N-C materials. We show that the complex more faithfully represents the Fe-containing sites in Fe-N-C materials compared to legacy pyrrolic and pyridinic macrocycle complexes, providing a promising platform for the construction and study of next-generation Fe-N-C materials characterized by improved active site homogeneity and increased site density without the need for high-temperature pyrolysis.&#13;
&#13;
In Chapter 3, we investigate the electrochemical response of the model complex synthesized in Chapter 2 to molecular poisons to bolster the previous claims made in the literature about the relative contributions of metal-centered and metal-free ORR activity in acidic and alkaline electrolytes. We show that acidic and alkaline electrolytes cause different ORR contributions from the model complexes, suggesting that improving the poison tolerance, durability, and activity of the metal-containing active sites should be the focus of future Fe-N-C material synthesis research.&#13;
&#13;
In Chapter 4, we examine the electrochemical ORR performance of a family of macrocycles spanning a variety of Fe-N4, Fe-C2N2, and Fe-C4 coordination environments. Our results highlight that careful control over the reaction environment is required to optimize catalysts for a specific application, and that, in general, the rate-overpotential scaling relationships for ORR allow for substantially more efficient catalysis in heterogeneous aqueous environments.&#13;
&#13;
In Chapter 5, we evaluate the homogeneous ORR performance of an Fe-N-C model complex derivative in non-aqueous weak acid buffers. The data indicate that structural features on the catalyst enable substantial rate enhancements as the acidity of the proton donor is reduced. Our results demonstrate that changes to the iron coordination environment, coupled with interactions with the electrolyte have the ability to radically alter the catalytic activity of Fe-N4 macrocycle complexes.&#13;
&#13;
The development of more sustainable and affordable fuel cell devices relies upon the evolution and elaboration of stable, highly active, and platinum-free electrocatalysts for the ORR. Through the projects outlined in this work, we have taken steps to bolster the structural identification of the nitrogen-ligated sites in Fe-N-C materials and used the resulting model complexes to investigate the role that both electrolytes and catalyst structure play in controlling ORR catalysis both in solution and on solid catalyst surfaces. In aggregate, this dissertation seeks to highlight crosstalk between molecular and heterogeneous electrochemistry, aiming to simultaneously inform Fe-N-C development for fuel cell applications and molecular catalyst design for the oxygen reduction reaction.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144759</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Training for Natural Language Processing</title>
<link>https://hdl.handle.net/1721.1/144758</link>
<description>Self-Training for Natural Language Processing
Luo, Hongyin
Data annotation is critical for machine learning based natural language processing models. Although many large-scale corpora and standard benchmarks have been annotated and published, they cannot cover all possible applications. As a result, it is difficult to transfer models trained with public corpora to tasks that require domain-specific knowledge, different inference skills, unseen text styles, and explainability. In this thesis, we explore self-training methods for mitigating the data distribution gaps between training and evaluation domains and tasks. In contrast to traditional self-training methods that study the best practice of training models with real data and pseudo labels, we also explore the possibility of automatically generating synthetic data for better explainability, robustness, and domain adaptation performance. We show the performance improvement achieved by our methods on different natural language understanding and generation tasks, including question answering, question generation, and dialog response selection.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144758</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient and Robust Algorithms for Practical Machine Learning</title>
<link>https://hdl.handle.net/1721.1/144757</link>
<description>Efficient and Robust Algorithms for Practical Machine Learning
Bao, Yujia
Machine learning models are biased when trained on biased datasets. Many recent approaches have been proposed to mitigate biases when they are identified a priori. However, in real-world applications, annotating biases is not only time-consuming but also challenging. This thesis considers three different scenarios and presents novel algorithms for learning robust models. These algorithms are efficient as they do not require explicit annotations of the biases, enabling practical machine learning.&#13;
&#13;
First, we introduce an algorithm that operates on data collected from multiple environments, across which correlations between bias features and the label may vary. We show that when using a classifier trained on one environment to make predictions on examples from a different environment, its mistakes are informative of the hidden biases. We then leverage these mistakes to create groups of examples whose interpolation yields a distribution with only stable correlations. Our algorithm achieves the new state-of-the-art on four text and image classification tasks.&#13;
&#13;
We then consider the situation where we lack access to multiple environments, a common scenario for new tasks or resource-limited tasks. We show that in real-world applications related tasks often share similar biases. Based on this observation, we propose an algorithm that infers bias features from a resource-rich source task and transfers this knowledge to the target task. Compared to 15 baselines across five datasets, our method consistently delivers significant performance gain.&#13;
&#13;
Finally, we study automatic bias detection where we are only given a set of input-label pairs. Our algorithm learns to split the dataset so that classifiers trained on the training split cannot generalize to the testing split. The performance gap provides a proxy for measuring the degree of bias in the learned features and can therefore be used to identify unknown biases. Experiments on six NLP and vision tasks demonstrate that our method is able to generate spurious splits that correlate with human-identified biases.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144757</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Investigation of Critical Heat Flux Enhancement on Engineered Surfaces with Infrared Thermometry</title>
<link>https://hdl.handle.net/1721.1/144755</link>
<description>Experimental Investigation of Critical Heat Flux Enhancement on Engineered Surfaces with Infrared Thermometry
Wang, Chi
The boiling crisis is one of the most intriguing and impactful open problems in thermal science and engineering. The prediction and enhancement of the heat flux at which this boiling crisis occurs (also known as critical heat flux, or CHF) has been the focus of boiling heat transfer research for over a century, as CHF sets an upper limit to the operating heat flux for applications cooled by boiling heat transfer. Surfaces engineered with microscale and nanoscale features have shown great potential to enhance CHF. However, while many hypotheses have been formulated, the mechanisms leading to such enhancement are still debated. This knowledge gap arises from the lack of measurements, e.g., of time-dependent local heat flux and temperature distributions, through which it would be possible to verify hypotheses and modeling assumptions.&#13;
&#13;
This work contributes to removing this technical roadblock and provides new, unique insights into the boiling phenomena and the boiling crisis on surfaces with engineered hydrophilic micropillars.&#13;
&#13;
To achieve these goals, we have developed a special heating surface enabling the use of high-resolution infrared thermometry to obtain the time-dependent temperature and heat flux distribution on the boiling surface. Thanks to this technical development, we have been able to collect first-of-a-kind data and shed light on the phenomena involved in the pool and subcooled flow boiling of water on these surfaces, including above-atmospheric pressure conditions (4 bar).&#13;
&#13;
We show that the presence of an intra-pillar liquid layer is the primary reason for CHF enhancement on surfaces with engineered micro-pillars in both pool and flow boiling conditions. However, the energy removed by evaporation of this liquid layer is small compared to the CHF enhancement, e.g., only 30% is removed by evaporation in pool boiling conditions. Instead, by delaying the formation of dry spots, the intra-pillar liquid layer empowers other heat transfer mechanisms, e.g., forced convection and transient conduction, to remove more energy from the surface.&#13;
&#13;
This intra-pillar liquid layer consists of two comparable contributions. Initially, liquid is trapped within the pillars as vapor bubbles grow on top of them. Then, liquid is resupplied by capillary wicking effects. The experimental results show that wicking is only effective within a certain distance (&lt;1 mm) from the apparent triple contact line. This wicking distance is found to depend on the local heat flux, which is limited by thermal diffusion in the heater substrate. We can reproduce these observations with mechanistic models describing the evaporation of the intra-pillar liquid layer and the dynamics of the flow wicked by the micro-pillars.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144755</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Aspects of Perception-Aware Motion Planning on Resource-Constrained Platforms</title>
<link>https://hdl.handle.net/1721.1/144749</link>
<description>Algorithmic Aspects of Perception-Aware Motion Planning on Resource-Constrained Platforms
Spasojevic, Igor
Autonomous micro aerial vehicles (MAVs) are becoming an integral tool in numerous applications involving time-critical missions in GPS-denied environments. Due to their small size and lean energy budget, MAVs are often equipped with a camera to aid ego-localization. This introduces at least two fundamental challenges. First, cameras are of little use for state estimation if there is an insufficient quantity of visual information in the environment of the robot. Second, MAVs have only a limited amount of onboard computational resources. Should extracting motion estimates require excessive computational effort, these agents would, in order to prevent fatal crashes, be confined to such low speeds that their deployment would be of questionable value.&#13;
&#13;
This thesis studies algorithmic aspects of the question: “How quickly can a vision-driven MAV traverse a given path, while maintaining accurate state estimates at all times?” We seek tractable families of problems involving designing a time-optimal open-loop sequence of controls for a MAV subject to both actuation and perception constraints that allow the robot to leverage its onboard camera for accurate state estimation. Prior work has either focused on asymptotically optimal search-based approaches which are challenging to implement in real time, or fast local-optimization-based methods with no guarantees on global constraint satisfaction, stability, or optimality.&#13;
&#13;
We present three contributions. First, we extend optimality guarantees of a robust, computationally efficient algorithm for the time-optimal path parametrization problem. Second, we demonstrate the convexity of a general family of perception constraints which require a quadrotor to maintain a sufficient amount of information within the field of view of its forward-facing onboard camera. Third, we devise computationally efficient algorithms for guiding the visual attention of a fully-actuated multirotor to traverse a path in minimum time while keeping the computational burden of extracting incremental motion estimates below a set threshold. Together, these contributions serve as stepping stones towards allowing MAVs to execute missions autonomously at operational speeds.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144749</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Representations for Limited and Heterogeneous Medical Data</title>
<link>https://hdl.handle.net/1721.1/144745</link>
<description>Learning Representations for Limited and Heterogeneous Medical Data
Weng, Wei-Hung
Data insufficiency and heterogeneity are challenges of representation learning for machine learning in medicine due to the diversity of medical data and the expense of data collection and annotation. To learn generalizable representations from such limited and heterogeneous medical data, we aim to utilize various learning paradigms to overcome the issue. In this dissertation, we systematically explore the machine learning frameworks for limited data, data imbalance, and heterogeneous data, using cross-domain learning, self-supervised learning, contrastive learning, meta-learning, multitask learning, and robust learning. We present studies with different medical applications, such as clinical language translation, ultrasound image classification and segmentation, medical image retrieval, skin diagnosis classification, pathology metadata prediction, and lung pathology prediction. &#13;
&#13;
We first focus on the limited data problem, which is common in medical domains. We learn cross-domain representations for clinical language translation with limited and unpaired medical language corpora using unsupervised embedding space alignment with identical anchors for word translation, and conduct sentence translation using statistical language modeling. Using metrics of clinical correctness and readability, the developed method outperforms a dictionary-based algorithm in both word- and sentence-level translation. For learning better data representations of limited numbers of ultrasound images, we then adopt the self-supervised learning technique and integrate the corresponding metadata as a multimodal resource to introduce inductive biases. We find that the representations learned by the developed approach yield better downstream task performance, such as ultrasound image quality classification and organ segmentation, compared with the standard transfer learning methods. &#13;
&#13;
Next, we zoom into the data imbalance problem. We explore the utility of contrastive learning, specifically the Siamese network, to learn representations from an imbalanced fundoscopic imaging dataset for diabetic retinopathy image retrieval. Compared with the standard supervised learning setup, we obtain comparable but interpretable results using the representations learned from the Siamese network. We also utilize meta-learning for skin disease classification with an extremely imbalanced long-tailed skin image dataset. We find that model ensemble with meta-learning models and models trained with conventional class imbalance techniques yields better prediction performance, especially for rare skin diseases. &#13;
&#13;
Finally, for heterogeneous medical data, we develop a multimodal multitask learning framework to learn a shared representation for pathology metadata prediction. We use the multimodal fusion technique to integrate the slide image, free text, and structured metadata, and adopt a multitask objective loss to introduce the inductive bias while learning. This yields better prediction power than the standard single-modal single-task training setup. We also apply robust training techniques to learn representations that can tackle a distributional shift across two chest X-ray datasets. Compared with standard training, we find that robust training provides better tolerance when the shift exists, and learns a robust representation for lung pathology prediction. &#13;
&#13;
The investigation in this dissertation is not exhaustive but it introduces an extensive understanding of utilizing machine learning in helping clinical decision making under the limited and heterogeneous medical data setting. We also provide insights and caveats to motivate future research directions of machine learning with low-resource and high-dimensional medical data, and hope to make a positive real-world clinical impact.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144745</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing the potential for zinc limitation of marine primary production: proteomic characterization of the low zinc stress response in marine diatoms</title>
<link>https://hdl.handle.net/1721.1/144738</link>
<description>Assessing the potential for zinc limitation of marine primary production: proteomic characterization of the low zinc stress response in marine diatoms
Kellogg, Riss Morgan
Marine diatoms are abundant photoautotrophic algae that contribute significantly to photosynthetic carbon fixation and export throughout the oceans. Zinc is an important micronutrient in algal metabolism, with scarce dissolved concentrations in the upper euphotic zone reflecting high biological demand. In this thesis, I investigated the response of marine diatoms to Zn scarcity to characterize metabolic mechanisms used to combat Zn stress. I began by assaying the ability to metabolically substitute cobalt (Co) in place of Zn in four diatom species and found that enhanced abilities to use Co are likely an adaptation to high surface dCo:dZn ratios in the native environment. I next demonstrated that Zn/Co metabolic substitution in diatoms is not universal using culture studies of Chaetoceros neogracile RS19, which has an absolute Zn requirement. Using global proteomic analysis, I then identified and characterized diatom ZCRP-A and ZCRP-B, a putative Zn-chaperone and membrane-tethered Zn acquisition protein, respectively, as two proteins involved in the low-Zn response. I demonstrated that these proteins are widespread in marine phytoplankton and can be deployed as protein biomarkers of Zn stress in the field. I furthermore documented both the detection of ZCRPs in the Southern Ocean and the existence of Zn/Fe co-limitation within the natural phytoplankton population in Terra Nova Bay, demonstrating that Zn co-limitation can indeed occur in the field, even in high macronutrient waters. Lastly, I explored the relative demand of Zn and cadmium (Cd) within the Southern Ocean community using stable 67Zn and 110Cd tracers, documenting a high demand for both metals during the austral 2017-2018 summer season and investigating the cycling of these elements within this important region. 
Overall, this dissertation provides new information regarding Zn acquisition and homeostasis mechanisms within marine algae and demonstrates that Zn co-limitation in the field is not only possible, but detectable via protein biomarkers.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144738</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Questions and Clarity: Insights from Applying Computational Methods to Paleoclimate Archives</title>
<link>https://hdl.handle.net/1721.1/144737</link>
<description>Questions and Clarity: Insights from Applying Computational Methods to Paleoclimate Archives
Fendrock, Michaela
It is a scientifically accepted fact that the Earth’s climate is presently undergoing significant changes with the potential for immense negative impacts on human society. As evidence of these impacts becomes clear and common, it becomes ever more important to constrain the nature, magnitude, and speed of changes to Earth systems. A fundamentally important tool for this understanding is the Earth’s past, recorded in the geologic record. There lie examples of climate change under various forcings: important data for understanding the fundamental dynamics of climate change on our planet. However, when a climate signal is written in the geologic record, it is coded into the language of proxies and distorted by time. This thesis endeavors to decode that record using a variety of computational methods on a number of challenging proxies, to draw more information from the climate past than has previously been possible. First, machine learning and computer vision are used to decipher the primary, centimeter-scale textures of carbonate deposits in Searles Valley and Mono Lake, California. This work is able to connect facies in the tufa at Searles, grown during the Last Glacial Period, and those forming presently at Mono Lake. Next, the tracks of icebergs purged during Heinrich Events are simulated using the MIT General Circulation Model. This work, running multiple experiments exploring different aspects internal and external to the icebergs, reveals wind and sediment partitioning as centrally important to the spatial extent of Heinrich Layers. Each of these works considers a traditional geologic archive – a carbonate facies, a marine sediment layer – and uses computational methods to approach that archive from a different perspective. By applying these new methods, more information can be gleaned from the geologic record, building a richer narrative of the Earth’s climate history.
The final chapter of this thesis discusses effective teaching and strategies for building communities to support teaching practice in Earth Science departments.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144737</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between an integrative and conjugative element and its bacterial host</title>
<link>https://hdl.handle.net/1721.1/144733</link>
<description>Interplay between an integrative and conjugative element and its bacterial host
McKeithen-Mead, Saria A.
Horizontal gene transfer is the mechanism by which bacteria acquire new genes. Mobile genetic elements (MGEs) are the primary drivers of horizontal gene transfer, introducing large fragments of DNA. Often, MGEs contain cargo genes beneficial to the host, conferring new traits such as antibiotic resistance, pathogenicity, and metabolic capabilities. Though MGEs sometimes carry beneficial genes, they can impose fitness costs on their host. MGEs employ various maintenance and acquisition strategies to ensure stable acquisition in bacterial populations. Two common mechanisms are integration and replication. Integration and replication are inherently harmful, as they produce DNA intermediates recognized by the host as DNA damage. Thus, MGEs have evolved mechanisms to modulate, subvert, and manipulate bacterial DNA damage repair systems.&#13;
&#13;
Integrative and conjugative elements (ICEs) are considered the most abundant conjugative elements. ICEs are MGEs that reside integrated into their host chromosome, excise, and transfer their DNA via conjugation (involving a type IV secretion system) into a recipient. During acquisition, the element must switch from a replicating element to a quiescent element capable of integrating for stable acquisition.&#13;
&#13;
Here I describe for the model ICE, ICEBs1 of Bacillus subtilis, three mechanisms used to regulate and minimize the cost of autonomous replication for its bacterial host. I show how ICEBs1 couples the cessation of replication to integration into the chromosome of nascent hosts. I found that the integration of a replicating element is lethal for the host. I show that two genes encoded on ICEBs1 inhibit the host DNA damage repair response by preventing RecA filament formation. This activity confers a fitness advantage to the host in that it dampens the SOS response and protects against phage activation while also stabilizing ICEBs1 DNA. I also present preliminary evidence that ICEBs1 utilizes a host-specific DNA motif (Chi sites) that modulates the activity of an essential host exonuclease from degradative to reparative. I found that removing Chi sites on ICEBs1 resulted in decreased acquisition of the element in new hosts in mixed population biofilms. I also found evidence that ICEs of other organisms may use their host Chi sites. Together these findings show several mechanisms used by an integrative and conjugative element to maximize the spread and propagation of the element with minimal perturbations to its host.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144733</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mispricing and the Demand for Fundamental Information</title>
<link>https://hdl.handle.net/1721.1/144730</link>
<description>Mispricing and the Demand for Fundamental Information
Anderson, Samuel S.
I provide evidence that investor demand for accounting information intensifies following nonfundamental shocks to prices. Using quasi-exogenous variation in security prices due to forced mutual fund sales, I find that mispricing triggers an increase in the consumption of accounting information, especially among institutional investors. This increase in information consumption subsequently predicts both the speed and extent to which prices return to their pre-shock levels, as well as price informativeness around future earnings events. Taken together, these findings not only demonstrate that mutual fund flow-induced mispricing shapes investors’ information consumption, but also highlight the useful role of accounting information in enhancing the informational efficiency of securities markets following temporary mispricing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144730</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nutrient sensing by the mTORC1 pathway in physiology</title>
<link>https://hdl.handle.net/1721.1/144727</link>
<description>Nutrient sensing by the mTORC1 pathway in physiology
Cangelosi, Andrew L.
Cellular growth and metabolism must be linked to the environmental status of the cell to maintain organismal function. This coordination is achieved by mTORC1, a master growth regulator that senses and integrates a diverse array of environmental inputs, including nutrients like glucose and the amino acids leucine and arginine, whose sensing mechanisms are beginning to be identified. However, the role and implications of nutrient sensing by mTORC1 in mammalian physiology remains poorly understood.&#13;
&#13;
Here, we identified a critical role of leucine sensing by mTORC1 in adapting to leucine availability in vivo. Mice lacking the leucine sensors Sestrin1 and Sestrin2 fail to inhibit mTORC1 in tissues when deprived of dietary leucine. These mice suffer from severe loss of white adipose tissue and skeletal muscle when deprived of leucine, but not other essential amino acids. We showed that their white adipose tissue loss results from mTORC1 dysregulation in the liver and is driven by aberrant production of the hepatokine FGF21. We also found that leucine sensing is compartmentalized within the liver, which is established by zonated expression of Sestrin1 and Sestrin2 in the liver lobule and demonstrates an unappreciated spatial organization of nutrient sensing in tissues.&#13;
&#13;
Further, we identified a functionally important temporal shift in nutrient sensitivity of the mTORC1 pathway in pancreatic β cells. We found that this shift is required for β cells to acquire glucose-responsive insulin secretion after birth. We further demonstrated that modulating nutrient-responsive mTORC1 activity can be therapeutically exploited to improve the generation of stem cell-derived β cells. Collectively, these findings demonstrate a subset of the likely many important roles of nutrient sensing by mTORC1 in physiology and begin to unravel the spatial and temporal complexity of nutrient sensing within tissues of the body.&#13;
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144727</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>In vivo imaging and morphogenesis of butterfly scale development</title>
<link>https://hdl.handle.net/1721.1/144724</link>
<description>In vivo imaging and morphogenesis of butterfly scale development
McDougal, Anthony D.
During metamorphosis, the wings of a butterfly sprout hundreds of thousands of scales with intricate micro- and nanostructures that determine the wings' optical appearance, wetting characteristics, thermodynamic properties, and aerodynamic behavior. This strategy—of enabling multifunctionality by controlling material structure across relevant length scales—can be emulated in bioinspired materials. However, the range of structural length scales that are integrated in biological materials exceeds that of their synthetic counterparts, due in large part to our limited understanding of the processes that biology employs in their formation. Key to gaining deeper insights into these processes is the visualization of material structures as they form within the living organism.&#13;
&#13;
In this thesis, I present an approach for studying structure formation in wing scales of live butterflies with high temporal and spatial resolution. To gain a comprehensive perspective of scale formation, I developed pupal surgery techniques and adapted quantitative phase microscopy methods to achieve label-free, in vivo visualization of scale cell growth in Vanessa cardui pupae throughout pupal development. This continuous visualization of scale cells establishes a clear phenomenological timeline of scale morphogenesis that was previously limited by the discrete nature of ex vivo studies of fixed cells. Moreover, by capturing time-resolved volumetric tissue data together with nanoscale surface height information, we gain quantitative insights into the underlying processes involved in scale cell patterning and growth.&#13;
&#13;
The quantitative data from live imaging informs a continuum mechanics model of the growth of the scales' ridge structures and allows examination of the structural requirements for proper ridge formation. Live imaging of structure formation also leads to unique broader impacts, including approaches for exploiting human perception of color to visualize overlapping structures in volumetric data, and hands-on learning activities in the classroom and laboratory centered around bio-optics, structure formation in nature, and butterfly surgery. In vivo visualization of butterfly scale cell morphogenesis offers a rich foundation for deciphering biological processes and biomechanical principles involved in the formation of functional materials and for engaging broader audiences.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144724</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence-based AI Ethics</title>
<link>https://hdl.handle.net/1721.1/144715</link>
<description>Evidence-based AI Ethics
Boag, William
With the rise in prominence of algorithmic decision-making, and numerous high-profile failures, many people have called for the integration of ethics into the development and use of these technologies. In the past five years, the field of “AI Ethics” has risen to prominence to explore questions such as “how can ML algorithms be more fair?” and “are there tradeoffs when incorporating values such as fairness or privacy into models?” One common trend, particularly among corporations and governments, has been a top-down, principles-based approach to setting the agenda. However, such efforts are usually too abstract to engage with; everyone agrees models should be fair, but there is often disagreement on what "fair" means. In this work, I propose a bottom-up alternative: Evidence-based AI Ethics. Learning from other influential movements, such as Evidence-based Medicine, we can consider specific projects and examine them for "evidence." We draw from two complementary critical lenses, one based on utilitarian ethics and one on intersectional feminism, to analyze five case studies I have worked on, ranging from automatically-generated radiology reports to tech worker organizing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144715</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Subject Based Methodology for Measuring Interclass Bias in Facial Recognition Verification Systems</title>
<link>https://hdl.handle.net/1721.1/144708</link>
<description>A Subject Based Methodology for Measuring Interclass Bias in Facial Recognition Verification Systems
Peña-Alcántara, Aramael Andrés
Rapid progress in automated facial recognition has led to a proliferation of the use of algorithms to support decision-making in high-stakes applications, such as immigration and border control, hiring, and the criminal justice system. Recent research has uncovered serious concerns about equality and transparency in facial recognition algorithms, finding performance disparities between groups of people based on their phenotypes, such as gender presentation and skin tone. These challenges can result in loss of employment opportunities, extra scrutiny in transactions, and even loss of freedom, raising the need for deeper analysis of facial recognition’s shortcomings.&#13;
&#13;
This dissertation proposes a novel methodology and a general test statistic to measure facial recognition algorithm interclass bias. The test uses distance-based variance to capture shape-related differences in an algorithm’s accuracy at multiple operating points.&#13;
&#13;
The author assesses the performance of the test by evaluating the interclass bias for skin tone and gender in commercial facial verification algorithms. Using a dermatologist-approved skin tone classification system and a simple masculine/feminine classification for gender presentation, thirteen commercial off-the-shelf facial verification algorithms are evaluated on a subset of the IARPA Janus Benchmark C dataset and its 1:1 verification protocol. The analyses show that darker-skinned people receive the least accurate results, with interclass bias measures up to 7.2 times higher than for lighter-skinned people. Additionally, the results show that one evaluated commercial facial verification algorithm statistically eliminates the interclass bias for skin tone. Yet all thirteen commercial facial verification algorithms evaluated performed worse for feminine-presenting persons than for masculine-presenting persons. The author believes this new measure of interclass bias can be incorporated into an algorithm’s design to remove such bias. The biases present in classifying darker-skinned and feminine-presenting people require urgent attention if commercial companies are to build genuinely equal, transparent, and accountable facial verification algorithms.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144708</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molten Alkali Metal Borates for High Temperature Carbon Capture</title>
<link>https://hdl.handle.net/1721.1/144706</link>
<description>Molten Alkali Metal Borates for High Temperature Carbon Capture
Halliday, Cameron
The value generated by industrial processes including the production of power, chemicals, metals, and cement is undermined by the accumulation of CO2 in the atmosphere and the resulting damage to the environment. Carbon Capture, Utilization, and Storage (CCUS) intercepts CO2 emissions, enabling the continued use of processes and resources that produce CO2 while minimizing their environmental impact. This thesis discovers, demonstrates, and designs systems for molten alkali metal borates, a novel class of materials positioned to benefit from both liquid-phase and high-temperature operation. Material and chemical properties relating to CO2 capture were explored, the ability to capture multiple acid gases simultaneously was proposed, and steam was investigated as a sweep gas for isothermal operation, demonstrating ~90% capture in bench-scale experiments. As liquids with inherent immunity to morphological degradation, the molten alkali metal borates displayed stable performance over 1,000 hours of continuous operation. However, harsh conditions introduced material compatibility challenges such as corrosion and chemical degradation. A nickel alloy was identified as a suitable material of construction and a path forward has been proposed to minimize degradation. Techno-economic evaluation of a conceptual coal-fired power plant confirmed the predicted benefits. Levelized cost of electricity increased 39% (25% to 49%) relative to that for the power plant without carbon capture. The expected cost of CO2 avoided was $34/tonne ($18-56/tonne), 38% (27-50%) lower than that for the state-of-the-art amine process and competitive with the social cost of carbon. Bioenergy with Carbon Capture and Storage (BECCS) offered a unique opportunity to realize net-negative emissions while also producing reliable base-load electricity. BECCS could remove 300-850 kilograms of CO2 from the atmosphere per megawatt hour of electrical output (kgCO2/MWhe). 
The estimated cost of CO2 avoided was $45-50/tonne relative to coal, and $80-100/tonne relative to a largely renewable electrical grid. Other opportunities including Natural Gas Combined Cycle (NGCC) plants and Steam Methane Reforming (SMR) were found to be viable but less optimal. Although BECCS is not the only viable option it is seen as the most promising application with early adopters able to utilize low-cost resources and play an outsized role in climate change mitigation through net-negative emissions. The business case for BECCS identifies the need for net-negative emissions credits with a nominal value greater than $100/tonne of CO2 to be considered further. Multiple sites exist with the upstream and downstream infrastructure necessary to support a first-of-a-kind plant, but new pellet plants and expanded CO2 pipelines could unlock other locations with lower costs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144706</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Three Dimensional In Vitro Approaches to Study Cardiac Injury and Repair</title>
<link>https://hdl.handle.net/1721.1/144703</link>
<description>Engineering Three Dimensional In Vitro Approaches to Study Cardiac Injury and Repair
Das, Shoshana Lea
Each year, over one million people in the United States have a myocardial infarction (MI). After an infarct, the affected region undergoes massive cell death and a pathological fibrotic response. As cardiomyocytes (CMs) do not regenerate, the CM content is predominately replaced by activated cardiac fibroblasts (CFs). This modifies the regional mechanical and electrical properties, which increases the risk for arrythmias and heart failure in heart attack survivors. Despite the high incidence of MI, current treatments to mitigate the associated morbidity are limited, and the development of new treatments is impeded by expensive, low-throughput, in vivo studies that poorly predict clinical success.&#13;
&#13;
The goal of this dissertation is to develop 3D engineered tissue systems and technologies to study cardiac injury and healing responses. The presented work develops a series of in vitro models that recapitulate the focal fibrosis created by MI and develops tools to functionally characterize engineered heart tissue (EHT) models after injury.&#13;
&#13;
First, we create a 3D biomimetic model using human induced pluripotent stem cell (iPSC)-derived CMs and primary human CFs. Using a high-power, pulsed laser, we locally injure our EHT model to reproduce the regional cell death and loss of electrical and mechanical function seen in focal fibrosis of infarcted myocardium. We then demonstrate the capacity of this model to capture cardiac remodeling and the compensatory response of the surrounding tissue after injury. Second, we adapt a genetically-encoded voltage sensor, called Archon1, to enable the study of electrical activity of individual CMs within EHTs. We use this sensor to take high fidelity action potential measurements in iPSC-derived CMs and EHTs. Finally, as myocardium is highly aligned tissue, we explore the role of extracellular matrix (ECM) alignment on injury response. We study the behavior of fibroblasts in engineered anisotropic and isotropic tissues after injury. Our results indicate that the alignment of ECM directs provisional matrix assembly, a key healing response process.&#13;
&#13;
In summary, this work demonstrates the development and utility of engineered tissue systems to study cardiac injury response. Such systems can be used to better understand injury responses and probe new therapies to improve recovery.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144703</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Long Time Dynamics of Spherical Objects Governed by Surface Tension</title>
<link>https://hdl.handle.net/1721.1/144697</link>
<description>Long Time Dynamics of Spherical Objects Governed by Surface Tension
Shao, Chengyang
This thesis is devoted to the study of evolutionary partial differential equations describing the motion of spherical objects in two different physical scenarios. They share the common feature of involving the mean curvature operator, hence relating to motions governed by surface tension. The mean curvature operator makes both problems highly nonlinear.&#13;
&#13;
In the first part of this thesis, we study the long time behavior of an idealistic model of elastic membrane driven by surface tension and inner air pressure. The system is a degenerate quasilinear hyperbolic one that involves the mean curvature, and also includes a damping term that models the dissipative nature of genuine physical systems. With the presence of damping, a small perturbation of the sphere converges exponentially in time to the sphere, and without the damping the evolution that is &#120576;-close to the sphere has life span longer than &#120576;^(-1/6). Both results are proved using a new Nash-Moser-Hörmander type theorem proved by Baldi and Haus. The first part of the thesis grows out of the author’s research paper [58].&#13;
&#13;
In the second part of this thesis, we derive a differential equation that describes the nonlinear vibration of a spherical water droplet under zero gravity. The equation is legitimately referred to as the capillary spherical water waves equation. We develop a toolbox for paradifferential calculus on curved manifolds and prove the local existence for this equation by para-linearizing the equation. This approach avoids using Nash-Moser type iterations, and sets the stage for further study of longer time behavior of spherical water waves. For the longer time behavior, we discuss the resonance problem related to this equation, pointing out that it is a highly nontrivial problem of Diophantine analysis in the realm of number theory.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144697</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Education Finance</title>
<link>https://hdl.handle.net/1721.1/144696</link>
<description>Essays in Education Finance
Goodman, Aaron
This thesis consists of three chapters on education finance. I examine the funding systems that support public education provision in the United States, consider the incentives and constraints facing government actors within those systems, and identify policy interventions with the potential to allocate public funds more efficiently and improve student outcomes. The first chapter discusses financial aid policy in the higher education setting, while the second and third chapters study state and local financing of primary and secondary schools.&#13;
&#13;
The first chapter investigates how the geographic fragmentation of the American higher education structure shapes financial aid policy. In particular, I present theoretical and empirical evidence that competition between state-controlled university systems causes socially costly distortions to the allocation of financial aid among undergraduate students. In a tractable model, I demonstrate that business-stealing incentives lead states to provide less need-based and more merit-based aid than a national planner. Measuring competition from out-of-state institutions with both geographic distance and time-varying university rankings, I validate the model and show that a heightened threat of student outmigration shifts aid dollars away from need-based grants and low-income families. Quantitatively, need-based aid awards would be 27% larger, merit-based aid awards 14% smaller, and low-income college-attendance rates 8 percentage points higher under optimal policy.&#13;
&#13;
The second chapter considers intergovernmental fiscal relations in public school finance, where K-12 school districts rely on both funding from state governments and their own local tax collection. The dual nature of K-12 education funding complicates the analysis of school finance policy, since districts can adjust their local revenue collection in response to state funding changes and use accumulated savings buffers to divorce spending choices from current revenue levels. Focusing on the helpful institutional setting in the state of Ohio, I address this challenge and develop a method to evaluate the long-run consequences of state-level funding reforms. I first build a dynamic model of school district financial behavior and validate its reduced-form predictions about levy-proposal and spending-saving decisions. I then estimate the model, leveraging the large amount of annual variation in districts' financial resources caused by historical volatility in state aid and a unique state law freezing the nominal value of local property tax revenue. The structural estimates allow me to simulate individual districts' reactions to state funding changes and compute the resulting spending and welfare effects of counterfactual policy reforms. Differences in districts' estimated preferences and initial financial conditions create substantial heterogeneity in the behavioral responses that determine the ultimate pass-through effect of state funding changes on spending levels. By targeting districts with the most favorable behavioral responses and the highest valuations of marginal funds, budget-neutral reallocations of state aid can attain welfare increases equal to 4% of Ohio's current education expenditures.&#13;
&#13;
The third chapter examines how the distribution of school districts' property wealth affects their ability to raise local education funding. Using administrative data on individual property values, I document substantial household heterogeneity within Ohio school districts, in contrast to the perfect sorting generated by canonical Tiebout models. I show that the composition, dispersion, and shape of intra-district property value distributions have important effects on the vote shares of property tax levies and on districts' resulting revenue and spending levels. Nearly a third of Ohio's school districts have exercised the state's unique option to implement local income taxes, again contradicting theoretical results about the optimality of zero local income taxation. Districts that choose to use income taxes appear to do so because income taxes distribute the local tax burden more efficiently than comparable property taxes, allowing revenue and spending levels to rise.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144696</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Control Through a Modern Lens: Applications in Supply Chain Analytics and Logistical Systems</title>
<link>https://hdl.handle.net/1721.1/144691</link>
<description>Stochastic Control Through a Modern Lens: Applications in Supply Chain Analytics and Logistical Systems
Qin, Hanzhang
The thesis investigates classical multi-period stochastic control problems through a modern lens, including stochastic inventory control, dynamic pricing and vehicle routing. A brief history of the academic works on stochastic control is presented in Chapter 1, where the relevance of papers on stochastic processes, dynamic programming and reinforcement learning is also discussed. This thesis then focuses on revisiting inventory control, dynamic pricing and vehicle routing i) in a data-driven fashion; ii) with flexible architectures. Chapters 2-3 present several state-of-the-art results on data-driven inventory control. In Chapter 2, the following question is revisited: how much data is needed in order to obtain a (nearly) optimal policy for inventory control? To resolve this long-standing open question, a novel sample-based algorithm is proposed for the backlog setting and a matching (up to a logarithmic factor) lower bound is also given. In Chapter 3, the same question for the joint pricing and inventory control problem is studied and the first sample-efficient solution is proposed. Chapter 4 is dedicated to the vehicle routing problem with stochastic demands (VRPSD). By combining ideas from vehicle routing and manufacturing process flexibility, a new approach to the VRPSD is proposed that uses overlapped routing with customer sharing in route determination; its performance is close to the theoretical lower bound and significantly improves upon the routing strategy without overlapped routes. Chapter 5 concludes the thesis, and points out several future research directions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144691</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical Models and Statistical Methods for Understanding Electrochemical Kinetics</title>
<link>https://hdl.handle.net/1721.1/144690</link>
<description>Physical Models and Statistical Methods for Understanding Electrochemical Kinetics
Limaye, Aditya Madan
Decarbonization of the global economy in order to limit rapid global surface temperature rise is a critical industrial and societal challenge for the next several decades. Yet, several sectors of the economy remain stubbornly difficult to decarbonize, such as commodity chemicals production, cement and steel manufacturing, and synthetic fertilizer synthesis, to name only a few. Emerging efforts to decarbonize these processes rely on electrochemical techniques, which use emissions-free sources of electricity to drive relevant chemical reactions. Much remains to be understood about the fundamentals of electrochemical kinetics, hampering efforts to rationally engineer decarbonized electrochemical processes. This thesis develops new physical models and applies rigorous statistical methods toward developing a more complete understanding of electrochemical kinetics.&#13;
&#13;
The physical models I develop are grounded in the framework of classical statistical mechanics. In Chapter 2, I develop an extension to the classical Marcus kinetic theory of electron transfer that accounts for diffusive transport effects in the electrochemical double layer. In Chapter 3, I advance a simple physical explanation for why the reorganization energy, a key parameter in Marcus theory, exhibits marked attenuation upon approach to a constant potential electrode surface. Finally, in Chapter 4, I apply molecular dynamics simulations of the electrochemical double layer (EDL) to evaluate the fidelity of continuum theoretical predictions of electrostatic potential variation in the EDL.&#13;
&#13;
The statistical methods I report in this thesis leverage relatively straightforward mathematical approaches to modernize classical electrochemical analyses. In Chapter 5, I show how applying a Bayesian analysis technique to Tafel slope analysis can correct subjective human biases in literature-reported analyses of CO2 electroreduction data. Finally, in Chapter 6, I develop a new electrochemical analysis technique based on analysis of weakly nonlinear current response to a medium-amplitude oscillating voltage signal, which can serve as a complementary technique to cyclic voltammetry, a more traditional approach to electrochemical characterization.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144690</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superradiant Emission from Delocalized Frenkel Excitons in Molecular J-Aggregates</title>
<link>https://hdl.handle.net/1721.1/144681</link>
<description>Superradiant Emission from Delocalized Frenkel Excitons in Molecular J-Aggregates
Barotov, Ulugbek
Coherent coupling of excitations between chromophores in molecular J-aggregates leads to delocalized Frenkel excitons that span multiple molecules in the aggregate. Delocalized Frenkel excitons in molecular J-aggregates provide an avenue to access high-speed fluorescent materials with unique optical properties such as strong absorption, long-range exciton transport, and, in particular, superradiant emission with short radiative lifetimes and narrow linewidths. Despite these favorable optical properties, fundamental studies as well as applications of J-aggregates as high-speed light sources have been hindered by their low photoluminescence quantum yields (QYs), negligible Stokes shifts, and poor structural and optical stability, all leading to diminished fluorescence efficiencies in many device applications. In this thesis, I perform in-depth structural and optical studies to understand non-radiative processes in J-aggregates and develop high-speed materials with short lifetimes, high QYs, and large extinction coefficients.&#13;
&#13;
In the first part I investigate whether non-radiative losses in J-aggregates originate from nonradiative processes specific to the isolated constituting molecules and demonstrate a bottom-up approach to design novel emissive J-aggregates. I first select a J-aggregating cyanine chromophore and reduce its nonradiative pathways by rigidifying its backbone. The resulting conformationally restrained cyanine dye self-assembles in water-methanol mixtures to form highly emissive J-aggregates with micron-scale two-dimensional sheetlike morphology. These novel J-aggregates have strong, red-shifted absorption at 600 nm and a resonant fluorescence with no Stokes shift, 50% QY, and 220 ps lifetime at room temperature. I stabilize the rigidified dye aggregates in a glassy sugar matrix and study their excitonic behavior using temperature-dependent absorption and fluorescence spectroscopy, which confirm J-type excitonic coupling and superradiance, both necessary for obtaining fast radiative rates. Even though non-radiative rates are significantly reduced in the rigidified dye J-aggregates, competing non-radiative processes originating from imperfections in the J-aggregate lattice still exist.&#13;
&#13;
In the second part I perform in-depth investigations to elucidate the nature of nonradiative recombination sites specific to the J-aggregate lattice and demonstrate three general effective strategies to mitigate non-radiative losses in the superradiant emission from molecular J-aggregates. I focus on J-aggregates of 5,5′,6,6′-tetrachloro-1,1′-diethyl-3,3′-bis(4-sulfobutyl)-benzimidacarbocyanine (known as TDBC) and show that self-annealing at room temperature, photo-brightening, and the purification of the dye monomers all lead to substantial increases in emission QYs and a concomitant lengthening of the emission lifetime, with purification of the monomers having the largest effect. We use structural and optical measurements to support a microscopic model that emphasizes the deleterious effects of a small number of impurity and defect sites that serve as non-radiative recombination centers. This understanding has yielded a room temperature molecular fluorophore in solution with an unprecedented combination of fast emissive lifetime and high QY. We obtain superradiant emission from J-aggregates of TDBC in solution at room temperature with a QY of 82% coupled with an emissive lifetime of 174 ps. This combination of high QY and fast lifetime at room temperature makes supramolecular assemblies of purified TDBC a model system for the study of fundamental superradiance phenomena.&#13;
&#13;
In the last part I show that silica encapsulation and electrostatic coupling to quantum dots both lead to effective stabilization of structural and optical properties of J-aggregates. In addition, I propose a novel hybrid molecular aggregate system to introduce artificial Stokes shifts in molecular J-aggregates to overcome their reabsorption losses. Overall, I believe the results presented in this thesis lay a foundation for the development of a new generation of organic fluorophores that combine high speed, high QY, and solution processing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144681</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamic and topological characterization of living systems</title>
<link>https://hdl.handle.net/1721.1/144679</link>
<description>Thermodynamic and topological characterization of living systems
Skinner, Dominic J.
Recent advances in microscopy techniques make it possible to study the growth, dynamics, and response of complex biophysical systems at single-cell and subcellular resolution, from bacterial communities and tissues to intra-cellular signaling and expression dynamics of genes. Despite such progress in data collection, major theoretical challenges remain to find structure and organizing principles in these data, which are often noisy and represent only a partial observation of the system. One such challenge is to estimate the rate at which a system is consuming free energy. All living systems must consume free energy to maintain or increase local order, and theoretical models can provide insights into the thermodynamic efficiency of important cellular processes. In experiments, however, many degrees of freedom typically remain hidden to the observer, making thermodynamic inference challenging. Here, we introduce a framework to infer improved bounds on the rate of entropy production by reformulating the problem of inference as a problem of optimization. We demonstrate the broad applicability of our approach by providing improved bounds on the energy consumption rates in a diverse range of biological systems including bacterial flagella motors, gene regulatory dynamics, and intracellular calcium oscillations. Another challenge is to distinguish two amorphous yet structurally different cellular materials, where, in contrast to crystals, cellular structures are somewhat disordered. Here, we use information contained in the local topological structure to define a distance between disordered multicellular systems. Our metric allows an interpretable reconstruction of equilibrium and non-equilibrium phase spaces and embedded pathways from static system snapshots alone. 
Applied to cell-resolution imaging data, the framework recovers time-ordering without prior knowledge about the underlying dynamics, revealing that fly wing development solves a topological optimal transport problem, and enables comparisons across a wide range of different systems from zebrafish brains to bacterial colonies.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144679</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Econometrics</title>
<link>https://hdl.handle.net/1721.1/144677</link>
<description>Essays in Econometrics
Hughes, David W.
This thesis consists of three chapters on the development and analysis of methods in econometrics. In the first two chapters I consider the use of jackknife bias correction techniques to deal with the incidental parameters bias that arises from including fixed effect parameters in nonlinear models. The final chapter deals with the properties of common linear instrumental variables methods in the presence of many endogenous regressors.&#13;
&#13;
Chapter 1 considers estimation of a directed network model in which outcomes are driven by dyad-specific variables (such as measures of homophily) as well as unobserved agent-specific parameters that capture degree heterogeneity. I develop a jackknife bias correction to deal with the incidental parameters problem that arises from fixed effect estimation of the model. In contrast to previous proposals, the jackknife approach is easily adaptable to different models and allows for non-binary outcome variables. Additionally, since the jackknife estimates all parameters in the model, including fixed effects, it allows researchers to construct estimates of average effects and counterfactual outcomes. I also show how the jackknife can be used to bias-correct fixed effect averages over functions that depend on multiple nodes, e.g. triads or tetrads in the network. As an example, I implement specification tests for dependence across dyads, such as reciprocity or transitivity. Finally, I demonstrate the usefulness of the estimator in an application to a gravity model for import/export relationships across countries.&#13;
&#13;
In Chapter 2, joint with Jinyong Hahn, I compare the properties of two bias correction methods, the leave-one-out jackknife and the split-sample jackknife, in a nonlinear panel data model with individual fixed effects. Since both estimators are asymptotically unbiased with equal asymptotic variances, we derive higher-order bias and variance expressions for both bias corrections, and show that the split-sample jackknife has larger higher-order variance. This difference in higher-order variances can be important in practice, particularly in settings where the time-series dimension &#119879; is not large. In addition, the remaining bias (after bias correction) is larger for the split-sample estimator. Simulations confirm these findings, and show significant distortions in coverage when the asymptotic distribution is used for inference on the split-sample jackknife estimator.&#13;
&#13;
Chapter 3 considers the properties of linear IV estimators when used to estimate models in which there are many potentially endogenous regressors. One common setting in which many endogenous regressors naturally arise is the interaction of endogenous treatment variables with exogenous covariates in models that aim to capture heterogeneity in treatment effects. I extend existing results on linear IV estimation by considering asymptotics under which the number of endogenous regressors is allowed to grow with the sample size, and derive consistency and asymptotic normality results for the jackknife IV estimator (JIVE), as well as the heteroskedasticity-robust k-class style estimators (including the HLIM and HFUL). In simulations, the HFUL estimator is shown to outperform others in models with both many endogenous regressors and many instruments.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144677</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimistic Active Learning of Task and Action Models for Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/144676</link>
<description>Optimistic Active Learning of Task and Action Models for Robotic Manipulation
Moses, Caris
Manipulation tasks such as construction and assembly require reasoning over complex object interactions. In order to successfully plan for, execute, and achieve a given task, these interactions must be modeled accurately and capture low-level dynamics. Some examples include modeling how a constrained object (such as a door) moves when grasped, the conditions under which an object will rest stably on another, or the friction constraints that allow an object to be pushed by another object.&#13;
&#13;
Acquiring models of object interactions for planning is a challenge. Existing engineering methods fail to accurately capture how an object’s properties, such as friction, shape, and mass distribution, affect the success of actions such as pushing and stacking. Therefore, in this work we leverage machine learning as a data-driven approach to acquiring action models, with the hope that one day a robot equipped with a learning strategy and some basic understanding of the world could learn composable action models useful for planning to achieve a myriad of tasks. We see this work as a small step in this direction.&#13;
&#13;
Acquiring accurate models through a data-driven approach requires the robot to conduct a vast amount of information-rich interactions in the world. Collecting data on both real and simulated platforms can be time and cost prohibitive. In this work we take an active learning approach to aid the robot in finding the small subspace of informative actions within the large action space it has available to explore (all motions, grasps, and object interactions). Additionally, we supply the robot with optimistic action models, which are a relaxation of the true dynamics models. These models provide structure by constraining the exploration space in order to improve learning efficiency. Optimistic action models have the additional benefit of being easier to specify than fully accurate action models.&#13;
&#13;
We are generally interested in the scenario in which a robot is given an initial (optimistic) action model, an active learning strategy, and a space of domain-specific problems to generalize over. First, we give a method for learning task models in a bandit problem setting for constrained mechanisms. Our method, Contextual Prior Prediction, enables quick task success at evaluation time through the use of a learned vision-based prior. Then, we give a novel active learning strategy, Sequential Actions, for learning action models for long-horizon manipulation tasks in a block stacking domain. Finally, we give results in a tool use domain for our Sequential Goals method which improves upon Sequential Actions by exploring goal-directed plans at training time.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144676</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Waveguide Quantum Electrodynamics with Superconducting Qubits</title>
<link>https://hdl.handle.net/1721.1/144670</link>
<description>Waveguide Quantum Electrodynamics with Superconducting Qubits
Kannan, Bharath
Experiments utilizing quantum optics have progressed rapidly in the last few decades, particularly in the context of quantum computation, simulation, and communication. Early work in this field focused on implementations of cavity quantum electrodynamics (QED), where atoms, either natural or artificial, are strongly coupled to the confined photonic modes of cavities. However, in recent years, achieving strong coupling between atoms and itinerant photons has also gained significant interest for its applications in quantum networking. To this end, many atomic platforms are attempting to realize the waveguide QED architecture: atoms that interact with the continua of propagating photonic modes within a waveguide.  &#13;
&#13;
In this thesis, we realize the strong-coupling regime of a waveguide QED architecture by coupling superconducting artificial atoms, typically operated as qubits, to one-dimensional transmission lines. We first demonstrate that superconducting qubits in a waveguide QED system can be used as high-quality quantum emitters. We then leverage the quantum interference between the simultaneous emission from multiple qubits in order to generate non-classical, spatially entangled, and directional itinerant microwave photons. These types of photons are particularly useful for remote entanglement and quantum communication protocols. Finally, we demonstrate that superconducting qubits can be engineered to enter novel regimes of light-matter interactions that are difficult, or even impossible, to achieve in other atomic platforms. In particular, we realize the giant-atom regime of waveguide QED, where the atom can no longer be treated as a point-like object.  We use our giant atoms to implement tunable atom-waveguide couplings, as well as decoherence-free waveguide-mediated interactions between multiple atoms.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144670</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compositional Robot Learning for Generalizable Interactions</title>
<link>https://hdl.handle.net/1721.1/144669</link>
<description>Compositional Robot Learning for Generalizable Interactions
Kuo, Yen-Ling
To understand environments effectively and to interact safely with humans, robots must generalize their learned models to scenarios they have never been trained on before, such as new commands and new agents. Humans have shown a remarkable ability to compose concepts they have learned before in order to interpret and to act in a novel environment. In contrast, many deep-learning based methods fail at compositional generalization, i.e., an ability to generalize to novel combinations of concepts that have not been seen before in training. This thesis presents several learning-based approaches that leverage compositionality to enable generalization in various reasoning skills such as language understanding and social interactions. First, we show how we can derive networks directly from compositional linguistic structures to enable robots to follow novel commands and act rationally in new scenarios. These networks are extended to encode temporal relationships in linear temporal logic and augmented with sampling-based planners to plan in continuous space. Then we show how we can formulate social interactions as reward operations and apply recursive reward estimation to enable robots to reason about novel social interactions. Finally, we explore an alternative approach to incorporate compositionality when there is no explicit compositional structure — using language as an intermediate representation for internal reasoning. We show how this linguistic representation can help robots predict the trajectories of other agents.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144669</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Frontiers in Silicon Terahertz Electronics: Wirelessly Powered THz-ID and Secure THz Links</title>
<link>https://hdl.handle.net/1721.1/144668</link>
<description>New Frontiers in Silicon Terahertz Electronics: Wirelessly Powered THz-ID and Secure THz Links
Khan, Muhammad Ibrahim Wasiq
Advances in silicon integrated electronics have enabled many significant systems and applications in the terahertz (THz) band over the last decade. However, ultra-low-power (&lt;25&#120583;W) or battery-less THz transceivers have not yet been explored due to the stringent challenges they pose. Likewise, the notion of wireless power transfer at THz frequencies is non-existent. There is a growing demand for low-power mm-size transceivers in supply chain management, asset tracking, authentication, micro-robotics, on-skin or close-to-skin implants, etc. With these ubiquitous THz links, the security of the wireless channels is another emerging challenge. Advanced digital encryption techniques are computationally intensive, power-hungry, and not suited for these low-power applications. This thesis explores the challenges and novel approaches to realizing ultra-low-power/battery-less and physically secure THz transceivers. Specifically, it demonstrates three new frontiers in standard CMOS technologies that will open up the THz band. The first is a mm-size 0.26 THz identification tag (THz-ID) enabling a &#120583;W-level THz link by exploiting back-scattering and beam-steering functionalities. It is the smallest, package-less, monolithic ID chip with far-field communication capability and asymmetric cryptography. The second is a dual-antenna architecture optimized for 0.26 THz energy harvesting with ∼25&#120583;W harvesting capability. It is the highest-frequency CMOS harvester by ∼3x and has the highest RF-to-DC conversion efficiency at low input power. The third is an orbital-angular-momentum (OAM) wave-based transceiver with bits-to-OAM-mode mapping for secret key distribution at 0.31 THz. It is the first chip-based demonstration (at any frequency) of a transceiver front-end that transmits and receives OAM waves. It is also the smallest and least power-consuming OAM transceiver, and it can dynamically switch among OAM modes. 
The thesis concludes with potential improvements and prospects for future work.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144668</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning 3D Representations from Data</title>
<link>https://hdl.handle.net/1721.1/144666</link>
<description>Learning 3D Representations from Data
Wang, Yue
Deep learning has achieved tremendous progress and success in processing images and natural languages. Deep models enable human-level perception, photorealistic image generation, and conversational language understanding. Despite significant progress, existing deep models still fail to meet the demands of robotics. There are several factors leading to this gap. First, existing computer vision algorithms have been primarily targeted to 2D images. These algorithms are extremely good at recognizing objects in an image, but they fail to reason about 3D geometry. Second, the current success in the 2D domain is mainly due to the advance in convolutional neural networks (CNNs). However, CNNs do not generalize to arbitrary data modalities such as point clouds. Finally, 3D annotations are scarce and hard to obtain. Annotating 3D data usually requires more human effort, which hinders supervised learning from 3D data. Therefore, learning 3D representations from data remains challenging and demands further study.&#13;
&#13;
This thesis investigates how to learn representations from 3D data efficiently and effectively, aiming to design 3D learning algorithms that understand geometry with minimal supervision. First, we propose a general point cloud network, termed Dynamic Graph Convolutional Neural Networks (DGCNN), to learn a latent structure from sensory inputs. The induced structure improves feature learning from point clouds. Unlike prior works that focus on global features, DGCNN views local geometry as the key to point cloud feature learning. Second, we study using DGCNN to enable high-level semantic reasoning tasks such as shape segmentation and 3D object detection. To that end, we propose a multi-view based object detection model that learns complementary features by projecting point clouds to virtual views. In addition, our follow-up work Object DGCNN leverages DGCNN to model object relations and empowers a post-processing-free object detection pipeline with state-of-the-art performance on multiple benchmarks. Third, we generalize these point cloud models to tackle low-level motion estimation problems such as point cloud registration. The proposed Deep Closest Point architecture combines a traditional optimization pipeline with deep learning. Moreover, in subsequent work, Partial Registration Network (PRNet) uses shape registration as a proxy task to enable self-supervised learning from point clouds. Finally, this thesis enables a critical application -- scene understanding for autonomous driving. These studies collectively facilitate 3D deep learning in a broad range of scenarios in visual computing.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144666</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Low Frequency Edge Oscillation in Alcator C-Mod and  ASDEX Upgrade I-Mode</title>
<link>https://hdl.handle.net/1721.1/144665</link>
<description>The Low Frequency Edge Oscillation in Alcator C-Mod and  ASDEX Upgrade I-Mode
McCarthy, William C.
The I-Mode confinement regime in tokamaks is promising for nuclear fusion reactor operation for a number of reasons. It exhibits high energy confinement (similar to H-Mode) and low particle and impurity confinement (similar to L-Mode), which allows it to reach high pressures without unacceptable impurity accumulation. It has been shown to be stable against Edge Localized Modes, which are large bursts of plasma leaving the confined region of the tokamak, lessening the heat exhaust challenge. However, the physical mechanism behind I-Mode is still not well understood. In this thesis, one possible explanation for I-Mode phenomenology is explored: namely, that a low frequency oscillation in a variety of plasma parameters near the Last Closed Flux Surface (LCFS) is responsible for setting the transport qualities of I-Mode. The experimentally observed fluctuation is referred to as the Low Frequency Edge Oscillation (LFEO).&#13;
&#13;
The LFEO is observed by most diagnostics that measure the edge of the confined plasma, with a frequency between 10kHz and 40kHz on Alcator C-Mod (C-Mod) and between 5kHz and 15kHz on ASDEX Upgrade (AUG). These include, but are not limited to: Reflectometry (density measurement), Gas Puff Imaging (density measurement), electron cyclotron emission (temperature measurement), Mirnov Coils (magnetic field fluctuation measurement), and Langmuir probes (simultaneous density, temperature, and electric potential measurements). The LFEO occurs at the same location as the Weakly Coherent Mode (WCM), another common feature of I-Mode, which is thought to exist at the bottom of the radial electric field well. Its frequency structure is observed to depend on electron temperature and is modulated by the sawtooth heat pulse.&#13;
&#13;
Previous work has established that the LFEO is a Geodesic Acoustic Mode (GAM). The GAM is a toroidally and poloidally symmetric electric potential perturbation that results in a poloidal (m=1) flow and pressure perturbation and poloidal (m=2) magnetic perturbation. A major goal of this thesis was to confirm this identification on both C-Mod and AUG. On both machines, the LFEO frequency was consistent with, albeit slightly lower than, the frequency of a GAM driven very close to the LCFS. The radial structure of the LFEO was investigated using a combination of Gas Puff Imaging and scanning Mirror Langmuir Probe, revealing an inwardly radially propagating potential structure consistent with a GAM. Magnetic mode structure analysis on C-Mod revealed an m=2 magnetic structure, consistent with a GAM. An observed correlation between LFEO frequency and impurity concentration was qualitatively consistent with GAM models that include impurities. However, the required impurity levels to explain the correlation were much higher than would be physical. Given the accumulated evidence, the LFEO was confirmed to be a GAM, although one likely modified by shaping, rotation, or other effects.&#13;
&#13;
The role of the LFEO in I-Mode phenomenology was explored first by reviewing existing literature and then via a database existence space study. Previous work by Manz on AUG and Cziegler on C-Mod has shown that the GAM couples with the WCM and transfers energy from the WCM central frequency to frequency bands offset by the GAM frequency. Further, Cziegler has shown on C-Mod that the total nonlinear drive into Zonal Flows (believed to be a trigger for H-Mode) is reduced in the standard I-Mode “unfavorable” plasma configuration and that GAM drive only meaningfully appears following the L-I transition. Because the reduction in zonal flow drive in the “unfavorable” configuration appears in L-Mode prior to I-Mode, it is clear that the presence of the GAM is not the controlling factor in allowing I-Mode access, but perhaps does extend the I-Mode window. This was supported via a database analysis aimed at identifying the parameter space existence of the LFEO. Within this study, the LFEO was determined to be observable in only 62% of C-Mod I-Modes and 40% of AUG I-Modes. While no clear parameter space separation was found between LFEO and non-LFEO shots, this does confirm that the LFEO is not necessary for I-Mode operation. Finally, a shot-by-shot analysis of I-Modes and LFEOs revealed a sharp, unexplained transition in LFEO and WCM frequency with increasing heating power. This hints at different degrees of confinement within I-Mode, with different controlling physics. It is possible that the LFEO is important to one such regime.&#13;
&#13;
While the LFEO is not necessary for I-Mode, it may still play a role in regulating particle transport while present. The electrostatic particle and energy fluxes near the LCFS, associated with the LFEO, were calculated using a scanning Mach Mirror Langmuir Probe. This revealed significant levels of flux just inside the LCFS, peaking at the maximum probe plunge depth. The energy flux, assuming a uniform flux around the LCFS, was compared with the total energy crossing the LCFS from a balance of heating and radiative powers. We found that using fluxes at peak probe plunge depth, the flux accounted for 80% of the power crossing the LCFS. However, if the flux calculated directly at the LCFS is used, the percentage reduces to 5%. Thus, the calculated flux is not unphysical, but can be significant. The particle flux was compared to the flux calculated at the divertor using an array of divertor Langmuir probes. It was found that the flux associated with the LFEO at the divertor was significantly reduced compared to flux at the LCFS. These results suggest that the flux is not poloidally symmetric around the LCFS. A surprising result was the observation of a significant and sudden reduction in particle flux to the divertor in all fluctuations below 40kHz, at the L-I transition.&#13;
&#13;
The GAM is thus an important, but not necessary, part of I-Mode phenomenology. While it is clear that I-Mode can exist without the LFEO, the significant particle fluxes at the edge indicate that the LFEO can help to regulate particle fluxes. Further exploration of the existence space of the LFEO is necessary to determine where it can exist and how it could influence different I-Mode regimes.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144665</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Structure and Knowledge in Clinical and Biomedical Representation Learning</title>
<link>https://hdl.handle.net/1721.1/144655</link>
<description>Leveraging Structure and Knowledge in Clinical and Biomedical Representation Learning
McDermott, Matthew B. A.
Datasets in the machine learning for health and biomedicine domain are often noisy, irregularly sampled, only sparsely labeled, and small relative to the dimensionality of both the data and the tasks. These problems motivate the use of representation learning in this domain, which encompasses a variety of techniques designed to produce representations of a dataset that are amenable to downstream modelling tasks. Representation learning in this domain can also take advantage of the significant external knowledge in the biomedical domain. In this thesis, I will explore novel pre-training and representation learning strategies for biomedical data which leverage external structure or knowledge to inform learning at both local and global scales. These techniques will be explored in 4 chapters: (1) leveraging unlabeled data to infer distributional constraints in a semi-supervised learning setting; (2) using graph convolutional neural networks over gene-gene co-regulatory networks to improve modelling of gene expression data; (3) adapting pre-training techniques from natural language processing to electronic health record data, and showing that novel methods are needed for electronic health record timeseries data; and (4) asserting global structure in pre-training applications through structure-inducing pre-training.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144655</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Language with Multimodal Models</title>
<link>https://hdl.handle.net/1721.1/144654</link>
<description>Learning Language with Multimodal Models
Ross, Candace Cheronda
Language acquisition by children and machines is remarkable. Yet while children learn from hearing a relatively modest amount of language and by interacting with people and the environment around them, neural language models require far more data and supervision, struggle with generalizing to new domains, and overwhelmingly learn from text alone. This thesis explores how knowledge about child language acquisition – particularly the scale and type of linguistic information children receive, how they use feedback, and how they generalize in systematic ways beyond the language input they have been exposed to – can be applied to multimodal language models. In particular, this work focuses on (1) training language models with weak supervision using less data by grounding in vision and (2) exploring the generalization abilities of models in multimodal domains. The first approach trains a semantic parser to map from natural language to logical forms using captioned videos, learning without parse trees or any other annotations. The second approach moves from simply observing videos to a more dynamic setup using a robotic simulator and world states to validate the generated logical forms. These approaches focus on evaluating weak supervision, with training and inference data that are relatively similar; we lastly explore evaluation where the inference data is quite different from training and requires systematic generalizations. One approach tests the role of pretraining and a novel decoding strategy for navigating in a grid world; inference commands and action sequences differ in systematic ways from training. The final approach tests the extent to which pretrained multimodal Transformer models generalize when the demographics in the input images or text differ from their learned social biases.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144654</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tilting sheaves for real groups and Koszul duality</title>
<link>https://hdl.handle.net/1721.1/144648</link>
<description>Tilting sheaves for real groups and Koszul duality
Ionov, Andrei
For a certain class of real analytic varieties with a real Lie group action, we define a t-structure on the category of equivariant-monodromic sheaves and develop the theory of tilting sheaves. In the case of a quasi-split real form of an algebraic group acting on the flag variety, we construct an analog of the Soergel functor, which fully faithfully embeds the subcategory of tilting objects into the category of coherent sheaves on a block variety. We apply these results to give a new, purely geometric proof of Soergel’s conjecture for quasi-split groups. The thesis is based on joint work with Zhiwei Yun.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144648</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integer and Matrix Optimization: A Nonlinear Approach</title>
<link>https://hdl.handle.net/1721.1/144644</link>
<description>Integer and Matrix Optimization: A Nonlinear Approach
Cory-Wright, Ryan
Many important problems from the operations research and statistics literatures exhibit either (a) logical relations between continuous variables x and binary variables z of the form "x=0 if z=0", or (b) rank constraints. Indeed, start-up costs in machine scheduling and financial transaction costs exhibit logical relations, while important problems such as reduced rank regression and matrix completion contain rank constraints. These constraints are commonly viewed as separate entities and studied by separate subfields—integer and global optimization, respectively—which propose entirely different strategies for optimizing over them.&#13;
&#13;
In this thesis, we adopt a different perspective on logical and rank constraints. We interpret both constraints as purely algebraic ones: logical constraints are nonlinear constraints of the form x = z ∘ x for x continuous and z binary (meaning z²=z), while rank constraints, Rank(X) ≤ k, are nonlinear constraints of the form X=YX intersected with a linear constraint tr(Y) ≤ k for an orthogonal projection matrix Y (meaning Y²=Y). Under this lens, we show that regularization drives the computational tractability of problems with both logical and rank constraints.&#13;
&#13;
The first three chapters propose a unified framework to address a class of mixed-integer problems. In numerical experiments, we establish that a general-purpose strategy that combines cutting-plane, rounding, and local search methods solves these problems faster and at a larger scale than state-of-the-art methods. Our approach solves network design problems with 100s of nodes and provides solutions up to 40% better than the state-of-the-art; sparse portfolio selection problems with up to 3,200 securities; and sparse PCA problems with up to 5,000 covariates. &#13;
&#13;
The last two chapters extend this framework to model rank constraints via orthogonal projection matrices. By leveraging regularization and duality, we design outer-approximation algorithms to solve low-rank problems to certifiable optimality, compute lower bounds via their semidefinite relaxations, and provide near-optimal solutions through rounding and local search techniques. By invoking matrix perspective functions, we also propose a new class of semidefinite-representable convex relaxations for low-rank problems which outperform the popular nuclear norm penalty.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144644</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Investigations on Flow and Mass Transport in Stressed Rough Fractures</title>
<link>https://hdl.handle.net/1721.1/144642</link>
<description>Experimental Investigations on Flow and Mass Transport in Stressed Rough Fractures
Villamor Lora, Rafael
The study of flow and transport in rough, fractured media is essential in the development of new energy technologies including enhanced geothermal systems, EGS, and CO2 sequestration. This is a complex problem, mostly due to the number of interacting physical processes in the fractured environment.&#13;
&#13;
In this thesis I introduce a novel pressure-controlled Hele-Shaw cell to investigate different physical processes in rough fractures using 3D-printed rock analogs. This system can measure high-resolution fracture aperture and tracer concentration maps under relevant field stress conditions. Using a series of hydraulic and visual measurements, combined with numerical simulations, I investigate the evolving fracture geometry characteristics, pressure-dependent hydraulic transmissivity, and the nature of mass transport as a function of normal stress. &#13;
&#13;
The experimental results show that as the fracture closes and deforms under increasing normal loading: (1) the contact areas grow in number and size; (2) the flow paths become more focused and tortuous; and (3) the transport dynamics of conservative tracers evolve towards a more highly dispersive regime. Moreover, under the applied experimental conditions, I observed excellent agreement between the simulated and the experimentally measured hydraulic behavior.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144642</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gene-regulatory circuitry of disease risk and progression</title>
<link>https://hdl.handle.net/1721.1/144638</link>
<description>Gene-regulatory circuitry of disease risk and progression
Boix, Carles
Complex diseases act heterogeneously through a remarkable diversity of cellular and functional outcomes across the human body, primarily through epigenetic organization and gene regulation. Genetics is a powerful tool to shed light on genes involved in disease, but we need maps of gene regulation and function at the tissue and cell type resolution to better understand disease mechanisms. Increasingly high resolution measurements of cellular epigenomes and transcriptomes allow us to observe this cellular heterogeneity at scale. To systematically model these, we require scalable statistical tools that can interpret and model gene regulation, its machinery, and complex transcriptional states. In this thesis, I build references of gene regulation and function in health and disease to interpret disease-linked genomic loci and develop methods to learn context-specific representations in order to understand how a single genome yields robust and diverse transcriptional outcomes through modularity of biological functions, how this heterogeneity is maintained, and how it breaks down over time and in disease.&#13;
&#13;
In my first project, I build an integrative annotated reference of the human epigenome, systematically integrating multiple annotation projects covering hundreds of human cell lines, tissues, and states. I use this reference to map and dissect non-coding disease loci, specifically mapping multiple pleiotropic disease loci in coronary artery disease and dissecting a locus showing tissue-specific gene involvement. In my second project, I model Alzheimer’s disease (AD) progression across affected brain regions using single-cell transcriptomics, identify specifically vulnerable neuronal populations by brain region along the disease trajectory, and uncover pathways and neuronal circuits that may mediate AD vulnerability. To analyze large-scale transcriptomic references, I develop a fast and scalable method for calling high-resolution gene expression modules from single-cell data, use it to map the complex and modular glial changes underlying AD, and highlight metabolic and immune switches in cognitive impairment. In my third project, I investigate somatic mosaicism as a source of cellular dysfunction and as a means to uncover missing genetic determinants and mechanisms in AD. To do so, I develop methods to map mosaic burden in individual cells jointly with expression and find increased cell type-specific somatic mosaic burden in dementia, which I map to specific pathways and genes implicated in AD. Finally, I model neuronal trajectories through neurodegeneration in human brains and mouse models of AD, developing methods for building disease-driven pseudotime trajectories and mapping transcriptomic changes along paths to neuronal senescence driven by DNA damage accumulation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144638</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mucin and mucin glycans alter behavior of mucosal pathogens</title>
<link>https://hdl.handle.net/1721.1/144637</link>
<description>Mucin and mucin glycans alter behavior of mucosal pathogens
Gold, Michaela Anne
The majority of the microorganisms that colonize the human body, collectively the human microbiota, reside within the mucus layer. While mucus hydrates and lubricates host tissues, it also serves as a protective barrier. Mucus physically blocks pathogens and other microorganisms from reaching host cells; however, mucus additionally decreases the virulence of certain opportunistic pathogens to prevent infections. A greater understanding of mucus’ spectrum of activity could lead to future therapeutics, especially important as the threat of antimicrobial resistance increases. In this thesis, I investigate mucus’ ability to block infections by two bacterial pathogens, one opportunistic and one primary. To isolate the important factors of mucus, I utilize a three-dimensional mucus model composed of natively purified mucin polymers, the major gel-forming component of mucus. With this system, I can disentangle the effects of mucin from the rest of mucus, such as other proteins and salts. I first explore mucin’s impact on Klebsiella pneumoniae, an often multi-drug resistant and sometimes hypervirulent bacterium. I determine that mucin decreases K. pneumoniae surface attachment and biofilm formation, major sources of persistent and antimicrobial tolerant infections especially on implanted medical devices. Additionally, I discover that the glycans cleaved from the mucin protein backbone also block K. pneumoniae attachment to abiotic surfaces, suggesting the possibility of mucin mimetics to prevent biofilm formation on medical devices. My second project examines the primary pathogen Salmonella enterica serovar Typhimurium, which must pass through the mucus barrier before reaching and invading host cells. I demonstrate that mucin and mucin glycans block S. Typhimurium’s ability to infect host epithelial cells. I further reveal that mucin and mucin glycans prevent infection by signaling to strongly downregulate multiple virulence genes in S. 
Typhimurium, including Salmonella pathogenicity island 1 (SPI-1), which is required for host cell invasion. Together, my results elucidate new ways in which mucin modulates pathogen behavior and open the possibility of future mucin-based therapeutics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144637</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image-based pooled genetic screens for complex cellular phenotypes</title>
<link>https://hdl.handle.net/1721.1/144635</link>
<description>Image-based pooled genetic screens for complex cellular phenotypes
Funk, Luke Benjamin
Biological processes are organized in hierarchical interactions of molecules, cells, tissues, and organisms. Cells perform complex functions individually, which when misregulated can result in disease states affecting the entire organism. However, knowledge of the genetic and molecular basis for many cellular phenomena is incomplete, limiting the ability to reverse disease states and engineer biological function. Although recent technologies have enabled scalable functional genomics approaches such as pooled CRISPR screening, the cellular phenotypes that can be linked to gene function in a pooled screen have been restricted to measurement by sequencing, and are often a step removed from the biological process of interest. In contrast, microscopy provides a high-throughput and flexible means to measure a wide range of biologically-relevant phenotypes. Here, we apply an image-based pooled screening approach based on in situ sequencing to understand the contributions of protein-coding genes to a wide range of cellular processes. Specifically, we combine pooled CRISPR/Cas9 genomic perturbations of 5,072 fitness-conferring genes with microscopy-based visualization of DNA, DNA damage response, actin, and microtubules across more than 31 million human cells. By leveraging the complex phenotypes resulting from each perturbation, we identify co-functional genes across diverse cellular activities, revealing novel gene functions and associations. Additionally, we demonstrate pooled CRISPR screening combined with live-cell imaging of more than 400,000 cell division events to further identify unexpected contributions to chromosome segregation. Altogether, this work demonstrates image-based pooled genetic screening as a scalable approach to measure and understand genetic contributions to complex phenotypes and cellular functions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144635</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydraulic Fracturing Behavior of Opalinus Shale: A Framework, Experimentation &amp; Insights</title>
<link>https://hdl.handle.net/1721.1/144632</link>
<description>Hydraulic Fracturing Behavior of Opalinus Shale: A Framework, Experimentation &amp; Insights
AlDajani, Omar Abdulfattah
Hydraulic fracturing is a pivotal technology that made it possible to tap into previously inaccessible energy resources. Shales constitute the majority of unconventional reservoirs and have complex structures characterized by layered, fine-grained minerals. Pressure-volume and micro-seismic data are commonly used to assess hydraulic fracture jobs, but little is known about the produced fracture geometry and how to optimize it. Laboratory experimentation, as done in the presented research, can address this lack of knowledge. Specifically, this thesis examines fracture behavior in shale, assesses the overall fracture geometry, and develops a better understanding of fracture interaction with bedding planes. Opalinus Shale, which visibly comprises two distinct, alternating layers (a tough light layer and a soft dark layer), was used as the material of study. The following framework was established to investigate the complex rock material and hydraulic fracture process.&#13;
&#13;
First, the mineralogy and micro-structure were assessed through a variety of techniques, including X-Ray diffraction, scanning electron microscopy with energy-dispersive X-Ray spectroscopy and focused ion beam milling, and Raman spectroscopy. This showed that the dark layers are clay-rich while the light layers are a heterogeneous mix of quartz, calcite, and other minerals. The hydraulic fracturing liquid used (hydraulic oil) and its interaction with the shale were thoroughly characterized. Capillary calculations showed a strong liquid affinity to the shale, resulting in significant and rapid capillary rise that minimizes the distance between the liquid front and the advancing fracture tip.&#13;
&#13;
Next, a novel series of indentation tests coupled with scratch tests were conducted to characterize the stiffness, hardness, creep compliance, and toughness along the transverse isotropic principal directions of the two distinct layers comprising this shale. Test results were used to approximate the radius of plasticity and fracture energy in each of the principal directions of each layer. Finally, a conceptual model was developed to quantify edge-to-edge and face-to-face fracture energy components, showing that fractures were more likely to propagate perpendicular to bedding.&#13;
&#13;
A novel technique was developed to homogenize and characterize the seismic properties of highly heterogeneous materials. Based on P-wave arrival times, an elliptical velocity model was constructed that defines velocities along and normal to bedding, and the seismic plane of isotropy. This technique can prove very useful as it can be extended to field-scale measurements and can improve acoustic event localization, which depends on accurate velocity estimation.&#13;
&#13;
To do all this, a highly instrumented and controlled experimental apparatus was designed and built to subject the shale specimens to a quasi-true triaxial stress state simulating subsurface stresses and to pressurize a pre-cut artificial crack for hydraulic fracture propagation. This novel apparatus allows one to simultaneously capture images and acoustic emission data along with an array of other measurements instrumental for hydraulic fracture analysis. Despite the simple anisotropic stresses being applied to the test specimens, the fracture behavior was far more complicated than fractures propagating along the direction of maximum principal stress. The results showed the crucial role rock fabric plays in determining hydraulic fracture behavior. Acoustic emissions were also analyzed spatially and temporally, and insights such as focal mechanism frequency and relative proportionality were gained from these observations.&#13;
&#13;
This thesis serves as a solid step towards gaining a comprehensive understanding of hydraulic fracture behavior. The thesis can also contribute to the interpretation of field observations, and presents a valuable workflow for specimen characterization and data analyses. Last but not least, the described experiments can serve as a basis for predictive fracture models.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144632</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven healthcare via constraint learning and analytics</title>
<link>https://hdl.handle.net/1721.1/144629</link>
<description>Data-driven healthcare via constraint learning and analytics
Wiberg, Holly Mika
The proliferation of digitally-available medical data has enabled a new paradigm of decision-making in medicine. Machine learning allows us to glean large-scale insights directly from data, systematizing the heuristic risk assessment process that physicians use on a local scale. Optimization similarly adds rigor to decision-making, providing a quantitative framework for optimizing decisions under certain constraints. The rise in data, coupled with methodological and computational advancements in these fields, presents both opportunities and challenges. In this thesis, we leverage machine learning and optimization to learn from data and drive better decisions in healthcare. We propose novel approaches motivated by current methodological gaps, and we use analytics to tackle clinically-driven problems. This thesis develops methods and applied models to bridge the gap between research and clinical practice, with interpretability and impact as guiding principles.&#13;
&#13;
The first part of the thesis focuses on the development of new approaches for data-driven insights and decision-making. Chapter 2 introduces a constraint learning framework that embeds trained machine learning models directly into mixed-integer optimization formulations. We train machine learning models to approximate functional relationships between decisions and outcomes of interest and subsequently optimize decisions under these data-driven learned constraints and/or objectives. We also highlight an application of this framework in chemotherapy regimen design. In Chapter 3, we propose an interpretable clustering algorithm which learns a tree-based data partition in which each leaf comprises a distinct cluster.  We recover high-quality clusters that can be explicitly described by their decision paths. &#13;
&#13;
The second part of the thesis leverages machine learning and optimization to improve risk prediction and treatment decisions in various domains. We present three such applications. In Chapter 4, we study neutropenic events in chemotherapy patients. We propose a risk prediction model based on a patient's dynamic clinical trajectory over the course of multiple chemotherapy cycles. Chapter 5 demonstrates the use of analytics to address the COVID-19 pandemic. We curate a multi-center, international database of COVID-19 patients and their outcomes, which forms the basis for a COVID-19 mortality risk model for hospitalized patients. Finally, Chapter 6 examines the effectiveness of in-person vs. virtual care from a causal inference lens, considering the effect of visit modality on both operational and clinical outcomes. The resultant machine learning models inform an optimization formulation for allocating telehealth and in-person visits for diabetic patients.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144629</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust and Scalable Multiagent Reinforcement Learning in&#13;
Adversarial Scenarios</title>
<link>https://hdl.handle.net/1721.1/144626</link>
<description>Robust and Scalable Multiagent Reinforcement Learning in&#13;
Adversarial Scenarios
Shen, Macheng
Multiagent decision-making is a ubiquitous problem with many real-world applications, such as autonomous driving, multi-player video games, and robot team sports. Key challenges of multiagent learning include the presence of uncertainty in the other agents’ behaviors and the curse of dimensionality caused by the high dimensionality of the joint observation, action, and policy space. These challenges are accentuated even further in adversarial scenarios due to unknown agent intents and unexpected, possibly adversarial behaviors. This thesis presents approaches for robust and scalable multiagent learning with the goal of efficiently building autonomous agents that can operate robustly in adversarial scenarios. The capability to accurately infer unknown agent intents by observing their behaviors is critical for robust decision-making. A challenge in this case is the high uncertainty in an adversary’s actual behavior, including potential deception, which could differ significantly from an a priori behavior model. Capturing the interaction between the ego-agent and the adversaries, as well as the reasoning over information available to both agents, is critical for modeling this deceptive behavior. This thesis addresses this intent recognition problem using a game-theoretic opponent modeling approach based on a new diversity-driven belief-space ensemble training technique that is used to achieve robustness against deception. To extend the ensemble approach to scenarios with multiple agents, this thesis presents a scalable multiagent learning technique that facilitates near-optimal joint policy learning through a sparse-attention mechanism. This mechanism results in focused parameter updates, which significantly improve sample efficiency.
Moreover, this thesis contributes a novel implicit ensemble training approach that leverages multi-task learning and a deep generative policy distribution to achieve better robustness at a much lower computation and memory cost compared with previous ensemble techniques. The combination of robust intent recognition and scalable multiagent learning leads to robust and scalable offline policy learning. However, a fully autonomous agent also needs to be able to continually learn from (and adapt to) new environments and peer agents. Thus, this thesis also presents a safe adaptation approach that enables adaptation to a new opponent while maintaining low exploitability against any possible opponent exploitation in adversarial scenarios. The contributions presented in this thesis facilitate building autonomous agents that can make robust decisions in competitive multiagent scenarios with uncertainty and safely adapt to previously unseen peer agents, through computationally efficient learning.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144626</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reduced-order modeling of granular intrusions driven by continuum approaches</title>
<link>https://hdl.handle.net/1721.1/144625</link>
<description>Reduced-order modeling of granular intrusions driven by continuum approaches
Agarwal, Shashank
Granular intrusions such as ballistic impacts, vehicular and animal locomotion in natural terrains, and stirring of materials in industrial processes are common. In such systems, granular media often exhibit coupled solid-like and fluid-like multiphase characteristics that are not shown by simple solids (like metals) and fluids (like water). This makes modeling granular media challenging. While the field of granular physics extensively uses grain-scale Discrete Element Modeling (DEM) to capture such characteristics of granular systems, these simulations are computationally expensive. On the other hand, the capability to model such systems in real time is critical in numerous applications, such as path planning and efficient maneuvering of vehicles in sandy terrains on Earth and in extra-terrestrial environments. Due to their shape- and media-specific forms, existing reduced-order intrusion modeling methods have limited capabilities.&#13;
&#13;
This work focuses on developing efficient approaches to model the motion of arbitrarily shaped objects in granular volumes at various levels of numerical detail and accuracy. Specifically, we focus on a mesoscale continuum approach and a macroscale empirical approach. We establish the sufficiency of appropriately chosen constitutive laws and computational methods in modeling various complex granular flow scenarios with a continuum approach. We exploit the approach to develop deep insights into the origin of the granular resistive forces encountered during granular intrusions. We further use these insights to extend an empirical modeling method called Resistive Force Theory (RFT) for real-time modeling of granular intrusions. The RFT extensions developed in this work are verified against a variety of experimental and simulation results and allow modeling the motion of three-dimensional objects of arbitrary shape moving along arbitrary paths in granular media at low and high speeds in real time.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144625</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Portfolio Disclosures in the Mutual Fund Industry</title>
<link>https://hdl.handle.net/1721.1/144622</link>
<description>The Role of Portfolio Disclosures in the Mutual Fund Industry
Choi, Ki-Soon
I study whether increased transparency of fund portfolio disclosures improves outcomes for mutual fund investors. Exploiting the staggered adoption of the SEC's Form N-PORT, which improved the quality and quantity of information on fund strategies and risk profiles, I find a reduction in performance manipulation and managerial rent extraction, evidenced by a decrease in risk shifting and management fees. The results are more pronounced among funds that offer institutional share classes, suggesting that institutional investors benefit more from the new disclosures. I also show that investors become better at allocating capital to funds that will perform well in the future. Collectively, my findings highlight the role of portfolio disclosures in mitigating agency problems in delegated asset management by affecting managers' and investors' investment decisions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144622</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Happiness at Work: Essays on subjective wellbeing in the workplace and labor market</title>
<link>https://hdl.handle.net/1721.1/144621</link>
<description>Happiness at Work: Essays on subjective wellbeing in the workplace and labor market
Ward, George
This dissertation consists of three essays studying the extent to which subjective wellbeing shapes behaviors and outcomes in the workplace and labor market. The first essay studies an information-provision field experiment conducted on a large online jobs platform. The study collects data on the self-reported affective happiness of millions of workers across the USA, aggregates this to the level of companies, and then shows this information to some (randomly-allocated) job seekers on the platform but not others. I find that job seekers respond behaviorally to the provision of information about the happiness of incumbent workers at different organizations, and in doing so they re-allocate their applications away from low-happiness and towards higher-happiness companies. In the second essay, I build on these findings by conducting a survey experiment that provides people with hypothetical choices between jobs at companies with varying levels of i) wage and ii) employee happiness. The results of this analysis suggest that people value workplace happiness and are, on average, willing to trade off wages in order to work at happier companies. In the final chapter, I investigate the relationship between positive affect and productivity. Studying the universe of call center sales workers at one of the UK’s largest employers, this research measures the happiness of workers on a week-to-week basis and links it to detailed administrative data on behavior and performance. Exploiting exogenous variation in employee happiness arising from differential visual exposure to bright or gloomy weather while at work, the results show a causal effect of worker happiness on sales in a field setting. Taken together, the findings of the three chapters suggest that employee happiness has the potential to promote organizational performance by raising productivity, reducing turnover, and aiding recruitment.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144621</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Knowledge for Data-Driven Design</title>
<link>https://hdl.handle.net/1721.1/144620</link>
<description>Representing Knowledge for Data-Driven Design
Akay, Haluk John
Data-driven models have proven to be a transformative alternative to rule-based methods of the past. A data-driven transformation of design is necessary to guide engineers through complexity to develop next-generation products and production systems. Data is abundant from digitally documented early-stage design through final production processes, but this data is often unstructured, informal, and can be qualitative or textual in nature. For data-driven design, data must be computationally interpretable for past documented knowledge to guide future engineering decision-making. This thesis research leverages deep neural network-based language modeling to represent design data; specifically, textually described knowledge. Quantitative representation models make possible a wide range of applied AI methods for performing tasks such as evaluating functional interdependencies and extracting functional information from past design documentation. By learning from past engineering failures and achievements, Big Data and Artificial Intelligence can be used to assist human designers’ decision-making for meeting the needs of society and the environment through data-driven design.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144620</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Financial and Analytic Innovations for Therapeutic Development</title>
<link>https://hdl.handle.net/1721.1/144610</link>
<description>Financial and Analytic Innovations for Therapeutic Development
Xu, Qingyang
Despite groundbreaking advances in biomedicine over the past decades, the process of developing novel drug candidates from laboratory discoveries to safe and effective therapeutics approved by the Food and Drug Administration (FDA) has become longer, more expensive, and less likely to succeed. As a result, there is a widening gap in financing the clinical development of novel drug candidates, preventing potentially effective therapies from reaching the patients who are in dire need of a cure. This thesis proposes financial and data analytic innovations to address four important and challenging aspects of drug development.&#13;
&#13;
We begin with an overview of the financial challenges in novel drug development and the strategies proposed to improve financial efficiency in Part I. In Part II, we apply the "megafund" portfolio approach of financing novel drug developments to two disease areas: glioblastoma therapeutics and mRNA vaccines for emerging infectious diseases. By calibrating the simulation parameters with inputs from domain experts, we find a sharp contrast between the risk/return profiles of the two megafunds. While the megafund for glioblastoma achieves an attractive rate of return and net present value for the investors, the megafund for mRNA vaccines is unlikely to generate financial value, mainly because the limited revenue of vaccine sales is insufficient to recover the significant cost of conducting late-stage clinical trials. The intrinsic limitation of the vaccine development business model motivates the more cost- and time-efficient clinical trial designs discussed in Part III.&#13;
&#13;
Next, in Part III, we propose a novel clinical trial design which combines Bayesian decision analysis and epidemic modeling to accelerate the clinical testing of anti-infective therapeutic candidates during a rapidly evolving epidemic outbreak. The Bayesian optimal sample size of the clinical trial decreases when the disease is more infectious and deadly, and the corresponding optimal Type I error of FDA's decision increases. In addition, we apply Bayesian decision analysis to analyze whether the clinical evidence of a controversial phase 2 clinical trial for amyotrophic lateral sclerosis justifies FDA approval, by balancing the FDA's need to limit adverse medical effects and the patients' need for expedited access to a potentially effective therapy. &#13;
&#13;
In Part IV, we investigate novel machine learning models and statistical techniques to estimate key parameters of the drug development process, including the probability that the drug candidate will receive FDA approval, the duration of clinical trials, and the correlation between clinical trial outcomes. We show that there is significant bias in the machine learning models trained on the imbalanced dataset of historical drug development outcomes. We also show that debiasing the machine learning model improves the prediction accuracy and generates financial value for the drug developer.&#13;
&#13;
Finally, in Part V, we analyze two social and ethical issues of the drug development process. We illustrate the success and challenges of a disruptive pricing strategy for an osteoporosis drug, including a perverse incentive of certain health plans to favorably cover drugs with higher prices in exchange for higher rebates from the drug manufacturer. We also review the ethical controversy of using the human challenge trial (HCT), in which healthy participants are actively inoculated with the pathogen, to accelerate therapeutic development for COVID-19. We call for the wider use of quantitative modeling to assess the risk/benefit tradeoff and the proactive establishment of ethical criteria so that future HCT may be conducted with minimal delay.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144610</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>GaN Complementary-Metal-Oxide-Semiconductor (CMOS) Technology</title>
<link>https://hdl.handle.net/1721.1/144603</link>
<description>GaN Complementary-Metal-Oxide-Semiconductor (CMOS) Technology
Chowdhury, Nadim
In 2014, the Nobel Prize in Physics was jointly awarded to Prof. Isamu Akasaki, Prof. Hiroshi Amano, and Prof. Shuji Nakamura for the invention of the blue LED, which enabled ubiquitous white lighting. The key innovative technology that underpinned the invention of the blue LED is the efficient doping of GaN with p-type dopants. For years, p-GaN has been used only in optoelectronic devices such as LEDs and lasers. In this thesis we demonstrate how p-GaN can be used to realize the next generation of efficient RF and power electronics. Apart from being a direct and wide band-gap semiconductor useful for optoelectronic devices, GaN and its alloys have other material attributes such as spontaneous and piezoelectric polarization, high electron mobility, and high saturation velocity for electrons. These properties enable n-channel GaN-based transistors to operate at a higher switching speed and at a higher power density than their Si counterparts. To realize the full potential of GaN, the need for a complementary circuit technology cannot be overemphasized. A high-performance GaN Complementary-Metal-Oxide-Semiconductor (CMOS) technology could potentially find applications in data centers, electric vehicles, space electronics, on-chip power converters, beyond-5G base stations, and a plethora of other applications where Si falls short in terms of performance and efficiency. However, a major roadblock towards realizing such a technology is the lack of high-performance GaN p-channel transistors that can be monolithically integrated with GaN n-channel devices. This thesis develops key pathways to improve the current density of GaN p-channel transistors by demonstrating a self-aligned gate and a FinFET technology for the p-channel device. Our demonstrated GaN p-channel devices exhibit record performance in terms of on-current density, on-off ratio, subthreshold swing, and on-resistance.
This thesis for the first time demonstrates a GaN-CMOS technology on a 6-inch GaN-on-Si wafer by fabricating a monolithically integrated p-channel GaN transistor with an E-mode GaN n-channel device.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144603</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robustness Verification and Optimization of Nonlinear Systems</title>
<link>https://hdl.handle.net/1721.1/144602</link>
<description>Robustness Verification and Optimization of Nonlinear Systems
Lee, Dongchan
Nonlinear systems allow us to describe and analyze physical and virtual systems, including dynamical systems, power grids, robots, and neural networks. Problems involving nonlinearity pose challenges in providing safety guarantees and robustness in the presence of uncertainty. This thesis provides methods that exploit knowledge of upper and lower bounds on the nonlinearity and solves problems related to robustness verification and optimization subject to uncertain parameters. The first half of the thesis develops the convex restriction of a non-convex feasibility set defined by a set of nonlinear equality and inequality constraints. Convex restrictions provide a closed-form convex quadratic condition that is sufficient for solving a system of nonlinear equations. By replacing the original constraints with the proposed conditions, a non-convex optimization problem can be solved as a sequence of convex optimization problems, with feasibility and robustness guarantees. We demonstrate its applications in Model Predictive Control (MPC), robustness verification of neural networks, the robust Optimal Power Flow (OPF) problem, and motion planning in robotics. The second part of the thesis focuses on nonlinear dynamical systems and develops reachability analysis and constrained-input constrained-output analysis for verification problems. We provide an optimization-based method for computing reachable sets around a nominal trajectory. The proposed methods use contraction metrics to find templates for reachable sets. Additionally, we develop constrained-input constrained-output analysis to characterize the relationship between the peak magnitudes of input and output signals. Numerical experiments demonstrate the applicability of these methods to a broad class of nonlinear systems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144602</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic Soft Continuum Robots for Telerobotic Stroke Intervention</title>
<link>https://hdl.handle.net/1721.1/144600</link>
<description>Magnetic Soft Continuum Robots for Telerobotic Stroke Intervention
Kim, Yoonho
Robotic technologies have been adopted in various subspecialties of both open and minimally invasive surgery, offering benefits such as enhanced surgical precision and accuracy with reduced effort and fatigue of the surgeon. However, robotic applications to endovascular neurosurgery for treating stroke or brain aneurysms have remained largely unexplored. The brain’s blood vessels are considerably challenging to navigate with a manually controlled passive guidewire, and improper or redundant guidewire manipulation can lead to devastating complications. Existing vascular robotic systems are designed to manipulate conventional guidewires with limited steering capabilities and remain unsuited for neurovascular intervention. In this thesis, we propose a telerobotic neurointerventional platform based on a magnetically controlled soft continuum robot. Composed of soft polymers containing tiny magnetic particles as distributed actuation sources, our magnetic soft continuum robot is thin and flexible enough to navigate the narrow and winding pathways of the brain’s blood vessels. Our magnetic manipulation system consists of a robot arm with an actuating magnet and motorized linear drives to remotely steer and advance the continuum robot under the real-time teleoperation of the system. We evaluate our system’s performance both in vitro with realistic anatomical models and in vivo with a porcine model and demonstrate telerobotically assisted therapeutic procedures for endovascular treatments of stroke and aneurysms. When compared with manually controlled passive guidewires, our telerobotic neurointerventional system based on magnetic manipulation helps to achieve safer and quicker access to hard-to-reach areas in the complex cerebral vasculature. Our system also allows an operator to work remotely from the radiation source to minimize x-ray exposure during the intervention. 
Furthermore, it may open the possibility of remote procedural services for telerobotic stroke intervention to address the logistical challenge in current stroke systems of care.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144600</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-Matter Interactions in High-Efficiency Photovoltaics, Light-Emitting Devices, and Strongly Coupled Microcavities</title>
<link>https://hdl.handle.net/1721.1/144595</link>
<description>Light-Matter Interactions in High-Efficiency Photovoltaics, Light-Emitting Devices, and Strongly Coupled Microcavities
Laitz, Madeleine Reynolds
The interactions of light and matter drive many of today’s devices, from electricity generation and consumption to manipulation. Within electricity generation, emerging thin film photovoltaics now rival traditional silicon-based solar cells in terms of power conversion efficiency (PCE) due to dramatic improvements to optoelectronic material properties and device architectures. Within electricity consumption, quantum dot light emitting diodes (QD-LEDs) are a high-efficiency, high-color-purity, versatile material candidate. Recent efforts to develop heavy metal-free QD-LEDs have led to high external quantum efficiencies in InP- and ZnSe-based QDs rivaling the performance of the colloidal archetype of Cd-based QD-LEDs. Within energy manipulation, the emergence of photonics from electronics presents opportunities to engineer low-loss, low-threshold information transmission and computation by all-optical means and matter-mediated hybrid electronic/photonic processes.&#13;
&#13;
In this work, we investigate light-matter interactions in emerging thin film perovskite photovoltaics, heavy metal-free QD-LEDs and microcavities, and two-dimensional perovskite microcavity exciton-polaritons.&#13;
&#13;
First, we quantify the PCE enhancements due to photon recycling in high-efficiency Cs₀.₀₅(MA₀.₁₇FA₀.₈₃)₀.₉₅Pb(I₀.₈₃Br₀.₁₇)₃ (triple-cation) perovskite thin film photovoltaics as a function of material properties such as non-radiative recombination and the probability of photon escape. We determine that a perovskite active layer material with non-radiative rates k₁ &lt; 1×10⁴ s⁻¹ can result in practical PCE improvements of up to 1.8% due to photon recycling alone, and present material and device design principles to harness photon recycling effects in next-generation perovskite solar cells.&#13;
&#13;
Next, we investigate energy and charge transfer in InP/ZnSe/ZnS QD thin films and QD-LEDs as a function of increasing electric field strength. We probe the voltage-controlled photoluminescence (PL) modulation of a QD-LED in reverse bias and achieve 87% PL quenching, which is, to our awareness, the highest reported quenching efficiency in InP-based QDs. We also demonstrate amplified spontaneous emission processes in QD metallic microcavities by spectral coincidence of a three-dimensionally confined photon mode and a photon recycling-enhanced gain region.&#13;
&#13;
Finally, we form exciton-polaritons (polaritons) at room temperature in 2D perovskite microcavities, resulting in, to the best of our knowledge, a record exciton-photon coupling strength for planar (C₆H₅(CH₂)₂NH₃)₂PbI₄ microcavities of ℏΩ_Rabi = 260 ± 5 meV. By utilizing wedged microcavities in which the cavity detuning changes as a function of excitation position, we probe the temperature-dependent polariton photophysics for varying polariton exciton/photon character. In this way, we reveal material-specific polariton relaxation mechanisms and intracavity pumping schemes arising from the interplay of 2D perovskite excitonic states.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144595</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guessing Random Additive Noise Decoding (GRAND), from Performance to Implementation</title>
<link>https://hdl.handle.net/1721.1/144592</link>
<description>Guessing Random Additive Noise Decoding (GRAND), from Performance to Implementation
An, Wei
To meet the high reliability and low latency requirements of many new applications, such as those in Ultra Reliable Low Latency Communications (URLLC), a universal optimal decoder is desired that can not only be used to select the best short code candidate, but can also adapt itself to channel memory, avoiding performance degradation. The Guessing Random Additive Noise Decoding (GRAND) algorithm makes such a decoder possible. Prevailing research on GRAND is outlined in Chapter 1.&#13;
&#13;
The well-known Markovian channels are selected for investigation in Chapter 2, leading to the Markov ordered GRAND (GRAND-MO) decoder. By exploiting channel statistical properties in its pattern generation, GRAND-MO achieves significant decoding gains with increasing channel memory, eliminating the need for interleavers. The algorithm is extended to high-order modulations by guessing symbol noises, obviating de-mappers and achieving additional decoding gains, especially with the augmented constellation.&#13;
&#13;
Chapter 3 explains the rationale behind the basic version of Ordered Reliability Bits GRAND (ORBGRAND), and extends the algorithm to its full version, overcoming the performance limitation in high SNR regions. A number of complexity control techniques ensure the robustness and feasibility of ORBGRAND for practical implementations. Its extension to high-order modulations is justified by additional decoding gain as well as the elimination of complex de-mapping operations.&#13;
&#13;
Armed with both hard and soft detection variants of GRAND, Cyclic Redundancy Check (CRC) codes are evaluated and shown to deliver excellent performance, beating state-of-the-art CA-Polar codes. Random Linear Codes (RLCs) are also shown to be good candidates, owing to their security features. Thanks to the advent of GRAND, these two codes, long neglected for error correction, become good candidates for URLLC applications, as presented in Chapter 4.&#13;
&#13;
With decoding performance investigated for the GRAND variants, their implementations are also studied. Computational complexity analysis is performed for each GRAND variant as well as for CRC decoding. Moreover, a number of practical issues are addressed in Chapter 5 to facilitate hardware implementations of GRAND decoders. The investigation of GRAND from performance to implementation demonstrates GRAND's potential as a practical solution for URLLC applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144592</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-fidelity Two-qubit Gates and Noise Spectroscopy with Superconducting Qubits</title>
<link>https://hdl.handle.net/1721.1/144591</link>
<description>High-fidelity Two-qubit Gates and Noise Spectroscopy with Superconducting Qubits
Sung, Youngkyu
Although there has been tremendous progress toward achieving low error rates with superconducting qubits, error-prone gates remain the bottleneck in realizing quantum computing applications. To build robust quantum computers, it is crucial to identify the dominant sources of errors and suppress them by engineering the control and architecture of qubit systems. In this thesis, we implement a tunable coupler and noise spectroscopy with the goal of achieving high-fidelity two-qubit gates and characterizing underlying noise mechanisms in superconducting qubits, respectively. We engineer various control techniques---including a fast adiabatic control and spin-locking noise spectroscopy---by incorporating the impact of higher energy levels of a qubit and coupler. Specifically, we harness the higher levels of a coupler as a resource to cancel out an unwanted ZZ interaction between qubits, thereby improving the two-qubit gate fidelity. In addition, we exploit the multiple level transitions of a transmon sensor to distinguish the noise contributions from flux and photon shot noise. The control protocols developed in this thesis may help resolve hardware challenges in building quantum computers with low error rates.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144591</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enriching Digital Maps with Aerial Imagery and GPS Data</title>
<link>https://hdl.handle.net/1721.1/144590</link>
<description>Enriching Digital Maps with Aerial Imagery and GPS Data
He, Songtao
Digital street maps with rich features are the foundation of many applications. However, creating and maintaining up-to-date digital maps often involve many labor-intensive tasks, making the mapping process time-consuming and expensive. This thesis explores automated techniques for enriching digital street maps from aerial imagery and GPS data.&#13;
    &#13;
Digital street maps consist of a collection of geometric structures, such as a road graph, and the semantics associated with those structures, such as the lane count and the speed limit of a road segment. This thesis first proposes two solutions, RoadRunner and Sat2Graph, to automatically extract road-level street maps from GPS trajectory data and aerial imagery, respectively. Road-level street maps serve as the base maps in digital street maps, providing the basic yet fundamental way-finding service to map users. However, road-level street maps do not have lane structure information, which is essential for lane-to-lane navigation and autonomous vehicles. Therefore, this thesis proposes a mapping pipeline that extracts lane-level street maps from aerial imagery. Besides road structure extraction, this thesis proposes RoadTagger to infer road attributes such as the lane count and road type of road segments from aerial imagery. Finally, this thesis proposes a mapping solution to create high-resolution traffic accident risk maps that can enrich the semantics of existing digital maps and enable new applications such as safety-aware routing and precise insurance.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144590</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Irregular Applications with Pipeline Parallelism</title>
<link>https://hdl.handle.net/1721.1/144589</link>
<description>Accelerating Irregular Applications with Pipeline Parallelism
Nguyen, Quan Minh
Irregular applications have frequent data-dependent memory accesses and control flow. They arise in many emerging and important domains, including sparse deep learning, graph analytics, and database processing. Conventional architectures cannot handle irregular applications efficiently because their techniques for improving performance, like exploiting instruction-level or data-level parallelism, are not tailored to them. Thus, continued progress in these crucial domains depends on exploring new avenues of parallelism.&#13;
&#13;
Fortunately, irregular applications contain abundant but untapped pipeline parallelism: they can be divided into networks of stages. Pipelining not only exposes parallelism but also enables decoupling, which hides the latency of long events by allowing producer stages to run ahead of consumer stages. To properly decouple these applications, though, this pipeline parallelism must be exploited at a fine grain, with few operations per stage. Prior work has proposed architectures, compilers, and languages, but these focus on regular pipelines and thus cannot overcome several challenges of irregular applications. First, architectures need to support the efficient execution of many fine-grain pipeline stages. Second, such irregular pipelines suffer from load imbalance, as the amount of work in each stage varies rapidly as the program runs. Finally, these stages must communicate and coordinate changes in control flow.&#13;
&#13;
This thesis demonstrates that exploiting fine-grain pipeline parallelism in irregular applications is effective and practical. To this end, this thesis proposes two hardware architectures and a compiler: Pipette, the first architecture, reuses existing structures in modern out-of-order cores to implement load-balanced decoupled communication between stages; and Fifer, the second architecture, makes the acceleration benefits of coarse-grain reconfigurable arrays available to irregular applications. Pipette achieves gmean 1.9x speedup over a data-parallel implementation, and Fifer achieves up to 47x speedup over an out-of-order multicore while using considerably less area. Both architectures also further accelerate challenging memory accesses and resolve the load balancing and control flow challenges that are ubiquitous in irregular applications. Finally, Phloem is a compiler that makes it easy for programmers to use these architectures by producing high-performance pipeline-parallel implementations of irregular applications from serial code. Phloem automatically achieves 85% of the performance of manually pipelined versions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144589</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-Chip Planar Lens Architectures for Optical Beam Steering</title>
<link>https://hdl.handle.net/1721.1/144588</link>
<description>On-Chip Planar Lens Architectures for Optical Beam Steering
López, Josué Jacob
Free-space optical beam steering is an important technological capability because of its applications in optical communication links and in sensing such as light detection and ranging (lidar). Over the past decade, there have been significant efforts to develop a beam steering architecture that can lead to solid-state lidar with lower size, weight, power consumption, and cost (SWaP-C) while still meeting a high level of sensing performance and reliability. Herein is the experimental demonstration of two novel planar lens-based architectures for optical beam steering in two dimensions. The first experimental demonstration is an aplanatic lens designed via the paraxial ray approximation and ray tracing. The second experimental demonstration is a Luneburg lens that is designed with a gradient in the refractive index along the radius of the lens. This second system uses a circularly symmetric grating to emit the optical beam over a wide field of view. Both planar lens architectures leverage a near-infrared tunable laser, a Mach-Zehnder interferometer switch tree, the lens that collimates and steers an optical mode in the plane of the chip, and a wavelength-dependent grating for out-of-plane coupling. Various grating designs are presented to improve the effective aperture length and the optical power emitted from the grating, including double-layer grating designs and apodization schemes for the grating fill-fraction. Both devices are fabricated using a wafer-scale fabrication process and pave the way for two-dimensional optical beam steering with low electronic complexity and a large field of view. Lastly, remaining architectural challenges for a high-performance lidar-on-a-chip system are discussed.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144588</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Devices and Algorithms for Analog Deep Learning</title>
<link>https://hdl.handle.net/1721.1/144582</link>
<description>Devices and Algorithms for Analog Deep Learning
Onen, O. Murat
Efforts to realize analog processors have skyrocketed over the last decade as energy-efficient deep learning accelerators became imperative for the future of information processing. However, the absence of two entangled components creates an impasse for their practical implementation: devices satisfying algorithm-imposed requirements and algorithms running on nonideality-tolerant routines. This thesis demonstrates a near-ideal device technology and a superior neural network training algorithm that, combined, can ultimately propel analog computing. The CMOS-compatible nanoscale protonic devices demonstrated here show unprecedented characteristics, incorporating the benefits of nanoionics with extreme acceleration of ion transport and reactions under strong electric fields. Enabled by a material-level breakthrough of utilizing phosphosilicate glass (PSG) as a proton electrolyte, this operation regime achieves controlled shuttling and intercalation of protons in nanoseconds at room temperature in an energy-efficient manner. Then, a theoretical analysis is carried out to explain the infamous incompatibility between asymmetric device modulation and conventional neural network training algorithms. By establishing a powerful analogy with classical mechanics, a novel method, Stochastic Hamiltonian Descent, is developed to exploit device asymmetry as a useful feature. Overall, the devices and algorithms developed in this thesis have immediate applications in analog deep learning, whereas the overarching methodology provides further insight for future advancements.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144582</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Image-based Personalized Cancer Screening</title>
<link>https://hdl.handle.net/1721.1/144581</link>
<description>Machine Learning Methods for Image-based Personalized Cancer Screening
Yala, Adam
While AI has the potential to transform patient care, the development of equitable clinical AI models and their translation to hospitals remains difficult. From a computational perspective, these tools must deliver consistent performance across diverse populations and adapt to diverse clinical needs, while learning from biased and scarce data. Moreover, the development of such tools relies on our capacity to balance clinical AI utility and patient privacy concerns. In this thesis, I will discuss our contributions in addressing the above challenges in three areas: 1) cancer risk assessment from imaging, 2) personalized screening policy design, and 3) private data sharing through neural obfuscation. I have demonstrated that our clinical models offer significant improvements over the current standard of care across globally diverse patient populations. The models now underlie prospective clinical trials.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144581</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Practical Approach to Federated Learning</title>
<link>https://hdl.handle.net/1721.1/144580</link>
<description>A Practical Approach to Federated Learning
Mugunthan, Vaikkunth
Machine learning models benefit from large and diverse training datasets. However, it is difficult for an individual organization to collect sufficiently diverse data. Additionally, the sensitivity of the data and government regulations such as GDPR, HIPAA, and CCPA restrict how organizations can share data with other entities. This forces organizations with sensitive datasets to develop models that are only locally optimal. Federated learning (FL) facilitates robust machine learning by enabling the development of global models without sharing sensitive data. However, there are two broad challenges associated with deploying FL systems: privacy challenges and training/performance-related challenges. Privacy challenges pertain to attacks that reveal sensitive information about local client data. Training/performance-related challenges include high communication costs, data heterogeneity across clients, and a lack of personalization techniques. All these concerns have to be addressed to make FL practical, scalable, and useful. In this thesis, I discuss techniques I've designed for addressing these challenges and describe two systems that I've developed to mitigate them: PrivacyFL, a privacy-preserving simulator for FL, and DynamoFL, an easy-to-use production-level system for FL.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144580</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gate-geometry Dependence of Enhancement-mode p-GaN Gate High Electron Mobility Transistors</title>
<link>https://hdl.handle.net/1721.1/144579</link>
<description>Gate-geometry Dependence of Enhancement-mode p-GaN Gate High Electron Mobility Transistors
Lee, Ethan Sukrae
While GaN-based transistors for power electronics have in many situations demonstrated technological superiority over conventional Si-based devices, the development of GaN power electronics has only scratched the surface of the possibilities that lie ahead. The most promising device structure for power electronics is the enhancement-mode p-GaN Gate High Electron Mobility Transistor (HEMT), which matches the power, voltage, and efficiency of conventional GaN FETs but, most importantly, features a positive threshold voltage. This is achieved by the insertion of a Mg-doped GaN layer above the AlGaN/GaN heterostructure that is contacted by a Schottky metal. In this manner, the gate stack under positive gate bias features a back-to-back diode configuration, with the reverse-biased gate-metal/p-GaN Schottky diode in series with the forward-biased p-GaN/AlGaN/GaN p-i-n barrier diode. An undesirable aspect of this configuration is a p-GaN intermediate node that is largely floating, complicating device operation and the understanding of device physics and reliability.&#13;
&#13;
This thesis carries out a detailed study of the impact of gate geometry design, in particular the relative area of the p-i-n and the Schottky barrier junctions, on the operation and reliability of industrially prototyped p-GaN gate HEMTs. In particular, we study the impact of the length of an offset region at the edges of the gate stack that is not contacted by the Schottky metal. Our study reveals gate electrostatics that under steady-state conditions are set by gate current continuity across the two junctions. Further, a surprising preferential gate current is found to flow across the p-GaN/AlGaN/GaN diode in the offset region of the p-GaN layer. The combination of these two factors has a large influence on the steady-state figures of merit of the device.&#13;
&#13;
We have also found that under pulsed gate conditions, for a short time, the gate electrostatics are dominated by a capacitive divider set by the p-i-n and Schottky barrier junctions in series. This gives rise to prominent transients that are dominated by the charging dynamics of the p-GaN node. The time constants also exhibit a prominent gate geometry dependence.&#13;
&#13;
Finally, we have studied the positive bias temperature instability (PBTI) of the devices under prolonged gate voltage stress and its dependence on gate geometry. We discover both recoverable and permanent degradation phenomena and, in particular, a new permanent positive threshold shift that is uniquely associated with the gate offset region.&#13;
&#13;
We postulate that incomplete magnesium activation in the p-GaN offset region might be responsible for the unique behavior of the gate offset region of p-GaN gate HEMTs.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144579</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying a concurrent, crash-safe file system with sequential reasoning</title>
<link>https://hdl.handle.net/1721.1/144578</link>
<description>Verifying a concurrent, crash-safe file system with sequential reasoning
Chajed, Tej
Critical systems software such as the file system is challenging to make correct due to the combination of concurrency in the implementation, needed for good performance, and the requirement to preserve data even on a crash, where the whole computer stops and reboots unexpectedly. To build reliable systems, this thesis extends formal verification — proving that a system always meets a mathematical specification of its behavior — to reason about concurrency and crash safety.&#13;
&#13;
The thesis applies the new verification techniques to the verification of DaisyNFS, a new concurrent file system. The file system is an important service of an operating system, because nearly all persistent data is ultimately stored in a file system and bugs can lead to permanent data loss in any application running on top. Another contribution of the thesis is a system design and verification techniques to scale verification to a system of the size and complexity of a file system. In particular, the file system is designed around a transaction system that addresses the core challenges of crashes and concurrency, so that the rest of the code can be verified with comparatively simpler sequential reasoning.&#13;
&#13;
An evaluation of proof overhead finds that verification required 2× as many lines of proof as code for the sequential reasoning, compared to 20× for the crash safety and concurrency proofs. A performance evaluation finds that DaisyNFS achieves good performance compared to Linux NFS exporting ext4 over a range of benchmarks: at least 60% of the throughput even on the most challenging concurrent benchmarks, and 90% on other workloads.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144578</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Configurable, Extensible, and Modular Network Stacks</title>
<link>https://hdl.handle.net/1721.1/144577</link>
<description>Enabling Configurable, Extensible, and Modular Network Stacks
Narayan, Akshay Krishna
Modern networks and the applications that use them are increasingly specialized; each application increasingly uses a bespoke network stack which integrates desired protocols, services, and APIs. This thesis describes two systems, Bertha and Congestion Control Plane (CCP), which incorporate new abstractions to navigate this new setting from the perspectives of the congestion control algorithm and the application’s network API, respectively. Bertha uses a new abstraction called a Chunnel to represent network services, e.g., hardware offloads of application functionality, publish-subscribe communication services, or encryption. CCP decouples congestion control algorithm implementations from network datapaths by designing an abstract datapath which supports collecting custom measurements and subsequently applying rate or window enforcement.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144577</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference for Social and Engineering Systems</title>
<link>https://hdl.handle.net/1721.1/144576</link>
<description>Causal Inference for Social and Engineering Systems
Agarwal, Anish
What will happen to Y if we do A? A variety of meaningful social and engineering questions can be formulated this way: What will happen to a patient’s health if they are given a new therapy? What will happen to a country’s economy if policy-makers legislate a new tax? What will happen to a data center’s latency if a new congestion control protocol is used? We explore how to answer such counterfactual questions using observational data—which is increasingly available due to digitization and pervasive sensors—and/or very limited experimental data. The two key challenges are: (i) counterfactual prediction in the presence of latent confounders; (ii) estimation with modern datasets which are high-dimensional, noisy, and sparse.&#13;
&#13;
The key framework we introduce connects causal inference with tensor completion. In particular, we represent the various potential outcomes (i.e., counterfactuals) of interest through an order-3 tensor. The key theoretical results presented are: (i) Formal identification results establishing under what missingness patterns, latent confounding, and tensor structure the recovery of unobserved potential outcomes is possible. (ii) Novel estimators to recover these unobserved potential outcomes, together with proofs that they are finite-sample consistent and asymptotically normal.&#13;
&#13;
Finally, we discuss connections between matrix/tensor completion and time series analysis; we believe this could serve as a basis to do counterfactual forecasting.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144576</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning on Geometry Representations</title>
<link>https://hdl.handle.net/1721.1/144568</link>
<description>Deep Learning on Geometry Representations
Smirnov, Dmitriy
While deep learning has been successfully applied to many tasks in computer graphics and vision, standard learning architectures often operate on shape representations that are dense and regular, like pixel or voxel grids. On the other hand, decades of computer graphics and geometry processing research have resulted in specialized algorithms and tools that use representations without such regular structure. In this thesis, we revisit conventional approaches in graphics and geometry to propose deep learning pipelines and inductive biases that are directly compatible with common geometry representations, without relying on simple uniform structure.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144568</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive and Prescriptive Analytics in Operations Management</title>
<link>https://hdl.handle.net/1721.1/144567</link>
<description>Predictive and Prescriptive Analytics in Operations Management
Skali Lami, Omar
The recent surge in data availability, advances in hardware and software, and the democratization of analytics highlight the critical importance of prediction and prescription in harnessing the power of data to create value through optimal, data-driven decision making. This thesis proposes novel Machine Learning (ML) and optimization methods in (i) predictive analytics, (ii) prescriptive analytics, and (iii) their high-impact applications in operations management.&#13;
&#13;
On the predictive side, this thesis tackles the problems of interpretability and predictive power within the context of tree ensembles. The first chapter introduces the Extended Sampled Trees (XSTrees) method, a novel tree ensemble ML method for classification and regression. Instead of learning a single decision tree like CART, or a collection of trees like Random Forests or Gradient Boosting methods, XSTrees learns the entire probability distribution over the tree space. This approach yields strong theoretical guarantees and a significant edge over other ensemble methods in terms of performance. Analytically, we prove that XSTrees converges to the true underlying tree model with rate [formula], where &#119899; ∈ N is the number of training observations. Experimentally, we show on publicly available datasets, synthetic data, and two real-world case studies that XSTrees is very competitive with state-of-the-art models, with an average accuracy between 2.5% and 50% higher than competitors for classification and an average R2 between 2% and 85% higher for regression.&#13;
&#13;
We further highlight the need for, and impact of, more powerful and interpretable tree-based methods in the second chapter through the problem of ancillary services in targeted advertising under an ML lens. This chapter aims to predict the Net Present Value (NPV) of these services, estimate the probability of a customer subscribing to each of them depending on what services are offered to them, and ultimately prescribe the optimal personalized service recommendation that maximizes the expected long-term revenue. First, we propose a novel method called Cluster-While-Classify (CWC). This hybrid optimization-ML method performs joint clustering and classification and subsequently fits a tree-based classifier on the corresponding assignment to predict the sign-up propensity of services based on customer, product, and session-level features. CWC is competitive with the industry state-of-the-art and can be represented as a simple decision tree, making it interpretable and easily actionable. We then use Double Machine Learning (DML) and Causal Forests, another tree-based ML method, to estimate the NPV for each service and finally propose an iterative optimization strategy — that is scalable and efficient — to solve the personalized ancillary service recommendation problem. CWC achieved a competitive 74% out-of-sample accuracy which, alongside the rest of the personalized holistic optimization framework, resulted in an estimated 2.5-3.5% uplift in revenue, which in turn translates to an $80-100 million increase in revenue and a $15-20 million increase in profits.&#13;
&#13;
On the prescriptive side, this thesis moves away from the predict-then-optimize paradigm by doing the prediction and the prescription jointly, resulting in a lower prescription error and higher robustness. The third chapter presents a holistic framework for prescriptive analytics. Given side data &#119909;, decisions &#119911;, and uncertain quantities &#119910; that are functions of &#119909; and &#119911;, we propose a framework that simultaneously predicts &#119910; and prescribes the “should be” optimal decisions &#119911;¯. The algorithm can accommodate a large number of predictive machine learning models and continuous and discrete decisions of high cardinality. It also allows for constraints on these decision variables. We show wide applicability and strong computational performance on synthetic experiments and two real-world case studies.&#13;
&#13;
Additionally, we illustrate the impact of these predictive and prescriptive analytics methods in two additional real-world, high-impact applications: healthcare and industrial operations. The fourth chapter proposes an end-to-end framework to help mitigate the COVID-19 pandemic and its impact through case and death prediction, true-prevalence estimation, and fair vaccine distribution. We present the methods we developed for predicting cases and deaths using a novel ML-based aggregation method to create a single prediction we call MIT-Cassandra. We further incorporate COVID-19 case prediction to determine the true prevalence and incorporate this prevalence into an optimization model for efficiently and fairly managing the operations of vaccine allocation. This also allows us to provide insights into how prevalence and exposure of the disease in different parts of the population can affect vaccine distribution.&#13;
&#13;
In the last chapter, we propose a novel, machine learning-based methodology to improve the efficiency of maintenance operations, from description to prediction to intervention. The proposed methodology has three main components, applied sequentially to the maintenance scheduling problem. First, a data-driven failure modes and effects analysis that fully describes the state of equipment at a given time, including the probability of each failure mode and its respective causes. Second, a unified predictive model that slightly adjusts its parameters for each specific piece of equipment to predict its future state. Third, a holistic prescriptive model to optimize maintenance interventions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144567</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private equity fund valuation management during fundraising</title>
<link>https://hdl.handle.net/1721.1/144563</link>
<description>Private equity fund valuation management during fundraising
Baik, Brian (Kunho)
I investigate whether and how private equity fund managers (GPs) inflate their interim fund valuations (net asset values or NAVs) during fundraising periods. Specifically, I study the extent to which the GPs inflate NAVs by managing valuation assumptions (e.g., valuation multiples), influencing the financial metrics (e.g., EBITDA and sales) reported by the private firms in their portfolios, or both. Using a sample of buyout funds and their portfolio firms in Europe, I find that funds managed by low reputation GPs show more dramatic forms of NAV inflation by managing upward not only valuation multiples but also portfolio firm performance. The results are robust to a number of alternative explanations. Low reputation funds that employ some form of real earnings management show success in fundraising. Overall, I illustrate the mechanisms behind inflated fund valuations during fundraising periods and provide evidence supporting the argument that low reputation GPs are manipulating NAVs rather than timing fundraising periods.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144563</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient reinforcement learning via singular value decomposition, end-to-end model-based methods and reward shaping</title>
<link>https://hdl.handle.net/1721.1/144562</link>
<description>Efficient reinforcement learning via singular value decomposition, end-to-end model-based methods and reward shaping
Gehring, Clement
Reinforcement learning (RL) provides a general framework for data-driven decision making. However, the very same generality that makes this approach applicable to a wide range of problems is also responsible for its well-known inefficiencies. In this thesis, we consider properties shared by interesting classes of decision-making problems that can be leveraged to design learning algorithms that are both computationally and data efficient. Specifically, this work examines the low-rank structure found in various aspects of decision making problems and the sparsity of effects of classical deterministic planning, as well as the properties that end-to-end model-based methods depend on to perform well. We start by showing how low-rank structure in the successor representation enables the design of an efficient on-line learning algorithm. Similarly, we show how this same structure can be found in the Bellman operator, which we use to formulate an efficient variant of the least-squares temporal difference learning algorithm. We further explore low-rank structure in state features to learn transition models which allow for efficient planning entirely in a low-dimensional space. We then take a closer look at end-to-end model-based methods to better understand their properties. We do this by examining this type of approach through the lens of constrained optimization and implicit differentiation. Through the implicit perspective, we derive properties of these methods which allow us to identify conditions under which they perform well. We conclude this thesis by exploring how the sparsity of effects of classical planning problems can be used to define general domain-independent heuristics, which can be used to greatly accelerate learning of domain-dependent heuristics through the use of potential-based reward shaping and lifted function approximation.
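A small numerical sketch of why low-rank structure arises in the successor representation (this example, a random walk on a ring of states, is illustrative only and is not taken from the thesis):

```python
import numpy as np

# Hypothetical example: random walk on a ring of 50 states.
n, gamma = 50, 0.95
P = np.zeros((n, n))
for state in range(n):
    P[state, (state - 1) % n] = 0.5
    P[state, (state + 1) % n] = 0.5

# Successor representation: expected discounted future state visitations,
# SR = (I - gamma * P)^{-1}.
SR = np.linalg.inv(np.eye(n) - gamma * P)

# Its spectrum decays quickly, so a low-rank factorization captures most
# of the structure a learning algorithm needs, at a fraction of the cost.
s = np.linalg.svd(SR, compute_uv=False)
rank10_energy = s[:10].sum() / s.sum()
```

Here a rank-10 approximation of the 50 × 50 successor matrix already captures most of its spectral mass, which is the kind of structure the thesis's efficient on-line algorithms exploit.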
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144562</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling Neural Language Generation</title>
<link>https://hdl.handle.net/1721.1/144561</link>
<description>Controlling Neural Language Generation
Shen, Tianxiao
Large-scale neural language models have made impressive strides in natural language generation. However, typical models operate in a left-to-right, unconstrained fashion with limited control over what is generated. This thesis explores flexible sequence models and weakly supervised methods to perform various controlled generation tasks. We anticipate that these techniques will be broadly applicable to other domains, such as the generation of images, molecules, and biological sequences.&#13;
&#13;
We begin by presenting a class of sequence models called blank language models (BLMs), which generate sequences by dynamically creating and filling in blanks. Given partially specified text with one or more blanks, a BLM fills in the blanks with a variable number of tokens consistent with the context. Our model is well suited for a variety of text editing and rewriting tasks and demonstrates effectiveness on text infilling, ancient text restoration, and sentiment transfer.&#13;
&#13;
Next, we investigate text autoencoders and their use to control generation through latent space operations. We establish a theory on how to mold a meaningful latent space geometry for discrete text data. Building on this, we develop a family of denoising text autoencoders that demonstrate the potential of attribute modification (e.g., tense, sentiment, etc.) through simple vector arithmetic.&#13;
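The vector-arithmetic idea can be sketched on toy data (the clusters below are a stand-in for learned latent codes; a real text autoencoder would supply the encoder and decoder, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for latent codes: two groups that differ along a single
# "attribute" direction (e.g., negative vs. positive sentiment).
d = 16
attr = np.zeros(d)
attr[0] = 3.0
neg = rng.normal(size=(100, d))
pos = rng.normal(size=(100, d)) + attr

# Estimate the attribute vector as the difference of group means; adding it
# to a latent code "edits" the attribute before decoding back to text.
v = pos.mean(axis=0) - neg.mean(axis=0)
z = neg[0]          # latent code of one "negative" example
z_edit = z + v      # shifted toward the "positive" region
```

The thesis's contribution is the theory for shaping a latent geometry on which such simple arithmetic is meaningful for discrete text.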
&#13;
The final two chapters address language style transfer in the absence of supervised data. We first formalize the task of non-parallel style transfer and discuss the feasibility of the learning problem. We propose a method that leverages distributional alignment of latent representations to perform style transfer. Then, we study confounding factors and show that by dividing the data into two groups of different styles, with the sets in each group illustrating variations we do not wish to alter, we can exploit invariance to isolate confounders and transfer text in the desired direction.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144561</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems to Democratize and Standardize Access to Web APIs</title>
<link>https://hdl.handle.net/1721.1/144560</link>
<description>Systems to Democratize and Standardize Access to Web APIs
Alrashed, Tarfah Abdullah
Today, many websites offer third-party access to their data through web APIs. However, manually encoding URLs with arbitrary endpoints, parameters, authentication handshakes, and pagination, among other things, makes API use challenging and laborious for programmers and untenable for novices. In addition, each API offers its own idiosyncratic data model, properties, and methods that a new user must learn, even when the sites manage the same common types of information as many others.&#13;
&#13;
In this thesis, I show how working with web APIs can be dramatically simplified by describing these APIs using a simple machine-readable ontology. I present a number of systems that can use these descriptions to access arbitrary APIs on the web. The first system lets users query and download data from any described web API. The second system exposes data behind web APIs as connected objects with standard types, allowing users to create interactive web applications that operate on the data accessible through these APIs. And the last system creates bridges between many heterogeneous types of data from different websites, allowing users to link and interact with data drawn from multiple web APIs simultaneously.
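In the spirit of such a machine-readable description (the field names, endpoint, and parameters below are invented for illustration, not the thesis's actual ontology), a generic client can assemble requests from the description instead of hand-encoding URLs:

```python
from urllib.parse import urlencode

# Hypothetical machine-readable description of one web API endpoint.
api_desc = {
    "base_url": "https://api.example.com/v1/search",
    "params": {
        "query": {"type": "string", "required": True},
        "page": {"type": "integer", "required": False},
    },
    "auth": {"style": "query_key", "param": "api_key"},
}

def build_request(desc, key, **kwargs):
    """Generic client: validate arguments against the description and
    assemble the request URL, so users never hand-encode endpoints."""
    for name, spec in desc["params"].items():
        if spec["required"] and name not in kwargs:
            raise ValueError(f"missing required parameter: {name}")
    params = dict(kwargs)
    params[desc["auth"]["param"]] = key
    return desc["base_url"] + "?" + urlencode(params)

url = build_request(api_desc, key="SECRET", query="widgets", page=1)
```

Because the client is driven entirely by the description, the same code works for any endpoint described in the ontology.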
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144560</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying radiation damage through stored energy released during defect annealing in metals</title>
<link>https://hdl.handle.net/1721.1/144559</link>
<description>Quantifying radiation damage through stored energy released during defect annealing in metals
Hirst, Charles A.
With full knowledge of a material’s atomistic structure, it is possible to predict any macroscopic property of interest. In practice, this is hindered by limitations of the chosen characterisation technique. For example, transmission electron microscopy (TEM) is unable to detect the smallest and most numerous defects in irradiated metals. Instead of spatial characterisation, defects can be detected and quantified through their excess energy.&#13;
&#13;
Previous measurements of stored energy in irradiated metals have been limited to cryogenic or ambient temperatures, despite nuclear reactors operating at 300◦C and above. Differential scanning calorimetry (DSC) of reactor-irradiated Ti measures defect densities 5 times greater than those determined using TEM. These experiments also reveal two energetically-distinct processes where the established annealing model predicts one. Molecular dynamics (MD) simulations identify the defects responsible and show that point-defect-induced glide of dislocation loops contributes significantly to recovery. Our characterisation techniques are combined to propose a new mechanism for the recovery of irradiation-induced defects at elevated temperatures.&#13;
&#13;
In order to probe even smaller quantities of released energy, advances in chip-based nanocalorimetry are investigated. Flash DSC experiments demonstrate the measurement of sample mass to within 1 ng. MD simulations are used to predict the maximum possible stored energy release from Al electron-irradiated at cryogenic temperatures. Extrapolation across 11 orders of magnitude in time and 18 orders of magnitude in mass yields results which match prior literature within a factor of 2. These results are combined with measurements of the noise, as characterised through the power spectral density, to determine whether the Flash DSC can be used to detect defect annealing in irradiated metals.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144559</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Linear Modeling in High Dimensions: Advances in Hierarchical Modeling, Inference, and Evaluation</title>
<link>https://hdl.handle.net/1721.1/144554</link>
<description>Bayesian Linear Modeling in High Dimensions: Advances in Hierarchical Modeling, Inference, and Evaluation
Trippe, Brian L.
Across the sciences, social sciences and engineering, applied statisticians seek to build understandings of complex relationships from increasingly large datasets. In statistical genetics, for example, we observe up to millions of genetic variations in each of thousands of individuals, and wish to associate these variations with the development of disease. For ‘high dimensional’ problems like this, the languages of linear modeling and Bayesian statistics appeal because they provide interpretability, coherent uncertainty, and the capacity for information sharing across related datasets. But at the same time, high dimensionality introduces several challenges not solved by existing methodology.&#13;
&#13;
This thesis addresses three challenges that arise when applying the Bayesian methodology in high dimensions. A first challenge is how to apply hierarchical modeling, a mainstay of Bayesian inference, to share information between multiple linear models with many covariates (for example, genetic studies of multiple related diseases). The first part of the thesis demonstrates that the default approach to hierarchical linear modeling fails in high dimensions, and presents a new, effective model for this regime. The second part of the thesis addresses the computational challenge presented by Bayesian inference in high dimensions — existing methods demand time that scales super-linearly with the number of covariates. We present two algorithms that permit fast, accurate inferences by leveraging (i) low rank approximations of data or (ii) parallelism across a certain class of Markov chain Monte Carlo algorithms. The final part of the thesis addresses the challenge of evaluation. Modern statistics provides an expansive toolkit for estimating unknown parameters, and a typical Bayesian analysis justifies its estimates through belief in subjective a priori assumptions. We address this by introducing a measure of confidence in the new estimate (the ‘c-value’), that can diagnose the accuracy of a Bayesian estimate without requiring this subjectivism.
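The generic flavor of the low-rank computational speedup can be illustrated with ridge-style Bayesian linear regression (this is a standard SVD identity used for exposition, not the thesis's specific algorithms): when the number of observations n is far below the number of covariates p, the thin SVD of the design matrix gives the posterior mean without ever forming a p × p system.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical high-dimensional regression: n observations, p covariates.
n, p, lam = 100, 1000, 10.0   # lam = noise variance / prior variance
X = rng.normal(size=(n, p))
y = X[:, 0] * 2.0 + rng.normal(size=n)

# Posterior mean for beta ~ N(0, tau^2 I) via the thin SVD of X (rank <= n):
# beta = V diag(s / (s^2 + lam)) U^T y, costing O(n^2 p) rather than O(p^3).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
beta_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

# Naive route: solve the full p x p regularized normal equations.
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Both routes give the same posterior mean; only the cost differs, which is the point of leveraging low-rank structure in high dimensions.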
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144554</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A multimodal approach to investigate the effects of respiration&#13;
on Fontan flow to inform strategies for circulatory support</title>
<link>https://hdl.handle.net/1721.1/144552</link>
<description>A multimodal approach to investigate the effects of respiration&#13;
on Fontan flow to inform strategies for circulatory support
Horvath, Markus Attila
The prevalence of single ventricle physiology is estimated to be 1/3000. Fontan physiology is the final palliative stage for a series of congenital heart diseases resulting in a single ventricle. The resulting hemodynamics are not well characterized and remain poorly understood. Hence, mid-term outcomes remain poor and suitable circulatory support strategies are undetermined. At the core of this clinical problem lies a limited understanding of the interactions of respiration, hemodynamics, and tissue damage. Respiration has been identified as a governing driver of flow fluctuations, including retrograde flow in the Fontan IVC, but bench-top simulators and animal models fail to adequately recreate the interaction of breathing and venous flow.&#13;
&#13;
The goal of this dissertation is to develop and validate a suite of bench-top and computational models which recapitulate the interaction of respiratory biomechanics and Fontan flow, to leverage the simulator platform and a clinical study to characterize the respiratory impact on Fontan hemodynamics and retrograde flow, and to design and evaluate promising approaches for circulatory support.&#13;
&#13;
We develop a biomimetic respiratory simulator with an integrated circulatory Fontan flow loop that shows high physiological fidelity. Vascular models with physiological compliance values interact with the respiratory mechanics to recreate characteristic Fontan flow. We characterize the simulator and validate the system with a computational lumped parameter model and a pilot clinical trial. We then extend the platform with a computational fluid dynamics model of the Fontan shunt.&#13;
&#13;
We leverage the cardiorespiratory simulator to characterize the impact of breathing and other physiological parameters. We conduct a clinical trial to evaluate respiratory effects on Fontan retrograde flow. Thereby, we identify and characterize new physiological drivers. Subsequently, we conduct a retrospective study to evaluate the clinical effects of this flow reversal.&#13;
&#13;
Finally, we design and test circulatory support strategies and establish the importance of tailoring them to the unique flow patterns. We demonstrate potential benefits of valve implantation in the Fontan IVC and optimize the device design.&#13;
&#13;
In summary, this work provides a multimodal simulator platform paired with a clinical trial to provide a deeper understanding of Fontan physiology. As we demonstrate here, the platform is a valuable tool for circulatory support development.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144552</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/144545</link>
<description>Essays in Financial Economics
Wang, Yupeng
This dissertation contains three essays in financial economics, with a focus on the effects of new technologies on traditional financial markets including the venture capital and mortgage markets.&#13;
&#13;
In a joint work with Fangzhou Lu, we document in Chapter 1 the heterogeneous effects of new technology on venture capital investment. Venture capital, an undoubtedly successful way of financing entrepreneurship, is believed to suffer from search costs and information frictions. New technologies have been invented to overcome these issues. Online platforms such as Product Hunt maintain user-generated profiles of and comments on a portfolio of start-ups. Venture capital investors have been increasingly relying on data from these platforms and artificial intelligence tools to guide investment decisions. However, information frictions could be intensified due to the presence of manipulation. Using data on venture capital deals and data from Product Hunt, we present supporting evidence that entrepreneurs manipulate investor perceptions by manufacturing comments that praise their products. Using COVID-19 as a positive shock to investor online presence, we examine the differences in online opinions for similar start-up products before and after the pandemic. We argue that the net gains from manipulating online opinions are highest for entrepreneurs who are new to the online community, for early-stage start-ups, and for start-ups facing fierce competition. We demonstrate that start-ups with a high incentive to manipulate have more positive but less useful comments after COVID-19 relative to before. Furthermore, investors tend to relate their investment decisions to online sentiment, but the effect is heterogeneous. Only young and inexperienced investors are responsive to online sentiment.&#13;
&#13;
In Chapter 2, I study Fintech mortgage lenders, who collect a wide range of borrower data entirely online and rely on big data to make credit decisions through the use of machine learning algorithms. Compared to traditional lenders, Fintech lenders are more likely to originate loans with a high loan-to-value ratio (LTV) and particularly a high debt-to-income ratio (DTI), possibly working through greater loan size instead of lower income. Conditional on the default rate predicted using only observables, the ex-post default rate does not differ significantly by whether the loan is originated by a Fintech or a traditional lender. Fintech lenders also set interest rates that are more sensitive to LTV but less sensitive to DTI, and consequently, their interest rates have higher forecastability in prepayment but lower forecastability in delinquency and default, resulting from a premium charged to high-LTV loans that get prepaid more often. Fintech lenders receive cross-subsidies in the to-be-announced (TBA) mortgage-backed-securities (MBS) market since high prepayment rate loans are pooled together with low prepayment rate loans in the same forward contract. The findings suggest that new technology might be able to identify credit risks at the margin but may also be used to facilitate lenders in extracting rents.&#13;
&#13;
In Chapter 3, I document five facts on labor productivity in the U.S. mortgage lending industry using a novel dataset that matches lenders, lender branch locations, loan officers, mortgage applications, originations and loan performance. First, labor productivity at non-depository lenders is on average two times higher than that at banks or credit unions. Second, labor productivity growth has been accelerating since 2014. The trend is driven by banks and credit unions, not non-depository lenders. Third, banks with larger assets, higher return on assets, but lower growth in deposits have higher labor productivity. Fourth, high labor productivity is associated with long lending distance. Fifth, high labor productivity is associated with high delinquency and prepayment. One important source of productivity growth is technology adoption. In a case study, I use Quicken Loans’ adoption of the Rocket Mortgage online platform in late 2015 as an exogenous technology shock, and find that competitors respond by hiring more loan officers, especially males, to compete with Quicken Loans in local mortgage markets.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144545</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Empirical Macroeconomics and Development</title>
<link>https://hdl.handle.net/1721.1/144544</link>
<description>Essays in Empirical Macroeconomics and Development
Majerovitz, Jeremy
This thesis consists of three chapters on empirical macroeconomics and development. Although the topics of study are diverse (one might say that small manufacturing firms in Indonesia and massive bank holding companies in the United States are half a world apart), they are tied together by the use of micro data and econometrics to study macro questions with important aggregate implications. These papers reflect a broader push to discipline macroeconomic models with credible empirical analysis. In Chapters 1 and 3, I do exactly this, combining theory and data to study aggregate productivity and the US banking sector, respectively. Chapter 2 instead focuses on getting the empirical analysis right: we make methodological contributions to improve a very common research design in macroeconomics and other fields.&#13;
&#13;
In Chapter 1, I study the importance of the selection channel for aggregate productivity: the process by which less efficient firms are driven out of the market by more efficient firms. Conventional wisdom suggests that markets in developing countries are more sclerotic, allowing inefficient firms to survive that would have exited in a developed country. I provide a tractable model to examine the importance of the selection channel, and show how to calibrate it to panel data on firms. I use this model to show that the effect of the selection channel on aggregate productivity is approximately equal to the average difference in log productivity between stayers and exiters, which can be measured easily in firm panel data. Results for Indonesia, Spain, Chile, and Colombia suggest that Indonesia could raise its aggregate productivity by roughly 30% if its firm exit process became as selective as Spain’s. However, cross-country estimates suggest that the selection channel is not an important explanation for cross-country differences in output per capita.&#13;
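The chapter's back-of-envelope measurement can be sketched on a toy firm panel (the data-generating process below is invented; only the summary statistic mirrors the approximation described above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical firm panel: log productivity and an exit indicator,
# where less productive firms are more likely to exit (toy selection rule).
logprod = rng.normal(loc=1.0, scale=0.5, size=1000)
exit_prob = 1.0 / (1.0 + np.exp(4.0 * (logprod - 0.7)))
exited = rng.random(1000) < exit_prob

# Selection-channel effect, per the chapter's approximation: the average
# difference in log productivity between stayers and exiters.
gap = logprod[~exited].mean() - logprod[exited].mean()
```

Because the statistic only needs mean log productivity for stayers versus exiters, it can be computed directly from standard firm panel data, which is what makes the approximation practical across countries.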
&#13;
In Chapter 2, co-authored with Karthik Sastry, we study a common research design, examine its pitfalls, and provide solutions for more accurate and efficient inference. Many prominent studies in macroeconomics, labor, and trade use panel data on regions to identify the local effects of aggregate shocks. These studies regress outcomes on the interaction of an observed aggregate shock with an observed regional sensitivity to that shock. We argue that the most economically plausible source of identification in this setting is uncorrelatedness of observed and unobserved aggregate shocks. Even when the regression estimator is consistent, inference is complicated by cross-regional residual correlations induced by unobserved aggregate shocks. We suggest two-way clustering, Driscoll and Kraay (1998) standard errors, and randomization inference as options to solve this inference problem. We also propose a more efficient optimal-IV estimator. We examine these issues in the context of Nakamura and Steinsson (2014)’s analysis of regional fiscal multipliers. We show that the standard practice of clustering by region generates confidence intervals that are too small. When we construct confidence intervals with robust methods, we can no longer reject fiscal multipliers closer to zero. The optimal IV strategy shrinks standard errors by a factor of three. Our results underscore that the precision promised by regional data may disappear with correct inference.&#13;
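One of the suggested fixes, randomization inference, can be sketched for a stylized shock-times-sensitivity regression (the panel, shock process, and magnitudes below are invented; this is not the chapter's empirical setting):

```python
import numpy as np

rng = np.random.default_rng(3)
R, T = 40, 30  # regions and time periods

sens = rng.uniform(0.5, 1.5, size=R)   # observed regional sensitivities
shock = rng.normal(size=T)             # observed aggregate shock
common = rng.normal(size=T)            # unobserved aggregate shock
beta = 1.0
y = beta * np.outer(sens, shock) + np.outer(sens, common) + rng.normal(size=(R, T))

# OLS of the outcome on the shock-sensitivity interaction.
x = np.outer(sens, shock).ravel()
yy = y.ravel()
bhat = x @ yy / (x @ x)

# Randomization inference: reassign the aggregate shock across periods,
# re-estimate, and take the p-value as the share of permuted estimates at
# least as large in magnitude as the actual one.
null = []
for _ in range(999):
    xs = np.outer(sens, rng.permutation(shock)).ravel()
    null.append(xs @ yy / (xs @ xs))
pval = (1 + np.sum(np.abs(null) >= np.abs(bhat))) / 1000
```

Because the permutation distribution keeps the cross-regional correlation induced by the unobserved common shock intact, the resulting p-value does not rely on the within-region clustering assumption that the chapter shows understates uncertainty.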
&#13;
In Chapter 3, co-authored with Juliane Begenau, Saki Bigio, and Matias Vieyra, we use theory and microdata to study bank behavior and model the banker's optimization problem. We propose a dynamic bank theory with a delayed loss recognition mechanism and a regulatory capital constraint at its core. The estimated model matches four facts about banks' Tobin's Q that summarize bank leverage dynamics. (1) Book and market equity values diverge, especially during crises; (2) Tobin's Q predicts future bank profitability; (3) neither book nor market leverage constraints are binding for most banks; (4) bank leverage and Tobin's Q are mean reverting but highly persistent. We examine a counterfactual experiment where different accounting rules produce a novel policy tradeoff.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144544</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Ridesharing under Travel Time Uncertainty: Passenger Preference and Optimal Assignment Methods</title>
<link>https://hdl.handle.net/1721.1/144538</link>
<description>Dynamic Ridesharing under Travel Time Uncertainty: Passenger Preference and Optimal Assignment Methods
Bailey, Nathaniel K.
The increased prevalence and use of on-demand ridehailing services over the past decade have had substantial impacts on urban transportation systems. These services offer convenient and flexible door-to-door transportation to users, but have also raised concerns about system equity, environmental sustainability, and efficiency given their high fares and their resulting increase in vehicle-kilometers traveled. Dynamic ridesharing (DRS; or pooled ridehailing) provides a means to mitigate these negative externalities by offering reduced fares for passengers in exchange for increased operational flexibility in pooling multiple trips into the same vehicle concurrently. However, even prior to their suspension during the COVID-19 pandemic, these services struggled to gain the widespread adoption needed to realize many of these benefits. One under-investigated barrier to the adoption of DRS services is travel time uncertainty. Travel times on urban road networks on which DRS services operate are often highly variable, and the potential for vehicle detours due to pooling increases travel time uncertainty for DRS passengers when compared to exclusive ridehailing.&#13;
&#13;
This dissertation investigates the impact of travel time uncertainty, and traveler perceptions thereof, on decisions of whether to use pooling or not in ridehailing services, and formulates new methods to assign vehicles to passengers in DRS operation that improve passenger outcomes in terms of average delay and travel time variability. Using data collected from a survey of 1,600 Singapore residents, we estimate the impact of different presentations of information regarding travel time variability and associated attitudes on respondents’ stated preference between exclusive and pooled ridehailing trips. We find that different forms of presenting information on the uncertainty of exclusive ridehailing journey times significantly alter passengers’ responses to this uncertainty. However, travelers’ decisions to use DRS are driven much more by their attitudes towards time uncertainty than by the magnitude or means of presentation of the uncertainty. We then formulate a two-stage stochastic optimization formulation for the online DRS assignment problem that minimizes average passenger delay in the presence of stochastic travel times. Through simulation experiments on synthetic road networks with stochastic, correlated travel times, we demonstrate the improved performance of this formulation in finding efficient solutions in a stochastic environment and reducing average passenger delay and the variance of passenger arrival times.&#13;
&#13;
Overall, this dissertation finds that passengers’ lack of trust in DRS reliability is a significant barrier to greater adoption and use, and it also demonstrates the potential for operational methods that account for stochastic travel times to increase DRS reliability. This multidisciplinary exploration of the ramifications of travel time uncertainty for the supply and demand of DRS services provides a foundation for future research that may expand upon these concepts, further develop the methods used, and create connections between the supply and demand components.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144538</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Interpersonal Relationships and Incentive Structures on the Performance of Actors in Informal Supply Chains</title>
<link>https://hdl.handle.net/1721.1/144535</link>
<description>The Impact of Interpersonal Relationships and Incentive Structures on the Performance of Actors in Informal Supply Chains
Fatunde, Olumurejiwa Adedapo
This dissertation examines operational challenges faced by intermediaries in informal supply chains, in which the relational and structural constraints present in traditional supply chains are relaxed.&#13;
&#13;
This research is organized into three papers, the first of which (Chapter 2) explores business relationships in the context of emerging market retail supply chains. Attempts to distribute durable, life-improving goods to customers at the Base of the Pyramid (BoP) have struggled to succeed at scale. One potential explanation is poor relationship management with informal retailers, which are embedded within communities. By analyzing data from a distributor selling to 331 formal and 493 informal retailers in India, we demonstrate that informal retailers recover more slowly than formal retailers after a sales agent reallocation. This indicates that disruptions to social/business relationships are particularly harmful when selling to retailers in informal markets. &#13;
&#13;
The second and third papers (Chapters 3 and 4) explore incentive design for distributed-task platforms. We use as a case study a supply chain for medical knowledge, featuring “informal” suppliers without formal contracts. Using data on 5,418 crowdsourcing contests for medical diagnosis, we examine how evaluation metrics (Chapter 3) and prize allocation mechanisms (Chapter 4) shape participants’ decisions and performance. Chapter 3 assesses the impact of evaluating participants using the longest “streak” of correct answers, rather than an accuracy-based metric. Streak evaluation increases the volume of quality responses and the speed of achieving consensus, largely through increased engagement. These findings are relevant in settings where streak-based rewards are used to boost motivation; we find that they also boost performance.&#13;
&#13;
Chapter 4 studies how changing the source of prize-related uncertainty from the probability of winning to the amount at stake affects decision-making. We evaluate the impact of running a pool contest (in which participants who meet a performance threshold share prizes evenly) instead of a rank-order contest (in which prize distribution is determined exogenously and announced upfront). In pool contests, accuracy increases for average participants but decreases for top performers, suggesting that participants modify engagement levels in response to performance thresholds. This suggests that pool contests with carefully-selected thresholds can incentivize effort from participants with certain performance profiles.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144535</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Innovation, Automation, and Growth</title>
<link>https://hdl.handle.net/1721.1/144533</link>
<description>Essays in Innovation, Automation, and Growth
Manera, Andrea
This thesis consists of three essays on the impact of technological change, innovation and automation on growth and labor market outcomes. In the first chapter, I explore how dominant companies in concentrated sectors have siphoned off inventors that might have been employed more productively in competitive industries. For the period 1997-2012, I establish that sectors with rising concentration captured a disproportionate share of researchers, while also experiencing a decrease in R&amp;D productivity, signaled by falling forward citations and slowing growth per inventor. These findings imply that inventors became increasingly misallocated, accounting for nearly 30 percent of the decline in the average annual growth rate of output per worker over the 15-year study period. A calibration of this model reveals that a planner interested in maximizing growth should allocate R&amp;D tax credits to entrants in high-concentration sectors.&#13;
&#13;
In the second chapter (forthcoming in the Review of Economic Dynamics), Michele Fornino and I study the economic incentives for automation when labor and machines are perfect substitutes. We find that labor may still be employed in production, even when it is a costlier input than robots on a productivity-adjusted basis. This occurs if firms face uninsurable idiosyncratic risk, adjusting the stock of machines is costly, and workers can be hired and fired quickly enough. Even though labor survives, jobs become less stable, as workers are hired in short-lived bursts to cope with shocks. We calibrate a general equilibrium, multi-industry version of our model to match data on robot adoption in US manufacturing sectors, and use it to compute the employment and labor share consequences of progress in automation technology. A fall in the relative price of robots leads to relatively few job losses, while reductions in adjustment costs, or improvements in relative robot productivity, can be far more disruptive. The model-implied semi-elasticity of aggregate employment to robot penetration (number of robots per thousand employees) ranges between 0.01% and 0.12%, depending on the underlying source of increased robot adoption, consistent with findings in the empirical literature. In an extension, we show that reduced-form hiring and firing costs unambiguously depress long-run employment.&#13;
&#13;
In the third chapter, Martina Uccioli and I study the impact of employment protection legislation (EPL) on firms’ innovation, through an event-study analysis of labor market reforms occurring in Europe over 2000-2016. Data from the Community Innovation Survey reveal that substantial drops in EPL for temporary workers prompt a reallocation of innovation towards the introduction of new products, away from process innovation aimed at cutting labor costs. Among innovative firms, the share of product innovators increases by 15% of the pre-reform value, while the share of firms specializing in process innovation falls by 35%. We develop a theoretical framework of directed technical change to rationalize our findings, where parsimonious assumptions imply that product innovation is strongly temporary-labor complementing while process innovation is strongly temporary-labor substituting.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144533</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Hydraulics for Improved Centrifugal Pump Efficiency</title>
<link>https://hdl.handle.net/1721.1/144526</link>
<description>Adaptive Hydraulics for Improved Centrifugal Pump Efficiency
Johnson, Hilary Anna
Centrifugal pumps are used ubiquitously for fluid transport in industrial and municipal systems, including clean and waste water systems, pumped hydro energy storage, closed-loop HVAC heating, and irrigation. Globally, pumps consume hundreds of billions of kilowatt hours of electricity each year. In the US, centrifugal pumps consume an estimated 4.4% of electricity generated, equivalent to 232 billion kWh annually. Many pumping applications require operation over a broad range of pressures and flow rates; however, traditional centrifugal pumps with fixed geometry are limited in their ability to adjust, resulting in significant energy losses. This research challenged the assumption that volute geometry must be static and showed that enabling a variable volute geometry yields greater efficiency over a wider operating region.&#13;
&#13;
The thesis presents the design and demonstration of a controllable, precision, variable volute mechanism that can expand or contract to adapt to fluctuating operating conditions. Experiments support the hypothesis that adjusting the pump volume shifts the best efficiency point (BEP), creating a best efficiency range (BER). Five initial mechanism topologies were evaluated, and a spiral piston architecture was selected based on structural and hydrodynamic requirements, feasibility, and manufacturability. The engineered variable volute comprises a spiral piston integral with a multi-start lead screw, rotated by a mating collar with internal multi-start threads and external crowned gear teeth. A non-backdrivable worm actuates the worm wheel integral to the collar. Deterministic design tools were developed, with demonstrated hydrodynamic, structural, and mechanical scalability, to enable future implementation of the variable volute technology in a variety of pump sizes for different applications.&#13;
&#13;
Experimental validation of the variable volute characterizes the sensitivity of pump flow, pressure, efficiency, and power to changes in the pump volute volume. Based on the machine design of the pump, error analysis coupled with experimental data demonstrates the ability of the mechanism to accurately and repeatably control the volute height. We achieved a 67% increase in the preferred operating range of the pump for a 160% change in volume. This results in a 30% increase in energy efficiency at the maximum preferred flow rate. For context, significant research and development efforts have historically been dedicated to optimizing pump geometry for efficiency gains of less than 5%.&#13;
&#13;
A case study is presented analyzing a year of hourly-resolution, variable-operation data from ten 2600 kW centrifugal pumps in a large wastewater treatment plant to understand the influence of pump design, selection, maintenance, and operation on system efficiency. Three feasible interventions were quantitatively compared and show savings of up to 4.3% (809,000 kWh) annually. Based on this analysis, an operating-space methodology is presented that extends the traditional 2D pump and system curve intersection to a multi-dimensional operating space, connecting pressure and flow performance parameters with control parameters such as variable speed and variable volute.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144526</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Employment and Human Capital</title>
<link>https://hdl.handle.net/1721.1/144524</link>
<description>Essays on Employment and Human Capital
Wang, Sean Yixiang
This thesis examines how economic forces shape the nature of employment and the development of human capital. Each of the three chapters in the thesis brings economic theory and causal inference to administrative data to better understand the mechanisms that ultimately determine people's livelihoods. Collectively, the chapters emphasize how imperfect markets and institutions have the powerful potential to either reduce or exacerbate existing inequalities. &#13;
&#13;
The first chapter identifies the effects of firms on the career advancement of blue-collar workers and interprets these effects through the mechanism of employer learning. I use administrative data on the universe of Brazilian formal employment to study vertical promotions from production jobs to supervisory jobs, which are an important source of wage growth for most young workers. By comparing workers around job-to-job transitions, I show that differences in average firm promotion rates reflect persistent differences in the effects of firms on workers. Workers who move to a high promotion firm become substantially more likely than other job movers to be promoted, but they are even more likely to leave formal employment altogether. Correspondingly, their average long-term wage gains are negligible. I explain these effects using a model where firms differ in the rate they learn about the abilities of employed workers. High learning firms improve the efficiency of matching between workers and jobs, but these firms also exacerbate the adverse selection of unemployed workers and increase occupational wage inequality. By quantifying the parameters of the model using my estimated effects, I show that skill misallocation remains high and ex-post market power for employers can be large.&#13;
&#13;
The second chapter, written jointly with Samuel Young, studies the effect of private-sector unionization on establishment employment and survival. Specifically, we analyze National Labor Relations Board (NLRB) union elections from 1981 to 2005 using administrative Census data on the universe of establishments in the U.S. Our research design combines difference-in-differences and regression discontinuity extrapolation methods to estimate treatment effects, including for elections won by larger margins of support. We show that unionization decreases an establishment's employment and likelihood of survival. We hypothesize that two reasons for these effects are firms' ability to avoid dealing with new unions and managers' opposition to unions. We test this hypothesis for unionization in manufacturing, the largest sector where we find substantial negative effects. There, the negative effects are significantly larger for elections at multi-establishment firms, especially those with no other unionized establishments. We provide direct evidence suggesting that some of these differences are driven by multi-establishment firms shifting employment from newly unionized establishments to other establishments. Finally, we use the length of delays during the election process as a proxy for managers' opposition to the union and find substantially larger effects of successful elections with longer delays. Taken together, our results are consistent with firms' union avoidance tactics playing a role in explaining the overall negative effects of unionization.&#13;
&#13;
The third chapter directly estimates a theoretically motivated measure of schools' competitive pressures using centralized assignment data from a large urban school district’s deferred-acceptance mechanism. I find that competitive pressure within the district is dispersed, and most of the variation in competition is unexplained by concentration. While there is substantial pressure to attract more students to some schools, these competitive incentives do not induce schools to raise their school effectiveness on academic achievement. Instead, schools respond by shifting discretionary expenditures from administration to instruction.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144524</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Experimental Design</title>
<link>https://hdl.handle.net/1721.1/144523</link>
<description>Essays on Experimental Design
Cytrynbaum, Max
This thesis contains three chapters. In the first chapter, I propose new econometric methods for panel data settings with unobserved cross-sectional heterogeneity. Previous approaches model this heterogeneity by assigning each unit a one-dimensional, discrete latent type, which can be estimated by regression clustering methods. In this paper, I show that such models can be misspecified, even when the panel has significant discrete cross-sectional structure. Motivated by this finding, I generalize previous approaches to discrete unobserved heterogeneity by allowing each unit to have multiple, imperfectly-correlated latent variables that describe its response-type to each covariate. I develop valid inference methods using a k-means style estimator of this model and propose information criteria to jointly select the number of clusters for each latent variable. I also contribute to the theory of clustering with an over-specified number of clusters and derive new convergence rates for this setting. My results suggest that over-fitting can be severe in k-means style estimators when the number of clusters is over-specified.&#13;
&#13;
The second chapter studies treatment effect estimation in a novel two-stage model of experimentation. In the first stage, using baseline covariates, the researcher selects units to participate in the experiment from a sample of eligible units. Next, they assign each selected unit to one of two treatment arms. I relate estimator efficiency to representative selection of participants and balanced assignment of treatments. I define a new family of local randomization procedures, which can be used for both selection and assignment. This family nests stratified block randomization and matched pairs, the most commonly used designs in practice in development economics, but also produces many useful new designs, embedding them in a unified framework. When used to select representative units into the experiment, local randomization boosts effective sample size, making estimators behave as if they were estimated using a larger experiment. When used for treatment assignment, local randomization does model-free non-parametric regression adjustment by design. I give novel asymptotically exact inference methods for locally randomized selection and assignment, allowing experimenters to report smaller confidence intervals if they designed a representative experiment. I apply these methods to the setting of two-wave designs, where the researcher has access to a pilot study when designing the main experiment. I use local randomization methods to give the first fully efficient solution to this problem.&#13;
&#13;
The third chapter studies rerandomization and linear adjustment for average treatment effect estimation in stratified experiments. My results show that in stratified experiments, ex-post regression adjustment can be strictly inefficient relative to difference of means estimation. Thus, the “agnostic” efficiency improvement of Lin (2013) is atypical, corresponding to the edge case of complete randomization (no stratification). The problem arises because ex-post regression adjustment does not adapt to the stratification. In particular, it estimates the same linear adjustment coefficient for any locally randomized design. By contrast, I show that ex-ante rerandomization within strata does adaptive linear adjustment by design. In the tight acceptance criterion limit, rerandomization within strata is as efficient as the optimal linear adjustment for a given stratification. Equivalently, I show that rerandomization finds the optimal semiparametric completion of the non-parametric model produced by local randomization.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144523</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automating reaction development: hardware and software for fully-automated high-fidelity navigation of high-dimensional chemical reaction space</title>
<link>https://hdl.handle.net/1721.1/144518</link>
<description>Automating reaction development: hardware and software for fully-automated high-fidelity navigation of high-dimensional chemical reaction space
Eyke, Natalie Suzanne
Process development is one of the major hurdles that pharmaceutical companies face when bringing a new pharmaceutical compound to the market. Acceleration of process development minimizes costs and gets new compounds to the market sooner, which 1) gets medication to patients in need quickly and 2) maximizes revenue relative to a fixed patent expiration date. Incorporating automation into the process development workflow can accelerate the acquisition of high-quality data, which helps process development scientists and engineers develop safe and efficient processes rapidly. Automation may come in the form of software, such as experimental design algorithms, and hardware, such as platforms that prepare, run, and analyze reactions.&#13;
&#13;
The field of statistics offers a number of approaches to optimal experimental design that have formed the basis for so-called "quality-by-design" development strategies in the pharmaceutical industry. More recently, iterative, algorithmic optimization routines have been adapted for reaction optimization as well. However, these strategies are most appropriate for reaction domains that consist primarily of continuous reaction variables. New techniques are needed that enable simultaneous optimization over discrete and continuous reaction variables (in other words, high-dimensional chemical space). Discrete reaction variables include factors like the catalyst, ligand, solvent, and other reagents that are used to effect a chemical transformation. Given a tensor capturing relevant information about each discrete reaction variable, machine learning is well-suited to use data to identify patterns and relationships between the various settings of each discrete variable to accurately model reaction outcomes and enable optimization. We combined machine learning with an experimental design routine from statistics, known as uncertainty sampling, to minimize the number of experiments needed to obtain an accurate model of a high-dimensional reaction domain defined by multiple discrete reaction variables.&#13;
&#13;
On the hardware side, automated experimentation platforms have gained a foothold in process development organizations, but there remains a need for platforms that can reproduce the flexibility and accuracy of the bench chemist while achieving high throughput and using as little material as possible. Droplet microfluidics is attractive for reaction development because it uses small quantities of precious reaction material, and the high surface area to volume ratio enables efficient heat transfer and interphase mass transfer. We adapted an automated droplet reactor platform for high-fidelity, flexible, reproducible, high-throughput operation by upgrading the design and operating procedures, placing multiple reactor channels in parallel, and creating a scheduling algorithm that orchestrates all of the parallel hardware operations and ensures droplet integrity as well as overall efficiency. We designed and incorporated all of the necessary hardware and software to enable both thermal and photochemical reactions. We demonstrated both the single-channel and parallelized versions of the platform using a series of model thermal and photochemical reactions, and demonstrated how the parallelized platform allows for rapid acquisition of the data necessary to determine reaction kinetics. The platform is flexible in terms of use case: through the integration of a variety of experimental design algorithms, it can be used for either screening or optimization over a wide range of chemical domains, and a fraction collector can be appended to the end of the platform to capture reacted droplets and thereby enable library synthesis.&#13;
&#13;
The software and hardware developed in this thesis can together enable accelerated process development that minimizes delays between the discovery of transformative medicines and their delivery to patients in need.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144518</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inequalities and Asymptotic Formulas in Algebraic Combinatorics</title>
<link>https://hdl.handle.net/1721.1/144514</link>
<description>Inequalities and Asymptotic Formulas in Algebraic Combinatorics
Jiradilok, Pakawut
This thesis concerns certain inequalities and asymptotic formulas in algebraic combinatorics. It consists of two separate parts. The first part studies inequalities concerning triangular-grid billiards and plabic graphs of Lam–Postnikov essential dimension 2. The material in this part is based on joint work with Colin Defant. The second part studies inequalities and asymptotic formulas concerning large-scale rook placements.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144514</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounded Rationality in Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/144513</link>
<description>Bounded Rationality in Macroeconomics
Sastry, Karthik Amrutur
People are inattentive, forgetful, and otherwise imperfect decisionmakers. It is well documented that models of choice with cognitive or attentional constraints, or bounded rationality, can capture these realities and explain deviations of individual behavior from a benchmark of pure payoff maximization. This thesis studies the aggregate implications of such imperfect reasoning and decision making for macroeconomic dynamics and policy.&#13;
&#13;
The first chapter, “Attention Cycles” (jointly authored with Joel P. Flynn), studies the causes and consequences of fluctuations in apparent bounded rationality over the business cycle. Using data from US public firms’ regulatory filings and financial statements, we document that firms’ attention to macroeconomic conditions rises in downturns and that their propensity to make input-choice mistakes rises in booms. We explain these phenomena with a business-cycle model in which firms face a cognitive cost of making precise decisions. Because firms are owned by risk-averse households, there are greater incentives to deliver profits when aggregate consumption is low. Thus, firms exert more cognitive effort and make smaller input-choice mistakes in aggregate downturns. In the data, consistent with our model, financial markets punish mistakes more in downturns and macroeconomically attentive firms make smaller mistakes. When calibrated to match our evidence, attention cycles generate quantitatively significant asymmetric, state-dependent shock propagation and stochastic volatility of output growth.&#13;
&#13;
The second chapter, “Strategic Mistakes” (jointly authored with Joel P. Flynn), more abstractly studies the equilibrium implications of state-dependent imperfections in optimization. We introduce a model of costly control in continuum-player games in which agents interact via an aggregate of the actions of others. We find primitive conditions such that equilibria exist, are unique, are efficient, and feature monotone comparative statics for action distributions, aggregates, and the size of agents’ mistakes. We use our results to provide robust equilibrium predictions in a class of generalized beauty contests, which we apply to study the implications of imperfect optimization for financial speculation and price-setting.&#13;
&#13;
The third chapter, “Managing Expectations: Instruments vs. Targets” (jointly authored with George-Marios Angeletos), studies how bounded rationality affects the optimal communication of policy commitments. Specifically, we study the question: should policy communications aim at anchoring expectations of the policy instrument (“keep interest rates at zero until date &#119909;”) or of the targeted outcome (“do whatever it takes to bring unemployment down to &#119906;%”)? People have limited depth of knowledge and rationality, and thus form beliefs about the behavior of others and the general equilibrium (GE) effects of policy that are distorted relative to the policymaker’s. We show that the bite of this distortion on implementability and welfare is minimized by target-based guidance if and only if GE feedback is strong enough. Our results rationalize why central banks should shine the spotlight on unemployment when faced with a prolonged liquidity trap, a steep Keynesian cross, or a large financial accelerator.&#13;
&#13;
The final chapter, “Disagreement About Monetary Policy,” empirically and theoretically studies the causes and consequences of belief differences between markets and central banks about monetary policy over the business cycle. Using US data since 1995, I document that bad macroeconomic news in leading indicators predicts market over-estimation of interest rates and employment, relative to both the ex post realizations and the US Federal Reserve’s contemporaneous forecasts. In a model that accommodates disagreements between the market and central bank via three mechanisms—asymmetries in signals about fundamentals, beliefs about the monetary rule, and confidence in public signals—I show that different confidence in public signals is necessary to explain the empirical findings. The model implies that the market’s relative under-reaction to public signals substantially dampens the response of market beliefs to fundamentals, while central-bank signaling about fundamentals or the “information effect” has almost no role.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144513</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Reconstructing Dynamic Protein Structures from Cryo-EM Images</title>
<link>https://hdl.handle.net/1721.1/144512</link>
<description>Machine Learning for Reconstructing Dynamic Protein Structures from Cryo-EM Images
Zhong, Ellen D.
Proteins and other biomolecules form dynamic macromolecular machines that carry out essential biological processes responsible for life. However, studying the mechanisms of these biomolecular complexes at relevant atomic-scale resolutions is an extraordinarily challenging task in structural biology. This thesis presents new algorithms that address the computational bottlenecks at the frontier of structure determination of dynamic biomolecular complexes via cryo-electron microscopy (cryo-EM).&#13;
&#13;
In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a target biomolecular complex from a set of noisy and randomly oriented 2D projection images, a challenging inverse problem especially when instances of the imaged biomolecular complex exhibit structural heterogeneity.&#13;
&#13;
The main contribution of this thesis is a machine learning system, cryoDRGN, for reconstructing continuous distributions of biomolecular structures from cryo-EM images. Underpinning the cryoDRGN method is a deep generative model parameterized by a new neural representation of cryo-EM volumes and a learning algorithm to optimize this representation from unlabeled 2D cryo-EM images. Released as an open source software tool, cryoDRGN has been applied to real datasets to uncover heterogeneity in high resolution datasets, discover new conformations of large macromolecular machines and visualize continuous trajectories of their motion. This thesis also describes an extension, cryoDRGN2, for learning this model from unposed images, i.e., ab initio reconstruction. Finally, this thesis presents emerging directions in analyzing the learned manifold of cryo-EM structures and in incorporating atomic model priors into cryo-EM reconstruction.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144512</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the role of molecular motors on chromatin organization</title>
<link>https://hdl.handle.net/1721.1/144507</link>
<description>Investigating the role of molecular motors on chromatin organization
Jiang, Zhongling
In eukaryotes, chromatin, which carries most of the genetic material and information, is densely packed within the cell nucleus. To facilitate the proper functioning of numerous life-essential biological processes, it possesses multiple levels of packaging. On one hand, nucleosomes, the most fundamental packing units, control the accessibility of chromatin through their positioning along the DNA sequence, where ATP-driven remodelers and DNA-binding proteins are known to play essential roles. On the other hand, the spatial organization of the genome, which folds into compartments that harbor functional regions, is also impacted by these non-equilibrium motor activities. Therefore, we carried out theoretical and computational investigations to unveil the role of molecular motors in chromatin organization at both the one-dimensional and three-dimensional levels. In one dimension, we used perturbation theory to show that the effect of remodeling enzymes can be well approximated by effective equilibrium models with rescaled temperatures and interactions. We further constructed a unifying model to illustrate the construction of nucleosome positioning patterns during transcription. In three dimensions, we reproduced the conventional and inverted compartment distributions of chromatin through the interplay between active motors and passive interactions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144507</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Device-Enabled Biomechanical Modulation of the&#13;
Infarcted Heart</title>
<link>https://hdl.handle.net/1721.1/144504</link>
<description>Device-Enabled Biomechanical Modulation of the&#13;
Infarcted Heart
Varela Farías, Claudia Elena
Every 40 seconds someone in the United States suffers from a myocardial infarction (MI) and 86% of these patients survive this initial event. After an MI, a dense collagenous scar replaces damaged tissue and subsequently impedes cardiac function, prompts tissue remodeling, and ultimately can induce heart failure (HF)—a prominent cause of long-term morbidity and mortality. Current clinical practice focuses on preventing or managing HF in these patients through medication and lifestyle changes, and if HF is severe, by implanting left ventricular assist devices to act as a bridge-to-transplant or bridge-to-destination. Early interventions after MI to reduce adverse remodeling and prevent HF entirely have been developed but have yet to be translated to the clinic.&#13;
&#13;
The main goal of this thesis is to develop implantable devices that directly modify the biological and/or mechanical environment of the recently infarcted heart to overcome the limitations associated with HF prevention strategies. &#13;
&#13;
First, I optimize an implantable reservoir system that enables localized, multi-dose delivery of regenerative bioagents—previously necessitating multiple direct cardiac injections. After demonstrating that therapy transport from the reservoir is attenuated but not entirely impeded after fibrotic encapsulation in vivo, I introduce mechanical actuation as a strategy to improve transport from the system. Finally, our system is used to characterize how different dosing regimens of FSTL1, a regenerative protein, influence cardiac function and healing, demonstrating that three doses of FSTL1 have a more pronounced functional benefit than regimens with fewer doses. &#13;
&#13;
Second, I develop a sutureless patch platform whose mechanical behavior can be tuned to modulate cardiac biomechanics when attached to the heart. First, the in vivo performance of a bioadhesive hydrogel is optimized to allow for atraumatic coupling and facilitate minimally invasive deployment of the patches. Then, I realize a patch design and fabrication workflow that allows for custom, tunable mechanical behavior of patches via 3D printing.  Finally, I demonstrate that distinct patch designs achieve variable modulation of epicardial strain and ventricular hemodynamics in vivo.&#13;
&#13;
In summary, this thesis presents versatile implantable device technologies that both circumvent existing limitations in biomechanical interventions for HF prevention in the recently infarcted heart and introduce novel therapeutic possibilities conducive for clinical translation.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144504</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-asymptotic Behavior in Massive Multiple Access and Streaming System Identification</title>
<link>https://hdl.handle.net/1721.1/144502</link>
<description>Non-asymptotic Behavior in Massive Multiple Access and Streaming System Identification
Kowshik, Suhas Subramanya
Non-asymptotic understanding of the information theoretic and algorithmic limits of estimation in statistical problems is indispensable for practical applications in engineering. There are two broad approaches to this end. The first: tools and advances in modern probability theory aid in deriving fully non-asymptotic bounds for such problems. This approach, while useful, is sometimes insufficient, as the bounds it yields usually have sub-optimal constants and unavoidable logarithmic factors. Consequently, in some situations where the primary goal is to obtain sharp characterizations, e.g. error exponents, it can be highly non-trivial to derive fully non-asymptotic results that serve this purpose, particularly in recent high-dimensional problems. In such circumstances there is a second approach: recourse to asymptotics that can serve as a reasonable substitute for finite length behavior. In this thesis, we employ both of these approaches. First we provide high dimensional asymptotic bounds for massive multiple access, which is an important consideration in upcoming wireless networks. Then we turn towards streaming system identification, where we develop a novel algorithm and provide tight non-asymptotic bounds showing the optimality of our method. &#13;
&#13;
Massive multiple access is an important problem in current and upcoming wireless networks. Also known as massive machine type communication (mMTC) in 5G, it envisions a scenario of a large number of transmitters (usually small sensors in IoT for instance) with small payloads communicating sporadically with a base station. Information theoretic understanding of such a problem is of paramount importance for evaluating existing multiple access schemes and developing new strategies that handle such drastic interference. To this end, many-user multiple access channel (MAC) is a crucial model that captures the new effects in massive multiple access. Previous works have focused on the additive white Gaussian noise (AWGN) many-user MAC. In this thesis, we aim to understand the fundamental limits of energy efficiency in the quasi-static Rayleigh fading many-user MAC. In particular, we provide tight achievability and converse bounds on the minimum energy-per-bit required to support a certain user density, fixed payload and target per-user error (in the limit as blocklength grows to infinity). Although asymptotic in nature, the results are expected to serve as a good proxy for true finite length behavior. We confirm the presence of the promising almost perfect multi-user interference cancellation, first observed in the AWGN setting, in the quasi-static case. Further we also provide a new achievability bound for the AWGN many-user MAC.&#13;
&#13;
Next we turn to the problem of streaming or online system identification, with the goal of designing optimal algorithms and providing non-asymptotic rates of convergence. In particular, we consider a class of linear and generalized linear (nonlinear) parametric discrete time dynamical systems. Observing a single trajectory from such a system, the aim is to recover the system parameters in a streaming fashion. Our work shows that the one-pass forward stochastic gradient descent (SGD) algorithm, where samples are read in order, is sub-optimal compared to the offline ordinary least squares (OLS) estimator. More importantly, based on the observation that reading samples in reverse order mitigates the effect of temporal dependencies, we develop a novel algorithm called SGD with reverse experience replay (SGD-RER) and derive fully non-asymptotic bounds that show it to be near minimax optimal for both stable linear and generalized linear models. Furthermore, we consider a Quasi-Newton style offline algorithm for the generalized linear setting and show that it is near optimal even when the process is unstable.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144502</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase Behavior, Filling Dynamics, and Packing of Fluids inside Isolated Carbon Nanotubes</title>
<link>https://hdl.handle.net/1721.1/144500</link>
<description>Phase Behavior, Filling Dynamics, and Packing of Fluids inside Isolated Carbon Nanotubes
Faucher, Samuel James
Fluids behave differently inside nanoscale pores than they do in bulk solution. When confined inside so-called single digit nanopores – pores with diameters smaller than 10 nm – the atomic configuration, phase behavior, and dynamics of fluids vary markedly from their bulk behavior and depend sensitively on the confining diameter. Understanding fluid behavior at these scales is critical to the design of a wide variety of engineering systems, such as membranes for chemical separations and batteries for energy storage. The study of nanofluidics also informs our understanding of natural phenomena, including the single-file transport of water into biological cells and flow through nanoporous geologic media. &#13;
&#13;
In this thesis, we develop experimental platforms to study confinement effects on fluid packing, filling, and phase behavior inside isolated, substrate-bound carbon nanotubes with diameters ranging from 0.8 nm to 3 nm. Carbon nanotubes are grown by chemical vapor deposition on marked silicon substrates and segmented by photolithography or use of a focused ion beam, producing multiple, identical segments of the same diameter and chirality carbon nanotube. By Raman spectroscopy, it is possible to determine on the micrometer length scale and second time scale whether an isolated carbon nanotube is empty, fluid-filled, or partially fluid-filled as a function of location, time, temperature, and nanotube diameter. &#13;
&#13;
After building precision nanopore systems and developing techniques to characterize nanopore filling, we address several topics of interest to the field of nanofluidics, as explored in the chapters of this thesis. First, we explore knowledge gaps in nanofluidics, including gaps in our understanding of phase behavior and dynamics of fluids under conditions of extreme confinement. Second, we study the diameter dependence of fluid packing and filling inside carbon nanotubes, showing that the variation in the change of the Raman radial breathing mode upon fluid filling is indicative of configurational changes in water inside nanotubes of different sizes. Third, we develop continuum elastic shell theories to explain why double-walled nanotubes, but not single-walled nanotubes, can distinguish between interior fluid filling and exterior fluid adsorption by changes in radial vibrations alone. Fourth, we perform a thermodynamic analysis of water-filled, closed carbon nanotubes, calculating enthalpies of phase change from a Clausius-Clapeyron type expression for nanoconfined water. Fifth, we perform a thermodynamic analysis of water-filled carbon nanotubes in a thermodynamically open system, observing a phase change driven by variable laser heating and calculating enthalpies of adsorption by comparison to a Langmuir-type adsorption model. Sixth, we observe dynamic changes in filling state with time, calculating diffusion coefficients of vapor-like and liquid-like water inside carbon nanotubes as a function of diameter. Finally, as an aside, we perform a computational analysis of a heated mask.&#13;
&#13;
Measurement of water inside carbon nanotubes, as explored in this thesis, expands our view of the thermodynamic and kinetic properties of fluids under confinement and addresses key knowledge gaps in nanofluidics. These measurements can inform new theories, force fields, and mechanisms for fluids in nanoconfined environments.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144500</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon Catabolite Repression Relaxation: Approaches for Sugar Co-Utilization in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/144499</link>
<description>Carbon Catabolite Repression Relaxation: Approaches for Sugar Co-Utilization in Escherichia coli
Fox, Kevin J.
Bioprocessing provides a sustainable, renewable, and green alternative to petroleum-based methods for the production of chemicals. A primary benefit of bioprocessing is its ability to utilize sugar-containing waste materials, such as those from agriculture, as a feedstock. This advantage is complicated in Escherichia coli by carbon catabolite repression (CCR), an intrinsic sugar preference system. Relaxing the effects of CCR has the potential to increase bioprocessing’s economic viability and sustainability through better feedstock utilization. In this work we examine reported strategies for the relaxation of CCR and describe novel methods for the utilization of sugar mixtures for the production of an industrially relevant chemical.&#13;
&#13;
A microbial production platform was developed to synthesize enantio-pure D-glyceric acid, a chemical with potential use in the materials industry, from D-galacturonate. The expression of udh from Pseudomonas syringae and gli from Agrobacterium fabrum, along with the inactivation of garK, encoding glycerate kinase, enables D-glyceric acid accumulation by utilizing the endogenous expression of garD, garL, and garR. Optimization of carbon flux through the elimination of competing metabolic pathways led to the development of a ΔgarKΔhyiΔglxKΔuxaC mutant strain that produced 4.8 g/l of D-glyceric acid from D-galacturonate, with an 83% molar yield. Additionally, a substrate-based induction platform was developed that enabled the expression of udh and gli upon the addition of D-galacturonate by utilizing the transcription factor ExuR from Bacillus subtilis, eliminating the need for chemical induction.&#13;
&#13;
Two strategies for CCR relaxation were investigated: one employing a global alleviation strategy and the other a sugar-specific strategy. A mutation in EIIAglc, an essential part of the phosphoenolpyruvate transferase system (PTS), was investigated as a global CCR relaxation strategy due to its ability to lock the protein in its phosphorylated state, mimicking a lack of glucose. While this engineered strain did co-utilize sugar mixtures, the phenotype was not stable. To enable sugar-specific CCR relaxation, the galacturonate-specific permease ExuT was engineered to lessen the effects of inducer exclusion. A galacturonate-specific biosensor was utilized to perform high-throughput screening of an exuT mutant library to search for mutants that enabled higher levels of intracellular galacturonate. Using the synthetic pathway to produce D-glyceric acid from galacturonate, an S391R mutant of ExuT increased titer by 20% when a co-feed of galacturonate and glucose was used.&#13;
&#13;
An analysis of the opportunity for synthetic biology to disrupt the specialty chemicals industry was performed. Synthetic biology firms should leverage their capabilities to utilize sustainable feedstock and synthesize novel products to differentiate and limit commoditization. Competitive landscape analysis displayed the relative success of the differing strategies of current players.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144499</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing planetary system histories via observations, experiments, and modelling of circumstellar gas and dust</title>
<link>https://hdl.handle.net/1721.1/144498</link>
<description>Probing planetary system histories via observations, experiments, and modelling of circumstellar gas and dust
Schneiderman, Tajana
Circumstellar disks of gas and dust are integral parts of planetary systems from formation to maturation. Protoplanetary disks, the name for circumstellar material at the earliest stages of a stellar lifetime, provide key information about the formation processes of planets, and therefore of the initial conditions that set system evolution in motion. These disks are host to both primordial interstellar materials and reprocessed constituents that become part of nascent planets and planetesimals. Debris disks, the name for circumstellar material after the protoplanetary disk dissipates, are remnants of earlier processes and carry clues to the formation conditions and evolutionary pathways of mature systems. In this thesis, I discuss three approaches to probing planetary system histories by examining circumstellar gas and dust. &#13;
&#13;
The first approach is to experimentally measure the desorption binding energies and entrapment efficiencies of neon, argon, krypton, and xenon in astrophysical ice analogs. Noble gases are valuable tracers of both nebular gas accretion and volatile delivery to planetary atmospheres; placing experimental constraints on these fundamental physical properties allows us to understand the extent to which each gas traces different sources of volatiles within the protoplanetary disk. We find that all four nobles are likely present in the nebular gas and can be directly accreted by nascent planets. We further find that argon, krypton, and xenon are trapped efficiently in interstellar ice analogs, with entrapment efficiencies ranging from 65-95% in astrophysically relevant ices. This suggests that they are valuable tracers for the solid volatile content within the disk. Lastly, we find that neon is inefficiently trapped; maximally 10% of neon is entrapped in interstellar ices, although the actual entrapment efficiency may be lower than 1%. Thus, neon is a tracer of only the nebular gas. &#13;
&#13;
The second approach is to examine archival data from the Atacama Large Millimeter Array for the HD 172555 system. This system is unique among debris disks due to its atypical dust composition. We detect the presence of carbon monoxide gas in the circumstellar debris. By considering the morphology and composition of both CO gas and dust in the system, we are able to rule out several origins for the debris. We instead find that the only scenario that adequately describes observations of the system is that of a giant impact; the CO gas is likely the remnant of a stripped planetary atmosphere, while the dust is debris produced in the collision. These observations provide evidence for giant impacts in systems other than our own. &#13;
&#13;
The third approach simulates the dust spectra of nine highly inclined debris disks for a range of compositions and particle size distributions. Multibandpass observations are required to adequately characterize dust in a system; dust spectra allow for an understanding of compositional classes of parent planetesimals, while deviations from steady-state predictions for the particle size distributions might indicate a history of giant impacts in a system. This chapter aims to develop a preliminary framework for analysis of systems in advance of data from the James Webb Space Telescope. We find that iron and troilite compositions are most easily disentangled from the suite of compositional families we consider, although silicates, water ice, and carbonaceous compounds are identifiable.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144498</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Economic Effects of Public Housing Programs</title>
<link>https://hdl.handle.net/1721.1/144495</link>
<description>The Economic Effects of Public Housing Programs
Blanco Fernández, Hector
This dissertation studies the economic effects of public housing programs. Public housing used to be the primary form of housing assistance throughout the 20th century in countries such as the United States and the United Kingdom. In recent decades, however, public housing has fallen out of favor, mainly due to the negative experience with large public housing developments, which concentrated high levels of poverty and crime. As a result, policymakers have shifted resources towards subsidized private housing in mixed-income developments, i.e., buildings that combine affordable with market-rate units. In the first two chapters, I examine the impact on local housing markets of demolishing and regenerating public housing into mixed-income developments. In the last chapter, I lay out a quantitative model to think about the distributional implications of shifting resources from public housing towards other housing assistance programs, such as housing vouchers or subsidies to low-income housing construction. &#13;
&#13;
Chapter 1 estimates the effects of demolishing public housing on private house prices. I examine the impact of a large and negative housing supply shock caused by the demolition of public housing developments in Chicago in the 1990s and 2000s. Using a synthetic control method based on census tracts in distant parts of the city, I estimate that house prices increased by about 20 percent over a ten-year period in census tracts near the demolitions. A calibration exercise suggests that the upward price pressure associated with reduced housing supply cannot fully explain the observed price effect. This leaves room for a contribution from positive amenities generated by demolitions, which raised the demand for nearby housing units. The estimated importance of amenity effects is, however, sensitive to the way the affected housing market is defined. The results highlight that, while public housing can lead to lower local house prices for unsubsidized households by increasing overall supply, the way in which the public sector supplies housing (in this case, high-rises concentrating very low-income households) can impose significant adverse consequences on its neighbors. &#13;
&#13;
Chapter 2 (joint work with Lorenzo Neri) studies the effects of regenerating public housing into mixed-income communities on the local housing market. We exploit a wave of public housing regenerations in London that not only demolish and rebuild existing public housing but also almost double the number of units on-site by adding new market-rate units. Over a six-year period, we estimate that regenerations significantly raise nearby house prices and rents, although house prices decrease slightly farther away. We also find that they attract higher-income households, increase positive amenities (e.g., cafés, restaurants), and reduce negative amenities (e.g., crime). The results are consistent with strong demand effects concentrated near the buildings and moderate effects from increased supply that persist in the broader area. We provide suggestive evidence that changes in a neighborhood's socioeconomic composition are important to explain price effects: regenerations in low-income areas and those adding a large number of market-rate units lead to larger price increases. Overall, our findings indicate that providing public housing through mixed-income housing can overcome some of the negative consequences on nearby areas associated with traditional public housing developments, as suggested in Chapter 1. However, the supply of additional market-rate units can reduce affordability in low-income neighborhoods, possibly due to an increased risk of gentrification and displacement of low-income neighbors.&#13;
&#13;
Finally, Chapter 3 (joint work with Juliette Fournier) examines the distributional implications of the policy shift from public housing to subsidized private housing initiated by the U.S. government over the past few decades. This policy shift leaves a larger role to private developers and property owners in supplying low-income housing, who may end up capturing a substantial share of the benefits intended for disadvantaged households. We build a quantitative urban framework where housing assistance complements income taxation to redistribute across workers. We argue that the provision of affordable housing involves a trade-off between indirect pecuniary redistribution and direct amenity effects. On the one hand, public housing drives local rents down by increasing supply, while amplifying the spatial concentration of poverty. On the other hand, project- and tenant-based rental assistance enhances the local amenities of subsidized households by promoting mixed-income communities, but pushes private landowners’ rents up. We estimate the key parameters of the model, which allows us to disentangle the forces behind this crucial trade-off.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144495</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Behavioral Economics of Mental Health and Belief Formation</title>
<link>https://hdl.handle.net/1721.1/144494</link>
<description>Essays on the Behavioral Economics of Mental Health and Belief Formation
Vautrey, Pierre-Luc
These three essays use the lens of behavioral economics to study topics in mental health, decision-making, and belief formation. They leverage randomized experiments to evaluate the impact of three interventions, and new "lab-in-the-field" incentivized surveys and games to measure outcomes.&#13;
&#13;
The first chapter studies the effects of mindfulness meditation on mental health, productivity and decision-making. We model a simple mechanism through which emotions and worries can reduce individuals' available attention and affect economic decisions. In a four-week experiment with 2,384 US adults, offering free access to a popular mindfulness meditation app that costs $13 per month improves mental health, productivity and decision-making. First, it causes a 0.44 standard deviation reduction in symptoms of stress, anxiety, and depression, comparable to the impacts of expensive in-person therapy, with improvements even among participants with minimal or mild symptoms at baseline. Second, it increases earnings on a proofreading task by 1.9 percent. Third, it makes decision-making more stable across emotional states, reducing the interference of personal worries with risk choices. Overall, our results demonstrate the potential of affordable mindfulness meditation apps to improve mental health, productivity, and the impact of emotions on economic decisions.&#13;
&#13;
The second chapter revisits two clinical trials that randomized depressed adults in India (n=775) to a brief course of psychotherapy or a control condition. Four to five years later, the treatment group was 11 percentage points less likely to be depressed than the control group. The more effective intervention averted 9 months of depression on average over five years and cost only $66 per recipient. Therapy changed people's beliefs about themselves in three ways. First, it reduced their likelihood of seeing themselves as a failure or feeling bad about themselves. Second, when faced with a novel work opportunity, therapy reduced over-optimistic belief updating in response to feedback and thus reduced overconfidence. Third, it increased self-assessed levels of patience and altruism. Therapy did not increase levels of employment or consumption, possibly because of other constraints on employment in the largely female study sample. &#13;
&#13;
The third chapter studies public beliefs in times of crises, and the role of governments' early recommendations about issues that remain uncertain. Do their early positions affect how much people believe the latest recommendations? We investigate this question using an incentivized online experiment with 1,900 US respondents in early April 2020. We present all participants with the latest CDC projection about coronavirus death counts. We randomize exposure to information that highlights how President Trump previously downplayed the coronavirus threat. When the President's inconsistency is salient, participants are less likely to revise their prior beliefs about death counts from the projection. They also report lower trust in the government. These results align with a simple model of signal extraction from government communication, and have implications for the design of changing guidelines in other settings.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144494</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacial Fluid Dynamics in Porous Media</title>
<link>https://hdl.handle.net/1721.1/144492</link>
<description>Interfacial Fluid Dynamics in Porous Media
Primkulov, Bauyrzhan K.
A beautiful array of patterns emerges when one fluid displaces another in porous media, a physical situation prevalent in many clean energy production and storage applications. These patterns can be reminiscent of dielectric breakdown, diffusion-limited growth of crystals, or percolation clusters in polymer gelation, depending on the relative affinity of the two fluids to the porous medium (wettability) and the balance of viscous and capillary forces. Examining this rich system at microscopic and macroscopic scales is at the center of this dissertation.&#13;
&#13;
In Part I, we build computational models to capture macroscopic fluid-fluid displacement patterns in disordered porous media, which helps synthesize decades' worth of experimental observations. We draw parallels between electrical circuits and flow in porous media, where resistors model viscous effects and a combination of batteries and capacitors model capillary forces. This simple analogy, augmented with wettability-dependent pore-invasion mechanisms, allows capturing the rich dynamics of pattern formation within a single pore-network model and helps delineate the role of wettability. Finally, we explore intriguing features of self-organized criticality during fluid-fluid displacement in disordered porous media.&#13;
&#13;
In Part II, we examine fluid displacement at a scale of a single capillary. We use lubrication theory to produce precise predictions of film evolution during spin-coating of capillary tubes---a technique one can use to fabricate capillaries with controlled surface properties. We then study the spontaneous imbibition of liquids in capillary tubes, where classical imbibition front slows with time. We propose a simple modification that renders imbibition constant-rate in capillary tubes and allows tuning of viscous dissipation; we use this system to characterize sources of dissipation during fluid-fluid displacement. We conclude Part II by revisiting the theory of moving contact lines over heterogeneous surfaces and rationalizing the transition from stick-slip to steady sliding.&#13;
&#13;
The physical problems we investigate in this dissertation may prove helpful in addressing our current environmental challenges by inspiring physics-informed advances in CO₂ storage, electrolyzers and fuel cells, design of sustainable micromechanical devices and self-cleaning surfaces.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144492</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differential analysis of scRNA-Seq data to characterize epithelial cells in health and disease</title>
<link>https://hdl.handle.net/1721.1/144491</link>
<description>Differential analysis of scRNA-Seq data to characterize epithelial cells in health and disease
Nyquist, Sarah Kate
Barrier tissue maintenance and function are crucial to human health. These tissue systems, such as the airways, gut, lungs, and mammary gland, are surfaces for exchange of essential inputs to sustain life that must also tolerate significant insults from physical and biological hazards. Many individual specialized cells within these tissues collaborate to support proper tissue function. Over the course of a person’s lifetime, these tissues sustain a diverse array of perturbations in accordance with their core function. In the mammary gland, this includes hormonally-driven re-arrangement to support the production of breast milk by mammary epithelial cells during lactation. In the respiratory system, this can include response to disease, such as allergic inflammation or infection with SARS-CoV-2. Studies of these tissues and the cells of which they are composed can allow us to better understand how epithelia respond to diverse perturbations, leading to broad insights into how to mitigate dysfunction and promote health.&#13;
&#13;
The recent advent of high-resolution methods like single-cell RNA-Seq (scRNA-Seq) has enabled the study of these tissues with unprecedented resolution, revealing substantial cell-to-cell heterogeneity and the role of diverse cell states in health and disease. Increased accessibility of these technologies led to a parallel explosion of data collected from diverse tissues and disease states along with available computational methods for their analysis. We describe the use of these methods to identify biological insights from scRNA-Seq applied to barrier tissues under diverse perturbations. These include the nasal airway epithelium during allergic inflammation and alterations in mammary epithelial cells over the course of lactation. Both studies find that epithelial diversification aids in the maintenance of these barrier tissues. Additionally, we describe a database of scRNA-Seq datasets and its application to an exploratory analysis of potential target cells for the SARS-CoV-2 virus, including types of epithelial cells in the gut and airway. This work underscores the need to create easy-to-use databases to enable rapid progress during global pandemics.&#13;
&#13;
Taken together, this work provides a perspective on the utility of scRNA-Seq data for the study of health and disease to empower both cross-disciplinary collaborations and the development of accessible computational tools and resources.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144491</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and Algorithmic Barriers in High-Dimensional Statistics and Random Combinatorial Structures</title>
<link>https://hdl.handle.net/1721.1/144485</link>
<description>Algorithms and Algorithmic Barriers in High-Dimensional Statistics and Random Combinatorial Structures
Kizildag, Eren C.
We focus on several algorithmic problems arising from the study of random combinatorial structures and of neural network models, with a particular emphasis on computational aspects. Our main contributions are summarized as follows.&#13;
&#13;
1. Our first focus is on two algorithmic problems arising from the study of random combinatorial structures: the random number partitioning problem (NPP) and the symmetric binary perceptron model (SBP). Both of these models exhibit a so-called statistical-to-computational gap: a striking gap between the existential guarantees and the best known algorithmic guarantees achievable with bounded computational power (such as polynomial-time algorithms). We investigate the nature of this gap for the NPP and SBP by studying their landscape through the lens of statistical physics, in particular spin glass theory. We establish that both models exhibit the Overlap Gap Property (OGP), an intricate geometrical property that is known to be a rigorous barrier for large classes of algorithms. We then leverage the OGP to rule out certain important classes of algorithms, including the class of stable algorithms and Markov Chain Monte Carlo type algorithms. The former is a rather powerful abstract class that captures the implementation of several important algorithms, including approximate message passing and low-degree polynomial-based methods. Our hardness results for stable algorithms are based on Ramsey Theory from extremal combinatorics. To the best of our knowledge, this is the first usage of Ramsey Theory to show algorithmic hardness for models with random parameters.&#13;
&#13;
2. Our second focus is on the Sherrington-Kirkpatrick (SK) spin glass model, a mean-field model for disordered random media. We establish that the algorithmic problem of exactly computing the partition function of the SK model is average-case hard under the assumption that P ≠ #P (an assumption that is milder than P ≠ NP and is widely believed to be true), both for the finite-precision arithmetic model and for the real-valued computational model. Our result is the first provable hardness result for a statistical physics model with random parameters that is based on standard complexity-theoretic assumptions.&#13;
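For readers unfamiliar with the model, the partition function in question is the standard SK object, stated here for reference under one common normalization (the dissertation's normalization may differ):

```latex
Z_N(\beta) \;=\; \sum_{\sigma \in \{-1,+1\}^N}
\exp\!\left( \frac{\beta}{\sqrt{N}} \sum_{1 \le i < j \le N} g_{ij}\,\sigma_i\sigma_j \right),
\qquad g_{ij} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0,1).
```

Exact computation naively sums over all 2^N spin configurations; the average-case hardness result above rules out polynomial-time exact computation for typical draws of the couplings g_ij unless P = #P.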
&#13;
3. Our last focus is on neural network (NN) models arising from modern machine learning and high-dimensional statistical inference tasks.&#13;
&#13;
• Our first set of results in this direction establishes self-regularity for two-layer NNs with sigmoid, binary step, or rectified linear unit (ReLU) activation functions and non-negative output weights in an algorithm-independent manner. That is, we establish that under very mild distributional assumptions on the training data, any such network has a bounded output norm provided that it attains a small training error on polynomially many data points. Our results explain why overparameterization does not hurt the generalization ability of such architectures. This conundrum has been observed empirically in NNs and defies classical statistical wisdom.&#13;
&#13;
• Our final focus is on the problem of learning two-layer NNs with quadratic activation functions under the assumption that the training data are generated by a so-called teacher network with planted weights. We first investigate the training aspect, establishing that there exists an energy barrier E₀ below which any stationary point of the empirical risk is necessarily a global optimum. That is, there are no spurious stationary points below E₀. Consequently, we show that the gradient descent algorithm, when initialized below E₀, nearly recovers the planted weights in polynomial time. We then investigate the question of proper initialization under the assumption that the planted weights are generated randomly. By leveraging a certain semicircle law from random matrix theory, we show that a deterministic initialization suffices, provided that the network is sufficiently overparameterized. Finally, we identify a simple necessary and sufficient geometric condition on the training data under which any minimizer of the empirical risk has good generalization. We lastly show that randomly generated data satisfy this condition almost surely under very mild distributional assumptions.
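The teacher-student setup above can be sketched concretely. The following is an illustrative toy example only (not the dissertation's code or parameter regime): data are generated by a planted teacher with quadratic activations, and a student network is trained by plain gradient descent on the empirical risk. All names and sizes are hypothetical.

```python
# Toy teacher-student learning with quadratic activations (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 3, 400                        # input dim, hidden width, samples

W_true = rng.normal(size=(m, d))           # planted teacher weights
X = rng.normal(size=(n, d))                # training inputs
y = ((X @ W_true.T) ** 2).sum(axis=1)      # teacher outputs: sum_j (w_j . x)^2

def risk(W):
    """Mean squared empirical risk of a student with weight matrix W."""
    return float(np.mean((((X @ W.T) ** 2).sum(axis=1) - y) ** 2))

W = 0.5 * rng.normal(size=(m, d))          # student initialization
risk0 = risk(W)
lr = 1e-4                                  # small step size for stability
for _ in range(3000):
    H = X @ W.T                            # (n, m) hidden pre-activations
    resid = (H ** 2).sum(axis=1) - y       # per-sample prediction error
    # gradient of the mean squared risk w.r.t. W: (4/n) * sum_i resid_i * H_ij * x_ik
    grad = 4 * (resid[:, None] * H).T @ X / n
    W -= lr * grad

final_risk = risk(W)
```

With a small enough step size the empirical risk decreases along the trajectory; the dissertation's contribution is the regime (below the energy barrier E₀) in which such descent provably encounters no spurious stationary points and nearly recovers the planted weights.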
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144485</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Labor Market Institutions</title>
<link>https://hdl.handle.net/1721.1/144483</link>
<description>Essays on Labor Market Institutions
Young, Samuel Goericke
This thesis includes three papers about the impacts of different labor market institutions. In the first chapter (joint with Sean Wang), we study the effect of private-sector unionization on establishment employment and survival. Specifically, we analyze National Labor Relations Board (NLRB) union elections from 1981 to 2005 using administrative Census data on the universe of establishments in the U.S. Our empirical strategy extends a difference-in-differences design with regression discontinuity extrapolation methods. This strategy allows us to estimate treatment effects that include elections won by large margins of support. We show that unionization decreases an establishment's employment and likelihood of survival. We hypothesize that two reasons for these effects are firms' ability to avoid working with new unions and managers' opposition to unions. We test this hypothesis for unionization in manufacturing. There, the negative effects are significantly larger for elections at multi-establishment firms. Additionally, after a successful union election at one establishment, employment increases at the firm's other establishments. Both pieces of evidence are consistent with firms avoiding new unions by shifting production from unionized establishments to other establishments. Finally, we find larger declines in employment and survival following elections when managers were more opposed to the union. To support this, we estimate treatment effect heterogeneity based on two proxies for managers' opposition: delays during the election process and the lack of other unionized establishments at the firm. Taken together, our results are consistent with firms' union avoidance tactics contributing to the overall negative effects of unionization.&#13;
&#13;
&#13;
In the second chapter (joint with Simon Jäger, Benjamin Schoefer, and Josef Zweimüller), we measure the wage effect of changes in the value of nonemployment among initially employed workers. Nonemployment is often posited as a worker's outside option in wage-setting models such as bargaining and wage posting. The value of nonemployment is therefore a key determinant of wages. Our quasi-experimental variation in the value of nonemployment arises from four large reforms of unemployment insurance (UI) benefit levels in Austria. We document that wages are insensitive to UI benefit changes: point estimates imply a wage response of less than $0.01 per $1.00 UI benefit increase, and we can reject sensitivities larger than $0.03. The insensitivity holds even among workers with low wages and high predicted unemployment duration, and among job switchers hired out of unemployment. The insensitivity of wages to the nonemployment value presents a puzzle to the widely used Nash bargaining model, which predicts a sensitivity of $0.24–$0.48. Our evidence supports wage-setting models that insulate wages from the value of nonemployment.&#13;
&#13;
In the third chapter, I study the effect of noncompete agreements on low-earning workers using a noncompete ban in Austria. The ban increased treated workers’ annual job-to-job transition rate by 0.3 percentage points (a two percent increase). This effect was driven by within-industry job transitions. The reform also disproportionately increased transitions to higher-quality firms and transitions accompanied by earnings gains. However, I do not find that the ban increased treated workers’ overall earnings growth rates. This evidence shows that noncompetes in Austria restricted low-earning workers’ job mobility but that their impact was not large enough to affect aggregate mobility or earnings trends.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144483</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electromagnetic suspensions for ground transportation.</title>
<link>https://hdl.handle.net/1721.1/144356</link>
<description>Electromagnetic suspensions for ground transportation.
Weinberg, Marc Steven,
            1948-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1974; Leaf 209 omitted in paging. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144356</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the 5s5p³P₁ level of cadmium</title>
<link>https://hdl.handle.net/1721.1/144191</link>
<description>An investigation of the 5s5p³P₁ level of cadmium
Lacey, Richard F.
            (Richard Frederick),
            1931-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1959; Vita.; Includes bibliographical references (leaves 43-44).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144191</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electromagnetic waves and instabilities in relativistic plasma</title>
<link>https://hdl.handle.net/1721.1/144189</link>
<description>Electromagnetic waves and instabilities in relativistic plasma
Yoon, Peter H.,
            1958-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1987; Bibliography: p. 241-245.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144189</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fermion number fractionization in relativistic field theories : supersymmetry at finite temperature</title>
<link>https://hdl.handle.net/1721.1/144188</link>
<description>Fermion number fractionization in relativistic field theories : supersymmetry at finite temperature
Paranjape, M. B.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144188</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntheses and Photophysical Studies of Two-Dimensional Hybrid Organic-Inorganic Semiconductors</title>
<link>https://hdl.handle.net/1721.1/144101</link>
<description>Syntheses and Photophysical Studies of Two-Dimensional Hybrid Organic-Inorganic Semiconductors
Paritmongkol, Watcharaphol
This thesis focuses on two families of two-dimensional (2D) hybrid organic-inorganic semiconductors with exciting properties for optoelectronic applications. The first family is 2D lead halide perovskites (LHPs), which have received immense interest due to their solution processability and interesting luminescent properties. However, their uses in energy harvesting and light-emitting applications are limited by the difficulty of their synthesis, incomplete knowledge of their structure-property relationships, and gaps in the understanding of their exciton physics. In this thesis, we first present cooling-induced crystallization to produce phase-pure 2D LHPs with controllable chemical compositions. Using single-crystal X-ray diffraction, we refined their crystal structures across temperatures and observed two phase transitions corresponding to structural changes in the organic and inorganic sub-lattices. The structural information across these phase transitions was then used to explain temperature-dependent optical properties and address the debate over the origins of their broadband emission. We observed two broadband emission features and distinguished defect-associated from self-trapped exciton (STE) emission by the difference in their temperature-dependent behaviors. Moreover, we found that the temperature dependence of STE emission is strongly correlated with exciton-phonon coupling strength and structural distortion, suggesting a possible tuning of this property by compositional engineering.&#13;
&#13;
The second family is silver phenylselenolate (AgSePh), an emerging 2D metal-organic chalcogenolate (MOC) material with blue luminescence, in-plane anisotropy, large exciton binding energy, non-toxic and earth-abundant elemental composition, and a scalable synthetic method. Despite these desirable characteristics for modern electronics, fundamental studies and device integration are limited by the crystal size and quality afforded by current synthetic methods. In this thesis, we report two synthetic advances to produce AgSePh thin films with controllable grain size from &lt;200 nm to &gt;5 µm and microcrystals with increased crystal size from ~2 µm to &gt;1 mm. Systematic optical and electrical characterizations through photoluminescence spectroscopy and electrical conductivity measurements suggest higher crystalline quality with lower defect densities in these samples. Using ⁷⁷Se nuclear magnetic resonance spectroscopy and monitoring reaction kinetics, we provide mechanistic insights that enable the development of generalizable single-crystal growth. Overall, we expect these reported synthetic methods to facilitate studies of this exciting material and its family.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144101</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics in Organic and Inorganic Nanoscale Materials</title>
<link>https://hdl.handle.net/1721.1/144100</link>
<description>Exciton Dynamics in Organic and Inorganic Nanoscale Materials
Klein, Megan D.
The study of the photophysical properties of nanoscale materials has been a substantial research area for several decades due to their unique optoelectronic behaviors and numerous applications ranging from quantum emitters to solar panels and beyond. Any use of these materials that involves their photophysical behavior, however, relies on a careful understanding of the light-matter interactions that occur within the system. Spectroscopic studies provide a powerful tool for probing these interactions and their dependence on the materials’ fundamental properties.&#13;
&#13;
In this thesis I investigate the dynamics of optically excited excitons in these materials and the available pathways for their recombination. I first study the interaction between excitons and the supramolecular lattice found in nanotubular molecular aggregates. The mechanism of photobrightening in these aggregates is explored through fluorescence measurements and wide angle X-ray scattering to demonstrate that the increase in quantum yield is associated with a change of structure. This leads to the development of a population of shielded or trapped excitons due to the formation of large polarons. Furthermore, I demonstrate design handles to control the recovery from the photobrightened state through rigidification of the aggregate structure.&#13;
&#13;
The next section investigates the dynamics of triexciton recombination pathways in inorganic CdSe/CdS core-shell nanocrystals (NCs). I present the use of time- and spectrally-resolved ensemble emission measurements to determine the emission energy of the P-like above-band-edge state in the NCs. This information is used to develop a state-resolved third-order correlation experiment to directly measure the emission pathway of triexciton events under low-flux conditions and observe that the recombination is dominated by band-edge emission. These results are then compared to theory to determine how the relative carrier overlap affects the triexciton recombination dynamics.&#13;
&#13;
In the final section, I discuss the nuances of uncertainty estimation that can arise in any spectroscopic measurement. Using two particular case studies, I demonstrate where typical error-approximation assumptions begin to fail. This is then followed by a thorough exploration of multiple techniques that can be used to gain more insight into the uncertainty of a measurement even when typical approximations fail.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144100</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flow Chemistry Guided by Computer-Aided Synthesis Planning</title>
<link>https://hdl.handle.net/1721.1/144099</link>
<description>Flow Chemistry Guided by Computer-Aided Synthesis Planning
Breen, Christopher P.
Synthetic organic chemistry provides access to valuable materials and molecules that enable advancements across scientific disciplines. Advances in synthetic technology continue to push the boundaries of how rapidly and efficiently complex matter can be produced. Despite this, it is often difficult for practitioners outside the core field of organic synthesis to fully leverage these advances, in part due to a lack of experience in planning and executing synthetic sequences, as well as the often incomplete communication of procedural information known only to expert practitioners.&#13;
&#13;
New tactics for planning and executing syntheses that disrupt conventional approaches to process development must be established in order for the utility of synthetic chemistry to become more broadly accessible. Computer-aided synthesis planning tools that utilize machine learning techniques are one such tactic, wherein a computer program suggests synthetic routes to user-defined small-molecule targets. While much attention has been paid to the development and refinement of computer-aided synthesis planning tools, reports that detail validation of synthetic routes proposed in silico remain rare. Continuous flow chemistry, wherein reagents are pumped through reactor volumes in a time-dependent fashion rather than processed in fixed-volume batches, has been established as a successful strategy for pushing the limits of synthetic chemistry through process intensification. In this thesis, the marriage of computer-aided synthesis planning, flow chemistry, and, in many cases, automation technology is showcased through the preparation of active pharmaceutical ingredients according to synthetic strategies and tactics proposed by a computer-aided synthesis program.&#13;
&#13;
In the 1st chapter, the concepts of continuous flow chemistry, computer-aided synthesis planning, automated experimentation, and the interplay of these concepts are introduced. The general approach and maturity of laboratory-scale continuous flow chemistry are first established. This is followed by a discussion of how automated continuous flow systems can use decision-making algorithms to enable closed-loop feedback in optimization experiments. Applications of this paradigm in the areas of reaction parameter measurements, materials synthesis, nano-materials synthesis, modern synthetic organic methods, and all-purpose synthesis machines are then discussed. The chapter concludes by outlining current challenges in the field and by proposing avenues of future work.&#13;
&#13;
The 2nd chapter describes a strategy for the continuous synthesis of angiotensin converting enzyme inhibitors, including quinapril and enalapril. This strategy was informed by ASKCOS, a computer-aided synthesis program, and developed manually using bench-top flow reactors as well as batch chemistry techniques. An optimization effort guided by in situ Fourier-transform infrared spectroscopy analysis resulted in a general amide coupling approach facilitated by N‐carboxyanhydride activation that was further characterized by reaction kinetics analysis in batch. The three‐step continuous process was demonstrated by synthesizing 8 different ACE inhibitors in up to 88% yield with throughput values in the range of 0.5 g h⁻¹, all while avoiding both isolation of reactive intermediates and process intensive reaction conditions. The process was further developed by preparing enalapril, a World Health Organization essential medicine, in an industrially relevant flow platform that scaled throughput to 1 g h⁻¹. The results of this effort motivated translation of the process to an automated reaction execution platform described in the next chapter.&#13;
&#13;
The 3rd chapter outlines a step toward a paradigm of chemical synthesis that relieves chemists of routine tasks, combining artificial intelligence-driven synthesis planning and a robotically controlled experimental platform. Synthetic routes are proposed through generalization of millions of published chemical reactions and validated in silico to maximize their likelihood of success. Additional implementation details are determined by expert chemists and recorded in reusable recipe files, which are executed by a modular continuous-flow platform that is automatically reconfigured by a robotic arm to set up the required unit operations and carry out the reaction. This strategy for computer-augmented chemical synthesis is demonstrated for 15 drug or drug-like substances, five of which were translated from the process detailed in the previous chapter.&#13;
&#13;
In the 4th and final chapter, the telescoped flow synthesis of sonidegib, an important chemotherapeutic agent, is described, again guided by the computer-aided synthesis program ASKCOS. The program proposed three potential pathways that were evaluated as telescoped flow sequences. It was found that a three-step synthesis of the target could be performed as a fully telescoped flow process in 86.6% overall yield. One of the other options, involving an amide coupling and Pd-catalyzed C-N coupling sequence, could be performed as a batch process but could not be translated to continuous flow. The remaining two-step sequence, an amide coupling followed by an SNAr reaction, did not proceed under the CASP program's reaction conditions. This work establishes a general approach for using CASP suggestions to guide the development of telescoped flow chemistry. During this project, all work was performed by C.P.B. under T.F.J.'s guidance.&#13;
&#13;
Portions of this thesis have been reprinted, adapted, or both, with permission from their respective publishers.&#13;
&#13;
Breen, C.P.; Nambiar, A.M.K.; Jamison, T.F.; Jensen, K.F. Ready, Set, Flow! Automated Continuous Synthesis and Optimization Trends Chem. 2021 3, 373-386. Copyright 2021 Elsevier Incorporated. C.P.B. and A.M.K.N. contributed equally to preparing the manuscript. T.F.J. and K.F.J. provided guidance and helped edit the manuscript.&#13;
&#13;
Christopher P. Breen and Timothy F. Jamison. Continuous Flow Synthesis of ACE Inhibitors From N-Substituted L-Alanine Derivatives. Chem. Eur. J. 2019 25, 14527-14531. Copyright 2019 John Wiley &amp; Sons. All work was performed by C.P.B. under T.F.J.'s guidance.&#13;
&#13;
Coley, C.W.; Thomas, D.A. III; Lummiss, J.A.M.; Jaworski, J.N.; Breen, C.P.; Schultz, V.; Hart, T.; Fishman, J.S.; Rogers, L.; Gao, H.; Hicklin, R.W.; Plehiers, P.P.; Byington, J.; Piotti, J.S.; Green, W.H.; Hart, A.J.; Jamison, T.F.; Jensen, K.F. A Robotic Platform for Flow Synthesis of Organic Compounds Informed by AI Planning. Science 2019, 365, eaax1566. Copyright 2019 American Association for the Advancement of Science. Reprinted with Permission from AAAS. C.W.C. designed, developed, and implemented the synthesis-planning software; D.A.T. designed, developed, and supervised construction of the robotic platform; J.A.M.L., J.N.J., C.P.B., V.S., and L.R. developed chemistry, performed experiments with the robotic platform, and interpreted results; T.H., J.S.F., J.B., and J.S.P. assisted in the development, maintenance, and assembly of the robotic platform; H.G. and P.P.P. assisted in the development of synthesis-planning code; R.W.H., W.H.G., and K.F.J. advised on the development of the synthesis-planning software; A.J.H. and K.F.J. advised on the development of the robotic platform; T.F.J. supervised chemistry development; C.W.C., D.A.T., and J.A.M.L. prepared the manuscript; T.F.J., K.F.J., and A.J.H. edited the manuscript; and K.F.J. supervised the project and secured funding.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144099</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of microbial responses to transition metal sequestration by the innate immune protein calprotectin</title>
<link>https://hdl.handle.net/1721.1/144098</link>
<description>Investigation of microbial responses to transition metal sequestration by the innate immune protein calprotectin
Zygiel, Emily M.
In response to microbial infection, the human host deploys metal-sequestering host-defense proteins to limit nutrient availability and thereby inhibit microbial growth and virulence. Calprotectin (CP) is an abundant antimicrobial protein that is released from neutrophils and epithelial cells at sites of infection. CP sequesters divalent first-row transition metal ions and thereby limits the availability of essential metal nutrients in the extracellular space. The activity of CP has historically been understood from the standpoint of manganese and zinc withholding, but recent work has uncovered the ability of CP to bind iron and nickel. In this thesis, we investigate how iron and nickel withholding by CP contribute to its antimicrobial activity, and how microbes respond to the withholding of these nutrient metals. The work presented herein reveals changes to bacterial growth, cellular metal levels, metal starvation responses, and virulence characteristics in response to iron and nickel withholding by CP. Taken together, these recent contributions inform our current model for how CP contributes to metal homeostasis and immunity, and provide a foundation for further investigations of this remarkable metal-chelating protein at the host-microbe interface and beyond.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144098</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Metal-Organic Frameworks and Crystalline Porous Polymers and Studies of Their Reactivity</title>
<link>https://hdl.handle.net/1721.1/144097</link>
<description>Synthesis of Metal-Organic Frameworks and Crystalline Porous Polymers and Studies of Their Reactivity
Sun, Chenyue
Porous crystalline solids such as metal-organic frameworks (MOFs) and crystalline porous polymers are unparalleled platforms for studying fundamental processes involving gaseous components, which ultimately leads to applications such as catalysis, gas sorption, and gas separation. To that end, new materials and new synthetic methods are constantly needed to fuel the growth of our knowledge. The aim of this thesis is to provide new avenues for constructing crystalline frameworks in a designed, target-driven manner. Applications of the newly synthesized materials are also detailed to demonstrate their unique advantages and showcase their versatility.&#13;
&#13;
Chapter 1 provides a summary of the current landscape of gas-solid reactions in MOFs with an emphasis on introducing MOFs to chemists from a homogeneous catalysis background. Both important concepts and practical examples are presented to compare and contrast chemistry in solution and chemistry inside MOFs. Opportunities and challenges are identified for future development of the field and deeper realization of MOFs’ potential. Chapter 2 introduces a new strategy, i.e. in-situ metalation, for incorporating flexible ligands into the backbone of MOFs and showcases this method with the synthesis of a new MOF, ZrTpmC*, which acts as a site-isolated and well-defined trispyrazolylmethane ligand in the solid state. In Chapter 3, the merit of site isolation in ZrTpmC* is demonstrated through a study of Cu(I)-mediated NO reductive coupling, a reaction of importance to biological systems and automobile emission abatement. A previously hypothesized but unseen N₂O₂•– intermediate was fully characterized by a range of spectroscopic techniques. In Chapter 4, a photochemical topochemical dimerization reaction is introduced as a new strategy for the synthesis of porous crystalline polymers. To exemplify, the topochemical polymerization of the tetrahedral monomer MTBA is described, and the adsorption properties, solution processability, and recyclability of polyMTBA are discussed.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144097</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Structural Dynamics of Bacterial Chemotaxis</title>
<link>https://hdl.handle.net/1721.1/144096</link>
<description>Exploring the Structural Dynamics of Bacterial Chemotaxis
Gordon, Jesse
Motile bacteria can track chemical gradients and adjust their swimming behavior using their chemotaxis system. The central components of the chemotaxis pathway are transmembrane chemoreceptors, which bind attractant ligands in the periplasmic domain and transmit ligand occupancy information through the cytoplasmic domain to inhibit a downstream histidine kinase that regulates motor control. Receptors are hypothesized to transmit this signal via dynamic structural changes in the &gt;200 Å coiled coil cytoplasmic domain. However, the nature of the structural dynamics induced by ligand binding remains unclear. Additionally, to operate over a wide dynamic range of attractant concentrations, chemoreceptors undergo adaptational modifications in the adaptation region of the cytoplasmic domain, which are thought to impact the structural dynamics associated with signaling in the cytoplasmic domain. Single-molecule Förster Resonance Energy Transfer (FRET), which provides a nanometer scale ruler that enables the observation of protein conformations and dynamics, was applied to intact, functional chemoreceptor homodimers to observe ligand-induced changes. This thesis describes work to interrogate the structural dynamics of chemoreceptor homodimers associated with ligand binding, using single-molecule FRET. Chapter 2 describes the construction and characterization of a total internal reflection fluorescence (TIRF) microscope for use in single-molecule FRET measurements, which permits the observation of hundreds of single molecules in parallel. This enables the high-throughput screening of chemoreceptors with different labeling positions and adaptational states. Chapter 3 describes the observation of concerted differential changes in helical packing and dynamics within the cytoplasmic domain of a bacterial chemoreceptor homodimer induced by ligand occupancy in the periplasmic domain. 
In the presence of attractant ligand, the more dynamic pair of helices was observed to undergo an increase in interhelix separation and a further increase in dynamics. Conversely, the less dynamic pair remained low in dynamics and underwent a compaction. Chapter 4 describes preliminary work measuring the nature of ligand-induced signaling as a function of receptor modification state. Initial single-molecule FRET results suggest that the receptors in an adaptational modification state which inhibits kinase activity undergo an expansion similar to that described in the previous chapter for receptors shifted to a modification state which favors downstream kinase activity.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144096</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalytic and Biological Applications of Benzoxaborolones</title>
<link>https://hdl.handle.net/1721.1/144095</link>
<description>Catalytic and Biological Applications of Benzoxaborolones
Graham, Brian James
Though first isolated 150 years ago, boronic acids have historically been neglected species. The more recent advent of transition metal-catalyzed reactions has led to an explosion in their use as building blocks, which in turn has led to their application in many other contexts as synthetic methods improved. The presence of a largely vacant p orbital on boron lends boronic acids unique properties and makes them of particular interest for organocatalysis and in chemical manipulation of biological systems. &#13;
&#13;
In Chapter 2, I describe the development of an organocatalytic process for the conversion of biomass-derived sugars to 5-hydroxymethylfurfural. Building on previously reported metal-promoted processes, I found that 2-carboxyphenylboronic acid (2-CPBA) is an optimal catalyst for this process, promoting the desired reaction without the addition of metals; it is unique among boronic acids in its ability to do so. &#13;
&#13;
The unique properties of 2-CPBA as a catalyst led us to further investigate its structure and reactivity. 2-CPBA was found to exist as a cyclized benzoxaborolone adduct, rather than the free carboxylic acid. In Chapter 3, I describe the consequences of this cyclization on the oxidative stability of the boronic acid. The stereoelectronic effects present in the oxaborolone ring destabilize the oxidation transition state by reducing electron donation from the cyclic oxygen to the developing p orbital on boron, leading to an improvement in the oxidative stability of these species by over four orders of magnitude while maintaining the normal reactivity of boronic acids toward nucleophiles and diols.&#13;
&#13;
In Chapter 4, I describe the application of the benzoxaborolone scaffold to improve the oxidative stability of a boronic acid-based inhibitor of HIV protease. Replacement of a phenylboronic acid with a benzoxaborolone led to a ~100-fold improvement in oxidative stability, and the new inhibitor largely maintained potency toward the protease. Analysis of an X-ray crystal structure of the inhibitor-protease complex revealed that the benzoxaborolone forms a strong network of hydrogen bonds due to its oxygen-rich and anionic nature.&#13;
&#13;
Appendices describe further investigations into the mechanism for conversion of glucose into HMF catalyzed by 2-CPBA, the reactivity of lignin and conversion of various biomass sources into carbohydrates in ionic liquids, and computational investigations into the stability of modifications on the benzoxaborolone scaffold toward oxidation and protodeboronation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144095</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of triphenylene-based radical-containing ligand bridges in mediating electronic spin coupling and sensing chemical analytes</title>
<link>https://hdl.handle.net/1721.1/144094</link>
<description>Investigation of triphenylene-based radical-containing ligand bridges in mediating electronic spin coupling and sensing chemical analytes
Yang, Luming
Tritopic radical-containing ligand bridges are important components for the design of molecular and solid-state magnetic materials, as they mediate strong spin exchange coupling between paramagnetic metals, and have potential for maintaining coherence at room temperature. In this thesis, the author explores ligand-mediated spin coupling and quantum properties of the tritopic radical bridge HXTP (HXTP = 2,3,6,7,10,11-hexa-substituted triphenylene). In the first part, a series of trimetallic complexes containing HXTP radical bridges are studied. Radical-mediated spin exchange between the metal centers, as well as the electronic delocalization across the HXTP ligands, are investigated with combined crystallographic, spectroscopic, magnetic, and electrochemical techniques. Structurally resembling the building blocks of HXTP-based two-dimensional metal-organic frameworks (2D MOFs), these trimetallic complexes are further discussed as molecular models for the MOFs in the context of dimensional reduction. Moreover, the HXTP-centered radical with oxygen bridgehead atoms possesses long spin relaxation times at room temperature when integrated into a MOF matrix. The second part of the thesis explores such radicals as electronic spin qubits and their application as qubit sensors for chemical analytes. Chapter 1 provides an overview of the current research on HXTP-bridged trimetallic complexes and the dimensional reduction approach in the context of HXTP-based 2D MOFs. An overview of qubit-embedded MOFs in the sensing of chemical analytes is also provided. Then, the author’s work is discussed in a broader context, along with possible future directions. Chapter 2 discusses the synthesis and characterization of two tricopper HXTP complexes, where the HXTP-mediated spin coupling between paramagnetic metal centers was first quantified.
Chapter 3 investigates the redox tuning of spin coupling in three trinickel complexes bridged by closed-shell, monoradical, and diradical HXTP ligands. Chapter 4 presents extremely strong magnetic coupling persistent at room temperature achieved in a trinickel HXTP complex with nitrogen bridgehead atoms. Finally, Chapter 5 describes room-temperature quantitative detection of alkali metal ions using radical spin qubits in an HXTP MOF. Broader implications for chemistry and quantum-related frontiers are also discussed.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144094</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Investigation of the Catalytic and Structural Roles of Metals in Metalloenzymes</title>
<link>https://hdl.handle.net/1721.1/144093</link>
<description>Computational Investigation of the Catalytic and Structural Roles of Metals in Metalloenzymes
Mehmood, Rimsha
Metalloenzymes capitalize on the unique roles of metal co-factors and protein scaffolds in catalyzing crucial chemical transformations at ambient conditions with exquisite selectivity. Some metalloenzymes exploit the redox properties of metal cofactors to catalyze challenging reactions, while others recruit metals for structural roles in stabilizing enzyme-substrate complexes. Although crystallography and spectroscopy provide foundational knowledge of the structure and reactivity of metalloenzymes, critical gaps remain in our understanding of the catalytic and structural role of metals in enzymes. Therefore, the use of novel computational tools to understand the role of metals and protein environment in dynamically promoting the reactivity and selectivity of metalloenzymes is of fundamental importance.&#13;
&#13;
In this thesis, we study the catalytic and structural roles of metals in metalloenzymes using quantum mechanics (QM), classical molecular mechanics (MM), and hybrid, multi-scale (QM/MM) atomistic simulations. To address the unique challenges in QM/MM simulations of metalloenzymes, we study the relative magnitude of configurational and QM-region sensitivity of energetic and electronic properties in a representative structural metal (Zn2+) binding site of a DNA methyltransferase. Next, we develop a protocol using spectroscopically guided molecular dynamics (MD) simulations augmented with large-scale QM/MM calculations to unearth the role of protein-substrate dynamics in governing selective halogenation catalyzed by non-heme iron halogenases. Demonstrating the utility of this protocol, our simulations provide essential insights into the interplay between strategic substrate positioning, active-site configurational isomerization, and protein dynamics in the halogenases SyrB2, WelO5 and BesD. We also investigate the use of vanadyl as a mimic of experimentally elusive ferryl catalytic intermediates of non-heme iron halogenase. Additionally, we employ long-time MD simulations to investigate the conformational dynamics of ScoE, a non-heme iron dioxygenase, by connecting the contrasting crystal structures obtained thus far. In this thesis, we also provide computational evidence for the mechanical interlocking of proteins in hydrogels, a phenomenon that is difficult to visualize experimentally. We expect that insights from this work can directly guide efforts on enzyme engineering, biomimetic chemistry and therapeutic drug development.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144093</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semiconducting Devices and Nanomaterials: Insight from Computational Chemistry</title>
<link>https://hdl.handle.net/1721.1/144092</link>
<description>Semiconducting Devices and Nanomaterials: Insight from Computational Chemistry
McIsaac, Alexandra Ross
In the past two decades, new technologies such as organic light emitting diodes (OLEDs) and quantum dots have emerged as promising candidates for applications from displays to solid state lighting. Many phenomenological and empirical models exist to explain the properties of these materials, and have succeeded in describing some of them. However, both of these systems have high degrees of disorder; for OLEDs, this manifests due to the molecular makeup of the emitting layer, and for quantum dots, due to their highly non-crystalline surface. Explaining properties that arise due to this disorder requires models that go beyond the phenomenological; in particular, it requires methods that can explicitly model the atoms and molecules causing disorder. In this thesis, we investigate the properties of quantum dot surfaces using density functional theory, which is an atomistic, all-electron electronic structure method. This allows us to identify specific features on the quantum dot surface and tie these features to the optical properties of the quantum dot. We find that undercoordinated surface atoms on the surface of CdSe can cause optical traps even when there are no traps in the ground state band structure, show that surface reorganization and annealing can significantly improve the optical properties of CdSe, and also explore sources of traps in CdSe/CdS core/shell quantum dots. In addition, we develop a model for OLED kinetics, which is able to incorporate the effects of molecular disorder but is very computationally efficient. We show that this model can extract molecular rate constants from a device-level measurement, and can help identify sources of efficiency loss in OLED devices.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144092</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Showcase of Functional Fluorous Materials and Their Applications</title>
<link>https://hdl.handle.net/1721.1/144091</link>
<description>A Showcase of Functional Fluorous Materials and Their Applications
Yoshinaga, Kosuke
This thesis highlights conjugated organic materials functionalized with fluoroalkyl chains and explores their applications in material chemistry. In Chapter 1, we begin with a brief introduction to the incorporation of fluorine into materials and its effect on materials properties. An introduction to dynamically reconfigurable complex emulsions reported by our group and their applications in sensing is also provided. In Chapter 2, we disclose the synthesis and photophysical properties of “fluorofluorescent” perylene bisimides, where fluoro- refers to both fluorine and fluorescence. These perylene bisimides were utilized to realize the concept of a fluorofluorescent solar concentrator. In Chapter 3, we report the synthesis and photophysical properties of fluorous phthalocyanines and subphthalocyanines. We then explored these molecules’ Faraday rotation properties in organic and fluorous solvents. In Chapter 4, we describe the applications of the perylene bisimide and subphthalocyanine dyes by incorporating them into fluorescent Janus emulsions. We also report the synthesis of fluorous soluble black hole quencher dyes, which were found to be more absorptive than fluorous soluble subphthalocyanines. The emulsions were engineered to function as biosensors for the detection of Salmonella, Zika virus, Listeria, and anti-SARS-CoV-2 spike antibody. In Chapter 5, we report the utilization of the Heck reaction for facile access to functional fluorous materials. We also showcase how this fluoroalkenyl side chain turns out to be an important functional group for lithium primary batteries.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144091</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrically conductive porous catecholate metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/144090</link>
<description>Electrically conductive porous catecholate metal-organic frameworks
Skorupskii, Grigorii
Metal-organic frameworks (MOFs) are porous crystalline solids made up of countless possible combinations of organic ligands and metal nodes. MOF research has seen rapid expansion in the past two decades owing to the high surface areas of MOFs – and since the first studies in the mid-1990s, these materials have already found their first real-life applications in gas storage and separation. Of particular recent interest are electrically conductive MOFs, seen not only as porous conductors, but also as designer conductors thanks to the high degree of control over their structures. In this thesis, we attempt to improve our understanding of charge transport in MOFs by focusing on the compounds of 2,3,6,7,10,11-hexahydroxytriphenylene (H₆HOTP), which is essentially a trimerized 1,2-dihydroxybenzene, or catechol.&#13;
&#13;
Chapter 1 introduces basic concepts of electrical conductivity in MOFs and explores in detail the current state of experimental investigations into record-setting two-dimensional (2D) conductive MOFs with extended π-d conjugation throughout the layers. Chapter 2 challenges the common assumption that this extended conjugation is the primary pathway for conduction in these materials: we explored MOFs based on the lanthanides (Ln) and H₆HOTP, Ln₁+ₓHOTP (x ~ 0.2) with no in-plane conjugation, and found that high conductivities could be achieved only through π-π stacking interactions of the organic linkers. Chapter 3 supports the findings of Chapter 2 with another new 2D conductive MOF, Ga₉HOTP₄. We found that despite little electron delocalization within the 2D layers, Ga₉HOTP₄ possesses conductivities matching those of its heavily delocalized transition-metal analogs, and that the conductivity similarly originates in π-π stacking. In Chapter 4, we find that high quality crystals of La₁.₅HOTP and Nd₁.₅HOTP are one-dimensional metals with record-high conductivities surpassing 1000 S/cm at room temperature. Importantly, the crystals also transition to a charge density wave phase below 370 K – such transitions are characteristic of one-dimensional metals and have not been reported previously for MOFs or any other porous solids. Lastly, in Chapter 5, we present a novel family of conductive MOFs based on rare-earth metals and H₆HOTP with isotropic cubic structures – a feature that is surprisingly rare in conductive MOFs.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144090</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Studies of Amyloid-β Fibrils using Magic Angle Spinning Nuclear Magnetic Resonance and Dynamic Nuclear Polarization</title>
<link>https://hdl.handle.net/1721.1/144089</link>
<description>Structural Studies of Amyloid-β Fibrils using Magic Angle Spinning Nuclear Magnetic Resonance and Dynamic Nuclear Polarization
Bahri, Salima
Magic Angle Spinning Nuclear Magnetic Resonance (MAS NMR) is a powerful method that probes the structure and dynamics of insoluble molecules at atomic level detail. The last two decades have seen a rise in studies of amyloid fibrils, which have been implicated in many diseases. In particular, amyloid-β is a consistent feature in Alzheimer’s disease, with over 6 million patients in the US diagnosed to date. The first part of this thesis details the pathway to solving the structure of Aβ1-40 fibrils that were grown from pure peptide as a baseline for further studies, including brain seeding and mutations. This work was carried out in a 3.2 mm rotor spinning at a maximum of ωr/2π = 20 kHz. We were able to unambiguously assign ~75% of the 40-residue peptide, and used this information to acquire long-range constraints for structure calculations. Using the CYANA software package, we have converged on a structure that satisfies the defined constraints with no violations, in which the core ranges from Q15-V40 with a backbone RMSD of 0.6 ± 0.1 Å. We proceeded to explore state-of-the-art MAS NMR methods in ultra-fast spinning ¹H-detection (ωr/2π = 111 kHz) and fast-spinning high field DNP on fully protonated M₀-Aβ₁₋₄₂ fibrils, whose structure has been thoroughly characterized in the literature. In doing so, we were able to assign ~92% of all protons in the fibril core (K16–A42), and obtained long-range ¹H-¹H correlations that correspond to the fibril structure. At the same time, high field DNP experiments at ωr/2π = 40 kHz produced well-resolved spectra, showing that these state-of-the-art techniques are amenable to studies on amyloid fibrils. Finally, we set out to develop machining methods in order to fabricate diamond rotors for the purpose of combining ¹H-detection and DNP to facilitate rapid data acquisition and analysis.
Our progress shows that we are able to fabricate 0.7 mm rotors out of diamond that can currently spin up to ~7 kHz MAS, a rate that will likely increase as we improve the taper and concentricity of our current through-holes. Progress in this area will facilitate carrying out studies on natural abundance samples.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144089</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tuning Heterogeneous Catalysis Using Interfacial Polarization</title>
<link>https://hdl.handle.net/1721.1/144088</link>
<description>Tuning Heterogeneous Catalysis Using Interfacial Polarization
Ryu, Jaeyune
A plethora of emerging energy conversion technologies employ heterogeneous catalysis at solid-liquid interfaces, ranging from faradaic electrocatalysis to nonfaradaic thermochemical catalysis. Unlike at a gas-solid interface, charge transfer events at a solid-liquid interface can polarize the interface, resulting in a local environment at the active site that is radically distinct from the environment in the bulk liquid phase. Thus, unraveling the influence of electrical polarization on the solid-liquid interface is a critical prerequisite for elucidating reactivity trends and for the rational design of new catalysts. Accordingly, this dissertation aims to understand (Part I) and control (Part II) the interfacial polarization effects during heterogeneous catalysis.&#13;
&#13;
Part I of this thesis establishes a quantitative correlation between the degree of interfacial polarization and the perturbation of the local interfacial microenvironments under the conditions of catalysis. In particular, we examine spontaneous and driven polarization mechanisms that give rise to interfacial electrostatic gradient (Chapter 2) and non-equilibrium concentration profiles (Chapter 3), respectively. Exploiting a surface-specific nonfaradaic reaction probe to sample the local activity of protons, which serve both as free ionic charge carriers and reactants/products of proton-coupled electron transfer reactions, we quantify interfacial electrostatic field strength and non-equilibrium pH gradient within molecular length scales from the catalytic surface.&#13;
&#13;
Leveraging the fundamental knowledge of interfacial polarization mechanisms gained in Part I, Part II of this thesis establishes a general mechanistic framework for exploiting interfacial polarization to mediate and promote thermochemical catalysis. Specifically, we demonstrate that thermochemical aerobic oxidation catalysis in water is mediated via spontaneous interfacial polarization induced by the coupling of constituent electrochemical half-reactions (Chapter 4). Additionally, exploiting driven-polarization to induce the non-equilibrium local pH swing, we show that Pd-catalyzed thermochemical CO₂ hydrogenation to formate can be dramatically promoted with modest electrical bias under mild reaction conditions (Chapter 5).
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144088</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonspectator Reactivity of Nontrigonal Tricoordinate Phosphorus Ligands</title>
<link>https://hdl.handle.net/1721.1/144087</link>
<description>Nonspectator Reactivity of Nontrigonal Tricoordinate Phosphorus Ligands
Tanushi, Akira
This thesis describes novel ‘nonspectator’ reactivity of geometrically deformed tricoordinate phosphorus ligands that diverges from the traditional supporting roles of phosphines in transition metal catalysis. Chapter 1 presents an overview of the chemistry of higher coordinate phosphorus ligands in transition metal complexes. Chapter 2 describes experiments validating the enhanced electrophilicity of nontrigonal Cₛ-symmetric P(III) compounds as compared to typical trigonal P(III) ligands with quasi-C₃ᵥ local symmetry. Specifically, phosphorus K-edge XANES spectroscopy combined with time-dependent DFT calculations reveals a ca. 1.5 eV bathochromic shift in the position of the P K-edge onset. In Chapter 3, the development of nonspectator reactivities of a novel chelating ligand containing a nontrigonal P(III) center with Group 8 Ru complexes is presented. In a first finding, a unique net insertion of the nontrigonal P(III) ligand into a Ru–H bond is demonstrated, yielding a five-coordinate phosphorus center in which one of the substituents is a transition metal (i.e. a metallohydrophosphorane). The mechanistic investigation of the net insertion shows an α-H migration across the Ru–P bond in a reversible and controllable fashion. Chapter 4 extends nonspectator reactivity to metal–ligand cooperative bond activation in Group 9 metal systems. Various transformations, such as heterolytic splitting of carbon dioxide and cooperative O–H addition of phenol, are achieved by a designed Ir–P bond with a bifunctional reactivity. Finally, Chapter 5 presents results on the net insertion of nontrigonal P(III) ligands into Group 10 metal–carbon bonds, and the factors governing the insertion reactivity are discussed. Taken together, the versatile nonspectator reactivities provide a conceptually new role for higher coordinate phosphorus ligands as a viable platform for novel bond activation and group transfer processes.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144087</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activated Phosphate Reagents for the Synthesis of Functionalized Oligophosphates</title>
<link>https://hdl.handle.net/1721.1/144086</link>
<description>Activated Phosphate Reagents for the Synthesis of Functionalized Oligophosphates
Shepard, Scott M.
Oligophosphates play essential roles in biochemistry, and considerable research has been directed towards the synthesis of both naturally occurring oligophosphates and their synthetic analogs. However, oligophosphates longer than triphosphates have been less well studied than shorter analogs. Here we have expanded the synthetic knowledge of these important compounds through the development of new reagents and methodologies for chemical oligophosphorylation. We developed and elaborated reagents for di-, tri-, and tetraphosphorylation which react with a wide variety of nucleophiles to generate a library of oligophosphate products ranging from triphosphates to pentaphosphates. Furthermore, preliminary studies were undertaken to further elaborate the roles these oligophosphates play in biology.&#13;
&#13;
Trimetaphosphate reacts with PyAOP ([(H₈C₄N)₃PON₄C₅H₃][PF₆]) to yield an activated species, [P₃O₉P(NC₄H₈)₃]⁻, isolated as its bis(triphenylphosphine)iminium (PPN) salt. Treatment of this activated trimetaphosphate with a variety of simple nucleophiles, such as alcohols and amines, generates nucleophile-substituted trimetaphosphate products. The activated trimetaphosphate compound furthermore reacts with methylenetriphenylphosphorane to generate a trimetaphosphate-based phosphorus ylide capable of P–C bond formation through Wittig chemistry. Ring opening of these substituted trimetaphosphate products with hydroxides results in linear triphosphate derivatives.&#13;
&#13;
Adenosine and uridine 5′-tetra- and 5′-pentaphosphates were synthesized by treatment of the unprotected nucleosides or nucleoside monophosphates with an activated tetrametaphosphate ([PPN]₂[P₄O₁₁]) and subsequent ring opening with hydroxide. These nucleotides were then tested for inhibition of the enzymatic activity of ribonuclease A. We then solved X-ray co-crystal structures of the highest affinity nucleotide binders to rationalize the increased binding affinity of longer nucleotides.&#13;
&#13;
The anion [P₄O₁₁]²⁻, employed as its bis(triphenylphosphine)iminium (PPN) salt, was shown herein to be a versatile reagent for nucleophile tetraphosphorylation. Treatment with a nucleophile (amines, alcohols, phosphates, ylide, azide) yields a nucleophile-substituted tetrametaphosphate intermediate that was then ring opened with a second nucleophile (hydroxide, amine, phenoxide, fluoride) to give a wide variety of tetra- and pentaphosphate derivatives.&#13;
&#13;
Diphosphorylation of select nucleophiles was achieved by treatment with neutral, doubly zwitterionic adducts of nucleophilic tertiary amines (pyridine, DABCO) and P₂O₅. The adducts were easily synthesized by reaction of the tertiary amine with polymeric phosphorus pentoxide in acetonitrile. The resulting adducts are competent diphosphorylation reagents, capable of generating unsymmetric dinucleoside tetraphosphates from nucleoside monophosphates or nucleoside tetraphosphates from nucleoside diphosphates.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144086</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis, Design, and Control of Supernumerary Robotic Limbs Coupled to a Human</title>
<link>https://hdl.handle.net/1721.1/143930</link>
<description>Analysis, Design, and Control of Supernumerary Robotic Limbs Coupled to a Human
Daniel, Phillip Howard
Musculoskeletal disorders (MSDs) are a prevalent type of injury in many careers such as ship and aircraft building, farming, and construction. These injuries cost companies and governments a great deal of money and cause workers significant discomfort. This thesis investigates how a new type of wearable robot can reduce the risk of contracting an MSD.&#13;
&#13;
Wearable robots, such as exoskeletons, can reduce the load on targeted muscles of the human body during fatiguing tasks. It is common, however, that exoskeletons also increase the load at untargeted muscles, resulting in minimal to negative net improvement. Here, a novel wearable robot is designed and controlled based on human models and experimental data. The models predict the effect of the device, called Supernumerary Robotic Limbs (SuperLimbs), on the wearer’s total muscular effort. SuperLimbs brace the human’s body while they work at floor-level, leaving both of their natural limbs free to complete tasks. Their effectiveness varies depending on how they attach to the human (harness design), are coupled to the floor (wrist and hand design), and are controlled (actuation policy). These factors are analyzed and used to design and control the robot.&#13;
&#13;
First, this thesis describes how two human models are constructed and validated using experimental data gathered from a series of motion capture experiments. The validated models are used to specify design criteria for the SuperLimbs to ergonomically support the wearer. Following the design criteria, two control policies are presented. One allows the human to reposition their body while they work at a fixed location on the ground, while the other allows the human to crawl using the SuperLimbs. More importantly, we justify these two control policies by using the human models to predict how they will affect the human’s total muscular effort. Finally, we describe a novel Lyapunov-based controller that prioritizes the ability to make stability guarantees about the regulation of the human’s SuperLimb-assisted crawl.&#13;
&#13;
We determine that the human’s lower lumbar region is a key area to redistribute loads away from. The design principles that we discover with our human models and experimental data aid in doing just this. Namely, we find that it is best to attach the SuperLimbs at the anterior edge of a rigid, full-chest harness. Additionally, we find that the SuperLimbs are able to significantly redistribute muscle loads away from the lower back during crawling if they are equipped with robotic wrists. However, robotic wrists are not necessary for stationary postures as the contact point of the SuperLimbs on the ground can be chosen to best redistribute the loads in the human’s back. Finally, we show that synchronizing the SuperLimbs to the human’s natural motion during crawling minimizes the work we expect their lower back to do.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143930</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear Algebra, Random Matrices and Lie Theory</title>
<link>https://hdl.handle.net/1721.1/143929</link>
<description>Linear Algebra, Random Matrices and Lie Theory
Jeong, Sungwoo
This thesis bridges the gap between pure and applied mathematics. The first part of this thesis focuses on the theory and computation of various Lie groups. The classical Lie groups as well as the automorphism groups of the bilinear and sesquilinear forms are discussed with numerical examples. In particular, we present a general approach for computing a basis of the tangent space of the automorphism group. In the second part, we derive a series of matrix factorizations from the generalized Cartan decomposition introduced by Flensted-Jensen and Hoogenboom. The generalized Cartan decomposition applied to structured matrices proves the existence of several known matrix factorizations at once and at the same time reveals a number of new matrix factorizations. Finally in the last part we derive the joint eigenvalue-like densities of the classical random matrices associated with the matrix factorizations. The Jacobian of the generalized Cartan decomposition computes the classical joint densities with various parameters using root systems. We complete the link between classical random matrices and symmetric spaces by introducing this generalized approach. Furthermore, two new families of the Jacobi ensemble parameters are obtained as a result.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143929</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Failure of Martensitic Sharp Edges: A Micro-Mechanical Exploration for Design Guidelines</title>
<link>https://hdl.handle.net/1721.1/143927</link>
<description>Failure of Martensitic Sharp Edges: A Micro-Mechanical Exploration for Design Guidelines
Roscioli, Gianluca
Sharp edges produced today are honed from carbide-rich martensitic stainless steel and finished with applied coatings to achieve high hardness and wear resistance. Yet they become practically unusable after cutting much softer materials like cheese, potatoes, or human hair.&#13;
&#13;
Although this is an everyday observation, the underlying physical micro-mechanisms are poorly understood due to the structural complexity of the interacting materials, and the complex boundary conditions of their co-deformation. As a result, simplistic guidelines, such as increasing material hardness, are the only ones currently available.&#13;
&#13;
This thesis proposes a new approach to investigating the failure of common sharp edges, specifically razor blades cutting human hairs, in order to identify more accurate design guidelines for improved products. This approach consists of the development of new methods for in-situ scanning electron microscope experimental characterization, which enable real-time visualization of the cutting of hairs as well as measurement of the cutting force.&#13;
&#13;
Investigations carried out using pristine sharp edges enabled the identification of the mechanical factors governing failure: (i) large mode III component in the stress, (ii) particular position of the hair relative to an asperity along the sharp edge, and (iii) presence of spatial variation of the heterogeneous lath martensite structure along the sharp edge. Combination of these three factors can promote crack propagation at an angle with respect to the sharp edge to form a chip, before other failure mechanisms such as wear or fatigue are activated.&#13;
&#13;
Investigations carried out using corroded sharp edges revealed the detrimental effect of the environment on the steel. Because chromium segregates into chromium-rich carbides, the surrounding martensitic matrix is locally depleted in chromium, and corrosion produces a percolated void structure in the sharp edges. The heterogeneous variation of mechanical properties produced by this structure, in turn, causes cracks to propagate perpendicular to the sharp edge, thus changing the failure mechanism and increasing the failure probability.&#13;
&#13;
The insights gained from these investigations allowed the formulation of new design guidelines for sharp edges, requiring (i) high hardness for wear resistance, (ii) heterogeneity length scale below the sharp edge tip radius for chipping resistance, and (iii) absence of chromium-rich carbides for corrosion resistance. &#13;
&#13;
These guidelines have been implemented through the design of a new manufacturing process that uses tapered rolls to locally and severely deform a strip of material into sharp edges. This process creates a gradient in the microstructure from bulk to tip, resulting in high strength near the sharp edge thanks to the severe deformation. Preliminary results obtained with pearlitic steel showed promising signs of cementite dissolution.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143927</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strongly Correlated 2D Electronic Systems: Interplay between Band Topology and Electron-electron Interaction</title>
<link>https://hdl.handle.net/1721.1/143925</link>
<description>Strongly Correlated 2D Electronic Systems: Interplay between Band Topology and Electron-electron Interaction
Mao, Dan
In recent years, a new class of 2D materials, moiré superlattice materials, has been made in laboratories and exhibits interesting behavior including superconductivity at “high” temperature, strange metals, correlated insulators, and ferromagnetism [1, 2].&#13;
&#13;
Inspired by these experiments, we studied various graphene-based moiré materials. We found that band topology and electron-electron interaction can be easily tuned by an electric field in these systems, which opens the door to various fascinating quantum many-body phenomena such as the quantum Hall effect at zero external magnetic field [3]. Our theoretical prediction was later confirmed by experimental studies [4].&#13;
&#13;
In some moiré materials, the band topology can be tuned between “trivial” and “topologically non-trivial” with a perpendicular displacement field. On the “trivial” side, with electron interaction taken into account, one natural and important question is the low energy effective theory. We constructed an SU(4) Hubbard model for some of these moiré materials, which has been rarely explored in the condensed matter literature. We studied this SU(4) Hubbard model and found that it can realize spin liquids and pseudo-gap metals [5]. On the “topological” side, there is difficulty in formulating a low energy effective lattice model, which comes from an obstruction to constructing symmetric, localized Wannier functions. We circumvented this lattice obstruction by either enlarging the low energy Hilbert space [6] or by constructing a momentum space model [7] for twisted bilayer graphene (TBLG) aligned with hexagonal boron nitride (hBN). We also discussed the effect of translation symmetry breaking in this system [7].&#13;
&#13;
Motivated by these results in moiré superlattices, we considered the effect of electron interaction in a generic topological band. We studied a model of electrons confined to the lowest Landau level (LLL) in a spatially periodic magnetic field. We showed that there is a relationship between a spatially periodic magnetic field acting on the LLL and momentum-space periodic Berry curvature in a topological Chern band. We considered spinful electrons at total filling &#120584; = 1 and studied the spin wave dispersion in the presence of a Coulomb potential. We found that, compared to a uniform magnetic field, the spin stiffness increases in the periodic magnetic field.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143925</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding Deoxygenative Transformations of Alcohols by Phosphorus Compounds Through Geometric Deformation</title>
<link>https://hdl.handle.net/1721.1/143924</link>
<description>Expanding Deoxygenative Transformations of Alcohols by Phosphorus Compounds Through Geometric Deformation
Moon, Hye Won
Tricoordinate phosphorus (σ³-P) compounds act as strong O-atom acceptors in many deoxygenative transformations by forming strong P=O bonds in phosphine oxides. While typical trigonal σ³-P compounds are nucleophilic, it has been demonstrated that lowering the C₃ᵥ symmetry of σ³-P compounds to Cₛ symmetry enhances the electrophilicity of the P-center. Such electrophilic character of nontrigonal σ³-P compounds has enabled unprecedented reactivity toward alcohols, forming hydridoalkoxyphosphoranes through a formal O–H oxidative addition. Hypothetically, these resulting phosphoranes can undergo deoxygenative reactions of alcohols via C–O cleavage driven by P=O formation, by accessing alkoxyphosphonium and alkoxyphosphoranyl species. Using this approach of converting hydridoalkoxyphosphoranes into phosphine oxides, this thesis details the development of a new deoxygenative transformation of alcohols mediated by nontrigonal σ³-P compounds.&#13;
&#13;
As a point of departure, the existing literature precedent involving C–O cleavage via alkoxyphosphonium and alkoxyphosphoranyl intermediates is reviewed in Chapter 1. Given that these intermediates can potentially be generated by cleaving the P–H bond in hydridoalkoxyphosphoranes, the factors controlling P–H thermochemistry are also discussed. In Chapter 2, proximal substituent effects on P–H thermochemical parameters (pKₐ, bond dissociation energy, and thermodynamic hydricity) are described. Specifically, it is shown that hydridophosphoranes derived from O–H oxidative addition to a nontrigonal σ³-P compound with an O,N,O-chelate feature an acidic P–H bond; by complement, hydridophosphoranes derived from O–H oxidative addition to a nontrigonal σ³-P compound with an N,N,N-chelate afford a hydridic P–H moiety. In Chapter 3, ancillary substituent effects on the stability of hydridophosphoranes are shown. By introducing an ethylene linker in the ligand of a nontrigonal σ³-P compound, the (σ⁵-P)→(σ³-P) tautomerism is considerably suppressed upon E–H bond (E = OR, OOCR, and SR) oxidative addition without significant changes in the geometry and electronic structure of the nontrigonal P-compound. Employing the knowledge gained from the aforementioned fundamental studies, in Chapter 4, the development of a new deoxyfluorination of alcohols via alkoxyphosphonium intermediates is detailed. This method has shown that access to alkoxyphosphonium cations sufficiently activates alcohols to undergo C–O cleavage. Combined with the use of fluoroborate as a fluoride donor, a nontrigonal σ³-P compound has enabled fluorination of tertiary and secondary alcohols with stereoinversion.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143924</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Interactions in Self-Assembling Systems</title>
<link>https://hdl.handle.net/1721.1/143923</link>
<description>Optical Interactions in Self-Assembling Systems
Zornberg, Leonardo Z.
Self-assembly of nanoscale components into ordered macroscopic structures is a powerful tool for the fabrication of metamaterials with tailorable or exotic effective material properties. Of particular interest are optical metamaterials, since the lengthscales of the nanoparticle subunits and microscale assemblies match the interacting wavelengths of light. While this overlap offers many opportunities for the fabricated structure to influence light, as well as for light to influence the structure, it also introduces several technical challenges. First, it is inherently difficult to simultaneously control optically relevant nanoscale and microscale structures because the microscale shape is governed by nanoscale interactions. Second, the stochastic nature of self-assembly kinetics introduces structural variability at the microscale that directly leads to variability of the optical properties. Third, only thermodynamically stable features can be consistently (non-stochastically) reproduced in a self-assembled structure, which further limits the types of optical devices that may be assembled. In this thesis, we address these challenges in optical self-assembling systems by exploring two approaches: how light may be used to manipulate the nanostructure and microstructure of a self-assembling system, and how the correlated nanostructure and microstructure can be used to manipulate light. We initially investigate the use of optical processing to induce structural changes in a self-assembled nanoparticle film via local heating and explore the interrelated dynamics of nanoscale reorganization and microscale rearrangement that influence the final morphology of the processed film. Then, we conduct a process study of a self-assembling system to determine the influence of processing conditions on optically relevant features of the assembled structure, including crystallite size and material heterogeneity. 
Lastly, we develop a process for assembling a diverse array of faceted structures on a surface and demonstrate that the thermodynamically determined facet angles of these structures can be used as tilted mirrors that couple light into the substrate plane. These combined research components span the role of optical interactions in self-assembling systems from start to finish: From controlling assembly conditions, to optical processing after assembly, to using a new self-assembled metamaterial as an optical device.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143923</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expansion microscopy of C. elegans : nanoscale imaging of biomolecules throughout an entire organism</title>
<link>https://hdl.handle.net/1721.1/143683</link>
<description>Expansion microscopy of C. elegans : nanoscale imaging of biomolecules throughout an entire organism
Yu, Chih-Chieh (Chih-Chieh Jay)
Expansion microscopy (ExM) enables 3-D, nanoscale-precise imaging of biological specimens by isotropic swelling of hydrogel-embedded, chemically processed tissue. Such capability raises the question of whether nanoscale mapping of biomolecules could be performed in an entire organism, which would allow super-resolution-mediated in situ analyses, such as digital quantification of biomolecules and mapping of synaptic contacts, to be performed within the context of an entire nervous system. The nematode Caenorhabditis elegans could be a suitable model for such organism-wide analyses, due to its tractable physical size, deterministic cell lineage, ease of genetic control, and well-established literature. However, C. elegans is enclosed in a chemically impermeable and mechanically tough cuticle, which could hinder the deployment of ExM. In this thesis, we present a strategy, expansion of C. elegans (ExCel), to expand fixed, cuticle-enclosed intact animals of C. elegans. ExCel enables simultaneous readout of fluorescent proteins, RNAs, DNA locations, and anatomical structures at resolutions of ~65-75 nm (3.3-3.8x linear expansion). We also developed epitope-preserving ExCel, which enables imaging of endogenous proteins stained by antibodies, and iterative ExCel, which enables imaging of fluorescent proteins at a ~25-nm resolution (20x linear expansion). We demonstrate the utility of the ExCel toolbox for multiplexed imaging of multiple molecular types, for mapping synaptic proteins, for identifying previously unreported proteins at cell junctions, and for gene expression analysis in multiple individual neurons of the same animal. 
In addition to ExCel, we discuss two other ExM-related technologies, including tetragel, which is a highly homogeneous hydrogel network that improves the nanoscale isotropy of biological ultrastructure expanded by ExM, and stochastic arrangement of reporters in clusters (STARC), which is a strategy for recording neuronal activity at a subneurite-level resolution, in densely labeled neuronal populations. Taken together, the work presented in this thesis extends the capabilities of ExM, and lays the foundation for a comprehensive, functionally and structurally informed analysis of an entire organism, which could reveal new insights in neuroscience, organismal development, and systems biology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from student-submitted PDF version of thesis. "March 2020." Date of graduation, May 2020.; Includes bibliographical references (pages 140-146).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143683</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A method for evaluating the potential of geothermal energy in industrial process heat applications</title>
<link>https://hdl.handle.net/1721.1/143660</link>
<description>A method for evaluating the potential of geothermal energy in industrial process heat applications
Packer, Michael Benjamin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143660</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Longshore currents generated by wind, tide and waves</title>
<link>https://hdl.handle.net/1721.1/143659</link>
<description>Longshore currents generated by wind, tide and waves
Ostendorf, David William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1980; Bibliography: leaves 173-175.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143659</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of hydrogen sulfide on the kinetics of hydrodenitrogenation of quinoline and its reaction intermediates in vapor phase</title>
<link>https://hdl.handle.net/1721.1/143658</link>
<description>Effect of hydrogen sulfide on the kinetics of hydrodenitrogenation of quinoline and its reaction intermediates in vapor phase
Gültekin, Selâhattin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1980; Vita.; Bibliography: leaves 269-273.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143658</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>0-10 transacetylase : control of synthesis by bacteriophage [epsilon]¹⁵ and substrate specificity of the enzyme</title>
<link>https://hdl.handle.net/1721.1/143626</link>
<description>0-10 transacetylase : control of synthesis by bacteriophage [epsilon]¹⁵ and substrate specificity of the enzyme
Keller, John Mahlon.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, Division of Biochemistry, 1966; In title on t.p., "[epsilon]" appears as the lower-case Greek letter. "September, 1966."; Includes bibliographical references (leaves 157-165).
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143626</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partition, integration, economic growth, and interregional trade: a study in the growth of interwing trade in Pakistan</title>
<link>https://hdl.handle.net/1721.1/143625</link>
<description>Partition, integration, economic growth, and interregional trade: a study in the growth of interwing trade in Pakistan
Rahman, M. Akhlaqur, 1926-1992.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1962; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143625</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive control of a robotic weld bead grinding system</title>
<link>https://hdl.handle.net/1721.1/143622</link>
<description>Predictive control of a robotic weld bead grinding system
Kurfess, Thomas Roland.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1989; Includes bibliographical references (leaves 140-142).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143622</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The enforcement of collusion in oligopoly</title>
<link>https://hdl.handle.net/1721.1/143620</link>
<description>The enforcement of collusion in oligopoly
Levine, David Knudsen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Vita.; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143620</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Host-virus interactions in Salmonella O-antigen biosynthesis</title>
<link>https://hdl.handle.net/1721.1/143518</link>
<description>Host-virus interactions in Salmonella O-antigen biosynthesis
Bray, Dennis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1967; Vita.; Bibliography: leaves 139-142.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143518</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A real time model of nitrogen-cycle dynamics in an estuarine system.</title>
<link>https://hdl.handle.net/1721.1/143504</link>
<description>A real time model of nitrogen-cycle dynamics in an estuarine system.
Najarian, Tavit Ohannes.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Vita.; Bibliography: leaves 266-271.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143504</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivariant cohomology and smooth p-toral actions</title>
<link>https://hdl.handle.net/1721.1/143503</link>
<description>Equivariant cohomology and smooth p-toral actions
Duflot, Jeanne.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1980; Vita.; Bibliography: leaves 50-51.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143503</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trojan Horses in the Marine Realm: Characterizing Protistan Parasite Ecology in Coastal Waters</title>
<link>https://hdl.handle.net/1721.1/143427</link>
<description>Trojan Horses in the Marine Realm: Characterizing Protistan Parasite Ecology in Coastal Waters
Sehein, Taylor Rae
Protists are taxonomically and metabolically diverse drivers of energy and nutrient flow in the marine environment, with recent research suggesting significant roles in global carbon cycling throughout the water column. Top-down controls on planktonic protists include grazing and parasitism, processes that both contribute to nutrient transfer and biogeochemical cycling in the global ocean. Recent global surveys of eukaryotic small subunit ribosomal RNA molecular signatures have highlighted the fact that parasites belonging to the marine alveolate order Syndiniales are both abundant and ubiquitous in coastal and open ocean environments, suggesting a major role for this taxon in marine food webs. Two coastal sites, Saanich Inlet (Vancouver Island, BC) and Salt Pond (Falmouth, MA, USA), were selected as model ecosystems to examine the impacts of syndinian parasitism on protist communities. Data presented in this thesis combine high-resolution sampling, water chemistry (including nutrients) analyses, molecular marker gene analyses, fluorescence in situ hybridization, and modeling to address key knowledge gaps regarding syndinian ecology. Information is presented on previously undescribed putative host taxa, the prevalence of syndinian parasites and infections on different hosts in coastal waters, and a framework for modeling host-parasite interactions based on field observations.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143427</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Mechanistic Evaluation of the Role of Aneuploidy During Oncogenesis</title>
<link>https://hdl.handle.net/1721.1/143426</link>
<description>A Mechanistic Evaluation of the Role of Aneuploidy During Oncogenesis
Wang, Ruoxi Wendy
Accurate chromosome segregation during cell division is critical to cellular fitness and survival. Errors during the segregation process often lead to aneuploidy, a state where cells harbor whole chromosome gains or losses. In untransformed cells, aneuploidy is highly detrimental to cell physiology, where it elicits multiple stress responses and impairs cell proliferation. Paradoxically, aneuploidy is also a hallmark of cancer and high degrees of aneuploidy in tumors are often associated with aggressive disease progression and poor prognosis. It is thus important to study the molecular mechanisms of aneuploidy during oncogenesis in order to reconcile the different effects of karyotype alteration in untransformed and cancer cells. In this thesis, we first investigate the mechanism by which untransformed cells harboring highly complex karyotypes trigger a natural killer cell-mediated immune response. We find that activation of the NF-κB pathway is responsible for such aneuploidy-associated immune clearance in vitro. We also provide evidence that potential mutations may counteract the NF-κB-mediated cytotoxicity during the cell transformation process. Second, we study the role of frequent chromosome 8 (chr8) gain in Ewing sarcoma, a pediatric bone and soft tissue tumor that is mainly driven by the EWS-FLI1 fusion oncogene. Here, we specifically investigate the molecular mechanism of one chr8 gain-driver gene, RAD21, in mitigating EWS-FLI1-induced replication stress and promoting oncogenesis. We find that the overexpression of RAD21 facilitates the resolution of transcription-replication conflicts in EWS-FLI1 expressing cells. This is achieved partially by RAD21’s recruitment to the stalled replication forks, where RAD21 interacts with DNA repair initiation proteins to promote efficient damage repair and stalled replication fork restart. In summary, our work reveals how the role of karyotype alterations during oncogenesis can be highly context-dependent. 
Whereas random aneuploidies bring a fitness penalty in normal untransformed cells, a tumor-specific aneuploidy can provide benefits for cellular fitness and lead to positive selection of specific karyotypes. Outcomes from our study point to potential therapeutic targets for the treatment of aneuploid cancers.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143426</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating the functional states of tumor-resident dendritic cells that drive productive anti-tumor immunity</title>
<link>https://hdl.handle.net/1721.1/143425</link>
<description>Elucidating the functional states of tumor-resident dendritic cells that drive productive anti-tumor immunity
Duong, Ellen
While checkpoint blockade immunotherapy (CBT) has demonstrated remarkable clinical efficacy, its success is tempered by an inability to induce responses in the majority of cancer patients. Efforts to understand mechanisms of CBT resistance have unveiled a requirement for a subset of dendritic cells (DC) called Batf3-driven conventional DC1, given their superior ability to initiate tumor-reactive cytotoxic CD8+ T cell responses. The exclusion or functional suppression of DC1 in tumors impedes CBT efficacy and enables tumor immune evasion and outgrowth. While the importance of DC1 to anti-tumor immunity has been well established, much less is known about the contribution of the other DC subsets found infiltrating tumors. Furthermore, under inflammation, DC subsets can exist in distinct functional states that differentially shape their activity. In this work, we sought to study DC states associated with productive or dysfunctional anti-tumor immune responses and dissect the signals that drive them.&#13;
&#13;
To study DC states, we compared the DC infiltrate of a spontaneously regressing tumor (MC57-SIY; productive anti-tumor immunity) with that of a progressing tumor (MC38-SIY; dysfunctional anti-tumor immunity). We identified a novel activation state of CD11b+ conventional DC expressing an interferon-stimulated gene signature (ISG+ DC) that was enriched in regressor tumors. Like DC1, ISG+ DC was capable of driving anti-tumor CD8+ T cell immunity. However, unlike cross-presenting DC1, ISG+ DC activated T cells by cross-dressing with pre-formed tumor-derived peptide-MHC complexes. We determined that constitutive tumor cell-derived type-I-interferon (IFN-I) production in regressor tumors was driving the ISG+ DC state. Ablation of tumor cell-derived IFN-I in regressor tumors led to complete loss of anti-tumor T-cell responses in mice lacking DC1. Conversely, addition of IFNβ to progressor tumors induced ISG+ DC and rescued anti-tumor T-cell responses in the absence of DC1.&#13;
&#13;
Our study highlights the untapped stimulatory potential of the DC compartment that can be harnessed to drive anti-tumor CD8+ T cell responses. In ongoing work, we are dissecting the mechanistic signals driving dysfunctional DC in progressor tumors over time. Engaging the functional states of DC or rewiring dysfunctional DC towards these functional states has the potential to strengthen anti-tumor immunity and may improve CBT responses.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143425</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Programmed Cell Death by GCN-1</title>
<link>https://hdl.handle.net/1721.1/143424</link>
<description>Control of Programmed Cell Death by GCN-1
Cho-Park, Yoon Andrew
Apoptosis is a cell death phenomenon that is fundamental to the development of an organism and to the pathogenesis of disease states. Regulation of apoptosis occurs at various molecular steps, but its regulation at the translational level remains poorly understood. Germline apoptosis, whether physiological (i.e., without stress) or stress-induced, occurs throughout eukaryotic organisms from worms to humans and may help maintain germline immortality by eliminating compromised germ cells. Therefore, the ability to regulate germline apoptosis is intricately linked with the survival and fitness of species. GCN-1, a known translational regulator, has traditionally been associated with GCN-2 in activating the Integrated Stress Response to modulate protein synthesis in the context of stress.&#13;
&#13;
In this thesis, we describe a potentially novel translational control mechanism of programmed cell death by GCN-1, pending further molecular studies. Contrary to conventional wisdom, this control occurs in a GCN-2- and Integrated Stress Response pathway-independent manner, revealing a non-canonical function of GCN-1. The present study also deepens our mechanistic understanding of the poorly understood phenomenon of germline programmed cell death, a biological process that initiates death to nurture life. The knowledge presented herein could further our understanding of human female reproductive physiology and ultimately help extend reproductive lifespan and mitigate oocyte loss and sterility caused by environmental stressors such as radiation and chemotherapy.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143424</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of cell-identity maintenance in C. elegans</title>
<link>https://hdl.handle.net/1721.1/143423</link>
<description>Regulation of cell-identity maintenance in C. elegans
Saul, Joshua C.
Multicellular organisms are composed of a huge variety of different cell types, defined by their unique morphology, function, and gene expression. In animals, these characteristics are established during development, where internal and external cues guide the determination and differentiation of distinct cell identities. Once a cell identity is established, it must be maintained for the remaining life of that cell. Maintenance requires both the continued expression of genes that define the terminally differentiated cell type and the suppression of genes that do not. Master transcriptional regulators are able to maintain the expression of genes in a terminally differentiated cell, though they can also drive the expression of additional, undesired genes in a given cell type. The mechanisms by which the expression of these undesired genes is suppressed are poorly understood. In this dissertation, I describe genetic analyses of the nematode C. elegans that have provided insights into a mechanism that cells utilize to regulate gene expression and in turn maintain the identities of terminally differentiated cell types.&#13;
	ctbp-1 is the sole worm ortholog of the C-terminal Binding Protein (CtBP) family of transcriptional corepressors. ctbp-1 mutants exhibit a disruption of normal gene expression in the bilaterally symmetric AIA interneurons. I characterized ctbp-1 mutant worms for aspects of AIA cell identity and found that ctbp-1 mutants display a progressively more severe disruption of AIA gene expression, morphology and function, defects characteristic of a loss of cell-identity maintenance. I screened ctbp-1 mutant worms for suppression of the loss of AIA cell identity and identified the transcription factor EGL-13. Characterization of egl-13 mutants suggests that CTBP-1 acts to maintain AIA cell identity in part by repressing the transcriptional activity of factors such as EGL-13, and, in the absence of CTBP-1, unregulated transcription factors drive defects in AIA gene expression, morphology, and function. My findings suggest that the repression of gene expression through the combined activity of transcriptional corepressors like CTBP-1 and transcription factors like EGL-13 might provide a mechanism through which the activation of gene expression by master transcriptional regulators is fine-tuned as part of proper cell-identity maintenance.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143423</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraining climate impact uncertainties from future aviation</title>
<link>https://hdl.handle.net/1721.1/143422</link>
<description>Constraining climate impact uncertainties from future aviation
Sanz-Morère, Inés
Environmental impacts from the aviation sector continue to grow. The sector contributes approximately 3.5% of total anthropogenic climate forcing and accounted for up to 9% of US greenhouse gas emissions from transportation in 2018. Despite the COVID-19 crisis, global air traffic is expected to grow at approximately 4% per year over the next 20 years, with a full sector recovery expected by 2024. The total impacts of aviation emissions on the climate, however, remain uncertain. This is due to factors including (i) uncertainty regarding the radiative effects of short- and long-term climate forcers; (ii) the difficulty of validating modeling tools (e.g., for contrail formation and persistence or the stratospheric chemical response to emissions); and (iii) the growing interest in new air transportation technologies such as unmanned aerial vehicles, supersonic aviation, and hydrogen and alternative fuels. Together, these factors require a persistent effort to improve the tools available for assessing aviation's environmental footprint.&#13;
&#13;
The objective of this thesis is to provide additional insights into aviation climate impacts by improving current modeling capabilities. Specifically, I aim to resolve elements that will be of increasing interest as the sector evolves. The work is divided into two parts. The first part focuses on improving climate impact estimates for contrails, the ice clouds that form behind aircraft, which are estimated to cause approximately half of the total climate forcing from aviation. The second part focuses on developing modeling tools for assessing climate impacts from future commercially viable supersonic fleets, as multiple organizations are currently designing aircraft of that type (Aerion, Boom, Spike Aerospace, NASA, Lockheed Martin, etc.).&#13;
&#13;
In the first part, I develop a new contrail radiative forcing model with a new parameterization for radiative exchanges when multiple cloud layers overlap. The parameterization also reduces uncertainties related to contrail microphysical structure. I find that, assuming maximum possible overlap, cloud-contrail overlap in 2015 increased the net radiative forcing from contrails, with the greatest effect in the North Atlantic corridor. For 2015, contrail-contrail overlap results in a 3% net reduction in the estimated radiative forcing. Finally, using in situ measurements to constrain contrail microphysical evolution pathways, I find that the global net radiative forcing due to contrails in 2015 is between 8.6 and 10.7 mW/m2. Relative to the mid-point, this uncertainty range is less than one quarter of that previously reported in the literature.&#13;
&#13;
In the second part, I estimate the sensitivity of the global supersonic market and its climate impacts to factors such as aircraft design choice, regulations, and economic assumptions. For this, I develop a detailed supersonic aircraft design model that provides robust information on how cruise altitude, fuel burn, and emissions vary with design choice. To address overland flight restrictions, I also develop a high-resolution routing algorithm capable of assessing optimal routing under multiple regulatory options. I find that, in the absence of flight path restrictions, a fleet of 130-870 supersonic aircraft could be viable, operating up to 2.5% of the seat-kilometers in the global aviation market. This would result in a net increase of up to 7% in fuel burn from commercial passenger aviation. However, between 78% and 100% of the unrestricted global market potential cannot be addressed when supersonic flight is restricted over land or over areas with a population density of more than 50 inhabitants per square kilometer. When evaluating environmental impacts, aircraft design choice can change the sign of supersonic aviation's contribution to non-CO2 climate forcing. In general, implementing supersonic aviation results in a global warming effect; however, reducing the fleet-average NOx emission index by 58%, at the cost of a 7% increase in fuel burn, can change the climate forcing from positive (warming) to negative (cooling). Designs aimed at high-value demand at the upper bound of supersonic speeds (cruise Mach number = 2.2) are the most environmentally harmful because of their higher cruise altitude and fuel burn. While my results suggest no significant viable market for such designs, a 10% fleet substitution by them would be responsible for a doubling of the global non-CO2 radiative forcing impact.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143422</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Climate-Agriculture-Water Nexus at the Regional Scale</title>
<link>https://hdl.handle.net/1721.1/143421</link>
<description>On the Climate-Agriculture-Water Nexus at the Regional Scale
Nikiel, Catherine A.
Development of large-scale agriculture is one of the most significant anthropogenic global change processes of the 20th century. In this thesis, connections between climate, agriculture, and water availability are investigated, using examples from Egypt and the Central United States.&#13;
&#13;
Agriculture reflects and impacts water availability at local, regional, and global scales. A bottom-up reconstruction of agricultural water consumption in Egypt illustrates how this process is driven by socioeconomic trends and constrained by ecological limits. This analysis shows that Egypt currently withdraws most of the Nile’s annual flow, 61.5 km3, and within the coming few years will be importing that same volume as virtual water to satisfy its growing population and economy.&#13;
&#13;
Agricultural land-use change in the Central U.S. is found to be the dominant factor shaping regional climate during the 20th century. Agricultural development (expansion, intensification, irrigation) accounts for observed July-August temperature decreases (0.2-0.3 °C) from 1920-1949 to 1970-1999 and about 30% of precipitation increases (0.2 to 0.3 mm/day). These agriculturally driven cooling and wetting trends in the historical period have led to a modification of summer (May-August) water availability (represented through Precipitation-Evapotranspiration (P-E)), with increases of about 17 mm from 1915-1944 to 1975-2004, and significant increases in summer relative humidity and rainfall over a highly productive region spanning Iowa, Illinois, Indiana, and Ohio.&#13;
&#13;
In the future, projected climate change will significantly impact these same agricultural systems, requiring adaptation of existing drainage infrastructure to manage projected excess springtime soil moisture and development of supplemental irrigation systems to manage drier summers. Temperature increases, consistent with global warming, and associated reductions in summer relative humidity are shown in multi-model ensembles of CMIP5 and CMIP6 to lead to summer drying: P-E reductions of 12 mm and 36 mm, respectively.&#13;
&#13;
Finally, this thesis investigates impacts of agriculture on the climatology of heat waves in the Central U.S. Non-irrigated (irrigated) agriculture has increased June-September average daily maximum wet-bulb temperatures by 0.3 (0.7) °C in the historical period. In the future, irrigated agricultural areas will be 0.9 °C hotter than without irrigation. These enhancements of heat stress will exacerbate projected impacts of climate change.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143421</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stimulation of chemotherapy-induced immunity by targeting IL-6 in the tumor microenvironment</title>
<link>https://hdl.handle.net/1721.1/143418</link>
<description>Stimulation of chemotherapy-induced immunity by targeting IL-6 in the tumor microenvironment
Millán-Barea, Luis R.
Tumor formation and progression involve the growth and co-evolution of neoplastic cells in combination with microenvironmental stromal components. Transformed tumor cells initiate crucial changes in healthy tissues that convert their environment into one that supports and furthers cancer development. Consequently, heterogeneous cell types within tumors and context-dependent factors can influence and shape the therapeutic responses of these diseases. These interactions promote the outgrowth of therapy-refractory malignancies, which are a recurrent and challenging problem when treating cancer patients in the clinic.&#13;
&#13;
One of the most effective emerging approaches to safeguard patients from cancer recurrence is the stimulation and mobilization of the immune system against tumor cells. To this end, preclinical studies of a group of cytostatic and genotoxic agents have shown that these drugs exert their effects on cancer cells in part by boosting the functions of immune cells. However, the vast majority of these agents do not effectively engage the immune system when used as therapies for cancer patients. Activation of innate and adaptive immune responses against cancers relies on cell-to-cell communication regulated by cytokines. These soluble factors can also generate anti-inflammatory responses depending on their concentration and timing of exposure. Thus, detrimental immunosuppressive activity promoted by cytokines is one of the context-dependent factors that inhibit tumor-specific responses.&#13;
&#13;
In my thesis, using an immune-competent mouse model of B-cell acute lymphoblastic leukemia, we describe how the microenvironmentally derived cytokine IL-6 inhibits anticancer immune responses generated by chemotherapy treatment. Specifically, we demonstrate that the absence of IL-6 from tumor microenvironments leads to enhanced T-lymphocyte responses that culminate in the generation of long-term immunologic memory. These findings reveal one of the mechanisms by which microenvironmental changes brought about by tumor cells result in therapy resistance and disease recurrence. We therefore present supporting evidence that inhibition of the IL-6 pathway in combination with immune-stimulating therapies could improve care and treatment for various oncological indications.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143418</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating the Role of BMI1 in Lung and Colon Tumor Maintenance and Progression</title>
<link>https://hdl.handle.net/1721.1/143417</link>
<description>Elucidating the Role of BMI1 in Lung and Colon Tumor Maintenance and Progression
Kuo, Elaine Yih-Shuen
Cancer is a highly dynamic disease characterized by dedifferentiation, heterogeneity, and plasticity. It is postulated that within heterogeneous tumor tissues, there exists a subpopulation of cells, termed cancer stem cells (CSCs), with the ability to initiate and support tumor growth, promote metastatic spreading, and drive relapse following chemoradiotherapy. Given the high tumorigenic potential of these cells, there is strong interest in identifying methods to specifically target and eradicate CSCs. B lymphoma Mo-MLV insertion region 1 homolog (BMI1) is an epigenetic regulator important for stem cell self-renewal and differentiation, and it is also a proto-oncogene that is overexpressed in a wide variety of cancer types. Given its role in normal stem cell biology and its overexpression in cancer, there is strong interest in inhibiting BMI1 as a method of targeting CSCs. In this thesis, I will examine the role of BMI1 in tumor initiation, maintenance, and progression in lung and colon cancer, two cancer types in which BMI1 is overexpressed. Using genetically engineered mouse models, we determine that in oncogenic KRAS-driven lung adenocarcinomas, with or without Trp53 deletion, genetic ablation of Bmi1 at tumor initiation induces a pronounced proliferation defect and consequently significant suppression of tumor development and thus extension of lifespan. In stark contrast, Bmi1 deletion in established lung adenocarcinomas does not impair tumor progression, CSC numbers or capacity, or metastatic potential, and instead, upregulates transcriptional programs associated with lung adenocarcinoma dedifferentiation and progression. Similarly, Bmi1 deletion in established colon tumors does not impair colon cancer progression. Our work demonstrates that the effects of BMI1 loss are highly dependent on the context and timing of deletion relative to tumor development, and our findings raise concern over the use of BMI1 inhibitors as cancer treatments.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143417</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic properties of the constitutive centromere associated network of proteins</title>
<link>https://hdl.handle.net/1721.1/143416</link>
<description>Dynamic properties of the constitutive centromere associated network of proteins
Navarro, Alexandra P.
Ensuring the proper transmission of genetic material across generations is fundamental to the propagation of cellular life. In each mitotic cell division, a series of highly regulated processes ensures that the genetic material of a dividing cell is properly duplicated and segregated to produce two genetically identical daughter cells. The distribution of genetic material requires the attachment of spindle microtubules to a specified region of the genome called the centromere. This attachment is mediated by a proteinaceous structure called the kinetochore. To enact its function, the kinetochore must ensure the proper assembly of its two functional domains: the inner kinetochore, referred to as the constitutive centromere associated network (CCAN), which is assembled specifically at the centromere; and the outer kinetochore, which directly interacts with spindle microtubules and is assembled strictly in mitosis. The proteins of the CCAN are characterized by their constitutive localization to the centromere throughout the cell cycle. However, the different processes that occur at each stage of the cell cycle enact distinct changes to the centromere, requiring the CCAN to adapt, and the molecular mechanisms that contribute to this reorganization have not been fully elucidated.&#13;
&#13;
In this thesis work, I demonstrate that a component of the CCAN, the CENP-LN complex, displays cell cycle-dependent kinetochore localization behavior. Although the CENP-LN complex is always present at the kinetochore, it is distinctly enriched in S-phase. Utilizing genetic manipulation and cell biological analyses in mammalian cells, I demonstrate that cell cycle-dependent phosphorylation of either CENP-L or CENP-N regulates the localization of this complex. Phosphorylation negatively affects the interaction between these two proteins and thereby prevents their localization to the kinetochore in mitosis; conversely, precluding phosphorylation does not affect their interaction but prevents their localization to the kinetochore during interphase. Additionally, in my studies of the function and behavior of the CCAN, we serendipitously identified a small 37-amino-acid peptide encoded by an alternative open reading frame within the mRNA of the CENP-R gene. This small peptide uniquely localizes to the Golgi. In my work, I utilize this short peptide sequence to identify a minimal 10-amino-acid signal sequence that is sufficient for Golgi targeting and localization.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143416</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of Urban Congestion Pricing Policies with Microsimulation of Passenger and Freight</title>
<link>https://hdl.handle.net/1721.1/143415</link>
<description>Design and Evaluation of Urban Congestion Pricing Policies with Microsimulation of Passenger and Freight
Jing, Peiyu
With rapid urbanization and increasing mobility demand, urban traffic congestion has been growing. The contribution of urban freight to congestion has also been increasing due to surging e-commerce demand. Congestion pricing is effective for urban congestion mitigation, yet knowledge about its impacts on the freight sector is insufficient. We propose to design and evaluate congestion pricing policies using integrated microsimulation of urban passenger and freight.&#13;
&#13;
First, we extend a state-of-the-art microsimulation system of urban passenger and freight for analyzing pricing strategy impacts. We integrate freight into the passenger components and incorporate cost sensitivity into the system. Specifically, we develop a freight vehicle operations planning model to capture the cost sensitivity of freight carriers in fleet tour planning.&#13;
&#13;
Next, we develop methodologies for designing location-specific, vehicle-type-specific, and time-of-day-specific urban congestion pricing policies. We also develop an evaluation methodology to systematically assess three aspects of impacts: social welfare, network level of service (LOS), and behavioral patterns &amp; logistics operations.&#13;
&#13;
Finally, we design and evaluate distance-based, cordon-based, and area-based congestion pricing policies for an auto-innovative prototype city. All policies improve total social welfare; the improvement is greatest under the distance-based policy, followed by the area-based policy. We show profiles of the passengers and shippers losing or gaining under congestion pricing. Passengers who lose have lower values of time and household incomes, whereas those who benefit have higher values of both and a higher proportion of work activities. Shippers who lose have smaller business sizes, fewer shipments, and lower shipment values, whereas those who benefit show the reverse. All policies improve LOS in the toll area, especially in the most congested subzones. The distance-based and cordon-based policies are most effective during peak periods, whereas the area-based policy is less effective then. We also reveal heterogeneous impacts on behavioral patterns, including passengers' and carriers' mode choices, departure time choices, and trip lengths, and on logistics operations, including shippers' shipment size &amp; frequency choices and carriers' vehicle tour planning. Logistics efficiency improves the most for e-commerce shipments and internal tours under the distance-based policy. These evaluation results contribute to the state of knowledge and provide insights for policy-making.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143415</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alternative Freight Contracts: Data-driven Design Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/143414</link>
<description>Alternative Freight Contracts: Data-driven Design Under Uncertainty
Acocella, Angela Josephine
We demonstrate how firms (shippers) should incorporate market uncertainty into their strategic procurement of truckload (TL) transportation services from suppliers (carriers). These direct point-to-point TL movements are a major segment of total freight volumes for shippers. As such, firms emphasize the need to form strong relationships with their carriers. This becomes difficult due to the two-sided, non-binding nature of TL contracts: shippers are not required to offer the stated contract volumes, causing uncertainty and cost escalations for carriers, and carriers are not required to accept the offered freight, forcing shippers to rely on higher-priced backup providers. These costs intensify when freight markets cycle between periods of over- and undersupply (soft and tight markets, respectively).&#13;
&#13;
We propose an empirical modeling approach utilizing large, detailed microeconomic data sets to help shippers and carriers form better contractual relationships given present uncertainties. Previous research on TL procurement and operations has predominantly taken analytical approaches due to limited availability of real-world industry data. Thus, it has been limited in addressing the three sources of uncertainty: (1) demand from shippers, (2) capacity supplied by carriers, and (3) the fluctuations in the freight market that shift the power back and forth between parties.&#13;
&#13;
This thesis provides six main contributions. First, we explicitly incorporate the three sources of uncertainty into shippers' TL transportation contracting decisions by developing empirical behavioral models. Second, we confirm when the underlying structure of the freight markets changes and impacts carrier behaviors. Third, we identify actions shippers can take to encourage contracted carriers to maintain high freight acceptance rates during tight markets. Fourth, we quantify carriers' contract price stickiness and identify which segments of a shipper's network and carrier base are most promising for a market-based contract to mitigate the negative effects of market fluctuations on cost and performance. Fifth, we determine the market-based contract designs that result in a Pareto improvement for both shippers and carriers over the traditional long-term, fixed-price contract and quantify the expected benefit to both sides. Finally, we measure the causal effect of index-based contracts on the service levels and costs shippers experience.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143414</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Garment Design Workflows for On-Demand Machine Knitting</title>
<link>https://hdl.handle.net/1721.1/143409</link>
<description>Garment Design Workflows for On-Demand Machine Knitting
Kaspar, Alexandre
Modern computerized weft knitting machines enable on-demand production of custom, whole garments at once. They reduce the need for manual post-processing and generate minimal waste. Yet their programming remains hardly accessible and is effectively done manually by a few skilled knitting technicians. Programming a knitted garment typically involves scheduling hundreds of thousands of stitches. While every individual stitch created on such machines can, in theory, be controlled digitally, the ability to effectively do so depends heavily on the programming software being sufficiently accessible to the user. Unfortunately, current knitting software is typically closed and relies mostly on low-level programming. The lack of standardization and of more accessible, higher-level design tools effectively hinders the digital, on-demand production of garments for all.&#13;
&#13;
In this thesis, I explore the design space that flat-bed, weft knitting machines span and propose novel design workflows to enable accessible, digital customization of garments created on these machines. First, I introduce the inverse design problem of automatic knitting program generation from a single image, together with a machine learning framework that enables it. Second, I describe a parametric, primitive-based design tool that merges inspirations from both computer-aided design and pixel-based image editing. Finally, I propose a novel workflow to translate traditional, sketch-based garment patterns into knitting programs. The resulting system allows anyone to harness the plethora of existing garment designs while providing knitting-specific customization capabilities.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143409</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of Active DNA Demethylation and its Role in Fertility in Arabidopsis thaliana</title>
<link>https://hdl.handle.net/1721.1/143408</link>
<description>Regulation of Active DNA Demethylation and its Role in Fertility in Arabidopsis thaliana
Pohlmann, Deborah Allison
The genome of Arabidopsis thaliana must maintain a balance between de novo methylation, maintenance of existing methylation, and active demethylation, as it uses these processes to facilitate gene expression changes throughout the lifetime of the plant. Expression of the DNA demethylase REPRESSOR OF SILENCING1 (ROS1) both depends upon this balance and perpetuates it, yet many aspects of the regulation of ROS1 remain unknown. In this work, I show that the downregulation of ROS1 in mutants of the RNA-directed DNA methylation pathway occurs at the transcriptional level and depends upon an 817-bp region in the proximal promoter of ROS1. Deletion of this region using CRISPR-Cas9 technology resulted in increased expression of ROS1 in both wildtype and methylation-deficient backgrounds, indicating that this region may be a methylation-sensitive silencer sequence. Additional deletions in the endogenous chromosome identified further regions that contain regulatory elements of ROS1. I also investigated the consequences of disturbing the balance between methylation and active demethylation by characterizing a quadruple mutant of all four members of the DEMETER family of DNA glycosylases in somatic tissues: dme;ros1;dml2;dml3 (drdd). This mutant displays an early flowering phenotype linked to downregulation of the floral repressor FLOWERING LOCUS C, concurrent with DRDD-dependent hypermethylation in its 5’ flanking region. I also characterized a low-penetrance male fertility defect in drdd mutants, which I determined is caused by a delay in anther dehiscence that could result from altered reactive oxygen species accumulation. This work has increased our understanding of the mechanisms by which ROS1 is regulated and by which active demethylation affects transcription and development of the plant.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143408</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical and physiological exploration of the nutrient sensing pathway upstream of mTORC1</title>
<link>https://hdl.handle.net/1721.1/143407</link>
<description>Biochemical and physiological exploration of the nutrient sensing pathway upstream of mTORC1
Gu, Xin
The mTORC1 (mechanistic target of rapamycin complex 1) protein kinase controls growth in response to environmental cues. Aberrant mTORC1 activity is linked to numerous diseases including cancer. Understanding the regulation of mTORC1 will facilitate therapeutic development. Amino acids promote the translocation of mTORC1 to the lysosomal surface, a process dependent on the nucleotide state of the Rag GTPases, which is regulated by several multi-component complexes. Leucine and arginine are known activators of mTORC1 with reported corresponding sensors.&#13;
&#13;
Despite years of research, two essential questions remain: 1. What other inputs impact mTORC1 activity, and how? 2. What are the physiological functions of the nutrient sensing pathway?&#13;
&#13;
We identified SAMTOR as a previously uncharacterized protein that inhibits mTORC1 signaling by interacting with GATOR1, the GTPase activating protein (GAP) for RagA/B. The methyl donor S-adenosylmethionine (SAM) disrupts the SAMTOR-GATOR1 complex by binding directly to SAMTOR with a dissociation constant of approximately 7 μM. In cells, methionine starvation reduces SAM levels below this dissociation constant and promotes the association of SAMTOR with GATOR1, thereby inhibiting mTORC1 signaling. Methionine-induced activation of mTORC1 requires the SAM binding capacity of SAMTOR. Thus, SAMTOR is a SAM sensor that links methionine and one-carbon metabolism to mTORC1.&#13;
&#13;
In parallel, I explored the physiological roles of the nutrient sensing pathway in Drosophila melanogaster. Recent work in cultured cells established Sestrin as a conserved cytosolic leucine sensor, but its role in the organismal response to dietary leucine remains elusive. I found that Sestrin null flies (Sesn-/-) fail to inhibit mTORC1 or activate autophagy upon leucine deprivation and survive poorly on a low-leucine diet. Knock-in flies expressing a leucine-binding-deficient Sestrin mutant (SesnL431E) show decreased and leucine-insensitive mTORC1 activity. Interestingly, we found that flies can discriminate between food with or without leucine in a Sestrin-dependent manner. Leucine regulates mTORC1 activity in glial cells, and knockdown of Sesn in these cells reduces the ability of flies to detect leucine-free food. Thus, nutrient sensing by mTORC1 is necessary for flies not only to adapt to, but also to detect, a diet deficient in an essential nutrient.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143407</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the role of aneuploidy in phenotypic variability</title>
<link>https://hdl.handle.net/1721.1/143406</link>
<description>Exploring the role of aneuploidy in phenotypic variability
Moomau, Christine Anne
Phenotypic variability is a noted feature of human trisomies, exemplified by the presentation of trisomy 21 (Down syndrome). The incidence and severity of clinical features are highly variable in individuals with Down syndrome. These differences have long been attributed to genetic differences within the population altering the likelihood that particular phenotypes will develop. However, work in yeast and mouse models of aneuploidy suggests that phenotypic variability can be a consequence of aneuploidy itself in the absence of genetic heterogeneity.&#13;
&#13;
By studying variability in induction of the GAL1-10 promoter in aneuploid strains of budding yeast, S. cerevisiae, we show that altering gene dosage can lead to variability. The endocytosis defect caused by a specific aneuploidy (Disome IX) is sufficient to increase variability in the GAL signaling pathway. The addition of a second copy of chromosome IX in haploid yeast increases the dosage of multiple genes involved in endocytosis. This leads to an endocytic defect that impacts the cell surface localization of hexose transporters, which ultimately leads to variability in uptake of hexose sugars and thus variability in induction of the GAL1-10 promoter.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143406</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Personalized Tolling with Long-term Objectives</title>
<link>https://hdl.handle.net/1721.1/143404</link>
<description>Real-time Personalized Tolling with Long-term Objectives
Xie, Yifei
Managed lanes are separate tolled lanes adjacent to free general-purpose lanes. The key real-time operation problem is how to set the toll for both effective network management and revenue generation, jointly considering the objectives of the operator, the travelers and the regulator. Based on a comprehensive analysis of travel behavior, this thesis develops a solution with adaptive personalized pricing.&#13;
&#13;
Travelers are observed to either predominantly use managed lanes or almost never use them. This could be attributed to two competing latent behavioral factors: preference heterogeneity, and state dependence, whereby sticking with a previously chosen option causally yields positive utility. Econometric quantification of these factors has crucial implications for pricing, but is challenging due to an endogeneity issue known as the initial condition problem. We begin by proposing a Control Function solution under a general setting, which is shown to improve upon a commonly used solution by Wooldridge. Then, applying the developed solutions to empirical data, we find that both heterogeneity and state dependence are significant in explaining the usage decision. It is further shown that ignoring unobserved heterogeneity or the initial condition problem largely overstates state dependence. Price endogeneity caused by dynamic pricing is also discovered and corrected.&#13;
&#13;
The developed behavioral model is integrated into an online personalized tolling system that incorporates prediction, optimization and personalization. In addition to optimizing the toll adaptively, an online bi-level optimization problem is formulated to jointly offer personalized discounts. A flexible multi-component objective is designed to consider not only short-term revenue and social welfare, but also the impact on future revenue based on the state-dependent choice behavior. The online personalized tolling system is deployed to a microscopic traffic simulator calibrated with real data. The results show simultaneous improvements of revenue, traffic conditions and social welfare. Equity improvement is also discovered as travelers with lower values of time are presented lower tolls.&#13;
&#13;
The developed methodologies for behavioral analysis and personalized pricing could be directly adapted for other applications in transportation and beyond.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143404</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Modeling of the Effects of Process Variations on Silicon Photonics</title>
<link>https://hdl.handle.net/1721.1/143403</link>
<description>Statistical Modeling of the Effects of Process Variations on Silicon Photonics
El-Henawy, Sally Ibrahim
The rapidly growing field of silicon photonics is an attractive research and manufacturing platform, due to its ability to enable novel functionalities. Silicon photonics leverages existing CMOS processes and fabrication infrastructure, but as a result its components suffer from the process variations present in CMOS technology. Understanding the effect of these variations requires long and repetitive simulations, largely due to the lack of variation-aware models.&#13;
&#13;
This thesis explores methodologies for the development and application of process variation-aware compact models for silicon photonics components to enable photonics design for manufacturability. We consider the effect of a number of common unavoidable process variations, including both systematic and random variations, on the behavior of key optical building blocks. We examine the effect of line edge roughness as a random process variation on different components including Y-branches and coupled resonator optical waveguides. For the Y-branch, we use ensemble simulations to develop behavioral statistical models that can predict the behavior in the presence of different line edge roughness parameters. For coupled resonator optical waveguides, we develop an S-parameter based model that predicts the behavior in the presence of different line edge roughness parameters and can be used directly in circuit simulation. We also present methods to develop S-parameter based compact models for systematic (geometric) variations in rings for both silicon and silicon nitride waveguides. The models predict the behavior much faster than full wave simulations, and give insight into the resulting performance variation to enable yield prediction and optimization. We use the developed compact models to simulate photonic integrated circuits and compare the time required with that of traditional simulation loops. We also present methods for extraction of spatial variations using variation test chip design and measurement. The spatial variations are decomposed into die-to-die and within-die variations.&#13;
&#13;
We examine modulation (electrical and thermal) as a conventional approach to account for the effect of process variations. For electrical modulation, we study typical operating condition variations it can experience and find that their effect is not as severe as typical process variations. Moreover, the power budget required to correct for process variations is calculated.&#13;
&#13;
Together, these methods are key components toward design for manufacturability approaches and serve as a basis for extended PDKs for silicon photonics. Such models and methods help increase the speed of the simulation process required in photonics integrated circuit design, and inform designers of potential design modifications to correct for process variations for high yield and performance.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143403</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disordering Capital: The Politics of Business in the Business of Water Provision</title>
<link>https://hdl.handle.net/1721.1/143402</link>
<description>Disordering Capital: The Politics of Business in the Business of Water Provision
Cruxên, Isadora Araujo
This dissertation consists of three articles that together seek to deconstruct, or disorder, monolithic treatments of “private sector” participation in the delivery of urban water and sanitation services. The studies interrogate how variation in forms of business ownership and politics not only shape public-private collaboration for service delivery over time but also contribute to re-configuring the institutions that govern service provision markets in global South contexts. Drawing on historical and ethnographic research on the development of private participation in water and sanitation provision in Brazil, my work yields three central insights. First, it illuminates how shifts in business ownership away from family-owned construction business groups towards ownership by financial investors produced a “centralizing” organizational and institutional pull in the governance of private urban water and sanitation services. Once heavily embedded in local politics, private holdings reduced subsidiary autonomy, eschewed close relations with local politicians, and mobilized for regulatory centralization. This finding problematizes the tendency within scholarship on the financialization of urban development to position financial investors as capitalizing on local forms of entrepreneurial politics, suggesting the need to consider how different investors fluidly engage with shifting market contexts. I argue that financial investors perceived centralization as an effective strategy for ensuring stable returns across consolidated operations within otherwise unstable and fragmented local political environments. Second, my work challenges the tendency to portray infrastructure investors as passive onlookers searching for institutionally-stable investment geographies. I show that private investors in Brazil’s water and sanitation sector were able to counter strong opposition and successfully lobby for a centralizing regulatory reform by constructing business power over time. 
This entailed learning from mistakes and adjusting mobilization strategies, revealing that infrastructure investors do not have fixed preferences, may learn and adapt, and can be key agents of institutional change. Finally, my research unsettles the assumption that profit maximization will override other service objectives. My comparative analysis of the long-term outcomes of different models of public-private collaboration shows that states can still shape service delivery priorities through the work of politically-appointed managers and state allies, what I call “political modulation.” This finding not only problematizes policy advice that prescribes political insulation as a strategy for improving service delivery; it also suggests that politics can play a positive role in promoting more equitable service outcomes.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143402</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Angular Resolution Beam Steering Terahertz  Antenna Arrays for Imaging Applications</title>
<link>https://hdl.handle.net/1721.1/143401</link>
<description>High Angular Resolution Beam Steering Terahertz  Antenna Arrays for Imaging Applications
Monroe, Nathan M.
Terahertz antenna arrays produce terahertz electromagnetic waves whose beams can be steered under electronic control. They are a promising technology for many applications, including imaging, radar, communications and other sensing, due to bandwidth availability, penetration of dielectric materials, and short wavelengths that enable smaller structures. A particular application is automotive radar imaging, where a narrow FMCW radar beam is swept across a scene to produce a depth image which, unlike LIDAR, is tolerant of environmental conditions such as rain and snow. However, the design of large dense THz arrays poses challenges that have limited demonstrations to hundreds of antennas, a fraction of the size required for high-resolution imaging. These challenges include THz phase shifters that are high-loss, too large for dense integration, consume large DC power, and introduce amplitude and phase errors, as well as high-loss on-chip RF power distribution, array scalability and phase control.&#13;
&#13;
The approaches taken in this work address these issues, enabling a 98x98 antenna array at 265GHz which employs passive one bit phase shifters based on two MOSFET switches. These phase shifters are low-loss, low-area and consume no DC power. A reflector array (reflectarray) architecture and in-unit memory address RF distribution and digital control challenges, and a scalable design allows for arbitrary array sizes. In-unit memory additionally enables performance-enhancing algorithms to mitigate beam squint and radiation sidelobes which improve effective resolution of a wideband FMCW radar image and enable radiation performance approaching that of ideal phase shifters. The concepts are demonstrated on-chip in a 22nm FinFET CMOS process, with a 4x4mm2 chip containing a dense 7x7 antenna array.&#13;
&#13;
An on-PCB tiling of 14x14 chips produces a 98x98 antenna array, which demonstrates electronic steering over a 120 degree window of a 1x1 degree THz beam with 42dBi of directivity, and is further enhanced by algorithmic approaches. The antenna array is employed in a radar imaging application where the high-directivity beam is used to produce 90x90 pixel radar images. This represents the largest beam-steering THz antenna array demonstrated to date, and a step towards practical solid-state THz imaging.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143401</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear Microscopy System and Protocol for Rapid Evaluation of Freshly Excised Human Tissue</title>
<link>https://hdl.handle.net/1721.1/143400</link>
<description>Nonlinear Microscopy System and Protocol for Rapid Evaluation of Freshly Excised Human Tissue
Yoshitake, Tadayuki
Histopathology, which determines disease type and condition by microscopic evaluation of biopsy and surgically excised tissue, plays a critical role in medicine. The clinical protocol for histology requires physical sectioning of tissue, preceded either by intensive chemical processing and paraffin embedding or by freezing, which limits rapid-evaluation applications. Nonlinear microscopy (NLM) is an emerging optical sectioning microscopy technology that enables histological visualization of fresh and intact human tissue without requiring physical sectioning.&#13;
&#13;
In this thesis, we developed a high-throughput, real-time NLM imaging system and protocol for intraoperative NLM evaluation of surgically excised tissue. We demonstrated the versatile imaging capability of NLM with comparative studies between NLM and other optical sectioning microscopy techniques. Interventional randomized clinical trials were designed and conducted to demonstrate the feasibility of intraoperative NLM evaluation (breast lumpectomy and radical prostatectomy), and a pilot study demonstrated the feasibility of NLM imaging of bone. The studies in this thesis were performed in close collaboration with the Beth Israel Deaconess Medical Center. This thesis aims to develop an NLM system and protocol for rapid evaluation of fresh human tissue and to design and perform clinical trials validating the efficacy of intraoperative NLM evaluation. The results suggest the practical potential of NLM as a modality to improve and transform clinical and surgical procedures.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143400</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aqueous Polysulfide Electrodes for Low-Cost Grid-Scale Energy Storage</title>
<link>https://hdl.handle.net/1721.1/143399</link>
<description>Aqueous Polysulfide Electrodes for Low-Cost Grid-Scale Energy Storage
Pan, Menghsuan Sam
With the emerging interest in large-scale long-duration energy storage for the electricity grid, sulfur redox reaction in aqueous solutions presents an attractive candidate due to its low cost and high abundance. To this end, an air-breathing aqueous polysulfide redox flow battery was demonstrated as a promising candidate, which pairs a similarly low-cost oxygen electrode with aqueous polysulfide, to meet the criteria for grid storage. Techno-economic modeling shows such a battery has one of the lowest chemical and installed costs among energy storage options, economically competitive with mechanical storage such as pumped-hydro and compressed air storage but without the geographic constraints.&#13;
&#13;
Further, studies focusing on the materials properties of aqueous polysulfide electrolytes as well as the kinetic and transport properties under various electrolyte/electrode designs were performed to elucidate the mechanisms of and limitations on aqueous polysulfide reactions. Two materials properties limit aqueous polysulfide electrolyte capacity as a redox flow electrolyte: species chemical stability and solubility limit. Even in highly alkaline environments, aqueous polysulfide is found to only exhibit chemical stability in the confined range of oxidation states between S₄²⁻ and S₂²⁻, corresponding to a quarter of the theoretical sulfur capacity. On the other hand, by allowing reversible precipitation and dissolution during cycling, the effective solubility limits can be increased. Aqueous polysulfide electrolytes cycled beyond the solubility limit are extensively examined in order to understand chemical/electrochemical stability as well as the nucleation and growth mechanism during sodium polysulfide deposition.&#13;
&#13;
The last portion of this thesis focuses on improving reaction kinetics by altering the electrode design. To address the sluggish kinetics, a percolating conductive nano-network was introduced by suspending carbon nanoparticles in the polysulfide electrolyte. Such networks improve reaction kinetics by providing high surface area for reaction. Moreover, nickel sulfide, as an exemplary electrocatalyst, was introduced into the suspension electrode in the form of nickel-coated carbon. This modification has the effect of nearly eliminating the nucleation barrier for sodium polysulfide deposition.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143399</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Facing the Coded Gaze with Evocative Audits and Algorithmic Audits</title>
<link>https://hdl.handle.net/1721.1/143396</link>
<description>Facing the Coded Gaze with Evocative Audits and Algorithmic Audits
Buolamwini, Joy
Increasing numbers of real-world examples show that algorithmic systems can amplify racism, sexism, ableism, and other intersecting forms of discrimination. I created the term “the coded gaze” to describe the ways in which discrimination and other harms are embedded into algorithmic systems. Algorithmic audits have emerged as a mechanism to surface the technical limitations of these systems that have negative social implications, yet these audits by themselves do not necessarily lead to stakeholder engagement that can mitigate algorithmic harms. Algorithmic audits also provide evidence of systematic discrimination in algorithmic systems using decontextualized metrics, but these by themselves do not show the human costs at play. In this thesis, I conceptualize the evocative audit as an approach to humanizing the negative impacts that can result from algorithmic systems. I introduce the counter-demo as a necessary component of evocative audits that allows others to bear witness to issues created by algorithmic systems of interest. I build on the insights of Black women intellectuals to show how Black feminist epistemology, intersectionality, and the outsider within standpoint provide avenues for shaping audits of algorithmic systems that reach everyday people. &#13;
&#13;
I present two case studies of an impactful algorithmic audit (Gender Shades) and an evocative audit (AI, Ain’t I A Woman?) that have reached over 1 million people worldwide. The cases show how in combination, algorithmic audits and evocative audits provide complementary evidence to reach stakeholders beyond academic circles and increase public awareness about algorithmic harms. I outline strategic considerations about key strengths, limitations, and risks of each audit type to offer a roadmap for fellow artists, advocates, and academics who aim to combine performance metrics and performance arts to push for algorithmic justice.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143396</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Passive Acoustic Detection, Localization, and Tracking Applied to Unmanned Underwater Vehicles</title>
<link>https://hdl.handle.net/1721.1/143395</link>
<description>Advances in Passive Acoustic Detection, Localization, and Tracking Applied to Unmanned Underwater Vehicles
Kita, Kristen Railey
Detection, classification, localization, and tracking (DCLT) of unmanned underwater vehicles (UUVs) in the presence of shipping traffic is a critical task for passive acoustic harbor security systems. In general, vessels can be tracked by their unique acoustic signature due to machinery vibration and cavitation noise. However, cavitation noise of UUVs is considerably quieter than that of ships and boats, making detection significantly more challenging. In this thesis, I demonstrated that it is possible to passively track a UUV from its high-frequency motor noise using a stationary array in shallow-water experiments with passing boats. First, causes of high frequency tones were determined through direct measurements of two UUVs at a range of speeds. From this analysis, common and dominant features of noise were established: strong tones at the motor’s pulse-width modulated frequency and its harmonics. From the unique acoustic signature of the motor, I derived a high-precision, remote sensing method for estimating propeller rotation rate. In shallow-water UUV field experiments, I demonstrated that detecting a UUV from motor noise, in comparison to broadband noise from the vehicle, reduces false alarms from 45% to 8.4% for 90% true detections. Beamforming on the motor noise, in comparison to broadband noise, improved the bearing accuracy by a factor of 3.2×. Because the signal is also high-frequency, the Doppler effect on motor noise is observable and I demonstrate that range rate can be measured. Furthermore, measuring motor noise was a superior method to the “detection of envelope modulation on noise” algorithm for estimating the propeller rotation rate. Extrapolating multiple measurements from the motor signature is significant because bearing-Doppler-RPM measurements outperform traditional bearing-Doppler target motion analysis. 
In the unscented Kalman filter implementation, the tracking solution accuracy for bearing, bearing rate, range, and range rate improved by factors of 2.2×, 15.8×, 3.1×, and 6.2×, respectively. These findings are significant for improving UUV localization and tracking, and for informing the next generation of quiet UUV propulsion systems.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143395</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Human Brain Organoids for Studying Malignant Cell States and Intercellular Communication in Human Glioma</title>
<link>https://hdl.handle.net/1721.1/143394</link>
<description>Human Brain Organoids for Studying Malignant Cell States and Intercellular Communication in Human Glioma
Mangena, Vamsi
Human glioma is an incurable cancer of the central nervous system. Recent advances in glioma biology have highlighted the heterogeneity and inter-cellular communication within these tumors; it follows that appropriate models for studying glioma behavior and progression – and, in turn, therapeutic avenues – must adequately recapitulate these key features. Indeed, patient-derived tumor xenografts (PDXs) are, for this reason, attractive and widely used in vivo model systems for studying gliomas, despite their limitations. The development of complementary (and currently non-existent) in vitro glioma models that better capture the molecular and phenotypic spectrum of the corresponding human tumor would enable reliable disease modeling and therapeutic testing at unprecedented scale and spatiotemporal resolution, potentially leading to much-needed breakthroughs for the field.&#13;
&#13;
The compartmentalization and emergent phenotypes of human gliomas are determined, in large part, by cooperative interactions between the intrinsic features of malignant cells and the tumor microenvironment. In this regard, a fundamental limitation of current in vitro glioma models (e.g., gliomaspheres) is the lack of appropriate environmental cues, leading to a prohibitively reductionist or skewed representation of the disease. In recent years, human brain organoids have emerged as promising 3D, in vitro model systems for partially recreating the cellular composition and function of the human brain. In the context of this research, human brain organoids represent a potential construct through which to provide 3D, human-specific environmental cues to patient-derived glioma cells, at once addressing a significant limitation of current in vitro glioma models.&#13;
&#13;
In this thesis, we describe our efforts to develop glioma-brain organoid models for a variety of glioma subtypes and applications. In the first section, we show the technological feasibility of growing pediatric and adult glioma-brain organoid models, including those involving otherwise intractable IDH-mutant gliomas. In the second section, we focus on IDH-WT glioma and show that human brain organoids induce a spectrum of malignant cell states that are more faithful to human glioma than matched gliomasphere models. Finally, in the third section, we show evidence of intercellular communication between malignant and non-malignant cells in glioma-brain organoid models, demonstrating a functionally integrated tumor microenvironment. Collectively, this thesis represents a major advance in the in vitro modeling of human glioma for eventual therapeutic testing.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143394</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory and Mechanistic Studies of Volatile Organic Carbon Oxidation Systems in the Atmosphere</title>
<link>https://hdl.handle.net/1721.1/143393</link>
<description>Laboratory and Mechanistic Studies of Volatile Organic Carbon Oxidation Systems in the Atmosphere
Moss, Joshua A.
Volatile Organic Compounds (VOCs) oxidize in the troposphere and significantly influence the formation of pollutants including ground-level ozone, CO2, and particulate matter (PM). Ozone and PM negatively impact human health, and all three pollutants influence Earth’s climate. VOCs also dominate the OH reactivity of the atmosphere, which in turn influences concentrations of other important radical species including NOx and HO2. Chamber experiments are often conducted to measure VOC oxidation in a controlled laboratory setting, but these studies may be complicated by vapor deposition on chamber surfaces and potential VOC decomposition in the Chemical Ionization Mass Spectrometers (CIMS) used to measure a broad range of oxidation products. Mechanistic simulations are also frequently performed to emulate chamber chemistry with less effort and fewer complications than may arise during a chamber experiment, but the results of these simulations are limited by uncertainties and gaps in our understanding of VOC oxidation chemistry from empirical studies. This thesis addresses uncertainties in chamber measurements and mechanisms and uses both in tandem to provide mutual benefits. Chapter 2 focuses on the development and characterization of a Total Suspended Carbon (TSC) apparatus which may be used to parametrize chamber vapor deposition. Chapter 3 centers on the development of new methodology to compare carbon closure chamber datasets and mechanistic datasets using GECKO-A as the base mechanism. Comparisons suggest a propensity for the decomposition of nitrate, peroxyacyl nitrate, alcohol, and aldehyde functional groups in the process of being detected by CIMS, so the final comparison methodology is based on carbon number and average carbon oxidation state distributions, which are largely unaffected by decomposition. 
Chapter 4 uses the methodology from Chapter 3 to investigate how targeted edits to the GECKO-A mechanism generator affect its overall agreement with chamber observations for α-pinene, isoprene, and 1,2,4-trimethylbenzene oxidation studies. This chapter highlights reaction pathways of particular importance for each VOC oxidation system and provides new methods to target pathways and specific reactions for further study. Overall, this thesis provides broadly applicable new tools to reduce uncertainty and improve chemical understanding of VOC oxidation systems.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143393</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Iron Garnet Thin Films for Integrated Photonics and Spintronics</title>
<link>https://hdl.handle.net/1721.1/143392</link>
<description>Iron Garnet Thin Films for Integrated Photonics and Spintronics
Fakhrul, Takian
In pursuit of beyond-CMOS computing, this thesis aims to provide material solutions for both photonic interconnects and ultra-fast spintronic memory. Iron garnet thin films are uniquely suited for both applications as they have: 1) a strong magneto-optic (MO) response and low optical absorption at communication wavelengths, making them enablers for the nonreciprocal devices essential in photonic integrated circuits; and 2) low Gilbert damping and perpendicular magnetic anisotropy (PMA) that promote fast domain wall (DW) dynamics, making them exciting candidates for next-generation spintronic memory.&#13;
&#13;
We first study polycrystalline in-plane magnetized iron garnets for optical isolation. In the first successful demonstration of top-down crystallization, polycrystalline BiYIG/YIG films on Si exhibit a record-high MO figure of merit (FoM) of up to 770° dB⁻¹ at 1550 nm wavelength. Growth of single-phase BiYIG on the sidewalls of waveguides is also demonstrated, which can be used in transverse electric-mode devices. Tb₃Fe₅O₁₂ (TbIG), CeTbIG, and BiTbIG films are grown directly on Si substrates without any seed layers. The Faraday rotation at 1550 nm of the Bi₀.₀₃TbIG films is 6200 ± 300° cm⁻¹, the highest reported for polycrystalline films, and absorption can be engineered by composition control that may reduce Fe²⁺ and Tb⁴⁺ absorption pathways. We then study single-crystal BiYIG with PMA and ultra-low Gilbert damping on the order of 1.3 × 10⁻⁴. These films exhibit record spin-orbit torque-driven domain wall (DW) velocities of up to 4300 m/s, but require an in-plane field. We show that the Dzyaloshinskii–Moriya interaction (DMI) can be introduced in heterostructures of BiYIG and TmIG, and report the first proof-of-concept of field-free current-induced DW motion in Pt/BiYIG/TmIG stacks, as well as the formation of room-temperature skyrmions.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143392</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protease activated nanosensors for the noninvasive diagnosis of community-acquired pneumonia</title>
<link>https://hdl.handle.net/1721.1/143391</link>
<description>Protease activated nanosensors for the noninvasive diagnosis of community-acquired pneumonia
Anahtar, Melodi N.
Community-acquired pneumonia (CAP), an infection of the lung parenchyma, is the most common infectious cause of death worldwide. It is a heterogeneous disease caused by a wide range of bacteria, viruses, and occasionally fungi. An arsenal of diagnostic tools has been developed to detect the many causes of CAP, yet in approximately 60% of CAP patients the causative organism is never determined. As such, the standard of practice for treating suspected CAP is to administer antibiotics as soon as possible, because existing diagnostics cannot determine the etiology of disease quickly and accurately enough to warrant withholding treatment. This diagnostic and therapeutic paradigm is out of touch with the approaches to personalized medicine being taken for other diseases such as cancer, and an urgent need exists for an accurate and rapid pneumonia diagnostic that can simultaneously detect CAP and stratify etiology.&#13;
&#13;
To address this gap, in this work we have developed two novel approaches to diagnosing CAP. Both approaches leverage differential protease expression by the host in response to pneumonia-causing pathogens to detect pneumonia and stratify etiology. To this end, we first derived a 40-gene signature from human transcriptomic data, which consisted of protease biomarkers for pneumonia. We then used our lab’s activity-based nanosensor (ABN) technology to create a 20-plex panel of nanoparticles that could produce urinary signatures of disease state in response to the activity of a subset of these proteases. We validated that this panel could generate unique urinary signatures of disease in five in vivo mouse models of CAP within two hours of sensor administration. Using these signatures, we trained diagnostic classifiers to distinguish healthy mice from those with bacterial and viral pneumonia with high accuracy. To produce an even faster readout, we then modified these ABNs with volatile reporters to create breath-based volatile activity-based nanosensors (vABNs), and demonstrated that these sensors could detect pneumonia within 15 minutes of administration. Altogether, these nanosensors enable urine- and breath-based detection of CAP, and constitute a means of diagnosing pneumonia that is orthogonal to existing clinical tests, thus opening a new direction of study for pneumonia diagnostics.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143391</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive experience-dependent plasticity in mouse primary visual cortex</title>
<link>https://hdl.handle.net/1721.1/143389</link>
<description>Passive experience-dependent plasticity in mouse primary visual cortex
Hayden, Dustin Jared
Stimulus-selective response plasticity (SRP) is a form of experience-dependent plasticity readily measured in primary visual cortex (V1) of mice. Chronic local field potential (LFP) recordings in layer 4 (L4) of V1 allow for the tracking of visually evoked potentials (VEPs) in response to phase-reversing sinusoidal grating stimuli. As a given visual stimulus becomes familiar to the mouse, the VEP magnitude increases. This increase in VEP magnitude is highly selective to stimulus features, such as the orientation, spatial frequency, and contrast of the grating. Previous work has shown that SRP requires not only synaptic mechanisms that are hallmarks of Hebbian synaptic plasticity, but also the engagement of parvalbumin-positive (PV+) inhibitory interneurons. Herein we build upon this foundational work and show that SRP expression can be explained by the engagement of two different interneuron subclasses: somatostatin-positive (SOM+) and PV+ cells. Familiar visual stimuli induce an increase in low-frequency (10-30 Hz) oscillations and an increase in SOM+ cell activity in L4. Conversely, novel visual stimuli induce an increase in high-frequency (60-80 Hz) oscillations and an increase in PV+ cell activity in L4. These differences in oscillations and cell activity in response to familiar and novel stimuli emerge in the seconds after the start of a block of stimuli. Finally, using laminar recordings in V1, we show that familiar stimuli cause elevated peak firing throughout most layers compared to novel stimuli, but reduced overall activity due to rapid attenuation of the evoked signal. Together, these data further develop our understanding of experience-dependent plasticity.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143389</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strong Light-Matter Interaction with Cavities for Quantum Information Processing</title>
<link>https://hdl.handle.net/1721.1/143388</link>
<description>Strong Light-Matter Interaction with Cavities for Quantum Information Processing
Choi, Hyeongrak
Strong light-matter interaction has enabled many modern technologies such as precision metrology, light sources, optical communication, and spectroscopy. Normally, the coupling between materials and photons in free space is weak, limiting the performance of these applications. Photonic cavities can boost this interaction by holding and confining photons in a small space. On the other hand, quantum information science demands extremely strong light-matter interaction. Qubits, the smallest units of quantum information, need to be isolated from the environment to preserve their fragile superposition states. Often, single photons mediate the interaction between distant isolated qubits due to their low loss. Because only a single photon and a single qubit are involved, the interaction is naturally very weak and needs to be enhanced using photonic cavities.&#13;
&#13;
We propose two methods for enhancing light-matter interaction. In the first method, we focus on "electric" light-matter interaction in a dielectric cavity. We engineer electromagnetic boundary conditions to locally increase the electric energy density. We illustrate the design concept with a silicon nanobeam photonic crystal cavity reaching an ultrasmall mode volume a hundred thousand times smaller than the diffraction limit. In the second method, we study "magnetic" light-matter interaction in a metallic cavity. The three mode-engineering techniques presented extend existing cavity designs and show how to further increase the interaction. Notably, in both cases, we show that the mode volume can be arbitrarily small, limited only by materials or fabrication methods. The ultrastrong light-matter interaction opens the door to quantum networks, the quantum internet, and quantum computing.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143388</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneous Ultrafast Energy Transfer in Photosynthetic Proteins</title>
<link>https://hdl.handle.net/1721.1/143386</link>
<description>Heterogeneous Ultrafast Energy Transfer in Photosynthetic Proteins
Moya III, Raymundo
Photosynthesis begins with the absorption and rapid transport of energy through a protein network to reach the reaction center, where conversion to chemical energy occurs. The energy transport is facilitated through a series of energy transfer events between pigments bound to a protein backbone. These steps exhibit a remarkable near-unity quantum efficiency. Energy transfer rates are exquisitely sensitive to intermolecular distances, which vary within proteins due to thermal fluctuations of the structure. As a result, models predict that energy transfer rates vary dramatically between proteins, producing some instances of inefficient energy transfer. Reconciling this high efficiency with the presence of thermal fluctuations has not been possible, because it has not been measured whether energy transfer rates in photosynthetic proteins actually vary and, if so, by how much.&#13;
&#13;
Previous experiments to measure rates relied on ensemble ultrafast spectroscopy, which is limited to average values. With single-molecule approaches, we can overcome the limitations of ensemble averaging by measuring individual proteins. However, single-molecule experiments have been primarily restricted to fluorescence, which occurs on a nanosecond timescale. The limited knowledge provided by ensemble averaging or nanosecond dynamics cannot describe heterogeneity in femtosecond energy transfer rates. Therefore, direct measurements of the distribution are required.&#13;
&#13;
In this thesis, we expand upon a novel technique, single-molecule pump-probe spectroscopy (SM2P), which measures energy transfer rates at the single-molecule level. In Chapter 2, I describe a newly developed SM2P apparatus that is easily tunable across the visible region and demonstrate its utility on a fluorescent dye. In Chapter 3, I expand upon these results by applying SM2P to photosynthetic light-harvesting systems, where we are able to directly measure the distribution of energy transfer rates on cyanobacterial light-harvesting subunits. These novel experiments provide insight into how photosynthetic light-harvesting proteins are robust to thermal fluctuations and tightly regulate energy transfer rates. Lastly, in Chapter 4 I present the first single-molecule experiments on light-harvesting complexes from cryptophyte algae. The results imply that light-harvesting complexes from cryptophytes maintain the ability to regulate light harvesting through quenched states that prevent excess sunlight from damaging the light-harvesting machinery.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143386</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Network Topology in Photopolymer Networks for Additive Manufacturing</title>
<link>https://hdl.handle.net/1721.1/143385</link>
<description>Control of Network Topology in Photopolymer Networks for Additive Manufacturing
Qin, Ke
Chapter 1. The introduction provides an overview of additive manufacturing as well as the material systems and photochemical processes that enable additive manufacturing. An emphasis is placed on work done to regulate free radical polymerization via addition-fragmentation chain transfer (AFCT) and reversible addition-fragmentation chain transfer (RAFT), and their applications in photopolymer networks to improve material properties and introduce novel functional materials to additive manufacturing.&#13;
&#13;
Chapter 2. In this work, we incorporate a RAFT agent into a crosslinker to make chain-transferring crosslinkers, or transferinkers, and investigate their effect on photopolymer networks. Transferinkers were shown to improve the tensile toughness of acrylic photopolymer resins by up to 100% without incurring significant loss in strength. In addition, we exploited the unique reactivity of the RAFT agent to induce accelerated degradation of the thermoset acrylic network. Reduction in kinetic chain length caused by transferinkers is observed experimentally and in simulations. As such, transferinkers are shown to be an effective and translatable strategy for developing sustainable photopolymer networks with improved mechanical properties.&#13;
&#13;
Chapter 3. We report the isolation and structural elucidation of 3,3,8,8-tetramethyl-1-oxa-4,6,9-trithiaspiro[4.4]nonane-2,7-dione, an unexpected product obtained during the synthesis of functionalized trithiocarbonate RAFT agents. A spirocyclic structure was proposed, and confirmed with X-ray crystallography. This molecule can undergo ring opening when treated with a nucleophilic amine to form a trithiocarbonate, which can be used to mediate RAFT polymerization. Potential applications in synthesizing polymers with defined head groups are also explored.&#13;
&#13;
Chapter 4. We report a new class of supramolecular polymer metal-organic cage (polyMOC) gels based on the assembly of Cu24L24 cuboctahedra. We demonstrate how these polyMOCs can be reversibly photoswitched between three oxidation states (Cu(II), Cu(I), and Cu(0)) that each give rise to unique properties. The Cu(II) polyMOC can also be applied to direct ink writing (DIW) 3D printing. Cu(II) polyMOC containing polymeric azide and alkyne precursors can be extruded, where the mechanically robust polyMOC serves as a template. Subsequent photoswitching to the Cu(I) state crosslinks the precursors, and re-oxidation provides MOC-interpenetrating networks (MINs). In addition, the Cu(II) polyMOC can be completely removed with competing ligands to expose the nascent covalent network.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143385</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and Computational Methods for Shortwave Infrared Imaging</title>
<link>https://hdl.handle.net/1721.1/143384</link>
<description>Experimental and Computational Methods for Shortwave Infrared Imaging
Saif, Mari
Optical imaging methods help researchers interrogate complex biological and medical processes using light, usually in the visible region (400-700 nm). We explore visualizing the state of liver diseases in living mice using non-invasive near infrared (NIR, 700-900 nm) and shortwave infrared (SWIR, 900-1700 nm) autofluorescence imaging. NIR/SWIR imaging makes use of recently developed, high-sensitivity cameras and relatively low-energy NIR excitation, which is less destructive to and penetrates deeper through biological tissue than conventional ultraviolet or visible light. Detecting longer-wavelength NIR/SWIR autofluorescence emission takes advantage of the maximal transparency of biological tissue at these wavelengths and could also enable greater specificity to disease-associated signal, as very few materials in healthy tissue autofluoresce at these wavelengths. We extend the imaging techniques by incorporating background and shading correction methods from a suite of computer vision tools to determine autofluorescence signal levels in brain tissue, which consists of highly complex and varied cell types, helping us understand the applicability of our imaging techniques combined with advanced image processing. In addition, we further investigate immunofluorescence methods with the incorporation of NIR/SWIR autofluorescence as a lipofuscin-specific channel to digitally remove autofluorescence from multi-fluorophore immuno-stained mouse liver samples. We also explore color deconvolution in histopathology imaging, and develop algorithms to support automated thresholding and segmentation for more accurate autofluorescence quantification. We present the development of NIR/SWIR experimental methods and computer vision processes toward extending NIR/SWIR imaging to pre-clinical and clinical settings for studying disease progression and regression.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143384</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>How short, degenerate motifs across the human proteome recognize the actin remodeling factor ENAH</title>
<link>https://hdl.handle.net/1721.1/143383</link>
<description>How short, degenerate motifs across the human proteome recognize the actin remodeling factor ENAH
Hwang, Theresa
Protein-protein interactions form the backbone of the myriad signaling pathways that cells use to respond to their environments. A central class of protein-protein interactions in eukaryotic interactomes are those that occur between short linear motifs (SLiMs) and structured domains. SLiMs are stretches of 3-10 residues found in intrinsically disordered regions of proteins. Given their degeneracy, short length, and weak affinities, it is not immediately clear how such low complexity interaction elements are able to discriminate amongst the many homologous SLiM-binding domains within the cell.&#13;
&#13;
Prior work on SLiMs has focused primarily on the “core motif”, or the minimal set of amino acids necessary to bind a given domain. However, SLiMs are embedded in the context of larger proteins. In this thesis, I highlight the critical role of the sequence context surrounding SLiMs in modulating binding to the SLiM-binding Ena/VASP EVH1 domain family. I identified three mechanisms by which adjacent and distal sequence elements surrounding native EVH1-binding SLiMs enhance binding affinity and specificity to ENAH, an Ena/VASP paralog. I also determined the strategies employed by EVH1 domains to recognize these extended SLiMs. Notably, the ENAH EVH1 domain leverages a network of dispersed residues to adopt a conformation that the Ena/VASP paralogs VASP and EVL cannot access. An extended SLiM in the cilia-associated protein PCARE exploits this conformation to bind ENAH with high specificity and affinity. I used this information to design selective, tight peptide inhibitors that could disrupt ENAH-mediated interactions in mammalian cells.&#13;
&#13;
Collectively, my work highlights the importance of studying SLiMs in their native context, motivating the application of my work to other SLiM-binding domain families.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143383</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concentration-dependent Splicing through Suboptimal Motifs Enables Waves of Gene Regulation in Neuronal Development</title>
<link>https://hdl.handle.net/1721.1/143382</link>
<description>Concentration-dependent Splicing through Suboptimal Motifs Enables Waves of Gene Regulation in Neuronal Development
Begg, Bridget E.
Alternative splicing, which occurs in over 95% of human genes, is the process by which exons are differentially included in transcripts from the same gene, producing a variety of transcript isoforms. This mode of post-transcriptional gene regulation endows the 20,000 protein-coding genes in the human genome with tunable expression, manifold protein–protein interactions, and an additional layer by which to regulate protein localization, activity, and signal response. Increasingly, alternative splicing is understood to be a significant contributor to organismal complexity in animals.&#13;
&#13;
The Rbfox family of splicing factors regulates alternative splicing during animal development and in disease, impacting thousands of exons in the maturing brain, heart, and muscle. Although it is well established that Rbfox binds to the RNA sequence GCAUG with high affinity and specificity, this motif is responsible for only half of the Rbfox binding sites observed in cellular and neuronal contexts. We incubated recombinant RBFOX2 with over 60,000 mouse and human transcriptomic sequences to reveal substantial binding to several moderate-affinity, non-GCAYG sites at a physiologically relevant range of RBFOX2 concentrations. We find that these “secondary motifs” bind Rbfox robustly in cells and that several together can exert regulation comparable to GCAUG in a trichromatic splicing reporter assay. In the brain, secondary motifs regulate RNA splicing in neuronal development, enabling a second wave of splicing changes as Rbfox levels increase. Additionally, secondary motifs are activated in neuronal subtypes according to cellular gene expression levels of Rbfox, contributing to tissue diversity in the mammalian brain.&#13;
&#13;
This work presents the first observation of spatiotemporal regulation through suboptimal motifs in RNA-binding proteins, a phenomenon that may be widespread. Furthermore, the characterization of Rbfox secondary motifs reveals new regulatory targets of an essential splicing factor, which may contribute to mammalian brain development, synaptic plasticity, and human disease.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143382</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formation Process of Acoustophoretic Patterns</title>
<link>https://hdl.handle.net/1721.1/143380</link>
<description>Formation Process of Acoustophoretic Patterns
Wang, Yi Jenny
The acoustic radiation force is a nonlinear wave effect that exerts a non-zero time-averaged force on objects in the wave. In recent years, the acoustic radiation force has been widely explored for manipulating objects and small particles. These applications range from moving a few objects at a time, to acting on a continuous flow of particles, to arranging many particles into patterns. In particular, using the acoustic radiation force to move embedded materials into patterns provides many opportunities for advanced manufacturing and 3D printing of composite materials.&#13;
&#13;
There are three key considerations in acoustophoretic patterning: 1) the overall pattern geometry, 2) the quality of the final pattern, and 3) the time required to make the pattern. While much work in the literature has been devoted to increasing the pattern geometries that can be made using acoustophoresis, the effort is largely still in the proof-of-concept stage, and the final pattern quality is often only evaluated qualitatively. There is also a need to measure the pattern formation process so that the inherent design tradeoffs can be quantified. This thesis quantifies the tradeoff between pattern quality and patterning time by using the local microsphere concentration to describe the pattern formation process. An existing method of measuring the absolute magnitude of the acoustic radiation force (ARF) by particle tracking is extended to 2D, which facilitates mapping the patterning device. Finally, this thesis exploits the design tradeoff between patterning time and hardware complexity, sequencing multiple frequencies to create acoustophoretic patterns with non-uniform spacing without increasing hardware complexity.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143380</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering physico-chemical interactions across drug delivery, agriculture and carbon capture</title>
<link>https://hdl.handle.net/1721.1/143379</link>
<description>Engineering physico-chemical interactions across drug delivery, agriculture and carbon capture
Jayaprakash, Vishnu
The interface between two phases often limits the efficiency of several phenomena. In drug delivery, viscous formulations are difficult to inject through medical needles as the no-slip boundary condition between the needle and the viscous drug product greatly resists fluid flow. In agriculture, the inherent water repellency of plant surfaces causes pesticide sprays to bounce off, resulting in enormous waste and environmental pollution. In post-combustion carbon capture, absorbing gaseous CO₂ into liquids remains prohibitively expensive as reaction rates are limited by the low interfacial areas between the flue gas and absorbents in current systems. This work explores how introducing new interfaces or interfacial forces can help solve these three challenges. First, we demonstrate viscosity-agnostic injectability of drug formulations through needles using core annular flows, where the transport of a highly viscous fluid through a needle is enabled via coaxial lubrication by a less viscous fluid. Second, by cloaking spray droplets in minute quantities of plant oils (≤ 1 wt%), we enhance energy dissipation during droplet impact on hydrophobic surfaces and demonstrate a 5× reduction in pesticide waste on a variety of plant leaves. Finally, mist-scale droplets and space charge injection are used to enhance interfacial areas in CO₂ absorption and develop a carbon capture system that could lead to a 2.6× reduction in plant capital costs.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143379</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Homological Algebra to Equational Theories</title>
<link>https://hdl.handle.net/1721.1/143377</link>
<description>Applications of Homological Algebra to Equational Theories
Ikebuchi, Mirai
It is well-known that some equational theories such as groups or Boolean algebras can be defined by fewer equational axioms than the original axioms. However, it is not easy to determine whether a given set of axioms is the smallest. Malbos and Mimram investigated a general method to find a lower bound on the cardinality of any set of equational axioms (or rewrite rules) that is equivalent to a given equational theory (or term rewriting system), using homological algebra. Their method is an analog of Squier’s homology theory on string rewriting systems. In this dissertation, I further develop the homology theory for term rewriting systems and provide a better lower bound under a stronger notion of equivalence than theirs.&#13;
&#13;
Also, the same methodology applies to equational unification, the problem of solving an equation modulo equational axioms. I establish a relationship between equational unification and homological algebra for equational theories. I construct abelian groups associated with equational theories. Then, the main theorem gives a necessary condition for equational unifiability, described in terms of these abelian groups and the homomorphisms between them.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143377</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning human beliefs with language models</title>
<link>https://hdl.handle.net/1721.1/143376</link>
<description>Learning human beliefs with language models
Chu, Eric
Measuring explicitly expressed beliefs, whether stated on online platforms or captured in public opinion polls, can shed insight into present and future societal patterns of behavior. Mass media and social media both reflect and help form public opinion, which can ultimately lead to positive outcomes such as civic engagement across divides, but also negative outcomes such as non-adherence to health-beneficial social guidelines. Understanding viewpoints expressed online and the relationship between media content and beliefs is increasingly pertinent today, in a world of constant connectivity and shrinking common ground.&#13;
&#13;
This dissertation introduces new deep neural language model-based approaches for capturing beliefs reflected in and formed by media. In part one of this dissertation, we introduce a model for automatically summarizing multiple documents about the same subject, which we apply to opinionated posts found on popular review websites. Summaries can help organize large amounts of often siloed information, and help people understand the most salient viewpoints from people in different communities. In contrast to typical approaches that require large, labeled datasets, our method is the first unsupervised model for abstractive multi-document summarization.&#13;
&#13;
In part two of the dissertation, motivated by the effects of information in the COVID-19 pandemic, we introduce an approach for using “media diet models”, which can act as proxies for human media consumption. By probing these models, we can predict public opinion as measured by nationally representative surveys. We validate our method in two domains: attitudes towards COVID-19 and consumer confidence, and show that the approach is valid and intuitive in a number of ways. We find that it is robust and has predictive power across mediums and outlets, has increased predictive power when people are paying more attention to news, and may capture duration effects of media consumption. These results both provide insight into a driving force of human belief formation and suggest practical implications for pollsters, public health officials, and policymakers moving forward.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143376</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Search and Representation in Program Synthesis</title>
<link>https://hdl.handle.net/1721.1/143375</link>
<description>Search and Representation in Program Synthesis
Nye, Maxwell
Building systems that can synthesize programs from natural specifications (such as examples or language) is a longstanding goal of AI. Building such systems would allow us to achieve both scientific and practical goals. From a scientific perspective, program synthesis may provide a way to learn compact, generalizable rules from a small number of examples, something machine learning still struggles with, but humans find easy. From a practical perspective, program synthesis systems can assist with real-world programming tasks, from novice end-user tasks (such as string editing or repetitive task automation) to expert tasks such as software engineering.&#13;
&#13;
In this work, we explore how to build such systems. We focus on two main interrelated questions: 1) When solving synthesis problems, how can we effectively search in the space of programs and partially constructed programs? 2) When solving synthesis problems, how can we effectively represent programs and partially constructed programs?&#13;
&#13;
In the following chapters, we will explore these questions. Our work has centered around the syntax and the semantics of programs, and how syntax and semantics can be used as tools to assist both the search and representation of programs and partial programs. We present several algorithms for synthesizing programs from examples, and demonstrate the benefits of these algorithms over previous approaches.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143375</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relational compilation: Functional-to-imperative code generation for performance-critical applications</title>
<link>https://hdl.handle.net/1721.1/143374</link>
<description>Relational compilation: Functional-to-imperative code generation for performance-critical applications
Pit-Claudel, Clément
Purely functional programs verified using interactive theorem provers typically need to be translated to run: either by extracting them to a similar language (like Coq to OCaml) or by proving them equivalent to deeply embedded implementations (like C programs).  Traditionally, the first approach is automated but produces unverified programs with average performance, and the second approach is manual but produces verified, high-performance programs.&#13;
&#13;
This thesis shows how to recast program extraction as a proof-search problem to automatically derive correct-by-construction, high-performance code from shallowly embedded functional programs. It introduces a unifying framework, relational compilation, to capture and extend recent developments in program extraction, with a focus on modularity and sound extensibility.  To demonstrate the value of this approach, it then presents Rupicola, a relational compiler-construction toolkit designed to extract fast, verified, idiomatic low-level code from annotated functional models.&#13;
&#13;
The originality of this approach lies in its combination of foundational proofs, extensibility, and performance, backed by an unconventional take on compiler extensions: unlike traditional compilers, Rupicola generates good code not because of clever built-in optimizations, but because it allows expert users to plug in domain- and sometimes program-specific extensions that allow them to generate exactly the low-level code that they want.  This thesis demonstrates the benefits of this approach through case studies and performance benchmarks that highlight how easy Rupicola makes it to create domain-specific compilers that generate code with performance comparable to that of handwritten C programs.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143374</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Manufacturing Methodology for Carbon Nanotube-based Digital Systems: from Devices, to Doping, to System Demonstrations</title>
<link>https://hdl.handle.net/1721.1/143373</link>
<description>A Manufacturing Methodology for Carbon Nanotube-based Digital Systems: from Devices, to Doping, to System Demonstrations
Lau, Christian Lee
Electronics is approaching a major paradigm shift because silicon transistor scaling no longer yields historical energy-efficiency benefits, spurring research towards beyond-silicon nanotechnologies. In particular, carbon nanotube field-effect transistor (CNFET)-based digital circuits promise substantial energy-efficiency benefits, but the inability to perfectly control intrinsic nanoscale defects and variability in carbon nanotubes has precluded the realization of very-large-scale integrated systems. In this thesis, I overcome these defects and variations to enable, for the first time, a demonstration of a beyond-silicon modern microprocessor: RV16X-NANO, designed and fabricated entirely using CNFETs. RV16X-NANO is a 16-bit microprocessor based on the open-source and commercially available RISC-V instruction set, running standard RISC-V 32-bit instructions on 16-bit data and addresses. It integrates &gt;14,000 CMOS CNFETs, and operates as modern microprocessors do today (for example, it can run compiled programs; in addition, we demonstrate its functionality by executing all types and formats of instructions in the RISC-V instruction-set architecture). This is made possible by the manufacturing methodology for CNTs (MMC)—a set of original processing and circuit design techniques that are combined to overcome the intrinsic CNT challenges.&#13;
&#13;
Importantly, the entire MMC and all of the work in this thesis are wafer-scale, VLSI-compatible, and seamlessly integrated within existing infrastructures for silicon CMOS—both in terms of design and of processing. Together, the contributions of this thesis establish a robust CNT CMOS technology and represent a major milestone in the development of beyond-silicon electronics.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143373</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total Synthesis of Himastatin</title>
<link>https://hdl.handle.net/1721.1/143372</link>
<description>Total Synthesis of Himastatin
D'Angelo, Kyan A.
I. Total Synthesis of Himastatin via Bioinspired Oxidative Dimerization&#13;
&#13;
The concise total synthesis of (–)-himastatin via a biomimetic final-stage dimerization is described. Our approach relies on expedient preparation of a macrocyclic depsipeptide monomer via hybrid solution/solid phase peptide synthesis, followed by a newly developed oxidative dimerization reaction to secure the C5–C5' biaryl linkage at the center of himastatin’s homodimeric structure. Application of the oxidative dimerization methodology enabled the preparation of dimeric C5–C5' cyclotryptophans, cyclotryptamines, and indolines via a radical-radical coupling pathway that was supported by mechanistic studies.&#13;
&#13;
II. Synthesis and Biological Study of Himastatin Derivatives&#13;
&#13;
The modularity and convergence of our hybrid solution/solid-phase approach to the synthesis of macrocyclic peptide monomers enabled general access to several himastatin derivatives and their comparative biological evaluation. Our findings indicate that the central C5–C5' biaryl linkage, depsipeptide linkage, and piperazic acid residue of himastatin are important for bioactivity, but that substitution of the leucine residue has negligible impact. The synthesis and biological evaluation of a series of stereochemical probes further reveal that the absolute stereochemistry of himastatin does not impact its bioactivity, consistent with primarily achiral interactions with its cellular target. Relying on our late-stage dimerization methodology for the union of complex macrocyclic peptide fragments, we also accessed a uniquely active heterodimeric fluorescent probe, TAMRA-himastatin. Confocal microscopy enabled direct observation of the antibiotic’s localization within Gram-positive bacteria, and provided evidence that himastatin targets the bacterial membrane as part of its mode of action.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143372</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Tools for Structural Biology and Biophysics: High-Throughput Fluorine Solid-State NMR and Applications to Membrane Proteins</title>
<link>https://hdl.handle.net/1721.1/143371</link>
<description>New Tools for Structural Biology and Biophysics: High-Throughput Fluorine Solid-State NMR and Applications to Membrane Proteins
Shcherbakov, Alexander A.
Solid-State Nuclear Magnetic Resonance (SSNMR) spectroscopy is a powerful method for characterizing the structure and dynamics of crystalline and amorphous solid compounds, materials, and biological systems. When applied to biomolecular systems such as membrane proteins, it can provide access to information about structure and dynamics in native environments, on targets that are difficult to characterize by other biophysical methods. Membrane proteins in particular are critical for biological function and are overrepresented as drug targets; however, they are notoriously difficult to study. In this thesis, new SSNMR methods are developed, utilizing fast Magic Angle Spinning (MAS), multidimensional correlation, and the 19F nucleus as a biophysical probe for understanding the structure and dynamics of crystalline and membrane-bound proteins and protein-ligand complexes.&#13;
&#13;
Internuclear distances are critical in biomolecular structure determination. The 19F nucleus, due to its high gyromagnetic ratio, absence of natural background, small atomic radius, and highly developed chemistry, is uniquely suited as a probe for measuring long internuclear distances. Utilizing uniform 13C labeling and multidimensional correlation, an experiment for multiplex measurement of 13C-19F distances is developed. Furthermore, 13C-19F coherence transfer methods are compared and optimized to enable direct 13C-19F correlation to disambiguate constraints in polyfluorinated systems. These technological developments are applied toward determining the structure of the Envelope (E) protein of the novel SARS-CoV-2 virus.&#13;
&#13;
With fast MAS, proton-detected experiments in SSNMR are possible with high resolution and sensitivity. A new method for measuring nanometer-length distances in a multiplex manner is developed, utilizing 1H-19F Rotational Echo Double Resonance (REDOR) and two-dimensional 1H-15N correlation. The experiment is developed on a quad-labeled (uniform 2H, 13C, 15N, and 19F-tagged) model protein, and the distances measured are shown to be in quantitative agreement with the known structure. This technology is applied to refine the structure of the E. coli multidrug resistance protein E (EmrE) by measuring a large number of 1H-19F distances between a tetrafluorinated ligand and the protein HN atoms. The structure of the EmrE protein was determined at high and low pH, modeling functional states of the transporter, and providing insight into the mechanism of proton-coupled antiport.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143371</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless Sensing with Machine Learning: Through-Wall Vision &amp; Contactless Health Monitoring</title>
<link>https://hdl.handle.net/1721.1/143370</link>
<description>Wireless Sensing with Machine Learning: Through-Wall Vision &amp; Contactless Health Monitoring
Zhao, Mingmin
There is significant interest in technologies that can sense people and monitor their health with minimal overhead. Existing solutions typically require people to wear different sensors and devices on their bodies. This thesis demonstrates how we can use wireless signals and machine learning to sense people without any physical contact with their bodies. We develop novel radio sensors that sit in the background like a Wi-Fi router. Our sensors, however, analyze the surrounding radio signals using novel machine learning algorithms to monitor people’s movements and activities, assess their vital signs, track their sleep and sleep stages, and recognize their emotions. Since wireless signals traverse walls, our sensors can deliver all of these functions through walls and occlusions.&#13;
&#13;
The key challenge in delivering the above contributions is that radio signals interact with people and the environment in complex ways, resulting in an underdetermined mapping that varies across time and space. To address this problem, this dissertation adopts a data-driven approach and develops custom machine learning models that operate on radio signals. Developing such models requires technical innovations to address unique challenges due to the specularity of radio signals in the frequencies of interest, multipath reflections in indoor environments, high data rates and computation complexity, and the lack of training data and the difficulty in annotating radio signals. Our work addresses these challenges and enables two new capabilities: through-wall tracking of the human pose and contactless health monitoring.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143370</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Highly multiplexed molecular mapping of biological samples via integrated experimental and computational technologies</title>
<link>https://hdl.handle.net/1721.1/143366</link>
<description>Highly multiplexed molecular mapping of biological samples via integrated experimental and computational technologies
Goodwin, Daniel Robert
Identification and labeling of diverse molecular identities in biological samples has traditionally come at the price of decreased spatial information. Electron microscopy, with its nanometer spatial resolution and excellent ultrastructure preservation, yet essentially no molecular identification, is the most obvious example of this dichotomy that pervades all imaging paradigms. The Boyden Lab has previously developed Expansion Microscopy (ExM), which increases both spatial resolution of fluorescence imaging and target accessibility via isotropically expandable hydrogels. While proteomic-based approaches have the bottleneck of antibodies, nucleic acid sequencing is universally applicable to every molecular target and provides for uniform sample handling and essentially infinite multiplexing. In this thesis, I present the development of the Expansion Sequencing (ExSeq) technology suite, which resolves the underlying tensions between molecular, spatial, and ultrastructural information by multiplexing in situ sequencing and protein information in single, intact specimens. ExSeq produces high-resolution transcriptomic maps of intact tissues and is sensitive enough to detect thousands of different genes within a single sample. Applied to the mouse hippocampus, ExSeq produces transcriptomic atlases of diverse cell types and visualizes mRNA transcript content across thousands of dendritic spines of single CA1 pyramidal neurons. ExSeq also reveals the molecular organization and position-dependent states of many cells in a human metastatic breast cancer sample from a patient. ExSeq harnesses novel experimental and computational techniques to systematically encode and decode biological information from the microscope. I conclude with an exploration of ExSeq as a platform technology for molecular connectomics, with an eye toward robust and democratizable synaptic-scale maps of the brain.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143366</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language learning at scale: Data‐driven and model‐motivated analyses of lexical and morphological development</title>
<link>https://hdl.handle.net/1721.1/143365</link>
<description>Language learning at scale: Data‐driven and model‐motivated analyses of lexical and morphological development
Braginsky, Mika
Studying how children learn language gives complementary insights into both children and language. Given that languages have the properties that they do, what does children’s ability to learn them tell us about cognitive development? And given that children have the cognitive capacities that they do, what does the learnability of language tell us about its properties?&#13;
&#13;
I present a series of large‐scale investigations of language learning, using dense datasets and computational models to support generalization across children, over development, among languages, and while distinguishing among theories. In the initial investigation (Chapter 2), we ask: what makes words harder or easier to learn, and what does that reveal about word learning mechanisms? We examined the factors that contribute to individual words’ learning trajectories, and the consistency of those factors across languages. This work establishes a framework for conducting analyses of large‐scale language learning data in a way that brings together disparate data sources, generalizes across languages, and tracks change over development.&#13;
&#13;
For subsequent investigations (Chapters 3‐4), I focus on the study of morphology learning from various directions. Learning morphology is a particularly interesting problem because it straightforwardly involves both memorizing specific forms and generalizing beyond direct experience. It is thus a fruitful case study for the interplay between mechanisms of memory and inference, which is fundamental both in language and in cognition more generally.&#13;
&#13;
Chapter 3 applies the datasets and approaches developed in Chapter 2 to the domain of morphology, investigating how morphology learning relates to age, vocabulary development, and phonological structure. Chapter 4 delves further into morphology learning by applying the abstraction and rigor of computational modeling. Finally, I propose a series of studies that use a novel data collection method to create a dense dataset on morphological development and evaluate theories of morphological development.&#13;
&#13;
Taken together, these studies synthesize empirical and computational methods to investigate multiple domains of language development at scale. Focusing on lexical and morphological development, they clarify and enhance the empirical and theoretical landscape of language learning.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143365</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Applications of Resonances, from One to Infinity</title>
<link>https://hdl.handle.net/1721.1/143364</link>
<description>On Applications of Resonances, from One to Infinity
Benzaouia, Mohammed
Resonances are everywhere. They have different manifestations and are used in many applications. In this work, we study multiple systems making use of resonances, with numbers ranging “from one to infinity.”&#13;
&#13;
First, we derive single-frequency bounds for surface-enhanced Raman scattering (SERS), where resonant nanostructures are used to enhance the Raman signal. These bounds are shape-independent and depend only on the material constants and the separation distance from the Raman molecule. They can be evaluated analytically or by simple numerical integration.&#13;
&#13;
We then present analytical design criteria for multi-resonant filters in strongly coupled systems where standard approaches (such as coupled mode theory or network synthesis) are not adequate. For this, we develop a quasi-normal mode theory (QNMT) of the scattering matrix that enforces the fundamental constraints of energy conservation and reciprocity even for truncated sums. As an example of application, we design microwave metasurface filters with various orders, bandwidths, and types (such as elliptic or Chebyshev).&#13;
&#13;
For systems making use of a large number of resonances over a large bandwidth (such as light trapping in solar cells), and in particular for metaparticle arrays, we present approximate frequency/angle-averaged absorption enhancement bounds in the radiative transfer regime and apply the results to ocean buoy energy extraction. Our results, which match full-wave simulations, enable us to propose and quantify approaches to increase performance through careful particle design and/or using external reflectors.&#13;
&#13;
Finally, we study single-mode lasing stability in periodic systems where a full continuum of modes should be taken into account in the nonlinear regime above threshold. In particular, we show that, under the right conditions, single-mode lasing is still possible in an infinite periodic structure, with practical limitations arising from boundary effects and manufacturing inaccuracies. Examples of band-edge (1d) and bound-in-continuum (2d) mode lasing are presented.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143364</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling, Design, Identification, Drive, and Control of a Rotary Actuator with Magnetic Restoration</title>
<link>https://hdl.handle.net/1721.1/143363</link>
<description>Modeling, Design, Identification, Drive, and Control of a Rotary Actuator with Magnetic Restoration
Mohammadi Yangijeh, Sajjad
Rotary actuators are widely used in industry. This thesis investigates the design, modeling, identification, drive, and control of an actuator with magnetic restoration. The design considerations are explained, FEM is used in the analysis, and a prototype is built for lab experiments. A design-oriented analytical model is developed for the actuator, in which the coil torque is obtained using the solution of Laplace’s equation in elliptical coordinates, and the reluctance torque is derived by an approach named differential flux tubes. In addition, nonlinear and linearized electromechanical models are developed for control system designs and dynamic studies. To obtain higher accuracy, the eddy currents in the laminations and the magnet are also modeled using an analytical solution of the 1-D and 2-D diffusion equation and extracting a lumped-element circuit for system-level analysis. This significantly improves the accuracy of the model. The impact of pre-sliding friction on the mechanical dynamics is studied as well. Then, identification of the model is performed. Next, an op-amp-based drive circuit for the current control loop is proposed, modeled, and designed. Then, three DSP-based position control techniques are implemented: pole placement with voltage drive, pole placement with current drive, and nonlinear control with feedback linearization. State observers are employed to estimate the unmeasured states. The control techniques are evaluated and compared through time response indices such as rise time, overshoot, steady-state error, and large-signal tracking, as well as by frequency domain indices like bandwidth, robustness, phase margin, sensitivity, and disturbance rejection. A method of eddy-current plates is also proposed for inductance reduction. In the end, a new effectiveness index is proposed.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143363</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Graph Representation Learning and Generation for Drug Discovery</title>
<link>https://hdl.handle.net/1721.1/143362</link>
<description>Molecular Graph Representation Learning and Generation for Drug Discovery
Chen, Benson
Machine learning methods have become pervasive in the domain of drug discovery, enabling more powerful and efficient models. Before deep models, molecular modeling was largely driven by expert knowledge, but such hand-engineered rules prove insufficient to represent the complexities of the molecular landscape. Deep learning models are powerful because they learn the important statistical features of the problem, but only with the correct inductive biases. We tackle this important problem in the context of two molecular problems: representation and generation. The canonical success of deep learning is deeply rooted in its ability to map the input domain into a meaningful representation space. This is especially poignant for molecular problems, where the “right” relations between molecules are nuanced and complex.&#13;
&#13;
The first part of this thesis will focus on molecular representation, in particular, property and reaction prediction. Here, we explore a transformer-style architecture for molecular representation, providing new tools to apply these models to graph-structured objects. Moving away from the traditional graph neural network paradigm, we demonstrate the efficacy of prototype networks for molecular representation, which allows us to reason over learned property prototypes of molecules. Lastly, we look at the molecular representations in the context of improving reaction predictions.&#13;
&#13;
The second part of this thesis will focus on molecular generation, which is crucial in drug discovery as a means to propose promising drug candidates. Here we develop a new method for multi-property molecule generation, by first learning a distributional vocabulary over molecular fragments. Then, using this vocabulary, we survey efficient exploration methods over the chemical space.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143362</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Informalization of Formal Housing Projects in the Global South: &#13;
Policy Failure or Counterhegemonic City-making?</title>
<link>https://hdl.handle.net/1721.1/143361</link>
<description>The Informalization of Formal Housing Projects in the Global South: &#13;
Policy Failure or Counterhegemonic City-making?
Wainer, Laura Sara
Faced with the rampant expansion of informal settlements in cities, many national governments across the global South have instituted formal social housing programs. In turn, however, many State-led housing projects, aimed at curtailing informal settlements, themselves informalize. How and why does this happen? My dissertation interrogates this recurrent phenomenon in Latin America and Sub-Saharan Africa: the physical, economic, and institutional encroachment of informal practices onto formal, large-scale housing projects. The scarce literature on the topic positions the phenomenon as either a policy failure or bottom-up adaptations to unsuitable policy decisions. Drawing on the intersection between State building theory, Southern Urbanism, and Design Politics, I suggest that it is instead a series of interconnected counterhegemonic city-making efforts that attempt to undo the norms and forms imposed by the national State to guarantee the political and social stability of Southern urban peripheries. As such, informalization operates over a complex matrix of pre-existing regulations and standards, engages in practices of territorial anchoring and economic development, and asserts de facto management status without legal-administrative capacity to address the social demands and conflicts of urban growth.&#13;
&#13;
I base my arguments on the in-depth study of three paradigmatic cases in Buenos Aires (Argentina), Cape Town (South Africa), and Cartagena (Colombia) to introduce the informalization of the formal as a process of counterhegemonic practices, transversal, but not exogenous, to the more formal managerial logic, that entail: anchoring people and organizations to their territory, individualizing land to self-manage urban space, incrementing houses to serve the extended families’ needs, unlocking the local economy, and stabilizing tensions and social conflicts of urban management. The case studies show that informalization enhances livelihoods and provides political stability in the short term. Still, as space and infrastructure become more contested, significant new tensions emerge within the community and between the community and governments. In turn, the State has not yet found planning visions or pragmatic alternative solutions, contributing to ongoing neglect of these territories. The findings also bring out the possibilities of a techno-political re-imagination of the planning and design disciplines.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143361</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven Mechanistic Modeling of 3D Human Genome</title>
<link>https://hdl.handle.net/1721.1/143360</link>
<description>Data-driven Mechanistic Modeling of 3D Human Genome
Qi, Yifeng
Three-dimensional (3D) organization of the human genome regulates DNA-templated processes, including gene transcription, gene regulation, and DNA replication, which are crucial for cell differentiation and cell functionality. Computational modeling serves as an efficient and effective way of building high-resolution 3D genome structures and improving our understanding of these molecular processes. My PhD research has been focused on the development of a data-driven, mechanistic modeling framework aiming to better understand the physical principles of how the genome organizes, as well as the mechanisms of genome structure-coupled biological processes, such as the coalescence of nuclear bodies.&#13;
&#13;
This thesis is organized as follows. In the first chapter, we introduce a computational model to simulate chromatin structure and dynamics. The model defines chromatin states by taking one-dimensional genomics and epigenomics data as input and quantitatively learns interacting patterns between these states using experimental contact data. Once learned, the model is able to make de novo predictions of 3D chromatin structures at five-kilo-base resolution across different cell types. The manuscript associated with this study is published in PLoS Computational Biology, 15.6, e1007024 (2019).&#13;
&#13;
In the second chapter, we expand the spatial scale of the model to study the organization of the global diploid human genome in the entire nucleus. The model is both data-driven and mechanistic in nature, as the energy function is explicitly written out based on biologically motivated hypotheses, and all parameters are quantitatively derived from experimental contact data. The model has shown its usefulness both in reconstructing whole-genome structures and in exploring the physical and biological principles of genome organization. The manuscript associated with this study is published in Biophysical Journal, 119, 1905 (2020).&#13;
&#13;
In the third chapter, we further apply the data-driven modeling framework that we have developed to study the thermodynamics and kinetics of the formation and coalescence of nuclear bodies. Our study suggests that protein-chromatin interactions facilitate the nucleation of droplets, but hinder their coarsening due to the correlated motion between droplets and the chromatin network: as droplets coalesce, the chromatin network becomes increasingly constrained, which is entropically unfavorable. Therefore, protein-chromatin interactions arrest phase separation in multi-droplet states and may drive the variation of nuclear body numbers across cell types. The manuscript associated with this study is published in Nature Communications, 12, 1 (2021).
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143360</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of data-driven methods in nuclear fuel performance analysis</title>
<link>https://hdl.handle.net/1721.1/143359</link>
<description>Application of data-driven methods in nuclear fuel performance analysis
Che, Yifeng
Accurately predicting the behavior of nuclear fuel performance is essential for the safe and economic operation of nuclear reactors. Computer codes of different fidelities have been developed over the past decades to simulate the behavior of nuclear fuels, such as the multi-dimensional, parallel, finite element-based code BISON, and the NRC-auditing code FRAPCON. Multiple areas of research remain to be addressed in fuel performance, where physics-based approaches often reach their limits. The studies presented in this thesis therefore revolve around applying data-driven methods to address these issues.&#13;
&#13;
First, discrepancies always exist between code predictions and real-world responses; thus, uncertainties in the code predictions must be quantified for the benefit of decision making, operational safety, and design optimization. Systematic validation and verification are performed for BISON first, followed by a holistic sensitivity analysis (SA) framework built upon a complete set of uncertain input parameters. The number of uncertain input parameters can be effectively reduced based on the obtained qualitative importance ranking, benefiting the subsequent uncertainty quantification (UQ). To enhance predictability, a novel Bayesian inference framework is introduced to efficiently calibrate the expensive high-fidelity tools, possibly without resorting to approximate surrogate methods. The calibrated prediction aligns better with experimental observations and is subject to significantly reduced uncertainty.&#13;
&#13;
Second, while full-core monitoring of fuel behavior can provide the most realistic assessment of safety margins, its computational cost for use in design and operation optimization is prohibitive. Machine learning (ML) methods were used to construct fast-running full-core surrogates, which achieve a runtime acceleration of more than 10,000 (1,000) times compared to FRAPCON for standard (high-burnup) PWR cores, allowing for direct coupling of the full-core fuel response into core design optimization in the future. Then, for full-core PCI monitoring, which requires BISON as the high-fidelity simulation tool, a physics-informed multi-fidelity ML framework is introduced to significantly reduce the number of necessary code runs. Finally, deep learning models are trained to predict the spatiotemporal distribution of the cladding hoop stress. The proposed data-driven methods for the selected applications show the nuclear community practical pathways to realize meaningful improvements in fuel performance assessment.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143359</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrostatically Levitated Object Handoff to Minimize Wear and Particle Generation</title>
<link>https://hdl.handle.net/1721.1/143356</link>
<description>Electrostatically Levitated Object Handoff to Minimize Wear and Particle Generation
Bhushan, Brij M.
In semiconductor photolithography, feature sizes are shrinking toward the low-nanometer range using extreme ultraviolet (EUV) light, which requires exposure in vacuum. Even a few particles on surfaces in the optical path (such as reticles, lenses, and mirrors) pose critical limits on performance, yield, and machine availability. In the semiconductor manufacturing process, object handoff between stages is one of the significant particle generation mechanisms. Vibrations in the handoff and pickup stages generate impact forces on the object and cause relative sliding between contact surfaces, leading to wear and particle generation. Non-contact handling techniques that work in air, such as Bernoulli grippers, do not work in vacuum. We have developed the concept of object handoff by electrostatic levitation, establishing a small upward bow shape to make first contact at the object center and then flattening outward. Our research solution converts the transfer problem from a three-body contact (handoff stage, object, and pickup stage) to a phased two-body contact by electrostatically levitating the object from the handoff to the pickup stage. &#13;
&#13;
This thesis describes the design and modeling of a proof-of-concept handoff system and presents the experimental results. We levitate a 152 by 152 mm square, 400 μm thick aluminum sheet (the object) across a 200 μm air gap. We explore various sensing and actuation patterns, develop control strategies, and evaluate methods to avoid electrostatic discharge during stable levitation and handoff. The prototype suspends and stabilizes the object in six degrees of freedom (6-DOF) below the pickup stage electrodes. 3-DOF (Z, &#120579;ₓ, and &#120579;ᵧ) are actively controlled and the other 3-DOF in the horizontal directions (X, Y, and &#120579;_z) are passively stabilized. The flexible bow shape of the object is also controlled. The achieved steady-state object positioning noise is &lt; 200 nm-pp in the Z-direction, and &lt; 0.2 mdeg-pp in the &#120579;ₓ and &#120579;ᵧ directions, with a 150 Hz maximum bandwidth. The horizontal positions are repeatable to &lt; 0.5 mm in X and Y, and ±0.2 deg in &#120579;_z. We demonstrate pickup-clamp and unclamp-placedown sequences by levitating the object from the resting pins (handoff stage) to the pickup stage electrodes and back. This methodology for object handoff by levitation could be extended, using electrostatic, electromagnetic, acoustic, and pneumatic force fields, to other situations in which wear and particle generation during object handoff are critical.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143356</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>What's at stake in political messaging?</title>
<link>https://hdl.handle.net/1721.1/143355</link>
<description>What's at stake in political messaging?
Hewitt, Luke
Campaigns spend billions attempting to persuade voters – with factual arguments, group cues, moral appeals, character attacks, and targeted messages tailored for a particular demographic – yet such persuasion campaigns are often found to have only ‘minimal effects’. In this thesis, I investigate the current and potential impacts of campaigns’ messaging in shaping public opinion. Based on evidence from a set of large-scale, randomized, online survey experiments which span multiple electoral contexts and policy issues, I draw the following conclusions. 1. While messages produced by real electoral campaigns have only minimal effects on average, they are highly variable: one message may be many times more effective than another for the same persuasive goal. 2. While political communication literature is rich with theory about the features of persuasive messages, I find from tests spanning many such theories that none can reliably distinguish strong messages from weak ones. 3. While techniques such as Moral Reframing claim to boost persuasion by ‘tailoring’ messages with language suited to different subgroups, I find that theoretical subgroup differences are at best unreliable with mixed results across issues. 4. Even if theory cannot yet provide reliable predictions of persuasion in diverse contexts, online experiments seem able to measure it meaningfully. I confirm that these experiments reflect not just a transient priming effect, but rather lasting attitude change as a direct consequence of message exposure.&#13;
&#13;
These findings suggest two broad lessons for future research. First, ‘what works’ in persuasion is context/issue-dependent. Studies in political communication should therefore span multiple policy issues, in order to avoid a disconnected literature with contradictory conclusions drawn from idiosyncratic contexts. Second, while existing theory provides an unreliable basis for persuasion campaigns to choose their messaging, randomized experiments themselves can provide valuable evidence that distinguishes strong messages from weak ones. Therefore, campaigns’ increasing use of experimentation should be viewed by scholars not only as a new source of data for researchers to learn what persuades voters, but as an important object of study in its own right with implications for the role of campaign messaging in democracy.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143355</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rethinking Plant-Based Materials Production: Selective Growth of Tunable Materials Using Cell Culture Techniques</title>
<link>https://hdl.handle.net/1721.1/143354</link>
<description>Rethinking Plant-Based Materials Production: Selective Growth of Tunable Materials Using Cell Culture Techniques
Beckwith, Ashley L.
Each year, global forests shrink by billions of trees due to human activities and natural disasters. This sustained deforestation impacts both the environment and the economy. Forests are ecologically essential—supporting biodiversity, stabilizing ecosystems, and sequestering carbon. Trees also supply feedstock for building infrastructure, energy generation, production of consumer goods, textile manufacturing, and an increasing range of other economic activities. Biotechnology may hold the key to satisfying the growing demand for wood and wood-based products while staving off further deforestation and environmental disruption. This work explores the use of cell culture techniques to selectively generate plant-based materials without whole-plant cultivation and harvest. The proposed approach allows for localized, high-density production, elimination of energy-intensive harvest and hauling, and reduced processing, while offering inherent climate resilience. &#13;
&#13;
Employing a Zinnia elegans model system, this work provides the first proof-of-concept demonstrating net-shape, tunable plant material production in vitro. Central activities of the presented research include: (1) establishing knowledge to effectively direct cellular development, (2) devising and demonstrating mechanisms to direct material form, and (3) relating cellular-level properties to emergent material characteristics to allow tunable and predictable material outcomes. &#13;
&#13;
First, using newly proposed metrics, cellular responses to applied culture conditions are quantified and mapped; selected environmental parameters including hormone concentration, medium pH, and cell density are shown to significantly influence cell differentiation and morphology. &#13;
&#13;
Next, material form is directed through the casting and bioprinting of cell-laden hydrogel scaffolds. Gel-mediated culture enables plant material production both in forms and at scales that do not arise naturally in comparable whole plants. A maximum size for gel-based plant cultures has yet to be reached, with prolonged cell viability maintained even at distances of more than 8.7 mm from the gel surface. &#13;
&#13;
Finally, this work reports the first mechanical, physical, and microstructural characterization of 3-D printed, lab-grown plant materials and demonstrates material tunability with simple adjustments to culture conditions (e.g., hormone levels). Grown material properties vary significantly with the hormone levels of the nutrient medium. The storage modulus of high-hormone samples presenting vascular cell types was elevated at 404.7 ± 146.9 MPa, compared to 135.3 ± 57.4 MPa for low-hormone samples (P=0.033). Hormone levels also impacted growth potential and physical material properties. High-hormone samples showed only moderate increases in sample volume and mass (91% and 31%, respectively) compared with larger gains in low-hormone formulations (398% and 119%, respectively), relative to a cell-free control. An examination of material microstructure reveals distinct cellular morphologies and identities for the evaluated growth conditions.&#13;
&#13;
This work demonstrates the feasibility, tunability, and scaling potential of net-shape plant material cultivation—presenting new uses for plant culture technology, establishing novel methods for quantification of culture growth and development, and providing a first characterization of cultured plant materials. More importantly, this proof-of-concept illustrates the promise of a customizable, land-free approach to generating plant materials—a decisive first step towards the vision of tree-free wood products.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143354</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Sampling Methods of, by, and for Stochastic Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/143353</link>
<description>Efficient Sampling Methods of, by, and for Stochastic Dynamical Systems
Zhang, Benjamin Jiahong
This thesis presents new methodologies that lie at the intersection of computational statistics and computational dynamics. Stochastic differential equations (SDEs) are used to model a variety of physical systems, and computing expectations over marginal distributions of SDEs is important for the analysis of such systems. In particular, quantifying the probabilities of rare events in SDEs -- and elucidating the mechanisms by which these events occur -- are critical to the design and safe operation of engineered systems.&#13;
&#13;
In the first part of the thesis, we use data-driven tools for dynamical systems to create methods for efficient rare event simulation in nonlinear SDEs. Our approach exploits the relationship between the stochastic Koopman operator and the Kolmogorov backward equation to derive optimal importance sampling and multilevel splitting estimators. By expressing an indicator function over a rare event in terms of the eigenfunctions of the stochastic Koopman operator, we directly approximate the associated zero-variance importance sampling estimator. We also devise efficient multilevel splitting schemes for SDEs by using the Koopman eigenfunctions to approximate the optimal importance function.&#13;
&#13;
Stochastic dynamical systems can also be tools for solving problems in computational statistics. Creative uses of SDEs have been instrumental in developing efficient sampling methods for high-dimensional, non-Gaussian probability distributions. The second part of the thesis develops new sampling methods that employ judiciously constructed SDEs. We first present a framework for constructing controlled SDEs that can sample from a large class of probability distributions with Gaussian tails in finite time. By choosing a linear SDE as the uncontrolled reference system, we synthesize feedback controllers that drive the sampling of such distributions. We identify and approximate these controllers by solving only a static optimization problem.&#13;
&#13;
Next, we develop novel approaches for accelerating the convergence of Langevin dynamics-based samplers. Reversible and irreversible perturbations of Langevin dynamics can improve the performance of Langevin samplers. We present the geometry-informed irreversible perturbation (GiIrr) and show that it accelerates convergence of Riemannian manifold Langevin dynamics more than standard irreversible perturbations. We then propose the transport map unadjusted Langevin algorithm (TMULA), and show that the use of transport enables rapid convergence of the unadjusted Langevin algorithm for distributions that are not strongly log-concave. We also make connections between transport maps and Riemannian manifold Langevin dynamics to elucidate how transport maps accelerate convergence.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143353</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensor-Based Methods for Characterizing Technology Impact in Low-Resource Settings</title>
<link>https://hdl.handle.net/1721.1/143351</link>
<description>Sensor-Based Methods for Characterizing Technology Impact in Low-Resource Settings
Gandhi, Amit
The development of appropriate technologies aimed at improving the livelihoods of people living in poverty has been a focal area of international aid. These technologies can have a transformative impact on economic growth and individual health when designed and implemented properly. Long-term usage and performance of these products are difficult to measure accurately through self-reporting, and the use of low-cost sensors allows researchers and designers to better assess their overall impact. The first part of the thesis presents a methodology for implementing sensor technologies in low-resource settings. The second part of the thesis presents the design of a novel sensor-based data collection system that was developed for design and impact research. The third part of this thesis presents three case studies where this system is used to evaluate wheelchairs, improved cookstoves, and evaporative coolers. The wheelchair case study in Indonesia compares the performance of different wheelchairs ranging in cost from $100 to $300 and finds little variation between improved wheelchairs in frequency of use and distance traveled. All of the improved wheelchairs show significant benefits when compared to hospital-style chairs. The improved cookstove case study demonstrates the effectiveness of using a household air pollution monitor in combination with a stove use monitor to identify cooking events and measure cookstove adoption in a research pilot in India. Particulate matter concentrations in all of the households using improved cookstoves were significantly higher than World Health Organization (WHO) recommended levels. The evaporative cooling case study demonstrates that clay pot coolers are effective at keeping fresh produce more than 5°C cooler than ambient temperatures in the hottest months of the year in Mali. 
The final part of this thesis examines the ethical challenges that may arise from incorporating sensors for field research and best practices for addressing some of these concerns.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143351</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stable Machine Learning</title>
<link>https://hdl.handle.net/1721.1/143350</link>
<description>Stable Machine Learning
Paskov, Ivan Spassimirov
This thesis explores one of the most fundamental questions in Machine Learning, namely, how should the "learning" component in Machine Learning be done? For essentially the entire history of the field, ever since Mosteller and Tukey proposed the paradigm in 1968, the answer has remained constant: use randomization. Namely, randomly split your data into training, validation, and test sets; train your model on the training set; pick parameters based on the validation set; and report performance based on the test set. Conceptually and practically simple, this methodology has gained near-unanimous adoption. Despite this popularity, however, the methodology is fraught with issues relating to the instability of the trained models, and the question remains whether we can do better.&#13;
&#13;
In this thesis, we answer that question in the affirmative. By taking a robust, combinatorial optimization approach, we propose a new way of training all machine learning models based on optimization rather than randomization. Rather than requiring that the model perform well against a single, randomly chosen training set, as is typically done, we require that it be robust against every training set of a fixed size. In this way, we extract what is common among all training sets, rather than the idiosyncrasies of any particular dataset, which are unlikely to generalize to new, as-yet-unseen datasets.&#13;
&#13;
We begin by developing the methodology within the context of spatial, cross-sectional methods, and then proceed to extend the framework to time-series methods, where the contiguous structure of time now plays a key role. We next derive efficient algorithms that make the approach extremely scalable. Finally, we demonstrate the efficacy of the methodology across all methods on a large set of datasets, synthetic and real, derived from both academia and industry.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143350</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Attenuation from Fish on Long-Range Active and Passive Acoustic Sensing in the Ocean</title>
<link>https://hdl.handle.net/1721.1/143348</link>
<description>The Effect of Attenuation from Fish on Long-Range Active and Passive Acoustic Sensing in the Ocean
Duane, Daniel Michael
Attenuation from fish can reduce the intensity of acoustic signals and significantly decrease detection range for long-range active and passive sensing in the ocean. It is therefore important to understand the relevant mechanisms and to accurately predict attenuation from fish in underwater acoustic sensing. Formulations for predicting attenuation from fish, however, depend on accurate characterization of the population density and spatial distribution of fish groups along long-range propagation paths, which is difficult to achieve using conventional survey methods. In previous investigations of attenuation from fish, population densities were inferred from reductions in the intensity of long-range acoustic signals caused by diel or seasonal shoaling patterns of fish groups. Here, Ocean Acoustic Waveguide Remote Sensing (OAWRS) is used to instantaneously image massive Norwegian herring shoals spanning thousands of square kilometers and to simultaneously measure attenuation from these shoals within the active OAWRS transmissions, as well as attenuation of ship-radiated tonals detected by Passive Ocean Acoustic Waveguide Remote Sensing (POAWRS). Reductions in signal intensity are predicted using a normal-mode-based analytical theory derived from first principles for acoustic propagation and scattering through inhomogeneities in an ocean waveguide. The predictions of the waveguide attenuation formulation agree with the measured reductions from attenuation, where the position, size, and population density of the fish groups are characterized using OAWRS imagery as well as in situ echosounder measurements of the specific shoals occluding the propagation path. Common heuristic formulations that employ free-space scattering assumptions for attenuation from fish groups do not agree with the measurements here, and waveguide scattering theory is found to be necessary for accurate predictions. 
It is experimentally and theoretically shown that attenuation can be significant when the sensing frequency is near the resonance frequency of the shoaling fish, where scattering losses from the fish swimbladders and damping from fish flesh are most significant. Negligible attenuation was observed in previous OAWRS and POAWRS surveys because the frequency of the acoustic signals was sufficiently far from the swimbladder resonance peak of the shoaling fish, or the packing densities of the fish shoals were not sufficiently high.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143348</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Network Effects on Outcomes and Unequal Distribution of Resources</title>
<link>https://hdl.handle.net/1721.1/143346</link>
<description>Network Effects on Outcomes and Unequal Distribution of Resources
Jahani, Eaman
We focus on the link between networks and economic outcomes, studying how networks affect different groups differently and, in the process, provide pathways that reinforce existing inequalities. First, we establish the link between the economic well-being and the network structure of US counties. We show that counties rich in long ties, those bridging different communities, have better outcomes over a range of economic indicators. Subsequently, we study the determinants of long ties and find that they are more frequent if the individual has experienced disruptions such as mobility, migration, or switching schools throughout their life. Our findings suggest that creating and maintaining long ties require special skills that co-occur with the above-mentioned life events.&#13;
&#13;
Second, we provide observational evidence for differential network advantages in access to information: higher-status individuals receive a higher marginal benefit from networking. We attribute this phenomenon to unequal diffusion due to network homophily and provide causal evidence for it in the context of a randomized seeding experiment in networks.&#13;
&#13;
Third, we develop a network model that captures the structure of unequal diffusion of, or access to, opportunities. We show that any departure from a uniform distribution of links to information sources in a group has both first-order and second-order effects. Not only will some individuals have fewer direct links, but the whole group will also have fewer diffusion paths to the information sources.&#13;
&#13;
Finally, we examine the network mechanisms that widen inter-group differences. We study an information sharing game in which individuals have to compete for a rivalrous resource over repeated rounds. The equilibrium predicts lower cooperation among lower-status agents, which leads the whole group to receive “a smaller share of the pie”. We further validate this prediction in an online lab experiment.&#13;
&#13;
We hope that our findings contribute to the growing literature around the network origins of persistent inequality. Our findings suggest that policies that target groups rather than individuals are more successful in combating inequality as the benefits that arise from lifting a whole group out of poverty will be amplified by the existing social capital and the feedback mechanisms present in the network.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143346</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adam von Bartsch (1757-1821) and the Invention of the Original Printmaker</title>
<link>https://hdl.handle.net/1721.1/143343</link>
<description>Adam von Bartsch (1757-1821) and the Invention of the Original Printmaker
Feiman, Jesse
Through his widely cited catalogues of prints, the Austrian printmaker, curator, and author Adam von Bartsch (1757-1821) developed and disseminated concepts and methods of investigation that formed the foundation of the rational and empirical study of prints. Analysis of Bartsch’s publications, which included new editions of sixteenth-century woodcuts, catalogues raisonnés (authoritative registers of printmakers’ complete works), and didactic texts, reveals nuanced conceptions of originality and of historical development adapted to the technology of printmaking. He coined the term peintre-graveur (“painter-printmaker”) to identify artists who used the printing press to multiply the novel expressions of their minds and hands, and to distinguish autographic printmakers from graveurs, practitioners who used the same means to repeat designs invented by other artists. However, the collaboratively produced prints Bartsch described in Le Peintre Graveur (Vienna, 1803-21), his twenty-one-volume compendium of catalogues raisonnés of Dutch, German, and Italian artists from the fifteenth to the eighteenth centuries, showed that the author recognized originality generated by the application of genius and talent to the seemingly mechanical tasks of translating a composition into print or inking a printing surface. Bartsch’s descriptions illustrated his commitment to firsthand observation and his reliance on side-by-side comparisons for attributing and classifying prints, techniques that generations of print specialists learned from his publications. The roughly chronological order of the catalogues raisonnés in Le Peintre Graveur demonstrated the changes in artists’ manners over time in each national school, so that the organization of the catalogue outlined a schematic history of original printmaking. 
Collectors, dealers, and scholars of European prints adopted Bartsch’s work as a standard reference and, by the end of the century, printmakers possessed of artistic ambition self-identified as peintres-graveurs to signal the originality of their work to collectors. In the analysis of his scholarship, Bartsch emerges as an important voice in the discourse on originality and as a key figure in promoting the acceptance of printmaking as a fine art.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143343</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonon and electron transport through interfaces and disordered structures</title>
<link>https://hdl.handle.net/1721.1/143341</link>
<description>Phonon and electron transport through interfaces and disordered structures
Song, Qichen
Understanding phonon and electron transport is of great significance for designing efficient solid-state devices such as transistors, laser diodes, and thermoelectric energy converters. Structural randomness is inevitable in solid-state devices and is often regarded as an undesirable scattering source for phonons and electrons. This thesis studies the manipulation of phonon and electron flow using structural randomness, via mode-resolved Green’s function calculations and pump-probe optical characterization.&#13;
&#13;
Interface roughness is a common type of randomness in heterostructures that strongly affects electron and phonon transport across interfaces. We find that atomically rough interfaces can scatter short-wavelength electrons and assist transmission between mismatched valleys; the contact resistance is reduced by over an order of magnitude. Our study provides new insights into the conventional wisdom of improving interfacial transport using graded interfaces. We also use the atomistic Green’s function method to simulate phonon transport across rough interfaces, showing that the basic assumption of the often-used diffuse phonon scattering model, that phonons lose memory, is questionable.&#13;
&#13;
The coherent backscattering of waves in disordered structures can lead to Anderson localization, where the waves are spatially localized and cannot propagate. Anderson localization has been observed in electronic, photonic, and acoustic systems. However, observing its impact on heat conduction is challenging due to the broadband nature and three-dimensional transport of phonons. We use aperiodicity as a type of randomness to enhance phonon Anderson localization. Our calculation predicts that an aperiodic Si/Si0.2Ge0.8 superlattice can induce coherent backscattering of low-frequency phonons and limit the transport contribution of high-frequency phonons. The interference among scattered low-frequency phonons leads to a peak in the thermal conductivity versus length curve, a characteristic feature of phonon Anderson localization. Using frequency-domain thermoreflectance, we validate our theoretical predictions and find that phonon Anderson localization persists up to 200 K. Our findings provide an efficient approach to localizing phonons at moderate temperatures using randomness.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143341</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tools for Monitoring and Modulating Cellular Communication</title>
<link>https://hdl.handle.net/1721.1/143340</link>
<description>Tools for Monitoring and Modulating Cellular Communication
Rousseau, Erin Byrne
Biological materials possess the ability to sense and change in response to diverse stimuli. This creates a spatially and temporally dynamic environment, presenting a barrier to investigation and intervention. As such, interfacing with living systems demands precision and adaptability. Here, we present novel technologies for monitoring and modulating the biochemical environment of multicellular tissues. &#13;
&#13;
Neural and neuromuscular tissue exhibits both temporal and anatomical heterogeneity. Neuropathologies can arise from aberrant signaling at a single node; therefore, targeting these structures directly for investigation and treatment is an attractive alternative to standard systemic techniques. However, tissue response and device failure remain major challenges to local interfacing. Recent advances in our understanding of the immune response to implantable materials have allowed for the development of technologies that promote minimal glial scarring while maintaining chronic function. To this end, we have developed modular neural implants for focal dosing, allowing for fine discrimination and investigation of proximal anatomical locations, such as the dorsal and ventral shell of the nucleus accumbens. These implants can be interfaced with our nanofluidic sampling platform for membraneless infusion and withdrawal of extracellular constituents at low flow rates. This allows for ‘liquid biopsies’ of the extracellular milieu and yields information on cellular signaling in healthy and diseased states in both in vitro and in vivo models.&#13;
&#13;
Better monitoring of the cellular environment elucidates the relationship between proteomic signaling and function, informing the engineering of tissue-based sensors and therapies. We explored the use of implantable light-activated muscle for monitoring the response to exercise in both in vitro and in vivo models. These materials are able to integrate with native tissue and maintain their ability to respond to external, user-defined stimuli, thus creating multifunctional implants for monitoring and modulating cellular communication.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143340</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Medical Engineering and Medical Physics: Metagenomic Sequencing for Viral Diagnostics and Discovery</title>
<link>https://hdl.handle.net/1721.1/143339</link>
<description>Medical Engineering and Medical Physics: Metagenomic Sequencing for Viral Diagnostics and Discovery
Ye, Simon Huang
The application of metagenomic sequencing is transforming microbiology by directly interrogating the entire community composition of a clinical sample in an unbiased manner, reducing reliance on culture-dependent approaches. With the advent of next-generation sequencing (NGS) technologies that generate extremely large quantities of genetic information, on the order of billions to trillions of base pairs per sequencing run, computational approaches are necessary for storing and processing the vast quantity of NGS data into useful biological information. Here we benchmark the performance of metagenomic sequence classification methods, controlling for database differences by using a uniform database. Additionally, we developed an integrated metagenomic NGS (mNGS) computational pipeline incorporating stringent negative controls for the primary diagnosis of a cohort of patients with encephalitis and clinical suspicion of viral infection. These methods were used to interrogate secondary coinfections in patient cohorts with primary HIV and Lassa infection.&#13;
&#13;
Metagenomic sequencing can also be utilized to perform large-scale screening for the directed evolution of viral vectors. Adeno-associated virus (AAV) is a non-pathogenic virus that infects humans and is commonly used as a vector for gene therapy. However, natural AAV serotypes tend to accumulate in the liver, leading to toxic side effects when higher doses are used to transduce non-liver tissues. In this work, we engineer specific amino acids on the viral capsid of AAV9 and use sequencing to screen millions of viral capsid variants to evolve an engineered AAV with up to 100 times higher muscle tissue specificity than natural AAV.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143339</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trends in C-H bond dehydrogenation energetics for small molecule conversion</title>
<link>https://hdl.handle.net/1721.1/143338</link>
<description>Trends in C-H bond dehydrogenation energetics for small molecule conversion
Akkiraju, Karthik
Low-temperature activation of C-H bonds and the conversion of small molecules containing C-H bonds has remained a holy grail of chemical reactions over the past few decades. The design of materials to maximize product selectivity for wide-ranging energy and environmental applications is typically carried out by creating a small library of materials. Optimal catalysts are identified by a series of measurements, and in most cases the underlying reaction mechanism is not well understood, leading to difficulty in designing future catalysts. Systematic studies have to be carried out to investigate the catalyst surface under reaction conditions and probe the nature of reaction intermediates as well as the products of the reaction.&#13;
&#13;
In this thesis, we studied the interaction of small molecules such as formaldehyde, methanol, methane, and propane with oxide surfaces to reveal trends in adsorption energies, product selectivity, and reaction rates. We achieve this by developing suitable design descriptors by studying the reaction mechanism in situ. We first generated a library of manganese oxide catalysts to probe the reaction mechanism for formaldehyde oxidation to CO2 at room temperature. We identified γ-MnOx to have one of the highest reaction rates for formaldehyde oxidation and show that catalytic activity can be further improved by the addition of water. We then show that room-temperature selective methanol oxidation towards methyl formate and methane oxidation to CO2 can be realized by increasing the surface oxygen activity of iridium oxide-based catalysts. We further developed a rational design approach for perovskite oxides by tuning the surface O 2p-band center to selectively oxidize methanol to formaldehyde. Finally, we extended this descriptor-based approach to the oxidative dehydrogenation of propane to propene. Thus, using a combination of kinetic measurements, surface-sensitive in situ techniques, and theoretical calculations, we show how catalyst surfaces can be designed to optimize product selectivity.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143338</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New techniques in low-Q² elastic electron-proton scattering measurements and the proton radius extraction</title>
<link>https://hdl.handle.net/1721.1/143336</link>
<description>New techniques in low-Q² elastic electron-proton scattering measurements and the proton radius extraction
Wang, Yimin
The proton charge radius is a fundamental property of the proton, sensitive to its charge distribution. The proton radius puzzle is the discrepancy between the muonic hydrogen spectroscopy measurement of 0.84 fm and the consensus of 0.88 fm from two traditional methods, electron-proton scattering and electronic hydrogen spectroscopy. A recent electron-proton scattering experiment, PRad, measured a smaller radius but also large electric form factor values within 0.02 ≤ Q² ≤ 0.06 (GeV/c)², conflicting with previous experiments. Motivated by those issues, we developed an innovative background-free target system, including a windowless gas jet target, a beam halo collimator and an active beam halo veto, to measure the proton electric form factor at momentum transfers of 0.01 ≤ Q² ≤ 0.065 (GeV/c)². The experiment was conducted in early 2020 at the three-spectrometer facility of the A1-collaboration at the Mainz Microtron in Mainz, Germany. Due to the COVID-19 lockdown, we measured the cross section only up to Q² = 0.043 (GeV/c)² with limited statistics. Although our form factor data cannot yet give a definitive answer to the form factor discrepancy and the proton radius puzzle, our work demonstrated the feasibility of such a background-free target design, and we believe that further data taking with this technique can help to resolve the radius puzzle. In this work, we also study the extraction of the proton charge radius using non-parametric models. We demonstrate that kernel ridge regression and the Gaussian process perform comparably to traditional function-fitting approaches. Our extracted values from different data sets still show the discrepancy of the proton charge radius, supporting the point of view that the discrepancy originates from the data itself instead of the extraction method.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143336</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affective Matter: A Haptic Material Modality for Emotion Regulation and Communication</title>
<link>https://hdl.handle.net/1721.1/143332</link>
<description>Affective Matter: A Haptic Material Modality for Emotion Regulation and Communication
Papadopoulou, Athina
Our emotions do not always surface into our awareness, making it difficult to put them into words. When emotions do not reach our cognitive awareness they can still express themselves as physiological changes in our body, often unperceived by ourselves and by others. To facilitate emotion regulation and expand the bandwidth of emotion communication, I developed Affective Matter. Affective Matter is a haptic material modality that allows information about the physiological aspects of emotions to be communicated through materials. Through the development of Affective Matter, I aim to enhance intrapersonal and interpersonal affective communication through haptic means and contribute to sensory-based therapies for emotional disorders.&#13;
&#13;
In this dissertation, I first review literature pertaining to emotions and body-mind connections, to support the principles of Affective Matter, including the therapeutic impact of touch and controlled breathing, and the affective impact of interpersonal synchrony. I then discuss the development of two types of programmable affective sleeves as examples of Affective Matter, and describe two controlled studies with human subjects testing the psychophysiological impact of each of the sleeves. The combined results of the studies demonstrate a positive correlation between the sleeves’ pace of haptic action and the participants’ breathing rates and arousal levels. Finally, I discuss the development of a user interface for material-mediated emotion communication that translates affective information into personalized material haptic action.&#13;
&#13;
Harnessing the sensory properties of matter, this work builds on advances in design, computing, psychology, and materials to propose Affective Matter as a means for human-material therapeutic interaction, where bodies and their material environments can work in synergy to enhance our emotional wellbeing.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143332</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Congestion Control in Highly Variable Networks</title>
<link>https://hdl.handle.net/1721.1/143328</link>
<description>Congestion Control in Highly Variable Networks
Goyal, Prateesh
Modern applications place an enormous demand on networks to deliver high throughput and low delay. To support applications, computer networks are evolving rapidly. Several new network environments such as datacenter and wireless networks have emerged recently and become prominent. While bandwidth has been increasing steadily in these network environments, they also exhibit significant variability in network conditions. For example, the capacity of a cellular link varies with time. Deployed congestion control solutions struggle to adapt to these variations, and their performance is far from optimal in many environments: the feedback used by these schemes is often imprecise or fails to capture variations in the network conditions fast enough.&#13;
&#13;
To improve performance, we need accurate and timely feedback. To this end, we advocate designing separate feedback mechanisms tailored specifically to the nuances of each network environment. Understanding how conditions vary in each environment can help us unravel what kind of information about the network conditions can improve adaptation to such variations. Additionally, the feedback mechanism should be practical and only involve changes that are within the administrative and hardware constraints of the given network environment. Following this philosophy, this dissertation contributes separate high-performance congestion control solutions for three prominent network environments: (1) Wireless Networks; (2) Datacenter Networks; (3) Wide-area Internet.&#13;
&#13;
ABC is a simple explicit congestion control protocol for network paths with wireless links. ABC adapts to variations in the link capacity quickly and accurately. Compared to deployed schemes, ABC either achieves 50% higher throughput for similar delays or 3× lower delays for similar throughput.&#13;
&#13;
BFC is a practical per-hop per-flow flow control architecture for datacenter networks with bursty traffic. Compared to deployed schemes, BFC responds to congestion faster, and achieves 2.3 - 60× lower tail latency for short flows and 1.6 - 5× better average completion time for long flows.&#13;
&#13;
Nimbus proposes a new feedback mechanism, elasticity detection, to robustly characterize the nature of cross-traffic competing with a flow. Nimbus enables low-delay congestion control in the Internet without any router modifications. Compared to deployed schemes, Nimbus achieves 40-50 ms lower delays in the Internet for similar throughput.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143328</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast image and data processing methods for novel neuroscience technologies</title>
<link>https://hdl.handle.net/1721.1/143327</link>
<description>Fast image and data processing methods for novel neuroscience technologies
Çeliker, Orhan Tunç
The nematode C. elegans, a transparent animal with 302 neurons, is a suitable model organism for whole-brain measurement of neural activity. However, under pan-neuronal labeling, it is difficult to resolve the identity of the neurons by shape or location alone. We propose a fluorescent in situ hybridization (FISH) based pipeline for reading out gene expression from neurons. Using optimization methods, we select a compact set of genes that provide enough information to distinguish every neighboring pair of neurons in the nervous system. We show that we can process volumetric images of live and fixed C. elegans to read out the gene expression pattern of each observed neuron and match it to that neuron's calcium indicator data. Separately, we also outline computational approaches to processing fluorescence data from novel fluorescent sensors.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143327</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Precision, Field-Deployable, Opto-Mechanical Instrumentation: Accessibility as a Functional Requirement</title>
<link>https://hdl.handle.net/1721.1/143323</link>
<description>Development of Precision, Field-Deployable, Opto-Mechanical Instrumentation: Accessibility as a Functional Requirement
Fernández Galiana, Álvaro
The majority of the devices and instruments that we interact with in our daily life have been made accessible to us by (re-)engineering a technology that was once available only to a few researchers in highly specialized laboratories. Bridging the gap between the state-of-the-art instrumentation of research laboratories, which is usually specific to a narrow application, and the broader needs of society is a critical task that can have a transformative effect. Design for accessibility is the practice of developing systems that can be used by and provide solutions to as many people as possible. Although there is no single approach to design for accessibility, general good practices, such as reductions in cost, size, or complexity, contribute to it. In this thesis, some of these practices are discussed via practical examples, and the importance of considering accessibility as a functional requirement in engineering design is highlighted.&#13;
&#13;
The first part of this thesis describes the design of a compact source of quantum squeezed vacuum states. Squeezed vacuum states are electromagnetic vacuum states with enhanced statistics that can be leveraged to improve the sensitivity of instruments beyond the quantum limit. They also constitute the stepping stone for the creation of highly entangled states with high fidelity, an essential resource for continuous-variable quantum information processing. However, the generation and handling of these fragile states is complex and resource-intensive, limiting the potential of the associated technologies. Using novel optical cavity control techniques and a combination of fiber and free-space optics, the presented design reduces the total number and size of the required components, leading to a final system with a compact footprint. Such a system has the potential to expand the capabilities of quantum information research laboratories by giving them access to prepared quantum states without the need for large, complex optical setups. This work also presents the development and implementation of the seismic isolator of the advanced LIGO squeezed source. It is a tabletop, ultra-high-vacuum-compatible passive vibration isolation platform with active damping control. Its innovative architecture is demonstrated to meet the stringent requirements of gravitational-wave interferometers, advancing the field’s suspension technology to be simpler yet more adaptable. Two units of this isolation system have been reliably operating at the LIGO observatories, contributing to an increase in gravitational-wave detection rate of more than 40%.&#13;
&#13;
The second part of the thesis is dedicated to technologies with biomedical applications. A comprehensive framework for the evaluation of universal pathogen detection platforms is introduced, and the potential of vibrational-spectroscopy based biosensors is evaluated. In particular, the benefits and limitations of Fourier-transform infrared spectroscopy coupled with machine learning techniques are highlighted through a review of the state of the art and exemplified with a case study on its application to SARS-CoV-2 detection. Similarly, the advantage of Raman-based platforms for high molecular specificity applications is introduced and the potential of advanced Raman techniques is analyzed. Finally, the design and development of a novel, biomimicry-inspired laparoscopic device for myomectomy surgeries is also discussed.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143323</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Harnessing Magnetic Switching and Dynamics Using Electron and Magnon Spin Currents</title>
<link>https://hdl.handle.net/1721.1/143320</link>
<description>Harnessing Magnetic Switching and Dynamics Using Electron and Magnon Spin Currents
Han, Jiahao
Spintronics exploits the intrinsic spin of the electron in addition to its fundamental charge for designing next-generation electronic devices, with the purpose of reducing power consumption and enabling novel computing functions. The manipulation, transmission, and detection of spin information require effective control of the magnetic orientation and dynamics in magnetic materials, which can be realized by the interactions between magnetic moments and spin currents. In this thesis, we explore two types of spin currents, carried by conduction electrons and magnons, respectively, to realize efficient magnetic switching and transmission of dynamical spin signals.&#13;
&#13;
Using electron spin current, we first demonstrated current-induced magnetic switching via the strong spin-orbit torque from topological insulators at room temperature. Second, we achieved spin-orbit torque switching of a ferromagnetic Weyl semimetal, where the topological properties are manipulated by the magnetic switching. Spin waves can also transmit spin via magnon spin currents. In the third work, we used magnetic domain walls to manipulate the phase and magnitude of a coherent spin wave and, in turn, investigated the domain wall motion driven by a spin wave. This mutual control can pave the way towards all-magnon computing devices. In the fourth work, we studied the nonreciprocal transmission of incoherent magnon currents in a magnetic bilayer, where dynamic interlayer dipolar interactions cause asymmetric magnon diffusion. This effect is useful for designing signal isolation devices. Finally, we extended the material system from ferromagnets to antiferromagnets. We demonstrated long-distance spin transport in antiferromagnetic insulators with easy-plane anisotropy, where the magnon eigenmodes are linearly polarized and propagate in a birefringent manner.&#13;
&#13;
To conclude, we have explored multiple opportunities for magnetic switching and spin transport using electron and magnon spin currents, representing a significant step towards energy-efficient and highly tunable memory and computing technologies.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143320</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Benefits and Detriments of Aneuploidy in Cancer</title>
<link>https://hdl.handle.net/1721.1/143319</link>
<description>The Benefits and Detriments of Aneuploidy in Cancer
Replogle, John Michael
Aneuploidy, defined as having a chromosome number that is not a multiple of the organism’s haploid number, is a hallmark of cancer. This creates an imbalanced karyotype, where the copy number of hundreds or thousands of genes is altered. About 90% of solid tumors and 70% of blood cancers are aneuploid.&#13;
&#13;
How does aneuploidy affect cancer cells? These copy number alterations affect expression at the RNA and protein levels, which causes numerous problems for cells. Aneuploidy increases genomic instability, both through higher rates of DNA damage and through more frequent chromosome missegregation. The proteome is also significantly challenged; excess proteins aggregate or must be degraded, stressing chaperones and the proteasome. These stresses culminate in slow proliferation, particularly through G1 of the cell cycle, relative to euploid cells.&#13;
&#13;
However, evidence is growing for the ways that aneuploidy benefits cancer cell fitness. Aneuploidy is associated with poor patient survival in cancer. In chapter 2, this dissertation describes another effect of aneuploidy: increased resistance to a wide variety of drugs. The slower proliferation of aneuploid cells is the predominant mechanism that protects them from some of the most common chemotherapeutics used today. When proliferation rate is equal between euploid and aneuploid cells, the chemotherapy resistance caused by aneuploidy mostly disappears; however, there is also some evidence for aneuploidy-induced chemotherapeutic resistance not explained by the cell cycle defects of aneuploidy. Beyond drug resistance, aneuploidy may benefit cancer cell fitness in other ways: there is growing, but still mixed, evidence that aneuploidy may promote immune evasion by tumors and increase metastasis. This dissertation discusses the current evidence for how aneuploidy may provide advantages to a cancer cell.&#13;
&#13;
Outside of cancer, several chromosomal disorders exist, including Down syndrome, which may be better understood through aneuploidy. The appendix of this dissertation explores aneuploidy-tolerance and how trisomy 21 cells can relieve their proliferation deficit. A CRISPR screen for improved growth of trisomy 21 cells identified several genes of interest that may specifically contribute to proliferation of trisomy 21 cells. Ultimately, more work is needed to understand how these genes of interest interact with aneuploidy and trisomy 21 to affect proliferation.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143319</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization Theory and Machine Learning Practice: Mind the Gap</title>
<link>https://hdl.handle.net/1721.1/143318</link>
<description>Optimization Theory and Machine Learning Practice: Mind the Gap
Zhang, Jingzhao
Machine learning is a technology developed for extracting predictive models from data so as to be able to generalize predictions to unobserved data. The process of selecting a good model based on a known dataset requires optimization. In particular, an optimization procedure generates a variable in a constraint set to minimize an objective. This process subsumes many machine learning pipelines, including neural network training, which will be our main testing ground for theoretical analyses in this thesis.&#13;
&#13;
Among different kinds of optimization algorithms, gradient methods have become the dominant algorithms in deep learning due to their scalability to high dimensions and their natural connection to backpropagation. However, despite the popularity of gradient-based algorithms, our understanding of such algorithms in a machine learning context from a theory perspective seems far from sufficient. On one hand, within the current theory framework, most upper and lower bounds are matched, and the theory problems seem solved. On the other hand, the theoretical analyses hardly generate empirically faster algorithms than those found by practitioners. In this thesis, we review the theoretical analyses of gradient methods and point out the discrepancy between theory and practice. We then provide an explanation for why the mismatch happens and propose some initial solutions by developing theoretical analyses driven by empirical observations.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143318</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Instance Aware Systems using Explicit Performance Modeling</title>
<link>https://hdl.handle.net/1721.1/143317</link>
<description>Building Instance Aware Systems using Explicit Performance Modeling
Nathan, Vikram
Computer systems are often optimized to realize the best performance possible. However, these optimization techniques are typically blind to the system’s actual workload or the particular environment in which the system is deployed. The lack of awareness of the full picture, or “instance”, limits the extent to which a system’s performance can be improved. However, achieving instance awareness is difficult because it involves optimizing over a large search space of configurable parameters. This thesis explores the use of explicit performance modeling in the context of three different systems to accelerate this optimization process and therefore make instance awareness practical:&#13;
&#13;
Flood is a multidimensional database index that is tuned for both lowest latency and minimal space overhead on a query distribution known ahead of time. Flood uses a simple grid-based index system that adjusts the number of partitions in each dimension based on the workload. The “knobs” Flood turns are the parameters of this grid, making it more flexible than existing indexes, which have fewer such knobs. Flood models system performance as a combination of features determined by these grid parameters, and uses explicit measurement to train this model.&#13;
&#13;
Cortex is a correlation index that allows databases to index attributes correlated with already-indexed attributes with minimal additional overhead but substantial performance improvement. Cortex decides which points should be considered outliers and inliers in the correlation; this fine-grained control is the set of knobs that Cortex uses to instance-optimize its performance: for the hardware it runs on, the host index of the database, and the distribution of queries.&#13;
&#13;
Minerva is an end-to-end transport algorithm for video streaming, which aims to achieve Quality-of-Experience fairness, so all clients sharing a bottleneck link in the network have roughly equal picture quality and minimal stalls. Importantly, this is achieved without compromising the bandwidth share of non-video traffic and without any client knowing about the others’ existence. Minerva achieves fairness by making the network instance aware, using information about the client’s state and the videos being streamed to model the client’s performance and scale the aggressiveness of its congestion control algorithm.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143317</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Structure Function Relationship using Bio-Inspired&#13;
DNA-Chromophore Complexes</title>
<link>https://hdl.handle.net/1721.1/143316</link>
<description>Exploring Structure Function Relationship using Bio-Inspired&#13;
DNA-Chromophore Complexes
Chen, Wei Jia
Natural light harvesting complexes absorb and transfer energy towards the reaction center with remarkably high quantum efficiency in the presence of thermal fluctuations. Furthermore, this network of natural complexes delivers absorbed energy via a series of exciton transport steps with an efficiency higher than that predicted by a classical random walk, in a process that is not yet fully understood. The function of these natural complexes is controlled by carefully arranging electronically active molecules with nanoscale precision using a network of protein scaffolds.&#13;
&#13;
However, novel protein molecules are difficult to manipulate systematically. We adopt a DNA-based framework to scaffold cyanine chromophores in order to construct modularizable photonic circuit components. This framework allows nanoscale precision in the control of excitons and their dynamics, which is a prerequisite for the implementation of large-scale molecular electronics. We chose cyanine chromophores for their ready availability and extensive synthetic and bioconjugation library, as well as their relatively well-understood excited-state pathways.&#13;
&#13;
In this work, we use a combination of ensemble and single-molecule fluorescence spectroscopies and theoretical modelling to explore ways in which the photophysics of the cyanine chromophore Cy3 may be controlled through tuning of its DNA scaffold. We find that the structural rigidity of the DNA scaffold can be used to directly control the heterogeneity of the Cy3 excited-state photophysics as well as its energy transfer efficiency. We also found that delocalized excited states, formed from the electronic coupling between Cy3 monomers and whose strength can be tuned precisely using DNA nanotechnology, can be used to modulate the rate of end-to-end energy transfer. Finally, we utilize the ability of DNA-scaffolded Cy3s to seed the formation of solid-phase silica nanoparticles and describe the photophysical changes that result from the silicification process.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143316</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring evolution of multi-ion effects and electron temperature in ICF implosions at Omega and the NIF</title>
<link>https://hdl.handle.net/1721.1/143307</link>
<description>Exploring evolution of multi-ion effects and electron temperature in ICF implosions at Omega and the NIF
Kabadi, Neel
In inertial confinement fusion (ICF) implosions, one of the most important parameters for quantifying performance is temperature. In this thesis, effects related to temperature evolution of both the ions and electrons within the central plasma are explored in detail. &#13;
&#13;
From measurements of multiple ion temperatures in shock-driven ICF implosions, it is determined that shocks couple energy in direct proportion to ion mass in multi-ion plasmas, causing the deuterons and tritons to be out of thermal equilibrium during the shock-burn phase. It is also found that separation of the ion species can be explained by multi-ion diffusion driven by the strong gradients in the converging shock front.&#13;
&#13;
Unlike the ion temperature, the measured hotspot electron temperature is not significantly affected by the converging shock and plasma flows. In this thesis, the design and prototype testing of a new diagnostic for time-resolved measurements of the hotspot electron temperature in ignition-relevant implosions at the OMEGA laser facility are discussed. Initial data obtained in OMEGA Cryo-DT and room-temperature implosion experiments are presented. A finalized diagnostic design and implementation plan is also presented.&#13;
&#13;
An ICF implosion is an ideal platform for studying energy-dependent fusion cross sections, relevant to stellar nucleosynthesis, as the ions are fully ionized within a thermal plasma, closely mimicking the conditions in stellar objects. The D3He fusion cross section was studied in this work through measurements of the DD and D3He fusion yields. It was found that the ICF-based measurements resulted in a significantly lower cross section than that obtained in standard accelerator-based experiments. This finding has implications for modeling of bound-electron screening in accelerator experiments, and for modeling of reaction rates in stellar objects.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143307</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiment and Modeling Combined Kinetic Study of Bottom-up Polycyclic Aromatic Hydrocarbon Formations</title>
<link>https://hdl.handle.net/1721.1/143306</link>
<description>Experiment and Modeling Combined Kinetic Study of Bottom-up Polycyclic Aromatic Hydrocarbon Formations
Yang, Jeehyun
Despite their importance, the chemical mechanisms of polycyclic aromatic hydrocarbon (PAH) formation are not well understood. Therefore, a combined theoretical and experimental study of the chemical kinetics of PAH formation is essential to deepen our understanding and draw a complete picture of aromatic chemistry. This thesis includes both modeling and experimental work on PAH formation from small molecules. Through a combination of high-level quantum chemistry calculations, reaction rate coefficient calculations, and simulation of reactions, bottom-up PAH formation chemistry was predicted and understood. This model prediction can be validated and improved when combined with advanced experimental techniques using a unique apparatus that consists of a quartz reactor coupled with time-of-flight mass spectrometry. Chapter 2 focuses on experimentally validating model-predicted tricyclic PAH (phenanthrene and anthracene) formation through the HACA mechanism during the (1-, 2-)naphthalenyl radical + acetylene reaction at temperatures between 500 and 800 K and pressures between 15 and 50 Torr. We measure significant quantities of C14H10 for the first time, as well as C12H8 from the 2-naphthalenyl radical + acetylene. We also explain the discrepancy between our experimental study and the previous experiment performed by Parker et al., which could not detect C14H10. Chapter 3 focuses&#13;
on the investigation of benzyne-related chemistry (both benzyne + benzene and benzyne + toluene) to validate its ability to rapidly form PAHs through &#120587;-bond 1,4-cycloaddition/fragmentation (1,4-CAF), which was predicted by the kinetic model. We measure C10H8 and C12H10, as well as their kinetics, from benzyne + benzene at 800 K and 30 Torr. We measure C10H8, C11H10, and C13H12 from benzyne + toluene at 800 K and 30 Torr. These results provide the first direct experimental evidence for rapid molecular growth through &#120587;-bond 1,4-CAF of o-benzyne to C6 aromatic hydrocarbons. In Chapter 4, preliminary kinetic modeling of PAH formation in toluene (+ benzene) pyrolysis at one experimental condition (1467 K, 10.02 Torr, up to 0.56 s) is reported to describe major product peaks observed by Shukla et al. using the reaction mechanism generator. Chapter 5 shows a recommended future&#13;
application of the knowledge learned from this thesis to astrochemistry. Overall, the studies here show a successful investigation of bottom-up PAH formation through experimental and theoretical approaches.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143306</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards comprehensive design of electrolytes for electrochemical energy storage</title>
<link>https://hdl.handle.net/1721.1/143303</link>
<description>Towards comprehensive design of electrolytes for electrochemical energy storage
Leverick, Graham
Increasing utilization of renewable, but intermittent energy sources like wind and solar necessitates accompanying deployment of energy storage technologies, where one promising approach is to store energy electrochemically using batteries or electrolyzers. At the heart of all electrochemical systems is the presence of an ionically conductive, but electronically insulating electrolyte that enables a chemical reaction to be separated into two electrochemical reactions when electrical current flows through an external circuit. The electrolyte plays a vital role in determining the performance of electrochemical devices, where properties like its ionic conductivity and (electro)chemical stability determine the working voltage window, power and cycle life of electrochemical devices. The electrolyte can also interact with redox reactions occurring at the electrodes, altering their thermodynamics and kinetics. Therefore, understanding how to control the properties of the electrolyte is vital for the development of next generation electrochemical storage devices.&#13;
&#13;
In this thesis, a deeper understanding of the fundamental interactions that govern electrolyte performance is developed. The influence of electrolyte on reaction pathways and kinetics is highlighted through studies on the discharge and charge processes in Li-O2 batteries. A unified picture of the influence of solvent, salt concentration and anion on the oxygen reduction reaction that occurs during discharge is developed based on the solvation energy of Li+ and O2- ions, which influences the Li+-O2- coupling strength. During the charging reaction, the thermodynamics and kinetics of LiI-based redox mediators reacting with Li2O2 and LiOH are shown to depend on the solvation strength of Li+ and I- ions. Moreover, I-Br interhalide redox mediators are introduced which allow the oxidizing power of the redox mediator to be tuned independently of the solvent. The role of solvation entropy, as well as the composition and temperature dependence of the dielectric constant, in the ionic conductivity of liquid electrolytes is investigated. Finally, a unified picture of ion conduction in liquid, polymer and ceramic Li-electrolytes is presented based on microscopic dynamics and the energy landscape. From a stronger understanding of the role of solvation, dynamics and energy barriers in the performance of electrolytes, next generation electrochemical storage devices with enhanced properties can be designed.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143303</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of wild-type and hotspot mutant p53 interactomes: The hunt for mutant-specific binding partners &#13;
&amp;&#13;
Investigation and characterization of natural killer cell responses in genetically-engineered mouse models of non-small cell lung cancer</title>
<link>https://hdl.handle.net/1721.1/143301</link>
<description>Comparison of wild-type and hotspot mutant p53 interactomes: The hunt for mutant-specific binding partners &#13;
&amp;&#13;
Investigation and characterization of natural killer cell responses in genetically-engineered mouse models of non-small cell lung cancer
Kohn, Ryan
Part 1: The most frequently mutated gene in human cancer is the tumor suppressor gene p53, which is routinely found to have missense mutations at recurrent hotspot residues. Investigations of mutant p53 hotspot variants have suggested novel gain-of-function (GOF) traits, which can be driven through mutant-specific protein-protein interactions. In order to investigate possible drivers of GOF phenotypes, we used the protein proximity labeling assay BioID to interrogate differences between wild-type and point mutant p53 interactomes. We demonstrate that p53 retains function after addition of the biotin ligase miniTurbo and that known p53 binders can be identified. We further characterize the interactomes of 4 p53 hotspot mutants (R172H, R245Q, R245W &amp; 270H) and reveal numerous examples of mutant-specific binding partners, including potential pan-mutant interactors. We also validated the increased interaction between p53 R172H and the chaperonin CCT8. Further investigation to validate mutant-specific and pan-mutant binding partners found in this study could unlock a plethora of potential new treatments for cancers harboring p53 point mutants.&#13;
&#13;
Part 2: While there has long been speculation that the immune system can recognize and kill tumor cells, recent developments in immunotherapy have truly transformed cancer treatments. Although some patients who receive immunotherapy have durable responses, and even cures, a significant subset of patients do not respond, illustrating the need for improved immunotherapies and a better understanding of what drives the differences seen in patients. In this work, we describe the development of a novel method for investigating natural killer (NK) cell responses in autochthonous models of lung cancer, using lentiviral vectors that initiate tumors and express the NK cell ligand, m157. Using this model, we demonstrate that NK cells become rapidly dysfunctional during tumor progression, but can be stimulated to regain function. We additionally reveal that activation of NK cells in the tumor microenvironment results in increased recruitment and infiltration of adaptive immune cells. This work suggests that modulating NK cell responses may improve the efficacy of T cell immunotherapies.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143301</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Bioinspired Approach to Engineer the Seed Microenvironment</title>
<link>https://hdl.handle.net/1721.1/143300</link>
<description>A Bioinspired Approach to Engineer the Seed Microenvironment
Zvinavashe, Augustine T.
Bioinspired by the tardigrade and Bombyx mori, we engineer the seed microenvironment to encapsulate, preserve and deliver Rhizobium tropici. Scientific discoveries in agriculture and sustainability are at the crossroads of materials science, biochemistry, agriculture and biology. They underpin the innovative technological solutions that will impact water, energy and food security (WEFS). These new technologies can then be implemented to address major societal problems linked to climate change, soil degradation and increasing population. In particular, our objective is to augment agricultural outputs (i.e. crop yield and production) while decreasing inputs (e.g. water, energy, fertilizers, land, pesticides) by developing new technology to deploy plant-growth-promoting bacteria (PGPB) in the soil to alleviate abiotic plant stressors such as soil salinity and drought. Using PGPB to reduce and complement the use of synthetic fertilizer, our design approach engineers the seed microenvironment by coating seeds with PGPB-laden biopolymers. PGPB are well known to enhance crop production and protect plants from biotic and abiotic stresses, while decreasing the need for water and fertilizers. However, the bacteria’s delicate nature has hindered their use in current agricultural practices, due to low survivability. We use a silk and trehalose mixture that is able to encapsulate, protect, preserve and deliver Rhizobium tropici to Phaseolus vulgaris upon sowing. The coated P. vulgaris seeds are shown to significantly alleviate soil salinity and water stresses in Moroccan soil when compared with uncoated (control) P. vulgaris seeds.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143300</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory-constrained Data-driven Model Selection, Specification, and Estimation: Applications in Discrete Choice Models</title>
<link>https://hdl.handle.net/1721.1/143299</link>
<description>Theory-constrained Data-driven Model Selection, Specification, and Estimation: Applications in Discrete Choice Models
Aboutaleb, Youssef Medhat
This thesis provides a framework, along with demonstrated applications, for carefully bringing data-driven flexibility to the specification and model selection of discrete choice models while, at the same time, maintaining usability for analysis. Assumptions brought to bear under the classical theory-based paradigm enjoy varying degrees of credibility. Some are rooted in economic theory (e.g., utility-maximizing behavior) or in information available to the scientist on the data-generating process (e.g., exogeneity). These assumptions can be argued to be highly credible. Others are driven by convenience, convention, the pursuit of smaller standard errors, or the lack of a systematic specification and model selection process (e.g., restrictive functional and distributional forms, and trial-and-error specification testing). These assumptions are arguably less credible.&#13;
&#13;
Our goal is to overcome some of the arbitrary specification and model selection practices that undermine credibility. To this end, theory-constrained data-driven flexibility in specification is introduced to discrete choice models through an optimization framework. Systematic data-driven methods for model selection are used to enhance replicability. The introduced flexibility is constrained to guarantee trustworthiness of predictions through consistency with theory. At the same time, the imposed constraints are validated through hypothesis tests to maintain credibility.&#13;
&#13;
The framework we introduce positions us well to realize synergies between the data-driven and theory-based paradigms. The starting point for our approach is discrete choice models with well-established theoretical underpinnings that facilitate causal and behavioral interpretations. Discrete choice models consistent with random utility maximization, for example, are tethered to microeconomics and enable sound economic and welfare valuations. Further, the entire machinery of econometrics remains applicable to address endogeneity issues. This is in contrast to emerging trends in the literature that start with data-driven classifiers in pursuit of predictive gains, and then, as an afterthought, attempt to reconcile output with theory.&#13;
&#13;
We provide applications of our proposed framework in addressing specification aspects of both the systematic and stochastic components of discrete choice models. Specialized solution algorithms are developed for each application, leveraging some of the latest advances in mixed-integer and conic optimization (for classical estimation) and in Markov chain Monte Carlo methods (for Bayesian inference). The methods developed are tested for consistency using synthetic data and applied to empirical data.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143299</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming the limits of strain-induced martensitic transformation in metastable face-centered cubic alloys</title>
<link>https://hdl.handle.net/1721.1/143297</link>
<description>Overcoming the limits of strain-induced martensitic transformation in metastable face-centered cubic alloys
Wei, Shaolou
Metastable phenomena are ubiquitous and have enabled substantial property enhancement in metallic alloys. Amongst them, martensitic phase transformations activated by plastic straining are considered one of the most effective pathways to promote strength while preserving desirable ductility. The resultant transformation-induced plasticity effect has enabled great success in advancing the design of steels, titanium alloys, and, more recently, complex concentrated alloys. However, an intrinsic dilemma still hinders the development of these metastable alloys: the limited plastic strain accommodation capability of the martensite often leads to early-stage damage nucleation.&#13;
&#13;
This thesis builds upon the objective of overcoming this intrinsic dilemma and explores potential microstructural design guidance with the aid of in-situ experiments and theoretical calculations. Two categories of approaches, focusing respectively on phase transformations and on plastic deformation micro-mechanisms, are explored. Specifically, plastic strain-induced sequential martensitic transformation and thermally driven martensite reversion are recognized as having the potential to further improve the mechanical properties of metastable alloys. In light of the atomistic processes of strain-induced face-centered cubic (FCC) to hexagonal close-packed (HCP) martensitic transformation, a plastic deformation-driven stacking fault formation concept is also assessed, which contributes to latent strain hardening while mitigating the formation of blocky HCP martensite. Suggestions for future metastable alloy design are also proposed based on the current experimental and theoretical understanding.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143297</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning-Augmented Spectroscopies for Intelligent Materials Design</title>
<link>https://hdl.handle.net/1721.1/143296</link>
<description>Machine Learning-Augmented Spectroscopies for Intelligent Materials Design
Andrejevic, Nina
Neutron and photon scattering and spectroscopy represent two fundamental categories of characterization techniques used to interrogate materials’ structural and dynamical properties at atomic to mesoscopic length scales. As advances at scientific user facilities enable the collection of ever larger data volumes in higher-dimensional parameter spaces, the design, analysis, and interpretation of such experiments becomes both increasingly valuable and complex. At the same time, interest in novel functional and quantum materials for next-generation technologies, including dissipationless electronics, energy harvesting, and quantum computing, demands an understanding of unconventional or emergent properties beyond the scope of many approximate models. Machine learning methods are designed to leverage large, high-dimensional datasets in order to detect underlying patterns and make informed predictions on related tasks. Thus, integration of these data-driven methods with neutron and photon spectroscopies has the potential to improve experimental design, accelerate and enhance data analysis, and uncover insights beyond traditional models.&#13;
&#13;
This thesis work demonstrates the proposed integration of machine learning to augment experimental design and analysis in the context of four different scattering and spectroscopic techniques. First, we use Euclidean neural networks to predict materials’ vibrational properties directly from the atomic masses and positions of their constituent atoms, highlighting the importance of using effective materials data representations to strengthen model performance and interpretability. We then consider how machine learning methods can be applied to develop effective, low-dimensional representations of materials’ spectral signatures, which we exemplify through unsupervised representation learning of Raman spectra. Such learned representations are proposed as efficient prediction targets for supervised learning from relevant structural or chemical attributes, or as convenient parameter spaces for optimization of physical models. We illustrate the latter by training a variational autoencoder to retrieve the sample parameters of proximity-coupled heterostructures from their polarized neutron reflectometry profiles with high resolution. Finally, we study the capacity of machine learning models to extract “hidden” insights from spectral data by developing a neural network classifier of materials’ electronic band topology directly from X-ray absorption near-edge structure spectra. While we develop analysis frameworks with specific applications in mind, the proposed methodologies are expected to apply more broadly to diverse scattering and spectroscopic techniques.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143296</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical and Magnetochemical Approaches for Neuronal Modulation</title>
<link>https://hdl.handle.net/1721.1/143292</link>
<description>Electrochemical and Magnetochemical Approaches for Neuronal Modulation
Park, Jimin
The development of tools to deliver chemical signals to specific neurons can enhance our understanding of chemical signaling in the nervous system and enable chemical therapies for neurological disorders. Existing technologies for chemical neuromodulation, including intracranial injection of chemicals through an implanted cannula, 1) do not apply in the case of transient and unstable chemical species and 2) require tethering of animal subjects to external hardware, which can limit the study of freely-behaving subjects. By employing nanomaterials chemistry, electrochemistry, and magnetism, this thesis seeks to develop in vivo chemical delivery systems with unprecedented capabilities. First, we design an electrochemical strategy that enables in situ synthesis and delivery of unstable chemical signals to targeted neuronal circuits with nanoscale electrocatalysts, biocompatible precursors, and electric fields. This electrochemical system is implemented in an implantable probe allowing for the investigation of neurophysiological processes mediated by unstable chemical species, such as nitric oxide and carbon monoxide, in the mouse brain. The second focus of this thesis is to design a magnetochemical system for wireless delivery and control of chemical signals without tethered hardware. By designing nanotransducers or molecular radicals capable of converting non-invasive magnetic field stimuli into chemical signals, such as protons and flavin cofactors, we remotely modulate activity of specific neurons and chemical signal-mediated behaviors in the mouse.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143292</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase Transitions in Dipole-Dipole Interacting Atomic Systems</title>
<link>https://hdl.handle.net/1721.1/143289</link>
<description>Phase Transitions in Dipole-Dipole Interacting Atomic Systems
Wang, Qingyang
Phase transitions are generally a many-body phenomenon, and in order to access the full range of interesting physics of phase transitions, one needs interactions between the microscopic constituents. In this thesis, the phase transitions of atomic systems with interparticle dipole-dipole interaction controlled by laser fields are studied.&#13;
&#13;
In the first half of the thesis, a system of externally polarized dipolar molecules at half-filling moving along a one-dimensional zigzag chain is studied, including its ground-state phase diagrams. The dipoles are oriented in-plane. Together with the geometry of the chain, this gives rise to a bond-alternating nearest-neighbor interaction due to simultaneous attractive and repulsive interactions. By tuning the ratio between the nearest-neighbor interaction and the hopping, various phases can be accessed by controlling the polarization angle. In the ultrastrong coupling limit, the system simplifies to a frustrated extended axial Ising model. For the small coupling limit, a qualitative discussion of the ordering behavior using effective field theory arguments is provided. We show that when the chain angle is small, the system mostly exhibits a phase transition from the gapless phase into the gapped phase, whereas a large chain angle drives the system into a dimerized phase, where the hopping strength is closely related to the orientation of the dimerized pairs of molecules.&#13;
&#13;
In the latter part of the thesis, the interatomic correlations of a semiclassical driven-dissipative Dicke model are studied. By numerically examining the genuine multiparticle entanglement of reduced systems of various particle numbers, we show that entanglement is built up at the transition point, even when the system transitions into a highly mixed state. This suggests that the phase transition is quantum in nature. Additionally, the quantum discord of the system is computed. By exploiting the full permutation invariance of the system, we show that the numerical complexity of computing quantum discord is significantly reduced. The result indicates that when dissipation becomes dominant, the system is not entangled but possesses large quantum discord.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143289</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Algorithms and Applications in Health Care</title>
<link>https://hdl.handle.net/1721.1/143285</link>
<description>Machine Learning Algorithms and Applications in Health Care
Sobiesk, Matthew
There have been many recent advances in machine learning, resulting in models that have had major impact in a variety of disciplines. Some of the best-performing models are black boxes, which are not directly interpretable by humans. However, in some applications such as health care it is vital to use interpretable models to understand why a model is making its predictions, to ensure that using them to inform decision making will not unexpectedly harm the people they are meant to help. This leads to the question of whether a trade-off between predictive accuracy and interpretability exists, and how we can improve interpretable models' performance to reduce such trade-offs if they do.&#13;
&#13;
In the first chapter, we show that optimal decision trees are equivalent in terms of modeling power to neural networks. Specifically, given a neural network (feedforward, convolutional, or recurrent), we construct a decision tree with hyperplane splits that has identical in-sample performance. Building on previous research showing that given a decision tree, we can construct a feedforward neural network with the same in-sample performance, we prove the two methods are equivalent. We further compare decision trees and neural networks empirically on data from the UCI Machine Learning Repository and find that they have comparable performance.&#13;
&#13;
In the second chapter, we propose a new machine learning method called Optimal Predictive Clustering (OPC). The method uses optimization with strong warm starts to simultaneously cluster data points and learn cluster-specific logistic regression models. It is designed to combine strong predictive performance, scalability, and interpretability. We then empirically compare OPC to a wide variety of other methods such as Optimal Regression Trees with Linear Predictors (ORT-L) and XGBoost. We find that our method performs on par with cutting edge interpretable methods, and that it enhances an ensemble of methods to achieve the best out-of-sample performance across all models.&#13;
&#13;
In the third chapter, we predict one-year transplant outcomes for lung, liver, and kidney data to investigate whether predicted post-transplant outcomes should be included in the allocation systems of organs other than lungs. We find that the models do not differentiate one-year graft survival or failure outcomes effectively enough to be useful components of the organ allocation process. We then theorize about possible reasons for this failure, including the actual transplant procedure having a large effect on the one-year graft outcome or the potential need for additional data, like genetic information.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143285</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraining Natural and Anthropogenic Disturbances in the Delivery of Coastal Ecosystem Services</title>
<link>https://hdl.handle.net/1721.1/143283</link>
<description>Constraining Natural and Anthropogenic Disturbances in the Delivery of Coastal Ecosystem Services
Luk, Sheron You-Xian
Coastal ecosystems provide key services that benefit human wellbeing yet are undergoing rapid degradation due to natural and anthropogenic pressures. This thesis seeks to understand how disturbances impact salt marsh and estuarine ecosystem functioning in order to refine their role in coastal ecosystem service delivery and predict future resilience. Salt marsh survival relative to sea-level rise increasingly relies on the accumulation and preservation of soil organic carbon (SOC). First, I characterized SOC development and turnover in a New England salt marsh and found that salt marsh soils typically store marsh grass-derived compounds that are reworked over centuries to millennia. Next, I assessed how two common marsh disturbances – natural ponding and anthropogenic mosquito ditching – affect salt marsh carbon cycling and storage. Salt marsh ponds deepen through soil erosion and decomposition of long-buried marsh peat. Further, the SOC lost during pond development is not fully recouped once drained ponds are revegetated and become virtually indistinguishable from the surrounding marsh. Mosquito ditches, which were installed in ~90% of New England salt marshes during the Great Depression, did not significantly alter marsh carbon storage. In Buzzards Bay, Massachusetts, a US National Estuary, we tested relationships among measures of estuarine water quality, recreational activity, and local socioeconomic conditions to understand how the benefits of cultural ecosystem services are affected by shifts in water quality associated with global change and anthropogenic activity. Over a 24-year period, water quality degradation coinciding with increases in chlorophyll a is associated with declines in fishery abundance and cultural ecosystem service values ($0.08–0.67 million USD).
In combination, incorporating both anthropogenic and natural disturbances into assessments of coastal ecosystem functioning and service delivery can produce improved estimates of ecosystem service valuation for effective resource decision-making under future climate scenarios.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143283</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soft Robotics Applied to the Development of a Diaphragm Assist System</title>
<link>https://hdl.handle.net/1721.1/143282</link>
<description>Soft Robotics Applied to the Development of a Diaphragm Assist System
Hu, Lucy
Severe diaphragm dysfunction can lead to respiratory failure, requiring permanent mechanical ventilation. Permanent tethering to a mechanical ventilator via a patient’s mouth or tracheostomy can interfere with a patient’s autonomy by hindering activities like speech and swallowing. This thesis works toward a soft robotic alternative that aims to intervene internally at the diaphragm rather than at the mouth. For medical problems that are mechanical in nature, soft robotics offers a promising solution by coupling advanced robotic control with soft elements that can interact nondestructively with biological systems. In this work, we present the findings from the development of a soft robotic diaphragm assist system, from exploration to proof-of-concept.&#13;
&#13;
In order to understand how soft robotic technologies interact with the respiratory system, simulators of respiratory motion and biomechanics were built with different soft actuator mechanisms. We find that pneumatic artificial muscles are capable of driving the diaphragm function in a respiratory simulator and replicating the work of breathing. Taking inspiration from this biomimetic system, pneumatic artificial muscles are designed and optimized for use in the diaphragm assist system. By implanting contractile, soft robotic actuators above the diaphragm to push down on the diaphragm during inspiration, this diaphragm assist system functions as an implantable ventilator. We demonstrate the proof-of-concept feasibility of this system to augment physiological metrics of ventilation in an in vivo porcine model of varied respiratory insufficiency. This system synchronizes with native respiratory effort to augment respiratory function.&#13;
&#13;
This diaphragm assist system lays the foundational work for a new therapeutic ventilation option that aims to restore respiratory performance without sacrificing quality of life.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143282</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microscale polymeric-based technologies for controlled vaccine delivery</title>
<link>https://hdl.handle.net/1721.1/143281</link>
<description>Microscale polymeric-based technologies for controlled vaccine delivery
Sarmadi, Morteza
Outbreaks of infectious diseases such as COVID-19 are among the most critical challenges threatening global health and the economy, particularly in the developing world. Technologies that can improve the delivery, access, effectiveness, and stability of vaccination, a promising tool against outbreaks, would be strategic assets to potentially save lives and avoid trillions of dollars in financial losses. Our group has been developing such platform technologies for the controlled delivery and tracking of vaccines. This thesis investigates further development of these technologies toward clinical translation. In the first part, we investigate a locally injectable microparticle system with a core-shell microstructure made by a novel 3D printing process compatible with biodegradable polymers. These microparticles can be used for delayed, pulsatile release of vaccines, thereby reducing the number of administrations to a single one. We study two translational aspects of core-shell microparticles: injectability and the mechanism of pulsatile release. To study injectability, we use a wide range of tools, namely multiphysics simulation, experiments, machine learning, and 3D printing, to establish a framework for optimal injection of microparticle-based drugs. To study the mechanism of pulsatile release, we integrate various experimental tools with multiphysics simulations to form a model describing the mechanism of degradation and pulsatile release from core-shell particles. In the next phase of this thesis, we move to a transdermal dissolvable microneedle patch that requires no injections. These microneedle patches can be used to track medical records on the patient without the need for expensive healthcare infrastructure, a challenge in the developing world. Using extensive computational modeling, we establish a design framework for microneedle devices that is widely applicable to any microneedle system. The best trade-off design is then selected for administration in vivo.
We further develop a machine learning algorithm coupled with image processing tools to provide long-term pattern classification capability for decoding information transferred by microneedles to the patient, in an automated and robust fashion. The results of this thesis could be of great interest for the development of next-generation biomedical devices for controlled vaccine delivery and other applications.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143281</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and Diffusion in Networks</title>
<link>https://hdl.handle.net/1721.1/143279</link>
<description>Inference and Diffusion in Networks
Bertolotti, Paolo
Networks provide a powerful and unified framework to study complex systems. By abstracting systems down to entities and their connections, network models provide insight into the structure and dynamics of critical systems across multiple domains. In this thesis, we study diffusion in social networks. Diffusion through networked systems corresponds to numerous consequential processes, and we focus on epidemic spread and information diffusion. We study these processes by applying and extending ideas from statistical inference. Inference, which focuses on estimation, testing, and uncertainty quantification, provides the mathematical tools to learn from data rigorously. This thesis utilizes both theory and data in order to address several real-world challenges.&#13;
&#13;
In the first chapter, we study epidemic spread and consider the problem of identifying infected individuals in a population of size N. We introduce an approach that uses significantly fewer than N tests when infection prevalence is low. Our approach utilizes network structure to improve the performance of a classical approach called group testing. In the second chapter, we derive the performance of the most common form of group testing, Dorfman testing, under imperfect tests. We derive the full distribution of the number of tests needed, the number of false negatives, and the number of false positives, taking into account the conditions faced by medical practitioners. In the third chapter, we study information diffusion and introduce a statistical testing framework to identify cascades in network data. We define a test statistic that distinguishes between large, meaningful branches and the small branches formed during normal periods, and apply our statistic to identify information cascades in call detail record data. In the fourth chapter, we study the social network effects of drone strikes, focusing on information and physical diffusion around strikes. Utilizing a dataset of over 12 billion call detail records, we systematically analyze the impact of 74 U.S. drone strikes on communication and mobility in Yemen between 2010 and 2012.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143279</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrasound-based Noninvasive Monitoring Methods for Neurocritical Care</title>
<link>https://hdl.handle.net/1721.1/143271</link>
<description>Ultrasound-based Noninvasive Monitoring Methods for Neurocritical Care
Imaduddin, Syed M.
The human brain lacks internal energy deposits and instead relies on a steady uninterrupted supply of blood for its metabolic needs. Short-lived disruptions in cerebral blood supply can cause permanent damage. Likewise, mechanical stress on the brain can cause life-threatening and long-lasting damage to neural tissue. Existing neuromonitoring methods used for patients with severe head injury tend to be highly invasive and carry a risk of tissue damage and infection. To advance the field of clinical neuromonitoring, we developed a set of ultrasound-based noninvasive methods for real-time extra- and transcranial volumetric blood flow measurement and for embolus detection and embolic load quantification. The blood flow waveform measurements were also used to determine intracranial pressure (ICP), intracranial compliance (ICC), and cerebrovascular resistance (CVR) in a model-based, noninvasive, patient-specific, and interpretable fashion.&#13;
&#13;
High frequencies (4-6 MHz) were used for extracranial insonation. Vessel diameters were estimated with B-mode images and were combined with color flow derived velocity measurements to yield the volumetric flow rate. B-mode derived diameter measurements had a bias of 0.2 mm and a root-mean-square error (RMSE) of 0.5 mm compared to manual annotations in a group of healthy adults. Lower frequency (1-2 MHz), lower spatial resolution insonation was necessary for transcranial application to overcome attenuation due to the skull. A spatial-resolution enhancement approach was therefore developed. The technique yielded a clinically acceptable flow estimation bias of 0.2 mL/min, and an RMSE of 22.3 mL/min over a 150 mL/min range of mean flows in phantom vessels with 2-6 mm diameters. In addition, a time-frequency approach was developed to separate closely-spaced emboli using single-channel, single-frequency insonation. The method decomposed 38% of candidate embolic signals into two or more embolic events that ultimately accounted for 69% of the overall embolic counts in pediatric patients undergoing extracorporeal membrane oxygenation or ventricular assist device therapy. Finally, tilt-table studies were carried out to validate the proposed cerebrovascular monitoring scheme. Our proposed system successfully tracked tilt-induced changes in ICP, ICC, and CVR.&#13;
&#13;
The proposed techniques allow neuromonitoring across a wide spectrum of pathologies, patient age, and disease severity in a manner that has not previously been possible. The methods all rely on a common set of readily-available ultrasound imaging approaches, enabling a unified, ultrasound-based, single-probe, noninvasive neuromonitoring device with potential to revolutionize neuromonitoring worldwide.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143271</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Biomaterial-based Stem Cell Therapy for Retinal Regeneration</title>
<link>https://hdl.handle.net/1721.1/143269</link>
<description>A Biomaterial-based Stem Cell Therapy for Retinal Regeneration
Colombe Dromel, Pierre
Retinal diseases such as age-related macular degeneration and diabetic retinopathy are the leading causes of blindness worldwide, with prevalence and resulting costs projected to increase. There are few available treatments, and their applicability is limited to slowing the degeneration of the retina resulting from these diseases. New treatments are needed to either stop the degeneration or regenerate the retina with healthy cells to improve vision. Emerging therapies in clinical trials are investigating stem cell-derived photoreceptors injected, in a saline solution, into the subretinal space to replace cells lost to retinal degeneration.&#13;
&#13;
The principal objectives of this thesis were to engineer a bio-inspired injectable matrix to incorporate stem cell-derived retinal cells and growth factors (viz., epidermal growth factor) for injection into the subretinal space or vitreous, in place of saline, and to evaluate the ability of the gel therapy to increase the viability, engraftment, and functionality of the exogenous cells in the retina. To achieve this goal, we employed conjugates of gelatin with hydroxyphenyl propionic acid, an amino acid-like molecule (Gtn-HPA), and hyaluronic acid with tyramine (HA-Tyr). We also investigated an interpenetrating network (IPN) comprising a combination of Gtn-HPA and HA-Tyr, which can be tuned to obtain the optimal mechanical, chemical, and injectable characteristics tailored to improve retinal regeneration. The effects of these hydrogels on various retinal cells (human ganglion cells, human retinal progenitors, and human cone photoreceptors) were analyzed in the in vitro setting and showed positive results: high viability, controlled differentiation, and low apoptosis.&#13;
&#13;
In the first of 3 application-based in vivo experiments, we investigated select hydrogels incorporating specific cell types to treat problems associated with retinal ganglion cells (RGCs) in vitreous injections. Transplanted RGCs showed significantly higher engraftment and process extension when encapsulated in our IPN compared to saline injections.&#13;
&#13;
In a second in vivo experiment, we transplanted human retinal progenitor cells (hRPCs) into the subretinal space to address problems associated with photoreceptors in the retina. Transplantations were quantified and compared to injections in saline at 1 and 3 weeks post-transplantation. At both time points, a 5-fold increase in engrafted hRPCs in the outer nuclear layer (ONL) was observed when the cells were injected in our biomaterial compared to injection in saline.&#13;
&#13;
In a third in vivo experiment, a novel human cone progenitor (hCP) cell line was created and studied in retinal degenerative animal models (such as RCS rats and RD1 mice). hCPs were found to engraft in high numbers and showed a significant 2-fold increase in retinal functionality, measured with optokinetic (OKN) and electroretinogram (ERG) assays. The results of this thesis motivate and guide further translational study in a large animal model to validate Gtn-HPA/HA-Tyr hydrogels incorporating retinal stem cells and growth factors for the promotion of retinal regeneration in a larger eye, with the attendant improvement in visual function.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143269</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Generation and Tracking of Fast and Agile Flight Trajectories</title>
<link>https://hdl.handle.net/1721.1/143268</link>
<description>Algorithms for Generation and Tracking of Fast and Agile Flight Trajectories
Tal, Ezra
High-speed flight through cluttered environments is essential to many time-sensitive robotics applications. It requires motion planning and flight control algorithms that enable highly accurate maneuvering at the edge of the vehicle’s capability. These algorithms must overcome challenges particular to fast and agile flight, such as complex dynamics effects including significant unsteady aerodynamics and challenging conditions like post-stall and uncoordinated flight. We propose trajectory generation and tracking algorithms that address these challenges for a quadcopter aircraft and for a fixed-wing transitioning aircraft that combines vertical take-off and landing (VTOL) with efficient forward flight.&#13;
&#13;
This thesis contains several contributions. First, we show that robust control based on incremental nonlinear dynamic inversion (INDI) enables fast and agile flight without depending on an accurate dynamics model. Based on the INDI technique, we design a comprehensive quadcopter flight control algorithm that achieves accurate trajectory tracking without relying on any vehicle aerodynamics model. Second, we show differential flatness of a global nonlinear six-degree-of-freedom (6DOF) flight dynamics model for a tailsitter flying wing transitioning aircraft. We leverage the flat transform to design an INDI flight control algorithm capable of tracking agile aerobatics maneuvers that exploit the entire flight envelope, including post-stall and sideways knife-edge flight. Third, we present a trajectory generation algorithm that aims to identify the actual dynamic feasibility boundary by efficiently combining analytical, numerical, and experimental evaluations in trajectory optimization. Finally, we demonstrate our contributions in fast and agile flight through elaborate experiments.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143268</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Turbulence in geophysics: from rotating, ionized fluids to sediment transport</title>
<link>https://hdl.handle.net/1721.1/143265</link>
<description>Turbulence in geophysics: from rotating, ionized fluids to sediment transport
Benavides, Santiago José
Part I: "Turbulence in the semiconducting regions of gas giant planets." The 'semiconducting' regions of gas giant planets, which lie between the fully ionized and electrically-neutral regions, have three distinct properties that make the dynamical nature of their flows unique. They are (i) partially ionized, (ii) subject to rotation and a strong background magnetic field, and (iii) poor electrical conductors. In this Part, I employ a series of idealized direct numerical simulations to explore the impact that each of these properties has on localized turbulent flow, and consider the implications for realistic atmospheres. In Chapter 2, I show that, despite being partially ionized, the flow in these regions can be treated as a single-species magnetohydrodynamic (MHD) fluid and that collisional heating is likely negligible. Chapter 3 demonstrates that the combination of a strong background magnetic field and a misaligned rotation axis results in the suppression of the inverse cascade, with possible implications for the formation of jets in such atmospheres. Finally, in Chapter 4, I show that the use of an 'MHD drag' fails once the time-scale associated with the Lorentz force becomes shorter than the dynamical time-scale in the system.&#13;
&#13;
Part II: "Intermittency in bed load sediment transport."&#13;
Sediment transport by wind or water near the threshold of grain motion is dominated by rare transport events. This type of intermittency, termed 'on-off' intermittency, makes it difficult to calibrate sediment transport laws or to define an unambiguous threshold for grain entrainment, both of which are crucial for predicting sediment transport rates. In Part II of this thesis, I use a combination of laboratory flume experiments and a simple stochastic model to capture the on-off intermittent statistics of bed load sediment transport near the threshold of motion. In Chapter 5, I use the stochastic model to estimate the threshold of grain motion in a novel way and to quantify the strength of the shear stress fluctuations. Further analysis of the experiments in Chapter 6 reveals the physical origin of the intermittency, demonstrating that it lies in the velocity of the grains rolling on the bed.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143265</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Signal Processing Techniques Applied to Biomedical Diagnostics</title>
<link>https://hdl.handle.net/1721.1/143262</link>
<description>Signal Processing Techniques Applied to Biomedical Diagnostics
Degani, Ismail
An effective way to combat cancer and infectious disease in resource-poor settings is to implement rapid and accurate diagnostic tests that can be administered at the point of care (POC). Developing such miniaturized, portable, and low-cost systems requires innovative approaches in both assay and device design. In this thesis, we construct a novel phase-sensitive "lock-in" amplifier (LIA) based on the Fast Walsh-Hadamard Transform (FWHT), and evaluate its ability to boost the signal-to-noise ratio of optical fluorescence signals. The LIA is designed to be resilient in challenging environments containing high/unpredictable ambient noise. We then develop two rapid diagnostic systems that pair this technology with isothermal CRISPR-Cas12a-based DNA/RNA amplification to detect clinically relevant targets with high specificity. Finally, we evaluate the clinical performance of our systems in detecting target genes for (1) SARS-CoV-2, the virus responsible for the COVID-19 pandemic, and (2) Human Papilloma Virus (HPV), the causal agent of cervical cancer.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143262</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed Palladium Catalyzed Acetoxylation of Indolines and Enantioselective Total Synthesis of (–)-Voacinol and (–)-Voacandimine C</title>
<link>https://hdl.handle.net/1721.1/143259</link>
<description>Directed Palladium Catalyzed Acetoxylation of Indolines and Enantioselective Total Synthesis of (–)-Voacinol and (–)-Voacandimine C
Flynn, Kristen M.
I.	Directed Palladium Catalyzed Acetoxylation of Indolines. Total Synthesis of N-Benzoylcylindrocarine&#13;
&#13;
We describe a palladium-catalyzed C7-acetoxylation of indolines with a range of amide directing groups. While a variety of substituents are tolerated on the indoline core and the N1-acyl group, the acetoxylation is most sensitive to the C2- and C6-indoline substituents. The practicality of this indoline C7-acetoxylation is demonstrated using a cinnamamide substrate on mmol scale. Several N1-acyl groups, including those present in natural alkaloids, guide C7-acetoxylation of indoline substrates over a competitive C5-oxidation. The application of this chemistry allowed for the first synthesis of N-benzoylcylindrocarine by late-stage C17-acetoxylation of N-benzoylfendleridine.&#13;
&#13;
II.	Total Synthesis of (–)-Voacinol and (–)-Voacandimine C&#13;
&#13;
We describe the first total synthesis of the complex aspidosperma alkaloids (–)-voacinol and (–)-voacandimine C via a biogenetically inspired late-stage C7-methylenation strategy. We envisioned rapid access to these natural alkaloids from a common symmetrical precursor assembled by methylenation of a D-ring oxidized variant of the related natural product (–)-deoxoapodine. Chemoselective N9-oxidation of a pentacyclic deoxoapodine precursor enabled the synthesis of the corresponding hexacyclic C8-aminonitrile. Stereocontrolled methylenation of a C8-enamine derivative of deoxoapodine, accessed by ionization of the C8-aminonitrile, afforded a symmetrical dodecacyclic bis-aminonitrile. Final-stage, biogenetically inspired, controlled reductive opening of the oxolanes of this dodecacyclic intermediate provided a unified approach to (–)-voacinol and (–)-voacandimine C, while direct reduction of the same intermediate afforded the structurally related (–)-methylenebisdeoxoapodine.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143259</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale Engineering for Mixed-Dimensional Heterostructure Growth and Integration</title>
<link>https://hdl.handle.net/1721.1/143255</link>
<description>Nanoscale Engineering for Mixed-Dimensional Heterostructure Growth and Integration
Lee, Sangho
Recent attempts to create van der Waals (vdW)-bonded heterostructures of distinct two-dimensional (2D) crystals have opened new avenues for research, from fundamental studies of emergent phenomena to the development of practical applications with unique functionalities, attributable to the wide variety of configurations attainable without lattice and processing limitations. Despite these compelling opportunities, the field of vdW heterostructures still faces fundamental limitations in scalability, tunability, and the degree of physical coupling of electronic properties. On the other hand, remote epitaxy, an emerging method for growing single-crystalline membranes copied from the underlying substrate through an atomically thin graphene interlayer, has been suggested as an alternative that addresses major challenges both in all-2D vdW heterointegration and in conventional heteroepitaxy, yielding high-quality three-dimensional (3D) thin films that can be released from the weak vdW interface and transferred or stacked onto an arbitrary substrate or layer of interest. The graphene-based layer transfer technique therefore offers an efficient route to all-3D artificial heterostructures analogous to all-2D vdW heterostructures; the 3D components are expected to show stronger physical coupling at their interfaces, and a wider range of materials, including III-Vs, III-Ns, and complex oxides, which generally outperform their 2D counterparts in material properties, can serve as building blocks for heterogeneous stacks. However, remote epitaxy is still in its infancy and does not yet work universally for all material systems and epitaxy techniques.&#13;
&#13;
In this thesis, nanoscale engineering is introduced to tackle major challenges in current technologies, especially in remote epitaxy, and to propose new strategies to assemble or integrate a broad range of mixed-dimensional heterostructures that are distinct from the vdW heterostructure counterparts. Three case studies are presented to exemplify how material design and engineering at nanoscale are leveraged to solve long-standing problems that are not readily overcome through the conventional techniques and methodologies: 1) nanopatterned graphene-based universal epitaxy for single-crystalline membrane transfer; 2) freestanding complex-oxide membrane growth and transfer via sacrificial interlayer for emergent multiferroics; and 3) self-assembled block copolymer thin films templating hybrid nanostructures.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143255</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expressivity and Structure in Networks: Ising Models, Random Graphs, and Neural Networks.</title>
<link>https://hdl.handle.net/1721.1/143250</link>
<description>Expressivity and Structure in Networks: Ising Models, Random Graphs, and Neural Networks.
Nagaraj, Dheeraj M.
Networks are used ubiquitously to model global phenomena that emerge from interactions between multiple agents, and they are among the objects of fundamental interest in machine learning. The purpose of this dissertation is to understand expressivity and structure in various network models. The basic high-level question we aim to address is: for what ranges of the parameters specifying a model does it capture complex dependencies? In particular, we consider widely used models, namely a) the Ising model, b) the Exponential Random Graph Model (ERGM), c) Random Geometric Graphs (RGG), and d) neural networks, and for each we pose and solve a version of this question.&#13;
&#13;
For the Ising model, ERGM, and RGG, we establish statistical tests that can distinguish them from the respective mean-field models using only structural information (without knowledge of the specific parameters) whenever this is possible, or we develop convergence results showing statistical indistinguishability. We then explore the problem of neural network representation, characterizing the kinds of functions that can be represented by neural networks of a given depth. In doing so, we establish that even shallow networks can express smooth functions efficiently, whereas depth is genuinely useful for representing spiky functions.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143250</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neurosymbolic Learning for Robust and Reliable Intelligent Systems</title>
<link>https://hdl.handle.net/1721.1/143249</link>
<description>Neurosymbolic Learning for Robust and Reliable Intelligent Systems
Inala, Jeevana Priya
This thesis shows that looking at intelligent systems through the lens of neurosymbolic models has several benefits over traditional deep learning approaches. Neurosymbolic models contain symbolic programmatic constructs, such as loops and conditionals, alongside continuous neural components. The symbolic part makes the model interpretable, generalizable, and robust, while the neural part handles the complexity of the intelligent system. Concretely, this thesis presents two classes of neurosymbolic models, state machines and neurosymbolic transformers, and evaluates them on two case studies: reinforcement-learning-based autonomous systems and multi-robot systems. These case studies show that the learned neurosymbolic models are human-readable, extrapolate to unseen scenarios, and can handle robust objectives in the specification. To efficiently learn these neurosymbolic models, we introduce neurosymbolic learning algorithms that leverage the latest techniques from machine learning and program synthesis.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143249</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visibility-Aware Motion Planning</title>
<link>https://hdl.handle.net/1721.1/143248</link>
<description>Visibility-Aware Motion Planning
Goretkin, Gustavo Nunes
The motion planning problem, of deciding how to move to achieve a goal, is ubiquitous in robotics. In many robotics applications, there is a map of the environment that is generally useful, but typically outdated as it does not include information about unknown obstacles, such as clutter. This thesis addresses the problem of planning for a robot with an onboard obstacle-detection sensor. The planning objective is to remain safe with respect to unknown obstacles by guaranteeing that the robot will not move into any region of the workspace before observing it.&#13;
&#13;
Although much work has addressed a version of this problem in which the field of view of the sensor is a sphere around the robot, we address robots with a limited field of view, which may arise from sensor limitations or self-occlusions in the case of mobile manipulation robots. We provide a formal definition of the problem, which we call Visibility-Aware Motion Planning (VAMP), and several solution methods with different computational trade-offs. We demonstrate the behavior of these planning algorithms in illustrative planar domains. The key to an efficient solution is to aggressively prune paths, while ensuring that the overall search strategy is sound and complete.&#13;
&#13;
We demonstrate that motion planning problems like VAMP benefit from a path-dependent formulation, in which the state at a search node is represented implicitly by the path to that node. The straightforward approach to computing the feasibility of a successor node in such a path-dependent formulation takes time linear in the path length to the node, in contrast to a (possibly very large) constant time for a more typical search formulation. For long-horizon plans, this linear-time computation for each node becomes prohibitive. To improve upon this, we introduce the use of a fully persistent spatial data structure (FPSDS). We apply an FPSDS to VAMP search by using a nearest-neighbor data structure to perform bounding-volume queries. We demonstrate an asymptotic and practical improvement in the runtime of finding VAMP solutions in large domains. To the best of our knowledge, this is the first use of a fully persistent data structure for accelerating motion planning.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143248</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Faster and easier: cross-validation and model robustness checks</title>
<link>https://hdl.handle.net/1721.1/143247</link>
<description>Faster and easier: cross-validation and model robustness checks
Stephenson, William T.
Machine learning and statistical methods are increasingly used in high-stakes applications – for instance, in policing crime, making predictions about the atmosphere, or providing medical care. Before we use our methods in such applications, however, we want to assess the extent to which we can trust them. There exist assessment tools, such as cross-validation (CV) and robustness checks, that help us understand exactly how trustworthy our methods are. In both cases (CV and robustness checks), a typical workflow follows the pattern of “change the dataset or method, and then rerun the analysis.” However, this workflow (1) requires users to specify the set of relevant changes, and (2) requires a computer to repeatedly refit the model. For methods involving large and complex models, (1) is expensive in terms of user time, and (2) is expensive in terms of compute time. So CV, which requires (2), and robustness checks, which often require both (1) and (2), see little use in the large and complex models that need them the most. In this thesis, we address these challenges by developing model evaluation tools that are fast in terms of both compute and user time. We develop tools to approximate CV when it is most computationally expensive: in high-dimensional and complex, structured models. But approximating CV implicitly relies on the quality of CV itself. We present theory and empirics calling into question the reliability of using CV to quickly and automatically tune model hyperparameters – even in cases where the behavior of CV is thought to be relatively well understood. On the front of robustness checks, we note that a common workflow in Bayesian prior robustness requires users to manually specify a set of alternative reasonable priors, a task that can be time-consuming and difficult. We develop automatic tools to search for a prediction-changing alternative prior for Gaussian processes, saving users from having to manually specify the set of alternative priors.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143247</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Divergence Covering</title>
<link>https://hdl.handle.net/1721.1/143246</link>
<description>Divergence Covering
Tang, Jennifer
A longstanding problem of interest is that of finding covering numbers. A very important measure between probability distributions is Kullback-Leibler (KL) divergence. Both topics have been studied extensively in various contexts, and in this thesis we focus on the problem that arises when the two concepts are combined. This combination yields interesting techniques for providing useful bounds on a number of important problems related to information theory. Our goal is to explore covering the probability simplex in terms of KL divergence. Various properties of KL divergence (e.g., it is not symmetric, is therefore not a metric, and can easily blow up to infinity) make it unintuitive and difficult to analyze using traditional methods. We look at covering discrete large-alphabet probabilities under both worst-case and average-case divergence distance and examine the implications of these divergence covering numbers. One implication of worst-case divergence covering is finding how to communicate probability distributions under limited communication bandwidth. Another implication is in universal compression and universal prediction, where the divergence covering number provides upper bounds on minimax risk. A third application is computing the capacity of the noisy permutation channel. We then use average-case divergence covering to study efficient algorithms for quantizing large-alphabet distributions in order to save storage space.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143246</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Nuclear-Targeting Peptides for Macromolecule Delivery via Machine Learning</title>
<link>https://hdl.handle.net/1721.1/143244</link>
<description>Design of Nuclear-Targeting Peptides for Macromolecule Delivery via Machine Learning
Schissel, Carly Katherine
The effective design of functional peptide sequences remains a fundamental challenge in biomedicine. For example, cell-penetrating peptides (CPPs) are capable of delivering macromolecular cargo to intracellular targets that are otherwise inaccessible. However, design of novel CPPs with high activity and unique structure remains challenging. In this thesis, methods to design and characterize highly active CPPs for antisense oligonucleotide delivery were explored.&#13;
&#13;
Machine learning is a promising method for de novo design of functional peptide sequences. A deep learning model inspired by directed evolution was used to optimize abiotic sequences that traffic antisense oligomers to the nucleus of cells. The model was able to predict activities beyond those in the training dataset, and simultaneously decipher and visualize sequence-activity predictions. The validated miniproteins (40-80 residues) were more effective than any previously known variant in cells. By augmenting the machine learning model to over-represent shorter sequence space, the model also predicted a short peptide (18 residues) with activity comparable to a positive control peptide. Empirical sequence-activity studies demonstrated reliance on the cationic residues as well as the C-terminal cysteine residue. These sequences were nontoxic, able to deliver other biomacromolecules to the cytosol, and efficiently delivered antisense cargo in mice.&#13;
&#13;
A different approach to discover and characterize CPP sequences was also taken, by extracting peptides taken up into cells and analyzing their relative quantities or identifying their sequences by mass spectrometry. First, several mirror-image D-peptides had similar delivery activity to their native forms, while demonstrating complete proteolytic stability. Mixtures of fully intact antisense-peptide conjugates could be recovered from whole cell and cytosolic lysates, and relative concentrations were quantified by MALDI-TOF. This method was then extended to the discovery of de novo sequences from a combinatorial library of antisense-peptide conjugates containing unnatural residues. Following cell treatment with the biotinylated antisense-peptide library, the cytosol of cells was extracted and internalized peptides recovered via affinity capture. De novo sequencing was achieved by Orbitrap tandem mass spectrometry, and several unique, unnatural sequences were identified that could effectively deliver the antisense oligomer to the nucleus.&#13;
&#13;
In summary, machine learning and mass spectrometry-based strategies to discover and characterize novel CPP sequences for antisense delivery were explored. In the future, we envision combining these methods in order to use lists of library hits to train a machine learning model to design sequences composed of fully unnatural amino acids.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143244</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials Design for Advanced Nuclear Energy Systems: Refractory High Entropy Alloys and Metallic Multilayer Composites</title>
<link>https://hdl.handle.net/1721.1/143243</link>
<description>Materials Design for Advanced Nuclear Energy Systems: Refractory High Entropy Alloys and Metallic Multilayer Composites
McAlpine, Samuel W.
Advanced nuclear reactors present a multitude of materials challenges due to high operating temperatures, corrosive environments, and neutron radiation damage. In this thesis, I focus on two approaches to designing better materials for advanced reactors: high entropy alloys (HEAs) and metallic multilayer composites (MMLCs). HEAs are chemically disordered solid solutions combining four, five, or more elements, which often have superior mechanical properties and radiation damage tolerance compared to advanced steels and Ni-base alloys. While HEAs have garnered immense attention within the research community, there is still no effective approach for predicting which compositions will tend to form a single-phase microstructure. I develop an atomistic thermodynamic model which uses a quantity I coin the atomistic mixing energy (AME) to understand phase stability in HEAs and predict which elements are more or less favored to mix within a given HEA system. The model also facilitates the correct calculation of the vacancy formation energy distribution in HEAs, which gives insight into radiation damage, solid-state diffusion, and other vacancy-driven material behavior. To test the validity of the model, I synthesize and characterize five refractory HEA compositions: NbMoTaTiW, NbMoTaTiV, NbMoTaTiZr, NbMoTaHfW, and WTaVTiCr. Implications for single-phase HEA design utilizing the model developed in this thesis are explored. The final part of the thesis focuses on MMLCs, in which different material functionalities are separated into different layers. Currently, few studies have aimed to understand radiation damage effects at the interface between different layers. I use interfacial self-ion irradiation along the bimetal interface within two MMLC systems to shed light on the radiation damage behavior of the interfacial region. Radiation-enhanced diffusion was observed in one MMLC, and a Cr-rich phase was observed along the interface in both MMLCs. The propensity for radiation-enhanced diffusion is related to the compositional gradient across the interface, while the Cr-rich interfacial phase could potentially lead to material embrittlement within MMLCs.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143243</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive Audio: Enabling Auditory Interfaces with an Understanding of How We Hear</title>
<link>https://hdl.handle.net/1721.1/143241</link>
<description>Cognitive Audio: Enabling Auditory Interfaces with an Understanding of How We Hear
Ananthabhotla, Ishwarya
Over the last several decades, neuroscientists, cognitive scientists, and psychologists have made strides in understanding the complex and mysterious processes that define the interaction between our minds and the sounds around us.  Some of these processes, particularly at the lowest levels of abstraction relative to a sound wave, are well understood, and are easy to characterize across large sections of the human population; others, however, are the sum of both intuition and observations drawn from small-scale laboratory experiments, and remain poorly understood. In this thesis, I suggest that there is value in coupling insight into the workings of auditory processing, beginning with abstractions in pre-conscious processing, with new frontiers in interface design and state-of-the-art infrastructure for parsing and identifying sound objects, as a means of unlocking audio technologies that are much more immersive, naturalistic, and synergistic than those present in the existing landscape.  From the vantage point of today's computational models and devices that largely represent audio at the level of the digital sample, I gesture towards a world of auditory interfaces that work deeply in concert with uniquely human tendencies, allowing us to altogether re-imagine how we capture, preserve, and experience bodies of sound -- towards, for example, augmented reality devices that manipulate sound objects to minimize distractions, lossy "codecs" that operate on semantic rather than time-frequency information, and soundscape design engines operating on large corpora of audio data that optimize for aesthetic or experiential outcomes instead of purely objective ones.&#13;
&#13;
To do this, I aim to introduce and explore a new research direction focused on the marriage of principles governing pre-conscious auditory cognition with traditional HCI approaches to auditory interface design via explicit statistical modeling, termed "Cognitive Audio".  Along the way, I consider the major roadblocks that present themselves in approaching this convergence: I ask how we might "probe" and measure a cognitive principle of interest robustly enough to inform system design, in the absence of immediately observable biophysical phenomena that may accompany, for example, visual cognition; I also ask how we might build reliable, meaningful statistical models from the resulting data that drive compelling experiences despite inherent noise, sparsity, and generalizations made at the level of the crowd.&#13;
&#13;
I discuss early insights into these questions through the lens of a series of projects centered on auditory processing at different levels of abstraction. I begin with a discussion of early work focused on cognitive models of lower-level phenomena; these exercises then inform a comprehensive effort to construct general purpose estimators of gestalt concepts in sound understanding. I then demonstrate the affordances of these estimators in the context of application systems that I construct and characterize, incorporating additional explorations on methods for personalization that sit atop these estimators.  Finally, I conclude with a dialogue on the intersection between the key contributions in this dissertation and a string of major themes relevant to the audio technology and computation world today.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143241</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient and Equitable Travel Demand Management Using Price and Quantity Controls</title>
<link>https://hdl.handle.net/1721.1/143240</link>
<description>Efficient and Equitable Travel Demand Management Using Price and Quantity Controls
Chen, Siyu
Traffic congestion is a serious problem that imposes significant costs on the economy, environment, and society. Congestion pricing, as a demand management instrument, has long been known to be a cost-effective approach to dealing with congestion. However, the issue of equity remains one of the major challenges to the successful design, acceptance, and deployment of congestion pricing. Although refunding revenues in a personalized manner has the potential to improve acceptance by being Pareto-improving, there is limited research on methodologies for doing so.&#13;
&#13;
An alternative approach to travel demand management, termed tradable mobility credits (TMC), has been gaining attention recently. It is a type of quantity control that can avoid the flow of money from users to the regulator and has been shown to outperform pricing under demand and supply uncertainty. Despite these promises, several important questions remain regarding the design and functioning of the market within TMC schemes, an aspect critical to the effective operationalization of these schemes. &#13;
&#13;
The objective of this thesis is to design efficient, equitable, and Pareto-improving congestion tolling for both price and quantity controls. First, we develop a market design for TMC schemes that ensures the TMC is used for mobility management and avoids undesirable behavior such as hoarding, frequent selling and speculation, excessive activity at the token-expiration boundary, and negotiation costs. The developed design considers all aspects of the market, including token allocation, expiration, transaction fees, price adjustment, and the market rules governing trading. In addition, a heuristic approach to modeling disaggregate selling behavior is developed, and the resulting simple selling strategy is derived. The developed market design addresses a growing and imminent need for methodologies that realistically model TMC schemes suited to real-world deployments.&#13;
&#13;
Second, we develop a bi-level optimization framework for personalized distribution to make congestion tolling (both price and quantity controls) efficient, equitable, and Pareto improving. The system optimization determines the toll policy with the objective to maximize social welfare while the user optimization can be formulated with different objectives (e.g. to achieve Pareto improvement or maximize social welfare) to determine an individual-specific distribution of revenue for pricing or mobility credits for TMC. The developed personalized congestion tolling is promising as it addresses the important issue of equity and has the potential to improve public acceptance.&#13;
&#13;
The performance of the designed instruments is demonstrated via microsimulation in a daily commute context between a single origin-destination pair. The simulation experiments employ a day-to-day assignment framework wherein transportation demand is modeled using a logit-mixture model with nonlinear income effects and supply is modeled using a standard bottleneck model. The evaluation framework includes four main categories: social welfare, distributional impacts, behavior change, and level of congestion.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143240</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Driven Synthesis Planning Applied to Zeolite Materials</title>
<link>https://hdl.handle.net/1721.1/143239</link>
<description>Data Driven Synthesis Planning Applied to Zeolite Materials
Jensen, Zachary
Materials discovery is critical for dealing with societal problems, but is a tedious process requiring substantial time and energy to accumulate knowledge. Computational techniques have accelerated understanding of material structure and properties, answering the question of "what" materials to make for a specific application. These techniques have shifted the bottleneck in materials design to the synthesis and processing of materials, posing the question of "how" to make a specified material. Zeolites are microporous, crystalline aluminosilicates described by this paradigm. Their relevance for chemical and "green" applications has led to sustained interest for many decades, with substantial progress made in predicting hypothetical zeolites and building databases of thousands of energetically favorable structures. However, only 255 of these structures have been synthesized, and far fewer, approximately 20, are commercially viable, pointing to synthesis as the major bottleneck in zeolite discovery and design. This thesis aims to improve the understanding of synthesis-structure relationships in zeolite materials through the use of data-driven synthesis tools. It is guided by three questions: 1) How can zeolite synthesis data be automatically extracted on a large scale? 2) How can coupling of data-driven, first-principles, and experimental approaches accelerate understanding of structure and processing relationships in zeolite materials? 3) In what ways can this data and the discovered relationships be used to engineer improved zeolite materials?&#13;
&#13;
Data driven synthesis planning requires large amounts of data to develop hypotheses about underlying trends and to train machine learning (ML) models. The zeolite literature provides thousands of records of synthesis routes and the resulting zeolite structures, but this data requires advanced information extraction techniques to obtain. This thesis utilizes and builds upon a natural language processing (NLP) pipeline to extract and format this data on realistic timescales. Algorithmic improvements for this pipeline, along with additional components targeted specifically at the unique linguistic features of the zeolite literature, are developed alongside a researcher-computer interaction framework designed to optimize both extraction accuracy and efficiency by fixing mistakes made by the extraction algorithm. This extraction effort results in five highly curated datasets related to zeolite synthesis, representing the largest collection of zeolite synthesis routes to the author’s knowledge.&#13;
&#13;
These datasets are used to study zeolite synthesis, starting with organic structure directing agent (OSDA) design. Determining which OSDA molecule templates which zeolite structure is a difficult problem. The author extracts a dataset of known OSDA-zeolite pairs from the literature to study these relationships. Using an advanced featurization scheme for the OSDAs, relationships between OSDAs and certain zeolite structures can be established. These relationships help answer thesis question two. A generative model is trained on the extracted data and validated through simulation to suggest potential OSDAs for a given zeolite structure, providing tools to accelerate OSDA design and addressing thesis question three.&#13;
&#13;
OSDAs are very important in zeolite formation, but the remaining hydrothermal variables also play a large role. This thesis utilizes failed-experiment data to study the probability of zeolite crystallization and interprets the model results through Shapley values to determine the impacts of specific hydrothermal synthesis variables. Using multi-fidelity data and Bayesian inference, zeolite crystallization curves are studied to determine nucleation and crystal growth behavior. Both of these tasks are done in pursuit of thesis question two. An additional generative model that predicts hydrothermal synthesis conditions given an OSDA-zeolite pair is developed, presenting another tool to guide zeolite development and addressing thesis question three.&#13;
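&#13;
Shapley values, used above to attribute crystallization outcomes to individual synthesis variables, can be illustrated exactly on a toy two-player cooperative game (a generic sketch, unrelated to the thesis’s actual crystallization model):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the coalition can be assembled."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical game: A alone is worth 10, B alone 0, together 30.
def v(coalition):
    if coalition == frozenset({"A", "B"}):
        return 30.0
    if coalition == frozenset({"A"}):
        return 10.0
    return 0.0

print(shapley_values(["A", "B"], v))  # {'A': 20.0, 'B': 10.0}
```

The attributions sum to the full coalition's value (the efficiency property), which is what makes Shapley values a principled way to split a model's prediction among its input variables; practical tools approximate this average rather than enumerating all orderings.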
&#13;
Finally, the thesis suggests high potential areas for future research and further exploration using the extracted data. It concludes with a brief commentary on the publication process and the necessity of data extraction.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143239</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of the growth and shape of Laplacian stream networks</title>
<link>https://hdl.handle.net/1721.1/143238</link>
<description>Theory of the growth and shape of Laplacian stream networks
Stansifer, Eric Marshall
Rivers self-organize into fractal river networks, with each river fed by smaller rivers, down to the scale of narrow rivulets. Certain groundwater-fed river networks develop stable and clearly delineated valley networks, which grow over time through headward erosion at the springheads. These networks exhibit non-linear dynamics, as their shape is a result of the groundwater flow, which itself depends on the shape of the network. I explore the nature of this interdependence and ask: what consequences does it have for the shape of a river network?&#13;
&#13;
The shape of a river network is not random, but is driven by physical processes whose behavior is local and mathematically simple. Starting from these local phenomena I seek to deduce global consequences about the network's shape.&#13;
&#13;
This work is divided into two thematically-related parts:&#13;
&#13;
First, considering a flat network such that headward erosion at the springheads maintains a locally symmetric groundwater field, I find a novel constraint connecting the rate of headward erosion to the shape of the network. This constraint theoretically allows us to determine the whole history of a network's evolution from its present shape. The results in this part are equally applicable to any Laplacian network driven by locally symmetric Laplacian growth.&#13;
&#13;
Second, I consider the longitudinal profiles of a river network whose river slopes are sufficiently large to carry sediment downstream. This restriction on the profile of a river has implications for the development of new side branches. I find that side branches cannot form on the inside curves of a main branch, as such a branch would lose groundwater in competition with its parent. When the main branch is uncurved, I find that there is a minimum possible length a side branch can have, below which it is not prominent enough to attract the groundwater needed to sustain it. This acts as an obstacle to the formation of new side branches.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143238</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic infrastructure planning to enable personal vehicle electrification</title>
<link>https://hdl.handle.net/1721.1/143236</link>
<description>Strategic infrastructure planning to enable personal vehicle electrification
Wei, Wei
The transportation sector contributes a substantial fraction of global greenhouse gas emissions. For example, in the United States (US), it contributes roughly one third of greenhouse gas emissions and is projected to remain a significant contributor several decades into the future if no further policy actions are taken. Around 40% of the US greenhouse gas emissions in the transportation sector come from personal vehicles. A transition from internal combustion engine vehicles to electric vehicles has the potential to achieve significant emission reductions from personal vehicles when combined with a decarbonized electricity system. However, electric vehicles have a limited range and charging these vehicles may stress the power grid.&#13;
&#13;
To enable widespread vehicle electrification, a suitable network of electric vehicle charging stations and adequate power generation and distribution systems will be essential. Yet questions remain about the impact of different infrastructure expansion strategies.&#13;
&#13;
This thesis addresses a gap in the current literature by examining infrastructure requirements in the context of varying travel patterns and technology performance. Specifically, this work evaluates infrastructure expansion strategies against spatially- and temporally- resolved vehicle and household energy-consuming behaviors, based on a physical modeling of electric vehicle energy consumption.&#13;
&#13;
The central result of this thesis is that certain infrastructure expansion strategies can have a significant impact on meeting travel demand to enable personal vehicle electrification. Specifically, this thesis reveals the essential role that overnight home charging can play, and the high impact of highway fast charging in meeting energy requirements over time with battery electric vehicles (BEVs). This research also shows that circuit upgrades are likely needed to accommodate electricity demand peaks from BEV charging in some, but not all, locations. Adopting certain demand management strategies, such as delaying home charging and shifting highway fast charging to adjacent highway stops, may significantly reduce circuit peak loads. In the case of hydrogen fuel cell vehicles (HFCVs), this research shows that for a small fraction of personal vehicles, highway refueling can be sufficient for meeting energy requirements, though other refueling options will likely be needed for most drivers.&#13;
&#13;
Insights from this thesis can inform assessments of the viability of using electric vehicles as personal vehicles to conveniently meet energy demand. These insights also help reveal effective strategies for policy-making and other investments in infrastructure expansion to support vehicle electrification. Results from this thesis also provide insight on methods for reducing the cost of BEV charging and HFCV refueling by increasing the utilization of infrastructure.&#13;
&#13;
Fundamentally, this thesis contributes to an understanding of longitudinal vehicle and household energy-consuming behaviors based on travel patterns and power grid electricity demand profiles. A majority of drivers experience high energy requirements on a small number of days a year, which explains the high impact of occasional access to highway fast charging and supplementary long-range vehicles in meeting energy demand. Moreover, locations and times where people tend to stay for an extended period, allowing for uninterrupted charging sessions, such as overnight at home and during the day at work, often correspond to off-peak hours when grid electricity demand is low. In addition, the diversity in the times drivers arrive at and depart from these locations opens up opportunities for demand management to reduce electricity demand peaks from charging. These observations lay the groundwork for strategic infrastructure expansion to enable personal vehicle electrification.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143236</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plume to global-scale atmospheric impacts of aviation emissions</title>
<link>https://hdl.handle.net/1721.1/143235</link>
<description>Plume to global-scale atmospheric impacts of aviation emissions
Fritz, Thibaud Matthieu Martin
Commercial aircraft combustion emissions impact atmospheric composition, alter the Earth’s climate by accounting for ~4% of anthropogenic radiative forcing [72, 49], and affect surface air quality, causing an estimated ~16,000 premature mortalities per year [7, 28]. These environmental impacts are driven by chemical, microphysical, and transport processes that span many orders of magnitude in temporal and spatial scale, from near-field in-plume chemistry that evolves over minutes and distances of ~100 m to global-scale phenomena taking place over continental distances. To evaluate aviation’s environmental impacts, all temporal and spatial scales need to be captured. In this thesis I develop and evaluate numerical models that span all modeling scales. First, I quantify the role of plume-scale processes in the atmospheric impact of aviation emissions. Previous literature has indicated that current global-scale modeling of aircraft emissions overestimates aviation-attributable ozone by instantly diluting emissions at a coarse resolution [86]. To estimate the magnitude of the ozone discrepancy, I use a recently developed aircraft plume model to calculate the nonlinear chemical conversions that occur in aircraft plumes. I then propagate the plume-scale results to the global atmospheric impact through the chemistry transport model (CTM) GEOS-Chem by embedding a plume-scale parameterization. After accounting for plume-scale processes, I find a ~5% downward correction in the simulated aviation-attributable ozone response.&#13;
&#13;
High-altitude emissions from current subsonic aviation or from potential future supersonic aircraft modify the total column ozone, leading either to increases in tropospheric ozone or to a decrease in stratospheric ozone, with the latter causing a larger UV flux at the ground. Both changes affect human health, and in this thesis I identify a column ozone-neutral altitude for subsonic and supersonic aviation. Adjoint models of CTMs have been developed to quantify receptor-oriented sensitivities of environmental metrics (e.g. population-weighted ozone exposure) to emissions. Adjoint modeling overcomes the numerical cost of source-oriented sensitivity analysis, as performed by forward models. However, adjoint models of atmospheric chemistry have historically been limited to the troposphere. In this thesis, I build upon previous work and extend the GEOS-Chem Adjoint to include stratospheric processes, and then validate the sensitivities with multi-year scenarios. I then present adjoint-derived sensitivities to identify column ozone-neutral altitudes for subsonic and supersonic aviation, based on their respective emission characteristics. I find that the 12-15 km altitude band is approximately column ozone-neutral for aviation emissions. Neglecting the effects of plume-scale processes introduces a positive bias in the column ozone-neutral altitude that varies between 0.3 and 1 km.&#13;
&#13;
Finally, previous assessments of the environmental impact of aviation emissions using global climate models have found that coupled chemistry-climate feedback could have a magnifying effect on the response to commercial aircraft emissions. However, the aviation-induced environmental impacts estimated with climate models have not been found to be consistent with CTMs [15]. To identify the cause of this discrepancy between climate models and CTMs, and to evaluate the relevance of climate feedbacks in assessing the environmental response to aviation emissions, I develop a newly coupled model for climate-chemistry simulations, CESM2-GC, coupling the climate model CESM2 to the model of atmospheric chemistry GEOS-Chem. I then validate CESM2-GC against atmospheric observations and against results from the GEOS-Chem CTM and the “native” chemistry option in CESM2, CAM-Chem. Using CESM2-GC, I perform ensemble runs to evaluate the magnitude of the coupled chemistry-climate effects when evaluating aviation’s ozone and particulate matter response. I find that the ensemble mean yields aviation-attributable population-weighted ozone and particulate matter perturbations of 0.56 ppbv and 0.08 µg/m3, respectively, consistent with previous estimates using the GEOS-Chem CTM. Besides an increase of ~70 mK in tropical and Northern mid-latitude tropospheric temperatures, I observe no statistically significant response in upper-tropospheric meteorology that would indicate that coupled chemistry-climate feedback magnifies the aviation-attributable environmental response.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143235</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Optimization and Control of Modular Chemical Systems</title>
<link>https://hdl.handle.net/1721.1/143234</link>
<description>Automated Optimization and Control of Modular Chemical Systems
Nikolakopoulou, Anastasia
Continuous manufacturing has been widely used in many industries such as food, oil, and gas. The pharmaceutical sector is now transitioning from batch to continuous manufacturing as well, to achieve potential benefits such as improved responses to demand changes and reduction of supply chain disruptions, production time, drug costs, waste, and product quality variations. Continuous pharmaceutical manufacturing in compact modular systems provides additional flexibility in where and when the drug is produced.&#13;
&#13;
The transition to continuous operation raises new control challenges in meeting specifications on critical quality attributes (CQAs). Process models, facilitated by the widespread adoption of process analytical technology for on-line CQA measurements, enable the concise formulation of existing process understanding and promote real-time decision making and retrospective analyses of the process of interest. First-principles or data-driven models can be developed depending on the degree of process understanding and the purpose of the model construction. Models can be used in process optimization to improve experimental design and manufacturing practices. Models are also used in model-based control to handle variations that would lead to reduced product quality. Model-based control offers opportunities for bringing processes a step closer to full automation.&#13;
&#13;
This thesis employs, enhances, and develops system engineering tools to address challenges associated with automating optimization and control solutions for modular chemical systems.&#13;
&#13;
First, the thesis presents first-principles mathematical descriptions for common modules in a modular chemical system. An approach is developed for the derivation of linear input–output (step-response) models that reduce model–plant mismatch. The proposed step-response models allow for the successful implementation of a variation of linear model predictive control (MPC), known as quadratic dynamic matrix control. Dynamic optimization for startup based on a first-principles plant-wide model is formulated and solved for a virtual plant for the upstream synthesis of atropine.&#13;
&#13;
Then the thesis presents a methodology for designing stabilizing dynamic state feedback controllers and observers with guaranteed properties for dynamic artificial neural network (DANN) models using matrix inequalities. Assuming a known DANN structure describing a system, a more conservative representation known as a diagonal norm-bounded linear differential inclusion is employed to derive sufficient criteria for estimation and control in the form of linear or bilinear matrix inequality problems using quadratic Lyapunov functions. A computational case study demonstrates the applicability of the method in a realistic multiple-input multiple-output pH control problem.&#13;
&#13;
Lastly, the thesis proposes a strategy for constructing nonlinear interpretable input–output models for modular chemical systems. Polynomial nonlinear-autoregressive-with-exogenous-inputs models are identified using a machine learning algorithm that promotes variable selection and grouping of correlated variables, and results in a sparse representation. The models are incorporated into a nonlinear MPC algorithm implemented in the JuMP algebraic modeling language, which results in very fast computational times. Computational case studies for two different types of chemical reactors demonstrate the applicability of the methodology in processes that commonly appear in modular chemical systems.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143234</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling up Genetic Circuits in Mammalian Cells: A U1-snRNA-based Platform Enables Mammalian Cells to Compute the Bitwise Inversion of the Square Root of a Number</title>
<link>https://hdl.handle.net/1721.1/143233</link>
<description>Scaling up Genetic Circuits in Mammalian Cells: A U1-snRNA-based Platform Enables Mammalian Cells to Compute the Bitwise Inversion of the Square Root of a Number
Alighieri, Giulio
Scaling up genetic circuits in mammalian cells can lead to a new class of therapies. As a result of my research as a PhD candidate on ways and engineering tools to scale up genetic circuits, I engineered and validated in HEK293FT cells a genetic circuit that allows those cells to compute the bitwise inversion of the square root of a number. To date, this circuit, which has four inputs and two outputs, is the most sophisticated genetically encoded circuit ever expressed in mammalian cells. The core processing module of the circuit is a novel miRNA-based NOT gate built on a platform that uses the U1 snRNA. We have called this platform "u.P.R.O.C.E.S.S.O.R." (U-gene-based Platform, RNAi-regulated Only, Compactly Employing Small Shuttle-miRNAs, Operates (through) RNA); it does not use any transcriptional regulators or exogenous proteins, which can cause dangerous immune responses. The design of this sophisticated logic circuit was found by executing an algorithm, which I developed, for the exhaustive search of logic circuit designs (with 4 inputs and 2 outputs). The solution tested in HEK293FT cells required just four transcriptional units and about 10 kb of DNA. Furthermore, I have engineered a trans-activated gRNA (for Cas9) and a trans-activated miRNA to sense abundant nuclear RNAs by means of the toehold-mediated strand displacement reaction. The trans-activated miRNA also does not use any transcriptional regulators or exogenous proteins and, like the miRNA-based NOT gate, has a DNA footprint small enough to fit in an AAV vector.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143233</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-strength transformation-induced plasticity steels with reverted interlath austenite</title>
<link>https://hdl.handle.net/1721.1/143232</link>
<description>High-strength transformation-induced plasticity steels with reverted interlath austenite
Jiang, Menglei
The interlath austenite formed by the reversion process increases the ductility of martensitic steels. However, the reversion process requires high annealing temperatures and long annealing times, which lead to high energy cost and unwanted softening. In this thesis, two methods are applied to accelerate the reversion kinetics: (i) changing the reversion mechanism from diffusive to displacive; (ii) changing the alloy composition to promote diffusive transformation. To promote local displacive austenite reversion, enrichment of austenite stabilizers at the martensite boundaries is introduced before reversion. To promote the diffusional transformation process, computational alloy design is applied, in which the effects of alloying elements on the kinetics of austenite reversion and martensite softening are evaluated. The phase transformation kinetics and the microstructural evolution during (displacive and diffusive) austenite reversion are characterized by differential scanning calorimetry measurements and multi-probe microstructure analysis, and their mechanical impacts are discussed. In method (i), the reversion kinetics are increased significantly by promoting displacive transformation. The defect development during displacive reversion can help increase the overall strain hardening capacity of the alloy, which in turn increases the cumulative uniform elongation and the formability. In method (ii), diffusive austenite reversion is achieved before overaging of the martensite, which significantly improves the ductility of the designed martensitic steel without softening. These results suggest that overaging and softening of the martensite (rather than the formation of the austenite) are the major contributors to the softening phenomenon. Thus, the strength of transformation-induced plasticity steels can be improved by accelerating the kinetics of the interlath austenite reversion process.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143232</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quasi-particle Breakdown and Heat Transport in a Homogeneous Strongly-interacting Fermi Gas</title>
<link>https://hdl.handle.net/1721.1/143230</link>
<description>Quasi-particle Breakdown and Heat Transport in a Homogeneous Strongly-interacting Fermi Gas
Yan, Zhenjie
This thesis describes experiments on a homogeneous strongly-interacting Fermi gas, consisting of a spin mixture of ⁶Li atoms with interactions induced by a Feshbach resonance. The tunability and universality of this atomic gas make it an ideal platform to study the many-body physics of interacting Fermi systems. The implementation of a uniform trapping potential enables experiments at constant density, allowing the study of transport, critical phenomena near phase transitions, and novel states of matter predicted in a narrow range of densities. Radio-frequency (rf) spectroscopy provides a powerful tool for probing single-particle excitations in quantum gases. In particular, we employ it to study the thermal evolution of resonantly interacting spin impurities immersed in a Fermi gas. The rf spectra reveal a dramatic transition from an attractive polaronic Fermi liquid at low temperature to a classical Boltzmann gas above the Fermi temperature. In the polaron regime, the spectral width shows a characteristic &#119879;² temperature dependence, corresponding to the quasiparticle decay rate in a Fermi liquid. At high temperatures, the spectral width approaches the scattering rate of a classical, unitary Boltzmann gas, which scales as &#119879;⁻¹⁄². In the transition regime, a spectral width on the order of the Fermi energy is observed, indicating the breakdown of the quasiparticle picture of well-defined, long-lived excitations. I further describe the first direct observation of heat transport in a strongly interacting Fermi gas, using the temperature dependence of the rf spectra as a local thermometer. The superfluid phase transition in our attractive Fermi system separates two different regimes of heat propagation. While heat propagates diffusively in the normal phase, in a superfluid it propagates as a wave, called second sound. The measured speed of second sound yields the superfluid fraction, which quantifies the inertia against phase twists.
The damping time scales of second sound and heat diffusion show a minimum diffusivity on the order of the quantum limit ℏ/&#119898; for both modes. The diffusivity is observed to feature a peak at the phase transition temperature, which is evidence for critical behavior, as seen in liquid ⁴He.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143230</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the Fluorine, Sodium, and Aluminum Fluxes in Cosmic Rays with the AMS Experiment on the International Space Station</title>
<link>https://hdl.handle.net/1721.1/143227</link>
<description>Measurement of the Fluorine, Sodium, and Aluminum Fluxes in Cosmic Rays with the AMS Experiment on the International Space Station
Qin, Xiaoting
Primary nuclei (He, C, O, Ne, Mg, Si, ...) are thought to be mainly produced and accelerated in astrophysical sources such as supernovae. Secondary nuclei (Li, Be, B, ...) are mostly produced by interactions of primary nuclei with the interstellar medium. Precise knowledge of secondary-to-primary flux ratios, such as B/C, is essential to the understanding of cosmic ray propagation. This thesis presents the first precision measurements of the heavy cosmic ray fluorine (F), sodium (Na), and aluminum (Al) fluxes in the rigidity range from 2.15 GV to 3.0 TV, based on data collected by the Alpha Magnetic Spectrometer (AMS) during its first 8.5 years of operation. The F flux is believed to be the only pure secondary flux between oxygen and silicon, while the Na and Al fluxes are thought to be produced both in astrophysical sources and by the collisions of heavier nuclei with the interstellar medium.&#13;
&#13;
The measurements show that the F flux deviates from a single power law above 200 GV. The rigidity dependence of the heavier secondary-to-primary F/Si flux ratio is distinctly different from that of the lighter B/O (or B/C) ratio. In particular, above 10 GV, the [(F/Si)/(B/O)] ratio can be described by a power law &#119877;^δ with δ = 0.052 ± 0.007. This shows that the propagation properties of heavy cosmic rays, from F to Si, are different from those of light cosmic rays, from He to O, and that secondary cosmic rays fall into two classes. The Na and Al fluxes are well described by the sums of a primary cosmic ray component (proportional to the Si flux) and a secondary cosmic ray component (proportional to the F flux), similar to the nitrogen (N) flux. The fraction of the primary component increases with rigidity for the N, Na, and Al fluxes and becomes dominant at the highest rigidities. The Na/Si and Al/Si abundance ratios at the source, 0.036 ± 0.003 for Na/Si and 0.103 ± 0.004 for Al/Si, are determined independently of cosmic ray propagation.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143227</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Power Microwave Generation for Advanced Particle Acceleration</title>
<link>https://hdl.handle.net/1721.1/143226</link>
<description>High Power Microwave Generation for Advanced Particle Acceleration
Picard, Julian Tesch
This thesis presents the experimental demonstration of advanced high power microwave sources for application to high gradient particle accelerators.&#13;
&#13;
The major effort was directed at the design and successful test of a metamaterial-based power extractor at the Argonne Wakefield Accelerator (AWA). We obtained up to 565 MW, 2.7 ns (FWHM) pulses at 11.7 GHz from the structure. The highest power was generated by a train of eight 65 MeV electron bunches spaced at 1.3 GHz with a total charge of 355 nC. The metamaterial structure consists of 100 copper unit cells with a total structure length of 0.2 m. Each unit cell comprises one “wagon-wheel” plate and one spacer plate. The 565 MW pulse generates a longitudinal on-axis wakefield of 135 MV/m that could be used to accelerate a trailing witness bunch. A surface electric field of &gt;1 GV/m is generated on the metamaterial plates at the peak power level, but no evidence of breakdown was observed during testing. Tests with trains of bunches of up to 100 nC produced output power levels in excellent agreement with simulations. At higher total bunch charge, offsets of the bunches from the axis resulted in reduced output power. Simulations indicate that a perfectly aligned bunch train would generate more than 1 GW of power.&#13;
&#13;
An additional effort was directed at the design and characterization of a high power laser driven semiconductor switch (LDSS) that enabled the testing of high gradient 110 GHz accelerator structures at MIT. The LDSS, employing Si and/or GaAs wafers, was used to slice nanosecond-scale pulses from 3 microsecond pulses generated by a megawatt, 110 GHz gyrotron. An electron-hole cutoff plasma was induced in the wafers using 6 ns, 230 mJ pulses from a 532 nm laser. A 1-D model is presented that agrees well with the experimentally observed temporal pulse shapes obtained with a single Si wafer. The LDSS was integrated with the MIT megawatt gyrotron, enabling the testing of a SLAC W-band accelerator cavity at gradients up to 230 MV/m.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143226</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Homogeneous quantum gases: strongly interacting fermions and rotating bosonic condensates</title>
<link>https://hdl.handle.net/1721.1/143224</link>
<description>Homogeneous quantum gases: strongly interacting fermions and rotating bosonic condensates
Mukherjee, Biswaroop
Quantum gases are an ideal platform for studying problems in many-body physics. Highly tunable and reconfigurable, these systems work as quantum simulators for other quantum mechanical systems, ranging from neutron stars, to superconductors, to quantum Hall systems. A crucial degree of freedom is the external geometry of the trapping potential. In this thesis, we describe experiments on creating homogeneous quantum gases and performing measurements with them.&#13;
&#13;
The first section of the thesis focuses on homogeneous Fermi gases, where we use tailored optical potentials to trap ⁶Li atoms in a homogeneous box potential. We observe uniform fermionic superfluids and measure the temperature dependence of the noninteracting Fermi surface. Radiofrequency (rf) spectroscopy offers unique insights into the spectral properties of Fermi gases. We exploit the high signal-to-noise ratio of rf spectroscopy of uniform Fermi gases to obtain precise measurements of the thermodynamic contact. We observe a dramatic change in the contact at the superfluid transition.&#13;
&#13;
The second section of this thesis concerns uniform rotating bosonic condensates. We discuss a new experimental apparatus and outline how geometric squeezing can be used to prepare systems of quantum gases in the lowest Landau level, a long sought-after goal. Lastly, we show a surprising spontaneous crystallization of these quantum Hall systems, and find that it is driven by interactions.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143224</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private and Provably Efficient Federated Decision-Making</title>
<link>https://hdl.handle.net/1721.1/143222</link>
<description>Private and Provably Efficient Federated Decision-Making
Dubey, Abhimanyu
In this thesis, we study sequential multi-armed bandit and reinforcement learning in the federated setting, where a group of agents collaborates to improve their collective reward by communicating over a network.&#13;
&#13;
We first study the multi-armed bandit problem in a decentralized environment. We study federated bandit learning under several real-world environmental constraints, such as differentially private communication, heavy-tailed perturbations, and the presence of adversarial corruptions. For each of these constraints, we present algorithms with near-optimal regret guarantees and maintain competitive experimental performance on real-world networks. We characterize the asymptotic and minimax rates for these problems via network-dependent lower bounds as well. These algorithms provide substantial improvements over existing work in a variety of real-world and synthetic network topologies.&#13;
&#13;
Next, we study the contextual bandit problem in a federated learning setting with differential privacy. In this setting, we propose algorithms that match the optimal rate (up to poly-logarithmic terms) with only a logarithmic communication budget. We extend our approach to heterogeneous federated learning via a kernel-based approach, and also provide a no-regret algorithm for private Gaussian process bandit optimization.&#13;
&#13;
Finally, we study reinforcement learning in both the multi-agent and federated setting with linear function approximation. We propose variants of least-squares value iteration algorithms that are provably no-regret with only a constant communication budget.&#13;
&#13;
We believe that the future of machine learning entails large-scale cooperation between various data-driven entities, and this work will be beneficial to the development of reliable, scalable, and secure decision-making systems.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143222</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogel Machines - Design, Manufacturing, and Applications</title>
<link>https://hdl.handle.net/1721.1/143221</link>
<description>Hydrogel Machines - Design, Manufacturing, and Applications
Liu, Xinyue
Hydrogels are polymer networks infiltrated with water. Natural hydrogels constitute major components of the human body, such as muscle and cartilage, and synthetic hydrogels have been widely used in applications that closely interact with biological organisms, ranging from tissue engineering, drug delivery, and contact lenses to sensors, actuators, electronics, optics, and other soft machines. As many living tissues undergo dynamic and repeated deformation, natural hydrogels have, through evolution, achieved the extreme mechanical robustness necessary for their survival and wellbeing. Comparable mechanical properties are required of synthetic hydrogels as well, and they play crucial roles in maintaining the integration and functionality of hydrogel machines, especially when these interact with dynamic tissues and organs. In this thesis, I first report the design and manufacturing of tough, fatigue-resistant hydrogel materials and their adhesion to various engineering materials. Based on these hydrogel materials and adhesions, I present three types of hydrogel machines: hydrogel living devices that incorporate living bacteria in hydrogel matrices, hydrogel ingestible devices that are retained in the gastrointestinal tract, and hydrogel optical fibers that collect light from and deliver light to the nervous system. The design principles and implementation strategies of each hydrogel machine are described, followed by theoretical calculation, experimental validation, and proof-of-concept demonstration. In the future, the field of hydrogel machines may enable a paradigm shift in machine design by integrating hydrogels, owing to their physical and physiological match with biological organisms.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143221</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Biology Approaches for Elucidating Early ALS Disease Processes</title>
<link>https://hdl.handle.net/1721.1/143216</link>
<description>Systems Biology Approaches for Elucidating Early ALS Disease Processes
Li, Jonathan
ALS is a devastating neurodegenerative disorder with no known cure. In this thesis, I describe high-throughput omics characterization of patient-derived induced pluripotent stem cell (iPSC) models. I present analyses aimed at discovering disease pathways and genes associated with ALS using systems biology approaches. In patients with the C9orf72 mutation, I use a network-based algorithm to identify disrupted pathways enriched for extracellular matrix organization and protein transport. Integrating these findings with results from a C9orf72 Drosophila model, I found causal and compensatory pathways that may be active in C9-ALS. Next, I investigated the genetics of ALS using genomic, transcriptomic, and epigenomic data collected across 181 iPSC-derived motor neurons from ALS patients and controls. I performed quantitative trait loci (QTL) analyses and found transcriptional regulators and genes implicated in ALS pathology, which were enriched for autophagy and DNA damage repair. Notably, we found missplicing of G2E3 and SCFD1 to be a consequence of the previously uncharacterized SCFD1 risk locus. In all, these findings further our understanding of ALS pathology and help prioritize potential targets for therapeutic intervention. The systems biology approaches outlined in this thesis can be useful for studying other disease contexts as well.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143216</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>In Life's Likeness: Biomimicry and the Imitation of Nature</title>
<link>https://hdl.handle.net/1721.1/143212</link>
<description>In Life's Likeness: Biomimicry and the Imitation of Nature
Fadok, Richard Alexander
This anthropological study of biomimicry examines how nature has been imagined and mobilized by American management consultants at the dawn of the 21st century. Whereas culture has been invested with economic potential in “human-centered design,” consultants who name themselves “biomimics” valorize nature, specifically “life,” as a site of technological innovation. In their folk epistemology of biology, 3.8 billion years of evolution have designed “adaptations” to the environment that are superior to human artifice: more powerful, more efficient, and more sustainable. Following the author-turned-consultant Janine Benyus, an ecosystem of innovation consultancies has arisen around the orthodoxy that the imitation of biological “design principles” would generate “sustainable innovation.” Despite their promise, however, little has materialized.&#13;
&#13;
Based on six years of ethnographic fieldwork in North America, with an emphasis on the United States, I analyze how biomimics construe, produce, and evaluate imitations to understand the cultural dimensions of this gap between vision and circumstance. Drawing on interviews with over forty biomimics and participant observation at consulting workshops, design competitions, and related public events, I document how consultants appraise the fidelity between originals and copies, life and design. While biomimics aspire toward artifacts that truthfully resemble nature, I demonstrate that, in practice, the meaning of resemblance—i.e. of life’s likeness—is multiple and situated. Specifically, I discern three mimetic stances: the pragmatic, idealistic, &amp; performative. I argue that this dissensus around valid imitations betrays the subsumption of biomimicry, and green design in general, by capitalist profit logics. In conversation with anthropologists of nature, design, &amp; imitation, I show how the “bio” in biomimicry, or the “nature” in nature-based design, has become a capacious term that consultants manipulate to justify the status quo. Biomimicry, I reflect, exposes contemporary tensions of pragmatism/utopia, capital/ecology, and ethics/design.&#13;
&#13;
In biomimicry, “Western” design is often contrasted with its antonym: “native” design. Biomimics believe that “native” design is ecologically harmonious because “native cultures” are intrinsically mimetic. In five interludes that alternate with the main chapters of the dissertation, I outline the anthropological figure of thought their belief rests upon—a romantic figure that I call Homo mimesis—and critically excavate its traces in their notions of time, value, sensation, and the nature of the human. Extrapolating from biomimicry to design more broadly, I contend that designers reflexively understand their own practices through empirical or speculative knowledge about how other societies make their material worlds. As so-called social, human, user, and other kinds of “cultural” design multiply, I invite anthropologists to attend to design’s anthropologies: the epistemologies of anthropos in design, epistemologies oft reduced to essentialized difference.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143212</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Classifying and Displaying Brain-waves through Self-supervised Learning</title>
<link>https://hdl.handle.net/1721.1/143210</link>
<description>Classifying and Displaying Brain-waves through Self-supervised Learning
Mohsenvand, Mostafa
Interpreting the human electroencephalogram (EEG) is challenging and requires years of medical training. Hence, constructing labeled datasets for supervised learning from EEG signals is expensive and time-consuming. Moreover, the existing datasets use incompatible EEG setups (e.g. different numbers of channels, sampling rates, types of sensors, etc.) that make them hard to fuse to obtain larger datasets. To alleviate similar issues, self-supervised pretraining has been developed and utilized in other branches of machine learning. In this thesis, we introduce multiple self-supervised algorithms and data augmentation and mixup techniques to improve the accuracy and sample efficiency of downstream EEG classification. Our framework combines multiple EEG datasets for self-supervised learning and uses the resulting large-scale dataset to train our proposed algorithms SeqCLR (Sequential Contrastive Learning of Representations) and SeqDACL (Sequential Domain Agnostic Contrastive Learning). We apply our pre-trained algorithms to four downstream classification tasks and show that they compete with and outperform other supervised and self-supervised methods. In particular, our methods achieve state-of-the-art accuracy and sample efficiency in emotion recognition (SEED dataset), sleep-stage scoring (Sleep-EDF dataset), and user identification (TUH dataset). We also explore using self-supervised representation learning to visualize EEG data for diagnostic and research purposes. We present a sequential autoencoder architecture and a novel visualization method called the chromograph. Our method visualizes multichannel EEG data through its latent representation in an economical and informative fashion that enables rapid and reliable recognition of abnormal EEG signals. Our user study shows that neurologists make more accurate and faster detections of abnormal EEG using the chromograph.
We also design and implement a real-time sonification device called the Physiophone for interactive sonification of electrophysiological signals. Our user study shows that novice users with four minutes of audio training could outperform medically trained users who used the conventional visualization of ECG signals in distinguishing normal and abnormal heart rhythms. We also observe a new superadditive bimodal effect in a conformity/priming test.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143210</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spin Dynamics in a Tunable Heisenberg Magnet Realized with Ultracold Atoms</title>
<link>https://hdl.handle.net/1721.1/143209</link>
<description>Spin Dynamics in a Tunable Heisenberg Magnet Realized with Ultracold Atoms
Jepsen, Paul Niklas
Spin is the elementary unit of magnetism. The interactions between many spins give a material its magnetic properties. This thesis focuses on one of the simplest magnetic materials: a one-dimensional chain of spins with tunable nearest-neighbor interactions. This system is described by the Heisenberg model, a paradigmatic model that has been studied for almost a century. But so far, experiments on spin dynamics have been mostly limited to isotropic spin-spin interactions.&#13;
&#13;
In this work we use ultracold atoms to implement the first quantum simulator for the anisotropic Heisenberg model, with fully adjustable anisotropy of nearest-neighbor spin-spin interactions (also called the XXZ model). We study spin dynamics in previously unexplored regimes far away from equilibrium, as well as stable spin patterns far away from the ground state, which are even exact many-body eigenstates. For this we utilize quantum quenches from initial far-from-equilibrium spin-helix patterns:&#13;
&#13;
Spin transport: By using a longitudinal spin-helix pattern, which involves a modulation in the population of spin-up and spin-down atoms, we see a drastic impact on the transport properties when the anisotropy is varied. When spins are coupled along only two of the three possible orientations (the XX model), we find ballistic spin dynamics, whereas for isotropic interactions (the XXX model), we find diffusive behavior. More generally, for positive anisotropies, the dynamics ranges from anomalous superdiffusion to subdiffusion, whereas for negative anisotropies, we observe a crossover in the time domain from ballistic to diffusive transport.&#13;
&#13;
Spin dephasing: A transverse spin-helix pattern is sensitive to additional decay mechanisms: anisotropic spin couplings break spin-rotational symmetry, so transverse spin components are no longer conserved and can decay not only by transport but also by fast, local dephasing. However, even for isotropic interactions, we observe dephasing due to a new effect: an effective magnetic field created by superexchange, which originates in the mapping from the Hubbard model and has not been observed before.&#13;
&#13;
Bethe phantom states — excited many-body eigenstates of the Heisenberg model: For a given anisotropy, there exists one special winding angle such that the transverse spin helix is an exact many-body eigenstate of the Hamiltonian. We find this eigenstate experimentally by varying the winding angle and measuring the decay rate, which reveals a pronounced minimum. We then use the sensitivity of the Bethe phantom states as a tool to measure the anisotropy directly. We find that the anisotropy can be strongly affected by nearest-neighbor off-site interactions, which had never before been observed for particles that interact only through contact interactions.&#13;
&#13;
Our new quantum simulator platform with tunable interactions opens up possibilities for many new studies which are likely to provide new insight into the rich dynamics of Heisenberg spin models and beyond.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143209</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Large-scale Data Analytics and Applications to the COVID-19 Pandemic</title>
<link>https://hdl.handle.net/1721.1/143205</link>
<description>Algorithms for Large-scale Data Analytics and Applications to the COVID-19 Pandemic
Li, Michael Lingzhi
Operations Research (OR) can be defined as the use of advanced quantitative tools to make better decisions and create impact. In the modern world, generating impact requires both scalable algorithms that allow us to extract insights from an ever-increasing amount of data and important applications through which to bring those insights to the world. In this thesis, we demonstrate both sides of the coin. &#13;
&#13;
In the first part of the thesis, we focus on building scalable algorithms for large-scale data analytics. In Chapter 1, we consider a novel reformulation of the matrix completion problem and develop a projected stochastic gradient descent method, fastImpute, which solves matrix completion 20x faster than state-of-the-art methods while providing optimality guarantees. In Chapter 2, we introduce the Interpretable Matrix Completion (IMC) problem to provide meaningful insights for low-rank matrices using side information. We design an algorithm, OptComplete, based on the novel concept of stochastic cutting planes, that enables us to solve extremely large instances. In Chapter 3, we extend OptComplete to general data-driven mixed-integer optimization problems, including sparse regression, support vector machines, and the knapsack problem. We show that the algorithm matches or exceeds state-of-the-art results. &#13;
&#13;
The second part of the thesis revolves around applying large-scale data analytics to the COVID-19 pandemic. In Chapter 4, we introduce a novel policy-driven epidemiological model, DELPHI. We show that DELPHI compares favorably with other top epidemiological models and predicted the large-scale epidemics in the US, UK, and Russia months in advance. We demonstrate how the explicit modeling of governmental interventions in DELPHI enabled its use for planning the trial of the Janssen Ad26.COV2.S vaccine. In Chapter 5, we apply DELPHI to site COVID-19 mass vaccination centers in the US. We develop an optimization model to allocate the limited vaccine supply and minimize future pandemic deaths while fully incorporating the nonlinear DELPHI dynamics. We propose a coordinate descent method to solve the problem at scale, and show how optimized vaccine allocation can save 20% more individuals while still ensuring equity. Our conclusions directly affected how FEMA allocated its vaccines, increasing its focus on states such as Texas and Florida.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143205</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Locality-First Strategy for Developing Efficient Multicore Algorithms</title>
<link>https://hdl.handle.net/1721.1/143200</link>
<description>The Locality-First Strategy for Developing Efficient Multicore Algorithms
Xu, Helen Jiang
To scale applications on multicores up to bigger problems, software systems must be optimized both for parallelism to take full advantage of the multiple cores and for locality to exploit the memory system for cache-friendliness. Parallelization alone does not suffice to reach peak performance due to the processor-memory gap: the increasing divergence of processor and memory speeds. Locality and parallelism are difficult to optimize for independently — and even more challenging to combine — because they tend to conflict with each other.&#13;
&#13;
I advocate that algorithm developers employ a locality-first strategy for developing efficient parallel and cache-friendly algorithms for multicores. That is, they should first understand and exploit locality as much as possible before introducing parallelism. I argue that an algorithm developer can achieve high-performing code more easily with the locality-first strategy than with either a parallelism-first strategy or a strategy of trying to optimize both simultaneously.&#13;
&#13;
I present ten artifacts that leverage the locality-first strategy to create fast multicore algorithms that are simple to describe and implement. For example, locality-first data structure design in graph processing achieves about 2× speedup over the state of the art. Additionally, I prove mathematically that multicore cache-replacement algorithms that take advantage of locality outperform all other online algorithms. The other eight artifacts make similar contributions in their respective domains. Together, these artifacts demonstrate that the locality-first strategy provides an effective roadmap for algorithm developers to design and implement theoretically and practically efficient multicore code.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143200</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Features and Applications of Random Unitaries</title>
<link>https://hdl.handle.net/1721.1/143199</link>
<description>Features and Applications of Random Unitaries
Kong, Linghang
The random unitary model plays an essential role in the study of many-body quantum systems and has many applications in the field of quantum information. In this thesis we explore some features and applications of this model. We first study the scrambling property of random circuits (i.e., sequential applications of random unitaries according to some interaction geometry). We analyze the behavior of two commonly used measures of scrambling, out-of-time-ordered correlations and entanglement, and show that the corresponding notions of scrambling can be fundamentally different. We then study one application of random unitaries: the generation of near-optimal covariant quantum error-correcting codes (i.e., codes that obey some symmetry). We show that quantum codes with U(1) and SU(d) symmetry can be generated using random unitaries with the corresponding symmetry, and that the error rates of such codes typically saturate the fundamental limits. Finally, we explore another application of random unitaries, the randomized benchmarking protocol. We extend the randomized benchmarking framework of previous work to the setting of a continuous gate set. This allows us to benchmark arbitrary quantum gates in a unified way that accommodates the needs posed by experiments.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143199</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving first principles-based methods for correlated materials modeling</title>
<link>https://hdl.handle.net/1721.1/143195</link>
<description>Improving first principles-based methods for correlated materials modeling
Bajaj, Akash
Since its inception, the widespread use of density functional theory (DFT) as a cost-effective tool for determining the electronic structure of matter has continued to grow in most scientific fields. However, the mean-field description offered by semi-local approximations to the exchange-correlation (xc) functional within DFT becomes severely limiting when studying transition-metal-based correlated materials. The delocalization and static correlation errors inherent in such approximations make it challenging to study these systems, with their localized d electrons and their open-shell character. Beyond transition metals, an insufficient treatment of electronic correlation also leads to underestimated reaction barrier heights and unbound anions. Hybrid approximations and Hubbard-model-based correction functionals added to conventional semi-local xc approximations (e.g., DFT+U) offer potential solutions, but at the expense of higher computational cost in the former case, and with the lack of a systematic parameter evaluation scheme in both.&#13;
&#13;
In this thesis, we develop a formalized framework for either constructing new Hubbard-model inspired correction functionals or for reformulating existing functionals in this class, both targeted primarily towards an improved description of transition-metal based correlated materials. We first develop our judiciously modified DFT (jmDFT) framework and construct non-empirically derived, few-parameter, physically justified correction functionals. Developing these functionals is rooted in the recovery of a known stringent property of the exact xc functional, which ensures elimination of semi-local DFT errors at semi-local DFT cost. We validate the best jmDFT correction functional for eliminating semi-local DFT errors in s-, p- and d-block atoms and simple diatomics. We then focus on transition-metal based systems and first demonstrate a necessary reformulation of newly derived, and even existing, Hubbard-model based corrections for an efficient elimination of their semi-local DFT errors within the first-principles jmDFT framework. Focusing on specific transition-metal systems and properties, we first deploy jmDFT to recover physically meaningful eigenvalues within DFT that accurately estimate redox potentials of transition metal complexes deployed in redox flow batteries. As our next example, we demonstrate the applicability of jmDFT towards accurate prediction of the spin-splitting energy of a potential spin crossover complex. We then deploy our Hubbard-model reformulation for simultaneously improving the semi-local description of surface reactivity and surface stability of transition metal oxides, thereby bypassing the higher computational cost of hybrid approximations. Given the general lack of transferability of Hubbard-model approaches for non-transition-metal systems, we first use hybrid DFT for exploring degradation mechanisms of the Nafion membrane in fuel cells. 
We then demonstrate the use of a wave function theory based low-cost approach for verifying the accuracy of hybrid DFT predictions. Finally, we address this lack of transferability by deploying jmDFT for improving upon the semi-local description of unbound non-transition-metal anions. In all, this thesis demonstrates: (i) the importance of gaining a fundamental understanding of the limitations of conventionally used low-cost first-principles approaches behind their inaccurate treatment of electronic correlation, especially in transition-metal based correlated materials, and (ii) how this understanding paves the way for a judicious improvement of such approaches, thereby making them accurate for studying correlated systems while retaining their lower cost.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143195</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CLIMATE—CARBON—EQUITY Making Sustainable Design Concepts Accessible for All</title>
<link>https://hdl.handle.net/1721.1/143194</link>
<description>CLIMATE—CARBON—EQUITY Making Sustainable Design Concepts Accessible for All
Arsano, Alpha Yacob
Climate has a significant influence on how buildings perform, and how we design and build buildings impacts the climate. Therefore, the most effective sustainable strategies for low-carbon buildings are heavily influenced by the climatic context of the project. In this thesis, I present the development, validation, and application of an early-stage design analysis method called climabox as a toolset to evaluate the potential for low-carbon building strategies in any location for which climate data is available.&#13;
&#13;
By presenting reliable bioclimatic information in a clear, intuitive format, the approach enables designers and consultants worldwide to make actionable, sustainable design decisions from the beginning of a project forward. The methodology has been implemented in a web app called ClimaPlus that is accessible on any web-enabled device. &#13;
&#13;
The web app has been successfully tested in a Massive Open Online Course (MOOC), launched in collaboration with the MIT Energy Initiative (MITEI), to teach sustainable building design as part of the Future of Energy Systems MicroMasters program. The goal is to make easily accessible and actionable design guidelines available to learners who want to develop energy-efficient and low-carbon building concepts anywhere. Drawing on a total enrollment of over 40,000 learners worldwide, I discuss the challenges and lessons learned from delivering the introductory, university-level sustainable building design course.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143194</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Probabilistic Machine Learning Models to Semiconductor Fabrication</title>
<link>https://hdl.handle.net/1721.1/143184</link>
<description>Applications of Probabilistic Machine Learning Models to Semiconductor Fabrication
Lang, Christopher Ilic
Semiconductor fabrication relies heavily on the precision and accuracy of its individual processes in order to meet device requirements. If left unchecked, variations in these processes can lead to decreased performance and yield of the final product. While analysis and control of these variations have been employed for decades, recent developments in machine learning have introduced a wide variety of new methods that can potentially be used to better model, monitor, and control these processes. These methods offer the possibility of being more powerful, scalable, and accurate than traditional process control methods.&#13;
&#13;
While many machine learning methods are promising, unique aspects of semiconductor fabrication create challenges for many machine learning approaches. In particular, the high cost of semiconductor fabrication often results in data-limited scenarios, as collecting large quantities of data can be infeasibly expensive. Because of this limitation, we investigate the use of probabilistic methods in a variety of semiconductor fabrication settings. These methods are often less prone to overfitting than alternative machine learning methods, while still being flexible enough to model complex systems. Specifically, we investigate the application of probabilistic machine learning methods in four distinct case studies.&#13;
&#13;
First, we study virtual metrology systems, with two goals in mind. Our first goal is to define a virtual metrology framework that allows us to better understand the sources of error commonly seen in these systems. This framework relates the recipe, chamber, sensor, and wafer variables, and incorporates two common sources of error: observability errors and concept drift. Our second goal is to then use this framework to develop our own modeling approach that is well suited to model systems where these errors are present. Our solution is a Bayesian approach that is similar to the traditional Kalman filter; however, it models the relationship between two variables, as opposed to an unknown system state. &#13;
&#13;
We then investigate a probabilistic method for optimizing dose uniformity in ion implantation systems. A common approach for improving dose uniformity relies on adjusting the implantation time across the wafer to compensate for beam variations. Here, we learn these variations, then solve for a set of compensating times. Our approach consists of two components: a modeling component and an optimization component. The modeling component is similar to the probabilistic method we use for modeling virtual metrology systems, but also incorporates prior beliefs tailored to the ion implantation setting. The optimization component then uses our forward model to improve dose uniformity given the physical constraints of the tool and process. We compare this method to the existing industry tuning method and see significant improvements in tuning time, process throughput, and tuning success.&#13;
&#13;
Next, we investigate probabilistic anomaly detection methods, which we use to detect process faults as they occur. These methods use process sensor information to determine whether the current process is operating nominally. We use kernel density estimation to estimate probability distributions for the sensor signals under normal operating conditions; these distributions are then used to determine the likelihood that a process is operating nominally. The approach is shown to compare favourably to a number of traditional process control methods, including statistical process control, one-class support vector machines, and variational autoencoder based anomaly detection methods.&#13;
&#13;
Finally, we investigate the use of Bayesian optimization and Gaussian process models to improve thickness uniformity in sputtering deposition processes. Here, we use Gaussian processes to model the thickness uniformity in sputtering deposition processes as a function of both chamber configuration and recipe parameters. This model is used in an iterative manner to find parameters that meet the desired uniformity requirements. Our modeling technique compares favourably to a number of standard regression approaches, including polynomial models, multivariate splines, gradient boosted regression trees, and a number of different deep learning architectures.&#13;
&#13;
While these four case studies each consider a unique application of probabilistic methods to semiconductor fabrication, two key themes run throughout. First, we find that these probabilistic methods are less prone to overfitting in data-limited scenarios than many alternative methods. The inherent regularization provided by priors and observation noise estimates is key to the success of these methods. Second, the incorporation of process- or domain-specific knowledge is crucial to training with limited data. Understanding the underlying system, structuring the approach accordingly, and making minor approximations reduce the complex original problems to a simpler form, enabling effective application of probabilistic machine learning methods.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143184</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anthropogenic and natural radioisotopes as tracers for contaminant sources and particulate fluxes</title>
<link>https://hdl.handle.net/1721.1/143183</link>
<description>Anthropogenic and natural radioisotopes as tracers for contaminant sources and particulate fluxes
Kenyon, Jennifer An
Radioactive isotopes act as nuclear clocks that are utilized to trace and measure rates of chemical, biological, physical, and geological oceanographic processes. This thesis seeks to utilize both artificial (e.g., released from anthropogenic sources) and natural radioisotopes as tracers within the Pacific Ocean basin. Artificial radioisotopes released as a result of the 2011 Fukushima Daiichi nuclear power plant accident have the potential to negatively impact human and environmental health. This study evaluates 137Cs, 90Sr, and 129I concentrations in seawater off the coast of Japan, reconciles the sources of contaminated waters, and assesses the application of 137Cs/90Sr, 129I/137Cs, and 129I/90Sr as oceanic tracers. The analysis of activity ratios suggests a variety of sources, including ongoing sporadic and independent releases of radiocontaminants. Though decreasing, concentrations remain elevated compared to pre-accident levels. Future planned releases of stored water from the reactor site may affect the surrounding environment, and thus continued efforts to understand the distribution and fate of these radionuclides are warranted.&#13;
&#13;
Naturally occurring radioisotopes (e.g., the 238U-234Th series used in this thesis) can give insight into surface export and remineralization of particulate organic carbon (POC) and trace metals (TMs). POC and TMs play a vital role in regulating the biological carbon pump (BCP), which in turn helps to moderate atmospheric CO2 levels by transporting carbon to the deep ocean, where it can be sequestered on timescales of centuries to millennia. In this thesis we utilize the 238U:234Th disequilibrium method along the GEOTRACES GP15 Pacific Meridional Transect to provide basin-scale estimates of POC export and remineralization. Use of this method to constrain TM fluxes has so far been limited and recent, and as such this study also seeks to further develop the method for understanding TM cycling through comparative flux studies in the North Pacific.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143183</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global and Robust Optimization for Engineering Design</title>
<link>https://hdl.handle.net/1721.1/143181</link>
<description>Global and Robust Optimization for Engineering Design
Öztürk, Berk
There is a need to adapt and improve conceptual design methods through better optimization in order to address the challenge of designing future engineered systems. Aerospace design problems are tightly coupled optimization problems and require all-at-once solution methods for design consensus and global optimality. Although the literature on design optimization has been growing, it has generally focused on the use of gradient-based and heuristic methods, which are limited to local and low-dimensional optimization, respectively. There are significant benefits to leveraging structured mathematical optimization instead. Mathematical optimization provides guarantees of solution quality, and is fast, scalable, and compatible with the use of physics-based models in design. Perhaps more importantly, there has been a wave of research in optimization and machine learning that provides new opportunities to improve the engineering design process. This thesis capitalizes on two such opportunities.&#13;
&#13;
The first opportunity is to enable efficient all-at-once optimization over constraints and objectives that use arbitrary mathematical primitives. This work proposes a constraint sampling and learning approach for global optimization, leveraging developments in machine learning and mixed-integer optimization. More specifically, the feasible space of intractable constraints is sampled using existing and novel design of experiments methods, and learned using optimal classification trees with hyperplanes (OCT-Hs). OCT-Hs describe union-of-polyhedra approximations of intractable constraints, which are solved efficiently using commercial solvers to find near-feasible and near-optimal solutions to the global optimization problem. The constraints are then checked and the solution is repaired using projected gradient methods, ensuring feasibility and local optimality. The method is first tested on synthetic examples, where it finds the global optima for 9 out of 11 benchmarks, and high-performing solutions otherwise. Then it is applied to two real-world problems from the aerospace literature, and especially to a satellite on-orbit servicing problem that cannot be addressed via other global optimization methods. These applications demonstrate that decision tree driven optimization provides efficient, practical and optimal solutions to difficult global optimization problems present in aerospace design as well as other domains, regardless of the form of the underlying constraints.&#13;
&#13;
The second opportunity is to optimize designs affected by parametric uncertainty in a tractable and deterministic manner, while providing guarantees of constraint satisfaction. Inspired by the wealth of literature on robust optimization, and specifically on robust geometric programming, this thesis proposes and implements robust signomial programming to solve engineering design problems under uncertainty. The methods are tested on a conceptual aircraft design problem, demonstrating that robust signomial programs are sufficiently general to address engineering design problems, solved efficiently by commercial solvers, and result in designs that protect deterministically against uncertain parameter outcomes from predefined sets. In addition, robust designs are found to be less conservative than designs with margins; robust aircraft demonstrate 9% better average performance than aircraft designed with margins over the same scenarios, while providing guarantees of constraint feasibility.&#13;
&#13;
In anticipation of future aerospace design problems becoming increasingly coupled, complex and risky, this thesis provides a new perspective for dealing with design challenges using structured mathematical optimization. The proposed methods inject mathematical rigor into engineering design methods while keeping practical concerns for conceptual design in focus.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143181</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A More Updated Union: A History of New Liberals and Their New Computers in the New New South</title>
<link>https://hdl.handle.net/1721.1/143180</link>
<description>A More Updated Union: A History of New Liberals and Their New Computers in the New New South
Aidinoff, Marc Frederick
By the end of the twentieth century, the United States welfare state itself seemed to exist inside of a computer system. Software and hardware mediated the experience of managing poverty. To seek entitlements was to wait for a caseworker to find you in the system. This dissertation traces the computerization of the welfare state as an explicit reformatting of the liberal social contract from a localized system of uneven entitlements to a national regime of extraction. &#13;
&#13;
Following the computerization of the welfare state leads not to Silicon Valley, but to the U.S. South. In a region lampooned as so behind that politicians could campaign on promises to “never be last again,” networked computing served as a particularly alluring symbol and mechanism for what governance ought to be. There, Democrats, often organizing under the banner of “neoliberalism,” sought to rebuild their party’s electoral power by modernizing the mechanisms of government. Computerization—the social, cultural, and technological processes of designing, installing, and maintaining computer systems—inspired and confined new liberal policy and politics. It would strengthen racialized systems of power by transforming normative policy choices about classes of people into technological problems. Technical administrative institutions such as Mississippi’s Central Data Processing Authority (CDPA) and automated networked tools such as the Mississippi Application Verification Eligibility Reporting and Information Control System (MAVERICS) more than evidenced a new liberal ideology; they enabled and enacted it.&#13;
&#13;
From 1968, when the state of Mississippi founded an agency to centralize government computing, to 2001, when a related state agency constructed a massive new facility to centralize the technology and technical personnel to collect incoming payments from citizens, the form and function of the U.S. welfare state changed. These shifts were epistemic—rooted in new ways the state could know the citizen—and operational—contingent on automated mechanisms to sustain administration. They served to reassign the responsibility of paying for welfare from the national or the state government to noncustodial parents. Local events demonstrated the persistence of welfare systems in an age of state retrenchment and codified national trends toward carceral welfare policy.&#13;
&#13;
Tacking between local and national, this dissertation follows not only Southern Democrats, but also technological federalism, the interstate flow of technical experts and technological things. This approach highlights the mutual reinforcement of computerization and neoliberalization; just as technology structured ideas about what the state could and should do, these ideas materialized as technical systems. Foregrounding computerization leaves the welfare state in plain view—a state that did not disappear after the 1960s but took on a new form, with paperless tools to discipline and networked mechanisms to punish the private citizen.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143180</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Characterization of Plaque Seeded Amyloid-β Fibrils by Magic Angle Spinning NMR</title>
<link>https://hdl.handle.net/1721.1/143174</link>
<description>Structural Characterization of Plaque Seeded Amyloid-β Fibrils by Magic Angle Spinning NMR
Michael, Brian C.
Accumulation of plaques consisting of amyloid fibrils of the peptide amyloid-β (Aβ) in the brain is one of the hallmarks of Alzheimer’s disease (AD). Aβ is prone to polymorphism and the structure of fibrils is sensitive to the conditions under which they are formed. The two dominant forms of Aβ are Aβ₁₋₄₀ and Aβ₁₋₄₂. Aβ₁₋₄₂ is more neurotoxic and aggregates faster. While studies of Aβ₁₋₄₀ have not found any consensus on a single structure, in the case of Aβ₁₋₄₂ three solid state nuclear magnetic resonance (NMR) studies conducted by different research groups have found essentially the same structure. However, since Aβ is prone to polymorphism it is not clear if this consensus structure reflects what is present in the brain of an AD patient. Unfortunately, NMR studies require isotopic labelling which is not possible in vivo, but amyloid fibrils display a seeding behavior where mature fibrils catalyze the formation of additional fibrils from peptide monomers. The work presented in this thesis focuses on preparing and characterizing fibrils by using plaques isolated from an AD patient’s brain as seeds for isotopically labelled Aβ₁₋₄₂ monomers. The goal of this process was to prepare isotopically labelled fibrils that reflect the structures found in the brain. We have demonstrated that we can reproducibly prepare such seeded samples, which display a single set of NMR peaks indicating a single molecular fold of Aβ. Interestingly, the NMR spectra of the plaque seeded samples do not match the previously identified structure of Aβ₁₋₄₂ found by three groups. I applied cutting edge solid state NMR techniques to obtain site specific spectral assignments and distance constraints. I have calculated a structural model for the plaque seeded fibrils based on those NMR derived constraints that converged in the region from D23-A42. We have also collected cryogenic electron microscopy (cryo-EM) images of the seeded fibrils. 
Even though the NMR spectra show a single set of peaks, we have been unable to reconstruct high-resolution electron density maps due to heterogeneity in the width and twist of the fibrils.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143174</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Bayesian inference via optimal transport misfit measures: applications and algorithms</title>
<link>https://hdl.handle.net/1721.1/143171</link>
<description>Robust Bayesian inference via optimal transport misfit measures: applications and algorithms
Scarinci, Andrea
Model misspecification constitutes a major obstacle to reliable inference in many problems. In the Bayesian setting, model misspecification can lead to inconsistency as well as overconfidence in the posterior distribution associated with any quantity of interest, i.e., under-reporting of uncertainty.&#13;
&#13;
This thesis develops a Bayesian framework to reduce the impact of a type of model misspecification arising in inference problems involving time series data: unmodeled time warping between the observed and modeled data. Inference problems involving dynamical systems, signal processing, and more generally functional data can be affected by this type of misspecification. Inverse problems in seismology are an important example of this class: inaccuracies in characterizing the complex, spatially heterogeneous propagation velocities of seismic waves can lead to error in their modeled time evolution. Data are insufficient to constrain these propagation velocities, and therefore we instead seek robustness to model error. Instrumental to our approach is the use of transport–Lagrangian (TL) distances as loss/misfit functions: such distances can be understood as “graph-space” optimal transport distances, and they naturally disregard certain features of the data that are more sensitive to time warping. We show that, compared to standard misfit functions, they produce posterior distributions that are both less biased and less dispersed.&#13;
&#13;
In particular, we use moment tensor inversion, a seismic inverse problem, as our primary motivating application and demonstrate improved inversion performance of the TL loss—by a variety of statistical and physical metrics—for a range of increasingly complex inversion and misspecification scenarios. At the same time, we address several broader methodological issues. First, in the absence of a tractable expression for a TL-based likelihood, we construct a consistent prior-to-posterior update using the notion of a Gibbs posterior. We then compare the impact of different loss functions on the Gibbs posterior through a broader exploration of what constitutes “good” inference in the misspecified setting, via several statistical scoring rules and rank statistics, as well as application-specific physical criteria. In an effort to link our generalized (Gibbs) Bayesian approach to a more traditional Bayesian setting, we also conduct an analytical and numerical investigation of statistical properties of the transport-Lagrangian distance between random noisy signals.&#13;
&#13;
As a complement to Bayesian inversion, we also demonstrate the utility of optimal transport distances for frequentist regression. We study the linear regression model with TL loss, describe the geometry of the associated mixed-integer optimization problem, and propose dedicated algorithms that exploit its underlying structure. We then compare TL linear regression with classical linear regression in several applications.&#13;
&#13;
Finally, we discuss potential generalizations of TL distances to include the notion of “shape” through time series embeddings, as well as possible extensions of the proposed framework to other forms of model misspecification.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143171</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Functional Defects in Oxides Using External Stimuli</title>
<link>https://hdl.handle.net/1721.1/143169</link>
<description>Engineering Functional Defects in Oxides Using External Stimuli
Wang, Jiayue
The physical, chemical, and mechanical properties of materials often depend strongly on functional defects of various dimensionalities, such as vacancies, interfaces, and precipitates. Defect engineering therefore lies at the heart of materials innovation for next-generation technologies. However, because of the enormous span in length scale (from electrons to meters), such bottom-up engineering can be extremely challenging. This thesis aims to help bridge this gap as follows. First, by combining well-controlled synthesis, state-of-the-art defect characterization, and multiscale defect modeling, I developed an experimental and analysis framework to identify the critical functional defects during catalytic reactions and solid-state phase transformations in functional oxides. Second, I utilized and explained the effects of external stimuli, specifically lattice strain, oxygen chemical potential, chemical doping, and ion irradiation, in tailoring the concentration and stability of these critical defects in oxides to boost their functionalities. The findings and methodologies provided in this thesis help to establish a framework for a defect genome, which can benefit materials design in a broad range of defect-sensitive applications, including solid oxide fuel cells, memristors, and (electro)catalysts.&#13;
&#13;
In the first study, I investigated the surface defect equilibria and their strain dependency in La₀.₆Sr₀.₄FeO₃ (LSF) during oxygen incorporation reactions. Since the surface composition and structure can deviate significantly from the bulk, the surface defect chemistry can also differ from the bulk. Here, I demonstrated for the first time that the strain-dependent surface defect equilibria of LSF can be largely captured by bulk-like defect models with shifted oxygen chemical potential. &#13;
&#13;
In the second study, I investigated the role of surface defect chemistry in CeO₂ during carbon poisoning (i.e., coking) reactions, an undesirable process in CO₂ electrolysis. I examined the coking reaction both experimentally and computationally and identified surface Ce³⁺-Ce³⁺ pairs as the critical catalytic centers. Based on these insights, I successfully mitigated coking on CeO₂-based materials by suppressing the formation of Ce³⁺-Ce³⁺ pairs through doping.&#13;
&#13;
The third to sixth studies in this thesis investigated the role of defects in controlling solid-state phase transformations in functional oxides. Exsolution is a promising synthesis method for fabricating self-assembled nanocomposites via phase decomposition. Since defect formation in the lattice is the elementary step in exsolution, I expected defect engineering to be the fundamental tuning knob for controlling and tailoring exsolution. To examine this hypothesis, I employed lattice strain, facet engineering, thermal annealing, and ion-beam irradiation to tailor the defect chemistry of the host oxide and investigated their impact on both surface and bulk exsolution phenomena.&#13;
&#13;
In the third study, I utilized lattice strain as a dopant-free method to tailor the concentration of point defects in LSF. As tensile strain facilitates defect formation, tensile-strained LSF exhibits a higher Fe⁰ metal concentration, a larger density of nanoparticles, and a reduced particle size at its surfaces. In the fourth study, I controlled exsolution through surface orientation: different lattice orientations give rise to different incubation times in exsolution and hence generate exsolved particles with different sizes and densities. In the fifth study, I varied the external gas environment to control bulk exsolution in LSF. By tuning the concentration and oxidation states of the exsolution-induced lattice defects, we succeeded in obtaining a substantial increase in electrical conductivity of more than two orders of magnitude, and continuous modulation of magnetization between 0 and 110 emu/cm³. In the last study, we used 10 keV Ni⁻ beam irradiation to introduce defects and dopants into SrTi₀.₆₅Fe₀.₃₅O₃ (STF) to tailor Fe exsolution. As a result, we demonstrated that in-situ Ni bombardment can change the composition of the exsolved nanoparticles from unary Fe nanoparticles to binary Fe-Ni alloys. Moreover, we found that, compared to thermal exsolution, the STF film after Ni irradiation also experienced significantly enhanced bulk exsolution, likely due to irradiation-induced defects in the STF matrix. These results not only advance the fundamental understanding of exsolution mechanisms, but also pave the way towards utilizing external stimuli to design novel functional oxides with optimal morphology, composition, and defect chemistry.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143169</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Improving the Performance of Mitigating Transient Execution Attacks</title>
<link>https://hdl.handle.net/1721.1/143165</link>
<description>Understanding and Improving the Performance of Mitigating Transient Execution Attacks
Behrens, Jonathan
This thesis makes two contributions: (1) a measurement study of the performance evolution of mitigations against transient execution attacks over generations of processors, and (2) the WARD kernel design, which eliminates as much as half the overhead of mitigations on older processors.&#13;
&#13;
The measurement study maps end-to-end overheads to the specific mitigations that cause them. It reveals that hardware fixes for several transient execution attacks have reduced overheads on OS-heavy workloads by a factor of ten. However, overheads for JavaScript applications have remained roughly flat because they are caused by mitigations for attacks to which even the most recent processors remain vulnerable. Finally, the study shows that a few mitigations account for most of the performance costs.&#13;
&#13;
WARD is a novel operating system architecture that is resilient to transient execution attacks, yet avoids expensive software mitigations that existing operating systems employ when running on pre-2018 processors. It leverages a new hardware/software contract termed the Unmapped Speculation Contract, which describes limits on the speculative behavior of processors.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143165</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secondary Eyewall Formation as a Response to Evolving Tropical Cyclone Wind Structure</title>
<link>https://hdl.handle.net/1721.1/143163</link>
<description>Secondary Eyewall Formation as a Response to Evolving Tropical Cyclone Wind Structure
Shivamoggi, Rohini
Secondary eyewalls are rings of strong winds and precipitation that form in many major (Category 3 or higher) tropical cyclones. They often contract, intensify, and replace the primary eyewall of the tropical cyclone in a process known as an eyewall replacement cycle. Eyewall replacement cycles have been linked to a spatial broadening of tropical cyclone winds as well as fluctuations in tropical cyclone intensity, making secondary eyewall formation an important problem for forecasters aiming to refine predictions of both tropical cyclone intensity and damage. In this thesis, we first provide an overview of current work on secondary eyewall formation (Chapter 1) and then examine the relationship between secondary eyewall formation and changes to a tropical cyclone's wind field in two contexts: (1) rapid intensification of the primary eyewall (Chapter 2) and (2) an expansion of the outer wind field (Chapter 3). The second chapter of the thesis demonstrates that in both an idealized hurricane intensity model (the Coupled Hurricane Intensity Prediction System) and best-track data and observations of secondary eyewalls, rapid intensification is often followed by secondary eyewall formation. This is concerning from the point of view of risk estimation because it suggests that, in addition to making tropical cyclones more intense, rapid intensification may also result in spatial broadening of the tropical cyclone winds near the inner core. The third chapter of the thesis describes work using an axisymmetric numerical model (Cloud Model 1) to establish a dynamical relationship between secondary eyewall formation and wind broadening in the outer region of a tropical cyclone. In these simulations, secondary eyewall formation is a means by which the inner region of a tropical cyclone adjusts to growth in the outer wind field. 
While some past studies have used quasi-steady frameworks for studying secondary eyewall formation, the results from Chapters 2 and 3 of this work emphasize the importance of examining secondary eyewall formation in frameworks in which the tropical cyclone's wind structure is evolving.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143163</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure, variability, and dynamics of the West Greenland Boundary Current System</title>
<link>https://hdl.handle.net/1721.1/143160</link>
<description>Structure, variability, and dynamics of the West Greenland Boundary Current System
Pacini, Astrid
The ventilation of intermediate waters in the Labrador Sea has important implications for the strength of the Atlantic Meridional Overturning Circulation. Boundary current-interior interactions regulate the exchange of properties between the slope and the basin, which in turn regulates the magnitude of interior convection and the export of ventilated waters from the subpolar gyre. This thesis characterizes the West Greenland Boundary Current System near Cape Farewell across a range of spatio-temporal scales. The boundary current system is composed of three velocity cores: (1) the West Greenland Coastal Current (WGCC), transporting Greenland and Arctic meltwaters on the shelf; (2) the West Greenland Current (WGC), which advects warm, saline Atlantic-origin water at depth, meltwaters at the surface, and newly-ventilated Labrador Sea Water (LSW); and (3) the Deep Western Boundary Current, which carries dense overflow waters ventilated in the Nordic Seas. The seasonal presence of the LSW and Atlantic-origin water is dictated by air-sea buoyancy forcing, while the seasonality of the WGCC is governed by remote wind forcing and the propagation of coastally trapped waves from East Greenland. Using mooring data and hydrographic surveys, we demonstrate that mid-depth-intensified cyclones generated at Denmark Strait are found offshore of the WGC and enhance the overflow water transport at synoptic timescales. Using mooring, hydrographic, and satellite data, we demonstrate that the WGC undergoes extensive meandering due to baroclinic instability that is enhanced in winter by LSW formation adjacent to the current. This leads to the production of small-scale, anticyclonic eddies that can account for the entirety of wintertime heat loss within the Labrador Sea. The meanders are shown to trigger the formation of Irminger Rings downstream.
Using mooring, hydrographic, atmospheric, and Lagrangian data, and a mixing model, we find that strong atmospheric storms known as forward tip jets cause upwelling at the shelfbreak that triggers offshore export of freshwater. This freshwater flux can explain the observed lack of ventilation in the eastern Labrador Sea. Together, this thesis documents previously unobserved interannual, seasonal, and synoptic-scale variability and dynamics within the West Greenland boundary current system that must be accounted for in future modeling.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143160</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Autonomous Navigation and Estimation in Novel Environments</title>
<link>https://hdl.handle.net/1721.1/143159</link>
<description>Improving Autonomous Navigation and Estimation in Novel Environments
Liu, Katherine Y.
Efficient autonomous navigation in novel environments is crucial to enable embodied agents to reach more sophisticated levels of autonomy. We are interested in improving autonomous navigation and estimation in unknown environments for vehicles carrying lightweight electro-optical sensor payloads. Due to sensing limitations, in non-trivial novel environments much of the geometric structure of the world has not yet been observed, leading to significant geometric ambiguity. Although collecting additional geometric information can reduce ambiguity, doing so is often at odds with the objectives of the mission. We propose to combine object-level semantic information with geometric information to tractably improve both navigation and estimation.&#13;
&#13;
In this thesis, we present three contributions towards improving autonomous navigation in novel environments. We first improve navigation efficiency in novel environments by encoding useful navigation behaviors in a sampling distribution informed by partial occupancy and object-level maps. Recognizing that object-level estimation is challenging under the limited viewpoints available while navigating efficiently, we also develop two methods of building object-level representations online. In our second contribution, we improve the viewpoint efficiency of object-level SLAM with ellipsoid representations by introducing an additional texture measurement and a semantic class shape prior. Finally, in our third contribution, we propose a novel method of deeply learned 3D object estimation that utilizes indirect image-space annotations and intra-class shape consistency to enable 3D object estimation from a single RGB image.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143159</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparsity in Machine Learning: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/143157</link>
<description>Sparsity in Machine Learning: Theory and Applications
Lahlou Kitane, Driss
Sparsity plays a key role in machine learning for several reasons, including interpretability, which is sought both by practitioners and by scientists. On one hand, interpretability can be essential in practice, such as in healthcare, where black-box models cannot be used to prescribe a treatment for a patient. On the other hand, interpretability is essential to understanding phenomena that are modelled using machine learning, such as plasma electromagnetic emissions. Besides interpretability, sparsity has several other important applications, such as improving the predictive power of models and reducing operational and investment costs.&#13;
&#13;
Integer optimization is a highly effective tool for designing methods that tackle sparsity. It offers a rigorous framework for building sparse models and has been shown to produce more accurate and sparser models than other approaches, including those using sparsity-inducing regularization norms. This thesis focuses on the application of integer optimization to sparsity problems.&#13;
&#13;
We provide two applications of sparse modeling. The first is the application of Mixed Integer Optimization (MIO) sparse regression to Laser Induced Breakdown Spectroscopy (LIBS), a modern and important chemical analysis technique. We build a methodology for sparse and robust models in chemometrics and test it on various types of mineral ore. The MIO approach beats experts’ predictions while offering remarkably sparser models than LASSO. As the R² achieved exceeds 0.99 in some cases, this application is, to the best of our knowledge, the first to provide empirical evidence that a true support exists in nature, a concept whose existence in real-life applications the optimization community has questioned. The second application concerns COVID testing and sparse classification. We propose a fast and simple method for the detection of SARS-CoV-2 based on spectroscopy. This novel method builds on machine learning capabilities to deliver a diagnosis in under a minute, without the use of any reagent, achieving a precision close to that of PCR. Sparse methods enable the detection of specific characteristics in the 3D structure of SARS-CoV-2 RNA and proteins.&#13;
&#13;
Given the importance PCA plays in our research and in machine learning in general, we also provide a new approach to the sparse PCA problem. This approach is the first to generate several sparse principal components in one step, whereas existing techniques rely on deflation to generate principal components iteratively. The proposed method (GeoSPCA) generates high-quality solutions that improve the variance explained by deflation techniques by more than an order of magnitude.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143157</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital and Microwave Superconducting Electronics and Experimental Apparatus</title>
<link>https://hdl.handle.net/1721.1/143150</link>
<description>Digital and Microwave Superconducting Electronics and Experimental Apparatus
Butters, Brenden A.
The lack of a high-performance and scalable superconducting memory has been a persistent issue in the field of superconducting computing for decades. There have been many attempts at addressing this issue; however, to date no technology has been able to completely satisfy this demand. In this work we present a novel memory design based on superconducting nanowires controlled by localized thermal effects. Initial results from this design are very promising and suggest that, with some further development, our design may satisfy the need for such a superconducting memory technology.&#13;
&#13;
As superconducting nanowire electronics mature and become faster and more complex, the traditional reliance on off-chip microwave components has become unsustainable. In this thesis, we present the design and experimental results for a set of on-chip microwave devices, including bias tees, filters, detectors, couplers, and delay lines. In addition, by using the modeling developed for the memory, we make this set of microwave devices tunable through thermal control of their kinetic inductance. To demonstrate the on-chip instrumentation that this library enables, we present a characterization of the thermal response of our tunable devices by means of an on-chip interferometer. With the increasing complexity of our designs, we found ourselves in need of a new experimental apparatus to support our work. Finding no suitable solutions either commercially available or in the literature, we developed a new versatile cryogenic experimental platform for nanowire electronics. The design presented here consolidates what was previously a number of discrete setups into one universal platform, while also greatly improving performance. Through the advances presented in this work, we have enabled the future realization of more complex nanowire-based superconducting electronics.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143150</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative models for neural time series with structured domain priors</title>
<link>https://hdl.handle.net/1721.1/143148</link>
<description>Generative models for neural time series with structured domain priors
Song, Andrew Hyungsuk
When I initially set out to research the intersection of statistical signal processing and neuroscience (neural signal processing), my research advisor, Professor Emery N. Brown, explained at length that the signals from seemingly complex neural/biological systems are not purely random, but rather have latent structures that can be recovered with principled approaches. This insight has stuck with me since that moment, and my research throughout graduate school has been devoted to understanding and practicing what I believed to be the appropriate neural signal processing framework. In this thesis, I define this framework from the Bayesian/optimization perspective and emphasize translating and integrating clinical and scientific domain knowledge, obtained from constant interaction and collaboration with experimental neuroscientists and clinicians. The thesis specifically focuses on uncovering latent structures in neural time series data by using domain priors/constraints, such as Gaussian processes, shift-invariance, sparsity, and smoothness, among many others. The thesis demonstrates that the Bayesian approach, with careful integration of these constraints, produces results/structures in the data that are not only interpretable but also better performing on the metrics of interest.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143148</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new triggering mechanism of the boiling crisis based on the percolation theory and its implication</title>
<link>https://hdl.handle.net/1721.1/143147</link>
<description>A new triggering mechanism of the boiling crisis based on the percolation theory and its implication
Zhang, Limiao
Boiling is a very effective heat transfer process used in nuclear reactors and in other applications such as cooling of high-performance computers, sterilization, and water desalination. However, this process is limited by the boiling crisis, an instability that causes a sudden transition from the nucleate boiling regime to the film boiling regime. The heat flux at which the boiling crisis occurs is known as the critical heat flux (CHF). The boiling crisis is likely to cause heaters to overheat, burn out, and fail, so systems are typically operated with an adequate margin to the CHF limit.&#13;
&#13;
Researchers have spent decades exploring the triggering mechanism of the boiling crisis. Historically, most models assumed that the boiling crisis is triggered by a macroscale hydrodynamic instability in the far-field liquid-vapor flows. However, there is a growing consensus that the boiling crisis is a near-wall phenomenon. In short, although the boiling crisis has been studied for almost a century, there is still no consensus on its triggering mechanism.&#13;
&#13;
Recent observations from our group and others suggest that the boiling crisis is a scale-free phenomenon and belongs to the universality class of critical phenomena that includes earthquakes, traffic jams, and the outbreak of COVID-19. Inspired by these observations, we propose a new way to view boiling heat transfer and the boiling crisis by modeling boiling as a bubble percolation process. By combining high-resolution experimental data with a stochastic model, we posit that the boiling crisis is triggered by an instability in the near-wall stochastic bubble interaction process. We formulate a Monte Carlo (MC) simulation model based on continuum percolation theory that elucidates how the scale-free distribution emerges from the bubble percolation process. This model allows us to formulate a unifying nondimensional law of the boiling crisis, which we verify using data from eleven different surface and operating conditions.&#13;
&#13;
Inspired by the concurrence of scale-free criticality and complexity in many of the aforementioned physical systems, we analyze the fractal behavior of the bubble interaction process and show that the critical phase transition in this phenomenon (i.e., the boiling crisis) coincides with a maximum in the fractal dimension (i.e., the maximum complexity of the system). We also show that nucleate boiling (not the boiling crisis) is a self-organized process that belongs to the universality class of phenomena exhibiting self-organized criticality.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143147</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Rheological Characterization of Nanofilled Materials for Automotive Applications</title>
<link>https://hdl.handle.net/1721.1/143143</link>
<description>Advanced Rheological Characterization of Nanofilled Materials for Automotive Applications
Du, Jianyi
Nanofilled polymer composites and lubricants have gained significant attention in fuel-efficient vehicle designs due to their superior material properties and economic potential at minimal filler loadings. However, mass-market applications are impeded by a lack of understanding of the complex rheological behavior arising from the addition of nanofillers, especially in strong shear and extensional flows. In this thesis, these challenges are addressed through the design of a rapid characterization protocol for the extensional rheology of such material systems, together with a comprehensive rheological study of a prototypical graphene-derived nanocomposite and the development of a robust constitutive model framework that provides insight into the microstructural variations induced by large deformations and strong flows during material processing and manufacturing operations.&#13;
&#13;
In the first part of this thesis, an improved version of capillary breakup extensional rheometry (CaBER) is presented, with a special focus on quantifying the filament thinning dynamics, which are governed by multiple contributions to the total tensile stress in the fluid. An Inelastic Rate-Thickening (IRT) constitutive model is proposed to characterize the weakly rate-dependent response of commercial synthetic motor oils. The evolution of the full-dimensional filament profiles is quantified through analytical and numerical calculations, from which an explicit empirical expression is developed based on the magnitude of each stress contribution. Finally, a statistical strategy is proposed to select the best-fit model with regularized parameters on the basis of the Bayesian information criterion, paving the way for an automated industrial process that extracts accurate and meaningful constitutive parameters from CaBER measurements.&#13;
&#13;
The second part of this thesis focuses on the filament thinning dynamics of entangled polymer systems based on two modern tube models derived from reptation theory. One-dimensional numerical solutions of the governing equations are demonstrated to accurately capture a number of key observations reported in previous studies of concentrated polymer solutions, including rate-thinning behavior near filament breakup and markedly different relaxation time constants in shear and extensional flows. An analytical expression for the ratio of these two relaxation times is obtained as a function of the polymer concentration and the number of entanglements, which shows excellent agreement with experimental results from a number of polymer systems with no additional fitting parameters. As a case study, the material response predicted by the Rolie-Poly (Rouse-Linear-Polymer) model is used to interpret the rheology and dynamics of concentrated cellulose/ionic liquid systems, which are beginning to find application in fabric recycling and regeneration through a wet-spinning process. The material response in nonlinear shear and extensional flows is fitted to the model to obtain a universal set of constitutive parameters and scalings that can describe the rheology of these complex nanocomposite solutions as the concentration, temperature, and degree of polymerization are varied.&#13;
&#13;
The final part of this thesis presents a comprehensive study of the rheology of a graphene oxide (GO)/polyvinyl alcohol (PVA) system. Distinct features of the low-frequency dynamic moduli indicate the formation of a fractal nanofiller microstructure as the GO concentration is increased. A nonlinear fractional K-BKZ constitutive framework is used to develop a comprehensive rheological equation of state for this nanocomposite system in both the linear and non-linear regimes. In extensional flow the observed rheological behavior is similar to the prediction from the tube models due to the structural similarity of the materials, and the nanofiller orientation can be readily described in terms of the model parameters. The sensitivity of the nanofiller structural variations to the flow kinematics inspires the design of a new rheometric method to optimize nanofiller dispersion by using a periodic exponential shear flow. General principles for the design of the required flow profiles are provided and are justified via proof-of-concept experiments.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143143</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modern Interactive Proofs</title>
<link>https://hdl.handle.net/1721.1/143142</link>
<description>Modern Interactive Proofs
Holden, Dhiraj
In this thesis, we study several extensions of the concept of interactive proofs. First, we consider non-signaling multi-prover interactive proofs. Interacting with multiple non-interacting provers increases the ability of the verifier to check the solution by asking the provers different questions and checking the consistency of their answers. In a non-signaling multi-prover proof, the provers can interact and correlate their answers, but not in an unlimited way: non-signaling provers must make the distribution of answers for any subset of provers depend only on the distribution of questions the verifier sends to that same subset. Non-signaling proofs have found applications in cryptography and hardness of approximation. An important open problem is characterizing the power of non-signaling proofs. It is known that 2-prover non-signaling proofs are characterized by PSPACE, and that non-signaling proofs with poly(&#119899;) provers are characterized by EXP. However, the power of &#119896;-prover non-signaling proofs, for 2 &lt; &#119896; &lt; poly(&#119899;), remained an open problem. We show that &#119896;-prover non-signaling proofs (with negligible soundness) for &#119896; = &#119874;(√log &#119899;) are contained in PSPACE. We prove this via two different routes that are of independent interest. In both routes we consider a relaxation of non-signaling called sub-non-signaling. Our main technical contribution (which is used in both our proofs) is a reduction showing how to convert any sub-non-signaling strategy with value at least [formula] into a non-signaling one with value at least [formula].&#13;
&#13;
Second, we introduce pseudo-deterministic interactive proofs (psdIP): interactive proof systems for search problems where the verifier is guaranteed with high probability to output the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized algorithms: the goal of the latter is to find canonical solutions to search problems whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. The IP = PSPACE characterization implies that psdIP = IP. The challenge is to find constant round pseudo-deterministic interactive proofs for hard search problems. We show a constant round pseudo-deterministic interactive proof for the graph isomorphism problem: on any input pair of isomorphic graphs (&#119866;₀, &#119866;₁), there exists a unique isomorphism from &#119866;₀ to &#119866;₁ (although many isomorphisms may exist) which will be output by the verifier with high probability, regardless of any dishonest prover strategy. In contrast, we show that it is unlikely that psdIP proofs with constant rounds exist for NP-complete problems by showing that if any NP-complete problem has a psdIP protocol, then the polynomial hierarchy collapses.&#13;
&#13;
Third, we define doubly-efficient pseudo-deterministic proofs for polynomial time search problems: pseudo-deterministic proofs with the extra requirement that the prover runtime is polynomial and the verifier runtime to verify that a solution is canonical is significantly lower than the complexity of finding any solution, canonical or otherwise. Naturally this question is particularly interesting for search problems for which a lower bound on the worst-case complexity is known or has been widely conjectured.&#13;
&#13;
We show doubly-efficient pseudo-deterministic algorithms for a host of natural problems whose complexity has long been conjectured, in particular linear programming and a variety of problems at the center of fine-grained complexity.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143142</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraining Planetary Science Problems with Micro-Paleomagnetism</title>
<link>https://hdl.handle.net/1721.1/143141</link>
<description>Constraining Planetary Science Problems with Micro-Paleomagnetism
S. Borlina, Cauê
With the development of micro-paleomagnetic techniques, we can measure the magnetic fields of micro-scale samples that have direct implications for problems in planetary science. In this thesis I used micro-paleomagnetic techniques to address two main problems: (1) when Earth’s magnetic field started and (2) how the magnetic field in the solar nebula varied in space and time. For the first, I conducted paleomagnetic measurements on Jack Hills zircon grains from Western Australia to address the early evolution of Earth’s magnetic field, which has implications for the thermal evolution of the Earth and habitability. For the latter, I focused on the paleomagnetism of three different components of CO carbonaceous chondrites: calcium-aluminum-rich inclusions, chondrules, and matrix; from these we can measure the solar nebula magnetic field, which has direct implications for planetary formation. This thesis is divided into six chapters. The first introduces the general theme of the thesis. The second presents my work on the early evolution of Earth’s magnetic field. The third, fourth, and fifth present my results from meteoritic magnetism. The sixth chapter discusses future work.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143141</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Design, Noise Reduction, and Improved Dimension Reconstruction for High Performance Ellipsometry</title>
<link>https://hdl.handle.net/1721.1/143138</link>
<description>System Design, Noise Reduction, and Improved Dimension Reconstruction for High Performance Ellipsometry
Jiang, Bo
Advances in nano-manufacturing and many other industries call for high-performance metrology and inspection methods. Nano-manufacturing has witnessed shrinking critical dimensions and demonstrated mass production capability: the lithography process has reached the 5 nm node and could potentially reach the 2 nm node by 2024; one production line can produce 125 wafers per hour, each 300 mm in diameter. Therefore, the desired metrology method must be non-invasive to achieve full sample and batch inspection, have high resolution to keep up with the shrinking critical dimensions, and feature high speed to be compatible with high-throughput manufacturing needs.&#13;
&#13;
Ellipsometry has gained popularity due to its advantages of non-invasiveness, high speed, and high resolution. The technology is an important metrology and inspection tool in many industries. Ellipsometry is a major tool in new material characterization, and is considered an important metrology method for the next generation of semiconductor devices in nano-manufacturing. In addition, the technology finds application in biomedical detection and surface roughness estimation. The working principles of ellipsometry are as follows. An ellipsometer experimentally measures a sample’s effect on a light beam’s polarization state, quantified by ellipsometric parameters or a Mueller matrix. The experimental results are then fitted to an optical model to extract the sample’s critical dimensions and/or optical properties. &#13;
&#13;
This thesis improves the performance of ellipsometry through three aspects. &#13;
&#13;
The first part of this thesis quantifies and mitigates the mixed Poisson-Gaussian noise induced errors to improve the ellipsometer’s measurement accuracy and precision. The measurement accuracy can be significantly affected by the existence of Poisson-Gaussian noise originating from detection and the environment. This work characterizes and quantifies the noise through experiments on an in-house setup. Error propagation analysis is then performed to quantify the measurement error in terms of normalized Mueller matrix elements. The effects of system parameters on the Poisson-Gaussian noise induced errors are studied, including signal strength, the signal sampling frequency, and the first-order coefficient between the signal variance and mean. This thesis then proposes a signal demodulation method in spectroscopic ellipsometry based on maximum likelihood, in order to reduce the effects of Poisson-Gaussian noise. The method accounts for the signal’s statistical distribution and solves for the Fourier coefficients by maximizing the probability of the observed signal. The method’s capability of achieving higher Mueller matrix accuracy as well as higher dimension precision is demonstrated.&#13;
&#13;
The second part develops a reconstruction method for dimensions. The objective is to improve the dimension reconstruction precision and the reconstruction’s sensitivity to changes in dimensions. The reconstruction algorithm along with weights’ selection are formulated. The method assigns higher weights to the more important configurations, where the measurement is sensitive to dimension changes. The selection is based on partial derivatives of the Mueller matrix elements with respect to dimensions. Improved precision is demonstrated through experimental measurements of thin film standards and gratings.&#13;
&#13;
The third part of this thesis shows the design and effectiveness of a Faraday effect-based photometric ellipsometer. The new instrument eliminates mechanical motions and enables high-speed and controllable modulation frequency. In addition, it features a linear relationship between the applied current and the rotation of the polarization plane and thus enables fast and easy demodulation. This thesis presents the design, data reduction and the calibration procedure. Air and thin film sample experiments validate the effectiveness of the prototype.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143138</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forced Response System Identification of Full Aero-Engine Rotordynamic Systems</title>
<link>https://hdl.handle.net/1721.1/143135</link>
<description>Forced Response System Identification of Full Aero-Engine Rotordynamic Systems
Hur, In Young
A new class of rotordynamic challenges has surfaced in advanced turbofan engine designs where there are opportunities for dynamical coupling between rotor shafts and support structures, requiring entire system-level assessment. Addressing such vibration challenges relies on knowledge of damping levels in the system, but determining the rotordynamic damping in a full aero-engine remains challenging. &#13;
&#13;
This thesis presents a first-of-its-kind forced-response system identification approach to measuring rotordynamic damping of shaft modes in a full gas turbine aero-engine. A reduced-order modeling framework that captures the full-engine dynamics by incorporating the coupling between rotor shafts and support static structure was developed for rigorous design and assessment of the experiment. A statistical analysis involving virtual simulation of the experiment design demonstrates that the experiment is capable of measuring rotordynamic damping within 15% for most shaft modes throughout the operating range of the Pratt &amp; Whitney 615 turbofan engine.&#13;
&#13;
Virtual forced-response system identification experiments demonstrate robustness of the approach to realistic noise levels and variations in experimental setting. Simulation of different rotor shaft geometries demonstrates general applicability of the approach with error levels similar to those in the PW615 engine. Guidelines on experimental setup, procedure and data processing are developed for future rotor forced-response experiment design and execution.&#13;
&#13;
The thesis contributions are (1) a new approach for measuring rotor shaft dynamics in full aero-engines that enables the development of engine-condition based mechanical health monitoring and maintenance technologies, and (2) an extended reduced-order modeling framework that provides new capability for preliminary design of engine rotor systems and forced-response experimental design.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143135</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Impact of Nuclear Environment on the Hydrothermal Corrosion in SiC</title>
<link>https://hdl.handle.net/1721.1/143134</link>
<description>Understanding the Impact of Nuclear Environment on the Hydrothermal Corrosion in SiC
Seshadri, Arunkumar
SiC/SiC fiber composites are potential candidates for advanced cladding materials to improve the accident tolerance of commercial light-water nuclear reactor fuel. To evaluate the fiber composites’ viability, understanding the kinetics of their corrosion in Light Water Reactor (LWR) conditions is critical. SiC corrosion results in the formation of silica, which can then be rapidly dissolved in the LWR environment. LWR conditions are demanding on materials because they are subjected to irradiation, high pressure, high temperature, and aqueous chemistry. Experiments were performed under prototypical LWR conditions (pressure, temperature, flow rate) to understand the corrosion and silica dissolution characteristics of high purity chemical vapor deposited (CVD) SiC. Sensitivity studies are performed to develop a comprehensive model for silica dissolution considering the impact from irradiated microstructure (through Si/proton, Co-60 gamma, and neutron pre-irradiation), flow rate, electrical resistivity, surface roughness, surface wettability, and CRUD (metallic oxide impurities) deposition. The corrosion rate in the irradiated microstructures was found to be an order of magnitude higher compared to the unirradiated microstructure under boiling water reactor (BWR) conditions. Electrochemical and spectroscopic studies revealed that the enhanced corrosion in irradiated samples was the result of an increased surface reaction potential that can be associated with the structural defects and the electronically excited states produced by irradiation. Surface roughness effects on hydrothermal corrosion also accelerated the corrosion rate significantly at high mass flow rates relevant to LWR operating conditions. Based on the experimental results, the existing semi-empirical SiC hydrothermal corrosion kinetic models are updated to include the effects of irradiation, resistivity, flow rate, and pH. 
Further, the experimental results suggest that the CRUD deposition on the CVD SiC would reduce corrosion significantly. Enhanced CRUD formation was observed under gamma irradiation and was correlated to the reduced zeta potential and the contact angle of the surface. Further adhesion properties responsible for CRUD deposition in SiC are investigated to evaluate the likelihood of CRUD deposition in LWR conditions. &#13;
&#13;
The silica dissolution rate of nuclear grade Hi-Nicalon type S fibers and fibers manufactured with Rapid Laser chemical vapor deposition (R-LCVD) with varying surface chemistries were also obtained through experiments performed in static autoclave simulating PWR conditions. The hydrothermal corrosion behavior of stoichiometric R-LCVD fibers was observed to be comparable to the nuclear grade Hi-Nicalon Type S fibers. The results show that the impact of stoichiometry was much higher than the particular manufacturing technique, though the higher surface roughness in R-LCVD fibers significantly affected the corrosion kinetics. Thermal pre-treatment of R-LCVD fibers leads to a drastic reduction in the corrosion of SiC fibers and was correlated to the increased grain size on the fiber surface when exposed to high temperatures. The effect of pre-ion irradiation on the hydrothermal corrosion behavior of SiC fibers was found to exhibit a complex relationship based on the stoichiometric composition of the fibers. &#13;
&#13;
Finally, the Radiation Chemistry Analysis Loop (RADICAL) code, which models the complete coolant loop chemistry, radical, and species transport in LWRs, is modified to include SiC/SiC cladding corrosion and silica transport based on experimentally determined silica formation and dissolution rates. Sensitivity analysis is further carried out on several parameters in RADICAL to inform the industry on the extent of spatial inventory of silica deposition in typical BWR and pressurized water reactor (PWR) primary loops. RADICAL modeling suggests that silica deposition in PWR components and CVD SiC thickness loss is not of great concern even when the effect of irradiation damage on SiC corrosion is considered. However, for BWRs, significant silica deposition on components and CVD SiC thickness loss is expected unless the fuel rod is covered entirely in stable CRUD within the first few months of operation. As such, a feasibility study on different protective metallic coatings applied on the SiC/SiC fiber composite was conducted to reduce the thickness loss. Of the coatings tested, a plasma-spray-coated and vacuum-annealed FeCrAl coating with blended FeCrAl/Cr served as a stable protective barrier against SiC dissolution in hydrothermal conditions.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143134</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and reactivity of multiply bonded tungsten dimers</title>
<link>https://hdl.handle.net/1721.1/143132</link>
<description>Synthesis and reactivity of multiply bonded tungsten dimers
Sturgeoff, Lynda Gail.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143132</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An application of artificial intelligence and machine vision to protein engineering</title>
<link>https://hdl.handle.net/1721.1/143131</link>
<description>An application of artificial intelligence and machine vision to protein engineering
Arkin, Adam Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1992; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143131</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on biological nitrogen fixation.</title>
<link>https://hdl.handle.net/1721.1/143130</link>
<description>Studies on biological nitrogen fixation.
Collet, Thomas Anatol,
            1963-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1991; Vita.; Includes bibliographical references (leaves 180-186).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143130</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Triangulation of stratified sets,</title>
<link>https://hdl.handle.net/1721.1/143129</link>
<description>Triangulation of stratified sets,
Hendricks, Edward Charles,
            1911-1967.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Bibliography: leaf 31.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143129</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formation of Taylor vortices in spherical Couette flow</title>
<link>https://hdl.handle.net/1721.1/143128</link>
<description>Formation of Taylor vortices in spherical Couette flow
Tuckerman, Laurette Stephanie.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1984; Bibliography: leaves 135-138.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143128</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methodology for assessing the potential impact of urban development on urban runoff and the relative efficiency of runoff control alternatives.</title>
<link>https://hdl.handle.net/1721.1/143127</link>
<description>Methodology for assessing the potential impact of urban development on urban runoff and the relative efficiency of runoff control alternatives.
Leclerc, Guy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Vita.; Bibliography: leaves 227-230.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143127</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Raman Cooling to High Phase Space Density</title>
<link>https://hdl.handle.net/1721.1/142847</link>
<description>Raman Cooling to High Phase Space Density
Vendeiro, Zachary
Experiments on quantum degenerate gases have become widespread since the first experimental observations of Bose-Einstein condensates (BECs) a few decades ago. Traditionally such experiments have relied heavily on evaporation to produce their quantum degenerate Bose or Fermi gases. Although evaporation has proven to be effective for many atomic species, it leads to the loss of many trapped atoms and is generally slow. The work presented in this thesis aims to improve upon the performance of evaporation by using Raman cooling. First it is shown that Raman cooling alone can produce (impure) BECs, cooling clouds all the way to condensation without evaporation. This is the first demonstration of direct laser cooling to a true three-dimensional BEC. Next it is shown that when evaporation and Raman cooling are both used in the same sequence, pure BECs can be produced in as little as 575 ms. This is the fastest BEC production time known to the author and it is achieved with a much simpler apparatus than other sub-second BEC experiments.&#13;
&#13;
Raman sideband cooling in a 3D optical lattice may provide a way to prepare BECs even faster and potentially with very few collisions between atoms. Some preliminary work along these lines is also presented. Results from Raman sideband cooling in 1D and 2D optical lattices are also presented. Raman sideband cooling in a 1D lattice is shown to produce clouds with phase space densities of about 0.1, which is significantly larger than that achieved in previous work, but still shy of quantum degeneracy. Raman sideband cooling in a 2D lattice is shown to lead to nonthermalized clouds when the atom number and trap frequencies are sufficiently large. These unusual clouds are significantly hotter along the loosely-confined direction of the trap than in the tightly-confined direction, indicating the lack of thermalization in these effectively one-dimensional systems.&#13;
&#13;
Along a separate line of research, the design of a Rydberg cavity quantum electrodynamics (cQED) experiment is discussed. The apparatus is designed to house two optical cavities and two imaging systems. One optical cavity is asymmetric in mirror transmission, and a method is demonstrated for measuring the offset between the trap and probe light standing waves very precisely using only frequency measurements. The imaging systems have moderately large numerical aperture (NA) and are designed to be flexible, cost effective, relatively simple, cause only small aberrations, and require little in-vacuum alignment. The system will be capable of creating atom arrays in the cavities. These arrays could be used to implement quantum logic gates locally using Rydberg interactions between nearby atoms, and the cavity modes could be used as a bus to mediate gates between distant atoms.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142847</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Technologies and Control Techniques for Trapped Ion Array Architectures</title>
<link>https://hdl.handle.net/1721.1/142846</link>
<description>Integrated Technologies and Control Techniques for Trapped Ion Array Architectures
Stuart, Jules Michael
Over the past few decades, trapped ions have emerged as one of the top contenders for realizing large scale quantum information processing. To date, experiments with ions have reached the level of tens of ion qubits, but the current model of adding ions to long chains may not extend to the number required for some computations. An alternative architecture, which has been recently explored, is to arrange ions in a large array, such that they can be shuffled around to transmit quantum information around a chip. This approach promises greatly increased numbers of qubits, while maintaining speed, fidelity and connectivity, but as the scale of these arrays increases, the required density of control systems may become intractable with current methods. In this thesis, we explore the integration of classical control technologies into ion traps and investigate whether this can provide the level of control needed to establish the array architecture as a more viable path toward more complex trapped-ion quantum computers. We focus first on the integration of classical, cryogenic electronics into an ion trap, which are used to control an ion’s trap frequency and demonstrate rudimentary motion. An integrated switch allows the ions to be isolated from the effects of voltage noise. Next, we demonstrate the operation of a stimulated Brillouin scattering (SBS) laser for addressing an ion in an atomic clock protocol. In our experiment, the SBS laser is shown to have a linewidth commensurate with a bulk-cavity-stabilized laser and may offer a path towards generating highly-coherent light within an ion trap package. Subsequently, we explore the integration of photonic waveguides and grating couplers, which can route laser light on-chip and focus light onto ions trapped above the chip. The effects of stray electric fields are considered, and the benefits of integrated light sources are characterized. 
In an array architecture, it will be important to be able to transport ions around between zones without introducing excessive motional decoherence. We present a technique using circuit simulation to pre-distort voltage waveforms for fast transport and demonstrate the basic operation of a trap designed to rapidly split and join ion chains. The studies covered here help inform future ion-trap architecture decisions and set the stage for further analysis of tradeoffs between these different technologies.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142846</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Periodically Modulated Electronic States in Natural Superlattices</title>
<link>https://hdl.handle.net/1721.1/142841</link>
<description>Periodically Modulated Electronic States in Natural Superlattices
Devarakonda, Aravind
The periodic arrangement of atoms in a crystal underpins our understanding of their electronic states. Since the development of the band theory description of such systems, it was realized that interaction effects and materials synthesis techniques could be exploited to introduce additional periodic modulations atop this underlying atomic periodicity. Over the years, efforts in this direction based on two-dimensional (2D) thin-films and van der Waals (vdW) heterostructures have realized a plethora of unconventional electronic states of matter. By virtue of their low-dimensionality, however, these states can prove fragile and inaccessible to a variety of experimental probes. Bulk materials exhibiting such periodically modulated electronic states could pave the way to incisive experiments and, potentially, new electronic states heretofore unavailable.&#13;
&#13;
In this thesis, we present the discovery of a new family of natural superlattices, formed by an alternating stacking of transition metal dichalcogenide (TMD) monolayers and insulating spacer layers, that host such periodically modulated electronic states. We present experimental results from three members of this family containing the group-V transition metal disulphides H-MS₂, M = (V, Nb, Ta). Across these materials, the TMD and spacer layers combine to form effectively 2D electronic states that experience a structure-derived periodic modulation. In addition to yielding single particle physics distinct from the parent compounds, for example topologically non-trivial bands for M = Nb and unusual quantum oscillations for M = Ta, these materials also host unconventional correlated states; we observe long-anticipated signatures of 2D Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconductivity for M = Nb and preliminary evidence for a correlated insulator ground state when M = V. We conclude by discussing prospects of identifying new members of this family and, more broadly, new families of bulk superlattices.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142841</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction</title>
<link>https://hdl.handle.net/1721.1/142836</link>
<description>Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction
Nakagaki, Ken
Research on Actuated and Shape-Changing Tangible User Interfaces (TUIs) in the field of Human-Computer Interaction (HCI) has widely explored the design of embodied interactions using digital computation. While advanced technical approaches, such as robotics and material science, have led to many concrete instances of Actuated TUIs, a single actuated hardware system, in reality, is inherently limited by its fixed configuration, thus limiting the reconfigurability, adaptability, and expressibility of its interactions.&#13;
&#13;
In my thesis, I introduce novel hardware augmentation methods, Shells and Stages, for Actuated TUI hardware to expand and enrich their interactivity and expressibility for dynamic physical interactions. Shells act as passive mechanical attachments for Actuated TUIs that can extend, reconfigure, and augment the interactivity and functionality of the hardware. Stages are physical platforms that allow Actuated TUIs to propel themselves across a platform to create novel physical expression based on the duality of front stage and back stage. These approaches are inspired by theatrical performances, computational and robotic architecture, biological systems, physical tools, and science fiction. While Shells and Stages can individually augment the interactivity and expressibility of the Actuated TUI system, the combination of the two enables advanced physical expression based on combined shell-swapping and stage-transitioning. By introducing these novel modalities of Shells and Stages, the thesis expands and contributes to a new paradigm of Inter-Material / Device Interaction in the domain of Actuated TUIs.&#13;
&#13;
The thesis demonstrates the concepts of Shells and Stages based on existing Actuated TUI hardware, including pin-based shape displays and self-propelled swarm user interfaces. Design and implementation methods are introduced to fabricate mechanical shells with different properties, and to orchestrate a swarm of robots on the stage with arbitrary configurations. To demonstrate the expanded interactivity and reconfigurability, a variety of interactive applications are presented via prototypes, ranging from digital data interaction and reconfigurable physical environments to storytelling and tangible gaming. Overall, my research introduces a new A-TUI design paradigm that incorporates self-actuating hardware (Actuated TUIs) and passively actuated mechanical modules (Shells) together with surrounding physical platforms (Stages). By doing so, my research envisions a future in which computational technology is coupled seamlessly with our physical environment. This next generation of TUIs, by interweaving multiple HCI research streams, aims to provide endless possibilities for reconfigurable tangible and embodied interactions enabled by fully expressive and functional movements and forms.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142836</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Searching for Extreme-BCG Clusters at 0.2 &lt; z &lt; 1.3</title>
<link>https://hdl.handle.net/1721.1/142831</link>
<description>Searching for Extreme-BCG Clusters at 0.2 &lt; z &lt; 1.3
Somboonpanyakul, Taweewat
Active galactic nuclei (AGN) feedback is believed to be responsible for counteracting the formation of the classical “cooling flow”, predicted to be associated with most “cool core” clusters of galaxies. Several studies have shown that many phenomena found in galaxy clusters can be neatly explained by AGN feedback. Yet, the physical mechanism behind AGN feedback remains poorly understood. Careful analysis of clusters with unique characteristics, such as hosting starburst and/or active nuclei, provides an alternative path to tackle the issue of when, and how precisely, AGN feedback impacts clusters.&#13;
&#13;
For my Ph.D. thesis, I show that by finding extreme-BCG clusters we can better understand the processes of cluster formation and evolution. In the first part of my thesis, I conduct an optical survey to discover new extreme-BCG clusters at low redshift. Finding clusters with distinct properties from the survey allows us to make detailed studies of the objects and better understand the formation mechanism of the feedback necessary to sustain long-lived clusters. In the second half, I study a sample of clusters over a large redshift range to find distant objects with extreme BCGs. This enables us to investigate a possible evolution of the feedback across cosmic time, and how that evolution has impacted the growth of all clusters. Thousands of galaxy clusters will be discovered in the coming decade, and a handful of them will almost certainly host extreme BCGs. These peculiar objects will play an important role in understanding the complex nature of black hole feedback and galaxy evolution.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142831</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping Genotype to Phenotype with High-Throughput Empirical Approaches</title>
<link>https://hdl.handle.net/1721.1/142830</link>
<description>Mapping Genotype to Phenotype with High-Throughput Empirical Approaches
Lawrence, Katherine
Understanding how genetic variation gives rise to phenotypic variation is a central goal of biology. The structure of this genotype-phenotype map, or landscape, underlies the dynamics of populations adapting under natural selection, and quantitative understanding will be required to predict and engineer outcomes in evolving organisms like viral pathogens, cancer cells, or microbial communities. Characterizing the landscape structure remains largely an empirical question, and observing general patterns requires high-throughput, high-powered experiments that systematically probe landscapes in different biological contexts.&#13;
&#13;
At the scale of a single protein, we considered the binding landscape of broadly neutralizing antibodies (bnAbs) that confer protection against diverse influenza strains. Our understanding of the evolutionary pathways leading to bnAbs, and thus how best to elicit them, remains limited. We measure binding affinities of combinatorially complete mutational libraries for two naturally isolated bnAbs, the first such libraries for antibodies and the largest for any protein (2¹⁶ variants). By examining the extensive pairwise and higher-order epistasis between mutations, we find key sites with strong synergistic interactions that explain the strikingly different patterns of breadth in the two antibody libraries. These features of the binding affinity landscapes strongly favor sequential acquisition of affinity to more diverse antigens.&#13;
&#13;
At the whole-genome scale, we mapped the genetic basis of complex traits in budding yeast. Discrepancies exist between results from previous studies in humans as compared to model organisms, perhaps resulting from our limited ability to resolve numerous small-effect variants, precisely map them to causal genes, and infer nonadditive interactions between loci. We introduce barcoded bulk quantitative trait locus (BB-QTL) mapping, which allows us to construct, genotype, and phenotype 100,000 offspring of a budding yeast cross (100 times larger than state of the art). We find hundreds of small-effect loci densely spaced throughout the genome, many with widespread pleiotropic effects across multiple traits, consistent with results from recent genome-wide association studies in humans. Epistasis plays a central role, with thousands of interactions that reveal the structure of underlying biological networks.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142830</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Methods for In Situ Genomics</title>
<link>https://hdl.handle.net/1721.1/142829</link>
<description>Scalable Methods for In Situ Genomics
Payne, Andrew C.
The maxim that biological structure determines function was inspired by the discovery of the DNA double helix, yet mapping the structure of genomic DNA within a cell remains challenging, and accordingly the role of genome structure and organization in determining cellular function is an open question. Mapping cellular genome organization is difficult because it requires joint measurement of linear DNA sequence and 3D spatial context; however, existing genome-scale methods lack either base-pair sequence information or direct spatial localization. To overcome these limitations, we invented In Situ Genome Sequencing (IGS), a set of scalable methods for simultaneously sequencing and imaging cellular genomes within intact biological samples. We first report technological developments enabling IGS, including new chemistries for in situ sequencing library construction, workflows for multimodal sequencing of libraries, and strategies for computational integration of spatial and genetic information. Next, we use IGS to map spatial genome organization in cultured human fibroblasts, validating and benchmarking our results against key genomic features such as chromosome positioning, chromosome folding, and repetitive sequence localization. Finally, we apply IGS to map genome organization in intact mouse early embryos, extending known features and uncovering new features of embryonic genome architecture. We characterize parent-specific changes in genome structure across embryonic stages, reveal single-cell chromatin domains in zygotes, and uncover epigenetic memory of global chromosome positioning within individual embryos. We conclude with a discussion of IGS scaling properties, by which we can anticipate many-fold future improvements in yield and resolution.
We anticipate IGS and related scalable in situ methods will be instrumental in unifying genomics and microscopy, enabling scientists to map genome organization from single base pairs to whole organisms and ultimately to connect genome structure and function.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142829</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Signal Processing and Detector Optimization in Project 8</title>
<link>https://hdl.handle.net/1721.1/142824</link>
<description>Statistical Signal Processing and Detector Optimization in Project 8
Buzinsky, Nicholas
Despite the unambiguous discovery of non-zero neutrino masses from flavor oscillation experiments, a direct measurement of the absolute mass scale of the neutrino remains elusive to experimentalists. Project 8 is a tritium endpoint experiment utilizing Cyclotron Radiation Emission Spectroscopy (CRES), a novel, high-precision spectroscopic technique, in order to establish the absolute neutrino mass scale. In this document, I investigate the statistically motivated limits to CRES signal detection and parameter estimation, as well as the resultant consequences on optimal detector configuration. I implement and test an application of the Viterbi algorithm for CRES signal reconstruction, yielding the first derived limits on the minimal detection criteria. I then present an original derivation of the Cramér-Rao Lower Bound of the start frequency resolution for realistic CRES signals, along with estimators yielding near-optimal performance. Finally, these improved detection and reconstruction algorithms lead into a discussion of optimal detector design.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142824</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Beauty Quark Hadronization in Vacuum and Quark-Gluon Plasma with CMS</title>
<link>https://hdl.handle.net/1721.1/142823</link>
<description>Analysis of Beauty Quark Hadronization in Vacuum and Quark-Gluon Plasma with CMS
Shi, Zhaozhong
An analysis of fully reconstructed &#119861;ₛ⁰ and &#119861;⁺ mesons decaying into &#119869;/&#120595; and strange hadrons, using the Compact Muon Solenoid (CMS) Experiment 2017 &#119901;&#119901; dataset and 2018 PbPb dataset at a center-of-mass energy per nucleon pair of √&#119904;&#119873;&#119873; = 5.02 TeV at the Large Hadron Collider (LHC), is presented. We apply machine learning techniques along with multivariate analysis to obtain significant B-meson signals and extend the kinematic regime of B-meson measurements with higher precision. In our analysis, a &#119861;ₛ⁰ signal of greater than 5&#120590; significance is observed for the first time in heavy-ion collisions. The measured &#119861;ₛ⁰/&#119861;⁺ ratios as functions of transverse momentum and event centrality in PbPb collisions, along with &#119901;&#119901; references, are compared with theoretical model predictions. These results will help elucidate the beauty quark hadronization mechanisms in vacuum and quark-gluon plasma at LHC energies. Significant B-meson signals have also been observed at very low &#119901;ₜ and high event multiplicity in &#119901;&#119901; collisions, which will allow us to study beauty quark hadrochemistry in small systems as well as energy loss mechanisms in the future.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142823</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Measurements of Neon, Magnesium, and Silicon Flux in Cosmic Rays with the Alpha Magnetic Spectrometer on the International Space Station</title>
<link>https://hdl.handle.net/1721.1/142822</link>
<description>Precision Measurements of Neon, Magnesium, and Silicon Flux in Cosmic Rays with the Alpha Magnetic Spectrometer on the International Space Station
Phan, Huy Duc
Precise measurements of the primary cosmic ray Neon, Magnesium, and Silicon fluxes are important for understanding the origins and propagation properties of heavy elements in the Galaxy. This thesis presents measurements of the Neon, Magnesium, and Silicon fluxes in the rigidity (momentum per unit charge) range from 2.15 GV to 3 TV, based on 5.6 million Ne, Mg, and Si nuclei events collected during 7 years of AMS operation (2011-2018). The three fluxes show identical rigidity dependence above 86.5 GV, deviating from a single power law and hardening at high rigidity above 200 GV. Surprisingly, the rigidity dependence of the Neon, Magnesium, and Silicon fluxes is different from that of the primary nuclei Helium, Carbon, and Oxygen, even though both groups are primaries produced at cosmic-ray sources.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142822</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Transport in Topological Phases of Matter</title>
<link>https://hdl.handle.net/1721.1/142821</link>
<description>Quantum Transport in Topological Phases of Matter
Papaj, Michal
Topological phases of matter attract constant attention in the condensed matter physics community, both due to the fundamental yet simple principles that govern them and the multitude of experimental observations with potential for technological applications. Among the ways of studying such materials, quantum transport methods prove to be of particular importance. In this thesis, I touch upon many aspects of quantum transport in topological materials. First, I introduce a novel type of Hall effect, called the Magnus Hall effect, that allows one to probe Berry curvature in ballistic, time-reversal invariant systems that break inversion symmetry. Next, I present a detailed characterization of the extrinsic Nernst effect in Dirac and Weyl semimetals, providing an interpretation of existing experimental results and predictions for new enhanced responses in materials such as Fe₃Sn₂. In the following section, I demonstrate that strong disorder can lead to novel behavior of Dirac fermions in the surface states of topological crystalline insulators, resulting in the appearance of nodal arcs in place of Dirac points and in tilting of the Dirac cone. In the second part of the thesis, I focus on topological superconductors, starting by presenting a new method for creating Majorana zero modes using a segmented Fermi surface. This approach, based on the Fermi surface of Bogoliubov quasiparticles, allows for a reduction of the magnetic field required to induce a topological phase transition and reduces the number of spurious low-energy modes that hamper the observation and utilization of Majorana zero modes. Finally, I show that the presence of multiple Majorana modes in a strongly correlated superconducting island leads to Kondo-like behavior with a topological twist.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142821</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implosion Fabrication: Rethinking 3D Nanofabrication from First Principles</title>
<link>https://hdl.handle.net/1721.1/142819</link>
<description>Implosion Fabrication: Rethinking 3D Nanofabrication from First Principles
Oran, Daniel
Micro- and nanofabrication has revolutionized the world by enabling the explosion of ubiquitous electronic products and devices. However, the traditional lithography and deposition methods used in micro- and nanofabrication processes are planar in nature, with limited capacity for creating complex 3D structures. Techniques for 3D nanofabrication would ideally allow independent control over the geometry, feature size, and chemical composition of the final material. To address these needs, we invented a fundamentally new technology for nanofabrication called Implosion Fabrication (ImpFab). This technology was born of three basic insights. First, that 2D nanofabrication is predicated on the planar deposition of functional materials; therefore, a truly 3D nanofabrication process might be enabled by a method for volumetric deposition of functional materials. Second, that by patterning inside a scaffold material, such as a hydrogel, it is possible not only to create any geometry, but also to pattern gradients and multiple different materials. Lastly, that a controllably shrinkable scaffold allows materials to be chemically assembled in 3D patterns at one scale and then shrunk, increasing the resolution and concentration of the patterned materials. This means the original patterning and deposition steps can be performed using machinery far less precise, and thus less expensive, than that used for traditional 2D nanofabrication, while eliminating the need to pattern sequential layers, vastly increasing the speed of 3D patterning and making layer-to-layer registration irrelevant. As a result, ImpFab expands the possibilities of nanofabrication in several fundamental ways that give it the potential to create a revolution in fabrication, much in the same way the planar process did for computation (Moore’s law) and microelectromechanical systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142819</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bloch-oscillating Electrons in Moiré Superlattices</title>
<link>https://hdl.handle.net/1721.1/142818</link>
<description>Bloch-oscillating Electrons in Moiré Superlattices
Fahimniya, Ali
The extremely narrow bands in moiré materials harbor surprising transport regimes not accessible in other materials. As discussed in this thesis, a large superlattice periodicity and narrow bandwidth create an appealing setting for realizing and probing electronic Bloch oscillations. A key property of Bloch oscillations of electrons in moiré systems is their two-dimensional character, described by several incommensurate frequencies rather than a single frequency. Crucially, although the frequencies of these oscillations are identical for all carriers in the system, the oscillations of different carriers are asynchronous, i.e., out of phase. The oscillation frequencies being equal for a macroscopically large number of carriers enables various exciting effects, e.g., a comb-like spectrum of emitted electric noise with resonances near the Bloch frequencies and a strong response under an AC field near these frequencies. Furthermore, moiré systems present an appealing opportunity for achieving phase-coherent collective oscillations. To that end, we outline a synchronization scheme based on coupling the Bloch-oscillating electrons to an oscillator mode in a THz resonator. We also discuss Bloch oscillations in the presence of a magnetic field. The competition between Bloch oscillations and the cyclotron motion of electrons gives rise to interesting dynamical effects. We identify distinct phases that occur depending on the relative strength of the electric and magnetic fields. The Hall current, which undergoes a sharp drop at the transition between the electric and the magnetic regimes, can serve as a diagnostic of magnetic Bloch oscillations. The appealing properties of Bloch-oscillating moiré bands single out these systems as a promising candidate for achieving and exploring Bloch oscillations in solids, a long-sought-after collective many-body behavior.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142818</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social Change through Community Innovation: Feminist and Participatory Design Approaches to Organizing Inclusive, Equitable, and Joyful Hackathons</title>
<link>https://hdl.handle.net/1721.1/142814</link>
<description>Social Change through Community Innovation: Feminist and Participatory Design Approaches to Organizing Inclusive, Equitable, and Joyful Hackathons
Hope, Alexis
In this dissertation, I explore the potential for social impact hackathons to support meaningful social change. Hackathons — a long-running community practice within open-source groups, hackerspaces, technology companies, and educational settings — remain a popular style of gathering for those engaged with technology, design, and innovation work. Over the past twenty years, hackathons have also been embraced by the social change sector as a means of developing possible solutions to social issues. However, skeptics point out numerous shortcomings of hackathons, including poor problem selection, diversity and inclusion issues around who participates, the exploitation of unpaid labor, their limited impact, and the dangers of positing purely technological solutions to sociotechnical issues. &#13;
&#13;
At the same time, hackathons have enormous potential as a participatory approach to both technology development and problem solving. They bring people together around a common cause, help contribute to participant skill and identity development, and have an impact on media narratives around an issue. &#13;
&#13;
Rather than abandoning the hackathon as a social form, this dissertation examines how the union of feminist values and participatory design approaches can mitigate these critiques and help hackathons live up to their many potentials, including as a means of making space for community innovation at centers of technology innovation. To explore this, I present four case studies of iterations on the 2014 “Make the Breast Pump Not Suck” Hackathon held over the past seven years, including one event held virtually in response to COVID-19. Drawing on these case studies, I present design tenets and principles for hackathon organizers that can be used to design events that are inclusive, equitable, and joyful.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142814</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying Perfect Nonlocal Games</title>
<link>https://hdl.handle.net/1721.1/142811</link>
<description>Identifying Perfect Nonlocal Games
Bene Watts, Adam
This thesis is about nonlocal games. These “games” are really interactive tests in which a verifier checks the correlations that can be produced by non-communicating players. We study the class of commuting operator correlations: correlations which can be produced by players who make commuting measurements on some shared entangled state. This thesis contains the following results: &#13;
• A general algebraic characterization of games with a “perfect” commuting operator strategy, i.e. games with a winning correlation that can be produced exactly by commuting operator measurements. This characterization is built on a key result in non-commutative algebraic geometry known as a (non-commutative) Nullstellensatz. &#13;
• A sufficient condition for a class of nonlocal games called XOR games to have a perfect commuting operator strategy. This condition can be checked in polynomial time, and can be understood either as the non-existence of a combinatorial object called a PREF (the noPREF condition) or as the non-existence of a solution to an instance of the subgroup membership problem in a specially constructed group. &#13;
• A family of simple one-qubit-per-player strategies we call MERP strategies, which we show are optimal for any XOR game which has a perfect commuting operator strategy by the noPREF condition. &#13;
• Proofs that the noPREF condition is both necessary and sufficient for symmetric XOR games and 3-player XOR games. &#13;
• Explicit constructions of several families of XOR games with interesting properties. &#13;
• An analysis of randomly generated XOR games using the noPREF condition and the first moment method.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142811</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Inference of Particle Transport in Tokamak Plasmas</title>
<link>https://hdl.handle.net/1721.1/142810</link>
<description>Experimental Inference of Particle Transport in Tokamak Plasmas
Sciortino, Francesco
The achievement of sustainable operation in tokamaks depends crucially on accurate understanding of fuel, impurity, and neutral particle dynamics. In this work, radial profiles of experimental particle transport coefficients have been inferred following laser blow-off (LBO) impurity injections into both Alcator C-Mod and DIII-D plasmas. Development of the Aurora modeling package has supported the creation of Bayesian frameworks that leverage a wide range of spectroscopic diagnostics. This investigation spans regimes without Edge-Localized Modes, including Enhanced D-Alpha H-mode and I-mode on C-Mod, and diverted negative triangularity on DIII-D. On C-Mod, a novel forward model for the entire Ca K&#120572; spectrum has been combined with Extreme Ultra-Violet (EUV) spectroscopy of multiple charge states. On DIII-D, analogous EUV measurements complement Soft X-Ray and Charge Exchange Recombination spectroscopy. While the impact of Charge eXchange (CX) between impurities and background neutrals from heating beams is found to be relatively small, edge neutrals are shown to be extremely important for ionization balance and radiation in the pedestal region. This conclusion is supported by SOLPS-ITER simulations, which are shown to compare favorably to a database of Ly&#120572; measurements near the C-Mod midplane. We find neoclassical, gyrofluid, and nonlinear gyrokinetic modeling to be in relatively good agreement with experimental estimates of diffusion, whereas significant discrepancies in convection are evident in several cases. In particular, experimental observations of hollow impurity profiles often cannot be reproduced by microturbulence models within uncertainties, suggesting that current transport codes may be missing critical physics for impurity peaking predictions of future devices.
As a whole, this work provides one of the highest-fidelity assessments of cross-field impurity transport in tokamaks, offering the means to extend comparisons between theory and experiments in the particle transport channel.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142810</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Initial Conditions for Cosmic Inflation, the History of the Dark Sector, and Dark-Onium</title>
<link>https://hdl.handle.net/1721.1/142809</link>
<description>Initial Conditions for Cosmic Inflation, the History of the Dark Sector, and Dark-Onium
Fitzpatrick, Patrick John
The question remains whether inflation is robust to inhomogeneous initial conditions. This thesis first describes the basics of cosmic inflation and the evolution of primordial inhomogeneities in the matter field and spacetime metric during inflation. Then a new approach for analyzing the onset of inflation amid backreaction from significant inhomogeneities is presented. This new approach incorporates certain nonlinear interactions among the coupled degrees of freedom by using the nonperturbative Hartree approximation. Applying this approach to a single-field inflationary model, we find inflation to be robust for large-field models.&#13;
&#13;
The particle nature of dark matter is still a mystery. This thesis very briefly summarizes what we know about dark matter and our current efforts to detect it. This thesis also provides the basic tools necessary to calculate the thermal history of the dark sector. Then the results of a full exploration of the thermal freezeout histories of a vector-portal dark matter model, in the region of parameter space in which the ratio of masses of the dark photon &#119860;′ and dark matter &#120594; is in the range [formula], are presented. The temperatures of all species are carefully tracked, relaxing the assumption of previous studies that the dark and Standard Model sectors remain in thermal equilibrium throughout dark matter freezeout. A rich set of novel pathways that lead to the observed relic density of the dark matter is revealed. This thesis also examines the [formula] regime of the vector-portal inelastic dark matter model, where the dark matter is made up of a Majorana ground state &#120594; and excited state &#120594;* with a small mass splitting between them, carefully tracking the dark sector temperature throughout freezeout. The inelastic nature of the dark sector relaxes stringent cosmic microwave background and self-interaction constraints compared to symmetric dark matter models.&#13;
&#13;
The spectrum of Weakly-Interacting-Massive-Particle (WIMP) dark matter generically possesses bound states when the WIMP mass becomes sufficiently large relative to the mass of electroweak gauge bosons. After a review of the treatment of bound states in quantum electrodynamics, this thesis examines the formation and decay of bound states for dark matter inhabiting a more general nonabelian dark sector. The rate for SU(2) triplet dark matter (the wino) to bind into WIMPonium, and rates for the subsequent decays of these bound states, are computed. Results with applications beyond the wino case, e.g. for dark matter inhabiting a nonabelian dark sector, are also presented.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142809</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of vacuum breakdown</title>
<link>https://hdl.handle.net/1721.1/142749</link>
<description>Investigation of vacuum breakdown
Jedynak, Leo.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1962; Vita.; Includes bibliographical references (leaves 134-137).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142749</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular beam studies of van der Waals molecules.</title>
<link>https://hdl.handle.net/1721.1/142747</link>
<description>Molecular beam studies of van der Waals molecules.
Chu, Frank Yiu Fui.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1972; Vita.; Bibliography: leaves 201-203.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142747</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Selection and localization of cloned DNA sequences from human chromosome 11</title>
<link>https://hdl.handle.net/1721.1/142744</link>
<description>Selection and localization of cloned DNA sequences from human chromosome 11
Gusella, James F.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142744</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on the active site modification of pyridoxal and flavin dependent enzymes with acetylenic and olefinic substrate analogues.</title>
<link>https://hdl.handle.net/1721.1/142741</link>
<description>Studies on the active site modification of pyridoxal and flavin dependent enzymes with acetylenic and olefinic substrate analogues.
Marcotte, Patrick Allen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142741</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental drivers of the abundance and distribution of forage fishes on the Northeast US shelf, with a particular emphasis on northern sand lance</title>
<link>https://hdl.handle.net/1721.1/142713</link>
<description>Environmental drivers of the abundance and distribution of forage fishes on the Northeast US shelf, with a particular emphasis on northern sand lance
Suca, Justin J.
Small pelagic fishes, also termed forage fishes, represent a critical link between secondary production and myriad top predators in marine ecosystems, including the Northeast US shelf. In this dissertation, I analyze the drivers of forage fish distribution throughout the Northeast US shelf and the drivers of the abundance of the ecologically important northern sand lance. Chapter 2 examines the basic ecology of northern sand lance and uses these insights to identify mechanistic drivers of their abundance. I then explore different scenarios of these drivers to project sand lance abundance through the end of the 21st century, which appears precarious for adult sand lance unless current trajectories change. Chapter 3 analyzes the environmental drivers of the distribution of the six dominant, offshore forage fish species (northern sand lance, Atlantic herring, alewife, blueback herring, Atlantic mackerel, and Atlantic butterfish) on the Northeast US shelf to elucidate the role of environmental covariates in shelf occupancy by these taxa. The results of this chapter indicate that shelf occupancy of butterfish and Atlantic mackerel is increasing through time while occupancy of sand lance is decreasing. The occurrence of most of these species is also moving deeper and northward with time. Chapter 4 assesses the source-sink dynamics of three sand lance hotspots through Lagrangian particle tracking models simulating larval sand lance transport. Connectivity varies among these hotspots, with Georges Bank and Stellwagen Bank having notable retention while the Great South Channel relies on larvae from other hotspots. Retention on Stellwagen Bank and Georges Bank is linked to strong wind events during the larval period of sand lance. Collectively, this dissertation improves our understanding of the dynamics driving variability in the Northeast US shelf forage fish complex, particularly for northern sand lance.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142713</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing anthropogenic noise impacts and relevant soundscape cues for marine invertebrates: leveraging squid and coral reefs as model systems</title>
<link>https://hdl.handle.net/1721.1/142712</link>
<description>Assessing anthropogenic noise impacts and relevant soundscape cues for marine invertebrates: leveraging squid and coral reefs as model systems
Jones, Ian Thomas
Sound is utilized by marine animal taxa for many ecologically important functions, and these taxa are vulnerable to adverse effects of anthropogenic noise on hearing and behavior. However, little is known about marine invertebrates’ responses to anthropogenic noise, and the ambient environmental sounds (“soundscapes”) they detect and respond to. Most acoustic studies report sound pressure (detected by mammals and some fish), but few report particle motion, the back-and-forth vibratory component of sound detected by marine invertebrates. I investigated invertebrate use of and response to sounds in two facets: 1) behavioral responses of longfin squid, Doryteuthis pealeii, to anthropogenic noise, and 2) particle motion of coral reef soundscapes in the U.S. Virgin Islands. In laboratory-based experiments, I exposed D. pealeii to construction noise originally recorded from an offshore wind farm. I found significant increases in squids’ alarm responses and in failed prey capture attempts during noise. Conversely, noise exposure had no significant effects on reproductive behaviors of groups of D. pealeii, indicating high motivation of these squid to reproduce during this stressor. Collectively, these experiments revealed the importance of considering behavioral context in studies and regulatory decisions regarding invertebrates’ susceptibility to anthropogenic noise impacts. In studying coral reef soundscapes, I reported particle motion trends over several months for coral reefs varying in habitat quality, including coral cover and fish abundance. I found acoustic properties over which particle motion closely scaled with pressure, and others over which it did not. I compared soundscape data with particle motion hearing thresholds, and found that invertebrates may only detect high-amplitude and low-frequency transient sound cues on reefs, such as those produced by fishes. My research brings new insights into natural and anthropogenic sound cues detectable by marine invertebrates, and into how and when invertebrates will be vulnerable to anthropogenic noise pollution.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142712</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redefining the Coordination of Gene Expression Machineries in Bacillus subtilis</title>
<link>https://hdl.handle.net/1721.1/142711</link>
<description>Redefining the Coordination of Gene Expression Machineries in Bacillus subtilis
Johnson, Grace Eleanor
Transcription-translation coupling has long been considered a defining feature of bacterial gene expression. As soon as the ribosome binding site emerges from the RNAP, the ribosome can initiate translation, and both physically associate and kinetically coordinate with the RNAP over the course of transcription. This close proximity between the RNAP and ribosome allows the ribosome to modulate the fate of the transcribing RNAP, and forms the basis of many regulatory mechanisms, including translation-mediated transcription attenuation and transcriptional polarity. However, transcription-translation coupling has remained largely uncharacterized outside of Escherichia coli and other closely related bacteria. In this thesis, I describe an alternative mode of gene expression, runaway transcription, utilized by the bacterium Bacillus subtilis. Through measurement of transcription and translation kinetics, I demonstrate that the RNAP outpaces the ribosome in B. subtilis, resulting in a large distance between the RNAP and ribosome that sets alternative rules for gene expression in this bacterium. In particular, runaway transcription helps to explain the increased use of riboswitches and RNA-binding proteins in regulating transcription attenuation. In addition, I show that runaway transcription necessitates a more limited role for Rho-dependent transcription termination in B. subtilis, whereby Rho activity is selectively targeted to specific regions of the genome by cis-encoded sequence elements. Subsequent characterization of these Rho termination events across conditions reveals that Rho termination on specific transcripts can be regulated, providing an additional physiological role for this protein. Together, this characterization of runaway transcription and its consequences in B. subtilis provides insights into the underlying principles that have shaped the evolution of divergent regulatory mechanisms across bacteria.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142711</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between FQH Ground States, Regular Graphs, Binary Invariants, and [formula]-Algebras</title>
<link>https://hdl.handle.net/1721.1/142710</link>
<description>Interplay between FQH Ground States, Regular Graphs, Binary Invariants, and [formula]-Algebras
Pakatchi, Hamed
Fractional quantum Hall (FQH) phases are arguably the most outstanding example of topologically ordered matter. The ground state’s delicate structure and the anyonic statistics of local excitations are two of the intriguing, if not defining, features of these phases. For the eventual goal of classifying topologically ordered matter, a better understanding of FQH ground states is critical. This thesis takes a closer look at FQH ground states, their properties, and their intersection with several other mathematical fields. We report on four novel findings in this document. (1) We present a graph-theoretic point of view toward clustering FQH ground state wavefunctions. In particular, we show that every ground state is essentially a superposition of regular graphs. In this paradigm, algebro-analytic properties of polynomial wavefunctions are synonymous with graph-theoretic properties. (2) We introduce a new possible local property of ground states called separability. Utilizing separability, a pseudopotential Hamiltonian, refining the existing projection Hamiltonians, is proposed. This new Hamiltonian has a much higher chance of realizing a unique densest zero-energy state when the traditional projection Hamiltonians fail to do so. (3) We also study model FQH ground states that are essentially chiral conformal blocks in [formula] parafermionic theories. We design an easy-to-work-with ansatz for multi-parafermion operator product expansions. Putting the ansatz to use, we unravel several polynomial structures within the [formula]-algebras. (4) We identify the second quantized version of FQH ground state wavefunctions with the so-called binary invariants. The wealth of knowledge on the theory of invariants of binary forms in the literature allows us to design an ansatz for the principal (i.e., smallest non-trivial) model FQH ground states which are conformal blocks in a [formula] algebra. In particular, we fully determine the principal [formula] wavefunctions. A partial characterization of [formula] wavefunctions with &#119903; = 3, 4 is also provided.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142710</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Low Temperature Bolometers to Reactor Neutrinos and Neutrinoless Double Beta Decay</title>
<link>https://hdl.handle.net/1721.1/142709</link>
<description>Applications of Low Temperature Bolometers to Reactor Neutrinos and Neutrinoless Double Beta Decay
Johnston, Joseph
Low temperature bolometers are a powerful tool for precise energy measurements. This thesis describes the application of bolometers to the detection of Coherent Elastic Neutrino-Nucleus Scattering (CEvNS) and Neutrinoless Double Beta Decay (0νββ).&#13;
&#13;
First, design, development, and feasibility studies of the Ricochet CEvNS experiment are presented. In addition, results are shown demonstrating that upcoming CEvNS experiments are a powerful tool to search for new physics, including a neutrino magnetic moment and non-standard interactions between neutrinos and quarks. In particular, combining multiple detector materials and measuring CEvNS at a reactor places leading bounds on beyond-the-Standard-Model neutrino-nucleon interactions. Limiting such interactions is crucial for upcoming beam experiments such as DUNE. In addition, results are presented showing the capability of next-generation experiments to measure low-energy neutrinos for the first time.&#13;
&#13;
Next, design, development, and analysis of the CUPID and CUPID-Mo experiments are presented. Characterization of bolometric crystals grown by a US-based crystal provider is shown, demonstrating progress towards the CUPID design goal. Next, improvements to the CUPID-Mo analysis are presented, including cuts and time corrections, followed by an initial 0νββ measurement with a simplified background model. Finally, a CUPID-Mo background model is described culminating in characterization of backgrounds required for a future precise 0νββ measurement, and a preliminary measurement of 2νββ in molybdenum. These CUPID-Mo results demonstrate the feasibility of the next-generation CUPID experiment.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142709</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Ancient Milky Way and its Relic Dwarf Galaxies</title>
<link>https://hdl.handle.net/1721.1/142708</link>
<description>Mapping the Ancient Milky Way and its Relic Dwarf Galaxies
Chiti, Anirudh
In the first billion years after the Big Bang, the first stars and galaxies began transforming the dark, primitive universe into the rich, complex one that we observe today. These primitive objects thus govern crucial, foundational rungs in our understanding of how the universe came to be. However, little is directly known of their properties since their large distances render direct, detailed observations difficult.&#13;
&#13;
Fortunately, the Milky Way hosts populations of ancient, “metal-poor” stars and satellite dwarf galaxies that function as nearby time capsules for investigations of early star formation, galaxy formation, and chemical evolution. The study of these objects is known as Galactic Archaeology, and has led to significant advances in our understanding of the first stars, supernovae, and galaxies. However, the most primitive, metal-poor stars are rare, and the difficulty of discovering them continues to bottleneck this promising approach.&#13;
&#13;
In this thesis, I present several pioneering studies of the ancient stellar populations in the Milky Way, including (1) a large-scale mapping of low-metallicity stars in the Galaxy, (2) first insights into the early evolution of carbon in several satellite dwarf galaxies and implications for the early assembly of the Milky Way, and (3) a detection of an extended “halo” of stars around a tiny (∼3000 stars) relic galaxy: the first direct evidence that primitive galaxies formed in massive, extended dark matter halos, and that even the tiniest galaxies may have had an early merger history. These discoveries were enabled by my development of novel imaging analyses that have led to nearly an order of magnitude improvement in the efficiency of identifying the most metal-poor stars relative to traditional spectroscopic techniques. Such analyses will be readily scalable with upcoming surveys (e.g., LSST) for the next generation of Galactic Archaeology studies.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142708</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of [formula] in Associated Production with the CMS Detector</title>
<link>https://hdl.handle.net/1721.1/142707</link>
<description>Measurement of [formula] in Associated Production with the CMS Detector
Abercrombie, Daniel Robert
The differential cross section of [formula] is measured with the CMS Detector. The Simplified Template Cross Section framework is used. The inclusive strength of the measured signal relative to the Standard Model is [formula], which agrees with the Standard Model within 2.1 standard deviations. The measured spectrum of the recoiling vector boson transverse momentum has a p-value of 9.3%, assuming Standard Model predictions at the measured signal strength.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142707</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nucleoid condensation in Escherichia coli by the DNA-binding protein SymE</title>
<link>https://hdl.handle.net/1721.1/142706</link>
<description>Nucleoid condensation in Escherichia coli by the DNA-binding protein SymE
Thompson, Mary Katherine
Bacteria are at the mercy of whatever environment surrounds them. As such, they have developed a number of clever methods to respond to the myriad of stressors that they face. Toxin-antitoxin (TA) systems are one of the most interesting yet poorly understood of these methods. TA systems are genetic modules made up of a toxin that can kill a cell or stop its growth and an antitoxin that counteracts its cognate toxin. Type I TA systems consist of a toxic protein and an RNA antitoxin which interacts directly with toxin mRNA to prevent its translation. The toxins of all but two type I TA systems are small proteins that embed into the inner membrane, where they generally oligomerize and form pores that affect membrane permeability and proton gradient formation. symE/symR from Escherichia coli is characterized as a type I TA system with a non-canonical toxin. Rather than forming inner membrane pores, SymE is believed to be an endoribonuclease that has predicted structural similarity to DNA-binding proteins. In this work I sought to better understand SymE’s target preference and specificity. Surprisingly, when I assessed the transcriptome for RNA cleavage events after symE expression, I saw no evidence of RNase activity. Instead, I show that SymE binds DNA both in vitro and in vivo. Furthermore, I demonstrate that the toxicity of symE overexpression is likely due to its accumulation in the nucleoid and subsequent severe nucleoid condensation. This condensation is accompanied by DNA damage. Nucleoid compaction, DNA damage, and global downshifts in transcription and translation are consistent with lethal levels of the E. coli nucleoid-associated protein H-NS. Taken together, SymE’s reclassification from RNase to DNA-binding protein, and the fact that its toxicity is only evident at high expression, call into question symE/symR’s classification as a type I TA system.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142706</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Simulation of Spin-1 Physics with Bosons in Optical Lattice</title>
<link>https://hdl.handle.net/1721.1/142703</link>
<description>Quantum Simulation of Spin-1 Physics with Bosons in Optical Lattice
Chung, Woo Chang
Mott insulators of ultracold atoms in optical lattices are widely used as an experimental platform for simulating and studying many-body physics. A topic at the frontier of quantum simulation with Mott insulators is the study of quantum spin models, which are intimately connected to other modern research topics such as the study of quantum phase transitions and quantum thermalization. While quantum spin models are also realized and have traditionally been studied with complex magnetic materials in solid-state physics, the advantages of a cold atom quantum simulator are its wide tunability of model parameters and its capability of preparing a variety of initial states, some of which may not even be possible in solid-state settings.&#13;
&#13;
So far, most of the spin models realized with two-component atom clouds in optical lattices are based on Mott insulators with singly occupied sites. In this thesis, I describe how a Mott insulator with doubly occupied sites gives rise to a qualitatively new spin model. In particular, the addition of the on-site interaction in doubly occupied sites gives rise to a new magnetic anisotropy term known as a single-ion anisotropy, which is inaccessible to models with singly occupied lattice sites. The thesis describes in detail the mapping from a doubly-occupied Mott insulator to an effective spin-1 Heisenberg model with a tunable single-ion anisotropy, along with the details of the experimental setup and the benchmarks that need to be met in order to probe low-energy physics such as that studied in the effective spin model. I demonstrate that the experimentally realized spin chains in our optical lattice feature coherent spin-1 exchange dynamics and also demonstrate a remarkable interplay between the spin exchange and the single-ion anisotropy term.&#13;
&#13;
The atomic species used is 87Rb, and its internal structure allows the use of a state-dependent lattice with an appreciable detuning from resonances. I explain how the state-dependent lattice is used to control the spin-dependent interaction and how it can be used to initialize a two-component atom cloud into a spin Mott insulator, which is a spinful state with no density or spin fluctuations. The spin Mott insulator state is highly interesting because it can be used as an initial state for adiabatic passages to different spinful states, such as the spin superfluid state or an antiferromagnetically textured spin state. I describe the ongoing experimental efforts for making adiabatic passages from the spin Mott to the aforementioned target states and provide an outlook on how the experimental setup can be upgraded for a cleaner and more versatile study of spin-1 physics within the optical lattice.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142703</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for High-Energy Collider Physics</title>
<link>https://hdl.handle.net/1721.1/142702</link>
<description>Machine Learning for High-Energy Collider Physics
Komiske III, Patrick Theodore
Fundamental physics, in particular high-energy collider physics, seeks to understand the natural world at the smallest scales, leading experimentally to the creation of large, complex datasets. Machine learning comprises a powerful set of statistical and computational tools enabling comprehensive exploitation of data. In this thesis, I develop machine learning methods to facilitate cutting-edge analysis techniques in particle physics. I model collider events as point clouds and develop neural network architectures that respect the inherent permutation symmetry and variable number of particles of an event, with infrared safety naturally incorporated. I further design a procedure that uses high-dimensional classifiers to achieve full-phase space, unbinned unfolding of all observables simultaneously. In the second part of this thesis, I define a distance metric between collider events based on optimal transport that allows for a rigorous construction of "event space" and its corresponding geometry. Using public datasets provided by the CMS collaboration, I explore this metric on a dataset of real jets, demonstrating its viability as an experimental method as well as the value of public collider data in benchmarking new techniques.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142702</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>How, when, and where: fate selection in regenerative planarians</title>
<link>https://hdl.handle.net/1721.1/142701</link>
<description>How, when, and where: fate selection in regenerative planarians
Owusu-Boaitey, Kwadwo E.
Whole-body regeneration requires an organism to produce all missing cell types. The planarian flatworm Schmidtea mediterranea contains an estimated 150 distinct cell types, which can all be regenerated after injury. Cell-type production in planarians is regulated by the expression of fate-specific transcription factors (FSTFs) in dividing cells called neoblasts. However, it remains unclear whether all ~150 fate choices are made in neoblasts, or whether additional mechanisms for generating cell type diversity are used. We used single-cell RNA-sequencing of S/G2/M neoblasts and early post-mitotic cells to identify new neoblast states corresponding to mature cell types, along with evidence that some cell type diversity is generated in post-mitotic neoblast progeny. We find that strategies for generating cell type diversity differ across tissue types. Furthermore, by annotating a complete set of predicted planarian transcription factor-encoding genes (the planarian TFome), we identify novel FSTFs and additional neoblast states. These data indicate that different strategies for generating cell type diversity exist across tissues, including fate choice outside of neoblasts.&#13;
&#13;
Understanding how cells choose their fates is an important challenge in development and regeneration. In regenerative planarians, fate choices are primarily made in neoblasts through the expression of FSTFs. But how individual neoblasts within the intact animal choose which fate to adopt, and what role their spatial position plays, remains poorly understood. Using fluorescent in situ hybridizations, we find that neoblast fate choice is spatially heterogeneous and not tightly regulated by position. We find that specialized neoblasts of different classes are commonly found neighboring specialized neoblasts of other classes, creating an intermingled, salt-and-pepper distribution in the animal. Furthermore, we develop the in situ RNA-sequencing technique STARmap for use in planarians, and utilize it to study the in vivo spatial distribution of specialized neoblasts through spatial cell-type mapping. We identify the gene nlg-7 as a candidate regulator of the migratory targeting that spatially heterogeneous neoblast progenitors must undergo to maintain and regenerate tissues, given their heterogeneous fate specification pattern. These results indicate that fate specification in neoblasts is not precisely regulated by position, and therefore, migratory targeting of progenitors is a major driver of tissue formation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142701</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the role of a JNK-like MAP kinase signaling in regulating dauer developmental arrest in Caenorhabditis elegans</title>
<link>https://hdl.handle.net/1721.1/142700</link>
<description>Investigating the role of a JNK-like MAP kinase signaling in regulating dauer developmental arrest in Caenorhabditis elegans
Dogra, Deepshikha
Many metazoans have the ability to respond to stressful growth environments by arresting in a diapause state. The young larvae of the nematode Caenorhabditis elegans undergo drastic anatomic and metabolic remodeling to enter a developmental diapause called dauer in response to crowding, diminished food, and elevated ambient temperature. The experimental studies of dauer formation have advanced our understanding of how conserved neuroendocrine signaling regulates developmental plasticity. While the molecular basis of how C. elegans responds to high population density has been defined, the mechanisms by which food and temperature regulate dauer entry are less well known. This thesis identifies and characterizes the role of a stress-activated c-Jun N-terminal Kinase (JNK)-like mitogen-activated protein kinase (MAPK), KGB-1, in dauer formation. In view of established roles of JNK MAPK in stress-activated signal transduction, the identification of the KGB-1 pathway in regulating dauer diapause is remarkable, since dauer entry represents one of the most dramatic and organism-wide stress responses. In this thesis, we review the genetic and cellular basis of the C. elegans dauer developmental decision, the neuroendocrine signaling pathways that regulate dauer arrest, and the role of MAPK pathways in stress response. We show that the KGB-1 pathway functions in the sensory neurons and acts in parallel to the TGF-beta and insulin signaling pathways. We demonstrate that increased activation of the KGB-1 pathway can substitute for diminished food and elevated temperature in triggering dauer entry, cementing its role in transduction of these environmental cues. Finally, we characterize the interaction of the KGB-1 pathway with other signaling pathways in dauer formation and analyze known upstream regulators and downstream targets of KGB-1. Future directions are included which aim to advance our understanding of neuroendocrine regulation of stress physiology in animals with C. elegans as a model.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142700</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolic regulation of mammalian cell growth and proliferation</title>
<link>https://hdl.handle.net/1721.1/142699</link>
<description>Metabolic regulation of mammalian cell growth and proliferation
Diehl, Frances Flewelling
Proliferation requires that cells acquire sufficient biomass to produce two daughter cells. To accomplish this, cells must utilize available nutrients to generate new components of cell mass, including proteins, lipids, and nucleic acids. In addition, cells must coordinate biosynthesis and production of specific macromolecules with the events that enable cell cycle progression. A cell’s ability to fulfill these anabolic requirements is impacted by environmental factors that influence how the metabolic network is used. This dissertation examines how cells regulate their biosynthesis to enable coordinated growth and division, and how metabolic dependencies impact proliferation. We first investigated why cells that genetically upregulate serine synthesis still rely on consuming large amounts of serine from the environment. We found that serine synthesis is constrained by availability of the oxidizing cofactor NAD+, and that decreased production of purine nucleotides downstream of serine limits proliferation. These findings demonstrate that regeneration of NAD+ can be a limitation for serine and nucleotide synthesis that constrains proliferation. We next determined how cells respond to perturbations to relative levels of nucleotide species. We found that imbalanced nucleotides inhibit cell proliferation, but do not constrain cell growth, allowing cells to grow excessively large. Instead, nucleotide imbalance is not sensed until cells enter S phase, when the replication stress response becomes critical for cell survival. Moreover, we found that replication stress sensing promotes nucleotide availability during normal S phases, suggesting that proliferating cells enter S phase without sensing whether they have sufficient nucleotides. Together, these studies contribute new insights into how metabolism is regulated to support cell growth and division.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142699</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gene expression changes during mammalian male meiotic initiation</title>
<link>https://hdl.handle.net/1721.1/142698</link>
<description>Gene expression changes during mammalian male meiotic initiation
Christensen, Holly C.
In sexually reproducing organisms, germ cells can divide by mitosis, which generates daughter cells that are genetically identical to the mother cell, or meiosis, which divides the genome for haploid gamete generation. Germ cells undergo meiotic initiation, the transition from the mitotic cell cycle to the meiotic cell cycle, once in a generation. In mammals, genetic studies have identified key regulators of meiotic initiation. However, a holistic assessment of the transcriptional changes during meiotic initiation has not been performed. In this thesis, we describe the transcriptional changes that occur during meiotic initiation in male mice.&#13;
&#13;
Previously, obtaining pure populations of male germ cells right before, during, and immediately after meiotic initiation was not experimentally achievable. Using the 3S method, which combines spermatogenesis synchronization, lineage tracing, and cell sorting, we obtained pure populations of undifferentiated Type A spermatogonia, Type B spermatogonia, STRA8− preleptotene spermatocytes, loSTRA8 preleptotene spermatocytes, hiSTRA8 preleptotene spermatocytes, and leptotene spermatocytes. Following transcriptome sequencing, data analysis showed several different patterns of gene expression during meiotic initiation, but the largest groups of genes showed either increased or decreased expression across the time course. While the genes whose expression increased at meiotic initiation were enriched for functions in meiotic prophase I, the genes whose expression decreased at meiotic initiation were enriched for housekeeping processes, including mRNA and protein regulation. Both upregulated and downregulated genes were expressed throughout the time course, but their expression levels were modulated at meiotic initiation. By performing motif enrichment analysis at the proximal promoters, we identified YY1 as a potential regulator of the genes whose expression decreases at meiotic initiation. By taking a descriptive experimental approach, we identified novel gene expression changes during meiotic initiation, which furthers our understanding of germ cell biology.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142698</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of the environmental stress response in aneuploid and cell cycle-arrested budding yeast</title>
<link>https://hdl.handle.net/1721.1/142697</link>
<description>The role of the environmental stress response in aneuploid and cell cycle-arrested budding yeast
Terhorst, Allegra Louise
Size matters in eukaryotic cells. Inability to maintain cell size homeostasis, or the coordination between growth and division, has direct consequences for cellular functions and fitness. Eukaryotic cells have developed mechanisms to ensure proper coordination of biomass accumulation and cell cycle progression. Cell growth can regulate cell cycle progression, and cells are required to grow to a “critical size” before entering the cell cycle. Much less is known about how cell cycle progression affects biomass accumulation, specifically, what happens to cell growth when cell cycle progression is slowed or halted. Here, I investigate this question using two models of cell cycle delay and arrest in S. cerevisiae: aneuploidy and temperature-sensitive cdc (cdc-ts) mutants.&#13;
&#13;
I first show that the environmental stress response (ESR), a gene expression pattern that represses ribosome biogenesis, is activated in both heterogeneous populations of aneuploid cells and complex aneuploid strains with one or more additional or lost chromosomes. Although my results here contradict a previous study using heterogeneous aneuploid populations, I show that their heterogeneous aneuploid populations did exhibit the ESR, but their euploid control population was grown into stationary phase, tainting their analysis. I find that in complex aneuploid strains, growth rate correlates with ESR strength and the ribosomal fraction of the proteome, but this correlation is lost when strains are grown in a nutrient-limiting chemostat. Furthermore, there is a similar loss of ribosomes in the heterogeneous aneuploid populations. Next, I study size regulation in cdc-ts mutants, which arrest in the cell cycle at the restrictive temperature, and also see ribosome downregulation and ESR activation. Similar ESR activation and ribosome loss occurs in cells when either the TORC1 pathway or Ras/PKA pathway is inhibited. When I hyperactivate the Ras/PKA pathway during cdc-ts arrests, cells no longer exhibit the ESR and have significant loss of viability. I show that these strains no longer downregulate ribosomes and attenuate cell size growth. These studies offer profound insights into how the ESR helps coordinate cell growth and cell cycle progression when the two are uncoupled.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142697</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cryptographic Simulation Techniques with Applications to Quantum Zero-Knowledge and Copy-Protection</title>
<link>https://hdl.handle.net/1721.1/142696</link>
<description>Cryptographic Simulation Techniques with Applications to Quantum Zero-Knowledge and Copy-Protection
La Placa Massa, Rolando L.
Bob is stuck doing a crossword puzzle and is starting to think that the puzzle is impossible to complete. Alice assures Bob that the puzzle can be solved, but she wants to prove it without revealing a single entry of the puzzle. Their cryptographer friend, Eve, tells them that Alice can prove it by using a zero-knowledge (ZK) protocol. These protocols are a cornerstone of modern cryptography, yet most of the work has been limited to the classical setting. Since Bob has a quantum computer, Alice needs to be careful to choose the right protocol to make sure it is a quantum zero-knowledge (QZK) protocol, guaranteeing that quantum Bob cannot learn anything about the puzzle except that it has a solution.&#13;
&#13;
Proving the security of ZK protocols comes with additional hurdles when adversaries are quantum capable, in part because the main tool used in the classical setting, rewinding, has additional limitations in the quantum case. While one version of quantum rewinding introduced by Watrous has been successfully used to construct QZK protocols, most of the classical ZK results have been challenging to port to the quantum setting. Ideally, we want quantum secure protocols with the same desirable properties that have been achieved in the classical literature, like concurrent security or low-round complexity. In this thesis, we introduce new quantum simulation techniques and apply them to construct the following QZK protocols assuming the quantum hardness of learning with errors (QLWE).&#13;
&#13;
• &#119874;(1)-round black-box QZK classical argument system for NP: We use techniques developed in the context of ‘tests of quantumness’ to obtain an extraction mechanism that can be leveraged to construct a QZK simulator. &#13;
• Public coin bounded concurrent black-box QZK proof system for NP and QMA: We introduce the technique of block rewinding and use it to obtain a concurrent QZK simulator. &#13;
• Simulatable and extractable quantum proofs of knowledge for NP: We construct QPoK with desirable properties needed for composability. The technique combines Watrous’ rewinding with a recently studied cryptographic tool, statistical receiver-private oblivious transfer. This is the first construction of QPoK with the desired composability features.&#13;
&#13;
We also introduce a new non-black-box knowledge extraction technique using quantum fully homomorphic encryption (QFHE) and lockable obfuscation. One of our main results is that we can adapt this non-black-box technique to the setting of quantum copy-protection to prove that it is impossible to quantum copy-protect arbitrary unlearnable functions. This resolves a long-standing open problem in the negative, assuming QLWE and the existence of QFHE.&#13;
&#13;
Our impossibility result states that we cannot construct quantum copy-protection for arbitrary functions. However, we can hope to do it for restricted families of functions like point functions or compute-and-compare functionalities. While this remains an interesting and challenging open question, we show that provably secure constructions in a standard model (without oracles) are possible if we consider weaker security guarantees than those of quantum copy-protection. For this purpose, we introduce the notion of Secure Software Leasing (SSL), and construct an SSL scheme for a general class of evasive circuits.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142696</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcription Regulates Biased Mitochondrial DNA Inheritance</title>
<link>https://hdl.handle.net/1721.1/142695</link>
<description>Transcription Regulates Biased Mitochondrial DNA Inheritance
Corbi, Daniel R.
Maintaining mitochondrial DNA (mtDNA) is essential in eukaryotes for cellular respiration and ATP production by oxidative phosphorylation. Respiration is the preferred method for creating mitochondrial membrane potential, which is important for the import of nuclear-encoded proteins into the mitochondrion as well as for supporting oxidative phosphorylation. Loss of mtDNA causes problems with maintaining membrane potential and mitochondrial ATP production, making mtDNA of critical importance to cells. However, mtDNA inheritance in proliferating cells is not fully understood. There is a set of mtDNA mutants in Saccharomyces cerevisiae, called hypersuppressives, which exhibit biased inheritance when competed with wild-type (rho+) mtDNA. A particular hypersuppressive allele, HS ORI5-1, damages rho+ mtDNA in the same cell and eliminates respiratory capability from progeny. Overexpression of a mitochondrial RNA exonuclease that interacts with the mitochondrial RNA polymerase inhibits mitochondrial transcription and restores inheritance of rho+ mtDNA when competed against HS ORI5-1. Reduction of mitochondrial transcription affects only particular hypersuppressive alleles, suggesting that there may be multiple ways to achieve biased inheritance.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142695</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cities, networks, and knowledge spillovers</title>
<link>https://hdl.handle.net/1721.1/142694</link>
<description>Cities, networks, and knowledge spillovers
Jara-Figueroa, Cristian
Economies grow as a result of new ideas enabling innovations that render existing technologies obsolete. Yet, explaining economic growth by using the growth of ideas just pushes the question a step further. If growth comes from new ideas, where do ideas come from? With the availability of new data sources on the inputs and outputs of innovation, we have started to build a fairly accurate description of our idea-producing machine. A picture emerges where ideas are cumulative, innovation relies on the ability of ecosystems to produce complex combinations of new ideas, and geography still poses a barrier to knowledge exchange. This work contributes to our understanding of innovation by documenting three stylized facts about knowledge creation: the type of knowledge matters for new companies, complex knowledge is better produced in large cities, and urban vibrancy can help enhance knowledge spillover. First, we document that when starting new ventures, knowledge about the industry is more important than knowledge about the occupations involved. Second, we find that complex economic activities tend to be disproportionately concentrated in large cities and that this concentration has been growing for the past one hundred and fifty years. Third, we use the staggered roll-out of state-level R&amp;D tax credits in the US together with department-level publication data to measure the benefit to university researchers working in close physical proximity to private researchers. We find that urban vibrancy plays a role in increasing the spillover to academia. Our understanding of innovation used to be based on speculation built on anecdotes and stories of success. With the availability of new data sources and platforms that track different pieces of our idea-making machine, we are no longer restricted to studying innovation by focusing only on the big winners.&#13;
&#13;
The first chapter focuses on how worker mobility can bring different types of knowledge to pioneer companies in Brazil. Using methods from network science to build indicators of knowledge relatedness, we explore the question: how does the success of entrepreneurial activities depend on the experience of a team? We measure the industry-, occupation-, and location-specific knowledge carried by workers from one establishment to the next, using a dataset summarizing the individual work history for an entire country. Our results show that hiring workers with industry-specific knowledge produces the largest and most significant boost in the survival and growth of new firms. This is particularly important for pioneer firms, which are firms operating in an industry that was not present in their region. Pioneers are of particular importance because the success of pioneers is the basic unit of regional economic diversification.&#13;
&#13;
The second chapter studies how the spatial concentration of economic activities depends on its knowledge complexity. Are economic activities that rely heavily on complex knowledge more concentrated? How has their concentration changed in the last decades? We find that complex economic activities, such as biotechnology, neurobiology, and semiconductors, concentrate disproportionately in a few large cities compared to less complex activities, such as apparel or paper manufacturing. We use multiple proxies to measure the complexity of activities, finding that complexity explains from 40% to 80% of the variance in urban concentration of occupations, industries, scientific fields, and technologies. Using historical patent data, we show that the spatial concentration of cutting-edge technologies has increased since 1850, suggesting a reinforcing cycle between the increase in the complexity of activities and urbanization. These findings suggest that the growth of spatial inequality may be connected to the increasing complexity of the economy.&#13;
&#13;
The third chapter explores the role of urban vibrancy in mediating knowledge spillover between two types of knowledge workers: private researchers and university researchers. Do university researchers benefit from private R&amp;D? Does this benefit depend on the urban environment around them? Using the staggered roll-out of state-level R&amp;D tax credits in the US together with department-level publication data, we measure the benefit to university researchers working in locations dense with related industry R&amp;D. We use data on patents to calculate how exposed university researchers are to related private R&amp;D, and data on the density of cafes and restaurants to build an index of urban vibrancy. We find that university researchers benefit from R&amp;D tax credits only when located in areas dense with related industry R&amp;D activity. More importantly, urban vibrancy increases the benefits from R&amp;D tax credits when the academic department is located in areas dense with related industry R&amp;D. These results highlight that although the urban environment can increase the positive externalities of investment in R&amp;D, it cannot create innovation by itself.&#13;
&#13;
Understanding how ideas are created used to be based on speculation built on anecdotes and stories of success. With the availability of new data sources and platforms that track different pieces of the idea-making machine, we are no longer restricted to studying innovation by focusing only on the winners.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142694</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interrogation of Changes in Cell State during Tumor Evolution of a Genetically Engineered Mouse Model of Lung Adenocarcinoma</title>
<link>https://hdl.handle.net/1721.1/142693</link>
<description>Interrogation of Changes in Cell State during Tumor Evolution of a Genetically Engineered Mouse Model of Lung Adenocarcinoma
Cruz, Amanda Margarita
In genetically engineered mouse models of lung adenocarcinoma (LUAD), tumors become more heterogeneous and dysregulate cell identities as they progress and evolve. In this thesis, single-cell RNA-sequencing technology was utilized to understand dynamic changes that occur during tumor evolution both with respect to tumor cells and tumor-specific cytotoxic CD8 T cells. In tumor cells, expression of Etv4 and Etv5, which belong to the Pea3 family of transcription factors, vary as a consequence of tumor progression. Etv5 regulates the identity of the cells that give rise to KP tumors, and its expression is lost as tumors evolve. Conversely, Etv4 is not expressed in the adult lung, but becomes latently expressed in aggressive tumors. Interestingly, we find that both Etv4 and Etv5 are required for lung tumor initiation. In addition, we also profile CD8 T cells that specifically recognize experimentally defined tumor neoantigens and provide evidence for an antigen dominance hierarchy that creates competition between T cell responses to tumor neoantigens. Critically, we find that this hierarchy influences the functionality of CD8 T cells and describe novel differentiation trajectories that distinguish subdominant and dominant antigen responses. Together, findings from these studies were used to propose analytical methodologies to model tumor evolution.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142693</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomolecular Condensates in Transcriptional Regulation</title>
<link>https://hdl.handle.net/1721.1/142692</link>
<description>Biomolecular Condensates in Transcriptional Regulation
Coffey, Eliot L.
Transcriptional regulation of gene expression is fundamental in determining cell behavior, identity, and organism development. When dysregulated, it can cause disease. Core to transcriptional regulation is the control of RNA polymerase II (RNAPII) activity. The combined activity of DNA regulatory elements, transcription factors and cofactors, and epigenetic chromatin states determines where and when RNAPII transcribes, and thus regulates transcriptional activity. The highly cooperative interactions between components that positively and negatively regulate transcription have been mysterious for decades. However, recent study of biomolecular condensates has reframed our understanding of these cooperative interactions. Condensates are membrane-less compartments that concentrate components involved in the same biochemical processes. This thesis examines the formation and function of condensates that both activate and repress transcription. Transcriptional condensates form at active euchromatic genes to facilitate transcription (Sabari et al., 2018). In contrast, heterochromatin condensates form at transcriptionally silent regions of the genome to repress transcription (Li et al., 2020). Studies presented in this thesis demonstrate that transcriptional and heterochromatin condensates regulate gene expression via the concentration of specific components. Notably, we find that methyl-CpG binding protein 2 (MeCP2) is a key component of heterochromatin condensates. Mutations in MeCP2 cause the neurodevelopmental disorder Rett syndrome, and we link disease-causing mutations in MeCP2 to the disruption of heterochromatin condensate formation and function. These findings implicate condensate disruption in human disease. Our new understanding of condensates demands the development of new therapeutic hypotheses that must be explored in order to improve the lives of patients.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142692</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commissioning the DIRC Detector and Searching for Axion-like Particles at GlueX</title>
<link>https://hdl.handle.net/1721.1/142690</link>
<description>Commissioning the DIRC Detector and Searching for Axion-like Particles at GlueX
Yang, Yunjie
This thesis centers around problems in the study of the strong nuclear force. The GlueX DIRC, a Cherenkov radiation-based detector, was proposed to upgrade the particle identification capability of the GlueX experiment, which aims to perform quantitative tests of Quantum Chromodynamics in the nonperturbative regime by searching for and studying hybrid mesons. This thesis describes the construction, commissioning, reconstruction, and calibration of the GlueX DIRC detector. &#13;
&#13;
Originally proposed to solve the strong CP problem, axions and axion-like particles are hypothetical pseudoscalar particles found in many proposed extensions to the Standard Model of particle physics. This thesis presents a search for photoproduction of axion-like particles using data from photon-proton interactions collected by the GlueX experiment at Jefferson Laboratory, in the &#120574;&#120574; and &#120587;⁺&#120587;⁻&#120587;⁰ final states of the axion-like particles.&#13;
&#13;
In addition, the Monte Carlo modeling of the strong interaction at low energies leads to challenges known as the event generator tuning problem. This thesis presents a novel approach to the Monte Carlo event generator tuning problem using Bayesian optimization.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142690</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probes of Dark Matter from the Universe’s Past and Present</title>
<link>https://hdl.handle.net/1721.1/142689</link>
<description>Probes of Dark Matter from the Universe’s Past and Present
Wu, Chih-Liang
There is compelling evidence of gravitational effects from dark matter in cosmology and astronomy. By studying other imprints besides gravity, we might be able to infer its properties and interactions with Standard Model particles. This thesis explores how different dark matter models generate interesting observables and what can be learned by looking into various datasets. I demonstrate that novel techniques and datasets from indirect searches can be combined to set stringent constraints on the model parameters. In particular, this thesis details the calculations of constraints from two directions: on the one hand I will show how to constrain the rate of scattering, decay, and annihilation by taking advantage of cosmological observables including the Cosmic Microwave Background and the 21-cm neutral hydrogen line; on the other hand I will exploit astronomical datasets to infer properties of dark matter substructures and halo structures. The sources of the observables span a wide range of cosmological time, from cosmological recombination to the late-time universe, and they can be applied to probe complementary parameter space. With upcoming precision measurements in astronomy, these novel ideas and calculations of indirect detection will yield insight into the nature of dark matter.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142689</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Search for Dark Photons at LHCb and Machine Learning in Particle Physics</title>
<link>https://hdl.handle.net/1721.1/142688</link>
<description>The Search for Dark Photons at LHCb and Machine Learning in Particle Physics
Weisser, Constantin Niko
Investigating hypothetical particles called dark photons helps shed light on the nature of dark matter, which is one of the biggest open questions in particle physics. This thesis presents world-leading limits in searches for prompt-like and long-lived dark photons decaying into two muons, as well as other dimuon resonances, produced in proton-proton collisions and collected by the LHCb experiment at the Large Hadron Collider at CERN.&#13;
&#13;
In addition, this thesis proposes various machine and deep learning techniques and their applications to particle physics: classifier bias on a continuous feature can be controlled more flexibly with a novel moment decomposition loss function than with simple decorrelation, which can enhance bump hunt sensitivity; the first high precision generative model approach to high energy physics simulation has potential to help close the gap between pledged and required resources; we developed a simple, powerful, and novel deep learning approach to vertexing, a technique to determine the location of vertices of sprays of particles, given particle tracks; the statistics chapter is concluded by a pedagogical study of using machine learning classifiers for multivariate goodness-of-fit and two-sample tests.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142688</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanochemical pattern formation in the cellular actomyosin cortex</title>
<link>https://hdl.handle.net/1721.1/142687</link>
<description>Mechanochemical pattern formation in the cellular actomyosin cortex
Tekant, Melis
Protein patterning is essential for cellular function. From cell division to cell migration, specific positional and temporal arrangements of proteins are requisite for triggering and executing vital cell processes. In the aforementioned examples and many other fundamental cellular activities, biochemical patterns drive or are accompanied by dramatic shape transformations. As mechanical conditions, such as cortical stress and membrane curvature, change in response to spatially arranged and temporally varying forces, the biochemical patterns, too, must evolve with this dynamic environment to enact complex cell movements. While the intricate interplay between protein patterning and cell deformations is important to any cellular function, it is especially paramount to carrying out processes that require large transformations of cell geometry. Yet, how cells rapidly and reliably communicate information between their chemical and mechanical fields is still not fully understood.&#13;
&#13;
In this thesis, I explore the mechanisms of coupling between cell mechanics and biochemical patterns in the actomyosin cortex of Patiria miniata sea star oocytes. These oocytes are an ideal biological model system for exploring the interactions between biochemical patterning and mechanical deformations in evolving mechanochemical systems in vivo due to their experimental accessibility and wealth of attainable biochemical patterns. In Chapter 2, I utilize endogenous fluorescent markers embedded in the actomyosin mesh to probe the spatiotemporal surface strain patterns induced on the oocyte membrane by the activity of Rho proteins, highly conserved regulators of cell contractility. In Chapter 3, I show how these Rho patterns can be tuned in vivo using dynamic, external geometrical deformations by combining micropipette aspiration with live fluorescence imaging. In Chapter 4, I describe the infrared spectroscopy setup built in the pursuit of uncovering the properties of fluorescent markers. Taken together, the work in this thesis outlines a quantitative approach towards uncovering the coupling between contractility-regulating biochemical patterns and cellular deformations in dynamically evolving geometries.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142687</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Near-Term Quantum Computation: Theoretical Aspects of Variational Quantum Algorithms and Quantum Computational Supremacy</title>
<link>https://hdl.handle.net/1721.1/142686</link>
<description>On Near-Term Quantum Computation: Theoretical Aspects of Variational Quantum Algorithms and Quantum Computational Supremacy
Napp, John C.
In recent years, programmable quantum devices have reached sizes and complexities which put them outside the regime of simulation on modern supercomputers. However, since their computational power is not well understood, it’s not obvious what to do with them! Of course, there are several ideas, and this thesis contributes to the theory underpinning some of these ideas. It has two parts, corresponding to two of the most natural directions to pursue in searching for applications of near-term quantum computers. The first part is concerned with obtaining a deeper understanding of heuristic, hybrid quantum-classical algorithms which are potentially implementable on near-term devices and are aimed at attaining quantum speedups for practical problems, but lack a strong theoretical foundation and provable guarantees on their performance. More precisely, we obtain new theoretical results on the convergence rates of variational quantum algorithms, and prove that certain optimization strategies in such algorithms can, in some settings, lead to substantially better performance than the originally proposed, simpler, and potentially easier-to-implement approach. The second part is concerned with better understanding the capabilities of near-term quantum computers for demonstrating evidence of quantum computational supremacy in the complexity-theoretic sense of violating the Extended Church-Turing Thesis: a superpolynomial quantum speedup for a well-defined computational problem, possibly of no practical use, over all classical algorithms. More precisely, we study the computational complexity of classically simulating random 2D quantum circuits. 
While the classical hardness of simulating random circuits forms the basis of one of the leading quantum supremacy proposals, we challenge some of the intuition and evidence underlying this belief by developing new classical simulation algorithms which are efficient (polynomial-time) for 2D random circuits of sufficiently low constant depth; interestingly, these algorithms appear to experience computational phase transitions into an inefficient, exponential-time regime when the depth or local Hilbert space dimension surpasses some critical value.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142686</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light, Unstable Sterile Neutrinos: Phenomenology, a Search in the IceCube Experiment, and a Global Picture</title>
<link>https://hdl.handle.net/1721.1/142685</link>
<description>Light, Unstable Sterile Neutrinos: Phenomenology, a Search in the IceCube Experiment, and a Global Picture
Moulai, Marjon H.
Longstanding anomalies in neutrino oscillation experiments point to the existence of a fourth, hypothetical neutrino: the sterile neutrino. Global fits to a sterile neutrino model find a strong preference for such a model over the massive neutrino Standard Model. However, the fit results suffer from inconsistencies between datasets, referred to as tension. This motivates more complicated models for new physics. This thesis considers a model of unstable sterile neutrinos, where the heaviest mass state can decay. First, the phenomenology of unstable sterile neutrinos is explored in the IceCube experiment, a gigaton neutrino detector located at the South Pole. Second, global fits to traditional and unstable sterile neutrino models are combined with one year of data from IceCube. A preference for the unstable sterile neutrino model is found, as well as a reduction in tension. Lastly, a high statistics search for unstable sterile neutrinos is performed in IceCube. The Standard Model is rejected with a &#119901;-value of 2.8% and the traditional sterile neutrino model is rejected with a &#119901;-value of 4.9%. The best-fit point is [formula], [formula], and &#119892;² = 2.5&#120587; ± 1.5&#120587;, where &#119892; is the coupling that mediates the neutrino decay. The best fit corresponds to a lifetime of the heaviest neutrino of &#120591;₄/&#119898;₄ = 6 × 10⁻¹⁶ s/eV. A Bayesian analysis finds a best model with similar sterile parameters.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142685</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable Machine Learning for Prediction and Avoidance of Disruptions in Tokamak Plasmas</title>
<link>https://hdl.handle.net/1721.1/142684</link>
<description>Interpretable Machine Learning for Prediction and Avoidance of Disruptions in Tokamak Plasmas
Montes, Kevin J.
Tokamak plasmas are sometimes terminated due to off-normal events called disruptions, which are characterized by successive thermal and current quench events that deplete the stored thermal and magnetic energy. In addition to the costs of disruptions due to loss of confinement and operation, their corresponding thermal, electromagnetic, and potential runaway electron loads can cause significant structural damage to the tokamak’s plasma facing components. Therefore, disruption forecasting algorithms are needed to either avoid disruptions altogether via plasma control, or to mitigate their deleterious effects once they happen. A limited physical understanding and a wealth of experimental tokamak data from decades of research make this problem ripe for machine learning-based prediction and control, yet it is often difficult to explain how these data-driven algorithms make particular predictions. This thesis demonstrates the novel application of data-driven methods to address this issue via two main contributions. For the first, databases of thousands of discharges on multiple tokamaks were used to develop a random forest disruption predictor, demonstrating a relatively low limit for feasible disruption prediction on Alcator C-Mod when compared to DIII-D and EAST. Its predictions are shown to be interpretable using metrics known as feature contributions, which were made available in real-time experiments on the DIII-D tokamak to inform control actions. For the second contribution, the semi-supervised label spreading algorithm is applied to detect events often preceding disruptions in a large set of discharges, given few manually labeled examples. A method is proposed to construct event databases from scratch with the algorithm, and an accompanying software module was developed and made available for this purpose.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142684</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of miRNA degradation</title>
<link>https://hdl.handle.net/1721.1/142683</link>
<description>Regulation of miRNA degradation
Kingston, Elena Ruth
miRNAs are small RNAs that repress gene expression by guiding the effector protein Argonaute (Ago) to complementary sites in the 3′ UTRs of target genes. The levels of a miRNA in a given cell-type are determined by the balance of its rates of production and decay. Although much is known about miRNA production, relatively little is understood about miRNA degradation. Here, I describe the work that I’ve done to address this gap in our knowledge.&#13;
&#13;
To understand the extent to which degradation rate is individually specified for miRNAs, I measured miRNA dynamics in mammalian cells. Supporting individualized regulation, measured miRNA half-lives spanned two orders of magnitude. Analyses of these data suggested that interactions with targets help shape turnover rates, and indeed, we showed that target RNA-directed miRNA degradation (TDMD) – a phenomenon whereby a highly complementary target site promotes degradation of the bound miRNA – is largely responsible for driving rapid turnover of miR-7. This observation raises the possibility that TDMD might also destabilize other miRNAs.&#13;
&#13;
The subsequent discovery that the protein ZSWIM8 mediates TDMD, and the identification of many miRNAs that are stabilized upon loss of ZSWIM8, confirmed that this pathway broadly shapes miRNA turnover dynamics. We investigated whether other classes of small RNAs might also be regulated by this pathway, and found that loss of the Drosophila ZSWIM8 homolog, Dora, has no discernible effect on the levels of small-interfering RNAs (siRNAs). Such protection from regulation by Dora is conferred by the Ago protein into which siRNAs load. This finding implies that effector protein identity dictates whether a small RNA is regulated by ZSWIM8.&#13;
&#13;
Loss of Dora is lethal, suggesting an essential role for TDMD during development, yet this lethality precludes the rigorous analysis of dora animals. To circumvent this challenge, we disrupted TDMD in a more targeted manner by identifying and perturbing a TDMD-triggering, highly complementary target site for a single miRNA in flies. Although analyses of this fly line reveal some ways in which TDMD shapes organismal development, future studies are needed to fully understand how the regulation of miRNA degradation operates throughout the Drosophila lifecycle.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142683</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organizing morphogenesis: Mechanisms of actomyosin patterning by RhoGTPase signaling</title>
<link>https://hdl.handle.net/1721.1/142682</link>
<description>Organizing morphogenesis: Mechanisms of actomyosin patterning by RhoGTPase signaling
Denk-Lobnig, Marlis
Morphogenesis is an astonishing orchestration of molecules, cells and tissues, of signaling and mechanics, across space and time. How are its components choreographed across scales? This thesis investigates this question in the context of Drosophila ventral furrow formation, a well-established, simple model of tissue folding. First, we demonstrated how, on the tissue level, combinatorial activation of two transcriptional tissue patterns sets up actomyosin distribution. This tissue-level actomyosin pattern is tuned by the balance of two RhoA GTPase regulators, RhoGEF2 and C-GAP, and in turn regulates the curvature of the resulting fold. We then investigated how myosin organization at the cell level is coordinated by RhoGEF2 and C-GAP interplay. We found that the balance of RhoGEF2 and C-GAP regulates the size of an active myosin patch at the constricting apical cell surface, but that both regulators act partially synergistically in promoting temporal myosin dynamics. Overexpression of both regulators together causes a distinct myosin spatiotemporal pattern, suggesting that RhoGEF2 and C-GAP are more than simple antagonists and that their regulatory interactions provide essential components of myosin patterning and organization. In summary, this thesis provides insight into how a simple and common regulatory module organizes contractility in the ventral furrow across multiple scales, in space and time.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142682</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetism in Two-Dimensional van der Waals Materials</title>
<link>https://hdl.handle.net/1721.1/142681</link>
<description>Magnetism in Two-Dimensional van der Waals Materials
Klein, Dahlia
Layered van der Waals crystals are a rich proving ground for exploring electronic behavior confined to two dimensions. Since the discovery of graphene in 2004, a large family of crystals has been isolated in the ultrathin limit, hosting a range of different properties, including semiconductors, superconductors, and topological insulators. These few-atom-thick sheets can be restacked into endless combinations of artificial heterostructures with atomically sharp interfaces that can be thought of as fundamentally new quantum materials. For over a decade, however, magnetism was noticeably absent from van der Waals materials.&#13;
&#13;
In this thesis, I present experiments on one of the first families of 2D magnets, the insulating chromium trihalides (CrX3), including CrI3 and CrCl3. These results were enabled by techniques developed to manipulate these air-sensitive few-layer crystals in an inert glovebox environment. I discuss magneto-optical experiments to measure and electrically control the magnetic ordering of ultrathin CrI3. I also present a new approach to probe the layer-dependent magnetic ordering by electron tunneling through van der Waals spin-filter magnetic tunnel junctions, where a few-layer crystal of CrX3 serves as the insulating tunnel barrier. Surprisingly, these magneto-optical and tunneling experiments reveal magnetic properties in ultrathin CrX3 differing from those of the bulk crystals. Using Raman spectroscopy, I connect these differences to changes in the lateral stacking arrangements between individual crystalline layers.&#13;
&#13;
The techniques established to handle air-sensitive 2D magnets lay the groundwork for the discovery of novel magnetic phenomena in the many yet unexplored layered magnetic insulators. Moreover, the development of 2D magnetic tunnel junctions with large magnetoresistances and highly spin-polarized currents paves the way for integration in the spintronics community. Finally, the more complete understanding of the layer-dependent magnetism in ultrathin CrX3 unlocks the potential to carefully incorporate 2D magnets in a variety of van der Waals heterostructures for proximity magnetism effects and beyond.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142681</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning and Variational Algorithms for Lattice Field Theory</title>
<link>https://hdl.handle.net/1721.1/142680</link>
<description>Machine Learning and Variational Algorithms for Lattice Field Theory
Kanwar, Gurtej
Discretizing fields on a spacetime lattice is the only known general and non-perturbative regulator for quantum field theory. The lattice formulation has, for example, played an important role in predicting properties of QCD in the strongly coupled regime, where perturbative methods break down. To recover information about continuum physics, parameters defining the lattice theory must be tuned toward criticality. However, Markov chain Monte Carlo (MCMC) methods commonly used to evaluate the lattice-regularized path integral suffer from critical slowing down in this limit, restricting the precision of continuum extrapolations. Further difficulties arise when computing the energies and interactions of physical states by measuring correlation functions of operators widely separated in spacetime: for most correlation functions, an exponentially severe signal-to-noise problem is encountered as the operators are taken to be widely separated, limiting the precision of calculations.&#13;
&#13;
This dissertation introduces two new techniques to address these issues. First, we define a novel MCMC algorithm based on generative flow-based models. Such models utilize machine learning methods to describe efficient approximate samplers for distributions of interest. Independently drawn flow-based samples are then used as proposals in an asymptotically exact Metropolis-Hastings Markov chain. We also construct models that flexibly parameterize families of distributions while capturing symmetries of interest, including translational and gauge symmetries. By variationally optimizing the distribution selected from these families, one can maximize the efficiency of flow-based MCMC. Second, we introduce an approach to 'deform' Monte Carlo estimators based on contour deformations applied to the domain of the path integral. The deformed estimators associated with an observable give equivalent unbiased measurements of that observable, but generically have different variances. We define families of deformed manifolds for lattice gauge theories and introduce methods to efficiently optimize the choice of manifold (the 'observifold') so that the variance of the associated deformed observable is minimized. Finally, we demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications to scalar phi^4 theory and U(1) and SU(N) lattice gauge theories.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142680</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The dynamics of bubbles in nucleate boiling</title>
<link>https://hdl.handle.net/1721.1/142294</link>
<description>The dynamics of bubbles in nucleate boiling
Griffith, P.
            (Peter)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1956; Vita.; Bibliography: leaves 86-88.
</description>
<pubDate>Sun, 01 Jan 1956 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142294</guid>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular beam study of van der Waals molecules: spin-rotation interaction in potassium-argon.</title>
<link>https://hdl.handle.net/1721.1/142291</link>
<description>Molecular beam study of van der Waals molecules: spin-rotation interaction in potassium-argon.
Mattison, Edward M.
            (Edward Martin)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1974; Bibliography: leaves 167-169.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142291</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optically active phonons and electronic transitions in rare-earth tri-fluorides.</title>
<link>https://hdl.handle.net/1721.1/142290</link>
<description>Optically active phonons and electronic transitions in rare-earth tri-fluorides.
Parrish, John Frederic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1970; Vita.; Bibliography: leaves 168-175.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142290</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convection and segregation phenomena in low Prandtl number melt growth systems : a quantitative experimental and theoretical approach</title>
<link>https://hdl.handle.net/1721.1/142288</link>
<description>Convection and segregation phenomena in low Prandtl number melt growth systems : a quantitative experimental and theoretical approach
Martin, Edward Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142288</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Filtration and some related processes for aluminum alloys.</title>
<link>https://hdl.handle.net/1721.1/142287</link>
<description>Filtration and some related processes for aluminum alloys.
Apelian, Diran.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142287</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of labor in the transition to capitalism : the case of the coffee plantations in São Paulo, Brazil (1880-1925)</title>
<link>https://hdl.handle.net/1721.1/142272</link>
<description>The role of labor in the transition to capitalism : the case of the coffee plantations in São Paulo, Brazil (1880-1925)
Guimaraés De Camargo, José Marcio Antonio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1978; Bibliography: leaves 240-243.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142272</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A systems analysis of scheduled air transportation networks</title>
<link>https://hdl.handle.net/1721.1/142271</link>
<description>A systems analysis of scheduled air transportation networks
Swan, William M.
            (William Maynard)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1979; Bibliography: leaves 344-347.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/142271</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution time series reveals differential behaviors of closely-related microbes in coastal communities</title>
<link>https://hdl.handle.net/1721.1/141959</link>
<description>High-resolution time series reveals differential behaviors of closely-related microbes in coastal communities
Elsherbini, Joseph
Coastal plankton, primarily composed of heterotrophic bacteria, eukaryotic microalgae, and other small eukaryotes, have an outsized impact on global biogeochemical cycles. Understanding the forces that affect community assembly and dynamics through time is therefore important for our understanding of these cycles. From careful analyses of the genetics and behaviors of isolates, we know that very closely related microbes can vary in their potential for growth, defense from predation, and ability to compete and cooperate for resources. However, in surveys of the environment, most studies have focused on lower-resolution groupings of taxa, so little is known about how these differences between closely related microbes play out in the wild. In this thesis, I show that when viewed at high temporal, spatial, and genetic resolution, the coastal plankton community is highly dynamic. Making use of a 93-day time series collected from Nahant, Massachusetts, I analyze amplicons at single-nucleotide resolution. First, for the eukaryotic community I show that despite apparent stability at higher taxonomic levels, there is rapid turnover of the community at the sequence level, and that for sequences one nucleotide apart there is evidence for distinct ecologies. Second, in the bacterial community, I use a much more resolved genetic marker library to show that even when sequences emerge from the same species there is evidence for distinct dynamics during the time series. Taken together, these observations demonstrate a seemingly fractal diversity in the coastal ocean plankton, where the further one zooms in the more distinctions one can make between organisms. This dizzying diversity across temporal, spatial, and evolutionary scales may have previously unappreciated impacts on our understanding of these communities.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141959</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interstitial Intelligence: Human-Rodent Sensing, Cognition, and Work in Morogoro, Tanzania</title>
<link>https://hdl.handle.net/1721.1/141958</link>
<description>Interstitial Intelligence: Human-Rodent Sensing, Cognition, and Work in Morogoro, Tanzania
Lee, Jia Hui
This dissertation is a historically informed ethnography of various human-rodent encounters in zoological research, animal training, and pest management schemes in Morogoro, Tanzania. I focus on the mobile and multiple forms of translocal knowledge production, particularly within the context of science and technology in Tanzania, and consider their entanglements with colonial legacies, global inequities, and uncertainty about the future. I investigate how rodent trappers, trainers, and researchers enact an interstitial intelligence through thinking with and athwart rodents. Drawing on both archival and ethnographic research, I argue that human-rodent encounters in Tanzania are nodes for generating critique, theorization, and speculation about thinking as the practice relates to questions of science, technology, and innovation in the global South.&#13;
&#13;
In Part I, I present a history of the development of rodent science in Tanzania that began during the British colonial government of Tanganyika. I show that rodent outbreaks compelled the colonial government to launch several scientific investigations into rodent ecology. These logics persisted into the postcolonial period, during which several European-Tanzanian partnerships were established to study rodents as pests and disease carriers. I combine archival research and oral history interviews to track how these partnerships were crucial to the establishment of the Pest Management Centre at the Sokoine University of Agriculture, led by Tanzanian rodent scientists.&#13;
&#13;
In Part II, I draw on ethnographic fieldwork, including participant observation and interviews, to analyze emergent forms of interspecies and interstitial thinking practiced by those who research, train, and trap rodents. I pay attention to the construction of and code-switching between Linnaean, Kiswahili, and Kiluguru rodent taxonomic systems. I provide a semiotic account of interspecies sensory co-laboring, essential to the social practice of animal training that transforms giant pouched rats into technologies for landmine detection. I then suggest that rodent trainers propose a working theory of rodent minds that contrasts with an important, Tanzanian type of intelligence called “hekima.” The final chapters situate these human-rodent encounters within larger social and political issues in Tanzania. I position rodent traps as innovative designs and examine practices of looking for buried treasure as part of thinking about resource nationalism in Tanzania.&#13;
&#13;
Kiswahili - &#13;
Tasnifu hii inachunguza mahusiano kati ya binadamu na panya kwenye miradi ya utafiti wa kizoolojia, mafunzo ya wanyama, na udhibiti wa baa la panya iliyopo Morogoro, Tanzania. Lengo kuu la tasnifu hii ni kujua mchakato wa uzalishaji elimu, hasa kwenye mada za sayansi na teknologia nchini Tanzania, ikiwemo historia ya ukoloni, ukosefu wa usawa duniani, na kiwaa cha muda ujao. Utafiti huu ulitumia mbinu tatu za ukusanyaji data ambazo ni uchunguzi shirikishi, usaili, na utafiti wa nyaraka. Ninachunguza jinsi watafiti, wanategaji panya, na wanafundishaji panya wanavyotoa nadharia na uchambuzi ikilinganishwa na vitendo vya sayansi, teknologia, na uvumbuzi miongoni ya nchi za kusini mwa dunia.&#13;
&#13;
Tasnifu hii ina sehemu mbili. Katika sehemu ya kwanza, ninawasilisha historia ya maendeleo ya utafiti wa panya nchini Tanzania kuanzia kipindi cha ukoloni wa Uingereza. Ninaonyesha jinsi mlipuko wa baa la panya ulivyoilazimu serikali ya kikoloni ya Uingereza kuanzisha uchunguzi wa kisayansi katika ikolojia ya panya. Itikadi hii ya ukoloni iliendelea hata baada ya kipindi cha ukoloni, ambapo miradi mabalimbali ya ushirikiano baina ya Ulaya na Tanzania ilianzishwa kuendeleza uchunguzi wa panya kama wasambazaji magonjwa na wadudu. Ninaunganisha utafiti wa nyaraka binafsi na mahojiano ya mdomo ya historia kufuatilia jinsi ushirikiano huu ulivyokuwa muhimu katika kuchangia uanzishaji wa Kituo cha Kudhibiti Viumbe Hai Waharibifu katika Chuo Kikuu cha Sokoine cha Kilimo, kinachoongozwa na Wanasayansi Watanzania.&#13;
&#13;
Katika sehemu ya pili, ninachunguza vitendo vya kuwaza na kutengeneza nadharia vinavyotendwa na wanaotafiti, kufundisha, na kutega panya. Wanatumia mifumo ya Linnaean, Kiswahili na Kiluguru kutofautisha aina mbalimbali za panya kufuata vigezo vya jamii fulani. Wanapofundisha panya kunusa mabomu chini ya ardhi, wafundishaji wa panya wanabadilishana taarifa na panya kwa msingi wa jinsi panya wanavyofukua ardhi na mienendo mingine. Ninapendekeza kwamba wafundishaji wa panya wanatengeneza nadharia kuelezea akili ya panya ambayo ni tofauti na “hekima.”. Sura za mwisho za tasnifu hii zinajadili mitego ya panya kama ubunifu na ugunduzi. Hatimaye, ninatoa nafasi ya kufikiria kitendo cha kutafuta hazina iliyofichwa kama kipengele cha majadiliano kitaifa kuhusu miliki ya rasilimali ya nchi.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141958</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The tempering of high carbon high chromium steels</title>
<link>https://hdl.handle.net/1721.1/141949</link>
<description>The tempering of high carbon high chromium steels
Zmeskal, Otto Francis.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1941; Vita.; Includes bibliographical references (leaves 222-231).
</description>
<pubDate>Wed, 01 Jan 1941 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141949</guid>
<dc:date>1941-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The stereochemistry of rotenone</title>
<link>https://hdl.handle.net/1721.1/141946</link>
<description>The stereochemistry of rotenone
Kaltenbronn, James Stanley,
            1934-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1960; Vita.; Includes bibliographical references (leaf 46).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141946</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cell-free enzymatic preparation of important biochemicals : l-glycerol phosphate, NAD, NADP and ATP</title>
<link>https://hdl.handle.net/1721.1/141945</link>
<description>Cell-free enzymatic preparation of important biochemicals : l-glycerol phosphate, NAD, NADP and ATP
Rios-Mercadillo, Victor Manuel.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141945</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperfine structure and higher nuclear moments</title>
<link>https://hdl.handle.net/1721.1/141944</link>
<description>Hyperfine structure and higher nuclear moments
Schwartz, Charles Leon,
            1931-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1954; Vita.; Bibliography: leaf 59.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141944</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two essays on Keynesian user cost.</title>
<link>https://hdl.handle.net/1721.1/141941</link>
<description>Two essays on Keynesian user cost.
Martin, Stephen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1977; Vita.; Includes bibliographies.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141941</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Labor in Turkey's economic development</title>
<link>https://hdl.handle.net/1721.1/141936</link>
<description>Labor in Turkey's economic development
Rosen, Sumner Maurice,
            1923-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Engineering, 1959; Vita.; Includes bibliographical references (leaves 675-684).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141936</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Damaged Collagen Detection and A Novel Approach to 1,3-Dipolar Cycloaddition Reactivity: Research at the Interface of Chemistry and Biology</title>
<link>https://hdl.handle.net/1721.1/141800</link>
<description>Damaged Collagen Detection and A Novel Approach to 1,3-Dipolar Cycloaddition Reactivity: Research at the Interface of Chemistry and Biology
Dones Monroig, Jesús M.
Part I&#13;
&#13;
Chapter 1. Collagen Mimetic Peptides as a Selective Binder for Damaged Collagen. The term “collagen diseases” was first introduced by Klemperer and coworkers in the 1950s. This type of disease can cause fibrinoid degeneration or changes in the collagen fibers that denature the collagen triple helix. Collagen mimetic peptides (CMPs), variants that are monomeric in solution but have a high ability to anneal with damaged collagen, are a growing tool in the biomedical field for the detection and treatment of collagen-related diseases. This chapter provides a review of recent developments in the field of damaged collagen targeting for studying collagen-related diseases and injuries. &#13;
&#13;
Chapter 2. Optimization of interstrand interactions enables burn detection with a collagen-mimetic peptide. In this chapter, through a computational screen, we identify (flpHypGly)7 as an optimal monomeric CMP for heterotrimer formation. We find that (flpHypGly)7 forms stable triple helices with (ProProGly)7 but not with itself. The nonnatural amino acid HflpOH, which is (2S,4S)-4-fluoroproline, is not toxic to human fibroblasts or keratinocytes. Conjugation of (flpHypGly)7 to a fluorescent dye enables the facile detection of burned collagenous tissue with high specificity. The ubiquity of collagen and the prevalence of injuries and diseases that disrupt endogenous collagen suggest widespread utility for this approach.&#13;
&#13;
Chapter 3. A Cyclic Peptide Mimetic of Damaged Collagen.  In this chapter, a duplex of CMPs was envisioned as a macromolecular mimic for damaged collagen. The duplex was synthesized on a solid support from the amino groups of a lysine residue and by using olefin metathesis to link the N termini. The resulting cyclic peptide, which is a monomer in solution, binds to CMPs to form a triple helix. Among these, CMPs that are engineered to avoid the formation of homotrimers but preorganized to adopt the conformation of a collagen strand exhibit enhanced association. Thus, this cyclic peptide enables the assessment of CMPs for utility in annealing to damaged collagen. Such CMPs have potential use in the diagnosis and treatment of fibrotic diseases and wounds.&#13;
&#13;
Chapter 4. Optical imaging of collagen fiber damage to assess thermally injured human skin. In this chapter, we present two complementary candidate methods for visualization of collagen structure in three dimensions. Second harmonic generation imaging offers a label-free, high-resolution method to identify intact collagen. Simultaneously, a fluorophore-tagged collagen-mimetic peptide can detect damaged collagen. Together, these methods enable the characterization of collagen damage in human skin biopsies from burn patients, as well as ex vivo thermally injured human skin samples. These combined methods could enhance the understanding of the role of collagen in human wound healing after thermal injury and potentially assist in clinical decision-making.&#13;
&#13;
Chapter 5: Hox genes maintain critical roles in the adult skeleton. Recently, it has been demonstrated that Hox expression continues from embryonic stages through postnatal and adult stages exclusively in a skeletal stem cell population. However, whether Hox genes continue to function after development has not been rigorously investigated. In this chapter, we discuss the critical roles of Hox11 genes in skeletal homeostasis of the forelimb zeugopod (radius and ulna). We generated a Hoxd11 conditional allele and induced genetic deletion at adult stages. Together, our studies show that Hox11 genes continuously function in the adult skeleton in a region-specific manner by regulating differentiation of Hox-expressing skeletal stem cells into the osteolineage.&#13;
&#13;
Part II&#13;
&#13;
Chapter 1. Acceleration of 1,3-Dipolar Cycloadditions by Integration of Strain and Electronic-Tuning. The 1,3-dipolar cycloaddition between azides and alkynes is enabling new means to probe and control biological processes. A major challenge is to achieve high reaction rates with stable reagents. The optimization of alkynyl reagents has relied on two strategies: increasing strain and tuning electronics. In this chapter we report on the integration of these strategies through both computational and experimental analysis. A computational analysis suggested that a CH→N aryl substitution in dibenzocyclooctyne (DIBO) could be beneficial. In transition states, the nitrogen of 2-azabenzo-benzocyclooctyne (ABC) engages in an n→π* interaction with the C=O of α-azidoacetamides and forms a hydrogen bond with the N–H of α-diazoacetamides. These interactions act cooperatively with electronic activation of the strained π-bond to increase reactivity. Both ABC and DIBO are accessible in three steps by the alkylidene carbene-mediated ring expansion of commercial cycloheptanones. Our findings enhance the accessibility and utility of 1,3-dipolar cycloadditions and encourage further innovation.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141800</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nucleation kinetics in alkali amalgam vapors.</title>
<link>https://hdl.handle.net/1721.1/141129</link>
<description>Nucleation kinetics in alkali amalgam vapors.
Martinez-Sanchez, Manuel.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1973; Numbers 63-65 and 186 omitted in paging. Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141129</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Particle-hole models for closed-shell nuclei.</title>
<link>https://hdl.handle.net/1721.1/141128</link>
<description>Particle-hole models for closed-shell nuclei.
Iachello, F.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1969; Vita.; Bibliography: leaves 275-277.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141128</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Area-minimizing equivariant cones and coflat calibrations</title>
<link>https://hdl.handle.net/1721.1/141127</link>
<description>Area-minimizing equivariant cones and coflat calibrations
Cheng, Benny Ngo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1987; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141127</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fe-Al-O spinel : internal structure and electrical conduction.</title>
<link>https://hdl.handle.net/1721.1/141126</link>
<description>Fe-Al-O spinel : internal structure and electrical conduction.
Mason, Thomas Oliver.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141126</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recessive lethal amber suppressors in yeast</title>
<link>https://hdl.handle.net/1721.1/141124</link>
<description>Recessive lethal amber suppressors in yeast
Brandriss, Marjorie Carol.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1975; Vita.; Bibliography: leaves 114-122.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/141124</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically-inspired Structural Color: Material Design and Fabrication Strategies Drawn from Nature’s Color Palette</title>
<link>https://hdl.handle.net/1721.1/140995</link>
<description>Biologically-inspired Structural Color: Material Design and Fabrication Strategies Drawn from Nature’s Color Palette
Datta, Bianca C.
To harness the spectacular functionality found in nature, researchers have developed a multitude of biomimetic and bio-inspired techniques, each with strengths and constraints. A classic example is structural color, which abounds in nature, creating captivating visual displays from the brilliant plumage of the bird of paradise to the camouflage of the chameleon, with functional uses in mating, warning, communication, defense and more. Structural color provides a fascinating case study for exploring the role of material design on macroscale properties and can provide insights on animal evolution, photonic devices, human and animal communication, signaling, and art. These impressive effects result from interference and diffraction of light incident upon multilayer nanostructures, in which color is broadly tuned based on surface structure and geometry. Throughout the natural world, we see examples of clever, multifunctional features and solutions adapted to serve organisms in their local environments, interact with other living creatures, and maintain robustness and structural integrity over a lifetime. The processes and resulting hierarchical systems use few materials and demonstrate complex functional properties that inspire human-made engineered systems.&#13;
&#13;
This thesis provides tools and design methodologies for directing the design and fabrication of structurally-colored surfaces. First, we present methodologies based on computational inverse design for the formulation of nanostructures exhibiting structural coloration, and demonstrate prototype surfaces fabricated from these designs. Next, we employ self-assembly of colloidal particles as a versatile, low-cost approach for mimicking aspects of natural coloration. We explore the role of substrates in evaporative dynamics and the interplay between pigment and structure in pattern formation. We further examine the social implications of such work; as commercialization of structural color becomes more feasible, we have an opportunity to critically examine the social and environmental impacts and contributions of this field. Through this work, we aim to provide methods and tools for researchers to control color production and devise new structurally-colored surfaces with directed properties by presenting material building blocks and demonstrating their role in color production.&#13;
&#13;
This thesis provides a path towards expanding the palette of achievable colors and patterns through bio-inspired design techniques, and lends an understanding of the multitude of ways in which we can pattern, tune, and control factors that induce coloration. The benefits of such biomimetic nanostructures are plentiful: they provide brilliant, iridescent color with mechanical stability and light-steering capabilities. Structural color can be harnessed for long-lasting paints, fabrics, signaling and communication systems, and displays. The color changes achievable with these structures are intuitively interpretable by humans. We discuss the methods for control over material design to tune nano- and microscale structure and properties in order to achieve macroscale responses that humans can interact with in meaningful and interesting ways. Through this work, we aim to provide tools with which researchers can explore color through a material design lens.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140995</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>HOW TO GROW A SPACESHIP: A Hybrid Living Material (HLM) Framework for Developing Technological Interfaces to Complex Living Systems</title>
<link>https://hdl.handle.net/1721.1/140994</link>
<description>HOW TO GROW A SPACESHIP: A Hybrid Living Material (HLM) Framework for Developing Technological Interfaces to Complex Living Systems
Smith, Rachel Soo Hoo
Future-facing organizations forecast that our ability to design with biological systems—from genes, to bodies, to biospheres—will play a crucial role in meeting humanity’s goals for both planetary sustainability and long-term space exploration. Our foundational “spaceship” Earth is our only current form of sustained life support, and yet we anticipate that the regenerative functions of our large-scale living systems are failing due to the Anthropocene. Our ability to productively and reciprocally mediate with complex living systems presents an intricate, urgent problem that necessitates hyper-interdisciplinary tools and expertise. While there are growing examples of how engineers and designers have merged the principles of artificial technology with biological processes, even these efforts are dispersed across disparate foci. Thus, there is a need for a directive and an enabling structure to organize and assimilate tools and perspectives that guide research at the intersection of living systems and critical technology for the future. &#13;
&#13;
This dissertation establishes the field of Hybrid Living Materials (HLMs) as the study of how to interface human-designed systems with living materials, processes, and environments. In support of this goal, I define a conceptual HLM Framework that provides information-based hierarchical levels for HLM technology development: Materials, Modules, and Systems. To substantiate these level classifications, I present a corpus of projects that demonstrate novel approaches for coupling living systems to material fabrication processes, regulation programs, and architectural-ecosystem designs. Finally, to demonstrate the HLM Framework’s ability to address increasingly complex systems, I detail two “cross-level” research initiatives that integrate emergent phenomena exhibited by model organism groups: social bees and microbes. Results of these investigations produced data-rich, high-throughput, and interlinked tool suites that enable living systems to work in tandem with advanced analytical, computational, experimental, and collaborative technologies for cross-boundary information sharing and generation.&#13;
&#13;
This dissertation contributes a conceptual and practical basis for the cross-disciplinary development of hierarchically complex tools for mediation with living systems. The goal of this work is to shift our perception of technology such that humans and complex living systems may co-author mutually beneficial collaborations for joint survival on Earth and in space.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140994</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing for a new "ZIP code destiny"</title>
<link>https://hdl.handle.net/1721.1/140993</link>
<description>Designing for a new "ZIP code destiny"
Gillani, Nabeel
We live immersed in “cocoons”: tight-knit, segregated psychosocial units that shape who we encounter, the media we consume, what we believe, and ultimately, the opportunities we are able to access in order to positively shape ourselves, families, and communities.  These cocoons are fractal in nature, manifesting geographically as schools and neighborhoods segregated by race and income; as social networks that shape which media, mentors and role models we are exposed to; and even in our minds as cloistered concepts that spur biases and make us less welcoming of difference.  They are one of the reasons that America is the land of “ZIP code destinies”: the geographic and social contexts in which a child grows up often dramatically affect the opportunities they are able to capitalize on.&#13;
&#13;
Advances in social media and communications platforms were supposed to create new connective tissues between cocoons to enable a freer flow of knowledge and opportunity between disparate groups.  In some ways, this has happened, and many of these advances have also spun cocoons where marginalized groups can build solidarity and offer mutual support.  Yet these advances have also produced social media ecosystems that are highly fragmented, amplifying a priori preferences for which information to consume, and from whom.  They have also enabled those with various privileges to more easily access and act on information to obtain education, healthcare, jobs, and other critical resources.&#13;
&#13;
This dissertation explores how the analysis of data from digitally and physically-mediated social environments might help inform the design of new technologies to mitigate cocoons across two domains: politics and education.  We start with an analysis of political fragmentation on Twitter in the wake of the 2016 US Presidential Election.  This analysis motivates the design of a web application, Social Mirror, to probe how prompting social media users to reflect on their own “echo chambers” might help mitigate such fragmentation—which has become a crippling feature of US society, often impeding policies that could positively shape children’s futures.  &#13;
&#13;
Social fragmentation, of course, is not only rampant in our social media ecosystems: the neighborhoods in which many children grow up are fragmented by race and income, creating cocoons that impede access to quality role models, schools, and other educational opportunities.  First, we investigate existing neighborhood-level datasets detailing the importance of exposure to role models to inform the design of INSPIRE, a new video-based social network for middle schoolers to enhance exposure to role models as they start to think about their future aspirations.  Next, given the importance of schools and the role parents play in school choice—turning both to personal networks and online resources to inform their choices—we use recent advances in natural language processing to analyze parents’ reviews of schools posted online.  Our analyses, however, reveal that affluent parents are more likely to post reviews, and that reviews recapitulate well-documented racial and income disparities in education.  We use these insights to inform the design of EdMirror, a “community-sourcing” platform that seeks to surface less biased, more actionable insights from Boston Public Schools parents to other parents and school leaders in ways that might help spark sustainable, positive changes in schools.&#13;
&#13;
A central theme across these efforts is the role of prompted reflection and introspection as a potential mechanism for mitigating the biases and other psychological barriers that perpetuate cocoons.  The dissertation concludes by exploring how the analysis and design of communications platforms can inform new tools for reflection—i.e., “mirrors”—and how these mirrors might combine with other structural interventions (like policy change) to fuel designs for a new ZIP code destiny.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140993</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Human-Centered Optimality Criteria</title>
<link>https://hdl.handle.net/1721.1/140992</link>
<description>Towards Human-Centered Optimality Criteria
Ghandeharioun, Asma
Despite the transformational success of machine learning across various applications, examples of deployed models failing to recognize and support human-centered (HC) criteria are abundant. In this thesis, I conceptualize the space of human-machine collaboration with respect to two components: interpretation of people by machines and interpretation of machines by people. I develop several tools that make improvements along these axes.&#13;
&#13;
First, I develop a pipeline that predicts depressive symptoms rated by clinicians from real-world longitudinal data, outperforming several baselines. Second, I introduce a novel, model-agnostic, and dataset-agnostic method to approximate interactive human evaluation in open-domain dialog through self-play that is more strongly correlated with human evaluations than other automated metrics commonly used today. While dialog quality evaluation metrics predominantly use word-level overlap or distance metrics based on embedding resemblance to each turn of the conversation, I show the significance of taking into account the conversation's trajectory and using proxies such as sentiment, semantics, and user engagement that are psychologically motivated. Third, I demonstrate an uncertainty measurement technique that helps disambiguate annotator disagreement and data bias. I show that this characterization also improves model performance. Finally, I present a novel method that allows humans to investigate a predictor's decision-making process to gain better insight into how it works. The method jointly trains a generator, a discriminator, and a concept disentangler, allowing the human to ask "what-if" questions. I evaluate it on several challenging synthetic and realistic datasets where previous methods fall short of satisfying desirable criteria for interpretability and show that our method performs consistently well across all. I discuss its applications to detect potential biases of a classifier and identify spurious artifacts that impact predictions using simulated experiments.&#13;
&#13;
Together, these novel techniques and insights provide a more comprehensive interpretation of people by machines and more powerful tools for interpretation of machines by people that can move us closer to HC optimality.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140992</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems of Becoming: Mediating Dialogue between Nature and Design</title>
<link>https://hdl.handle.net/1721.1/140991</link>
<description>Systems of Becoming: Mediating Dialogue between Nature and Design
Gonçalves Marins Costa, João Pedro
The Human-Nature relationship has historically been a subject of study within numerous disciplines. From Descartes’ concept of the “animal machine” (1913) through Heidegger’s views on the abyss between human and nonhuman (1995), to the more recent studies of the “anthropological machine” by Agamben (2004) and the investigation of “companion species” by Haraway (2013), this dichotomy has assumed different forms and has been influential in how we both perceive the world and generate knowledge. In the field of Design, theoretical research has been accompanied by physical manifestations that attempt to challenge and redefine what it means to work with Nature in order to bridge this existing gap that we have created between ourselves and the nonhuman.&#13;
 &#13;
In this thesis, I conduct a theoretical analysis of design frameworks that have undertaken this endeavor and suggest an alternative to these current practices by (1) acknowledging the current condition of a set of organisms that includes silkworms, honey bees and harvester ants, and (2) understanding the needs and means necessary for the creation of a dialogue between biological systems and mechanical apparatuses. In order to address these conceptual inquiries, I describe the development of physical tools and mechanisms that are responsible for mediating and interacting with these biological systems. These artifacts are conceived in order to challenge the common anthropocentric approach that subjugates the nonhuman in order to obtain expected results, behaviors or materials from their interactions with humans.&#13;
 &#13;
I propose the investigation of a field within Design in which practices of becoming in parallel with machinic adjustments enable adaptation that occurs in a reciprocal manner between biological systems and mechanical apparatuses. Building on my previous work with silkworms, honey bees and ants, I show how the discipline of Design can and should shift away from the legacy of previous practices such as biodesign and biomimicry. In Design, mastering and tuning the variables of a process to exact specifications is necessary to produce a satisfactory outcome. Therefore, in pursuit of a shift in these requirements, I characterize a design approach we have developed in the MIT Media Lab’s Mediated Matter Group called templating and describe how it points towards a future in which the distance from human to nonhuman may be reduced, leading to what seems to be the next horizon in the field of Design—a movement towards co-creation and co-fabrication, entertaining improvisation not only as a goal but also as a means of research.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140991</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fake Feature and Valuation From Context</title>
<link>https://hdl.handle.net/1721.1/140990</link>
<description>Fake Feature and Valuation From Context
Bassi, Itai
This thesis offers a new account of a persisting puzzle in the theory of ellipsis and association with focus: the fact that φ-featural content on full DPs and on bound pronouns can sometimes be ignored in focus alternatives and in calculating identity for ellipsis (‘Fake Features’). I present new data about gender and number mismatch in ellipsis which proves difficult to model on existing approaches to fake features (e.g. Sauerland 2013; Sudo &amp; Spathas 2020). The heart of the proposal is a derivational theory of contentful φ-features: they do not, as usually assumed, enter a derivation from the lexicon with listed meanings (presuppositions) that constrain the denotation of their host DP; rather, they are inserted late in the derivation towards PF by a process called “Valuation from Context” (Kučerová 2018): the features are inserted based on the meaning of the DP in the (local or global) context of evaluation. Ellipsis identity and focus alternatives are computed off of the featureless representation. The theory assumes that the construction of local contexts for embedded constituents (Schlenker 2009) is blind to information encoded in focus alternatives. The account supports an architecture of grammar in which representations that are submitted to semantic interpretation (meaning-in-context) feed morphological valuation processes. It also implies that there is no substantial difference between “interpreted” and “uninterpreted” φ-features; in a sense, both are uninterpreted, the distinction being whether they are valued from context or from pieces in the structure.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140990</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Socially-Aware Machine Learning: Towards Leveraging the Relationship between Narrative Comprehension and Mentalizing</title>
<link>https://hdl.handle.net/1721.1/140987</link>
<description>Socially-Aware Machine Learning: Towards Leveraging the Relationship between Narrative Comprehension and Mentalizing
Vijayaraghavan, Prashanth
Narratives are the fundamental means by which people organize, understand, and explain the social world. Research suggests that exposure to narratives improves mentalizing, the capacity to forecast and reason about others' mental states. Simultaneously, enhanced mentalizing abilities are closely linked to exhibiting improved narrative processing skills. The purpose of this dissertation is to develop modular computational methods that leverage the relationship between mentalizing and narrative comprehension for understanding specific aspects of social-cognitive processes and seek to advance the research towards imparting social awareness to machines. Our work consists of three main functional modules. First, we present a representation learning approach that computes a social situational embedding of sentence-level social events. Next, we apply the learned social event representation to embed, infer and explain the characters' mental states from the narratives. Finally, we analyze some of the basic elements of narrative structure present in short personal narratives as a means of exemplifying the story understanding capability. In particular, we investigate the role of characters' cognitive tension captured using our inferred mental representation for automatically detecting the central conflict of a story, i.e., the climax and its resolution.  &#13;
&#13;
Unlike most previous work that either uses conventional trait-based models or exploits low-level annotations of short fixed-length stories, we tackle a subset of the data and modeling challenges directed at inferring human motives and emotional reactions. First, we construct a relatively open-ended corpus of personal narratives and commonsense knowledge from social media containing more variations in terms of topical content. Using this weakly annotated corpus, we train deep learning models that compute rich representations of social events capturing aspects of syntactic, semantic, and pragmatic properties and integrate them to generate textual explanations of motives and emotions of characters in the narrative. Empirically, our proposed approaches outperform several baselines in mental state tracking tasks and harness transferability to low-resource regimes and other downstream tasks. &#13;
&#13;
As a final contribution in this dissertation, we demonstrate improved narrative processing skills by computationally predicting key elements of narrative structure in personal narratives. Notably, our studies show that integrating the protagonist's mental state embeddings with linguistic information leads to the enhanced prediction of climax and resolution in narratives. Our data and modeling contributions emphasize the value of exploiting the mutual influence of mentalizing and narrative comprehension, thereby promoting future efforts towards building human-centered AI systems.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140987</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Matrix algorithms for bilinear estimation problems in chemometrics</title>
<link>https://hdl.handle.net/1721.1/140448</link>
<description>Matrix algorithms for bilinear estimation problems in chemometrics
Kim, Ryan Royce.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1985; Bibliography: leaves 202-206.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140448</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying and reducing the uncertainties in global contrail radiative forcing</title>
<link>https://hdl.handle.net/1721.1/140372</link>
<description>Quantifying and reducing the uncertainties in global contrail radiative forcing
Agarwal, Akshat
Condensation trails, or contrails, are line-shaped ice clouds that form in the exhaust plume of aircraft engines under sufficiently cold and humid conditions. They can persist for several hours, growing to be indistinguishable from natural cirrus clouds. Numerical simulations of contrails (including contrail-cirrus) have estimated the global, annual average, net radiative forcing (RF) to be 50 mW/m² in 2011, representing 2.3% of the RF due to all anthropogenic emissions. However, this estimate is uncertain, and model estimates vary by more than one order of magnitude. In this thesis, I address three major sources of uncertainty: the number of black carbon (BC) particles emitted by an engine, the accuracy of the meteorological datasets used for modeling contrails, and the approach to simulate the evolution of a contrail.&#13;
&#13;
The number of BC particles emitted by an aircraft engine is required to estimate the number of crystals that form in a contrail. Decreasing the number of crystals that form by 80% could reduce the contrail RF by 50%. The first part of this thesis develops an approach to estimate the number of particles emitted by an engine. Using two complementary datasets, I relate smoke number measurements to the BC mass concentration, quantify losses in the measurement system, and connect mass emissions to particle number emissions. The method is applied to existing BC measurements from the two datasets, achieving R² values of 0.80 and 0.82, respectively. Global BC emissions for all operations in 2015 were estimated to be 2.0 Gg/year (95% CI = 1.7–2.3) and 2.42 × 10²⁶ particles/year (95% CI = 1.58–3.81 × 10²⁶).&#13;
&#13;
Contrail formation is sensitive to the background atmospheric conditions, specifically the temperature and humidity. These are estimated from reanalysis models that assimilate observations with numerical estimates of the atmosphere. Upper tropospheric water vapor has been found to be overestimated in multiple reanalyses, but the effect on the formation of persistent contrails has not been quantified. In the second part of this thesis, I quantify the error in predicting the formation of persistent contrails using two reanalysis models: ERA5 and MERRA-2. Using data from 793,044 radiosondes, persistent contrails forming at cruise altitudes in 30°N – 60°N are overestimated by factors of 2.0 and 3.5 for ERA5 and MERRA-2, respectively. I also define the evaporation depth, which measures the depth to which a contrail can survive based on the available ice mass and is thus a measure of contrail lifetime. This metric is found to be overestimated by 17% in ERA5 and 45% in MERRA-2, suggesting the contrail lifetime is overestimated. Finally, the reanalyses incorrectly identify individual regions that could form persistent contrails 87% and 52% of the time, respectively. These results suggest that contrail models currently overestimate the number and lifetime of persistent contrails.&#13;
&#13;
Global contrail models simulate contrail evolution using simplified approaches. These may not capture important physical phenomena of the contrail, such as the size distribution of contrail ice particles, that more detailed simulations can. In the final part of this thesis, I use an intermediate-fidelity model, the Aircraft Plume Chemistry, Emissions, and Microphysics Model (APCEMM), to quantify the global RF using statistical inference, where samples are randomly drawn from the distance flown by aircraft in 2016. The global, annual average, net contrail instantaneous RF in 2016 was found to be 96 mW/m², within 6% of other literature estimates. The contrail energy forcing per unit distance flown was highest for flights over Western Europe, which was 39% and 66% higher than that for flights over the contiguous United States and South and East Asia, respectively. I also used the particle emissions approach as an input to APCEMM and found it had a correlation coefficient of 0.53 with the initial number of crystals that form in the contrail. In comparison, it had a smaller correlation coefficient of 0.10 with the energy forcing per contrail, suggesting that reducing particle number emissions may not have a strong effect on contrail RF. Finally, the effect of the overestimate in persistent contrail coverage (PCC) in the MERRA-2 data was studied and found to reduce the global, annual average, net RF by a factor of 2.8 from 96 mW/m² to 30 mW/m². This difference could have important implications in identifying the most important climate forcers to focus on to reduce aviation’s climate impact, but further research is required to quantify it more thoroughly.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140372</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Healthy Food Access and Consumption: Informing Interventions Through Analytics</title>
<link>https://hdl.handle.net/1721.1/140371</link>
<description>Healthy Food Access and Consumption: Informing Interventions Through Analytics
Paulson, Elisabeth
In the U.S., about 19 million people reside in low-income food deserts: neighborhoods where the majority of the population does not have access to large grocery stores. These areas are associated with less healthy diets and higher rates of poor health outcomes such as obesity and heart disease. Roughly 25% of cardiovascular-related deaths (140,000 deaths per year) in the U.S. can be attributed to low fruit and vegetable intake. Thus, increasing consumption of fruits and vegetables among underserved communities is a key priority of policymakers. At the same time, over half of the produce that is grown in the U.S. each year is wasted. Unfortunately, connecting surplus produce to households in need is a difficult task. This thesis uses advanced analytics to inform public interventions and supply chain design with the goal of increasing access to, and consumption of, fresh produce, particularly among underserved communities.&#13;
&#13;
Chapters 2, 3, and 4 focus on understanding the impact of, and optimizing, consumer-level interventions for increasing fruit and vegetable consumption among low-income households. First, we perform an empirical analysis in order to understand which interventions are most effective as a function of household attributes (Chapter 2). Based on these empirical findings, we develop a novel consumer behavioral model of grocery shopping dynamics, which is nested into a bi-level optimization model for determining the government’s optimal investments across three different types of food policy interventions: access, education, and price-related interventions (Chapter 3). Although this model is developed at an individual household level, we also discuss designing interventions for groups of individuals.&#13;
&#13;
Chapter 4 generalizes the idea of group-level interventions by defining a new problem in which a service provider must determine the optimal bundles of products or services to offer its users while meeting an individual-level fairness constraint. This problem arises in settings such as healthcare and public policy (where services can be thought of as interventions or treatments), as well as retail settings in which fair outcome guarantees are desirable. We present two approximation algorithms for solving this problem.&#13;
&#13;
Lastly, Chapter 5 proposes and analyzes a new supply chain management intervention that increases efficiency in perishable food supply chains. This chapter studies a supply chain with multiple retailers who practice dual sourcing and compete with each other for supply, but do not have a priori visibility into the inventory distributions of their suppliers. A new downstream information sharing scheme is proposed that results in better ordering decisions that benefit the entire supply chain while decreasing food waste.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140371</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral Models for Air Transportation Networks</title>
<link>https://hdl.handle.net/1721.1/140370</link>
<description>Spectral Models for Air Transportation Networks
Li, Max Zhaoyu
Events ranging from natural disasters to power outages, even geographically localized ones, often result in widespread disruptions across the air transportation network. In order to engineer resilience and design better proactive mitigation strategies, it is important to identify, characterize, and control the effects of such disruptions. A more resilient and well-prepared air transportation system directly translates to mitigated delay costs and increased service quality. &#13;
&#13;
Current delay performance metrics reflect only the magnitude of incurred flight delays at airports. In the first half of the thesis, we show that it is also important to consider the spatial distribution of delays across a network of airports. We analyze graph-supported signals, leveraging techniques from spectral graph theory and graph signal processing to compute analytical and simulation-driven bounds for identifying outliers in the spatial distribution of delays. We then apply these methods to analyze US airport delays from 2008 through 2017. We also perform an airline-specific analysis, deriving insights into the delay dynamics of individual airline sub-networks. We highlight key differences in delay dynamics between different types of disruptions, ranging from nor’easters and hurricanes to airport outages. We also examine delay interactions between airline sub-networks and the system-wide network, as well as compile an inventory of outlier days. This inventory could guide future aviation system planning efforts and research. We demonstrate the generalizability of this outlier identification and characterization framework through a comparative analysis of US and Chinese airport networks.&#13;
&#13;
After establishing the framework of modeling and analyzing airport delays as graph-supported signals, in the second half of the thesis we focus on two applications enabled by this framework: Examining commonly-occurring disruption-recovery cycles in the US airport network, and proposing an approximate network control scheme. With regard to the first application, we study these disruption and recovery cycles through a state-space representation that captures the severity and spatial impact of airport delays. In particular, using US airport delay data from 2008-2017, we first identify representative disruption and recovery cycles. These representative cycles provide insights into the common operational patterns of disruptions and recoveries in the system. We also relate these representative cycles to specific off-nominal events such as airport outages, and elucidate the differing disruption-recovery pathways for various off-nominal events. Finally, we explore temporal trends in terms of when and how the system tends to be disrupted and subsequently recovers. For the second application, we consider the problem of designing control strategies for high-dimensional systems that lack a detailed model. To do so, we leverage the ability of copulas to represent dependent structures in high-dimensional data, and approximate the state space of airport delays through inverse sampling. We demonstrate the use of the control policies obtained from our methodology through a case study of controlling flight delays within the US air transportation network. &#13;
&#13;
We conclude this thesis with some directions for future work, an example of which is a new hierarchical approach towards air traffic management procedures such as airport ground holding. We also comment briefly on the applicability of the methods developed in this thesis for other transportation and networked systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140370</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep elastic strain engineering of materials electronic properties by machine learning</title>
<link>https://hdl.handle.net/1721.1/140369</link>
<description>Deep elastic strain engineering of materials electronic properties by machine learning
Shi, Zhe
The introduction of elastic strains has become an appealing strategy for providing unique and exciting electronic properties in nanostructured materials. Recent successes in diamond and silicon deformation experiments further extended the applicable strain levels in these materials, heralding a new stage of elastic strain engineering of semiconductor fundamental electronic properties and device performance. However, it is generally difficult to know from experiments how much an electronic property changes for materials undergoing bending or even uniaxial tension, let alone to design the optimal combination of functional properties for the material in a vast and more complex six-dimensional (6D) strain space. The complexity of controllably engineering materials properties in such a 6D strain space necessitates high-fidelity, high-efficiency computer screening for a desirable figure-of-merit and then designing a proper straining pathway to guide future experiments.&#13;
&#13;
To address this challenge, we developed in this thesis a general framework that combines machine learning and a limited amount of ab initio calculations to guide strain engineering whereby basic electronic properties are designed. Our method invokes deep neural networks, convolutional neural networks, data fusion, and active learning algorithms, allowing for accurate and efficient prediction of strain-dependent fundamental electronic properties such as band structure, bandgap, band extrema location, and effective mass, as well as other properties with minor modifications. It is also used for discovering the indirect-to-direct bandgap transition that would benefit photon emission and absorption in a semiconductor such as silicon, by scanning the entire strain space.&#13;
&#13;
Integrating this method with finite-element simulations, we predicted energy-efficient strain pathways that would reversibly transform an ultrawide-bandgap material such as diamond to a metalized state in an experimentally feasible geometry. The fast and reliable inference of the proposed framework opens a path beyond analyzing and scrutinizing electronic band structures. In particular, an application of this framework in the studies of phonon band structure and phonon stability of diamond yielded a visualization and theoretical understanding of the deep elastic strain engineering boundary in the vast 6D strain space. We also applied the machine learning models to investigate the strain-induced variations of defect ionization energy and predicted deep-to-shallow defect level transition in diamond, offering a theoretical possibility to make strain-controlled switchable devices with doped diamond.&#13;
&#13;
We illustrate the applications of the method with results for silicon and diamond, although the general technique presented here is potentially useful for optimizing figures-of-merit for a variety of semiconductors, providing guidance for experimentally tailoring materials properties via deep elastic strain engineering for electronic, photonic, and energy applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140369</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Worker Productivity and Labor Supply</title>
<link>https://hdl.handle.net/1721.1/140368</link>
<description>Essays on Worker Productivity and Labor Supply
Bessone Tepedino, Pedro
This thesis is a collection of essays exploring factors that affect developing country workers’ productivity, well-being and labor supply. The first essay explores how workers’ labor supply is affected by transitory income shocks. In neoclassical models, transitory income shocks should not affect labor supply. This prediction has often been rejected empirically in favor of theories featuring reference-dependent preferences. We show that apparent negative daily income effects can be generated in a dynamic neoclassical model of labor supply by dynamic selection, where income early in the day causes differential attrition throughout workers’ shifts. Using data from an RCT with rich experimental variation in income and fine measures of labor supply, we show that estimates of negative income effects are an artifact of dynamic selection in this setting, providing a neoclassical explanation to the findings of the income targeting literature.&#13;
&#13;
The second essay seeks to answer how alleviating sleep deprivation among poor urban workers affects work outcomes, well-being, and decision-making. The urban poor in developing countries face challenging living environments, which may interfere with good sleep. Using actigraphy to measure sleep objectively, we find that low-income adults in Chennai, India sleep only 5.5 hours per night on average despite spending 8 hours in bed. Their sleep is highly interrupted, with sleep efficiency—sleep per time in bed—comparable to those with disorders such as sleep apnea or insomnia. A randomized three-week treatment providing information, encouragement, and improvements to home sleep environments increased sleep duration by 27 minutes per night by inducing more time in bed. Contrary to expert predictions and a large body of sleep research, increased nighttime sleep had no detectable effects on cognition, productivity, decision-making, or well-being, and led to small decreases in labor supply. In contrast, short afternoon naps at the workplace improved an overall index of outcomes by 0.12 standard deviations, with significant increases in productivity, psychological well-being, and cognition, but a decrease in work time.&#13;
&#13;
The third essay explores the effect of optimally assigning workers to teams and to tasks on their productivity. Governments regularly assign bureaucrats to teams and tasks, yet rarely with the explicit goal of boosting public sector productivity. This paper asks whether a low-capacity government can increase tax revenue by optimally assigning tax collectors to teams and these teams to households. We study these questions in the context of a property tax campaign in the DRC, where randomly formed teams of tax collectors were randomly assigned to work in different neighborhoods. Due to complementarities in collectors’ ability, the optimal assignment policy consists of assigning high-ability collectors to each other and assigning high-ability teams of collectors to households with high payment propensity. This optimal assignment policy would result in a 36% increase in tax compliance. In contrast, the government would have to replace 62% of low-ability collectors with high-ability ones to achieve a similar increase in compliance. We provide suggestive evidence that the complementarity between high-ability collectors is explained by skill transmission between collectors rather than peer pressure (or motivation) to increase effort. JEL Codes: D9, J2, O1
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140368</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lipschitz homotopies of mappings from 3-sphere to 2-sphere</title>
<link>https://hdl.handle.net/1721.1/140367</link>
<description>Lipschitz homotopies of mappings from 3-sphere to 2-sphere
Berdnikov, Aleksandr
This work focuses on an important step in quantitative topology: given homotopic mappings from S^m to S^n of Lipschitz constant L, build the (asymptotically) simplest homotopy between them (meaning the one with the least Lipschitz constant). The present work resolves this problem for the first case where the Hopf invariant plays a role: m = 3, n = 2, constructing a homotopy with Lipschitz constant O(L).
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140367</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulatory Frameworks and Evaluation Methodologies for the Licensing of Commercial Fusion Reactors</title>
<link>https://hdl.handle.net/1721.1/140366</link>
<description>Regulatory Frameworks and Evaluation Methodologies for the Licensing of Commercial Fusion Reactors
White, Robert Patrick
Private companies hoping to deploy commercial fusion facilities in the next two decades will encounter a variety of technical, social, and economic challenges. These companies will also need to assess and develop appropriate technology regulation.&#13;
&#13;
The under-regulation, over-regulation, or mis-regulation of a new technology could jeopardize long-term commercial deployment opportunities. Timely assessment and development of appropriate regulatory requirements are critical to the success of commercial fusion technology in the next two decades. The assessment and development of regulatory requirements for new technologies, however, is often based on prior operating experience or the regulation of similar technologies. The applicability of these assessment and development methods is restricted for commercial fusion facilities by a variety of factors, including the wide variety of fusion technologies currently under development, the preliminary nature of commercial design efforts, and the limited characterization of commercial fusion facility concept of operations.&#13;
&#13;
This work presents an initial comprehensive approach to the assessment and development of appropriate regulatory requirements for commercial fusion technology. Models and methods based on the fundamental hazards of a technology are utilized to help examine the licensing and regulation of novel technologies and provide insights on how to more effectively assess and develop regulatory requirements. The different licensing evaluation methods and regulatory frameworks are developed and presented to provide insights on the impact of these regulatory decisions on the design constraints and regulatory burden for commercial fusion technology.&#13;
&#13;
Specific insights are given on the selection of licensing evaluation methods and regulatory frameworks from this work. Licensing evaluation related insights include the incompatibility of large tritium inventories with low regulatory burden licensing evaluation methods, the design benefits and regulatory burden drawbacks of crediting engineering safety features in the licensing of fusion facilities, and potential advantages of utilizing System-Theoretic Process Analysis (STPA) in the development of operational requirements for novel, complex systems such as commercial fusion. Regulatory framework related insights include the potential applicability of a delegated review regulatory framework (similar to commercial aviation) to commercial fusion, the potential economic costs of a new minimal regulation system based on a new strict liability insurance framework, and the development advantages of a new cooperative operational characterization regulatory framework for novel technologies such as commercial fusion.&#13;
&#13;
The methods and models described in this work are intended to help regulators and industry evaluate the hazards of commercial fusion facilities and select licensing evaluation methods and regulatory frameworks that satisfy the social and economic constraints on commercial fusion facilities. Regulation is often viewed as inhibiting innovation, but the proactive development of regulatory requirements using a comprehensive hazard-based approach can help maintain social license for fusion technology, facilitate safe operation, and create a stable regulatory environment that will help foster the successful commercial development and deployment of fusion facilities for clean energy production.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140366</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Laser Micromachining for Correction of Thin Optics for Next Generation X-ray Space Telescopes</title>
<link>https://hdl.handle.net/1721.1/140364</link>
<description>Ultrafast Laser Micromachining for Correction of Thin Optics for Next Generation X-ray Space Telescopes
Zuo, Heng E.
The central theme of this thesis is the application of ultrafast laser micromachining technology to the correction of thin-shell lightweight high-resolution mirrors for future high-performance X-ray space telescopes. The existing fabrication methods cannot achieve the required accuracy for individual mirror segments, and the highly reflective coatings could distort the mirror figures beyond acceptable tolerance. As mirrors become thinner and more pliant, the need for precise and high-throughput figuring methods of thin mirrors becomes more imminent. &#13;
&#13;
The main contribution of this thesis is the development of two unique approaches that combine ultrafast laser micromachining technology with a stress-based figure correction technique to correct for figure errors and coating distortions in thin mirrors. Rapid developments of ultrafast laser technologies have enabled high accuracy laser material processing and structuring on micron scales. By using simple optical setups with scanning X-Y stages, I showed that both equibiaxial and general biaxial stress fields can be generated with laser micromachined features in thin mirrors. The influences of various micromachining parameters have been examined to establish the effectiveness of the approach. A multi-pass correction scheme is proposed and demonstrated, where a feedback loop is implemented to induce controlled bending and reduce figure errors repeatedly. In addition, a finite element model is built to simulate the bending and stresses in thin mirrors using the stressed film patterning method with ultrafast laser micromachining, and it has achieved results comparable to the experiments. Further, the breaking strengths of the mirrors treated with ultrafast laser micromachining have been evaluated, demonstrating that the proposed approaches are viable methods for processing optics for space applications.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140364</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Barrier Island Groundwater Dynamics</title>
<link>https://hdl.handle.net/1721.1/140363</link>
<description>Barrier Island Groundwater Dynamics
Housego, Rachel Mary
Nearly 1.5 million people inhabit barrier islands along the U.S. Atlantic and Gulf Coasts, and coastal groundwater dynamics influence the availability of freshwater, ecosystem health, pollutant transport, and flooding in these densely populated communities. However, groundwater dynamics, including the aquifer head distribution and subsurface salinity structure, in coastal aquifers are affected by multiple environmental forcings, such as waves, tides, storm surges, and precipitation that act on a variety of spatial and temporal scales, making coastal groundwater dynamics complex and difficult to predict. &#13;
&#13;
Here, measurements of groundwater heads, salinities, and temperatures collected for 3 years across a 550-m-wide barrier island are used in conjunction with observations of ocean tides, surge, waves, sound level, and rainfall to characterize the dynamics of the surface aquifer. Infiltration from surge, tides, and waves during storms caused up to 2 m increases in the groundwater level under the dune. The head gradients owing to these storm-induced groundwater bulges suggest flows become inland directed on the ocean-side of the island during storms. An upper saline plume (20-30 PSU) was observed above fresher (10 PSU) water up to 30 m inland of the dune face, which was the maximum wave runup location. Differences in inland propagation between tidal- and storm-induced groundwater head fluctuations are explained using analytical theories for intermediate depth aquifers. Additionally, a separate analytical water-table evolution model driven with estimated ocean shoreline water levels (based on the 36-hr-averaged offshore tide, surge, and wave height) and measured precipitation is validated by citizen-science flood reports and predicts the maximum water-table height within 0.1 m of the observed levels across the barrier island.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140363</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stability Methods for Regulated Loads</title>
<link>https://hdl.handle.net/1721.1/140361</link>
<description>Stability Methods for Regulated Loads
Gutierrez, Manuel
Power electronic circuits often regulate load power and present a constant power profile to the utility or other electrical source. These constant power loads (CPLs) therefore exhibit a negative incremental input impedance and pose stability challenges when present in either dc or ac systems. This thesis presents CPL design techniques to mitigate these issues, as well as tools to determine in which applications CPL instabilities may occur. &#13;
&#13;
For stability analysis of a dc distribution system, an equivalent circuit model for limited-bandwidth CPLs is presented. This model quantifies the intrinsic damping properties of regulated converters and how they relate to the control bandwidth. A control architecture based on this equivalent circuit model is then analyzed. The control scheme limits the destabilizing impact of CPL operation by adding a lossless internal damping component and shaping the effective input impedance to resemble that of a CPL operating under reduced control bandwidth. An intermediate, internal energy buffer is used to support high-bandwidth output load regulation while enabling the programmable or selectable input impedance on desired time scales. The effectiveness of this strategy on system stability is shown using a stability analysis and experimental results. In addition, for a collection of loads on a dc distribution system, a participation factor analysis is presented which can help identify likely sources of instability. Finally, an ac application is presented in which an adaptation of the previously discussed control architecture can be used to promote stable interaction between ac CPLs implementing power factor correction and an ac source.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140361</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Micro-/Nano-Structures for Electromechanical Actuation</title>
<link>https://hdl.handle.net/1721.1/140359</link>
<description>Active Micro-/Nano-Structures for Electromechanical Actuation
Han, Jinchi
The design and implementation of micro-/nano-structures as active components has become an increasingly popular scheme for the development of electronic devices and systems in pursuit of unprecedented device performance and unique system functionality. In particular, cooperative actuation of active structures assembled in parallel results in emerging concepts of electromechanical devices and systems with tunable characteristics enabled by exploiting all the degrees of design freedom available to the active components. Engineering such micro-/nano-structures requires precise and scalable fabrication with resolution down to the nanometer-scale, often beyond the capabilities of conventional processing techniques. Exploring these opportunities could contribute substantially to the understanding of fundamental physical phenomena and the emergence of novel device structures and concepts, as well as pushing the frontier of nanoscale fabrication and metrology.&#13;
&#13;
In this thesis, challenges and opportunities in engineering active micro-/nano-structures for electromechanical actuation are explored through two case studies, a tunneling nanoswitch and an acoustically-active surface, which aim to present paradigms for high-performance electromechanical devices and systems designed based on cooperating active micro-/nano-structures.&#13;
&#13;
The tunneling nanoswitch utilizes molecules as active nanoscale springs. An ensemble of such molecular springs takes the form of a self-assembled molecular layer sandwiched between ultra-smooth bottom electrodes and an active top nanoparticle contact. This nanoswitch operates by electromechanical modulation of the current tunneling through the nanometer-scale molecular switch gap. This unique mechanism enables the device to demonstrate a low turn-on voltage (under 3 V) and a short delay (2 ns) simultaneously, which are among the critical challenges facing nanoelectromechanical (NEM) switches. Significantly, the molecular layer and the top nanoparticle contact serve as two degrees of design freedom with which to independently tailor static and dynamic device characteristics, thereby enabling a path towards sub-1-V switching in the GHz regime for electromechanical logic.&#13;
&#13;
The acoustically-active surface depends on widely distributed, microstructured piezoelectric transducers as active components. One example of such an acoustic surface is a PVDF film embossed with an array of active micro-domes. These freestanding micro-domes, actuating in parallel, significantly enhance the acoustic performance and allow our acoustic surface to achieve a unit-area sensitivity of 0.2075 mPa/(V∙cm²), far outperforming existing flexible loudspeakers. The acoustic response can be further improved by engineering the profile and dimensions of the micro-domes. In addition, coordinating actuation of the micro-domes based on adaptive amplitude and phase control could enable directional audible sound generation, a capability currently unavailable from standalone commercial loudspeakers. The outstanding acoustic performance, attractive features (wide-area, thin, flexible, low-cost and even transparent), and unique functionality make the acoustic surface promising for broad emerging application scenarios.&#13;
&#13;
Creating high-performance active structures in these case studies has motivated our development of novel processing techniques, including scalable manipulation of nanomaterials and low-cost, high-precision micro-embossing of polymer thin films. The highly uniform, mechanically tunable molecular junctions and the wide-area, phased acoustic micro-transducer array provide promising platforms for scientific studies of fundamental physical phenomena and innovations in diverse electronic devices and systems.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140359</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organometallic Palladium Reagents for Polypeptide Bioconjugation and Macrocyclization</title>
<link>https://hdl.handle.net/1721.1/140354</link>
<description>Organometallic Palladium Reagents for Polypeptide Bioconjugation and Macrocyclization
Mallek, Aaron J.
Chapter 1: Bicyclic peptides have been developed to mimic strong binders that structurally resemble antibody mimetics. The identity of the linker plays a critical role in enforcing the structure of the cyclic entity. Palladium-mediated arylation has enabled cysteine-specific modifications with a number of previously inaccessible aryl groups. Mono-Pd oxidative addition complexes (OACs) derived from 1,3,5-triiodobenzene enable iterative C–S arylation around the benzene scaffold through regeneration of the active palladium site. This reagent is successfully used to generate a highly rigid, benzene-linked bicyclic variant of a human plasma kallikrein inhibitor. A nearly 5000-fold loss in inhibitory activity demonstrates the potential difference in bioactivity accessed by this chemical change. In addition to potential application as a bicyclic linker, peptide cross-linking is achieved at high conversion using peptides containing a single cysteine. Furthermore, linear and cyclic peptide OACs are accessible through non-exhaustive C–S arylation, leaving the active Pd(II) site intact for future functionalization to small molecules or other biomolecules.&#13;
&#13;
Chapter 2: Macrocyclic peptides hold immense potential for targeting “undruggable” protein-protein interactions as their expanded surface areas enable increasingly complex surface interactions. Unlike biologics, however, peptides are highly synthetically accessible and may be readily modified to enhance binding interactions, proteolytic stability, and intracellular access. High-throughput techniques such as mRNA and phage display have been instrumental for discovering new peptidic protein ligands; however, these techniques are limited with respect to incorporation of non-canonical amino acids. Synthetic libraries used in solution-phase affinity selection-mass spectrometry (AS-MS) enable the use of nearly limitless non-canonical residues; however, this approach is not compatible with macrocyclic peptides. Herein, we target cysteine-linked macrocycles for mild oxidation to the sulfoxide and subsequent base-promoted elimination to linearize the macrocycle and form a dehydroalanine adduct for further modification. This approach is used to successfully linearize stapled peptides and aryl/alkyl-linked bicyclic peptides. Furthermore, the protocol maintained sequencing integrity even with as little as 50 pg of peptide, demonstrating viability for use with synthetic libraries in affinity selection-mass spectrometry.&#13;
&#13;
Chapter 3: The selective N-arylation of p-aminophenylalanine in polypeptides with pre-formed palladium oxidative addition complexes is described. The depressed pKa of the aniline NH2 group enables chemoselective C−N bond formation on peptides containing multiple other aliphatic amino groups at lysines or the N-terminus via Curtin-Hammett control under mild conditions. Using palladium complexes derived from electron-poor aryl halides, p-aminophenylalanine is fully arylated in aqueous buffer in as little as one hour at micromolar concentrations. A complementary protocol using the non-nucleophilic organic base 1,5-diazabicyclo(4.3.0)non-5-ene (DBN) expands the substrate scope to tolerate electron-rich functional groups, providing up to 97% conversion. These procedures enable the chemoselective conjugation of functionally diverse small molecule pharmaceuticals to p-aminophenylalanine-containing derivatives of cell-penetrating peptides.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140354</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of the electrodeposition behavior of submicrogram quantities of silver and manganese</title>
<link>https://hdl.handle.net/1721.1/140341</link>
<description>Studies of the electrodeposition behavior of submicrogram quantities of silver and manganese
Byrne, John Thomas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1951; Vita.; Bibliography: leaves 141-143.
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140341</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pressure drops through bubble cap plates</title>
<link>https://hdl.handle.net/1721.1/140274</link>
<description>Pressure drops through bubble cap plates
Dauphiné, Thonet Charles.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1939; Vita.; Includes bibliographical references (leaves 320-321).
</description>
<pubDate>Sun, 01 Jan 1939 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140274</guid>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The carbon-hydrogen system at temperatures above 2500⁰C</title>
<link>https://hdl.handle.net/1721.1/140272</link>
<description>The carbon-hydrogen system at temperatures above 2500⁰C
Iwasyk, John M.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Vita.; Includes bibliographical references (leaves 180-181).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140272</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental study of a 200-300 GHz megawatt gyrotron oscillator</title>
<link>https://hdl.handle.net/1721.1/140267</link>
<description>Experimental study of a 200-300 GHz megawatt gyrotron oscillator
Grimm, Terry L. (Terry Lee)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1992; Includes bibliographical references (leaves 168-171).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140267</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smoldering combustion of flexible polyurethane foam</title>
<link>https://hdl.handle.net/1721.1/140266</link>
<description>Smoldering combustion of flexible polyurethane foam
Ortiz Molina, Marcos German.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140266</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total synthesis of ḏḻ-camptothecin.</title>
<link>https://hdl.handle.net/1721.1/140265</link>
<description>Total synthesis of ḏḻ-camptothecin.
Bradley, Joel Chandler.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1975; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140265</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing the Collective Behavior of Earthquakes to Understand Fault Mechanisms Better</title>
<link>https://hdl.handle.net/1721.1/140201</link>
<description>Analyzing the Collective Behavior of Earthquakes to Understand Fault Mechanisms Better
Beaucé, Eric
The Gutenberg-Richter law tells us that there is a tenfold increase in the number of earthquakes of magnitude &#119898; &gt; &#119872; when &#119872; decreases by one unit. Thus, the vast majority of earthquakes occur at magnitudes so small that the vibrations they cause can barely be recorded at Earth's surface. Given that earthquakes are the symptoms of motion on faults, observing small earthquakes provides valuable information about fault mechanisms. In this thesis, not only do I focus on studying small-to-moderate-size earthquakes (M &lt; 4), but I also study properties that emerge when many of these earthquakes interact. Many of my conclusions are drawn from observations of earthquake temporal clustering.&#13;
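The tenfold scaling described above is the standard form of the Gutenberg-Richter relation, which can be written (a and b are empirical constants; b near 1 gives the factor-of-ten increase per unit decrease in magnitude):&#13;

```latex
\log_{10} N(m > M) = a - b\,M
```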
&#13;
I present the automatic earthquake detection and location method that I developed to collect the time and space coordinates of as many earthquakes as possible, and I base all subsequent analyses on these data. My investigations covered two study regions: the Southwestern Alps, and the western section of the North Anatolian Fault that last broke in August 1999. In both studies, I demonstrate how different fault systems produce seismicity with different temporal clustering properties. Observations of temporal clustering describe seismicity patterns between two end-members: swarm-like seismicity, with little inter-event triggering, and cascade-like seismicity, with strong earthquake interaction.&#13;
&#13;
Temporal clustering and the analysis of earthquake source characteristics in the Southwestern Alps helped explain differences in fault mechanisms in the two most active areas of the study region. My results also point towards non-self-similar earthquakes. Along the North Anatolian Fault, in addition to temporal clustering, I analyzed the earthquake focal mechanisms, used them to infer the state of stress in the fault zone, and thus provided a comprehensive description of the study region. A major conclusion of this study is that strongly time-clustered seismicity developed in normal fault systems several years after the 1999 Izmit earthquake, and may indicate the interplay between seismic and aseismic slip on these faults.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140201</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of anti-tumor T cell specificities to inform engineering of antigen-targeted immunotherapies</title>
<link>https://hdl.handle.net/1721.1/140200</link>
<description>Characterization of anti-tumor T cell specificities to inform engineering of antigen-targeted immunotherapies
Grace, Elizabeth E.
Over the past several decades, advances in immunotherapy have revolutionized cancer treatment. Immune checkpoint blockade has resulted in durable responses for some patients, but others have not seen the same benefits. T cells are essential to the success of many immunotherapies, as their receptors can recognize peptide antigens presented by major histocompatibility complexes (MHCs); productive recognition of antigens displayed by tumors results in T cell-mediated killing of the tumor cells. However, it is not always known what antigens are being recognized by cytotoxic T cells. Thus, there are many avenues of research being pursued to broaden and improve responses to cancer therapy, including T cell antigen identification and the development of combination immunotherapies. &#13;
&#13;
In this thesis, we characterized the CD8+ T cell response to murine melanoma following combination immunotherapy. Sequencing of tumor-infiltrating T cells revealed a set of clones with strikingly similar T cell receptors (TCRs) that were particularly expanded in treated mice and likely recognized a shared antigen. We identified the antigen recognized by these clones as the p15E peptide from the envelope protein of murine leukemia virus, an endogenous retrovirus. When vaccination against this peptide failed to raise a protective T cell response in vivo, we utilized yeast displayed peptide-MHC libraries to identify mimotopes (peptide mimics) of the p15E antigen. Several mimotopes were found to exhibit increased affinity or activity for the T cell clones. Vaccination of mice with mimotope peptides induced a significant expansion of CD8+ T cells cross-reactive to the p15E antigen and resulted in delayed tumor growth. Together, this work demonstrates the identification of a dominant tumor-associated antigen and the targeting of mimotopes of this antigen in a prophylactic vaccine setting to improve the anti-tumor immune response.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140200</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sharing Vulcan’s Secrets: Why States Disclose Details of Advanced Military Technology to Other States</title>
<link>https://hdl.handle.net/1721.1/140198</link>
<description>Sharing Vulcan’s Secrets: Why States Disclose Details of Advanced Military Technology to Other States
Sand, Erik Andrew Hustad
Why do technologically advanced states share cutting-edge military technology – the capability to produce weapons – with other states? Technology is a key component of national power, so sharing technology means making other states more powerful. I identify technology sharing as a unique form of interstate security assistance. Unlike alliances and arms sales, states cannot claw back the capability technology provides if relations with a recipient worsen. As a result, technology sharing’s consequences last for as long as the transferred technology remains relevant.&#13;
&#13;
I create a typology of technology sharing policies based on the ease and breadth of technology transfer they facilitate and explain choices amongst these policies with an original theory called Threats Over Time Theory (TOTT). TOTT predicts that decisionmakers share technology when they face severe threats – to either the survival of their state or the organization that they lead. When such threats exist, decisionmakers adjust the liberalness of their desired technology sharing policy based on two factors: the likelihood that a future adversary may gain the technology because of the sharing – either through a leak or because the recipient itself becomes an adversary – and the speed at which the shared technology is likely to become obsolete.&#13;
&#13;
I test TOTT using cases during and between the World Wars – the most recent previous period of multipolar international competition. Using more than 40,000 pages of archival documents, I examine British and American decisions to share technology with each other, Japan, and the Soviet Union. In the process, I produce new or updated histories of these technology transfers.&#13;
&#13;
The findings have implications for scholars’ understanding of how decisionmakers make choices with costs and benefits that vary across time, trade off between relative and absolute gains, and prioritize state versus organizational interests. They also provide insight into how policymakers can weigh the risks and benefits of technology transfer.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140198</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the interaction of portmanteaux and ellipsis</title>
<link>https://hdl.handle.net/1721.1/140195</link>
<description>On the interaction of portmanteaux and ellipsis
Banerjee, Neil
This thesis is an investigation of what happens when an ellipsis boundary tries to split a portmanteau. It focuses on two patterns: elliptical indivisibility and elliptical divisibility. Elliptical indivisibility is exemplified by the Hungarian portmanteau negative 3rd person indicative copula, which is pronounced in its entirety even when the ellipsis site is thought to contain the copula. Elliptical divisibility is exemplified by the Bengali portmanteau negative perfect, which splits into a default sentential negation and a silent perfect when the complement of negation is elided. Investigations of both ellipsis sites show evidence for complex unpronounced structure, suggesting that the variation in elliptical (in)divisibility arises not from having different kinds of ellipsis operations, but from different portmanteau-forming operations. I propose that indivisible portmanteaux are the result of either fusion or non-terminal insertion in a Late Insertion model of the postsyntax, while divisible portmanteaux are the result of two cases of contextual allomorphy.&#13;
&#13;
From the study of Hungarian elliptical indivisibility, we learn that ellipsis sites must be post-syntactically accessible, at least to some morphological operations, since portmanteau formation in Hungarian is shown to be post-syntactic. From the study of Bengali elliptical divisibility, we learn that locality and directionality restrictions on contextual allomorphy must be loose enough to allow allomorphy to be triggered by non-local inward sensitivity to morphosyntactic features. From the synthesis of the two case studies, we learn that to successfully model both elliptical divisibility and indivisibility, a single ellipsis silencing mechanism is sufficient, as long as different portmanteau-forming operations are used.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140195</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An efficient algorithm for sensitivity analysis of chaotic systems</title>
<link>https://hdl.handle.net/1721.1/140194</link>
<description>An efficient algorithm for sensitivity analysis of chaotic systems
Chandramoorthy, Nisha
How does long-term chaotic behavior respond to small parameter perturbations? Using detailed models, chaotic systems are frequently simulated across disciplines – from climate science to astrophysics. But an efficient computation of parametric derivatives of their statistics or long-term averages, also known as linear response, is an open problem. The difficulty is due to an inherent feature of chaos: an exponential growth over time of infinitesimal perturbations, which renders conventional methods for sensitivity computation inapplicable. More sophisticated recent approaches, including ensemble-based and shadowing-based methods, are either computationally impractical or lack convergence guarantees. We propose a novel alternative known as space-split sensitivity, or S3, which evaluates linear response as an efficiently computable, provably convergent ergodic average. The main contribution of this thesis is the development of the S3 algorithm for uniformly hyperbolic systems – the simplest setting in which chaotic attractors occur – with one-dimensional unstable manifolds. S3 can enable applications of the computed sensitivities to optimization, control theory, and uncertainty quantification in the realm of chaotic dynamics, wherein these applications remain nascent.&#13;
&#13;
We propose a transformation of Ruelle’s rigorous linear response formula, which is ill-conditioned in its original form, into a well-conditioned ergodic-averaging computation. We prove a decomposition of Ruelle’s formula, called the S3 decomposition, that is differentiable on the unstable manifold. The S3 decomposition ensures that one of the resulting terms, the stable contribution, can be computed using a regularized tangent equation, similar to that in a non-chaotic system. The remainder, known as the unstable contribution, is regularized and converted into a computable ergodic average. The S3 algorithm presented here can be naturally extended to systems with higher-dimensional unstable manifolds.&#13;
&#13;
The secondary contributions of this thesis are analyses and applications of existing methods for computing linear response, including shadowing-based and ensemble-based methods. A feasibility analysis of ensemble sensitivity calculation, which is a direct evaluation of Ruelle’s formula, reveals a problem-dependent, typically poor rate of convergence, rendering it computationally impractical. Shadowing-based sensitivity computation is not guaranteed to converge because of the atypicality of shadowing orbits. This atypicality also implies that small parameter perturbations can lead, contrary to popular belief, to a large change in the statistics of a chaotic system, a consequence being that numerical simulations of chaotic systems may not reproduce their true long-term behaviors.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140194</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noise-Centric Decoding</title>
<link>https://hdl.handle.net/1721.1/140190</link>
<description>Noise-Centric Decoding
Solomon, Amit
Maximum Likelihood (ML) decoding of forward error correction codes is known to be optimally accurate, but it is not used in practice due to the lack of a feasible implementation. As the common approach in coding theory is a code-centric one, designing an ML decoder is a challenging, code-specific task. We establish a noise-centric approach to the decoding of error correction codes that enables us to introduce a universal ML soft-detection decoder called Soft Guessing Random Additive Noise Decoder (SGRAND), a development of a previously described hard-detection ML decoder called Guessing Random Additive Noise Decoder (GRAND) that fully avails of soft detection information. SGRAND is suitable for use with any arbitrary moderate-redundancy block code. A further development of the algorithm is provided that can decode coded signals transmitted on Multiple Access Channels (MACs), where transmitters not only suffer from noise but also interfere with one another. We propose a scheme that deals with the two problems of MACs separately: interference and noise. We prove that a scheme based on SGRAND results in optimally accurate decoding. Finally, we study how correlated noise between orthogonal channels can be used to improve rates and reduce Block Error Rate (BLER) performance via a scheme called Noise Recycling.
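The hard-detection GRAND idea that SGRAND builds on can be sketched in a few lines. This is an illustrative sketch only, not the thesis's implementation: the function name `grand_decode`, the weight cap, and the Hamming(7,4) parity-check matrix are assumptions for illustration. GRAND guesses noise patterns in decreasing order of likelihood (for a binary symmetric channel with crossover probability below one half, that means increasing Hamming weight), inverts each guess on the received word, and stops at the first result that is a codeword.

```python
import itertools

import numpy as np

def grand_decode(y, H, max_weight=3):
    """Hard-detection GRAND sketch: try noise patterns in order of
    increasing Hamming weight (most likely first on a BSC with
    crossover probability < 0.5); return the first candidate whose
    syndrome is zero, i.e. the first codeword found."""
    n = len(y)
    for w in range(max_weight + 1):
        for idx in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(idx)] = 1          # guessed noise pattern
            c = (y + e) % 2           # invert the guess on y
            if not np.any(H @ c % 2): # zero syndrome -> codeword
                return c
    return None  # abandon after max_weight (bounded guessing)

# Hypothetical example: parity-check matrix of the Hamming(7,4) code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
```

Because the loop only queries "is this a codeword?", the same decoder works unchanged for any binary linear code given by its parity-check matrix, which is the universality the abstract refers to; SGRAND replaces the weight ordering with an ordering driven by per-bit soft reliabilities.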
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140190</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remodeling Rationality: An Inquiry into Unorthodox Modes of Logic and Computation</title>
<link>https://hdl.handle.net/1721.1/140189</link>
<description>Remodeling Rationality: An Inquiry into Unorthodox Modes of Logic and Computation
Ochigame, Rodrigo
This dissertation investigates unorthodox models of computational rationality. Part I examines the histories of such models as nonclassical formalisms of mathematical logic from Brazil, nonbinary Turing machines from postcolonial India, and frameworks of information science from postrevolutionary Cuba. Part II analyzes contemporary developments in the field of artificial intelligence (AI), particularly attempts to incorporate ethics and aesthetics into mathematical models of optimization. Part III presents experimental methods of indexing and searching information, developed in response to epistemological and political critiques of dominant search engines. Altogether, the dissertation argues that computational rationality, despite its grand aspirations to universality, is open to radically distinct alternatives.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140189</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods to Improve Fairness and Accuracy in Machine Learning, with Applications to Financial Algorithms</title>
<link>https://hdl.handle.net/1721.1/140188</link>
<description>Methods to Improve Fairness and Accuracy in Machine Learning, with Applications to Financial Algorithms
Lazar Reich, Claire
Critical decisions like loan approvals, foster care placements, and medical interventions are increasingly determined by data-driven prediction algorithms. These algorithms have the potential to greatly aid decision-makers, but in practice, many can be redesigned to achieve outcomes that are fundamentally fairer and more accurate. This thesis consists of three chapters that develop methods toward that aim.&#13;
&#13;
The first chapter, co-authored with Suhas Vijaykumar, demonstrates that it is possible to reconcile two influential criteria for algorithmic fairness that were previously thought to be in conflict: calibration and equal error rates. We present an algorithm that identifies the most accurate set of predictions satisfying both conditions. In a credit-lending application, we compare our procedure to the common practice of omitting sensitive data and show that it raises both profit and the probability that creditworthy individuals receive loans.&#13;
&#13;
The second chapter extends the canonical economic concept of statistical discrimination to algorithmic decision-making. I show that predictive uncertainty often leads algorithms to systematically disadvantage groups with lower-mean outcomes, assigning them smaller true and false positive rates than their higher-mean counterparts. I prove that this disparate impact can occur even when sensitive data and group identifiers are omitted from training, but that it can be resolved if instead data are enriched. In particular, I demonstrate that data acquisition for lower-mean groups can increase access to opportunity. I call the strategy “affirmative information” and compare it to traditional affirmative action in the classification task of identifying creditworthy borrowers. &#13;
&#13;
The third chapter, co-authored with Suhas Vijaykumar, establishes a geometric distinction between classification and regression that allows risk in these two settings to be more precisely related. In particular, we note that classification risk depends only on the direction of a regressor, and we take advantage of this scale invariance to improve existing guarantees for how classification risk is bounded by the risk in the associated regression problem. Building on these guarantees, our analysis makes it possible to compare classification algorithms more accurately. Furthermore, it establishes a notion of the “direction” of a conditional expectation function that motivates the design of accurate new classifiers.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140188</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some studies on the computation and interpretation of seismic interface waves and modes in Earth’s mantle</title>
<link>https://hdl.handle.net/1721.1/140184</link>
<description>Some studies on the computation and interpretation of seismic interface waves and modes in Earth’s mantle
Matchette-Downes, Harry
I discuss ways in which seismic data can be used to constrain the elastic parameters, density, mineralogy, and rheology of planetary interiors, focusing on interface waves and modes in Earth’s mantle. The thesis consists of two parts.&#13;
&#13;
In the first part, I characterise the lithospheric mantle in southwestern Tibet, based on Rayleigh wave dispersion, receiver functions, and virtual deep seismic sounding profiles. These observations indicate a crustal thickness of 70 ± 4 km, and high shear wave speeds of 4.6 ± 0.1 km s−1 down to around 300 km, which is interpreted as the base of the lithosphere. I combine these constraints with gravity data in an isostatic balance, which indicates that the lithospheric mantle is negatively buoyant, ruling out a depleted composition.&#13;
&#13;
In the second part, I build upon a recently-developed finite-element technique for the calculation of planetary normal modes. Starting with the spherically-symmetric case, I compare the new technique against the conventional numerical integration approach. Then, for a rotating Earth model with a lower-mantle anomaly, I calculate the modes without recourse to perturbation theory. Motivated by the goal of testing the accuracy of perturbation theory, I develop tools for the spherical harmonic analysis of modes calculated using the new method.&#13;
&#13;
I apply all of these new methods to investigate the ‘mixed Rayleigh-Stoneley modes’, which arise when Rayleigh modes and core-mantle-boundary (CMB) Stoneley modes have very similar frequencies. I identify this as an example of seismic waveguide coupling, show numerically that the coupling is stronger at lower frequencies, relate this to previous observations, and demonstrate that the coupling persists in nonspherically-symmetric planets.&#13;
&#13;
Finally, I generalise the finite-element technique to anelastic Earth models, in which the shear modulus is frequency-dependent. This results in a non-linear eigenvalue problem, which I solve with the Infinite Arnoldi method. I explore the oscillation modes of simple mechanical systems, and show how more complicated rheologies, such as the Extended Burgers model, can be handled, with comparisons to previous work. I conclude by discussing applications on Earth, where a physically-consistent rheological model is within reach, and on other worlds, where exact methods will prove even more valuable.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140184</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering the fundamental driver of semiconductor radiation tolerance</title>
<link>https://hdl.handle.net/1721.1/140182</link>
<description>Uncovering the fundamental driver of semiconductor radiation tolerance
Logan, Julie V.
Radiation damage is a prominent cause of device failure in orbit, but we do not currently understand what innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose. These devices include circuits required for control and communication as well as instruments used for observation. This gap prevents the design of materials with radiation tolerance as a criterion. The first contribution of this thesis is to improve our understanding of semiconductor damage done in orbit by showing that (1) nuclear transmutation is an insignificant damage mechanism for any prominent satellite orbit and common optoelectronic material and (2) current dominant terrestrial radiation-tolerance qualification tests (high-energy protons) are unrepresentative of the damage done in orbit (recommendations are given to improve their realism). To address the main problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). This was accomplished through the development of positron annihilation lifetime spectroscopy (PALS) and Rutherford backscatter channeling (RBS/C) experiments to compare the relative open-volume and displaced-lattice-atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of radiation damage. With this experimental information on relative radiation tolerance in hand, hybrid density functional theory (DFT) electron densities (and their derived quantities) are processed by considering their gradient and Laplacian to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. 
Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature, and the associated forces surrounding bond critical points, disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. Previous theories for the driver of radiation tolerance, like bond strength, ionicity, and bandgap, are shown to be inconsistent with the experimental results. With this criterion to predict radiation tolerance, simple DFT simulations can be conducted on potential new materials to predict their anticipated operation in the demanding space radiation environment.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140182</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Networks, Influence and Repetition</title>
<link>https://hdl.handle.net/1721.1/140181</link>
<description>Networks, Influence and Repetition
Sassine, Jad Georges
The diffusion of beliefs and behaviors is shaped by the network in which people are embedded. Our focus is on the context of complex diffusion where multiple interactions and reinforcements may be needed for adoption of an idea, action, process, or product. A powerful intuition informs current thinking on the topic: clustered networks provide the repeated reinforcement needed for complex contagion. Thus, current theory makes a sharp distinction between simple and complex contagion, where the former benefits from random bridges to distant parts of the network, but complex contagion is more efficient on densely clustered networks.&#13;
&#13;
The first paper uses analytical arguments and extensive simulations to challenge this common intuition. We show that when there is some stochasticity in choice, random links are more valuable than previously acknowledged, even in the context of complex diffusion; and that the repetition of messages by the same adopter can significantly strengthen the advantages of random (vs. clustered) networks. The second paper investigates the role of repeated reinforcements empirically. We build a simple model to quantify the effect of repetition through the lens of limited memory, and parameterize this model using data from an online experiment where participants need to estimate the opinion of their friends.&#13;
&#13;
The third paper explores social reinforcement through a different lens: within-category spillovers during category emergence. Categories are defined by within-category substitution effects: increasing the utility of one product decreases that of the others. However, in situations where the category has yet to gain acceptance, understanding, and legitimacy, increasing the utility of one product may increase familiarity with other products, leading to positive spillover effects. We analyze these effects during the emergence of hybrid electric vehicles, leveraging an incentive that affected a subset of vehicles, providing a natural exclusion restriction.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140181</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Label-Efficient and Compute-Efficient Video Analytics</title>
<link>https://hdl.handle.net/1721.1/140178</link>
<description>Label-Efficient and Compute-Efficient Video Analytics
Bastani, Favyen
The ability to analyze large-scale video datasets is useful in an increasing range of applications. For example, a traffic planner may want to analyze traffic camera video to compare the frequency of hard braking at different junctions, while an ecology researcher may be interested in identifying instances of various behaviors between pairs of birds in video of a bird feeder. However, implementing machine learning (ML) pipelines for video analytics tasks remains challenging for two reasons. First, these tasks generally require applying expensive ML models to robustly detect and track objects such as cars and birds. These models are both label-intensive, often requiring thousands of labeled examples to achieve high accuracy, and compute-intensive, executing at tens of frames per second even on datacenter GPUs. Second, in addition to applying ML models, these tasks often require several auxiliary operations to pre-process the input video and associated metadata, and to post-process model outputs to extract useful insights. For example, counting hard braking incidents necessitates post-processing object tracks of cars to identify sharp decelerations.&#13;
&#13;
In this thesis, we present SkyhookML, a platform for analytics tasks over large-scale video datasets. To reduce the cost of video analytics, we integrate approximate video query processing optimizations, efficient video pre-processing methods, and self-supervised learning techniques into SkyhookML. Approximate processing optimizations sacrifice a small amount of accuracy for large gains in throughput by avoiding applying the most accurate but also most expensive models on every video frame. Efficient pre-processing methods extract general-purpose insights from video that can be reused across several analytics tasks. Self-supervised learning techniques can substantially reduce the labeling effort needed to train robust models by deriving learning signals from unlabeled data. By employing novel approaches in each of these three categories that are specialized for analyzing object detections and tracks that appear in video data, SkyhookML addresses the label- and compute-intensiveness of video analytics and enables users to efficiently develop and deploy ML pipelines.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140178</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ecological City Design and Planning: How China Expands Urban Ecology, Institutional Learning, and Cultural Shifts through the Evolving Eco-Developments</title>
<link>https://hdl.handle.net/1721.1/140175</link>
<description>Ecological City Design and Planning: How China Expands Urban Ecology, Institutional Learning, and Cultural Shifts through the Evolving Eco-Developments
Chiu-Shee, Colleen
As the concept of sustainability has become a global norm, industrialized and industrializing countries have sought to innovate their strategies for urbanization and modernization in order to set standards for sustainable development. In China, a pro-environmental movement has emerged with continued experimentation with eco-environmental approaches to urbanization. Through the lens of a series of high-profile eco-developments initiated by the Chinese state, this dissertation examines the transnational influences of eco-environmental ideas on urbanization policy and practice, as well as the meanings and impacts of experimental projects that demonstrate eco-environmental principles. These projects were conceived as replicable paradigms for urbanization and concomitant modernization based on the idea of growing the city in harmony with nature. The selected cases include four nationally promoted model eco-cities and two award-winning, locally initiated developments—Zhengdong New District and Nanhu Eco-City. A deep dive into the vicissitudes of the selected eco-developments reveals that their eco-environmental effects and social influences were limited, constrained by the scale of these privileged developmental jurisdictions. Genuine eco-environmental considerations were undermined by growth-oriented developmental agendas of entrepreneurial local states. Eco-environmental rationality was adopted within an authoritarian regime to reinforce state legitimacy. Reflecting on these limitations, this study points to accelerant factors for pro-environmental sociopolitical transitions. The assessment and comparison of the examined eco-developments illuminates how ecological design and planning has stimulated eco-environmental ethics in local practices, which have pushed the boundaries of China’s conventional approaches to urbanization.
Various ecological perspectives embodied in China’s eco-developments—whether scientific, technological, aesthetic, or philosophical—have made these projects stand out as demonstrations of a greener path to urbanization. Despite the limited achievements in these experimental projects, eco-developments are meaningful experiments that have stimulated institutional learning about eco-environmental values. Facilitated by the dissemination of ideas in China’s political and professional networks, China’s evolving eco-developments have created an ecological image of the nation’s modernity, manifested by new landscapes, new infrastructure, new rhetoric, and new social life. These projects not only reshape the built environment but also influence culture, politics, and society.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140175</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference with Measurement Errors: with Applications to Experimental and Observational Studies</title>
<link>https://hdl.handle.net/1721.1/140173</link>
<description>Causal Inference with Measurement Errors: with Applications to Experimental and Observational Studies
Liu, Shiyao
Measurement errors cause problems in causal inference. However, except in canonical cases, researchers rarely recognize the existence of measurement errors in their studies and, as a result, sometimes fail to adjust for them. By combining tools drawn from the literature on machine learning, causal inference, and measurement errors, this dissertation illustrates the existence of measurement errors in these seemingly unrelated scenarios and further develops new frameworks and methods to mitigate their impacts on causal estimation.&#13;
&#13;
The first chapter shows that the inability of investigators to fully observe the treatment take-up status of a respondent in an experiment is equivalent to a measurement error in the treatment indicator. Such errors prevent researchers from correctly estimating average treatment effects. The new framework treats whether a unit is a complier as a latent variable and subsequently estimates the probability that a respondent is a complier with a Gaussian mixture model, so that researchers can recover the treatment effect despite the measurement error. &#13;
&#13;
The second chapter is motivated by the fact that estimating causal quantities with a treatment variable predicted by a machine-learning model is problematic because the prediction error translates into a measurement error. Under the overarching theme of measurement errors, this chapter develops new methods to mitigate the bias these errors cause in causal estimation and shows the effectiveness of these methods via simulations and validation examples. &#13;
&#13;
The third chapter, by adopting a data-driven theory discovery technique, proposes the hypothesis that the local government in China is more likely to respond if the petitioner sends a credible signal to the government that she is an insider. It further tests this hypothesis with an active-labeling-enhanced semi-supervised learning algorithm as proposed in this dissertation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140173</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalytic Upgrading of Lignin From Biomass</title>
<link>https://hdl.handle.net/1721.1/140172</link>
<description>Catalytic Upgrading of Lignin From Biomass
Stone, Michael L.
Lignocellulosic biomass is an abundant feedstock that could sustainably offset our dependence on fossil sources for carbon-based fuels and chemicals. The final hurdle to enable wide-scale success of biorefineries is the utilization of lignin, an oxygenated aromatic polymer making up 15-30% of all lignocellulose. Effective and economical lignin upgrading techniques have remained elusive, largely due to the complex structure and reactivity of lignin during processing. Broadly, my graduate work has focused on designing experiments to gain fundamental understanding while working with complex, real biomass in order to bring to light and subsequently address limitations to lignin upgrading.&#13;
&#13;
Reductive Catalytic Fractionation (RCF) is a promising “lignin-first” biomass upgrading technique that extracts and depolymerizes lignin while preserving the polysaccharides as a solid residue. Using a custom-built flow-through reactor, we determined the mechanisms by which the catalyst was rapidly deactivating, we learned what factors limit rates of lignin extraction from biomass, and we began to understand how the biosynthetic production of lignin in the plant ultimately dictates the possible products that can be made through catalytic upgrading. To better elucidate the relationship between plant genetics and lignin structure, we developed a high-throughput RCF process, which will enable a genome-wide association study of poplar. Technoeconomic and lifecycle assessment of an RCF-based biorefinery identified, among other findings, that complete utilization of RCF oil is essential to economic viability. To this end, we developed a process for near-complete utilization of RCF lignin oil through hydrodeoxygenation (HDO) to jet-range aromatics using molybdenum carbide (Mo2C) in a trickle-bed reactor. The study of neat lignin oil HDO elucidated two key findings: (1) the Mo2C surface is oxidized by neat lignin oil, necessitating higher reaction temperatures, and (2) higher temperatures result in condensation and loss of dimeric products. Using these fundamental insights, we could achieve high yields of aromatic hydrocarbons by stabilizing the oil in a first pass at low temperature, followed by complete conversion in a second pass at higher temperature. In total, these studies highlight the need for rigorous experiments with real biomass feedstocks to identify and address the factors that hinder biorefinery success.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140172</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory and Evolutionary Evidence of the Autocatalytic Oxygenation of Earth’s Surface Environment</title>
<link>https://hdl.handle.net/1721.1/140171</link>
<description>Theory and Evolutionary Evidence of the Autocatalytic Oxygenation of Earth’s Surface Environment
Shang, Haitao
Although molecular oxygen (O₂) is a major component of Earth’s atmosphere today and a key signature of life on this planet, we do not understand why and how Earth has evolved from the ancient oxygen-deficient world to the modern oxygen-rich environment, and whether a similar increase can be expected on other planets. This thesis provides a theory to answer this fundamental but unsolved question in Earth science. In the modern environment, atmospheric O₂ is maintained at a stable level due to the existence of negative feedback mechanisms. However, this brings us to a conundrum: under the regulation of negative feedbacks, how could O₂ concentrations have risen? This thesis suggests that the expansion of oxidative metabolisms provided a positive feedback responsible for Earth’s oxygenation. This may appear counterintuitive: oxidative metabolic processes, after all, consume O₂. A potentially important positive feedback nevertheless lies in partially-oxidized organic matter (POOM) produced by oxidative metabolisms in sedimentary environments. This positive feedback derived from oxidative metabolisms is demonstrated via a mathematical model in this thesis. Its relevance to the rise of atmospheric O₂ crucially depends on the existence of POOM-producing oxidative metabolism(s) at the time of Earth’s oxygenation(s). One group of enzymes that can catalyze the formation of oxidative metabolic products is the oxygenase family. The methods of molecular phylogenomics are applied to reconstruct the evolutionary history of a representative oxygenase family; the results support such a relevance. Finally, this thesis constructs a mathematical model of Earth’s oxygen and carbon cycles and explores the dynamics of these two cycles during oxygenation events.
From the perspective of nonlinear dynamics, this mathematical model interprets Earth’s oxygenations as dynamical bifurcations of the oxygen cycle and the accompanying excursions in carbon isotope records as the characteristic fluctuations associated with dynamical bifurcations. Collectively, the physical reasoning, phylogenomic analyses, and mathematical modeling in this thesis suggest an unstable evolution of Earth’s oxygen and carbon cycles in deep time.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140171</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale Modeling of Electronic Polarization Effects in Interfacial Thermodynamics and Nanoscale Transport Phenomena</title>
<link>https://hdl.handle.net/1721.1/140168</link>
<description>Multiscale Modeling of Electronic Polarization Effects in Interfacial Thermodynamics and Nanoscale Transport Phenomena
Misra, Rahul Prasanna
Molecular simulations, along with the tools of statistical mechanics, can provide essential mechanistic insights into the thermodynamic and transport properties of electrolytes at solid/water interfaces, with broad applications in scientific disciplines ranging from membrane science to biophysics and electrochemistry. At any solid/water interface, water, being a polar solvent, and salt ions, being charged species, can exert strong electric fields, which can in turn result in significant electronic polarization of the solid. However, a fundamental understanding of the role of electronic polarization effects in interfacial thermodynamics and nanoscale transport has been largely lacking. Moreover, due to the vectorial nature of the electric fields, the ion-solid and water-solid polarization energies, which result from the ion-induced and the water-induced electronic polarization of the solid, respectively, are strictly pair-wise non-additive and many-body in nature. Therefore, a theoretical framework is required that can self-consistently model the polarization energies at solid/water interfaces. In this thesis, a multiscale approach involving quantum chemical and classical molecular dynamics (MD) simulations is advanced to investigate the role of electronic polarization effects, first at planar solid/water interfaces, and subsequently under nanoscale confinement, using 2D and 1D graphitic nanomaterials as model systems. By investigating the wetting of graphitic surfaces, the thermodynamics of salt ion adsorption, and the confined water and salt ion transport through carbon nanotubes, this thesis underscores the broad relevance of electronic polarization effects in interfacial phenomena.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140168</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compilation Techniques for Reconfigurable Analog Devices</title>
<link>https://hdl.handle.net/1721.1/140167</link>
<description>Compilation Techniques for Reconfigurable Analog Devices
Achour, Sara
Reconfigurable dynamical-system-solving analog devices are a powerful new ultra-low-power computing substrate capable of executing dynamical systems in a performant and energy-efficient manner. This class of devices leverages the physical behavior of transistors to directly implement computation. Under this paradigm, voltages and currents within the device implement continuously evolving variables in the computation. These hardware platforms are challenging to use because they are subject to a variety of low-level physical behaviors that profoundly affect the computation. Relevant physical behaviors include operating range and frequency limitations, noise, process variation, and quantization error.&#13;
&#13;
In this thesis, I present compilation techniques for automatically configuring such devices to execute dynamical systems and present the first compiler that automatically targets a physical dynamical system-solving reconfigurable analog device of this class. The presented compiler frees the end user from reasoning about the low-level physical behaviors present in the hardware and automates the process of mapping the dynamical system to the analog hardware. This thesis also introduces specification languages for describing dynamical systems, and the capabilities and physical limitations of the reprogrammable analog hardware. The compiler targets these specifications when mapping the computation.&#13;
&#13;
To faithfully implement a computation, the compiler configures the device so that the original dynamical system dynamics can be recovered from the physics of the device at runtime. The mapped computation simultaneously leverages the device physics to implement the desired computation, respect the physical limitations of the device, and attenuate away the unwanted physical behaviors present in the analog hardware. The compiler configures and composes together the analog blocks and simultaneously accounts for all of the low-level behaviors present in the device.&#13;
&#13;
The compiler first maps the target dynamical system to the analog hardware and then transforms the produced circuit to attenuate away unwanted analog behavior. The compiler employs a multi-stage, algebraic rewrite-based circuit synthesis procedure to map the dynamical system to the analog hardware. This procedure synthesizes analog circuits that effectively use parametric and specialized analog blocks and leverage physical laws to perform computation.&#13;
&#13;
The compiler automatically transforms the mapped circuit to attenuate away the unwanted analog behaviors present in the circuit. This transformation scales the signals to respect the operating range and frequency limitations present in the hardware and reduces the effect of analog noise, quantization error, and process variation-induced behavioral deviations on the computation. The transformed circuit preserves the original dynamics of the system such that the original dynamical system variable trajectories can be recovered by applying a compiler-derived recovery transform. The compiler formulates the problem of transforming the circuit as a convex optimization problem – this enables the compiler to optimally identify circuit transformations that maximize circuit characteristics such as execution speed and signal quality.&#13;
&#13;
The compiler deploys a cross-cutting program optimization in which the calibration algorithm and compiler work together to reduce the effect of process variation-induced behavioral variations on the overall computation. This thesis presents the concept of a delta model, a hardware abstraction that captures the device-specific behavioral deviations present in the calibrated analog hardware.&#13;
&#13;
The compiler uses this hardware abstraction to compensate for behavioral variations for the specific device at hand while transforming the circuit. This optimization involves all parts of the software stack. I introduce delta model language constructs to the hardware specification language, develop a novel delta-model aware circuit scaling optimization, and introduce new calibration and characterization procedures into the device runtime and firmware to implement this optimization. With this optimization enabled, I am able to attain higher fidelity results with more consistency on the target hardware. This thesis also presents a co-designed calibration algorithm that prioritizes eliminating behavioral deviations that cannot be compensated for in compilation.&#13;
&#13;
I evaluate the compiler on applications from the biology, physics, and controls domains. The results demonstrate that these applications execute with acceptable error while consuming microjoules of energy.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140167</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Energetic Perspective on the Tropical Atmosphere and its Response to Climate Warming</title>
<link>https://hdl.handle.net/1721.1/140166</link>
<description>An Energetic Perspective on the Tropical Atmosphere and its Response to Climate Warming
Duffy, Margaret Louise
This thesis evaluates the dynamics of the tropical atmosphere and its response to warming using energetic approaches. We focus on three features of the atmosphere over tropical oceans: the response of precipitation to warming, the gross moist stability (GMS), and the response of the Pacific Walker circulation (WC) to warming. There have been a number of mechanisms proposed to explain the response of precipitation to warming. The "wet-get-wetter" mechanism describes an amplification of the pattern of precipitation in a moister atmosphere, and the "warmer-get-wetter" mechanism describes enhanced upward motion and precipitation in regions where the increase in sea surface temperature (SST) exceeds its tropical-mean increase. Studies of the current climate have shown that surface convergence (SC) over the tropical oceans is largely driven by horizontal gradients of low-level temperature. Chapter 2 finds that a "Laplacian-of-warming" mechanism is of comparable importance to the wet-get-wetter and warmer-get-wetter mechanisms for the response of precipitation to climate change over tropical oceans. The GMS quantifies the energy import or export of a circulation but, despite its importance, is a difficult quantity to understand and to observe. Chapter 3 approximates the vertical GMS using SST and the Laplacian of SST and finds that the approximation works well in the mean and seasonal cycle. There is uncertainty about the sign and magnitude of the response of the WC to warming. Chapter 4 finds a strong relationship between GMS response and WC strength response across a hierarchy of GCMs. Further, Chapter 4 finds that WC strength and GMS responses are sensitive to the degree of parameterized convective entrainment, but that the spread in GMS responses due to differing entrainment rates is smaller than the spread in GMS response across CMIP5 models.
Taken together, this thesis advances our understanding of the tropical circulation and precipitation pattern and their response to warming.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140166</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Optimization and Co-Learning Methods for Automated Design of Rigid and Soft Robots</title>
<link>https://hdl.handle.net/1721.1/140163</link>
<description>Co-Optimization and Co-Learning Methods for Automated Design of Rigid and Soft Robots
Spielberg, Andrew Everett
Nature demonstrates an incredible diversity, capability, and complexity of life, with organisms that can robustly run, jump, and swim.  Compared with their biological brethren, robotic "life" lacks rich dexterity or economy of motion, and their comparatively simple designs indicate room for improvement.  Unfortunately, a major barrier to creating similarly adept robots is the design process itself.  Each aspect of robot design, including the (physical) body (e.g. actuation, sensing, geometry, materials) and the (cyber) brain (e.g. control, proprioception), is typically not integrated in a single design workflow, and a lack of fast, accurate, useful simulators leads to expensive, spiraling, hardware-intensive iteration.  This thesis introduces methods to marry all aspects of robot design into combined algorithms for holistic cyberphysical design.  Core to this solution are co-optimization and co-learning methods that can simultaneously reason about different design domains and achieve locally optimal performance.  This thesis further discusses considerations in modeling (via differentiable simulation) and realizability (through automated and semi-automated fabrication workflows).  We describe how this entire suite of capabilities from modeling to automated fabrication can be conceptualized in a complete end-to-end "robot design stack," providing full CAD-CAM computational design capabilities.  We demonstrate these capabilities on rigid, compliant, and soft robot design tasks, including locomotion, manipulation, and tactile sensing, and discuss the frontiers of this burgeoning field of computational robot design.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140163</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A deterministic model for wear of piston ring and liner and a machine learning-based model for engine oil emissions</title>
<link>https://hdl.handle.net/1721.1/140161</link>
<description>A deterministic model for wear of piston ring and liner and a machine learning-based model for engine oil emissions
Gu, Chongjie
Nowadays, the design of internal combustion (IC) engines must satisfy more constraints to meet modern energy-saving and emissions standards. Engine emissions and engine durability are two of the most important factors in the development of IC engines.&#13;
&#13;
Engine particulate emissions are strongly correlated with lubricant oil consumption. On the other hand, the carbon soot particles that mix into the lubricant from combustion are the major source of long-term wear of the piston, piston ring, and cylinder liner. Costly engine tests are required to develop new systems that meet emission and durability requirements. More advanced data analytics and models connecting critical design and operating parameters to performance will help shorten the development lead time for more efficient and cleaner engines.&#13;
&#13;
This thesis work aims to model engine wear during the break-in and steady-state stages, capture correlations between oil emissions and engine operating parameters, and provide engine design guidance. This work is the first to build deterministic physics-based wear models that perform system-level engine wear simulations, including the effect of liner topography. The wear simulation results are compared to experimental outcomes for both engine stages. It is also the first attempt to model oil emissions with machine learning and to connect the data-driven results with different engine ring-pack designs. The results show good consistency between the machine learning analysis and the underlying oil emission physics. The data-driven procedures defined here show promise for accelerating the engine development cycle, reducing engine testing cost, and improving understanding of oil transport mechanisms and design influences.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140161</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cult of the Persuasive: The U.S. Military’s Aversion to Coercion in Security Assistance</title>
<link>https://hdl.handle.net/1721.1/140157</link>
<description>The Cult of the Persuasive: The U.S. Military’s Aversion to Coercion in Security Assistance
Metz, Rachel (Rachel Tecott)
Why does the United States struggle to build stronger militaries in partner states? The fundamental challenge of security assistance is that of influencing recipient political-military decision-making. How does the United States aim to influence recipient leaders? Which strategies of influence work best? Why does the United States choose the strategies of influence it chooses? I conceptualize U.S. influence strategies in security assistance as an influence escalation ladder with four rungs: teaching, persuasion, bargaining, and direct command. I develop Influence Strategy Theory (IST), arguing that the United States is more likely to successfully influence partners and build better partner militaries when it employs the full escalation ladder. It is less likely to succeed when it relies exclusively on teaching and persuasion. &#13;
&#13;
Moving a link back in the causal chain, I offer two competing models of strategy selection—the rational actor model, and the Cult of the Persuasive. I argue that the rational actor model sufficiently explains U.S. strategy in pre-Vietnam security assistance efforts, but cannot explain U.S. advisors’ persistent reliance on persuasion in Vietnam and thereafter. In Vietnam, the U.S. Army untethered from its civilian principal in Washington to instead pursue its parochial bureaucratic interests. An institutional ideology—“the cult of the persuasive”— preaching the normative and causal superiority of persuasion over coercion evolved within the U.S. Army to minimize disruption of its bureaucratic machinery. The ideology continues to guide U.S. security assistance today because the U.S. military has no institutional incentive to change course. &#13;
&#13;
I test these arguments within and across three critical cases of U.S. security assistance, with chapters examining the U.S. effort to build the Republic of Korea Army (1948 – 1953), the Army of the Republic of Vietnam (1955 – 1973), and the Iraqi Army (2003 – 2011). I draw from thousands of archival documents, over 500 oral histories collected from former U.S. advisors, and over 150 original interviews. I find strong support for the expectations of the study. The findings provide new theoretical and empirical insights for students of security assistance and military strategy, as well as practical lessons for policymakers and military advisors.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140157</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gas Separation Using Nanoporous Single-Layer Graphene Membranes</title>
<link>https://hdl.handle.net/1721.1/140155</link>
<description>Gas Separation Using Nanoporous Single-Layer Graphene Membranes
Yuan, Zhe
Nanoporous single-layer graphene is regarded as a highly promising membrane material for gas separation due to its atomic thickness. When single-layer graphene contains a high density of gas-sieving nanoscale pores, it can exhibit both a high gas permeance and a high selectivity, which is beneficial for reducing the cost of gas separation processes. However, significant challenges remain in matching theoretical predictions with experimental measurements and in applying graphene membranes to real gas separations. To tackle these challenges, in this thesis, I carry out both theoretical and experimental investigations to understand and to improve the gas separation properties of nanoporous single-layer graphene membranes.&#13;
&#13;
On the theoretical side, first, using molecular dynamics simulations, I investigate the mechanism of activated gas permeation through sub-nanometer graphene pores when energy barriers exist for pore crossing. I develop an analytical framework based on transition state theory to predict the gas permeance through a given graphene nanopore. Second, I extend the analytical framework mentioned above from sub-nanometer pores to larger pores. I formulate the transport kinetics associated with the direct impingement from the bulk and with the surface diffusion from the adsorption layer on graphene, and then combine them to predict the overall gas permeation rate using a reaction network model. Last, I apply the theory developed above to predict the total gas permeance through a pore ensemble with a realistic pore size distribution, which is generated by Kinetic Monte Carlo simulations. I show that the total gas permeance through a pore ensemble is dominated by a small fraction of large nanopores having low energy barriers of pore crossing.&#13;
&#13;
On the experimental side, I demonstrate temperature-dependent gas mixture separation using single-layer graphene membranes. The membranes contain intrinsic nanopores formed during the chemical vapor deposition synthesis of graphene. I investigate the formation mechanism of the intrinsic graphene nanopores and systematically control their density while maintaining appropriate pore sizes for gas sieving. I identify that nanoscale molecular fouling of the graphene surface, in which graphene pores are partially blocked by hydrocarbon contaminants under experimental conditions, affects both gas permeance and selectivity.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140155</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of Energy and Environmental Policies on Air Quality: Bridging Observational Data, Statistical, and Atmospheric Models</title>
<link>https://hdl.handle.net/1721.1/140152</link>
<description>Impacts of Energy and Environmental Policies on Air Quality: Bridging Observational Data, Statistical, and Atmospheric Models
Qiu, Minghao
As more countries have adopted regulations on ambient air pollution and announced commitments moving away from fossil fuel energy, assessing the impacts of adopted energy and environmental policies on air quality is essential to evaluating policy progress and informing future actions. The increasing amount of measurement data on pollutant concentrations and precursor emissions provides an opportunity for tracking the progress of policies in mitigating air pollution, but key challenges remain. Levels of measured air pollutants and their precursor emissions are subject to variability in both the natural environment and human activities. This thesis comprises four studies that integrate research tools across disciplines - from statistical causal inference to atmospheric chemistry models - to assess the impacts of adopted energy and environmental policies on air quality, in support of decision making in energy, climate, and environmental governance. &#13;
&#13;
The first study estimates the impacts of energy policies on air quality in major energy-intensive industrial sectors in China with both prospective and retrospective methods. It finds that the realized effects of policy on energy and pollution outcomes are generally much smaller than the projected benefits. The differences between projected and realized benefits stem from how policy baselines are selected and reflect heterogeneity in firms' policy responses.&#13;
&#13;
The second study evaluates the impacts of wind power development on air quality and related environmental justice issues in the US. We find substantial air quality benefits from existing wind power, but benefits would increase four-fold if policies could prioritize displacing the most damaging units. The fraction of air quality benefits accruing to low-income and minority populations falls below a new 40% goal for future US policies, suggesting targeted efforts are needed to address air pollution disparities.&#13;
&#13;
The third study designs a statistical method to estimate the average emission factors of vehicles (the relevant outcome for decision making) based on snapshot measurements (the quantity being measured in the field). We find that a much lower fraction of the measured fleet in Europe is in compliance with emission standards compared to previous estimates. We further quantify the uncertainty and effectiveness of detecting high-emitting vehicles with snapshot measurements. &#13;
&#13;
The fourth study evaluates the ability of statistical methods to attribute observed pollutant trends to emissions changes under meteorological variability. We show that widely-used regression methods do not perform well, and we propose a machine learning model that offers better performance. We further provide a lower bound of the estimation error due to interactions between meteorology and emissions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140152</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Through the Lens of Robustness</title>
<link>https://hdl.handle.net/1721.1/140148</link>
<description>Learning Through the Lens of Robustness
Tsipras, Dimitris
Despite their impressive performance on large-scale benchmarks, machine learning systems turn out to be quite brittle outside of the exact setting in which they were developed.&#13;
&#13;
How can we build ML models that are robust and reliable enough for real-world deployment?&#13;
&#13;
To answer this question, we first focus on training models that are robust to small, worst-case perturbations of their input. Specifically, we consider the framework of robust optimization and study how these tools can be leveraged in the context of modern ML models. As it turns out, this approach leads us to the first deep learning models that are robust to a wide range of (small) perturbations on realistic datasets.&#13;
&#13;
Next, we explore how such a paradigm of adversarially robust learning differs from the standard learning setting. As we will see, robust learning may require training a model that relies on a fundamentally different set of input features. In fact, this requirement can give rise to a trade-off between robustness and accuracy. At the same time, the features that robust models rely on turn out to be more aligned with human perception and, in turn, make these models also useful outside the context of reliability.&#13;
&#13;
Finally, we move beyond the worst-case perturbation setting and investigate other robustness challenges in deploying models in the wild. On one hand, we develop general methodologies for creating benchmarks that gauge model robustness along a variety of axes, such as subpopulation shift and concept transformations. On the other hand, we explore ways to improve the reliability of our models during deployment. To this end, we study how we can bias the features that a model learns towards features that generalize to new environments. Moreover, we develop a methodology that allows us to directly rewrite the prediction rules of a model with virtually no additional data collection.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140148</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bacterial Interspecies Interactions and Microbial Community Assembly</title>
<link>https://hdl.handle.net/1721.1/140145</link>
<description>Bacterial Interspecies Interactions and Microbial Community Assembly
Ortiz Lopez, Anthony F.
Microbial communities play central roles in the development and maintenance of human health and in the functioning of the Earth’s ecosystems. The microbial biodiversity of many environments has been thoroughly studied in recent years, yet the dominant processes shaping microbiota assembly remain unresolved. In this thesis, I leverage a bottom-up approach to experimentally build synthetic microbial communities and to test the prevalence of different ecological and evolutionary forces. High-throughput experiments in nanoliter droplets show a wide occurrence of bacterial interactions in the form of one species or consortium affecting the abundance and yield of another species. Positive and negative interactions appear unavoidable in bacterial co-cultures when growth is permitted, with growing bacteria typically facilitating non-growing bacteria. In this thesis, I also show that bacterial interspecies interactions in the C. elegans intestine are mostly competitive and hierarchical. Interestingly, simple two-species microbiotas can predict the composition of three-species and eight-species microbiotas in this nematode. Finally, we found that the importance of interspecies interactions is robust to bacterial strains with and without previous exposure to the C. elegans gut, and to worm mutants with different immune activities. These results show that constructing and characterizing synthetic microbial communities can elucidate fundamental principles for the control of microbial communities.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140145</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Memristor-based AI Hardware for Reliable and Reconfigurable Neuromorphic Computing</title>
<link>https://hdl.handle.net/1721.1/140143</link>
<description>Memristor-based AI Hardware for Reliable and Reconfigurable Neuromorphic Computing
Choi, Chanyeol
In the field of artificial intelligence hardware, the memristor has been proposed as an artificial synapse for neuromorphic computing applications. Changes in weight values in the form of conductance must be identifiable and uniform to train a neural network in memristor arrays. Because of the high mobility of metal ions in the Si switching medium, electrochemical metallization (ECM) memory has shown a high analogue switching capacity. However, the extreme stochasticity of ion transport causes switching unpredictability. I demonstrate a Si memristor with alloyed conduction channels that operates reliably and enables large-scale crossbar array deployment. In addition, heterogeneously integrated neuromorphic chips have been developed to allow physically reconfigurable neuromorphic computing. This thesis examines alloyed metal-based silicon memristors and stackable neuromorphic chips with heterogeneous integration for reliable and reconfigurable neuromorphic computing.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140143</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sediment plumes and financial modeling in the context of deep-sea polymetallic nodule mining</title>
<link>https://hdl.handle.net/1721.1/140142</link>
<description>Sediment plumes and financial modeling in the context of deep-sea polymetallic nodule mining
Muñoz Royo, Carlos
Deep-sea polymetallic nodule mining commercial operations are close to becoming a reality. In recent years, research activity in the field has substantially increased, but many technical, environmental, and financial challenges remain to be addressed. This work tackles three critical topics essential to advancing knowledge and understanding in the field, namely, (i) nodule collector and (ii) return water sediment plumes discharged to the ocean during nodule mining operations, and (iii) the financial modeling of a nodule mining operation and the analysis of royalty payment systems for the International Seabed Authority.&#13;
&#13;
A nodule collector vehicle will operate on the seabed to pick up polymetallic nodules and, inevitably, also a certain amount of seabed sediment. Most of the unwanted sediment will be discharged directly at the rear of the collector vehicle, creating the so-called collector plume, which will then be transported away from the mining area with the consequent environmental impact. DSM21, a dedicated collector plume field study with a prototype nodule collector vehicle, was designed and conducted for the first time at a depth of 4500 m in the Clarion Clipperton Fracture Zone in the Pacific Ocean to monitor the collector sediment plume. Field observations confirm that the sediment discharge initially behaves as a turbidity current, shifting the currently established modeling paradigm for such plumes towards a multi-scale approach. The turbidity current sets the initial conditions of the subsequent ambient plume, which is then transported by ocean currents and subjected to ocean turbulence in the bottom boundary layer and to sediment settling. The observations from the field studies also suggest that the design of the collector vehicle may have a direct influence on the initial height reached by the sediment, which, in turn, is critical to setting the impact time and length scales.&#13;
&#13;
A fraction of the unwanted sediment will be lifted up to a mining vessel together with the nodules. There, the sediment and water will be separated and discharged back to the ocean at depth. First, a Monte Carlo analysis is conducted to define the parameter space of the discharge characteristics and its dynamic regime. Then, a multi-scale modeling approach based on well-established fluid dynamic principles is developed to model the discharge in the midwater column and close to the seabed. PLUMEX, a large-scale field study, was designed and conducted for the first time in the Pacific Ocean aboard R/V Sally Ride to discharge and monitor six plumes in the midwater column. Field observations served as a validation of the near-field modeling approach, and showed that sediment aggregation is not relevant for this type of discharge.&#13;
&#13;
The International Seabed Authority is currently developing the regulations for the commercial exploitation of polymetallic nodules in international waters. Nodules are the common heritage of mankind and, as such, their exploitation for commercial purposes is to be financially compensated via a royalty payment, as established in the current version of the draft regulations. However, the singularities of nodule mining in international waters, the number of available alternative royalty systems, and their implications make it challenging to determine which system is best for this application. As requested by the International Seabed Authority, a financial model of a nodule mining operation in international waters was developed incorporating feedback from stakeholders. The model was then used to analyze and compare a number of royalty systems, concluding that the characteristics, flexibility, and risk profile of a variable ad valorem system are most suitable for this application.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140142</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing moral hypotheticals</title>
<link>https://hdl.handle.net/1721.1/140141</link>
<description>Computing moral hypotheticals
Holmes, Dylan Alexander
Our moral judgments depend on our ability to imagine what else might have happened: we forgive harms that prevent greater harms, we excuse bad outcomes when all others seem worse, and we condemn inaction when good actions are within reach. To explain how we do this, I built a computational model that reads and evaluates short textual stories, computing hypotheticals in order to make moral judgments.&#13;
&#13;
I identify what specialized knowledge we need in order to know which hypothetical alternatives to consider. I show how to connect abstract knowledge about moral harms to the particular details in a story. Finally, I show how the system can assess outcomes in a purely qualitative, human-like way by decomposing outcomes into their harmful components; I argue that—as in real life—many outcomes are incomparable.&#13;
&#13;
I support my theoretical claims with references to the cognitive science and philosophical literature, and I demonstrate the system’s explanatory breadth with diverse examples including escalating revenge, slap-on-the-wrist, preventive harm, self-defense, and counterfactual dilemma resolution.&#13;
&#13;
The key insight is that hypothetical context modulates understanding. With this system, I shed light on what is needed to grasp hypothetical context as effortlessly and automatically as we humans do. And I lay the groundwork for moral reasoning systems that are as nuanced, imaginative, and articulate as we humans are.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140141</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Analytics and Machine Learning for the Risk-Based Management of Agricultural Supply Chains</title>
<link>https://hdl.handle.net/1721.1/140138</link>
<description>Predictive Analytics and Machine Learning for the Risk-Based Management of Agricultural Supply Chains
Renegar, Nicholas
Safe, healthy and resilient food supply chains are essential to ensuring the livelihood and well-being of humans and societies, as well as local and global economies. However, the ability to provide and sustain access to nutritious and safe food continues to be a major concern and a challenge for every country around the world, including developed countries. In fact, a number of serious and global public health risks arise from food supply chains. Two such central risks are adulteration, in which unsafe food is sold for human consumption, and zoonotic diseases (i.e., viruses and diseases that can transfer from animals to humans through food supply chains) such as avian influenza, SARS, and COVID-19.&#13;
&#13;
This thesis focuses on food adulteration and zoonotic disease risks, and highlights a variety of use cases and applications in which operations research, machine learning and predictive supply chain analytics can inform the management of these risks through public policy and techno-operational processes. The second chapter focuses on US food imports, and uses network structures of international supply chains (made public from bills of lading) to identify high-risk consignees (importers) at risk for economically motivated adulteration. The third and fourth chapters focus on China's food supply chain, and leverage publicly-posted food safety test results to evaluate risk-based regulatory resource allocations. Specifically, it is shown how risk-based testing can identify more adulteration problems and trace more problems to their source. It is also found that, for aquatic products, wholesale and wet markets are potentially undersampled by regulators, despite consolidating the riskiest supply from aquaculture farms. The fifth chapter focuses further on Chinese live animal markets, also implicated in numerous zoonotic disease outbreaks, and demonstrates how food adulteration tests and a novel unsupervised clustering algorithm can be leveraged to identify specific markets at risk of spreading zoonotic disease. The sixth chapter uses machine learning to enable more effective development of DNA-wrapped single-walled carbon nanotube molecular recognition sensors, capable of rapid and quantitative detection of adulterants in food. This technology could be extremely advantageous for managing food adulteration risks at wholesale and wet markets in China, where products are quickly sold overnight.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140138</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrative Approach to Metal Extraction and Electrification</title>
<link>https://hdl.handle.net/1721.1/140135</link>
<description>Integrative Approach to Metal Extraction and Electrification
Rush, Lucas Thorley
Molten sulfide electrolysis is a novel technology that produces metals directly from their sulfide minerals. A particular focus of the technology has been on copper metal extraction from chalcopyrite. Molten sulfide electrolysis is competing against established technologies in a legacy industry. The premise behind this work is that molten sulfide electrolysis has to maximize value to the copper supply chain in order to compete with established technologies. Two areas explored in this work are the synergy between molten sulfide electrolysis and the electricity grid and the synergy between molten sulfide electrolysis and the minerals processing of copper concentrates.&#13;
&#13;
Discounted cash flow analysis is used to show the strength of molten sulfide electrolysis compared to traditional copper smelting techniques. The same technique is used to show that molten sulfide electrolysis of copper can be integrated into the electricity grid and provide additional value to the system. This is achieved by selectively idling the facility during periods of high electricity prices. This has the effect of trading off a higher capital cost with a lower operating cost.&#13;
&#13;
Process models of a molten sulfide electrolysis copper processing facility were developed to determine how excess heat generated in the system can be used to melt excess gangue materials in the concentrate. It was found that the minerals processing of copper concentrates could be reduced under all base case scenarios. The excess heat of the system was found to be sensitive to the faradaic efficiency of the system and the electrical conductivity of the electrolyte. The faradaic efficiency and the electrical conductivity of the electrolyte were experimentally determined over a range of operating conditions. &#13;
&#13;
The primary focus of the development of the molten sulfide electrolysis technology has been on the copper supply chain. The lead and zinc supply chain was proposed as an additional market for the technology to explore. Experimental results showed that the technology could be used to produce lead and zinc metal from a molten sulfide electrolyte.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140135</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical reasoning in the brain</title>
<link>https://hdl.handle.net/1721.1/140134</link>
<description>Hierarchical reasoning in the brain
Sarafyazd, Morteza
When we interact in an uncertain environment, we continuously reason to disambiguate internal and external sources of uncertainty at multiple spatiotemporal scales to guide our goal-directed behavior. Understanding the neural mechanisms of this reasoning behavior is essential for consequential applications in the brain sciences. In this thesis, I address hierarchical reasoning at three levels: behavior, neural circuits, and computational models. First, I developed behavioral experiments to examine reasoning behavior in dynamic environments with two hierarchical sources of uncertainty. Rational behavior necessitates evidence integration under multiple sources of uncertainty to update an internal belief about the external environment. This behavioral study showed that humans and non-human primates are able to reason accordingly by accumulating evidence to update their beliefs over a longer timescale. Second, I performed electrophysiology in the frontal cortex. The concurrent neural recordings revealed that brain regions in the frontal cortex carry signals related to reasoning behavior and performance monitoring. Third, to interpret the neural results, a probabilistic integrator model is implemented to address the key interpretable variables of the behavior. Finally, after observing the neural data, I aimed to explore neural hypotheses of the behavior through in-silico simulations of two neural-network models performing the reasoning task. These simulations led to a better evaluation of proposed neural hypotheses relevant to key behavioral variables.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140134</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Characteristics of Precipitation and Their Response to Climate Change</title>
<link>https://hdl.handle.net/1721.1/140133</link>
<description>Understanding the Characteristics of Precipitation and Their Response to Climate Change
Li, Ziwei
In this thesis, I present three projects that target different characteristics of atmospheric precipitation that are not well understood. First, I focus on precipitation extremes in the extratropics and aim to understand changes in vertical velocity which control the spatial variation in the climate-change response of extratropical precipitation extremes. I solve the quasi-geostrophic omega equation and attribute the vertical velocity changes in two ways. In the dry decomposition, a positive contribution by latent heating is largely offset by a negative contribution from an increase in dry static stability. In the moist decomposition, changes in moist static stability play a key role. Second, I look at the power-law frequency distributions and fractal dimensions of tropical precipitation clusters. I propose a viewpoint that regards precipitation as thresholded islands on a rough column-water vapor (CWV) topography, which is supported by good agreement between the precipitation clusters and CWV islands in frequency distributions and fractal dimensions. I further show that self-affine surfaces with a roughness exponent of 0.3 reproduce these statistics, and that the self-affine scaling theory provides analytical relations between multiple power-law exponents. Third, I further investigate the general dynamics behind tropical precipitation and present a 2-dimensional conceptual model to study tropical convective organization. The model uses the column moist static energy (CMSE) as a prognostic variable and has terms parameterized by diagnosing a high-resolution simulation with explicit convection. Through analyzing the conceptual model, I show that self-aggregation is due to the interplay between the amplifying effect of vertical advection and the smoothing effect of horizontal advection, with relatively weaker contributions from the radiative forcing and surface fluxes. 
Furthermore, I find that a temporal red noise and a diffusive horizontal advection term set the shape of the CMSE spectrum, which approximately matches that in the high-resolution simulation except for a shallowing of the slope at high wavenumbers. Overall, these contributions combine to make progress in understanding the dynamics of precipitation in the current climate and under climate change.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140133</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principled Methods and Models for Deep Learning Based Functional Genomics</title>
<link>https://hdl.handle.net/1721.1/140131</link>
<description>Principled Methods and Models for Deep Learning Based Functional Genomics
Krismer, Konstantin
Many advances in functional genomics and in biology more broadly can be attributed to the rise of massively parallel sequencing technology and its derivatives. As the volume of sequencing and other high-throughput experimental data increases exponentially, so does the need for computational methods to analyze and condense these vast amounts of data, and to help explain the underlying phenomena. In this thesis, I describe five projects that introduce novel techniques and methods in functional genomics.&#13;
&#13;
The first project introduces a simulation-based framework to investigate neural network architectures that are trained on biological sequence data, as is common in functional genomics. The second project describes a two-pronged approach to study the determinants of cell type-specific chromatin accessibility, with an ensemble of neural networks trained on DNase-seq data to predict chromatin accessibility, and MIAA, the multiplexed integrated accessibility assay, to validate, experimentally, these in silico predictions. The third project presents a method to identify long-range genomic interactions from ChIA-PET and HiChIP data. Enabled by this work, the fourth project aims to provide a means to identify reproducible long-range genomic interactions. We continue the analysis of long-range interactions in the fifth project by performing co-enrichment analysis of transcription factor sequence motifs.&#13;
&#13;
Collectively, these methods provide new approaches to a range of problems in functional genomics, from finding appropriate neural network architectures for sequence-based prediction tasks to uncovering patterns in long-range genomic interactions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140131</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Space-Resolved Raman Analysis of Structure-Property Relationships in Heterogeneous Structural Materials</title>
<link>https://hdl.handle.net/1721.1/140130</link>
<description>Time-Space-Resolved Raman Analysis of Structure-Property Relationships in Heterogeneous Structural Materials
Loh, Hyun-Chae
Concrete is responsible for 8% of the world’s CO2 emissions, 9% of global industrial water withdrawal, and 68% of sand and gravel mining, and the use of concrete is expected to continue increasing. With such a significant environmental impact, research on sustainable construction materials is essential. In particular, characterizing their chemo-mechanical properties could guide the development and evaluation of sustainable materials. However, the spatial and temporal complexity of cementitious materials makes them challenging to characterize. Here, a time-space-resolved Raman characterization framework was developed by combining confocal Raman spectroscopy and two-point correlation analysis. In addition, other chemical, mechanical, and crystallographic characterization tools were correlated with the Raman maps to link the structural features to the chemo-mechanical properties.&#13;
&#13;
First, the space-resolved Raman analysis framework is established in order to study the structure-property relationships in nacre. The crystallographic texture of the nacre is correlated with its energy absorption capacity to study the toughening mechanism. A cooperative movement of co-oriented stacks of aragonite platelets is observed, establishing the platelet stacks as another hierarchical structure contributing to the material’s toughness. Furthermore, the mechanical contributions of the microtexture in drumfish teeth were studied through a correlative chemo-mechanical characterization.&#13;
&#13;
Next, the time component is added to the framework. Using the time-space-resolved Raman analysis framework, the molecular structure of C-S-H is analyzed, and the real-time hydration process of cementitious materials is visualized. The Raman spectrum of C-S-H obtained from in-situ underwater Raman spectroscopy corroborates Gartner’s C-S-H model. Moreover, the early-stage cement hydration dynamics are visualized for the first time. The setting process is explained by the crystallization of calcium hydroxide and by a percolation process in which the hydration products construct a network. &#13;
&#13;
This framework can be applied to studying other chemical reactions in cementitious materials. Future applications include studying the hardening or deterioration process and identifying the effects of admixtures in cementitious materials. In addition, understanding the hydration kinetics in real-world conditions could lead to the advancement of sustainable construction techniques, such as carbonation curing or 3D concrete printing.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140130</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring and Imaging Seismic Velocity Changes Across Temporal and Spatial Scales</title>
<link>https://hdl.handle.net/1721.1/140129</link>
<description>Monitoring and Imaging Seismic Velocity Changes Across Temporal and Spatial Scales
Mao, Shujuan
Relative changes in seismic velocity (dv/v) are associated with changes in mechanical properties of crustal materials, and can reflect the perturbations in stress fields, rock damage, and fluid content. Interferometry of seismic ambient noise enables the continuous monitoring of dv/v.&#13;
&#13;
In this thesis, I first explore the sensitivity and resolution limits of noise-based monitoring of dv/v. By employing dense seismic arrays at La Réunion Island, I demonstrate the feasibility of using noise-based interferometry to detect tidally-induced deformation (volumetric strain ∼ 10⁻⁸, dv/v ∼ 10⁻⁴) with ∼ hourly time resolution. I further extend the applications of noise-based monitoring by not only detecting the temporal changes but also imaging the spatial variations of dv/v. Based on the space-time dv/v observations, I investigate groundwater fluctuations in the Coastal Los Angeles Basins during 2000–2020. The spatial imaging of dv/v reveals pronounced seasonal variability in confined aquifers. The spatial patterns of dv/v are consistent with surface deformation inferred from InSAR but also constrain aquifers and their hydrology at different depths. Moreover, I propose a method for measuring seismic travel-time shifts based on wavelet cross-spectrum analysis. This new method provides stable time-shift measurements with optimal time-frequency joint resolution that can enhance the high-resolution spatial imaging of dv/v.&#13;
&#13;
This dissertation advances the techniques and applications of temporal monitoring as well as spatial imaging of dv/v via seismic interferometry. Monitoring dv/v in time and space is shown to be a promising tool, which can be used in concert with other geophysical observations, to identify and decipher dynamic processes in the crust.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140129</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Let’s Talk About Sex: Sexual Ethics, Agency, and Justice Beyond Consent</title>
<link>https://hdl.handle.net/1721.1/140125</link>
<description>Let’s Talk About Sex: Sexual Ethics, Agency, and Justice Beyond Consent
Atherton, Emma Marija
This dissertation is about sexual pleasure and good, ethical sex. It is also about the ways women’s pleasure is routinely marginalized in (cis)heterosex, and the gendered and heteronormative social norms and scripts that lead to such routine marginalization. Through the lens of pleasure, this dissertation highlights dimensions of sexual ethics, sexual agency, and sexual (in)justice that are often overlooked in philosophical conversations dominated by the concept of consent. &#13;
&#13;
Chapter 1 concerns the “pleasure gap”: the fact that in (cis)heterosex women report experiencing significantly less pleasure than men report (and significantly less pleasure than women having queer or non (cis)heterosex). I examine how the pleasure gap has been socially misunderstood and miscast as a “women’s problem” in ways which essentialize and pathologize women’s sexuality. I argue that the pleasure gap is a social-structural problem, a phenomenon arising out of the fact that the practice of (cis)heterosex is structured by social norms and expectations that reliably and routinely lead to the marginalization of women’s pleasure.&#13;
&#13;
Chapter 2 examines how social scripts for (cis)heterosex shape women’s relationships to sexual pleasure. I suggest that the culturally dominant script for (cis)heterosex both constrains women’s sexual agency, and plays a role in producing women as particular kinds of sexual agents and subjects who relate to pleasure in (cis)heterosex primarily as something to perform and provide rather than pursue or experience. As such, we must understand the script as productive as well as repressive with respect to women’s sexuality in the context of (cis)heterosex.&#13;
&#13;
Chapter 3 pivots to focus on good, robustly ethical sex. I introduce the reciprocal self-regulation model of sexual agency to describe how sexual partners co-determine the nature and content of their shared sexual experiences. I introduce this model as a means of thinking about what is actually involved in good, ethical sex and in sexual “flow”, and as an alternative to “ongoing enthusiastic” consent models which are increasingly and, I think, mistakenly cast as a new standard not only for permissible sex but also for sex that is robustly ethical and pleasurable.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140125</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Reshaping Social Networks with Advanced Computational Techniques</title>
<link>https://hdl.handle.net/1721.1/140124</link>
<description>Understanding and Reshaping Social Networks with Advanced Computational Techniques
Yuan, Yuan
Social networks are powerful in modeling interdependence among individuals. Recently, the availability of large-scale social network data and advances in computational tools have facilitated rapid development in social network research. However, a few important aspects of social networks remain understudied, and advanced computational tools may not directly help social scientists derive scientific knowledge. My thesis thus aims to move towards applying and developing computational tools that help investigate important questions on social networks.&#13;
&#13;
The first component of my thesis focuses on understanding social interactions and networks, which offers implications for reshaping social networks to improve social cohesion. Specifically, I examine the formation and dynamics of social networks, with a focus on social exchange and "long ties." Utilizing large-scale social network data and computational tools, I first discuss the benefits of social exchange with dissimilar people in social networks; I then study dynamic social networks, focusing on long ties, or the social ties that bridge different communities. Methodologically, I develop a novel interdisciplinary approach that combines game theory and machine learning techniques.&#13;
&#13;
Second, I study what features on online platforms may improve social interactions and reshape social networks. To do so, I utilize large-scale data from online social media and provide two examples in the field. The first example is the identification of social contagion in online gift giving. This study examines how receiving a gift prompts the recipient to pay the gift forward, and also discusses how this social contagion can promote social interactions and tighter social bonds. The other example examines how designs for peer effects and prosociality on online social platforms encourage users' offline fitness behavior. Methodologically, both studies involve advanced causal inference and machine learning techniques to test the main hypotheses.&#13;
&#13;
Moreover, I develop computational tools that analyze social network data. In the final component of my thesis, I introduce an algorithm for controlled experiments in social networks. This algorithm detects heterogeneous spillover effects -- how the treatment assignments received by one's network neighbors affect a person's behavior -- in data from networked experiments. This interdisciplinary algorithm combines approaches in causal inference, machine learning, and network science.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140124</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cumulativity from Homogeneity</title>
<link>https://hdl.handle.net/1721.1/140123</link>
<description>Cumulativity from Homogeneity
Chatain, Keny
Since Schein (1996), cumulative readings of quantifiers have often motivated a departure from standard assumptions about composition. This dissertation proposes a new theory of these cumulative readings that connects them to the phenomenon of homogeneity. Specifically, taking inspiration from Bar-Lev (2018), I argue that predicates sometimes have weak existential meanings, which are revealed when placed under negation. The stronger meanings observed in positive sentences are the result of a procedure of exhaustification. By recognizing predicates' underlying weak meanings and their liability to strengthening, cumulative readings of quantifiers can be accounted for while maintaining relatively standard assumptions about composition. This analysis predicts a range of intricate cases, including Schein's famous video-game examples. It also predicts the truth-conditions of negative cumulative sentences and asymmetries in the availability of cumulative readings of quantifiers.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140123</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prototype development and techno-economic analysis of electrochemical energy storage systems</title>
<link>https://hdl.handle.net/1721.1/140122</link>
<description>Prototype development and techno-economic analysis of electrochemical energy storage systems
Narayanan, Thaneer Malai
The US has to implement decarbonization efforts at twice the current rate to achieve its net-zero emission target by the year 2050. Electrochemical energy storage systems are expected to play an important role in managing the temporal and spatial mismatch between the availability of variable renewable energy (VRE) sources and energy demand. Despite the prevalence of Li-ion batteries, this technology alone cannot be a panacea for all our energy storage needs, particularly for applications such as long-duration energy storage for the electric power sector and clean energy carriers for other energy sectors. In this study, three technologies with low energy capacity costs to meet the aforementioned demands were evaluated and their potential roles in the future decarbonized energy sector were identified.&#13;
&#13;
First, the feasibility of a new flow battery chemistry, namely, the Zn-MnO2 semi-solid flow battery (SSFB), was evaluated for energy storage applications in the electric power sector. Despite the low energy capacity cost of the Zn-MnO2 SSFB, stringent pumping requirements compared to an all-liquid flow battery may limit its techno-economic feasibility. To understand the trade-off between electrochemical performance, rheological performance, and cost, experimental analysis and bottom-up cost analysis were performed. The high power required for pumping was found to be a bottleneck for the power capacity costs of the SSFB system. However, with the adoption of appropriate strategies, the system cost can be made competitive with Li-ion battery systems for discharge durations over a day.&#13;
&#13;
Secondly, the feasibility of Zn-air and Al-air battery technologies was evaluated using techno-economic analysis. Three important cell performance parameters were studied to understand their sensitivity towards the levelized cost of storage of the metal-air batteries in comparison to existing Li-ion technology. Technologies such as Zn-air batteries were found to require collective improvements in all three cell performance parameters (areal capacity, cycle life, and efficiency) to be competitive with existing solutions like Li-ion batteries.&#13;
&#13;
Thirdly, we evaluated the techno-economic feasibility of alternative hydrogen storage systems such as liquefied hydrogen (LH2) and liquid organic hydrogen carriers (LOHC) to meet the energy demand in both the electric power and other sectors using a hydrogen supply chain optimization model. We found that these ultra-low energy capacity cost technologies can play an important role in meeting seasonal demand in both energy sectors. Additionally, the low discharge power capacity cost of LH2 was found to add significant value for short-burst energy release in the electric power sector, similar to present-day liquefied natural gas.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140122</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bootstrapping new knowledge from abstract representations</title>
<link>https://hdl.handle.net/1721.1/140120</link>
<description>Bootstrapping new knowledge from abstract representations
Pelz, Madeline C.
A long tradition in developmental psychology has used formal scientific inquiry as a basis for understanding learning in early childhood. But much of this work has focused on situations in which children observe firsthand the covariation between parts of a causal system, or can intervene directly on the system in order to test and refine their hypotheses. While these studies point to impressive inferential abilities, both formal science and everyday reasoning require us to make inferences about hidden generative processes even without any direct evidence. In this thesis, I aim to address scenarios in which young children can bootstrap new knowledge using 1) knowledge about their own knowledge, 2) knowledge about probable underlying generative processes, and 3) knowledge about high-level properties linking causal events. My approach combines computational modeling with behavioral data from both adults and young children (ages 4-8 years).&#13;
&#13;
The first set of experiments demonstrates that adults and children can metacognitively represent the amount of information they might need to solve a particular statistical reasoning problem, suggesting that young children have precise metacognitive access to their own knowledge. The second study demonstrates that adults and children can infer an agent’s mental state and goals based only on a trace left on the environment, suggesting that children can identify hidden underlying generative processes and use them as the basis for rich inferences. The third study demonstrates that children can use high-level properties in order to link causal events, using features that are preserved across simple causal functions in order to match effects to their candidate causes. Taken together, these findings suggest that even if children do not have access to covariation data necessary to establish a relationship through statistical evidence, they can rely on other subtle sources of information in order to bootstrap new knowledge in a variety of domains.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140120</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploration and Exploitation Techniques for High-Dimensional Simulation-Based Optimization Problems in Urban Transportation</title>
<link>https://hdl.handle.net/1721.1/140119</link>
<description>Exploration and Exploitation Techniques for High-Dimensional Simulation-Based Optimization Problems in Urban Transportation
Tay, Timothy
Stochastic traffic and mobility simulation models are popular tools for modeling urban transportation networks. However, their use for optimizing urban transportation networks can be challenging due to their computationally intensive nature. This thesis focuses on high-dimensional simulation-based optimization (SO) problems. To find solutions with good performance efficiently, we need to balance exploration and exploitation. We propose techniques for achieving a better balance between exploration and exploitation when tackling high-dimensional SO problems in urban transportation.&#13;
&#13;
The first part of the thesis considers a general-purpose exploration mechanism and introduces exploitation components to it. We propose an inverse cumulative distribution function (cdf) sampling mechanism that makes use of problem-specific prior information, in the form of an analytical model, to efficiently sample points with good performance. The inverse cdf sampling mechanism can be used in conjunction with any optimization algorithm. We study whether problem-specific prior information should be used in the exploration (i.e., sampling) mechanism and/or the exploitation (i.e., optimization) algorithm when tackling a high-dimensional traffic signal control problem in Midtown Manhattan. The results show that the use of the inverse cdf sampling mechanism as part of an optimization framework can help to quickly and efficiently identify solutions with good performance.&#13;
&#13;
The second and third parts of the thesis focus on developing a framework to enable high-dimensional Bayesian optimization (BO) for stationary and dynamic transportation SO problems, respectively. BO naturally combines exploration and exploitation. In the second part, we consider stationary problems and propose approaches to incorporate problem-specific prior information in the BO prior functions so as to jointly enhance both exploration and exploitation. This is done through the use of a stationary analytical surrogate traffic model. In the third part, we extend the BO framework to tackle dynamic problems by formulating and embedding a computationally efficient dynamic analytical surrogate traffic model. For both parts, we evaluate performance on a traffic signal control problem for a congested Midtown Manhattan (New York City) network. The proposed methods enhance the ability of BO to tackle high-dimensional urban transportation SO problems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140119</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on International Trade and Sovereign Debt</title>
<link>https://hdl.handle.net/1721.1/140118</link>
<description>Essays on International Trade and Sovereign Debt
Serfaty, Charles
This thesis studies the way intratemporal trade matters for intertemporal trade, focusing on three different interactions through sovereign debt. The thesis is divided into three chapters.&#13;
&#13;
In the first chapter, I show that existing evidence suggests that sovereign defaults disrupt international trade. As a consequence, countries that are more open have more to lose from a sovereign default and are less inclined to renege on their debt. In turn, lenders should trust more open countries and charge them lower interest rates. In most cases, a country should also borrow more the more open it is. The first chapter formalizes this idea in a simple sovereign debt model à la Eaton and Gersovitz (1981). It also provides evidence using gravity-based instrumental variables from Frankel and Romer (1999) and Feyrer (2019) as a source of exogenous variation in trade openness.&#13;
&#13;
In the second chapter of the thesis, we develop a new model of crisis contagion through international trade. We focus on sovereign debt crises in a multi-country economy with endogenous default à la Eaton and Gersovitz (1981). The starting point of our analysis is the observation that sovereign defaults reduce not only international borrowing but also international trade flows: international trade is a commitment device for repaying debt, as a disruption in trade is one of the costs of default. As a consequence, when a country defaults, it reduces gains from trade in the rest of the world and raises the incentives to default everywhere else. After providing some suggestive evidence for this kind of contagion through trade, we show how our model can rationalize default waves. Our model also predicts that more trade openness lowers the risk of a worldwide crisis, and it has normative implications for tariffs and macroprudential policies because there is excess debt. A tax on debt, together with free-trade agreements that include special tariff derogations for countries that want to default, improves welfare from intertemporal transfers.&#13;
&#13;
In the third chapter, I ask why the sovereign spreads of a country appear to depend more on economic conditions in the rest of the world than on those of the country itself. To shed light on this puzzle, I propose an Eaton-Gersovitz sovereign debt model with international trade and terms-of-trade effects. I assume that there is an exogenous foreign demand for the domestic good that can vary over time and that trade costs increase whenever the sovereign government defaults. After calibrating my model on recent data, I show that a large share of the volatility of spreads can be explained by movements in the foreign demand for domestic goods.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140118</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mobile Carbon Footprinting: Sensing and Shaping the Carbon Emissions of Daily Activities Using Digital Technologies</title>
<link>https://hdl.handle.net/1721.1/140115</link>
<description>Mobile Carbon Footprinting: Sensing and Shaping the Carbon Emissions of Daily Activities Using Digital Technologies
Brazier, Johnna Cressica
Lifestyle changes for reducing greenhouse gas emissions are becoming a necessary component of climate change mitigation. Data-driven digital tools for encouraging widespread participation in bottom-up emissions reduction and climate action do not yet exist, however, and emissions remain illegible in the public’s daily information context and built environment. In this dissertation, I examine whether digital technologies can make personalized emissions information more accessible and actionable in everyday life, by developing and evaluating a digital system for footprint feedback in three phases: an exploratory study, a short-term feedback intervention, and a quasi-experimental assessment of long-term participation in activities that reduce emissions. In the exploratory study, I use focus groups and interviews to understand how members of the public incorporate multi-dimensional, daily activity-oriented footprint information into evaluating their activity choices. I then incorporate these observations into the smartphone-based digital feedback tool, named Mobile Carbon Footprinting, which uses daily activity diaries to track and compare emissions across an interrelated set of everyday activities: travel, time at home and away from home, food, and other expenditures.&#13;
&#13;
The results of the experimental intervention suggest that the digital feedback system might support short-term efforts to reduce emissions for a limited set of self-reported activities, but automatically recorded activities remain unchanged. Attention to the interactive footprint feedback was likely a key factor linked to emissions reductions. In line with previous studies, the information-based intervention increased participants’ awareness of the emissions consequences of daily activities. In the long term, however, increases in awareness might not improve, and might even reduce, motivation to participate in personal emissions reductions. In light of the experiment findings, I discuss the role of emissions feedback systems in directing attention to, motivating, and supporting learning and communication about lifestyle changes aimed at reducing emissions, as well as the technical and organizational challenges that widespread deployment of personal footprint tracking might face. Recognizing these challenges, I propose that future iterations of continuous, multi-activity footprint feedback designs could support policies and programs for behavioral emissions mitigation and education at multiple scales.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140115</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A theory of two strong islands</title>
<link>https://hdl.handle.net/1721.1/140111</link>
<description>A theory of two strong islands
Privoznov, Dmitry
This thesis is dedicated to two strong island effects: The Subject Condition and The Adjunct Condition. Both effects can be unified under a single generalization, known as Condition on Extraction Domain, or CED (Cattell, 1976; Kayne, 1981; Huang, 1982): any maximal projection that is merged with a phrase is an island.&#13;
&#13;
The thesis develops the so-called Spell Out theory, based on the original proposal by Johnson (2003). This theory derives CED from two basic assumptions about when and to which constituent Spell Out is applied over the course of syntactic derivation. The assumptions are, first, that between any two phrasal sisters at least one must be spelled out, and second, that a spelled out phrase does not project its category. The thesis also offers a theory of the interaction between syntactic derivation and memory structure that derives these two assumptions. The core principle is that focus of attention can only hold one element at a time.&#13;
&#13;
The thesis examines three main predictions of the Spell Out theory. The first prediction is the Adjunct Condition. The thesis shows that adjuncts may sometimes be transparent, but only if their sister is opaque. The second prediction is the Subject Condition. The thesis argues that any extraction out of subjects either involves extraction out of complements (not specifiers) or covert pied-piping. The third and new prediction is that all specifiers and all adjuncts are interpreted by the LF interface before their sister, that is, they create the local context for their sister, as is evident from the behavior of discourse anaphora (the so-called Island Condition).
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140111</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The (in)distinction between wh-movement and c-selection</title>
<link>https://hdl.handle.net/1721.1/140102</link>
<description>The (in)distinction between wh-movement and c-selection
Newman, Elise Sophia Bershad
This thesis asks the following question: what can wh-movement teach us about verb phrase structure? I examine two apparent interactions between wh-movement and Voice: Mayan Agent Focus and the Double Object Movement Asymmetry (DOMA) (Holmberg et al., 2019). In certain Mayan languages, subject but not object wh-questions require the verb to take a special intransitive-looking form; in many languages with symmetrical passives, wh-moving an indirect object in a passive clause is restricted to contexts in which the indirect object is the passive subject. By contrast, wh-moving direct objects face no restrictions about which argument is the passive subject. Typical approaches to these phenomena take the basic underlying verb phrase structure of a language to be insensitive to whether any of its arguments are wh-phrases. In other words, the fact that wh-questions are built from clauses containing a wh-element, while non-questions are built from clauses that lack a wh-element, is assumed to be irrelevant to what we assume the basic underlying clause structure to be in each case: object wh-questions are therefore assumed to be built from clauses that are identical to their non-wh-counterparts; subject wh-questions are assumed to be built from clauses that are identical to their non-wh-counterparts, and so forth. On this view, many researchers propose that the so-called interactions between wh-movement and Voice should be explained by constraints on wh-movement from certain contexts. By contrast, I take the opposite approach. I propose that the observed interactions between wh-movement and Voice are teaching us very transparently about the basic structure of clauses that contain wh-elements, which may be different from their non-wh-counterparts.
In other words, Mayan Agent Focus teaches us that clauses containing a wh-subject (as opposed to a non-wh-subject) are built in such a way as to feed intransitive-looking morphosyntax; the DOMA teaches us that indirect object wh-phrases (in contrast to non-wh-indirect objects) are always generated in such a way as to make them the subject in a passive clause. I propose a theory of the features driving Merge in which the underlying position of a wh-phrase is determined not only by the “selectional” properties of verbs, but also by the feature that controls successive cyclic wh-movement through the edge of the verbal domain. Thus, the structure of a verb phrase is not invariant across all contexts: it depends on the features and categories of the elements that are configured inside of it, including the distribution of wh-elements. This approach likewise has implications for clauses that do not contain wh-elements, which I propose account for symmetric and asymmetric A- and Ā-movement in different contexts.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140102</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfaces and Models for Improved Understanding of Real-World Communicative and Affective Nonverbal Vocalizations by Minimally Speaking Individuals</title>
<link>https://hdl.handle.net/1721.1/140101</link>
<description>Interfaces and Models for Improved Understanding of Real-World Communicative and Affective Nonverbal Vocalizations by Minimally Speaking Individuals
Narain, Jaya
This work focuses on a sub-group (denoted by mv*) of non- and minimally speaking individuals who have fewer than 10 words or word approximations and limited expressive language through speech and writing. In the United States alone, this group comprises over one million individuals. Their nonverbal vocalizations (i.e., vocalizations that do not have typical verbal content) often have self-consistent phonetic content and vary in tone, pitch, and duration depending on the individual’s emotional state or intended communication. While these vocalizations contain important affective and communicative information and are understood by close family and friends, they are often poorly understood by those who do not know the communicator well. Improved understanding of these nonverbal vocalizations could contribute to the development of technology to augment communication. This thesis aims to help the community at large better understand and communicate with mv* individuals by utilizing families’ unique understanding of nonverbal vocalizations.&#13;
&#13;
For this work, families provided personalized labels for vocalizations, which were then used to compile a novel dataset and train machine learning models. The thesis contributes (1) the design and evaluation of a novel data collection protocol for real-world audio with personalized in-the-moment labels, (2) a new dataset, ReCANVo, of over 7,000 nonverbal vocalizations from eight mv* communicators, collected longitudinally in real-world settings, (3) machine learning evaluation strategies and algorithms suitable for messy, real-world data that can classify vocalizations from mv* individuals with F1-scores above chance, and (4) the design of a novel communication interface, based on interviews, surveys, and data analyses. The presented dataset ReCANVo is the only dataset of nonverbal vocalizations from mv* individuals, the largest dataset of nonverbal vocalizations, and one of the first datasets capturing real-world emotions across settings. The presented data analyses show, for the first time, that it is possible for models to classify nonverbal vocalizations by mv* individuals by function using audio alone. While this work was motivated by impact for a small, specialized population, the results can inform the design of real-world data collection and modeling approaches more broadly.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140101</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Role of Drosophila Tomosyn in Synaptic Strength and Plasticity</title>
<link>https://hdl.handle.net/1721.1/140098</link>
<description>Investigating the Role of Drosophila Tomosyn in Synaptic Strength and Plasticity
Sauvola, Chad W.
Neurotransmission is an adaptation of cellular secretion characterized by precise spatial and temporal regulation of SNARE assembly that occurs at specialized presynaptic subdomains in response to transient calcium influx following an action potential. Synaptic vesicle (SV) fusion from the presynaptic terminal results in a postsynaptic response that varies in size depending on synaptic strength. Both the postsynaptic and presynaptic terminals contribute to synaptic strength, with the postsynaptic terminal regulating its own sensitivity to neurotransmitters by governing receptor field composition, and the presynaptic compartment controlling the probability of SV fusion (Pr) following an action potential. While many postsynaptic mechanisms controlling strength have been described, the presynaptic contribution remains incompletely understood. Chapter 1 describes current models of SNARE assembly and disassembly during cycles of synaptic vesicle release. Each protein described in this chapter provides a potential point of regulation for setting presynaptic strength and modulating presynaptic release during plasticity.&#13;
&#13;
Chapter 2 focuses on the decoy SNARE protein Tomosyn and its role at the Drosophila larval neuromuscular junction (NMJ). Larval muscles are typically co-innervated by two glutamatergic motoneurons (Ib and Is) that show highly stereotyped differences in Pr at rest as well as differential expression of presynaptic homeostatic plasticity (PHP) when glutamate receptor function is impaired. Tonic Ib terminals display moderate initial Pr, robust potentiation, and sustained release during train stimulation, whereas phasic Is terminals show high intrinsic Pr, rapid depression, and variable PHP expression. Tomosyn contributes to these differences by suppressing Pr and evoked release from tonic Ib motoneurons without affecting phasic Is release. tomosyn null mutants show phasic-like properties, including high intrinsic Pr, enhanced depression, and impaired presynaptic homeostatic potentiation, suggesting Tomosyn regulates the tonic/phasic character and PHP expression of Drosophila synapses. The results in this chapter argue that Tomosyn suppresses Pr at Ib synapses to enable tonic release and robust potentiation. Phasic release dominates when Tomosyn expression is low, contributing to the high intrinsic Pr in Is terminals at the expense of sustained release and robust PHP. Chapter 3 outlines future directions that might lend further insight into how Tomosyn regulates presynaptic release.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140098</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translational phosphoproteomics methods to identify biomarkers&#13;
and novel therapeutic targets</title>
<link>https://hdl.handle.net/1721.1/140097</link>
<description>Translational phosphoproteomics methods to identify biomarkers&#13;
and novel therapeutic targets
Kohale, Ishwar N.
Phosphorylation plays a fundamental role in cellular processes, and it is commonly dysregulated in cancer. Characterization of phosphorylation-mediated signaling networks in tumors can inform therapeutic interventions. Additionally, analysis of the phosphoproteome in response to drug candidates can help identify biomarkers of therapeutic response as well as lend direct insight into potential adaptive resistance mechanisms. However, quantification of the phosphoproteome, especially the translationally relevant low-abundance signals including tyrosine and pathway-specific phosphorylation, is limited in the clinic. Here, we describe mass spectrometry-based methods and their applications for quantitative analysis of the low-level phosphoproteome in preclinical models and patient tumors.&#13;
&#13;
In the first part, we describe a method for highly sensitive and quantitative analysis of tyrosine phosphorylation from 1 to 2 10-µm sections of formalin-fixed, paraffin-embedded (FFPE) clinical tissue specimens, opening the door to direct translational insights from FFPE tumor tissue banks in hospitals. In the second part, we present an integrative platform combining mass spectrometry imaging, phosphoproteomics, and multiplexed tissue imaging to map drug distribution, target engagement, and adaptive response, providing insight into heterogeneous responses to therapy. In the last part of the thesis, we demonstrate the application of quantitative tyrosine phosphorylation analysis to identifying novel therapeutic targets in chemotherapy-resistant triple-negative breast cancer tumors. Together, these approaches highlight the potential of low-level phosphorylation signals to serve as biomarkers and inform novel therapeutic strategies.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140097</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Optimization Methods: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/140096</link>
<description>Large-Scale Optimization Methods: Theory and Applications
Vanli, Nuri Denizcan
Large-scale optimization problems appear quite frequently in data science and machine learning applications. In this thesis, we show the efficiency of coordinate descent (CD) and mirror descent (MD) methods in solving large-scale optimization problems.&#13;
&#13;
First, we investigate the convergence rate of the CD method with different coordinate selection rules. We present certain problem classes for which deterministic rules provably outperform randomized rules. We quantify the amount of improvement and the corresponding deterministic order that achieves the maximum improvement. We then show that for a certain subclass of problems, using any fixed deterministic rule yields superior performance compared with using random permutations. Then, we illustrate the efficiency of the CD method on a constrained non-convex optimization problem that arises from semidefinite programming with diagonal constraints. We show that the proposed CD methods can recover the optimal solution when the rank of the factorization is sufficiently large, and establish the rate of convergence. When the rank of the factorization is small, we provide tight approximation bounds as a function of the rank.&#13;
&#13;
Next, we study convergence properties of the continuous-time and discrete-time MD methods. We present a unified convergence theory for mirror descent and related methods. Then, we establish the implicit bias of the MD method with non-differentiable distance generating functions. Finally, we introduce the continuous-time MD method with non-differentiable and non-strictly convex distance generating functions. We show the existence and convergence of the solutions generated by the MD method and establish their implicit bias. We illustrate that the combinatorial algorithms resulting from this approach can be used to solve sparse optimization problems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140096</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identification and Knockout of Immunodominant Endogenous Retroviral Antigen in Murine Tumor Models</title>
<link>https://hdl.handle.net/1721.1/140095</link>
<description>Identification and Knockout of Immunodominant Endogenous Retroviral Antigen in Murine Tumor Models
Kang, Byong Ha
Advances in cancer immunotherapy have demonstrated the dominant role of T cells in an anti-tumor response, but the functional role of B cells and their antibodies in cancer remains less clear. In murine tumor models, curative immunotherapies have been previously shown to result in the development of immunotherapy-induced antibodies (iiAbs). These iiAbs were able to protect naive mice from intravenous tumor challenge and cross-react to heterologous tumor cells, motivating the identification of their antigens. Using a 2D-gel-based technique, we identified that the antigens targeted by iiAbs are the products of an endogenous retrovirus (ERV) called ecotropic murine leukemia virus (eMLV). In particular, the envelope glycoprotein (env) of eMLV was found to be the dominant cell-surface antigen targeted by iiAbs.&#13;
&#13;
Based on this finding, we studied the role of anti-env antibodies in an anti-tumor response. An anti-env antibody termed 1E4 was isolated from a cured mouse by single-cell cloning, and its affinity was improved by yeast surface display of its single-chain variable fragment. Systemically administered 1E4 was efficacious in a prophylactic setting, and prophylactic vaccination against env protected mice from tumor challenge. However, 1E4 did not show any therapeutic efficacy in combination with a cytokine. We found that this discrepancy in efficacy was due to the production of replication-competent eMLV by the tumor cells. Murine tumor models commonly used for preclinical studies (B16F10, MC38, 4T1, and CT26) were able to produce replication-competent eMLV capable of infecting other murine cells in vitro and T cells in vivo.&#13;
&#13;
Human tumors have not been shown to produce infectious ERV, and so we removed the infectious potential of these murine tumor models by knocking out env. The envKO tumors were more susceptible to immune control, to varying degrees, compared to control knockout cells in vivo. CT26 envKO tumors were spontaneously rejected, and MC38 envKO tumors grew much more slowly than the control tumors. Both B16F10 and 4T1 envKO tumors were characterized by a slight growth delay, and 4T1 envKO tumors were found to be more susceptible to immune checkpoint blockade therapy. Single-cell RNA sequencing of untreated 4T1 envKO tumors and tumor-infiltrating immune cells revealed that 4T1 envKO tumor cells were more inflamed and that CD4+ T cells and M1 macrophages were more activated.&#13;
&#13;
In separate work, we used yeast surface display to develop antibodies against an inhibitory receptor CD161, which was overexpressed in clonal CD8+ T cells found in primary human glioma samples. Inactivation of KLRB1, the gene encoding for CD161, in T cells was previously found to result in higher activation and improved T-cell function. Anti-CD161 antibodies developed by yeast surface display were able to block the interaction between CD161 and its ligand CLEC2D and induce activation of T cells.&#13;
&#13;
Overall, this work presents methods to identify antigens targeted by antibodies of unknown specificity and to develop antibodies against an antigen of interest. The findings in this work shed light on a pervasive problem in murine tumor models commonly used for preclinical studies.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140095</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disordered Optics for Multidimensional Information Processing</title>
<link>https://hdl.handle.net/1721.1/140093</link>
<description>Disordered Optics for Multidimensional Information Processing
Li, Xinhao
Photonic platforms with multiplexing capabilities are of profound importance for high-dimensional information processing. With the rapid expansion of data volume, growing pressure on sensor and post-processing hardware calls for the development of physical pre-processing interfaces. Complex photonic systems address these challenges by serving as interfaces between raw signals and sensors, improving the efficiency of information reconstruction. In this thesis, we explore disordered photonic devices for multidimensional information processing and compatible fabrication techniques. &#13;
&#13;
The first part of the thesis concerns diffractive optical elements (DOEs) for spectral imaging. We designed a spatially modulated DOE filter that can efficiently sample in the Fourier-transformed domain and facilitate spectral image reconstruction. The DOE layer distinguishes the main Fourier spectral components, and the second spatial modulation layer mediates spectral aliasing. Unlike conventional snapshot spectral imagers, our design does not require sub-super-pixel-level sensing, which enables efficient use of sensor resolution. We further demonstrated a grayscale stencil lithography technique for efficient and customizable manufacturing of DOEs or multilayer optics with spatial thickness variation. &#13;
&#13;
The second part of the thesis concerns scattering reservoir computers (RCs). Complex optical media show great potential for large-scale optical RCs thanks to their intrinsic parallelism and scalability. We identify a trade-off between the fading memory and the non-normality of scattering RCs, which determines their memory capacity and resistance to noise. Further, we propose a transient amplification method to fully harness the high noise resistance and high dimensionality of non-normal scattering RCs. We also develop a dynamic hydrogel scatterer and a projection lithography method for 2D patterning of gain/loss materials, which are promising for applications in dynamic light modulation and re-configurable optical computing.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140093</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Coherence Tomography Angiography for Imaging and Analysis of the Choriocapillaris in Late Age-Related Macular Degeneration</title>
<link>https://hdl.handle.net/1721.1/140092</link>
<description>Optical Coherence Tomography Angiography for Imaging and Analysis of the Choriocapillaris in Late Age-Related Macular Degeneration
Moult, Eric Michael
Age-related macular degeneration (AMD), a progressive disease of the retina and choroid, is a leading cause of vision loss. At present, AMD pathogenesis remains incompletely understood, and approved treatments only exist for certain AMD subtypes. In this thesis, we investigate AMD pathophysiology with a focus on choriocapillaris impairment in macular neovascularization and geographic atrophy, the two subtypes comprising late AMD. The choriocapillaris, the capillary layer of the choroid, is responsible for nourishing the outer retina, including the photoreceptors, and has long been hypothesized to have an important role in AMD development and progression. We contribute to the existing understanding of AMD-associated choriocapillaris impairment by developing and applying optical coherence tomography angiography (OCTA) technology in conjunction with image analysis and disease modeling approaches. Key study results presented in this thesis include: (1) the development of an OCTA-based method for microvascular velocimetry that is compatible with clinical ophthalmic imaging; (2) the demonstration and quantification of choriocapillaris impairment surrounding macular neovascularization; (3) the association of macular neovascularization blood flow speeds with responses to vascular endothelial growth factor inhibitor treatment; (4) the demonstration of choriocapillaris impairment surrounding regions of geographic atrophy; and, (5) the development of a biophysical model describing the spatiotemporal expansion of geographic atrophy, and the integration of this model with spatial statistical methods to enable rigorous assessments of local correlations between choriocapillaris impairment and geographic atrophy growth.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140092</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Drawing Participation: Histories of Geospatial Computing, Professional Silos, and Computational Potentials for Collaboration in Planning and Design</title>
<link>https://hdl.handle.net/1721.1/140089</link>
<description>Drawing Participation: Histories of Geospatial Computing, Professional Silos, and Computational Potentials for Collaboration in Planning and Design
Sandoval Olascoaga, Carlos Emilio
Our tools for geocomputing help us synthesize, record, represent, and imagine the world. But how can our tools for describing, representing, and designing space help us plan for an uncertain and equitable urban and ecological future? And how can we empower, include, support, and reconcile the visions of a diverse population and complex ecological change in thinking about the future? This is particularly relevant, as in the next forty years we will build as much urban fabric as we have in all of past human history. Yet, for the past forty years, the built environment has been shaped by tools for geocomputing that reflect the vision of a single vendor and the needs of disciplines other than design and planning. &#13;
&#13;
Instead, our Geospatial Computing Systems (GCS), their data models, interfaces, and methods need to start with domain theory and critical understandings of the background of our current tools, rather than the capacities of a specific computational framework. As a model in this direction, in this dissertation I integrate critical research of the different historical, cultural, and technological forces that have shaped our GCS as the starting point of the tool development process.&#13;
&#13;
In the first part of the dissertation, I introduce two episodes in the history of digital GCS: the development of early digital mapping tools at the Laboratory of Computer Graphics (LCG) at Harvard, and the subsequent development of GCS programs at private institutions. I begin by exploring the different ways in which drawing enabled GCS to become computational. I present the early encounters between drawing and computers at the LCG to argue that digital computing was implemented in GCS methods to broaden the use of mapping and facilitate collaboration in the planning and design disciplines. I show how, at the same time, such early computer mapping programs inspired and enabled the development of rational planning and design methods.&#13;
&#13;
Second, I present the history and work of the Environmental Systems Research Institute (ESRI) during the 1970s and early 1980s, a period and an institution that developed modern GCS software paradigms. Through a computational lens, I show how these interfaces and representations have permanently affected the ways in which GCS have been incorporated into planning and design, and have resulted in the lack of a collaborative, robust, visual, graphic-based GCS framework for design and planning practice. &#13;
&#13;
In the second part of this dissertation, I reflect on the limitations and potentials that have been historically built into current GCS tools, and I introduce a series of guidelines to improve GCS frameworks for participatory planning and inclusive design. I describe 1) the conceptual and technical background of two new tools; 2) the technical implementations of the tools; and 3) a set of case studies to test the value of both tools in planning and design education and practice. The first tool that I present in this dissertation, Painting with Data (PWD), is an open-source, collaborative, web-based software with a visually based interface and data structure, which allows users to create spatial models by directly manipulating the graphic representation. The second tool, Drawing Participation (DP), is the first real-time, peer-to-peer, collaborative GCS tool that integrates drawing, mapping, and spatial analysis functionality to bridge the capacities of two software paradigms, GCS and CAD. Altogether, the frameworks point toward the potential of GCS tools for integrating analytic, participatory, and design methods, while bringing ideas of inclusion, equity, and justice into the urban design and planning process.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140089</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetic and Thermodynamic Aspects of Voltage as a Driving Force for Ammonia Activation</title>
<link>https://hdl.handle.net/1721.1/140088</link>
<description>Kinetic and Thermodynamic Aspects of Voltage as a Driving Force for Ammonia Activation
Schiffer, Zachary J
Renewable energy sources, such as solar and wind, have become increasingly prevalent and helped drive progress toward decarbonization of electricity. The commodity chemical industry is a large consumer of energy and a major contributor to global greenhouse gas emissions, and electrification of the industry using renewable sources is a possible step toward reducing the carbon footprint of chemicals. In this thesis, I first propose a paradigm where electrochemical systems enable bond-formation steps in the chemical industry, leveraging voltage as an alternative driving force to enable operation at mild temperatures and pressures. I then aim to answer the question “If I can apply mechanical energy (pressure), thermal energy (temperature), or electrical energy (voltage) to a chemical reaction, which should I use?” In particular, I present a universal expression for the equilibrium constant of a chemical reaction as a function of thermodynamic driving forces, and demonstrate how this universal equation and facile visualization of chemical reactions enables quick and informed justification for electrochemical versus thermochemical energy sources.&#13;
&#13;
I then focus on the particular case of electrochemical utilization of ammonia, a ubiquitous nitrogen precursor throughout the chemical industry. First, I look at an electrochemical analogue to reductive amination, where a carbonyl group is converted to an amine. Specifically, I demonstrate the electrochemical reductive amination reaction of benzaldehyde and ammonia and investigate its kinetics. I find that the reaction proceeds via an inner-sphere route at heterogeneous metal surfaces, in contrast to most previous work on outer-sphere electrochemical reductive amination systems. I then investigate the kinetics of activating ammonia by breaking the nitrogen-hydrogen bond oxidatively, and I find that the reaction proceeds through an outer-sphere, radical pathway. Last, I propose an energy storage paradigm that leverages ammonium formate, a combination of ammonia and formic acid, to store renewable electricity. I discuss the advantages of this fuel and demonstrate how voltage can aid in the release of energy from this fuel. Overall, in this thesis I start with the broad question of why and when to choose electrochemistry over traditional thermochemical routes in the chemical industry, and I then focus in on how electrochemistry can aid in the utilization of ammonia for both synthesis reactions as well as energy storage purposes.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140088</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Mechanical Interventions on Human Locomotion</title>
<link>https://hdl.handle.net/1721.1/140087</link>
<description>Effects of Mechanical Interventions on Human Locomotion
Lee, Jongwoo
Due to population ageing and the increasing incidence of neurological disorders, the demand for robotic technologies for assisting, augmenting, and restoring human locomotion is rapidly growing. Recent approaches aim to make these devices adaptive in order to improve performance and accommodate individual differences. When developing adaptive devices, however, it should be remembered that humans are also adaptive, and physical interaction with mechanical interventions may substantially change their behavior. To advance technologies for human locomotion, therefore, it is important not only to understand the fundamentals of human locomotion itself, but also to understand how human locomotion is altered by mechanical interventions.&#13;
&#13;
In this thesis, I aimed to understand and establish the fundamentals of the effects of mechanical interventions on human locomotion. In the first part of the thesis, I characterized how human walking changed with a powered hip exoskeleton robot and investigated the underlying principles. In the second part of the thesis, I quantified how human balance on a narrow beam was substantially and immediately changed by altering the mechanical interface or using mechanical supports (i.e., canes). Behavioral indicators of changes in central neural processes were investigated, which is critical to determining the potential of an intervention for rehabilitation or compensation. In the last part of the thesis, I developed methods to quantify human balance mechanisms during normal standing without applying perturbations, which may evoke perturbation-dependent changes in the identified human behavior. Throughout this work, simple models were used extensively to design and interpret human experiments as well as to quantify human behaviors with a handful of parameters.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140087</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Profiling, prototyping, and perturbing human immune responses</title>
<link>https://hdl.handle.net/1721.1/140085</link>
<description>Profiling, prototyping, and perturbing human immune responses
Reyes, Miguel
Studies in animal models paved the way for the discovery of several basic mechanisms in immunology, but successful translation of these findings into clinical practice has been rare. Recent advances in biological methods and instrumentation allow the analysis and manipulation of limited quantities of human samples, shifting focus away from inbred mice and allowing an alternative research framework to emerge. This approach aims to ‘reverse-engineer’ the human response by finding disease-relevant biological phenomena through deep phenotyping of humans and using insight from patients to rationally design experimental models and perturbations. By devising systems that faithfully recapitulate human disease biology, this paradigm would enable the discovery of new immunological mechanisms in humans and narrow the translational gap between basic biological findings and their clinical application. &#13;
&#13;
In this thesis, we describe the development of technologies that support this paradigm and the application of this research framework in the study of sepsis. We describe two technologies: one which enables cell-type specific transcriptomic profiling of human blood using an integrated fluidic circuit, and another which enables combinatorial chemical perturbation of human immune cells using droplet microfluidics. In addition, we apply this framework to identify immune phenotypes associated with sepsis and severe COVID-19, and develop an experimental system that models sepsis-induced emergency myelopoiesis using human cells.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140085</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-Layer Systems for Craniomaxillofacial Bone Repair</title>
<link>https://hdl.handle.net/1721.1/140082</link>
<description>Layer-by-Layer Systems for Craniomaxillofacial Bone Repair
Howard, MayLin Tian
The treatment of craniomaxillofacial bone defects with drug-eluting synthetic implants is a promising approach currently under investigation. It is increasingly recognized that the release rate of growth factor proteins that signal bone regeneration and vascularization is a key optimization parameter, since these proteins are rapidly cleared from the body and can have safety ramifications when released too rapidly. A clear delivery challenge exists: to produce a synthetic bone implant that can deliver growth factors locally to the bone defect site in safe doses and over appropriate time frames. A technology that can potentially meet this challenge is layer-by-layer (LBL) self-assembly, which can be used to coat defect-relevant implants with nanoscale-thickness films that elute growth factors with tunable release kinetics and dose.&#13;
&#13;
In this work, we first developed LBL film architectures that deliver the osteogenic growth factor bone morphogenetic protein-2 (BMP-2) over four different time scales, ranging from 2 days to 30 days, by changing the method of diffusional barrier incorporation. We next implanted formulations with rapid or slow release of a minimal dose of BMP-2 in a rat calvarial defect model to determine the influence of BMP-2 release kinetics on bone growth. We then investigated the effects of combination growth factor therapies incorporating the angiogenic growth factors vascular endothelial growth factor (VEGF) or platelet-derived growth factor (PDGF) to determine whether dual-protein delivery could enhance bone regeneration. Finally, we translated BMP-2-eluting films to customized 3D-printed scaffolds and implanted these formulations into a rabbit mandibular defect model to investigate the effects of differential BMP-2 release kinetics in a larger animal model with a load-bearing bone defect. &#13;
&#13;
As a result of this research, we developed LBL diffusional barrier tools that can be used to engineer targeted release kinetics of growth factor proteins. These tools could be applied to various growth factors or other biologic therapeutics in order to address delivery challenges in other disease and injury applications. We also leveraged the power and flexibility of LBL technology to enable investigation into optimal delivery parameters for single or dual growth factor delivery in two different craniomaxillofacial bone defects. These findings inform optimization and formulation of future clinical products incorporating growth factors for bone regeneration.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140082</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Performance Computational Genomics</title>
<link>https://hdl.handle.net/1721.1/140081</link>
<description>High-Performance Computational Genomics
Shajii, Ariya
Next-generation sequencing data is growing at an unprecedented rate, leading to new revelations in biology, healthcare, and medicine. Many researchers use high-level programming languages to navigate and analyze this data, but as gigabytes grow to terabytes or even petabytes, high-level languages become prohibitive and impractical for performance reasons. This thesis introduces Seq, a Python-based, domain-specific language for bioinformatics and genomics that combines the power and usability of high-level languages like Python with the performance of low-level languages like C or C++. Seq allows for shorter, simpler code, is readily usable by a novice programmer, and obtains significant performance improvements over existing languages and frameworks. Seq is showcased and evaluated by implementing a range of standard, widely-used applications from all stages of the genomics analysis pipeline, including genomic index construction, data pre- and post-processing, read mapping and alignment, and haplotype phasing. We show that the Seq implementations are up to an order of magnitude faster than existing hand-optimized implementations, with just a fraction of the code. Seq's substantial performance gains are made possible by a host of novel genomics-specific compiler optimizations that are out of reach for general-purpose compilers, coupled with a static type system that avoids all of Python's runtime overhead and object metadata. By enabling researchers of all backgrounds to easily implement high-performance analysis tools, Seq aims to act as a catalyst for scientific discovery and innovation. Finally, we also generalize many of the principles used by Seq to create a domain-configurable compiler called Codon, which can be applied to other domains with similar results.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140081</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-metaprogramming</title>
<link>https://hdl.handle.net/1721.1/140079</link>
<description>Meta-metaprogramming
Koppel, James
Programming languages researchers have developed many advanced tools that promise to greatly ease software engineering. Yet even conceptually simple tools are expensive to implement fully due to the complexity of the target language, and standard techniques tie an implementation to a particular target language. In order to make the development of advanced programming tools economical, these problems demand new techniques for decomposing the development of tools and automating portions of their construction, which I collectively dub "meta-metaprogramming."&#13;
 &#13;
In this thesis, I present three new meta-metaprogramming techniques reducing the work needed to build programming tools, each applicable to the specific problem of sharing implementation code between similar tools for different languages. These techniques respectively allow a single implementation of a transformation to losslessly rewrite code in many languages, automatically generate a family of programming tools from a language's semantics, and develop a new representation for sets of programs which is applicable to a variety of languages and synthesis tasks.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140079</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Versatile Biological Sample Preparation Platform using Microfluidic Cell Sorting Device</title>
<link>https://hdl.handle.net/1721.1/140073</link>
<description>Versatile Biological Sample Preparation Platform using Microfluidic Cell Sorting Device
Choi, Kyungyong
Biological assays for various biological samples are often limited by the low purity of target cells/particles due to the presence of significant host background. A spiral inertial microfluidic cell sorting device can separate cells/particles based on their sizes without any specific labeling, yielding a sample with a higher purity of target particles and thus a higher chance of detecting and analyzing those particles of interest in biological assays. Moreover, the cell-sorting ability of spiral devices can be applied as a cell-washing technology to isolate target cells while removing unwanted particles such as adventitious agents, improving the quality of cellular products or manufacturing cells in biomanufacturing. This work provides numerous applications of spiral cell sorter-based microfluidic sample preparation to various biological samples. We propose applying the spiral microfluidic sorter to various biological samples to acquire enhanced readouts for targeted downstream assays such as next-generation sequencing (NGS) or to improve the quality of cells for biomanufacturing.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140073</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microplasma-Enabled Sputtering of Nanostructured Materials for the Agile Manufacture of Electronic Components</title>
<link>https://hdl.handle.net/1721.1/140072</link>
<description>Microplasma-Enabled Sputtering of Nanostructured Materials for the Agile Manufacture of Electronic Components
Kornbluth, Yosef S.
Additive manufacturing has revolutionized the low-volume manufacturing space; for example, polymers can be extruded and joined together to produce arbitrary shapes at the push of a button. However, this revolution is primarily confined to thermoplastics and, more broadly, to structural materials. The ability to add electronic capabilities to these printed shapes would greatly enhance their utility. Unfortunately, most additive manufacturing methods for conductive features either fail to produce high-quality films or require processing that can damage printed surfaces.&#13;
&#13;
Cleanroom technology is unmatched in its ability to produce high-resolution, high-quality interconnects and electronic features, but it has rigid requirements. The best results require precision equipment and tightly controlled environments, and are limited to patterning planar wafers and removing unwanted material to produce the desired patterns.&#13;
&#13;
This thesis develops and demonstrates the capabilities of a microplasma-based atmospheric-pressure sputterer, which combines the strengths of both. This microsputterer was developed to achieve a direct-write method to deposit arbitrary patterns of electronics-quality thin films for additive manufacturing at room temperature. It uses a sputtering plasma, scaled down to the millimeter scale and operated at atmospheric pressure, without the benefit of pre- or post-processing, and with a minimally controlled environment. The impact of process parameters on the material and manufacturing properties (e.g., adhesion, conductivity, resolution, speed) of the deposits is discussed.&#13;
&#13;
The results of this thesis include near-bulk electrical conductivity for gold films with sub-millimeter resolution and significantly better adhesion than traditionally sputtered films, and alumina films with a breakdown strength that surpasses the state of the art. The printer’s multimaterial capabilities and control over the sheath gas will allow for the creation of objects made of different materials with different electrical properties. This capability allows for the demonstration of practical applications that showcase the printer’s capabilities, including an ultrathin capacitor, produced entirely through microplasma sputtering.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140072</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Security Research for the Public Good: A Principled Approach</title>
<link>https://hdl.handle.net/1721.1/140067</link>
<description>Security Research for the Public Good: A Principled Approach
Specter, Michael A.
Recent history is littered with examples of software vendors betraying user trust, exposing the public to exploitable code, data leaks, and invasive privacy practices. Undirected security research may be insufficient for preventing such foreseeable and preventable failures, as these problems are often the result of misaligned vendor incentives rather than the technical specifics of the systems themselves.&#13;
&#13;
This dissertation illustrates the utility of security research that is motivated explicitly by the goal of realigning the incentives of market actors toward providing better security. We find that a research approach guided by a deep understanding of the economic, regulatory, and technical attributes of the actors involved is crucial for solving important, societally relevant problems in computer security. We present three case studies in applying this vision:&#13;
&#13;
Our first case study considers vulnerability discovery as applied to Internet voting. We perform a security analysis of the dominant Internet voting systems used in U.S. federal elections, including those used in the 2020 U.S. presidential race. We find that, despite decades of research in cryptography and voting, all deployed systems are of simplistic design and suffer from basic security and privacy problems, supporting the conclusion that the market has failed.&#13;
&#13;
Our second case study involves designing cryptography to disincentivize (rather than prevent) bad behavior through the example of deniability in messaging. We find that the evolution of the email ecosystem has inadvertently resulted in most messages being nonrepudiable, incentivizing email theft and public exposure of private data. We present cryptographic constructions that solve this problem while fitting in with email’s already complicated ecosystem. &#13;
&#13;
Our final case study involves government requests to mandate law enforcement access to encrypted data, colloquially known as ‘backdooring’ encryption. We perform a security analysis of technical proposals to provide such government exceptional access, and find that they would cause untenable security and privacy risks.&#13;
&#13;
Finally, we conclude with a discussion of security research as a public good, and provide direction for future work.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140067</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improved prediction and optimal sequencing strategies for genomic variant discovery via Bayesian nonparametrics</title>
<link>https://hdl.handle.net/1721.1/140066</link>
<description>Improved prediction and optimal sequencing strategies for genomic variant discovery via Bayesian nonparametrics
Masoero, Lorenzo
Despite the advent of Big Data, data-gathering in many domains can still be an expensive process that necessitates careful planning when operating under a fixed, limited budget. For instance, sequencing new genomic data is a complex procedure that requires careful tuning: researchers can spend resources to sequence a greater number of genomes (quantity), or spend resources to sequence genomes with increased accuracy (quality). In this thesis, I consider the common setting in which scientists have already conducted a pilot study to reveal variants in a genome and are contemplating a follow-up study. Spending additional resources has the potential to reveal new variations in the genome, and thereby new genetic insights. Therefore, practitioners are interested in (i) predicting how many new discoveries they will make under different experimental design choices. In turn, they can leverage these predictions to optimally allocate available resources in the design of a future experiment, e.g. (ii) to maximize the number of future discoveries or (iii) to optimize the usefulness of a future experiment for the task at hand, e.g. the power of an associated statistical test.&#13;
&#13;
In this thesis, I introduce novel methodologies to solve the problems mentioned above. My approach relies on a Bayesian nonparametric formulation that facilitates (i) prediction for the number of new variants in the follow-up study based on the pilot study. I show empirically that, when experimental conditions are kept constant between the pilot and follow-up, my method's prediction is competitive with the best existing methods. Unlike current methods, though, my new method allows practitioners to change experimental conditions between the pilot and the follow-up. I demonstrate how this distinction allows my method to be used for more realistic predictions and for optimal allocation of a fixed budget between quality and quantity. In particular, I first show how, under a fixed budget, my predictions can be used to maximize (ii) the number of new genomic variants discovered in a follow-up study. Last, I show how my framework can guide practitioners in other experimental design problems, and specifically how to achieve (iii) the highest possible power in statistical tests in the context of rare variants association studies.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140066</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Deep Learning: From Theory to Practice</title>
<link>https://hdl.handle.net/1721.1/140065</link>
<description>Efficient Deep Learning: From Theory to Practice
Liebenwein, Lucas
Modern machine learning often relies on deep neural networks that are prohibitively expensive in terms of their memory and computational footprint. This in turn significantly limits the range of potential applications with non-negligible resource constraints, e.g., real-time data processing, embedded devices, and robotics. In this thesis, we develop theoretically-grounded algorithms to reduce the size and inference cost of modern, large-scale neural networks. By taking a theoretical approach from first principles, we intend to understand and analytically describe the performance-size trade-offs of deep networks, i.e., their generalization properties. We then leverage such insights to devise practical algorithms for obtaining more efficient neural networks via pruning or compression. Beyond theoretical aspects and the inference-time efficiency of neural networks, we study how compression can yield novel insights into the design and training of neural networks. We investigate the practical aspects of the generalization properties of pruned neural networks beyond simple metrics such as test accuracy. Finally, we show how in certain applications pruning neural networks can improve training and hence generalization performance.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140065</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on Natural Language Processing and Information Extraction with Applications to Political Violence and International Security</title>
<link>https://hdl.handle.net/1721.1/140064</link>
<description>Three Essays on Natural Language Processing and Information Extraction with Applications to Political Violence and International Security
Halterman, Andrew
This dissertation consists of three essays that develop new word order aware text analysis methods to extract information from text and apply these techniques to questions in security studies. Most existing text analysis tools in political science classify or cluster documents, discarding the order of words. In contrast, the techniques I introduce take word order into account in order to extract pieces of information within documents. Each paper introduces new text analysis techniques and applies them to measuring key variables in conflict and security studies. One paper introduces a new technique for extracting descriptions of political events, using both the syntax, or grammar, of a sentence and the semantic meaning of words. I apply the new event extraction technique to two domains, extracting descriptions of human rights abuses from State Department reports and descriptions of political violence from Indian newspapers, finding that the standards of the human rights reports have changed over time, and that existing techniques greatly undercount both political violence and police responses to it in India. Another dissertation paper uses new text analysis techniques, including for Arabic text, to create a new dataset of over 100,000 civilian casualties in the Syrian civil war to test theories of violence against civilians. I find that theories emphasizing strategic violence have much greater explanatory power than theories emphasizing territorial control or threats against the regime. A third dissertation paper extends this geolocation work, providing a neural network-based method for linking the events described in text with the locations where they occurred, even when multiple events and locations occur in the same sentence.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140064</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heat Vulnerability and Risk Analytics for The Built Environment</title>
<link>https://hdl.handle.net/1721.1/140062</link>
<description>Heat Vulnerability and Risk Analytics for The Built Environment
Bayomi, Norhan Magdy
Climate change is considered one of the major global concerns facing humankind in the 21st century. Heatwaves have been identified as one of the deadliest climate hazards, especially in urban areas, and as extreme heat events grow in intensity, duration, and frequency, greater risks to urban populations are expected. Heat vulnerability assessment in the built environment is a complex process involving multiple dynamics and components of human-natural systems and their interaction with the surrounding built environment. These dynamics include social factors, demographics, urban growth, environmental changes, access to public services, and policy impacts. Yet there are considerable gaps in the literature on the effect of heat exposure and on the built environment as a protective factor from potential vulnerability and risk perspectives.&#13;
&#13;
This dissertation addresses this need by developing a multifaceted and multi-scalar framework for heat vulnerability assessment. The framework is designed to inform decision-makers on the local dimension and the distribution of vulnerable populations by answering two key questions: What and where are the impacts of heat exposure in an urban setting, and which populations are most susceptible to heat exposure? The dissertation explores vulnerability at multiple levels, starting with a detailed assessment of the built environment that integrates the impacts of the physical characteristics of the existing building stock, available urban resources for long-term adaptation, and individuals’ adaptive capacity and potential health impacts under varying indoor exposure. Next, it develops a methodology for rapid vulnerability analytics that uses novel technology such as aerial thermography coupled with Computer Vision (CV) and Machine Learning (ML) techniques to assess the thermal performance of building envelopes, providing actionable data for adaptation strategies at both the building and district levels. Finally, it presents an evaluation framework to assess policy impact on the vulnerability of the urban system during heat events and to show how delays in the public policy response can increase risk levels.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140062</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Miniaturized Chip-Scale Quantum and Terahertz Systems Through Tight Integration of Electronics, Electromagnetics, and Qubits</title>
<link>https://hdl.handle.net/1721.1/140061</link>
<description>Miniaturized Chip-Scale Quantum and Terahertz Systems Through Tight Integration of Electronics, Electromagnetics, and Qubits
Ibrahim, Mohamed Ibrahim Mohamed
The utilization of electromagnetic waves in quantum information science and the least-explored terahertz (THz) regime is poised to revolutionize sensing, computing, and communication. The key to the prosperity of such a frontier is the development of integrated circuits that enable high-precision and high-flexibility manipulation of the RF-to-optical spectrum. This thesis presents innovations of chip-scale quantum and THz systems, which allow for significant miniaturization, practical solutions, and exciting research opportunities across the device, circuit, and system levels. To illustrate such opportunities, we propose two chip-scale systems realized through tight integration of electronics, electromagnetics, and qubits on CMOS technology. The first one is a hybrid CMOS magnetometer that integrates the essential microwave and optical components to control and measure the field-sensitive quantum states of the solid-state nitrogen-vacancy (NV) centers in diamond. This hybrid architecture is a step toward achieving compact and scalable integrated platforms for quantum-enhanced sensing and information processing. The second system is a package-less THz identification tag (THzID) in CMOS, the smallest monolithic ID chip with far-field communication capability, beam steering, and asymmetric cryptography. This ID opens the door to aggressively utilizing the overlooked size-shrinkage aspect of THz technology while sustaining broad-bandwidth and low-power operation. The thesis concludes with potential improvements and perspectives for future work, in addition to several research directions that utilize the advantages of wireless communication and quantum systems, enabling new paradigms in sensing, computing, and communication infrastructures.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140061</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chromatin accessibility informs cell identity: studies in silico, in vitro, and in vivo</title>
<link>https://hdl.handle.net/1721.1/140060</link>
<description>Chromatin accessibility informs cell identity: studies in silico, in vitro, and in vivo
Hammelman, Jennifer
Chromatin accessibility provides key regulation in defining cell identity as it exhibits control over the ability of transcription factors and transcriptional machinery to bind to regulatory elements and initiate changes to gene expression. In this thesis I elaborate on the intricate relationships between genome sequence, cell type-specific chromatin accessibility, and cell reprogramming to specific fates. The first chapter focuses on an in vitro investigation of the relationship between DNA sequence and chromatin accessibility through the development of a novel massively parallel reporter assay for chromatin accessibility. In collaboration with Budhaditya Banerjee and Rich Sherwood, we identify DNA sequence features and subtle influences of transcription factor ordering within a sequence for tuning levels of cell type-specific chromatin accessibility. The second chapter develops methodology to identify significant cell type-specific DNA sequence patterns, such as transcription factor motifs, or grammar, such as transcription factor spacing or combinations, from deep learning models trained to predict chromatin accessibility from DNA sequence. The third chapter evaluates nine computational methods for ranking reprogramming transcription factors from known reprogramming protocols, and optimizes each method to explore 150 alternative strategies for ranking reprogramming transcription factors. In the final chapter of this thesis, in collaboration with Tulsi Patel and the Wichterle lab, I apply our novel methods and understanding of best practices to investigate chromatin accessibility and gene expression changes during mouse motor neuron maturation in vivo.
Using a time-series of RNA-seq and ATAC-seq collected from embryonic, juvenile, and adult stages of mouse motor neuron development, we find that chromatin accessibility and gene expression changes become static as mice reach adulthood, find that maturation is characterized by both cell type-specific and shared maturation programs, and identify potential transcription factor targets for functional neuronal maturation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140060</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quench dynamics and fiber optic quench detection of VIPER high temperature superconductor cable</title>
<link>https://hdl.handle.net/1721.1/140058</link>
<description>Quench dynamics and fiber optic quench detection of VIPER high temperature superconductor cable
Salazar, Erica
High temperature superconductors (HTS) enable many high field magnet applications—such as magnetic fusion devices—to operate at superior current densities, temperatures, and fields compared to low temperature superconductor (LTS) based magnets. However, HTS introduces new challenges: slow quench propagation velocities (orders of magnitude slower than LTS) make quench events difficult to detect, which increases the risk of permanent damage to the magnet or loss of service if undetected. This thesis describes the test campaign, results, and analysis of an experimental quench test on the VIPER HTS Delta cable, a prototype HTS cable designed in collaboration between MIT-PSFC and Commonwealth Fusion Systems, and supports the results with a numerical 3D Matlab model. The quench tests were performed at the SULTAN test facility in Switzerland. This thesis experimentally measures and verifies the propagation velocities on VIPER HTS cable (the first quench measurement on an HTS cable in fusion-relevant conditions), and proposes a method to standardize quench propagation velocity measurements in HTS cables when fully developed velocities or quench conditions are not possible. To address the challenge of slow quench propagation velocities, a method is described to predict the initiation of a quench (by up to tens of seconds before thermal runaway initiates) by measuring and calculating a quench temperature threshold parameter. Additionally, a novel temperature-based fiber optic technology (FBG and ULFBG) is validated as a promising quench detection technology that is immune to the challenging electromagnetically induced noise affecting conventional voltage-based quench detection systems. The temperature-based quench prediction parameter, in combination with the temperature-based fiber optic quench detection technology, enables detection of quench initiation before the quench event is fully developed.
Robust quench detection technology enables HTS cable designs to operate at higher currents relative to the critical current (Ic). The VIPER Delta cable exhibited high cryostability (it did not quench when exposed to local thermal disturbances) when tested up to 0.9 of Ic at 10 K and up to full Ic at 20 K. A higher operating current relative to Ic is desirable because it decreases the required amount of HTS tape and the cross sectional area of the cable, which significantly reduces the cost of the cable design and frees more of the cable’s cross sectional area for other magnet system requirements such as structural or insulation requirements. By mitigating the quench dynamics and detection risks of employing HTS in high field applications, this work opens more opportunities to expand the operation and innovation of high field magnets. In particular, the SPARC program and future tokamak devices will have a pathway to implement smaller, more economical compact fusion devices for a future with clean, limitless energy production.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140058</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making the City Livable: Caregiving and Health in Gentrifying Boston</title>
<link>https://hdl.handle.net/1721.1/140051</link>
<description>Making the City Livable: Caregiving and Health in Gentrifying Boston
Binet, Andrew
How does the urban environment shape everyday caregiving practices? How does care mediate the relationship between cities and health? What are the planning ingredients of urban environments that support caregivers and viable caring relations? I answer these questions by analyzing data from interviews with caregivers conducted as part of the Healthy Neighborhoods Study, a longitudinal Participatory Action Research project exploring the relationship between gentrification and community health in nine Boston-area neighborhoods. First, I find that the strategies respondents employ to fulfill caregiving goals are shaped by the availability and adequacy of specific components of the urban environment, which I argue comprise the urban infrastructure of care. However, this infrastructure may be unavailable, inaccessible, inadequate, or poorly connected. Caregivers compensate for these shortcomings by securing, connecting and maintaining the components of the urban infrastructure of care to ensure satisfactory background conditions for caregiving. By shaping the extent and nature of this “infrastructural labor,” cities influence what forms of care are possible and what the work of care demands from caregivers. Second, comparing caregivers’ experiences at different stages of gentrification, I find significant differences in their perceptions of changes in local urban infrastructures of care, and in what the work of care entails. I argue that gentrification produces “care insecurity” for respondents: diminished confidence in the ability to provide satisfactory care in the future and to adapt to neighborhood changes that impact caregiving. Finally, I explore the hypothesis that changes in the urban infrastructure of care could produce caregiver stress, and that stress might thus be a pathway through which the relationship between caregiving and the urban environment becomes embodied by caregivers. 
I find that the challenges of performing infrastructural labor necessary to care can be physically, mentally and emotionally depleting for caregivers with negative consequences for their health and wellbeing. I conclude by proposing a framework for “planning for care” focused on the intertwined priorities of alleviating and equitably distributing the burden of care work, proliferating the possible forms that care can take, and maximizing people’s freedoms to give and receive care in ways that they value.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140051</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse Learning using Discrete Optimization: Scalable Algorithms and Statistical Insights</title>
<link>https://hdl.handle.net/1721.1/140050</link>
<description>Sparse Learning using Discrete Optimization: Scalable Algorithms and Statistical Insights
Hazimeh, Hussein
Sparsity is a central concept in interpretable machine learning and high-dimensional statistics. While sparse learning problems can be naturally modeled using discrete optimization, computational challenges have historically shifted the focus towards alternatives based on continuous optimization and heuristics. Recently, growing evidence suggests that discrete optimization methods can obtain more interpretable models than popular alternatives. However, scalability issues are limiting the adoption of discrete methods and our understanding of their statistical properties. This thesis develops scalable discrete optimization methods and presents new statistical insights for a fundamental class of sparse learning problems.&#13;
&#13;
In the first chapter, we consider the L0-regularized linear regression problem, which aims to select a subset of features that best predict the outcome. We propose fast, approximate algorithms, based on coordinate descent and local combinatorial optimization, and establish convergence guarantees. Empirically, we identify important high-dimensional settings where L0-based estimators achieve better statistical performance than popular sparse learning methods (e.g., based on L1 regularization). Our open-source implementation (L0Learn) can handle instances with millions of features and run up to 3x faster than state-of-the-art sparse learning toolkits.&#13;
&#13;
In the second chapter, we propose an exact, scalable approach for L0-regularized linear regression. In particular, we develop a specialized nonlinear branch-and-bound (BnB) framework that solves a mixed integer programming (MIP) formulation of the problem. In a radical shift from modern MIP solvers, we solve the BnB subproblems using a specialized first-order method that exploits sparsity. Our open-source solver L0BnB can scale to instances with ~ 10^7 features, over 1000x larger than what modern MIP solvers can handle.&#13;
&#13;
In the third chapter, we focus on L0-regularized classification. We propose an exact and novel algorithm that solves the problem via a sequence of MIP subproblems, each involving a relatively small number of binary variables. The algorithm can scale to instances with 50,000 features. We also develop fast, approximate algorithms that generalize those of the first chapter. We show theoretically and empirically that our proposals can outperform popular sparse classification methods.&#13;
&#13;
In the last two chapters, we consider structured sparse learning problems, in which group or hierarchy constraints are imposed to enhance interpretability. We develop specialized convex and discrete optimization algorithms for these problems. Our experiments indicate that the proposed algorithms are more scalable and can achieve better statistical performance than existing methods.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140050</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dissection of Deep Neural Networks</title>
<link>https://hdl.handle.net/1721.1/140048</link>
<description>Dissection of Deep Neural Networks
Bau, David
We investigate the role of neurons within the internal computations of deep neural networks for computer vision.&#13;
&#13;
We introduce network dissection, a method for quantifying the alignment between human-interpretable visual concepts and individual neurons in a deep network.  We apply network dissection to examine and compare the internal computations of several networks trained to classify and represent images, and we ask how well human-understandable concepts align with neurons at different layers, in different architectures, with various training objectives; we also compare neurons to random linear combinations of neurons, and examine emergence of concepts as training proceeds.&#13;
&#13;
Then, we adapt network dissection to analyze generative adversarial networks.  In GAN dissection, human-understandable neurons are identified by applying a semantic segmentation model to generated output.  We find that small sets of neurons control the presence of specific objects within synthesized scenes.  We also find that activating neurons reveals modeled rules and interactions between objects and their context.&#13;
&#13;
We then ask how to dissect and understand the omissions of a generative network.  Omissions of human-understandable objects can be quantified by comparing semantic segmentation statistics between the training distribution and the generated distribution.  Then we develop a method that can invert and reconstruct generated images in a progressive GAN, and show that this reconstruction can visualize specific cases in which the GAN omits identified object classes.&#13;
&#13;
Finally, we ask how rules within a generative model are represented.  We hypothesize that the layers of a generative model serve as a memory that stores associations from representations of concepts at the input of a layer to patterns of concepts at the output of the layer, and we develop a method for rewriting the weights of a model by directly rewriting one memorized association.  We show that our method can be used to rewrite several individual associative memories in a Progressive GAN or StyleGAN, altering learned rules that govern the appearance of specific object parts in the model.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140048</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic river networks drive landscape change and biological evolution</title>
<link>https://hdl.handle.net/1721.1/140047</link>
<description>Dynamic river networks drive landscape change and biological evolution
Stokes, Maya F.
Rivers transport sediment, nutrients and life throughout a continent, all the while shaping the landscapes they flow through. My thesis investigates the processes that govern the dynamics of river basins and the ways in which these processes may influence the evolution of aquatic life. In Chapters 1-3 of my thesis I investigate river network reorganization, a phenomenon that occurs when some river basins erode faster than their neighbors, growing at their expense. The exchange of drainage area across drainage basin divides can influence the relative size of drainage basins, the routes rivers take across a landscape, and the dispersal corridors available to aquatic organisms. If a channel is abruptly rerouted into a neighboring basin, a process called river capture, populations of organisms may be separated and undergo genetic divergence and eventual speciation. In Chapter 1 of this thesis, I use geomorphic observations to describe an ongoing river capture occurring between the Rio Orinoco and Amazon River, two of the largest rivers in the world. However, river captures are rarely observed in progress; they are instead almost always inferred from topographic evidence, the accuracy of which is unknown. Therefore, in Chapter 2, I use erosion rates to test the accuracy of topographic evidence for divide motion along the Blue Ridge Escarpment in the Appalachian Mountains. In Chapters 3-4 I investigate the connections between geomorphic processes and biological evolution. In Chapter 3, I build a coupled model that simulates river network reorganization in addition to speciation, extinction and dispersal of organisms within the simulated landscape. I find that river network reorganization can increase diversification rates, especially when river captures are frequent and the organisms disperse slowly throughout the landscape. In Chapter 4, I use genomic data to investigate mechanisms driving speciation within, as opposed to between, drainage basins. 
I connect spatial and temporal patterns of the diversification of an endemic fish species to fluvial erosion into spatially varied rocks in the Tennessee River, USA. Altogether, this work illustrates the rich connections between rivers, landscapes, and the evolution of aquatic life.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140047</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Application of Machine Learning and Physical Modeling Theory to Causal Lifting Linearizations of Nonlinear Dynamical Systems with Exogenous Input and Control</title>
<link>https://hdl.handle.net/1721.1/140046</link>
<description>On the Application of Machine Learning and Physical Modeling Theory to Causal Lifting Linearizations of Nonlinear Dynamical Systems with Exogenous Input and Control
Selby, Nicholas Stearns
Methods are presented for constructing causal linear models from nonlinear dynamical systems through lifting linearization, underpinned by Koopman operator theory and physical system modeling theory. Outputs of a nonlinear control system, called observables, may be functions of state and input, Φ(x,u). These input-dependent observables cannot be used for lifting the system, because the state equations in the augmented space contain the time derivatives of the input and are hence anti-causal. Furthermore, finding effective observable functions for approximation with a low-order linear system remains an open question. Here, we present three methods for solving the causality problem in lifting linearization and one algorithm to further lift the new set of causal observables using neural networks. The first method to solve the anti-causal problem assumes that the observables are linearly dependent on the exogenous input, then constructs a filter to remove that dependence. The second method is to replace anti-causal observables by their integral variables Φ*, and lift the dynamics with Φ*, so that the time derivative of Φ* does not include the time derivative of the input. The third method is to alter the original physical model by adding a small inertial or capacitive element so that the system's causal relationship changes. These augmented dynamics alter the signal path from the input to the anti-causal observable, so that the observables are not dependent on inputs. The three anti-causal solutions and the learned lifting linearization are applied to practical systems, and numerical simulations validate the effectiveness of the methods.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140046</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and Hardness for Approximating the Diameter of a Graph</title>
<link>https://hdl.handle.net/1721.1/140045</link>
<description>Algorithms and Hardness for Approximating the Diameter of a Graph
Wein, Nicole Spence
The diameter of a graph is one of its most basic and fundamental attributes. It is defined as the largest distance between any pair of vertices. The diameter of a graph is a meaningful parameter for many applications such as distributed computation and social networks. We seek fast algorithms for computing the diameter of a graph. This is one of the central problems in the area of fine-grained complexity. &#13;
&#13;
The naive algorithm for computing the diameter of a graph is to find the distance between all pairs of vertices and return the largest one. Interestingly, no better algorithm is known. Furthermore, there is evidence from fine-grained complexity that no subquadratic-time algorithm exists for computing the diameter of a graph exactly. In particular, such an algorithm would falsify the Strong Exponential Time Hypothesis (SETH). For applications with very large graphs, even quadratic time can be prohibitively slow. Thus, we turn to approximation algorithms with faster running times. Prior work establishes a hierarchy of algorithms that trade off time and accuracy, as well as a single lower bound conditioned on SETH.&#13;
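For an unweighted graph, the naive algorithm amounts to one breadth-first search per vertex, O(nm) time overall. A minimal sketch:

```python
from collections import deque

def bfs_dist(adj, s):
    # Unweighted single-source shortest paths by breadth-first search.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    # Naive exact diameter: run BFS from every vertex and keep the
    # largest distance seen -- the quadratic baseline discussed above.
    return max(max(bfs_dist(adj, s).values()) for s in adj)
```

It is this all-pairs baseline that the conditional lower bounds below show cannot be beaten by more than constant factors without falsifying SETH.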
&#13;
Our first main contribution is the development of a hierarchy of conditional lower bounds under SETH for approximating the diameter of a graph, establishing a time vs. accuracy trade-off. These lower bounds show that several of the known algorithms on the trade-off curve are conditionally tight. &#13;
&#13;
Second, we study the approximability of the diameter of a graph in a variety of natural settings, such as when the graph is changing over time, when we only care about the distances between particular subsets of vertices, or when we only care about one-way distances in a directed graph. For these variants, we develop both approximation algorithms and conditional lower bounds that are often tight.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140045</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymer Grafted Nanoparticles as Functional and Mechanically Robust Single-Component Composites</title>
<link>https://hdl.handle.net/1721.1/140043</link>
<description>Polymer Grafted Nanoparticles as Functional and Mechanically Robust Single-Component Composites
Kubiak, Joshua Moses
Since their inception, polymers have been used in the formulation of composite materials that capitalize on the ease of processing, low density, and low cost of plastics while incorporating specific filler materials that enhance mechanical properties or add functionality. Synthesizing polymer matrix composites with a high content of particulate additives can maximize the particular functionality imparted by the additive phase and lead to materials with advantageous property combinations. Critically, the distribution of particulate fillers has a profound influence on the properties of the composite material. For many applications, such as optically transparent or high strength composites, maintaining a uniform distribution of non-aggregated filler is vital. Obtaining such a uniform distribution, particularly for high loadings or nanoscale particles, is a significant challenge, and substantial research and engineering effort has been dedicated to establishing methods of compatibilizing and dispersing filler particles within a polymer matrix. Of these methods, polymer grafted nanoparticles (PGNPs) provide a unique and tunable platform for controlling composite composition and mediating interparticle interactions while precluding aggregation of the particle cores. While the utility of PGNPs as filler materials has been demonstrated extensively, their independent use as single-component composites remains a rapidly developing area of investigation. A pivotal challenge in the development of PGNP composites is the trade-off between filler loading and the mechanical robustness and processability of the composite. In this work, multiple strategies for bridging this gulf are presented and investigated in order to create highly-filled, single-component PGNP composites without compromising mechanical performance or processability. 
Specifically, the introduction of interparticle bonds between PGNPs via traditional chemical crosslinking, thermal self-crosslinking, and embedding inside a polymer network are explored as routes to functional nanocomposites.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140043</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Perovskite Thin-Film Photovoltaics</title>
<link>https://hdl.handle.net/1721.1/140041</link>
<description>Scalable Perovskite Thin-Film Photovoltaics
Swartwout, Richard Michael Steuben
In recent years lead-halide-based thin-film perovskites have emerged as a promising direct bandgap semiconductor for light absorption and emission applications. This is in part due to their compositional tunability and potential for alloying, which allow their color to be tuned from the near-IR through the visible spectrum. Additionally, these materials are moderately soluble, opening the door to solution processing methods rather than traditional physical or chemical vapor-based growth. For photovoltaic applications, the devices created using perovskite materials are thin relative to silicon wafer-based counterparts and therefore have the potential to be used in mechanically flexible cell architectures. This allows for high-speed roll-to-roll printing and coating processes as pathways for large-scale manufacturing.&#13;
&#13;
Here, the challenges of scaling these materials are discussed from multiple vantage points, keeping end slot-die manufacturing in mind. First, as these are ink-based materials, governmental regulations on the use of solvents are considered and a technoeconomic model is created to guide manufacturing scale-up development. Second, the solubility limits of these materials are determined and novel ligand-based multicomponent inks are developed that fit within the economic limits of regulation. Lastly, a novel ink-based recrystallization method is presented that is capable of accessing all industrially relevant stable perovskite compositions with minimal post-annealing requirements. We use these inks and recrystallization methods with both lab-scale and larger-area slot-die coating techniques for high-efficiency photovoltaic devices.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140041</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Encoding of Prior Experience in Sensorimotor Behavior</title>
<link>https://hdl.handle.net/1721.1/140036</link>
<description>Neural Encoding of Prior Experience in Sensorimotor Behavior
Meirhaeghe, Nicolas
One influential hypothesis in neuroscience holds that the nervous system learns statistical regularities in the environment to optimize behavior based on past experiences. The main challenge in evaluating this hypothesis is to reconcile conceptual views that have historically been developed at different scales. At the behavioral scale, the effect of statistical regularities is often described by the Bayesian theory in terms of prior distributions that represent knowledge previously gathered about the environment. At the neural scale, the effects of prior experience have been described by the theory of predictive processing in terms of efficient coding principles that govern the response properties of neurons. The major contribution of this thesis is to bridge these two levels of description. Using a series of time-interval reproduction tasks in rhesus macaques, I first establish a quantitative link between temporal regularities in the environment and coding properties of neurons in the frontal cortex. Specifically, I show that patterns of activity across populations of neurons are precisely rescaled in time to match the statistical mean of a learned temporal distribution, in accordance with predictive processing. Second, I show that the structure of the underlying neural representation implements the effect of a Bayesian prior, and biases behavioral responses toward prior expectations as predicted by the theory. Third, I demonstrate that the results hold in non-stationary environments when animals adapt to new temporal statistics. Fourth, I present a computational model that recapitulates the behavioral and neural findings and provides a solution for incorporating temporal expectations in neural dynamics. Finally, I conclude with a broader perspective on sensorimotor learning.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140036</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximal Correlation Feature Selection and Suppression With Applications</title>
<link>https://hdl.handle.net/1721.1/140035</link>
<description>Maximal Correlation Feature Selection and Suppression With Applications
Lee, Joshua Ka-Wing
In standard supervised learning, we assume that we are trying to learn some target variable Y from some data X. However, many learning problems can be framed as supervised learning with an auxiliary objective, often associated with an auxiliary variable D which defines this objective. Applying the principles of Hirschfeld-Gebelein-Rényi (HGR) maximal correlation analysis reveals new insights as to how to formulate these learning problems with auxiliary objectives. We examine the use of the HGR in feature selection for multi-source transfer learning in the few-shot setting. We then apply HGR to the problem of feature suppression via enforcing marginal and conditional independence criteria with respect to a sensitive attribute, and illustrate the effectiveness of our methods on problems of fairness, privacy, and transfer learning. Finally, we explore the use of HGR in extracting features for outlier detection.
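For finite alphabets, the HGR maximal correlation has a convenient closed form: it is the second-largest singular value of the matrix B[x, y] = P(x, y)/√(P_X(x)·P_Y(y)), the largest singular value (always 1) being carried by the constant functions. A small sketch of this discrete case, assuming a full-support joint pmf (this is background for the quantity, not the thesis's feature-selection machinery):

```python
import numpy as np

def hgr_discrete(P):
    # HGR maximal correlation of a discrete pair (X, Y) from its joint pmf.
    # Build B[x, y] = P(x, y) / sqrt(P_X(x) * P_Y(y)); its top singular
    # value is always 1 (constant functions), and the second-largest one
    # is the HGR maximal correlation.  Assumes all marginals are nonzero.
    P = np.asarray(P, dtype=float)
    px, py = P.sum(axis=1), P.sum(axis=0)
    B = P / np.sqrt(np.outer(px, py))
    return np.linalg.svd(B, compute_uv=False)[1]
```

The value is 0 for independent variables and 1 when one variable determines the other.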
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140035</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Metabolic Lens on Phytoplankton Physiology</title>
<link>https://hdl.handle.net/1721.1/140034</link>
<description>A Metabolic Lens on Phytoplankton Physiology
McLean, Craig
Phytoplankton are communities of diverse groups of prokaryotic and eukaryotic single-celled organisms responsible for nearly 50% of global primary production. The relative abundance of individual groups changes dynamically in response to environmental perturbations. Recent studies suggest that such changes are primarily driven by the distinct physiological responses employed by each group towards a particular perturbation. Although knowledge of some of these responses has come to light in recent years, many aspects of their metabolisms remain unknown. We attempt to address this gap by studying the metabolism of several phytoplankton groups using metabolomics. Firstly, we developed a method to enhance the analysis of untargeted metabolomics data. Secondly, we constructed two conceptual models describing how metabolism of the raphidophyte Heterosigma akashiwo responds to phosphorus and nitrogen stress. These conceptual models revealed several new stress response mechanisms not previously reported in other phytoplankton. Finally, we compared the metabolic changes of several distinct phytoplankton groups to uncover possible adaptations and acclimations that distinguish them. This analysis revealed several pathways and metabolites that represent the studied groups. The contributions of these pathways and metabolites to physiology may support the ecological fitness of these organisms.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140034</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information, Learning and Incentive Design for Urban Transportation Networks</title>
<link>https://hdl.handle.net/1721.1/140032</link>
<description>Information, Learning and Incentive Design for Urban Transportation Networks
Wu, Manxi
Today's data-rich platforms are reshaping the operations of urban transportation networks by providing information and services to a large number of travelers. How can we model the travelers' strategic decisions in response to services provided by these platforms and develop tools to improve the aggregate outcomes in a socially desirable manner? In this thesis, we tackle this question from three aspects: 1) Game-theoretic analysis of the impact of information platforms (navigation apps) on the strategic behavior and learning processes of travelers in uncertain networks; 2) Market mechanism design for efficient carpooling and toll pricing in the presence of autonomous driving technology; 3) Security analysis and resource allocation for robustness under random or adversarial disruptions. &#13;
&#13;
Firstly, we present a game-theoretic analysis to evaluate the impact of multiple heterogeneous information platforms on travelers’ selfish routing decisions, and the resulting network congestion. We compare the value of information provided by multiple platforms to their users, and capture the key trade-off between the gain from information about the uncertain network state and the congestion externality resulting from other users. We also design an optimal information structure that induces socially preferred traffic flows. Next, we extend the static model to a dynamic setting that addresses the behavior of users who learn and strategically act in an uncertain environment, while adapting their decisions to the up-to-date information received from platforms. The resulting stochastic learning dynamics requires analyzing strategic and adaptive (hence endogenous and non-i.i.d.) data. We present new results for convergence and stability of such learning dynamics and develop conditions for convergence to the complete-information equilibrium.&#13;
&#13;
Secondly, we design a market mechanism that enables efficient carpooling and optimal toll pricing in an autonomous transportation market. In this market, the transportation authority sets toll prices on edges, and riders organize carpooled trips using driverless cars and split payments. Riders have heterogeneous preferences, with the value of each trip depending on the travel time of the chosen route and rider-specific parameters that capture their individual value of time and carpool disutilities. We identify sufficient conditions on the network topology and travelers' preferences under which a market equilibrium exists, and carpooling trips can be organized in a socially optimal manner. We also present an algorithm that computes a set of equilibrium trips, toll prices and payments that maximize rider utilities. &#13;
&#13;
Finally, we analyze stylized game-theoretic models of attacker-defender interactions for the purpose of evaluating security risks in transportation networks. Our equilibrium analysis suggests an optimal resource allocation strategy to defend multiple infrastructure facilities against an adversarial attacker. To evaluate robustness against random disturbances, we also develop a class of machine learning models that predict the change of travelers' usage demand in congestion-prone multi-modal networks. These results have the potential to help mitigate the impact of transportation network disruptions and limit security risks.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140032</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Radiative Heat and Momentum Transfer by Nanophotonic Engineering</title>
<link>https://hdl.handle.net/1721.1/140031</link>
<description>Control of Radiative Heat and Momentum Transfer by Nanophotonic Engineering
Tsurimaki, Yoichiro
Radiative transfer of electromagnetic energy and momentum between objects is one of the fundamental and ubiquitous processes in nature. The ability to control it is crucial for various optoelectronic and optomechanical engineering applications including optical sensing, photovoltaics, and object manipulation. The radiative energy and momentum transfer between objects can be controlled by engineering electromagnetic states in the objects and in their environment. Although this can be achieved by modifying materials’ geometrical shapes, optical properties, and system configurations, the extent of this control is restricted when using naturally occurring materials. In this thesis, we explore two strategies of engineering electromagnetic states to control radiative transfer of energy and momentum.
One strategy is to use photonic nanostructures, and we study two systems. First, we examine a metal mirror with periodic metal-dielectric-metal nanoslit structures, in which each slit supports a single surface plasmon mode that propagates normal to the mirror surface. We numerically demonstrate that complete resonant absorption at multiple wavelengths in the range from the visible to mid-infrared can be achieved. We also discuss how engineering interactions between the surface plasmon modes excited at the mirror surface and within the slits allows independent control of the spectral and angular responses of this system. Second, we develop a planar multilayered photonic-plasmonic structure supporting optical Tamm plasmon states. The resonant near-perfect absorption of incident light owing to coupling to these states is accompanied by a singular behavior of the phase of a reflected electromagnetic field. We use this singular behavior to achieve highly sensitive sensing and experimentally demonstrate remote temperature measurements. 
Our simulation and experiments form the basis for simple and robust planar sensing platforms with tunable spectral characteristics.
Most photonic materials and systems obey the Lorentz reciprocity theorem, by which detailed balance of emission and absorption of an electromagnetic mode is satisfied. To bring these systems into a fundamentally new regime for controlling radiative transfer, the other strategy we pursue is to engineer electromagnetic states via magnetization-induced reciprocity breaking. We predict near-complete violation of Kirchhoff’s law of radiation without an external magnetic field by resonantly exciting nonreciprocal surface plasmon polaritons at the interface between a dielectric material and magnetic Weyl semimetals, which we identify as a promising reciprocity-breaking material. We also investigate radiative momentum transfer between two and three magnetic Weyl semimetal spheres both in thermal equilibrium and nonequilibrium situations. We derive a formalism of Casimir forces between an arbitrary number of spheres in thermal nonequilibrium based on fluctuational electrodynamics and scattering theory without the assumption of Lorentz reciprocity. We predict that lateral Casimir forces can arise in thermal nonequilibrium situations due to non-zero angular momentum of thermal radiation from magnetic Weyl semimetal spheres. We also show that the Casimir energy in thermal equilibrium depends on the static magnetization directions in the spheres and that the lateral Casimir force will act between the spheres to relax the system into the minimum energy state without transferring net energy and momentum to the environment. Our work on engineering light-matter interactions in nonreciprocal systems points a path towards improving the efficiency of radiative energy conversion devices and extending the capability for optomechanical manipulation of objects in nonreciprocal systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140031</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-dimensional Quantum Key Distribution with Frequency Encoding</title>
<link>https://hdl.handle.net/1721.1/140030</link>
<description>High-dimensional Quantum Key Distribution with Frequency Encoding
Chen, Changchen
Time-energy entanglement is a quintessential resource for emerging quantum technologies. In this thesis, we harness this resource for applications such as quantum communication and quantum networking. In the first part of this thesis, we demonstrate an indistinguishable heralded single-photon source, featuring time-energy entanglement elimination in the spontaneous parametric down-conversion (SPDC) process through custom engineering of phase-matching conditions. The heralded single-photon source is useful in measurement-based quantum applications, where indistinguishable single photons are required to ensure high-visibility photon-photon interference.&#13;
&#13;
The second part of this thesis studies the time-domain characterization of time-energy entangled photon pairs. We present here the first experimental demonstration of the conjugate-Franson interferometer (CFI). We show that the CFI visibility can certify time-energy entanglement and detect the biphoton spectral phase, which Franson interferometry and Hong-Ou-Mandel interferometry are incapable of.&#13;
&#13;
In the final part of this thesis, we show an experimental demonstration of a high-dimensional quantum key distribution (QKD) protocol with frequency-bin encoding using time-energy entangled photon pairs. We used programmable frequency filters to obtain 16 frequency bins for key generation within a 640 GHz flat spectrum in the telecommunication wavelength band. The security of the protocol was safeguarded by measuring the CFI visibility concurrently with the key generation process. Over a 137-meter fiber link, we measured a secure photon information efficiency (PIE) of 0.6 bit/coincidence, corresponding to a secret key rate (SKR) of 42.6 kbits/s.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140030</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Scheduling Method for Multi-Energy System</title>
<link>https://hdl.handle.net/1721.1/140028</link>
<description>Optimal Scheduling Method for Multi-Energy System
Mei, Jie
In response to the challenge of improving energy production and consumption efficiencies due to environmental problems and the energy crisis, multi-energy systems composed of electrical power, natural gas, heating power, cooling power networks and energy storage have attracted increasing attention and developed rapidly in recent years. Traditionally, different energy infrastructures are scheduled and operated independently, which results in less efficient energy usage and resource waste. Through integrating as a multi-energy system, different energy carriers can be coupled and optimized as one unit to improve overall energy utilization efficiency, reduce system operating cost, and improve solar power integration.&#13;
&#13;
In this thesis, optimal scheduling methods based on machine learning and optimization techniques are proposed, from an economic point of view, for a real multi-energy system, Stone Edge Farm, CA. Specifically, a Random Forest forecasting model is applied and further improved with an online adaptability feature to provide input for the subsequent optimization. In addition, a new two-stage optimization formulation is proposed, which greatly reduces computation time compared with traditional integrated methods in the literature. Thus, the scheduling of MES operation can be conducted over a much shorter time interval while considering more possible future scenarios. &#13;
&#13;
Simulation results suggest that the proposed scheduling methods can help quantify the daily operating cost, balance real-time power demands and PV output solar power, and achieve considerable operating cost savings by appropriately arranging and utilizing all the devices in the multi-energy system.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140028</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic Properties of Iron Meteorites and Their Parent Bodies</title>
<link>https://hdl.handle.net/1721.1/140026</link>
<description>Magnetic Properties of Iron Meteorites and Their Parent Bodies
Maurel, Clara
This thesis investigates fundamental properties of the first planetary bodies of our solar system. Similar to Earth, some of these “planetesimals” underwent differentiation into a metallic core and a silicate mantle, and powered magnetic fields by the motion of their liquid cores via the dynamo effect. Although most planetesimals no longer exist, records of their magnetic fields are preserved in meteorites and asteroids. These records offer the opportunity to better characterize the intrinsic properties of this primordial population. Here I present a series of paleomagnetic studies conducted on iron meteorites, potential carriers of such magnetic signatures.&#13;
&#13;
First, I focus on the IIE iron meteorites. This peculiar meteorite group is thought to have formed on a planetesimal that underwent only partial differentiation, but the processes leading to the formation and evolution of such objects remain poorly constrained. To better characterize these processes, I conduct synchrotron-based micromagnetic measurements on the IIE meteorites. One major outcome of this project is a time-resolved record of the dynamo activity of the IIE parent body. This record demonstrates that some partially-differentiated planetesimals formed sizable metallic cores and remained magnetically active for hundreds of millions of years. In addition, I develop a model of the formation of magnetic microstructures in iron-rich meteorites, which advances the field along two axes: it quantifies important sources of uncertainty in paleomagnetic data and serves as an indicator of the cooling rate of these meteorites.&#13;
&#13;
Second, I focus on the following observation: in contrast with the multiple lines of evidence for magnetized meteorites, no magnetized asteroid has so far been identified. To explain this apparent discrepancy, it has been hypothesized that because magnetization decreases with sample size, it would be inherently undetectable at asteroid scale. To test this hypothesis, I present a new magnetometer array accommodating meter-size samples, and combine measurements of iron meteorites at multiple size scales. This work shows that measured magnetization need not be reduced to zero as size increases but instead may asymptote at a non-zero value. Consequently, size does not prevent asteroid magnetization from being detected, although multiple other factors may hinder the detection.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140026</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Competition-based CRISPR-dCas9 transcriptional control mechanisms and application of dCas9 biosensors for high-throughput, cell-based protease inhibitor screens</title>
<link>https://hdl.handle.net/1721.1/140025</link>
<description>Competition-based CRISPR-dCas9 transcriptional control mechanisms and application of dCas9 biosensors for high-throughput, cell-based protease inhibitor screens
Anderson, Daniel Allen
Catalytically-dead Cas9 (dCas9) is a programmable transcription factor that can be targeted to promoters through the design of small guide RNAs (sgRNAs), where it can function as an activator or repressor. In Chapter 1 of this thesis, I outline the multitude of tools and applications that have been developed for dCas9 circuits. I then discuss the limitations and advantages of these systems and outline some of the most promising opportunities for dCas9-based genetic circuits.&#13;
&#13;
In Chapter 2, I devise, model, and implement a new-to-nature transcriptional control mechanism using dCas9. Natural promoters use overlapping binding sites as a mechanism for signal integration, where the binding of one transcription factor can augment the activity of another. Here, I implement this strategy in Escherichia coli using pairs of sgRNAs designed to repress and then derepress transcription through competitive binding. I demonstrate that this mechanism can control both transcriptional initiation and transcriptional elongation with over 30-fold dynamic range. This work characterizes and demonstrates a new genetic control modality that could be used to build analog circuits or to implement cis-regulatory logic on CRISPRi-targeted native genes.&#13;
&#13;
In the final chapter of this thesis, I use a dCas9 genetic circuit to create an in vivo selection system for protease inhibitors. By leveraging a previously-described dCas9 toolkit, I create a synthetic genetic circuit that responds to SARS-CoV-2 viral protease activity. Using this circuit as an in vivo biosensor, I integrate it with a RiPP-based molecular library and an in vivo selection system to screen for inhibitors of the SARS-CoV-2 Papain-like protease (PLpro). With this integrated system, I screened tens of millions of RiPPs and identified DAA680, a 13-AA modified peptide with PLpro inhibitory activity. However, follow-up studies showed that this peptide also inhibits another SARS-CoV-2 viral protease, 3CLpro, indicating a non-specific mechanism of inhibition. Nonetheless, these results validate our system’s ability to identify and isolate RiPP-based protease inhibitors from large libraries. Additionally, our extensive characterization of the selection system should be generalizable to any biosensor with a transcriptional output. This should enable the rapid deployment of novel cell-based selection methods that can identify molecules with diverse bioactivities.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140025</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Assisted Superconducting Qubit Readout</title>
<link>https://hdl.handle.net/1721.1/140024</link>
<description>Machine Learning Assisted Superconducting Qubit Readout
Lienhard, Benjamin
Quantum computers hold the promise to solve specific problems significantly faster than classical computers. However, to realize a practical quantum computer, the quantum processor’s constituent components, their control, and their readout must be very well-calibrated. Over the last few decades, infrastructure and protocols have been developed to operate small-scale quantum processors efficiently. However, the operation of medium- to large-scale quantum processors presents new engineering challenges. Among those challenges are efficient and high-fidelity multi-qubit control and readout. In particular, qubit-state readout is a significant error source in contemporary superconducting quantum processors. The fidelity of dispersive qubit-state readout depends on the readout pulse shape and frequency as well as the resulting qubit-state discriminator. For a single qubit, fast and high-fidelity readout is achieved with minor changes to the rising and falling edge of a rectangular microwave pulse and a linear matched filter discriminator. However, in resource-efficient, frequency-multiplexed readout of multiple qubits, optimizing the readout pulse shape and discriminator becomes a computationally intensive task.&#13;
&#13;
In this thesis, control and readout hardware and software tools for multiple superconducting qubits are developed. First, I discuss the principles of engineering microwave packages for multiple qubits. I designed and engineered a novel multiqubit package to enable efficient qubit control and readout and minimize errors due to interactions between the quantum processor and its immediate environment. Second, I demonstrate deep machine learning techniques to improve frequency-multiplexed superconducting qubit readout pulse shapes and discrimination for a five-qubit system. Compared with currently employed readout methods, these novel techniques reduce the required measurement time, the readout-resonator reset time, and the discrimination error rate by about 20% each. The developed readout techniques are a significant step towards efficient implementations of near-term quantum algorithms based on iterative optimization and quantum error correction protocols necessary for future universal quantum processors.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140024</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust and Efficient Deep Learning for Misinformation Prevention</title>
<link>https://hdl.handle.net/1721.1/140022</link>
<description>Robust and Efficient Deep Learning for Misinformation Prevention
Schuster, Tal
Deep learning models have recently revolutionized the online environment, opening up many exciting opportunities to improve the user experience. These models, however, also introduce new threats by possibly creating or promoting misinformation, either inadvertently or deliberately by malicious users. In this thesis, we present novel methods to fight the proliferation of misinformation online. We focus on the task of automated fact verification, where the veracity of a given claim is examined against external reliable sources. We analyze the desired specifications of fact verification systems and describe the need for efficiency when operating against large, comprehensive free-text information resources, while ensuring robustness to challenging inputs and sensitivity to modifications in the referenced evidence. Our methods are general and, as we demonstrate, improve the robustness, efficiency, and interpretability of many other models beyond fact verification.&#13;
&#13;
In the first part of this thesis, we focus on the robustness, sensitivity, and interpretability of sentence-pair classifiers. We present methodologies for identifying and quantifying idiosyncrasies in large curated datasets that undesirably lead models to rely on non-generalizable statistical cues. We demonstrate how contrastive evidence pairs can alleviate this issue by forcing models to perform sentence-pair inference. To obtain such examples automatically, we develop a novel rationale-based denoising pipeline for modifying refuting evidence to agree with a given claim. In addition, we present a semi-automated solution for creating contrastive pairs from Wikipedia revisions and share a new large dataset.&#13;
&#13;
In the second part, we turn to improving the inference efficiency of both the evidence retrieval and the claim classification modules, while reliably controlling their accuracy. We introduce new confidence measures and develop novel extensions to the conformal prediction framework. Our methods can dynamically allocate the required computational resources for each input to satisfy an arbitrary user-specified tolerance level. We demonstrate on multiple datasets that our well-calibrated decision rules reliably provide significant efficiency gains.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140022</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Methods for Sound Change Discovery</title>
<link>https://hdl.handle.net/1721.1/140021</link>
<description>Automatic Methods for Sound Change Discovery
Luo, Jiaming
Describing the phonological history of languages has been a central topic in historical linguistics. In this thesis, we develop automatic methods to discover patterns of sound change under different input and output conditions. More specifically, we focus on three challenging tasks: (1) automatic decipherment, (2) automatic decipherment in unsegmented scripts, and (3) automatic sound law induction. We show that a careful model design that implements historical linguists’ priors and intuitions is essential for the success of these methods. In addition, we demonstrate that these computational methods can provide relevant evidence to answer important research questions in historical linguistics.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140021</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Models for Mixtures of Fluids and Granular Sediments</title>
<link>https://hdl.handle.net/1721.1/140019</link>
<description>Development of Models for Mixtures of Fluids and Granular Sediments
Baumgarten, Aaron S.
Mixtures of fluids and granular sediments play an important role in many industrial, geotechnical, and aerospace engineering problems, from waste management and transportation (liquid–sediment mixtures) to dust kick-up below helicopter rotors (gas–sediment mixtures). These mixed flows often involve bulk motion of hundreds of billions of individual sediment particles and can contain both highly turbulent regions and static, non-flowing regions. To avoid tracking individual grain–grain interactions and pore-scale fluid flows, it is desirable to model these problems using continuum techniques, where microscopic grain-scale properties are homogenized into bulk descriptions of the mixture’s behavior. This approach offers exceptional scaling; however, it requires the development of material constitutive models and simulation techniques that are capable of capturing the breadth of phenomena exhibited by submerged granular sediments under different loading conditions.&#13;
&#13;
When compacted, the friction between grains manifests as a bulk yield stress, resulting in solid-like behavior. When this yield stress is exceeded, the microscopic reorganization of grains can produce critical state behavior as the material transitions to a flowing, fluid-like state. Additionally, in unconfined flows, grains can become disconnected from each other and begin interacting through infrequent, inelastic collisions: behaving more like a granular gas. This breadth of different material behaviors is also coupled to the motion of the fluid filling the pore space between grains. A complete continuum modeling framework should be able to describe, predict, and simulate this wide range of behaviors, smoothly transitioning between these different flow regimes.&#13;
&#13;
Recently developed continuum modeling frameworks that use the material point method (MPM) have shown substantial promise; however, existing approaches are limited in the range of material behaviors that are considered and types of engineering applications that can be addressed. In this thesis, a continuum modeling framework for fluid–sediment mixtures is developed that incorporates a new granular material model and addresses several of the numerical limitations associated with the MPM. This granular material model is designed to capture the important behaviors described above and can also be extended to capture other non-trivial mixture phenomena, such as that observed in shear-thickening suspensions (e.g., cornstarch–water mixtures). Additionally, this thesis considers techniques for mitigating simulation error in the material point representation of the pore fluid, including direct changes to the MPM as well as combining the MPM with a more common numerical solver, such as the finite volume method (FVM).&#13;
&#13;
The modeling framework developed in this thesis is shown to be predictive for a wide range of mixed flows, including both liquid–sediment and gas–sediment problems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140019</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differentiable Simulation Methods for Robotic Agent Design</title>
<link>https://hdl.handle.net/1721.1/140018</link>
<description>Differentiable Simulation Methods for Robotic Agent Design
Du, Tao
Designing robots with extreme performance in a given task has long been an exciting research problem drawing attention from researchers in robotics, graphics, and artificial intelligence. As a robot is a combination of its hardware and software, an optimal robot requires both an excellent implementation of its hardware (e.g., morphological, topological, and geometrical designs) and an outstanding design of its software (e.g., perception, planning, and control algorithms). While we have seen promising breakthroughs for automating a robot's software design with the surge of deep learning in the past decade, exploration of optimal hardware design is much less automated and is still mainly driven by human experts, a process that is both labor-intensive and error-prone. Furthermore, experts typically optimize a robot's hardware and software separately, which may miss optimal designs that can only be revealed by optimizing its hardware and software simultaneously.&#13;
&#13;
This thesis argues that it is time to rethink robot design as a holistic process where a robot's body and brain should be co-optimized jointly and automatically. In this thesis, we present a computational robot design pipeline with differentiable simulation as a key player. We first introduce the concept of computational robot design on a real-world copter whose geometry and controller are co-optimized with a differentiable simulator, resulting in a custom copter that outperforms designs suggested by human experts by a substantial margin. Next, we push the boundary of differentiable simulation by developing advanced differentiable simulators for soft-body and fluid dynamics. Contrary to traditional belief, we show that deriving gradients for such intricate, high-dimensional physics systems can be both science and art. Finally, we discuss challenges in transferring computational designs discovered in simulation to real-world hardware platforms. We present a solution to this simulation-to-reality transfer problem using our differentiable simulator on an example of modeling and controlling a real-world soft underwater robot. We conclude this thesis by discussing open research directions in differentiable simulation and envisioning a fully automated computational design pipeline for real-world robots in the future.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140018</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Unstructured Environments for Robotic Manipulation: Toward Generalization, Dexterity and Robustness</title>
<link>https://hdl.handle.net/1721.1/140017</link>
<description>Representing Unstructured Environments for Robotic Manipulation: Toward Generalization, Dexterity and Robustness
Gao, Wei
We would like to have highly useful robot manipulators that can handle a diversity of objects/environments and perform challenging manipulation tasks, while being sufficiently robust that deployment at scale is feasible. This thesis aims at such a generalizable, dexterous, and robust manipulation pipeline. At the core of our approach is the representation of the environment: in particular, how should we represent the unstructured world such that the representation is useful for 1) developing a capable manipulation pipeline and 2) performing a thorough robustness evaluation of it? To answer question 1), we propose the keypoint affordance, a novel object representation consisting of 3D semantic keypoints. Existing works typically use 6 Degree-of-Freedom (DOF) poses to represent the manipulated objects. However, representing an object with a parameterized transformation defined on a fixed template cannot handle large shape mismatches among different objects. In contrast, our keypoint representation captures task-related geometric information while ignoring irrelevant details, which enables generalization to unknown objects. We implement perception, planning, and feedback control modules on top of the keypoint representation and integrate them into a fully functional perception-to-action manipulation pipeline. The second part of this thesis studies the pipeline’s robustness and attempts to answer question 2). Due to the infeasibility of a parametric (pose-based) object representation, we do not have a continuous input domain for investigating how the object geometry impacts the robustness, which is a prerequisite for existing methods. To address this challenge, we model factors that affect the robustness as a structured distribution over variables (e.g. the camera pose), combined with an empirical distribution that describes visual properties (e.g. the object geometry/texture).
We then formulate the robustness evaluation as a failure rate estimation problem on this combined distribution and propose an efficient graph-based algorithm to solve it. Our formulation is applied to the developed manipulation pipeline, and it can benefit many other cyber-physical systems, such as autonomous cars.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140017</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collaborative, open, and automated data science</title>
<link>https://hdl.handle.net/1721.1/140016</link>
<description>Collaborative, open, and automated data science
Smith, Micah J.
Data science and machine learning have already revolutionized many industries and organizations and are increasingly being used in an open-source setting to address important societal problems. However, there remain many challenges to developing predictive machine learning models in practice, such as the complexity of the steps in the modern data science development process, the involvement of many different people with varying skills and roles, and the necessity of, yet difficulty in, collaborating across steps and people. In this thesis, I describe progress in two directions in supporting the development of predictive models. First, I propose to focus the effort of data scientists and support structured collaboration on the most challenging steps in a data science project. In Ballet, we create a new approach to collaborative data science development, adapting and extending the open-source software development model to the collaborative development of feature engineering pipelines; Ballet is the first collaborative feature engineering framework. Using Ballet as a probe, we conduct a detailed case study analysis of an open-source personal income prediction project in order to better understand data science collaborations. Second, I propose to supplement human collaborators with advanced automated machine learning within end-to-end data science and machine learning pipelines. In the Machine Learning Bazaar, we create a flexible and powerful framework for developing machine learning and automated machine learning systems. In our approach, experts annotate and curate components from different machine learning libraries, which can be seamlessly composed into end-to-end pipelines using a unified interface. We build into these pipelines support for automated model selection and hyperparameter tuning. We use these components to create an open-source, general-purpose, automated machine learning system, and describe several other applications.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140016</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape, Reflectance, and Illumination From Appearance</title>
<link>https://hdl.handle.net/1721.1/140015</link>
<description>Shape, Reflectance, and Illumination From Appearance
Zhang, Xiuming
The image formation process describes how light interacts with the objects in a scene and eventually reaches the camera, forming an image that we observe. Inverting this process is a long-standing, ill-posed problem in computer vision, which involves estimating shape, material properties, and/or illumination passively from the object’s appearance. Such “inverse rendering” capabilities enable 3D understanding of our world (as desired in autonomous driving, robotics, etc.) and computer graphics applications such as relighting, view synthesis, and object capture (as desired in extended reality [XR], etc.).&#13;
&#13;
In this dissertation, we study inverse rendering by recovering three-dimensional (3D) shape, reflectance, illumination, or everything jointly under different setups. The input in these different setups varies from single images to multi-view images lit by multiple known lighting conditions, then to multi-view images under one unknown illumination. Across the setups, we explore optimization-based recovery that exploits multiple observations of the same object, learning-based reconstruction that heavily relies on data-driven priors, and a mixture of both. Depending on the target application, we perform inverse rendering at three different levels of decomposition: I) at a low level of abstraction, we develop physically-based models that explicitly solve for every term in the rendering equation; II) at a middle level, we utilize the light transport function to abstract away intermediate light bounces and model only the final “net effect”; and III) at a high level, we treat rendering as a black box and directly invert it with learned data-driven priors. We also demonstrate how higher-level abstraction leads to models that are simpler and applicable to single images but possess fewer capabilities.&#13;
&#13;
This dissertation discusses four instances of inverse rendering, gradually ascending in the level of abstraction. In the first instance, we focus on the low-level abstraction where we decompose appearance explicitly into shape, reflectance, and illumination. To this end, we present a physically-based model capable of such full factorization under one unknown illumination and another that handles one-bounce indirect illumination. In the second instance, we ascend to the middle level of abstraction, at which we model appearance with the light transport function, demonstrating how this level of modeling easily supports relighting with global illumination, view synthesis, and both tasks simultaneously. Finally, at the high level of abstraction, we employ deep learning to directly invert the rendering black box in a data-driven fashion. Specifically, in the third instance, we recover 3D shapes from single images by learning data-driven shape priors and further make our reconstruction generalizable to novel shape classes unseen during training. Also relying on data-driven priors, the fourth instance concerns how to recover lighting from the appearance of the illuminated object, without explicitly modeling the image formation process.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140015</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Free Volume Manipulation Techniques of Polymer Membranes for Gas Separations</title>
<link>https://hdl.handle.net/1721.1/140014</link>
<description>Free Volume Manipulation Techniques of Polymer Membranes for Gas Separations
Lin, Sharon
Gas separations are ubiquitous in today’s industries and society, playing a role in many key applications such as oxygen generation for medical procedures and natural gas separations for power generation and petrochemicals. In the United States, separation processes consume about 16 quadrillion BTU of energy per year, with nearly half of that energy consumption coming from energy-intensive and thermally-driven processes such as distillation. Using non-thermally-driven processes, such as polymer membranes, for gas separations could reduce energy costs by up to 90% and save the United States over 4 billion USD per year.&#13;
&#13;
Recently, polymers of intrinsic microporosity (PIMs) have shown promise as a platform for energy-efficient gas separations due to their rigid and contorted chemical structures, which increase the amount of free volume and gas throughput. The combination of high permeability and good selectivity exhibited by many PIMs has placed them near or above the Robeson upper bound, a standard metric used to compare polymer membrane performance. However, free volume in PIMs is generated in a “bottom-up” manner, in which the rigid and contorted chains themselves cause free volume formation. The size and distribution of free volume elements, therefore, are not selectively controlled.&#13;
&#13;
In this thesis, alternative methods to free volume generation are explored. The first method involves a “top-down” approach, where thermally labile functional groups are attached to a polymer backbone. After film formation, the functional groups are thermally removed well below the glass transition temperature to systematically template free volume elements of a desired size and distribution within the polymer matrix. The effect of various thermal treatments on both the packing structure and gas transport properties was analyzed, and the results suggest that polymer chain mobility occurring below glass transition temperatures can disrupt the templated free volumes. Therefore, more robust polymer systems that can preserve the free volume architecture after thermal treatment from this approach are needed.&#13;
&#13;
The second method is a “bottom-up” approach similar to that used by PIMs, but with a new chemical structure consisting of a flexible polymer backbone and rigid side chains that form a “bottlebrush”-like structure. The polymers were generated via ring-opening metathesis polymerization (ROMP), and their gas transport properties were examined in ideal and realistic industrial conditions. These polymers, referred to as ROMP polymers, showed excellent gas transport properties, as well as unprecedented plasticization and physical aging resistance. The excellent stability exhibited by ROMP polymers was attributed to the rigid side chains. The effect of side-chain length on the gas transport properties of a methoxy-functionalized ROMP (OMeROMP) was also studied. In this case, increasing side-chain length led to increased free volume and plasticization resistance. Lastly, to further probe their plasticization resistance, sorption measurements and mixed-gas tests using realistic industrial conditions were conducted on OMeROMP samples with different side-chain lengths. Overall, this thesis focuses on alternative methods to free volume generation in polymer membranes that can be used for energy-efficient gas separations.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140014</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing CAST and Designing the STAMP-Enhanced Learning And Reporting System</title>
<link>https://hdl.handle.net/1721.1/140011</link>
<description>Implementing CAST and Designing the STAMP-Enhanced Learning And Reporting System
Wong, Lawrence Man Kit
The safety performance of healthcare is concerning. Around 6% of patients are impacted by preventable harm. Adverse events recur because incident reporting systems (IRSs) are not effective and causes are not addressed. In-depth incident analyses, in particular, often focus on frontline human error and are blame-laden. All IRS functions, from data collection to learning dissemination, need to be improved. CAST (based on a state-of-the-art accident causality model, STAMP) is a more effective analysis technique, but its application in healthcare is hindered by time and knowledge constraints. This work investigated how CAST can be introduced and what features are required in a more effective IRS.&#13;
&#13;
Seven enhancements were made, encompassing methodological refinement, reference materials, templates, and training. The enhancements seek to make CAST applications more efficient and consistent for novices. For evaluation, a hospital safety team was trained and analyzed an incident with CAST. The analysis not only identified the unsafe actions by frontline staff but also the underlying reasons. The departmental management’s unsafe decisions and their underlying reasons were identified as well. The proposed safety interventions had broad system coverage, had the potential for hazard elimination, and could prevent dissimilar incidents. More learning was gained than with the conventional technique. Moreover, the analysis, produced in less time, with less training, and without the guidance of a safety science expert, was at least comparable to, if not better than, other CAST analyses done by novices without the enhancements. Self-reported attitude agreement suggests a paradigm change may have been made. However, self-confidence in analysis abilities did not differ substantially, suggesting the training program should be revised to reconcile the mismatch with the improved analysis performance.&#13;
&#13;
For broader IRS improvements, a conceptual design for the STAMP-Enhanced Learning And Reporting (STELAR) system was created. It focuses on improving the interdependencies between IRS functions and with external safety information sources.&#13;
&#13;
CAST can be feasibly learned and applied in healthcare; IRSs can be improved by designing them holistically. This work advances the goal to research, develop, and apply systems engineering tools in healthcare and contributes to a safer healthcare system by enabling effective safety learning with a systems approach.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140011</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Influence of environmental conditions and proton irradiation on molten salt corrosion of metals</title>
<link>https://hdl.handle.net/1721.1/140010</link>
<description>Influence of environmental conditions and proton irradiation on molten salt corrosion of metals
Zhou, Weiyue
Practical applications of molten salts have emerged, especially in the power industry, such as molten salt reactors and concentrated solar power plants. However, our understanding of molten salts, both physical and chemical, is still lacking. As one of the interactions between molten salts and materials, corrosion is critical to study for the successful deployment of these and other applications.&#13;
&#13;
This thesis contributes to our understanding of the corrosion of metals in high-temperature molten salts, especially fluorides.&#13;
&#13;
Corrosion of materials in molten salts starts with electrochemical corrosion reactions. However, the evolution of the interface is determined jointly by the corrosion reactions, interdiffusion in solids, and surface diffusion at the interface. These processes make possible the initiation and growth of tunnels filled with salt. It is recognized here that molten salt corrosion is, in most cases, highly penetrating, more penetrating than previously appreciated. A categorization of molten salt corrosion based on morphology is provided, suggesting the necessity to understand each category and develop new quantification methods.&#13;
&#13;
As one mode of penetrating corrosion, intergranular penetrating corrosion in molten salts is studied further in this thesis. The unique morphology and mechanism call for a new concept of intergranular corrosion, termed “1D wormhole corrosion”. Asymmetry of the elemental distribution across grain boundaries and diffusion-induced grain boundary migration are observed.&#13;
&#13;
For applications involving simultaneous radiation, the influence of radiation on corrosion is critical to understand. A dedicated experimental facility is constructed to study the synergistic effects of radiation and corrosion, using protons as the irradiation source. Various interaction modes are discovered, including the deceleration of intergranular penetrating corrosion and the acceleration of transgranular penetrating corrosion. Protons are also found to shift the transition between intergranular and transgranular modes to favor the transgranular one, while rendering the salt more corrosive. The complexity of this synergy calls for future studies, especially for application-relevant cases.&#13;
&#13;
The discoveries of this thesis provide a variety of directions for future studies to follow. It is important that we appreciate the complexity of molten salt corrosion and develop systematic approaches to studying it.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140010</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable and Efficient Graph Algorithms and Analysis Techniques for Modern Machines</title>
<link>https://hdl.handle.net/1721.1/140008</link>
<description>Scalable and Efficient Graph Algorithms and Analysis Techniques for Modern Machines
Liu, Quanquan C.
Rapidly growing real-world networks, with billions of vertices, call for scalable, fast, and efficient graph algorithms. Luckily, commercial multi-core, multi-processor, and multi-machine environments can handle such volumes of data. Unfortunately, despite the availability of such resources, many current graph algorithms do not take full advantage of these parallel and distributed environments or have suboptimal theoretical guarantees, translating to slower and less efficient algorithms in practice. The purpose of this thesis is to theoretically improve previous graph algorithms on modern machines. We demonstrate through experiments that such theoretical improvements also translate to practical gains.&#13;
&#13;
Towards this goal, this thesis takes a two-pronged approach. &#13;
First, we formulate algorithms in computation models that mimic large-scale data processing environments. Algorithms in such models take advantage of clusters of machines and a machine's multiple cores and processors. Second, we use specific properties of real-world networks when designing our algorithms. The degeneracy is one such characteristic; while a network may have billions of vertices, its degeneracy may only be a few hundred.&#13;
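The degeneracy mentioned above can be computed by a simple peeling procedure that repeatedly removes a minimum-degree vertex; the largest degree seen at removal time is the degeneracy. A minimal Python sketch (illustrative only, not code from the thesis; the bucket-queue variant of Matula and Beck runs in roughly linear time):

```python
from collections import defaultdict

def degeneracy(adj):
    """Peel off a minimum-degree vertex at each step; the degeneracy is
    the largest degree observed at the moment of removal."""
    deg = {v: len(ns) for v, ns in adj.items()}
    buckets = defaultdict(set)          # degree -> vertices of that degree
    for v, d in deg.items():
        buckets[d].add(v)
    removed, k = set(), 0
    for _ in range(len(adj)):
        d = min(b for b in buckets if buckets[b])   # current minimum degree
        v = buckets[d].pop()
        k = max(k, d)
        removed.add(v)
        for u in adj[v]:                # decrement surviving neighbors
            if u not in removed:
                buckets[deg[u]].discard(u)
                deg[u] -= 1
                buckets[deg[u]].add(u)
    return k

# A 4-clique with a pendant vertex: billions of edges are not needed to
# see that degeneracy (here 3) can be far below the vertex count.
g = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
```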
&#13;
This thesis consists of three parts. The first part presents static graph algorithms. We first introduce a set of new editing algorithms for a framework that approximates solutions to hard-to-solve optimization problems by editing a graph into a desired structured class. Then, we present novel small-subgraph counting algorithms, with better theoretical space and round guarantees, in the massively parallel computation model; our experiments corroborate our theoretical gains and show improvements in the number of rounds and the approximation factor, compared to the previous state of the art, on real-world graphs. We conclude this part with a near-linear time algorithm for scheduling on identical machines with communication delay, where precedence-constrained jobs are modeled as directed acyclic graphs.&#13;
&#13;
The second part focuses on dynamic graph algorithms. We first give a dynamic algorithm for (Δ+1)-vertex coloring that runs in O(1) amortized time per update, with high probability. Then, we provide a new parallel level data structure for the k-core decomposition problem under batch-dynamic updates (where dynamic edge updates are applied in batches). We show that our data structure provably provides a (2+&#120576;)-approximation of the coreness of each vertex, improving on the previously best-known bound of (4+&#120576;). We conclude with new parallel, work-efficient batch-dynamic algorithms for triangle and clique counting. Our extensive experiments for our batch-dynamic algorithms show orders-of-magnitude improvements in performance over the best previous multi-core implementations on real-world networks.&#13;
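For readers unfamiliar with the (Δ+1)-coloring guarantee, the static case makes the bound intuitive: a greedy pass never needs more than Δ+1 colors because a vertex has at most Δ already-colored neighbors. A toy Python sketch (background illustration only; the thesis's contribution is maintaining such a coloring under dynamic updates, which this sketch does not attempt):

```python
def greedy_coloring(adj):
    """Scan vertices in any order, giving each the smallest color not used
    by a neighbor; with maximum degree D, colors 0..D always suffice."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```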
&#13;
The last part concludes with lower bounds. We show via hard instances the hardness of obtaining an optimal computation schedule on directed acyclic computation graphs in the external-memory model. We then demonstrate that such graphs can be used to construct static-memory-hard hash functions that use disk memory to deter large-scale password-cracking attacks.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140008</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fock-Schwartz spin representation space</title>
<link>https://hdl.handle.net/1721.1/140007</link>
<description>The Fock-Schwartz spin representation space
Valiveti, Kaavya G.
In this thesis, we define and study a family of Sobolev-like subspaces (the “Fock-Sobolev spaces”) and the corresponding Schwartz-like space (the “Fock-Schwartz space”) arising from the infinite-dimensional spin representation constructed by Pressley and Segal. In particular, we study the infinitesimal actions of the group of orientation-preserving diffeomorphisms, Diff⁺(&#119878;¹), and the loop group &#119975;Spin(2&#119899;), as well as the action of an infinite-dimensional Clifford algebra on the Fock-Sobolev spaces and Fock-Schwartz space. All of this work is motivated by the goal of constructing the Dirac-Ramond operator on the loop space of a string manifold.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140007</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphs, Principal Minors, and Eigenvalue Problems</title>
<link>https://hdl.handle.net/1721.1/140006</link>
<description>Graphs, Principal Minors, and Eigenvalue Problems
Urschel, John C.
This thesis considers four independent topics within linear algebra: determinantal point processes, extremal problems in spectral graph theory, force-directed layouts, and eigenvalue algorithms. For determinantal point processes (DPPs), we consider the classes of symmetric and signed DPPs, respectively, and in both cases connect the problem of learning the parameters of a DPP to a related matrix recovery problem. Next, we consider two conjectures in spectral graph theory regarding the spread of a graph, and resolve both. For force-directed layouts of graphs, we connect the layout of the boundary of a Tutte spring embedding to trace theorems from the theory of elliptic PDEs, and we provide a rigorous theoretical analysis of the popular Kamada-Kawai objective, proving hardness of approximation and structural results regarding optimal layouts, and providing a polynomial time randomized approximation scheme for low diameter graphs. Finally, we consider the Lanczos method for computing extremal eigenvalues of a symmetric matrix and produce new error estimates for this algorithm.
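As background for the final topic, the Lanczos method reduces a symmetric matrix to a small tridiagonal one whose extreme eigenvalues (Ritz values) rapidly approximate the extreme eigenvalues of the original. A minimal NumPy sketch with full reorthogonalization (illustrative only; the thesis contributes new error estimates for this algorithm, not the algorithm itself):

```python
import numpy as np

def lanczos_extreme(A, k=20, seed=0):
    """Run k Lanczos steps on symmetric A and return the largest Ritz
    value, i.e., the largest eigenvalue of the k-by-k tridiagonal T."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # Full reorthogonalization against all previous Lanczos vectors
        # keeps the basis numerically orthonormal.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] > 1e-12:
            q = w / beta[j]
        else:           # invariant subspace found; stop early
            k = j + 1
            break
    T = (np.diag(alpha[:k])
         + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    return np.linalg.eigvalsh(T).max()
```

With a well-separated top eigenvalue, a few dozen steps typically recover it to near machine precision, which is the regime the error estimates quantify.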
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140006</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time, momentum, spin, and energy resolved tunneling spectrum of a two-dimensional electron system</title>
<link>https://hdl.handle.net/1721.1/140005</link>
<description>Time, momentum, spin, and energy resolved tunneling spectrum of a two-dimensional electron system
Yoo, Heun Mo
Two-dimensional (2D) electronic systems host a plethora of remarkable phenomena, including the integer and fractional quantum Hall effects (Nobel Prize 1985 and 1998), and have stimulated a wide range of fundamental science and engineering research. Yet, many of the important phenomena that emerge from the collective behavior of 2D electrons remain poorly understood as there have been substantial experimental challenges in probing the electronic structures and interactions in these systems. In this thesis, we present a time-domain pulsed tunneling technique that can visualize the energy, momentum, spin, and time resolved electronic structures of 2D electronic systems. Unlike the conventional tunneling method that requires the in-plane conductivity of the system, our pulsed tunneling technique functions on strongly insulating systems at low temperatures or in large applied magnetic fields. Furthermore, through the use of pulses that drive tunneling in extremely short time intervals, the technique eliminates perturbations such as heating. Using the pulsed tunneling technique, we visualized the effect of electron-optic phonon interactions and Landau level quantization in energy-momentum space. We also performed spin resolved pulsed tunneling experiments and measured the magnetization of a 2D electronic system over a wide range of applied magnetic fields and electron densities. Moreover, we pumped a 2D electronic system using an additional electrical pulse and imaged time resolved tunneling spectra of the system driven out of equilibrium. These results illustrate the potentially broad applicability of our time-domain pulsed tunneling technique to studying correlated electron phenomena in a wide variety of emerging two-dimensional materials.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140005</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thin Film Energy Devices</title>
<link>https://hdl.handle.net/1721.1/140003</link>
<description>Thin Film Energy Devices
Xu, Lin
The rapid emergence of the Internet of Things (IoT) is driving the demand for chip-based self-powered sensors that require energy harvesters and energy storage devices, i.e., “thin film energy devices”, as key components.&#13;
&#13;
The first section of this thesis introduces the working principle of a new type of thermal energy harvester, a “Multi-cell Thermogalvanic System” (MTS), that provides an alternative to other thermal energy harvesters that cannot be miniaturized or require materials that cannot be used in chip-based IoT devices. Using coin cells, several proofs of concept showed that MTS devices have efficiencies comparable to other thermal energy harvesters and have sufficient energy and power output for IoT devices, without requiring large heat sinks and with less stringent constraints on materials selection. It is also shown that, with several improvements in materials and processing methods, MTS devices can be used as thin film energy harvesters in chip-based IoT sensors.&#13;
&#13;
The second section of this thesis focuses on the mechanisms of cyclic lithiation and delithiation of RuO2. RuO2 is a candidate cathode material for next-generation thin film lithium ion batteries (TF-LIBs), due to its relatively large capacity (~5x that of LiCoO2), its very good cyclability and rate capability, and its compatibility with silicon-based microelectronic circuits (all-room-temperature processing). Like other electrode materials that store Li through reversible conversion reactions, RuO2 was found to have a relatively large voltage hysteresis, which limits its cycling energy efficiency. Investigation of the mechanism of this reaction provided insights for further optimization of RuO2 for TF-LIBs. The methods developed in this study can also be used to investigate other high-capacity conversion-reaction electrode materials.&#13;
&#13;
The third section demonstrates a method for improving the mechanical stability of RuO2 thin films by making arrays of lithographically-patterned notches within them. Additionally, it was found that this approach can be used to form regular arrays of channels with widths that can be modulated by the state of lithiation and that can be reversibly opened and closed by delithiation and re-lithiation. Therefore, this method may also be applied to microfluidic devices or sensors.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140003</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Shoe Midsoles for Running Performance</title>
<link>https://hdl.handle.net/1721.1/140002</link>
<description>Optimizing Shoe Midsoles for Running Performance
Fay, Sarah Claire
Running shoe design affects running performance. Running performance most commonly refers to running economy, the rate of oxygen consumed by an athlete running at a given speed, because it compares the input (metabolic) energy to the output (kinetic) energy. Running economy data is collected in highly controlled environments using gas exchange measurement systems, often in the form of wearable masks, for runners on treadmills. The process of facilitating such an experiment is slow and requires specialized and expensive equipment; it is not suited for rapid testing of new shoe designs.&#13;
&#13;
Rapid evaluation of new shoe designs, however, is increasingly important with the expansion of the shoe design space due to advancing technology. In particular, additive manufacturing has become a viable option for the mass production of shoe components, allowing the creation of shoes whose properties vary (and are engineered to vary) in ways that foam casting, the traditional midsole fabrication method, does not allow. These new designs have the potential to improve running performance and are therefore being used by sports brands, especially adidas, to create their latest lines of shoes.&#13;
&#13;
To perform rapid evaluation of these new possible designs, a data-informed, mechanical model for running was developed. This model successfully captures the relationship between shoe properties of well-tested shoes and running performance. This model also allows the flexibility for evaluation of shoes which have yet to be made, particularly those in the new design space afforded by the advances in additive manufacturing. Specific lattice-structured midsoles, like those in the adidas 4D shoe line, were then evaluated using this model. The predicted highest performing shoe design was then fabricated for more formal testing. While COVID-19 restrictions precluded full statistically meaningful evaluation of the shoes, the results of the testing that did occur gave shoe designers confidence in using the model as a rapid evaluation tool in the future.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140002</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identification and Robustness in Central Banking and Supply Chain</title>
<link>https://hdl.handle.net/1721.1/140001</link>
<description>Identification and Robustness in Central Banking and Supply Chain
Jiang, Bomin
In this thesis, we study identification and robustness issues in central banking and supply chains. First, we present a methodology to identify multiple linear financial networks when only an aggregate outcome is observed, and use the method to assess networks among financial institutions in the United States. We discover that the data is better explained by a mix of distinct networks, each of which corresponds to a different transmission mechanism. Second, we investigate the effect of bounded uncertainty in central banking and derive robust decision rules for central bank policymaking. When bounded uncertainty is passed through a conditional expectation channel, we find that committing not to use a policy tool is sometimes optimal for the central bank. An asset purchasing model and a forward guidance model are examined in depth to illustrate this point. Third, we study a stylized supply chain model in which a large aggregate shock hits and prices are not adjusted due to anti-price-gouging laws. We show that individual producers, in this case, will not diversify against the aggregate shock because of the externality created by fixed prices. Multinational corporations, on the other hand, would still diversify their supply chains due to their continuation value. Furthermore, a robustness analysis shows that individual producers will not diversify even when they adopt robust decision rules, whereas multinational corporations will diversify their supply chains further in this case. The first two chapters tackle the real-world challenge of financial systemic risk reflected in the 2008 financial crisis and the proliferation of monetary policy tools thereafter. The third chapter analyzes the supply chain disruption caused by the outbreak of the COVID-19 global pandemic and gives policy suggestions based on our model.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140001</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable and Broad Hardware Acceleration through Practical Speculative Parallelism</title>
<link>https://hdl.handle.net/1721.1/140000</link>
<description>Scalable and Broad Hardware Acceleration through Practical Speculative Parallelism
Abeydeera, Maleen
With the slowing down of Moore’s Law, silicon fabrication technology is not yielding the performance improvements it once did. Hardware accelerators, which tailor their architecture to a specific application or domain, have emerged as an attractive approach to improve performance. Unfortunately, current accelerators have been limited to domains such as deep learning, where parallelism is easy to exploit. Many applications do not have such easy-to-extract parallelism and have remained off-limits to accelerators.&#13;
&#13;
This thesis presents techniques to build accelerators for applications with speculative parallelism. These applications consist of atomic tasks, sometimes with order constraints, and need speculative execution to extract parallelism. In speculative execution, tasks are executed in parallel under the assumption that they are independent, and a runtime system monitors their execution to check that this assumption holds. If a task produces a conflict during execution, i.e., if it may violate a data dependence, it is aborted and re-executed.&#13;
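The abort-and-retry discipline described above can be mimicked in a few lines. A toy sequential model in Python (illustrative only, not the Chronos hardware design; tasks, read sets, and write sets are hypothetical names):

```python
def run_speculative(tasks, state):
    """Toy model of ordered speculative execution: every task runs
    optimistically against a snapshot of the initial state; tasks then
    commit in order, and a task whose reads overlap an earlier task's
    committed writes saw stale data, so it aborts and re-executes."""
    # Optimistic phase: each task sees only a snapshot (stand-in for
    # running all tasks in parallel).
    results = [t(dict(state)) for t in tasks]
    committed_writes = set()
    aborts = 0
    for i, t in enumerate(tasks):
        reads, writes = results[i]
        if reads.intersection(committed_writes):
            aborts += 1                 # conflict detected: stale reads
            reads, writes = t(state)    # re-execute against committed state
        state.update(writes)
        committed_writes.update(writes)
        results[i] = (reads, writes)
    return state, aborts

def inc(key):
    """A task that reads a counter and writes it incremented by one."""
    def task(s):
        return ({key}, {key: s.get(key, 0) + 1})
    return task

# The two x-increments conflict: the second one read a stale snapshot
# and is retried once; the y-increment commits without conflict.
state, aborts = run_speculative([inc("x"), inc("x"), inc("y")], {"x": 0})
```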
&#13;
This thesis proposes Chronos, a framework-based approach for building accelerators that use speculation to extract parallelism. Under Chronos, accelerator designers express the algorithm as a set of ordered tasks, and then design processing elements (PEs) to execute each of these tasks. The framework provides reusable components for task management and speculative execution, saving most of the developer effort in creating accelerators for new applications. &#13;
&#13;
Prior general-purpose architectures have leveraged existing techniques, such as cache-coherence protocols, for conflict detection, but implementing coherence would add complexity, latency, and significant on-chip storage requirements, making these techniques expensive on accelerators.&#13;
&#13;
To tackle this challenge, we first propose a new execution model, Spatially Located Ordered Tasks (SLOT), that uses order as the only synchronization mechanism and limits task accesses to a single read-write object. We then use SLOT to implement the Chronos framework. This implementation avoids the need for cache coherence and makes speculative execution cheap and distributed. This reduces overheads and improves performance by up to 2× over prior conflict detection techniques.&#13;
&#13;
While SLOT achieves excellent performance on many algorithms, it is sometimes desirable to allow a single task to access multiple objects. Thus, we extend Chronos to support the more general Swarm execution model, which allows this and is also easier to program. This Chronos-Swarm implementation improves performance when Swarm’s features are needed, but it hurts performance when they are not, as the Swarm execution model requires more expensive conflict checks on each memory access. To bridge this gap, we introduce a hybrid SLOT/Swarm execution model that combines the generality and ease-of-programming of Swarm with the performance of SLOT.&#13;
&#13;
We develop FPGA implementations of Chronos and use them to build accelerators for several challenging applications. When run on cloud FPGA instances, these accelerators outperform state-of-the-art software versions running on a higher-priced multicore instance by 3.5× to 15.3×.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140000</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Plan by Learning Rules</title>
<link>https://hdl.handle.net/1721.1/139998</link>
<description>Learning to Plan by Learning Rules
Araki, Minoru Brandon
Many environments involve rules and tasks; for example, a chef cooking a dish follows a recipe, and a person driving follows the rules of the road. People are naturally fluent with rules: we can learn rules efficiently; we can follow rules; we can interpret rules and explain them to others; and we can rapidly adjust to modified rules, such as a new recipe, without needing to relearn everything from scratch. By contrast, deep reinforcement learning (DRL) algorithms are ill-suited to learning policies in rule-based environments, as satisfying rules often involves executing lengthy tasks with sparse rewards. Furthermore, learned DRL policies are difficult if not impossible to interpret and are not composable. The aim of this thesis is to develop a reinforcement learning framework for rule-based environments that can efficiently learn policies that are interpretable, satisfying, and composable. We achieve interpretability by representing rules as automata or Linear Temporal Logic (LTL) formulas in a hierarchical Markov Decision Process (MDP). We achieve satisfaction by planning over the hierarchical MDP using a modified version of value iteration. We achieve composability by building on a hierarchical reinforcement learning (HRL) framework called the options framework, in which low-level options can be composed arbitrarily. Lastly, we achieve data-efficient learning by integrating our HRL framework into a Bayesian model that can infer a distribution over LTL formulas given a low-level environment and a set of expert trajectories. We demonstrate the effectiveness of our approach via a number of rule-learning and planning experiments in both simulated and real-world environments.
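The planning step builds on value iteration, which can be sketched generically. A standard textbook form in NumPy (background only; the thesis uses a modified variant over a hierarchical, automaton-product MDP, which this sketch does not capture):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration. P has shape (A, S, S), where P[a, s, t]
    is the probability of moving from state s to t under action a, and
    R is an S-vector of state rewards. Returns the optimal values."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V      # Q[a, s]: reward plus discounted future
        V_new = Q.max(axis=0)      # greedy Bellman backup over actions
        if np.max(np.abs(V_new - V)) > tol:
            V = V_new
        else:
            return V_new
```

For a two-state example with a "stay" action and a "move to the rewarding state" action, the values converge to the geometric sums predicted by the Bellman equation.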
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139998</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction</title>
<link>https://hdl.handle.net/1721.1/139997</link>
<description>Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction
DelPreto, Joseph
This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans’ capabilities and improve quality of life. &#13;
&#13;
Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG.&#13;
&#13;
Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations. &#13;
&#13;
Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.&#13;
&#13;
This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139997</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-Layer Nanoparticles for Targeted Delivery and Treatment of Ovarian Cancer</title>
<link>https://hdl.handle.net/1721.1/139996</link>
<description>Layer-by-Layer Nanoparticles for Targeted Delivery and Treatment of Ovarian Cancer
Kong, Stephanie Mabel
Ovarian cancer remains the most lethal gynecologic malignancy in the United States. The lack of early detection methods often leads to a late diagnosis and reduced survival outcomes for patients. The overwhelming majority of ovarian cancer patients are diagnosed with late-stage high grade serous ovarian cancer (HGSOC), which is highly metastatic. While many patients initially respond to treatment with debulking surgery and adjuvant chemotherapy, most will have residual disease and relapse. Depending on the extent of metastatic spread and the size of nodal lesions, complete surgical removal of solid tumor may be impossible. Adjuvant chemotherapy is often associated with significant dose-limiting systemic toxicities and can lead to platinum-resistant disease. To date, there are no FDA-approved therapies that significantly prolong the overall survival of HGSOC patients who relapse with platinum-resistant disease. Although therapies for downregulating mechanistic resistance are currently under investigation, their efficacy is hampered by nonspecific interactions with healthy tissue and poor pharmacokinetics.&#13;
&#13;
Targeted drug delivery carriers can reduce systemic toxicity caused by off-target interactions, increase drug solubility and stability in physiological fluids, and improve efficacy through enhanced targeting and intracellular delivery to cancer cells. Layer-by-layer nanoparticles (LbL NPs) are multifunctional drug delivery vehicles whose modular architecture allows for the incorporation of a broad spectrum of therapeutics with diverse chemistries, and functionalization with targeting moieties that increase delivery to solid tumors. This thesis explores LbL NP materials design for optimizing delivery and targeted treatment of HGSOC. &#13;
&#13;
The first part of this thesis develops a synergistic combination LbL NP therapy for downregulating resistance pathways in metastatic HGSOC. Combination treatment with BH3 mimetics targeting the mitochondrial apoptotic pathway induced significant synergistic toxicity in a panel of patient-derived HGSOC cells. Formulation of LbL NPs was optimized for high drug loading and designed with tumor-targeting outer layer chemistry. Co-encapsulation of the combination therapy within an LbL NP significantly enhanced efficacy over free drug treatments. Combination LbL NP therapy was well tolerated in animal models of HGSOC and resulted in dramatic regression of solid tumors. These results support the use of LbL NP drug delivery carriers to improve the therapeutic profile of combination treatments.&#13;
&#13;
The second part of this thesis explores the role of nanoparticle stiffness in overcoming physiological barriers to drug delivery. The LbL NP architecture uniquely allows for decoupling the effects of targeting chemistry and particle stiffness. Soft-LbL NPs were found to preferentially accumulate in solid tumors and to achieve higher tumor penetration than Rigid-LbL NPs. Decreased nanoparticle stiffness extends the in vivo elimination half-life, leading to greater exposure of solid tumors to circulating nanoparticles. Thus, Soft-LbL NPs are preferred for systemically delivered (i.e., intravenous) therapies. Increasing rigidity enhances intracellular uptake of LbL NPs by cancer cells. Uptake of Rigid-LbL NPs is most significant when the NPs are functionalized with tumor-targeting chemistry, and it appears to be receptor-mediated. These results highlight the importance of combining targeting and stiffness modulation in nanoparticle design, and suggest the use of Rigid-LbL NPs for localized (intraperitoneal) delivery, where intracellular uptake is the dominant delivery barrier.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139996</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vertical Fluxes in the Upper Ocean</title>
<link>https://hdl.handle.net/1721.1/139994</link>
<description>Vertical Fluxes in the Upper Ocean
Freilich, Mara
Oceanic fronts at the mesoscale and submesoscale are associated with enhanced vertical motion, which strengthens their role in global biogeochemical cycling as hotspots of primary production and of subduction of carbon from the surface to the interior. Using process study models, theory, and field observations of biogeochemical tracers, this thesis improves understanding of submesoscale vertical tracer fluxes and their influence on carbon cycling. Unlike buoyancy, biogeochemical tracers can be transported vertically both by the movement of isopycnals and by motion along sloping isopycnals. We decompose the vertical velocity below the mixed layer into two components in a Lagrangian frame, the vertical velocity along sloping isopycnal surfaces and the adiabatic vertical velocity of the isopycnal surfaces themselves, and demonstrate that vertical motion along isopycnal surfaces is particularly important at submesoscales (1-10 km). The vertical flux of nutrients, and consequently new production by phytoplankton, depends not just on the vertical velocity but on the relative time scales of vertical transport and nutrient uptake. Vertical nutrient flux is maximized when the biological timescale of phytoplankton growth matches the vertical velocity frequency. Export of organic matter from the surface to the interior requires water parcels to cross the mixed layer base. Using Lagrangian analysis, we study the dynamics of this process and demonstrate that geostrophic and ageostrophic frontogenesis drive subduction along density surfaces across the mixed layer base. Along-front variability is an important factor in subduction. Both the physical and biological modeling studies described above are used to interpret observations from three research cruises in the Western Mediterranean. We sample intrusions of high chlorophyll and particulate organic carbon below the euphotic zone that are advected downward by 100 meters on timescales of days to weeks.
We characterize the community composition in these subsurface intrusions at a lateral resolution of 1–10 km. We observe systematic changes in community composition due to the changing light environment and differential decay of the phytoplankton communities in low-light environments, along with mixing. We conclude that advective fluxes could make a contribution to carbon export in subtropical gyres that is equal to the sinking flux.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139994</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling-Based Robot Task and Motion Planning in the Real World</title>
<link>https://hdl.handle.net/1721.1/139990</link>
<description>Sampling-Based Robot Task and Motion Planning in the Real World
Garrett, Caelan Reed
We seek to program a robot to autonomously complete complex tasks in a variety of real-world settings involving different environments, objects, manipulation skills, degrees of observability, initial states, and goal objectives. In order to successfully generalize across these settings, we take a model-based approach to building the robot’s policy, which enables it to reason about the effects of executing different sequences of parameterized manipulation skills. Specifically, we introduce a general-purpose hybrid planning framework that uses streams, modules that encode sampling procedures, to generate continuous parameter-value candidates. We present several domain-independent algorithms that efficiently combine streams in order to solve for parameter values that jointly satisfy the constraints necessary for a sequence of skills to achieve the goal. Each stream can be either engineered to perform a standard robotics subroutine, like inverse kinematics and collision checking, or learned from data to capture difficult-to-model behaviors, such as pouring, scooping, and grasping. Streams are also able to represent probabilistic inference operations, which enables our framework to plan in belief space and intentionally select actions that reduce the robot’s uncertainty about the unknown world. Throughout this thesis, we demonstrate the generality of our approach by applying it to several real-world tabletop, kitchen, and construction tasks and show that it can even be effective in settings involving objects that the robot has never seen before.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139990</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tectonic and Climatic Controls on Continental River Systems</title>
<link>https://hdl.handle.net/1721.1/139989</link>
<description>Tectonic and Climatic Controls on Continental River Systems
Goldberg, Samuel L.
Erosion by rivers is the dominant driver of topographic change over much of Earth’s terrestrial surface, and sets the pace of change for other landscape processes. In this thesis, I explore the complexities of landscapes evolving under fluvial processes. I address questions of how rivers respond to climate, geologic substrate, and tectonics across scales, how landscape disequilibrium results from changing tectonic and climatic forces, and implications of landscape evolution for past human settlement. In the first chapter, I use a process model of river erosion that includes sediment transport feedbacks to show that climate controls the degree to which rock type affects erosion through transport limitations, with arid regions showing a much weaker dependence on rock type than humid regions. This complexity is not captured by typical models of river erosion. In the second chapter, I study the unusual case of the Rio Casiquiare, an ongoing river capture of the Amazon River from the Rio Orinoco. I use this case study to show that large lowland rivers with slope asymmetry across drainage divides reorganize their planform geometry towards a more equilibrated state, and in doing so can create perennial interbasin connections for centuries or longer. In the third chapter, I show that large lowland Amazon rivers have been quickly responsive to cyclical Quaternary climate changes, and as a result have repeatedly incised and aggraded with successive wettings and dryings of the region. In the fourth chapter, I use remote-sensing imagery and machine-learning classification to identify spatial patterns and distributions of ancient settlements, and find that they are almost universally located at the bluff edge at the interface between uplands and floodplains; this is an example of the ways in which geologic and environmental history can influence human society. 
These studies advance our knowledge of landscape evolution towards a more realistic understanding of the complexities of the natural world and its constant change.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139989</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in the Grammar of Koryak</title>
<link>https://hdl.handle.net/1721.1/139986</link>
<description>Topics in the Grammar of Koryak
Abramovitz, Rafael Meghani
This thesis consists of four chapters on the grammar of Koryak, a highly-endangered Chukotko-Kamchatkan language of the Russian Far East. In the first chapter, I argue that the distribution of the segments v and w in morpheme-final position needs to be handled by a phonological process that applies to bare morphemes. In the second chapter, I argue for a similar conclusion regarding the language’s vowel harmony system. Both of these chapters therefore argue for a phonological architecture that includes the morpheme as a domain to which phonology can apply, as in early generative phonology (Halle 1959; Chomsky and Halle 1968), but unlike in Lexical Phonology (Kiparsky 1982), standard Optimality Theory (Prince and Smolensky 1993), Stratal Optimality Theory (Bermúdez-Otero 2008), among others. The third and fourth chapters are independent, and concern the syntactic underpinnings of case-marking in Koryak. In the third paper, I argue that moving wh-words cause other nouns in the sentence to change their case-marking in a way that is consistent with a configurational account of ergative and certain instances of dative case (Yip et al. 1987; Marantz 1991; Baker 2015). In the fourth paper, I argue that inverse case attraction, a phenomenon where the head of a relative clause is marked with the case of the gap inside the relative clause, is the result of an internally-headed relative clause with a left-peripheral head, a type of relative clause that has otherwise only been proposed for the Gur languages of West Africa (Hiraiwa 2005 et seq.). Based on the available data on inverse case attraction in other languages, I further argue that the internal head analysis of inverse case attraction is a general solution to the phenomenon crosslinguistically.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139986</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Efficiency and Fairness in Machine Learning: a Discrete Optimization Approach</title>
<link>https://hdl.handle.net/1721.1/139985</link>
<description>Improving Efficiency and Fairness in Machine Learning: a Discrete Optimization Approach
Bandi, Hari
In recent years, machine learning models have been increasingly deployed in a variety of applications, including education, finance, healthcare, and transportation. However, in most practical situations, one-size-fits-all solutions suffer from poor predictive performance and/or bias against certain subgroups. This necessitates developing new approaches to enhance robustness, interpretability, and fairness in the resulting machine learning systems. We borrow tools from discrete and robust optimization to develop models and algorithms for such systems. &#13;
&#13;
The first part of this thesis focuses on developing novel methodologies to enhance the performance of specific predictive models. In particular, in the first chapter we propose a novel Mixed Integer Optimization (MIO) formulation that optimally recovers the parameters of a Gaussian mixture model (GMM) by minimizing a discrepancy measure (either the Kolmogorov-Smirnov or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. In the second chapter, we present a holistic framework employing tensor completion and robust optimization for prescribing influenza vaccine composition. We also build an optimal classification tree to predict the efficacy of the proposed vaccine in terms of morbidity and mortality rates for different countries.&#13;
&#13;
In the second part of the thesis, we present novel algorithms to alleviate systemic bias with respect to gender, race and ethnicity, often unconscious, but prevalent in datasets involving choices made by people. We propose (a) a novel optimization approach based on optimally flipping outcome labels and training classification models simultaneously to discover changes to be made in the selection process so as to achieve diversity without significantly affecting meritocracy, and (b) a novel implementation tool employing optimal classification trees to provide insights on which attributes of individuals lead to flipping of their labels, and to help make changes in the current selection processes in a manner understandable by human decision makers.&#13;
&#13;
In the final chapter, we present an application of our work on a discharge disposition prediction problem for trauma patients to debias the dataset with respect to race, and train optimal classification trees to predict discharge decisions for trauma patients with penetrating injuries. Our impact here is twofold: (1) alleviating bias to enhance diversity in discharge decisions and developing an implementation tool using optimal classification trees to promote changes in the selection process, and (2) improving predictive performance (AUC) of the resulting classifiers after debiasing the dataset.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139985</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partisanship, Friendship, and Censorship in Online Social Networks</title>
<link>https://hdl.handle.net/1721.1/139984</link>
<description>Partisanship, Friendship, and Censorship in Online Social Networks
Yang, Qi
With the widespread adoption of social media in today’s society, it has become increasingly important to understand users’ behaviors and the underlying factors driving these behaviors. This thesis considers different aspects of this problem, from tie formation to influence campaigns, examining factors of partisanship, friendship, and censorship in today’s online social networks.&#13;
&#13;
We begin in the first chapter with an overview of social media and the key behaviors of the users of these platforms and the platforms themselves that we will be studying. In the second chapter, we introduce the follow back problem, and examine how different following strategies and political ideologies can influence the follow back rate. After obtaining followers, one can then begin posting content to influence them. In the third chapter, we consider this type of influence campaign. Recent studies have shown that exposure to opposing opinions causes a backfire effect, where people become more steadfast in their original beliefs. We demonstrate a technique known as pacing and leading which can mitigate this backfire effect over time.&#13;
&#13;
In the fourth chapter, we consider the challenge of inferring political bias in a hyper-partisan media ecosystem. From empirical studies in the previous chapters, we discovered that Twitter exhibited an anti-conservative bias when suspending users. However, many studies find that conservatives are more likely to share misinformation on social media. Therefore, it is possible that the suspensions are due to enforcing an unbiased policy aimed at limiting the spread of misinformation. Here, we evaluate the two possible hypotheses empirically by examining the suspension of Twitter users. We found that the observation that Republicans were more likely to be suspended than Democrats provides no evidence that Twitter was biased against conservatives. Instead, this asymmetry can be explained entirely by the tendency of the Republicans to share more misinformation.&#13;
&#13;
Lastly, the COVID-19 pandemic created large shifts in how people stay connected with each other due to social distancing and isolation measures. In the fifth chapter, we study research questions around the impact of COVID-19 on online public and private sharing propensity, its influence on online communication homophily, and correlations between online communication and offline case severity in the United States. To do so, we study the usage patterns of 79 million US-based users on Snapchat, a large, leading mobile multimedia-driven social sharing platform. Our findings suggest that COVID-19 has increased private communication while decreasing publicly shared content when users are out and about, has decreased homophily across locations, ages, and genders, and is associated with widening gaps between increases in across-state and within-state communication after the onset of the pandemic.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139984</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Superelastic Secondary-Phase-Toughened Alloys</title>
<link>https://hdl.handle.net/1721.1/139981</link>
<description>Design of Superelastic Secondary-Phase-Toughened Alloys
Cho, Jaclyn L.
As the global demand for metal increases, the resulting production is creating a growing strain on the environment. One strategy to combat this sustainability challenge is to increase material lifetime through alloy design aimed at improving mechanical performance under difficult conditions like fatigue. This thesis presents an alloy design strategy to benefit from transformation toughening without retention of the transformation product by designing a multi-phase alloy combining a superelastic phase with a stable elasto-plastic phase. In particular, nano- and micro-scale precipitates, lamellar structures, and bulk matrix morphologies of TiNi are investigated in combination with stable (V,Nb)-Ti matrix phases. These structures allow the study of phase stability as a function of morphology, size, and composition. The phase stability of each form of the superelastic phase is not only investigated in isolation, but also in its role in the various phase mixtures within the alloy. The effects of the complex microstructure on the phase stability as well as the effects of the transformation on the deformation micro-mechanics of the microstructure have been probed through a variety of in-situ micro-mechanical tests, including in-situ SEM tensile experiments combined with &#120583;-DIC, in-situ synchrotron tensile experiments, in-situ TEM micro-pillar compression tests, and in-situ SEM crack propagation experiments. These results provide a statistical view of the importance of the properties of the incorporated phases on phase stability, co-deformation, and ultimately crack propagation behavior.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139981</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Testbeds for Advancement of Powder Bed Additive Manufacturing with Application to Reactive Binder Jetting of Ceramics</title>
<link>https://hdl.handle.net/1721.1/139979</link>
<description>Testbeds for Advancement of Powder Bed Additive Manufacturing with Application to Reactive Binder Jetting of Ceramics
Oropeza Gomez, Daniel
Binder jet additive manufacturing (BJAM) offers design flexibility and compatibility with a variety of materials due to material processing in the solid state. The formation of the powder bed is a critical step to ensuring quality parts are produced via binder jetting, but low powder bed densities and a lack of understanding of the effect of recoating parameters have limited the applicability of binder jet components. Furthermore, polymer-based binders are most commonly used despite the need for a debinding step and challenges with part warping during sintering. Adaptations to the binder jet AM process, such as the use of powder spreading optimization and reactive binders, could facilitate the development of high-density ceramics from dry powder feedstock. To attain this, an understanding of powder spreading, binder-powder interactions, and reactive metal salt decomposition and interparticle-bridge evolution is required.&#13;
&#13;
This thesis: (1) describes the design and fabrication of testbeds for powder spreading, ink jetting, and binder jetting processes, (2) explores the effects of powder feedstock and spreading parameters on powder bed density and uniformity of alumina ceramics for application in binder jet additive manufacturing, (3) establishes a process for novel binder ink development and applies it to the production of reactive metal salt binders for preceramic binder jetting, and (4) fabricates alumina ceramic components through BJAM and compares the efficacy of polymer and reactive binders in microstructural and dimensional control during post-process sintering. The powder spreading and BJAM testbeds are validated using representative experiments to characterize powder layer and green component fabrication. By coupling the powder spreading testbed with an x-ray-based powder layer density measurement methodology, the influence of powder size and shape distribution, as well as spreading and dispensing methodologies, is interrogated. A process including characterization of ink rheology, jetting properties, decomposition, and green strength is applied to the development of novel reactive binders with sustained strength during sintering. Finally, the BJAM testbed is utilized to fabricate ceramic components using polymer and reactive binders, showcasing the capability for microstructural and dimensional control of ceramics through the use of reactive binders.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139979</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Liquid-Metal-Activated Aluminum-Water Reactions and Their Application</title>
<link>https://hdl.handle.net/1721.1/139978</link>
<description>Mechanisms of Liquid-Metal-Activated Aluminum-Water Reactions and Their Application
Godart, Peter T.
The work presented in this thesis contributes to the fundamental understanding of the liquid-metal-activated aluminum-water reaction system, as well as methods that leverage these insights to improve the practicality of aluminum-based fuels.  Water-reactive aluminum is a promising energy storage material given its ability to generate hydrogen and heat at a high volumetric energy density.  Accounting for only the hydrogen released in this aluminum-water reaction, energy densities up to 36.3 MJ/L can be achieved, compared to 7.2 MJ/L for liquid hydrogen. The ability of this reaction to generate hydrogen on demand also eliminates safety concerns associated with gaseous or liquid hydrogen storage. In addition, the heat generated from the aluminum oxidation (15.8 MJ/kg) can be used to power thermal processes including seawater desalination, making aluminum a potentially attractive fuel source for disaster relief applications in which debris can be mined for energy to generate critical resources like electricity and potable water.&#13;
&#13;
To make aluminum water-reactive, its natural oxide layer must first be disrupted. One promising activation approach is to introduce a liquid-phase gallium-indium eutectic (eGaIn) into the aluminum grain boundary network. While this method produces a highly reactive fuel with only roughly 5 wt.% added, viability for practical applications had hinged on several important but previously untested assumptions made in the literature.  Specifically, the work presented in this thesis addresses the uncertainty around (1) whether the eGaIn can be recovered as a liquid post-reaction and recycled to activate more aluminum with minimal loss, (2) how the aluminum-water system performs at elevated pressures and with near-stoichiometric water inputs, and (3) whether this process can be applied to practical scrap aluminum with surface contamination and high alloying content.  &#13;
&#13;
In this research, SEM-EDS and XRD analysis showed that the activating eGaIn cannot be recovered under standard reaction conditions (i.e. deionized water, 1 bar, 100 degC) due to dealloying of the gallium and indium at the microscale. It was then discovered that in ionic aqueous solutions, liquid-phase eGaIn emerges from solution under specific ambient conditions.  From this discovery, a method for recovering and recycling the eGaIn was developed, using NaCl in moderate concentrations as the only additive. In subsequent experiments, &gt;99% of the input eGaIn was recovered and recycled to produce aluminum fuel with no observed loss in performance.  With support from additional experimental evidence, it was hypothesized that the recoverability in this method is due to the development of a passivating electronic double layer at the eGaIn-electrolyte interface, thereby inhibiting gallium oxidation and subsequent dealloying.&#13;
&#13;
This proposed mechanochemical reaction theory was then extended to non-ionic solutions. A new experimental technique was developed in which aluminum and water can be reacted arbitrarily slowly in a non-aqueous environment via controlled exposure to room-temperature water vapor, enabling in-progress characterization via SEM-EDS and XRD techniques. This method was used to identify a two-part reaction mechanism in which the aluminum first disintegrates along its microstructure via a fractal-like exfoliation process, followed by its reaction with water at unoxidized sites along the freshly exposed grain surfaces.  The indium in the activating alloy was shown to be crucial for the disintegration process specifically, and additional evidence suggests that the initial reaction driving the exfoliation is not a large-scale aluminum-water reaction, but possibly a gallium oxidation reaction instead. It was also shown that ambient oxygen in the reaction environment is capable of repassivating the exposed aluminum grains, severely limiting hydrogen yields. &#13;
&#13;
This reaction mechanism was then studied for constant volume, elevated ambient pressure, and near-stoichiometric water input conditions. An idealized thermodynamics model was developed and validation experiments showed that actual hydrogen yields are suppressed under each of these conditions. Reduced reactivity was observed for &lt;10x stoichiometric water inputs, and experimental evidence suggests excess water is both being taken up into the crystalline reaction products via intercalation and also serves as a physical barrier preventing repassivation by ambient oxygen. It was discovered that by increasing the pH of the reaction environment, a two-fold increase in hydrogen production under near-stoichiometric conditions can be achieved. To account for the reduced reactivity in isochoric reactions, it was observed that the disintegration phase of the reaction mechanism is inhibited at high ambient pressures. Using these insights and empirical parameterizations of reactivity under these various conditions, the accuracy of the thermodynamics model was improved.&#13;
&#13;
Finally, a method for activating practical scrap aluminum using eGaIn was developed. Used aluminum beverage cans (UBCs) were selected as a challenging case study due to their thin geometry, polymer coatings, and high alloying content. It was demonstrated that shredding and compacting the UBCs into pellets under parameters optimized in this research for hydrogen production produces a fuel with consistent theoretical hydrogen yield fractions &gt;0.97.  The total energy input for this process was measured at 551 kJ/kg, only 1.8% of the embodied energy of the aluminum.  A preliminary economics model that incorporates the results of this work predicts that the value of the electricity, potable water, and high-quality aluminum hydroxide produced by an aluminum oxidation power system is up to 9x that of the input scrap depending on location. In total, this work enables aluminum that would otherwise sit idle in a landfill to be mined locally for energy.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139978</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling tools for de-novo single molecule protein sequencing</title>
<link>https://hdl.handle.net/1721.1/139977</link>
<description>Enabling tools for de-novo single molecule protein sequencing
Estandian, Daniel Masao
Proteins are the functional units of all living cells and the subsequent product of many genes. Subtle changes in protein structure and function can cause deleterious effects and, in cases of critical protein units, precipitate large-scale biological dysfunction. Amino acids are the fundamental building blocks of proteins, ultimately determining protein folding structure and functionality. As such, the ability to detect and analyze low abundance proteins at amino acid resolution can greatly accelerate research into protein function and biology. However, in stark contrast to the relative success of DNA sequencing technologies, there is currently no efficient and cost-effective strategy that is able to adequately achieve single molecule protein sequencing at amino acid resolution. The critical hurdle in protein sequencing is overcoming the intramolecular interactions between amino acids that compete, prevent access, or interfere with the function of reagents that can detect amino acid side chains. Current solutions involve denaturation agents that are transient, harsh, and can compromise the integrity of amino acid identification tools. In addition, these methods only solve some of the intramolecular interactions of proteins. In this thesis, I present a novel and complete intramolecular disruption strategy, as well as the rational design of a molecule called ClickP. Together, these two elements enable single molecule protein sequencing by disrupting proteins’ intramolecular environment and physically isolating terminal amino acids, which, in turn, will allow detection tools to identify individual amino acids with specificity and sensitivity. Progress in developing this technology has the potential to revolutionize proteomics by enabling single molecule resolution protein analysis to become feasible, inexpensive, and routine.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139977</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Small-scale Quantum Information Processors based on Electronic Spins in Diamond</title>
<link>https://hdl.handle.net/1721.1/139975</link>
<description>Developing Small-scale Quantum Information Processors based on Electronic Spins in Diamond
Sun, Won Kyu Calvin
Isolated optically-active solid-state spins such as the Nitrogen-Vacancy (NV) center in diamond have demonstrated good properties as qubits for quantum information tasks. However, engineering larger quantum registers around a central NV enables more powerful applications. For example, a register of nuclear spins around the NV has already demonstrated many quantum protocols such as quantum error correction and entanglement distillation. Still, thanks to their stronger coupling to the NV and external fields, a register of electronic spins would enable new complementary applications, such as quantum-enhanced sensing and scalable architectures for quantum computation.&#13;
&#13;
In this thesis, three critical steps are demonstrated toward the goal of developing a small-scale quantum information processor based on electronic spins in diamond.&#13;
&#13;
First, we develop a method to systematically scale up a system of electronic spins starting from one qubit – the optically-addressable NV – by characterizing the Hamiltonian of nearby optically-dark electron-nuclear spin defects in its microscopic environment. The knowledge of the system Hamiltonian, which characterizes spin defects in the solid, further enables coherent control over the system.&#13;
&#13;
Second, we characterize the quantum register of electronic spins with respect to two important aspects: entanglement and decoherence.&#13;
&#13;
As entanglement is critical to many quantum information tasks, an accurate characterization of entanglement generated by a quantum device is desired. Therefore, to improve upon the conventional entanglement witness based on the state fidelity, we develop a new metric, called the subspace witness, that is more robust in the presence of local unitary control errors. The subspace witness, at the cost of additional measurements, is insensitive to any combination of single-qubit phase errors accrued during the state-preparation-and-measurement of the target entangled state.&#13;
&#13;
Furthermore, as the power of quantum devices is limited by decoherence, a practical (i.e., classical) and predictive noise model for the device is desired. As the first step to characterize the coherence of a multi-qubit register, we demonstrate a method to build a self-consistent classical noise model for individual qubits. For the NV qubit, well isolated from the bath, it is possible to develop a self-consistent model, which not only characterizes the bath but can help develop more robust quantum gates and circuits. However, for a nearby qubit this is not possible, likely due to a more complex and quantum bath for this qubit – one whose future investigation may further scale up the size of the quantum register.&#13;
&#13;
Finally, to demonstrate the potential advantage of an electronic spin register, we implement a quantum information task in sensing of external fields. The electronic spins serve to enhance the sensitivity not only via entanglement, but also through a repetitive readout scheme. This result paves the way towards practical quantum advantage in sensing.&#13;
&#13;
The methods and results presented in this thesis outline a path toward developing small-scale quantum registers based on electronic spins in diamond and demonstrate their practical applications to enhance a broad range of tasks in quantum information processing.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139975</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Virtual Ocean framework for environmentally adaptive, embedded acoustic navigation on autonomous underwater vehicles</title>
<link>https://hdl.handle.net/1721.1/139972</link>
<description>A Virtual Ocean framework for environmentally adaptive, embedded acoustic navigation on autonomous underwater vehicles
Bhatt, EeShan Chetan
Autonomous underwater vehicles (AUVs) are an increasingly capable robotic platform, with embedded acoustic sensing to facilitate navigation, communication, and collaboration. The global positioning system (GPS), ubiquitous for air- and terrestrial-based drones, cannot position a submerged AUV. Current methods for acoustic underwater navigation employ a deterministic sound speed to convert recorded travel time into range. In acoustically complex propagation environments, however, accurate navigation is predicated on how the sound speed structure affects propagation. The Arctic's Beaufort Gyre provides an excellent case study for this relationship via the Beaufort Lens, a recently observed influx of warm Pacific water that forms a widespread yet variable sound speed lens throughout the gyre. At short ranges, the lens intensifies multipath propagation and creates a dramatic shadow zone, deteriorating acoustic communication and navigation performance. The Arctic also poses the additional operational challenge of an ice-covered, GPS-denied environment.&#13;
&#13;
This dissertation demonstrates a framework for a physics-based, model-aided, real-time conversion of recorded travel time into range--the first of its kind--which was essential to the successful AUV deployment and recovery in the Beaufort Sea, in March 2020. There are three nominal steps. First, we investigate the spatio-temporal variability of the Beaufort Lens. Second, we design a human-in-the-loop graphical decision-making framework to encode desired sound speed profile information into a lightweight, digital acoustic message for onboard navigation and communication. Lastly, we embed a stochastic, ray-based prediction of the group velocity as a function of extrapolated source and receiver locations. This framework is further validated by transmissions among GPS-aided modem buoys and improved upon to rival GPS accuracy and surpass GPS precision.&#13;
&#13;
The Arctic is one of the most sensitive regions to climate change, and as warmer surface temperatures and shrinking sea ice extent continue to deviate from historical conditions, the region will become more accessible and navigable. Underwater robotic platforms to monitor these environmental changes, along with the inevitable rise in human traffic related to trade, fishing, tourism, and military activity, are paramount to coupling national security with international climate security.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139972</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vectors of Health: Epidemics, Ecologies, and the Reinvention of Mosquito Science in Brazil</title>
<link>https://hdl.handle.net/1721.1/139970</link>
<description>Vectors of Health: Epidemics, Ecologies, and the Reinvention of Mosquito Science in Brazil
Castro, Luisa Reis
The Aedes aegypti mosquito can be a disease vector, transmitting pathogenic viruses that are harmful to human health, such as Zika, dengue, chikungunya, and yellow fever. Since the early 20th century, this insect has been the focus of scientific research and public health campaigns, framed as a conveyor of death and targeted as a winged enemy to be destroyed. This species continues to be a serious concern; in Brazil, the A. aegypti is a persistent presence in urban landscapes, with disease outbreaks an almost expected public health issue.&#13;
&#13;
This dissertation ethnographically investigates three groups proposing novel solutions to manage this multispecies interaction. All three projects aimed to instrumentalize the A. aegypti, harnessing the insect in the very efforts to address the viruses it can transmit. The proponents of these projects argued that their strategies worked with these insects. In Rio de Janeiro, a group infected the A. aegypti with a bacterium called Wolbachia, a microbe that can inhibit viral transmission. In Recife, a group sterilized male mosquitoes through irradiation, releasing these insects to mate with wild ones on the island of Fernando de Noronha and prevent future A. aegypti generations. The third group, in Foz do Iguaçu, made use of the mosquitoes’ need for blood to mature eggs and preference for biting humans in order to entrap these insects; captured mosquitoes were then transformed into sentinels, mapping the insect’s presence, distribution, and status as a vector.&#13;
&#13;
Here, I show that in these bio-technoscientific projects the goal was not simply to transform the A. aegypti from a vector of pathogenic viruses into a vector of health that could embody the solution to mosquito-borne diseases. I also argue that my interlocutors themselves aimed to become vectors of health who inverted the usual direction of knowledge production, globally, nationally, and administratively. In this case, these mosquito projects were expected to give the science produced from their particular positionality (Global South; Northeast of Brazil; and a public health center) deserved recognition, even amid austerity measures and budget cuts that threaten research and health policies. In addition, I demonstrate that these projects are situated within Brazil’s long history of instrumentalizing mosquitoes to make scientific and political claims.&#13;
&#13;
Overall, my research shows that national ideologies of belonging are intertwined with modes of knowledge and power that shape relations between humans, and between humans, mosquitoes, and microbes. What, I ask, are the different geopolitics of knowledge production and health practices that drive efforts to address mosquito-borne diseases? What kinds of multispecies arrangements emerge in these racialized, political ecologies? How are the histories and imaginaries of Brazilian science reinscribed or transformed through these initiatives?&#13;
&#13;
Based on two years of multi-sited ethnographic research as well as qualitative interviews, document analysis, and archival research, this dissertation shows that mosquitoes are not only historical, social actors that can reconfigure politics and the production of knowledge, but that the A. aegypti species in particular has come to embody both the legacies and possibilities of Brazilian science. And by thinking with, and alongside, my interlocutors, historical actors, and even the A. aegypti species itself, I reveal the politics and possibilities of pursuing scientific research and public health interventions in Brazil.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139970</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Deep Learning to Scientific Inverse Problems</title>
<link>https://hdl.handle.net/1721.1/139969</link>
<description>Applications of Deep Learning to Scientific Inverse Problems
Li, Matthew T. C.
The first part of this thesis introduces an end-to-end deep learning architecture, called the wide-band butterfly network (WideBNet), which comprehensively solves the inverse wave scattering problem across all length scales. Our architecture incorporates the physics of wave propagation using tools from computational harmonic analysis, specifically the butterfly factorization, and traditional multi-scale methods such as the Cooley-Tukey FFT algorithm. This allows WideBNet to automatically adapt to the dimension of the data so that the number of trainable parameters scales linearly, up to logarithmic factors, with the inherent complexity of the inverse problem. While our trained network provides competitive results in classical imaging regimes, most notably it also succeeds in the super-resolution regime where other comparable methods fail. This encompasses both (i) reconstruction of scatterers with sub-wavelength geometric features, and (ii) accurate imaging when two or more scatterers are separated by less than the classical diffraction limit. We demonstrate these properties are retained even in the presence of strong noise and extend to scatterers not previously seen in the training set. In addition, we also demonstrate that our proposed framework outperforms both classical inversion and competing wave scattering specialized architectures across a variety of wave scattering media.&#13;
&#13;
The second contribution of this thesis concerns scientific inverse problems in which uncontrollable experimental conditions induce nuisance variations in data and encumber inference. In particular, domain experts in these settings contend with the challenge of disambiguating whether changes in data arise from evolution of the physical quantities of interest (in effect, the signal), or from experimental fluctuations (in effect, the noise). We address this question using a bespoke auto-encoding architecture called the symmetric autoencoder (SymAE). SymAE embeds the data into explanatory latent coordinates corresponding to either coherent physical information or nuisance information. We assume weak supervision in the data and explicitly incorporate symmetries into the architecture to achieve this partitioning. As a result, this endows SymAE with the ability to align datapoints to a common nuisance variation by swapping relevant coordinates in the structured latent space. These resulting virtual datapoints can then be reliably used by domain experts for the purpose of extracting the physics retained in the coherent information. As a motivating example we consider applications to time-lapse monitoring in which geophysicists aim to determine whether changes in data arise on account of evolution in subsurface variabilities (e.g. leaks of supercritical CO2), or arise from uncontrollable conditions encountered during the seismic survey (e.g. from inherent randomness of the micro-seismic sources). We provide numerical experiments demonstrating SymAE is capable of disentangling coherent and nuisance effects in its latent space for a broad range of models for wave propagation. Furthermore, we quantify the accuracy of SymAE redatuming using examples with synthetic seismic data.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139969</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Auditors' Role in Fair Value Monitoring: Evidence from Security-Level Data</title>
<link>https://hdl.handle.net/1721.1/139968</link>
<description>Auditors' Role in Fair Value Monitoring: Evidence from Security-Level Data
Berfeld, Natalie
I study the role of the audit firm as monitor of its clients’ fair value (FV) measurements. Specifically, using a setting in the insurance industry where I can identify fair values at the security level, I find that audit firms’ security-specific FV experience is associated with increased consistency in valuations among clients holding the same security, consistent with audit firms developing FV expertise at the security level. Moreover, FV consistency is higher when the audit office is in a more concentrated market, and when the client is economically less important to the audit office, consistent with audit office market incentives affecting FV audit quality. My study sheds light on the mechanisms that shape the role of auditors in monitoring the increasingly important yet subjective FV determination process.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139968</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Simulation of Pedestrians Exploring the Built Environment</title>
<link>https://hdl.handle.net/1721.1/139966</link>
<description>Machine Learning Simulation of Pedestrians Exploring the Built Environment
Gonzalez Rojas, Paloma
This dissertation explores a new method to model human navigational behavior when engaging with the built environment, with the intent of informing architectural design. A computational process is proposed to produce a generalizable Agent that navigates and explores novel complex environments while interacting with its surroundings.&#13;
&#13;
Current modeling software can effectively simulate pedestrian movement. However, it does not provide other simulations that are critical to the design of architecture, such as exploration. Exploratory behavior is especially relevant for architects who seek to predict, to a feasible degree, the interaction between people and newly designed spaces: humans investigating new environments and paths to compelling architectural features. This dissertation demonstrates how machine learning methods identify human exploratory trajectories and map such data to Agent navigational behavior.&#13;
&#13;
Several Machine Learning techniques were applied, including Computer Vision, Bayesian Probability Programming, Density-Based Clustering, and Reinforcement and Imitation Learning. This data-driven method included human trajectory data and three-dimensional site data, collected through fieldwork in Machu Picchu, located in Cusco, Peru. The method had two sections, Data Production and Model Development, which resulted in a trained Navigational Agent. Finally, validation tests were proposed and conducted to evaluate the Agent's behavior. &#13;
&#13;
The proposed navigational Agent initially demonstrated generalizable behavior, such as when exploring unseen, complex environments in which it was not trained. Then, by applying machine learning and analysis of the site-specific data, the human trajectory data was assigned the exploratory intentions of visitors approaching compelling architectural features. After the Agent accommodated general behaviors, it demonstrated that site-specific data expanded its behaviors toward simulating human behaviors on the site.&#13;
&#13;
The contributions of this dissertation consist of a pedestrian simulation method, a trained navigational Agent, a human trajectory data classification method, human trajectory datasets, three-dimensional site models, and finally, a theoretical analysis of a tool that simulates pedestrians’ behavior for architectural design. This dissertation uniquely combined these existing machine learning methods and constitutes an important step in developing computational tools to predict human behavior in architectural space.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139966</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Spectral Model of Grain Boundary Solute Segregation</title>
<link>https://hdl.handle.net/1721.1/139964</link>
<description>The Spectral Model of Grain Boundary Solute Segregation
Wagih, Malik Mamoon AbdelHalim
The segregation of solute atoms at grain boundaries (GBs) can profoundly impact the structural properties of metallic alloys, and induce effects that range from strengthening to embrittlement. And, as such, the control of solute segregation is emerging as an alloy design tool, uses of which include the stabilization of nanocrystalline alloys. To date, the standard approach to predict the extent of solute segregation at GBs uses a simplified representation that treats the GB network as a single entity, and thus, uses a single “average” segregation energy to characterize solute GB segregation in an alloy. This simplification, however, fails to capture the highly disordered and anisotropic nature of GBs in polycrystals, which results in a spectrum of solute segregation tendencies (energies). In this thesis, we aim to address and remove this simplification; the thesis has five major contributions. First, we elucidate computationally the nature of this spectrum for an Mg solute in an Al polycrystal; the distribution is found to be captured accurately with a skew-normal function. Second, we outline a thermodynamic segregation isotherm that incorporates this spectrum, and employ it to study the effect of such a spectrum on predictions of the equilibrium GB segregation state. Third, we develop a machine learning framework that can accurately predict the segregation tendency of solute atoms at GB sites in polycrystals, based solely on the undecorated (pre-segregation) local atomic environment of such sites. We proceed to use the learning framework to scan across the alloy space, and build an extensive database of segregation energy spectra for more than 250 metal-based binary alloys. Fourth, we outline more formally correct thermodynamic criteria to screen for thermodynamic stability of polycrystalline structures, accounting for the spectral nature of GBs. And, we proceed to apply the developed criteria to screen over 200 alloy combinations. 
Among its benefits, this spectral approach enables strict enforcement of the third law of thermodynamics, where an average segregation energy does not. Fifth, we take the first step to extend the developed framework to handle solute segregation beyond the dilute limit, by outlining a thermodynamic segregation isotherm that accounts for both the spectrality of grain boundary sites, and solute-solute interactions; we also develop a computational framework to extract, and delineate both effects. Finally, we hope that the developed spectral thermodynamic framework, machine learning models, and solute segregation database in this thesis would help unlock the full potential of GB segregation as an alloy design tool, and enable the design of microstructures that maximize the useful impacts of segregation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139964</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Gulf Stream: Along-stream evolution of volume transport and water properties observed by underwater gliders</title>
<link>https://hdl.handle.net/1721.1/139963</link>
<description>The Gulf Stream: Along-stream evolution of volume transport and water properties observed by underwater gliders
Heiderich, Joleen
The Gulf Stream, the western boundary current of the subtropical North Atlantic, plays a key role in the Earth’s climate system with its poleward volume and heat transports being major components of the upper limb of the Atlantic Meridional Overturning Circulation. Extensive observations collected using Spray autonomous underwater gliders from 2004 through 2020 fill a 1500-km-long gap in longer-term sustained subsurface measurements of the Gulf Stream. The gliders provide concurrent, high-resolution measurements of Gulf Stream hydrography and velocity over more than 15 degrees of latitude between Florida and New England. These observations are used to characterize the along-stream evolution of Gulf Stream volume transport; its long-known poleward increase is shown to result primarily from entrainment of subthermocline waters. Antarctic Intermediate Water, which makes up the deepest waters within the Gulf Stream in the Florida Strait, is eroded through both vertical mixing and lateral stirring as it flows downstream. Satellite-based observations of sea surface height coincident with the glider observations are used to evaluate the efficacy of inferring Gulf Stream transport from remotely sensed measurements. The detailed analyses of Gulf Stream transport and water property evolution herein provide targets for regional and global circulation models to replicate.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139963</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization for Deep Learning: Bridging the Theory-Practice Gap</title>
<link>https://hdl.handle.net/1721.1/139962</link>
<description>Optimization for Deep Learning: Bridging the Theory-Practice Gap
Yun, Chulhee
Deep learning has produced impressive empirical breakthroughs, but many theoretical questions remain unsolved. For example, despite the nonconvexity of training objectives, deep neural networks can be reliably trained to fully memorize training datasets, yet perform very well on unseen test data. Although progress has been made over the recent years, the gap between theory and practice remains wide open; this thesis takes a step towards closing this gap.&#13;
&#13;
First, we discuss the optimization landscape of neural networks. We prove the existence of spurious local minima for general datasets and activation functions, which suggests that the convergence of optimization methods on neural networks cannot be explained solely via the training objectives. &#13;
Next, we establish tight bounds on the memorization capacity of ReLU networks. We present results showing that width-$\Theta(\sqrt{n})$ ReLU networks can memorize arbitrary $n$ data points, which brings down the existing width requirement of $n$ to a much more realistic number. The third part discusses implicit bias in training neural networks, an area that seeks to understand generalization via the optimization trajectory. Through a unified analysis using a tensor representation of neural networks, we show how different architectures in linear neural networks lead to different global minima in overparameterized regimes. Lastly, we address a major theory-practice gap in stochastic finite-sum optimization: practical algorithms shuffle and iterate through component indices, while most theoretical analyses assume uniform sampling of the indices. To close this gap, we develop tight convergence rates showing that shuffling-based SGD is faster than its uniform-sampling counterpart.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139962</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Data-Driven Internet Routing Security</title>
<link>https://hdl.handle.net/1721.1/139960</link>
<description>Towards Data-Driven Internet Routing Security
Testart Pacheco, Cecilia Andrea
The Internet infrastructure is critical to the security and reliability of daily online life. The Border Gateway Protocol (BGP), the de facto global routing protocol, was not designed to cope with untrustworthy parties, making BGP vulnerable to misconfigurations and attacks from anywhere in the network. Recently, unintended large-scale misconfigurations caused a significant amount of Internet traffic towards major providers to be dropped for hours, and through BGP attacks, perpetrators have stolen millions in fraudulent transactions. Nonetheless, little has changed in operational environments despite the many proposals to increase security from the research, standardization, and industry communities. The problem space is complex: it involves multiple stakeholders with different interests and available resources, and, increasingly, geopolitical challenges. Yet these stakeholders ultimately need to cooperate and coordinate their efforts to improve security. This dissertation proposes a holistic approach to studying routing security. It includes an assessment of the barriers to adoption of technical proposals to secure BGP, an empirical analysis of exploitations and misconfigurations due to BGP design flaws, and an empirical study of the deployment and benefits of mitigation strategies. This analysis reveals the extent of misbehavior and misconfiguration in the use of BGP, and the benefit that operational security practices provide. It also discusses this new evidence in the context of the tradeoffs that have prevented the adoption of routing security. Finally, it provides a set of actions, which could be orchestrated bottom-up by an industry effort or top-down by governments, and directions for future technical work that would encourage collective adoption of security in BGP.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139960</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High throughput extrusion additive manufacturing – rate limits and system design</title>
<link>https://hdl.handle.net/1721.1/139957</link>
<description>High throughput extrusion additive manufacturing – rate limits and system design
Stevens, Adam Gregory
Fused filament fabrication (FFF) is an additive manufacturing (AM) process in which a polymer feedstock is melted and extruded through a nozzle while guided by a motion system, resulting in a three-dimensional part. FFF is applicable across a range of length scales and with a wide variety of thermoplastic polymers and composites. As such, FFF is increasingly used in prototyping, low volume production, and tooling/fixture applications.&#13;
&#13;
One of the drawbacks to FFF is its low build rate, which is limited by the fundamentally serial nature of the process: a single printhead must traverse a trajectory that spans the entire build volume of the part while depositing material. As a result, the build rate is governed by the performance of the motion and extrusion systems. This thesis explores the influence of motion and extrusion system design on FFF build rate by: 1) deriving a set of design rules for FFF systems that maximize rate, subject to specified quality constraints and guided by finite element analyses and parametric models; 2) the mechanical design and construction of a servo-driven FFF testbed that uses a parallel H-frame belt drive; and 3) implementing closed-loop servo control with the aforementioned hardware and assessing axis-level motion performance and overall print quality using test artifacts. Performance of the custom-built FFF system is benchmarked against the models in (1), and against commercial FFF systems, and rate-resolution tradeoffs are quantified. This thesis concludes with suggestions for further machine design and process control improvements for FFF AM.&#13;
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139957</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sea Spray-Mediated Fluxes at Extreme Wind Speeds</title>
<link>https://hdl.handle.net/1721.1/139956</link>
<description>Sea Spray-Mediated Fluxes at Extreme Wind Speeds
Sroka, Sydney
Tropical cyclones are complex systems that are challenging to forecast and model. Since tropical cyclones are powered by the warm ocean surface, the accuracy of intensity forecasts depends heavily on the air-sea interaction scheme. However, at extreme wind speeds the air-sea transition layer becomes replete with sea spray such that there is no longer a well-defined interface. This means that the microphysics of sea spray plays a critical role in mediating the fluxes which control tropical cyclone intensity.&#13;
&#13;
The first part of this thesis reviews and synthesizes results from the literature on parameterizations of air-sea enthalpy and momentum fluxes in tropical cyclones, with an emphasis on work that estimated the sea spray-mediated fluxes. The second part of this thesis analyzes the microphysical equations that describe how sea spray mediates enthalpy and momentum. An analysis of an ensemble of temperature, radius, and speed time histories of evaporating drops suggests that, for sufficiently high wind speeds, the formulation for air-sea exchange can be substantially simplified. The third part of this thesis describes the results from multiphase, direct numerical simulations of the sea surface subject to a large wind stress. The preliminary results suggest that the simulated vertical transport of liquid water is comparable to the expected volume flux, which is an encouraging outcome for the prospect of being able to supplement sparse observations of sea spray with numerical simulations. Finally, the fourth part of this thesis analyzes the turbulent air-sea heat flux over ocean mesoscale eddies in reanalysis data to determine whether persistent sea surface temperature perturbations have a significant effect on the time-averaged turbulent heat flux. The findings show that the ocean mesoscale eddies have a small but detectable influence on the time-averaged turbulent heat flux in the reanalysis data.&#13;
&#13;
This thesis explores how small-scale processes can project onto large-scale dynamics. For tropical cyclones in particular, as model resolution improves, previously unresolved mechanisms will come into focus and help illuminate the workings of these complex natural phenomena.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139956</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Biological Materials for Carbon Capture and the Electrochemical Reduction of Carbon Dioxide to Light Hydrocarbons</title>
<link>https://hdl.handle.net/1721.1/139955</link>
<description>Engineering Biological Materials for Carbon Capture and the Electrochemical Reduction of Carbon Dioxide to Light Hydrocarbons
Lehnhardt, Eric Christian
Decarbonization of the global economy will require the identification of substitute carbon sources for the production of fuels, plastics, textiles, pharmaceuticals, and other products that are derived from fossil carbon. The gigaton scale of carbon dioxide emissions necessitates the development of better materials for its capture, yet offers the opportunity to use purified carbon dioxide gas as an input to the electrochemical carbon dioxide reduction reaction. This reaction combines carbon dioxide, water, and electricity in the presence of specialized catalysts to create light hydrocarbons such as methane, ethanol, and ethylene, precursors critical to many of the products that power the modern economy.&#13;
&#13;
In this thesis, I present a range of biological materials capable of capturing carbon dioxide and catalyzing its conversion to products. First, catalysts made from genetically-engineered M13 bacteriophage are light-crosslinked and metallized to create copper electrodes that explore the effect of pore structure on catalyst performance. Second, catalysts made via copper electrodeposition are modified by viral proteins to create nanostructured, crystalline electrodes that shift product distributions towards C1 hydrocarbons like formate and methane. Third, catalysts made from biological carbon nanofibers template copper nanoparticles that increase catalyst activity and generate product distributions on par with copper catalysts found in the literature. Fourth, amine resins templated on the surface of engineered M13 bacteriophage produce high-surface-area materials capable of carbon dioxide capture and release. Additionally, I exposit reaction systems for maximizing gas availability and reaction stability for single- and double-sided electrodes in carbon dioxide electroreduction.&#13;
&#13;
The biological catalysts and membranes described here provide structure/performance information to advance the design of specialized catalysts and membranes for the sustainable creation of hydrocarbon products from atmospheric carbon dioxide.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139955</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of the abyssal ocean overturning circulation by mixing-driven bottom boundary layers</title>
<link>https://hdl.handle.net/1721.1/139952</link>
<description>Control of the abyssal ocean overturning circulation by mixing-driven bottom boundary layers
Drake, Henri Francois
An emerging paradigm posits that the abyssal overturning circulation is driven by bottom-enhanced mixing, which results in vigorous upwelling in the bottom boundary layer (BBL) along the sloping seafloor and downwelling in the stratified mixing layer (SML) above; their residual is the overturning circulation. This boundary-controlled circulation fundamentally alters abyssal tracer distributions, with implications for global climate.&#13;
&#13;
Chapter 1 describes how a basin-scale overturning circulation arises from the coupling between the ocean interior and mixing-driven boundary layers over rough topography, such as the sloping flanks of mid-ocean ridges. BBL upwelling is well predicted by boundary layer theory, whereas the compensating SML downwelling is weakened by the upward increase of the basin-wide stratification, which supports a finite net overturning. These simulated watermass transformations are comparable to best-estimate diagnostics but are sustained by a crude parameterization of boundary layer restratification processes.&#13;
&#13;
In Chapter 2, I run a realistic simulation of a fracture zone canyon in the Brazil Basin to decipher the non-linear dynamics of abyssal mixing layers and their interactions with rough topography. Using a hierarchy of progressively idealized simulations, I identify three physical processes that set the stratification of abyssal mixing layers (in addition to the weak buoyancy-driven cross-slope circulation): submesoscale baroclinic eddies on the ridge flanks, enhanced up-canyon flow due to inhibition of the cross-canyon thermal wind, and homogenization of canyon troughs below the level of blocking sills. Combined, these processes maintain a sufficiently large near-boundary stratification for mixing to drive globally significant BBL upwelling.&#13;
&#13;
In Chapter 3, simulated Tracer Release Experiments illustrate how passive tracers are mixed, stirred, and advected in abyssal mixing layers. Exact diagnostics reveal that while a tracer’s diapycnal motion is directly proportional to the mean divergence of mixing rates, its diapycnal spreading depends on both the mean mixing rate and an additional non-linear stretching term.&#13;
&#13;
These simulations suggest that the theorized boundary-layer control on the abyssal circulation is falsifiable: downwelling in the SML has already been confirmed by the Brazil Basin Tracer Release Experiment, while an upcoming experiment in the Rockall Trough will confirm or deny the existence of upwelling in the BBL.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139952</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactions between mobile genetic elements and their bacterial hosts</title>
<link>https://hdl.handle.net/1721.1/139951</link>
<description>Interactions between mobile genetic elements and their bacterial hosts
Clark, Emily L.
Integrative and conjugative elements (ICEs) are widespread mobile genetic elements that facilitate the spread of many important genes, including those involved in antibiotic resistance, metabolism, pathogenesis, and symbiosis. These elements are also powerful tools for genetic analysis and engineering. ICEs are typically found integrated in a host chromosome. Either stochastically or upon some signal, they can excise, undergo DNA processing events, be transferred through encoded conjugation machinery into a neighboring cell, and stably integrate into the new chromosome. Interactions between an ICE and its hosts throughout this life cycle can influence how efficiently the element is acquired by new hosts. In this work, I investigated and compared the interactions that two ICEs, Tn916 and ICEBs1, have with their host cells. First, I explored how the different functional modules of these elements affect how efficiently they are transferred into different host species. I generated hybrid conjugative elements that merge functions of both elements to increase transfer efficiencies, presenting exciting potential for genetic engineering. Next, I investigated a previously unknown ability of Tn916 to arrest the growth of and kill its host cell. I used genetic approaches to determine that two Tn916-encoded genes interact with a defective phage-like element in the B. subtilis chromosome to elicit some of these effects. Finally, I evaluated the integration site selection of Tn916 in the B. subtilis chromosome, identifying several hundred unique AT-rich insertion sites, one of which is a “hot spot” for integration. I found that a host nucleoid-associated protein does not influence integration site selection. I conclude this body of work with a discussion of how the efficient spread of an element is shaped by its interactions with host cells and other horizontally acquired elements.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139951</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring Nociception Under Anesthesia</title>
<link>https://hdl.handle.net/1721.1/139950</link>
<description>Measuring Nociception Under Anesthesia
Subramanian, Sandya
Sixty thousand patients receive general anesthesia each day in the US alone. Monitoring and managing nociception, the flow of information associated with harmful stimuli through the nervous system even when the patient is unconscious, in real time is a critical problem during surgery. While there are measures to assess unconsciousness, immobility, and physiologic stability, objectively monitoring a patient’s nociceptive state remains challenging. Intraoperative management of nociception affects post-operative pain management and side effects such as delirium and post-operative cognitive dysfunction.&#13;
&#13;
This thesis focuses on monitoring nociceptive state by tracking autonomic nervous system (ANS) responses. The two autonomic markers are heart rate variability (HRV), the beat-to-beat variation in heart rate, and electrodermal activity (EDA), the measurable change in skin conductance due to sweat gland activity. Since traditional experimental models of pain such as thermal or electrical stimulation are not adequate representations of true surgical nociception, I collected continuous electrocardiogram (ECG), EDA, and Analgesic Nociception Index (ANI) data during 70 surgeries at Massachusetts General Hospital (MGH) in an IRB-approved study. I annotated the occurrence of nociceptive stimuli and retrieved the times and doses of anesthetics from the electronic medical record. First, I developed a statistically rigorous framework to extract the valuable instantaneous information from EDA. I also developed a pipeline to preprocess and clean the operating room data. Then I used two frameworks, supervised classification models and state space models, to show that my physiological indices can track the occurrence of nociceptive stimulation and determine the degree of antinociception more accurately on a subject-by-subject basis than the ANI.&#13;
&#13;
In summary, I have: 1) constructed and validated quantitative multi-dimensional measures of intraoperative nociceptive state using HRV and EDA; and 2) compared these measures to the existing Analgesic Nociception Index (ANI) for nociception monitoring using data collected during surgery. This work presents the first step towards truly integrated and physiology-based intraoperative management, and eventually closed-loop control of nociception under general anesthesia.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139950</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redox-active materials for electrochemically-mediated separations</title>
<link>https://hdl.handle.net/1721.1/139948</link>
<description>Redox-active materials for electrochemically-mediated separations
Tan, Kai-Jher
Separation processes are employed ubiquitously for purification in chemical production, as well as in potable water generation and environmental remediation. Global issues such as the growing world population, low water security, and worsening industrial pollution all necessitate further advancement in the discovery, development, and application of materials for separations. Electrochemical approaches are an attractive alternative because of the innate propensities of redox-active species for modular design, molecular recognition, reversible operation, and lower energetic requirements, all of which can be realized via electrochemical stimuli alone, without any chemical additives. In this work, the focused design of redox-active materials for imparting specific functionalities to electrosorption processes is discussed.&#13;
&#13;
Redox-active composite materials have been developed from four different classes of electroactive species to enable redox-mediated ion detection and removal in water, with designed control over their pseudocapacitances, stabilities, analyte selectivities, and energetics. Anion separation was achieved using an organometallic ferrocene co-polymer whose electrochemical activity in different electrolyte solutions can be manipulated through its co-monomer. Through principles of hydrophobicity, the metallopolymer can be engineered to activate in the presence of hydrophilic anions instead of hydrophobic ones, enabling the stable and 17-fold selective electrochemical extraction of perrhenate over nitrate. The ferrocene macromolecule motif was also extended to zwitterionic polyelectrolytes that possess pH-sensitive electrochemical behavior in both homogeneous and heterogeneous environments. Cation separation was facilitated through the preparation of pH-dependent quinone and electronically-tunable crystalline hexacyanoferrate-based conductive electrodes. Paired with polymeric ferrocenes, these produced asymmetric redox-active systems that reduce anode fouling and current leakage by preventing parasitic water reduction; offer separation energies that can be optimized based on solution conditions and the selected anode-cathode couple; allow enhanced, simultaneous recovery of transition and alkali metal compounds without altering their bulk oxidation state; and enable continuous operation under flow conditions. Lastly, redox-active components were incorporated into a particulate scaffold instead of an electrode framework to further improve their potential for ion separation through the heightened maneuverability and phase contact provided in part by judicious variation of material hydrophilicity. Specifically, polypyrrole- and polyferrocene-containing magnetic composites were synthesized through a facile multi-step procedure, and exhibit strong adsorption capacities for both inorganic and organic contaminants.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139948</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Impact of Electrode Properties and Their Design for Redox Flow Battery Performance</title>
<link>https://hdl.handle.net/1721.1/139945</link>
<description>On the Impact of Electrode Properties and Their Design for Redox Flow Battery Performance
Greco, Katharine V.
Redox flow batteries (RFBs) are a promising technology for grid energy storage. However, cost reductions are required prior to widespread adoption. Advances in the design and engineering of the electrochemical stack may enable cost reductions for multiple redox chemistries. Porous electrodes are a prime target for improving system power to lower the cost per kilowatt-hour, as they are responsible for multiple critical functions in the flow cell, including providing surfaces for electrochemical reactions, distributing liquid electrolytes, and conducting electrons and heat. However, there is limited knowledge on how to systematically design and implement these materials in emerging RFB applications, leading to the repurposing of available materials that are not tailored for this system, e.g., porous carbon papers or felts. For optimal RFB performance, it is necessary to pretreat carbons prior to use to improve electrode wetting and enhance redox kinetics, yet the impact of thermal pretreatment on electrode properties and the correlations between these properties are not well defined; thus, the subsequent influence on performance remains unclear. Gaining a deeper understanding of electrode properties and their influence on performance will enable targeted improvements to electrode platforms, allowing system-specific performance gains. Further, identifying essential electrode properties will guide the development of alternative electrocatalytic materials that may enable new systems in which carbon is unstable or not catalytically active.&#13;
&#13;
In this thesis, I will discuss the impact of electrode treatments on RFB performance, combining experimental and computational approaches. First, I investigate the interrelated effects of thermal pretreatment on electrode properties and correlate the changes in these properties with performance. Surface functionalization, wetting, and surface area are identified as the key properties that influence electrode performance. Next, I specifically investigate the impact of surface area on electrode performance. I show that, while thermal treatment adds a significant amount of physical surface area to the electrode, electrochemical species are unable to access a large fraction of this surface area. Further, I use a convection-reaction model to show that even when all surface area is accessible, there is a limit to the surface area that will improve electrode performance. This limit to “useful” surface area is dictated by the rates of reaction and transport within the electrode. Finally, I investigate the viability of nickel metal electrodeposition on carbon electrodes to enhance the performance of a novel polysulfide-permanganate flow battery. I show that nickel-deposited carbon electrodes outperform commercially available metal materials, including foams and weaves. The overarching goal of this thesis work is to develop a deeper understanding of the influence that electrode properties have on performance. By continuing to characterize the fundamental kinetic and transport properties within complex porous materials under forced convection, the community will be prepared to design novel material sets well-suited for use in RFBs and other challenging electrochemical environments.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139945</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Validation of a Passive Prosthetic Foot Design Framework based on Lower Leg Dynamics</title>
<link>https://hdl.handle.net/1721.1/139941</link>
<description>Development and Validation of a Passive Prosthetic Foot Design Framework based on Lower Leg Dynamics
Prost, Victor
People with lower limb amputations face considerable challenges to everyday mobility that affect their quality of life. This is especially the case in low- and middle-income countries (LMICs), where the lack of affordable high-performance prosthetic devices forces people to use inadequate limbs that require more effort and exhibit unnatural walking motions. This thesis develops methods for designing customized, high-performance, low-cost, and durable passive prosthetic feet that enable users to replicate able-bodied walking patterns.&#13;
&#13;
The current development process of prosthetic feet relies on extensive user testing and iterative design rather than a predictive and quantitative design methodology that would facilitate the development of improved prosthetic devices. Here, we further developed the lower leg trajectory error (LLTE) framework, a novel design methodology that connects the mechanical characteristics of a prosthetic foot to the user's walking pattern. We extended the methodology to describe the entire prosthetic step for multiple walking activities and foot architectures, including durability requirements and efficient constitutive modelling of prosthetic foot designs. These developments resulted in more than a two-fold improvement in the walking performance of LLTE-designed prosthetic feet that fulfilled the durability requirements of international standards, and a tenfold reduction in computational time compared to the original LLTE methodology. The LLTE design framework and foot architectures described in this work should provide designers, engineers, and clinicians with a practical, predictive, and quantitative tool for designing and evaluating prosthetic feet.&#13;
&#13;
Using the LLTE framework, low-cost, customized passive prosthetic feet prototypes were designed and clinically evaluated for level ground walking against conventional carbon fiber prostheses. The LLTE feet performed as predicted, with no iteration, for a wide variety of patients. In addition, these prosthetic feet demonstrated 14% closer replication of able-bodied walking motion, 46% higher propulsion, 13% lower peak leg loading, and higher user preference compared to a standard commercial carbon fiber foot, for less than a tenth of its cost. These results suggest that the LLTE framework can be used to design customized, low-cost prostheses that enable able-bodied walking patterns with reduced effort and risk of long-term injuries.&#13;
&#13;
A systematic sensitivity investigation of five foot prototypes designed using the LLTE framework showed that users most closely replicated the target able-bodied walking pattern with the predicted LLTE-optimal foot, experimentally demonstrating that the predicted optimum was a true optimum. In addition, the predicted LLTE performance of the prototype feet was correlated with the user’s ability to replicate the target walking pattern, user preference, and conventional clinical outcomes. This sensitivity study illustrated the utility of the LLTE framework as a systematic and robust evaluation methodology for prosthetic feet, potentially improving the development and prescription of prosthetic devices.&#13;
&#13;
A rugged prosthetic foot with a cosmetic overmold was also designed using the LLTE framework to accommodate the economic, environmental, and cultural requirements of users in India. The foot was distributed to 16 prosthetic users in India to be used for several months. Users walked 16% faster with the foot compared to their daily-use prosthesis, the Jaipur foot, and commented on the reduced effort of walking. The rugged foot endured one million cycles of fatigue testing and the wear and tear of daily living without alteration of its mechanical performance. This mass-manufacturable, high-performance rugged foot could replace conventional feet used in low-resource settings and significantly improve the mobility and quality of life of LMIC prosthesis users.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139941</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediate energy proton irradiation: An experimental and analytical foundation for bulk radiation damage testing</title>
<link>https://hdl.handle.net/1721.1/139940</link>
<description>Intermediate energy proton irradiation: An experimental and analytical foundation for bulk radiation damage testing
Jepeal, Steven Joseph
Fusion and next-generation fission power plants will require thermal and structural materials to perform under new, extreme conditions. Among mechanical, thermal, and corrosion or plasma exposure challenges, materials near the core of fusion and fission power plants will need to survive heavy exposure to high energy neutrons, outside of our current operating experience with fission reactors. Understanding the evolution of mechanical properties during radiation damage is essential to the design and commercial deployment of these systems. However, existing irradiation test methods are either extremely slow or not able to predict macroscopic property changes. In some applications, the shortcomings of existing methods could be addressed by a new technique - intermediate energy proton irradiation (IEPI) - using beams of 10 - 30 MeV protons to rapidly and uniformly damage bulk material specimens before direct testing of engineering properties. However, IEPI has seen relatively little use, and there has been little published work exploring the role IEPI could play in the future of nuclear materials development.&#13;
&#13;
This thesis presents a foundation for the use of IEPI to study radiation damage under future reactor conditions. Modeling of damage and beam heating shows that IEPI can achieve accelerated dose rates of 0.1-1 dpa/day in bulk irradiated structural materials, and dose rates exceeding 1 dpa/day in high-thermal-conductivity metals like copper and tungsten. Activation analysis shows experimental time savings ranging from 86% to over 99% when compared to irradiation and cool-down for reactor irradiation experiments. Calculated recoil energy transfers highlight that average recoil energies can mimic those of fusion plasma-facing components, which are nearly an order of magnitude above those of fast reactors. Transmutation analysis highlights the ability to emulate fusion helium production levels of 10-30 ppm for common metals, where fission reactors cannot. Specifically for tungsten irradiation, protons are able to induce damage without confounding solid transmutation levels such as the 5% rhenium/dpa of high-flux reactors. A first-of-its-kind IEPI facility was built using 12 MeV protons and custom miniature tensile testing. Dose rates exceeding 0.1 dpa/day were demonstrated with temperature control of ±5-10 °C. Tensile testing was demonstrated to be reproducible to within 20 MPa and 0.05 strain in both irradiated and unirradiated samples. Proton-irradiated Inconel samples at doses up to 0.003 dpa were compared to neutron-irradiated samples and showed favorable matching in irradiation hardening. Proton-irradiated copper samples at doses up to 0.1 dpa showed hardening and loss of ductility approximately 50% of those observed in past reactor irradiations, suggesting the impact of disparate recoil energies. In total, IEPI is shown to be a viable bulk irradiation technique with particular relevance to fusion plasma-facing materials and fusion-relevant helium generation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139940</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing and Fabricating 3D Nanostructures through Directed Self-Assembly of Block Copolymers</title>
<link>https://hdl.handle.net/1721.1/139939</link>
<description>Designing and Fabricating 3D Nanostructures through Directed Self-Assembly of Block Copolymers
Huang, Hejin
Intricate designs of 3D nanostructures are needed in different research areas. Current techniques for fabricating 3D structures are limited by either length scale or efficiency. Compared to conventional methods such as ‘top-down’ lithography, self-assembly of block copolymers (BCPs) provides a promising avenue for fabricating 3D nanostructures in a ‘bottom-up’ way. By applying an external field, the BCP self-assembly process can be biased so that single-crystalline nanostructures with well-controlled orientation are fabricated upon annealing. Over the past two decades, researchers have successfully created arbitrary complex 2D structures through directed self-assembly of BCPs.&#13;
&#13;
Despite the research efforts in directed self-assembly of BCPs, most key achievements reported so far concern novel 2D nanostructures. How to create uniform and complex 3D nanostructures through BCP self-assembly in a controlled way remains an open problem. In this thesis, a novel route to create complex 3D nanostructures is discovered by simulation: defects and aperiodicity introduced in the base layer propagate to subsequent layers, generating a multilayer aperiodic structure through layer-by-layer stacking. This approach enables the inverse design of various complex 3D nanostructures.&#13;
&#13;
In the first stage, dissipative particle dynamics (DPD) has been reparametrized to give accurate predictions of the self-assembled structures of BCP thin films. DPD is a particle-based simulation method that gives more intuitive representations of the various experimental conditions than field-based simulations such as self-consistent field theory (SCFT). The previous parametrization of DPD reproduces BCP phase structures in bulk but fails for most thin-film structures. Our reparametrized model reproduces the experimentally observed BCP thin-film structures. Furthermore, the effects of key parametrization choices in the DPD simulation are studied to provide theoretical background for the reparametrization. This reparametrized DPD simulation serves as the tool to investigate how to fabricate novel 3D nanostructures.&#13;
&#13;
A novel approach to fabricate 3D nanostructures, named self-directed self-assembly (SDSA) of block copolymers, is then proposed. The feasibility of this method is tested by DPD simulation. Through layer-by-layer stacking of two block copolymers, AB and AC, structural information from the base layer propagates to subsequent layers to generate uniform 3D nanostructures. Different uniform 3D nanostructures, such as parallel cylinders, spheres aligned with cylinders, and bilayer nanomeshes, have been fabricated by SDSA.&#13;
&#13;
Finally, inverse design of 2D and 3D nanostructures has been achieved by combining an evolutionary algorithm with DPD simulations. The method spans rapid algorithms for characterizing the internal structures of BCP morphologies from the atom coordinates generated by MD simulations, as well as substrate-optimization algorithms for developing effective routes to propagate information from the substrate into the BCP film. Four design rules for 3D nanostructures were discovered. These findings have the potential to help fabricate 3D nanostructures with selective connections between layers effectively and efficiently.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139939</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving pavement networks through performance-based planning with optimal treatment strategies and management policies</title>
<link>https://hdl.handle.net/1721.1/139938</link>
<description>Improving pavement networks through performance-based planning with optimal treatment strategies and management policies
Guo, Fengdi
Performance-based planning (PBP) is an efficient way to improve pavement networks. It is the practice of using data from pavement management systems (PMSs) to support analyses of predicted network performance based on available budgets, treatment strategies, and management policies. PBP involves the collection and analysis of PMS data, pavement deterioration prediction, budget allocation, the selection of treatment strategies, and the promotion of appropriate pavement management policies.&#13;
&#13;
This dissertation provides a comprehensive framework for PBP. First, it focuses on the development of a pavement deterioration prediction model and a budget allocation model. A weighted-output neural network model is proposed, which can predict multiple pavement condition metrics simultaneously and incorporate their correlations into the prediction process. During model training, each condition metric is assigned a weight to reflect its relative importance. When the weights equal those in the formula for a multi-condition-metric pavement condition index (PCI), the prediction performance for PCI is optimal (13% lower mean squared error than optimal single-output models). For the budget allocation model, a probabilistic treatment path dependence (PTPD) model has been proposed. This model incorporates uncertainties in both treatment cost and pavement deterioration, and evaluates a treatment by considering the benefits of both the evaluated treatment and its subsequent actions. Compared to a conventional benefit-cost ratio model, PTPD can deliver equivalent pavement network performance with an annual budget that is 10% smaller.&#13;
&#13;
Most existing research on PBP focuses on improving allocation decisions through changes in the allocation algorithm, without considering the consequences of how optimization analyses are framed. In this thesis, both the environmental and economic performance of a pavement network are evaluated for different framings of the problem. Specifically, framings in the form of different treatment strategies, consisting of treatment materials, treatment types, and evaluation period, are considered. Results show that the proposed strategy, which uses multiple materials (both concrete and asphalt), an increased number of treatment types, and a long evaluation period, could both reduce greenhouse gas emissions and improve pavement network performance. Finally, this thesis explores the potential impact of different federal or state policies regarding PBP. Three pavement management policies are proposed: flexible decision-making, long-term planning, and market diversification. Model results suggest that incorporating these policies for the whole U.S. pavement network (compared to a business-as-usual scenario) could reduce total excess vehicle fuel expenditures due to poor road conditions from 2017 to 2050 by 28%, or about 62 billion dollars. All states can benefit from the proposed management policies. &#13;
&#13;
These research findings can help transportation agencies improve their performance-based planning for pavement networks within a limited budget. In addition, this thesis also provides insights for federal or state agencies regarding the value of key policies to improve pavement networks and to reduce greenhouse gas emissions due to poor road conditions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139938</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Values and Science: An Interdisciplinary Feminist Exploration</title>
<link>https://hdl.handle.net/1721.1/139937</link>
<description>Values and Science: An Interdisciplinary Feminist Exploration
Boulicault, Marion
This dissertation is an experiment in interdisciplinary feminist philosophy of science. It explores an extremely broad question (what is the relationship between science and social values?) by interweaving three families of methodological approaches to produce three papers, each geared to a different audience. &#13;
&#13;
In Chapter One, I consider the role of values in science through the lens of abstract ideals. Employing the tools and methods of analytic philosophy of science, I take a stand in an ongoing debate about what’s known as the ‘value-free ideal’ (VFI): the view that ideal science is ‘epistemically pure.’ In recent years, the VFI has come under vigorous attack. With many theorists rejecting the VFI, a space is left open for new ideals to guide science. I articulate an alternative ideal—what I term the ‘idiosyncrasy-free ideal’ (IFI)—that is motivated not by epistemic purity, but by intersubjectivity. I draw connections between the IFI and work in feminist philosophy of science, political philosophy and the history of science to argue for the potential of the IFI to fill the space left open by the fall of the VFI. With the IFI in hand, one can relinquish the commitment to epistemic purity, while maintaining a science that is objective and worthy of our trust. &#13;
&#13;
In Chapter Two, I shift away from abstract ideals and delve into an in-depth case study focused on scientific measurement practices. I present a model of how values and norms come to play a role in scientific measurement through a comparative analysis of two human fertility metrics: semen analysis and ovarian reserve testing. Drawing heavily on the literature and methods of feminist science studies, I argue that fertility metrics reflect and enact different gendered imperatives of reproductive responsibility. In doing so, I explicate one mechanism by which racialized and gendered values, norms and ideologies come to be enacted in scientific practice at a basic quantitative level, with profound implications for those whose bodies are measured, and for collective understanding and public debates over the future of our species. &#13;
&#13;
Chapter Three delves further into the roles of social values in the practice of fertility measurement, this time focusing on semen analysis. Geared towards a scientific audience, this chapter is a result of an interdisciplinary collaboration with members of the Harvard GenderSci Lab. We analyze a high-profile 2017 study by andrologist Hagai Levine and colleagues, which claims to show a greater than 50% decline in sperm counts among men from “Western” countries. In doing so, we identify and systematically question a set of shared value-laden assumptions with which sperm researchers approach sperm count measurement. We show how these assumptions—including, for example, the unquestioned choice to categorize data into “Western” vs “Other”—implicitly invoke powerful narratives around gender, sex, race, ethnicity, and anxieties about the future of the human species. In the best tradition of feminist scholarship, this chapter goes beyond critique; it offers a novel paradigm for sperm decline research, which we term the Sperm Count Biovariability hypothesis. This paradigm, we argue, facilitates improved research design and interpretive strategies for investigating the reproductive health and bodies of people of all genders.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139937</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering the Synthesis and Properties of Two-Dimensional Colloidal Perovskite Nanoplatelets</title>
<link>https://hdl.handle.net/1721.1/139936</link>
<description>Engineering the Synthesis and Properties of Two-Dimensional Colloidal Perovskite Nanoplatelets
Ha, Seung Kyun
Colloidal semiconductor nanocrystals are among the leading material platforms for a wide range of optoelectronic applications including photovoltaics, displays, photodetectors, and thermoelectrics. Their colloidal stability enables facile device fabrication by providing solution processability, and tunability of the bandgap with particle size opens up the opportunity to independently optimize nanocrystal properties for specific applications. Recently, colloidal lead halide perovskite nanoplatelets (chemical formula: L₂[ABX₃]ₙ₋₁BX₄, L: alkylammonium, A: methylammonium or formamidinium or cesium, B: lead, X: halide, n: number of [BX₆]⁴⁻ octahedral layers in the direction of thickness) have emerged as a promising class of novel semiconductor nanocrystals, capitalizing on their strong absorption, bright emission with high color purity, strong quantum- and dielectric-confinement, and anisotropic transition dipole moment orientation. This dissertation seeks to establish a robust synthetic protocol for the preparation of colloidal perovskite nanoplatelets and further engineer their desirable properties.&#13;
&#13;
First, I briefly review the history of perovskite nanoplatelets and introduce a protocol for the facile synthesis of colloidal perovskite nanoplatelets at room temperature. Monodispersity of the nanoplatelets is confirmed by optical and structural characterization. Photoluminescence and absorption spectra reveal strongly confined excitonic features that can be tuned across the visible range by changing nanoplatelet thickness and varying the composition of halide anions. Furthermore, I show that multiple species of surface-bound alkylammonium ligands can be introduced. This demonstrates the possibility of further optimizing the surface properties of nanoplatelets, which can strongly affect charge transport inside a device as well as its operating stability.&#13;
&#13;
Then I focus on lead bromide nanoplatelets, whose deep-blue luminescence makes them one of the leading light-emitting platforms for next-generation displays. I systematically investigate key factors that determine the stability of the nanoplatelets under UV excitation, which mimics the condition of hot-carrier injection in an operating device. The freshness of the perovskite precursor solution is shown to be crucial for maintaining stability and efficient luminescence. I then show that the decrease in photoluminescence intensity upon UV irradiation primarily results from intrinsic instability of the perovskite lattice, whereas moisture triggers the transformation of nanoplatelets into thicker nanostructures. Finally, substitution of the organic cation from formamidinium to methylammonium and the addition of excess alkylammonium bromide ligands during synthesis are shown to be effective stabilization strategies.&#13;
&#13;
Lastly, doping of manganese (Mn²⁺) ions, a powerful method for manipulating excited-state dynamics and altering semiconductor nanocrystal properties, is demonstrated in colloidal perovskite nanoplatelets. Substitutional doping of manganese for lead introduces bright and long-lived mid-gap Mn²⁺ atomic states, and the doped nanoplatelets exhibit dual emission from the band edge and the dopant state due to facile band-edge-to-dopant excitation transfer. I show that the photoluminescence quantum yields and band-edge-to-dopant photoluminescence intensity ratios exhibit a strong excitation power dependence that cannot be explained by saturation of the long-lived dopant states. By developing a kinetic model combined with time-resolved spectroscopic studies, I demonstrate that annihilation of dopant-site excitons through interaction with band-edge excitons is responsible for the observed power dependence. I then discuss the significantly faster band-edge-to-dopant excitation transfer in methylammonium-containing nanoplatelets compared to formamidinium-containing nanoplatelets.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139936</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Representations for Vision in Biological and Artificial Neural Networks</title>
<link>https://hdl.handle.net/1721.1/139934</link>
<description>Generalizable Representations for Vision in Biological and Artificial Neural Networks
Schwettmann, Sarah Elizabeth
This thesis makes empirical and methodological progress toward closing the representational gap between human perception and generative models.&#13;
&#13;
Human vision is characteristically flexible and generalizable. One of the persistent challenges of vision science is understanding the underlying representations that allow us to recognize objects and scene attributes across a diversity of environments. A central framework for identifying such representations is inverse graphics, which hypothesizes that the brain achieves robust scene understanding from image data by inverting generative models to recover their latent parameters. I demonstrate that we can directly test the biological plausibility of generative models by uncovering relevant neural representations in the human brain. For instance, if physical reasoning were implemented in the brain as probabilistic simulations of a mental physics engine, we would expect neural representations of physical properties like object mass to be abstract and invariant, useful as inputs to a forward model of objects and their dynamics. I present the first evidence that this is indeed the case: fMRI decoding analyses in brain regions implicated in intuitive physics reveal mass representations that generalize across variations in physical scene, material, friction, and motion energy.&#13;
&#13;
We can describe real-world physical scene and object understanding as inverse graphics because we know how to formalize the forward graphics model in a meaningful way, e.g. as a physics engine, such that vision inverts it. However, this is not the case for other attributes of visual scenes, such as their style or mood, where the relationship between what is experienced and the underlying image data is difficult to formalize and is not sufficiently explained by invertible optical or physical models. How do we begin to get traction on how humans experience higher-level aspects of visual scenes, or recognize and appreciate meaningful structure that may be difficult to articulate?&#13;
&#13;
I argue that large and flexible generative models for computer vision, which learn structure entirely from data, offer a promising setting for probing computational representations of human-interpretable concepts at different levels of abstraction. Attempts to interpret deep networks have traditionally searched only for predetermined sets of concepts, limiting what representations they can discover. I introduce a more data-driven approach to the interpretation question: a framework for building shared vocabularies, represented by deep networks and salient to humans, from the ground up. I present a procedure that uses human annotations to discover an open-ended set of visual concepts, ranging from low-level features of individual objects to high-level attributes of visual scenes, in the same representational space. In a series of experiments with human participants, I show that concepts learned with this approach are reliable and freely composable: generalizing across scenes and observers, and enabling fine-grained manipulation of image style and content. Next, I introduce a learned captioning model that maps patterns of neuron activation to natural language strings, making it possible to generate open-ended, compositional descriptions of neuron function. These approaches enable us to map between visual concepts in model representations and human perception, analyze models, and synthesize novel scenes that extrapolate dimensions of visual experience that are meaningful to observers.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139934</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and investigating phenomenological models for active matter</title>
<link>https://hdl.handle.net/1721.1/139933</link>
<description>Learning and investigating phenomenological models for active matter
Supekar, Rohit B. (Rohit Balasaheb)
Active matter, such as a suspension of swimming cells or colloidal particles, or even a flock of birds, consists of self-propelled units that convert stored or ambient free energy into motion. Recent advances in high-resolution imaging techniques and particle-based simulation methods have enabled the precise characterization of the collective dynamics in such biological and engineered active systems. In parallel, data-driven algorithms for learning interpretable continuum models have shown promising potential for the recovery of underlying partial differential equations (PDEs) from continuum simulation data. Motivated by these advancements, in this thesis we analytically and numerically investigate phenomenological models in the context of active fluids, and subsequently leverage recent model learning algorithms to infer continuum models directly from microscopic simulation and experimental video data.&#13;
&#13;
First, we consider idealized two-dimensional swimmers with a fixed body shape (‘squirmers’) to understand the impact of yield stress on the swimming characteristics of micro-organisms. Using the Bingham constitutive law, we compute numerical flow fields around squirmers and determine their swimming speeds. Our findings demonstrate how yield stress localizes the flow and makes jet-based propulsion energetically more efficient than tangential ciliary motions. Additionally, for the related problem of non-squirming translating cylinders, we derive and provide numerical evidence for previously unestablished analytical solutions using slipline theory from ideal plasticity.&#13;
&#13;
Second, we explore how a phenomenological continuum model for active suspensions can provide insights into other pattern-forming systems. Motivated in part by the complex flow patterns observed in planetary atmospheres, we investigate generalized Navier–Stokes (GNS) equations that couple nonlinear advection with a generic linear instability. This analytically tractable minimal model for fluid flows driven by internal active stresses has recently been shown to permit exact solutions on a stationary 2D sphere. Here, we extend the analysis to linearly driven flows on rotating spheres. We derive exact solutions of the GNS equations corresponding to time-independent zonal jets and superposed westward-propagating Rossby waves, qualitatively similar to those seen in planetary atmospheres. Direct numerical simulations with large rotation rates yield statistically stationary states close to these exact solutions. The measured phase speeds of waves in the GNS simulations agree with our analytical predictions for Rossby waves.&#13;
&#13;
Finally, we address the challenge of learning macroscopic hydrodynamic equations for active matter directly from microscopic data. Here, we present a framework that leverages spectral basis representations and sparse regression algorithms to discover PDE models from microscopic simulation and experimental data, while incorporating the relevant physical symmetries. We illustrate the practical potential through applications to a chiral active particle model mimicking swimming cells and to recent experiments of engineered active particles. In both cases, our scheme learns hydrodynamic equations that quantitatively reproduce the self-organized collective dynamics observed in the simulations and experiments. This inference framework makes it possible to measure a large number of hydrodynamic parameters in parallel and directly from video data. &#13;
&#13;
Overall, this thesis shows how phenomenological models for active matter can offer dynamical insights that can be further leveraged to inform data-driven equation discovery.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139933</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boosting Biodetection Signals via Photopolymerization: Strategies for Photocatalyst Amplification</title>
<link>https://hdl.handle.net/1721.1/139932</link>
<description>Boosting Biodetection Signals via Photopolymerization: Strategies for Photocatalyst Amplification
Kim, Seunghyeon
Photopolymerization-based signal amplification (PBA) is a method to enhance biodetection signals by coupling molecular recognition events with photocatalyst labels (eosin Y) and amplifying them through visible light-initiated free radical polymerization. Leveraging the inherent amplification in radical polymerization, PBA can provide naked-eye detectable signals within 5 min, showing great potential as a versatile signal amplification platform for rapid point-of-care diagnostic tests. However, the sensitivity of conventional PBA is not sufficient for its extensive use in early diagnosis of diseases. Thus, this thesis focuses on improving the sensitivity of PBA without compromising its short amplification time. To achieve this goal, we investigated the mechanism of eosin Y photoinitiation to understand oxygen consumption processes and identify inefficiencies during photoinitiation, and then developed liposome-enhanced PBA to increase the number of target-associated eosin Y molecules per binding event. The knowledge gained from these studies allowed us to develop a new exponential photocatalyst amplification method using photoredox autocatalysis and apply it to PBA.&#13;
&#13;
Through spectroscopic investigations (Chapter 2), we showed that oxygen is consumed stoichiometrically by the α-aminoalkyl radical of triethanolamine (TEOA) and the reduced eosin Y radical produced during eosin Y/TEOA photocatalysis. We also identified eosin Y degradation pathways from the reduced eosin Y radical at low oxygen levels. These degradation reactions suggested that coupling numerous eosin Y molecules with a specific binding event would be advantageous for improving the detection limit of PBA.&#13;
&#13;
To increase the number of target-associated eosin Y molecules, we incorporated eosin Y-loaded liposomes into PBA (Chapter 3). Compared to conventional PBA, liposome-enhanced PBA provides a 30-fold improvement in sensitivity. However, extra eosin Y in the monomer solution is still required to suppress oxygen inhibition, limiting the improvement in detection limit. Furthermore, the poor thermal stability of the eosin Y-loaded liposomes may limit the accessibility of potential diagnostic tests.&#13;
&#13;
To address the issues of liposome-enhanced PBA, we designed an exponential photocatalyst amplification method using photoredox autocatalysis (Chapter 4). In this method, eosin Y, a photocatalyst, amplifies itself by oxidizing a non-fluorescent eosin Y derivative (EYH₃⁻) under green light. The deactivated photocatalyst is stable and rapidly activated under low-intensity light, so the eosin Y amplification is suitable for resource-limited settings. Moreover, we demonstrated that the photocatalyst amplification is compatible with other photochemical reactions and bioassays.&#13;
&#13;
In Chapter 5, we applied the photocatalyst amplification strategy to PBA with sequential red and green light irradiation. Under red light, target-associated methylene blue (MB⁺) activates EYH₃⁻ through photocatalysis without the risk of bulk polymerization. Subsequent green light illumination then initiates photopolymerization with eosin Y autocatalysis. This approach made it possible to remove the extra eosin Y from the monomer solution, improving the detection limit of PBA through in situ photocatalyst amplification that depends on the amount of target-associated methylene blue.&#13;
&#13;
Finally, we consider unexplored research directions to address current limitations in photoinitiation efficiency, dynamic range, and non-specific photocatalyst amplification for further advancing PBA as a versatile signal amplification platform for low-cost, sensitive, and rapid biodetection at the point of care.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139932</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technological Change and Agricultural Development</title>
<link>https://hdl.handle.net/1721.1/139931</link>
<description>Technological Change and Agricultural Development
Moscona, Jacob
Understanding the forces that shape the rate and direction of technological progress is central to confronting key challenges facing the world today. Yet there is little systematic evidence on the factors that determine the evolving patterns of invention across applications, or the impact of these patterns on productivity and resilience in the face of shocks to production. The present set of essays investigates this topic, with an empirical focus on agriculture and the environment.&#13;
&#13;
Chapter 1: The first chapter investigates the impact of intellectual property enforcement on technology development and productivity. Patent protection was introduced for plant biotechnology in the United States in 1985, and it affected crops differentially depending on their reproductive structures. Exploiting this unique feature of plant physiology and a new dataset of crop-specific technology development, I find that the introduction of patent rights increased the development of novel plant varieties in affected crops. Technology development was driven by a rapid increase in private sector investment, was accompanied by positive spillover effects on innovation in certain non-biological agricultural technologies, and led to an increase in crop yields. Patent rights, however, could come with significant costs to the consumers of technology and distort downstream production. Nevertheless, I document that in US counties that were more exposed to the change in patent law because of their crop composition, land values and profits increased.&#13;
&#13;
Chapter 2: The second chapter investigates the extent to which innovation mitigates the economic impact of environmental catastrophe, focusing on the American Dust Bowl, an environmental crisis that led to widespread soil erosion and production losses on the US Plains during the 1930s. Combining data on county-level erosion, the historical geography of US crop production, and crop-specific technology development, I document that the Dust Bowl led to a major shift in the direction of US agricultural technology toward more Dust Bowl-exposed crops and, within crops, toward bio-chemical and planting technologies that could directly mitigate environmental distress. County-level exposure to new innovation significantly dampened the effect of the Dust Bowl on land values and agricultural revenues. These results highlight the role of crises in shaping the direction of innovation and the importance of endogenous technological progress as an adaptive force in the face of disasters.&#13;
&#13;
Chapter 3: The third chapter, written with Karthik Sastry, studies how agricultural innovation reacts to modern climate change and shapes its economic impacts in the US. We show in a model that directed innovation can either mitigate or exacerbate climate change’s economic damage depending on whether new technology is on average a substitute for or complement to favorable climatic conditions. To empirically investigate the technological response to climate change, we combine data on the geography of agricultural production, shifting temperature distributions, and crop-specific temperature tolerance to estimate crop-specific exposure to damaging extreme temperatures; we then use a database of crop-specific biotechnology releases and patent grants to measure technology development. We first find that innovation has re-directed toward crops with increasing extreme-temperature exposure and show that this effect is driven by types of agricultural technology most related to environmental adaptation. We next find that US counties’ exposure to climate-induced innovation significantly dampens the local economic damage from extreme temperatures, and estimate that directed innovation has offset 20% of the agricultural sector’s climate damage since 1960 and could offset 15% of projected damage in 2100.&#13;
&#13;
Chapter 4: An influential hypothesis explaining the persistence of global productivity differences is that frontier technologies are finely tuned to the local conditions of the high-income countries that develop them and inappropriate for application elsewhere. The fourth chapter, written with Karthik Sastry, studies how environmental differences between frontier innovators and the rest of the world shape the global diffusion, adoption, and productivity consequences of agricultural technology. Our empirical design uses differences in the presence of unique crop pests and pathogens (CPPs) as an instrument for the appropriateness of crop-specific biotechnology developed in one country and applied in another. We first find that inappropriateness predicted by CPP differences reduces cross-country transfer of novel biotechnology. We next find that inappropriateness relative to frontier innovators reduces adoption of improved seeds and crop-level output. Our estimates suggest that the inappropriateness of the contemporary frontier reduces global productivity by 50% and increases cross-country dispersion in log productivity by 15% relative to a world in which technology were equally productive in all contexts. We use our framework to study how historical and predicted changes in the geography of innovation affect the global distribution of agricultural productivity.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139931</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring the Thermomechanical State of the Lithosphere Using Geophysical and Geochemical Observables</title>
<link>https://hdl.handle.net/1721.1/139930</link>
<description>Inferring the Thermomechanical State of the Lithosphere Using Geophysical and Geochemical Observables
Shinevar, William Joseph
This thesis focuses on interpreting geophysical and geochemical observables in terms of the thermomechanical state of the lithosphere. In Chapter 1, I correlate lower crustal rheology with seismic wave speed. Compositional variation is required to explain half of the total variability in predicted lower crustal stress, implying that constraining regional lithology is important for lower crustal geodynamics. In Chapter 2, I utilize thermobarometry, diffusion models, and thermodynamic modelling to constrain the ultra-high formation conditions and cooling rates of the Gore Mountain Garnet Amphibolite in order to understand the rheology of the lower crust during orogenic collapse. In Chapter 3, I interpret geophysical data along a 74 Myr transect in the Atlantic to constrain the temporal variability of, and relationship between, crustal thickness and normal faulting. In Chapter 4, I constrain the error present in the forward calculation of seismic wave speed from ultramafic bulk composition. I also present a database and toolbox to interpret seismic wave speeds in terms of temperature and composition. Finally, in Chapter 5, I apply the methodology from Chapter 4 to interpret a new seismic tomographic model in terms of temperature, density, and composition, showing that shallow lithospheric roots are density-unstable.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139930</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Parameter Estimation with Devices in Microgrids</title>
<link>https://hdl.handle.net/1721.1/139929</link>
<description>Methods for Parameter Estimation with Devices in Microgrids
Overlin, Matthew Ryan
Microgrids may be described as miniaturized, independent, islanded, autonomous electrical networks. Before deploying or building a microgrid, it is informative to simulate its operation. In such a simulation, one must assign parameter values to the device models, and these parameter values may not be easily known for any number of reasons: lack of manufacturing data, inaccessibility to directly measure or infer such parameters, or safety concerns. Using non-invasive measurements, this work seeks to estimate these parameter values in devices which may exist within a low voltage microgrid. Constant power loads, diesel gensets, and solar inverters can all be found in low voltage microgrids. This thesis will discuss each model and seek to find optimal parameters for each device’s operation.&#13;
&#13;
First, two different dynamic constant power loads (DCPLs) are considered. An appropriate model structure is established, and a hybrid algorithm for parameter estimation (HAPE) is introduced to estimate defining parameters in the model. In order to verify the load model and the HAPE, two experiments are conducted with different DCPLs using a Power-Hardware-in-the-Loop (PHiL) testbed. The PHiL testbed consists of a real-time computer working with a programmable power amplifier in order to perturb the input voltage’s amplitude and frequency. The experimental waveforms are used to inform the HAPE. The resulting parameter estimates are used to define simulation models, and the performance of the HAPE is discussed.&#13;
&#13;
Second, a similar approach will be taken to estimate parameters in a model for a diesel genset, not a load. Unlike the first part of this thesis, this second part will implement a similar HAPE, but with some important differences. The HAPE used here will proceed in generations, consider a parameter sensitivity analysis, and be implemented across multiple computing nodes on a supercomputing platform: MIT Supercloud.&#13;
&#13;
Third, a small system is considered: a grid-connected home with rooftop solar power. Unlike the previous two parts of this thesis, this third part discusses an approach for choosing parameter values in a solar inverter’s simulation model. The solar inverter includes active power filtering functionality in its control strategy to mitigate current distortion at the home’s point of common coupling. Waveforms captured from experimental non-linear loads are included to show how a solar inverter would operate alongside such loads while connected to the utility grid. A Monte Carlo method, implemented on MIT Supercloud in a massively parallel fashion, is used to survey a wide range of parameter values. From the results of thousands of simulations, a set of parameters that minimizes component size is selected.&#13;
&#13;
In all three parts of this thesis, the models with parameters to be estimated may be described as grey box models. With the model’s structure established, a hybrid algorithm for parameter estimation (HAPE) can be used to repeatedly simulate the model with candidate sets of parameters. The HAPE borrows from established approaches (Simulated Annealing, Tabu Search, Particle Swarm Optimization) and offers new features. Heuristic approaches are sometimes preferred in simulations that may contain a large number of non-linearities, exhibit non-smoothness, or contain event-based phenomena. Heuristic approaches may, however, require many more iterations to reach convergence, so the algorithms in this work are implemented in a massively parallel fashion on MIT Supercloud, taking advantage of recent advances in computing. With appropriate device models established, they can be included as part of a larger simulation of a microgrid to more accurately demonstrate its operation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139929</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Optimization of In-Space Manufacturing to Inform Technology Development</title>
<link>https://hdl.handle.net/1721.1/139928</link>
<description>Modeling and Optimization of In-Space Manufacturing to Inform Technology Development
Moraguez, Matthew Tyler
In-space manufacturing (ISM) has the potential to alter the paradigm of space system design by enabling fabrication of components that are unencumbered by launch-related design constraints. However, the size, weight, and power of manufacturing equipment can easily outweigh the benefits, while the manufacturing equipment capabilities can unnecessarily constrain the set of manufacturable components.&#13;
&#13;
To address this challenge, a methodology is developed to inform ISM technology development by simultaneously considering the design of manufacturing equipment and fabricated components. This methodology relates ISM facility design parameters to overall mission benefit through both manufacturing models, which compute the component properties manufacturable by a given ISM concept, and component models, which compute the mission-level benefit delivered by components with given properties. The methodology has been applied to two case studies of interest. For the case of on-demand spares manufacturing, the results indicate that the maximum possible spares mass savings for a 0.45 m³ ISM facility is limited to 52% by build volume constraints alone. For the case of ISM solar arrays, the results reveal that a state-of-the-art specific power of 300 W/kg can be realized by an ISM concept that uses thermoplastic pultrusion to produce an 11.6 kg truss structure supporting a 60 kW solar array wing.&#13;
&#13;
The approach presented in this thesis is intended for use by space system designers seeking to incorporate ISM into their mission plans, as well as for ISM technology developers looking to evaluate the impact of their developments on overall mission capabilities. This work is thus positioned to inform the way in which ISM technology development is pursued with the specific purpose of yielding the maximum benefit at the mission level.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139928</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven Modeling of Lithium Intercalation Materials</title>
<link>https://hdl.handle.net/1721.1/139927</link>
<description>Data-driven Modeling of Lithium Intercalation Materials
Zhao, Hongbo
For multi-scale electrochemical systems such as batteries, a number of tools including microscopy, diffraction, spectroscopy, and impedance exist to probe and measure from single active particles to the whole cell, but traditional modeling approaches fail to capture all the available information. With the arrival of high-throughput computation and experimentation, there is an unprecedented opportunity to solve key challenges in energy storage via data-driven methods.&#13;
&#13;
In this thesis, I develop a framework for learning physics and extracting quantitative models of lithium intercalation materials from experimental data. Built upon the theoretical foundations of the reaction kinetics, transport phenomena, thermodynamics, and electrochemistry of lithium intercalation materials, the theory describes the single-particle behavior and the population dynamics in a porous electrode. PDE-constrained optimization and Bayesian inference are used to infer the constitutive laws from multiple data streams and to quantify their uncertainty. Applications include learning constitutive laws from images of pattern formation; inverting the unknown free energy, reaction kinetics, spatial heterogeneity, and chemo-mechanical coupling from images of lithium iron phosphate particles; and quantifying the autocatalytic reaction kinetics from X-ray diffraction of Li layered oxides. The results demonstrate that these datasets can be fully utilized with quantitative agreement, enabling model selection and validation and advancing the modeling of batteries.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139927</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Representation Learning for Medical Image Analysis</title>
<link>https://hdl.handle.net/1721.1/139925</link>
<description>Multimodal Representation Learning for Medical Image Analysis
Liao, Ruizhi
My thesis develops machine learning methods that exploit multimodal clinical data to improve medical image analysis. Medical images capture rich information of a patient’s physiological and disease status, central in clinical practice and research. Computational models, such as artificial neural networks, enable automatic and quantitative medical image analysis, which may offer timely diagnosis in low-resource settings, advance precision medicine, and facilitate large-scale clinical research.&#13;
&#13;
Developing such image models demands large training data. Although digital medical images have become increasingly available, limited structured image labels for the image model training have remained a bottleneck. To overcome this challenge, I have built machine learning algorithms for medical image model development by exploiting other clinical data.&#13;
&#13;
Clinical data is often multimodal, including images, text (e.g., radiology reports, clinical notes), and numerical signals (e.g., vital signs, laboratory measurements). These multimodal sources of information reflect different yet correlated manifestations of a subject’s underlying physiological processes. I propose machine learning methods that take advantage of the correlations between medical images and other clinical data to yield accurate computer vision models. I use mutual information to capture these correlations and develop novel algorithms for multimodal representation learning by leveraging local data features. The experiments described in this thesis demonstrate the advantages of these multimodal learning approaches in the application of chest x-ray analysis.
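The role of mutual information can be sketched with a simple discrete plug-in estimator over paired image and report labels; the toy categories below are hypothetical and stand in for learned local features:

```python
import math
from collections import Counter

# Hypothetical paired observations for the same patients: a discretized
# image finding and a discretized report label.
pairs = [("opacity", "pneumonia"), ("opacity", "pneumonia"),
         ("clear", "normal"), ("clear", "normal"),
         ("opacity", "normal"), ("clear", "pneumonia"),
         ("opacity", "pneumonia"), ("clear", "normal")]

n = len(pairs)
pxy = Counter(pairs)                  # joint counts
px = Counter(x for x, _ in pairs)     # image-feature marginal
py = Counter(y for _, y in pairs)     # report-label marginal

# Plug-in estimate of I(X; Y) = sum over (x, y) of p(x,y) log[p(x,y)/(p(x)p(y))].
mi = sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
         for (x, y), c in pxy.items())
print(f"estimated mutual information: {mi:.3f} nats")
```

A positive estimate indicates the two modalities share information that a representation-learning objective can exploit.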
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139925</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling-based Algorithms for Fast and Deployable AI</title>
<link>https://hdl.handle.net/1721.1/139924</link>
<description>Sampling-based Algorithms for Fast and Deployable AI
Baykal, Cenk
We present sampling-based algorithms with provable guarantees to alleviate the increasingly prohibitive costs of training and deploying modern AI systems. At the core of this thesis lies importance sampling, which we use to construct representative subsets of inputs and compress machine learning models to enable fast and deployable systems. We provide theoretical guarantees on the representativeness of the generated subsamples for a variety of objectives, ranging from eliminating data redundancy for efficient training of ML models to compressing large neural networks for real-time inference. In contrast to prior work that has predominantly focused on heuristics, the algorithms presented in this thesis can be widely applied to varying scenarios to obtain provably competitive results. We conduct empirical evaluations on real-world scenarios and data sets that demonstrate the practicality and effectiveness of the presented work.
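A minimal sketch of the importance-sampling idea (not the thesis's actual coreset constructions): sample points with probability proportional to their contribution, then reweight so the subsample's weighted sum is an unbiased estimate of the full sum. The per-point "costs" are hypothetical:

```python
import random

random.seed(1)

# Full dataset: per-point "costs" (a stand-in for per-point loss contributions).
data = [random.uniform(0.0, 10.0) for _ in range(10_000)]
total = sum(data)

# Importance sampling: pick m points with probability proportional to magnitude,
# then weight each pick by 1 / (m * p_i) so the weighted sum is unbiased.
m = 500
probs = [x / total for x in data]
idx = random.choices(range(len(data)), weights=probs, k=m)
estimate = sum(data[i] / (m * probs[i]) for i in idx)

# Note: when p_i is exactly proportional to the summand, every weighted term
# equals total/m, so this estimator has zero variance for the sum itself;
# for other objectives it remains unbiased with provably reduced variance.
print(f"exact sum = {total:.1f}, subsample estimate = {estimate:.1f}")
```

The guarantees in the thesis come from bounding how unevenly individual points can contribute, which controls how small m can be.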
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139924</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking of Eye Movement Features for Individualized Assessment of Neurocognitive State Using Mobile Devices</title>
<link>https://hdl.handle.net/1721.1/139923</link>
<description>Tracking of Eye Movement Features for Individualized Assessment of Neurocognitive State Using Mobile Devices
Lai, Hsin-Yu
The ability to objectively track neurocognitive state is important in a wide variety of settings and conditions. For example, with current clinical techniques, it is difficult to assess a patient's neurodegenerative disease (e.g., Alzheimer's) state accurately and frequently. The most widely used tests are qualitative, variable, and only performed intermittently, exposing the need for quantitative, accurate, and non-obtrusive metrics to track disease progression. Clinical studies have shown that saccade latency (an eye movement measure of reaction time) and error rate (the proportion of eye movements in the wrong direction) are significantly affected by neurocognitive state. We propose a novel system that measures and tracks these features outside of the clinical environment using videos recorded with a mobile device. Attaining this goal is challenging, given variable environments and the absence of infrared illumination, high-speed cameras, and chinrests.&#13;
&#13;
Several steps are taken to overcome these challenges and thereby enable tracking of eye-movement features in large cohorts of subjects. We designed an app to guide subjects to record their eye movements at a proper distance in a well-lit environment. Through this large-scale data collection, we have collected over 6,800 videos from 80 subjects across the adult age spectrum, about two orders of magnitude more than in most previous studies. To measure eye-movement features from these video recordings, we used a deep convolutional neural network for gaze estimation and model-based methods to measure saccade latency and error rates. With frequent measurements of these features, we then designed an individualized longitudinal model using a Gaussian process that learns individual characteristics and the correlations across these eye-movement features. With a system that can measure eye-movement features on a much finer timescale and in a broader population than previously possible, our research opens up the possibility of understanding whether eye-movement features can help track neurocognitive states more frequently and accurately.
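The individualized longitudinal model can be sketched, under strong simplifying assumptions, as one-dimensional Gaussian-process regression over time; the latency values, kernel, and hyperparameters below are hypothetical, not fitted to any real subject:

```python
import math

def rbf(a, b, ell=10.0, var=1.0):
    # Squared-exponential kernel: smooth correlation across days.
    return var * math.exp(-((a - b) ** 2) / (2.0 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small GP system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Hypothetical saccade-latency measurements (ms) for one subject on four days.
days = [0.0, 7.0, 14.0, 21.0]
latency = [205.0, 208.0, 214.0, 219.0]
mean = sum(latency) / len(latency)
noise = 1e-2

K = [[rbf(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(days)] for i, a in enumerate(days)]
alpha = solve(K, [y - mean for y in latency])

def predict(t):
    # GP posterior mean of latency at a new day t.
    return mean + sum(rbf(t, d) * a for d, a in zip(days, alpha))

print(f"predicted latency on day 10: {predict(10.0):.1f} ms")
```

Learning kernel hyperparameters per subject, and correlating multiple eye-movement features, yields the individualized longitudinal model described above.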
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139923</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rethinking Algorithm Design for Modern Challenges in Data Science</title>
<link>https://hdl.handle.net/1721.1/139922</link>
<description>Rethinking Algorithm Design for Modern Challenges in Data Science
Chen, Sitan
Heuristics centered around gradient descent and function approximation by neural networks have proven wildly successful for a number of fundamental data science tasks, so much so that it is easy to lose sight of how far we are from understanding why they work so well.&#13;
&#13;
Can we design learning algorithms with rigorous guarantees to either match, outperform, or augment these heuristics? In the first part of this thesis, we present new provable algorithms for learning rich function classes like neural networks in natural learning settings where gradient-based methods provably fail. Our algorithms are based on a new general recipe that we call filtered PCA for dimensionality reduction in multi-index models.&#13;
&#13;
Asking for rigorous guarantees not only helps uncover general mechanisms that make learning tractable, but also lets us be certain that our algorithms are resilient to the demands of modern data. In the second part of this thesis, we study challenging settings where even a constant fraction of data may have been corrupted and develop new iterative reweighting schemes for mitigating corruptions in the context of distribution estimation, linear regression, and online learning. A distinctive feature of many of our results here is that they make minimal assumptions on the data-generating process.&#13;
&#13;
In certain situations however, data may be difficult to work with not because it has been corrupted, but because it comes from a number of heterogeneous sources. In the third part of this thesis, we give improved algorithms for two popular models of heterogeneity, mixtures of product distributions and mixtures of linear regressions, by developing novel ways of using Fourier approximation, the method of moments, and combinations thereof to extract latent structure in the data.&#13;
&#13;
In the final part of this thesis, we ask whether these and related ideas in data science can help shed light on problems in the sciences. We give two such applications: one rigorously pins down the much-debated diffraction limit in classical optics, and the other establishes memory-sample tradeoffs for quantum state certification.
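As a toy instance of the method of moments recovering latent structure (far simpler than the mixture models treated in this thesis), two hidden coin biases in an even mixture can be identified from paired flips of the same coin; all numbers are illustrative:

```python
import math, random

random.seed(2)
p_true, q_true = 0.3, 0.8   # two hidden coin biases, mixed 50/50

def flip(bias):
    # One Bernoulli(bias) draw.
    return random.choices((1, 0), weights=(bias, 1.0 - bias))[0]

# Each sample: pick a coin uniformly, flip it twice (the pair shares a coin).
pairs = []
for _ in range(200_000):
    bias = random.choice((p_true, q_true))
    pairs.append((flip(bias), flip(bias)))

n = len(pairs)
m1 = sum(a for a, _ in pairs) / n        # E[x]       = (p + q) / 2
m2 = sum(a * b for a, b in pairs) / n    # E[x1 * x2] = (p² + q²) / 2

# Method of moments: p + q and p * q follow from m1 and m2; the biases are
# the roots of t² - (p + q) t + p q.
s, prod = 2.0 * m1, 2.0 * m1 ** 2 - m2
disc = math.sqrt(max(s * s - 4.0 * prod, 0.0))
p_hat, q_hat = (s - disc) / 2.0, (s + disc) / 2.0
print(f"recovered biases: {p_hat:.2f} and {q_hat:.2f}")
```

The mixtures of product distributions and linear regressions above require far more delicate moment and Fourier arguments, but the underlying idea is the same: low-order moments pin down the latent parameters.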
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139922</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable development of multiplexed microparticle technologies for optical single-cell barcoding</title>
<link>https://hdl.handle.net/1721.1/139921</link>
<description>Scalable development of multiplexed microparticle technologies for optical single-cell barcoding
Dannenberg, Paul H.
The biological complexity of an organism arises from the diversity and interaction of individual cells. Optical imaging techniques with single cell resolution have played an invaluable role in developing an understanding of cellular identity and function. However, current imaging techniques, although widely used to distinguish several different cell populations, do not scale to large numbers of single cells because they rely on fluorescent molecules whose broad spectral emission results in significant spectral crosstalk. In this thesis, we develop new intracellular optical probes called ‘laser particles’ (LPs), which possess subnanometer spectral linewidth. This narrowband emission enables us to generate hundreds of unique colors well suited for cellular multiplexing. Using a top-down fabrication approach, we develop a scalable method to produce billions of micron-sized LPs from a single semiconductor wafer. Moreover, we refine the design of the particles by perturbing their optical modes using nano-scatterers to optimize their emission signal. By physically combining multiple LPs we are able to scale the number of unique optical barcodes from hundreds to tens of thousands. Using these LP barcodes, we tag thousands of mammalian cells and read out their barcode emissions using a modified microscope and a custom-developed flow cytometer. We expect that the proposed technology offers a platform to identify single cells in various single-cell measurements and allows data acquired from the same cells to be integrated using the optical barcodes.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139921</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Beyond Accuracy: A Features Perspective On Model Generalization</title>
<link>https://hdl.handle.net/1721.1/139920</link>
<description>Machine Learning Beyond Accuracy: A Features Perspective On Model Generalization
Santurkar, Shibani
Prompted by its performance on a variety of benchmark tasks, machine learning (ML) is now being applied to tackle real-world problems. Yet, there is growing evidence that benchmark performance does not convey the full picture. Existing ML models turn out to be remarkably brittle: a striking example is their susceptibility to imperceptible input perturbations known as adversarial examples.&#13;
&#13;
In the first part of this thesis, we revisit adversarial examples, to use them as a window into current models. Our investigation provides a new perspective on why this susceptibility arises: it is a direct consequence of models’ reliance on predictive, yet brittle input features. In fact, our findings demonstrate that adversarial examples are a manifestation of a deeper problem: the mechanisms by which current models succeed on benchmarks are fundamentally misaligned with what humans tend to envision. This prompts the question:&#13;
&#13;
How can we build ML models that generalize not only on the benchmarks used for their development but also to the real world?&#13;
&#13;
To answer this question, we examine the ML pipeline from a “features perspective”: focusing not only on what labels models predict, but also on what features they use to do so. To this end, in the second part of this thesis, we develop a suite of tools to get a better grasp on: (i) what features models learn, (ii) why they learn them, and (iii) how one can modify the learned features at train or test time. These tools enable us to gain new insights into crucial design choices made during model development, such as how we create datasets, and train and evaluate models. Equipped with these insights, we then propose concrete refinements to the ML pipeline to improve model generalization in the aforementioned broader sense.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139920</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technologies for Room-Temperature Mid-Infrared Photodetection using Graphene</title>
<link>https://hdl.handle.net/1721.1/139919</link>
<description>Technologies for Room-Temperature Mid-Infrared Photodetection using Graphene
Goldstein, Jordan A.
Mid-infrared light is used for thermal imaging, relying on thermal radiation, and chemical analysis, based on vibrational absorption spectra. Ironically, these applications require detector materials and architectures with resilience to thermal noise and infrared-transparent optical materials with minimal vibrational absorption, restricting the mid-infrared material toolbox. 2D materials, which promise to combine high crystallinity with inexpensive and low-temperature processing paradigms, may alleviate some of the material compatibility issues that complicate the design of advanced mid-infrared systems beyond photodetectors and imagers. Graphene is a particularly promising 2D material whose photoresponse has been shown to range from visible to terahertz wavelengths and enjoys fairly mature synthesis and processing technology. Thus, in this thesis, I demonstrate two different mid-infrared systems with novel features enabled by graphene as the optically active material. First, I introduce a multispectral imager concept based on metasurfaces composed of differently-sized, graphene-loaded slot antennas. Here, the tight juxtaposition of sub-wavelength antennas allows broadband transfer and wavelength-sorting of incident mid-IR light into graphene patches with a theoretical efficiency of up to ∼58%. I develop a compact circuit model which accurately predicts the absorption spectra of these slot antennas, and demonstrate an electroplating process for fabricating such metasurfaces. This research paves the way towards CMOS-integrable mid-infrared spectral imagers. Second, I demonstrate a chalcogenide glass-on-CaF₂ platform accommodating waveguide-integrated split-gate photothermoelectric graphene photodetectors. These devices achieve waveguide-integrated photodetection at a record-long wavelength of 5.2 µm with a Johnson noise-limited noise-equivalent power of 1.1 nW/√Hz.
They also feature fast response, with no fall-off in photoresponse up to f = 1 MHz and a predicted 3-dB bandwidth f₃dB &gt; 1 GHz. The demonstrated platform can be readily extended to longer wavelengths and opens the door to distributed gas sensing and portable dual-comb spectroscopy applications. Taken together, these results demonstrate the ability of graphene to enable novel mid-infrared microsystems with unique features and capabilities.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139919</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Decentralized Desalination More Affordable Using Improved Process Design, Control, and Energy Recovery</title>
<link>https://hdl.handle.net/1721.1/139917</link>
<description>Making Decentralized Desalination More Affordable Using Improved Process Design, Control, and Energy Recovery
Shah, Sahil R.
In countries such as India, where continuous access to treated piped-water is uncommon, many have resorted to desalinating brackish groundwater to meet their drinking needs. This form of decentralized treatment is performed at the community-scale, as is common in rural areas, and within individual homes, using point-of-use (POU) purifiers. This thesis develops methods to lower the costs and improve the efficiencies of two technologies for these applications: electrodialysis (ED) and reverse osmosis (RO).&#13;
&#13;
Batch ED desalination, which relies on recirculating water to reach a desired product concentration, is often conducted at constant voltage. This operation scheme causes the membrane area to be underutilized because the ratio of applied current to limiting current is initially low during the batch cycle. By applying a time-varying voltage to the ED stack, we raised this ratio and increased production rate by up to 37% using the same membrane area. In parallel, we derived an analytical prediction of the batch time and validated it under varying feed and product concentrations, and flow velocities. The experiments and model together suggest that the proposed control scheme will improve production rate most significantly when desalinating through large concentration changes at low flow velocities. This work will assist engineers and operators seeking to size, evaluate, and maximize the production performance of new and existing batch ED systems. &#13;
&#13;
Decreasing the energy requirements of community-scale RO, by recovering hydraulic power from the brine stream, will make off-grid deployments more affordable. However, existing energy recovery devices (ERDs) are prohibitively expensive. We investigated the feasibility of leveraging ubiquitous gear and sliding vane positive-displacement mechanisms within a fixed-recovery architecture to provide a low-cost ERD solution. By modeling the coupled behavior of the pump, ERD, and RO train, we showed that production performance is sensitive to volumetric efficiency. Based on this finding, vanes were selected over gears for prototyping. The prototype enabled a 17% decrease in measured power consumption, and through characterizing friction, we determined that these savings could be doubled by balancing pressure loads on the vane mechanism’s rotor. This work lays the groundwork for realizing an affordable ERD for community-scale RO treatment.&#13;
&#13;
Finally, today’s POU RO purifiers only recover 20-30% of the input feed as drinking water and consume significant energy. By testing and analyzing a POU RO system, it was identified that recirculating the brine within a semi-batch configuration could help address these limitations. We engineered such a system using off-the-shelf parts, and in initial testing, showed that it could achieve recoveries of up to 75% without affecting production rate and quality. With further testing and refinement, this semi-batch system could make POU water desalination more efficient.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139917</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Magnetic Relaxation Nanomaterial Biosensor Platform for Local, Integrative Chemical Monitoring</title>
<link>https://hdl.handle.net/1721.1/139914</link>
<description>Modular Magnetic Relaxation Nanomaterial Biosensor Platform for Local, Integrative Chemical Monitoring
Murdock, Richard Joshua
There is a large gap in the diagnostic tools available to the clinical world today. Current sporadic health data points are only snapshots of a larger picture of disease in populations whose conditions change with time. The objective of this work is to develop an implantable diagnostic medical device platform for the tracking of temporally-varying disease chemical biomarkers. Continuous measurement of critical biomarkers provides a fuller understanding of physiology in response to stimulus, stress, or episodic and acute events. Classification of patient phenotype can be based on the timeline and level of biomarker elevation, persistence, and remission following these stimuli. Chronic disease surveillance in particular requires stable, long-term in vivo sensors to realize such measurements. Current limitations in signal stability and inefficient transduction from opaque, heterogeneous biological environments hamper clinical implementation of biosensor technologies. Cumulative-exposure colloidal-nanoparticle magnetic relaxometry assays, coupled with single-sided NMR relaxometry suitable for clinical, point-of-care, and resource-limited settings, are a promising technology for in vivo biosensing. Through a multidimensional combination of nanotechnology, diagnostic device development, magnetic relaxation physics, and hydrogel biomaterial diffusion membrane design, this dissertation endeavors to improve the armamentarium of modular diagnostic biosensors available to the medical community.&#13;
&#13;
Multicomponent populations of layered particles were designed to stably measure the presence of clinically-relevant biomolecules over months. Reading the irreversible interaction of these particles with the biochemical markers they are tuned against by magnetic relaxation allows for wireless, non-invasive measurement providing insight into real-time results over varying timescales. Here we show that by fundamental advancements in the exploration of nanomaterial properties, chemical conjugation strategies, and device engineering parameters, the kinetic and signal amplitude performance of a switch-based dosimeter can be substantially improved. The simultaneous investigation of device search and data extraction methods for implanted devices and the characterization of hydrogel material diffusion barriers also serves to advance the translation of magnetic relaxation biosensors beyond a benchtop system toward a clinically relevant, implantable platform technology. The implications of this work will help guide the personalized medicine campaign in biosensing and provide valuable insights into the next generation of diagnostic management of chronic disease.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139914</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Separate Effects of Surface Condition on Subcooled Flow Boiling Heat Transfer</title>
<link>https://hdl.handle.net/1721.1/139912</link>
<description>Investigation of Separate Effects of Surface Condition on Subcooled Flow Boiling Heat Transfer
Seong, Jee Hyun
In Pressurized Water Reactors (PWRs), heat generated by nuclear fission is effectively transferred to the water coolant by subcooled flow boiling. The maximum reactor power is limited by the critical heat flux (CHF), at which the boiling crisis occurs. Understanding the mechanisms that trigger this boiling crisis and predicting the CHF limit is key to the safety and efficiency of nuclear reactors. The CHF limit depends on cladding material, thickness, and surface conditions. Importantly, the surface of a fuel rod cladding evolves during operation due to oxidation and crud deposition. Many studies have investigated the effects of surface properties (e.g., surface roughness, wettability, and porosity) on boiling heat transfer and CHF. Still, the results of these investigations are not always in agreement with each other. We believe these discrepancies stem from a lack of control over the surface conditions.&#13;
&#13;
This thesis aimed to develop experimental capabilities and protocols for “separate effect” studies and to investigate the true effect of surface oxidation and Accident Tolerant Fuel (ATF) coatings on subcooled flow boiling heat transfer. To that end, we prepared Zircaloy-4 heaters that mimic commercial PWR fuel cladding and conducted subcooled flow boiling experiments at 1 bar, 10 K subcooling, and 1000 kg/m²s mass flux, using high-resolution high-speed video (HSV) and infrared (IR) diagnostics. A computational model solving a 3-D inverse conduction problem was developed to post-process the IR measurements, and an HSV post-processing approach combining a deep-learning tool, U-net, with a global optical flow algorithm was proposed to quantify boiling parameters from the HSV images. These parameters were incorporated into a heat flux partitioning model, in which we introduced a term to account for the non-symmetric growth of the microlayer.&#13;
&#13;
The experimental results showed that groove pattern, average roughness, and wettability do not affect subcooled flow boiling. Instead, they suggest that the process is determined by the location, size, and shape of cavities, and that micro-scale surface modifications (e.g., porous cracks) or nano-scale structures play a crucial role in the formation of active nucleation cavities and modify the bubble dynamics. A key takeaway from this study is that, to elucidate how surface modifications affect boiling heat transfer, one should carefully examine how the surface morphology changes at both the micro- and nano-scale and how the surface preparation process affects the formation of cavities.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139912</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graph Representation Learning for Drug Discovery</title>
<link>https://hdl.handle.net/1721.1/139909</link>
<description>Graph Representation Learning for Drug Discovery
Jin, Wengong
Drug discovery is an expensive and labor-intensive process, typically taking an average of 10–15 years. The goal of this thesis is to substantially accelerate this process by developing machine learning (ML) algorithms for three key steps in the drug discovery pipeline. First, we develop better property predictors that enable us to effectively navigate known chemical space. The main challenge is to learn a predictor from a small, biased assay and generalize to a much broader chemical space. We address this challenge with a new domain generalization method called counterfactual consistency regularization, which seeks to eliminate spurious correlations in biological assays. Second, we extend property prediction capabilities to combinations of molecules, enabling us to screen and discover synergistic drug therapies. Direct experimental data about combinations are extremely limited. To counter this limitation, we build more biological structure (drug-target interaction) into the models in order to leverage heterogeneous single-compound assays as well as to provide a mechanism to assess drug combinations through competitive binding to such targets. Third, we extend the search for new drugs beyond known chemical matter by developing deep generative models that can realize novel compounds with better characteristics. To this end, we propose hierarchical graph generative models that make use of larger structural building blocks derived from either tree decomposition of molecular graphs or molecular rationales explaining the outcome of property predictors. Lastly, we demonstrate how these techniques were used to discover novel antibiotics and COVID-19 antiviral drug combinations. These discoveries highlight the significant impact that deep learning can have on drug discovery by decreasing its time and cost.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139909</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient algorithms and representations for chance-constrained mixed constraint programming</title>
<link>https://hdl.handle.net/1721.1/139908</link>
<description>Efficient algorithms and representations for chance-constrained mixed constraint programming
Fang, Cheng
Resistance to the adoption of autonomous systems comes in part from the perceived unreliability of these systems. The concerns can be addressed by deploying decision-making algorithms that operate in the presence of uncertainty.&#13;
&#13;
The default approach is to optimise expected utility given probabilistic descriptions of uncertainty. However, such approaches become problematic when the cost of failure is difficult to define, for example when imposing constraints on remote science missions. Without well-defined costs of failure, it is difficult to balance the risks of failure against the rewards of success. This motivates an alternative approach, in which we define what it means to fail, and look for plans with the highest reward while limiting the probability of failure.&#13;
&#13;
The alternative approach thus explicitly imposes a set of constraints required for success, and provides upper bounds on the probability of violating such constraints. A chance-constrained mixed logical-linear program (CC-MLLP) is a natural formulation, allowing for the specification of linear and logical constraints with probabilistic continuous variables. The formalism can be used to describe problems ranging from autonomous underwater vehicle path planning to network routing under uncertainty.&#13;
&#13;
My thesis addresses shortcomings in current approaches to CC-MLLP. In particular, I focus on the problem of computation speed for solving CC-MLLPs, and the problem of accurate uncertainty representation.&#13;
&#13;
While naive encodings of CC-MLLPs can be solved with generalised solvers, the solution time may be unreasonable. In this thesis, I study architectures to speed up solutions by partitioning CC-MLLPs into the discrete and continuous portions.&#13;
&#13;
In order to provide faster solutions, I investigate methods for speeding up the solution of the continuous chance-constrained linear programs. Further, by exploiting the new solution methods, I develop techniques for guiding the discrete decision-making portion of the problem. The resulting algorithm achieves a tenfold speedup over prior approaches on autonomous path planning benchmarks.&#13;
&#13;
Lastly, current chance-constrained approaches require distributional descriptions of uncertainty. In this thesis, I consider the problem of deriving uncertainty bounds from data, which cover a required proportion of outcomes with a quantifiable amount of confidence. In particular, I provide bounds for real world scenarios which feature a finite number of executions. This is demonstrated on MBTA Red Line subway schedules.
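For a single Gaussian disturbance, a chance constraint admits a well-known deterministic equivalent, which is one reason the continuous portion of a CC-MLLP is tractable; the numbers below are illustrative only, not drawn from the thesis:

```python
from statistics import NormalDist

# Chance constraint P(a·x + w ≤ b) ≥ 1 - eps with Gaussian w ~ N(mu, sigma²)
# has the deterministic equivalent a·x + mu + z·sigma ≤ b, where z is the
# standard normal quantile at level 1 - eps.
def chance_ok(ax, mu, sigma, b, eps):
    z = NormalDist().inv_cdf(1.0 - eps)
    margin = b - (ax + mu + z * sigma)
    return margin == max(margin, 0.0)  # True iff the tightened constraint holds

# Illustrative numbers: nominal load 4.0, disturbance N(0, 0.5²), budget 6.0.
print(chance_ok(4.0, 0.0, 0.5, 6.0, 0.05))   # 5% risk: tightened load fits
print(chance_ok(4.0, 0.0, 0.5, 6.0, 1e-5))   # stricter risk bound: it does not
```

Embedding such tightened linear constraints inside the mixed logical-linear program is what the discrete search then branches over; the data-driven bounds above replace the Gaussian quantile when only finitely many executions are available.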
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139908</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contrastive Text Generation</title>
<link>https://hdl.handle.net/1721.1/139900</link>
<description>Contrastive Text Generation
Shah, Darsh J.
This thesis focuses on developing summaries that present multiple viewpoints on issues of interest. Such capacity is important in many areas, such as medical studies, where articles may not agree with each other. While the automatic summarization methods developed over the past decade excel in single-document and multi-document scenarios with high content overlap amongst inputs, there is an increasing need to automate comparative summarization. This is evident from the number of services for such reviews in the domains of law and medicine. Building on a traditional generation pipeline of planning and realization, I propose models for three scenarios with contradictions, where the planners identify pertinent pieces of information and consensus to adequately realize relations between them.&#13;
&#13;
First, I tackle contradictions between an old piece of text and a claim for the task of factual updates. As there is no supervision available to solve this task, our planner utilizes a fact-checking dataset to identify disagreeing phrases in an old text with respect to the claim. Subsequently, we use agreeing pairs from the fact-checking dataset to learn a text fusion realizer. Our approach outperforms several baselines on automatically updating text and on a fact-checking augmentation task, demonstrating the importance of a planner-realizer pipeline which can deal with a pair of contrastive inputs. &#13;
&#13;
Second, I describe an approach for multi-document summarization, where input articles have varying degrees of consensus. In a scenario with very few parallel data points, we utilize a planner to identify key content and consensus amongst inputs, and leverage large amounts of free data to train a fluent realizer. Compared to state-of-the-art baselines, our method produces more relevant and consensus-cognisant summaries. &#13;
&#13;
Third, I describe an approach for comparative summarization, where a new research idea is compared and contrasted against related past works. Our planner predicts citation reasons for each input article with current research to generate a tree of related papers. Utilizing an iterative realizer to produce citation reason aware text spans for every branch, our model outperforms several state-of-the-art summarization models in generating related work for scholarly papers.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139900</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational dissection and prediction of cancer immunotherapy response</title>
<link>https://hdl.handle.net/1721.1/139899</link>
<description>Computational dissection and prediction of cancer immunotherapy response
Shi, Alvin
Checkpoint blockade immunotherapies have transformed the standard of care and outcomes for many cancer types; however, more than 60% of patients still do not experience a durable clinical response from these treatments. To address this problem, novel biomarkers and more effective combinatorial therapies are needed. In this thesis, we first explore and validate the use of extracellular vesicular (EV) RNA as a potential biomarker for immunotherapy response. We discover differentially expressed genes and pathways within the plasma-derived EV RNA that are concordant with known biology. We also show that mutational information contained within EV RNA can stratify responders and non-responders. We leverage a Bayesian probabilistic model to deconvolve the tissue-of-origin of EV RNA transcripts, allowing greater interpretability for differentially expressed genes and pathways. Next, we performed large-scale epigenomics profiling in two cohorts of immunotherapy patients, and we discovered a non-responder enhancer signature that is lost in responders. Many genes contained within this epigenetic signature are associated with immunotherapy resistance, and we reasoned that targeting this signature with acetylation-reader bromodomain inhibitors would allow suppression of multiple resistance mechanisms at once. We show that bromodomain inhibitors exhibit considerable synergism with anti-PD1 in reducing tumor volume in murine melanoma transplantation models, and this synergism also improves anti-tumor killing by tumor-infiltrating lymphocytes. Using the same cohort, we also identify 189 peaks with differential activity between responders and non-responders, and we show these peaks are potentially predictive biomarkers of immunotherapy response. 
Finally, we leverage three transgenic mouse lines to investigate the effect of the T-cell receptor repertoire on cell fate commitment by CD4+ SP T-cells to either the thymic conventional (Tconv) or thymic T regulatory (tTreg) lineage. Based on overlap and machine learning analyses, we show that T-cell receptors are not the sole determining factor in Tconv vs. tTreg cell fate decisions. Together, these projects offer new biomarkers and novel combinatorial treatment options for checkpoint blockade immunotherapies.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139899</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Laws for Deep Learning</title>
<link>https://hdl.handle.net/1721.1/139897</link>
<description>Scaling Laws for Deep Learning
Rosenfeld, Jonathan S.
Running faster will only get you so far — it is generally advisable to first understand where the roads lead, then get a car ...&#13;
&#13;
The renaissance of machine learning (ML) and deep learning (DL) over the last decade is accompanied by an unscalable computational cost, limiting its advancement and weighing on the field in practice. In this thesis we take a systematic approach to address the algorithmic and methodological limitations at the root of these costs. We first demonstrate that DL training and pruning are predictable and governed by scaling laws — for state-of-the-art models and tasks, spanning image classification and language modeling, as well as for state-of-the-art model compression via iterative pruning. Predictability, via the establishment of these scaling laws, provides a path for principled design and trade-off reasoning, currently largely lacking in the field. We then analyze the sources of the scaling laws, offering an approximation-theoretic view and showing, through the exploration of a noiseless realizable case, that DL is in fact dominated by error sources very far from the lower error limit. We conclude by building on the gained theoretical understanding of the scaling laws’ origins. We present a conjectural path to eliminate one of the current dominant error sources — through a data-bandwidth-limiting hypothesis and the introduction of Nyquist learners — which can, in principle, reach the generalization error lower limit (e.g. 0 in the noiseless case) at finite dataset size.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139897</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-performance metallo-dielectric photonic crystals: Design, fabrication, and testing of a practical emitter for portable thermophotovoltaic generators</title>
<link>https://hdl.handle.net/1721.1/139896</link>
<description>High-performance metallo-dielectric photonic crystals: Design, fabrication, and testing of a practical emitter for portable thermophotovoltaic generators
Sakakibara, Reyu
Hydrocarbon thermophotovoltaic (TPV) systems, a concept first proposed in the 1950s, are emerging as a viable power source for small, portable generators for a spectrum of applications such as UAVs and robotic platforms. In a TPV system, an emitter is heated to above 1000 K, producing thermal radiation that is then converted to electricity by a low-band-gap photovoltaic cell (in hydrocarbon TPV, the heat source is fuel combustion). Unfortunately, state-of-the-art TPV systems still have low efficiencies (&lt;10%).&#13;
&#13;
One approach to increase both the efficiency and power density of the system is to use a selective emitter (one which preferentially emits in the wavelength range that can be converted by the photovoltaic cell). A promising class of broadband selective emitters is two-dimensional photonic crystals, which consist of a square array of cavities etched into a refractory metal substrate, and whose emission spectrum can be tuned by adjusting the geometry of the cavities. In particular, previous work has shown that photonic crystals made of tantalum and conformally coated with hafnium oxide can achieve in-band emissivities up to 0.6, allowing for prototype systems with 4.4% fuel-to-electricity efficiency. Even higher in-band emissivities of 0.8-0.9 are theoretically possible using a metallo-dielectric, or filled, photonic crystal: a tantalum photonic crystal both filled and capped with hafnium oxide.&#13;
&#13;
This thesis presents a metallo-dielectric photonic crystal with close to full theoretical performance. Using a combination of numerical simulations and cross-section images, I identified a number of major geometric imperfections in previous prototypes: a hollow air core within the cavity, a thick and uneven capping layer of hafnium oxide, and the recession of hafnium oxide from the top of the cavity. I then developed and implemented a fabrication process to achieve a better-filled cavity and a thin capping layer of hafnium oxide, enabling in-band emissivities of 0.7-0.9. Full-system simulations predict an up to 37.5% increase in system output power: 6.0 W for 100 W of fuel input, compared to 4.4 W of system output power for my group’s previous prototype system. This selective emitter paves the way towards efficient, practical, and portable mesoscale generators.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139896</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>MRI techniques for quantitative and microstructure imaging</title>
<link>https://hdl.handle.net/1721.1/139895</link>
<description>MRI techniques for quantitative and microstructure imaging
Dong, Zijing
The ability to map brain microstructures such as axonal fibers and myelin content is critical to improving our understanding of brain organization and neurological diseases. Quantitative MRI has proven highly sensitive to these microstructures, providing a safe, non-invasive approach for in vivo human imaging. However, quantitative MRI typically requires the acquisition of multiple images for biophysical model fitting, which leads to long scan times and, consequently, low SNR, low spatial resolution, image artifacts, and vulnerability to motion.&#13;
&#13;
This thesis aims to overcome these challenges by developing novel MRI acquisition and reconstruction methods that exploit the strengths of spatiotemporal encoding, recent hardware innovations, and low-rank signal priors to provide efficient microstructure imaging of the human brain with higher speed, SNR, resolution, and motion robustness. Three quantitative MRI contrasts were studied in this thesis: diffusion MRI, myelin water imaging, and MR relaxometry. These contrasts provide sensitivity to axonal fibers, myelin concentration, and iron content for the study of brain microstructures. Specifically, in the first part of this thesis, a fast and high-fidelity diffusion imaging method was developed that achieves 30-40% higher SNR efficiency than the current state-of-the-art method. This technique was also shown to be robust to physiological motion and field variations, and capable of resolving multi-echo images that are free from image distortions and artifacts. The second part of this thesis presents a novel acquisition method for myelin water imaging with &gt;10× acceleration compared to current approaches, which can potentially be used for fast examination of demyelinating diseases. The work also demonstrates the first submillimeter myelin water imaging in vivo, at 600 µm isotropic resolution, to study cortical myeloarchitecture. The third part of this thesis presents an ultra-fast MR relaxometry method with navigated motion correction, which provides fast, repeatable, and motion-robust quantitative imaging of the human brain.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139895</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Control of Networked Systems: Applications to Air Transportation</title>
<link>https://hdl.handle.net/1721.1/139894</link>
<description>Modeling and Control of Networked Systems: Applications to Air Transportation
Kavassery Gopalakrishnan, Karthik
Growing air traffic has resulted in congestion and flight delays. Delays not only inconvenience passengers but also have negative environmental impacts and cause monetary losses for airlines. Reducing delays is therefore crucial for operating a sustainable, efficient, and robust aviation infrastructure. Network analysis has been a popular tool to study large-scale interconnected systems due to its analytical and computational tractability. However, time-varying topologies, multilayered interactions, and the inability to model the high variability in flight delays have limited the utility of traditional network models in aviation applications. In this dissertation, we use ideas from switched-systems theory, graph signal processing, and machine learning to develop tools that overcome some of these limitations. &#13;
&#13;
The key idea behind our modeling approach is to (i) simplify the complex network interactions by identifying a small, finite set of representative network topologies, (ii) use data to learn the delay dynamics for each individual topology, and (iii) identify an appropriate topology transition policy to model the dynamics on time-varying networks. We call this the Markov jump linear system (MJLS) model for airport delays. We develop this delay model for the US, validate it, and demonstrate its superior predictive performance in comparison to other benchmarks. Next, we use this model to identify appropriate interventions that can minimize delays, and develop novel controllers for regulating delays. Our findings suggest that (i) optimal interventions that target highly connected airports in Atlanta, San Francisco, and Chicago can provide maximum systemwide benefits, and (ii) the most effective time for such interventions is between 11 am and 2 pm ET. The methods developed in this dissertation can help airlines and air traffic managers improve the efficiency and robustness of the air transportation system.
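As a minimal sketch of the MJLS idea (with made-up matrices and transition probabilities, not the fitted US model), the delay state evolves linearly under the current topology while the topology itself switches according to a Markov chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-airport example; values are illustrative only.
A = {
    0: np.array([[0.8, 0.1, 0.0],
                 [0.1, 0.7, 0.1],
                 [0.0, 0.2, 0.6]]),   # "calm" topology dynamics
    1: np.array([[0.9, 0.3, 0.1],
                 [0.2, 0.9, 0.2],
                 [0.1, 0.3, 0.8]]),   # "congested" topology dynamics
}
P = np.array([[0.9, 0.1],             # Markov topology-transition matrix
              [0.3, 0.7]])

x = np.array([10.0, 5.0, 0.0])        # initial delays (minutes)
mode = 0
for t in range(24):
    x = A[mode] @ x                   # linear delay dynamics for this mode
    mode = rng.choice(2, p=P[mode])   # switch topology per Markov chain
print(np.round(x, 2))                 # predicted delays after 24 steps
```

The switching structure is what lets a small, finite set of linear models capture time-varying network behavior.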
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139894</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemotherapy-Eluting Intraperitoneal Implants for Advanced Stage Ovarian Cancer Treatment</title>
<link>https://hdl.handle.net/1721.1/139892</link>
<description>Chemotherapy-Eluting Intraperitoneal Implants for Advanced Stage Ovarian Cancer Treatment
Subramanyam, Kriti S.
The objective of this work was to develop an implant for sustained abdominal chemotherapy delivery, known as intraperitoneal (IP) chemotherapy. IP chemotherapy has improved survival in ovarian cancer (OC) patients, but its adoption has suffered due to resource requirements and dose-limiting toxicities. We previously showed that an IP device that delivers sustained low-dose chemotherapy over time is equally effective in treating OC in mice and less toxic than intermittent high-dose injections. &#13;
&#13;
To translate that work to a clinically relevant delivery system, a biocompatible composite implant was developed from medical-grade materials containing microparticulate cisplatin, a widely used drug in OC treatment. The material was designed to match mechanical properties of human abdominal organs for safe long-term IP placement. Sheets fabricated from the composite material were evaluated in vitro for sustained cisplatin release and for bioactivity against OC cell lines. Three-dimensional implant geometries were designed and prototyped to facilitate deployment by minimally invasive laparoscopic surgery. &#13;
&#13;
Sustained low-dose chemotherapy may also potentiate tumor antigen-specific immune responses, but the impact of drug selection and dosing on this response is not well characterized. Multiple anticancer drugs were screened in this work and evaluated for their ability to induce immunogenic cell death-associated gene expression to inform future dose selection for the implant. The impact of sustained chemotherapy delivery is anticipated to be greatest against microscopic tumors left behind following debulking surgery, which precedes chemotherapy treatment in OC patients. Most OC animal models fail to replicate this disease pattern and, therefore, do not reflect the expected clinical treatment response. To address this shortcoming, a novel OC mouse model with disperse microscopic abdominal tumors was developed and characterized to provide a clinically translatable model to study the efficacy of the IP implant.&#13;
&#13;
An implant that enhances efficacy and accessibility of IP chemotherapy will have greater impact in resource-limited settings. Design reviews and surveys of practicing physicians in India revealed eagerness for new technology adoption and potential for an IP implant to supplement and integrate into existing treatment regimens. The implant developed in this work serves as a sustained IP drug delivery platform to improve patient compliance and physician adoption globally.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139892</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Spheres to Sheets: Colloidal Hydrodynamics, Thermodynamics, and Statistical Inference</title>
<link>https://hdl.handle.net/1721.1/139891</link>
<description>From Spheres to Sheets: Colloidal Hydrodynamics, Thermodynamics, and Statistical Inference
Silmore, Kevin Stanton
This thesis involves the development of Bayesian methods for statistical inference of distributions, the construction of optimization and thermodynamic sampling algorithms, and the use of hydrodynamical simulations to better understand the physics of soft matter systems consisting of particles ranging in shape from spheres to sheets.&#13;
&#13;
In the first part of this thesis, we introduce a Bayesian method that we call Maximum A posteriori Nanoparticle Tracking Analysis (MApNTA) for estimating the size distributions of nanoparticle samples from high-throughput single-particle tracking experiments. We show that this approach infers nanoparticle size distributions with high resolution by performing extensive Brownian dynamics simulations and experiments with mono- and polydisperse solutions of gold nanoparticles as well as single-walled carbon nanotubes. We then extend this non-parametric Bayesian framework to infer the orientation probability distribution function (OPDF) of suspensions of rod-like particles from small-angle neutron scattering data, with a method that we call Maximum A Posteriori Scattering Inference (MAPSI).&#13;
&#13;
In the second part of this thesis, we create two high-performance algorithms — one for feasible optimization and the other for accelerated thermodynamic sampling — to aid in the simulation of large-scale physical models. Drawing on the Riemannian optimization and sequential quadratic programming literature, a practical algorithm that we call Locally Feasibly Projected Sequential Quadratic Programming (LFPSQP) is constructed to conduct feasible optimization on arbitrary implicitly defined constraint manifolds. Specifically, with n (potentially bound-constrained) variables and m &lt; n nonlinear constraints, each outer optimization loop iteration involves a single O(nm^2)-flop factorization, and computationally efficient retractions are constructed that involve O(nm)-flop inner loop iterations. The second algorithm developed, called Collective Mode Brownian Dynamics (CMBD), is a method based on Brownian dynamics simulations that uses a specially constructed mobility matrix that can reduce the computational time it takes to reach equilibrium and draw decorrelated thermodynamic samples. Importantly, the method is completely agnostic to particle configuration and the specifics of interparticle forces and runs in O(N) time on graphics processing units, where N is the number of particles.&#13;
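The per-iteration cost quoted for LFPSQP can be illustrated with a generic tangent-space projection of the kind used in feasible optimization on constraint manifolds; the constraint (a unit sphere) and step below are hypothetical stand-ins, not taken from LFPSQP itself:

```python
import numpy as np

def tangent_project(J, v):
    # Project v onto the null space of the constraint Jacobian J
    # (m rows, n columns, m smaller than n). The single Cholesky
    # factorization of J J^T is the O(nm^2)-flop step noted above.
    L = np.linalg.cholesky(J @ J.T)
    y = np.linalg.solve(L.T, np.linalg.solve(L, J @ v))
    return v - J.T @ y

x = np.array([1.0, 0.0, 0.0])   # point on the unit sphere, c(x) = x.x - 1 = 0
J = x.reshape(1, 3)             # constraint gradient direction at x
v = np.array([0.3, 1.0, -0.5])  # proposed step
step = tangent_project(J, v)
print(step)                     # normal component removed: [0.0, 1.0, -0.5]
```

Projecting each candidate step this way keeps iterates (to first order) on the feasible manifold, which is the essence of feasible SQP methods.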
&#13;
In the final part of this thesis, we study the behavior of flexible 2D materials. Using the LFPSQP algorithm for feasible optimization, the minimum-energy shapes of membranes with boundaries subject to fixed area and contour lengths (relevant to 2D biological objects like kinetoplasts) are found over a range of dimensionless areas and dimensionless spontaneous curvatures. Notably, as spontaneous curvature is increased, it is found that axisymmetry is broken. The constrained normal modes of the sheets are also computed and shed light on the behavior of fluctuations. Additionally, we perform numerical simulations of "tethered" semiflexible sheets with hydrodynamic interactions in shear flow. With athermal sheets, we find buckling instabilities of different mode numbers that vary with bending stiffness and can be understood with a quasi-static model of elasticity. For different initial orientations, chaotic tumbling trajectories are observed. With thermal sheets, we observe a dynamical transition from stochastic flipping to significant crumpling and continuous tumbling consistent with the onset of chaotic dynamics found for athermal sheets. The effects of different dynamical conformations on rheological properties such as viscosity and normal stress differences are also quantified.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139891</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Equilibria and Feasibility of Ecological Polynomial Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/139886</link>
<description>On Equilibria and Feasibility of Ecological Polynomial Dynamical Systems
AlAdwani, Mohammad S Kh F Sh
Explaining and predicting the behavior of ecological systems has been one of the greatest challenges in ecology. One promising route to meeting this challenge has been the mathematical modeling of species abundances over time. However, finding a compromise between tractability and realism has not been easy. Functional responses in 2-species models and higher-order interactions in 3-species systems have been proposed to reconcile part of this compromise. However, it remains unclear whether this compromise can be fulfilled and extended to multispecies models. Yet answering this question is necessary in order to differentiate whether the explanatory power of a model comes from the general form of its polynomial or from a more realistic description of multispecies systems. Nevertheless, extracting the set of conditions compatible with feasibility (i.e., the necessary conditions for species coexistence, stability, and permanence), even at the 2-species level, remains a major mathematical challenge. Currently, there is no methodology that can provide a full analytical understanding of feasibility for any given model.&#13;
&#13;
Here, we develop a general method to quantify the mathematical consequences of adding higher-order terms in ecological models, based on the number of free-equilibrium points that can emerge in a system (i.e., equilibria that can be feasible or unfeasible as a function of model parameters). We characterize complexity by the number of free-equilibrium points generated by a model, which is a function of the polynomial degree and the system’s dimension. We show that the probability of generating a feasible system in a model is an increasing function of its complexity, regardless of the specific mechanism invoked. Our results reveal that conclusions regarding the relevance of mechanisms embedded in complex models must be evaluated in relation to the expected explanatory power of their polynomial forms. We then propose a general formalism to analytically obtain feasibility conditions for any population dynamics model of any dimension. From our methodology, we establish mathematically how two or more model parameters are linked—a task that is impossible to perform with simulations. By showing how feasibility can be studied as a function of a given model, we establish partial conditions for species coexistence, moving us a step closer to the goal of systematically understanding the behavior of ecological systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139886</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Throughput Synthesis of Metal–Organic Frameworks in a Continuous Flow Reactor</title>
<link>https://hdl.handle.net/1721.1/139885</link>
<description>High-Throughput Synthesis of Metal–Organic Frameworks in a Continuous Flow Reactor
Bagi, Sujay Dilip
Metal–organic frameworks (MOFs) are promising materials for a wide range of applications given their chemical stability and structural tunability. MOFs are crystalline coordination complexes consisting of organic linkers and inorganic polynuclear metal clusters forming highly ordered 2D and 3D structures. The past decade has seen exponential growth in the number of new MOF structures reported in the literature and in their potential applications in water harvesting, carbon capture, gas storage, catalysis, and separation, among others. A major bottleneck in the widespread deployment of MOF-based platforms stems from the high cost of synthesis in traditional batch reactors, which use excess solvents, require long crystallization times, and suffer from low yields and intrinsic inefficiencies in heat/mass transfer processes. This thesis focuses on developing low-cost, high-throughput, and energy-efficient synthesis routes using a continuous flow reactor for MOFs used in Atmospheric Water Capture (AWC) and for Zr MOFs with coordinatively unsaturated open metal sites.&#13;
&#13;
The first part of the thesis describes the modules used in the flow reactor platform, the design of the heated reaction zone (crystallizer), injection strategies for viscous reaction mixtures, and scale-up scenarios. The flow platform is then used to develop a continuous manufacturing process for MOF-808—a Zr-MOF widely studied as a catalyst and an adsorbent in industrially important processes—that can achieve high process yields with minimal solvent use. Under flow-optimized conditions (150 °C, 5 min), the N,N-dimethylformamide solvent and formic acid modulator amounts were decreased by 84% and 67% in volume, respectively, resulting in a two-order-of-magnitude increase in productivity (defined in units of kg_MOF m⁻³ day⁻¹) with similar yields, compared to the established batch synthesis (130 °C, 48 hours). A techno-economic model based on laboratory-demonstrated synthesis routes was developed to compare energy and cost savings for the flow system relative to batch, indicating that solvent use was the largest contributor to the overall cost. The flow platform was then used to evaluate the kinetics of crystallization of MOF-808 using time-resolved powder X-ray diffraction measurements. The roles of temperature and linker concentration in MOF-808 crystallization were investigated by determining the rate constants for nucleation (k_n) and growth (k_g), obtained from nonlinear fitting of the crystallization curves with the Gualtieri model. The activation energies obtained from Arrhenius plots for nucleation (Eₐ(N)) and growth (Eₐ(G)) are 64.7 ± 4 kJ mol⁻¹ and 59.2 ± 5 kJ mol⁻¹, respectively. The use of higher flow rates at the same residence time and temperature resulted in larger crystal sizes with a narrow crystal size distribution (CSD)—a simpler route to controlling crystal sizes. 
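As a back-of-envelope sketch of the Arrhenius analysis described above (the rate constants below are hypothetical, not the thesis's fitted values), an activation energy follows from two rate constants measured at two temperatures via ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1):

```python
import math

R = 8.314                  # gas constant, J mol^-1 K^-1
T1, T2 = 403.15, 423.15    # 130 C and 150 C in kelvin
k1, k2 = 0.010, 0.031      # hypothetical nucleation rate constants, s^-1

# Rearranged two-point Arrhenius relation for the activation energy.
Ea = -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")   # prints: Ea = 80.2 kJ/mol
```

Fitting many (T, k) pairs on an ln(k) vs. 1/T plot, as done in the thesis, simply generalizes this two-point estimate with a least-squares slope.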
Finally, the flow platform is employed for process intensification of Ni₂Cl₂(BTDD) MOF synthesis—an optimal candidate for AWC with tremendous potential to address the global shortage of clean drinking water. Flow synthesis achieved higher yields, reduced solvent volume by ~50%, and increased process productivity 3-fold. A computational fluid dynamics (CFD) model was developed to quantify the productivity enhancements in the flow reactor based on improved heat-transfer rates, larger surface-area-to-volume ratios, and effective residence times. This work adds critical facets to the growing body of research suggesting that the synthesis of MOFs in flow reactors offers unique opportunities to increase production rates and reduce synthesis costs.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139885</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Frequency Sonophoresis Assisted Cancer Immunotherapy</title>
<link>https://hdl.handle.net/1721.1/139884</link>
<description>Low-Frequency Sonophoresis Assisted Cancer Immunotherapy
Sinha, Diviya
Inducing and maintaining a long-lived, potent cellular CTL (Cytotoxic T Lymphocyte) response is of significant interest to the global scientific community. Protection against most intracellular pathogens (e.g., HIV) and tumors requires functional CTL immunity and is not achieved effectively by antibodies or humoral immunity. Traditional needle-based vaccination strategies primarily induce humoral immunity and are inefficient at eliciting robust CTL responses using subunit or whole-protein formulations. Viral adjuvant vectors capable of generating strong CTL responses are limited by associated toxicity and by anti-vector immunity that curtails their boosting potential. Hence, adjuvant-free vaccination strategies capable of inducing a decades-long, or even lifelong, potent CTL response using simple vaccine formulations would be highly desirable.&#13;
&#13;
In this thesis, it is demonstrated that a short, 30-second pretreatment of the skin with low-frequency ultrasound, also known as low-frequency sonophoresis (LFS), followed by topical ovalbumin antigen application, results in the induction of a potent CTL response in the absence of any co-administered adjuvants. LFS is a minimally invasive, FDA-approved technique for the transdermal delivery of lidocaine. Significant research has been carried out on its applicability to transdermal drug delivery. However, there are only a few reports on its use for cutaneous immunization (CI), and these have focused on the induction of humoral immunity. Here, we fill this gap in knowledge by investigating the cellular arm of the immune response upon LFS CI.&#13;
&#13;
LFS pretreatment of the skin caused rapid dispersion of topically applied antigen into the skin and draining lymph nodes, targeting both skin- and lymph-node-resident dendritic cells, which are known to be potent activators of the CTL response. Additionally, only antigen-specific T cells were activated and proliferated upon LFS immunization, eliminating the possibility that general inflammation in the absence of antigen causes non-specific T cell activation. Physical perturbation of the skin by LFS, resulting in the secretion of inflammatory cytokines by skin immune cells, is hypothesized to induce the observed polyfunctional Th1 CD4 and CTL immune responses. These attributes of the T cells have been correlated with strong efficacy for protection and treatment in the context of cancer vaccines and viral infections, including HIV. The potency of resting CTLs in LFS-immunized mice was investigated with a viral LCMV challenge months after a single immunization. Following the viral challenge, LFS-immunized mice mounted a recall response more than 24 hours faster, and with an order-of-magnitude greater expansion of CTLs, than both the priming phase and the naïve control groups. This observation of long-lived, potent CTLs was subsequently followed up with a functional investigation of CTLs in tumor therapy. The functional potency of the induced CTLs for effective therapy was investigated in two subcutaneous tumor challenge models, B16-OVA melanoma and EG7-OVA thymoma, in which LFS CI achieved significantly prolonged survival and tumor rejection.&#13;
&#13;
In summary, the findings of this thesis demonstrate the potential for LFS skin immunization as an adjuvant-free vaccination strategy for protection against viral infections and tumors via induction of robust CTL responses using simple vaccine formulations.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139884</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Subambient Passive Cooling Enabled by Polyethylene Aerogels</title>
<link>https://hdl.handle.net/1721.1/139882</link>
<description>Subambient Passive Cooling Enabled by Polyethylene Aerogels
Leroy, Arny
Subambient cooling is vital for promoting human health and well-being, driving sustainable economic growth, and minimizing food waste. Providing these benefits, however, comes with major energetic and environmental costs. The over 1.6 billion air conditioning units currently in use around the world already consume over 2000 TWh/year of electricity, the equivalent of half of the United States' yearly electricity consumption, straining existing electrical systems and contributing 3% of global CO2 emissions. With growing space cooling demand stemming from economic and population growth in hot and developing parts of the world, more efficient air-conditioning and refrigeration systems are urgently needed.&#13;
&#13;
One promising solution to help address existing and future global cooling challenges is to use passive cooling approaches such as passive radiative or evaporative cooling to provide electricity-free subambient refrigeration for food produce or to improve the efficiency of existing air conditioners and refrigerators. Passive radiative cooling relies on the rejection of the naturally occurring infrared emission of terrestrial objects to cold (3 K) outer space through Earth’s transparent atmospheric spectral window (8-13 µm) to achieve passive cooling to subambient temperatures. Evaporative cooling, on the other hand, leverages the large enthalpy of vaporization of water and the difference in water vapor concentration between a liquid surface and the ambient to generate high cooling power and subambient cooling. While promising, the cooling performance of these systems has traditionally suffered from significant parasitic solar absorption during the day and parasitic heat gain from the warmer ambient air when operating at subambient temperatures. In this work, we propose to tackle these two longstanding challenges by optimizing and using polyethylene aerogel, a solar-reflecting, infrared-transparent, and vapor-permeable thermal insulator, as a cover for radiative and evaporative coolers.&#13;
&#13;
We first present the development, characterization, and optimization of polyethylene aerogels to achieve a low thermal conductivity material with high solar reflectance and infrared transmittance. We then theoretically and experimentally investigate the benefits of using polyethylene aerogel covers in outdoor radiative coolers exposed to direct sunlight. We demonstrate significant passive cooling below the ambient temperature and high subambient cooling power over a continuous 24-hour period. Next, we show how ZnS nanoparticles inside polyethylene aerogel covers can help increase the solar reflectance of the cover, improving the daytime cooling performance of radiative coolers. We then propose a hybrid cooling architecture combining passive evaporative and radiative cooling, leveraging the heat rejection mechanisms of both approaches to achieve lower subambient temperatures and higher passive cooling power. Finally, we explore the potential impact of our proposed hybrid evaporative-radiative cooler in applications such as off-grid food produce storage and building air-conditioning and refrigeration systems. We show that our passive hybrid cooler can meaningfully extend the lifetime of perishable fruits and vegetables and provide substantial energy savings for cooling and refrigeration in commercial buildings across the United States with low water consumption. Successful development and commercialization of our hybrid cooling structure have the potential to reduce food-related waste in developing countries while reducing building cooling energy use, water consumption, and CO2 emissions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139882</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microglia and Myelin: Improved Tools and Molecular Interactions</title>
<link>https://hdl.handle.net/1721.1/139881</link>
<description>Microglia and Myelin: Improved Tools and Molecular Interactions
Kaiser, Tobias
Myelination of axons evolved in vertebrates to promote signal transduction and the development of a compact and energy-efficient nervous system. Myelin development and homeostasis are critical for normal function, and impairment of myelin is associated with sensory, motor, and cognitive dysfunction. While it has long been understood that oligodendrocytes produce myelin in the CNS, more recent studies also implicate additional cell types, including microglia, as providers of important molecular cues in myelination. Studies dissecting these cues have made great strides, yet the functions of many candidate molecular mediators remain incompletely defined, at least in part due to the scarcity of suitable tools. The goal of my thesis was threefold: (1) creating and supplying the research community with much-needed improved genetic tools to study the role of microglia, (2) developing tools to facilitate the analysis of white-matter ultrastructure, and (3) defining the role of a candidate molecular mediator, IgG, as well as its receptor, in white-matter development. In Chapter 1, I report the generation and characterization of novel transgenic mouse lines for the labeling and the inducible genetic manipulation of microglia. Harnessing the microglia-specific Tmem119 locus and CRISPR/Cas, I engineered knock-in mice expressing EGFP or CreERT2 and show that these lines are highly specific in discerning microglia from other closely related myeloid cells. Extending this work, in Chapter 2, I present evidence for the generation of a highly efficient and specific constitutively active Cre line leveraging the Fcrls locus. Most notably, flow cytometric analysis shows that this line completely spares monocytes and other white blood cells. Complementing the Tmem119-EGFP and Tmem119-CreERT2 mice that have already been adopted by hundreds of labs, Fcrls-Cre mice will enable genetic studies of microglia with improved specificity. 
In Chapter 3, I report the development and characterization of MyelTracer, an easy-to-install software suite made available to the research community for the quantification of myelin g-ratio, a key metric in studies of myelin morphology. Finally, in Chapter 4, I present evidence for the occurrence of maternally derived IgG on microglia in the postnatal brain. Using a genetic mouse model to study its functional relevance, I show that brains of mice lacking IgG postnatally harbor fewer myelinating oligodendrocytes and thinner axons in the corpus callosum. Further, using both newly generated conditional Fcer1g (part of IgG receptor) and existing constitutive Fcer1g knockout models, I show that the effect of IgG is not mediated through the canonical IgG-Fc receptor pathway. Independent of Fc receptors, IgG appears to be required for the function of a subset of microglia that occurs in white-matter tracts postnatally and is required for normal white-matter development. Overall, the work presented in this thesis resulted in the generation of several much-needed tools for the research community, and it reveals molecular insights into the role of IgG in the developing brain.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139881</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Chromium Cycling in Global Oxygen Deficient Zones with Chromium Isotopes</title>
<link>https://hdl.handle.net/1721.1/139880</link>
<description>Investigating Chromium Cycling in Global Oxygen Deficient Zones with Chromium Isotopes
Huang, Tianyi
Chromium (Cr) isotopes have shown great potential as a paleo-redox proxy to trace the redox conditions of ancient oceans and the atmosphere. However, the cycling of Cr in modern environments is poorly constrained. In my thesis, I attempt to fill the gap in our understanding of chromium cycling in the modern ocean, with a focus on redox processes in global oxygen deficient zones (ODZs).&#13;
&#13;
Firstly, we developed a method to analyze the Cr isotopes of different Cr redox species. Tests on processing conditions demonstrated its robustness in obtaining accurate Cr isotope data, and it is applicable to both frozen and fresh samples. This method allows us to investigate the redox cycling of Cr that is hard to unravel with existing total Cr methods. Secondly, in the Eastern Tropical North Pacific (ETNP), Eastern Tropical South Pacific (ETSP), and Arabian Sea ODZs, the total dissolved Cr profiles show preferential reduction of isotopically light Cr(VI) to Cr(III), which is scavenged and exported to the deeper ocean. Applying our new method to ETNP and ETSP ODZ seawater samples, we observed Cr(VI) reduction in both ODZs with a similar fractionation factor, indicating that similar mechanisms may be controlling Cr(VI) reduction in the two ODZs. The Cr(III) maximum coincides with the Fe(II) and secondary nitrite maxima in the upper core of both ODZs. Shipboard incubations with spiked Fe(II) showed fast Cr(VI) reduction occurring in the ETNP ODZ, but neither Fe(II) nor microbes reduced Cr(VI) directly. Thirdly, we calculated the isotope effects of Cr scavenging in the ETNP and ETSP ODZs. The two ODZs show similar isotope partitioning during Cr scavenging, and spatial variability is observed within the ETNP ODZ. Our calculated scavenged Cr isotope ratio is lighter than that of the total dissolved Cr from the same depth and is comparable to that of reducing or anoxic sediments, which implies that Cr isotopes can be used as an archive for local redox conditions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139880</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Robot Training for Complex Tasks</title>
<link>https://hdl.handle.net/1721.1/139878</link>
<description>Interactive Robot Training for Complex Tasks
Shah, Ankit Jayesh
Domains such as high-mix manufacturing, domestic robotics, and space exploration are key areas of interest for robotics. In these domains, it is difficult to anticipate the exact role of the robot a priori; therefore, defining the robot specifications is challenging. This presents a crucial hurdle to the widespread adoption of robots in these domains. Developing robots that can be re-programmed easily during deployment by domain experts, through the modification of the task specifications, without requiring extensive programming knowledge is a key research thrust of this dissertation.&#13;
&#13;
I present a multi-modal framework for training a robot through demonstrations and acceptability assessments provided by the teacher as per their intended task specification. I adopt an online Bayesian approach, where the robot maintains a belief over the teacher’s intended task specification, and each input provided by the teacher iteratively updates the robot’s belief. Further, I enabled the robot to infer task specifications that require satisfaction of temporal properties by utilizing a well-defined fragment of linear temporal logic (LTL). Towards developing this framework, I address three key research questions.&#13;
&#13;
I begin by presenting a novel approach to inferring formal temporal specifications from labeled task executions, called Bayesian specification inference. This approach can learn tasks expressed by an expressive but relevant fragment of LTL while modeling the ambiguity of demonstrations as a belief distribution over candidate LTL formulas. We demonstrate the utility of this approach in inferring task specifications for the representative multi-step manipulation task of setting a dinner table. We also utilize this model to learn an assessment model for multi-aircraft combat missions that shows a high degree of alignment with the assessments provided by a domain expert.&#13;
&#13;
Next, I present planning with uncertain specifications (PUnS), a novel formulation that enables planning with a belief distribution over the true specification. I propose four evaluation criteria that capture the semantics of satisfying a belief over logical formulas and demonstrate the existence of an equivalent Markov decision process (MDP) for every instance of a PUnS problem. We show that the robot policies produced through the PUnS formulation demonstrate flexibility by generating distinct valid task executions and result in a low error rate by simultaneously satisfying a maximal subset of the specifications in the belief distribution.&#13;
&#13;
Finally, I present an integrated specification inference framework that interleaves inference and planning through active learning. Our models for active learning allow the robot to identify whether a task demonstration or an assessment of its task execution provided by the teacher would be most beneficial in refining its belief. Further, we developed algorithms that enable the robot to identify and perform the task execution that would be most informative in refining its uncertainty. We explore the impact of different information utility functions and the degree of the teacher’s pedagogical selectivity on the robot’s learning performance, and demonstrate that selecting the ideal learning modality enables the robot to overcome the limitations of a non-pedagogical teacher and still converge to the true task specification. We also demonstrate our framework through a study involving users teaching a robot to set a dinner table with only five task executions.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139878</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elements of lubricant transport critical to piston skirt lubrication and to leakage into the piston ring pack in internal combustion engines</title>
<link>https://hdl.handle.net/1721.1/139871</link>
<description>Elements of lubricant transport critical to piston skirt lubrication and to leakage into the piston ring pack in internal combustion engines
Ahling, Sebastian Gerd
With an influx of new propulsion technologies and increasingly stringent emission standards on the horizon, the internal combustion engine (ICE) faces more hurdles than ever before to remain competitive in a rapidly changing social and market environment. Given the distinct advantages of the ICE—namely the existing energy supply infrastructure and long driving range combined with the ability to replenish the energy supply in a short amount of time—it remains to be seen, however, how fast these new technologies will be adopted. Therefore, further developing the ICE is critical if we are to succeed in reducing overall greenhouse gas and pollutant emissions. Understanding lubricant transport in the piston-cylinder unit is essential to this mission, due to its direct impact on both types of emissions.&#13;
&#13;
The first part of this thesis focuses on the development of an experimental system which allows us to visualize liquid oil transport under realistic engine operating conditions. To achieve this, a one-cylinder gasoline research engine equipped with a sapphire window for optical access is utilized in combination with the Laser Induced Fluorescence (LIF) measurement technique. This novel system is capable of capturing oil transport phenomena at various time- and length-scales synchronized with engine operation. &#13;
&#13;
The second part of this thesis provides an analysis of the images acquired by the system, with the primary points of interest being the piston skirt area and the oil control ring (OCR). Within the skirt area, we were able to characterize oil addition and transport, and, most especially, the influence of piston secondary motion on the latter. These key findings enable us to design a better, more efficient piston in the future—in fact, one recent design iteration of the piston skirt has already revealed the benefits of this approach. &#13;
&#13;
The OCR sits at the crucial interface between the skirt and the ring pack. As a result of the high resolution in time and space made possible by this system, we were also able to expand our understanding of known transport mechanisms in the ring pack, as well as discover new avenues of oil transport. Another considerable finding is the interplay between oil supply through the Twin-Land oil control ring (TLOCR) gap and gas-pressure-enhanced bridging—a discovery which finally offers a conclusive explanation as to how oil is transported into the severe contact areas of the ring pack in engines without reverse flow. Lastly, this thesis summarizes the avenues for oil transport in the ring pack observed in this research. This summary will inform how we approach new designs that will achieve a healthy oil transport system in the next generation of combustion engines.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139871</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Integrated State: Architecture, Planning, and Politics in Mexico, 1938-1958</title>
<link>https://hdl.handle.net/1721.1/139869</link>
<description>The Integrated State: Architecture, Planning, and Politics in Mexico, 1938-1958
López, Albert José-Antonio
This dissertation focuses on the intersections of the professionalization of architecture, regional and national infrastructure planning, and Mexican politics during the first three decades following the Mexican Revolution. I especially aim to shed light on the political and managerial role of architects during a crucial period of pseudo state-corporatism, dominant-party political "institutionalization," and ideologically fraught nationalist socio-economic development occurring between the late 1930s and the mid-1950s.&#13;
&#13;
I argue that architects became central figures in mid-century Mexican political society and the state's planning bureaucracy as Mexico's mode of governmentality shifted from an ideologically flexible post-conflict reconstructionist model to various modalities of governance: socialist and nationalist interpretations of pseudo state-corporatism, increasing executive-branch empowerment with varied levels of presidentialism, and eventually the embrace of a developmentalist, technocratic, and bureaucratic authoritarianism.&#13;
&#13;
An important faction of Mexican architects argued for their indispensability to political society and appealed for popular support via print journalism, public speeches, and other forms of mass media, in a broad and long-term collective mobility project. The key claim of this faction – and in particular of the small handful of its most capable and already politically connected leaders – was that the architectural profession possessed not only artistic and creative qualities but also technical and managerial capabilities that could serve the very particular needs of post-revolutionary nation-state construction and socio-economic development. They additionally sought to differentiate their training, especially with regard to their expertise in urbanism, town and regional planning, and graphic projection in general, so as to distinguish themselves from other technical professions, such as engineering, as they competed for primacy in the growing bureaucracies that held jurisdiction over the infrastructural organization of the national territory. However, some of the deepest inroads by architects into Mexican political society were made by those who proved equally capable of using the written and spoken word to characterize themselves as public intellectuals, visionaries, and moral reformers. These were powerful figures in the construction of a modern public and political consciousness at a time when Mexico was undergoing internal crisis and transformation due to outcries against corruption, uncertain developmentalist success, changing political dispensations, and the framing of new legitimizing, regenerative, and nationalist modernizing programs.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139869</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum State Discrimination with Overcompleteness</title>
<link>https://hdl.handle.net/1721.1/139868</link>
<description>Quantum State Discrimination with Overcompleteness
Medlock, Catherine Aiko
The central topics of this thesis are operating characteristics for binary hypothesis testing in classical and quantum settings and overcomplete quantum measurements for quantum binary state discrimination. We explore decision and measurement operating characteristics, defined as the tradeoff between the probability of detection and the probability of false alarm as parameters are varied. The thesis specifically addresses the Neyman-Pearson optimality of receiver operating characteristics when they are generated using threshold tests on a score variable rather than threshold tests on the likelihood ratio. The analysis applies to any scalar score variable. In the quantum setting, informationally overcomplete POVMs are explored to provide more robust quantum binary state discrimination schemes. We focus on equal-trace rank-one, or Etro, POVMs, which can be specified by arrangements of points on a sphere that we refer to as an Etro sphere.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139868</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model-based Methodology for Strategic Reuse of Legacy Designs in Space Mission Architecting</title>
<link>https://hdl.handle.net/1721.1/139865</link>
<description>A Model-based Methodology for Strategic Reuse of Legacy Designs in Space Mission Architecting
Trujillo, Alejandro Elio
Design reuse is a common practice in the space industry — stemming from a desire by engineers and managers to realize cost and schedule benefits while buying down risk with proven designs. Legacy design reuse, specifically, is characterized by its opportunistic nature compared to more intentional platform-based reuse. However, legacy reuse decisions made during preliminary phases of an architecting effort are often overly optimistic regarding potential benefits. This can lead to reuse scenarios that either do not fully realize expected benefits, or result in detrimental impacts or even mission failure.&#13;
&#13;
This work presents a remedy in the form of the Legacy Design Reuse in MBSE (LDRM) methodology. It is a systematic approach to conducting the technical and programmatic analyses that inform legacy reuse decisions in the early design phases of a mission. LDRM incorporates design reuse best practices and process improvements derived from a survey of industry practitioners. The resulting procedure is implemented in a Model-based Systems Engineering (MBSE) environment in order to leverage the integrated, authoritative, and curated data landscape of this new paradigm. The key outputs of LDRM are: a) an assessment of the reuse feasibility of a candidate design, b) an enumeration of the required rework/adaptation effort, and c) an estimate of the reuse scenario’s cost or schedule impacts versus a comparable from-scratch effort, using the COSYSMO 2.0 parametric cost modeling tool.&#13;
&#13;
A sample design problem demonstrates the application of LDRM to evaluate reuse of the robotic arm design of the Curiosity rover on Luna, a hypothetical lunar rover. The Luna design case is carried into a two-phase virtual Lego design/build/reuse validation experiment, in which the decision-making performance of study participants with access to LDRM outputs improved by close to 30% over a control group. LDRM is then applied to two industry case studies. The first, reuse of the bus subsystems across the AeroCube 10 and DAILI CubeSat missions, demonstrates nominal procedures and outcomes for three of the four subsystems explored; model incompatibilities in the attitude control system led to recommendations to the sponsor for improvements to MBSE model practices and curation. The second case study, exploring congressionally mandated reuse in NASA’s SLS vehicle, finds reuse limitations in the Core Stage engine section borrowed from the Space Shuttle program. LDRM predicts a 43% increase in systems engineering effort due to extensive interface rework of engine section components. These real-world findings suggest that decision support tools like LDRM can improve legacy reuse outcomes in the next generation of space systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139865</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nominal licensing: the syntactic distribution and number interpretation of bare nominals in Wolof</title>
<link>https://hdl.handle.net/1721.1/139864</link>
<description>Nominal licensing: the syntactic distribution and number interpretation of bare nominals in Wolof
Fong, Suzana
This thesis is a study of two levels of nominal licensing illustrated by bare nominals (BN) in Wolof: licensing of the BN itself and licensing at the level of its features. The former can be seen in the restrictions imposed on the syntactic positions a BN can occupy and the strategies employed to bypass them. The latter is reflected in the typologically unusual singular interpretation of BNs in this language, which stands in contrast with the number neutrality that BNs usually display in other languages.&#13;
&#13;
Bare nominals in Wolof can occur in the object position, and they must be adjacent to the transitive verb that subcategorizes for them. They are, furthermore, narrow scope indefinites. These are properties usually attributed to Pseudo Noun Incorporation (PNI). However, there are two circumstances under which the requirement to be adjacent to the verb can be obviated: when either a DP is introduced between the subject and the PNI-ed object or the latter is A'-moved. While the introduction of an additional argument and A'-movement are disparate phenomena, a dependent case analysis of nominal licensing (Branan, to appear) can account for why they both allow a PNI-ed object to not be adjacent to the verb in Wolof. Branan argues that all nominals must be licensed with case (Levin, 2015), with case assignment being calculated in terms of dependent case (Marantz, 1991). When assigning case to a nominal is impossible, a last resort licensing strategy is available, namely, surface adjacency with the verb. Under the proposal that Branan makes about domains of case assignment and the position of case competitors in the sentential spine, bare nominal objects in Wolof cannot be licensed with case, which is why they must be adjacent to the verb. However, the introduction of an additional argument provides a case competitor to a PNI-ed object, allowing it to do away with licensing via linear adjacency with the verb. Likewise, A'-moving a bare nominal object brings it close to the subject, which can transformationally act as a case competitor. I thus argue that a dependent case theory of PNI can provide a uniform analysis of the PNI distribution of bare nominals in Wolof. If correct, this analysis has two implications. Empirically, it provides further evidence that a strict adjacency condition cannot adequately characterize PNI crosslinguistically (Driemel, 2020). 
Theoretically, it motivates a reappraisal of the claim that dependent case and nominal licensing are necessarily incompatible with each other (Marantz, 1991).&#13;
&#13;
This analysis, however, is not sufficient to account for another facet of BNs in Wolof, namely, their singular interpretation. Crosslinguistically, BNs are often number-neutral, i.e., their number interpretation does not imply any commitment to a singular or plural interpretation. In Wolof, however, BNs are singular when unmodified. This can be argued for based on, e.g., the impossibility of saturating a collective predicate, the fact that they must be referred back to with a singular pronoun, and the fact that they cannot be the antecedent of a plural anaphor. However, a plural interpretation becomes available when a nominal-internal plural feature is exponed in the form of relative complementizer or possessum agreement. The generalization is that BNs in Wolof are singular, unless plural morphology is exponed within the nominal. I propose a version of Kalin’s (2017, 2018, 2019) framework of nominal licensing whereby certain interpretable features require licensing by the operation Agree; they are “derivational time bombs” that must be “defused” by this operation. Specifically, I argue that the feature [+plural] in Wolof nominals falls under this category. I assume that all nominals in Wolof, bare and full, can in principle be singular or plural. An obligatorily [+singular] interpretation arises in a BN when there is no probe for the [+plural] version to Agree with, which causes that derivation to crash. Conversely, if the BN merges with structure that contains a number probe, [+plural] can be defused, so that the corresponding construal can arise. This probe surfaces as relative complementizer or possessum agreement. The singular interpretation of BNs in Wolof arises as a conspiracy between the need to license [+plural] and the restrictions and resources available within the nominal a BN is embedded into. If correct, this analysis offers an explanation as to why BNs in Wolof do not follow the number neutrality tendency found in other BN languages. 
It also provides support for the view that the licensing of interpretable features may be a driving force in a derivation.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139864</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Understanding and Enhancing Pool Boiling Heat Transfer via Surface Property and Structure Design</title>
<link>https://hdl.handle.net/1721.1/139862</link>
<description>Mechanistic Understanding and Enhancing Pool Boiling Heat Transfer via Surface Property and Structure Design
Song, Youngsup
Boiling is a vital process used to transfer heat effectively by harnessing the large latent heat of vaporization for a variety of energy and thermal management applications. Boiling heat transfer performance is described mainly by the critical heat flux (CHF) and the heat transfer coefficient (HTC), which quantify the operational heat flux limit and the efficiency of boiling heat transfer, respectively. The goal of this thesis is two-fold: a fundamental understanding of the mechanisms associated with CHF, and a significant enhancement of pool boiling heat transfer. First, we addressed the large discrepancy among experimental CHF values on flat surfaces reported in the literature by accounting for hydrocarbon adsorption and oxidation of metallic surfaces during boiling. Accordingly, we developed an experimental protocol based on this understanding of the causes of spread in CHF values and used the protocol throughout this thesis for consistent experimental measurements.&#13;
&#13;
We subsequently investigated the effects of surface structures on CHF enhancement during pool boiling on hemi-wicking surfaces. We systematically designed micropillar surfaces with controlled roughness and wickability, and combined the results with scaling analysis to obtain a unified descriptor for CHF. This unified descriptor captures the combined effects of the extended contact line length and the volumetric wicking rate, and it correlates reasonably well with CHF values from our experiments and from literature data.&#13;
&#13;
Next, we engineered boiling surfaces to achieve simultaneous CHF and HTC enhancements. We developed a microtube structure, in which a cavity is defined at the center of a pillar, to enhance the heat transfer characteristics in a controllable manner. In addition to uniform microtube arrays, we designed a surface with microtube clusters interspersed with micropillars, referred to as tube-clusters in pillars (TIP), to mitigate the early boiling crisis that uniform microtube arrays suffer due to extensive bubble coalescence. While uniform microtube arrays and TIP surfaces both showed significant enhancement of CHF and HTC compared to a flat surface, there was an intrinsic trade-off between CHF and HTC associated with the nucleation site density. Accordingly, we proposed hierarchical TIP (h-TIP) surfaces that control vapor nucleation with multi-scale structures while providing capillary wicking. These surfaces showed CHF and HTC enhancements of up to 138% and 389%, respectively, compared to a flat surface.&#13;
&#13;
Finally, we investigated sandblasting as a scalable surface engineering technique for enhancing pool boiling heat transfer in industry-scale applications. Pool boiling experiments and surface characterization on silicon surfaces showed that surface roughness and volumetric wicking rate increased with the sandblasting abrasive size. As a result, CHF and HTC were enhanced by up to 192.6% and 433.6%, respectively, compared to a flat surface.&#13;
&#13;
This thesis provides important insights into the role of surface properties and structures in pool boiling heat transfer, thereby offering guidelines for the systematic design of surface structures with enhanced pool boiling performance.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139862</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactivity of crystalline slag phases in cementitious systems</title>
<link>https://hdl.handle.net/1721.1/139860</link>
<description>Reactivity of crystalline slag phases in cementitious systems
Traynor, Brian
Portland cement manufacturing is thought to account for as much as 4-8% of global CO2 emissions. Efforts to incorporate industrial wastes into concrete in order to offset the environmental burden of traditional cements have been growing in recent years. Reactive industrial wastes such as blast furnace slag and fly ash have been the subject of much research and have been successfully implemented in blended cements as partial replacements for Portland cement. Other wastes, such as steel and copper slags, have been largely neglected due to their lower reactivity. Concurrent with efforts at incorporating industrial wastes into concrete has been the development of alternative concretes with different chemistries, e.g. alkali-activated materials, blended Portland cements, and calcium sulfo-aluminate cements. This thesis focuses on improving our understanding of how the constituent phases of steel and copper slags interact with the aqueous phase of concrete, with the goal of identifying suitable applications in concrete. The focus on constituent phases reflects the fact that steel and copper slags are too variable in composition to study on a case-by-case basis; only through studies of their constituent phases can we identify opportunities for their use. First, I quantify the effect of the aqueous chemical environment on the dissolution rates of the minerals calcio-olivine and fayalite, identified as primary phases of ladle furnace steel slag and copper slag, respectively. Calcio-olivine and fayalite were exposed to solutions of NaOH and Ca(OH)2, chosen to mimic the pore solutions of alkali-activated and Portland-cement based binders. These results have significance for incorporating high-olivine-content steel and copper slags in concrete. Second, I elucidate the effect of aggregate surface chemistry on the type, morphology, and rate of reaction product formation in Portland cement-type systems. 
Polished surfaces of limestone, quartz, fayalite, and diopside were exposed to Ca- and Si-rich solutions, and the resultant reaction products were characterized using scanning electron microscopy. The results of this study indicate that the nucleation and growth kinetics of calcium silicate hydrate (the dominant reaction product of Portland cement concretes) are accelerated on limestone surfaces relative to quartz, fayalite, and diopside surfaces, although no differences in the morphology of the precipitated C-S-H were observed. This research matters because the type and kinetics of reaction product formation on the surface of an aggregate play a crucial role in the development of hardened, load-bearing concrete. This experimental work is supplemented by an extensive literature review of the composition and microstructure of steel and copper slags, as well as the dissolution rates and thermodynamics of dissolution of the relevant steel and copper slag phases in concrete pore solution. This review contextualizes the experimental research and contributes toward the development of a kinetic model that accounts for the reaction kinetics of both Portland cement and crystalline slags (steel or copper slag). This thesis also presents a methodology for calibrating pH meters in highly alkaline solutions such as those relevant to cementitious systems.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139860</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediaries of the state : bureaucratic transaction costs of claiming welfare in Mexico</title>
<link>https://hdl.handle.net/1721.1/139725</link>
<description>Intermediaries of the state : bureaucratic transaction costs of claiming welfare in Mexico
Rizzo, Tesalia
            (Tesalia Elisa Rizzo Reyes)
This dissertation argues that bureaucratic transaction costs hinder individuals' pursuit of welfare benefits. Instead, these costs make individuals dependent on intermediaries of the state who facilitate access to benefits. Intermediaries range from street-level bureaucrats and social workers to political bosses, brokers, or caciques. However, in institutionally weak contexts, intermediaries will often demand political loyalty in return for their assistance, a practice better known as clientelism. If clientelist intermediaries mediate citizens' engagement with the state, then claiming welfare benefits does not enhance citizenship and produce stakeholders, as research on the welfare state in developed-country contexts suggests, but rather intensifies political loyalties to intermediaries. This democratic penalty, although most prevalent in economically developing countries, is not exclusive to the poor. Mediated interactions with the state perpetuate pre-existing skill deficits, with adverse consequences for individuals' sense of self-efficacy, political autonomy, and capacity to hold governments accountable. Alongside more than 100 in-depth interviews with citizens, intermediaries, politicians, and bureaucrats, obtained during 13 months of fieldwork, I test this argument through a large-scale field experiment in rural Mexico. The experimental results reveal that reducing bureaucratic transaction costs increases the number of claims made through non-clientelist avenues. The experimental intervention also weakened the belief that welfare entitlements must be reciprocated with political support and diminished general approval of quid pro quo exchanges, two key norms that sustain clientelism. 
Although intermediaries may be efficient deliverers of benefits and may even compensate for deficiencies in state capacity, this dissertation maintains that mediated avenues of distribution can have detrimental effects on building citizens' bureaucratic skills, autonomous political participation, and ability to reliably make policy demands on the state. However, my results demonstrate that the vicious cycle of mediation can be broken by reducing the costs that citizens face in obtaining welfare benefits directly from the state.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, February, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 185-200).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139725</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flue gas CO₂ capture using electrochemically mediated amine regeneration</title>
<link>https://hdl.handle.net/1721.1/139724</link>
<description>Flue gas CO₂ capture using electrochemically mediated amine regeneration
Wang, Miao,
            (Ph. D. of Massachusetts Institute of Technology)
To sustain global energy demand while preventing a 2 °C global temperature increase, it is imperative that mitigation technologies such as carbon capture are deployed to reduce the carbon footprint of the continued use of fossil fuels. This thesis describes the current status of the electrochemically mediated amine regeneration (EMAR) process for the capture of CO₂ from flue gas. The absorption step in the EMAR process is the same as in the widely used amine process, but the desorption step is accomplished electrochemically rather than through a thermal swing. The EMAR process has several potential advantages over today's amine process. In terms of energy use, it has the potential to match the energy requirements of the amine process. Further, unlike thermal swing operation, EMAR uses only electricity, eliminating the need for steam. Another advantage is that, in principle, CO₂ can be released at higher pressures in the EMAR process, reducing the compression energy requirement. Although the EMAR process has many unique advantages, its practical implementation is hindered by several challenges. First, few large-scale electrochemical operations can serve as guides for scaling up this process. Moreover, the EMAR system requires materials (e.g., plastics, metals, and membranes) that are very different from those of a packed column and can be expensive to manufacture due to a smaller market demand. Motivated by these challenges, we began by investigating the thermodynamics of the mediated separation and developed process flowsheets to study the impact of various operating parameters (e.g., desorption pressure). The net energy requirement for a base-case process is up to 52 kJe/mol[subscript CO₂] for flue gas treatment, which is comparable to that of thermal swing processes. Furthermore, the EMAR process can desorb CO₂ at high pressures with minimal additional energy penalty to further reduce the size of the compression train. 
Additionally, we integrated an EMAR desorption unit into a lab-scale system for flue gas capture with online electrochemical and gas separation monitoring. With 15% CO₂, the EMAR separation scheme can operate continuously for up to 200 hours with flexible modulation of the regeneration capacity between 0.12 and 0.62 CO₂/mol[subscript amine] while consuming up to 80 kJ/mol of electrical energy. With periodic switching of the polarity, the system achieved reproducible amine regeneration for 130 cycles, demonstrating significant improvement over prior development. Lastly, we sized the electrochemical separation stacks, incorporating kinetics and mass transfer effects, to facilitate the system design. Changing system components (e.g., membranes) and operating parameters (desorption pressure, temperature, solvent capacity, and applied potentials) led to additional variation in system size and energy consumption. We identified the membrane cost as the dominant capital cost for the electrochemical separation train. The CO₂-avoided cost can be reduced to below $60/t[subscript CO₂] with optimized process conditions (e.g., utilization of waste heat). Thermal processes have a long history of operation at the large scales that are needed for carbon capture. The modular nature of electrochemical cells may alleviate the concern of scale to some extent, providing a viable pathway to "plug-and-play" EMAR systems. The development described in this thesis indicates that EMAR has the potential to be a viable option for flue gas CO₂ capture.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-155).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139724</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graceful codes : fundamental limits and constructions</title>
<link>https://hdl.handle.net/1721.1/139723</link>
<description>Graceful codes : fundamental limits and constructions
Roozbeham, Hajir
            (Hosseini Roozbeham)
A central question in information theory is to understand when and how data can be reconstructed from noisy observations. Error-correcting codes are means of adding redundancy to the data to enable better recovery. Most commonly, codes are designed to recover data in a regime where the statistics of the noise are kept constant. In a number of applications, however, it is required that the quality of the reconstruction degrade gracefully as noise statistics worsen. It has been known since the early work of Jacob Ziv (among others) that trade-offs between gracefulness and error-correcting capability exist. We focus on characterizing these trade-offs and proposing codes that are closer to optimal than those employed today. The information-theoretic contributions consist of three parts: combinatorial, where we study the so-called alpha-beta profile of codes over large alphabets; geometric, where we show that a linear code that spreads out nearby data vectors must contract some far-away data vectors as well; and probabilistic, where we show that good linear codes must necessarily experience a threshold effect, i.e., degrade their performance sharply when the noise level exceeds a certain limit. Our main coding-theoretic contribution is the introduction of a new class of nonlinear sparse-graph codes that we call Low-Density Majority Codes (LDMCs). They admit efficient decoding via belief propagation and have provably superior performance compared to the best possible linear systematic codes, in particular LDGMs. Hence, we hope that LDMCs will be able to replace LDGMs in practical applications, such as pre-coding for optical channels, tornado-raptor codes, and protograph constructions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 147-155).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139723</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular simulation of nucleation from flowing polymer melts</title>
<link>https://hdl.handle.net/1721.1/139722</link>
<description>Molecular simulation of nucleation from flowing polymer melts
Nicholson, David Andrew,
            Ph. D.
            Massachusetts Institute of Technology.
During processing, flow drastically accelerates the kinetics of polymer crystallization, altering the morphology and properties of the resulting material. The earliest stage of flow-induced crystallization, known as flow-enhanced nucleation (FEN), is challenging to study experimentally due to the small spatiotemporal scale over which it occurs. Non-equilibrium molecular dynamics (NEMD) simulation, which operates at short time and length scales, has proven to be a useful tool for investigating this stage. Using NEMD, FEN was studied in n-eicosane (C20), n-pentacontahectane (C150), and their blends. Analysis of these simulations provided insight into the underlying mechanism and constitutive relationships that govern FEN. A method was developed to extract kinetic rates and critical parameters from direct simulations of nucleation events. It is based on the first-passage time statistics of a stochastic process obeying classical nucleation theory (CNT) and utilizes a novel approximation for the first-passage time distribution. Using this method, FEN in C20 was determined to occur through both a reduction in the barrier to nucleation and enhanced diffusion. Mechanistically, the barrier reduction is consistent with CNT with an extra driving force due to flow-induced stretching and orientation. In FEN simulations of C150, intense fluctuations gave rise to domains with high local alignment on the Kuhn length scale. Upon quenching, the nucleation of small crystallites occurred preferentially within these domains. This behavior is at odds with CNT and indicates a more complex mechanism at play. The C150 simulations were further utilized to draw correlations between the nucleation rate and the response of the melt to the applied flow field. 
Based on the fidelity of the observed correlations, empirical models reported in the literature were evaluated for their consistency with the NEMD data, and new models are proposed for data that do not comport with existing ones. In bimodal blends of C20 and C150, nucleation kinetics were found to depend on the degree to which both the long and short chains are deformed. This result conflicts with experimental evidence that the nucleation rate in a polydisperse melt is determined only by the high-molecular-weight component.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 141-152).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139722</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling up 3D imaging, analysis, and culture of complex brain models</title>
<link>https://hdl.handle.net/1721.1/139721</link>
<description>Scaling up 3D imaging, analysis, and culture of complex brain models
Swaney, Justin M.
            (Justin Mark)
The brain is the most complex human organ, containing components from the nanometer scale to the centimeter scale. However, many experimental techniques in neuroscience have been optimized for small brain models. This thesis summarizes a body of work aimed at scaling up 3D imaging, analysis, and tissue culture techniques for large-scale brain models. We present a technique termed SWITCH that inhibits probe binding to allow for diffusion without the formation of a reaction front. To improve imaging resolution, we present a tissue expansion technique called MAP that physically magnifies tissue samples for super-resolution imaging with conventional fluorescence microscopes. Using these tools to achieve volumetric imaging of large-scale brain models generates petabyte-scale data, for which we present horizontally scalable image processing pipelines for analysis of intact mouse brains, marmoset brain samples, and cerebral organoids. The mouse brain pipeline allows region-based statistical analysis of protein expression and cell counts. An efficient single-cell non-rigid coregistration algorithm for multiplexed volumetric fluorescence imaging, based on matching corresponding nuclei between imaging rounds, is presented. A multiscale phenotyping pipeline allows single-cell, cytoarchitectural, and morphological analyses to be combined into a hyperdimensional statistical analysis of cerebral organoids. We use this pipeline to show phenotypic changes due to neurodevelopment, Zika virus infection, and changes in organoid culture protocols. Current cerebral organoid cultures lack a vascular system and are limited by nutrient transport. To address this issue in vitro, we fabricated synthetic vasculature by two-photon photopolymerization of polyethylene glycol-based resins. Printed micro-vessels were biocompatible, less than 100 [mu]m in outer diameter, and permeable to biomolecules through engineered pore structures. Perfusion of vascularized cerebral organoids cultured for 30 days resulted in 
neuronal differentiation as well as integration of the vascular network. Future studies can use and build on these technical advances to further our understanding of the brain through the use of large-scale brain models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis. "February 2020." Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139721</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illuminating reaction pathways in low-temperature combustion, pyrolysis &amp; atmospheric oxidation</title>
<link>https://hdl.handle.net/1721.1/139720</link>
<description>Illuminating reaction pathways in low-temperature combustion, pyrolysis &amp; atmospheric oxidation
Goldman, Mark Jacob.
Reaction mechanisms are key to developing many technologies that could aid in addressing climate change. Evaluating how emissions transform in the atmosphere, obtaining more efficient combustion engines, and finding a way to store transient renewable energy all rely on accurate chemical mechanisms. This thesis spans this space by refining basic reaction rate theories, combining information and techniques from various fields to advance knowledge about atmospheric and combustion chemistry, and developing computational tools to model pyrolysis for isotopic analysis. Having a solid foundation for reaction kinetics is necessary to obtain accurate kinetic rates. The first chapter corrects a longstanding issue in transition state theory by determining the proper symmetry in reactions with identical reactants and estimating the error incurred by taking the incorrect approach. The next chapter then applies transition state theory to develop high-pressure-limit and pressure-dependent kinetics for peroxy radicals from isobutanol, a potential bio-derived fuel. To fully understand the behavior of oxygenated bio-based fuels, this thesis utilizes these isobutanol rate coefficients along with other estimated and measured rate coefficients to examine how peroxy radicals react across a range of temperatures, pressures, functional groups, and NO concentrations. This wide perspective elucidates why certain conditions cause pathways in combustion chemistry to start impacting atmospheric systems, and vice versa. Peroxy reactions in the condensed phase also matter, so the subsequent chapter addresses this by developing a more accurate framework to simulate the oxidation of small particles in the atmosphere. Both of these chapters help improve predictions of how changing emissions impact the environment. 
The final chapter of this thesis describes the creation of a tool to help assess the origin of volatile fuels by applying combustion-based automatic mechanism generation to help measure the exact isotopic positions within molecules. Overall, these projects solve pieces of a larger climate problem, which will require, among other things, large-scale technological, political, and social changes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 193-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139720</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast stochastic model predictive control under parametric uncertainties</title>
<link>https://hdl.handle.net/1721.1/139719</link>
<description>Fast stochastic model predictive control under parametric uncertainties
Freiherr von Andrian-Werburg, Matthias.
Model predictive control (MPC) is widely applied in industry due to its ability to handle constraints explicitly. Many processes in chemical engineering have a high number of states but a relatively low number of inputs and outputs. Input-output formulations of MPC employ process models that predict outputs directly from input data, avoiding the higher computational complexity of state space models and resulting in fast MPC. Model uncertainties are ubiquitous, and there are two popular approaches to incorporating them in the MPC framework. In robust MPC, the worst case of the uncertainty is optimized, which can result in sluggish performance because this case often has a very low probability of occurrence. Stochastic MPC, on the other hand, incorporates information about the probability distribution of the uncertainty, which allows optimization based on the probability of occurrence. The main focus of this thesis is on fast stochastic input-output formulations of MPC. Polynomial Chaos Theory is used to incorporate probability distributions of uncertainties into process models. This approach avoids the need for sampling and makes on-line model evaluations possible. Fast stochastic MPC algorithms are presented that address probabilistic uncertainties while having no steady-state offset. One method of applying Polynomial Chaos Theory to process models is Galerkin projection, which requires the manipulation of model equations. A fully automated implementation based on symbolic arithmetic is presented to perform these manipulations. By introducing output feedback control, it is shown that fast stochastic MPC can be used to control unstable systems. The thesis also shows the applicability of linear input-output formulations of MPC to control a highly integrated nonlinear continuous crystallization process.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 131-135).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139719</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure, rheology and applications of thermally-gelling nanoemulsions</title>
<link>https://hdl.handle.net/1721.1/139718</link>
<description>Structure, rheology and applications of thermally-gelling nanoemulsions
Cheng, Li-Chiun.
Colloidal gelation is an effective tool to engineer material properties. Nanoemulsions, liquid-liquid dispersions where the droplet size is on the order of 100 nm, have become an emerging model system for studying aspects of colloidal gel materials. In conventional approaches, the gelation of nanoemulsions relies on the addition of another component into the suspension. However, such strategies require adding or removing components, which can be challenging for material processing, manipulation and studies of the colloidal gel physics. Therefore, a simple external stimulus such as temperature that can induce gelation of the nanoemulsion is highly desired. This thesis focuses on the design and property-structure relationship of novel nanoemulsion dispersions whose gelation is responsive to temperature. Armed with a molecular-scale understanding of the thermally-gelling nanoemulsion system, we design a gelling platform that can accommodate a wide range of colloidal formulations and gelling nanoemulsions that are responsive to different external stimuli such as pH and ionic strength. Moreover, by using the resulting thermally-gelling nanoemulsions, we study fundamental aspects of colloidal gel physics and develop applications for practical use. First, we design a gelling colloidal system whose inter-droplet interaction is modulated through thermally-responsive repulsions. By including amphiphilic oligomers in colloidal suspensions, the ionic surfactants on the colloids are replaced by the nonionic oligomer surfactants at elevated temperatures, leading to a decrease in the electrostatic repulsion. The mechanism is examined by carefully characterizing the colloids, and subsequently allowing the construction of interparticle potentials to capture the material behaviors. 
With the thermally-triggered surfactant displacement, the dispersion assembles into a macroporous viscoelastic network, and the gelling mechanism is robust over a wide range of compositions, colloid sizes and component chemistries. Second, with the molecular understanding of the thermally-gelling nanoemulsion system, nanoemulsions that are responsive to different stimuli are designed and studied. In one project, we report a gelling nanoemulsion system in which the material properties are responsive to changes in temperature and pH. The nanoemulsion is stabilized using a weak acid surfactant containing a poly(ethylene glycol), PEG, segment and a carboxyl group. We show that the interplay of the dissociated carboxyl group and the PEG segments greatly affects the nanoemulsion properties and gives rise to the thermally and pH-responsive gelation of the system. In the other project, we revisit the original nanoemulsion that the Doyle group designed and investigate the previously-overlooked depletion interactions and screened electrostatic repulsions. We take advantage of these interactions and study the material behaviors by sequentially applying two different gelation routes - first screening the electrostatic repulsion and then inducing the droplet bridging. The results show a non-intuitive trend in the material, and we show that the screening of electrostatic repulsions at room temperature in the first step has a considerable influence on the nanoemulsion microstructures and the associated rheological properties. Third, using thermally-gelling nanoemulsions, we investigate the effect of processing history on the material properties of colloidal gels. We provide new experimental evidence of path-dependent rheology and associated microstructures in colloidal gel systems. Moreover, we also show that material properties can be beyond the limit set by direct quenching and the gel strength can be greatly enhanced. 
On the other hand, we perform multiple particle tracking (MPT) to probe the nanoemulsion gels at the micrometer scale. We show that, by tailoring the surface chemistry of the MPT probe beads, different domains of the gel microstructure can be independently probed. The transport modes of particles and the gel strength at different length scales are obtained. Finally, the complex structure of the self-assembled nanoemulsions is utilized for practical applications. We show that we can synthesize hierarchical hydrogels using 3D printing. By properly engineering the nanoemulsions, the gel serves as an ink with good shear-thinning behavior and remarkable structural recovery. The bottom-up route via droplet self-assembly provides various internal structures, while the top-down route during printing shapes the hydrogel geometry. The resulting hydrogels can be used in applications such as membranes and tissue engineering.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 206-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139718</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bioinformatic tools for single-cell data analysis in clinical studies</title>
<link>https://hdl.handle.net/1721.1/139717</link>
<description>Bioinformatic tools for single-cell data analysis in clinical studies
Monian, Brinda.
Mechanistic understanding of disease has been dramatically enhanced by an explosion of new high-throughput experimental techniques for profiling biological samples, including RNA-Seq, mass spectrometry, and single-cell sequencing. However, the ability to gather exponentially more measurements comes with the pitfalls of increased Type I error and reduced interpretability. In theory, single-cell measurements can help combat this problem, since each sample of cells represents hundreds to thousands of observations. But thinking is still emerging on how best to utilize single-cell data to boost statistics and generate meaningful findings. This thesis represents several parallel efforts to develop and apply new bioinformatic techniques to generate robust findings from single-cell data. The advances are especially pertinent for small clinical studies in which low sample numbers are limiting. In the first part of the thesis, two classes of methods are introduced: gene module discovery in single-cell RNA sequencing data using sparse PCA, and probability-based metrics for evaluating the degree of association between paired modalities of single-cell data (in this case, single-cell RNA sequencing and paired TCR sequencing data). The methods are demonstrated on two different human datasets, as proof of concept and as examples of the biological findings capable of being unearthed. In the second part of the thesis, these methods are applied to larger clinical datasets with questions surrounding acquired tolerance and clinical reactivity in food allergy. In the first study, T-helper cells from peanut-allergic patients undergoing oral immunotherapy were profiled to identify therapy-induced effects and baseline predictors of outcome. Two distinct subsets of expanded TH2 clones were found to be suppressed, but not deleted, by the therapy. In the second study, transcriptional correlates of clinical reactivity were evaluated in peanut-activated memory T-helper cells from peanut-allergic adults. Cells 
from more reactive patients had higher expression of TH1 and MHC I gene programs, suggesting activation of auxiliary, non-TH2 cell types. In each of these studies, new single-cell analysis techniques were integrated to generate clinical findings with improved robustness and interpretability.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis. "February 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139717</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of DNA damage and repair responses in cerium exposed cells and hepatocyte spheroids</title>
<link>https://hdl.handle.net/1721.1/139716</link>
<description>Analysis of DNA damage and repair responses in cerium exposed cells and hepatocyte spheroids
Chao, Christy.
Many environmental exposures, including airborne particulates and industrial chemicals, have long been known to impact public health and safety by contributing to the risk of developing cancer. One key concern is a chemical's potential to induce DNA damage, as damage can lead to the formation of mutations that drive cancer. While cancer can take years to develop following exposures, carcinogenicity can be predicted much earlier by studying a chemical's ability to damage DNA. These experiments are generally performed in vitro using cultured cells to predict responses in vivo. Due to limitations of cell culture systems, there is a growing interest in utilizing 3D cell culture models to better mimic conditions found in tissues. This thesis investigates the DNA damaging potential of cerium oxide (CeO₂) nanoparticles, and describes the development of an integrated platform for high-throughput spheroid culture and DNA damage analysis. CeO₂ nanoparticles are industrially valuable diesel fuel additives. While these additives improve fuel efficiency, studies have shown that exhaust contains CeO₂ nanoparticles. Gaining a better understanding of the biological effects of these nanoparticles is necessary for making informed decisions on public health policies, and a major focus of this thesis is on the DNA damage and repair responses induced by CeO₂ exposure in mammalian cells. CeO₂ treated cells were studied using the CometChip, a cell-based assay that measures DNA damage. While double strand breaks and oxidative base lesions were not detected, single strand breaks were observed following CeO₂ exposure, raising the possibility of downstream health effects. DNA damage and repair were also analyzed in fibroblasts from apparently healthy and ethnically diverse people to learn about interindividual variation. Subtle differences in repair kinetics were observed among the samples tested, but in all cases, cells fully repaired CeO₂ induced DNA damage within a few hours. 
Although DNA repair was possible, I also observed that CeO₂ may induce cell stress, as evidenced by an increase in cytokine and interferon signaling pathways. Together, changes to DNA structure and the potential for CeO₂-induced stress responses highlight the potential health impact of CeO₂ nanoparticles. It is now broadly appreciated that 3D cell culture models often mimic characteristics of tissues better than traditional 2D cell culture. These more biologically relevant systems are currently being developed as alternatives to animal testing. One area where predictive 3D models would be beneficial is liver toxicity studies, since hepatocytes have been shown to have improved levels of liver-specific metabolic activity in 3D culture. While many complex platforms have been developed for this purpose, a very simple and effective approach is to create liver spheroids. Spheroids are cell aggregates that spontaneously form when cells are placed in low-attachment environments. In this thesis, I describe the creation of a patterned agarose array for high-throughput spheroid formation and test its efficiency for DNA damage analysis. Specifically, the agarose chip contains a grid of ~100 microwells condensed into the area of a single well of a standard 96-well plate. HepG2 spheroids formed in these microwells through agarose-assisted aggregation, and cell viability was maintained over time. To test efficacy for genotoxicity studies in intact spheroids, DNA damage and repair were directly measured in HepG2 spheroids cultured on this platform, following exposure to inflammatory chemicals (H₂O₂, SIN-1). The agarose array made it possible to assess DNA damage and repair in intact spheroids, enabling more physiologically relevant studies of DNA damage compared to 2D cell culture. It is anticipated that this platform will have broad applications for studies of DNA damage and repair in hepatocytes. 
Taken together, this thesis provides new understanding of the biological impacts of CeO₂ nanoparticles, and introduces a new method for studies of DNA damage and repair under conditions that better represent tissues.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139716</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing aromatics formation in combustion using experimental and modeling methods</title>
<link>https://hdl.handle.net/1721.1/139715</link>
<description>Revealing aromatics formation in combustion using experimental and modeling methods
Chu, Te-Chun,
            Ph. D.
            Massachusetts Institute of Technology.
Aromatic species have always been important in industry, and increasingly so in recent years. Besides the large amount of aromatics produced as commodity chemicals, soot formation caused by polycyclic aromatic hydrocarbons (PAH) leads to serious clogging and pollution in the petrochemical industry. Therefore, understanding aromatics formation has been prioritized to improve the efficiency of desired chemical production processes. Advanced experimental techniques and detailed models help us draw a complete picture of aromatic chemistry. This thesis includes a modeling study of rich natural gas combustion, where syngas, ethylene (C₂H₄), and acetylene (C₂H₂) are the valuable products and aromatics are undesired byproducts. Through high-level quantum chemistry calculations performed in the literature and trained into the Reaction Mechanism Generator (RMG), aromatic chemistry was understood from a theoretical viewpoint and accurately predicted by the model. For example, Hydrogen-Abstraction-C₂H₂-Addition (HACA) was identified as the leading pathway of naphthalene formation (C₁₀H₈, one of the simplest PAHs). To measure and validate critical aromatic kinetics in models, we designed and built a new compact quartz reactor to replace a stainless steel reactor in a unique apparatus combining laser absorbance spectroscopy (LAS) and time-of-flight mass spectrometry (TOF-MS). Reaction networks including phenyl radical (C₆H₅) + C₂H₄ and C₆H₅ + C₂H₂ (the reactants of HACA) have been investigated for their kinetics and time-dependent product branching ratios. At various temperatures and pressures, the total rate constants of C₆H₅ + C₂H₄ were measured by LAS, and direct product quantification of both networks was reported from TOF-MS measurements. Experimental measurements successfully validated detailed models built from ab initio calculations without fitting any kinetic parameters. 
Furthermore, the importance of HACA in developing three fused-benzene-ring species was emphasized, and reactions of naphthyl radicals (C₁₀H₇) + C₂H₂ were investigated theoretically, along with new pathways connecting the networks of 1-C₁₀H₇ + C₂H₄ and 2-C₁₀H₇ + C₂H₄. Pressure dependence of the kinetics was considered in this dissertation due to the high-temperature nature of natural gas combustion; accordingly, pressure effects in a natural gas pilot plant were studied to apply the chemistry learned from calculations and experiments to the real-world chemical industry. Overall, the studies in this dissertation successfully investigate fundamental aromatic chemistry through experimental and theoretical approaches and connect it to natural gas combustion for future process development.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 249-262).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139715</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frequency stability in a compact, deployable optical atomic clock</title>
<link>https://hdl.handle.net/1721.1/139714</link>
<description>Frequency stability in a compact, deployable optical atomic clock
Sheerin, Todd F.
            (Todd Fillmore)
The advent of ultra-narrow lasers, ultra-fast lasers, and optical frequency combs in the last several decades has enabled a new generation of atomic clocks based on optical transitions as opposed to microwave transitions. While optical atomic clocks in the laboratory outperform microwave standards by orders of magnitude, their complexity, size, weight, and power have so far precluded their application to fielded, compact systems. Efforts described in this thesis to transfer optical atomic timekeeping technology from the laboratory to the field and to improve analytic tools for spectroscopy with ultra-narrow lasers are motivated by a need to support GPS-denied operations (a DARPA objective) and to enable a broad range of positioning, navigation, and timing applications in civil, commercial, and defense sectors. Existing theoretical frameworks describing coupling strength for atom-laser interactions in optical atomic systems implicitly assume broad laser linewidths. This thesis explores possible spectroscopic implications of ultra-narrow lasers interacting with atoms. Additionally, a simple optical atomic clock architecture based on thermal calcium Ramsey-Bordé (R-B) matter-wave interferometry is described. Experimental investigations in this thesis were carried out in two systems: a compact, deployable Ca Beam Optical Timekeeping (CaBOT) clock, and a second-generation laboratory clock at NIST (Ca-2). This thesis describes a performance evaluation of the CaBOT frequency reference exhibiting fractional frequency instability of 5.0×10⁻¹⁴ at one second. Measurement noise floor analyses revealed excess laser noise to be the dominant performance limitation. With modest improvements, instability is projected to reach the 10⁻¹⁵ decade. In the Ca-2 system, temperature fluctuations were observed to drive instability for time scales &gt;100 s, and a temperature-frequency correlation study indicated that temperature control at the mK level would enable 10⁻¹⁶ instability. A thermal 
enclosure limiting frequency reference temperature variations to tens of mK enabled repeatable, sustained Ca-2 operations with ≤2×10⁻¹⁶ instability between 10 s and 1,000 s. Finally, a thermal-fluid temperature control system was designed for the deployable CaBOT clock to dissipate ~220 W and maintain mK-level temperature stability for the &lt;0.09 m³ chassis with low added noise. Physical demonstration of sub-mK temperature control with a model liquid cooling circuit indicated the potential for excellent mid- to long-term stability of the CaBOT clock when fully integrated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 293-298).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139714</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Associative learning in auditory thalamus and amygdala</title>
<link>https://hdl.handle.net/1721.1/139713</link>
<description>Associative learning in auditory thalamus and amygdala
Leppla, Christopher Albert.
Although much pioneering work has furthered our understanding of the roles the medial geniculate nucleus of the thalamus (MGN) and the basolateral amygdala (BLA) play in the formation of associative learning, many open questions remain. MGN is often thought of as merely contributing relayed sensory information, with BLA as the primary site of importance for the formation of associative learning, yet aspects of these assumptions have not been directly tested. It is known that both regions are necessary for learning to occur, but the current empirical understanding of the role MGN plays is lacking. Here we present a circuit-specific characterization of the information MGN transmits to BLA during discriminative learning, using a combination of optogenetics and in vivo single-unit electrophysiology. We demonstrate that while MGN may act as a relay station for information necessary for learning, this relay is an active process. This input also exhibits dynamic changes between conditions, which can also be seen in the BLA population it projects onto. This provides strong evidence that MGN plays an active role in learning, rather than the assumed role restricted to sensory relay. Finally, we show that BLA encodes unavoidable punishment associations more strongly than avoidable punishment associations, through a shift in the bias of difference encoding of these associations. Taken together, these findings suggest that the assumption, central to current dogma, of a purely sensory-relay role for MGN in the formation of associations is incomplete, and that the avoidability of an associated punishing outcome affects the magnitude with which BLA encodes the bias between reward- and punishment-associated tones.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, September, 2019; Manuscript.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139713</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments in electroaerodynamic propulsion</title>
<link>https://hdl.handle.net/1721.1/139712</link>
<description>Experiments in electroaerodynamic propulsion
Xu, Haofeng,
            Ph. D.
            Massachusetts Institute of Technology.
Electroaerodynamic (EAD) propulsion, which produces a thrust force by electrokinetic acceleration of ions in air, is a proposed method of electric airplane propulsion that is solid-state, nearly silent, and produces no direct combustion emissions. Recent work suggested that EAD-propelled aircraft are possible, although no EAD aircraft had flown prior to this thesis. In the first of three experiments described in this thesis, a fixed-wing heavier-than-air prototype airplane with a 5 m wingspan and 2.45 kg takeoff mass was designed and tested in flight. An analysis of the flight trajectory showed that this airplane achieved energy-gaining flight, demonstrating that EAD can propel a flying airplane; this is a proof of concept for electroaerodynamic propulsion. The prototype airplane flown in the first experiment carried no useful payload and had an endurance of 90 seconds. Two successive experiments assessed potential methods to increase the performance of EAD propulsion. One experiment confirmed that increasing the electrode gap spacing from 50 mm to 150 or 200 mm can increase the thrust-to-power ratio of EAD propulsion, provided that certain non-ideal physical effects, which this experiment explained for the first time, are suppressed. The other experiment determined that an original "decoupled" EAD architecture, which uses separate ionization and acceleration stages, can increase the thrust density and overcome previously observed performance trade-offs. The results of the latter experiments make possible the development of EAD aircraft with longer endurance and a payload capacity, which could begin to fulfil useful missions. A preliminary design analysis suggests that the increases in thrust-to-power and thrust density resulting from large gap spacing and decoupled thrusters -- among other design and power supply improvements -- can enable a payload of 0.2 kg and an endurance of 10 minutes in an aircraft of approximately 3 kg takeoff mass.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 89-95).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139712</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformation-assisted antifouling of surfaces</title>
<link>https://hdl.handle.net/1721.1/139709</link>
<description>Deformation-assisted antifouling of surfaces
Goon, Grace Swee See.
Fouling of surfaces occurs across many scales and applications, ranging from biomedical devices to ships, and from membrane separation processes to ice formation on the wings of planes. Chemical cleaning of the surfaces is often seen as the de facto treatment, but its drawbacks include the cost of the chemicals, their environmental impact, and the long duration required for the chemicals to react with the foulant. With the increased focus on sustainable industrial practices, there has been growing interest in chemical-free options. Current state-of-the-art chemical-free antifouling techniques include passive methods such as surface treatment or modification, the application of new surface materials, surface vibration, large deformation, or heating (for deicing only). The problem of fouling is especially severe for membrane processes, due to the variety of foulant types that come in contact with the membrane surface and the limitations and complexity of the commercially available spiral-wound module. Recent research on chemical-free membrane antifouling strategies has likewise focused on surface chemistry, the vibration of magnetically and electrically responsive membranes, the vibration of the entire system, and osmotically induced backwashing. While surface chemistry reduces the fouling propensity, the system will foul eventually and will still have to be cleaned periodically. Osmotically induced backwashing was also found to be severely ineffective in the presence of feed spacers, a feature necessary in the spiral-wound module. Vibratory methods often require magnetically or electrically responsive membranes or a mechanical assemblage to move the entire system, all of which have drawbacks of their own. In this thesis, the concept of cyclic deformation is first introduced and its effect studied in a no-flow model cell. The membrane is first fouled by creating a uniform alginate gel layer of known thickness before being placed in a no-flow 
model cell. Deformation is induced by controlling the volume of water on one side of the membrane using a syringe pump. The interfacial shear strength was then measured after the desired number of cycles. It will be shown that the interfacial shear strength decreases with the number of deformation cycles, even though the interfacial shear stresses generated by the cyclic deformation are low. This is an indication of interfacial fatigue behavior. To understand the eventual failure mechanics, the deformations required to generate sufficient shear stress and strain energy release rate were evaluated. A characteristic adhered foulant layer size, at which the mechanics switches from strength-based to energy-based, was derived. Then, the complexity of the model is increased through the addition of feed pressure, osmotic pressure, permeate flow, and crossflow. Over time, alginate and calcium ions near the surface of the membrane crosslink and form a foulant film that results in flux decline. At the desired flux, deformation-induced cleaning (DIC) was applied. The membranes were deformed through permeate pressure modulation. Excellent flux recoveries were achieved even in the presence of spacers, without incurring any damage to the membrane. In contrast, chemical cleaning took 6 times longer than DIC to achieve comparable cleaning performance, while osmotically induced cleaning was completely ineffective in the presence of spacers. Finally, DIC was applied to a commercially available spiral-wound module and showed flux recoveries over a wide spectrum of feedwaters with different fouling propensities. To sum up the implications of DIC for real industrial practice, a system analysis was conducted for a medium-size brackish water desalination plant. It was shown that the shorter shutdown time of DIC would prompt operators to conduct cleaning more frequently, improving the lifetime average production of the plant by 5-6%, reducing specific energy consumption by 10%, and 
reducing the cost of cleaning by $0.1/m³ of purified water produced. Beyond desalination, membranes are used in many food and industrial processes. The invention of DIC can have a significant impact on making these processes both more productive and more sustainable.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available. The images contained in this document are of the best quality available"--Disclaimer page.; Includes bibliographical references (pages 164-181).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139709</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of the reaction of aryl halides with metallic amides. II. Transannular effects in the solvolysis of cis- and trans-cycloöctene oxide</title>
<link>https://hdl.handle.net/1721.1/139706</link>
<description>The mechanism of the reaction of aryl halides with metallic amides. II. Transannular effects in the solvolysis of cis- and trans-cycloöctene oxide
Simmons, Howard E.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1954; Vita.; Bibliography: leaves 129-134.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139706</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of innervation-dependent expression of the acetylcholine receptor delta subunit gene</title>
<link>https://hdl.handle.net/1721.1/139705</link>
<description>Control of innervation-dependent expression of the acetylcholine receptor delta subunit gene
Simon, Alexander Michael.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1993; Includes bibliographical references (leaves 137-138).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139705</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An unstable Adams spectral sequence.</title>
<link>https://hdl.handle.net/1721.1/139704</link>
<description>An unstable Adams spectral sequence.
Rector, David Lee,
            1964-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; Bibliography: leaf 8.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139704</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regional peace-keeping and peace-making : the African model</title>
<link>https://hdl.handle.net/1721.1/139703</link>
<description>Regional peace-keeping and peace-making : the African model
Jonah, James O. C.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1967; Two unnumbered pages inserted; lacking l. 491. Vita.; Bibliography: leaves 618-638.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139703</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task identification and machine design for construction automation</title>
<link>https://hdl.handle.net/1721.1/139702</link>
<description>Task identification and machine design for construction automation
Demsetz, Laura A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1989; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139702</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photo- and collision-induced processes in small molecules</title>
<link>https://hdl.handle.net/1721.1/139701</link>
<description>Photo- and collision-induced processes in small molecules
Van Zoeren, Carol M.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1988; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139701</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical studies of heavy fermion materials</title>
<link>https://hdl.handle.net/1721.1/139700</link>
<description>Theoretical studies of heavy fermion materials
Millis, Andrew John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1986; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139700</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electronic transport phenomena in polycrystalline ZnO : a study in grain boundary chemistry and electronic structure</title>
<link>https://hdl.handle.net/1721.1/139699</link>
<description>Electronic transport phenomena in polycrystalline ZnO : a study in grain boundary chemistry and electronic structure
Sukkar, Mary Helena.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1986; Bibliography: leaves 358-378.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139699</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and evaluation of glucose-responsive biomaterials as self-regulated insulin delivery systems</title>
<link>https://hdl.handle.net/1721.1/139696</link>
<description>Development and evaluation of glucose-responsive biomaterials as self-regulated insulin delivery systems
Volpatti, Lisa Rae.
Motivation: Diabetes mellitus is a disease characterized by poor glycemic control, which often leads to severe complications including cardiovascular disease and kidney failure. Many diabetic patients must continually monitor their blood sugar and self-administer multiple daily doses of exogenous insulin to combat hyperglycemia. To reduce this patient burden, limit the occurrence of hypoglycemic events, and better mimic native insulin activity, therapies that can self-regulate insulin delivery are an attractive option. This work begins to address current limitations of such glucose-responsive insulin delivery systems by developing novel biomaterial-based formulations. Results: This thesis presents three types of glucose-responsive insulin delivery systems developed during my PhD. Each system employs the enzyme glucose oxidase as a glucose sensor, which converts glucose to gluconic acid and reduces the pH of the microenvironment when glucose levels are high. This change in pH acts as a trigger to release insulin on demand. The first system uses the pH-responsive polymer acetalated dextran to formulate nanoparticles that physically encapsulate both insulin and glucose oxidase. The particles rapidly degrade in the presence of acid, making this system a fast-acting therapeutic that reduces blood sugar within an hour of administration in diabetic mice. The second system comprises alginate microgels that encapsulate nanoparticles to create a depot of insulin for sustained glucose-responsive release in vivo for over 3 weeks with just 2 doses. The third system is based on the electrostatic complexation of insulin to positively charged polymers, such as polyethyleneimine. When the pH is reduced below the isoelectric point of insulin, the complex dissociates and releases insulin only in response to elevated levels of glucose. 
These complexes are afforded a prolonged functional lifetime by decreasing the rate of insulin release under normal glucose concentrations. The synthesis, formulation, in vitro characterization, and in vivo results in diabetic mouse models for each of these systems are discussed. Conclusion: The development and characterization of the glucose-responsive insulin delivery systems described here marks an important step in the advancement of self-regulated insulin delivery. Furthermore, these formulations may provide generalized strategies for the development of future stimuli-responsive drug delivery systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 123-131).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139696</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>SYSTEMS BIOLOGY APPROACHES TO DECIPHERING COMPLEX IMMUNE RESPONSES</title>
<link>https://hdl.handle.net/1721.1/139603</link>
<description>SYSTEMS BIOLOGY APPROACHES TO DECIPHERING COMPLEX IMMUNE RESPONSES
Cui, Ang
Many diseases, including cancer, autoimmunity, infections, and allergies, are associated with dysregulation of immune responses. Therapeutic strategies targeting the immune system have succeeded in treating a wide range of diseases. However, we lack the detailed understanding of the immune system needed to design more effective and targeted therapies for many immune-mediated diseases that yet have no cure.&#13;
 &#13;
To address this knowledge gap, this thesis presents a dictionary of immune responses. We leveraged recent advancements in high-throughput genomic technologies to comprehensively interrogate how major cell types involved in the immune system respond to immune stimuli. We created a dictionary of single-cell transcriptomic profiles of individual responses to 86 cytokines and 5 vaccine adjuvants in over 20 cell populations, representing one of the most comprehensive analyses of cellular responses to immune stimuli to date. Based on the dictionary, we created companion software for assessing cytokine activities, constructing cell-cell communication network models, and analyzing time-series data. Our dictionary reveals principles of immune responses, expands our knowledge of activation states in each immune cell type, and provides a framework to assess the roles that cytokines and cell-cell communication networks play in any immune response.&#13;
 &#13;
Based on a detailed understanding of immune responses provided by these systems approaches, we created vaccination strategies that significantly enhanced CD8+ T-cell responses in animal studies. Overall, this thesis combines high-throughput computational and experimental approaches to systematically characterize immune responses, enabling the design of more targeted and effective vaccines and immune-based therapies for thus far incurable diseases.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139603</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phosphonate biogeochemical cycling in the marine environment: from an ocean scale to a molecular scale</title>
<link>https://hdl.handle.net/1721.1/139601</link>
<description>Phosphonate biogeochemical cycling in the marine environment: from an ocean scale to a molecular scale
Acker, Marianne
The existence of a marine phosphorus (P) redox cycle was recently confirmed when phosphonates, organophosphorus compounds with P in the (III) oxidation state, were found in high molecular weight dissolved organic matter. Although some features of the P redox cycle have come to light since the discovery of phosphonates, many aspects of phosphonate production, cycling and fate remain unknown. To address these gaps in our understanding, we studied phosphonate cycling in the Eastern Mediterranean Sea, a chronically P-limited basin, using 33P and enzymatic assays. We showed that phosphonate production was low but consumption was high, suggesting that phosphonate production and consumption may be spatially or temporally decoupled. We also explored phosphonate production in the model marine cyanobacterium Prochlorococcus SB. Using 31P NMR, we found Prochlorococcus SB allocates ~50% of its cellular P to phosphonates. Allocation of P to phosphonates was conserved under P-limitation, and further investigation revealed phosphonates were associated with proteins. The discovery of phosphonoproteins in Prochlorococcus SB opens new perspectives on the biochemical function of phosphonates and their role in P-cycling. Finally, we developed a new P-targeted method to characterize marine organophosphorus compounds using liquid chromatography coupled to electrospray ionization and inductively coupled plasma mass spectrometry.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139601</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction and Testing of a Portable Time Projection Chamber for Fast Neutron Detection</title>
<link>https://hdl.handle.net/1721.1/139594</link>
<description>Construction and Testing of a Portable Time Projection Chamber for Fast Neutron Detection
Koch, William L.
The need to closely monitor the flow of nuclear fuel through the entire fuel cycle is an increasingly important component of global security.  Closely connected to this monitoring is the need for improved radiation detectors.  Gamma ray detectors have seen a boom in development over the last half century, but neutron detectors have proven more resistant to improvement.  When monitoring for the potential proliferation of irradiated nuclear fuel, the fast neutron signature from the spontaneous fission of Plutonium-240 offers a window of opportunity for interdicting illicit nuclear material movement.  Directly detecting fast neutrons, as opposed to using reaction-based detectors that rely on signal moderation, retains directional information that can offset the low intrinsic efficiency of fast neutron detection.  No portable, directional fast neutron detector has been built to date.&#13;
&#13;
This work presents an investigation to build a one-person portable Time Projection Chamber (TPC) for the directional detection of a source of spontaneous fission neutrons, capitalizing on relative motion between source and detector to build algorithms for rapidly locating the neutron source.  A triple mesh avalanche setup is studied to understand the limitations on achieving higher single-electron gain.  In addition to boosting the gain, an imaging system that uses light amplification by means of a micro-channel plate is also investigated, with a new approach to the data acquisition algorithms.&#13;
&#13;
New algorithms for data analysis were coupled with new techniques for event-by-event data handling.  These were tested using data collected with an AmBe fast neutron source and compared to simulated data.  Using the measured fast neutron background and estimates of the measurement uncertainties from stationary data runs, simulations involving relative motion between source and detector show promising results for this technology.&#13;
&#13;
With a one-person portable detector, multiple people can carry detectors that are wirelessly linked to each other, building a combined radiation field map and reducing the time to locate a source through a shared probabilistic model.  A portable, directional fast neutron detector will aid International Atomic Energy Agency (IAEA) inspections, portal security monitoring, emergency response teams, and military search operations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139594</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wet Adhesion and Bioadhesive Technology</title>
<link>https://hdl.handle.net/1721.1/139590</link>
<description>Wet Adhesion and Bioadhesive Technology
Yuk, Hyunwoo
Along with the rise of complex and ever more capable technologies, our body is interacting with a rapidly growing number of man-made devices and machines, ranging from biomedical devices for surgical repair and disease treatment to implantable electronics for monitoring and augmentation of bodily functions. Despite recent advances, the interfacing between man-made devices and the human body – their interactions and communications – is still dominated by relatively primitive, short-term, low-efficacy, and incompatible strategies. Owing to their close similarity to tissues in mechanical, chemical, and biological properties, hydrogels – polymer networks infiltrated with a large amount of water – have emerged as an ideal candidate to interface these two dissimilar realms. However, conventional hydrogels suffer from various limitations as an effective interface. In particular, one of the central challenges in the development and practical translation of hydrogel interfaces is robust, reliable, and functional integration with wet, dynamic, and living biological tissues. This dissertation aims to provide a comprehensive set of scientific and technological advances to address the challenges in wet adhesion and bioadhesive technology.&#13;
&#13;
The first part of this dissertation is focused on the mechanics of wet adhesion. In particular, we systematically discuss the mechanical design principles for achieving fast, tough adhesion on wet surfaces covered by foulants. First, we propose a design principle for tough wet adhesion based on a synergistic combination of strong interfacial linkages and mechanical dissipation in bulk tough hydrogels. Second, we propose a design principle for rapid wet adhesion based on a dry-crosslinking mechanism that quickly removes the water on wet surfaces. Third, we propose a design principle for foulant-resistant wet adhesion based on a repel-crosslinking mechanism that cleans the foulants off wet surfaces. In the second part of this dissertation, we introduce a set of hydrogel interface technologies uniquely enabled by the wet adhesion of Part I. First, we introduce novel bioadhesive technologies in the form of a double-sided tape (DST) and a barnacle-inspired paste to achieve unprecedentedly rapid, robust, on-demand detachable, and blood-resistant adhesion on wet and injured tissues. Second, we introduce the development and fabrication of high-performance conducting polymer hydrogels and their robust wet adhesion on a wide range of bioelectronic devices. Third, we explore a synergistic combination of bioadhesive and bioelectronic technologies for stable and functional interfacing between biological tissue and bioelectronic devices based on an electrical bioadhesive interface. In the last part of this dissertation, we summarize and discuss the remaining challenges and opportunities in wet adhesion and bioadhesive technology for seamless integration and communication between the human body and artificial devices and machines.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139590</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetohydrodynamic Heat Transfer for Fusion Energy</title>
<link>https://hdl.handle.net/1721.1/139589</link>
<description>Magnetohydrodynamic Heat Transfer for Fusion Energy
Sorensen, Caroline
This thesis uses measurements from a new experimental flow loop and new analysis of existing literature data to provide a heat transfer correlation for turbulent magnetohydrodynamic flows. Fluids used as breeder/coolant in fusion power plant blankets must provide adequate heat transfer of up to 10 MW/m2 within strong magnetic fields. Because the two most popular choices for this function (liquid metals and molten salts) are electrically conductive, magnetohydrodynamic effects can alter the fluid flow fields, and therefore heat transfer, under these conditions. The primary mechanism by which these magnetohydrodynamic effects change the flow field is turbulence damping, in which sufficiently high magnetic fields can fully relaminarize a flow and decrease heat transfer.&#13;
&#13;
A flow loop was designed, constructed, and operated to measure heat transfer in a circular pipe with electrically conducting walls passing through a transverse magnetic field, in order to study the impact of the magnetic field on modestly conducting fluids. Aqueous potassium hydroxide is used as a high Prandtl number simulant fluid for molten salts and is flowed through a heated test section with 15 diameters of flow development within a uniform transverse magnetic field of up to 1.7 T (provided by a copper dipole) and at Reynolds numbers up to 15,000. Changes in wall temperature are used to measure magnetohydrodynamic Nusselt numbers. Due to the moderate electrical conductivity of the working fluid (~100 S/m), only Hartmann numbers high enough for partial flow relaminarization can be achieved in the experiment. The flow loop allows for qualitative characterization of behavior in regions of steep magnetic field gradients and transitional flows.&#13;
&#13;
This thesis uses existing liquid metal literature data to develop a new form of heat transfer correlation relating the heat transfer coefficient to the Reynolds and Hartmann numbers of the flow over the full range of turbulent MHD flows (i.e., from non-MHD to fully relaminarized) for low Prandtl number fluids. Using the newly developed correlational form, the experimental high Prandtl number data are extrapolated to propose a new heat transfer correlation for molten salts at any Hartmann number and Reynolds numbers up to 15,000. While the achievable experimental conditions only produce relative drops in heat transfer coefficients of around 20%, the analysis quantitatively predicts Nusselt number drops of up to &gt;90% at relaminarization. Given the constrained design window, accounting for these potentially large magnetohydrodynamic effects will be necessary in engineering successful fusion blankets.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139589</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding flow-induced vibration via a physics-constrained, data-driven approach</title>
<link>https://hdl.handle.net/1721.1/139588</link>
<description>Understanding flow-induced vibration via a physics-constrained, data-driven approach
Ma, Leixin
Vortex-induced vibration (VIV) of long flexible cylinders involves a large number of physical variables, such as Strouhal number, Reynolds number, and damping. Due to the nonlinearity and high dimensionality of the problem, current VIV prediction models have large error bounds and require selection of input parameters without knowing which ones are most important. In this thesis, a prior-knowledge-based, trend-constrained machine learning technique is developed to identify the physical parameters that are most useful in predicting the VIV of flexible cylinders, such as drilling risers and umbilical cables. The results show that four to six input parameters are usually sufficient to provide acceptable predictions without overfitting the data.&#13;
&#13;
In the course of the research, three new parameters were found to be particularly useful. The first is the dimensionless damping parameter, c*, which is derived from the requirement that the vibration power flowing into the structure from the flow equals the power lost to damping. The second is the mode dominance factor, &#120639;, which reveals the extent to which the observed vibration is dominated by a single mode. The third, denoted α, expresses the dominance of travelling or standing waves in the response. In a neural network model, these new features are put in competition with fourteen previously known VIV parameters to determine which of all these candidate features are most important.&#13;
&#13;
For cross-flow amplitude prediction, a trend-constrained neural network balances the usual goal of minimizing the model prediction error with the error associated with a departure from experimentally observed dependence on Reynolds number and damping. The resulting model is able to reveal important additional insights, including the role of mode number, mode dominance and travelling waves in the regulation of VIV response amplitude. Compared to unconstrained machine learning models, the trend-constrained models are more consistent with extensive experimental observations.&#13;
&#13;
For the prediction of higher harmonics, prior knowledge of the influence of bending stiffness and damping is embedded in the machine learning model. This model reveals the importance of mode dominance and travelling waves in the regulation of the higher harmonic response. The previously postulated importance of the damping parameter and bending stiffness is verified.&#13;
&#13;
The overall results point toward promising directions for improving the existing engineering design software that is currently used to predict the fatigue life of structures exposed to VIV.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139588</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chalcogenide Glass on Layered van der Waals Crystals for Integrated Photonic Devices</title>
<link>https://hdl.handle.net/1721.1/139580</link>
<description>Chalcogenide Glass on Layered van der Waals Crystals for Integrated Photonic Devices
Deckoff-Jones, Skylar
Layered van der Waals (vdW) materials have demonstrated huge potential for photonic devices with their varied and tunable optical properties. Because of their van der Waals bonding, they can be integrated into planar photonic devices on virtually any substrate, thereby introducing desirable material properties to existing integrated photonic platforms. Previously, their utilization has been limited to transfer onto prefabricated photonic structures, which limits device design and often introduces undesirable stress or fracture. Recently, the integration of vdW materials with chalcogenide glasses (ChG) has been developed for near- and mid-infrared integrated photonic applications. This ChG-on-vdW platform enables new device architectures that can better utilize vdW materials’ strong anisotropy, and it accelerates prototyping.&#13;
&#13;
In this work, we leverage the ChG-on-vdW material platform to demonstrate integrated photonic devices with enhanced performance, while also gaining further insight into the vdW materials’ properties. First, we show that ChG processing does not damage vdW materials, and can even serve as a passivation layer for unstable vdW materials such as black phosphorus. We then fabricate and characterize black phosphorus- and tellurene-based mid-infrared photodetectors, which not only achieve high sensitivity, but also give insight into the critical role of vdW material anisotropy in photodetection. Next, we utilize the strong second-order nonlinearity in indium selenide and tellurene to investigate vdW semiconductors’ linear electro-optic Pockels effect: an essential, yet elusive, effect for realizing high-performance waveguide-integrated optical modulators. Finally, we show how gallium sulfide’s use in hybrid waveguides can enhance the waveguide optical nonlinearity, which we use to demonstrate all-optical modulation. Cumulatively, this work demonstrates the power of the ChG-on-vdW platform and shows the promise of using vdW materials to engineer future generations of integrated photonic devices.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139580</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress in Nanosystems for Computing and Health</title>
<link>https://hdl.handle.net/1721.1/139578</link>
<description>Progress in Nanosystems for Computing and Health
Bishop, Mindy D.
Ubiquitous capture, digitization, and integration of health data remains a grand challenge, promising to transform how we understand, diagnose, and treat disease. To realize its full potential, electronics must capture and transform increasing volumes of diverse data into highly processed information. Unfortunately, systems finely integrating thousands of sensors with compute and memory are currently infeasible. Since the materials and technologies leveraged to fabricate conventional computing, memory, and sensing are often distinct, systems today rely on separately packaged chips for each of these functionalities, severely limiting data bandwidth between them. To enable radically new electronic systems for capturing and integrating ubiquitous data, new system architectures are required.&#13;
&#13;
In Chapter One, I present the benefits of emerging nanomaterials (in particular, carbon nanotubes, CNTs) for computing. I show how leveraging new nanomaterials, their novel properties, and new fabrication capabilities enables radically new system architectures and applications. For field-effect transistors fabricated with carbon nanotubes (CNFETs), these benefits lead to the concept of monolithic 3D integration – the ability to integrate previously heterogeneous technologies within a single chip, achieving functionalities that exceed the sum of their parts.&#13;
&#13;
Moving from theoretical promise to real-world implementation requires a CNFET technology that is compatible with commercial silicon-based semiconductor manufacturing without sacrificing performance benefits versus conventional silicon-based technologies. In Chapter Two, I outline the work that overcame this enduring challenge, achieving the first CNFET fabrication process in a commercial silicon-based semiconductor manufacturing facility and foundry.&#13;
&#13;
While the first two chapters are dedicated to continued progress in the field of computing for medicine, the third chapter expands the lens of traditional computing to the field of medicine. In Chapter Three, I show how the classical engineering concept of up-sampling (increasing temporal and spatial sampling frequency) can be applied to the problem of clinical blood analysis, experimentally demonstrating the application of this new methodology, called Distributed Single Point Blood Analysis (D-SPAYSS), to the localization of blood clotting pathologies in vivo.&#13;
&#13;
Overall, this thesis presents a vision for a new generation of electronic systems for computing in medicine, presenting conceptual and practical advances and laying the foundations for transforming these visions from concept into reality.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139578</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Transparent Models</title>
<link>https://hdl.handle.net/1721.1/139573</link>
<description>Building Transparent Models
Lee, Guang-He
Transparency has become a key desideratum of machine learning. Properties such as interpretability or robustness are indispensable when model predictions are fed into mission-critical applications or those dealing with sensitive or controversial topics (e.g., social, legal, financial, medical, or security tasks). While the desired notion of transparency can vary widely across different scenarios, modern predictors (like deep neural networks) often lack any semblance of this concept, primarily due to their inherent complexity. In this thesis, we focus on a set of formal properties of transparency and design a series of algorithms to build models with these specified properties. In particular, these properties include:&#13;
&#13;
(i) the model class (of oblique decision trees), effectively represented and trained via a new family of neural models, &#13;
(ii) local model classes (e.g., locally linear models), induced from and estimated jointly with a black-box predictor, possibly over structured objects, and &#13;
(iii) local certificates of robustness, derived for ensembles of any black-box predictors in continuous or discrete spaces. &#13;
&#13;
The contributions of this thesis are mainly methodological and theoretical. We also emphasize scalability in large-scale settings. Compared to a human-centric approach to interpretability, our methods are particularly suited for scenarios that require factual verification, or for cases in which explanations are difficult for humans to judge subjectively (e.g., for superhuman models).
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139573</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Spectral Approach to Noninvasive ICP Estimation: From Modeling to Clinical and Experimental Validation</title>
<link>https://hdl.handle.net/1721.1/139572</link>
<description>A Spectral Approach to Noninvasive ICP Estimation: From Modeling to Clinical and Experimental Validation
Jaishankar, Rohan
Intracranial pressure (ICP) is a cranial vital sign used to monitor patients with head injuries and to guide treatment decisions. Clinical ICP measurement is highly invasive and is hence limited to critically ill patients. We present a spectral approach to model-based noninvasive ICP estimation, relying on a second-order circuit model of cerebrovascular physiology. We estimate ICP in the frequency domain from arterial blood pressure and cerebral blood flow velocity waveforms. When validating our algorithm on two clinical patient cohorts comprising eight and a half hours of data, with measured ICP ranging from 1.3 mmHg to 24.8 mmHg, we achieved an accuracy and precision of 0.1 mmHg and 5.1 mmHg, respectively. Additionally, we designed an experimental porcine model to titrate ICP in a predetermined manner over a wide range. This experimental model resulted in a rich dataset comprising 35 hours of data from eight pigs, with measured ICP ranging from 2.1 mmHg to 78.2 mmHg. We obtained an accuracy of 1.6 mmHg and a precision of 5.2 mmHg in estimating ICP on the porcine data. In evaluating our estimates' ability to correctly classify elevated ICP (defined as ICP&gt;22 mmHg), we obtained an area under the receiver operating characteristic curve of 0.94. Additionally, the algorithm achieved a sensitivity of 0.88 and a specificity of 0.87 in this binary classification task at a noninvasive ICP threshold of 22 mmHg. Clinically, missing an episode of elevated ICP or under-treating it can have potentially fatal consequences, and we demonstrated that, with appropriate margins on the classification thresholds, the probabilities of these events are less than 1% using our noninvasive ICP estimates. Finally, we obtained a correlation coefficient of 0.89 between our estimates and the measured ICP, indicating that our estimates capture the underlying variations in measured ICP well.
Our algorithm's performance is well within the clinically acceptable range and is comparable or superior to past attempts at noninvasive ICP estimation reported in the literature. We believe that the work presented here takes a significant step towards realizing the clinical dream of implementing a real-time, noninvasive ICP measurement modality in a calibration-free and patient-specific manner at the bedside.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139572</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can Batch Reverse Osmosis Make Desalination More Affordable and Sustainable?</title>
<link>https://hdl.handle.net/1721.1/139563</link>
<description>Can Batch Reverse Osmosis Make Desalination More Affordable and Sustainable?
Wei, Quantum J.
Reverse osmosis (RO) desalination can help to ensure secure water resources, but the process remains costly. From 2007 to 2017, global desalination capacity nearly doubled, from 47 to 92 million m3/day, with RO accounting for two thirds of installed capacity. Despite this growth, the total volume of treated water accounts for less than half a percent of global freshwater consumption. To be part of a sustainable water supply, RO must be made cheaper. RO energy consumption can never fall below the thermodynamic least work of separation, which is 1 kWh/m3 for 50% recovery of seawater. Practically speaking, RO energy consumption will not reach the thermodynamic limit but may be further reduced through improvements in system design.&#13;
&#13;
Batch RO is the most energy-efficient RO process. It saves energy because the feed pressure varies over time with the osmotic pressure. In this thesis we further develop batch RO technology to identify its benefits and limitations. We demonstrated the first batch RO system using a flexible bladder and validated theoretical models of energy consumption and water production. Next, we investigated practical losses associated with batch operation. This work shows that current batch RO designs are not attractive due to the combined inefficiencies of salt retention and water loss. Incomplete flushing of brine from cycle to cycle leads to an elevated feed salinity relative to the feed intake, boosting energy consumption by about 10%. De-pressurization during the reset phases of the batch RO cycle leads to water loss via osmosis. This water loss is significant (∼10%) under seawater conditions. We introduce an improved batch RO design which rapidly flushes the system to reduce downtime and water loss. Unfortunately, there does not appear to be a practical way to avoid the salt retention penalty. Batch RO thus has more economic value in increasing plant productivity than in reducing energy consumption. We conclude that batch RO is a promising technology and identify future directions for research and commercialization.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139563</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using fusion-product spectroscopy to diagnose inertial confinement fusion implosions and study stopping power on OMEGA, the NIF, and Z</title>
<link>https://hdl.handle.net/1721.1/139560</link>
<description>Using fusion-product spectroscopy to diagnose inertial confinement fusion implosions and study stopping power on OMEGA, the NIF, and Z
Lahmann, Brandon
This thesis summarizes three distinct but related projects that use the spectroscopy of fusion products to diagnose areal densities in Inertial Confinement Fusion (ICF) implosions or study stopping power to better understand the areal density requirements of said implosions.&#13;
&#13;
The first project, at the Z Facility, concerns spectroscopy of the primary DD neutron spectrum from Magnetized Liner Inertial Fusion (MagLIF) implosions, which enables diagnosing yields and liner areal densities. Both of these quantities are extremely important for assessing implosion performance at Z. Traditional nToF spectrometers face additional challenges at the Z facility due to large scattering sources from the machine and long neutron burn widths. For this reason, a CR-39 based neutron-recoil spectrometer has been developed for measuring the DD spectrum at Z. A proof-of-principle design was fielded and the resulting data are presented. An improved shielded design for accurately measuring the liner areal density is also developed and presented.&#13;
&#13;
In the second project, at the National Ignition Facility (NIF), spectroscopy of secondary nuclear reactions from surrogate deuterium-filled implosions is used to probe the magnitude and asymmetry of the hot-spot areal density. Secondary DT neutrons are routinely measured on the NIF using four neutron time-of-flight (nToF) spectrometers positioned along different lines of sight. These measurements infer convergence ratios that differ from those inferred by x-ray imaging techniques. This discrepancy is explained by each method having different sensitivities to profiles and asymmetries. Additionally, the widths of the secondary DT neutron spectra are sensitive to mode-2 asymmetries and further confirm that these asymmetries are present in the hot spot of NIF implosions.&#13;
&#13;
Finally, in the third project, at the OMEGA laser facility, a unique experimental platform for accurately characterizing and measuring the stopping power of Warm Dense Matter (WDM) plasmas has been developed. Understanding stopping power in this regime is critical for probing areal densities in the high-density fuel of ICF implosions. The platform uses X-ray Thomson Scattering (XRTS) to characterize the plasma’s temperature and ionization state. Proton spectroscopy is used to accurately measure the energy loss through the WDM subject. Results from several experiments indicate that WDM plasmas consistently have higher stopping power than cold matter when sufficiently heated. These results show good agreement with stopping power models that account for partial ionization.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139560</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hide and Seek: Remote Sensing and Strategic Stability</title>
<link>https://hdl.handle.net/1721.1/139559</link>
<description>Hide and Seek: Remote Sensing and Strategic Stability
MacDonald, Thomas D.
The competition between the survivability of nuclear arsenals and the counterforce weapons that hold them at risk has been ongoing since the dawn of the nuclear age. A spate of technological and political developments has called the endurance of stability into question once again. This thesis addresses one particular aspect of the competition between technologies of counterforce and strategies of survivability: the use of space-based radar (SBR) systems to track ground-based mobile missiles carried by TELs (transporter-erector-launcher vehicles).&#13;
&#13;
The work herein describes the development of a set of interlocking models to determine under what conditions space-based radar systems can be used to track TELs, the costs of doing so, and the vulnerability of those systems to countermeasures. This work contributes to an ongoing debate in the security studies literature on the impact of changing technology on strategic stability. I offer policy recommendations on future deployments of SBR systems, evaluate the capabilities of current remote-sensing systems, and provide a framework for evaluating emerging technologies.&#13;
&#13;
In developing these models, the capabilities of radar modalities are reviewed, the tasks that are required to track moving targets are identified, and a scheme for SBR tracking of TELs is developed. A set of satellite constellations are designed and simulated, and their ability to detect TELs is determined by Monte Carlo simulation. A model of the seeker's uncertainty is developed, and geometric tiling is used to model attack planning. Drawing on real-world nuclear forces, model counterforce attacks against a peer and sub-peer adversary are used to determine the number of satellites needed to enable tracking.&#13;
&#13;
SBR systems which are capable of tracking TELs are found to require 50 or more satellites, depending on the conditions, with a total system price tag on the order of a few hundred billion dollars. Countermeasures to SBR tracking are identified and evaluated. SBR systems are found to be vulnerable to a number of countermeasures which are low cost or already at hand, making it unlikely that current or future deployments of SBR systems will meaningfully disrupt strategic stability by undermining the survivability of ground-based mobile missiles.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139559</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Fibers: Materials, Processing, and Information</title>
<link>https://hdl.handle.net/1721.1/139549</link>
<description>Digital Fibers: Materials, Processing, and Information
Loke Zi Jie, Gabriel
Ubiquitous computation has influenced a broad array of domains, from manufacturing to drug discovery and from communications to machine learning. While the capabilities of computing platforms have progressed dramatically, one can argue that materials have not been tailored or designed to capture the full spectrum of digital capabilities now available.&#13;
&#13;
In this thesis, I seek to synergize digital tools with fiber materials towards constructing devices of new form factors and fibers with digital features. First, digital additive manufacturing of devices has been limited by the lack of materials suitable for printing. Overcoming this limitation, I have harnessed multimaterial fibers as the ’ink’ in 3D printers to print objects not only with digitally designed shapes, but also with user-defined device functions. A new print approach, termed fiber surface heating, is introduced in which the print nozzle is modified so that these fibers can be heated and fused to each other during printing, while ensuring that their device functions are well retained when forming the 3D structure. This approach was validated by printing fibers of different functions, including light detection, light emission, and energy storage. Several 3D objects of tailored shapes and spatially defined device functions were showcased. This print technique is also capable of printing custom porous scaffolds from engineered porous fibers, enabling a means for accelerated nerve regeneration for patients with nerve injuries. Finally, I describe the fabrication of fibers with digital capabilities, including memory storage and analog-to-digital sensing. These polymeric fibers contain an engineered material setup that allows for the connection of multiple addressable discrete digital microchips along their length, enabling independent operation of different functions within a single fiber. This fiber, when woven into a shirt, senses the body temperature, stores its values, and, through a trained neural network stored within the fiber, provides inference on the wearer’s activity. This approach sets a foundation for future applications in fabric-based computing and on-body machine learning inference.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139549</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytic Solutions to the Laplace, Poisson, and Biharmonic Equations with Internal Boundaries: Theory and Application to Microfluidic Dynamics</title>
<link>https://hdl.handle.net/1721.1/139539</link>
<description>Analytic Solutions to the Laplace, Poisson, and Biharmonic Equations with Internal Boundaries: Theory and Application to Microfluidic Dynamics
Zhang, Chengzhao “Richard”
This dissertation focuses on developing analytical methods for elliptic partial differential equations with conditions imposed on internal boundaries. Internal boundaries are formed where materials with different properties meet to form interfaces. These interfaces arise in a variety of physical and engineering contexts, such as in the evaporation of water droplets, dielectric double-spheres, and soft-material Janus drops. The solutions to problems with interfaces are often singular where the interfaces meet the boundaries or where two interfaces meet. This causes difficulties when attempting to solve these problems solely with numerical approaches. In contrast, analytical approaches, while limited to relatively simple geometries, lend significant insight into the nature of the singularities, with full resolution in some cases. This knowledge can potentially be used to improve the quality of numerical solutions in more general situations.&#13;
&#13;
We will focus here on four important elliptic PDE problems: Laplace, Poisson, biharmonic, and Stokes flow. First, we introduce our main analytic result, known as the Parity Split Method (PSM), developed in the context of the Laplace and Poisson equations. The method is then applied to the problem of a thermally driven evaporative liquid bridge in a long V-shaped channel. The problem involves solving a coupled temperature-concentration system of Laplace equations. Analytic solutions to the concentration equation based on complex analysis are also developed along the way. Finally, we extend the PSM to the biharmonic equation and address several numerical issues regarding solving for the fluid flow around a soft-material Janus drop.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139539</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry, topology and mechanics of twisted elastic fibers</title>
<link>https://hdl.handle.net/1721.1/139535</link>
<description>Geometry, topology and mechanics of twisted elastic fibers
Patil, Vishal P.
Elastic rods and fibers are ubiquitous in physical and biological systems across a range of length scales, from microtubules to construction beams. In this thesis, we explore the impact of twist on the failure and stability of elastic rods by studying fragmentation and knot dynamics. We begin with a famous phenomenon in elastic rod fragmentation observed by Feynman, who discovered that dry spaghetti typically breaks into three or more pieces when exposed to large bending stresses. Combining theory, experiments and analytic scaling arguments, we demonstrate that twist may be used to achieve binary fracture of brittle elastic rods. Additionally, we show that quenching allows for robust control of the fragmentation cascade. In the second half of this thesis, we use twist to investigate the stability of softer fibers in knotted configurations. We identify twist-based topological counting rules that explain the relative stability of bend knots, which are used to tie two ropes together. These counting rules reveal an underlying stability phase diagram which agrees with numerical simulations and experimental testing of several climbing and sailing knots. Combining the notions of structural and topological stability, we then investigate the energy discharge dynamics of a knotted elastic fiber after it is broken. We show that this class of topological batteries contains special topologically resonant states for which energy release is superslow. Finally, we apply our topological model to surgical knots. Through numerical simulation, we show that topology can be used to identify mechanically stable and balanced suture knots.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139535</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction of Deligne Categories through ultrafilters and its applications</title>
<link>https://hdl.handle.net/1721.1/139534</link>
<description>Construction of Deligne Categories through ultrafilters and its applications
Kalinov, Daniil
The present thesis is concerned with the study of Deligne categories and their application to various representation-theoretic problems. The lens used to view Deligne categories in this study is that of ultrafilters and ultraproducts. As will be shown, this approach turns out to be a very powerful one, especially for solving the representation-theoretic problems posed by P. Etingof in his papers on "Representation theory in complex rank" ([13, 14]). The results are presented in two parts. In the first (Chapters 2 and 3), an introduction to the theory of ultrafilters is given, and the construction of Deligne categories through ultrafilters is presented. This also allows us to understand how one can make sense of Deligne categories as a limit in rank and characteristic. The latter part of the text describes two applications of this construction to actual representation-theoretic problems. In Chapter 4 the full classification of simple commutative, associative and Lie algebras in Rep(&#119878;&#120584;) for &#120584; ∉ Z≥0 is stated and proven. The second application, the construction of deformed double current algebras as a space of endomorphisms of a certain ind-object of Rep(&#119878;&#120584;), is contained in Chapter 5. There it is also proven that this construction agrees with Guay’s deformed double current algebra of type &#119860; if the rank &#119903; ≥ 4 (Guay’s algebra is presently only defined for such ranks), and a presentation by generators and relations is given for the case of &#119903; = 1.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139534</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Crack-Resistant Polycrystalline Zirconia Shape-Memory Ceramics with Low Hysteresis</title>
<link>https://hdl.handle.net/1721.1/139533</link>
<description>Towards Crack-Resistant Polycrystalline Zirconia Shape-Memory Ceramics with Low Hysteresis
Pang, Edward L.
A new class of shape memory materials has been proposed based on zirconia-based ceramics, which offer higher work output, transformation temperatures, and possibly environmental resistance compared to metallic shape-memory alloys. Despite these potential benefits, two shortcomings have prevented these materials from reaching their full potential: i) polycrystals catastrophically crack during the martensitic transformation, and ii) the transformation exhibits a large hysteresis of over 150 K. It is yet unknown what drives these phenomena and how they can be avoided. In this thesis, these phenomena are systematically investigated in a series of ceria-doped zirconia compositions. Using in situ X-ray diffraction, transformation mismatch strains are characterized and subsequently correlated with observations of cracking during cycling experiments and measurements of thermal hysteresis by calorimetry. These findings demonstrate for the first time the importance of interface compatibility on transformation-induced cracking and reveal that special compositions with optimized compatibility can largely avoid transformation cracking even in polycrystal form. Specimens with reduced cracking, however, show transformation suppression, and a comparative study between polycrystalline pellets and oligocrystalline powders has elucidated the effect of grain constraint on transformation behavior. In addition, a mechanistic understanding of thermal hysteresis in these materials is established using martensite nucleation theory, which reveals that the same controlling factors as in shape-memory alloys, namely interface compatibility and thermal friction, are also operative in zirconia shape-memory ceramics, although here thermal friction has been attributed to Peierls-barrier-controlled interfacial glide. Using this newfound understanding, a set of design criteria is outlined to obtain reduced cracking, complete transformation, and low hysteresis in a polycrystalline specimen.
Using computational materials design techniques, novel zirconia compositions are designed and experimentally tested. Although not all criteria were met, a pseudo-ternary prototype demonstrated improved cracking resistance with a reduced hysteresis of only 75 K. Together, these studies aid our understanding of martensitic transformation in polycrystalline zirconia shape-memory ceramics and pave the way towards tunable and repeatable shape memory and superelastic behavior in these emerging materials.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139533</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear Computations under Uncertainty: New methods to infer and propagate nuclear data uncertainty across Monte Carlo simulations</title>
<link>https://hdl.handle.net/1721.1/139530</link>
<description>Nuclear Computations under Uncertainty: New methods to infer and propagate nuclear data uncertainty across Monte Carlo simulations
Ducru, Pablo
This thesis introduces new methods to efficiently infer and propagate nuclear data uncertainty across Monte Carlo simulations of nuclear technologies.  The main contributions come in two areas: 1. novel statistical methods and machine learning algorithms (Embedded Monte Carlo); 2. new mathematical parametrizations of the quantum physics models of nuclear interactions and their uncertainties (Stochastic Windowed Multipole Cross Sections).&#13;
&#13;
1. Embedded Monte Carlo infers the uncertainty in nuclear code inputs (reactor geometry, nuclear data, etc.) from samples of noisy outputs (e.g. experimental observations), and in turn propagates this uncertainty back to the simulation outputs (reactor power, reaction rates, flux, multiplication factor, etc.), without ever converging any single Monte Carlo reactor simulation. Such embedding of the uncertainty within the Nested Monte Carlo computations vastly outperforms previous methods (10–100 times fewer runs), and is achieved by approximating the input parameters’ Bayesian posterior via variational inference and reconstructing the outputs’ distribution via moment estimators. We validate the Embedded Monte Carlo method on a new analytic benchmark for neutron slowdown that we derived.&#13;
&#13;
2. Stochastic Windowed Multipole Cross Sections is an alternative way to parametrize nuclear interactions and their uncertainties (equivalent to R-matrix theory), whereby one can sample on-the-fly uncertain nuclear cross sections and analytically compute their thermal Doppler broadening. This drastically reduces the memory footprint of nuclear data (at least 1,000-fold), without incurring additional computational costs.&#13;
&#13;
These contributions are documented in nine peer-reviewed journal articles (eight published and one under review) and seven conference articles (six published and one under review), constituting the core of this thesis.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139530</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Instrumentation for Dynamic Nuclear Polarization &amp; Magic-Angle Spinning NMR</title>
<link>https://hdl.handle.net/1721.1/139529</link>
<description>Advances in Instrumentation for Dynamic Nuclear Polarization &amp; Magic-Angle Spinning NMR
Banks, Daniel P.
Dynamic nuclear polarization (DNP) is an invaluable tool for increasing the sensitivity of magic angle spinning (MAS) nuclear magnetic resonance (NMR) experiments. Historically, one of the primary drawbacks of DNP has been limited spectral resolution. The resolution of DNP spectra can be substantially improved via data acquisition at higher magnetic fields as well as faster MAS and increased experiment dimensions. However, achieving the maximum possible sensitivity and resolution in MAS DNP experiments is significantly limited by the equipment that is currently available. In this thesis I discuss new designs and fabrication methods for constructing instrumentation for DNP and MAS NMR with an emphasis on designing and fabricating equipment to enable ultra-fast MAS DNP experiments. &#13;
&#13;
This thesis covers several topics including 3D printing stators for MAS experiments, the design of a balanced transmission line DNP probe for ¹⁷O experiments, the design of a helium recirculation system, and the fabrication of CVD diamond rotors for MAS DNP experiments. These projects are intended to increase the capabilities of MAS DNP equipment, leading to improved spectral sensitivity and resolution. The balanced transmission line probe design is compatible with a helium recirculation system and includes a new 1 mm stator design that should achieve MAS frequencies greater than 80 kHz at 100 K. At these spinning frequencies it will be possible to perform ¹H-detected DNP experiments that will not only provide access to an additional set of biological structural information, but also significantly improve the sensitivity of experiments over traditional ¹³C detection. The development of diamond MAS rotors is expected to increase the sensitivity and resolution of MAS DNP experiments even further via higher DNP enhancements and faster MAS.&#13;
&#13;
Additional studies are presented on the amyloidogenic peptide GNNQQNY, which is used as a model system for ¹⁷O bound water studies and amyloid polymorphism. The aforementioned equipment will be used to perform ¹H-detected HON experiments on GNNQQNY to directly probe the hydrogen bonds present in the system. These studies will serve as a framework for future multidimensional ¹⁷O studies on complex biological systems.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139529</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Excitons and Quantum Emitters in Two-Dimensional Materials</title>
<link>https://hdl.handle.net/1721.1/139528</link>
<description>Control of Excitons and Quantum Emitters in Two-Dimensional Materials
Moon, Hyowon
Layered two-dimensional materials have emerged as promising platforms for compact solid-state devices due to the ease of forming nanometer-thick heterostructures and integrating them into various photonic circuits. Recently, single-photon emitters have been identified in insulating hexagonal boron nitride (hBN) and in semiconducting transition metal dichalcogenides (TMDs). Despite the unique characteristics of these emitters, they suffer from a large inhomogeneous distribution because of their proximity to the surface and external environment, limiting practical applications and the identification of their origins. This thesis tackles the problem by applying external strain to control the emission energy of quantum emitters in hBN. Photophysical studies at cryogenic temperature show the coupling efficiency of different phonon modes to the quantum emitters in hBN, and a correlation between local strain and localized exciton energy in TMDs. In addition, a strain gradient applied by a nanoscale tip modulates the local band structure of a suspended monolayer TMD to funnel excitons in arbitrary directions. The active control of quantum emitters and excitons presented in this thesis opens up new possibilities for realizing large-scale quantum and excitonic devices.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139528</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic and Spintronic Properties of Rare-Earth Iron Garnets</title>
<link>https://hdl.handle.net/1721.1/139526</link>
<description>Magnetic and Spintronic Properties of Rare-Earth Iron Garnets
Rosenberg, Ethan
Rare earth iron garnets (REIG) can be grown as thin films with strain-induced perpendicular magnetic anisotropy (PMA), and their potential for spintronic device applications has been studied extensively in recent years. In particular, thulium iron garnet (TmIG) has excited great interest due to record-breaking spin-orbit torque-driven domain wall velocities of over 2 km/s and the presence of the Dzyaloshinskii-Moriya interaction, which stabilizes chiral Néel domain walls. In order to optimize REIG for these applications, it is useful to develop methods for tuning their magnetic and spintronic properties. In this work, we accomplish this by varying the RE site occupancy.&#13;
&#13;
We report the growth and characterization of fully-strained terbium iron garnet (TbIG) and europium iron garnet (EuIG) films with PMA, ranging in thickness from 10 to 80 nm. EuIG can be grown with PMA on (100) and (111) gadolinium gallium garnet (GGG) substrates, making it ideal for orientation-dependent studies. For instance, Pt/EuIG had similar (001) and (111) imaginary spin mixing conductances of 4.6–5.4×10¹² Ω⁻¹m⁻², in contrast to similar studies on the Pt/CFO system. The (111) imaginary spin mixing conductance of Pt/TbIG (4.6×10¹² Ω⁻¹m⁻²) was similar to that of the Pt/EuIG system, and both Pt/TbIG and Pt/EuIG had spin mixing conductances comparable to Pt/TmIG.&#13;
&#13;
The TbIG films had a low saturation magnetization (~30 emu/cc) at room temperature due to their easily accessible magnetic compensation point of 330 K, and anomalous Hall effect measurements of Pt/TbIG showed a sign change at the compensation point. Through a combination of x-ray absorption measurements and molecular field simulations, we propose a model to explain this observation involving point defects such as iron vacancies and terbium antisite defects.&#13;
&#13;
We also report the static and dynamic magnetic properties of yttrium-substituted thulium iron garnet (Y:TmIG) thin films on GGG as a function of Y concentration. We report the tunability of the magnetic anisotropy energy, with full control achieved over the type of anisotropy (from perpendicular, to isotropic, to an in-plane easy axis) on the same substrate. In addition, we report a nonmonotonic composition-dependent anisotropy term, which we ascribe to growth-induced anisotropy similar to what has been reported in garnet thin films grown by liquid-phase epitaxy. Ferromagnetic resonance shows linear variation of the damping and the g-factor across the composition range, consistent with prior theoretical work. Domain imaging reveals differences in reversal modes, remanent states, and domain sizes in YₓTm₃₋ₓ iron garnet thin films as a function of anisotropy.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139526</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tools for engineering multicellular systems through cell sorting and cell state detection</title>
<link>https://hdl.handle.net/1721.1/139525</link>
<description>Tools for engineering multicellular systems through cell sorting and cell state detection
Enghuus, Casper Nørskov
The human genome, the genetic blueprint that every cell in our body follows, encodes approximately 20,000 genes. Through complex regulation of these genes, each cell is able to play the role it needs within our body. Synthetic biology, an emerging field in biology, seeks to expand on this blueprint and create cells with novel functions. The aim of this thesis is to provide methods that expand our ability to engineer and control multicellular systems by detecting and rewriting the cell state.&#13;
&#13;
We first develop a method that enables the creation of a synthetic cell state to control morphogenesis. Using inducible expression of recombinases, we show this approach can induce a cell to commit to one of two mutually exclusive cell states. By regulating the expression of recombinases, we are able to control the distribution of cell states within an initially monoclonal and homogeneous population of cells. We use the induction of a synthetic cell state to control morphogenesis through cell state-specific expression of homotypic cadherins, which controls the cells’ adhesive properties. This enables us to create a large number of different shapes and to control morphogenesis.&#13;
&#13;
Secondly, we develop a library-based approach for cell state-specific gene regulation. We design a set of 6,107 Synthetic Promoters with Enhanced Cell-State Specificity (SPECS), and identify several SPECS with spatiotemporal specificity during the programmed differentiation of stem cells, as well as SPECS that are highly specific for breast cancer and glioblastoma stem-like cells.&#13;
&#13;
Thirdly, we develop a method that allows detection of endogenous gene expression without modifying the endogenous gene itself. We show that placing a regulatory RNA downstream of a terminator allows for expression of the regulatory RNA, and demonstrate this method for miRNAs and gRNAs.&#13;
&#13;
Together, this thesis develops methods to create synthetic cell states that can be used to control morphogenesis, and provides tools to detect endogenous cell states which can serve as inputs to control gene regulatory networks.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139525</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling structured biological processes with machine learning</title>
<link>https://hdl.handle.net/1721.1/139524</link>
<description>Modeling structured biological processes with machine learning
Shen, Max Walt
Models of natural phenomena have played a fundamental role in scientific progress. In modern biology, we seek to model ever more complex phenomena, driven by advances in high-throughput measurement technology and machine learning. These advances motivate a top-down, data-driven modeling approach, but directly applying such methods to model complex biological processes can fail to yield models with causal understanding. It would be desirable to build models that combine the rich bodies of causal knowledge built over decades of research with modern flexible machine learning methods that scale to large and rich datasets.&#13;
&#13;
Here, I present deep data-driven models that incorporate biological and causal prior knowledge to model fundamental biological processes in genome editing and directed evolution. I first consider a model of DNA repair following CRISPR/Cas9 cleavage, which was generally thought to be unpredictable. In a large-scale dataset, I find signatures implicating an alternative and more predictable DNA repair pathway. I describe a model that accurately predicts genome editing outcomes by representing these competing but mechanistically independent repair pathways while flexibly learning unknown relationships from data. I use the model to discover a new genome editing strategy for efficiently and precisely correcting a class of disease-causing genetic mutations. Next, I consider a model for base editing, where I decompose a complex prediction problem into simpler subproblems and solve one with an autoregressive sequence-to-distribution-of-sequences model. These models enable the design of genome editing strategies with optimized outcomes for disease-causing mutations and enabled the first demonstration of transversion base editing by cytosine base editors, broadening the scope of base editing to potentially correct new classes of mutations. These models also broaden the scope of C-to-G base editors with restrictive sequence preferences. Finally, I propose a method for reconstructing sequence-to-function datasets from directed evolution that can help increase the availability of datasets for machine learning for protein engineering. This method exploits the structure of a differential equation governing natural selection for efficient inference and is capable of proposing variants with higher activity than conventional methods. Incorporating prior knowledge and structure into models of natural phenomena can support scientific discovery.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139524</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing sensorimotor behaviors through information integration and mental simulation</title>
<link>https://hdl.handle.net/1721.1/139523</link>
<description>Optimizing sensorimotor behaviors through information integration and mental simulation
Chang, Chia-Jung
To generate dynamic thoughts, make proper decisions, and act appropriately, humans and animals need to make reliable estimates of the state of the world. Recent studies have shown that the brain reduces uncertainty associated with noisy measurements through strategies including incorporating prior knowledge with sensory cues, extracting low-dimensional manifolds from heterogeneous activities, and updating internal models by simulating upcoming events. However, it remains unclear whether the brain utilizes additional sources that might have been ignored in previous work. To address this question, my thesis starts by asking how implicit temporal rhythms are used during mental simulation of object trajectories with partial observations. By designing psychophysics experiments with varying spatial and temporal structures, I show that humans simulate temporal rhythms in addition to the kinematics when interacting with dynamic stimuli. Bayesian modeling further suggests that explicit kinematics and implicit timing are integrated optimally. Following this work, the neural mechanism of time reproduction is revealed by analyzing the dynamics of low-dimensional state spaces from large-scale electrophysiology recordings. This approach is further applied to uncover mechanisms underlying observational fear learning, which would not be possible with a traditional single-cell analysis. In the next chapter, the idea of neural coding in calcium imaging studies is challenged by demonstrating that the background residuals represent additional behavioral information. By building a convolutional neural network, the position and speed of the animals can be directly decoded from raw microendoscopic data. Critically, saliency maps of the model reveal the emergence of video decomposition and identify neural clusters representing distinct behavioral aspects in the original images. 
Finally, inspired by replays in the hippocampus, I design a reinforcement learning agent with mental simulation to approximate the relaxation of constrained optimization. The results reveal scenarios where simulating to break physical barriers can improve learning efficiency. Together, my thesis examines how additional information may be integrated with spatial and temporal simulation to optimize complex sensorimotor behaviors, and proposes efficient models for decoding and learning.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139523</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonon hydrodynamic transport at elevated temperature</title>
<link>https://hdl.handle.net/1721.1/139517</link>
<description>Phonon hydrodynamic transport at elevated temperature
Ding, Zhiwei
For over half a century, phonon hydrodynamic transport was deemed exotic and thought to matter only at extremely low temperatures. In this work, by combining theoretical and experimental approaches, we predict and confirm the existence of phonon hydrodynamic transport in graphite above 200 K. More specifically, we introduce a direction-dependent definition of normal and Umklapp scattering, which gives an improved description of mode-specific phonon dynamics. By extending the classical Fuchs-Sondheimer solution, we developed a first-principles framework to study phonon hydrodynamics under the size effect with mode-by-mode phonon scattering details. We unambiguously revealed Poiseuille heat flow by studying the variation of heat flow with the graphite ribbon width, and identified for the first time the existence of the phonon Knudsen minimum – an unusual phenomenon unique to the hydrodynamic regime – which can be observed up to 90 K. Using a sub-picosecond transient grating technique, we directly observed second sound in graphite at record-high temperatures of 200 K. With the enlarged grating-period window, we report for the first time the dispersion of the thermal wave, whose velocity increases with decreasing grating period. Our experimental findings are well explained by the interplay among “three fluids”: ballistic, diffusive, and hydrodynamic phonons. We believe our study may stimulate further work on discovering more material systems with significant phonon hydrodynamic features, as well as new research into understanding and manipulating phonon transport in the hydrodynamic regime.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139517</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Environmental and Development Economics</title>
<link>https://hdl.handle.net/1721.1/139515</link>
<description>Essays in Environmental and Development Economics
Hsiao, Allan
Chapter 1. Weak environmental regulation has global consequences. When domestic regulation of carbon-intensive industries fails, the international community can intervene by targeting these industries with import tariffs. I argue that import tariffs must possess two features – coordination and commitment – in order to be effective. Without coordination across importers, tariffs are undermined by leakage to unregulated markets. Without commitment to upholding tariffs over the long term, tariffs are reduced over time as importers give in to static incentives. I develop a dynamic empirical framework for quantifying these forces in settings with incomplete regulation and sunk investment, and I apply it to the market for palm oil, a major driver of deforestation and one of the largest sources of emissions globally.&#13;
&#13;
Chapter 2. Does electoral accountability discipline public spending? After the fall of Suharto, Indonesia held local elections for the first time in decades. I use a dynamic discrete choice framework to study how democratization affected the spatial allocation of public investment in healthcare infrastructure. On one hand, democratization limits distortions from Suharto-era biases toward certain areas, such as those within the patronage network. On the other hand, spillover effects are less internalized as districts become more focused on their own constituents.&#13;
&#13;
Chapter 3. Many infrastructure investments have spatial effects that make optimal allocation a difficult, combinatorial problem. Schools are one such example: when graduates seek employment nationally and migrate, schools have effects that extend beyond local labor markets. But policymakers often allocate infrastructure investments with simple rules like population cutoffs, ranked lists, and need-based formulas that do not account for spatial interdependencies. How effective are these simple rules compared to more sophisticated approaches? I use a spatial equilibrium model of individuals’ education and migration decisions to study this question in the context of Indonesia’s Sekolah Dasar INPRES program, the largest school construction program in history.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139515</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical and Computational Foundations to Enable Predictive Digital Twins at Scale</title>
<link>https://hdl.handle.net/1721.1/139514</link>
<description>Mathematical and Computational Foundations to Enable Predictive Digital Twins at Scale
Kapteyn, Michael G.
A digital twin is a computational model that evolves over time to persistently represent a unique physical asset. Digital twins underpin intelligent automation by enabling asset-specific analysis and data-driven decision-making. Although the promise of digital twins is well established, state-of-the-art digital twins are typically bespoke, one-off implementations that require considerable expertise and deployment resources. This thesis develops mathematical and computational foundations to support the transition from this custom implementation phase toward accessible and robust digital twins at scale.&#13;
&#13;
First, a unified mathematical foundation for digital twins is established. A mathematical abstraction of a digital twin and its associated physical asset is presented. This abstraction is then developed into a probabilistic graphical model describing the evolution of the coupled system. This model affords a unified treatment of all aspects of a digital twin and can span the entire asset lifecycle. While mathematically rigorous, the model is flexible and extensible, enabling application across a wide range of domains.&#13;
&#13;
Building on this mathematical foundation, scalable computational methodologies are developed to enable asset-specific physics-based models to be incorporated into a digital twin. A central element of the proposed approach is a library of component-based reduced-order models derived from high-fidelity simulations of the asset in various states. The component-based approach scales efficiently to complex systems and provides a flexible and expressive framework for model adaptation—both critical features in the digital twin context. A methodology is proposed for combining these physics-based models with interpretable machine learning techniques in order to determine which observational data are most informative, and how these data can be fused within an interpretable classifier. This classifier can be deployed online to enable dynamic data-driven updating of the digital twin.&#13;
&#13;
The proposed methodologies are demonstrated through the creation, calibration, and deployment of a structural digital twin for a custom-built 12ft wingspan unmanned aerial vehicle. In flight, the digital twin assimilates sensor data to update its internal structural models in response to damage or degradation. The dynamically updated digital twin provides rapid computational analysis of the vehicle’s structural health, which in turn enables intelligent self-aware decision-making.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139514</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of New, More Stable, Precursors to Organopalladium(II) Complexes and Methods for the Palladium-Mediated Late-Stage Diversification of Pharmaceuticals</title>
<link>https://hdl.handle.net/1721.1/139499</link>
<description>Design of New, More Stable, Precursors to Organopalladium(II) Complexes and Methods for the Palladium-Mediated Late-Stage Diversification of Pharmaceuticals
King, Ryan P.
Chapter 1. Pharmaceutical Diversification via Palladium Oxidative Addition Complexes&#13;
&#13;
Palladium-catalyzed cross-coupling reactions have transformed the exploration of chemical space in the search for materials, medicines, chemical probes, and other functional molecules. However, cross-coupling of densely functionalized substrates remains a major challenge. We devised an alternative approach using stoichiometric quantities of palladium oxidative addition complexes (OACs) derived from drugs or drug-like aryl halides as substrates. In most cases, cross-coupling reactions using OACs proceed under milder conditions and with higher success than the analogous catalytic reactions. OACs exhibit remarkable stability, maintaining their reactivity after months of benchtop storage under ambient conditions. We demonstrated the utility of OACs in a variety of experiments including automated nanomole-scale couplings between an OAC derived from rivaroxaban and hundreds of diverse nucleophiles, as well as the late-stage derivatization of the natural product k252a.&#13;
&#13;
Chapter 2. A Ligand Exchange Process for the Diversification of Palladium Oxidative Addition Complexes&#13;
&#13;
Palladium oxidative addition complexes (OACs) have recently emerged as powerful tools to enable challenging bond formations for the functionalization and diversification of pharmaceuticals and biomolecules. However, each OAC can only be formed with one particular ancillary ligand at a time. As no single ligand is optimal for every cross-coupling reaction, and as access to pharmaceutically derived OACs bearing different ligands is limited by arene availability, we herein disclose a ligand exchange protocol that allows a series of OACs bearing a diverse array of ancillary ligands – ranging from phosphines to phosphites and bipyridyls – to be prepared from one common complex. The complexes generated were further applied to both stoichiometric and catalytic cross-coupling reactions.&#13;
&#13;
Chapter 3. A Neophyl Palladacycle as an Air- and Thermally Stable Precursor to Oxidative Addition Complexes&#13;
&#13;
The synthesis and utilization of isolated palladium oxidative addition complexes (OACs) has had a significant impact on Pd-catalyzed and Pd-mediated cross-coupling reactions. Despite their importance, the widespread use of OACs has been greatly limited by the instability of their Pd precursor complexes. Herein we report the use of Cámpora’s palladacycle as a new, more stable precursor to Pd OACs. Using this palladacycle, a diverse series of biarylphosphine-ligated OACs was prepared, including those derived from pharmaceutically relevant aryl halides and those with relevance to bioconjugation. Additionally, Cámpora’s palladacycle was investigated as a thermally activated precatalyst for Pd-catalyzed C–N cross-coupling reactions.&#13;
&#13;
Chapter 4. Synthesis of (MeCN)2Pd(CF3)OTs, a General Precursor to Palladium(II)-Trifluoromethyl Complexes LPd(CF3)X&#13;
&#13;
In palladium-catalyzed aryl–trifluoromethyl cross-coupling reactions, reductive elimination is often the rate-limiting step. Stoichiometric studies of reductive elimination have proved effective in evaluating the ability of various ligands to facilitate this challenging elementary step. However, the difficulty of synthesizing palladium trifluoromethyl complexes has hindered the use of this strategy. To address this deficiency, we herein report the synthesis of (MeCN)2Pd(CF3)OTs, an air- and moisture-stable solid that can be used as a common precursor to access various LPd(CF3)X complexes. From this complex we were able to prepare palladium trifluoromethyl complexes bearing many monophosphine, bisphosphine, and diamine ligands that are known to help facilitate Ar–CF3 and vinyl–CF3 reductive elimination. Further, we found that the anionic ligand (X) could be readily changed by modifying the NaX or AgX salt used.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139499</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interface-Governed Optical Properties of Van der Waals Heterostructures</title>
<link>https://hdl.handle.net/1721.1/139498</link>
<description>Interface-Governed Optical Properties of Van der Waals Heterostructures
Lee, Hae Yeon
Van der Waals (vdW) layered materials are an emerging class of materials that can be readily separated into atomically thin layers, exhibiting distinct physical properties compared to their bulk counterparts. Moreover, each of the layers can be vertically assembled into vdW heterostructures, opening a new way to design materials with novel electronic and optical properties. To design vdW heterostructures with desirable functionalities, proper interfacial engineering is essential as vdW interfaces between the layers govern the physical properties of the entire structure. In this thesis, I investigate the optical properties of vdW heterostructures in which the interfaces are engineered by hetero-interfacial coupling, atomic misalignment, and mechanical deformation.&#13;
&#13;
Firstly, cathodoluminescence (CL) from monolayer transition metal dichalcogenides (TMDs) was demonstrated in a scanning transmission electron microscope (STEM) for the first time by utilizing the interfacial coupling with hexagonal boron nitride (hBN). The effect of imperfect interfaces on CL is directly visualized by nanoscale optical-structural correlation, showing that STEM-CL is an efficient tool to characterize the optical emission of monolayer TMDs at the nanoscale. Next, it was shown that the optical properties of hBN multilayers can be continuously tuned by the twist angle at the inner interface. Due to the formation of a moiré superlattice at the twisted interface, a new moiré band gap is formed, whose magnitude decreases continuously with increasing twist angle, resulting in tunable luminescence wavelength and intensity. This work extends moiré superlattice-related phenomena beyond monolayer-based systems and suggests a strategy to control light in the ultraviolet region. Lastly, the effect of mechanical bending on the optical properties of multilayers is studied by employing bubbles buried inside the vdW multilayers. The materials confined in the bubbles are used to modify the bubble geometry, and hence its optical properties, under an electron beam. As a result, strong and localized luminescence is observed from the bubbles, which form optical cavities whose geometry can be engineered using the theoretical model developed here.&#13;
&#13;
This thesis presents three new approaches to modulating optical emission in vdW heterostructures through interfacial engineering, providing insight into the role of interfaces and motivating further work on designing vdW heterostructures for optoelectronic applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139498</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Big-Data Applications Near Flash</title>
<link>https://hdl.handle.net/1721.1/139492</link>
<description>Computing Big-Data Applications Near Flash
Xu, Shuotao
Current systems produce a large and growing amount of data, often referred to as Big Data. Extracting valuable insights from this data requires new computing systems that store and process it efficiently. For a fast response time, Big-Data processing typically relies on in-memory computing, which requires a cluster of machines with enough aggregate DRAM to accommodate the entire dataset for the duration of the computation. Big Data typically exceeds several terabytes; therefore, this approach can incur significant overhead in power, space, and equipment. If the amount of DRAM is not sufficient to hold the working set of a query, performance deteriorates catastrophically. Although NAND flash can provide high-bandwidth data access and has higher capacity density and lower cost per bit than DRAM, flash storage has dramatically different characteristics from DRAM, such as large access granularity and longer access latency. Therefore, many challenges must be overcome to enable flash-centric computing for Big-Data applications at performance comparable to that of in-memory computing.&#13;
&#13;
This thesis presents flash-centric hardware architectures that provide high processing throughput for data-intensive applications while hiding long flash access latency. Specifically, we describe two novel flash-centric hardware accelerators, BlueCache and AQUOMAN. These systems lower the cost of two common data-center workloads: key-value caching and SQL analytics. We have built BlueCache and AQUOMAN using FPGAs and flash storage, and show that they provide competitive performance for Big-Data applications with multi-terabyte datasets. BlueCache provides a key-value cache 10-100X cheaper than a DRAM-based solution, and can outperform a DRAM-based system when the latter has more than 7.4% misses for read-intensive workloads. A desktop-class machine with a single 1TB AQUOMAN disk can achieve performance similar to that of a dual-socket general-purpose server with off-the-shelf SSDs. We believe BlueCache and AQUOMAN can dramatically bring down the cost of acquiring and operating high-performance computing systems for data-center-scale Big-Data applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139492</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular SMT-Based Verification of Rule-Based Hardware Designs</title>
<link>https://hdl.handle.net/1721.1/139491</link>
<description>Modular SMT-Based Verification of Rule-Based Hardware Designs
Wright, Andrew C.
The highly concurrent nature of hardware logic designs makes their design and verification difficult. Bluespec SystemVerilog (BSV) simplifies this problem by introducing the rule-level abstraction. Modules are expressed in terms of guarded atomic actions that appear to fire in a sequential order, even when multiple rules fire concurrently per clock cycle. This allows designers to think about concurrent systems one step at a time.&#13;
&#13;
This thesis aims to make hardware verification easier by leveraging the rule-level abstraction for verification using SMT-based verification, e.g. bounded and unbounded model checking.&#13;
The main aspect of the rule-level abstraction we take advantage of is its modularity. Rule-level modules can only be interacted with through their interface methods, so if a module behaves the same as its specification with respect to the legal sequences of method calls, the two can be used interchangeably without affecting the outer module. A modular verification technique based on this idea can replace complex submodules with simpler versions, reducing the complexity and the number of steps required for unbounded model checking. This modular verification methodology makes feasible many problems that would otherwise be infeasible for unbounded model checking. Other aspects of rule-level hardware design languages (HDLs), such as uninterpreted functions and abstract types, are also taken advantage of during verification.&#13;
&#13;
In this thesis, I present Spec `n' Check, an HDL inspired by BSV that is designed for powerful modularity and easy-to-write specifications. To fully support SMT-based verification, we present formal semantics for Spec `n' Check along with a symbolic representation of the semantics that can be used in SMT solvers.&#13;
We also present what it means for a module to implement a specification and the related metatheorems that describe how the implements relation can be used to verify larger modules. We show that with this work, it is possible to formally verify a RISC-V pipelined processor implemented in the synthesizable subset of Spec `n' Check against an ISA specification in a matter of minutes of SMT solver run time. This is only possible thanks to module refinement and abstraction of the control and status register file (CSRF) and the memory system, and the use of uninterpreted functions.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139491</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnostics for Periodically Operated Actuators</title>
<link>https://hdl.handle.net/1721.1/139490</link>
<description>Diagnostics for Periodically Operated Actuators
Huchel, Lukasz Marek
Increasing constraints on quality, reliability, and minimum downtime require the revision of existing maintenance approaches. Moving beyond preventive maintenance – or, even more so, reactive maintenance – requires information about a system’s condition in order to enable a predictive maintenance approach. Condition monitoring requires efficient sensing and data processing for the extraction of condition-related signal features. Advances in both connectivity and embedded systems enable a wide range of possibilities in the field of condition monitoring.&#13;
&#13;
This thesis develops signal processing tools and hardware solutions optimized for, but not limited to, the diagnostics of periodically operated actuators. These actuators are mechanical or electromechanical systems that experience non-uniform loads during an operating cycle. The platform presented in this thesis supports state-of-the-art vibration and acoustic measurements and combines the quality of high-end acquisition systems with the portability of IoT devices, allowing for temporary field installations and monitoring of critical industrial equipment. Cyclostationary analysis enables diagnostics based on signals with strong random components by extracting modulation signatures otherwise unattainable by conventional time- or frequency-domain analysis, as demonstrated with applications to diaphragm pumps and cutting tools. An extension to the Integrated-Electronics-Piezoelectric (IEPE) industry standard for vibration measurements broadens the applications to a wide range of measurands such as temperature, pressure, or mechanical strain. These extended capabilities enable a more unified sensing strategy and decrease the complexity of condition monitoring systems, further supporting miniaturization and on-the-edge applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139490</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Languages and Compilers for Rendering and Image Processing</title>
<link>https://hdl.handle.net/1721.1/139489</link>
<description>Languages and Compilers for Rendering and Image Processing
Anderson, Luke
Even though computer graphics applications are widely used, they remain challenging to implement, and graphics programming systems must navigate conflicting trade-offs between correctness, performance, and hardware portability. This thesis describes the design and implementation of domain-specific languages with particular trade-off decisions in mind, and the application of machine learning to these languages and their compilers.&#13;
&#13;
Rendering systems suffer from a tension between separation of concerns and performance. Existing rendering systems typically focus on performance, but complex probability computations make advanced rendering algorithms difficult to implement correctly. We first identify some common operations that are foundational to many rendering algorithms, describe the goals of an ideal rendering system, explore the space of trade-offs in achieving these goals, and discuss possible implementation strategies. We then present Aether, a domain-specific language for rendering, designed with a focus on correctness. Users write sampling code using reusable building-block components, and all probability code is then automatically generated. We demonstrate the effectiveness of this approach by implementing a range of modern rendering algorithms, including a novel tridirectional path tracing algorithm that would otherwise have been prohibitively difficult to implement.&#13;
&#13;
Halide provides a modular approach to writing image processing code but achieving high performance still requires considerable manual effort and expertise. We present a new automatic algorithm that quickly generates high performance GPU implementations of imaging and vision pipelines, directly from high-level Halide algorithm code. We address the scalability challenge of extending search-based automatic scheduling to the nested parallelism on GPU architectures in reasonable compile time. We find schedules that are on average 1.7x faster than existing automatic solutions (up to 5x), and competitive with what the best human experts were able to achieve in an active effort to beat our automatic results.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139489</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Machine Learning Algorithms for Personalized Medicine and Insurance</title>
<link>https://hdl.handle.net/1721.1/139482</link>
<description>Novel Machine Learning Algorithms for Personalized Medicine and Insurance
Orfanoudaki, Agni
Over the past decades, analytics have held the promise of revolutionizing healthcare by providing more effective, patient-centered, and personalized care. As increasing amounts of data are collected, computational performance improves, and new algorithms are developed, machine learning has come to be viewed as the key analytical tool that will advance healthcare delivery. Nevertheless, despite the enthusiasm about the potential of “big data”, until recently only a few examples have impacted clinical practice. This thesis presents a combination of predictive and prescriptive methodologies that will empower the transition to personalized medicine.&#13;
&#13;
We propose new machine learning algorithms to address major data imperfections like missing values, censored observations, and unobserved counterfactuals. Leveraging a wide variety of data sources, including health and claims records, longitudinal studies, and unstructured medical reports, we demonstrate the potential benefit of analytics in the context of cardiovascular and cerebrovascular diseases. To propel the adoption of these methodologies, we lay the foundations in the area of algorithmic insurance, proposing a quantitative framework to estimate the litigation risk of machine learning models. This work emphasizes interpretability and the design of models that facilitate clinician engagement and integration into the healthcare system. &#13;
&#13;
Part I introduces data-driven algorithms for missing data imputation, clustering, and survival analysis that lie at the intersection of machine learning and optimization. Part II highlights the potential of prescriptive and predictive analytics in the medical field. We develop a new framework for personalized prescriptions and apply it for the treatment of coronary artery disease. Part II also presents predictive models that could support the early diagnosis and improve the management of stroke patients. Finally, Part III proposes a novel risk evaluation methodology that will enable healthcare institutions to manage the risk exposure resulting from the implementation of analytical decision tools.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139482</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ambient Acoustics as Indicator of Environmental Change in the Beaufort Sea: Experiments &amp; Methods for Analysis</title>
<link>https://hdl.handle.net/1721.1/139479</link>
<description>Ambient Acoustics as Indicator of Environmental Change in the Beaufort Sea: Experiments &amp; Methods for Analysis
Chen, Rui
The Arctic Ocean, a vital component of Earth’s climate system, is experiencing dramatic environmental changes. These changes are reflected in its underwater ambient soundscape, whose origin and propagation depend primarily on properties of the ice cover and water column.&#13;
&#13;
The first component of this work examines the effects of changes in the Beaufort Sea sound speed profile (SSP) and ice cover on ambient noise characteristics. Specifically, the emergence of a warm-water intrusion near 70 m depth has altered the historical Arctic SSP, while the ice cover has become thinner and younger due to the rise in average global temperature. Hypothesized shifts in the ambient soundscape and surface noise generation due to these changes are verified by comparing noise data measured during two experiments to modeled results. These changes include a broadside notch in noise vertical directionality as well as a shift from uniform surface noise generation to discrete generation at specific ranges.&#13;
&#13;
Motivated by our data analyses, the second component presents several tools to facilitate ambient noise characterization and generation monitoring. One is a convolutional neural network (CNN) approach to noise range estimation, whose robustness to SSP and bottom-depth mismatch is compared with that of conventional matched field processing. We further explore how the CNN approach achieves its performance by examining its intermediate outputs. Another tool is a frequency-domain transient event detection algorithm that leverages image processing and hierarchical clustering to identify and categorize noise transients in data spectrograms. The spectral content retained by this method enables insight into the mechanism by which the ice cover generates the detected events. Lastly, we present the deployment of a seismo-acoustic system to localize transient events. Two forward approaches that utilize time-difference-of-arrival are described and compared with a more conventional inverse technique. The examination of this system’s performance prompts recommendations for future deployments.&#13;
&#13;
We hope that our ambient noise analysis and algorithm development provide a stronger foundation for the continued study of the Arctic ambient soundscape as the region continues to grow in significance.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139479</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-intersection of Manin-Drinfeld Cycles and Taylor expansion of L-functions</title>
<link>https://hdl.handle.net/1721.1/139475</link>
<description>Self-intersection of Manin-Drinfeld Cycles and Taylor expansion of L-functions
Chen, Yongyi
A rising philosophy in the theory of automorphic representations in number theory is that higher central derivatives of L-functions of automorphic forms should correspond to intersection numbers of special cycles on moduli spaces. A classic early result in this direction was achieved by Gross and Zagier, who proved that the derivative of the L-function of an elliptic curve is equal, up to a constant, to the Néron–Tate height pairing of a special point, called a Heegner point, on the elliptic curve.&#13;
&#13;
A more recent result, proven in the function field case by Yun and Zhang, showed that the higher derivatives of the base change L-function of an unramified automorphic representation of PGL₂ over a function field are equal, up to a constant, to the self-intersection number, inside the moduli stack of PGL₂-shtukas, of the moduli stack of shtukas for an anisotropic torus.&#13;
&#13;
We prove in the function field case that the higher derivatives of the square of the L-function of unramified automorphic representations over PGL₂ are equal, up to a constant, to the self-intersection number, inside the moduli stack of PGL₂-shtukas, of the moduli stack of shtukas for the split torus.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139475</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mining Software Artifacts for use in Automated Machine Learning</title>
<link>https://hdl.handle.net/1721.1/139465</link>
<description>Mining Software Artifacts for use in Automated Machine Learning
Cambronero Sánchez, José Pablo
Successfully implementing classical supervised machine learning pipelines requires that users have software engineering, machine learning, and domain experience. Machine learning libraries have helped along the first two dimensions by providing modular implementations of popular algorithms. However, implementing a pipeline remains an iterative, tedious, and data-dependent task, as users have to experiment with different pipeline designs. To make the pipeline development process accessible to non-experts and more efficient for experts, automated techniques can be used to efficiently search for high-performing pipelines with little user intervention. The collection of techniques and systems that automate this task is commonly termed automated machine learning (AutoML).&#13;
&#13;
Inspired by the success of software mining in areas such as code search, program synthesis, and program repair, we investigate the hypothesis that information mined from software artifacts can be used to build, improve interactions with, and address missing use cases of AutoML. In particular, I will present three systems -- AL, AMS, and Janus -- that make use of software artifacts. AL mines dynamic execution traces from a collection of programs that implement machine learning pipelines and uses these mined traces to learn to produce new pipelines. AMS mines documentation and program examples to automatically generate a search space for an AutoML tool by starting from a user-chosen set of API components. And Janus mines pipeline transformations from a collection of machine learning pipelines, which can be used to improve an input pipeline while producing a nearby variant. Jointly, these systems and their experimental results show that mining software artifacts can simplify AutoML systems, make their customization easier, and apply them to novel use cases.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139465</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on the Torsion Subgroups of Néron–Severi Groups</title>
<link>https://hdl.handle.net/1721.1/139463</link>
<description>Bounds on the Torsion Subgroups of Néron–Severi Groups
Kweon, Hyuk Jun
Let &#119883; ↪ Pʳ be a smooth projective variety defined by homogeneous polynomials of degree ≤ &#119889; over an algebraically closed field &#119896;. Let Pic &#119883; be the Picard scheme of &#119883;, and let Pic⁰ &#119883; be the identity component of Pic &#119883;. The Néron–Severi group scheme of &#119883; is defined by NS &#119883; = (Pic &#119883;)/(Pic⁰ &#119883;)_red, and the Néron–Severi group of &#119883; is defined by NS &#119883; = (NS &#119883;)(&#119896;). We give explicit upper bounds on the order of the finite group (NS &#119883;)ₜₒᵣ and of the finite group scheme (NS &#119883;)ₜₒᵣ in terms of &#119889; and &#119903;. As a corollary, we give an upper bound on the order of the torsion subgroup of the second cohomology group of &#119883; and the finite group [mathematical equation]. We also show that (NS &#119883;)ₜₒᵣ is generated by (deg &#119883; − 1)(deg &#119883; − 2) elements in various situations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139463</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid evaluation of pathology using nonlinear microscopy with applications in breast cancer, prostate cancer, and renal disease</title>
<link>https://hdl.handle.net/1721.1/139450</link>
<description>Rapid evaluation of pathology using nonlinear microscopy with applications in breast cancer, prostate cancer, and renal disease
Cahill, Lucas C.
Frozen section analysis (FSA) is used intraoperatively for rapid evaluation of surgical tissue. In FSA, excised tissue is rapidly frozen, sectioned using a microtome, stained with hematoxylin and eosin (H&amp;E), and evaluated by a pathologist. FSA requires ~20 minutes for the first specimen to be evaluated, and the specimen size is limited by the equipment and personnel available. In some tissues that have a high lipid content (such as breast), freezing is difficult and adds artifacts that can affect interpretation. Techniques that enable fresh surgical specimen evaluation without freezing or microtoming can reduce the time and labor required for intraoperative tissue analysis. Nonlinear microscopy (NLM) is a fluorescence microscopy technique that provides subcellular resolution images of tissue using a femtosecond laser to excite fluorescence at the laser focus. Tissue does not need to be microtomed, enabling visualization of freshly excised tissue. Exogenous fluorophores can be used to visualize cellular components such as nuclei and cytoplasm/stroma, and multiple specimens can be stained in parallel without the tissue-size restrictions associated with FSA. NLM could dramatically reduce the time required for tissue evaluation compared with FSA. &#13;
&#13;
In this thesis, we developed and validated NLM evaluation techniques for breast and prostate cancer and renal disease. The studies in this thesis were performed in close collaboration with the Beth Israel Deaconess Medical Center. We developed a rapid, robust fluorescent staining protocol for NLM imaging with a compact, low-cost ytterbium laser and validated it on fresh breast tissue. A randomized controlled trial was designed and initiated to assess the rate of indication for repeat breast surgeries in a study group of patients receiving intraoperative NLM evaluation versus a control group of patients receiving no intraoperative evaluation. We also developed techniques for NLM evaluation of fresh prostatectomy tissue and prostate biopsies and performed blinded readings of NLM images to assess accuracy. An optical clearing and staining technique for rapid three-dimensional analysis of renal biopsies was developed. Overall, this thesis aims to provide a new approach for intraoperative/intraprocedural consultation with NLM, potentially enabling more accurate, rapid, and simplified surgeries/procedures.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139450</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Fouling-Resistant Coatings for Energy Systems: Theory and Proof of Principle at Realistic Conditions</title>
<link>https://hdl.handle.net/1721.1/139446</link>
<description>Design of Fouling-Resistant Coatings for Energy Systems: Theory and Proof of Principle at Realistic Conditions
Carlson, Max
The buildup of corrosion deposits, known as fouling or CRUD in PWRs, seriously hinders large-scale energy production. From nuclear power plants to geothermal reservoirs, fouling increases system pressure drops, impedes heat transfer, and accelerates corrosion, leading to derating and early failure. Here we propose and demonstrate a design principle for foulant-agnostic thin-film (sub-micrometer) coatings, based on minimizing van der Waals (vdW) forces between a material surface and any foulant immersed in a fluid. First-principles calculations of Hamaker constants are used to determine candidate coating materials. These materials are then tested in the first documented high-temperature (315 °C), high-pressure (14 MPa) liquid-cell atomic force microscope (HPAFM) capable of performing colloidal probe measurements in situ at PWR pressure and temperature. The results are compared to flow loop tests, and the coating stability is tested in an irradiated flow loop with a PWR spectrum. A set of promising coating materials is found, with the best candidates proven to reduce CRUD fouling by approximately an order of magnitude in a PWR-representative internally heated test flow loop.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139446</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New tools for the discovery of pigment gene function</title>
<link>https://hdl.handle.net/1721.1/139443</link>
<description>New tools for the discovery of pigment gene function
Adelmann, Charles H.
Dozens of genes contribute to the vast variation in human pigmentation. Many of these encode proteins that localize to the melanosome, the lysosome-related organelle that synthesizes pigment, but have unclear functions. Here, we describe the MelanoIP method for rapidly isolating melanosomes and profiling their labile metabolite contents. We use it to study MFSD12, a transmembrane protein of unknown molecular function that when suppressed causes darker pigmentation in mice and humans. We find that MFSD12 is required to maintain normal levels of cystine, the oxidized dimer of cysteine, in melanosomes, and to produce cysteinyldopas, the precursors of pheomelanin synthesis made in melanosomes via cysteine oxidation. Tracing and biochemical analyses show that MFSD12 is necessary for the import of cysteine into melanosomes, and, in non-pigmented cells, lysosomes. Indeed, loss of MFSD12 reduced the accumulation of cystine in lysosomes of fibroblasts from patients with cystinosis, a lysosomal storage disease caused by inactivation of the lysosomal cystine exporter CTNS (Cystinosin). Thus, MFSD12 is an essential component of the long-sought cysteine importer for melanosomes and lysosomes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139443</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spin-Aware Neural Network Interatomic Potential for Atomistic Simulation</title>
<link>https://hdl.handle.net/1721.1/139441</link>
<description>Spin-Aware Neural Network Interatomic Potential for Atomistic Simulation
Bloore, David A.
Computational modeling is key in materials science for developing mechanistic insight that enables new applications. Ab initio methods capture exceptional phenomenological richness to high numerical accuracy, but at high cost and limited scale. Empirical potentials are faster and scale better, but cannot match ab initio methods in numerical and physical accuracy. Machine learning (ML) interatomic potentials (IPs) of recent years offer a balance: excellent phenomenology and accuracy, while scaling well at moderate cost. Interatomic potentials are generally formulated as functions of atomic coordinates only, i.e. they are spin-agnostic. For materials whose structures or energetics are influenced by spin, this is insufficient. Iron’s strong magnetism, for example, is coupled to its mechanical properties. This confounds spin-agnostic IPs because they implicitly use an expectation value across spin states for a given geometry.&#13;
&#13;
Thus, this work offers a novel ML engine employing: (1) novel basis functions that translate spin information into neural network (NN) inputs, and (2) novel NN architectures that improve the network’s ability to learn and express relationships between geometry, spin, and energy.&#13;
&#13;
When applied to a broad dataset with high variance in both geometry and spin, the new bases achieve a 4x reduction in energy prediction error compared to the spin-agnostic Behler–Parrinello (BP) framework, and 5x using both the new bases and new NN architecture. When applied to a high spin-variance dataset, the new bases reduce energy prediction error by over 10x. Even when applied to a dataset with low spin-variance, the new bases reduce energy prediction error by 45%. These predictive improvements come at an increased computational cost of about 5% compared to spin-agnostic BP using only the new bases, but roughly 3x using both the new bases and NN.&#13;
&#13;
This work presents two physical predictions to further elucidate the capabilities and value of the Spin-Aware NN IP (SANNIP). First, Monte Carlo (MC) spin relaxations using SANNIP exhibit behavior consistent with hysteresis, in that the relaxed spin state depends on its initial alignment. Second, MC spin relaxations resolve the temperature beyond which ferromagnetically initialized systems lose their magnetization to between 1100 and 1150 K, roughly consistent with the experimentally measured Curie temperature (T_C) of 1043 K.&#13;
&#13;
The evaluation of numerical accuracy and physical predictions demonstrates the utility of the novel bases and NN architectures. Future work can generate a broader dataset and deploy SANNIP potentials in molecular dynamics (MD) simulations seeking insight into the role of spin in mechanical properties, defect interactions, etc. Additional bases can explicitly treat externally applied electric and magnetic fields. Further NN architecture innovations can incorporate transfer learning into the treatment of multi-component systems. This work is foundational to and enabling of many new avenues of investigation in computational materials science, with the aim of improving materials design, fabrication, remediation, recycling, and disposal.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139441</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward antiquity-inspired design in materials and construction: Insights into the production and durability of the ancient materials Egyptian blue and Roman concrete</title>
<link>https://hdl.handle.net/1721.1/139434</link>
<description>Toward antiquity-inspired design in materials and construction: Insights into the production and durability of the ancient materials Egyptian blue and Roman concrete
Seymour, Linda Marie
With increasing pressure on the global climate, there is a dire need to reduce the impact of both resource use and production of manufactured materials. In particular, modern ordinary Portland cement is responsible for up to 8% of global greenhouse gas emissions and offers a design life on the order of decades, requiring continued maintenance and reconstruction of buildings and infrastructure. Antiquity-inspired design, or examining past engineering achievements to inspire modern design, is a new paradigm through which properties of interest from ancient materials are understood and translated to new design applications. This thesis examines two ancient materials of interest, Egyptian blue and Roman concrete, to understand properties that can be translated to sustainable design.&#13;
&#13;
First, visible-induced luminescence, a property of interest for photovoltaics and forensics, is mapped at the micron-scale in ancient Egyptian blue pigment samples. The luminescence is correlated to specific crystalline structures and production pathways, including a modern antiquity-inspired sample using non-traditional raw materials.&#13;
&#13;
Next, the interfacial zones of aggregates and the cementing binder of ancient Roman mortars are characterized. Ancient Roman structures, produced with predominantly local materials, have remained standing for millennia in a variety of seismic and climatic conditions. High-resolution chemical and microstructural characterization techniques, including synchrotron micron-scale computed tomography, synchrotron X-ray diffraction, Raman microspectroscopy, scanning electron microscopy, and thin-section petrography, map complex, heterogeneous dissolution processes throughout the cementing matrix of mortar samples. Samples from the Tomb of Caecilia Metella (first century BCE) indicate that dissolution in the interfacial zone of volcanic aggregates (pozzolane rosse scoriae, fresh leucite and pyroxene) is not inherently detrimental to the mortars. Raman microspectroscopy maps the C-A-S-H binding phase in both pozzolane rosse mortars and lime-ceramic mortars from the ancient water infrastructure of Rome and Pompeii. Finally, aggregate-scale lime clasts inform possible production pathways for both the ancient mortar of the Privernum archaeological site and antiquity-inspired materials of the future.&#13;
&#13;
This work provides a characterization framework for the study of ancient materials; introduces new insights into the durability of ancient Roman concrete; and identifies a path forward for sustainable, durable design in civil engineering.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139434</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving the Environmental Stability of Methylammonium-Based Perovskite Solar Cells</title>
<link>https://hdl.handle.net/1721.1/139427</link>
<description>Improving the Environmental Stability of Methylammonium-Based Perovskite Solar Cells
Hartono, Noor Titan Putri
Perovskite solar cells (PSCs), as an emerging type of photovoltaics, have reached beyond 20% efficiency within a decade. Technoeconomic analysis suggests that PSCs are promising alternatives to market-dominant silicon, because PSC manufacturing is more cost-effective due to its solution-processing methods. However, the prototypical perovskite material, methylammonium lead iodide (MAPbI3), is environmentally unstable and degrades in the presence of oxygen, light, and moisture. Thus, despite its high initial performance, degradation over time means that the levelized cost of electricity (LCOE) of perovskites is prohibitively high. Improved stability (targeting &lt;0.25% degradation per year) could help the LCOE of perovskites surpass that of silicon.&#13;
&#13;
Researchers have been incorporating low-dimensional (LD) perovskites, such as 0D, 1D, or 2D structures, to improve PSC stability. We can obtain LD perovskites by changing any of the &#119860;, &#119861;, or &#119883; ions in the &#119860;&#119861;&#119883;3 structures of high-performing 3D perovskites. The &#119860;-site cations can be organic or inorganic, which gives us a vast number of possible perovskite compounds. Some common examples of 3D perovskite &#119860;-site cations are methylammonium (MA) and formamidinium (FA). When the &#119860;-site cation is larger than FA, it forms LD perovskite structures.&#13;
&#13;
This thesis focuses on investigating how to incorporate LD perovskites as a capping layer to improve the stability of MA-based perovskites, including how to screen and select the &#119860;-site cations of LD perovskite capping layers that can improve the MAPbI3 absorber stability, how to improve the stability of MAPb(IₓBr₁₋ₓ)₃, a mixed-halide, wide-bandgap absorber for tandem cells and indoor PV applications, and how to incorporate capping layers in inverted p–i–n PSC device architectures. These three questions are answered by combining high-throughput experiments with machine learning analysis. The optoelectronic, structural, and chemical composition properties of the LD capping-3D perovskite absorber materials are probed to identify the degradation mechanisms using advanced characterization methods. This deeper understanding of perovskite degradation and the strategies to solve the instability issue are critical to pushing PSCs closer toward commercialization.&#13;
&#13;
Keywords: perovskite solar cell, low-dimensional perovskite, capping layer, 2D-3D&#13;
heterostructures, high-throughput experiment, machine learning
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139427</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear and Non-Linear Mechanical Nature of a Living Mammalian Cytoplasm</title>
<link>https://hdl.handle.net/1721.1/139426</link>
<description>Linear and Non-Linear Mechanical Nature of a Living Mammalian Cytoplasm
Gupta, Satish Kumar
The cytoplasm is a complex active material in which numerous biochemical processes critical for the functioning of a cell occur. Many biological processes, such as cell migration, mechanotransduction, and cancer metastasis, involve mechanical processes and are critically regulated by cell mechanics. This thesis comprises three parts with a common goal of understanding the mechanical nature of the cytoplasm.&#13;
&#13;
In the first part of the thesis, we develop high-frequency passive microrheology, a high-throughput, non-invasive, and inexpensive method to measure the frequency-dependent complex moduli of the cytoplasm using the fluctuations of tracer particles. We use a combination of theoretical and experimental analysis to show that although the cytoplasm operates far from equilibrium, it behaves as an equilibrium material at short time-scales. This allows us to extract its mechanical properties using the generalized Stokes–Einstein relationship. The results obtained agree with independent optical tweezers measurements.&#13;
&#13;
In the second part of the thesis, we systematically show that cell polarity, which is known to facilitate a diverse set of cellular processes such as directional cell migration, differentiation, localized membrane growth, and activation of the immune system, is a critical regulator of the anisotropy of the mechanics, dynamics, and forces within the cytoplasm. We demonstrate that changes in cell shape can significantly modify the cytoskeletal organization and regulate the degree of mechanical anisotropy.&#13;
&#13;
In the third part of the thesis, we develop microscopic medium amplitude oscillatory shear (µMAOS), a novel method to measure the frequency-dependent micromechanical properties of soft materials in the asymptotically nonlinear regime using optical tweezers. We develop a theoretical framework to extract these nonlinear mechanical properties from experimental measurements and propose a physical interpretation of the third-order nonlinearities measured in single-tone oscillatory tests. We demonstrate the method on a well-characterized surfactant solution of wormlike micelles, and subsequently employ this technique to show that the cytoplasm of a living cell undergoes strain softening and shear thinning when locally subjected to weakly nonlinear oscillatory deformations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139426</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prevention &amp; Reduction of Opioid Misuse with Systems Exploration: Modelling complex, uncertain problems for policy development</title>
<link>https://hdl.handle.net/1721.1/139420</link>
<description>Prevention &amp; Reduction of Opioid Misuse with Systems Exploration: Modelling complex, uncertain problems for policy development
Lim, Tse Yang
The opioid crisis is one of the worst public health challenges in the United States, with overdoses killing over 50,000 people a year. The crisis is a complex and dynamic problem, with long delays and multiple feedbacks, in which any policy actions risk causing adverse unintended consequences. Recognising these challenges, in 2017, the National Academies of Sciences, Engineering, and Medicine recommended the development of a quantitative systems model to guide Federal government policy to address the crisis.&#13;
&#13;
Here I present PROMISE, a dynamic simulation model of the opioid crisis developed in conjunction with the US Food and Drug Administration in response to that charge. The model encompasses misuse of prescription and illicit opioids, opioid use disorder, treatment and remission, and tracks a range of health outcomes. PROMISE is calibrated to the US population using 20 years of national-level data. It brings together a more comprehensive combination of endogenous feedback mechanisms and empirically-grounded operational details than any other model of the crisis.&#13;
&#13;
Our baseline model estimates highlight the impact of supply-side changes, responses to perceived overdose risk, and the competing influences of illicit fentanyl and overdose prevention efforts in shaping the trajectory of the crisis. We find that fentanyl-driven increases in mortality far outweigh efforts to counteract them. Baseline projections indicate the crisis is shifting away from opioid use as surging mortality deters new initiates. These estimates yield the most thorough quantitative understanding of the historical trajectory of the crisis available to date, and provide a solid foundation for identifying and analysing policy solutions.&#13;
&#13;
PROMISE also serves as a practical demonstration of applied simulation modelling in two ways – first, as an empirically-grounded model of a complex and highly uncertain problem, and second, as a model and modelling process developed in collaboration with policy-makers and deployed explicitly in support of policy decision-making. I conclude with reflections on the practice and process of modelling for decision support, on using the model in analytic and discursive ways, and on its limitations and directions for future work.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139420</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity Today: Essays on Inequality in the Modern Workplace</title>
<link>https://hdl.handle.net/1721.1/139418</link>
<description>Diversity Today: Essays on Inequality in the Modern Workplace
Jackson, Summer Rachel Maria
My dissertation seeks to contribute to our understanding of how and when organizations can become diverse, equitable, and inclusive, drawing on empirical and theoretical perspectives.&#13;
&#13;
In Chapter 1, I explore the question of how organizations can hire individuals from underrepresented backgrounds. Past studies highlight a combination of demand- and supply-side constraints that create a ‘thin labor market’ for candidates from underrepresented backgrounds. Drawing on data from a 20-month ethnographic study of a fast-growth technology firm (“ShopCo,” a pseudonym), I examine ShopCo’s efforts to increase representation of racial minorities in technical positions and reveal a previously unrecognized barrier to hiring racial minorities into organizations: repugnant market concerns. In Chapter 2, Basima Tewfik (coauthor) and I theorize about the relationship between microaggressions and systemic prejudice. We offer a precise definition of microaggressions at work and propose how multi-level responses (i.e., target, workgroup, and organization) to microaggressions can intensify and amplify to either inhibit or facilitate organizational progress on addressing systemic prejudice. In Chapter 3, I use data from an 18-month ethnography of a public defender agency to develop grounded theory on the role of racial and economic disenfranchisement in an advocate’s ability to successfully influence a higher-power target. I find that public defenders need to first manage the impressions of their clients – using triadic advocacy tactics designed to address racial and economic barriers – before attempting to influence the more powerful district attorneys.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139418</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical and Transport processes in Ion Intercalation materials</title>
<link>https://hdl.handle.net/1721.1/139412</link>
<description>Electrochemical and Transport processes in Ion Intercalation materials
Fraggedakis, Dimitrios
As society moves towards environmental sustainability, energy storage becomes increasingly important. As has been widely recognized with the recent Nobel Prize in Chemistry, ion intercalation materials play an essential role in state-of-the-art energy storage technologies, such as Li-ion batteries, and they are used everywhere around us, from the phones we use to the cars we drive. An intercalation-based energy storage device can be thought of as an electrochemical plant where processes related to transport phenomena (solid/liquid diffusion) and kinetics (ion insertion/extraction) take place. Even though extensive research has been done on understanding and optimizing the solid and liquid diffusion properties of active materials and electrolytes, very few studies have elucidated the fundamentals of the ion insertion kinetics that take place at interfaces. Additionally, most ion intercalation materials tend to phase separate into ion-rich and ion-poor phases under ion insertion/extraction. Our understanding of the effects of intercalation rates, applied electric fields, and the microscopic nature of the reactions on the resulting phase morphologies is, however, incomplete. The main goal of this thesis is to understand, at a fundamental level, the factors that limit the performance of energy (e.g. Li-ion batteries) and information (memristors) storage technologies, and to provide insights and engineering solutions for next-generation intercalation-based devices that deliver high efficiency and optimal performance. &#13;
&#13;
In the first part of the thesis, I develop the theory of coupled ion-electron transfer, where both the ion and the electron have to be transferred in a concerted way. Using simulations and experiments, I demonstrate the use of the theory on ion intercalation, the fundamental process of Li-ion batteries. In the second part of this thesis, I derive a simple theory that unifies the behavior of all phase-separating electrode materials under driven ion insertion. The proposed criterion predicts the non-equilibrium phase morphology during ion insertion, which is validated by phase-field simulations of single Li&#119909;CoO2 (LCO) particles, by in situ optical imaging of single Li&#119909;C6 (graphite) particles undergoing transitions between stage 1 (&#119909; = 1) and stage 2 (&#119909; = 0.5) at different rates, and by the collapse of all available literature data for single-particle imaging of LCO, graphite, and Li&#119909;FePO4. In the third part, I investigate the influence of different electron transfer kinetics on the thermodynamic phase stability of open driven systems. Using non-equilibrium thermodynamics, I demonstrate different ways to control the morphology of interfaces during operation, e.g. ion intercalation, electroplating, and corrosion, and conclude that the electronic density of states of the electron donor is key to engineering the morphology of electrochemical interfaces. In the last part of the thesis, I focus on the effects of electric fields on the thermodynamic stability of ion intercalation materials. There, I develop the theory that describes how electric fields induce metal-to-insulator transitions and lead to effective dielectric breakdown, which is essential in (non-)volatile memristive devices. &#13;
&#13;
In summary, this thesis constitutes a comprehensive study of how kinetics, phase separation and transport phenomena affect the non-equilibrium response of ion intercalation materials. The theoretical, computational, and experimental results of the present work can serve as a guideline for engineering ion intercalation systems suitable for fast-charging energy storage devices and non-volatile neuromorphic chips. In addition to its practical aspect, the present thesis expands the boundaries of what is known in terms of reaction kinetics, phase-separating driven open systems, and non-equilibrium thermodynamics.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139412</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Science and Innovation</title>
<link>https://hdl.handle.net/1721.1/139407</link>
<description>Essays on the Economics of Science and Innovation
Stein, Carolyn
This thesis consists of three chapters on the economics of science and innovation. The first chapter studies whether the rewards for publishing first in science induce scientists to rush and produce lower-quality work; the second estimates the magnitude of these priority rewards. The third chapter studies whether male and female patent examiners treat patent applications submitted by women differently.&#13;
&#13;
The first chapter, joint with Ryan Hill, investigates how competition to publish first and thereby establish priority impacts the quality of scientific research. We begin by developing a model where scientists decide whether and how long to work on a given project. When deciding how long to let their projects mature, scientists trade off the marginal benefit of higher quality research against the marginal risk of being preempted. The most important (highest potential) projects are the most competitive because they induce the most entry. Therefore, the model predicts these projects are also the most rushed and lowest quality. We test the predictions of this model in the field of structural biology using data from the Protein Data Bank (PDB), a repository for structures of large macromolecules. An important feature of the PDB is that it assigns objective measures of scientific quality to each structure. As suggested by the model, we find that structures with higher ex-ante potential generate more competition, are completed faster, and are lower quality. Consistent with the model, and with a causal interpretation of our empirical results, these relationships are mitigated when we focus on structures deposited by scientists who – by nature of their employment position – are less focused on publication and priority.&#13;
&#13;
The second chapter, also joint with Ryan Hill, studies priority rewards in science. The scientific community assigns credit or “priority” to individuals who publish an important discovery first. We examine the impact of losing a priority race (colloquially known as getting “scooped”) on subsequent publication and career outcomes. To do so, we take advantage of data from structural biology where the nature of the scientific process together with the Protein Data Bank — a repository of standardized research discoveries — enables us to identify priority races and their outcomes. We find that race winners receive more attention than losers, but that these contests are not winner-take-all. Scooped teams are 2.5 percent less likely to publish, are 18 percent less likely to appear in a top-10 journal, and receive 20 percent fewer citations. Getting scooped has only modest effects on academic careers. Finally, we document empirical evidence suggesting that the priority reward system reinforces inequality of attention in science.&#13;
&#13;
The third chapter, joint with Jane Choi and Heidi Williams, considers the role of gender in the evaluation of patent applications submitted to the US Patent &amp; Trademark Office (USPTO). Using the quasi-random assignment of patents to patent examiners, we document two facts. First, male examiners are more lenient overall than female examiners. Second, we find that patent examiner gender appears to have no effect on the evaluation of patent applications submitted by female inventors relative to male inventors. In other words, male examiners are not differentially stringent (or lenient) compared to their female counterparts when evaluating patent applications submitted by women. Our analysis is not able to assess whether the patent application evaluation system as a whole holds female inventors to a higher standard than their male counterparts. However, these results stand in contrast with evidence from other markets which has suggested that female reviewers may hold female applicants to a higher standard than male reviewers do.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139407</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Press ‘1’ to speak to a machine: An examination of the psychological factors influencing preference for interaction with artificially intelligent actors</title>
<link>https://hdl.handle.net/1721.1/139393</link>
<description>Press ‘1’ to speak to a machine: An examination of the psychological factors influencing preference for interaction with artificially intelligent actors
Yang, Hee Jin (Heather)
What psychological factors influence the preference for interaction with a human versus an artificially intelligent actor? How can these factors be used to increase adoption of novel technologies, and what are their broader societal impacts? In this dissertation, I answer these questions through two streams of research: first, by examining what kinds of people seek out algorithmic advice; and second, by examining how the implicit application of social information to algorithmic agents impacts their interpretability and evaluation. &#13;
&#13;
In Chapter 1, I examine the individual level differences of users of artificially intelligent advisors. Across four studies, users’ cognitive style predicted advice-seeking behavior from algorithmic advisors, even after controlling for a host of consequential factors, such as prior experience with artificial intelligence, comfort with technology, social anxiety, and educational background. Building on the Dual Process theory literature, I show that increased cognitive reflection is related to increased perceptions of accuracy for algorithmic (versus human) advisors, with accuracy perceptions mediating the relationship between cognitive style and advisor preference.  I find that individuals who rely on their intuition perceive human advisors as being more accurate than algorithmic advisors, in comparison to their deliberative counterparts, and also rate algorithmic advisors as being less impartial.&#13;
&#13;
In Chapter 2, I investigate how individuals apply social stereotypes to digital voiced assistants (DVAs) and how this facilitates understanding of novel personified devices. Through experimentally pairing participants with fake artificially intelligent voiced agents, I demonstrate that individuals implicitly apply social stereotypes to the agent in the same way as they do to humans. Consistent with traditional gender stereotypes and in contrast to current academic justifications reliant on the generalized preference for female voices, I find that individuals prefer female- (versus male-) voiced artificially intelligent agents when occupying roles that are female-typed, but not male-typed, demonstrating a stereotype congruence effect. I extend this finding to show how gender-stereotype-congruent features of a novel device facilitate understanding of its capabilities for inexperienced users.&#13;
&#13;
Finally, I discuss the implications of this research for managers, policy makers, developers and users of artificially intelligent agents.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139393</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of large palindromes on the primate X chromosome</title>
<link>https://hdl.handle.net/1721.1/139389</link>
<description>Evolution of large palindromes on the primate X chromosome
Jackson, Emily Katherine
Human sex chromosomes are enriched for complex genomic architecture, including massive palindromes with arms that can exceed 1 Mb in length and arm-to-arm sequence identity higher than 99%. Palindrome arms harbor protein-coding genes with testis-biased expression, suggesting roles in male fertility. However, palindromes are under-represented in non-human reference genomes due to technical challenges associated with genomic repeats, limiting our understanding of palindrome origins and evolution. &#13;
&#13;
In this thesis, we used specialized methods to investigate the evolution of X-chromosome palindromes in primates. We used a clone-based sequencing approach that incorporates ultralong nanopore reads to generate accurate reference sequence for regions orthologous to human X palindromes in two non-human primates, the chimpanzee and the rhesus macaque. Twelve human X palindromes have conserved orthologs in both species, demonstrating a common origin at least 25 million years ago. The majority of these palindromes were missing or misassembled in existing reference genomes for these species. Comparative analyses demonstrate that natural selection preserves X-palindrome gene families, despite limited functional characterization of these genes in humans. Unexpectedly, structural comparisons of conserved palindromes between species revealed frequent rearrangements around the center of palindrome symmetry; this instability persists among human X chromosomes, which are enriched for deletions within the spacer that separates palindrome arms. &#13;
&#13;
Sequence identity between palindrome arms is maintained by high rates of intra-chromosomal gene conversion, which led us to hypothesize that palindromes may be subject to amplified effects from GC-biased gene conversion. Among twelve conserved primate X palindromes, we find that palindrome arms are significantly more GC-rich than flanking sequence, and that GC content in primate X-palindrome arms is increasing over time. Evolutionary simulations reveal that nucleotide replacement patterns between species are consistent with a magnitude of GC bias in gene conversion of around 70%, consistent with previous estimates derived from analyses of human meiosis. Altogether, the work presented in this thesis demonstrates an unexpectedly deep evolutionary history of primate X palindromes that is shaped by a complex mixture of natural selection, localized structural instability, and GC-biased gene conversion.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139389</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Talking Shop: Worker Voice and Representation in the Digital Age</title>
<link>https://hdl.handle.net/1721.1/139387</link>
<description>Talking Shop: Worker Voice and Representation in the Digital Age
Myers, Jenna E.
This dissertation analyzes how frontline worker interests can be both included and affected throughout the lifecycle of intelligent technologies (e.g., AI-enabled sensors, robotics, and analytics), with a particular focus on the role of third-party technology vendors. By drawing on a 31-month ethnographic study of a digital production monitoring technology used by manufacturing firms, I examine the barriers, facilitators, and processes that guide how frontline workers are considered during technology design, development, and deployment. In Chapter 1, I focus on technology use inside one small manufacturing firm to study when and how worker input can be included in the ongoing design of advanced technologies in the workplace. My findings highlight how a change in the vendor’s product development strategy (i.e., from top-down to user-centered) reconfigured role relations between workers, managers, and vendor representatives and subsequently influenced worker voice and involvement in technology design. In Chapter 2, I directly study the vendor’s development processes, and I address why and how vendors may establish a pro-worker focus during development. I advance the concept of technology design ideologies—which I define as developers’ beliefs about the functions and broader purpose of their technologies—and I show how developers used institutional work practices to influence their company’s existing design ideology in ways that made it more centrally concerned with the effects of the technology on machine operators’ jobs. In Chapter 3, I focus on the vendor’s creation and delivery of self-directed online learning tools as new technology features were deployed. I find that the vendor’s training efforts—which were co-produced with the users themselves—did not equally serve all user types and encountered particular barriers when directed towards frontline workers, rather than managers. 
As a whole, this dissertation contributes to research on employee involvement in workplace technologies, social constructivist theories of technologies and organizing, and information systems research on vendors of digital, intelligent technologies.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139387</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Design of Online Marketplaces and Platforms</title>
<link>https://hdl.handle.net/1721.1/139386</link>
<description>Essays on the Design of Online Marketplaces and Platforms
Holtz, David M.
This dissertation consists of three chapters that concern the design of online marketplaces and platforms. In Chapter 1, I estimate the impact of increasing the extent to which content recommendations are personalized by analyzing the results of a randomized experiment on approximately 900,000 Spotify users across seventeen countries. I find that increasing recommendation personalization increased the number of podcasts that Spotify users streamed, but also decreased the individual-level diversity of Spotify users’ podcast consumption and increased the dissimilarity between the podcast consumption patterns of different users across the population. In Chapter 2, I propose methods for obtaining unbiased estimates of the total average treatment effect (TATE) when conducting experiments in online marketplaces, and test the viability of said methods using a simulation built on top of scraped data from Airbnb. I find that blocked graph cluster randomization can reduce the bias of TATE estimates in online marketplaces by as much as 64.5%; however, this reduction in bias comes with a substantial increase in root-mean-square error (RMSE). I also find that fractional neighborhood treatment response (FNTR) exposure models and inverse probability-weighted estimators have the potential to further reduce bias, depending on the choice of FNTR threshold. In Chapter 3, I conduct two large-scale meta-experiments on Airbnb in an attempt to estimate the actual magnitude of bias in TATE estimates from marketplace interference. In both meta-experiments, some Airbnb listings are assigned to experiment conditions at the individual level, whereas others are assigned to experiment conditions at the level of clusters of listings that are likely to substitute for one another. 
The two meta-experiments measure the impact of two different pricing-related interventions on Airbnb: a change to Airbnb’s fee policy, and a change to the pricing algorithm that Airbnb uses to recommend prices to sellers. Results from the fee policy meta-experiment reveal that at least 32.60% of the treatment effect estimate in the Bernoulli-randomized meta-experiment arm is due to interference bias. Results from the pricing algorithm meta-experiment highlight the difficulty of detecting interference bias when treatment interventions require intention-to-treat analysis.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139386</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial Methods in Statistics</title>
<link>https://hdl.handle.net/1721.1/139383</link>
<description>Combinatorial Methods in Statistics
Turner, Paxton Mark
This thesis explores combinatorial methods in random vector balancing, nonparametric estimation, and network inference. First, motivated by problems from controlled experiments, we study random vector balancing from the perspective of discrepancy theory, a classical topic in combinatorics, and give sharp statistical results along with improved algorithmic guarantees. Next, we focus on the problem of density estimation and investigate the fundamental statistical limits of coresets, a popular framework for obtaining algorithmic speedups by replacing a large dataset with a representative subset. In the following chapter, motivated by the problem of fast evaluation of kernel density estimators, we demonstrate how a multivariate interpolation scheme from finite-element theory based on the combinatorial-geometric properties of a certain mesh can be used to significantly improve the storage and query time of a nonparametric estimator while also preserving its accuracy. Our final chapter focuses on pedigree reconstruction, a combinatorial inference task of recovering the latent network of familial relationships of a population from its extant genetic data.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139383</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplicative Structures on Brown–Peterson Spectra at Odd Primes</title>
<link>https://hdl.handle.net/1721.1/139382</link>
<description>Multiplicative Structures on Brown–Peterson Spectra at Odd Primes
Senger, Andrew
We show that the odd-primary Brown-Peterson spectrum does not admit the structure of an E₂₍ₚ²₊₂₎ ring spectrum and that there can be no map MU → BP of E₂ₚ₊₃ ring spectra for odd primes p. This extends results of Lawson at the prime 2.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139382</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Macroeconomics and International Trade</title>
<link>https://hdl.handle.net/1721.1/139381</link>
<description>Essays on Macroeconomics and International Trade
Fukui, Masao
This thesis consists of three essays. In the first essay, I develop a new theory of wage rigidity and unemployment fluctuations. The starting point of my analysis is a generalized version of Burdett and Mortensen’s (1998) job ladder model featuring risk-neutral firms, risk-averse workers, and aggregate risk. Because of on-the-job search, my model generates wage rigidity both for incumbent workers, through standard insurance motives, and for new hires, through novel strategic complementarities in wage-setting between firms. In contrast to the conventional wisdom in the macro literature, the introduction of on-the-job search implies that: (i) the wage rigidity of incumbent workers, rather than new hires, is the critical determinant of unemployment fluctuations; (ii) fairness considerations in wage-setting dampen, rather than amplify, unemployment fluctuations; and (iii) new hire wages are too flexible, rather than too rigid, in the decentralized equilibrium. Quantitatively, the wage rigidity of incumbent workers caused by the insurance motive alone accounts for about one-fifth of the unemployment fluctuations observed in the data.&#13;
&#13;
&#13;
In the second essay (joint with Arnaud Costinot and David Atkin), we study the relationship between international trade and development in a model where countries differ in their capability, goods differ in their complexity, and capability growth is a function of a country’s pattern of specialization. Theoretically, we show that it is possible for international trade to increase capability growth in all countries and, in turn, to push all countries up the development ladder. This occurs because: (i) the average complexity of a country’s industry mix raises its capability growth, and (ii) foreign competition is tougher in less complex sectors for all countries. Empirically, we provide causal evidence consistent with (i) using the entry of countries into the World Trade Organization as an instrumental variable for other countries’ patterns of specialization. The opposite of (ii), however, appears to hold in the data. Through the lens of our model, these two empirical observations imply dynamic welfare losses from trade that are small for the median country, but pervasive and large among a few developing countries.&#13;
&#13;
&#13;
In the third essay, I build a model of endogenous capital flow reversal. In the data, capital tends to flow from fast-growing countries to slow-growing countries, contrary to the prediction of neoclassical models. I propose a parsimonious theory in which slower growth causes capital inflow. The theory builds on the idea that financial development is demand-driven. In the model, a relatively larger demand for stores of value in slow-growing countries stimulates domestic financial innovation. The endogenous response of financial development can be strong enough to attract capital inflow. This contrasts with the existing theories in which slow-growing countries happen to have relatively well-developed financial markets.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139381</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Homotopy Theory of Stratified Spaces</title>
<link>https://hdl.handle.net/1721.1/139376</link>
<description>On the Homotopy Theory of Stratified Spaces
Haine, Peter J.
This thesis is broken into two parts. The first part (Chapters 2 to 6) is dedicated to proving a 'homotopy hypothesis' for stratified spaces. Specifically, given a poset P, we show that the ∞-category Strₚ of ∞-categories with a conservative functor to P can be obtained from the ordinary category of P-stratified topological spaces by inverting a class of weak equivalences. For suitably nice P-stratified topological spaces, the corresponding object of Strₚ is the exit-path ∞-category of MacPherson, Treumann, and Lurie. To prove this stratified homotopy hypothesis, we define a combinatorial simplicial model structure on the category of simplicial sets over the nerve of P whose underlying ∞-category is the ∞-category Strₚ. This model structure on P-stratified simplicial sets allows us to easily compare other theories of P-stratified spaces to ours and deduce that they all embed into ours.&#13;
&#13;
The second part (Chapters 7 to 9) explores a number of consequences of this stratified homotopy hypothesis, as well as related results on exit-path ∞-categories and constructible sheaves. This includes an overview of our joint work with Barwick and Glasman on exit-path categories in algebraic geometry; this work uses as input the perspective on stratified spaces provided by our stratified homotopy hypothesis.&#13;
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139376</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Provable Algorithms for Learning and Variational Inference in Undirected Graphical Models</title>
<link>https://hdl.handle.net/1721.1/139373</link>
<description>Provable Algorithms for Learning and Variational Inference in Undirected Graphical Models
Koehler, Frederic
Graphical models are a general-purpose tool for modeling complex distributions in a way which facilitates probabilistic reasoning, with numerous applications across machine learning and the sciences. This thesis deals with algorithmic and statistical problems of learning a high-dimensional graphical model from samples, and related problems of performing inference on a known model, both areas of research which have been the subject of continued interest over the years. Our main contributions are the first computationally efficient algorithms for provably (1) learning a (possibly ill-conditioned) walk-summable Gaussian Graphical Model from samples, (2) learning a Restricted Boltzmann Machine (or other latent variable Ising model) from data, and (3) performing naive mean-field variational inference on an Ising model in the optimal density regime. These different problems illustrate a set of key principles, such as the diverse algorithmic applications of “pinning” variables in graphical models. We also show in some cases that these results are nearly optimal due to matching computational/cryptographic hardness results.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139373</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to act with objects, relations and physics</title>
<link>https://hdl.handle.net/1721.1/139368</link>
<description>Learning to act with objects, relations and physics
Allen, Kelsey Rebecca
Humans display an unrivalled degree of control over their environments. From a young age, humans represent the world in ways that allow them to not just make inferences about how the world works, but also to act and intervene on the world in order to accomplish their goals. Even children can pick up a new skill like “catapulting” from a single demonstration or just a few trials of experience, while it might take a machine agent several hundreds, thousands, or even millions of attempts to master such a skill. For those focused on better understanding these human capabilities, or for those wishing to build more flexible and efficient machines, the computational question is the same: how do people learn and generalize to new problems from just a handful of experiences? &#13;
&#13;
This thesis presents physical problem solving as a window on the flexibility and efficiency of human and machine action. Across two tasks introduced and studied in this thesis, the Gluing Task and the Virtual Tools game, structured action spaces and mental simulation are crucial to explaining human behavior. These action spaces are both object-oriented and relational, and their representations can be learned with techniques such as deep reinforcement learning or program induction to enable better generalization to new problems. By combining structured action spaces with mental simulation, humans and machines can be efficient in the number of actions they require to solve problems and compositionally integrate information gained by trial-and-error experience with information gained by passive observation. Embodied real world experience can additionally affect how much humans rely on mental simulation in physical problem solving. Individuals born with limb differences, like having only one hand, spend significantly more time thinking and less time acting when faced with a physical puzzle, perhaps reflecting a higher cost of action learned from their everyday experience. Taken together, these results suggest that the flexibility of human physical problem solving stems from the mental simulations people employ when faced with new problems, while the efficiency of human search rests upon appropriately structured action spaces that can be rapidly transformed through minimal trial-and-error experience.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139368</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering M13 Bacteriophage Nanoplatforms for Diagnostic and Therapeutic Applications</title>
<link>https://hdl.handle.net/1721.1/139365</link>
<description>Engineering M13 Bacteriophage Nanoplatforms for Diagnostic and Therapeutic Applications
Tsedev, Uyanga
M13 bacteriophage, a naturally monodisperse multifunctional nanostructure, consists of thousands of distinct protein subunits organized in a filamentous viral capsid, 900 nm in length and 6 nm in diameter. All M13 capsids are amenable to mutation and can be tuned for the binding and nucleation of inorganics and nanoparticles, and for the expression of ligands, functional moieties, and even enzymes. To harness these capabilities for medical imaging and therapy, the author has (i) tailored the assembly of M13 into ultra-short, ‘inho’, phage-derived particles, (ii) developed a chlorotoxin (CTX) motif on the M13 p3 capsid to enable phage particle crossing of the blood-brain-barrier and homing to glioblastoma cancer cells, and (iii) built ‘inho’ phage-derived transgene cassettes for phage gene delivery in mammalian cells. Tight control over the genetic sequence provided by ‘inho’ phagemids allows production of phage particles ranging in length from 25 nm to over 2500 nm, as dictated by the length of the packaged DNA. This length control over the phage filament is used to demonstrate the impact of particle length on the morphology of phage-templated metal nanofoams and on the in-vitro and in-vivo tissue trafficking of targeted phage nanocarriers. An optimal length for enhancing ion transport and active material access in MnOx cathodes is described. Chlorotoxin-phages, conjugated with indocyanine green dye (ICG), are visualized in-vivo in the second window near infrared (SWIR) and home effectively to mouse brain tumors. Ultra-short, 50 nm chlorotoxin-phage particles are shown to vastly improve this localization specificity. Additionally, the ‘inho’ phagemid system is engineered to produce ITR-flanked transgene cassettes. Such reporter genes packaged within targeted, cationically modified, ‘inho’ phages are able to transduce liver and brain cancer cells. 
The closed-ended, single-stranded ‘inho’ phage-derived cassettes have a capacity of up to 20 kilobases and can be delivered within phage particles as well as non-viral delivery vehicles. Ultimately, the miniaturized, chlorotoxin-targeted M13 phage carrying therapy or imaging agents is considered here as a complete nanotheranostic platform that could augment the therapeutic efficacy of combination drugs shuttled to the site of glioma. The described multimodal nanoplatform is re-designable for applications in nanomaterials, diagnostics, and across disease types.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139365</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Control in Robotic Excavation</title>
<link>https://hdl.handle.net/1721.1/139358</link>
<description>Methods for Control in Robotic Excavation
Sotiropoulos, Filippos Edward
Robotic excavation has been proposed as a means to meet the ever-growing demand for skilled excavator operators in the construction, infrastructure, mining and agriculture industries. The development of fully autonomous excavation systems is hindered by inadequate methods to deal with the interaction between the machine and soil. The dynamics arising from the bucket-earth interaction present a unique challenge from the robotics perspective. Not only is the behaviour of earth and fragmented rock highly nonlinear and distributed, but the terrain profiles and properties of excavation sites are also uncertain, unstructured, and non-uniform. In this thesis, new methods are presented and tested to tackle the current limits in robotic excavation arising from terramechanics modeling.&#13;
&#13;
In particular, methods are presented to tackle the soil interaction problem in the context of three tasks typically encountered in robotic excavation. Firstly, we address the efficient bulk removal of material by introducing an approach that adapts for unknown and varying soil properties. Unlike traditional force control or trajectory control, the method uses the power transmitted from the excavator to the soil as a signal for adaptive excavation. Using an extremum-seeking algorithm, an optimal excavation depth that maximizes the output power of the machine is sought.&#13;
&#13;
Secondly, we tackle terrain shaping, where the objective is to create a final desired site shape. Here, more precise control of the bucket trajectory is required and a model-based approach is presented. Based on a novel form of lifting linearization termed Dual-Faceted Linearization (DFL), in which a nonlinear dynamical system is cast into a higher-dimensional space where its dynamics are more linear, a model for the bucket-earth interactions is obtained from data and utilized for effective path-following control.&#13;
&#13;
Finally, the problem of gathering larger monolithic rocks mixed with loose material is addressed. By training a Gaussian process model, we capture the motion of a rock arising from its interactions with the bucket and surrounding material. Leveraging this model, together with online estimation, we are able to reliably collect rocks of varying geometry.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139358</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a lithium-mediated nitrogen reduction process</title>
<link>https://hdl.handle.net/1721.1/139352</link>
<description>Development of a lithium-mediated nitrogen reduction process
Lazouski, Nikifar
Ammonia is an important industrial chemical that is used predominantly for producing nitrogen-containing fertilizers as well as nitric acid, polymers, and pharmaceuticals. It is also being considered as an energy-dense, carbon-free energy carrier. Today, ammonia (NH₃) is produced via the Haber-Bosch process, in which air, water, and fossil fuels are used to make nitrogen and hydrogen, which are then reacted at elevated temperatures and pressures to produce NH₃. The use of fossil fuels as a hydrogen and energy source leads to significant CO₂ emissions. In addition, the process is very centralized and capital-intensive, which makes expanding ammonia production capacity, particularly in a distributed manner, difficult. Electrochemical methods for producing ammonia from air, water, and renewable electricity have been proposed as possible solutions to these issues. &#13;
&#13;
In this thesis, we investigated the use of a lithium-mediated electrochemical method for nitrogen reduction to produce NH₃. A setup for reproducibly producing ammonia using the chemistry was developed, and the impacts of changing operating conditions such as electrolyte composition and applied current density were studied. The results of these experiments were used to develop a detailed coupled kinetic-transport model for ammonia production. We found that the rate of diffusion of nitrogen through both the solid-electrolyte interphase (SEI) and the bulk electrolyte generally defines ammonia production rates and selectivities. To overcome diffusion limitations in the bulk electrolyte, solvent-agnostic gas diffusion electrodes were developed. By using these electrodes, a maximum NH₃ formation rate of 30 nmol cm⁻² s⁻¹ (8.8 mA cm⁻²) was obtained; the highest Faradaic efficiency for ammonia obtained was 47.5%. Hydrogen oxidation at &gt;25 mA cm⁻² and 100% FE was demonstrated. A technoeconomic model for a general, large-scale electrochemical ammonia production process was developed and was applied to specifically analyze lithium-mediated nitrogen reduction. This work demonstrates that lithium-mediated nitrogen reduction can be used for ammonia production, though significant improvements in operating parameters are necessary for the process to be economically viable at scale. We believe the tools and findings developed in this work are useful for both electrochemical ammonia synthesis and other synthetically relevant electrochemical reactions.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139352</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accounting for Computational Expenditures in Bayesian Experimental Design</title>
<link>https://hdl.handle.net/1721.1/139351</link>
<description>Accounting for Computational Expenditures in Bayesian Experimental Design
Zheng, Sue
Many analysis problems involve the collection of data. The task of selecting the conditions for the data collection process is known as experimental design. The Bayesian optimal experimental design (BOED) formulation uses Bayesian inference to update beliefs after observing data and selects designs by optimizing a utility function, most commonly mutual information; this optimization is computationally challenging. Real-world problems entail complicated forward models that relate the data to the unknown quantities of interest. Identifying informative designs requires using these models to assess the potential data provided under different experimental conditions. Furthermore, evaluation of information measures is generally intractable, and estimation is often computationally expensive. This work focuses on computation as a resource constraint in experimental design, a constraint that appears in two very different ways.&#13;
&#13;
The first relates to choices in the representation and how they influence the cost of experimental design.  We explore this in the sensor planning problem for detecting special nuclear materials in cargo containers.  We demonstrate the costs and benefits of modeling an additional physics phenomenon (i.e., Compton scattering) with respect to inference costs and information gain.  While these results are specific to this problem, it demonstrates how consideration to the representation can lead to computational savings.&#13;
&#13;
Secondly, we consider the computational resources required to evaluate information measures. In this setting, we treat the computational costs explicitly to develop one approach, using a robust estimator, that facilitates computation reuse between rounds of data collection, and another that adaptively allocates computation towards targeted designs in a cost-aware manner. Importantly, this latter approach overtly trades off computation for performance, allowing one to strike the desired balance between the costs and benefits of experimental design, and can be used in problems with a limited computational budget. We expect these approaches to broaden the application of Bayesian experimental design.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139351</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Methods for Scalable Bayesian and Causal Inference with Provable Quality Guarantees</title>
<link>https://hdl.handle.net/1721.1/139350</link>
<description>Practical Methods for Scalable Bayesian and Causal Inference with Provable Quality Guarantees
Agrawal, Raj
Many scientific and decision-making tasks require learning complex relationships between a set of &#119901; covariates and a target response, from &#119873; observed datapoints with &#119873; ≪ &#119901;. For example, in genomics and precision medicine, there may be thousands or millions of genetic and environmental covariates but just hundreds or thousands of observed individuals. Researchers would like to (1) identify a small set of factors associated with diseases, (2) quantify these factors’ effects, and (3) test for causality. Unfortunately, in this high-dimensional data regime, inference is statistically and computationally challenging due to non-linear interaction effects, unobserved confounders, and the lack of randomized experimental data.&#13;
&#13;
In this thesis, I start by addressing the problems of variable selection and estimation when there are non-linear interactions and fewer datapoints than covariates. Unlike previous methods, whose runtimes scale at least quadratically in the number of covariates, my new method (SKIM-FA) uses a kernel trick to perform inference in linear time by exploiting special interaction structure. While SKIM-FA identifies potential risk factors, not all of these factors need be causal. So next I aim to identify causal factors to aid in decision making. To this end, I show when we can extract causal relationships from observational data, even in the presence of unobserved confounders, non-linear effects, and a lack of randomized controlled data. In the last part of my thesis, I focus on experimental design. Specifically, if the observational data are not adequate, how do we optimally collect new experimental data to test whether particular causal relationships of interest exist?
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139350</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low Power Adaptive Wireless Circuits for the Internet of Things and In-body implants</title>
<link>https://hdl.handle.net/1721.1/139349</link>
<description>Low Power Adaptive Wireless Circuits for the Internet of Things and In-body implants
Abdelhamid, Mohamed Radwan
The emergence of the Internet of Things (IoT) and the desire for novel biomedical applications have resulted in growing demands for ultra low power wireless systems and circuits. To drive down energy consumption, conventional approaches for designing wireless systems focus on independently optimizing each layer of the design, whether energy harvesting, sensor interfaces, security accelerators, or wireless protocols and MAC algorithms. While these approaches have delivered significant performance improvements, they remain inherently constrained by the performance of each respective layer.&#13;
&#13;
This thesis demonstrates that by rethinking the abstractions across these layers and co-designing the entire stack of end-to-end wireless systems, we can build adaptive and ultra-low-power integrated systems with new capabilities and serve new applications. At the core of the innovations presented in this thesis are techniques that enable end-to-end adaptation ranging from reprogrammable antennas and harvesting circuits to adaptive wireless protocols and analog front-ends.&#13;
&#13;
I demonstrate the value of my approach by designing, fabricating, and evaluating three end-to-end wireless systems, each fully integrated in a 65nm CMOS IC, for IoT and in-body applications. First, I present the first fully-integrated wireless and batteryless micro-implanted sensor, which powers up by harvesting energy from RF signals and communicates at less than 400nW via backscatter. In contrast to prior designs, which cannot operate across various in-body environments, my sensor can self-reconfigure to adapt to different tissues and channel conditions. Second, I present the first secure, wireless, and batteryless implantable sensor node for in-body pressure sensing. The node uses a piezoelectric sensor for in-body gastrointestinal (GI) pressure sensing and a loop antenna for wireless power and data communication. The pressure sensor front end, including the front-end amplifiers, achieves an efficiency of 4.3nJ/conversion step with a resolution of 1.4mmHg. Third, I present a Bluetooth Low Energy wake-up receiver with a −80dBm sensitivity, using a packet structure and a duty cycling scheme compliant with the Bluetooth Low Energy advertising protocol that trade off power with latency. Event-driven applications achieve power lower than 240nW from a 0.75V supply, while latency-critical systems wake up in about 200 microseconds. The thesis describes the design, implementation, and evaluation of each of these systems, and tests them in both simulation and representative real-world environments such as in-vitro and ex-vivo setups for biomedical implants.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139349</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Miniaturizing High Step-Down, High Output Current Power Converters</title>
<link>https://hdl.handle.net/1721.1/139348</link>
<description>Miniaturizing High Step-Down, High Output Current Power Converters
Ranjram, Mike Kavian
Power conversion systems providing high voltage step-down capability at high output current are required in many applications, such as data center servers, electric vehicle charging, and USB power delivery. Converter miniaturization is a critical but especially challenging design goal, and transformers present a key bottleneck in this effort. To address this challenge, a new paradigm for magnetic component design is proposed in which magnetic and electronic elements are viewed as a single "coupled electronic and magnetic system" (CEMS).&#13;
&#13;
The first proposed CEMS is the Variable Inverter/Rectifier Transformer (VIRT), which enables a transformer with fractional and reconfigurable effective turns ratios (e.g. 12:0.5, 12:2/3, 12:1, and 12:2). Its wide gain variation and high step-down capability are utilized in a 120-380V input, 5-20V, 5A/36W output dc/dc converter having a peak efficiency of 96% and greater than 93% efficiency across the wide range. The VIRT is also employed in a two-stage universal ac input, 5/9/12V, 5A/50W output portable charger having a component power density of 55W/in³ and a peak end-to-end efficiency of 95.7%.&#13;
&#13;
Challenges associated with leveraging highly interleaved high-layer-count planar windings - another means for handling high current - are elucidated and mitigation strategies are proposed. A novel winding termination strategy is demonstrated to reduce ac resistance by more than 40% in a highly interleaved design. &#13;
&#13;
A CEMS that is especially well suited for processing high output current is derived by combining the VIRT with popular multi-phase concepts. The resulting split-phase half-turn VIRT is employed in a 380V input, 12V/1kW output data center supply having a peak efficiency of 97.7% and a full-load efficiency of 97.1% with a transformer volume up to 36% smaller than best-in-class alternatives. Finally, a generalized modeling framework for developing new CEMS implementations is presented.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139348</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Spatial Labor Markets and Public Policies</title>
<link>https://hdl.handle.net/1721.1/139347</link>
<description>Essays on Spatial Labor Markets and Public Policies
Fournier, Juliette
This thesis consists of three essays on spatial labor markets and public policies. I study successively the interactions of space with job search, demography and housing policy.&#13;
&#13;
In the first essay, I develop a framework to study theoretically and quantitatively the welfare attributes of spatial mismatch, defined as a misalignment between where job seekers reside and suitable employment opportunities. In a quantitative urban model with frictional labor markets, the structure of the city interacts with labor markets because commuting is costly and information about job offers decays with distance. The decentralized equilibrium might feature too much or too little spatial mismatch, depending on whether commuting costs or information decay dominates. When commuting costs prevail, the constrained-efficient allocation may be restored by a mix of moving-to-opportunity and enterprise zone interventions that bring jobs and workers together.&#13;
&#13;
The second essay, joint with David Autor, studies the relationship between population age and population density in the United States. We document the inversion of the rural-urban age gradient between 1950 and 2019. Whereas in 1950, residents in the least dense counties were on average 4.5 years younger than their counterparts in the most dense counties, by 2019 residents of the most rural counties were 2.7 years older than those in the most urban counties, a swing of 7.2 years. We show that sharp temporal changes in age-specific migration rates were the predominant contributor to this reversal.&#13;
&#13;
In the third essay, Hector Blanco and I examine the distributional implications of the shift from public housing to subsidized private housing initiated by the U.S. government over the past few decades. We build a quantitative urban framework where housing assistance complements income taxation to redistribute across workers. We argue that provision of affordable housing involves a trade-off between indirect pecuniary redistribution and direct amenity spin-offs. On the one hand, public housing drives local rents down, while amplifying the spatial concentration of poverty. On the other hand, project- and tenant-based rental assistance enhances the local amenities of subsidized households by promoting mixed-income communities, but pushes private landowners’ rents up.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139347</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial neural network and precision genome engineering frameworks for genetic system engineering in mammalian cells</title>
<link>https://hdl.handle.net/1721.1/139346</link>
<description>Artificial neural network and precision genome engineering frameworks for genetic system engineering in mammalian cells
Palacios, Sebastian
Synthetic biology is an emerging discipline that merges biology with engineering design principles, with the objective of engineering novel biological behavior in living organisms. The engineering process in synthetic biology, however, is often unpredictable and inefficient. Therefore, there is a need for frameworks that enable efficient and predictable engineering in synthetic biology. In this thesis, I introduce machine learning and precision genome engineering frameworks aimed at advancing the engineering process in mammalian synthetic biology. &#13;
&#13;
The genome engineering framework is a recombinase-mediated cassette exchange (RMCE) landing pad. I apply this framework to address challenges associated with organoid engineering, such as the inability to reliably engineer organoid formation and maturation. In particular, I demonstrate the engineering of a genetically encoded miRNA sensor in hiPSCs and their differentiated derivatives, including derivatives generated using GATA6-mediated and Neurogenin-mediated directed differentiation. The sensor provides a fluorescent readout based on the activity of a specific miRNA with single-cell resolution, and has applications in organoid engineering and basic research. The machine learning frameworks consist of artificial neural networks. In particular, I introduce the use of artificial neural networks applied to the problem of modeling and optimization of genetic systems, which I demonstrate on a biological cancer cell classifier, a type of genetic system for discriminating cancerous from noncancerous cells. Moreover, I introduce the use of artificial neural networks trained on single-cell flow cytometry measurements aimed at modeling and predicting the behavior of genetic systems in mammalian cells.&#13;
&#13;
Together, these frameworks advance our ability to engineer genetic systems in synthetic biology. In particular, the results in this thesis demonstrate the utility of artificial neural networks for genetic system engineering in mammalian synthetic biology. Moreover, I expect that the genome engineering framework described can be employed synergistically in the context of constructing and implementing genetic systems designed utilizing the machine learning frameworks presented in this work.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139346</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Assessment and Optimal Response Strategies for Resilience of Electric Power Infrastructure to Extreme Weather</title>
<link>https://hdl.handle.net/1721.1/139339</link>
<description>Risk Assessment and Optimal Response Strategies for Resilience of Electric Power Infrastructure to Extreme Weather
Chang, Hao-Yu Derek
Extreme weather is an increasingly critical threat to infrastructure systems. This thesis develops a stochastic modeling and decision-making framework for proactive resource allocation and response strategies to improve the resilience of electric power infrastructure in the face of severe weather events. The framework is based on a physically-based, probabilistic risk assessment approach to estimating weather-induced damage, and accounts for power flow constraints in designing response actions within electricity distribution networks. &#13;
&#13;
Firstly, we formulate an asymmetric hurricane wind field model that is applicable to forecasting and large-scale ensemble simulation. The hurricane wind field model incorporates low-wavenumber asymmetries, and its parameters are estimated using a Constrained Nonlinear Least Squares problem. Inclusion of asymmetries in the model improves the accuracy of wind risk assessment in the hurricane eye wall, where wind velocities are maximized. &#13;
&#13;
Secondly, the wind field forecasts are used as inputs to a probabilistic model for damage estimation in infrastructure systems. The novelty of this damage model is that it accounts for the spatial variability in damage estimates resulting from the hurricane wind field and forecast uncertainty in the hurricane’s temporal evolution. We demonstrate that our model is capable of accurately predicting outage rates resulting from damage to the electrical grid following Hurricane Michael. &#13;
&#13;
Thirdly, we develop a computational approach for optimal resource allocation and multi-step response operations. Using a two-stage stochastic mixed-integer formulation, we model the strategic deployment of distributed energy resources (DERs) ahead of a storm’s landfall, and the joint operation of islanded microgrids and repair of damaged components in the post-storm stage. The failure scenarios in this formulation are drawn from our physically-based damage model. The key challenge here is that the size of the optimization problem increases super-linearly with the network size. To address this computational bottleneck, we develop three solution approaches based on L-shaped Benders decomposition. These approaches incorporate the network structure and power flow constraints to derive more effective Benders cuts. We evaluate the scalability of these approaches on benchmark networks, and show that they are useful in evaluating the resiliency improvements due to optimal DER allocation and response strategies under various resource constraints.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139339</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconfigurable Photonics based on Broadband Low-loss Optical Phase Change Materials</title>
<link>https://hdl.handle.net/1721.1/139332</link>
<description>Reconfigurable Photonics based on Broadband Low-loss Optical Phase Change Materials
Zhang, Yifei
Optical phase change materials (O-PCMs), a unique group of materials featuring drastic optical property contrast upon solid-state phase transition, have found widespread adoption in photonic switches and routers, reconfigurable meta-optics, reflective displays, and optical neuromorphic computers. Current phase change materials, such as Ge-Sb-Te (GST), simultaneously exhibit large contrast in both refractive index (Δn) and optical loss (Δk). The coupling of these two optical properties fundamentally limits the function and performance of many potential applications. We report a new class of O-PCMs, Ge-Sb-Se-Te (GSST), which breaks this traditional coupling. Guided by first-principles computational design, the compositionally optimized alloy Ge₂Sb₂Se₄Te₁ achieves an unprecedented material figure of merit (FOM) over two orders of magnitude larger than that of classical GST alloys, benefiting from blue-shifted interband transitions as well as minimal free-carrier absorption, as confirmed by Hall measurements. In-situ heating TEM and XRD measurements are carried out to confirm and understand the crystal structures of Ge₂Sb₂Se₄Te₁. We show that the optimized alloy, combining broadband low loss (1 – 18.5 μm), large optical contrast (Δn = 2.0), and significantly improved glass forming ability, enables an entirely new field of integrated and free-space photonic applications.&#13;
&#13;
Based on the extraordinary optical and switching properties of this new O-PCM, we demonstrate integrated and free-space photonic applications with record-low losses and nonvolatile reconfigurable switching. Nonvolatile optical switches with both narrow-band and broadband responses were realized based on Ge₂Sb₂Se₄Te₁. Their record low loss and switching contrast, derived from the exceptional FOM of the material, qualify the devices as useful building blocks for scalable photonic networks. A transient directional coupler is proposed and realized to facilitate wafer-scale photonic testing. Material cycling lifetime is also investigated by tracking the reflectance contrast between the amorphous and crystalline states on a single pixel over 1,000 cycles. For the first time, large-scale electrically-driven active metasurfaces based on PCMs are demonstrated with a geometry-optimized metal-heater platform. Devices including reconfigurable spectral filters with a record-large spectral tuning range and metasurface deflectors are demonstrated.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139332</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Intelligence via Graph Neural Networks</title>
<link>https://hdl.handle.net/1721.1/139331</link>
<description>Modeling Intelligence via Graph Neural Networks
Xu, Keyulu
Artificial intelligence can be more powerful than human intelligence. Many problems that are challenging from a human perspective involve seeking statistical patterns in complex, structured objects, such as drug molecules and the global financial system. Advances in deep learning have shown that the key to solving such tasks is to learn a good representation. Given representations of the world, the second aspect of intelligence is reasoning. Learning to reason implies learning to implement a correct reasoning process, both within and outside the training distribution. &#13;
&#13;
In this thesis, we address the fundamental problem of modeling intelligence that can learn to represent and reason about the world. We study both questions through the lens of graph neural networks, a class of neural networks acting on graphs. First, we can abstract many objects in the world as graphs and learn their representations with graph neural networks. Second, we shall see how graph neural networks exploit the algorithmic structure in reasoning processes to improve generalization. &#13;
&#13;
This thesis consists of four parts. Each part studies one aspect of the theoretical landscape of learning: representation power, generalization, extrapolation, and optimization. In Part I, we characterize the expressive power of graph neural networks for representing graphs, and build maximally powerful graph neural networks. In Part II, we analyze generalization and show implications for what reasoning a neural network can sample-efficiently learn. Our analysis takes into account the training algorithm, the network structure, and the task structure. In Part III, we study how neural networks extrapolate and under what conditions they learn the correct reasoning outside the training distribution. In Part IV, we prove global convergence rates and develop normalization methods that accelerate the training of graph neural networks. Our techniques and insights go beyond graph neural networks, and extend broadly to deep learning models.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139331</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Algorithms, Protocols and Hardware Architectures for Next-Generation Cryptography in Embedded Systems</title>
<link>https://hdl.handle.net/1721.1/139330</link>
<description>Efficient Algorithms, Protocols and Hardware Architectures for Next-Generation Cryptography in Embedded Systems
Banerjee, Utsav
The Internet of Things (IoT) consists of an ever-growing network of wireless-connected electronic devices which are always collecting, processing and communicating data. While the IoT has inspired many new applications, these embedded devices have unique security challenges, making IoT security a major concern. Security architectures for IoT devices, both software and hardware, must have low power and energy consumption while still providing strong cryptographic guarantees and side-channel resilience. Network security protocols use a variety of cryptographic algorithms to achieve these goals. However, the associated computational complexity makes it extremely important to have low-power and energy-efficient embedded implementations of cryptography, especially public key algorithms.&#13;
&#13;
The research presented in this thesis demonstrates the design, implementation and experimental validation of efficient next-generation cryptography for embedded systems using software optimization, hardware acceleration and software-hardware co-design, along with side-channel countermeasures. Using circuit, architecture and algorithm techniques, efficient hardware-accelerated implementations of elliptic curve cryptography, pairing-based cryptography, lattice-based cryptography and other post-quantum cryptography algorithms are demonstrated with up to two orders of magnitude energy savings compared to state-of-the-art software and hardware. These configurable hardware accelerators are further coupled with a low-power micro-processor to provide the flexibility to implement a wide variety of security protocols, thus enabling strong and affordable security for energy-limited IoT nodes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139330</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactions between Atmospheric Deep Convection and the Surrounding Environment</title>
<link>https://hdl.handle.net/1721.1/139328</link>
<description>Interactions between Atmospheric Deep Convection and the Surrounding Environment
Abbott, Tristan H.
Atmospheric deep convection is a central player in Earth's climate, influencing the vertical distribution of energy and moisture in the atmosphere, modulating the hydrological cycle, and altering Earth's energy balance. However, many interactions between convection and other components of the climate system remain poorly understood and poorly simulated in global climate models. Motivated by this challenge, I present work on several topics related to the coupling between atmospheric deep convection and the surrounding environment.&#13;
&#13;
First, I explore the effects of global warming on convective precipitation extremes. I show that, in idealized convection-permitting simulations, updraft velocity profiles associated with extreme precipitation collapse to a climate-invariant shape when plotted in a moisture-based vertical coordinate. I then leverage the collapse to better understand the success of a "Clausius-Clapeyron" scaling of precipitation extremes with surface specific humidity, providing a foundation for its use as a baseline prediction for changes in extreme rainfall in a warming world.&#13;
&#13;
Second, I focus on explaining observations that suggest a link between high atmospheric aerosol concentrations and more vigorous deep convection. Based on a combination of convection-permitting simulations and simple scale analysis, I argue that the link may arise through a novel "humidity-entrainment" mechanism that relies on a two-way coupling between clouds and clear-air humidity. &#13;
&#13;
Finally, I examine the response of convection over tropical land to large perturbations in atmospheric humidity, with an emphasis on the potential for bistability. Similar to past studies of convection over tropical oceans, I find that convection-permitting simulations over a moist land-like surface support both precipitating equilibria (attained when initialized with a humid atmosphere) and non-precipitating equilibria (attained when initialized with a dry atmosphere). However, I also find that soil moisture feedbacks—a distinctive feature of land surfaces—can destabilize non-precipitating equilibria when uneven surface drying produces spatial soil moisture gradients. This negative soil moisture-precipitation feedback arises from the effects of mesoscale circulations on the boundary layer moisture budget, highlighting a potentially-important role for surface heterogeneity in limiting the duration of dry spells over tropical land.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139328</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Taichi High-Performance and Differentiable Programming Language for Sparse and Quantized Visual Computing</title>
<link>https://hdl.handle.net/1721.1/139327</link>
<description>The Taichi High-Performance and Differentiable Programming Language for Sparse and Quantized Visual Computing
Hu, Yuanming
Using traditional programming languages such as C++ and CUDA, writing high-performance visual computing code is often laborious and requires deep expertise in performance engineering. This implies an undesirable trade-off between performance and productivity. Emerging visual computing workloads, such as sparse data structure operations, differentiable programming, and quantized computation, lead to further development difficulties with existing programming systems. To address these issues, we propose Taichi, an imperative and parallel programming language tailored for developing high-performance visual computing systems. Taichi leverages domain-specific features of visual computing tasks, providing first-class abstraction and support for spatially sparse computation, differentiable programming, and quantization. With Taichi’s optimizing compiler, which has a high-level understanding of these domain-specific language constructs and automatically optimizes Taichi programs, we achieve performance and productivity simultaneously in various visual computing tasks, especially physical simulation. For example, with Taichi we can easily achieve 4.55× higher performance using 1/10 the lines of code on sparse computations, effortlessly develop 10 differentiable physical simulators, and simulate an unprecedented 235 million material point method (MPM) particles on a single GPU.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139327</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fibrous Membranes in Personal Protective Applications</title>
<link>https://hdl.handle.net/1721.1/139324</link>
<description>Fibrous Membranes in Personal Protective Applications
Hao, Junli
Personal protective equipment (PPE) is a collection of devices and garments that protect the wearer from hazards and threats. An important type of PPE is respiratory protection devices, which reduce the exposure of the respiratory system to aerosols that may contain harmful components capable of causing infectious disease and other health problems. Mechanical protective clothing is a type of PPE that protects against bruises, cuts, and injuries from sharp objects or even ballistic impacts. Fibrous materials of different forms have found uses in both respiratory protection and protective clothing. For example, when used as filters in respirators and face masks, fibrous membranes can act as excellent barriers to aerosols without resulting in unreasonably high air resistance. The application of strong and tough fibrous textiles in protective clothing has continued to grow since the 1970s, replacing traditional metal and ceramic plates in many use cases because of the flexibility and the improved wear comfort of the textiles. &#13;
&#13;
This thesis focuses on the applications of fibrous materials, particularly electrospun ultrafine fiber (UFF) membranes, in aerosol filters and protective clothing. The electrospun UFF membranes are first studied for their ability to alter the chemical composition of aerosols to explore the possibility of reducing human exposure to harmful aerosol constituents by selective filtration. Filtration using the UFF membranes is shown to change both the size distribution and the chemical composition of a binary liquid aerosol when the selective particle size range of the membrane is aligned with the aerosol size distribution. The size selectivity and the chemical selectivity are correlated by the composition-size relationship of the aerosol and can be tuned by adjusting membrane morphology and configuration. Then, an extensive investigation of the filtration performance and mechanism is conducted for melt-blown filters in medical respirators and for alternative face mask filters, including electrospun UFF membranes. This study demonstrates that the level of electrostatic charges plays an important role in the effectiveness of the melt-blown filters. Variation in the level of the electrostatic effect results in inconsistent filtration efficiency across the examined respirators. Electrospun membranes, on the other hand, appear to be a promising alternative to the melt-blown materials due to their stability and possible reusability. Finally, electrospun UFF membranes are combined with shear-thickening fluids (STF) to improve the shape stability and the fluid retention of fabric-STF composites, which are potential materials for protective clothing and conventionally consist of microfibers. The UFF-STF composites are shape-stable and exhibit better STF retention capability due to the small pore size and the high capillary forces provided by the electrospun membrane. 
The addition of STF is shown to improve the mechanical responses of the membranes under impact, as the fluid can potentially hinder fiber movements and increase inter-fiber friction.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139324</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Confident Learning for Machines and Humans</title>
<link>https://hdl.handle.net/1721.1/139321</link>
<description>Confident Learning for Machines and Humans
Northcutt, Curtis George
The coupling of machine intelligence and human intelligence has the potential to empower humans with augmented capabilities (e.g., improving rhyme-density while writing song lyrics, enhancing empathy via emotion detection, and personalizing learning in online courses). Unfortunately, humans operate in an uncertain world – where the performance of even the most sophisticated model-centric artificially intelligent system often depends on its data-centric ability to deal with the uncertainty in the labels upon which it is trained.&#13;
&#13;
To this end, we introduce confident learning whereby a machine (like humans) must learn with noisy-labeled data, directly quantify and identify label noise, and unlearn misconceptions by re-learning with confidence on cleaned data with erroneous labels removed. We achieve this by developing a principled theory and framework for confident learning with affordances for quantifying, identifying, and learning with label errors in data, and we open-source their implementations in the cleanlab Python package. Based on human verification of the label errors found using cleanlab: we estimate a 3.4% lower bound error rate of the test set labels of ten of the most commonly used machine learning datasets across audio, image, and text modalities; examine the noise prevalence needed to change machine benchmark rankings; and provide corrected test sets so that humans can benchmark machine performance with increased confidence.&#13;
&#13;
We then build and evaluate three artificially intelligent systems that augment human capabilities in noisy, real-world settings. Namely: (1) assisted-turn-taking in multi-person conversations by combining noisy embodied audio and video signals from multiple synchronized perspectives, (2) assisted-generation of writing song lyrics by exploiting the inherent aleatoric uncertainty of language and semantics, and (3) assisted-human-learning in open online courses by depolarizing/diversifying comment rankings to mitigate the majority bias inherent in rankings based on upvotes. In each case, the artificially intelligent system’s ability to overcome uncertainty is linked to its efficacy of augmenting human capabilities, and by extension, humans’ confidence in their ability to perform the associated task.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139321</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing School Operations</title>
<link>https://hdl.handle.net/1721.1/139320</link>
<description>Optimizing School Operations
Delarue, Arthur
Educational institutions in the United States, such as public schools and universities, must solve difficult operational problems to keep the lights on and the doors open. These challenges must often be solved on a shoestring budget and tend to intersect with complex policy questions. For instance, school transportation is both a complex combinatorial problem and, since the 1970s, a vector of racial and socioeconomic integration. This thesis seeks to develop practical, scalable optimization methodologies to address key challenges in educational operations.&#13;
&#13;
Foremost among these challenges is school transportation. In the first chapter, we develop a novel algorithm to solve school bus routing problems at scale. We rely on a decomposition approach called bi-objective routing decomposition (BiRD). Instead of building routes for each school in isolation, we consider several solutions, allowing locally suboptimal routes for certain schools if doing so reduces overall cost. The approach has led to $5 million in annual savings at Boston Public Schools. In the second chapter, we design an algorithm to estimate travel times in a network given travel times (but not paths) observed between many pairs of locations.&#13;
&#13;
We also focus on challenges that intersect with and arise from the peculiarities of school transportation. School districts often stagger start times across the morning (and correspondingly end times across the afternoon) to maximize bus re-use and reduce costs. However, setting school start times is a problem with many stakeholders, from students to staff. In the second part of the first chapter, we develop a multi-objective optimization approach to evaluate the tradeoffs between different objectives in selecting school start times, including the cost of transportation. Our approach was used by Boston Public Schools to propose new start times, though they were not implemented due to community concerns. Building on this work, we consider in the third chapter the interplay between transportation, start times, and student-to-school assignment.&#13;
&#13;
Finally, in the fourth chapter, we consider the problem of scheduling courses in time and space, in the particular context of sudden capacity reduction caused by the COVID-19 pandemic. Using a simple two-stage optimization framework, we explore the implications of hybrid in-person and online education on campus density and faculty teaching load. Our approach was used to create a new course schedule for the Sloan School of Management in the Fall of 2020, affording students significant in-person learning opportunities in compliance with public health and safety regulations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139320</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative mass spectrometry-based approaches for characterizing the immunopeptidome and tyrosine phosphoproteome in cancer</title>
<link>https://hdl.handle.net/1721.1/139318</link>
<description>Quantitative mass spectrometry-based approaches for characterizing the immunopeptidome and tyrosine phosphoproteome in cancer
Stopfer, Lauren Elizabeth
Significant advancements in proteome-based analyses stem from innovations in the field of mass spectrometry (MS), an analytical method which allows for the sequencing, identification, and quantification of peptides and proteins in complex biological mixtures. MS enables a molecular and systems-wide understanding of the cell state, capturing post-translational modifications, protein turnover rates, protein-protein interactions, and other measurements that genetics cannot assess. Still, MS-based methods often require a compromise between reproducibility, quantitative accuracy, sensitivity, and depth of coverage, limiting their utility in research and translational settings alike. Here, I present a collection of MS-based platforms for targeted tyrosine phosphorylation signaling measurements and quantitative immunopeptidomics profiling, enabling novel biological findings in the field of cancer research. I describe how targeted tyrosine signaling assays can be leveraged to identify activated signaling pathways and assess immune infiltration in colorectal cancer. I also demonstrate how small molecules alter the peptide major histocompatibility complex repertoire in melanoma, and report copies-per-cell estimates of select treatment-modulated antigens using targeted MS, informing the development of targeted immunotherapies. Together, these findings highlight how innovations in MS-based methods can be used to advance a basic biology understanding of cancer and serve to demonstrate the clinical utility of using such assays to inform cancer therapy.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139318</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on Urysohn width</title>
<link>https://hdl.handle.net/1721.1/139312</link>
<description>Bounds on Urysohn width
Balitskiy, Alexey
The Urysohn d-width of a metric space quantifies how closely it can be approximated by a d-dimensional simplicial complex. Namely, the d-width of a space is at most w if it admits a continuous map to a d-complex with all fibers of diameter at most w. This notion was introduced in the context of dimension theory, used in approximation theory, appeared in the work of Gromov on systolic geometry, and is nowadays a metric invariant of independent interest. The main results of this thesis establish bounds on the width, relating local and global geometry of Riemannian manifolds in two contexts. One of them is bounding the global width of a manifold in terms of the width of its unit balls. The other is waist-like inequalities, in which a manifold is sliced into a family of (singular) surfaces and the global width is related to the supremal width of the slices.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139312</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Economics of Education</title>
<link>https://hdl.handle.net/1721.1/139304</link>
<description>Essays in Economics of Education
Idoux, Clémence
This thesis is composed of three essays on the economics of education. The first essay is about the heterogeneity of gains from selective school admission. The question of who benefits from selective school enrollment remains controversial. I show that Boston exam schools have heterogeneous effects on achievement. Impact differences are driven primarily by the quality of an applicant's non-exam-school alternative rather than by student demographic characteristics like race. Admission policies prioritizing students with the weakest schooling alternatives have the potential to increase the impact of exam schools on academic achievement. In particular, simulations of alternative admissions criteria suggest that schemes that reserve seats for students with lower-quality neighborhood schools are likely to yield the largest gains. &#13;
&#13;
The second essay is about understanding the impact of selective school admission screens on segregation in New York City schools. Seventy years after Brown v. Board of Education, US school districts are still economically and racially segregated. School segregation is especially apparent in NYC, the largest US school district. I analyze the impact of two integration plans that reduced the role of screens in admission in two local NYC school districts. I show that abolishing selective admissions reduced both economic and racial segregation. Amending selective admission criteria also elicits a substantial behavioral response from applicants. I find evidence that reducing the role of admission screens leads to White and high-income enrollment losses, which decreases the effect of the plans. On the other hand, applicants' changes in application behavior in response to the reforms increased the plans' impact on segregation.&#13;
&#13;
The final essay is about predicting the effect of changes in school admission on students' enrollment. Such predictions are based on estimated student preferences, which in turn are obtained from the ranked order lists they submit. A concern is that an applicant with fixed preferences might submit different lists when faced with different admission criteria. For instance, an applicant could strategically take into account their probability of admission at each school, thereby violating the truthfulness assumption. A solution is to estimate preferences allowing students to strategically choose over all possible lists, but this runs into the curse of dimensionality as the choice space is large. This essay provides a model of applicants' list formation that presumes applicants use a simple heuristic in selecting their lists. In the model, applicants fill their list sequentially, without fully internalizing the dynamic consequences of each choice. Using this simplification, I estimate applicants’ preferences, circumventing the dimensionality problem. I leverage an admission reform in NYC to estimate the model. Allowing applicants to deviate from truthfulness substantially affects their estimated preferences.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139304</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Driven Operations: From Algorithm Development to Experimental Design</title>
<link>https://hdl.handle.net/1721.1/139302</link>
<description>Data Driven Operations: From Algorithm Development to Experimental Design
Zhao, Jinglong
Digital innovation has gained increasing attention in today's world. The explosion of data generated through modern marketplaces provides new opportunities to use data-driven tools to understand and optimize marketplace operations. This dissertation studies various problems around the following two pillars of data-driven operations: optimization and econometrics.&#13;
&#13;
In the first module we focus on optimization, in which we consider dynamic resource allocation problems under zero adaptivity. Dynamic resource allocation problems are omnipresent in modern business operations. In the revenue management setting, there are unreplenishable resources to allocate to heterogeneous consumer demands, immediately and irrevocably upon their arrival. In such settings, zero adaptivity refers to a policy whose actions are independent of the remaining resources. Traditional revenue management literature has mainly focused on fully adaptive policies, and there is a gap between the provable effectiveness of adaptive policies in theory and the applicability of non-adaptive policies in practice. We show that under different models of demand uncertainty, carefully designed non-adaptive policies may provably perform almost as well as the best fully adaptive counterparts.&#13;
&#13;
In the second module we focus on econometrics, in which we consider experimental design problems. Experimental design is a widely adopted approach for firms to evaluate the effectiveness of their initiatives by comparing the standard offering to a new initiative. Such a task is often challenging due to interference, both over time and across units. Traditional experimental design methods suffer from large variances of the estimators when accounting for interference, and practitioners have recognized that insufficient precision may lead to unreliable inference. We build the theoretical foundations for using an optimization approach to maximize precision when designing experiments.&#13;
&#13;
Finally, we conclude with discussions of the limitations of the models and methods we have considered. We also provide practical suggestions to applied researchers and data scientists.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139302</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fault Detection and Identification of Large-scale Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/139296</link>
<description>Fault Detection and Identification of Large-scale Dynamical Systems
Chaiwatanodom, Paphonwit
A single fault in a large-scale industrial system without a proper process monitoring tool may propagate into a disastrous accident. In the field of process control, model predictive control (MPC) is widely applied in industry, where high-quality dynamical models must be developed for process optimization. In the fault detection and identification field, model-based approaches are less common because the cost of developing high-quality dynamical models is hard to justify; most companies therefore adopt data-based approaches for process monitoring. &#13;
&#13;
The main focus of this thesis is on the utilization of MPC models along with process data for fault detection and identification in a large-scale process. The method proposed in this work allows for re-purposing of high-quality dynamical models from the MPC. In earlier work, a direct conversion between finite step response coefficients in MPC models and a state-space model for fault detection was proposed. However, the size of the state-space model becomes very large. The model size affects the computational resources and the data requirement for noise covariance estimation -- an important parameter for distinguishing fault from non-fault events -- such that the approach becomes impractical for a large-scale industrial system. &#13;
&#13;
We applied the method to both a high-fidelity process simulation and real-world datasets. We demonstrated the advantages and disadvantages of model-based fault detection over data-based methods. We also proposed an approach that combines model knowledge with plant data to outperform using either the process model or the data alone. We discussed challenges in applying the method to a large model and presented modifications to address those challenges. &#13;
&#13;
We also explored a case study where dynamic models are impractical to develop (e.g., a non-linear, time-varying dynamic process). An anomaly detection method using process data is proposed. A high-throughput feature generation concept was applied with a newly proposed feature selection algorithm to mitigate the curse of dimensionality. The method was compared to the set of features selected by domain experts. The thesis shows the applicability of using process models and data for advanced process monitoring in a large-scale dynamical system.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139296</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>MFSD7C: A Solute Carrier Linking Heme and Calcium in Mitochondrial Energy Metabolism</title>
<link>https://hdl.handle.net/1721.1/139291</link>
<description>MFSD7C: A Solute Carrier Linking Heme and Calcium in Mitochondrial Energy Metabolism
Ivica, Nikola A.
Major Facilitator Superfamily Domain-Containing 7C (MFSD7C) is an orphan solute carrier. In humans, loss-of-function mutations in MFSD7C cause Fowler syndrome, an often prenatally lethal disorder that affects brain development and angiogenesis. MFSD7C and its closely related paralog, MFSD7B, have been proposed to function as heme transporters; however, this hypothesis has been challenged by several groups. We now show that MFSD7C localizes to mitochondria, where it interacts with the electron transport chain complex. Loss of MFSD7C in various cell lines and primary cells results in increased mitochondrial respiration, but decreased ATP synthesis. Using several different methods, we show that the loss of MFSD7C stimulates cellular thermogenesis. The soluble N-terminal domain of MFSD7C contains conserved heme-binding motifs and directly binds two molecules of heme. We show that heme treatment of cells phenocopies the loss of MFSD7C, and that the N-terminal domain is essential for this effect. To find the transport substrate for MFSD7C, we purified the protein and performed a candidate substrate screen using a thermostability-shift assay. The screen revealed that calcium is a candidate substrate for MFSD7C. Transport assays with isolated mitochondria suggest that MFSD7C may function as a mitochondrial calcium transporter. In conclusion, our experiments suggest that MFSD7C is a mitochondrial calcium transporter that switches ATP synthesis to thermogenesis in response to heme.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139291</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Metal Penetration in Solid Electrolytes</title>
<link>https://hdl.handle.net/1721.1/139288</link>
<description>Mechanisms of Metal Penetration in Solid Electrolytes
Park, Joon Young Richard
An important unresolved topic in materials science is the mechanism by which metals infiltrate solid electrolytes during electrodeposition. A deep understanding of this phenomenon in Li+-conducting solid electrolytes could determine whether these materials can enable fast-charging (&gt; 3 mA cm⁻²) solid state batteries that are safer and more energy-dense than the state of the art. At present, it is thought that intensified stresses are generated at the largest surface flaws on the electrolyte during electrodeposition, and at the critical current density these stresses drive brittle fracture within the bulk to create paths for metal advancement.&#13;
&#13;
This thesis demonstrates that metal penetration depends on two additional factors. The first is whether electric field focusing is present between the stripping and plating electrodes. We show that amplified electric fields, which correlate with increased local current densities, cause Li filled cracks to initiate and grow to penetration, overriding the presence of larger surface defects elsewhere. The second factor is the yield stress of the electrodeposited metal. We show that in Li⁺-, Na⁺-, and K⁺-conducting solid state systems, the critical current density scales inversely with the mechanical deformation resistance of the electrodeposited metal.&#13;
&#13;
We then present two novel electrode architectures in which a liquid phase enables higher critical current densities via interfacial stress relief and current homogenization. First, biphasic (liquid-solid) Na-K alloys are shown to exhibit K⁺ critical current densities over 15 mA cm⁻², in contrast to 2.5 mA cm⁻² for pure K metal. Second, an interfacial film of Na-K liquid between Li metal and Li₆.₇₅La₃Zr₁.₇₅Ta₀.₂₅O₁₂ solid electrolyte doubles the critical current density compared to cells without the Na-K interlayer. These design approaches hold promise for overcoming mechanical stability issues that have heretofore limited the performance of solid state batteries.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139288</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>DRIVING NOVEL REACTIVITY BY DECODING THE ELECTRONIC STRUCTURE OF NONTRIGONAL PHOSPHORUS TRIAMIDES</title>
<link>https://hdl.handle.net/1721.1/139284</link>
<description>DRIVING NOVEL REACTIVITY BY DECODING THE ELECTRONIC STRUCTURE OF NONTRIGONAL PHOSPHORUS TRIAMIDES
Cleveland, Gregory T.
Trivalent phosphorus compounds are ubiquitous throughout synthesis, with applications ranging from ligands in transition metal chemistry to mediating organic transformations. Their reactivity is canonically regarded as nucleophilic by virtue of the electronic structure governed by their three-fold symmetry. Geometric distortion away from a C₃ᵥ symmetric structure drives stabilization of a phosphorus-based acceptor orbital, enabling electrophilic reactivity. Leveraging this electrophilicity, nontrigonal phosphorus compounds elicit a host of novel chemical transformations. While the impact of these new chemistries is unquestionable, a fundamental understanding of these novel compounds is still lacking. Herein are described efforts to drive future novel reactivity by garnering a more basic understanding of the impact of geometric distortion.&#13;
&#13;
First, ³¹P solid-state NMR spectroscopy is utilized to probe the frontier electronic structure of nontrigonal phosphorus triamides in comparison to trigonal analogues. This provides the first known analysis of the pairwise frontier orbitals that govern biphilic reactivity and offers a routine and facile method by which to analyze future compounds. Next, the properties of nontrigonal phosphorus compounds as ligands for transition metals are investigated, showing an increase in π-backbonding upon geometric distortion. The electrophilicity of the phosphorus center is leveraged to generate a stable metallophosphorane by nucleophilic attack of exogenous fluoride, offering potential for nonspectator ligand behavior. Finally, a suite of nontrigonal phosphorus triamides featuring varying substituents in the ligand scaffold is prepared, showing that the reactivity of this class of compounds can be further modulated by electronic effects beyond that of the initial geometric deformation.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139284</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning Approach for the Automated Characterization of Cardiac Mechanics</title>
<link>https://hdl.handle.net/1721.1/139277</link>
<description>Deep Learning Approach for the Automated Characterization of Cardiac Mechanics
Morales, Manuel Antonio
Cardiac mechanics reflects the precise interplay between myocardial structure and contraction essential for sustaining the blood pumping function of the heart. Ejection fraction is the usual index of function, yet mechanical impairment and even heart failure may occur without changes in this measure. Strain analysis provides more meaningful measures through non-invasive evaluation of myocardial deformation from cardiac magnetic resonance imaging data, and can therefore identify dysfunction before reduction in ejection fraction. Diagnosis based on strain measures requires highly accurate and repeatable cardiac tissue detection, labelling, and tracking. These are very challenging and time-consuming tasks requiring extensive technical and clinical expertise, and have many sources of error that limit the wider clinical adoption of strain analysis. &#13;
&#13;
In this thesis, a novel deep learning workflow termed DeepStrain was developed and validated to provide automated strain analysis from standard magnetic resonance data. DeepStrain integrates three convolutional neural networks designed specifically for accurate and precise myocardial tissue detection, labelling, and tracking. These&#13;
networks were trained using data from healthy subjects and cardiac patients. In healthy subjects, accuracy was evaluated using the gold standard strain analysis technique, and repeatability was assessed using data from subjects imaged multiple times. Finally, DeepStrain was tested in a prospective cross-sectional study in asymptomatic young adults with a mixture of cardiovascular disease risk factors, i.e., overweight, hypertension, and type 2 diabetes mellitus. In summary, DeepStrain automatically provides very precise measures of strain two orders of magnitude faster than current technology, enabling more accurate and comprehensive characterization of cardiac mechanics.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139277</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymer-Metal–Organic Framework (MOF) Mixed-Matrix Membranes for Gas Separation Applications</title>
<link>https://hdl.handle.net/1721.1/139270</link>
<description>Polymer-Metal–Organic Framework (MOF) Mixed-Matrix Membranes for Gas Separation Applications
Qian, Qihui
Metal–organic frameworks (MOFs) represent the largest known class of porous crystalline materials ever synthesized. Their narrow pore windows and nearly unlimited structural and chemical features have made these materials of significant interest for membrane-based gas separations. Mixed-matrix membranes (MMMs) formed by incorporating MOF particles into polymers have attracted significant attention because these composite systems can potentially surpass the separation performance of pure polymers alone. However, performance improvements are often unrealized because of poor interfacial compatibility between the MOF and the polymer, which results in interfacial defects. From a practical perspective, strategies are needed to address these defects so that MMMs can be deployed in real-world separation processes. From a fundamental perspective, strategies are needed to reliably form defect-free MMMs so that transport models can be applied to estimate pure-MOF property sets, thereby enabling the development of robust structure–property relationships. To address these interfacial challenges, this thesis describes a method developed to surface-functionalize MOFs with nanoscopic shells of covalently tethered oligomers through various imidization routes. Upon embedding these post-synthetically modified (PSM) MOFs in high molecular weight polymers, defect-free MMMs were formed, revealing synergistic improvements in both permeability and selectivity due to enhanced interfacial compatibility. Additionally, pure-MOF permeabilities for various gases were predicted by the Maxwell Model. The initially developed PSM technique was further extended to address its generalizability to various MOFs, oligomer surface reactions, reaction conditions, and polymer compositions, providing robust guiding principles to form MMMs with excellent polymer–MOF interfacial compatibility. 
Finally, the potential of a novel MOF, MFU-4, as a filler in MMMs for CO2/H2S/CH4 separation was studied by dispersing the MOF in high molecular weight polymers. To validate a CO2-driven gate-opening mechanism previously proposed by other researchers, a systematic temperature study of diffusion, sorption, and permeation through an MFU-4/polyimide MMM was carried out. Separation performance of the MMM did improve with decreasing temperature; however, no obvious evidence of the gate-opening mechanism was found under the conditions tested.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139270</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring and Analyzing Resource Competition in Genetic Circuits</title>
<link>https://hdl.handle.net/1721.1/139266</link>
<description>Measuring and Analyzing Resource Competition in Genetic Circuits
McBride, Cameron
The field of synthetic biology, which aims to engineer organisms, has shown great promise for exciting applications in fields such as medicine, pharmaceuticals, agriculture, and chemical engineering. However, the design and implementation of genetic circuits is hampered by resource competition between genetic circuit parts, where separate genetic parts become coupled because individual parts compete for shared cellular resources, such as ribosomes, that are necessary for each part's function. These resource competition effects create undesired interactions in genetic circuits and may cause engineered systems to behave unexpectedly, which impedes the design of complex genetic circuits.&#13;
&#13;
This thesis analyzes the effects of resource competition on the behavior of genetic circuits. Using mechanistic mathematical models, we first present a theoretical framework and give conditions to determine when the number of equilibrium points of a dynamical system is subject to change due to state-dependent perturbations, which is applicable to genetic circuits with resource competition. These tools can inform genetic circuit design for improved robustness to resource competition. Next, we develop a method to experimentally measure and predict resource competition in genetic circuits. We propose two key measures of resource competition based on a mathematical model that determine a genetic circuit module's behavior under changes in the availability of cellular resources. Using a special module, called a resource sensor, we experimentally estimate the resource competition measures for any genetic circuit module and are able to accurately predict the circuit's behavior in new contexts. Finally, we analyze the effects of resource competition in biomolecular controllers regulating bacterial population size. We demonstrate that the controller faces a fundamental trade-off making it impossible for the population size to be robust to both resource competition disturbances and environmental disturbances simultaneously. This work enables a better-informed approach to genetic circuit design where resource competition is accounted for, leading to more robust outcomes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139266</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monopoles and Landau-Ginzburg Models</title>
<link>https://hdl.handle.net/1721.1/139257</link>
<description>Monopoles and Landau-Ginzburg Models
Wang, Donghao
In this thesis, we define the monopole Floer homology for any pair (&#119884;, &#120596;), where &#119884; is an oriented compact 3-manifold with toroidal boundary and &#120596; is a suitable closed 2-form on &#119884;, generalizing the construction of Kronheimer-Mrowka for closed 3-manifolds. The basic setup is borrowed from the seminal paper of Meng-Taubes. This thesis is divided into three parts:&#13;
&#13;
∙ Part I is concerned with the geometry of planar ends. We exploit the framework of gauged Landau-Ginzburg models to address two model problems for the (perturbed) Seiberg-Witten moduli spaces on either C × Σ or H²₊ × Σ, where Σ is any compact Riemann surface of genus ≥ 1. These results will eventually lead to the compactness theorem in the second part;&#13;
&#13;
∙ In Part II, we supply the analytic foundation for this Floer theory based on the results from Part I. The Euler characteristic of this Floer homology recovers the Milnor-Turaev torsion invariant of &#119884; by a classical theorem of Meng-Taubes and Turaev.&#13;
&#13;
∙ In Part III, more topological properties of this Floer theory are explored in the special case where the boundary ∂&#119884; is disconnected and the 2-form &#120596; is nonvanishing on ∂&#119884;. Using Floer’s excision theorem, we establish a gluing result for this Floer homology when two such 3-manifolds are glued suitably along their common boundary. As applications, we construct the monopole Floer 2-functor and the generalized cobordism maps. Using results of Kronheimer-Mrowka and Ni, we prove that for any such irreducible &#119884;, this Floer homology detects the Thurston norm on &#119867;₂(&#119884;, ∂&#119884;; R) and the fiberedness of &#119884;. Finally, we show that our construction recovers the monopole link Floer homology for any link inside a closed 3-manifold.&#13;
&#13;
This thesis is a compilation of the three arXiv preprints [Wan20a], [Wan20b], and [Wan20c].
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139257</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/139256</link>
<description>Essays in Financial Economics
Elias, Leonardo Ariel
This dissertation studies various topics in international finance and macrofinance.&#13;
&#13;
In Chapter 1, I examine the costs associated with reversals in international capital flows. I exploit plausibly exogenous variation in firms' exposure to rollover risk to identify a causal liquidity channel at play during sudden stop episodes. Using a panel of firms across 39 countries, I show that firms with higher exposure (as measured by the share of long-term debt maturing over the next year) reduce investment by ten percentage points more than non-exposed firms following sudden stops in capital flows. The impact is persistent: exposed firms experience lower investment, lower employment, and lower assets than non-exposed firms even three years after the initial shock. &#13;
&#13;
In Chapter 2, in joint work with Fernando Duarte and Marta Szymanowska, we propose a long-run risk model with real effects of inflation that matches a broad set of empirical moments, while simultaneously keeping risk aversion and the elasticity of intertemporal substitution low. The moments we match capture the joint dynamics of stock returns, bond returns, bond yields, and macroeconomic fundamentals. We also match moments that have remained elusive in the literature, including those from predictability regressions of stock returns, consumption, and dividends on the price-dividend ratio. The key element that we introduce in the model is that inflation non-neutralities are time-varying in a manner consistent with the data, with inflationary shocks predicting higher or lower real consumption growth depending on the current state of the economy.&#13;
&#13;
In Chapter 3, I study the effects of US macroeconomic surprises on the pricing of sovereign risk in sixty-six countries over the period 2002-2017 using daily CDS data. I also explore how the sensitivity of a country's spread to these shocks depends on a wide range of country characteristics. I discuss potential transmission mechanisms of sovereign distress to the real economy by studying the cross-sectional response of security prices (corporate CDS spreads and stock returns) to global shocks. I find that positive macroeconomic surprises in the US systematically reduce sovereign spreads, consistent with the view that global investors price sovereign risk. However, I find that both the size and the sign of the effect depend on the business cycle in the US.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139256</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging, Mechanics, Construction, and Sonification of Three-Dimensional Spider Webs</title>
<link>https://hdl.handle.net/1721.1/139255</link>
<description>Imaging, Mechanics, Construction, and Sonification of Three-Dimensional Spider Webs
Su, Isabelle
Spiders, their silks, and their webs have adapted, survived, and prospered in most ecosystems for millions of years, despite being subjected to environmental and human pressures. They are proof of an evolutionary success, due in part to the exceptional mechanical and biological properties of their silks but also to the variety of web geometries they can build, from simple T-webs, to typical 2D orb webs, to complex 3D webs. Silk’s microscale thickness and the highly complex spatial network of a web pose significant challenges to quantifying and visualizing the intricate architectures, mechanics, and construction of 3D webs. In this work, we aim to provide a consistent, automated, non-destructive, in-situ experimental and computational framework for quantifying and validating what has been observed in nature. A better understanding of the biological and mechanical performance of 3D spider webs could inspire sustainable high-performance fiber networks and complex assembly strategies. &#13;
&#13;
Our framework begins with the first automatic imaging method for quantifying 3D spider web geometries. We use image processing on high-resolution images of slices of the web, illuminated by a sliding sheet laser, to automatically quantify and model the web. Using the web model and coarse-grained bead-spring particle dynamics simulations, we investigate the important role that the interplay between the nonlinear behavior of dragline silk and the complex, redundant structure of the web plays in the web's mechanical and functional performance. We also investigate and quantify the structure and mechanics of a 3D spider web at varying stages of construction, imaging, modelling, and simulating throughout the web-building process to capture changes in the natural web geometry and mechanical properties. Finally, we introduce a novel method for visualizing complex 3D spider webs using sonification, which conveys data through sound. We developed an intuitive, interactive, and immersive sonification platform that, by translating the complex 3D fiber network architecture into sound, can be used for both analysis and creative exploration of 3D spider web data.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139255</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Missing Data and Scalable Optimization for Data-driven Decision Making</title>
<link>https://hdl.handle.net/1721.1/139254</link>
<description>Addressing Missing Data and Scalable Optimization for Data-driven Decision Making
Song, Dogyoon
Data-driven decision making has become indispensable in virtually every domain of human endeavor, fueled by the exponential growth in the availability of data and the rapid increase in our computing power. In principle, if the collected data contain sufficient information, it is possible to build a useful model for making decisions. Nevertheless, a few challenges must be addressed to bring this into reality. First, the gathered data can be contaminated by noise or marred by missing values. Second, building a model from data usually involves solving an optimization problem, which may require prohibitively large computational resources. In this thesis, we explore two research directions motivated by these two challenges.&#13;
&#13;
In the first part of the thesis, we consider statistical learning problems with missing data and discuss the efficacy of data imputation approaches in predictive modeling tasks. To this end, we first review low-rank matrix completion techniques and establish a novel error analysis for matrix estimation beyond the traditional mean squared error (Frobenius norm), focusing on the singular value thresholding algorithm. Thereafter, we study two specific predictive problem settings -- namely, errors-in-variables regression and Q-learning with thrifty exploration -- and argue that predictions based on imputed data are typically nearly as accurate as predictions made when the complete data are available.&#13;
&#13;
In the second part of the thesis, we investigate the tradeoff between scalability and the quality of optimal solutions in the context of approximate semidefinite programming. Specifically, we ask the question: “how closely can we approximate the set of unit-trace n × n positive semidefinite (PSD) matrices, denoted by Dⁿ, using at most N PSD constraints of size k × k?” We show that any set S that approximates Dⁿ within a constant approximation ratio must have superpolynomially large Sᵏ₊-extension complexity for all k = o(n/ log n). Our results imply that it is impossible to globally approximate a large-scale PSD cone using only a few smaller-sized PSD constraints. Therefore, we conclude that local, problem-adaptive techniques are essential to approximate SDPs for enhanced scalability.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139254</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strong-field Phenomena in Low-dimensional Materials at Terahertz Frequencies</title>
<link>https://hdl.handle.net/1721.1/139252</link>
<description>Strong-field Phenomena in Low-dimensional Materials at Terahertz Frequencies
Shi, Jiaojian
The advent of terahertz (THz) frequency laser pulses carrying a substantial fraction of their energy in a single field oscillation cycle has opened a new era in the experimental investigation of strong light-matter interactions in solids, motivated by the quest for the ultimate frontiers of all-optical control. Approaching those frontiers requires insight into the underlying strong-field physics. Meanwhile, low-dimensional materials, reduced in at least one dimension, have been shown to exhibit novel properties beyond those encountered in bulk forms, including the emergence of multi-phase landscapes, collective quantum effects, and topological orders.&#13;
&#13;
This dissertation explores strong-field phenomena in low-dimensional materials driven by pulsed excitation at THz frequencies. I first introduce the generation and detection of high-field THz and mid-infrared (MIR) pulses. The perplexing strong-field responses of low-dimensional materials call for multimodal probe schemes and advanced theoretical frameworks, so I describe three classes of spectroscopic methods for disentangling the intricate couplings and complex behaviors that arise under THz-frequency electromagnetic irradiation. I then elaborate on strong-field theories of non-periodic and periodic systems under oscillating fields.&#13;
&#13;
We have investigated two-dimensional transition metal dichalcogenides (2D TMDs) and zero-dimensional quantum dots (QDs) under electromagnetic excitation at THz frequencies. For 2D TMDs, we have explored a hitherto unobserved Franz-Keldysh effect on the exciton resonance in monolayer MoS2 under THz fields. We have demonstrated a metastable topological phase transition in 2D MoTe2, driven by THz-liberated carriers assisted by coherent phonon excitations. Further single-shot measurements reveal evidence of an intermediate phase. For QDs, we have demonstrated THz-driven reemergence of quenched photoluminescence in QDs on gold by suppressing trion-mediated Auger recombination. By effectively engineering the charge transfer between luminophore systems, we have developed a record-sensitive THz detector and polarimeter via THz-to-visible upconversion. We have investigated the crossover between the quantum-mechanical and classical descriptions of light in QD upconversion spanning the visible, near-infrared, MIR, and THz regimes. With the above knowledge, we have demonstrated all-optical control of fluorescence blinking in single QDs with MIR pulses by removing excess charges, thereby significantly reducing photoluminescence flicker and achieving near-unity quantum yield even at high excitation flux.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139252</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Themes in numerical tensor calculus</title>
<link>https://hdl.handle.net/1721.1/139243</link>
<description>Themes in numerical tensor calculus
Mickelin, Oscar
This thesis studies several distinct, but related, aspects of numerical tensor calculus. First, we introduce a simple, black-box compression format for tensors with a multiscale structure. By representing the tensor as a sum of compressed tensors defined on increasingly coarse grids, the format captures low-rank structures on each grid-scale, which leads to an increase in compression for a fixed accuracy. &#13;
&#13;
Secondly, we consider phase retrieval problems for signals that exhibit a low-rank tensor structure. This class of signals naturally includes a wide set of multidimensional spatial and temporal signals, as well as one- or two-dimensional signals that can be reshaped into higher-dimensional tensors. For a tensor of order &#119889;, dimension &#119899;, and rank &#119903;, we present a provably correct, polynomial-time algorithm that can recover the tensor-structured signals using a total of &#119978;(&#119889;&#119899;&#119903;) measurements, far fewer than the &#119978;(&#119899;ᵈ) measurements required by dense methods. &#13;
&#13;
Thirdly, we consider the problem of recovering an orthogonally decomposable tensor when a subset of its elements is distorted by noise of arbitrarily large magnitude. We focus on the particular case where each mode in the decomposition is corrupted by noise vectors whose components are correlated locally, i.e., with nearby components. This deterministic tensor completion problem has the unusual property that it can be solved in polynomial time if the rank of the tensor is sufficiently large, the polar opposite of the low-rank assumptions of typical tensor and matrix completion settings. Our approach enables recovery even with a substantial number of missing entries, for instance for &#119899;-dimensional tensors of rank &#119899; with up to 40% missing entries. &#13;
&#13;
Lastly, we study properties and algorithms for low storage-cost representations in two constrained tensor formats. We study algorithms for computing with the tensor ring format, which is an extension of the tensor train format with variable end-ranks, as well as properties of orthogonally decomposable symmetric tensors.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139243</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improved Tools for Local Hamiltonians</title>
<link>https://hdl.handle.net/1721.1/139241</link>
<description>Improved Tools for Local Hamiltonians
Abrahamsen, Nilin
In this thesis we consider computational problems related to many-body spin systems with a structured energy operator, a local Hamiltonian. &#13;
&#13;
We begin with the most structured setting, where the Hamiltonian has a spectral gap and spatial locality. This setting is widely studied using approximate ground space projectors (AGSPs). In chapter 1 we give an improved analysis of AGSPs in the setting of local Hamiltonians with a degenerate ground space. This directly generalizes the AGSP⇒entanglement bound implication of [Arad, Landau, and Vazirani ’12] from unique to degenerate ground states. We use the improved analysis to give a particularly simple algorithm for frustration-free spin systems, given an AGSP structured as a matrix product operator. We apply our tools to a recent 2D area law of [Anshu, Arad, and Gosset ’21], giving a sub-exponential-time classical algorithm to compute the ground states. This time complexity cannot be improved beyond sub-exponential assuming the randomized exponential time hypothesis, even for the special case of classical constraint satisfaction problems on the 2D grid. In chapter 2 we consider frustrated systems and extend results for spin chains to certain trees with intrinsic dimension β &lt; 2. This condition is met for generic trees in the plane and for certain models of hyperbranched polymers in 3D.&#13;
&#13;
In chapter 3 we relax the conditions on the Hamiltonian, no longer requiring a spectral gap or geometric locality, and consider an approximation problem for the spectrum of the local Hamiltonian. We give a simple proof of a Chernoff bound for the spectrum of a k-local Hamiltonian based on Weyl’s inequalities. The complexity of estimating the spectrum’s ϵ(n)-th quantile up to constant relative error thus exhibits the following dichotomy: for ϵ(n) = d⁻ⁿ the problem is NP-hard, and perhaps even QMA-hard, yet there exists a constant a &gt; 1 such that the problem is trivial for ϵ(n) = a⁻ⁿ.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139241</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating the Role of Fluorine on Gas Transport Through Fluorinated Polymer Membranes</title>
<link>https://hdl.handle.net/1721.1/139238</link>
<description>Elucidating the Role of Fluorine on Gas Transport Through Fluorinated Polymer Membranes
Wu, Albert Xiuyuan
Fully fluorinated polymers (i.e., perfluoropolymers) are a unique class of materials that have shown exceptional separation performance due to their anomalous thermodynamic partitioning compared to typical hydrocarbon polymers. The goal of this work is to elucidate the role of fluorine on gas permeability, diffusion, and sorption through the systematic synthesis and characterization of hydrocarbon, partially fluorinated, and fully fluorinated polymer structures, with a particular focus on the development of structure–property relationships and connecting the behavior of hydrocarbon and fully fluorinated polymers. The effect of the higher sorption selectivity displayed by perfluoropolymers on separation performance was demonstrated through a refinement of upper bound theory. Inclusion of aliphatic fluorine groups resulted in higher diffusion due to increased interchain spacing caused by the larger size of fluorine, while inclusion of aromatic fluorine groups resulted in significantly higher diffusion but also lower diffusion selectivity due to weakened interchain interactions as well as increased interchain spacing. Through the lens of the dual-mode sorption model, increased polymer fluorination affected only the Henry sorption mode through increased amounts of unfavorable equilibrium mixing interactions while the sorption in the Langmuir mode was relatively unchanged. Within the scope of the non-equilibrium lattice fluid model, increased fluorine content resulted in larger unfavorable deviations from ideal mixing, particularly for CH4. Increased enthalpic selectivity with fluorine content was also observed, driving the increase in infinite dilution sorption selectivity. Additionally, an updated group contribution method for estimating fractional free volume in polymers was developed to streamline calculation for any polymer structure.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139238</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral Fukaya Categories for Liouville Manifolds</title>
<link>https://hdl.handle.net/1721.1/139233</link>
<description>Spectral Fukaya Categories for Liouville Manifolds
Large, Tim
This thesis constructs stable homotopy types underlying symplectic Floer homology, realizing a program proposed by Cohen, Jones and Segal twenty-five years ago. We work in the setting of Liouville manifolds with a stable symplectic trivialization of their tangent bundles, where we prove that the moduli spaces of Floer trajectories are smooth and stably framed. We then develop a basic TQFT formalism, in the stable homotopy category, for producing operations on these Floer homotopy types from families of punctured Riemann surfaces. As a byproduct, we can generalize many familiar algebraic constructions in traditional Floer homology over the integers to Floer homotopy theory: among them symplectic cohomology, wrapped Floer cohomology, and the Donaldson-Fukaya category.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139233</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manufacturing Social Capital: Social Networks through Civic Innovation Initiatives</title>
<link>https://hdl.handle.net/1721.1/139232</link>
<description>Manufacturing Social Capital: Social Networks through Civic Innovation Initiatives
Ahn, Chaewon
The gentrification of industrial land in post-industrial cities is a driving force displacing urban manufacturing firms. Entering the 2010s, manufacturing clusters began to be rebranded as innovation districts that connect manufacturers to entrepreneurs, designers, and artists to develop innovative products, with renewed attention to the rapid and flexible prototyping techniques in the area. This research takes the Sewoon area in the center of Seoul, South Korea, as a case to evaluate the efficacy of network-building innovation programs. The dissertation questions whether and how artificially created networks can enhance firms’ innovative capacity and also strengthen the power and agency of manufacturing communities in the planning process to challenge state-sanctioned industrial gentrification.&#13;
&#13;
Combining social network analysis, interviews, and ethnographic fieldwork on the “Remake Sewoon” project, I investigate: 1) how the innovation projects, planned and implemented by governance working groups, create collaborative relationships with firms structurally and culturally embedded in the local economy; 2) whether firms with strong innovative potential that occupy “structural holes” in these innovation networks also “bridge” heterogeneous industries to promote inclusiveness; and 3) how the governance working groups utilize these bridging and linking networks between the community and the city government in negotiating a controversial zoning rule that threatens the manufacturing community.&#13;
&#13;
I argue that the network-building innovation programs concentrate opportunities on new industries, a serious pitfall because it weakens the community’s overall power against industrial gentrification. In a context in which the strong embeddedness of traditional firms acts as a barrier to their inclusion in innovation networks, innovative opportunities and access to vertical power are focused on the new industries. This compounds the vulnerabilities of traditional industries and jeopardizes the long-run sustainability of the innovation programs. As a result, I find that the innovation networks embedded in top-down planning reinforce city ideologies and power, despite the governance working groups’ efforts to mediate community and institutional power. The dissertation argues that the sustainability of innovation networking programs depends on more actively involving the pre-existing social capital of traditional industries.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139232</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Understanding and Fighting Infectious Disease</title>
<link>https://hdl.handle.net/1721.1/139231</link>
<description>Algorithms for Understanding and Fighting Infectious Disease
Hie, Brian Lance
Infectious disease is a persistent and substantial threat to human health, with consequences that include widespread mortality, suffering, and economic disruption. This thesis presents several algorithmic advances that, when coupled with biotechnologies for data collection and perturbation, are aimed at understanding infectious disease and using this knowledge to fight it. First, this thesis develops geometric algorithms that enable a panoramic understanding of the systems biology of the human immune system and of infectious pathogens at single-cell resolution. Next, this thesis shows how state-of-the-art Bayesian machine learning can explore complex biological spaces to search for new therapies that fight infectious disease. Finally, this thesis develops neural language models that can predict how pathogens mutate to evade human immunity, potentially enabling more broadly effective vaccines and therapies. Taken together, this thesis outlines a highly interdisciplinary, algorithmic approach to infectious disease research, with broader implications for computation and biology more generally.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139231</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pseudo-nitzschia in the Gulf of Maine: investigating bloom dynamics, species introduction, and climate change implications with observations and models</title>
<link>https://hdl.handle.net/1721.1/139230</link>
<description>Pseudo-nitzschia in the Gulf of Maine: investigating bloom dynamics, species introduction, and climate change implications with observations and models
Clark, Suzanna C.
The apparent global increase in harmful algal blooms (HABs) includes Pseudo-nitzschia blooms in the Gulf of Maine, where shellfishery closures can cost millions of dollars. Temperatures in the gulf are warming, which can affect the severity of some HABs. Yet Pseudo-nitzschia in the region are understudied.&#13;
&#13;
Pseudo-nitzschia bloom dynamics, P. australis introduction, and potential future changes thereof were investigated in the Gulf of Maine, using data from ship surveys and moorings as well as hydrodynamic, climate, and Lagrangian particle tracking models. Pseudo-nitzschia bloom toxicity was driven primarily by species composition, not environmental factors. P. australis was introduced to the region in 2016 via a coastal current from the Scotian Shelf. Climate change might intensify Pseudo-nitzschia blooms, shift bloom timing 1–2 weeks earlier in the spring or 4–6 weeks later in the fall, or lengthen the growing season by 3 weeks. It might also affect species composition and connectivity within the gulf. This work has implications for the monitoring of current and future blooms in the Gulf of Maine and for our understanding of HAB introduction to the region. It can also be used to develop predictive models for Pseudo-nitzschia, which could be applied to other HABs.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139230</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical Modulation of Peripheral Nerves Using Ion-Selective Electrodes</title>
<link>https://hdl.handle.net/1721.1/139226</link>
<description>Electrochemical Modulation of Peripheral Nerves Using Ion-Selective Electrodes
Flavin, Matthew T.
New methods of locally delivering chemicals to biological systems have emerged in recent years, but nearly all are limited by their reliance on finite stocks of therapeutic agent, which require physical access for restoration or replacement. In this thesis, I examine the ion-selective membrane (ISM), a filter that discriminately transports single ions within mixed electrolytes and whose operation in ion delivery platforms does not suffer from this fundamental restriction of finite capacity. Specifically, I discuss the optimal design of the ISM and its operating conditions, as well as its implementation for peripheral neuromodulation. Up to this point, limitations in available characterization methods, both theoretical and practical, have severely restricted its technical development and the exploration of in vivo applications. In this thesis, I (1) revisited the operation of ISM systems from a first-principles perspective, constructing a highly detailed physicochemical transport model to describe intra-membrane processes; (2) developed a new experimental technique for direct, real-time monitoring of ion concentration polarization, which allowed me to evaluate those processes under relevant conditions; (3) established a rapid-manufacturing process for fabricating nerve cuff electrodes compatible with ISM coating and tested it in vivo; and (4) built on this foundational work to fabricate a Ca²⁺-selective nerve cuff electrode suitable for electrochemical operation, implanted in vivo on the sciatic nerve of a rat. I discovered over-limiting phenomena that arise in ISM systems under electrical polarization, which have broad implications for all of the ISM's ion-filtering applications. Finally, from my electrophysiological studies, I found evidence that Ca²⁺ depletion with the nerve cuff had a discriminatory effect on distinct fascicles within the rat sciatic nerve, potentially offering a new strategy for the selective activation of therapeutic targets.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139226</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Priming systemic anti-tumor immunity via in situ immunomodulation of the tumor microenvironment</title>
<link>https://hdl.handle.net/1721.1/139220</link>
<description>Priming systemic anti-tumor immunity via in situ immunomodulation of the tumor microenvironment
Milling, Lauren Elizabeth
Systemic cancer immunotherapies, including checkpoint blockade antibodies targeting the PD-1/PD-L1 axis and CTLA-4, have improved survival outcomes in a subset of cancer patients by driving tumor-directed immune responses. However, many patients do not benefit from such immunotherapies due to immune resistance mechanisms or severe inflammatory adverse events resulting in treatment discontinuation. Direct intratumoral injection of immunomodulatory agents has been successfully implemented to maximize immune stimulation at the site of the tumor while minimizing drug concentrations in systemic circulation. By focusing the immune response within a tumor of interest, immune cell killing of cancer cells releases tumor debris against which the immune system can be educated – effectively generating a tumor-specific vaccine. Localized inflammation in the tumor microenvironment additionally serves to adjuvant the in situ vaccination response. Ultimately, immune cells primed after intratumoral immunomodulation can traffic to distant sites of metastasis, leading to tumor regressions at injected and non-injected lesions. The intratumoral in situ vaccination approach is amenable to a variety of therapeutic modalities, ranging from small molecules to proteins and cell-based therapies. In this thesis, we present two combination immunotherapy regimens that take advantage of intratumoral injection. First, we describe an “off-the-shelf” in situ vaccine featuring locally administered small molecule activators of the STimulator of INterferon Genes (STING) pathway combined with systemically administered interleukin-2 and anti-PD-1 towards the generation of anti-tumor immunity in spontaneously metastatic breast tumor models. In this setting we detail the integration of immunotherapy with surgical resection and define the immune cell types mediating metastasis clearance. 
Second, taking a more personalized vaccine approach, we demonstrate that in vitro treatment of tumor cells with DNA-damaging chemotherapy can promote tumor antigen-specific T cell activation by dendritic cells. Intratumoral injection of these chemotherapy-damaged cells synergizes with immune checkpoint blockade to promote tumor regression. Together, these studies underscore the versatility of intratumoral immunomodulation and highlight the holistic activation of both innate and adaptive immune cells, with the aim of enabling more patients to benefit from cancer immunotherapy.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139220</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Post-Translationally Modified Peptides by Combining Enzymes from Diverse Pathways</title>
<link>https://hdl.handle.net/1721.1/139218</link>
<description>Design of Post-Translationally Modified Peptides by Combining Enzymes from Diverse Pathways
Glassey, Emerson Walker
Over the past decade, ribosomally synthesized and post-translationally modified peptides (RiPPs) have emerged as both therapeutically relevant and engineerable, two traits previously unobserved together in a natural product class. Their biosynthesis is modular: a precursor peptide recruits enzymes that bind one region of the peptide and modify another. This separation of substrate recognition from catalysis allows modifying enzymes to be both highly specific for their peptide and permissive of diverse sequences at the modification site. After modification, the molecules are chemically diverse, sometimes not appearing peptidic at all, and can exhibit picomolar activity for their biological targets in nature. As medium-sized constrained molecules, they also have exciting applications in drug discovery as protein-protein interaction inhibitors, a modality that is currently out of reach of small molecules and antibodies. Despite the therapeutic potential of these molecules, their development has been hampered by a lack of genetic tools and standardized protocols to express, modify, and engineer peptides. Simple peptide expression in a heterologous host, outside the context of a native pathway, is complicated by peptide degradation and solubility, while existing bulky stabilization tags interfere with analytics. As such, efforts to engineer biosynthesis of new RiPPs have been ad hoc, with no formalization of methods to elucidate enzyme-substrate specificities or engineer multi-enzyme pathways. To address this, I utilize a peptide stabilization tag that is small enough for peptides to be analyzed without tag removal, showing both stabilization of diverse peptides and compatibility with their respective modifying enzymes. I then use the stabilization tag and its established expression/purification pipeline to characterize substrate constraints of nine enzymes in order to engineer biosynthesis of new-to-nature “hybrid peptides”. 
Collectively, these standardized expression tools, expression conditions, and engineering principles form an enabling platform for future RiPP discovery and engineering.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139218</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Growth and Characterization of Polycrystalline Rare-Earth Iron Garnet Films and Heterostructures</title>
<link>https://hdl.handle.net/1721.1/139206</link>
<description>Growth and Characterization of Polycrystalline Rare-Earth Iron Garnet Films and Heterostructures
Bauer, Jackson J.
Spintronics is a fast-developing field that makes use of the two spin states of the electron and could enable faster, more efficient, and more robust microelectronic devices.  Thin films of rare-earth iron garnets, a class of insulating ferrimagnetic oxides, are particularly well suited to this application as the anisotropy, magnetization, magnetostriction, and damping can be easily controlled through selection of rare-earth ion and substrate.  Previous work on garnets has focused on epitaxial single-crystal films grown on garnet substrates, which are expensive and not of commercial importance.  Thus, it is of interest to grow nanometer-scale thin films of garnets as polycrystalline layers on non-garnet substrates with perpendicular magnetic anisotropy.  &#13;
&#13;
In this thesis, the growth of polycrystalline thin films of rare-earth iron garnets with controllable anisotropy and spin transport properties comparable to single crystal films is reported.  Perpendicular magnetic anisotropy, which is essential for efficient manipulation of the magnetization through spin-orbit torque injection from an adjacent conductive layer, is achieved via control of the magnetoelastic anisotropy from thermal expansion mismatch between the film and substrate for europium iron garnet/quartz and dysprosium iron garnet/silicon.  Heterostructures with a platinum overlayer allow investigation of the spin Hall magnetoresistance, which indicates a high degree of spin transparency at the interface.  Next, a novel heterostructure is developed that allows for the growth of ultra-thin (&lt; 10 nm) polycrystalline garnet films through the use of a nonmagnetic seed layer and diffusion barrier.  The magnetic proximity effect in heavy metals is also investigated across the magnetic compensation temperature of dysprosium iron garnet, demonstrating exchange coupling behavior different from that of metallic magnets and validating the importance of a sharp, contamination-free interface in these materials.   Finally, a molecular field coefficients model is modified to account for non-ideal stoichiometry and site occupancy in the garnet crystal structure.  The model is used to explain discrepancies between the bulk and thin film magnetic compensation temperatures.  The work demonstrated here outlines the potential for integration of magnetic insulators into next-generation spintronic devices.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139206</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpolating Spline Curves of Measures</title>
<link>https://hdl.handle.net/1721.1/139204</link>
<description>Interpolating Spline Curves of Measures
Clancy, Julien
When dealing with classical, Euclidean data, the statistician's toolkit is enviably deep: from linear and nonlinear regression, to dealing with sparse or structured data, to interpolation techniques, almost any problem dealing with vector or matrix data is amenable to several different statistical methods. Yet modern data is often not Euclidean in nature. The semantic content of natural images does not have a vector structure; shifting an image one pixel to the right does not perceptibly change it, but its vector representation is very different. For model cross-validation or bootstrapping, each data point is a dataset in its own right, and one might want to consider an "average dataset". In an ensemble method, experts may express their beliefs as prior distributions; how would we perform a statistical analysis of these?&#13;
&#13;
Recently, much attention has been paid to a framework which subsumes all of these problems: the Wasserstein space of measures with finite second moment. Works on point estimation, generalized means, and linear regression have appeared, as have some on smooth interpolation, greatly expanding the statistical toolkit for modern data. In this vein, the present work is broadly a theoretical and computational exploration of curves of measures which in some sense minimize curvature while interpolating data, as splines do in Euclidean space. We answer several questions about the relationship between the intrinsic Wasserstein-Riemannian curvature of such curves and a particle flow-based, "fluid-dynamical" formulation, and provide fast and accurate smooth interpolation techniques. We also study a related probabilistic interpolation problem unique to the measure setting, which asks for particle trajectories that satisfy certain interpolation constraints and minimize a KL divergence, in analogy with the Schrödinger bridge problem. We conclude with an extension of our methods to the case of unbalanced measures in the Wasserstein-Fisher-Rao space.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139204</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Contactless Sleep Studies at Home using Wireless Signals</title>
<link>https://hdl.handle.net/1721.1/139195</link>
<description>Enabling Contactless Sleep Studies at Home using Wireless Signals
Yue, Shichao
Sleep studies help doctors diagnose a variety of sleep-related disorders, such as insomnia and sleep apnea. Most disorders can be managed once they are correctly diagnosed. However, sleep studies usually involve discomfort and high cost, as patients need to go to a hospital, sleep in an unfamiliar bed, and wear sensors all over the body. Such high barriers may discourage patients from undergoing sleep studies.&#13;
&#13;
In this thesis, we explore alternative solutions that make sleep studies comfortable and affordable while still delivering clinically meaningful measurements. Specifically, we propose four systems that leverage radio frequency (RF) signals to remotely and automatically monitor the user's sleep in the user's bedroom. First, we present EZ-Sleep, the first contactless sleep sensor that automatically identifies bed entries and exits and monitors key insomnia parameters like sleep latency and sleep efficiency. Further, we propose RF-Sleep, which achieves sleep-stage classification accuracy far superior to that of state-of-the-art methods. Next, we introduce DeepBreath, the first wireless respiration monitoring system that can recover the breathing signals of multiple individuals even when they lie right next to each other. And finally, we present BodyCompass, the first RF-based system that provides accurate sleep posture monitoring overnight. &#13;
&#13;
Together, these four systems turn the user's bedroom into a contactless sleep lab, giving doctors tools to diagnose and track sleep-related symptoms without any additional effort from the patient.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139195</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Professionals' Temporal Autonomy</title>
<link>https://hdl.handle.net/1721.1/139192</link>
<description>Essays on Professionals' Temporal Autonomy
Conzon, Vanessa Mariangela
Professionals struggle to control their work time, despite often (1) having relatively greater control over their work tasks, and (2) wanting to control their work time. My dissertation addresses this empirical and theoretical puzzle by refining our understanding of why professionals face difficulties expanding their temporal autonomy, and identifying mechanisms and processes that can address these barriers. I draw upon data from four separate ethnographic studies of STEM professionals. In my first essay, I identify conditions under which managers either support or limit employees’ use of flexible work policies, and in turn, facilitate increases in professionals’ temporal autonomy. In my second essay, I show how professionals, independent of managers, collaborate to expand control over their work hours. In my third essay, I show how professionals’ temporal autonomy is shaped by family responsibilities. Overall, I contribute to the literature on professions, as well as related literatures on temporality and time in organizations, flexible work schedules, and the work-life interface. This dissertation also contributes to our understanding of gender inequality by showing how gendered experiences of time subtly disadvantage women.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139192</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical and Experimental Study of Rock Fracture Creep Under Dry Conditions</title>
<link>https://hdl.handle.net/1721.1/139190</link>
<description>Numerical and Experimental Study of Rock Fracture Creep Under Dry Conditions
Kang, Hao
In many rock engineering projects such as hydrocarbon extraction or geothermal energy utilization, the hydro-mechanical behavior of rock fractures has a strong effect on the safety and profitability of the project. In field conditions, the hydraulic and mechanical behavior of rock fractures changes with time (the rock fractures creep). Rock fracture creep is a coupled process of four mechanisms: mechanical compression, pressure solution, dissolution, and erosion. To obtain systematic results, it is important to consider these mechanisms separately. Previous research indicates that rock fracture creep under dry conditions is not negligible, and under dry conditions, mechanical compression is the only mechanism. &#13;
&#13;
Experimental work has been conducted to investigate the size and time dependency of the creep of Musandam limestone (a reservoir rock). The elastic and strength properties of Musandam limestone were obtained from triaxial and indentation tests. The results indicate that linear elasticity and von Mises plasticity can describe the deformation of contacting asperities reasonably well. Then, triaxial, micro-, and nano-indentation creep tests were conducted, and their creep patterns were compared. The results imply that the creep patterns of triaxial and indentation tests differ. In addition, micro- and nano-indentation test results show time-hardening and time-softening behavior, respectively. The experimental results can provide a reference for rock creep measurements and modeling. &#13;
&#13;
To study the effect of surface geometry on fracture dry creep, initial steps considering viscoelasticity have been taken. A numerical model was developed to investigate the effect of surface geometry on rock fracture visco-elastic deformation. First, synthetic fracture surfaces were generated based on Brown’s (1995) model: the Hurst exponent, root mean square roughness, and mismatch length were varied systematically. Then, an in-house numerical code was developed to simulate the visco-elastic deformation of rough fractures. The results indicate that as the Hurst exponent increases, or as the mismatch length or root mean square roughness decreases, the fracture mean aperture decreases and the contact ratio (the number of contacting cells / total number of cells) increases faster with time. The numerical results can be used to predict visco-elastic deformation of rough rock fractures.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139190</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporation of High-Fidelity Flow Field Information into Preliminary Design of Multi-Stage Axial Compressors</title>
<link>https://hdl.handle.net/1721.1/139183</link>
<description>Incorporation of High-Fidelity Flow Field Information into Preliminary Design of Multi-Stage Axial Compressors
Jörger, Alexander Timo
This thesis establishes an axisymmetric methodology that incorporates pre-performed high-fidelity CFD into the performance estimation of multi-stage axial compressors during preliminary design. Its key differentiator is that radial non-uniformity, inferred from three-dimensional CFD and represented using orthonormal basis functions, replaces empirical correlations of blockage, loss, and deviation as well as simplified models of flow features, such as boundary-layer growth, spanwise mixing, and endwall-corner separation. The methodology includes the effects of changes in radial non-uniformity and in blade geometry on the axisymmetric flow field. The approach can supersede current throughflow methods, increasing the fidelity of preliminary design.&#13;
&#13;
The primary impact of the methodology is a new capability for power gas turbine compressors to rapidly assess off-design matching at different spanwise locations along the blade height, enabling early-design choices, such as the annulus-area scheduling, based on the fidelity of CFD. Over a range of off-design conditions from near stall to near choke, the mass-flow capacity of a four-stage compressor was estimated within 1.2% and its efficiency within 1.5 percentage points compared to CFD at equal loading. The estimation of quasi-one-dimensional performance and the characterization of the flow close to the endwalls are improved relative to estimates from a legacy streamline-curvature method, since radial non-uniformity is inferred from high-fidelity flow field information. The methodology is demonstrated to be suitable for incorporation into compressor design systems.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139183</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design and synthesis of organic chromophores for Faraday rotation and photoluminescence</title>
<link>https://hdl.handle.net/1721.1/139181</link>
<description>The design and synthesis of organic chromophores for Faraday rotation and photoluminescence
Nelson, Zachary
This thesis summarizes the design, synthesis, and application of novel small-molecule and polymeric materials, with emphasis on understanding and tuning the chromophore electronic structure to maximize properties including Faraday rotation and delayed fluorescence.&#13;
&#13;
In Chapter 1, we introduce the magneto-optical property Faraday rotation. The current frontiers in performance, applications, and mechanistic understanding of Faraday rotation in organic-chromophore thin films are discussed. Faraday rotation measurements of several novel materials are introduced in context.&#13;
&#13;
In Chapter 2, we evaluate the Faraday rotation in thin films of several phthalocyanine and porphyrin derivatives, and observe maximum Verdet constants greater than those found in competing inorganic materials. The effect of chemical modifications, including the introduction of paramagnetic and chiral centers, is discussed.&#13;
&#13;
In Chapter 3, we evaluate the Faraday rotation in thin films of π-conjugated polymers when influenced by chiral moieties. We demonstrate the key role chirality serves in the magneto-optical properties of these materials, the influence of structural order and film thickness on Faraday rotation, and report among the largest Verdet constants observed to date in organic thin films.&#13;
&#13;
In Chapter 4, we develop a polymerization route to, and evaluate the properties of, a new class of polyarylene chalcogenides. We utilize dynamic covalent SNAr behavior to demonstrate selective depolymerization. In addition, we study the mechanism of the bright photoemission, ascribing it to delayed fluorescence enabled by the presence of sulfur atoms.&#13;
&#13;
In Chapter 5, we discuss the synthesis and properties of two classes of solution-processable materials: M(III) and Au(I) arylthiolate coordination polymers. We demonstrate the presence of strong metal–metal interactions between M(III) centers along the polymer backbone. Further, we discuss the strong phosphorescence and dynamic supramolecular coordination in the solution phase of the Au(I) coordination polymers.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139181</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and consequences of resistance to PRMT5 Inhibition</title>
<link>https://hdl.handle.net/1721.1/139173</link>
<description>Mechanisms and consequences of resistance to PRMT5 Inhibition
Mueller, Helen S.
Drug resistance to cancer therapies poses a major clinical challenge and is a key obstacle to successful cancer treatment. Understanding how resistance develops and whether resistance brings new vulnerabilities, or collateral sensitivities, to secondary therapies is essential to combating drug resistance. Epigenetic regulators are increasingly linked to cancer, spurring the development of inhibitors against them and the evaluation of those inhibitors as cancer therapies. Some have entered the clinic and many are in clinical trials; however, for many, little is known about mechanisms of resistance. One such epigenetic regulator, protein arginine methyltransferase 5 (PRMT5), is highly expressed in many tumor types, and PRMT5 inhibitors are currently in early phase clinical trials. &#13;
&#13;
In this dissertation, we have generated the first model of resistance to PRMT5 inhibitors. In Kras-G12D;p53-null lung adenocarcinoma cell lines, resistance arose rapidly and stemmed from a drug-induced transcriptional state switch, not from the selection of a pre-existing population. This resistant state was stable and brought with it vulnerabilities to other chemotherapeutics, especially the taxane paclitaxel. We found that this taxane sensitivity required stathmin 2 (STMN2), a microtubule regulator that is specifically expressed in the resistant state. Remarkably, Stmn2 was also required for the establishment and maintenance of resistance to PRMT5 inhibition. Analysis of TCGA patient data showed that high STMN2 levels correlate with complete regression of tumors in response to taxane treatment. Thus, identification of collateral sensitivities yields insight into resistance mechanisms and uncovers ways to optimize treatment strategies. The results we present here may offer solutions to combat the emergence of resistance and inform the future development of PRMT5 inhibitors in combination with other therapies.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139173</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Topology of Loop Space</title>
<link>https://hdl.handle.net/1721.1/139172</link>
<description>Quantitative Topology of Loop Space
Elliott, Robin
In this thesis we investigate how the size of a cycle in the based loop space of a simply connected Riemannian manifold controls its topology. Analogous to Gromov’s notion of distortion of higher homotopy groups of the underlying Riemannian manifold, we define notions of distortion for (co)homology classes in the loop space with real coefficients, and study the asymptotics of these distortions. Upper bounds for cohomological distortion are obtained using K.-T. Chen’s theory of iterated integrals to set up differential forms on the loop space. Lower bounds, matching the upper bounds up to a log factor, are given by exhibiting an efficient family of cycles built out of the cells of a cell decomposition of the underlying manifold.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139172</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Technologies, Customer Experience, and Decisions</title>
<link>https://hdl.handle.net/1721.1/139170</link>
<description>Digital Technologies, Customer Experience, and Decisions
Yu, Shuyi
This dissertation consists of three chapters that investigate how digital technologies have changed customers’ experiences and decisions. &#13;
&#13;
The first chapter investigates market participants’ reactions to predictive algorithms and the effects of this public information source on market outcomes. In particular, I study the extent to which buyers and sellers rely on a home’s Zestimate when making decisions. Using detailed property transaction data for 120,482 properties sold between May 2017 and May 2019 in the Greater Philadelphia area, I show that the sale price of a property does respond to exogenous shocks to its estimated home value. I develop a theoretical framework and provide empirical evidence to show how people use the Zestimate as a source of publicly available information that plays an important role in coordination and helping people reach an agreement. The results suggest that market participants tend to rely more on this public information source when it is harder to reach a consensus based on private information. Moreover, I show that people’s reliance on the Zestimate might mitigate racial disparities in the housing market by providing less biased information. &#13;
&#13;
In the second chapter, we study how consumers respond to repeated marketing campaigns driven by algorithms and how the responses vary across different algorithms. To investigate this, we collaborate with a U.S. food delivery company and conduct a field experiment where targeted coupons are sent by applying the same algorithms repeatedly. Our results show that algorithms utilizing more information perform better than simpler algorithms, and this difference emerges only after consumers have already been treated by the same algorithm-driven policy a few times. By exploring the variation in the purchase patterns, we show that those differences arise because advanced algorithms reduce the level of learning and strategic behaviors against the rules. This result also suggests that consumers may have some level of algorithm awareness, especially when algorithms are easy to learn, and are forward-looking enough to play strategically against the policies powered by those algorithms. &#13;
&#13;
In the third chapter, we study how digitization has transformed customer experience in the public sector. Customers with more education may get better service after complaining, because they are better placed to advocate for themselves. It is unclear how digitization of the consumer complaint process will change this situation. To investigate this, we analyze 364,189 customer complaints to the city of Boston. Empirically, complaints that originate from areas with high levels of education are more likely to be solved quickly. However, dedicated mobile app technologies that automate the complaint process can help mitigate the advantage conferred by education. Since the adoption of digital devices is endogenous to wealth and education, we instrument their usage using granular geographic data on a proxy for cellular signal strength. This analysis again suggests that mobile applications can partially eliminate the disparity between educated and uneducated people. We present suggestive evidence that this is because mobile devices, and the standardization of communication they require, eliminate potential differences in treatment of cases that arise due to differences in communication skills. This result suggests that using newer forms of automated digital communication tools enhances equality in customer service.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139170</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Application of Solid-State NMR Methods for Investigating Protein Structure and Dynamics</title>
<link>https://hdl.handle.net/1721.1/139165</link>
<description>Development and Application of Solid-State NMR Methods for Investigating Protein Structure and Dynamics
Gelenter, Martin D.
Solid-state nuclear magnetic resonance spectroscopy (SSNMR) is uniquely well-suited among spectroscopic and microscopy techniques for studying the structure and dynamics of biomacromolecules with atomic length-scale resolution. The development and application of advanced SSNMR techniques facilitate the study of novel systems to understand protein structure and function.&#13;
&#13;
Obtaining long-range distance restraints is imperative for solving high-resolution protein structures via SSNMR. Third-spin-assisted recoupling (TSAR) experiments have been shown to be extremely useful in obtaining such distance restraints, particularly in the structure determination of fibrils. By replacing the continuous-wave spin-lock with a pulsed spin-lock on the low-frequency channels, pulsed TSAR (P-TSAR) experiments reduce the radiofrequency duty cycle of these experiments and make their optimization more straightforward while maintaining the ability to obtain long-distance internuclear contacts. &#13;
&#13;
Glucagon is a peptide hormone that is used as a pharmaceutical agent to treat severe hypoglycemia. Unfortunately, it rapidly fibrillizes at pharmaceutically relevant concentrations and pH, and is thus shipped as a dry lyophilized powder and a diluent solution. Utilizing SSNMR, we have solved the high-resolution structure of these cross-β fibrils. Glucagon fibrils consist of alternating antiparallel conformers along the hydrogen bonding fibril axis. In the plane perpendicular to the fibril axis, each conformer forms a symmetric homodimer. Mutations at S2, Y13, A19, or T29 to arginine inhibit fibrillization at pharmaceutical concentrations of glucagon and are promising analogues that would have a longer shelf life in solution compared to wild-type glucagon.&#13;
&#13;
The influenza matrix-2 protein (M2) conducts protons across the lipid membrane in the endosome and is essential for viral replication. Less is known about how the M2 protein of the influenza B strain (BM2) functions compared to AM2, and there are currently no FDA-approved antiviral drugs that target BM2. Combining SSNMR with molecular dynamics simulations showed that the open, active BM2 channel is more hydrated than the closed, inactive channel, that water within the open channel is more dynamic, and that water in the open channel has greater orientational anisotropy. The orientational anisotropy is associated with a flip in the orientation of water molecules above and below H19 and with the charge state of H19.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139165</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photoinduced Dynamics Studied using Single-Shot&#13;
Optical and Terahertz Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/139164</link>
<description>Photoinduced Dynamics Studied using Single-Shot&#13;
Optical and Terahertz Spectroscopy
Gao, Frank Yi
This thesis focuses on the advancements made in and the applications of echelon-based ultrafast single-shot optical and terahertz (THz) spectroscopy to study irreversible photoinduced dynamics and materials far from equilibrium.&#13;
&#13;
First, we motivate the understanding of photoinduced material responses as initiated by, and probed with, ultrafast laser pulses, including the techniques of conventional optical pump-probe spectroscopy and optical pump THz probe spectroscopy.&#13;
&#13;
Next, we provide an overview of various single-shot techniques, including a detailed discussion about the design and characteristics of echelons for use in single-shot optical and THz readout. In addition, we discuss the development of a novel two-arm noncollinear parametric amplifier with exceptional parametric conversion efficiency.&#13;
&#13;
We then move on to applications of single-shot spectroscopy studying the semimetals bismuth and tellurium. In bismuth, we observe anomalous optical phonon behavior under conditions of double pump excitation, including the apparent beating of multiple A1g-like phonon modes. In tellurium, we demonstrate that laser-induced amorphization with ultrafast pulses is purely thermal and results from the rapid melting and cooling of the tellurium lattice at high fluences.&#13;
&#13;
After this, we discuss work probing the photolysis of I3– in glass-forming liquids across an extensive range of solvent viscosities and temperatures. We demonstrate that the increased rigidity of the solvent network at low temperatures and high viscosities results in an increasing fraction of photofragments being unable to break out of their solvent cage, which results in the increased formation of caged-contact pairs. We identify a clear spectral signature of this caged species and assign a binding energy to this species.&#13;
&#13;
Lastly, we observe the real-time formation of a persistent hidden quantum (H) phase in the charge density wave material 1T-TaS2. We demonstrate that the formation of this H phase is ultrafast, electronic in origin, and corresponds to a near-complete melting of the charge order. Furthermore, we identify a persistent increase in the optical and THz conductivities as the primary order parameter for this transition.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139164</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Substratum Interactions Modulate the Interplay between Endothelial Cell Phenotype, Function, and Immune Recognition</title>
<link>https://hdl.handle.net/1721.1/139163</link>
<description>Substratum Interactions Modulate the Interplay between Endothelial Cell Phenotype, Function, and Immune Recognition
Wilcox, Elise C.
Endothelial cells (ECs) sense and adapt to their environment, allowing them to shift between a range of functional phenotypes. When connected in a monolayer they create the endothelium, a barrier and a platform from which ECs can individually respond to flow and to circulating cells and factors apically, to cell density circumferentially, and to substratum composition, stiffness, and texture basolaterally. Plasticity allows ECs to promote vascular homeostasis and to interact with and modulate the immune system. Changes in endothelial state enable immune cells to migrate into the tissue to repair damage and fight infection. However, ECs also modulate the function of immune cells through the expression of adhesion molecules, chemokines, major histocompatibility complex (MHC), and an array of co-stimulatory and inhibitory molecules. These interactions allow ECs to act as antigen presenting cells (APCs) and influence the outcome of immune recognition. Thus, the study of ECs elucidates how microenvironment, vascular cell biology, and immune response are not only connected but interdependent.&#13;
&#13;
This work explored how cell-substratum interactions influence EC phenotype and function and how these differences affect allorecognition in a model of cell transplantation. Investigation of EC state was carried out using RNA sequencing and flow cytometry, while assessment of the allogeneic response included measurements of immune cell cytotoxic ability, T cell proliferation, cytokine release, serum antibodies, and histological staining. We found that differences in substratum led to divergent EC phenotypes, which in turn influenced the immune response to transplanted cells, due both to the physical barrier of matrix adhesion and to differences in gene expression.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139163</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Macroeconomics and Banking</title>
<link>https://hdl.handle.net/1721.1/139152</link>
<description>Essays on Macroeconomics and Banking
Bosshardt, Joshua James
This thesis contains three papers related to measures for improving the stability of the banking system, including regulations and public ownership. &#13;
&#13;
The first chapter shows, using a difference-in-differences strategy, that the introduction of the U.S. bank stress tests led small businesses to concentrate their debt within a smaller number of banks. I explain this using a model of bank competition in which creditworthy but informationally opaque firms have an incentive to establish a small number of concentrated lending relationships to facilitate information acquisition by their lenders. Tightening credit standards reduces the rate of non-performing loans, but it also decreases the availability of credit. In response, firms strengthen their lending relationships by concentrating in a smaller number of lenders. When the model is calibrated to match the empirical estimates, tightening credit standards has zero net effect on efficiency, but it shifts surplus from firms to banks because firms have fewer informed lenders with which to bargain over prices.&#13;
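The difference-in-differences logic used in this chapter can be sketched numerically. The figures below are hypothetical, chosen only to illustrate the estimator, not estimates from the chapter:

```python
# Minimal difference-in-differences sketch with illustrative, made-up numbers.
# Outcome: a firm's number of lending relationships, before and after the
# stress tests, for banks subject to the tests (treated) vs. exempt (control).

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD effect = (change in treated group) minus (change in control group)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean relationship counts per firm:
effect = did_estimate(treated_pre=3.0, treated_post=2.4,
                      control_pre=3.1, control_post=3.0)
print(round(effect, 2))  # -0.5: firms concentrated debt in fewer banks
```

The estimator nets out any common trend shared by the two groups, which is why a negative value is read as an effect of the stress tests rather than of the business cycle.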
&#13;
The second chapter, joint with Ali Kakhbod, illustrates channels by which regulations that require banks to hold liquid assets can either increase or decrease a bank’s incentive to take risk with its remaining ineligible assets. A greater capacity to respond to liquidity stress increases the potential profits a bank would put at stake by making risky investments, but it also mitigates the illiquidity disadvantages of holding risky assets. We do not find evidence that the reserve requirement or the liquidity coverage ratio significantly affected measures of risk-taking such as non-performing loan ratios or credit default swap spreads.&#13;
&#13;
The third chapter, joint with Eugenio Cerutti, shows that state-owned or public banks lent relatively more than domestic private banks during the Global Financial Crisis (GFC). Using a novel bank-level dataset covering 25 emerging market economies, we provide evidence that this was because they pursued an objective of helping to stabilize the economy, rather than because they had superior fundamentals or access to public or depositors’ funding. Nonetheless, their countercyclical behavior seems unique to the GFC rather than a regular characteristic of public banks before and after the GFC.&#13;
&#13;
JEL Codes: G21, G28, G01
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139152</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-finite Complementation: A case study of Bùlì</title>
<link>https://hdl.handle.net/1721.1/139142</link>
<description>Non-finite Complementation: A case study of Bùlì
Sulemana, Abdul-Razak
This dissertation analyzes a number of topical issues in Bùlì syntax as a way of contributing to both the theoretical and typological literature in the areas of clausal complementation, control, serial verb constructions, and temporal markers. Among the questions addressed are: (1) Does Bùlì possess non-finite clauses? (2) How should serial verb constructions be analyzed? (3) Are the temporal remoteness markers in Bùlì tense markers?&#13;
This dissertation represents an attempt to provide partial answers to these questions for Bùlì, a Mabia (Gur) language spoken in Sandema in the Upper East Region of Ghana. While the main concern of the dissertation is Universal Grammar (UG) and linguistic typology, I have, in the discussions, provided a substantial amount of descriptive as well as analytical material concerning these aspects of the grammar of Bùlì. The main claim of the dissertation is that Bùlì possesses two kinds of non-finite complements: (1) non-finite complements with obligatory overt pronominal subjects that must be co-indexed with a matrix argument, and (2) non-finite complements that allow full DPs in their subject position. The dissertation also reviews the properties of serial verb constructions and argues that they are best analyzed as instances of coordination. Finally, with regard to the temporal remoteness markers, it argues that they are optional tenses.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139142</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Sustainable Electrosynthesis of Industrially&#13;
Valuable Small Molecules</title>
<link>https://hdl.handle.net/1721.1/139141</link>
<description>Towards Sustainable Electrosynthesis of Industrially&#13;
Valuable Small Molecules
Melville, Jonathan Francis
The decarbonization of legacy industrial processes calls for substantial advances in our ability to apply sustainably generated electricity as a means of making and breaking arbitrary chemical bonds. The development of such carbon-neutral systems demands the design of efficient and selective electrocatalysts with an eye towards economic viability and industrial operability. In this work, we present three electrochemical processes at varying degrees of practical maturity with theoretical applicability to a zero-carbon future economy:&#13;
&#13;
In Chapter 2, we rigorously interrogate the electrolysis of molten condensed phosphate salts to white phosphorus, a valuable specialty chemical currently generated by energy-intensive carbothermal reduction. We demonstrate that the reduction of phosphate to phosphorus occurs near the limit of energetic and atom efficiency, portending future application as a milder and possibly carbon-neutral route for industrial phosphorus synthesis.&#13;
&#13;
In Chapter 3, we investigate the electrochemical reduction of nitrogen to ammonia, whose current production depends on the reforming of methane. We examine the fundamental challenges intrinsic to this reactivity, and highlight the amplification of catalyst selectivity at elevated pressures, a strategy showcased on a novel copper nitride electrode with exceptional selectivity towards nitrogen reduction in aqueous media.&#13;
&#13;
In Chapter 4, we discuss electrochemical methane monofunctionalization as a strategy for gas-to-liquid conversion, capable of valorizing methane flare streams economically inaccessible by incumbent industrial chemistries. We devise a process scheme for methane gas-to-liquid electroconversion with capacity for real-world implementation, which maximizes overall carbon efficiency by minimizing distillative overheads.&#13;
&#13;
The development of sustainable processes for generation of energy-dense fuels or valuable refined chemicals is ultimately reliant upon the application of efficient electrocatalysts for selectively employing electrons to rearrange chemical bonds. With this work, we demonstrate the rich potential for electrochemistry to unlock future routes to desirable industrial reactivities.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139141</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application-driven Intersections between Information Theory and Machine Learning</title>
<link>https://hdl.handle.net/1721.1/139138</link>
<description>Application-driven Intersections between Information Theory and Machine Learning
Liu, Litian
Machine learning has been tremendously successful in the past decade. In this thesis, we introduce guidance and insights from information theory to practical machine learning algorithms. In particular, we study three application domains and demonstrate the algorithmic gain of integrating machine learning with information theory. In the first part of the thesis, we deploy the principle of network coding to propose a decomposition scheme for distributing a neural network over a physical communication network. We show through experiments that our proposed scheme dramatically reduces the energy used compared to existing communication schemes under various channel statistics and network topologies. In the second part, we design a learning-based coding scheme, developed from the concept of error correction codes, for bio-molecular profiling. We show through simulations that, with a learning-based encoder and a maximum a posteriori (MAP) decoder, our scheme significantly outperforms existing schemes in reducing the false negative rate of rare bio-molecular types. In the third part, we apply the guesswork framework to the machine translation problem. We study machine translation using the seq2seq model and provide insights into quantifying the uncertainty within it. Our results shed light on the design of inference in machine translation, in particular on selecting the beam size in beam search.
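For context, the inference procedure whose beam size the abstract mentions can be sketched as a toy beam search over a hypothetical table of per-step token log-probabilities. This is a standard textbook sketch, not the thesis's code, and the token table is made up:

```python
import math

def beam_search(step_logprobs, beam_size):
    """step_logprobs: one dict per decoding step mapping token to log-probability.
    Keeps the beam_size best partial sequences at each step; returns the best
    (sequence, score) pair found."""
    beams = [([], 0.0)]  # (partial sequence, cumulative log-probability)
    for dist in step_logprobs:
        candidates = []
        for seq, score in beams:
            for tok, lp in dist.items():
                candidates.append((seq + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]  # prune to the beam width
    return beams[0]

steps = [{"a": math.log(0.6), "b": math.log(0.4)},
         {"a": math.log(0.5), "b": math.log(0.5)}]
best_seq, best_score = beam_search(steps, beam_size=2)
print(best_seq)
```

A larger beam explores more hypotheses per step at higher cost; quantifying the model's uncertainty, as the thesis does via guesswork, informs how large the beam needs to be.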
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139138</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Velocity Microparticle Impact for Analytical Modelling of High-Strain-Rate Mechanics and Material Behavior</title>
<link>https://hdl.handle.net/1721.1/139137</link>
<description>High-Velocity Microparticle Impact for Analytical Modelling of High-Strain-Rate Mechanics and Material Behavior
Sun, Yuchen
A recent optical technique for conducting and observing high-velocity micron-scale impacts against various materials is the focus of this work. This technique, the laser-induced particle impact test (LIPIT), is capable of accelerating microparticles to supersonic velocities to achieve strain rates up to 10⁸ s⁻¹. The platform also provides in situ observations of the impact to allow for quantification of the particle trajectories before and after collision. This work first details the experimental capabilities and limitations. The majority of the work then presents and discusses the methods of material characterization permitted by the LIPIT platform across various material systems, often involving analytical modelling, post-impact microscopy, and a plethora of complementary techniques. Two major systems discussed are elastomers and metals.&#13;
&#13;
For elastomers, the LIPIT provides a high-strain-rate characterization that is first compared to two low-strain-rate techniques. These results are contextualized through chemical characterization of the elastomers using solid-state nuclear magnetic resonance. Temperature-dependent impact measurements are then compared with dielectric spectroscopy results to link the impact behavior with the glass transition.&#13;
&#13;
For metals, high-velocity impacts are performed and characterized using in situ observations, post-impact microscopy, and analytical modelling. Observations of a jetting phenomenon in both in situ and post impact imaging are correlated to divergence from a power-law behavior to reveal a new regime of ‘divergent’ behavior. This regime is further explored in the context of oblique impacts to reveal more interesting impact behavior.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139137</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Ethics</title>
<link>https://hdl.handle.net/1721.1/139134</link>
<description>Making Ethics
Byrne, Thomas
A window broke and Annie was involved. What’s of moral importance in a situation like this? Not whether Annie caused the window to break and not whether the window wouldn’t have broken if it weren’t for Annie. What’s morally important is whether Annie broke the window. In this thesis, I first generalise and argue for that claim; afterwards, I put it to work in ethics, applied ethics, and legal theory.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139134</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning and causality: Building efficient, and reliable models for decision-making</title>
<link>https://hdl.handle.net/1721.1/139131</link>
<description>Machine learning and causality: Building efficient, and reliable models for decision-making
Makar, Maggie
We explore relationships between machine learning (ML) and causal inference. We focus on improvements in each by borrowing ideas from one another.&#13;
&#13;
ML has been successfully applied to many problems, but the lack of strong theoretical guarantees has led to many unexpected failures. Models that perform well on the training distribution tend to break down when applied to different distributions; small perturbations can “fool” the trained model and drastically change its predictions; arbitrary choices in the training algorithm lead to vastly different models; and so forth. On the other hand, while there has been tremendous progress in developing causal inference methods with strong theoretical guarantees, existing methods typically do not apply in practice since they assume an abundance of data. Working at the intersection of ML and causal inference, we directly address the lack of robustness in ML, and improve the statistical efficiency of causal inference techniques.&#13;
&#13;
The motivation behind the work presented in this thesis is to improve methods for building predictive and causal models that are used to guide decision making. Throughout, we focus mostly on decision making in the healthcare context. On the ML for causality side, we use ML tools and analysis techniques to develop statistically efficient causal models that can guide clinicians when choosing between two treatments. On the causality for ML side, we study how knowledge of the causal mechanisms that generate observed data can be used to efficiently regularize predictive models without introducing biases. In a clinical context, we show how causal knowledge can be used to build robust and accurate models to predict the spread of contagious infections. In a non-clinical setting, we study how to use causal knowledge to train models that are robust to distribution shifts in the context of image classification.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139131</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/139128</link>
<description>Essays in Financial Economics
Hansen, Peter G.
Chapter 1 introduces novel preference formulations which capture aversion to ambiguity about unknown and potentially time-varying volatility. These preferences are compared with Gilboa and Schmeidler's maxmin expected utility as well as variational formulations of ambiguity aversion. The impact of ambiguity aversion is illustrated in a simple static model of portfolio choice, as well as a dynamic model of optimal contracting under repeated moral hazard. Implications for investor beliefs, optimal design of corporate securities, and asset pricing are explored. &#13;
&#13;
Chapter 2 develops a method informed by data and models to recover information about investor beliefs. This approach uses information embedded in forward-looking asset prices in conjunction with asset pricing models. We step back from presuming rational expectations and entertain potential belief distortions bounded by a statistical measure of discrepancy. Additionally, this method allows for the direct use of sparse survey evidence to make these bounds more informative. Within this framework, market-implied beliefs may differ from those implied by rational expectations due to behavioral/psychological biases of investors, ambiguity aversion, or omitted permanent components to valuation. Formally, evidence about investor beliefs is represented as a nonlinear expectation function deduced using model-implied moment conditions and bounds on statistical divergence. This method is illustrated with a prototypical example from macro-finance using asset market data to infer belief restrictions for macroeconomic growth rates. &#13;
&#13;
Chapter 3 develops diagnostic tools to assess whether individual factor risk premia are identified from return data. We describe a necessary and sufficient condition for population identification, which we call the kernel-orthogonality condition. This condition can be thought of intuitively as the existence of a “true” factor mimicking portfolio, and is weaker than the standard rank condition commonly assumed for linear factor models. Furthermore, this condition remains meaningful even if the factor model is misspecified, as a condition for the identification of the factor risk premium consistent with minimal pricing error. We discuss test procedures to assess identification, and provide a novel test of the kernel-orthogonality condition in reduced-rank models. Finally, we apply our test methodology to assess identification of risk premia associated with consumption growth and intermediary leverage.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139128</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Operations in Changing Environments</title>
<link>https://hdl.handle.net/1721.1/139127</link>
<description>Data-Driven Operations in Changing Environments
Zhu, Ruihao
Rapid development of data science technologies has enabled data-driven algorithms for many important operational problems. Existing data-driven solutions often require the operational environment to be stationary. However, recent examples have shown that operational environments can change dynamically. It is thus imperative to design data-driven algorithms that are capable of working in time-varying environments.&#13;
&#13;
We first introduce data-driven decision-making algorithms that achieve state-of-the-art dynamic regret bounds for non-stationary bandit and reinforcement learning settings. These settings capture applications such as advertisement allocation, dynamic pricing, and inventory control in changing environments. Our main contribution is a general algorithmic recipe for a wide variety of non-stationary bandit and reinforcement learning problems without any knowledge about the environments in advance. &#13;
&#13;
Next, we study the problem of learning shared structure across a sequence of dynamic pricing experiments for related products. We consider a practical formulation in which the unknown demand parameters for each product come from an unknown prior that is shared across products. We then propose a meta dynamic pricing algorithm that learns this prior online while solving a Thompson sampling pricing experiment for each product. &#13;
&#13;
Finally, motivated by our collaboration with AB InBev, a consumer packaged goods (CPG) company, we consider the problem of forecasting sales under the coronavirus disease 2019 (COVID-19) pandemic. Our approach combines online learning and pandemic modeling to develop a data-driven online non-parametric regression method. Numerical experiments show that our method is capable of reducing the forecasting error in terms of WMAPE (i.e., weighted mean absolute percentage error) and MSE (i.e., mean squared error) by more than 50% for AB InBev.
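The two error metrics named above have standard definitions, sketched here with made-up sales figures. This is an illustrative reference, not the thesis's forecasting code:

```python
# WMAPE weights absolute errors by actual sales volume, so large-volume weeks
# dominate; MSE penalizes large individual errors quadratically.

def wmape(actual, forecast):
    """Weighted MAPE: sum of absolute errors divided by sum of actuals."""
    total_error = sum(abs(a - f) for a, f in zip(actual, forecast))
    return total_error / sum(abs(a) for a in actual)

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

sales    = [100.0, 80.0, 120.0]   # hypothetical weekly sales
forecast = [ 90.0, 85.0, 110.0]
print(round(wmape(sales, forecast), 4))  # 0.0833, i.e. 8.3% weighted error
print(mse(sales, forecast))              # 75.0
```

A 50% reduction in these quantities, as reported above, means the weighted absolute error and squared error of the method's forecasts are roughly half those of the baseline.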
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139127</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods to dissect the genetic basis of human disease</title>
<link>https://hdl.handle.net/1721.1/139122</link>
<description>Computational methods to dissect the genetic basis of human disease
Kim, Samuel Sungil
Genome-wide association studies (GWAS) have been successful in identifying disease-associated genetic variants. However, the path from GWAS to biological insight remains challenging, notably in identifying relevant biological pathways, explaining mechanistic links between diseases, and nominating disease-critical tissues and cell types. In this thesis, I introduce computational methods to dissect the genetic basis of human disease by integrating GWAS with functional data. In the first chapter, I integrate GWAS with biological pathways and gene networks to elucidate biological mechanisms. I identify significantly associated pathways and highlight the importance of accounting for regulatory annotations in pathway enrichment and gene network analyses. In the second chapter, I investigate the shared genetic architecture between Mendelian disease and common disease by developing a machine learning framework to impute and denoise Mendelian disease-derived pathogenicity scores. I assess the informativeness of Mendelian pathogenicity scores for common disease and improve upon existing scores. In the third chapter, I prioritize disease-critical cell types by integrating GWAS with single-cell gene expression and chromatin accessibility profiling of fetal and adult brains. I show that the identified disease-cell type associations recapitulate known biology while informing future analyses of disease mechanisms.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139122</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and Characterization of Chalcogenide Perovskites</title>
<link>https://hdl.handle.net/1721.1/139111</link>
<description>Synthesis and Characterization of Chalcogenide Perovskites
Filippone, Stephen Andrew
Chalcogenide perovskites are both an old and new group of materials. First synthesized in 1957, chalcogenide perovskites are generally sulfur and selenium compounds which contain a 3D network of corner-sharing octahedra. Not until 2015 was this material class recognized as being suitable for any application. Their strong light absorption in the visible to near-IR wavelengths makes them of interest as photovoltaic absorber materials. The potential to deliver on the promise of low-cost thin film photovoltaics is reason enough to study chalcogenide perovskites. Additionally, their structural similarity to lead-halide and oxide perovskites offers a new opportunity to study related questions in those fields such as defect tolerance in lead-halide perovskites and ferroelectricity in oxides. The following projects represent advances in both synthesis and characterization of chalcogenide perovskites.&#13;
&#13;
I used computational software to calculate phase diagrams for chalcogenide perovskites. These phase diagrams were calculated under ultra-high-vacuum conditions to serve as guides for thin film deposition techniques. I also attempted to synthesize the predicted ferroelectric compound ZnSnS3 by molecular beam epitaxy and solid phase epitaxy. Here I also report on the application of the new “cold sintering process” to the densification of BaZrS3. I explored multiple sintering aids, including organic solvents, sulfur, and iodine. I achieved high densification with iodine as a sintering aid.&#13;
&#13;
I characterized the dielectric and electronic properties of BaZrS3 and Ba3Zr2S7 single crystals. I used impedance spectroscopy on single crystals to measure their low-frequency relative dielectric constants. I used pump-probe IR reflectivity to measure mobility anisotropy in Ba3Zr2S7 single crystals. I used a Drude model to estimate the ambipolar carrier mobility from IR reflectivity in the &lt;110&gt; and &lt;001&gt; directions of Ba3Zr2S7. &#13;
&#13;
This body of work advances the emerging field of chalcogenide perovskites by providing practical guides for thin film synthesis, lessons from thin film and bulk synthesis work, as well as basic property measurements on bulk material critical for the design of future thin film devices.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139111</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rare-Earth Nanoparticles for Non-invasive In Vivo Imaging of Immune Cells in Cancer Immunotherapy</title>
<link>https://hdl.handle.net/1721.1/139103</link>
<description>Rare-Earth Nanoparticles for Non-invasive In Vivo Imaging of Immune Cells in Cancer Immunotherapy
Kataria, Swati
Cancer immunotherapy is fast emerging as a promising treatment approach for many cancers. However, outcomes remain highly variable across individuals, as only a minority of patients respond, and addressing this variability is one of the most active areas of immunotherapy research. Studies have shown that the infiltration of tumors by immune cell subtypes is one of the most significant prognostic indicators of disease-free survival across a large number of cancers. However, we remain limited in our ability to non-invasively sample the continuously evolving tumor micro-environment for the presence of immune cells that can give us critical insights into the response status of the patient.&#13;
&#13;
Rare-earth nanoparticles are an exciting new class of optical materials, very attractive for high-resolution imaging of deep tissue structures. They have narrow and tunable emission bands in regions of near-zero tissue autofluorescence (1300-1700 nm), a large anti-Stokes shift enabling clear separation of excitation and emission wavelengths, high photostability for continuous and repetitive imaging, and very low intrinsic toxicity in vivo due to their exceptional chemical inertness, making them attractive for clinical translation. Moreover, near-infrared optical imaging technology offers several advantages over conventional clinical imaging tools in terms of resolution, cost, safety, and repeatability.&#13;
&#13;
In this thesis, we show that near-infrared imaging using rare-earth nanoparticles can serve as a powerful tool to non-invasively image the distribution of immune cells in a tumor. We first present nanoparticle synthesis strategies for the generation of small but ultra-bright rare earth nanocrystals necessary for deep tissue imaging of rare cell types. We then evaluate a variety of surface modification approaches and present new methods for facile and reproducible phase-transfer of these nanoparticles to enable further biofunctionalization and cell targeting. Finally, we demonstrate that these nanoparticles can be used for high resolution in vivo imaging of tumor-infiltrating T-cells in a melanoma tumor model. The ability to noninvasively monitor the immune contexture of a tumor during immunotherapy will yield valuable real time insights into the response status of a patient, and could lead to early identification of responding and non-responding patient cohorts.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139103</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/139102</link>
<description>Essays in Financial Economics
Dernaoui, Zaki
In Chapter 1, we study the US housing market using a proprietary dataset covering nearly 90 million transactions over 1998–2018. First, we document the evolution and quantify the contributions of non-primary housing demand to the housing cycle. Our findings suggest that the share of market timers grew substantially in the run-up to the global financial crisis, which amplified the boom-bust cycle, while out-of-state buyers partially propped up prices. Second, we use a novel quasi-natural experiment design to establish a causal relationship between housing speculation and prices. Third, we show that the rise of shadow banking is associated with riskier mortgages and more speculation, which jointly amplify the housing cycle.&#13;
Chapter 2 revisits the exchange rate disconnect puzzle at the firm level. If a firm invoices a transaction in a foreign currency, a delay of payment between the transaction date and the settlement date exposes the firm to exchange rate risk. In their income statements, firms report such exchange rate gains and losses, signaling their exposure to currency risk. We focus on two countries, Japan and the United States, that exhibit a similar trade openness but two very different shares of foreign currency invoicing. We find that an appreciation of the yen significantly decreases the net income and investment of Japanese firms, but an appreciation of the dollar has no significant effect on the U.S. sample. Exchange rate risk appears linked to the value of Japanese firms: the higher the exposure to exchange rate risk according to their income statements, the higher the loadings of their equity returns on exchange rate returns.&#13;
Chapter 3 examines the recent compositional shift in corporate capital and its impact on the sensitivity of investment to funding costs. We show that the rising share of intangibles in U.S. firms’ assets significantly dampens the stimulus effect of interest rate shocks. For a given surprise change to the fed funds rate, an intangible capital intensity one standard deviation above the mean mutes the investment response by around 30%. These results hold in robust specifications, when isolating the pure interest rate effect, and when controlling for other known factors such as leverage and firm growth. A number of characteristics of intangible capital can potentially explain the heterogeneous responses: collateral value, adjustment costs, project duration, and depreciation rates. We propose a structural interpretation of the empirical findings in a quantitative model of heterogeneous firms.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139102</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New combinatorics of the weak and strong Bruhat orders</title>
<link>https://hdl.handle.net/1721.1/139101</link>
<description>New combinatorics of the weak and strong Bruhat orders
Gaetz, Christian
This thesis describes a line of work, much of it joint with Yibo Gao, which began with our proof of Stanley's conjecture that the weak order on the symmetric group is Sperner. Further developments, either directly related or related in our thinking at the time, involve weighted path enumeration in the weak and strong Bruhat orders, specializations of Schubert polynomials, separable elements in finite Weyl groups, and inequalities for linear extensions of finite posets.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139101</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Surgery Exact Triangle in Monopole Floer Homology with Z[i] Coefficients</title>
<link>https://hdl.handle.net/1721.1/139100</link>
<description>The Surgery Exact Triangle in Monopole Floer Homology with Z[i] Coefficients
Freeman, Jesse
In their seminal paper, Kronheimer, Mrowka, Ozsváth and Szabó establish the existence of a surgery exact triangle for Monopole Floer homology. The triangle was a theoretical breakthrough and allowed us to answer previously mysterious questions about the surgery correspondence between knots and 3-manifolds. A serious limitation of their triangle is that it only holds in characteristic two. In this paper, we establish the existence of a surgery exact triangle using local coefficients valued in Z[i].
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139100</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Belief-Space Planning for Real-World Systems: Efficient SLAM-Based Belief Propagation and Continuous-Time Safety</title>
<link>https://hdl.handle.net/1721.1/139099</link>
<description>Belief-Space Planning for Real-World Systems: Efficient SLAM-Based Belief Propagation and Continuous-Time Safety
Frey, Kristoffer M.
Uncertainty-aware planning has long been a recurring goal in robotics. By enabling autonomous systems to explicitly reason about their own uncertainty, desirable behaviors that increase observability and ensure robust constraint satisfaction arise naturally from high-level optimization specifications. For partially-observable and under-sensed systems in particular, belief-space planning (BSP) provides a natural probabilistic formulation. Despite significant research attention over the years, a few key challenges have prevented the application of BSP to the real-world systems that would stand to benefit the most, such as SLAM-reliant Micro-Aerial Vehicles (MAVs). &#13;
&#13;
The most fundamental of these challenges is that of efficiently propagating the state belief, particularly under SLAM-based estimation schemes like Visual-Inertial Odometry (VIO). This thesis describes a structureless and consistent approximation for belief propagation under SLAM, the efficacy of which is demonstrated in the challenging setting of observability-aware planning for VIO.&#13;
&#13;
A key attraction of BSP is the ability to specify constraints on the total probability of failure. However, actually encoding these constraints within practical optimization schemes remains a challenge, particularly for physical systems, which evolve continuously in time. General-purpose Monte Carlo methods can be used to accurately assess failure rates, but these are cumbersome to optimize against, while more convenient “direct” estimates are based on discrete-time simplifications and fail to meaningfully constrain the full continuous-time risk. To address this gap, a novel risk estimate is derived directly in continuous time, providing a principled, lightweight, and convenient means of ensuring probabilistic safety for real-world systems. Together, these contributions enable online, risk-constrained BSP for a large class of systems of widespread practical interest.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139099</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducibility and Statistical-Computational Gaps from Secret Leakage</title>
<link>https://hdl.handle.net/1721.1/139096</link>
<description>Reducibility and Statistical-Computational Gaps from Secret Leakage
Brennan, Matthew S.
Inference problems with conjectured statistical-computational gaps are ubiquitous throughout modern statistics, computer science, statistical physics and discrete probability. While there has been success evidencing these gaps from the failure of restricted classes of algorithms, progress towards a more traditional reduction-based approach to computational complexity in statistical inference has been limited. These average-case problems are each tied to a different natural distribution, high-dimensional structure and conjecturally hard parameter regime, leaving reductions among them technically challenging. Despite a flurry of recent success in developing such techniques, existing reductions have largely been limited to inference problems with similar structure: primarily mapping among problems representable as a sparse submatrix signal plus a noise matrix, a structure similar to the common starting hardness assumption of planted clique (PC).&#13;
&#13;
The insight in this work is that a slight generalization of the planted clique conjecture – secret leakage planted clique (PCₚ), wherein a small amount of information about the hidden clique is revealed – gives rise to a variety of new average-case reduction techniques, yielding a web of reductions relating statistical problems with very different structure. Based on generalizations of the planted clique conjecture to specific forms of PCₚ, we deduce tight statistical-computational tradeoffs for a diverse range of problems including robust sparse mean estimation, mixtures of sparse linear regressions, robust sparse linear regression, tensor PCA, variants of dense &#119896;-block stochastic block models, negatively correlated sparse PCA, semirandom planted dense subgraph, detection in hidden partition models and a universality principle for learning sparse mixtures. This gives the first reduction-based evidence supporting a number of statistical-computational gaps observed in the literature [Li17, BDLS17, DKS17, CX16, HWX15, BBH18, FLWY18, LSLC18, RM14, HSS15, WEAM19, ASW13, VAC17].&#13;
&#13;
We introduce a number of new average-case reduction techniques that also reveal novel connections to combinatorial designs based on the incidence geometry of Fᵗᵣ and to random matrix theory. In particular, we show a convergence result between Wishart and inverse Wishart matrices that may be of independent interest. The specific hardness conjectures for PCₚ implying our statistical-computational gaps are all in correspondence with natural graph problems such as &#119896;-partite, bipartite and hypergraph variants of PC. Hardness in a &#119896;-partite hypergraph variant of PC is the strongest of these conjectures and is sufficient to establish all of our computational lower bounds. We also give evidence for our PCₚ hardness conjectures from the failure of low-degree polynomials and statistical query algorithms. Our work raises a number of open problems and suggests that previous technical obstacles to average-case reductions may have arisen because planted clique is not the right starting point. An expanded set of hardness assumptions, such as PCₚ, may be a key first step towards a more complete theory of reductions among statistical problems.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139096</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Next Generation Nucleic Acid Diagnostics Using Synthetic Biology and Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/139083</link>
<description>Designing Next Generation Nucleic Acid Diagnostics Using Synthetic Biology and Artificial Intelligence
Angenent-Mari, Nicolaas Manuel
Nucleic Acid Testing (NAT) is an indispensable tool for effective disease diagnosis. Analyzing pathogen and host RNA or DNA often provides otherwise unobtainable information necessary for proper patient treatment. Unfortunately, a number of technical barriers prevent the expansion of NAT technology into novel application spheres, such as wearable, digital, and direct-to-consumer or point-of-care diagnostic testing. These limitations include the cost of NAT, the equipment needed to perform it, and assay sensitivity. No available technologies have simultaneously achieved the combination of a consumer-tolerable cost, equipment-free passive operation, and gold-standard sensitivity. The design of novel assays that overcome all such limitations in concert would allow for the deployment of NAT in previously inaccessible environments, improving the range and accessibility of crucial disease monitoring.&#13;
&#13;
In this thesis I outline four efforts to expand the capacity of NAT in this direction. First, I describe the design of a novel CRISPR-Cas13-activated riboswitch which demonstrates the potential of synthetic biology tools for nucleic acid detection. Second, I describe the prototyping of a platform for the deployment of freeze-dried synthetic biology-based diagnostic assays in wearable formats, including NAT assays as well as assays for small-molecule analytes. Third, I describe the synthesis of a toehold switch library and its subsequent analysis using deep learning, demonstrating the potential for high-throughput AI-guided design of diagnostic tools. Fourth, I describe the design of a novel isothermal nucleic acid amplification method that functions at low temperatures. I conclude by discussing the future direction of NAT technologies, and describe new opportunities for improved health outcomes that could arise from a new generation of diagnostic tools.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139083</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soil-Plant-Atmosphere Coupling during Interstorm Periods</title>
<link>https://hdl.handle.net/1721.1/139077</link>
<description>Soil-Plant-Atmosphere Coupling during Interstorm Periods
Feldman, Andrew F.
The future trajectories of net terrestrial carbon uptake and agricultural yields depend on how vegetation responds to climate forcings. However, characterizing vegetation responses to water stress and other environmental drivers is challenging because these forcing factors are inter-related, especially on seasonal timescales. Here, newly available global mapping measurements from microwave satellite sensors are used to characterize water exchange in the soil-plant-atmosphere continuum. These satellites enable evaluation of the time evolution of landscape-scale plant water content during interstorm periods, providing insights into underlying mechanisms and allowing their drivers to be disentangled. Here, I ask: what are the fundamental landscape-scale plant responses to rainfall events and interstorm drying? What does interstorm land surface behavior reveal about landscape responsiveness to climate forcing and vulnerability to change?&#13;
&#13;
Using satellite and field tower observations, plant water and carbon uptake responses to rain pulses are characterized across global ecosystems. Responses depend on pulse characteristics: smaller pulses on initially dry soils produce slow rehydration responses; larger pulses on initially wet soils trigger rapid growth. Though more pronounced in drier environments, these responses occur across global ecosystems, a broader occurrence than previously thought. Furthermore, the soil moisture threshold below which water limitation occurs and land-atmosphere interactions strengthen is estimated. With sufficient drying below this threshold, soil and plant water loss become highly linked to surface warming and drying, which enhances plant water stress. Finally, regions spending more time in this water-limited evaporative regime are most responsive to surface forcings.&#13;
&#13;
Several findings emerge here: (1) Rapid plant growth and slow rehydration responses to surface rewetting are more widespread than previously thought. Given their additional water stress responses to interstorm drying, global vegetation, and consequently the carbon cycle, are vulnerable to shifting rainfall intensity and interstorm length under climate change. (2) The landscapes most vulnerable to environmental variability and change are those that progressively spend more time in the water-limited regime, not necessarily those that receive the greatest forcing variability. (3) Satellite plant water content observations contain information on subweekly timescales consistent with plant hydraulic theory and field measurements, revealing new avenues for vegetation remote sensing and ecological modeling efforts.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139077</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Air Quality Count: Low-cost sensors, Public Health and Urban Planning</title>
<link>https://hdl.handle.net/1721.1/139076</link>
<description>Making Air Quality Count: Low-cost sensors, Public Health and Urban Planning
deSouza, Priyanka
Ambient air pollution is responsible for ~4.2 million premature deaths every year, making it the world’s single largest environmental health risk. Although 90% of this burden is borne by countries in the Global South, effective air pollution governance and monitoring in many of these countries is lacking. As of 2019, 57 countries had no air quality standards and 108 countries did not have air pollution data in any form. This dissertation attempts to understand and address some of the factors that have resulted in these gaps in data and governance. Specifically, this work makes two main interventions: 1) Low-cost sensors and satellite instruments have immense potential to further our understanding of air pollution, especially in the Global South where little data is available. This thesis develops new methods to derive useful insights about air pollution levels and sources from these technologies. Throughout, it highlights inequalities in the production of and access to data and these technologies that need to be addressed. 2) Air pollution monitoring practices and governance are intertwined with data infrastructures, political economy conditions, and anticipation of political engagement. This thesis studies the gaps in the data infrastructures and political economy conditions that prevent air pollution science and data from leading to effective regulatory action in the Global South. It uses Kenya as a case study for this work.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139076</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational illumination for portrait photography and inverse graphics</title>
<link>https://hdl.handle.net/1721.1/139074</link>
<description>Computational illumination for portrait photography and inverse graphics
Murmann, Lukas
Supervised training of deep networks has led to remarkable successes in computer vision, for example on image classification and object detection problems. These successes are driven by the availability of large amounts of paired training data with manual ground truth annotations. For many photography and inverse graphics applications, however, manual annotation of ground truth labels is not viable. Motivated by this, the research presented in this thesis proposes several portable hardware prototypes that enable the collection of training data for applications ranging from non-line-of-sight imaging to relighting and dark-flash photography.&#13;
&#13;
The thesis also discusses a novel formulation for fast and accurate differentiable rendering based on analytical anti-aliasing. It is demonstrated how this renderer can be used for inverse graphics problems. The thesis concludes with a discussion of how differentiable programming can be combined with data-driven feed-forward networks for practical inverse graphics applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139074</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fracture mechanics in the semigrand canonical ensemble</title>
<link>https://hdl.handle.net/1721.1/139066</link>
<description>Fracture mechanics in the semigrand canonical ensemble
Mulla Mahmoud, Talal
We present a novel simulation method to assess the quasi-static fracture resistance of materials. Set within a semigrand canonical Monte Carlo (SGCMC) simulation environment, an auxiliary field, the bond rupture potential, is introduced to generate a sufficiently large number of possible microstates in the semigrand canonical ensemble, along with the associated energy and bond fluctuations. The SGCMC approach permits identifying the full phase diagram of brittle fracture for harmonic and non-harmonic bond potentials, analogous to the gas-liquid phase diagram, with the equivalent of a liquidus line ending in a critical point. The phase diagram delineates a solid phase, a fractured phase and a gas phase, and provides clear evidence of a first-order phase transition intrinsic to fracture. Moreover, energy and bond fluctuations generated with the SGCMC approach permit determination of the maximum energy dissipation associated with bond rupture, and hence of the fracture resistance of a wide range of materials that can be described by bond potentials.&#13;
&#13;
We further adapt the method to a hybrid analytical-simulation investigation of the fracture resistance of heterogeneous materials. We show that bond-energy fluctuations sampled by Monte Carlo simulations in the semigrand canonical ensemble provide a means to rationalize the complexity of heterogeneous fracture processes, encompassing probability and percolation theories of fracture within a unified framework of fluctuation-based fracture mechanics. For a number of random and textured model materials, we derive upper and lower bounds of fracture resistance, which are critical to identify toughening mechanisms. Specifically, elastic toughening mechanisms due to elastic energy mismatch are shown to result from both the activation of cooperative interactions in soft-tough bulk phases and interfaces, and the transition from critical to subcritical bond fracture percolation in textured materials. While counterintuitive at first sight, this soft-tough paradigm can explain a number of experimental observations, including the toughening of brittle solids by deformable polymers or organics (as in gas shale and nacre), stress-induced transformational toughening mechanisms in ceramics, and the toughening of sparse elastic networks in hydrogels, to name a few.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139066</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Size Effects in Shape Memory Ceramics</title>
<link>https://hdl.handle.net/1721.1/139058</link>
<description>Size Effects in Shape Memory Ceramics
Crystal, Isabel R.
Bulk shape memory ceramics (SMCs) are attractive for their high transformation temperatures and transformation stresses compared to shape memory metals, but exhibit transformation-induced cracking due to mismatch stresses arising at the grain boundaries. Current strategies for mitigating cracking in SMCs involve moving towards smaller-volume structures, which feature high surface area for stress relaxation and fewer grain boundaries to minimize transformation stresses. While this approach has proven successful, it typically limits SMCs to sample sizes at the micrometer scale in micropillar and microparticle structures. Here it is proposed that the strategy of simply lowering the grain boundary area can result in bulk SMCs that do not crack, as these structures, like the micropillars and microparticles, satisfy the constraint of having relatively few sites of high stress concentration. This led to the investigation of optical floating zone and cold crucible induction crystal growth of zirconia-based SMCs. Single crystals produced through the cold crucible induction melting route were found to be of high quality and ultimately increased the number of thermally induced martensitic transformation cycles from 5 in bulk materials to at least 125.&#13;
&#13;
The tetragonal-to-monoclinic transformation behavior of yttria-doped zirconia in polycrystalline and single crystalline forms was compared over many transformation cycles collected in the differential scanning calorimeter. The evolution of thermal hysteresis and transformation strains was used to characterize the thermal cycling performance of each structure. Whereas single crystals had very repeatable transformation behavior in terms of hysteresis and strain amplitude, polycrystals degraded dramatically as they accumulated cracking damage with repeated cycling. As the polycrystal evolved from a pellet to a granular packing of loose single crystals/grains, the energy dissipation converged with that of the single-crystal structure, and the energy spent on cracking throughout that process is captured by calorimetry analysis.&#13;
&#13;
We then explored grain size effects in cyclic martensitic transformations in polycrystalline structures to study cracking-induced disaggregation as a function of grain size from ~0.6 to 7.9 µm. A smaller grain size was found to increase the number of cycles required to disaggregate the pellet because of the larger amount of grain boundary area that must crack. Calorimetry analysis showed that the energy relieved through cracking decreases with increasing grain size and suggested an apparent material length scale of ~2 micrometers for the stress relief zone. By comparing data from initial and final transformation cycles, grain size and particle size relationships could be developed and compared to those already established in shape memory alloys.&#13;
&#13;
These results all verify that grain boundaries play a key role in damage accumulation and evolution during cyclic martensitic transformation and that microstructural control can extend the size scale of viable single crystal or oligocrystal SMCs from the micro- to the millimeter scale.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139058</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polarity Inversion in Silicon and Phosphorus Compounds</title>
<link>https://hdl.handle.net/1721.1/139046</link>
<description>Polarity Inversion in Silicon and Phosphorus Compounds
Gilhula, James Connor
The desire for new metal-free catalysts and reagents is currently fueling a renaissance in synthetic main group chemistry. The work herein describes efforts to design silicon and phosphorus compounds of inverse polarity with respect to conventional reactivity by accessing oxidation states that are atypical for these p-block elements. This umpolung approach affords novel reactivity that was previously unavailable to silicon and phosphorus molecules. In a first demonstration, synthetic and mechanistic studies of the 1,3-dipolar reaction of nitroarenes with geometrically constrained base-stabilized silylenes will be described. This cycloaddition initiates stepwise deoxygenation by silicon(II), and the application of this rare mode of reactivity to a room-temperature variant of the Cadogan reaction will be detailed. In a second section, tetragonal phosphorus(V) cations, supported by a macrocyclic ligand, that exhibit strong electrophilicity and hydrolytic stability will be disclosed. By virtue of the low-lying acceptor orbital enforced by the square pyramidal structure, these compounds are potent (yet water-tolerant) Lewis acids that catalyze carbonyl activation, C–H functionalization, and glucose deoxygenation. Finally, a combined experimental and theoretical study on a class of geometrically constrained phosphine imides (i.e., phosphazenes) will be discussed. By imposition of non-VSEPR geometries through cyclic constraint, these vicinal ambiphilic P(V) compounds readily undergo unorthodox 1,2-addition reactions as a function of increased electrophilic character at the nominally inert phosphorus center. Taken together, the development of new Si- and P-based reaction chemistry showcases the many opportunities for discovery of useful methods enabled by innovations in p-block molecular design.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139046</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Econometrics and Public Finance</title>
<link>https://hdl.handle.net/1721.1/139033</link>
<description>Essays in Econometrics and Public Finance
Sun, Liyang
In recent years, natural experiments and randomized control trials (RCTs) have become increasingly common. Econometric evaluation of these data allows economic researchers and policymakers to assess the treatment effect of some intervention. Traditional approaches were often developed under the assumption of a homogeneous treatment effect. In this thesis, I investigate whether these approaches remain reliable for estimating the average treatment effect in the more realistic setting of heterogeneity, and where they are not, I propose more accommodating estimation methods.&#13;
&#13;
In the first chapter, I focus on the important problem of efficient allocation of an intervention when it is infeasible to reach everyone due to limited resources. An overlooked aspect of the existing approach is that the cost of the intervention can also be heterogeneous and requires estimation. I find that the direct extension of the existing approach does not account for the uncertainty of the estimated cost, and can lead to infeasible allocations. I provide policymakers with new approaches to allocations that account for imperfect information about feasibility.&#13;
&#13;
Treatment effects heterogeneity can affect individuals’ decisions to comply with the intervention. Their take-up decisions are unobserved, and economic researchers would like to estimate which subpopulations are most likely to comply with the intervention. Traditional approaches focus on settings with low-dimensional observed confounders. In the second chapter, Rahul Singh and I develop methods to characterize compliers while adjusting for high-dimensional observed confounders. In our approach, the adjustment is itself performed by machine learning, a variant called automatic de-biased machine learning (Auto-DML), and avoids the ad hoc trimming or censoring of a learned propensity score.&#13;
&#13;
In the third chapter, joint with Sarah Abraham, we examine two-way fixed effects regressions that include leads and lags of the treatment, a popular approach to estimating the effect of dynamic shocks and non-transient treatment. We show that in settings with variation in treatment timing across units, the coefficient on a given lead or lag can be contaminated by effects from other periods, and apparent pretrends can arise solely from treatment effects heterogeneity. We propose an alternative estimator that is free of contamination, and therefore more robust to treatment effects heterogeneity.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139033</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Decision Making Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/139032</link>
<description>Essays on Decision Making Under Uncertainty
Koduri, Nihal
In Chapter 2, "The Origin of Cooperation," we construct an evolutionary model of a population consisting of two types of interacting individuals that reproduce under random environmental conditions. We show that not only does the evolutionarily dominant behavior maximize the number of offspring of each type, it also minimizes the correlation between the number of offspring of each type, driving it towards -1. We provide several examples that illustrate how correlation can be used to explain the evolution of cooperation.&#13;
&#13;
In Chapter 3, "Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach," we present two methods, based on regression in reproducing kernel Hilbert spaces, for solving an optimization problem with uncertain parameters for which we have historical data, including auxiliary data. The first method approximates the objective function and the second approximates the optimizer. We provide finite sample guarantees and prove asymptotic optimality for both methods. Computational experiments suggest that at least the second method overcomes a curse of dimensionality that afflicts existing methods, extrapolates better to unseen data, and achieves a many-fold decrease in sample complexity even for small dimensions.&#13;
&#13;
In Chapter 4, "Robust Optimization with Side Data," we introduce and solve the problem of minimizing a cost function subject to constraints that depend on a vector of uncertain parameters, given historical data on the parameters, including side data. We develop two approaches to solving this problem, both involving replacing the uncertain constraints by constraints evaluated at data points. If the original problem is convex, both approaches require solving a single new convex optimization problem. We show that the degree of suboptimality and degree of constraint violation of the decision produced by both approaches converges almost surely to zero. We show that for both approaches, varying a single parameter controls the tradeoff between suboptimality and constraint violation, and in lieu of finite-sample probabilistic guarantees, we propose a cross-validation scheme for choosing the parameter to enforce the desired tradeoff. We implement both approaches and the cross-validation scheme on a wide range of computational examples and observe that both approaches produce a satisfactory decision and that the cross-validation scheme is effective in choosing the tradeoff between suboptimality and constraint violation. We also combine both approaches with prior work to form two tractable approaches to optimization under uncertainty with side data when both the objective and constraints are uncertain.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139032</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ozone Chemistry in the Lower Stratosphere: Drivers, Trends, and Impacts</title>
<link>https://hdl.handle.net/1721.1/139029</link>
<description>Ozone Chemistry in the Lower Stratosphere: Drivers, Trends, and Impacts
Wilka, Catherine A.
This thesis seeks to improve our understanding of the connections between anthropogenically driven chemical changes in the stratosphere and their modulation by natural processes. It probes these questions on timescales that range from daily to decadal and on spatial scales ranging from a few kilometers to the global mean, and presents new results that improve our understanding of both the chemistry and physics behind these processes and our ability to represent them in chemistry-climate models. This work is driven by the belief that improved understanding of the fundamentals of stratospheric chemical and physical processes can simultaneously advance science and help society in evaluating past policy decisions and informing future ones. &#13;
&#13;
First, the impact of large and moderate-sized volcanic eruptions on anthropogenically driven ozone depletion and recovery trends is explored. Results confirm previous work showing that large volcanic eruptions increased the rate of ozone depletion prior to the mid-1990s, and for the first time quantify how the combination of an unusually volcanically quiescent period from the mid-1990s to the mid-2000s and a flurry of moderate-sized volcanic eruptions after the mid-2000s decreased the rate of ozone recovery until at least 2014. This study is the first to show that the timing of moderate-sized volcanic eruptions, not just large ones, can significantly alter ozone decadal trends. &#13;
&#13;
Second, the ability of equatorial dynamical flows known as Matsuno–Gill patterns to alter stratospheric chemistry is investigated in the deep tropics for months in the spring and fall for the first time. Results from a chemistry-climate model show these anticyclonic flows both entrain extratropical chlorine and induce cooling, allowing rapid heterogeneous chlorine activation on sulfuric acid aerosols. Perturbations to ClO and NO2 provide a testable signature for the presence of this activation. This study shows a previously unrecognized mechanism in near-equinox months for dynamical influences on the spatial structures of atmospheric composition changes in the deep tropics. &#13;
&#13;
Third, the success of the Montreal Protocol on Substances that Deplete the Ozone Layer is examined in the context of the record Arctic ozone depletion of Spring 2020. Results indicate that, in a “World Avoided” where ozone depleting substances had continued to increase, the first true Arctic ozone hole would have occurred in Spring 2020 due to the combined effects of higher chlorine and extreme meteorological conditions, and ozone depletion would have begun earlier and lasted longer than seen in the real world. This study also presents an improved parameterization of polar stratospheric cloud impacts on denitrification for the Whole Atmosphere Community Climate Model v.6 (WACCM6), which was necessary to accurately simulate extreme ozone loss. &#13;
&#13;
Fourth, the importance of temperature perturbations from sub-grid scale gravity waves on chemical rates in WACCM6 is explicitly investigated for the first time via a new connection between the orographic gravity wave parameterization and the chemistry module. Results indicate that several key heterogeneous reaction rates proceed faster on average and that this can have a significant effect on monthly chemistry concentrations. The lifetime of PAN also changes in some areas, affecting the distance this tropospheric pollutant can be transported. Verification of temperature perturbations using the COSMIC satellite array is investigated, and preliminary results show generally good agreement when gravity waves’ vertical wavelengths are expected to be small. This study reinforces the importance of accounting for sub-grid scale temperature effects when simulating chemistry. &#13;
&#13;
Finally, some directions for future work are discussed, including continued improvements to the denitrification parameterization in WACCM and preliminary efforts to observe tropical heterogeneous chlorine activation using satellites.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139029</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A systematic approach for cataloging mTORC1 regulators</title>
<link>https://hdl.handle.net/1721.1/139027</link>
<description>A systematic approach for cataloging mTORC1 regulators
Condon, Kendall J.
mTORC1 is a major regulator of eukaryotic cell growth. As a cellular decision-maker, mTORC1 surveils internal levels of basic biomolecules such as amino acids, ATP, and cholesterol, as well as external levels of growth factors. Under sufficient nutrient conditions, mTORC1 is active and drives anabolic processes while inhibiting catabolic ones. Dysregulation of mTORC1 activation has far-reaching consequences for human health, and understanding how upstream regulation occurs is of great interest.&#13;
&#13;
Toward generating a complete list of mTORC1 regulators, we developed a strategy to identify them by CRISPR-Cas9 FACS-based screening. As a proof-of-principle screen, we first identified negative regulators of the mTORC1 pathway. Then, we adapted the technique to uncover positive regulators of mTORC1. These screening results revealed many novel genes as mTORC1 regulators and highlighted the importance of mitochondrial health for mTORC1 activation.&#13;
&#13;
Following validation studies, we investigated the long-standing question of how mitochondrial stress impinges upon the mTORC1 signaling pathway. We observed short- and long-term responses to mitochondrial distress through time course experiments with the mitochondrial inhibitor oligomycin. Furthermore, we attributed these responses to the kinases AMPK and HRI, respectively.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139027</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unstable Modules with Only the Top k Steenrod Operations</title>
<link>https://hdl.handle.net/1721.1/139023</link>
<description>Unstable Modules with Only the Top k Steenrod Operations
Li, Zhulin
We study an abelian category of unstable modules with the top k Steenrod operations at the prime 2. We show that this category has homological dimension at most k. We establish forgetful functors, suspension functors, loop functors and Frobenius functors between such modules. The forgetful functors induce an inverse system of Ext groups, and the inverse system stabilizes when the covariant module is bounded above. We define an analogue of the Lambda algebra in this context and verify that its cohomology computes Ext.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139023</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Methodology to Model Metal Chemistry at High Temperature</title>
<link>https://hdl.handle.net/1721.1/139022</link>
<description>New Methodology to Model Metal Chemistry at High Temperature
Wagner, Mary Elizabeth
There is currently a lack of ability to predict which species will be reduced at the cathode and what purity will be achieved during metal electrodeposition. Problems related to co-deposition and contamination are usually avoided by using selective aqueous electrolytes or pre-purifying feedstock. However, these approaches are not always possible, particularly when developing novel, high temperature electrochemical processes where there is little experimental information about the electrolyte. In addition, present thermodynamic modeling methods fall short in their ability to accurately predict the properties of the multicomponent, high temperature solutions commonly used as electrolytes. In the absence of meaningful models and sufficient data, the standard state electrochemical potential is often used as a metric to determine which reduction reaction will dominate. However, this approach assumes every species in the electrolyte acts as if it were a pure species, and does not accurately reflect true electrochemical behavior.&#13;
&#13;
Herein, a new approach to modeling electrolytes is developed. By examining liquid solutions in a traditional chemical thermodynamic framework, and using this as a foundation for combining targeted experiments with calculated Gibbs energy data, deeper insights into the role of electrolytes on cell behavior can be obtained. A quantitative link between the activity of the electrolyte and the cathode composition is modeled. In order to expand the utility of the model, a new reference state for activity has been derived, specifically suited to the unique challenges of electrolyte thermodynamics. This model was tested against experimental data for several case study systems and performed well at predicting electrochemical behavior. Activity measured relative to the new reference state accurately informed on thermodynamic phenomena. Use of the model on systems with limited data enabled efficient design of new electrochemical processes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139022</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endoscopy for affine Hecke categories</title>
<link>https://hdl.handle.net/1721.1/139019</link>
<description>Endoscopy for affine Hecke categories
Li, Yau Wing
We show that the neutral block of the affine monodromic Hecke category for a reductive group is monoidally equivalent to the neutral block of the affine Hecke category for the endoscopic group. The semisimple complexes of both categories can be identified with the generalized Soergel bimodules via the Soergel functor. We extend this identification of semisimple complexes to the neutral blocks of the affine Hecke categories by the technical machinery developed by Bezrukavnikov and Yun.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139019</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Democratizing Details-on-demand Data Visualizations at Scale</title>
<link>https://hdl.handle.net/1721.1/139012</link>
<description>Democratizing Details-on-demand Data Visualizations at Scale
Tao, Wenbo
Details-on-demand is a powerful interaction paradigm which features the use of simple mouse interactions such as pan and zoom to help the viewer navigate through a large data space. In recent years, we have witnessed an increasing number of data visualization applications that embrace this paradigm to facilitate data exploration and analysis. Web maps are a clear example. However, due to the highly specialized nature of these applications as well as the lack of general scalable toolkits, building new details-on-demand data visualizations remains hard, especially for large datasets. This thesis proposes new tools and systems to “democratize” details-on-demand data visualizations, i.e., to make it much easier to build such applications at scale. The main approach is to offer declarative data visualization languages that let developers author applications in small amounts of code, and to work with a database backend that transparently handles the rendering and performance optimizations needed to enable fluid interactions on large datasets.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139012</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Modeling of Bacterial Biofilms</title>
<link>https://hdl.handle.net/1721.1/139009</link>
<description>Computational Modeling of Bacterial Biofilms
Song, Boya
With recent advances in experimental imaging and image analysis techniques, highly time-resolved measurements of complex bacterial communities at single-cell resolution can now be obtained. Guided by these rich experimental data sets, we improve a recently proposed three-dimensional individual-based simulation framework to uncover the governing microscopic dynamics at the single-cell level that drive structural developments in growing biofilms. Our individual-based model incorporates the essential biophysical processes of cell growth and division, viscous drag, attractive-repulsive cell-surface interactions, attractive-repulsive cell-cell interactions, and external forces and torques (e.g. from a surrounding flow field). Codes employing graphics processing units (GPUs) are developed to perform simulations with a high degree of parallelization. To validate our simulations against single-cell experimental data, we develop quantitative methods that effectively summarize biofilm architectural properties in a feature vector. With this simulation framework, we investigate the collective dynamics of Vibrio cholerae biofilm formation under various flow intensities. Our experimental and numerical results imply that mechanical cell-cell interactions, combined with the effect of flow when flow intensity is high, account for the emergence of order and structure seen in growing biofilms. In addition, this framework is used to identify the single-cell level mechanisms in the breakdown of Vibrio cholerae biofilm architecture during exposure to antibiotics. We further apply this framework to identify universal mechanical properties that determine early-stage biofilm architectures of four widely studied bacterial species. This work provides an enhanced understanding of the microscopic physics governing biofilm development, which is essential to control and inhibit bacterial populations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139009</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social Modeling in Computational Simulations: Racial and Ethnic Representation in Videogames and Virtual Reality Systems</title>
<link>https://hdl.handle.net/1721.1/139008</link>
<description>Social Modeling in Computational Simulations: Racial and Ethnic Representation in Videogames and Virtual Reality Systems
Olson, Danielle Marie
Computational simulations such as videogames and virtual reality (VR) systems already pervasively attempt to represent aspects of human identity, including modeling race and ethnicity-related phenomena. However, existing strategies typically focus on representing racial and ethnic identity only as graphics-level customizations and often rely on racial stereotypes [222]. Race and ethnicity are tied to social systems, histories, embodied experiences, interpersonal interactions, and discourse [129, 72, 48] which cannot be reduced to solely graphical models. Furthermore, individuals within the same racial or ethnic groups may have a wide range of differences in their racial and ethnic socialization (RES) experiences [113], feelings of commitment and belonging to their group [171], racial ideologies [153], and how they perceive discriminatory racial encounters (DREs) [15]. It is critical to address the shortcomings of racial and ethnic identity representations in virtual systems because they have real-world consequences for human users (e.g., academic outcomes [120, 41], social behavior [188], racial attitudes [88, 25], healthcare outcomes [111]). There is a lack of formal design approaches for creating compelling racial identity representations and models for use in computational simulations that address these shortcomings. &#13;
&#13;
An interactive narrative videogame system called Passage Home was developed through a design-based research collaboration with clinical and community psychology researchers who study racial discrimination and socialization in Black families to reduce racial stress and trauma [13, 14]. The system embeds a computational model informed by the Racial Encounter Coping Appraisal and Socialization Theory (RECAST; [197, 15]) to simulate a DRE between a Black student and her white teacher. Using Passage Home, two user studies were conducted with 110 PreK-12 educators and 60 youth across the U.S. to understand the relationships between participants’ physical-world RES experiences, identity development, and attitudes and their experience and interpretations in the game. Quantitative analyses revealed statistically significant relationships between participants’ RES experiences [113, 12], colorblind racial attitudes [153], and ethnic identity development [185] with their game experiences [115] and narrative interpretations. Qualitative analyses revealed a range in appraisals of and emotional responses to the DRE in the game. Implications, limitations, and future work are discussed. &#13;
&#13;
Computational simulations are powerful socializing agents that influence individuals’ race-related beliefs, values, and attitudes [76]. This dissertation proposes a novel design framework for racial and ethnic identity representation in these systems, which maps four themes of RES practices—(1) cultural endorsement of the mainstream, (2) promotion of mistrust, (3) alertness to discrimination and preparation for bias, and (4) cultural pride and legacy appreciation—onto four simulation components—(1) environments, (2) player characters, (3) non-player characters, and (4) content structures. The upshot is a framework featuring 16 novel design strategies, each with prompts for critical reflection [191], examples from existing systems, and theorized consequences of these representations on users based on the RES literature [113]. The framework provides a new tool to aid practitioners in becoming more conscious of the RES practices they are using when developing racial and ethnic identity representations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139008</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-Chip Engineered Human Lymphatic Microvasculature for Physio-/Pathological Transport Phenomena Studies</title>
<link>https://hdl.handle.net/1721.1/139004</link>
<description>On-Chip Engineered Human Lymphatic Microvasculature for Physio-/Pathological Transport Phenomena Studies
Serrano, Jean Carlos
In addition to the blood vasculature, the majority of tissues contain a secondary vascular system known as the lymphatics, which supports tissue homeostasis and immune cell trafficking. As such, impairment of the lymphatic capillaries can result in diverse diseases including abnormal tissue swelling (edema), compromised immunity, and cancer metastasis. Current in vitro methods to study the lymphatic vasculature, in health and disease, mostly rely on monolayer and transwell culture systems, which only lend themselves to reductionist studies with a considerable lack of physiological relevance. In comparison, animal models provide the full spectrum of biological complexity; however, they offer limited control over biological events in the cellular microenvironment, making mechanistic studies increasingly difficult. To address these limitations, we developed a 3D lymphatic microvasculature model that physiologically emulates lymphatic structure and function within a microfluidic system that allows high spatio-temporal control over biological transport phenomena to study cellular events. &#13;
&#13;
In the first part of this thesis, we implemented a microfluidic-based cell culture system to screen for the optimal balance of growth factors, extracellular matrix composition, and interstitial fluid flow that would induce controlled levels of angiogenic sprouting by the lymphatic endothelial cells. From this study, we developed two distinct approaches to generate 3D lymphatic microvasculature on-chip, where lymphangiogenic induction is achieved by diffusive exposure to growth factors or via a mechanotransduction response to high levels of interstitial fluid flow. After validating the in vivo-like morphology of our engineered lymphatics, we quantified their solute drainage functionality using fluorescent tracers of varying molecular weights, resembling interstitial soluble proteins. Results validated that the lymphatic microvasculature exhibited solute drainage rates approaching in vivo lymphoscintigraphy standards. Computational and scaling analyses were performed to understand the underlying transport phenomena, which elucidated the importance of a 3D geometry and the lymphatic endothelium in recapitulating physiological drainage. We then examined the capability of our on-chip lymphatics to elicit an immune response under a pathological inflammatory condition by locally recruiting immune cells. Experimental and computational results demonstrate increased infiltration of immune cells into the lymphatics, guided by chemotactic gradients that trigger the CCR7-CCL21/19 and CXCR4-CXCL12 inflammatory axes. Finally, we demonstrate the utility of our microphysiological system for pre-clinical studies, specifically by screening the vascular absorption rates of therapeutic monoclonal antibodies developed by Amgen Inc. We coupled our experimental measurements with a physiologically based framework to describe their systemic transport, which allowed us to quantitatively predict their corresponding pharmacokinetics.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139004</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/139003</link>
<description>Essays in Macroeconomics
Fornino, Michele
This thesis consists of three essays in the field of macroeconomics. In the first chapter, Andrea Manera and I study the economic incentives for automation when labor and machines are perfect substitutes. We find that labor may still be employed in production, even when it is a costlier input than robots on a productivity-adjusted basis. This occurs if firms face uninsurable idiosyncratic risk, adjusting the stock of machines is costly, and workers can be hired and fired quickly enough. Even though labor survives, jobs become less stable, as workers are hired in short-lived bursts to cope with shocks. We calibrate a general equilibrium, multi-industry version of our model to match data on robot adoption in US manufacturing sectors, and use it to compute the employment and labor share consequences of progress in automation technology. A fall in the relative price of robots leads to relatively few job losses, while reductions in adjustment costs, or improvements in relative robot productivity, can be far more disruptive. The model-implied semi-elasticity of aggregate employment to robot penetration (number of robots per thousand employees) ranges between 0.01% and 0.12%, depending on the underlying source of increased robot adoption, consistent with findings in the empirical literature. In an extension, we show that reduced-form hiring and firing costs unambiguously depress long-run employment.&#13;
&#13;
In the second chapter, Elia Sartori and I present a wage posting model with search frictions where idiosyncratic shocks to optimal firm size generate different returns to adjusting the workforce. Our approach takes firms as the only decision makers and assumes a simple employment contract: firms pay a wage and may only change an employment relationship by paying exogenously specified costs. Labor market outcomes are modeled as a mean field game equilibrium in which the aggregate statistics impacting firms' policies, which play the role of prices, are the hiring and poaching flow rates. Consistency of aggregate choices with prices builds on a reduced-form matching function which subsumes the entire functioning of the labor market outside of firms. Leveraging and extending recently developed numerical methods, we solve for the equilibrium. The model delivers nontrivial policy functions and aggregates, which can be used to quantify features of the endogenous reshuffling of workers both in the size ladder and in the wage ladder, including net poaching along these two margins, as presented in, e.g., Haltiwanger et al. (2017). A calibrated version of the model is able to generate an inverted net poaching schedule which is consistent with their finding that smaller firms poach workers from larger ones.&#13;
&#13;
In the third and final chapter, I analyze evidence from US states to compute the open economy relative multiplier along the lines of Nakamura and Steinsson (2014). Identification of exogenous government spending shocks is achieved by exploiting the secular tendency of some states to receive a disproportionate share of military spending relative to others. The contribution is twofold. First, I gather additional procurement data to extend the previous series until 2013, thus including the Great Recession and its aftermath. Second, this is the first attempt at analyzing the effects of military spending shocks on an aggregate consumption measure at state level. Estimated short run multipliers on output range between 1.3 and 1.6. There is some weak evidence that points to a positive effect on private consumption in the short run. Nonlinearities in the estimated multipliers seem to play a much more important role: both output and consumption respond sharply when unemployment is relatively high, or since the onset of the Great Recession. I use these estimates as a diagnostic tool to evaluate the performance of competing models. The strong evidence pointing to state-dependent multipliers, even after controlling for monetary policy, is consistent with the predictions of a New-Keynesian model with credit-constrained consumers.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139003</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Fluidic Systems for Water Filtration and Bio-Separation</title>
<link>https://hdl.handle.net/1721.1/139001</link>
<description>Development of Fluidic Systems for Water Filtration and Bio-Separation
Ramchander, Krithika
Biological contaminants and biomolecules play a major role in disease etiology and pathogenesis. In the context of disease prevention and management, removal of biological contaminants and biomolecules often relies on separations performed in fluidic systems. The design and operation of such systems rely fundamentally on understanding how fluidic transport phenomena are governed by material properties and effects such as sorption. This thesis focuses on understanding transport phenomena in two novel fluidic systems and leverages the insights to develop devices for water filtration and blood purification.&#13;
&#13;
In the first section, we focus on the characterization and engineering of gymnosperm xylem for developing water filters. The xylem tissue, which transports water and nutrients in plants, has nanoscale pores that can remove contaminants from water. However, xylem’s functional attributes as a water filter, such as flow rate, filtration capacity, rejection performance, and susceptibility to foulants in water, are not well understood. Additionally, methods that can help tailor these attributes to suit practical needs have not been developed. We generate new insights into the mechanisms that govern the transport of water through xylem. These include the non-linear dependence of resistance to fluid flow on filter thickness, explained using a percolation-based model; ‘self-blocking’ behavior governed by the dissolution and convective re-deposition of hemicellulose within the xylem conduits; and an elevated propensity for fouling in the presence of large organic molecules and dust. We use these insights to develop methods for fabrication of practically useful xylem filters. We demonstrate that these filters have a shelf life of &gt;2 years and can provide &gt;3 log removal of E. coli, MS-2 phage, and rotavirus from synthetic test waters and coliform bacteria from natural water sources. To show how xylem could be incorporated in filtration devices, we develop a gravity-operated functional device prototype for household drinking water treatment using user-centered design approaches. The findings related to the characterization, modeling, and engineering of xylem reported in this thesis fundamentally advance the state of knowledge about xylem tissue and lay the groundwork for the design and development of a wide variety of xylem-based devices in the future. &#13;
&#13;
In the second section, we focus on modeling cytokine transport in an extracorporeal blood purification (EBP) device for managing hypercytokinemia. Traditional EBP methods, which focus on non-specific removal of broad-spectrum cytokines to regulate host immune response, have many disadvantages, such as potential immuno-suppression and elimination of desirable molecules. A cytokine-specific EBP method can overcome these drawbacks. We study the cytokine binding and transport characteristics in a device, where selective cytokine removal is achieved by pumping the blood through tubes coated with antibodies. Analogous to the Lévêque problem, we develop a mass transport model which can predict the rate of cytokine removal and volumetric clearance as a function of device geometry, operational conditions, and surface properties. These predictions matched in vitro experimental results. In the future, such devices could be used for creating flexible and highly selective blood-filtering platforms for elimination of individual, harmful cytokines as they are expressed, facilitating the development of personalized treatment strategies.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139001</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Exploiting Anion Redox Process for High Energy Density Positive Electrode Materials for Li-ion Batteries</title>
<link>https://hdl.handle.net/1721.1/138998</link>
<description>Understanding and Exploiting Anion Redox Process for High Energy Density Positive Electrode Materials for Li-ion Batteries
Yu, Yang
The development of (hybrid) electric vehicles calls for better energy storage solutions. Li-ion battery systems provide such an opportunity by offering reasonable cycle life and energy density. It was widely assumed that Li-ion battery positive electrode materials, Li transition metal oxides, store charge through the redox activity of transition metal species, accompanied by the intercalation and deintercalation of Li-ions into and out of the host structure. Layered lithium nickel, manganese and cobalt oxides (NMC) have been the state-of-the-art commercial positive electrodes of the past decades; they rely on the redox of Ni³⁺/⁴⁺ upon charging, limiting further increases in the energy density of current Li-ion battery systems. Anionic redox in positive electrode materials provides an additional redox couple beyond the conventional metal redox, which can be harvested to further boost the energy density of current Li-ion batteries. However, the physical origin of the observed anion redox remains debated, and more direct experimental evidence is needed. Furthermore, the requirements for reversible anionic redox activity remain under debate, hindering the rational design of new materials that leverage reversible anionic redox.&#13;
&#13;
In this thesis, we primarily focus on understanding the cationic and anionic redox processes in positive electrode materials upon lithium deintercalation using X-ray absorption and emission spectroscopy (XAS and XES) and X-ray photoelectron spectroscopy (XPS), coupled with density functional theory (DFT) calculations. We have shown electronic signatures of oxygen-oxygen coupling, direct evidence central to lattice oxygen redox (O²⁻/(O₂)ⁿ⁻), in charged Li₂₋ₓRuO₃ after Ru oxidation (Ru⁴⁺/Ru⁵⁺) upon first-electron removal with lithium de-intercalation. This lattice oxygen redox of Li₂₋ₓRuO₃ was accompanied by bulk Ru reduction. This redox trend is in stark contrast to the observations in Ni-rich NMC upon charging. In Ni-rich NMC positive electrodes, nickel oxidation is primarily responsible for the charge capacity up to the removal of ~0.7 Li, beyond which Ni reduction occurs near the surface (up to 100 nm) due to oxygen release, with no significant bulk metal reduction observed. The uniqueness of the Ru-based system lies in the highly covalent nature of the Ru-O bond, which stabilizes the (O²⁻/(O₂)ⁿ⁻) intermediate and forbids further oxygen release.&#13;
&#13;
Through systematic transition metal substitution, we have proposed an electronic structure descriptor based on the energetic overlap between transition metal and oxygen to tune the cationic and anionic redox process in Ni-rich NMC as well as Li-rich positive electrode materials to enhance their cycling stability. We have also shown that the electronic structure descriptor can be applied to various electrochemical systems going through redox processes. Our study has laid a solid foundation for future high-throughput screening of novel and affordable metal oxides for battery and electrocatalysis applications.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138998</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive Human Activity and Plan Recognition for Human-Robot Collaboration</title>
<link>https://hdl.handle.net/1721.1/138991</link>
<description>Cognitive Human Activity and Plan Recognition for Human-Robot Collaboration
Lee, Sang Uk
With the growth of the robotics field, it is expected that robots will increasingly become part of our everyday lives. Consequently, there is an emerging need for humans and robots to work together. Human-robot collaboration has become an important topic in various domains, such as home assistance and manufacturing. For effective collaboration, robots must be able to recognize human activity and plan, while determining which functions would be helpful to humans. The problem of recognizing human activity and plan, known as human activity and plan recognition (HAPR), is considered to be the main bottleneck for successful collaboration. HAPR becomes even more complex in the analysis of visual inputs, such as RGB-D images.&#13;
&#13;
This thesis addresses this bottleneck by investigating how to perform efficient and accurate vision-based HAPR for fluent collaboration in real-world applications. The following limitations of state-of-the-art HAPR studies are examined in this thesis. First, although learning-based, model-free approaches are gaining significant attention owing to recent advances in deep learning, they require a significant amount of training data. This makes recognition inefficient. Second, previous studies recognized human activity and plan separately and sequentially: they recognized human activity first and subsequently the plan. Separate and sequential recognition cannot consider the plan context while recognizing human activity because the plan context is not available during activity recognition. However, the plan context provides useful information for activity recognition. Thus, separate and sequential recognition is inaccurate.&#13;
&#13;
We pose a fundamental question: Do humans share the same limitations when recognizing others' activity and plan? To answer this question, we introduce a novel problem called cognitive HAPR. Cognitive HAPR attempts to improve the HAPR system by adopting three ideas motivated by how cognitive humans perform HAPR. The first idea is to apply symbolic reasoning based on the preconditions-and-effects structure of activities, which humans understand well. For example, let us assume that a person is getting a bowl. It is intuitive to understand that the person's hand must be empty as a precondition of this activity, and that the person would be holding a bowl as an effect of this activity. We propose that such an intuitive preconditions-and-effects structure of activities provides valuable domain knowledge for HAPR. The second idea is the application of commonsense spatial knowledge with qualitative representations. Several cognitive science studies have shown that humans efficiently and effectively perceive their surroundings by abstracting the scene using qualitative representations. Qualitative representations are more compact and effective than quantitative data such as 6-D poses (i.e., x, y, z, roll, pitch, and yaw) of objects. We propose qualitative spatial representation (QSR), a representation framework that describes the spatial information of objects in a qualitative manner, as a good qualitative representation tool for HAPR. We effectively model complex predicates relevant to activities through QSR statements using intuitive commonsense knowledge. This modeling of predicates also provides valuable domain knowledge for HAPR. The third idea is the application of context-aware human activity recognition using a plan context. Several cognitive science studies have shown that humans recognize activity and plan in a combined framework, instead of recognizing them separately and sequentially. Humans employ the plan context when recognizing activity using the combined framework. We propose a combined model for HAPR that captures the Bayesian theory of mind (BToM) from cognitive science.&#13;
&#13;
This thesis presents a cognitive HAPR system called cognitively motivated plan and activity estimation system (COMPASS) that achieves the three ideas. We evaluate COMPASS in a home care scenario, called the activities of daily life (ADL). The ADL scenario takes place in a household environment where a human and robot collaborate to complete daily tasks. We use the ADL scenario to demonstrate that COMPASS resolves the two limitations of previous HAPR studies. First, by using a model-based approach, COMPASS requires significantly less training data compared to the case using a learning-based approach. This makes COMPASS more efficient. The two ideas of applying symbolic reasoning based on the preconditions-and-effects structure of activities and commonsense spatial knowledge with qualitative representations provide good domain knowledge for model-based recognition that requires minimal modeling effort. Second, by using a combined framework, COMPASS can perform context-aware human activity recognition using the plan context. This makes COMPASS more accurate compared to the case using the sequential model.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138991</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quasi-Monte Carlo and Picard Iteration Algorithms for the Nonlinear Hydrodynamics, Dynamics and Controls of Wave Energy Converters</title>
<link>https://hdl.handle.net/1721.1/138990</link>
<description>Quasi-Monte Carlo and Picard Iteration Algorithms for the Nonlinear Hydrodynamics, Dynamics and Controls of Wave Energy Converters
Larson, David F. H.
Ocean waves are an increasingly attractive target for renewable energy technologies, spurring the development of tools to optimize the capture of this available energy. While there has been an abundance of wave energy converter (WEC) concepts developed over the last 50 years, few have been commercially realized due in part to the challenges in assessing an optimal design. Design efforts have sought to maximize energy generated from WECs by optimizing nonlinearities in the power take off (PTO) mechanism, controls, and, more recently, geometry, while often assuming linear hydrodynamics and small body motions. Yet, in the most energetic sea states, large WEC motions and hydrodynamic nonlinearities may become appreciable, potentially resulting in a mismatch between the real-world performance and the performance predicted by linear theory. Existing nonlinear hydrodynamic models face a combination of high computational cost, numerical sensitivity, and complex user interfacing, rendering them impractical or restricted to specific geometries and conditions.&#13;
&#13;
This thesis introduces a novel and robust approach to simulating the nonlinear hydrodynamics and large response motions of floating WECs and general bodies in stochastic waves. The dominant incident wave Froude-Krylov and hydrostatic nonlinearities are expressed as volume integrals using Fluid Impulse Theory. The proposed framework debuts Quasi-Monte Carlo (QMC) spatial integration in nonlinear wave-body interactions, in conjunction with a new extension of Modified Chebyshev Picard Iteration compatible with the fluid force impulses. A mesh-free geometric representation using signed distance functions circumvents the numerically sensitive mesh-mesh surface intersection at each time step, and a continuously differentiable boundary blur accelerates the QMC convergence. These algorithms have been implemented in a parallel-time Julia code, which is studied on several canonical problems to understand the fundamental behavior of this framework.&#13;
&#13;
The performance of the blurred-boundary QMC integration algorithm is assessed on a fixed cylinder, demonstrating rapid convergence of the nonlinear incident wave Froude-Krylov impulses and convergence of the hydrostatic forces. The robust time-integration of the equation of motion is demonstrated with a heaving cylinder and applied to a surging tension leg platform that forms the basis of a new WEC concept under development. These studies indicate very promising performance, without the adverse numerical sensitivity that may limit the simulation time span or the need for delicate mesh preparation. This framework can support optimal design studies, taking into account the interaction between hydrodynamic, geometric, dynamic, PTO and control nonlinearities. The potential of this method extends to more general ship seakeeping studies, where (hydro)dynamic nonlinearities can become important in severe stochastic waves.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138990</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-shot quantitative interferometric microscopy for imaging high-speed dynamics</title>
<link>https://hdl.handle.net/1721.1/138988</link>
<description>Single-shot quantitative interferometric microscopy for imaging high-speed dynamics
Ge, Baoliang
The observation of millisecond, micrometer-scale dynamical events is essential in cell biology investigations, material science studies and the development of next-generation clinical diagnosis methods. Imaging these high-speed dynamics requires optical microscopy methods with millisecond temporal resolution, submicron spatial resolution and the capability of quantitative imaging. However, current imaging techniques typically either offer only qualitative imaging or have low quantitative imaging speed due to the requirement of multiple measurements or scanning. Therefore, high-speed quantitative optical imaging techniques that can deliver satisfactory performance for observing millisecond, micrometer-scale dynamical events remain in high demand.&#13;
&#13;
In my PhD work, several single-shot quantitative imaging techniques are proposed based on off-axis interferometric microscopy to overcome the limitations of current imaging techniques, driven by specific motivations in biomedical research and material inspection. First, I present the novel technique of single-shot quantitative amplitude and phase microscopy, motivated by the need for fast quantitative imaging of RBCs for the clinical diagnosis and drug screening of diseases such as malaria and sickle cell disease. Taking advantage of quantitative interferometric microscopy along with the engineering of the medium’s optical properties, we realized simultaneous measurements of RBCs’ morphological, molecular and mechanical properties. The second novel imaging technique is for studying novel anisotropic materials (i.e., lyotropic chromonic liquid crystals (LCLCs)), especially their rheology, which requires fast quantitative mapping of the polarization parameters (i.e., retardance and orientation angle) of the light field transmitted through anisotropic materials. A polarization-sensitive microscope is combined with off-axis shearing interferometry, realizing single-shot quantitative imaging of LCLC flow at over 500 frames per second (fps) for the first time. Finally, deep-learning single-shot optical diffraction tomography (DS-ODT) is proposed, fully exploiting the potential of the off-axis interferometric microscope to push the imaging speed of 3D cell imaging. By illuminating the cell from four angles simultaneously and using an innovative deep learning approach to extract prior knowledge from a training dataset of ~900 NIH/3T3 cells, we realized single-shot 3D cell imaging at a 3D imaging speed of over 10,000 fps, enhancing the throughput of 3D flow cytometry to over 5000 cells per second.&#13;
&#13;
These technological advances open new horizons in which single-shot quantitative interferometric microscopes can serve as powerful platforms for biological cell characterization and anisotropic material (liquid crystal) inspection, benefiting from their unprecedented quantitative imaging speed. Furthermore, given the increasing need to study high-speed dynamics and to develop novel microfluidic devices and cell characterization methods, we envision that these techniques will find an even wider range of applications in biology, material science and clinical diagnosis.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138988</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface structure enhanced microchannel flow boiling of low surface tension fluids</title>
<link>https://hdl.handle.net/1721.1/138987</link>
<description>Surface structure enhanced microchannel flow boiling of low surface tension fluids
Sircar, Jay
Microchannel flow boiling can meet the thermal management requirements of high power and high frequency integrated circuits, but the technology has been limited by instabilities unique to its length scales.  During microchannel flow boiling of water, many of these instabilities have been successfully mitigated by the addition of surface microstructures.  Thermohydraulic performance in terms of critical heat flux (CHF) and heat transfer coefficient (HTC) has also benefited from surface modification, without significantly adding to the overall flow pressure drop.  For some microelectronic applications, such as in aerospace engineering where thermal management solutions must operate over a wide range of temperatures from subzero Celsius to optimal semiconductor temperatures, water may not be the ideal fluid for microchannel flow boiling.  Alternatives to water as a working fluid include methanol, which has one of the largest ranges of operating temperatures suitable for electronics cooling, and hydrofluoroether (HFE) 7000, an environmentally friendly dielectric fluid.  Though there are benefits to using these alternative working fluids, several of their thermophysical properties involved in generating capillary flows are significantly lower than those of water—most notably their surface tensions.  Prior studies have shown that the level of enhancement to the critical heat flux during the flow boiling of water was positively correlated with the capillary-limited thin-film dryout heat flux.  The same semi-analytical model suggested that thin-film dryout would occur at approximately one and two orders of magnitude smaller heat fluxes when switching from water to methanol and HFE 7000, respectively.  &#13;
&#13;
In this thesis, thermohydraulic changes from surface microstructures during intrachip flow boiling of lower surface tension working fluids, primarily methanol, were investigated.  We fabricated microchannels on the heated bottom wall of silicon test samples, with/without micropillars of two different heights (25 and 75 µm) and two different cylindrical pillar solid fractions (~5% and 20%).  For methanol, a maximum CHF of 494 W/cm^2 was achieved with a structured surface, a 61% enhancement compared to a smooth surface.  At higher heat fluxes, the maximum HTC increased by as much as 71%, to 271 kW/m^2 K, for the taller, sparser micropillar wicked channels compared to smooth microchannels.  The presence of micropillars reduced the HTC or resulted in no significant change at lower exit quality conditions.  The CHF enhancement among different wick geometries did not fully agree with the fluid wicking model, suggesting that capillarity may not be the dominant factor contributing to the enhanced performance.  Annular films of methanol within smooth microchannels near CHF abruptly dewet from the bulk of the heated wall, resulting in inverted annular (film) flow boiling or transition boiling.  High speed imaging coupled with hydraulic and thermal measurements showed that the taller micropillars prevented this liquid film rupture.  The importance of hydrodynamic effects resulting from the micropillar wick arrays was supported by force scaling analysis and finite element analysis.  Observed experimental flow boiling behaviors near CHF for water, methanol, and HFE 7000 revealed that as surface tension decreased, the effectiveness of micropillar wicks in preventing the rupture and removal of annular films dwindled.  &#13;
Heat dissipation approaching ½ kW/cm^2 at a calculated wall superheat of less than 20 K was achieved during flow boiling of methanol in microstructured channels, suggesting that this can be a promising cooling strategy for high power-density electronic systems operating in challenging environments.  Insights gained from this work will lead to new design principles that allow even lower surface tension fluids, e.g. fluorinated dielectrics, to reach their full potential during flow boiling.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138987</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nano-scale Glucose Fuel Cells for Energy Harvesting in the Human Body Based on Proton Conduction in Cerium Oxide</title>
<link>https://hdl.handle.net/1721.1/138980</link>
<description>Nano-scale Glucose Fuel Cells for Energy Harvesting in the Human Body Based on Proton Conduction in Cerium Oxide
Simons, Philipp
Future implantable medical devices such as sensors, drug delivery systems, and electroceuticals require efficient, reliable, and highly miniaturized power sources. To date, the predominant power source for implants is the Li–I2 pacemaker battery, which has limited scale-down potential without sacrificing capacity. Therefore, new power sources for implantable devices are needed. In this thesis, a ceramic-electrolyte glucose fuel cell is invented, which constitutes the smallest potentially implantable glucose fuel cell to date. By use of the ceramic proton-conducting electrolyte ceria and a free-standing membrane device architecture, the novel ceramic-electrolyte glucose fuel cell can be scaled down to a thickness below 400 nm. The ceramic-electrolyte glucose fuel cell is biocompatible by materials choice and, unlike polymer-electrolyte glucose fuel cells, can be easily thermally sterilized for future implantation. This thesis demonstrates the fabrication, fundamentals, and performance of the first ceramic-electrolyte glucose fuel cells, with a power density of up to 43 µW cm−2, and shows unusually broad performance statistics across 150 devices thanks to a custom-designed measurement apparatus. A fundamental requirement for realizing such glucose fuel cells was a proton-conducting ceramic thin-film electrolyte. Therefore, beyond device design and development, this thesis explores the proton transport properties of ceria. Through a ceria model system deposited via wet-chemical spray pyrolysis, sufficient proton conductivity is observed. Moreover, slow hydration kinetics, on the order of several days, are detected in ceria, which could explain the large discrepancies in observed proton conductivity in the literature to date. 
Finally, the structural properties of ceria deposited through spray pyrolysis are studied under various thermal processing conditions, providing guidelines on the cost-effective processing of ceria as a proton-conducting electrolyte. Here, it is found that Raman spectroscopy can be employed to observe texture evolution in ceria thin films, which is relevant for both glucose fuel cells and other catalytic applications. Overall, this thesis constitutes a study of the processing, structure, proton transport, and device development of ceria, resulting in nano-scale ceramic-electrolyte glucose fuel cells that could enable the next generation of miniaturized implants.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138980</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatiotemporal Encoding Methods for Brain Magnetic Resonance Imaging</title>
<link>https://hdl.handle.net/1721.1/138978</link>
<description>Spatiotemporal Encoding Methods for Brain Magnetic Resonance Imaging
Wang, Fuyixue
Magnetic resonance imaging (MRI) is a widely used non-invasive imaging technology for both clinical diagnosis and neuroscientific research. However, the imaging sensitivity and specificity of brain MRI are limited by a well-known technical challenge of MRI acquisition: low image encoding efficiency, which limits acquisition speed, spatial resolution and signal-to-noise ratio, especially for in-vivo imaging. To address these challenges, this thesis presents newly developed spatiotemporal encoding methods that improve sensitivity and specificity and provide time and cost savings for different MRI applications, including diffusion, quantitative relaxometry and functional imaging. The novel encoding strategies in high-dimensional space, together with efficient data sampling schemes, allow better use of radio-frequency pulses, modern receiver coil arrays and shared data correlations. The high imaging efficiency provided by these spatiotemporal acquisition methods was demonstrated to help overcome several long-standing challenges in brain MRI, which should increase its diagnostic power and further our understanding of the structural and functional organization of the human brain.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138978</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Methods and Analyses for Assessing Cerebellar Electrophysiology with Magneto- and Electroencephalography</title>
<link>https://hdl.handle.net/1721.1/138977</link>
<description>Computational Methods and Analyses for Assessing Cerebellar Electrophysiology with Magneto- and Electroencephalography
Samuelsson, John Gustaf Wilhelm
The cerebellum contains almost 80% of all neurons in the human brain and is now recognized as a critical node in the distributed neural circuits underlying autonomic, sensorimotor, cognitive, and emotional functions. Cerebellar dysfunction has furthermore been implicated in some of the most prevalent neuropsychiatric diseases, including autism and schizophrenia, with new evidence implicating the cerebellum in neurodegenerative dementias and Parkinson's disease as well.&#13;
&#13;
Despite advances in our understanding of cerebellar structure and function on the microscopic scale, and an increasing number of hemodynamic functional imaging studies, the macroscopic scale electrophysiology of the cerebellum remains poorly characterized. Magneto- and electroencephalography (M/EEG) can non-invasively and directly measure neural activity at sub-millisecond temporal resolution and therefore hold promise to bridge this gap. M/EEG has, however, so far mainly been employed to study the cerebral cortex and its use in the study of cerebellar electrophysiology is largely unexplored.&#13;
&#13;
This thesis presents a comprehensive investigation of assessing cerebellar neural activity with M/EEG. First, a new technique that allows reconstruction of the cerebellar cortex from standard-resolution in-vivo MRI data is presented. This technique is used to create a surface source space and quantify the detectability of neural activity in the cerebellum with M/EEG. We then develop a novel analytical framework to compare the performance of source estimators and quantify the spatial fidelity of the cerebellar source estimates attainable using our new cerebellar source space tools. The proposed methods and analyses assume only standard MRI and M/EEG data. They are therefore readily applicable and enable a new and efficient way of studying cerebellar electrophysiology in health and disease.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138977</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Effects of Public Policy</title>
<link>https://hdl.handle.net/1721.1/138970</link>
<description>Essays on the Effects of Public Policy
Felix Silva, Mayara Priscila
This dissertation presents three papers on the effects of public policy on market outcomes. In the first paper, I analyze the effects of trade liberalization on firm labor market power in Brazil. I find that while Brazil’s 1990s trade liberalization significantly lowered wages and increased labor market concentration, it did not increase firm labor market power. The negative effects of trade on local wages were therefore likely driven by reductions in the marginal revenue product of labor. In the second paper, in collaboration with M. Chatib Basri, Rema Hanna, and Benjamin A. Olken, I analyze the effects of two corporate taxation reforms in Indonesia: one in tax administration and one in tax rates. We find that the tax administration reform had large effects on tax revenue and reported income, and that the government would have had to raise the corporate income tax marginal tax rate on affected firms by 8 percentage points to match those revenue gains. Finally, the third paper evaluates the impact of a school discipline policy in Massachusetts on student suspensions and test scores at charter schools. I find that the policy reduced charter suspensions by roughly 10 percentage points, but had no impact on charter learning.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138970</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Asynchronous Ensemble-Averaging Approach to CMFD Source&#13;
Acceleration: Rearchitecting Monte Carlo Reactor Simulation&#13;
Paradigms for the Exascale Computing Age</title>
<link>https://hdl.handle.net/1721.1/138963</link>
<description>An Asynchronous Ensemble-Averaging Approach to CMFD Source&#13;
Acceleration: Rearchitecting Monte Carlo Reactor Simulation&#13;
Paradigms for the Exascale Computing Age
Kumar, Shikhar
The safe and reliable operation of a nuclear power plant requires a meticulous understanding of the various physics phenomena at play within the reactor core. Monte Carlo neutron transport methods are commonly used for high-fidelity full-core reactor analysis, stochastically sampling a large number of neutron histories throughout the reactor core with minimal approximations to the underlying phase space of the problem. Monte Carlo methods, however, face several issues, the foremost being the prohibitively large computational resources required for tallied quantities to reach adequate uncertainty levels for full-core analysis. Advances in High-Performance Computing (HPC) systems and significant parallelization of Monte Carlo codes have helped reduce the overall time to solution. This thesis extends these efforts by proposing a novel ensemble averaged Coarse Mesh Finite Difference (CMFD) simulation strategy to improve parallel efficiency on HPC systems while also addressing other issues surrounding Monte Carlo methods, including fission source stationarity, real variance calculations, and nonlinear coupling schemes.&#13;
&#13;
This thesis begins by proposing a fission source stationarity diagnostic that monitors the behavior of Functional Expansion Tally (FET) coefficients. This method relies on the efficient computation of meshless quantities that signal whether various spatial modes of the fission source have stopped fluctuating within statistical limits, avoiding issues of false convergence that are prevalent with more commonly used diagnostics that rely on Shannon entropy calculations. This work provides the framework for establishing total runtimes associated with Monte Carlo inactive generations, as well as a heuristic for exact runtime savings from source acceleration methods such as CMFD.&#13;
&#13;
Next, a rigorous sensitivity study of CMFD source acceleration is conducted in order to pinpoint the exact parameters that affect the evolution of Monte Carlo simulations in the inactive and active generations. Here, it is found that coarse CMFD meshes should be used with an expanding windowing scheme that employs a relatively large maximum window size in order to reduce overall runtime while ensuring that variance levels are on par with un-accelerated simulations. Runtime analysis also indicates that a hybrid solution, which combines a loosely-converged CMFD solution during the initial inactive generations with a reversion to un-accelerated Monte Carlo during the later inactive generations, should be employed in order to reduce overall runtime to stationarity, keeping CMFD feedback turned off into the active generations as well.&#13;
&#13;
Finally, a comparison of various Monte Carlo run strategies motivates the development of the ensemble averaged CMFD simulation strategy, which is shown to be the optimal simulation paradigm on large-scale computing systems. From the perspective of parallel efficiency, ensemble averaged CMFD is shown to improve load balancing by 5-10% on representative 1-D, 2-D, and 3-D reactor problems, while the use of an aggregate seed stopping criterion ensures that initial error levels converge at ideal 1/√N rates, where N is the total number of Monte Carlo ensembles simulated in parallel. Ensemble averaging also provides a simple and accurate method for calculating real variance estimates across statistically independent ensembles of neutrons, something that is not typically possible with single seed Monte Carlo simulations. Moreover, the asynchronous ensemble averaged CMFD implementation provides better fault tolerance against possible node failures at large-scale computing facilities. Finally, arguments for how ensemble averaging can be applied to nonlinear multiphysics coupling schemes as well as Graphics Processing Unit (GPU)-based computer systems are also presented.
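The 1/√N behavior and the cross-ensemble variance estimate can be illustrated with a toy model; everything here (the exponential per-history score, the function name, the seed handling) is a hypothetical stand-in for a real Monte Carlo code:

```python
import numpy as np

def run_ensemble(n_ensembles, n_histories, seed=0):
    # Toy analogue of ensemble averaging: each ensemble is an independently
    # seeded simulation that produces one tally estimate (here, the mean of
    # n_histories synthetic per-history scores).
    rng = np.random.default_rng(seed)
    tallies = np.array([
        rng.exponential(scale=2.0, size=n_histories).mean()
        for _ in range(n_ensembles)
    ])
    # "Real" variance: the sample variance across statistically independent
    # ensembles, free of the inter-generation correlation that biases
    # single-seed variance estimates.
    real_var = tallies.var(ddof=1)
    std_err = np.sqrt(real_var / n_ensembles)  # shrinks as 1/sqrt(N)
    return tallies.mean(), std_err
```

Quadrupling the number of ensembles roughly halves the standard error of the ensemble mean, and the spread of the independent tallies yields the real variance directly.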
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138963</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty Quantification and Structure Discovery for Scalable Behavior Science</title>
<link>https://hdl.handle.net/1721.1/138961</link>
<description>Uncertainty Quantification and Structure Discovery for Scalable Behavior Science
Hayden, David S.
Scientific analysis of motion and social interaction can identify animal models of human disease by relating genetics or neural activity to behavior. However, experiments are often limited in scope because they require vast quantities of expert annotation on private data. Attempts to automate aspects of behavior science typically have limited interpretability and lack uncertainty representation. Errors will go unrecognized without manual inspection and propagate to hypothesis tests, corrupting conclusions. In response, this dissertation develops principled Bayesian approaches to low-level behavior analysis that discover the articulated part structure of a moving object and quantify uncertainty in the motion of multiple objects. Uncertainty is used to identify possible errors and automatically schedule sparse annotations. We apply parts modeling and tracking to primate behavior data in experimental and observational settings, in one case contributing to the first evidence supporting the use of primate animal models in autism research. We additionally develop Marmoset100, a 100-hour RGB-Depth dataset of pairwise primate social interactions labeled with 25 high-level behaviors, and show that uncertainty representation in tracking estimates improves behavior classification. &#13;
&#13;
The Nonparametric Parts Model (NPP) discovers structure by learning articulated parts decompositions in an unsupervised fashion from brief observations of objects moving in image, depth, point cloud, or mesh sequences. NPP combines distributions on Lie groups with a Bayesian nonparametric prior to perform joint reasoning over an interpretable state-space model with nonlinear dynamics and state-dependent observation noise. In developing sampling-based inference for NPP, we discover a novel and efficient Gibbs decomposition for prior distributions on SE(D), the manifold of rigid transformations. We show that NPP learns intuitive part segmentations for diverse objects and enables both analysis and synthesis of relative part motion in the body frame. &#13;
&#13;
The Joint Posterior Tracker (JPT) is a comprehensive Bayesian treatment of the general multiobject tracking problem that quantifies uncertainty in the motion of multiple objects. JPT uniquely performs asymptotically exact inference without gating heuristics or the combinatorial costs of exponential and factorial complexity. We develop novel Metropolis-Hastings proposals that reason over permutations of the latent space and enable efficient hopping between posterior modes that correspond to possible confusion events. We show that JPT yields accurate uncertainty representation of data associations with high performance on standard metrics. Finally, we use posterior uncertainty to identify ambiguities in observed data and automatically schedule sparse human annotations that rapidly improve posterior estimates and reduce uncertainty.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138961</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacial and Physical Confinement Effects on the Structure and Properties of Aligned Carbon Nanotube Architectures</title>
<link>https://hdl.handle.net/1721.1/138960</link>
<description>Interfacial and Physical Confinement Effects on the Structure and Properties of Aligned Carbon Nanotube Architectures
Kaiser, Ashley L.
The advantaged mass-specific, intrinsic, and scale-dependent properties of aligned nanofibers, such as carbon nanotubes (CNTs), and their ability to be densified into high volume fraction (commonly, vol%) 3D architectures motivate their use as shape-engineerable materials and composite reinforcement. While controlling nanofiber adhesion to the growth substrate and nanofiber packing density in arrays, e.g. aligned-CNT (A-CNT) arrays in this work, is essential for improving material properties towards bulk-scale manufacturing and application-specific performance, experimental and theoretical approaches to date have not addressed the mechanisms and scaling of CNT-substrate adhesion with processing conditions. Also unexplored is how the nano-, meso-, and micro-scale structures of aerospace-grade composite matrices are affected by high levels of A-CNT confinement (inter-CNT spacings on the order of nm). To understand these effects, this thesis studies the CNT-substrate strength that governs the manufacturing of shape-engineered CNT arrays, and the interfacial interactions of A-CNTs with the composite matrix that scale performance. In addition to CNT-substrate pull-off tuning via thermal processing, four types of nanocomposites are synthesized to study confinement effects as a function of A-CNT vol%: A-CNT polymer nanocomposites (PNCs) with aerospace-grade epoxy, bismaleimide (BMI), and phenolic matrices, and an A-CNT-carbon nanocomposite (A/C-NC) with a phenolic-derived pyrolytic carbon (PyC) matrix.&#13;
&#13;
Thermal post-growth processing of mm-tall aligned CNT arrays on the growth substrate (Fe/Al₂O₃/SiO₂/Si wafers) at temperatures from Tₚ = 700°C (CNT synthesis temperature) up to 950°C in helium for 40 minutes is used to study CNT-substrate pull-off strength. The bulk CNT-array pull-off strength, as measured by CNT array pull-off from the flat substrate via tensile testing, shows that the array fails progressively, similar to microfiber bundles in tension, and in particular evolves non-monotonically with Tₚ with three regimes identified: first increasing from an as-grown strength of ∼0.04 MPa to ∼0.13 MPa up to Tₚ = 735°C (in Regime I), then up to ∼0.35 MPa strength up to Tₚ = 800°C (in Regime II), and then decreasing back to ∼0.13 MPa strength up to Tₚ = 950°C (in Regime III). The force-strain relation from tensile testing is modeled analytically based on microfiber bundle mechanics and Weibull statistics, which considers the statistical failure of individual CNTs (fibers) as they either debond from the substrate in Regimes I &amp; II, or break leaving ∼2-micron long CNTs attached to the substrate in Regimes II &amp; III. Morphological and chemical analyses indicate that in all regimes, the Fe catalyst remains on the substrate after CNT array pull-off, and the CNT array structural quality is maintained; higher Tₚ may graphitize the disordered carbon on the substrate and at the CNT roots to increase the substrate adhesion strength for Tₚ up to 800°C, beyond which thermally-induced marring of the substrate is observed in Regime III, where CNT-substrate pull-off strength reduces relative to Regime II.&#13;
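The fiber-bundle picture above admits a compact closed form under common assumptions; this sketch uses the textbook equal-load-sharing Weibull bundle, with survival fraction exp(-(strain/eps0)**m), and all parameter values are hypothetical rather than taken from the thesis:

```python
import numpy as np

def bundle_force(strain, n_fibers, fiber_stiffness, eps0, m):
    # Equal-load-sharing fiber bundle with Weibull-distributed failure
    # strains: the load carried is the elastic term times the surviving
    # fraction exp(-(strain/eps0)**m).
    strain = np.asarray(strain, dtype=float)
    survival = np.exp(-((strain / eps0) ** m))
    return n_fibers * fiber_stiffness * strain * survival

# For this form the peak load occurs at strain = eps0 * m**(-1/m), which
# is one way such a model ties a measured peak strength back to the
# Weibull scale and shape parameters.
```

Fitting the measured force-strain curve to such a form is what lets a progressive, non-brittle pull-off response be summarized by a small number of statistical parameters per regime.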
&#13;
Key results regarding A-CNT PNCs include process development leading to the first successful fabrication of fully infused, microvoid-free BMI and epoxy PNCs with high volume fractions (1 to 30 vol%) corresponding to square-packing inter-CNT spacings of 70 to 6 nm, in mm-tall A-CNT arrays. For both resins, the development of a polymer infiltration model based on Darcy’s law accurately predicts the time for uncured resin to fully infuse into A-CNT arrays during capillary-assisted PNC fabrication, corroborating experimental observations via X-ray micro-computed tomography and microscopy that a diluted 65 wt% resin with ∼10× lower viscosity than neat resin is required for full infusion into dense A-CNT arrays (10−30 vol%) for both BMI and epoxy. For each tested vol%, the cured PNCs maintain consistent vertical CNT alignment and glass transition temperature, and the decomposition onset temperature is constant for epoxy PNCs but increases by ∼8°C for BMI PNCs up to 30 vol% A-CNTs. For both polymers, quasi-static nanoindentation yields an ∼2× increase in the axial indentation modulus for 30 vol% A-CNT PNCs compared to the neat resin, with no change in transverse A-CNT modulus, showing enhanced anisotropic mechanical properties with high A-CNT vol%.&#13;
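In the simplest one-dimensional wicking treatment, a Darcy-law infusion-time estimate of the kind referenced above reduces to a single expression; the derivation below assumes a constant capillary driving pressure, and all symbols and values are illustrative rather than the thesis's calibrated model:

```python
def darcy_fill_time(height, viscosity, porosity, permeability, delta_p):
    # 1-D Darcy wicking under a constant driving pressure: the front obeys
    # dh/dt = k * dP / (mu * phi * h), which integrates to
    # t_fill = mu * phi * H**2 / (2 * k * dP).
    return viscosity * porosity * height**2 / (2 * permeability * delta_p)
```

The linear dependence on viscosity is what makes a ~10x dilution of the resin translate directly into a ~10x shorter predicted infusion time for a given array height.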
&#13;
Key results regarding A/C-NCs and their phenolic resin PNC precursors include process development leading to the first successful fabrication of fully infused, microvoid-free phenolic 1−30 vol% A-CNT PNCs, and also A/C-NCs after four cycles of polymer infiltration pyrolysis (PIP). A vacuum-assisted thermal drying step is used to remove adsorbed moisture from A-CNTs prior to their capillary-assisted infusion with diluted 50 wt% phenolic resin. The cured PNCs exhibit an 11% increase in decomposition onset temperature and a 45% increase in the 10 wt% mass-loss temperature from neat resin to 30 vol% PNCs. Compared to 1 PIP cycle, 4 PIP cycles decrease porosity by ∼10 vol% and increase bulk density by ∼5%; from PyC to 30 vol% A/C-NCs, the 4-PIP bulk density decreases from ∼1.14 g/cm³ to ∼0.80 g/cm³ while porosity increases from ∼47 vol% to ∼74 vol%. Vickers microhardness testing in the axial CNT direction of A/C-NCs shows agreement with prior mechanical modeling and data at low vol% CNTs, where specific hardness increases from ∼3.3 GPa/(g/cm³) for PyC to ∼6.7 GPa/(g/cm³) for 30 vol% A/C-NCs, demonstrating that A/C-NCs are an advantaged superhard lightweight material. &#13;
&#13;
Using the process-structure-property prediction understanding and tools developed in this thesis, more precise tailoring of application-specific performance for aligned CNT architectures is enabled. Future paths of study that enable the design and manufacture of next-generation nanofiber-based materials are recommended.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138960</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Polymer Materials: From Iptycenes to Ring-Opening Polymerizations</title>
<link>https://hdl.handle.net/1721.1/138955</link>
<description>Functional Polymer Materials: From Iptycenes to Ring-Opening Polymerizations
Wu, You-Chi
In Chapter 1, we incorporate functional groups in triptycene-containing poly aryl ethers to serve a variety of applications. Amine-functionalized polymers are used in graphene oxide composite membranes towards aqueous filtration. Guanidinium-functionalized polymers are explored for impedance-based gas sensing. Pyrazolium-functionalized polymers are fabricated into anion-exchange membranes. &#13;
&#13;
In Chapter 2, we design a set of two- and three-component polyurethanes to elucidate the influence of molecular composition on thermal transition characteristics, crystallinity, segmental dynamics of PTMO, as well as high-strain-rate impact response. &#13;
&#13;
In Chapter 3, we describe the living cationic ring-opening polymerization of a 2-alkylthio-2-oxazoline as a general platform for postpolymerization modification. Mild substitution conditions provide broad functional group tolerance, constituting a versatile postpolymerization modification platform with access to a diversity of polyureas and polythiocarbamates. &#13;
&#13;
In Chapter 4, we seek to synthesize a dibenzobarrelene-elaborated polyacetylene, whereby the dibenzobarrelene groups can provide stability by steric shielding, as well as a site for attaching solubilizing and functional groups. Routes based on dehydrogenation, bromination–elimination, sulfoxide elimination, and reductive defluorination are discussed. &#13;
&#13;
In Chapter 5, we study a family of bottlebrush polymers that consist of a flexible backbone with rigid, porogenic side chains. Tuning of side-chain length and dispersity, as well as incorporation of new functional groups, are performed to elucidate structure–property relationships and to enable improvements in selectivity.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138955</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The K-theoretic Hall Algebra On Surfaces and Categorifications</title>
<link>https://hdl.handle.net/1721.1/138953</link>
<description>The K-theoretic Hall Algebra On Surfaces and Categorifications
Zhao, Yu
In this thesis, I construct the K-theoretic Hall algebra on smooth algebraic surfaces and prove that it is associative. I also give a naive categorification of the elliptic Hall algebra.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138953</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Nanoparticulate Antigens for Enhanced Follicular Accumulation and Immunogenicity</title>
<link>https://hdl.handle.net/1721.1/138948</link>
<description>Engineering Nanoparticulate Antigens for Enhanced Follicular Accumulation and Immunogenicity
Read, Benjamin J.
In vaccine design, antigens are often arrayed in a multivalent nanoparticle form, but in vivo mechanisms underlying the enhanced immunity elicited by such vaccines remain poorly understood. In this thesis, we began by examining two model, HIV-immunogen-bearing nanoparticles that displayed an unusually high degree of immunogenicity. We found that these nanoparticles accumulated selectively within the follicular dendritic cell network of draining lymph nodes, and discovered that this trafficking pattern was dependent on complement recognition mediated by mannose-binding lectin (MBL) binding to glycans on the surface of the nanoparticles. Accumulation within follicles was positively associated with multiple immune response outputs. Trafficking was found to occur in a variety of nanoparticles of different sizes and compositions, and the primary factor that allowed for trafficking to occur was the presence of high-mannose glycans on the surface of the nanoparticles. Several clinically-relevant nanoparticle antigens were also found to traffic in an MBL-dependent fashion, suggesting that this mechanism could be utilized to improve the efficacy of a variety of important nanoparticulate antigens. We also utilized DNA origami nanoparticles as a model system to probe critical parameters of nanoparticle vaccine design, including antigen density, antigen spacing, nanoparticle geometry, and overall antigen concentration. Initial in vivo studies are described, as are suggestions for future experiments to further the design and functionality of these particles for vaccination. Overall, these studies have elucidated a number of design principles that should aid in the engineering of next generation vaccines to provide protection against a variety of possible pathogens.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138948</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single Subcompartment Drug Delivery</title>
<link>https://hdl.handle.net/1721.1/138947</link>
<description>Single Subcompartment Drug Delivery
Cotler, Max J.
Most drugs are administered systemically, intravenously or orally, but there are inherent challenges with these methods. The drug distributes throughout the body, which results in two main challenges: (1) poor efficacy due to the under-accumulation of drug at sites of disease and (2) side effects due to the over-accumulation of drug in healthy tissue, where the drug may interact with off-target molecules or cells. Several strategies have been developed to improve drug distribution. Nanoparticles and antibody-drug conjugates use targeting molecules to increase drug accumulation at certain sites or cells, leading to improved outcomes, but only a handful of such therapies have had a clinical impact thus far, and they do not work for all indications. Further advancements have also been made to change the administration route of drugs rather than relying on molecular targeting. The body can be broken down into a series of compartments that encompass different cavities or organ systems, such as the peritoneum, urinary tract, or eye. Drugs can be directly delivered to a single compartment through an infusion system or drug-eluting implant. Single compartment drug delivery increases drug concentration at the target site and minimizes drug exposure at distant sites of the body, leading to improved outcomes and reduced side effects. Several such strategies have successfully been commercialized. &#13;
&#13;
Improved understanding of physiology and disease progression at the cellular and molecular level has led to further defining of disease pathology beyond the single compartment. The brain, for example, has been divided into several microstructures, each of which has a different function that can be disrupted leading to disease. Tumors have also been further defined as separate entities from healthy organs due to genetic, immune, and microenvironment variations. We hypothesize that targeting single subcompartments can further advance single compartment drug delivery strategies.&#13;
&#13;
We present novel examples of single subcompartment drug delivery devices for treating and diagnosing disease. Chronic brain implants were developed to elicit minimal scarring and deliver microliters of drug to distinct microstructures implicated in disease, such as a seizure focus. An implant and computational platform was further developed to deliver drug microdoses to a tumor, analyze the tumor response to each drug, and computationally predict the potential efficacy of each drug. This platform successfully predicted systemic treatment outcomes in ovarian patient-derived xenograft tumors with greater than 90% accuracy, with the potential for clinical implementation in the near-term. The results of both studies represent the promise of single subcompartment drug delivery to improve outcomes for patients.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138947</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Control of Queuing Networks: Applications to Airport Surface Operations</title>
<link>https://hdl.handle.net/1721.1/138946</link>
<description>Modeling and Control of Queuing Networks: Applications to Airport Surface Operations
Badrinath, Sandeep
The significant growth in air traffic over the past few decades has led to increased congestion at major airports worldwide. Airport congestion results in increased flight delays, fuel burn and emissions. There is consequently a need to accurately model airport traffic operations and to design control algorithms that mitigate airport congestion.&#13;
&#13;
In this thesis, we propose a new class of queuing network models of airport surface operations that are capable of capturing congestion at multiple locations, and that can account for the time-varying nature of demand and capacity. The proposed queuing models are based on a point-wise stationary approximation that results in a simple ordinary differential equation representation for the dynamics of the ensemble queue length. Further, the models can account for propagation delays between servers in the network, and handle general service time distributions, overcoming some of the limitations of the traditional probabilistic queuing models. The queuing models are developed, adapted, and validated using actual operational data from several major airports. These models also allow us to apply techniques from reachability analysis to better understand the performance of queuing networks.&#13;
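A minimal sketch of a point-wise stationary fluid queue can illustrate the idea of reducing the queue dynamics to an ordinary differential equation; the M/M/1-style saturating service term, the function name, and the parameter values are invented for this example and are not the thesis's validated airport model:

```python
import numpy as np

def simulate_queue(lam, mu, t_end, dt=0.01, q0=0.0):
    # Point-wise stationary fluid approximation of a single server:
    # dq/dt = lambda(t) - mu * q / (1 + q); the service term saturates
    # at mu as the queue grows, mimicking a fully busy server.
    steps = int(round(t_end / dt))
    q = np.empty(steps + 1)
    q[0] = q0
    for i in range(steps):
        t = i * dt
        q[i + 1] = max(0.0, q[i] + dt * (lam(t) - mu * q[i] / (1.0 + q[i])))
    return q
```

With constant demand lam(t) = 0.5 and mu = 1.0 the queue settles at the fixed point q* = 1, where arrivals balance the saturating service rate; a time-varying lam(t) then traces out demand peaks directly in the ensemble queue length.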
&#13;
The second part of this thesis focuses on the development of airport congestion control algorithms using the proposed queuing network models. The dynamical systems representation of the queuing process allows us to use optimal and robust control techniques to regulate the queue length, and to obtain theoretical guarantees for certain special cases. We compare our algorithms with NASA's logic that was recently field-tested, using stochastic simulations. We also investigate the impact of uncertainty in airline-supplied estimates of traffic demand on the efficacy of congestion control algorithms, and quantify the benefits of reducing this uncertainty. Finally, we note that the modeling and control techniques for queuing networks developed in this thesis have broad applicability in other contexts beyond airport surface operations.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138946</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-based Design and Control of Biopharmaceutical Manufacturing Processes</title>
<link>https://hdl.handle.net/1721.1/138943</link>
<description>Model-based Design and Control of Biopharmaceutical Manufacturing Processes
Hong, Moo Sun
Biopharmaceuticals are products derived from biological organisms for treating or preventing diseases. The global sales of biopharmaceuticals have continually increased for many years and will continue to increase, as the pipeline of product candidates continues to grow. Recent trends in biopharmaceutical manufacturing provide opportunities for process systems engineering to make major advances in biomanufacturing: (1) process analytical technology (PAT) is providing on-line measurements of critical quality attributes (CQAs) for constructing first-principles and data-based models of each unit operation and enabling advanced control, (2) a transition from batch to continuous operation creates a need for process control to handle the propagation of impurities and other disturbances caused by the tight integration of unit operations, and (3) the invention of new designs for downstream processes is creating new processes to control. These trends ultimately lead biomanufacturing to digital manufacturing, which is an integrated approach to manufacturing centered around a computer system. &#13;
&#13;
To address the challenges associated with the recent trends, this thesis applies modern system engineering tools to develop advanced biomanufacturing systems. For the upstream process, the thesis constructs mathematical models and optimal control methods for multiple bioreactor configurations including microbioreactor systems and stirred-tank bioreactors. &#13;
&#13;
Microbioreactors are a promising technology to accelerate biologic drug development, but can have localized dissolved oxygen (DO) concentrations that are damaging to the cells. To address this problem, the thesis derives analysis, estimation, and control methods that improve understanding and spatiotemporal control of DO concentration in these systems. &#13;
&#13;
Macroscopic models can produce useful predictions of bioreactor operations while being suitable for parameter estimation and real-time model predictive control. The thesis constructs an extensive macroscopic bioreactor model for recombinant protein producing Pichia pastoris in defined medium, which enables the comprehensive understanding of cellular metabolism and optimization of bioreactor operations. &#13;
&#13;
For the downstream process, the thesis designs and implements laboratory unit operation systems for protein crystallization and continuous viral inactivation, which are optimally designed and controlled based on developed mathematical models. &#13;
&#13;
Although crystallization is a promising non-chromatography separation method, further development is needed to make the technology effective for most therapeutic proteins. The thesis develops a systematic approach to the design and control of protein crystallization from the micro-scale based on first-principles models. First, preliminary experimental data from the literature are analyzed to obtain feasible crystallization conditions and lower bounds on the crystallization rates. Then a droplet-based evaporative system is developed to evaluate candidate crystallization conditions and estimate crystallization kinetics using only a minimum quantity of protein. The estimated crystallization kinetics provide information required for the model-based optimal scale-up to production scale. &#13;
&#13;
Batch low-pH hold is a common processing step to inactivate enveloped viruses for biologics derived from mammalian sources. Control challenges with adapting batch low-pH hold to continuous processing have not been addressed. The thesis develops a low-cost column-based continuous-flow viral inactivation system constructed with off-the-shelf components to provide tight control of critical process parameters, operating pH and residence time distribution. This work provides tools for the design and operation of continuous viral inactivation systems in service of increasing productivity, improving product quality, and enhancing patient safety.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138943</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays examining social vulnerability and place-based determinants of health</title>
<link>https://hdl.handle.net/1721.1/138940</link>
<description>Three essays examining social vulnerability and place-based determinants of health
Tan, Shin Bin
This three-essay dissertation examines how different environmental characteristics might support better health and economic outcomes of socially vulnerable individuals, with a focus on the role policy and planning can play. The first essay focuses on community resilience – the capability that helps communities resist, absorb and recover from disasters and traumas. I empirically examine how well a community resilience assessment tool developed by the U.S. Federal Emergency Management Agency (FEMA) predicts post-disaster, county and individual-level health outcomes and find concerning shortfalls. Using FEMA’s tool as an illustrative case, I highlight limitations of current efforts to quantify community resilience via indicator-based assessment tools, and propose possible improvements to policy-based community assessment tools. The second essay examines whether disaster-induced residential relocation to ‘higher economic opportunity’ areas might support better post-disaster outcomes. Using a mixed-methods approach to analyze repeated survey data and interviews of low-income mothers hit by Hurricane Katrina in 2005, this study found that a unit increase in year-weighted county opportunity predicted a doubling of incomes approximately 12 years after the hurricane, but only if participants remained in higher opportunity counties beyond 9 to 21 months. Living in higher opportunity areas, however, was not associated with better long-term health, potentially because of the challenges of integrating into a new area post-disaster, which suggests the need for additional support to help disaster survivors sustain residence in higher opportunity areas. The third essay examines how accumulated exposures to obesogenic neighbourhood environments affect children and mothers’ BMI, and how socioeconomic status might modify such neighbourhood effects. 
Drawing from a mother-child birth cohort study based in Singapore, this study found that socioeconomic vulnerability modified the relationship between exposure to specific obesogenic neighbourhood characteristics and BMI-related outcomes, such that increased access to bus-stops and park connectors predicted a drop in BMI outcomes for higher SES children and mothers respectively, but an increase for lower SES individuals. Study results emphasize the importance of considering how urban interventions might have heterogeneous effects by SES.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138940</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics</title>
<link>https://hdl.handle.net/1721.1/138937</link>
<description>Essays in Financial Economics
Kakhbod, Ali
This thesis contains four chapters on liquidity, financial crisis, dynamic pricing and optimal contracting with externalities. &#13;
&#13;
The first chapter studies how transparency (information disclosure), along with the long-term incentives of informed dealers, affects price informativeness (efficiency), market liquidity and welfare in dynamic over-the-counter (OTC) markets. We show that more transparency, via the public disclosure of additional information about past trades, paradoxically makes the markets more opaque, by reducing market price informativeness. However, this market opacity creates liquidity and improves welfare. Our policy implications are threefold: (i) forward-looking incentives of informed dealers reduce price efficiency but improve liquidity; (ii) the post-trade public disclosure of prices may have no impact on price efficiency and liquidity; (iii) however, the public disclosure of past transaction orders or volumes reduces price efficiency but improves liquidity. We also derive several testable implications about price efficiency and market liquidity and demonstrate the robustness of our findings in the face of a general class of payoff functions, stochastic trading positions, divisible and indivisible orders, finite and infinite trading calendars, and fixed or time-varying fundamentals. &#13;
&#13;
In the second chapter we propose an amplification mechanism of financial crises based on the information choice of investors. Information acquisition always makes investors more likely to act against what is suggested by the prior. Deteriorating public news under an initially strong (weak) prior increases (reduces) the value of private information and induces more (less) information acquisition. Deteriorating public news always increases the probability of a crisis, since the initially strong (weak) prior suggests do-not-attack (attack). This effect is amplified when information choices are endogenous. To enhance financial stability, a policymaker can use taxes and subsidies to affect information acquisition. We also derive testable implications for the magnitude of amplification. This chapter is published in the Review of Financial Studies (RFS), vol 30. &#13;
&#13;
In the third chapter we study the problem of optimal dynamic pricing for a monopolist selling a product to consumers in a social network. In the proposed model, the only means of spread of information about the product is via Word of Mouth communication; consumers’ knowledge of the product is only through friends who already know about the product’s existence. Both buyers and non-buyers contribute to information diffusion, while buyers are more likely to get engaged. By analyzing the structure of the underlying endogenous process, we show that the optimal dynamic pricing policy for durable products with zero or negligible marginal cost drops the price to zero infinitely often. By attracting low-valuation agents with free offers and getting them more engaged in the spread, the firm can reach out to potential high-valuation consumers in parts of the network that would otherwise remain untouched without the price drops. We provide evidence for this behavior from the smartphone app market, where price histories indicate frequent free offerings. Moreover, we show that despite infinitely often drops of the price to zero, the optimal price trajectory does not get trapped near zero. We demonstrate the validity of our results in the face of strategic forward-looking agents, homophily-based engagement in word of mouth, network externalities, and consumer inattention to price changes. We further unravel the key role of the product type in the drops by showing that the price fluctuations disappear after a finite time for a nondurable product. This chapter is published in the Management Science (MS), vol 64. &#13;
&#13;
Finally, in the last chapter we study optimal contracting between a firm selling a divisible good that exhibits positive externality and a group of agents in a social network. The extent of externality that each agent receives from the consumption of neighboring agents is privately held and is unknown to the firm. By explicitly characterizing the optimal multilateral contract, we demonstrate how inefficiency in an agent’s trade propagates through the network and creates unequal and network-dependent downward distortion in other agents’ trades. Furthermore, we describe bilateral contracts (non-linear pricing schemes) and characterize their explicit dependence on the network structure. We show that the firm will benefit from uncertainty in an agent’s valuation of other agents’ externality. We describe the profit gap between multilateral and bilateral contracts and analyze the consequences of the explicit dependence of the contracts on network structure. When the network is balanced in terms of homogeneity of agents’ influence, network structure has no impact on the firm’s profit for bilateral contracts. On the other hand, when the influences are heterogeneous with high dispersion (as in core-periphery networks) the restriction to bilateral contracts can result in profit losses that grow unbounded with the size of networks. This chapter is published in the Journal of Economic Theory (JET), vol 183. &#13;
&#13;
JEL Classification: G1, G2, G3
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138937</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time and process in sedimentary rocks at the dawn of animal life</title>
<link>https://hdl.handle.net/1721.1/138936</link>
<description>Time and process in sedimentary rocks at the dawn of animal life
Cantine, Marjorie D.
One of the most profound changes in the history of our planet was the relatively recent emergence of complex life—macroscopic, multicellular organisms with tissues. Sedimentary rocks record this evolutionary event, alongside other shifts in environment and climate, in fragmentary form. Sedimentological studies track change in space and relative time, and radiometric studies allow us to tie events to absolute time. Both are necessary to understand how and if changes in environment are linked to early animal evolution. This thesis combines both.&#13;
&#13;
Chapters 1 and 2 of this thesis explore early records of life on Earth that predate, in whole or in part, animal life. In Chapter 1, the molecular record of life's origins is explored for evidence of environmental influence and adaptation. I argue that environmental heterogeneity has been a key driver in life's history since its beginning. In Chapter 2, the Precambrian to Cambrian carbonate rock record is assessed in a quantitative, high-resolution database of more than 40 km of carbonate stratigraphy. The sedimentological insights derived from this work provide an independent record of changes in ocean biogeochemistry before and during the rise of animals. &#13;
&#13;
Chapters 3, 4, and 5 focus on the sedimentary rock record of the Ediacaran Period (635-541 Ma). Chapter 3 describes new Re-Os age constraints on Earth's largest negative carbon isotope excursion, the Shuram excursion. Chapter 4 presents additional Re-Os age constraints on the Ediacaran succession of Oman to explore shifts in Earth’s surface environments during the rise of animals. These novel constraints suggest an expansion of shallow water marine environments in the late Ediacaran Period, possibly concurrent with the expansion of animals from deep to shallow water environments. Chapter 5 investigates the detrital zircon record of an Ediacaran unit, the Rainstorm Formation, and analyzes the role of grain size and sediment transport processes in generating bias in detrital mineral provenance records.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138936</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Econometrics: Nonparametrics and Robustness</title>
<link>https://hdl.handle.net/1721.1/138934</link>
<description>Essays in Econometrics: Nonparametrics and Robustness
Deaner, Ben
This thesis consists of three chapters. In each chapter I consider a particular problem in econometrics with implications for applied research, and in each case I attempt to solve that problem. &#13;
&#13;
In Chapter 1 I consider the task of inferring causal effects when only `proxy controls' are available. Proxy controls are informative proxies for unobserved confounding factors. For example, suppose we wish to estimate the causal impact of holding students back a grade on their future test scores. Academic ability is likely a confounding factor. While ability is not observed, early test scores may be used to proxy for ability. Under suitable conditions, nonparametric identification and estimation of treatment effects is possible in this setting. I present novel nonparametric identification results that motivate simple and `well-posed' nonparametric estimation and inference methods for use with proxy controls.&#13;
&#13;
My analysis applies to cross-sectional settings but is particularly well-suited to panel models. In panel settings, proxy control methods provide a novel approach to the difficult problem of identification with non-separable, general heterogeneity and fixed T. In panels, observations from different periods serve as proxies for unobserved heterogeneity, and my key identifying assumptions follow from restrictions on the serial dependence structure.&#13;
&#13;
I derive convergence rates for my estimator and construct uniform confidence bands with asymptotically correct size. I apply my methodology to two empirical settings. I estimate causal effects of grade retention on cognitive performance and I estimate consumer demand counterfactuals using panel data.&#13;
&#13;
In Chapter 2 I show that nonparametric instrumental variables (NPIV) estimators are highly sensitive to misspecification: an arbitrarily small deviation from instrumental validity can lead to large asymptotic bias for a broad class of estimators. One can mitigate the problem by placing strong restrictions on the structural function in estimation. If the true function does not obey the restrictions, then imposing them imparts bias. Therefore, there is a trade-off between sensitivity to invalid instruments and bias from imposing excessive restrictions. In response, I present a method that allows researchers to empirically assess the sensitivity of their findings to misspecification. I apply my procedure to the empirical demand setting of Blundell (2007) and Horowitz (2011).&#13;
&#13;
In Chapter 3 I consider methods for inference in dynamic discrete choice models that are robust to approximation error in the solution to the dynamic decision problem. Estimation and inference in dynamic discrete choice models often rely on approximation to lower the computational burden of dynamic programming. If it is not accounted for, the use of approximation can impart substantial bias in estimation and result in invalid confidence sets. I present a method for set estimation and inference that explicitly accounts for the use of approximation and is thus valid regardless of the approximation error. I show how one can account for the error from approximation at low computational cost. My methodology allows researchers to assess the estimation error due to approximation and thus more effectively manage the trade-off between bias and computational expedience. I provide simulation evidence to demonstrate the practicality of my approach.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138934</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Permanent Magnetic Dipole Reaction Sphere Actuator for Spacecraft Attitude Control</title>
<link>https://hdl.handle.net/1721.1/138932</link>
<description>A Permanent Magnetic Dipole Reaction Sphere Actuator for Spacecraft Attitude Control
Hamer, Tyler Thomas
This thesis presents decoupled, closed-loop suspension and rotation of a spherical dipole permanent magnet. The spherical magnet is used as the rotor in a bench-level prototype reaction sphere actuator for spacecraft attitude (orientation) control. Reaction spheres rotate spacecraft either with an equal-and-opposite torque about their rotation axis when accelerated about that axis, similar to reaction wheels, or with a more efficient gyroscopic torque used to reorient their rotation axis, similar to control moment gyroscopes (CMGs). This thesis focuses on the modeling, design, fabrication, control, and operation of a prototype for generating reaction torque; however, modifications to the prototype for generating gyroscopic torque are discussed. Compared to conventional reaction wheels and CMGs, reaction spheres can potentially generate higher torque and store more angular momentum for lower SWaP (Size, Weight, and Power), lower pointing jitter, and over longer mission durations due to the elimination of mechanical bearings and gimbal structures. Thus, such a reaction sphere can potentially enable attitude control performance traditionally found in large, expensive, monolithic spacecraft in fleets of small, low-cost spacecraft, whose demand only continues to increase.&#13;
&#13;
In the reaction sphere prototype, 12 equidistant stator coils, each centered on one face of a dodecahedron, levitate and rotate the enclosed dipole rotor using translation and rotation feedback from 12 equidistant sensing modules, each coaxially placed within one of the coils. The dipole rotor is a 38.1 mm (1.5 in) diameter, permanently axially magnetized, NdFeB (Grade 42) sphere (K&amp;J Magnetics SX8) with a mass of 0.22 kg and a remanence of 1.22 T. The rotor can translate up to 1.0 mm from center and rotate about the 2 axes orthogonal to its magnetization axis, as the coils cannot generate torque about the rotor’s magnetization axis. Each stator coil, a 12.0 mm ID x 20.0 mm OD x 10.0 mm long hollow cylinder wound from 348 turns of AWG 28 magnet wire, has a nominal center-to-center distance of 26.1 mm from a centered rotor. Each sensing module’s optical sensor (Fairchild QRE1113GR) measures the rotor’s translation along its axis with a noise-limited standard deviation of 0.24 µm. Taken as a full set, these optical sensors estimate the rotor’s translation along the inertial (X-, Y-, and Z-) axes with a standard deviation of 60 nm. Each sensing module’s Hall effect sensor (Allegro A1308KUA-1-T) measures the rotor’s radial magnetic flux density with a noise-limited standard deviation of 47 µT. Taken as a full set, these Hall effect sensors estimate the direction of the rotor’s magnetization axis with a standard deviation of 160 µrad (9 mdegree), from which the rotor’s angular velocity is estimated with a standard deviation of 6.1 rpm (0.64 rad/s).&#13;
&#13;
Given that the rotor’s external magnetic flux density is that of a magnetic dipole, we approximate the coil’s external magnetic flux density as that of a magnetic dipole as well. With this approximation, we can approximate a single coil’s force and torque on the rotor by the interaction of 2 magnetic dipoles. Superimposing the force and torque from each coil, the overall force and torque on the dipole rotor resemble the governing equations for voice coils and brushless DC motors, respectively. Thus, the overall force and torque equations are in forms familiar to mechanical and electrical engineers. Using the overall force and torque equations, we (1) determine that a minimum of 12 equidistant coils is needed for 3 DoF (degree of freedom) suspension independent of the rotor’s orientation and (2) size the prototype such that a single coil can levitate the rotor against gravity by itself. Additionally, we develop commutation laws which: (1) decouple the rotor’s DoFs, reducing the control of this multi-input multi-output (MIMO) system to the control of several parallel single-input single-output (SISO) systems, and (2) reorient the rotor such that the rotor’s magnetization axis is orthogonal to the axis about which the desired torque is commanded. Thus, despite the rotor’s inherent underactuation, we can rotate the rotor about an arbitrary axis in steady state. Using classic loop shaping techniques, we design a SISO controller for each of the 3 translational DoFs and another SISO controller for each of the 3 rotational DoFs. The suspension controller is a filtered proportional-integral-derivative (PID) controller with a loop crossover of 60 Hz and phase margin of 30°. The rotation controller is a filtered proportional (P) controller with a loop crossover of 20 Hz and phase margin of 60°. 
We implement the commutation laws and controllers in real-time on a National Instruments PXIe-8135 controller using the LabVIEW programming language with a loop sampling rate of 4 kHz.&#13;
&#13;
Using this control scheme, we successfully demonstrate control of the rotor’s position and rotation. Specifically, the rotor’s position can independently step ±500 µm along each inertial axis with a measured crossover of 60 Hz and a measured phase margin of 30°, as designed. A single coil directly above the rotor requires 0.86 A to suspend the rotor against gravity, corresponding to a measured force constant of 2.45 N/A, which is within 4 % of the predicted force constant of 2.55 N/A. Additionally, the rotor’s angular velocity can independently step ±900 rpm (±94.3 rad/s) about each inertial axis with a measured crossover of 20 Hz and a measured phase margin of 60°, as designed. A sequentially applied 7.5 A coil current corresponding to torque about an arbitrary axis accelerates the rotor to a maximum angular velocity of 13 600 rpm (1420 rad/s) about that axis in 250 ms, corresponding to a measured torque constant of 21.0 mN·m/A, which is within 5 % of the predicted torque constant of 22.1 mN·m/A.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138932</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Martech: Learning to Design, Deliver, and Diffuse Interventions</title>
<link>https://hdl.handle.net/1721.1/138931</link>
<description>Essays on Martech: Learning to Design, Deliver, and Diffuse Interventions
Yang, Jeremy (Zhen)
This dissertation consists of three chapters on leveraging machine learning to better design, deliver, and diffuse interventions with a focus on advertising and targeting.&#13;
&#13;
Chapter one develops an algorithm to predict the causal effect of influencer video advertising on product sales. A summary statistic, the motion-score, or m-score, is proposed to capture the extent to which a product is advertised in the most engaging parts of a video. Pixel-level product placement is located with an object detection algorithm, and pixel-level engagement is estimated as a saliency map by fine-tuning a deep 3D convolutional neural network on video-level engagement data. M-score is then defined as the pixel-level engagement-weighted advertising intensity of a video. The algorithm is constructed and evaluated with influencer video ads on TikTok. Causal effects of video ads on product sales are identified by exploiting variation in video posting time. Videos with higher m-scores indeed lift more sales. This effect is sizable, robust, and more pronounced among impulsive, hedonic, or inexpensive products. The mechanism can be traced to influencers’ incentives to promote themselves rather than the product. How various stakeholders in entertainment commerce can use m-score in a scalable way to optimize content, align incentives, and improve efficiency is discussed.&#13;
&#13;
Chapter two proposes a method to optimize a targeting policy that maximizes an outcome observed only in the long term. Traditionally, this requires delaying decisions until the outcome is observed or relying on simple short-term proxies for the long-term outcome. The method builds on the statistical surrogacy and off-policy learning literature to first impute the missing long-term outcomes and then approximate the optimal targeting policy on the imputed outcomes via a doubly robust approach. It is applied in large-scale proactive churn management experiments at The Boston Globe by targeting optimal discounts to its digital subscribers to maximize their long-term revenue. It is shown that conditions for the validity of average treatment effect estimation with imputed outcomes are also sufficient for valid policy evaluation and optimization; furthermore, these conditions can be somewhat relaxed for policy optimization. The method is also validated empirically by comparing it with a policy learned on the ground-truth long-term outcomes; the two policies are shown to be statistically indistinguishable. It also outperforms a policy learned on short-term proxies for the long-term outcome.&#13;
&#13;
Chapter three explores how network embeddings can be applied to the study of diffusion. Two sets of questions are investigated using a combination of real and simulated datasets: First, can node embeddings predict adoption decisions better than standard centrality-based summary statistics? Second, can node embeddings be used as control variables to reduce the bias in peer effect estimation? Some initial results and future work are discussed.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138931</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Efficient Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/138930</link>
<description>Data Efficient Reinforcement Learning
Xu, Zhi
Reinforcement learning (RL) has recently emerged as a generic yet powerful solution for learning complex decision-making policies, providing the key foundational underpinnings of recent successes in various domains, such as game playing and robotics. However, many state-of-the-art algorithms are data-hungry and computationally expensive, requiring large amounts of data to succeed. While this is feasible in certain scenarios, in applications arising in social sciences and healthcare, for example, where available data is sparse, acquiring such data can be costly or infeasible. With the surging interest in applying RL to broader domains, it is imperative to develop an informed view about the usage of data involved in its algorithmic design.&#13;
&#13;
This thesis hence focuses on studying the data efficiency of RL through a structural perspective. Advancement along this direction naturally requires us to understand when and why algorithms are successful to begin with and, building upon such understanding, to further improve the data efficiency of RL. To this end, this thesis begins by taking inspiration from empirical successes. We consider the popular use of simulation-based Monte Carlo Tree Search (MCTS) in RL, as exemplified by the remarkable achievement of AlphaGo Zero, and probe the data efficiency of incorporating such a key ingredient. Specifically, we investigate the correct form in which to utilize such a tree structure for estimating values and characterize the corresponding data complexity. These results further enable us to analyze the data complexity of an RL algorithm that combines MCTS with supervised learning, as done in AlphaGo Zero.&#13;
&#13;
Having developed a better understanding, as a next step, we improve the algorithmic designs of simulation-based data-efficient RL algorithms that have access to a generative model. We provide such improvements for both bounded and unbounded spaces. Our first contribution is a structural framework through a novel lens of low-rank representation of the Q-function. The proposed data-efficient RL algorithm exploits the low-rank structure to perform pseudo-exploration by querying/simulating only a selected subset of state-action pairs, via a new matrix estimation technique. Remarkably, this leads to a significant (exponential) improvement in data complexity. Moving to our endeavor with unbounded spaces, one must first address the unique conceptual challenges incurred by the unbounded domains. Inspired by classical queueing systems, we propose an appropriate notion of stability for quantifying "goodness" of policies. Subsequently, by leveraging the stability structure of the underlying systems, we design efficient, adaptive algorithms with a modified, efficient Monte Carlo oracle that guarantee the desired stability with a favorable data complexity that is polynomial with respect to the parameters of interest.&#13;
&#13;
Altogether, through new analytical tools and structural frameworks, this thesis contributes to the design and analysis of data-efficient RL algorithms.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138930</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Assembly of Underconstrained Heavy Shafts Suspended By Multiple Cables From A Robotic Crane</title>
<link>https://hdl.handle.net/1721.1/138929</link>
<description>Precision Assembly of Underconstrained Heavy Shafts Suspended By Multiple Cables From A Robotic Crane
Hoffman-Bice, Rachel Marie
This work presents a new approach to precision mating of heavy objects suspended from overhead cranes. Overhead cranes have been used extensively in heavy industry for transporting heavy objects across a factory floor. The current work aims to make such cranes more dexterous and capable of performing precision assembly tasks using the WinchBot system, which suspends an object with tension-controlled cables. Analysis and new control strategies are presented for coordinating multiple winches affixed to a crane, so that, despite a small clearance, a) a suspended shaft (peg) can be smoothly inserted into a hole without jamming due to ill-proportioned insertion forces, and b) even in the case a shaft gets wedged inside a hole, the shaft can be recovered and re-inserted.&#13;
&#13;
First, the physics of chamfer crossing and one-point contact are explored. Using static force balance and kinematic analyses, it was shown that there are two primary ranges of cable mounting angles in which a fixed-length cable-suspended peg can successfully cross the chamfered surface of a hole, enter the hole, and proceed up to two-point contact: the first region, where the cables are nearly vertical (greater than 85 degrees for our specific experimental conditions), and the second, where the cable mounting angles are moderately small (less than 65 degrees).&#13;
&#13;
Further, the introduction of linear actuators to the WinchBot system results in the expansion of the semi-dexterous workspace, such that successful peg insertion can occur even if the hole is not located directly beneath the center of the WinchBot system. It was shown that for a given hole location, there may be a certain winch configuration that allows the WinchBot to position the peg directly above the hole while avoiding unnecessary peg tilt.&#13;
&#13;
Finally, analysis and new control strategies are presented for coordinating multiple winches affixed to a crane, so that a) a suspended shaft (peg) can be smoothly inserted into a hole without jamming despite a small clearance, and b) even in the case a shaft gets wedged inside a hole, the shaft can be recovered from wedging. It is shown through analysis, and validated with experiments, that simply equalizing the three cable tensions allows the shaft to be inserted smoothly into a hole. If the difference between the cable tensions is large, the shaft may experience large contact frictional forces, which may cause the shaft to jam. If wedging occurs, two particular proportions of cable tensions are obtained to break wedging. A process monitor is designed to detect wedging, estimate the location and orientation of the shaft wedged within the hole, and confirm whether the shaft has recovered from wedging. The tilt of the shaft can then be adjusted and re-insertion can occur.&#13;
&#13;
The effectiveness of the proposed control strategies is validated on a 3D multicable crane prototype that is able to demonstrate the successful insertion of a 15 kg shaft into a hole with 120 µm of clearance. The incorporation of the WinchBot system into existing factories will allow for the automation of precision assembly tasks. In turn, this will reduce the reliance on skilled workers, reduce damage to parts, and increase factory throughput.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138929</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Data Science and Artificial Intelligence to Decision Making in Healthcare and Finance</title>
<link>https://hdl.handle.net/1721.1/138922</link>
<description>Applications of Data Science and Artificial Intelligence to Decision Making in Healthcare and Finance
Wong, Chi Heem
Decision-making requires timely and accurate information in order to understand the implications of actions and to manage potential risk. This thesis presents computational methods to quantify risk in drug development programs, address current challenges in health economics, and investigate and predict rare events in finance. The thesis is split into three major parts. &#13;
&#13;
Part I addresses a core issue in assessing the risk and value of drug development programs: the probability of success (PoS). We introduce a Markov chain model of a drug development program that allows us to fill in missing data and infer phase transitions from clinical trial metadata. We investigate the PoSs across various therapeutic areas, and then conduct further analysis for areas that are of public interest (e.g., oncology, vaccines, and anti-infectives) in order to understand the bottlenecks in the drug development process.&#13;
&#13;
Part II of the thesis focuses on the use of modeling and simulations to make informed predictions and drive policy-making in healthcare. One chapter in this part is devoted to the use of data to estimate the financial impact of gene therapy in the U.S. between 2020 and 2035, while another chapter is dedicated to estimating the cost and benefit of various clinical trial designs for the development of a vaccine to prevent COVID-19.&#13;
&#13;
Part III presents a novel 'big data' analysis and machine learning prediction model of panic selling behavior by retail investors. We document the frequency and timing of panic selling, analyze the demographics of investors who tend to freak out and panic sell, and determine if panic selling is a detrimental or optimal action financially. We also develop machine learning models to predict if an investor might panic sell in the near future given the demographic characteristics of the investor, their portfolio history, and the current and past market conditions.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138922</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization and Characterization of Chance-Constrained Guidance, Navigation, and Control for Low-Energy Lunar Transfers</title>
<link>https://hdl.handle.net/1721.1/138920</link>
<description>Optimization and Characterization of Chance-Constrained Guidance, Navigation, and Control for Low-Energy Lunar Transfers
Fitzgerald, Riley McCrea
Low-energy transfers utilizing multi-body dynamics show great promise for future space exploration; in regions with multiple dominant gravitating bodies (for example, in the Earth-Moon system or among the gas giant moons) they can offer reduced capture Δv, more flexible launch windows, and an increased set of accessible arrival conditions to any spacecraft willing to deal with the extra transfer time they require. The reduced propellant utilization and increased flexibility make these transfers especially tempting for lunar missions with small satellites, which may have to deal with launch date uncertainty and may be unable to carry an engine large enough to achieve lunar orbit insertion via traditional means. As we seek to increase the utilization of lunar space in the coming years and decades, low-energy transfers will certainly prove to be a valuable tool.&#13;
&#13;
Although these transfers have been well studied, the practicalities of navigating a spacecraft along them are often ignored. The long durations and unstable, non-linear dynamics of the problem allow small deviations from the nominal trajectory to grow considerably, and so a reliable strategy for orbit determination, navigation, and correction must be employed for any real flight along such a transfer. This is by no means impossible, and missions like Hiten and GRAIL have successfully flown to lunar orbit along low-energy trajectories. However, the standard method of correction is to employ large ground-based tracking resources like the Deep Space Network (DSN) at regular intervals along the transfer, and implement trajectory correction maneuvers at pre-planned times if needed. While this may be feasible for high-budget missions, the cost and operational burden that such a strategy imposes can make these transfers too expensive or impractical for resource-constrained missions.&#13;
&#13;
This dissertation provides methods for reducing the cost that navigation and tracking impose on low-energy transfers, and for characterizing the cost and difficulty of transfer execution. First, a large and diverse catalog of transfers in the circular-restricted four-body problem is developed. Then, a general method for optimizing tracking and correction schedules is outlined; a genetic algorithm—making use of an efficient method for evaluating the probability of transfer mission success given a guidance, navigation, and control (GN&amp;C) timetable—is used to minimize measurement cost subject to a constraint on the probability of transfer success. This method is then applied to all transfers in the catalog, in order to obtain minimum-cost DSN tracking and correction schedules for each trajectory. The results are analyzed using new metrics for transfer navigation feasibility and cost; these metrics predict when a given chance-constrained problem is infeasible, and provide rough predictions and upper bounds on the cost of navigation along feasible low-energy transfers. Finally, the optimization method is again applied to the catalog with the addition of on-board optical measurements in order to demonstrate the potential further cost savings enabled by the use of autonomous navigation. In summary, this work develops a practical method for designing efficient chance-constrained GN&amp;C schedules for low-energy Earth-Moon transfers, and provides unique insights into the general structure of minimum-cost navigation and correction strategies along these trajectories.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138920</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing methods for modeling and estimation of complex socio-technical systems</title>
<link>https://hdl.handle.net/1721.1/138919</link>
<description>Enhancing methods for modeling and estimation of complex socio-technical systems
Li, Tianyi
This thesis consists of three studies designed to improve the rigor of simulation models of complex dynamic systems in social sciences and to enhance the use of quantitative data in the estimation of model structure and parameters. The data available to social scientists developing models of complex systems are often severely limited and suffer from substantial measurement error. Acquiring additional data can be difficult and expensive. Existing methods to estimate model structure and parameters, and to test these models, are often inappropriate. Each study in this thesis addresses an important dimension of these issues.&#13;
&#13;
Chapter 1: We study parameter estimation methods in the context of epidemic models. We compare standard least-squares estimation with a panel of alternative estimation schemes in which we test various likelihood functions as well as the use of Kalman filtering. We explore the performance of these methods under different assumptions about data availability and quality, including missing data on important variables and measurement error. While all methods perform comparably in terms of bias in estimated parameters, they vary significantly in the quality of the confidence intervals they yield. Naive least-squares estimation performs poorly, while a negative binomial likelihood or the application of Kalman filtering yields more reliable results. The results should apply not only to epidemics, but to models of social contagion, innovation adoption and diffusion, and potentially other domains.&#13;
&#13;
Chapter 2: When sufficient data for model specification and estimation are unavailable, how should modelers optimally determine which data should be acquired? Specifically, for a given model and set of variables to collect data on, which next k model variables provide the greatest utility for model calibration? We connect this problem with the sensor placement problem in engineering systems, which leads to a combinatorial optimization. We first translate two established solution approaches from engineering systems to social science simulation models. Then, based on the idea of Data Availability Partition and drawing on insights from existing solutions, we propose a new objective function for the optimization. Analytical results for the optimal placement solution under the new objective function are derived for binary and multi-ary trees. For a general tree structure with n nodes, the optimal placement algorithm is devised, with complexity growing at an upper bound of O(n log₂ n). For arbitrary model structures with feedback loops, approximate solution schemes are developed. Comparison against existing approaches shows notable advantages of the newly proposed method. These findings provide modelers across domains with an objective method and a useful toolkit to prioritize data acquisition.&#13;
&#13;
Chapter 3: Because System Dynamics (SD) strives to create realistic, operationally grounded, endogenous explanations of broad-boundary issues, it often needs large complex models that are difficult to understand and leverage at the aggregate level. Recent efforts have formalized the analysis of the structural determinants of the system’s transient behavior. In this study, we complement these efforts by focusing on model elements (i.e., control inputs) that are ultimately responsible for managing the levels of the system’s state variables. We borrow from structural control theory and develop a set of analyses to formally identify the control inputs in a model and assess their capacity to control system states. This post-modeling workflow is summarized as the structural control analysis (SCA) of SD models. The results of these algorithms provide insights into system controllability and policy design. We illustrate these benefits through several examples and outline potential areas of future research.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138919</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>United States peacetime strategic planning, 1920-1941: the color plans to the victory program.</title>
<link>https://hdl.handle.net/1721.1/138912</link>
<description>United States peacetime strategic planning, 1920-1941: the color plans to the victory program.
Mead, Dana George.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1967; 12 unnumbered pages inserted. Vita.; Bibliography: B-1 - B-18.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138912</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The reformulation model of expertise.</title>
<link>https://hdl.handle.net/1721.1/138911</link>
<description>The reformulation model of expertise.
Mark, William Scott.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Bibliography: leaves 292-296.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138911</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cycling of Iron and Manganese (Oxyhydr)oxides in the Presence of Organic Matter</title>
<link>https://hdl.handle.net/1721.1/138892</link>
<description>Cycling of Iron and Manganese (Oxyhydr)oxides in the Presence of Organic Matter
Gadol, Hayley Jayne
Iron (Fe) and manganese (Mn) (oxyhydr)oxides (or “oxides”) are found in many different environments and are strong sorbents of nutrients and contaminants. Therefore, understanding their cycling has implications for many other biogeochemical cycles. This thesis aims to improve our understanding of microbial and photochemical processes governing Fe/Mn cycling and how the specific mineralogy of Fe/Mn oxides influences these processes. In Chapter 1, we determine the influence of iron oxide mineralogy on methanogenesis using sediment from Upper Mystic Lake (Arlington, MA) as an inoculum in acetate-fed bioreactors. We find that the poorly crystalline mineral ferrihydrite delays the onset of methanogenesis relative to controls while more well-crystalline hematite and goethite do not. We also find that only ferrihydrite-amended microbial communities differ considerably from controls based on 16S-rRNA sequencing data. These ferrihydrite-amended communities continue to diverge from other experiments over time despite a convergence in methane production. In Chapter 2, we characterize Mn cycling in Siders Pond (Cape Cod, MA). We find Mn oxides consistently present in the surface waters, and concentrations reach a maximum in late summer before declining substantially in November. Incubation experiments are used to determine that Mn cycling in Siders Pond is mediated mainly by the balance between microbial Mn oxidation and photoreduction. In Chapter 3, we further explore Mn oxide photoreduction in the presence of organic matter. We find that photoreduction apparent quantum yields (AQYs) of &#120575;-MnO₂ in humic acid solutions decrease with wavelength regardless of pH, and that AQYs in the UV-A range are highest at pH 5, while AQYs in the 450–550 nm range are highest at pH 9. Additionally, we see no measurable photoreduction of triclinic birnessite in similar experiments. 
We also show that AQYs of &#120575;-MnO₂ photoreduction in natural filtered waters from three water bodies in Cape Cod, MA (Siders Pond, Oyster Pond, and seawater) are similar. We use this AQY response with wavelength to calculate water column reduction rates in Siders Pond and determine that the visible range from 400-450 nm is the most important for Mn photoreduction. This work improves our understanding of Fe and Mn oxide cycling in natural environments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, September, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 127-145).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138892</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a mechanism for utilizing science in global environmental policy formulation</title>
<link>https://hdl.handle.net/1721.1/138750</link>
<description>Development of a mechanism for utilizing science in global environmental policy formulation
Matthews, William Henry,
            1942-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science and the Sloan School of Management, Interdepartmental Program, Socio-Technological Engineering, 1970; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138750</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The heat capacities of potassium dihydrogen arsenate and ammonium dihydrogen phosphate from 15 to 300°K. The anomalies at the Curie points</title>
<link>https://hdl.handle.net/1721.1/138717</link>
<description>The heat capacities of potassium dihydrogen arsenate and ammonium dihydrogen phosphate from 15 to 300°K. The anomalies at the Curie points
Zettlemoyer, Albert C.,&#13;
            1915-1991.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1941; Vita.; Includes bibliographical references (leaf 54).
</description>
<pubDate>Wed, 01 Jan 1941 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138717</guid>
<dc:date>1941-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some results in hyperarithmetical analysis</title>
<link>https://hdl.handle.net/1721.1/138711</link>
<description>Some results in hyperarithmetical analysis
Ritter, William Ernest.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1962; Vita.; Includes bibliographical references (leaves 105-106).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138711</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A calculation of the energy bands of the graphite crystal by means of the tight-binding method</title>
<link>https://hdl.handle.net/1721.1/138704</link>
<description>A calculation of the energy bands of the graphite crystal by means of the tight-binding method
Corbató, F. J.
Thesis: Ph. D., Massachusetts Institute of Technology. Dept. of Physics, 1956; Vita. Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 109-110).
</description>
<pubDate>Sun, 01 Jan 1956 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138704</guid>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The synthesis of 1,2-disubstituted cyclooctatetraenes and the preparation and thermal rearrangement of diethyl β-naphthylallylmalonate</title>
<link>https://hdl.handle.net/1721.1/138696</link>
<description>The synthesis of 1,2-disubstituted cyclooctatetraenes and the preparation and thermal rearrangement of diethyl β-naphthylallylmalonate
Meili, Jay Ernest.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1952; Vita.
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138696</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fatigue behavior of drilled T300/PEEK laminated composites</title>
<link>https://hdl.handle.net/1721.1/138692</link>
<description>Fatigue behavior of drilled T300/PEEK laminated composites
Billaut, François.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1993; Includes bibliographical references (leaves 79-84).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138692</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum chromodynamics at low energies</title>
<link>https://hdl.handle.net/1721.1/138691</link>
<description>Quantum chromodynamics at low energies
Lellouch, Laurent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (leaves 142-149).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138691</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental behavior of grinding wheels.</title>
<link>https://hdl.handle.net/1721.1/138688</link>
<description>Fundamental behavior of grinding wheels.
Kirk, James Allen.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138688</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creep deformation in nickel base superalloy single crystals</title>
<link>https://hdl.handle.net/1721.1/138686</link>
<description>Creep deformation in nickel base superalloy single crystals
Pollock, T. M.
            (Tresa M.)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1989; Includes bibliographical references (leaves 133-137).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138686</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomechanics of extravehicular activity and its neutral buoyancy simulation</title>
<link>https://hdl.handle.net/1721.1/138685</link>
<description>Biomechanics of extravehicular activity and its neutral buoyancy simulation
Cousins, Daniel.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1987; Vita.; Bibliography: leaves 123-128.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138685</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure and flexibility of myoglobin : molecular dynamics and x-ray crystallography</title>
<link>https://hdl.handle.net/1721.1/138683</link>
<description>The structure and flexibility of myoglobin : molecular dynamics and x-ray crystallography
Kuriyan, John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1986; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138683</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in intertemporal economics</title>
<link>https://hdl.handle.net/1721.1/138681</link>
<description>Essays in intertemporal economics
Woodford, Michael,
            1955-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1983; Includes bibliographies.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138681</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microwave spectral lines in galactic dust globules.</title>
<link>https://hdl.handle.net/1721.1/138673</link>
<description>Microwave spectral lines in galactic dust globules.
Martin, Robert Norman.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1976; Bibliography: leaves 235-239.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138673</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperarithmetical real numbers and hyperarithmetical analysis</title>
<link>https://hdl.handle.net/1721.1/138670</link>
<description>Hyperarithmetical real numbers and hyperarithmetical analysis
Hodes, Louis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1962; Vita.; Includes bibliographical references (p. 82-84).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138670</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of the automatic control of multiple capacity systems</title>
<link>https://hdl.handle.net/1721.1/138653</link>
<description>An analysis of the automatic control of multiple capacity systems
Hrones, John A.
            (John Anthony)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1942; Vita.; Includes bibliographical references (leaves 212-214).
</description>
<pubDate>Thu, 01 Jan 1942 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138653</guid>
<dc:date>1942-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The reactions of gem-dihalocyclopropanes with organometallic reagents</title>
<link>https://hdl.handle.net/1721.1/138652</link>
<description>The reactions of gem-dihalocyclopropanes with organometallic reagents
Moser, William R.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1964; Vita.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138652</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the theory of combinatorial independence</title>
<link>https://hdl.handle.net/1721.1/138650</link>
<description>On the theory of combinatorial independence
Crapo, Henry H.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1964; Vita.; Includes bibliographical references (leaves 195-200).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138650</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On cuts and clutters</title>
<link>https://hdl.handle.net/1721.1/138640</link>
<description>On cuts and clutters
Ramakrishnan, V. S.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1994; Includes bibliographical references (p. 87-91).
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138640</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collisional excitation of vibration and rotation in diatomic molecules</title>
<link>https://hdl.handle.net/1721.1/138638</link>
<description>Collisional excitation of vibration and rotation in diatomic molecules
Lukasik, Stephen J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1956; Vita.; Bibliography: leaves 259-261.
</description>
<pubDate>Sun, 01 Jan 1956 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138638</guid>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decomposition theorem for the Haar integral</title>
<link>https://hdl.handle.net/1721.1/138635</link>
<description>Decomposition theorem for the Haar integral
Davis, Harry F.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1954; Vita.; Bibliography: leaf 34.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138635</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication processes and United States government policy: the development of communication satellite policy in the executive branch.</title>
<link>https://hdl.handle.net/1721.1/138631</link>
<description>Communication processes and United States government policy: the development of communication satellite policy in the executive branch.
Mobius, Joseph Benhard Mark.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1964
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138631</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High energy scalar quantum electrodynamics.</title>
<link>https://hdl.handle.net/1721.1/138628</link>
<description>High energy scalar quantum electrodynamics.
Bhattacharyya, Ajit Kumar.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1970; Vita.; Bibliography: leaves 135-136.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138628</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Slag as a heat transfer medium.</title>
<link>https://hdl.handle.net/1721.1/138624</link>
<description>Slag as a heat transfer medium.
Fine, Harold Alan.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138624</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Texture modeling--temperature effects on Markov/Gibbs random fields</title>
<link>https://hdl.handle.net/1721.1/138623</link>
<description>Texture modeling--temperature effects on Markov/Gibbs random fields
Picard, Rosalind W.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1991; Includes bibliographical references (leaves 148-152).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138623</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic response of structures in layered soils.</title>
<link>https://hdl.handle.net/1721.1/138620</link>
<description>Dynamic response of structures in layered soils.
Chang Liang, Victor.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1974; Vita.; Bibliography: leaves 185-190.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138620</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The creep and rupture behavior of aluminum as a function of purity</title>
<link>https://hdl.handle.net/1721.1/138619</link>
<description>The creep and rupture behavior of aluminum as a function of purity
Servi, Italo S.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1951; Vita.; Bibliography: leaves 125-130.
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138619</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical layers in parallel flows.</title>
<link>https://hdl.handle.net/1721.1/138617</link>
<description>Critical layers in parallel flows.
Haberman, Richard,&#13;
            1945-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1971; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138617</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decaying dark matter and reionization</title>
<link>https://hdl.handle.net/1721.1/138612</link>
<description>Decaying dark matter and reionization
Jubas, Jay M.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1993; Includes bibliographical references (leaves 135-140).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138612</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A peptide model for the heparin binding site on Antithrombin III</title>
<link>https://hdl.handle.net/1721.1/138608</link>
<description>A peptide model for the heparin binding site on Antithrombin III
Lellouch, Annemarie Coffman.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1992; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138608</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organic synthesis via group 4 metallocene complexes</title>
<link>https://hdl.handle.net/1721.1/138605</link>
<description>Organic synthesis via group 4 metallocene complexes
Grossman, Robert B.&#13;
            (Robert Bruce)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1992; Includes bibliographical references (leaves 118-135).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138605</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abnormal selectivity in adsorption on activated carbon of binary gaseous mixtures of low molecular weight olefin and the corresponding paraffin.</title>
<link>https://hdl.handle.net/1721.1/138604</link>
<description>Abnormal selectivity in adsorption on activated carbon of binary gaseous mixtures of low molecular weight olefin and the corresponding paraffin.
Ku, Chen Chun.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1945; Vita.; Includes bibliographical references (leaf 146).
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138604</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Projective and injective Banach spaces.</title>
<link>https://hdl.handle.net/1721.1/138601</link>
<description>Projective and injective Banach spaces.
Metas, Nick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; Bibliography: leaves 228-231.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138601</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laminarization of the turbulent boundary layer acceleration.</title>
<link>https://hdl.handle.net/1721.1/138593</link>
<description>Laminarization of the turbulent boundary layer acceleration.
Launder, B. E.&#13;
            (Brian Edward)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138593</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer-aided teaching of dynamic system behavior.</title>
<link>https://hdl.handle.net/1721.1/138592</link>
<description>Computer-aided teaching of dynamic system behavior.
Rosenberg, Ronald Carl.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1965; Bibliography: leaves 182-184.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138592</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural design synthesis using machine learning</title>
<link>https://hdl.handle.net/1721.1/138590</link>
<description>Structural design synthesis using machine learning
Danhaive, Renaud Aleis Pierre Emile.
This dissertation investigates how data-driven methods may support creative, performance-informed design processes for early-stage building and structural design. Given the imperative to curb greenhouse gas emissions, designers have an increased responsibility to consider the environmental impact of their decisions early on in the design process when they have outsize effects on a building's environmental performance. Although existing methods of optimization and design space exploration can guide designers toward better design options based on simulation data, there remain significant hurdles to the effective adoption of these tools despite their potential benefits. First, many engineering simulations remain cumbersome to connect with and slow to run, disrupting the pace of a fluid design process. Second, the design spaces used to generate and evaluate design variations are so vast that they are virtually impossible for humans to effectively explore. Finally, due to the intrinsically human nature of architecture and design, there is strong resistance to any process which purports to fully automate it. This dissertation addresses these challenges by proposing three strategies that capitalize on recent advances in deep learning to connect the power of performance-driven computing with the fluidity and creativity of human design and help human designers explore complex structural design spaces more intuitively. The first approach uses convolutional neural networks to expand surrogate modeling, which substitutes fast data-driven approximations for slow engineering simulations, from the prediction of single metrics to entire simulation fields. This reveals how performance is distributed spatially, providing more holistic feedback than previously possible. Two case studies show how this can uniquely link shape exploration and design materialization in fast and responsive ways. 
The second strategy introduces a sequential sampling algorithm that can help increase the effectiveness of many data-driven design approaches by helping build high-quality design datasets. Finally, the third approach takes advantage of the proposed sampling scheme to train deep generative models with low-dimensional latent spaces that can be intuitively explored by human designers to synthesize diverse structures with prescribed performance levels. Case studies spanning different typologies and scales illustrate these approaches and demonstrate how harnessing advances in machine learning can amplify human creativity in structural design.
Thesis: Ph. D. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official pdf of thesis.; Includes bibliographical references (pages 203-219).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138590</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Migrant sarifa settlements and state-building in Iraq</title>
<link>https://hdl.handle.net/1721.1/138589</link>
<description>Migrant sarifa settlements and state-building in Iraq
Gupta, Huma.
This dissertation examines the negative dialectical relationship between state-building and sarifa (reed) and kukh (mud) dwellings built by rural migrants in Baghdad. Rural migrants served as the paradigmatic developmental subjects of the Iraqi state, providing a supply of devalued labor for the army, civil service, construction industry, domestic services, manufacturing, police, and transportation. In contrast with how Baghdad's history has been portrayed, sarifa and kukh dwellings were arguably the most salient architectural feature of the mid-20th century capital. These dwellings are thus examined beyond discourses of underdevelopment or the vernacular in order to show how the Iraqi governmental apparatus was shaped and reshaped through encounters with migrant subjects and their settlements between 1920 and 1970. Rural migrants in the capital were central to communist agitation, protest movements, demands for land and housing, and the 1958 anti-monarchic revolution that transformed the modern state of Iraq. By examining how governmental apparatuses were designed to manage land dispossession under the benevolent guise of urban development, this dissertation narrates how the birthing of state territory is a violent and iterative process. Birthing here is used in the gerund form to indicate that this process is never fully realized. It exists in a state of becoming, between dwelling and building. This dissertation recuperates the figure of the sarifa-dweller as a central subject in Iraq's historiography. And it employs rare archival film footage, photographs, architectural drawings, diplomatic records, government reports, interviews and surveys in order to reconstruct the history of these ephemeral places through the very documentation strategies that accompanied modes of state formalization that sought to erase them. 
Migrant settlements were exceptional places where macroeconomic theory or architectural modernity were allowed to disintegrate until the point that they became an economic or political problem for the ruling regime. Lastly, this dissertation examines how the international macroeconomic project of devaluing rural architecture was applied towards sarifa dwellings in Iraq. It demonstrates that the material transformation of dwelling in Iraq was, therefore, part of a larger globalist project of integrating individuals whose lives and dwellings were rendered unproductive through the calculative frameworks of a world industrial system.
Thesis: Ph. D. in History and Theory of Architecture, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official pdf of thesis.; Includes bibliographical references (pages 317-331).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138589</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Valuing design and designing value : the financial impact of daylight and views in office building real estate</title>
<link>https://hdl.handle.net/1721.1/138588</link>
<description>Valuing design and designing value : the financial impact of daylight and views in office building real estate
Turan, Irmak İfakat.
Architecture and finance both contribute to the conception and production of the built environment. Their agendas for buildings are sometimes in agreement and other times at odds. In this dissertation, I examine the intersection of architecture and finance by quantitatively assessing the economic value of design. Specifically, I measure the premium of two visual attributes--daylight and views--in office spaces in the borough of Manhattan in New York. Combining computational building performance analysis with empirical commercial rent data, I evaluate offices simultaneously as designed spaces and as property. First, I simulate spatially-distributed daylight and views in 5,154 offices. In the case of views, the hybrid performance-finance approach informs a new method for view modeling in an urban context. Second, using a hedonic pricing regression, I measure the premium for daylight and views in office rent prices. The results show that spaces with high levels of daylight have a 5 to 6% premium over spaces with low daylight; and spaces with high access to views have a 6% premium over spaces with low access to views. The combined value of spaces with both high daylight and view access, similarly, is 6%, indicating that the impact of daylight and views together is significant but is not additive. The identified premiums reflect how much more tenants are willing to pay for these attributes, holding all other building, neighborhood, and lease contract characteristics constant. At a moment when the affordability and sustainability of the urban built environment are in question, identifying the financial value of spatial characteristics can inform the production and regulation of properties. Architectural design and the flow of real estate capital are among a multitude of factors that collectively impact the creation of buildings. 
Combining methods of building performance analysis and financial modeling, this dissertation presents a new lens through which to understand how spatial design relates to economic forces governing our built world.
Thesis: Ph. D. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official pdf of thesis.; Includes bibliographical references (pages 143-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138588</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning the "Multiracial City" : architecture, decolonization and the design of stability in British Africa (1945-1957)</title>
<link>https://hdl.handle.net/1721.1/138587</link>
<description>Planning the "Multiracial City" : architecture, decolonization and the design of stability in British Africa (1945-1957)
Woudstra, Rixt
            (Rixt Laurien)
In the two turbulent decades before most British African territories gained independence, British authorities reorganized rapidly growing cities such as Nairobi, Kampala, and Accra by constructing state-sponsored housing estates for African families. This dissertation examines how these late-colonial housing projects were part of a larger effort to maintain control over British Africa during a moment frequently described by colonial officials as "instable," but which for many others held the promise of a different, independent future. I argue that British architects and planners collaborated with labor experts, sociologists, and social welfare workers to prevent anticolonial protests, labor strikes, and mass demonstrations, and to create a "stable" black working class. Building on archival research and fieldwork in Ghana, Uganda, South Africa, and the United Kingdom, I explore four interrelated architectural and spatial strategies employed by British colonial architects and planners: the promotion of the sociological construct of the "multiracial city" to reduce racial tensions; the creation of community centers to stimulate social cohesion; the design of built-in furniture to modernize the domestic sphere; the engineering of new building materials to improve the durability of housing estates. While the political process of decolonization is frequently discussed as a moment of rapid change, this dissertation shows that architects and planners, such as Alfred Alcock and Leonard Thornton-White, participated in the drawn-out negotiation between colonial rule and self-government. Their designs aimed to impede anticolonial struggles for self-determination, racial equality, and social reform and thus postpone the looming prospect of independence. This dissertation also investigates the British welfare state's imperial dimensions. 
The construction of late-colonial housing estates was entangled with the design of council flats in London, Liverpool, and other English cities. The case studies demonstrate that the principles of social welfare, founded on the ideal of a modern, more equal society, served to support a violent political system of extraction and labor exploitation abroad. The housing estates in Britain's African territories were presented as progressive investments to benefit local workers but were, in fact, designed to avoid uprisings that would interrupt Britain's lucrative supply chain.
Thesis: Ph. D. in History and Theory of Architecture, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official pdf of thesis. "September 2020."; Includes bibliographical references (pages 221-246).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138587</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generation and propagation of focused shock fronts.</title>
<link>https://hdl.handle.net/1721.1/138575</link>
<description>Generation and propagation of focused shock fronts.
Sanai, Mohsen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1974; Bibliography: leaves 133-138.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138575</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new grammatical transformation into deterministic top-down form.</title>
<link>https://hdl.handle.net/1721.1/138573</link>
<description>A new grammatical transformation into deterministic top-down form.
Hammer, Michael,
            1948-2008.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Number 256 omitted in paging. Vita.; Bibliography: leaves 300-301.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138573</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generation and amplification of short laser pulses.</title>
<link>https://hdl.handle.net/1721.1/138571</link>
<description>Generation and amplification of short laser pulses.
Stark, Eugene Earle.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138571</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algebraic treatment of the Whitney conditions</title>
<link>https://hdl.handle.net/1721.1/138570</link>
<description>Algebraic treatment of the Whitney conditions
Callejas-Bedregal, Roberto.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1990; Includes bibliographical references (leaves 66-69).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138570</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The reaction of molecular fluorine with silicon(100) : adsorption, desorption, and scattering dynamics</title>
<link>https://hdl.handle.net/1721.1/138564</link>
<description>The reaction of molecular fluorine with silicon(100) : adsorption, desorption, and scattering dynamics
Schulberg, Michelle Tobi.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1990; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138564</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of the heat capacity results of antiferromagnetic compounds and compounds which exhibit rotation.</title>
<link>https://hdl.handle.net/1721.1/138560</link>
<description>Analysis of the heat capacity results of antiferromagnetic compounds and compounds which exhibit rotation.
Smith, David.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138560</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Part I: The kinetics of thiophene hydrogenolysis. Part II: Effectiveness factors for porous catalysts: Langmuir-Hinshelwood kinetic equations.</title>
<link>https://hdl.handle.net/1721.1/138559</link>
<description>Part I: The kinetics of thiophene hydrogenolysis. Part II: Effectiveness factors for porous catalysts: Langmuir-Hinshelwood kinetic equations.
Roberts, George Willard.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138559</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The electron distribution function and impurities in alcator.</title>
<link>https://hdl.handle.net/1721.1/138558</link>
<description>The electron distribution function and impurities in alcator.
Rice, J. E.&#13;
            (John E.)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Physics, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138558</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations in coolant mixing and flow split for LMFBR wire wrapped assemblies.</title>
<link>https://hdl.handle.net/1721.1/138557</link>
<description>Investigations in coolant mixing and flow split for LMFBR wire wrapped assemblies.
Chiu, Chong.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138557</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The convective melting and coalescence of polymer granules</title>
<link>https://hdl.handle.net/1721.1/138554</link>
<description>The convective melting and coalescence of polymer granules
Malguarnera, Salvatore Chris.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138554</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of quantum field theoretical methods to some problems in the nonequilibrium statistical mechanics of conductors</title>
<link>https://hdl.handle.net/1721.1/138553</link>
<description>Applications of quantum field theoretical methods to some problems in the nonequilibrium statistical mechanics of conductors
Tremblay, André-Marie.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138553</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental study of toroidal plasmas with non-circular cross-section.</title>
<link>https://hdl.handle.net/1721.1/138551</link>
<description>Experimental study of toroidal plasmas with non-circular cross-section.
Martin, Francis F.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138551</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The development and evaluation of 252Cf neutron irradiation facilities for two biomedical applications using discrete ordinates, Monte Carlo, and experimental methods.</title>
<link>https://hdl.handle.net/1721.1/138550</link>
<description>The development and evaluation of 252Cf neutron irradiation facilities for two biomedical applications using discrete ordinates, Monte Carlo, and experimental methods.
Pettigrew, Roderic Ivan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1977; Bibliography: leaves 169-175.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138550</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron scattering studies of 166Er, 176Yb, and 238U.</title>
<link>https://hdl.handle.net/1721.1/138549</link>
<description>Electron scattering studies of 166Er, 176Yb, and 238U.
Creswell, Carroll William.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138549</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new theory for the wake of marine propellers.</title>
<link>https://hdl.handle.net/1721.1/138548</link>
<description>A new theory for the wake of marine propellers.
Loukakis, Theodore.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1971; Bibliography: leaves viii-x.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138548</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identity across possible worlds</title>
<link>https://hdl.handle.net/1721.1/138547</link>
<description>Identity across possible worlds
Siegel, Kenneth Harry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Philosophy, 1975; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138547</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The utility of sulfur (IV) and selenium (IV) imido compounds in organic synthesis.</title>
<link>https://hdl.handle.net/1721.1/138541</link>
<description>The utility of sulfur (IV) and selenium (IV) imido compounds in organic synthesis.
Singer, Stephen Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1977; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138541</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical methods for the study of metal-ligand interactions in aquatic environments.</title>
<link>https://hdl.handle.net/1721.1/138540</link>
<description>Chemical methods for the study of metal-ligand interactions in aquatic environments.
Westall, John Cooper.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1977; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138540</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>People processing and power : the street-level bureaucrat in public service bureaucracy</title>
<link>https://hdl.handle.net/1721.1/138539</link>
<description>People processing and power : the street-level bureaucrat in public service bureaucracy
Prottas, Jeffrey Lee.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1976; Bibliography: leaves 352-359.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138539</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illiquidity, the demand for consumer durables, and monetary policy</title>
<link>https://hdl.handle.net/1721.1/138537</link>
<description>Illiquidity, the demand for consumer durables, and monetary policy
Mishkin, Frederic Stanley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1976; Bibliography: leaves 115-121.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138537</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approaches for the total synthesis of miroestrol.</title>
<link>https://hdl.handle.net/1721.1/138535</link>
<description>Approaches for the total synthesis of miroestrol.
Williams, David Ransom.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138535</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision selenodesy via differential very-long-baseline interferometry</title>
<link>https://hdl.handle.net/1721.1/138534</link>
<description>Precision selenodesy via differential very-long-baseline interferometry
King, R. W.&#13;
            (Robert Wilson),&#13;
            1947-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1975; Vita.; Bibliography: p. 168-172.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138534</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational fluid dynamics and turbulence model uncertainty quantification for nuclear reactor safety applications</title>
<link>https://hdl.handle.net/1721.1/138529</link>
<description>Computational fluid dynamics and turbulence model uncertainty quantification for nuclear reactor safety applications
Acton, Michael
            (Michael John)
Computational fluid dynamics (CFD) simulations are becoming an increasingly important tool across a wide range of engineering disciplines. CFD simulations improve the accuracy and predictive capability of the evaluation process through numerical solution of the Reynolds-Averaged form of the Navier-Stokes (RANS) equations describing fluid flow. Uncertainty arises in CFD simulations due to a variety of sources, and this uncertainty must be rigorously quantified in order to be useful in support of reactor licensing and decision making. In traditional system thermal hydraulics codes, the code scaling, applicability, and uncertainty (CSAU) approach is used to ensure simulation quality and to estimate the level of uncertainty present. No such method is presently available for CFD. This thesis discusses the pathway for utilizing CFD in reactor licensing applications in the face of uncertainty through the introduction of CFD-CSAU. In order to do this, modifications to the CSAU method are proposed to bring the process in line with the requirements of CFD. This includes simulation quality assurance processes that are established in the CFD verification and validation community but are not currently part of the CSAU process. In addition to ensuring simulation quality and confidence, a rigorous estimate of simulation uncertainty must be made. Emphasis is placed on the quantification of turbulence modeling uncertainty in a predictive context, as it is the clear missing link in the handling of CFD uncertainty. Previous work has commonly focused on the propagation of uncertainty due to turbulence model calibration coefficients; however, such an approach ignores much of the uncertainty associated with the turbulence model, and does not extrapolate well to a full variety of flow conditions. In this work, a novel approach is discussed which is based on treating the uncertainty directly through the turbulent viscosity field (μt).
This allows for a more complete treatment of the modeling uncertainty compared to the uncertainty in the calibration coefficients. As the turbulent viscosity takes on unique values in continuous space, the uncertainty must be modeled as a random field, defined by the marginal distribution and the covariance function. These properties are defined through two unique hyper-parameters, which are inferred on a training data set and applied to a variety of validation data sets. The approach is shown to generalize well across a wide variety of turbulent test cases, accurately predicting uncertainty bounds, especially as compared to previous methods. The applicability to a representative reactor flow condition is demonstrated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 185-192).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138529</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Role of city texture in identifying drag coefficients of buildings to prevent hurricane damage</title>
<link>https://hdl.handle.net/1721.1/138526</link>
<description>Role of city texture in identifying drag coefficients of buildings to prevent hurricane damage
Roxon, Jacob.
Hurricane damage is one of the costliest and most frequent of natural disasters. In total, the cumulative cost of all 16 hurricanes in the US in 2017 was in excess of $300 billion, and by 2075 the average annual damage cost in the US is expected to rise by nearly 40%. In order to mitigate disaster damage, governments mandate minimum standards for construction depending on location and building type--standards known as building codes. Yet most codes remain insufficient as they account only for individual buildings and overlook the influence of city layout on wind speeds and storm damage. To reinvigorate design codes and better predict hurricane damage, we propose a new city texture resilience approach, which accounts for local geometric layouts to predict more accurate building codes. Tested using computational fluid dynamics simulations for different city textures with common geometrical layouts, we found that the city texture model, derived using online GIS data of building footprints, predicts damage from 2018 Hurricane Michael in Mexico Beach, FL, with 67% accuracy. Furthermore, we find that ordered "crystal" cities have higher susceptibility to hurricane damage, showing a higher proportion of buildings with drag coefficients in the upper range. Using this approach, stakeholders can readily identify entire cities (or neighborhoods) with high susceptibility to hurricane damage. Moreover, they can identify buildings with the highest risk of damage, allowing targeted retrofitting and thereby enabling more resilient development and urban planning to reduce the risk of hurricane damage and mitigate the kinds of extreme damage experienced by communities with histories of high-speed winds, especially as climate change intensifies future storms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript. "The images contained in this document are of the best quality available"--Disclaimer page.; Includes bibliographical references (pages 131-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138526</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling multiphysics of traveling wave reactor spent fuel disposal in deep crystalline host rock</title>
<link>https://hdl.handle.net/1721.1/138525</link>
<description>Modeling multiphysics of traveling wave reactor spent fuel disposal in deep crystalline host rock
Rodríguez Buño, Mariana.
This study is the first investigation of a method to dispose of the spent fuel of the Traveling Wave Reactor (TWR), an innovative nuclear reactor design. Because significantly higher heat is produced in the central region of the rods than in conventional spent fuel, TWR spent fuel presents new challenges. This work studies the disposal of TWR high-linear-power spent fuel in deep boreholes in crystalline host rock. The boreholes are 5 km deep, separated horizontally by 200 m, and the spent fuel is enclosed in metallic canisters placed vertically in the deposition boreholes in the bottom 2 km. Nuclear regulators require analysis of the repository's performance for one million years. Other than human intrusion, groundwater transport is the only important mechanism for escape of radioactive material from the repository. Decay heat, combined with the natural geothermal flux, causes groundwater to flow, compromising radioactive containment. The numerical model used to study this problem must accurately predict the thermal field and induced fluid flow at different time and length scales, with strong coupling of all physics. Given these requirements, the numerical simulations of the coupled thermo-hydraulic behavior of a nuclear waste repository are computationally very expensive. To perform the repository simulations, we modified an open-source, finite element-based, fully implicit, fully coupled hydrothermal C++ code, FALCON, based on the MOOSE framework (Multiphysics Object Oriented Simulation Environment). Our simulations show that a first local maximum temperature in the rock near the central borehole of the array occurs within 30 years of disposal (76°C), and an extended period of elevated temperatures with a larger absolute maximum (96°C) begins at 5,000 years. Neither supercritical conditions nor boiling are reached. Thermally driven fluid flow leads to particles from the waste breaking through at the surface at about 150,000 years.
A comparison with nuclear waste from conventional Pressurized Water Reactors (PWR) shows that TWR spent fuel produces lower temperatures than PWR spent fuel for the first 3,200 years. After this time, TWR temperatures surpass PWR results. The flow characteristics for PWR and TWR are similar. The breakthrough time can be extended by increasing the spacing between the boreholes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 241-252).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138525</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining numerical simulation and machine learning - modeling coupled solid and fluid mechanics using mesh free methods</title>
<link>https://hdl.handle.net/1721.1/138524</link>
<description>Combining numerical simulation and machine learning - modeling coupled solid and fluid mechanics using mesh free methods
Raymond, Samuel J.
            (Samuel James)
The prediction and understanding of physical systems is largely divided into two camps: those based on data and those based on numerical models. These two approaches have long been developed independently of each other. This work further improves the modeling of physical systems and presents a new way to inject simulation data into deep learning architectures to aid the engineering design process. In this thesis, the Material Point Method (MPM), a computational mechanics technique, is extended to model the mixed-mode failure of damage propagation and plasticity in the aggregate materials commonly found deep underground. To achieve this, the Grady-Kipp damage model and the pressure-dependent Drucker-Prager plasticity model are coupled to allow for mixed-mode failure to develop in the material. This is tested against analytical results for brittle materials, as well as a series of experimental results. In addition, the brittle fracture in thin silicon wafers is also modeled to better understand the tolerances in manufacturing loads on these delicate objects. Finally, in a novel approach to combine the results of a numerical simulation and the power of a deep neural network, biomedical device design is studied. Here the simulation of the acoustofluidics of a microchip is performed to generate a large dataset of boundary conditions and solved pressure fields. This dataset is then used to train a neural network so that the inverse relationship between the boundary condition and the pressure field can be obtained. Once this training is complete, the network is used as a design tool for a specified pressure field and the results are fabricated and tested.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 137-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138524</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancement of unconventional oil and gas production forecasting using mechanistic-statistical modeling</title>
<link>https://hdl.handle.net/1721.1/138523</link>
<description>Enhancement of unconventional oil and gas production forecasting using mechanistic-statistical modeling
Montgomery, Justin B.
            (Justin Bruce)
Unconventional oil and gas basins have rapidly become expansive and critical energy resource systems. However, accurately predicting highly variable well production rates remains challenging, given the typically poor subsurface characterization and complex flow behavior involved. This creates uncertainty about future resource availability, undermining reliable economic assessments and good stewardship of the resource. Production, drilling, and hydraulic fracturing datasets from thousands of wells offer insight into patterns of productivity but are noisy and incomplete. Fully exploiting this information is only possible by leveraging contextual knowledge to structure observations. This thesis provides a novel framework for combining machine learning and probabilistic modeling with domain knowledge and physics to understand and predict well productivity. Technology is a constantly evolving driver of productivity that must be captured in forecasts. This thesis shows that the immense geological heterogeneity of unconventional basins can lead to overestimating the role of technology when the best areas are increasingly targeted alongside design improvements. This conflation is remedied using spatial structure to infer geological productivity as a latent variable. A regression-kriging technique is shown to effectively disentangle technology from geology--which play roughly equal roles--and reduce error in initial well productivity predictions by more than a third compared to established methods. Long-term production dynamics for unconventional wells are unpredictable and current forecasting approaches have considerable limitations. Fitted production curve models are ill-posed and unreliable, but aggregated type-well curves ignore important differences between wells. This thesis introduces Tikhonov regularization as a way of effectively sharing information across wells, cutting error in the earliest long-term productivity forecasts in half. 
Additionally, a spatiotemporal hierarchical Bayesian approach is developed that incorporates physical relationships to enhance predictions and interpretability while quantifying and reducing uncertainty. Sampling from this high dimensional model is enabled by designing a unique Metropolis-Hastings within Gibbs scheme to take advantage of the model's structure. This novel mechanistic-statistical approach is able to learn and generalize physical relationships across ensembles of wells with vastly different properties--realistic scenarios where current techniques generate two to five times as much error--providing an important and practical advance in better understanding and managing these resources.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 107-115).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138523</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring ambient air quality using low-cost sensors</title>
<link>https://hdl.handle.net/1721.1/138522</link>
<description>Measuring ambient air quality using low-cost sensors
Hagan, David Henry.
Exposure to ambient air pollution increases morbidity and mortality - it is the leading environmental risk factor for the global burden of disease. Accurate and reliable air quality measurements can help identify sources of pollution, enabling mitigation strategies that lead to longer, healthier lives and reduce the economic impact caused by air pollution. Low-cost sensors (LCS) offer the opportunity to vastly increase our ability to make air quality measurements with finer spatiotemporal resolution than ever before; however, the use of such sensors has outpaced our understanding of their abilities, limitations, and overall utility. This thesis describes several detailed characterization experiments and modeling exercises that were carried out to better understand the strengths and limitations of using LCS for measuring air pollution and investigates new ways in which they can be used to identify types and sources of particulate pollution. First, we explore the use of electrochemical sensors for measuring sulfur dioxide in Hawaii. We introduce a new algorithm for reducing the uncertainty while still capturing high concentration data points and we demonstrate that electrochemical sensor performance does not degrade over the course of several months. Second, we investigate the long-term viability of electrochemical sensors with a three-year deployment in Boston. We show that CO and NO electrochemical sensors show little-to-no degradation over the course of at least two years. Third, we designed and built a Mie Theory-based model for exploring the ability of low-cost particle sensors to accurately measure fine particles. We show the expected differences between two commonly used technologies under a variety of ambient conditions. Last, we demonstrate how multipollutant LCS, despite their inherent limitations, can be used to provide insight into sources of fine particulate matter in complex urban environments. 
The results from this work demonstrate that LCS can be viable tools that can be integrated into the greater air quality measurement ecosystem and can be used to obtain reliable data at scales previously unseen.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; "February 2019." Author's name in February 2020 and not in February 2019 MIT Registry degree list. Manuscript. "The images contained in this document are of the best quality available"--Disclaimer page.; Includes bibliographical references (pages 130-146).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138522</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaginative reasoning in probabilistic programs</title>
<link>https://hdl.handle.net/1721.1/138517</link>
<description>Imaginative reasoning in probabilistic programs
Tavares, Zenna.
Human reasoning is complex, messy, and approximate, and as a result, has been subject to a millennia-long enterprise to extract principles that are simple, neat, and impeccable. This enterprise is incomplete; there are acts of reasoning that humans perform every day, often effortlessly, that remain both poorly understood and beyond the capabilities of modern methods of artificial intelligence. Specifically, humans understand the causal structure of the world, and mentally manipulate it to imagine worlds that could have been but were not, and even worlds that could never exist in reality. This thesis investigates computational principles of imaginative reasoning; develops programming languages to express the knowledge upon which imaginative reasoning relies, and upon this foundation introduces practical algorithms of automatic inference. Concretely, we introduce probabilistic programming languages - which encode causal probabilistic models as programs - with two new forms of inference. The first is distributional inference, which means reasoning with statistical information rather than observational data. This allows us, for instance, to address problems of algorithmic fairness and robustness, and to perform parameter estimation using data about probabilities, expectations, and other distributional properties. The second is causal inference, which allows us, in complex simulation models, to reason about counterfactual what-if scenarios, as well as causation, i.e., whether some event A is the cause of some other event B. To perform inference, we introduce a number of new algorithms. Unlike traditional methods, these modify the internal structure of the model or reinterpret how it is executed. We introduce parametric inversion, which inverts the causal structure to literally run programs in reverse from observations to causes, and predicate exchange, which relaxes Boolean operators to make inference more tractable. 
Collectively, these contributions shrink the gap between human and machine reasoning, as well as serve as practical tools for scientific modelling and inference.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020; Manuscript.; Includes bibliographical references (pages 139-149).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138517</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of IL-17a in the rescue of ASD-like behavioral phenotypes following immune stimulation in a mouse model of neurodevelopmental disorders</title>
<link>https://hdl.handle.net/1721.1/138516</link>
<description>The role of IL-17a in the rescue of ASD-like behavioral phenotypes following immune stimulation in a mouse model of neurodevelopmental disorders
Reed, Michael Douglas.
A subset of children with autism spectrum disorder (ASD) exhibit temporary but significant improvement of their behavioral symptoms during episodes of fever. Fever is an increase in body temperature that is typically the product of the immune response mounted to fight infection. Investigation into the curious connection between fever and the manifestation of ASD behavioral phenotypes is in its infancy, and therefore the mechanisms underlying this phenomenon are unknown. Here, we attempt to understand the neural and molecular mechanisms mediating this connection. The etiology of ASD is heterogeneous, comprising both environmental and genetic risk factors. Therefore, we compared an environmental model of neurodevelopmental disorders in which mice were exposed to maternal immune activation (MIA) during embryogenesis with mice genetically deficient for Cntnap2, Fmr1, and Shank3. We found that activation of the immune system using lipopolysaccharide (LPS) was sufficient to rescue behavioral deficits within the MIA model, but not the monogenic mutant model mice. Behavioral rescue was correlated with reduced hyperactivation in the primary somatosensory cortex dysgranular zone (S1DZ), a region that has been previously shown to be tightly linked to MIA behavioral phenotypes. Consistent with the selective effects of LPS within MIA offspring, this reduction in hyperactivation was unique to MIA offspring. Differences in response to LPS could be explained by reduced IL-17a induction in monogenic mutant mice compared with MIA offspring. Circumventing this difference by directly administering IL-17a into the brain was sufficient to promote sociability in MIA offspring as well as monogenic mutant mice. IL-17a signaling is shown to be critical for the LPS-induced behavioral rescue and reduction in hyperactivity. Genetic removal of its cognate receptor, IL-17Ra, selectively within the S1DZ similarly prevented the ability of LPS to rescue MIA sociability deficits. 
These data support a model in which IL-17a signaling within the S1DZ mediates the behavioral and neurophysiological effects of immune activation in the MIA model.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020; Manuscript. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138516</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enriching models of natural language with auxiliary data</title>
<link>https://hdl.handle.net/1721.1/138515</link>
<description>Enriching models of natural language with auxiliary data
Malmaud, Jonathan Matthew.
The highest-performing natural language processing models generally solve language tasks by deriving statistical regularities of sequences of arbitrary tokens supplied as training data. Humans have a much richer notion of language, however. For one thing, they understand that language refers to objects and actions in the real world, which enables them to use language to efficiently transmit instructions on how to accomplish goals. For another, they learn to focus their attention on only those spans of text important for accomplishing the task at hand. In this thesis, we attempt to improve machine models of language by taking inspiration from these aspects of human language. The first half of this thesis concerns understanding instructional "how-to" language, such as "Add remaining flour. Then mix." The meaning is ambiguous without context: Add how much flour to what? Mix what, using what tools, until when? We show how to successfully parse this language by maintaining a distribution over the state of a theoretical kitchen as the instructions are parsed. We also show how to aid interpretation when videos of the task are available by training a joint vision-language model on over 300,000 YouTube videos on how to cook. The second half discusses taking advantage of people's ability to focus on important parts of a passage in a multiple-choice reading comprehension task to enhance the performance of an automatic question-answering system. We record the gaze location of hundreds of subjects as they read and answer questions about newspaper articles. We then train a state-of-the-art transformer model to predict human attention as well as correct answers and find this leads to a substantial boost in performance over merely training the model to predict correct answers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020; Manuscript.; Includes bibliographical references (pages 81-89).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138515</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a theory for the emergence of grid and place cell codes</title>
<link>https://hdl.handle.net/1721.1/138514</link>
<description>Towards a theory for the emergence of grid and place cell codes
Ma, Tzuhsuan.
This work utilizes theoretical approaches to answer the question: what functions do grid and place cells perform that directly lead to their own emergence? To answer such a question, an approach that goes beyond simple modelling is necessary, given that there could be circuit solutions other than grid or place cells that better perform these functions. With this reasoning, I adopted a systematic guideline that aims for an optimization principle attempting to find the optimal solution for performing the hypothesized functions while reproducing the correct phenomenology. Within the optimization principle framework, I applied both recurrent neural network (RNN) training and coding-theoretic approaches to set up appropriate optimization problems for testing a given function hypothesis. Two descriptive function hypotheses were adopted: 1) grid cells exist to provide a high-capacity and robust path-integrating code, and 2) place cells exist to provide a sequentially learnable and highly separable path-integrating code. The non-converging performance in training an RNN to perform a hard navigation task suggests that the attractor dynamics forbids a network to simultaneously possess online learnability and high coding capacity. Because of this dynamical constraint in learning, a grid cell circuit has to be hardwired through some developmental process and cannot be easily modified by an experience-based synaptic rule without compromising its capacity. Conversely, a place cell circuit able to continually learn novel environments inevitably has merely linear capacity. These results imply that the functional separation of grid and place cell systems observed in the brain could be a result of an unavoidable dynamical constraint from their underlying RNNs. Lastly, a fundamental principle called the tuning-learnability correspondence was uncovered in pursuit of a sequentially learnable neural implementation for place cells. 
It explains that the seemingly incidental existence of the conjunctive tuning property is in fact caused by the metastable attractor dynamics necessary for sequential learnability, rather than by another functional need attached to a particular tuning property. In addition, from the unique property of metastable attractor dynamics, I also predicted that the biased place field propensity recently observed in the CA1 sub-region should originate from CA3 due to an inevitable biased activation in the RNN as a side effect of such a dynamical property. In sum, both this principle and the subsequent prediction thus provide a new perspective that contradicts the conventional wisdom, which often assumes that a certain nonspatial tuning property exists for performing a relevant task.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020; Manuscript.; Includes bibliographical references (pages 227-238).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138514</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the climate variability of tropical cyclone potential intensity and Atlantic hurricane activity</title>
<link>https://hdl.handle.net/1721.1/138368</link>
<description>On the climate variability of tropical cyclone potential intensity and Atlantic hurricane activity
Rousseau-Rizzi, Raphaël
From a meteorological standpoint, the most important question one needs to answer about a given tropical cyclone is how strong the winds generated by the event can become. From a climatological standpoint, it is critical to predict tropical cyclone activity, or the collective destructive potential of all tropical cyclones in a given basin and during a given period. Potential intensity (PI) is defined as a thermodynamic bound on tropical cyclone maximum wind speeds, and is a good predictor of the intensity of a single event but also of tropical cyclone activity. As such, PI is a useful quantity to help answer pressing meteorological and climatological questions about tropical cyclones. First, this thesis addresses recent controversies about whether the PI assumptions of an inviscid free troposphere and a steady state make it inapplicable. Comparing various forms of the PI bound to the corresponding bounded quantities in low-mixing axisymmetric simulations shows that PI is in fact a valid bound on tropical cyclone intensity. Then, a categorization of definitions of tropical cyclone steady state used in the literature is introduced to clarify the conditions in which simulations can be compared to theories such as PI. It is shown that most intensity theories can be compared to the simulated period surrounding peak tropical cyclone intensity, while theories for the structure of the storm require the simulated storm to have come into equilibrium with the surrounding environment. Next, turning to climate, a linear model for interannual basin-wide PI variations is developed, which captures almost all the PI variance in reanalysis products and provides a way to partition global and local contributions to PI variations. The model notably shows that tropical North Atlantic PI variations over the last 40 years have been dominated by local influences. The final part of the thesis evaluates the causes of the Atlantic hurricane drought of the 1970s and 1980s. 
An anthropogenic origin of the hurricane drought is proposed: concurrent, hemispherically asymmetric anthropogenic sulfate emissions caused a drying of the Sahel region and enhanced emissions of eolian dust from the Sahara and the Sahel, which is shown to be detrimental to hurricane activity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, June, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 143-157).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138368</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-ray Micro-Computed Tomography and Deep Learning Segmentation of Progressive Damage in Hierarchical Nanoengineered Carbon Fiber Composites</title>
<link>https://hdl.handle.net/1721.1/138357</link>
<description>X-ray Micro-Computed Tomography and Deep Learning Segmentation of Progressive Damage in Hierarchical Nanoengineered Carbon Fiber Composites
Kopp, Reed Alan
Advanced composite laminates composed of carbon (micro) fiber reinforced polymer (CFRP) have become widespread in modern high-performance aerospace structures, providing high, tailorable mass-specific stiffness and strength. However, while underpinning such performance benefits, CFRP microstructural heterogeneity and mechanical property anisotropy concomitantly give rise to complex damage mechanisms that lead to difficult-to-predict failure, limiting understanding of CFRPs. Progressive damage mechanisms in CFRPs generally encompass a spectrum of modalities, interactions, and sequences across multiple scales, exhibiting broad sensitivity to loading conditions. Dominant damage mechanisms have been identified generally as polymer matrix cracking within (intralaminar) and between (interlaminar, termed ‘delamination’) plies, fiber fracture, fiber bundle microbuckling, and fiber/matrix interfacial debonding. Two emerging solutions aiming to suppress or delay such mechanisms toward enhanced strength and stiffness are considered in this dissertation: (i) aligned carbon nanotube (A-CNT) interlaminar reinforcement (termed ‘nanostitch’) that primarily targets delaminations, and (ii) thin-ply morphology that targets intralaminar cracking and delaminations. Both solutions have demonstrated significant mechanical improvements via standard ex situ tests that lack underlying progressive damage understanding. 
In view of these limitations, this dissertation advances understanding of composite progressive damage by modern ex situ and state-of-the-art in situ X-ray micro-computed tomography (µCT) studies, including advancing experimental techniques via artificial intelligence (AI), in the context of aerospace-grade CFRP strengthening and toughening effects of nanostitch, thin-ply, and their combination.; Synchrotron radiation CT (SRCT) has recently emerged with a focus on conventional composites; here it enables the first in situ 4D (3D spatial plus temporal) insights into these new strengthening/toughening technologies. SRCT enables unprecedented high-resolution, non-destructive observation of the composite volume during loading, revealing full-field damage progression with resolution at the micron- or submicron-scale, though with limited fields-of-view (FoVs) in the mm to cm range. In situ SRCT of scaled-down double edge-notched tension (DENT) tests of quasi-isotropic baseline and nanostitched thin-ply laminates, coupled with extensive human tomographic segmentations, reveals their strong similarity in intralaminar matrix-dominated damage progression. Significant overall intralaminar matrix damage suppression (6.5× less surface area on average) is found in the thin-ply laminates relative to identically configured experiments with conventional (standard ply thickness) aerospace-grade CFRP composites.; At present, objective quantitative mechanistic insights are extremely challenging to extract from the big (∼10 GB/mm³), typically damage-feature-sparse SRCT datasets due to time-intensive, subjective semi-automatic (human-driven) damage segmentation techniques. Thus, a novel deep learning (DL, a sub-field of AI)-based approach to automate µCT segmentation of multiclass microscale damage in composite laminates is developed, leveraging 65,000 (trained) human-segmented tomographic images for machine development. 
Following downselection from 20 hyperparametrically different machines, the selected trained machine is shown to segment complex and sparse (≪1% of image volume) composite damage classes to ∼99.99% agreement, unlocking objectivity and efficiency; nearly 100% of human segmentation time is eliminated. This machine performs as well or better than the human due to inevitable, and ‘machine-discovered’, human error, with machine improvements manifesting primarily as new damage discovery, diffuse segmentation augmentation, and segmentation extension to artifact-rich exterior edges.; Next, a second in situ SRCT study coupled with DENT testing is designed to elucidate nanostitch, ply thickness (via ply blocking of thin-ply laminae), and their combinatory effects on progressive damage in cross-ply laminates, using a 20°-canted loading rig for clear 3D reconstruction of all interior features. An intermediate-thickness-ply baseline laminate (2× thicker than thin-ply, similar to conventional plies) exhibits 8% and 17% higher ultimate tensile strengths (UTSs) compared to thick-ply (4× thicker than thin-ply) and thin-ply baseline laminates, respectively, which is explained by an observed progressive damage mode transition from notch-blunting inter- and intra-laminar matrix damage-dominated (thick-ply) to brittle fiber breakage- and diffuse matrix damage-dominated (thin-ply). The overall highest UTS is achieved by the nanostitched thick-ply laminate (15% increase over baseline), which exhibits an effective combination of notch-blunting intralaminar matrix damage and greatly suppressed interlaminar matrix damage (4.3× less surface area at 90% UTS).; Finally, motivated by prior ambient environmental tests and the need to understand temperature and moisture effects for aerospace applications, hygrothermal effects are investigated for the first time for nanostitched laminates, employing quasi-isotropic conventional-thickness CFRP composites subjected to open-hole compression (OHC). 
The environmental conditions of -55°C/dry and 100°C/dry correspond to statistically significant ultimate strength increases of 2.6% and 5.9%, respectively, for nanostitched laminates over the baseline; however, room temperature/50% relative humidity (RT/50%RH) and 100°C/wet conditions exhibit no ultimate strength change, suggesting correlation of positive nanostitch effects with polymer brittleness. The progressive damage effects associated with nanostitch and RT/50% RH and 100°C/dry conditions are characterized via step-wise ex situ lab-based µCT of large FoVs (mm-scale) in an interrupted loading campaign, revealing similar 0° ply-dominated matrix damage at up to 98% of ultimate strength in the baseline and nanostitched specimens.; This dissertation establishes new understanding of 3D strengthening and toughening mechanisms associated with progressive matrix damage via novel in situ SRCT- and ex situ µCT-based testing of aerospace-grade CFRP laminates in multiple configurations. DL is found to be a disruptive approach to quantitative and objective structure-property characterization, enabling high-throughput, generalizable, ultra-high-resolution damage segmentation of heterogeneous materials with complex hierarchical architectures for the first time. Altogether, these techniques and findings enable and suggest future studies, including toughening and strengthening mechanisms at nearer-failure in situ CT load steps, nanostitched laminates subjected to expanded hygrothermal loading configurations, and AI machines capable of segmenting additional damage classes (e.g., fiber breaks), as vital steps toward rational optimization and prediction of emerging hierarchical composite mechanical properties, with application to layered, bio-inspired, and biological heterogeneous materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 343-379).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138357</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconstructing neurons from serial section electron microscopy images</title>
<link>https://hdl.handle.net/1721.1/133076</link>
<description>Reconstructing neurons from serial section electron microscopy images
Lee, Kisuk,
            Ph. D.
            Massachusetts Institute of Technology.
Neuronal connectivity can be reconstructed from a 3D electron microscopy (EM) image of a brain volume. A challenging and important subproblem is the segmentation of the image into neurons. For the past decade, convolutional networks have been used for 3D reconstruction of neurons from EM brain images. In this thesis, we develop a set of deep learning algorithms based on convolutional nets for automated reconstruction of neurons, with particular focus on highly anisotropic images of brain tissue acquired by serial section EM (ssEM). In the first part of the thesis, we propose a recursively trained hybrid 2D-3D convolutional net architecture, and demonstrate the feasibility of exploiting 3D context to further improve boundary detection accuracy despite the high anisotropy of ssEM images. In the following parts, we propose two techniques for training convolutional nets that can substantially improve boundary detection accuracy. First, we introduce novel forms of training data augmentation based on simulation of known types of image defects such as misalignments, missing sections, and out-of-focus sections. Second, we add the auxiliary task of predicting affinities between nonneighboring voxels, reflecting the structured nature of neuronal boundary detection. We demonstrate the effectiveness of the proposed techniques on large-scale ssEM images acquired from the mouse primary visual cortex. Lastly, we take a radical departure from simple boundary detection by exploring an alternative approach to object-centered representation, that is, learning dense voxel embeddings via deep metric learning. Convolutional nets are trained to generate dense voxel embeddings by assigning similar vectors to voxels within the same objects and well-separated vectors to voxels from different objects. Our proposed method achieves state-of-the-art accuracy with substantial improvements on very thin objects.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Computation, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 155-167).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/133076</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of self-organization in planarian regeneration</title>
<link>https://hdl.handle.net/1721.1/133075</link>
<description>Mechanisms of self-organization in planarian regeneration
Atabay, Kutay Deniz.
There is an unbreakable link between shape and function. In biology, the architecture of cells, tissues, and organisms, which has evolved in adaptation to the surrounding world, translates into specific functional outcomes. Self-organization is an adaptive, non-linear and dynamic process, where diverse ordered patterns emerge from an initially disordered and noisy state through local interactions between the elements of a system. This can give rise to fascinating biological diversity and functional complexity in such systems. Unwavering storms on the surface of Jupiter, patterns on the wing of a butterfly, a regenerating planarian eye, and the development of a neuronal circuit in the human brain can all be studied systematically using the conceptual tools derived from the field of self-organization. Here, I sought to address a central, but understudied, problem in animal regeneration: How do regenerative progenitors organize into complex replacement structures in the context of adult anatomy? I used planarians as a system for studying regenerative progenitors and focused on eye regeneration to elucidate the mechanisms. I found that self-organization has a major role in determining the behavior of regenerative progenitors. This work revealed three properties that govern regenerative progenitor behavior, and these three properties in concert explain many previously mysterious aspects of how regeneration works: (i) self-organization, (ii) an extrinsic migratory target for progenitors, and (iii) a broad progenitor specification zone that allows progenitors to be targeted into self-organizing systems even if they are transiently in incorrect locations during the process of regeneration. These components yield a model with broad explanatory and predictive power. As an example, we were able to generate wild-type animals with 3, 4, or 5 eyes instead of 2 by simple manipulations of the system using the model developed. 
Remarkably, the extra eyes were stably maintained throughout the life of the animal, resulting in wild-type animals with an alternative and stable anatomical state. This model prominently incorporates self-organizing principles, which have been little explored in regeneration. The new conceptual model with broad explanatory power allowed us to address some of the fundamental previous mysteries of regeneration.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/133075</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organizations, technology, and weapons acquisition : the development of the infantry fighting vehicle</title>
<link>https://hdl.handle.net/1721.1/132989</link>
<description>Organizations, technology, and weapons acquisition : the development of the infantry fighting vehicle
Kaufman, Daniel Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1983; Bibliography: leaves 519-540.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132989</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging and controlling atoms and semiconductor spins with advanced optical microscopy</title>
<link>https://hdl.handle.net/1721.1/132988</link>
<description>Imaging and controlling atoms and semiconductor spins with advanced optical microscopy
Kim, Donggyu.
Technologies based on the rules of quantum mechanics promise to dramatically outperform their classical counterparts. Atoms and atom-like semiconductor spins are outstanding quantum objects in which such quantum technologies are implemented. In developing quantum systems, optical microscopy is central to controlling these quantum objects with their distinct atom-photon interactions, which enable quantum state preparation, manipulation, and detection with high spatial resolution. However, the conventional capabilities of optical microscopes often limit advances in quantum science and technologies based on atoms and atom-like semiconductor spins. In this thesis, I present new approaches to extend such optical microscopes' capabilities for advanced optical imaging and quantum control. In particular, my research focuses on innovating optical microscopy with (i) quantum reference beacons that enable optical super-resolution beyond conventional imaging depth [1], (ii) engineered microscope substrates with very-large-scale-integrated electronics for compact and scalable semiconductor spin control [2], and (iii) high-throughput coherent structured illumination for controlling ultracold neutral atom arrays [3]. Optical microscopy has allowed revolutionary applications from life sciences to semiconductor industries. The field of microscopy is now undergoing another revolution as it is combined with quantum technologies that open entirely new possibilities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, June, 2019; Cataloged from the PDF version of thesis. "June 2019."; Includes bibliographical references (pages 123-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132988</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering membrane-selective antibiotic peptides to combat multidrug resistance</title>
<link>https://hdl.handle.net/1721.1/132987</link>
<description>Engineering membrane-selective antibiotic peptides to combat multidrug resistance
Mourtada, Rida.
Antibiotic resistance is a global health emergency that mandates new drug development strategies. Natural antimicrobial peptides (AMPs) have long been recognized as a potential source of bacteriolytic drugs, but the shortcomings of non-specific membrane toxicity, proteolytic instability, and in vivo toxicity have stymied their clinical translation. Here, we subjected expansive stapled-peptide libraries of the magainin II (Mag2) AMP to structure-function analyses and uncovered the biophysical and mechanistic determinants that allow for the rational design of stapled AMPs (StAMPs) that are bacterial-membrane selective, proteolytically stable, well tolerated in mice upon intravenous administration, and, most importantly, able to overcome even the most antibiotic-resistant bacteria, including colistin-resistant A. baumannii and mobilized colistin resistance plasmid-bearing E. coli. Specifically, we discovered that the topographic continuity and strength of hydrophobic networks, in the context of alpha-helical amphipathic cationic peptides, dictate both the selectivity and mechanism of membrane lysis. We further harnessed our results to develop an algorithm for the design of a new generation of non-toxic, bacterial-selective StAMPs for clinical development.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, June, 2018; Cataloged from the official PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132987</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New strategies for in vivo continuous directed evolution</title>
<link>https://hdl.handle.net/1721.1/132986</link>
<description>New strategies for in vivo continuous directed evolution
Papa, Louis John,
            III.
Continuous in vivo directed evolution facilitates the exploration of large biomolecule libraries at unprecedented speeds. By inserting a biomolecule of interest into a constantly mutating virus whose replicative capacity has been rendered dependent on the desired activity of that biomolecule, the once-separate mutagenesis, selection, and amplification steps of directed evolution are integrated into one simultaneous, self-sustaining process. This strategy has the potential to greatly accelerate the study of evolutionary hypotheses and the development of new biotechnologies. Unfortunately, current iterations are difficult to implement and largely restricted to E. coli. Furthermore, mutation rates are limited by the lack of simple mutagenesis methods that can focus mutations to desired portions of a viral genome. In this thesis, I describe the development of several new continuous in vivo directed evolution strategies and tools that overcome current limitations and expand the methodology to human cells. Using a generalizable adenovirus-based continuous directed evolution system, we evolved, directly in human cells, multiple variants of the tTA transcription factor that gained resistance to their small molecule inhibitor and piloted selection couples for evolving complex biomolecule activities. This system enables the continuous directed evolution of biomolecules that are important to human health and that function within complex networks that are absent in E. coli. Furthermore, biotechnologies developed directly in mammalian cells are more likely to have optimal function than biomolecules that are evolved in E. coli and then transferred to the mammalian cellular context. We also developed an in vivo targeted mutagenesis method that focuses mutations to a carefully defined DNA region of variable size. Using fusions of various DNA-damaging enzymes and the T7 RNA polymerase, we achieved high mutation rates without the usual toxicity associated with off-target mutagenesis.
We expect this mutagenesis technique to be applicable across a wide variety of organisms and particularly useful for viral-based continuous evolution platforms. Finally, we are currently developing a new continuous evolution strategy for use in E. coli cells utilizing the lambda bacteriophage. If successful, this system would be much easier to monitor and tune than previous systems, and would expand the biomolecule cargo capacity by an order of magnitude.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2020; Cataloged from the PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132986</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering arterial substitutes that recapitulate vessel microstructure and mimic native physiological responses</title>
<link>https://hdl.handle.net/1721.1/132985</link>
<description>Engineering arterial substitutes that recapitulate vessel microstructure and mimic native physiological responses
Miranda-Nieves, David.
Engineering small caliber (&lt; 6 mm) arterial grafts remains an unsolved problem. Current synthetic and autologous grafts suffer from short- and long-term limitations including decreased patency rates, risk of bacterial infection, and compliance mismatching that results in neointimal hyperplasia. Tissue engineering is seen as a solution; however, a true arterial replacement remains elusive. Despite the numerous publications that have appeared over the last three decades, most reported strategies mimic functional and structural arterial properties to a limited extent. Furthermore, these strategies require long maturation times before implantation and carry the risk of failure in patients, who are often elderly with multiple comorbidities. Our central hypothesis was that living arterial substitutes that display normal physiological responses after in vivo implantation can be engineered through the controlled assembly of vascular cells and free-standing collagen sheets of controlled fibril orientation in a manner that recapitulates native vessel microstructure. We first present a scalable and continuous strategy for generating strong, free-standing, ultrathin, and centimeter-wide collagen sheets with controlled anisotropy using a flow-focusing approach. This strategy represents the first of its kind to generate anisotropic collagen sheets with control over nano- and macromolecular properties. Next, the controlled assembly of vascular cells and free-standing collagen sheets allowed us to design living blood vessels that recapitulated the arterial wall microstructure, and structural, mechanical, and biological characterization confirmed mimicry of native physiological properties. We believe that the scalable fabrication schemes and thorough characterization techniques presented here will serve as a strong reference for future blood vessel tissue engineering efforts.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 107-118).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132985</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parametric design and performance validation of low-cost, low-pressure drip emitters and irrigation systems</title>
<link>https://hdl.handle.net/1721.1/132984</link>
<description>Parametric design and performance validation of low-cost, low-pressure drip emitters and irrigation systems
Sokol, Julia A.
            (Julia Alexandrovna)
This thesis proposes and validates methods to reduce the cost and energy use of drip irrigation systems, with the aim of increasing their adoption among smallholder farmers. By 2050, the growing world population will require a 55% increase in food production above 2010 levels. Yet, agriculture already places a large strain on the earth's resources, occupying 47% of habitable land area and comprising 70% of freshwater withdrawals. Thus, agricultural intensification needs to occur through increased efficiency, rather than increased resource consumption. While irrigation is an effective means to increase food production over rainfed land, traditional surface and overhead irrigation systems--such as flood, furrow, and sprinkler--have low water use efficiencies. Drip irrigation, which distributes water through a pressurized pipe network and slowly releases it through emitters in the immediate root zone of each crop, has been shown to increase water efficiency by 25-65% over flood or furrow irrigation. However, adoption of drip irrigation is limited by several factors, including high initial cost compared to conventional practices. To address the cost barrier to drip irrigation adoption, this work focuses on modeling, designing, and validating drip components and systems that operate at low pressures, reducing energy consumption and the costs of pumps and power systems. These savings are enabled by pressure-compensating (PC) emitters--which maintain a constant flow rate with variations in pressure--specifically designed for low-pressure operation. The first part of this thesis experimentally validates the ability of low-pressure PC online emitters (used for tree crops) designed by the MIT Global Engineering and Research Lab to reduce pumping power and energy in a series of field trials in the Middle East and North Africa. 
With a minimum operating pressure of 0.15 bar, these online emitters are shown to reduce pumping energy by at least 43% compared to commercial emitters with higher operating pressures, without compromising water distribution uniformity. The next section focuses on the design of low-pressure PC inline emitters (used for vegetable crops), which are bonded to the interior of irrigation tubing. While inline emitters are manufactured widely, their design in industry occurs largely by trial and error, which may limit product performance. To address this gap, this section presents a new, fully analytical, parametric model for predicting the activation pressure and flow rate of typical inline PC emitters from their geometry and the material properties of the membrane. The model's utility is demonstrated by systematically redesigning a commercial emitter to reduce its minimum compensating pressure from 0.4 bar to 0.15-0.25 bar, depending on the membrane used, while maintaining a similar flow rate. The last section of this thesis places low-pressure emitter designs in a system-level context to evaluate their impact and suggest further research directions. Concurrently, it presents a flexible, parametric model for designing cost-optimal drip irrigation systems with grid and off-grid power sources for any farm location, size, and crop. When applied to case studies representative of typical farms in Morocco, the model shows potential reductions of up to 20% in initial cost and up to 9% in lifetime system cost with optimized low-pressure drip systems, compared to conventional system designs. The results are used to identify and recommend opportunities for further system cost reduction.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132984</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically programmable surfaces for high-speed optical modulation and detection</title>
<link>https://hdl.handle.net/1721.1/132983</link>
<description>Dynamically programmable surfaces for high-speed optical modulation and detection
Peng, Cheng,
            Ph. D.
            Massachusetts Institute of Technology.
Dynamically programmable surfaces for spatiotemporal control of light are crucial to many optoelectronic technologies including high-speed optical communication, display and projection, autonomous driving, optical information processing, imaging, and fast programmable optical tweezers. Currently available electro-optically tunable components are often bulky, inefficient, and have limited operation speeds. This thesis describes the development of a compact, high-speed, electro-optic spatial light modulator (SLM) architecture based on two-dimensional arrays of tunable microcavities. Optimized microcavity designs can enable high-speed, high diffraction efficiency SLMs with standard-CMOS-compatible driving voltages. An electro-optic material, graphene, is also investigated in detail. A graphene carrier density spatiotemporal modulation technique is proposed and experimentally validated. This technique enables the demonstration of a compact graphene thermopile in the mid-infrared wavelengths and paves the way for future implementations of graphene plasmonic metasurfaces.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 123-133).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132983</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of forward genetic screens to lncRNAs, cancer immunotherapy, and cellular engineering</title>
<link>https://hdl.handle.net/1721.1/132982</link>
<description>Applications of forward genetic screens to lncRNAs, cancer immunotherapy, and cellular engineering
Joung, Julia.
Forward genetic screens are powerful tools for the unbiased discovery and functional characterization of specific genetic elements associated with a phenotype of interest. By perturbing thousands of genes simultaneously and selecting for a desired phenotype, genetic features can be systematically mapped to phenotypic changes. Recently, CRISPR-Cas9 has emerged as a powerful genetic perturbation technology, opening up new opportunities for forward genetic screens. In this thesis, I present work to advance this approach and demonstrate its application in a range of contexts relevant to human health. We first established a detailed CRISPR-Cas9 screening protocol that outlines experimental design considerations. We then applied this methodology to develop a CRISPR toolkit for screening and characterizing long non-coding RNAs in the human genome, many of which remain uncharacterized. We identified the EMICERI locus as a regulator of four neighboring genes, one of which conferred resistance to a melanoma therapeutic. We next sought to use CRISPR activation screening to gain insight into the cellular processes that govern tumor resistance to immunotherapy. We identified four candidate genes in our screen, which we validated in diverse cancer cell types and explored through mechanistic studies, leading to the discovery of novel immunotherapy resistance pathways. Finally, we developed a pooled transcription factor (TF) screening platform that provides a generalizable approach for studying cellular programming. We created a comprehensive human TF library and applied it to identify TFs that can drive differentiation of embryonic stem cells toward neural cell fates. We discovered that one TF, RFX4, leads to differentiation of neural progenitors that produce inhibitory neurons, providing an efficient method for generating this important cell type.
During the COVID-19 pandemic, we paused the screening work and developed a streamlined SARS-CoV-2 detection assay, STOPCovid, suited for low-complexity settings. STOPCovid combines viral RNA concentration with isothermal amplification and CRISPR-mediated detection. STOPCovid achieved a sensitivity and specificity of 93.1% and 98.5%, respectively, on patient samples. Together, our applications of forward genetic screens address diverse problems in human health and broadly demonstrate the potential of this approach for systematically interrogating genetic elements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 185-197).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132982</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and catalyst design for heterogeneous olefin metathesis</title>
<link>https://hdl.handle.net/1721.1/132981</link>
<description>Mechanisms and catalyst design for heterogeneous olefin metathesis
Consoli, Daniel Francis.
Olefin metathesis offers a promising technology for on-purpose production of propylene and has been implemented in several industrial processes including the Lummus Olefin Conversion Technology. However, current metathesis technology cannot produce propylene cheaply and efficiently due to reliance on decades-old catalyst technology like tungsten oxide on silica. This poor performance can be attributed to a lack of understanding of heterogeneous metathesis reaction mechanisms and the catalysts involved. Current understanding proposes that olefin metathesis follows the Chauvin mechanism, in which olefins coordinate with catalytic metal carbenes to form metallacyclobutanes that then rearrange into metathesis products. While this mechanism has been proven for homogeneous metathesis catalysts, for which the 2005 Nobel Prize was awarded, the mechanism over heterogeneous catalysts may be more complicated, as metal carbenes do not necessarily exist over freshly prepared material. In this thesis, we present a complete mechanistic cycle for heterogeneous olefin metathesis that comprises carbene site formation, stable metathesis, and active site decay. We demonstrate why this expanded mechanism is necessary to explain reaction order behavior for propylene on WO₃/SiO₂ and the presence of site formation byproducts during isobutene self-metathesis. Furthermore, we leverage our knowledge of the complete mechanism to introduce a secondary olefin to promote active site formation and, consequently, the overall metathesis rate. Finally, we utilize our understanding of the importance of site formation to synthesize catalysts supported on tunable zeolite and metal-organic framework materials that optimize metal-support interaction and metathesis rates. In sum, an improved understanding of the heterogeneous olefin metathesis mechanism has led to improvements in catalytic design, material characterization, and reactor operation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, June, 2019; Cataloged from PDF version of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available"--Disclaimer page.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132981</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Increasing the optical transparency of a living mouse brain (and other nanotechnologies)</title>
<link>https://hdl.handle.net/1721.1/132980</link>
<description>Increasing the optical transparency of a living mouse brain (and other nanotechnologies)
Gupta, Ishan.
Many methods for increasing the optical transparency of non-living brain tissue have come into widespread use because of their utility in enabling better anatomical brain imaging. In the first part of this thesis, we explore whether this is also possible for living brain tissue. We report a general principle for doing so, namely the reduction of refractive index mismatch between cellular membranes and the extracellular space, using high refractive index biocompatible reagents that have high molecular weights, so that they can be used at low concentrations. We implement this via multiple reagents that satisfied these criteria, including the iodinated radiocontrast agent iodixanol, high molecular weight polyethylene glycol (PEG), high molecular weight Dextran, and PEG-ylated silicon nanoparticles. We achieve ~2x increases in the brightness of cells expressing red fluorescent proteins in vivo in mice, as measured by conventional one-photon epifluorescence imaging, using concentrations of reagents that increased the refractive index of the extracellular space by just 0.01. Lastly, we show that Dextran does not have a statistically significant effect on neural physiology or neural network properties. We expect such strategies to not only facilitate live imaging of the brains of mice and other mammals, but also open up a new class of strategies for changing the electromagnetic properties of living systems. We conclude this thesis with two nanotechnologies that may be leveraged for making higher performance reagents for increasing the optical transparency of living brain tissue. (1) A method for the synthesis of high-yield and high-monodispersity nanoparticles of a variety of materials with tailored surface ligands, using common benchtop equipment. This method may be useful for developing nanoparticles with better biosafety, efficacy, and performance. (2) A method for the delivery of hydrophobic NVNDs to neural cell membranes using PEG-ylated liposomes.
These PEG-ylated liposomes may be used to deliver hydrophilic nanoparticles to neural soma and achieve maximal transparency.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2019; Cataloged from the PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132980</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian learning for high-dimensional nonlinear dynamical systems : methodologies, numerics and applications to fluid flows</title>
<link>https://hdl.handle.net/1721.1/132760</link>
<description>Bayesian learning for high-dimensional nonlinear dynamical systems : methodologies, numerics and applications to fluid flows
Lin, Jing,
            Ph. D.
            Massachusetts Institute of Technology.
The rapidly-growing computational power and the increasing capability of uncertainty quantification, statistical inference, and machine learning have opened up new opportunities for utilizing data to assist, identify and refine physical models. In this thesis, we focus on Bayesian learning for a particular class of models: high-dimensional nonlinear dynamical systems, which have been commonly used to predict a wide range of transient phenomena including fluid flows, heat transfer, biogeochemical dynamics, and other advection-diffusion-reaction-based transport processes. Even though such models often express the differential form of fundamental laws, they commonly contain uncertainty in their initial and boundary values, parameters, forcing and even formulation. Learning such components from sparse observation data by principled Bayesian inference is very challenging due to the systems' high-dimensionality and nonlinearity. We systematically study the theoretical and algorithmic properties of a Bayesian learning methodology built upon previous efforts in our group to address this challenge. Our systematic study breaks down into the three hierarchical components of the Bayesian learning and we develop new numerical schemes for each. The first component is on uncertainty quantification for stochastic dynamical systems and fluid flows. We study dynamic low-rank approximations using the dynamically orthogonal (DO) equations including accuracy and computational costs, and develop new numerical schemes for re-orthonormalization, adaptive subspace augmentation, residual-driven closure, and stochastic Navier-Stokes integration. The second part is on Bayesian data assimilation, where we study the properties of and connections among the different families of nonlinear and non-Gaussian filters. 
We derive an ensemble square-root filter based on minimal-correction second-moment matching that works especially well under the adversity of small ensemble size, sparse observations and chaotic dynamics. We also obtain a localization technique for filtering with high-dimensional systems that can be applied to nonlinear non-Gaussian inference with both brute force Monte Carlo (MC) and reduced subspace modeling in a unified way. Furthermore, we develop a mutual-information-based adaptive sampling strategy for filtering to identify the most informative observations with respect to the state variables and/or parameters, utilizing the sub-modularity of mutual information due to the conditional independence of observation noise. The third part is on active Bayesian model learning, where we have a discrete set of candidate dynamical models and we infer the model formulation that best explains the data using principled Bayesian learning. To predict the observations that are most useful to learn the model formulation, we further extend the above adaptive sampling strategy to identify the data that are expected to be most informative with respect to both state variables and the uncertain model identity. To investigate and showcase the effectiveness and efficiency of our theoretical and numerical advances for uncertainty quantification, Bayesian data assimilation, and active Bayesian learning with stochastic nonlinear high-dimensional dynamical systems, we apply our dynamic data-driven reduced subspace approach to several dynamical systems and compare our results against those of brute force MC and other existing methods. Specifically, we analyze our advances using several drastically different dynamical regimes modeled by the nonlinear Lorenz-96 ordinary differential equations as well as turbulent bottom gravity current dynamics modeled by the 2-D unsteady incompressible Reynolds-averaged Navier-Stokes (RANS) partial differential equations. 
We compare the accuracy, efficiency, and robustness of different methodologies and algorithms. With the Lorenz-96 system, we show how the performance differs under periodic, weakly chaotic, and very chaotic dynamics and under different observation layouts. With the bottom gravity current dynamics, we show how model parameters, domain geometries, initial fields, and boundary forcing formulations can be identified and how the Bayesian methodology performs when the candidate model space does not contain the true model. The results indicate that our active Bayesian learning framework can better infer the state variables and dynamical model identity with fewer observations than many alternative approaches in the literature.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 553-567).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132760</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Patents over planning : industrial capital and federal innovation policy</title>
<link>https://hdl.handle.net/1721.1/132757</link>
<description>Patents over planning : industrial capital and federal innovation policy
Traficonte, Daniel
            (Daniel Martin)
In recent years, scholars from a range of disciplines have analyzed the collective set of federal R&amp;D programs as a high-tech-oriented industrial policy through which the US government actively targets certain economic sectors over others for state support. Analysts have emphasized one dominant institutional feature of this system: federal R&amp;D programs lack a central planning mechanism, and are instead highly fragmented and ad hoc. While some analysts have interpreted this institutional structure as a strength, others view the absence of R&amp;D planning as a major shortcoming, a view shared by policymakers advocating for increased coordination of federal R&amp;D programs in order to help combat economic and environmental challenges. This study examines the origins and institutional evolution of federal innovation policy, and in doing so, probes possibilities for future reform. My account focuses primarily on the business-state nexus as an explanatory factor, emphasizing the role of politically active industrial firms in shaping the system's legal and institutional structure. I argue that R&amp;D-based industrial firms were opposed to proposals for R&amp;D planning, but only insofar as these proposals also threatened a separate institutional feature to which these firms were more firmly committed: the transfer of patent rights resulting from government-led R&amp;D projects into private hands. During the New Deal and into the immediate postwar period, the link between patent reform and innovation planning prompted industrial firms to lead the attack against progressive calls for a more coordinated R&amp;D system. When government patent policy became decoupled from planning during the Space Race and eventually led to a new consensus on "technology transfer," industrial firms shifted in favor of R&amp;D planning but by that time saw their political influence substantially reduced.
The neoliberal business coalition lobbied instead for increasingly fragmented one-off programs to promote specific high-tech fields--a "hidden developmental state" that would remain intact until the present. From this perspective, the structure of the federal R&amp;D system is more a result of a conflict over property than over planning, and the institutional link between coordination and government patent policy may frustrate future attempts to finally realize planned innovation in the US.
Thesis: Ph. D. in Political Economy, Massachusetts Institute of Technology, Department of Urban Studies and Planning, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132757</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can housing policy address spatial inequality? : innovations in policy and politics to expand access to opportunity neighborhoods</title>
<link>https://hdl.handle.net/1721.1/132756</link>
<description>Can housing policy address spatial inequality? : innovations in policy and politics to expand access to opportunity neighborhoods
Kelly, Nicholas F.,
            1987-; Gould Ellen, Ingrid.
While research has demonstrated that low-poverty neighborhoods can improve economic outcomes for low-income children, policymakers have few scalable solutions to help families access those areas. In this dissertation, I present three innovations in policy and politics aimed at improving access to opportunity neighborhoods. First, with Ingrid Gould Ellen, I argue for a streamlined measure of neighborhood opportunity we call the School-Violence-Poverty (SVP) Index, based on the three metrics that are most strongly associated with positive outcomes among children. We combine it with data on rental prices in New York City and Greater Boston to identify "opportunity bargain" areas that have lower rents than expected given their high ratings on measures of school quality, low levels of violent crime, and low poverty rates. We find that rents capitalize a wide assortment of amenities unrelated to opportunity, such as access to restaurants, while in some cases undervaluing opportunity neighborhoods. Second, I evaluate the impact of three policy changes on increasing access to opportunity: rental subsidies set at the ZIP Code level, a randomized controlled trial of a housing mobility counseling program, and a randomized controlled trial of a housing search tool that provides customized neighborhood recommendations based on public transit access, school quality, and public safety preferences. I find that rental subsidy changes were associated with higher numbers of moves to areas with better schools, as well as with a higher percentage of families moving to areas with high-performing schools and low rates of violent crime and poverty. I also find that the housing mobility counseling program increased access to areas with lower violent crime rates, and that the housing search tool helped those in the treatment group already interested in moving to high-opportunity areas move to significantly higher-opportunity neighborhoods. Third, I ask: how do city agencies implement regional policies? 
I propose a theory of urban bureaucratic policy implementation that argues that city agencies are an important vehicle for the implementation of regional policies due to their bureaucratic autonomy. I focus on two strategies these agencies use to facilitate implementation: reframing regional policy to align with the city's interest, and redesigning policy to reduce political opposition. I test the theory by examining the implementation of "housing mobility" programs that help low-income families move to areas of opportunity in the United States, finding that reframing housing mobility from a desegregation policy to an upward economic mobility strategy facilitated implementation of regional policies by recasting it in the city's interest. I end by reflecting on paradoxical conclusions for democratic accountability, given that agencies less accountable to city leaders may in fact be more responsive to society by enacting policy to benefit the regional good.
Thesis: Ph. D. in Public Policy and Urban Planning, Massachusetts Institute of Technology, Department of Urban Studies and Planning, February, 2021; Cataloged from the official PDF of thesis. Thesis contains 3 articles.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132756</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced striatal glutamatergic function upon chronic antipsychotic action</title>
<link>https://hdl.handle.net/1721.1/132750</link>
<description>Enhanced striatal glutamatergic function upon chronic antipsychotic action
Vernon, Amanda,
            Ph. D.
            Massachusetts Institute of Technology.
Schizophrenia is a psychiatric disorder characterized by multiple clusters of symptoms, including positive symptoms, such as hallucinations and delusions, negative symptoms, such as decreased motivation and flattened affect, and cognitive symptoms, such as memory impairment and impaired executive function. Currently available antipsychotics mitigate some symptoms of schizophrenia, particularly the positive symptoms, but there is no preventive treatment or cure after schizophrenia develops. Efforts to generate more effective antipsychotics are made particularly challenging by the fact that the therapeutic effect of currently prescribed antipsychotics is not well understood and the cell type(s) and brain circuits crucial for beneficial effects have not been conclusively identified. Here we show that chronic antipsychotic administration enhances glutamatergic function in the ventral striatum through translational alterations and increased synaptic function. Cell type-specific mRNA profiling on spiny projection neurons (SPNs) of the direct (dSPNs) and indirect (iSPNs) pathways following chronic antipsychotic administration revealed cell type-specific molecular alterations indicating increases in components of the glutamatergic postsynaptic density. Subsequent functional experiments demonstrated the presence of calcium-permeable AMPARs and increased mEPSC frequency following chronic administration of one especially effective antipsychotic, clozapine. Furthermore, we find that striatal astrocytes also respond to chronic antipsychotic treatment with translational alterations promoting synaptogenesis. Together, these data identify a core molecular signature of increased glutamatergic transmission in the striatum induced by chronic antipsychotic treatment. This work provides evidence that effective antipsychotics address a lack of glutamatergic drive into the striatum in cases of schizophrenia. 
Additionally, it suggests that drug development efforts seeking improved antipsychotics may benefit from prioritizing compounds whose core function is to increase glutamatergic drive into the striatum.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 186-208).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132750</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-sensory gamma stimulation ameliorates Alzheimer's-associated pathology and improves cognition</title>
<link>https://hdl.handle.net/1721.1/132749</link>
<description>Multi-sensory gamma stimulation ameliorates Alzheimer's-associated pathology and improves cognition
Martorell, Anthony J.,
            Ph. D.
            (Anthony James)
            Massachusetts Institute of Technology.
Changes in gamma activity (30-90 Hz) have been observed in humans and in animal models of Alzheimer's disease (AD). Examining the relationship between gamma oscillations and disease pathology is a significant problem in neuroscience. Recent work using a non-invasive light flicker at 40 Hz, termed Gamma ENtrainment Using Sensory stimulus, or 'GENUS', was shown to impact pathology in the visual cortex of AD mouse models. However, it is not known whether other sensory modalities at 40 Hz can change pathology in higher-order brain regions, or affect cognition, in AD-like animal models. In this thesis, I combine in vivo electrophysiology, biochemical and imaging techniques, and behavioral assays to understand the effects of multi-sensory gamma stimulation in AD-like animals. I first show that auditory tone stimulation at 40 Hz (auditory GENUS) can drive gamma-frequency neural activity in auditory cortex (AC) and hippocampal CA1. I then demonstrate that seven days of auditory GENUS results in improved spatial and recognition memory and reduced amyloid load in AC and hippocampus of 5XFAD mice. These changes in activation responses were evident in microglia, astrocytes, and vasculature. Additionally, auditory GENUS reduced phosphorylated tau in the tau P301S model. Finally, I demonstrate that combined auditory and visual GENUS, but not either alone, decreases amyloid and produces a microglial-clustering response in the medial prefrontal cortex. Whole-brain analysis using SHIELD processing revealed widespread reduction of amyloid plaques throughout neocortex after multi-sensory GENUS. These findings suggest that GENUS can be achieved through multiple sensory modalities, with wide-ranging effects across multiple brain areas that improve cognitive function.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis. Page 123 blank.; Includes bibliographical references (pages 115-122).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132749</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Color : functional organization and behavior</title>
<link>https://hdl.handle.net/1721.1/132748</link>
<description>Color : functional organization and behavior
Lafer-Sousa, Rosa.
Color is a fundamental aspect of visual experience that confers a myriad of behavioral advantages: finding objects in cluttered scenes, recognizing familiar objects, and gleaning information about the material composition and state of objects (e.g. the edibility of fruit) and agents in the world (e.g. health or emotional status). As famously pointed out by Marr (1980), a full understanding of perception requires an analysis of the computations performed, the algorithms that carry out those computations, and the implementation of those algorithms in the physical hardware of the brain. This thesis employs psychophysical methods and functional imaging to tackle questions about human color vision at all three levels: what it is used for, how we solve the classic problem of color constancy, and how our color processing machinery is functionally organized in the brain. Chapter 1 provides a brief survey of the background to these questions. Chapter 2 describes functional MRI studies in humans that find both segregation and convergence of the processing of color and shape in the brain, as well as evidence for the homology of the color system between humans and macaques. Chapter 3 uses psychophysics and a recently discovered ambiguous color stimulus ('#theDress') to investigate the cues and assumptions used by the human visual system to constrain the classic ill-posed problem of inferring the intrinsic reflectance of an object by discounting the spectral properties of the illuminant. Specifically, these studies find evidence that color constancy is mediated by sensory, perceptual, and cognitive factors (i.e., low-level features, inferences about 3D scene geometry, prior knowledge, and attention), and provide the first evidence that human skin is a sufficient cue to infer the illuminant and bring about color constant percepts. Chapter 4 uses psychophysics to evaluate the impact of memory on the color appearance of familiar objects and faces. 
The study finds a novel perceptual illusion that reveals the role of memory for face color in perceptual experience and social communication, sheds light on the selective pressures for the evolution of trichromatic vision in primates, and demonstrates the powerful ability of cognition to influence perception. Taken together, these studies provide clues about the perceptual and neural mechanisms underlying our rich experience of a colorful world.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132748</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding experience-dependent plasticity of cellular and network activity in the mouse primary visual cortex</title>
<link>https://hdl.handle.net/1721.1/132747</link>
<description>Understanding experience-dependent plasticity of cellular and network activity in the mouse primary visual cortex
Kim, Taekeun,
            Ph. D.
            Massachusetts Institute of Technology.
Sensory experiences in daily life modulate the corresponding primary sensory cortices and eventually alter our behavior accordingly. One of the most impactful sensory modalities is vision. Primary visual cortex (V1) in mammals is particularly malleable during a juvenile critical period, but this plasticity persists into adulthood. A representative form of visual cortical plasticity is ocular dominance (OD) plasticity following temporary monocular deprivation (MD). Here, we used a mouse model of amblyopia and revealed that juvenile OD plasticity, which manifests as depression of the response to the deprived eye, requires expression of an immediate early gene, Arc. The juvenile OD shift also requires the activity of N-methyl-D-aspartate (NMDA) receptors in layer 4 excitatory principal neurons in V1. Another simple but powerful form of adult visual cortical plasticity is stimulus-selective response potentiation (SRP). SRP is induced simply through repeated exposure to the same grating visual stimulus over days, resulting in potentiation of visually evoked potentials (VEPs) in layer 4 of V1. Because the cellular and network activity changes coincident with the induction of SRP have not been studied, we used mice expressing a calcium indicator to visualize cellular activity across days of SRP training. Using two-photon calcium imaging, we found that there is indeed no significant net change in the population of active neurons during presentation of the familiar (trained) visual stimulus. Follow-up endoscopic calcium imaging revealed that, rather, there is a significant reduction of somatic calcium responses selectively for the familiar visual stimulus on the test day following 5 days of SRP induction. Interestingly, the cellular calcium response to the first presentation of the familiar visual stimulus in each block was substantially similar to the response to a novel, as-yet-unseen visual stimulus. 
However, calcium responses to the familiar visual stimulus dramatically decreased as stimulation was repeated within each presentation block and across days of SRP training, whereas the response to the novel visual stimulus on the test day was maintained. The finding that short-latency VEP responses are potentiated, while the slower responses revealed by calcium imaging are depressed, suggests that feedback inhibition in V1 is strongly recruited by visual recognition of a familiar stimulus. A number of previous studies have suggested that deficits in experience-dependent sensory cortical plasticity and perceptual learning are associated with neuropsychiatric disorders such as autism spectrum disorder (ASD), Rett syndrome, and schizophrenia. Our results may therefore contribute to our understanding of the mechanisms underlying these disorders and may help inform interventions and treatments.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis. Vita.; Includes bibliographical references (pages 143-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132747</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchy and invariance in auditory cortical computation</title>
<link>https://hdl.handle.net/1721.1/132746</link>
<description>Hierarchy and invariance in auditory cortical computation
Kell, Alexander James Eaton.
With ease, we recognize a friend's voice in a crowd, or pick out the first violin in a concerto. But the effortlessness of everyday perception masks its computational challenge. Perception does not occur in the eyes and ears - indeed, nearly half of primate cortex is dedicated to it. While much is known about peripheral auditory processing, auditory cortex remains poorly understood. This thesis addresses basic questions about the functional and computational organization of human auditory cortex through three studies. In the first study we show that a hierarchical neural network model optimized to recognize speech and music does so at human levels, exhibits a similar pattern of behavioral errors, and predicts cortical responses, as measured with fMRI. The multi-task optimization procedure we introduce produces separate music and speech pathways after a shared front end, potentially recapitulating aspects of auditory cortical functional organization. Within the model, different layers best predict primary and non-primary voxels, revealing a hierarchical organization in human auditory cortex. We then seek to characterize the representational transformations that occur across stages of the putative cortical hierarchy, probing for one candidate: invariance to real-world background noise. To measure invariance, we correlate voxel responses to natural sounds with and without real-world background noise. Non-primary responses are substantially more noise-invariant than primary responses. These results illustrate a representational consequence of the potential hierarchical organization of the auditory system. Lastly, we explore the generality of deep neural networks as models of human hearing by simulating many psychophysical and fMRI experiments on the above-described neural network model. The results provide an extensive comparison of the performance characteristics and internal representations of a deep neural network with those of humans. 
We observe many similarities that suggest that the model replicates a broad variety of aspects of auditory perception. However, we also find discrepancies that suggest targets for future modeling efforts.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis. "June 2019"--Hand written on title page.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132746</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural dynamics of the anesthetized brain and the control of conscious states</title>
<link>https://hdl.handle.net/1721.1/132745</link>
<description>Neural dynamics of the anesthetized brain and the control of conscious states
Donoghue, Jacob A.
            (Jacob Alexander)
General anesthesia (GA) reversibly induces unconsciousness. It is arguably the most powerful brain state manipulation that clinicians and researchers can reliably perform. However, the mechanisms underlying GA at the neural systems level are underexplored and largely not understood. To link neural dynamics to the loss of consciousness, we measured spiking activity and local field potentials (LFPs) from multiple cortical and thalamic regions while monkeys were pharmacologically rendered unconscious. In Chapter 2, we examine effects of the GABAergic anesthetic propofol across prefrontal cortices (PFC), parietal cortex, temporal cortex, and the mediodorsal and intralaminar thalamic nuclei. Propofol decreased brain-wide spiking and high-frequency LFPs (e.g. gamma, 30-80 Hz) while producing prominent slow cortical oscillations (0-4 Hz). These slow rhythms were incoherent across PFC yet synchronized in frontoparietal networks. Electrical stimulation of the central thalamus immediately and continuously reversed the neurophysiological effects of propofol and awakened the anesthetized monkeys. Thus, we infer that GABAergic anesthetics produce unconsciousness via fragmented network dynamics facilitated by subcortical arousal pathway inhibition. In Chapter 3, we explore an alternative unconscious state mediated by the anti-glutamatergic anesthetic ketamine. Ketamine substantially increased spiking and gamma rhythms while eliminating beta (13-25 Hz) power and coherence across the cortical areas studied in Chapter 2. Under ketamine anesthesia, slow waves interrupted high-frequency activity globally and PFC uniquely entrained central thalamic LFPs. Seemingly, ketamine harnesses an excitatory mechanism to disrupt conscious processing, overwhelming cortex with disordered spiking activity and binding thalamo-prefrontal flexibility. In Chapter 4, we describe our model for closed-loop control of GA in monkeys. 
We established and implemented a pharmacokinetic-pharmacodynamic paradigm within an optimal control framework that automatically titrated propofol using an LFP-derived GA biomarker. Together, this collection of work demonstrates the distinct network mechanisms that can drive GA and the systems-level approach to enhanced control of conscious states.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 179-192).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132745</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The search for unmodeled gravitational-wave transients in the Advanced LIGO-Virgo Era</title>
<link>https://hdl.handle.net/1721.1/132742</link>
<description>The search for unmodeled gravitational-wave transients in the Advanced LIGO-Virgo Era
Lynch, Ryan Christopher.
Between 2015 and 2017, the era of gravitational-wave (GW) astronomy began in spectacular fashion. The Advanced-era GW detectors directly observed GW transients from two types of compact-binary sources: binary black holes (e.g., GW150914) and binary neutron stars (e.g., GW170817). Compact-binary sources are well studied theoretically, with well-understood strain waveforms, and thus their detection with Advanced LIGO-Virgo has led to an enormous number of physical insights. Nevertheless, we expect that transient GW sources with waveforms that are not fully modeled, or that are too quiet to be fully resolved, may contain a wealth of physical richness in their own right. This thesis explores how to confidently establish poorly-modeled and poorly-resolved, i.e., "unmodeled", GW transients as detections. We first develop a search algorithm that can be used to detect short-duration GW transients of general signal morphology. This algorithm was one of two independent algorithms to detect the first GW event, GW150914, in low latency. After establishing how GW transients of arbitrary morphology can be detected, we turn our attention to the detection of quiet GW signals that are not fully resolvable. We first explore the prospect of using multi-messenger astronomy to elevate low-significance GW candidates to the status of confident detections. Then, we develop a statistical consistency test that can be used to detect populations of poorly-resolved GW candidates. We apply the new search algorithm and new statistical consistency test to data obtained in the first and second observing runs of the Advanced Detector Era. We show that standard compact-binary sources, such as GW150914, can be detected confidently using these methods. Although no non-compact-binary GW transients are detected, we use these new tools to set the strictest upper limits to date on the rate-density of non-compact-binary GW transients. 
Finally, we turn our attention to how future improvements to the Advanced Detectors, such as squeezed-light injection, will impact the science done with GW transients.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, September, 2018; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 227-238).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132742</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Macroscopic graphene membranes with tunable nanopores for highly selective mass separation</title>
<link>https://hdl.handle.net/1721.1/132740</link>
<description>Macroscopic graphene membranes with tunable nanopores for highly selective mass separation
Jang, Doojoon.
Membrane-based filtration enables energy-efficient separation of solutes, solvents, or gases, benefiting a wide range of applications including water desalination, nanofiltration, hemodialysis, solvent-based separation, and natural gas purification. Semipermeable polymeric desalination membranes rely on a solution-diffusion mechanism to separate water from salts, where selective transport of species arises from their solubility and diffusivity in the polymer phase. Despite the remarkable progress in materials, structure, and separation processes over the past few decades, today's membranes face intrinsic challenges ranging from resolving the trade-off between permeability and selectivity to maintaining robust operation with high stability and low fouling. Two-dimensional materials have the potential to address some of the above challenges by offering a fundamentally new mechanism to control nanofluidic transport with sustainable nanoscale pores, thereby presenting a platform for next-generation reverse osmosis (RO) or nanofiltration (NF) membranes. Although theoretical investigations of great breadth and depth have been pursued to understand mass transport across atomically thin materials, experimental efforts are required to engineer and tune nanopore structure in macroscopically large graphene membranes and to understand the resulting transport characteristics. Moreover, the interplay between graphene nanopore structure and the porous support layer must be considered to identify the structure-function relationship of nanoporous graphene membranes. This thesis aims to control selective graphene nanopore structure for high permeability and selectivity and to understand the resulting tunable membrane transport properties. A two-step process of ion bombardment and oxygen plasma treatment is carried out to introduce a high density of nanopores in large-area graphene membranes. 
Pore creation parameters are thoroughly explored to investigate their influence on pore size and density. The resulting transport properties of graphene membranes can be tuned to achieve high permeance to water, comparable to that of NF membranes, and highly selective transport of monovalent ions over organic molecules. The nanopore structure introduced in graphene membranes is inspected to quantitatively relate the pore creation parameters with the resulting pore size distributions. A multiscale transport model is constructed to investigate the interplay between nanoporous graphene and support pores that governs osmotic water flux and diffusive solute transport. Internal concentration polarization of draw solutes estimated by the model suggests that achieving narrowly distributed graphene pores with minimal leakage is essential to optimal operation of high-flux asymmetric graphene membranes under forward osmosis. Sterically governed molecular assembly is explored to mitigate residual solute leakage across large, non-selective pores for enhanced membrane selectivity. High molecular weight polymers can electrostatically or covalently assemble across nanoscale defects of graphene to narrow the effective pore size distribution, sterically and electrostatically hindering transport. Multi-step size-selective polyelectrolyte assembly enables ≥99% retention of divalent ions and organic molecules, highlighting the potential of graphene in desalination, nanofiltration, and organic solvent nanofiltration (OSN). With experimental and theoretical means to characterize membrane structure and transport properties, this thesis forms the basis for regulating nanofluidic mass transport with tunable nanopores and developing atomically thin separation membranes with high selectivity and permeability.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2018; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 126-139).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132740</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon nanotube-based optical sensors for pharmaceutical applications : theory and experiment</title>
<link>https://hdl.handle.net/1721.1/132735</link>
<description>Carbon nanotube-based optical sensors for pharmaceutical applications : theory and experiment
Salem, Daniel P.
            (Daniel Parker)
Semiconducting single-walled carbon nanotubes (SWCNTs) are attractive transducers for biosensor applications due to their unique photostability, single-molecule sensitivity, and ease of multiplexing. Sensors can be rendered selective via several detection modalities, including the use of natural recognition elements (e.g., proteins) as well as the formation of synthetic molecular recognition sites from adsorbed heteropolymers. However, to date, deployment of SWCNT-based biosensors has been limited. The aim of this thesis was to study the design and development of SWCNT-based optical sensors for analytes relevant to the food and pharmaceutical industries, including neurotransmitters, proteins, and metal ions. The research described in this thesis spans several levels of nanosensor development, including: i) the fundamental study of SWCNT-polymer interactions and their dependence on solution properties; ii) sensor development using existing detection modalities and the use of mathematical modeling to guide sensor design and interpret data; and iii) the invention of a new sensor form factor enabling long-term sensor stability and point-of-use measurements. Our fundamental work on SWCNT-polymer interactions investigates the influence of polymer structure, SWCNT structure, and solution properties on molecular recognition, using single-stranded DNA as a model polymer system. We find that specific ssDNA sequences are able to form distinct corona phases across SWCNT chiralities, resulting in varying response characteristics to a panel of biomolecule probe analytes. In addition, we find that ssDNA-SWCNT fluorescence and wrapping structure are significantly influenced by solution ionic strength, pH, and dissolved oxygen in a sequence-dependent manner. We are able to model this phenomenon and demonstrate the implications of solution conditions for molecular recognition, modulating the recognition of riboflavin. 
These results provide insight into the unique molecular interactions between DNA and the SWCNT surface, and have implications for molecular sensing, assembly, and nanoparticle separations. In addition to our experimental work, we used mathematical modeling to guide sensor design for biopharmaceutical characterization. A mathematical formulation for glycoprotein characterization was developed, along with a dynamic kinetic model to describe the data output by a label-free array of non-selective glycan sensors. We use the formulated model to guide microarray design by answering questions regarding the number and type of sensors needed to quantitatively characterize a glycoprotein mixture. As a second example, we report the design of a novel, diffusion-based assay for the characterization of protein aggregation. Specifically, we design hydrogel-encapsulated SWCNT sensors with a tunable hydrogel layer to influence the diffusion of immunoglobulin G protein species of variable size, and we develop a combined model that describes both the diffusion of analyte and analyte-sensor binding. By measuring the sensor response to a series of well-characterized protein standards that have undergone varying levels of UV stress, we demonstrate the ability to detect protein aggregates at a concentration as low as one percent on a molar basis. Finally, we report the development of a new form factor for optical nanosensor deployment involving the immobilization of SWCNT sensors onto paper substrates. We find that SWCNT optical sensors can be immobilized onto many different paper materials without influencing sensor performance. Moreover, we pattern hydrophobic barriers onto the paper substrates to create 1-dimensional sensor arrays, or barcodes, that are used for rapid, multiplexed characterization of several metal ions including Pb(II), Cd(II) and Hg(II).
In addition to providing a new form factor for conducting point-of-use sensor measurements, these findings have the potential to significantly enhance the functionality of SWCNT-based optical sensors by interfacing them with existing paper diagnostic technologies including the manipulation of fluid flow, chemical reaction, and separation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, June, 2019; Cataloged from the PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132735</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orthotopic liver metastasis mouse models of mismatch repair-proficient colorectal cancer recapitulate clinical inefficacy of immune checkpoint blockade</title>
<link>https://hdl.handle.net/1721.1/132614</link>
<description>Orthotopic liver metastasis mouse models of mismatch repair-proficient colorectal cancer recapitulate clinical inefficacy of immune checkpoint blockade
Ho, William Wee Teck.
Liver metastasis is a major cause of mortality in patients with colorectal cancer (CRC). Immune checkpoint blockade (ICB) therapy has significantly improved overall survival in several cancer types including melanoma and non-small cell lung cancer. However, patients with mismatch repair-proficient (pMMR) metastatic CRC do not respond to ICB therapy. MC38 and CT26 are two of the most commonly used syngeneic mouse CRC cell lines in preclinical studies. In most of these preclinical studies, MC38 and CT26 are implanted under the skin in the hind flank of the mice, where they grow into subcutaneous tumors. Several studies have shown that these subcutaneous MC38 or CT26 tumors respond very well to ICB treatment. However, MC38 and CT26 have been reported previously to be pMMR CRC cell lines, indicating that these subcutaneous tumor mouse models do not recapitulate the clinical reality well. In this thesis, we show that when pMMR CRC cell lines are implanted orthotopically in the liver as liver metastasis, the resultant liver metastases are unresponsive to ICB, which recapitulates the clinical reality that patients with pMMR metastatic CRC do not respond to ICB treatment. We further show that when treated with ICB, these orthotopic pMMR CRC liver metastasis mouse models have poor infiltration and activation of key immune cells and significantly decreased activity of key pathways that are critical for the efficacy of ICB. We also evaluated several strategies aimed at overcoming the inefficacy of ICB in these pMMR CRC liver metastasis mouse models. We found that radiation therapy was able to overcome inefficacy of ICB in the pMMR CRC liver metastasis mouse model with moderately low tumor mutational load. We also found that antibody-peptide epitope conjugates (APECs) were able to increase the efficacy of ICB in the pMMR CRC liver metastasis mouse model with very low tumor mutational load.
Our results demonstrate that by implanting pMMR CRC cell lines in a relevant tissue site such as in the liver to model CRC liver metastasis, we can more accurately recapitulate the clinical efficacy of therapies such as ICB.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 70-75).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132614</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equilibrium and stability analysis of plate structures in space.</title>
<link>https://hdl.handle.net/1721.1/131205</link>
<description>Equilibrium and stability analysis of plate structures in space.
Efimba, Robert Elangwe.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1972; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131205</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The prior thiol capture method for peptide synthesis</title>
<link>https://hdl.handle.net/1721.1/131204</link>
<description>The prior thiol capture method for peptide synthesis
Galakatos, Nicholas G.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131204</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the choice of a model to fit data from an exponential family</title>
<link>https://hdl.handle.net/1721.1/131187</link>
<description>On the choice of a model to fit data from an exponential family
Haughton, Dominique Marie-Annick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1984; Bibliography: leaves 102-106.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131187</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>I. Rates of thiol-disulfide interchange reactions involving proteins. II. regeneration of the nicotinamide cofactor NADH</title>
<link>https://hdl.handle.net/1721.1/131186</link>
<description>I. Rates of thiol-disulfide interchange reactions involving proteins. II. regeneration of the nicotinamide cofactor NADH
Shaked, Ze'ev.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131186</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strong classification of [gamma]-structures</title>
<link>https://hdl.handle.net/1721.1/131185</link>
<description>Strong classification of [gamma]-structures
Bracho, Javier.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1981; Bibliography: leaves 102-103.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131185</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detailed chemical and spectroscopic probes of the multicopper active sites in Rhus laccase</title>
<link>https://hdl.handle.net/1721.1/131069</link>
<description>Detailed chemical and spectroscopic probes of the multicopper active sites in Rhus laccase
Spira, Darlene Joy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1985; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131069</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The impact of political coalitions on the decision-making process in the city of Cambridge, 1960 to 1975</title>
<link>https://hdl.handle.net/1721.1/131068</link>
<description>The impact of political coalitions on the decision-making process in the city of Cambridge, 1960 to 1975
Malin, Maureen Ann.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1985; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131068</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Signal transduction in human cells by metabolites derived from methionine and glucose</title>
<link>https://hdl.handle.net/1721.1/131009</link>
<description>Signal transduction in human cells by metabolites derived from methionine and glucose
Orozco, Jose M. (Jose Miguel Orozco Segrera)
Organisms sense nutrients to match physiological responses to their environment. The mechanistic Target of Rapamycin complex I (mTORC1) pathway integrates information from a wide range of nutrient inputs to appropriately control cell growth. Nutrients like amino acids and glucose are required for mTORC1 to localize to the lysosome, its site of activation. The Rag-GTPases dictate mTORC1 localization and are regulated by GATOR1 and GATOR2 in response to nutrients, but how the GATOR-Rag axis senses nutrients is not completely understood. We found a novel protein, which we named SAMTOR, that interacts with GATOR1 and promotes GATOR1 activity and mTORC1 inhibition. Moreover, we show that SAMTOR is an S-adenosylmethionine (SAM) binding protein, and that SAM binding disrupts the SAMTOR-GATOR1 interaction. SAM is a derivative of methionine, and we show that methionine starvation promotes SAMTOR action on GATOR1 following a decrease in SAM levels.; The consequence of this series of events is to inhibit mTORC1. In contrast, when SAM levels are high, SAM disrupts the interaction between SAMTOR and GATOR1, maintaining high mTORC1 activity. We conclude that SAMTOR is a SAM sensor in the mTORC1 pathway. Glucose is required for full activity and lysosomal localization of mTORC1. However, the mechanism of glucose sensing and the identity of the metabolite derived from glucose that is sensed by the mTORC1 pathway remain unknown. To identify the metabolite that signals glucose to mTORC1, we used metabolically engineered human cells lacking the canonical energy sensor AMPK to identify glucose-derived metabolites required to activate mTORC1 independent of energetic stress. We show that mTORC1 senses a metabolite downstream of the aldolase and upstream of the glyceraldehyde 3-phosphate dehydrogenase steps of glycolysis and pinpoint dihydroxyacetone phosphate (DHAP) as the key molecule.; We also show that an intact GATOR-Rag axis is required for glucose sensing upstream of mTORC1.
Altogether, we identified two metabolites that regulate the mTORC1 pathway: SAM, derived from methionine, and DHAP, derived from glucose. Greater mechanistic insight into both SAM and DHAP sensing will reveal how these two signals, along with those of other amino acids and nutrients, are integrated by the GATOR-Rag signaling axis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131009</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymers and plastrons : active and passive drag reduction in wall-bounded turbulent flows</title>
<link>https://hdl.handle.net/1721.1/131007</link>
<description>Polymers and plastrons : active and passive drag reduction in wall-bounded turbulent flows
Rajappan, Anoop.
Frictional energy dissipation in wall-bounded turbulence is ubiquitous in modern engineering systems, ranging from the flow of liquids through pipelines, to the drag-inducing boundary layer around ships and submarines. The effective mitigation of this frictional drag is therefore of great practical interest, and offers substantial economic and environmental benefits. This thesis focuses on two complementary techniques for turbulent drag reduction--the active injection of polymers into the flow, and the passive aerophilic texturing of the wall--and aims to address practical challenges that prevent their widespread adoption in real-life systems, with an emphasis on marine applications. The prohibitive cost of synthetic polymers remains a key impediment to their large-scale deployment in commercial marine operations.; This thesis hence focuses on affordable and readily accessible sources of high molecular weight biopolymers: specifically, the water-soluble fiber, or mucilage, extracted from seeds such as flax, chia and psyllium. By means of frictional drag measurements inside a bespoke Taylor-Couette apparatus, seed mucilage is shown herein to display drag reduction efficacy and flow longevity comparable to common synthetic polymers, while offering significant advantages in terms of raw material cost and biodegradability. Preliminary investigations confirm that oil-soluble natural polymers, such as rubber latex, can analogously be employed as eco-friendly drag reducers for the transport of hydrocarbon feedstocks. Superhydrophobic texturing of submerged flow boundaries has emerged recently as another viable method of drag reduction in aqueous flows.; Despite sustained research interest in both polymers and superhydrophobic walls as standalone methods for drag mitigation, attempts to employ them jointly has remained unsuccessful. 
In this thesis, cooperative drag reduction effects are explored for two common drag-reducing polymers, paired with regularly patterned as well as randomly textured superhydrophobic walls. Dissolved flexible macromolecules are shown to act in concert with the slip-inducing air layer, or plastron, trapped atop the superhydrophobic texture, yielding enhanced reductions in turbulent drag greater than that achievable from either method employed independently. An additive law in Prandtl-von Kármán coordinates is derived that accurately predicts this combined effect at dilute polymer concentrations, and the adverse influence of the surface activity of polymers on wall slip is also elucidated.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 187-211).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131007</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tritium retention in nuclear graphite, system-level transport, and management strategies for the fluoride-salt-cooled high-temperature reactor</title>
<link>https://hdl.handle.net/1721.1/131004</link>
<description>Tritium retention in nuclear graphite, system-level transport, and management strategies for the fluoride-salt-cooled high-temperature reactor
Dolan, Kieran Patrick.
Advanced reactor concepts which use a lithium- or beryllium-bearing primary salt coolant will require technical solutions to mitigate the environmental release of tritium. One such design is the Fluoride-Salt-Cooled High-Temperature Reactor (FHR), which combines a molten Flibe (2LiF-BeF₂) salt coolant and tri-structural isotropic coated-particle fuel to produce power or process heat. Compared to current water-cooled reactors, managing tritium release from a FHR is further complicated by the mobility of tritium at high temperatures and limited knowledge of interactions between tritium and nuclear graphite in the molten fluoride salt environment. The total activity, chemical forms, and retention mechanisms for tritium in nuclear graphite were studied through thermal desorption analysis of sample materials from three in-core Flibe salt irradiations (denoted FS-1, FS-2, and FS-3) at the MIT Reactor (MITR).; Tritium desorption rates as a function of temperature were observed in distinct peak structures indicative of distinct trapping sites in graphite. The tritium content measurements yielded estimates of overall retention in nuclear graphite of 19.6±1.9% from FS-1, 34±10% from FS-2, and 27.1±1.9% from FS-3 relative to the total calculated tritium generation in each experiment. Thermal desorption measurements of the MITR samples were consistent with previously proposed mechanisms for retention of gaseous hydrogen in graphite based on the chemical form of desorbed tritium, the activation energy of the desorption process, and the effect of excess H₂ on the desorption rate as a function of temperature. Therefore, a methodology based on gaseous retention mechanisms was proposed and developed to model the uptake of tritium into graphite from Flibe in a FHR.; A tritium retention model based on a bulk-diffusivity in graphite was developed as well as a model based on differential transport in graphite pores and grains.
Using a system-level tritium transport model, the overall retention on graphite pebbles in a FHR was calculated to be 20.3% and 26.3% of the equilibrium generation rate for the bulk-diffusivity and pore and grain methods, respectively. In each case, modeling the transport and trapping of tritium inside graphite significantly reduced the retention rates compared to a retention process solely limited by mass transport in Flibe. According to the results of a sensitivity analysis, the level of tritium retention in core graphite has the largest uncertainty in the FHR tritium distribution because of relatively high standard deviations in literature measurements of tritium solubility and diffusivity in graphite.; Tritium management technology options were then examined in the system-level transport model based on permeation barrier coatings and tritium extraction systems. Permeation barrier coatings of a specified performance level applied to Flibe-facing surfaces were found to be more effective than exterior-surface coatings, while extraction systems with design constraints were able to significantly reduce overall tritium releases. A combination of the interior-surface barriers and extraction systems applied to various regions of the plant was shown to reduce tritium release into the FHR reactor building to levels below that of current light water reactors.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 319-333).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131004</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affordable autonomous lightweight personal mobility</title>
<link>https://hdl.handle.net/1721.1/131000</link>
<description>Affordable autonomous lightweight personal mobility
Lin, Michael Chia-Liang.
Self-driving cars and micro-mobility services are among the most important trends in the mobility landscape. While robo-taxi services are still in the pilot phase, residents in many cities today are adopting micro-mobility services as a more affordable and energy-efficient last-mile alternative to traditional forms of transportation. This dissertation proposes a new genre of urban mobility by bringing together the advantages of micro-mobility with those of the self-driving car. This dissertation presents a novel vehicle design that leverages the safety and autonomous navigation capabilities of a self-driving car while remaining ecologically responsible, lightweight, and affordable. In addition, the novel design enables new types of urban mobility services with the ability to operate autonomously in bike lanes and low-speed urban environments, and to provide door-to-door mobility delivery of both people and goods.; The proposed autonomous vehicle design takes a bottom-up approach, piecing together modularized hardware components and software blocks and giving rise to autonomous functionality. During the development of these systems, multiple full-scale working prototypes were completed, each designed to explore a specific research goal. The testing and evaluation of these prototypes were conducted within urban living labs, using the bike lanes of Cambridge, Taipei, and Andorra. Each prototype concluded with a public exhibition demonstrating the validity of these systems when applied to hypothetical mobility scenarios of the future. This dissertation includes the following five contributions: 1. A new genre of mobility that enables novel mobility services of the future. 2. A software framework for autonomous navigation that utilizes low-cost sensors and computers. 3. A set of human-machine interactions using state-of-the-art autonomous vehicle perception as input for establishing effective Vehicle-to-Pedestrian communications. 4. 
A new methodology for road tests and evaluation of these systems in the living environment. 5. The introduction of a possible decentralized community-based mobility industry. This dissertation will describe the research story of successful cooperation across academic institutions, cities, industries, and borders.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 302-308).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131000</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grain-growth mechanisms in rapidly solidified matrix steels</title>
<link>https://hdl.handle.net/1721.1/130999</link>
<description>Grain-growth mechanisms in rapidly solidified matrix steels
Hsu, Chen-Yih.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1984; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130999</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactive chemisorption of molecular fluorine on Si(100)</title>
<link>https://hdl.handle.net/1721.1/130998</link>
<description>Reactive chemisorption of molecular fluorine on Si(100)
McGonigal, Marianne.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1989; Includes bibliographical references (leaves 215-217).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130998</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The instability of the prewar economy reconsidered : a critical examination of historical macroeconomic data</title>
<link>https://hdl.handle.net/1721.1/130997</link>
<description>The instability of the prewar economy reconsidered : a critical examination of historical macroeconomic data
Romer, Christina.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1985; Vita.; Includes bibliographies.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130997</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formulation and evaluation of parallel algorithms for the orbit determination problem</title>
<link>https://hdl.handle.net/1721.1/130994</link>
<description>Formulation and evaluation of parallel algorithms for the orbit determination problem
Shaver, Jeffrey Scott.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Vita.; Bibliography: p. 858-873.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130994</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and mechanics of the subducted Gorda Plate : constrained by afterslip simulations and scattered seismic waves</title>
<link>https://hdl.handle.net/1721.1/130902</link>
<description>Structure and mechanics of the subducted Gorda Plate : constrained by afterslip simulations and scattered seismic waves
Gong, Jianhua
Subduction zones host the greatest earthquakes on Earth and pose a great threat to human society. The largest slip in megathrust earthquakes often occurs in the 10-50 km depth range, yet seismic imaging of the material properties in this region has proven difficult. This thesis focuses on developing methods to utilize high frequency (2-12 Hz) seismic waves scattered from the megathrust plate interface to constrain its fine-scale velocity structures and to investigate the relationship between velocity structures and megathrust slip behaviors. Chapter 2 investigates the locking condition of the subducted Gorda plate by simulating afterslip that would be expected as a result of the stress changes from offshore strike-slip earthquakes. Chapter 3 develops array analysis methods to identify P-to-S and S-to-P seismic converted phases that convert at the subducted Gorda plate interface from local earthquakes and uses them to constrain the geometry and material properties of the plate boundary fault of the subducted Gorda plate between 5-20 km depth. Chapters 4 and 5 use a dense nodal array and numerical modeling methods to study the seismic guided waves that propagate along the thin low velocity layer at the boundary of the subducted Gorda plate. Taken together, our results indicate that the material properties of the subduction plate-boundary fault are highly heterogeneous and that the fault is potentially contained in a low velocity layer with significant porosity and fluid content at seismogenic depths.
Thesis: Ph. D., Joint Program in Marine Geology and Geophysics (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 175-198).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130902</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distribution, growth, and transport of larval fishes and implications for population dynamics</title>
<link>https://hdl.handle.net/1721.1/130901</link>
<description>Distribution, growth, and transport of larval fishes and implications for population dynamics
Hernández, Christina M., Ph. D. (Christina Maria), Massachusetts Institute of Technology.
The early life stages of marine fishes play a critical role in population dynamics, largely due to their high abundance, high mortality, and ease of transport in ocean currents. This dissertation demonstrates the value of combining larval data, collected in the field and the laboratory, with model simulations. In Chapter 2, analyses of field observations of ontogenetic vertical distributions of coral reef fish revealed a diversity of behaviors both between and within families. In Caribbean-wide particle-tracking simulations of representative behaviors, surface-dwelling larvae were generally transported longer distances with greater population connectivity amongst habitat patches, while the evenly-distributed vertical behavior and downward ontogenetic vertical migration were similar to one another and led to greater retention near natal sites. However, hydrodynamics and habitat availability created some local patterns that contradicted the overall expectation.; Chapter 3 presents evidence of tuna spawning inside a large no-take marine protected area, the Phoenix Islands Protected Area (PIPA). Despite variation in temperature and chlorophyll, the larval tuna distributions were similar amongst years, with skipjack (Katsuwonus pelamis) and Thunnus spp. tunas observed in all three years. Backtracking simulations indicated that spawning occurred inside PIPA in all three study years, demonstrating that PIPA is protecting viable tuna spawning habitat. In Chapter 4, several lines of larval evidence support the classification of the Slope Sea as a major spawning ground for Atlantic bluefin tuna with conditions suitable for larval growth. 
The abundance of bluefin tuna larvae observed in the Slope Sea aligns with typical observations on the other two spawning grounds.; Age and growth analyses of bluefin tuna larvae collected in the Slope Sea and the Gulf of Mexico in 2016 did not show a growth rate difference between regions, but did suggest that Slope Sea larvae are larger at the onset of exogenous feeding. Collected larvae were backtracked to locations north of Cape Hatteras and forward tracked to show that they would have been retained within the Slope Sea until the onset of swimming. As a whole, this thesis presents valuable contributions to the study of larval fishes and the attendant implications for marine resource management.
Thesis: Ph. D., Joint Program in Biological Oceanography (Massachusetts Institute of Technology, Department of Biology; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 119-135).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130901</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards security by design of connected and automated vehicles : cyber and physical threats, mitigations, and architectures</title>
<link>https://hdl.handle.net/1721.1/130856</link>
<description>Towards security by design of connected and automated vehicles : cyber and physical threats, mitigations, and architectures
Suo, Dajiang.
Security, safety and privacy converge when it comes to the design of cyber-physical systems (CPS) such as connected and automated vehicles (CAVs). This trend can be attributed to the increased level of connectivity and automation and the new potential of insider attacks caused by changes in vehicle ownership. For example, a CAV whose on-board sensors, such as light detection and ranging (LIDAR) and cameras, are under spoofing attacks or subject to variations in environmental conditions (e.g., light, weather) may conduct risky maneuvers. Additionally, a CAV that can communicate with nearby vehicles, cloud servers, and roadside infrastructure can be turned into a "cyber-weapon" by adversaries to compromise transportation services or customer privacy. Designing mitigation solutions is a challenging task for original equipment manufacturers, who need to prioritize among safety, security, and privacy, and deal with ever-changing attack surfaces and the power of attackers.; This thesis proposes a security by design framework for identifying and mitigating cyber and physical threats on CAVs. A structured security engineering process for threat identification is first presented, which provides guidance to designing defensive mechanisms such that any compromise in design goals is traceable to a specific cyber or physical attack. After prioritizing among different identified threats, this thesis focuses on solutions to mitigate two types of threats: physical threats on perception tasks with optical sensors and cyber threats on traffic event forgery in Vehicle-to-Infrastructure (V2I) communication. Second, to mitigate physical threats to on-board optical sensors caused by environmental hazards, this thesis develops an object-recognition method based on light polarization. 
The proposed approach provides multimodal data that offers clues about object surfaces, complementing the depth and RGB information from existing optical sensors. A proof-of-concept platform built on a laboratory benchtop verifies and evaluates the proposed concept. Third, a secure V2I communication protocol titled "Proof-of-Travel" (POT) is developed to verify the authenticity of V2I messages. This novel approach combines the physical laws of vehicle movement with cryptographic mechanisms used to secure distributed networks. By developing and demonstrating these two proof-of-concept mitigation solutions, this thesis illustrates that security and safety goals for cyber-physical systems can be achieved more cost-effectively by following the security-by-design framework.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 103-115).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130856</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model for the dig-in instability in serial sectioning and iterative orthogonal cutting</title>
<link>https://hdl.handle.net/1721.1/130853</link>
<description>A model for the dig-in instability in serial sectioning and iterative orthogonal cutting
Ramirez, Aaron Eduardo.
Microtome serial sectioning is a key part of building brain maps of neurological tissue: resin-embedded brain tissue is serially cut into thin sections and imaged on an electron microscope, and features of interest are traced through the stack of images. However, the lateral dimensions of the sections typically do not exceed 1 mm due to instabilities encountered when attempting to cut wider sections. One such instability is the dig-in instability, which occurs in any cutting process with a cutting force component pulling the tool deeper into the workpiece. It is a niche phenomenon in industrially important processes such as machining, where it is easily avoided, and thus has not been studied in depth in the literature; microtome cutting, however, is especially susceptible to the dig-in instability due to the combination of high rake angles, small cutting-tool wedge angles, and highly lubricated cutting. There are currently no models of the dig-in instability, nor engineering guidelines linking mechanical characteristics of the cutting system, such as stiffness requirements, to dig-in instability regimes, despite system stiffness being acknowledged in the microtome cutting literature as important to successful cutting. The goal of this research is to generate a model of the dig-in instability that ties cutting-system mechanical characteristics to the maximum allowable width of cut that avoids digging in. A second model was developed to describe how variations in cutting parameters produce variations in the resulting cut surface, and how this variation changes with each cutting pass. An instrumented cutting setup was designed and built to measure cutting forces and record cutting videos. 
A compliant knife was designed to control the stiffness characteristics of the cut. Delrin polymer specimens were designed as stepped "pyramids" that increase in width as the cut progresses, to identify the cutting width at which cutting becomes unstable. Establishing this link between cutting-system characteristics and successful sectioning outcomes will enable the design of machines capable of cutting at larger widths, a stepping stone toward mapping larger brain volumes. This in turn would enable greater understanding of neural function and pathology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 285-289).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130853</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance effects and causal mechanisms of mid-channel congestion in diesel particulate filters</title>
<link>https://hdl.handle.net/1721.1/130852</link>
<description>Performance effects and causal mechanisms of mid-channel congestion in diesel particulate filters
Tracy, Ian Patrick.
The diesel particulate filter (DPF) is a ~$5,000-$50,000 USD critical component of aftertreatment systems installed in diesel engine-powered vehicles. The device is designed to trap particles emitted by the diesel combustion process in order to prevent their release into the surrounding environment, thereby reducing pollution levels and mitigating greenhouse gas emissions. The increasing stringency of emissions regulations has progressively necessitated the installation of DPFs on diesel-powered vehicles over the past few years, with the DPF market expected to remain significant in size at least through 2025. While DPFs nominally operate by trapping and accumulating incoming particulate matter (PM) continuously in the far-downstream plug region of the filter channels, so that no gaps form between trapped PM agglomerates, both real-world field and laboratory bench tests have demonstrated that channel-spanning ash agglomerates form well upstream of the end-plug region, prematurely clogging the mid-channel region. This effectively renders useless the remaining open space in the channel downstream of the blockage location. In addition to mid-channel congestion, this adverse phenomenon is referred to interchangeably in the literature as mid-channel collapse (MCC), mid-channel clogging, and mid-channel deposits (MCD). MCC, due to accelerated filling of the filter channels, often results in significantly reduced DPF lifetime and performance (i.e., increased backpressure yielding depressed fuel economy), both of which prove costly for diesel vehicle operators. Existing hypotheses regarding the causality of MCC are largely based on inconclusive empirical observations and are not substantiated by fundamental quantitative analysis. 
The primary contributions of this dissertation include: 1) summarizing hypothesized causal mechanisms of MCC with an emphasis on sintering as a primary driver thereof, 2) introducing a method by which to analyze X-Ray CT scans that show MCC in DPF channels, 3) assessing the performance penalty associated with MCC by correspondingly extending the industry standard model for pressure drop across a DPF, and 4) suggesting modifications to the DPF regeneration process in order to prevent sintering of ash agglomerates to the DPF side walls, based on an efficient reformulation of the prevalent temperature history model of the DPF that solves for both flow and temperature conditions inside filter channels over time during active regeneration.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 215-226).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130852</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational processing and modeling of intravascular images precisely couple arterial morphology and biomechanics</title>
<link>https://hdl.handle.net/1721.1/130851</link>
<description>Computational processing and modeling of intravascular images precisely couple arterial morphology and biomechanics
Olender, Max Louis.
Cardiovascular diseases, and coronary artery disease in particular, remain a persistent, devastating, and prevalent menace to health and wellbeing globally despite great strides in vascular biology and medicine. While biomechanical forces are known to play a driving role in the natural history of atherosclerosis, the nuanced yet profound impact of patient- and lesion-specific biomechanics on disease presentation, course, and treatment is not fully appreciated or accounted for in clinical practice. The incredible strides in melding image processing with artificial intelligence, computational modeling, and numerical methods are increasingly filling gaps in knowledge, especially at the intersection of pathological anatomy and biomechanical structural behavior. We derived geometric and morphological structure, as well as constitutive material properties, from invasive intravascular image sequences to quantitatively assess and characterize the state of atherosclerotic arteries. To overcome the challenge of limited penetration depth in the presence of signal-attenuating plaque, a novel surface-fitting method leveraged contextual information and spatial continuity to fully delineate the mural conformation of the diseased vessel wall. Neural networks enriched with domain knowledge of vascular geometry and imaging classified pathological regions of interest within heterogeneous lesions. Construction of in silico computational models and in vitro phantom models facilitated the execution and validation of inverse methods to determine constitutive material mechanical properties non-destructively and in a clinically amenable fashion. 
Strategic simplifying assumptions freed the approach from the data-acquisition limitations that inhibited previous methods of in situ mechanical characterization. Finally, to bridge the chasm between virtual and physical medicine and facilitate integration of these new capabilities into clinical practice, synthetic images were generated by an adversarial network trained in the familiar visual vernacular of vascular imaging. Through the insights described in this thesis, greater information can be extracted, augmented, and made accessible from clinically available imaging data. Approaches to more quantitatively and reliably assess, model, and convey biomechanical disease states may offer mechanistic insight into disease development, progression, and treatment response, ultimately leading to improved personalized patient care in an emerging era of computational cardiology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 279-301).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130851</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated bio-photonic devices : sensors, imagers, and beyond</title>
<link>https://hdl.handle.net/1721.1/130850</link>
<description>Integrated bio-photonic devices : sensors, imagers, and beyond
Singh, Robin, Ph. D., Massachusetts Institute of Technology.
Optical imaging, sensing, and testing are ubiquitous in biology, offering elegant solutions for diagnostic, therapeutic, and theranostic applications. If these optical systems can be built as complex miniaturized photonic systems, then scalability, portability, lower cost, and higher performance can be obtained for real-time monitoring and bedside treatment. My Ph.D. focuses on the design, optimization, low-loss in-house fabrication, and testing of the building blocks of miniaturized photonic devices for three biological applications: 1) Neurophotonic Probes for Deep Brain Photoacoustic Imaging: Conventionally, implantable probe technology is based on an array of patterned electrodes that monitor electrical signals in the extracellular matrix of deep neural cells. State-of-the-art designs can successfully record only about 100 neurons simultaneously, making progress toward the ultimate goal of probing the roughly 100 billion neurons in the human brain slow. To overcome this bottleneck, we propose an implantable neurophotonic photoacoustic probe architecture that could image about 10000 neurons with cellular resolution. Realized with Michigan-style MOEMS technology, the probe consists of photonic waveguide-based meta-illuminators for photoacoustic excitation and high-frequency ultrasonic transducers for acoustic detection. The probe is a miniaturized implantable sensing system that improves the depth of penetration (8-10 mm) and resolution (1-5 [mu]m) in neural imaging. We will discuss imaging feasibility, engineer different optical excitation beam profiles using nanophotonic structures, and demonstrate an ultrasound detector using an integrated photonics platform. 2) Integrated Optofluidic Sensors for Aerosol Sensing and Blood Coagulometry: We will demonstrate optofluidic sensors for in-situ characterization (size, count, and chemistry) of aerosol and bio-aerosol particles. 
These photonic sensor designs, based on near-IR and mid-IR platforms, can extract the physical and chemical nature of interacting particles over a broad range of sizes (100 nm to 2 [mu]m in diameter), in contrast to current integrated photonics-based sensors, which are restricted to molecular or nanoparticle sensing. We also explore these photonic sensors for on-chip blood coagulometry. 3) Machine Learning for Nanophotonic Design: The applications mentioned above require complex photonic structures that can manipulate and guide light waves at the nanoscale. The design space of such nanostructures is often high-dimensional, where conventional design optimization methods fail. We employ machine learning to capture a global optimum in functionality.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-183).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130850</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational tools towards automating the scientific method</title>
<link>https://hdl.handle.net/1721.1/130849</link>
<description>Computational tools towards automating the scientific method
Spanbauer, Span.
We present a collection of novel computational tools designed to contribute to the goal of large-scale scientific automation. Deep Involutive Neural MCMC and other inference compilation techniques present a promising path to accelerating inference in probabilistic programs. Neural Group Actions provide foundational methods for learning symmetric transformations useful for the development of statistical models and probabilistic algorithms. Coarse-Grained Nonlinear System Identification provides an exceptional new model class for nonlinear dynamic systems, enabling accurate model identification with minimal experimental data. Optimization plus Stochastic Interchange is a flexible new way to generate experimental stimuli, leading to optimally informative measurements during system identification. Extended Koopman Models advance a new method for the optimal control of nonlinear systems. When coupled with high-throughput laboratory automation, these and other computational tools made possible by recent developments in artificial intelligence promise to revolutionize the way we do science and engineering.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 123-134).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130849</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational approaches for sub-meter ocean color remote sensing</title>
<link>https://hdl.handle.net/1721.1/130848</link>
<description>Computational approaches for sub-meter ocean color remote sensing
O'Shea, Ryan Edward.
The satellite ocean color remote sensing paradigm developed by government space agencies enables the assessment of ocean color products on global scales at kilometer resolutions. A similar paradigm has not yet been developed for regional scales at sub-meter resolutions, but it is essential for specific ocean color applications (e.g., mapping algal biomass in the marginal ice zone). While many aspects of the satellite ocean color remote sensing paradigm are applicable to sub-meter scales, steps within the paradigm must be adapted to the optical character of the ocean at these scales and to the opto-electronics of the available sensing instruments. This dissertation adapts the three steps of the satellite ocean color remote sensing paradigm that benefit the most from reassessment at sub-meter scales, namely the correction for surface-reflected light, the design and selection of the opto-electronics, and the post-processing of over-sampled regions. First, I identify which combination of surface-reflected-light removal algorithm and view angle is optimal at sub-meter scales, using data collected during a field deployment at the Martha's Vineyard Coastal Observatory. I find that, of the three most widely used glint correction algorithms, a spectral-optimization-based approach applied to measurements with a 40° view angle best recovers the remote-sensing reflectance and chlorophyll concentration despite centimeter-scale variability in the surface-reflected light. Second, I develop a simulation framework to assess the impact of higher optical and electronic noise on ocean color product retrieval from unique ocean color scenarios. I demonstrate the framework's power as a design tool by identifying hardware limitations, and developing potential solutions, for estimating algal biomass from high-dynamic-range sensing in the marginal ice zone. Third, I investigate a spectral super-resolution technique for application to spatially over-sampled oceanic regions. 
I determine that this technique more accurately represents spectral frequencies beyond the Nyquist frequency and that it can be trained to be invariant to noise sources characteristic of ocean color remote sensing on images with statistics similar to those of the training dataset. Overall, the developed and critically assessed sub-meter ocean color remote sensing paradigm enables researchers to collect high-fidelity sub-meter data from imaging spectrometers in unique ocean color scenarios.
Thesis: Ph. D. in Mechanical and Oceanographic Engineering, Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 181-199).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130848</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanostructured electroadhesive and electrofrictive surfaces for dexterous grasping and manipulation</title>
<link>https://hdl.handle.net/1721.1/130847</link>
<description>Nanostructured electroadhesive and electrofrictive surfaces for dexterous grasping and manipulation
Nayakanti, Nigamaa.
Robots will become commonplace in personal, public, and working environments and must be equipped for a broad range of manipulation tasks and physical encounters. Traditional robotic grasping methodologies such as opposed fingers, suction cups, and gecko-inspired adhesion have advanced, yet new grasping technologies are needed, especially for delicate and fragile objects. One promising surface adhesion methodology is electroadhesion, which uses patterns of electrodes that induce charge in a target object. Grasping by electroadhesion is attractive because of its capability to produce surface adhesion on a wide range of substrates, its electrically controllable nature, and its low power consumption and mechanical simplicity. However, most studies have been restricted to the design of flat, rigid electroadhesive pads and lack an understanding of the electrostatically driven interfacial contact mechanics crucial for scalability and successful deployment. This thesis aims to develop the fundamental understanding necessary to realize scalable, high-performance electroadhesives built from compliant nanostructured surfaces. In particular, an array of conductive nanofibers (specifically carbon nanotubes) coated with a thin dielectric exhibits extremely low off-state adhesion yet high on-state adhesion, because these fibers not only conform to the target object but also provide high polarization due to the thin dielectric coating. In this thesis, the following interrelated research objectives are presented: (1) Analytical and numerical modeling of charge accumulation in conductive nanofiber arrays coated with thin dielectric materials. (2) Investigation of the coupling between electrostatic charging of conductive nanofibers and their surface compliance, morphology, and adhesion, using theoretical contact mechanics models. 
(3) Corroboration of the theoretical contact mechanics model for adhesion using molecular dynamics simulations. (4) Experimental investigation of tunable adhesion and friction using atomic force microscopy (AFM). The ultimate goal of this thesis is therefore to enable the design of SNEs with electrically tunable adhesion and friction toward a universal grasping methodology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 115-119).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130847</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrology and mechanics for manufacturing space-based x-ray grating spectrometers</title>
<link>https://hdl.handle.net/1721.1/130846</link>
<description>Metrology and mechanics for manufacturing space-based x-ray grating spectrometers
Song, Jungki.
Small errors in critical dimensions (CDs) or deformation of optical components can lead to severe performance degradation in high-resolution imaging and spectroscopy tools. Consistent innovation toward more precise optical elements and assembly procedures has led to high-resolution optical systems in many fields, including telescope, microscopy, lithography, and display technologies. A space x-ray telescope must satisfy even more stringent requirements, as it observes distant space objects using a limited number of x-ray photons in a harsh space environment. The optical instruments for x-ray telescopes need to be high-resolution, efficient in collecting x-ray photons, and lightweight. Optical elements in x-ray telescopes have large apertures (typically around 1-2 m²), realized by populating them with &gt;1000 high-quality optical sub-elements (i.e., mirrors or gratings). In this thesis, we limit our attention to the x-ray grating spectrometer, one of the essential elements in x-ray telescopes. It is placed downstream of the focusing optics and ahead of the x-ray detector to disperse non-monochromatic x rays from distant sources for space-based x-ray spectroscopy. A critical-angle transmission (CAT) grating, a lightweight, freestanding, high-aspect-ratio x-ray grating with 200-nm period and 4 [mu]m depth, is a building block for grating spectrometers. More than 1000 high-quality CAT gratings need to be manufactured and precisely aligned within tolerance to build future CAT grating spectrometers. This thesis attacks this manufacturing challenge through 1) inventing metrologies for characterizing CDs, 2) developing alignment processes, and 3) performing design and analysis of CAT grating structural supports. 
First, a metrology to characterize the period variation of CAT gratings was developed. Metrology repeatability of 0.004 nm rms was achieved, successfully characterizing period variations of 0.018 nm rms (1 [sigma]) over large-area CAT gratings patterned with traditional interference lithography. The demonstrated metrology uncertainty and period variations fulfill the requirements for future x-ray telescope missions. Second, alignment metrology and protocols were developed, demonstrating an ability to align multiple CAT gratings within alignment requirements (&lt;6 arcmin, or 0.1 deg). The developed alignment protocol is reliable and scalable for flight-level alignment, for which a large volume (&gt;1000) of CAT gratings needs to be aligned in a fast and accurate manner. Third, a metrology to characterize grating bar tilt variations was developed using small-angle x-ray scattering and a laser setup. The developed metrology demonstrated repeatability of &lt;0.01 deg (1 [sigma]) and accuracy of ~0.08 deg (4.8 arcmin). It successfully characterized bar tilt angle variations in CAT gratings, and the results agree well with synchrotron measurements. It enabled us to close the loop in process optimization for CAT grating fabrication and contributed to suppressing bar tilt (or blaze) error within tolerance (&lt;6 arcmin, or 0.1 deg). Fourth, analytical and finite element studies were performed to design CAT grating structural supports that minimize x-ray blockage at a given stiffness. The in-plane and out-of-plane stiffnesses of several 2D-lattice topologies were examined. A triangular lattice shows a 23-580% stiffness improvement (depending on the mode of stiffness) over the current hexagonal lattice design for the same open area fraction. Adopting the new 2D lattice design is expected to increase the open area fraction by ~5% without compromising stiffness.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 223-231).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130846</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction, analysis, and learning of advective transport in dynamic fluid flows</title>
<link>https://hdl.handle.net/1721.1/130845</link>
<description>Prediction, analysis, and learning of advective transport in dynamic fluid flows
Kulkarni, Chinmay Sameer.
Transport of any material quantity due to background fields, i.e., advective transport, in fluid dynamical systems has been a widely studied problem. It is of crucial importance in classical fluid mechanics, geophysical flows, micro- and nanofluidics, and biological flows. Even though mathematical models that thoroughly describe such transport exist, the inherent nonlinearities and the high dimensionality of complex fluid systems make it very challenging to develop the capabilities to accurately compute and characterize advective material transport. We systematically study the problems of predicting, uncovering, and learning the principal features of advective material transport in this work. The specific objectives of this thesis are to: (i) develop and apply new numerical methodologies to compute the solutions of advective transport equations with minimal errors and theoretical guarantees, (ii) propose and theoretically investigate novel criteria to detect sets of fluid parcels that remain the most coherent or incoherent throughout an extended time interval in order to quantify fluid mixing, and (iii) extend and develop new machine learning methods to infer and predict transport features, given snapshot data about passive and active material transport. The first part of this work deals with the development of the PDE-based 'method of flow map composition', a novel methodology to compute the solutions of the partial differential equations describing classical advective and advective-diffusive-reactive transport. The method of composition yields solutions almost devoid of numerical errors and is readily parallelizable. It can compute more accurate solutions in less time than traditional numerical methods. We also complete a comprehensive theoretical analysis and analytically obtain the value of the numerical timestep that minimizes the net error. 
The method of flow map composition is extensively benchmarked, and its applications are demonstrated in several analytical flow fields and realistic data-assimilative ocean plume simulations. We then utilize the method of flow map composition to analyze Lagrangian material coherence in dynamic open domains. We develop new theory and schemes to efficiently predict the sets of fluid parcels that remain either the most or the least coherent over an extended period of time. We also prove that these material sets are the ones that maximally resist advective stretching and diffusive transport. Thus, they are of significant importance in understanding the dynamics of fluid mixing and form the skeleton of material transport in unsteady fluid systems. The developed theory and numerical methods are utilized to analyze Lagrangian coherence in analytical and realistic scenarios. We emphasize realistic marine flows with multiple time-dependent inlets and outlets, and demonstrate applications in diverse dynamical regimes and several open-ocean regions. The final part of this work investigates the machine inference and prediction of the principal transport features from snapshot data about the transport of some material quantity. Our goals include machine learning the underlying advective transport features, coherent/incoherent sets, and attracting and repelling manifolds, given snapshots of advective and advective-diffusive material fields. We also infer and predict high-resolution transport features by optimally combining coarse-resolution snapshot data with localized high-resolution trajectory data. To achieve these goals, we use and extend recurrent neural networks, including a combination of long short-term memory networks with hypernetworks. We develop methods that leverage our knowledge of the physical system in the design and architecture of the neural network and enforce the known constraints that the results must satisfy (e.g. 
mass conservation) in the training loss function. This allows us to train the networks only with partial supervision, without samples of the expected output fields, and still infer and predict physically consistent quantities. The developed theory, methods, and computational software are analyzed, validated, and applied to a variety of analytical and realistic fluid flows, including high-resolution ocean transports in the Western Mediterranean Sea.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 251-282).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130845</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered microvascular brain-on-a-chip model for the study of tumor progression</title>
<link>https://hdl.handle.net/1721.1/130844</link>
<description>Engineered microvascular brain-on-a-chip model for the study of tumor progression
Hajal, Cynthia.
Cancers of the brain tend to be among the most fatal, due to their rapid rates of growth and the difficulty of transporting therapeutics across the blood-brain barrier (BBB), one of the tightest vascular barriers in humans¹. High-grade glioma, the most common type of primary brain cancer, has one of the worst prognoses of all cancers, with a five-year survival rate of ~2.4%²⁻⁴. In addition, an estimated 20% of all cancer patients develop metastatic tumors in the brain⁵. Highly lethal, these stem from circulating tumor cells in brain capillaries that transmigrate into the parenchyma despite the presence of highly regulated transport mechanisms at the BBB⁶⁻⁸. The lack of physiologically relevant in vitro human BBB models, as well as the challenges involved in translating results from animal experiments to the clinic, has significantly hindered progress in improving patient prognoses⁹⁻¹¹. A better understanding of the mechanisms of tumor progression in the brain, in a microvascular human brain-on-a-chip model that allows for high spatio-temporal resolution imaging, is critical to developing new therapeutic strategies that address tumor extravasation across the BBB and glioma-BBB interactions. In this thesis, we develop an in vitro microvascular model of the human BBB in a microfluidic chip to assess the cellular and molecular interactions between cancer cells and brain stromal cells. The self-assembled BBB vascular networks are generated with induced pluripotent stem cell-derived endothelial cells, primary brain pericytes, and astrocytes. The addition of brain stromal cells resulted in improved barrier function and decreased vessel permeabilities, comparable to in vivo measurements. In addition, the engineered model has the capability to recapitulate the early steps of the metastatic cascade in the brain, as well as primary tumor progression and interaction with the BBB, in real time via confocal microscopy. 
The BBB microvascular assay is then employed to obtain biological insights into the roles of brain stromal cells in the extravasation of cancer cells from various primary sites. Particularly, astrocytes are identified to play a major role in tumor transmigration through their secretion of CCL2. This chemokine is internalized by CCR2-expressing tumor cells and promotes their extravasation via both chemotaxis and chemokinesis. The translational strength of our in vitro BBB model was validated in vivo in mouse brains. We uncovered that CCR2 knock-down on tumor cells significantly reduces transmigration and can thus be harnessed as a potential therapeutic strategy to mitigate the early steps of the metastatic cascade at the brain.; Furthermore, we expand upon the current BBB assay to recapitulate the complex tumor-stroma interactions with the incorporation of a high-grade glioma spheroid in the in vitro brain vasculature. Specifically, we explore the mechanisms of drug delivery, across the BBB and into the brain tumor, of layered nanoparticles that bind to endothelial receptors. With this novel platform and in vivo validation in glioma tumor-bearing mice, we demonstrate that transport occurs via transcytosis and is improved with LRP1-binding nanoparticles compared to control carriers, particularly across the vasculature near the glioma tumor. Keywords: cancer, extravasation, blood-brain barrier, microfluidics, organ-on-a-chip, glioma, nanoparticle
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 100-114).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130844</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and optimization of shared mobility on demand : dynamic routing and dynamic pricing</title>
<link>https://hdl.handle.net/1721.1/130843</link>
<description>Design and optimization of shared mobility on demand : dynamic routing and dynamic pricing
Guan, Yue, Ph. D., Massachusetts Institute of Technology.
Mobility of people and goods has been critical to urban life ever since cities emerged thousands of years ago. With the ushering in of Cyber-Physical Systems enabled by the development of smart mobile devices, telecommunication technologies, as well as affordable, accessible and powerful computing resources, new paradigms are revolutionizing urban mobility. Among these, Shared Mobility on Demand Service (SMoDS) has changed the landscape of urban transportation, providing alternatives with a customized combination of affordability, flexibility, and carbon footprint. Dynamic routing and dynamic pricing are two central pillars of an SMoDS solution, where the former offers customized routes according to the specific passenger request and real time traffic conditions, and the latter provides incentive signals that appropriately influence the passengers' subscription to the service. Although emerging SMoDS solutions have seen remarkable successes, further improvements are needed.; In this thesis, we present an integrated SMoDS design with dynamic routing and dynamic pricing that introduces two major improvements over the state of the art: (i) enhanced optimality in travel times through dynamic routing with added spatial flexibility, and (ii) explicit accommodation of behavioral modelling of empowered passengers so as to lead to an accurate dynamic pricing strategy. The first part of this thesis focuses on the development of the dynamic routing framework with a new concept of space window.
To accommodate the complexity introduced by space window in the optimization of dynamic routes, we propose an algorithm based upon the Alternating Minimization (AltMin) paradigm, and demonstrate an order of magnitude improvement in computational efficiency compared to benchmarks provided by standard solvers.; The second part of this thesis, related to dynamic pricing, is broken down into two modules, with the first related to behavioral modelling of empowered passengers based on Cumulative Prospect Theory (CPT). The CPT based behavioral model is able to capture the subjective and potentially irrational behaviors of passengers when deciding upon the SMoDS ride offer amidst uncertainties and risks associated with framing effects, loss aversion, diminishing sensitivity, and probability distortion. Key properties and the implications of the CPT based passenger behavioral model on dynamic pricing are discussed in detail. The second module of dynamic pricing determines the desired probability of acceptance from each passenger so as to optimize key performance indicators of the SMoDS such as the estimated waiting time.; A Reinforcement Learning (RL) based approach combined with the problem formulation in the form of a Markov Decision Process (MDP) is used to estimate this desired probability of acceptance. The proposed RL algorithm deploys an integrated planning and learning architecture where the planning phase is carried out by a lookahead tree search, and the learning phase is achieved via value iteration using a neural network as the value function approximator. Two major challenges that arise in this context are the varying dimension of the underlying state and the arrival of information in a sequential manner where long-term dependency needs to be preserved.
These are addressed through the incorporation of Long Short-Term Memory (LSTM), convolutional and fully-connected layers.; Their judicious incorporation in the underlying neural network architecture allows the extraction of this information and successful estimation of the desired probability of acceptance that leads to the optimization of the SMoDS. A number of computational experiments are carried out using various datasets of large-scale problems and demonstrate the superior capability of the proposed RL algorithm.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 181-192).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130843</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference, estimation, and prediction for stable operation of modern electric power systems</title>
<link>https://hdl.handle.net/1721.1/130842</link>
<description>Inference, estimation, and prediction for stable operation of modern electric power systems
Chevalier, Samuel Chapman.
To keep pace with social-ecological disruptions and technological progressions, electrical power systems must continually adapt. In order to address the stability-related challenges associated with these adaptations, this thesis develops a set of analytically rigorous yet practically oriented methods for ensuring the continued stability of modern power systems. By leveraging inference, estimation, and predictive modeling techniques, the proposed methods capitalize on the unprecedented amount of real time data emerging from modernizing smart grids. For each method, we provide simulated test results from IEEE benchmark systems. Newly deployed Phasor Measurement Units (PMUs) are observing the presence of detrimental low frequency forced oscillations (FOs) in transmission grid networks. To begin this thesis, we address the problem of locating the unknown sources of these FOs.; To perform source identification, we develop an equivalent circuit transformation which leverages suitably constructed transfer functions of grid elements. Since FO sources appear in this equivalent circuit as independent current injections, a Bayesian framework is applied to locate the most probable source of these injections. Subsequently, we use our equivalent circuit to perform a systematic investigation of energy-based source identification methods. We further leverage this equivalent circuit transformation by developing "plug-and-play" stability standards for microgrid networks that contain uncertain loading configurations. As converter-based technology declines in cost, microgrids are becoming an increasingly feasible option for expanding grid access. Via homotopic parameterization of the instability drivers in these tightly regulated systems, we identify a family of rotational functions which ensure that no eigenmodes can be driven unstable.; Any component which satisfies the resulting standards can be safely added to the network, thus allowing for plug-and-play operability. 
High-fidelity linearized models are needed to perform both FO source identification and microgrid stability certification. Furthermore, as loss of inertia and real-time observability of grid assets accelerate in tandem, real-time linearized modeling is becoming an increasingly useful tool for grid operators. Accordingly, we develop tools for performing real-time predictive modeling of low frequency power system dynamics in the presence of ambient perturbations. Using PMU data, we develop a black-box modeling procedure, known as Real-Time Vector Fitting (RTVF), that takes explicit account of initial state decay and concurrently active input signals. We then outline a proposed extension, known as stochastic-RTVF, that accounts for the corrupting effects of unobservable stochastic inputs.; The surrogate modeling utilized by vector fitting can also be applied to the steady state power flow problem. Due to an unprecedented deployment of distributed energy resources, operational uncertainty in electrical distribution networks is increasing dramatically. To address this challenge, we develop a methodology for speeding up probabilistic power flow and state estimation routines in distribution networks. We do so by exploiting the inherently low-rank nature of the voltage profile in these systems. The associated algorithms dynamically generate a low-dimensional subspace which is used to construct a projection-based reduced order model (ROM) of the full nonlinear system. Future system solves using this ROM are highly efficient.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 261-277).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130842</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry-driven filamentary structures : elastic gridshells, weaves, clasps, and knots</title>
<link>https://hdl.handle.net/1721.1/130841</link>
<description>Geometry-driven filamentary structures : elastic gridshells, weaves, clasps, and knots
Baek, Changyeob.
In this thesis, we cover four research topics in the realm of the mechanics of slender structures involving strong geometric constraints: elastic gridshells, triaxial weaves, elastic clasps, and elastic knots. These studies involve a combination of geometric reasoning, high-fidelity numerical simulations, and precision model experiments using scale-invariance and advanced imaging techniques (e.g., 3D laser scanning, and X-ray computed tomography). First, we study the shape and the mechanical response of elastic gridshells, the three-dimensional structure of which results from the out-of-plane buckling of an initially flat and biaxial network of rods. A purely geometric continuum model, originally introduced by Chebyshev for woven fabric, is used to describe the underlying kinematics and form-finding. The results suggest that rod inextensibility, rather than elasticity, is the primary factor that determines the shape of elastic gridshells.; Second, we investigate triaxial weaving, a craft technique used to generate surfaces using tri-directional arrays of initially straight elastic ribbons. Traditional weavers intentionally introduce discrete topological defects, leading to unsmooth surfaces in the overall structure. As an alternative point of departure, we achieve smooth, three-dimensional woven structures by prescribing in-plane curvatures to the flat ribbons. We demonstrate that a continuous range of integrated Gaussian curvatures can be achieved, which is not feasible using straight ribbons. The potential of this novel design scheme is demonstrated with a few canonical target shapes.; Third, we investigate the mechanics of two elastic rods in a crossing contact, whose geometric counterpart is often referred to in the mathematics community as a 'clasp.'
We compare our experimental and computational results to a well-established description for ideal clasps of geometrically rigid strings, finding that the latter acts as an underlying 'backbone' for the full elasticity solution. Our findings suggest that the tight contact between rods must be analyzed as a three-dimensional solid, not a one-dimensional rod. We also study a frictional elastic clasp with relative motion between the two rods. Finally, we present preliminary results on the full three-dimensional finite element method simulations of tight elastic knots, as a continuing discussion of tight contact between filaments. Our numerical results reveal significant deviations for the tight knots from existing one-dimensional models for loose overhand knots.; Our findings corroborate the three-dimensional nature of the tight contact that we demonstrated during the investigation of the elastic clasp.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 217-232).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130841</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technology-assisted coaching : a system for Children's literacy learning</title>
<link>https://hdl.handle.net/1721.1/130837</link>
<description>Technology-assisted coaching : a system for Children's literacy learning
Nazare, Juliana Toni.
Children learn best when knowledgeable adults support their learning process. Yet many learning technologies have not yet incorporated this vital social dimension. In response, we develop a technology-assisted coaching system, where a new adult collaborator--a coach--uses digital tools to support children and their families as they use children's literacy apps. This coaching system blends in-person and digital coaching in order to preserve the relational elements vital to coaching, while harnessing the power of digital technology to make information easily accessible to coaches, children, and families at their convenience. In our system, as children play with literacy apps, every tap and click of their play is streamed to their coach through our digital coaching platform. Using custom-built digital tools, coaches engage in four core coaching practices.; They analyze children's in-app activity, scaffold their learning, share progress with caregivers, and invite caregivers to engage in literacy learning experiences with their children. To develop this system, we iteratively designed, built, and evaluated it with approximately a hundred children and their families. We conducted an in-depth study of two versions of the system's design through a randomized controlled trial (RCT) and a formative pilot study. From the RCT, we found that the coaching system increased caregivers' awareness of their children's in-app play and children's playtime with the literacy app. We also found that for families with lower formal education levels, the effect of the coaching system was greater across almost all outcome variables investigated.
In both studies, we found coaches were able to use our digital coaching tools to effectively engage in the four core coaching practices, and that these tools helped increase coaches' efficiency.; Based on our findings, we discuss changes to the system's design to improve and scale this approach and provide design considerations for building digital coaching systems. Through the creation and in-depth study of a novel sociotechnical system for coaching children's literacy learning, this work contributes to the field of learning technology. We hope this work serves as a helpful guide to designers, developers, and policymakers as they create and scale-up these types of digital networks for children's learning.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 279-290).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130837</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translational design computation</title>
<link>https://hdl.handle.net/1721.1/130836</link>
<description>Translational design computation
Bader, Christoph, Ph. D., Massachusetts Institute of Technology.
Synergetic tensions have evolved the dichotomy between the physical and digital design domains into a symbiotic unity. New capabilities in digital fabrication give rise to sophisticated tools of computational design, while new affordances in computational design inspire innovation in digital fabrication. The role of design in this process is that of synthesis through mediation. As designers, we mediate between different principles and fields, and their synergies and conflicts generate new elements of design. The challenge to mediate in a universal language across domains becomes critical as a third domain encompassing biological entities grows more amenable to design. Biological systems offer reproduction, self-organization and growth -- among other features and benefits -- which in turn enable previously unattainable properties in design systems. At the same time, their own modes of intelligence, expression, and agency prompt a promising shift in design thinking. This thesis hypothesizes that the relations across design domains can be established through translational design computation, which is a framework that uses computational design as a language to mediate between physical, digital, and biological entities. We build this framework in two parts -- Systems and Mediations. The first part, Systems, explores whether computational design can serve as a mediating language between the three entities. The second part, Mediations, examines how these mediations can occur. In Systems, we show that computational design can mediate between living and nonliving matter along the spectrum of biomimetic, biointegrated, and biosynthetic systems.
As part of this, we demonstrate three systems of computational mediation: (i) programmable matter applies computational design to physical systems to enable biologically inspired design strategies, (ii) programmable templating applies computational design to the intersection of physical and biological systems to facilitate synergistic relationships, and (iii) programmable growth applies computational design to biological systems to give rise to material architectures.; In Mediations, we present dynamic, synergetic, and emergent strategies for how computational mediations can occur within cocreation systems. The living and nonliving parts of any cocreation system may interact to form synergies. Combined, these synergies produce complexes that give rise to new macro-level organizations -- products of the synergies of the parts and not simply of the parts themselves. Thus, the mediation between physical, digital, and biological entities needs to address the design of dynamic relations guiding synergetic behaviors, the design of the synergetic behaviors themselves, or ultimately the design of emergent self-expression of the system. Throughout this thesis, the framework is developed theoretically and applied in practice. It is documented in publications such as Making Data Matter and Hybrid Living Materials and projects such as Wanderers, Living Mushtari, the Vespers Series, Rottlace, Lazarus, Totems, Fiberbots, and Silk Pavilion II.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 218-240).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130836</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-organized fine-tuned response in a driven spin glass</title>
<link>https://hdl.handle.net/1721.1/130835</link>
<description>Self-organized fine-tuned response in a driven spin glass
Gold, Jacob Mitchell.
In this thesis, I investigate the principles that can be used to predict the behavior of a many-body system when an external drive is applied. I consider a spin glass as a prototypical model of such a system, and investigate these principles through simulation. I find that spins differentiate into slow spins which decouple from the drive and fast spins which couple more strongly to the drive, resulting in macroscopic quantities like work absorption rate and internal energy decreasing as compared to the near-equilibrium distribution. Which spins fall into which categories is specific to a particular realization of the external drive; changing to another drive changes which spins are fast and which are slow, revealing a drive-specific adaptation. I investigate limits on the memory of the system, and demonstrate the system's capability to identify changes in real-world images.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 83-89).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130835</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance and economics of monovalent selective electrodialysis desalination for irrigation</title>
<link>https://hdl.handle.net/1721.1/130834</link>
<description>Performance and economics of monovalent selective electrodialysis desalination for irrigation
Ahdab, Yvana D. (Yvana Damiella)
Reverse osmosis (RO) is the most widely used desalination technology for the treatment of irrigation source water and wastewater. Brackish groundwater, seawater, and agricultural effluent often contain both monovalent ions damaging to crops (Na⁺, Cl⁻) and divalent ions beneficial for crops (Ca²⁺, Mg²⁺, SO₄²⁻). RO removes both types of ions. These beneficial ions must then be reintroduced to the desalinated water through the addition of fertilizer. Monovalent selective electrodialysis (MSED) demonstrates greater potential to align with the needs of the agriculture sector. MSED is a variant of conventional electrodialysis (ED). MSED preferentially removes monovalent ions relative to multi-valent ions, defined as monovalent selectivity, via selective ion-exchange membranes. MSED operates at a significantly higher water recovery than RO.; In the treatment of irrigation source water, MSED's selective removal may reduce fertilizer requirements and associated costs, while its greater recovery saves water and decreases the volume of brine for disposal. In the treatment of agricultural wastewater, MSED's selective removal of sodium, the biggest barrier to water reuse, may help greenhouses achieve minimal liquid discharge. Despite the possible economic and environmental benefits of MSED, the technology has not been commercially employed for the treatment of agricultural water. Rather, it has historically been used to concentrate sodium chloride from seawater brine for salt production. Consequently, the literature has focused on characterizing and designing MSED systems almost exclusively for high salinity applications. Because water composition greatly influences membrane behavior, separate analyses must be conducted to determine how MSED will perform for lower salinity applications relevant to agriculture.; This thesis investigates the membrane performance, energetics, and economics of MSED for the treatment of irrigation source water and wastewater.
Experiments are conducted on two types of MSED membranes, one of which has never been tested in the literature, to characterize the following system parameters as a function of feedwater composition: monovalent selectivity, ion transport, membrane resistance, membrane permeability, and limiting current density. Feedwaters used in the present MSED experiments simulate seawater and numerous compositions of brackish groundwater and agricultural effluent, which often vary with location. We find that both MSED membranes demonstrate notable monovalent selectivity for all tested feedwaters, although the selectivity varies with ionic composition and salinity. The experimentally-determined system parameters then serve as inputs to our MSED cost model.; This model evaluates fertilizer and water savings as a function of farm size for the different feedwaters and membranes. These savings are weighed against the greater capital and operating costs of MSED relative to RO, in order to determine the feasibility of MSED adoption for irrigation. While the energy consumption of MSED is comparable to that of RO for the treatment of brackish water and wastewater, MSED requires significantly more energy to desalinate seawater. Solar powered, in addition to conventionally powered, desalination is integrated into the cost model for seawater. The insights described in this thesis suggest that MSED may be the future of desalination for agriculture, particularly for brackish water and wastewater treatment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 261-276).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130834</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and applications of copper(I) hydride catalysis in asymmetric reactions and heterocycle synthesis</title>
<link>https://hdl.handle.net/1721.1/130831</link>
<description>Development and applications of copper(I) hydride catalysis in asymmetric reactions and heterocycle synthesis
Zhou, Yujing, Ph. D., Massachusetts Institute of Technology.
Chapter 2. Enantioselective CuH-Catalyzed Hydroacylation Employing Unsaturated Carboxylic Acids as Aldehyde Surrogates The direct asymmetric copper hydride (CuH)-catalyzed coupling of α,β-unsaturated carboxylic acids to aryl alkenes is developed to access chiral α-aryl dialkyl ketones. A variety of substrate substitution patterns, sensitive functional groups and heterocycles are tolerated in this reaction, which significantly expands the range of accessible products compared to existing hydroacylation methodology. Although mechanistic studies are ongoing, we propose that CuH-catalyzed silylation of unsaturated acids occurs to access a uniquely effective acyl electrophilic coupling partner. Chapter 3. CuH-Catalyzed Asymmetric Reduction of α,β-Unsaturated Carboxylic Acids to β-Chiral Aldehydes The copper hydride (CuH)-catalyzed enantioselective reduction of α,β-unsaturated carboxylic acids to saturated aldehydes is reported.; This protocol provides a new method to access a variety of β-chiral aldehydes in good yields, with high levels of enantioselectivity and broad functional group tolerance. A reaction pathway involving a ketene intermediate is proposed based on preliminary mechanistic studies and density functional theory calculations. Chapter 4. CuH-Catalyzed Asymmetric Reductive Amidation of α,β-Unsaturated Carboxylic Acids The direct enantioselective copper hydride (CuH)-catalyzed synthesis of β-chiral amides from α,β-unsaturated carboxylic acids and secondary amines under mild reaction conditions is reported. The method utilizes readily accessible carboxylic acids, and tolerates a variety of functional groups at the β-position including several heteroarenes. A subsequent iridium-catalyzed reduction to γ-chiral amines can be performed in the same flask without purification of the intermediate amides.
Chapter 5. CuH-Catalyzed Asymmetric Hydroamidation of Vinylarenes A CuH-catalyzed enantioselective hydroamidation reaction of vinylarenes has been developed using readily accessible 1,4,2-dioxazol-5-ones as electrophilic amidating reagents. This method provides a straightforward and efficient approach to synthesize chiral amides in good yields with high levels of enantiopurity under mild conditions. Moreover, this transformation tolerates substrates bearing a broad range of functional groups. Chapter 6. Enantioselective Allylation Using Allene, a Petroleum Cracking Byproduct Allene (C₃H₄) gas is produced and separated on million-metric-ton scale per year during petroleum refining but is rarely employed in organic synthesis. Meanwhile, the addition of an allyl group (C₃H₅) to ketones is among the most common and prototypical reactions in synthetic chemistry.; Herein, we report that the combination of allene gas with inexpensive and environmentally benign hydrosilanes, such as PMHS, can serve as a replacement for stoichiometric quantities of allylmetal reagents, which are required in most enantioselective ketone allylation reactions. This process is catalyzed by a copper catalyst and commercially available ligands, operates without specialized equipment or pressurization, and tolerates a broad range of functional groups. Furthermore, the exceptional chemoselectivity of this catalyst system enables industrially relevant C3 hydrocarbon mixtures of allene with methylacetylene and propylene to be applied directly. Based on our strategy, we anticipate the rapid development of methods that leverage this unexploited feedstock as an allyl anion surrogate. Chapter 7. Synthesis of Pyrroles through the CuH-Catalyzed Coupling of Enynes and Nitriles Herein, we describe an efficient method to prepare polysubstituted pyrroles via a copper-hydride (CuH)-catalyzed enyne-nitrile coupling reaction.
This protocol accommodates both aromatic and aliphatic substituents and a broad range of functional groups, providing a variety of N-H pyrroles in good yields and with high regioselectivity. We propose that the Cu-based catalyst promotes both the initial reductive coupling and subsequent cyclization steps. Density functional theory (DFT) calculations were performed to help elucidate the reaction mechanism.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130831</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of the single cell genomics in deciphering tumor heterogeneity and its role in tumor progression and drug resistance</title>
<link>https://hdl.handle.net/1721.1/130830</link>
<description>Application of the single cell genomics in deciphering tumor heterogeneity and its role in tumor progression and drug resistance
Marjanovic, Nemanja.
Tumor progression, from the single mutated cell to the advanced stages of cancer, represents an evolutionary process. As tumors progress, cancer cells acquire new genetic mutations and become more heterogeneous, driving progression and resistance to therapy. However, clear genetic drivers of progression, metastasis, and therapeutic resistance are identified in only a subset of tumors, pointing to non-genetic contributors to cancer progression. Moreover, somatic evolution in cancer occurs at the level of the single cell. Therefore, the application of single cell genomic methods is crucial for deciphering phenotypic heterogeneity. Here, we profiled single cell transcriptomes from genetically engineered mouse lung tumors at seven stages spanning tumor progression from atypical adenomatous hyperplasia to lung adenocarcinoma. The diversity of transcriptional states spanned by tumor cells increased over time and was reproducible across tumors and mice, but was not explained by genomic copy number variation. Cancer cells progressively adopted alternate lineage identities, computationally predicted to be mediated through a common transitional, high-plasticity cell state (HPCS). HPCS cells prospectively isolated from mouse tumors had robust potential for phenotypic switching and tumor formation and were more chemoresistant in mice. Our study reveals transitions that connect cell states across tumor evolution and motivates therapeutic targeting of the HPCS.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130830</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capture and control of excitations</title>
<link>https://hdl.handle.net/1721.1/130829</link>
<description>Capture and control of excitations
Sinclair, Timothy S.(Timothy Scott)
Fundamental understanding of the capture and control of excitations, including photons and excitons, in optoelectronic devices is important to optimizing their performance. Devices such as luminescent solar concentrators and signal concentrators, which increase the efficiency of solar energy production and the speed of point-to-point communication, respectively, will be crucial for maximizing the sustainability and connectedness of the world going forward. Behind the workings of these devices are micro-scale interactions of excitations with the device materials that must be carefully modeled and well understood. In this thesis, I model the performance of both luminescent solar concentrators and signal concentrators using the Monte Carlo method to predict the efficiency from the average results of many trials of quantum behavior. For each of these devices, I propose a path to improved performance. For luminescent solar concentrators, this is the use of tandem fluorophores. In this approach, the addition of a second fluorophore material increases the amount of sunlight that can be absorbed without interfering with the efficiency at which the first fluorophore collects solar photons. For signal concentrators, this is a multi-aggregate fluorophore with &lt;100 ps fluorescence lifetime that does not re-absorb its own emission because of the introduction of an artificial Stokes shift. In addition, in this thesis I model the photophysical properties of the C8S3 J-aggregate to understand two of its properties: the long exciton migration it exhibits, and its ability to be irreversibly photobrightened and reversibly photodarkened under continuous illumination. I show that the exciton migration distance is due to strong nearest-neighbor coupling along a helical direction that is aligned close to the axis of the aggregate tube, while the photobrightening and photodarkening behaviors are due to changes in two different types of disorder.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 76-81).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130829</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for analyzing and modeling gene regulation and 3D genome organization</title>
<link>https://hdl.handle.net/1721.1/130828</link>
<description>Computational methods for analyzing and modeling gene regulation and 3D genome organization
Belyaeva, Anastasiya.
Biological processes from differentiation to disease progression are governed by gene regulatory mechanisms. Currently, large-scale omics and imaging data sets are being collected to characterize gene regulation at every level. Such data sets present new opportunities and challenges for extracting biological insights and elucidating the gene regulatory logic of cells. In this thesis, I present computational methods for the analysis and integration of various data types used for cell profiling. Specifically, I focus on analyzing and linking gene expression with the 3D organization of the genome. First, I describe methodologies for elucidating gene regulatory mechanisms by considering multiple data modalities. I design a computational framework for identifying colocalized and coregulated chromosome regions by integrating gene expression and epigenetic marks with 3D interactions using network analysis.
Then, I provide a general framework for data integration using autoencoders and apply it to the integration and translation between gene expression and chromatin images of naive T-cells. Second, I describe methods for analyzing single modalities such as contact frequency data, which measures the spatial organization of the genome, and gene expression data. Given the important role of the 3D genome organization in gene regulation, I present a methodology for reconstructing the 3D diploid conformation of the genome from contact frequency data. Given the ubiquity of gene expression data and the recent advances in single-cell RNA-sequencing technologies, as well as the need for causal modeling of gene regulatory mechanisms, I then describe an algorithm and software tool, difference causal inference (DCI), for learning causal gene regulatory networks from gene expression data.
DCI addresses the problem of directly learning differences between causal gene regulatory networks given gene expression data from two related conditions.
Finally, I shift my focus from basic biology to drug discovery. Given the current COVID-19 pandemic, I present a computational drug repurposing platform that enables the identification of FDA-approved compounds for drug repurposing and the investigation of potential causal drug mechanisms. This framework relies on identifying drugs that reverse the signature of the infection in the space learned by an autoencoder and then uses causal inference to identify putative drug mechanisms.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 261-281).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130828</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactions of Kr(F₂), O₂, and (O₂)₂ with Si(100)</title>
<link>https://hdl.handle.net/1721.1/130827</link>
<description>Interactions of Kr(F₂), O₂, and (O₂)₂ with Si(100)
Sathitwitayakul, Thanasak.
The high reactivity of XeF₂ with fluorinated Si is proposed to arise from localized vibrational excitation of the Si lattice as a result of the multiple collisions required to reverse the momentum of the heavier XeF₂ incident on the lighter Si lattice. This study is part of a series of experimental studies testing the hypothesis that the mass of the incident molecule is responsible for the collision-induced vibrational excitation that leads to enhanced reactivity, by varying the mass of the inert gas X in an X(F₂) van der Waals (vdW) molecule. The reaction probability of Kr(F₂) towards fluorinated Si(100) is measured to be 0.2±0.1, which is around two orders of magnitude higher than that of the lighter F₂ and almost one order of magnitude lower than that of the heavier XeF₂/Xe(F₂), thereby supporting the viability of the proposed mechanism. Dissociative adsorption of triplet O₂ on the singlet Si(100) surface is spin-forbidden.
A possible unexplored reason for the low but non-zero reaction probability of triplet O₂ on Si(100) is an atom abstraction reaction mechanism that circumvents the spin transition, since a complementary triplet O atom is released. Mass spectrometry experiments do not detect the complementary O atoms for O₂ with incident translational energy of 1-2 kcal/mol scattering from Si(100) at O coverages between zero and 1 ML at 150-1100 K, suggesting the absence of atom abstraction. Abstraction may be undetectable due to insufficient partitioning of reaction exothermicity into translational energy of the complementary O atom. The singlet (O₂)₂ vdW dimer is more reactive than triplet O₂ at incident translational energies of about 1 kcal/mol with Si(100) at temperatures of 250-500 K. Four-center and step-wise singlet mechanisms are proposed to be responsible for the increased (O₂)₂ reactivity.
The probability of (O₂)₂ undergoing either or both singlet mechanisms is measured to be at least 2±1 and at most 8±4 times higher than the triplet O₂ reaction probability.
This result indicates that the slow oxidation of Si(100) by triplet O₂ due to the necessary spin transition can be circumvented via oxidation by singlet (O₂)₂.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130827</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbohydrate and bacterial binding specificity of human intelectin-1</title>
<link>https://hdl.handle.net/1721.1/130825</link>
<description>Carbohydrate and bacterial binding specificity of human intelectin-1
Isabella, Christine R.(Christine Rose)
The mucosal surfaces of the human body exist in close contact with complex communities of resident microorganisms termed the microbiome. The microbiome is crucial for host health, and therefore the host must discern which microbes may colonize and which must be cleared. Human soluble lectins are secreted carbohydrate-binding proteins that bind microbes by specific recognition of cell surface glycans. Many soluble lectins are important mucosal innate immune factors, as lectins binding to microbes can result in their clearance from the host. However, the glycan and microbial binding specificities of lectins are poorly defined. In this thesis, I aim to address this gap with a focus on human intelectin-1 (hItln-1). In Chapter 1, I review the recently identified class of lectins, the X-type lectins. The X-type lectins, or intelectins, are found throughout chordates and share highly conserved sequences, but their biological roles are not well understood.
However, their expression patterns and microbial binding specificity suggest a role in regulation of the microbiome. In Chapter 2, I build on previous work to further define hItln-1 carbohydrate specificity. These studies reveal that carbohydrate conformation is stabilized by stereoelectronic effects, and that carbohydrates are bound by hItln-1 in their stabilized conformation. In Chapter 3, I turn to bacterial cell recognition by hItln-1 and determine that hItln-1 displays competitive binding to bacterial strains in a mixture. These studies reveal the need to assay lectin-bacteria recognition against diverse microbial communities to understand their binding specificity in a biological context. In Chapter 4, I develop lectin-sequencing (lectin-SEQ) as a method for identifying bacterial targets of lectins in native communities.
Using the human stool microbiome, I assess binding to stool bacteria by hItln-1 and surfactant protein-D (SP-D). Lectin-SEQ reveals that hItln-1 recognizes health-promoting commensal bacteria, while SP-D recognizes pathogenic bacteria. These results indicate a novel role for hItln-1 in promoting colonization of commensal bacteria.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130825</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein immobilization using complex coacervates and complex coacervate thin films</title>
<link>https://hdl.handle.net/1721.1/130824</link>
<description>Protein immobilization using complex coacervates and complex coacervate thin films
Sureka, Hursh Vardhan.
Enzymes can enable a wide and growing range of chemistries, often outperforming synthetic catalysts. However, enzymes must often be converted to heterogeneous catalysts. Protein immobilization enables this conversion and can enhance the stability of enzymes. Complex coacervates are highly effective at encapsulating and stabilizing enzymes. This thesis demonstrates the use of complex coacervate thin films for the immobilization of enzymes and systematically probes methods to enhance the performance of these materials. The first study presents a proof-of-concept demonstration of complex coacervate thin films for the synthesis of functional biomaterials. The immobilization method itself is all-aqueous, reducing the likelihood of enzyme denaturation, and facile, requiring only two steps: coating followed by crosslinking.
A model biosensor was synthesized and demonstrated to have both high sensitivity and selectivity, and the immobilization method imparted increased thermal stability on the enzyme. From here, two directions were explored: how protein properties affect their coacervation behavior, and optimizing the performance of the complex coacervate thin films. The second study aims to quantify the surface charge distribution, or "patchiness," of proteins and relate this to their complexation behavior. A patchiness parameter that averages pair correlations between neighboring points on the protein surface was shown to correlate with the coacervation behavior of proteins, with greater patchiness favoring the formation of complexes. Further work will enable this parameter to be incorporated with other protein properties in order to create robust predictive algorithms for protein-polymer coacervation.
The third and fourth studies aimed to enhance the performance and properties of complex coacervate thin films.
The third study probed whether the morphology of these composite materials could be controlled and found that morphologies varied greatly as a function of the polyelectrolyte strength and the loading of the encapsulated molecule. The strongest interactions led to precipitation, but weaker interactions led to micellization in both solution and the films. The fourth study aimed to understand how various polymer properties, including polyelectrolyte strength and monomer conformational freedom, affect the performance of complex coacervate thin films. Strong interactions were found to favor greater catalytic activity but lower stability, while weaker interactions favored greater stability.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130824</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental tuning of the reactivity of molecules confined to polarized interfaces</title>
<link>https://hdl.handle.net/1721.1/130823</link>
<description>Environmental tuning of the reactivity of molecules confined to polarized interfaces
Kaminsky, Corey J.(Corey Jarin)
The heterogenization of molecular catalysts to polarized interfaces provides an appealing approach to the design of more efficient and selective electrochemical devices. The well-defined nature of molecular catalysts renders them amenable to synthetic tuning to unravel structure-function relationships. From these studies, key insights into the optimization of their activity are obtained. However, recent work has established that outer-sphere effects such as the surface structure and ligation method can impact reactivity as much as catalyst structure. This thesis explores these environmental contributions to reactivity, with a particular focus on the impact of electronic coupling between a molecular site and the band structure of graphitic carbon electrodes, and on using this coupling as a tool to understand the reactivity of molecules confined to solid-liquid interfaces. Chapters two through four explore environmental contributions to porphyrin electrocatalysis.
We report on how the magnitude of electronic coupling conferred by the linkage tunes the rate of oxygen reduction catalysis. We further demonstrate solvent-dependent concerted proton electron transfer for a cobalt porphyrin attached to graphitic carbon by an alkyl tether. Building on these results, we present a mechanistic basis for the stark differences in the selectivity and activity of heterogenized and soluble cobalt porphyrins for the CO₂ reduction reaction. Chapters five and six address charge effects at solid-liquid interfaces.
In chapter five, we analyze the rate of dissociative ligand exchange for identical heterogeneous and soluble binding sites and find a modest rate enhancement that we attribute to the enhanced charge stabilization by the solid support.
Chapter six details our attempt to use ambient pressure X-ray photoelectron spectroscopy to experimentally test our previously established model for catalysts strongly electronically coupled to the band states of graphitic carbon by direct measurement of the interfacial electrostatic potential drop.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130823</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of two-component signaling systems in Pseudomonas aeruginosa and their roles in the mucus barrier</title>
<link>https://hdl.handle.net/1721.1/130821</link>
<description>Investigation of two-component signaling systems in Pseudomonas aeruginosa and their roles in the mucus barrier
Wang, Benjamin X.
All living things must be able to sense and respond to signals in their environment in order to grow and survive. For bacteria, a common means through which this occurs is via two-component signaling systems, which are typically composed of a membrane-bound histidine kinase that senses external signals, and a cognate cytoplasmic response regulator that triggers changes in gene expression. The importance of these systems is highlighted by the fact that they have been identified in the vast majority of sequenced bacteria. However, despite their prevalence, much remains to be discovered about these sensory systems. In particular, both the actual processes that these systems control and the signals that they sense and respond to remain poorly characterized in many cases.
In this work, I further characterize two-component signaling systems in the opportunistic pathogen Pseudomonas aeruginosa, which is best known for chronically infecting the lungs of patients with cystic fibrosis. I begin by screening a collection of in-frame deletion mutants of each histidine kinase in the PA14 strain against a dozen virulence-associated phenotypes, including different types of motility, biofilm formation, virulence factor production, and antibiotic resistance. Through this approach, I identify nearly two dozen of these proteins as important regulators of virulence. As P. aeruginosa is a human-associated mucosal pathogen, I next search for host-derived signals in mucus that act through these sensory systems in P. aeruginosa. I identify mucins and their associated glycans as signals that act through the RetS histidine kinase and the GacS-GacA two-component signaling system.
One major output of this signaling is the downregulation of the type VI secretion system, which suggests that mucin glycans may serve as host-derived "safety signals" that suppress microbial competition under non-dysbiotic conditions.
Finally, I characterize the molecular mechanisms by which RetS inhibits GacS activity to better understand the unusual interactions between these two histidine kinases. Overall, this work underscores the diverse and important roles that two-component signaling systems play in bacteria, and begins to shed light on how microbes like P. aeruginosa utilize these systems to sense and respond to signals in the host.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130821</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A multiscale framework for the chemomechanical characterization of ancient heterogeneous materials</title>
<link>https://hdl.handle.net/1721.1/130820</link>
<description>A multiscale framework for the chemomechanical characterization of ancient heterogeneous materials
Maragh, Janille Maria.
Over the course of hundreds, or sometimes even thousands, of years, ancient building materials have survived environmental exposure, modifications, and restorations, often to an extreme degree. These processes, which have occurred from antiquity all the way to the present, have resulted in materials that are poorly understood both chemically and mechanically. This increases the difficulty of designing conservation, preservation, and restoration strategies to ensure that future generations have access to the same ancient marvels that are accessible today. This is further complicated by the inherent value of cultural heritage materials, which makes it challenging and impractical to obtain this information using large samples.
In this work, a minimally invasive multiscale chemomechanical framework for the study and homogenization of heterogeneous composite materials of unknown composition is presented, with the ultimate goal being the chemomechanical characterization of ancient Roman mortar. First, advanced chemical characterization methods are used with computational techniques to better understand the phases present in a sample, and the application of these techniques to the study of ancient Roman mortar is presented. It is then demonstrated how the techniques developed for this study may be used for other applications, for example in the determination of ancient production technologies and in the understanding of the underlying mechanisms responsible for the durability and time resilience of ancient materials.
In a chemical characterization study of a fragment of the Temple Scroll, the longest and most visually striking of the Dead Sea Scrolls, evaporitic sulfate salts that point to a unique production process are identified. The second half of this thesis demonstrates how chemistry is interfaced with mechanics in the chemomechanical homogenization framework.
Instead of directly using ancient Roman concrete, which is of immense cultural value, the homogenization framework is tested on a series of modern mortars produced using ordinary Portland cement, which bear numerous similarities to ancient Roman mortars but are more readily accessible and more easily produced. First, scanning electron microscopy energy dispersive X-ray spectroscopy (SEM-EDS) is used with data clustering algorithms to identify the distribution of phases in each sample.
Next, microindentation is used to assign mechanical properties to each of the chemically distinct phases identified, allowing for the generation of large-area high-resolution maps of mechanical properties. In the final stage of the framework, computational homogenization is performed: the chemomechanical maps are converted to finite element models, which are subjected to uniaxial compression and pure shear simulations to obtain estimates of the effective elastic properties of the samples, which are validated using laboratory compression testing data. Finally, the framework is applied to an ancient mortar sample using phase properties from the literature to estimate its effective elastic properties.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-188).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130820</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The influence of mucin glycans on microbial virulence and competition</title>
<link>https://hdl.handle.net/1721.1/130819</link>
<description>The influence of mucin glycans on microbial virulence and competition
Wheeler, Kelsey M.(Kelsey Morgan)
The human body is colonized by trillions of microbes, many of which reside in a layer of mucus that covers all wet epithelia in the body. In this way, mucus serves as the first line of defense for the host, simultaneously protecting against pathogens while providing a habitat where commensal microbes thrive. It has long been known that defects in mucus production or biochemistry are associated with opportunistic infections; however, few studies have focused on how components of the intact mucus barrier interact with resident microbes to promote health. In this thesis, I fill this gap using a clinically relevant 3-dimensional model of the mucus environment based on mucin glycoproteins, the major structural component of mucus. This in vitro culturing system mimics the natural mucus environment, where mucin polymer domains interact and entangle into a flexible hydrogel, as opposed to 2-dimensional surface coatings, which can create artificially concentrated amounts of surface mucins. I apply this system to answer three major conceptual questions, separated into three projects. In the first project, I study the ability of mucins and their attached glycans to regulate interactions between a clinically important opportunistic pathogen, Pseudomonas aeruginosa, and its host. I then investigate the underlying genetic mechanisms that enable P. aeruginosa to sense and respond to the mucus environment, and explore how mucin glycan-sensing in turn impacts microbe-microbe interactions in the mucosal niche. I end by investigating how mucin glycan-mediated microbial regulation modulates the composition of complex microbial communities isolated from the human oral cavity. Collectively, the work presented in this thesis lays the groundwork for characterizing the therapeutic nature of mucins and how specific mucin glycan moieties modulate the behavior, pathogenicity, and competitive interactions of host-associated microbes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130819</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrative multi-omics dissection of cancer cell states and susceptibility</title>
<link>https://hdl.handle.net/1721.1/130818</link>
<description>Integrative multi-omics dissection of cancer cell states and susceptibility
Liu, Yunpeng (Biologist), Massachusetts Institute of Technology.
Cancer cells are characterized by a broad spectrum of unique genetic, epigenetic and transcriptional states, which are often concomitant with high degrees of plasticity in cell identity. These cell states and the fluidity therein are a major source of resistance to both chemotherapy and targeted therapy. Combinatorial efforts in experimental assays and computational modeling are pivotal for understanding the origins of cancer cell plasticity and exposing cell state-specific vulnerabilities. In this thesis, I will first present my studies on two clinically challenging types of hematopoietic malignancies and discuss key genes that sustain cell identity and survival programs revealed through multi-omics approaches.
In the first study, a combination of expression, chromatin binding, and chromatin accessibility analyses revealed the plant homeodomain finger-like family protein PHF6's novel functions as a lineage identity regulator in a mouse model of BCR-ABL-driven B cell acute lymphoblastic leukemia. In the second case, single cell transcriptomic profiling, computational inference of cell cycle trajectories, and unbiased functional genomics jointly identified RAD51B as a uniquely essential gene in near-haploid leukemia. Finally, to systematically model heterogeneous cell states and generate readily testable predictions of susceptibilities in cancer, I proposed a novel computational pipeline that integrates multiple data types to construct a quantitative model of transcription regulation, which can in turn be used to infer changes in gene expression in response to transcription factor perturbation.
The pipeline then uses these gene expression responses to perturbations to estimate changes in protein activity and finds a combination of protein activity score changes that best predicts changes in cell fitness.
Applying the pipeline to glioblastoma multiforme, a cancer type that lacks effective targeted therapy, I prioritized a small set of genes, including MYBL2, as subtype-specific candidate targets. My thesis work demonstrates the power of integrative, multi-omics approaches for the effective discovery of susceptibilities in cancer and highlights an emerging paradigm for understanding the information flow in the cellular circuitry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references (pages 217-239).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130818</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CH-[pi] interactions play a central role in protein recognition of carbohydrates</title>
<link>https://hdl.handle.net/1721.1/130817</link>
<description>CH-[pi] interactions play a central role in protein recognition of carbohydrates
Diehl, Roger Christopher.
Carbohydrate-protein interactions play a central role in biology, but knowledge of the forces underlying them is limited. Carbohydrates are generally hydrophilic and therefore present unique challenges in their recognition. One underappreciated force involved in carbohydrate-protein interactions is the CH-[pi] interaction, an attractive interaction between the aliphatic protons of a carbohydrate and the [pi] system of an aromatic ring. In this thesis, I examine the fundamental nature, strength, and biological significance of this interaction, largely in the context of a family of carbohydrate-binding proteins known as galectins. In Chapter 1, I review previous knowledge of carbohydrate-binding proteins and the forces they utilize to bind their ligands. In particular, I focus on CH-[pi] interactions and galectins.
In Chapter 2, I examine the forces that contribute to CH-[pi] interactions in the context of carbohydrates and aromatic compounds in aqueous solution. I find the CH-[pi] interaction to be electronic in nature, and demonstrate its selectivity between different carbohydrates. In Chapter 3, I determine the contribution of the CH-[pi] interaction to the ligand binding of galectin-3, a human carbohydrate-binding protein of medical significance. The data demonstrate that the CH-[pi] interaction accounts for a majority of the binding energy. In Chapter 4, I explore the biological implications of the CH-[pi] interaction in galectin-3. I demonstrate that the CH-[pi] interaction is critical for the biological activities of galectin-3. In Chapter 5, I propose several directions future researchers could take to extend this work. For three of the four directions, I present the progress I have made during my studies.
The work contained within this thesis demonstrates that CH-[pi] interactions play a central role in protein-carbohydrate interactions at both a molecular level and a biological level.
Understanding the CH-[pi] interaction is key to explaining and predicting the activity of carbohydrate-binding proteins.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis. In title on title page, [pi] appears as lower case Greek letter.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130817</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phosphoproteomics analysis of Alzheimer's disease</title>
<link>https://hdl.handle.net/1721.1/130816</link>
<description>Phosphoproteomics analysis of Alzheimer's disease
Morshed, Nader Francis.
Alzheimer's disease (AD) is a form of dementia characterized by the appearance of amyloid-[beta] plaques, neurofibrillary tangles, and inflammation in brain regions involved in memory. Despite numerous clinical trials, a limited understanding of disease pathogenesis has prevented the development of effective therapies. Several lines of genetic and biomolecular evidence indicate that AD progression involves cellular signaling through neuronal and glial protein phosphorylation networks. In order to understand which phosphorylation networks are dysregulated, I used mass spectrometry to characterize the phosphoproteome of post-mortem brain tissue from AD patients and multiple mouse models of AD. Using computational analysis, I identified several signaling pathways that are dysregulated before neurodegeneration occurs. Many of these signaling factors were expressed primarily in non-neuronal cell types, including microglia, astrocytes, and oligodendrocytes.; My results highlight potential therapeutic targets in the signaling responses of glial cells and are split into two parts. In the first part of this thesis, I have quantified the phosphoproteome of the CK-p25, 5XFAD, and Tau P301S mouse models of neurodegeneration. I identified a shared response involving Siglec-F, which was upregulated on a subset of reactive microglia. The human paralog Siglec-8 was also found to be upregulated on microglia in AD. Siglec-F and Siglec-8 were upregulated following microglial activation with interferon gamma (IFN[gamma]) in the BV-2 cell line and in human stem cell-derived microglia models. Siglec-F overexpression activates an endocytic and pyroptotic inflammatory response in BV-2 cells, dependent on its sialic acid substrates and immunoreceptor tyrosine-based inhibition motif (ITIM) phosphorylation sites.
Related human Siglecs induced a similar response in BV-2 cells.; Collectively, my results point to an important role for mouse Siglec-F and human Siglec-8 in regulating microglial activation during neurodegeneration. In the second part of this thesis, I performed a combined analysis of the tyrosine, serine, and threonine phosphoproteome and proteome of temporal cortex tissue from AD patients and age-matched controls. I identified several co-correlated peptide modules that were associated with varying levels of Tau, oligodendrocyte, astrocyte, microglia, and neuronal pathologies in different patients. I observed phosphorylation sites on known Tau-kinases and other novel signaling factors that were correlated with these peptide modules. Finally, I used a data-driven statistical modeling approach to identify individual peptides and co-correlated signaling networks that were predictive of AD pathologies. Together, these results build a map of pathology-associated phosphorylation signaling events occurring in AD.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages [137]-[153]).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130816</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic biology approaches for engineering bacteria as living therapeutics</title>
<link>https://hdl.handle.net/1721.1/130815</link>
<description>Synthetic biology approaches for engineering bacteria as living therapeutics
Triassi, Alexander John.
Bacteria and humans have a long-standing symbiotic relationship. As the details of their symbiosis continue to be elucidated, it has become clear that these host-resident bacteria are much more than spectators within the body. The ability of bacteria to manipulate human biology has inspired the notion that bacteria can be harnessed as "probiotics" to combat disease; in other words, as living therapeutics. Synthetic biology takes this concept one step further by genetically introducing novel functions into bacteria to provoke a targeted therapeutic effect in humans. However, none of the engineered living therapeutics that have progressed into clinical trials have been approved for therapeutic use. I pursued two approaches in an effort to reverse this trend. In my first approach, I developed a platform to overcome practical challenges of therapeutic strain design. This platform enables high protein expression levels from the genome of E. coli Nissle through the development of a genetic "landing pad" system and characterization of genetic regulators that can be controlled through the addition of chemical inducers. In my second approach, I developed a method for screening a diverse panel of bacteria for their ability to receive and express biosynthetic gene clusters encoding antimicrobial peptides. After identifying bacteria that were capable of expressing these peptides, I explored their potential to prevent infection by the opportunistic pathogen Clostridium difficile and to serve as a bioproduction chassis for the C. difficile-targeting peptide. Together, this work outlines the development of a platform for creating the next generation of living therapeutics and a unique method for engineering collections of bacteria to identify new chassis strains for heterologous protein expression.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 90-95).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130815</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>LisNRs : a novel class of liposomal contrast agents for molecular MRI</title>
<link>https://hdl.handle.net/1721.1/130814</link>
<description>LisNRs : a novel class of liposomal contrast agents for molecular MRI
Simon, Jacob Cyert.
Biological systems depend on numerous molecular messengers that transduce information across large distances. Understanding the spatial and temporal dynamics of molecular signaling networks is crucial for the construction of systems- and organism-level models of biological function. Molecular imaging, a technique that employs chemical probes to relay molecular events into spatially-resolved signal changes, is a promising strategy for studying complex molecular signaling networks in situ. Magnetic resonance imaging (MRI) is a leading noninvasive imaging modality that allows for imaging of large volumes of deep tissue with high spatiotemporal resolution. Paramagnetic molecular sensors enable detection of molecular phenomena with MRI (molecular MRI). The scope of molecular MRI experiments thus far, however, has been limited by the modest sensitivity and signal changes provided by existing probes.; In this dissertation, I introduce Liposomal Nanoparticle Reporters (LisNRs), a novel class of MRI-detectable sensor that utilizes an innovative contrast mechanism in which reversible modulation of the water permeability of liposomal bilayers simultaneously modulates water access to a large, concentrated pool of conventional T1-weighted MRI contrast agents. This architecture gives rise to significant signal amplification with respect to first-generation MRI probes that rely on stoichiometric sensing mechanisms in which binding of one analyte molecule modulates water access to a single paramagnetic metal ion. I employ two strategies for the signal-dependent modulation of liposomal water permeability. The first approach uses reversible modulation of lipid bilayer fluidity to induce changes in passive bilayer water permeability. To demonstrate this concept, I build Light-LisNR, a photosensitive MRI contrast agent, which I use to map light distribution in the rat brain.; The second approach utilizes ligand-gated water-permeable channels to modulate bilayer water permeability.
I demonstrate the potential of this strategy for molecular sensing using biotin/streptavidin as a model system. Together, this work introduces and demonstrates a novel platform for sensing with MRI that addresses longstanding challenges of low sensitivity and signal change with existing MRI-detectable probes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 87-104).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130814</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering exclusively-quadruplet codon translation in vivo</title>
<link>https://hdl.handle.net/1721.1/130813</link>
<description>Engineering exclusively-quadruplet codon translation in vivo
DeBenedictis, Erika Alden.
Living organisms universally encode amino acids with three-base codons specifying the twenty canonical amino acids. A genetic code entirely based on four-base codons would answer many questions about the origin of life and have profound implications for expanding the genetic code to include novel amino acids. However, the task of assembling enough quadruplet-tRNAs (qtRNAs) to implement an all-quadruplet code remains a major hurdle. Here, we create qtRNAs that decode canonical amino acids by modifying E. coli tRNAs that continue to rely upon endogenous aminoacyl-tRNA synthetases (AARSs) for charging. We find that AARSs generally tolerate quadruplet anticodons, resulting in efficient, selectively charged qtRNAs for eight of the twenty canonical amino acids, as well as candidate qtRNAs for the remaining twelve amino acids. We develop a directed evolution technique based on Phage Assisted Continuous Evolution and use it to improve the translation efficiency of qtRNAs. In order to address the large number of necessary evolutions, we execute these evolutions using a high-throughput evolution platform we developed. We find that directed evolution of qtRNAs can substantially improve quadruplet codon translation efficiency, often by 10x or more, without compromising amino acid selectivity. We use the evolved qtRNAs to implement a 10-amino-acid all-quadruplet codon code and processive quadruplet codon translation of a small peptide within a standard bacterial chassis, without the need for genome recoding.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis. "February 2021."; Includes bibliographical references (pages 95-106).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130813</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dendritic biophysics and evolution</title>
<link>https://hdl.handle.net/1721.1/130812</link>
<description>Dendritic biophysics and evolution
Beaulieu-Laroche, Lou.
The biophysical features of neurons are the building blocks of computation in the brain. Dendrites are the physical site of the vast majority of synaptic connections and can expand the information processing capabilities of neurons. Due to their complex morphological attributes and various ion channels, dendrites shape how thousands of inputs are integrated into behaviorally-relevant outputs at the level of individual neurons. However, several long-standing issues limit our understanding of dendritic biophysics. In addition to distorted electrophysiological measurements, prior studies have largely been limited to ex vivo preparations from rodent animal models, providing little insight into computation in the awake human brain. In this thesis, we overcome these limitations to provide new insights into biophysics at the intersection of dendritic morphology and evolution. In chapter 1, we demonstrate that voltage-clamp analysis, which was employed to derive much of our understanding of synaptic transmission, is incompatible with most synapses because they reside on electrically-compartmentalized spines. We also develop new approaches to provide accurate measurements of synaptic strength. Then, in chapter 2, we directly correlate somatic and distal dendritic activity in the awake mouse visual cortex to show an unexpectedly high degree of coupling in vivo. In chapter 3, we perform dendritic recordings in large human neurons to reveal integrative properties distinct from those of commonly studied rat neurons. Finally, in chapter 4, we characterize neurons in 10 mammalian species to extract evolutionary rules governing neuronal biophysics and uncover human specializations. Together, these four thesis projects expand our understanding of the influence of dendritic geometry and evolution on neuronal biophysics.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2021; Cataloged from the official PDF version of thesis. "February 2021."; Includes bibliographical references (pages 190-207).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130812</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamentals in unsteady fluid fragmentation from drop impact</title>
<link>https://hdl.handle.net/1721.1/130811</link>
<description>Fundamentals in unsteady fluid fragmentation from drop impact
Wang, Yongji, Ph. D., Massachusetts Institute of Technology.
Fluid fragmentation is ubiquitous in industrial, agricultural and natural settings. An important class of fragmentation processes is unsteady, continuously generating droplets whose properties vary over time. Yet, the dynamics of unsteady fragmentation have received little attention thus far. Common examples of unsteady fragmentation are splashes created upon drop impact on either a liquid pool or a solid surface. Upon impact, the drop is converted to a thin liquid sheet extending in the air and bounded by a thick rim. Hydrodynamic instabilities destabilize the rim: it develops corrugations that grow into ligaments, which finally fragment into secondary droplets. Prior investigations of drop impacts have focused on the sheet dynamics but mostly overlooked its role in the final and most important outcome: secondary droplets and their formation. In this thesis, we study a two-dimensional unsteady fragmentation process upon drop impact on a small target.; This allows us to explore the underlying fundamental physics of unsteady fragmentation. We develop experimental techniques and image processing algorithms that enable the quantification of the key unsteady physical quantities with high precision. We also establish a theoretical framework that links the dynamics of the sheet, rim, ligaments and secondary droplets, enabling the prediction of the secondary droplet statistics from the impact conditions only. We show that both the size and speed distributions of droplets ejected during the fragmentation are largely determined by the time-varying properties of the ligaments they originate from. The dynamics of the ligaments, rim and sheet are fully coupled.
We show that the sheet evolution is influenced by the fluid shedding from the rim, causing mass and momentum loss, while ligament formation is controlled by both the fluid shedding and the rim deceleration, themselves governed by the sheet evolution.; Using our proposed universal model of rim destabilization, we resolve this coupling and establish a closed theoretical framework that can predict the entire dynamics of the fragmentation. Our theoretical framework is fully validated by our experiments. These developments pave the way for a fundamental understanding of a wider range of unsteady fragmentation processes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 517-527).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130811</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive resilience is mediated by the MEF2 network in mice and humans</title>
<link>https://hdl.handle.net/1721.1/130809</link>
<description>Cognitive resilience is mediated by the MEF2 network in mice and humans
Barker, Scarlett J. V. (Scarlett Jazmine)
Recent increases in human longevity have been accompanied by a rise in the incidence of dementia. While a large proportion of aged individuals display pathological hallmarks of neurodegenerative disease, a small number of these individuals are able to maintain healthy cognitive function even in the presence of extensive brain pathology. The molecular mechanisms that govern this neuro-protected state remain unknown, but individuals that exhibit cognitive resilience (CgR) represent a unique source of insight into potential therapies that could preserve brain function in the face of neurodegenerative disease. Here, we employ a two-pronged approach to dissect the mechanism underlying CgR. First, using multiple integrated repositories of clinical and brain transcriptomic data, we identified individuals who maintained normal cognition despite harboring a large burden of Alzheimer's disease (AD) pathology.; We observe significant up-regulation of MEF2 family members in these resilient patients when compared to patients whose cognition declined in the presence of pathology. Second, we utilize the only existing animal model of CgR -- environmental enrichment -- to investigate the molecular mechanisms involved in the induction of resilience. Accessibility of Mef2 binding sites and expression of Mef2 targets are significantly increased upon enrichment. Additionally, knockdown of Mef2 family members just prior to the initiation of enrichment blocks its cognitive benefits, demonstrating the necessity of Mef2 activity for achieving the enhanced cognitive potential afforded by enrichment. Neurons lacking Mef2 are hyperexcitable, which is also one of the earliest pathological alterations observed in AD. These results suggest a potential mechanistic link between the Mef2 transcriptional network induced by enrichment and the prevention of disease-associated hyperexcitability.
To determine the causal impact of Mef2 on cognition in the context of neurodegeneration, we use a viral approach to manipulate the PS19 mouse model of tauopathy.; Remarkably, in the absence of enrichment, Mef2 overexpression alone is sufficient to improve cognition and reduce hyperexcitability in PS19 mice. Overall, our findings reveal a novel role for MEF2 transcription factors in promoting cognition throughout life, and maintaining cognitive resilience in the context of neurodegenerative disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2021; Cataloged from the official PDF version of thesis. "February 2021."; Includes bibliographical references (pages 119-126).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130809</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>IMU-based estimation of human lower body kinematics and applications to extravehicular operations</title>
<link>https://hdl.handle.net/1721.1/130808</link>
<description>IMU-based estimation of human lower body kinematics and applications to extravehicular operations
McGrath, Timothy M. (Timothy Michael)
The use of body-worn inertial measurement units (IMUs) as an alternative to traditional human optical motion capture (OMC) techniques has gained increasing attention over the last twenty years. In contrast to traditional OMC, IMUs are less intrusive and allow measurements to be taken in the environment of interest--not just a contrived laboratory space. The primary goal of this work is to advance human-IMU kinematic modeling and estimation techniques by increasing the accuracy of IMU-derived human skeletal joint angles while minimizing the calibration necessary to use an IMU-based human mocap system. A secondary goal of this work is to demonstrate practical application of an IMU-based mocap system to a specific domain of interest: space suit design and operations. In this domain, IMUs offer a tractable approach to understanding suited or unsuited human kinematics in the field. The capture of these kinematics in relevant environments allows engineers to better design and maintain space suits as well as model the operational paradigms which enable the future of human extraplanetary spaceflight.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 149-168).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130808</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials in extreme conditions : experimental developments and studies of systems far from equilibrium</title>
<link>https://hdl.handle.net/1721.1/130807</link>
<description>Materials in extreme conditions : experimental developments and studies of systems far from equilibrium
Martynowych, Dmitro Jaroslau.
Advances in shock wave generation and characterization techniques are presented. Building upon previous work employing a novel quasi-2D focusing geometry, pressures of at least 100 GPa are achieved on the benchtop at the micron scale. Diagnostics for optical visualization of these generated waves were also refined; an existing single-shot multiframe imaging technique was extended into phase-sensitive imaging. These techniques are applied to a range of material systems (energetic materials, brittle solids and biological membranes), pushing them far from equilibrium, and studying their response. While the explosive 1,3,5-triaza-1,3,5-trinitrocyclohexane (RDX) has been used for almost a century, questions remain about its initiation, reactivity, and interaction with shock waves. Very little work has directly addressed the coupling between mechanical deformation and chemistry in this system. Further, the literature to date mostly comprises studies with idealized 1-dimensional waves.; Herein, the reactivity of single crystals of RDX responding to multiple shockwaves is considered. A surprising transient sensitivity to low pressure reflected shockwaves following a high-pressure initial excitation is discovered. This effect confirms the previously reported link between morphological change and sensitivity. Silica glass is a widely studied system under shock compression and high pressure. A relatively low pressure phase transition is known and several distinct amorphous states exist prior to the transition. Several studies have also reported a lagging wave following dynamic compression in glasses. This "failure wave" is thought to precede fracture and brittle failure. We demonstrate a method of generating shock waves in glass and show the ability to image the dynamics in real-time. Additionally, we directly image fracture following shock compression and an associated failure wave.
Finally, we address the interaction of shock and acoustic waves with cell membranes.; This interaction is of great interest in biomedical technology for targeted cell lysing, as well as drug and vaccine delivery. Basic science questions also exist regarding the interactions between these waves and biological cells and membranes. Here we employ simultaneous interferometric and fluorescence imaging to elucidate the mechanism of transmembrane transport in cells and cell models. We show that under single cycle shock conditions the spatial structure of the wave is key to transfection; the wave must be spatially narrower than the cell itself. However, in a multicycle acoustic case spatially wider waves can still affect transfection, suggesting that we have moved from a mechanism relying on a pressure gradient across the cell (in the shock case) to a pore formation mechanism.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130807</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring and enhancing context-dependent beta-lactam antibiotic efficacy</title>
<link>https://hdl.handle.net/1721.1/130805</link>
<description>Exploring and enhancing context-dependent beta-lactam antibiotic efficacy
Bening, Sarah Christine.
Antibiotics, such as beta-lactams, are essential medical tools for the treatment of bacterial infections. Unfortunately, clinical treatment efficacy is declining over time as bacteria adapt to and evade antibiotic treatment through mechanisms called antibiotic resistance, tolerance, and persistence. Antibiotic tolerance and persistence, in particular, are often context-dependent phenotypes: environmental factors can influence bacterial physiology and alter antibiotic efficacy. Optimal antibiotic use, as well as strategies to enhance antibiotic efficacy, can therefore be informed by studies of context-dependent antibiotic action. In this thesis, I present three vignettes about beta-lactam antibiotic efficacy and how environmental context alters in vitro treatment outcomes. First, I explore bacterial killing in multi-drug contexts, focusing on how different beta-lactams can have different effects in combination with antibiotics of other classes. Second, I present a new counter-tolerance method using metabolic stimulation to sensitize tolerant, stationary phase bacteria to beta-lactam antibiotics. Third, I present an extension of this metabolic counter-tolerance strategy, now combining metabolic and target-specific stimulation to further enhance beta-lactam efficacy. I demonstrate that this combined approach, when coupled with beta-lactamase inhibitors, restores beta-lactam sensitivity to simultaneously tolerant and resistant cultures of clinically relevant pathogens. I conclude by discussing opportunities for future study into antibiotic context-dependence and the application of counter-tolerance approaches such as the one described in this thesis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 77-89).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130805</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and dynamics of influenza M2 proton channels from solid-state NMR</title>
<link>https://hdl.handle.net/1721.1/130804</link>
<description>Structure and dynamics of influenza M2 proton channels from solid-state NMR
Mandala, Venkata Shiva.
The A and B strains of influenza virus place a substantial burden on health, causing around 30 million illnesses, several hundred thousand hospitalizations, and a few tens of thousands of deaths every year in the United States alone. Developing novel antivirals and vaccines against influenza requires understanding the proteins employed by these infectious virions. The matrix-2 protein (M2) is an essential viral protein that conducts protons in the endosomes of infected host cells and induces membrane curvature to facilitate virus budding. M2's proton channel activity is encapsulated by a transmembrane domain (TM) that is targeted by the FDA-approved antiviral drugs amantadine and rimantadine to inhibit viral replication. Proton conduction by M2's TM is mediated by a His-xxx-Trp motif conserved in the otherwise disparate M2 from influenza A (AM2) and influenza B (BM2) strains.; The histidine selects for protons and activates the channel at low pH, while the tryptophan is responsible for gating and unidirectional conduction from the N-terminus (outside) to the C-terminus (inside). M2 conducts protons across lipid bilayers at a moderate rate of ca. 10-1000 s⁻¹. AM2 is well characterized and several high-resolution structures in a variety of membrane and membrane-mimetic environments are available, yet the mechanism of gating and the rate-limiting step of proton conduction are unknown. A gating-deficient mutant was utilized to determine that asymmetric conduction in AM2 is due to tryptophan blocking activation of histidine from the C-terminal side under low pH. Further, in phospholipid bilayers, AM2 shows two discrete conformations that interconvert on the proton conduction timescale, providing the first experimental evidence for a transporter-like mechanism.; In contrast to AM2, BM2 is relatively poorly studied and no high-resolution structures in membranes are available.
BM2 shares little sequence homology with AM2, is not inhibited by the antiviral drugs targeting influenza, and conducts protons faster but more bidirectionally than AM2. The first high-resolution structures of membrane-embedded BM2 in the closed and open states are reported. In contrast to the transporter-like motion of AM2, BM2 undergoes a subtler channel-like scissor opening motion that allows for more efficient proton conduction at the expense of some bidirectional current.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130804</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CRISPRi screens to identify combination therapies for the improved treatment of ovarian cancer</title>
<link>https://hdl.handle.net/1721.1/130803</link>
<description>CRISPRi screens to identify combination therapies for the improved treatment of ovarian cancer
Handly, Erika Daphne.
Ovarian cancer is the fifth leading cause of cancer death for women in the United States, with only modest improvements in patient survival in the past few decades. Standard-of-care consists of surgical debulking followed by a combination of platinum and taxane agents, but relapse and resistance frequently occur. To identify genes that confer sensitivity or resistance in tumor cells treated with platinum chemotherapeutics, I performed genome-wide screens combining cisplatin or oxaliplatin with CRISPRi pooled gene knockdowns. Screens were analyzed at 9 days to mimic patient care, and at 48 hours to isolate the short-term DNA damage response. Genes whose knockdown caused sensitivity to the platinum chemotherapeutics were identified through a multi-objective optimization approach to account for knockdown efficiencies and variances in sequencing depth.; To filter the noise in the genome-wide screen and more confidently identify 'hits,' a smaller pooled CRISPRi screen of four hundred targets was designed, and a few 'hits' were validated. Interestingly, knockdown of FAAP24, a component of the FA core complex, was found to sensitize multiple ovarian cancer cells to platinum compounds, and thus may be a promising candidate for a combination treatment with oxaliplatin and cisplatin. Chapter 5 details an implementation of a combination therapy with cisplatin using peptide nanoparticles. Peptide nanoparticles are a promising therapeutic for the delivery of siRNA and allow for targeting of specific proteins that are difficult to inhibit with small-molecule inhibitors; specifically, nanoplexes allowed for the targeting of the REV3 protein, the catalytic component of the translesion synthesis polymerase.; Interfering with REV3 expression through siRNA has a synergistic effect with cisplatin treatment in both human and mouse models of lung cancer, indicating that REV3 is an excellent target to combine with cisplatin therapies.
This REV3 knockdown sensitivity was also extended to human ovarian cancer cell lines, indicating the potential of the combination treatment for both lung and ovarian cancers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis. "February 2021."; Includes bibliographical references (pages 153-168).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130803</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards engineering living functional materials</title>
<link>https://hdl.handle.net/1721.1/130802</link>
<description>Towards engineering living functional materials
Tang, Tzu-Chieh, Ph. D., Massachusetts Institute of Technology.
Synthetic biology has become one of the most rapidly evolving research fields, with impacts on all aspects of our daily life. By applying engineering principles to biological systems, synthetic biology provides advanced techniques to program organisms to perform desired tasks, much like machines created by humans. Today, it has enabled the development of meat substitutes, biosensors for water contamination, and living fertilizers that promote plant growth. The grand challenge of bridging the concept-to-product gap is twofold: scalability and safe deployment. First, most model microorganisms cannot produce a macroscale matrix to sustain themselves as standalone devices. The field of engineered living materials (ELMs) aims to recapitulate the remarkable properties of natural biology to create novel, growable, multifunctional materials using genetically engineered organisms.; Nevertheless, most of the relevant pioneering work used nano- to microscale biofilms, which have rather low yields and usually require costly modification. Second, releasing genetically modified microorganisms (GMMs) into the field for food, water, or agricultural applications is often considered risky due to the uncertainty of wild-type organisms acquiring undesirable traits, such as antibiotic resistance, from the GMMs. A significant effort to address these unmet needs is called for. This thesis starts with an introduction to genetic circuits and an in-depth review of current trends in materials synthetic biology, covering two major categories of ELMs: self-organizing functional materials and hybrid living materials. 
The following chapters describe the technologies developed to achieve high scalability and safe deployment of ELMs in these two categories and living devices suitable for real-world applications.; Finally, a detailed outlook summarizes the challenges and prospects for materials synthetic biology and engineering living functional materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 206-221).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130802</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the role of cell-cell interactions in hepatic ensembles</title>
<link>https://hdl.handle.net/1721.1/130801</link>
<description>Probing the role of cell-cell interactions in hepatic ensembles
Chen, Amanda X.
While organ transplantation is one of the greatest advances of modern medicine and provides immense therapeutic benefit to patients suffering from severe and fatal liver disease, donor tissue is scarce. Alternatives such as engineered cell-based therapies aim to restore tissue-specific functions of solid organs, but leave much to be desired. Key challenges hindering the translation of cell-based therapies relate to (1) cell sourcing, (2) graft scale-up, and (3) vascularization, all of which contribute to therapeutic performance. The performance of an implantable graft is a function of the underlying cell-cell and cell-matrix interactions. These grafts typically consist of a multicellular ensemble in which combinations of epithelial, stromal, and immune cells give rise to physiologic function. Currently, precise, spatiotemporal control of these interactions is experimentally intractable.; This thesis introduces a technique termed CAMEO (Controlled Apoptosis in Multicellular tissues for Engineered Organogenesis), in which we can non-invasively actuate the removal of a desired cell population from a pre-established multicellular ensemble. As an exemplar, we use CAMEO to study the contribution of supportive stromal cells to the phenotypic stability of primary human hepatocytes. 3D hepatic ensembles, in which stromal cells enhance the phenotypic stability of spheroids, were found to rely only transiently on fibroblast interaction to support multiple axes of liver function, such as protein secretion and drug detoxification. Importantly, CAMEO revealed crucial cell-cell and cell-material interactions in the first 24 hours of co-culture that drive the stabilization and enhancement of the hepatic phenotype.; Due to its modularity, we expect that CAMEO is extendable to other applications tied to the complexity of 3D tissues, including in vitro organoid models and in vivo integration of cell therapies. 
As such, we also employed CAMEO and our strategy of engineering-via-elimination in an implantable device containing both hepatic ensembles and engineered vasculature, and demonstrate our ability to engineer desired function and cell composition. With an improved understanding of cell-cell interactions in vitro in hand, the next step toward the clinic is to assess the performance of 3D hepatic ensembles in vivo. Here, we lay the groundwork for defining a final product lock for our hepatic cell therapies, and specifically explore the role of fibroblasts in in vivo integration, incorporate vasculature to meet the metabolic demands of scaled-up tissue grafts, and tune tissue microarchitecture to enhance engraftment, function, and persistence in vivo.; Taken together, the efforts contained in this thesis represent a significant advance in tools and biology that enable clinical applications of tissue engineering and regenerative medicine.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 147-173).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130801</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of high-temperature firebrick resistance-heated energy storage (FIRES) using doped ceramic heating system</title>
<link>https://hdl.handle.net/1721.1/130800</link>
<description>Development of high-temperature firebrick resistance-heated energy storage (FIRES) using doped ceramic heating system
Stack, Daniel Christopher.
Towards combating climate change, the widespread replacement of fossil fuel energy with zero-carbon energy has two main requirements: (1) the ability to reach very high temperatures using zero-carbon energy to perform a variety of continuous industrial processes, such as iron smelting and cement clinker production; (2) the ability to match electricity supply with demand. Firebrick resistance-heated energy storage (FIRES) is a previously proposed technology capable of meeting both requirements by storing zero-carbon electricity as high-temperature heat and delivering it to industrial plants or power plants as needed in place of fossil fuels. The capability limits of FIRES are set by existing electrical heater options, which limit the temperatures and heat rates of the system. The work herein describes the development of a novel high-temperature FIRES system using cation-doped ceramic firebricks as the basis of the electrical heating system.; The firebricks are directly resistance heated (DRH), enabling peak temperatures equal to flame temperatures and allowing FIRES to electrify and decarbonize the hottest industrial processes and power the highest-efficiency turbines. The fundamentals of semiconductor joule heater design were reviewed, and the material option space was evaluated. Chromia doped with nickel was identified as a promising material candidate for a DRH-style FIRES system. Commercial and doped lab-fabricated chromia samples were prepared. Electrical resistivity was measured as a function of temperature, and brick-brick contact resistivity was measured as a function of temperature and contact load, up to 1500°C, to determine the viability of electrically heating a freely stacked brick mass. 
A proof of concept of DRH was successfully demonstrated by electrically heating a small stack of doped samples.; Calculations and simulations of brick-brick contact temperature rise, short circuit failure conditions, and thermal runaway were developed and undertaken using the experimentally measured electrical properties and the expected operating conditions of a FIRES system as inputs. Results informed the development of an electrically conductive brickwork design, including the brick dimensions and patterns of interlaid insulative and conductive firebrick to form an electrical circuit resistant to thermal runaway. The bricks and brickwork were designed with the intention of being compatible with hot blast stoves (heat regenerators) commonplace in the steel industry to provide a pathway for commercial development using existing industrial experience. Specific modifications necessary for coupling FIRES with blast furnaces, cement kilns, and gas turbines were explored.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 117-121).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130800</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hamiltonian engineering for quantum sensing and quantum simulation</title>
<link>https://hdl.handle.net/1721.1/130799</link>
<description>Hamiltonian engineering for quantum sensing and quantum simulation
Liu, Yi-Xiang
Quantum sensing and quantum simulation are emerging areas in quantum science and technology with broad applications. In this thesis, we explore Hamiltonian engineering techniques to build better quantum sensors and quantum simulators. In quantum sensing, advanced control techniques are required to extract all the available information about the sensing target from the sensor. Unfortunately, one major challenge to implementing the optimal control sequence, the one that extracts the maximum information, is the clock rate of the (classical) hardware used to control the sensor. To overcome this challenge, we develop a novel control technique inspired by quantum simulation ("quantum interpolation") and achieve an effective sampling resolution of six picoseconds from hardware constrained to two nanoseconds. This improved sampling rate enables higher precision in measuring classical fields and the quantum signal arising from a single nuclear spin.; To further improve quantum sensing, we engineer the sensor-target Hamiltonian to make the sensor more sensitive to the target. In particular, we address the challenge that a single sensor cannot be sensitive to all components of a vector DC magnetic field. To overcome this challenge, we modify the sensor Hamiltonian, using an ancillary oscillator, to realize a hybrid magnetometer sensitive to both the longitudinal and the transverse components of a vector DC field. We achieve a nanoscale vector magnetometer with comparable sensitivities for longitudinal and transverse magnetic field components. Finally, we turn to digital quantum simulation, a versatile scheme for studying the complex dynamics of large quantum systems via controllable quantum devices. In digital quantum simulation, the desired dynamics are approximated by a sequence of elementary gates (Trotterization). 
Finding a good ordering of gates to achieve high-fidelity simulation is a nontrivial task.; To address this challenge, we develop a geometric picture of Trotter formulas and their errors, which allows us to find intuitive Trotter formulas that provide higher-fidelity simulation than the most commonly used ones. While the results cover a wide range of applications, this thesis's key insight is that they all emerge from improved control techniques that engineer effective Hamiltonians starting from the natural interactions present in the original quantum system.
Thesis: Ph. D. in Quantum Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 121-144).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130799</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A small, bright silicon light-emitting diode directly integrated with microelectronics</title>
<link>https://hdl.handle.net/1721.1/130779</link>
<description>A small, bright silicon light-emitting diode directly integrated with microelectronics
Xue, Jin, Ph. D., Massachusetts Institute of Technology.
Silicon technologies have been developed for both electronics and photonics. Future demands call for further innovation in each field separately, but also depend on our ability to bring the best of both worlds together through integrated solutions. For decades, the pursuit of all-silicon electronic-photonic integration has been hindered by the lack of a native light source due to silicon's indirect bandgap. Considerable effort has been expended toward light generation from silicon by altering material structure or composition, but a useful device with practical output intensity has yet to be achieved. In this thesis, I demonstrate near-infrared, micro- and nanoscale light-emitting diodes (LEDs) in native silicon that realize high radiation intensity and useful output power, achieved in an unmodified open-foundry microelectronic 55 nm CMOS process, along with other photonic and electronic components integrated on the same chip. Efficient bipolar carrier injection and tight confinement for radiative recombination in sub-wavelength dimensions allow us to achieve intense electroluminescence with designs leveraging high-quality passivation of material interfaces. Under room-temperature continuous-wave operation, an external light emission intensity of over 500 W/cm² from a single nano-LED is demonstrated, which is several orders of magnitude higher than that of previous silicon-based emitters and surpasses state-of-the-art nano- and microscale LEDs using direct-bandgap III-V semiconductors. An all-silicon, chip-to-chip fiber optic communication link is demonstrated, as well as light sources with sufficient power for use as illumination sources for macroscopic objects.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 191-213).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130779</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphene-metal interactions beyond van der Waals forces</title>
<link>https://hdl.handle.net/1721.1/130775</link>
<description>Graphene-metal interactions beyond Van der Waals forces
Wang, Haozhe, Ph. D., Massachusetts Institute of Technology.
With its extraordinary properties, graphene has been applied in various studies to explore new phenomena, new understandings, and new applications. Because graphene is only one atomic layer thick and all of its atoms lie on its surface, its properties can be altered by bringing it into contact with an arbitrary surface, and the close presence of graphene can likewise change the properties of that surface. This thesis aims to provide insights into graphene-metal interactions by analyzing novel phenomena and developing practical applications. First, we modified the graphene-copper interface to prevent the growth of thicker graphene islands in order to grow uniform bilayer graphene (2LG), by introducing the concept of interface adhesive energy. To better characterize the different types of 2LG, we developed a machine-learning-assisted Raman analysis tool and confirmed that our FM-mode-grown 2LG is quasi-AB-stacked.; Taking advantage of the better mechanical properties of 2LG, a support-free, Marangoni-driven graphene transfer technique was developed. Next, we utilized the stability of Sn-graphene-Cu interfaces to develop an approach that allows for a more reliable low-temperature bonding technology. The proposed bonding scheme inherently solves both the high bonding temperature and interfacial diffusion issues. Specifically, the lowering of the bonding temperature is achieved by introducing nano-cone features on the surface of the Cu side. A one-atom-thick graphene interlayer on the Cu does not prevent bonding between Cu and Sn, while preventing the atomic diffusion that forms intermetallic compounds harmful to the reliability of the bond. 
Finally, we examined graphene-metal interactions for the modification of optical properties in the graphene-aluminum and graphene-tin systems.; The Sn nanodot arrays have a light-trapping effect on graphene and therefore increase graphene absorption from &lt;2% to &gt;15% in the spectral regime λ = 900-2000 nm. By contrast, a counter-intuitive transmittance enhancement is observed in graphene-Al hetero-films. We find that the relative transmittance enhancement reaches up to 27% for 370 nm incident light on an interface of graphene and 10 nm-thick aluminum. These two examples reveal the fascinating ways in which the interface between graphene and metal can bring about unexpected property alterations in either layer or in the system as a whole.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-165).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130775</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extending memory system semantics to accelerate irregular applications</title>
<link>https://hdl.handle.net/1721.1/130774</link>
<description>Extending memory system semantics to accelerate irregular applications
Zhang, Guowei, Ph. D., Massachusetts Institute of Technology.
Computer systems are increasingly bottlenecked by data movement, and rely on sophisticated memory hierarchies to address this issue. However, conventional memory systems suffer from poor performance on many irregular access patterns. This is because memory systems use an inexpressive interface that does not convey sufficient program semantics: they organize data in fixed-sized chunks and access data with only reads and writes. As a result, memory systems incur significant performance loss on several common patterns. In this thesis, we identify three such patterns: accesses to small data fragments suffer poor locality; concurrent updates introduce excessive traffic and serialization; and dependent reads incur long latencies that are on the critical path. To tackle these issues, this thesis proposes techniques that extend the semantics of the memory system. We apply this insight to address each of the three issues and propose solutions with different degrees of generality.; COUP and COMMTM provide general architectural support by exploiting commutative updates to reduce communication and synchronization. COUP supports strict single-instruction commutativity by extending the cache coherence protocol, while COMMTM supports multi-instruction and semantic commutativity by leveraging hardware transactional memory. Whereas COUP and COMMTM are general, HTA and GAMMA target a specific data structure and a specific application, respectively. HTA addresses the inefficiencies of small fragments in the context of hash tables. It exploits the associativity in hash tables and leverages caches to reduce runtime overheads and to improve spatial locality. GAMMA is a sparse matrix-matrix multiplication accelerator. Its novel storage idiom, FIBERCACHE, combines caching and decoupled execution to ensure low latency for dependent reads with irregular reuse. 
This enables GAMMA to adopt an efficient dataflow, Gustavson's algorithm, to minimize off-chip traffic.; Together, these techniques significantly improve the performance and reduce the data movement of challenging applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 109-128).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130774</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale integrated quantum photonics with artificial atoms</title>
<link>https://hdl.handle.net/1721.1/130773</link>
<description>Large-scale integrated quantum photonics with artificial atoms
Wan, Noel H.(Noel Heng Loon)
The construction of large, controllable quantum systems is a formidable task in quantum science and technology. In the context of quantum networks, single emitters in diamond have emerged as leading quantum bits that combine long coherence times with efficient optical interfaces. Despite their potential manufacturability, such solid-state qubits have been limited to small-scale quantum network demonstrations due to their low system efficiencies, deteriorated properties in devices, and low yields. To address these challenges, we report the development of a nanophotonic platform in diamond for the efficient control and routing of photons. In particular, we describe the fabrication and coupling of qubits to diamond parabolic reflectors, single-mode waveguides, and photonic crystal resonators. We then demonstrate the large-scale heterogeneous integration of diamond waveguide-coupled qubits with photonic circuits in another material system. This hybrid quantum chip architecture enables the combination of coherent qubits in diamond with low-loss active photonics in aluminum nitride or silicon nitride. This modularity also circumvents the low device yields associated with monolithic chips, enabling here a 128-channel, qubit-integrated photonic chip with frequency tunability and high optical coherence. Finally, we describe new qubit flavors in diamond that offer potentially long spin coherence times at higher operational temperatures. As an outlook, we discuss ongoing efforts that combine the advances in this thesis towards the construction of a quantum repeater microchip.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 107-125).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130773</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics for accelerating biomedical innovation</title>
<link>https://hdl.handle.net/1721.1/130772</link>
<description>Analytics for accelerating biomedical innovation
Siah, Kien Wei.
Despite the many breakthroughs in biomedical research and the increasing demand for new drugs to treat unmet medical needs, the productivity of research and development in the pharmaceutical industry has been steadily declining for the past two decades and is at its lowest level today. Traditional sources of financing in biopharma are no longer compatible with, nor aligned to, the new realities of biomedical innovation, a process that has become more challenging, complex, expensive, time-consuming, and risky in the past twenty years. This has led to an outflow of capital from the biopharma industry, creating an ever-widening gap in funding between early-stage basic biomedical research and late-stage clinical development, where many promising academic discoveries fail not because of bad science but for financial reasons.; In this thesis, we explore the use of data analytics to facilitate biomedical innovation, with a particular emphasis on the mismatch between the risk characteristics of biomedical projects and the risk preferences of biopharma investors. We begin with a brief introduction to the challenges faced by the biopharma industry in Part I. In Part II, we focus on analytics in the context of clinical trials. First, we develop analytics for precision medicine in non-small cell lung cancer, an emerging area of innovation in disease treatment with the advent of human genome sequencing. Next, we train and validate predictive models for estimating the probability of success of drug development programs. By providing greater risk transparency, our models can help match investor risk preferences more accurately with the risks of biomedical investment opportunities, thus increasing the efficiency of capital allocation.; Finally, we turn our attention to the ongoing COVID-19 (coronavirus disease 2019) pandemic. 
We propose a systematic framework for quantitatively assessing the potential costs and benefits of different vaccine efficacy trial designs for COVID-19 vaccine development, including traditional and adaptive randomized clinical trials, and human challenge trials (HCTs). Our results contribute to the current ethical debate about HCTs by identifying situations where HCTs can provide greater social value versus non-challenge development pathways, and are thus justifiable. In Part III, we explore new business models to address the dearth of funding for translational medicine in the valley of death.; In view of the increasingly critical role that academic institutions play in the biotechnology industry, we develop a systematic framework for tracking the financial and research impact of university technology licensing in the life sciences using the Massachusetts Institute of Technology as a case study. Next, we investigate the use of a recently proposed megafund structure for financing early-stage biomedical research. We extend the existing model to account for technical correlation between assets in the underlying portfolio, thus allowing us to evaluate the tail risks of the megafund more accurately. We show that financial engineering techniques can be used to structure the megafund into derivatives with risk-reward characteristics that are attractive to a broad range of investors. This allows the fund to tap into a substantially larger pool of capital than the traditional sources of biopharma funding.; In the last part of the thesis, we further extend the megafund framework to include adaptive clinical trial designs, and demonstrate the economic viability of using the megafund vehicle to finance and accelerate drug development for glioblastoma, a disease with very few treatment options, low historical probabilities of success, and huge unmet need.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 347-367).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130772</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and control for interactions in mixed human-robot environments</title>
<link>https://hdl.handle.net/1721.1/130771</link>
<description>Learning and control for interactions in mixed human-robot environments
Schwarting, Wilko.
Autonomous robots will soon be a commonplace presence in our daily lives in environments such as homes, factories, and roads. In order to reap the tremendous benefits that these robots offer to society, we must ensure that they can interact with humans seamlessly and safely. In this dissertation, we study intelligent agents that learn how to reason about human behavior and people's intentions. These agents predict others' intentions and implicitly communicate their own intentions through human-like actions that can be understood by people. They also anticipate and leverage the effect of their actions on the actions of others in the environment. When their own interests and the interests of others are not aligned, the agents quantify people's willingness to cooperate or defect and negotiate through social behavior. The agents form beliefs by perceiving the world and the actions of others.; They create plans to actively gather information about themselves, others, and the environment, while simultaneously avoiding actions that lead to high uncertainty. They also reason about the beliefs of others, and can leverage how their actions influence others' beliefs. In part (I) of this thesis, we formulate social human-robot interactions between agents as a best-response game wherein each agent negotiates to maximize their utility, and learn human rewards from data. We measure Social Value Orientation (SVO) to quantify an agent's degree of selfishness or altruism to better predict human behavior. In part (II) we additionally enable agents to leverage information gain and reasoning about the beliefs of others in stochastic environments with partial observations by combining game-theoretic and belief-space planning. 
In part (III) we present a multi-agent reinforcement learning algorithm that learns competitive visual control policies through self-play in imagination.; The agent learns from competition by imagining multi-agent interaction sequences in the compact latent space of a learned world model that combines a joint transition function with opponent viewpoint prediction. Lastly, in part (IV) we introduce Parallel Autonomy, a Guardian system that uses uncertain predictions to provide safety in challenging driving scenarios while following people's desired actions as closely as safely possible.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 217-235).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130771</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deeper learning at scale with roleplaying systems</title>
<link>https://hdl.handle.net/1721.1/130770</link>
<description>Deeper learning at scale with roleplaying systems
Ortiz-Lampier, Pablo José.
Contemporary online learning systems are increasingly important and common elements of post-secondary, workplace, and lifelong education. These systems typically employ the banking model of education to educate learners. While this method is quite effective for teaching foundational knowledge, it is ill-suited for fostering deeper learning, "...an umbrella term for the skills and knowledge that students must possess to succeed in 21st century jobs and civic life..." [218], including, among other things, critical thinking. To meet learners' growing needs, we must go beyond the banking model of education and advance the state of the art. As a step toward this goal, I investigated how one might design online learning systems to scalably support critical thinking. Reflection, when not treated synonymously with critical thinking, is often cited as a key component of critical thinking. Thus, by working to support reflection in online learning systems, I work to support critical thinking skills at scale and, by extension, deeper learning. This dissertation contributes (1) a framework, grounded in roleplay theory &amp; practice, for designing online learning systems that scalably support reflection, (2) novel systems that exemplify and operationalize this framework, (3) a method for effectively evaluating reflection at scale, and (4) an evaluation of the novel systems and design framework in terms of their ability to support reflection.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 201-221).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130770</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable interfaces for biomedical and neuroscience applications</title>
<link>https://hdl.handle.net/1721.1/130769</link>
<description>Programmable interfaces for biomedical and neuroscience applications
Orguc, Sirma.
The rapidly changing fields of biomedical sciences and neuroscience increasingly adopt scientific and technological innovations to advance the diagnosis and treatment of disease. With the emergence of miniaturized and low-cost electronics, intelligent and computationally-efficient algorithms, together with new materials and fabrication techniques, an interdisciplinary approach becomes key in designing human-machine interface systems for these applications. This thesis explores the design of four programmable interface systems in this domain. First, we present an EMG-based facial gesture recognition platform. The system integrates a custom-designed EMG sensor interface for energy-efficient signal acquisition from a small footprint. The gesture recognition algorithm runs on the computer and achieves the classification of resting, clenching, chewing, and jaw opening activities in real-time. A wavelet-transform-based feature extraction improves the computational efficiency.; Next, we present an optoelectronic system for wireless neuromodulation during free behavior. The head-borne system interfaces with flexible, fiber-based, multifunctional brain probes that carry integrated [mu]LEDs for optical stimulation. The modular platform can also perform precise optical intensity control, in-vivo temperature sensing, and low-frequency neural recording when needed. The system uses BLE to communicate with the computer and can control multiple [mu]LEDs and multiple devices on different animals. Third, we present a strain-programmable artificial muscle, suitable for use in soft-robotics, neuroprosthetics, and smart-textiles applications. The fiber muscle is arbitrarily scalable and can be produced in kilometer-long scales. The strain-programmability allows precise control of the mechanical and electrical properties.
It can carry 650 times its own weight, achieves a power-to-mass ratio of 90 W/kg, and reaches latency levels as low as 0.02 seconds.; The conductive versions allow for direct piezoresistive feedback. Multiple fibers can be used in parallel to form bundle structures similar to human muscle. Finally, we present a biometric interface integrated into transparent, long-lasting respirators. The respirators are alternatives to commonly-used N95 masks. The electronic interface uses one of the filter insert locations to measure temperature, humidity, pressure, and air quality. The system uses BLE and sends real-time sensor information to a phone or a computer. The data can be used to inform the user regarding mask fit, fatigue, mask condition, and potential diagnostic information.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 143-157).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130769</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solving for syntax</title>
<link>https://hdl.handle.net/1721.1/130768</link>
<description>Solving for syntax
Indurkhya, Sagar.
Among the key questions that have guided research into the nature of human language for the past sixty years, two have been particularly salient: (1) What constitutes knowledge of language? and (2) How is that knowledge acquired? In particular, children, using limited input examples, in effect a small "sample size", all acquire their native language. This thesis attempts to answer these two questions by developing a novel, explicit, computational implementation of one contemporary approach to human language known as the Minimalist Program. It provides an answer to question (1) via the explicit axiomatization of a declaratively specified logical model of minimalist grammars, consisting of a set of formally specified principles and a single structure-building operation, along with a lexicon.; By rendering these axioms along with the lexicon as a set of constraints, expressed using Satisfiability Modulo Theories (SMT), that must be simultaneously satisfied, the thesis demonstrates how to "solve for syntax": it uses an SMT-solver to computationally deduce the syntactic derivations that associate particular input sentences with their logical forms. In this sense, the thesis demonstrates that the proposed linguistic principles underlying such a system, including the contemporary notion of "economy conditions" in syntax, are both coherent and consistent, and, importantly, that minimalist syntax can be placed within a classical "parsing as deduction" framework. This thesis then extends the system developed to address question (1) to provide an answer to question (2), by modeling acquisition as the construction of a succession of lexicons, starting from some initial, essentially empty lexicon, and then augmenting that lexicon.; To do this, it uses a set of (input sentence, skeletal "meaning") pairs intended to reflect minimally cognitively faithful constraints to infer what lexicon would bridge from input to quasi-meaning forms, again using an SMT-solver.
Using this approach, the thesis explicitly demonstrates that a wide variety of syntactic sentence constructions in English can be acquired in this way, sufficient to account for the infinite generative capacity of at least one portion of English syntax. Importantly, the thesis thus demonstrates that a contemporary minimalist syntactic system, with a single, fixed structure-building operation, leaving only the lexicon to vary, can serve as the foundation for a contemporary approach to language acquisition. In this sense, the implementation serves as a concrete, computationally realized contemporary instantiation of the model of language acquisition set out in Aspects of the Theory of Syntax.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 195-200).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130768</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partial state in dataflow-based materialized views</title>
<link>https://hdl.handle.net/1721.1/130767</link>
<description>Partial state in dataflow-based materialized views
Gjengset, Jon Ferdinand Ronge.
This thesis proposes a practical database system that lowers latency and increases supported load for read-heavy applications by using incrementally-maintained materialized views to cache query results. As opposed to state-of-the-art materialized view systems, the presented system builds the cache on demand, keeps it updated, and evicts cache entries in response to a shifting workload. The enabling technique the thesis introduces is partially stateful materialization, which allows entries in materialized views to be missing. The thesis proposes upqueries as a mechanism to fill such missing state on demand using dataflow, and implements them in the materialized view system Noria. The thesis details issues that arise when dataflow updates and upqueries race with one another, and introduces mechanisms that uphold eventual consistency in the face of such races. Partial materialization saves application developers from having to implement ad hoc caching mechanisms to speed up their database accesses. Instead, the database has transparent caching built in. Experimental results suggest that the presented system increases supported application load by up to 20× over MySQL and performs similarly to an optimized key-value store cache. Partial state also reduces memory use by up to 2/3 compared to traditional materialized views.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 141-149).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130767</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reduction of prediction side-information for image and video compression</title>
<link>https://hdl.handle.net/1721.1/130766</link>
<description>Reduction of prediction side-information for image and video compression
Nissenbaum, Lucas.
Many compression systems are based on parameterized predictors. In HEVC, directional intra-prediction specifies a prediction direction. Motion compensation forms a motion vector to predict a block's movement from another frame. These parameters lead to side-information bits, which must be encoded. In recent image and video compression standards, the number of intra-prediction directions used has been increasing. Motion vectors are also using finer fractional precision. The side-information bits have therefore become a significant part of the encoder bit-rate. In this thesis, we will show that there is significant room to use adaptive ways to reduce this side-information. In particular, we will develop a theoretical framework to consider this side-information. In this theoretical framework, we assign a set of possible values of the side-information parameter for each block based on information available at the decoder. Based on this framework, two main questions are proposed: How do we find the number of values that compose this set? If we know the cardinality of this set, what values should compose it? We propose three methods to reduce the intra-prediction side-information bitrate, based on this framework and on prediction inaccuracy modeling. Our first method selects between a set of 7 modes and the full set of 35 modes from HEVC, by thresholding the maximum absolute gradient boundary. Our second method selects between four possible sets, by using scaled thresholds derived from prediction inaccuracy modeling. The third method uses only two sets, but constructs the smaller set adaptively based on neighboring blocks' information. We then present a theoretical and experimental comparison between these three methods. We then propose a method to adaptively decide whether or not to use fractional precision motion vectors. Our experimental results show there is room to use side-information reduction for the case of motion compensation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 105-108).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130766</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy efficient sub-terahertz electrical interconnect</title>
<link>https://hdl.handle.net/1721.1/130765</link>
<description>Energy efficient sub-terahertz electrical interconnect
Holloway, Jack Wade, 1980-
With the end of Moore's Law and Dennard scaling in silicon platforms, coupled with the increase in computational demand across applications, the semiconductor industry has seen a move towards high-density compute leveraging multiple dies in package. These types of products have been partially enabled by short-reach, energy-efficient, high-speed interconnect in package. Big data and AI/ML applications have pushed the development of longer-reach, high-capacity, and energy efficient interconnect enabling connectivity between racks across large data centers. This work investigates and demonstrates a new interconnect technology that fills a meter-class interconnect gap in these applications. By leveraging the wide transmission bandwidth and low losses associated with dielectric waveguides in the sub-THz regime (100 GHz - 1 THz), large baseband data rates are aggregated across multiple channels, multiplexed onto a single electrical channel, efficiently coupled into a dielectric waveguide, and transmitted between chips. In this work, enabling component technologies are developed and demonstrated, including planar broadband couplers and high-performance sub-THz multiplexers operating in the 220-330 GHz WR-3.4 band -- both technologies designed to ease implementation and packaging costs. Lastly, an end-to-end link is realized in a 130nm Silicon Germanium BiCMOS process and is demonstrated utilizing a small cross-section polymer dielectric waveguide. The link achieves 105 Gbps in a 250 × 400 [mu]m² waveguide cross section, demonstrating a state-of-the-art 330 Gbps/mm figure of merit and better than 5 pJ/bit energy efficiency.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 175-184).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130765</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution tactile sensing for reactive robotic manipulation</title>
<link>https://hdl.handle.net/1721.1/130764</link>
<description>High-resolution tactile sensing for reactive robotic manipulation
Dong, Siyuan, Ph. D., Massachusetts Institute of Technology.
This thesis explores tactile sensing to enable reactive behavior in robotic manipulation. More specifically, we focus on developing high-resolution vision-based tactile sensing hardware, perceptual algorithms, and controller designs for robotic manipulation. Tactile sensing plays a key role in human manipulation. However, the existing artificial tactile sensors have multiple limitations in terms of form factor, robustness, and sparse measurement. Tactile sensors are rarely integrated into the current robotic manipulation systems. In this thesis, we design new vision-based tactile sensors that capture the contact surface with high-resolution images and reconstruct the 3D geometry of the contact surface. We first design a variation of the GelSight sensor that improves the accuracy of the depth map reconstruction.; To further optimize the form factor and enhance the robustness, we designed another vision-based tactile sensor, GelSlim, which keeps the high-resolution sensing output but has a slimmer former, sharper tip, and improved robustness. Based on the new sensor, we propose algorithms to distill useful contact information from the raw signal output. The key challenge is connecting the contact geometry directly observed from the raw image to contact signals that have meanings in the context of contact mechanics, e.g., contact forces, contact slip. We use an algorithm to track the gel deformation and compare it with a rigid body motion to detect incipient slip. We deploy an inverse Finite Element Method (iFEM) to reconstruct the contact force distribution. Finally, we explore how the tactile signals can be fed into the control loop in real manipulation tasks. 
We choose 2 representative contact-rich manipulation tasks that benefit from tactile control: cable following and object insertion.; We implement cable following by sensing &amp; controlling both the state of the grasp of the cable and its configuration in real time to allow smooth sliding of the fingers along a cable. We train a general tactile-based RL insertion policy in an end-to-end fashion to align the object pose with the insertion hole and keep sticking contact of the grasp by detecting incipient slip during the contact exploration. The RL insertion policy is capable of inserting novel objects, for which we show that tactile feedback is more informative than force-torque feedback.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 115-122).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130764</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Engineering of Proof-Based Software Systems at scale</title>
<link>https://hdl.handle.net/1721.1/130763</link>
<description>Performance Engineering of Proof-Based Software Systems at scale
Gross, Jason S.
Formal verification is increasingly valuable as our world comes to rely more on software for critical infrastructure. A significant and understudied cost of developing mechanized proofs, especially at scale, is the computer performance of proof generation. This dissertation aims to be a partial guide to identifying and resolving performance bottlenecks in dependently typed tactic-driven proof assistants like Coq. We present a survey of the landscape of performance issues in Coq, with micro- and macro-benchmarks. We describe various metrics that allow prediction of performance, such as term size, goal size, and number of binders, and note the occasional surprising lack of a bottleneck for some factors, such as total proof term size. To our knowledge such a roadmap to performance bottlenecks is a new contribution of this dissertation.; The central new technical contribution presented by this dissertation is a reflective framework for partial evaluation and rewriting, already used to compile a code generator for field-arithmetic cryptographic primitives which generates code currently used in Google Chrome. We believe this prototype is the first scalably performant realization of an approach for code specialization which does not require adding to the trusted code base. Our extensible engine, which combines the traditional concepts of tailored term reduction and automatic rewriting from hint databases with on-the-fly generation of inductive codes for constants, is also of interest to replace these ingredients in proof assistants' proof checkers and tactic engines. Additionally, we use the development of this framework itself as a case study for the various performance issues that can arise when designing large proof libraries.; We also present a novel method of simple and fast reification, developed and published during this PhD. 
Finally, we present additional lessons drawn from the case studies of a category-theory library, a proof-producing parser generator, and cryptographic code generation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-207).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130763</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical abstractions for model-based visual navigation</title>
<link>https://hdl.handle.net/1721.1/130762</link>
<description>Hierarchical abstractions for model-based visual navigation
Ok, Kyel.
In this thesis, we explore hierarchical map representations that improve autonomous vision-based navigation. Challenged with the task of navigating in an unknown environment, an autonomous agent must perceive the environment around it while making progress towards a goal. While incrementally constructing a map of the world based on visual sensor measurements is a popular choice, we observe that the choice of representation for the map has significant consequences on the performance of navigation. To improve the efficiency and robustness of visual navigation of a computationally limited robotic platform, we introduce three key ideas in the form of applying varying levels of abstraction to the map representation and sensor measurements. First, we propose to apply multiple levels of abstraction to the map representation to improve the computational efficiency of on-board pose estimation on a low-cost micro air vehicle (MAV). Second, we show that multiple levels of abstraction can also apply to the sensor measurements, thereby creating multiple pseudo-measurements of lower dimensions, to mitigate the viewpoint dependency of ellipsoid-based object-level simultaneous localization and mapping (SLAM). Finally, we show that adaptively changing the level of abstraction in the map representation and sensor measurements online based on the quality of available measurements improves the accuracy of the constructed map and results in improved robustness and efficiency of autonomous vision-based navigation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-165).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130762</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy efficient SAR ADC with resolution enhancement for sensor signals</title>
<link>https://hdl.handle.net/1721.1/130761</link>
<description>Energy efficient SAR ADC with resolution enhancement for sensor signals
Khurana, Harneet Singh.
Many signals from sensors are low-activity signals that spend most of their time around the middle of the full scale, with occasional large activity. A/D conversion of such signals using a conventional ADC with a constant resolution and a full-scale search space consumes unnecessary amounts of time and energy. The SAR ADC architecture, using a comparator and a Capacitor-DAC, has been the choice for this application space due to its minimal analog components and low static power consumption, while providing moderate speed and resolution that is adequate for sensor signals. DAC and comparator power reduction has been the focus of attention, as the logic automatically benefits from digital-centric process scaling. This work develops an energy-efficient 10B/12B SAR ADC for such sensor signals using a new algorithm to save energy and time and use the savings for resolution enhancement.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 129-134).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130761</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust machine learning models and their applications</title>
<link>https://hdl.handle.net/1721.1/130760</link>
<description>Robust machine learning models and their applications
Chen, Hongge, Ph. D., Massachusetts Institute of Technology.
Recent studies have demonstrated that machine learning models are vulnerable to adversarial perturbations - a small and human-imperceptible input perturbation can easily change the model output completely. This has created serious security threats to many real applications, so it becomes important to formally verify the robustness of machine learning models. This thesis studies the robustness of deep neural networks as well as tree-based models, and considers the applications of robust machine learning models in deep reinforcement learning. We first develop a novel algorithm to learn robust trees. Our method aims to optimize the performance under the worst case perturbation of input features, which leads to a max-min saddle point problem when splitting nodes in trees.; We propose efficient tree building algorithms by approximating the inner minimizer in this saddle point problem, and present efficient implementations for classical information gain based trees as well as state-of-the-art tree boosting models such as XGBoost. Experiments show that our method improve the model robustness significantly. We also propose an efficient method to verify the robustness of tree ensembles. We cast the tree ensembles verification problem as a max-clique problem on a multipartite graph. We develop an efficient multi-level verification algorithm that can give tight lower bounds on robustness of decision tree ensembles, while allowing iterative improvement and termination at any-time.; On random forest or gradient boosted decision trees models trained on various datasets, our algorithm is up to hundreds of times faster than the previous approach that requires solving a mixed integer linear programming, and is able to give tight robustness verification bounds on large ensembles with hundreds of deep trees. For neural networks, we contribute a number of empirical studies on the practicality and the hardness of adversarial training. 
We show that even with adversarial defense, a model's robustness on a test example has a strong correlation with the distance between that example and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks.; Consequently, we demonstrate that an adversarial training based defense is vulnerable to a new class of attacks, the "blind-spot attack," where the input examples reside in low density regions ("blind-spots") of the empirical distribution of training data but are still on the valid ground-truth data manifold. Finally, we apply neural network robust training methods to deep reinforcement learning (DRL) to train agents that are robust against perturbations on state observations. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and propose a theoretically principled regularization which can be applied to different DRL algorithms, including deep Q networks (DQN) and proximal policy optimization (PPO). We significantly improve the robustness of agents under strong white box adversarial attacks, including new attacks of our own.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 109-124).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130760</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural design and proof of hierarchical cache-coherence protocols</title>
<link>https://hdl.handle.net/1721.1/130759</link>
<description>Structural design and proof of hierarchical cache-coherence protocols
Choi, Joonwon.
Cache-coherence protocols have been one of the greatest correctness challenges of the hardware world. A memory subsystem usually consists of several caches and the main memory, and a cache-coherence protocol defined in such a system allows multiple memory-access transactions to execute in a distributed manner, across the levels of a cache hierarchy. This source of concurrency is the most challenging part in formal verification of cache coherence. In this dissertation, we introduce Hemiola, a framework embedded in Coq to design, prove, and synthesize cache-coherence protocols in a structural way. The framework guides the user to design protocols that never experience inconsistent interleavings while handling transactions concurrently. Any protocol designed in Hemiola always satisfies the serializability property, allowing a user to prove the protocol assuming that transactions are executed one at a time. The proof relies on conditions on the protocol topology and state-change rules, but we have designed a domain-specific protocol language that guides the user to design protocols that satisfy these properties by construction. The framework also provides a novel way to design and prove invariants by adding predicates to messages in the system, called predicate messages. On top of serializability, it is much simpler to prove a predicate message, since it is guaranteed that the predicate is not spuriously broken by other messages. We used Hemiola to design and prove hierarchical MSI and MESI protocols, in both inclusive and noninclusive variants, as case studies. We also demonstrated that the case-study protocols are indeed hardware-synthesizable, by using a compilation/synthesis toolchain in the framework.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 139-146).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130759</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on industrial organization and health care markets</title>
<link>https://hdl.handle.net/1721.1/130758</link>
<description>Essays on industrial organization and health care markets
Olssen, Alexander Lee.
This thesis comprises three essays on the industrial organization of health care markets. In the first essay, joint with Mert Demirer, I study how formulary-contingent rebates affect insurers' formulary placement of branded statins. The prices charged for on-patent, branded pharmaceuticals represent a large, and controversial, component of medical spending in the U.S. In contrast to many countries and many other government programs, drug prices in the Medicare Part D program are determined by privately negotiated rebates between insurance plans and drug manufacturers. How large are these rebates? What would happen to formularies, consumer surplus, and firm profits if the government could increase the rebates of a blockbuster Medicare Part D drug? We estimate a simultaneous model of insurance demand and statin demand for the population of statin users in 2010. Our demand estimates allow us to quantify how insurer profits change under different statin formulary structures.; We use these profit functions to estimate the rebates for Crestor and Lipitor, two blockbuster drugs, using a moment inequality approach; we estimate rebates between 25% and 54% for branded statins. In counterfactuals, we analyze the effect of rebates on formulary design and consumer surplus. We show that increasing only Crestor rebates has no effect on consumer surplus because of offsetting effects on winners and losers. In contrast, increasing only Lipitor rebates does increase consumer surplus. If rebates reduced U.S. prices to match those paid in Canada, then consumer surplus would increase by up to 3.1%. In the second essay, I compare estimates of formulary-contingent rebates using three empirical moment inequality models. Unobserved private rebates are an important determinant of the prices that insurers pay drug manufacturers in Medicare Part D.
There is growing interest in understanding these negotiated rebates and their consequences for market equilibrium.; However, rebates are secret and have proven difficult to estimate. In this paper I compare three moment inequality models that I use to estimate formulary-contingent rebates for branded statins. The first model, which only allows for measurement error, imposes the strong assumption that there is no rebate heterogeneity that is unobserved to the econometrician. Because different insurers use different agents (Pharmacy Benefit Managers) in rebate negotiations, this assumption is unlikely to hold. As a consequence, I develop two models that allow for unrestricted insurer-specific unobserved rebate heterogeneity. Somewhat surprisingly, the measurement-errors-only model produces reasonable results in this context; however, the rebates for Lipitor are approximately twice as large in my preferred model relative to the measurement-errors-only model.; In the third essay, also joint with Mert Demirer, I study the effects of government-negotiated drug prices using Nash-in-Nash bargaining models. One of the most controversial aspects of Medicare Part D is that the government is prohibited from being involved in price negotiations despite the fact that it provides almost $100 billion to the program in subsidies each year. We model pharmaceutical drug price setting using Nash-in-Nash bargaining models. We compare two models: one where insurers negotiate drug prices and another where the government negotiates prices. We show that the ability of the government to improve consumer surplus depends on both upstream and downstream market structure.
With many insurers and few drug manufacturers, the government can increase consumer surplus, but with few insurers the government cannot increase consumer surplus no matter how much bargaining power it has vis-a-vis drug manufacturers.; We also show that a Nash-in-Nash bargaining model where insurers and drug manufacturers negotiate over both manufacturer prices and copays can be used to estimate unobservable manufacturer prices and bargaining weights as long as there are profit spillovers across bilateral negotiations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 137-142).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130758</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on econometrics and economic theory</title>
<link>https://hdl.handle.net/1721.1/130757</link>
<description>Essays on econometrics and economic theory
Li, Kevin Kainan.
This thesis consists of three chapters. The first chapter extends the recent result of Athey and Wager on the asymptotic normality of random forest estimators to a multivariate setting; in particular, we examine stability properties and bounds for certain classes of tree estimators, and provide guidance and heuristics that make our results useful for practitioners. The second chapter studies a continuous-time principal-agent model for risky projects in which success is binary and not quantifiable. We consider optimal incentive contracts which feature two components: a flow payment that is paid out at each moment and a lump-sum bonus that is paid on project success. We characterize the optimal solution and calibrate our model to data on executive compensation. The final chapter studies speed and competition in trading venues, in particular those for illiquid markets. We show that differences in trading speeds among different venues lead to endogenous market segmentation, thus increasing trading volume and overall liquidity. We also examine equilibria and various notions of welfare in an entry game in which an entrant has the option to create a faster trading venue and compete against an incumbent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 137-142).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130757</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding plausible biosignature gas candidates for detection in habitable exoplanet atmospheres</title>
<link>https://hdl.handle.net/1721.1/130756</link>
<description>Expanding plausible biosignature gas candidates for detection in habitable exoplanet atmospheres
Zhan, Zhuchang.
Exoplanet science has become a central spotlight of astrophysics in the last decade, due in part to the discovery of more than 4400 exoplanets, of which 60 are potentially habitable. One rapidly growing branch of exoplanet research is atmosphere simulations for the search for signs of life with upcoming telescopes via remote detection of biosignature gases in exoplanet atmospheres. Existing work, however, has thus far only assessed ~ 20 molecules as potential biosignature gases. Recent work has proposed to study tens of thousands of small molecules as possible biosignature gases and constructed the All Small Molecules (ASM) Database. Assessing each of these molecules individually is not feasible, so we need to develop an efficacious approach to sort through these molecules to find the best candidate biosignature gases. In this thesis, I provide the first practical solution to filter molecules in the ASM Database by introducing a triage framework.; Using the triage approach, I can quickly retrieve a compact cluster of plausible biosignature gas candidates that match the query spectrum to speed up the assessment. The same process allows us to construct mock prototype spectra for many unassessed gases to retrieve plausible biosignature gases in response to a remote observation. The triage approach proceeds by first clustering molecular IR spectra to identify highly detectable molecular subgroups. Next, the approach filters highly detectable clusters by the ability of the molecules contained therein to accumulate in an exoplanet atmosphere using proxy estimates for solubility, volatility, and photochemistry. Finally, the triage prioritizes the filtered molecules with notable biological functions or abundant production by life on Earth for further assessment.; Using this process substantially and efficiently narrows the molecules in the ASM Database to a typically small candidate subset bounded only by the spectral resolution that the James Webb Space Telescope (JWST) supports. 
Additional information or improvements to spectral resolution will undoubtedly further narrow the candidates, possibly to a unique association. A result that emerges is that I can distinguish among different groups of hydrocarbons even at a spectral resolution of Δν = 100 cm⁻¹ (R = 100 at 1 μm and R = 10 at 10 μm). Hydrocarbon molecules with the conjugated-diene and iso-ene structure are of particular interest because their smallest member, isoprene (C₅H₈), is produced in the highest amounts on Earth (equivalent to methane) by a broad spectrum of life, from bacteria to plants and mammals.; I further discover that isoprene has no abiotic false-positives, and it can be a potential biosignature gas for exoplanets with anoxic atmospheres transiting M dwarf stars. However, JWST observation simulations suggest that isoprene is indistinguishable from other conjugated-diene and iso-ene molecules. Identifying the exact molecule will require Δν = 10 cm⁻¹ (R = 1000 at 1 μm and R = 100 at 10 μm). Furthermore, the triage approach identifies the carbonyl group (molecules with C=O functional group) as highly detectable and prioritizes carbonyls for further assessment because all life produces carbonyls, and their chemistry is a fundamental pillar in Earth's biochemistry. The triage approach, however, correctly rules out carbonyls as possible biosignature gases due to their high solubility in water, low volatility, and rapid photochemical destruction.; In the study of the carbonyl group, I discover that carbonyl destruction may lead to a significant accumulation of carbon monoxide (CO) in reducing H₂-dominated atmospheres. Although this offers a counter-argument to popular wisdom that carbon monoxide is an "anti-biosignature" gas, CO also has abiotic pathways that temper such an inference. For the first time, it is feasible for us to use a largely automated process to narrow down thousands of potential biosignature gas candidates. 
The projected savings over a purely human endeavor that the Machine Learning-enabled triage framework offers are an exciting new interdisciplinary outcome. I posit that the triage approach's efficacy is necessary for proposing new biosignature gas candidates, thus opening a new avenue, in conjunction with AI, for biosignature gas research.; With further improvements in resolution, even modest ones, biosignature detectability sharpens as candidates are sifted far more finely, perhaps even to a unique identification. Keywords: Biosignature Gases, Molecular IR Spectroscopy, Triage, Machine Learning, Hierarchical Clustering, Exoplanet Atmosphere, Transmission Spectroscopy
Thesis: Ph. D. in Computational Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 233-262).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130756</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on physician innovation and practice style in healthcare markets</title>
<link>https://hdl.handle.net/1721.1/130755</link>
<description>Essays on physician innovation and practice style in healthcare markets
Badinski, Ivan Nikolaev.
This thesis consists of three chapters on technological innovation, diffusion, and practice style among physicians. The first chapter investigates the effect of age on a surgeon's propensity to use new medical procedures. I identify a large number of medical technologies undergoing diffusion by a well-defined risk set of physicians by exploiting the parent-descendant relationship among ICD9-CM inpatient procedure codes where each new code has a well-defined antecedent. I find that surgeons that are ten years older at the time of new-code approval are sixteen percent less likely to use this code. Evidence from the diffusion of new pharmaceuticals, diagnostic codes, and minimally invasive procedures suggests that this effect may be driven by skill acquisition costs rather than information frictions.; In the second chapter, I study the impact of market size on the development of novel surgeries, an important domain of medical innovation where intellectual property rights, approval regulation, and financial incentives play only a minor role. Using the codification of ICD9 CM procedure codes as a novel measure of new-surgery development, I investigate the behavior of surgical innovation and compare it to pharmaceuticals where traditional innovative institutions are salient. I find that the two processes follow very different aggregate trends. Despite this difference, I estimate a positive and significant elasticity of surgical innovation with respect to potential market size by leveraging quasi-exogenous changes in potential market size due to shifting US demographics. The third chapter, joint with Amy Finkelstein, Matthew Gentzkow, Peter Hull, and Heidi Williams, studies the role of physician practice style in Medicare geographic spending variation.; We estimate a model that allows for variation in patient demand, physician treatment intensity, and regional supply-side factors, as well as patient-physician sorting. 
The model is identified by quasi-experimental migration of Medicare patients and physicians and their matching within regions. We find that physicians vary greatly in their treatment intensity. Our baseline decomposition suggests that about 30 percent of regional variation in health care utilization is explained by differences in average physician treatment intensity, 20 percent by other area supply factors, and 50 percent by differences in patient demand.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-182).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130755</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The heterogeneity and volatile content of Earth's mantle, magmas and crust</title>
<link>https://hdl.handle.net/1721.1/130753</link>
<description>The heterogeneity and volatile content of Earth's mantle, magmas and crust
Urann, Benjamin M.(Benjamin Macy)
This thesis explores the volatile content of the mantle, subducted oceanic crust, and arc magmas as well as the structure of slow spreading ocean crust and the heterogeneity of Earth's upper mantle. In Chapter 2, I directly explore the halogen (F and Cl) content of mantle minerals in situ, then use these measurements to assess the halogen content of the upper mantle. In Chapter 3, I investigate the volatile content of Raspas eclogites (SW Ecuador), a proxy for deeply subducted oceanic crust, to evaluate volatile transfer from crustal generation at divergent plate boundaries (e.g., mid-ocean ridges) to recycling of ocean crust at subduction zones. In Chapter 4, I use the H₂O content of nominally anhydrous minerals in plutonic arc cumulates to elucidate the H₂O content of the melts from which the rocks crystallized. In this way, I assert that primitive arc magmas may contain 4-10 wt.% H₂O and through fractional crystallization up to ~20 wt.% H₂O, making them far more hydrous than traditional methods (i.e., olivine-hosted melt inclusions) surmise. In Chapter 5, I show that mantle peridotite exposed along the 16°N region of the Mid-Atlantic Ridge originated in an arc setting and has been remixed into subridge mantle, indicating that the sub-ridge mantle is more heterogeneous and depleted than inferences made from mid-ocean ridge basalts suggest. Chapter 6 surveys the life cycle of oceanic core complexes through zircon geochronology and posits an updated framework for understanding the termination of oceanic core complexes, and more broadly oceanic detachment faults. Together, this contribution highlights the chemical heterogeneity of the mantle, and quantifies the full extent of volatiles hosted by mantle and crustal reservoirs.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 153-169).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130753</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coral reefs in the anthropocene ocean : novel insights from skeletal proxies of climate change, impacts, and resilience</title>
<link>https://hdl.handle.net/1721.1/130752</link>
<description>Coral reefs in the anthropocene ocean : novel insights from skeletal proxies of climate change, impacts, and resilience
Mollica, Nathaniel R.(Nathaniel Rust)
Anthropogenic emissions of greenhouse gases are driving rapid changes in ocean conditions. Shallow-water coral reefs are experiencing the brunt of these changes, including intensifying marine heatwaves (MHWs) and rapid ocean acidification (OA). Consequently, coral reefs are in broad-scale decline, threatening the livelihoods of hundreds of millions of people. Ensuring survival of coral reefs in the 21st century will thus require a new management approach that incorporates robust understanding of reef-scale climate change, the mechanisms by which these changes impact corals, and their potential for adaptation. In this thesis, I extract information from within coral skeletons to 1) Quantify the climate changes occurring on coral reefs and the effects on coral growth, 2) Identify differences in the sensitivity of coral reefs to these changes, and 3) Evaluate the adaptation potential of the keystone reef-building coral, Porites.; First, I develop a mechanistic Porites growth model and reveal the physicochemical link between OA and skeletal formation. I show that the thickening (densification) of coral skeletal framework is most vulnerable to OA and that, under 21st century climate model projections, OA will reduce Porites skeletal density globally, with greatest impact in the Coral Triangle. Second, I develop an improved metric of thermal stress, and use a skeletal bleaching proxy to quantify coral responses to intensifying heatwaves in the central equatorial Pacific (CEP) since 1982. My work reveals a long history of bleaching in the CEP, and reef-specific differences in thermal tolerance linked to past heatwave exposure implying that, over time, reef communities have adapted to tolerate their unique thermal regimes. Third, I refine the Sr-U paleo-thermometer to enable monthly-resolved sea surface temperatures (SST) generation using laser ablation ICPMS.; I show that laser Sr-U accurately captures CEP SST, including the frequency and amplitude of MHWs. 
Finally, I apply laser Sr-U to reconstruct the past 100 years of SST at Jarvis Island in the CEP, and evaluate my proxy record of bleaching severity in this context. I determine that Porites coral populations on Jarvis Island have not yet adapted to the pace of anthropogenic climate change.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130752</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of zooplankton in regulating carbon export and phytoplankton community structure : integrating models and observations</title>
<link>https://hdl.handle.net/1721.1/130751</link>
<description>The role of zooplankton in regulating carbon export and phytoplankton community structure : integrating models and observations
Archibald, Kevin Matthew.
In this thesis, I explore two topics in plankton ecology with a combination of models and observations. First, I investigate the contribution of zooplankton diel vertical migration (DVM) to the vertical flux of carbon as part of the biological pump. I do this by constructing and analyzing a global model that includes DVM and is driven by satellite-based estimates of primary productivity. There has long been speculation about the significance of DVM to the biological pump, but quantitative estimates of its impact are rare. I estimate that DVM constitutes approximately 16% of the global carbon export flux associated with the biological pump and that the relative contribution of DVM is higher in subtropical latitudes. In later chapters, I build two nutrient-phytoplankton-zooplankton (NPZ) models with different levels of complexity to evaluate the role of nutrient supply and grazing in promoting phytoplankton diversity.; Zooplankton switching plays a significant role in promoting diversity because it allows competing phytoplankton types to coexist in situations that would otherwise lead to competitive exclusion. When implemented in a size-structured NPZ model, stronger switching increases the evenness of the distribution of biomass between coexisting size classes, which is used as a proxy for taxonomic diversity. I also describe a particular characteristic of the Kill-the-Winner functional response (used in the NPZ models), which I have termed synergistic grazing. Synergistic grazing occurs when the grazing rate on one phytoplankton type increases as the biomass of an alternative phytoplankton type increases. This characteristic can result in unintuitive model dynamics. Finally, I describe patterns in phytoplankton community size structure in the shelfbreak region of the Northeast U.S. 
Shelf using high-resolution flow-cytometry measurements.; I find that enhancement of phytoplankton biovolume at the shelfbreak front is common during the springtime, but these enhancement events are not associated with consistent changes in community size structure. I evaluate these results in the context of hypotheses generated based on my analysis of the NPZ models.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 127-135).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130751</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A scientific machine learning approach to learning reduced models for nonlinear partial differential equations</title>
<link>https://hdl.handle.net/1721.1/130748</link>
<description>A scientific machine learning approach to learning reduced models for nonlinear partial differential equations
Qian, Elizabeth Yi.
This thesis presents a new scientific machine learning method which learns from data a computationally inexpensive surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE), an enabling technology for many computational algorithms used in engineering settings. The proposed approach generalizes to the PDE setting an Operator Inference method previously developed for systems of ordinary differential equations (ODEs) with polynomial nonlinearities. The method draws on ideas from traditional physics-based modeling to explicitly parametrize the learned model by low-dimensional polynomial operators which reflect the known form of the governing PDE. This physics-informed parametrization is then united with tools from supervised machine learning to infer from data the reduced operators.; The Lift &amp; Learn method extends Operator Inference to systems whose governing PDEs contain more general (non-polynomial) nonlinearities through the use of lifting variable transformations which expose polynomial structure in the PDE. The proposed approach achieves a number of desiderata for scientific machine learning formulations, including analyzability, interpretability, and making underlying modeling assumptions explicit and transparent. This thesis therefore provides analysis of the Operator Inference and Lift &amp; Learn methods in both the spatially continuous PDE and spatially discrete ODE settings. Results are proven regarding the mean square errors of the learned models, the impact of spatial and temporal discretization, and the recovery of traditional reduced models via the learning method. 
Sensitivity analysis of the operator inference problem to model misspecifications and perturbations in the data is also provided.; The Lift &amp; Learn method is demonstrated on the compressible Euler equations, the FitzHugh-Nagumo reaction-diffusion equations, and a large-scale three-dimensional simulation of a rocket combustion experiment with over 18 million degrees of freedom. For the first two examples, the Lift &amp; Learn models achieve 2-3 orders of magnitude dimension reduction and match the generalization performance of traditional reduced models based on Galerkin projection of the PDE operators, predicting the system evolution with errors between 0.01% and 1% relative to the original nonlinear simulation. For the combustion application, the Lift &amp; Learn models accurately predict the amplitude and frequency of pressure oscillations as well as the large-scale structures in the flow field's temperature and chemical variables, with 5-6 orders of magnitude dimension reduction and 6-7 orders of magnitude computational savings.; The demonstrated ability of the Lift &amp; Learn models to accurately approximate the system evolution with orders-of-magnitude dimension reduction and computational savings makes the learned models suitable for use in many-query computations used to support scientific discovery and engineering decision-making.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 165-172).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130748</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging prior information for real-time monocular simultaneous localization and mapping</title>
<link>https://hdl.handle.net/1721.1/130747</link>
<description>Leveraging prior information for real-time monocular simultaneous localization and mapping
Greene, W. Nicholas(William Nicholas)
Monocular cameras are powerful sensors for a variety of computer vision tasks since they are small, inexpensive, and provide dense perceptual information about the surrounding environment. Efficiently estimating the pose of a moving monocular camera and the 3D structure of the observed scene from the images alone is a fundamental problem in computer vision commonly referred to as monocular simultaneous localization and mapping (SLAM). Given the importance of egomotion estimation and environmental mapping to many applications in robotics and augmented reality, the last twenty years have seen dramatic advances in the state of the art in monocular SLAM. Despite the rapid progress, however, several limitations remain that prevent monocular SLAM systems from transitioning out of the research laboratory and into large, uncontrolled environments on small, resource-constrained computing platforms. This thesis presents research that attempts to address existing problems in monocular SLAM by leveraging different sources of prior information along with targeted applications of machine learning. First, we exploit the piecewise planar structure common in many environments in order to represent the scene using compact triangular meshes that will allow for faster reconstruction and regularization. Second, we leverage the semantic information encoded in large datasets of images to constrain the unobservable scale of motion of the monocular solution to the true, metric scale without additional sensors. Lastly, we compensate for known viewpoint changes when associating pixels between images in order to allow for robust, learning-based depth estimation across disparate views.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 135-151).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130747</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atmospheric impacts and potential for regulation of current and emerging technologies in transportation</title>
<link>https://hdl.handle.net/1721.1/130745</link>
<description>Atmospheric impacts and potential for regulation of current and emerging technologies in transportation
Chossière, Guillaume P.(Guillaume Pierre)
Although it is an integral part of everyday life and a key driver of economic growth, road transportation is also associated with negative externalities: it is a contributor to global greenhouse gas emissions, and is responsible for the emission of air pollutants. In this dissertation, I evaluate the atmospheric impacts of existing and emerging technologies in transportation, and examine regulatory options to limit negative externalities. This work develops methods to quantify and reduce the public health impacts of atmospheric emissions from road transportation. I focus on three case studies: the regulation of on-road emissions from diesel cars in Europe; the air quality impacts of electric vehicles in China; and the effect of the largest short-term decreases in global anthropogenic emissions in modern history, the COVID-19 related lockdowns.; In the first part of this thesis, I analyze the public health impacts in Europe of nitrogen oxides (NOₓ) emissions from diesel cars in excess of the regulatory limits. Drawing on recent on-road measurement campaigns, fleet inventory data and driving behaviors, I estimate linearized sensitivities to changes in road transportation NOₓ emissions in Europe using a state-of-the-art chemical transport model. My findings suggest that excess NOₓ emissions caused 2,700 premature mortalities in Europe in 2015. 70% of these impacts occur in a different country than the emissions, suggesting that effective strategies to reduce transportation-related air quality impacts in European countries require international cooperation. The second part of this thesis addresses the deployment of electric vehicles in China.; Although it reduces CO₂ and air pollutant emissions from transportation and refineries, substituting gasoline cars with electric vehicles (EVs) requires increased power generation. 
To quantify the resulting climate and air quality trade-off, I develop a high-resolution power grid model that estimates, for each unit, hourly generation and emissions under four EV charging scenarios in 2020. Using the GEOS-Chem atmospheric chemistry transport model, I find that the projected growth in EV usage by the end of 2020 would result in ~1,900 (95% CI: 1,600-2,200) avoided premature mortalities and a 2.4 Tg decrease in CO₂ emissions with the current power grid. By 2022, the benefits of EV deployment with regard to air quality and CO₂ emissions are expected to increase by 26% and 4% nationally, respectively. However, as regulations governing emissions from the oil refining sector tighten, the benefits of EV deployment for air quality will become more dependent on a cleaner power grid.; Finally, the last part of this thesis focuses on the largest short-term decreases in anthropogenic emissions in modern history. It is a comprehensive assessment of the impact of COVID-19-related lockdowns on air quality and human health. Although all sectors of activity have been impacted by the lockdowns, transportation emissions fell most, and the COVID-19-related lockdowns provide a natural experiment to quantify the air quality impacts of short-term decreases in transportation emissions in the context of decreasing emissions in all sectors. Using global satellite observations and ground measurements from 36 countries in Europe, North America, and East Asia, I find that lockdowns led to statistically significant reductions in NO₂ concentrations globally, resulting in ~26,000 avoided premature mortalities, including ~19,000 in China. However, I do not find corresponding reductions in PM₂.₅ and ozone globally.; Using satellite measurements, I show that the disconnect between NO₂ and ozone changes is the result of low chemical sensitivity to NO₂. 
The COVID-19-related lockdowns demonstrate the need for targeted air quality policies to reduce the global burden of air pollution, especially related to secondary pollutants.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 147-167).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130745</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tensor-train-based algorithms for swarm state estimation with a team of mobile sensors</title>
<link>https://hdl.handle.net/1721.1/130744</link>
<description>Tensor-train-based algorithms for swarm state estimation with a team of mobile sensors
Miculescu, David.
Rapid development and deployment of large-scale swarms of robotic agents motivates the need for computationally-efficient algorithms for monitoring the state of the swarm. This dissertation is concerned with the state estimation of swarms via tensor methods, specifically the Tensor-Train (TT) decomposition. Existing nonlinear filtering algorithms for swarms, such as particle-based methods, suffer from particle degeneracy when scaling to larger swarms and higher dimensional state-spaces. Tensor-based methods such as the TT decomposition have the potential to alleviate the curse of dimensionality via computationally-efficient multi-linear algebra methods in "low-rank" problem instances. Our first contribution is a class of algorithms allowing for the efficient reconstruction of the state of agents in a swarm in a Bayesian manner. Specifically, we consider swarms under the standard multi-target tracking model, i.e., independent agents with identical dynamics.; We demonstrate that the traditional Probability Hypothesis Density (PHD) filter can be efficiently implemented in the TT format for problem settings in which the relevant parameters have efficient TT approximations. We then generalize our TT-based algorithms to swarm problems with interacting agents via the Gas-kinetic (GK) PHD filter. Specifically, we demonstrate that the GK-PHD filter may be efficiently implemented via the TT format. We analyze the convergence and computational complexity of our TT-based algorithms. Our second contribution is a class of efficiently implemented controllers for a team of mobile sensors performing swarm state estimation guided by an information-theoretic objective. Specifically, certain costly quantities arising from the traditional binary measurement model for sensors are implemented efficiently in the TT format. 
We analyze the convergence and computational complexity of our TT-based implementations. Furthermore, we propose a novel controller based on a tertiary sensor model and describe its efficient implementation in the TT format.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-158).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130744</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnosing the variability in temperature and velocity in the Middle Atlantic Bight</title>
<link>https://hdl.handle.net/1721.1/130743</link>
<description>Diagnosing the variability in temperature and velocity in the Middle Atlantic Bight
Forsyth, Jacob Samuel Tse.
Observations of hydrographic and dynamical properties on the Middle Atlantic Bight shelf document strong variability at time scales spanning events that last a few days to century-long trends. This thesis studies individual processes which impact shelf temperature and velocity structure, and quantifies the mean velocity conditions at the shelf break. Chapter 2 uses model output to study the dynamics that lead to the breakdown of summertime thermal stratification, and how the processes which reduce stratification vary from year to year. In summer, the atmosphere heats the surface of the ocean, leading to strong thermal stratification with warm water overlying cool water. During fall, strong storm events with downwelling-favorable winds are found to be the primary process by which stratification is reduced. The timing of these events and the associated destratification varies from year to year. In Chapter 3, the velocity structure of the New Jersey shelf break is examined, with a focus on the Shelfbreak Jet. Using 25 years of velocity measurements, mean velocity sections of the Shelfbreak Jet are created in both Eulerian and stream coordinate frameworks. The jet exhibits strong seasonal variability, with maximum velocities observed in spring and minimum velocities in summer. Evidence is found that Warm Core Rings, originating from the Gulf Stream and passing through the Slope Sea adjacent to the New Jersey shelf, tend to shift the Shelfbreak Jet onshore of its mean position or entirely shut down the Shelfbreak Jet's flow. At interannual timescales, variability in the Shelfbreak Jet velocity is correlated with the temperature on the New Jersey Shelf, with temperature lagging by about 2 months. 
Chapter 4 focuses on the impact of Warm Core Rings on the velocity and temperature structure on the New Jersey shelf. Warm Core Rings that have higher azimuthal velocities and whose cores approach closer to the shelf are found to exert greater influence on the shelf's along-shelf velocities, with the fastest and closest rings reversing the direction of flow at the shelf break. Warm Core Rings are also observed to exert long-lasting impacts on the shelf temperature, with faster rings cooling the shelf and slower rings warming the shelf. Seasonal changes in thermal stratification strongly affect how rings alter the shelf temperature. Rings in summer tend to cool the shelf, and rings throughout the rest of the year generally warm the shelf.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 119-126).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130743</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forced response system identification of gas turbine fan flutter</title>
<link>https://hdl.handle.net/1721.1/130742</link>
<description>Forced response system identification of gas turbine fan flutter
Kiss, Andras Laszlo Andor.
Flutter in aero-engine fans continues to present major safety concerns and costs millions of dollars in maintenance from high-cycle fatigue. New technologies to improve safety and reduce cost require knowledge of damping levels, but determining the in-situ flutter damping of an installed fan remains challenging. This thesis introduces a first-of-its-kind forced response system identification approach to measure fan flutter damping in a full gas turbine aero-engine. A reduced order modeling framework that captures the effect of actuator dynamics, flutter and stall dynamics, and sensor dynamics on forced response was developed for rigorous design of the experiment. A statistical analysis demonstrates that the experiment is capable of measuring flutter damping within 15% for most flutter modes throughout the entire operating range of the Pratt &amp; Whitney 615 turbofan engine. The least damped, and therefore most limiting, flutter modes yield the least error, with flutter margin estimated within 1.5 points or 14%. A key enabling technology for this experiment is a high-power zero-net mass flux actuator capable of exciting the flutter modes. Experimental characterization of the actuator dynamics at this thus far unexplored scale demonstrates the limitations of current reduced order models in predicting actuator performance. This thesis establishes the need for experimental characterization of such actuators for future forced response experiments. Virtual forced response system identification experiments demonstrate robustness of the approach to noise sources, with damping measurements succeeding even with noise levels above those typical in engine testing. Fans with different performance characteristics were simulated, demonstrating that flutter damping can be measured with similar levels of error as compared with the PW615 engine. Guidelines on system identification, data processing, and experimental setup are developed for future forced response experiments. 
Finally, detailed design demonstrates that the experiment can be conducted safely and with no impact on the engine's steady-state performance. The thesis contributions are (1) a new approach to measuring fan flutter damping in aero-engines that enables the development of flutter mitigation technologies and advanced prognostics, and (2) a forced response reduced order modeling framework that provides new capability for experimental design and coupled engine system dynamic modeling that can identify detrimental engine conditions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 217-223).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130742</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, characterization, and in vivo evaluation of a magnetorheological fluid as a hemostatic agent</title>
<link>https://hdl.handle.net/1721.1/130741</link>
<description>Design, characterization, and in vivo evaluation of a magnetorheological fluid as a hemostatic agent
Tekleab, Yonatan.
Magnetorheological (MR) fluids and elastomers have been shown to be effective in systems requiring responsive materials with fast-acting, tunable properties. Recently, use of MR fluids (MRFs) has risen with improvements in quality and cost of raw materials and manufacturing processes. Traditionally used in the automotive and manufacturing industries, these applications have recently extended into healthcare, improving prosthetic and exoskeleton designs. Inspired by such applications, we have developed a magnetically-actuated fluidic valve using a biocompatible MRF suspension for use in the human body, to slow hemorrhage. Traumatic injury is the leading cause of death in the United States and globally for people ages 1 to 46, with 30% to 40% of these deaths attributed to severe or prolonged hemorrhage. Furthermore, 80% of trauma-related deaths in the first hour of hospital admission are due to hemorrhagic shock. Field-responsive, biocompatible suspensions present a unique opportunity to intervene in pre- and early hospital settings to stem thoracic and abdominal bleeding. Such a hemostat would provide physicians more time to resuscitate patients upon trauma facility admission. The MR valve comprises an injectable, biocompatible MRF suspension with externally placed permanent magnets. To produce a significant, controllable MR effect in a bleeding patient near the site of injury, the MRF was designed for biocompatibility, rapid delivery, and spatially localized actuation within blood vessels such that bleeding can be controlled. Understanding and optimizing the particulate chaining and accumulation mechanisms by which the MRF stems bleeding in situ is critically important. We have synthesized and characterized a novel, biocompatible MR hemostatic agent for use with hemorrhaging patients through a minimally invasive technique. Safety and efficacy of the technique have been demonstrated through benchtop and preliminary in vivo (rat) models. 
Using small Neodymium magnets in 3D printed holders that can be worn by field surgeons, we demonstrate arrest of a major hemorrhagic event over a range of physiologically-relevant flow conditions, with sustained blood pressure, dramatically reduced volumes of blood loss, and significantly increased survival time. In future phases, safety and efficacy of the method will be demonstrated in swine models before testing in controlled surgical settings with human patients.
Thesis: Ph. D. in Materials and Structures, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 291-315).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130741</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic advancements in the practice of revenue management</title>
<link>https://hdl.handle.net/1721.1/130724</link>
<description>Algorithmic advancements in the practice of revenue management
Amar, Jonathan Z.(Jonathan Zalman Aron Yaich)
In recent years, firms have been personalizing the customer experience by recommending specific products, and these customers have simultaneously raised their expectations for personalization. To support these efforts, a firm must conceptualize its understanding of the market and the way customers interact with its products. This requires some modeling of how customers value each product and its attributes. The first part of the dissertation is dedicated to showing how one can go from data to personalization in retail applications. The first chapter's contribution is methodological: working closely with our partner beer retailer, we provide store-specific assortment optimization. Using an efficient estimation procedure for choice models, together with a novel application of collaborative filtering, we learn a demand model that is store-specific and reliable, using a cautious validation procedure. Once armed with our model, we leverage continuous optimization techniques, coupled with technical advances, to produce at scale personalized assortments which generate higher revenue subject to multiple business constraints. The second chapter considers a different setting, relevant to new e-retailers which lack the data to inform their personalization. Such retailers usually rely on questionnaires to extract information. We incorporate the personalization task into the questionnaire design, which is driven by the product recommendation objective. We provide a framework for extending extant utility-estimation questionnaires, and additionally provide a direct approach leveraging robust optimization for tractability. We support our work with numerical simulations and theoretical justification in simplified settings, promising practical gains for personalization. While we have acknowledged data uncertainty in the first part, the second part of the dissertation is focused on the study of uncertainty in modern markets, and how to address it. 
The third chapter considers the canonical Network Revenue Management (NRM) problem. More specifically, we take the perspective of a monopoly seller which offers multiple products that consume capacitated resources. Given that demand forecasts at the granularity of products may be unreliable in cases where demand is highly volatile or sporadic (e.g., the airline and hotel industries), we provide a distribution-free algorithm for NRM which is essentially robust to market uncertainties. By analyzing our algorithm's performance through the primal-dual schema, we establish its asymptotic optimality under the competitive lens. We benchmark our algorithm by showing that in regimes where the market is potentially rapidly changing, we outperform state-of-the-art methods. Finally, in the fourth chapter, we analyze the problem faced by budget-constrained e-advertisers. Ad slots are allocated using a second-price auction, in which the highest bidder wins the auction and pays the second-highest price. In this case, the advertiser faces a bidding decision without knowing how much they will need to pay if they win. Considering the price uncertainty at the time of bid, which is specific to this modern market, we provide a methodology for converting a plethora of knapsack algorithms into bidding strategies implementable in this setting where the price is unknown. We show near-optimality of the bidding strategies, which in turn have substantial potential impact for e-advertisers.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-184).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130724</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization for online platforms</title>
<link>https://hdl.handle.net/1721.1/130723</link>
<description>Optimization for online platforms
Sinha, Deeksha.
In the last decade, there has been a surge in online platforms for providing a wide variety of services. These platforms face an array of challenges that can be mitigated with appropriate modeling and the use of optimization tools. In this thesis, we examine, model, and provide solutions to some of the key challenges. First, we focus on the problem of intelligent SMS routing faced by several online platforms today. In a dynamically changing environment, platforms need to carefully choose SMS aggregators to have a high number of text messages delivered to users at a low cost. To model this problem, we consider a novel variant of the multi-armed bandit (MAB) problem, MAB with cost subsidy, which models many real-life applications where the learning agent has to pay to select an arm and is concerned about optimizing cumulative costs and rewards. We show that naive generalizations of existing MAB algorithms like Upper Confidence Bound and Thompson Sampling do not perform well for the SMS routing problem. For an instance with K arms and time horizon T, we then establish a fundamental lower bound of Ω(K¹/³T²/³) on the performance of any online learning algorithm for this problem, highlighting the hardness of our problem in comparison to the classical MAB problem. We also present a simple variant of explore-then-commit and establish near-optimal regret bounds for this algorithm. Lastly, we perform numerical simulations to understand the behavior of a suite of algorithms for various instances and offer a practical guide to employing different algorithms. 
Second, we focus on the problem of making real-time personalized recommendations, which are now needed in just about every online setting, ranging from media platforms to e-commerce to social networks. While the challenge of estimating user preferences has garnered significant attention, the operational problem of using such preferences to construct personalized offer sets for users is still a challenge, particularly in modern settings with a massive number of items and a millisecond response time requirement. Thus motivated, we propose an algorithm for personalized offer set optimization that runs in time sub-linear in the number of items while enjoying a uniform performance guarantee. Our algorithm works for an extremely general class of problems and models of user choice that includes the mixed multinomial logit model as a special case. Our algorithm can be entirely data-driven, and empirical evaluation on a massive content discovery dataset shows that our implementation indeed runs fast and with increased performance relative to existing fast heuristics. Third, we study the problem of modeling purchase of multiple items (in online and offline settings) and utilizing it to display optimized recommendations, which can lead to significantly higher revenues as compared to capturing purchase of only a single product in each transaction. We present a parsimonious multi-purchase family of choice models called the BundleMVL-K family, and develop a binary-search-based iterative strategy that efficiently computes optimized recommendations for this model. We establish the hardness of computing optimal recommendation sets and characterize structural properties of the optimal solution. The efficacy of our modeling and optimization techniques compared to competing solutions is shown using several real-world datasets on multiple metrics such as model fit, expected revenue gains, and run-time reductions. 
Fourth, we study the problem of A-B testing for online platforms. Unlike traditional offline A-B testing, online platforms face some unique challenges, such as sequential allocation of users into treatment groups, a large number of user covariates to balance, and a limited number of users available for each experiment, making randomization inefficient. We consider the problem of optimally allocating test subjects to either treatment with a view to maximizing the precision of our estimate of the treatment effect. Our main contribution is a tractable algorithm for this problem in the online setting, where subjects must be assigned as they arrive, with covariates drawn from an elliptical distribution with finite second moment. We further characterize the gain in precision afforded by optimized allocations relative to randomized allocations and show that this gain grows large as the number of covariates grows.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-189).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130723</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering mammalian cell line for N-linked glycosylation control</title>
<link>https://hdl.handle.net/1721.1/130720</link>
<description>Engineering mammalian cell line for N-linked glycosylation control
Jung, Giyoung.
N-linked glycosylation in monoclonal antibodies (mAbs) plays a critical role in their biological function, clinical efficacy, and safety. mAbs have highly conserved N-linked glycans at the Asn297 position of the Fc region affecting Fc-mediated effector functions, including antibody-dependent cellular cytotoxicity (ADCC) and complement-dependent cytotoxicity (CDC). Despite increasing efforts to develop strategies for glycoengineering, the lack of tools to precisely control mAb N-glycosylation has significantly hampered the production of more effective and safe antibodies. Here, we leverage synthetic biology to control mAb fucosylation and galactosylation with small-molecule-inducible systems and synthetic miRNA regulators in engineered Chinese hamster ovary (CHO) cells. We achieved precise tuning of fucosylation (0-97%) and galactosylation (0-87%) levels with dose-dependent induction of two glycosyltransferase genes, FUT8 and [beta]4GALT1. Importantly, orthogonal and small-molecule-inducible gene expression enabled us to simultaneously and independently control levels of fucosylation and galactosylation. Next, we developed a system for intrinsic control of mAb fucosylation, thereby eliminating the need for expensive small molecules. Using FUT8⁻/⁻ CHO cells, recombinant FUT8 expression levels were controlled by varying the number of synthetic miRNA target sites at the 5' and 3' UTRs. Upon induction of miR-FF4, precise tuning of mAb fucosylation levels (0.9-98%) was achieved and mAb production remained stable over long-term experiments. The development of tools to control N-glycosylation levels of monoclonal antibodies will help to overcome existing bottlenecks in next-generation antibody engineering, thereby improving their clinical efficacy.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 101-109).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130720</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Human interaction &amp; gait strategy with tightly-coupled lower-extremity systems</title>
<link>https://hdl.handle.net/1721.1/130718</link>
<description>Human interaction &amp; gait strategy with tightly-coupled lower-extremity systems
Gupta, Aditi M., Ph. D., Massachusetts Institute of Technology.
Interest in the use of exoskeletons (wearable robotic devices tightly coupled to the user's body) for human gait augmentation has soared recently, with research flourishing in system design, control, and use efficacy. Use cases span many fields, from military (e.g. load carriage assistance) to medicine (e.g. gait rehabilitation or restoration) and industry (e.g. injury prevention). Evaluating the human factors of human-exoskeleton interaction is an essential step towards operationalization. Unexplained variation in gait strategy and adaptation observed across individual operators must be better understood to enable safe and effective exoskeleton use in real-life environments. Cognitive fit is an individual's understanding of and ability to operate a system. Exoskeletons and similar tightly-coupled lower-extremity (TCLE) systems entail new interaction modalities that may affect cognitive fit. This thesis explores how cognitive factors and alternative interaction modalities impact individuals' gait and task performance. Two studies were conducted. The first evaluated inhibitory control as measured by a modified Simon task using interaction modalities relevant to TCLE system use, i.e. tactile cues and lower-extremity responses. The second implemented the Human-Exoskeleton Strategy &amp; Adaptation (HESA) study, in which individuals completed tasks assessing cognitive factors, i.e. inhibitory control and attention, then walked with an ankle exoskeleton. Evaluation of inhibitory control with tactile cues and lower-extremity responses resulted in slower response times and decreased response accuracy. A probe of attention in the HESA study, i.e. 
completion of a walking task on a self-paced treadmill, showed modified gait characteristics under increased attentional loads, particularly at slower walking speeds and with the addition of a secondary task. Individualized variation in exoskeleton gait, quantified by spatiotemporal gait characteristics, was explicitly presented for the first time, showing that distinct individuals initially prioritize goals like stability and coordination with an ankle exoskeleton differently. Finally, select measures of cognitive function were found to be correlated with individuals' exoskeleton gait strategy. Individual differences in baseline factors like inhibitory control and the ability to perform tasks under divided attention impact individuals' cognitive fit with exoskeleton systems. This individualized variation, as well as broader population patterns, should inform exoskeleton design and training by encouraging gait strategies that support desired exoskeleton use goals. For example, stroke patients using an exoskeleton to restore their gait and mitigate fall risk should prioritize stability during system use, while factory workers should prioritize system coordination to minimize injury risk. This thesis provides foundational insights into human-exoskeleton interaction and gait strategy from a human factors perspective.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 133-148).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130718</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and applications of cold-cathode X-ray imaging systems</title>
<link>https://hdl.handle.net/1721.1/130711</link>
<description>Design and applications of cold-cathode X-ray imaging systems
Cramer, Avilash (Avilash Kalpathy)
X-ray computed tomography (CT) and planar x-ray imaging are mainstays of modern clinical care. The electron generation mechanism in standard x-ray tubes - specifically, a thermionic cathode - is reliable and capable of high current. However, thermionic cathodes are bulky and cannot be pulsed quickly. Non-thermionic ('cold-cathode') electron generation can be exploited to make a smaller and rapidly pulsable x-ray source. Such an x-ray source would not only improve the portability of x-ray devices, but would also allow a CT system to operate by pulsing a distributed ring of x-ray sources instead of rotating a single large x-ray source. Furthermore, cold-cathode x-ray sources could allow for new signal acquisition and processing paradigms in the x-ray domain. This includes time-based image acquisition techniques, such as elastography and photon-counting measurements. In this dissertation, I discuss (1) the development of two novel types of cold-cathode x-ray sources: an ultraviolet photocathode-based source and a silicon field emission chip; (2) novel methods for planar x-ray image acquisition, including a demonstration of dynamic x-ray elastography using a pulsed photocathode x-ray source; and (3) applications of modern signal processing techniques to the tomographic image reconstruction problem. In an epilogue, I discuss our research on N95 respirator sterilization and re-use for crisis situations.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 145-159).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130711</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for selective remote control of cellular signaling using magnetic nanoparticles</title>
<link>https://hdl.handle.net/1721.1/130676</link>
<description>Design for selective remote control of cellular signaling using magnetic nanoparticles
Moon, Junsang.
Understanding the structure and dynamics of mammalian cellular signaling and the nervous system opens up opportunities to induce tissues, organs, or organisms to function in a coordinated way. Consequently, extensive research efforts have rapidly developed strategies for rationally manipulating cellular and neuronal signaling. In this thesis, I sought to develop a wireless modulation system that can selectively control a targeted biological circuit. The heat-dissipating properties of magnetic nanoparticles have garnered sustained research interest in biomedical applications, including drug release, cancer hyperthermia, and neural stimulation. However, research has mainly focused on improving the magnetic nanomaterials' heat dissipation efficiency. To introduce selective heat dissipation of magnetic nanoparticles, I designed a custom AC magnetometer that can capture the dynamic magnetization of magnetic particles suspended in solution. The collected magnetic response data were fed into an equation called the multiplexing factor to find optimized alternating magnetic field conditions that can selectively trigger heat dissipation in each particle ensemble. This approach was confirmed by direct observation of temperature changes using a thermographic camera. The selective heating system was later combined with genetic engineering and a drug delivery system for selective cellular modulation. This work culminates in a demonstration of selective remote control of cellular signaling in vitro. The theoretical background, systematic design, and precisely executed demonstration can be transplanted to any system using magnetic nanoparticles as a heat transducer. Therefore, this study sets a strong foundation and suggests new approaches for research utilizing magnetic nanoparticles' heat dissipation across a wide range of applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 137-147).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130676</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of associative polymer networks</title>
<link>https://hdl.handle.net/1721.1/130675</link>
<description>Dynamics of associative polymer networks
Mahmad Rasid, Irina.
Associative polymer networks are a versatile class of materials, as demonstrated by their use in a wide variety of applications including biomaterials, viscosity modifiers, and underwater adhesives. The viscoelastic and transport properties displayed by these associative networks can be tuned by careful design of the polymers that make up the network, as well as the transient interactions between them. Thus, elucidating the relationship between molecular-level details and the observed macroscopic properties is of high importance to further advance our understanding of associative networks. However, the complex dynamics displayed by these materials over a wide range of length and time scales present a significant challenge in studying this relation. This thesis aims to provide insights into the molecular origin of the dynamics of associative polymer networks. The first part of this thesis investigates the molecular origin of shear thinning in associative networks through the design of a model associative polymer and a custom-built set-up referred to as "rheo-fluorescence" to quantify force-induced bond dissociation under shear flow. Comparison to existing models in transient network theory then demonstrates that retraction of dangling chains alone is insufficient to account for shear thinning in the model associative polymer network. Additional modes are likely contributing to the observed shear thinning behavior. The second part of this thesis focuses on the effect of sticker density, sticker clustering, and entanglements on the dynamics of the associative networks through combined studies of self-diffusion performed using forced Rayleigh scattering (FRS) and rheology. All three studies were performed using well-defined polymers with the same chemical composition such that the observed effects are solely due to the changes in sticker density, clustering, and entanglements introduced during synthesis. 
The combined FRS and rheology studies on the effect of sticker density, using a set of random copolymers, revealed apparent superdiffusive scaling for chains with up to 15 stickers. This finding demonstrates that molecular hopping, and thus deviation from the predictions of mean-field models, persists to a higher limit than expected. To study the effect of sticker clustering, a polymer with clustered stickers was synthesized such that its molecular weight and sticker density were comparable to those of the random copolymers. It is demonstrated that the network topology is significantly altered by sticker clustering, as evidenced by the opposite trends observed in the FRS and rheology measurements. Finally, the onset of entanglements was examined by performing FRS and rheology measurements on a high molecular weight polymer, prepared at concentrations that span the unentangled to the weakly entangled regime. A clear transition seen in the self-diffusion measurements demonstrates the advantage of FRS for studying transition regimes where other techniques, such as rheology, show only subtle changes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130675</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of colloidal quantum dot and lead halide perovskite light emitting devices</title>
<link>https://hdl.handle.net/1721.1/130674</link>
<description>Development of colloidal quantum dot and lead halide perovskite light emitting devices
Xie, Sihan, Ph. D., Massachusetts Institute of Technology.
In recent years, optically active semiconductors, such as organic molecules, colloidal quantum dots (QDs), and lead halide perovskites, have emerged as top candidates for light emitting materials. One key feature of these materials is their bandgap tunability, e.g., via size or chemical composition, allowing their emission color to be tuned throughout the entire visible spectrum. Thin-film light emitting devices (LEDs) based on these luminophores promise to deliver next-generation display technologies that are ultrathin, lightweight, high in color quality, and energy efficient, with new form factors (e.g., foldable and flexible). In this thesis, we present work performed to improve the understanding and performance of colloidal nanocrystal QDs and lead halide perovskites as visible luminophores in optically- and electrically-driven thin-film LEDs. First, we create an efficient voltage-controlled optical down-converter by operating a quantum dot light emitting diode (QD-LED) under reverse bias. Using field-induced luminescence quenching to our advantage, we show that a large electric field can strongly modify QD carrier dynamics, resulting in stable and reversible QD photoluminescence (PL) modulation. Next, we address the QDs' toxicity issue by developing a synthesis of heavy-metal-free ZnSe/ZnS core-shell QDs with narrow spectral linewidth and high PL quantum yield. By employing these QDs as emitters, we demonstrate QD-LEDs with efficient and saturated blue electroluminescence (EL). Finally, we present a new way of depositing compact CsPbBr₃ perovskite thin films by thermal co-evaporation and demonstrate all-vacuum-processed perovskite LEDs with efficient green EL emission. Our results show that evaporative deposition can be a viable alternative to solution-based deposition for fabricating high-quality perovskite thin films for LEDs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, February, 2021; Cataloged from the official PDF of thesis. Page 139 blank.; Includes bibliographical references (pages 118-138).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130674</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Raman imaging and thermography</title>
<link>https://hdl.handle.net/1721.1/130673</link>
<description>Computational Raman imaging and thermography
Li, Zheng, Ph. D., Massachusetts Institute of Technology.
Thermography tools that perform accurate temperature measurements with nanoscale resolution are highly desired in modern society. Although researchers have put extensive effort into developing nanoscale thermography for more than three decades, and significant achievements have been made in this field, the mainstream thermography tools have not fully met the requirements of industry and academia. In this thesis, we present our home-built Raman microscope for Raman imaging and thermography. The performance of this instrument is enhanced by computational approaches. The body of the thesis is divided into three parts. First, the instrumentation of our setup is introduced. Second, we present the results of Raman imaging with computational super-resolution techniques. Third, the instrument is used as a thermography tool to map the temperature profile of a nanowire device. These results provide insights into combining advanced instrumentation and computational methods in Raman imaging and Raman thermography for applications in modern nanotechnology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 185-201).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130673</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational approaches to understand the atomistic drivers of enzyme catalysis</title>
<link>https://hdl.handle.net/1721.1/130672</link>
<description>Computational approaches to understand the atomistic drivers of enzyme catalysis
Seelam, Natasha.
Enzymes readily perform chemical reactions several orders of magnitude faster than their uncatalyzed counterparts under ambient conditions and with high specificity, making them attractive design targets for industrial purposes. Traditionally, enzyme reactivity has been contextualized through transition-state theory (TST), in which catalytic strategies are described by their ability to minimize the activation energy required to cross the reaction barrier through a combination of ground-state destabilization (GSD) and transition-state stabilization (TSS). While excellent progress has been made in rationally designing enzymes, the complexity of the design space and the highly optimized nature of enzymes make general application of these approaches difficult. This thesis presents a set of computational methods and applications to investigate enzyme-assisted kinetic processes from a broader perspective. For the first part of the thesis, we analyzed the energetics and dynamics of the proficient catalyst orotidine 5'-monophosphate decarboxylase (OMPDC), an enzyme that catalyzes decarboxylation nearly 17 orders of magnitude more proficiently than the uncatalyzed reaction in aqueous solvent. Potential-of-mean-force (PMF) calculations on wild type (WT) and two catalytically hindered mutants, S127A and V155D (representing TSS and GSD, respectively), characterized the energy barriers associated with decarboxylation as a function of two parameters: the breaking C-C bond distance and a proton-transfer coordinate involving the nearby side chain of K72, a conserved lysine in the active site. 
Coupling PMF analyses with transition path sampling (TPS) approaches revealed two distinct decarboxylation strategies: a simultaneous, K72-assisted pathway and a stepwise, relatively K72-independent pathway. Both PMF and TPS rate calculations reasonably reproduced the empirical differences in relative rates between WT and mutant systems, suggesting these approaches can enable in silico inquiry into both pathway and mechanism identification in enzyme kinetics. For the second study, we investigated the electronic determinants of reactivity using the enzyme ketol-acid reductoisomerase (KARI). KARI catalyzes first a methyl isomerization and then a reduction, with an active site comprising several polar residues, two magnesium divalent cations, and NADPH. This study focused on the isomerization, which is rate limiting, with two objectives: characterization of the chemical mechanism in successful catalytic events ("reactive") versus failed attempts to cross the barrier ("non-reactive"), and investigation of the interplay between atomic positions, electronic descriptors, and reactivity. Natural bonding orbital (NBO) analyses provided a detailed electronic description of the dynamics through the reaction and revealed that successful catalytic events crossed the reaction barrier through a 3-center-2-electron (3C) bond, concurrent with isomerization of hydroxyl/carbonyls on the substrate. Interestingly, the non-reactive ensemble adopted a similar electronic pathway to the reactive ensemble, but its members were generally unable to form and sustain the 3C bond. Supervised machine learning classifiers then identified small subsets of geometric and electronic descriptors, "features", that predicted reactivity; our results indicated that fewer electronic features were able to predict reactivity as effectively as a larger set of geometric features. 
Of these electronic features, the models selected diverse descriptors representing several facets of the chemical mechanism (charge, breaking-bond order, atomic orbital hybridization states, etc.). We then asked how geometric features report on electronic features, using classifiers that leveraged pairs of geometric features to predict the relative magnitude of each electronic feature. Our findings indicated that the geometric, pair-feature models predicted electronic structure with performance comparable to cumulative geometric models, suggesting that small subsets of features were capable of reporting on electronic descriptors, and that different subsets could be leveraged to describe various aspects of a chemical mechanism. Lastly, we revisited OMPDC to learn the key geometric features that distinguish between the simultaneous and stepwise pathways of decarboxylation, aggregating and labeling pathways drawn from WT and mutant system ensembles. We leveraged classifiers that distinguished between reactive pathways by selecting small subsets of structural features from 620 geometric features derived from active-site atoms. The classifiers performed comparably, with greater than 80% testing accuracy and AUC, at times ranging from the reactant basin to 30 fs into crossing the reaction barrier. Remarkably, model-selected features reported on chemically meaningful interactions despite no explicit prior knowledge of the mechanism being provided in training. To illustrate this, we focused our analyses on two features shown to be predictive while in the reactant basin, prior to crossing the barrier: a potential hydrogen bond between D75*, an aspartate in the active site, and the 2'-hydroxyl of OMP, and electrostatic repulsion through the proximity of a different aspartate, D70, to the leaving-group carboxylate of OMP. 
Analysis of the simultaneous and stepwise ensembles demonstrated that the simultaneous ensemble adopted shorter distances for both features, generally suggesting stronger interactions. Both features were additionally shown to be associated with the ability to distort the planarity of the orotidyl ring, where shorter distances for either feature were correlated with larger degrees of distortion. Taken together, this suggests the simultaneous ensemble was more effective at distorting the ground-state structure prior to crossing the reaction barrier.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130672</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering gas diffusion electrodes for electrochemical carbon dioxide upgrading</title>
<link>https://hdl.handle.net/1721.1/130671</link>
<description>Engineering gas diffusion electrodes for electrochemical carbon dioxide upgrading
Leonard, McLain E. (McLain Evan)
Electrochemical carbon dioxide reduction (CO2R) is increasingly recognized as a viable technology for the generation of chemicals using carbon dioxide (CO₂) recovered from industrial exhaust streams or captured directly from air. If powered with low-carbon electricity, CO2R processes have the potential to reduce emissions from chemicals production. Historically, three-electrode analytical cells have been used to study catalyst activity, selectivity, and stability, with the goal of incorporating proven materials into larger devices. However, it has been recognized that the limited CO₂ flux through bulk volumes of liquid electrolyte limits the effective reaction rate of CO₂ when using promising catalyst systems. Gas-fed electrolyzers adapted from commercial water electrolyzer and fuel cell technologies have motivated researchers to explore combinations of porous electrodes, catalyst layers, and electrolytes to achieve higher areal productivity and favorable product selectivities. The current state of the art demonstrates that high current density production (&gt;200 mA cm⁻²) of valuable chemicals at moderate cell voltages (ca. 3-4 V) is achievable at ambient conditions using electrolysis devices with catalyst-coated gas diffusion electrodes (GDEs). However, beyond short durations (1-10 h), stable performance for flowing electrolyte systems remains elusive, as electrolyte often floods electrode pores, blocking diffusion pathways for CO₂, diminishing CO2R selectivity, and constraining productivity. Systematic study of the driving forces that induce electrode flooding is needed to infer reasonable operational envelopes for gas-fed electrolyzers as full-scale industrial devices are developed. In this thesis, I investigate GDE wettability as a prominent determinant of gas-fed flowing electrolyte CO₂ electrolyzer durability. To do this, I combine experimental and computational approaches. 
First, I use a flow cell platform to study the transient evolution of activity, selectivity, and saturation to identify failure modes, including liquid pressurization, salt precipitation, electrowetting, and liquid product enrichment. Next, I use material wettability properties and reactor mass balances to estimate how enriched liquid product streams might overcome the non-wetting characteristics of current GDE material sets. Finally, I construct computational electrode models and vary surface chemistry descriptors to predict transport properties in partially saturated electrodes. Specifically, I consider how saturation evolves in response to relevant scenarios (i.e., electrowetting and liquid products) that challenge CO₂ electrolyzer durability.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 219-233).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130671</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Profiling hotspots of DNA breaks and learning-induced gene expression in the mouse brain</title>
<link>https://hdl.handle.net/1721.1/130669</link>
<description>Profiling hotspots of DNA breaks and learning-induced gene expression in the mouse brain
Stott, Ryan (Ryan Timothy)
Neuronal activity generates DNA double-strand breaks (DSBs) in the brain. Topoisomerase IIβ (TOP2B), important for relieving transcription-associated DNA supercoiling, was implicated as the source of these neuronal activity-induced DSBs, facilitating rapid transcriptional induction of immediate early genes (IEGs). However, the locations of these DSBs in vivo and their relation to brain function were unclear. In Chapter 2, following contextual fear conditioning (CFC) of wild type mice, we profiled the locations of DSBs genome-wide through γH2AX ChIP-Seq, along with transcriptomic changes in neuronal and glial-enriched nuclei in two brain regions. We found that DSB-susceptible genes were involved in synaptic processes, while both activity-regulated and proteostasis-related transcription factors appeared to govern gene expression changes across cell types at some sites of γH2AX. Finally, we found that glia mount a robust transcriptional response to glucocorticoids and that some of these genes are sites of brain DSBs. In Chapter 3, we examined the relationship between brain DNA breaks and TOP2B function. We utilized a mouse forebrain excitatory neuron-specific functional knockout of Top2b. We found that neuronal loss of Top2b in the medial prefrontal cortex (mPFC) impairs both the expression of long genes and gene induction following exposure to a learning paradigm. Ultimately, loss of Top2b leads to abnormal learning and cognition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130669</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-layer coated microneedles for cancer immunotherapy</title>
<link>https://hdl.handle.net/1721.1/130668</link>
<description>Layer-by-layer coated microneedles for cancer immunotherapy
He, Yanpu.
Over the past few decades, research in therapeutic cancer vaccines has achieved remarkable advances in both pre-clinical and clinical studies. However, strategies to boost antigen presentation and T cell priming, in order to increase the fraction of patients responding to immunotherapy, remain an urgent need. For the delivery platform, we applied microneedle (MN) skin patches to transdermally deliver vaccines and thereby activate the potent epidermal Langerhans cells and dermal dendritic cells. The drug was incorporated as a releasable multilayer coating on the microneedle surface, constructed through alternating adsorption of oppositely charged species, including protein or nucleic acid drugs and biocompatible polymer carriers, forming a 'sticky' layer-by-layer (LbL) film through electrostatic attraction. Past LbL MN strategies have all retained this 'sticky' nature and therefore require a long application time (15-90 min) for drug implantation. To resolve this problem, we devised a pH-induced charge-invertible polymer as a lift-off layer that significantly shortens the application time to 1 min. Our approach has inspired other work involving rapid film lift-off with charge-invertible species. On the drug side, we focused on the stimulator of interferon genes (STING) pathway. Current strategies mostly focus on developing synthetic vehicles to deliver the STING agonist, cyclic GMP-AMP (cGAMP), into cells. However, this assumes the presence of fully functional STING protein to bind cGAMP. 
STING signaling has not only been shown to be frequently impaired in cancer cells due to epigenetic silencing of the protein; it is also under debate whether the overall population is responsive to delivery of the agonist alone, since 19% of humans carry a mutated STING variant called HAQ STING that has been suggested to exhibit impaired function. To address these challenges, we engineered a recombinant STING protein without the transmembrane domain (STING[delta]TM) that could be used as a functional carrier for cGAMP delivery and elicit type I IFN expression in cell lines deficient in STING or with defective endogenous STING. In vivo, our cGAMP-STING[delta]TM signaling complex elicited enhanced antigen-specific B and T cell responses as well as robust anti-tumoral immunity in B16 melanoma and MC38 colon cancer mouse models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Vita. Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130668</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetic analysis of leaching reactions in multi-component mineral systems</title>
<link>https://hdl.handle.net/1721.1/130666</link>
<description>Kinetic analysis of leaching reactions in multi-component mineral systems
Close, Thomas, Jr. (Thomas Charles)
The rational design of reactive systems requires the use of kinetic models of system behavior. However, the development of such models for multicomponent systems is complicated by conditions of mutual interference in determining reaction rates. Addressing this shortcoming for mineral systems requires developing methods to solve the fundamental problem of identity and to resolve the partitioning of system behavior between components. In this work, a complete description of the problem of simultaneous rate determination under conditions of mutual interference is developed, and progress towards solving this problem in microfluidic and bulk systems is presented. Results show that there are unique challenges posed in microfluidic systems that hinder the ability to accurately partition the behavior of the total system between its constituents. In contrast, the bulk system permits a practical experimental solution based on particle size and shape for certain classes of solid mixtures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-177).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130666</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining the role of aneuploidy throughout tumorigenesis</title>
<link>https://hdl.handle.net/1721.1/130665</link>
<description>Defining the role of aneuploidy throughout tumorigenesis
Silberman, Rebecca Estelle.
Aneuploidy is a state of genome imbalance that alters the copy number of whole chromosomes. While aneuploidy is rare in healthy tissues, it is one of the most common features of cancerous tumors. Studies of aneuploid yeast and aneuploid mammalian cells growing in culture revealed that aneuploidy induces cellular stress and slows proliferation. It is therefore surprising that aneuploidy is a hallmark of cancer, a disease of cellular over-proliferation and inappropriate cell survival. We sought to elucidate aneuploidy's role in tumorigenesis by defining the factors that affect the prevalence of aneuploid cells in normal, pre-cancerous, and cancerous tissues. First, we investigated whether aneuploid mammalian cells experience fitness defects in vivo. We found that aneuploidy decreases hematopoietic stem cells' fitness and that aneuploid cells are selected against in normal, regenerating tissues in vivo. However, we also found that aneuploid cells can accumulate in the hematopoietic system when purifying selection is relaxed following bone marrow reconstitution. We then sought to extend our observations to the context of pre-cancerous tissues. We analyzed the prevalence of aneuploidy in the highly tumorigenic, but histologically normal, tissues of women harboring heterozygous germline BRCA2 mutations. Using single-cell sequencing, we revealed that breast cells from BRCA2 mutation carriers lack aneuploidy but feature a distinct form of genome imbalance called sub-chromosomal copy number variants (CNVs), even before the initiation of tumorigenesis. We then analyzed the timing with which these two forms of genome imbalance, whole-chromosomal aneuploidy and sub-chromosomal CNVs, arise during tumorigenesis. 
We found that CNVs are present in the cells of early precursors of multiple cancers, but that whole-chromosomal aneuploidy arises late in tumorigenesis. Our findings suggest that whole-chromosomal aneuploidy reduces cells' fitness in both normal and pre-cancerous tissues, and that aneuploidy is selected against throughout tumorigenesis. This has implications for the role of aneuploidy in cancer, suggesting that aneuploidy does not contribute to early tumorigenesis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130665</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genome organization in transcriptional regulation</title>
<link>https://hdl.handle.net/1721.1/130664</link>
<description>Genome organization in transcriptional regulation
Li, Charles H.
Transcriptional regulation of gene expression plays critical roles in the control of cell identity, development, and disease. Genome organization contributes to transcriptional regulation in multiple ways. At a fundamental level, the genome is organized into distinct active and repressive chromatin states that facilitate transcriptional regulation. These chromatin states are established and maintained at specific genomic regions via the interconnected activities of transcription factors and epigenetic pathways. An additional layer of genome organization is the three-dimensional structure of the genome within the nucleus. Transcriptional regulation occurs within a hierarchy of genome structures that are formed by the activities of structuring factors. Studies described in this thesis identify the transcription factor YY1 as a general structural regulator of enhancer-promoter loops (Weintraub et al., 2017). In recent years, the study of biomolecular condensates has led to a dramatic shift in our understanding of the mechanisms contributing to transcriptional regulation and to genome structure. Distinct chromatin condensates organize the genome by compartmentalizing components associated with transcriptionally active euchromatin and repressive heterochromatin. Whether disruption of chromatin condensates can lead to transcriptional dysregulation in human disease is not well understood. Our finding that MeCP2 is a key component of heterochromatin condensates and that Rett syndrome patient mutations affecting MeCP2 cause condensate disruption (Li et al., 2020) demonstrates a link between chromatin condensate disruption and human disease. These studies reveal important mechanisms of genome organization contributing to transcriptional regulation, and provide new insights into human disease that might be leveraged to provide therapeutic benefit for patients in the future.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130664</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Separase cleaves the kinetochore protein Meikin to direct the meiosis I/II transition</title>
<link>https://hdl.handle.net/1721.1/130663</link>
<description>Separase cleaves the kinetochore protein Meikin to direct the meiosis I/II transition
Maier, Nolan Kenji Kwaisun.
To generate haploid gametes, meiotic cells must undergo two consecutive rounds of chromosome segregation without an intervening gap phase. Importantly, because homologous chromosomes are segregated in meiosis I, but sister chromatids are segregated in meiosis II, this requires a dramatic rewiring of the cell division machinery between the two divisions. How meiotic cells coordinate this rapid and substantial change to the cell division machinery is a central mystery at the heart of proper fertility and reproduction. Our work reveals a new paradigm that rewires key cell division processes at the meiosis I/II transition through the action of the protease Separase, which we demonstrate acts by cleaving the meiosis-specific kinetochore protein Meikin. Cleavage of Separase substrates such as cohesin results in their potent and complete inactivation. In contrast, we find that Separase cleavage of Meikin acts as a molecular "scalpel," providing an elegant mechanism to precisely and irreversibly modulate Meikin activity between the two meiotic divisions without inactivating Meikin function. Our results demonstrate that the C-terminal Meikin cleavage product generated by Separase proteolysis retains substantial activity, such that it localizes to kinetochores, binds to Plk1 kinase, and promotes downstream activities such as the cleavage of the meiosis-specific cohesin subunit Rec8, similar to full-length Meikin. Importantly, we demonstrate that either the failure to cleave Meikin or the complete inactivation of Meikin at the meiosis I/II transition results in dramatic defects in the proper execution of meiosis II. Our functional analysis in mouse oocytes demonstrates that precise Meikin cleavage is critical to differentially control meiosis I and II. Thus, in contrast to previous models, Meikin is not just a regulator of meiosis I-specific activities, but differentially coordinates chromosome segregation across both meiotic divisions. 
Our discovery of Meikin as a new substrate for Separase cleavage represents a novel mechanism for the regulatory control of the meiosis I/II transition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130663</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactions between an integrative and conjugative element and its bacterial host</title>
<link>https://hdl.handle.net/1721.1/130662</link>
<description>Interactions between an integrative and conjugative element and its bacterial host
Harden, Mark Michael, Jr.
Conjugative elements are mobile genetic elements that can transfer from a donor bacterium to a recipient via an element-encoded type IV secretion system. Integrative and conjugative elements (ICEs) are an abundant class of conjugative element. ICEs are typically integrated into the bacterial host chromosome, but under certain conditions, or stochastically, they can excise from the chromosome and transfer to a recipient. ICEs likely interact with their bacterial host at every stage of their life cycle, but few of these interactions have been characterized. In this work I sought to 1) identify bacterial host factors necessary for efficient transfer of the integrative and conjugative element ICEBs1 to a recipient, and 2) determine whether the ICEBs1-encoded cell wall-modifying enzyme CwlT acts on the cell wall of the donor bacterium, the recipient bacterium, or both. I used CRISPR interference to induce a knockdown of individual essential Bacillus subtilis genes, and then screened for gene knockdowns that caused an acute defect in transfer of ICEBs1. I found that wall teichoic acids were necessary in both ICEBs1 donors and recipients for efficient conjugative transfer, and that depletion of wall teichoic acids caused cells involved in ICEBs1 conjugation to sustain lethal envelope damage from the active conjugation machinery. Conjugative elements must bypass the cell wall of both the donor and recipient cells in a mating pair. They encode cell wall hydrolases that are required for efficient transfer and are presumed to partly degrade the cell wall of the donor bacterium during conjugation. To investigate the role of the ICEBs1-encoded cell wall hydrolase CwlT in conjugation, I generated cell wall-less (L-form) strains of B. subtilis that could donate or receive ICEBs1. In the absence of either the donor or recipient cell wall, CwlT was dispensable for efficient transfer. 
This finding indicates that CwlT acts on both the donor and recipient cell wall in a mating pair.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130662</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding microRNA targeting with high-throughput biochemistry</title>
<link>https://hdl.handle.net/1721.1/130661</link>
<description>Understanding microRNA targeting with high-throughput biochemistry
McGeary, Sean E.(Sean Edward)
MicroRNAs (miRNAs) are short RNAs that, in complex with Argonaute (AGO) proteins, guide repression of mRNA targets. miRNAs negatively regulate most mammalian mRNAs, and disruption of this regulation often results in severe defects at the cellular and organismal level. miRNA repression occurs primarily through base-pairing between the miRNA seed region (nucleotides 2-8) and mRNA 3'-UTR sites, leading to transient recruitment of mRNA-destabilizing factors. However, only a small fraction of the gene-expression changes caused by a miRNA can currently be predicted, which precludes a deeper understanding of how miRNA regulation impacts the animal transcriptome. miRNA targeting efficacy should in principle be a function of the affinity between AGO-miRNA complexes and their targets. However, only a few such measurements had been reported, with measured values differing from those predicted for RNA-RNA pairing in solution.; We therefore adapted a high-throughput biochemical platform utilizing random-sequence RNA libraries to obtain the vast quantity of affinity values required to predict miRNA targeting efficacy. Through a novel analytical approach, we assigned relative dissociation constants (K[subscript D]) to all binding sites ≤12 nt in length for six miRNAs. These analyses revealed unanticipated miRNA-specific differences in the affinity of similar sites, unique sites for different miRNAs, and a 100-fold influence of the flanking dinucleotide context surrounding a site. These measurements informed a biochemical model of miRNA targeting that outperformed all existing models and was extended to all miRNAs using a convolutional neural network (CNN) trained on both affinity and repression data. We also applied this high-throughput biochemical approach to understand the role of the miRNA 3' region using partially random RNA libraries.; We found unique 3'-pairing preferences for each miRNA, and evidence for two distinct binding modes.
The miRNA-specific differences and two binding modes depended on G nucleotides in the miRNA 3' region, thus providing a heuristic by which to extend these findings to target prediction in vivo. This work establishes high-throughput biochemistry combined with mathematical modeling and deep learning as a powerful paradigm for building quantitative models of gene regulation, which might aid in eventually building a complete model of the cell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130661</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-molecule studies of the mechanism of eukaryotic helicase activation</title>
<link>https://hdl.handle.net/1721.1/130660</link>
<description>Single-molecule studies of the mechanism of eukaryotic helicase activation
De Jesús-Kim, Lorraine.
Eukaryotic DNA replication is a fundamental process that must occur accurately and only once per cell cycle. To ensure that origins initiate only once per cell cycle, the events of DNA replication initiation are temporally separated into different phases of the cell cycle. This regulation separates two key events that center on the replicative DNA helicase Mcm2-7: helicase loading and helicase activation. During G1 phase, two inactive Mcm2-7 complexes are loaded onto origin DNA. Upon entry into S phase, the association of multiple factors promotes helicase activity. Although loaded helicases mark all potential origins of replication, only the subset that is activated will promote origin initiation, and consequently DNA unwinding. After helicase activation, the cell must duplicate its genome prior to chromosome segregation and cell division, making helicase activation the committed step of DNA replication. In my thesis, I describe a novel single-molecule reaction that recapitulates helicase activation in vitro with purified proteins. This single-molecule method allows real-time monitoring of protein associations and dissociations during helicase activation. Through these single-molecule reactions, I found that Cdc45 and GINS are recruited to Mcm2-7 in two stages. First, they are recruited to the unstructured N-terminal tails of Mcm2-7. DDK levels carefully control this initial recruitment, creating binding sites for these proteins that result in the formation of a previously unknown intermediate, which we call the Cdc45-tail-GINS (CtG) complex. Elevated DDK levels lead to increased numbers of CtG complexes formed on each Mcm2-7, which consequently increases the number of active Cdc45-Mcm2-7-GINS (CMG) helicases formed. This mechanism provides an explanation for the tight control of helicase activation by DDK activity during the cell cycle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130660</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards quantitatively predicting the properties of gels and elastomers</title>
<link>https://hdl.handle.net/1721.1/130659</link>
<description>Towards quantitatively predicting the properties of gels and elastomers
Lin, Tzyy-Shyang.
Polymer networks are critical to modern society. Applications of polymer networks, in the form of gels and elastomers, provide essential support for everyday life. However, while many aspects of the physics of polymer networks have been established in the past few decades, most theoretical efforts have centered on model polymer networks with highly regular topology. While this framework has provided important insights into the physics of polymer networks, neglecting variation in network topology can significantly undermine the predictive power of existing classical theories. To provide accurate predictions, the role of network topology must be considered. Therefore, this thesis investigates the impact of topological defects on the properties of polymer networks. The first part of this thesis focuses on the topology of end-linked networks and the formation of defects.; Herein, rate theories and kinetic Monte Carlo simulations are used to simulate the gelation of multi-arm macromonomers. It is shown that even with precursor geometries that eliminate the formation of primary loops, polymer networks still invariably possess higher-order cyclic defects. Furthermore, it is found that the cyclic topologies of A[subscript f]+B[subscript f] networks are characterized by the same universal function that determines the cyclic topologies of A₂+B[subscript f] networks. The second part of this thesis focuses on quantifying the impacts of topological defects on the mechanical properties of polymer networks. Starting by revisiting the assumptions of classical linear elasticity theories, closed-form expressions for the effects of dangling ends and isolated loops are derived in the context of phantom network theory. 
Notably, it is found that in the infinite dilution limit, loops of order three or larger exhibit no net impact on elasticity.; In addition, a comparison to experimental data reveals that the conformation of network strands may be more contracted than commonly perceived, motivating a reexamination of classical assumptions. Furthermore, by going beyond the mean-field approximation, new network theories that provide predictions for systems containing large fractions of defects are also presented. Beyond linear mechanics, the underlying assumptions of the fracture theory of Lake and Thomas have also been revisited. It is shown that by revising their central assumption of sharp crack planes, tearing energy data of loopy networks can be quantitatively explained, thereby providing insight into the molecular details of fracture mechanics. The final part of this thesis investigates the effects of looping reactions on the sol-gel transition. It is found that a purely topological kinetic model can accurately predict gel point suppression across many systems.; Furthermore, studies of the critical exponents reveal that loopy networks do not always follow mean-field percolation. Rather, a family of previously unreported criticality classes is discovered. This finding offers a potential reconciliation of largely disparate existing data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2021; Cataloged from the official PDF of thesis. "February 2021."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130659</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bioenergetics and metabolism of eukaryotic cell proliferation</title>
<link>https://hdl.handle.net/1721.1/130658</link>
<description>Bioenergetics and metabolism of eukaryotic cell proliferation
Li, Zhaoqi
Cellular growth and proliferation necessitate the transformation of cell-external nutrients into biomass. Strategies of biomass accumulation across the kingdoms of life are diverse and range from carbon fixation by autotrophic organisms to direct incorporation of consumed nutrients into biomass by heterotrophic organisms. The goal of this dissertation is to better understand the divergent and convergent modes of metabolism that support biomass accumulation and proliferation in eukaryotic cells. We first determined that rapidly proliferating cells preferentially ferment the terminal glycolytic product pyruvate because respiration is intrinsically insufficient to regenerate electron acceptors. We tested this model across an assorted array of proliferating cells and organisms, ranging from human cancer cells to the baker's yeast Saccharomyces cerevisiae. We next determined that a major metabolic pathway of avid electron acceptor consumption in the context of biomass accumulation is the synthesis of lipids. Insights from this work have led to the realization that net-reductive pathways such as lipid synthesis may be rate-limited by oxidative reactions. Lastly, we established the green alga Chlorella vulgaris as a model system to study the comparative metabolism of photoautotrophic and heterotrophic growth. We determined that heterotrophic growth of plant cells is associated with aerobic glycolysis in a mechanism that may be suppressed by light. Collectively, these studies contribute to a more holistic understanding of the bioenergetics and metabolic pathways employed by eukaryotic cells to accumulate biomass and lay the foundation for future studies of proliferative metabolism.
Thesis: Ph. D. in Biochemistry, Massachusetts Institute of Technology, Department of Biology, February, 2021; Cataloged from the official PDF of thesis. "February 2021." Vita. Page 179 blank.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130658</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory and applications for sulfur chemistry : hydrogen from hydrogen sulfide</title>
<link>https://hdl.handle.net/1721.1/130618</link>
<description>Theory and applications for sulfur chemistry : hydrogen from hydrogen sulfide
Gillis, Ryan J.(Ryan Joseph)
In this thesis, I explore the chemistry of reacting sulfur species computationally and experimentally. The computational work centers on creating the capability to automatically predict the thermochemical properties of arbitrary sulfur molecules and the kinetic parameters of reactions between these species. This enhanced capability is demonstrated through the automatic creation of a detailed chemical mechanism describing the partial oxidation of dimethyl sulfide. The experimental work focuses on a hydrogen-generating chemical cycle that uses a hydrogen sulfide feedstock. Initial exploration of the reactivity of hydrogen sulfide, water, and iodine mixtures to form hydroiodic acid revealed two competing pathways. The more interesting pathway involved the reaction of hydrogen sulfide with iodine and water to form hydroiodic acid and sulfur dioxide. A bench-top prototype was created demonstrating the production of hydrogen gas from hydrogen sulfide through this pathway. Technoeconomic modeling of the proposed process was conducted, suggesting both commercial and environmental motivation for adoption. The thesis concludes with a brief discussion of future work.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 231-244).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130618</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commerce and coercion in contemporary China : local leader responses to foreign policy crises</title>
<link>https://hdl.handle.net/1721.1/130616</link>
<description>Commerce and coercion in contemporary China : local leader responses to foreign policy crises
Miura, Kacie Kieko.
During foreign policy crises involving China and its most important foreign economic partners, some local leaders in China willingly threaten foreign commercial interests, while others attempt to protect foreign commerce from diplomatic tensions. This subnational variation in participation in economic retaliation is surprising given the presence of a strong central government. Because local leaders are key stewards of foreign commerce, their participation in state punishment is critical to China's coercive capacity. I argue that the Chinese Communist Party's system of personnel management, which emphasizes both meritocratic and patronage elements, generates cross-cutting incentives that shape local leader responses to foreign policy crises. The first is economic dependence, or whether a locality's commercial ties to the targeted state are essential for local economic growth.; The second is whether local leaders are politically vulnerable to demotion or disciplinary punishment, either because they lack powerful patrons or because their localities recently experienced social unrest. While economic dependence creates incentives to shield foreign commerce, politically vulnerable leaders have incentives to shore up their patriotic credentials by participating in economic retaliation. I evaluate this theory of discretionary local leader behavior in the context of recent foreign policy crises in China's relations with Japan, South Korea, and the United States. I conduct in-depth case studies of city leader responses in eight cities: Dalian, Shenyang, Shanghai, Chongqing, Xi'an, Chengdu, Guangzhou, and Wuhan. 
For these cases, I draw on data from interviews with local government officials, foreign consular officials, business representatives, journalists, and scholars, conducted during 14 months of fieldwork in China.; I also use computer-assisted text analysis methods to build an original dataset of sentiment analysis scores of Chinese-language local official media coverage, which provides insight into the policy preferences of local leaders. I assess how official newspapers in around 65 cities covered Japan, South Korea, and the United States during the height of the foreign policy crises involving each country. Cross-city statistical analyses show that anti-foreign propaganda tends to be most intense in cities where local leaders are politically vulnerable and not economically dependent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 283-311).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130616</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical consequences of natural and synthetic post-translational modifications</title>
<link>https://hdl.handle.net/1721.1/130615</link>
<description>Physical consequences of natural and synthetic post-translational modifications
Kilgore, Henry R.(Henry Ralph)
Protein modifications endow proteins with diverse chemical and biological functions. Developing new modifications, controlling existing ones, and appreciating their consequences promise to unveil mechanisms of disease, open avenues for therapeutic development, and provide insight into the intricacies of life. As a result, developing, harnessing, and understanding protein modifications has had profound direct and indirect consequences for biotechnology, the pharmacopeia, and academic research. The express purpose of this thesis was to characterize and investigate the consequences of these modifications. This led to investigations into the stereoelectronic effects underlying the properties of cysteines and disulfide bonds in proteins, which provided a rational basis for the function of thioredoxins, thioredoxin-fold enzymes, and other enzymes that bear highly variable reduction potentials.; Relationships between the ground-state geometric properties of disulfide bonds and their photophysical properties have revealed new connections with potential applications in photoredox chemistry, sulfur photocatalysis, and protein engineering and design. A strategy based on stereoelectronic effects was used to diversify the reduction potential underlying the cytotoxic activity of epidithiodiketopiperazine natural products, and is leading to experiments that clarify their mechanism of action in cellulo. Detailed analysis revealed a relationship between the thermodynamics of thiol-disulfide electrochemical equilibria and the interaction of these motifs with light. Arising from optimization of the hydrolytic stability of fluorogenic probes, a halogen n→π* interaction in acylated 2′,7′-halosubstituted fluoresceins was observed and characterized.; Coincident with these investigations into the physicochemical properties of disulfide bonds and other n→π* interactions, the biological and functional consequences of glycosylation were also investigated. 
An apparent difference arose in the internalization and release of different dextrans functionalized with fluorogenic probes, suggesting a glycomonomer oxidation-state-dependent mechanism for endosomal uptake and release. Further, modification of proteins with pendant dextrans conferred increased cellular internalization, as assessed with proteins that initiate cell death upon internalization. The structure of glycosylated human ribonuclease 1 afforded insight into the origin of its variable catalytic activity and thermostability. These investigations unveiled a new helix-capping motif stemming from N-glycosylation at the C terminus of an α-helix.; Finally, the cellular reactivity and associated mass transport properties of SNO-OCT reagents, their ability to engage in triple ligations, and their utility for metabolic labeling experiments were also investigated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from the official PDF of thesis. "September 2020."; Includes bibliographical references (pages 385-409).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130615</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Colloidal Electronics</title>
<link>https://hdl.handle.net/1721.1/130612</link>
<description>Colloidal Electronics
Liu, Tianxiang(Albert Tianxiang)
Arming nano-electronics with mobility extends artificial systems into traditionally inaccessible environments. Carbon nanotubes (1D), graphene (2D), and other low-dimensional materials with well-defined lattice structures can be incorporated into polymer microparticles, granting them unique electronic functions. The resulting colloidal electronic 'cells', composed of microscopic circuits connecting artificial 'organelles' (e.g., generators, sensors, logic gates, etc.), combine the modularity of modern electronics with the characteristic mobility of dispersive colloidal systems. Two challenges are fundamental to colloidal electronics: (1) providing electrical energy to a microscopic system with a limited footprint; and (2) developing energy-efficient electronic devices and circuitry with low power consumption. In this context, my thesis introduces two concepts, Autoperforation and Asymmetric Chemical Doping, as means to fabricate and power electronic circuit elements on colloidal particles. These advances allowed us to build the first colloidal electronic system that performs autonomous functions, integrating energy harvesting, chemical detection, and digital memory recording, all within a form factor no larger than a biological cell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis. "July 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130612</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring cancer metabolism through isotopic tracing and metabolic flux analysis</title>
<link>https://hdl.handle.net/1721.1/130611</link>
<description>Exploring cancer metabolism through isotopic tracing and metabolic flux analysis
Dong, Wentao
Cancer is the second leading cause of death in the U.S., after heart disease. During the past two decades, cancer metabolism has emerged as an indispensable part of contemporary cancer research. Various types of metabolic alterations in cancer cells have been documented, prompting extensive investigation of the link between reprogrammed cell signaling pathways and rewired cellular metabolism. In addition, drug targeting of rewired metabolic pathways has been demonstrated to be a promising cancer treatment strategy. Despite this progress, a fundamental question remains unanswered: whether the metabolism of cancerous, fast-growing cells differs from that of normal proliferative cells. This gap has hindered our understanding of cancer metabolism and efforts to develop effective metabolism-targeted cancer therapies with reduced side effects.; In this thesis, we used ¹³C-isotope tracing and metabolic flux analysis (MFA) to study cancer metabolism and identify metabolic pathways differentially activated in cancer cells. To support efforts to design effective therapies, we sought to distinguish the metabolic behavior of cancer cells from that of normal cells growing at the same rate, and to obtain a systematic understanding of cancer metabolism. To this end, we dissected bioreaction networks in human mammary epithelial cells (HMECs) that had been genetically modified to exhibit different levels of tumorigenicity. We discovered distinct substrate utilization patterns in the tricarboxylic acid (TCA) cycle and de novo lipogenesis. Specifically, we found that glucose was catabolized in the TCA cycle up to the formation of citrate, which was then used primarily for lipogenesis. The majority of the TCA cycle flux, however, was maintained by glutamine anaplerosis.; ¹³C-MFA further revealed that some metabolic reactions were more active in tumorigenic HMECs. 
By introducing a new quantity termed metabolic flux intensity, defined as pathway flux divided by the specific growth rate, we identified the three most enhanced reactions in the most tumorigenic HMEC line: the oxidative pentose phosphate pathway (oxPPP), malate dehydrogenase (MDH), and isocitrate dehydrogenase (IDH). Targeting these three pathways with small-molecule inhibitors selectively reduced growth in the cancerous HMEC line. In addition, our study provides direct evidence that metabolism may be dually controlled by proliferation and oncogenotype.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis. "July 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130611</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete mechanical metamaterials</title>
<link>https://hdl.handle.net/1721.1/130610</link>
<description>Discrete mechanical metamaterials
Jenett, Benjamin(Benjamin Eric)
Digital fabrication enables complex designs to be realized with improved speed, precision, and cost compared to manual techniques. Additive manufacturing, for example, is one of the leading methods for rapid prototyping and near-net-shape part production. Extension to full-scale structures and systems, however, remains a challenge, as cost, speed, and performance present orthogonal objectives that are inherently coupled to limited material options, stochastic process errors, and machine-based constraints. To address these issues, this thesis introduces new materials that physically embody attributes of digital systems, scalable methods for automating their assembly, and a portfolio of use cases with novel, full-scale structural and robotic platforms. First, I build on the topic of discrete materials, which showed that a finite set of modular parts can be incrementally and reversibly assembled into larger functional structures.; I introduce a new range of attainable properties, such as rigidity, compliance, chirality, and auxetic behavior, all within a consistent manufacturing and assembly framework. These discretely assembled mechanical metamaterials show global continuum properties based on local cellular architectures, resulting in a system with scalability, versatility, and reliability similar to digital communication and computation. Next, I present a new kind of material-robot system to enable methods of assembly automation. Rather than relying on global motion control systems for precision, mobile robots are designed to operate relative to their discrete material environment. By leveraging the embedded metrology of discrete materials, these relative robots have reduced complexity without sacrificing extensibility, enabling the robots to build structures larger and more precise than themselves.; Multi-robot assembly is compared to stationary platforms to show system benefits for cost and throughput at larger scales. 
Finally, I show a range of discretely assembled systems that blur the boundary between structure and robotics. Full-scale demonstrations include statically reconfigurable bridges, supermileage racecars, and morphing aero and hydrodynamic vehicles. Performance scaling is projected to new regimes, using case studies of turbine blades, airships, and space structures. These discrete systems demonstrate new, disruptive capabilities not possible within the limits of traditional manufacturing.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 127-136).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130610</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic studies and design of supported transition metal complexes</title>
<link>https://hdl.handle.net/1721.1/130605</link>
<description>Mechanistic studies and design of supported transition metal complexes
Gani, Terry Zhi Hao.
Supported transition metal (TM) complexes are an emerging class of materials with many potential applications in the chemical industry, ranging from separations to catalysis. They offer increased tunability and often improved performance over their bulk heterogeneous counterparts. Their study and rational design are, however, accompanied by several unique considerations and challenges that we address in this thesis. The first part of the thesis broadly develops and applies computational screening strategies for supported TM complexes. First, we detail how weak C-H···O hydrogen bonds can be exploited to increase the selectivity of ferrocenium (Fc⁺)-based polymer electrode materials for formate adsorption over perchlorate adsorption while maintaining reasonable desorption rates in the reduced (ferrocene, Fc) state. Through a systematic characterization of formate and perchlorate interactions with a small (ca. 40) but diverse set of functionalized Fc⁺ complexes, we identify and rationalize design rules for functionalizations that simultaneously increase selectivity for formate in aqueous environments while permitting rapid release from Fc. Next, we screen a larger (ca. 500) set of model Fe(II) complexes for methane hydroxylation to assess whether linear free energy relationships (LFERs), extensively developed to reduce the cost of computationally screening bulk heterogeneous catalysts, can also be applied to supported single-site TM catalysts. 
We demonstrate that structural distortions achievable in porous frameworks and chelating ligands break these LFERs by altering relative d-orbital splittings, thereby revealing a potential strategy for improving the activity of these catalysts.; Finally, to address a particularly pervasive issue in density functional theory (DFT) studies of first-row open-shell TM complexes, we investigate how the fraction of exact exchange parameterized in the functional affects computed reaction and spin-splitting energies. We rationalize this sensitivity in terms of differences in metal-ligand electron delocalization and introduce the metal-ligand bond valence as a simple, yet robust, descriptor that unifies understanding of exchange sensitivity for catalytic properties and spin-state ordering in TM complexes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis. "July 2020."; Includes bibliographical references (pages 170-200).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130605</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prophets and priests : religious leaders and protest in Iraq</title>
<link>https://hdl.handle.net/1721.1/130603</link>
<description>Prophets and priests : religious leaders and protest in Iraq
Alshamary, Marsin R.(Marsin Rahim)
When do religious clerics join anti-government protest, and in what capacity? In my dissertation project, I argue that clerical participation in protest is mediated by the internal structure of the religious system. Specifically, the degree of hierarchy and bureaucratization of a religious system imposes different abilities and responsibilities on individual clerics within it. In turn, these factors mediate clerical behavior and determine the type and timing of clerical participation in the face of external pressures and particular ideological leanings. I build this structural theory of clerical participation by analyzing the behavior of clerics in the Iraqi Hawza (the Shiʻa religious establishment) in six instances of anti-government protest from 1917 to 2020. I triangulate data gathered from clerical, government, and opposition sources during ten months of fieldwork in Iraq, in addition to archival work and interviews in the United States and the United Kingdom.; I argue that clerical decisions to participate in protest are influenced by structural pressures stemming from their respective positions in their religious institutions. When religious elites feel a responsibility to maintain the institutional integrity of the religious establishment, they avoid advocating rebellion because it risks harming the institution. Rebellion is most likely to be instigated and supported by religious elites who are influential but who feel limited institutional responsibilities. These influential, low-responsibility clerics are few in number because influence and responsibility tend to go together, but their call to action can plunge a society into violence. In the Iraqi case, these tend to be clerics with informal (usually familial) ties to the religious establishment but no official position within it.; High-responsibility clerics may get involved in protest after violence has broken out, seeking to manage the conflict in ways that will leave the institutions unscathed. 
These arguments hold across over a century of Iraqi history and have significant policy implications for the region.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis. "September 2020."; Includes bibliographical references (pages 264-278).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130603</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the political economy of service provision</title>
<link>https://hdl.handle.net/1721.1/130602</link>
<description>Essays on the political economy of service provision
Bozçağa, Tuğba.
Service provision has often been studied as an outcome of political decisions and processes. This dissertation examines how the distribution of service provision and its electoral outcomes are also contingent on local social structures. It contributes to theoretical knowledge on the political economy of service provision by introducing novel arguments that explain spatial and temporal variations in state capacity and government services, non-state services, and electoral returns to service provision. The first paper develops a theory based on bureaucratic efficiency and argues that bureaucratic efficiency increases with social proximity among bureaucrats. I find that social proximity, as proxied by geographic proximity, increases bureaucratic efficiency. However, in line with theoretical expectations, geographic proximity is less likely to lead to high bureaucratic efficiency in socially fragmented network structures or when there are ethnic divisions between bureaucrats.; To test this theory, I leverage a spatial regression discontinuity design and novel data from Turkey's over 35,000 villages. The second paper explores the origins of non-state service provision, with a focus on Islamist political movements. Exploiting the spatial variation in an Islamist service provision network across Turkey's 970 districts, this study shows that service allocation by non-state actors is highly dependent on a group's ability to marshal local resources, specifically through the associational mobilization of local business elites. The findings rely on an original district-level dataset that combines data from over sixty government decrees, archival data, and other novel administrative data.; The third paper introduces a theory suggesting that electoral returns to local public goods will increase with their excludability, i.e., the degree to which they are used only by the local population, as the local population will see them as "club goods" and as a signal of favoritism. 
Using a panel dataset that contains information on all public education and health investments in Turkey since the 1990s and mobility measures that rely on mobile call data, this study finds that electoral returns to public good investments are higher when they have a club good nature, although the effect is weaker in secular districts, where a perception of favoritism is less likely to develop due to the cleavages with the conservative incumbent party.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 193-206).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130602</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protest without repression : protest policing and nonviolent resistance in the US</title>
<link>https://hdl.handle.net/1721.1/130601</link>
<description>Protest without repression : protest policing and nonviolent resistance in the US
Dumas, Nicolas K.(Nicolas Kasem)
Activists often identify violent repression, and ensuing backlash, as a key mechanism through which peaceful protests can successfully achieve political change. This view has been affirmed by a body of research showing that the violent repression of protest can raise awareness of and build support for the protesters. And US history offers many examples of this repression backlash benefiting protesters, from the Birmingham bus boycotts to the "Bonus Army" March on Washington, to the Kent State shootings. However, in the United States, and in other western democracies, the probability of violent police repression of protests has varied significantly over time, as a result of a multitude of institutional factors. While the impacts of repressed protest have been documented, how peaceful protests fare in the absence of repression is less well understood.; This dissertation explores whether the absence of repression impacts protests' ability to capture attention and persuade the public, and whether the absence of repression impacts the types of protests that are successful. To answer these two questions, I draw on a wide array of data sources, including a novel dataset of local protests coded from protest permit applications, geo-referenced Google search data, Wikipedia page-view data, New York Times coverage data, and historical archives of an activist group's internal communications. I show that, while repression makes it easier for protests to garner news coverage, command public attention, and persuade the public, it is not a necessary condition. Peaceful protests can achieve these outcomes without repression if they can become newsworthy in other ways, such as by increasing the scale of the protest.; I also show that in the absence of repression, the types of protests that achieve success are similar in background to the protests that achieve success in the presence of repression.
Unlike some other forms of political participation, the resources needed to succeed without repression do not appear to be skewed towards individuals or groups with higher socio-economic status. Although the probability of violent repression changes over time, protests continue to serve as an effective tactic for a relatively small group to capture attention and build broader support.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 121-129).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130601</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploitative friendships : manipulating asymmetric alliances</title>
<link>https://hdl.handle.net/1721.1/130600</link>
<description>Exploitative friendships : manipulating asymmetric alliances
Fukushima, Mayumi.
This dissertation is the first systematic analysis of variation in alliance behavior in the context of asymmetric international security alliances. When weak states ally with stronger states - i.e. states with significantly greater military capabilities - what explains differences in the junior party's approach to the alliance relationship? Why do some junior allies show a strong willingness to coordinate their military policy with their senior partner, whereas others distance themselves from their senior partner? Why do some grow more dependent on their senior partner for security, while others pursue their own deterrent to reduce their dependence? Their military dependence is not necessarily determined by structural factors, as states generally have some room for maneuver to decide on the level of resources they extract for national security from their overall economic and technological capacity.; This variation in alliance behavior deserves scholarly attention, because these differences affect their senior partner's alliance management costs, including the chance of alliance entrapment - i.e. getting dragged into an unwanted war due to a junior ally's problematic behavior. When a senior partner has vested interests in the asymmetric alliances that advance its own interests, its junior partners, as parties to the alliance contracts, also have the power to "manipulate" their senior partner with a variety of strategies to maximize what are often noninstitutionalized benefits from their security relationships.
To explain the variation in the junior partner's approach, the dissertation proposes a Theory of Asymmetric Alliance Strategy, a new paradigm for understanding four types of junior partner alliance behavior and strategy.; In essence, these strategies differ along the two most contentious yet core issues of alliance management - the junior ally's degree of dependence for security and its level of coordination with the senior partner. As junior allies choose one of the two opposing approaches to each of these two core issues, there are four different, mutually exclusive strategies: [More Dependent, Reluctant Coordination], [More Dependent, Proactive Coordination], [Less Dependent, Proactive Coordination], and [Less Dependent, Reluctant Coordination], which I call Cheap-riding, Rescue-compelling, Favor-currying, and Autonomy-seeking, respectively. The Theory posits that the following three factors determine a junior partner's choice of alliance strategy: (1) perceived senior partner commitments to fighting the adversary by force; (2) the junior partner's "revisionist" goal - i.e. a goal of changing the local distribution of power and goods by force; and (3) the local balance of power. Particularly problematic from a senior partner's perspective is the Rescue-compelling strategy, which is driven by weak or weakened security commitments a junior ally perceives when it faces a local balance of power shifting in favor of its adversary. A junior ally utilizing this strategy can make crisis escalation more likely and cause serious consequences, including a costly war. By explaining the sources of the variation in alliance strategy and identifying risks associated with security partnerships with some types of junior allies, the dissertation helps better anticipate the costs of offering new security commitments to other states as well as those of withdrawing, or threatening to withdraw, existing commitments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 364-374).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130600</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From recognition to representation : collective rights and democratic citizenship in the Philippines</title>
<link>https://hdl.handle.net/1721.1/130599</link>
<description>From recognition to representation : collective rights and democratic citizenship in the Philippines
McMurry, Nina(Nina Katherine Siegel)
How does the recognition of self-determination rights for indigenous and tribal communities affect governance in modern democratic states? Nearly half of UN member states recognize indigenous groups in their constitutions, many devolving control over land and local governance functions. A dominant perspective in political science, rooted in the concept of the nation-state, implies that these policies, by empowering nonstate authorities and crystallizing sub-national identities, are likely to have negative unintended consequences. Yet few studies have investigated these predictions directly. This study examines the effects of collective recognition for indigenous communities on state consolidation and democratic representation.; I argue that, rather than weakening states and undermining democratic accountability, collective recognition can, given underlying conditions of state weakness, encourage the incorporation of marginalized populations by enabling more effective claim-making through formal democratic politics. I evaluate empirical implications of this theory in the Philippines, which has one of the most robust frameworks for indigenous recognition in Southeast Asia. Drawing on more than two years of fieldwork in the country, I combine analysis of administrative data, original survey data and survey experiments, and in-depth qualitative interviews with indigenous leaders and policymakers. I find that recognition through the granting of collective land titles leads to increased indigenous self-identification, but also to greater attachment to national identity and gains on multiple indicators of state integration.; In addition, I find evidence that recognition, rather than simply entrenching political elites, increases community electoral mobilization directed toward obtaining public goods from the state.
This work not only speaks to contemporary debates surrounding indigenous rights, but also has broader implications for our understandings of post-colonial state consolidation, ethnic and identity politics, and collective participation in democratic systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 245-257).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130599</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mare interpretatum : continuity and evolution in States' interpretations of the Law of the Sea</title>
<link>https://hdl.handle.net/1721.1/130597</link>
<description>Mare interpretatum : continuity and evolution in States' interpretations of the Law of the Sea
Odell, Rachel Esplin.
Disagreements over how to interpret the international law of the sea have caused contention among the United States, China, and other Asian nations as the regional balance of power has shifted in recent decades. This dissertation examines the sources of those disagreements, investigating why states favor mare liberum ("the free sea"), claiming limited jurisdiction over the oceans, or mare clausum ("the closed sea"), claiming expansive authority at sea, and how their interpretations change over time. I argue that countries interpret the law of the sea in ways that serve their strategic interests, treating the ocean as neither mare liberum nor mare clausum, but instead mare interpretatum. In their legal interpretations, states balance their interests in protecting against perceived threats along their own coasts with their interests in conducting operations near other states' coasts, while also seeking legitimacy in the international community.; States are reluctant to change their interpretations lest they incur hypocrisy costs, but they still often find ways to adapt to shifting material circumstances by exploiting ambiguity in their past rhetorical positions to alter their claims subtly. I illustrate this argument by analyzing how countries have interpreted the law of the sea across time and space, coupled with in-depth qualitative case studies of China, Japan, the United States, and the Soviet Union, drawing upon archival materials, government statements, legal commentaries, and interviews with more than 100 officials and experts in six countries. 
My principal case study traces evolution in China's interpretations of the law of the sea governing foreign military activities in territorial seas, straits, and exclusive economic zones; maritime entitlements of islands; and historic rights and waters.; I find that despite the history of U.S.-China competition over the meaning of "freedom of navigation," China's interpretation of this principle has begun converging with the U.S. interpretation as its own naval power has grown. At the same time, facing perceived threats to its maritime interests, Beijing has made expansive legal claims in the South China Sea, damaging its legitimacy among its neighbors. These dynamics will play a crucial role in shaping prospects for maritime peace and security in Asia.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 481-509).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130597</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery and engineering of antiviral defense systems in bacteria and archaea</title>
<link>https://hdl.handle.net/1721.1/130594</link>
<description>Discovery and engineering of antiviral defense systems in bacteria and archaea
Gao, Linyi, Ph. D., Massachusetts Institute of Technology.
Viruses are the most abundant and diverse life form on Earth. With over 10³¹ viral particles in existence, viruses fundamentally shape global biogeochemistry and ecology. Most viruses infect bacteria, archaea, and other microbes, and the threat of infection continually challenges microbes' survival. As a consequence of this expansive war, bacteria and archaea have acquired a potent arsenal of molecular defense systems in order to survive. Known defense systems, such as CRISPR, have given rise to transformative technologies including genome editing. However, defense systems as a whole remain underexplored. Continued investigation of these systems and the warfare between microbes and viruses may lead not only to a better understanding of basic microbiology and evolution, but also to new technologies and therapeutic applications. In this thesis, we investigate the collective arsenal of molecular defense systems that bacteria and archaea use to fight viral infections. First, we focus on known defense systems and use protein engineering to increase the specificity and targeting range of CRISPR enzymes for human genome editing. Second, by computational mining and experimental reconstitution, we discover 29 novel defense gene cassettes that are collectively present in one third of all sequenced bacterial and archaeal genomes. These systems incorporate enzymatic activities not previously implicated in antiviral defense, including RNA editing and retron satellite DNA synthesis. In addition, we predict a diverse set of other putative defense genes that remain to be characterized. These results highlight an immense array of molecular functions that bacteria and archaea employ against viruses.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2020; Cataloged from the official PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130594</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Driving forces of self-assembly in protein-polymer bioconjugates</title>
<link>https://hdl.handle.net/1721.1/130592</link>
<description>Driving forces of self-assembly in protein-polymer bioconjugates
Yao, Helen, Ph. D., Massachusetts Institute of Technology.
Protein-polymer bioconjugates have shown great promise as high-performance biomaterials, with a diverse range of applications. Bioconjugation to a polymer allows the protein to maintain or even enhance its activity while imparting self-assembly capabilities to the overall material, which provide control over the orientation and nanostructure of the bioconjugates, enabling the design of materials with superior transport properties and high stability. The phase behavior of globular protein-polymer bioconjugates is comparable to that of traditional synthetic polymer block copolymers and leads to the formation of many of the same nanostructures. Despite these similarities, there are also key differences between these systems. The phase behavior of protein-polymer bioconjugates is affected by coarse-grained properties of both the protein and polymer. However, a unifying theory describing the self-assembly of these materials does not yet exist.; The goal of this thesis was to understand interaction-based and structural driving forces of bioconjugate self-assembly. Partial structure factor analysis and subsequent inverse Fourier transformation showed that protein-polymer interactions could be quantified and understood in the context of physical phenomena through a real-space correlation function. The nature of these interactions can affect the propensity of a bioconjugate system to order. Polymer-water interactions were probed using small-angle neutron scattering, which showed that polymer hydration is affected by both polymer chemistry and concentration. This dependence likely underpins the significant effect that polymer chemistry has on self-assembly. On the structural side, the self-assembly of protein-rod block copolymers was investigated by imparting secondary structure to the polymer through chirality. 
The rigidity of the rod block was shown to drive self-assembly in inherently weakly segregated systems.; Finally, a hard sphere-soft sphere dumbbell model for protein-polymer bioconjugates was built to understand the role of coarse-grained structural properties in phase behavior. Molecular dynamics simulations reproduced the most notable features of bioconjugate self-assembly, including an asymmetrical phase diagram and a lyotropic reentrant order-disorder transition at high concentrations. The success of this coarse-grained model revealed that colloidal interactions are sufficient to effect self-assembly in the globular protein-polymer block copolymer system.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130592</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collisional transfer between excited electronic states as a mechanism for sulfur mass-independent fractionation</title>
<link>https://hdl.handle.net/1721.1/130591</link>
<description>Collisional transfer between excited electronic states as a mechanism for sulfur mass-independent fractionation
Hull, Alexander W.(Alexander William)
The Great Oxygenation Event, the introduction of O₂ into the Earth's atmosphere approximately 2.5 billion years ago, is a critical milestone in the development of life on Earth. The exact timing of this event is thought to be correlated with the disappearance of Archean sulfur isotopic anomalies, called Sulfur Mass-Independent Fractionation (S-MIF), in the rock record. This anomalous fractionation pattern can be described, generally, as an enrichment in the three rare isotopes: S-33 (0.75%), S-34 (4.25%), and S-36 (0.01%), relative to the most abundant isotope, S-32 (95.0%). However, the mechanism for the generation of S-MIF in a reducing atmosphere is still unknown. I use the B-X UV transition (~31,000-36,000 cm⁻¹) in S₂ as a proxy for the study of excited state collisional transfer as a possible mechanism for S-MIF. The short-lifetime B state (natural lifetime: 32 ns) state-mixes extensively with a longer-lifetime B" state (4200 ns).; Furthermore, the most abundant isotopologue of S₂, ³²S-³²S, has only half the number of rotational states compared to its asymmetric counterparts. In this work, I replicated Green and Western's effective Hamiltonian for the X, B, and B" states with additional considerations for mass-dependent vibrational level shifts and nuclear permutation effects. I hypothesize that the collisional transfer between the B and B" states occurs differently for different isotopologues. This difference results in different average excited-state lifetimes, which, in turn, affect the relative rate at which the isotopologues undergo chemical reactions and enter the rock record. Here, spectroscopic B/B" perturbations act as doorways through which population can exchange between the B and B" states. My model incorporates absorption, fluorescence, and predissociation, as described by Green and Western.; It also includes the Gelbart-Freed model for electronically inelastic collisions and the Brunner and Pritchard model for rotationally inelastic collisions. 
I calculate the amount of each isotopologue that enters the rock record by time-dependently solving a master equation kinetic model. Results show that, generally, each region where a B vibronic state crosses a B" vibronic state behaves differently. However, the interactions display some systematic behavior. Because of the energy level patterns, lighter isotopologues are generally favored over the heavier ones (i.e. 32-36 &lt; 32-34 &lt; 32-33 &lt; 32-32) in the absence of predissociation. When predissociation is present, the trend reverses but remains mass-dependent, and cannot explain the S-MIF signature in the rock record. The most important conclusion, however, is that the interactions with the smaller B/B" state-mixing showed the larger isotope effects.; To continue my analysis, therefore, I considered the same B/B" system where the perturbation matrix elements are 1% of their original Green and Western values, i.e. a "weak perturbation model". Here, I develop a statistical doorway model, which posits that doorway locations are somewhat random, and that the asymmetric isotopologues converge to a limiting, ensemble behavior at lower pressures than do symmetric species that are missing half of their rotational states. Results show that this statistical isotope effect is relevant to the weak perturbation model, and may help explain the anomalous isotope patterns in the rock record. Further analysis shows that non-statistical effects may also play a critical role. These include transfer between B and B" states with very small state-mixing (as little as 0.005%) and non-statistical doorway sampling. I conclude that a model that combines statistical and non-statistical isotope effects may explain Archean S-MIF.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from the official PDF of thesis. Page 242 blank.; Includes bibliographical references (pages 235-241).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130591</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on optimal economic growth</title>
<link>https://hdl.handle.net/1721.1/130229</link>
<description>Essays on optimal economic growth
Levhari, David,1935-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics and Social Science, 1964.; Vita.; Includes bibliographical references (leaves 98-99).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130229</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photometric studies of non-aqueous acid-base reactions</title>
<link>https://hdl.handle.net/1721.1/130227</link>
<description>Photometric studies of non-aqueous acid-base reactions
Hummelstedt, Leif Erik Ingmar.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 1959.; Includes bibliographical references (leaves 199-202).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130227</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On imbeddings and holonomy</title>
<link>https://hdl.handle.net/1721.1/130226</link>
<description>On imbeddings and holonomy
Bishop, Richard L.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1959.; Vita.; Includes bibliographical references (leaf 59).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130226</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-ray studies of strained overlayers</title>
<link>https://hdl.handle.net/1721.1/130223</link>
<description>X-ray studies of strained overlayers
Capano, Michael Anthony.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 1989.; Includes bibliographical references (leaves 184-188).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130223</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental studies of internal dark currents in high gradient accelerator structures at 17 GHz</title>
<link>https://hdl.handle.net/1721.1/130220</link>
<description>Experimental studies of internal dark currents in high gradient accelerator structures at 17 GHz
Xu, Haoran, Ph. D., Massachusetts Institute of Technology.
This thesis presents the measurement of the internal dark current in normal conducting single cell standing wave disk-loaded waveguide (DLWG) accelerator structures that operate at 17 GHz, and its comparison with theory. Dark current is the unwanted current of electrons generated by field emission, multipactor on the accelerator inner surfaces, or both. It is distinct from the primary beam propagating along the accelerator axis. Dark current that propagates to the ends of the accelerator has been extensively studied, but this is the first detailed study of the internal dark current generated at the structure sidewalls by multipactor. Theoretical calculations indicate that the collision of electrons on the accelerator sidewall will lead to secondary electron emission and subsequent resonant multipactor discharges. Simulations of the multipactor modes were carried out with both our in-house particle tracking code and the commercial CST PIC code.; Multipactor modes of different orders were predicted to appear at the sidewall with increasing acceleration gradient. The first tested cavities were fabricated from copper and had a sidewall that was either uncoated or coated with diamond-like carbon or titanium nitride. The dark current was measured by a downstream current monitor and by current monitors behind two thin slits opened on the cavity sidewall. With increasing gradient, the downstream dark current increased monotonically, as expected for field emission. The variation of the internal, side dark current was not monotonic but showed the onset of peaks at gradients near 45 and 65 MV/m, in good agreement with simulations using the CST code as well as the in-house code. These were identified as the N = 2 and N = 1 single surface one-point multipactor resonances. The total internal dark current was estimated at ~15-30 A. 
The coated sidewall cavities showed the same multipactor resonances as the uncoated structure.; A second set of tests was conducted with a structure with an axisymmetric elliptical central cell sidewall, which was predicted to suppress the internal dark current. After conditioning with 2.2×10⁵ pulses to 93 MV/m, the multipactor modes were completely suppressed, with no multipactor resonances observed. Studies of internal dark current may help to understand the rf conditioning and the ultimate breakdown performance of high gradient rf accelerator structures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 169-180).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130220</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From strongly-interacting Bose-Fermi mixtures to ultracold molecules</title>
<link>https://hdl.handle.net/1721.1/130219</link>
<description>From strongly-interacting Bose-Fermi mixtures to ultracold molecules
Yan, Zoe Z.(Zoe Ziyue)
This thesis describes experiments on ultracold quantum gases. First, I discuss quantum simulation involving mixtures of bosonic and fermionic atoms. Second, I present work on creating and controlling ultracold dipolar molecules of ²³Na⁴⁰K. The rich phase diagram of Bose-Fermi mixtures was studied with our system of bosonic ²³Na and fermionic ⁴⁰K atoms. When the fermions were immersed as a minority species within a Bose-Einstein condensate, the system realized the canonical Bose polaron quasiparticle, which is an important paradigm in condensed matter physics. We investigated the strongly-coupled Bose polaron as it approached the quantum critical regime of the Bose-Fermi mixture. Using radiofrequency spectroscopy, we probed the binding energy and decay rate as a function of temperature.; In particular, the decay rate was found to scale linearly with temperature near the Planckian rate k_BT/ℏ in the unitarity-limited regime, a hallmark of quantum critical behavior. Bose-Fermi mixtures host a complex spectrum of collective excitations, which can shed light on their properties such as collisional relaxation rates, equilibrium equations of state, and kinetic coefficients. We probed the low-lying collective modes of a Bose-Fermi mixture across different interaction strengths and temperatures. The spin-polarized fermions were observed to transition from ballistic to hydrodynamic flow induced by interactions with the bosonic excitations. Our measurements establish Bose-Fermi mixtures as a fruitful arena to understand hydrodynamics of fermions, with important connections to electron hydrodynamics in strongly-correlated 2D materials. The second part of this thesis describes the creation and manipulation of ultracold molecules in their ground state.; Molecules have more tunable degrees of freedom compared to atoms, paving the way for studies of quantum state-controlled chemistry, quantum information, and exotic phases of matter. 
We created loosely-bound Feshbach molecules from ultracold atoms, then transferred those molecules to their absolute electronic, vibrational, rotational, and hyperfine ground state by stimulated Raman adiabatic passage. The rotational level structure, sample lifetimes, and coherence properties were studied, culminating in a demonstration of second-scale nuclear spin coherence times in an ensemble of NaK. Controlling the intermolecular interactions - which can be tunable, anisotropic, and long range - is an outstanding challenge for our field. We induced strong dipolar interactions via the technique of microwave dressing, an alternative to using static electric fields to polarize the molecules. The origin of these dipolar collisions was the resonant alignment of the approaching molecules' dipoles along their intermolecular axis, resulting in strong attraction. Our observations were explained by a conceptually simple two-state picture based on the Condon approximation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 193-213).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130219</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic polarizability and collective modes in narrow-band electron systems</title>
<link>https://hdl.handle.net/1721.1/130218</link>
<description>Dynamic polarizability and collective modes in narrow-band electron systems
Lewandowski, Cyprian (Cyprian Krzysztof)
The family of moiré materials, in particular the magic angle twisted bilayer graphene, has emerged recently as a platform to study strongly interacting physics. This thesis analyzes the impact of the ultranarrow Bloch bands and strong electron-electron interactions on the dynamical polarization response of these systems. Strong interactions alter the collective charge dynamics in a number of interesting ways, in particular by stiffening the frequency-momentum dispersion of surface plasmons and making it much stronger than that of the underlying narrow-band carriers. Strongly dispersing plasmons pierce through the particle-hole continuum and extend in the forbidden energy band above it. This behavior enables decoupling of plasmons from particle-hole excitations. Such over-the-band plasmons are unable to decay into particle-hole pairs and thus are not subject to Landau damping. As a result, plasmons acquire longer lifetimes as well as an enhanced spatial optical coherence. The optical coherence manifests itself in spatial interference patterns that provide telltale signatures of over-the-band plasmons that are readily accessible in near-field imaging experiments. We further show that the over-the-band plasmon dispersion remains robust in the presence of ordering of the narrow-band carriers. The specific examples of a Wigner crystal and a Mott-Hubbard order, worked out in detail, show that interaction-driven gap opening has no impact on the over-the-band plasmon dispersion. Lastly, we consider the implications of the mechanisms behind the over-the-band behavior for achieving unidirectional collective modes. We present a new mechanism for plasmon nonreciprocity, the magnitude of which is controllable through the strength of electron-electron interactions, which makes it particularly pronounced in the moiré materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 114-123).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130218</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale dissection of bacterial proteome optimization</title>
<link>https://hdl.handle.net/1721.1/130217</link>
<description>Multiscale dissection of bacterial proteome optimization
Lalanne, Jean-Benoît.
The quantitative composition of proteomes results from biophysical and biochemical selective pressures acting under system-level resource allocation constraints. The nature and strength of these evolutionary driving forces remain obscure. Through the development of analytical tools and precision measurement platforms spanning biological scales, we found evidence of optimization in bacterial gene expression programs. We compared protein synthesis rates across distant lineages and found tight conservation of in-pathway enzyme expression stoichiometry, suggesting generic selective pressures on expression setpoints. Beyond conservation, we used high-resolution transcriptomics to identify numerous examples of stoichiometry-preserving compensation among cis-elements in pathway operons. Genome-wide mapping of transcription termination sites also led to the discovery of a phylogenetically widespread mode of bacterial gene expression, 'runaway transcription', whereby RNA polymerases are functionally uncoupled from pioneering ribosomes on mRNAs. To delineate biophysical rationales underlying these pressures, we formulated a parsimonious ribosome allocation model capturing the trade-off between reaction flux and protein production cost. The model correctly predicts the expression hierarchy of key translation factors. We then directly measured the quantitative relationship between expression and fitness for specific translation factors in the Gram-positive species Bacillus subtilis. These precision measurements confirmed that endogenous expression maximizes growth rate. However, idiosyncratic transcriptional changes in regulons were observed away from endogenous expression. The resulting physiological burdens sharpened the fitness landscapes. Spurious system-level responses to targeted expression perturbations, called 'regulatory entrenchment', thus exacerbate the requirement for precisely set expression stoichiometry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 315-348).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130217</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring system properties from thermodynamic fluctuations : a tool development approach</title>
<link>https://hdl.handle.net/1721.1/130216</link>
<description>Inferring system properties from thermodynamic fluctuations : a tool development approach
Jung, Yoon, Ph. D., Massachusetts Institute of Technology.
Biological systems are far from equilibrium, which requires novel tools for unraveling their complex behavior. This thesis focuses on developing a toolbox for understanding properties of living systems from thermodynamic fluctuations. In the first chapter, I discuss a fluorescence imaging platform that provides 3D information using non-invasive and photostable probes called single-walled carbon nanotubes. The second chapter discusses an image processing algorithm for analyzing the fluorescence images acquired with the proposed custom-built microscope. I demonstrate its robust image reconstruction capability in dense fluorescence scenes; its inherently parallel structure allows implementation on GPUs. Finally, I develop a framework that predicts system properties from thermodynamic fluctuations in a data-driven manner. The proposed framework uses feature extraction methods based on wavelets with recurrent neural networks for processing time series data. A combination of these tools completes a pipeline for studying the complex behavior of biological systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 63-70).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130216</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetomicrometry : tissue length tracking via implanted magnetic beads</title>
<link>https://hdl.handle.net/1721.1/130210</link>
<description>Magnetomicrometry : tissue length tracking via implanted magnetic beads
Taylor, Cameron Roy.
Target tracking is necessary across a wide range of disciplines and scales, such as in monitoring tissues and cells, beam bending, fluid dynamics, human-computer interaction, and traffic. Due to these widespread applications, advances in target tracking drive cascades of new medical, social, and scientific capabilities. In particular, this dissertation advances magnetomicrometry, a technology that tracks visually-obscured magnetic beads implanted within biological tissue to monitor in-vivo tissue length and speed within freely moving animals and humans. There are many methods to track visually-obscured objects, but magnetic-target tracking has the advantages of being low-cost, portable, and safe. However, current magnet tracking technologies are slow, precluding high-speed real-time magnetic-target tracking. This is due to the mathematics of magnet tracking, whereby magnet positions are traditionally determined via numerical optimization, suffering from instability and significant delays. This dissertation develops the mathematics for an improved method to track one or more magnets with high speed and accuracy and validates this method by demonstrating real-time muscle length tracking. We develop a high-speed, real-time, multiple-magnetic-target tracking method using the analytic gradient of the magnetic field prediction error. We extend this method to compensate for magnetic disturbances in real time using a simpler, more portable strategy than currently-published disturbance compensation methods. Validating our method in a physical system against state-of-the-art motion capture, we demonstrate increased maximum bandwidths of 336%, 525%, 635%, and 773% for the simultaneous tracking of 1, 2, 3, and 4 magnets, respectively, with tracking accuracy comparable to state-of-the-art magnet tracking.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 111-113).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130210</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meaning change, in theory and in practice</title>
<link>https://hdl.handle.net/1721.1/130209</link>
<description>Meaning change, in theory and in practice
Koslow, Allison Robbins.
Semantic change is of interest in its own right in the philosophy of language. For instance, it sheds light on the relationship between the meaning of a word and its use. It also plays a role in ideology critique. Philosophy of language often focuses on explaining synchronic features related to truth, entailment, and implicature. It is presumed an account of language's diachronic features will follow. But the tools developed are imperfectly suited, and mask phenomena of interest. Chapter 1 concerns volitional meaning change of the sort advocated in conceptual engineering and amelioration. The question addressed is not what our concept, say, WOMAN, is, but what our concept should be. I bring to bear underappreciated empirical constraints on this normative project. Usage finely reflects equilibrium between communicative pressures (just as sales do between market pressures). Revising concepts is not an impossible task, but has significantly different contours than its proponents, and opponents, believe. Chapter 2 concerns a family of cases that resemble those found in the mid-20th century. Austin's famous example is of a bird that looks for all the world like a goldfinch and then blows up like a grenade. Is it a goldfinch? A more realistic case involves the word "food": are meal replacement capsules food? Hard to say. There could be a separation, where British speakers affirm they are, and Americans deny it, without either appearing to misapply "food." Is this kind of open texture reducible to familiar linguistic phenomena, like vagueness? I argue not. It is a sui generis feature of meaning worthy of its own theory. Chapter 3 addresses the charge that radical conceptual analyses, or revisions, change the subject. On a standard picture of meaning, they do. But some puzzles about diachronic synonymy suggest otherwise. I defend a radical position: truth-conditions are projected.
They are partly a function of how, in the interpreter's community, it is reasonable to go on with the expression in question. For example, from our present perspective, "meal replacement capsules are food" may be false even as that very utterance is true from the perspective of future (or subaltern) speakers.
Thesis: Ph. D. in Philosophy, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130209</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protease activity sensors for noninvasive diagnosis and monitoring of pulmonary diseases</title>
<link>https://hdl.handle.net/1721.1/130206</link>
<description>Protease activity sensors for noninvasive diagnosis and monitoring of pulmonary diseases
Kirkpatrick, Jesse D.
Effective disease management requires high quality and accurate information about disease state. As science and technology have evolved, the history and physical exam, once the foundations of the diagnostic workflow, have been supplemented with modalities that allow physicians to peer inside the body and acquire otherwise inaccessible information. To gain maximal information about a disease, a promising approach would be to administer a probe that can detect disease activity inside the body and emit a signal to the outside world. To this end, our group has developed "activity-based nanosensors", which detect dysregulated protease activity at the site of disease and release a reporter that can be measured in the urine. Because proteases are implicated in multiple diseases, including cancer, activity-based nanosensors have the potential to enable quantitative, noninvasive, and real-time monitoring of disease activity. Respiratory diseases are leading causes of death and disability, owing in large part to the constant exposure of the lungs to the external environment. Though this accessibility makes the lungs vulnerable to carcinogens and pathogens, it also provides a unique diagnostic opportunity. In this thesis, we aimed to optimize activity-based nanosensors for lung disease sensing in two settings: early detection and treatment response monitoring. Finally, we sought to establish a generalizable pipeline to rationally design such tools for human disease. We first delivered a multiplexed panel of sensors via intrapulmonary administration in two genetically engineered mouse models of lung adenocarcinoma. We found that our sensor panel diagnosed lung cancer in both models, detecting tumors as small as 2.8 mm³ without false positives from benign lung inflammation.
We then evaluated this approach in monitoring treatment response in mouse models of malignant and benign pulmonary disease. We observed dramatic treatment-induced shifts in pulmonary protease activity in both models, enabling rapid, noninvasive, and quantitative evaluation of drug response. Finally, we established a suite of ex vivo assays that enabled the bottom-up design of a protease-activated diagnostic probe, opening the door for translation to human disease. Collectively, this thesis provides a framework for the clinical development of activity-based nanosensors for pulmonary disease diagnosis and monitoring.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 128-139).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130206</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering of tools for De Novo Assembly of Human Cells</title>
<link>https://hdl.handle.net/1721.1/130205</link>
<description>Engineering of tools for De Novo Assembly of Human Cells
Chao, Chung-Yun (Chung-Yun George)
Organs for transplantation have continuously been in short supply and, given COVID-19's propensity to adversely impact solid organs, the shortage will likely become exacerbated. For decades, the field of tissue engineering has developed innovative methods to generate model tissues de novo. Top-down approaches, such as microfluidics and 3D bioprinting, provide spatial control by patterning cell types with high resolution, but face challenges in reproducing physiologically accurate cell types and interactions. Bottom-up methods, such as organoids, induce pluripotent cells to differentiate into aggregates that resemble their in vivo counterparts, yet the size and complexity of these structures are limited by nutrient diffusion and the morphology cannot be controlled. An ideal system would allow for high spatial control while retaining native cell-cell interactions formed through developmental progression. To approach this capability, we aimed to create a sequential gene expression system that programmatically aggregates and differentiates cells, merging both top-down and bottom-up characteristics. First, we curated and characterized 28 recombinases to determine efficiency and pairwise compatibility for use in mammalian recombinase genetic circuits (RGC). From this set, we designed an RGC capable of expressing 12 genes in sequence, providing a framework for simulating the gene expression cascades of development. To elucidate the temporal dynamics of recombinase action in mammalian cells, we formulated a mathematical model for recombinase expression and catalysis and validated it with experimental data. We found that recombinases have variable expression levels, catalytic rates, and binding affinities, which should be accounted for when designing RGCs. Separately, we designed a platform for engineering novel membrane proteins for inducing specific cell-cell interactions using coiled-coils, called helixCAM.
We demonstrated that helixCAMs are capable of inducing patterned cell binding in E. coli, yeast, and human cells, and further utilized a library-on-library approach to engineer new helixCAM-optimized coiled-coils. Taken together, the genetic tools described in this thesis establish groundwork towards hybrid tissue engineering strategies capable of high-resolution patterning while enabling endogenous cell differentiation and cell-cell interactions to form, ultimately serving as a template for engineering large-scale tissue and organs de novo.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130205</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering gelation in metal ion cross-linked hydrogels</title>
<link>https://hdl.handle.net/1721.1/130204</link>
<description>Engineering gelation in metal ion cross-linked hydrogels
Cazzell, Seth Allen.
Inspired by their role in the extraordinary mechanical properties of aquatic mussel threads, reversible metal ion cross-links have been utilized to engineer the toughness, stress relaxation, and healing ability of polymer hydrogel networks. Such transient network hydrogels are easily made by reversibly cross-linking a growing variety of polymers modified with ligands capable of binding metal ions in dynamic coordination complexes, and researchers by now have developed a range of orthogonal strategies to tune the viscoelastic properties of these metal ion cross-linked hydrogels. However, several critical challenges have slowed the further development of these materials. Like any two-component cross-linked network, metal ion cross-linked hydrogels are limited by a reliance on strict stoichiometric balance between the metal and ligand to achieve full network connectivity, or percolation, and robust mechanical properties. Additionally, the application space for any hydrogel is ultimately limited by their tendency to either dehydrate or freeze, whereupon the unique material properties that motivated their initial use are lost. Finally, it remains challenging to predict the mechanical properties of metal ion cross-linked hydrogels a priori. This thesis reports new strategies to expand the conditions that allow gelation to occur, create viable materials with a defined application, and predict the mechanical properties of metal ion cross-linked hydrogels. Specifically, we demonstrate that metal ion cross-linked hydrogels avoid traditional stoichiometric limits on gelation through self-regulating hydroxide competition.
Additionally, we show that metal ion cross-linked hydrogels can assemble in the presence of large quantities of competitor ligand, further expanding the range of conditions resulting in gelation. Building on these discoveries, we provide a practical demonstration of metal ion cross-linked hydrogels by assembling a broad spectrum vibration damping material, while additionally suppressing dehydration and freezing of the gel. Finally, we develop a computational framework to predict the plateau moduli of metal ion cross-linked hydrogels, a measure of their connectivity and stiffness, as a function of an arbitrary combination of metal ions, ligands, and polymer architecture. The progress made in this thesis should advance the engineering of metal ion cross-linked hydrogels by demonstrating their robust assembly through expanded gelation conditions, their ease of design through our computational model, and their potential application in a broader range of environments due to suppressed dehydration and freezing. More broadly, this thesis pushes forward the development of metal ion cross-linked hydrogels for applications in 3D printing and bioinspired manufacturing and presents a general hypothesis of how to expand gelation conditions in all transient networks outside of metal ion cross-linked hydrogels.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 195-201).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130204</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technology and applications of 2D materials in micro- and macroscale electronics</title>
<link>https://hdl.handle.net/1721.1/130201</link>
<description>Technology and applications of 2D materials in micro- and macroscale electronics
Hempel, Marek, Ph. D., Massachusetts Institute of Technology.
Over the past 50 years, electronics has truly revolutionized our lives. Today, many everyday objects rely on electronic circuitry, from gadgets such as wireless earbuds, smartphones and laptops to larger devices like household appliances and cars. However, the size range of electronic devices is still rather limited, from the millimeter to the meter scale. Being able to extend the reach of electronics from the size of a red blood cell to a skyscraper would enable new applications in many areas including energy production, entertainment, environmental sensing, and healthcare. 2D materials, a new class of atomically thin materials with a variety of electric properties, are promising for such electronic systems with extreme dimensions due to their flexibility and ease of integration. On the macroscopic side, electronics produced on thin films by roll-to-roll fabrication has great potential due to its high throughput and low production cost. Towards this end, this thesis explores the transfer of 2D materials onto flexible EVA/PET substrates with hot roll lamination and electrochemical delamination using a custom designed roll-to-roll setup. The transfer process is characterized in detail and the lamination of multiple 2D material layers is demonstrated. As an exemplary large-scale electronics application, a flexible solar cell with a graphene transparent electrode is discussed. On the microscopic side, this thesis presents a 60×60 µm² microsystem platform called synthetic cells or SynCells. This platform offers a variety of building blocks such as chemical sensors and transistors based on molybdenum disulfide, passive germanium timers, iron magnets for actuation, as well as gallium nitride LEDs and solar cells for communication and energy harvesting. Several system-level applications of SynCells are explored, such as sensing in a microfluidic channel or spray-coating SynCells on arbitrary surfaces.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 198-209).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130201</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dewdrops on the genome : regulation of gene expression by biomolecular phase separation</title>
<link>https://hdl.handle.net/1721.1/130193</link>
<description>Dewdrops on the genome : regulation of gene expression by biomolecular phase separation
Shrinivas, Krishna, Ph. D., Massachusetts Institute of Technology.
Human development and physiology depend on the coordinated function of thousands of cell types - for example, neurons, immune cells, and skin cells. Each cell-type contains an identical copy of the genetic material yet performs specialized and diverse functions, in large part, due to the selective expression of particular coding DNA-elements (genes) into RNA. Mutations or dysregulation in control of gene expression underlie many diseased states, including cancer and neurodegenerative disorders. Non-coding DNA elements called enhancers orchestrate the complex biochemical pathways that lead to precise activation of cell-type specific genes. Decades of advances in molecular biology have identified many of the key proteins and their interactions in these pathways. Yet, how dozens of proteins and their complex network of interactions are organized in space and time by enhancers to robustly relay regulatory information to their target genes remains one of the central puzzles of transcriptional control. In this thesis, I will leverage approaches from statistical physics, simulation, and informatics, in synergy with experimentalists, to gain mechanistic insights into gene control through the lens of biomolecular phase transitions. Proposal: I will introduce recent evidence that proteins and nucleic acids with certain features phase separate into two liquid phases, like oil from water, to compartmentalize cellular pathways. Employing a simple physical model, I will propose that the phase separation of the transcriptional machinery explains established and recently observed puzzles underlying a class of enhancer elements called super-enhancers. Subsequently, I will describe studies performed in collaboration with the Young and Sharp labs that provide direct experimental evidence of the transcriptional condensate model in vivo. Mechanism: I then will describe our efforts to identify the mechanisms contributing to the formation of transcriptional condensates.
By combining molecular dynamics, informatics, and experimental assays, we identify specific features encoded in DNA that enable spatio-temporally localized formation of condensates. I will discuss implications for the origins of enhancer activity. Control: Here, we will combine non-equilibrium models of phase separation, coacervate chemistry, and imaging data in cells to explore the dynamic control of transcription through its eventual outcome, i.e., the ATP-dependent synthesis of RNA. We propose a dual-feedback mechanism in which low levels of RNA synthesis promote condensate formation and higher levels trigger dissolution. I will close by discussing the ramifications of our model on two enigmatic features of transcription - the pervasive synthesis and degradation of non-coding RNA, and the discrete, bursty dynamics of mRNA synthesis. I will conclude with a short summary and brief discussion on future work.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130193</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering non-immunoglobulin binding proteins for in vitro diagnostic tests</title>
<link>https://hdl.handle.net/1721.1/130192</link>
<description>Engineering non-immunoglobulin binding proteins for in vitro diagnostic tests
Sung, Ki-Joo.
In 2016, nearly 5.5 million deaths were attributed to infectious and parasitic diseases. Although many of these diseases are preventable and treatable, resource-constrained regions often lack access to rapid and accurate diagnostic tests to appropriately diagnose and treat these diseases. In order to improve the accessibility of diagnostics, the development of low-cost, simple, and rapid diagnostic tests is vital. Antibodies have been widely used as the binding reagents in these tests to detect a target biomarker from the patient sample. These tests are often designed as a sandwich assay, which requires a pair of antibodies as complementary capture and reporter reagents. However, antibodies have some limitations for use in in vitro applications, including variable stability from clone to clone, long developmental timelines, and structural complexity. In this thesis, we investigated the use of the reduced-charge Sso7d (rcSso7d) binding scaffold as an antibody replacement in diagnostic tests due to its intrinsic stability, inexpensive production in bacteria, and ease of genetic modification. In order to identify unique rcSso7d clones specific to different target biomarkers, we used directed evolution techniques by screening through a yeast surface display library of 1.4 × 10⁹ different clones. Through this process, we identified multiple high-affinity variants against target biomarkers for Zika virus, malaria, inflammation and infection, and a foodborne pathogen. We also demonstrated flexibility of the in vitro surface display selection process by incorporating additional selective pressures based on the desired properties, e.g. complementary binding pairs, minimal off-target binding, or binding to a conserved epitope. In order to integrate rcSso7d into diagnostic assays, we incorporated the scaffold into a reporter reagent format to associate a signal in the presence of the target biomarker.
We then demonstrated the applicability and translatability of the rcSso7d scaffold for use in different diagnostic assay formats, including paper-based, bead-based, well plate ELISA-based, and agglutination assays. Finally, we found that the rcSso7d scaffold retained full functionality in 100% human serum. This work demonstrates that the rcSso7d binding scaffold is a promising alternative binding reagent for the development of robust, low-cost, rapid diagnostic tests to reduce the large global burden of infectious diseases.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130192</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virus-enabled design of high-performing, three-dimensional nanomaterials for electrochemical energy applications</title>
<link>https://hdl.handle.net/1721.1/130191</link>
<description>Virus-enabled design of high-performing, three-dimensional nanomaterials for electrochemical energy applications
Records, William Christopher.
The accelerating pace of anthropogenic climate change has galvanized intensive interest in electrochemical energy storage and conversion. Developing electrode materials for electrochemical devices requires precision and synthetic control over a number of factors, including surface morphology, nanostructure, and distribution of active materials. To this end, my thesis work investigated strategies to implement the M13 bacteriophage as a programmable, lightweight scaffold in the synthesis of three-dimensional, nanoporous foams. Virus-templated nanofoams were incorporated into several relevant energy applications spanning water electrolysis, microbatteries, and electrolytic urea decomposition. The virus-mediated synthesis toolkit yielded clear enhancements in electrochemical performance, as well as design insights into improving nanostructured electrodes in diverse contexts.; Virus-templated, platinum-nickel hydroxide nanofoams were first designed and optimized, displaying strong performance as electrocatalysts for the hydrogen evolution reaction in alkaline conditions (ca. -200 mA cm⁻² on a geometric-area basis and -4.9 A mg⁻¹ on a Pt-mass basis at -70 mV versus the reversible hydrogen electrode). Mass-normalized activity was definitively linked to the platinum dispersion within the virus-templated matrix, providing a guideline for future electrocatalyst development. Next, virus-templated metal phosphides were engineered with orthogonal control over nanoscale features, phase, and composition. Synthetic versatility was developed across monometallic nickel and copper, as well as bimetallic nickel-cobalt, material systems.; When applied as Li-ion microbattery anodes, virus-templated Ni₅P₄ demonstrated a discharge capacity of 677 mAh g⁻¹ (677 mAh cm⁻³) and an 80% capacity retention over more than 100 cycles, outperforming analogous reported Ni₅P₄ materials.
The strong performance was attributed to the virus-templated nanostructure, which remains electronically conductive throughout cycling and obviates the need for conductive additives. In the final application, a fundamental exploration into Ni-based catalysts for the electrooxidation of urea was undertaken, highlighting the need for revised benchmarks to facilitate accurate comparisons across the literature and developing an empirical hypothesis for catalyst instability under constant-current electrolysis. Virus-templated NiₓPᵧ nanofoams were again applied as electrocatalysts, displaying strong activity relative to the field and enhanced resistance to deactivation.; Finally, several directions for scaling methodologies were presented with a future outlook for virus-templating as a material synthesis platform in electrochemical energy storage and conversion.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 171-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130191</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalytic conversion of methane to partially oxidized products over copper-exchanged zeolites</title>
<link>https://hdl.handle.net/1721.1/130190</link>
<description>Catalytic conversion of methane to partially oxidized products over copper-exchanged zeolites
Dinh, Kimberly T.(Kimberly Tam)
The selective conversion of methane to liquid oxygenated compounds is a grand challenge in catalysis. Although natural gas can be processed industrially in large-scale facilities, new catalytic processes are required that directly and economically convert methane to liquid products in small-scale units, in order to exploit highly abundant but difficult-to-access gas reserves. Our group recently reported the first instance of a continuous, gas-phase catalytic process for the direct conversion of methane to methanol using copper-exchanged zeolites by feeding only methane, water, and oxygen at 473 K. While this continuous system is an attractive route for the mild conversion of methane to value-added products, fundamental understanding of the reaction pathway and active site is necessary to engineer improved catalysts and an improved process.; Thus, my thesis investigated the fundamental kinetics and active site requirements for continuous partial methane oxidation and used this knowledge to design an improved process. First, a reaction pathway and a [Cu-O-Cu]²⁺ motif as the active site were identified for the selective catalytic conversion of methane to methanol. Kinetic analysis on copper-exchanged SSZ-13 zeolites across a range of Cu loadings and Al spatial distributions revealed that the reaction pathway is initiated by rate-limiting C-H bond scission of methane. Water is kinetically inconsequential, but required for methanol desorption. Carbon dioxide is generated from the sequential over-oxidation of partially oxidized intermediates and downstream methanol oxidation. Selective partial oxidation was achieved with catalyst samples of high Al content and moderate Cu content (Cu/cage&lt;0.3) with high methane partial pressure in the presence of water.; These insights were used to design a tandem partial oxidation and alkylation process that effectively scavenges methanol to produce toluene by introducing an H-ZSM-5 catalyst and benzene co-feed.
Benzene reacts with methanol over Brønsted acid sites, arresting methanol over-oxidation and enabling 59% selectivity for partial oxidation products at 0.66% methane conversion. In total, these findings resulted in a process that can circumvent the thermodynamic selectivity-conversion limit for the direct partial oxidation of methane to methanol and provide a new avenue of research in product protection to increase methane conversion while maintaining high product selectivity over heterogeneous catalysts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 157-169).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130190</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcriptome-wide organization of subcellular microenvironments revealed by ATLAS-Seq</title>
<link>https://hdl.handle.net/1721.1/130189</link>
<description>Transcriptome-wide organization of subcellular microenvironments revealed by ATLAS-Seq
Adekunle, Danielle(Danielle Aduke)
Subcellular localization of RNAs is a ubiquitous and evolutionarily conserved process that provides an additional layer of transcriptome organization promoting coordinated control of gene expression in both space and time. It has been shown to contribute to processes ranging from cell fate determination and embryonic patterning to local translation and directed cell movement. Elegant efforts focused on a small handful of RNAs have established that RNA localization plays key roles in cell function, yet recent studies suggest that specific localization patterns are the rule, not the exception, across the transcriptome. We still lack global maps and organizing principles for how RNAs are localized in cells and tissues.; This dissertation details the findings of a new approach to investigating RNA localization on a transcriptome-wide scale, ATLAS-Seq, a detergent-free method that generates transcriptomes and proteomes from tissue lysates fractionated across a continuous sucrose gradient by density ultracentrifugation. We conducted proteomic analyses of fractions to determine separation of subcellular compartments. Transcriptomic analyses revealed that RNAs sedimenting similarly across gradients encode proteins in similar protein complexes, cellular compartments, or with similar biological functions, suggesting that functionally related RNAs are cosegregated to be coregulated. Overall, most RNAs sedimented differently than their encoded protein counterparts, signifying that most RNA compartmentalization is not directed at restricting RNAs to the final destinations of their protein products.; To identify regulatory RNA binding proteins potentially driving these patterns, we correlated their sedimentation profiles to those of all RNAs, confirming known protein-RNA interactions and predicting new associations.
Interestingly, hundreds of alternative RNA isoforms exhibited distinct sedimentation patterns across the gradient, despite sharing most of their coding sequence. These results provide new insights into establishment and maintenance of subcellular organization of the transcriptome.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130189</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional changes in connectivity induced by differential manipulations of activity in Drosophila tonic versus phasic motoneurons</title>
<link>https://hdl.handle.net/1721.1/130188</link>
<description>Functional changes in connectivity induced by differential manipulations of activity in Drosophila tonic versus phasic motoneurons
Aponte Santiago, Nicole Ann.
Structural and functional plasticity induced by neuronal competition is a common feature of developing nervous systems. However, the rules governing how postsynaptic cells differentiate between presynaptic inputs are unclear. In this thesis, I characterized synaptic interactions following manipulations of Ib tonic or Is phasic glutamatergic motoneurons that co-innervate postsynaptic muscles at Drosophila neuromuscular junctions (NMJs). After identifying drivers for each neuronal subtype, I performed ablation or genetic manipulations to alter neuronal activity and examined the effects on synaptic innervation and function. Ablation of either Ib or Is resulted in decreased muscle response, with some functional compensation occurring in the tonic Ib input when Is was missing. In contrast, the phasic Is terminal failed to show functional or structural changes following loss of the co-innervating Ib input. Decreasing the activity of the Ib or Is neuron with tetanus toxin light chain resulted in structural changes in muscle innervation. Decreased Ib activity resulted in reduced active zone (AZ) number and decreased postsynaptic subsynaptic reticulum (SSR) volume, with the emergence of filopodial-like protrusions from synaptic boutons of the Ib input. Decreased Is activity did not induce structural changes at its own synapses, but the co-innervating Ib motoneuron increased the number of synaptic boutons and AZs it formed. These findings indicate that tonic and phasic neurons respond independently to changes in activity: the tonic motoneuron shows functional alterations following loss of the co-innervating phasic input, and structural alterations following its reduced activity. This thesis work contributes to the understanding of how multiple neuronal inputs innervating one postsynaptic muscle interact.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130188</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering the human gut microbiome through personalized dietary interventions</title>
<link>https://hdl.handle.net/1721.1/130187</link>
<description>Engineering the human gut microbiome through personalized dietary interventions
Nguyen, Le Thanh Tu.
The human gastrointestinal tract is home to a dense and dynamic microbial community. The composition and metabolic output of the human gut microbiota have been implicated in many diseases, from inflammatory bowel disease, colorectal cancer, and diarrheal diseases to metabolic syndromes such as diabetes. Treatment of these diseases will likely require targeted therapeutic interventions aimed at modulating the abundance and metabolism of specific commensal microbial species or probiotics. A promising avenue for such interventions is through diet, where dietary components act as substrates for the beneficial-metabolite-producing species one wishes to enrich. In this thesis, I focus on a dietary intervention study in healthy individuals. Since the human gut microbiota is known for its highly heterogeneous composition across different individuals, it comes as no surprise that a more personalized approach is needed.; We first test the effects of multiple micronutrients spiked into a fixed diet. Using a highly controlled diet within the cohort, we identify strong and predictable responses of specific microbes across participants consuming prebiotic spike-ins. However, select macronutrient spike-ins, such as unsaturated or saturated fat and protein, produce no predictable response. We next further investigate dietary prebiotic supplementation, as well as its downstream products, short-chain fatty acids, in the digestive tract. We look to alleviate the stress of a highly controlled, low-complexity diet on participants by testing the effect of different prebiotics simultaneously ex vivo.
We show that individuals vary in their microbial metabolic phenotypes (i.e., they produce different quantities and proportions of short-chain fatty acids from the same prebiotic inputs), mirroring differences in their microbiota composition.; Finally, we run a pilot study to elucidate how closely our ex vivo experimental results may reflect the in vivo changes following a short-term dietary fiber supplementation. In addition to obtaining preliminary data on this direct comparison, we also explore different parameters for generating high-throughput data on personalized dietary interventions. Together, these projects provide the framework for building a predictive model for the effect that prebiotic dietary supplementation will have on the composition of the gut microbiota. Such a predictive model would be helpful both in enhancing individuals' gut health and in ameliorating gut dysbiosis in cases of disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130187</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Epigenetic determinants of cellular differentiation, transcriptional reprogramming, and human disease</title>
<link>https://hdl.handle.net/1721.1/130186</link>
<description>Epigenetic determinants of cellular differentiation, transcriptional reprogramming, and human disease
Nguyen, Khoi Thien.
Much of the diversity we observe in cellular and organismal phenotypes can be attributed to epigenetic and genetic variation. DNA provides the instructions for life, while epigenetic modifications regulate which parts of the genetic information contained in DNA can be read out in a given cell and how this information is interpreted. In recent years, epigenetic and genetic variation has been profiled on a large scale with sequencing-based assays, generating many datasets to be explored. In this thesis, I present three projects that apply computational techniques to identify and characterize epigenetic mechanisms that may contribute to the regulation of phenotypic variance. First, we mine a dataset characterizing the epigenomes of diverse cell types in order to discover signatures of adult stem cell differentiation.
This identifies specific mechanisms underlying disease processes, and demonstrates that transcriptional and epigenetic mechanisms may independently contribute to disease pathogenesis.; Together, these projects demonstrate the biological insights that can be gained from epigenetic profiling, and expand our understanding of the potential effects of epigenetic modifications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 111-130).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130186</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hippocampal microcircuits for social memory specification</title>
<link>https://hdl.handle.net/1721.1/130185</link>
<description>Hippocampal microcircuits for social memory specification
Lim, Rosary Yuting.
During social interactions, humans and social animals can distinguish not only familiar and novel conspecifics (social recognition) but also between multiple familiar individuals (social specification). Recent studies have implicated the hippocampal sub-region dorsal CA2 (dCA2) in social recognition and identified a social recognition memory engram in the downstream ventral CA1 (vCA1). However, the anatomical site for the storage of social specification memory and its underlying neural mechanisms remain poorly understood. Here, we report that social specification memory engrams are stored in vCA1, while social information encoded in dCA2 is sharpened as it travels through dCA2-to-vCA1 microcircuits within CA2, thereby acquiring a progressive increase in specificity through repeating motifs of feed-forward inhibition. Both the inhibition of GABAergic inhibitory neurons in CA2 and reduced activity of excitatory neurons by ablation of oxytocin receptors in the dCA2-to-vCA1 microcircuits impair social memory specification. These results suggest that vCA1 and the multiple feed-forward inhibition motifs in the dCA2-to-vCA1 microcircuits are crucial for social memory specification.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 76-90).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130185</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for learning to induce programs</title>
<link>https://hdl.handle.net/1721.1/130184</link>
<description>Algorithms for learning to induce programs
Ellis, Kevin,Ph. D.(Kevin M.)Massachusetts Institute of Technology.
The future of machine learning should have a knowledge representation that supports, at a minimum, several features: expressivity, interpretability, the potential for reuse by both humans and machines, and sample-efficient generalization. Here we argue that programs, i.e., source code, are a knowledge representation that can contribute to the project of capturing these elements of intelligence. This research direction, however, requires new program synthesis algorithms which can induce programs solving a range of AI tasks. This program induction challenge confronts two primary obstacles: the space of all programs is infinite, so we need a strong inductive bias or prior to steer us toward the correct programs; and even if we have that prior, effectively searching through the vast combinatorial space of all programs is generally intractable. We introduce algorithms that learn to induce programs, with the goal of addressing these two primary obstacles. Focusing on case studies in vision, computational linguistics, and learning-to-learn, we develop an algorithmic toolkit for learning inductive biases over programs as well as learning to search for programs, drawing on probabilistic, neural, and symbolic methods. Together this toolkit suggests ways in which program induction can contribute to AI, and how we can use learning to improve program synthesis technologies.
Thesis: Ph. D. in Cognitive Science, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-224).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130184</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave functions and transition probabilities of light atoms</title>
<link>https://hdl.handle.net/1721.1/129963</link>
<description>Wave functions and transition probabilities of light atoms
Yilmaz, Huseyin,1924-2013.
Thesis (Ph.D.) Massachusetts Institute of Technology. Dept. of Physics, 1954.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129963</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chaotic obliquity of Mars</title>
<link>https://hdl.handle.net/1721.1/129962</link>
<description>The chaotic obliquity of Mars
Touma, Jihad Rachid.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1993.; Includes bibliographical references (leaves 81-84).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129962</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic engineering on 2D materials using electron irradiation and chemical protection</title>
<link>https://hdl.handle.net/1721.1/129931</link>
<description>Atomic engineering on 2D materials using electron irradiation and chemical protection
Su, Cong,Ph. D.Massachusetts Institute of Technology.
Controlling the exact atomic structure is an ultimate form of engineering. Atomic manipulation and atom-by-atom assembly can create functional structures that are hard to synthesize chemically. Defects at the one- or few-atom scale possess intriguing properties applicable to fields such as quantum engineering (e.g. nitrogen-vacancy centers and single-photon emitters) or single-atom catalysis. Historically, scanning tunneling microscopy (STM) has demonstrated good stepwise control of single atoms, leading to physicochemical insights and technological advances. However, its scalability and throughput are severely limited by mechanical probe movements, and its applicability is constrained by the low-temperature environment (usually below 77 K) needed to stabilize the structure. Therefore, a method of controlling atoms at room temperature without mechanical movement is essential for broader applicability and for relaxing these constraints.; The advancement of aberration correctors makes it possible to focus high-energy (usually 30 keV to 300 keV) electron beams to the single-atom scale inside a scanning transmission electron microscope (STEM). Beyond being a versatile tool for characterizing the precise atomic structures of materials, STEM has also demonstrated the capability of controlling atoms on two-dimensional (2D) materials, such as substitutional dopants in graphene or molybdenum disulfide (MoS₂). This turns electron-beam irradiation damage, normally an unwanted side effect, into a powerful and constructive tool. While controlling atoms using STEM is promising, it is still hampered by the fact that most of the dynamic processes are random.
The core of this thesis, a theoretical framework called Primary Knock-on Space (PKS), will be introduced for optimizing the control process by biasing the probabilities of different atomic dynamics.; This framework predicts how various external factors tunable in experiment, such as temperature, electron beam incident angle, electron beam voltage, and dopant species, can influence the atom dynamics. It proves useful in guiding the control process toward a more deterministic route. Following the introduction of the framework, several proof-of-concept experiments are presented to validate the PKS framework. The future of atomic engineering is also envisioned at the end. An additional topic, the corrosion inhibition of 2D materials, is also discussed; this inhibition is found to be critical during the material transfer process.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 91-98).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129931</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive modeling of polycyclic aromatic hydrocarbon formation during pyrolysis</title>
<link>https://hdl.handle.net/1721.1/129925</link>
<description>Predictive modeling of polycyclic aromatic hydrocarbon formation during pyrolysis
Liu, Mengjie,Ph. D.Massachusetts Institute of Technology.
Polycyclic aromatic hydrocarbons (PAHs), large molecules composed of multiple aromatic rings such as anthracene or pyrene, are notable intermediates and byproducts in the combustion or pyrolysis of hydrocarbon fuels. On their own, they have been shown to pose a significant health risk, with certain PAHs being linked to increased cancer risk in humans. In addition, PAHs are known to play an important role as building blocks towards larger particles, known as soot or black carbon, which contribute a significant fraction of atmospheric PM₂.₅ pollution (particulate matter with diameters under 2.5 μm). These particulates pose additional health risks and can also contribute to global climate change via radiative forcing. This motivates interest in understanding the chemical pathways leading to the formation of these PAHs, which could inform better models to predict PAH emissions and optimize methods to reduce their formation.; This thesis presents methods to improve the capabilities of automatic mechanism generation software in modeling the complex chemistry involved in PAH formation. In particular, it focuses on the Reaction Mechanism Generator (RMG) software, an open-source package developed primarily in Python. RMG automatically identifies species and reactions that are relevant at conditions of interest to aid construction of detailed mechanisms, but it has not previously been applied to PAH chemistry. To do so, new algorithms were developed to improve treatment of aromaticity and chemical resonance to better reflect the true behavior of molecules within the limitations of programmatic representations. The effect of polycyclic ring strain on parameter estimation was also investigated, highlighting challenges in capturing 3D conformational effects using the existing estimation frameworks and methods to address them.; These improvements to fundamental algorithms play an important role in how thermochemical and kinetic parameters are estimated.
The combined utility of these developments is demonstrated by the generation of a detailed mechanism for modeling PAH formation up to pyrene in acetylene pyrolysis, which represents an important milestone in RMG capabilities. Analysis of the model provides insight into the relative contributions of various PAH formation pathways, revealing that hydrogen-abstraction/acetylene-addition pathways are the key contributors to PAH formation in this system.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129925</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cumulative migration method for computing multi-group transport cross sections and diffusion coefficients with Monte Carlo calculations</title>
<link>https://hdl.handle.net/1721.1/129924</link>
<description>Cumulative migration method for computing multi-group transport cross sections and diffusion coefficients with Monte Carlo calculations
Liu, Zhaoyuan,Ph. D.Massachusetts Institute of Technology.
In nuclear reactor physics analysis, fast, accurate deterministic methods are needed for the many full-core calculations required for safe and efficient operation of nuclear power plants. Multi-group diffusion coefficients and transport cross sections are the crucial parameters that balance efficiency and accuracy in full-core simulations. However, it is not clear what definition of diffusion coefficients and transport cross sections should be employed or what "transport properties" are preserved by the numerous approximations available in the literature. Among the sources of error associated with efficient deterministic simulations of nuclear reactors, whether diffusion or transport theory, the anisotropy of neutron scattering introduces one major challenge for achieving highly accurate eigenvalues and power distributions. Anisotropic scattering has a significant impact on neutron spatial migration, which is an important transport property in nuclear reactor systems.; It is well known that scattering is highly forward-peaked when neutrons collide with light nuclides such as hydrogen in water, but how anisotropic scattering contributes to neutron migration has not been thoroughly studied. The Cumulative Migration Method (CMM) is developed in this thesis as a new method for computing multi-group diffusion coefficients and transport cross sections with Monte Carlo methods that preserves the migration area. Thus, CMM is able to overcome the shortcomings of commonly applied transport approximations. CMM is directly applicable to lattice calculations performed by Monte Carlo and is capable of producing rigorous homogenized diffusion coefficients and transport cross sections for arbitrarily heterogeneous lattices.
By preserving neutron migration area, CMM also improves the accuracy of heterogeneous transport cross sections in multi-group transport calculations.; The advantage of CMM in achieving higher accuracy in full-core calculations is demonstrated on a series of 2D benchmark problems with both water and graphite moderators. The transport correction using CMM significantly improved agreement in full-core simulation results compared with other approximations. Consistent improvement is shown in reducing the error of eigenvalue and migration area. By employing pre-computed continuous energy correction tables for light nuclides, CMM offers a potential pathway to improve tally capabilities of existing Monte Carlo codes in generating transport cross sections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-214).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129924</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The distinct neural mechanisms underlying the production of stereotyped and exploratory vocal behavior in songbirds</title>
<link>https://hdl.handle.net/1721.1/129923</link>
<description>The distinct neural mechanisms underlying the production of stereotyped and exploratory vocal behavior in songbirds
Lynch, Galen (Galen Forest)
Whether speaking to one another or nailing a tennis serve, humans can perform an incredible range of behaviors, most of which are learned. How do we and other animals learn complicated sequential behaviors, and once they are learned, how are they executed? This thesis is an investigation into the neural basis of the two modes of behavior that occur at the beginning and end of learning a motor skill: the initially highly variable exploratory behavior, and the ultimately stereotyped skilled performance. To understand the start and end points of learned motor behaviors, I present two studies, each on the premotor activity of ensembles of neurons that underlie song production in zebra finches. Executing learned motor behaviors requires animals to produce precisely timed motor sequences. While cortical motor regions have traditionally been viewed as encoding features of motor gestures (Georgopoulos et al., 1982), more recent studies have suggested that motor regions may have intrinsic dynamics that pattern the production of motor gestures (Churchland et al., 2012). A similar debate has arisen in songbirds. Adult birdsong requires the premotor nucleus HVC, in which projection neurons burst sparsely at stereotyped times in the song. It has been hypothesized that projection neuron bursts, as a population, form a continuous sequence, while a different model of HVC function proposes that HVC activity is tightly organized around motor gestures. Using a large dataset of HVC neurons recorded in singing birds, we test several predictions of these models. We find that projection neuron bursts in adult birds are continuously and nearly uniformly distributed throughout song. Another model posits that LMAN, the nucleus that drives vocal exploration during learning, may act as an excitable medium producing locally propagating waves of activity, and predicts that all nearby pairs of neurons would be highly correlated.
To test these models and to understand how LMAN actively generates behavioral variability, we built a miniature lightweight microdrive to simultaneously record from multiple neurons, as well as a lightweight endoscope to perform functional calcium imaging of ensembles of LMAN neurons. With these new technologies, we observed the simultaneous activity of pairs of single units in singing juvenile and adult birds. We find that most pairs of neurons with small separation (&lt;250 μm) are completely uncorrelated, which is incompatible with the wave model. However, a small subset of pairs have strikingly large correlations, with correlation coefficients of up to 0.81. Intriguingly, these correlated pairs of neurons can be separated by up to 400 μm. The existence of such highly correlated neurons within LMAN is inconsistent with LMAN being a simple balanced excitatory-inhibitory network with uniformly random connectivity. These results suggest that new models of variability generation are required to explain how LMAN generates exploratory behavioral variability.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-232).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129923</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improved methods for managing megaprojects</title>
<link>https://hdl.handle.net/1721.1/129920</link>
<description>Improved methods for managing megaprojects
Minelli, Paolo, Ph. D., Massachusetts Institute of Technology.
Nuclear power is in danger of fading away as a significant source of energy supply in some parts of the world unless it becomes more competitive. The cost of nuclear energy largely depends on capital cost, because nuclear power plants are very expensive to design and build and relatively cheap to operate. The capital cost of nuclear power plants is dominated by factors other than physical equipment, such as project preparation, site preparation, engineering, planning, installation, and management. Notably, project planning, monitoring, execution, and management are where the nuclear industry has too often failed in the past. As of 2014, the average time and cost overruns for nuclear projects worldwide are 64% and 117.3%, respectively. There is evidence that there is something we fundamentally do not understand (or represent) in project planning and management. This work is based on the premise that nuclear projects are megaprojects. As such, they are complex systems that are (1) constantly changing, (2) tightly coupled, (3) governed by feedback, (4) non-linear, (5) history-dependent, (6) self-organizing, (7) adaptive, (8) characterized by tradeoffs, (9) counterintuitive, and (10) policy-resistant. Hence, traditional project management methods alone are insufficient. An alternative approach to reducing the cost and uncertainties of nuclear projects is to turn them into more standard projects in terms of scope, complexity, and capital at risk.
For example, the nuclear industry is pursuing the development of micro-reactors: plug-and-play nuclear batteries that would be two orders of magnitude smaller in physical size, wholly manufactured and fueled in a factory, and transported to the site within standard-size freight containers, requiring minimal site excavation and preparation. This work develops an improved framework for managing megaprojects, estimating their value, and making project/design decisions involving numerous stakeholders, multiple competing objectives, and substantial uncertainty. The framework is built on two pillars: a System Dynamics (SD) model and a probabilistic Discounted Cash Flow (DCF) model. The former focuses on design and construction, while the latter focuses on operations. The two models are consistent with each other and are run sequentially. To demonstrate its feasibility and appreciate its benefits, the SD-DCF approach is applied to a real-world case study: an ongoing project in North America based on a marine nuclear power plant entirely built in a shipyard and towed to the site upon completion. A multi-objective decision-making problem is framed to illustrate the importance of a solid decision management process in megaprojects. 270 different projects are derived from the combination of six high-level design/project choices: plant capacity, deployment concept, flexibility, overlap between design and construction, level of effort spent on FEED, and size and availability of the management team. The projects are simulated using the SD and DCF models, represented on trade spaces, and finally evaluated against success objectives to derive general policy insights. This framework represents a synthesis of management methods that was not practically available before.
This work documents the development of the models, shows why they should be used, applies them to an actual case study to provide a real-world application, and makes the method credible, publicly available, and convenient to adopt.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 192-202).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129920</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel, asynchronous ray-tracing for scalable, 3D, full-core method of characteristics neutron transport on unstructured mesh</title>
<link>https://hdl.handle.net/1721.1/129911</link>
<description>Parallel, asynchronous ray-tracing for scalable, 3D, full-core method of characteristics neutron transport on unstructured mesh
Gaston, Derek Ray.
One important goal in nuclear reactor core simulations is the computation of detailed 3D power distributions that will enable higher confidence in licensing of next-generation reactors and lifetime extensions/power up-rates for current-generation reactors. To date, there have been only a few demonstrations of such high-fidelity deterministic neutron transport calculations. However, as computational power continues to grow, such capabilities continue to move closer to being practically realized. Predictive reactor physics needs both neutronics calculations and full-core, 3D coupled multiphysics simulations (e.g., neutronics, fuel performance, fluid mechanics, structural mechanics). Therefore, new reactor physics tools should harness supercomputers to enable full-core reactor simulations and be capable of coupling for multiphysics feedback. One candidate for full-core nuclear reactor neutronics is the method of characteristics (MOC). Recent advancements have produced a pellet-resolved 3D MOC solution for the BEAVRS benchmark. However, MOC is traditionally implemented using constructive solid geometry (CSG), which makes it difficult (if not impossible) to accurately deform material to capture physical feedback effects such as fuel pin thermal expansion, assembly bowing, or core flowering. An alternative to CSG is to use an unstructured finite-element mesh for the spatial discretization of MOC. Such mesh-based geometries permit direct linking to unstructured mesh-based multiphysics tools, such as fuel performance codes. Utilizing unstructured mesh has been attempted in the past, but those attempts have fallen short of producing usable 3D reactor simulators. Several key issues have hindered these attempts: lack of fuel volume preservation, approximations of boundary conditions, inefficient spatial domain decompositions, excessive memory requirements, ineffective parallel load balancing, and lack of scalability on massively parallel modern computer clusters.
This thesis resolves these issues by developing a massively parallel, 3D, full-core MOC code, called MOCkingbird, using unstructured meshes. Underpinning MOCkingbird is a new algorithm for parallel ray tracing: the Scalable Massively Asynchronous Ray Tracing (SMART) algorithm. This algorithm enables efficient parallel ray tracing across the full reactor domain, alleviating the issues of reduced convergence associated with standard parallel MOC algorithms. In addition, to enable full-core simulation using unstructured mesh MOC, several new algorithms are developed, including reactor mesh generation, sparse parallel communication, parallel cyclic track generation, and weighted partitioning. Within this work, MOCkingbird and SMART are tested for scalability from 10 to 20,000 cores on the Lemhi supercomputer at Idaho National Laboratory. Accuracy is tested using a suite of benchmarks that ultimately culminate in a first-of-a-kind, 3D, full-core simulation of the BEAVRS benchmark using unstructured mesh MOC.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-224).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129911</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemically informed modeling of miRNA targeting efficacy</title>
<link>https://hdl.handle.net/1721.1/129906</link>
<description>Biochemically informed modeling of miRNA targeting efficacy
Lin, Kathy S.
In metazoans, microRNAs (miRNAs) are short pieces of RNA that load into Argonaute (AGO) proteins and base-pair to complementary sequences in mRNAs. Upon binding an mRNA, AGO-miRNA complexes recruit machinery that translationally represses and degrades the mRNA. Mammalian genomes encode hundreds of miRNAs, and most mRNAs in mammals have evolutionarily conserved target sites for at least one of these miRNAs. Because of the widespread and varied roles of miRNAs in regulating gene expression, there have been many efforts over the past decade to predict the extent of targeting between a miRNA and an mRNA from their sequences alone. This targeting relationship depends on the binding affinities of the AGO-miRNA complex for target sites on the mRNA, which are poorly predicted by the nearest-neighbor rules used for predicting RNA-RNA duplex stabilities. This is presumably because AGO modulates the energetics of duplexes formed between its loaded miRNA and mRNA target sites. The recent development of a high-throughput method of measuring RNA-binding affinities, RNA bind-n-seq (RBNS), has allowed us to determine relative KD values for AGO-miRNA complexes binding to hundreds of thousands of potential target sites. In this work, we use these biochemical parameters to build a quantitative model of miRNA targeting that predicts mRNA repression by a miRNA in cells better than existing in silico models. We then expand this approach to all miRNAs, including those for which we have not measured binding affinities, by training a convolutional neural network (CNN) to predict the binding affinity between arbitrary miRNA and target sequences. We show that CNN-predicted KD values parallel the utility of experimentally determined KD values in predicting the repression of mRNAs in cells. By measuring the binding affinities between miRNAs and their targets, we can also estimate how much binding affinity contributes to miRNA-mediated targeting.
Although the majority of the variance in targeting is attributable to binding affinity, about 40% of the variance remains unexplained, motivating future efforts to expand the deep learning framework to learn important features of mRNAs outside of target sites that influence miRNA activity.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129906</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical modeling, characterization and monitoring of the seasonal behavior of expansive clays</title>
<link>https://hdl.handle.net/1721.1/129904</link>
<description>Numerical modeling, characterization and monitoring of the seasonal behavior of expansive clays
Rosa Montenegro, Ivo.
Seasonal ground movements associated with the swelling and shrinking of expansive clays represent a major cause of damage to buildings, roads, and pipelines. The processes that give rise to these movements involve complex ground-atmosphere interactions (due to the combined effects of infiltration and evaporation), combined with the highly non-linear hydraulic and mechanical properties of partially saturated clay. This thesis presents an integrated study to measure and interpret long-term ground movements at a greenfield test site in Mustang Ridge (MR), Texas, adjacent to toll road SH130. The site is underlain by almost 12 m of high-plasticity (montmorillonite-rich) clay. We designed and installed an autonomous field station that measures local weather conditions together with sub-surface water contents, using water content reflectometers (WCRs), and deformations, through a novel system of string-pot potentiometers. We have obtained online data from the site for more than 3 years and observed a seasonal range of 50 mm in ground surface movements. We have investigated the engineering properties of the MR clay using samples extracted during site investigation and lab tests on compacted specimens of blended/reconstituted clay. These data are then used to calibrate constitutive models of the suction-water content relationship (SWCC), hydraulic conductivity, and 1D compressibility for the MR clay. Numerical analyses of non-linear coupled flow-deformation in the partially saturated soil column have been carried out using a customized finite difference and finite volume framework (MPME) implemented within MATLAB.
The MPME framework enables the representation of specified atmospheric boundary conditions (either as specified fluid pressures or fluxes) and hence can simulate periods dominated by rainfall-induced infiltration or net drying by evapotranspiration. Parametric MPME analyses are used to interpret field measurements at the test site, to explain seasonal fluctuations in average strains and water contents, and hence to interpret the active depth in the MR clay. The results show that surface cracks influence the transient response of ground movements by significantly increasing the hydraulic conductivity of the medium. This feature affects the response rate of soil deformations to atmospheric phenomena. This behavior was simulated using a constitutive model proposed by Stewart et al. (2016a) that quantifies the development of desiccation cracking. By improving our understanding of the sources of seasonal ground movements, the research provides the basis for a more robust design of foundations and roadbeds in areas underlain by expansive clays.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 355-369).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129904</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid nutrient fluctuations and their implications for bacterial growth</title>
<link>https://hdl.handle.net/1721.1/129902</link>
<description>Rapid nutrient fluctuations and their implications for bacterial growth
Nguyen, Jennifer Kim Thu.
Bacteria, like all lifeforms, rely on resource acquisition for growth, and the tight coupling between bacterial growth physiology and nutrient availability has long been observed. When nutrient environments shift on timescales of hours, days, or seasons, bacteria adapt to a physiological steady state characteristic of the new environment. However, the microscopic heterogeneity inherent in bacterial habitats implies that nutrient concentrations often fluctuate on timescales of seconds or minutes, too rapid for bacteria to reach the steady state corresponding to the instantaneous environment. Despite this, steady-state growth is extensively used as a model to understand bacterial physiology, even in dynamic environments. In this thesis, I experimentally demonstrate that bacterial growth in rapidly fluctuating environments cannot be predicted from steady-state growth models. Using a custom microfluidic device coupled with single-cell microscopy and image analysis, I quantified the growth physiology of thousands of individual E. coli cells experiencing either (1) periodic nutrient fluctuations on timescales ranging from 30 seconds to 60 minutes or (2) steady environments of equal average nutrient concentration. Growth rate in fluctuating environments was 16-50% lower than in comparable steady environments, corresponding to a 10²- to 10⁸-fold loss in daily biomass production. However, cells grown in fluctuating environments had a growth advantage in the minutes after a nutrient shift over cells grown in steady environments. Cell size also deviated from steady-state trends, with a particular fluctuation timescale producing cell sizes 54% larger than expected. These significant deviations from steady-state predictions highlight the importance of nutrient timescale and challenge our classical understanding of bacterial growth.
Thesis: Ph. D. to the Microbiology Graduate Program, Massachusetts Institute of Technology, Department of Biology, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129902</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social and affective machine learning</title>
<link>https://hdl.handle.net/1721.1/129901</link>
<description>Social and affective machine learning
Jaques, Natasha (Natasha M.)
Social learning is a crucial component of human intelligence, allowing us to rapidly adapt to new scenarios, learn new tasks, and communicate knowledge that can be built on by others. This dissertation argues that the ability of artificial intelligence to learn, adapt, and generalize to new environments can be enhanced by mechanisms that allow for social learning. I propose several novel deep- and reinforcement-learning methods that improve the social and affective capabilities of artificial intelligence (AI), through social learning both from humans and from other AI agents. First, I show how AI agents can learn from the causal influence of their actions on other agents, leading to enhanced coordination and communication in multi-agent reinforcement learning. Second, I investigate learning socially from humans, using non-verbal and implicit affective signals such as facial expressions and sentiment. This ability to optimize for human satisfaction by sensing implicit social cues can enhance human-AI interaction and guide AI systems to take actions aligned with human preferences. Learning from human interaction with reinforcement learning, however, may require dealing with sparse, off-policy data, without the ability to explore online in the environment, a situation inherent to safety-critical, real-world systems that must be tested before being deployed. I present several techniques that enable learning effectively in this challenging setting. Experiments deploying these models to interact with humans reveal that learning from implicit, affective signals is more effective than relying on humans to provide manual labels of their preferences, a task that is cumbersome and time-consuming.
However, learning from humans' affective cues requires recognizing them first. In the third part of this thesis, I present several machine learning methods for automatically interpreting human data and recognizing affective and social signals such as stress, happiness, and conversational rapport. I show that personalizing such models using multi-task learning achieves large performance gains in predicting highly individualistic outcomes like human happiness. Together, these techniques create a framework for building socially and emotionally intelligent AI agents that can flexibly learn from each other and from humans.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2020; Cataloged from student-submitted PDF of thesis. "February 2020."; Includes bibliographical references (pages 309-342).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129901</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commensal-specific immune responses at the intestinal mucosa</title>
<link>https://hdl.handle.net/1721.1/129900</link>
<description>Commensal-specific immune responses at the intestinal mucosa
Bousbaine, Djenet.
The intestinal mucosa harbors a dense community of microbes that breaks down polysaccharides indigestible by the host, synthesizes essential vitamins, stimulates maturation of the immune system, and outcompetes the growth of pathogenic species. In return, the host provides commensals with a habitat rich in energy derived from ingested food. The intestinal immune system faces the daunting task of maintaining homeostasis despite the enormous load and diversity of antigens present at this site. Failure to maintain this balance has dramatic consequences and can cause food allergies, inflammatory bowel disease, or invasive infections. Peripherally induced Foxp3⁺ regulatory T cells (pTregs) maintain immune homeostasis at the intestinal mucosa by regulating effector T cell responses against dietary antigens and microbes. Like pTregs, a subset of small-intestine intraepithelial lymphocytes, CD4⁺CD8αα⁺ cells (CD4-IELs), exhibit regulatory properties and promote tolerance against dietary antigens. In this thesis, I describe a new commensal-specific CD4⁺ T cell model obtained by somatic cell nuclear transfer using, as a donor, a single pTreg from the mesenteric lymph node. In chapter 1, I provide an overview of the interplay between the microbiota and the mucosal immune system. In chapter 2, we describe our newly developed model and use it to assess how the identity of the T cell receptor (TCR) affects the fate of a T cell. In chapter 3, I describe the antigen and epitope recognized by this transnuclear (TN) TCR and show that TN cells can protect against intestinal inflammation in a colitis model. In chapter 4, I describe how TN cells can also differentiate into T follicular helper cells and promote systemic responses. In chapter 5, we develop a strategy to target antigens to outer membrane vesicles (OMVs) of Bacteroides and thereby assess antigen-specific responses to OMVs.
In chapter 6, I provide concluding remarks and discuss future prospects for our findings. In the appendix, I describe a new mouse model to site-specifically label and track the B cell receptor of primary B cells.
Thesis: Ph. D. to the Microbiology Graduate Program, Massachusetts Institute of Technology, Department of Biology, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129900</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep neural networks for choice analysis</title>
<link>https://hdl.handle.net/1721.1/129894</link>
<description>Deep neural networks for choice analysis
Wang, Shenhao.
As deep neural networks (DNNs) outperform classical discrete choice models (DCMs) in many empirical studies, one pressing question is how to reconcile the two in the context of choice analysis. So far, researchers have mainly compared their prediction accuracy, treating them as completely different modeling methods. However, DNNs and classical choice models are closely related and even complementary. This dissertation seeks to lay a new foundation for using DNNs in choice analysis. It consists of three essays, which respectively tackle the issues of economic interpretation, architectural design, and robustness of DNNs by using classical utility theories. Essay 1 demonstrates that DNNs can provide economic information as complete as that of classical DCMs. This information includes choice predictions, choice probabilities, market shares, substitution patterns of alternatives, social welfare, probability derivatives, elasticities, marginal rates of substitution (MRS), and heterogeneous values of time (VOT). Unlike DCMs, DNNs can automatically learn the utility function and reveal behavioral patterns that are not prespecified by modelers. However, the economic information from DNNs can be unreliable, because this automatic learning capacity comes with three challenges: high sensitivity to hyperparameters, model non-identification, and local irregularity. To demonstrate the strengths of DNNs as well as these three issues, I conduct an empirical experiment applying DNNs to a stated preference survey and discuss, in turn, the full list of economic information extracted from them.
Essay 2 designs a particular DNN architecture with alternative-specific utility functions (ASU-DNN) by using prior behavioral knowledge. Theoretically, ASU-DNN reduces the estimation error of a fully connected DNN (F-DNN) because of its lighter architecture and sparser connectivity, although the constraint of alternative-specific utility could cause ASU-DNN to exhibit a larger approximation error. Both ASU-DNN and F-DNN can be treated as special cases of DNN architecture design guided by a utility connectivity graph (UCG). Empirically, ASU-DNN has 2-3% higher prediction accuracy than F-DNN. The alternative-specific connectivity constraint, as a domain-knowledge-based regularization method, is more effective than other regularization methods. This essay demonstrates that prior behavioral knowledge can be used to guide the architecture design of DNNs, to function as an effective domain-knowledge-based regularization method, and to improve both the interpretability and predictive power of DNNs in choice analysis. Essay 3 designs a theory-based residual neural network (TB-ResNet) with a two-stage training procedure, which synthesizes decision-making theories and DNNs in a linear manner. Three instances of TB-ResNets based on choice modeling (CM-ResNets), prospect theory (PT-ResNets), and hyperbolic discounting (HD-ResNets) are designed. Empirically, compared to the decision-making theories alone, the three instances of TB-ResNets predict significantly better in out-of-sample tests and become more interpretable owing to the rich utility function augmented by the DNNs. Compared to the DNNs alone, the TB-ResNets predict better because the decision-making theories aid in localizing and regularizing the DNN models.
TB-ResNets are also more robust than DNNs because the decision-making theories stabilize the local utility function and the input gradients. This essay demonstrates that it is both feasible and desirable to combine handcrafted utility theory and automatic utility specification, with joint improvements in prediction, interpretation, and robustness.
Thesis: Ph. D. in Computer and Urban Science, Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 117-128).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129894</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making musical magic live : inventing modern production technology for human-centric music performance</title>
<link>https://hdl.handle.net/1721.1/129893</link>
<description>Making musical magic live : inventing modern production technology for human-centric music performance
Bloomberg, Benjamin Arthur Philips.
Fifty-two years ago, Sergeant Pepper's Lonely Hearts Club Band redefined what it meant to make a record album. The Beatles revolutionized the recording process, using technology to achieve completely unprecedented sounds and arrangements. Until then, popular music recordings were simply faithful reproductions of a live performance. Over the past fifty years, recording and production techniques have advanced so far that another challenge has arisen: it is now very difficult for performing artists to give a live performance with the same impact, complexity, and nuance as a produced studio recording. Live performance production technology is now used almost exclusively to recreate studio albums exactly as they were recorded, and this approach has come to dominate the entertainment industry. In the attempt to reach superhuman levels of perfection and complexity, many elements that make live performances emotionally meaningful for audiences have been given less priority, or lost altogether. The mission of the work described in this dissertation is to reverse this trend by investigating methods of integrating technology and live music performance such that the technology allows for flexible musical expression, sound, and connection to the audience, while still enabling exciting, sophisticated, and "magical" production values. This dissertation identifies six objectives for the human-centric design and integration of technology in musical performance, and a methodology to support each objective. These have been developed, refined, and tested with artists and performers through a series of ten large-scale projects and approximately 300 individual performances.
Through this work, I demonstrate that it is possible to combine high-value production with interactive musical performance. We are now on the cusp of redefining live musical performance production as an art form just as Sergeant Pepper's Lonely Hearts Club Band redefined studio album production as an art form fifty years ago.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 277-285).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129893</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Centering peripheries : warning systems and disaster risk reduction planning on the island city</title>
<link>https://hdl.handle.net/1721.1/129892</link>
<description>Centering peripheries : warning systems and disaster risk reduction planning on the island city
Bui, Lily, 1987-
Warning systems play a crucial role in disaster events on islands. They enable timely communication of risk, bolstering capacity and counterbalancing the negative force exerted by hazards, exposures, and vulnerabilities that threaten island communities. Disasters frequently result in the breakdown of communication due to both structural (i.e., power outages, failed telecommunications equipment, aging infrastructure) and nonstructural issues (i.e., governance, socioeconomic inequity, language barriers). Through semi-structured interviews, participant observation, document review and spatial data visualization, this dissertation compares the hurricane warning systems of two U.S. island cities: San Juan, Puerto Rico, and Honolulu, O'ahu, Hawaii, during Hurricane Maria (2017) and Hurricane Lane (2018), respectively. The research questions are as follows: -- Under what conditions are warning systems successful or unsuccessful in island cities? -- What gaps in capacity can be observed in island city warning systems? -- How do these gaps affect disaster planning in the island context? This dissertation proposes a conceptual framework for evaluating warning systems that takes into consideration the temporal aspects of warning. The framework illustrates the ways in which warning and planning are interrelated, as well as how planning and warning processes take place over time. The dissertation argues that good planning is good warning, and good warning is shaped by good planning. It finds that short-term warning (i.e. forecasting) is usually able to achieve its goals successfully whereas long-term warning (i.e. planning around preparedness, generational knowledge and culture, myths and history, and recovery) is prone to various capacity gaps across the two cases.
The most significant finding is that O'ahu and Puerto Rico's planning and warning capacity grew after Hurricanes Lane and Maria, but the gap in capacity between both islands remains noteworthy. Ultimately, the planning gaps between both islands point toward other possible differential capacities for planning and warning on other U.S. islands.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 205-226).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129892</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Provably convergent anisotropic output-based adaptation for continuous finite element discretizations</title>
<link>https://hdl.handle.net/1721.1/129891</link>
<description>Provably convergent anisotropic output-based adaptation for continuous finite element discretizations
Carson, Hugh Alexander.
The expansion of modern computing power has seen a commensurate rise in the reliance on numerical simulations for engineering and scientific purposes. Output error estimation combined with metric-based mesh adaptivity provides a powerful means of quantifiably controlling the error in these simulations, for output quantities of interest to engineers and scientists. The Mesh Optimization via Error Sampling and Synthesis (MOESS) algorithm, developed by Yano for Discontinuous Galerkin (DG) discretization, is a highly effective method of this class. This work begins with the extension of the MOESS algorithm to Continuous Galerkin (CG) discretization, which requires fewer Degrees Of Freedom (DOF) on a given mesh compared to DG. The algorithm utilizes a vertex-based local error decomposition, and an edge-based local solve process in contrast to the element-centric construction of the original MOESS algorithm. Numerical results for linear problems in two and three dimensions demonstrate the improved DOF efficiency for CG compared to DG on adapted meshes. A proof of convergence for the new MOESS extension is then outlined, entailing the description of an abstract metric-conforming mesh generator. The framework of the proof is rooted in optimization, and its construction enables a proof of higher-order asymptotic rate of convergence irrespective of singularities. To the author's knowledge, this is the first such proof for a Metric-based Adaptive Finite Element Method in the literature. A three-dimensional Navier-Stokes simulation of a delta wing is then used to compare the new formulation to the original MOESS algorithm.
The required stabilization of the CG discretization is performed using a new stabilization technique: Variational Multi-Scale with Discontinuous sub-scales (VMSD). Numerical results confirm that VMSD adapted meshes require significantly fewer DOFs to achieve a given error level when compared to DG adapted meshes; these DOF savings are shown to translate into a reduction in overall CPU time and memory usage for a given accuracy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages [123]-131).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129891</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mass reduction : opportunities and structural optimization methods to reduce material use in mass timber buildings</title>
<link>https://hdl.handle.net/1721.1/129890</link>
<description>Mass reduction : opportunities and structural optimization methods to reduce material use in mass timber buildings
Mayencourt, Paul Louis.
Mass timber, a contemporary type of wood construction using engineered wood products, sourced from sustainably managed forests, has the potential to reduce the carbon emissions of the construction sector and act as a climate mitigation solution. Mass timber buildings made from renewable wood material can store carbon over their life cycles and support the regeneration of forests. Unfortunately, in the current market conditions in North America, a modern mass timber construction can cost 5-15% more than a conventional building, resulting in a low likelihood of wide adoption beyond green construction trends or environmentally conscious clients. There is, however, a missed opportunity in the way these buildings' structural systems are designed: up to 66% of structural material is under-utilized as a result of standardization because it is more convenient to manufacture. Structural optimization and new manufacturing techniques (i.e. digital fabrication) offer ways to design and manufacture customized structural elements with higher material efficiencies. This dissertation presents three new structural design methodologies to reduce cost through a reduction of material use in modern mass timber buildings. Each methodology addresses a standard structural element with high recurrence and low material efficiency. The first methodology examines the design of hollow cross-laminated timber panels. The second methodology was developed to design shaped structural timber beams. The last methodology expands the design of shaped beam elements to frame structures. The results demonstrate that a total cost reduction of 5-7% can be achieved from structural material savings of 16-26%. A reduction of the total cost of mass timber structures is then likely to increase their competitiveness against other structural solutions and drive a greater implementation of sustainable mass timber as a climate mitigation solution.
Thesis: Ph. D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture, February, 2019; Cataloged from student-submitted thesis.; Includes bibliographical references (pages 143-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129890</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning for understanding protein sequence and structure</title>
<link>https://hdl.handle.net/1721.1/129888</link>
<description>Machine learning for understanding protein sequence and structure
Bepler, Tristan (Tristan Wendland)
Proteins are the fundamental building blocks of life, carrying out a vast array of functions at the molecular level. Understanding these molecular machines has been a core problem in biology for decades. Recent advances in cryo-electron microscopy (cryoEM) have enabled high resolution experimental measurement of proteins in their native states. However, this technology remains expensive and low throughput. At the same time, ever growing protein databases offer new opportunities for understanding the diversity of natural proteins and for linking sequence to structure and function. This thesis introduces a variety of machine learning methods for accelerating protein structure determination by cryoEM and for learning from large protein databases. We first consider the problem of protein identification in the large images collected in cryoEM. We propose a positive-unlabeled learning framework that enables high accuracy particle detection with few labeled data points, both improving data quality and analysis speed. Next, we develop a deep denoising model for cryo-electron micrographs. By learning the denoising model from large amounts of real cryoEM data, we are able to capture the noise generation process and accurately denoise micrographs, improving the ability of experimentalists to examine and interpret their data. We then introduce a neural network model for understanding continuous variability in proteins in cryoEM data by explicitly disentangling variation of interest (structure) from nuisance variation due to rotation and translation. Finally, we move beyond cryoEM and propose a method for learning vector embeddings of proteins using information from structure and sequence. Many of the machine learning methods developed here are general purpose and can be applied to other data domains.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 183-200).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129888</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial experience in humans and machines</title>
<link>https://hdl.handle.net/1721.1/129880</link>
<description>Spatial experience in humans and machines
Zaman, Çağrı Hakan.
Spatial experience is the process by which we locate ourselves within our environment, and understand and interact with it. Understanding spatial experience has been a major endeavor within the social sciences, the arts, and architecture throughout history, giving rise to recent theories of embodied and enacted cognition. Understanding spatial experience has also been a pursuit of computer science. However, despite substantial advances in artificial intelligence and computer vision, there has yet to be a computational model of human spatial experience. What are the computations involved in human spatial experience? Can we develop machines that can describe and represent spatial experience? In this dissertation, I take a step towards developing a computational account of human spatial experience and outline the steps for developing machine spatial experience. Building on the core idea that we humans construct stories to understand the environment and communicate with each other, I argue that spatial experience is a type of story we tell ourselves, driven by what we perceive and how we act within the environment. Through two initial case studies, I investigate the relationships between stories and spatial experience and introduce the anchoring framework -- a computational model of constructing stories using emergent spatial, temporal, and visual relationships in perception. I evaluate this framework by performing a visual exploration study and analyzing how people verbally describe environments. Finally, I implement the anchoring framework for creating spatial experiences by machines.
I introduce three examples, which demonstrate that machines can solve visuo-spatial problems by constructing stories from visual perception using the anchoring framework. This dissertation contributes to the fields of design, media studies, and artificial intelligence by advancing our understanding of human spatial experience from a story perspective; providing a set of tools and methods for creating and analyzing spatial experiences; and introducing systems that can understand the physical environment and solve spatial problems by constructing stories.
Thesis: Ph. D. in Architecture: Design and Computation, Massachusetts Institute of Technology, Department of Architecture, February, 2019; Cataloged from student-submitted thesis.; Includes bibliographical references (pages 215-224).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129880</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new approach to predicting departure from Nucleate Boiling (DNB) from direct representation of boiling heat transfer physics</title>
<link>https://hdl.handle.net/1721.1/129879</link>
<description>A new approach to predicting departure from Nucleate Boiling (DNB) from direct representation of boiling heat transfer physics
Demarly, Etienne.
Accurate prediction of the Departure from Nucleate Boiling (DNB) type of boiling crisis is essential for the design of Pressurized Water Reactors (PWR) and their fuel. Recent advances in the instrumentation of boiling experiments via infrared thermometry have provided new insights on DNB physics that were ignored in past modeling efforts. The growing consensus from experimental studies that DNB is caused by a microhydrodynamic phenomenon at the boiling surface, instead of the classical macrohydrodynamic scale effects, has yet to be formulated into a working model and applied for simulation of high pressure conditions representative of PWRs. While the idea of an energy imbalance between wetted (in contact with liquid) and dry (in contact with vapor) areas has been suggested by multiple experimental groups [1-6] as a triggering mechanism for DNB, incorporating this new understanding into a functional boiling model remains an open challenge. Existing approaches to model DNB have not demonstrated the capability to generally "predict" the occurrence of the boiling crisis due to two main reasons: (1) the incomplete or inaccurate physical description of the phenomenon, and/or (2) the absence of consideration of parameters of influence, such as the contact angle or the cavity size distribution which affect DNB. Their modeling capabilities are typically limited to the thermal hydraulic conditions they were initially validated against and are not suitable when extrapolated to new untested conditions. The present work tackles the development of a new DNB model via a systematic approach that leverages understanding of the boiling crisis at the microscopic scale. From the hypothesized mechanism, a formulation is proposed and is then validated against high resolution data. The concept of heat partitioning for nucleate boiling offers a general framework where the total heat removed at the boiling surface is split between contributions from individual heat transfer mechanisms.
In this formulation, the boiling crisis is identified as the onset of instability in the energy balance at the boiling wall. The model formulation is extended to high pressure, representative of PWR conditions, and benchmarked against relevant DNB data, in comparison to the industry-standard W-3 correlation and CHF Lookup Table. The model is further applied to evaluate the potential benefits of surface property alterations (contact angle, cavity distribution) in advanced nuclear fuel concepts aiming at delaying the boiling crisis limit.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 140-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129879</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Realizing dignity : Dalits rights, land reform, and the learning of democratic citizenship</title>
<link>https://hdl.handle.net/1721.1/129874</link>
<description>Realizing dignity : Dalits rights, land reform, and the learning of democratic citizenship
Siddiqi, Faizan Jawed.
This dissertation addresses the questions: When, and how, are durable inequalities disrupted and democratic citizenship deepened in societies that are politically committed to liberal democracy but have substantial social inequalities? How do law and social movements influence and shape this process? I develop answers by examining a successful case of land reform in Surendranagar (Gujarat, India), which was the result of socio-legal mobilization spearheaded by a local human rights organization called Navsarjan Trust. My main argument is that by working with Dalits in Surendranagar, Navsarjan caseworkers helped articulate and popularize what philosopher Martha Nussbaum has called the "public myth of equality." I develop this main argument by developing responses to four questions. First, what was the role of emotions and reasons in shaping the organizational strategy and praxis of Navsarjan? I show that the robustness of Navsarjan's strategy came from strategic deployment of emotional energy. Second, is the land redistribution implementation better understood as "top-down" or "bottom-up?" By showing how under unanticipated circumstances, Navsarjan partnered with the local bureaucracy, I argue that the implementation process defies neat categorization as either. Third, how do constitutional expressive norms--abstract principles that are supposed to order and restrain the state--matter in shaping Dalit politics? I show that constitutional expressive norms matter fundamentally but contingently. Fourth, was law merely used instrumentally to mount resistance to upper caste oppression or did it also create a "moral deepening" within the Dalit community? I argue that while for many land reform beneficiaries law was a strategic choice, once they expressed loyalty to it, they publicly bound themselves to its moral commitment.
This was used strategically and purposively by Navsarjan caseworkers, who pressured community members to live up to these moral commitments in their social relations. In conclusion, I argue that the project of "realizing dignity" is likely to continue in Surendranagar because Navsarjan's efforts have not only created a narrative of hope in the Dalit community but also helped its members develop the skills, knowledge, and networks that are needed to put rights to work and achieve positive results.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 203-214).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129874</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The production of rurality : social and spatial transformations in the Tamil countryside 1915-65</title>
<link>https://hdl.handle.net/1721.1/129866</link>
<description>The production of rurality : social and spatial transformations in the Tamil countryside 1915-65
Rao Cavale, Karthik.
This dissertation advances a critique of the "planetary urbanization" thesis inspired by Henri Lefebvre's writings on capitalist urbanization. Theoretically, it argues that Lefebvrian scholars tend to conflate two distinct meanings of urbanization: a) urbanization understood simply as the territorial expansion of certain kinds of built environment associated with commodity production; and b) urbanization as the reproduction of capitalist modes of production of space on an expanded, planetary scale. Empirically, the dissertation constructs a social history of Tamil Nadu (India) between 1915 and 1965, and seeks to explain how 'rural' spaces were reproduced during a period marked by greater market penetration into the countryside, democratization and regime change, and the reorganization of community relations at multiple scales. The argument is developed in three inter-related but self-contained chapters. The second chapter focuses on how 'village communities' came to be imagined in political and academic discourse, through the economic writings of Gilbert Slater and N. G. Ranga. Whereas 19th century writers believed that the modern exchange economy posed an existential threat to village communities governed by 'custom', I show that Slater and Ranga inaugurated an empiricist approach that rendered village communities compatible with generalized commodity production. Focusing on the history of rural roads, the third chapter examines how the conceptual distinction between 'productive' and 'unproductive' infrastructure reproduced under-investment in the countryside.
Despite a significant democratization of local and provincial governments from the 1920s onwards, I demonstrate that the fiscal arrangements of colonial rule reproduced barriers against treating resources devoted to 'rural' infrastructure as capital investment, as opposed to a mere expenditure of revenue. In the final chapter, I demonstrate the resilience of non-capitalist moorings in actually existing village communities, and their importance in enabling the social mobility of excluded communities. This chapter constructs a detailed case study of a group of villages in southern Tamil Nadu, where land owned by upper caste landlords was transferred to lower caste tenants in the mid-20th century. It is through these contestations surrounding land rights that village communities were reproduced well into the 20th century in southern India.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 264-288).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129866</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pictorial noise</title>
<link>https://hdl.handle.net/1721.1/129638</link>
<description>Pictorial noise
Huang, Thomas S., 1936-
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1963.; Vita.; Includes bibliographical references (leaves 79-80).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129638</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bus route performance evaluation under stochastic considerations</title>
<link>https://hdl.handle.net/1721.1/129517</link>
<description>Bus route performance evaluation under stochastic considerations
Marguier, Philippe Henri Joseph.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1986.; Bibliography: leaves 255-264.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129517</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-linear meson theory of nuclear forces</title>
<link>https://hdl.handle.net/1721.1/129511</link>
<description>Non-linear meson theory of nuclear forces
Finkelstein, David Ritz, 1929-2016.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1953.; Includes bibliographical references (leaves 52-55).
</description>
<pubDate>Thu, 01 Jan 1953 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129511</guid>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The independent particle theory of nuclear scattering</title>
<link>https://hdl.handle.net/1721.1/129510</link>
<description>The independent particle theory of nuclear scattering
Lamarsh, John R.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1952.; Vita.; Bibliography: leaves 88-89.
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129510</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the asymptotic expansion of the trace of the the [sic] heat kernel for a subelliptic operator</title>
<link>https://hdl.handle.net/1721.1/129500</link>
<description>On the asymptotic expansion of the trace of the the [sic] heat kernel for a subelliptic operator
Xu, Chuan-yi.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1987.; Includes bibliographical references (leaves 89-91).
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129500</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays in international trade and uncertainty</title>
<link>https://hdl.handle.net/1721.1/129499</link>
<description>Three essays in international trade and uncertainty
Baldwin, Richard E.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 1986.; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129499</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancement of closure relations for annular flow modeling in CFD</title>
<link>https://hdl.handle.net/1721.1/129364</link>
<description>Advancement of closure relations for annular flow modeling in CFD
Agostinelli, Giulia.
In Boiling Water Reactors (BWRs), the presence of a liquid film in contact with the heated rod surface is crucial to ensure efficient heat removal and prevent the threatening occurrence of dryout. The accurate prediction of the complex multidimensional liquid film behavior in advanced BWR fuel assemblies is critical to guarantee improved reactor performance and safety. Multiphase-CFD (M-CFD) brings the ability to model the complex three-dimensional flow structures in annular flow regime [1], while physics-based constitutive equations are needed to accurately represent the phase interactions, particularly at the liquid film interface. The development of closure relations for droplet deposition and entrainment as well as wave-induced interfacial shear, is a major priority for the modeling of annular flow in M-CFD. In annular flow conditions, liquid is continuously exchanged at the interface between the bulk steam and the film on the walls. While liquid droplets deposit onto the film driven by turbulent diffusion, new ones are entrained from the waves appearing on the film surface. A modeling approach is proposed and assessed to represent the local subgrid-scale deposition in CFD, showing comparable results with existing integral correlations, and an average error of 30%. Available closures are also evaluated for their ability to represent entrainment in the CFD implementation. Finally, in order to drive the advancement of the representation of interfacial shear, as well as physics-based droplet entrainment, the work focuses on the analysis and modeling of disturbance waves.
The recent high resolution film measurements collected by Robers [2] are analyzed and leveraged to propose a physical representation of disturbance waves, which can be implemented into a complete model. The proposed model is successfully assessed against the experimental measurements of Sawant [3], while a large disagreement is found in comparison with the high pressure data evaluated at the RISO facility [4]. The new model predictions are consistent with existing integral correlations, demonstrating the need for further advancement of high pressure experiments with high resolution, necessary to drive more general representations. The complete set of closures is implemented in a commercial CFD software, and demonstrated using data from the Robers experiments. While the lingering limitations of the CFD implementation to transport thick films lead to overprediction of the local film thickness, the formulation shows promising performance towards more fundamental modeling of annular flow in M-CFD.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 118-124).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129364</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/129363</link>
<description>Essays in financial economics
Ernst, Thomas (Thomas H.)
Chapter 1 constructs a theoretical model of an ETF. Conventional wisdom warns that exchange-traded funds (ETFs) harm stock price discovery, either by "stealing" single-stock liquidity or forcing stock prices to co-move. Contra this belief, I develop a theoretical model in which investors with stock-specific information trade both single stocks and ETFs. While the ETF is payoff-redundant, asymmetric information and a position limit for informed traders combine to make the ETF non-redundant. Single-stock investors can access ETF liquidity by means of this tandem trading, and stock prices can flexibly adjust to ETF price movements. Effects are strongest when an individual stock has a large weight in the ETF and a large stock-specific informational asymmetry. I conclude that ETFs can provide single-stock price discovery. Chapter 2 empirically tests the predictions of the ETF model. Using high-resolution data on SPDR and the Sector SPDR ETFs, I exploit exchange latencies in order to show that investors place simultaneous, same-direction trades in both a stock and ETF. Consistent with my model predictions, effects are strongest when an individual stock has a large weight in the ETF and a large stock-specific informational asymmetry. Chapter 3 models how risk-averse investors trade when they are uncertain about the quality of their signal. I show that risk-averse traders can submit demands which are non-monotone in their signal. While their expected value for the asset may rise with stronger signals, so does the risk that the signal is noise. This leads to short-term behavior which is herding-like. Unlike herding, investors maintain a positive expected value for the asset, but their risk aversion leads them to take smaller positions, which has a similar slowing effect on price discovery.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129363</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on spillover effects in digital media</title>
<link>https://hdl.handle.net/1721.1/129362</link>
<description>Essays on spillover effects in digital media
Zhao, Michael, Ph. D., Massachusetts Institute of Technology.
In this thesis, I address spillover effects in digital media across three essays. In Chapter 1, I focus on estimating social spillovers in the consumption of online news. Exploiting regional rainfall as a natural experiment, I find strong statistical evidence of positive social spillovers in the consumption of online New York Times content. Specifically, I find that a 1% increase in aggregate viewership outside of a focal region causes viewership in that region to increase by approximately 0.34%. Further analysis shows that these spillover effects are skewed toward the most popular content and are driven by social media sharing rather than search engines or news aggregators. In Chapter 2, I estimate the spillover effect of paid advertising on organic mobile app installations. I analyze a spend shutoff "experiment" conducted by GameSpace, a major US-based mobile game developer. Using both difference-in-differences (DiD) and regression discontinuity in time (RDiT) approaches, I find, surprisingly, evidence that paid advertising boosts organic installs. Moreover, a two-way fixed effects panel regression indicates that every $100 in spend is associated with approximately 4 organic installs and 34.4 paid installs -- estimates that are quantitatively consistent with our RDiT results. In Chapter 3, I investigate spillover effects of statewide social distancing policies and reopenings during the COVID-19 pandemic on mobility behavior. Specifically, I find that if all alter states implement a shelter-in-place order, an ego county's mobility decreases by 15-20%. Alter state reopenings have a similarly large but opposite effect: once all alter states reopen, ego county mobility increases by 19-32%. These statewide policies also have a major impact on interstate travel: when a destination county is subject to a shelter-in-place order, its out-of-state traffic decreases by 13-18%, but only for distant counties. 
However, once reopened, traffic from both nearby and distant counties increases by 12-13%.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 123-127).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129362</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven decision making in online and offline retail</title>
<link>https://hdl.handle.net/1721.1/129361</link>
<description>Data-driven decision making in online and offline retail
Singhvi, Divya.
Retail operations have experienced a transformational change in the past decade with the advent and adoption of data-driven approaches to decision making. Granular data collection has enabled firms to make personalized decisions that improve customer experience and maintain long-term engagement. In this thesis we discuss important problems that retailers face in practice before, during, and after a product is introduced in the market. In Chapter 2, we consider the problem of estimating sales for a new product before retailers release the product to the customer. We introduce a joint clustering and regression method that clusters existing products based on their features as well as their sales patterns while estimating their demand. Further, we use this information to predict demand for new products. Analytically, we show an out-of-sample prediction error bound. Numerically, we perform an extensive study on real-world data sets from Johnson &amp; Johnson and a large fashion retailer and find that the proposed method outperforms state-of-the-art prediction methods, improving the WMAPE forecasting metric by 5%-15%. Even after the product is released in the market, a customer's decision to purchase the product depends on the right recommendation personalized for her. In Chapter 3, we consider the problem of personalized product recommendations when customer preferences are unknown and the retailer risks losing customers because of irrelevant recommendations. We present empirical evidence of customer disengagement through real-world data. We formulate this problem as a user preference learning problem. We show that customer disengagement can cause almost all state-of-the-art learning algorithms to fail in this setting. We propose modifying bandit learning strategies by constraining the action space upfront using an integer optimization model. We prove that this modification can keep significantly more customers engaged on the platform. 
Numerical experiments demonstrate that our algorithm can improve customer engagement with the platform by up to 80%. Another important decision a retailer needs to make for a new product is its pricing. In Chapter 4, we consider the dynamic pricing problem of a retailer who does not have any information on the underlying demand for the product. An important feature we incorporate is that the retailer also seeks to reduce the amount of price experimentation. We consider the pricing problem when demand is non-parametric and construct a pricing algorithm that uses piecewise linear approximations of the unknown demand function, and we establish that the proposed policy achieves a near-optimal regret rate of Õ(√T) while making O(log log T) price changes. Our algorithm allows for a considerable reduction in price changes from the previously known O(log T) price-change guarantee in the literature. Finally, once a purchase is made, a customer's decision to return to the same retailer depends on the product return policies and after-sales services of the retailer. As a result, in Chapter 5, we focus on the problem of reducing product returns. Working closely with one of India's largest online fashion retailers, we focus on identifying the effect of delivery gaps (the total time that customers have to wait for the product they ordered to arrive) and customer promise dates on product returns. We perform an extensive empirical analysis and run a large-scale randomized controlled trial (RCT) to estimate these effects. Based on the insights from this empirical analysis, we then develop an integer optimization model to optimize delivery speed targets.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 228-238).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129361</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Peeking and poking at a new quantum fluid : studies of gaseous Bose-Einstein condensates in magnetic and optical traps</title>
<link>https://hdl.handle.net/1721.1/129360</link>
<description>Peeking and poking at a new quantum fluid : studies of gaseous Bose-Einstein condensates in magnetic and optical traps
Stamper-Kurn, Dan M.(Dan Moses),1971-
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 2000.; Includes bibliographical references (leaves 253-272).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129360</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some factors affecting fluidity of metals</title>
<link>https://hdl.handle.net/1721.1/129359</link>
<description>Some factors affecting fluidity of metals
Ragone, David V.,1930-
Thesis (Sc.D.) Massachusetts Institute of Technology. Dept. of Metallurgy, 1953.; Vita.; Bibliography: leaves 82-84.
</description>
<pubDate>Thu, 01 Jan 1953 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129359</guid>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-ray diffraction by elastically deformed crystals</title>
<link>https://hdl.handle.net/1721.1/129358</link>
<description>X-ray diffraction by elastically deformed crystals
White, J. E.(James Edward),1918-2003.
Thesis (Ph.D.) Massachusetts Institute of Technology. Dept. of Physics, 1949.; Vita.; Bibliography: leaves 86-89.
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129358</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semi-infinite Homology of Floer spaces</title>
<link>https://hdl.handle.net/1721.1/129324</link>
<description>Semi-infinite Homology of Floer spaces
Suwara, Piotr.
This dissertation presents a framework for defining Floer homology of infinite-dimensional spaces equipped with a functional. This approach is meant to generalize the traditional constructions of Floer homologies, which mimic the construction of the Morse-Smale-Witten complex. To define Floer homology we use cycles modelled on infinite-dimensional manifolds with corners, as described by Maksim Lipyanskiy, where the key is to impose appropriate compactness and polarization axioms on the cycles. We separate and carefully inspect these two types of axioms, paying special attention to correspondences, generalizing the definition of a polarization as well as axiomatizing the notion of a family of perturbations. The latter is used to define an intersection pairing and maps induced on Floer homology by correspondences. Moreover, we prove suspension isomorphisms and prove that this Floer homology agrees with Morse homology for finite-dimensional manifolds with a Palais-Smale functional. Finally, we explain how to apply this framework to Seiberg-Witten-Floer theory, defining Floer homology groups associated to rational homology spheres and their spinᶜ-structures. Importantly, we prove that moduli spaces of solutions to the Seiberg-Witten equations induce maps on Floer homology in a functorial fashion.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 111-113).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129324</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polarization and toxicity in political discourse online</title>
<link>https://hdl.handle.net/1721.1/129320</link>
<description>Polarization and toxicity in political discourse online
Saveski, Martin.
The web and social media promised to fundamentally change the public sphere by democratizing access to information and lowering barriers to participation in public discourse. While some of these expectations have been met, we have also seen the negative effects of the web and social media, which amplify people's tendency to self-sort and polarize and provide a platform for uncivil public discourse. In this thesis, we focus on two phenomena in political discourse online: toxicity and polarization. In the first part of this thesis, we study media outlets' role in political polarization online, namely, how the language they use to promote their content influences the political diversity of their audience. We track the engagement with tweets posted by media outlets over three years (556k tweets, 104M retweets) and model the relationship between the tweet text and the political diversity of the audience. We build a tool that integrates our model and helps journalists craft tweets engaging to a politically diverse audience, guided by the model predictions. To test the real-world impact of the tool, we partner with the PBS documentary series Frontline and run a series of advertising experiments on Twitter. We find that in five out of the seven experiments, the tweets selected by our model were indeed engaging to a more politically diverse audience, illustrating the effectiveness of our tool. In the second part of this thesis, we study the relationship between structure and toxicity in political conversations on Twitter. We collect data on conversations prompted by tweets posted by news outlets and politicians running in the 2018 US midterm elections (1.18M conversations, 58.5M tweets). 
To investigate the link between structure and toxicity, we analyze the conversations at the individual, dyad, and group levels. We also consider two prediction tasks: (i) whether the conversation as a whole will become more or less toxic, and (ii) whether the next reply, posted by a specific user, will be toxic. We demonstrate that the structural characteristics of a conversation can be used to detect early signs of toxicity, at both the individual and the group level.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 165-176).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129320</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust genome editing with broad-targeting CRISPR enzymes</title>
<link>https://hdl.handle.net/1721.1/129319</link>
<description>Robust genome editing with broad-targeting CRISPR enzymes
Chatterjee, Pranam.
Programmable CRISPR enzymes are powerful and versatile tools for genome editing. They, however, require a specific protospacer adjacent motif (PAM) flanking the target site, which constrains the accessible sequence space for position-specific genome editing applications, such as base editing and precise gene insertion. For example, the standard Cas9 from Streptococcus pyogenes (SpyCas9) requires a PAM sequence of 5'-NG̲G̲-3' downstream of its RNA-programmed target, which limits genome editing applications to around 10% of all DNA sequences. To broaden the targeting range of CRISPR, we first bioinformatically discover and characterize a highly similar SpyCas9 homolog from Streptococcus canis (ScCas9) with a more minimal 5'-NNG̲-3' PAM specificity. Furthermore, we employ motifs from closely related Streptococcus orthologs to engineer an optimized variant of ScCas9 (Sc++) that simultaneously exhibits broadened targeting capability, robust DNA cleavage activity, and minimal off-targeting propensity. Next, we recombine the PAM-interacting domain of Streptococcus macacae Cas9 (SmacCas9) with SpyCas9, and subsequently introduce enhancing mutations to generate iSpyMac with an altered and efficient 5'-NA̲A̲-3' PAM preference. Together, these efforts expand the range of CRISPR nucleases to over 70% of DNA sequences, allowing for targeting of genomic loci that were previously inaccessible, including sequences within candidate genes for denser CRISPR screens and disease-related mutations that can now be fixed with genome editing architectures expressing our engineered variants.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 85-97).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129319</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Olfactory interfaces : toward implicit human-computer interaction across the consciousness continuum</title>
<link>https://hdl.handle.net/1721.1/129318</link>
<description>Olfactory interfaces : toward implicit human-computer interaction across the consciousness continuum
Amores Fernandez, Judith.
Human-computer interaction has traditionally focused on interfaces that provide explicit visual, auditory, or haptic feedback. This thesis proposes a new type of user interface that uses scent as an implicit, less conscious output that influences the person's cognition, and pairs it with implicit physiological information as the input to the system. Unlike other modalities such as sound or light, olfactory stimuli presented during sleep are less likely to awaken the user, and during the daytime they can be subtle enough not to distract the user from their primary activity. Therefore, scent offers a unique opportunity to create novel interfaces and applications that extend from wake to sleep states. This thesis provides a framework that conceptualizes how this type of liminal interface fits within the broader field of HCI. It posits that a continuum of possible human-computer interactions exists as the combination of 1) implicit and explicit inputs from the human to the computer, 2) explicit or implicit outputs from the machine to the human, and 3) the level of consciousness of the user (such as during sleep). The dissertation exemplifies this framework with closed-loop olfactory interfaces that provide scent feedback based on real-time user information during wakefulness and sleep. This research necessitated the development of new wearables, concepts, software, and designs that considerably improve on state-of-the-art olfactometers. Current scent technologies used in sleep laboratories are not portable and require the use of nasal masks, large olfactometers, and a minimum of 22 wire attachments to track physiological information. As a result, they are not suitable for mobile, daytime applications, nor for home usage by non-expert users. In contrast, the prototypes created for this thesis allow for wearable scent delivery and have been used by non-expert users in home settings during day and night. 
These devices have been validated through a series of user studies that show the usability of the prototypes and their significant effect on relaxation, sleep, and memory consolidation. This thesis presents the qualitative and quantitative results obtained in these studies using subjective reports and physiological monitoring.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 157-180).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129318</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>GraphIt : optimizing the performance and improving the programmability of graph algorithms</title>
<link>https://hdl.handle.net/1721.1/129317</link>
<description>GraphIt : optimizing the performance and improving the programmability of graph algorithms
Zhang, Yunming, Ph. D., Massachusetts Institute of Technology.
In recent years, large graphs with billions of vertices and trillions of edges have emerged in many domains, such as social network analytics, machine learning, physical simulations, and biology. However, optimizing the performance of graph applications is notoriously challenging due to irregular memory access patterns and load imbalance across cores. We need new performance optimizations to improve hardware utilization and require a programming system that allows domain experts to easily write high-performance graph applications. In this thesis, I will present our work on GraphIt, a new domain-specific language that consistently achieves high performance across different algorithms, graphs, and architectures, while offering an easy-to-use high-level programming model that supports both unordered and ordered graph algorithms. GraphIt decouples algorithms from performance optimizations (schedules) for graph applications to make it easy to explore a large space of cache, non-uniform memory access, load balance, and data layout optimizations. GraphIt achieves up to 4.8x speedup over state-of-the-art graph frameworks on CPUs, while reducing the lines of code by up to one order of magnitude.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 129-139).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129317</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistics in high dimensions without IID samples : truncated statistics and minimax optimization</title>
<link>https://hdl.handle.net/1721.1/129316</link>
<description>Statistics in high dimensions without IID samples : truncated statistics and minimax optimization
Zampetakis, Emmanouil.
A key assumption underlying many methods in Statistics and Machine Learning is that data points are independently and identically distributed (i.i.d.). However, this assumption ignores two recurring challenges in data collection: (1) censoring and truncation of data, and (2) strategic behavior. In this thesis we provide a mathematical and computational theory for designing solutions, or proving impossibility results, related to challenges (1) and (2). For the classical statistical problem of estimation from truncated samples, we provide the first statistically and computationally efficient algorithms that provably recover an estimate of the whole non-truncated population. We design algorithms both in the parametric setting, e.g. Gaussian Estimation and Linear Regression, and in the non-parametric setting. In the non-parametric setting, we provide techniques that bound the extrapolation error of multivariate polynomial log-densities. Our main tool for the non-parametric setting is a Statistical Taylor Theorem that is based on sample access to a probability distribution with a smooth probability density function. We believe that this theorem can have applications beyond the topic of Truncated Statistics. In the context of learning from strategic behavior, we consider the problem of min-max optimization, a central problem in Game Theory and Statistics which has recently found interesting applications in Machine Learning tasks such as Generative Adversarial Networks. Our first result is the PPAD-hardness of min-max optimization, which implies the first exponential separation between finding an approximate local minimum and finding an approximate local min-max solution. Our second result is a second-order algorithm that provably asymptotically converges to an approximate local min-max solution. Due to our PPAD-hardness result, an algorithm with a stronger guarantee is out of reach.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 367-380).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129316</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and optimization of occluder-based imaging</title>
<link>https://hdl.handle.net/1721.1/129315</link>
<description>Analysis and optimization of occluder-based imaging
Yedidia, Adam(Adam B.)
Occluders, i.e. opaque objects, can be used in the apertures of cameras to supplement or replace a traditional lens. This thesis describes a novel mutual information-theoretic framework for analyzing and comparing occluders. It justifies the use of uniformly-redundant arrays (URAs), a popular choice of occluding pattern in coded-aperture imaging. This thesis shows these patterns to be optimal under ideal conditions using this framework. Outside of those ideal conditions, this thesis proposes a method for selecting between different URAs and compares it to other occluder-selection methods, such as a greedy search, identifying under which conditions each is preferable. It also shows, analytically and empirically, the superiority of designed occluding patterns like URAs to random occluding patterns. The mutual information-theoretic framework is compared to a similar, MSE-minimizing framework. This thesis also considers the use of occluders in the context of non-line-of-sight (NLoS) imaging, used as "accidental cameras." The idea of the accidental camera is to opportunistically make use of occluding objects that happen to be available as ad-hoc coded apertures. Methods of this class, having originally been developed by Torralba and Freeman in 2012, are extended in this thesis to a wide variety of different scenarios, and used to solve formerly unsolved NLoS problems. These include imaging around a corner using the corner as the occluder, imaging a light-field of an unknown scene using a known, calibrated occluder, and imaging an unknown scene using an unknown occluder. The tools of the aforementioned framework are used to draw tentative conclusions about NLoS imaging systems, including resolution limitations due to longer light wavelengths and the quality of reconstructions across different systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 229-237).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129315</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware-aware efficient deep neural network design</title>
<link>https://hdl.handle.net/1721.1/129314</link>
<description>Hardware-aware efficient deep neural network design
Yang, Tien-Ju.
Deep neural networks (DNNs) deliver best-in-class accuracy on various artificial intelligence applications. However, the high accuracy comes at the cost that the computational complexity of DNNs is much higher than that of conventional methods. The resultant low efficiency leads to high carbon emissions and high financial cost, and hinders the deployment of DNNs on mobile devices. Although many methods have been proposed to improve DNN efficiency, most of them focus on optimizing proxy metrics, such as the number of weights and operations. Because these proxy metrics do not reflect the hardware properties, improvement in proxy metrics does not necessarily translate to improved hardware metrics, such as lower latency and energy consumption, which are of the utmost importance in practice. In this thesis, we present how to properly bring hardware into the loop while designing DNNs to address the problems mentioned above. We extensively study this research topic from different perspectives and propose comprehensive solutions that realize state-of-the-art efficient DNNs across different hardware platforms, applications, and use cases. We first propose three automated DNN design algorithms that directly optimize hardware metrics to push the frontier of efficient DNNs. Because evaluating hardware metrics directly on hardware devices can be slow, we then propose two fast methods for estimating hardware metrics to speed up the hardware-aware DNN design process for most use cases and make hardware metrics more accessible. 
Moreover, existing design approaches are mostly designed for digital accelerators and image classification, but different hardware and applications face different challenges due to their specific hardware properties and constraints. In view of this, we also explore designing efficient DNNs for a broad range of hardware and applications to demonstrate how hardware properties and constraints change the design approaches, and we propose corresponding solutions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 191-217).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129314</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating robustness of neural networks</title>
<link>https://hdl.handle.net/1721.1/129313</link>
<description>Evaluating robustness of neural networks
Weng, Tsui-Wei(Tsui-Wei Lily)
The robustness of neural networks to adversarial examples has received great attention due to its security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive metric of robustness. This thesis is dedicated to developing several robustness quantification frameworks for deep neural networks against both adversarial and non-adversarial input perturbations, including the first robustness score, CLEVER; the efficient certification algorithms Fast-Lin, CROWN, and CNN-Cert; and the probabilistic robustness verification algorithm PROVEN. Our proposed approaches are computationally efficient and provide high-quality robustness estimates and certificates, as demonstrated by extensive experiments on MNIST, CIFAR, and ImageNet.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 135-143).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129313</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smart energy solutions to smart grid challenges</title>
<link>https://hdl.handle.net/1721.1/129312</link>
<description>Smart energy solutions to smart grid challenges
Wang, Xuntuo(Nelson Xuntuo)
The smart micro-grid is an energy network, and researchers have spent decades on its technology development. Most existing solutions focus on dynamic operation and stability issues; however, the smart grid still faces these challenges: the electric energy network has a complicated system structure, high cost, and low energy efficiency in operation. In light of popular applications, this thesis analyzes the smart grid at the local (residential) level in two categories: the high-power network, an electric network dealing with electric vehicles (EVs), and the low-power network, an electronics network dealing with smart mobile phones. This thesis proposes a centralization approach to tackle these problems. On the electric network side, current micro-grid systems with high penetration of EVs use a bulky, low-efficiency system to connect solar panels and the utility with EV batteries. This work proposes a smart and efficient EV charger to serve as the hub and make the system more energy efficient. On the electronics network side, multi-use applications require multiple power converters; this work proposes a centralized wireless power transfer system to serve as the hub, charging multiple phones and low-power devices from grid power. Data mining is applied to the economic analysis of electric network operation, and optimization strategies are discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 181-191).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129312</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient metric representations for big data</title>
<link>https://hdl.handle.net/1721.1/129311</link>
<description>Efficient metric representations for big data
Wagner, Tal.
Contemporary datasets are often represented as points in a high-dimensional metric space. To deal with increasingly larger datasets, many algorithms rely on efficient or compressed representations of the induced metric. In this thesis, we study several fundamental aspects of efficient metric representations. Our results include: -- Fully determining the minimal number of bits required to represent all distances, up to a given precision, in a finite Euclidean or Manhattan metric space. -- A space-efficient data structure for Euclidean approximate nearest neighbor search in high dimensions. -- A sublinear time algorithm for low-rank approximation of distance matrices, which is optimal in the number of entries it reads of the input matrix. -- A fast algorithm for nearest neighbor search in the Optimal Transport distance. Previous bounds on Euclidean metric compression have been restricted to discretizing a classical dimensionality reduction theorem of Johnson and Lindenstrauss (1984). Our results improve over those bounds, thereby establishing an asymptotic advantage of generic sketching over dimension reduction. All of our algorithms are both proven analytically and implemented and validated empirically.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 201-213).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129311</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New hardness results for total search problems and non-interactive lattice-based protocols</title>
<link>https://hdl.handle.net/1721.1/129310</link>
<description>New hardness results for total search problems and non-interactive lattice-based protocols
Sotiraki, Aikaterini.
We investigate new hardness results for total search problems, namely search problems that always have a solution, and non-interactive lattice protocols. 1. A question of particular importance, raised together with the definition of the class of total search problems, is to identify complete problems that do not explicitly have a Turing machine or a circuit as part of their input. We provide natural complete problems for various TFNP subclasses. We first consider the classes PPP and PWPP, where our complete problems are inspired by lattice theory and lead towards answering important questions in cryptography. Additionally, we identify complete problems for classes that generalize the class PPA, called PPA[subscript q]. In this case, our results have connections to number theory, and in particular to the Chevalley-Warning theorem. 2. The Learning With Errors (LWE) problem, introduced by Regev, has had a profound impact on cryptography. We consider non-interactive protocols that are secure under the LWE assumption, such as non-interactive zero-knowledge and non-interactive key-exchange. We show new strategies and new barriers towards constructing such protocols.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 217-230).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129310</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convex optimization and machine learning for scalable verification and control</title>
<link>https://hdl.handle.net/1721.1/129309</link>
<description>Convex optimization and machine learning for scalable verification and control
Shen, Shen.
Having scalable verification and control tools is crucial for the safe operation of highly dynamic systems such as complex robots. Yet, most current tools rely on either convex optimization, which enjoys formal guarantees but struggles with scalability, or blackbox learning, which has the opposite characteristics. In this thesis, we address these contrasting challenges, individually and then via a rapprochement. First, we present two scale-improving methods for Lyapunov-based system verification via sum-of-squares (SOS) programming. The first method solves compositional and independent small programs to verify large systems by exploiting natural, and weaker than commonly assumed, system interconnection structures. The second method, even more general, introduces novel quotient-ring SOS program reformulations. These programs are multiplier-free, and thus smaller yet stronger; further, they are solved, provably correctly, via a numerically superior finite-sampling approach. The achieved scale is the largest to our knowledge (on a 32-state robot); in addition, tighter results are computed 2-3 orders of magnitude faster. Next, we introduce one of the first verification frameworks for partially observable systems modeled or controlled by LSTM-type (long short-term memory) recurrent neural networks. Two complementary methods are proposed. One introduces novel integral quadratic constraints to bound general sigmoid activations in these networks; the other uses an algebraic sigmoid to, without sacrificing network performance, arrive at far simpler verification programs with fewer, and exact, constraints. 
Finally, drawing from the previous two parts, we propose SafetyNet, which, via a novel search-space and cost design, jointly learns readily-verifiable feedback controllers and rational Lyapunov candidates. While leveraging stochastic gradient descent and over-parameterization, the theory-guided design ensures the learned Lyapunov candidates are positive definite and have "desirable" derivative landscapes, so as to enable direct and "high-quality" downstream verifications. Altogether, SafetyNet produces sample-efficient and certified control policies--overcoming two major drawbacks of reinforcement learning--and can verify systems that are provably beyond the reach of pure convex-optimization-based verifications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 127-135).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129309</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming languages for sound computation with continuous values</title>
<link>https://hdl.handle.net/1721.1/129308</link>
<description>Programming languages for sound computation with continuous values
Sherman, Benjamin(Benjamin Marc)
Increasingly, computer software is used to manipulate continuous values, in domains such as machine learning, solid modeling, and scientific computing. At the same time, such programs are increasingly integrated within safety-critical systems. Usually, these algorithms are designed and analyzed assuming real-number arithmetic, whereas they are implemented with finite-precision arithmetic, such as floating point. Because finite-precision arithmetic suffers from rounding error and overflow, answers may be far from the ideal results that assume real arithmetic. Furthermore, programming with continuous values extends far beyond the reals and real arithmetic, involving objects such as probability distributions and shapes, and sometimes involving differentiation. Reasoning about the accuracy of programs that use finite-precision arithmetic to manipulate these objects is even more difficult. My thesis proposes programming-language foundations for sound computation with continuous values, as well as datatypes and programs in these languages for representing and manipulating, in addition to real numbers, more complex objects such as probability distributions and shapes. Values may be computed to arbitrary precision, and equality of mathematical expressions corresponds to equivalence of programs in these languages (unlike floating point). [lambda][subscript C] is a language where types are generalized spaces and functions are (generalized) continuous maps. [lambda][subscript S] builds on [lambda][subscript C] to support differentiation, where types have differentiable structure and functions are (generalized) smooth maps. To demonstrate that these languages can serve as programming-language foundations for sound computation with continuous values, I implemented [lambda][subscript C] and [lambda][subscript S] and built prototype libraries within them for solid modeling and probabilistic computing. 
These libraries give a proof of concept that the foundations are sufficiently expressive for a broad class of tasks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 155-160).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129308</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ohmic contact to monolayer semiconductors</title>
<link>https://hdl.handle.net/1721.1/129307</link>
<description>Ohmic contact to monolayer semiconductors
Shen, Pin-Chun.
Scaling of silicon transistors is reaching its physical limits due to severe short-channel effects, threatening to end Moore's law. Advanced beyond-silicon electronic technology requires discoveries of both new channel materials and ultralow-resistance contacts. The aim of this thesis is to develop an understanding of the science and engineering in applying synthetic monolayer semiconductors as the channel materials in field-effect transistors and in achieving excellent electrical contacts to these emerging materials. Monolayer semiconductors such as MoS₂, WS₂ and WSe₂ not only offer atomically thin thicknesses for device minimization, but also embrace desirable physical properties such as a dangling-bond-free and atomically flat surface, a sizable bandgap, and a relatively heavy carrier effective mass, which enhance transistor performance at the atomic limit. We start by developing scalable methods for synthesizing high-quality monolayer MoS₂ that exhibits near-intrinsic characteristics and moderate electron mobility at room temperature. These developments improve the fundamental transport properties and reduce the unwanted impurity doping of the synthetic monolayer MoS₂ for electronic applications. Next, we further demonstrate a method to eliminate the detrimental defect states in monolayer MoS₂, which mitigates the defect-state-induced Fermi-level pinning at the metal-MoS₂ interface. The monolayer MoS₂ transistors fabricated through this technique exhibit a lowered Schottky barrier and a reduced contact resistance at the metal-MoS₂ interface. This study provides insight into the defect properties of MoS₂ that are fundamentally related to device performance. Finally, we provide a deeper understanding of the ohmic contact nature at metal-monolayer semiconductor interfaces and propose a new contact paradigm, gap-state saturation, to overcome the in-gap Fermi-level pinning and eventually eliminate the Schottky barrier. 
We experimentally apply this new ohmic contact technology to a variety of monolayer semiconductors and demonstrate high-performance monolayer semiconductor transistors. We have achieved record-low contact resistance approaching the quantum limit among all the metal-2D semiconductor interfaces. Our monolayer semiconductor transistors have also reached record-high ON-state current density among all the monolayer devices. We show that monolayer semiconductors can have a performance on par with the state-of-the-art silicon-based technology, reaching the goal of next-generation transistor technologies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 204-220).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129307</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal inference : a Tensor's perspective</title>
<link>https://hdl.handle.net/1721.1/129306</link>
<description>Causal inference : a Tensor's perspective
Shen, Dennis.
Quantifying the causal effect of an intervention is a ubiquitous problem that spans a wide range of applications. Typically, this quantity is measured through the difference in outcomes under treatment (e.g., novel drug) and control (e.g., placebo). However, only one outcome can ever be revealed - this is the fundamental challenge in causal inference. In order to overcome this obstacle, there have been two main types of studies: experimental (ES) and observational (OS). While the former conducts carefully designed experiments, the latter utilizes observed data. In this thesis, we reinterpret the classical potential outcomes framework of Rubin through the lens of tensors. Formally, each entry of the potential outcomes tensor is associated with a tuple of entities; namely, the measurement (e.g., time), unit (e.g., patient type), and intervention (e.g., drug). Subsequently, each study can be characterized by a unique sparsity pattern, which allows us to translate the age-old problem of estimating counterfactuals into one of tensor estimation. As an added benefit, our tensor formulation also opens the door to discussions about the computational and statistical trade-offs of causal inference methods, a conversation (to the best of our knowledge) that has largely not yet been had. Ultimately, this novel perspective, coupled with basic principles of the popular synthetic control method for OSs, enables us to provably estimate counterfactual potential outcomes for every unit under all treatments and control with low sample and computational complexity. As a result, we can customize treatment plans for every unit in a computationally tractable and data-efficient manner. Pleasingly, we show that this result has implications for what-if scenario planning, drug discovery, and personalized, data-efficient randomized control trials. Methodically, we furnish a data-driven hypothesis test to check when our algorithm can reliably recover the underlying tensor. 
The key technical contribution of this thesis advances the state-of-the-art analysis of principal component regression.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 177-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129306</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive data analytics using GPUs</title>
<link>https://hdl.handle.net/1721.1/129305</link>
<description>Interactive data analytics using GPUs
Shanbhag, Anil(Anil Atmanand)
Modern GPUs provide an order-of-magnitude greater memory bandwidth compared to CPUs. In theory, this means data processing systems can process O(TB) of data with sub-100ms latency, thereby enabling interactive response times on analytical SQL queries. However, the massively parallel architecture of GPUs requires rearchitecting in-memory data analytics systems in order to achieve optimal performance. This thesis describes how we adapted and redesigned in-memory data analytics systems to better exploit the GPU's memory and execution model. We present Crystal, a library of building blocks that can be used for writing high-performance SQL query implementations for GPUs. We use Crystal to implement basic SQL query operators and an analytical benchmark. We present theoretical models based on memory bandwidth as the critical bottleneck for query performance and show that implementations using Crystal are able to achieve these theoretical limits. We also present a study of the fundamental performance characteristics of GPUs and CPUs for database analytics. Our analysis shows that using modern GPUs vs. CPUs can lead to a runtime gain equal to 1.5x the bandwidth ratio of GPU to CPU (25x in our setup) and be 4x more cost-effective than CPUs. Finally, we used Crystal's design principles to develop massively parallel variants of two classic sequential algorithms: top-k and bit-packing-based compression. Bitonic Top-K is a top-k algorithm based on bitonic sort that is 4x faster than previous approaches. GPU-FOR is a compression format that can be decompressed efficiently in parallel and can be used to fit more data into the limited GPU memory. In summary, this thesis makes the case for using GPUs as the primary execution engine for interactive data analytics, and shows that such implementations are efficient and practical.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 157-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129305</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical privacy and security</title>
<link>https://hdl.handle.net/1721.1/129304</link>
<description>Statistical privacy and security
Salamatian, Salman.
The tremendous increase of personal data being shared online, along with the rapid development of data mining techniques, poses a serious threat to privacy and security, as evidenced by the numerous privacy and security scandals of the past several years. At their core, the new privacy and security challenges that the big data revolution poses are due to the unclear boundary between data shared willingly, which is deemed not sensitive, and the sensitive data that one wants to keep private. Traditional tools in security and privacy provide protection by encrypting personal data, but this method is not sustainable when it is unclear whether, or how much, the data is sensitive to begin with. The premise of this thesis is that information-theoretic tools and insights are useful to identify how releasing personal data can impact privacy and security, and can serve as a design driver for building privacy-preserving and security-enhancing systems. In particular, we focus on two types of attacks. In the first, we consider how a user may release some personal data (e.g. movie ratings) in exchange for a service (e.g. movie recommendations), while simultaneously not leaking information about a sensitive attribute correlated with the personal data (e.g. political orientation). To this end, we design a privacy framework which captures the inference threat of releasing data, and use the latter to find optimal privacy-preserving mechanisms, which allow the user to trade utility for privacy. In the second part, we look at brute-force attacks where an adversary attempts to break into a password-secured system by querying potential passwords. 
Users of such systems are likely to generate poor passwords, re-use passwords across systems, and are especially susceptible to targeted attacks if their passwords are correlated with personal data that is available online. We consider various setups under which brute-force attacks occur, and analyze the security guarantees one obtains via Guesswork - an information-theoretic quantity that is a surrogate for the computational effort that the attacker has to perform. The analysis of both attacks reveals that data is a precious commodity which should be handled with care, and shows how the entire data acquisition and communication pipeline can come under attack. Additionally, Information Theory and Statistics offer tools complementary to the existing ones, while still capturing the fundamentals of the security and privacy threats in the digital age.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 143-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129304</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sigma shapes : parametric shape estimation for view and interaction planning</title>
<link>https://hdl.handle.net/1721.1/129303</link>
<description>Sigma shapes : parametric shape estimation for view and interaction planning
Prentice, Samuel J.(Samuel James)
It is often useful for robots to actively build a model of an unknown 3D scene to enable tasks such as manipulation, mapping and object search. To do so requires choosing a representation to accumulate spatial knowledge, and strategies for selecting actions to acquire relevant spatial information and interact with objects. To achieve reliable performance, the data representation and planning algorithm should take into account uncertainty in the robot's belief of the world, to mitigate the effects of sensor noise and promote informative and robust actions. In this thesis we develop a spatial representation based on geometric shapes that maintains a probability distribution over shape parameters. By augmenting the representation with uncertainty, the robot can reason over object-level information about the shape parameters. Our approach enables the shape of novel objects to be inferred online from a sequence of views, and supports predicting viewpoint information and grasp robustness.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis. Pages 179 and 180 blank.; Includes bibliographical references (pages 171-178).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129303</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image segmentation for highly variable anatomy : applications to congenital heart disease</title>
<link>https://hdl.handle.net/1721.1/129302</link>
<description>Image segmentation for highly variable anatomy : applications to congenital heart disease
Pace, Danielle F.
Automated segmentation of medical images can facilitate clinical tasks in diagnosis, patient monitoring, and surgical planning. However, current methods either rely on explicit correspondence detection, or use machine learning techniques that require a large collection of fully annotated and representative images. Neither of these approaches is suitable when anatomical variability is high and labeled data is limited. In this thesis, we formulate new interactive segmentation methods and evaluate their applicability to congenital heart disease, which involves a wide range of cardiac malformations and topological changes and for which few image analysis methods have been previously developed. We begin by describing the new imaging datasets that we have created to support our research in congenital heart disease. Next, we show that image patches can be used to exploit manual segmentations made on a small set of slice planes in order to automatically segment the rest of an image, and investigate the potential of active learning to automatically solicit user input. Third, we develop an iterative segmentation model that can be accurately learned from small datasets which do not necessarily include the same pathologies as a new image to be segmented, and demonstrate that our model better generalizes to patients with the most severe heart malformations. Ultimately, the methods developed here take a step towards bringing the benefits of medical image analysis to challenging clinical applications involving large anatomical variability and small datasets.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 125-144).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129302</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A design methodology for computer architecture parameterized by robot morphology</title>
<link>https://hdl.handle.net/1721.1/129301</link>
<description>A design methodology for computer architecture parameterized by robot morphology
Neuman, Sabrina M.
Robots that safely interact with people are a promising solution to address societal challenges from elder care to hazardous work environments. A key computational barrier to the robust autonomous operation of complex robots is running motion planning online at real-time rates and under strict power budgets. A performance gap of at least an order of magnitude has emerged: robot joint actuators respond at kHz rates, but promising online optimal motion planning for complex robots is limited to 100s of Hz by state-of-the-art software. While domain-specific hardware accelerators have improved the power and performance of other stages in the robotics pipeline such as perception and localization, relatively little work has been done for motion planning. Moreover, designing a single accelerator is not enough. It is essential to map out design methodologies to keep the development process agile as applications evolve. We address these challenges by developing a generalized design methodology for domain-specific computer architecture parameterized by robot morphology. We (i) describe the design of a domain-specific accelerator to speed up a key bottleneck in optimal motion planning, the rigid body dynamics gradients, which currently consumes up to 90% of the total runtime for complex robots. Acceleration is achieved by exploiting features of the robot morphology to expose fine-grained parallelism and matrix sparsity patterns. We (ii) implement this accelerator on an FPGA for a manipulator robot, to evaluate the performance and power efficiency compared to existing CPU and GPU solutions. We then (iii) generalize this design to prescribe an algorithmic methodology to design such accelerators for a broad class of robot models, fully parameterizing the design according to robot morphology. This research introduces a new pathway for cyber-physical design in computer architecture, methodically translating robot morphology into accelerator morphology. 
The motion planning accelerator produced by this methodology delivers a meaningful speedup over off-the-shelf hardware. Shrinking the motion planning performance gap will enable roboticists to explore longer planning horizons and implement new robot capabilities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 113-122).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129301</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards automated construction of compiler optimizations</title>
<link>https://hdl.handle.net/1721.1/129300</link>
<description>Towards automated construction of compiler optimizations
Mendis, Thirimadura Charith Yasendra.
Compilers are the workhorse that bridges the gap between human-readable and machine-executable code. The ultimate goal of a compiler is to find a legal translation that provides the most optimized machine code sequence for a given hardware platform. However, the diversity of modern programs, along with the advent of new and complex hardware architectures, has strained the capabilities of current compilers, making the development and maintenance of automatic program optimizations in compilers exceedingly challenging. Despite this, compiler optimizations are still hand-crafted, using technology that existed decades ago. In this thesis, we show how to move towards more automated methods of constructing compiler optimizations, using compiler auto-vectorization as an example. Modern compilers perform vectorization using hand-crafted algorithms that typically only find local solutions under linear performance models. First, we present goSLP, a framework that uses integer linear programming to find a globally pairwise-optimal statement packing strategy to achieve superior vectorization performance. Next, we discuss how we can automatically learn how to vectorize. We present Vemal, the first end-to-end learned vectorizer, which eliminates the need for hand-writing an algorithm. The key is to formulate the optimization problem as a sequential decision-making process in which all steps guarantee the correctness of the resultant generated code. Not only does Vemal reduce the need for expert design and heuristics, but it also outperforms hand-crafted algorithms, reducing developer effort while increasing performance. Finally, we show how we can use data to learn non-linear performance models that are better than the complex and incorrect hand-crafted models designed by experts, to enhance the decision procedure used in Vemal. We present Ithemal, the first learned cost model for predicting the throughput of x86 code. 
Ithemal more than halves the error rate of complex analytical models such as Intel's IACA. Both Vemal and Ithemal achieve state-of-the-art results and pave the way towards developing more automated and modern compiler optimizations with minimal human burden.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 153-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129300</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediate lower bounds and their relationship with complexity theory</title>
<link>https://hdl.handle.net/1721.1/129299</link>
<description>Intermediate lower bounds and their relationship with complexity theory
McKay, Dylan(Dylan Mathis)
While Complexity Theory has been centered around several major open problems about the relationships between complexity classes, showing resource lower bounds which amount to much weaker versions of these separations still seems to be challenging. We examine some of these lower bounds and techniques for showing them. We improve the techniques of Beame (1989) and use these results to show time-space lower bounds for various circuit problems such as #SAT and a version of SAT for which we are required to give witnesses to satisfiable formulas. We reveal a surprising significance of lower bounds of this kind by presenting their relationships with long-standing questions in Complexity Theory, notably by showing that certain weak lower bounds against the Minimum Circuit Size Problem (MCSP) and other compression problems would imply strong complexity class separations such as P [not equal sign] NP or NP [not subset symbol] P/poly. We further explore techniques for proving lower bounds as well as the connections between lower bounds and the big picture of Complexity Theory. In doing so, we explore the technique of proving fixed polynomial circuit size lower bounds through improvements to the Karp-Lipton theorem and give surprising evidence that improvements to the Karp-Lipton theorem are (in some sense) the "only" way to prove fixed polynomial size circuit lower bounds against P[superscript NP].
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 127-133).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129299</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing clinically useful risk stratification models</title>
<link>https://hdl.handle.net/1721.1/129298</link>
<description>Developing clinically useful risk stratification models
Myers, Paul Daniel.
When a patient presents to a hospital with symptoms of cardiovascular disease, one of the first courses of action is to estimate the patient's risk of an adverse outcome. The process of categorizing patients by risk level, known as risk stratification, is an essential step in assigning appropriate therapy. Risk stratification models, which aid clinicians in this task, consist of feature sets that are combined by an algorithm to yield a score. In addition to the performance of the model, a key factor in model development is clinician acceptance of the score. One way to bolster clinician acceptance is to choose parsimonious feature sets to be used in risk scores that are convenient to integrate into the clinical workflow. A second consideration is establishing clinician trust in the model predictions. This is particularly important when using models that are difficult to explain to clinicians and when it is not straightforward to identify failure modes for the model. Providing clinicians with a measure of how much to trust a given prediction from a model is one way to encourage the use of models that are difficult to interpret. In this thesis, we consider the problem of developing clinically useful risk models using real clinical data. We begin by discussing how to choose clinical variables in a data-driven fashion in the context of acute coronary syndrome. We present a risk score that can accommodate a variable number of inputs and demonstrate that it has superior performance to the Global Registry of Acute Coronary Events (GRACE) risk score, particularly on the difficult-to-risk-stratify low-risk patients (AUC 0.754 vs. 0.688 for the GRACE score, p &lt; 0.007).
We then discuss the development of a risk score for aortic stenosis (AS) using both data-driven feature selection and expert opinion. We show that the model performs well on patients with moderate to severe aortic stenosis (AUC 0.74), as well as on the difficult-to-risk-stratify low-gradient severe AS subgroup (2-5 year hazard ratios ≥ 3.3, p &lt; 0.05). Finally, we develop a method to identify unreliable predictions in clinical risk models and show, using the GRACE dataset, that we can identify subgroups of poor model performance to aid in bolstering clinician trust of risk models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 103-110).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129298</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Network system optimization with reinforcement learning : methods and applications</title>
<link>https://hdl.handle.net/1721.1/129297</link>
<description>Network system optimization with reinforcement learning : methods and applications
Mao, Hongzi.
Networked systems rely on many control and decision-making algorithms. Classical approaches to designing and optimizing these algorithms, developed over the last four decades, are poorly suited to the diverse and demanding requirements of modern networks and applications. In the classical paradigm, the system designer assumes a simplified model of the system, specifies some low-level design goals, and develops a fixed algorithm to solve the problem. However, as networks and applications have grown in complexity and heterogeneity, designing fixed algorithms that work well across a variety of conditions has become exceedingly difficult. As a result, classical approaches often sacrifice performance for universality (e.g., TCP congestion control), or force designers to develop point solutions and specialized heuristics for each environment and application. In this thesis, we investigate a new paradigm for solving challenging system optimization problems. Rather than design fixed algorithms for each problem, we develop systems that can learn to optimize performance on their own using modern reinforcement learning. In the proposed approach, the system designer does not develop specialized heuristics for low-level design goals using simplified models. Instead, the designer architects a framework for data collection, experimentation, and learning that discovers the low-level actions that achieve high-level resource management objectives automatically. We use this approach to build a series of practical network systems for important applications, including context-aware control protocols for adaptive video streaming, and schedulers for data-parallel and large-scale data processing workloads. We also use the insights from these systems to identify common problem structures and develop new reinforcement learning techniques for designing robust data-driven network systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 165-192).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129297</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational design and fabrication of portable MRI systems</title>
<link>https://hdl.handle.net/1721.1/129296</link>
<description>Computational design and fabrication of portable MRI systems
McDaniel, Patrick Christopher.
In this work, I have developed techniques for designing portable MRI scanners and have applied them to three portable systems for brain imaging. I first describe the procedure for designing a portable, low-field MRI scanner - in particular, how the constraints of compactness and portability affect the design of all system components (magnets, coils, sequences, RF pulses, and reconstruction schemes). I then describe the design of the principal hardware components of a portable MRI system: the B₀ magnet, the magnet shim array, the gradient coils, and the RF coils. This work makes novel use of numerical and computational tools for both sub-system design and physical construction. I next apply this paradigm to the design of gradient coils and a shim magnet array for a portable whole-brain MRI scanner and demonstrate in vivo adult brain images. Finally, I describe two novel MRI scanners designed ab ovo using the approach described herein. The former is the "MR Cap", a single-sided MRI device designed for imaging over a reduced 8 x 8 x 3 cm³ region of the adult brain; the latter is the "Helmet MRI", a whole-brain scanner optimized specifically for the head geometry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 191-212).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129296</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The control of complex double emulsions through reactive interfaces</title>
<link>https://hdl.handle.net/1721.1/129295</link>
<description>The control of complex double emulsions through reactive interfaces
Zentner, Cassandra A. (Cassandra Aileen)
This thesis summarizes the use of interfacial reactions, responsive surfactants, and specific tuning of interfacial tensions to discover novel ways to manufacture and manipulate dynamic double emulsion systems. In Chapter 1, we introduce emulsions and surfactants. We describe the fabrication of emulsions and the creation of stimuli-responsive systems. Finally, we explore the relatively recent research into dynamic double emulsions, which is explored further in this thesis. In Chapter 2, we demonstrate the use of selective, interfacial imine formation at emulsion interfaces for the in situ formation of surfactants for novel manufacturing of emulsions and biosensors, dynamic morphology changes through perturbing imine equilibria, and the destruction of emulsions with imine formation at the emulsion-solid interface. In Chapter 3, we introduce surfactants that localize at the internal interface of double emulsions, which enables the incorporation of liquid crystals into dynamically reconfigurable complex emulsions. Further, we demonstrate that isomerization of a photo-responsive azobenzene surfactant at the internal interface of liquid crystal double emulsions results in reversible morphology change. In addition, isomerization of the azobenzene internal surfactant results in overall droplet movement, both orientational and translational. In Chapter 4, we describe how interfacial confinement of magnetic nanoparticles to emulsion interfaces, accomplished through interfacial imine formation, imparts ferromagnetic behavior to dynamic double emulsions comprising isotropic solvents. Further, we demonstrate that liquid crystal double emulsions enable precise assembly of magnetic nanoparticles at the emulsion interface and can produce droplet movement and reorganization of the director field. In Chapter 5, we synthesize nucleophile-responsive surfactants with Michael acceptor functionalities to create responsive single and double emulsions.
We demonstrate that the emulsion systems are responsive to both small nucleophiles and polymeric nanoassemblies. Further, we describe the use of an unrelated stimulus, light, to trigger a cascade that results in emulsion responses.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 196-207).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129295</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast carotenoid-mediated dynamics in the light-harvesting complex of green plants</title>
<link>https://hdl.handle.net/1721.1/129294</link>
<description>Ultrafast carotenoid-mediated dynamics in the light-harvesting complex of green plants
Son, Minjung, Ph. D., Massachusetts Institute of Technology.
Photosynthetic organisms, including green plants, have mastered the ability to efficiently capture sunlight and rapidly transport the absorbed energy to drive life-sustaining chemical reactions. At the same time, they have evolved mechanisms to dynamically regulate photosynthetic efficiency to protect against photodamage in excess sunlight, by dissipating excess energy as heat. Both light harvesting and photoprotective dissipation take place within pigment-binding membrane proteins called light-harvesting complexes, and originate from the interactions of the electronic transitions of the bound pigments that span the visible solar spectrum. In particular, carotenoids, the accessory light-harvesting pigments in plants, are thought to play a central role in mediating both light-harvesting and dissipative pathways by leveraging their unique electronic structure with one or more dark states and extreme sensitivity to local environment. However, carotenoid-mediated photophysics in plants has remained poorly understood due to the limited spectral bandwidths of previous ultrafast measurements as well as the use of unphysiological experimental environments. In this thesis, I describe how I overcome both limitations by combining an ultra-broadband ultrafast spectroscopic technique with improved spectral bandwidth and a near-physiological model membrane to house the light-harvesting complexes with conformations close to their native ones. In Chapter 2, I describe the development of a high-sensitivity ultrabroadband two-dimensional electronic spectrometer that enables interrogation of the energetics and dynamics of the broad range of electronic transitions of photosynthetic light-harvesting complexes across the visible solar spectrum. Then, I discuss the application of this technique to studies of the light-harvesting and dissipative photophysical pathways in light-harvesting complex II (LHCII), the principal light-harvesting complex in green plants.
A series of previously uncharacterized carotenoid-mediated light-harvesting and dissipative pathways are uncovered. In Chapter 3, I find that one of the four carotenoids bound within LHCII plays an integral role in mediating the rapid and efficient energy transfer in plants, partially via a previously debated but unobserved dark state. In Chapters 4-6, I discuss the direct observation of a predicted but uncharacterized dissipative energy transfer pathway involving a short-lived carotenoid dark state. The sensitivity of dissipation to local environment, carotenoid composition, and protein-protein interaction is explored within the near-physiological model membrane platform.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 209-233).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129294</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot manipulation with learned representations</title>
<link>https://hdl.handle.net/1721.1/129293</link>
<description>Robot manipulation with learned representations
Manuelli, Lucas, Ph. D., Massachusetts Institute of Technology.
We would like to have robots which can perform useful manipulation tasks in real-world environments. This requires robots that can perceive the world with both precision and semantic understanding, methods for communicating desired tasks to these systems, and closed loop visual feedback controllers for robustly executing manipulation tasks. This is hard to achieve with previous methods: prior work has not enabled robots to densely understand the visual world with sufficient precision to perform robotic manipulation or endowed them with the semantic understanding needed to perform tasks with novel objects. This limitation arises partly from the object representations that have been used, the challenge in extracting these representations from the available sensor data in real-world settings, and the manner in which tasks have been specified. This thesis presents a family of approaches that leverage self-supervision, both in the visual domain and for learning physical dynamics, to enable robots to perform manipulation tasks. Specifically, we (i) develop a pipeline to efficiently annotate visual data in cluttered and multi-object environments, (ii) demonstrate the novel application of dense visual object descriptors to robotic manipulation and provide a fully self-supervised robot system to acquire them, (iii) introduce the concept of category-level manipulation tasks and develop a novel object representation based on semantic 3D keypoints along with a task specification that uses these keypoints to define the task for all objects of a category, including novel instances, (iv) utilize our dense visual object descriptors to quickly learn new manipulation skills through imitation and (v) use our visual object representations to learn data-driven models that can be used to perform closed loop feedback control in manipulation tasks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 177-187).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129293</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>De novo discovery of synthetic peptide binders to protein-protein interfaces</title>
<link>https://hdl.handle.net/1721.1/129292</link>
<description>De novo discovery of synthetic peptide binders to protein-protein interfaces
Quartararo, Anthony James.
Protein-protein interactions (PPIs) play crucial roles in mediating normal cellular physiology, but their modulation has been historically challenging. PPIs tend to be intractable to small molecule inhibition, due to their wide and relatively featureless interfaces, and biologics are generally not viable for the approximately two-thirds of PPIs that take place in the intracellular milieu. Therefore, this class of target has been in many circles deemed undruggable. Peptides are an emerging therapeutic modality for disrupting PPIs. With proper engineering, they can engage proteins over large surface areas and in some cases be modified to access the cytosol. PPI-disrupting peptides are often discovered from highly diverse combinatorial libraries, through either genetic or chemical means. Genetically encoded approaches can reliably investigate enormous libraries (10⁸-10¹³ members), typically via selection. However, despite progress in this area, these libraries are generally limited to natural chemical space. Peptides identified from such approaches therefore require extensive engineering to improve proteolytic stability and promote cell penetration, at the potential cost of potency. Synthetic libraries, on the other hand, are highly amenable to non-canonical amino acid incorporation and a wide variety of chemical modifications. However, these libraries are typically examined by screening, which in practice limits the diversities that can be explored to ~10⁶. In this thesis, a magnetic bead-based affinity selection-mass spectrometry (AS-MS) workflow was developed to interrogate fully randomized, chemically accessed peptide libraries comprising up to 10⁸ members with high fidelity. This approach takes advantage of recent advances in nano-liquid chromatography-tandem mass spectrometry (nLC-MS/MS)-based peptide sequencing, which facilitates high-confidence decoding of complex peptide mixtures.
Starting with a model selection target, an anti-hemagglutinin monoclonal antibody, it was demonstrated that high enrichments of true binders could be achieved from a library comprising 10⁶ members. The number of binders identified scaled in proportion to library diversity, as diversity was then increased from 10⁶ to 10⁸. Beyond 10⁸, the complexity of isolated pools from single-pass selections became too great for reliable decoding. These results were applied to selections against biomedically relevant targets, enabling the identification of p53-like binders to the oncogenic ubiquitin ligase MDM2, and a family of low-nanomolar affinity, α/β-peptide-based binders to 14-3-3. Both sets of binders engage these targets at PPI epitopes. Finally, machine learning methods were developed to distinguish nonspecific from true binders identified by AS-MS, which we anticipate will greatly streamline future discovery efforts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129292</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Re-targeting of anthrax toxin binding for immunomodulation and targeted cancer therapy</title>
<link>https://hdl.handle.net/1721.1/129291</link>
<description>Re-targeting of anthrax toxin binding for immunomodulation and targeted cancer therapy
Loftis, Alexander Robert.
The intracellular delivery of cytotoxic proteins is a longstanding goal in drug development. This is challenging due to biological membranes, which prevent facile entry of macromolecules into the cellular cytosol. Anthrax toxin, derived from B. anthracis, is a protein delivery platform. Recent work has established anthrax as a non-toxic and reliable method to deliver a variety of proteins and other molecules to the cytosol of cells in a receptor-directed manner. However, the native anthrax receptors are the ubiquitously expressed membrane proteins TEM8 and CMG2, limiting the therapeutic application of wild-type anthrax. Moreover, unmodified anthrax is immunogenic, which limits the efficacy and number of doses that can be administered. In this thesis, anthrax was re-targeted to cells of therapeutic interest. In particular, the pore-forming agent of anthrax, protective antigen (PA), was fused to single-chain variable fragments targeting two pancreatic cancer cell receptors, human endothelial growth factor receptor and carcinoembryonic antigen. This design enabled pancreatic cancer cell-specific delivery of two highly toxic protein cargoes, including a Ras protease which rapidly degrades cytosolic Ras protein. The mutant protective antigen-scFv provides a facile and generalizable strategy for cytosolic delivery of proteins to any number of membrane receptors which can be targeted by known scFvs and, by extension, known antibodies. To mitigate anthrax's inherent immunogenicity, anthrax was re-targeted to mouse erythrocytes. Recent work has established that targeting of antigens to erythrocytes can lead to decreased antigen-specific inflammatory responses. To discover a reliable method to direct a molecule, such as protective antigen from anthrax toxin, to erythrocytes, a synthetic peptide library was selected in mice to identify a D-peptide, DQLR, which binds preferentially to mouse erythrocytes in vivo.
When administered to immunocompetent mice, a DQLR-PA conjugate led to significantly decreased anti-PA antibodies. Similarly, a DQLR-peptide antigen conjugate led to decreased antigen-specific inflammatory responses and antigen-specific T cells, indicating antigen-specific tolerance is induced by the DQLR-mediated association of antigens to mouse erythrocytes in vivo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129291</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The host heat shock response, viral immune escape and viral replication</title>
<link>https://hdl.handle.net/1721.1/129290</link>
<description>The host heat shock response, viral immune escape and viral replication
Ponomarenko, Anna, Ph. D., Massachusetts Institute of Technology.
Viral outbreaks have accompanied human history, as illustrated by past and ongoing pandemics. The continuous nature of viral threats to global health is dictated by viral evolution, which enables the rise of resistance to therapeutic and preventive measures, the emergence of new strains, and host switching. Addressing all of these challenges requires a deep understanding of all aspects of viral evolution at the host-pathogen interface. The underlying root of viral evolution is the high mutation rate coupled with relatively short replication times. Rapid accumulation of mutations produces mutant viral proteins with new biological functions, such as escape from the host immune system. However, such mutations are generally deleterious to protein structure or folding and thus confer a high biophysical cost, which most viruses are not equipped to address on their own. In contrast, host cells have an extensive chaperone network, which assists folding and resolves misfolding of endogenous cellular proteins. Chaperone assistance also facilitates evolution of host proteins, and, recent evidence suggests, of viral proteins as well, arising from the extensive involvement of chaperones in the viral lifecycle. In this thesis, I address the implications of host chaperones potentiating viral protein evolution. First, I describe how influenza virus resolves the biophysical cost of adaptive mutations by exploiting host chaperones. High-throughput profiling of influenza nucleoprotein mutational tolerance revealed the dependence of a key 1918 Spanish Flu adaptive variant on the availability of host chaperones. Limited access to host chaperone assistance, especially at fever temperatures, restricts accessibility of this structurally deleterious innate immune escape variant. Next, I address molecular details of the chaperone folding assistance required for efficient propagation of this adaptive mutant.
This work provides the first experimental evidence of host chaperones defining the accessibility of evolutionarily important protein variants. Such dependence on host chaperone assistance highlights the vulnerability of rapidly mutating viral proteins. Finally, I demonstrate that excessive upregulation of the host proteostasis network can also render viruses vulnerable and restrict HIV-1 replication. Overall, interaction with host chaperones is a critical determinant of viral evolution, and the extent of such interaction has to be carefully regulated throughout the viral life cycle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129290</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymers to modulate host-microbe interactions</title>
<link>https://hdl.handle.net/1721.1/129289</link>
<description>Polymers to modulate host-microbe interactions
Kruger, Austin Grant.
Chemical interactions between microbes and hosts both support and destroy life. Recognition of self- and non-self, destruction of pathogens, and recruitment of symbionts are all largely mediated by multivalent interactions between macromolecules. Multivalent binding occurs when multiple receptors on one surface engage multiple ligands on another. Analogous to protein-small molecule recognition, wherein the precise arrangement of atoms in ligand and receptor creates complementarity, multivalent ligands and receptors gain specificity based on the three-dimensional arrangement of both partners. Cells control multivalent systems by displaying ligand-receptor pairs on polymers such as proteins. By controlling polymer properties such as size, shape, and ligand density, life utilizes multivalent chemistry to accomplish key cellular and organismal functions such as proliferation and adaptive immunity. In particular, multivalent recognition is critical to the maintenance of host-microbe symbiosis and pathogenesis. Mucins, the massive glycoprotein structural components of mucus, feature multivalent displays of oligosaccharides that specifically bind microbial adhesins, recruiting them to mucosal barriers. Mucin structure has proven critical to their ability to attract symbionts, repel pathogens, and control microbial virulence. Additionally, adaptive immunity hinges on multivalent recognition of pathogenic epitopes for precise identification and elimination of harmful microbes. The structure of protein and carbohydrate antigens has a profound influence on the ability of the immune system to recognize and destroy pathogens.
Similar to structure-activity relationships for small molecules, synthetic polymers can be systematically tuned to perturb and enhance these systems. However, it is only with the advent of living polymerizations, such as ring-opening metathesis polymerization, that sufficient control over polymer structure has enabled precise structure-activity investigations for multivalent interactions. To better understand how chemical structure influences host-microbe relationships, I have synthesized precisely defined and highly tunable synthetic polymers to mimic the structure and anti-virulence properties of mucin and the immunogenicity of natural antigens. To assist in the transfer of this knowledge to practical applications, I have developed methodology for the functionalization of degradable polymers whose structures can be readily controlled. This research has resulted in tools to better understand host-microbe symbiosis and pathogenesis. It is my sincere hope that they will contribute to a brighter future for all life.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 203-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129289</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Squaric ester applications as novel lysine electrophiles in molecular probe design</title>
<link>https://hdl.handle.net/1721.1/129288</link>
<description>Squaric ester applications as novel lysine electrophiles in molecular probe design
Ho, Jordan Sun.
Small molecule probes for biology have been instrumental in uncovering enzyme mechanisms and developing therapeutics. Covalent probes are valuable because they can irreversibly tag proteins of interest for analysis. Covalent probes selective for lysine residues are especially valuable because they allow for fine control over biological systems. Many binding sites contain lysine residues, but current amine-reactive electrophiles in biology are largely unselective. Moreover, at almost 6%, lysine is more prevalent in the proteome than other nucleophilic residues such as cysteine. The ability to selectively target specific lysines would open new avenues for analyzing biomolecular interactions. Current efforts have yielded compounds with high reactivity and low stability, severely limiting their utility. Squaric esters are small chemical compounds with multiple amine-reactive sites that have been used extensively as linkers in organic synthesis. Additionally, squaric esters are mild electrophiles when compared to other amine-reactive electrophiles used in organic chemistry. This attenuated reactivity, coupled with their high selectivity for amines, suggests that substituted squaric esters may serve as novel biological probes. To this end, we characterized the reactivity and the kinetics of squaric ester reactions. We also applied squaric esters in different biological contexts to evaluate their utility as novel lysine-reactive electrophiles. We show that squaric esters react orders of magnitude slower than other amine-reactive electrophiles commonly used in biology. We then applied squaric esters practically in the design of novel inhibitors of galactofuranosyltransferase 2, an enzyme responsible for the biosynthesis of the galactan.
The galactan is a polysaccharide chain of galactofuranose residues, and it is essential for the cell wall of many bacteria, including pathogens such as Mycobacterium tuberculosis. We generated substituted squaric ester inhibitors that bind to galactofuranosyltransferase 2 with a specificity that provided insight into a potential allosteric binding site of the enzyme. Finally, using fragment-based ligand design in an effort to expand the lysine target space of small molecule probes, we constructed a squaric ester fragment library to screen for novel ligandable lysines across entire proteomes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 135-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129288</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies in duality : discovering a dual catalytic amination reaction and investigating the origin of biphilicity in phosphacycles</title>
<link>https://hdl.handle.net/1721.1/129287</link>
<description>Studies in duality : discovering a dual catalytic amination reaction and investigating the origin of biphilicity in phosphacycles
Dummit, Krysta A.(Krysta Alanna)
First, the development of a novel C-H amination strategy using both a Cu(II) Lewis acid and an organic hydrogen atom transfer (HAT) catalyst to activate benzylic C-H bonds adjacent to aromatic azaheterocycles is described. This simple method demonstrates very high selectivity towards aromatic azaheterocycles without using exogenous directing groups and affords excellent site selectivity in substrates with more than one reactive position. A wide range of aromatic azaheterocyclic structures not compatible with previously reported catalytic systems have proven to be amenable to this approach. Mechanistic investigations indicate a possible radical-mediated H-atom abstraction for select substrates, which would stand in contrast to known closed-shell Lewis acid catalyzed processes. Second, the synthesis and analysis of a series of three homologous α,α,α′,α′-tetramethyl-1-phenylphosphacycles, undertaken to investigate the theory that 4-membered ring phosphacycles (phosphetanes) are maximally biphilic as a result of bond angle compression that minimizes the HOMO-LUMO gap, is reported. Analysis of ³¹P NMR principal components validates the decrease in HOMO-LUMO gap as the intra-ring bond angle is compressed in the synthesized series, as predicted by TD-DFT computations; however, ¹J(P-Se) coupling constants, cyclic voltammograms, and UV-Vis measurements are less conclusive. Computational modeling of the (3+1) cheletropic addition of the phosphacycles to nitrobiphenyl as a measure of biphilicity reveals relative activation barriers within computational error for all but the smallest ring, indicating that the effect of intra-ring bond angle compression on phosphorus biphilic reactivity is generally subtle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129287</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of polymers and quantum dots for luminescent solar concentrators</title>
<link>https://hdl.handle.net/1721.1/129286</link>
<description>Synthesis of polymers and quantum dots for luminescent solar concentrators
Achorn, Odin B. (Odin Benjamin)
In order to meet the world's increasing energy demands, we must develop innovative ways of harnessing renewable energy. Benefits can be achieved by supplementing direct photovoltaic technologies with building-integrated photovoltaics, which take advantage of existing infrastructure. For example, luminescent solar concentrators (LSCs) are semitransparent devices that harvest sunlight and redirect it toward photovoltaic cells. Because of their semitransparent nature, LSCs can be used as windows, highway noise barriers, and greenhouse walls to convert these passive structures into energy harvesters. In the first part of this thesis, we describe the design of thin-film quantum dot (QD) LSCs, which can be deposited on existing transparent structures in the built environment. In order to disperse the thick-shelled CdSe/CdS QDs at a high concentration, we designed and synthesized a new polymer bearing carboxylic acids. The resulting film of the polymer/QD composite is highly concentrated and low scattering, and the high quantum yield of the QDs is retained. We also use a Monte Carlo simulation to predict the benefits of creating a two-layer device with both CdSe/CdS QDs and Lumogen F Red 305. In the second part of this thesis, we introduce our development of new, less toxic QDs to replace CdSe/CdS QDs in LSCs. These new QDs are based on InP, and our first task was to extend their absorption band by developing a new synthetic technique to access large sizes. This technique consists of the slow injection of phosphorus precursors into a reaction mixture of indium precursors. By controlling the injection rate, we accessed large sizes and low size dispersity. In the third part of this thesis, we addressed reabsorption losses in InP QDs as LSC fluorophores.
We did this by introducing silver dopants into the InP QDs, which red-shift the emission relative to the absorption. We improved the quantum yield of these silver-doped InP QDs by adjusting the number of silver dopants per QD and by adding thiols as ligands. Finally, we used our Monte Carlo simulation to predict the performance of these QDs relative to CdSe/CdS QDs in LSCs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 129-135).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129286</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing delivery and synthesis challenges for peptide-based cancer vaccines</title>
<link>https://hdl.handle.net/1721.1/129285</link>
<description>Addressing delivery and synthesis challenges for peptide-based cancer vaccines
Holden, Rebecca Lynn.
Therapeutic peptide vaccines have the potential to elicit and direct an anti-cancer T cell response, but their clinical efficacy has been limited in part by poor delivery to the lymphatic system, inefficient cell uptake, and scalable synthesis in the case of personalized vaccines. The work presented in this thesis explores several approaches to address these challenges. First, we use our fast-flow automated synthesis technology to confront the synthesis bottleneck for patient-specific 'neoantigen' vaccines, which have shown early promise but are hindered by slow production. We synthesize a particularly challenging set of peptides from a previous clinical trial as a test case, demonstrating that our technology can produce the majority of sequences in sufficient quantities with comparable or higher purity than a commercial vendor in a fraction of the time. Turning towards vaccine design, we explore several approaches to address the lymphatic and intracellular delivery of peptide antigens. We demonstrate the generality and anti-tumor efficacy of vaccines containing cell-penetrating peptides (CPPs), sequences shown to enhance the cell uptake of various cargo. We characterize their mechanism and identify several unanticipated contributors, namely trafficking to the lymph nodes, serum stability, and extended presentation in vivo. We then expand on an existing approach to mediate lymphatic trafficking via binding serum albumin by exploring additional albumin binding moieties. Next, we develop a straightforward and general approach to directly quantify antigen presentation and implement this technique to explore two strategies to design more effective vaccines, including CPPs.
Finally, we build on previous work using CPPs to deliver antisense oligonucleotides (ASOs), another application that we pursued in tandem with cancer vaccines. We combine amphipathic and cationic CPPs to create chimeric sequences that synergistically enhance the activity of an ASO and access new routes of uptake not utilized by either parent CPP. Drawing from our experience using CPPs to deliver ASOs as well as our expertise in peptide synthesis and design, we provide insight into the rapid production of personalized vaccines and the efficient delivery of vaccine antigens. This thesis represents a new area of research for our lab, one in which we will hopefully continue to apply our unique skillset and perspective.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129285</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embodied mathematics by interactive sketching</title>
<link>https://hdl.handle.net/1721.1/129275</link>
<description>Embodied mathematics by interactive sketching
Saquib, Nazmus
The language and formalization of mathematics historically evolved as an interplay between abstractions and their grounding in real-world objects and events. The embodied mathematics philosophy posits that our mathematical capabilities are centered on our embodied experiences, and that abstract math concepts are layers of metaphors built on simple arithmetic capabilities such as categorizing objects, subitizing, and part-whole analysis. Unfortunately, the current practice of abstract math often requires us to memorize rules of manipulation, which are cognitively arbitrary, severed from intuitive grounding, and have mechanics and metaphors of their own. While journeying into mathematics, many of us therefore lose the intuitive underpinnings of abstractions. In this dissertation, I develop a design framework and an interactive sketch interface to combine computer algebra algorithms with layers of sketched, visually interpretable compositions. The framework reimagines some abstract mathematical activities as sketching manipulable structures and performing actions that utilize our natural arithmetic capabilities, enabling the user to form personalized, embodied mathematical representations. The framework solves the challenge of defining a mapping between sketching, iconic objects, symbol algebra, and functions. It converts the symbolic representations of computer algebra into three kinds of sketched primitives and two key interactions. To evaluate the framework and interface, I present examples from the Common Core curriculum, demonstrating how the primitives and the interactions cover a wide range of exercises from Kindergarten to 8th Grade. Qualitative findings from playtesting studies in Bangladesh and the USA are then compared to point out the strengths and weaknesses of the design. An online drawing experiment is also presented that evaluates user preferences for drawing descriptive forms for the proposed mathematical compositions.
Next, I describe collaborations with scientists and mathematicians (in astrophysics, neuroscience, and epidemiology) that explore how the proposed methods can reimagine some advanced mathematics, such as numerical integration and differentiation. The ability to do symbol algebra on iconic representations opens up opportunities for doing mathematics with a wide range of physical objects and media. Such affordances are demonstrated through examples. I finish the dissertation by discussing future research opportunities in mathematics, scientific computing, and HCI, and how the landscape of abstract mathematics can adopt embodied mathematics.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 189-197).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129275</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensor networks for experience and ecology</title>
<link>https://hdl.handle.net/1721.1/129273</link>
<description>Sensor networks for experience and ecology
Mayton, Brian D.(Brian Dean)
Wetlands are critically important ecosystems, providing numerous benefits to the global environment. As cranberry farms in southeastern Massachusetts come out of production, many are undergoing active restoration with the goal of returning them to functional wetlands. This is a developing practice, and we are still learning about techniques that lead to the most favorable outcomes. It is also a disruptive process that brings significant and very visible changes to the landscape. But the process of wetland restoration and the restored wetlands themselves present fantastic opportunities for learning and enjoyment. In this dissertation, I present a custom sensor network installed at the Tidmarsh Wildlife Sanctuary, a former cranberry farm in Plymouth, Massachusetts that underwent active restoration in 2016. This network combines hundreds of custom low-power environmental sensor nodes with high-bandwidth continuous audio and video streams and the required supporting communications and data storage infrastructure. As a permanent fixture of the site, the network was designed to serve multiple functions, making Tidmarsh a testbed for many ideas. Long-term continuous monitoring, both before and after the restoration, allows us to observe changes that take years to decades and answer questions about restoration techniques and outcomes. Broad sensing capabilities allow us to make observations and gather data about questions we may not have thought to ask. Real-time data streaming and access protocols allow us to build novel ways of exploring and experiencing the site, both while physically present and remotely. Rich media streams enhance these experiences and allow us to assess complex factors such as biodiversity. I describe the design, implementation, and deployment of two generations of custom wireless sensor hardware and the supporting network infrastructure; a multi-channel audio streaming installation; and a setup for video streaming and timelapse recording.
I demonstrate how the network is used for scientific research through an experiment to determine the impact of microtopography (a restoration technique) on soil hydrology. Finally, I enumerate the many projects that have made use of the network to learn from the data and connect people to restored wetlands through novel experiences and creative expressions.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 165-172).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129273</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically encoded augmented reality : multiplexing perceptual bandwidths</title>
<link>https://hdl.handle.net/1721.1/129271</link>
<description>Biologically encoded augmented reality : multiplexing perceptual bandwidths
Lawson, Matthew Everett.
Information floods the center of our visual field and often saturates the focus of our attention, yet parallel channels are constantly and unconsciously processing our environment. Using sensory plasticity as a medium to create new visual experiences allows us to challenge how we perceive extant realities. This dissertation describes a novel system and methodology for generating a new visual language targeting the functional biology of far-peripheral mechanisms. Instead of relegating peripheral vision to the role of redirecting attentional awareness, the systems described here leverage unique faculties of far-peripheral visual processing to deliver complex, semantic information beyond 50° eccentricity without redirecting gaze. Synthetic position shifts can be elicited when frequency gratings or random dot patterns are translated behind a static aperture and viewed peripherally, a phenomenon called motion-induced position shift (MIPS). By transforming complex symbols into a series of strokes articulated through MIPS apertures, I present a codex of motion-modulated far-peripheral stimuli. Methodologies describe a two-stage implementation: first, proven psychophysical constructs are integrated into contextual forms, or codex blocks, and second, adapted to an environment of complex scenes. This approach expands upon prior work not only in first principles of visual information composition and delivery but also in its capacity to convey highly abstract forms to the far periphery with no gaze diversion, via apertures spanning only 0.64 degrees of the visual field.
Spatial compression and far-peripheral delivery of complex information have immediate applications in constrained display environments for interaction, navigation, and new models of visual learning. As the technological cutting edge outpaces our physiological sensitivities, the proposed methodologies could facilitate a new approach to media generation that utilizes peripheral vision as a compression algorithm, redirecting computation from external hardware to direct correlations within our biology. Systematic and applied longitudinal studies were conducted to evaluate the codex in increasingly complex dynamic visual environments. Despite increasing scene complexity, high detection accuracy rates were achieved quickly across observers and maintained throughout the varied environments. Trends in symbol detection speed over successive trials demonstrate early learning of a new visual language, supporting the framework and methods for delivering semantic information to far-peripheral regions of the human retina as valuable extensions of contemporary methodologies.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 85-90).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129271</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-aware self-assembly for space architecture : growth paradigms for in-space manufacturing</title>
<link>https://hdl.handle.net/1721.1/129269</link>
<description>Self-aware self-assembly for space architecture : growth paradigms for in-space manufacturing
Ekblaw, Ariel C.(Ariel Caitlyn)
Humanity stands on the cusp of interplanetary civilization. As we prepare to venture into deep space, we face what appears to be an irreconcilable conundrum: at once a majestic domain for human exploration, while also a domain of unrelenting challenges, posing dangers that are fundamentally at odds with our evolved biology. The field of space architecture struggles with not only these environmental challenges, but also constrained physical dimensions (e.g., rocket payload fairings), risky astronaut space-walks and limited robotic mobility for assembly, and capricious budgets as political whims change. How might we incorporate the robustness principles and incremental additions of indeterminate-growth living systems into the habitats that will sustain us over time? We begin by exploring how to enable dynamic, self-assembling space structures that are informed by both inorganic and organic growth processes from complex Earth systems. How can we design, induce, and scale self-aware self-assembly to grow space architecture, natively, in orbit? We answer this call, for space architecture that builds itself, through transformative self-aware self-assembly--adaptive, responsive "living structures" that follow principles of tessellation and self-similarity to scale elegantly from common base units to modular space stations to future mega-structures.
In an orbiting context, freed from the constraints of Earth's gravity, we can redefine how space architecture is conceived, designed, assembled, and lived within. These principles are applied across all four areas of thesis contributions: a novel design theory for space architecture realized in a portfolio of space structure concepts; systems engineering mission architecture and feasibility analyses for contextualizing proposed space structures in realistic aerospace deployments; physics simulation modeling for habitat-scale self-assembly dynamics in microgravity; and quasi-stochastic self-assembling tile hardware creation and evaluation across four space environment missions, culminating in a successful 30-day International Space Station experiment in March 2020. The thesis contributions center on the TESSERAE (Tessellated Electromagnetic Space Structures for the Exploration of Reconfigurable, Adaptive Environments) platform.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 217-231).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129269</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From data physicalization to data experiences : combining art, science, technology, and community to move towards collective action on environmental challenges</title>
<link>https://hdl.handle.net/1721.1/129267</link>
<description>From data physicalization to data experiences : combining art, science, technology, and community to move towards collective action on environmental challenges
Perovich, Laura J.(Laura Jones)
The environmental risks our society faces are becoming increasingly urgent and may have devastating implications both locally and globally. Over the past 40 years, environmental efforts have built understanding and have led to some significant changes. Yet, we have not taken enough action on many issues despite an abundance of data, policy proposals, and knowledge. Many fields have explored environmental issues over the decades, including work in human-computer interaction and art. My research responds to some of the successes and critiques of these fields and others to investigate how and with whom we understand our local environmental problems. In particular, it explores collective environmental experiences through community-based research processes that use art, technology, and science to move towards environmental action in communities. I pursue a holistic approach that integrates emotional and aesthetic parameters and investigates opportunities for impact on water quality issues in New England. This dissertation includes the following community-based environmental projects: (1) SeeBoat: citizen science water quality tools for data collection and in situ display, (2) Data Lanterns: open data physicalizations of water quality in industrial rivers, (3) ArtBoat: installations for artistic collaboration in public spaces, and (4) Participatory Self-Portrait: a gallery show to bring people together around the dissertation work. Each of these projects adds to an understanding of the local environmental and social system and suggests possible paths towards influencing the overall framework. I evaluate the work through qualitative methods in order to best capture the nuanced human experience of the events and the complexity of the systems that influence our environmental realities.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 325-352).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129267</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomic activity from human videos</title>
<link>https://hdl.handle.net/1721.1/129265</link>
<description>Autonomic activity from human videos
Chen, Weixuan, Ph. D., Massachusetts Institute of Technology.
The autonomic nervous system (ANS) is the part of the nervous system responsible for the regulation and integration of the functioning of internal organs. Traditionally, for the assessment of ANS function, autonomic activity is measured in various tests by medical devices with contact sensors. Most of these tests require wearing cumbersome equipment on the body, so they are commonly conducted in clinics and only sporadic data can be collected. A potential solution for more convenient analysis of autonomic activity is camera-based human sensing. Recent research has shown that it can be combined with computer vision algorithms to visualize ANS activity in human videos and to estimate, without contact, ANS parameters such as heart rate, respiration rate, and heart rate variability (HRV). However, there are still many hurdles that prevent this solution from reaching the accuracy and scope of clinical tests: 1) The robustness of existing methods is still unsatisfactory in ambulatory situations, especially when illumination changes and body motions are significant. 2) Previous visualization algorithms distort non-sinusoidal components of autonomic activity in motion magnification. 3) Potential ethics and privacy issues might impede the deployment of the new techniques. To address these problems, this dissertation proposes an end-to-end convolutional attention network using both gradient descent and gradient ascent to enable robust measurement and visualization under major motions; proposes a near-infrared-based carotid pulse tracker that can work when illumination is highly dynamic or absent; proposes a motion magnification algorithm that can magnify non-sinusoidal autonomic activity faithfully; discusses potential privacy issues in video-based autonomic activity monitoring; and, as a solution, proposes a framework for eliminating autonomic activity from facial videos without affecting their visual appearance.
Through combining these proposed approaches, the final goal of the dissertation is to realize unobtrusive analysis of autonomic activity from human video that can work in the field.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 171-179).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129265</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reorienting machine learning education towards tinkerers and ML-engaged citizens</title>
<link>https://hdl.handle.net/1721.1/129264</link>
<description>Reorienting machine learning education towards tinkerers and ML-engaged citizens
Lao, Natalie.
Artificial intelligence (AI) and machine learning (ML) technologies are appearing in everyday contexts, from recommendation systems for shopping websites to self-driving cars. As researchers and engineers develop and apply these technologies to make consequential decisions in everyone's lives, it is crucial that the public understands the AI-powered world, can discuss AI policies, and becomes empowered to engage in shaping these technologies' futures. Historically, ML application development has been accessible only to those with strong computational backgrounds working or conducting research in highly technical fields. As ML models become more powerful and computing hardware becomes faster and cheaper, it is now technologically possible for anyone with a laptop to learn about and tinker with ML. This includes audiences without programming or advanced mathematics knowledge, who can use public ML tools to construct both useful technical artifacts and meaningful socio-ethical artifacts around ML. My research investigates the following question: How do we reorient ML engineering education to transform "ML consumers" into "ML contributors"? In order to illuminate this problem, I have created a Machine Learning Education Framework.
In this dissertation, I present the framework and three applications of it: (1) a course based on the framework that aims to develop ML self-efficacy in general college-level audiences; (2) a curriculum rubric based on the ML Education Framework and analyses of 6 ML/AI courses using the rubric; and (3) two public ML tools for ML tinkerers at the high school level and above that empower them to create personal ML applications through image and audio classification. The ML Education Framework is organized into three interconnected categories: Knowledge (General ML Knowledge, Knowledge of ML Methods, Bias in ML Systems, Societal Implications of ML), Skills (ML Problem Scoping, ML Project Planning, Creating ML Artifacts, Analysis of ML Design Intentions &amp; Results, ML Advocacy, Independent Out-of-Class Learning), and Attitudes (Interest, Identity &amp; Community, Self-Efficacy, Persistence). The three top-level categories of the framework were chosen to encompass the holistic process of teaching people ML and ensuring continued participation in the field. "Knowledge" about ML will be necessary for individuals to thrive in today's ML-driven world and builds foundations for further engagement with the technology. ML "Skills" define specific actions and activities that students need to do and practice in order to become ML contributors. The category "Attitudes" reflects long-term societal goals for educational initiatives in ML, and addresses the perspectives that students should develop through an ML course. This framework can be used in creating new ML curricula, in analyzing the results of ML interventions, and in situating support tools within ML education for students of a variety of backgrounds and levels. Today a large portion of the world resides solely in the "ML consumer" population. Regulation of ML technologies is nascent. As policymakers think about reasonable legal structures, the public should be able to actively participate in these discussions.
It is imperative that we empower these "ML consumers" to become "ML tinkerers" and "ML-engaged citizens".
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 213-223).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129264</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wide-bandgap integrated photonics for quantum technologies</title>
<link>https://hdl.handle.net/1721.1/129262</link>
<description>Wide-bandgap integrated photonics for quantum technologies
Lu, Tsung-Ju Jeff.
Development of quantum networks is necessary for quantum communication and distributed quantum computing. This requires the distribution of entanglement across many stationary qubits in the network. Solid-state defect quantum emitters (QEs) can function as light-matter interfaces connecting the internal electron spin states, acting as stationary qubits, with the quantum states of emitted photonic qubits. Entanglement can then be generated between distant spin qubits by heralded optical measurements of the emitted photons over fibers. Thus, a key challenge is the control of many QEs, as well as efficient routing and detection of the spin-state-dependent photons. With QEs being solid-state, we can achieve the scaling needed through miniaturization of the control and routing components using integrated electronics and photonics. However, advanced and commonly used integrated photonic platforms produced in foundries are based on silicon and silicon nitride, which are incompatible with the short-wavelength emission of leading solid-state QEs. As such, there is a need for a wide-bandgap integrated photonics platform for quantum technologies. This thesis first develops photonic integrated circuits (PICs) based on aluminum nitride (AlN) on sapphire, which enables low-loss routing from the visible down to the ultraviolet spectrum. We will then show that thin-film AlN is also host to bright, high-purity QEs compatible with monolithic PIC integration. As solid-state emitters in diamond are among the most promising qubits for quantum networks due to their efficient optical interfaces and minute-scale spin coherence, we will then present the large-scale heterogeneous assembly of diamond-waveguide-coupled QEs into AlN photonic circuits with in situ wavelength tuning. To demonstrate the versatility of this photonics platform, we will lastly discuss the heterogeneous integration of QEs in 2D materials and detectors.
These advances show that AlN is a promising and versatile wide-bandgap integrated photonics platform for quantum information processing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 131-148).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129262</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast transactions in distributed and highly available databases</title>
<link>https://hdl.handle.net/1721.1/129261</link>
<description>Fast transactions in distributed and highly available databases
Lu, Yi (Ph. D. in Computer Science), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
Many modern data-oriented applications are built on top of distributed OLTP databases for both scalability and high availability. However, when running transactions that span several partitions of the database, significant performance degradation is observed in existing distributed OLTP databases. In this thesis, we develop three systems -- (1) STAR, (2) COCO, and (3) Aria -- to address the inefficiency and limitations of existing distributed OLTP databases while using different mechanisms and bearing various tradeoffs. STAR eliminates two-phase commit and network communication through asymmetric replication. COCO eliminates two-phase commit and reduces the cost of replication through epoch-based commit and replication. Aria eliminates two-phase commit and the cost of replication through deterministic execution. Our experiments on two popular benchmarks (YCSB and TPC-C) show that these three systems outperform conventional designs by a large margin. We also characterize the tradeoffs in these systems and the settings in which they are most appropriate.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 151-159).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129261</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond predictive modeling : new computational aspects for deep learning based biological applications</title>
<link>https://hdl.handle.net/1721.1/129259</link>
<description>Beyond predictive modeling : new computational aspects for deep learning based biological applications
Liu, Ge, Ph. D., Massachusetts Institute of Technology.
Next generation sequencing and large-scale synthetic DNA synthesis have enabled advances in biological studies by providing high-throughput data that can efficiently train machine learning models. Deep learning models have proven to provide state-of-the-art performance for predictive tasks across many biological applications. However, black-box predictive modeling is not sufficient for scientific discovery in biology. For discovery it is important to find the mechanisms that underlie outcomes. Mechanism discovery requires the visualization and interpretation of black-box predictive models. Discovery further requires analyzing data from exploratory experiments, and such experiments may produce data that is dissimilar from previous observations and thus be outside of a model's training distribution. Recognizing and quantifying the uncertainty of model predictions on out-of-distribution data is crucial for proper experiment interpretation. Moreover, therapeutic molecular design usually involves iterations of proposing and testing new candidates, which require sequential decision making and directed optimization of molecules in a multiplexed fashion. Finally, certain machine learning design tasks such as vaccine design need to meet objectives such as population coverage, which require efficient algorithms for combinatorial optimization. This thesis investigates and proposes novel techniques in four areas: model interpretation, model uncertainty, generating optimized antibody candidates, and optimization of vaccines with population coverage objectives. We first present Deep-Resolve, a novel analysis framework for deep convolutional models of genome function that visualizes how input features contribute individually and combinatorially to network decisions.
Unlike other methods, Deep-Resolve does not depend upon the analysis of a predefined set of inputs. Rather, it uses gradient ascent to stochastically explore intermediate feature maps to 1) discover important features, 2) visualize their contribution and interaction patterns, and 3) analyze feature sharing across tasks that suggests shared biological mechanisms. Next, we introduce Maximize Overall Diversity (MOD), an approach to improve ensemble-based uncertainty estimates by encouraging larger overall diversity in deep ensemble predictions across all possible inputs. We also explore variations of MOD utilizing adversarial techniques (MOD-Adv) and data density estimation (MOD-R). We show that for out-of-distribution test examples, MOD improves predictive performance and uncertainty calibration on multiple regression and Bayesian Optimization tasks. Thirdly, we use ensembles of deep learning models and gradient-based optimization in antibody sequence design. We optimize antibodies for improved binding affinity and specificity, and experimentally confirm our optimization results. Last, we combine deep learning models for predicting peptide MHC display with population frequency objectives to create a novel vaccine design tool, OptiVax, that estimates and optimizes the population coverage of peptide vaccines to facilitate robust immune responses. We used OptiVax to design peptide vaccines for SARS-CoV-2 and achieved superior predicted population coverage when compared to 29 public baseline designs. Collectively, our studies will enable the application of deep learning in a broad range of scenarios in biological studies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 165-182).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129259</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of fine-grained complexity</title>
<link>https://hdl.handle.net/1721.1/129258</link>
<description>Applications of fine-grained complexity
Lincoln, Andrea (Andrea I.)
This thesis is on the topic of the applications of Fine-Grained Complexity (FGC). FGC is concerned with categorizing computational problems by their running time up to low-order terms in the exponent. FGC has been a very successful field, explaining the computational hardness of many problems via a network of reductions. Given the explanatory success of FGC in the standard computational model, it would be valuable to apply FGC to new areas. This thesis focuses on studying the core assumptions of FGC and three areas of applications: (1) traditional FGC in the standard model (the worst-case RAM model), (2) average-case FGC (ACFGC), and (3) fine-grained cryptography. If we can strengthen the core of FGC, then we would also strengthen the applications of FGC. This thesis demonstrates that a core hypothesis of FGC (the 3-SUM hypothesis) is equivalent to its small-space counterpart. This makes the 3-SUM hypothesis more plausible. FGC has built a network of reductions between problems that explain the known running times of the problems contained in the network. A core goal of FGC research is to add new problems to this network of reductions. This thesis shows that the sparse All Pairs Shortest Paths problem in n-node m-edge graphs requires (nm)[superscript 1-o(1)] time if the zero-k-clique hypothesis is true. This result gives a novel connection between the hardness of these two problems. A problem of much interest to both traditional complexity and FGC is Boolean Satisfiability (SAT). There is a well-studied average-case variant of SAT called Random k-SAT. In this thesis we study the running time of this problem, seeking to understand its ACFGC. We present an algorithm for Random k-SAT which runs in 2[superscript n(1-[Omega](lg[superscript 2] k)/k)] time, giving the fastest known running time for Random k-SAT.
Modern cryptography relies on average-case constructions. That is, an encryption scheme is shown to be hard to break via reduction from a problem conjectured to be hard on average. Similarly, fine-grained cryptography relies on average-case fine-grained lower bounds and reductions. This is the core connection between fine-grained cryptography and ACFGC. This thesis presents a plausible fine-grained average-case hypothesis which results in a novel public-key cryptosystem. This thesis strengthens the core hypotheses of FGC, gives more efficient algorithms for problems of interest, and builds fine-grained cryptosystems. The core goal is to apply the tools and techniques of FGC to obtain novel results in both the worst-case setting and the average-case setting.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 261-281).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129258</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to guide task and motion planning</title>
<link>https://hdl.handle.net/1721.1/129257</link>
<description>Learning to guide task and motion planning
Kim, Beomjoon.
How can we enable robots to efficiently reason both at the discrete task level and the continuous motion level to achieve high-level goals such as tidying up a room or constructing a building? This is a challenging problem that requires integrated reasoning about the combinatorial aspects of the problem, such as deciding which object to manipulate, and the continuous aspects of the problem, such as finding collision-free manipulation motions, to achieve goals. The classical robotics approach is to design a planner that, given an initial state, goal, and transition model, computes a plan. The advantage of this approach is its immense generalization capability. For any given state and goal, a planner will find a solution if there is one. The inherent drawback, however, is that a planner does not typically make use of planning experience, and computes a plan from scratch every time it encounters a new problem. For complex problems, this renders planners extremely inefficient. Alternatively, we can take a pure learning approach where the system learns, from either reinforcement signals or demonstrations, a policy that maps states to actions. The advantage of this approach is that computing the next action to execute becomes much cheaper than pure planning because it is simply making a prediction using a function approximator. The drawback, however, is that it is brittle. If a policy encounters a state that is very different from the ones seen in the training set, then it is likely to make mistakes and might get into a situation from which it does not know how to proceed. Our approach is to take the middle ground between these two extremes. More concretely, this thesis introduces several algorithms that learn to guide a planner from planning experience.
We propose state representations, neural network architectures, and data-efficient algorithms for learning to perform both task- and motion-level reasoning using neural networks. We then use these neural networks to guide a planner and show that it performs more efficiently than pure planning and pure learning algorithms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages [113]-124).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129257</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New frontiers in THz quantum cascade lasers</title>
<link>https://hdl.handle.net/1721.1/129256</link>
<description>New frontiers in THz quantum cascade lasers
Khalatpour, Ali.
Terahertz (THz) frequencies (0.5-10 THz) lie in one of the most underdeveloped regions of the electromagnetic spectrum, despite their great application potential in imaging, sensing, and communications. This underdevelopment is primarily due to the lack of compact and powerful THz sources. The invention of THz quantum cascade lasers (QCLs) held great promise to bridge the gap between semiconductor electronic and photonic devices. However, the demanding cooling requirements of THz QCLs have been a hard brake in the race for compact and portable systems, and they have confined THz QCL systems to the laboratory environment. Therefore, raising the maximum operating temperature above that of a compact cooler (&gt;= 235 K for single-stage thermoelectric coolers) has been a paramount long-term goal in the THz field. In this thesis, THz QCLs (at ~4 THz) with a maximum operating temperature T[subscript max] = 250 K have been developed. This operating temperature enabled the construction of coherent THz radiation sources using inexpensive commercial single- and multi-stage thermoelectric coolers, yet with power levels sufficient for real-time imaging of beam patterns and fast spectral measurements without requiring expensive cryogenically cooled detectors. The combination of TEC-cooled THz QCLs with room-temperature cameras and detectors enables portable systems that are operable outside the laboratory environment. Furthermore, and perhaps more importantly, the demonstrated significant increase in T[subscript max] and the preservation of room-temperature NDR pave a clear path toward further increases in T[subscript max]: designing clean n-level systems based on the direct-phonon scheme with tall barriers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 103-113).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129256</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On optimization and scalability in deep learning</title>
<link>https://hdl.handle.net/1721.1/129255</link>
<description>On optimization and scalability in deep learning
Kawaguchi, Kenji, Ph. D., Massachusetts Institute of Technology.
Deep neural networks have achieved significant empirical success in many fields, including computer vision, machine learning, and artificial intelligence. Along with its empirical success, deep learning has been theoretically shown to be attractive in terms of its expressive power. That is, neural networks with one hidden layer can approximate any continuous function, and deeper neural networks can approximate functions of certain classes with fewer parameters. Expressivity theory states that there exist optimal parameter vectors for neural networks of certain sizes to approximate desired target functions. However, the expressivity theory does not ensure that we can find such an optimal vector efficiently during optimization of a neural network. Optimization is one of the key steps in deep learning because learning from data is achieved through optimization, i.e., the process of optimizing the parameters of a deep neural network to make the network consistent with the data. This process typically requires nonconvex optimization, which is not scalable for high-dimensional problems in general. Indeed, in general, optimization of a neural network is not scalable without additional assumptions on its architecture. This thesis studies the non-convex optimization of various architectures of deep neural networks by focusing on some fundamental bottlenecks in the scalability, such as suboptimal local minima and saddle points. In particular, for deep neural networks, we present various guarantees for the values of local minima and critical points, as well as for points found by gradient descent.
We prove that mild over-parameterization of practical degrees can ensure that gradient descent will find a global minimum for non-convex optimization of deep neural networks. Furthermore, even without over-parameterization, we show, both theoretically and empirically, that increasing the number of parameters improves the values of critical points and local minima towards the global minimum value. We also prove theoretical guarantees on the values of local minima for residual neural networks. Moreover, this thesis presents a unified theory to analyze the critical points and local minima of various deep neural networks beyond these specific architectures. These results suggest that, whereas scalability is an issue in the theoretical worst case and for the worst architectures, we can avoid the issue and scale well for large problems with various useful architectures in practice.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 253-260).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129255</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in deep generative modeling for clinical data</title>
<link>https://hdl.handle.net/1721.1/129254</link>
<description>Advances in deep generative modeling for clinical data
Gopalkrishnan, Rahul.
The intelligent use of electronic health record data opens up new opportunities to improve clinical care. Such data have the potential to uncover new sub-types of a disease, approximate the effect of a drug on a patient, and create tools to find patients with similar phenotypic profiles. Motivated by such questions, this thesis develops new algorithms for unsupervised and semi-supervised learning of latent variable, deep generative models -- Bayesian networks parameterized by neural networks. To model static, high-dimensional data, we derive a new algorithm for inference in deep generative models. The algorithm, a hybrid between stochastic variational inference and amortized variational inference, improves the generalization of deep generative models on data with long-tailed distributions. We develop gradient-based approaches to interpret the parameters of deep generative models, and fine-tune such models using supervision to tackle problems that arise in few-shot learning. To model longitudinal patient biomarkers as they vary due to treatment, we propose Deep Markov Models (DMMs). We design structured inference networks for variational learning in DMMs; the inference network parameterizes a variational approximation which mimics the factorization of the true posterior distribution. We leverage insights from pharmacology to design neural architectures which improve the generalization of DMMs on clinical problems in the low-data regime. We show how to capture structure in longitudinal data using deep generative models in order to reduce the sample complexity of nonlinear classifiers, thus giving us a powerful tool to build risk stratification models from complex data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 203-221).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129254</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A unified modeling for control of reactive power dynamics in electrical energy systems</title>
<link>https://hdl.handle.net/1721.1/129253</link>
<description>A unified modeling for control of reactive power dynamics in electrical energy systems
Jaddivada, Rupamathi.
This thesis is motivated by the renewed need to understand and control the dynamics of reactive power in electrical energy systems. Of particular interest are the dynamical non-sinusoidal interactions between heterogeneous system components, such as power-electronically controlled renewable intermittent resources, storage, delivery components, and loads. The main contribution of this thesis is a newly introduced model which relates the mismatch in the rate of power absorbed/produced by the diverse components to their reactive power dynamics; all else revolves around this novel model. The notion of reactive power dynamics is further generalized for diverse types of energy conversion processes. As an outgrowth of the previous work, we propose a physically intuitive multi-layered interactive model for characterizing the input-output dynamics of components and the dynamics of their interactions in the interconnected system, especially when reactive power dynamics become dominant. The higher-layer model utilizes the new model and is expressed in terms of instantaneous power and the rate of change of generalized reactive power. The dynamics of instantaneous power is a direct consequence of the energy conservation law. In contrast, the dynamics of generalized reactive power reflects the difference between the rates of work potential and useful work. The ratio of the generalized reactive power to the work potential thus represents the inefficiencies in power transfer between the components. Notably, the higher-layer models are linear, creating opportunities for scalable analysis and provable control design, under mild assumptions on stand-alone component models.
We then re-interpret the critical role of nonlinear decentralized power-electronically-switched controllers in enabling the feasibility, stability, and robustness of the emerging electric energy systems using this new modeling. We consider several examples to demonstrate the generality and the intuitive analysis and control design principles brought about by the proposed modeling.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 189-207).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129253</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming technologies for engineering quality multicore software</title>
<link>https://hdl.handle.net/1721.1/129252</link>
<description>Programming technologies for engineering quality multicore software
Kaler, Tim (Tim F. S.)
The widespread availability of large multicore computers in the cloud has given engineers and scientists unprecedented access to large computing platforms. Traditionally, high-end computing solutions have been developed and used by only a small community, as these solutions rely on expensive and specialized computing environments. The emergence of large-scale cloud computing providers, however, has democratized access to large-scale (although not necessarily HPC-scale) computing power, which can now be rented on-demand with just a credit card. The complexity of parallel programming, however, has made it more difficult for even expert programmers to develop high-quality multicore software systems. For average programmers, developing parallel programs that are debuggable, correct, and performant is a daunting challenge. This thesis is concerned with the development of programming technologies that reduce the complexity of parallel programming to make it easier for average programmers to exploit the capabilities of multicore hardware. I contend that realizing the full potential of the multicore revolution requires the development of programming technologies that make it easier to write quality code -- code that has a simple understandable structure and performs well in practice. These programming technologies broadly include parallel algorithms, data structures, optimization techniques, profiling tools, and system design principles. To these ends, this thesis presents seven intellectual artifacts from the domains of parallel algorithms, multicore-centric systems for scientific computing, and programming tools that make it easier to write quality code by simplifying the design, analysis, and performance engineering of multicore software:
-- Chromatic: Parallel algorithms for scheduling data-graph computations deterministically.
-- Color: Parallel algorithms and ordering heuristics for graph coloring that have the simple semantics of serial code.
-- PARAD: An efficient and parallelism-preserving algorithm for performing automatic differentiation in parallel programs.
-- Connectomics: An end-to-end image-segmentation pipeline for connectomics using a single large multicore.
-- Alignment: An image-alignment pipeline for connectomics that uses memory-efficient algorithms and techniques for judiciously exploiting performance-accuracy tradeoffs.
-- Reissue: Reissue policies for reducing tail latency in distributed services that are easy to analyze and effective in practice.
-- Cilkmem: Efficient algorithms and tools for measuring the worst-case memory high-water mark of parallel programs.
Although the emphasis and domains of these artifacts vary, they each involve the discovery of a way to tame complexity in parallel software systems without compromising (in fact, usually enhancing) theoretical guarantees and real-world performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 231-256).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129252</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graph guided predictions</title>
<link>https://hdl.handle.net/1721.1/129251</link>
<description>Graph guided predictions
Garg, Vikas, Ph. D. (Vikas Kamur), Massachusetts Institute of Technology.
Graphs provide a natural abstraction to model relational and strategic data in domains as diverse as biology (e.g., molecules), multiagent settings (e.g., online vendors on ecommerce platforms), and distributed systems (e.g., the Internet). Graphs also find much use as theoretical objects (e.g., probabilistic graphical models), and several important algorithms (e.g., the max-flow algorithm for image segmentation) can be used when tasks are formulated in terms of graphs. In this thesis, we focus on three important issues that arise when using graph-structured data for prediction: (a) compressing graphs to facilitate predictions, (b) understanding the capability of state-of-the-art algorithms (graph neural networks) operating on graphs, and (c) inferring interaction graphs so as to predict strategic outcomes. Our approach to graph compression builds on the idea of optimal transport, specifying the cost of mapping a large graph to a smaller one. The cost decomposes as a flow on the edges, and the selection of the subgraph to retain can be optimized via convex Boolean relaxations. Graph neural networks (GNNs) are naturally suited for making predictions based on graphs, but they remain poorly understood in terms of what they can and cannot do. We analyze whether GNNs can distinguish graphs that differ in properties such as cycles but have similar local structure. We also investigate data-dependent generalization bounds for GNNs. In many cases the graph structure is not given but must be inferred. This is the case, for example, in trying to understand how a set of players arrive at their decisions. We study the role of structure -- the interaction graph -- in the context of predicting outcomes of such games. We analyze conditions under which players converge to an equilibrium, when the samples from equilibrium are themselves at (near) equilibrium, and when the unknown interaction graph can be identified from a data set of sampled context-dependent outcomes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129251</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anomaly detection through explanations</title>
<link>https://hdl.handle.net/1721.1/129250</link>
<description>Anomaly detection through explanations
Gilpin, Leilani Hendrina.
Under most conditions, complex machines are imperfect. When errors occur, as they inevitably will, these machines need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of a possible failure. My thesis contributes a system architecture that reconciles local errors and inconsistencies amongst parts. I represent a complex machine as a hierarchical model of introspective sub-systems working together towards a common goal. The subsystems communicate in a common symbolic language. In the process of this investigation, I constructed a set of reasonableness monitors to diagnose and explain local errors, and a system-wide architecture, Anomaly Detection through Explanations (ADE), which reconciles system-wide failures. The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be backtracked and queried for support and counterfactual explanations. I have applied my results to explain incorrect labels in semi-autonomous vehicle data. A series of test simulations show the accuracy and performance of this architecture based on real-world, anomalous driving scenarios. My work has opened up the new area of explanatory anomaly detection, towards a vision in which: complex machines will be articulate by design; dynamic, internal explanations will be part of the design criteria, and system-level explanations will be able to be challenged in an adversarial proceeding.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 211-230).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129250</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spintronics using low magnetization materials</title>
<link>https://hdl.handle.net/1721.1/129249</link>
<description>Spintronics using low magnetization materials
Finley, Joseph T.(Joseph Tyler)
Information storage using magnetic materials is accomplished by controlling and sensing the magnetic moment orientation of nanoscale ferromagnets. In order to improve performance and compete with existing and alternative emerging memory technologies, further improvements in device switching speeds, density, and energy efficiency are needed. To address these issues, we explore the use of low-magnetization ferrimagnetic and antiferromagnetic materials as information storage media. We demonstrate the feasibility of spin-torque switching in compensated ferrimagnetic systems, along with increased switching speeds. We also show the existence of resistive artifacts in current-induced antiferromagnetic switching, which need to be removed if practical devices are to be realized. Finally, we achieve spin wave transmission in ferrimagnetic insulators with perpendicular magnetization, which is a promising step forward for the development of spin wave computing devices. Magnetic devices using small-moment magnets promise a spintronic platform for fast, dense, and energy-efficient memory technology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129249</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving end-to-end neural network models for low-resource automatic speech recognition</title>
<link>https://hdl.handle.net/1721.1/129248</link>
<description>Improving end-to-end neural network models for low-resource automatic speech recognition
Drexler, Jennifer Fox.
In this thesis, we explore the problem of training end-to-end neural network models for automatic speech recognition (ASR) when limited training data are available. End-to-end models are theoretically well-suited to low-resource languages because they do not rely on expert linguistic resources, but they are difficult to train without large amounts of transcribed speech. This amount of training data is prohibitively expensive to acquire in most of the world's languages. We present several methods for improving end-to-end neural network-based ASR in low-resource scenarios. First, we explore two methods for creating a shared embedding space for speech and text. In doing so, we learn representations of speech that contain only linguistic content and not, for example, the speaker or noise characteristics in the speech signal. These linguistic-only representations allow the ASR model to generalize better to unseen speech by discouraging the model from learning spurious correlations between the text transcripts and extra-linguistic factors in speech. This shared embedding space also enables semi-supervised training of some parameters of the ASR model with additional text. Next, we experiment with two techniques for probabilistically segmenting text into subword units during training. We introduce the n-gram maximum likelihood loss, which allows the ASR model to learn an inventory of acoustically-inspired subword units as part of the training process. We show that this technique combines well with the embedding space alignment techniques in the previous section, leading to a 44% relative improvement in word error rate in the lowest resource condition tested.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 131-140).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129248</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gen : a high-level programming platform for probabilistic inference</title>
<link>https://hdl.handle.net/1721.1/129247</link>
<description>Gen : a high-level programming platform for probabilistic inference
Cusumano-Towner, Marco Francis.
Probabilistic inference provides a powerful theoretical framework for engineering intelligent systems. However, diverse modeling approaches and inference algorithms are needed to navigate engineering tradeoffs between robustness, adaptability, accuracy, safety, interpretability, data efficiency, and computational efficiency. Structured generative models represented as symbolic programs provide interpretability. Structure learning of these models provides data-efficient adaptability. Uncertainty quantification is needed for safety. Bottom-up, discriminative inference provides computational efficiency. Iterative "model-in-the-loop" algorithms can improve accuracy by fine-tuning inferences and improve robustness to out-of-distribution data. Recent probabilistic programming systems fully or partially automate inference, but are too restrictive for many applications. Differentiable programming systems are also inadequate: they do not support structure learning of generative models or hybrids of "model-in-the-loop" and discriminative inference. Therefore, probabilistic inference is still often implemented by translating tedious mathematical derivations into low-level numerical programs, which are error-prone and difficult to modify and maintain. This thesis presents the design and implementation of the Gen programming platform for probabilistic inference. Gen automates the low-level implementation of probabilistic inference algorithms while remaining flexible enough to support heterogeneous algorithmic approaches and extensible enough for practical inference engineering. Gen users define their models explicitly using probabilistic programs, but instead of compiling the model directly into an inference algorithm implementation, Gen compiles the model into data types that encapsulate low-level inference operations whose semantics are derived from the model, like sampling, density evaluation, and gradients.
Users write their inference application in a general-purpose programming language using Gen's abstract data types as primitives. This thesis defines Gen's data types and shows that they can be used to compose a variety of inference techniques including sophisticated Monte Carlo algorithms and hybrids of Monte Carlo, variational, and discriminative techniques. The same data types can be generated from multiple probabilistic programming languages that strike different expressiveness and performance tradeoffs. By decoupling probabilistic programming language implementations from inference algorithm design, Gen enables more flexible specialization of both, leading to performance improvements over existing probabilistic programming systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 221-231).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129247</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of electronic correlation and superconductivity in twisted graphene superlattices</title>
<link>https://hdl.handle.net/1721.1/129246</link>
<description>Study of electronic correlation and superconductivity in twisted graphene superlattices
Cao, Yuan, Ph. D., Massachusetts Institute of Technology.
Two-dimensional materials, such as graphene, exhibit various unique electronic and optical properties that distinguish them from their bulk parent compounds. Besides being highly tunable by electrostatic gating, these 2D materials can be assembled into van der Waals heterostructures, which greatly extend the possibilities one can achieve. Among these possibilities, the twist angle in a van der Waals heterostructure is a unique knob, which we can utilize to engineer the properties of the 2D materials in unprecedented ways. In this thesis, I mainly studied the electronic properties of twisted bilayer graphene, consisting of two pieces of graphene rotated by a certain angle. It is shown experimentally that the twist angle significantly alters the band structure, by reducing the Fermi velocity at the Dirac points and by inducing new band gaps, due to the formation of a moiré superlattice. In particular, at a 'magic' twist angle, the band structure becomes strongly flattened, to an extent that the Coulomb interactions between the electrons now become dominant. In such a regime, peculiar correlated insulator states and unconventional superconductivity are found, which share many common traits with those observed in high-Tc superconducting materials. These findings establish the first graphene superconductor in two dimensions. Furthermore, it is found that both the superconducting and normal states in magic-angle twisted bilayer graphene exhibit significant anisotropy, likely as a result of the electronic correlations as well. I also present results in twisted graphene superlattices beyond twisted bilayer graphene. These studies might help us understand more about the correlated physics in flat-band systems, which might in turn shed more light on the research of high-Tc superconductors.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 151-164).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129246</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exhaustive search and hardness proofs for games</title>
<link>https://hdl.handle.net/1721.1/129245</link>
<description>Exhaustive search and hardness proofs for games
Bosboom, Jeffrey (Jeffrey William)
This thesis explores several games from two perspectives: exhaustive search and hardness proofs. First, we present an exhaustive search for hardness proofs: a system for finding motion planning simulations. Second, we prove that the pencil-and-paper puzzle Tatamibari is NP-complete, a proof developed using a Tatamibari solver we wrote based on the Z3 SMT solver. Third, we find by computer search that the board game Push Fight played on a board with one column (four squares) removed is a draw. Then we prove that mate-in-1 in generalized Push Fight is NP-complete and that determining the winner of a game in progress is PSPACE-hard. Fourth, we prove that path puzzles are NP-complete, ASP-complete, and #P-complete. We describe a solver for path puzzles based on depth-first search that solves 14 of 15 puzzles from the last chapter of the path puzzles book. Fifth, we present a nonogram solver based on automaton intersection. Relatedly, we prove that finding an optimal automaton intersection ordering is PSPACE-hard. Sixth, we analyze puzzles from the video game The Witness and obtain NP-completeness for most clue types and Σ₂-completeness for puzzles containing antibody clues. Finally, we propose a generic framework for parsing screenshots of grid-based video games.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 275-289).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129245</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building efficient algorithms by learning to compress</title>
<link>https://hdl.handle.net/1721.1/129244</link>
<description>Building efficient algorithms by learning to compress
Blalock, Davis W. (Davis Whitaker)
The amount of data in the world is doubling every two years. Such abundant data offers immense opportunities, but also imposes immense computation, storage, and energy costs. This thesis introduces efficient algorithms for reducing these costs for bottlenecks in real-world data analysis and machine learning pipelines. Concretely, we introduce algorithms for: -- Lossless compression of time series. This algorithm compresses better than any existing method, despite requiring only the resources available on a low-power edge device. -- Approximate matrix-vector multiplies. This algorithm accelerates approximate similarity scans by an order of magnitude relative to existing methods. -- Approximate matrix-matrix multiplies. This algorithm often outperforms existing approximation methods by more than 10x and non-approximate computation by more than 100x. We provide extensive empirical analyses of all three algorithms using real-world datasets and realistic workloads. We also prove bounds on the errors introduced by the two approximation algorithms. The theme unifying all of these contributions is learned compression. While compression is typically thought of only as a means to reduce data size, we show that specially designed compression schemes can also dramatically increase computation speed and reduce memory requirements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 137-152).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129244</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving fairness in budget-constrained algorithmic decision-making</title>
<link>https://hdl.handle.net/1721.1/129243</link>
<description>Improving fairness in budget-constrained algorithmic decision-making
Bakker, Michiel Anton.
The last five years have seen a vast increase in academic and popular interest in "fair" machine learning. But while the community has made significant progress towards developing algorithmic interventions to mitigate unfairness, research has focused predominantly on static classification settings. Real-world algorithmic decision making, however, increasingly happens in more dynamic settings. In this thesis, we will study fairness in some of these settings. The first part focuses on mitigating unfairness in settings in which decision makers can choose to spend part of a limited budget on acquiring more information for individuals. For example, a doctor who is unsure about a diagnosis can first decide to conduct additional tests before making a final decision. Studying fairness in this budget-constrained decision-making setting is important not only because of its applicability to a wide range of domains but also because it offers a novel perspective on how fairness can be defined and improved. We will propose three methods for achieving fairness in this setting that provide guarantees at the level of a population subgroup or at the level of an individual. The second part of the thesis studies a real-world budget-constrained application of algorithmic decision-making. We detect bias in statistical models that are currently deployed to support the distribution of social programs among millions of households in the developing world. Finally, we propose a domain-specific decision support tool that addresses bias in this domain while accounting for the complex multi-stakeholder decision-making process.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 143-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129243</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distribution testing : classical and new paradigms</title>
<link>https://hdl.handle.net/1721.1/129242</link>
<description>Distribution testing : classical and new paradigms
Aliakbarpour, Maryam.
Hypothesis testing is a fundamental topic in statistics. To put it simply, hypothesis testing is a framework to examine whether a hypothesized model is in line with the observed data. Hypothesis testing has been widely used in experimental research in a variety of fields, such as biology, medical science, and social sciences. Despite a century of constant use, there is still a lot left to be done for the evolving needs in the practical world. Some of the high-priority challenges we face are preserving privacy, working with high-dimensional distributions, handling noisy data, and dealing with data that is gathered from multiple sources. In this thesis, we focus on basic statistical problems in a more recently considered setting of hypothesis testing, referred to as property testing, in which we aim to address the challenges mentioned above. In particular, we have the following contributions: 1. We investigate the problem of testing whether a distribution has the shape property that it is monotone according to some (partial) order of the domain elements, or it is far from being such a distribution. Among other results, our main contribution is that testing monotonicity over a high-dimensional domain, the Boolean hypercube, requires almost linearly many samples in terms of the domain size. 2. We consider well-studied identity and closeness testing problems in a new mixture-based noise model. We provide testers with optimal sample complexity for these problems under various scenarios that differ in terms of how the tester can access the distribution, or what knowledge about the noise is available to the tester. 3. We developed differentially private testers for several fundamental problems in testing, such as testing uniformity, identity, closeness, and independence. The conceptual message of our work is that there exist private hypothesis testers that are nearly as sample-efficient as their non-private counterparts. 4.
We consider a new model in distribution testing for multiple data sources when only a few samples are available from each source. This assumption is in contrast to the common distribution testing model, which views the data as i.i.d. samples from a single distribution. We generalized the uniformity, identity, and closeness testing problems to this setting, and developed sample-optimal testers for these problems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 191-198).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129242</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient radio frequency power generation and impedance matching</title>
<link>https://hdl.handle.net/1721.1/129241</link>
<description>Efficient radio frequency power generation and impedance matching
Al Bastami, Anas Ibrahim.
A wide range of applications require the efficient generation and delivery of radio-frequency (rf) power to do useful work. This is commonly achieved by utilizing an rf power amplifier or inverter that delivers the required amount of power to the load. In many applications, the load impedance varies widely due to changes in the operating conditions. A key challenge in these rf power delivery systems is attaining high efficiency and performance across all operating conditions, including accurate control of the delivered power, and achieving wide bandwidths with respect to power delivery and/or load variation. Typical rf amplifiers and inverters operate efficiently into a fixed load resistance, but their performance heavily degrades with variations in load impedance. This problem is often addressed through tunable matching networks (TMNs), which provide adaptive impedance transformation between the rf source and load. However, they become costly, bulky, and slow in response to load changes, especially at high power levels (hundreds to thousands of watts and above). This thesis develops solutions for efficient generation and delivery of rf power into dynamically-varying loads, with a focus on systems suitable for plasma generation at power levels from hundreds of watts to tens of kilowatts. It presents an efficient, low-cost, and small-size alternative to a TMN that achieves acceptable impedance matching for inductively-coupled rf plasma systems. It also develops techniques that enable the design of an efficient high-power TMN capable of very fast modulation of impedance.
In addition, this thesis explores different architectural implementations for complete rf generation and power delivery systems that are efficient, fast, and capable of driving a wide load range with accurate control of power, and identifies the most promising implementations. While the solutions in this thesis are demonstrated in the context of plasma generation systems, they can be applied to many other systems having dynamically-varying rf loads.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 249-261).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129241</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for enhancing electron microscopy</title>
<link>https://hdl.handle.net/1721.1/129240</link>
<description>Techniques for enhancing electron microscopy
Agarwal, Akshay.
Electron microscopy is a powerful imaging technique that allows us to push the limits of our understanding of materials at the nanoscale. An important limitation in the application of electron microscopy to organic and biological materials is sample damage induced by the electron beam. Recently, quantum mechanical and adaptive illumination imaging schemes have been devised to use the available electron dose efficiently to get the maximum information about the specimen. The primary requirement for the implementation of these schemes is efficient illumination and detection of electrons in the microscopes, which has limited the applicability of such low-dose imaging techniques. In this thesis, we have developed and implemented low-dose imaging schemes achievable with current technology on a wide range of electron microscopes. We have also proposed microscopy schemes that combine ideas from quantum mechanical and adaptive illumination imaging to lower the electron dose required for imaging by up to an order of magnitude. Further, we have developed electron count imaging on a scanning electron microscope (SEM) and demonstrated improvement of up to 30% in image quality for the same imaging dose. Finally, we have implemented an adaptive illumination scheme on the SEM and demonstrated that the incident electron dose can be traded off with a tolerable increase in imaging errors. The work in this thesis improves the dose reduction possible with quantum imaging and adaptive illumination schemes and represents a major step towards their implementation in different types of electron microscopes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 257-267).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129240</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tuning geometric and electronic structure of noble metal with core-shell platform as enhanced catalysts</title>
<link>https://hdl.handle.net/1721.1/129239</link>
<description>Tuning geometric and electronic structure of noble metal with core-shell platform as enhanced catalysts
Wang, Zhenshu (Zhenshu Stan)
The noble metals (NMs) are special materials satisfying the Sabatier principle, which bind adsorbates neither too strongly nor too weakly. Along with the resistance to corrosion and bulk oxidation, this makes noble metals universal catalysts for many important industrial reactions. Despite the appealing chemical properties, noble metals are not the optimal catalysts for particular reactions based on theoretical calculations. Core-shell nanostructures are a versatile platform that has the potential to solve this problem. Recent advances have enabled the synthesis of platinum titanium tungsten carbide (Pt/TiWC) and platinum titanium tungsten nitride (Pt/TiWN) core-shell nanoparticles featuring superior Pt mass activity in oxygen reduction reaction (ORR) and CO tolerance during hydrogen oxidation reaction (HOR), which are the critical reactions to enable hydrogen fuel cell technologies. However, applications with these materials in thermal catalysis reactions and other adsorbates have not been shown. Furthermore, the use of tungsten carbide or nitride as the only backbone core material limits the tunability and core stability for the core-shell catalysts. This thesis includes a combination of synthesis, characterization, and catalytic performance of new core-shell nanoparticles to provide a fundamental understanding of this versatile platform. Firstly, new core materials tantalum carbide (TaC) and niobium carbide (NbC) combined with platinum (Pt), rhodium (Rh), and iridium (Ir) shells will be introduced. The materials were analyzed by x-ray photoelectron spectroscopy (XPS) and x-ray absorption spectroscopy (XAS) to demonstrate the significant shell electronic and geometric structure alterations induced by the core. Next, these materials along with the conventional Pt/TiWC and Pt/TiWN materials were tested with thermal catalytic probe reactions, namely, carbon dioxide and acetylene selective hydrogenation.
Core-shell catalysts featured superior selectivity towards the intermediate products compared to their parent NM catalysts owing to the modified geometric and electronic structures. Finally, the Pt/TaC core-shell nanoparticle was shown to outperform commercial Pt in the methanol oxidation reaction (MOR) and in methanol-interfered ORR applications due to the Pt shell electronic structure change, which facilitates direct methanol fuel cell technologies. Overall, this work not only demonstrates methods to synthesize a broader spectrum of core-shell nanoparticles but also provides a fundamental understanding of shell-core interactions in thermal-catalytic and electrochemical reactions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references (pages 177-202).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129239</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studying topologically complex DNA at the single-molecule level</title>
<link>https://hdl.handle.net/1721.1/129237</link>
<description>Studying topologically complex DNA at the single-molecule level
Soh, Beatrice W. (Wan Yuan Beatrice)
Over two decades ago, with advances in microfabrication techniques and fluorescence microscopy, single-molecule studies emerged as a powerful approach to investigate polymer dynamics at the molecular level. By providing a platform for the direct observation and precise manipulation of individual polymer molecules, single-molecule studies allow for the probing of microscopic interactions that give rise to the macroscopic properties of the polymer system. Single-molecule studies have been widely used to investigate the static and dynamic properties of double-stranded deoxyribonucleic acid (DNA) as a model polymer. Such studies not only help to develop a fundamental understanding of key topics in polymer physics that cannot be easily accessed via traditional bulk experimental methods, but also facilitate the development of emerging DNA mapping and sequencing techniques. The majority of single-molecule studies to date have involved linear DNA molecules. It is known that topological constraints on the molecular level have a significant influence on polymer dynamics. A nascent area in the field of polymer physics is the study of polymers with complex topologies. In this thesis, we present a series of single-molecule experiments and Brownian dynamics simulations used to investigate the polymer physics of topologically complex DNA. Specifically, we focus on knotted polymers, ring polymers and catenated polymer networks. To investigate the impact of a knot on polymer dynamics, we employ a combined approach of single-molecule experiments and Brownian dynamics simulations. We study experimentally the steady-state behavior of knotted polymers in planar elongational fields and find that the presence of a knot leads to a faster relaxation time and, accordingly, a shift in the coil-stretch transition for the molecule.
In consequence, the untying of a knot near the coil-stretch transition can give rise to dramatic changes in chain conformation. We use Brownian dynamics simulations to study in detail the impact of the knot untying process on polymer dynamics in planar elongational fields and complement the simulations with experimental results. As a knot moves off the chain in an elongational field, the knot size changes due to the non-uniform tension profile along the chain and causes a change in the effective Weissenberg number, which in turn leads to a change in chain extension. With the use of simulations, we further investigate the knot untying process by probing the topological pathway of an untying knot. We study the distributions of knot conformational states and knot untying pathways on uniformly tensioned chains and chains subjected to elongational fields, and demonstrate that external fields can be used to influence how a knot unties from a chain. Next, we shift focus to ring polymers. We use single-molecule experiments to study the dynamics of self-entangled circular DNA. Our results demonstrate that ring polymers can self-entangle by forming self-threadings, and that such threadings can lead to a significant slowdown in polymer dynamics. It seems counterintuitive that self-entanglements can arise in ring polymers, which lack chain ends. To delve into the physics of self-entanglements on circular chains, we implement a macroscopic system that allows for the direct visualization of chain conformation. We investigate the formation of self-entanglements on granular chains subjected to a tumbling motion, and use the well-studied self-entanglements on linear chains as a framework for interpreting self-entanglements on circular chains. We develop a method to characterize the self-entanglements on circular chains with known topological descriptors from knot theory and propose a general mechanism for the self-entanglement of circular chains.
Finally, we consider the deformation dynamics of catenated DNA networks. A kinetoplast is a complex network of catenated DNA rings that resembles a two-dimensional polymeric system. We perform single-molecule experiments to study the deformation response of kinetoplasts in a planar elongational field. Our results demonstrate that kinetoplasts deform in a stagewise fashion and undergo transient deformation at large strains, as a result of conformational rearrangements from a metastable state. In contrast to linear polymers that display a coil-stretch transition, kinetoplasts do not exhibit an abrupt transition between the non-deformed and deformed states.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 208-222).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129237</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering nanolayers for localized delivery of siRNA</title>
<link>https://hdl.handle.net/1721.1/129236</link>
<description>Engineering nanolayers for localized delivery of siRNA
Chou, Jonathan Ju-En.
RNA interference (RNAi) is a promising technology for therapeutic application. The RNAi pathway involves sequence-specific gene silencing directed by RNA fragments 21-23 nucleotides long, known as short interfering RNA (siRNA). The great potential for siRNA to modulate gene expression has prompted research in treatment for diseases including inflammatory disorders, viral infections, and a host of cancers. Yet siRNA therapy is not without its challenges. Delivery barriers such as nuclease degradation, rapid clearance, cell membrane rejection, and lysosomal degradation must be overcome for effective siRNA therapy. Local delivery of siRNA presents advantages including reduced off-target effects, increased efficacy at the target site, and reduced load requirements compared to systemic siRNA administration. Layer-by-layer (LbL) self-assembly technology is a promising method of nanolayer surface coating fabrication for the localized and controlled delivery of therapeutics. One area of particular interest for controlled localized siRNA delivery is the treatment of soft tissue wounds. Wound healing is a complex, multi-staged process wherein dysregulation in any healing phase may cause severe complications for patients. Here we present the engineering of LbL thin films for localized delivery of siRNA. We design LbL films for release of multiple siRNAs. By tuning film architecture and incorporating barrier layers to prevent interlayer diffusion, we achieve sequential release of siRNA at physiological timescales relevant to a healing wound. To improve knockdown efficacy of released siRNA complexes, we investigate the assembly of a bilayer composed of siRNA and the polycation poly(β-amino ester) (PBAE). Through a fractional factorial design, we elucidate the effects of LbL assembly parameters on the resultant film's loading, composition, and in vitro efficacy.
From these findings, we determine optimized assembly parameters for gene silencing.; Finally, we develop a mouse model for evaluating the in vivo efficacy of LbL films assembled on sutures. Findings from a pilot study with our optimized films, along with recommendations for future studies, are reported. This thesis work demonstrates the utility of LbL technology for assembling films that provide effective, controlled, localized siRNA delivery.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129236</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering myelination in vitro</title>
<link>https://hdl.handle.net/1721.1/129235</link>
<description>Engineering myelination in vitro
Espinosa-Hoyos, Daniela.
The brain is the powerhouse of the central nervous system (CNS). When this system is out of balance, the implications are massive: neurological diseases are the leading cause of disability and the second leading cause of death in the world. Restoring the system to its steady state is a task of comparable complexity. There are no cures for CNS disorders, and despite the challenges in drug discovery and development that have contributed to the shutdown of entire neuroscience programs in the pharmaceutical industry, this massive unmet need continues to motivate basic research and innovation in the space. Myelin and oligodendroglia--the myelinating cells of the CNS--play central roles in homeostasis and in the pathogenesis of a myriad of neurological disorders, including multiple sclerosis. Academic and industrial researchers need new tools, including new materials and procedures, to develop new strategies for myelin and oligodendroglial protection and repair.; This thesis leverages interdisciplinary technologies and concepts to address challenges and inefficiencies in the current approach to discovering and developing therapies for myelin disorders. We sought to address the need for preclinical in vitro tools compatible with high content screening that can replicate key aspects of myelination and the oligodendroglial niche. Inspired by the physical and mechanical properties of neuronal axons, we developed new compliant and biocompatible polymers and an additive manufacturing methodology to create Artificial Axons.
We established primary rat myelination assays and showed that Artificial Axons capture key properties of oligodendroglial and neuronal interactions, which can be imaged in real time and quantified.; We implemented induced pluripotent stem cell technology to demonstrate that some but not all aspects of oligodendrocyte mechanotransduction are conserved across rats and humans in vitro, further motivating the use of human cells for the study of uniquely human diseases. We also demonstrated the first quantitative axon-free human myelination assays and explored their use in drug discovery, including for dose-response and small-molecule screening. Finally, we discuss the modification of these assays and manufacturing methodologies, including harnessing material properties, to scale up the fabrication of Artificial Axons with better spatial resolution for high content screening.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129235</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intra/extracellular multi-drug delivery for osteoarthritis</title>
<link>https://hdl.handle.net/1721.1/129234</link>
<description>Intra/extracellular multi-drug delivery for osteoarthritis
Krishnan, Yamini, Ph. D., Massachusetts Institute of Technology.
Osteoarthritis (OA), the most common form of arthritis, affects hundreds of millions of people worldwide and tens of millions of people within the United States. This disease is typically diagnosed only after extensive and irreparable damage to the joints. There are currently no clinically effective disease-modifying drugs that can slow or stop disease progression. While cartilage degeneration is the hallmark of OA, there is increasing recognition that OA is a disease of the whole joint, and multiple joint tissues contribute to disease progression. Due to the complex pathogenic processes that involve interactions between different joint tissues, a successful disease-modifying therapy will likely require treatment with multiple drugs, each having a different target. While several disease-modifying drug candidates have shown promise in disease models both in vitro and in vivo, delivering these drugs effectively and with minimal side effects remains challenging.; Cartilage does not have a blood supply, which decreases the efficacy of systemic drug administration methods. In intra-articular injections, drugs are directly injected into the affected joints. However, any drugs injected into the joint are rapidly cleared out by the joint capsule over the span of a few hours to a day. As a result, there is limited drug penetration into cartilage. Frequent injections with high drug doses can overcome this challenge, but such a treatment would lead to undesirable systemic side effects and increase the risk of infections within the joint. Recent work in our lab has demonstrated that positively charged drug delivery carriers can bind to negatively charged extracellular matrix components in cartilage and thereby improve drug uptake and retention.
In this thesis, we characterized the effect of varying the charge of drug delivery carriers on their uptake and penetration into human and bovine cartilage tissues and cells.; We identified optimally charged carriers that can be used to deliver drugs to extracellular or intracellular targets. We successfully used these carriers to provide sustained and targeted delivery of growth factors to full-thickness human cartilage explants in vitro, and developed a mathematical model that predicts in vivo transport behavior in human knee joints. We further established an in vitro cartilage-synovium co-culture model that captures physiologically relevant tissue interactions that contribute to OA progression. We also tested drugs targeting inflammatory pathways in this co-culture model, and the results provide a starting point for developing a combination therapy of growth factors and anti-inflammatory drugs conjugated to optimally charged carriers. This thesis is organized as follows: Chapter 1 provides a broad overview of different diseases affecting cartilage, including osteoarthritis.; In Chapter 2, we used engineered green fluorescent proteins (GFPs) with a range of net positive charges and surface charge distributions to characterize the effects of charge on carrier transport in cartilage. In both bovine and human cartilage, the uptake of GFPs into cartilage tissue explants decreased with increasing net charge. In contrast, cellular uptake of GFPs increased with increasing charge. Experiments with three neutrally charged GFP variants demonstrated that the surface charge distribution of the carrier also plays an important role in determining its transport properties. Based on the results of this study, we identified optimally charged GFP carriers for delivering drugs to extracellular matrix or cell-surface targets, as well as to intracellular targets. 
In Chapter 3, we tested the delivery of insulin-like growth factor 1 (IGF-1), a proanabolic drug, using engineered GFP carriers.; Since the target for IGF-1 is the extracellular domain of a cell-surface receptor, and a cationic GFP variant with a net charge of +9 (abbreviated as +9 GFP) was found to be optimal for extracellular targets, we designed, expressed and purified fusion proteins of IGF-1 with +9 GFP. Five fusion protein variants had flexible or rigid polypeptide linkers of different sizes connecting the IGF-1 and +9 GFP domains. A sixth fusion protein with no linker was also synthesized. Single doses of two of the fusion proteins had sustained IGF-1 bioactivity in both normal and cytokine-treated human cartilage explants for 7 to 10 days. These fusion proteins increased sulfated glycosaminoglycan (sGAG) biosynthesis rates in normal cartilage and rescued the loss of sGAG biosynthesis in cytokine-treated cartilage, but could not rescue the increase in cumulative sGAG loss caused by cytokine treatments. These responses are consistent with the effects of free IGF-1 in human cartilage explant cultures.; However, the main difference is that free IGF-1 needs to be continuously replenished to achieve these effects, whereas a single dose at the start of the experiments was sufficient for the fusion proteins. All of the experimental work in this thesis was performed using in vitro cartilage explant cultures. In order to successfully translate these results to preclinical and clinical studies, it is important to predict the transport behavior of GFP carriers and carrier-drug conjugates once they are injected into the joints. In Chapter 4, we developed a mathematical transport model and fit model predictions to data from in vitro dynamic uptake experiments to estimate the transport properties of engineered GFPs and GFP-IGF-1 fusion proteins in cartilage.
This model was then used to predict the concentration of GFPs and GFP-IGF fusion proteins in synovial fluid and inside human knee cartilage in an intact knee joint as a function of time after intra-articular injection.; The model predicted that significant amounts of the injected molecules could quickly penetrate cartilage tissue before being cleared out by the joint capsule. These cartilage concentration predictions will enable the estimation of injection doses that can achieve appropriate drug doses within cartilage. The model also predicts the amount of GFPs and GFP-IGF fusion proteins that are cleared out into the systemic circulation by the joint capsule. These predictions will be useful in assessing the likelihood of potential systemic side effects and estimating safe injection doses. In Chapter 5, we established an in vitro model of post-traumatic osteoarthritis (PTOA) in which cartilage and synovium explants were cultured together. In these experiments, the injury caused by cutting synovium explants triggered the release of large quantities of cytokines.; In both human and bovine co-cultures, the cytokine levels were comparable to those observed clinically in the days and weeks following a traumatic joint injury. Cytokine and chemokine levels in co-culture decreased with time, which is also similar to clinical observations. Co-culture with synovium led to significant decreases in the sGAG biosynthesis rate, explant sGAG content, explant metabolic rate and cell viability in cartilage explants. There were no changes in the sGAG biosynthesis rate or explant sGAG content of human cartilage explants that were co-cultured with synovium over the 2-week duration of the experiments. In Chapter 6, we tested toll-like receptor (TLR) inhibitors and MAPK pathway inhibitors in the human cartilage-synovium co-culture system.
In experiments with tissues from two human donors, treatment with these inhibitors decreased the levels of cytokines and chemokines released by synovium.; Future directions could include further characterization with multiple donor tissues (in order to account for donor-to-donor variability), and conjugating the inhibitors to optimally charged GFPs for combination therapy with GFP-IGF fusion proteins.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 235-269).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129234</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-layer nanoparticles for cytokine delivery to treat cancer</title>
<link>https://hdl.handle.net/1721.1/129233</link>
<description>Layer-by-layer nanoparticles for cytokine delivery to treat cancer
Barberio, Antonio Eric.
Since the initial approval of checkpoint inhibition in 2011, immunotherapy has become an ever more present therapeutic strategy in the clinic and an increasingly large focal point in preclinical cancer research. Much of the success of immunotherapy in the clinic has focused on expanding indications of checkpoint inhibitors, which "take the brakes off" the immune response to cancer. However, this strategy has seen limited success in many solid tumors, with only a small fraction of patients responding. One explanation for this phenomenon is a "cold" or poorly immune-infiltrated tumor environment. An alternative strategy to utilize the immune system to fight the tumor in these cases is to deliver a proinflammatory agent such as a cytokine to drive immune infiltration and activity within the tumor environment, or "hitting the gas" on the cancer immunity cycle.; Unfortunately, many proinflammatory cytokines, such as interleukin-12 (IL-12), that have been translated to the clinic have shown high, schedule-dependent toxicity at relevant doses, making translation infeasible. One strategy to potentiate administration of therapies that are too toxic for systemic delivery is to use a nanoparticle delivery vehicle to concentrate the therapy within tumors and avoid off-target exposure. However, proinflammatory cytokines such as IL-12 pose unique design challenges for optimal delivery from a nanoparticle, including efficient encapsulation, subcellular targeting to cell surfaces to maintain activity on external receptors, and targeting to tumor cells to concentrate IL-12 in tumors and avoid systemic exposure. In this thesis, we utilize the layer-by-layer (LbL) nanoparticle technique to adjust the material properties of a nanoparticle delivery vehicle to meet these design criteria.; We demonstrate extensive in vitro and in vivo characterization of the designed LbL nanoparticles.
We demonstrate reduced toxicity and enhanced efficacy of systemic IL-12 therapy from optimized LbL nanoparticles not only compared to carrier-free IL-12 but also compared to a simpler nanoparticle design that does not incorporate targeting polymer layers. Importantly, we demonstrate this effect in an orthotopic ovarian tumor model, a malignancy that has been particularly refractory to immunotherapies currently available in the clinic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 129-134).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129233</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The child as hacker : building more human-like models of learning</title>
<link>https://hdl.handle.net/1721.1/129232</link>
<description>The child as hacker : building more human-like models of learning
Rule, Joshua S. (Joshua Stewart)
Cognitive science faces a radical challenge in explaining the richness of human learning and cognitive development. This thesis proposes that developmental theories can address the challenge by adopting perspectives from computer science. Many of our best models treat learning as analogous to computer programming because symbolic programs provide the most compelling account of sophisticated mental representations. We specifically propose that learning from childhood onward is analogous to a style of programming called hacking--making code better along many dimensions through an open-ended and internally-motivated set of diverse values and activities. This thesis also develops a first attempt to formalize and assess the child as hacker view through an in-depth empirical study of human and machine concept learning. It introduces list functions as a domain for psychological investigation, demonstrating how they subsume many classic concept learning tasks while opening new avenues for exploring algorithmic thinking over complex structures. It also presents HL, a computational learning model whose representations, objectives, and mechanisms reflect core principles of hacking. Existing work on concept learning shows that learners both prefer simple explanations of data and find them easier to learn than complex ones. The child as hacker, by contrast, suggests that learners use mechanisms that dissociate hypothesis complexity and learning difficulty for certain problem classes.; We thus conduct a large-scale experiment exploring list functions that vary widely in difficulty and algorithmic content to help identify structural sources of learning difficulty. We find that while description length alone predicts learning, predictions are much better when accounting for concepts' semantic features.
These include the use of internal arguments, counting knowledge, case-based and recursive reasoning, and visibility--a measure we introduce to modify description length based on the complexity of inferring each symbol in a description. We further show that HL's hacker-like design uses these semantic features to better predict human performance than several alternative models of learning as programming. These results lay groundwork for a new generation of computational models and demonstrate how the child as hacker hypothesis can productively contribute to our understanding of learning.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 241-258).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129232</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating theories of speaker choice in a classifier language</title>
<link>https://hdl.handle.net/1721.1/129231</link>
<description>Investigating theories of speaker choice in a classifier language
Zhan, Meilin, Ph. D., Massachusetts Institute of Technology.
Speakers often face choices as to how to structure their intended message into an utterance. When multiple options are available to express more or less the same meaning, what general principles govern speaker choice? Here I investigate the influence of contextual predictability on the encoding of linguistic content manifested by speaker choice in a classifier language, Mandarin Chinese. In English, a numeral modifies a noun directly (e.g., three tables). In classifier languages such as Mandarin Chinese, it is obligatory to use a classifier (CL) with the numeral and the noun (e.g., three CL.flat table, three CL.general table). While different nouns are compatible with different specific classifiers, there is a general classifier "ge" (CL.general) that can be used with most nouns. I focus on the alternation between using the general classifier and a specific classifier with the same noun, where the two options are nearly semantically invariant.; When the upcoming noun has high surprisal, the use of a specific classifier would reduce surprisal at the noun and thus potentially facilitate comprehension (predicted by the Uniform Information Density account (Levy &amp; Jaeger, 2007)), but the use of that specific classifier may be dispreferred from a production standpoint if accessing the general classifier requires less effort (predicted by the Availability-Based Production account (Bock, 1987; Ferreira &amp; Dell, 2000)).
Using evidence from naturalistic language datasets and real-time language production experiments, I argue that speaker choice emerges from availability-based production and is the result of speaker-centric information processing, where availability is affected by linguistic experience such as contextual predictability and word frequency, as well as real-time production pressure in speaking.; From the perspective of language design, the redundancy in information content shared between nouns and classifiers provides a mechanism that is potentially beneficial for the listener, making communication more robust. However, the use of a specific classifier is constrained by speaker-oriented, availability-based production.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 117-125).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129231</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>State-space modeling and electroencephalogram source localization of slow oscillations with applications to the study of general anesthesia, sedation and sleep</title>
<link>https://hdl.handle.net/1721.1/129230</link>
<description>State-space modeling and electroencephalogram source localization of slow oscillations with applications to the study of general anesthesia, sedation and sleep
Hotan, Gladia Chork.
General anesthesia, sedation and sleep correspond to distinct physiological states on a spectrum of unconsciousness. Slow oscillations (0.1-1 Hz) are a common feature of these unconscious states. It is unclear whether these slow oscillations might have different properties that could relate to mechanistic or behavioral differences observed in these states. In this thesis we develop novel methods to characterize the dynamic properties and spatial relationships of slow oscillations during general anesthesia, sedation, and sleep. First we analyze the electroencephalogram (EEG) power spectrum in each of these states and find that slow oscillation power increases with increasing levels of unconsciousness. Next, we perform source localization analysis to characterize the spatiotemporal relationships among distributed cortical generators for the slow oscillation using canonical coherence analysis. We find that the inherent spatial dispersion of minimum norm estimation (MNE) estimates could produce spurious coherence values even when sources were uncorrelated. To improve the accuracy of coherence estimates, we develop an improved source localization method using a state space model for the slow oscillation. This method employs a novel state space representation for oscillatory signals developed by Matsuda and Komaki, combined with an expectation maximization (EM) algorithm to estimate the model parameters in the sensor and source spaces. We demonstrate in simulation studies that this oscillator-EM method improves localization performance as compared to MNE. Finally, we apply the oscillator-EM method to analyze slow oscillations in the propofol, dexmedetomidine and sleep datasets. We illustrate how the application of this novel state space model and source localization method can elucidate novel properties of slow oscillation dynamics and coherence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 103-109).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129230</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tool development for the rapid identification of microbiome manipulating agents</title>
<link>https://hdl.handle.net/1721.1/129224</link>
<description>Tool development for the rapid identification of microbiome manipulating agents
Cervantes, Bernardo.
Manipulation of complex microbial communities, such as human microbiomes, plays a critical role in the study and treatment of microbiome-associated diseases. However, the tools available to perform microbiome manipulations suffer from a lack of specificity. Current methods like the use of antibiotics and microbiome transplants decimate natural ecologies and run the risk of introducing unknown agents into the microbiome. New tools capable of manipulating the microbiome with strain-level specificity hold great potential to advance research into the microbiome and the development of new therapeutics. In this thesis, I present advancements aimed at accelerating the discovery and study of new microbiome manipulating agents. First, I present our work to develop a method for the rapid identification of displacer strains (RIDS). Displacer strains are capable of selectively replacing other strains within a complex microbial community without damage to the ecology. We showed that RIDS is cheaper and faster than current methods for the discovery of microbe-microbe interactions. Next, I present our work using RIDS to discover a potential displacer strain (BN6) isolated from a healthy human. We show that BN6 secretes a narrow-spectrum antimicrobial against Enterococci, an important bacterial genus associated with nosocomial infections. We also demonstrate BN6's ability to displace Enterococcus faecium in liquid coculture, motivating further work to validate BN6's status as a displacer strain. I also present a collaborative effort to build a synthetic microbiome that enables its host to consume an otherwise inaccessible carbon source. Finally, I discuss the current state of microbiome manipulation tools, where I highlight some of the exciting recent advancements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 93-103).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129224</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic devices for robust, context-independent control of gene expression levels in mammalian cells</title>
<link>https://hdl.handle.net/1721.1/129221</link>
<description>Genetic devices for robust, context-independent control of gene expression levels in mammalian cells
Jones, Ross D.
Synthetic biology is an emerging engineering discipline that aims to program living cells with novel and useful functions. Though great progress has been made in the field so far, the development of new gene regulatory systems remains slow and difficult. A major cause of this difficulty is context-dependence of gene expression. In this thesis, we develop synthetic genetic controllers to decouple the behavior of genetic devices from their context in mammalian cells. First, we study the context-dependence caused by competition among genes for shared cellular gene expression resources, finding that overloading such resources pervasively affects the expression levels of engineered genetic devices. We then develop a genetic feedforward controller to decouple gene expression levels from changes in cellular resource availability.; We show that this controller is effective at offsetting the effects of resource loading by various resource competitors on gene expression levels across commonly-used cell lines. In addition to resource loading, the feedforward controller also suppresses gene expression noise resulting from cell-to-cell variation in gene delivery. To address limitations of our feedforward controller, we next develop a genetic feedback controller based on protein phosphorylation-dephosphorylation cycles. Compared to the feedforward controller, the feedback controller provides less resistance to resource loading, but affords greater tunability of gene expression levels and enables desensitization to the context-dependence caused by off-target gene regulation.; Finally, we develop a control strategy for precisely overwriting the expression level of an endogenous gene, addressing the context-dependence that arises from interactions with natural gene regulation networks and enabling improved efficiency in cell fate reprogramming.
Overall, the gene expression control strategies developed in this thesis will enable more rapid and predictable development of sophisticated gene regulatory systems that are resistant to their context within mammalian cells.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129221</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical and transcriptional alterations during cancer cell transendothelial migration</title>
<link>https://hdl.handle.net/1721.1/129219</link>
<description>Mechanical and transcriptional alterations during cancer cell transendothelial migration
Roberts, Anya Burkart.
Cancer metastasis accounts for over ninety percent of cancer-related deaths. A critical aspect of metastasis is the extravasation of tumor cells through blood vessel endothelial cell junctions, thought to be smaller than the tumor cell nuclear diameter. The significant deformations required for extravasation change the chromatin spatial configuration along with the distribution of nuclear enzymes and transcription factors. However, it remains unknown whether tumor cells modulate their mechanical properties in order to facilitate this extravasation and whether the transcriptome is altered in this process to further downstream colonization. Improved understanding of how tumor cells undergo transendothelial migration and the resulting transcriptomic implications may help uncover new approaches to prevent cancer metastasis.; In this thesis, we utilized an in vitro 3D cell model for transendothelial migration, in which tumor cells cross an endothelial monolayer and travel into a collagen gel without interference from stiff substrates and fixed pores. We employed two non-perturbative optical methods, Brillouin confocal microscopy and confocal reflectance quantitative phase microscopy, to map the mechanical properties of live transmigrating tumor cells. First, we demonstrated agreement in the measurements from these two methods by testing changes in tumor cell nuclear mechanical properties in response to doxorubicin, a common chemotherapeutic. Our subsequent studies of tumor cells during transendothelial migration revealed that they soften and that this softening persists (both in bulk internal and nuclear membrane measurements). These results are the first demonstration of tumor cell mechanical property modulation during transendothelial migration.; We also investigated transcriptomic alterations during transendothelial migration. We accomplished this by separating extravasated from non-extravasated tumor cells at different time points and performing RNAseq.
We found that significant transcriptomic alterations occur during transendothelial migration, especially up-regulation of genes involved in the epithelial-to-mesenchymal transition and lamellipodia assembly, which support further migration into the stroma. Intriguingly, the lamin genes, which are a primary determinant of nuclear stiffness, were not differentially regulated. These experiments provide new insights into genetic alterations that occur in tumor cells during extravasation. In summary, these findings shed light on the modulation of tumor cell mechanics and transcriptome during metastasis across the blood vessel endothelium.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages [150]-[171]).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129219</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering physiologically relevant in vitro liver models for inflammation response and vascularized co-culture</title>
<link>https://hdl.handle.net/1721.1/129217</link>
<description>Engineering physiologically relevant in vitro liver models for inflammation response and vascularized co-culture
Wang, Alex J-S.
In vitro human liver models are becoming increasingly crucial for disease modeling and drug development. Here we take a design principle-driven approach to engineering more physiologically relevant 3D liver models by focusing on microenvironmental parameters that influence physiological function. Specifically, we address the biophysical properties of the scaffold and its interplay with culture medium components, and examine the effects on phenotypes, including inflammation response. First, we engineered a poly(ethylene glycol) (PEG)-based hydrogel scaffold to support 3D tissue formation in a flow-driven bioreactor. This geometrically controllable, low-swelling, and physiologically stiff gel sustained a healthier hepatic morphology and better functional stability compared to traditional polystyrene cultures. Second, we improved upon existing 3D liver spheroid aggregation platforms by engineering a 3D-printed alginate microwell system.; Spheroids produced with this microwell system are functionally stable in long-term culture and can be efficiently harvested for downstream applications. We investigated liver models generated from these PEG and alginate scaffolds for their response to inflammatory signals in culture media. We initially observed a lower basal inflammatory state when primary human hepatocytes were cultured in both systems compared to polystyrene. We then developed a defined, serum-free, growth-factor-supplemented medium, which enhanced hepatocyte function and strongly induced an inflammatory and regenerative microenvironment. Gene expression profiling, multiplexed cytokine analysis, and imaging indicated that differential responses in each scaffold were due to the attenuation of YAP/TAZ signaling. Notably, PEG scaffolds gave the highest response range, indicating that this model can respond dynamically to stimuli.; Additionally, we saw a more controlled dose-dependent response in PEG to short-term TGFβ stimulation. 
Using the culture technologies and the developed media, we engineered a vascularized tri-culture model using hepatocytes, endothelial cells, and mesenchymal stem cells. This model forms a robust vascular network that interacts with hepatic spheroids. Seeding tri-culture spheroids into a flow-driven bioreactor resulted in the formation of lumen-like features indicative of physiological polarization and morphology. This thesis illuminates design principles for multifaceted liver tissue engineering and introduces generalizable and translatable technologies. We further enable the design of sophisticated liver disease models that can support more predictable drug development and provide insights into liver biology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129217</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ecological insights through single-cell measurements of marine bacteria</title>
<link>https://hdl.handle.net/1721.1/129213</link>
<description>Ecological insights through single-cell measurements of marine bacteria
Gao, Cherry.
Bacteria in the ocean, though invisible to the naked eye, play an indispensable role in facilitating life on Earth by driving chemical reactions that are essential to the planet's habitability. Some marine bacteria, however, cause disease outbreaks that are capable of rapid and massive destruction of ecosystems. Although individual bacterial cells are ~1 μm in size, their collective action enables large-scale nutrient fluxes throughout the marine food web, and can also wreak havoc in marine systems with major socioeconomic consequences for humans. In this thesis, I seek to connect single-cell measurements of behavior and metabolism of marine bacteria to ecological processes that shape global biogeochemical cycles and influence ecosystem health.; In particular, I focus on the impact of microbial activities on two globally relevant contexts: (1) the biogeochemical cycling of sulfur, a chemical element that is essential to life, and (2) coral disease, which threatens the reef ecosystems that support marine biodiversity and provide food security for many human coastal communities. In Chapter 1, I describe the development of synthetic biology tools for the construction of fluorescent reporters in a marine bacterium (Ruegeria pomeroyi). These engineered reporter strains enabled the investigations in Chapter 2, which presents the first single-cell measurements of the transcriptional response of R. pomeroyi to different concentrations of dimethylsulfoniopropionate (DMSP), a pivotal compound in the oceans' carbon and sulfur cycles and a key chemical currency in marine microbial interactions. These measurements revealed the importance of microscale DMSP hotspots in marine sulfur cycling.; In Chapter 3, I describe the simultaneous measurements of behavior (through microscopy) and gene expression (through RNA sequencing) of a coral pathogen, Vibrio coralliilyticus, to investigate the sequence of microscopic events preceding infection. 
The Appendix describes the methodology of tracking single cells over time through quantitative microscopy and high-throughput image analysis. The application of new tools from biological engineering to marine microbial ecology presents an unprecedented opportunity to understand the connections between single-cell behaviors and ecosystem- and global-scale processes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 181-189).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129213</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of XBP1s in the unfolded protein response and N-linked glycosylation</title>
<link>https://hdl.handle.net/1721.1/129211</link>
<description>The role of XBP1s in the unfolded protein response and N-linked glycosylation
Chen, Kenny, Ph. D., Massachusetts Institute of Technology.
The secretory pathway processes approximately one-third of the cellular proteome, modifying proteins with diverse chemical structures such as carbohydrates. These modifications can help guide protein folding and expand the functional diversity of the proteome, ultimately influencing intracellular signaling and extracellular interactions. The endoplasmic reticulum (ER) is the site of protein folding along the secretory pathway, featuring a suite of chaperones to assist protein folding and quality control factors for degrading misfolded proteins. Co- and post-translational modifications such as N-glycosylation take place in the ER, and glycoproteins are further processed in the Golgi to yield a vast array of N-glycan structures. During both normal physiology and disease, cells encounter environments that can result in proteotoxic stress. The proteostasis network safeguards against protein misfolding stress through the upregulation of chaperones and quality control factors.; The unfolded protein response (UPR) regulates the ER's proteostasis network through the activity of transcription factors that remodel the expression of proteostasis regulators. Prior studies in our lab have established a role for the UPR's XBP1s transcription factor in N-glycan maturation, demonstrating that XBP1s bridges ER stress with the molecular architecture of N-glycans. However, these studies were limited to analyzing ectopically expressed model proteins. This thesis examines the role of XBP1s in regulating the structural distribution of N-glycans in endogenous systems, and explores the mechanisms by which XBP1s activation is regulated. 
We employed stress-independent activation of XBP1s and glycomic analyses by lectin microarrays and mass spectrometry to show that XBP1s drives significant changes in sialylation and bisecting GlcNAc in HEK293 cells, and in high-mannose, branched, and core fucosylated N-glycans in HeLa cells.; We also inhibited formation of XBP1s in breast cancer cells displaying constitutively high levels of XBP1s to show that glycosylation features associated with malignancy are modestly affected when XBP1s formation is blocked. Lastly, we demonstrated that pharmacological activation of the IRE1-XBP1s signaling axis cannot be sustained despite loss of co-chaperones negatively regulating IRE1. Our results demonstrate that XBP1s is a significant regulator of both the UPR and N-glycosylation, and they emphasize the importance of studying the regulation of IRE1-XBP1s signaling for understanding disease targets.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129211</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-cell technology developments: from 3' barcoding to recording historical metadata through endothelial cell differentiation</title>
<link>https://hdl.handle.net/1721.1/129209</link>
<description>Single-cell technology developments: from 3' barcoding to recording historical metadata through endothelial cell differentiation
Gaillard de Saint Germain, Alethe.
Complex organisms, such as humans and mice, consist of trillions of cells, yet we lack the tools to deeply characterize the immense space of cellular identity and behavior that defines health and disease. Indeed, evidence shows that cells, even those derived from identical clones, can present differences at the genomic, transcriptomic, and epigenomic levels. It is therefore critical for researchers to be able to conduct studies at single-cell resolution in order to understand the vast diversity of life processes and further develop biological technologies. Here we describe the implementation of a plate-based 3' single-cell RNA-sequencing (scRNA-Seq) barcoding strategy which allowed us to dramatically reduce costs, ease implementation, improve throughput, and generate more quantitative data.; Further developments in 3' barcoding strategies, allowing parallel sequencing of thousands of cells at once, enabled us to study the mechanisms by which T cells cross into solid tumors (i.e., their interactions with the endothelial cells lining blood vessels, and more specifically venular endothelial cells; VECs). This revealed a unique transcriptional profile in VECs which highlights the importance of several transcription factors in establishing a gene signature conducive to T cell recruitment in immunogenic tumors. However, scRNA-Seq can only give us information about the state of a cell at the time of sequencing. Thus, to further explore the mechanisms behind single-cell heterogeneity, we worked on developing a strategy to record "historical" data in single cells. A CRISPR-based and a recombinase-based strategy were explored in this work.; CRISPR presents the advantage of being very versatile and easy to multiplex, while recombinases are a well-established inducible DNA editing tool that is easier to implement. Using recombinases, we were able to demonstrate the recording of two independent signals in single cells. 
Overall, our work helped establish 3' barcoding as a valid strategy for scRNA-Seq, laying the foundation for the development of high-throughput technology in the lab, which we then used to explore endothelial cell responses in cancer and to develop new tools that couple historical "metadata" with scRNA-Seq.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129209</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for studying cellular differentiation using single-cell RNA-sequencing</title>
<link>https://hdl.handle.net/1721.1/129208</link>
<description>Computational methods for studying cellular differentiation using single-cell RNA-sequencing
Yeo, Hui Ting Grace.
Single-cell RNA-sequencing (scRNA-seq) enables transcriptome-wide measurements of single cells at scale. As scRNA-seq datasets grow in complexity and size, more complex computational methods are required to distill raw data into biological insight. In this thesis, we introduce computational methods that enable analysis of novel scRNA-seq perturbational assays. We also develop computational models that seek to move beyond simple observations of cell states toward more complex models of underlying biological processes. In particular, we focus on cellular differentiation, the process by which cells acquire a specialized form or function. First, we introduce barcodelet scRNA-seq (barRNA-seq), an assay that tags individual cells with RNA 'barcodelets' to identify them based on the treatments they receive. We apply barRNA-seq to study the effects of the combinatorial modulation of signaling pathways during early mESC differentiation toward germ layer and mesodermal fates.; Using a data-driven analysis framework, we identify combinatorial signaling perturbations that drive cells toward specific fates. Second, we describe poly-adenine CRISPR gRNA-based scRNA-seq (pAC-seq), a method that enables the direct observation of guide RNAs (gRNAs) in scRNA-seq. We apply it to assess the phenotypic consequences of CRISPR/Cas9-based alterations of gene cis-regulatory regions. We find that the power to detect transcriptomic effects depends on factors such as the rate of mono- or biallelic loss, baseline gene expression, and the number of cells per target gRNA. Third, we propose a generative model for analyzing scRNA-seq containing unwanted sources of variation. 
Using only weak supervision from a control population, we show that the model enables removal of nuisance effects from the learned representation without prior knowledge of the confounding factors.; Finally, we develop a generative modeling framework that learns an underlying differentiation landscape from population-level time-series data. We validate the modeling framework on an experimental lineage tracing dataset, and show that it is able to recover the expected effects of known modulators of cell fate in hematopoiesis.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 159-176).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129208</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing scientific innovation by learning on knowledge graph dynamics</title>
<link>https://hdl.handle.net/1721.1/129206</link>
<description>Optimizing scientific innovation by learning on knowledge graph dynamics
Weis, James W.(James Woodward)
The integration of data-driven methodologies, including techniques from artificial intelligence and network science, into the research process and funding ecosystem is an exciting, potentially paradigm-changing opportunity to augment the effective intelligence of the scientific community--potentially increasing the efficiency, fairness, and overall impact of the scientific enterprise. In this thesis, we explore the development of new technologies to extract actionable insights from large-scale data corpora through the design and deployment of machine learning approaches.; Specifically, we describe (1) the creation of new algorithms that compute on simulations of complex biophysical processes to generate novel scientific insights, (2) artificial intelligence-based improvements to the academic publishing system, (3) a study of institutional barriers bottlenecking the development of large-scale algorithmic approaches to scientific knowledge analysis, and (4) a new algorithmic framework that, by learning from the history of biotechnology innovation as modeled by dynamic knowledge graphs, is able to identify with high fidelity new technologies of likely high future impact. We also develop tools to facilitate the real-world utilization of these quantitative approaches, effectively demonstrating how these "intelligence-augmenting" algorithms could be used to more efficiently navigate the scientific literature and design scientifically impactful collaborations.; Finally, we conclude by discussing the potential deployment of these technologies in the future--with a focus on potential applications in the funding of scientific research and commercialization, and the potential design of diversified, impact-optimized funding portfolios. 
Collectively, our results demonstrate that machine learning approaches can be used to extract meaningful insight from existing data corpora, and that these signals can be used synergistically with human intuition to increase the rate at which we collectively generate breakthrough scientific insights and transformative new technologies.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 142-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129206</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A framework for proving the computational intractability of motion planning problems</title>
<link>https://hdl.handle.net/1721.1/129205</link>
<description>A framework for proving the computational intractability of motion planning problems
Lynch, Jayson(Jayson R.)
This thesis develops a framework for proving computational complexity results about motion planning problems. The model captures reactive environments with local interaction. We introduce a motion planning problem involving one or more agents that move around a connection graph and through "gadgets", which are stateful parts of the environment whose state and traversability can change only in response to traversals of the agent within the gadget. The model includes variants for 0-player, 1-player, 2-player, and team imperfect information games. This thesis considers various classes of gadgets and gives both algorithms and hardness results ranging from NL-completeness to undecidability. Full dichotomies are obtained for some classes, including the natural class of gadgets which can be traversed a bounded number of times. For 1-player this gives a separation between containment in NL and NP-completeness, for 2-player a separation between containment in P and PSPACE-completeness, and for team imperfect information games a separation between containment in P and NEXPTIME-completeness. Our model builds on and generalizes several other proof techniques for motion planning problems and games. This thesis also provides examples of how this new framework can simplify many of those old results, as well as applying to many new hardness results for video games and variants of block pushing puzzles. New hardness results include PSPACE-hardness for Trainyard, Sokobond, The Legend of Zelda: Breath of the Wild, The Legend of Zelda: The Minish Cap, The Legend of Zelda: Oracle of Seasons, Captain Toad: Treasure Tracker, Super Mario Odyssey, Super Mario Galaxy 1 and 2, Super Mario Sunshine, and Super Mario 64.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 235-240).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129205</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coupling sparse models and dense extremal problems</title>
<link>https://hdl.handle.net/1721.1/129189</link>
<description>Coupling sparse models and dense extremal problems
Hirst, James, Ph. D., Massachusetts Institute of Technology.
We study the problem of coupling a stochastic block model with a planted bisection to a uniform random graph having the same average degree. Focusing on the regime where the average degree is a constant relative to the number of vertices n, we show that the distance to which the models can be coupled undergoes a phase transition from O(√n) to ω(n) as the planted bisection in the block model varies. This settles half of a conjecture of Bollobás and Riordan and has some implications for sparse graph limit theory. In particular, for certain ranges of parameters, a block model and the corresponding uniform model produce samples which must converge to the same limit point. This implies that any notion of convergence for sequences of graphs with Θ(n) edges which allows samples from a limit object to converge back to the limit itself must identify these models. On the other hand, we demonstrate that the existing theory of dense graph limits is a powerful tool for dealing with extremal problems on graphs with Θ(n²) edges. The language of graphons, along with the flag algebra method, allows us to obtain many results which would otherwise be out of reach or at least difficult to manage. We study graph profiles, which capture correlations between different graphs in a larger network. Further, we give flag algebra proofs of some inducibility-like problems which have attracted particular interest recently.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 45-47).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129189</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A worker-centered approach to convex optimization in engineering design</title>
<link>https://hdl.handle.net/1721.1/129175</link>
<description>A worker-centered approach to convex optimization in engineering design
Burnell, Edward(Edward E.)
Design tools shape what engineers and their organizations consider desirable. In particular, "design models" that implement interactions among parameters are central to the work of early-stage concept development. Such models delimit the space of designs under consideration, and are built to intentionally expand, explore, or shrink that space. Engineers often develop close relationships with their models, seeing them as mirrors to their own cognition. Within an organization these models also work as rhetorical reinforcement, as maps labeled to argue for particular design decisions, and it is often within and around design models that workers' perspectives clash and coalesce. Design models are thus loci for understanding what might be built which encode decisions on why: they are mathematical artifacts, but so deeply intertwined with design process and practice that to see them purely mathematically misses much of what makes them interesting.; This thesis establishes design models and their design spaces as inextricable from user experience. Specifically, although convex optimization is rarely analyzed for human factors, there is a great deal of overlap between convexity and design work. Such findings inform and are informed by GPkit, a modeling language for geometric programming developed by the author and used daily in both industry and academia. How organizations incorporated GPkit emerged from and inspired its syntax and algorithms, a two-way flow between mathematical structure and worker knowledge that formed contributions both to the formulation and interpretation of convex programs and to the understanding of early-stage engineering design. For instance, dual solutions are generally more valuable than an "optimal design", and from this insight are drawn novel algorithms and design methods.; There is great potential for software that collaborates with workers, for no one knows as well as they do how technology structures each day's work. 
This thesis demonstrates that conducting research with this in mind, as a participant in a worker-centered community of inquiry, can improve engineering design. Technology shapes what is considered desirable, and any success of this research has lain in taking that as both an opportunity and a responsibility.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 105-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129175</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New problems in revenue management, theory and applications</title>
<link>https://hdl.handle.net/1721.1/129172</link>
<description>New problems in revenue management, theory and applications
Fata, Elaheh.
This thesis studies new applications of revenue management that can be used by airlines and the ride-sharing industry. Our first and main focus is on online advertising with applications to advertisement of flight tickets. Moreover, the last chapter studies an online ride-sharing problem in which both drivers and riders appear on the platform in an online manner and the goal is to find the best matchings between the two parties. More specifically, Chapter 2 studies an online advertising problem, from the perspective of the advertiser, whose goal is to determine the best set of targets to show their ads to. We develop new algorithms that help advertisers, such as airline companies, to bid optimally on target portfolios while taking into account some limitations inherent to online advertising. In this work, we formulate the problem and develop an Optimistic-Robust Learning (ORL) algorithm that uses ideas from Upper Confidence Bound (UCB) algorithms and robust optimization.; We prove that the expected cumulative regret of the algorithm is bounded. Additionally, simulations on synthetic and real-world data show that the ORL algorithm reduces regret by at least 10-20% compared to benchmarks. Chapter 3 studies an assortment optimization problem where a customer chooses a single item from a sequence of non-overlapping sets shown to her, while limited inventories constrain the items offered to a sequence of customers over time. An example of this is selecting a flight ticket among the options provided by a travel booking website. In the special case where all of the assortments have size one, we derive an approximation algorithm which earns at least 1 - ln(2 - 1/e) of the optimum. 
For the general assortment problem, we establish the first constant-factor approximation ratio of 0.09 when revenues are customer-specified and 0.15 otherwise.; Similarly, Chapter 4 studies an online advertisement problem with the goal of determining which products to show to each customer while respecting important online advertising business rules. More specifically, customers appear throughout time and the advertiser's goal is to determine which product (e.g., a flight ticket) to show to her while avoiding ad fatigue and respecting a set of diversity constraints on products. We study this problem under online and offline settings. Finally, Chapter 5 studies an online optimization problem with applications to ridesharing. The problem can be modeled as a bipartite graph, where one side is associated with drivers and the other side with riders. For this problem, we give optimal matching algorithms for the case of bipartite graphs with degrees at most 2, as well as bounds on competitive ratios for trees.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 215-220).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129172</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the spatiotemporal dynamics of cell-cell interactions in engineered tissues</title>
<link>https://hdl.handle.net/1721.1/129166</link>
<description>Probing the spatiotemporal dynamics of cell-cell interactions in engineered tissues
Song, Hyun Ho(Hyun Ho Greco)
Formation of capillary blood vasculature is a critical requirement for native as well as engineered organs and often dictates their growth and survival. Despite recent technological advancements such as 3D printing, the tissue engineering field still relies on vascular self-assembly to fabricate new blood vessels within an engineered tissue. Many efforts have utilized stromal fibroblasts as "feeder cells" to induce endothelial cells to undergo morphogenesis in vitro and facilitate engraftment of the vascularized tissues in vivo. However, the dispensability of these fibroblasts in different stages of vascular morphogenesis and engraftment is currently unknown and underexplored, and the ability to remove these feeder cells after assembly could be useful for clinical translation.; The goal of this thesis is to 1) investigate this temporal interaction between endothelial cells and fibroblasts by employing microfluidic platforms and genetic tools and 2) use this insight to create implantable, feeder-free organs with perfusable vasculature. To achieve this, we first describe our microfluidic platform that allows for the functional vascularization of a 3D tissue through endothelial cell and fibroblast co-culture. We then introduce a technique termed CAMEO (Controlled Apoptosis in Multicellular Tissues for Engineered Organogenesis), whereby fibroblasts are selectively ablated on demand, and utilize it in our microfluidic devices to probe the dispensability of fibroblasts during vascular morphogenesis. 
The presence of fibroblasts is shown to be necessary only during the first few days of endothelial cell morphogenesis, after which these feeder cells can be ablated without significantly affecting the structural and functional features of the developed vasculature.; Furthermore, we develop a scaled-up microfluidic platform for creating implantable tissues and use CAMEO to vascularize primary human hepatocytes, yielding tissue that rapidly integrates with the host vasculature in vivo. We demonstrate that fibroblasts are necessary for the rapid integration of the implanted vasculature and the maintenance of hepatic function in vivo. In conclusion, our study suggests that transient, initial support from fibroblasts is sufficient to drive vascular morphogenesis in engineered tissues, and with further optimization, our strategy of engineering-via-elimination may provide a new general approach for achieving desired functions and cell compositions in engineered organs.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 95-106).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129166</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information exchange and robust learning algorithms for networked autonomy</title>
<link>https://hdl.handle.net/1721.1/129160</link>
<description>Information exchange and robust learning algorithms for networked autonomy
Talak, Rajat(Rajat R.)
Networked autonomy deals with problems in which several autonomous agents interact via a communication network to cooperatively perform intelligent tasks. Networked autonomous systems operate in a constantly changing, real-world environment and thus have to keep themselves apprised of the changing state of the world and of the other autonomous agents in the network. The agents do this by continually exchanging information via the communication network. The first part of this thesis deals with the problem of optimizing information freshness in wireless communication networks formed by autonomous agents. We argue why Age-of-Information is a good metric for information freshness, show how it fundamentally differs from the traditional latency metric of packet delay, and devise several policies to minimize Age-of-Information in wireless communication networks. We consider single-hop as well as multi-hop communication networks, and we provide theoretical guarantees on the performance of the proposed policies. The autonomous systems also have to estimate their locations and construct a map of their shared environment. The estimated map and locations must be robust to outliers, model parameters, hardware failures, and unknown system dynamics for fail-safe, long-term operation of the networked system. In the second part of the thesis, we propose a new theory of uncertainty variables, which provides a mathematical framework for performing set-estimation over complex graphical models. An uncertainty variable is characterized by an uncertainty set, in which its realization is bound to lie, while conditional uncertainty is characterized by a set map. We prove equivalents of Bayes' law and the law of total probability for uncertainty variables. 
We then develop notions of independence, conditional independence, and graphical models over a collection of uncertainty variables. We propose belief propagation algorithms with provable performance guarantees for set-estimation, which can be used for localization, mapping, high-level perception and planning in networked autonomous systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 249-261).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129160</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revenue management and resource allocation for communication satellite operators</title>
<link>https://hdl.handle.net/1721.1/129158</link>
<description>Revenue management and resource allocation for communication satellite operators
Guerster, Markus.
This dissertation presents the first academic study on Revenue Management (RM) for broadband communication satellite (satcom) operators. It proposes a satcom RM framework to structure, automate, and optimize operators' demand and capacity management. New entrants, increasing demand for data, digital payloads, and new phased array technologies are likely to remake the current satcom landscape. One of the challenges, old and new, operators face is how to manage demand and capacity. This work finds that airlines' tiered pricing and seat inventory control (known as RM) offer insights for the satcom market. The satcom industry shares many characteristics with the airline industry, such as inflexible capacity, low marginal sales cost, perishable inventory, heterogeneous customers, and variable and uncertain demand. Generally, those characteristics favor the implementation of an RM system. However, four unique challenges are identified that require extending existing RM frameworks with a resource management component: First, the unit of capacity (Watts) is not the unit of demand (Mbps). Second, the resource allocation is an optimization problem in itself. Third, available capacity is uncertain because it depends on resource usage. And fourth, existing Service Level Agreements (SLAs) do not fully leverage the new satellites' flexibility. The dissertation proposes algorithmic solutions to each of these four challenges. It specifically focuses on the resource allocation process and its optimization of user terminal grouping, routing, frequency assignment, and power allocation. Finally, the value of the proposed satcom RM framework is demonstrated by applying it to a satellite operator's data. The results show that dynamic resource allocation frees up considerable capacity (38% in the analyzed scenario), which operators can monetize into additional revenues. More sophisticated RM algorithms lift revenues by 4-7% compared to heuristic pricing policies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 303-320).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129158</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Age-of-information in wireless networks : theory and implementation</title>
<link>https://hdl.handle.net/1721.1/129152</link>
<description>Age-of-information in wireless networks : theory and implementation
Kadota, Igor.
Emerging data-driven applications will increasingly rely on sharing time-sensitive information for monitoring and control. Examples are abundant: mobile robots in automated warehouses sharing status information to cooperate with each other and with humans, self-driving cars exchanging safety-related information with other vehicles and infrastructure, and smart cities analyzing data from Internet-of-Things (IoT) sensors to provide real-time feedback for vehicles and traffic management systems. In such application domains, it is essential to keep state information fresh, as outdated information loses its value and can lead to system failures and safety risks. The Age of Information (AoI) is a recently proposed performance metric that captures the freshness of information from the perspective of the destination. Optimizing AoI is a challenging objective that goes beyond low latency: it requires that packets with low delay are delivered regularly over time to every destination in the network. In this thesis, we use rigorous theory to gain insight into the AoI optimization problem and to develop practical network control mechanisms, and we leverage system implementation to evaluate the performance of these mechanisms in real operating scenarios. We consider a broadcast single-hop wireless network with a base station and a number of nodes sharing time-sensitive information through unreliable communication links. We formulate a discrete-time decision problem and use tools from mathematical optimization and stochastic control to develop network control mechanisms that optimize AoI. Our first approach is to develop an algorithm that computes the optimal transmission scheduling decision at every time t. As expected, this optimal solution is impractical due to its high computational complexity, which is shown to grow exponentially with the size of the network. 
To overcome this challenge, we propose low-complexity transmission scheduling policies with provable performance guarantees in terms of AoI. For example, we use Lyapunov Optimization to develop an AoI-based Max-Weight policy, show that this policy is optimal for symmetric networks, and show that, for general networks, this policy is guaranteed to be within a factor of two of the optimal AoI. Numerical results suggest that this Max-Weight policy achieves near-optimal performance in various network settings. Throughout the thesis, we analyze, optimize, and evaluate important classes of centralized and distributed low-complexity transmission scheduling algorithms, namely Max-Weight, Maximum Age First, Stationary Randomized, Whittle's Index, Slotted-ALOHA, and Carrier-Sense Multiple Access, using tools from Dynamic Programming, Lyapunov Optimization, Renewal Theory, and the Restless Multi-Armed Bandits framework. Leveraging the theoretical results, we propose WiFresh: an unconventional network architecture that scales gracefully, achieving near-optimal information freshness in wireless networks of any size, even when the network is overloaded. We propose and realize two strategies for implementing WiFresh: one at the MAC layer in a network of FPGA-enabled Software Defined Radios using hardware-level programming, and another at the Application layer, without modifications to lower layers of the communication system, in a network of Raspberry Pis using Python 3. Our experimental results show that the more congested the network, the more pronounced the advantage of WiFresh over an equivalent WiFi network, with WiFresh achieving a two-orders-of-magnitude improvement over standard WiFi. Our measurements suggest that WiFresh is well-suited for large-scale applications that rely on sharing time-sensitive information.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages [225]-233).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129152</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical analysis of ultrasound signals for tissue characterization : the Homodyned K Distribution</title>
<link>https://hdl.handle.net/1721.1/129150</link>
<description>Statistical analysis of ultrasound signals for tissue characterization : the Homodyned K Distribution
Tresansky, Anne Joyal Pigula.
Diagnostic ultrasound (US) is a safe and inexpensive imaging technology that is widely used for qualitative assessment of anatomic features much larger than a wavelength, at least several millimeters in size. Statistical analysis of US envelope signals can provide information about scattering from structures that are smaller than a wavelength, and can therefore provide information on tissue composition and organization that would otherwise require a biopsy. The Homodyned K (HK) distribution is the most general in the family of random walk envelope distributions, which are strongly grounded in a physical modeling of scattering and therefore are ideal for tissue characterization purposes. In this thesis, several issues are considered that relate to the implementation and interpretation of the HK distribution. The physical interpretations of the HK parameters are explored, providing greater context and understanding for clinical applications. A novel parameter estimation algorithm based on the Levenberg-Marquardt curve-fitting algorithm is presented, and it is shown to be more robust in the presence of image artifacts than the gold standard. The effects of a single-element US imaging system on HK parameters are characterized, enabling calibration and therefore system-independent measurements. Finally, two animal studies are presented that use HK parameters to characterize skeletal muscle and liver in mouse models. These results represent progress towards implementing the HK distribution as a system-independent, clinically useful technology.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 101-121).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129150</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space-based laser guide stars for astronomical observatories</title>
<link>https://hdl.handle.net/1721.1/129146</link>
<description>Space-based laser guide stars for astronomical observatories
Clark, James R., Ph. D., Massachusetts Institute of Technology.
The Laser Guide Star (LGS) concept is proposed to enable reductions in the cost of next-generation space telescopes by providing reference targets (as bright as apparent magnitude -7) to enable wavefront stability and control (WFSC) to compensate for high-rate motions of mirror segments. This will relax the requirements on the stability of the telescope, with savings that flow down to metrology, construction, and control. In this work, we present the detailed design of an LGS small satellite (and constellation of LGSs) that would fly in formation with a large space observatory that uses adaptive optics (AO) for wavefront sensing and control, or orbit around the Earth to support ground-based telescopes. We find that an LGS small satellite using the 12U CubeSat standard can accommodate a propulsion system sufficient to enable the LGS satellite to fly in formation near the targets in the telescope boresight and to meet exoplanet direct imaging mission requirements on the number of targets and duration. We simulate the formation flight for an LGS/telescope system at L2 to assess the precision required to enable wavefront sensing and control during observation, and find that commercial off-the-shelf attitude control hardware can easily satisfy the pointing needs (error &lt; 14°) and that the telescope needs to update the LGS no more than once every five minutes. We compare and recommend commercial off-the-shelf (COTS) propulsion and attitude determination and control systems (ADCS) for controlling the LGS spacecraft. We develop a constellation design tool for assessing the number of LGS spacecraft required to support a desired rate and quantity of observations at L2, and for trading that quantity against the parameters of the LGS spacecraft and the telescope(s) they support. 
We present a design reference mission (DRM) for deploying up to 19 LGS spacecraft to L2 to assist the Large Ultraviolet Optical Infrared Surveyor (LUVOIR). The L2 LGS DRM covers 259 exoplanet target systems with 5 or more revisits to each system over a 5-year mission. We also identify a series of technology demonstration missions for deploying an LGS satellite to geostationary orbit and other Earth orbits for use with 6.5+ meter ground telescopes with AO to observe Wolf 1061, 40 Eridani, and other near-equatorial targets.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 215-221).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129146</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online learning and optimization in operations management</title>
<link>https://hdl.handle.net/1721.1/129140</link>
<description>Online learning and optimization in operations management
Sun, Rui, Ph. D., Massachusetts Institute of Technology.
In this thesis, we study online learning and optimization problems in operations management, where we need to make decisions in the face of incomplete information and operational constraints in a dynamic environment. We first consider an online matching problem where a central platform needs to match a number of limited resources to different groups of users that arrive sequentially over time. The platform does not know the reward of each matching option and must learn the true rewards from the matching results. We formulate the problem as a Markovian multi-armed bandit with budget constraints, and propose an innovative algorithm that is based on assembling the policies for each single arm. We prove the algorithm's worst-case performance guarantee, and numerically show the algorithm's robust performance compared to alternative heuristics. We next consider a revenue management problem with add-on discounts, where a retailer offers discounts on selected supportive products (e.g., video games) to customers who have also purchased the core products (e.g., video game consoles). When the products' demand functions are unknown, we propose a UCB-based learning algorithm that uses an FPTAS optimization algorithm as a subroutine to determine the prices of different types of products. We show that the algorithm can converge to the optimal full-information pricing policy. We also conduct numerical experiments with real-world data to illustrate the performance of our algorithm and the advantage of using the add-on discount strategy in practice. We last consider a network revenue management problem where a retailer aims to maximize revenue from multiple products with limited inventory. The retailer does not know the demand for the different products, and must learn demand from the sales data. 
To optimize the pricing decisions, we propose an efficient algorithm that combines the Thompson sampling technique and the online gradient descent method within a primal-dual framework. In comparison to traditional algorithms that rely on frequently solving linear programs, our algorithm does not need to solve any linear program and therefore has an advantage in computational efficiency. We analyze the performance guarantee of our algorithm, and show the algorithm's fast running time through numerical experiments.
Thesis: Ph. D. in Social Engineering Systems and Statistics, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 161-167).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129140</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On what language is</title>
<link>https://hdl.handle.net/1721.1/129124</link>
<description>On what language is
Balcarras, David Alexander.
What is language? I defend the view that language is the practical capacity for partaking in communication with linguistic signs. To have a language just is to know how to communicate with it. I argue that this view -- communicationism -- is compatible with its main rival: the view that we know our language by tacitly knowing a particular generative grammar, a set of rules and principles pairing sounds with meanings. But only communicationism gets at language's essence. Moreover, the rival view may be false, for there is in fact little reason to think we tacitly know grammars. In chapter 1, I argue that communicationism is compatible with the view that language is constituted by tacit knowledge of grammar, because the brain states that realize grammatical knowledge do so because they enable us to know how to linguistically communicate. In chapter 2, I offer further reasons to accept communicationism. The starting thought that we know how to communicate by knowing how to use sentences in a particular rule-governed way in order to express our thoughts is developed into a use-based account of meaning, on which all expressions have their meanings because we know how we use them to mean things. In chapter 3, I explore the extent to which language use is enabled by unconscious representations of grammatical rules. In particular, I consider whether linguistic understanding is enabled by tacit knowledge of compositional semantics. I argue that it is not. Language comprehension and production can be explained without appeal to tacit knowledge of semantics, by instead appealing to our subpersonal capacity to translate natural language sentences into the medium of thought. I conclude that there does not seem to be any reason to believe in tacit knowledge of grammar. Finally, in chapter 4, I survey proposals about what it would be for a speaker to tacitly know a grammar, and argue that they are all inadequate. 
I conclude that linguistic meaning cannot be explained in terms of tacit knowledge of grammar. Instead, it should be understood in terms of the practical knowledge that manifests in intentional linguistic action, rather than in terms of whatever might underlie it.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 169-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129124</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Doing : an essay on causation, events, and action in the most general sense</title>
<link>https://hdl.handle.net/1721.1/129123</link>
<description>Doing : an essay on causation, events, and action in the most general sense
Baron-Schmitt, Nathaniel.
Our world is populated not just by things, such as bombs, matches, and people, but also by events, like explosions, ignitions, and decisions. Part I, "Doings", is centered around my attempt to capture the nature of events. Events straddle the realms of thing and fact, eluding analysis, making this a difficult task. Yet it is an important one, because events play crucial roles in so many places: in philosophy of action and mind, in syntax and semantics, and particularly in metaphysics, where they are widely supposed to be the only true causes and effects. Part II, "Thing Causation", argues that the true causes are things. I first argue that previous theories have failed to capture the nature of events. Jaegwon Kim's well-known view takes every event to be associated with a triple of a thing, a repeatable that the thing instantiates, and a time of instantiation. Kim uses this one-to-one association to give existence and identity criteria for events. I argue that Kim's "events" are not really events at all; insofar as we can make sense of them, they are more like facts or propositions. But Kim's approach should not be abandoned altogether; the problem is not with association itself, but rather with Kim's assumption that association is one-to-one. Dropping this assumption results in a moderately coarse-grained conception of events that better matches our ordinary conception. It shares most of the theoretical virtues that Kim's view enjoys; most importantly, association can still be used to give existence and identity criteria. And it has a number of significant theoretical advantages over Kim's view, two of which I develop in depth: these moderately coarse-grained events are robust enough to support a version of token physicalism that does not collapse into type physicalism, and they illuminate the logical structure of the determinate-determinable relation. 
A second topic in Part I is the distinction between events and states. This distinction usually is either ignored, or else captured by taking events, but not states, to be changes in things over time. The latter approach is too narrow, for it precludes instantaneous events, and it forecloses a "dynamic" picture of fundamental reality, on which there are goings-on that (unlike changes) do not consist merely in reality being one way and then another. Instead, events are best understood as cases of things doing something, or simply "doings". Rockslides, for instance, are cases of rocks sliding, and sliding is something rocks can do. Things done, like sliding, are a special sort of repeatable. Thus I say that events are associated with triples of a thing, a repeatable that can be done, and a time. I develop this very broad notion of "doing something" by appealing to a linguistic distinction between dynamic and stative verbs. This distinction is central to the linguistics literature on aspect, and it is also philosophically important, since dynamic verbs stand for things done, whereas stative verbs stand for properties. Once we understand what events are, it emerges that events are not the sorts of entities that could cause, except in a derivative sense. In Part II, "Thing Causation", I argue that causation most fundamentally involves a thing causing another thing to do something. It is most fundamentally people and explosive substances, not actions and explosions, that cause. Causation between events is reducible to thing causation, but no reverse reduction is possible. 
I also touch on a number of other questions, including whether causation is partly normative, whether causation can occur even when no particular entity does any causing, and whether free agency involves causation by an agent. Regarding the last of these, I argue that agent causation is coherent and real, and the best-known objections to it fail completely, but agent causation on its own does not do the heavy lifting some agent-causal theorists expect from it. What is needed for agent-causal freedom is not just any causing done by an agent, but causing that is basic -- that the agent does not do by doing anything further.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Page 163 blank. Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-162).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129123</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The linear limitations of syntactic derivations</title>
<link>https://hdl.handle.net/1721.1/129122</link>
<description>The linear limitations of syntactic derivations
Davis, Colin Pierce Bryon.
In this dissertation, I identify and analyze several new generalizations about how phrasal displacement and discontinuity are constrained in natural language. These patterns reveal, I argue, that many limitations of syntactic derivations are attributable to the way in which syntactic operations are cyclically interleaved with the component of the grammar that establishes word order. This finding has consequences for many phenomena, and for the architecture of grammar in general. Chapter 1 introduces the theory of cyclic linearization that serves as the foundation for this work, and several principles about the locality of movement that importantly interact with it. Chapter 2 shows that these concepts predict the crosslinguistic distribution of stranding in intermediate positions. Chapter 3 extends these considerations to an analysis of possessor extraction in colloquial English, a phenomenon subject to numerous intricate but systematic limitations. Chapter 4 provides further evidence for the theory advanced here from constraints on subextraction in Russian. Chapter 5 argues that certain facts about parasitic gaps in sentences with overlapping moved phrases reveal further evidence that linearization constrains syntactic derivations, with consequences for the nature of movement more generally. Chapter 6 argues that linear order constrains extraposition, and proposes an account of this phenomenon that addresses a number of puzzles about its distribution. Chapter 7 diverges from the theme of linearization to explore parasitic gaps in relative clauses, which connect to several results from chapter 5. Chapter 8 summarizes the findings of this dissertation, and discusses several more general implications of the framework advanced here.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 349-370).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129122</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consent and Concepts</title>
<link>https://hdl.handle.net/1721.1/129121</link>
<description>Consent and Concepts
Hodges, Jerome, Ph. D., Massachusetts Institute of Technology.
This dissertation lays out the groundwork for building a theory of radical consent and autonomy. Chapter 1, "Framing Consent," argues for a context-sensitive account of the semantic content of consent claims, and presents an introductory model of consenting as a speech act. In particular, I argue for a contrastive model of consent claims, in which consent is given against a backdrop of relevant alternatives. More generally, I argue that context-sensitivity of the sort guaranteed by this model -- viz. context-sensitivity of what is consented to in a consent claim -- is an ineluctable feature of such claims. This has far-reaching consequences, I claim, for the use of consent in both normative ethics and political philosophy. Chapter 2, "Conceptual Amelioration and Epistemic Responsibility," co-authored with Ekaterina Botchkina, looks at the question of conceptual amelioration more generally: when thinking about concepts, what is the proper role of value considerations? There we argue that, under a remarkably theory-neutral constraint, at least some conceptual interventions motivated by concerns of justice are acceptable. In particular, we suggest that particular accounts of amelioration offered or suggested by Mark Richard and Sally Haslanger are too restrictive in their metaphysical and semantical commitments to provide a general account of how amelioration is possible. Instead, we suggest that the core aim of amelioration can be understood as preserving a sort of conceptual possibility, and that this preservation is precisely what is aimed at in scientific theories' development of concepts -- in the decision, for instance, to abandon the concept of [æther] while retaining (but refining) the concept of [atom]. As such, amelioration isn't as unusual -- or as troubling -- as it might at first blush appear. 
We use a tool suggested by Steve Yablo -- what we call the "Turning-Out Test" -- as a way to test for conceptual possibility, and thus epistemic responsibility. Chapter 3, "Performative Consent and Autonomy," returns to consent, and specifically to the role of consent in legal and quasi-legal contexts of sexual assault and battery. I argue there that the speech-act account suggested in Chapter 1 is justified in such contexts on the ameliorationist grounds articulated in Chapter 2. I then extend these considerations to extra-legal contexts, and show that this account is compatible with radical feminist claims around sexual assault. In particular, I defend the possibility of an account of assault in which, at least sometimes, whether a sexual assault has occurred may depend upon a survivor's posterior assessment of the event.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 105-110).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129121</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Loanwords and the perceptual map : a perspective from MaxEnt Learning</title>
<link>https://hdl.handle.net/1721.1/129120</link>
<description>Loanwords and the perceptual map : a perspective from MaxEnt Learning
Olson, Erin(Eric K.)
This dissertation examines the predictions of two computational models of grammar within the domain of loanword phonology. These models, formulated within a Maximum Entropy (MaxEnt) framework, have been shown to be successful when simulating the effects that a substantive bias such as the Perceptual Map (PMap) hypothesis of Steriade (2001) may have on a phonological learner. While previous studies have focused primarily on modelling data taken from artificial grammar learning experiments (Wilson, 2006; White, 2013), this dissertation will instead model loanword adaptation. Loanword adaptation was chosen as a useful test domain as speakers will often choose to repair phonotactically-illicit loanwords in ways that are not attested in their native grammar. It thus provides a wealth of data about how speakers structure their grammar in the absence of overt phonological evidence. To this end, a case study of English loanword adaptation in Cantonese is undertaken. It will be shown that the patterns of consonant deletion and vowel epenthesis used by speakers of Cantonese to adapt English words are compatible with the PMap, and can be modelled through the MaxEnt learners mentioned above. It will also be shown through a series of computational simulations that Wilson's (2006) learner fails to acquire the grammar necessary to account for the patterns of loanword adaptation, while White's (2013) learner succeeds. This is a result of the way in which the PMap is encoded within these learners. While both encode the PMap as a series of asymmetrical Gaussian distributions on the weights of constraints, Wilson (2006) encodes this asymmetry through the variances, or plasticities, of the distributions, while White (2013) encodes it through the means, or target weights. 
A grammar which encodes the PMap through asymmetrical plasticities must encounter evidence from the phonology of the language in order to alter the weights of constraints. However, the loanword phonology of Cantonese crucially lacks such phonological evidence, and Wilson's (2006) model cannot make use of it when establishing constraint asymmetries. White's (2013) model, however, allows constraint asymmetries to be maintained in the absence of overt evidence, and results in more accurate grammars of Cantonese loanword adaptation.
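As a minimal illustration of the two prior encodings compared above (a toy two-candidate, two-constraint grammar with invented violation counts and frequencies, not the dissertation's actual simulations), a MAP-fitted MaxEnt grammar can bias the Gaussian prior on each constraint weight either through its mean (a target weight, in the style of White 2013) or through its variance (a plasticity, in the style of Wilson 2006):

```python
import numpy as np

# Toy MaxEnt grammar: one input, two candidate outputs, two constraints.
# V[c, k] = number of times candidate c violates constraint k.
V = np.array([[1.0, 0.0],    # candidate A violates C1
              [0.0, 1.0]])   # candidate B violates C2
obs = np.array([0.8, 0.2])   # observed output frequencies

def fit(mu, sigma, lr=0.005, steps=5000):
    """MAP estimate of constraint weights under Gaussian priors N(mu, sigma^2)."""
    w = mu.astype(float).copy()
    for _ in range(steps):
        p = np.exp(-V @ w)
        p /= p.sum()                                  # MaxEnt candidate probabilities
        grad = V.T @ (obs - p) + (w - mu) / sigma**2  # likelihood + prior gradient
        w = np.maximum(w - lr * grad, 0.0)            # keep weights non-negative
    return w

# White-style encoding: asymmetry in the prior *means* (C2 targets a high weight).
w_means = fit(mu=np.array([0.0, 3.0]), sigma=np.array([1.0, 1.0]))
# Wilson-style encoding: asymmetry in the prior *variances* (C2 is less plastic).
w_plast = fit(mu=np.array([0.0, 0.0]), sigma=np.array([10.0, 0.1]))
```

With mean-encoded bias the learner maintains a large C2-over-C1 weight asymmetry even though the data only weakly demand it; with variance-encoded bias and no overt evidence, the tight prior pins C2 near zero, mirroring the failure mode described above.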
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-216).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129120</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure of attitude reports : representing context in grammar</title>
<link>https://hdl.handle.net/1721.1/129119</link>
<description>The structure of attitude reports : representing context in grammar
Spadine, Carolyn.
This dissertation argues for a view of grammar that encodes certain facts about the discourse context in the narrow syntax. In particular, the recurring claim that there are clause peripheral elements that correspond to a kind of perspectival center is supported by novel evidence that this perspectival element can be overt in certain languages. This is shown using data from attitude reports in Tigrinya (Semitic, Eritrea), which overtly realizes a perspective holder, as well as a diverse collection of other languages, including Ewe and Malayalam. In analyzing this construction, I propose that certain complementizers have a secondary use as a marker of reported speech. I unify this use of complementizers with their more common clausal subordination use by adopting the proposal in Kratzer (2006), which argues that the modal quantification component of attitude reports is in the complementizer, rather than the attitude predicate, as is commonly assumed. I also analyze two unique properties of these reportative complementizer constructions, indexical shift and logophoricity. In Tigrinya, indexical shift can be accounted for by allowing these reportative complementizers to quantify over contexts, rather than worlds, and by introducing a context-shifting operator. From a morphosyntactic perspective, I find evidence from indexical shift that person features must be assigned throughout the course of the derivation, rather than at the point of lexical insertion. I also find that these constructions create contexts for matrix clause indexical shift in Tigrinya, something that has not previously been observed. Evidence from Ewe and other languages suggests a correlation between logophoric domains and the presence of a complementizer with reportative properties. Based on this distinction, I argue that Condition A-violating reflexives in languages like French and English are not reducible to logophors, based on their distribution, as well as other syntactic properties.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 177-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129119</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering micro-perfusable scaffolds for MesoPhysiological Systems using projection Micro-StereoLithography</title>
<link>https://hdl.handle.net/1721.1/129115</link>
<description>Engineering micro-perfusable scaffolds for MesoPhysiological Systems using projection Micro-StereoLithography
Sphabmixay, Pierre.
MicroPhysiological Systems (MPS) are in vitro models that capture the complexity of human organs at miniature scale by recreating the native microenvironment of resident cells. These systems offer promising alternatives to in vivo animal models for the development of new drugs, disease modeling and biological research. The organs in the human body are continuously perfused via a dense network of blood vessels delivering oxygen, nutrients and biomolecules locally while clearing waste materials produced by the tissue. As a result, MPS that incorporate microperfusion in a three-dimensional format have been a major focal point in the community, driving major efforts towards in vitro vascularization methods. A major obstacle to the development of these MPS was the micrometric scale of the human cells forming the building block of any biological system. But advances in micro and nanofabrication techniques have led to the creation of a myriad of new MPS that allow the successful culture of 3D tissues under microperfusion. Nevertheless, the translation of in vitro data from MPS to clinical data is confronted with the fundamental problem arising from the multi-dimensional scaling of experimental parameters, from micrometric systems to macroscale organs. This thesis describes the design, fabrication and implementation of a MesoPhysiological System (MePS) for the culture of human cells at mesoscopic scale. The MePS consists of a perfusable 3D printed network of microcapillaries serving as a scaffold for the tissue with built-in vasculature. 
The manufacturing of the MePS was performed using a Projection Micro-StereoLithography Apparatus, which enabled the fabrication of centimetric scaffolds with micrometric features at high throughput. The geometry of the MePS was carefully designed using computational fluid dynamics and a computational model of oxygen transport, so that critical physico-chemical parameters of the MePS, such as shear forces and oxygen levels, would reach physiological values. Long-term cultures of liver and brain tissues were performed in the MePS and featured elevated function and viability compared to other MPS. The increased metabolic rate and hepatic function of the liver MePS made it possible to recapitulate critical features of metabolic disorders, such as the chronic development of an insulin resistance phenotype in type 2 diabetes mellitus.
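The kind of oxygen-transport estimate behind such a design can be sketched with a generic one-dimensional reaction-diffusion model (all parameter values below are invented for illustration; this is not the thesis's CFD model): a tissue slab fed from one face, with uniform cellular consumption, has a closed-form oxygen profile whose minimum tells you whether a hypoxic core would form.

```python
import numpy as np

# Illustrative 1-D steady-state oxygen profile in a tissue slab perfused at
# x = 0, with zeroth-order cellular consumption q and diffusivity D.
# All parameter values are hypothetical, chosen only for illustration.
D = 2.0e-9        # O2 diffusivity in tissue, m^2/s
q = 1.0e-2        # volumetric consumption rate, mol/(m^3 s)
C0 = 0.2          # O2 concentration at the perfused face, mol/m^3
L = 200e-6        # slab depth to the no-flux symmetry plane, m

# Analytic solution of D*C'' = q with C(0) = C0 and C'(L) = 0:
#   C(x) = C0 - (q / (2 D)) * x * (2 L - x)
x = np.linspace(0.0, L, 101)
C = C0 - (q / (2 * D)) * x * (2 * L - x)

# The minimum sits at the no-flux plane; if it dips below zero, the slab is
# too thick for this perfusion and a hypoxic core would develop.
C_min = C[-1]
```

Here the 200 µm slab just stays oxygenated (C_min = 0.1 mol/m^3); doubling L would drive the core hypoxic, which is the constraint that built-in microperfusion relaxes.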
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 140-155).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129115</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing radiative heat transfer modeling in high-temperature liquid-salts</title>
<link>https://hdl.handle.net/1721.1/129113</link>
<description>Advancing radiative heat transfer modeling in high-temperature liquid-salts
Coyle, Carolyn Patricia.
The nuclear and solar-thermal communities are investigating the use of high Prandtl number liquid-salts in energy generation systems, including fluoride salt-cooled high-temperature reactors (FHRs), molten salt reactors (MSRs), fusion devices, and concentrated solar power plants. The temperature distribution in the coolant salts can be affected by participating media radiative heat transfer, due to the high temperature operation and their semitransparent nature. Computational fluid dynamics (CFD) becomes a valuable tool to model the complex 3-dimensional nature of the heat transfer, especially in regions where temperature-dependent material corrosion drives the need for accurate local temperature predictions. Correctly modeling radiative heat transfer in CFD requires well-characterized liquid-salt optical properties, which are not yet known. Additionally, current CFD approaches can become computationally too expensive for practical use when spectral effects need to be resolved. A lower-cost approach that still resolves the coupled convective-radiative heat transfer is therefore needed. In this thesis, an experimental apparatus for measuring the spectral absorption coefficients of 46.5%LiF:11.5%NaF:42%KF (FLiNaK) and 50%NaCl:50%KCl is designed and validated to have high measurement accuracy in the transmissive and multiphonon absorption regions where radiative emissions peak. A high-fidelity CFD methodology is then developed to model participating media radiative heat transfer. The approach defines a consistent, spectral banding procedure that captures non-gray absorption behavior at reasonable computational cost. 
The methodology is applied to CFD simulations of a twisted elliptical tube heat exchanger geometry, where local, 3-dimensional effects are especially significant. A matrix of simulation results comparing FLiNaK and 66.6%LiF:33.4%BeF2 (FLiBe) coolants provides a quantitative assessment of the thermal radiation contributions to the overall heat transfer. Laminar flows, expected in accident scenarios, experience the strongest effect, where lower average wall temperatures and enhanced temperature uniformity result in an effective Nusselt number increase of up to 11%. Turbulent flows see a reduction in maximum local wall temperatures of up to 25°C, which could have a notable impact on reducing corrosion effects. The observed trends demonstrate the larger impact of radiation effects in FLiBe simulations due to larger absorption in BeF2. This suggests thermal radiation may be more dominant in MSRs, where dissolved fuel and impurities increase absorption. The method proposed to include the effects of thermal radiation in CFD analysis can support a more effective and accurate design of high temperature systems and components, providing increased safety margins for operation.
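A generic version of the banding step can be sketched as follows (the toy spectrum, band edges, and temperature are invented, and this is the standard Planck-weighted band average rather than the thesis's specific procedure): a spectral absorption coefficient is collapsed into a few band values, each weighted by the blackbody distribution at the salt temperature so that the bands that dominate emission dominate the average.

```python
import numpy as np

# Physical constants for the Planck distribution.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral blackbody emissive power (arbitrary overall scale)."""
    return (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1)

T = 973.0                                    # K, a representative salt temperature
lam = np.linspace(0.5e-6, 10e-6, 4000)       # wavelength grid, m
# Hypothetical non-gray spectrum: weak background + an absorption band at 4 um.
kappa = 50.0 + 400.0 * np.exp(-((lam - 4e-6) / 1e-6) ** 2)  # 1/m

edges = np.array([0.5e-6, 2e-6, 5e-6, 10e-6])   # three spectral bands
band_kappa = []
for lo_, hi_ in zip(edges[:-1], edges[1:]):
    m = (lam >= lo_) & (lam < hi_)
    w = planck(lam[m], T)                    # Planck weighting within the band
    band_kappa.append((kappa[m] * w).sum() / w.sum())
band_kappa = np.array(band_kappa)            # one effective kappa per band
```

A CFD radiation solve then only needs one transport sweep per band instead of one per spectral point, which is the cost saving the banding procedure targets.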
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 125-133).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129113</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tally derivative based surrogate models for faster Monte Carlo multiphysics</title>
<link>https://hdl.handle.net/1721.1/129109</link>
<description>Tally derivative based surrogate models for faster Monte Carlo multiphysics
Harper, Sterling(Sterling M.)
Existing neutron transport methods used in the nuclear power industry rely on a complex toolchain of modeling and simulation software. Each link in this chain applies various approximations to the spatial, angular, and energy distributions of the problem variables, and these approximations can limit solver predictive capabilities. Monte Carlo (MC) neutron transport is a high-fidelity method that can relax many of these approximations and possibly replace much of the existing toolchain. However, MC neutron transport is also very slow, particularly when coupled into a multiphysics solver. Some researchers have published runtime costs of over 100,000 CPU-hours to converge a quarter-core multiphysics problem with MC, an expense that makes MC-based tools prohibitive for regular use. In response to this issue, some researchers have developed acceleration techniques using the diffusion-based CMFD (coarse mesh finite difference) method. This thesis extends that work by coupling the CMFD solver directly to the thermal-hydraulics solvers in a multiphysics simulation. To enable this coupling, MC differential tallies are used to compute the feedback dependence of CMFD parameters. Novel methods based on the windowed multipole cross section representation are used to compute fuel temperature derivatives along with coolant density derivatives. This differential tally approach proves to be flexible; the same procedure is applied to each coarse mesh cell regardless of the presence of control rods, burnable poisons, spacer grids, 135Xe, or other details of the MC model. With the inclusion of a simple pin power reconstruction scheme, these methods create a surrogate neutronics solver capable of bi-directional coupling with thermal-hydraulics. This surrogate can then accelerate multiphysics convergence by reducing the reliance on costly MC simulations. Furthermore, a novel source-weight clipping procedure is introduced to damp MC-CMFD instabilities. 
Because this clipping procedure does not require multiple MC generations, CMFD and multiphysics coupling can be performed after each MC generation, even the first generation. This allows simulations to be run with very few MC generations, a feature that alleviates the cost of using many neutrons per MC generation to reduce the impact of fission source distribution undersampling. This methodology is tested on a quarter-core model of the BEAVRS benchmark, a large pressurized water reactor. Simplified subchannel fluid dynamics, fuel pin heat transfer, and equilibrium xenon solvers are included to form a multiphysics system. Without the presented acceleration methods, these quarter-core multiphysics simulations using 200 million neutrons per generation are projected to require 3,300 CPU core-hours to reach stationarity. With the presented methods, this cost falls to 270 core-hours. Further results are shown to demonstrate the runtime costs needed to tightly resolve fine-mesh power distributions, with projected runtime savings of 6× over prior work.
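The role of the differential tallies can be pictured with a minimal first-order surrogate (every numerical value below is an invented placeholder, and this is only a cartoon of the idea, not the thesis's CMFD implementation): a tally scored at reference conditions, together with its temperature and density derivatives, lets the surrogate extrapolate a group constant to perturbed feedback conditions without another MC run.

```python
# A tally taken at reference conditions also scores its derivatives with
# respect to fuel temperature T and coolant density rho; the surrogate then
# extrapolates to perturbed conditions with a first-order Taylor expansion.
sigma_ref = 1.20          # group-averaged cross section at reference (1/cm)
dsigma_dT = -2.0e-4       # derivative w.r.t. fuel temperature (1/cm per K)
dsigma_drho = 0.35        # derivative w.r.t. coolant density (1/cm per g/cm^3)
T_ref, rho_ref = 900.0, 0.70   # reference fuel temperature (K), density (g/cm^3)

def surrogate_sigma(T, rho):
    """First-order extrapolation of the tally to new feedback conditions."""
    return (sigma_ref
            + dsigma_dT * (T - T_ref)
            + dsigma_drho * (rho - rho_ref))

# One multiphysics iteration can now query the surrogate cheaply instead of
# rerunning Monte Carlo at the new conditions:
sigma_hot = surrogate_sigma(T=1000.0, rho=0.65)   # hotter fuel, thinner coolant
```

The coupled solver iterates updates like this inside the thermal-hydraulics loop and only returns to Monte Carlo to refresh the reference tallies and derivatives.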
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 225-231).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129109</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated atomistic prediction of structure, dynamics and material properties in molten salts</title>
<link>https://hdl.handle.net/1721.1/129108</link>
<description>Accelerated atomistic prediction of structure, dynamics and material properties in molten salts
Lam, Stephen Tsz Tang.
Various advanced nuclear reactor concepts, including fluoride-salt-cooled high-temperature reactors (FHRs), molten salt reactors (MSRs), and fusion devices, propose the use of molten salt coolants. However, there remain many uncertainties in the chemistry, dynamics and physicochemical properties of many salts, especially over the course of reactor operation, where impurities are introduced, and compositional and thermodynamic changes occur. Density functional theory (DFT) and ab initio molecular dynamics (AIMD) were used for property, structure and chemistry predictions for a variety of salts including LiF, KF, NaF, BeF2, LiCl, KCl, NaCl, prototypical Flibe (66.6%LiF-33.3%BeF2), and Flinak (46.5%LiF-11.5%NaF-42%KF). Predictions include thermophysical and transport properties such as bulk density, thermal expansion coefficient, bulk modulus, and diffusivity, which were compared to available experimental data. DFT consistently overpredicted bulk density by about 7%, while all other properties generally agreed with experiments within experimental and numerical uncertainties. Local structure was found to be well predicted, where pair distribution functions showed similar first peak distances (±0.1 Å) and first shell coordination numbers (±0.4 on average), indicating accurate simulation of chemical structures and atomic distances. Diffusivity was also generally well predicted within experimental uncertainty (±20%). Validated DFT and AIMD methods were applied to study tritium in prototypical salts since it is an important corrosive and diffusive impurity found in salt reactors. It was found that tritium species diffusivity depended on its speciation (TF vs. T2), which was related to chemical structures formed in Flibe and Flinak salts. Further, predictions allowed comparison with and interpretation of past contradictory experimental results found in the literature. Lastly, robust neural network interatomic potentials (NNIPs) were developed for LiF and Flibe. 
The LiF NNIP accurately reproduced DFT calculations for pair interactions, solid LiF, and the molten liquid. The Flibe NNIP was developed for molten salt at the reactor operating temperature of 973 K and was found to reproduce local structures calculated from DFT and showed good stability and accuracy during extended MD simulation. Ab initio methods and NNIPs can play a major role in advanced reactor development. Combined with experiments, these methods can greatly improve fundamental understanding and accelerate materials discovery, design and selection.
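The structural check used throughout such validation, comparing pair distribution functions between methods, can be sketched generically (random coordinates stand in for real AIMD or NNIP trajectory frames, so the resulting g(r) is flat near 1; this is the textbook estimator, not the thesis's analysis code):

```python
import numpy as np

# Minimal radial-distribution-function g(r) estimator for a periodic cubic
# box -- the kind of structural fingerprint compared between DFT and NNIP runs.
rng = np.random.default_rng(0)
L = 10.0                          # box edge (arbitrary units)
N = 200
pos = rng.uniform(0.0, L, size=(N, 3))   # stand-in for a trajectory frame

def rdf(pos, L, nbins=50, r_max=None):
    r_max = r_max or L / 2                       # valid range for minimum image
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                     # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(len(pos), k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho = len(pos) / L**3
    # Normalize pair counts by the ideal-gas expectation in each shell.
    g = hist / (shell * rho * len(pos) / 2)
    return 0.5 * (edges[1:] + edges[:-1]), g

r, g = rdf(pos, L)
```

On a real salt frame the first peak position and the integral of g(r) up to its first minimum give the first-shell distance and coordination number quoted in the abstract.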
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 122-142).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129108</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/129094</link>
<description>Essays in financial economics
Martins, Fernando Miguel Pinto.
This dissertation studies various facets of corporate governance across three chapters. Chapter 1 examines the relationship between corporate social responsibility (CSR), shareholder value, and competition by applying a regression discontinuity design to the outcome of CSR shareholder proposals. The approval of CSR proposals generates positive abnormal returns on the vote date, increases CSR scores, increases market shares, and increases the CSR scores of industry peers. The impact on abnormal returns is higher among firms with low CSR scores, firms that operate in more competitive industries, and firms subject to higher customer awareness. Altogether, these results suggest that CSR improves shareholder value by enhancing the competitiveness of firms. Chapter 2 studies the underlying characteristics of shareholder activism and the stock market reaction to the announcement of activist campaigns. I find that hedge funds are the most common type of activists and that most campaigns are non-hostile. Stock markets react strongly to the announcement of activism, as cumulative abnormal returns are close to 9% for value campaigns and 4% for ESG campaigns. These gains do not revert over the following year. Furthermore, industry peers under a high threat of activism also exhibit positive cumulative abnormal returns around the announcement of non-hostile ESG campaigns. These findings suggest that shareholder activism increases the shareholder value of target firms and generates positive spillovers when activists pursue ESG demands using non-hostile tactics. Chapter 3 examines the relationship between classified boards and managerial entrenchment by applying a panel regression discontinuity design to shareholder proposals on board declassification. The analysis focuses on shareholder proposals that pass or fail by a small margin of votes in order to provide a causal estimate of the impact of board declassification. 
I find that shareholder proposal approval leads to a reduction in CEO compensation, an increase in the likelihood of CEO replacement, a positive but insignificant impact on pay-performance elasticity, and an increase in firm value. The reduction in CEO compensation is strongest among firms with weaker governance. These findings suggest that classified boards promote managerial entrenchment.
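The close-vote regression-discontinuity logic used in Chapters 1 and 3 can be sketched with synthetic data (the vote shares, outcome model, jump size, and bandwidth below are all invented; this is a generic local-linear estimator, not the dissertation's specification): outcomes are compared for proposals just above versus just below the 50% passage threshold, fitting a line on each side within a bandwidth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
vote = rng.uniform(0.2, 0.8, n)            # vote share for each proposal
passed = vote >= 0.5
# Synthetic outcome: a smooth trend in vote share plus a true jump of 0.03
# (3% abnormal return) when the proposal passes.
y = 0.01 * (vote - 0.5) + 0.03 * passed + rng.normal(0, 0.02, n)

def rd_estimate(vote, y, cutoff=0.5, bw=0.1):
    """Local-linear RD: fit a line on each side inside the bandwidth."""
    est = {}
    for side, m in (("below", (vote < cutoff) & (vote > cutoff - bw)),
                    ("above", (vote >= cutoff) & (vote < cutoff + bw))):
        slope, intercept = np.polyfit(vote[m] - cutoff, y[m], 1)
        est[side] = intercept               # fitted value at the cutoff
    return est["above"] - est["below"]      # the discontinuity (causal jump)

jump = rd_estimate(vote, y)
```

Because proposals that barely pass and barely fail are otherwise comparable, the estimated jump recovers the causal effect of approval rather than selection into passage.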
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129094</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of market transparency on corporate disclosure</title>
<link>https://hdl.handle.net/1721.1/129093</link>
<description>The effect of market transparency on corporate disclosure
Rickmann, Georg(Georg Alexander)
Market prices and trading in financial markets are important information signals that reveal firm-specific information to the public. I study how the observability of such prices and trading (hereafter, "market transparency") affects firms' disclosure incentives. I exploit the staggered introduction of TRACE, which made bond prices and transactions publicly observable, and show that firms provide more guidance when their bonds' prices and trading become observable. This effect is stronger for firms with informationally sensitive bonds and firms without exchange-listed bonds prior to TRACE. Also, firms become particularly more likely to disclose bad news, consistent with the notion that investors' access to market information limits managers' incentives to withhold information. I corroborate my results using (1) a small controlled experiment, in which prices and trading are revealed for a randomized set of bonds, and (2) threshold rules used by the regulator. Taken together, my results suggest that observable market outcomes inform investors not only directly, by aggregating and revealing investors' information and beliefs, but also indirectly, by increasing corporate disclosure.
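A staggered-adoption design like this is commonly estimated with two-way fixed effects; a minimal synthetic sketch (invented panel, effect size, and noise; a generic within-transformation estimator, not the paper's actual specification) shows the mechanics: firms gain transparency in different years, and firm and year effects are absorbed by double demeaning.

```python
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_years = 300, 8
firm_fe = rng.normal(0, 1, n_firms)[:, None]          # firm fixed effects
year_fe = rng.normal(0, 0.5, n_years)[None, :]        # year fixed effects
start = rng.integers(2, 7, n_firms)[:, None]          # staggered adoption years
treated = (np.arange(n_years)[None, :] >= start).astype(float)
beta_true = 0.4                                       # true disclosure effect
y = firm_fe + year_fe + beta_true * treated + rng.normal(0, 0.3, (n_firms, n_years))

def demean_two_way(x):
    """Within-transformation for a balanced firm-by-year panel."""
    return x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()

yt, dt = demean_two_way(y), demean_two_way(treated)
beta_hat = (dt * yt).sum() / (dt * dt).sum()          # two-way FE OLS slope
```

With a homogeneous treatment effect, as assumed here, the two-way fixed-effects slope recovers the true effect; heterogeneous-effects complications in staggered designs are beyond this sketch.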
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 52-54).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129093</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the counterintuitive consequences of labor policies in service industries</title>
<link>https://hdl.handle.net/1721.1/129091</link>
<description>Essays on the counterintuitive consequences of labor policies in service industries
Hashemian, MohammadMahdi.
In essays one and two, I examine how unstable schedules affect financial performance. In essay one, using 52 weeks of data from over 1,000 stores and more than 15,000 employees of a specialty retailer, I estimate the effect of unstable schedules on store productivity. I use an instrumental variable approach and a natural experiment to partially address the possible endogeneity of scheduling decisions. I find evidence that increasing the adequacy and consistency of employees' hours improves employee and store productivity and find partial support for the positive effect of predictability. To study the policy impact of these findings, I build a behavioral agent-based model of scheduling in essay two. My model provides a platform to conduct counterfactual analyses and thus increases the external validity of my findings. Results suggest that standard scheduling practices, under certain conditions, may have negative, direct labor cost consequences despite their intended rationale of aligning service capacity and demand. Findings highlight the unintended consequences of a narrow focus on matching labor supply to customer demand; designing more employee-friendly schedules could not only create better jobs but also improve firm performance. In essay three, I build a simulation model to explain why startups play a major role in establishing many new markets when existing firms have more resources and the relevant core and peripheral capabilities. I explore how the strong link between startups' past performance and the resources available for their future capability building conditions their growth prospects. I show that this reinforcing loop leads to entrepreneurial financial markets rapidly focusing on more promising startups. The strength of this mechanism can allow startups to overtake projects within incumbent firms that are initially better endowed. 
Using an online experiment, I test the key requirement of this mechanism, showing that the strength of the reinforcing loop is larger for startups than for in-house projects.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129091</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interdependent diffusion : the social contagion of interacting beliefs</title>
<link>https://hdl.handle.net/1721.1/129089</link>
<description>Interdependent diffusion : the social contagion of interacting beliefs
Houghton, James P.
A common simplifying assumption in theories of social contagion is that ideas or beliefs spread from person to person in a social network without regard to other ideas or beliefs that spread concurrently. This assumption is both useful and generative, as it allows researchers to produce tractable models of the effects of network structure and social reinforcement on diffusion patterns. Unfortunately, the social contagion of multiple beliefs cannot be understood by linearly superimposing the results of independent contagion processes. Any decision that a human makes to adopt an idea or belief is influenced by the other ideas and beliefs that she already holds. This dissertation shows that interdependence between beliefs alters the progress of social contagion to create internally-consistent clusters of beliefs within subsets of the population (worldviews) and contributes to polarization. The first paper of this dissertation comprises a method for observing the evolution of broadly-held structures of beliefs. The paper uses a case study with social media data to demonstrate the clustering of beliefs that emerges due to their mutual interaction. The second paper introduces a formal theory of interdependent diffusion which attempts to explain the mechanisms by which micro-scale interactions between beliefs lead to macro-scale outcomes for societies. The third paper reports an online laboratory experiment to test whether the predicted theoretical outcomes hold when the decision rules of simulated agents are replaced with actual human actors exchanging actual information.
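The core interdependence idea, that adopting a belief is easier when it fits beliefs already held, can be conveyed with a toy simulation (the ring network, adoption probabilities, seeds, and belief pairs are all invented; this is a cartoon of the mechanism, not the dissertation's model): two mutually consistent belief pairs are seeded in different regions, and agents adopt a transmitted belief far more readily when they already hold its partner.

```python
import numpy as np

rng = np.random.default_rng(3)
N, sweeps = 400, 100
partner = {0: 1, 1: 0, 2: 3, 3: 2}          # which belief reinforces which
held = np.zeros((N, 4), dtype=bool)
held[0:20, 0] = held[0:20, 1] = True        # seed worldview A in one block
held[200:220, 2] = held[200:220, 3] = True  # seed worldview B in another

for _ in range(sweeps * N):
    listener = int(rng.integers(N))
    neighbor = (listener + int(rng.choice((-1, 1)))) % N   # ring network contact
    options = np.flatnonzero(held[neighbor])
    if options.size == 0:
        continue                            # neighbor has nothing to transmit
    b = int(rng.choice(options))            # neighbor voices one held belief
    # Adoption is much likelier if the listener holds the consistent partner.
    p_adopt = 0.5 if held[listener, partner[b]] else 0.05
    if rng.random() < p_adopt:
        held[listener, b] = True

within = (held[:, 0] & held[:, 1]).sum()    # co-occurrence inside a worldview
across = (held[:, 0] & held[:, 2]).sum()    # co-occurrence across worldviews
```

Beliefs within a pair travel together while cross-pair combinations stay rare, producing the clustered "worldviews" that independent contagion processes cannot generate.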
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129089</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving farmers' and consumers' welfare in agricultural supply chains via data-driven analytics and modeling : from theory to practice</title>
<link>https://hdl.handle.net/1721.1/129083</link>
<description>Improving farmers' and consumers' welfare in agricultural supply chains via data-driven analytics and modeling : from theory to practice
Singhvi, Somya.
The upstream parts of the agricultural supply chain consist of millions of smallholder farmers who continue to suffer from extreme poverty. The first stream of research in this thesis focuses on online agri-platforms which have been launched to connect geographically isolated markets in many developing countries. This work is in close collaboration with the state government of Karnataka in India, which launched the Unified Market Platform (UMP). Leveraging both public data and platform data, a difference-in-differences analysis in Chapter 2 suggests that the implementation of the UMP has significantly increased the modal prices of certain commodities (3.5%-5.1%), while prices for other commodities have not changed. The analysis provides evidence that logistical challenges, bidding efficiency, market concentration, and the price discovery process are important factors explaining the variable impact of the UMP on prices. Based on these insights, Chapter 3 describes the design, analysis and field implementation of a new two-stage auction mechanism. From February to May 2019, commodities worth more than $6 million (USD) were traded under the new auction. Our empirical analysis suggests that the implementation has yielded a significant 4.7% price increase, with an impact on farmer profitability ranging from 60% to 158%, affecting over 10,000 farmers who traded in the treatment market. The second stream of research work in the thesis turns to consumer welfare and identifies effective policies to tackle structural challenges of food safety and food security that arise in traditional agricultural markets. 
In Chapter 4, we develop a new modeling framework to investigate how quality uncertainty, supply chain dispersion, and imperfect testing capabilities jointly engender suppliers' adulteration behavior. The results highlight the limitations of relying only on end-product inspection to deter economically motivated adulteration (EMA) and advocate a more proactive approach that addresses fundamental structural problems in the supply chain. In Chapter 5, we analyze the issue of artificial shortage, a phenomenon in which powerful traders strategically withhold inventory of essential commodities to create a price surge in the market, posing food security risks. The behavioral game-theoretic models developed allow us to examine the effectiveness of common government interventions. The analysis demonstrates the disparate effects of different interventions on artificial shortage: while supply allocation schemes often mitigate shortage, a cash subsidy can inadvertently aggravate it. Further, using field data from onion markets of India, we structurally estimate that 10% of the total supply is hoarded by traders during the lean season.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Page 236 blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 223-235).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129083</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online and offline learning in operations</title>
<link>https://hdl.handle.net/1721.1/129080</link>
<description>Online and offline learning in operations
Wang, Li
With the rapid advancement of information technology and accelerated development of data science, the importance of integrating data into decision-making has never been stronger. In this thesis, we propose data-driven algorithms to incorporate learning from data in three operations problems, concerning both online learning and offline learning settings. First, we study a single product pricing problem with demand censoring in an offline data-driven setting. In this problem, a retailer is given a finite level of inventory, and faces a random demand that is price sensitive in a linear fashion with unknown parameters and distribution. Any unsatisfied demand is lost and unobservable. The retailer's objective is to use offline censored demand data to find an optimal price, maximizing her expected revenue with finite inventories.; We characterize an exact condition for the identifiability of near-optimal algorithms, and propose a data-driven algorithm that guarantees near-optimality in the identifiable case and approaches best-achievable optimality gap in the unidentifiable case. Next, we study the classic multi-period joint pricing and inventory control problem in an offline data-driven setting. We assume the demand functions and noise distributions are unknown, and propose a data-driven approximation algorithm, which uses offline demand data to solve the joint pricing and inventory control problem. We establish a polynomial sample complexity bound, the number of data samples needed to guarantee a near-optimal profit. A simulation study suggests that the data-driven algorithm solves the dynamic program effectively. Finally, we study an online learning problem for product selection in urban warehouses managed by fast-delivery retailers. We distill the problem into a semi-bandit model with linear generalization.; There are n products, each with a feature vector of dimension T. 
In each of the T periods, a retailer selects K products to offer, where n is much greater than K or d. We propose an online learning algorithm that iteratively shrinks the upper confidence bounds within each period. Compared to the standard UCB algorithm, we prove the new algorithm reduces the most dominant regret term by a factor of d, and experiments on datasets from Alibaba Group suggest it lowers the total regret by at least 10%.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 213-219).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129080</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Residual overturning circulation and its connection to Southern Ocean dynamics</title>
<link>https://hdl.handle.net/1721.1/129068</link>
<description>Residual overturning circulation and its connection to Southern Ocean dynamics
Youngs, Madeleine Kendall.
Over the last 20 years, our understanding of the meridional overturning circulation has improved, but primarily in a two-dimensional, zonally-averaged framework. In this thesis, I have pushed beyond this simplification and shown that the additional complexity of meanders, storm tracks, and other zonal asymmetries is necessary to reproduce the lowest-order behavior of the overturning circulation. First, I examined the role of basin width in determining whether the Atlantic or Pacific oceans experience deep convection. I used a two-layer model and a rectangular single-basin model to show that the basin width, in combination with scalings for the overturning circulation, makes the overturning relatively weaker in the wider basin, priming it for a convection shutdown. In addition to this large-scale work, I have examined Southern Ocean-like meanders using a hierarchy of idealized models to understand the role of bottom topography in determining how the large-scale circulation responds to climate change scenarios. These models are useful because they preserve the lowest-order behavior while remaining simple enough to understand. I tested the response of the stratification and transport in the Southern Ocean to changes in wind using a highly-idealized two-layer quasi-geostrophic model. In addition to showing that meanders are necessary to reproduce the behavior of the Southern Ocean, I found that strong winds concentrate the baroclinic and barotropic instabilities downstream of the bottom topography and weaken the instabilities elsewhere due to a form-drag process. With weak winds, however, the system is essentially symmetric in longitude, like a flat-bottomed ocean. This result is consistent with observations of elevated turbulence downstream of major topography in the Southern Ocean.
My next study investigated a more realistic Southern Ocean-like channel, with and without bottom topography, and examined the three-dimensional circulation in order to understand where vertical transport occurs and develop a picture of the pathways taken by each individual water parcel. I found that the vertical transport happens in very isolated locations, just downstream of topography. Finally, I added a biogeochemical model to my simulations and found that carbon fluxes are enhanced near topography, again highlighting the role of zonal asymmetries.
Thesis: Ph. D., Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 135-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129068</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of doping and microstructural variables on hydrogen generated via aluminum-water reactions enabled by a liquid metal</title>
<link>https://hdl.handle.net/1721.1/129067</link>
<description>Effects of doping and microstructural variables on hydrogen generated via aluminum-water reactions enabled by a liquid metal
Meroueh, Laureen.
Hydrogen has the potential to replace fossil fuels in numerous industrial sectors, considering its high energy density, ability to be used within our existing power or heating infrastructure, and lack of CO₂ emissions upon conversion of hydrogen's chemical energy into electricity. However, 96% of hydrogen is currently produced through steam methane reformation, which emits ~12 tons of CO₂ for every 1 ton of hydrogen produced. Consequently, hydrogen production accounts for roughly 830 million tons of annual global CO₂ emissions. Additionally, hydrogen storage can be impractical and expensive. The aluminum-water reaction presents itself as a hydrogen storage and generation solution. Without a passive oxide layer, aluminum will react with water to produce emission-free hydrogen on demand. We enable the reaction by harnessing eutectic gallium-indium (eGaIn), an ambient-temperature liquid metal that permeates through aluminum grain boundaries, disrupting its passive oxide layer and inhibiting passivation of its grain surfaces. The focus of this work is on the investigation of the underlying aluminum-water reaction mechanism in the presence of eGaIn and on understanding the effects of using scrap aluminum (i.e., doped aluminum) as feedstock. Surprisingly, experiments show that silicon doping has a tremendous accelerating effect on the aluminum-water reaction in the presence of eGaIn. In combination with grain size manipulation, Si-doping can increase hydrogen evolution rates by two orders of magnitude compared to pure aluminum. Doping with magnesium significantly retards the aluminum-water reaction, resulting in relatively steady hydrogen evolution rates.
It is also shown that eGaIn permeates through aluminum as a line dislocation front. These discoveries demonstrate that doping, grain refining, and grain coarsening offer latitude in the engineering of aluminum microstructures to tune hydrogen generation rates across three orders of magnitude and to tune the reaction efficiency. Using the results of this work, I provide a guide to the design/selection of aluminum for controllable hydrogen generation according to application. Lastly, while the corrosion of aluminum and its commercial alloys has been historically studied, results of this work show that the redox behavior of aluminum in the presence of eGaIn strays from what can be understood within the classical corrosion (galvanic theory) framework.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages [110]-127).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129067</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on residential mobility, housing, and health</title>
<link>https://hdl.handle.net/1721.1/129066</link>
<description>Three Essays on residential mobility, housing, and health
Daepp, Madeleine I. G.(Madeleine Isabelle Gorkin)
Over 700,000 people moved for health reasons in the last year, and many more moved for reasons in which health was implicated, such as to escape climate hazards. Changes in the extent to which a residence promotes health should change housing prices--an important health and social exposure in its own right, as well as a mechanism through which numerous other features of a place are reshaped--yet the relationships between residential mobility, health, and housing markets remain poorly understood. This dissertation comprises three papers on the association of residential mobility with health and housing. In the first paper, I evaluate the effect of a localized change in healthcare access--the 2006 Massachusetts Healthcare Reform--on housing prices and interstate migration along the state border. I find an increase in the prices of affordable housing that is offset by a commensurate decrease in the price of luxury housing; I also observe a small increase in migration into Massachusetts versus into neighboring states. My second paper seeks to better understand the effects of climate migration on housing markets. Examining the impacts of displacement due to Hurricane Katrina, I show that housing prices decreased in destination neighborhoods that received the largest numbers of movers, relative to neighborhoods that did not receive large inflows. Effects are larger in predominantly Black destination neighborhoods than in predominantly White destination neighborhoods. I also find larger effects in places that received more economically disadvantaged movers relative to similar neighborhoods that received more advantaged movers. My third paper describes a collaboration with the Healthy Neighborhoods Study Consortium, for whom I constructed a data set of estimated moving flows between Massachusetts neighborhoods. I then created a web-based app to make the resulting estimates accessible to planners, community organizations, and residents.
An overarching theme of this work is the recognition that communities share housing and health challenges with the places to which former residents move and the places from which new residents arrive.
Thesis: Ph. D. in Urban and Regional Planning, Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 107-121).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129066</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High resolution sedimentary archives of past millennium hurricane activity in the Bahama Archipelago</title>
<link>https://hdl.handle.net/1721.1/129063</link>
<description>High resolution sedimentary archives of past millennium hurricane activity in the Bahama Archipelago
Wallace, Elizabeth Jane.
Atlantic hurricanes threaten growing coastal populations along the U.S. coastline and in the Caribbean islands. Unfortunately, little is known about the forces that alter hurricane activity on multi-decadal to centennial timescales. This thesis uses proxy development and proxy-model integration to constrain the spatiotemporal variability in hurricane activity in the Bahama Archipelago over the past millennium. I present annually-resolved archives of storm activity stretching over the past 1000 to 1500 years in sediment cores from blue holes on three islands in the Bahama Archipelago: South Andros Island, Long Island, and Middle Caicos Island. I explore the sensitivity of each site to coarse-grained sediment deposition for modern storms. I find that the local geomorphologic conditions and the angle of approach and size of passing storms play a more important role in inducing coarse-grained sediment transport than storm intensity. All three paleorecords capture multi-decadal and longer periods of elevated hurricane activity over the past millennium. Dramatic differences between these records suggest localized controls on the hurricane patterns observed by each island. Thus, compiling the records from this thesis together more accurately captures regional variations in hurricane strikes. Integrating our new Bahama Archipelago compilation with compiled paleohurricane records from the U.S. coastline indicates shifting patterns of hurricane activity over the past millennium between the Gulf Coast and the Bahama Archipelago/New England. I attribute these shifting storm patterns to changes in local environmental conditions and/or large-scale variations in hurricane tracks.
Finally, I address whether the variability in hurricane strikes observed in Bahamian paleohurricane records is related to climate or random variability. Using a large suite of synthetic storms run over past millennium climate, I generate 1000 pseudo paleohurricane records containing centennial-scale signals like our proxy reconstructions. However, the signal observed in any individual record of paleohurricane activity from the Bahama Archipelago is driven more by random variability in hurricane tracks than by climate. This thesis lays the groundwork for creating high-resolution paleohurricane records from coastal karst basins and using hurricane models to inform our interpretations of these records.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 211-226).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129063</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Morphological approaches to understanding Antarctic Sea ice thickness</title>
<link>https://hdl.handle.net/1721.1/129062</link>
<description>Morphological approaches to understanding Antarctic Sea ice thickness
Mei, M. Jeffrey(Ming-Yi Jeffrey)
Sea ice thickness has long been an under-measured quantity, even in the satellite era. The snow surface elevation, which is far easier to measure, cannot be directly converted into sea ice thickness estimates without knowledge or assumption of what proportion of the snow surface consists of snow and ice. We do not fully understand how snow is distributed upon sea ice, in particular around areas with surface deformation. Here, we show that deep learning methods can be used to directly predict snow depth, as well as sea ice thickness, from measurements of surface topography obtained from laser altimetry. We also show that snow surfaces can be texturally distinguished, and that texturally-similar segments have similar snow depths. This can be used to predict snow depth at both local (sub-kilometer) and satellite (25 km) scales with much lower error and bias, and with greater ability to distinguish inter-annual and regional variability than current methods using linear regressions. We find that sea ice thickness can be estimated to &lt;20% error at the kilometer scale. The success of deep learning methods in predicting snow depth and sea ice thickness suggests that such methods may also be applied to temporally/spatially larger datasets like ICESat-2.
Thesis: Ph. D., Joint Program in Applied Ocean Physics and Engineering (Massachusetts Institute of Technology, Department of Aeronautics and Astronautics; and the Woods Hole Oceanographic Institution), 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 181-198).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129062</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instabilities of finite-width internal wave beams</title>
<link>https://hdl.handle.net/1721.1/129060</link>
<description>Instabilities of finite-width internal wave beams
Fan, Boyu, Ph.D., Massachusetts Institute of Technology.
Internal gravity waves are fundamental to the dynamics of density stratified fluids, and the instability mechanisms by which these waves dissipate their energy are a potentially significant factor that underlies the distribution of energy and momentum in the natural environment. Recently, it has been recognized that internal waves in the oceans and atmosphere often take the form of beams: plane waves with a locally confined spatial profile. While there is a large body of theoretical work concerning the instability of sinusoidal internal waves, instability mechanisms of beams are not yet fully understood. Although various nonlinear mechanisms have been proposed, it remains unclear which, if any, are dominant in the natural environment and under what circumstances. This thesis examines the instability of finite-width internal wave beams in order to extend the current understanding of internal wave instability into more realistic settings. Part I of this thesis uses a combination of experimental and theoretical techniques to investigate finite-amplitude instabilities of beams. First, using a variant of the classical 'St. Andrew's Cross' experiment, whereby beams are generated using a harmonically oscillated horizontal cylinder, we present novel experimental observations of instability in large-amplitude internal wave beams. These results are compared against the predictions of linear stability analysis based on Floquet theory and reveal the competition between two- and three-dimensional instability mechanisms. Next, Floquet theory is used to investigate the well-known parametric subharmonic instability (PSI) for finite-width beams.
Our findings show that frequency components typically ignored in standard analyses based on triad resonance are in fact crucial to the instability dynamics of fine-scale perturbations. The Floquet stability analysis also reveals that PSI is restricted to a finite range of perturbation wavenumbers and that a broadband instability dominates at large perturbation wavenumber. Furthermore, in the nearly inviscid limit, this broadband instability persists for small-amplitude beams that are not typically susceptible to PSI. Part II focuses on the PSI of finite-width internal wave beams and investigates the role of background mean flows, which provide a more realistic setting for PSI in the natural environment. Using weakly nonlinear asymptotic theories, two types of internal wave beams are considered: nearly-monochromatic beams whose spatial profile consists of a sinusoidal carrier modulated by a locally confined envelope, and thin beams with general profile under the effects of Earth's rotation. In both cases, the presence of a uniform background mean flow has a stabilizing effect on PSI for finite-width beams, in contrast to the PSI of a purely sinusoidal plane wave, where the background mean flow has no effect.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 97-100).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129060</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular mediators of cardiac-specific enhancer activation</title>
<link>https://hdl.handle.net/1721.1/129059</link>
<description>Molecular mediators of cardiac-specific enhancer activation
Demuren, Olukunle O.(Olukunle Oluseyi)
Understanding how transcription factors (TFs) control gene expression programs is critical for determining the genetic pathways responsible for development. Heart development is particularly sensitive to precise control of gene programs as faulty regulation leads to congenital heart defects (CHD), the leading cause of infant mortality. Although sets of TFs have known roles in heart development, in most cases, we lack a fundamental understanding of how these binding events regulate cell specification. To identify potential key regulatory TFs, we used the Assay for Transposase-Accessible Chromatin (ATAC-seq) to map changes in chromatin accessibility and integrated these data with maps of histone modification patterns and gene expression across several stages of embryonic stem cell (ESC) differentiation toward cardiomyocytes (CMs). Based on bioinformatic analysis of these data, we identified the TEA domain family (TEAD) TF TEAD1 as a candidate regulator of enhancer activation during cardiac-lineage commitment. We then used an inducible degron-tag strategy to conditionally deplete TEAD1 and observed an abnormal beating phenotype in CMs. Further mechanistic studies revealed that TEAD1 was necessary for the activity of a subset of cardiac enhancers putatively linked to cell-cell contacts. These data have allowed us to characterize a potential link between extracellular signaling and cardiac contraction and morphogenesis during development.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129059</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extratropical storm tracks and the mean state of the atmosphere</title>
<link>https://hdl.handle.net/1721.1/129058</link>
<description>Extratropical storm tracks and the mean state of the atmosphere
Gertler, Charles Garrison.
The exact impacts of changes in the mean state of the atmosphere on the high-frequency phenomena that form the extratropical atmospheric circulation are uncertain. The extratropical storm tracks, regions of frequent extratropical cyclones, dominate weather in the extratropics, affecting the lives and livelihoods of billions of people. The results presented in this thesis connect changes in the mean state of the atmosphere to changes in the extratropical storm tracks. The Northern Hemisphere summer extratropical storm track has weakened in observations over the satellite era, while evidence indicates convective precipitation in the extratropics has concurrently increased. Using the concept of mean available potential energy (MAPE) partitioned into nonconvective and convective components, the second chapter of this thesis demonstrates that the changes in storm track strength and convection are consistent with changes in the temperature and humidity structure of the atmosphere. Further, experiments with idealized atmospheres indicate how characteristic changes in surface temperatures over this period lead to diverging changes in the energy available to extratropical cyclones and their associated convection. In the third chapter of this thesis, the storm track strength is examined in solar geoengineering scenarios using results from climate models. The Northern Hemisphere extratropical storm track weakens in response to increased CO₂ by similar magnitudes regardless of whether solar geoengineering is used. In the Southern Hemisphere, the storm track strengthens in global warming scenarios, but weakens with solar geoengineering. Storm track intensity changes are shown to be consistent with changes in the structure of temperature and humidity using MAPE.
In the fourth chapter of this thesis, a new method to calculate MAPE is introduced and used to perform the first exact MAPE calculations in a three-dimensional domain. Further, an eddy-size restriction on the MAPE calculation is developed and introduced, which provides a measure of available energy that could be accessed locally by an extratropical cyclone. This approach is also used to identify the thermodynamic potential for ascent on the eddy lengthscale, which is shown to relate strongly to the frequency of warm conveyor belts (WCBs), dynamic components of extratropical cyclones with large impacts on weather.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 127-135).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129058</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental investigations of isotopologue fractionation during microbial methanogenesis</title>
<link>https://hdl.handle.net/1721.1/129057</link>
<description>Experimental investigations of isotopologue fractionation during microbial methanogenesis
Rhim, Jeemin Hannah.
The work described in this thesis explores and develops different culturing methods to test the following hypothesis: hydrogen concentration and redox potential are important controlling factors of carbon (¹³C/¹²C) and hydrogen (D/H) isotope ratios as well as the abundance of methane clumped isotopologues ([delta]¹³CH₃D) during microbial methanogenesis. Chapter 2 uses batch and fed-batch culturing systems to investigate the effects of H₂ concentrations on isotopologue fractionation. The results from fed-batch experiments confirmed the previous observation of decoupled ¹³C/¹²C and D/H systematics and provide experimental support for the hypothesis linking D/H and [delta]¹³CH₃D systematics. Results from a mathematical model indicated that the dissolved H₂ concentration, [H₂], at the cell surface can be up to an order of magnitude lower than the [H₂] expected in equilibrium with the headspace mixing ratio, highlighting the importance of, and challenge in, controlling [H₂] during fed-batch experiments. Chapter 3 and Chapter 4 present the application of bioelectrochemical systems (BES) as a means to control methane production. In Chapter 3, mixed culture BESs were used to enrich for methanogenic microbial communities. Distinct molecular and morphological characteristics distinguished the anodic and cathodic communities. Within the tested range, methane production and the D/H values of methane showed general correlations with applied potentials, indicating a promising application of this system in isotope studies. Chapter 4 introduces a new design of a pure culture BES to directly test the effects of cathode potentials on methane production and isotope signatures. Methane production decreased exponentially with increasing cathode potentials, up to 80 mV within the thermodynamic limit under our experimental conditions.
Theoretical predictions indicate that the decrease in methane production rate is expected to be much more extreme at higher cathode potentials (&lt;30 mV within the limit, for our system), while isotope data indicated a negative correlation between methane production rate and D/H values. This demonstrates the potential application of pure culture BESs to elucidate the origin of equilibrium isotope signatures in energy-limited environments often found in marine sediments. Limitations and future directions in the application of each experimental system explored in this thesis work are discussed at the end of each chapter.
Thesis: Ph. D. in Geobiology, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 150-160).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129057</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Mexican paleoclimate with precisely dated speleothems</title>
<link>https://hdl.handle.net/1721.1/129056</link>
<description>Investigating Mexican paleoclimate with precisely dated speleothems
Serrato Marks, Gabriela.
Speleothems, or sedimentary rocks formed in caves, act as valuable archives of past climate change due to their suitability for U-series dating and high-resolution proxy analysis. These records can provide insights into water availability and controls on hydrology prior to the instrumental record. In this thesis, I present three records from newly-analyzed Mexican stalagmites using stable isotope (oxygen and carbon) and trace element to calcium (Mg/Ca and Sr/Ca) ratios as proxies for changing hydroclimate. Chapter 2 presents a precisely dated, mid-Holocene record of high rainfall and limited precipitation variability in the Yucatan Peninsula, Mexico. Chapters 3 and 4 present novel climate records from northeastern Mexico, an understudied region of North America. Both records come from cave sites within the Mexican arid zone, which is simultaneously experiencing increased water scarcity and a rapidly growing population. In Chapter 3, I examine a speleothem from the first millennium of the Common Era, which showed that there is a precipitation dipole between northern and southern Mexico. Chapter 4 highlights, for the first time at decadal resolution, the northeast Mexican response to the 8.2 ka event and the Younger Dryas. These chapters show that the San Luis Potosí region is vulnerable to droughts under multiple climate mean states, and is subject to drying as Atlantic Meridional Overturning Circulation weakens due to anthropogenic climate change. The climate records detailed in this thesis improve our understanding of controls on Mexican hydroclimate and can serve as benchmarks for climate models.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from student-submitted PDF of thesis. "September 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129056</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning methods for the design and understanding of solid materials</title>
<link>https://hdl.handle.net/1721.1/129054</link>
<description>Deep learning methods for the design and understanding of solid materials
Xie, Tian, Ph.D., Massachusetts Institute of Technology.
The trend of open material data and automation in the past decade offers a unique opportunity for data-driven design of novel materials for various applications as well as for fundamental scientific understanding, but it also poses a challenge for conventional machine learning approaches based on structure features. In this thesis, I develop a class of deep learning methods that solve various types of learning problems for solid materials, and demonstrate their application to both accelerating material design and extracting scientific understanding. First, I present a neural network architecture to learn the representations of an arbitrary solid material, which encodes several fundamental symmetries of solid materials as inductive biases. Then, I extend the approach to explore four different learning problems: 1) supervised learning to predict material properties from structures; 2) visualization to understand structure-property relations; 3) unsupervised learning to understand atomic scale dynamics from time series trajectories; 4) active learning to explore an unknown material space. In each learning problem, I demonstrate the performance of the approach compared with previous approaches, and apply it to solve several realistic materials design problems and extract scientific insights from data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 127-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129054</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comprehensive single-cell transcriptional profiling of the regenerative planarian Schmidtea mediterranea</title>
<link>https://hdl.handle.net/1721.1/129053</link>
<description>Comprehensive single-cell transcriptional profiling of the regenerative planarian Schmidtea mediterranea
Fincher, Christopher T.(Christopher Terry)
Animals can contain hundreds of cell types, each of which has a distinct morphology and function. The transcriptome of a cell dictates this unique cell biology. Recent approaches for high-throughput single-cell RNA sequencing have made it possible to generate transcriptomes easily and affordably for tens of thousands of single cells, raising the possibility that transcriptomes could be generated for all cell types and cell states in a complete animal. Planarians are freshwater flatworms renowned for their capacity for whole-body regeneration. They possess a complex body plan with multiple distinct tissues. They also possess a population of dividing cells, called neoblasts, which contain pluripotent stem cells and are the source of all new tissue, with all cell types being turned over throughout the life of the animal. Planarians also constitutively express an arrangement of regionally expressed genes in their muscle that serves as patterning information for the animal. As such, at a single time point in the adult, pluripotent stem cells, all differentiated cells, and all associated transition states from stem cell to differentiated cell can be recovered, including patterning information expressed in muscle. This makes planarians ideally suited to generating an atlas of transcriptomes for all cell types and cell states in a whole animal. We used the single-cell RNA sequencing technology Drop-seq to determine the transcriptomes for 66,783 cells from adult planarians. In doing so, we identified a number of known and novel cell populations, including a novel class of phagocytic cells.
We also uncovered novel neoblast subpopulations and putative transition state populations between neoblasts and differentiated cells, as well as a number of genes with regional expression in muscle. Through the identification of known rare cell types in the data, we conclude that we have obtained near-complete saturation of all cell types and cell states in the adult planarian. We now have full transcriptomes for each of these cell populations, which can be utilized to assay their roles in planarian biology. This approach can also be applied widely to diverse animal species, including those with limited molecular tools available.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129053</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for robust autonomous navigation in human environments</title>
<link>https://hdl.handle.net/1721.1/129052</link>
<description>Algorithms for robust autonomous navigation in human environments
Everett, Michael F.
Today's robots are designed for humans, but are rarely deployed among humans. This thesis addresses problems of perception, planning, and safety that arise when deploying a mobile robot in human environments. A first key challenge is that of quickly navigating to a human-specified goal - one with known semantic type, but unknown coordinate - in a previously unseen world. This thesis formulates the contextual scene understanding problem as an image translation problem, by learning to estimate the planning cost-to-go from aerial images of similar environments. The proposed perception algorithm is united with a motion planner to reduce the amount of exploration time before finding the goal. In dynamic human environments, pedestrians also present several important technical challenges for the motion planning system. This thesis contributes a deep reinforcement learning-based (RL) formulation of the multiagent collision avoidance problem, with relaxed assumptions on the behavior model and number of agents in the environment. Benefits include strong performance among many nearby agents and the ability to accomplish long-term autonomy in pedestrian-rich environments. These and many other state-of-the-art robotics systems rely on Deep Neural Networks for perception and planning. However, blindly applying deep learning in safety-critical domains, such as those involving humans, remains dangerous without formal guarantees on robustness. For example, small perturbations to sensor inputs are often enough to change network-based decisions. This thesis contributes an RL framework that is certified robust to uncertainties in the observation space.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 143-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129052</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and assessment of a physics-based model for subcooled flow boiling with application to CFD</title>
<link>https://hdl.handle.net/1721.1/129051</link>
<description>Development and assessment of a physics-based model for subcooled flow boiling with application to CFD
Kommajosyula, Ravikishore.
Boiling is an extremely efficient mode of heat transfer and is the preferred heat removal mechanism in power systems in general and, more recently, in electronics cooling. Physics-based models that describe boiling heat transfer, when coupled with Computational Fluid Dynamics (CFD), can be an invaluable tool to increase the performance of such systems. Existing modeling approaches do not incorporate all relevant heat transfer mechanisms at the wall, limiting their predictive capability and general applicability. These shortcomings restrict the application of CFD in the design process. For the nuclear industry, this means having to rely on expensive experimental campaigns to develop and license new reactor designs. A second-generation mechanistic heat flux partitioning framework developed in our group provides an enhanced physical description of flow boiling. It introduces several mechanisms not accounted for in previous formulations, such as 1) bubbles sliding on the heater surface, 2) interaction of nucleation sites, and 3) microlayer evaporation. The framework requires describing the complete bubble ebullition cycle, including bubble nucleation, growth, and departure, through closure models, which are currently lacking. This thesis extends the framework into a closed formulation by developing closure models that adequately represent the underlying physics. New models for predicting the bubble departure diameter and frequency are developed based on insights gathered from experiments and direct numerical simulations. An assessment against existing approaches to modeling boiling heat transfer demonstrates the model's ability to predict over 80% of the boiling curves within a 20% error, while also capturing the correct trends with flow conditions. The model implementation in commercial CFD software is demonstrated using data from the Bartolomei experiment.
The extendability of the model to novel heater surfaces is further demonstrated for a sapphire heater substrate, where fewer cavities for nucleation shift the boiling curves to considerably higher wall superheats. This mechanistic representation of boiling heat transfer has the potential to support predictive design with optimal boiling heat transfer for improved system efficiency, with the specific objective to accelerate the development of novel nuclear fuel concepts.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 113-119).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129051</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Humanizing autonomy : social scientists' and engineers' futures for robotic cars</title>
<link>https://hdl.handle.net/1721.1/129050</link>
<description>Humanizing autonomy : social scientists' and engineers' futures for robotic cars
Stayton, Erik Lee.
Highly automated cars -- unlike robots in factories -- must operate in existing social spaces, which are complex and hard to control. Unlike household robots, these systems are also fast and dangerous. The fundamental problem of getting robots to interact in the world will be getting them to do the "right thing" -- according to developers, users, and societies. But what is "right" is a matter of perspective, and there will be many ways to achieve any particular robotic performance. Through ethnographic fieldwork at a site of robotic vehicle development, I investigate alternative strategies for robotic cars and discuss their social implications. Supported by a framework from multispecies ethnography and the practices of robot developers, I argue that robots do not see like humans or experience the world as humans do. But they must be explicitly made to think -- to represent the world and act in it -- in ways that work for people, and obey people's intersubjective assumptions about how robots will act in a given moment. Faced with this difficult set of design constraints, developers seek to humanize robots to make them socially acceptable, or robot-proof the world to make it safer for robots, through four idioms or strategies of heterogeneous engineering: mapping and annotating, perceptual omniscience, AI decision-making, and human-in-the-loop supervised operation. Social scientists involved in the design process challenge and complicate these four approaches, and introduce a fifth one: humanizing robots by allowing them to communicate via external human-machine interfaces. These idioms form a language by which to characterize approaches to socially integrated robotic systems.
The debates between them show that different humanizing idioms imply different perspectives on social order, what it takes to be a competent social actor, and how humans and machines can work together. Each idiom imagines different kinds of future worlds in which robotic technologies come to coexist with humans, with vastly different political consequences. Social scientists are vital participants in the project of exploring the contours of these futures, and I suggest new approaches and open questions for the development of social scientists' engagement in technology development.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, September, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 376-398).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129050</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Are two heads better than one in CAD? a comparison of various CAD working styles</title>
<link>https://hdl.handle.net/1721.1/129049</link>
<description>Are two heads better than one in CAD? a comparison of various CAD working styles
Phadnis, Vrushank S.(Vrushank Shripad)
Collaboration in Computer-Aided Design (CAD) has existed since the inception of CAD tools. The established norm in multi-user CAD work has been to use top-down modeling techniques wherein a complex model is divided into sub-assemblies (or part files) for individual designers to work on separately. In this process, designers integrate their work through a check-in/check-out process. This style of collaboration does not change regardless of team sizes, product types, or over time. However, recent cloud-based CAD tools are expected to change this by offering real-time collaboration like Google Docs. In this research, we are interested in learning the effects of real-time collaboration on designers' work. We draw heavily from software development research, where dyadic work is common and is known as 'pair programming'. We use an experimental approach to investigate research questions pertaining to the speed and quality of real-time collaboration. We found that pair work in CAD is not summative. In other words, the work of two designers does not lead to twice the outcome of individuals. This result is contrary to previous real-time CAD collaboration research but consistent with software programming research. However, we also found that the quality of CAD models increases in certain pair CAD settings. We observed that sharing control of the CAD software leads to higher quality, while parallelizing work leads to worse quality. To elaborate on our results, we reveal specific patterns of participant behaviour based on audio communication and cursor activity. In summary, we establish foundational knowledge in real-time CAD collaboration research. Through our work, we share insights that can inform practicing engineers who are interested in adopting pair CAD work.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 113-120).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129049</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixing with complex patterns : from the impact of miscible viscous fingering to the effects of motile bacteria</title>
<link>https://hdl.handle.net/1721.1/129048</link>
<description>Mixing with complex patterns : from the impact of miscible viscous fingering to the effects of motile bacteria
Chui, Jane(Jane Yuen Yung)
Flow instabilities arising from differences in temperature, density, or viscosity are commonplace. Viscous fingering is a hydrodynamic instability that occurs when a less viscous fluid displaces a more viscous one. Instead of progressing as a uniform front, the displacing fluid forms fingers that vary in size and shape to form complex patterns. The interface created from these patterns affects mixing between the two fluids, and therefore understanding how these patterns evolve in time is essential in applications such as enhanced oil recovery, bioremediation, and microfluidics. In this Thesis, we experimentally quantify the impact miscible viscous fingering has on mixing. We use a radial Hele-Shaw cell as an analog of radial flows in porous media, and high-resolution fluorescent imaging, to measure the temporal and spatial evolution of the mixing zone. We identify distinct regimes in both the interface length and average thickness of the mixing zone. We use these results to propose a scaling framework for the growth of the mixing zone, and identify for the first time the competing factors of time-dependent dispersion and fluid-interface stretching from viscous fingering. Although bacteria can be found virtually everywhere viscous fingering occurs, there are no studies on the effects of their presence on the displacement dynamics. In this Thesis, we seek to begin filling this knowledge gap by employing an active suspension of fluorescent motile E. coli as the invading fluid, and observing how bacterial motility affects the interface and mixing zone between the two fluids.
We start by characterizing how viscous environments affect the rheology of these dense suspensions capable of collective swimming (and therefore effective viscosity reductions) using a Couette rheometer. Remarkably, we find that for the entire range of solvent viscosities tested (1 to 17 mPa · s), we recover superfluidic regimes, in which the effective suspension viscosity is reduced to near-zero values through collective swimming. We use these experimental results to formulate a constitutive model for the rheology of bacterial superfluids under flow as a function of the bacterial concentration and the solvent viscosity. To visualize the motile bacteria both individually and collectively under viscous fingering conditions, we design and fabricate a mesofluidic Hele-Shaw cell that is large enough to accommodate viscous fingering instabilities and small enough to be used with fluorescence microscopy. Surprisingly, we observe a textured interface between the two fluids, in addition to the larger-scale viscous fingering pattern. This interface consists of four distinct regions: a monodisperse region near the core of the finger, a filamentous region where bacteria segregate into separate flow paths in the direction of finger movement, a "rafting" region where bacteria aggregate into small groups (or "rafts") near the tip of the finger that then get pushed to the sides of the finger, and a diffuse region where the bacteria organize into a diffuse band at the very edge of the interface. These unexpected observations are a first step towards understanding how the interplay between active suspensions of motile bacteria and fluid-mechanical instabilities, such as viscous fingering, affects overall mixing under these complex flow conditions, which are found in both natural and engineered environments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 131-149).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129048</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social policy and operations management</title>
<link>https://hdl.handle.net/1721.1/129047</link>
<description>Social policy and operations management
Brennan, Mark Emmanuel.
This dissertation strengthens planning and policy analysis by using concepts from operations management to examine the production and distribution of goods and services for disadvantaged groups. Building on the introduction, chapter two tells a cautionary tale, investigating how scholars and decision makers used operations management methods in planning and policy analysis in the 1970s in ways that further marginalized already vulnerable residents. The tools and concepts of operations management, however, if sufficiently framed by concerns about equity and advocacy, are powerful instruments for solving production and distribution problems with social consequences. Chapter three explores how these concepts can be used to descriptively identify disparities in access to goods and services by socio-economic status, examining the distribution of irrigation equipment in Senegal. The core question is about the allocation of risk and inventory across levels of a supply chain that extends far into Senegal's farming regions. Chapter four identifies how these concepts can be used to causally explain disparities, tracing policies and plans that aggravate or ameliorate them. It focuses on the main program that subsidizes affordable housing construction in the United States, a durable necessity that is unevenly available and exposed to environmental risks across space. The core question is about patterns over space and time in building affordable housing stocks, relative to where and when disasters occur. Chapter five shows how these concepts can be used to prescriptively remedy disparities. It investigates quality risks in the US international food assistance supply chain in Eastern Africa. The core question is about what levers can be pulled in supply chain design to improve food aid quality. Chapter six concludes.
Thesis: Ph. D. in Policy, Operations, and Management, Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129047</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A molecular dynamics study of the tribological properties of diamond like carbon</title>
<link>https://hdl.handle.net/1721.1/129046</link>
<description>A molecular dynamics study of the tribological properties of diamond like carbon
Swisher, Mathew M.
Diamond like carbon (DLC) is an attractive choice as a coating for mechanical components because of its excellent wear resistance and very low coefficient of friction. We use molecular dynamics (MD) simulations with a reactive force field (ReaxFF) to study the friction and wear between DLC counterfaces, both in comparison to and in contact with steel counterfaces. We show that the tribological properties of DLC in dry sliding friction are heavily dependent on both the structure of the DLC and the passivation layer that forms on the sliding counterfaces under different environmental conditions, and that when optimizing for the lowest COF the best structure for the DLC depends on the type of passivation layer. We also find that, by preventing bonding across the counterfaces as the thin film of lubricant is squeezed out at the point of contact, the passivation layer is instrumental in the material's ability to resist scuffing and wear. Additionally, we find that the strength and hardness of DLC make damaging the passivation layer due to contact forces unlikely under real world conditions. Finally, we use MD simulations to study in more detail the transition from lubricated to dry friction, and in particular, the role of DLC surface chemistry and the resulting passivation layer in this transition. Our work shows that the frictional force can be described quite accurately across the transition from pure slip (dry friction) to the purely hydrodynamic regime using a simple model which superposes the two effects, provided it also accounts for any immobile fluid layers at the fluid-solid interface. We show that, for water lubrication, the transition from the pure slip to the purely hydrodynamic regime occurs at smaller lengthscales in DLC counterfaces compared with steel.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 103-111).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129046</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational insights into multivalently binding polymers</title>
<link>https://hdl.handle.net/1721.1/129045</link>
<description>Computational insights into multivalently binding polymers
Zumbro, Emiko.
Multivalent binding is commonly used throughout biology to create strong, conformal bonds using multiple weak binding interactions simultaneously. Bonds are considered multivalent when multiple ligands on one species simultaneously bind to multiple receptors on another species. Together, this bond can be much stronger than the sum of its parts. Throughout this thesis, we use theory and coarse-grain Brownian dynamics simulations with specific reactive binding to explore general characteristics of multivalent polymer interactions. Our simulations bridge length and timescales and can sample large polymer systems that bind targets at the sub-nanometer lengthscale. While the simulation and theory presented are very general and can be applied to many different systems of multivalent polymers, this thesis specifically explores consequences for two applications: multivalent polymers as decoys to inhibit infection and polymers as scaffolds for biocondensates. Many pathogens use multivalent bonds to attach to our cell surfaces before entering and causing infection. Therefore, there is significant interest in preventing infection from viruses, bacteria, and toxic proteins by inhibiting this attachment step using multivalent decoys. There have been many experiments showing successful binding of long polymers or other large multivalent architectures to colloids or small proteins that pathogens use to bind to our cells. While these experiments have shown how promising multivalent inhibitors are for preventing infection, a theoretical understanding of how the design parameters of multivalent polymers determine binding affinity is still missing. Simulations can easily isolate a single design parameter to provide direct links between structure and function, when experiments cannot always do so.
This research is intended to provide a systematic study linking the structure of multivalent polymers to their binding behavior. In the first half of this thesis, we explore design properties of polymeric binders and how degree of polymerization, solvent quality, binding site affinity patterns, backbone stiffness, and target concentration change the multivalent binding affinity. We provide simple theory to show that multivalent polymers are limited by their ability to form polymer loops and the energetic costs of doing so. We go on to show how these results and theory have implications for the binding affinity of polymers with heterogeneous binding sites and determine the effect of polymer backbone flexibility and solvent quality on binding affinity. Multivalent polymers are also an essential component of biocondensates, liquid-like droplets composed of proteins and nucleic acids that are found throughout cells. Although the function of these biocondensates is still an active field of study, it is clear that multivalent polymers are essential to their formation through liquid-liquid phase separation (LLPS). There is little theoretical study of biocondensates that contain binding between species of asymmetric size and valency, and the effects of multivalent polymers on the dynamics of these liquid droplets are not well understood. Studying how multivalent polymers modulate droplet dynamics is important because droplet crystallization or solidification is often associated with neurodegenerative disorders such as dementia and amyotrophic lateral sclerosis (ALS). Therefore, in the second half of this thesis, we present research on the role multivalent polymers play in LLPS droplets and their resulting dynamics.
We consider how a host of design parameters, including solvent quality, valency, binding affinity, specific versus non-specific binding sites, and backbone stiffness, can change the phase boundary of systems in which multivalent polymers bind to smaller targets. We found that, consistent with previous work on other systems, asymmetric valency systems also showed increased phase separation with increased binding affinities and valencies. We show that phase separation due to non-specific bonds is highly sensitive to changes in attraction, but that phase separation through specific bonds is much more robust. By combining specific and non-specific multivalency, systems can precisely tune the phase separation boundary. Polymer stiffness can also modulate the phase boundary, where stiff, rod-like polymers were less able to cause phase separation than their flexible counterparts. We also elucidate how polymer-target binding affinities can be used to form micro-phase separated droplets. Lastly, we show that increasing attraction to polymers can slow target diffusion inside droplets while decreasing the density of droplets, with implications for droplet solidification. We hope that this work will provide direction for the rational design of synthetic multivalent polymer systems such as pathogen inhibitors, as well as improve understanding of native biological systems like biocondensates.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 253-264).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129045</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the ice nucleation activity of organic aerosol</title>
<link>https://hdl.handle.net/1721.1/129044</link>
<description>Investigating the ice nucleation activity of organic aerosol
Wolf, Martin Johann.
Emissions of aerosol particles and their precursors affect climate directly by scattering radiation and indirectly by altering cloud properties. Aerosol-induced ice nucleation includes several processes that impact cloud formation, lifetime, albedo, and precipitation efficiency. Ice nucleating particles (INPs) promote ice formation at warmer temperatures and lower relative humidities than are required to spontaneously freeze aqueous aerosol. This dissertation investigates the sources and ambient concentrations of organic INPs. We quantify the ice nucleation activity -- defined by the conditions required to initiate ice nucleation and the ice nucleation active site density -- of primary and secondary organic aerosol species. Organically enriched sea spray aerosols emitted by bubble bursting at the ocean surface are more effective INPs than inorganic sea salt aerosols. We demonstrate that polysaccharides and proteinaceous molecules likely determine the ice nucleation activity of sea spray aerosol. Our results illustrate that seawater biogeochemistry affects the organic content of sea spray aerosol and that enhanced primary productivity results in the emission of more effective INPs. We further investigate secondary organic aerosol (SOA) sources of INPs. Isoprene-derived SOA material is unlikely to contribute significantly to INP concentrations in the mid-latitude troposphere due to its poor ice nucleation activity. However, it may be an important source of INPs in convective outflow systems over forested environments. SOA material derived from hydrofluoroolefin refrigerant emissions is an effective INP, but our analyses predict it will not be abundant enough to impact cirrus cloud properties. These results demonstrate the diversity of organic INPs. By better understanding the sources, characteristics, and concentrations of organic ice nucleating particles, our understanding of aerosol-climate feedbacks will grow in turn.
Thesis: Ph. D. in Atmospheric Chemistry, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2020; Cataloged from student-submitted PDF of thesis. Page 222 blank.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129044</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiphase flow and fault poromechanics : understanding earthquake triggering and seismic hazard</title>
<link>https://hdl.handle.net/1721.1/129043</link>
<description>Multiphase flow and fault poromechanics : understanding earthquake triggering and seismic hazard
Alves da Silva Junior, Josimar.
In this Thesis, we investigate natural and engineered processes related to assessing the seismic hazard posed by the impact of anthropogenic operations on the stability of pre-existing geological faults. We do so by developing simulation tools that couple multiphase flow and geomechanics, and applying them at the field scale using geologically realistic representations of the subsurface. In a first contribution at the scale of individual fractures, we study the impact of confining stress on the capillary pressure behavior during drainage through rough fractures, where we find that capillary pressure variations are sensitive to the degree of confining stress and the degree of spatial correlation of the fracture aperture. By solving the elastic contact problem and simulating slow two-phase displacements through the fracture gap, we uncover the universality class of avalanche size in fluid displacement, and find that it is consistent with a process controlled by self-organized criticality. In a second contribution at the scale of hundreds of kilometers, we address the importance of long-term, large-scale crustal deformation on the spatiotemporal distribution of Slow Slip Events (SSEs) in the Guerrero Gap, putting forward an alternative explanation for SSE nucleation, interval time, and arrest. We show, by means of finite element simulations with rate-state friction, that fault geometry and crustal deformation control the nucleation and arrest of SSEs, via normal stress changes along the subducting slab that act as a mechanism for SSE stabilization. In a third contribution, we develop a two-way coupled multiphase flow and geomechanics model that rigorously accounts for the fluid-solid interaction. We do so by coupling two well-established open-source simulators: the finite element mechanical simulator PyLith and the finite volume flow simulator MATLAB Reservoir Simulation Toolbox (MRST).
We employ the fixed-stress split of the fully-coupled problem, which renders the sequential iterative scheme unconditionally stable. We validate our implementation using analytical solutions for single-phase flow over a range of model parameters, and find excellent agreement in all cases. We then apply our simulator to synthetic cases to illustrate the impact of CO₂ injection on earthquake triggering on a pre-existing fault, demonstrating that poroelastic effects can have a strong fault-weakening effect even through impermeable geologic strata. In the two final contributions of this thesis, we apply the coupled multiphase flow and geomechanics simulator described above to assess the seismic hazard from fluid injection at the reservoir scale. In our first application, we revisit the classical experiment in earthquake control from water injection at the Rangely oil field, Colorado. The coupled flow-geomechanics simulations on a geologically constrained structural model of the Rangely field, along with reservoir-pressure and seismological data, provide a unique opportunity to understand the mechanisms responsible for the observed seismicity. In particular, our analysis allows us to separate the contributions to fault destabilization from direct pore pressure diffusion and poroelastic effects, and to elucidate the fundamental role of fluid flow along the fault. In our second field-scale application, we investigate the impact of industrial-scale CO₂ storage on the stability of, and potential leakage along, pre-existing faults in the Gulf of Mexico (GoM). We do so by performing 3D numerical simulations of coupled flow and geomechanics using high-fidelity geological models of the Miocene section of the GoM, both at the field scale (10s of km) and at the regional scale (100s of km).
We pay particular attention to the frictional and hydraulic properties of unlithified sedimentary faults, and incorporate a detailed, physics-based, probabilistic representation of clay and sand smearing to populate the flow properties of normal faults. We then investigate different scenarios of injection-well location in relation to fault geometry and architecture, representing geologic settings corresponding to "open" and "closed" reservoirs. The results of our flow-geomechanics simulations suggest that CO₂ injection results in only small fault destabilization and a vanishingly small probability of leakage along faults--supporting the notion that large-scale (100s of Mt) CO₂ injection in the GoM is feasible, but that well location is key to the success of individual Carbon Capture and Storage (CCS) projects.
Thesis: Ph. D. in Geophysics, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 245-268).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129043</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reverse genetic approaches reveal gene redundancy in Arabidopsis anthers</title>
<link>https://hdl.handle.net/1721.1/129041</link>
<description>Reverse genetic approaches reveal gene redundancy in Arabidopsis anthers
Jacobowitz, Joseph(Joseph R.)
When aquatic plants migrated to land 500 million years ago, they were met with harsh conditions associated with terrestrial life, such as dry air, high-radiance light, and high effective gravity. Early plants of this period underwent rapid evolution to develop novel traits that mitigate these challenges - traits that remain highly conserved in modern land plants. Among the most important of these is the biosynthesis of a unique polymer known as sporopollenin, which protects the vulnerable plant spore and pollen grain. Sporopollenin is considered the toughest known biopolymer, and although chemists and botanists have studied this remarkable material for over a century, relatively little is known about it compared to other major plant biopolymers. In this thesis, I employ reverse genetic approaches to identify novel Arabidopsis genes responsible for sporopollenin biosynthesis. With these methods, I identify a previously unstudied gene, hereby known as IPE2, which acts redundantly with IPE1 in the synthesis of sporopollenin. Additionally, I identify two unstudied peroxidases, PRX9 and PRX40, which are also redundant and critical for pollen development, although they are not involved in sporopollenin biosynthesis and instead crosslink cell wall extensin proteins. These studies enhance our understanding of the pollen wall and of pollen development. Moreover, this work reveals the untapped potential of reverse genetics to predict redundant relationships between paralogs in well-studied model organisms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129041</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the therapy resistance landscapes of acute leukemias using in vivo functional genomics</title>
<link>https://hdl.handle.net/1721.1/129040</link>
<description>Mapping the therapy resistance landscapes of acute leukemias using in vivo functional genomics
Ramos, Azucena.
The recurrence of therapy-resistant disease remains an intractable problem in oncology clinical care. To address this issue, investigators have traditionally focused on elucidating cell-intrinsic mechanisms that render tumors refractory to both classical chemotherapeutics and targeted agents. However, cancers resident in organs throughout the body do not develop in isolation. Instead, tumors arise in the context of the non-malignant components of a tissue, defined as the tumor microenvironment (TME). While the importance of cell-extrinsic factors in cancer biology is well established, our understanding of the TME's influence on therapeutic outcome is in its infancy. Pooled in vivo screens offer an unbiased strategy for identifying novel resistance mediators in the context of a normal immune system and microenvironment. In the first part of this thesis, I describe the results of an in vivo RNAi screen in a treatment-naïve mouse model of acute myeloid leukemia (AML) completed in the context of combination chemotherapy. Using this approach and a new mouse model of AML chemoresistance generated in our lab, I identified and validated the tricarboxylic acid cycle gene Succinate-CoA Ligase GDP-Forming Beta Subunit (SUCLG2) as an in vivo-specific mediator of therapy resistance. Additional experiments indicate that proper function of the Succinate-CoA Synthetase (SCS) complex, in which SUCLG2 functions, is critical for leukemia stem cells (LSCs) in AML to survive therapy. Our data suggest that depletion of SCS members may lead to altered tumor energetic features that ultimately sensitize AML blasts to combination chemotherapy. In the second part of my thesis, I describe a genome-wide CRISPR-Cas9 screen to investigate mechanisms of resistance to chimeric antigen receptor T cell therapy in a mouse model of B-cell acute lymphoblastic leukemia, both in vitro and in vivo. Here, we describe preliminary results from an in vivo pilot screen and results from in vitro genome-wide screens. 
Preliminary analyses indicate the screen is robust, with genes previously reported to be important for T cell-mediated killing showing the expected phenotypes. Ultimately, completion of these screens will provide the field with a critically needed data set that can guide efforts to uncover highly synergistic agents that potentiate the effects of this promising treatment modality.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129040</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mobility politics : local ideologies in the multi-jurisdictional metropolis</title>
<link>https://hdl.handle.net/1721.1/129039</link>
<description>Mobility politics : local ideologies in the multi-jurisdictional metropolis
Freemark, Yonah(Yonah Slifkin)
What is the interplay between local politics and metropolitan infrastructure planning in the context of the multi-jurisdictional governance of contemporary urban regions? I interrogate, first, how cities make policy when many governmental organizations are involved in city planning. And I ask, second, how politics--in the form of partisan affiliations and personal ideologies--influences political officials' decisions and ultimately the designs of new transportation projects and adjacent development. I develop a new theory for how regional planning works. I first show that, even when deprived of de jure jurisdiction over transportation projects and land-use planning, local governments harness their perceived democratic legitimacy to exert de facto power over planning. Second, I demonstrate that they expand this power through alliances with other localities, structured on the concept of mutual deference. Third, I offer new evidence that local action on land-use and transportation planning is differentiated by partisanship, beyond typical explanations of municipal choices based on demographics or economics. Fourth, I develop a typology of land-use ideologies held by local officials, structured both by differences in views on the left/right spectrum and by preferences for the scale of new spatial development, which I use to further explain heterogeneous local action. Finally, I show how actors representing multiple jurisdictions and holding contrasting ideological viewpoints coalesce around a single regional transit project by adjusting for these ideologies in the planning process. I examine six transit infrastructure projects in France and the United States. 
For each, I conduct interviews and archival research. My comparative research approach--which operates across country and project levels--lets me decipher common and distinctive traits within each case, detect how officials promote goals independently and through alliances, and identify the influence of partisanship and officials' ideologies on outcomes.
Thesis: Ph. D. in Urban and Regional Studies, Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 427-444).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129039</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast modeling of multi-phase mixture transport in piston/ring/liner system via GAN-augmented progressive modeling</title>
<link>https://hdl.handle.net/1721.1/129038</link>
<description>Fast modeling of multi-phase mixture transport in piston/ring/liner system via GAN-augmented progressive modeling
Zhang, Qin, Ph.D., Massachusetts Institute of Technology.
As a continued effort to advance understanding of the power cylinder system and expand design capabilities, we develop a modeling framework for multi-phase macro mixture transport that integrates all length scales, time scales, and flow regimes using a hybrid approach combining deterministic modeling and machine learning. This framework considers various mechanical and physical processes, including ring dynamics, gas flow, oil redistribution, and multi-phase transport, to paint a detailed picture of the global lubrication environment in the piston/ring/liner system. The main contributions of this thesis can be summarized as follows: (1) a modular architecture that decouples the various processes to manage complex dependencies, (2) fast inference of flow separation and vortices near ring gaps via a physics-informed Generative Adversarial Network, and (3) a lower-bound estimate of oil consumption based on the "healthy system" oil distribution pattern. This thesis provides a powerful modeling methodology for fast modeling and monitoring of oil consumption and PM emissions from IC engines, which are of immediate economic, environmental, and health concern.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 177-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129038</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aqueous reactivity of glassy industrial byproducts in alternative cementitious systems</title>
<link>https://hdl.handle.net/1721.1/129037</link>
<description>Aqueous reactivity of glassy industrial byproducts in alternative cementitious systems
Uvegi, Hugo Jake.
Alkali-activated, geopolymeric, and other novel binders offer an opportunity to curb the carbon footprint associated with ordinary Portland cement (OPC). CO₂ emissions inherent to source-material processing (i.e., firing of limestone at 1450 °C), combined with annual OPC production volumes of 4.1 billion metric tons, account for an estimated 5-11% of global annual greenhouse gas (GHG) emissions. Material substitution with lower-footprint resources is therefore necessary for GHG impact mitigation. Glassy silica-, alumina-, lime-, and/or alkali-rich industrial byproducts (IBs) exhibit the properties necessary to achieve emissions reductions while preserving the final product attributes expected of cementitious binders. Research and industry have both focused primarily on metakaolin and IBs such as blast furnace slag and coal fly ash as supplementary and alternative cementitious precursors. Given projected limitations in the supply of such IBs, it is imperative that we efficiently expand the materials search to other useful precursor candidates. This thesis focuses on chemical characterization and kinetic reactivity analysis of lesser-studied glassy materials through a combined experimental-computational approach, resulting in (1) physicochemical drivers for material aqueous reactivity and (2) a framework for evaluating new materials. First, I describe laboratory experiments involving reaction of a siliceous mixed-feedstock Indian biomass ash in aqueous sodium hydroxide solutions with selectively present lime and alumina sources. These experiments respectively yield tobermoritic calcium silicate hydrate products (Ca/Si ≈ 0.6-1) and semi-crystalline zeolite / geopolymer products (Si/Al ≈ 1); these compositional ratios are known to be relevant to final material properties. Through this work, I demonstrate a novel approach to calculating reaction product composition using spectroscopic solution analysis of dissolution / precipitation experiments. 
Subsequently, I describe computational efforts to mine literature-reported data for potential precursor materials. This results in a database of material compositional and physical property data represented on a SiO₂-Al₂O₃-CaO ternary diagram. Finally, I employ supervised and semi-supervised computational models, which confirm log-linear relationships between glass dissolution rates (i.e., log₁₀(rate)) and pH, inverse temperature (1/K), and glass connectivity (i.e., non-bridging oxygens per tetrahedron). While less interpretable, black-box models are observed to be more robust to the presence of additional features. Throughout the research program, reactivity is understood via material dissolution in aqueous solutions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 177-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129037</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precipitation variability and change over Morocco and the Mediterranean</title>
<link>https://hdl.handle.net/1721.1/129036</link>
<description>Precipitation variability and change over Morocco and the Mediterranean
Tuel, Alexandre.
Water is a critical factor limiting economic and social development in Morocco and the Mediterranean Basin. In addition to strong seasonality and high inter-annual variability, annual precipitation remains low (&lt;500 mm) across much of the region. Furthermore, the situation is not expected to improve under climate change, as models project a sustained decline in precipitation in the Mediterranean, most pronounced during the winter season. Despite the significance of such projections, a comprehensive theory for Mediterranean winter climate change is still lacking. Here, we adopt a multi-faceted approach to investigate precipitation variability and change over Morocco and the Mediterranean, with a focus on the resulting water availability. First, we link inter-annual variability of seasonal precipitation in Morocco to global sea-surface temperatures, and develop empirical forecast models that can predict up to 35% of this variability with a one-month lead time. Turning our attention to regional climate change processes and impacts, we show how future winter precipitation trends in the Mediterranean result directly from projected circulation anomalies. The enhanced advection of dry air from the Sahara Desert caused by these anomalies is key in causing precipitation to decline over Morocco. In addition, a major contribution of this work is to propose a physical explanation for the circulation trends involving planetary-scale circulation shifts and reduced warming of the Mediterranean Sea compared to land. We develop high-resolution regional climate simulations over Morocco to assess future risks from drought and weather extremes relevant to agriculture. 
Our results point to robust declines of 25-45% in annual precipitation and confirm the physical drivers identified at the regional scale. Because snow is such an important component of the water cycle in this semi-arid region, we also investigate snowpack dynamics in the High Atlas and quantify components of the snow water balance for the first time. Future trends in snowpack and associated runoff are also investigated: even in the best case, snowpack volume will decline by at least 60%, which, combined with increased air dryness, will likely reduce mountain runoff by 60%. Our findings have important implications for climate change adaptation and water management in Morocco, particularly in agriculture, which uses 90% of all available water.
Thesis: Ph. D. in Hydrology, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 264-287).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129036</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The genetic landscape of protein-protein interaction specificity</title>
<link>https://hdl.handle.net/1721.1/129035</link>
<description>The genetic landscape of protein-protein interaction specificity
Lite, Thúy-Lan Võ.
Protein-protein interaction specificity is often encoded at the primary sequence level, and by just a few interfacial residues. Collectively, these residues play both positive and negative roles, promoting a desired, cognate interaction and preventing non-cognate interactions, respectively. However, for most protein-protein interactions, the contributions of individual specificity residues are poorly understood and often obscured by the robustness and degeneracy of protein interfaces. Using bacterial toxin-antitoxin systems as a model, we use a variant of deep mutational scanning to dissect the positive and negative contributions of antitoxin residues that dictate toxin specificity. By screening a combinatorially complete library of antitoxin variants, we uncover a distribution of fitness effects for individual interface mutations measured across hundreds of genetic backgrounds. We show that positive and negative contributions to specificity are neither inherently coupled nor mutually exclusive. Further, we argue that the wild-type antitoxin may be optimized for specificity, because mutations that further destabilize the non-cognate interaction also weaken the cognate interaction, and no mutations strengthen the cognate interaction. By comparing crystal structures of paralogous complexes, we provide a structural rationale for these observations. Finally, we use a library approach to identify hundreds of novel systems that are insulated from their parental systems and that carry only two mutations - a negative specificity element on the toxin, and one on the antitoxin. This result demonstrates that highly similar (and in this case, nearly identical) complexes can be insulated using compensatory mutations of individually large effect. Collectively, this work provides a generalizable approach to understanding the logic of molecular recognition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129035</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the landscape of aminoacyl-tRNA synthetase protein production in Bacillus subtilis</title>
<link>https://hdl.handle.net/1721.1/129034</link>
<description>Characterizing the landscape of aminoacyl-tRNA synthetase protein production in Bacillus subtilis
Parker, Darren John.
The phenotype of a cell is a consequence of both the identity of the genes in the genome and the magnitude of their expression into proteins. While the biochemical function of many proteins has been uncovered, for most it is unclear how important native protein abundances are for cell fitness. Furthermore, linking changes in abundances with downstream effects on enzymatic output, pathway function, and ultimately cell fitness is unexplored in nearly all cases. Here I use a model enzyme family, the aminoacyl-tRNA synthetases (aaRS), to explore how sensitive Bacillus subtilis is to changes in aaRS production, from the molecular to the phenotypic level. This combination of protein levels, functional output, and fitness leads to a complete "fitness landscape" for the aaRS proteins and provides a framework for future study in quantitative biology. In Chapter I, I outline the conceptual questions explored in this thesis, review the current understanding of bacterial translation and aaRS function, and note the various regulatory strategies bacteria use to adapt to perturbations. In Chapter II, I find that the aaRS proteins are produced at levels that optimize the growth rate of cells despite the presence of uncharged tRNAs. These native levels sit near a 'fitness cliff', as the underlying molecular processes of tRNA charging, translation, and regulation are sensitive to reductions, but not increases, in synthetase production. In Chapter III, I complete the characterization of the aaRS fitness landscapes by exploring the source of the fitness defects of aaRS overproduction. In Chapter IV, I present a novel protocol for RNA-seq library preparation that reduces the cost and time associated with generating transcriptomic datasets. In Chapter V, using the aforementioned protocol, I characterize the transcriptomes of over 70 strains within the Escherichia coli single-gene knockout collection. 
With the help of a colleague, we find that strong selective pressure to induce genes involved in motility leads to a large amount of transcriptome heterogeneity within the collection. Finally, in Chapter VI, I discuss the results of my work, setting up future directions within the context of gene expression, bacterial physiology, and beyond.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129034</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ge and GeSi electroabsorption modulator arrays via strain and composition engineering</title>
<link>https://hdl.handle.net/1721.1/129033</link>
<description>Ge and GeSi electroabsorption modulator arrays via strain and composition engineering
Ma, Danhao.
Electronic and photonic integrated circuits serve as a promising platform for telecommunications and sensing applications. Electroabsorption modulators allow fast modulation, a small device footprint, and low power consumption. Epitaxially grown GeSi films on SOI substrates are a suitable materials platform for integrated modulator applications. Adjusting a modulator's operation wavelength and integrating modulators into a system for broadband modulation are two major challenges of fabricating on-chip modulator arrays for telecommunication. Unlike Si MZI modulators, GeSi electroabsorption modulators are not broadband, because the Franz-Keldysh effect restricts their working region to near the absorption edge. Optimization of a modulator material for a target wavelength can be achieved by tuning the material composition or applying strain to the material. In the conventional approach, realizing an integrated system with broadband modulation requires multiple electroabsorption modulators to be fabricated individually and assembled onto a chip, and each fabrication step adds cost to design and processing. Integrating more modulators for multiple operating wavelengths allows broader optical band coverage and higher optoelectronic data processing capacity, which is desirable at lower cost, with a simpler layout, and with easier integration into electronic and photonic circuits. In this thesis work, a one-for-all strained GeSi modulator array design is proposed and demonstrated to cover a broad telecommunication band, with multiple modulators designed and fabricated simultaneously in the same process flow. A stressor layer applies a homogeneous strain to a waveguide modulator. Changing the modulator width changes the strain in the modulator, tuning the material bandgap and thereby adjusting the modulator operation wavelength. Modulators made of the same material can thus operate at various wavelengths with the same stressor layer, with a simplified layout and device process flow. 
The matrix of investigation consists of two compositions (Ge and Ge₀.₉₉Si₀.₀₁) and three types of strain (compressive, tensile, and no strain). Individual GeSi EAMs with waveguide widths less than 2 μm have demonstrated an improved extinction ratio/insertion loss value from 1 to 1.7, the highest value among Si Mach-Zehnder and GeSi electroabsorption modulators. Strained Ge₀.₉₉Si₀.₀₁ modulator arrays have demonstrated a broad optical bandwidth of ~100 nm across the C- and L-bands in telecommunication. An ultralow insertion loss of 2 dB and modulation speeds above 100 GHz are achievable with minor improvements in electrode design. Increasing the Si composition to 4% allows a strained Ge₀.₉₆Si₀.₀₄ modulator array to cover optical wavelengths from 1300 nm to 1450 nm. Strained GeSi modulator and detector arrays can be fabricated in the same process flow with the same stressor layers to integrate transmitters and receivers on a single chip with a simplified design layout and fabrication procedure. This presents a promising platform for integrated photonic transceivers with ultrawide optical coverage across the entire telecommunication bands.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 151-159).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129033</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An organoid platform to study alveolar stem cells in lung generation and cancer</title>
<link>https://hdl.handle.net/1721.1/129032</link>
<description>An organoid platform to study alveolar stem cells in lung generation and cancer
Naranjo, Santiago(Santiago Jose)
Lung adenocarcinoma (LADC) remains the most common and lethal cancer type worldwide. Although recent breakthroughs using a new class of immune-modulatory therapeutics have improved patient survival in the clinic, the majority of patients still invariably succumb to this disease, highlighting the importance of improving treatment strategies. A wide variety of models have been developed to study LADC. Cell line- and transplant-based models offer rapid and flexible platforms for discovering and testing novel therapeutics using patient-derived specimens. On the other hand, genetically engineered mouse models (GEMMs) recapitulate key aspects of human LADC, including initiation from normal pulmonary epithelial cells and progression into a malignant state. The development of organoid technology has revolutionized the way we model cancer and a vast number of other biological phenomena. Organoids are cultured miniature organs derived from normal adult stem cells that display self-renewal, differentiation capacity, and remarkable genetic stability. These features have facilitated the creation of next-generation cancer models that combine the best features of their predecessors. The alveolar type 2 (AT2) cell represents the most prominent cell-of-origin of LADC. These cells serve as stem cells in the adult lung to support tissue turnover during homeostasis and regeneration after injury. They accomplish this by self-renewing and differentiating into alveolar type 1 (AT1) cells. Using organoid technology, we have developed an improved system for cultivating alveolar organoids from normal murine lungs. We demonstrated that these organoids are positive for AT2 and AT1 markers and completely lack expression of basal and club cell markers. Critically, we observed long-term proliferative potential in these organoids. Using this improved culture system, we generated organoid models of LADC representing three distinct molecular subclasses of this disease. 
We found that Kras-, Braf-, and Alk-mutant organoids with Trp53 deficiency displayed mitogen-independent growth in vitro. Most strikingly, Kras-mutant, Trp53-inactivated organoids orthotopically transplanted into immunocompetent recipient mice formed tumors that displayed the histopathological characteristics of human LADC. Taken together, the work presented here demonstrates the power of organoid technology for building clinically relevant and experimentally flexible cancer models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129032</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Past price and trend effects in promotion planning : from prediction to prescription</title>
<link>https://hdl.handle.net/1721.1/129030</link>
<description>Past price and trend effects in promotion planning : from prediction to prescription
Cohen-Hillel, Tamar.
Sales promotions are a popular marketing strategy in which products are promoted using short-term price reductions to stimulate demand and increase sales. These promotions are widely used in practice by retailers, who must take into consideration both the direct and indirect effects of price promotions on consumers and, as a result, on demand. In this thesis, we consider the impact of two of these indirect effects on the promotion planning process. First, we consider the promotion planning problem for fast-moving consumer goods. The main challenge here is the negative indirect effect of promotions on future sales: while temporary price reductions substantially increase demand, in the periods following a temporary price reduction, retailers observe a slowdown in sales. To capture this post-promotion slowdown, we propose a new set of past prices (namely, the last price seen as well as the minimum price seen within a limited number of past periods) as features in the demand model. We refer to demand models that use this set of past prices as Bounded Memory Peak-End models. When tested on real-world data, our proposed demand model improved estimation quality relative to a traditional estimation approach, with a relative improvement in weighted mean absolute percentage error (WMAPE) of approximately 1-19%. In addition to the improvement in prediction accuracy, we analyze the sensitivity of our proposed Bounded Memory Peak-End demand model to demand misspecification. 
Through statistical analysis, and using principles from duality theory, we establish that even in the face of demand misspecification, the proposed Bounded Memory Peak-End model can capture demand with provably low estimation error and with low impact on the resulting optimal pricing policy. The structure of the proposed demand model allows us to derive fast algorithms that find the optimal solution to the promotion planning problem for a single item. For the case of promotion planning for multiple items, although we show that the problem is NP-hard in the strong sense, we propose a Polynomial Time Approximation Scheme that solves the problem efficiently. Overall, we show that using our proposed approach, the retailer can obtain an increase of 4-15.6% in profit compared to current practice. Second, we consider the promotion targeting problem for trendy commodities. In the case of trendy commodities, demand is driven, among other factors, by social trends. Examples of trendy commodities include fashion items, wearable electronics, and smartphones. To capture demand with high accuracy, retailers must understand how the purchasing behavior of customers can impact the future purchasing behavior of other customers. Social media can be instrumental in learning how consumers impose trends on one another. Unfortunately, many retailers are unable to obtain this information due to high costs and privacy issues. This motivated us to develop a model that detects customer relationships based only on transaction data history. Incorporating the customer-to-customer trend in the demand estimation, we observe a significant improvement of 12% in the WMAPE forecasting metric. The proposed customer-to-customer trend-based demand model subsequently allows us to formulate the promotion targeting optimization problem in a way that considers the indirect effect of targeted promotions through trends. 
We show that the problem of finding the personalized promotion policy that maximizes the profit function is NP-hard. Nonetheless, we introduce an adaptive greedy algorithm that is intuitive to implement and finds a provably near-optimal personalized promotion policy. We tested our approach on Oracle data and observed a 5-12% improvement in terms of profit.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 261-268).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129030</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface plasmon enhanced fluorescence for biological imaging : from visible to short-wave infrared</title>
<link>https://hdl.handle.net/1721.1/129028</link>
<description>Surface plasmon enhanced fluorescence for biological imaging : from visible to short-wave infrared
Huang, Shengnan, Ph.D., Massachusetts Institute of Technology.
Fluorescence imaging offers high spatio-temporal resolution, low radiation dosage exposure, and low cost among available imaging modalities such as magnetic resonance imaging, computerized tomography, and positron emission tomography. Imaging probes of high emissivity and photostability are the key to achieving fluorescence imaging with a high signal-to-background ratio (SBR). One promising approach to developing highly bright and stable imaging probes is surface plasmon enhanced fluorescence. In the first part of the thesis, we develop a fluorescent probe with high site-specificity and emission efficiency by exploiting the targeting specificity of the M13 virus and co-assembling plasmonic nanoparticles and visible dye molecules on the viral capsid. Practical factors controlling fluorescence enhancement, such as nanoparticle size and dye-to-nanoparticle distance, are studied in this project. Lastly, the highly fluorescent probe is applied for in vitro staining of E. coli. The methodology in this work is amenable to developing a wide range of affinity-targeted fluorescent probes using biotemplates. Compared to the visible and near-infrared spectra, the short-wave infrared (SWIR, 900-1700 nm) spectrum promises high spatial resolution and deep tissue penetration for fluorescence imaging of biological systems, owing to low tissue autofluorescence and suppressed tissue scattering at progressively longer wavelengths. In the second part of the thesis, a bright SWIR imaging probe consisting of small SWIR dyes and gold nanorods is developed for in vivo imaging. Fluorescence enhancement is optimized by tuning the dye density on the gold nanorod surface. The SWIR imaging probes are applied for in vivo imaging of ovarian cancer.
The effect of targeting modality on the intratumor distribution of the imaging probes is studied in two different orthotopic ovarian cancer models. Lastly, we demonstrate that the plasmon-enhanced SWIR imaging probe has great potential for fluorescence imaging-guided surgery by showing its capability to detect submillimeter-sized tumors. Apart from enhancing SWIR down-conversion emission, surface plasmon enhanced SWIR up-conversion emission is another promising approach to achieving "autofluorescence-free" imaging with minimal tissue scattering. In the third part of the thesis, we use gold nanorods to enhance the up-conversion emission of small SWIR dyes. The mechanism of surface plasmon enhanced up-conversion emission is studied. The up-conversion fluorescence shows much higher SBR than down-conversion fluorescence in both non-scattering biological solutions and scattering media. Lastly, we demonstrate, for the first time, in vivo imaging using SWIR up-conversion fluorescence with exceptional image contrast.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 139-147).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129028</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The persistence of haploinsufficiency and its role in genome evolution</title>
<link>https://hdl.handle.net/1721.1/129024</link>
<description>The persistence of haploinsufficiency and its role in genome evolution
Morrill, Summer Ashlee.
In diploid organisms there are two copies of every gene, one from each parent. While the majority of genes are robust to deletion of one of the two copies, a subset of genes remains highly dosage sensitive, causing a significant decrease in fitness when heterozygously deleted. These genes, known as haploinsufficient (HI) genes, are present in eukaryotic species from yeast to humans. Why haploinsufficiency persists over evolutionary time is not known. To answer this, I systematically tested two existing models of haploinsufficiency: 1) the dosage stabilizing hypothesis, which states that haploinsufficiency is caused by imbalances among protein complex members, and 2) the insufficient amounts hypothesis, which states that haploinsufficient gene products are limiting for growth. In this thesis I find that having a single extra copy of haploinsufficient genes was sufficient to cause a growth defect in Saccharomyces cerevisiae. This showed that HI genes are sensitive to both over- and under-expression. Although having an extra copy of HI genes resulted in heightened sensitivity to proteotoxic stress agents, proteotoxicity could not wholly explain the fitness defect that occurred when HI genes were heterozygously deleted. Haploinsufficiency phenotypes were still present even when all members of a complex were deleted at once, restoring protein balance but not expression levels. In creating a new dosage sensitivity dataset by pooled fitness competition, I found that genes sensitive to increased copy number and HI genes are not mutually defined. Altogether, these data suggested that HI genes are unique among dosage sensitive genes, and that HI genes must also be rate-limiting for maximal growth.
Many HI genes showed strong evidence of growth-limiting phenotypes, including ribosomal genes and genes involved in protein folding. I propose a "dosage-stabilizing" model for haploinsufficiency, which states that HI genes are unable to increase or decrease their expression without a fitness penalty. This is due both to the growth-limiting nature of HI genes and to the proteotoxicity of dosage imbalance. Under these two selective pressures, HI genes have very narrow ranges of expression and are unable to modulate expression over time. This has caused haploinsufficiency to persist throughout the evolution of the eukaryotic genome.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129024</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technologies of perception : searches for life and intelligence beyond Earth</title>
<link>https://hdl.handle.net/1721.1/129021</link>
<description>Technologies of perception : searches for life and intelligence beyond Earth
Webb, Claire Isabel.
Scientists in the late 1950s in the United States gained technological capabilities to test for signs of extraterrestrial life. While exobiologists developed visual techniques to detect whether sites beyond Earth might harbor microbes, "biosignatures," radio astronomers searched for extraterrestrial intelligence (SETI) in the form of "technosignatures." This dissertation explores how scientists since the Space Age have constructed experimental assemblages to imagine, relate to, and investigate the alien and exotic microbes -- unknown, indeed, as-yet-imperceptible, objects -- through familiar sensory metaphors of seeing (exobiologists) and listening (SETI scientists). From historical material gathered at various D.C. archives, the American Philosophical Society, and the National Library of Medicine, I show how exobiologists' technologies of vision rendered anew images of the Moon, Mars, Venus, and the Earth from afar and at surface, affording scientists the ability to conceptually anticipate relationships between their world and others. Through an epistemic practice I call "gaze-scaling," they yoked the concept of "island" to "planet," casting extraterrestrial sites as fragile laboratories of life that beckoned exploration. I next draw from immersive participant observation since 2016 to engage ethnographic sonar on the SETI group Breakthrough Listen, based at U.C. Berkeley, California. I analyze how they construct criteria of intelligence through "experiments of anticipation" that are parametrized to hear from a commensurable subject. I theorize "figures of listening" both in observational protocols and as a preemptive attunement to Other intention, acts that configure an alien who would be not just perceptible, but relatable.
If exobiologists envisioned universal standards of biochemistry that would map life's common origins, SETI astronomers have traded on imagined superhuman characteristics of the alien -- more benevolent, wiser, and technologically superior -- to suggest human futures. I outline how the alien has been imagined through three potent analogical figures: as artifacts, animals, and angels. Furnished by feminist epistemologies and queer theories of care around multispecies becomings -- traditions that have persistently challenged ontological stability across species, gender, race, and spacetime -- I theorize those analogies as acts of "reflexive alienation": a mode of worldmaking in which scientists imagine Others imagining them. Future-oriented extraterrestrial objects held in abeyance cultivate Earthly concepts of being.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, September, 2020; Page 229 blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 217-228).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129021</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and characterization of immunogenic genetically engineered mouse models of pancreatic cancer</title>
<link>https://hdl.handle.net/1721.1/129020</link>
<description>Development and characterization of immunogenic genetically engineered mouse models of pancreatic cancer
Lambert, Laurens J. (Laurens Johannes)
Insights into mechanisms of immune escape have fueled the clinical success of immunotherapy in many cancers. However, pancreatic cancer has remained largely refractory to checkpoint immunotherapy. To uncover mechanisms of immune escape, we have characterized two preclinical models of immunogenic pancreatic ductal adenocarcinoma (PDAC). In order to dissect the endogenous antigen-specific T cell response in PDAC, a lentivirus encoding the Cre recombinase and a tumor-specific antigen (SIINFEKL, OVA[subscript 257-264]) was delivered to Kras[superscript LSL-G12D/+]; Trp53[superscript flox/flox] (KP) mice. We demonstrate that KP tumors show distinct antigenic outcomes: a subset of PDAC tumors undergoes clearance or editing by a robust antigen-specific CD8+ T cell response, while a fraction undergoes immune escape. Subsequently, we have developed an immunogenic pancreatic tumor organoid orthotopic transplant model. In this model, immunogenic pancreatic tumors manifest divergent phenotypes: 40% of tumor organoids do not form tumors ("non-progressors"), whereas 50% of organoids form aggressive tumors despite maintaining antigen expression and a demonstrable T cell response ("progressors"). Additionally, a subset (10%) of tumors shows an intermediate phenotype, possibly reflective of an immune equilibrium state. We have further characterized the CD8+ T cell response phenotypically and transcriptionally to understand immune escape in this model. Our analyses reveal unexpected T cell heterogeneity and the acquisition of T cell dysfunction. Therapeutic combinatorial targeting of co-inhibitory receptors identified on dysfunctional antigen-specific CD8+ T cells led to dramatic regression of aggressive pancreatic tumors. Finally, we demonstrate that human CD8+ T cells isolated from pancreatic tumors co-express co-inhibitory receptors, suggesting that T cell dysfunction may be operational in human disease.
This is the first demonstration of immunoediting in an autochthonous and organoid-based model of pancreatic cancer. Further characterization of these preclinical model systems will enable rational design of novel clinical immunotherapeutic strategies for treatment of this devastating disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis. Vita. Page 191 blank.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129020</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling of piston pin lubrication in internal combustion engines</title>
<link>https://hdl.handle.net/1721.1/129019</link>
<description>Modeling of piston pin lubrication in internal combustion engines
Meng, Zhen, Ph.D., Massachusetts Institute of Technology.
The piston pin joins the piston and the connecting rod, transferring the linear force on the piston to rotate the crankshaft, which ultimately delivers the engine's power. The interfaces between the piston pin and the pin bore, as well as the connecting rod small end, are among the most heavily loaded tribological pairs in engines. Piston pin seizure still occurs frequently during engine development, and the solution often comes from applying expensive coatings. Furthermore, it has been found that the friction loss associated with the pin can be a significant contributor to the total engine mechanical loss. Yet a basic understanding of the lubrication behavior of the pin interfaces is lacking. This work aims to develop a piston pin lubrication model that considers all the important mechanical processes. The model predicts the dynamics of the pin and the lubrication of the interfaces between the pin and the pin bore as well as the small end. The model couples the dynamics of the pin with the structural deformation of the mating parts, the hydrodynamic and boundary lubrication of all the interfaces, and oil transport. The model is implemented with an efficient and robust numerical solver with second-order accuracy to compute this highly stiff system. Preliminary results from applying the model to a gasoline engine show that boundary lubrication is the predominant contributor to the total friction. As a result, the interface with more asperity contact tends to hold the pin with it; thus, the pin friction loss comes from the interface with less contact.
Solely from a friction-reduction point of view, ensuring efficient hydrodynamic lubrication in one interface is sufficient. Furthermore, as the heavy load is supported over several small areas, the mechanical and thermal deformation of all the parts is critical to load distribution, oil transport, and the generation of hydrodynamic and asperity contact pressure, justifying the elements integrated in the model. This work represents the first step toward establishing a more comprehensive engineering model that helps the industry understand pin lubrication and find cost-effective solutions to the existing challenges.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 120-121).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129019</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technological change &amp; the changing nature of grassroots development organizations : the case of the Self-Employed Women's Association of India (SEWA)</title>
<link>https://hdl.handle.net/1721.1/129017</link>
<description>Technological change &amp; the changing nature of grassroots development organizations : the case of the Self-Employed Women's Association of India (SEWA)
Ferreira Cardoso, Cauam.
Grassroots development organizations (GDOs) are defined by their hybrid identity as part social movement and part NGO. In the last decade, they have received renewed attention from the international development community due to their unique governance structure, which, unlike that of their traditional NGO peers, empowers people living in poverty to organize and help themselves achieve political and social welfare goals. With the large-scale adoption of information and communication technologies (ICTs) in developing countries, optimism over the feasibility of an inclusive, bottom-up development model grew even higher. In practice, however, only a handful of GDOs managed to generate impact at scale, and even among those that did, it remains unclear whether ICTs had a positive effect on their performance. As academics and policy makers search for the appropriate role of technological products and services in improving the lives of vulnerable populations, too many overlook the fact that GDOs, as organizations, endure their own process of adaptation to technological change. In this dissertation, I address this gap in the literature by developing an in-depth, mixed-methods case study of one of the world's most prominent GDOs, the Self-Employed Women's Association of India (SEWA). Drawing from organizational theory, I assess the extent to which SEWA is influenced by the technologies it uses, and demonstrate that the operational gains produced by ICTs can have the unintended effect of weakening a GDO's most important comparative advantage: its deep community ties and accountability to its members.
At SEWA, ICTs facilitated transactional interactions at the expense of relational ones, making it easier to become a service provider and more difficult to sustain a social movement. They reinforced the performance-based accountability typical of market/client-provider relationships, but limited the quantity and quality of the personal interactions that ultimately keep GDOs and their members connected through a common cause. Such findings have at least two implications for GDOs, and for "bottom-up" development strategies more generally. First, technology is not a neutral input of development interventions but rather influences people and organizations in different, often contradictory ways. Second, a narrow interpretation of ICTs' role in development obscures the fact that there is more to information and communications than their technological characteristics. The closer development organizations are to the people they serve, the more their work relies on iterative, context-specific relationships that are not always replicable through digital means. To engage with ICTs effectively, GDOs will be better served if they complement their technology upgrading efforts with proactive countervailing measures that promote a shared ideology, identity, and collective goals.
Thesis: Ph. D. in International Development, Massachusetts Institute of Technology, Department of Urban Studies and Planning, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 277-291).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129017</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Input-output biomolecular systems</title>
<link>https://hdl.handle.net/1721.1/129016</link>
<description>Input-output biomolecular systems
Shah, Rushina (Rushina Jaidip)
The ability of cells to sense and respond to their environment is encoded in biomolecular reaction networks, in which information travels through processes such as production, modification, and removal of biomolecules. These reaction networks can be modeled as input-output systems, where the input, state and output variables are concentrations of the biomolecules involved in these reactions. Tools from non-linear dynamics and control theory can be leveraged to analyze and control these systems. In this thesis, we study two key biomolecular networks. In part 1 of this thesis, we study the input-output behavior of signaling systems, which are responsible for the transmission of information both from outside and from within the cells, and are ubiquitous, playing a role in cell cycle progression, survival, growth, differentiation and apoptosis. A signaling pathway transmits information from an upstream system to downstream systems, ideally in a unidirectional fashion. A key obstacle to unidirectional transmission is retroactivity, the additional reaction flux that affects a system once its species interact with those of downstream systems. In this work, we identify signaling architectures that can overcome retroactivity, allowing unidirectional transmission of signals. These findings can be used to decompose natural signal transduction networks into modules, and at the same time, they establish a library of devices that can be used in synthetic biology to facilitate modular circuit design. In part 2 of this thesis, we design inputs to trigger a transition of cell-fate from one cell type to another. The process of cell-fate decision-making is often modeled by means of multistable gene regulatory networks, where different stable steady states represent distinct cell phenotypes.
In this thesis, we provide theoretical results that guide the selection of inputs that trigger a transition, i.e., reprogram the network, to a desired stable steady state. Our results depend uniquely on the structure of the network and are independent of specific parameter values. We demonstrate these results by means of several examples, including models of the extended network controlling stem-cell maintenance and differentiation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 194-206).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129016</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing CO₂ fixation by synergistic substrate cofeeding</title>
<link>https://hdl.handle.net/1721.1/129014</link>
<description>Enhancing CO₂ fixation by synergistic substrate cofeeding
Liu, Nian, Ph.D., Massachusetts Institute of Technology.
The irrevocable rise of atmospheric CO₂ levels has prompted the development of scalable carbon fixation technologies in recent years. To this end, biological non-photosynthetic methods serve as promising leads since they not only sequester CO₂, but also convert it into a variety of value-added fuels, chemicals, and pharmaceuticals with high specificity. Additionally, as the organisms responsible for fixing CO₂ derive energy from free electrons or electron carriers such as H₂, the process can interface with existing photovoltaics to achieve a high energy efficiency (~8%) that outcompetes photosynthetic systems (&lt;1%). In this thesis, we describe a non-photosynthetic CO₂-fixation approach that sequentially utilizes an acetogenic bacterium and an oleaginous yeast to accomplish the conversion of H₂/CO₂ into lipid-based biodiesel with acetate as the key intermediate. Despite its feasibility, this two-stage system suffers from the slow metabolism of the two chosen microbes. To remedy this issue, we began by identifying the limiting factors, which were determined to be insufficient ATP availability for CO₂ fixation in the first stage and inadequate NADPH levels for acetate-driven lipogenesis in the second stage. Correspondingly, a dual carbon source cofeeding scheme was developed to promote mixed substrate metabolism in the two organisms, synergistically stimulating CO₂ reduction into products. We demonstrate that adding minor amounts of glucose to acetogen cultures saturated with H₂ enhances net CO₂ conversion into acetate by simultaneously satisfying ATP and e⁻ demands at the appropriate ratio.
Similarly, feeding the resulting acetate in conjunction with small quantities of gluconate balances the supply of carbon, ATP, and NADPH, which significantly accelerates lipid formation. The work advances our understanding of systems-level control of metabolism and can be applied in many other situations as an alternative tool for enhancing strain performance in metabolic engineering. Many products other than biofuels can also be synthesized from CO₂ using the two-stage non-photosynthetic design, as long as the proper organisms are employed. As such, in order to expand the utility of the process, we also developed a data-driven host selection framework. By implementing a recommender system algorithm on strain-product-titer information collected from the literature, we aimed to systematically summarize the criteria used for choosing an organism given a certain product of interest, and vice versa. The results revealed an implicit principle that governs the selection of model versus non-model host organisms, which could benefit many industrial biotechnological applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129014</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays in economics</title>
<link>https://hdl.handle.net/1721.1/129013</link>
<description>Three essays in economics
Furukawa, Chishio.
This thesis consists of three essays on diverse topics with a shared emphasis on statistical models combining theory and empirics. The first and third essays examine the role of cognitive limitations in understanding biases in communication and learning. The second essay, joint with Masao Fukui, highlights the role of distributional assumptions about infection rates for epidemiological predictions, responding to the recent COVID-19 outbreak. The first chapter considers the effects of aggregation frictions on scientific communication and shows that publication bias emerges even when researchers are unbiased and communicate their findings optimally for readers. Specifically, when readers are cognitively constrained, they may consider only the binary conclusions rather than the estimates of the papers. Under such aggregation frictions, researchers are shown to omit noisy null results and inflate marginal results. This chapter presents evidence consistent with this prediction and develops a new bias correction method, called the stem-based correction method, that is robust under the predictions of this and other models of publication selection processes. The second chapter examines the role of infection rate distributions for aggregate epidemiological dynamics in Susceptible-Infectious-Recovered (SIR) models. Specifically, we show that superspreading events (SSEs) of recent coronavirus outbreaks, including SARS, MERS, and COVID-19, follow a power law distribution with fat tails, or infinite variance. When embedding this distribution into stochastic SIR models, we find that idiosyncratic variations in SSEs generate important uncertainties in aggregate epidemiological dynamics. This result stands in contrast with the existing literature on stochastic SIR models, which has assumed thin-tailed distributions and thus concluded that idiosyncratic uncertainties are unimportant when the population is large.
The third chapter considers the impact of imperfect recall on experimentation decisions and the resulting inferences. When a Bayesian experimenter has imperfect recall of past actions and information, her decisions depend not only on a confidence level but also on the expectation the future self will hold for today's action. This expectation arises from the persistent prior belief and leads to biases toward conforming to it. Meditation, regulating one's attention with focus on the present, is shown to have an ameliorating effect: when attention is focused, the prior belief becomes essentially diffuse, so that the self-imposed expectation over behaviors becomes agnostic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129013</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on nominal rigidities, bounded rationality, and macroeconomic policy</title>
<link>https://hdl.handle.net/1721.1/129012</link>
<description>Essays on nominal rigidities, bounded rationality, and macroeconomic policy
Petri Castro, Mikel.
This thesis consists of three chapters about macroeconomic policy. In the first chapter, I study the empirical relationship between nominal rigidities and the real effects of monetary policy. Nominal rigidities lie at the core of macroeconomics. The empirical evidence suggests that prices and wages adjust sluggishly to aggregate shocks, while theoretical models justify why and to what extent these rigidities imply monetary non-neutrality. However, direct evidence on nominal rigidities being the actual channel for the transmission of these shocks is relatively scarce. I construct a highly disaggregated measure of regional price stickiness for the U.S. and use it to provide evidence of this channel. My results are in line with sticky price models, indicating that employment in more rigid industries and commuting zones tends to react more strongly to monetary policy shocks. In the second chapter, joint with Emmanuel Farhi and Iván Werning, we document the extreme sensitivity of New Keynesian models to fiscal policy announcements during a liquidity trap--a phenomenon we call the "fiscal multiplier puzzle". The response of current output to government spending grows exponentially in the horizon of the stimulus. Surprisingly, the introduction of rule-of-thumb hand-to-mouth agents, combined with deficit-financed stimulus, can easily generate negative multipliers that are equally explosive. This intuition translates to incomplete-markets heterogeneous-agent New Keynesian models, leading to large negative multipliers when taxes are backloaded. We construct a belief-augmented New Keynesian framework to understand the role played by expectations in shaping the fiscal multiplier puzzle. The key element behind this result is the extreme coordination of the demand and supply blocks under rational expectations. Common knowledge between these two blocks induces an inflation-spending feedback loop.
Government spending boosts aggregate demand and drives up inflation, which in turn leads to lower real rates and higher spending by households, increasing aggregate demand again. We break this strategic complementarity by introducing bounded rationality in the form of level-k thinking. In contrast to rational expectations, level-k multipliers are bounded and tend to zero over infinite horizons for all finite k. Moreover, level-k thinking interacts strongly with incomplete markets in two ways. First, the attenuation of the multipliers increases with the degree of market incompleteness for any level of k, especially in the future. Second, in contrast to complete markets, incomplete markets increase the magnitude of the multipliers for low levels of k when taxes are backloaded, making deficits more effective at stimulating the economy. In the third chapter, I explore the implications of downward nominal wage rigidities for fiscal policy and inflation in a liquidity trap. The standard Phillips Curve predicts that large declines in economic activity should be accompanied by large deflation episodes. I study whether downward nominal wage rigidity can explain the missing deflation during the Great Recession. To do so, I introduce wage rigidity into a standard cash-in-advance liquidity trap model. My results show that nominal wage rigidities are consistent with mild deflationary episodes only when the trap is expected to be very short-lived. Away from this case, the model predicts large deflations and drops in output, as in standard New Keynesian models. I also study the impact of fiscal policy in my setup, finding large multipliers that increase with the degree of wage rigidity. The main reason behind the effectiveness of government spending is its persistent effect on economic activity. Wage rigidity generates unemployment persistence due to pent-up wage deflation. Fiscal spending boosts aggregate demand and decreases deflationary pressures today.
This increases output today and in the future by relaxing the downward wage rigidity constraint in all subsequent periods. Keywords: nominal rigidities, price stickiness, monetary policy, regional, bounded rationality, incomplete markets, level-k, fiscal policy, downward nominal wage rigidity. JEL Classification: E52, E62, E7.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 139-144).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129012</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on electricity and matching markets</title>
<link>https://hdl.handle.net/1721.1/129011</link>
<description>Essays on electricity and matching markets
Karaduman, Ömer.
This thesis contains two chapters on the Electricity Markets and a chapter on Matching Markets. In the first chapter, I study how grid-scale energy storage affects the wholesale electricity market. The transition to a low-carbon electricity system is likely to require grid-scale energy storage to smooth the variability and intermittency of renewable energy. I investigate whether private incentives for operating and investing in grid-scale energy storage are optimal, and whether policies are needed that complement investments in renewables by encouraging energy storage. In a wholesale electricity market, energy storage systems generate profit by arbitraging inter-temporal electricity price differences. In addition, storage induces non-pecuniary externalities due to production efficiency and carbon emissions. I build a new dynamic equilibrium framework to quantify the effects of grid-scale energy storage and apply it to study the South Australian Electricity Market.; This equilibrium framework computes a supply function equilibrium using estimated best responses from conventional sources to observed variation in the residual demand volatility. Accounting for storage's effect on equilibrium prices is quantitatively important: previous methods that ignore this channel overestimate the profitability of operating a storage unit. The first set of results shows that although entering the electricity market is not profitable for privately operated storage, such entry would increase consumer surplus and total welfare and reduce emissions. A storage operator that minimizes the cost of acquiring electricity could further improve consumer surplus by twice as much. Importantly, a competitive storage market cannot achieve this outcome because other power plants distort prices.
These results argue for a capacity market to compensate a private firm for investing in storage.; The second set of results shows that at moderate levels of renewable power, introducing grid-scale storage to the system reduces renewable generators' revenue by decreasing average prices. For high levels of renewable generation capacity, storage increases the return to renewable production and decreases CO₂ emissions by preventing curtailment during low-demand periods. In the second chapter, I study how a large-scale wind power investment affects the wholesale electricity market. Renewable subsidies have been an influential device for wind power investment in many parts of the world. These policies help to lower emissions by offsetting high-emitting electricity generation with clean energy. For zero-emission targets, this transition towards renewable power should be accompanied by thermal generators' retirement to clean up the energy mix in the power sector.; In this paper, I build a framework to quantify the offset and revenue impact of large-scale wind power investment in a wholesale electricity market and apply it to study the South Australian Electricity Market. This equilibrium framework computes a supply function equilibrium using estimated best responses from conventional sources to observed variation in the residual demand volatility. I first show that reduced-form methods are biased as the scale of the additional capacity increases. My results highlight that with different investment sizes, the substitution patterns and negative revenue impact for wind power differ considerably. As the penetration level of wind power increases, electricity becomes cheaper. The offset and negative shock shifts from low-cost inflexible generators to high-cost flexible generators, while the revenue impact is the highest on existing renewable generation.; I also show considerable heterogeneity in price impact among different potential wind power projects.
These results have policy implications for renewable targets' long-run effects and for project selection given the subsidy scheme. In the third chapter, joint with Nikhil Agarwal, Itai Ashlagi, Eduardo Azevedo and Clayton Featherstone, I study the market failure in kidney exchange. We show that kidney exchange markets suffer from market failures whose remedy could increase transplants by 30 to 63 percent. First, we document that the market is fragmented and inefficient; most transplants are arranged by hospitals instead of national platforms. Second, we propose a model to show two sources of inefficiency: hospitals only partly internalize their patients' benefits from exchange, and current platforms suboptimally reward hospitals for submitting patients and donors. Third, we calibrate a production function and show that individual hospitals operate below efficient scale.; Eliminating this inefficiency requires either a mandate or a combination of new mechanisms and reimbursement reforms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 219-228).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129011</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-Time Calibration of Large-Scale Traffic Simulators: Achieving Efficiency Through the Use of Analytical Mode</title>
<link>https://hdl.handle.net/1721.1/129009</link>
<description>Real-Time Calibration of Large-Scale Traffic Simulators: Achieving Efficiency Through the Use of Analytical Mode
Zhang, Kevin, M. Eng., Massachusetts Institute of Technology.
Stochastic traffic simulators are widely used in the transportation community to model real-world urban road networks in applications ranging from real-time congestion routing and control to traffic state prediction. Online calibration of these simulators plays a crucial role in achieving high accuracy in the replication and prediction of streaming traffic data (i.e., link flows, densities). In order to be relevant in a real-time context, the problem must also be solved within a strict computational budget. The primary goal of this thesis is to develop an algorithm that adequately solves the online calibration problem for high-dimensional cases and on large-scale networks. In the first half, a new online calibration algorithm is proposed that incorporates structural information from an analytical metamodel into a general-purpose extended Kalman filter framework.; The metamodel is built around a macroscopic network model that relates calibration parameters to field measurements in an analytical, computationally tractable, and differentiable way. Using the metamodel as an analytical approximation of the traffic simulator improves the computational efficiency of the linearization step of the extended Kalman filter, making it suitable for use in large-scale calibration problems. The embedded analytical network model provides a secondary benefit of making the algorithm more robust to simulator stochasticity compared with traditional black-box calibration methods. In the second half, the proposed algorithm is adapted for the case study of online calibration of travel demand as defined by a set of time-dependent origin-destination matrices. First, an analytical network model relating origin-destination demand to link measurements is formulated and validated on the Singapore expressway network.; Next, the proposed algorithm is validated on a synthetic toy network, where its flexibility in calibrating to multiple sources of field data is demonstrated. 
The empirical results show marked improvement over the baseline of offline calibration and comparable performance to multiple benchmark algorithms from the literature. Finally, the proposed algorithm is applied to a problem of dimension 4,050 on the Singapore expressway network to evaluate its feasibility for large-scale problems. Empirical results confirm the real-time performance of the algorithm in a real-world setting, with strong accuracy in the estimation of sensor counts.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 197-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129009</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defect and electrical properties of high-K̳ dielectric Gd₂O₃ for magneto-ionic and memristive memory devices</title>
<link>https://hdl.handle.net/1721.1/129007</link>
<description>Defect and electrical properties of high-K̳ dielectric Gd₂O₃ for magneto-ionic and memristive memory devices
Kim, Sunho, Ph.D., Massachusetts Institute of Technology.
While high-κ dielectrics utilized in CMOS technology are noted for their highly insulating characteristics, they have demonstrated surprising electrolytic behavior as key components in a variety of thin film memory devices, including those based on magneto-ionic and memristive behavior. In this work, we focus on the rare earth sesquioxide, Gd₂O₃, a well-known high-κ dielectric that has exhibited a variety of electrolytic properties during the development and operation of the first magneto-ionic devices developed at MIT. Specifically, we focused our investigation on the defect chemistry and electrical properties of Gd₂O₃ in order to better understand the relationship between the structure, chemistry, processing conditions, and operating environment and the material's low-temperature ionic and electronic transport properties and the means for their optimization vis-à-vis memory device operation.; Phase- (monoclinic and cubic) and dopant-controlled (Ca, Ce, Sr, Zr) polycrystalline pellets of 8 different Gd₂O₃ systems were prepared to investigate various defect regimes in consideration of this material's polymorphism. We considered intrinsic anion-Frenkel disorder and electronic disorder, equilibration with the gas phase, water incorporation, and dopant incorporation in the defect modeling, taking into account the roles of crystallographic structure as well as oxygen ion defect and proton generation. The primary method utilized to characterize the defect chemistry and transport properties of Gd₂O₃ was the analysis of the dopant, pO₂ and temperature dependencies of the electrical conductivity extracted from complex impedance spectra obtained over the pO₂ range of 1 to 10⁻¹⁵ atm, for 5 isotherms between 700 and 900 °C with 50 °C steps and for a range of acceptor and donor dopants.; Based on the pO₂ dependency of conductivities, in light of the defect modeling, the majority point defects in each system were identified.
Electronic and ionic migration energies and thermodynamic parameters were extracted via the defect modeling and temperature dependencies of conductivities. In nearly all cases, the predominant charge carrier under oxidizing conditions at elevated temperatures was identified as the p-type electron-hole, largely due to oxygen excess non-stoichiometry in these systems. With decreasing pO₂, transport tended to switch from semiconducting towards ionic. Depending on phase, dopant type and concentration, temperature, and relative humidity, the predominant ionic conductivity was found to be via oxygen interstitials, oxygen vacancies, and/or protons, the latter given by the propensity of Gd₂O₃ to take up water in solid solution from the environment by the formation of OH• species.; Unexpectedly, defects in the denser and less symmetric monoclinic system exhibited higher ionic mobilities than those in the more open bixbyite structure. The hole electronic species in the investigated systems were found to migrate via the small polaron hopping mechanism with rather large hopping energies. This resulted in an inversion of hole and proton mobility magnitudes at reduced temperatures in the monoclinic system. Extrapolation of ionic and electronic defect conductivities to near room temperature, based on our derived defect and transport models, was not able to explain, on its own, the observed electrolytic properties of the Gd₂O₃ thin films utilized in magneto-ionic devices.; In an attempt to connect the transport properties obtained under equilibrium conditions at elevated temperatures with the behavior of Gd₂O₃ near room temperature, selected Gd₂O₃ thin films, prepared by pulsed laser deposition or sputtering, were investigated by complex impedance spectroscopy over the temperature range of 20-170 °C.
While films prepared under dry conditions were indeed found to be highly electrically insulating, films exposed to water vapor exhibited dramatically higher proton conductivities (more than ~10⁸×) than values extrapolated from high temperature. Parallel thermogravimetric analysis on Gd₂O₃ powder specimens, as a function of temperature, under high humidity conditions, demonstrated a correlation between uptake/loss of incorporated water and conductivity upon cooling and heating, respectively.; We can therefore conclude that the large disconnect between the electrical and electrolytic properties observed between high-κ dielectrics used in CMOS devices such as Gd₂O₃, and their much more highly conductive counterparts used in thin film memory devices, depends critically on the thin film processing conditions. High-κ dielectrics are fabricated in carefully controlled environments with low relative humidity, while research on, for example, Gd₂O₃-based magneto-ionic memory devices is performed under ambient laboratory conditions, where significant water uptake becomes possible at surfaces and grain boundaries. The results and insights obtained in this study can be expected to be applied in achieving further progress in the understanding and optimization of magneto-ionic, memristive, and other devices that rely on proton gating.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis. The "K̳̳" in title on title page appeared as subscript "K."; Includes bibliographical references (pages 127-134).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129007</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on job search and retraining</title>
<link>https://hdl.handle.net/1721.1/129006</link>
<description>Essays on job search and retraining
Ben Dhia, Aicha (Aicha Lucie)
This thesis comprises three essays in empirical labor economics. Broadly, the essays provide evidence on the existence and the effects of information barriers in situations of job search and retraining. Chapter 1 (coauthored with Esther Mbih) begins from the observation that little is known about how job seekers decide to enroll in a training program. Decisions related to job training might be undermined by informational gaps, especially about program costs, enrollment procedures, and expectations of reemployment chances. The paper reports the results of a low-cost intervention aimed at testing for the existence of misinformation about training costs and returns, and its impact on enrollment. Partnering with the French Public Employment Services and the largest training provider in France, we sent 50,000 emails advertising training opportunities to job seekers in four regions of France in late summer 2016.; We randomly added short messages on training costs, registration procedures, and training returns to the basic email template. A baseline survey reveals misperceptions about financial aspects of training participation among more than half of job seekers: they believe either that they need to pay to participate in a training program (45%) and/or that their unemployment benefits would be affected (30%). Further, half of respondents perceive enrollment procedures as complex or very complex. We find that receiving an email with a message emphasizing training returns in terms of employment more than doubles the likelihood that job seekers call back the training center. However, callback rates are low in absolute terms (less than one percent) and we detect no impact on enrollment one to six months after the intervention.
We provide suggestive evidence that the increased salience of basic information about training, rather than belief updating, is driving the effects on callbacks.; Chapter 2 (coauthored with Bruno Crépon, Esther Mbih, Louise Paul-Delvaux, Bertille Picard and Vincent Pons) shows the results of another large-scale randomized experiment to evaluate the impact of an online platform helping job seekers adopt effective job search strategies. The platform combines labor market data from the French public employment agency and personal data from individual profiles to recommend occupations and areas with high employment chances to users and to give them concrete tips to improve their job search methods. The experiment was conducted in collaboration with the French public employment agency on a sample of 212,277 job seekers from April to November 2017. An encouragement design led to a take-up rate of 26.2% in the treatment group and virtually zero in the control group. Following individual trajectories over 18 months after the intervention, we do not observe any impact on job seekers' search effort and search scope, whether occupational or geographical.; We find modest effects on search methods: job seekers using the website are more likely to rely on personal networks and to use resources provided by public employment services. However, we do not find any effect on self-reported well-being or on employment outcomes, either in the short run or in the medium run, indicating that more intensive interventions are required to bring unemployment down. Chapter 3 contributes to the debate on how to regulate the market for vocational training. Understanding the decision-making process of job seekers who benefit from public training is crucial to improve their matching with effective providers and increase competitive pressure on badly performing providers.
The chapter reports the results of an online survey of job seekers in France who had participated in a training program between January 2017 and April 2018. The survey aimed at understanding what they knew of the heterogeneous training providers and how they selected a center among them.; I find two main results. First, job seekers use very limited information when making their choices. Only a third of respondents compare different centers before choosing one, and to find a training provider almost all respondents use a single source of information, which for half of them is their caseworker. Second, job seekers take into account various factors beyond the probability of finding a job. Logistical considerations such as start date or distance to home play a more important role than provider characteristics such as employment performance or size and connections to firms. Taken together, these results may explain the low competitive pressure between job centers, which in turn may contribute to low value added.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 151-157).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129006</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Germanium-on-silicon virtual substrate for lateral multijunction photovoltaics</title>
<link>https://hdl.handle.net/1721.1/129005</link>
<description>Germanium-on-silicon virtual substrate for lateral multijunction photovoltaics
Zhao, Xueying, Ph.D., Massachusetts Institute of Technology.
Lateral multijunction photovoltaics based on III-V direct band gap semiconductors enable efficient energy conversion. However, lattice matching between cell and substrate requires the use of expensive Ge or III-V substrates, which limits widespread application of III-V solar cells. Cost reduction can be achieved by using a Ge-on-Si virtual substrate, in which a thin layer of Ge is grown on a relatively inexpensive Si substrate, thanks to the greater material abundance and larger wafer diameters of Si. However, the lattice mismatch between Si and Ge can bring about threading dislocations that can significantly impair the efficiency of solar cells. This thesis presents patterned epitaxial growth of pure Ge on Si wafers through ultra-high-vacuum chemical vapor deposition that achieves low threading dislocation density. This unlocks the potential for growing lattice-matched III-V photovoltaics of high quality on top of the virtual substrate. In addition, this thesis seeks to understand the mechanisms behind the trapping of dislocations. The dislocation studies in this thesis not only shed light on dislocation motion in Ge-on-Si epitaxy, but can be applied to other lattice-mismatched materials systems as well. Lastly, the potential of lateral multijunction photovoltaics is demonstrated through simulation approaches.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 79-86).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129005</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transfer learning and robustness for natural language processing</title>
<link>https://hdl.handle.net/1721.1/129004</link>
<description>Transfer learning and robustness for natural language processing
Jin, Di, Ph.D., Massachusetts Institute of Technology.
Teaching machines to understand human language is one of the most elusive and long-standing challenges in Natural Language Processing (NLP). Driven by the fast development of deep learning, state-of-the-art NLP models have already achieved human-level performance on various large benchmark datasets, such as SQuAD, SNLI, and RACE. However, when these strong models are deployed to real-world applications, they often show poor generalization capability in two situations: 1. There is only a limited amount of data available for model training; 2. Deployed models may degrade significantly in performance on noisy test data or natural/artificial adversaries. In short, performance degradation on low-resource tasks/datasets and on unseen data with distribution shifts poses great challenges to the reliability of NLP models and prevents them from being widely applied in the wild. This dissertation aims to address these two issues.; Towards the first one, we resort to transfer learning to leverage knowledge acquired from related data in order to improve performance on a target low-resource task/dataset. Specifically, we propose different transfer learning methods for three natural language understanding tasks: multi-choice question answering, dialogue state tracking, and sequence labeling, and one natural language generation task: machine translation. These methods are based on four basic transfer learning modalities: multi-task learning, sequential transfer learning, domain adaptation, and cross-lingual transfer. We show experimental results to validate that transferring knowledge from related domains, tasks, and languages can significantly improve performance on the target task/dataset.
For the second issue, we propose methods to evaluate the robustness of NLP models on text classification and entailment tasks.; On one hand, we reveal that although these models can achieve a high accuracy of over 90%, they are still easily broken by paraphrases of the original samples created by changing only around 10% of the words to synonyms. On the other hand, by creating a new challenge set using four adversarial strategies, we find that even the best models for the aspect-based sentiment analysis task cannot reliably identify the target aspect and recognize its sentiment accordingly. Instead, they are easily confused by distractor aspects. Overall, these findings raise great concerns about the robustness of NLP models, which should be enhanced to ensure stable long-run service.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 189-217).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129004</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in industrial organization and urban economics</title>
<link>https://hdl.handle.net/1721.1/129003</link>
<description>Essays in industrial organization and urban economics
Popov, Anton.; Atkin, David G., 1980-; Chen, Keith.
The first two chapters of this thesis study the wholesale and retail tiers of supermarket beer sales. In the first chapter, I am interested in the consolidation of distributors in the beer industry and its interaction with uniform pricing by retailers. I build a theoretical model which illustrates how distributor consolidation in a set of counties may affect retail prices in all counties, depending on how strong the incentive of retail chains to price uniformly is. I test the predictions of the model using Nielsen scanner price data. I study two events of distributor consolidation in Ohio in 2009-2011, which followed the upstream MillerCoors joint venture in 2008. In one of the events, distributor consolidation had no price effects. In the other, bigger event, prices of consolidated brands (Miller, Coors, Heineken and Modelo) in treated counties increased by 0.46% relative to the control ABI brands. I find no evidence of prices in other counties being affected.; The findings are consistent with some cases of my theoretical model. The implications of this study are that modeling the distribution tier and uniform pricing by retailers may be important for horizontal merger practitioners, both for retrospective analysis and for forecasting. Chapter 2 is devoted to the reasons for uniform pricing. I estimate the model, introduced in the first chapter, in which supermarket chains have an incentive to set a uniform price for a given product across different locations. The model includes a product-specific baseline price which a supermarket chain sets, and a penalty for deviation from this baseline price. A single store will not deviate from the baseline price if the marginal profits from doing so are smaller than the penalty parameter. My estimates suggest that the penalty for a dollar change from a benchmark price in a given week is around $12 to $16.
Uniform pricing leads to a suboptimal choice of prices relative to a problem with no penalty.; There is substantial price re-optimization, which, however, does not affect profits much because price changes have only a small first-order effect around the optimum. Supermarket chains lose only 0.4% of profits from pricing uniformly. Effects on consumers are highly heterogeneous across locations and weeks, with the change in consumer surplus varying from -$0.55 to $1.92 per consumer per week. I show that the change in consumer surplus due to uniform prices is positively correlated with income, with higher-income zip codes benefiting more from uniform pricing. This effect, although economically meaningful in aggregate, is not large for an average consumer. Chapter 3, written with professors David Atkin and Keith Chen, adds to the literature studying knowledge spillovers in modern cities. The returns to face-to-face interactions are of central importance to understanding the determinants of agglomeration.; However, the existing literature studying patterns of geographic proximity in patent citations or industrial co-location has struggled to disentangle the benefits of face-to-face interactions from other spatial knowledge spillovers. In this paper we attempt to more directly measure face-to-face interactions using highly granular worker geolocation data in Silicon Valley. To understand the degree to which knowledge flows result from these interactions, we study the relationship between cross-firm worker meetings and cross-citations between their firms.
To navigate endogeneity concerns due to firms organizing meetings with firms they wish to learn from, we focus on serendipitous meetings--measured by the interactions of workers in neighboring firms in very different industries--that play a central role in the urban theories of Jane Jacobs.; The subset of these chance meetings occurring during work hours also serves as a cost shifter to meeting face-to-face rather than remotely, allowing us to separately identify the returns to planned meetings. Our results suggest substantial knowledge spillovers from face-to-face interactions, including increases in citations resulting from serendipitous meetings that are a third as large as the elasticity with respect to physical distance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis. "Chapter 3, written with professors David Atkin and Keith Chen"--Page 4.; Includes bibliographical references (pages 169-173).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129003</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the term structure of equity returns</title>
<link>https://hdl.handle.net/1721.1/129001</link>
<description>Essays on the term structure of equity returns
Kirshon, Layne David.
This dissertation contains three essays on the term structure of equity returns. In the first chapter I document substantial variation in the cross-section of the term premium of US stocks between 1996 and 2019. I introduce a model with multiple stocks and an SDF with two priced sources of risk - dividend volatility risk and discount rate risk - which generates an economy with both upward and downward sloping equity term structures. The model yields two hypotheses: (1) dividend strips of stocks with more volatile dividends should earn higher returns, and (2) controlling for dividend volatility, cash flow duration should increase a stock's term premium. I use the Fama-French factors to empirically validate the model, with factor regressions explaining the majority of variation in term premia. The second chapter studies the relationship between the low volatility anomaly and the equity term structure. I show that dividend strip returns are positively related to measures of risk and volatility, while term premium returns are negatively related to risk and volatility. This generates a puzzle for explanations of the low volatility anomaly based on a general preference for volatility, such as leverage constraints, that cannot distinguish across the term structure. The results support market-specific explanations such as behavioral models of utility over realized gains (as opposed to unrealized paper gains) from investments. The third chapter studies the effect of buybacks on the equity term premium. First, I show that firms that conduct a buyback for the first time see an immediate drop in the returns to their dividend strips (due to unfulfilled dividend expectations), and a concurrent increase in their term premia. This confirms that buybacks do indeed substitute for dividends. Second, I show that firms that have repurchased shares earn a "buyback premium" because cash flows may be returned as repurchases instead of dividends.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129001</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of the geometry of axisymmetric point-absorber wave energy converters</title>
<link>https://hdl.handle.net/1721.1/128999</link>
<description>Optimization of the geometry of axisymmetric point-absorber wave energy converters
Edwards, Emma(Emma Chute)
Wave energy represents an abundant source of renewable energy, but as yet the potential is not fully utilized. Aiming to exploit this vast potential, many theoretical, experimental and pilot-scale studies have been conducted on wave energy converters (WECs); however, as yet there has been no convergence on the optimal shape of a WEC. Furthermore, there is no agreed-upon definition of what it means for a WEC to be 'optimal' and no established framework to find optimal shapes. This thesis establishes a novel, scientifically rigorous framework to find practically realistic optimal shapes of WECs. Through a general, efficient and efficacious procedure, we systematically investigate groups of shapes to reveal powerful new results for the optimal shapes of axisymmetric WECs. Finally, we analyze these results to develop insights and gain physical intuition about the best WEC shapes.; Although the hydrodynamics of WECs under operating conditions can generally be considered linear, the dependence of the hydrodynamics and power extraction on the geometry can be highly nonlinear. In this thesis, we assume linear hydrodynamics but allow the geometry to be very general and consider a wide range of possible geometries. We optimize a single-body deep-water 3D axisymmetric point absorber WEC, with linear power take-off mechanisms, assuming a monochromatic unidirectional incoming wave with given wavenumber k. We consider two separate problems: a WEC moving and extracting energy in the heave mode only, as well as the complete 3D problem of an axisymmetric WEC moving and extracting energy in heave, surge and pitch. This thesis develops a robust computational approach for finding the optimal WEC shape underpinned by a strong theoretical grounding.; We describe general geometries using piecewise parametric polynomial basis functions and develop a multi-objective optimization to minimize WEC surface area and volume, while ensuring constant, maximum power for all shapes.
We show that constraints are necessary to ensure feasible body motion, weight distribution and stability. We present a novel theorem to find roots of the heave resonance equation, which adds to our understanding of the problem and significantly speeds up the optimization process by effectively decreasing the degrees of freedom of the optimization. Our systematic investigation encompasses a broad range of shapes, starting with truncated cylinders and then generalizing to significantly more complex shapes. We show that shapes that protrude outwards below the waterline generally perform better, due to their high heave damping coefficient, which enables smaller volumes while still adhering to the motion constraint.; Furthermore, in general the maximum radius occurs closer to the waterline than the maximum draft. Compared to the heave-only problem, the optimal shapes from the heave-surge-pitch problem are generally wider and less protruding outwards, resulting in a larger volume and surface area. The trends that we observe in the optimal shapes are consistent across all the groups of shapes, implying these may be features of a general optimum. Optimizing the geometry can significantly decrease the material used to produce the same, maximum power: for example, the optimal shapes have up to 72 % less surface area and 93 % less volume than the optimized cylinders. The methodology developed and the results found in this thesis will help to inform future WEC development.; Through the discovery of WEC shapes which extract maximum power and require minimum material use, whilst ensuring the WEC shapes are practically feasible, this thesis is a step forward in our understanding of WECs and ultimately contributes towards wave energy becoming a viable source of renewable energy in future.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 281-285).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128999</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in industrial organization</title>
<link>https://hdl.handle.net/1721.1/128997</link>
<description>Essays in industrial organization
Grondahl, Samuel Isaac.
This thesis is a collection of three chapters that investigate burgeoning empirical issues in industrial organization. In the first chapter, I study platform fee policy with a specific focus on two-sided online marketplaces. The main contributions of the paper are threefold. First, I study a setting with coordinated price experimentation along the three different fee dimensions that are common to such marketplaces. Second, I describe the empirical impact of incomplete fee salience on equilibrium outcomes. Finally, I quantify the network externalities that must be present in order for observed fees to constitute an equilibrium. In the paper, I begin by developing a tractable model of the platform's problem that generates testable predictions and yields equilibrium conditions in terms of estimable quantities. Then, using estimates from experimental data obtained from a large online marketplace, I quantify the salience and network effects.; To conclude, I consider the counterfactual level and composition of equilibrium platform fees when these effects are muted or absent. In the second chapter, using data from the same source as in chapter one, I study small sellers competing on the supply side of online marketplaces. As these platforms grow and markets become increasingly disintermediated, an important concern is whether small sellers, who may have limited experience or attention, can individually compete effectively with larger, often professional sellers operating on the same marketplaces. To answer this question, I develop and estimate a structural model that incorporates essential features of the empirical setting, including large and rapidly changing choice sets and buyer heterogeneity.
Using the estimated model, I compute optimal pricing policies under various informational and computational restrictions.; I find that small sellers adhering to a simple strategy can obtain nearly optimal expected revenue and that this strategy's information requirements are easily satisfied in the online setting. Additionally, I present suggestive evidence that sellers learn to approximate such a strategy through repeated market interactions. In the third and final chapter, I investigate the industrial impacts of firm control rights, which confer discretion over firm policy and are usually shared between debt and equity holders. Control rights operate along a continuum and are difficult to measure. As a proxy, I consider the discontinuous shift in control from equity holders to creditors due to loan covenant violations, a common form of technical default. This paper contributes to the growing covenants literature in two ways. First, I consider the impact of and response to covenant violations at the industry level, inclusive of firms never in technical default.; Second, I empirically document the effects of violations on contemporary product markets. I find that control rights transfers to creditors make firms tough in product markets, consistent with the predictions of a stylized model, and that markups decline at the industry level, though the declines are sharpest for firms directly affected.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 192-200).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128997</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure, dynamics, and inference in networks</title>
<link>https://hdl.handle.net/1721.1/128994</link>
<description>Structure, dynamics, and inference in networks
Chodrow, Philip S.(Philip Samuel)
Networks offer a unified, conceptual formalism for reasoning about complex, relational systems. While pioneering work in network science focused primarily on the ability of "universal" models to explain the features of observed systems, contemporary research increasingly focuses on challenges and opportunities for data analysis in complex systems. In this thesis we study four problems, each of which is informed by the need for theory-informed modeling in network data science. The first chapter is a study of binary-state adaptive voter models (AVMs). AVMs model the emergence of global opinion-based network polarization from localized decision-making, doing so through a simple coupling of node and edge states. This coupling yields rich behavior, including phase transitions and low-dimensional quasistable manifolds. However, the coupling also makes these models extremely difficult to analyze.; Exploiting a novel asymmetry in the local dynamics, we provide low-dimensional approximations of unprecedented accuracy for one AVM variant, and of competitive accuracy for another. In the second chapter, we continue our focus on fragmentation in social systems with a study of spatial segregation. While the question of how to measure and quantify segregation has received extensive treatment in the sociological literature, this treatment tends to be mathematically disjoint. This results in scholars often re-proving the same results for special cases of measures, and grappling with incomparable methods for incorporating the role of space in their analyses. We provide contributions to address each of these issues. 
With respect to the first, we unify a large body of extant segregation measures through the calculus of Bregman divergences, showing that the most popular measures are instantiations of generalized mutual informations.; We then formulate a microscopic measure of spatial structure - the local information density - and prove a novel information-geometric result in order to measure it on real data in the common case in which the data is embedded in a planar network. Using these tools, we are then able to formulate and evaluate several network-based regionalization algorithms for multiscale spatial analysis. We then take up two questions in null random graph modeling. The first of these develops a family of null random models for hypergraphs, the natural mathematical representation of polyadic networks in which multiple entities interact simultaneously. We formulate two distributions over spaces of hypergraphs subject to fixed node degree and edge dimension sequences, and provide Markov Chain Monte Carlo algorithms for sampling from them. We then conduct a sequence of experiments to highlight the role of hypergraph configuration models in the data science of polyadic networks.
The main issues are (a) the combinatorial complexity of the space on which this random graph is defined and (b) an erroneous folk-theorem among network scientists which stems from confusion with related models.; By studying the dynamics of a Markov chain sampler, we prove a sequence of approximations that allow us to estimate the expected adjacency matrix - and other elementwise moments - using a fast numerical scheme with qualified uniqueness guarantees. We illustrate using a series of experiments on primary and secondary school contact networks, showing order-of-magnitude improvements over extant methods. We conclude with a description of several directions of future work.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 187-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128994</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal energy grid storage : liquid containment and pumping</title>
<link>https://hdl.handle.net/1721.1/128992</link>
<description>Thermal energy grid storage : liquid containment and pumping
Amy, Caleb(Caleb A.)
As the cost of renewable energy falls below that of fossil fuels, the key barrier to widespread sustainable electricity has become availability on demand. Energy storage can enable dispatchable renewables, but only with drastic cost reductions compared to current batteries. In this thesis, I investigate an electricity storage concept that stores electricity as sensible heat in an extremely hot liquid (&gt;2000°C) and uses multi-junction photovoltaics (MPV) as a heat engine to convert it back to electricity on demand hours, or days, later. In addition to a technoeconomic analysis, this thesis focuses experimentally on heating, liquid containment, and pumping. The transfer of the storage liquid is key because it enables conversion to and from electricity and compact, efficient heat transfer. However, operating at these extreme temperatures introduces many practical challenges, so several novel solutions related to containment and pumping are investigated, including high-performance heaters, sealing a large multi-part tank with affordable materials, and pumping above 2000°C. The key result is that although affordable silicon can be contained in affordable graphite and pumped at these temperatures, temperature variation in the system causes the graphite infrastructure to rapidly dissolve and ultimately fail in a matter of hours. Alternative embodiments are proposed with recommendations on areas of future work. The key takeaway from the technoeconomic modeling is that integrating low-cost thermal storage with an inexpensive heat engine can enable an economical approach to electricity storage, even without high round trip efficiencies. Thus, despite the challenges, future work is warranted.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 149-158).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128992</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems and control theoretic approaches to engineer robust biological systems</title>
<link>https://hdl.handle.net/1721.1/128991</link>
<description>Systems and control theoretic approaches to engineer robust biological systems
Qian, Yili.
Synthetic biology is an emerging field of research aimed at engineering biological systems by inserting programmed DNA molecules into living cells. These DNAs encode the production and subsequent interactions of biomolecules that allow the cells to have novel sensing, computing, and actuation capabilities. However, most success stories to date rely heavily on trial and error. This is mainly because genetic systems are context-dependent: the expression level of a synthetic gene often depends not only on its own regulatory inputs, but also on the expression of other supposedly unconnected genes. This lack of modularity leads to unexpected behaviors when multiple genetic subsystems are composed together, making it difficult to engineer complex systems that function predictably and robustly in practice. This thesis characterizes resource competition as a form of context dependence, and presents control theoretic approaches to engineer robust, context-independent gene networks. We first present a systems framework to model resource competition, which results in a hidden layer of unintended interactions among genetic subsystems. These unintended interactions lead to failure of the composed network in experiments. We then introduce a set of biomolecular controllers - designed to solve an output regulation problem in vivo - that can decouple a genetic subsystem's output from its context. We describe challenges in applying classical control theory to engineer such controllers due to the physical constraints in living cells, and then present novel theory-guided engineering solutions. Finally, we point to additional design considerations when regulating multiple subsystems using multiple controllers in a single cell. These works have the potential to enhance the robustness of future synthetic biological systems and to fully unleash their power to address pressing societal needs in environment, energy, and health.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 189-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128991</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D organ property mapping using freehand ultrasound scans</title>
<link>https://hdl.handle.net/1721.1/128989</link>
<description>3D organ property mapping using freehand ultrasound scans
Benjamin, Alex(Alex Robert)
3D organ property mapping has gained considerable interest in recent years because of its diagnostic and clinical significance. Existing methods for 3D property mapping include computed tomography (CT), magnetic resonance imaging (MRI), and 3D ultrasound (3DUS). These methods, while capable of producing 3D maps, suffer from one or more of the following drawbacks: high cost, long scan times, computational complexity, use of ionizing radiation, lack of portability, and the need for bulky equipment. We propose the development of a framework that allows for the creation of 3D property maps (specifically structure and speed of sound) at the point of care. A fusion of multiple low-cost sensors in a Bayesian framework localizes a conventional 1D-ultrasound probe with respect to the room or the patient's body; localizing the probe relative to the body is achieved by using the patient's superficial vasculature as a natural encoding system. Segmented 2D ultrasound images and quantitative 2D speed of sound maps obtained using numeric inversion are stitched together to create 3D property maps. A further advantage of this framework is that it provides clinicians with dynamic feedback during freehand scans; specifically, it dynamically updates the underlying structural or property map to reflect high and low uncertainty regions. This allows clinicians to repopulate regions with additional scans. Lastly, the method also allows for the registration and comparison of longitudinally acquired 3D property/structural maps.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 141-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128989</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Keep the ORCs at bay : how eukaryotic cells ensure one round of DNA replication per cell cycle</title>
<link>https://hdl.handle.net/1721.1/128988</link>
<description>Keep the ORCs at bay : how eukaryotic cells ensure one round of DNA replication per cell cycle
Amasino, Audra Leigh.
During each cell cycle, eukaryotic cells must faithfully replicate their genome, ensuring exactly one full copy is made. Both under-replicating and over-replicating the genome can have deleterious consequences, including cell death, genome instability and cancer. Thus, this process is tightly regulated. The major mechanism to ensure that DNA is replicated once per cell cycle entails the temporal separation of two key replication events: helicase loading and helicase activation. Helicase loading occurs during the G1 phase of the cell cycle. In S. cerevisiae cells, Cyclin-Dependent Kinases (CDKs) prevent helicase loading outside of G1 by phosphorylating three of the four helicase-loading proteins: Mcm2-7, Cdc6, and the Origin Recognition Complex (ORC). Phosphorylation of free Mcm2-7 and Cdc6 leads to their removal from the nucleus (Mcm2-7 by nuclear export and Cdc6 by protein degradation). However, phosphorylated ORC remains in the nucleus bound to origins.; ORC phosphorylation intrinsically inhibits the helicase loading reaction. In in vitro reconstituted helicase loading reactions, CDK phosphorylation of ORC is sufficient to completely inhibit helicase loading. However, the precise event(s) during helicase loading that are affected by ORC phosphorylation were not known prior to this study. To identify the steps of helicase loading that are inhibited by ORC phosphorylation, we used single-molecule microscopy to compare the progression of helicase loading with phosphorylated versus unphosphorylated ORC. Successful helicase loading results in two head-to-head Mcm2-7 helicases encircling DNA. We show that ORC phosphorylation prevents loading of both the first and second Mcm2-7 complexes.
An initial intermediate in helicase loading containing origin DNA and all four proteins (the OCCM) still forms when ORC is phosphorylated, albeit more slowly.; Focusing on events after OCCM formation, we found that ORC phosphorylation alters Cdt1 dissociation kinetics and inhibits successful Mcm2-7 ring closing. ORC is phosphorylated on both the Orc2 and Orc6 subunits in vivo; we find that in vitro phosphorylation of either single subunit leads to nearly identical effects to phosphorylation of both subunits. My studies suggest a model in which ORC directly controls Mcm2-7 ring closing through physical interactions with both Cdt1 and Mcm2-7, and these interactions, and thus ring closing, are inhibited by ORC phosphorylation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128988</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noisy-signalling models of organizational decision making</title>
<link>https://hdl.handle.net/1721.1/128975</link>
<description>Noisy-signalling models of organizational decision making
Palida, Ali Fakhruddin.
This thesis consists of three separate papers concerning the use of communication channels and intermediaries in organizations. A noisy-signalling model of strategic communication is introduced in the first chapter, and expanded upon in the remainder of the thesis. In the second part of the first chapter, I use the core noisy-signalling model to study organizational design of a single channel of communication. The results of the analysis provide a rationale for the variation in communication processes observed across organizations, as well as for costly political lobbying and advertising campaigns. In the second chapter, I extend the core model to allow the informed party to choose among multiple communication channels when conversing with the decision maker. The model suggests that polarization across communication channels may be an efficient response to "bandwidth" concerns facing decision-makers of large corporations or unqualified management. Conversely, coexistence of partisan and non-partisan channels within an organization or community (e.g. tabloids and professional news sources in the journalism industry) may also be socially efficient in other environments. In the third chapter, I consider a different extension of the core model by allowing the two parties to communicate via a strategic intermediary. I use the model to provide a possible explanation for the variety of roles communication intermediaries play in different organizations, the correlation between control rights and communication hierarchies in organizations, as well as the usage of third-party conflict-resolution arrangements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, September, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128975</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and algorithmic aspects of linear inequality systems</title>
<link>https://hdl.handle.net/1721.1/128971</link>
<description>Structural and algorithmic aspects of linear inequality systems
Lamperski, Jourdain Bernard.
Linear inequality systems play a foundational role in Operations Research, but many fundamental structural and algorithmic questions about linear inequality systems remain unanswered. This thesis considers and addresses some of these questions. In the first chapter, we reconsider the ellipsoid algorithm applied to solving a system of linear inequalities. Unlike the simplex method and interior point methods, the ellipsoid algorithm has no mechanism for proving that a system is infeasible (in the real model of computation). Motivated by this, we develop an ellipsoid algorithm that produces a solution to a system or provides a simple proof that no solution exists. Depending on the dimensions and on other natural condition measures, the computational complexity of our algorithm may be worse than, the same as, or better than that of the standard ellipsoid algorithm. In the second chapter, we reduce the problem of solving a homogeneous linear inequality system to the problem of finding the unique sink of a unique sink orientation (USO) in the vertex evaluation model of computation. We show that the USOs of interest satisfy a local property that is not satisfied by all USOs satisfying the Holt-Klee property. This addresses an open question that is motivated by the idea that such local structure could be leveraged algorithmically to develop faster algorithms or a strongly polynomial algorithm. In the third chapter, we make progress on a conjecture about a particular class of linear inequality systems that have balanced constraint matrices. A balanced matrix is a 0-1 matrix that does not contain a square submatrix of odd order with two ones per row and column. The conjecture asserts that every nonzero balanced matrix contains an entry equal to 1 which, upon being set to 0, leaves the matrix balanced.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-170).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128971</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The properties and the deformation by cold work of the silver-rich solid-solution alloys in the system silver-magnesium</title>
<link>https://hdl.handle.net/1721.1/128944</link>
<description>The properties and the deformation by cold work of the silver-rich solid-solution alloys in the system silver-magnesium
Gangulee, Amitava,1941-
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Mining and Metallurgy, 1967.; Vita.; Includes bibliographical references (leaves 95-106).
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128944</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of marginal ice zone noise events</title>
<link>https://hdl.handle.net/1721.1/128940</link>
<description>Analysis of marginal ice zone noise events
Chen, Qifang.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1991.; Includes bibliographical references (leaves 140-146).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128940</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Worker mobility and unemployment</title>
<link>https://hdl.handle.net/1721.1/128939</link>
<description>Worker mobility and unemployment
Katz, Lawrence F.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 1986.; Includes bibliographies.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128939</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry of jet bundles and the structure of Lagrangian and Hamiltonian formalisms</title>
<link>https://hdl.handle.net/1721.1/128937</link>
<description>Geometry of jet bundles and the structure of Lagrangian and Hamiltonian formalisms
Kupershmidt, Boris A.,1946-
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1979.; Bibliography: leaves 58-59.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128937</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frequency dependence of the conductivity and dielectric constant of single crystal La₂CuO₄₊y̳</title>
<link>https://hdl.handle.net/1721.1/128802</link>
<description>Frequency dependence of the conductivity and dielectric constant of single crystal La₂CuO₄₊y̳
Chen, Chih-Yung.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1990.; On t.p. "y" is subscript.; Includes bibliographical references (p. 136-143).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128802</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular dynamics study of ionic grain boundaries</title>
<link>https://hdl.handle.net/1721.1/128799</link>
<description>Molecular dynamics study of ionic grain boundaries
Chen, Long-Qing,1962-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 1990.; Includes bibliographical references (leaves 240-244).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128799</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fatigue and fracture in Inconel 718-copper-Inconel 718 explosion-bonded composites</title>
<link>https://hdl.handle.net/1721.1/128798</link>
<description>Fatigue and fracture in Inconel 718-copper-Inconel 718 explosion-bonded composites
Chen, Qiguang.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1990.; Title as it appears in the M.I.T. Graduate List, Feb. 1990: Fatigue and fracture in explosion-bonded Inconel 718-copper-Inconel 718 composites.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128798</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Random planar matching and bin packing</title>
<link>https://hdl.handle.net/1721.1/128792</link>
<description>Random planar matching and bin packing
Shor, Peter Williston.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1985.; Bibliography: leaves 123-124.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128792</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Individual and organizational Uses of Evidence-Based Practice in healthcare settings</title>
<link>https://hdl.handle.net/1721.1/128641</link>
<description>Individual and organizational Uses of Evidence-Based Practice in healthcare settings
Fingerhut, Henry Alan.
In the three decades since its introduction, Evidence-Based Practice (EBP) has become standard clinical practice and the subject of targeted interventions at all levels of the health system. Despite its prevalence, EBP is frequently challenged on philosophical, practical, empirical, and normative grounds. EBP is also often underused in practice relative to the considerable investment in training and sophisticated organizational interventions to implement EBP. In this dissertation, I identify what the concept of EBP means to health system stakeholders as a partial explanation for this persistent gap in EBP use and implementation outcomes. Through interviews with clinicians and healthcare administrators, I identify how providers and organizations use EBP in practice to clinical ends and in inter-professional relationships. First, I find that in contrast to the theoretical model, stakeholders vary in how they operationalize EBP for individual-level clinical use. Stakeholders endorse a range of what I call implicit mental models of EBP that imply different approaches to clinical decision-making. Respondents' implicit mental models of EBP each emphasize an incomplete aspect of the full EBP model: Resource-Based EBP emphasizes specific evidence artifacts, Decision-Making EBP emphasizes the decision-making process, and EBT-Based EBP emphasizes specific Evidence-Based Treatments. These implicit models represent the decision inputs, process, and outputs, respectively. Second, I describe how and why healthcare organizations conduct EBP interventions, despite its initial design as an individual-level clinical decision-making model. I document a range of different organizational EBP activities and interventions, including disseminating resources, training providers, and implementing local standards.
These organizational EBP activities both support individual EBP use and address broader organizational ends, which may conflict. Finally, EBP takes on social and inter-professional meanings beyond its intended scope as a clinical decision-making model, which emerge in context and affect how providers understand and use EBP. Specifically, providers may renounce their standing to evaluate evidence or demonstratively use EBP, while administrators claim standing to evaluate evidence. This dissertation therefore demonstrates the varied uses of EBP that emerge in practice, contributing to our understanding of the challenges and contradictions that arise in applying general knowledge to individual cases and systematizing strategies for the same at the organization level.
Thesis: Ph. D. in Engineering Systems: Technology, Management, and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 135-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128641</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of hardware and soft features on the performance evolution of low-carbon technologies</title>
<link>https://hdl.handle.net/1721.1/128640</link>
<description>Effects of hardware and soft features on the performance evolution of low-carbon technologies
Klemun, Magdalena Maria.
This dissertation studies how physical and non-physical features of low-carbon technologies evolve and influence performance evolution. This fundamental question about the role of hardware- and non-hardware ('soft') innovations in technological progress remains largely unanswered despite the societal importance of improved technology. Multiple low-carbon technologies exhibit rising shares of soft costs, and understanding their determinants is thus critical to support climate mitigation. However, building this understanding is challenging. Technologies evolve through multi-faceted knowledge-generating processes, in which both endogenous factors, such as a technology's design, and exogenous factors, such as policies and research, play roles. To capture this complexity, a new conceptual and quantitative model of technology performance evolution is developed, where performance change (e.g., cost change) is the outcome of changes in physical and non-physical ('soft') features ('variables'), both of which can affect the performance of hardware and processes needed to deploy technologies. While physical variables -- material usage ratios, efficiencies -- describe the tangible aspects of technologies, soft variables (e.g., task durations, wages) characterize the performance of intangibles, including deployment processes and services. In contrast to physical variables, soft variables can change after the factory gate due to locational differences in technology management or labor costs.
By defining hardware and soft performance as functions of both hardware and soft variables, and separating their contributions to cost change when multiple variables change, this framework disentangles the effects of physical and non-physical forms of improvement at multiple conceptual levels -- from changes in hardware or soft features, to the specific physical and non-physical innovations that drive these changes, to the higher-order improvement processes in which many innovations originate (e.g., research and development). This approach addresses shortcomings in current methods to analyze and track cost change in technologies, which often treat the performance of hardware (e.g., equipment costs) and of deployment processes (e.g., soft costs) separately. However, features of hardware not only affect the cost of equipment, but also the cost of deploying this equipment, and accounting for such interdependencies can change assessments of the sources of past and future technology improvement ...
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 295-328).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128640</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market design opportunities for an evolving power system</title>
<link>https://hdl.handle.net/1721.1/128639</link>
<description>Market design opportunities for an evolving power system
Schneider, Ian Michael.
The rapid growth of renewable energy is transforming the electric power sector. Wind and solar energy are non-dispatchable: their energy output is uncertain and variable from hour to hour. New challenges arise in electricity markets with a large share of uncertain and variable renewable energy. We investigate some of these challenges and identify economic opportunities and policy changes to mitigate them. We study electricity markets by focusing on the preferences and strategic behavior of three different groups: producers, consumers, and load-serving entities. First, we develop a game-theoretic model to investigate energy producer strategy in electricity markets with high levels of uncertain renewable energy. We show that increased geographic dispersion of renewable generators can reduce market power and increase social welfare. We also demonstrate that high-quality public forecasting of energy production can increase welfare. Second, we model and explain the effects of retail electricity competition on producer market power and forward contracting. We show that increased retail competition could decrease forward contracting and increase electricity prices; this is a downside to the general trend of increased access to retail electricity competition. Finally, we propose new methods for improving demand response programs. A demand response program operator commonly sets customer baseline thresholds to determine compensation for individual customers. The optimal way to do this remains an open question. We create a new model that casts the demand response program as a sequential decision problem; this formulation highlights the importance of learning about individual customers over time. We develop associated algorithms using tools from online learning, and we show that they outperform the current state of practice.
Thesis: Ph. D. in Social and Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 117-126).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128639</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Variability in the emissions savings potential of battery electric vehicles across regions and individuals</title>
<link>https://hdl.handle.net/1721.1/128638</link>
<description>Variability in the emissions savings potential of battery electric vehicles across regions and individuals
Miotti, Marco,Ph. D.Massachusetts Institute of Technology.
Personal vehicles account for almost 25% of U.S. greenhouse gas emissions, and this share is increasing. The increase is due to several factors, including a growth in transportation demand and the decarbonization of electricity by 30% since 2007. Alternative technologies for road vehicles, such as battery electric, plug-in hybrid, and fuel cell powertrains have the potential to achieve significant emission reductions. Yet questions remain about the emissions and costs of these alternative technologies. This thesis evaluates the emissions reduction potential of vehicles with electrified powertrains, focusing on battery electric vehicles (BEVs). It evaluates this potential taking into account heterogeneous regional conditions and consumer behavior. Consumers help determine vehicle fleet emissions through their purchasing and driving decisions, which are guided in part by the costs of different options. Therefore, the costs of ownership of BEVs in comparison to conventional vehicles inform the emissions reduction potential of BEVs. Here, we measure the lifecycle greenhouse gas emissions and costs of ownership of BEVs across different vehicle models as a function of travel patterns, driving styles, and properties of the natural, built, and institutional environment. We compare these costs and emissions to gasoline combustion engine vehicles (ICEVs), and then ask whether and under which condition electric vehicle adoption can play a central role in meeting emission targets for the transportation sector. The current literature does not cover all the interdependent sources of variation in the emissions and costs of BEVs compared to ICEVs.
In particular, the effects of annual travel distance and fuel efficiency related to individual travel behavior and the wide variety of available vehicle models have not been assessed. In addition, this variation in emissions and costs of personal vehicles has only been studied across regions, but not across individual vehicles within each region due to vehicle-specific driving patterns. This work addresses these gaps by developing several interlinked models. This includes the construction of a parametrized lifecycle emissions and cost of ownership model (Chapter 2), an algorithm to measure driving style linked to a vehicle energy model (Chapter 3), and a model to quantify the variability in annual travel distance and fuel consumption of different types of vehicles across regions within the United States, encoded as zipcodes, and across individual vehicles within those zipcodes (Chapter 4). Chapter 5 then ties Chapters 2 and 4 together and complements them with additional information to assess the overall heterogeneity in the emissions reduction potential of BEVs. The central results of the thesis are threefold. First, a rapid decarbonization of electricity in conjunction with an electrification of powertrains will likely be required to meet emission targets for the U.S. transportation sector. Measures that relate to heterogeneous consumer behavior, such as improving driving style and nudging consumers towards purchasing smaller vehicles, can help to reduce greenhouse gas emissions. Second, the electrification of powertrains can come at little to no additional expense to consumers with today's technology and prices. In most parts of the country, BEVs are substantially cheaper than comparable ICEVs. Within regions, the individuals for which BEVs offer the greatest emissions savings would also tend to experience the largest cost savings, since both emissions savings and cost savings are correlated with annual travel distance.
Third, emission reductions achieved by BEVs and their costs relative to ICEVs are highly heterogeneous. The within-region variation in emissions and costs of BEVs compared to ICEVs due to individual driving patterns is at least as large as the variation across regional averages. As a result, a 10% share of BEVs in the fleet can lead to anywhere between 1% and 10% emission reductions, depending on which types of vehicles are being replaced by electric vehicles, by whom, and where. A key application of this work is to inform tools that provide localized and personalized information about the environmental and economic performance of different vehicle models. In Chapter 6, we discuss such a tool that was built as part of this work, called Carboncounter.com. Results from a survey launched on Carboncounter add to existing evidence that providing such information to consumers can help inform a transition to a cleaner light-duty vehicle fleet. These findings further confirm the importance of understanding heterogeneous human behaviors to inform decarbonization strategies for personal transport.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 219-232).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128638</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative invertibility of random matrices : a combinatorial perspective</title>
<link>https://hdl.handle.net/1721.1/128637</link>
<description>Quantitative invertibility of random matrices : a combinatorial perspective
Jain, Vishesh.
In this thesis, we develop a novel framework for investigating the lower tail behavior of the least singular value of random matrices - a subject which has been intensely studied in the past two decades. Our focus is on obtaining high probability bounds, rather than on estimating the least singular value of a 'typical' realisation of the random matrix. In our main application, we consider random matrices of the form M_n := M + N_n, where M is a fixed complex matrix with operator norm at most exp(N^c), and N_n is a random matrix, each of whose entries is an independent copy of a complex random variable with mean 0 and variance 1. This setting, with some additional restrictions, has been previously considered in a series of influential works by Tao and Vu, most notably in connection with the strong circular law, and the smoothed analysis of the condition number, and our results extend and improve upon theirs in a couple of ways. As opposed to all previous works obtaining such bounds with error rate better than n^(-1), our proof makes no use either of the inverse Littlewood-Offord theorems, or of any sophisticated net constructions. Instead, we show how to reduce the optimization problem characterizing the smallest singular value from the (complex) sphere to (Gaussian) integer vectors, where it is solved using direct combinatorial arguments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 101-106).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128637</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing policy feedback : experimental evidence on the everyday politics of the social contract</title>
<link>https://hdl.handle.net/1721.1/128636</link>
<description>Designing policy feedback : experimental evidence on the everyday politics of the social contract
Bergman, Olivia Anna Kristina.
For most of history, people had only infrequent personal contact with their governments. The modern social contract generates everyday interactions between citizens and states. In this dissertation, across three papers and three countries, I examine when and how such policy experiences shape attitudes toward government, using experimental methods grounded in a comparative perspective. I explore traditional policy design factors -- the defining aspects of policies that shape "who gets what", which I term 'macro design' -- alongside factors receiving less attention in the literature -- those that often shape "when and how", which I term 'micro design'. In the first paper, I compare the experience of filing taxes in the US and Sweden, showing that design focuses citizens' attention in very different ways: on tax compliance and payment in the US, on refunds in Sweden. In a nationally-representative survey experiment, I link a US-style tax environment with perceptions that the government is wasteful. In the second paper, I conduct a large-scale randomized field experiment that shifts administrative burdens away from citizens. Using a highly scalable digital intervention that simplifies the claiming of a means-tested benefit, I substantially increase reported satisfaction with both the bank and government among 195,414 low-income customers of an Australian bank. This is true even though effects on take-up, ascertained by linking bank data to government records, are modest. In the third paper, I situate these findings within a new theory of how the design of policy experiences shapes government attitudes. I propose that two dimensions affect attribution of credit for policies: the valence of the experience, and the salience of the government's role.
After outlining how macro and micro design factors shape experiences, I present a nationally-representative survey experiment showing that channeling benefits through the tax code results in government not getting credit where due. Together, these experiments illustrate the powerful effects of subtle design factors on citizens' views of government. More broadly, this dissertation suggests that, in addition to how policy content affects politics -- "who gets what" -- studying "when" and especially "how" citizens experience policies in their everyday lives illuminates attitudes underpinning democracy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 167-172).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128636</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The fragmentation of political risk and MNCs' supply chain linkages</title>
<link>https://hdl.handle.net/1721.1/128635</link>
<description>The fragmentation of political risk and MNCs' supply chain linkages
Intscher, Nicholas.
Political science research devotes considerable attention to the impact of political risk on multinational companies' (MNCs') behavior. However, this body of research suffers from two main oversights: (1) a disproportionate focus on MNCs' investment decisions, and (2) an assumption that political risk takes a common, centralized form across countries. In this dissertation, I redirect attention to the political determinants of MNCs' supply chain linkages. I argue that these linkages represent a risk-mitigating strategy for MNCs, and one that is particularly well suited for dealing with environments where the sources of political risk are spread throughout the state apparatus -- which I refer to as fragmented political risk. To test this theory, I draw on both cross-sectional survey data of MNCs in Sub-Saharan Africa and firm-level panel data from Indonesia -- a country that experienced a profound fragmentation in the structure of political risk. The principal finding of this research is that fragmented political risk causes MNCs to increase their use of local suppliers, with particularly strong effects among those that are (1) more vulnerable to political risk, and (2) have a greater capacity to adopt linkages, in general. These findings qualify research on the political determinants of FDI by showing that MNCs, and not merely states, are capable of resolving political risk in the host country.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 260-279).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128635</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The information game : police-citizen cooperation in communities with criminal groups</title>
<link>https://hdl.handle.net/1721.1/128634</link>
<description>The information game : police-citizen cooperation in communities with criminal groups
Miller, Andrew Cesare.
Criminal groups -- gangs, mafias, and drug cartels, among others -- likely cause more deaths than interstate war, insurgency, and terrorism combined. This violence and the lack of accountability for perpetrators present a major challenge to states' central mandates of providing public safety and administering justice. States fall short of their mandates, in part because they struggle to gain cooperation from citizens. This study is about what I call The Information Game: the competition between the police, which want citizens to come forward with information about violence, and criminal groups, which want citizens to stay silent. I present cycle of silence theory, which posits that collective misperceptions prevent communities from reaching their full potential of police-citizen cooperation. Akin to terrorism, fear generated by criminal group violence makes retaliation appear to be more likely than it is. The violence has the underappreciated but potent second order effect of pushing citizens who are willing to cooperate to hide their disposition from others. Cooperation thus appears to citizens to be less of a norm than it is. I also take new methodological approaches -- namely, fielding the first large-scale virtual reality experiment -- to realistically and ethically test strategies aimed at promoting cooperation. The results show that providing access to anonymous tip lines, creating awareness of community cooperation norms, and in some circumstances, exposing citizens to police officers of the same ethnicity increase citizen information-sharing with the police. Employing a multi-method research design, this study draws on original surveys in Baltimore, Maryland (N=650) and Lagos, Nigeria (N=1,025) as well as proprietary survey data of criminal justice experts (N=2,700) and citizens (N=109,000) in 113 countries provided by the World Justice Project.
I pair the quantitative analysis with first-hand observation as well as interviews with more than 150 citizens, state authorities, and criminal group affiliates.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 312-339).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128634</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incubating financial development : private equity and the state</title>
<link>https://hdl.handle.net/1721.1/128633</link>
<description>Incubating financial development : private equity and the state
Puente, Ignacio.
This dissertation is motivated by two observations. The first one is that three decades after democratization and liberalization reforms, capital markets in Latin America remain underdeveloped. The second one is the considerable amount of state-related funds behind private equity (PE) investments, both in emerging and developed economies. Combining these two observations and using comparative case analysis to study the emergence of private equity markets, this dissertation proposes a shift in arguments about financial development and corporate governance reform: it emphasizes the role of state-related investors instead of just focusing on the institutional -- political and economic -- determinants of investment. This dissertation makes three main arguments. First, that private equity investors contribute to the development of financial systems by providing firms with a distinctive source of financing that has no bank-based substitutes. PE investors drive more institutional ownership and corporate governance structures, helping modernize business. Second, that state-related institutional investors play an important role in the emergence of domestic PE markets. Multilateral and domestic development financial institutions have an "incubating" role during the early stages of the PE industry, followed by the involvement of pension funds. And third, it advances a "quiet politics" explanation of the emergence of private equity. It emphasizes the public-private collaboration behind PE industry associations.
These associations, in turn, help "co-create" the industry's regulatory framework, operating at the margins of partisan and legislative politics. Ultimately, this dissertation makes three broad theoretical contributions: (1) it introduces private equity into the development debate; (2) it prompts a shift in the discussion on financial development from institutional explanations focused only on rules -- democracy and investor protections -- to actor-based arguments centered on the role of institutional investors, in particular pension funds; and (3) it characterizes a novel model of "financial" industrial policymaking.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 258-271).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128633</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The political logics of patronage : uses and abuses of government jobs in Brazil</title>
<link>https://hdl.handle.net/1721.1/128632</link>
<description>The political logics of patronage : uses and abuses of government jobs in Brazil
Toral, Guillermo,Ph. D.Massachusetts Institute of Technology.
The political appointment of bureaucrats (or patronage, for short) is a major resource for politicians all around the world. While scholars have long studied patronage, we lack a detailed understanding of how politicians target public employment and how that affects governance and public service delivery. This dissertation contributes to filling this gap. I identify five distinct rationales that drive politicians' use of government jobs: managing bureaucrats (to deliver public services or to extract rents), mobilizing voters, rewarding supporters, tying the opponent's hands, and anchoring coalitions. Each of these political logics of patronage has a different rationale, distinct employment patterns, and divergent effects on governance and service delivery. Empirically, I document the logics of patronage with data on Brazilian municipal governments, a particularly useful context to study patronage given its wide variation in political and economic development and the coexistence of patronage with civil service and other bureaucrat selection modes. To illustrate the diverse uses of patronage and their consequences I combine administrative microdata (including restricted-access, identified data on the universe of municipal employees, and data on the performance of education and healthcare bureaucracies), two original surveys in one state (a face-to-face representative survey of 926 bureaucrats, and an online survey of 755 local politicians), and 121 in-depth interviews with bureaucrats, politicians, and anti-corruption agents done over 18 months of fieldwork in 7 states. Three novel implications emerge from this dissertation. First, patronage can alleviate agency problems and thus enhance the accountability and effectiveness of bureaucrats, not only to extract rents but also to deliver public services.
Second, when politicians use patronage to extract rents, they mobilize a diverse set of strategies that go beyond the hiring of supporters, including the hiring of civil service bureaucrats and the firing (not just hiring) of temporaries. Third, policies commonly used to reduce patronage -- such as civil service regimes, legal constraints on hiring, and elections for key bureaucratic positions -- can have undesirable consequences because of politicians' strategic responses to constraints on their hiring discretion. These findings are relevant to scholars and policymakers seeking to understand and to improve governance and state capacity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 309-330).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128632</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identities and issue opinions : learning from climate change</title>
<link>https://hdl.handle.net/1721.1/128631</link>
<description>Identities and issue opinions : learning from climate change
Vandeweerdt, Clara.
In this dissertation, I explore the formation of issue-specific opinions, in particular public opinion about climate change in the United States. More specifically, I analyze whether people use social groups and identities as mental "shortcuts" in order to form an opinion about complicated political topics such as climate change. I study three identity-related factors that may drive people's opinions about particular issues: partisan media content; the interests of social in-groups; and opinion cues from fellow partisans. Overall, I find that partisan identities are likely to have important effects through the media content that they expose Americans to. Other, more direct pathways for the opinion effects of identity, however, turn out to be surprisingly weak. I find no evidence that Americans' opinions are motivated by the material interests of their in-groups; nor that Americans change their opinions to align with the consensus among their in-party members. In chapter 2, I ask what strategies partisan media use to fit real-world events into ideological narratives. I look at whether or not they connect events to related political issues (e.g. hurricanes and climate change), and whether each side is able to fit events into its existing set of issue positions. Using natural language processing and crowd-sourcing, I analyze almost 2 million hours of radio from hundreds of talk shows. I find that in the aftermath of an event, both ideological sides give far more attention to related political issues. At the same time, there are huge gaps between the positions that liberal and conservative shows tend to take on those issues, and events do very little to close those gaps. Events turn up the volume of the discussion, without changing its ideological tune.
This way, shared experiences could be turned into polarizing factors.; Next, in chapter 3, I investigate whether people change their attitudes about societal issues when they learn that those issues affect others like them. In three pre-registered survey experiments, I find that these in-group interest cues have little to no effect on issue-specific attitudes. This is true for social groups based on gender, race/ethnicity, and sexual orientation. People who closely identify with an in-group do not react more strongly to the group interest information. The findings raise new questions about exactly when and why people's group memberships influence their political attitudes. Finally, in chapter 4, I ask whether people change their opinion when they learn the distribution of opinions among members of their own party (or of the out-party). I also compare the effect of these "mass cues" to the effect of elite cues: information about politicians and their stances on an issue.; I run two pre-registered survey experiments (one national, and one on an Amazon Mechanical Turk convenience sample) and draw two unexpected conclusions. First, I find that mass cues have no noticeable effect on opinions. When participants learn that a stance is shared by almost all members of their in-party, they do not move their own opinion closer to that stance. Neither are they affected by learning about consensus among the out-party. Second, I am unable to replicate the well-established effect of elite cues. Combined with a closer inspection of the literature on cues, these findings suggest that cueing effects might be quite context-dependent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 115-128).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128631</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introducing perovskites to the IoT world using photovoltaic-powered ID tags</title>
<link>https://hdl.handle.net/1721.1/128626</link>
<description>Introducing perovskites to the IoT world using photovoltaic-powered ID tags
Kantareddy, Sai Nithin R.
Billions of everyday objects could benefit from being augmented with sensors and wireless data transmitters. The prospect of developing advanced battery-powered sensors and smart devices with on-board radio and computing power has been a recent research direction for the Internet of Things (IoT). IoT devices enable us to build powerful data-driven applications by acquiring rich environmental information about an object. Often these devices are powered by batteries or direct power to run the electronics and transmit the information. Battery-powered devices are expensive and require frequent battery replacements, resulting in higher maintenance costs that limit their pervasive implementation. Demand for low-cost wireless connectivity presents a huge potential to use passive sensors to augment everyday objects. Passive sensors based on Radio Frequency Identification (RFID) provide an inexpensive, scalable and energy efficient way to gather environmental information.; However, traditional passive tags are restricted in functionality due to the limited RF energy available from an RFID reader. In this thesis, I show how traditional passive RFID tags can be enhanced by providing extra power with low-cost, high performance perovskite photovoltaic energy harvesters. I divide the work into three segments. First, I determine the power required for RFID tags and the current constraints on the communication range. Second, I explore perovskite photovoltaics for powering up passive tags to improve the communication range, and to provide onboard power for external sensors. I explore the tunability of perovskite photovoltaic materials to improve their indoor performance as well as create mechanically flexible energy harvesters. Third, I investigate how having additional sensors on RFID tags powered by low-cost energy harvesters can enable new IoT applications in a variety of areas. The main objectives of this thesis are: 1. 
Investigate passive tag power consumption with respect to different operating conditions 2. Investigate the current constraints on communication range in RFID tags and identify the limitations in real-world implementation 3. Investigate the performance and tunability of perovskite photovoltaics and their integration with the RFID tags 4. Explore industrial applications where the perovskite photovoltaic-powered tags are useful.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 85-97).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128626</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a reactive peptide sequence for site-selective bioconjugation</title>
<link>https://hdl.handle.net/1721.1/128606</link>
<description>Development of a reactive peptide sequence for site-selective bioconjugation
Tuang, Suan Lian.
Achieving catalyst-free, site-selective modification of proteins in water is a significant challenge in chemical biology. Issues of residue specificity, site-selectivity, reagent stability, and reaction rate are pervasive in this field, and despite advances over the past few decades, achieving fast, pinpoint modifications of complex molecules remains a tremendous obstacle. Herein, we describe the development of a nine-amino acid motif (Met-Cys-Pro-Phe-Leu-Pro-Val-Val-Tyr) termed engineered reaction (EnAct) tag. The EnAct interface, discovered by iterative screening of peptide libraries, consists of a reactive peptide (EnAct tag) that undergoes rapid (second-order rate constant, k ~ 150 M⁻¹ s⁻¹) nucleophilic aromatic substitution with a perfluoroarene-containing peptide (EnAct probe).; Bioconjugation reactions centered on peptide interfaces are emerging as promising strategies to prepare homogeneous biological conjugates, and our results with the EnAct interface represent a 210-fold increase in reaction rate over the previous standard for this class of cysteine arylation. Furthermore, the EnAct sequence consists of all-natural amino acids and thus enables the facile genetic engineering of the sequence onto proteins of interest. We disclose here the incorporation of the EnAct sequence at the C-termini of the IgG antibody trastuzumab heavy chains, which were subsequently conjugated to the EnAct probe with excellent site-selectivity, despite the 32 other Cys residues on this protein. Remarkably, this system's rapid kinetics enabled quantitative conversion in 1.5 hours and at lower substrate concentrations.; Finally, this bioconjugation reaction is still selective even in the complex environments of cell lysate mixtures, illustrating the enhanced selectivity and rapid reactivity of the EnAct interface. 
The appreciable increase in cysteine arylation rate and selectivity achieved with the EnAct sequence represents a new standard for site-selective bioconjugations using peptide interfaces. Exploring the versatility of the reactive peptide sequence, we found that it enabled arylations of Cys with small-molecule electrophiles in mostly aqueous media, a reactivity not previously accessible with this class of electrophiles. Furthermore, the perfluoroarene on the probe was found not only to function as an electrophile for thiol arylation, but also to offer a handle for easy elimination to form dehydroalanine. Thus, the EnAct system represents a powerful, versatile, and selective bioconjugation method.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, February, 2020; Cataloged from the PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128606</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Palladium catalyzed cross-coupling of esters and amides</title>
<link>https://hdl.handle.net/1721.1/128605</link>
<description>Palladium catalyzed cross-coupling of esters and amides
Shinabery, Ryan Spence.
Over the last 40 years, great strides have been made in the use of palladium as both a catalyst and a reagent for a multitude of chemical transformations that have had tremendous impacts on all walks of life, from material science to the development and large-scale synthesis of novel pharmaceuticals. One such class of reactions is the palladium-catalyzed functionalization of the [alpha]-position of carbonyl compounds using aryl halides. When compared to palladium catalyzed C-N bond forming reactions, these reactions have been understudied and underutilized. Herein we report our efforts towards the development of the [alpha]-arylation of ethyl acetate by using a continuous flow reactor and a unique lithium amide base. Lessons learned from these studies led to the development of a simple, one-pot protocol for the mono-[alpha]-arylation of amides under palladium catalysis. Previously reported palladium catalyzed [alpha]-arylations of amides with aryl halides required relatively high catalyst loadings, the use of a glove box, excess ligand, pyrophoric reagents, high temperatures or extended reaction times. This protocol affords the desired mono-[alpha]-arylated amide products in good to excellent yields using 1-2 mol % catalyst loading at ambient temperature in as little as two hours.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from the PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128605</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding mucus selective permeability across length scales</title>
<link>https://hdl.handle.net/1721.1/128603</link>
<description>Understanding mucus selective permeability across length scales
Samad, Tahoura Sajida.
Mucus is a hydrogel that covers all wet epithelia in the human body including the eyes, respiratory, gastrointestinal and female reproductive systems. While mucus is sometimes characterized as a waste product, it actually has essential roles in the human body and complex impacts on health. Mucus is the habitat for many of the trillions of microbes in the human body, and has been shown to impact the physiology and virulence of its microbial residents. Mucus is also a selectively permeable barrier, which protects the underlying epithelium by binding and restricting the passage of harmful particles, while simultaneously permitting the passage of essential particles like nutrients and oxygen. Unfortunately, this protective ability can also limit the effective delivery of therapeutics, which can be prevented from reaching their intended targets or inactivated. Despite the widespread impacts on human health outlined above, there exist many gaps in our understanding of mucus permeability.; In this thesis, I first considered the impact of mucus selectivity, examining the effect of mucus selective permeability on the efficacy of antibiotics used to treat the bacterial pathogen Pseudomonas aeruginosa, which forms serious infections within mucus. We found that mucus reduced the efficacy of multiple classes of antibiotics, due in some cases to mucus-antibiotic binding. This result motivated further studies to understand the detailed molecular properties that distinguish particles that bind and are rejected by the mucus barrier from those that permeate, as this knowledge will enable us to predict which therapeutics may be impacted by mucus, and rationally design therapeutics that are effective in mucus. 
I investigated this question first at the length scale of peptides and found that spatial arrangement of charge and hydrophobicity can be used to tune the accumulation and penetration of peptides within a mucus layer, creating a continuum of transport behaviors.; I then built on this understanding to ask a similar question for nanoparticles, examining the transport of libraries of surface-modified phage through mucus. Finally, I considered how passively diffusing particles that can be found within mucus, like the nonmotile mucosal bacterium S. aureus, may harness the active motility of bacteria in the same community as a strategy to travel long distances.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, June, 2019; Cataloged from PDF version of thesis. "Thesis submitted without pagination"--Disclaimer page.; Includes bibliographical references (pages [97]-[109]).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128603</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of dielectric multipactor discharges and gas breakdown initiated by high power millimeter waves</title>
<link>https://hdl.handle.net/1721.1/128600</link>
<description>Studies of dielectric multipactor discharges and gas breakdown initiated by high power millimeter waves
Schaub, Samuel Clay.
This thesis reports the theoretical and experimental investigation of discharge phenomena induced at dielectric surfaces and in gases by a 1.5 MW, 110 GHz gyrotron operating in 3 [mu]s pulses. The experimental study of multipactor discharges on dielectric surfaces in vacuum was performed at 110 GHz, a frequency that is an order of magnitude higher than in previous experimental studies. Two separate test assemblies were constructed: one with the 110 GHz electric field tangential to the dielectric surface and the second with a perpendicular field. Threshold electric fields for multipactor onset were measured in both field orientations for samples of sapphire, alumina, fused quartz, crystal quartz and silicon. The present results at 110 GHz, when combined with previously published data at lower frequencies, show that the threshold electric field for initiation of a dielectric multipactor discharge scales linearly with frequency.; This linear scaling, which is favorable for operation at high frequency, is in good agreement with theoretical predictions. The absorbed RF power due to multipactor was also measured as a function of RF intensity. At low intensities, absorbed RF power was found to agree quantitatively with theoretical predictions, though experimental results diverged from theory at higher RF intensities. Calculations of dielectric multipactor were carried out to help analyze the experimental results. A new equation for predicting threshold RF electric fields was derived that agrees with a wide variety of existing experimental data, spanning orders of magnitude in frequency and a variety of geometries and materials. The theory work also suggests strategies for mitigation of multipactor, dependent upon experimental geometry. 
Gas breakdowns were experimentally characterized at both 110 and 124.5 GHz by focusing the gyrotron beam in air at pressures from 25 to 760 Torr.; Prior studies of this system had revealed that the plasma spontaneously forms a reproducible two-dimensional array of filaments. Optical emission spectroscopy was used to measure peak electron density in this plasma, through Stark broadening of the H[subscript alpha] line. Heating dynamics of the background neutral gas were spatially and temporally resolved using two-dimensional laser interferometry. These results help test the theory of gas breakdown at 110 GHz.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references (pages 191-200).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128600</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/128598</link>
<description>Essays in financial economics
Montecinos Bravo, Alexis.
This thesis consists of three chapters on financial economics: an empirical analysis of government banks, a dynamic stochastic general equilibrium (DSGE) model with financial intermediaries, and a DSGE model for capital utilization and leverage. Chapter 1 presents an empirical analysis of government banks and their effect on aggregate economic variables, such as real per capita GDP growth and GDP growth volatility. It shows that government banks are still pervasive worldwide. I perform several regressions in order to estimate the effects of government banks on the economy and whether these effects are different from those found in previous studies. I find that the effect of state-owned banks is heterogeneous and ultimately depends on how deep the financial market is and how well the political institutions function in every country. Chapter 2 introduces a new DSGE model with heterogeneous households and heterogeneous financial intermediaries: private and government banks. Using empirical evidence about the stabilization role of state-owned banks during recessions, I show that the inclusion of these intermediaries in the aggregate can improve our understanding of the reaction of certain variables, such as GDP, employment, and consumption. I show that the final effect on these variables depends on how deep the financial market is and how important the level of inefficiency in government banks is. In Chapter 3, I document, together with Diogo Duarte and Hamilton Galindo, the relationship between capital utilization and leverage. We find a positive and significant relationship between these variables, which is especially strong between capital utilization and short-run debt. Using our empirical result, we develop a DSGE model to characterize the mechanism behind this relationship. We show that omitting capital utilization as a key mechanism in the business cycle can generate misleading conclusions.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, September, 2019; Cataloged from PDF version of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references (pages 112-116).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128598</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>All photons imaging : time-resolved computational imaging through scattering for vehicles and medical applications with probabilistic and data-driven algorithms</title>
<link>https://hdl.handle.net/1721.1/128597</link>
<description>All photons imaging : time-resolved computational imaging through scattering for vehicles and medical applications with probabilistic and data-driven algorithms
Satat, Guy.
One of the greatest challenges in computational imaging is scaling it to work outside the lab. The main reasons for that challenge are the strong dependency on precise calibration, accurate physical models, and long acquisition times. These prevent practical progress towards medical imaging and seeing through occlusions such as fog in the wild. This dissertation demonstrates that with data-driven and probabilistic modeling we can alleviate these dependencies, and pave the way towards real-world time-resolved computational imaging through extreme scattering conditions using visible light. The ability to image through scattering media in the visible part of the electromagnetic spectrum holds many applications in various industries. For example, seeing through fog would enable autonomous robots to operate in challenging weather conditions; augment human driving; and allow airplanes, helicopters, and drones to take off and land in dense fog conditions.; In medical imaging, the ability to see into the body with near-infrared light would reduce the exposure to ionizing radiation and provide more clinically meaningful data. In order to image in diverse and extreme scattering conditions, we develop novel algorithms inspired by techniques in signal processing, optimization, statistical analysis, compressive sensing, and machine learning that leverage time-resolved sensing. More specifically, we demonstrate techniques that computationally leverage all of the optical signal, including scattered light, as opposed to locking onto a specific part of the optical signal. Furthermore, we show that by introducing probabilistic formulation to the imaging problem, the resulting system does not require user input for calibration and priors; this makes our systems more practical for real-world scenarios and enables them to operate in a wide range of scattering conditions.; We consider four cases of imaging through scattering media with increasing complexity: 1. 
A theoretical analysis of time-resolved single pixel imaging, which demonstrates scene reconstruction even when the entire scene is measured with a single pixel, an equivalent of simple scattering or a blur that is easy to model. 2. A data-driven calibration invariant technique for imaging through simple scattering (a sheet of paper). 3. Imaging through a thick tissue phantom by utilizing all of the optical signal with minimal assumptions on the tissue properties. 4. Imaging through a wide range of dense, dynamic, and heterogeneous fog conditions. In that case, we introduce a probabilistic model that is able to recover the occluded target reflectance and depth without any assumption about the fog.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 199-214).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128597</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the ecology of coral reef microorganisms across different scales within the Caribbean</title>
<link>https://hdl.handle.net/1721.1/128592</link>
<description>Characterizing the ecology of coral reef microorganisms across different scales within the Caribbean
Weber, Laura(Laura G.)
Microorganisms sustain the high productivity of coral reefs and support one of the most diverse, valuable, and threatened ecosystems on Earth. Despite the importance of reef microorganisms, there is a lack of understanding about their ecology, especially on Caribbean reefs. Furthermore, the hastening degradation of reefs due to anthropogenic stressors has made it difficult to understand natural patterns in microbial communities in the context of larger-scale ecosystem changes.; Using genomics and metabolomics approaches paired with biogeochemical and physicochemical measurements as well as quantification of cell abundances, this dissertation provides optimized methods for studying the coral microbiome, investigates potential interactions between corals and seawater microorganisms, measures changes in the composition and diversity of reef seawater microorganisms over different spatial and temporal scales, and provides baseline information about microbial ecology, biogeochemistry, and metabolite compositions of a protected and relatively healthy Cuban coral reef-system to fill these critical knowledge gaps. I found that coral species and reef location influenced the composition of bacteria and archaea within the seawater surrounding coral colonies, and that this seawater was enriched with microbial colonization and interaction genes, providing evidence of a distinct microbial environment surrounding corals named the coral ecosphere.; In a separate study, diel and daily variation superseded spatial variation in terms of influencing shifts in the microbial community. 
At a larger scale, seawater microbial communities collected from the protected reef-system of Jardines de la Reina, Cuba had higher alpha diversity and community similarity, lower nutrient concentrations, and higher abundances of picocyanobacteria compared to less protected reef-systems within Los Canarreos, Cuba and the Florida Keys, U.S.A., and seawater microbial communities collected from each reef-system were influenced by hydrogeography and environmental gradients. Lastly, the extracellular metabolite composition of reef seawater collected across Jardines de la Reina was highly similar, suggesting homogeneous environmental and hydrogeographic conditions across these forereefs.; Overall, this dissertation characterizes reef seawater microbial communities across different scales and provides novel, baseline information about a protected and understudied Cuban reef-system, offering critical information about the ecology of reef microorganisms within the Caribbean.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Biology; and the Woods Hole Oceanographic Institution), 2020; Cataloged from PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128592</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards lightweight high-voltage power conversion</title>
<link>https://hdl.handle.net/1721.1/128589</link>
<description>Towards lightweight high-voltage power conversion
He, Yiou.
An emerging application, electroaerodynamic (EAD) propulsion, has stimulated the need for lightweight high-voltage dc-dc and dc-ac power converters. Weight reduction of these converters in the operating range of interest has seldom been studied and is limited by lossy switching devices, the size of energy storage components, and isolation requirements. Achieving light weight while meeting demands for efficiency, temperature, and isolation requires an understanding of the limitations and weight profiles of components and circuit building blocks, as well as advances on multiple subsystems and the overall system architecture. The thesis presents a lightweight high-frequency high-voltage dc-dc converter with greatly improved weight density compared with conventional designs; an investigation of losses and temperature rises of high-voltage diodes and a circuit technique to use these diodes more effectively; a design study and implementation of a lightweight high-voltage dc-ac converter for use in dielectric barrier discharge ion generation; and a demonstration of the first EAD-propelled flight using the developed lightweight power converters.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 343-356).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128589</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatially organized fluorescent reporters for recording complex biological dynamics in cell population</title>
<link>https://hdl.handle.net/1721.1/128587</link>
<description>Spatially organized fluorescent reporters for recording complex biological dynamics in cell population
Linghu, Changyang
Biological signals, such as the dynamic concentrations of ions, levels of signaling molecules, and activities of protein kinases, interact in complex ways within cells, and can exhibit great cell-to-cell heterogeneity as a function of cell history and state. There is increasing desire to use multiple fluorescent reporters to simultaneously measure multiple biological signals in single cells across cell populations, such as those in the brain. However, due to the diffraction limit of optical imaging, the biological signals recorded from neurons in densely-labeled neural populations in vivo are often mixed with signals from closely passing axons and dendrites of other neurons, resulting in erroneous signaling events and artifactual correlations of measured neural activity. Also, it is not yet possible to simultaneously record any given set of biological signals in single cells, because there are limited sets of corresponding spectrally-orthogonal fluorescent reporters available to date. Even if fluorescent reporters for all biological signals in all possible colors are developed in the future, the number of biological signals that can be simultaneously recorded is still limited by the number of available optical channels. In this thesis, I address these problems by developing two new technologies, soma-targeted fluorescent calcium indicators and spatially multiplexed imaging.; Soma-targeted fluorescent calcium indicators (or 'SomaGCaMPs', the first part of the thesis) are fluorescent reporters of calcium dynamics that are selectively localized at the soma, but not axons and dendrites, of neurons. In vivo optical imaging of SomaGCaMPs in dense neural circuits in mouse and zebrafish brains reported fewer artifactual spikes, increased signal-to-noise ratio, and decreased artifactual correlation across neurons. 
Thus, soma-targeting of fluorescent reporters is a simple and powerful method for high-fidelity population imaging of neural activity in vivo.; Spatially multiplexed imaging (the second part of the thesis) enables simultaneous readout of multiple biological signals in single cells from fluorescent reporters regardless of their spectra. This is achieved by clustering reporters into spatially separated 'Signaling Reporter Islands' (or 'SiRIs') via self-assembling protein scaffolds or RNA scaffolds. Using the spatial dimension as an asset, SiRIs may open up the ability to simultaneously image nearly arbitrary numbers of signals within a physiological cascade.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 161-178).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128587</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The method of moments in convolved random matrix models and discrete analogues</title>
<link>https://hdl.handle.net/1721.1/128585</link>
<description>The method of moments in convolved random matrix models and discrete analogues
Ahn, Andrew(Andrew Jeehynn)
We study the global and local asymptotics of Macdonald processes, their degenerations, and related models using the method of difference operators. We focus on three applications. First, we consider random plane partitions with interactions, arising from Macdonald processes with periodic weighting. For these models, the Macdonald parameters q, t become interaction parameters for the underlying dimer model. We establish global limit shape and fluctuation theorems in the limit as the mesh size goes to 0 and the interaction parameters tend to 1. Second, we consider a particle system obtained by generalizing the notion of squared singular values of products of truncated orthogonal, unitary, and symplectic matrices to a one-parameter family of deformed models. This procedure is analogous to the extension of classical real, complex, and quaternion matrix ensembles to [beta]-ensembles. A discrete time Markov chain is obtained by considering iterative multiplication of matrix factors and its appropriate generalization. We show global limit shapes and fluctuations when time is of constant order and the number of particles tends to infinity. We also establish local limit theorems at the right edge when time increases with the number of particles. Third, we discuss new developments in the method of difference operators for models beyond Macdonald processes. We apply this technique to obtain moment formulas for eigenvalues of sums of unitarily invariant random matrices and for measures derived from tensor products of representations of the unitary group.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 175-178).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128585</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lead chalcogenide thin film materials and processing for infrared optical devices</title>
<link>https://hdl.handle.net/1721.1/128583</link>
<description>Lead chalcogenide thin film materials and processing for infrared optical devices
Su, Peter, Ph.D. (Peter Xinyang), Massachusetts Institute of Technology.
Lead chalcogenides are compounds of lead and a combination of the chalcogens sulfur, selenium, and tellurium. They are semiconductors with band gap energies in the infrared region of the electromagnetic spectrum, making them interesting infrared detector materials for both free space and photonic integrated circuit applications. For longer wavelengths of infrared light, they are transparent dielectric materials with extremely large refractive indices, making them potentially interesting materials for free space dielectric metasurfaces. In this work, we study the processing, properties, and performance of lead chalcogenide materials as thin films and their integration and use in specific infrared photonic devices.; For materials development, we examine how we can use and improve lead chalcogenide thin films for infrared detector applications, both through tuning their band gaps via alloying of the binary lead chalcogenide compounds (PbS, PbSe, PbTe), where we show photoconductivity of ternary alloys greater than an order of magnitude above their noise level; and through controlling their majority carrier type and concentration via the addition of PbO to the thin film deposition source material, which can then be utilized to deposit detectors with 2-3 times higher responsivity. For device applications, we integrated PbTe detectors into photonic integrated circuit methane sensors and developed a new patterning method for PbTe thin films for use as transmissive dielectric metasurfaces.; Integrating PbTe detectors into the photonic integrated circuit improved the methane sensing performance by 2.5 times, while a ratiometric architecture we designed and tested with an electronic read-out circuit tuned to the properties of the integrated PbTe detector promises both the elimination of input laser power fluctuations as a source of spurious signals and 5 times better sensor performance with our testing equipment.
Modulating the laser input at even higher frequencies promises even better performance, with an anticipated limit of detection of 0.11 vol% of methane/[square root]Hz. For patterning transmissive dielectric metasurfaces, a new fabrication method using nanostencils was designed and tested, creating PbTe metalenses with diffraction-limited performance and efficiencies above 40%.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 117-130).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128583</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>In vivo translation of near infrared fluorescent semiconducting single walled carbon nanotube sensors : theoretical and experimental applications</title>
<link>https://hdl.handle.net/1721.1/128582</link>
<description>In vivo translation of near infrared fluorescent semiconducting single walled carbon nanotube sensors : theoretical and experimental applications
Bakh, Naveed Ali.
With the advent of chemical biosensing, a wide variety of biomolecules endogenous to the body are detected in vivo using a number of different methods. Most fluorescence-based chemical sensing is primarily done as a form of analytical diagnostics; recently, however, this technology has been applied to continuous data collection. This work focuses on the translation and usage of fluorescent sensing in vivo, primarily using corona phase molecular recognition (CoPhMoRe) single walled carbon nanotube (SWNT) sensors. These sensors were developed for the lab bench, and their translation to in vivo use required a number of advances to overcome the method error involved with fluorescent sensor usage in vivo. The goal of this work was to create a framework for the implantation and imaging of intensity-modulated sensors to be used in vivo with confidence. This work explored a diverse range of topics in nanosensor development and biomedicine.; The theoretical and experimental tools developed for this work were then applied to a number of different topics. We initially used pharmacokinetic modeling to predict adipose analyte concentrations for sensor optimization. This pharmacokinetic model was then adapted to predict glucose responsive insulin (GRI) dynamics. A GRI is a therapeutic that modulates its potency, concentration, or dosing of insulin in relation to a patient's dynamic glucose concentration. Current GRI design lacks a theoretical basis for setting fundamental design parameters such as glucose reactivity, dissociation constant, potency, and in vivo efficacy.
We use well-developed pharmacokinetic models of human glucose and insulin metabolism coupled to a kinetic model representation of a freely circulating GRI to determine the desired kinetic parameters and dosing for optimal glycemic control.; Our model shows there exists an optimal parameter space that results in successful glycemic control within prescribed constraints over a 24-hour period that persists through a skipped meal. Our results show how tools developed for the sensing space can be applied to adjacent fields. Experimentally, we applied the concept of ratiometric fluorescent sensing to account for the myriad of confounding artifacts that occur while imaging in vivo, from light scattering to mechanical perturbations. The CoPhMoRe technique was applied to find a poly(styrene p-styrenesulfonate) polymer wrapped nanotube for use as a reference and was combined with a DNA SWNT sensor for riboflavin in a hydrogel. The encapsulating hydrogel was shown to preserve the riboflavin sensor response after exposure to 10% mouse serum and after three days of implantation in vivo.; Combining the sensor with the invariant reference into a single hydrogel to ratio the modulated signaling, we show that it corrects for in vivo errors, such as breathing and heart-beats, resulting in an order of magnitude increase in signal detection confidence. This work shows that the ratiometric hydrogel strategy improves in vivo sensing, enabling SWNT and potentially other fluorescent nanosensor constructs. Even with ratiometric sensing, there is the issue of optimizing the orientation, implantation location, and analyte administration for in vivo imaging of fluorescent sensors, as ratiometric sensing cannot account for all sources of error.
We show that subcutaneous implantation and local injection involve significantly more method error than intraperitoneal implantation and analyte administration.; In combination with hydrogel implants, bottom-up imaging, and retro-orbital injections, we show that it is possible to administer analyte systemically while maintaining a stable fluorescent signal. Having shown the ability to detect local analyte concentrations in vivo with confidence, we introduce chemical tomography as a potential application for the detection of any generic analyte. Chemical tomography is a technique in which a sensor and a tracer analyte can be injected and used to characterize the 3D mass transfer characteristics of a volume. We show proof of concept for using fluorescent nanosensors for vitamins as a way to characterize properties of an environment. In this work we use HUVEC cells adhered to a porous membrane beneath a layer of MoS₂ sensors that change fluorescence in response to riboflavin. By adding riboflavin to this space, we can characterize the 2D diffusivity profile across the porous membrane.; Using riboflavin in combination with the sensor, we can characterize the decreases in diffusivity as the HUVEC cells contract the space between the pores. Finally, as SWNT sensors are applied to a variety of environments, they will inherently experience different laser excitations. Exposing fluorescent SWNT sensors to varying laser fluence [mW/area] can alter their responses to analytes significantly. As the laser fluence increases, the nanotube response to certain analytes increases. However, this effect is corona phase dependent, as corona phases like sodium cholate and phospholipid-PEG wrapped SWNTs are immune to the range of laser fluences tested. Additionally, we show that increasing the SWNT residence time under laser exposure by encapsulating the sensors in a hydrogel amplifies the effect.
Mathematical modeling, Raman G-peak shifting studies, and fluorescence wavelength shifts all suggest that this effect is not due to laser heating of a single nanotube.; We show the importance of testing and accounting for changes in laser fluence as novel SWNT sensors are developed for new applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis. "May 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128582</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Valence encoding and memory in the amygdala</title>
<link>https://hdl.handle.net/1721.1/128581</link>
<description>Valence encoding and memory in the amygdala
Zhang, Xiangyu, Ph.D., Massachusetts Institute of Technology.
Assignment of valence, positive or negative, to external stimuli is essential to the formation of emotional responses. Information pertaining to an animal's emotional state serves as a critical component of memories. However, the neural circuits that govern valence representations and modulate the balance of their dynamic interactions remain poorly understood. This thesis presents recent advances in understanding valence encoding and emotional memories in amygdala circuits. We identified genetically distinct neuronal populations in the central amygdala (CeA), examined their neural activity in response to appetitive or aversive stimuli, and characterized their functional contributions to valence-specific behaviors. Using cell-type specific approaches, we dissected a basolateral amygdala (BLA)-to-CeA circuit that promotes and suppresses appetitive behaviors.; This intra-amygdalar circuit motif reveals an organizing principle of genetically specific representation of positive and negative valence in the mammalian brain. Next, we sought to examine the role of positive and negative neurons in the control of fear memory. Using the engram-identification technique, we demonstrated that fear extinction memory is formed and stored in a genetically-defined population of BLA neurons that also drives reward behaviors and antagonizes BLA's fear neurons. The fear extinction engram cells and reward neurons are equivalent with regard to their neuronal representations within the BLA and their functions in driving appetitive behaviors and fear extinction behaviors. Accordingly, fear extinction is a newly formed reward memory.
Lastly, we hypothesized that the absence of an expected aversive stimulus acts as a teaching signal for amygdala-mediated fear extinction learning and memory.; To this end, we demonstrated that dopamine receptor 1 (Drd1)-mediated dopaminergic transmission from ventral tegmental area (VTA) dopamine neurons to BLA reward neurons facilitates fear extinction learning and memory, suggesting that dopamine may signal the aversive prediction error and recruit the reward circuitry to drive fear extinction behaviors. Collectively, our work sheds light on the cellular and circuit mechanisms underlying valence encoding and memory in the amygdala, and offers implications for developing therapeutic targets for emotional disorders, such as generalized anxiety disorder (GAD) and post-traumatic stress disorder (PTSD).
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 167-184).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128581</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying functionally distinct neuronal ensembles within the memory engram</title>
<link>https://hdl.handle.net/1721.1/128580</link>
<description>Identifying functionally distinct neuronal ensembles within the memory engram
Sun, Xiaochen, Ph.D., Massachusetts Institute of Technology.
Memories in the brain are encoded by sparse ensembles of neurons within the memory engram. However, it remains unclear whether these neurons are functionally identical or can be divided into distinct subpopulations that encode distinct aspects of the memory and differently drive memory outputs. In this study, we found that contextual fear memory engrams in the mouse dentate gyrus (DG) contained functionally distinct neuronal ensembles, genetically defined by the Fos- or Npas4-dependent transcriptional pathways. The Fos-dependent ensemble promotes memory generalization and receives enhanced excitatory synaptic inputs from the medial entorhinal cortex. The Npas4-dependent ensemble mediates memory discrimination and receives enhanced inhibitory synaptic drive from local cholecystokinin-expressing interneurons. Moreover, acute deletion of Npas4 disrupted inhibitory synaptic transmission and memory discrimination, suggesting that activity-dependent genes like Npas4 and Fos play causal roles in the formation of memory engrams. Taken together, our findings support a working model in which neuronal ensembles within engrams undergo distinct learning-induced synaptic modifications and drive memory-guided behaviors differentially.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 112-127).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128580</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Validation of ion and electron scale gyrokinetic simulations in an NSTX H-mode and comparisons with a synthetic diagnostic for high-k scattering</title>
<link>https://hdl.handle.net/1721.1/128578</link>
<description>Validation of ion and electron scale gyrokinetic simulations in an NSTX H-mode and comparisons with a synthetic diagnostic for high-k scattering
Ruiz Ruiz, Juan, Ph. D., Massachusetts Institute of Technology.
In this thesis I perform an extensive validation study in an NSTX NBI-heated H-mode discharge, predicting that electron thermal transport can be entirely explained by short-wavelength electron-scale turbulence fluctuations driven by the electron temperature gradient mode (ETG), in conditions of both strong and weak ETG turbulence drive. For the first time, local, nonlinear gyrokinetic simulations carried out with the GYRO code [98] reproduce the experimental levels of electron thermal transport, the frequency spectrum of electron-scale turbulence, the shape of the wavenumber spectrum, and the ratio of fluctuation levels between strongly driven and weakly driven ETG turbulence conditions. Ion thermal transport is very close to neoclassical levels predicted by NEO [215], consistent with stable ion-scale turbulence predicted by GYRO.; Quantitative comparisons between high-k fluctuation measurements [65] and simulations are enabled via a novel synthetic high-k diagnostic implemented for GYRO in real space. A new type of simulation resolving the full ETG spectrum in an unusually large domain (L[subscript r], L[subscript theta]) ~ (20, 20)[rho][subscript s] is required to quantitatively compare with the measured frequency spectra of the high-k density fluctuations. Simulations that best match all experimental observables predict that the measured high-k turbulence is closer to the streamer peak of the density fluctuation spectrum than was previously believed.
The frequency spectra characteristics of electron-scale turbulence (spectral peak and width) can be consistently reproduced by the synthetic spectra, but these turn out not to be critical constraints on the simulations.; The shape of the high-k wavenumber spectrum and the fluctuation level ratio between the strong and weak ETG conditions can also be simultaneously matched by electron-scale simulations within sensitivity scans about the experimental profile values, and prove to be strong discriminators among the simulations analyzed. Validation metrics are used to discriminate between simulations and were able to isolate the effect of safety factor and magnetic shear in matching the shape of the measured fluctuation wavenumber spectrum. Together, electron thermal transport comparisons and quantitative agreement of electron-scale turbulence spectra give the strongest experimental evidence to date supporting ETG-driven turbulence fluctuations as the main mechanism driving anomalous electron thermal transport in the outer core of modest-[beta] NSTX NBI-heated H-modes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 299-311).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128578</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A hardware and software architecture for pervasive parallelism</title>
<link>https://hdl.handle.net/1721.1/128566</link>
<description>A hardware and software architecture for pervasive parallelism
Jeffrey, Mark Christopher.
Parallelism is critical to achieve high performance in modern computer systems. Unfortunately, most programs scale poorly beyond a few cores, and those that scale well often require heroic implementation efforts. This is because current parallel architectures squander most of the parallelism available in applications and are too hard to program. This thesis presents Swarm, a new execution model, architecture, and system software that exploits far more parallelism than conventional multicores, yet is almost as easy to program as a sequential machine. Programmer-ordered tasks sit at the software-hardware interface. Swarm programs consist of tiny tasks, as small as tens of instructions each. Parallelism is dynamic: tasks can create new tasks at run time. Synchronization is implicit: the programmer specifies a total or partial order on tasks. This eliminates the correctness pitfalls of explicit synchronization (e.g., deadlock and data races). Swarm hardware uncovers parallelism by speculatively running tasks out of order, even thousands of tasks ahead of the earliest active task. Its speculation mechanisms build on decades of prior work, but Swarm is the first parallel architecture to scale to hundreds of cores due to its new programming model, distributed structures, and distributed protocols. Leaning on its support for task order, Swarm incorporates new techniques to reduce data movement, to speculate selectively for improved efficiency, and to compose parallelism across abstraction layers. Swarm achieves efficient near-linear scaling to hundreds of cores on otherwise hard-to-scale irregular applications. These span a broad set of domains, including graph analytics, discrete-event simulation, databases, machine learning, and genomics. Swarm even accelerates applications that are conventionally deemed sequential. It outperforms recent software-only parallel algorithms by one to two orders of magnitude, and sequential implementations by up to 600× at 256 cores.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 139-167).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128566</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling time-resolved phenomena</title>
<link>https://hdl.handle.net/1721.1/128455</link>
<description>Sampling time-resolved phenomena
Bhandari, Ayush.
Broadly speaking, time-resolved phenomena refers to the three-dimensional capture of a scene based on the time-of-flight principle. Since distance, speed, and time are proportionally related, knowing the time of flight allows one to estimate distances. This time of flight may be attributed to a pulse of light or a wave packet of sound. Depending on the sub-band of the electromagnetic spectrum, the interaction of waves or pulses with the scene of interest results in measurements, and based on this proxy of the physical world, one is interested in inferring physical properties of the scene. This may be something as simple as depth, or something more involved such as the fluorescence lifetime of a biological sample or the diffusion coefficient of a turbid/scattering medium. The goal of this work is to develop a unifying approach to study time-resolved phenomena across various sub-bands of the electromagnetic spectrum, devise algorithms to solve the corresponding inverse problems, and provide fundamental limits. Sampling theory, which deals with the interplay between the discrete and the continuous realms, plays a critical role in this work due to the continuous nature of the physical world and the discrete nature of its proxy, that is, the time-resolved measurements.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 193-210).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128455</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social inductive biases for reinforcement learning</title>
<link>https://hdl.handle.net/1721.1/128415</link>
<description>Social inductive biases for reinforcement learning
Adjodah, Dhaval D. K. (Adjodlah, Dhaval Dhamnidhi Kumar)
How can we build machines that collaborate and learn more seamlessly with humans, and with each other? How do we create fairer societies? How do we minimize the impact of information manipulation campaigns, and fight back? How do we build machine learning algorithms that are more sample efficient when learning from each other's sparse data, and under time constraints? At the root of these questions is a simple one: how do agents, human or machine, learn from each other, and can we improve it and apply it to new domains? The cognitive and social sciences have provided innumerable insights into how people learn from data using both passive observation and experimental intervention. Similarly, the statistics and machine learning communities have formalized learning as a rigorous and testable computational process.; There is a growing movement to apply insights from the cognitive and social sciences to improving machine learning, as well as opportunities to use machine learning as a sandbox to test, simulate and expand ideas from the cognitive and social sciences. A less researched and fertile part of this intersection is the modeling of social learning: past work has focused more on how agents can learn from the 'environment', and there is less work that borrows from both communities to look into how agents learn from each other. This thesis presents novel contributions on the nature and usefulness of social learning as an inductive bias for reinforcement learning.; I start by presenting the results from two large-scale online human experiments: first, I observe Dunbar cognitive limits that shape and limit social learning in two different social trading platforms, with the additional contribution that synthetic financial bots that transcend human limitations can obtain higher profits even when using naive trading strategies.
Second, I devise a novel online experiment to observe how people, at the individual level, update their beliefs about future financial asset prices (e.g. S&amp;P 500 and Oil prices) from social information. I model such social learning using Bayesian models of cognition, and observe that people make strong distributional assumptions about the social data they observe (e.g. assuming that the likelihood data is unimodal).; I was fortunate to collect one round of predictions during the Brexit market instability, and find that social learning leads to higher performance than learning from the underlying price history (the environment) during such volatile times. Having observed the cognitive limits and biases people exhibit when learning from other agents, I present a motivating example of the strength of inductive biases in reinforcement learning: I implement a learning model with a relational inductive bias that pre-processes the environment state into a set of relationships between entities in the world. I observe strong improvements in performance and sample efficiency, and even observe the learned relationships to be strongly interpretable.; Finally, given that most modern deep reinforcement learning algorithms are distributed (in that they have separate learning agents), I investigate the hypothesis that viewing deep reinforcement learning as a social learning distributed search problem could lead to strong improvements. I do so by creating a fully decentralized, sparsely-communicating and scalable learning algorithm, and observe strong learning improvements with lower communication bandwidth usage (between learning agents) when using communication topologies that naturally evolved due to social learning in humans.
Additionally, I provide a theoretical upper bound (that agrees with our empirical results) regarding which communication topologies lead to the largest learning performance improvement.; Given a future increasingly filled with decentralized autonomous machine learning systems that interact with humans, there is an increasing need to understand social learning to build resilient, scalable and effective learning systems, and this thesis provides insights into how to build such systems.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2019; Cataloged from the official PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references (pages 117-126).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128415</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Excitonic spin engineering for solar cells and organic light-emitting diodes</title>
<link>https://hdl.handle.net/1721.1/128410</link>
<description>Excitonic spin engineering for solar cells and organic light-emitting diodes
Einzinger, Markus.
The last two decades have seen renewed interest in molecular organic semiconductors. Since these materials support the formation of excitons, their behavior differs considerably from their inorganic counterparts. This gives rise to a variety of novel properties that can be exploited to create entirely new optoelectronic devices or improve existing ones. In this thesis, we explore excitonic concepts for improving both organic light-emitting diodes (OLEDs) and silicon solar cells. OLEDs are already commercially successful. However, they still suffer from several major drawbacks. Multiexcited-state phenomena are believed to be the root cause of challenges like efficiency roll-off and degradation. The development of novel strategies to reduce exciton densities under heavy load is therefore highly desirable.; In this thesis, it is shown that triplet exciton lifetimes of thermally activated delayed fluorescence (TADF) emitter molecules can be manipulated in the solid state by exploiting intermolecular interactions. The external heavy-atom effect of brominated host molecules leads to increased spin-orbit coupling, which in turn enhances intersystem crossing rates in the guest molecule. Shorter triplet exciton lifetimes are observed, while high photoluminescence quantum yields (PLQYs) are maintained and emission spectra are essentially unaltered. A change in the intersystem crossing rate ratio due to increased dielectric constants leads to almost 50% lower triplet exciton densities in the emissive layer in the steady state and results in an improved onset of the PLQY roll-off at high excitation densities. Efficient OLEDs with better roll-off behavior based on these novel hosts are fabricated, demonstrating the suitability of this concept for real-world applications.; In addition, efficient and stable blue emitters for OLEDs are urgently needed for next-generation display and lighting applications. This thesis presents a tunable series of TADF emitter molecules.
After pairing the iminodibenzyl donor with the triazine acceptor via a phenylene linker, dihedral angle tuning is employed to regulate the difference between the energy levels of singlet and triplet excited states. Enhanced reverse intersystem crossing rates are observed in response to increased methylation at the phenylene linker. PLQYs of up to 98% are achieved upon doping into a solid-state matrix. When incorporated in devices, the maximum external quantum efficiency is 28.3% for the emitter with the most favorable trade-off between singlet-triplet splitting and fluorescent oscillator strength.; This result highlights the general applicability of dihedral angle tuning, a molecular design strategy that can be used to improve the performance of donor-acceptor type TADF emitters without significantly changing their emission spectra. In contrast, contemporary solar cell technologies are dominated by silicon, an inorganic semiconductor. But when absorbing photons, silicon (like other semiconductors) wastes energy in excess of its bandgap. Reducing these thermalization losses is possible by sensitizing the silicon solar cell using singlet fission, a carrier multiplication phenomenon that occurs only in organic semiconductors. In this process, two triplet excitons are generated from a singlet exciton. In tetracene, those triplet excitons are energetically matched to the silicon bandgap. Transferring triplet excitons to silicon creates additional electron-hole pairs, promising to increase cell efficiencies from the single-junction limit of 29% to as high as 35%.; In this thesis we reduce the thickness of the protective hafnium oxynitride layer at the surface of a silicon solar cell to just eight angstroms, using electric-field-effect passivation to enable the efficient energy transfer of triplet excitons formed in tetracene. The maximum combined yield of the fission in tetracene and the energy transfer to silicon is around 133%. 
The processes at the interface are investigated using photoluminescence and magnetic-field-effect experiments, revealing the impact of different interlayer thicknesses. Finally, the thesis presents the first example of a singlet-fission-enhanced silicon solar cell, a breakthrough that establishes the potential of singlet exciton fission to increase the efficiencies of silicon solar cells and reduce the cost of the energy that they generate.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 133-147).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128410</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-agent coordination under limited communication</title>
<link>https://hdl.handle.net/1721.1/128409</link>
<description>Multi-agent coordination under limited communication
Bhargava, Nikhil(Nikhil Gaurev)
In this thesis, we present a theory for constructing real-time executives that can reason about communication between agents. In multi-agent coordination problems, different agents have different beliefs about the state of the world that can eventually be reconciled if the agents are able to share sufficient information with one another. When communication is limited, this task becomes more difficult. To achieve robustness, coordination decisions need to be made, executed, and adapted in real-time by a real-time executive. Most existing real-time executives rely on perfect knowledge of the state of the world, making it difficult to use them in scenarios where agents either cannot or prefer not to communicate and share information. This thesis offers three contributions that together provide the basis for constructing a real-time executive capable of handling multi-agent coordination under limited communication.; First, we introduce delay controllability as a way to augment the input plan representation to include a communication model. Delay controllability lets us reason about multi-agent activities under limited communication in the form of communication delays and provides a guarantee that problem evaluation is tractable. Second, we provide a way to indicate by when each agent must communicate the results of their actions. Many agents have flexibility in choosing exactly when to communicate. We provide an algorithm for choosing a low-cost set of moments to communicate and present a strategy for adjusting those choices when communication networks are unreliable, causing disruptions in the original communication plan. Third, we offer a way to model noisy communication.
Noisy communication offers approximate temporal information that is useful during execution but is generally difficult to incorporate.; We introduce variable-delay controllability as a way to model this kind of communication and provide the first sound and complete algorithm for incorporating noisy information that runs in polynomial time.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 239-244).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128409</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Palladium- and nickel-catalyzed C-N cross-coupling reactions featuring soluble organic bases</title>
<link>https://hdl.handle.net/1721.1/128408</link>
<description>Palladium- and nickel-catalyzed C-N cross-coupling reactions featuring soluble organic bases
Dennis, Joseph Michael,Jr.
Chapter 1: Breaking the Base Barrier: An Electron-Deficient Palladium Catalyst Enables the Use of a Common Soluble Base in C-N Coupling Due to the low intrinsic acidity of amines, palladium-catalyzed C-N cross-coupling has been plagued continuously by the necessity to employ strong, inorganic, or insoluble bases. To surmount the many practical obstacles associated with these reagents, we utilized a commercially available dialkyl triarylmonophosphine-supported palladium catalyst that facilitates a broad range of C-N coupling reactions in the presence of weak, soluble bases. The mild and general reaction conditions show extraordinary tolerance for even highly base-sensitive functional groups. Additionally, insightful heteronuclear NMR studies using ¹⁵N-labeled amine complexes provide evidence for the key acidifying effect of the cationic palladium center.; Chapter 2: Pd-Catalyzed C-N Coupling Reactions Facilitated by Organic Bases: Mechanistic Investigation Leads to Enhanced Reactivity in the Arylation of Weakly Binding Amines The ability to use soluble organic amine bases in Pd-catalyzed C-N cross-coupling reactions has provided a long-awaited solution to the many issues associated with employing traditional, heterogeneous reaction conditions. However, little is known about the precise function of these bases in the catalytic cycle or about the effect of variations in base structure on catalyst reactivity. We used ¹⁹F NMR to analyze the kinetic behavior of C-N coupling reactions facilitated by different organic bases.
In the case of aniline coupling reactions employing DBU, the resting state was a DBU-bound oxidative addition complex, LPd(DBU)(Ar)X, and the reaction was found to be inhibited by base.; Generally, however, depending on the binding properties of the chosen organic base, increasing the concentration of the base can have a positive or negative influence on the reaction rate. Furthermore, the electronic nature of the aryl triflate employed in the reaction directly affects the reaction rate. The fastest reaction rates were observed with electronically neutral aryl triflates, while the slowest were observed with highly electron-rich and electron-deficient substrates. We propose a model in which the turnover-limiting step of the catalytic cycle is dependent on the relative nucleophilicity of the base, compared to that of the amine. This hypothesis guided the discovery of new reaction conditions for the coupling of weakly binding amines, including secondary aryl amines, which were unreactive nucleophiles in our original protocol.; Chapter 3: Use of a Droplet Platform to Optimize Pd-Catalyzed C-N Coupling Reactions Promoted by Organic Bases Recent advances in Pd-catalyzed carbon-nitrogen cross-coupling have enabled the use of soluble organic bases instead of the insoluble or strong inorganic bases that are traditionally employed. The single-phase nature of these reaction conditions facilitates their implementation in continuous flow systems, high-throughput optimization platforms, and large-scale applications.
In this work, we utilized an automated microfluidic optimization platform to determine optimal reaction conditions for the couplings of an aryl triflate with four types of commonly employed amine nucleophiles: anilines, amides, primary aliphatic amines, and secondary aliphatic amines.; By analyzing trends in catalyst reactivity across different reaction temperatures, base strengths, and base concentrations, we have developed a set of general recommendations for Pd-catalyzed cross-coupling reactions involving organic bases. The optimization algorithm determined that the catalyst supported by the dialkyl triarylmonophosphine ligand AlPhos was the most active in the coupling of each amine nucleophile. Furthermore, our automated optimization revealed that the phosphazene base BTTP can be used to facilitate the coupling of secondary alkylamines and aryl triflates. Chapter 4: The Quest for the Ideal Base: Rational Design of a Nickel Precatalyst Enables Mild, Homogeneous C-N Cross-Coupling Palladium-catalyzed amination reactions using soluble organic bases have provided a solution to the many issues associated with heterogeneous reaction conditions. Still, homogeneous C-N cross-coupling approaches cannot yet employ bases as weak and economical as trialkylamines.; Furthermore, organic base-mediated methods have not been developed for Ni(0/II) catalysis, despite some advantages of such systems over analogous Pd-based catalysts. We designed a new air-stable and easily prepared Ni(II) precatalyst bearing an electron-deficient bidentate phosphine ligand that enables the cross-coupling of aryl triflates with aryl amines using triethylamine (TEA) as base. The method is tolerant of sterically congested coupling partners, as well as those bearing base- and nucleophile-sensitive functional groups.
With the aid of density functional theory (DFT) calculations, we determined that the electron-deficient auxiliary ligands decrease both the pKa of the Ni-bound amine and the barrier to reductive elimination from the resultant Ni(II)-amido complex. Moreover, we determined that precluding Lewis acid-base complexation between the Ni catalyst and the base, due to steric factors, is important for avoiding catalyst inhibition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128408</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the mechanism of antiretroviral nucleoside analogs as inhibitors of Epstein-Barr virus lytic DNA replication</title>
<link>https://hdl.handle.net/1721.1/128407</link>
<description>Understanding the mechanism of antiretroviral nucleoside analogs as inhibitors of Epstein-Barr virus lytic DNA replication
Drosu, Natalia C.
Epstein-Barr virus (EBV) is a human B-cell tropic double-stranded DNA [gamma]-herpesvirus. To date, there are no antiviral agents proven to be clinically effective in the treatment of EBV infection or EBV-associated diseases. In the experiments contained in this thesis, we define the ability of nucleoside/nucleotide analogs licensed for the treatment of HIV to inhibit EBV lytic DNA replication in vitro. Using an established system of EBV lytic DNA replication in HH514-16 cells induced with butyrate, we validate azidothymidine (AZT) as an inhibitor of EBV lytic DNA replication. We further demonstrate that several antiretroviral nucleoside/nucleotide analogs, including stavudine (d4T), abacavir (ABC), tenofovir disoproxil fumarate (TDF) and tenofovir alafenamide (TAF), effectively inhibit EBV DNA replication.; Inhibition of DNA replication by these compounds is specific to the lytic cycle, and primarily attributable to effects on the viral DNA polymerase by drug-triphosphates, except in the case of AZT. We extend studies of the tenofovir prodrugs TDF and TAF to show that these compounds are not only effective, but highly potent inhibitors of EBV lytic DNA replication. TAF has a 35- and 24-fold, and TDF a 10- and 7-fold, lower IC₅₀ than acyclovir and penciclovir, respectively. TAF is also twice as potent as the [beta]-herpesviral drug ganciclovir. In vitro, the active metabolite of TDF and TAF, tenofovir-diphosphate, is more potent than acyclovir-triphosphate at inhibiting dNTP incorporation into a DNA template by the EBV DNA polymerase. A functional consequence of bypassing viral-dependent drug metabolism is the ability to initiate treatment prior to the viral lytic cycle.; Additionally, we include a clinical case report of a patient with relapsing-remitting multiple sclerosis who experienced symptomatic and radiologic improvement with Combivir (AZT/3TC), and suggest these effects may be mediated via effects on EBV.
These studies highlight the need for further investigation and detailed characterization of antiviral agents for EBV and other herpesviruses. The framework presented in this thesis provides an experimental approach for testing candidate nucleoside/nucleotide analogs and probing their interactions with both cellular and viral targets. This work may inform the selection of drugs for clinical translation in the treatment of EBV infection and EBV-associated diseases.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 169-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128407</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial gene regulation by transcription factors</title>
<link>https://hdl.handle.net/1721.1/128406</link>
<description>Combinatorial gene regulation by transcription factors
Grossman, Sharon R.(Sharon Rachel)
Combinatorial gene regulation is encoded in enhancers and promoters in the form of binding sites for transcription factors (TFs), which collaboratively recruit the transcriptional machinery and drive gene expression. Using high-throughput and quantitative technologies developed by our lab and others, we studied TF binding sites in enhancers from numerous different cell types and regulatory systems, shedding light on general principles of motif composition and organization in typical cellular regulatory elements. We find extensive synergy between TF binding sites, some with organizational constraints and some with flexible positioning. We demonstrate that different TFs bind at distinct positions within regulatory elements, suggesting a new type of architectural constraint in enhancers. Importantly, our analysis of both TF organization and cooperativity revealed distinctive patterns that separate TFs into potential functional classes. Together, our results suggest a structure of the regulatory code at the level of TF function and generate new hypotheses about regiospecific binding patterns and functions of TF classes within enhancers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128406</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Folded functional foams</title>
<link>https://hdl.handle.net/1721.1/128404</link>
<description>Folded functional foams
Calisch, Samuel Eli.
Materials with effective properties dominated by geometric structure rather than composition, or architected materials, are used in nature and engineering to maximize performance subject to constraints of mass and energy. Conventional engineering examples have included textiles, polymer and metal foams, and honeycombs, but developments in digital fabrication have vastly expanded the field. The majority of this new work has focused on 3D printing for its high degree of geometric control, though production rates have been slow, material properties poor, and manufacturing costs high. An alternative, growing body of work has developed around structural origami and kirigami, where planar sheets are processed and folded to create three-dimensional architected materials.; This work aims to leverage planar fabrication for scalable manufacturing, on-demand customization, and low embodied energy while exploiting the geometric richness of origami to tailor shape and maximize mechanical performance. This thesis seeks to demonstrate the engineering potential of folded architected materials by showing scalability through automated production, structural control of three-dimensional shape and stiffness, and functional control of energy transduction. We first show a custom machine for automating cutting and folding of shaped honeycombs, illustrating the capability to prescribe large-scale geometry of an architected material in a continuous production process. We then modify this construction to make shaped architected materials with prescribed stiffness, producing shoe soles as a demonstration. Finally, we show three forms of energy transduction in folded architected materials -- reflection, absorption, and transmission -- and apply each to a relevant, difficult engineering problem. For energy reflection, we maximize the ratio of strain energy output to input in a collision event, taking running shoe soles as a test case and comparing performance to conventional polymer foams.
For energy absorption, we maximize total energy absorbed per unit mass and apply this to vehicular crashboxes, comparing the results to aluminum honeycombs. For energy transmission, we use energy input to drive deformation modes with desired output force and geometry, taking as an application the generation of traveling waves on a hydrofoil surface (a longstanding goal of active flow control), evaluating viability under tow tank testing.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2019; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 117-135).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128404</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An optofluidic platform for longitudinal circulating tumor cell studies in mouse models of cancer</title>
<link>https://hdl.handle.net/1721.1/128399</link>
<description>An optofluidic platform for longitudinal circulating tumor cell studies in mouse models of cancer
Hamza, Bashar,Ph. D.(Bashar M.)Massachusetts Institute of Technology.
Metastasis is a complex, multi-step process that is responsible for over 90% of cancer-related deaths. Circulating tumor cells (CTCs) that are shed from primary tumors represent the disseminating "seeds" that give rise to distant malignant growths. Despite their importance to metastasis, understanding of their role has been hindered by the extreme difficulty of characterizing CTC populations over time and linking them to metastases that occur during natural tumor progression. The use of in vivo mouse models of cancer for studying metastasis has been crucial for discovering effective new cancer biomarkers and therapies. This thesis outlines the development of a new platform that enables longitudinal and dynamic CTC studies in mouse models of cancer. The platform is designed to help better understand how changes in CTCs may reflect the evolution of their tumors of origin over time.; It is composed of a microfluidic, cell-sorting chip connected serially to an un-anesthetized mouse via an implanted arteriovenous shunt. Pneumatically-controlled microfluidic valves capture CTCs as they flow through the chip, and CTC-depleted blood is returned back to the mouse via the shunt. To demonstrate the utility of this platform, we profiled CTCs isolated longitudinally from animals over several days of treatment with the BET inhibitor JQ1 using single-cell RNA sequencing (scRNA-Seq). We showed that our approach eliminates potential biases driven by intermouse heterogeneity that can occur when CTCs are collected across different mice. Furthermore, the direct access to a mouse's circulatory system in real time allowed us to devise a new method for measuring the circulatory dynamics and physical properties of CTCs in mice.; Direct measurements of the main parameters that govern CTC levels in blood - mainly the intravasation rate and the half-life time in the circulation - were demonstrated. 
Observing how such parameters change during tumor development or in response to therapy may help shed light on the dynamics of the most tumorigenic CTCs. Finally, in collaboration with several laboratories at MIT and elsewhere, further validation of this platform is demonstrated by carrying out longitudinal studies of single CTCs and rare, circulating, tumor-experienced immune cells collected from different mouse models of cancer. Information gained from these studies will help dissect the potential mechanisms of resistance to therapy (e.g. immunotherapy) and identify new "druggable" candidate cells.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 109-119).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128399</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dense visual learning for robot manipulation</title>
<link>https://hdl.handle.net/1721.1/128398</link>
<description>Dense visual learning for robot manipulation
Florence, Peter R.(Peter Raymond)
We would like to have highly useful robots which can richly perceive their world, semantically distinguish its fine details, and physically interact with it sufficiently for useful robotic manipulation. This is hard to achieve with previous methods: prior work has not equipped robots with the scalable ability to understand the dense visual state of their varied environments. The limitations have been both in the state representations used and in how to acquire them without significant human labeling effort. In this thesis we present work that leverages self-supervision, particularly via a mix of geometrical computer vision, deep visual learning, and robotic systems, to scalably produce dense visual inferences of the world state. These methods either enable robots to teach themselves dense visual models without human supervision, or they act as a large multiplying factor on the value of information provided by humans. Specifically, we develop a pipeline for providing ground truth labels of visual data in cluttered and multi-object scenes, we introduce the novel application of dense visual object descriptors to robotic manipulation and provide a fully robot-supervised pipeline to acquire them, and we leverage this dense visual understanding to efficiently learn new manipulation skills through imitation. With real robot hardware we demonstrate contact-rich tasks manipulating household objects, including generalizing across a class of objects, manipulating deformable objects, and manipulating a textureless symmetrical object, all with closed-loop, real-time vision-based manipulation policies.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 115-127).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128398</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Epitranscriptomics : translational regulation of metabolism, drug resistance and proteostasis during cellular stress</title>
<link>https://hdl.handle.net/1721.1/128394</link>
<description>Epitranscriptomics : translational regulation of metabolism, drug resistance and proteostasis during cellular stress
Davis, Nick K.(Nicholas K.)
The epitranscriptome -- the naturally occurring system of chemical modifications on ribonucleic acid (RNA) -- is an emerging frontier of research into how changes in the cellular environment are coupled with global rates of protein synthesis. Here we report the development of new analytical and computational approaches to study mechanisms of epitranscriptomic regulation and function in the context of (1) phenotypic antibiotic resistance in bacteria, and (2) proteostasis in eukaryotes. While at least 11 major classes of RNA have been identified to date, this work focuses on transfer RNA (tRNA), the most diversely modified species of RNA, which plays a central role in the initiation, elongation and termination of translation. To provide context for investigating the epitranscriptomic regulation of microbial adaptation, we first use multivariate statistical modelling to integrate time-resolved, systems-level analyses of mycobacterial persistence using an in vitro model of tuberculosis infection.; Combining biochemical characterization of cellular pH and redox state, metabolic phenotyping, time-course metabolomics, whole-genome transcriptomics, and quantitative proteomics, we demonstrate that Mycobacterium bovis BCG (BCG) adapts to starvation by entering a ketotic state that results from coordinated metabolic shifts towards lipolysis and fatty acid [beta]-oxidation. We also show that management of toxic ketone body intermediates appears to be mediated by cytochrome P450 (CYP)-linked ketolysis and carbon cycling through CO₂ fixation, as evidenced by elevated endogenous reactive oxygen species production during starvation and the sensitivity of starved persisters to well-known CYP poisons.
Using this model of mycobacterial pathogenesis, we next describe how BCG responds to nutrient deprivation by reprogramming the tRNA epitranscriptome to mediate selective translation of codon-biased stress response genes.; We discuss how insights from preliminary experiments with a new in-house method, Absolute QUAntification RNA-Seq (AQUA RNA-Seq), will deepen our mechanistic understanding of this alternative genetic code, and also describe a strategy for chemotherapeutic intervention to reverse phenotypic drug resistance. Finally, we detail the development of a new high-throughput platform to identify and quantify the role of the epitranscriptome in translational fidelity in Saccharomyces cerevisiae. Our results indicate that loss of certain tRNA-modifying enzymes induces the aggregation of stress response proteins with amino acid misincorporations that map to specific codon sites.; The research conducted under this thesis (1) advances our fundamental understanding of how genes are regulated at the level of translation, (2) establishes the role of the epitranscriptome in regulating cellular adaptation to physiological stringency, and (3) provides mechanistic insights into how the epitranscriptome can be engineered for the development of new RNA-targeted medicines.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Sc. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128394</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning distributions of transformations from small datasets for applied image synthesis</title>
<link>https://hdl.handle.net/1721.1/128342</link>
<description>Learning distributions of transformations from small datasets for applied image synthesis
Zhao, Amy(Xiaoyu Amy)
Much of the recent research in machine learning and computer vision focuses on applications with large labeled datasets. However, in realistic settings, it is much more common to work with limited data. In this thesis, we investigate two applications of image synthesis using small datasets. First, we demonstrate how to use image synthesis to perform data augmentation, enabling the use of supervised learning methods with limited labeled data. Data augmentation -- typically the application of simple, hand-designed transformations such as rotation and scaling -- is often used to expand small datasets. We present a method for learning complex data augmentation transformations, producing examples that are more diverse, realistic, and useful for training supervised systems than hand-engineered augmentation. We demonstrate our proposed augmentation method for improving few-shot object classification performance, using a new dataset of collectible cards with fine-grained differences. We also apply our method to medical image segmentation, enabling the training of a supervised segmentation system using just a single labeled example. In our second application, we present a novel image synthesis task: synthesizing time lapse videos of the creation of digital and watercolor paintings. Using a recurrent model of paint strokes and a novel training scheme, we create videos that tell a plausible visual story of the painting process.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis. "February 2020."; Includes bibliographical references (pages 75-91).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128342</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactions at interfaces across scales : from adsorption to adhesion</title>
<link>https://hdl.handle.net/1721.1/128341</link>
<description>Interactions at interfaces across scales : from adsorption to adhesion
Girard, Henri-Louis Jean-Paul.
The interface between two phases is a prime site for exchanges to occur: from heat or mass transfer to the adsorption of contaminants. This work explores a range of interactions at interfaces across scales, from the adsorption of molecules on substrates to the adhesion of ice on solids. Surface engineering is used to tailor the physicochemical properties of surfaces (microstructure, roughness, chemical functionalization, and charge) to achieve the desired behavior. First, macroscopic features are introduced on superhydrophobic substrates to restrict transport phenomena between an impacting droplet and a solid surface. Then, the adsorption of organic contaminants from oil is investigated as a function of surface functionalization and a hybrid liquid-solid substrate is developed to mitigate deposition. At the macroscale, the ice-solid interface is examined and two separate approaches that combine adhesion reduction with a robust surface design to make them practical for use in harsh environments are demonstrated. Finally, the directed adsorption of proteins is used to build in situ templates that enhance the nucleation rate of crystals for applications in protein-based drug manufacturing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-119).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128341</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to see the physical world</title>
<link>https://hdl.handle.net/1721.1/128332</link>
<description>Learning to see the physical world
Wu, Jiajun,Ph.D.Massachusetts Institute of Technology.
Human intelligence is beyond pattern recognition. From a single image, we are able to explain what we see, reconstruct the scene in 3D, predict what's going to happen, and plan our actions accordingly. Artificial intelligence, in particular deep learning, still falls short in some preeminent aspects when compared with human intelligence, despite its phenomenal development in the past decade: it generally tackles specific problems, requires large amounts of training data, and easily breaks when generalizing to new tasks or environments. In this dissertation, we study the problem of physical scene understanding -- building versatile, data-efficient, and generalizable machines that learn to see, reason about, and interact with the physical world. The core idea is to exploit the generic, causal structure behind the world, including knowledge from computer graphics, physics, and language, in the form of approximate simulation engines, and to integrate them with deep learning.; Here, learning plays a multifaceted role: models may learn to invert simulation engines for efficient inference; they may also learn to approximate or augment simulation engines for more powerful forward simulation. This dissertation consists of three parts, where we investigate the use of such a hybrid model for perception, dynamics modeling, and cognitive reasoning, respectively. In Part I, we use learning in conjunction with graphics engines to build an object-centered scene representation for object shape, pose, and texture. In Part II, in addition to graphics engines, we pair learning with physics engines to simultaneously infer physical object properties. We also explore learning approximate simulation engines for better flexibility and expressiveness.
In Part III, we leverage and extend the models introduced in Parts I and II for concept discovery and cognitive reasoning by looping in a program execution engine.; The enhanced models discover program-like structures in objects and scenes and, in turn, exploit them for downstream tasks such as visual question answering and scene manipulation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 271-303).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128332</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Terahertz wave-molecule interactions via CMOS chips : from comb gas sensor with absolute specificity to ultra-stable, miniaturized clock</title>
<link>https://hdl.handle.net/1721.1/128330</link>
<description>Terahertz wave-molecule interactions via CMOS chips : from comb gas sensor with absolute specificity to ultra-stable, miniaturized clock
Wang, Cheng,Ph.D.Massachusetts Institute of Technology.
Under the excitation of electromagnetic waves within the millimeter wave and terahertz regimes, polar gaseous molecules generate unique rotational spectra. The frequency and absorption intensity of rotational spectral lines are directly linked to the micro-scale molecular structures. They serve as an indicator or "fingerprint" of molecules. Thus, a rotational spectrometer with absolute specificity is promising for the analysis of complicated gas mixtures (e.g. human exhaled breath and industrial gas leakage). To utilize this important property, a CMOS dual-frequency-comb spectrometer is proposed and implemented. Broadband (220-320 GHz), fast-scanning (20x faster than conventional single-tone sensors) and highly sensitive (ppm level without pre-concentration) gas analysis is accomplished with the adoption of a high-parallelism architecture and multi-functional, highly-efficient circuit topologies.; This work also reveals that rotational spectral lines with a quality factor of ~10⁶ can serve as the frequency references of ultra-stable clock systems. Based on this principle, two chip-scale molecular clocks (CSMC) locked to the 231.061 GHz rotational spectral line of carbonyl sulfide (OCS) molecules are presented. Their fully-electronic implementations on 65 nm CMOS achieve "atomic-clock" level stability, miniaturization, low cost and low DC power. The first CSMC prototype locks to the fundamental dispersion curve of the OCS transition with a frequency-shift-keying (FSK) spectral line probing scheme. An Allan deviation of 3.8 x 10⁻¹⁰ at an averaging time of τ = 10³ s is measured, with 66 mW DC power.
Next, an upgraded CSMC prototype adopting high-order dispersion-curve locking effectively improves the clock stability to 4.3 x 10⁻¹¹ (τ = 10³ s).; The CSMCs present great potential for the time/phase synchronization of future high-speed wireless access networks, and for high-precision navigation and sensing under GPS-denied conditions, such as underwater seismology for oil detection.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 151-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128330</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a cognitive network management and control system</title>
<link>https://hdl.handle.net/1721.1/128329</link>
<description>Towards a cognitive network management and control system
Rezaee, Arman.
Future networks have to accommodate an increase of 3-4 orders of magnitude in data rates with heterogeneous session sizes and strict time deadline requirements. The dynamic nature of scheduling of large transactions and the need for rapid actions by the network management and control system, require timely and judicious collection of network state information. Within this context we will focus on the problem of shortest path routing, and identify pragmatic schemes that allow a central controller to collect relevant delay statistics from various links and nodes within the network. We present Significant Sampling as an adaptive monitoring technique to collect and disseminate network state information when it can be of significant value to the optimal operation of the network, and in particular when it can help in identifying the shortest routes.; We start by developing an analytical framework that can identify the optimal time for the collection of such information in a small but realistic setting, when the underlying delay model is a continuous-time diffusion process (e.g. Wiener process or Ornstein-Uhlenbeck process) and its parameters are known by the controller. We show that this technique balances the need for updated state information against the collection and dissemination costs and provides an algorithm that yields near optimum performance. We then extend the results by introducing a reinforcement learning framework that learns the aforementioned optimal policy from its own interactions with the network, and without any prior assumptions regarding the underlying delay model. In addition to achieving a performance comparable to the analytically derived policies, the deep reinforcement learning solution is more flexible and general and can accommodate a diverse set of network environments.; This is particularly important because it can provide good solutions for complex network environments where analytically tractable solutions are not feasible. 
We conclude our work by noting that sensible network controllers should continue to deliver a good performance between distinct instances of state collection and thus any meaningful solution should strive to meet application demands despite the unavoidable uncertainty about the instantaneous state of the network. To that end, we introduce a novel diversity routing scheme that can accommodate requirements regarding delay variations despite a controller's relative uncertainty about the instantaneous state of the network. More specifically we utilize mean-variance analysis as the basis for traffic distribution and route selection, and show that this technique can improve the users' quality of service by taking into account the correlated nature of delay across different paths.; Finally, we comment on the potential application of this method to general transportation networks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 155-159).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128329</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-power operation of interferometric gravitational-wave detectors</title>
<link>https://hdl.handle.net/1721.1/128328</link>
<description>High-power operation of interferometric gravitational-wave detectors
Buikema, Aaron.
With the conclusion of the first two observing runs of the Advanced LIGO detectors, which saw the first direct detection of gravitational waves, we are firmly in the era of gravitational-wave astronomy. To reach the highest sensitivities, current interferometric gravitational-wave detectors are designed for hundreds of kilowatts of circulating optical power. At these high circulating powers, the sensitivity of the detectors to gravitational waves will be limited by the quantum properties of the light: shot noise at frequencies above ~ 100 Hz, and quantum radiation pressure noise at lower frequencies. To reach the high powers necessary for achieving the quantum noise limits imposed by the light, it is essential to solve the control problems and understand the additional noise introduced by high power operation. Additionally, development of high-power laser sources that reach the stringent noise and reliability requirements is crucial. This work comprises three experiments aimed at reaching the radiation-pressure-dominated regime of interferometric gravitational-wave detectors. The first part presents results from a high-power, meter-long Fabry-Pérot Michelson interferometer to probe classical and quantum radiation pressure effects using a gram-scale mechanical oscillator. The second part is an exploration of the effects of electric fields and charging of test masses on the sensitivity of the LIGO detectors, which may limit the ability to observe radiation-pressure effects. Finally, we describe the development and characterization of a high-power, narrow-linewidth ytterbium-doped fiber amplifier for use in future gravitational-wave detectors.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-172).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128328</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Realizing quantum spin models with ⁷Li atoms in an optical lattice</title>
<link>https://hdl.handle.net/1721.1/128327</link>
<description>Realizing quantum spin models with ⁷Li atoms in an optical lattice
Dimitrova, Ivana Ljubomirova.
Quantum spin Hamiltonians are paradigmatic models, which display different kinds of quantum phase transitions, strongly-correlated and topological ground states, and various regimes of transport. Adding to their significance, many mappings exist between quantum spin models and other systems in different areas of physics, mathematics, and beyond. Even though quantum spin models have been studied extensively, there are still many open questions. Simulating these Hamiltonians with ultracold atoms in optical lattices provides a new perspective, with wide tunability of parameters and minimal coupling to the environment. The mapping involves using the Mott insulating state of ultracold atoms in optical lattices, where the energy of a second-order tunneling process (superexchange) maps to the parameters of a Heisenberg model. This thesis provides a detailed roadmap for the design and building of such a quantum simulator with ⁷Li atoms in optical lattices.; Each step of the process is described, together with the methods and techniques used for the building and the characterization of the physical system. A focus is placed on using the Mott insulator as a starting point for spin physics experiments and, in particular, on the characterization and improvements of the mapping from a density sector description to a spin sector description of the system. Several schemes for implementing and studying spin systems are presented. In particular, the feasibility of implementing the Heisenberg spin-1/2 and spin-1 models in this system is described. The tilted lattice is presented as a tool for studying pure superexchange-driven dynamics and for increasing their timescale by suppressing first-order tunneling and the role of number defects.
The first measurements with this machine of superexchange-driven dynamics, and their tuning over a wide range in the anisotropic Heisenberg spin-1/2 models, are presented.; Finally, the versatility of the BEC 5 machine is showcased by a study which does not involve an optical lattice. It explores the realization of an exotic quantum phase, a supersolid, in a new way. After many years of building and improvements, the BEC 5 machine emerges as a repeatable and reliable quantum simulator which has a clear scientific agenda of exploring many-body ground states and non-equilibrium dynamics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 279-292).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128327</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced light-atom interaction in an optical resonator</title>
<link>https://hdl.handle.net/1721.1/128326</link>
<description>Enhanced light-atom interaction in an optical resonator
Duan, Yiheng.
Cavity quantum electrodynamics is a powerful platform for manipulating photon-atom interactions. In this thesis, we explore its applications in photonic and atomic state preparation. With an optical cavity and an ensemble of cesium atoms, we demonstrate that, for two light fields ψS (signal) and ψA (ancilla) that have only weakly interacted with one another, measurements on the ancilla can produce a substantial conditional change on ψS. We observe conditional signal power changes over a large range of ~30 (by a factor between 0.1 and 3.2), and phase shifts up to π/2, induced by measurements in different ancilla bases. The highest power gain of 3.2 is achieved with a success probability of 3%. The method allows one to modify or boost a given interaction by trading in success probability for interaction strength, and is generically applicable to a variety of systems. Next, we move on to atomic state preparation. We demonstrate cavity cooling of an ensemble of 200 cesium atoms to the theoretical limit. Within 200 ms, the atomic temperature is reduced from 200 μK to 10 μK, mainly determined by the cavity linewidth. The cavity cooling performance is largely independent of the atomic energy structure. This in principle makes it possible to apply the technique to molecules and atoms with complex internal energy structure. We further cool the atomic ensemble to quantum degeneracy with Raman sideband cooling. To suppress the unfavorable two-body and three-body loss rates of cesium, we confine the atoms into a 1D geometry. In this 1D geometry, cesium atoms with a large negative scattering length form a metastable state known as a super-Tonks-Girardeau (sTG) gas. We calibrate for the first time the two-body and three-body short-range correlations of the gas. Compared to a three-dimensional non-interacting Bose gas, the g(2) and g(3) correlations of the sTG gas are reduced by a factor of 5 and 130, respectively.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128326</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-assembly of biological heteropolymers</title>
<link>https://hdl.handle.net/1721.1/128325</link>
<description>Self-assembly of biological heteropolymers
Falk, Martin Jin-teng.
In this thesis, we are primarily concerned with understanding the complicated geometrical and topological structures that polymers can adopt. We first consider this in the context of chromatin, the polymer of DNA and associated proteins. Our experiments and coarse-grained modeling suggest that attractions between heterochromatic regions are central to the separation of the active and inactive genome in nuclei. We adopt a similar strategy of coarse-grained polymer modeling in order to devise a collagen-like scheme for twisting polymers together. We find that such a scheme generically includes the presence of defects, which we speculate could be useful in designing hierarchical assemblies of twisted filaments. In order to extend strategies for twisting of filaments to arbitrary braid topologies, we construct a simple numerical model for a device that manipulates float-attached wires with capillary interactions between the walls of the device and the float. We use this model to rationalize design rules for the device, and to predict the motion of the float in non-trivial geometries. Finally, we study the dynamics of a two-dimensional seven-particle cluster as it relaxes from an extended, polymer-like state. We find that this system rarely reaches its (non-degenerate) global free energy minimum.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 115-122).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128325</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-ray searches for decaying sterile neutrinos with the Micro-X and XQC sounding rockets</title>
<link>https://hdl.handle.net/1721.1/128324</link>
<description>X-ray searches for decaying sterile neutrinos with the Micro-X and XQC sounding rockets
Goldfinger, David Charles.
The nature of dark matter is a major question in modern physics. Cosmological measurements have motivated the existence of matter beyond the standard model, but thus far there has not been a definitive observation of any particular candidate. One proposed particle is the sterile neutrino, a counterpart of the observed active neutrino with right-handed chirality. A recent detection of an unidentified X-ray emission line has been suggested as a possible signal of sterile neutrino decay, but requires more observations with high-resolution spectroscopy to fully explore the nature of the line. In the first half of this thesis, we present the results from a search for a similar unidentified line, using data from the X-ray Quantum Calorimeter (XQC) sounding rocket.; The XQC microcalorimeter array provides superior energy resolution to current satellite instruments and its wide field of view is well suited for observations of the Milky Way dark matter halo, which would provide an all-sky signal. This analysis does not find evidence of an emission line, but also does not exclude the signal seen in other targets, motivating a re-flight of the instrument targeting near the center of the galaxy where the signal strength is expected to be greater. In the second half of this thesis, we present a summary of the Micro-X sounding rocket and its development into a flight instrument. Micro-X is an X-ray microcalorimeter, which uses sensitive thermometry to perform high-resolution spectroscopy with superior resolution to other non-dispersive techniques.; The Micro-X array employs Transition Edge Sensors, which provide improved resolution and larger array sizes than the Si thermistor detectors used in previous instruments, along with SQUID multiplexing for readout; Micro-X is the first instrument to employ either of these technologies in a space environment.
We also describe the results of its first flight, on July 22, 2018, in which it attempted to observe the Cassiopeia A supernova remnant. While a failure of the attitude control system meant that no astronomy could be done with this flight, it was still useful for evaluating the instrument performance in anticipation of future flights.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128324</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical selective electrokinetic concentration : the universal next-generation biomolecule enrichment technique for molecular diagnostics</title>
<link>https://hdl.handle.net/1721.1/128323</link>
<description>Hierarchical selective electrokinetic concentration : the universal next-generation biomolecule enrichment technique for molecular diagnostics
Ouyang, Wei, Ph.D. Massachusetts Institute of Technology.
Rapid and reliable detection of ultralow-abundance nucleic acids and proteins in complex biological media may greatly advance clinical diagnostics and biotechnology development. Because of the slow mass transport and weak binding kinetics at ultralow concentration of target biomolecules, enrichment of target biomolecules plays an essential role in the detection of ultralow-abundance biomolecules. Currently, nucleic acid tests rely on enzymatic processes for target amplification (e.g. polymerase chain reaction), which have many inherent issues restricting their implementation in diagnostics. On the other hand, there exist no protein amplification techniques, greatly limiting the development of protein-based diagnostics.; By learning from the desired and undesired features of existing techniques, we designed the blueprint of the next-generation biomolecule enrichment technique, which should ideally be universally applicable to all kinds of biomolecules and be capable of specifically enriching only the target biomolecules among the background biomolecules by billion-fold rapidly. Electrokinetic concentration is a promising candidate for the next-generation biomolecule enrichment technique, because of its simple architecture and ease of operation, high concentration speed, universal applicability, and the rich physics of the system that may enable the development of new functionalities. We defined a technical roadmap of engineering the primitive electrokinetic concentration technique toward the next-generation biomolecule enrichment technique. We start by deciphering the mechanism of electrokinetic concentration (Chapter 2), which is instrumental in the rational design and innovation of the system.; We next develop specific enrichment of target biomolecules in the electrokinetic concentrator based on electrophoretic mobility-based separation and mobility engineering of affinity binders (Chapter 3).
We then realize the billion-fold enrichment capability of the electrokinetic concentrator by massive parallelization and hierarchical cascading of unit electrokinetic concentrators (Chapter 4). After that, we demonstrate the engineered electrokinetic concentrator as an integrated, self-contained platform for universal amplification-free molecular diagnostics (Chapter 5). Finally, we interface the engineered electrokinetic concentrator with standard analytics to enhance their analysis sensitivity and greatly simplify their workflows (Chapter 6). We conclude the thesis and present our outlook on future directions (Chapter 7).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 182-200).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128323</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical analyses of exoplanetary systems and individual studies of the atmospheres of two sub-Neptune-sized planets</title>
<link>https://hdl.handle.net/1721.1/128322</link>
<description>Statistical analyses of exoplanetary systems and individual studies of the atmospheres of two sub-Neptune-sized planets
Guo, Xueying, Ph.D. Massachusetts Institute of Technology.
With over 4000 exoplanets already confirmed and more to come through various surveys, efforts have been made to measure their population properties and to characterize individual planets in terms of their mass, temperature, atmosphere, and evolution history. In this thesis, I present research on two exoplanet topics - planet occurrence rates and atmospheres of sub-Neptune-sized planets - through three major projects and one ongoing work. The first project is a uniform spectroscopy analysis of over 700 Kepler target stars, which disproves the popular hypothesis that the discrepancy between close-in gas giant occurrence rates measured from transit surveys and from the California radial velocity survey can be completely explained by their stellar metallicity difference, using the relation between occurrence rates and stellar properties.; The second project contains a uniform analysis of 64 Spitzer transit observations of 28 sub-Neptune-sized planets around M/K dwarfs, which enables the measurement of transmission spectral slopes of these planets from Kepler's broad optical bandpass to Spitzer's 4.5 μm IRAC channel. With these transmission spectral slopes, I propose that there exist two populations of cool small planets around low-mass stars characterized by their transmission spectral slopes, and that the smaller but significant population (20 ± 10% of all) could produce transmission spectra with detectable features. In the third major project, a detailed measurement of the transmission spectrum of the sub-Neptune-sized planet HD 97658b is presented, using four HST/WFC3 observations, twelve Spitzer IRAC observations, and eight MOST observations. Subsequently, I discuss the implications of the transmission spectrum of HD 97658b on the atmospheric composition of this planet.; I also present the progress from ongoing research to measure the secondary eclipse of the super-Earth-sized planet TOI 561.02, discovered recently by TESS, around a solar-type star.
The secondary eclipse could encode the day-side brightness temperature and composition of this planet. In addition to individual planet studies, I perform a uniform assessment on the transmission spectrum detectability of all currently confirmed small cool/warm planets with the upcoming JWST mission, and rank the best targets for future efforts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 207-233).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128322</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward distributed control for autonomous electrical energy systems</title>
<link>https://hdl.handle.net/1721.1/128321</link>
<description>Toward distributed control for autonomous electrical energy systems
Miao, Xia, Ph.D. Massachusetts Institute of Technology.
In this thesis we study the problem of enabling autonomous electrical energy systems (AEESs) by means of distributed control. We first propose a modular modeling approach that represents a general electrical energy system (EES) as a negative feedback configuration comprising a planar electrical network subsystem and a subsystem of single-port components. The input-output specifications of all components are in terms of power and voltage. This mathematical modeling supports the basic physical functionality of balancing power supply and demand at the acceptable Quality of Service (QoS). These input-output specifications are met by the controllable components equipped with the newly proposed distributed control. We show that these controllers enable stable and feasible system-level closed-loop dynamics. Moreover, an interactive algorithm for autonomous adjustments of their controller set points based on the information exchange with neighboring components is introduced. This serves as a proof-of-concept illustration of how components adjust their power and voltage toward a system-level equilibrium. Such a process is the basis for autonomous reconfigurable operation of small microgrids. As the first step toward scaling up the proposed concepts, we consider the problem of enhanced automatic generation control (E-AGC) for systems with highly dynamic load variations, including effects of intermittent renewable generation. Further work is needed to fully generalize this approach for control design of large-scale EES. In addition to theoretical results, we also report the results of several numerical and hardware tests. These show the effectiveness of the proposed approach in fairly complex scenarios, including unplanned large faults and hard-to-predict fast-varying power disturbances.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 136-144).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128321</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Poisson-Boltzmann : strong correlations and extreme confinement in ionic fluids</title>
<link>https://hdl.handle.net/1721.1/128320</link>
<description>Beyond Poisson-Boltzmann : strong correlations and extreme confinement in ionic fluids
Levy, Amir, Ph.D. Massachusetts Institute of Technology.
Understanding the electrostatic interactions of ions in the bulk and near electrified surfaces has been a fundamental question in physics for over a century since the Poisson-Boltzmann theory was first introduced. In this thesis, we study the bulk properties of ionic fluids in two important cases where the Poisson-Boltzmann theory fails: extreme confinement and strong ion-ion interactions. We first ask how ions behave when confined to a long and narrow tube. Recent advances in nanofabrication technology have enabled precise measurements in extremely narrow nanopores and revealed critical gaps in our understanding. A striking result of constraining ions to reside along a line is the exponentially long screening length that easily exceeds the macroscopic length of the pore, leading to electroneutrality breakdown. Remarkably, this result has not been reported before, despite its fundamental consequences for ion transport and electrokinetic phenomena.; We build a general theoretical framework for electroneutrality breakdown in nanopores and show how it provides an elegant interpretation for the peculiar scaling observed in experimental measurements of ionic conductance in carbon nanotubes. Strong ion-ion correlations arise when the electrostatic interaction between neighboring ions is comparable to or greater than their thermal energy. This is most pronounced in ionic liquids, highly concentrated solvent-free electrolytes. While the Poisson-Boltzmann theory generally predicts a monotonically decaying correlation function, ionic liquids have strong charge ordering and long-ranged charge oscillations. In this work, we show that the charges in ionic liquids form the optimal structure that minimizes the electrostatic energy in the presence of strong positional disorder.
We develop an approximate minimization scheme based on the Goemans-Williamson Max-Cut algorithm, adapted for a fully-connected graph with Coulombic interactions.; We demonstrate how the persistent layering structure exists due to partial ordering, which is maximized in ionic solids but gradually disappears with added solvent. Eventually, by adding solvent molecules or increasing the temperature, the system departs from its ground state and a mean-field description becomes more suitable. Finally, we study the regime of intermediate ionic strength using a non-local permittivity operator, which captures two important effects: ion-ion correlations and solvent structure. Our approach is phenomenological and introduces a small number of fitting parameters. We study the activity coefficients of bulk electrolytes in a wide range of ionic solutions and find that our models capture the experimental data well.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-177).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128320</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a new Dy quantum gas experiment</title>
<link>https://hdl.handle.net/1721.1/128319</link>
<description>Development of a new Dy quantum gas experiment
Lunden, William(William D.)
Since the first realization of Bose-Einstein condensation in an atomic gas at the end of the twentieth century, ultracold atomic gases have become a widely adopted platform for the study of various quantum phenomena. In recent years, attention has increasingly turned to species with large magnetic dipole moments due to the much stronger long-range interactions that these species exhibit in comparison with the more commonly studied alkalis. Dysprosium, with a magnetic moment of about 10 μB, is the most magnetic atomic species and therefore has become an attractive platform for studying systems in which the long-range (dipole-dipole) interactions compete with or dominate over the contact interactions. In this thesis I describe the design and optimization of a new dysprosium quantum gas machine. Apart from giving a detailed description of the components of the apparatus and their performance, I describe in detail the characterization and optimization of the "angled slowing" technique which is used to enhance the loading rate of our magneto-optical trap (MOT). I also describe in detail the production and detection of the first Bose-Einstein condensates (BECs) produced using the apparatus. This thesis also contains a detailed description of the development of new control hardware and software which are used in the dysprosium experiment, but can be (and have been) used with other quantum gas experiments. On the hardware side, I discuss the design of high-performance analog voltage control channels which offer advantages over commercially available alternatives. On the software side, I discuss a laboratory control and logging database system which I designed, which both expands the capabilities of our control software and simplifies the storage and accessibility of lab data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 153-158).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128319</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining scalable high performance programming with DEF</title>
<link>https://hdl.handle.net/1721.1/128317</link>
<description>Defining scalable high performance programming with DEF
Leiserson, William Mitchell.
Performance engineering is performed in languages that are close to the machine, especially C and C++, but these languages have little native support for concurrency. However, we are now deep into the multicore era of computer hardware, meaning that scalability depends on concurrent data structures. Contrast this with modern systems languages, like Go, that provide support for concurrency but incur invisible, sometimes unavoidable, overheads on basic operations. Many applications, particularly in scientific computing, require something in between. In this thesis, I present DEF, a language that's close to the machine for the sake of performance engineering, but which also has features that provide support for concurrency. These features are designed with costs that don't impede code that doesn't use them, and they preserve the flexibility enjoyed by C programmers in organizing memory layout and operations. DEF occupies the excluded middle between the two categories of languages and is suitable for high performance, scalable applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 149-156).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128317</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strong and scalable metadata security for voice calls</title>
<link>https://hdl.handle.net/1721.1/128316</link>
<description>Strong and scalable metadata security for voice calls
Lazar, David, Ph.D., Massachusetts Institute of Technology.
This dissertation presents a scalable approach to protecting metadata (who is communicating with whom) in a communication system. The emphasis in this dissertation is on hiding metadata for voice calls, but the approach is applicable to any two-way communication between users. Our approach is embodied in a new system named Yodel, the first system for voice calls that hides metadata from a powerful adversary that controls the network and compromises servers. Voice calls require sub-second message latency, but low latency has been difficult to achieve in prior work where processing each message requires an expensive public key operation at each hop in the network. Yodel avoids this expense with the idea of self-healing circuits, reusable paths through a mix network that use only fast symmetric cryptography. Once created, these circuits are resilient to passive and active attacks from global adversaries. Creating and connecting to these circuits without leaking metadata is another challenge that Yodel addresses with the idea of guarded circuit exchange, where each user creates a backup circuit in case an attacker tampers with their traffic. We evaluate Yodel across the internet and it achieves acceptable voice quality with 990 ms of latency for 5 million simulated users.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 73-76).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128316</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trust less : shrinking the trusted parts of trusted systems</title>
<link>https://hdl.handle.net/1721.1/128315</link>
<description>Trust less : shrinking the trusted parts of trusted systems
Lebedev, Ilia Andreevich.
Modern computers, industrial control systems, and other automation are broadly vulnerable as a result of decades of systemic forces that have prioritized cost and performance over security. Computers across the board face a crisis in the form of motivated software adversaries with access to our imperfect and enormously complex software. Considering these weaknesses, trust in modern computing systems is often not well-placed. Looking ahead to a shift in our collective priorities, this thesis is centered around a rigorous discussion of hardware-assisted isolation and enclaves -- authenticated software modules -- as a means to drastically reduce the complexity of trusted systems. By allowing trustworthy enclaved software to co-exist with, but remain strongly isolated from, existing software, we enable a gentle transition toward trustworthy systems. Specifically, this thesis refines formal definitions of enclaved execution and threat model via a series of hardware and software co-designs. These case studies explore enclave processors with small trusted computing bases spanning a gradient from an embedded SoC to a modern high-performance processor. This work is complementary to, and enables more effective application of, many powerful ideas such as information flow control, formal verification, multi-party computation, and other tools for trustworthy computing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 213-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128315</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse tensor algebra compilation</title>
<link>https://hdl.handle.net/1721.1/128314</link>
<description>Sparse tensor algebra compilation
Kjølstad, Fredrik Berg.
This dissertation shows how to compile any sparse tensor algebra expression to CPU and GPU code that matches the performance of hand-optimized implementations. A tensor algebra expression is sparse if at least one of its tensor operands is sparse, and a tensor is sparse if most of its values are zero. If a tensor is sparse, then we can store its nonzero values in a compressed data structure, and omit the zeros. Indeed, as the matrices and tensors in many important applications are extremely sparse, compressed data structures provide the only practical means to store them. A sparse tensor algebra expression may contain any number of operations, which must be compiled to fused loops that compute the entire expression simultaneously. It is not viable to support only binary expressions, because their composition may result in worse asymptotic complexity than the fused implementation.; I present compiler techniques to generate fused loops that coiterate over any number of tensors stored in different types of data structures. By design, these loops avoid computing values known to be zero due to the algebraic properties of their operators. Sparse tensor algebra compilation is made possible by a sparse iteration theory that formulates sparse iteration spaces as set expressions of the coordinates of nonzero values. By ordering iteration space dimensions hierarchically, the compiler recursively generates loops that coiterate over tensor data structures one dimension at a time. By organizing per-dimension coiteration into regions based on algebraic operator properties, it removes operations that will result in zero. And by transforming the sparse iteration spaces, it optimizes the generated code. 
The result is the first sparse iteration compiler, called the Tensor Algebra Compiler (taco).; Taco can compile any tensor algebra expression, with tensors stored in different types of sparse and dense data structures, to code that matches the performance of hand-optimized implementations on CPUs and GPUs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 118-128).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128314</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An adaptive variational multiscale method with discontinuous subscales for aerodynamic flows</title>
<link>https://hdl.handle.net/1721.1/128310</link>
<description>An adaptive variational multiscale method with discontinuous subscales for aerodynamic flows
Huang, Arthur(Arthur Chan-wei)
A promising methodology for accurate and efficient simulation of aerodynamic flows is output-based mesh adaptation, which optimizes a mesh to minimize the discretization error in an output of interest. The state of the art in output-based adaptation uses the discontinuous Galerkin (DG) method, which is computationally expensive due to its duplicated degrees of freedom. Existing continuous Galerkin (CG) methods require up to 20 times fewer degrees of freedom, but lack the combination of stability and adjoint consistency required for output-based adaptation. This thesis presents a novel high order continuous Galerkin method, which is both adjoint consistent and stable. The scheme, called Variational Multiscale with Discontinuous subscales (VMSD), models unresolved solution perturbations with a discontinuous representation. The solution discontinuities are then used to stabilize the problem using methods borrowed from discontinuous Galerkin methods. At the same time, the mathematical structure of the discretization allows for the elimination of additional degrees of freedom in a computationally efficient manner, so that the method has a linear system of the same size as a conventional CG discretization. Finally, because the scheme is adjoint consistent, accurate error estimates can be obtained for use in an output-based mesh adaptation process. In this work, the method is derived and its optimal properties demonstrated through analysis and numerical experiment. In particular, the thesis describes the integration of VMSD in a high order adaptive method, namely the Mesh Optimization via Error Sampling and Synthesis (MOESS) algorithm. Adaptive DG and VMSD are compared for 3D RANS simulations. The adaptive VMSD method is shown to produce solutions with the same drag error as the adaptive DG method, with a factor of 3-10 fewer globally coupled degrees of freedom, and an associated factor of three or more reduction in computation time.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 161-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128310</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space and aerial architectures to expand global connectivity</title>
<link>https://hdl.handle.net/1721.1/128309</link>
<description>Space and aerial architectures to expand global connectivity
Del Portillo Barrios, Iñigo.
Currently, 46.4% of the world's population does not have access to the Internet. Bringing the more than 3.5 billion individuals still unconnected online is the primary goal for multiple international organizations, including the ITU and the UN Broadband Commission. In the last ten years, there has been steady growth in the number of Internet users (around 200 - 300 million per year), but this has been considered insufficient to meet the target of having 60% of the world's population be connected by the end of 2020 (as set in Resolution 200 - Connect 2020 Agenda for Global Telecommunication/ICT Development). Moreover, even more ambitious targets (75% of the world's population connected) have been proposed for 2025. Two important barriers that restrict connectivity are the lack of infrastructure and affordability.; To address these barriers, several novel concepts that involve space-borne and airborne platforms have been proposed to provide connectivity at a lower cost (improve affordability) to a wider reach of people (extend infrastructure). This thesis explores the tradespace of architectures for space and aerial communication network concepts to extend global connectivity. In particular, constellations of geostationary satellites, large constellations of MEO and LEO satellites, and high- and low-altitude aerial platforms are studied. For each of these concepts, I develop end-to-end system models that include the RF propagation, atmospheric channel, power- and mass-sizing, system dynamics, and costs. Different frequency bands are considered, including current state-of-the-art Ku- and Ka-bands and future scenarios with extremely high-frequency bands (V/Q, E, and optical).
The potential of each of these concepts is then analyzed from a techno-economic perspective.; Given the large scale of the problem (global connectivity), the different spatial and temporal scales on which each of the concepts operates, plus the large tradespace of potential architectures, evaluating the potential impact requires the development of large simulation models to compute realistic estimates for performance and cost, as well as to identify trade-offs among concepts. However, due to limited computing resources, an exhaustive evaluation of all design configurations is impractical and often not possible; consequently, the resources devoted to concept exploration need to be carefully allotted, balancing exploration and exploitation within the tradespace. To that end, this thesis presents a Bayesian optimization approach tailored for System Architecture problems, to explore tradespaces efficiently when there is a tightly-constrained budget for objective function evaluations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis. "February 2020."; Includes bibliographical references (pages 271-294).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128309</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous navigation of distributed spacecraft using intersatellite laser communications</title>
<link>https://hdl.handle.net/1721.1/128308</link>
<description>Autonomous navigation of distributed spacecraft using intersatellite laser communications
Dave, Pratik K.(Pratik Kamlesh)
Autonomous navigation refers to satellites performing on-board, real-time navigation without external input. As satellite systems evolve into more distributed architectures, autonomous navigation can help mitigate challenges in ground operations, such as determining and disseminating orbit solutions. Several autonomous navigation methods have been previously studied, using some combination of on-board sensors that can measure relative range or bearing to known bodies, such as horizon and star sensors (Hicks and Wiesel, 1992) or magnetometers and sun sensors (Psiaki, 1999), however these methods are typically limited to low Earth orbit (LEO) altitudes or other specific orbit cases. Another autonomous navigation method uses intersatellite data, or direct observations of the relative position vector from one satellite to another, to estimate the orbital positions of both spacecraft simultaneously.; The seminal study of the intersatellite method assumes the use of radio time-of-flight measurements of intersatellite range, and a visual tracking camera system for measuring the inertial bearing from one satellite to another (Markley, 1984). Due to the limited range constraints of passively illuminated visual tracking systems, many of the previous studies restrict the separation between satellites to less than 1,000 kilometers (e.g., Psiaki, 2011), or simply drop the use of measuring intersatellite bearing and rely solely on obtaining a large distribution of intersatellite range measurements for state estimation (e.g., Xu et al., 2014). These assumptions have limited the assessment of the performance capability of autonomous navigation using intersatellite measurements for more general mission applications.; In this thesis, we investigate the performance of using laser communication (lasercom) crosslinks in order to achieve all necessary intersatellite measurements for autonomous navigation. 
Lasercom systems are capable of measuring both range and bearing to a receiving terminal with greater precision than traditional methods, and can do so over larger separations between satellites. We develop a simulation framework to model the measurements of intersatellite range and bearing using lasercom crosslinks in distributed satellite systems, with consideration of varying orbital operating environments, constellation size and distribution, and network topologies. We implement two estimation algorithms: an extended Kalman filter (EKF) used with Monte Carlo sampling for performance analyses, and a Cramér-Rao lower-bound (CRLB) computation for uncertainty analyses.; We evaluate several case studies modeled off of existing and planned constellation missions in order to demonstrate the new capabilities of performing intersatellite navigation with lasercom links in both near-Earth and deep-space orbital applications. Performance targets are generated from the current state-of-the-art navigation capabilities demonstrated by Global Navigation Satellite Systems (GNSS) in Earth-orbit, and by radiometric tracking and orbit estimation using the Deep Space Network (DSN) in deep-space orbits. For Earth-orbiting applications, we simulate a relay satellite system in geosynchronous orbit (GEO) inspired by current optical communications missions in development by both ESA and NASA, and Walker constellations in LEO inspired by the upcoming mega-constellations for global broadband internet service, such as those proposed by SpaceX and Telesat.; In both case studies, we demonstrate improved navigation performance over the current state-of-the-art in GNSS receiver technologies by using intersatellite measurements from lasercom crosslinks.
Monte Carlo simulations show median total position errors better than 3 meters in LEO, 12 meters in GEO, and 45 meters in high-altitude or highly-eccentric orbits (HEO), showing promise as an alternative navigation method to GNSS in Earth-orbiting environments. We also simulate current and future Mars-orbiting missions as examples of deep-space applications. In one case study, we create an ad-hoc constellation comprised of low-altitude Mars exploration orbiters modeled off of current Mars-orbiting missions. In a second case study, we focus on a high-altitude constellation proposed for dedicated Earth-to-Mars networked communications.; Again, in both case studies, we demonstrate improved navigation performance over the current state-of-the-art in DSN radiometric orbit solutions by using intersatellite measurements from lasercom crosslinks. Monte Carlo simulations show stable median total position errors better than 25 meters for Mars-orbit, which demonstrates a notable improvement both spatially and temporally versus DSN orbit estimation, mitigating the large cost and time constraints associated with DSN tracking. These results demonstrate the promise of using lasercom intersatellite links for autonomous navigation, offering enhanced performance over current state-of-the-art capabilities, and a greater applicability to missions both near Earth and beyond.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Space Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 149-157).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128308</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>100 Gbps optical coherent modem for low earth orbit optical inter-satellite links</title>
<link>https://hdl.handle.net/1721.1/128307</link>
<description>100 Gbps optical coherent modem for low earth orbit optical inter-satellite links
Aniceto, Raichelle Joy.
Free space optical communication (FSOC) provides a viable and cost-effective solution for future satellite systems with advantages in bandwidth, unregulated frequencies, and reduced system mass, volume, and power consumption in comparison with radio frequency systems. Several FSOC systems have successfully demonstrated links between spacecraft and Earth ground stations as well as inter-satellite links. Commercial industry, including companies such as SpaceX and Telesat, has taken an interest in utilizing the benefits of FSOC for proposed LEO constellations and using optical inter-satellite links (OISLs) to reduce the need for expensive worldwide ground tracking networks. State-of-the-art FSOC space terminal data rate performance is 5.625 Gbps using coherent BPSK detection, achieved by the Tesat and DLR laser communication terminal (LCT) in 2008. The Tesat and DLR LCT demonstrated LEO to LEO OISLs over a link distance of 5100 km.; Within the past decade, the terrestrial communications industry's advances in optical coherent DSP ASICs and integrated fiber optic component packages have enabled high capacity optical coherent communications systems with data rates of 100 Gbps and greater. It is desirable to leverage the data rate performance and cost point of these technologies to develop a state-of-the-art optical coherent modem system for FSOC space applications. The goal of this work is to develop an optical coherent communications modem for LEO-to-LEO inter-satellite links with improvement in data rate of 10 times the current state of the art of 5.6 Gbps using commercial off-the-shelf components, such as optical coherent DSP ASICs, coherent transmitters, coherent receivers, and lasers, with minimal modifications as needed for space use.; This work focuses on developing an optical coherent communications modem for data rates up to 100 Gbps using commercial telecommunications industry components compatible with 100G wavelength division multiplexed (WDM) coherent systems.
We develop a process for selecting commercial optical coherent technologies that can meet performance requirements in a LEO space environment. We develop optical coherent modem hardware and assess the selected commercial optical coherent technologies for use in the space environment. We identify and develop cost-effective modifications based on radiation characterization, ensuring that we can achieve successful space operation and meet performance requirements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis. "February 2020."; Includes bibliographical references (pages 215-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128307</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling crystallization via interfacial engineering : patterning, fouling-inhibition, and nutrient recovery</title>
<link>https://hdl.handle.net/1721.1/128300</link>
<description>Controlling crystallization via interfacial engineering : patterning, fouling-inhibition, and nutrient recovery
McBride, Samantha Ann.
Crystallization is ubiquitous in natural and anthropogenic environments, and can be detrimental or beneficial. For example, crystallization from sea-spray deposits is a leading contributor to rusting and fouling of coastal structures. However, crystallization can also be used as a purification technique for producing a variety of important chemicals. In this thesis, control of crystallization at interfaces is explored for improving sustainability across a variety of applications including patterning, anti-fouling, and separation processes for nutrient recovery. Interfacial engineering is a natural starting point for controlling crystallization due to the propensity of many forms of crystals to form at phase boundaries. Control of crystallization on solid substrates is accomplished by modification of the surface morphology, length scale of surface features, surface chemistry, and surface energy. In this thesis I demonstrate that interfacial engineering can be used to prevent mineral fouling across salts and salt mixtures, to develop microparticles which promote recovery of nutrients from wastewater, and to design micro-scale water-soluble crystalline masks with applications for the fabrication of microdevices.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128300</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot learning with strong priors</title>
<link>https://hdl.handle.net/1721.1/128299</link>
<description>Robot learning with strong priors
Wang, Zi, Ph.D., Massachusetts Institute of Technology.
Embedding learning ability in robotic systems is one of the long sought-after objectives of artificial intelligence research. Despite the recent advancements in hardware, large-scale machine learning algorithms, and theoretical understanding of deep learning, it is still quite unrealistic to deploy an end-to-end learning agent in the wild, attempting to learn everything from scratch. Instead, we identify the importance of imposing strong prior knowledge on capable robotic systems and perform robot learning with strong priors. In this thesis, we exemplify the value of imposing strong priors in robot learning (or machine learning in general) via both practical experiments and theories with mild assumptions. Empirically, by proposing new algorithms and systems, we show that (active) model learning with strong priors on model structures makes it feasible to adopt advanced planners to solve complicated long-horizon robotic manipulation problems that were not possible before. On the other hand, we verify our theories through mathematical analyses of data efficiency for our active data acquisition strategies based on Bayesian optimization and systems combining learning and planning. The new approaches integrate structural prior knowledge with statistical machine learning methods to achieve state-of-the-art performance on complex long-horizon robot manipulation tasks.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis. Vita.; Includes bibliographical references (pages 211-224).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128299</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to keep a secret and share a public key (using polynomial commitments)</title>
<link>https://hdl.handle.net/1721.1/128298</link>
<description>How to keep a secret and share a public key (using polynomial commitments)
Tomescu Nicolescu, Ioan Alin.
Despite 40+ years of amazing progress, cryptography is constantly plagued by two simple problems: keeping secret keys secret and making public keys public. For example, public-key encryption is secure only if each user (1) keeps his secret key out of the hands of the adversary and (2) correctly distributes his public key to all other users. This thesis seeks to address these two fundamental problems. First, we introduce communication-efficient, fully-untrusted append-only logs, which can be used to correctly distribute public keys. Our constructions have logarithmic-sized proofs for the two key operations in append-only logs: looking up public keys and verifying the log remained append-only. In contrast, previous logs either have linear-sized proofs or need extra trust assumptions. Our logs can also be used to secure software distribution and, we hope, to increase transparency in any institution that wants to do so. Second, we speed up threshold cryptosystems, which protect secret keys by splitting them up across many users. We introduce threshold signatures, verifiable secret sharing and distributed key generation protocols that can scale to millions of users. Our protocols drastically reduce execution time, anywhere from 2x to 4500x, depending on the scale. For example, at large scales, we reduce time from tens of hours to tens of seconds. At the core of most of our contributions lie new techniques for computing evaluation proofs in constant-sized polynomial commitments. Specifically, we show how to decrease the time to compute n proofs for a degree-bound n polynomial from O(n²) to O(n log n), at the cost of increasing proof size from O(1) to O(log n). Our techniques could be of independent interest, as they give rise to other cryptographic schemes, such as Vector Commitments (VCs).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 155-171).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128298</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning through looking and listening</title>
<link>https://hdl.handle.net/1721.1/128297</link>
<description>Learning through looking and listening
Recasens Continente, Adriá.
In order to read emotions, understand actions or anticipate intentions, humans need efficient ways of gathering information about each other. In particular, gaze and speech are rich sources of information about other people's thoughts. This thesis investigates these two modalities. In the first part of the thesis, we describe our work on predicting human gaze. We introduce a series of methods to follow gaze across different modalities. First, we present GazeFollow, a dataset and model to predict the location of people's gaze in an image. We then extend this method to work on video, where the system predicts when and where in the video the attended object appears. Finally, we introduce Gaze360, a large-scale gaze-tracking dataset and method for robust 3D gaze direction estimation in unconstrained scenes.; In order to improve processing efficiency, we also propose a saliency-based sampling layer designed to improve performance in arbitrary tasks by efficiently zooming into the relevant parts of the input image. In the second part of the thesis, we present our work on learning spoken words from raw audio descriptions of images. We describe a multi-modal system capable of learning correspondences between segments of audio - nouns - and specific visual concepts. To investigate how to extend this system beyond learning nouns, we present a novel training procedure to learn abstract visual attributes (e.g., size, material or color) by using a generative model to generate the training images. Building upon recent findings that GAN representations can be manipulated to edit semantic concepts in the generated output, our method uses GAN-generated images to train the model using a triplet loss.; Finally, we present three extensions and applications derived from our work: a dataset to jointly model speech and gaze; a system for gaze-tracking for behavioral research in children; and gaze-following in the classroom.
Together, the methods presented in this thesis demonstrate the potential for human understanding through gaze and speech in images and videos.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 147-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128297</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated optical phased arrays for three-dimensional display applications</title>
<link>https://hdl.handle.net/1721.1/128296</link>
<description>Integrated optical phased arrays for three-dimensional display applications
Raval, Manan.
The compatibility of silicon photonic platforms with complementary metal-oxide-semiconductor (CMOS) fabrication processes has facilitated a surge in the development of silicon-based integrated optical phased arrays (OPAs) for light detection and ranging (LiDAR) and free-space communications. However, silicon is limited to operating at infrared wavelengths since its bandgap prevents visible light transmission. The development of integrated OPAs for arbitrary complex wavefront synthesis in the visible spectrum would enable the expansion of this technology into a multitude of new application spaces such as optical trapping, imaging through scattering media, underwater LiDAR, optogenetic stimulation, and three-dimensional (3D) displays. Silicon nitride, a CMOS-compatible material that is transparent in the visible spectrum, may be used as the waveguiding material in phased array systems designed for the above applications.; In this work, we develop large-scale visible light integrated OPA systems fabricated in a silicon-nitride-based platform for 3D display applications. We begin by presenting the first demonstrations of visible light integrated OPAs. Building on this, we demonstrate a chip-scale architecture for autostereoscopic image projection using a system of multiple integrated OPAs to reconstruct virtual light fields. Specifically, we generate a static virtual 3D image with horizontal parallax and a viewing angle of 5°. Next, we present an architecture for realizing a transparent near-eye direct-view augmented/mixed reality (AR/MR) display using a system of integrated OPAs to directly project holographic images onto the user's retina. 
This display architecture was developed to address the deficiencies in current AR/MR headsets with respect to brightness, field of view (FOV), and the vergence-accommodation conflict, which causes eye fatigue.; Here, we present a passive demonstration of the display as well as a number of key photonic components required to realize a system for 3D video.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 154-164).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128296</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Declarative assembly of web applications from predefined concepts</title>
<link>https://hdl.handle.net/1721.1/128295</link>
<description>Declarative assembly of web applications from predefined concepts
Perez De Rosso, Santiago (Santiago Nicolas)
This thesis presents a new approach to web application development, in which an application is constructed by configuring and composing concepts drawn from a catalog developed by experts. A concept is a self-contained, reusable unit of behavior that is motivated by a purpose defined in terms of the needs of an end-user. Each concept includes both client- and server-side functionality and exports a collection of components--graphical user interface elements, backed by application logic and database storage. To build a web application, the developer imports concepts from the catalog, tunes them to fit the needs of the application via configuration variables, and links concept components together to create pages. Components of different concepts may be executed independently or bound together declaratively with dataflows and synchronization. The instantiation, configuration, linking and binding of components is all expressed in a simple template language. The approach has been implemented in a platform called Déjà Vu. We outline and compare our approach to conventional approaches to web application development and we present results from a case study in which we used our platform to replicate a collection of applications previously built by students for a web programming course.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 181-186).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128295</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical symmetry enhanced neural networks</title>
<link>https://hdl.handle.net/1721.1/128294</link>
<description>Physical symmetry enhanced neural networks
Jing, Li, Ph. D., Massachusetts Institute of Technology.
Artificial Intelligence (AI), widely considered "the fourth industrial revolution", has shown its potential to fundamentally change our world. Today's AI techniques rely on neural networks. In this thesis, we propose several physical symmetry enhanced neural network models. We first developed unitary recurrent neural networks (RNNs) that solve the gradient vanishing and gradient explosion problems. We propose an efficient parametrization method that requires O(1) complexity per parameter. Our unitary RNN model has shown optimal long-term memory ability. Next, we combine the above model with a gated mechanism. This model outperforms popular recurrent neural networks like long short-term memory networks (LSTMs) and gated recurrent units (GRUs) in many sequential tasks. In the third part, we develop a convolutional neural network architecture that achieves logarithmic scale complexity using symmetry breaking concepts. We demonstrate that our model has superior performance on small image classification tasks. In the last part, we propose a general method to extend convolutional neural networks' inductive bias and embed other types of symmetries. We show that this method improves prediction performance on lens-distorted images.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, February, 2020; Cataloged from student-submitted PDF version of thesis; Includes bibliographical references (pages 91-99).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128294</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design methods for sensitive and comprehensive microbial surveillance</title>
<link>https://hdl.handle.net/1721.1/128293</link>
<description>Design methods for sensitive and comprehensive microbial surveillance
Metsky, Hayden C.
We are surrounded by a vast and dynamic microbial world. Effective surveillance tools can benefit medicine and public health, including infectious disease diagnostics, proactive pathogen detection and characterization, and microbiome studies. New genomic technologies are transforming microbial surveillance, but face challenges stemming from low concentrations in collected samples and extensive, ever-changing diversity. In this thesis, we first demonstrate a need for stronger surveillance through mapping the spread of Zika virus during the 2015-16 epidemic. We generate 110 Zika virus genomes from across the Americas, forming the largest and most diverse Zika virus dataset at the time. We perform a Bayesian phylogenetic analysis of Zika's spread and discover that it circulated undetected in multiple regions for many months. Two reasons are that Zika virus is present in samples at ultra-low abundance and that, during its rapid spread, it was an obscure pathogen.; Motivated by this, we develop computational approaches that enable sensitive, comprehensive surveillance. We present CATCH, an algorithm that enhances enrichment of highly diverse whole genomes for more sensitive sequencing. CATCH designs scalable capture probe sets that are comprehensive, to a well-defined extent, against known sequence diversity. We use CATCH to design probes targeting whole genomes of the 356 viral species known to infect humans, including their vast subspecies diversity. Applied to 30 patient and environmental samples, we show that these probes improve hypothesis-free detection of viral infections and considerably enhance genome assembly. Academic labs, research hospitals, and government public health institutes are using CATCH to help detect and characterize microbes. 
We also present ADAPT, a system for end-to-end sequence design of nucleic acid diagnostic assays.; We develop algorithms to comprehensively consider known diversity and enforce high taxon-specificity, even under relaxed criteria arising with RNA binding. Focusing on CRISPR-Cas13 detection, we perform high-throughput screening of crRNA-target pairs and develop a model, applied to our dataset, that predicts detection activity; using this, ADAPT's designs have high predicted activity. Along with CATCH, ADAPT advances microbial surveillance by leveraging and progressing with the extensive, ever-changing landscape of microbial genome diversity.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 169-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128293</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Balance control and locomotion planning for humanoid robots using nonlinear centroidal models</title>
<link>https://hdl.handle.net/1721.1/128291</link>
<description>Balance control and locomotion planning for humanoid robots using nonlinear centroidal models
Koolen, Frans Anton.
Balance control approaches for humanoid robots have traditionally relied on low-dimensional models for locomotion planning and reactive balance control. Results for the low-dimensional model are mapped to the full robot, and used as inputs to a whole-body controller. In particular, the linear inverted pendulum (LIP) has long been the de facto standard low-dimensional model used in balance control. The LIP has its limitations, however. For example, it requires that the robot's center of mass move on a plane and that the robot's contact environment be planar. This thesis presents control and planning approaches using nonlinear low-dimensional models, aimed at mitigating some of the limitations of the LIP. Specifically, the contributions are: 1) a closed-form controller and region of attraction analysis for a nonlinear variable-height inverted pendulum model, 2) a trajectory optimization approach for humanoid robot locomotion over moderately complex terrain based on mixed-integer nonlinear programming with a low-dimensional model, and 3) a quadratic-programming-based controller that uses the results from these low-dimensional models to control a simulation model of the Atlas humanoid robot.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 133-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128291</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some hardness escalation results in computational complexity theory</title>
<link>https://hdl.handle.net/1721.1/128290</link>
<description>Some hardness escalation results in computational complexity theory
Kamath, Pritish.
In this thesis, we prove new hardness escalation results in computational complexity theory; a phenomenon where hardness results against seemingly weak models of computation for any problem can be lifted, in a black box manner, to much stronger models of computation by considering a simple gadget composed version of the original problem. For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols. This allows us to prove new lower bounds for: -- Monotone Circuit Size : we get an exponential lower bound for an explicit monotone function computable by linear sized monotone span programs and also in (non-monotone) NC². -- Real Monotone Circuit Size : Our proof technique extends to real communication protocols, which yields similar lower bounds against real monotone circuits. -- Cutting Planes Length : we get an exponential lower bound for an explicit CNF contradiction that is refutable with logarithmic Nullstellensatz degree. Finally, we describe an intimate connection between computational models and communication complexity analogs of the sub-classes of TFNP, the class of all total search problems in NP. We show that the communication analog of PPA[subscript p] captures span programs over F[subscript p] for any prime p. This complements previously known results that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020; Cataloged from student-submitted PDF of thesis. "February 2020."; Includes bibliographical references (pages 92-105).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128290</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum networks : state transmission and network operation</title>
<link>https://hdl.handle.net/1721.1/128289</link>
<description>Quantum networks : state transmission and network operation
Dai, Wenhan.
Quantum information science is widely expected to drive the next technological revolution. As key ingredients of quantum information science, quantum networks enable various technologies such as secure communication, distributed quantum sensing, quantum cloud computing, and next-generation positioning, navigation, and timing. The main task of quantum networks is to enable quantum communication among different nodes in the network. This includes topics such as the transmission of quantum states involving multiple parties, the processing of quantum information at end nodes, and the distribution of entanglement among remote nodes. Since quantum communication has its own peculiar properties that have no classical counterparts, the protocols and strategies designed for classical communication networks are not well-suited for quantum ones. This calls for new concepts, paradigms, and methodologies tailored for quantum networks.; To that end, this thesis studies the design and operation of quantum networks, with a focus on the following three topics: state transmission, queuing delay, and remote entanglement distribution. The first part develops protocols to broadcast quantum states from a transmitter to N different receivers. The protocols exhibit resource tradeoffs between multiparty entanglement, broadcast classical bits (bcbits), and broadcast quantum bits (bqubits), where the latter two are new types of resources put forth in this thesis. We prove that to send 1 bqubit to N receivers using shared entanglement, O(logN) bcbits are both necessary and sufficient. We also show that the protocols can be implemented using poly(N) basic gates composed of single-qubit gates and CNOT gates. The second part introduces a tractable model for analyzing the queuing delay of quantum data, referred to as quantum queuing delay (QQD).; The model employs a dynamic programming formalism and accounts for practical aspects such as the finite memory size. 
Using this model, we develop a cognitive-memory-based policy for memory management and show that this policy can decrease the average queuing delay exponentially with respect to memory size. The third part offers a design of remote entanglement distribution (RED) protocols that maximize the entanglement distribution rate (EDR). We introduce the concept of enodes, representing the entangled quantum bit (qubit) pairs in the network. This concept enables us to design the optimal RED protocols based on the solutions of some linear programming problems. Moreover, we investigate RED in a homogeneous repeater chain, which is a building block for many quantum networks. In particular, we determine the maximum EDR for homogeneous repeater chains in a closed form. The contributions of this work provide guidelines for the design and implementation of quantum networks.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 147-155).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128289</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental test of the inverse compton effect hypothesis for the x-ray source in Scorpius</title>
<link>https://hdl.handle.net/1721.1/128192</link>
<description>Experimental test of the inverse compton effect hypothesis for the x-ray source in Scorpius
Overbeck, James W.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1964.; Includes bibliographical references (leaves 43-44).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128192</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Field studies of streamflow generation using natural and injected tracers on Bickford and Walker Branch watersheds</title>
<link>https://hdl.handle.net/1721.1/128165</link>
<description>Field studies of streamflow generation using natural and injected tracers on Bickford and Walker Branch watersheds
Genereux, David Paul.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1991.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128165</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Other litigation : a new measure of 10b-5 class action litigation risk</title>
<link>https://hdl.handle.net/1721.1/128097</link>
<description>Other litigation : a new measure of 10b-5 class action litigation risk
Powley, William A.
This thesis proposes a litigation risk measure based on other past federal contract and state-based litigation, firm characteristics, industry and federal judge ideology. This thesis finds that other litigation complements existing measures of litigation risk and is economically meaningful in predicting litigation occurrence and D&amp;O insurance premia. First, this thesis finds that firms with higher federal contract and state-based litigation verdicts and settlements are more likely to face securities class-action lawsuits in the future, the effects are economically significant, and the results hold with and without firm and year fixed effects. Second, this thesis finds that firms with higher federal contract-based litigation settlements are more likely to face higher D&amp;O insurance premia, the effects are economically significant, and the results hold with and without firm and year fixed effects.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 32-33).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128097</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in empirical finance</title>
<link>https://hdl.handle.net/1721.1/128096</link>
<description>Essays in empirical finance
Fourel, Valère (Valère Renaud Ernst)
Considering that monetary policy is multi-dimensional and cannot be solely reduced to changes in the short-term interest rate, Chapter 1 revisits the bank lending channel literature. Our approach consists of determining whether there are significant cross-sectional disparities in the way French banks that exhibit different bank characteristics respond to various types of monetary policy shocks. We first extract from changes in interest rates around ECB's monetary policy announcements four different types of monetary policy shocks. The Target factor affects mostly the very short end of the yield curve; the Timing factor, the 6-month interest rate; and the Forward Guidance factor, interest rates at the 5-year horizon. The Quantitative Easing factor essentially moves interest rates at longer maturities. We then combine these monetary policy shocks, which we first aggregate at the monthly frequency, with our sample of monthly data on French banks for the period 2007 to 2018.; We uncover three new facts: 1) a bank's size matters for monetary policy transmission when we consider a Forward Guidance shock; 2) liquid assets held by a bank can be a vector of the smooth transmission of monetary policy; 3) banks with a high share of deposits on their liability side tend to reduce their lending to non-financial corporations after an expansionary Timing or a Forward Guidance shock. Using a loan-level dataset, our results are robust when controlling for any firm-specific demand shock. In the second chapter, which is joint work with D. Rime, L. Sarno, M. Schmeling and A. 
Verdelhan, we build the largest dataset of high-frequency exchange rates so far: our sample covers the spot prices and order flows of 19 currency pairs over the last 15 years measured on the two main trading platforms at the 30-second frequency.; We uncover four new facts on intraday exchange rates: 1) The carry and dollar risk factors explain a large share of the intraday exchange rate variations; their explanatory power increases from 30-second to daily frequencies, while the explanatory power of order flows is more limited and decreases from 30-second to daily frequencies; 2) Dollar and carry betas are very persistent: their autocorrelation coefficients are around 0.5 at the daily horizon and 0.7 at the weekly horizon, thus offering a new key characteristic of exchange rates; 3) Dollar betas are correlated with bond yields; and 4) they are caused by additional trading. In the third chapter, exploiting a high frequency dealer-specific quote database of the FX market, we show that shocks to the CDS of a financial intermediary, a proxy for its financial wealth, lead her to quote larger bid-ask spreads when uncertainty about the underlying traded asset is high or when market competition is low.; We first establish that markets are dominated by a handful of dealers who are responsible for more than 90% of the quotes in the different FX spot markets. We then document that, when exchange rate volatility is high, a 1% increase in an intermediary's default probability translates into a 4 bps increase in the bid-ask spread that she quotes. When competition is low, a similar deterioration in financial wealth leads to a 6.4 bps increase in bid-ask spread size. We finally show that in the case of emerging country currencies, the average CDS spread of the financial intermediaries quoting in the FX market is a statistically significant predictor for the volatility of the idiosyncratic component of the currency risk premium.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-165).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128096</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Hubbard model physics and quantum Hall physics in graphene moire superlattices</title>
<link>https://hdl.handle.net/1721.1/128094</link>
<description>Bridging Hubbard model physics and quantum Hall physics in graphene moire superlattices
Zhang, Yahui, Ph. D., Massachusetts Institute of Technology.
This thesis is focused on the strongly correlated physics of graphene moiré superlattices formed in twisted bilayer graphene (TBG), twisted double bilayer graphene (TDBG) and ABC trilayer graphene aligned with hexagonal boron nitride (TLG-hBN). First, I will show that the physics of these systems can be divided into two categories: (1) The nearly flat bands have non-zero valley Chern number, which leads to "quantum Hall physics" including integer and fractional quantum anomalous Hall effect and composite Fermi liquid (CFL) physics. (2) The narrow bands have trivial band topology. In this case the essential physics is captured by a Hubbard-like lattice model similar to that of the high T[subscript c] cuprates. Both of these classes have already been realized in experiments. I will discuss how current and future experiments on these moiré materials can deepen our understanding of cuprate physics and quantum Hall physics. In addition, I will also propose several new phases in moiré systems, which have never been studied before. These include featureless and orthogonal pseudogap metals and quantum Hall spin liquids.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF of thesis.; Includes bibliographical references (pages 139-152).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128094</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genome organization by loop extrusion</title>
<link>https://hdl.handle.net/1721.1/128093</link>
<description>Genome organization by loop extrusion
Goloborodko, Anton.
All living cells propagate in the same way - they duplicate their chromosomes and divide into two daughter cells, each containing one copy of each chromosome. Because of the extreme length of chromosomes, their two copies have to be compacted in order to be successfully segregated from each other. The process of chromosome compaction is driven by protein complexes called Structural Maintenance of Chromosomes (SMC) complexes, present in all branches of life. Despite their importance, the mechanism by which SMC complexes compact chromosomes remains unsolved to this day. This thesis presents a detailed analysis of the recent "loop extrusion" hypothesis of SMC action. This hypothesis postulates that, upon binding to chromosomes, SMCs progressively bridge sites that are further away along the chromosome, thus extruding a loop. The collective action of loop extruding SMCs is hypothesized to compact a chromosome into an array of consecutive loops. In the first chapter, we present a brief history of the field of chromosomal biology. We describe the chain of discoveries that led to our current understanding of chromosomes and outline the context in which the loop extrusion hypothesis was proposed. In the second chapter, we present generalized computational and analytical models of multiple loop extruding enzymes acting on a chromosome. We show that loop extruding enzymes self-organize into an array of stable loops even when they are allowed to exchange with the solution. Our analysis demonstrates how the macroscopic characteristics of the loop array are determined by the microscopic properties of loop extruding enzymes and their abundance. In the third chapter, we use polymer simulations to model the changes in chromosome conformation induced by loop extrusion. We show that loop extrusion can robustly compact, segregate and disentangle chromosomes, producing individualized chromatids with a morphology similar to that observed in vivo. 
In the fourth chapter, we apply the loop extrusion hypothesis to interpret the experimental data on the process of cell division in budding yeast Saccharomyces cerevisiae. We analyze the data from chromosome conformation capture (Hi-C) experiments and use polymer modeling to show that mitotic yeast chromosomes are mildly compacted by loop extrusion. We also find that cohesin and condensin, two deeply conserved SMC complexes, play strikingly different roles in the yeast mitosis. While cohesin is responsible for compacting the bulk of chromosome arms, condensin has a more targeted role compacting the rDNA proximal regions and promoting resolution of peri-centromeric regions. Finally, in chapter five, we use the full arsenal of modern techniques - new methods of genetic manipulation of cells, Hi-C and polymer simulations - to dissect the architecture of mitotic chromosomes in vertebrates. We show that, in prophase, the interphase organization is rapidly lost in a condensin-dependent manner and chromosomes become compacted into arrays of consecutive loops. During prometaphase, these loops grow in size and become further split into series of smaller loops. The compacted chromosomes acquire a helical arrangement with consecutive loops emanating from a central spiral-staircase condensin scaffold. Acute depletion of condensin I or II shows that nested loops form by the differential action of the two condensins while condensin II is required for helical winding.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018; Cataloged from PDF of thesis.; Includes bibliographical references (pages 225-251).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128093</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of two particle correlation functions and photon-tagged jet fragmentation in PbPb and pp Collisions with CMS at the LHC</title>
<link>https://hdl.handle.net/1721.1/128092</link>
<description>Studies of two particle correlation functions and photon-tagged jet fragmentation in PbPb and pp Collisions with CMS at the LHC
Velicanu, Dragos(Dragos Alexandru)
This thesis presents studies of two-particle correlations in proton-proton (pp), proton-lead (pPb), and lead-lead (PbPb) collisions, as well as jet fragmentation functions for jets paired with an isolated prompt photon in pp and PbPb collisions, to better understand matter in a Quark Gluon Plasma (QGP) state. The correlation studies measure the precise hydrodynamic behavior of PbPb collisions through a Fourier analysis of charged particles emitted from the QGP and show unexpectedly strong collective behavior in high multiplicity pp and pPb collisions. The photon-tagged jet studies provide tight constraints for understanding the interactions between the jet and the Quark Gluon Plasma it traverses, both by measuring jet substructure properties via the fragmentation function and by selecting high energy photon events to obtain a sample of jets unbiased by the interactions being probed. These analyses are performed using data recorded by the CMS experiment at the LHC from PbPb, pPb, and pp collisions at a center of mass energy of 5.02 TeV per nucleon-nucleon pair, as well as 2.76 TeV PbPb collisions and 7 TeV pp collisions in the correlation analyses.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF of thesis.; Includes bibliographical references (pages 107-113).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128092</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and validation of a terrain adaptive prosthesis control system</title>
<link>https://hdl.handle.net/1721.1/128083</link>
<description>Development and validation of a terrain adaptive prosthesis control system
Stolyarov, Roman(Roman Mark)
Wearable lower limb robotic devices have great potential in addressing gait pathologies through assistive or rehabilitative means. In the case of amputation, powered prostheses can be used to recapitulate biological walking, improving mobility and diminishing amputation-associated comorbidity. In the case of intact limb pathologies such as weakness or paralysis, powered exoskeletons can be used for similar goals. A major challenge in developing these technologies lies in their control, whose aim is to improve gait dynamics across a variety of walking conditions. Perhaps the most significant determinant of gait dynamics is ground terrain: numerous studies have shown that walking on level ground, inclines, or stairs significantly affects leg dynamics. Additionally, it has been shown that abnormal or asymmetrical gait across any of these conditions causes comorbidities secondary to gait pathology, including back pain, increased fatigue, and in the case of amputation, osteoarthritis of joints in the unaffected limb. Motivated by the desire to normalize gait mechanics across a variety of conditions, the principal aim of this work is to develop an automatically terrain-adaptive control system for lower limb robotic devices, wherein the control system anticipates transitions in walking tasks independently of external devices and switches to corresponding control policies. In particular, we focus on development and validation of such a control system in a below-knee prosthesis. The final result of this work is a method to automatically measure and accurately predict terrain geometry in a lower limb robotic device as a person is walking, along with a terrain-adaptive tunable control model that can successfully improve gait dynamics across multiple walking conditions.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2020; Cataloged from PDF version of thesis. "February 2020."; Includes bibliographical references (pages 69-75).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128083</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning for pain assessment and management</title>
<link>https://hdl.handle.net/1721.1/128081</link>
<description>Machine learning for pain assessment and management
Lopez-Martinez, Daniel, Ph.D., Massachusetts Institute of Technology.
Pain is a subjective distressing experience associated with actual or potential tissue damage, with sensory, emotional, cognitive and social components. This work aims to develop automatic methods for quantifying pain intensity from physiological and behavioral metrics, and for providing real-time, clinically interpretable analgesic dosing recommendations personalized according to each patient's evolving pain and physiological condition. Historically, pain in humans has been measured using subjective self-report scales to determine presence and severity. However, these are problematic metrics for both diagnostic and research purposes. For example, self-report is impossible to obtain in various clinical populations, such as unconscious patients or patients with cognitive impairments. Further, comparisons between people reporting their pain are difficult to make with confidence, as these metrics are highly subjective, depend on previous history with pain and other cognitive and behavioral factors, and can vary over time. Therefore, while current assessment of pain largely relies on the self-report of an individual, the development of an objective, automatic detection/measure of pain may be useful in many research and clinical applications. Such approaches, if successful, may not only detect pain, but may provide for a more rational therapeutic intervention. Hence, the objective of this work was to evaluate the use of physiological and behavioral metrics as markers of pain, and to develop automatic methods for objectively quantifying pain. To do so, we focused on three sensing modalities: facial video, autonomic signals from wearable sensors, and functional near-infrared spectroscopy of the brain cortex. There is often great variability in how people perceive, experience, and physiologically and behaviorally express pain, hampering efforts to build a one-size-fits-all pain recognition system.
To address this, we proposed novel state-of-the-art machine learning methods for the personalization of the inference process. This approach results in models tailored specifically for individuals that still account for the broader population data. Finally, in this work, we explored the use of reinforcement learning to aid decision making in the intensive care setting by providing personalized pain management interventions.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 175-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128081</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanomaterial-mediated immune interactions for disease diagnosis and cancer immunotherapy</title>
<link>https://hdl.handle.net/1721.1/128080</link>
<description>Nanomaterial-mediated immune interactions for disease diagnosis and cancer immunotherapy
Buss, Colin G.
The body's natural defenses against disease, facilitated by a complex and highly-evolved immune system, eluded the scientific community's understanding for thousands of years following its first description. It wasn't until the mid to late 1900s that we were able to begin robustly describing the mechanisms through which innate and adaptive immune responses function, but numerous revolutionary discoveries over recent decades have since facilitated meaningful clinical advances impacting innumerable lives. From diagnostic techniques for the characterization of disease, to immunotherapies for its treatment, much of modern medicine can trace its roots to the study of immunology. Yet despite advances in immunological knowledge and its clinical applications, much remains to be understood, and many such applications have major limitations. Mechanisms by which to interface with the immune system have thus generated immense interest, and nanotechnologies have emerged as useful tools in pursuit of this goal. Decades of research in a variety of applications have facilitated our capability to exquisitely engineer nanoparticles to incorporate desirable characteristics, allowing us to utilize these unique materials for the study and modulation of immunological activity. The work in this thesis aims to contribute understanding of the role of immunity in disease by using nanoparticle technologies that interact with the immune system to diagnose, monitor, and treat disease. First, we engineer a set of nanoparticles responsive to infection-associated proteolysis driven by the innate immune response to a pathogen as well as by the pathogen itself. We demonstrate that detection of such proteolytic activity allows for the diagnosis of disease and monitoring of its progression as an immune response mounts, and following therapeutic treatment. Then, we design a separate nanoparticle system to deliver immunostimulatory oligonucleotides for cancer immunotherapy.
This technology greatly enhances the activity of a model immunostimulant, suppressing tumor progression and powerfully potentiating immune checkpoint inhibitor antibody treatment, all while greatly reducing the dose of immunostimulant required to have such effects. Together, this work elucidates mechanisms by which nanomaterials can be utilized to interface with the immune system for the detection and modulation of its activity, thereby achieving sensitive and specific disease diagnosis and powerful tumor suppression.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-129).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128080</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on social insurance program design</title>
<link>https://hdl.handle.net/1721.1/128079</link>
<description>Essays on social insurance program design
Gray, Colin(Colin Travis)
This thesis studies the causal impacts of social insurance program design on household behavior and program-relevant outcomes. Each paper considers a major design feature of a large social insurance program - work requirements, periodic recertifications, or eligibility appeals - and asks how program-relevant outcomes would differ if this specific feature were modified. The first chapter - co-authored with Adam Leive, Elena Prager, Mary Zaki, and Kelsey Pukelis - studies the impacts of work requirements in the Supplemental Nutrition Assistance Program (SNAP). We use administrative data from Virginia to estimate the effects of work requirements on SNAP participation, beneficiary composition, and labor supply. Using discontinuities in age that determine whether a beneficiary is subject to work requirements, we find that the policy dramatically reduces overall SNAP participation and disproportionately screens out long-term SNAP beneficiaries. While we fail to detect statistically significant impacts of work requirements on average employment or earnings, we find evidence that work requirements cause a portion of the wage distribution to shift right in the vicinity of the minimum work requirements threshold. In the second chapter, I study retention in the SNAP program. This paper uses administrative data from SNAP across seven states to establish three facts. First, retention in SNAP is low, with approximately one-half of entering cases leaving in the first year. Second, qualitative evidence and quantitative simulations suggest that approximately half of those who exit in the first year remain eligible. Third, using the staggered roll-out of an online case management tool in Michigan, I find that this simplification meaningfully reduced the rate of long-term exit at key verification dates. These facts suggest that eligible retention is very incomplete, and that ongoing simplification efforts increase retention among eligible beneficiaries.
In the third chapter, I study the role of adding or subtracting a stage of appeal in determining eligibility for the Social Security Disability Insurance (SSDI) program. On one hand, fewer stages of appeal mean fewer opportunities to demonstrate eligibility, which may decrease overall allowances, administrative costs, and processing times. On the other hand, if applicants and adjudicators anticipate future appeals and infer information from past appeals, these effects may be mitigated or reversed. After showing the theoretical determinants of these effects, I study a 1999 reform to the Social Security disability adjudication process in which ten states eliminated an intermediate appeals stage. In line with the latter model, I find evidence that eliminating this appeals stage increased allowances onto the program, and had muted effects on processing times and administrative costs due to specific dynamic responses.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 149-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128079</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finescale abyssal turbulence : sources and modeling</title>
<link>https://hdl.handle.net/1721.1/128078</link>
<description>Finescale abyssal turbulence : sources and modeling
Kaiser, Bryan Edward.
A detailed understanding of the intensity and three-dimensional spatial distribution of diabatic abyssal turbulence is germane to understanding the abyssal branch of the global overturning circulation. This thesis addresses the issue through 1) an investigation of the dynamics of an abyssal boundary layer and 2) the construction of a probabilistic finescale parameterization using mixture density networks (MDNs). A boundary layer, formed by the interaction of tidally heaving isopycnals with viscous/adiabatic boundary conditions, is investigated through direct numerical simulations (DNS) and Floquet analysis. Turbulence is sustained throughout the tidal period in the DNS on extra-critical slopes characterized by small slope Burger numbers, leading to the formation of turbulent stratified Stokes-Ekman layers. Floquet analysis suggests that the boundary layers are unstable to disturbances to the vorticity component aligned with the across-isobath tidal velocity on extra-critical slopes. MDNs, trained on microstructure observations, are used to construct probabilistic finescale parameterizations dependent on the finescale vertical kinetic energy (VKE), on N² f⁻², and on both variables. The MDN model predictions are as accurate as conventional parameterizations, but also predict the underlying probability density function of the dissipation rate as a function of the dependent parameters.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 157-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128078</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New insights into the marine oxygen cycle from manganese oxide minerals and reactive oxygen species</title>
<link>https://hdl.handle.net/1721.1/128077</link>
<description>New insights into the marine oxygen cycle from manganese oxide minerals and reactive oxygen species
Sutherland, Kevin Michael.
The redox cycling of oxygen between O₂, water, and intermediate redox states including hydrogen peroxide and superoxide has profound impact on the availability and distribution of dissolved O₂, the habitability of the marine biosphere, and cellular metabolic and physiological reactions that utilize O₂. The sum total of processes that produce, consume, and exchange atoms with O₂ in the atmosphere, oceans, and subsurface leave their isotopic fingerprints on the abundance of the three stable isotopes of O₂ in the environment. In this thesis, I explore two aspects of the oxygen cycle in the past and present. First, I investigate the ability of manganese (Mn) oxide minerals to capture and retain the oxygen isotopic signature of dissolved O₂ during the oxidation of aqueous Mn(II) to Mn-oxide minerals. I determine that approximately half of the oxygen atoms in Mn(III,IV) oxides are directly incorporated from dissolved oxygen, and use isotope labeling techniques to further constrain how the dissolved oxygen isotope signature may be determined from that of Mn oxides. I perform an in-depth characterization of a ferromanganese crust from the central Pacific and, using triple oxygen isotope measurements, demonstrate that Mn oxides in ferromanganese crusts from around the world retain signatures of dissolved oxygen for at least 30 million years. I next turn to a previously unconsidered aspect of the global oxygen cycle: dark, extracellular superoxide production by marine microbes. I measure extracellular superoxide production rates by some of the ocean's most abundant organisms. I use these rates along with previous measurements to estimate that extracellular superoxide production yields a net sink of 5-19% of marine dissolved oxygen. Ultimately, the degree to which superoxide production is a sink of oxygen lies in the fate of its primary decay product, hydrogen peroxide.
I determine the range of oxidative and reductive decay of hydrogen peroxide across a range of environmental conditions in a meromictic pond, thus validating several assumptions from our global estimate. Altogether, this thesis illuminates a path toward investigating the oxygen cycle on million-year timescales in Earth's recent past and demonstrates the importance of microbial superoxide production in the biogeochemical cycling of O₂.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128077</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein regulation in Trichodesmium and other marine bacteria : observational and interpretive biomarkers of biogeochemical processes</title>
<link>https://hdl.handle.net/1721.1/128076</link>
<description>Protein regulation in Trichodesmium and other marine bacteria : observational and interpretive biomarkers of biogeochemical processes
Held, Noelle Adriana.
Marine microbes play key roles in global biogeochemistry by mediating chemical transformations and linking nutrient cycles to one another. A major goal in oceanography is to predict the activity of marine microbes across disparate ocean ecosystems. Towards this end, molecular biomarkers are important tools in chemical oceanography because they allow for both the observation and interpretation of microbial behavior. In this thesis, I use molecular biomarkers to develop a holistic, systems biology approach to the study of marine microbes. I begin by identifying unique patterns in the biochemical sensory systems of marine bacteria and suggest that these represent a specific adaptation to the marine environment. Building from this, I focus on the prevalent marine nitrogen fixer Trichodesmium, whose activity affects global nitrogen, carbon, phosphorus, and trace metal cycles. A metaproteomic survey of Trichodesmium populations identified simultaneous iron and phosphate co-stress throughout the tropical and subtropical oceans, demonstrating that this is caused by the biophysical limits of membrane space and nutrient diffusion. Tackling the problem at a smaller scale, I investigated the metaproteomes of individual Trichodesmium colonies captured from a single field site, and identified significant variability related to iron acquisition from mineral particles. Next, I investigated diel proteomes of cultured Trichodesmium erythraeum sp. IMS101 to highlight its physiological complexity and understand how and why nitrogen fixation occurs in the day, despite the incompatibility of the nitrogenase enzyme with oxygen produced in photosynthesis. This thesis develops a fundamental understanding of how Trichodesmium and other organisms affect, and are affected by, their surroundings. It indicates that a reductionist approach in which environmental drivers are considered independently may not capture the full complexity of microbe-chemistry interactions.
Future work can focus on benchmarking and calibration of the protein biomarkers identified here, as well as continued connection of systems biology frameworks to the study of ocean chemistry.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from PDF of thesis. "February 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128076</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>U-Th dating of lacustrine carbonates</title>
<link>https://hdl.handle.net/1721.1/128075</link>
<description>U-Th dating of lacustrine carbonates
Chen, Christine Y.(Christine Yifeng)
Carbonates are prevalent in many modern and ancient lacustrine settings, but reconstructions of past lake levels or environments from such materials have been hindered by poor chronology. Uranium-thorium (U-Th) dating has the potential to fill a gap in current geochronological tools for such archives, but past attempts have been confounded by poor understanding of the complex makeup of lacustrine carbonates, leading to misguided conclusions on both the utility of certain geochronological tools as well as the age of these deposits. This thesis showcases strategies for the successful application of U-Th geochronology to two types of lacustrine carbonates: lake bottom sediments and tufa deposits. Chapter 2 presents a systematic approach to U-Th dating carbonate-rich lake sediments using the ICDP sediment core from Lake Junín, Peru. Chapters 3-5 seek to demonstrate the descriptive power of combining precise U-Th dates on tufas and other carbonates with geologic observations of their depositional context at all scales, from the outcrop to the microscale. Here, the tufas originate from a transect of closed-basin lakes in the central Andes of northern Chile. With improved sample selection and leveraging of the incontrovertible constraints of stratigraphy and coevality, we are able to test the validity of U-Th data. Combining quality-controlled geochronological constraints with careful characterization of different carbonate facies can yield new insight on the character of lake level changes. These case studies offer frameworks for interpreting scattered geochronologic data of any size or system.
By embracing the noise in our data, we now have a richer understanding of the controls on uranium in these deposits. Of all the lessons learned, we hold the following as most important: for the determination of the age of lacustrine carbonates, geologic context--in the form of sedimentological observations, additional geochemical data, and paleoecological descriptions--is of equal importance to the numerical accuracy and precision of geochronological measurements.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from PDF of thesis. Vita.; Includes bibliographical references (pages 181-211).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128075</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-cell methods for profiling tumor &amp; microenvironment responses to therapeutic challenges</title>
<link>https://hdl.handle.net/1721.1/128072</link>
<description>Single-cell methods for profiling tumor &amp; microenvironment responses to therapeutic challenges
Prakadan, Sanjay Mathews.
Heterogeneity among cells affects function and dysfunction across many complex biological systems. This heterogeneity is particularly important in cancer biology, where variation in the cells composing tumors and their surroundings can affect a patient's response to treatment and subsequent survival. While current methods, such as bulk RNA-Sequencing, are incredibly powerful, they typically measure average phenomena, mischaracterizing the distribution of behaviors within a system. Single-cell technologies - single-cell RNA Sequencing in particular - have been foundational in elucidating cellular heterogeneity from first principles, but there are limitations to their application for studying cancer and its response to treatment. Here, we detail efforts to address current needs in profiling treatment responses of tumors and their microenvironments at single-cell resolution. Specifically, we characterize the underlying cellular diversity of tumor microenvironments, investigate the effect of drug treatment in specific cellular compartments, identify proxies of response in accessible cellular reservoirs, and investigate orthogonal cellular readouts of response. We first apply single-cell RNA Sequencing to study heterogeneity in metastatic melanoma, detailing heterogeneity and potential sources of resistance in cancer cells of profiled patients. Next, we study the effect of drug treatment in leptomeningeal carcinomatosis (LMD), extending previous strategies to utilize pre- and post-treatment patient sampling. We demonstrate the effect of immunotherapy in this microenvironment, and use longitudinal data from specific patients to describe the evolution of cancer cell response to treatment. We next expand liquid biopsy profiling to other compartments, specifically circulating tumor cells (CTCs) in blood. We describe the development of a microfluidic device that captures murine CTCs with minimal sampling.
We perform single-CTC RNA-Sequencing to study their response to treatment and relationship to their primary tumors. Finally, we develop a device that simultaneously measures the mass, growth rate and transcriptome of single cells, and use it to investigate the transcriptional activity of cancer cells that continue to grow after therapeutic challenge. Together, this body of work represents contributions towards extending single-cell profiling to understand how cells in naturally occurring and model cancer microenvironments respond to drug treatment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128072</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reductive transformations of nitroarenes catalyzed by P(III)/P(V)=O redox cycling</title>
<link>https://hdl.handle.net/1721.1/128071</link>
<description>Reductive transformations of nitroarenes catalyzed by P(III)/P(V)=O redox cycling
Nykaza, Trevor V.(Trevor Vincent)
Nitroaromatics are widely available synthetic building blocks that present a strategic opportunity to serve as direct precursors to nitrogen-containing molecules of increasing complexity and worth through reductive/deoxygenative methods. Trivalent phosphorus compounds are valuable stoichiometric reagents for a range of reductive O-atom transfer reactions involving the conversion of R₃P[superscript III] to R₃P[superscript V]=O (including nitroarene deoxygenation), but can suffer from instability, pyrophoricity, and difficulty of removal during purification for both the phosphine and the generated phosphine oxide. Having the ability to start with a bench-stable phosphine oxide--which most often is regarded as a waste by-product--and repeatedly generate an active phosphine species in situ for catalytic reaction chemistry is a motivating concept with potentially practical benefits. With the incorporation of a hydrosilane reductant, it is demonstrated that a small-ring cyclic phosphine oxide can be quickly reduced in situ to catalyze the intramolecular cyclization of o-functionalized nitrobenzene derivatives to produce nitrogen-containing heterocycles (2H-indazoles, 2H-benzotriazoles, carbazoles, indoles, and benzimidazoles), as well as the intermolecular C-N cross coupling of nitroarenes with boronic acids through exhaustive nitro deoxygenation via P[superscript III]/P[superscript V]=O catalysis. The work herein not only describes the discovery of new organocatalytic methods founded on the use of a designer, small-ring phosphine oxide (pre)catalyst (1,2,2,3,4,4-hexamethylphosphetane 1-oxide) for the reductive transformations of nitroarenes, but also details investigations into the reaction mechanism for both the reductive cyclization and C-N coupling reactions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis. "February 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128071</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Galactofuranose in mycobacteria and nematodes</title>
<link>https://hdl.handle.net/1721.1/128070</link>
<description>Galactofuranose in mycobacteria and nematodes
Justen, Alexander Mark.
All cells are covered in a coat of carbohydrates. These sugars participate in many important processes and are often essential for viability. Polysaccharides assembled by both prokaryotic and eukaryotic pathogens contain monosaccharide building blocks not used by mammals. For example, galactofuranose (Galf), the 5-membered ring conformation of galactose, is distributed broadly across many types of bacteria, lower eukaryotes, and invertebrates. However, mammals do not use Galf in their glycans. Since many human pathogens require Galf for viability, enzymes necessary for Galf biosynthesis and incorporation into polysaccharides represent worthy drug targets. In this thesis, I examine incorporation of Galf by glycosyltransferases and the importance of this sugar to mycobacterial and nematode physiology. In Chapter 1, I review cell envelope assembly for members of the Corynebacterineae suborder. Many organisms within this suborder represent major public health threats, and all require the presence of the galactan, a polysaccharide composed primarily of Galf residues. In Chapter 2, I evaluate the consequences of Galf inhibition within the free-living nematode C. elegans. Results from these experiments validate Galf biosynthesis as a worthy drug target for parasitic nematodes. In Chapter 3, I report the biochemical characterization of the glycosyltransferase GlfT1. This enzyme initiates polymerization of the galactan, and we find that GlfT1 displays remarkable substrate selectivity. We propose that Nature has evolved two galactofuranosyl transferases so that GlfT1 can effectively discriminate between substrates that will later be elongated by the promiscuous enzyme, GlfT2. In Chapter 4, I explore structure-function relationships between galactan chain length and mycobacterial physiology. Results from these experiments demonstrate that galactan chain length is an important determinant of cell envelope mechanical integrity and periplasm size. 
In Chapter 5, I purify and test the activity of uncharacterized GlfT2 orthologs. These proteins were thought to install unique linkages that are not created by mycobacterial GlfT2s. However, our results demonstrate that these proteins also create the same linkages as previously characterized enzymes, implying that GlfT2 regioselectivity is conserved.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis. "February 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128070</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymeric antigens as targeted probes of immunity</title>
<link>https://hdl.handle.net/1721.1/128069</link>
<description>Polymeric antigens as targeted probes of immunity
Jarvis, Cassie Marie.
Multivalent carbohydrate interactions play a critical role in many immunological processes, including pathogen recognition and immune activation. Synthetic polymers can mimic multivalent glycans and probe different facets of immunity to ultimately inform the design of effective vaccines. In Chapter 1, I review how antigen physical properties and lectin signaling can direct dendritic cell (DC)-mediated immune responses. The DC lectin DC-SIGN is involved in both immune activation and evasion. In Chapter 2, I generated glycopolymers bearing a multivalent display of a DC-SIGN-targeting aryl mannoside ligand to investigate how antigen features influence DC-SIGN-mediated responses. Specifically, I found that antigen size alters trafficking through DC-SIGN. Large polymer aggregates were trafficked to the same subcellular DC reservoirs that harbor HIV, whereas small soluble polymers were routed to endosomes. In light of these findings, in Chapter 3 we designed a nanoparticle vaccine platform using the same aryl mannoside ligand to efficiently target antigen to DC-SIGN for endosomal routing and antigen presentation. Functionalized bacteriophage Q[beta] virus-like particles elicited DC-mediated proinflammatory Th1-type immune responses, which are effective against intracellular pathogens and tumors. As many vaccines function by generating neutralizing antibodies, we used polymeric antigens to interrogate B cell activation. In Chapter 4, I used ROMP polymers to target B cells and systematically evaluated key antigen features that promote B cell activation and antibody production. The most robust responses were induced by polymers with a high valency of B and T cell epitopes where the T cell epitope is readily liberated upon endosomal processing.
Optimal polymer designs stimulated more robust B cell activation than a comparable protein conjugate. Overall, my thesis work identified numerous antigen parameters that can be tuned to direct and optimize DC and B cell activation for the design of effective targeted synthetic vaccines.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis. "February 2020."; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128069</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-toxicity, earth-abundant nanomaterials for photoluminescence or magnetic resonance</title>
<link>https://hdl.handle.net/1721.1/128068</link>
<description>Low-toxicity, earth-abundant nanomaterials for photoluminescence or magnetic resonance
Hansen, Eric Calvin.
Many recently developed molecular, nanoscale, or macroscale materials intended for energy and medical science applications are composed of toxic and/or rare elements, and are therefore unlikely to be commercially translated. In response, this thesis explores the design, synthesis, characterization, and application of magnetic resonance imaging (MRI) contrast agents (CAs) and photoluminescent nanocrystals (NCs) based on Earth-abundant, non-toxic elements. For instance, many colloidal semiconductor NCs show bright, tunable photoluminescence (PL) useful for displays and photovoltaics, but often contain highly toxic cadmium (Cd) and/or lead (Pb). Analogously, clinically available gadolinium-based (Gd-based) MRI CAs have been found to accumulate in the brain and other organs, even in healthy patients.[1, 2, 3] Although the toxicity of retained Gd-based CAs is not fully understood, a CA based on an endogenous metal such as iron (Fe³⁺) is a safer option. This thesis is divided into two themes: MRI CAs and photoluminescent NCs. First, we explore a nanoparticle-based (NP-based) MRI CA and its in vivo efficacy. Next, small-molecule iron-containing complexes based on iron chelation therapy drugs are described. Changing direction, chemical study and optimization of indium-based (In-based) ternary NCs is presented. Finally, the synthesis of aluminum-containing (Al-containing) defective NCs (DNCs) and the respective photophysical processes are reported. The results presented here provide a starting point for the realization of translatable nanomaterials for light downconversion or MRI contrast.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 111-119).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128068</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein-ligand binding by solid-state NMR : cholesterol interactions in membranes and with the Influenza A M2 protein</title>
<link>https://hdl.handle.net/1721.1/128067</link>
<description>Protein-ligand binding by solid-state NMR : cholesterol interactions in membranes and with the Influenza A M2 protein
Elkins, Matthew Ryan.
Biological membranes and their components have significant influence over normal cell functions and disease pathologies. Solid-state nuclear magnetic resonance (SSNMR) spectroscopy has historically been used to study membranes, which are constantly changing, heterogeneous systems. SSNMR can obtain atomic- and molecular-level information on molecular structure and dynamics within lipid bilayers, making it the ideal choice for studying the interactions at the heart of membrane and membrane protein functions. We focus on two membrane denizens: the influenza A M2 protein and the animal sterol cholesterol. M2 resides in the viral envelope and plays a key role in enabling viral entry to and release from the cell. M2-mediated viral release is a cholesterol-dependent process hypothesized to involve direct binding of cholesterol to a C-terminal amphipathic helix. Using ²H, ¹³C, and ¹⁹F-detected SSNMR experiments for measuring orientation and intermolecular contacts, we determined that cholesterol binds to the transmembrane helix of M2 rather than the amphipathic helix. Even in membranes with ~45 mol% cholesterol, similar to that of the viral envelope, M2 binds only ~2 cholesterol molecules per tetramer. This behavior is consistent with M2's known localization to the edge of budding viruses and suggests that cholesterol binding is a mechanism to recruit and maintain sufficient M2 concentrations for membrane curvature induction and scission. Using ¹³C and ¹⁹F experiments, we show that cholesterol forms oligomers both within the same bilayer leaflet and across the membrane. Even at fairly low cholesterol concentrations, cholesterol dimers are the most prevalent species; at high cholesterol concentrations, cholesterol can form clusters of four or more molecules. Preferential interactions between faces of cholesterol suggest that cholesterol-protein interactions are more likely to occur through the methyl-rich cholesterol face. 
The self-association properties of cholesterol may facilitate membrane protein association and oligomerization. We also used ¹³C and ¹⁵N NMR to address the structural cause of the enhanced pathogenicity of A[beta]₄₀ fibrils of the familial Arctic mutant compared to wild-type fibrils. Not only were Arctic fibrils intrinsically polymorphic, but one of their major forms closely resembled wild-type A[beta]₄₂, which is known to form fibrils more rapidly and cause earlier-onset Alzheimer's disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128067</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopy and dynamics of high orbital angular momentum Rydberg states</title>
<link>https://hdl.handle.net/1721.1/128066</link>
<description>Spectroscopy and dynamics of high orbital angular momentum Rydberg states
Barnum, Timothy J.(Timothy James)
Rydberg states of molecules with high orbital angular momentum (l &gt;/~ 3) are a unique class of electronic states. These high-l Rydberg states escape the rapid non-radiative decay by predissociation that is typical of the intensively studied low-l Rydberg states. Access to high-l Rydberg states is challenging due to the [delta]l = ±1 transition propensity rule in combination with the short lifetimes of the optically accessible, low-l Rydberg states. To address these dual challenges, we implement optical-millimeter-wave stimulated Raman adiabatic passage (optical-mmW STIRAP), which enables efficient population transfer from a low-lying electronic state to a high-l Rydberg state without directly populating a lossy, low-l Rydberg state. Our demonstration of optical-mmW STIRAP on an atomic system includes examination of the experimental and theoretical details of every step of this coherent process and demonstrates its promise for molecular applications. We explore the physics of the Rydberg electron &lt;-&gt; ion-core system through investigation of the spectroscopy and dynamics of high-l Rydberg states of NO. We populate ng Rydberg states of NO by a three-color triple-resonance excitation scheme and probe Rydberg-Rydberg transitions by chirped-pulse millimeter-wave (CPmmW) spectroscopy. The precision of the experimental data obtained and the breadth of the state space examined by CPmmW spectroscopy provide challenges to the existing theory of the structure of high-l Rydberg states. We apply a long-range electrostatic model to disentangle and describe the physical mechanisms that contribute to the autoionization dynamics of NO Rydberg states. 
Our model accounts for the decay rates of vibrationally excited ng Rydberg states. We explain the previously measured NO⁺ ion rotational state population distributions produced by autoionization of NO nf states and propose methods to generate single quantum state-selected NO⁺ ions by selective population of specific ng Rydberg states.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 313-326).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128066</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Immune modulation by synthetic multivalent antigens</title>
<link>https://hdl.handle.net/1721.1/128065</link>
<description>Immune modulation by synthetic multivalent antigens
Alam, Mohammad Murshid.
Modulating immunity by delivering antigens that target antigen-presenting cells (APCs) is a promising approach for vaccine design. Cell surface receptors of APCs capture antigen for processing and presentation and induce signaling for immune activation. The antigen's structural properties, including size, valency, and conjugation strategy, can also play key roles in immune modulation. We therefore targeted APCs such as dendritic cells (DCs) and B cells to shape cellular and humoral immunity, respectively, using systematically designed synthetic multivalent antigens. DC-SIGN is a C-type lectin receptor expressed on DCs that recognizes highly mannosylated glycans. It facilitates pathogen recognition and modulates immune responses. Herein, we engineered a bacteriophage Q[beta] virus-like particle (VLP) with a multivalent display of mannoside ligands and found that high-density display of a phenylmannoside enhances VLP uptake by DCs and promotes efficient trafficking to endosomes. The particle also induces proinflammatory cytokine expression and generates a Th1-type immune response in vivo, highlighting its utility as a vaccine vehicle to induce cellular immunity. We also investigated the effects that physical properties of antigens have on the fate of DC-SIGN-mediated internalization. We generated soluble and particulate glycopolymers displaying multiple copies of phenylmannoside and showed that particulate antigens traffic to the non-endosomal, surface-accessible compartments where HIV-1 traffics. These particulate antigens also elicited responses associated with HIV-induced DC-SIGN signaling, including expression of cytokines and activation of Raf-1. These results underscore the significance of antigen structure in developing synthetic vaccines. 
Protective immunity toward extracellular pathogens is mediated by an effective antibody response. To determine the effects of structural features of the epitope on antibody responses in vivo, we used synthetic polymers functionalized with defined B- and T-cell epitopes. The T-cell epitope was conjugated via a protease-sensitive linker to facilitate antigen presentation. This design induced a variety of immune responses in mice, including robust IgG antibody, antibody-secreting cell, and helper T-cell responses. We found that such responses are stronger for polymers that display a high copy number of T-cell epitopes compared with polymers that display a low copy number. These findings provide insight into the key criteria for conjugate vaccine design against weak immunogens such as carbohydrates.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128065</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights into gene regulation by genome structure, phase separation and developmental signaling</title>
<link>https://hdl.handle.net/1721.1/128064</link>
<description>Insights into gene regulation by genome structure, phase separation and developmental signaling
Zamudio, Alicia V.(Alicia Viridiana Zamudio Monters de Oca)
Proper regulation of gene expression is essential to the developmental processes that give rise to hundreds of different cell types with unique cellular identities. Regulation of protein-coding and long non-coding RNA genes by RNA polymerase II is carried out by the coordinated action of transcription factors and cofactors. Transcription factors can be cell-type specific and bind cell-type specific gene regulatory regions called enhancers, which can be located far upstream or downstream from the gene they activate. The enhancer-bound factors can loop to the promoters of cell-type specific genes to enhance the levels of transcription of these genes, and studies described in this thesis have provided new insights into the factors that contribute to this looping process (Weintraub et al., 2017). Recent studies have revealed that super-enhancers, which contribute to regulation of genes with prominent roles in cell identity, form phase-separated condensates that compartmentalize and concentrate the transcription apparatus at these genes. This insight led us to test the idea that signaling factors, which bring information regarding the developmental environment of cells to the transcription apparatus, might preferentially interact with super-enhancers through condensate interactions that were not considered in previous studies of signaling. Our studies confirmed that super-enhancer condensates do indeed facilitate the preferential localization of signaling factors to genes with prominent roles in cell identity (Zamudio et al., 2019). Thus, the studies described in this thesis provide new insights into gene regulation by genome structure, phase separation and developmental signaling.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128064</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viral visions : art, activism, and epidemiology in the global AIDS pandemic</title>
<link>https://hdl.handle.net/1721.1/128063</link>
<description>Viral visions : art, activism, and epidemiology in the global AIDS pandemic
Davidow, Jackson
            (Jackson Struthers),
            1990-
Most histories of HIV/AIDS, art, and cultural activism pivot around New York and are confined to the American context. Instead, this dissertation maps out a more expansive transnational Anglophone network of individuals, projects, and coalitions that conceived of the virus as a global problem during the 1980s and 1990s. Methodologically combining archival research with oral history interviews, this study proposes and models an epidemiological approach to art history that tracks and theorizes significant patterns of viral propagation, activist response, and visual culture-making across groups in Canada, the United Kingdom, South Africa, and the United States. Each chapter focuses on artists, activists, and critics--many of whom were queer, women, and people of color--as they formed communities in which the virus generated local, national, and global discourses and practices of cultural activism. Structured around four historical case studies in and across Toronto, London, Cape Town, Johannesburg, and Boston, this dissertation encompasses a diverse cultural archive cutting across media and aesthetic forms: visual artworks, films, exhibitions, texts, protests, workshops, campaigns, festivals, and nightlife. Ultimately, this dissertation argues that transnational AIDS cultural activism, with its viral aesthetic strategies, emergent modes of identification, and bold political interventions in public space, produced new critical understandings of postmodernism, queerness, globalization, and postcoloniality.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in History and Theory of Art, Massachusetts Institute of Technology, Department of Architecture, May, 2019; Cataloged from PDF version of thesis. Images from pages 324 to 374 are redacted.; Includes bibliographical references (pages 295-323).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128063</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems analysis of urban air mobility operational scaling</title>
<link>https://hdl.handle.net/1721.1/128057</link>
<description>Systems analysis of urban air mobility operational scaling
Vascik, Parker D.(Parker Denys Neff)
Urban air mobility (UAM) refers to a set of vehicles and operational concepts that provide on-demand or scheduled air transportation services for passengers and cargo within a metropolitan area. Prior UAM systems based on helicopters or small aircraft did not achieve sustained, large-scale adoption. The goals of this thesis are: to identify the principal scaling constraints of UAM, to discern how the severity of these constraints varies with different implementation locations and operational concepts, and to assess the feasibility of large-scale UAM services in the United States subject to these constraints. Seven potential scaling constraints are identified through exploratory case studies of UAM operations in three U.S. cities. Of these constraints, the development of takeoff and landing areas (TOLAs) and the provision of air traffic control (ATC) services are proposed as principal near-term constraints and selected for detailed analysis. The development of high-throughput, small-footprint TOLAs to enable UAM scaling in urban areas is evaluated as a multicommodity flow problem. TOLA design and aircraft performance attributes that enhance throughput per footprint are determined through tradespace analysis. TOLA throughput is found to be highly dependent on attributes of ATC, namely controller workload and separation minima. Estimates of maximum aircraft throughput capacity are developed for representative inner-city UAM TOLAs of various physical designs. The development of procedurally segregated airspace cutouts for UAM flight is shown to be a promising strategy to enable high-volume UAM operations within terminal airspace. Furthermore, four flight procedures are proposed to support UAM access to commercial airports under both instrument flight rules (IFR) and visual flight rules (VFR). Lastly, the magnitude of ATC restrictions on the scale of UAM operations is evaluated in the 34 largest U.S. 
metropolitan areas. The degree to which ATC may constrain UAM scale is found to vary widely between these metropolitan areas, potentially inhibiting service to over 75% of the population in the most restricted city but less than 15% in the least restricted city. The development of airspace cutouts for VFR UAM operations reduces this variation and increases population coverage from 65% to 80% in the median U.S. metropolitan area.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 195-205).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128057</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrument systematics calibration and performance validation for high precision photometry missions</title>
<link>https://hdl.handle.net/1721.1/128056</link>
<description>Instrument systematics calibration and performance validation for high precision photometry missions
Krishnamurthy, Akshata.
Exoplanet detection using planetary transits requires very high precision photometry. Photometry missions must limit systematic noise to tens of parts per million over few-hour timescales. The present work focuses on developing a generalized framework to improve bright star photometry for both large space-based telescopes and small satellite missions. The framework uses active integration with inherent feedback mechanisms in order to maximize the utility of results from three functional areas: simulation and modeling, laboratory characterization, and flight data analysis techniques. We systematically assess the performance of the system by identifying, characterizing, calibrating, and removing the major systematic noise sources from the flight data with the goal of establishing a noise floor for the mission. We present two applications, namely the Transiting Exoplanet Survey Satellite (TESS) and the Arcsecond Space Telescope Enabling Research in Astrophysics (ASTERIA). TESS is a NASA Astrophysics Explorer mission that was successfully launched in April 2018. The present work establishes a noise floor of 16 ppm at 4 hours for TESS by evaluating hundreds of non-variable bright stars over multiple sectors of observation. We also develop methods to improve the photometric performance for outliers that do not conform to the noise floor. In addition, we develop laboratory techniques to very precisely characterize key detector properties such as absolute quantum efficiency and charge blooming. ASTERIA is a 6U CubeSat that was deployed into a low-Earth orbit in November 2017. The present work provides a framework to assess the photometric performance of ASTERIA by developing one of the first data analysis pipelines for CMOS science. 
We demonstrate photometric precision of 65 ppm at 2 hours for HD219134, and 15 ppm at 2 hours for Alpha Centauri. We also present in-flight calibration and ground characterization tests of the camera assembly to characterize and remove significant noise sources such as fixed pattern noise and temperature variations. Using the results from this research, we develop a pipeline-driven approach for calibration test development for SPARCS, an upcoming JPL CubeSat mission to demonstrate UV photometry, and provide guidelines for early design phase noise budgeting and detector selection for the ASTERIA constellation concept.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 239-250).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128056</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effective information sharing for human-robot collaboration</title>
<link>https://hdl.handle.net/1721.1/128055</link>
<description>Effective information sharing for human-robot collaboration
Unhelkar, Vaibhav Vasant.
Humans and machines often possess complementary skills. The recognition of this fact is leading to a steadily growing interest in collaborative robots. Despite the growing interest, however, a fundamental question remains to be answered: "How does one develop effective collaborative robots?" Three entities need to be considered while answering this question -- namely, the collaborative robot itself, the human teammate whom the robot interacts with, and, equally importantly, the robot developer who is tasked with designing the machine. Each of these entities possesses different information. Effective sharing of this information is essential for developing collaborative robots and achieving fluent collaboration. In this dissertation, I present models and algorithms to enable effective information sharing between the robot, the human, and the developer. I begin by presenting the Agent Markov Model (AMM), a Bayesian model of sequential decision-making behavior, and Constrained Variational Inference (CVI), a hybrid learning algorithm that can learn generative models both from data and domain expertise. By utilizing AMM and CVI, the developer can specify decision-making models both for the human teammate and the collaborative robot with reduced labeling effort. Next, I present ADACORL, a framework to generate the collaborative robot's policy for interaction. By leveraging algorithms for planning under uncertainty, ADACORL can generate fluent robot behavior for human-robot collaborative tasks with state spaces significantly larger than prior art (&gt; 1 million states) and short planning times (&lt; 1 s). Finally, I provide an approach for deciding if, when, and what to communicate during human-robot collaboration. Through human-robot interaction studies, I demonstrate that the proposed decision-making approaches result in the effective use of the robot's action and communication capabilities during collaboration with a human teammate.
Thesis: Ph. D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 171-186).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128055</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plan summarization for decision support in human team planning</title>
<link>https://hdl.handle.net/1721.1/128054</link>
<description>Plan summarization for decision support in human team planning
Kim, Joseph, Ph.D., Massachusetts Institute of Technology.
Humans often make their plans in groups, sharing information, ideas, and preferences before arriving at a final plan. Human team planning sessions, however, often suffer from inefficiencies such as getting off topic, losing track of goals, and leaving meetings without a shared understanding between team members. Humans also tend to overlook constraints and lose situational awareness when presented with a large amount of data. The complexities of both the planning task and social factors can necessitate additional meetings and degrade performance during plan execution. To alleviate these challenges, AI researchers have been developing human-agent planning systems (also called human-aware or human-in-the-loop planners) to reduce the cognitive load of humans and help them generate high-quality plans prior to execution. Existing systems, however, mainly focus on dyadic interaction between a single human planner and a single AI agent. There exists a need for decision support systems that can properly monitor and support the plan-making of human teams. In this thesis, I develop two classes of plan summarization algorithms in the context of human team planning. First, I develop a novel plan recognition algorithm to infer the human team's final plan from a structured form of their natural language conversation. The key idea is to leverage dialogue features from the team's planning conversation to monitor the team's overall agreement process. I show that by combining such contextual information with symbolic planning models, state-of-the-art summarization accuracy can be achieved. I also show that my model is generalizable across different discussion topics and to planning problems of various representations (e.g. PDDL, MDPs, or MILPs). 
Second, I focus on the problem of summarizing a dataset of plan traces, for a setting where the agent does not have access to a symbolic planning model. I develop an algorithm to infer contrastive Linear Temporal Logic (LTL) specifications from a dataset of positive and negative traces. The motivation is that, even in this model-free setting, the agent can be used as an information-gatherer, communicating to humans any insightful rules learned from historical data. I introduce a novel probabilistic model that generates multiple contrastive LTL specifications while offering robustness to noisy input. Finally, I conduct an in-depth user study to assess the various properties of LTL specifications and how they impact human interpretability. I develop a new quantitative measure to gauge the "interestingness" of LTL specifications, and through human subject experiments I show that it aligns better with actual human intuition than an existing method.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 155-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128054</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deposition, crystallization and nanoindentation of substrate-bound biomolecule-assembled nanomaterials</title>
<link>https://hdl.handle.net/1721.1/128053</link>
<description>Deposition, crystallization and nanoindentation of substrate-bound biomolecule-assembled nanomaterials
Lewis, Diana J.
            (Diana Jean)
The directed assembly of nanoscale components has the potential to generate novel macroscopic materials with specific hierarchically ordered structures and programmable physical and chemical properties. DNA is a particularly promising material to direct nanomaterial assembly, as its sequence-specific complementary binding allows DNA to be used to precisely tailor interactions between a multitude of different nanoscale elements, allowing formation of complex nanoscale structures without top-down instruction. While multiple DNA-based materials assembly methods have been developed, the mechanical properties of many of these systems are unknown due to limitations in both the preparation of materials suitable for mechanical testing and difficulties in measuring the behavior of these soft materials under applied force. The modulus of one particular architecture - DNA-NP superlattices - is studied here by first examining methods to create thin films of DNA-NPs that can be accurately probed using nanoscale mechanical testing techniques, then using AFM nanoindentation of various superlattice designs to correlate DNA-NP lattice structure to mechanical behavior. A layer-by-layer deposition strategy was first examined in order to understand how deposition conditions affect the packing density and surface roughness of thin films of DNA-NPs as a function of deposition time, bulk system temperature, and solution ionic strength. Subsequent experiments used a slow-cooling method to generate single crystal superlattice architectures, where the size and shape of the substrate-bound crystals could be tailored by tuning the relative strength of the interactions between the substrate and DNA-NPs, and could be described by the Winterbottom construction. Additionally, FCC single crystal structures were demonstrated, which have not been previously shown in the literature. 
Using the substrate-bound crystals, the dependence of the modulus on DNA length, nanoparticle size, and density of DNA strands was determined, which allowed for the establishment of design rules that will ultimately enable control over the mechanical properties of future DNA-based nanoparticle structures.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 163-169).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128053</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic models and optimization algorithms for large-scale transportation problems</title>
<link>https://hdl.handle.net/1721.1/128045</link>
<description>Probabilistic models and optimization algorithms for large-scale transportation problems
Lu, Jing
This thesis tackles two major challenges of urban transportation optimization problems: (i) high dimensionality and (ii) uncertainty in both demand and supply. These challenges are addressed from both modeling and algorithm design perspectives. The first part of this thesis focuses on the formulation of analytical transient stochastic link transmission models (LTM) that are computationally tractable and suitable for large-scale network analysis and optimization. We first formulate a stochastic LTM based on the model of Osorio and Flötteröd (2015). We propose a formulation with enhanced scalability. In particular, the dimension of the state space is linear, rather than cubic, in the link's space capacity. We then propose a second formulation that has a state space of dimension two; it scales independently of the link's space capacity. Both link models are validated against benchmark models, both analytical and simulation-based. The proposed models are used to address a probabilistic formulation of a city-wide signal control problem and are benchmarked against other existing network models. Compared to the benchmarks, both models derive signal plans that perform systematically better across various performance metrics. The second model, compared to the first model, reduces the computational runtime by at least two orders of magnitude. The second part of this thesis proposes a technique to enhance the computational efficiency of simulation-based optimization (SO) algorithms for high-dimensional discrete SO problems. The technique is based on an adaptive partitioning strategy. It is embedded within the Empirical Stochastic Branch-and-Bound (ESB&amp;B) algorithm of Xu and Nelson (2013). This combination leads to a discrete SO algorithm that is both globally convergent and has good small sample performance. The proposed algorithm is validated and used to address a high-dimensional car-sharing optimization problem.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 179-186).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128045</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New approaches to maximizing influence in large-scale social networks</title>
<link>https://hdl.handle.net/1721.1/128044</link>
<description>New approaches to maximizing influence in large-scale social networks
Hunter, David Scott, Ph.D., Massachusetts Institute of Technology.
With the widespread adoption of social media in today's society, the problem of identifying the most influential individuals whose adoption of a product or action will spread maximally in the network is of increased practical significance. This thesis considers new strategies and methods for this problem, which is known as the influence maximization problem, focusing on a setting where the influence is determined by some function of user opinions. In the first chapter, we introduce a new model of opinion dynamics that is motivated by research in both social psychology and political science. We present a series of theoretical results concerning the convergence of the opinions to an equilibrium, including conditions under which convergence to a fixed point occurs, an explicit characterization of the equilibrium, and the rate of convergence to the equilibrium. In the second chapter, we propose new approaches to the influence maximization problem in a social network when the dynamics adhere to the model in the first chapter. We apply these methods to several large-scale real-world social networks. In doing so, we attempt to measure the validity of the model we propose, estimate the relative importance of some special users via a centrality function approach, and highlight the computational efficiency of our influence maximization methods. In the final chapter, we introduce an alternative approach to maximizing influence in a social network whose solution is a dynamic policy that determines when, what, and with whom an agent communicates. We motivate the necessity for a dynamic policy solution by highlighting some realistic behaviors that make modeling and analyzing real-world dynamics difficult. By leveraging reinforcement learning, we learn policies that account for some of these realistic behaviors and find that these policies exhibit impressive performance on large-scale networks.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 169-182).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128044</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging data analytics to improve outpatient healthcare operations</title>
<link>https://hdl.handle.net/1721.1/128043</link>
<description>Leveraging data analytics to improve outpatient healthcare operations
Hu, Michael, Ph.D., Massachusetts Institute of Technology.
Healthcare reform in the United States has received significant attention from the public, physicians, health administrators, and insurance payors. The important policy discussions surrounding healthcare reform have a wide-reaching impact, and often require quantitative data and analytics to support successful changes. This thesis studies a number of pressing healthcare problems and offers actionable insights driven by data and analytics. In Chapter 2, we examine the electronic health record (EHR) phenomenon and how it has fundamentally transformed physicians' work. While these computer systems are designed in part to streamline workflows and increase employee efficiency, physician experiences are often the exact opposite. Instead of having more face-to-face time seeing their patients, physicians are forced to spend the majority of their time completing EHR tasks. In this chapter, we establish rigorous, quantitative methods for measuring and analyzing this problem to help health systems curb the growing population of burned-out physicians. In Chapter 3, we demonstrate how the methods established in Chapter 2 can also be used to predict physician workload, which may assist in the design of physician compensation models. While health systems are shifting away from fee-for-service payment schemes, alternative payment schemes often encounter significant implementation challenges. There are many open questions that need to be resolved before these new payment schemes can achieve widespread adoption. In this chapter, we address one such question, which involves how to properly risk adjust for different patient populations. We leverage the techniques from Chapter 2 to measure the workload imposed on physicians by individual patients. This enables us to subsequently develop a risk adjustment model that substantially outperforms existing risk adjustment methods in determining the physician workload associated with managing different patient populations.
In Chapter 4, we examine the problem of relieving hospital capacity constraints. Many hospitals frequently operate close to full capacity, which poses serious safety concerns. Most attempted solutions in this space focus on inpatient interventions such as optimizing patient flow and surgery schedules. In contrast, we propose an approach based on changes in the longitudinal care delivered by ambulatory services, specifically for the treatment of heart failure patients. Lastly, in Chapter 5, we develop a new modeling framework for real-time appointment scheduling. While others have applied existing algorithms from online bin-packing to solve this problem, our modeling framework leverages unique aspects of appointment scheduling to further optimize scheduling decisions and reduce resource requirements. In doing so, we demonstrate that our modeling framework generalizes the classical bin-packing framework, thereby enabling a potentially larger number of problems to be studied using similar techniques.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 147-158).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128043</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of departure from nucleate boiling in subchannel applications : from mechanistic modeling to hybrid framework</title>
<link>https://hdl.handle.net/1721.1/128042</link>
<description>Prediction of departure from nucleate boiling in subchannel applications : from mechanistic modeling to hybrid framework
Zhao, Xingang, Ph.D., Massachusetts Institute of Technology.
The critical heat flux (CHF) corresponding to the departure from nucleate boiling (DNB) is a regulatory limit for licensing of pressurized water reactors (PWRs). Under DNB conditions, the heated surface is blanketed by a stable vapor film, leading to a sharp deterioration of the heat transfer coefficient at the heater/coolant interface and an abrupt temperature rise. Despite the abundance of predictive tools available to the reactor thermal-hydraulics community, the path to an accurate, robust CHF model remains elusive due to a lack of consensus on the DNB triggering mechanism. In this thesis, a comprehensive study of physics- and data-driven DNB modeling approaches is presented with the objective of achieving superior predictive capabilities in subchannel applications. In a rod bundle where the coolant region is modeled as an inter-connected array of subchannels, the local boundary conditions are determined through the use of subchannel codes, such as CTF in this work. As an essential prerequisite step to determine code readiness, the single- and two-phase capabilities of CTF are assessed by comparing against targeted experiments and the commercial subchannel code VIPRE-01 on flow distribution, pressure drop, and void content. The key two-phase closures in CTF that are in the greatest need of improvement are identified and discussed, including subcooled boiling heat transfer and liquid-vapor interfacial friction. Then, existing macro-scale physics-driven CHF models are surveyed. In view of their limitations, an evolutionary mechanistic model is proposed, leveraging key assumptions in the relatively well-accepted mechanisms of liquid sublayer dryout and near-wall bubble crowding.
Detailed validation and global sensitivity analysis of the proposed model demonstrate its improved accuracy and robustness over widely used predictive tools on an extensive database that covers a broad range of geometric and flow conditions. In addition to rod bundles, tube and annulus heaters are also included in the test matrix for their higher data availability and for the similarity of their flow channels to bundle subchannels. Meanwhile, an updated mechanistic model is proposed for use in transient studies where traditional quasi-steady-state approaches would yield significant conservatism. Such a transient DNB model relies on the mechanism established under steady-state scenarios and evaluates the depletion of the presumed liquid sublayer. The model is validated against three sets of power transient experiments, showing close agreement with measurements under conditions of practical interest for a PWR. Lastly, with a fresh perspective from machine learning (ML), the prediction of DNB is approached through a physics-informed, ML-aided parallel framework. Such a first-of-a-kind hybrid framework takes advantage of established understanding in the field (i.e., domain knowledge) and uses ML to capture the undiscovered information from the mismatch between experimental and domain knowledge-predicted output. A comprehensive evaluation is carried out to demonstrate the proposed hybrid approach's superior performance over conventional domain knowledge-based models and standalone ML methods for both interpolation and extrapolation purposes. It is shown to readily extend its applicability domain and model complexity on the fly, resulting in an elevated level of flexibility and robustness. In light of the hybrid framework's promising potential, the concept of window-type extrapolation mapping is further proposed to help inform future high-cost measurement priorities.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 150-159).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128042</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic particle imaging for intraoperative breast cancer margin assessment and functional brain imaging</title>
<link>https://hdl.handle.net/1721.1/128037</link>
<description>Magnetic particle imaging for intraoperative breast cancer margin assessment and functional brain imaging
Mason, Erica Ellis.
Magnetic Particle Imaging (MPI) is an emerging tracer-based imaging modality that uniquely images the nonlinear magnetization of superparamagnetic iron oxide nanoparticles (SPIOs). MPI boasts high sensitivity, zero background signal, positive contrast, fast temporal resolution, and quantitative detection. The field of MPI is currently preclinical, and this work aims to scale MPI to human sizes by developing and validating it for two clinical applications: tumor detection and imaging for intraoperative margin assessment during breast-conserving surgery (BCS), and functional neuroimaging. For margin assessment in BCS, a hand-held Magnetic Particle detector and a small-bore MPI imager are assessed for intraoperative use along with an injected SPIO agent. The goal is to detect positive margins during surgery and thus reduce the need for future reexcision. Both hardware systems are validated using clinically relevant phantoms. For functional Magnetic Particle Imaging (fMPI) of the brain, a continuous time-series MPI imager is developed and validated for imaging of cerebral blood volume (CBV) changes during functional activation. The goal is improved sensitivity beyond the capabilities of current functional imaging modalities. We present initial results of in vivo rodent fMPI in a small-bore imager, and the design of a human head-sized system, with implementation underway. Through the collective development of these MPI hardware systems and validation of their potential for these two clinical applications, this work aims to catalyze the expansion of MPI into the clinical setting.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 171-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128037</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and analytical studies of partial melting in planetesimals and the Martian mantle</title>
<link>https://hdl.handle.net/1721.1/128036</link>
<description>Experimental and analytical studies of partial melting in planetesimals and the Martian mantle
Collinet, Max.
Planetesimals and planetary embryos, the building blocks of planets, started to melt within a few million years of the formation of the solar system. This thesis explores, through experiments and the analysis of meteorites, the magmatic processes that affected those early-formed bodies. Chapter 1 presents low-pressure experiments that simulate the onset of melting of planetesimals made of different chondritic materials (H, LL, CI, CM and CV). H, LL and CI compositions melted at lower temperatures and produced partial melts with higher SiO₂, Al₂O₃ and alkali element concentrations compared to CM and CV compositions. They formed unique trachyandesite achondrites upon crystallization. In Chapter 2, the experiments are compared to primitive achondrites, distinct groups of meteorites that represent the melting residues "left behind" within planetesimals. Cumulative evidence from trachyandesite achondrites and primitive achondrites suggests that the planetesimals that accreted in the inner solar system were not depleted in alkali elements relative to the composition of the Sun's photosphere. Chapter 3 is a detailed study of ureilites, the largest group of primitive achondrites. Twelve ureilites were analyzed to determine the chemical composition and relative proportions of olivine and pyroxene. Those analyses, together with additional experiments, constrain the initial Mg/Si ratio of the ureilite parent body. The experiments are used to develop a new geothermometer, based on the partitioning of Cr between olivine and pyroxene, which demonstrates that ureilites are residues of incremental melting. Chapter 4 is the first of two chapters describing igneous processes on Mars, a planet sometimes referred to as a planetary embryo due to its small size and early accretion age. It describes a high-pressure experimental study of the partial melting of the primitive Martian mantle and discusses the origin of rocks from the Martian crust.
Finally, chapter 5 is a study of Fe-Mg isotopic fractionation in the olivine of the "enriched" shergottite Northwest Africa 1068. The composition and crystallization history of the parental melt, which represents a melt extracted from the Martian mantle, are constrained by modeling diffusion and crystal growth simultaneously.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128036</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave-driven geomorphology of Pacific carbonate coastlines : from landscape to wavelength scale</title>
<link>https://hdl.handle.net/1721.1/128035</link>
<description>Wave-driven geomorphology of Pacific carbonate coastlines : from landscape to wavelength scale
Bramante, James Francis.
The shallow marine ecosystems of coral atolls and the human communities they support are among the most vulnerable to anthropogenic climate change. Sea-level rise threatens to inundate low-lying reef islands, tropical cyclone intensification threatens islands with flooding and erosion, and ocean warming and acidification threaten the health of coral reefs. Unfortunately, the sediment dynamics that shape the morphology of coral reefs and atoll reef islands are poorly understood, hindering predictions of coral atoll responses to climate change forcing. Here, I apply an eclectic set of methods, including numerical modeling, physical lab experiments, and sedimentological analysis, to produce insights into the ways tropical cyclones and waves move sediment on fringing reefs. First, I use a numerical model of hydrodynamics to predict the influence of sea-level rise and wave climate change on sediment transport across a coral atoll fringing reef. I demonstrate that by the end of the century, sea-level rise will reduce sediment transport rates from the fore reef to the beach, but increase transport rates from the reef flat to the beach. Wave climate change will have relatively negligible influence on cross-reef sediment transport. Additionally, I use the weathering of foraminifera tests to produce a sediment proxy of transport duration and direction across atoll reef flats, but demonstrate that the proxy does not clearly identify storm deposits. Second, I execute a series of experiments in an oscillating flow tunnel to constrain the rate at which sediment erodes reef surfaces under waves.
I find that the erosion rate increases as a power law of wave orbital velocity, and that the amount of sediment has a second-order influence. Finally, I establish grain size in a sediment core retrieved from a blue hole in the Marshall Islands as a proxy for tropical cyclone genesis and, using the results from an ensemble of climate models, demonstrate that enhanced tropical cyclogenesis during the Little Ice Age may have been driven by an anomalously negative Pacific Meridional Mode. This thesis demonstrates the importance of sediment dynamics to the morphology of fringing reefs and atoll reef islands and the sensitivity of those dynamics to centennial climate variability.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 171-186).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128035</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The political dimension of labor-management relations: national trends and state level developments in Massachusetts</title>
<link>https://hdl.handle.net/1721.1/127981</link>
<description>The political dimension of labor-management relations: national trends and state level developments in Massachusetts
Saunders, Phillip.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics and Social Science, 1964.; Includes bibliographical references (leaves 1004-1017).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127981</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new procedure for the identification of hydrocarbons</title>
<link>https://hdl.handle.net/1721.1/127980</link>
<description>A new procedure for the identification of hydrocarbons
Wakeman, Reginald Leslie, 1905-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 1930.; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127980</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Competing conceptions of the liberal state and the governance of technological innovation in the U.S., 1933-1953</title>
<link>https://hdl.handle.net/1721.1/127979</link>
<description>Competing conceptions of the liberal state and the governance of technological innovation in the U.S., 1933-1953
Hart, David M.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1995.; Includes bibliographical references (p. 525-569).
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127979</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the theory of rapid filtration of water through sand</title>
<link>https://hdl.handle.net/1721.1/127970</link>
<description>A study of the theory of rapid filtration of water through sand
Stein, P. Charles (Philip Charles)
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Civil and Sanitary Engineering, 1940.; Vita.; Includes bibliographical references (leaves 228-230).
</description>
<pubDate>Mon, 01 Jan 1940 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127970</guid>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the hydrodynamics of superfluid helium</title>
<link>https://hdl.handle.net/1721.1/127969</link>
<description>On the hydrodynamics of superfluid helium
Clark, Alfred, Jr., 1936-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1963.; Vita.; Includes bibliographical references (leaves 255-257).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127969</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trajectory and guidance theory for a low-thrust lunar reconnaissance vehicle</title>
<link>https://hdl.handle.net/1721.1/127967</link>
<description>Trajectory and guidance theory for a low-thrust lunar reconnaissance vehicle
Miller, James Swift.
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1961.; Vita.; Includes bibliographical references (p. 133-134).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127967</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thin elastic helicoidal shells</title>
<link>https://hdl.handle.net/1721.1/127961</link>
<description>Thin elastic helicoidal shells
Knowles, James K. (James Kenyon), 1931-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1957.; Bibliography: leaves 82-83.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127961</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Categories in recursion theory,</title>
<link>https://hdl.handle.net/1721.1/127945</link>
<description>Categories in recursion theory,
Wasserman, Michael Gary.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1971.; Vita.; Bibliography: leaf 259.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127945</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of definite integrals by symbolic manipulation,</title>
<link>https://hdl.handle.net/1721.1/127944</link>
<description>Evaluation of definite integrals by symbolic manipulation,
Wang, Paul S.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1971.; Vita.; Bibliography: leaves 182-185.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127944</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Eisenstein series on the boundary of the disk,</title>
<link>https://hdl.handle.net/1721.1/127943</link>
<description>Eisenstein series on the boundary of the disk,
Lewis, John Block.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1970.; Vita.; Bibliography: leaf 39.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127943</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the stability of certain two-dimensional unsymmetric parallel flows</title>
<link>https://hdl.handle.net/1721.1/127936</link>
<description>On the stability of certain two-dimensional unsymmetric parallel flows
Foote, Joe Reeder.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1949.; Vita.; Bibliography: leaf 48.
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127936</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards optimal solution techniques for large eigenproblems in structural mechanics</title>
<link>https://hdl.handle.net/1721.1/127932</link>
<description>Towards optimal solution techniques for large eigenproblems in structural mechanics
Ramaswamy, Seshadri.
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1980.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127932</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neuromorphic computing systems : crystalline resistive random access memory</title>
<link>https://hdl.handle.net/1721.1/127915</link>
<description>Neuromorphic computing systems : crystalline resistive random access memory
Tan, Scott H. (Scott Howard)
Neuromorphic computing is a promising approach for efficient electronics that shapes computer hardware after the human brain. At the core of neuromorphic architectures are artificial synapses, which store conductance states to weight collections of electrical spikes according to Kirchhoff's laws and Ohm's law. This thesis presents silicon (Si)-based crystalline resistive random-access memory (crystalline RRAM) artificial synapses for neuromorphic computing. The main scaling bottleneck is poor temporal and spatial uniformity of artificial synapses. To the best of the author's knowledge, the crystalline RRAM devices reported in this thesis have the lowest switching variations of any RRAM type. Controlling metal movement in resistive switching materials is extremely challenging. This thesis demonstrates two strategies to improve nanoscale control in crystalline RRAM: 1) intrinsic semiconductor regulation and 2) active metal alloying. The first strategy relies on using defects to regulate resistive switching. Epitaxially grown silicon-germanium (SiGe) on Si permits resistive switching via dislocations. Defect-selective chemical etching can increase the ON/OFF ratio while maintaining low variations. The second approach to improving crystalline RRAM is active metal alloying. Pure silver (Ag) exhibits high mobility in Si due to thermodynamic repulsion between Ag and Si. The thermodynamic instability of Ag in Si induces poor weight stability, especially in low conductance states. This thesis demonstrates that adding a small amount of copper (Cu) to pure Ag can enhance weight stability because Cu can act as a bridge between Ag and Si to alleviate thermodynamic repulsion. Convolutional filtering and weight storage with 32 x 32 crystalline RRAM crossbar arrays are experimentally demonstrated using this approach. While these results are extremely promising, 2D crossbar scaling is limited by sneak currents. Stacking artificial synapses in 3D could maximize scaling potential.
However, 3D crystalline RRAM cannot be fabricated with single-crystalline materials that require high growth temperatures. Poly-crystalline Si could form 3D crystalline RRAM; however, its resistive switching performance is inferior to that of single-crystalline RRAM, possibly due to dangling bonds. This thesis demonstrates that hydrogen passivation can mitigate this problem. Hydrogenated doped poly-crystalline/micro-crystalline Si are presented as suitable materials for 3D neuromorphic computing cores. To conclude this thesis, monolithic character classifiers with micro-crystalline imaging and computing units are designed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 129-142).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127915</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building and using robust representations in image classification</title>
<link>https://hdl.handle.net/1721.1/127912</link>
<description>Building and using robust representations in image classification
Tran, Brandon Vanhuy.
One of the major appeals of the deep learning paradigm is the ability to learn high-level feature representations of complex data. These learned representations obviate manual data pre-processing, and are versatile enough to generalize across tasks. However, they are not yet capable of fully capturing abstract, meaningful features of the data. For instance, the pervasiveness of adversarial examples--small perturbations of correctly classified inputs causing model misclassification--is a prominent indication of such shortcomings. The goal of this thesis is to work towards building learned representations that are more robust and human-aligned. To achieve this, we turn to adversarial (or robust) training, an optimization technique for training networks less prone to adversarial inputs. Typically, robust training is studied purely in the context of machine learning security (as a safeguard against adversarial examples)--in contrast, we will cast it as a means of enforcing an additional prior onto the model. Specifically, it has been noticed that, in a similar manner to the well-known convolutional or recurrent priors, the robust prior serves as a "bias" that restricts the features models can use in classification--it does not allow for any features that change upon small perturbations. We find that the addition of this simple prior enables a number of downstream applications, from feature visualization and manipulation to input interpolation and image synthesis. Most importantly, robust training provides a simple way of interpreting and understanding model decisions. Besides diagnosing incorrect classification, this also has consequences in the so-called "data poisoning" setting, where an adversary corrupts training samples with the hope of causing misbehavior in the resulting model. We find that in many cases, the prior arising from robust training significantly helps in detecting data poisoning.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 115-131).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127912</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explicit division and torsion points on superelliptic curves and Jacobians</title>
<link>https://hdl.handle.net/1721.1/127911</link>
<description>Explicit division and torsion points on superelliptic curves and Jacobians
Arul, Vishal.
In this thesis, I study two problems in the arithmetic of superelliptic curves. By a superelliptic curve, I mean the smooth projective model of the affine plane curve y^n = f(x) where f(x) is separable, n is coprime to deg(f), and the characteristic of the ground field does not divide n. When n = 2, this is commonly referred to as a hyperelliptic curve. I first generalize Zarhin's formula for division by 2 [68] on hyperelliptic curves to the superelliptic case. Rather than divide by n, I invert the 1 − ζ endomorphism on the Jacobian. My formula reduces to Zarhin's when n = 2. Next, I study torsion points on superelliptic curves. Work of Coleman [15] and Grant-Shaulis [29] together classifies all torsion points on the hyperelliptic curve y² = x^d + 1, where d ≥ 5 is prime. I extend their results to the superelliptic curve y^n = x^d + 1, where n, d ≥ 2 are coprime. Using a specialization argument, I also classify torsion points on a generic superelliptic curve, extending Theorem 7.1 of Poonen-Stoll [57] to the superelliptic case. In order to classify torsion points, I prove a result about the Galois action on the p-torsion of the Jacobian of y^p = x^q + 1, where p and q are distinct primes. This problem is equivalent to a new p-adic congruence for Jacobi sums, which I state and prove. This congruence is related to (but does not follow from) a congruence of Uehara [63].
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 113-116).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127911</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering the meaning behind the story : creating a system for documenting and supporting children's narrative development</title>
<link>https://hdl.handle.net/1721.1/127910</link>
<description>Discovering the meaning behind the story : creating a system for documenting and supporting children's narrative development
Woolf, Anneli Rane.
Narrative is a powerful core component of human development. Our ability to tell stories has been credited as one of the major influences for the success of the human species. We communicate, think, encode memories, dream, and learn about the world around us through stories. As story-beings, we need to recognize and harness the power of narrative as an educational tool. Despite the importance of narratives, there are significant gaps in the literature for understanding children's narrative development and designing interventions to support growth. Unlike literacy, there are no state-reported statistics of the rates of narrative development for children, nor are there established consistent methods with comprehensive metrics to systematically document narrative progress or evaluate interventions. These gaps are perpetuated by the complex space of narrative, specifically in the form of the content and the social, cultural, and individual context.; In response to these gaps, we developed Learning Loops, a novel digitally-mediated family learning system for documenting and supporting children's narratives. Embedded in the Learning Loops system is StoryBlocks, an open-ended storytelling app for children ages six to ten. While children play in StoryBlocks, their fine-grained interaction data is captured and streamed to a human coach, who uses a custom-built tool to analyze play and identify narrative trends. Coaches use this analysis to scaffold children's narrative process through direct feedback and promote caregiver co-engagement through text message updates and activities. 
This system is unique in that it: 1) documents children's stories as a basis for a comprehensive narrative analysis system, and 2) incorporates the important social role in children's learning by using digital tools to augment and support human social engagement in the narrative process.; Through presenting Learning Loops, this work explores the roles that both technology and humans play within these digitally-mediated systems to support narrative development within the child's social context. This dissertation proposes the Two-Lens Approach, a holistic theoretical framework for studying the form, content, and context of children's narratives. This approach is applied to critique the current design and guide future iterations to improve the program's ability to document, analyze, and support children's narrative capacity.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 176-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127910</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing cobalamin cycling by Antarctic marine microbes across multiple scales</title>
<link>https://hdl.handle.net/1721.1/127908</link>
<description>Characterizing cobalamin cycling by Antarctic marine microbes across multiple scales
Rao, Deepa, Ph.D. Massachusetts Institute of Technology.
Highly productive marine microbial communities in the coastal Southern Ocean sustain the broader Antarctic ecosystem and play a key role in Earth's climate via the biological pump. Regional phytoplankton growth is primarily limited by iron and co-limited by cobalamin (vitamin B₁₂), a trace cobalt-containing organometallic compound only synthesized by some bacteria and archaea. These micronutrients impact primary production and the microbial ecology of the two keystone phytoplankton types: diatoms and Phaeocystis antarctica. This thesis investigates microbe-driven cobalamin cycling in Antarctic seas across multiple spatiotemporal scales. I conducted laboratory culture experiments with complementary proteomics and transcriptomics to investigate the B₁₂-ecophysiology of P. antarctica strain CCMP 1871 morphotypes under iron-B₁₂ co-limitation.; We observed colony formation under higher iron treatments, and a facultative use of B₁₂-dependent (MetH) and B₁₂-independent (MetE) methionine synthase isoforms in response to vitamin availability, demonstrating that this strain is not B₁₂-auxotrophic. Through comparative 'omics, we identified a putative MetE protein in P. antarctica abundant under low B₁₂, which is also found in other marine microbes. Across Antarctic seas, community-scale cobalt and B₁₂ uptake rates were measured by ⁵⁷Co radiotracer incubation experiments and integrated with hydrographic and phytoplankton pigment data. I observed significant correlations between uptake fluxes and environmental variables, providing evidence for predominantly diatom-driven uptake of these micronutrients in warmer, fresher surface waters with notable regional differences.; To date, this work is the most comprehensive attempt to elucidate the processes governing the co-cycling of cobalt and B₁₂ in any marine system. At the ecosystem-scale, I developed and tested a hypothesis of micronutrient-driven community dynamics through a trait-based model with cross-feeding interactions. 
The model demonstrates how the observed seasonal succession of springtime P. antarctica from solitary to colonial cells, bacterioplankton, and summertime diatoms may be explained by the microbial cycling of iron, dissolved organic carbon, and B₁₂. Overall, this dissertation provides new information about the micronutrient-driven ecology of Antarctic marine microbes and adds to our understanding of the interconnections between organismal life cycle, trace metals, and trace organics in marine environments.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 161-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127908</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-assembling nanocomposite Tectons for ordered superlattices</title>
<link>https://hdl.handle.net/1721.1/127907</link>
<description>Self-assembling nanocomposite Tectons for ordered superlattices
Santos, Peter J.(Peter Jeffries)
Nanocomposites, materials of heterogeneous composition with at least one of the phases having dimensions between 1 and 100 nm, can be produced with unique properties dependent on their composition and geometric configuration. However, it is a major challenge to precisely and simultaneously design the structure of synthetic nanocomposites at the nanoscale, microscale, and macroscale. To create advanced nanocomposites in which both structure and composition can be programmed across these disparate size regimes, we have developed a new nanoparticle-based building block, the Nanocomposite Tecton (NCT). An NCT consists of an inorganic nanoparticle core and a polymeric shell, with each chain terminating in a supramolecular binding group at the periphery of the NCT.; As each NCT contains both an inorganic nanoparticle and a polymer phase, each building block is itself a nanocomposite, and the incorporation of supramolecular binding groups allows for the directed assembly of NCTs that contain complementary binding groups. These reversible supramolecular interactions enable the assembly of NCTs into ordered arrays, and the collective behavior of the binding groups can be regulated by the dynamics of the polymer chains. The NCTs are capable of rapidly self-assembling into several different crystalline phases that are determined by the design of the building block, and are resilient against dispersity in the molecular weight of the polymer brush and the diameter of the nanoparticle cores. NCTs have been synthesized with both gold and iron oxide nanoparticle cores, indicating the ability to produce NCTs at reasonable scales.; Moreover, the incorporation of multiple nanoparticle compositions allows for the synthesis of NCT-based materials with plasmonic and magnetic properties that can affect, as well as be affected by, the assembly process. 
We further demonstrate that the crystallization kinetics can be modulated to induce the assembly of NCTs into faceted crystallites with micron-sized diameters, and the resulting NCT crystallites can be post-processed into bulk solids with arbitrary macroscopic shape and controlled grain size. The NCT design concept is therefore a highly modular and versatile building block capable of fabricating materials with controlled structures at the levels of atomic composition and molecular geometry, nanoscale organization, microstructure, and macroscopic form.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 260-280).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127907</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics characterization for designing functional soft materials</title>
<link>https://hdl.handle.net/1721.1/127906</link>
<description>Dynamics characterization for designing functional soft materials
Lindemann, William Robin.
In solutions, the dynamic behavior of soft materials is often critical to their function. In biological materials such as proteins and peptides, the edict that 'structure dictates function' has been supplanted in recent decades by the recognition that features like intrinsic disorder, conformational distribution, and solvent dynamics often play a part that is equally fundamental to the binding and reactivity of these materials. The same revelation holds for many other functional soft materials, including abiotic peptides and self-assembling materials, where function is controlled by the dynamic behavior of both the compound and the substrate. In this work, I elucidate the role of dynamics in several significant functional polyamides by the synthesis and characterization of samples spin-labeled for electron paramagnetic resonance (EPR) spectroscopy. By this approach, I developed insight into several soft-materials systems, including abiotic peptide tags, combinatorially selected for bioconjugation; fibronectin mimetic peptides, designed for therapeutics, biomaterials, and drug delivery; and finally, novel self-assembling polyamide materials designed for water purification and energy conservation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 161-179).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127906</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utilizing bioinspired metal-coordinate bonding in the solidification of soft gels via crosslinking, dehydration and mineralization</title>
<link>https://hdl.handle.net/1721.1/127905</link>
<description>Utilizing bioinspired metal-coordinate bonding in the solidification of soft gels via crosslinking, dehydration and mineralization
Kim, Sungjin, Ph.D. Massachusetts Institute of Technology.
In nature, most organisms embark on the journey of life in a predominantly soft and compliant state and go through a series of material solidification processes to serve a multitude of complex functions at various life stages. Many of these material transitions occur via an organic-inorganic processing pathway that involves a genetically programmed hierarchy of spatiotemporally orchestrated macromolecular crosslinking, dehydration, and mineralization events. With the growing need for sustainable material processing, interest has increased in understanding the underlying physical-chemical mechanisms that control the change in properties during such biological material transitions. Hence, in this thesis, we utilized polymers in combination with metal-coordinating ligands to form various types of metal-crosslinked networks as a platform to explore the possibly synergistic roles of crosslinking, dehydration, and mineralization in building solid materials out of soft gels.; First, we explored how metal-coordinate crosslinking contributes to macromolecular material mechanics upon dehydration-induced solidification, using mussel-inspired metal-catechol or metal-histidine crosslinked polymer hydrogels. We found evidence to suggest that a small amount of locally bound water by metal-coordinate complexes maintains their dynamic nature as mechanically dissipative crosslinks even in a dehydrated polymer network. In addition, a scaling relationship between the timescale of macroscopic network relaxation and the amount of bound microscopic water was elucidated by demonstrating control over the fractions of dynamic and permanent crosslinks within the network. Second, we investigated the relationship between macromolecular crosslinking and mineralization.; Inspired by the self-assembling metal-reinforced mussel holdfast threads, we tested whether metal-coordinate polymer networks can be utilized as simple composite scaffolds for direct in situ crosslink mineralization. 
Starting with aqueous solutions of well-dispersed metal-binding polymers, we found that inter-molecular metal-ion coordination complexes can serve as mineral nucleation sites, whereby significant mechanical reinforcement is achieved upon nanoparticle growth localized at the metal-coordinate network crosslink sites. Finally, we studied the control over mineralization of biominerals using catecholic metal-binding additives. We found that a common edible polyphenol, tannic acid (TA), ubiquitous in natural plants and foods, could work as an effective binder for Ca to stabilize amorphous calcium carbonate (ACC). Furthermore, we demonstrated that the TA-induced ACC can readily transition to hydroxyapatite, or bone mineral, in simulated body fluid at human body temperature.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 104-116).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127905</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guided etching and deposition of transition metal dichalcogenides</title>
<link>https://hdl.handle.net/1721.1/127901</link>
<description>Guided etching and deposition of transition metal dichalcogenides
Ke, Jian-An.
Two-dimensional (2D) transition metal dichalcogenides (TMDs) with engineered nanopores have been suggested as a promising materials system for membrane and catalysis applications in both the energy and environmental fields. Because of their atomic thinness, 2D TMDs are promising candidates for osmotic energy harvesting membranes. Furthermore, scalable nanopore preparation in MoS₂ crystals provides a more cost-efficient alternative to precious-metal-based catalysts for hydrogen evolution reaction catalysis. As an emerging class of semiconductor materials, 2D TMDs have also attracted attention in electronic and optoelectronic applications, which require lithography processes to define the desired device structure. However, conventional top-down patterning methods rely on temporarily coating a polymer-based resist, which has been found challenging to remove completely and as such is detrimental to 2D devices.; Development of a resist-free, guided growth of TMDs is therefore desirable. We demonstrate guided MoS₂ nanopore formation by engineering structural defects prior to oxidative annealing, a scalable and parallel process amenable to large-scale applications. This process is based on our observation that the nanopore distribution is dramatically different in strained and unstrained MoS₂ crystals, which indicates that nanopore formation reflects the distribution of the underlying structural defects that act as nanopore nucleation sites. Our experimental observations indicate that dislocations in MoS₂ play a role in preferential nanopore nucleation. We further explore guided defect introduction by exposing MoS₂ crystals to electron and laser beams. By varying the electron beam exposure dose prior to annealing MoS₂ in air, the nanopore formation density can be controlled.; Laser beam exposure, as another beam-based treatment, is also observed to locally enhance the etching of mechanically exfoliated WS₂ under a standard wet-transfer protocol. 
Finally, we explored guided growth on a laser-exposed WS₂ template and on electron-beam-exposed dielectric substrates. We show that MoS₂ preferentially nucleates at laser-exposed exfoliated WS₂ crystals, forming WS₂-MoS₂ heterostructures. We then further demonstrate that MoS₂ growth can be guided to customized patterns prepared by electron beam exposure on bare SiO₂ substrates. This thesis provides insights into the role of defects during nanopore formation, and new process approaches for guided TMD etching and growth, which serve as a concept generally applicable to other 2D materials systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 114-129).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127901</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Voltage control of electrical, optical and magnetic properties of materials by solid state ionic transport and electrochemical reactions</title>
<link>https://hdl.handle.net/1721.1/127898</link>
<description>Voltage control of electrical, optical and magnetic properties of materials by solid state ionic transport and electrochemical reactions
Huang, Mantao.
Reversible post-fabrication control of material properties enables devices that can adapt to different needs or environmental conditions, and brings additional levels of functionality, paving the way towards applications such as reconfigurable electronics, reconfigurable antennas, active optical devices and energy efficient data storage. One promising way of achieving this controllability is through solid-state ionic transport and electrochemical reactions in thin film structures, where the properties of materials can be electrically controlled by a gate voltage in an addressable way. Here we explore the use of such ionic gating to control the electrical, optical and magnetic properties of solid-state thin film layers, and show that large modifications can be achieved for a wide range of properties. We demonstrate a new type of three-terminal resistive switching device where the resistivity of a thin film conductive channel can be controlled by a gate voltage. We demonstrate solid-state ionic gating of the optical properties of metals and oxides and show the versatility of the approach by implementing voltage-controlled transmission, thin film interference, and switchable plasmonic colors. We also show that the approach allows for voltage control of ferrimagnetic order, demonstrating voltage-induced 180-degree switching of the Néel vector as a new way of magnetic bit writing. These findings extend the scope of voltage-programmable materials and provide insights into the mechanisms of voltage-controlled material properties by solid-state ionic transport and electrochemical reactions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 139-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127898</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling structure across length scales with directed assembly of colloidal nanoparticles</title>
<link>https://hdl.handle.net/1721.1/127895</link>
<description>Controlling structure across length scales with directed assembly of colloidal nanoparticles
Gabrys, Paul A.(Paul Anthony)
One of the promises of nanotechnology is the ability to create a bulk, designer material with its structure programmed at each length scale using deterministic control over the placement of each nanoscale component. Self-assembled nanoparticle colloids, particularly those directed by sequence-specific DNA hybridizations, have emerged as a promising building block for producing these designer materials from nanoparticles that arrange themselves into precise symmetries through mechanisms analogous to atomic crystallization. However, DNA-directed colloids and other self-assembled nanoparticle systems still struggle to realize the goal of arbitrary structure control at length scales larger than a few microns due to the complexity of forces impacting different scales simultaneously.; Utilizing existing atomic analogues for inspiration, this work extends the structure-defining nature of these programmable building blocks by imposing lithographic boundary conditions and devising processing techniques resembling those of atomic thin films and powders. Crystallization at an interface is explored, and preferential grain growth from a substrate is demonstrated to control large scale crystal texture. Full crystal orientation control is achieved by using standard nano-fabrication techniques to construct a lithographically-defined template for epitaxial growth that can define arbitrary macroscale shapes over millimeters. The resulting crystallization platform exhibits remarkable resiliency to lattice mismatch due to the 'soft' nature of the DNA ligands binding nanoparticles together. 
The understanding garnered from the DNA-grafted nanoparticle as a model system is extended to a colloid synthesized from a more scalable and robust directing polymer, polystyrene.; The unique advantages of this new building block enable the fabrication of truly bulk, 3D materials with arbitrary macroscale shape on the centimeter scale via sintering and post-processing of nanoparticle-based crystallites. The results of this work are nanoparticle-based materials with dictated structure from the nanoscale (crystallographic unit cell), through the microscale (crystallite size and orientation), to the macroscale (lithographically defined shape).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 319-334).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127895</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High accuracy computational methods for lithium ion battery materials</title>
<link>https://hdl.handle.net/1721.1/127893</link>
<description>High accuracy computational methods for lithium ion battery materials
Fadel, Eric R.(Eric Richard)
The ongoing research to improve the performance of lithium-ion batteries has required the study of increasingly complex physical and chemical phenomena. In this context, the use of computational tools to quantitatively assess these phenomena has proven crucial for advancing lithium-ion battery technology. However, recent areas of research, ranging from studying the diffusion of lithium ions across solid polymer or ionic salt electrolytes, to the calculation of the voltage curve and discharge rate for complex transition metal oxide electrodes, have pushed lithium-ion battery research beyond the framework of common computational methods, compromising the accuracy of these tools. Thus, there is an increasing need to use more accurate computational tools, or develop new ones, that can still be used in practice to design battery materials. This project presents how more accurate methods can be used to compute voltage curves for lithium-ion cathode materials, determine the voltage stability of organic electrolytes, or predict the conductivity of different electrolyte materials. The motivation for the use of higher accuracy methods is emphasized for each application by showing the limitations of commonly used methods. In particular, the achieved accuracy enables an enhanced understanding of the specific, complex physical and chemical phenomena at the heart of lithium-ion battery limitations, which is crucial to the design of better battery materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 101-114).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127893</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods to interrogate cells and their interactions with single-cell resolution</title>
<link>https://hdl.handle.net/1721.1/127892</link>
<description>Methods to interrogate cells and their interactions with single-cell resolution
Genshaft, Alexander S.
Only recently have molecular methods achieved high-quality and unbiased representations of diverse intracellular molecules at the single-cell level. With this technological advancement, researchers have begun deconvolving population-level measurements to understand whether prior observations were homo- or heterogeneous across the sample. In order to make multi-omics workflows compatible with low-input samples comprising a few to single cells, new methods are required. Here, we devise a scalable, integrated strategy for coupled protein and RNA detection in single cells. This method and other similar protocols enable researchers to dive deeper into cellular phenotypes while retaining single-cell resolution, critical for determining which transcriptional programs arise, in which cells, and alongside which other programs.; However, cellular phenotypes are not solely determined by the products of transcription and translation - cells are constantly receiving information from their environment, including direct contact with other cells, secreted biomacromolecules, and small molecules. To explicitly examine how a cell's spatiotemporal activity impacts its behavior, we developed and validated SPACECAT: a strategy to annotate, track, and isolate specific cells in a non-destructive, viability-preserving manner. To accomplish this goal, we created a novel photocaged viability dye and incorporated other photoactivatable fluorophores that we can combine to create five distinct fluorescence signatures. We show that the SPACECAT protocol is a powerful tool for targeting specific microenvironments to reveal phenotypes that would otherwise be obscured by bulk signatures. 
However, SPACECAT does not capture precise interaction history with defined cell types and cannot track secreted molecules. To exert more control over cellular interactions, we created a protocol that confines interacting cliques of cells to microwells, preventing cells or secreted molecules from leaving their well of origin, and isolating them from the many other cliques interrogated in parallel. By examining cliques under biological, chemical, and null control stimuli, we see distinct transcriptional programs that underlie immunological interaction between CD4+ T cells and antigen-presenting cells. Through these novel methods and their proof-of-principle applications, we enable researchers &amp; clinicians to delve further into their systems of interest.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127892</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding electrochemistry at the Molecular scale : molecular dynamics methods and applications</title>
<link>https://hdl.handle.net/1721.1/127891</link>
<description>Understanding electrochemistry at the Molecular scale : molecular dynamics methods and applications
Dwelle, Kaitlyn Anne.
The relatively new field of nano-electrochemistry stands to enable more efficient energy storage and electrochemical techniques. However, traditional mean-field models, which generally average over macroscopic detail, may be inappropriate for understanding electrochemistry at the nanoscale. We propose a combination of methods for the molecular dynamics simulation of constant-potential, electrochemically active devices and use these methods to reveal the importance of molecular character on nanoscale device behavior. For example, a macroscopic relationship between transference number and battery performance is shown not to hold in nanoscale cells, owing to their ability to support significant deviations from electroneutrality. This result demonstrates the necessity of carefully reconsidering macroscopic phenomenology when designing nanoscale systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 103-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127891</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights into future electric mobility</title>
<link>https://hdl.handle.net/1721.1/127890</link>
<description>Insights into future electric mobility
Hsieh, I-Yun Lisa.
Growing global awareness of the environmental impacts of combustion is accelerating the adoption of electric vehicles (EVs). However, future sustainable mobility cannot be achieved without substantial changes in vehicle technology, consumer behavior, infrastructure systems, and policy. Great impacts and uncertainties are anticipated during the transition towards electric mobility. This thesis examines key areas of interest such as battery techno-economic characteristics, the EV recharging ecosystem, the dynamics behind market evolution, the prospects and challenges of the transition to electrification, and the impacts of evolving EV policies. For example, given that battery prices have been dropping rapidly in the past several years, a recurring question is how much lower they can be expected to go. Greater production volumes and improvements in manufacturing efficiency will drive down costs, but prices will eventually stabilize as they approach the cost of the constituent materials. A two-stage learning-curve model is developed to investigate how essential materials, especially the expensive elements (lithium, nickel, and cobalt) used in current battery technologies, will constrain the declining trajectory of production costs and set practical lower bounds on battery prices. Another big uncertainty surrounding EVs is whether they could really create a cleaner planet. EVs avoid tailpipe emissions of CO2 and air pollutants from fossil fuel combustion but may lead to greater emissions from the upstream stage of electricity generation, especially in the world's largest EV market, China, where coal-fired power generation has been the backbone of the electricity supply. The current lifecycle emissions comparison for vehicles with different powertrains is presented, and how China's sustainable mobility policy will affect future climate change, air quality, and public health is also explored.
This thesis provides information that will help stakeholders anticipate and navigate some of the changes that lie ahead owing to a policy-driven shift from liquid fuels to electrification. The evaluation focuses mostly on the private passenger vehicle sector, which in part reflects a recognition that this segment is likely to respond most proactively to developments in advanced powertrains, alternative fuels, and environmental policies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 218-239).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127890</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering diatom peptides for the synthesis of silica nanomaterials</title>
<link>https://hdl.handle.net/1721.1/127889</link>
<description>Engineering diatom peptides for the synthesis of silica nanomaterials
Wallace, Andrea Kimi.
The ability to fabricate silica materials with highly organized nanostructures is in increasing demand across the medical, optical, energy, and mechanical fields. Diatoms, a class of eukaryotic algae, produce intricately patterned silica structures under ambient conditions through a process initiated by post-translationally modified silaffin peptides that nucleate silicic acid. Designing these peptides would enable the production of silica nanostructures with desired properties; however, the functional effects of the modifications are poorly understood. In this thesis, I use Escherichia coli to express and modify recombinant silaffin R5 peptide from the diatom Cylindrotheca fusiformis. A library of 38 enzymes is tested for R5 modifications in vitro, from which active methyltransferases, kinases, acetyltransferases, oxidases, and myristoyltransferases are identified from diatoms, humans, yeast, and bacteria. Modified R5 peptides are used for silica precipitation, and the impacts on particle size, shape, porosity, and surface area are quantified. I then use these individually characterized modifications to build synthetic enzyme pathways in vitro and in vivo, and demonstrate that introducing multiple modifications to R5 has additive effects on silica morphology. In the second part of this thesis, I apply the R5 peptide to synthesize silica-coated core-shell nanoparticles for a range of core materials (Fe₃O₄, TiO₂, ZnO, HfO₂, and Ta₂O₅), and show that the silica shell thickness can be tuned (2.3-120 nm) by altering the concentration of R5 used in the reaction. Together, these projects illustrate a design-driven approach for rapidly engineering and synthesizing silica nanostructures and multifunctional composite nanomaterials under ambient conditions, with potential applications in biomedicine, electronics, and photonics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 253-268).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127889</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovery of T cell receptor variable sequences from 3' barcoded single-cell RNA sequencing libraries</title>
<link>https://hdl.handle.net/1721.1/127888</link>
<description>Recovery of T cell receptor variable sequences from 3' barcoded single-cell RNA sequencing libraries
Tu, Ang A.(Ang Andy)
Heterogeneity of the immune system has increasingly necessitated the use of high-resolution techniques, including flow cytometry, RNA-seq, and mass spectrometry, to decipher the immune underpinnings of various diseases such as cancer and autoimmune disorders. In recent years, high-throughput single-cell RNA sequencing (scRNA-seq) has gained popularity among immunologists due to its ability to effectively characterize thousands of individual immune cells from tissues. Current techniques, however, are limited in their ability to elucidate essential immune cell features, including the variable sequences of T cell antigen receptors (TCRs) that confer antigen specificity. Incorporation of TCR sequencing into scRNA-seq data could identify cells with shared antigen recognition, further elucidating the dynamics of antigen-specific immune responses in T cells. In the first part of this thesis work, we develop a strategy that enables simultaneous analysis of TCR sequences and corresponding full transcriptomes from 3' barcoded scRNA-seq samples. This approach is compatible with common 3' scRNA-seq methods, and adaptable to processed samples post hoc. We applied the technique to identify transcriptional signatures associated with clonal T cells from murine and human samples. In both cases, we observed preferential phenotypes among subsets of expanded T cell clones, including cytotoxic T cell states associated with immunization against viral peptides. In the second part of the thesis, we apply the strategy to a 12-patient study of peanut food allergy to characterize T helper cell responses to oral immunotherapy (OIT).
We identified clonal T cells associated with distinct subsets of T helper cells, including Teff, Treg, and Tfh, as well as Th1, Th2, and Th17 signatures. We found that although the TCR repertoires of the patients were remarkably stable regardless of their clinical outcomes, Th1 and Th2 clonotypes were phenotypically suppressed while Tfh clonotypes were not affected by therapy. Furthermore, we observed that highly activated clones were less likely to be suppressed by OIT than less activated clones. Our work represents one of the most detailed transcriptomic profiles of T helper cells in food allergy. In the last part of the thesis, we leverage the simplicity and adaptability of the method to recover TCR sequences from previously processed scRNA-seq samples derived from HIV patients and a nonhuman primate model of TB. In the HIV study, we recovered expanded clonotypes associated with activated T cells from longitudinal samples from patients with acute HIV infections. In the TB study, we modified the primers used in the method to recover TCR sequences from T cells from TB granulomas of cynomolgus macaques. We identified not only expanded clonotypes associated with cytotoxic functions, but also clonotypes shared by clusters of activated T cells. In total, these results demonstrate the utility of our method when studying diseases in which clonotype-driven responses are critical to understanding the underlying biology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127888</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of class II peptide-MHC repertoires and recognition via large yeast-displayed libraries</title>
<link>https://hdl.handle.net/1721.1/127887</link>
<description>Determination of class II peptide-MHC repertoires and recognition via large yeast-displayed libraries
Rappazzo, Charles Garrett.
T cells occupy essential roles throughout the immune system to prevent and limit disease. As such, breakdowns in their function and recognition underlie poor clinical outcomes across diverse maladies including pathogen infection, cancer, autoimmunity, allergies, and transplant rejection. Yet, when properly directed, T cells drive potent protective and therapeutic responses in prophylactic vaccinations and novel immunotherapies. Therefore, understanding and harnessing T cell function and recognition is of great importance to improving patient care and addressing currently unmet clinical needs. The function and recognition of T cells are driven through their T cell receptors (TCRs), which bind with great specificity to peptide-MHCs (pMHCs), Major Histocompatibility Complex proteins displaying tissue- and disease-specific peptide antigens derived from their host cell or its surroundings. However, to specifically and comprehensively present and surveil antigens across highly divergent maladies, extreme diversity is required of both the population-level TCR and pMHC repertoires. Yet this same diversity, which drives T cell function, also confounds generalized understanding of these repertoires and of their recognition. Therefore, there has been considerable recent interest in the development and application of tools to comprehensively define, predict, and screen these repertoires and their recognition at high throughput. In this thesis, I both utilize and build upon these tools to define TCR and pMHC repertoires and explore their recognition, particularly with yeast-displayed pMHC libraries for CD4⁺ T cell recognition of class II pMHCs, and especially in the context of cancer. Using these technologies, I empirically define pMHC repertoires, explore the antigenic basis of TCR repertoire convergence in a preclinical tumor model, and explore the antigen reactivity of human T cells with clinical relevance.
While these results provide detailed insights into the specific TCRs and pMHCs studied, they also provide guidance for future avenues in the exploration of TCR and pMHC repertoires and their recognition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127887</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Massively parallel combinatorial microbiology</title>
<link>https://hdl.handle.net/1721.1/127886</link>
<description>Massively parallel combinatorial microbiology
Kehe, Jared Scott.
Reductionist biology of the 20th century established pure-culture methods and antibiotics as pillars of humankind's interaction with microbiology, igniting a revolution in medicine and biotechnology. The revolution was not without cost: by overlooking complex biological interactions, it introduced new problems, from the sharp rise in immune disorders to the antibiotic resistance crisis, that 21st-century tools must address. While 'omics methods have fundamentally expanded our understanding of biological complexity, we lack a generalized method for measuring how the parts of a complex system, such as the individual strains of a microbial community, interact with each other. In this thesis, I present kChip, a new platform for constructing massively parallel combinatorial arrays of these parts in order to measure their interactions directly. I describe how kChip has been used to reveal patterns in microbial community assembly, unearth minimal microbial combinations with desirable functions, and screen for compounds that potentiate antibiotic activity. I demonstrate how kChip can advance the development of new technologies like microbial consortia and combinatorial drug therapies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 203-216).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127886</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The hippocampal "Event Code" : implications from Descartes to Gridworld</title>
<link>https://hdl.handle.net/1721.1/127883</link>
<description>The hippocampal "Event Code" : implications from Descartes to Gridworld
Sun, Chen,Ph.D.Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
The brain codes continuous spatial, temporal, and sensory changes in daily experience. Recent studies suggest the brain also tracks experience as segmented subdivisions (events), but the neural basis for encoding events remains unclear. Here, I present our recent advances in understanding the encoding of distinct events at the single-cell level. Our preliminary work revealed distinct neural mechanisms for encoding different spatial contexts. Following this work, we designed a novel maze task for mice that permitted the isolation of neural signals tracking "events" as abstract and discrete entities, separate from sensory changes. This maze task was composed of four materially indistinguishable lap events. Using this maze, we reported hippocampal CA1 neurons whose activity was modulated not only by spatial location but also by lap number. These "event-specific rate remapping" (ESR) cells remained lap-specific even when the maze length was unpredictably altered within trials, suggesting that ESR cells treat lap events as fundamental units. The activity pattern of ESR cells was reused to represent lap events when the maze geometry was altered from square to circle, suggesting it helped transfer knowledge between experiences. ESR activity was separately manipulable from spatial activity, and may therefore constitute an independent hippocampal code: an "event code" dedicated to organizing experience by events as discrete and transferable units.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127883</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The constrained geometry of structures : optimization methods for inverse form-finding design</title>
<link>https://hdl.handle.net/1721.1/127853</link>
<description>The constrained geometry of structures : optimization methods for inverse form-finding design
Cuvilliers, Pierre(Pierre Emmanuel)
This dissertation aims to improve form-finding workflows by giving the designer more control over the obtained shapes. Traditional direct form-finding allows the designer to generate shapes for structures that must satisfy mechanical equilibrium when built; however, it produces shapes that are difficult to control. This dissertation shows how the design of constrained structural systems is better solved by an inverse form-finding process, in which the parameters and initial conditions of the direct form-finding process are automatically adjusted to match the design intent. By defining a general framework for the implementation of such workflows in a nested optimizer loop, the requirements on each component are articulated: the inner optimizer is a specially selected direct form-finding solver, and the outer optimizer is a general-purpose optimization routine. This is demonstrated with case studies of two structural systems, bending-active structures and funicular structures, both of which can lead to efficient long-span covering structures. For bending-active structures, the performance (speed, accuracy, reliability) of direct form-finding solvers is measured. Because the outer optimization loop in an inverse form-finding setup needs to rely on a robust forward simulation with minimal configuration, we find that general-purpose optimizers like SLSQP and L-BFGS perform better than domain-specific algorithms like dynamic relaxation. Using this insight, an inverse form-finding workflow is built and applied with a closest-fit optimization objective. For funicular structures, this dissertation first focuses on a closest-fit-to-target-surface optimization, giving closed-form formulations of the gradient and Hessian of the problem.
Finding closed-form expressions of these derivatives is a major blocking point in creating more versatile inverse form-finding workflows. The optimizer is then reimplemented in an automatic differentiation framework to produce an inverse form-finding tool for funicular surfaces with modular design objectives. This is a novel way of implementing such tools, exposing how the design intent can be represented by more complex objects than a target surface. By reproducing existing structures and generating more efficient funicular shapes for them, the possibilities of the tool are demonstrated in exploring the design space and making fine-tuned modifications, thanks to the fine control over the objectives representing the design intent.
Thesis: Ph. D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages [133]-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127853</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Independent fission yields of some xenon isotopes</title>
<link>https://hdl.handle.net/1721.1/127755</link>
<description>Independent fission yields of some xenon isotopes
Storms, Howard Albert,1935-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 1962.; Vita.; Includes bibliographical references (leaves 124-128).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127755</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Role of dislocations in germanium and silicon</title>
<link>https://hdl.handle.net/1721.1/127746</link>
<description>Role of dislocations in germanium and silicon
Kurtz, Anthony D.(Anthony David)
Thesis (Sc.D.) Massachusetts Institute of Technology. Dept. of Metallurgy, 1955.; Vita.; Bibliography: leaves 110-112.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127746</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impurity effects on the thermal conductivity of magnesium at low temperatures</title>
<link>https://hdl.handle.net/1721.1/127745</link>
<description>Impurity effects on the thermal conductivity of magnesium at low temperatures
Sharkoff, Eugene G.(Eugene Gibb)
Thesis (Ph.D.) Massachusetts Institute of Technology. Dept. of Physics, 1953.; Vita.; Bibliography: leaves 76-77.
</description>
<pubDate>Thu, 01 Jan 1953 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127745</guid>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local boiling of water and superheating of high pressure steam in annuli</title>
<link>https://hdl.handle.net/1721.1/127744</link>
<description>Local boiling of water and superheating of high pressure steam in annuli
Kennel, William E.
Thesis (Sc.D.) Massachusetts Institute of Technology. Dept. of Chemical Engineering, 1949.; Vita.; Bibliography: leaves 438-443.
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127744</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regularity for the Dirichlet problem in convex domains</title>
<link>https://hdl.handle.net/1721.1/127740</link>
<description>Regularity for the Dirichlet problem in convex domains
Fromm, Stephen J.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1992.; Includes bibliographical references (p. 84-85).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127740</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism and Applications of large and persistent photoconductivity in cadmium sulfide</title>
<link>https://hdl.handle.net/1721.1/127735</link>
<description>Mechanism and Applications of large and persistent photoconductivity in cadmium sulfide
Yin, Han,Ph.D.Massachusetts Institute of Technology.
Photoconductivity is the phenomenon whereby electrical conductivity changes as a result of the photoexcitation of new charge carriers. In some semiconductors, photoconductivity is accompanied by an enormous conductivity change and a long decay time after photoexcitation ceases. This effect is called large and persistent photoconductivity (LPPC). LPPC is due to the trapping of photo-generated minority carriers at crystal defects. Theory has suggested that anion vacancies in II-VI semiconductors are responsible for LPPC due to negative-U behavior, whereby two minority carriers become kinetically trapped by lattice relaxation following photoexcitation. By performing a detailed analysis of photoconductivity in CdS, we provide experimental support for this negative-U model. We also show that, by controlling sulfur deficiency in CdS, we can vary the photoconductivity of CdS films over nine orders of magnitude, and vary the LPPC characteristic decay time from seconds to 10⁴ seconds. Sulfur vacancies are deep donors at equilibrium in the dark, but convert to shallow donors in a metastable state under photoexcitation. We demonstrate two-terminal, all-electrical, thin-film resistive switching devices that exploit this defect-level switching (DLS) mechanism as a new way to control conductivity. We introduce a hole injection layer to inject holes into the deep donor levels in CdS and switch CdS into a "photoconductive" state. The device is in a low-resistance state as fabricated, and shows repeatable resistance switching behavior under electrical bias with no electro-forming. Results from the mechanism study rule out switching mechanisms based on mass transport and support our DLS hypothesis. LPPC is pronounced in n-type carrier-selective contact (CSC) materials in thin-film solar cells, but its effect is rarely recognized.
We numerically model the effect of LPPC in CSCs by switching defect levels between deep and shallow donor states. CSC photoconductivity can substantially affect solar cell performance: for instance, the power conversion efficiency of both CIGS and CdTe solar cells can be improved by over 4% (absolute) depending on the photoconductivity of the CdS CSC. The primary underlying cause is the influence of the CSC shallow donor density on the junction depletion region. Optimizing CSC photoconductivity may be effective in solar cell engineering across multiple platforms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 119-130).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127735</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A second generation URANS approach for application to aerodynamic design and optimization in the automotive industry</title>
<link>https://hdl.handle.net/1721.1/127734</link>
<description>A second generation URANS approach for application to aerodynamic design and optimization in the automotive industry
Xu, Liangyu,Ph.D.Massachusetts Institute of Technology.
In the U.S., transportation is responsible for approximately 70% of all petroleum consumption and is now the largest source of carbon emissions and air pollution. Aerodynamics is an important aspect of energy saving and emission reduction in the automotive industry. In the design stage, aerodynamic drag is minimized through optimization of the vehicle shape, and Computational Fluid Dynamics (CFD) has become an invaluable tool to support this process. In combination with advanced optimization methods, CFD promises to considerably reduce the carbon footprint of modern passenger and goods transportation. However, its success is severely limited by the poor description of complex unsteady turbulence at a practicable computational cost. For the flow past a car, unsteady turbulent flow structures are generated in the separation off the windshield, the mirrors, and the wheels, and in the wake of the car body. Capturing these turbulent structures is important for an accurate evaluation of the aerodynamic drag, especially for trains and freight trucks, where flow interaction between multiple bodies is involved and influences the overall drag. While high-fidelity CFD techniques like Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) offer the ability to resolve the necessary turbulent structures and therefore predict the drag with high accuracy, their computational costs are too high to allow efficient optimization. The Reynolds-Averaged Navier-Stokes (RANS) approach is the most widely used for its computational effectiveness and robustness, but current RANS models provide a poor description of complex unsteady turbulence. Hybrid models offer a potential balance between accuracy and computational cost.
Despite their increased accuracy, present hybrid models suffer from a lack of robustness, grid consistency, and ease of use. To address these issues and to better meet the industrial need for a robust, grid-consistent, and widely applicable hybrid model, a new approach has been proposed by Lenci [1] and Baglietto [2], which aims at locally resolving the flow structures in the framework of a second-generation URANS approach (2G-URANS) and is named STRUCT. The idea has shown the potential to provide improved accuracy, robustness, and mesh consistency for wall-bounded flows. However, the specific formulation delivered requires an averaging approach that introduces some application challenges, in particular being very sensitive to inlet boundary conditions and leading to spurious hybrid activation in open-boundary external flows. This thesis assembles and demonstrates a new approach to support effective aerodynamic design and optimization through the delivery of an average-free STRUCT implementation applicable to all flow conditions. The new model introduces a source term in the [epsilon] equation of the standard k-[epsilon] model, based on a time scale defined by the second invariant of the resolved velocity gradient tensor, and is therefore named the STRUCT-[epsilon] model. The new STRUCT-[epsilon] model is then validated on fundamental cases and on automotive-industry cases, demonstrating improved accuracy in comparison with the most commonly used Realizable k-[epsilon] model (RKE), at a comparable computational cost and with low mesh sensitivity.
To further reduce the computational cost and support effective aerodynamic design, the extension of the STRUCT-[epsilon] model to fast-running steady simulations is explored; the results show improved performance, with better agreement with the reference data than the RKE model. On this basis, the STRUCT-[epsilon] model is applied to the optimization of a simplified tractor-trailer to demonstrate its value: at a computational cost amenable to industrial applications, it provides improved accuracy for the drag evaluation, and as a result, the optimal solution it generates through optimization is more accurate than the one obtained with traditional RANS models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 201-212).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127734</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microfluidic and electronic detection of protein biomarkers</title>
<link>https://hdl.handle.net/1721.1/127733</link>
<description>Microfluidic and electronic detection of protein biomarkers
Wu, Dan, Ph.D., Massachusetts Institute of Technology.
Proteins are essential components of our bodies and play critical roles in biological processes. Quantifying protein levels can help characterize these biological processes and detect malfunctions. Thus, protein testing is widely utilized in clinical diagnostics. The current workflow for protein testing involves the processing of patient samples that are collected at healthcare facilities. These samples are sent to centralized laboratories and analyzed using sophisticated instruments. As a result, it can take days to deliver results. However, to manage acute conditions such as sepsis, rapid results are desired to ensure timely diagnosis and treatment. In contrast, point-of-care testing has shown great promise for rapid results through the use of miniaturized and inexpensive devices in non-laboratory settings. While there have been many advances in this field, only a small number of proteins (e.g., cardiac biomarkers) are covered by point-of-care devices, due to the profound challenge of maintaining sensitivity while miniaturizing the testing system and reducing assay time. In this thesis, this technical challenge was tackled to develop a point-of-care system for the measurement of interleukin-6, which can be used as a biomarker for sepsis or cytokine release syndrome. Inspired by the success of commercial glucose meters, an all-electrical system was developed, owing to the easy miniaturization and low cost of electronics. First, surface chemistries were developed to coat antibodies onto electrodes, and successful surface modification was validated via various techniques. Non-Faradaic impedimetric, Faradaic impedimetric, and chemiresistive label-free electrical biosensors were developed and examined. However, it was found that these biosensing platforms are susceptible to drift due to the non-specific nature of the signal transduction.
Then, building on the enzyme-linked immunosorbent assay (ELISA, the gold standard), a bead-based electronic ELISA was developed, in which beads expedite the testing and an electrical readout replaces the colorimetric readout of the gold standard ELISA. An integrated mathematical model was developed to comprehensively understand the assay and inform optimization. While providing a limit of detection comparable to the gold standard ELISA (&lt; 8 pg/ml), the bead-based electronic ELISA greatly reduces the assay time (40 min, as opposed to 5 hours in the conventional ELISA). A portable and multiplexed electrical readout system was then developed. In particular, a sequential multiplexing scheme was developed to incorporate multiplexing into the single-chip potentiostat. Although simple, it was found that this multiplexing methodology may change the state of the biosensors by discharging the double-layer capacitor and disrupting the mass transport of redox species. In this regard, mathematical models were devised to analyze the sensor behavior and guide design. Finally, an integrated and automated system was obtained by integrating the bead-based electronic ELISA on a microfluidic device and building an electronic interface to control the microfluidics. Validated with clinical patient samples, this system can provide a clinically relevant limit of detection for interleukin-6 within 25 min using less than 2.5 [mu]L of sample.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 133-146).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127733</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficiency and abstraction in task and motion planning</title>
<link>https://hdl.handle.net/1721.1/127732</link>
<description>Efficiency and abstraction in task and motion planning
Vega-Brown, Will(William Robert)
Modern robots are capable of complex and highly dynamic behaviors, yet the decision-making algorithms that drive them struggle to solve problems involving complex behaviors like manipulation. The combination of continuous and discrete dynamics induced by contact creates severe computational challenges, and most known practical approaches rely on hand-designed discrete representations to mitigate computational issues. However, the relationship between the discrete representation and the physical robot is poorly understood and cannot easily be empirically verified, and so many planning systems are brittle and prone to failure when the robot encounters situations not anticipated by the model designer. This thesis addresses the limitations of conventional representations for task and motion planning by introducing a constraint-based representation that explicitly places continuous and discrete dynamics on equal footing. We argue that the challenges in modelling problems with both discrete and continuous dynamics can be reduced to a trade-off between model complexity and empirical accuracy. We propose the use of abstraction to combine models that balance those two constraints differently, and we claim that by using abstraction we can build systems that reliably generate high-quality plans, even in complex domains with many objects. Using our representation, we construct and analyze several new algorithms, providing new insight into long-standing open problems about the decidability and complexity of motion planning. We describe algorithms for sampling-based planning in hybrid domains, and show that these algorithms are complete and asymptotically optimal for systems that can be defined by analytic constraints.
We also show that the reachability problem can be decided using polynomial space for systems described by polynomial constraints satisfying certain technical conditions. This class of systems includes many important robotic planning problems, and our results show that the decision problem for several benchmark task and motion planning languages is PSPACE-complete.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 142-157).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127732</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optically transparent, thermally insulating and soundproofing (OTTIS) aerogel for high-efficiency window applications</title>
<link>https://hdl.handle.net/1721.1/127731</link>
<description>Optically transparent, thermally insulating and soundproofing (OTTIS) aerogel for high-efficiency window applications
Strobach, Elise M.
Building heating, ventilation and air conditioning (HVAC) accounts for about 13.6 quadrillion BTU ("quads") per year, or 14% of the total energy consumption in the United States. Accounting for 39% of annual US carbon dioxide emissions, this consumption is directly related to the energy efficiency of building envelopes. Windows form an essential but lossy part of building envelopes, particularly during cold weather. Thermal losses in the U.S. from controlled indoor environments to outdoor climates account for $20 billion in energy each year, signifying a need for more energy-efficient windows. However, insulating windows represent a thermal challenge due to the need for both optical clarity and thermal performance. Successful window design requires an in-depth understanding of both fundamental heat transfer and the occupant needs of our buildings. One promising solution to these energy losses is the use of silica aerogel, a porous material with super-insulating properties. Previous studies have explored the use of aerogels for energy-efficient window glazing due to their low thermal conductivity and promise of transparency. However, adoption in the general window market has been limited by low optical clarity characterized by a blue haze. In this work, we present the development of a high-clarity silica aerogel, optimized for use in building windows, that achieves visible transmittance &gt; 98% and thermal conductivity &lt; 13 mW/mK. This performance was achieved by careful tailoring of the interconnected particle network, driven by optical modeling, to reduce the effective scattering size within the material below 10 nm in diameter.
Next, the clarity and thermal conductivity of the material were improved by optimization of the solution-gelation synthesis across over 300 unique samples and 80 recipes. This provided a framework for achieving a variety of low-haze aerogels with varied thermal, sound-proofing, and mechanical properties. After achieving high clarity through optimization of the solution-gelation chemical recipe, several 5 inch diameter double-pane prototypes were fabricated to measure the optical, thermal, and acoustic performance. Results indicate that by sealing high-clarity aerogel into the gaps of existing double-pane window designs, we can achieve a center-of-glazing U-factor of 0.20 BTU/h/ft²/F, which is 35-50% more insulating than current building codes across North America. These early thermal results and a production-scale techno-economic analysis indicate that the aerogel can achieve cost-effective thermal performance competitive with traditional double- and triple-pane windows. Additionally, the aerogel is able to withstand exposure to extreme conditions, such as temperature &gt; 200 °C, relative humidity &gt; 60%, and ultraviolet exposure for more than 6 months, without degradation of the nanostructure or optical quality. These results show a promising proof-of-concept design for an aerogel double-pane window that is capable of state-of-the-art performance without a prohibitive cost to consumers. Successful development and commercialization of this high-clarity aerogel has the potential to save billions of dollars in annual building energy losses while satisfying the diverse and complex needs of our buildings.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 115-120).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127731</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cell-free freeze-dried synthetic biology for wearable biotechnology applications</title>
<link>https://hdl.handle.net/1721.1/127730</link>
<description>Cell-free freeze-dried synthetic biology for wearable biotechnology applications
Soenksen Martinez, Luis Rubén.
Synthetic biology aims to develop modular genetic networks for computation, sensing, and control of biological systems, holding great promise for next-generation biosensing platforms. Similarly, advances in materials science have allowed for the design of substrates and textiles engineered to exhibit novel mechanical, electrical, and optical properties for sensing and actuation. Wearable biosensors using synthetic biology principles and smart materials could expand on this potential, especially as solutions for continuous, fine-grained monitoring of physiological status, disease states, and pathogen/toxin exposure that are difficult to assess with other methods. Despite this, only a few examples of synthetic biology sensors compatible with wearable use cases have been described, all of which rely on the use of live engineered bacteria with sustainment limitations. Thus, we report on the development of novel shelf-stable, genetically programmable, and highly sensitive wearable sensing platforms based on cell-free synthetic biology components freeze-dried into flexible substrates and textiles, as well as on a new class of smart programmable synthetic biology materials capable of reacting to environmental cues. These systems were designed to exhibit colorimetric, fluorescent, luminescent, electrical, or mechanical outputs that can be passively or actively interrogated within isolated modules or in larger-scale garments with wireless networking capabilities.
We functionally validated these platforms using a variety of synthetic biology circuits for detecting several relevant environmental exposure targets such as metabolites, chemicals, and pathogen-associated nucleic acids. These findings suggest that cell-free synthetic biology tools have the potential to enable highly programmable wearable systems for rapid on-body detection of, or adaptation to, external threats in first responders, warfighters, or clinical personnel, as well as the assessment of athletic performance and the monitoring of complex disease states.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis. "February 2020."; Includes bibliographical references (pages 163-173).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127730</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining phosphate levels in natural water using a novel electrochemical measurement device</title>
<link>https://hdl.handle.net/1721.1/127729</link>
<description>Determining phosphate levels in natural water using a novel electrochemical measurement device
Park, Gee Hoon.
Current measurements of the phosphorus level in natural water are based on the phosphomolybdenum blue (PMB) method. In this method, phosphate and molybdate ions form 12-molybdophosphoric acid (12-MPA), which is reduced to yield intensely coloured PMB, whose intensity is correlated with the phosphate concentration using spectrophotometry. Despite its well-established sensitivity and selectivity to the phosphate ion, commercially available in situ portable measurement devices suffer from large footprints and limited working time. This is mainly because the wet chemistry of the PMB method requires a constant supply of liquid reagents, whose volume determines the footprint and working time of the device. Such limitations of the existing methods make it difficult to access the temporal and spatial information on the phosphorus level in natural water, which is crucial for the control of eutrophication. In this thesis, we designed, fabricated, and evaluated two novel electrochemical phosphate detection devices that offer unique opportunities to be developed into portable, in situ, and automated phosphate detection devices. The detection of phosphate is based on the formation of 12-MPA, wherein reagents are supplied in situ by the anodic dissolution of molybdenum (Mo). The first version of the device, with two Mo electrodes in two separate chambers, demonstrated that reducing the sample volume of the device reduces the time of detection and the energy consumption per measurement based on Mo oxidation, when compared to the current state of the art (2 min and 900 mJ versus 70 min and 18 J, respectively).
The second version of the device is improved further by simplifying the system into a single chamber with a single Mo electrode, which additionally decreases the response time to 30 s and the energy consumption to 4 mJ. The experimental results with these two devices demonstrate the capability of phosphate determination (0.1 to 25 pM) in a high-conductivity background solution (0.1 M NaCl), such as seawater, without significant interference from silicate ions. In addition, the second version of the device broadens its application to other types of natural water with low conductivity, and provides a promising possibility of being further developed into an open-cell type sensor.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 157-177).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127729</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoengineered hierarchical advanced composites with nanofiber interlaminar reinforcement for enhanced laminate-level mechanical performance</title>
<link>https://hdl.handle.net/1721.1/127728</link>
<description>Nanoengineered hierarchical advanced composites with nanofiber interlaminar reinforcement for enhanced laminate-level mechanical performance
Ni, Xinchen.
At present, there is a need for novel, scalable, and high-performance structural materials that offer unprecedented combinations of stiffness, strength, and toughness at a low density, and that can serve in a variety of applications in the aerospace, transportation, defense, and energy industries. To date, composite materials, specifically advanced carbon fiber reinforced plastics (CFRPs), which are composed of continuous carbon microfibers of high specific stiffness and strength embedded in lightweight, relatively compliant polymers, have been among the most attractive materials and are used extensively in the aerospace sector. However, most CFRPs are fabricated by stacking plies in a layer-by-layer fashion, resulting in a weak polymer-rich region, known as the interlaminar region, at each ply interface that leads to poor properties through the laminate thickness. Although the mechanically superior microfibers are designed to be the primary load carriers, the much weaker polymer matrix causes the laminates to be prone to premature failure by interlaminar delamination, which negatively affects both in-plane and out-of-plane performance. This key shortcoming is known as the Achilles' heel of CFRPs, and it hinders their design and wider adoption in critical structural applications. In this dissertation, a novel nanoengineering approach to address the longstanding problem of weak ply interfaces in CFRPs is developed and demonstrated.
High densities (&gt;10 billion nanofibers per cm²) of uniformly distributed vertically aligned carbon nanotubes (A-CNTs) are placed between neighboring plies to bridge the weak polymer-rich interlaminar region in existing prepreg-based laminated composites, creating a hierarchical architecture termed "nanostitch". The effectiveness of nanostitching is evaluated via various mechanical tests, including short-beam shear (SBS), Mode I and II fracture, and double edge-notched tension (DENT), in all of which the nanostitched composites demonstrate enhanced mechanical performance. Furthermore, the multiscale reinforcement mechanisms resulting from the CNTs are elucidated via a variety of ex situ and in situ damage inspection techniques, including optical microscopy, scanning electron microscopy, lab-based micro-computed tomography, and in situ synchrotron radiation computed tomography (SRCT). Specifically, in SBS, despite no increase in static strength, a 115% average increase in fatigue life across all load levels (60 to 90% of static strength) is observed, with a larger increase of 249% in high-cycle fatigue (at 60% of static strength). In Mode I and Mode II fracture, it is revealed that the interlaminar crack bifurcates into the intralaminar region from the interlaminar precrack, and then propagates within the intralaminar region, parallel to the nanostitched interlaminar region, as an "intralaminar delamination" in steady state. This unique crack bifurcation phenomenon has never been previously observed and is attributed to the A-CNTs adding interlaminar toughness to a level that causes the interlaminar crack to bifurcate into the less tough intralaminar region.
In DENT, an 8% increase in ultimate tensile strength (UTS) is observed and, via in situ SRCT, is attributed to the A-CNTs suppressing critical interlaminar delaminations very close to final failure (greater than 90% of UTS). In addition to the positive reinforcement results observed for the nanostitched composites, a next-generation, higher volume fraction nanostitched composite with additional levels of beneficial hierarchy, termed "buckled nanostitch" or "nanostitch 2.0", is created by exploiting the unique buckling behavior displayed by patterned A-CNT forests under compression. This multilevel hierarchical architecture further enhances the composite mechanical performance: SBS strength by 7% and DENT strength by 28%, compared to the baseline composites. The dissertation not only presents a controllable, scalable manufacturing method to produce engineered structural materials that are hierarchically designed down to the nanoscale with enhanced mechanical performance, but also establishes key new understanding of the complex and coupled strengthening and toughening mechanisms acting at different scales, as well as their effects on macroscopic laminate-level mechanical properties. A particular focus has been the seminal use of in situ SRCT to study the effects of the hierarchical nanoscale reinforcements, and the methods established thus provide an experimental path forward for future work in this area. Together, these advances open up new opportunities for creating next-generation engineered materials with a suite of programmable properties by controlling their structures and constituents across multiple length scales.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 157-177).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127728</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and algorithms for parametrization, optimization and customization of sled hockey equipment and other dynamical systems</title>
<link>https://hdl.handle.net/1721.1/127727</link>
<description>Analysis and algorithms for parametrization, optimization and customization of sled hockey equipment and other dynamical systems
Liang, Youzhi, Ph.D., Massachusetts Institute of Technology.
A dynamical system, an ensemble of particles whose states evolve over time, can be described using a system of ordinary/partial differential equations (ODEs/PDEs). This dissertation presents fundamental investigations of the analysis and algorithms for the study of dynamical systems through parametrization, optimization, and customization. We develop and/or implement numerical algorithms for solving ODEs/PDEs, and data-driven statistical/machine-learning algorithms for physical inference and prediction. We further apply these methodologies to sled hockey, an adaptation of stand-up hockey that allows people with physical disabilities to participate in the game of ice hockey. First, we develop and implement numerical algorithms to study nonlinear dynamics described by 4th-order nonlinear PDEs. The nonlinear solvers apply multidimensional Newton's method with a Jacobian-free approach and a generalized conjugate residual (GCR) approach. Applying the algorithms to the study of elastic systems, we investigate the dynamics of hockey sticks as striking implements. We develop a mathematical model using an Euler-Lagrange equation to characterize the behavior of a hockey stick in the linear regime, and then apply this model to investigate the dynamic response of the stick throughout slap shots and wrist shots. We apply a modal decomposition method and decouple the resultant dynamics into kinetic and potential components. We further optimize the structures based on the dynamical analysis. Through testing with both elite and amateur sled hockey players, we find that final puck velocities with our prototype stick are on average over 10% higher compared to those achieved with commercially available sticks.
Second, we investigate the dynamics of a rigid system as an over-constrained implement. We propose two dynamical models for the hockey sled, using a trajectory-based modelling method and a state-space-based modelling method, which are used to study the dynamics of the propulsion for linear motion and of the tip-over and reset. We further propose a constrained optimization problem based on these models. Third, we develop and implement data-driven statistical and machine learning algorithms, including clustering algorithms for physical inference, regression algorithms for the Stribeck curve, and forecasting algorithms for the wear rate. In the context of the tribology of ice-metal contact, we design an experimental system to mimic the ice rink environment and to expand the experimental study of the friction coefficient over an extensive range of Hersey numbers, from 10⁻¹³ to 10⁻⁴. To build an understanding of the physics of friction, we perform a dimensional analysis and an asymptotic analysis for three regimes of friction: the boundary friction regime, the mixed friction regime, and the hydrodynamic lubrication regime. We further develop a pipeline for creating the modified Stribeck curve based on data: after feature extraction, the regime of each experimental result is identified via clustering, followed by regression constrained by the asymptotic analysis. Finally, we propose a methodology and algorithm to predict the wear rate subject to geometric constraints.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 157-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127727</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An injectable, biomaterial-based therapy to promote endogenous neural progenitor cells in a hemorrhagic stroke lesion</title>
<link>https://hdl.handle.net/1721.1/127726</link>
<description>An injectable, biomaterial-based therapy to promote endogenous neural progenitor cells in a hemorrhagic stroke lesion
Love, Christopher J., Ph.D., Massachusetts Institute of Technology.
Stroke is the second leading cause of death and disability worldwide, with prevalence and resulting costs projected to increase. There are few available treatments, and their applicability is limited by the type of stroke and to the initial hours after onset. New treatments are needed to address post-stroke disability and the needs of survivors with a low quality of life. An emerging therapeutic concept is the direct injection of a biomaterial-based therapy into the stroke lesion to restore neural elements for improved function. The principal objective of this thesis was to evaluate the ability of an injectable, gelatin-based biomaterial (gelatin-hydroxyphenylpropionic acid, Gtn-HPA) incorporating epidermal growth factor (EGF) to increase the number of endogenous nestin-positive neural progenitor cells (NPCs) in the lesion. In a well-validated intracerebral hemorrhagic (ICH) stroke model in rats, NPCs were quantified and compared to ICH-only controls at 2 and 4 weeks post-injection. At both time points, there was a ~10-fold increase in NPCs in the region of the biomaterial-treated lesion compared to the untreated control. Observations extended to 10 weeks post-injection revealed a persistence of the EGF-bearing Gtn-HPA, with continued infiltration of NPCs deeper into the biomaterial-filled lesion. To determine the effects of the specific biomaterial and growth factor combination, two additional groups were tested: an alternative hydrogel (RADA16 self-assembling peptides) incorporating EGF, and an alternative mixture of growth factors extracted from endogenous blood platelets (platelet-rich plasma lysate) and incorporated into Gtn-HPA. Only Gtn-HPA incorporating EGF was able to significantly increase the number of NPCs in the stroke region.
In a second objective, towards clinical translation, the target of the proposed therapy was characterized by examining the morphology and composition of human postmortem stroke brains. In chronic hemorrhagic lesions, an atypical porous collagen matrix was observed, prompting further questions on the possible effect of lesion constituents on the injectable biomaterial. In a series of rheology experiments, a third objective was to determine the effects of blood elements on the gelation time and storage modulus of Gtn-HPA. Blood was found to impede gelation of the biomaterial, most likely through a catalase-promoted elimination of the catalyst that enables covalent crosslinking. These results inform the types of lesions amenable to therapy and the potential need to manage lesion constituents (e.g., by aspiration) prior to injection of the biomaterial. The results of this thesis motivate and guide further study in a large animal model to validate Gtn-HPA/EGF promotion of NPC infiltration into the ICH stroke lesion in a larger brain and to demonstrate the attendant improvement in function.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127726</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Open source hardware entrepreneurship</title>
<link>https://hdl.handle.net/1721.1/127724</link>
<description>Open source hardware entrepreneurship
Li, Zhuoxuan.
Having overturned the traditional producer-led, in-house production model of software, open source has entered the physical world and started to change the development and commercialization process of physical products. Will open source diversify the hardware world as it did the software world 20 years ago? Since the mid-2000s, engineer-entrepreneurs have been observed to purposefully abandon the intellectual property rights to their products and license the designs under open source licenses. As a consequence, the public is allowed to participate in the product design process from an early phase and to interact with firms in an open, transparent way. It is reasonable to consider this extreme openness a high-risk move for hardware startups, which are normally resource-scarce, capital-intensive, and loosely organized. The research questions are thus: how do open source hardware firms generate profit and manage risk? Can the open source model be a sustainable hardware development model in an entrepreneurial setting? Using data collected from 66 open source hardware firms across 21 countries over 4 years, these questions were answered through a series of four research projects. In brief, the success of the open source model in entrepreneurial activities is the result of the dynamic design of organizational openness and community governance mechanisms according to the firm's business model and the community's social needs, so that firms are able to exploit the community value brought by being open and to mitigate the associated risks. Open source hardware entrepreneurship can serve as an extreme application of open innovation and user innovation theories in hardware venture creation, and we hope this work serves as a pilot study of this emerging socio-technological phenomenon.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF of thesis.; Includes bibliographical references (pages 136-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127724</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seismic and numerical constraints on the formation and evolution of ocean lithosphere</title>
<link>https://hdl.handle.net/1721.1/127722</link>
<description>Seismic and numerical constraints on the formation and evolution of ocean lithosphere
Mark, Hannah F.
This thesis explicates aspects of the basic structure of oceanic lithosphere that are shaped by the processes that form the lithosphere. The strength of lithospheric plates relative to the underlying mantle enables the surface plate motions and plate boundary processes that characterize plate tectonics on Earth. Surprisingly, we have a relatively poor understanding of the physical mechanisms that make the lithosphere strong relative to the asthenosphere, and we lack a reference model for ordinary lithospheric structure that can serve as a baseline for comparing geophysical observations across locations. Chapters 2 and 3 of this thesis investigate the seismic structure of a portion of the Pacific plate where the simple tectonic history of the plate suggests that its structure can be used as a reference model for oceanic lithosphere. We present measurements of shallow azimuthal seismic anisotropy, and of a seismic discontinuity in the upper mantle, that reflect the effects of shear deformation and melting processes involved in the formation of the lithosphere at mid-ocean ridges. Chapter 4 uses numerical models to explore factors controlling fault slip behavior on normal faults that accommodate tectonic extension during plate formation.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF of thesis.; Includes bibliographical references (pages 151-174).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127722</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the redistribution of jet energy in PbPb collisions at the LHC with CMS</title>
<link>https://hdl.handle.net/1721.1/127721</link>
<description>Mapping the redistribution of jet energy in PbPb collisions at the LHC with CMS
McGinn, Christopher Francis.
Quenched jets produced in heavy ion collisions at the LHC and reconstructed with the CMS detector are studied to understand the nature of interactions between hard-scattered partons and the simultaneously produced hot and dense medium, the Quark-Gluon Plasma (QGP). Jets are objects with color charge evolving through many energy scales, making them an excellent tool for scattering experiments in the QGP, with the potential to resolve quasiparticle structure and induce medium response. The redistribution of jet energy is quantified in two ways: measurement of the transverse momentum (pT) of final-state particles projected onto the dijet azimuthal axis, and measurement of jet production cross sections in PbPb and pp collisions as a function of jet radius. The missing momentum shows recovery of lost energy when moving beyond the jet cone for a fixed collection of jets, approaching full recovery at ... A jet radius scan of jet production cross sections shows consistent suppression in PbPb when compared to appropriately scaled pp at all radii, though less suppression is observed with increasing jet resolution parameter R. In combination, the results imply that while jet energy lost to medium interactions can be found by looking beyond the jet cone, the substantial changes to the jet population compared to pp at each studied R lead to sustained spectral suppression even at the largest cone size.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 195-202).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127721</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spin-orbit coupled Bose gases</title>
<link>https://hdl.handle.net/1721.1/127720</link>
<description>Spin-orbit coupled Bose gases
Shteynas, Boris.
Quantum simulation is a very active and growing field. Various quantum systems can be used to emulate existing materials in an accurate and controllable way, as well as to generate new states of matter that have not been found in the real world but whose existence does not contradict the fundamental laws of physics. Ultracold atoms form a perfect system in which to realize idealized models and study physical mechanisms that stand out clearly. Recent efforts have been made to simulate artificial gauge fields with ultracold atoms, including spin-dependent gauge fields such as spin-orbit coupling. Motivated by this goal, our lab explored several approaches to generating a one-dimensional spin-orbit coupling interaction, which has a rich phase diagram and plays an important role in topological insulators, the quantum spin Hall effect, and spintronics. The first method we developed allowed us to detect a stripe phase by dressing Bose-Einstein condensates with an optical superlattice and Raman beams. The observed density modulation in the ground state meets the definition of the long-awaited supersolid state of matter. The second approach was to generate spin-orbit coupling without the use of lasers. This method is based on the idea of periodically driving the quantum system and dressing its evolution with fast micromotion, often referred to as Floquet engineering. Our experiment provided an insightful pedagogical example of what Floquet engineering is capable of: we endowed a low-energy radio-frequency photon with tunable momentum. When dressed with recoil momentum, the interaction of a radio-frequency photon with an atom occurred in a Doppler-sensitive way. We also demonstrated how to tune the momentum and flip its direction.
In this thesis, I first describe the experiments done in the optical superlattice. Then I discuss the behavior of periodically driven classical and quantum systems and provide a detailed analysis of how a radio-frequency photon can be magnetically dressed with tunable momentum. The experiments we carried out demonstrated novel methods for generating spin-dependent gauge fields and provided pedagogical examples and interpretations of the evolution of periodically driven systems. The scheme of periodically driven atoms inspired a theoretical study of heating in Floquet systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 147-151).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127720</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lithium deposition and stripping in solid-state battery via Coble creep</title>
<link>https://hdl.handle.net/1721.1/127717</link>
<description>Lithium deposition and stripping in solid-state battery via Coble creep
Wang, Ziqiang, Ph.D., Massachusetts Institute of Technology.
Solid-state Li metal batteries require accommodation of the electrochemically generated mechanical pressure inside the Li metal. Through in situ transmission electron microscopy experiments on Li and Na deposition/stripping in mixed ionic-electronic conductor (MIEC) hollow tubules, this thesis shows the intriguing result that (a) Li metal can flow and retract inside 3D MIEC channels as a single crystal; (b) Coble creep dominates via interfacial diffusion along the MIEC/metal phase boundary; and (c) this MIEC electrochemical tubular matrix can effectively relieve stress, maintain electronic and ionic contact, eliminate solid-electrolyte interphase (SEI) debris, reduce the possibility of "dead lithium", and allow the reversible deposition/stripping of Li metal across a distance of many microns for 100 cycles. This thesis proposes quantitative design rules for the MIEC electrochemical cell and shows that interfacial diffusion greatly liberates MIEC material choices when using ~100 nm wide and 10-100 μm deep channels. A centimeter-scale full cell of ~10¹⁰ MIEC cylinders/solid electrolyte/LiFePO₄ shows a high capacity of ~164 mAh/g(LiFePO₄) and almost no degradation over 50 cycles, starting with 1x excess Li.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 104-107).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127717</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolutionary and structural signatures of protein-coding function : synonymous acceleration, read-through, and structural impact of mutations</title>
<link>https://hdl.handle.net/1721.1/127716</link>
<description>Evolutionary and structural signatures of protein-coding function : synonymous acceleration, read-through, and structural impact of mutations
Wolf, Maxim Y., Ph.D., Massachusetts Institute of Technology.
In this thesis I observe evolutionary signatures in coding regions to: (1) understand the sources of highly mutable coding regions in mammals; (2) elucidate a new candidate function for a stop-codon readthrough candidate gene, BRI3BP; and (3) show how rapid sequence-based structure approximations can help predict the structural impact of amino-acid changes. First, I searched for deviations from the evolutionary signatures of coding regions to recognize synonymous acceleration elements (SAEs) in protein-coding genes. I showed that these are driven by an increased mutation rate, which persists in the human lineage, in otherwise evolutionarily constrained protein-coding regions, providing an important resource for better characterizing protein-coding constraint in mammals and within humans. Second, I combined evolutionary signatures at the protein-coding and protein-folding levels to characterize the functional implication of stop-codon readthrough in BRI3BP. I showed that this readthrough region has conserved, spaced hydrophobic residues that pattern-match to the C-terminal helix, forming a coiled-coil-like domain. This change alters BRI3BP function from pro-growth to pro-apoptotic, similarly to VEGF-A, suggesting that readthrough-triggered apoptosis may represent a general mechanism for limiting the growth of cells with aberrant ribosomal termination. Third, I used rapid sequence-based approximation of residue burial to predict the structural impact of amino-acid alterations, and showed that the prediction improves over using only the hydrophobicity change of the residue. Overall, my work demonstrates how evolutionary and structural signatures can be used to predict highly mutable gene regions, readthrough function, and the structural impact of mutations.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 87-90).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127716</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Invention and implementation of technologies for continuous flow synthesis</title>
<link>https://hdl.handle.net/1721.1/127715</link>
<description>Invention and implementation of technologies for continuous flow synthesis
Russell, Mary Grace.
In this thesis, I first optimized a synthesis of rufinamide, an important epilepsy medication. This convergent synthesis generates two reactive intermediates in situ (an aryl azide and a propiolamide) and then combines them in a regioselective click reaction using copper tubing as the catalyst. Next, I optimized a synthesis of nicardipine, which is prescribed to treat high blood pressure. The nature of the project required that the final product be relatively pure (&gt;90%) so that it could be crystallized from the reaction mixture. Nicardipine was synthesized in three steps but in two flow reactors, with one reactor carrying out two of the steps. The reaction mixture was then purified using two in-line aqueous extractions: first, the reaction stream was washed with HCl to form the nicardipine salt and wash away polar compounds; then, the product was extracted into the aqueous layer using a 1:1 water/DMSO mixture. Finally, the synthesis was scaled up and run in the system created in collaboration with the Jensen and Myerson labs. Next, a fully continuous synthesis of linezolid was optimized and run. The synthesis targeted the challenging amide epoxide intermediate, which rapidly cyclizes into unwanted oxazolines. We were able to circumvent this side reactivity by masking the nucleophilic amide N-H, quenching the nitrilium resulting from a Ritter-type reaction with 2-propanol to produce the imidate. After accessing the masked amide epoxide, linezolid was produced by nucleophilic addition to the epoxide with the aniline made from a nucleophilic aromatic substitution (SNAr)/reduction sequence. Finally, late-stage oxazolidinone formation produced linezolid in 73% yield with a 27-minute longest linear sequence.
Next, I contributed to a system that automatically optimized and analyzed organic reactions in continuous flow. This system, developed in collaboration with the Jensen lab, fully integrated software, hardware controlling the continuous platform, and in-line analytics. After the chemist provided the desired chemical space, the system could optimize a reaction without any manual intervention. Finally, I developed a monolithic cellular solid made of functionalized silica for catalyst support. This system could solve some of the problems associated with packed-bed reactors, including catalyst deactivation due to channeling or clogging of the reactor. This type of catalyst support could be applicable to a large number of catalysts by attaching the catalyst to silane side chains with appended functionality. Portions of this thesis have been published in the following articles co-written by the author and have been reprinted and/or adapted with permission from their respective publishers. Zhang, P.; Russell, M. G.; Jamison, T. F. "Continuous Flow Total Synthesis of Rufinamide," Org. Process Res. Dev. 2014, 1567-1570. © 2014 American Chemical Society. MGR ran the optimization of the synthesis as well as the isolation and characterization of the final product. PZ wrote the manuscript and validated the results under TFJ's guidance. Zhang, P.; Weeranoppanant, N.; Thomas, D. A.; Tahara, K.; Stelzer, T.; Russell, M. G.; O'Mahony, M.; Myerson, A. S.; Lin, H.; Kelly, L. P.; Jensen, K. F.; Jamison, T. F.; Dai, C.; Cui, Y.; Briggs, N.; Beingessner, R. L.; Adamo, A. "Advanced Continuous Flow Platform for On-Demand Pharmaceutical Manufacturing," Chem. Eur. J. 2018, 24, 2776-2784. DOI: 10.1002/chem.201706004. © 2018 John Wiley &amp; Sons, Inc. MGR optimized the synthesis of nicardipine and ran the synthesis in the synthesis frame. PZ, HL, LPK, CD, and RLB all worked to develop chemistry for the syntheses of the different drug targets.
NW, DAT, and AA worked to develop the upstream synthesis unit as well as necessary undeveloped components. KT, TS, MM, YC, and NB worked to develop the continuous recrystallization unit and purified the drug targets. TFJ, KFJ, and ASM provided instrumental guidance to the teams. Russell, M. G.; Jamison, T. F. "Seven-Step Continuous Flow Synthesis of Linezolid Without Intermediate Purification," Angew. Chem. Int. Ed. 2019, 58, 7678-7681. DOI: 10.1002/anie.201901814. © 2019 John Wiley &amp; Sons, Inc. All synthetic work was carried out by MGR under TFJ's guidance. Bédard, A.-C.; Adamo, A.; Aroh, K. C.; Russell, M. G.; Bedermann, A. A.; Torosian, J.; Yue, B.; Jensen, K. F.; Jamison, T. F. "Reconfigurable System for Automated Optimization of Diverse Chemical Reactions," Science 2018, 361, 1220-1225. © 2018 American Association for the Advancement of Science. Reprinted with permission from AAAS. MGR and ACB worked together to run the various optimizations as well as the substrate scopes. AAB developed initial conditions for several of the reactions. AA developed the system with JT and BJ's assistance. KCA integrated the system with the software and modeled the optimization protocols. KFJ and TFJ provided instrumental guidance to the teams. Leibfarth, F. A.; Russell, M. G.; Langley, D. M.; Seo, H.; Kelly, L. P.; Carney, D. W.; Sello, J. K.; Jamison, T. F. "Continuous-Flow Chemistry in Undergraduate Education: Sustainable Conversion of Reclaimed Vegetable Oil into Biodiesel," J. Chem. Ed. 2018, 95, 1371-1375. DOI: 10.1021/acs.jchemed.7b00719. © 2018 American Chemical Society. MGR and DML developed and optimized the chemistry. FAL wrote the manuscript and the laboratory experiment. MGR, HS, and LPK taught the experiment. DWC provided assistance. JKS and TFJ provided guidance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2019; Cataloged from the PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127715</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rational hydrogel design for point-of-care bioassays</title>
<link>https://hdl.handle.net/1721.1/127714</link>
<description>Rational hydrogel design for point-of-care bioassays
Shapiro, Sarah Jane, 1991-
As the global disease burden shifts increasingly towards chronic diseases, there is a need for improved diagnosis and monitoring so that patients can get the care they need. This is particularly evident in the developing world, where many people live far from diagnostic laboratories. Point-of-care diagnostics are tests that can be run in doctors' offices, clinics, and patient homes. These tests must be rapid, so that they can be run while the patient waits for results. Established point-of-care technologies are largely centered on lateral flow assays. Hydrogel microparticles have been used extensively for bioassays due to their nonfouling nature and their ability to be functionalized with different types of biomolecules. Here, we use polyethylene glycol hydrogel particles to develop point-of-care bioassays. We focus on two different classes of biomarker: proteins and microRNA (miRNA). Proteins are well-established clinical biomarkers that are regularly tested to diagnose a number of different diseases. miRNA are emerging biomarkers, discovered within the past thirty years, whose dysregulation patterns are implicated in a wide variety of diseases. The aim of this thesis is to enable hydrogel-based point-of-care detection of miRNA and proteins by developing and applying theory to aid in the rational design of the bioassay. First, we establish a theoretical framework to investigate the key factors that influence signal in hydrogel-based rapid bioassays. By developing scaling arguments for the flux of target into the hydrogel, we find that the key factors are the reaction rate constant, the diffusion coefficient of the target in the gel, the probe concentration, the target concentration, the assay time, and the shape of the hydrogel. By changing the hydrogel particle shape, we are able to decrease the limit of detection of a protein assay by a factor of six.
We then apply the theory developed for hydrogel signal to an assay for microRNA. Using the theory, we design hydrogels that enable multiplexed detection of miRNA directly from serum in a 40-minute assay with a clinically relevant limit of detection. The assay requires only minimal preprocessing of the serum, making it useful for point-of-care applications. Leveraging our theoretical knowledge, we also develop a new assay format by incorporating hydrogels into fibrous substrates such as nitrocellulose, glass fiber membranes, and silk fabric, and demonstrate their utility for bioassays. We show that these constructs can be used for the detection of both miRNA and proteins. This work combines the fields of flexible fibrous materials and lithographic patterning to directly pattern hydrogels of varying shape and function within fibrous substrates. The work presented in this thesis demonstrates the utility of hydrogels for point-of-care applications. We believe this work can be leveraged in the future to develop tests for additional biomarkers and can be combined with advances in fluorescence imaging and portable heating to create point-of-care devices that quickly and reliably quantify proteins and miRNA from complex samples, enabling earlier diagnosis of disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 125-146).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127714</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling C1-based bioconversion with metabolic engineering</title>
<link>https://hdl.handle.net/1721.1/127713</link>
<description>Enabling C1-based bioconversion with metabolic engineering
Woolston, Benjamin Michael.
Single-carbon (C1) substrates, such as synthesis gas and methanol, are attractive feedstocks for biochemical processes, as they are widely available, can be produced renewably, and do not compete with the food supply. However, their use in industrial bioprocessing remains limited, primarily because the microbes that utilize these substrates are poorly characterized biochemically and limited tools exist for their genetic modification. This leaves the metabolic engineer with a choice: develop genetic tools to enable engineering in the desired host, or import the relevant catabolic pathway into a more tractable organism, such as Escherichia coli. This thesis explores both options within the context of developing strains for the conversion of C1 substrates into value-added chemicals and fuels. Clostridium ljungdahlii is an acetogen that grows autotrophically on synthesis gas (CO, H₂, and CO₂) using the Wood-Ljungdahl pathway and is a promising candidate for non-photosynthetic CO₂ fixation. In the first section, we extended its primitive genetic tools by developing a CRISPRi system for the targeted knockdown of specific genes. Constitutive downregulation of several genes with putative roles in energy conservation and carbon flux by up to 30-fold was demonstrated, and the associated phenotypes were analyzed. Optimization of the promoter controlling dCas9 expression allowed for inducible knockdown, paving the way for dynamic metabolic control strategies to redirect carbon flux in engineered strains.
To demonstrate this concept, several variants of a heterologous pathway for the biosynthesis of 3-hydroxybutyrate (3HB) were constructed to probe 3HB production in the wild-type background and with various CRISPRi plasmids. The CRISPRi system represents a valuable contribution to the metabolic engineering field for its ability to redirect carbon flux, and is also useful to the microbiology community for probing gene function to answer open questions in the biochemistry underlying the Wood-Ljungdahl pathway. Efforts to develop genetic tools for the related acetogen Moorella thermoacetica are also described. To explore the alternative approach of importing a single-carbon catabolic pathway into a tractable host, in the second section E. coli was engineered to metabolize methanol. Screening various candidates for the three heterologous pathway enzymes enabled robust incorporation of ¹³C-labeled methanol into central carbon metabolism. To further improve methanol assimilation, a kinetic-thermodynamic modeling framework was developed and combined with novel isotopic tracing experiments to probe potential pathway limitations. Flux leakage from the cyclical ribulose monophosphate (RuMP) pathway was identified as the primary bottleneck, as it led to the build-up of the toxic intermediate formaldehyde and ablation of the thermodynamic driving force for methanol oxidation. Strategies were developed to re-wire central metabolism accordingly, which restored the driving force and identified the kinetics of the first enzyme of the pathway, methanol dehydrogenase (MDH), as the next limitation. These results represent the first systematic analysis of flux limitations in E. coli engineered for methanol metabolism and provide clear targets for further metabolic engineering to enable synthetic methylotrophy.
Finally, the development of a formaldehyde biosensor in support of evolutionary approaches to enhance MDH activity and partition carbon flux between formaldehyde assimilation and growth is described.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2017; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 235-261).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127713</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein engineering of targeted cancer therapies</title>
<link>https://hdl.handle.net/1721.1/127712</link>
<description>Protein engineering of targeted cancer therapies
Santos, Michael Keith San Diego.
Protective antigen (PA), the pore-forming component of anthrax toxin, has emerged as a platform for the development of cancer therapies because of its versatility and robust ability to translocate proteins into a cell's cytosol. More recently, new techniques for modifying PA have enabled it to be retargeted to receptors of interest via fusion with existing protein binders. There is a vast library of potential binders for PA based on natural or novel protein scaffolds generated by the field of protein engineering. This has allowed new approaches in which tumor cells are targeted for cytosolic delivery of toxins as a therapeutic strategy. In our work, we sought to leverage the anti-tumor properties of an antibody, Elv3, to retarget PA to the epidermal growth factor receptor (EGFR). This PA construct was shown to be capable of translocating a recently discovered protease, Ras and Rap1 Specific Protease (RRSP), which cleaves and inactivates the cytosolic signal effector Ras. We demonstrated that when Ras is inhibited in this manner, downstream growth signaling through pERK is ablated and the health of a pancreatic cancer cell line (AsPC-1) is affected. Our results suggest that this retargeted PA, mPA-Elv3, efficiently translocates cytotoxic material into EGFR-positive tumor cells and thus presents a possible avenue for the development of a potent therapeutic. Using the same approach, we also took another previously engineered antibody, sm3e, and expressed it as a fusion to PA to confer specificity to carcinoembryonic antigen (CEA). Though CEA is not typically internalized, we demonstrated that this retargeted PA (mPA-sm3e) retained the properties of endocytosis and translocation and was able to deliver toxins that inhibit the proliferation of AsPC-1 tumor cells. Finally, the retargeted PA variants, mPA-Elv3 and mPA-sm3e, were further characterized for tumor growth inhibition in mouse models.
Nude mice were treated with the engineered PA variants targeting EGFR and CEA to test the delivery of toxins into the cells of subcutaneous tumors. Initial results were promising, and future work should aim at additional studies confirming these findings in mouse models. Our work has demonstrated that protein engineering can be used to retarget PA against tumor cells with positive results. We believe that the modularity and versatility of this retargeting strategy hold great potential for the design of anti-cancer therapeutics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2017; Cataloged from the PDF of thesis.; Includes bibliographical references (pages 99-106).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127712</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular, cellular, and circuit analysis of C. elegans spitting behavior</title>
<link>https://hdl.handle.net/1721.1/127711</link>
<description>Molecular, cellular, and circuit analysis of C. elegans spitting behavior
Sando, Steven R.
To identify the neural and molecular substrates of animal behavior and understand the principles by which they function is a major goal of neurobiology. One simple system for such behavioral studies is the pharynx of the nematode worm C. elegans, which contracts rhythmically (pumps) to ingest food. The connectivity (connectome) of the 20 pharyngeal neurons has been completely determined, making the pharynx one of the anatomically simplest and best-defined neuromuscular systems known to science. Previous work showed that ultraviolet (UV) light interrupts and modulates the pharyngeal pumping rhythm. This modulation requires the gustatory receptor orthologs lite-1 and gur-3. Because lite-1 and gur-3 control a similar pharyngeal response to hydrogen peroxide, the pharyngeal response to light was proposed to be a gustatory response to noxious chemical products of photooxidation (Bhatla and Horvitz, 2015). Here, we report that UV light induces a novel pharyngeal behavior: the spitting of the pharyngeal contents from the pharynx. Using this behavior as a model to study how a simple nervous system encodes an inversion of function (i.e., switching from feeding to spitting), we identified the biomechanical mechanisms of and neural circuitry for spitting. Spitting results from the sustained contraction of pharyngeal muscles pm1, pm2, and/or pm3. This contraction opens the food-trapping pharyngeal valve that normally captures food during feeding, resulting in spitting. Spatially restricted calcium increases in pm1, pm2, and/or pm3 colocalize with the site of contraction. As reported previously (Bhatla et al., 2015), spitting requires the M1 pharyngeal neuron, which directly innervates pm1, pm2, and pm3. M1 responds to UV light with calcium increases, and activation of M1 produces spitting. UV light-induced spitting requires lite-1 and gur-3; gur-3 functions in the I2 and I4 pharyngeal neurons and in at least one other undetermined location to produce M1-dependent spitting.
We propose that M1 acts as a functional hub for spitting, upon which multiple other neurons in the pharyngeal nervous system converge. Our work identifies a simple motif that controls an inversion of behavioral function and suggests that the M1 neuron is an important point of control in the pharyngeal network.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from the PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127711</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering polymer biomaterial interfaces for promoting cellular morphogenesis</title>
<link>https://hdl.handle.net/1721.1/127710</link>
<description>Engineering polymer biomaterial interfaces for promoting cellular morphogenesis
Sofman, Marianna.
Three-dimensional in vitro tissue and organ cultures hold immense promise as models of human pathophysiology and stand to make a significant impact on the process of drug discovery and development. Many existing model systems do not capture the relevant complexity of the native tissue environment, relying on poorly characterized natural extracellular matrices (ECMs) for growth and development. These models are notably limited by the lack of vasculature, a key functional component of most human tissues that enables oxygen and nutrient exchange and facilitates paracrine signaling with surrounding epithelial cells. Fully defined and tunable synthetic ECMs that support the generation of vascular network structures in dense tissue environments represent a path towards overcoming the limitations of existing model systems. This thesis focuses on the development and characterization of polymeric biomaterials that can be used to enhance in vitro tissue models by engineering the cell-material interface to guide a particular biological response. A major application focus of this research is to engineer biomaterial tools that enable vascularization of dense epithelial tissue in vitro. We developed and characterized a poly(ethylene glycol)-based microbead angiogenesis scaffold with tunable physical and biochemical properties, identifying a critical ligand concentration regime on the microbead surface that promotes integrin-mediated endothelial cell attachment and invasion into both a synthetic ECM and a tissue aggregate of hepatocarcinoma cells. Furthermore, we investigated a novel hybrid PEG-polypeptide polymer, poly(γ-propargyl-L-glutamate) (PPLG), as a hydrogel substrate that can enhance endothelial cell attachment and spreading through modulation of the macromer structure and hydrophobicity.
This work demonstrates how rational biomaterial design through chemical and structural modifications to polymer scaffolds can control cell fate within an in vitro tissue culture system.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127710</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online data collection for developmental research</title>
<link>https://hdl.handle.net/1721.1/127709</link>
<description>Online data collection for developmental research
Scott, Kimberly M., Ph. D. Massachusetts Institute of Technology.
The strategies infants and young children use to understand the world around them provide unique insight into the structure of human cognition. However, developmental research is subject to heavy pragmatic constraints on recruiting large numbers of participants, bringing families back for repeat sessions, and working with special populations or diverse samples. These constraints limit the types of questions that can be addressed in the lab as well as the quality of evidence that can be obtained. In this dissertation, I present a new platform, "Lookit," that allows researchers to conduct developmental experiments online via asynchronous webcam-recorded sessions, with the aim of expanding the set of questions that we can effectively answer. I first present the results of a series of empirical studies conducted in the laboratory to assess the difficulty infants face in integrating information across visual hemifields (Chapter 2), as an illustration of the creative workarounds in study design necessary to accommodate the difficulty of participant recruitment. The rest of this work concerns the development of the online platform, from designing the prototype (Chapter 3) and initial proof-of-concept studies (Chapter 4) to the demonstration of an interface for researchers to specify and manage their studies on a collaborative platform (Chapter 5). I show that we are able to reliably collect and code dependent measures including looking times, preferential looking, and verbal responses on Lookit; to work with more representative samples than in the lab; and to flexibly implement a wide variety of study designs of interest to developmental researchers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018; Cataloged from PDF version of thesis. Page 140 blank.; Includes bibliographical references (pages 134-139).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127709</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic and stimuli-responsive multi-phase emulsion droplets for optical components</title>
<link>https://hdl.handle.net/1721.1/127708</link>
<description>Dynamic and stimuli-responsive multi-phase emulsion droplets for optical components
Nagelberg, Sara(Sara Nicole)
Dynamic micro-optical components have revolutionized imaging, sensing, and display technologies. Multi-phase emulsions are micro-scale droplets formed from multiple immiscible material components suspended in a fluid medium. An interesting aspect of these droplets is that by tailoring the chemistry of the surrounding medium it is possible to control the droplet morphology or to render the droplets responsive to stimuli in the environment, including light, heat, specific molecules, or even bacteria. This thesis explores the optical characteristics of multi-phase droplets, including their refractive, emissive, and reflective properties. This work focuses predominantly on bi-phase droplets formed from two immiscible oils in water, which form double emulsions or Janus droplets. As tunable refractive components, these droplets form dynamic compound micro-lenses with a focal length that is continuously variable from converging to diverging.; Macroscopically these refractive droplets can appear nearly transparent or strongly scattering, depending on their configurations. When a fluorescent dye is dispersed within the higher refractive index phase, a portion of the light emitted will undergo total internal reflection. This results in a strong morphology-dependent angular emission profile, which can be used in molecular sensing for chemicals or pathogens. In reflection, the droplets produce striking iridescent colors. This is due to the interference of light totally internally reflected at the internal interface along distinct optical paths. These optical characteristics are analyzed both experimentally and theoretically. 
Finite Difference Time Domain simulations were used to model wave-optical effects, and phenomena that could be treated using geometrical optics were calculated using a custom-built ray tracing algorithm.; Additionally, a theoretical model was developed to explain the iridescent colors under a geometric approximation that takes interference effects into account. Experimentally, the droplets were characterized using several different custom-built microscope setups. Beyond the optical characteristics, we used these setups to investigate the effects of thermal Marangoni flows within the droplets, which cause the droplets to re-orient towards a heat source. This work lays the foundation for understanding the refractive, reflective, and emissive properties of multi-phase droplets, which could form the basis of dynamically controllable or stimuli-responsive micro-scale optical components.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 136-143).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127708</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of divertor target plate conditions and their relationship to scrape-off layer transport</title>
<link>https://hdl.handle.net/1721.1/127707</link>
<description>Measurements of divertor target plate conditions and their relationship to scrape-off layer transport
Kuang, Adam QingYang.
Cross-field filamentary transport in the scrape-off layer (SOL) is important for controlling SOL profiles, main-chamber recycling fluxes, and divertor operation. However, questions remain about the extent to which divertor target conditions play a role in setting transport levels. The Alcator C-Mod SOL is well diagnosed and extensively characterized, making it an ideal platform to assess the impact of divertor target conditions on SOL filamentary transport and the resultant upstream profiles, in particular, density shoulder formation. To facilitate the investigation, a set of high heat flux handling, flush-mounted rail Langmuir probes were designed for the Alcator C-Mod divertor. They were validated and proved to be robust, reliable diagnostics. Main chamber SOL fluctuations and density profiles were observed and found to be strongly correlated with divertor target conditions when the core plasma Greenwald fraction was increased.; However, no trend was observed when local changes to near SOL divertor conditions were made using N₂ impurity seeding. To understand these results, a numerical model for filament transport was constructed that includes realistic magnetic geometry effects (e.g. magnetic shear) and collisionality profiles, both of which have been identified by theory to be important parameters. 
Validating the numerical model highlighted a discrepancy: experimental observations find fluctuation timescales in the SOL to be independent of location, whereas theories assume that timescales are set by local parameters--not accounting for the nonlocal effect of filaments being formed in the near SOL and propagating outwards.; The numerical model reveals that strong distortions to the magnetic geometry in the near SOL, due to proximity to the X-point, electrically disconnect the main chamber SOL from divertor target conditions, offering an explanation for the experimental observations and further suggesting that divertor heat flux mitigation may be optimized without direct impact on main chamber plasma profiles. When the divertor is electrically connected to the main chamber SOL, simulations indicate that increasing divertor collisionality causes a decrease in filament velocity, contrary to published literature. In summary, the combined effects of SOL collisionality and magnetic geometry were found to be strong controlling parameters on cross-field filamentary transport, consistent with theoretical expectations, providing strong motivation for including these effects in SOL transport simulations and in interpreting experimental results.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Applied Plasma Physics, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127707</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light manipulation with photonic fibers and optical light guides : dynamic structural color and light distribution in microalgae cultures</title>
<link>https://hdl.handle.net/1721.1/127706</link>
<description>Light manipulation with photonic fibers and optical light guides : dynamic structural color and light distribution in microalgae cultures
Sandt, Joseph David.
Optical and photonic fibers represent versatile systems for light manipulation. They are used to guide, reflect, emit, and absorb light, and can be designed to alter the light's spectral composition in any of these light-matter interactions. Additional functionality arises from the combination of these effects in single fibers, and the ability to employ fibers as individual strands or as woven networks. Two distinct light-manipulating-fiber systems are the focus of this thesis: (1) photonic fibers, which have vivid structural color that changes reversibly in response to mechanical or electrical stimuli, and (2) leaky light guides, which emit light along their length when illuminated from one end. Mechanochromic fibers that convert a mechanical perturbation into an optical response can be used, standalone or integrated into textiles, as easy-to-read strain sensors. Such fibers respond to elongation with a gradual shift in their reflected color through the visible range of light.; In particular, their use in compressive bandages - discussed in detail in this thesis - could greatly improve the efficiency of compression therapy for chronic venous ulcers and other vascular maladies. Electrochromic fibers exploit the electrochemically-tunable absorption of poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, a common conducting polymer, to create devices that can be switched between a vivid, structurally colored state and a dull, absorption-colored state. Custom optical multilayer and lumped parameter models are used to analyze the behavior of these fibers. Leaky light guides, by distributing light throughout volumes of algae culture, could yield greater productivity in microalgae cultivation while lowering energy requirements. 
The combination of these factors could enable the economically favorable generation of algal biomass for fuels, feedstock, pharmaceuticals, and many other uses.; A passive system for distributing light throughout culture volumes, by selectively scattering light out of light-guiding fibers, is developed and implemented. The process of designing and manufacturing these leaky light guides, and their use in a variety of laboratory-scale bioreactors with live microalgae cultures, are described.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 66-68).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127706</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exotic superconductivity in quantum materials</title>
<link>https://hdl.handle.net/1721.1/127702</link>
<description>Exotic superconductivity in quantum materials
Kozii, Vladyslav.
The theory of superconductivity developed by Bardeen, Cooper, and Schrieffer has proven to correctly describe a wide class of metals, where the effective attraction between electrons is mediated by phonons. Despite its huge success, this theory fails to explain certain types of superconductivity, including but not limited to topological superconductivity and superconductivity in systems with low carrier density. We study the exciting new properties of these materials and discuss possible microscopic mechanisms for exotic superconductivity. In Part I of this thesis, we explore the properties of two-component superconductors with strong spin-orbit coupling. Our study is motivated by experiments on a topological superconductor candidate material, Bi2Se3 doped with Cu, Sn, or Nb atoms. Generally, superconductivity in such systems comes in two flavors: nematic, which breaks the rotational symmetry of the crystal, and time-reversal-breaking chiral.; We study the relative energetics and distinct features of each of these flavors. We find that, in three dimensions, nematic superconductors generically possess a full pairing gap on the Fermi surface, thus representing a solid-state realization of a time-reversal-invariant topological superconductor. By contrast, chiral superconductors host non-degenerate point nodes on the Fermi surface and represent the superconducting analog of topological Weyl semimetals; the low-energy excitations in these materials are itinerant Majorana fermions. In Part II, we suggest possible microscopic mechanisms for unconventional superconductivity. We show that strong fluctuations of the inversion-breaking order parameter induce an instability in an odd-parity superconducting channel, suggesting a route towards topological superconductivity. 
Using bosonization, we generalize this result to one-dimensional systems.; We apply our findings to study superconductivity in three-dimensional Dirac materials with extremely low density of carriers. Finally, we discuss the mechanism for nematic superconductivity from density wave fluctuations in two-dimensional systems, with possible application to twisted bilayer graphene. The results presented in this thesis are mainly based on Refs. [1, 2, 3, 4, 5, 6, 7].
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 345-342).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127702</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulating fluids and fields in multimaterial fibers</title>
<link>https://hdl.handle.net/1721.1/127700</link>
<description>Manipulating fluids and fields in multimaterial fibers
Yuan, Rodger.
From conduits for fluid transport to the threads in highly absorbent textiles, high-aspect-ratio fibers and tubes have been used for thousands of years to manipulate fluids. The emergence of multimaterial thermal drawing as a method to fabricate fibers with precise spatial control of a broad range of materials, such as polymers and metals, enables the integration of new functionalities into these traditionally single-material tools. In this thesis we investigate the use of thermally drawn fibers as a means to manipulate fluids and electric fields for various applications. As a conduit for fluids flowing in the axial direction of the fiber, we explore new regimes in inertial microfluidics by leveraging the geometric tunability of fiber channel cross-sections. By integrating electrodes onto the channel surface, we then design a microfluidic device capable of inertial-dielectrophoretic live/dead cell separation at throughputs as high as 100 5[mu]L/min. In addition, we show that UV-transparent hollow fibers can be used as templates to fabricate highly complex 3-D hydrogel microparticles with dielectrophoretic sub-particle localization. Finally, by integrating surface-interfaced porous structures into electrode-integrated fibers, we demonstrate fluid flow manipulation in the radial direction for application as a fiber sweat sensor.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 133-143).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127700</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles based treatment of charged species equilibria at electrochemical interfaces: model system of zirconium oxide and titanium oxide</title>
<link>https://hdl.handle.net/1721.1/127699</link>
<description>First-principles based treatment of charged species equilibria at electrochemical interfaces: model system of zirconium oxide and titanium oxide
Yang, Jing, Ph. D. Massachusetts Institute of Technology.
Ionic defects are known to influence the magnetic, electronic, and transport properties of oxide materials. Understanding defect chemistry provides guidelines for defect engineering in various applications including energy storage and conversion, corrosion, and neuromorphic computing devices. While DFT calculations have proven to be a powerful tool for modeling point defects in bulk oxide materials, challenges remain in linking atomistic results with experimentally measurable materials properties, where impurities and complicated microstructures exist and the materials deviate from ideal bulk behavior. This thesis aims to bridge this gap in two respects. First, we study the coupled electro-chemo-mechanical effects of ionic impurities in bulk oxide materials. Second, we develop a modeling framework for describing point defect redistribution at extended defects, including grain boundaries, oxide/oxide interfaces, and oxide/water interfaces.; The first part of this thesis deals with zirconium oxide in the context of corrosion of zirconium alloys used as nuclear cladding materials in light water reactors. We provide physically-deduced diffusion coefficients for higher-level modeling as well as a better mechanistic understanding of zirconium alloy corrosion by studying defect equilibria in ZrO₂ passive films. A first-principles based model for predicting charged species redistribution profiles at electrochemical interfaces is established and applied to ZrO₂/Cr₂O₃ and ZrO₂/water interfaces, and ZrO₂ grain boundaries. Defect redistribution at these extended defects can lead to significant changes in the transport properties of oxides. The second part applies a similar methodology to TiO₂ as a model system for studying field-assisted sintering (FAST).; FAST has been demonstrated for multiple ceramic materials as a promising sintering technique for shortening consolidation times and lowering sintering temperatures. 
By studying the defect chemistry of acceptor- and donor-doped TiO₂ and designing experiments accordingly, we show that while Joule heating is the dominant effect of the applied electric field, the shrinkage rate also correlates strongly with titanium diffusivity. Through donor doping, which increases the concentration of fast-diffusing titanium interstitials, a higher shrinkage rate is achievable. These results show that first-principles based calculations are capable of predicting the defect chemistry of oxide materials in quantitative agreement with measured values. Such understanding of defect chemistry gives insights into practical defect engineering strategies that are broadly applicable to electrochemical applications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 165-174).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127699</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deciphering how the viscoelastic properties of mussel-inspired metal-coordinate hydrogels dictate their adhesive and interfacial mechanics</title>
<link>https://hdl.handle.net/1721.1/127698</link>
<description>Deciphering how the viscoelastic properties of mussel-inspired metal-coordinate hydrogels dictate their adhesive and interfacial mechanics
Lai, Erica L.
In the world of adhesives, tunable viscoelasticity and adhesion to wet surfaces are two highly desirable properties. Mussels have already mastered both of these properties within the threads they create to anchor themselves in harsh intertidal conditions (collectively called the byssus). The key to both the mussel's ability to stick to a wide variety of surfaces and the highly energy-dissipative viscoelastic behavior of its byssal threads is a type of reversible bonding called metal-ligand coordination, in which amino acid functional groups bind to metal ions. Recently, researchers have incorporated metal-coordinate cross-links into various types of polymeric networks to improve their mechanical properties, particularly toughness, self-healing, and adhesion. However, there is less fundamental understanding of how the linear viscoelastic properties of these networks dictate adhesive behavior, both cohesively and at an interface.; In this thesis, we use shear rheology, tack tests, and spherical probe indentation tests to explore correlations between linear viscoelastic properties (i.e., plateau modulus, G[subscript p], and characteristic relaxation time, [tau][subscript c]) and adhesive behavior (e.g., peak stress, energy dissipation per volume, or work of debonding per area) of transiently cross-linked hydrogels composed of histidine-functionalized 4-arm PEG coordinated with Ni²⁺. It is important to note that this fully transient model system is technically a viscoelastic fluid even if it has gel-like behavior on the timescales studied. To control the viscoelastic properties of the transient networks, we varied the Ni²⁺-histidine ratio, the polymer wt %, or the choice of buffer; in a case study, we also added Co²⁺ for a second relaxation timescale. 
The experimental conditions of pull rate and substrate choice were also varied.; From our tack results, a strong dependence of peak stress on G[subscript p] and [tau][subscript c] was observed, and this correlation between network dynamics and mechanics under tensile load is in good quantitative agreement with our theoretical framework for peak stress, which includes the linear viscoelastic properties as parameters. Energy dissipation per volume is also influenced by G[subscript p] and [tau][subscript c], with an additional dependence on the polymer wt % at higher strains when the network is remodeling. These findings are consistent with previously proposed molecular mechanics of reversible His[subscript x]Ni²⁺ cross-links. From our ongoing spherical probe indentation tests, we have demonstrated that metal-ligand coordination at the interface can be a dominant contributor to adhesion, and we are starting to provide quantitative information about how that contribution is modulated by probe material choice and buffer-influenced timescales.; In addition to the adhesive studies, we also replicated the effect of the macroscopic byssal thread structure - a stiff metal-coordinate coating surrounding a compliant core - on its mechanical behavior. To do so, we mimicked the thread structure by coating PDMS fibers with dried 4-arm PEG that was end-functionalized with Dopa or nitroDopa and coordinated with Fe³⁺, and performed tensile tests on these coated fibers. From these studies, we demonstrated that the coating allowed for improved toughness, with the magnitude dependent on the coating composition (i.e., pH and covalent cross-linking content). Collectively, these findings provide us with new insights into the correlations between bulk mechanics and adhesive dynamics of gels with transient metal-coordinate cross-links, as well as ways to tune the toughness of mussel-inspired materials during larger extensions under tensile load.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 75-79).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127698</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancers and phase separation in the control of gene expression</title>
<link>https://hdl.handle.net/1721.1/127697</link>
<description>Enhancers and phase separation in the control of gene expression
Manteiga, John C.
Gene regulation underlies the control of cell identity, development, and disease. Transcription of genes is regulated by DNA elements called enhancers, which are bound by transcription factors and coactivators, leading to the recruitment of RNA polymerase II and the production of RNA. Enhancers are thought to loop to specific gene promoters to stimulate transcription, but the mechanisms that cause enhancers to selectively loop to specific promoters are not well understood. In this thesis, I first describe new insights into enhancer-promoter loop specificity from studies examining the mechanisms that allow tumor-specific super-enhancers to loop to the MYC oncogene in diverse cancer types (Schuijers and Manteiga et al., 2018). While conducting these studies, it was proposed that super-enhancers and the factors associated with them form liquid-liquid phase-separated condensates.; Following this proposal, I contributed to collaborative studies that strongly supported this model (Boija et al., 2018; Sabari et al., 2018; see Appendix I and II of this thesis). This model of transcription led me to ask how key transcriptional components could be recruited into super-enhancer condensates. I performed studies showing that the interaction of RNA polymerase II with these condensates involves the large heptapeptide repeat of the C-terminal domain (CTD) of the enzyme. 
Furthermore, these studies provided evidence that phosphorylation of the CTD, which is associated with the initiation to elongation transition, weakens these interactions, thus facilitating the transition of RNA polymerase II into different condensates involved in co-transcriptional splicing of the nascent transcript (Guo and Manteiga et al., 2019).; These studies provide new insights into the mechanisms of enhancer-promoter interaction, roles for the RNA polymerase II CTD in the enzyme's partitioning into nuclear condensates, and a role for phosphorylation in switching the nuclear condensate partitioning behavior of RNA polymerase II.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127697</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The spatial organization of the microtubule cytoskeleton and cell divisions promotes tissue morphogenesis</title>
<link>https://hdl.handle.net/1721.1/127696</link>
<description>The spatial organization of the microtubule cytoskeleton and cell divisions promotes tissue morphogenesis
Ko, Clint S.
During embryonic development, epithelial tissues grow and become sculpted into complex shapes through highly coordinated cell behaviors, such as cell shape change and cell division. Both processes involve contractile forces generated from distinctly organized actomyosin networks. How cells coordinate their behaviors and how different cell behaviors interact to affect tissue shape remain long-standing questions in developmental biology. One way that a single cell behavior can be coordinated, such as during collective cell shape changes (apical constriction), is through the attachment of actomyosin networks in each cell to adherens junctions, allowing contractile forces to be integrated across the epithelium. However, how actomyosin connections to junctions are maintained during morphogenesis is poorly understood.; In this thesis, I demonstrate that a novel organization of non-centrosomal microtubules plays an essential role in stabilizing force transmission between apically constricting cells in the early Drosophila embryo. Microtubule organization promotes apical F-actin meshwork turnover near junctions, which allows actomyosin to rapidly reattach to junctions when connections are lost. I find that actomyosin contractility drives the organization of non-centrosomal microtubules in apically constricting cells, uncovering crosstalk between F-actin and microtubule networks that serves to organize the apical cortex, stabilizing force transmission between cells. In addition, I investigate how coordination of apical constriction with another cell behavior--cell division--can influence the final shape of tissues. I show that cell division antagonizes apical constriction by disrupting medioapical contractile signaling.; Cell divisions that occur in the same time and place as apical constriction interfere with tissue internalization. 
However, when mitotic cells neighbor contractile cells, mitotic entry relaxes the apical cortex relative to neighboring cells, causing a force imbalance that allows surrounding cells to apically constrict. Mitotic cell relaxation then allows constricting cells to internalize, illustrating another mechanism by which cell divisions can shape tissues. This thesis highlights the significance of a novel cytoskeletal organization that coordinates apically constricting cells. In addition, apical constriction and cell division can interact in different ways to impact final tissue shape.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127696</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensing lights : transforming street lights into a networked urban knowledge platform</title>
<link>https://hdl.handle.net/1721.1/127622</link>
<description>Sensing lights : transforming street lights into a networked urban knowledge platform
Álvarez Félix, Jesús Ricardo.
This work is subdivided into three academic papers that together form a coherent exploration of the phenomena of intelligent street lights and their potential applications as a new type of digital urban infrastructure. In the first paper, I review existing cases of cities that are digitizing their public lighting infrastructure. I analyze their various approaches to smart lighting and then propose a framework by which we can maximize their potential uses. For the second paper, I perform an urban demonstration that pairs street lights with a prototype intelligent, networked digital imaging and computer vision platform, in order to monitor the utilization of curbside space, currently used for parking in cities, which serves as an example of how to develop interoperability between different urban infrastructure systems. For the third paper, I investigate the policy dimensions of implementing such a system, including the concerns raised by industry leaders and city officials, as street lights become multi-functional sources of urban data, and the dilemma this may pose for existing institutional arrangements and stakeholders' networks. Seeking to maximize social benefits, I conclude by proposing a series of recommendations aimed at hybridizing functions of public lighting and real-time sensing of the built environment in cities, for the creation of a range of new urban experiences and civic benefits across a variety of use cases for cities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127622</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>How cities learn : urban experimentation for creating and governing technology</title>
<link>https://hdl.handle.net/1721.1/127620</link>
<description>How cities learn : urban experimentation for creating and governing technology
Claudel, Matthew(Matthew Christopher)
As technologies become increasingly complex and embedded in cities, they are less and less intelligible within existing urban governance frameworks. In response, actors in cities around the world are taking up urban experimentation. The goal of this dissertation is twofold: to understand how urban experimentation enables actors to create and govern technology that generates civic value, and under what conditions it contributes to social, economic and political adaptation over time. This is a question of how cities learn. An urban experiment is an intervention with a sociotechnical system in the public realm. It is explicitly bounded in space and time, it involves groups of actors from different sectors, and its goal is to evaluate the intervention. I elaborate civic value as an evaluative lens, and turn to pragmatist and evolutionary theories of political epistemology to provide a theoretical foundation for understanding market and state adaptation as a collective learning process.; The dissertation presents a body of empirical research: a nested case study of 12 urban experiments across three cities (Boston, Montreal, Amsterdam), two domains (real property, transportation), and several control regimes (from formal regulation to no control). I synthesize descriptive results at each of these levels, and find that urban experiments are typically structured as Partnerships, Sandboxes, or Exceptions. I then examine urban experiments through a theoretical lens. Actors in these cases have three different ways of thinking about how an experiment creates civic value. The first two -- performative and stochastic experimentation -- are prevalent, and they align with today's orthodox policy models. I find evidence that both can yield practical, short-term outcomes, in terms of creating technology or advancing regulation. However, there are critical conceptual faults related to uncertainty, power, and normalization.
These experiments integrate sociotechnical systems only insofar as they fit existing urban governance frameworks. To some extent, these faults are resolved in a third approach -- emergent experimentation -- in which actors create and govern technologies in alternative (non-market, non-state) ways during the experiment. While the emergent approach is promising, the outcomes of an experiment are inevitably constrained to the narrow spectrum of organizational forms that are available in the market-state framework -- even if those conventional forms are ill-fit to sustain the civic value that emerged. No matter how inventive the experiment is, there remains a problem of stewarding civic value in perpetuity. I propose the civic corporation to fill that gap: a legal framework for new organizational forms that have a duty to steward -- and perpetually rediscover -- civic value. In this way, emergent urban experimentation flows into ongoing structural adaptation. I argue that urban experimentation can become a technique for creating and governing technology in cities, if there exist stable but dynamic forms of distributed accountability and a structural capacity for learning with complex sociotechnical systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127620</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Water Operator Partnerships : utility reform and the struggle for alternatives to privatization</title>
<link>https://hdl.handle.net/1721.1/127618</link>
<description>Water Operator Partnerships : utility reform and the struggle for alternatives to privatization
Beck, Andrea Karin.
The water privatizations that swept across the global South in the 1990s and early 2000s failed to meet expectations. Rather than bringing about increased efficiency and investment, a suite of public-private partnerships ended prematurely and caused social unrest, most notably in the Bolivian city of Cochabamba. In response, scholars and activists embarked on a search for "alternatives to privatization." Informed by the work of the Municipal Services Project and post-neoliberal scholarship, this dissertation examines Water Operator Partnerships (WOPs) as a potential alternative to private-sector engagement in water and sanitation. Relying on primary documents and interviews, I trace the WOP concept to its origins in the UN system and highlight its defining characteristics as a partnership type.; I further discuss the struggles behind the concept's emergence, focusing on the contested role of the private sector and the strategies applied by activists trying to safeguard a public orientation of WOPs. Based on two case studies of water companies in the Netherlands and Uganda, I examine the motivating factors that would cause water operators to engage in WOPs on a not-for-profit basis. My findings indicate that WOPs are driven by a number of interests that call into question their portrayal as solidarity-based partnerships, including staff development and the furthering of opportunities for aid, trade, and investment. I then follow the Dutch and Ugandan companies out of their headquarters and into the field, to the water utility serving Malawi's capital Lilongwe. Drawing on fieldwork in Malawi, I examine two WOPs in detail, showing how and why these partnerships failed or succeeded in supporting the reduction of non-revenue water.; Taken together, this dissertation points to a need to refocus the debate on WOPs, beyond the private sector and towards public water and sanitation operators.
I argue that two trends in particular deserve critical attention: professionalization and corporatization. Both are less visible than the outright inclusion of the private sector in WOPs, but they could, in the end, pose a more serious challenge to the WOP model and its post-neoliberal potential.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 155-175).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127618</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding access to the city : how public transit fare policy shapes travel decision making and behavior of low-income riders</title>
<link>https://hdl.handle.net/1721.1/127617</link>
<description>Expanding access to the city : how public transit fare policy shapes travel decision making and behavior of low-income riders
Rosenblum, Jeffrey Laurence.
Over the past five years, as transit fares have been rising faster than inflation, interest in establishing programs providing discounted public transit fares to low-income individuals has blossomed in the US. Limited research exists, though, on how fare affordability influences travel behavior and affects access to destinations such as healthcare and, ultimately, quality of life. This hampers efforts by policy makers and advocates to evaluate the potential for means-tested fare programs as an intervention to ameliorate the impacts of transit costs. This research aims to answer the following questions: 1. How do travel patterns of low-income transit riders differ from those of average riders? 2. What is the causal effect of a fare subsidy on the number of trips taken by low-income riders? 3. In what way does transit cost impact healthcare utilization for low-income individuals? 4. How do low-income transit riders decide whether to purchase a pass or pay for trips individually? I found that 50% fare subsidies cause an increase of 2.3 trips per week (27%), equivalent to a fare elasticity of -0.54. There is a statistically significant treatment effect on trip rates to healthcare appointments, and evidence from the interviews suggests that trips for regular maintenance visits for chronic conditions are the type of healthcare visits likely to be forgone because of an inability to afford the transit fare. I found that scarcity mindset, the behavioral economics theory which suggests that living in poverty impedes cognitive capacity, is not universal among low-income individuals. I also found that 30% of individuals paying for trips individually would have received better value by purchasing a pass product.
Low-income riders take proportionally more off-peak trips, and African Americans have longer commutes even controlling for income.; A major policy implication of this research is that means-tested fare programs will provide tangible benefits to their recipients because the cost of public transit has been shown to limit mobility of low-income residents. This research also suggests that healthcare providers should be proactive in providing free public transit for patients. Next-generation fare collection systems will open the door for innovative collaboration with other social service agencies. The findings in this dissertation inform the future of public transit fare policies. Finally, with evidence of travel time disparities by race, structural causes must be addressed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 259-276).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127617</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Branching out into immigrant neighborhoods : how public libraries distribute community resources to meet immigrant needs</title>
<link>https://hdl.handle.net/1721.1/127616</link>
<description>Branching out into immigrant neighborhoods : how public libraries distribute community resources to meet immigrant needs
Delgado, Laura Humm.
Local organizations play a critical role in providing access to resources and opportunities for those who are low-income, socially isolated, or marginalized. This is especially true for immigrants in the United States, where support with integration falls almost entirely on local organizations. Immigrants are more likely to live in poverty; yet they are accessing the social safety net less for fear of discrimination and deportation. This research asks how one type of local organization, the neighborhood library branch, distributes resources to immigrants across urban neighborhoods and how neighborhoods shape organizational resources. I approach this research through a mixed-method study of the Boston Public Library and its twenty-five neighborhood branches that relies on participant observation, interviews, and the analysis of archives, texts, and public library data.; The first part uses an immigrant integration framework to examine how neighborhood branches contribute to English language learning and political, economic, and social integration. I address how immigrant services align with neighborhood needs and to what extent immigrants access these resources. I find that institutional resources are well targeted to immigrant neighborhoods, but community resources are more effective at reaching immigrants and provide intangible benefits that are tailored to neighborhoods. A reliance on community resources, however, can exacerbate inequalities across neighborhoods.
The second part of this research addresses how the neighborhoods in which neighborhood branches are located shape library resources through 1) expressed community needs, 2) level of volunteerism, 3) cultural sharing practices, and 4) organizational partnerships.; Whereas scholars have addressed the question of how organizations provide access to resources for marginalized populations by looking at the geographic distribution of organizations, institutional funding, and brokered resources, this research asks 1) how neighborhoods shape organizational resources and 2) what factors, beyond geographic proximity, affect access to resources. The findings from this research have implications for how scholars and planners conceptualize and identify organizational resources at the neighborhood level. Additionally, this research offers lessons for what practices local organizations and government agencies can adopt to reach immigrant communities at a time when immigrants are becoming increasingly fearful of accessing government institutions, public benefits, and public spaces.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 180-189).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127616</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediaries and electrification : dimensions of trust and consumer education in Kenya's off-grid solar market</title>
<link>https://hdl.handle.net/1721.1/127615</link>
<description>Intermediaries and electrification : dimensions of trust and consumer education in Kenya's off-grid solar market
Harrington, Elise Schley.
Countries still working towards universal energy access are beginning to utilize a range of renewable energy technologies and services to meet rural electrification goals. Stemming from mobile money innovations in Kenya, "pay-as-you-go" off-grid solar increases the short-term affordability of small-scale solar solutions for rural households. Consumer experiences with off-grid solar vary across distribution models based on the local actors responsible for engaging with end-users. I call these on-the-ground actors frontline solar intermediaries; they link consumers and providers through in-person interactions and can perform different acts of intermediation in designed solar distribution models. Frontline solar intermediaries are not only important for making off-grid solar sales, but for implementing consumer safeguards (e.g. quality assurance and consumer protections).; Based on fieldwork in Kenya, I identify four types of frontline solar intermediaries: community influencers, networking solar agents, embedded entrepreneurs, and group leaders. I use a conjoint survey experiment to test the influence of social capital and reputation on an intermediary's trustworthiness. I find that trust stems from more than just initial social capital and may be enhanced by strategic partnerships or collaborations between NGOs, government, and private solar providers. Using a second original survey, I examine the effect of intermediary type on consumer knowledge. I find that relying on solar agents for help with solar issues is associated with an 11% higher expected knowledge count and a 23% increase in seeking help to solve problems.
Solar agents are the most common frontline solar intermediary in Kenya and remain a key source of information and assistance for after-sales services.; The incentives, interpersonal relationships, and training programs that influence frontline solar intermediary behavior suggest that these local actors are critical to building off-grid solar as a lasting complement (or in some cases, alternative) to the centralized grid.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 247-266).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127615</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constructing the Capital City : the politics of urban development and image making in Eurasia's hybrid regimes</title>
<link>https://hdl.handle.net/1721.1/127614</link>
<description>Constructing the Capital City : the politics of urban development and image making in Eurasia's hybrid regimes
Harris-Brandts, Suzanne.
The late 20th century saw significant geopolitical shifts as empires disintegrated and socialist unions collapsed. The result was not only a rise in independent states but the emergence of a distinct form of governance known as the "hybrid regime." In Eurasia, twelve such regimes surfaced. Having undergone dramatic politico-economic change, many turned toward capital city building. This dissertation investigates the utility of constructing the capital to such regimes, synthesizing theory from architecture, urban planning, political science, and political geography. It asks: How do incumbent parties in hybrid regimes retain power through urban development and image-making? What effects are there on the built environment and the long-term trajectories of these countries? To answer these questions, I conducted an in-depth analysis of the two Eurasian capitals most heavily mired in hybrid regime politics: Tbilisi, Georgia, and Skopje, North Macedonia.; Both underwent dramatic state and nation-building after socialism's collapse in the 1990s. They represent the widest array of incumbent party tactics used to increase party authority through urban development and are therefore useful cases to study. In Tbilisi, I foreground initiatives by the United National Movement (UNM) government of President Mikheil Saakashvili (2004-2013). In Skopje, the emphasis is on the Skopje 2014 campaign instigated by the VMRO-DPMNE government of Prime Minister Nikola Gruevski (2006-2016). Using qualitative mixed-methods (semi-structured interviews, focus groups, site observations, document and media analysis), the findings show that urban development and its correlated image-making are often extensively manipulated to entrench incumbent party authority.; Although these campaigns promised national pride, economic growth, and improved living conditions, they resulted in geopolitical tensions, subnational discord, charges of corruption, and far-reaching legal manipulations. 
Patronage networks, informal institutions, and populist ideology supported the power of the ruling elite to the detriment of the country's development. The impacts on the built environment were equally concerning, resulting in dysfunctional cityscapes that were over-saturated with contentious monuments and poorly constructed buildings. The research findings thus underscore the highly politicized processes of constructing capitals in hybrid regimes, offering insight into how civil society and international donors might work to hold incumbent parties accountable.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 256-290).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127614</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The politics of visibility in urban sanitation : bureaucratic coordination and the Swachh Bharat Mission in Tamil Nadu, India</title>
<link>https://hdl.handle.net/1721.1/127608</link>
<description>The politics of visibility in urban sanitation : bureaucratic coordination and the Swachh Bharat Mission in Tamil Nadu, India
Raman, Prassanna.
Often linked with class and caste and mired in socio-cultural taboos, sanitation has a reputation problem in India. The introduction of the Swachh Bharat Mission (SBM) aims to address these challenges not only at the individual level, but also at the organizational level. SBM relies heavily on reputational devices such as social media campaigns and city rankings to incentivize the sub-national implementation of reforms. While literatures on sanitation implementation highlight coordination between agencies and between agencies and NGOs as key to service improvements, few, if any, explore how organizational reputation may affect that coordination. Given the importance afforded to SBM within India's current march toward sanitation reform, this scholarly lacuna is surprising. My dissertation aims to address this knowledge gap through an in-depth study of coordination, and the role of organizational reputation, in the roll-out of SBM in the South Indian state of Tamil Nadu.; First, I ask what impacts public sector coordination in urban sanitation under SBM. Second, I examine whether SBM's reputational devices have any effects on coordination. Within Tamil Nadu, I focus on two major streams of work within the SBM Urban portfolio -- toilet construction and solid waste management -- in the cities of Chennai, Coimbatore, and Trichy. To conduct my study, I use semi-structured interviews with bureaucrats and NGOs, document and social media analysis of SBM materials, and participant observation of behavioral change campaigns run by public agencies and sanitation-centric NGO partners. I found that SBM's reputational devices were no match for entrenched institutional weaknesses, such as poor bureaucratic capacity and administrative incoherence, and failed to incentivize coordination either between agencies or between agencies and NGOs across the three cities.
Instead, SBM's emphasis on social media, city rankings, and certifications has exacerbated the burden of documentation and the "tick-box" culture within agencies. However, I also found that in some cases, SBM's reputational devices have empowered existing sanitation NGOs by increasing demand for their services.; I conclude that SBM's emphasis on visibility rather than deep institutional reform obfuscates the kind of work needed to improve outcomes in the urban sanitation sector.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 164-188).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127608</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on empirical macroeconomics</title>
<link>https://hdl.handle.net/1721.1/127606</link>
<description>Essays on empirical macroeconomics
Hazell, Jonathon.
This thesis consists of three chapters in empirical macroeconomics. In the first chapter, I study downward wage rigidity. Downward wage rigidity is central to many explanations of unemployment fluctuations. In benchmark models, the wage for new hires is particularly important, but there is limited evidence of downward rigidity on this margin. We introduce a dataset that tracks the wage for new hires at the job level -- across successive vacancies posted by the same job title and establishment. We show that the wage for new hires is rigid downward but flexible upward, in two steps. First, the nominal wage rarely changes at the job level. When wages do change, they fall infrequently, suggesting a constraint from below. Second, when unemployment rises, wages do not fall -- but wages do rise strongly as unemployment falls. We show that prior strategies, which study the average wage for new hires, cannot detect downward rigidity due to changing job composition. We then develop a tractable dynamic wage bargaining model with downward rigidity. We fit the model to our findings, and uncover state-dependent asymmetry in unemployment dynamics. When there has been a contraction in the recent past, unemployment responds symmetrically to subsequent labor demand shocks; when there has recently been an expansion, unemployment is subsequently twice as sensitive to negative as to positive shocks. In the second chapter, I study the fall in the labor share. The labor share fell in the US and worldwide after the 1980s. This paper argues the falling labor share dampens unemployment fluctuations, in two steps. First, the paper studies a class of labor search models with capital.; The falling labor share lowers the sensitivity of unemployment to labor demand shocks, regardless of whether rising capital or rising rents govern the labor share. The peak-to-trough fall in the US labor share lowers the sensitivity of unemployment to labor demand shocks by 30%.
Second, the paper provides evidence for dampening. I exploit labor share variation within industries and between regions, to show that low labor share markets are less sensitive to the aggregate business cycle. Then I identify variation in the labor share using the passage of statewide reforms. After these reforms pass, the labor share falls, and state unemployment becomes less sensitive to aggregate business cycles. In the third chapter, I study systemic risk in the banking system. Banks face different but potentially correlated risks from outside the financial system. Financial connections can help hedge these risks, but also create the means by which shocks can propagate.; We examine this tradeoff in the context of a new stylised fact we present: German banks are more likely to have financial connections when they face more similar risks -- potentially undermining the hedging role of financial connections and contributing to systemic risk. We find that such patterns are socially suboptimal, but can be explained by risk-shifting. Risk-shifting motivates banks to correlate their failures with their counterparties even though it creates systemic risk. JEL Codes E24, G21
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 319-342).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127606</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein-polymer conjugate arrays for enhanced biosensor sensitivity and selectivity</title>
<link>https://hdl.handle.net/1721.1/127580</link>
<description>Protein-polymer conjugate arrays for enhanced biosensor sensitivity and selectivity
Paloni, Justin Michael.
The capability of biosensors to provide highly sensitive and selective molecular detection has enabled development of rapid, inexpensive medical diagnostics. Despite significant advancements in sensor design over the past several decades, most biosensors experience significantly reduced sensitivity in common sensing fluids such as blood and urine. In these mixtures, off-target molecules nonspecifically bind to the sensor surfaces, blocking analyte binding sites and increasing background signal. The self-assembled structure of protein-polymer conjugates presents a potential solution to this issue, offering both biological functionality and a mechanism for excluding many non-analyte molecules in biosensing fluids. Therefore, this thesis explores the use of protein-polymer conjugate thin films as biosensors to minimize nonspecific binding effects during detection in complex mixtures.; The first part of this thesis focuses on protein engineering methods to improve the self-assembly of protein-polymer conjugates. It is first demonstrated that oligomerization of low-molecular weight protein blocks significantly enhances ordering quality of the corresponding conjugates. As the degree of oligomerization of the protein block increases, conjugates form ordered phases that display longer-range assembly. Another technique shown to improve protein-polymer conjugate self-assembly is fusion of complementary coiled-coil sequences to the protein block. When proteins bearing these sequences are mixed together in solution, a strongly associative coiled-coil forms, promoting a substantial ordering improvement. Both protein oligomerization and fusion to coiled-coil sequences retain the biological functionality of the protein block, and it is found that protein activity generally scales with conjugate ordering quality.; The second part of this thesis explores the capabilities of the polymer block in protein-polymer conjugate thin films to control diffusion into these films. 
Increasing the molecular weight of this polymer block allows larger analyte molecules to diffuse into the thin films with less restriction. Transport studies performed in solutions of the polymer block indicate that most proteins display size-based diffusion following the Stokes-Einstein equation, but some proteins deviate significantly from this behavior due to a combination of protein-protein and protein-polymer interactions. When an analyte molecule is mixed with a protein that diffuses faster than the analyte in these polymer solutions, the sensitivity of the thin film conjugate biosensors towards the analyte is often significantly enhanced. This sensitivity improvement is also observed during detection in mixtures containing the analyte and several proteins, only some of which diffuse faster than the analyte.; Accordingly, biosensing measurements using protein-polymer conjugate thin films performed in blood serum and urine solutions, which should contain a variety of proteins that diffuse faster than a given analyte, display a two-order-of-magnitude improvement in sensitivity over traditional surface-based biosensor technologies. Thus, protein-polymer conjugate thin film biosensors can overcome nonspecific binding effects and demonstrate greater sensitivity during measurements performed in complex protein mixtures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127580</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models across multiple length scales to advance biomass upgrading</title>
<link>https://hdl.handle.net/1721.1/127579</link>
<description>Models across multiple length scales to advance biomass upgrading
Orella, Michael J.(Michael Julian)
The development of efficient electrochemical processes, which can utilize electrons from renewable energy sources and incorporate sustainable sources of carbon present in biomass, could enable the decarbonization of the industrial sector and spur technological and scientific innovation. Moreover, electrochemical processing, specifically hydrogenation and hydrodeoxygenation, may allow new molecular transformations at previously unachievable conditions, unlocking chemical processing routes that were previously inaccessible or unimagined. Accordingly, there is significant room for exploration in organic electrochemistry to identify opportunities within the chemical industry both to replace crude-oil-derived feedstocks with biomass and to shift from traditional thermally driven reactions to those that use electrical energy.; Advancing the science and engineering of these nascent process concepts requires an interdisciplinary approach, with key knowledge gaps that traverse distinct research communities and apply to the problem at multiple scales. My thesis work developed modeling toolkits that will be useful across the spectrum from biomass generation in planta to electrochemical processing of liquefied feedstocks, all of which are available open source to reduce the barrier to entry for new researchers interested in this interdisciplinary topic. Specifically, I developed Lignin-KMC, a kinetic Monte Carlo model that utilizes first-principles-derived kinetic parameters to predict the structure of native lignin biopolymers, as an accurate molecular model of reactor feeds is necessary to understand any reactivity trends that may be observed.; Next, I created DropPy, a Python-based toolkit for automating the analysis of contact angle goniometry data, as the performance of many electrochemical cells can be anticipated from the wettability of the electrode surface. 
Finally, I established a generalized techno-economic framework which could be used to evaluate the overall cost to the consumer of electrochemically-derived products, and could be used by researchers with various electrolysis interests to better understand the most critical areas of improvement for their devices. Through these three developments, the efficiency of research carried out in this space will be improved, hopefully speeding the eventual development of electrochemical upgrading devices at a lower total research cost and final system price.
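The kinetic Monte Carlo approach underlying a tool like Lignin-KMC can be illustrated with a minimal sketch; the linkage names and rate constants below are hypothetical placeholders, not the thesis's first-principles parameters, and only the rate-weighted event selection is shown.

```python
# Minimal Gillespie-style kinetic Monte Carlo sketch. The linkage names and
# rate constants are hypothetical placeholders, not Lignin-KMC's
# first-principles parameters; the point is the rate-weighted event choice.
import random

def kmc_chain(rates, n_steps, seed=0):
    """Simulate n_steps linkage events; returns the sequence of linkages."""
    rng = random.Random(seed)
    total = sum(rates.values())
    chain = []
    for _ in range(n_steps):
        # Pick the next event with probability proportional to its rate.
        r = rng.uniform(0.0, total)
        acc = 0.0
        for name, rate in rates.items():
            acc += rate
            if acc >= r:
                chain.append(name)
                break
    return chain

# Hypothetical competing linkages in a growing lignin oligomer.
chain = kmc_chain({"beta-O-4": 3.0, "beta-5": 1.0}, n_steps=1000)
```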
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 205-227).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127579</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of strained extinction for applications in natural gas combustion modeling</title>
<link>https://hdl.handle.net/1721.1/127578</link>
<description>A study of strained extinction for applications in natural gas combustion modeling
Long, Alan Everett.
Resistance to extinction by stretch is a key property of any flame, and recent work has shown that this property controls the overall structure of several important turbulent flames of methane, the principal component of natural gas. This work gives an introduction to the parameter that quantifies resistance to extinction by stretch, the Extinction Strain Rate (ESR), discussing how it is typically measured experimentally and calculated numerically. The primary limitations of ESR, given its historical definition and computation methods, are: (1) it depends on the dimensions of the experimental apparatus used to measure it, (2) it is often too computationally difficult to calculate using large kinetic models (&gt;500 species), and (3) it is difficult to measure under the elevated pressures relevant to gas turbines and internal combustion engines (20-40 atm). Work addressing, in some manner, these three issues is presented.; Subsequent work then uses ESR as a validation parameter for producing a kinetic model. To address the first issue, a method is proposed for translating experimental measurements of stretch-induced extinction into an unambiguous and apparatus-independent quantity (ESR[subscript infinity]) by extrapolating to infinite opposing-burner separation distance. The uniqueness of the flame at extinction is shown numerically and supported experimentally for twin premixed, single premixed, and diffusion flames at Lewis numbers greater than and less than one. A method for deriving ESR[subscript infinity] from finite-boundary experimental studies is proposed and demonstrated for methane and propane diffusion and premixed single flame data. 
The values agree within the range of differences typically observed between experimental measurements and simulation results for the traditional ESR definition.; To address issue two, Ember, a new open-source code for efficiently performing ESR[subscript infinity] calculations using large, detailed chemical kinetic models, is presented. Ember outperforms other standard software, such as Chemkin, in computation time by leveraging rebalanced Strang operator splitting, which does not suffer the steady-state inaccuracies of most splitting methods. Ember is then able to improve computational performance primarily through parallelization and use of quasi-steady-state kinetics integrations at each independent spatial discretization point. Ember is validated for computation of ESR, and the benefits of its computation techniques are demonstrated. With respect to the third issue, Ember is used to explore ESR[subscript infinity] pressure trends and kinetic model sensitivities in the current absence of experimental methods to probe the relevant parameter space. ESR shows opposing trends with pressure under lean and rich conditions for methane-air flames.; Ultimately, ESR decreases at higher pressures for lean conditions ([phi] = 0.7) and increases with pressure for rich conditions ([phi] = 1.3). Under both conditions, the ESR trends are non-monotonic. ESR reaction sensitivities are observed to generally mirror those of laminar flame speed calculations. This suggests that there is limited added value in more efficient methods of ESR reaction sensitivity calculation since efficient adjoint methods already exist for laminar flame speed. Strong species transport parameter sensitivities are observed for the fuel, oxidizer, and bath gas, with the limiting reactant showing the strongest sensitivity. Current levels of uncertainty in species enthalpy suggest little impact of enthalpy errors on ESR predictions for the conditions studied. 
Using the prior ESR improvements and analysis, an ESR-validated kinetic model is produced.; The model uses a validation data set that includes relevant ignition delay, laminar flame speed, and extinction strain rate data. The work builds off recent work by Hashemi and coworkers on high-pressure oxidation of small hydrocarbons. Prediction of the selected validation data set is improved primarily through sensitivity analysis and comparison with kinetic rate constants from other works. Additionally, to support nitrogen chemistry predictions, the full nitrogen subset from the recent review by Glarborg and coworkers is appended to the core model produced. The Reaction Mechanism Generator (RMG) software has recently been used to generate the rich chemistry relevant for partial oxidation of methane up to one- and two-ring aromatics. This rich chemistry subset is added to produce the final kinetic model. An important portion of the rich chemistry included within the produced kinetic model is the route from cyclopentadienyl radical recombination to naphthalene.; Since this work seeks performance at elevated pressures of relevance to gas turbines and internal combustion engines, the pathway is re-computed here with explicit consideration of all relevant pressure-dependent pathways on the C₁₀H₁₀ and C₁₀H₉ surfaces. Lumped, single-step kinetics, often used to describe the net reaction to naphthalene, are observed to be insufficient. Specifically, the C₁₀H₁₀ intermediate species is observed to live long enough to undergo bimolecular reaction to enter the C₁₀H₉ surface before proceeding on to naphthalene. Rate expressions for the full network are produced and included in the kinetic model generated.
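The extrapolation to an apparatus-independent ESR[subscript infinity] described in this abstract can be sketched schematically. The separations and rates below are synthetic, assuming a simple inverse-separation dependence for illustration rather than the thesis's actual functional form.

```python
# Synthetic illustration of extrapolating finite-separation extinction
# strain rate (ESR) measurements to infinite burner separation. Assumes a
# simple ESR(L) = ESR_inf + c / L dependence for the sketch; the values
# below are invented, not thesis data.
import numpy as np

separations_mm = np.array([8.0, 10.0, 12.0, 15.0, 20.0])  # burner separations
true_esr_inf, c = 450.0, 300.0                            # assumed 1/s, (1/s)*mm
esr_measured = true_esr_inf + c / separations_mm

# Linear fit in 1/L; the intercept is the value at 1/L -> 0.
slope, intercept = np.polyfit(1.0 / separations_mm, esr_measured, 1)
esr_inf_estimate = intercept
```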
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-195).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127578</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lead sulfide nanocrystal ligand structure and its influence on superlattice self-assembly</title>
<link>https://hdl.handle.net/1721.1/127577</link>
<description>Lead sulfide nanocrystal ligand structure and its influence on superlattice self-assembly
Winslow, Samuel W.(Samuel Walter)
Colloidal semiconductor nanocrystals (NCs) or "quantum dots" are used in next-generation optoelectronic devices such as photovoltaics, displays, photodetectors, and thermoelectrics. For deployment in these architectures, NCs are cast out into the solid state. Because the NC ensembles are monodisperse, they readily self-assemble into an ordered superlattice (SL). Commonly for PbS NCs, body-centered cubic (BCC), body-centered tetragonal (BCT), and face-centered cubic (FCC) phases are observed with varying degrees of NC orientation relative to adjacent SL sites. Predictive control over the organization of NCs into SLs with long-range order remains a challenge. In this Thesis, oleate-capped PbS NCs are used as a convenient, prototypical system to establish a predictive framework for NC SL formation with respect to newly identified and existing tuning parameters.; I first identify and fully characterize unbound/free ligand as an important, controllable parameter to continuously adjust SL symmetry with theoretically single-molecule resolution. Increasing either the bound or unbound ligand populations shifts the SL uniaxially from the BCC to FCC phase. A high free ligand fraction has implications for the ease of formation of oriented SLs via spin-casting. Next, I measure a universal distortion of SL symmetry when cooling from room to cryogenic temperatures in which the SL contracts along one axis while expanding along the other two, ultimately shifting towards the BCC symmetry. Both hysteresis and non-monotonic, surprising trends in unit cell volume are observed and rationalized. The distortion is delineated by thermal markers of the surface-capping ligands and is generalizable to other material systems. 
I establish small-angle neutron scattering (SANS) as a valuable experimental tool for complete characterization of NC surfaces.; In order to fit SANS data, I develop a model inspired by the NC structure sampled from molecular dynamics (MD) simulations and introduce a Markov chain Monte Carlo (MCMC) algorithm for efficient parameter inference and uncertainty estimation. I quantify an epitaxial monolayer of PbCl₂ on the surface of PbS NCs synthesized from a large excess of PbCl₂ instead of from PbO. This elucidation reconciles sizing curves from the literature and explains the suitability of a specific NC synthesis for different applications. Finally, I extend the SANS method to measure the structure factor of semi-dilute PbS NC dispersions and liken the interactions to those of a square-well fluid. The data fitting yields a repulsive core size larger than the physical NC core diameter, which stems from a densely-packed ligand layer near the NC surface. I also measure a weak attractive strength ~1 k[subscript B]T.; This novel understanding of ligand-mediated NC interactions is extended to parameterize patchy-particle simulations, which predict a complete PbS NC SL phase diagram consistent with all previous tuning strategies. This Thesis provides a complete description of the predictive framework for self-assembled SLs and develops new computational tools which may be applied to other material systems.
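The MCMC parameter inference mentioned above can be sketched generically with a Metropolis-Hastings sampler. The toy Gaussian-mean example below is an assumption for illustration, not the SANS-fitting model itself; it shows the same machinery that yields parameter estimates together with uncertainty.

```python
# Schematic Metropolis-Hastings sampler (a generic MCMC sketch with toy
# Gaussian data, not the thesis's SANS-fitting model).
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=1):
    """Random-walk Metropolis sampling of a 1-D posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if lp_prop - lp > math.log(rng.random()):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

data = [4.8, 5.1, 5.3, 4.9, 5.2]  # toy observations
log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)  # flat prior, sigma = 1
samples = metropolis(log_post, x0=0.0, n_samples=5000)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])     # discard burn-in
```

The spread of the retained samples, not just their mean, provides the uncertainty estimate.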
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 217-235).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127577</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated discovery of important chemical reactions</title>
<link>https://hdl.handle.net/1721.1/127576</link>
<description>Automated discovery of important chemical reactions
Grambow, Colin A.(Colin Andres)
Innovations in chemistry are often informed by decades of accumulated chemical knowledge encoded into manually constructed reaction templates and rules of reactivity. Examples include retrosynthetic analysis for organic synthesis planning; chemical reaction mechanism generation for complex combustion, pyrolysis, and low-temperature oxidation processes; and elucidation of low-energy catalytic pathways. Nonetheless, all known chemistry is dwarfed by the vastness of chemical space, most of which still lies unexplored. De novo reaction discovery is rare but presents an enormous potential to uncover novel synthetic routes and key pathways in reaction mechanisms. Automated potential energy surface exploration has become a promising method to search for new reaction pathways, albeit at the expense of costly quantum mechanical calculations.; Therefore, this thesis develops methods to enable more computationally efficient discovery while also correctly determining thermochemistry and kinetics to allow for the construction of accurate reaction mechanisms. By utilizing automated transition state finding algorithms based on quantum chemistry, the thesis assesses which algorithm is most viable for the efficient discovery of new reactions, and it identifies key pathways of an important ketohydroperoxide system. It demonstrates that quantum chemical data can be used with emerging machine learning methods to estimate molecular thermochemistry. Leveraging a large data set of low-quality data in combination with a small data set of high-accuracy data in a transfer learning approach enables predictions that significantly improve upon group additivity methods, which are common in automated mechanism generation, and upon machine learning models that only use density functional theory data.; Furthermore, an automated workflow is developed to further enhance high-level quantum chemistry calculations using bond additivity corrections. 
While quantum chemistry calculations are incredibly useful for providing highly accurate data, their high cost, especially when applied to thousands of reaction pathways, limits their utility for discovering new chemistry. Therefore, this thesis improves the throughput of automated discovery via a combination of quantum chemistry data generation and reactivity prediction using deep learning. It automatically generates a data set of tens of thousands of elementary chemical reactions that are used to train a novel activation energy prediction model, which can quickly assess the importance of new reactions.
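The transfer-learning strategy described here, pretraining on abundant low-accuracy data and fine-tuning on scarce high-accuracy data, can be sketched on synthetic linear data. All weights, noise levels, and sizes below are illustrative assumptions, not the thesis's neural network.

```python
# Toy sketch of the transfer-learning idea on synthetic linear data (all
# weights, noise levels, and sizes are illustrative assumptions): pretrain
# on many biased, noisy labels, then fine-tune the same weights on a few
# accurate labels.
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, x, y, lr=0.05, epochs=200):
    """Full-batch gradient descent on squared error, starting from w."""
    for _ in range(epochs):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

x_big = rng.normal(size=(500, 3))
y_low = x_big @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.3, 0.4, size=500)
x_small = rng.normal(size=(20, 3))
y_high = x_small @ np.array([1.1, -2.1, 0.6])  # scarce, accurate labels

w = sgd_fit(np.zeros(3), x_big, y_low)    # pretrain on low-accuracy data
w = sgd_fit(w, x_small, y_high)           # fine-tune on high-accuracy data
```

Starting the second fit from the pretrained weights, rather than from zero, is what lets the small accurate set dominate the final model.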
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127576</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational design of therapeutic monoclonal antibody formulations</title>
<link>https://hdl.handle.net/1721.1/127575</link>
<description>Computational design of therapeutic monoclonal antibody formulations
Cloutier, Theresa K.(Theresa Kruse)
Antibody formulation research seeks to move the field from heuristics and rules of thumb to mechanistic approaches. Traditionally, formulations are designed via significant trial-and-error work after the phase in which molecule discovery and optimization take place. However, this often leads to molecules failing in late development due to an inability to develop a formulation with the desired properties. This thesis aimed to develop a computational formulation design framework that would allow formulation to be addressed during the molecule discovery and optimization steps, so that molecules amenable to formulation can be selected early on. To this end, antibody behaviors with a variety of different formulation excipients were probed via simulation and experiment, and machine learning models of local antibody-excipient interactions were developed.; The behaviors of three antibodies were simulated in the presence of six excipients: sorbitol, sucrose, trehalose, proline, arginine·HCl, and NaCl. Carbohydrates tended to reduce aggregation propensity due to their preferential interactions with exposed aromatic residues. However, their impact on viscosity was highly dependent on the surface characteristics of the antibody, especially on whether charge effects significantly contributed to the antibody viscosity. Proline tended to interact with aromatic residues, reducing the aggregation of antibodies whose aggregation rate was association-limited. Arginine·HCl could interact via charge effects as well as with hydrophobic residues, while NaCl only interacted via charge effects. The overall impact of these excipients in terms of aggregation and viscosity was highly dependent on the surface charge distribution on the variable region. Finally, these local antibody-excipient interactions were modeled using machine learning techniques.; These models were shown to capture the important antibody-excipient interactions that are relevant for understanding the impact on stability. 
Thus, with the implementation of this tool, antibody formulation design could be implemented efficiently during the molecule optimization step, reducing the cost of follow-up formulation work and reducing the likelihood of molecule failure due to formulation issues.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 155-170).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127575</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic regulation and functions of locus-specific DNA methylation</title>
<link>https://hdl.handle.net/1721.1/127574</link>
<description>Dynamic regulation and functions of locus-specific DNA methylation
Song, Yuelin
The role and regulation of DNA methylation at various genetic elements have gathered tremendous interest over decades. The methylomes of many cell types have been described, revealing a dynamic and tissue-specific pattern of DNA methylation (tissue-specific differentially methylated regions, T-DMRs) in distal regulatory elements such as enhancers. The formation of T-DMRs still remains mysterious; however, one of their interesting features observed in mouse ES cells (mESCs) is the low-to-intermediate level of average DNA methylation resulting from inter-cellular epigenetic heterogeneity. Given the transcriptionally repressive role of DNA methylation at promoters, such non-zero levels of enhancer methylation are interesting to characterize.; Prior to this thesis, a reporter for genomic DNA methylation (RGM) had been developed in the Jaenisch lab; when targeted into T-DMRs of interest, it reports the surrounding locus-specific DNA methylation as the switching on and off of fluorescent signals in single cells. We further modified RGM to investigate the regulation of DNA methylation at the pluripotency super-enhancers Sox2 and MiR290 at the single-allele level in mESCs. We found that enhancer DNA methylation is surprisingly dynamic, with the two alleles independently being demethylated and methylated within days. These dynamics are the basis of epigenetic and transcriptional heterogeneity and are coupled with changes in histone modifications and transcription factor binding. Furthermore, epigenetic heterogeneity was also observed in developing preimplantation embryos. Our work provides a paradigm for functionally investigating locus-specific DNA methylation in heterogeneous tissues in disease and development.; The regulation of locus-specific DNA methylation is highly context dependent and sensitive to the environment. Our understanding of how locus-specific DNA methylation is regulated in vivo is still restricted to a few genomic elements. 
The appendix of this thesis attempts to generate an animal model to expand the scope of research on DNA methylation to retroelement-associated metastable epialleles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127574</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating process development for biologics on an automated, pharmacy-scale manufacturing system</title>
<link>https://hdl.handle.net/1721.1/127573</link>
<description>Accelerating process development for biologics on an automated, pharmacy-scale manufacturing system
Crowell, Laura E.(Laura Ellen)
The conventional large-scale, centralized, single-product manufacturing model for biologic drugs does not allow for the economical production of drugs for small patient populations or for the distribution of these drugs in developing countries. A decentralized model featuring small-scale, fully automated, multi-product manufacturing of biologics at the point-of-care could address some of these issues. To truly realize the benefits of such a manufacturing paradigm, it must also be paired with rapid process development methods for the production of new molecules. In this thesis, we describe the development of a bench-scale, automated, multi-product manufacturing system for the end-to-end production of hundreds to thousands of doses of clinical quality protein medicines in about three days. We then demonstrate the application of this platform to the manufacture of a trivalent vaccine in a single campaign through co-expression and co-purification.; We further demonstrate new methodologies for the accelerated development of manufacturing processes to produce new molecules on the system including a strategy for the development and optimization of fully integrated, multi-column processes for straight-through chromatographic purification, and the development of a platform process for the production and purification of single-domain antibodies. We then propose a workflow for the collection of a dataset relating the chromatographic behavior of host-cell proteins to their biophysical characteristics with the goal of building an in silico tool for the prediction of purification processes for any new molecule. 
Finally, we propose a platform approach, as opposed to a platform process, for the development of manufacturing processes for new biologics, one based on gaining a deeper understanding of process development challenges with regard to the host and to the molecule itself.; Ultimately, we believe that the combination of a small-scale, automated manufacturing platform and accelerated strategies for developing processes to manufacture new products on the platform could enable time- and cost-efficient manufacturing of a wide variety of biologic drugs, increasing access to medicines throughout the world.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 157-166).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127573</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering probiotic microbes for in vivo applications</title>
<link>https://hdl.handle.net/1721.1/127572</link>
<description>Engineering probiotic microbes for in vivo applications
Müller, Isaak Elis.
Our overall goal is to engineer probiotic microbes as localized diagnostic and therapeutic tools for diseases in the gastrointestinal tract, such as inflammatory bowel disease. Current treatments often rely on systemic supplementation of immunomodulatory drugs, potentially leading to severe side effects. The probiotic yeast Saccharomyces boulardii has shown promising results for use as a probiotic supplement for the amelioration of disease-related symptoms in the GI tract. However, its genetic engineering has been limited to date. This work focuses on the development of fundamental engineering tools for S. boulardii, including recombinant protein secretion, a new vector integration system, and inducible promoters. Another major obstacle to moving synthetic biology technologies from the bench to a patient's bedside is the need for gene circuits to function in a complex environment where unexpected crosstalk can occur. We show that synthetic gene networks can be engineered to compensate for crosstalk by integrating pathway signals, rather than by pathway insulation. We demonstrate this principle using reactive oxygen species (ROS)-responsive gene circuits in Escherichia coli that exhibit concentration-dependent crosstalk with the non-cognate ROS. By designing gene circuits that introduce compensatory crosstalk at the gene network level, the resulting gene network exhibits reduced crosstalk in the sensing of the two different ROS. The development of both fundamental genetic parts in a probiotic chassis as well as more complex genetic networks will contribute to the future implementation of living cell therapeutics in the clinic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 101-136).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127572</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microengineered hydrogels for spatially-resolved, multiplexed microRNA quantification from tissue</title>
<link>https://hdl.handle.net/1721.1/127570</link>
<description>Microengineered hydrogels for spatially-resolved, multiplexed microRNA quantification from tissue
Nagarajan, Maxwell Benjamin.
Many cancer patients develop resistance to cancer therapies over time. One major reason for resistance and disease recurrence is that cancer tissue is heterogeneous. A particular therapy may select for cells that are resistant to the therapy, leading to a more aggressive tumor over time. It is difficult to characterize the heterogeneities within cancer tissue with traditional bioassays that average over large tissue areas. Methods that can assess heterogeneities in tissue by measuring biomolecules while preserving spatial information are emerging as a key part of biological and medical studies, but there are limited technologies for quantitation and multiplexing of microRNA (miRNA), a class of small, noncoding RNAs that play important roles in many diseases. miRNA are being explored as potential therapeutic targets and as biomarkers for diagnostics.; There is increasing evidence that obtaining both spatially-resolved and multiplexed measurements of miRNA is critical for the diagnostic and prognostic value of miRNA tests. In this thesis, we developed a new method for making spatially-resolved and multiplexed measurements of miRNA from tissue using microengineered hydrogels. First, we used barcoded, hydrogel microparticles to perform multiplexed miRNA measurements from formalin-fixed, paraffin-embedded (FFPE) tissue, the gold standard sample type used by pathologists. In an assay that requires fewer steps and less time than existing approaches, we found that the signal after an assay from FFPE tissue with paraffin was 10% less than the signal from FFPE tissue when paraffin was removed before the assay. Second, we developed and characterized a nanoliter well array platform for performing multiplexed microRNA assays from nanoliter sample volumes and applied this to microRNA measurements from unprocessed cells.; By reducing sample volumes and sensing area, we obtained an approximately 100x improvement in assay sensitivity. 
Third, using assay conditions we found in the first project and adapting the nanoliter well arrays from the second project, we performed spatially-resolved and multiplexed measurements of microRNA directly from FFPE tissue. We achieved up to 9-plex assays and 300 [mu]m spatial resolution. We found statistically significant differences between different tumor regions from the same mouse model tissue section. We envision that this technology could be used for biomarker-based diagnostics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 117-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127570</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced process data analytics</title>
<link>https://hdl.handle.net/1721.1/127569</link>
<description>Advanced process data analytics
Sun, Weike, Ph. D., Massachusetts Institute of Technology.
Process data analytics is the application of statistics and related mathematical tools to data in order to understand, develop, and improve manufacturing processes. There have been growing opportunities in process data analytics because of advances in machine learning and technologies for data collection and storage. However, challenges are encountered because of the complexities of manufacturing processes, which often require advanced analytical methods. In this thesis, two areas of application are considered. One is the construction of predictive models that are useful for process design, optimization, and control. The other area of application is process monitoring to improve process efficiency and safety. In the first area of study, a robust and automated approach for method selection and model construction is developed for predictive modeling.; Two common challenges when building data-driven process models are addressed: the high diversity in data quality and how to select from a wide variety of methods. The proposed approach combines best practices with data interrogation to facilitate consistent application and continuous improvement of tools and decision making. The second area of study focuses on process monitoring for complex manufacturing systems, which includes fault detection, identification, and classification. Four sets of algorithms are developed to address limitations of traditional monitoring methods. The first set provides the optimal strategy for Gaussian linear processes, including deep understanding of the process monitoring structure and optimal fault detection based on a probabilistic formulation. The second set aims at building a self-learning fault detection system for changing normal operating conditions.; The third set is developed based on information-theoretic learning to address limitations of second-order statistical learning for both fault detection and classification. 
The fourth set tackles the problem of nonlinear and dynamic process monitoring. The proposed methodologies and algorithms are tested on several case studies where the value of advanced process data analytics is demonstrated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 465-498).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127569</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of complex epipolythiodiketopiperazine alkaloids for mechanistic studies</title>
<link>https://hdl.handle.net/1721.1/127568</link>
<description>Synthesis of complex epipolythiodiketopiperazine alkaloids for mechanistic studies
Olsson, Chase Robert.
I. Synthesis of Potent Cytotoxic Epidithiodiketopiperazines Designed for Derivatization. We describe our design, synthesis, and chemical study of a set of functional epidithiodiketopiperazines (ETPs) and the evaluation of their activity against five human cancer cell lines. Our structure-activity relationship-guided substitution of ETP alkaloids offers versatile derivatization while maintaining potent anticancer activity, an exciting opportunity given that there are no examples of complex and potently anticancer (nM) ETPs being directly used as conjugatable probes or warheads. Our synthetic solutions to strategically designed ETPs with functional linkers required advances in stereoselective late-stage oxidation and thiolation chemistry in complex settings, including the application of novel reagents for dihydroxylation and cis-sulfidation of diketopiperazines. We demonstrate that complex ETPs equipped with a strategically substituted azide functional group are readily derivatized to the corresponding ETP-triazoles without compromising anticancer activity. Our chemical stability studies of ETPs, along with cytotoxic evaluation of our designed ETPs against A549, DU 145, HeLa, HCT 116, and MCF7 human cancer cell lines, provide insights into the impact of structural features on potency and chemical stability, informing the future utility of ETPs in chemical and biological studies. II. Redox by Stereoelectronic Design: The Malleable n→π* Interactions Behind Epidithiodiketopiperazine Thiol-Disulfide Exchange Equilibria. We describe our efforts to elucidate the mechanisms impacting the physicochemical properties of epidithiodiketopiperazine (ETP) alkaloids. Prompted by observations that subtle substitutions of the polysulfide-bridged diketopiperazine pharmacophore could significantly impact the anticancer activity of ETPs, we developed an array of C4-substituted bisprolyl-ETPs designed to further deconvolute their structure-activity relationship. 
The complete structural analysis of our training set of synthetic ETPs was enriched by collaborative computational and reduction studies assessing the stereoelectronic forces behind their reactivity. We found a complement of structural parameters correlated to the strongest example of the n→π* interaction ever observed, compensating for the energetic barrier associated with the eclipsed conformation of ETPs. In addition to providing insight into the stereoelectronic effects governing the physicochemical properties of ETPs, our observations reveal a molecular design strategy that uses the n→π* interaction to modulate the reduction potential of ETPs.
Thesis: Ph. D. in Organic Chemistry, Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127568</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A multi-omics approach to improving productivity of therapeutic proteins in Pichia pastoris (Komagataella phaffii)</title>
<link>https://hdl.handle.net/1721.1/127567</link>
<description>A multi-omics approach to improving productivity of therapeutic proteins in Pichia pastoris (Komagataella phaffii)
Brady, Joseph Richard.
Market sizes for novel therapies and growing demand for existing treatments in emerging markets promise to challenge the current capacity for production of therapeutic proteins. Significant reductions in cost and development speed are required to serve patients in the developing world, promote rapid response to disease outbreaks, and pave the way for increasingly personalized medicines. Production of therapeutic proteins in CHO cells is unlikely to meet future requirements for capacity, cost, and speed. Alternative hosts must be developed to supplement and sustain biomanufacturing. The yeast Komagataella phaffii is the only alternative host that blends three critical features: fast growth to high cell density on inexpensive media, secretion of complex human proteins at reasonable titers with minimal host modifications, and regulatory precedent for the production of several marketed biologics. A new approach is needed, however, to convert these opportunities for K. phaffii into a meaningful impact on global biomanufacturing. Decades of research in this host have not yet translated into widespread use for complex but important proteins such as monoclonal antibodies (mAbs). In this thesis, we define a new approach, strain engineering, which leverages multi-omics data and modern genetic tools to rapidly and rationally understand cell biology and engineer solutions. We apply this strain engineering approach through three levers: smarter molecular design, selection of an optimal starting strain, and modification of the host genome for a product class. In the first part of this thesis, we engineered the sequences of a trivalent rotavirus vaccine to mitigate glycosylation, aggregation, and truncation variants and improve product titers. 
Using in silico sequence analysis, RNA-Seq, and ribosome profiling, we identified and modified problematic sequence motifs to improve manufacturability without harming antigenicity. We further demonstrated the implications for agile, low-cost manufacturing by producing purified trivalent vaccine from a single strain in a single, end-to-end manufacturing campaign. In the second part, we engineered a novel, optimal base strain of K. phaffii. We characterized 11 variants of this yeast by whole-genome sequencing and RNA-Seq to identify functional genetic variants that influenced performance as a recombinant host. Our analyses revealed key differences in cell wall functions among variants that explain inconsistencies reported in the literature for this host. We then assembled beneficial features from among the variants into a new base strain with enhanced transformation and secretion efficiencies. In the final part of this thesis, we develop tools that are necessary to engineer the host genome for the secretion of more complex proteins. First, we use ATAC-Seq and RNA-Seq to guide selection of optimal sites for integration of heterologous genes. Second, we identify translation initiation site (TIS) sequences for translational control of protein expression. These two tools, along with CRISPR/Cas9, enabled engineering of a new strain for the production of mAbs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 105-118).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127567</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Growth and nucleation kinetics in continuous antisolvent crystallization systems</title>
<link>https://hdl.handle.net/1721.1/127565</link>
<description>Growth and nucleation kinetics in continuous antisolvent crystallization systems
Schall, Jennifer M. (Jennifer Moffitt)
Continuous pharmaceutical manufacturing can provide multiple advantages over batch processing: enhanced process control, more consistent final product quality, increased productivity using smaller equipment, and the ability to maintain smaller chemical inventories. More specifically, the multi-stage mixed-suspension, mixed-product removal (MSMPR) process represents a robust, continuous process that enables the manufacture of products with acceptable yield while meeting constraints on crystal size distribution (CSD), polymorphism, and purity. This thesis considers both the thermodynamic and kinetic effects of solvent composition on continuous MSMPR combined cooling and antisolvent crystallization (CCAC) cascades, detailing where common crystallization assumptions fail and suggesting ways to improve continuous CCAC process design in the future. We successfully validated solvent-dependent growth and nucleation kinetic models to rationally design a multi-stage, continuous, combined cooling/antisolvent crystallization process for an industrially relevant drug. Our work demonstrates that solvent effects must be incorporated in kinetic expressions for proper antisolvent MSMPR crystallization cascade design, as solvent composition effects may dominate temperature and residence time effects. In general, neglecting solvent-dependent kinetics tends to result in over-predicted yield and mean particle size at high solvent volume fractions and under-predicted yields at low solvent volume fractions. Our work also demonstrates that failing to incorporate activity coefficient-dependent supersaturation estimates leads not only to substantial errors in supersaturation calculations, but also to large errors in predicting growth and nucleation kinetics, crystallization yields, and crystal size distributions. Finally, we investigated conditions under which transitioning from batch to continuous manufacturing is financially advantageous. 
Together, our findings provide a framework for future continuous antisolvent crystallization process development and design.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, May, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 137-144).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127565</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards fuel-efficient formation flying of an observatory and external occulter at Sun-Earth L2</title>
<link>https://hdl.handle.net/1721.1/127555</link>
<description>Towards fuel-efficient formation flying of an observatory and external occulter at Sun-Earth L2
Sanchez, William D. (William David)
The prolific discovery of habitable-zone exoplanets via indirect detection methods has spurred many in the astrophysics and space technology community to call for the prioritization of funding for a direct exoplanet imaging space telescope, such as the NASA/JPL-proposed HabEx mission. Though the state of the art in optical technology suggests near-term feasibility, successful and efficient high-contrast imaging remains a problem. A promising solution is formation flying an external occulter in front of the observatory to suppress host starlight and allow for imaging of the obscured exoplanet. However, recent analyses have demonstrated that, for the required separation distance between the spacecraft, angular slew maneuvers to retarget the formation line-of-sight between stars in a Design Reference Mission (DRM) demand a significant amount of fuel, restricting the potential science yield of a five-year mission. Many of these analyses use traditional, impulsive control solutions to slew the occulter between points in three-dimensional positional space, or attempt exhaustive search methods to find less expensive alternatives. These approaches are uninformed by the rich and complex six-dimensional dynamical phase space in which the spacecraft truly lie. For this work it is assumed that both the observatory and external occulter are operating near Sun-Earth Lagrange point 2 (SEL2). Researchers across celestial mechanics, nonlinear dynamics, chaos theory, and astrodynamics over the last century have made considerable contributions to shedding light on the families and classes of natural trajectories existing in the phase space about Lagrange points. 
However, it is only in the last few decades (and continuing through the present) that it has been revealed how to use these previously elusive pathways in mission design. All of this points to a rich and underutilized design space for crafting naturally existing, or minimally active-control-assisted, low-fuel solutions to complex motion problems. The difficulty lies in teasing out trajectories of interest in the oftentimes opaque dynamical structure. However, history has shown that it can be done by understanding the basic classes of motion existing in the phase space through the lens of Dynamical Systems Theory (DST) -- which is concerned with qualitatively uncovering the structure of solutions in a system's phase space through the study of its equilibrium points, their stability, sensitivity to parameters, and the vector flow connecting these points. This thesis investigates the use of natural solutions to frame and solve the formation retargeting maneuvers of an observatory/external-occulter exoplanet imaging mission. By illuminating the classes of natural motion that can be exploited, fuel costs can be minimized and, more importantly, the set of all available paths can be contextualized within the dynamical landscape. This provides a baseline from which solutions can be interpreted and mission design trade-offs analyzed. To this end, a Trajectory Design Methodology (TDM) was developed that guides the spacecraft along the natural periodic and quasi-periodic motion of the center manifold of the CR3BP phase space. The TDM determines the fuel-minimizing path, under the constraints of the analysis, that passes the formation line-of-sight through the maximum number of stars within an extended time window. Since the framework is dynamically informed, the incremental costs of deviating from this maximal path to achieve a specific science objective can be readily considered. A sample mission analysis demonstrating these contributions is provided.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 175-185).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127555</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the economics of science and innovation</title>
<link>https://hdl.handle.net/1721.1/127517</link>
<description>Essays on the economics of science and innovation
Hill, Ryan (Ryan R.)
This thesis studies the economic forces that influence the creation of academic research and basic scientific knowledge. I study this from an empirical perspective, using administrative data from structural biology and observational astronomy, as well as detailed publication and citation data from many academic disciplines. These chapters illuminate the role of priority recognition in science, the effect of project failures on academic careers, and historical trends of extramural citation rates between the social sciences and other disciplines. The first chapter -- co-authored with Carolyn Stein -- studies how the scientific community assigns credit or "priority" to individuals who publish an important discovery first. We examine the impact of losing a priority race (colloquially known as getting "scooped") on subsequent publication and career outcomes. To do so, we take advantage of data from structural biology, where the nature of the scientific process together with the Protein Data Bank -- a repository of standardized research discoveries -- enables us to identify priority races and their outcomes. We find that race winners receive more attention than losers, but that these contests are not winner-take-all. Scooped teams are 2.5 percent less likely to publish, are 18 percent less likely to appear in a top-10 journal, and receive 28 percent fewer citations. As a share of total citations, we estimate that scooped papers receive a credit share of 42 percent. On the whole, these estimates inform both theoretical models of innovation races and suggest opportunities to re-evaluate the policies and institutions that affect credit allocation in science. The second chapter studies the role of luck in the careers of scientists. Since the production of science is inherently risky, the allocation of resources, promotions, and publications may be based on noisy signals of ability. 
Therefore, success might be path dependent, such that lucky breaks early in the career are amplified into future recognition and opportunities. I seek to quantify the short- and long-run effects of exogenous project success and failure in the context of academic astronomy. Using weather conditions during telescope viewing sessions, I test whether project-level shocks have a lasting effect on publication and citation rates. I find that idiosyncratic weather quality increases publication and citation rates for novice astronomers, but does not affect the productivity of veteran astronomers. Good weather shocks increase the number of future telescope sessions novices are awarded, suggesting that lucky breaks may improve early-career opportunities. However, these positive effects on productivity are transient, lasting about four years before diminishing. The third chapter -- co-authored with Joshua Angrist, Pierre Azoulay, Glenn Ellison, and Susan Feng Lu -- studies extramural citation rates between academic disciplines. Does academic economic research produce material of general scientific value, or do academic economists write only for peers? Is economics scholarship uniquely insular? We address these questions by quantifying interactions between economics and other disciplines. Changes in the influence of economic scholarship are measured here by the frequency with which other disciplines cite papers in economics journals. We document a clear rise in the extramural influence of economic research, while also showing that economics is increasingly likely to reference other social sciences. A breakdown of extramural citations by economics field shows broad influence. Differentiating between theoretical and empirical papers classified using machine learning, we see that much of the rise in economics' extramural influence reflects growth in citations to empirical work. This parallels a growing share of empirical cites within economics. 
At the same time, some disciplines that primarily cite economic theory have also recently increased citations of economics scholarship.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 215-224).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127517</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on behavioral economics and macroeconomics</title>
<link>https://hdl.handle.net/1721.1/127513</link>
<description>Essays on behavioral economics and macroeconomics
Lian, Chen, Ph.D., Massachusetts Institute of Technology.
This thesis consists of three chapters in behavioral economics and macroeconomics. In the first chapter, I develop an approach, which I term narrow thinking, that breaks the decision maker's ability to perfectly coordinate her multiple decisions. For a narrow thinker, different decisions are based on different, non-nested information. I recast this individual decision problem as multiple selves playing an incomplete-information game. The narrow thinker then makes each decision with imperfect knowledge of the other decisions. The friction effectively attenuates the interaction across decisions. Narrow thinking thus provides a model of narrow bracketing without directly imposing that each decision is made in isolation. The main application is that narrow thinking generates smooth mental accounting without requiring the decision maker to have explicit budgets. My approach also leads to unique predictions about what drives the degree of mental accounting behavior. Depending on the environment, narrow thinking can translate into either over- or under-reaction relative to the frictionless benchmark. The second chapter is motivated by the fact that consumers have difficulty tracking their total wealth, or keeping it at the front of their minds, when making consumption and saving decisions. In this chapter, I show how such imperfect perception of wealth can explain several key deviations of consumption behavior from the permanent income hypothesis, including excess sensitivity to current income, smaller MPCs out of wealth than out of current income, and excess discounting of future income. Importantly, my approach does not rely on liquidity constraints and can explain the empirical evidence on high-liquidity consumers' deviations from the permanent income hypothesis. 
I further provide an interpretation of the model in which the consumer has separate mental accounts for her current income and her wealth. Thus, the consumer exhibits behavior similar to a two-asset model in a one-asset context without borrowing constraints. The friction can be quantitatively important in explaining MPCs and has substantive macro implications for monetary and redistributive policy. Methodologically, the paper develops a tractable method for incorporating imperfect perception of the endogenous state into an otherwise standard Markov decision problem. In the third chapter (joint with George-Marios Angeletos), we turn to the classic macroeconomics question of how the economy responds to news about future policies or future fundamentals. Standard practice assumes that agents have common knowledge of such news and face no uncertainty about how others will respond. Relaxing this assumption attenuates the general-equilibrium effects of news and rationalizes a form of myopia at the aggregate level. We establish these insights within a class of games which nests, but is not limited to, the New Keynesian model. Our results help resolve the forward-guidance puzzle, offer a rationale for the front-loading of fiscal stimuli, and illustrate more broadly the fragility of predictions that rest on long series of forward-looking feedback loops. JEL Codes: D90, E50, E90.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 289-302).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127513</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the empowerment and employment of women in India</title>
<link>https://hdl.handle.net/1721.1/127510</link>
<description>Essays on the empowerment and employment of women in India
McKelway, Madeline.
This thesis consists of three papers on the empowerment and employment of women in India. India's female labor force participation rate is among the lowest in the world. Research suggests that many women in India want to work, that husbands' opposition to their work is a key constraint, and that husbands can be persuaded that their wives should work. But if women want to work and their husbands can be persuaded, why do women not persuade them? The first paper (chapter one) argues that doing so requires a general sense of self-confidence that is lacking amongst women in India. My experiment offered women in rural Uttar Pradesh a psychosocial intervention to raise generalized self-efficacy (GSE), or beliefs in one's own ability to attain desired outcomes. The intervention produced a persistent gain in GSE. I cross-randomized a short video promotion of women's employment for women's families. The promotion given alone increased short-run employment, which implies families in this setting can be persuaded. The GSE intervention on its own also raised short-run employment, and the data suggest a key channel was giving women the confidence to persuade their families. Short-run employment under both interventions was no higher, and may have been lower, than under either alone. I find no effects on long-run employment, suggesting it is harder to persuade families that women should stay in the workplace than enter it. The second paper (chapter two), co-authored with Matt Lowe, outlines a model of household decision-making about women's labor supply and presents results from an experiment in India that tested it. The key feature of the model is that households choose whether to engage in costly bargaining about the labor supply decision. Moreover, information about women's employment opportunities may be asymmetric. Spouses choose whether to share information and whether to bargain based on their preferences for women's employment. 
This model has support in recent empirical research. Our experiment was set in rural Uttar Pradesh. We document misalignment of spousal preferences about women's work in this setting: wives are significantly more supportive of women's employment than their husbands. We experimentally varied enforcement of common knowledge and enforcement of bargaining. We randomized whether husbands or wives were given information about a women's job opportunity and an enrollment ticket. We cross-randomized whether non-targeted spouses were not informed, informed separately, or informed at the same time as their targeted spouses. In the third condition, we explicitly encouraged discussion with the view of enforcing bargaining. Surprisingly, we find that husbands did not withhold information and that discussion significantly decreased enrollment. Our results contradict the standard predictions outlined in the model. The final paper (chapter three) explores the effects of women's employment. A standard prediction of the household literature is that women's employment should increase women's intra-household bargaining power. This paper provides the first experimental tests of this prediction. The experiment was conducted in rural Uttar Pradesh, a setting where women's family members typically decide whether women work. The intervention is the promotion intervention used in the first paper, a video promoting a women's employment opportunity that was designed to address family members' key objections to women working. I informed both treatment and control family members about the opportunity but only showed the promotion to treatment family members. 
The promotion led to large increases in women's employment and is unlikely to have affected other family outcomes through channels aside from women's employment; I therefore interpret effects of the promotion on these outcomes as effects of women's employment. I find that employment enabled women to make more decisions independently and without their husbands' knowledge. However, it did not increase their bargaining power in joint decisions. My results are inconsistent with the standard collective model of the household and more aligned with a model in which spouses do not fully pool their incomes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127510</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing medical imaging workflows with deep learning</title>
<link>https://hdl.handle.net/1721.1/127506</link>
<description>Enhancing medical imaging workflows with deep learning
Chang, Ken, Ph.D., Massachusetts Institute of Technology.
The last few years mark a significant leap in the capability of algorithms with the advent of deep learning. While conventional machine learning has existed for decades, its utility has been rather limited, requiring considerable engineering and domain expertise to design pertinent features that can be extracted from raw data. In contrast, deep learning methods have yielded state-of-the-art results in a wide range of computer vision tasks without the need for hand-crafted imaging features. At the same time, we are collecting ever-increasing quantities of medical imaging data. Together, deep learning models and big data yield a powerful combination. Integrated into the data workflow, the clinic, or at the bedside, these models have the potential to aid clinical decision-making, improving the efficiency, accuracy, and reliability of patient care. However, at present, there is a critical gap between the researchers who develop deep learning algorithms and the clinicians who could utilize the technology to improve patient care. In this thesis, I focus on several challenges that prevent clinical translation of algorithms. First, the vast quantities of data needed to train effective models are often dispersed across institutions and cannot be shared due to ethical, infrastructure, and patient privacy concerns. As such, we developed distributed methods of training robust deep learning models that do not require sharing patient data in multi-institutional collaborative settings. Second, it is not clearly understood how decisions in algorithm design affect model performance. To this end, I showcase how various training, data, and model parameters can impact algorithm prediction and performance. Lastly, while many algorithms are designed to perform a single task, there are few pipelines that have the multi-faceted functionality needed in patient care. 
I demonstrate an integrated and deployable clinical decision support pipeline for glioma and ischemic stroke that is extensible to other diseases.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 212-232).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127506</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations in applied probability and high-dimensional statistics</title>
<link>https://hdl.handle.net/1721.1/127504</link>
<description>Investigations in applied probability and high-dimensional statistics
Gaudio, Julia.
This thesis makes contributions to the areas of applied probability and high-dimensional statistics. We introduce the Attracting Random Walks model, a Markov chain model on a graph. In the Attracting Random Walks model, particles move among the vertices of a graph with transition probabilities depending on the locations of the other particles. The model is designed so that transitions to more occupied vertices are more likely. We analyze the mixing time of the model under different values of the parameter governing the attraction. We additionally consider the repelling version of the model, in which particles are more likely to move to vertices with low occupancy. Next, we contribute to the methodology of Markov processes by studying convergence rates for Markov processes under perturbation. We specifically consider parametrized stochastically ordered Markov processes, such as queues. We bound the time until a given Markov process converges to stationarity after its parameter experiences a perturbation. The following chapter considers the random-instance Traveling Salesman Problem: n points (cities) are placed uniformly at random in the unit square. It was shown by Beardwood et al. (1959) that the optimal tour length through these points, divided by √n, converges to a constant [2]. Determining the value of the constant is an open problem. We improve the lower bound over the original bound given in [2]. Finally, we study a statistical model: isotonic regression. Isotonic regression is the problem of estimating a coordinate-wise monotone function from data. We introduce the sparse version of the problem and study it in the high-dimensional setting. We provide optimization-based algorithms for recovery of the ground truth function, and provide guarantees for function estimation in terms of L2 loss.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 159-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127504</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vocal connection : rethinking the voice as a medium for personal, interpersonal, and interspecies understanding</title>
<link>https://hdl.handle.net/1721.1/127503</link>
<description>Vocal connection : rethinking the voice as a medium for personal, interpersonal, and interspecies understanding
Kleinberger, Rébecca (Rébecca Henrietta Marie Franca)
Voices are ubiquitous and familiar, so much so that it is easy to forget how fundamentally important vocal signals really are to how we relate to others and to ourselves. Vocal experiences can take many forms (audible, tangible, silent, internal, external, neurological, remote, etc.) and offer great potential for bridging diverse fields. I propose a new approach for looking at the voice holistically, in its experiential nature, based on its propensity to connect. This dissertation introduces and examines methods for the creation of interactive voice-based experiences that foster novel and profound connections. I present three projects to support and illustrate this approach by establishing connections at three levels: individual, interpersonal, and extending beyond human languages. The Memory Music Box establishes a sense of connection across space and time, and is specially designed to encourage conversation and to enhance a sense of connectedness for older adults. With the Mumble Melody initiative, I extract musicality from everyday speech as a way to access inner voice processes and help people who stutter gain increased fluency. Finally, with the Sonic Enrichment at the Zoo project, I present ways to improve connections within and between species -- including between humans and animals -- by exploring sonic and vocal enrichment interventions at the San Diego Zoo. Each of these projects represents a different angle from which to consider the potential of the voice for creating new forms of connection. Such is the vision of this work. I consider the notion of connectedness broadly, including the raising of personal self-awareness, the creation of strong interpersonal bonds, and the potential to create new forms of empathetic understanding with other species. Although this research focuses on the voice, it extends beyond this realm.
The broader themes examined through this work have implications in the fields of neurology, cognitive sciences, assistive technologies, human-computer interactions, communication sciences, and rapport-building. Indeed, since the voice is a versatile projection of ourselves into the world, it offers a unique perspective for the study and enhancement of cognition, learning, personal development, and wellbeing.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 227-261).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127503</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collective behavior over social networks with data-driven and machine learning models</title>
<link>https://hdl.handle.net/1721.1/127502</link>
<description>Collective behavior over social networks with data-driven and machine learning models
Leng, Yan
Individuals form network connections based on homophily; individuals' networks also shape their actions. Pervasive behavioral data provides opportunities for a richer view of decisions on networks. Yet the increasing volume, complex structures, and dynamics of behavioral data stretch the limits of conventional methods. I combine mathematical modeling (e.g., machine learning, game theory, and network science) with large-scale behavioral data to study collective behaviors over social networks. My dissertation tackles this area in four directions, revolving around the intricate linkage between individuals' characteristics, actions, and their networks. First, I empirically investigate how social influence spreads over networks using two massive cell phone datasets, and theoretically model how individuals aggregate information from local neighbors. Second, I study how to leverage influential nodes for selective network interventions (e.g., marketing and political campaigns) by proposing a centrality measure that goes beyond network structure. Third, I build a geometric deep learning model to infer individual preferences and make personalized recommendations by effectively utilizing noisy network information and nodal features. Last, given that the network is essential, I develop a framework to infer network connections from observed actions when networks are unavailable. My thesis provides building blocks for further network-based machine learning problems integrating nodal heterogeneity and network structures. Moreover, the findings on human behavior and the frameworks developed in my thesis shed light on marketing campaigns and population management.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 171-186).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127502</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resynthesizing volumetric soundscapes : low-rank subspace methods for soundfield estimation and reconstruction</title>
<link>https://hdl.handle.net/1721.1/127501</link>
<description>Resynthesizing volumetric soundscapes : low-rank subspace methods for soundfield estimation and reconstruction
Russell, Spencer (Spencer Franklin)
Sound and space are fundamentally intertwined, at both a physical and perceptual level. Sound radiates from vibrating materials, filling space and creating a continuous field through which a listener moves. Despite a long history of research in spatial audio, the technology to capture these sounds in space is currently limited. Egocentric (binaural or ambisonic) recording can capture sound from all directions, but only from a limited perspective. Recording individual sources and ambiance is labor-intensive, and requires manual intervention and explicit localization. In this work I propose and implement a new approach, in which a distributed collection of microphones captures sound and space together, resynthesizing them for a (now-virtual) listener in a rich volumetric soundscape. This approach offers great flexibility to design new auditory experiences, as well as giving a much more semantically meaningful description of the space. The research is situated at the Tidmarsh Wildlife Sanctuary, a 600-acre former cranberry farm that underwent the largest-ever freshwater restoration in the northeast. It has been instrumented with a large-scale (300 × 300 m) distributed array of 10-18 microphones which has been operating (almost) continuously for several years. This dissertation details methods for characterizing acoustic propagation in a challenging high-noise environment, and introduces a new method for correcting for clock skew between unsynchronized transmitters and receivers. It also describes a localization method capable of locating sound-producing wildlife within the monitored area, with experiments validating the accuracy to within 5 m.
The scale of the array provides an opportunity to investigate classical array processing techniques in a new context, with nonstationary signals and long interchannel delays. We propose and validate a method for location-informed signal enhancement using a rank-1 spatial covariance matrix approximation, achieving 11 dB SDR improvements with no source signal modeling. These components are brought together in an end-to-end demonstration system that resynthesizes a virtual soundscape from multichannel signals recorded in situ, allowing users to explore the space virtually. Positive feedback is reported in a user survey.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 106-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127501</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital expressive media for supporting early literacy through child-driven, scaffolded play</title>
<link>https://hdl.handle.net/1721.1/127500</link>
<description>Digital expressive media for supporting early literacy through child-driven, scaffolded play
Sysoev, Ivan
Digital technology holds many promises for supporting early literacy development. To stimulate both learning achievement and children's interest in literacy, it is beneficial for a learning activity to be playful, support children's agency and self-efficacy, and meaningfully connect to their life. However, nearly all current literacy technology, designed within the instructionist paradigm, lacks these qualities. This work attempts to address this issue by exploring the design space of technology that is: (1) "child-driven" -- allowing initiative and ideas to come from the learner; (2) expressive -- fostering the creation of messages or artistic artifacts; and (3) scaffolded -- assisting the child, in real time, in accomplishing his/her self-selected goals. Several forms of scaffolding were explored: (1) direct guidance routines with input from the child, (2) facilitating invented spelling, and (3) phoneme-based building blocks aimed at eschewing the orthographic complexities of English. The exploration was conducted through two apps, primarily aimed at phonological awareness development -- minimalistic SpeechBlocks I and scaffolded SpeechBlocks II. They were evaluated in four exploratory studies, both in classrooms and homes. The following was learned: (1) The media sparked intrinsic motivation, supported agency and self-efficacy, and allowed for non-trivial expression; (2) They were used in markedly different ways: from chaotic, impulsive exploration to sophisticated imaginative play; (3) The media encouraged literacy-oriented social play; (4) Real-time, built-in scaffolding was essential in supporting the meaningful participation of early literacy learners.
It allowed children to engage in high-level creativity, while simplifying the necessary low-level routine; (5) Different scaffolding types fulfilled different functions, such as responding to children's specific requests and facilitating the search for ideas; (6) The distinction between letter and phoneme blocks was ultimately less important than originally thought. However, onomatopoeic mnemonics (designed for phoneme blocks) were helpful for a certain category of children; (7) Initial phonological awareness and executive function appear to be moderators of how productive children's engagement with the media was. This work can provide insights to researchers, educators, and designers on how to combine children's agency with supportive guidance.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 204-217).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127500</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing the organism-environment relationship</title>
<link>https://hdl.handle.net/1721.1/127499</link>
<description>Designing the organism-environment relationship
Sharma, Sunanda, Ph. D., Massachusetts Institute of Technology.
How does one reconcile the complex uncertainties of living systems with the control required for real-world design? This is the central question facing the field of biological design and the creative intersection it occupies, as it seeks to move beyond the mimicry of biological processes and structures into the physical fabrication of biohybrid materials and products. In experimental biology, the variability of life is often intentionally stifled through the use of highly controlled environments and well-characterized materials and organisms. However, the resulting findings cannot easily be translated out of the lab into physical settings, severely limiting the potential impact of this field. The stakes for impactful science and design are becoming increasingly high, given the stark deterioration of the natural environment and the imminent exploration of extreme reaches, such as deep space. At this point in the history of science and design, we are fortunate to experience two extremes -- tools that allow for finer control than ever before possible, be it additive manufacturing, microscopy, or computational design -- and a wave of systems-level thinking that grapples with the overwhelming complexity and variation of nature. Building upon ideas from nonlinear dynamics, systems biology, architecture, and design, I present an experimental approach to biological design, which seeks to provide guidelines and achieve influence at large spatiotemporal scales and in dynamic environments while treating the inherent stochasticity of living systems as a feature. I present five project areas, spanning multiple phyla, scales, and public venues, through which I develop and demonstrate the practice of Organism-Environment Design.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127499</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting mental distress using healthcare claims data</title>
<link>https://hdl.handle.net/1721.1/127498</link>
<description>Forecasting mental distress using healthcare claims data
Taylor, Sara Ann.
Recently, depression rates have reached record levels in the US: 7.1% of adults in the US had at least one major depressive episode in 2015, and an estimated 7 million American adults aged 65 and older experience depression. Anxiety disorders are also on the rise, with a recent review estimating a prevalence of up to 25% for the general population. This dissertation focuses on estimating and forecasting mental distress using data from electronic health records and insurance claims to try to answer a fundamental question: Can we predict who will need mental health help before they need it? If these individuals can be identified, we can develop ways to quickly mobilize resources to respond to any increase in symptoms and develop methods to mitigate the effects of mental distress through ongoing baseline treatments. Following a brief high-level review of the US healthcare system and its data sources, we use various standardized survey scores stored in Electronic Health Records (EHRs) to define how mental distress is categorized in the more ubiquitous claims data. We achieve a Matthews correlation coefficient of 0.29 and an accuracy of 75% on a hold-out test set. These definitions are then used throughout the rest of the dissertation as the label of interest. We also describe a state-space-based generalized linear model that can be used to estimate the rate of healthcare events. We found that only a 16-day history was needed for the state-space models, compared to an 85-day history in a static model, to achieve similar accuracies. Finally, we forecast distress using demographic information and healthcare event rate features.
We report Matthews correlation coefficients, accuracy, and other metrics for predicting 1, 3, 6, and 12 months into the future. On a hold-out test set, we achieved accuracies of 89%, 74%, 59%, and 47% for forecasting the presence of a distress event 1, 3, 6, and 12 months into the future, respectively (compared to a baseline static model with accuracies of 78%, 63%, 49%, and 34%). We found that including the current distress label significantly improved the forecast results for the next period.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 169-177).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127498</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the materials footprint of a university campus : data, methods, recommendations</title>
<link>https://hdl.handle.net/1721.1/127454</link>
<description>Characterizing the materials footprint of a university campus : data, methods, recommendations
Perlman, Rachel Martha Katims.
Universities are major consumers and disposers of many materials, but their specific flows are not well characterized. Both energy and material consumption drive a university's environmental impact. Many universities collect data about their energy consumption (from fuel usage or utility bills) and assess some resulting environmental impacts. However, very little effort has been focused on understanding purchasing, materials handling, and the resulting environmental impacts. To date, there have been few material flow analyses of universities; most analyses concern cities or countries. This paper describes a method for conducting a material flow analysis (MFA) of a university, and it offers the strategies used to obtain a first-order characterization and quantification of the flows of the Massachusetts Institute of Technology (MIT). This case study demonstrates that an MFA of a university requires the use of a portfolio of diverse methods that deliver different outcomes, which must then be pieced together. Inflows and stocks are characterized using financial data, and waste flows are quantified by mass data. Flows are characterized using a combination of product/commodity descriptors and materials. Material purchases are characterized by product category, temporal variation, purchasing unit/entity, and level of decentralization. The top five purchase categories (by spend) in descending order are: (1) laboratory supplies; (2) hardware purchases/maintenance; (3) laboratory equipment; (4) chemicals, reagents &amp; gases; (5) office furniture. The study also reports the largest stocks of durable goods by quantity and dollar value, as well as the average residence time, or lifetime, of different products. The results also catalogue the quantity and disposal/recycling destinations of different waste streams, including municipal solid waste, single-stream recycling, hazardous waste, medical waste, and radioactive waste.
To estimate the embodied GHG emissions from purchases, spend data was used with an economic input-output life cycle assessment (EIO-LCA). The product categories with the largest embodied emissions were found to be laboratory supplies, chemicals/gases, office furniture, and electronics. The total embodied greenhouse gas emissions of material goods purchased in FY2016 was found to be roughly 78,800 metric tons of CO2-eq. This is significant compared to Scope 1 and 2 emissions. Emissions from waste management were estimated using waste generation figures and EPA's WARM model; the results indicate that the greenhouse gas impact from waste is much smaller than that from procurement. This study also reports the findings from sixteen in-person interviews conducted with MIT community members who make purchases. Among other findings, the interviews revealed that purchasers currently have a high level of individual agency and freedom. Purchasers also reported that they would like easily accessible information and guidelines for how to purchase sustainably, as well as formalized incentives for buying more sustainably and conserving materials. Currently, the purchasing process is carried out independently of any consideration of the materials' end of life (a linear system, rather than one with circularity for sustainability). University entities are autonomous in their purchasing, with some using different systems, which complicates tracking material consumption. This work provides several recommendations for making MFAs easier to perform at the university level and for reducing the materials and carbon footprint of a research university. Some key recommendations include: centralizing data collection and storage on procurement and waste; requiring more detailed product-level data from vendors; and creating web-based interdepartmental sharing programs for material goods.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-138).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127454</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the very invisible college : global science and African participation</title>
<link>https://hdl.handle.net/1721.1/127448</link>
<description>Essays on the very invisible college : global science and African participation
Fry, Caroline Viola.
Despite globalization, innovative activities remain concentrated in a handful of high-income countries. Leveraging knowledge and resources in these locations through ties in the global network presents opportunities for emerging economies. This dissertation consists of three essays studying the role of international ties in the development of scientific capacity in sub-Saharan Africa. Each chapter helps to uncover a different feature of the way in which, and the scope by which, international ties impact African science, and ultimately facilitate technological catch-up and economic growth. Chapter 1 is an introductory chapter, and chapters 2-4 are specific research applications. Chapter 2 explores the value of international relationships to African scientists by leveraging a unique opportunity afforded to some scientists to develop these relationships: the 2014 Ebola epidemic. Chapter 3 studies the spillover impact of the return home of American-trained scientists to African institutions. Chapter 4 explores a macro-level association between foreign knowledge stocks and African scientific productivity.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127448</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for the electronic structure of large chemical systems</title>
<link>https://hdl.handle.net/1721.1/127436</link>
<description>Methods for the electronic structure of large chemical systems
Ye, Hong-Zhou, Ph. D., Massachusetts Institute of Technology.
An accurate description of the electronic structure of chemical systems is crucial to understanding the atomistic mechanisms of many functional materials and to designing new ones. In this thesis, we present methods that can potentially be used to computationally study the electronic structure of large chemical systems. The thesis can be roughly divided into two parts. In the first part, we focus on using quantum embedding (QE) theory to reduce the scaling of traditional electron correlation methods while maintaining their accuracy. Specifically, we extend bootstrap embedding (BE), a QE scheme that has displayed high performance for model Hamiltonians, to general chemical systems. Two challenges arise in such an extension. First, unlike the model systems, the basis functions of a real chemical system do not possess a uniquely defined connectivity and often lack high symmetries, which poses challenges for partitioning a chemical system into the fragments necessary for a BE calculation (or other QE schemes). Second, the key to the success of BE on model Hamiltonians is the density matching between fragments that overlap with each other, which requires one to identify fragment centers and edges. Unlike model Hamiltonians, where the fragments are regularly shaped -- rendering the identification trivial -- a real chemical system can have fragments that display complex patterns that make the distinction far from obvious. To that end, we propose two fragment choices for partitioning a chemical system, one based on individual orbitals and the other on atoms, and systematically benchmark their effects on the performance of BE. Our finding is that atom-based fragments are the better choice for BE, leading to fast convergence with fragment size to the full-system calculations.
We then develop an efficient implementation of atom-based BE using coupled cluster with singles and doubles (CCSD) as the local solver, and benchmark it on a series of conjugated molecules containing up to ~2900 basis functions. Numerical tests confirm both the accuracy and computational efficiency of BE, rendering it a potential alternative to the more established local correlation methods. At the end of the first part, we also present another QE scheme, incremental embedding (IE), that does not rely on the connectivity between the units of a system (orbitals or atoms). In the second part, we shift gears to develop self-consistent methods for locating excited states, which are complementary to the linear-response-based methods for excited-state calculations. Here, the difficulty is avoiding variational collapse: unlike the ground state, which is the global minimum of the energy, excited states are often saddle points, causing regular algorithms that minimize the energy to collapse down to the ground state. To that end, we propose minimizing the energy variance, which is a minimum for all states and hence promises a numerically robust algorithm for locating them. To target a specific state, we couple the variance minimization to a direct energy-targeting functional. The resulting method, which we dub σ-SCF, can in principle locate any excited state by specifying a guess of the energy of the state. Numerical tests confirm that σ-SCF solutions behave like the energy-stationary solutions. More importantly, for single excitations, σ-SCF displays the pertinent spin-symmetry breaking, which motivates us to improve it using spin projection. This effort leads to the half-projected (HP) σ-SCF, which maintains the ability of σ-SCF to effectively locate excited states, and significantly improves the results for singlet and triplet single excitations.
In the conclusion, we comment on how self-consistent excited state methods could be combined with QE, hence enabling large-scale, correlated calculations for excited states.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 215-236).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127436</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing highly efficient lead halide perovskite solar cells</title>
<link>https://hdl.handle.net/1721.1/127435</link>
<description>Developing highly efficient lead halide perovskite solar cells
Yoo, Jason J. (Jason Jungwan)
Lead halide perovskite solar cells are an emerging technology that can be solution processed to yield low-cost, lightweight, and flexible photovoltaics. Much of the early work has been focused on developing device structures and processing techniques to improve light absorption and eliminate detrimental traps within the bulk of the perovskite active layer. As a result, the device efficiency of perovskite solar cells has improved from ~3% up to ~20% in less than a decade. However, the device efficiency of perovskite solar cells still needs to be much improved in order to compete with traditional photovoltaic technologies, such as silicon and GaAs, and to ultimately realize the theoretically determined Shockley-Queisser (SQ) efficiency limit. In this thesis, I focus on the development of a novel interface passivation strategy called selective precursor dissolution (SPD), which utilizes low-dimensional 2D perovskites as the interface passivating layer. The post-treatment of the bulk perovskite thin film with 2D perovskites via the SPD strategy prevented formation of a detrimental non-perovskite phase at the interface and resulted in much improved thin-film quality with reduced detrimental interface recombination. As a result, a certified power conversion efficiency (PCE) of 22.6% is achieved from a quasi-steady-state measurement along with an electroluminescence (EL) efficiency up to ~9%. Both device metrics were the highest values reported at the time of publication. In addition to developing an interface passivation strategy to improve device performance, a high-quality electron transport layer (ETL) was developed and a new perovskite composition was adopted to further improve the device performance. A chemical bath deposition (CBD) was used for the synthesis of a tin dioxide (SnO₂) ETL.
The pH of the reaction solution is identified as the key parameter for the CBD of SnO₂ that controls the quality of the SnO₂ ETL. pH 1.5 is determined to be the optimum acidity, resulting in a SnO₂ ETL with compact and conformal coverage without producing a detrimental secondary crystal phase. To improve the optoelectronic properties of the perovskite active layer, the MAPbBr₃ content is significantly reduced to minimize the band gap penalty, which also resulted in improved effective carrier mobility. MAPbBr₃ is commonly added to the perovskite composition to stabilize the α-phase FAPbI₃ but results in an increase in the band gap. Addition of 0.8 mol% of MAPbBr₃ to the FAPbI₃ perovskite resulted in much improved carrier lifetime and effective mobility, compared to the conventionally added 10 mol%. Together with the new SnO₂ ETL and perovskite active layer, a record-setting and certified PCE of 25.2% is achieved, which translates to 80.5% of the SQ limit for its band gap. In addition, due to low open-circuit voltage (V_OC) loss, the newly developed devices exhibit an EL efficiency up to 17.2% and an EL wall-plug efficiency up to 21.6%. Both the PCE and the EL efficiency are the highest reported so far from a single perovskite solar cell structure.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis. Page 132 blank.; Includes bibliographical references (pages 119-130).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127435</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and application of polymer metal-organic cage gels</title>
<link>https://hdl.handle.net/1721.1/127433</link>
<description>Design and application of polymer metal-organic cage gels
Zhao, Julia.
Chapter 1. A unifying overview of the fundamentals of polymer network synthesis, structure, and properties is provided, tying together recent trends in the field that are not always associated with classical polymer networks, such as the advent of crystalline "framework" materials. Recent advances in using molecular design and control of topology showcase how a deep understanding of structure-property relationships can lead to advanced networks with exceptional properties. Chapter 2. A novel bispyridine-based M₆L₁₂ coordination cube inspired by related work from Fujita and coworkers is prepared and used to generate a polyMOC gel with intermediate branch functionality compared to previous polyMOC networks. The ligand successfully self-assembles with not only Pd(II) and Pt(II) but also combinations of both metals to form mixed-metal cages.; By adjusting the ratio of palladium and platinum metal salts incorporated into network assembly, we can tune the energy dissipation properties of these materials due to differences in the lability of metal-pyridine coordination bonds. Using this strategy, the characteristic relaxation times and loss moduli of these M₆L₁₂-based gels can be tuned over nearly three orders of magnitude while maintaining the general network topology as well as the elastic behavior of the material. Chapter 3. The M₁₂L₂₄-based polyMOC network was optimized for water purification and reuse. A library of ligands was designed and synthesized to target three different chemical families: aromatic, perfluorinated, and alkylated groups. PolyMOC gel purification performance was tested for aromatic compounds and perfluoroalkyl substances (PFAS).; These evaluations demonstrated some success in the absorption of selected model compounds; however, absorption in all cases was accompanied by nonspecific binding of unfunctionalized control gels. Potential absorption mechanisms contributing to this nonspecific binding are discussed.
Future work to better determine interaction mechanisms is necessary for the improved design and function of polyMOC gels for water treatment applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127433</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Through-bond and through-space charge transport in metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/127432</link>
<description>Through-bond and through-space charge transport in metal-organic frameworks
Xie, Lilia Shell.
Electrically conductive metal-organic frameworks (MOFs) combine intrinsic porosity with efficient charge transport, opening up possibilities as active materials for applications ranging from electrocatalysis to chemiresistive sensing. In this thesis, efforts to study and control the conductivities of MOFs with different structural motifs enabling charge transport are detailed. Chapter 1 introduces the principles underlying electrical conductivity in solids and reviews relevant literature on MOFs with through-bond and through-space transport pathways. Chapter 2 demonstrates the application of post-synthetic mixed-valence doping in an iron-tetrazolate MOF exhibiting a through-bond pathway. Upon introducing Fe³⁺ sites into the native Fe²⁺ framework, the conductivity increases by five orders of magnitude, reaching the highest values reported for three-dimensionally connected MOFs. The remaining chapters are concerned with through-space transport pathways delineated by linker stacking interactions. Chapters 3 and 4 focus on lanthanide MOFs with the tetrathiafulvalene tetrabenzoate (TTFTB) linker. Chapter 3 describes the interplay between [pi]-[pi] stacking and conductivity in TTFTB MOFs with La³⁺, and proposes a general heuristic for relating transport properties to structural parameters in related materials. Chapter 4 presents TTFTB frameworks with the late lanthanides Tm³⁺, Yb³⁺, and Lu³⁺. The unprecedented topologies of the structures described in these chapters underscore the unique self-assembly properties of the TTFTB linker. Chapter 5 details the substitution of tetrathiafulvalene for a nickel(II) bis(glyoximate) core in a family of isostructural conductive MOFs. Broader implications for linker design in conductive MOFs, particularly those with three-dimensional connectivities and pronounced [pi]-[pi] stacking, are discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127432</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From benchtop to bedside and beyond : the development and application of low- and high-throughput, single-cell RNA-Seq platforms for precision medicine pipelines</title>
<link>https://hdl.handle.net/1721.1/127431</link>
<description>From benchtop to bedside and beyond : the development and application of low- and high-throughput, single-cell RNA-Seq platforms for precision medicine pipelines
Wadsworth, Marc Havens, II.
The development and application of single-cell technologies have revolutionized how we study health and disease. By deconstructing complex biological systems, like human tissues, into the fundamental building blocks of life, single cells, we can not only learn what makes each cell unique (intracellular circuitry) but also investigate how interactions among them (intercellular circuits) lead to system-level functions. Single-cell approaches have the potential to be particularly crucial for precision medicine pipelines, where comprehensive cellular profiles of system-level functions could be leveraged to guide diagnosis and treatment of disease. Here, to demonstrate the promise of these new technologies, we have developed and implemented single-cell RNA-Sequencing (scRNA-Seq) techniques to profile low-input clinical samples across a multitude of diseases, providing critical insight into how patient-specific scRNA-Seq profiles can help improve clinical treatment.; More specifically, first, we applied scRNA-Seq to dissect the multicellular ecosystem of metastatic melanoma, profiling 4,645 single cells isolated from 19 patients to examine both malignant and nonmalignant phenotypes and their interactions, as well as to propose potential targets for new therapies.
Next, to overcome the limitations of low-throughput scRNA-Seq platforms, we developed Seq-Well, a high-throughput platform for low-input clinical samples that is not only competitive with other scRNA-Seq technologies but also significantly cheaper and portable, enabling the democratization of scRNA-Seq technologies by empowering scientists in high- and low-resource settings.; Finally, we drastically improved the gene and transcript capture of Seq-Well by introducing a step called Second Strand Synthesis (S[superscript 3]) into the protocol and applied it to construct an atlas of skin inflammation across five conditions, resolving previously unappreciated adaptive and innate cellular phenotypes and proposing potential targets for therapeutic intervention unique to each inflammatory disease. Collectively, our work demonstrates the power of scRNA-Seq technologies and how they can be implemented in precision medicine pipelines to improve clinical outcomes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127431</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methane mono-oxidation electrocatalysis by palladium and platinum salts</title>
<link>https://hdl.handle.net/1721.1/127430</link>
<description>Methane mono-oxidation electrocatalysis by palladium and platinum salts
Kim, R. Soyoung (Rebecca Soyoung)
Selective oxidation of methane to methanol would enable better utilization of natural gas resources. Many homogeneous metal ions activate methane under mild conditions, but turning this reactivity into catalysis requires a viable oxidation step. Electrochemistry offers unique advantages in this regard, and this thesis demonstrates two mechanistically distinct approaches for methane functionalization electrocatalysis. Following the first approach, a novel high-valent Pd complex with exceptional methane functionalization reaction rates is electrochemically generated in fuming sulfuric acid. We present a structural model of this complex as a Pd[superscript III] dimer with a Pd-Pd bond and a 5-fold O-atom sulfate/bisulfate coordination environment at each Pd atom. We also discover, using EPR spectroscopy, a mixed-valent Pd₂[superscript II][superscript III] complex in the electrochemical oxidation sequence.; From these and redox potential measurements, a comprehensive thermodynamic landscape for the oxidation of Pd[superscript II] to Pd[superscript III]₂ emerges for the first time, and the critical role of M-M and M-L bonding in driving the electrochemical self-assembly of Pd[superscript III]₂ is exposed. Building on these structural studies, we arrive at a mechanistic model for methane functionalization by Pd[superscript III]₂ that simultaneously yields methyl bisulfate (MBS) and methanesulfonic acid (MSA). Rate-limiting H atom abstraction by Pd[superscript III]₂ and product bifurcation from the methyl radical intermediate are proposed based on experimentally determined rate laws and observations with radical scavengers and initiators. DFT calculations likewise support a shared outer-sphere proton-coupled electron transfer (PCET) reaction for the generation of both products.; Following the second approach for methane functionalization electrocatalysis, we establish an electrochemical solution to the long-standing oxidant problem of Shilov's Pt[superscript II] catalyst.
Inner-sphere electron transfer facilitates the electrochemical oxidation of Pt[superscript II] to Pt[superscript IV] on Cl-adsorbed platinum electrodes without concomitant methanol oxidation. The favorable catalytic property of this electrode is exploited for the continuous regeneration of the Pt[superscript IV] oxidant during Pt[superscript II]-catalyzed methane functionalization. The critical Pt[superscript II]/[superscript IV] ratio is maintained via dynamic modulation of the electric current and in situ monitoring of the solution redox potential. Thereby, we show stable and sustained turnover of Shilov's catalyst for the first time.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127430</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-state nuclear magnetic resonance investigations of complex plant biomaterials : plant cell walls and pine sporopollenin</title>
<link>https://hdl.handle.net/1721.1/127429</link>
<description>Solid-state nuclear magnetic resonance investigations of complex plant biomaterials : plant cell walls and pine sporopollenin
Phyo, Pyae.
Plant cell walls offer support and protection to plant cells, allowing land plants to thrive all over the world. The growing plant cell wall is a complex system consisting mainly of insoluble polysaccharides: cellulose, hemicellulose, and pectin. The cell wall provides both mechanical strength and extensibility to cells via cooperative polysaccharide rearrangements. Information on the structures of wall polysaccharides and their reorganization during growth has been elusive due to the lack of high-resolution methods for characterizing this disordered biomaterial. Here, solid-state NMR has been applied to investigate the structure, dynamics, and interactions of wall polysaccharides in ¹³C-enriched whole cells and intact cell walls to identify the molecular basis for plant growth. Molecular comparisons between different regions along the elongation gradient of the growing Arabidopsis stem give insights into the structural importance of pectin in wall extension.; Our NMR results showed significant decreases in pectin amount, sidechain branching, methylation, polymer hydration, and mobility from the upper to basal regions of the growing stem. 2D ¹³C-¹³C spectra and water-polysaccharide spin-diffusion experiments were conducted on the walls of wild-type Arabidopsis and various genetic mutants with different growth phenotypes. Our results showed that weakened cellulose-pectin contact and lower Ca²⁺-mediated HG cross-linking contribute to the polymer slippage underlying cell wall extension and plant growth. Additionally, the reduced HG methylation is suggested to impact these interactions and decrease plant growth.
The structural and dynamical heterogeneity of polysaccharides complicates ¹³C NMR spectra.; Various dynamics-selective ¹H- and ¹³C-detected correlation experiments conducted under moderate to fast MAS were explored to assign ¹H chemical shifts of intact wall polysaccharides and investigate long-range pectin-cellulose contacts with enhanced spectral resolution and sensitivity. In a separate project, we have determined the structure of natural abundance pine sporopollenin. Sporopollenin is the major component of the outer pollen wall, and its extreme inertness protects pollen grains from hostile terrestrial environments. Combining multi-CP for quantitative spectral analysis, spectral editing techniques for assignment of the ¹³C natural abundance material, and biochemical information, we determined the chemical structure of intact sporopollenin. The structure explains the inertness of sporopollenin and gives insight into the biosynthetic pathways and functional properties of this important biopolymer.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127429</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New fundamental transformations of heterocyclic compounds enabled by copper catalysis</title>
<link>https://hdl.handle.net/1721.1/127428</link>
<description>New fundamental transformations of heterocyclic compounds enabled by copper catalysis
Gribble, Michael William, Jr.
Chapter One. Introduction to Catalytic C-C Bond-Forming Reactions of Alkylcopper(I) Nucleophiles. This chapter provides a brief historical perspective on the development of Cu-catalyzed C-C bond-forming reactions and an overview of the general facets of organocopper reactivity that are most important to the work documented in subsequent chapters. Chapter Two: Asymmetric Copper-Hydride-Catalyzed Markovnikov Hydrosilylation of Vinylarenes and Vinyl Heterocycles. Copper hydride complexes catalyze highly enantioselective Markovnikov hydrosilylation of vinylarenes and vinyl heterocycles. This method has a broad scope and enables both the synthesis of isolable silanes and the conversion of crude products to chiral alcohols. DFT calculations support a mechanism proceeding by hydrocupration followed by [sigma]-bond metathesis with a hydrosilane.; Chapter Three: Asymmetric Cu-Catalyzed 1,4-Dearomatization of Pyridines and Pyridazines without Preactivation of the Heterocycle or Nucleophile. A chiral copper hydride complex catalyzes C-C bond-forming dearomatization of pyridines and pyridazines at room temperature. The catalytic reaction operates directly on free heterocycles and generates the nucleophiles in situ, eliminating the need for stoichiometric preactivation of either reaction partner; further, it is one of very few methods available for the enantioselective 1,4-dearomatization of heteroarenes. Combining the dearomatization with facile derivatization steps enables one-pot syntheses of enantioenriched pyridines and piperidines.; Chapter Four: Evidence for Simultaneous Dearomatization of Two Arenes Under Mild Conditions in Cu(I)-Catalyzed Direct Asymmetric Dearomatization of Pyridine. Bis(phosphine) copper hydride complexes are uniquely able to catalyze the direct dearomatization of unactivated pyridines with carbon nucleophiles, but the mechanistic basis for this result has been unclear.
Here we show that, contrary to our initial hypotheses, the catalytic mechanism is monometallic and proceeds via dearomative rearrangement of the phenethylcopper nucleophile to a C[subscript para]-metalated form prior to reaction at heterocycle C4. Our studies support an unexpected heterocycle-promoted pathway for this net 1,5-Cu-migration beginning with a doubly dearomative imidoyl-Cu-ene reaction. Kinetics, substituent effects, computational modeling, and spectroscopic studies support the involvement of this unusual process.; The CuL₂ fragment subsequently mediates a stepwise Cope rearrangement of the doubly dearomatized intermediate to give the C4-functionalized 1,4-dihydropyridine, lowering a second barrier in the pathway that would otherwise prohibit efficient asymmetric catalysis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127428</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcribing the dynamic multicellular immune orchestra during acute HIV infection</title>
<link>https://hdl.handle.net/1721.1/127427</link>
<description>Transcribing the dynamic multicellular immune orchestra during acute HIV infection
Kazer, Samuel W.
The development of novel vaccines and therapeutics requires comprehensive understanding of the immune system and its functional responses in health and disease. For infections, blueprinting the complex immune response during the initial stages of disease is essential. Human immunodeficiency virus-1 (HIV-1) infection is a model for studying host-pathogen interactions and has led to the development of several far-reaching concepts in immunology and infectious disease (e.g. antibody affinity maturation, host-derived pathogen sensors, etc.). Thus, exploring primary HIV-1 infection could not only impact the advancement of HIV-specific vaccines and treatments, but also serve as a model for early immune response in other viral infections. Utilizing a unique prospective cohort of young women at high risk of contracting HIV, we have begun to profile the immune response to HIV infection at its earliest detectable timepoints.; To maximize the utility of these rare samples from the FRESH study, we applied bulk and single-cell RNA-sequencing (RNA-seq) approaches to characterize changes in cellular phenotype as a function of time during acute infection. Specifically, we first characterized changes to peripheral innate lymphoid cells (ILCs), which are irreversibly depleted in acute infection. ILCs express gene programming associated with apoptosis and cell death near peak peripheral viremia, suggesting that they are depleted throughout the body. Second, we profiled HIV-specific CD8⁺ T cells, comparing between early treated and untreated participants. We show strong transcriptional responses near peak viremia consisting of broad cellular activation and cytotoxic activity that was mitigated by early treatment. Cells from treated participants demonstrated higher levels of anti-apoptotic markers and displayed long-lasting memory phenotypes.; Finally, we applied single-cell RNA-seq to total PBMCs from four individuals in FRESH throughout the course of acute infection. 
We developed a novel computational framework to discover gene modules significantly varying in expression as a function of time, enabling us to link distinct cellular activity between cell subsets. Moreover, we identify early subsets of monocytes and NK cells that associate with future disease control. These transcriptomic approaches have allowed an unprecedented view into the cellular dynamics of infection response, corroborating and contextualizing flow-cytometry and in vitro culture experiments. Together, this body of work broadens our understanding of the first moments of HIV infection on a cellular and molecular level, highlighting cell subsets and signaling pathways for perturbation in future vaccines and treatments.; Moreover, we pioneer the application of bulk and single-cell transcriptomics to longitudinal infection data on the days-to-weeks timescale, providing approaches and tools for others to apply to new datasets and studies in humans and other model organisms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127427</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing and improving the regulatory compliance and end-of-life environmental impacts of lead-based thin-film photovoltaics</title>
<link>https://hdl.handle.net/1721.1/127426</link>
<description>Assessing and improving the regulatory compliance and end-of-life environmental impacts of lead-based thin-film photovoltaics
Moody, Nicole S.
Emerging thin film photovoltaics (PVs) with photoactive layers based on quantum dots (QDs) and perovskites are a promising source of low-carbon renewable energy. Their solution-processability and compatibility with flexible substrates could allow for low-cost, high-throughput production with decreased factory start-up costs as well as deployment in new and underserved markets. However, the highest-performing QD and perovskite materials for PV applications are lead-based, which could prevent their commercial deployment due to regulatory restrictions, or lead to negative end-of-life environmental impacts from lead contamination. In this thesis, I evaluate the regulatory requirements of emerging lead-based thin-film PVs based on lead halide perovskites and lead sulfide (PbS) QDs.; Using the European Union Restriction of Hazardous Substances (RoHS) Directive and the United States Resource Conservation and Recovery Act (RCRA) regulatory frameworks, I evaluate the market potential of rigid and flexible perovskite PV modules. I also perform a risk assessment of a worst-case, end-of-life disposal scenario for lead halide perovskite PVs in an unlined landfill to determine whether lead solubility or total lead content poses a greater risk for public lead exposure under catastrophic failure conditions.
I present two strategies for improved regulatory compliance and reduced risk of environmental lead contamination for lead-based thin-film PVs: 1) reduction of lead content using an alternative PbS QD PV fabrication procedure based on ligand exchange with lead-free tetrabutylammonium iodide (TBAI) rather than lead halides, and 2) prevention of lead leakage from lead halide perovskite PVs via the introduction of a calcium phosphate barrier film.; I also provide preliminary investigations of the interactions between bulk and nanostructured lead compounds and ethylene vinyl acetate (EVA), the polymer most commonly used to laminate commercial PV encapsulation architectures, and the toxicity of PbS QD and lead halide perovskite PV device components. These studies serve as a framework for future investigation of PV toxicity and regulatory issues and the development of low-cost PV technologies with low environmental risk.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 100-108).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127426</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New reactions and reagents for phosphorus-carbon bond-formation</title>
<link>https://hdl.handle.net/1721.1/127425</link>
<description>New reactions and reagents for phosphorus-carbon bond-formation
Geeson, Michael B.
Chapter 1 takes the format of an "Outlook", and sets forth the case for developing sustainable methods in the synthesis of phosphorus-containing compounds. Methods used by nature for phosphorus-carbon bond-formation, or in the chemistry of other elements such as silicon, are discussed as model processes for the future of phosphorus in chemical synthesis. Chapter 2 describes the discovery of [TBA][P(SiCl₃)₂], prepared from [TBA]₃[P₃O₉]·2H₂O and trichlorosilane. The bis(trichlorosilyl)phosphide anion is used to prepare compounds that contain P-C, P-O, P-F, and P-H bonds in a method that bypasses white phosphorus (P₄), the traditional route to organophosphorus compounds. Chapter 3 extends the phosphate precursors to [TBA][P(SiCl₃)₂] from trimetaphosphate to crystalline phosphoric acid.; Balanced equations are developed for the formation of [TBA][P(SiCl₃)₂] from phosphate sources, and the byproducts are identified as hexachlorodisiloxane and hydrogen gas. Extension of trichlorosilane reduction to bisulfate provides improved access to the known trichlorosilylsulfide anion, [TBA][SSiCl₃]. This anion was used as a thionation reagent to prepare thiobenzophenone and benzyl mercaptan from benzophenone and benzyl bromide, respectively. Chapter 4 describes the synthesis of the neutral phosphine HP(SiCl₃)₂, obtained by protonation of [TBA][P(SiCl₃)₂] with triflic acid. HP(SiCl₃)₂ is a highly efficient reagent for photochemical hydrophosphination of terminal alkenes. The phosphorus-silicon bonds in the hydrophosphination products can be functionalized to provide compounds of the general formulae: RPCl₂, RPH₂, [RP(R')₃]Cl, RP(O)(H)(OH), and RP(O)(OH)₂.; Chapter 5 describes a method to prepare phosphiranes (three-membered rings that contain a phosphorus atom) from anthracene-based phosphinidene precursors and styrenic olefins. The phosphinidene transfer reaction requires an organoiron and fluoride catalyst.
The resulting phosphirane is prepared in good yield (73%) with high stereoselectivity (&gt;99%). Experimental investigations into the mechanism point toward the intermediacy of an iron-coordinated fluorophosphide species.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis. Page 373 blank.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127425</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laser spectroscopy of acetylene</title>
<link>https://hdl.handle.net/1721.1/127424</link>
<description>Laser spectroscopy of acetylene
Erickson, Trevor J., 1989-
The purpose of this thesis is to explore recent advances in the spectroscopy of acetylene. Acetylene is among the most-studied molecules, and an astoundingly large volume of work has been done on it. Highly excited S₁ acetylene suffers from many effects that complicate a thorough understanding of it. For instance, isomerization occurs between trans- and cis-bent geometries. Any models, including the most successful polyad models that have been used to study S₁ acetylene for many years, that are based on the more stable trans-bent structure are doomed to failure as the energy of isomerization is approached. A further problem is experimental, rather than theoretical. The "interesting" region of S₁ dynamics, namely the energy region in the vicinity of the isomerization barrier, is dissociative. Near the cis-trans barrier, the electronic surface interacts with a nearby dissociative curve, and molecules tunnel through the barrier and dissociate. The lifetime constraints are addressed with a detection technique that, to a certain point, is insensitive to predissociative lifetimes, Photofragment Fluorescence Action Spectroscopy (PFAS). PFAS detection involves the photofragmentation of excited acetylene at a faster rate than the molecules dissociate. The excited photofragments themselves fluoresce, and this fluorescence is collected as the signal. Using PFAS, the most detailed spectra yet of high-energy S₁ acetylene have been collected. The additional insights into the structure and dynamics of acetylene, both those that have already been analyzed and those that require further work, are discussed in this thesis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 101-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127424</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical properties of complex solids and exotic thermal transport dynamics investigated with optical and extreme-ultraviolet transient grating techniques</title>
<link>https://hdl.handle.net/1721.1/127423</link>
<description>Mechanical properties of complex solids and exotic thermal transport dynamics investigated with optical and extreme-ultraviolet transient grating techniques
Duncan, Ryan Andrew.
In this thesis we investigate the acoustic/mechanical properties and thermal transport phenomena in a range of solids using optical and extreme-ultraviolet transient grating (TG) techniques. A single-crystal (110)-oriented tungsten sample was subjected to mild helium-ion bombardment and studied using acoustic TG spectroscopy. We observed a substantial and counterintuitive increase in the elastic anisotropy as a result of ion bombardment, consistent with previous ab initio calculations. The acoustic dispersion of a microgranular crystal composed of a hexagonal monolayer of polystyrene microspheres was measured over its entire Brillouin zone along the [Gamma]-K direction. We observe three contact-based branches and five spheroidal branches in addition to the surface acoustic wave of the glass substrate. We determine that both the contact and spheroidal modes are dispersive, collective vibrational modes of the system, and characterize a range of other dynamics in this system.; Reflection-geometry TG thermal transport measurements were performed on a bulk Si₉₃.₄Ge₆.₆ alloy for grating periods between 1 and 13.5 [mu]m. We obtain quantitative agreement with ab initio calculations using the variational solution of the phonon Boltzmann transport equation (BTE) under the relaxation time approximation (RTA). Nanoporous holey silicon membranes with feature sizes greater than 100 nm are studied with thermal transport TG measurements. The measured values of thermal diffusivity are in quantitative agreement with the results of ab initio RTA-BTE calculations assuming diffuse scattering from boundaries--i.e., the "Casimir formulation" of thermal transport through nanostructures. Hydrodynamic second sound over microscale distances was observed at temperatures above 100 K in graphite in a reflection-geometry TG measurement.
We obtain semiquantitative agreement with ab initio linearized BTE calculations with the full scattering matrix.; TG measurements are extended to the extreme ultraviolet (EUV) spectral region using the EUV free-electron laser (FEL) sources of the FERMI light source at the Elettra Synchrotron Facility in Trieste, Italy. We report EUV-pump optical-probe TG measurements on a range of samples. We observe coherent phonon generation in BK-7 glass, diamond, and Bi₄Ge₃O₁₂ as well as highly non-diffusive and fully ballistic thermal transport in silicon and diamond, respectively, at a grating period of 277 nm. We then report EUV-pump EUV-probe TG measurements of amorphous Si₃N₄ and silicon, observing coherent acoustic signal and thermal transport signal at grating periods as small as 28 nm.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 185-202).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127423</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport and fluctuations at the nanoscale</title>
<link>https://hdl.handle.net/1721.1/127422</link>
<description>Transport and fluctuations at the nanoscale
Dodin, Amro.
Nonequilibrium nanoscale systems present a range of fundamental and technological questions that microscopic theories based on Newton's and Schrödinger's equations as well as coarse grained macroscopic theories like thermodynamics are ill-equipped to handle. These systems typically comprise too many degrees of freedom to permit a microscopic treatment but show significant fluctuations or dependence on molecular detail that prohibit macroscopic coarse graining, thereby requiring the development of new theoretical and computational tools. This thesis considers two complementary approaches to treating mesoscale systems. In the first part, a range of numerical Monte Carlo models are developed for treating energy and charge transport in disordered nanostructured semiconductors.; These studies reveal surprising non-equilibrium effects such as transport enhanced dye-sensitization in molecular aggregate-quantum dot colloids and nonequilibrium exciton "heating" in ligand exchanged quantum dot films that can be engineered to enhance macroscopic device performance. In addition, I present a model for electron transport through quantum dot solids that is used to derive quantitative design principles for electron energy filtering materials that can be used to mitigate thermal broadening in electronic devices. In the second part, I present a new formalism for treating heterogeneity in quantum ensembles that can be applied to emerging single molecule quantum spectroscopies.
The resulting state space distribution formulation shares several important properties with classical phase space distributions, allowing for novel generalizations of classical statistical mechanical results.; I show that this isomorphism can be used to systematically generalize the Crooks Fluctuation Theorem, Jarzynski Non-equilibrium Work Relation and the Bochkov-Kuzovlev generating functional that compactly encodes the Jarzynski equality, Onsager reciprocity relations and nonlinear response of all orders. The ability to generate these generalizations shows that despite very different mathematical manifestations in traditional theories, the fluctuations characterized by these theorems arise from the same fundamental physics in both quantum and classical systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 185-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127422</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multienzyme assemblies and dynamics in acetogenesis and methanogenesis</title>
<link>https://hdl.handle.net/1721.1/127421</link>
<description>Multienzyme assemblies and dynamics in acetogenesis and methanogenesis
Cohen, Steven E., Ph. D., Massachusetts Institute of Technology.
The Wood-Ljungdahl pathway of acetogenesis allows for growth utilizing carbon dioxide as the sole carbon source. This process uses a series of metalloenzymes to catalyze the reduction of two molecules of carbon dioxide to acetyl-CoA. Key to this pathway are the bifunctional carbon monoxide dehydrogenase/acetyl-CoA synthase (CODH/ACS) and corrinoid iron-sulfur protein (CFeSP). CODH/ACS is a bifunctional enzyme, with the CODH subunit catalyzing the reduction of carbon dioxide to carbon monoxide. CFeSP is a cobalt-containing corrinoid-dependent methyltransferase. The active site of ACS, termed the A-cluster, is a nickel-, iron-, and sulfur-containing metallocluster which catalyzes the synthesis of acetyl-CoA from carbon monoxide provided by CODH, a methyl group provided by CFeSP, and CoA. Despite the unprecedented organometallic chemistry performed at the A-cluster, much is unknown about ACS activity. No substrate-bound structures of ACS have ever been reported, and the molecular basis for ACS and CFeSP interactions is largely uncharacterized. Here we present the structure of the carbonylated A-cluster in CODH/ACS, highlighting the role of conformational dynamics in catalysis. We further characterize ACS dynamics using negative-stain electron microscopy, demonstrating a much larger extent of conformational flexibility than has been crystallographically observed. The basis for CFeSP:ACS interactions is further interrogated using a series of CFeSP domain constructs, showing the corrinoid-binding Rossmann fold to be sufficient for ACS interactions. Finally, we report preliminary structural characterization of acetyl-CoA decarbonylase/synthase (ACDS), a 2.2 MDa complex of CODH, ACS, and CFeSP found in archaeal methanogens. Together, this advances our understanding of the roles of structure and dynamics in the Wood-Ljungdahl pathway.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2020; Cataloged from the official PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127421</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering self-assembling living structures with mammalian synthetic biology</title>
<link>https://hdl.handle.net/1721.1/127373</link>
<description>Engineering self-assembling living structures with mammalian synthetic biology
Tordoff, Jesse (Jessica Jane)
The shapes and structures created by living organisms have properties that any engineer would desire: they are self-healing, growing, adaptive, and range in complexity and material property from the strength of bone to the lightweight flexibility of an insect wing. The goal of this thesis is to use synthetic biology to design, control, and understand the range of structures that can be constructed using a ubiquitous tool for self-organization in animal development: cell sorting. We first use experiments and computational modeling to demonstrate how incompletely sorted structures can be systematically designed by quantitative control of cell composition. By varying the number of highly adhesive and less adhesive cells in multicellular aggregates, we find that cell type ratio and total number of cells are controllers of pattern formation, and the resulting structures are maintained over the course of days.; Next, we establish a set of design rules for cell sorting-driven shape assembly using cell lines with engineered genetic circuits that can induce expression of different cadherins. We show that, even when well mixed, populations of cells with different cadherin expression profiles sort themselves in predictable ways. The resulting shapes vary significantly, including planes of semi-regular polka dots, a sphere engulfed by an outer shell, maze-like intertwined populations, and radial protrusions from a core. We can reliably select between these shapes and control their properties by changing induction of cadherin expression, population ratio, cadherin identity, and total aggregate size. 
Finally, we have designed and constructed a recombinase-based tool to break symmetry in a population of cells, with the ultimate goal of controlling the autonomous creation of different subpopulations from a single cell.; By creating a platform to programmably generate multicellular forms, this thesis aims to establish design principles for synthetic morphogenesis and provide a framework to control living shapes.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 114-124).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127373</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling car-sharing service design problems at scale with high-resolution data : discrete simulation-based optimization approaches</title>
<link>https://hdl.handle.net/1721.1/127372</link>
<description>Tackling car-sharing service design problems at scale with high-resolution data : discrete simulation-based optimization approaches
Zhou, Tianli, Ph. D., Massachusetts Institute of Technology.
This thesis considers the design of two-way (i.e., round-trip) car-sharing services. The optimization problems are formulated as high-dimensional discrete simulation-based optimization (DSO) problems. Existing DSO algorithms cannot tackle these problems at scale. Moreover, they are designed based on asymptotic performance guarantees, but lack computational efficiency, i.e., they tend to not perform well under tight computational or simulation budgets. The main contribution of this thesis is to show how mixed-integer programming (MIP) models can be used to enable general-purpose DSO algorithms to become: (i) scalable: the car-sharing problems can now be tackled at scale; and (ii) computationally efficient: solutions with good performance can be identified given tight computational budgets.; More generally, the methods proposed in this thesis contribute to bridging the gap between these two mostly disconnected research communities of analytical optimization and simulation-based optimization. This thesis formulates MIP models and proposes two approaches to embed the MIP information within the DSO algorithms. First, we use a MIP to formulate a metamodel, which is an analytical approximation of the simulation-based objective function. The information from the MIP is used at every iteration of a DSO algorithm by solving an analytical metamodel optimization problem. Second, we use a MIP to enhance the partitioning step of an existing globally convergent DSO algorithm. The MIP is used to identify low-dimensional subregions of the feasible region, where more exhaustive simulation is to be carried out.; We then compare the performance of methods that either: (i) use the MIP information for metamodeling, (ii) use the MIP information for partitioning, or (iii) use the MIP information for both metamodeling and partitioning. We study how the MIP's accuracy impacts the performance of these methods. 
Based on both small synthetic problems and a Boston area case study, we show how the scalability and the computational efficiency of both a general-purpose locally convergent DSO algorithm and a general-purpose globally convergent DSO algorithm are enhanced. We also present results from a New York City case study. The case studies use detailed car-sharing reservation data from a major car-sharing operator. We benchmark the methods versus several algorithms, including stochastic programming. The combination of MIPs with DSO algorithms leads to methods with both asymptotic performance guarantees and good short-term performance.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 135-142).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127372</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Choices in regeneration: position and fate</title>
<link>https://hdl.handle.net/1721.1/127370</link>
<description>Choices in regeneration: position and fate
Raz, Amelie A.
Positional information is required for animal regeneration, yet how it is harbored in adult tissues is poorly understood. In planarians, positional control genes (PCGs) control regeneration outcomes and are regionally expressed predominantly in the musculature. Acoels are early diverging bilaterally symmetric animals, having separated from other bilaterians &gt;550 million years ago. We find that PCGs in the acoel Hofstenia miamia are expressed together and specifically in a primary differentiated cell type: muscle. The vast majority of Hofstenia muscle cells in regions tested express PCGs, suggesting positional information is a major feature of muscle. PCG expression domains are dynamic in muscle after injury, consistent with known PCG roles in guiding regeneration.; These data demonstrate an instructive positional role for Hofstenia muscle, and this similarity with planarians suggests mesodermal muscle originated at the base of the Bilateria not only for contraction, but also as the source of positional information guiding regeneration. Planarians rely on a population of adult stem cells to perform whole-body regeneration. Previous work has shown that at least some of these stem cells, known as neoblasts, are individually pluripotent. The neoblast compartment is also highly heterogeneous, with many neoblast subpopulations, called specialized neoblasts, having different specified fates. These fates are specified through expression of fate-specific transcription factors (FSTFs), and inhibition of these factors can lead to precise ablation of a given lineage. Interestingly, fate specification of specialized neoblasts for the epidermis (zeta-neoblast) occurs during S phase, and commitment to this fate occurs in one cell cycle.; We demonstrate here that whereas FSTF expression is common among neoblasts in S, G2, and M cell cycle phases, neoblasts in G1 phase only rarely express FSTFs, suggesting that neoblasts might exist in a common, unspecialized state during G1.
We also demonstrate that these unspecialized G1 neoblasts can arise from the division of a specialized neoblast, suggesting that specialized neoblasts retain pluripotent potential. Examination of expanding colonies of neoblasts shows that early colonies can completely lack cells expressing markers of all known specialized neoblast classes, consistent with a model in which multiple-to-all neoblast classes can generate clonogenic, pluripotent cells. To further test this hypothesis, we performed single-cell transplants to assay the functional pluripotency of specialized neoblasts, comparing the frequency of colony formation by transplanted neoblasts to the rate of unspecialized neoblasts from the same cohort of cells.; We found that neither neoblasts of known specialization state nor unspecialized neoblasts alone can explain the frequency of colony formation by single-cell transplants. Together these findings suggest that specialization through expression of fate-specific markers does not necessitate fate commitment, and that all neoblasts might have clonogenic potential.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis. Page 195 blank.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127370</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of the mobile genetic element ICEBs1 on bacterial host fitness</title>
<link>https://hdl.handle.net/1721.1/127369</link>
<description>Effects of the mobile genetic element ICEBs1 on bacterial host fitness
Jones, Joshua M. (Joshua Mark); Grinberg, Ilana; Eldar, Avigdor; Grossman, Alan Davis.
Mobile genetic elements drive bacterial evolution by mediating horizontal gene transfer and by carrying cargo genes that confer important traits to host cells. Traits provided by mobile genetic elements include antibiotic resistance, novel metabolic capabilities, virulence factors, and the ability to form symbioses. Mobile genetic elements, especially Integrative Conjugative Elements (ICEs), are abundant in bacteria. Many do not contain cargo genes with known functions, but some likely carry novel types of cargo genes that provide traits beyond the scope of those currently attributed to mobile elements. In this thesis I describe the characterization of a fitness benefit provided by the mobile genetic element ICEBs1 to its bacterial host, Bacillus subtilis. Activation of ICEBs1 conferred a frequency-dependent selective advantage to host cells during biofilm formation and sporulation. The advantage was due to inhibition of biofilm-associated gene expression and delayed sporulation, which enabled ICEBs1 host cells to exploit their neighbors and grow more prior to sporulation. I identified a single gene within ICEBs1, ydcO, as both necessary and sufficient for the repression of development. Manipulation of host development programs allows ICEBs1 to increase host fitness. These findings highlight that cargo genes can alter existing aspects of physiology rather than providing entirely new traits, broadening our understanding of how mobile genetic elements influence their hosts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis. "Chapter 2. A mobile genetic element increases bacterial host fitness by manipulating development / Joshua M. Jones, Ilana Grinberg, Avigdor Eldar, and Alan D. Grossman"--Page 45.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127369</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The roles of the helicase double-hexamer complex and the ssDNA-binding protein RPA during eukaryotic DNA replication</title>
<link>https://hdl.handle.net/1721.1/127368</link>
<description>The roles of the helicase double-hexamer complex and the ssDNA-binding protein RPA during eukaryotic DNA replication
Friend, Caitlin M. (Caitlin Marie Niesen)
Eukaryotic DNA replication is a complex process that must occur accurately, completely, and only once per cell cycle. To accomplish these goals, the events of DNA replication are tightly coupled to cell-cycle progression. Origins of replication are licensed by loading of the Mcm2-7 replicative DNA helicase during G1. Two Mcm2-7 hexamers load onto each origin as a double hexamer encircling dsDNA. At this stage, the helicases are inactive. Upon entry into S phase, loaded Mcm2-7 complexes then recruit a number of other replication proteins that activate the helicase. Helicase activation results in separation of the double hexamer, a transition to encircling ssDNA, and initiation of DNA unwinding. Once activated, the helicase produces the ssDNA that acts as template for new DNA synthesis. Helicase activation is the committed step of DNA replication after which the cell must complete genome duplication before it can segregate its chromosomes and divide.; The work described in this thesis examines mechanisms that are essential for eukaryotic DNA replication, with a focus on DNA unwinding and DNA synthesis. In Chapter II, I explore the essential functions and purpose of the double-hexamer conformation of the loaded helicases. Using a helicase mutant that loads as two single hexamers, I show that initial origin DNA melting can occur in the context of a single-hexamer helicase. Importantly, the amount of unwinding that occurs within a single helicase is not sufficient to allow the transition onto ssDNA. Further DNA unwinding and subsequent DNA synthesis requires robust double-hexamer helicase interactions. Together, my findings strongly suggest that the double-hexamer conformation is essential to complete helicase activation. In Chapter III, I explore the role and specificity of ssDNA-binding proteins (SSBs) in eukaryotic DNA replication. To this end, I substituted the eukaryotic SSB RPA with SSBs from other systems: E. coli SSB (EcSSB) and T4 bacteriophage Gp32.
I find that DNA unwinding is supported by RPA and EcSSB but not Gp32, suggesting that eukaryotic DNA unwinding requires at least one SSB function beyond ssDNA binding. Although both RPA and EcSSB support DNA synthesis, we only observed robust lagging-strand synthesis in the presence of RPA. My studies indicate that RPA must perform multiple functions beyond ssDNA binding to facilitate eukaryotic DNA replication.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127368</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulators of the Drosophila oocyte-to-embryo transition</title>
<link>https://hdl.handle.net/1721.1/127367</link>
<description>Regulators of the Drosophila oocyte-to-embryo transition
Avilés-Pagán, Emir E. (Emir Enrique)
The transition from oocyte to embryo is critical across metazoans, as it marks the onset of development and is essential for fertility. Although many important regulators of the oocyte-to-embryo transition have been uncovered, our understanding of how their activities are controlled is limited. Moreover, there are likely to be additional regulators as yet unidentified. This thesis describes an investigation into new and known regulators of the oocyte-to-embryo transition in Drosophila. Control of mRNA translation is a crucial part of egg activation, the trigger for the oocyte-to-embryo transition. The PNG kinase is a master regulator of mRNA translation at egg activation in Drosophila. The activity of PNG is coupled with the completion of the meiotic cell cycle by regulated binding of the GNU activating subunit. The protein interactions and localization of the GNU regulatory subunit in mature oocytes were analyzed, revealing an unexpected link between GNU and RNP granules.; The functional roles of the domains of GNU were defined. Delineating developmental control of the PNG kinase complex is key to fully understanding the changes in the translational landscape at the oocyte-to-embryo transition. Changes in mRNA translation are accompanied by changes in protein levels during the oocyte-to-embryo transition. Many proteins change in a manner that suggests developmental regulation, and thus are potential new regulators of the oocyte-to-embryo transition in Drosophila. A targeted screen of genes encoding proteins that appear to be under developmental regulation at this time was performed. Several genes were identified that are required for the proper onset of embryogenesis. Two of these genes, CG5003, a putative SCF subunit, and nebulosa (CG10960), a putative sugar transporter, were characterized.
Elucidating the mechanisms through which these genes ensure the oocyte-to-embryo transition will yield fundamental insights.; These findings add new understanding to the regulation underlying the oocyte-to-embryo transition in Drosophila, painting a more complex picture than previously appreciated. They reveal the myriad of processes whose proper developmental control and execution are crucial for fertility.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis. "May 2020." Page 134 blank. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127367</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identification of a master regulator of differentiation in Toxoplasma gondii</title>
<link>https://hdl.handle.net/1721.1/127366</link>
<description>Identification of a master regulator of differentiation in Toxoplasma gondii
Waldman, Benjamin S.
Toxoplasma gondii is a parasite chronically infecting over a quarter of the world's population. While Toxoplasma infection is asymptomatic in immunocompetent individuals, it can cause life-threatening disease in immunocompromised patients and the developing fetus. Acute symptoms of infection and dissemination of Toxoplasma throughout the body are due to the rapid proliferation of the tachyzoite stage of the parasite. While tachyzoites can eventually be cleared through adaptive immunity, a subset will differentiate into more slowly replicating bradyzoites. These bradyzoites form intracellular cysts in brain and muscle tissue that cannot be cleared by the immune system or by current therapeutics. Cysts thereby act as latent reservoirs of infection, and periodic reactivation of cysts results in lifelong risk of disease. While differentiation is readily induced in cell culture by diverse stressors, the molecular basis of this conversion is unknown.; I therefore sought to determine the genetic requirements and regulation of bradyzoite differentiation in Toxoplasma. In this thesis, I describe the discovery and characterization of a master regulator of differentiation in Toxoplasma gondii. In my first chapter, I discuss application of CRISPR/Cas9-mediated forward genetic screening in Toxoplasma to identify a Myb-like transcription factor (BFD1; bradyzoite formation-deficient) as necessary for differentiation in cell culture and formation of brain cysts in mice. In my second chapter, I profile replicating and differentiating Toxoplasma in unprecedented detail through single-cell RNA-sequencing, and determine that parasites lacking BFD1 fail to initiate differentiation transcriptionally. 
In my third chapter, I demonstrate that BFD1 is post-transcriptionally regulated, and that conditional stabilization of BFD1 is sufficient to induce differentiation both phenotypically and transcriptionally in the absence of stress.; BFD1 binds preferentially at transcriptional start sites, including those of many stage-specific genes, and a putative BFD1-binding motif is associated with differential expression. Identification of BFD1 as a master regulator of differentiation is a breakthrough in our understanding of the regulation of chronic Toxoplasma infection, and provides a molecular switch for further characterization of this clinically relevant transition.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127366</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An environment-dependent framework to study ecological networks</title>
<link>https://hdl.handle.net/1721.1/127327</link>
<description>An environment-dependent framework to study ecological networks
Song, Chuliang.
Species interactions are central for the persistence of almost every form of life on Earth. Ecological networks provide an integrated representation of how species interactions are organized within ecological communities. Importantly, ecological networks are highly structured, which has motivated generations of ecologists to elucidate how these structures affect species coexistence. Unfortunately, we still do not have a clear and consistent answer about the link between network structure and species coexistence. Yet, solving this puzzle can provide key insights regarding the maintenance of biodiversity and ecosystem services. Interestingly, despite the extensive empirical evidence showing that the environment affects both network structure and species coexistence, most studies do not take into account the conditional effects of the environment on these two variables.; Indeed, due to the multidimensional and changing nature of environmental factors, it has not been easy to develop a general framework that can link the structure of ecological networks, species coexistence, and the environment. In this thesis, I establish a scale-dependent, rigorous, testable framework to understand how ecological networks affect species coexistence within an environmental context. In the first part of the thesis, I develop a theoretical framework and computational tools, rooted in the notion of structural stability, to quantitatively link network structure, species coexistence, and environmental factors. In the second part of the thesis, I test and validate this theory using field and experimental observations of plant-pollinator, annual-perennial plant, and plant-herbivore interaction networks.; Specifically, I show how spatio-temporal variability of the environment affects the structure of ecological networks, and how they jointly drive the patterns of species coexistence that we observe in nature.
Overall, my thesis establishes that the structure of ecological networks may only make sense in the light of environmental information.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 271-296).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127327</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic last-mile distribution network design under demand uncertainty</title>
<link>https://hdl.handle.net/1721.1/127326</link>
<description>Strategic last-mile distribution network design under demand uncertainty
Snoeck, André Cornelis Joseph.
We propose quantitative models to incorporate demand uncertainty and physical distribution flexibility into the strategic design of last-mile distribution networks. Last-mile distribution typically constitutes the most expensive part of any global supply chain, and it is becoming increasingly complex due to the ongoing boom in e-commerce, the associated rise in customer expectations, and the increasing levels of urbanization. Appropriately designing the underlying distribution networks, including facility location, inventory allocation, and fleet composition decisions, is paramount for the efficient operation of both traditional and highly responsive last-mile distribution services. In traditional networks, the order collection and delivery periods are segregated by an order cut-off, rendering the operational distribution problem deterministic.; We propose a stochastic programming model to capture the temporal hierarchy of decision making between strategic decisions made under uncertainty and deterministic operational recourse actions. However, for highly responsive networks, the order collection and delivery periods are intertwined, rendering the operational planning problem dynamic and stochastic. The aggregations and approximations required to formulate a tractable stochastic programming model fail to accurately capture the constraining impact of the strategic design on the operational response to dynamically realizing demand. Therefore, we propose a metamodel simulation-based optimization approach to address the design problem for highly responsive last-mile services. In this approach, we integrate a high-level analytical metamodel with an in-depth, disaggregate simulator.; We show that including demand uncertainty in the design process leads to networks that incorporate redundancy and flexibility in the strategic design, resulting in increased cost performance.
Based on a study with a fast-moving consumer goods company that operates traditional distribution networks in emerging economies, we show that a stochastic design approach outperforms deterministic approaches, with and without embedding physical distribution flexibility in the network. In addition, we conduct a study with a global fashion company that aims to deploy a one-hour delivery service in Manhattan, NY. We show how congestion in order processing at facilities leads to picking queues that harm performance through increased late deliveries and reduced consolidation opportunities. Furthermore, we show that incorporating uncertainty allows us to accurately capture local stock-out inventory effects.; Based on a generalization of the newsvendor model, we analytically show the potential for cost reduction that emerges from leveraging existing brick-and-mortar assets, including inventory positions and retail stores, in highly responsive distribution networks.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-173).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127326</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virus-driven evolution of marine Vibrio</title>
<link>https://hdl.handle.net/1721.1/127325</link>
<description>Virus-driven evolution of marine Vibrio
Hussain, Fatima Aysha.
Microorganisms are the most numerous and diverse organisms on the planet and occupy virtually every known habitat. For these microbes, genotypic diversity is intimately linked to their ecology. As one of the main predators of bacteria, bacteriophages (phages) play an important ecological role in regulating the abundance and diversity of bacterial populations. Through highly specific predatory interactions, these bacteria-infecting viruses promote gene turnover in their hosts through frequency-dependent selection. As a result, phages are thought to be key drivers of the immense genetic diversity seen in microbial genomes. However, the impact of phages on bacterial diversification in the wild is poorly understood. This thesis examines virus-driven evolution of environmental microbes using bacteria of the Vibrio genus as a model. By isolating and sequencing the genomes of sympatric Vibrio strains and the viruses that infect them, we created a unique system to understand how viruses are impacting bacterial genomic evolution in nature. In the first study, we investigated the diversity and dynamics of lysogenic viruses, which integrate into the host genome as prophages, across Vibrio populations. By combining comparative genomics and lab-based inductions of lysogenic viruses from natural bacterial strains, we isolated numerous excisable prophages and mobile genetic elements, and found that transfer of prophages is more frequent among related hosts. In the second study, we investigated the evolution of resistance to viruses in bacteria at the resolution of clones. We found that viruses drive the rapid evolutionary turnover of novel phage-defense elements in bacteria, making them one of the strongest, if not the strongest, forces for near-term microbial evolution. Finally, we explored the abundance, diversity, and transfer dynamics of a particular set of lysogenic viruses, related to the newly-discovered Autolykiviridae, in Vibrio.
Together, this work sheds light on the rapid diversification of microbial genomes attributed to viruses and provides an ecologically-grounded perspective on the implications and applications of virus- and microbe-based therapies for environmental and human use.
Thesis: Ph. D. in Environmental Microbiology, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127325</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Device- and application-adapted quantum error correction</title>
<link>https://hdl.handle.net/1721.1/127314</link>
<description>Device- and application-adapted quantum error correction
Layden, David.
Precise control of coherent quantum systems could enable new generations of sensing, communication and computing technologies. Such systems, however, are typically noisy and difficult to stabilize. One promising technique to this end is called quantum error correction, which encodes quantum states in such a way that errors can be detected and corrected, much like in classical error-correcting codes. Quantum error-correcting codes usually cast a wide net, in that they are designed to correct errors regardless of their physical origins. In large-scale devices, this is an essential feature. It comes at a cost, however: conventional quantum codes are typically resource-intensive in terms of both the system size and the control operations they require. Yet, in smaller-scale devices the main error sources are often well-understood. In the near term, it may therefore be advantageous to cast a more targeted net through specialized codes. This thesis presents new families of such quantum error-correcting codes, which are adapted either for leading candidate devices, or for near-term applications. The device-adapted codes require exponentially less overhead than conventional codes to achieve the same level of protection, whereas the application-adapted codes can enhance quantum sensors, in which conventional codes cannot readily be used. The new techniques presented in this thesis adapt cornerstones of conventional theory in light of key experimental challenges and opportunities. The ultimate goal of this research is to help bridge the gap between the exacting requirements of proposed quantum technologies and the realities of emerging quantum devices. Bridging this gap is critical, if quantum technologies are to realize their full potential.
Thesis: Ph. D. in Quantum Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 185-194).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127314</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of a turbulence bifurcation underlying L-mode confinement transitions on Alcator C-Mod</title>
<link>https://hdl.handle.net/1721.1/127313</link>
<description>Characterization of a turbulence bifurcation underlying L-mode confinement transitions on Alcator C-Mod
Cao, Norman Ming-Chen.
Empirical energy confinement scalings play a crucial role in the design of tokamak fusion reactors, measuring how quickly energy is transported by turbulence from the fusion-producing core to conduction loss at the edge. Unfortunately, experiments often exhibit discontinuous changes in scaling behavior as the plasma parameters are varied, termed confinement transitions. Navigating these transitions requires an understanding of the physical origin and limits of confinement scalings, and is crucial for retiring the physics risk of extrapolating empirical results to future reactors. This thesis explores the connection between two universally observed transitions in tokamak transport: the Linear to Saturated Ohmic Confinement (LOC/SOC) transition and the concomitant intrinsic rotation reversal. Analysis and modeling of rotation reversal hysteresis experiments show that a single turbulent bifurcation underlies both transitions on Alcator C-Mod. Plasmas on either side of the reversal exhibit different toroidal rotation profiles and therefore different turbulence characteristics, despite profiles of density and temperature which are indistinguishable within measurement uncertainty. Elements of this bifurcation are also shown to persist for auxiliary heated L-modes. Within a reduced quasilinear transport model, the deactivation of subdominant (in linear growth rate and contribution to heat transport) ion temperature gradient (ITG) and trapped electron mode (TEM) instabilities is identified as the only possible change in turbulence across the reversal which is consistent with the measured profiles and inferred heat and particle fluxes.
Experimental constraints on a possible change from strong to weak turbulence, outside the description of the quasilinear model, are also discussed. These results indicate an explanation for the LOC/SOC transition that provides a mechanism for the hysteresis through the dynamics of subdominant modes and changes in their relative populations, and does not involve a change in the most linearly unstable ion-scale drift-wave instability. This work highlights the importance of considering the dynamics of the entire mode spectrum, and not just the dominant modes, in making predictions about transport and confinement regimes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 153-164).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127313</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple Monoenergetic Gamma Radiography (MMGR) using compact superconducting cyclotron</title>
<link>https://hdl.handle.net/1721.1/127312</link>
<description>Multiple Monoenergetic Gamma Radiography (MMGR) using compact superconducting cyclotron
Lee, Hin Yeung.
Smuggling of special nuclear materials (SNM) and nuclear devices through borders and ports of entry constitutes a major risk to global security. Reliable technologies are imperative for screening the flow of commerce for the presence of high-Z materials such as uranium and plutonium. This thesis presents an experimental proof-of-concept system using low energy (p, p&#8242;&#947;) nuclear reactions to generate monoenergetic photons, providing a means to measure the areal density and the effective atomic number (Zeff) of an object with accuracy that surpasses existing interrogation methods and other major deployed systems. This radiography system was designed using an ION-12SC compact superconducting 12 MeV proton cyclotron. Using a specially designed hybrid graphite water target, monoenergetic photons were generated at 4.4, 6.1, 6.9, and 7.1 MeV from (p, p&#8242;&#947;) nuclear reactions. By performing GEANT4 simulations and numerical integration on existing cross sections, the gamma yield from MMGR is shown to be comparable to the X-ray yield from a bremsstrahlung-based system, with the advantage of lower radiation dose using MMGR. In a series of MMGR experiments using 4.4, 6.1, 6.9, and 7.1 MeV gammas, the author measured gamma transmission spectra on a variety of homogeneous (Z from 13-92) and heterogeneous mock cargoes. With the newly developed reconstruction algorithm, the author accurately predicted the areal density and Zeff of the experimental cargoes with an average Zeff reconstruction accuracy of 3.7 and an uncertainty of 6.2. The experimental results were also used to perform extrapolation and performance estimations for a future theoretical deployable system with higher beam current and proton energy for improved reconstruction precision. In addition, a penetration study following the ANSI N42.46 standard was performed, demonstrating a maximum penetration thickness of 45 cm with a hypothetical beam current (14 [mu]A) and scanning speed (4 cm/s).
In conclusion, MMGR using a compact superconducting cyclotron was demonstrated to be a low-dose and mobile method to screen commercial cargoes with high material specificity, providing a means of distinguishing benign materials from SNM to prevent the smuggling of SNM and improve overall global security.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-161).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127312</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonization of Power Systems, Multi-Stage Decision-Making with Policy and Technology Uncertainty</title>
<link>https://hdl.handle.net/1721.1/127311</link>
<description>Decarbonization of Power Systems, Multi-Stage Decision-Making with Policy and Technology Uncertainty
Sepulveda, Nestor A.(Sepulveda Morales)
There is widespread agreement that "deep decarbonization" of the power sector, i.e., reduction of CO2 emissions to near or below zero, will be pivotal to climate change mitigation efforts. Nevertheless, given multi-decadal time horizons, planning for decarbonization must contend with uncertainty regarding technology costs, new technology characteristics and availability, and policies and incentives for reducing CO2 emissions in the multi-year adaptation process. At the same time, the increasing penetration of variable renewable energy, the availability of energy storage technologies, and the active participation of demand in electricity systems require the appropriate consideration of temporal and spatial resolution to properly account for the cost and value of different resources at the system level. New approaches are required to determine cost and value to the power systems of the 21st century. Conventional cost-based metrics (e.g., LCOE) are incapable of accounting for the indirect system costs associated with intermittent electricity generation, in addition to environmental and security constraints. Moreover, as recent research has shown, commonly used abstraction methods (sample hours, days or weeks selection methods) can provide inaccurate results by undervaluing some resources and overvaluing others. Hence, there is a need to account for greater detail at the operational level while also accounting for multidecadal scenarios and imperfect information regarding costs, technologies and policies, all within a framework that is able to capture the value-cost trade-off dynamics of electricity resources. This work develops a methodology to properly account for the value-cost dynamics at the system level for decarbonization of power systems. Using this methodology, it then explores two key questions for policy and decision makers. First, we study the role of firm low carbon resources for deep decarbonization of power generation. We find that availability of firm low-carbon technologies -- including nuclear, natural gas with carbon capture and sequestration, and bioenergy -- reduces electricity costs by 10-62% across fully decarbonized cases.
Then, we study the role of long duration energy storage (LDES) technologies for deep decarbonization. We find that the total cost of LDES must fall below 40 [$/kWh] for LDES technologies to reduce system cost by more than 10%, even in our best case scenario. Finally, we expand our methodology into a multi-year capacity expansion planning framework for power systems that is able to solve for the optimal investment strategy/pathway with respect to future policies such as CO2 limits and/or renewable energy mandates, while accounting for detailed operation at an hourly resolution over a full year as well as high-level uncertainty (e.g. policy commitment, technology availability, etc.). In its original form, the multi-year capacity expansion problem with hourly detail is computationally intractable using current methods and computational resources due to the increased number (millions) of variables and constraints that are involved in the problem. Our framework turns the problem into a computationally tractable one. This is accomplished by means of three decomposition methods at different levels and the integration of such methods into a single computational framework (FLIP). Stochastic Dual Dynamic Programming is used to break down the problem at the year level, iteratively passing information forwards and backwards across different years; Benders Partitioning is used to separate each yearly problem into a master investment problem and an operational problem, passing information upwards and downwards between the two levels; and Dantzig-Wolfe decomposition is used to separate the year-long operational problem into a simplified operational problem and many operational sub-problems (e.g. weekly), passing information across problems to coordinate the year coupling constraints (e.g. CO2 policy) iteratively to find the optimal operation.
This integrated framework requires solving orders of magnitude greater numbers of problems that are orders of magnitude smaller in size and complexity (number of variables and constraints down to thousands or hundreds from millions). At the same time, the framework allows for parallelization at different levels of the problem, allowing the user to harness high performance computing resources with greater flexibility. The framework is implemented using the Julia general purpose programming language and its mathematical programming extension JuMP.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 253-256).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127311</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic advancements in discrete optimization : applications to machine learning and healthcare operations</title>
<link>https://hdl.handle.net/1721.1/127298</link>
<description>Algorithmic advancements in discrete optimization : applications to machine learning and healthcare operations
Pauphilet, Jean(Jean A.)
In the next ten years, hospitals will operate like air-traffic control centers whose role is to coordinate care across multiple facilities. Consequently, the future of hospital operations will have three salient characteristics. First, data. The ability to process, analyze and exploit data effectively will become a vital skill for practitioners. Second, a holistic approach, since orchestrating care requires the concurrent optimization of multiple resources, services, and time scales. Third, real-time personalized decisions, to respond to the increasingly close monitoring of patients. To support this transition and transform our healthcare system towards better outcomes at lower costs, research in operations and analytics should address two concurrent goals: first, develop new methods and algorithms for decision-making in a data-rich environment, which answer key concerns from practitioners and regulators, such as reliability, interpretability, and fairness; second, put its models and algorithms to the test of practice, to ensure a path towards implementation and impact. Accordingly, this thesis comprises two parts. The first three chapters present methodological contributions to the discrete optimization literature, with particular emphasis on problems emerging from machine learning under sparsity. Indeed, the most important operational decision-making problems are by nature discrete, and their sizes have increased with the widespread adoption of connected devices and sensors. In particular, in machine learning, the gigantic amount of data now available contrasts with our limited cognitive abilities. Hence, sparse models, i.e., models which only involve a small number of variables, are needed to ensure human understanding.
The last two chapters present applications and implementations of machine learning and discrete optimization methods to improve operations at a major academic hospital. From raw electronic health records, we build predictive models of patient flows and prescriptive models to optimize patient-bed assignment in real time. More importantly, we implement our models in a 600-bed institution. Our impact is two-fold: methodological and operational. Integrating advanced analytics into their daily operations and building a data-first culture constitutes a major paradigm shift.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 235-253).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127298</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From data to decisions in urban transit and logistics</title>
<link>https://hdl.handle.net/1721.1/127296</link>
<description>From data to decisions in urban transit and logistics
Yan, Julia(Julia Y.)
Urban transit and city logistics have undergone major changes in recent years, including increased peak congestion, shrinking mass transit ridership, and the introduction of ride-sharing and micro-mobility platforms. At the same time, widespread data collection offers transit agencies insight into their riders in unprecedented detail. In this setting, data has the potential to inform decision-making and make a meaningful impact on problems of great public interest. This thesis concerns data-driven decision-making for public transit systems, and spans topics from demand estimation to the design and operation of fixed-route systems and paratransit. The first chapter is concerned with origin-destination demand estimation for public transit. Our aim is to estimate demand using aggregated station entrance and exit counts, which can be modeled as the problem of recovering a matrix from its row and column sums. We recover the demand by assuming that it follows intuitive physical properties such as smoothness and symmetry, and we contrast this approach both analytically and empirically with the maximum entropy method on real-world data. The next two chapters then use this demand data to inform strategic transit planning problems such as network design, frequency-setting, and pricing. These problems are challenging alone and made even more difficult by the complexity of commuter behavior. Our models address operator decision-making in the face of commuter preferences, and our approaches are based on column generation and first-order methods in order to model complex dynamics while scaling to realistic city settings. Finally, we explore tactical decision-making for paratransit. Paratransit is a government-mandated service that provides shared transportation for those who cannot use fixed routes due to disability. Although paratransit is an essential safety net, it is also expensive and requires large government subsidies.
These financial difficulties motivate us to develop large-scale optimization algorithms for vehicle routing in paratransit. We provide an optimization-based heuristic approach to servicing paratransit requests subject to labor constraints; this approach shows strong performance while also being tractable for several thousand daily requests.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-155).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127296</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable machine learning methods with applications to health care</title>
<link>https://hdl.handle.net/1721.1/127295</link>
<description>Interpretable machine learning methods with applications to health care
Wang, Yuchen.
With data becoming increasingly available in recent years, black-box algorithms like boosting methods or neural networks play increasingly important roles in the real world. However, interpretability is a pressing need in several areas of application, like health care or business. Doctors or managers often need to understand how models make predictions in order to make their final decisions. In this thesis, we improve and propose interpretable machine learning methods using modern optimization. We also use two examples to illustrate how interpretable machine learning methods help to solve problems in health care. The first part of this thesis is about interpretable machine learning methods using modern optimization. In Chapter 2, we illustrate how to use robust optimization to improve the performance of SVM, Logistic Regression, and Classification Trees for imbalanced datasets. In Chapter 3, we discuss how to find optimal clusters for prediction. We use real-world datasets to illustrate that this is a fast and scalable method with high accuracy. In Chapter 4, we deal with optimal regression trees with polynomial functions in leaf nodes and demonstrate that this method improves out-of-sample performance. The second part of this thesis is about how interpretable machine learning methods improve the current health care system. In Chapter 5, we illustrate how we use Optimal Trees to predict the risk of mortality for candidates awaiting liver transplantation. We then develop a transplantation policy called Optimized Prediction of Mortality (OPOM), which reduces mortality significantly in simulation analysis and also improves fairness. In Chapter 6, we propose a new method based on Optimal Trees which performs better than the original rules in identifying children at very low risk of clinically important traumatic brain injury (ciTBI). If this method is implemented in the electronic health record, the new rules may reduce unnecessary computed tomography (CT) scans.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 131-142).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127295</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic optimization in the age of big data</title>
<link>https://hdl.handle.net/1721.1/127292</link>
<description>Dynamic optimization in the age of big data
Sturt, Bradley Eli.
This thesis revisits a fundamental class of dynamic optimization problems introduced by Dantzig (1955). These decision problems remain widely studied in many application domains (e.g., inventory management, finance, energy planning) but require access to probability distributions that are rarely known in practice. First, we propose a new data-driven approach for addressing multi-stage stochastic linear optimization problems with unknown probability distributions. The approach consists of solving a robust optimization problem that is constructed from sample paths of the underlying stochastic process. As more sample paths are obtained, we prove that the optimal cost of the robust problem converges to that of the underlying stochastic problem. To the best of our knowledge, this is the first data-driven approach for multi-stage stochastic linear optimization problems which is asymptotically optimal when uncertainty is arbitrarily correlated across time. Next, we develop approximation algorithms for the proposed data-driven approach by extending techniques from the field of robust optimization. In particular, we present a simple approximation algorithm, based on overlapping linear decision rules, which can be reformulated as a tractable linear optimization problem with size that scales linearly in the number of data points. For two-stage problems, we show the approximation algorithm is also asymptotically optimal, meaning that the optimal cost of the approximation algorithm converges to that of the underlying stochastic problem as the number of data points tends to infinity. Finally, we extend the proposed data-driven approach to address multi-stage stochastic linear optimization problems with side information.
The approach combines predictive machine learning methods (such as K-nearest neighbors, kernel regression, and random forests) with the proposed robust optimization framework. We prove that this machine learning-based approach is asymptotically optimal, and demonstrate the value of the proposed methodology in numerical experiments in the context of inventory management, scheduling, and finance.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 241-249).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127292</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New optimization approaches to matrix factorization problems with connections to natural language processing</title>
<link>https://hdl.handle.net/1721.1/127291</link>
<description>New optimization approaches to matrix factorization problems with connections to natural language processing
Berk, Lauren Elizabeth.
In this thesis, we propose novel formulation optimization methods for four matrix factorization problems in depth: sparse principal component analysis, compressed sensing, discrete component analysis, and latent Dirichlet allocation. For each new formulations, we develop efficient solution algorithms using discrete and robust optimization, and demonstrate tractability and effectiveness in computational experiments. In Chapter 1, we develop a framework for matrix factorization problems and provide a technical introduction to topic modeling with examples. Chapter 2, Certifiably optimal sparse principal component analysis, addresses the sparse principal component analysis (SPCA) problem. We propose a tailored branch-and- bound algorithm, Optimal-SPCA, that enables us to solve SPCA to certifiable optimality.; We apply our methods to real data sets to demonstrate that our approach scales well and provides superior solutions compared to existing methods, explaining a higher proportion of variance and permitting more control over the desired sparsity. Chapter 3, optimal compressed sensing in submodular settings, presents a novel algorithm for compressed sensing that guarantees optimality under submodularity conditions rather than restricted isometry property (RIP) conditions. The algorithm defines submodularity properties of the loss function, derives lower bounds, and generates these lower bounds as constraints for use in a cutting planes algorithm. The chapter also develops a local search heuristic based on this exact algorithm. Chapter 4, Robust topic modeling, develops a new form of topic modeling inspired by robust optimization and by discrete component analysis.; The new approach builds uncertainty sets using one-sided constraints and two hypothesis tests, uses alternating optimization and projected gradient methods, including Adam and mirror descent, to find good local optima. 
In computational experiments, we demonstrate that these models are better able to avoid over-fitting than LDA and PLSA, and result in more accurate reconstruction of the underlying topic matrices. In Chapter 5, we develop modifications to latent Dirichlet allocation to account for differences in the distribution of topics by author. The chapter adds author-specific topic priors to the generative process and allows for co-authorship, providing the model with increased degrees of freedom and enabling it to model an enhanced set of problems. The code for the algorithms developed in each chapter, written in the Julia language, is freely available on GitHub at https://github.com/lauren897
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 245-260).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127291</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The empirical relevance of metaphysics</title>
<link>https://hdl.handle.net/1721.1/127150</link>
<description>The empirical relevance of metaphysics
Builes, David(David Alan)
Are metaphysical debates relevant to ordinary empirical inquiry? This dissertation collects a series of papers that answer in the affirmative. The first part of the dissertation is concerned with inductive inference. I argue that we shouldn't expect the world to be amenable to induction if orthodox versions of Humeanism or Non-Humeanism are correct. I then develop and defend a hybrid view, a 'Humean Non-Humeanism', which has a better hope of vindicating inductive inference. The second part of the dissertation is concerned with self-locating belief. While puzzles regarding self-locating belief are often motivated by certain fanciful thought experiments, it has recently been argued that the epistemology of self-locating belief is of central concern to many of the deepest questions in fundamental physics, including the interpretation of quantum mechanics, large-scale cosmology, and the (alleged) fine-tuning of the universe. I begin by arguing that the correct epistemology of self-locating belief is also relevant to classic debates in the metaphysics of time. By exploiting the fact that different theories in the metaphysics of time classify different sorts of facts as 'merely indexical' facts, it can be shown that different views in the metaphysics of time make different empirical predictions. I then turn to argue for the correct epistemology of self-locating belief on metaphysical grounds. I first argue for some conditional claims: if one holds certain (controversial) metaphysical views regarding the nature of objects, properties, and identity across time, then one should uphold a particular theory of self-locating belief. I then go on to argue for an overall metaphysical picture that vindicates these views concerning the nature of objects, properties, and identity across time.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127150</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cyanobacterial evolution and interactions with the Proterozoic world</title>
<link>https://hdl.handle.net/1721.1/127144</link>
<description>Cyanobacterial evolution and interactions with the Proterozoic world
Moore, Kelsey Reed.
Our understanding of the biosphere prior to the rise of complex life is built largely upon microbial mat structures and some exceptionally well-preserved microbial fossils from the Proterozoic (2500 to 540 million years ago). Some of these exceptional fossils are identifiable as cyanobacteria that were preserved by pyrite, amorphous silica (SiO₂) and other minerals. Although a record of these organisms exists, the sparse nature of fossil assemblages and the simplicity of many Proterozoic fossil morphologies make it difficult to identify specific taxa or create a complete picture of the ancient biosphere and how it interacted with the early Earth. Cyanobacteria are thought to have evolved early in Earth history and played a large part in shaping the ancient biosphere and geosphere, but questions remain about their evolution and the ways in which cyanobacterial communities interacted with the Earth during the Proterozoic Eon. In this thesis, I seek to build a more complete understanding of the record of Proterozoic cyanobacteria, their responses to environmental perturbations, and the chemical conditions and microbe-mineral interactions that characterized the Proterozoic marine realm. I begin by investigating the evolutionary relationships between different cyanobacterial lineages and their relationship to chloroplasts. I then analyze an assemblage of pyritized cyanobacteria that were preserved during the Cryogenian and provide a record of primary productivity in the oceans following a global glaciation.
Finally, I investigate the factors that enabled the fossilization of some exceptionally preserved cyanobacteria and the implications of these mechanisms for cyanobacterial biochemistry, chemical conditions, and interactions between microbes and Proterozoic tidal environments. The combined molecular, fossil and experimental insights allow us to go beyond morphological interpretations of microbial fossils and build a more complete understanding of the evolutionary history of cyanobacteria, the types of cyanobacteria that were preserved during the Proterozoic, the responses of these cyanobacteria to environmental stresses and the interactions of those cyanobacteria with the evolving seawater chemistry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127144</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying uncertainties and trends in the climate change trajectory</title>
<link>https://hdl.handle.net/1721.1/127143</link>
<description>Quantifying uncertainties and trends in the climate change trajectory
Lickley, Megan Jeramaz.
The characterization of climate change depends on the location and rate of change, while its impacts on nature and society also depend on vulnerabilities. This thesis contributes to the quantification of the uncertainties, drivers, spatial variability, and impacts of the climate change trajectory. The results of this work draw on a range of data science techniques that combine observations and Earth models, aimed at informing adaptation and mitigation policies. In the first chapter, the drivers, timing, and impacts of aridity change over the 21st century are assessed using an ensemble of general circulation models (GCMs) together with population statistics. Results indicate that drier regions are projected to dry earlier, more severely and to a greater extent than humid regions, a result driven by differential changes in precipitation across aridity zones. Impacts are exacerbated because arid regions (such as the Mediterranean) are more populated and are experiencing much higher population growth than humid regions (which include the Arctic). Under an unconstrained emissions scenario, GCMs project that most of humanity will live in a more arid climate by the end of the 21st century. In the second chapter, the southern African rainfall (SAR) response to sea surface temperature (SST) anomalies in the Indian Ocean, Atlantic Ocean and Niño 3.4 region is examined, using observations and three large ensembles of GCMs run over the 20th and 21st centuries. Some previous studies suggested that the Indian Ocean dominated changes in SAR. In this chapter, Niño 3.4 SSTs are found to be most strongly correlated with SAR, while correlations between SAR and the Indian Ocean are dominated by their respective responses to Niño 3.4.
GCMs project that this relationship persists under a warming background state. In the third chapter, the end of rapid warming is examined by considering emissions trajectories in which atmospheric greenhouse gas concentrations ([GHG]) are stabilized. Under such scenarios, the rate of global temperature increase eventually steadies at a level significantly lower than that of the 21st century. I present a framework for defining the beginning of this 'Time of Steady Change' (TSC) and, with the use of GCM ensembles, evaluate the spatial variability of TSC. Results indicate that TSC occurs latest in low latitudes and in the Arctic, despite these areas steadying at very different absolute warming rates. These broad patterns are robust across multiple GCM ensembles and alternative definitions of TSC. The fourth chapter contributes to the measurement and analysis of sea level change. As an ice sheet rapidly melts, it produces a unique geometry of sea level change driven by perturbations in the height of the sea and crustal surfaces. While satellite altimeters only measure changes in the sea surface height (SSH), local impacts of changes in sea level depend on both changes in SSH and changes in the solid surface. The literature commonly conflates the two estimates by directly comparing them. Here I quantify the error incurred by conflating changes in SSH with changes in sea level for various ice mass flux scenarios. Results indicate that using satellite altimetry records to estimate global ocean volume changes can lead to biases that can exceed 15%, and that the level of bias depends on the relative contributions to sea level change from the Antarctic and Greenland Ice Sheets.
The final chapter of this thesis provides a probabilistic quantification of the chlorofluorocarbons (CFCs) that were banked in old equipment and continue to be released, contributing to global CFC emissions. A Bayesian probabilistic model is developed to quantify banks and emissions of CFC-11, -12, and -113, incorporating the broadest range of constraints to date. Implied bank sizes of CFC-11 and CFC-12 are larger than recent international scientific assessments suggest, and can account for much of the current estimated CFC-11 and CFC-12 emissions (with the exception of increased CFC-11 emissions after 2012). If current banks are left unrecovered, their future emissions could delay polar ozone hole recovery by about six years and contribute 9 billion metric tonnes of CO₂-equivalent emissions. While observationally derived CFC-113 emissions are subject to uncertainty, they are too large to explain from banks, raising questions about the sources of this gas as well.
Thesis: Ph. D. in Climate Science, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 159-172).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127143</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Climate system response to perturbations : role of ocean and sea ice</title>
<link>https://hdl.handle.net/1721.1/127142</link>
<description>Climate system response to perturbations : role of ocean and sea ice
Gupta, Mukund.
When the Earth experiences a perturbation in its radiative budget, the global ocean can buffer climate change, while sea ice may amplify its effects via a positive albedo feedback. It is therefore of interest to consider the role of the ocean in the climate's response to changes in external forcing, such as volcanic eruptions, Snowball Earth initiation and rearrangements of the carbon cycle. The first part of this thesis isolates the impact of the deep ocean on the surface response to volcanic cooling. Relaxation of the surface temperature follows a two-timescale decay, because ocean heat exchange is significantly stronger than climatic feedbacks. Sequestration of this cooling in the deep ocean helps explain long periods of cold climate, such as those that occurred during the Little Ice Age. The second part explores the volcanic forcing required to initiate state transitions in a GCM with multiple climate equilibria. Snowball transitions require cooling on the order of -100 W m⁻² for several decades. These transition timescales are a consequence of the whole water column needing to be cooled to the freezing point before sea ice develops at the surface. The third part investigates biogeochemical interactions between the ocean and sea ice around Antarctica. During the glacial cycles of the Pleistocene, sea ice may have helped trap carbon in the ocean by inhibiting CO₂ outgassing. This work shows that such flux capping may be weakened by the effect of sea ice on reducing the light available for biological productivity. Consequently, a large sea ice fraction is required to effectively cap the flux of carbon to the atmosphere.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 169-187).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127142</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging and interpreting seismic heterogeneity in the North American lithosphere</title>
<link>https://hdl.handle.net/1721.1/127140</link>
<description>Imaging and interpreting seismic heterogeneity in the North American lithosphere
Golos, Eva Marie.
Interpretation of seismic wavespeed anomalies inferred from global and continental-scale tomographic models is complicated by the competing effects of temperature and chemical composition. Understanding the origin of a seismic anomaly requires constraining multiple seismic parameters and quantifying how they are influenced by thermal and geochemical variations. In this thesis I jointly interpret inverse modeling of seismic data and forward modeling of chemical and thermodynamic data in order to investigate the origins of seismic heterogeneity within the cratonic lithosphere. I present a joint inversion of body and surface wave travel-times to determine independent but mutually constrained variations in Vₚ and Vₛ within the continental United States. From this, the Vₚ/Vₛ ratio may be determined, which is sensitive to compositional changes such as Fe-Mg substitution and Si enrichment. The seismically inferred wavespeeds are compared to predictions of Vₚ and Vₚ/Vₛ made from forward modeling of mantle rock compositions at a range of temperature and pressure conditions. By combining both forms of modeling it is possible to identify where thermal or compositional factors dominate the seismic and density structure. The first-order seismic structure within the North American lithosphere may be attributed to variations in temperature, but in certain regions compositional anomalies must be invoked. Subsequently, this framework is applied to questions related to the cratonic lithosphere. Compared to younger orogenic belts, Archean cratons have a relatively Fe-depleted composition and low temperatures, the latter of which are sustained by a thick lithosphere.
Finally, I investigate two anomalies within the North American craton that have been affected by mantle plumes. Plumes influence the lithosphere in several ways, including thermal perturbations as well as the emplacement of compositionally distinct plume material into the lithosphere. The structure of the lithosphere at the time of plume passage influences how these interactions are manifested. The ways in which the continental lithosphere can be altered therefore depend on its initial properties as well as on its history, and both must be considered to make a full geologic interpretation.
Thesis: Ph. D. in Geophysics, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 235-273).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127140</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disassembly of electron transport chain complexes drives macrophage TLR responses by reprogramming metabolism and translation</title>
<link>https://hdl.handle.net/1721.1/127139</link>
<description>Disassembly of electron transport chain complexes drives macrophage TLR responses by reprogramming metabolism and translation
Su, Yang
The metabolic switch from oxidative phosphorylation (OxPhos) to glycolysis is a key feature of the inflammatory response in macrophages, but how this switch occurs in response to inflammatory signals and how precisely it contributes to macrophage function remain obscure. Here we show that stimulation of macrophages through Toll-like receptors (TLRs) disrupts the assembly of mitochondrial electron transport chain (ETC) complexes I-V, leading to the metabolic switch by inhibiting OxPhos and activating HIF-1α-dependent glycolysis. Disassembly of ETC complexes influences the global metabolic status of macrophages not only by inducing glycolysis but also, to a large extent, by reprogramming cellular translational capacity via mTORC1 and ATF4, leading to an enhanced global translation rate, cell growth, and production of inflammatory cytokines. Inhibition of OxPhos via myeloid-specific knockout of OPA1, which stimulates ETC complex assembly, exacerbates sepsis in mice, while inhibition of mTORC1 reverses this effect. These findings reveal that disassembly of ETC complexes underlies the macrophage metabolic switch and inflammatory responses, and may represent a conserved pathway for reprogramming cellular anabolism and function.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127139</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A nanobody suite for yeast scaffold nucleoporins provides details of the Y complex structure and nuclear pore complex assembly</title>
<link>https://hdl.handle.net/1721.1/127138</link>
<description>A nanobody suite for yeast scaffold nucleoporins provides details of the Y complex structure and nuclear pore complex assembly
Nordeen, Sarah Ann.
Nuclear pore complexes (NPCs) are the main conduits for molecular exchange across the nuclear envelope. The NPC is a modular assembly of ~500 individual proteins, called nucleoporins or nups, that can be classified into three categories: (1) stably associated scaffolding nups, (2) peripheral nups, and (3) phenylalanine-glycine (FG) repeat-containing nups that form the permeability barrier of the NPC. Most scaffolding nups are organized in two multimeric subcomplexes, the Nup84 or Y complex and the Nic96 complex. Working in S. cerevisiae to study the assembly of these two essential subcomplexes, we developed a suite of twelve nanobodies that recognize seven constituent nucleoporins of the Y and Nic96 complexes. The nanobodies bind their targets specifically and with high affinity, albeit with varying kinetics. We mapped the epitopes of eight members of the nanobody library via crystal structures of nup-nanobody co-complexes.
In two cases, the nanobodies facilitated the crystallization of novel nup structures, namely the full-length Nup84-Nup133 α-helical domain structure and the Nup133 β-propeller domain structure. Together these two structures completely characterize the S. cerevisiae Y complex molecular assembly. Further, the Nup133 β-propeller domain contains a structurally conserved amphipathic lipid packing sensor (ALPS) motif thought to anchor the Y complex to the nuclear envelope, which we confirmed by liposome interaction studies. An additional nanobody facilitated the structure determination of Nic96 at an improved resolution, revealing previously missing helices. In addition to the utility of these nanobodies for in vitro characterization of NPC assemblies, we also show that expression of nanobody-fluorescent protein fusions reveals details of NPC assembly in its native, in vivo environment, and possibly of NPC heterogeneity within the nuclear envelope. Overall, this suite of nanobodies provides a unique and versatile toolkit for the study of the NPC.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127138</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of DnaA as a transcription factor by modulation of cooperative binding, and by arrA, an antisense RNA</title>
<link>https://hdl.handle.net/1721.1/127137</link>
<description>Regulation of DnaA as a transcription factor by modulation of cooperative binding, and by arrA, an antisense RNA
Sedivy, Emma L.(Emma Louise)
DnaA is both the bacterial replication initiator and a transcription factor. Both activities are highly conserved and closely regulated. The mechanisms regulating its replication initiation activity have been well studied: most organisms regulate both the abundance of DnaA protein and its activity through a combination of evolutionarily divergent mechanisms. However, the mechanisms governing its role as a transcription factor are poorly understood, especially in Gram-positive organisms like Bacillus subtilis. I describe the role of a small antisense RNA in the dnaA region, arrA, which represses dnaA. Mutation of the arrA promoter resulted in increased DnaA levels that were insufficient to disrupt replication initiation but affected cellular physiology and the expression of DnaA targets. In particular, arrA was required for proper sporulation. I also asked whether two of the previously described mechanisms regulating DnaA as the replication initiator in B. subtilis affect its activity as a transcriptional repressor. I found that forms of DnaA that are inhibited for replication initiation are still active transcriptional repressors. This finding raises the possibility that some DnaA targets are regulated in a cell-cycle-dependent manner, whereas others are repressed regardless of replication status.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 82-96).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127137</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of synaptotagmin 7 function in neurotransmission and its subcellular localization at synapses</title>
<link>https://hdl.handle.net/1721.1/127136</link>
<description>Characterization of synaptotagmin 7 function in neurotransmission and its subcellular localization at synapses
Quiñones-Frías, Mónica C.(Mónica Cristina)
Synaptic vesicle (SV) fusion is dependent on proteins that can sense Ca²⁺ and trigger fusion with the plasma membrane. Neurotransmitter release occurs during a rapid synchronized phase of SV fusion mediated by the Ca²⁺ sensor Synaptotagmin 1 (SYT1). A slower, SYT1-independent asynchronous phase is also present at many synapses and has been hypothesized to be mediated by another Synaptotagmin, SYT7. To determine whether SYT7 plays an evolutionarily conserved role as an asynchronous Ca²⁺ sensor, we used the CRISPR-Cas9 system to generate mutations in the Syt7 locus and introduced tags to label the endogenous protein in Drosophila. Electrophysiology, FM1-43 analysis and quantal imaging revealed that release probability is elevated 2-fold at larval neuromuscular junctions (NMJs) in Syt7 mutants. No structural changes were identified that could contribute to the elevated evoked response. Syt1/Syt7 double mutants also display more release than Syt1 mutants alone, indicating SYT7 is not the asynchronous release Ca²⁺ sensor. Syt7 mutants display a larger pool of releasable vesicles during high-frequency stimulation and a faster recovery of releasable SVs following stimulation, suggesting SYT7 is likely to regulate SV trafficking. Endogenously tagged SYT7 localizes to a presynaptic membrane compartment called the periactive zone that has been implicated in SV endocytosis and recycling. SYT7 forms an internally connected presynaptic membrane compartment that surrounds and contacts a host of other intracellular compartments, including endosomes, ER and lysosomes. In addition to regulating asynchronous release, SYT7 is also known to regulate facilitation and vesicle replenishment.
Heterogeneity of SYT7 functions across neurons could arise from posttranslational modification of SYT7 at synapses or from differential expression of SYT7 across different neuronal populations. The Drosophila NMJ serves as an ideal model synapse for studying how SYT7 regulates SV fusion in different neuronal types because muscle contraction is regulated by two glutamatergic motor neuron populations that exhibit tonic and phasic electrophysiological properties. Preliminary data suggest that SYT7 levels might differentially regulate release probability in tonic and phasic neurons at NMJs. In addition, initial structure-function studies of SYT7's C2 domains suggest they redundantly aid in trafficking SYT7 to nerve terminals, but are also required for normal stability of the protein.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127136</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functions of alternative ClpP subunits in Pseudomonas aeruginosa</title>
<link>https://hdl.handle.net/1721.1/127135</link>
<description>Functions of alternative ClpP subunits in Pseudomonas aeruginosa
Mawla, Gina D. (Gina Danielle), Ph.D., Massachusetts Institute of Technology
Proteolysis is the process by which proteins are broken down, or hydrolyzed, into small peptides or amino acids by enzymes. Cells from all forms of life carry out regulated protein degradation as a way to control cellular physiology and regulate stress responses. Clp proteases, containing an AAA+ (ATPases Associated with various cellular Activities) unfoldase stacked with a compartmentalized peptidase, are central to bacterial proteolysis, and use the energy of ATP hydrolysis to unfold and translocate protein substrates into the peptidase chamber for their destruction. The opportunistic pathogen Pseudomonas aeruginosa is unusual in that it contains two isoforms of the subunits that form the ClpP peptidase chamber. These isoforms, PaClpP1 and PaClpP2, have not been well characterized previously and their specific functions remain largely elusive. This work examines the structures and functions of PaClpP1 and PaClpP2 and proposes a model for functional peptides generated by these enzymes in P. aeruginosa development. Biochemical analysis establishes that PaClpP2 is only active as a peptidase when it is part of a PaClpP1₇P2₇ heterocomplex. Furthermore, multiple lines of evidence support the conclusion that P. aeruginosa cells have two distinct ClpP peptidase assemblies: PaClpP1₁₄ and PaClpP1₇P2₇. Importantly, peptidase and protease analyses establish that these two ClpP assemblies exhibit distinct peptide cleavage specificities and interact differentially with the AAA+ unfoldases ClpX and ClpA. Finally, the PaClpP2 peptide-cleavage active site uniquely contributes to P. aeruginosa biofilm development. The results presented in this thesis therefore suggest that within AAA+ proteases, the specificity of the peptidase subunits, and not only the recognition properties of the AAA+ unfoldase, controls the biological outcome(s) of proteolysis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127135</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the cytoplasmic role of E2F4 in multiciliogenesis</title>
<link>https://hdl.handle.net/1721.1/127133</link>
<description>Investigating the cytoplasmic role of E2F4 in multiciliogenesis
Hazan, Renin.
The E2F family of transcription factors plays essential transcriptional roles in many cellular processes including proliferation and terminal differentiation. Transgenic mouse models have established that E2F4 is necessary for multiciliogenesis in the airway epithelia. Here we show that E2F4 plays two distinct roles in multiciliogenesis. In early stages, it functions in the nucleus to transcriptionally activate centriole biogenesis genes required for cilia formation. Subsequently, E2F4 relocates to the cytoplasm and colocalizes with the early components of deuterosome complexes, which enable the large-scale amplification of basal bodies, from which cilia are assembled. Reconstitution experiments using E2f4-deficient tracheal precursor cells in an in vitro differentiation assay showed that both nuclear and cytoplasmic forms of E2F4 are essential for multiciliogenesis. Our biochemical analyses showed that E2F4 associates with two distinct components of the deuterosome complex, Deup1 and SAS6. We found that these proteins use distinct motifs to interact with E2F4: a coiled-coil domain in Deup1, and the PISA domain/motif II in SAS6. However, the same amino-terminal region of E2F4, residues 1-197, is necessary and sufficient to bind both Deup1 and SAS6. Importantly, in vitro reconstitution and differentiation experiments showed that E2F4¹⁻¹⁹⁷ is sufficient to perform E2F4's cytoplasmic role in multiciliogenesis. The previously reported redundancy between E2F4 and E2F5 in multiciliogenesis led us to investigate whether other E2Fs associate with the deuterosome components. This showed that Deup1 and SAS6 also associate with E2F5, but not E2F1.
Guided by the crystal structure of E2F4 and protein sequence comparison, we narrowed down the Deup1 and SAS6 interaction domains within E2F4. This identified residues 48-53 of E2F4 as central to both Deup1 and SAS6 binding, but not required for E2F4's interaction with its classic dimerization partner, DP1, arguing that these residues contribute to a specific Deup1 and SAS6 interaction motif rather than affecting structural integrity. Taken together, these data identify a novel cytoplasmic role for E2F4 and E2F5 in the differentiation of multiciliated cells, which likely reflects interaction with core components of the deuterosome complex to enable the amplification of basal bodies, from which cilia are assembled.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127133</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A quantitative view of Y-chromosome gene expression across the human body</title>
<link>https://hdl.handle.net/1721.1/127132</link>
<description>A quantitative view of Y-chromosome gene expression across the human body
Godfrey, Alexander Kamitsuka.
Human Y-chromosome genes have long been known to play pivotal roles in two biological processes--sex determination and spermatogenesis. Recent studies have uncovered evidence that Y-chromosome genes also perform important functions beyond the reproductive tract. Little is known about the roles of Y-chromosome genes in these contexts, or how their expression might directly lead to differences between XX (female) and XY (male) individuals. This thesis presents a quantitative analysis of human Y-chromosome gene expression across 36 adult tissues collected from hundreds of individuals. Compared to past efforts, this work greatly expands the number of tissues in which Y-chromosome gene expression has been measured and offers a richly quantitative picture that could not previously be achieved. In contrast to traditional views of the Y chromosome, we show that Y-chromosome genes are abundantly expressed in a variety of tissues outside the reproductive tract. We additionally find that regulatory-sequence differences between the X and Y chromosomes can lead to Y-chromosome-driven, male-biased expression of critical regulatory genes. In one notable example, evolutionary loss of a conserved microRNA site on the Y chromosome enabled a Y-linked copy of eukaryotic initiation factor 1A, EIF1AY, but not its X-linked homolog EIF1AX, to be highly expressed in the heart. As a result, this essential translation initiation factor is nearly twice as abundant in male as in female heart tissue at the protein level. We were able to arrive at these conclusions through careful application of analytic and experimental methods suited specifically to the Y chromosome's complexity; guidelines for the selection and use of these methods are discussed. Taken as a whole, these efforts shed new light on the Y chromosome's evolution and possible roles in sex differences and suggest promising future avenues for Y-chromosome research.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127132</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and functional investigations of mechanisms of iron-utilizing enzymes</title>
<link>https://hdl.handle.net/1721.1/127130</link>
<description>Structural and functional investigations of mechanisms of iron-utilizing enzymes
Jonnalagadda, Rohan.
Biological systems use iron as a key cofactor to catalyze a variety of difficult chemical transformations, particularly as reservoirs from which to initiate enzymatic radical chemistry. Here, I discuss efforts to study the allosteric mechanisms of an essential human iron- and free-radical-utilizing enzyme, ribonucleotide reductase (RNR), and efforts to elucidate the radical-based mechanism of isonitrile formation by a mononuclear nonheme iron(II) α-ketoglutarate dependent dioxygenase, ScoE. The first part of this thesis focuses on mononuclear nonheme iron(II) α-ketoglutarate dependent dioxygenases: enzymes which use molecular oxygen, α-ketoglutarate, and a mononuclear Fe(II) cofactor to initiate substrate radical chemistry and catalyze diverse chemistries such as hydroxylations, halogenations, and desaturations. Chapter 1 of this thesis describes the current understanding of the mechanisms of these enzymes. Chapter 2 describes biochemical and structural efforts to determine the catalytic mechanism of one of these enzymes: the isonitrile-forming enzyme ScoE. In Chapter 3 of this thesis, I discuss the structure, function, and allosteric regulation of RNR, the sole known enzyme capable of catalyzing the de novo biosynthesis of deoxyribonucleotides in all organisms for use in DNA replication and repair. Chapter 4 concerns efforts to develop a procedure for improved recombinant protein yield of human RNR, and Chapter 5 describes the development of a new liquid chromatography-tandem mass spectrometry based assay for RNR activity that we propose holds promise for simultaneous detection of RNR activity on multiple substrates.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127130</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Form and function of Poly(A) tails</title>
<link>https://hdl.handle.net/1721.1/127129</link>
<description>Form and function of Poly(A) tails
Eisen, Timothy J. (Timothy Jonas)
Central to mRNA metabolism is the poly(A) tail, a stretch of adenosine nucleotides at the mRNA 3' end. In this dissertation, I investigate the role of the tail in the dynamics of mRNA decay, and describe the predominant mechanisms of decay for thousands of mammalian mRNAs. Next, I examine the effects of microRNAs, which influence mRNA decay and perturb tail length dynamics. Finally, I describe a physiological context in which the tail helps to control translation: neurons of the mouse brain. mRNA decay is tightly regulated in eukaryotes, determining the steady-state abundances and rates of accumulation of mRNAs. Despite this central role, the dynamics of decay have been described for only a handful of mRNAs. We determine these dynamics for thousands of endogenous mRNAs. Nascent mRNAs have reproducible and heterogeneous tail lengths just after they escape the nucleus. Once in the cytoplasm, most mRNAs are substrates for deadenylation, the rates of which vary by over 1000-fold, a range sufficiently large to capture the variation in mRNA decay rates. Surprisingly, once their tails become short, mRNAs decay at rates that also span a 1000-fold range. Moreover, these rates are coupled to their deadenylation rates, suggesting a concerted process of remodeling the mRNA-protein complex during decay. MicroRNAs (miRNAs) are small RNAs that influence decay of mRNA targets by recruiting deadenylases. Despite this recruitment, we observe no changes to steady-state tail length for miRNA targets. Resolving this paradox, we find that miRNAs not only deadenylate their targets but also increase the decay rate of short-tailed target molecules. By enhancing both rates, miRNAs do not alter the distribution of tail lengths of target mRNAs but enhance the rate at which mRNAs traverse these lengths. Neurons have unique requirements for translational control. We perform ribosome profiling in primary neuronal cultures and brain tissues from a mouse. 
mRNA poly(A)-tail lengths explain some (~5%) of the large variance in translational efficiency we observe, as do coding-sequence length, expression level, and codon composition. For some mRNAs, neuronal stimulation modifies tail lengths, and for a subset, transcription cannot explain these changes. A linear model that uses known determinants to predict translational efficiency explains only a portion (30-40%) of its variance, indicating the need for further investigation of the mechanisms of translation in neurons.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127129</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural characterization of glycyl radical enzymes in the human gut microbiome</title>
<link>https://hdl.handle.net/1721.1/127128</link>
<description>Structural characterization of glycyl radical enzymes in the human gut microbiome
Dawson, Christopher Daniel.
Anaerobic bacteria play important roles in the human gut microbiome and have dedicated chemical pathways for growth in the absence of oxygen. Glycyl radical enzymes (GREs) use an oxygen-sensitive glycyl radical cofactor to perform challenging, radical-based chemistry. Generation of this cofactor requires a dedicated GRE-activating enzyme (GRE-AE). This thesis presents structural analysis of one GRE involved in sulfur metabolism, structural and biochemical analysis of another GRE involved in nucleotide metabolism, and efforts towards structural characterization of a GRE-AE. C-S bond cleavage of isethionate by the GRE isethionate sulfite-lyase (IslA) generates sulfite, a substrate for sulfite respiration that in turn produces the disease-associated metabolite hydrogen sulfide. In this thesis, I present and describe an X-ray crystal structure of IslA from Bilophila wadsworthia with isethionate bound. In comparison to other GREs, IslA uniquely positions active site beta strands and residues to create a highly tailored active site for the binding of the negatively charged isethionate substrate. Through kinetic analysis of thirteen site-directed IslA variants, we probe the mechanism by which radical chemistry is used for C-S bond cleavage. This work further elucidates the structural basis of chemistry within the GRE superfamily, working towards structure-based inhibitor design of IslA and thus of microbial sulfide production. Another GRE, anaerobic ribonucleotide reductase (NrdD), which generates the deoxyribonucleotide building blocks necessary for DNA replication and repair, is often regulated to prevent deleterious effects to the microbe. 
I describe unpublished structural data on an NrdD from Streptococcus thermophilus (StNrdD) and present data suggesting that this activity regulation may involve certain intramolecular conformational changes. The StNrdD activating enzyme (StNrdG) appears to lack essential features of a GRE-AE and presents an interesting structural case study for understanding glycyl radical activation. I present purification and reconstitution protocols for StNrdG and discuss efforts towards crystallization. Altogether, this work explores the wide diversity of chemical reactions afforded by the GRE fold to enable anaerobic bacteria to perform fundamental chemical transformations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127128</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activin signaling controls a wound-induced program essential for regenerative patterning</title>
<link>https://hdl.handle.net/1721.1/127124</link>
<description>Activin signaling controls a wound-induced program essential for regenerative patterning
Cloutier, Jennifer K. (Jennifer Kruse)
A central problem in animal regeneration is how animals determine what body part to regenerate. Planarians are flatworms that can regenerate any missing body part, and are studied to identify mechanisms underlying regeneration. At transverse amputation planes, a poorly understood mechanism specifies regeneration of either a head or a tail. This head-versus-tail decision-making process is referred to as regeneration polarity and has been studied for over a century to identify mechanisms that specify what to regenerate. The Wnt antagonist gene notum is robustly induced within hours after injury, preferentially at anterior-facing wounds, where it specifies head regeneration. We report that Activin signaling is required for regeneration polarity and for the underlying asymmetric activation of notum at anterior-facing wounds. We propose that Activin signaling is broadly involved in regeneration-specific responses across the animal kingdom. Planarian patterning requires signaling from specific subsets of muscle cells. Furthermore, several of these subsets have been shown to express specific transcription factors whose inhibition results in specific patterning phenotypes. Muscle heterogeneity and function in regeneration can be further studied through optimized single-cell sequencing datasets. We report an improved 10x-based planarian muscle cell scRNA-seq dataset that predicts novel transcription factors associated with muscle cell heterogeneity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127124</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized power systems : reference-frame theory and stability region generation</title>
<link>https://hdl.handle.net/1721.1/127082</link>
<description>Decentralized power systems : reference-frame theory and stability region generation
O'Rourke, Colm J.
Electricity provides the foundation for many of today's technological advances. The desire for energy security, a reduction in carbon dioxide emissions, and a diversification of resources are all motivations for changes in how electricity is generated and transmitted. Recent alternatives to traditional centralized power plants include technologies that are decentralized and intermittent, such as solar photovoltaic and wind power. This trend poses considerable challenges in the hardware making up these systems, the software that controls and monitors power networks, and their mathematical modelling. This thesis presents a set of contributions that address some of the aforementioned challenges. Firstly, we examine the fundamental theories used in modelling and controlling power systems. We expand previous work on reference-frame theory by providing an alternative interpretation and derivation of the commonly used Park and Clarke transformations. We present a geometric interpretation that has applications in power quality. Secondly, we introduce a framework for producing regions of stability for power systems using conditional generative adversarial neural networks. This provides transmission and distribution operators with an accurate set of control options even as the system changes significantly.
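As background for the reference-frame theory discussed above, a minimal sketch of the Clarke and Park transformations in their common amplitude-invariant convention. The convention and the standalone functions are assumptions for illustration; the thesis develops its own derivation and geometric interpretation.

```python
import math

def clarke(a, b, c):
    """Amplitude-invariant Clarke transform: three-phase abc -> stationary alpha-beta."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (1.0 / math.sqrt(3.0)) * (b - c)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: rotate the stationary alpha-beta vector into the rotating dq frame."""
    d = math.cos(theta) * alpha + math.sin(theta) * beta
    q = -math.sin(theta) * alpha + math.cos(theta) * beta
    return d, q

# A balanced three-phase sinusoidal set maps to a constant vector in the dq
# frame: d equals the phase amplitude and q is zero, for every angle theta.
for theta in (0.0, 0.7, 2.1):
    a = math.cos(theta)
    b = math.cos(theta - 2.0 * math.pi / 3.0)
    c = math.cos(theta + 2.0 * math.pi / 3.0)
    d, q = park(*clarke(a, b, c), theta)
```

In the geometric picture, Clarke projects the three-phase vector onto a plane and Park is a rotation within that plane, which is one route to the kind of alternative interpretation the thesis pursues.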
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 87-91).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127082</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of low-thrust solid rocket motors for small, fast aircraft propulsion</title>
<link>https://hdl.handle.net/1721.1/127069</link>
<description>Development of low-thrust solid rocket motors for small, fast aircraft propulsion
Vernacchia, Matthew T.
Small, uncrewed aerial vehicles (UAVs) are expanding the capabilities of aircraft systems. However, a gap exists in the size and capability of aircraft: no small aircraft are capable of sustained fast flight. A small, fast aircraft requires a propulsion system which is both miniature and high-power, requirements which current UAV propulsion technologies do not meet. Solid propellant rocket motors could be used, but must be re-engineered to operate at much lower thrust and for much longer burn times than conventional small solid rocket motors. This imposes unique demands on the motor and propellant. This work investigates the technological challenges of small, low-thrust solid rocket motors: slow-burn solid propellants, motors which have low thrust relative to their size (and thus low chamber pressure), thermal protection for the motor case, and small nozzles which can withstand long burn times. Slow-burn propellants were developed using ammonium perchlorate oxidizer and the burn rate suppressant oxamide. By varying the amount of oxamide (from 0 to 20%), burn rates from 4 mm s⁻¹ down to 1 mm s⁻¹ (at 1 MPa) were achieved. Using these propellants, a low-thrust motor successfully operated at a thrust-to-burn-area ratio 10 times lower than that of typical solid rocket motors. This motor can provide 5-10 N of thrust for 1-3 minutes. An ablative thermal protection liner was tested in these firings. Despite the long burn time, only a few millimeters of ablative are needed. A new ceramic-insulated nozzle was demonstrated on this motor. The nozzle has a small throat diameter (only a few millimeters) and can operate in thermal steady-state. Models were developed for the propellant burn rate, motor design, heat transfer within the motor and nozzle, and thermal stresses in the nozzle insulation. This work shows that small, low-thrust solid motors are feasible, by demonstrating these key technologies in a prototype motor. 
Further, the experimental results and models will enable engineers to design and predict the performance of solid rocket motors for small, fast aircraft. By providing insight into the physics of these motors, this thesis may help to enable a new option for aircraft propulsion.
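The quoted thrust level is consistent with a back-of-envelope mass-flow estimate: thrust equals propellant density times burn area times burn rate times effective exhaust velocity. The sketch below uses assumed representative values for density, burn area, and effective exhaust velocity; none of these numbers are taken from the thesis.

```python
def thrust_newtons(burn_rate_m_s, burn_area_m2, density_kg_m3, c_eff_m_s):
    """Thrust = propellant mass flow (rho * A_b * r) times effective exhaust velocity."""
    mass_flow = density_kg_m3 * burn_area_m2 * burn_rate_m_s   # kg/s
    return mass_flow * c_eff_m_s

# Assumed values: 1 mm/s burn rate (heavily oxamide-suppressed propellant),
# 25 cm^2 burn area, 1600 kg/m^3 density, 2000 m/s effective exhaust velocity.
F = thrust_newtons(1.0e-3, 25.0e-4, 1600.0, 2000.0)
# F comes out to 8.0 N, inside the 5-10 N range quoted above.
```

The same relation shows why a slow-burning propellant is essential: at conventional burn rates of tens of mm/s, a motor this size would produce far more thrust than a small, fast aircraft can use.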
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 281-289).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127069</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems analysis of community noise impacts of advanced flight procedures for conventional and hybrid electric aircraft</title>
<link>https://hdl.handle.net/1721.1/127064</link>
<description>Systems analysis of community noise impacts of advanced flight procedures for conventional and hybrid electric aircraft
Thomas, Jacqueline (Jacqueline Leah)
Recent changes to aircraft approach and departure procedures enabled by more precise navigation technologies have created noise concentration problems for communities beneath flight tracks. There may be opportunities to reduce community noise impacts under these concentrated flight tracks through advanced operational approach and departure procedures and advanced aircraft technologies. A modeling method to assess their impacts must consider the contributions of aircraft engine and airframe noise sources as they vary with the position, thrust, velocity, and configuration of the aircraft during the flight procedure. The objective is to develop an analysis method to design, model, and assess the community noise reduction possibilities of advanced operational flight procedures performed by conventional aircraft and advanced procedures enabled by future aircraft concepts. An integrated analysis framework is developed that combines flight dynamics and noise source models to determine the community noise impacts of aircraft performing advanced operational approach and departure procedures. Aircraft noise due to the airframe and engine is modeled using an aircraft source noise module, as each noise component varies throughout the flight procedure and requires internal engine performance states, the flight profile, and aircraft geometry. An aircraft performance module is used to obtain engine internal performance states and aircraft flight performance given the aircraft technology level. A force-balance-kinematics flight profile generation module converts the flight procedure definition into altitude, position, velocity, configuration, and thrust profiles given flight performance on a segment-by-segment basis. The system generates single-event surface noise grids that are combined with population census data to estimate population noise exposure for a given aircraft technology level and procedure. 
The framework was demonstrated for both advanced approach and departure procedures and advanced aircraft technologies. The advanced procedure concepts include modified speed and thrust departures as well as continuous descent, steep, and delayed deceleration approaches for conventional aircraft. The ability to model advanced aircraft technologies was demonstrated in the evaluation of using windmilling drag by hybrid electric aircraft on approach to allow the performance of steep and delayed deceleration approaches for noise reduction beyond the performance capability of standard gas-turbine aircraft.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 213-223).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127064</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Covariance estimation on matrix manifolds</title>
<link>https://hdl.handle.net/1721.1/127063</link>
<description>Covariance estimation on matrix manifolds
Musolas Otaño, Antoni M. (Antoni Maria)
The estimation of covariance matrices is a fundamental problem in multivariate analysis and uncertainty quantification. Covariance matrices are an essential modeling tool in climatology, econometrics, model reduction, biostatistics, signal processing, and geostatistics, among other applications. In practice, covariances often must be estimated from samples. While the sample covariance matrix is a consistent estimator, it performs poorly when the relative number of samples is small; improved estimators that impose structure must be considered. Yet standard parametric covariance families can be insufficiently flexible for many applications, and non-parametric approaches may not easily allow certain kinds of prior knowledge to be incorporated. In this thesis, we harness the structure of the manifold of symmetric positive-(semi)definite matrices to build families of covariance matrices out of geodesic curves. These covariance families offer more flexibility for problem-specific tailoring than classical parametric families, and are preferable to simple convex combinations. Moreover, the proposed families can be interpretable: the internal parameters may serve as explicative variables for the problem of interest. Once a covariance family has been chosen, one typically needs to select a representative member by solving an optimization problem, e.g., by maximizing the likelihood associated with a data set. Consistent with the construction of the covariance family, we propose a differential geometric interpretation of this problem: minimizing the natural distance on the covariance manifold. Our approach does not require assuming a particular probability distribution for the data. 
Within this framework, we explore two different estimation settings. First, we consider problems where representative "anchor" covariance matrices are available; these matrices may result from offline empirical observations or computational simulations of the relevant spatiotemporal process at related conditions. We connect multiple anchors to build multi-parametric covariance families, and then project new observations onto this family--for instance, in online estimation with limited data. We explore this problem in the full-rank and low-rank settings. In the former, we show that the proposed natural distance-minimizing projection and maximum likelihood are locally equivalent up to second order. In the latter, we devise covariance families and minimization schemes based on generalizations of multi-linear and Bézier interpolation to the appropriate manifold. Second, for problems where anchor matrices are unavailable, we propose a geodesic reformulation of the classical shrinkage estimator: that is, we construct a geodesic family that connects the identity (or any other target) matrix to the sample covariance matrix and minimize the expected natural distance to the true covariance. The proposed estimator inherits the properties of the geodesic distance, for instance, invariance to inversion. Leveraging previous results, we propose a solution heuristic that compares favorably with recent non-linear shrinkage estimators. We demonstrate these covariance families and estimation approaches in a range of synthetic examples, and in applications including wind field modeling and groundwater hydrology.
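A minimal sketch of the geodesic machinery underlying these covariance families, using the standard affine-invariant metric on symmetric positive-definite matrices; the thesis may use a different metric or parameterization, so treat this as an illustrative assumption rather than its actual construction.

```python
import numpy as np

def _sym_pow(M, p):
    """Real power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**p) @ V.T

def spd_geodesic(A, B, t):
    """Point at parameter t on the affine-invariant geodesic from A (t=0) to B (t=1)."""
    Ah, Aih = _sym_pow(A, 0.5), _sym_pow(A, -0.5)
    return Ah @ _sym_pow(Aih @ B @ Aih, t) @ Ah

def spd_distance(A, B):
    """Natural (affine-invariant) distance: Frobenius norm of log(A^-1/2 B A^-1/2)."""
    w = np.linalg.eigvalsh(_sym_pow(A, -0.5) @ B @ _sym_pow(A, -0.5))
    return float(np.sqrt(np.sum(np.log(w) ** 2)))

# A geodesic "shrinkage" family connects a target (here the identity) to the
# sample covariance; the parameter t selects a member of the family.
S = np.array([[2.0, 0.3], [0.3, 1.0]])        # stand-in sample covariance
shrunk = spd_geodesic(np.eye(2), S, 0.5)
```

The inversion invariance mentioned in the abstract is easy to see here: inverting A and B inverts the eigenvalues inside the logarithm, which only flips their signs and leaves the norm unchanged.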
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 135-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127063</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing internet of things experience in augmented reality environments</title>
<link>https://hdl.handle.net/1721.1/127062</link>
<description>Enhancing internet of things experience in augmented reality environments
Sun, Yongbin, Ph. D., Massachusetts Institute of Technology.
Seamless perception of objects' physical properties, such as temperature, is key to improving the way we live and work. Thanks to the rapid development of sensor technology, the Internet of Things (IoT) is shaping our world by expanding digital connectivity to real objects. In this way, physical properties of objects can be effectively collected, processed, transmitted, and shared. Yet, only being able to sense the surrounding environment is not enough: a user-friendly way to visualize information is also required. Today, Augmented Reality (AR), which overlays digital information onto physical objects, is growing fast and has been adopted successfully in many fields. This thesis focuses on fusing the advantages of various technologies to create a better IoT experience in AR environments. First, we describe an integrated system to enhance users' IoT experience in AR environments: users are allowed to directly visualize objects' physical properties and control IoT devices in an immersive manner. This system is able to localize in-view target objects based on their natural appearances without using fiducial markers, such as QR codes. In this way, a more seamless user experience can be achieved. Second, existing handcrafted computer vision methods can estimate objects' poses only for simple cases (e.g., textured patterns or simple shapes), and usually fail for complex cases. Recently, deep learning has shown promise in handling various tasks in a data-driven approach. In this thesis, 3D deep learning models are explored to estimate objects' pose parameters in a more accurate manner. Hence, better robustness and accuracy can be achieved to support IoT-AR applications. Third, the standard deep learning training pipeline for object pose estimation is supervised, which requires ground-truth pose parameters to be known. Manually obtaining such data is time-consuming and expensive, making it hard to scale. 
As the last contribution, methods using synthetic data are studied to automatically train object pose estimation models without human labelling.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 111-125).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127062</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-constrained machine learning strategies for turbulent flows and bubble dynamics</title>
<link>https://hdl.handle.net/1721.1/127061</link>
<description>Physics-constrained machine learning strategies for turbulent flows and bubble dynamics
Wan, Zhong Yi, Ph. D., Massachusetts Institute of Technology.
Machine learning (ML) has in recent years become a sizzling trend in almost every science and engineering discipline. It enables scientists and engineers to make decisions or draw conclusions directly using information extracted from data, bypassing the necessity to unravel the delicate inner workings of the underlying phenomena. This, however, comes at the expense of having to search through an immense space of potential architectures and parameters for an optimized model that not only provides the best description of the available data but also applies to unseen cases. To cope with such difficulties, it is imperative that ample constraints are imposed on the architecture and parameter space, in order to facilitate efficient and generalizable learning. For physical systems, first-principles knowledge makes up a natural set of constraints that should be integrated into the ML system. Past research efforts have mainly emphasized utilizing physical knowledge for cases where the system states are perfectly defined. For reduced-order settings, this condition is not satisfied and consequently, the existing physical knowledge is often incomplete and/or compromised in accuracy. As a result, it remains a challenge to effectively leverage such knowledge in the design and implementation of ML frameworks. The objective of this thesis is to address the existing gaps and present physics-constrained ML strategies which work specifically in reduced-order spaces. The first part of this thesis focuses on using ML to improve imperfect reduced-order models, obtained from two common methods: (i) orthogonal subspace projection and (ii) slow-manifold reduction. The first case is particularly important for the modeling of rare extreme events such as those found in the geophysical sciences. In these problems, there is typically little data available to train a data-driven model, while reduced-order models of the full equations also fail to capture the relevant dynamics. 
We present a modeling strategy that allows the ML and physical model to complement each other. Specifically, physics-based equations are projected to a subspace which contains critical dynamical components associated with rare events and then combined with a data-driven model. In this way, the projected equations assist in modeling these events, which appear less frequently in data streams. On the other hand, observations are often plentiful in other regions of the state space, allowing ML to capture dynamics unaccounted for by the projected equations. The effectiveness of this strategy is demonstrated through the prediction and modeling of extreme dissipation events in turbulent fluid flows. Next we present a strategy for improving slow-manifold reduced-order models, suitable for systems with separated time scales. Our strategy employs ML to model the 'fast variables' of the system in terms of the 'slow variables', which can then be integrated with the equation-based dynamics of the latter to provide a complete description of the system evolution. In this way, we constrain ML to a specific part of the dynamics which cannot be easily derived or expressed analytically. We demonstrate the strategy through the modeling of finite-size (inertial) particle dynamics in generic fluid flows, a problem of critical importance for modeling bubbles in multi-phase flows, aerosols in the atmosphere, and ocean drifters. We utilize training data obtained first from the classical Maxey-Riley equation of motion and second from high-fidelity multi-phase direct numerical simulations. In both cases, we show that the kinematics, i.e., the relationship between position (slow variable) and velocity (fast variable), can be effectively learned from limited trajectories and directly utilized to model interactions with complex, turbulent flows. We carefully study the transferability of the ML models to different fluid flows and different parameters. 
To deal with the problem of limited data, the ML approach is complemented with a data-augmentation technique that enforces physical symmetries of the problem, such as isotropy of the particle dynamics. In the second part of the thesis, we consider a complementary problem related to the parameterization of the unmodeled variables of a reduced-order model with respect to the modeled/dynamically-resolved reduced-order states. This problem is particularly meaningful when explicit dynamical modeling of the target variables is prohibitive.; We present a formulation of this problem in the context of atmospheric modeling, where the spatially small-scale features are much harder to measure and model than the corresponding large scales, due to their high intrinsic dimensionality and the lack of predictability resulting from instabilities. To address these issues, we introduce the Stochastic Machine-Learning (SMaL) parameterization framework, which decomposes the time series for the small scales into a deterministic (predictable) and a stochastic (unpredictable) component. The deterministic component is directly captured with ML in terms of the large-scale time series. On the other hand, the local-in-time statistics of the stochastic components are estimated and learned with separate models. We then construct a non-stationary Gaussian process which enables efficient sampling of an ensemble of small-scale trajectories that are consistent with the large scales.; The SMaL framework is illustrated on a realistic application, where small-scale vorticity fields are parameterized in terms of large-scale vorticity and temperature data from reanalysis over Europe. We show that the small-scale random samples exhibit realistic characteristics in terms of the spatial spectrum, single-point probability density functions, and temporal spectral content.
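The SMaL-style decomposition described above can be illustrated with a toy example: a deterministic component regressed on the large-scale signal, a rolling estimate of the residual's local-in-time standard deviation, and a non-stationary Gaussian draw of an ensemble. The synthetic signals, linear regression, and rolling-window estimator are stand-in assumptions, not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 400)

# Stand-ins for "large-scale" and "small-scale" time series.
large = np.sin(t)
small = 0.5 * large + 0.2 * np.abs(large) * rng.standard_normal(t.size)

# Deterministic (predictable) component: regress small on large.
a, *_ = np.linalg.lstsq(large[:, None], small, rcond=None)
deterministic = a[0] * large

# Local-in-time statistics of the residual: rolling standard deviation.
residual = small - deterministic
window = 25
local_std = np.array([residual[max(0, i - window):i + 1].std()
                      for i in range(t.size)])

# Non-stationary Gaussian draw: an ensemble of small-scale trajectories
# statistically consistent with the large scales.
ensemble = deterministic + local_std * rng.standard_normal((20, t.size))
```

Each ensemble member shares the deterministic backbone but carries independently sampled, locally-scaled fluctuations, mirroring the predictable/unpredictable split described above.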
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 155-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127061</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated development of photovoltaics by physics-informed machine learning</title>
<link>https://hdl.handle.net/1721.1/127060</link>
<description>Accelerated development of photovoltaics by physics-informed machine learning
Oviedo Perhavec, Juan Felipe.
Terawatt-scale deployment of photovoltaics (PV) is required to mitigate the most severe effects of climate change. Despite sustained growth in PV installations, technoeconomic models suggest that further technical advances and cost reductions are required to enable a timely energy transition in the next 10 - 15 years. This limited timeline is incompatible with the historical rate of materials development: solar cell technologies have taken decades to transition from the laboratory to large-scale commercial applications. Recently, the convergence of high-performance computing, high-throughput experimentation, and machine learning has shown great promise to accelerate scientific research. In this context, this thesis proposes and demonstrates a comprehensive methodology for accelerated PV development.; Machine learning constitutes a key component of the new framework, effectively reconciling the formerly disjoint aspects of first-principles simulation, experimental fabrication, and in-depth characterization. This integration is achieved by judiciously formalizing material science problems, and by developing and adapting algorithms according to physical principles. Under this interdisciplinary perspective, the physics-informed machine learning approach allows a 3 - 30x acceleration in various aspects of PV development. This work focuses on two particular areas. The first thrust aims to accelerate the screening and optimization of early-stage PV absorbers. The high dimensionality of the material space and the sparsity of experimental information make early-stage material development challenging. First, I address the structural characterization bottleneck in material screening using deep learning techniques and physics-inspired data augmentation.; Then, I develop a physics-constrained Bayesian optimization algorithm to efficiently optimize material compositions, fusing experimentation and density functional theory with stochastic constraints. 
These advancements lead to the discovery of several promising lead-free perovskites, and of a 3x more stable multication lead halide perovskite. The second thrust aims to accelerate the industrial transition of more mature PV devices. For this purpose, I reformulate the traditional record-efficiency figure of merit to include probabilistic and manufacturing considerations, allowing industrially-relevant optimization. Then, a scalable physical inference algorithm is developed through a principled combination of Bayesian inference, deep learning, and physical models. This inference model efficiently provides physical insights leading to &gt; 3x faster solar cell optimization. Finally, this approach is expanded to solar cell degradation diagnosis.; I reduce the characterization time by &gt; 5x using time-series forecasting methods. Then, the scalable inference model is combined with a game-theoretic interpretability algorithm to elucidate the physical factors driving degradation. Together, this methodology and these results can dramatically accelerate PV technology development and have a timely impact on climate change mitigation. The physics-informed models expand the horizon of applied machine learning, and the fundamental approach of this work is applicable to other energy materials and systems, such as thermoelectrics and batteries.
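The sequential experiment-selection loop at the heart of Bayesian optimization, which the thesis extends with physical constraints and density-functional-theory fusion, can be sketched as follows. The one-dimensional objective, the Gaussian-process hyperparameters, and the expected-improvement acquisition below are purely illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from math import erf, sqrt, pi

# 1-D toy: maximize a "stability" objective over a composition fraction.
def objective(x):
    return np.exp(-((x - 0.7) ** 2) / 0.02)

def gp_posterior(X, y, Xs, length=0.1, noise=1e-6):
    # Zero-mean GP with a squared-exponential kernel (assumed hyperparameters).
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length ** 2))
    Kinv = np.linalg.inv(k(X, X) + noise * np.eye(X.size))
    Ks = k(Xs, X)
    mu = Ks @ Kinv @ y
    var = 1.0 + noise - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (mu - best) * cdf + sd * pdf

grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.5, 0.9])        # initial "experiments"
y = objective(X)
for _ in range(10):                   # ten sequential experiments
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[expected_improvement(mu, sd, y.max()).argmax()]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
best_x = X[y.argmax()]
```

The loop trades off exploring uncertain compositions against exploiting promising ones; physical constraints would restrict the candidate grid or penalize the acquisition function.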
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 179-194).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127060</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-cell mass measurements for drug susceptibility testing in cancer</title>
<link>https://hdl.handle.net/1721.1/127059</link>
<description>Single-cell mass measurements for drug susceptibility testing in cancer
Stockslager, Max A.
Measuring the size distributions of micron-scale particles is of central importance in the biological sciences and for a wide range of industrial processes. The suspended microchannel resonator (SMR) is a sensor that measures the mass of individual micron-scale particles by detecting a shift in resonance frequency as particles flow through a hollow resonating micro-cantilever beam. While SMRs offer extreme precision for measuring mass, their applications have been limited by low measurement throughput. In the first part of this thesis, I describe several technical advancements aimed at increasing the throughput of SMRs. First, we developed devices containing many SMRs connected fluidically in parallel on the same microfluidic chip. By operating many mass sensors simultaneously, these "parallel SMR arrays" achieve approximately 27-fold higher throughput than previously possible.; To further increase throughput, we developed a computational approach for resolving faster shifts in resonance frequency than previously possible using the coupled resonator-phase-locked-loop feedback system. We describe in detail the operation and performance limitations of each technique. In the second part of this thesis, I discuss the application of SMRs to drug sensitivity testing in cancer. For most cancers there exists a long, growing list of FDA-approved chemotherapies and targeted agents, but often there is no rational basis to predict which drug will be most effective for a particular patient. 
We have shown that in several cancers, tumor cell growth is altered upon ex vivo exposure to cancer therapeutics, and that changes in cell mass can be detected as a functional biomarker for drug sensitivity.; In particular, we show in a retrospective clinical study that cell mass measurements predict the response of glioblastoma multiforme patients to temozolomide, a standard-of-care chemotherapy, and that the SMR drug sensitivity assay predicts the duration that patients survive on therapy better than the gold-standard genetic biomarkers. Looking forward, we envision that functional drug susceptibility testing will be useful for matching patients to effective therapies across a wide variety of cancers.
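The SMR measurement principle described above, converting a transient resonance-frequency dip into a buoyant mass via a calibration constant, can be sketched as below. The sensitivity, baseline frequency, and dip shapes are made-up illustrative values, not specifications of any device in the thesis.

```python
import numpy as np

# For a particle transiting a resonating cantilever, the buoyant mass is
# proportional to the induced resonance-frequency shift, with a sensitivity
# (Hz per picogram) obtained by calibration against particles of known mass.
sensitivity_hz_per_pg = 0.5          # assumed calibration constant
baseline_hz = 200_000.0              # assumed resonance frequency

# Frequency trace with transient dips as three particles transit.
freq_hz = np.full(300, baseline_hz)
for center, depth in [(50, 1.0), (150, 2.5), (250, 0.8)]:
    freq_hz[center - 2:center + 3] -= depth   # Hz dip per particle

# Convert each peak frequency shift to a buoyant mass in picograms.
shifts = baseline_hz - freq_hz
peak_shifts = [shifts[c - 2:c + 3].max() for c in (50, 150, 250)]
masses_pg = [s / sensitivity_hz_per_pg for s in peak_shifts]
```

With these assumed values the three dips map to buoyant masses of 2.0, 5.0, and 1.6 pg; a real instrument additionally deconvolves the dip shape set by the particle's transit path through the cantilever.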
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 116-121).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127059</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement and modeling of brain tissue and engineered polymer response to concentrated impact loading</title>
<link>https://hdl.handle.net/1721.1/127058</link>
<description>Measurement and modeling of brain tissue and engineered polymer response to concentrated impact loading
Mijailovic, Aleksandar S.
Our brains are among the most mechanically compliant and structurally complex organs in our bodies. To predict how brain tissue deforms, and to protect it from deforming in ways that reduce our cognitive function, we must be able to measure, model, and ideally replicate brain tissue mechanics. While this is a grand challenge that many have sought to address, the need is acute when considering spatially localized deformation of brain tissue at high rates, such as in collisions that cause traumatic brain injury (TBI). This thesis sought to address this challenge at increasing levels of spatial and temporal complexity by employing dynamic contact mechanics as a tool for studying the reduction of TBI. Strategies to reduce TBI include helmets designed to absorb impact energy, which are typically evaluated by simplified impact tests with engineered headforms equipped with brain tissue simulant materials and accelerometers.; However, current brain tissue simulant materials are inaccurate mechanical mimics under these conditions. Additionally, the geometry of both the brain and the protective equipment intended to absorb such impact energy can couple the structural mechanics and material mechanics in subtle ways. Finally, many modern helmet designs provide inadequate protection against a sufficiently wide range of anticipated adverse impact scenarios (e.g., impact velocities and corresponding impact energies) that a human may encounter. This thesis aimed to (1) improve the tissue simulant materials and head acceleration-based metrics used to evaluate TBI protection strategies, and (2) implement these metrics in a novel framework for helmet evaluation and optimization.; We developed and implemented novel methods to characterize both engineered tissue simulants and mammalian brain tissue at low deformation rates, using both conventional methods of rheology and indentation as well as novel methods of spatially localized rheology. 
To characterize those materials under concentrated impact conditions, we next employed impact indentation and developed a new method to analyze experimental results derived from dynamic contact mechanics. Whereas prior analyses were limited to empirical metrics, the methodology developed in this thesis facilitates measurement of viscoelastic constitutive properties of the material or tissue. We next turned our attention to characterizing and optimizing the multilayered materials designed to protect the brain, in the form of helmets.; To enable objective comparison among helmets of different design, geometry, or energy absorbing materials, we first developed and demonstrated a new and accessible interpretation of helmet impact tests using head acceleration-based efficiency metrics. We applied this approach to an "inverted helmet" design, a hemispherical cap in which compliant protective layers are located on the external surface and a thin, stiff shell is located closer to the skull. Experimental and computational comparison of this prototype with a modern conventional helmet exposed deficiencies of existing acceleration-based evaluation metrics. For example, while the inverted helmet scored better under those measures for a given test scenario, our approach revealed that such a conclusion was incomplete and misleading because each helmet was most efficient under differing impact conditions.; Indeed, since our analysis framework identified specific impact conditions under which a helmet absorbs energy most efficiently, we went on to demonstrate its utility in choice of materials to enhance impact energy absorption against anticipated adverse events. Just as we used contact mechanics to enable characterization of the brain tissue and engineered simulants, we used contact mechanics-based analytical models and finite element simulations to understand the theoretical underpinnings of impact energy absorption in multimaterial helmets. 
We augmented simplified analytical models from the literature to incorporate non-linear and viscoelastic behavior of the energy absorbing material layers. We found that simplified, approximate analytical models predicted many trends observed in finite element simulations and experimental measurements, with excellent agreement between finite element models and experimental results.; Further, this approach provided distinct advantages, accounting for helmet thickness and clearly identifying the impact conditions under which the helmet is most protective. We also identified the ideal rate-dependent material constitutive response that would furnish an optimally efficient design across a wide range of impact energies and impact rates. This thesis provides tools and methods for evaluating novel protective strategies, and employs these methodologies to develop new helmet optimization procedures and design principles. The unifying enabler of this work was the understanding and implementation of contact mechanics models under impact loading conditions.
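A minimal version of the simplified analytical impact models discussed above is a single-degree-of-freedom mass striking a Kelvin-Voigt (spring plus dashpot) padding layer, from which a peak-acceleration metric can be computed for each impact speed. All parameter values here are illustrative assumptions, not fitted helmet properties.

```python
import numpy as np

# Single-degree-of-freedom sketch: a head mass impacting a Kelvin-Voigt pad.
m = 4.5          # head mass, kg (assumed)
k = 5.0e4        # pad stiffness, N/m (assumed)
c = 300.0        # pad damping, N*s/m (assumed)

def peak_acceleration(v0, dt=1e-5, steps=20000):
    """Semi-implicit Euler integration; returns peak acceleration in m/s^2."""
    x, v = 0.0, v0   # x is pad compression, v the impact velocity
    peak = 0.0
    for _ in range(steps):
        contact = float(np.heaviside(x, 0.0))   # force only while compressed
        a = -(k * x + c * v) * contact / m
        v = v + a * dt
        x = x + v * dt
        peak = max(peak, abs(a))
    return peak

# Acceleration-based comparison across anticipated impact speeds (m/s).
speeds = [2.0, 4.0, 6.0]
peaks = [peak_acceleration(v) for v in speeds]
```

For this linear model the peak acceleration grows with impact speed; evaluating such curves across speeds is one way to express the efficiency-style comparison described above, with nonlinear or rate-dependent pad laws substituted for the spring-dashpot force.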
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 191-206).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127058</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning in ocean applications : wave prediction for advanced controls of renewable energy and modeling nonlinear viscous hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/127057</link>
<description>Machine learning in ocean applications : wave prediction for advanced controls of renewable energy and modeling nonlinear viscous hydrodynamics
Ma, Yu, Ph.D., Massachusetts Institute of Technology.
Many conventional problems in ocean engineering remain challenging due to the stochastic nature of ocean waves, viscous effects of the flow, nonlinear resonance, and combinations of these factors. Data-driven techniques are a promising approach, complementary to traditional methods, for modeling physical problems, since data from experiments, field tests, or high-fidelity simulations are highly informative about the actual physical systems. Machine learning algorithms, especially kernel-based methods, offer very good generalization capability as well as statistical inference. This thesis aims to establish a framework for using data from real-time measurements, or data gathered from experiments, field tests, and simulations, to provide an alternative approach for physical modeling or practical engineering solutions.; In this thesis, we mainly target two different types of problems, mapping highly nonlinear physical relations and predicting time series, to prove the feasibility of such a framework. More specifically, one problem is short-term wave prediction based on real-time measurements and its application to the advanced controls of renewable energy. The other is the modeling of nonlinear viscous hydrodynamic loads on ships and offshore platforms. Support Vector Machines (SVM) are used in solving both problems. In the thesis, SVM regression models are developed for the real-time short-term forecast of wave elevations and wave excitation forces. Optimal controllers aiming to reduce structural loads or optimize energy capture with knowledge of the forecasted wave force are established for offshore floating wind turbines and wave energy converters.; A series of CFD simulations of a rectangular barge with bilge keels are conducted and validated, along with experimental data from a fixed offshore cylindrical platform, to serve as the baseline data set for modeling the nonlinear viscous hydrodynamic loads. 
Using the wave elevations and ship roll kinematics as features, the SVM regression models are trained and tested to predict the nonlinear hydrodynamic loads. The influence of stochastic effects, feature selection, and kernel selection is also discussed in the thesis. Keywords: machine learning, SVM regression, short-term forecast, model predictive control, nonlinear viscous hydrodynamic loads
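The autoregressive short-term forecasting setup can be sketched with a kernel method: past wave-elevation samples form the feature vector, and a kernel regression predicts the elevation several steps ahead. The thesis uses SVM regression; this numpy-only kernel ridge stand-in, with an assumed kernel width, regularization, and toy wave signal, is only illustrative.

```python
import numpy as np

t = np.arange(0.0, 60.0, 0.1)
eta = np.sin(0.8 * t) + 0.4 * np.sin(1.3 * t + 0.5)   # toy wave elevation

# Autoregressive features: the past p samples predict the elevation h steps ahead.
p, h = 20, 5
X = np.array([eta[i:i + p] for i in range(eta.size - p - h)])
y = eta[p + h:eta.size]

# Kernel ridge regression with an RBF kernel, a close cousin of SVM regression.
def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

# Forecast: elevation h steps ahead of the final observed window.
window = eta[-p:][None, :]
prediction = (rbf(window, X) @ alpha)[0]
```

For the control applications above, the forecast horizon h would be chosen to match the preview time the optimal controller needs; an SVR would add an epsilon-insensitive loss and sparsity in the support vectors.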
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 144-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127057</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating cleantech hardware product development</title>
<link>https://hdl.handle.net/1721.1/127056</link>
<description>Accelerating cleantech hardware product development
Looney, Erin Elizabeth.
Startups in the Cleantech sector have historically had poor returns on investment and high failure rates when compared with the medical or software sectors. One reason given for the disappointing outcomes of many Cleantech companies is slow product development (PD) and slow cycles of learning. In this thesis, I explore the major bottlenecks in hardware product development, including slow cycles of learning, especially in Cleantech. I then develop technical and operational interventions to help overcome these obstacles. I accomplish this through 55 interviews with hardware startup chief executive and chief technology officers worldwide and subsequent analysis of the data from these conversations. By analyzing these data, I find ways to accelerate the innovation process and work toward applying these to Cleantech hardware product development. First, I find that prototyping is the largest time sink for hardware startups, with a median duration of 2.5 years.; I further find that prototyping times are not correlated with certain product-complexity metrics. This suggests that something other than technological constraints determines prototyping times. To examine this further, I investigate the impact of innovation models on prototyping times. I find that a flexible, natural innovation model can accelerate early-stage innovation, while a structured PD approach is preferred for later-stage innovation. Then, through qualitative coding of the interview transcripts, I find another PD bottleneck in relationships with the key stakeholders of investors, customers, and manufacturers. I find several best practices regarding these relationships, and I propose that codifying some of these as requirements in the traditional sense might be a promising strategy for accelerating PD timelines. 
Next, I present an outlook for cutting-edge technological acceleration strategies that have only just begun to be used by startups.; These include computational tools like automation, machine learning (ML), and high-performance computing (HPC), as well as physical tools like 3D printing. Lastly, I present a computational tool for testing solar photovoltaics as an example of a PD acceleration method. This tool, called Representative Identification of Spectra and the Environment (RISE), uses K-means, a machine learning clustering algorithm. The RISE method overcomes the shortcomings of past spectral classifiers used in industry and academia. The method is technology agnostic, and the two parameters of RISE, K₁ and K₂, can uniquely classify all spectra worldwide, unlike the commonly used APE classifier. I further demonstrate the RISE method in practice using LED-based solar simulators to test solar devices, capturing more performance data relevant to real-world conditions per unit time than current standard testing allows.; Using only 18 representative spectra, I correctly reproduce energy yield differences between silicon solar cells and CdTe solar cells with an error of less than 1.5 ± 0.5%, compared to over 5% when using STC. With the findings enumerated above, this thesis adds data-driven technical and operational strategies to the Cleantech community's playbook to accelerate cycles of learning.
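The clustering step underlying a RISE-style classifier can be illustrated with plain k-means: spectra are grouped, and the member nearest each centroid serves as a representative spectrum for testing. The synthetic two-regime spectra and the Lloyd's-algorithm implementation below are illustrative assumptions, not the RISE method itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "spectra": 200 irradiance curves over 50 wavelength bins, drawn from
# two made-up weather regimes; purely illustrative, not measured data.
bins = np.linspace(0.0, 1.0, 50)
clear = np.exp(-((bins - 0.4) ** 2) / 0.02)
cloudy = np.exp(-((bins - 0.55) ** 2) / 0.05)
spectra = np.vstack([clear + 0.05 * rng.standard_normal(50) for _ in range(100)]
                    + [cloudy + 0.05 * rng.standard_normal(50) for _ in range(100)])

# Plain k-means (Lloyd's algorithm): cluster spectra, then take the member
# nearest each centroid as the "representative spectrum" to test under.
def kmeans(X, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(X.shape[0], k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(spectra, k=2)
d = ((spectra[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
representatives = spectra[d.argmin(0)]  # one real spectrum per cluster
```

Testing devices only under the representatives, weighted by cluster size, approximates performance over the full spectral distribution at a fraction of the measurement time, which is the intuition behind using a small set of representative spectra.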
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 228-241).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127056</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of chemical-free methods of fouling mitigation for membrane processes in desalination</title>
<link>https://hdl.handle.net/1721.1/127055</link>
<description>Development of chemical-free methods of fouling mitigation for membrane processes in desalination
Labban, Omar.
As water scarcity continues to intensify around the globe, the need for more efficient and sustainable desalination technologies has never been more pressing. While membrane technology, namely reverse osmosis (RO), currently stands as the most energy-efficient desalination technology, it is plagued by fouling, which undercuts both productivity and permeate quality. To restore performance, desalination plants resort to chemical cleaning, incurring losses in productivity as well as chemical and membrane-replacement costs, all while raising environmental concerns associated with chemical waste. In this work, we explore alternative chemical-free methods of membrane fouling mitigation. First, membrane pretreatment using nanofiltration is investigated as a means of mitigating inorganic fouling in downstream desalination systems. Transport modeling is employed in fabricating specialized nanofiltration membranes for desalination pretreatment.; The Donnan-Steric Pore Model with dielectric exclusion (DSPM-DE) is used to probe for desirable membrane properties, while new membranes are systematically fabricated in-house using layer-by-layer (LbL) deposition to validate model predictions and to develop a new specialized membrane for this application. The new membrane presents a 30% increase in permeability and a 50% reduction in permeate hardness relative to state-of-the-art NF membranes. Apart from proactive pretreatment approaches, reactive approaches remain necessary for handling already-fouled RO membranes. To that end, osmotically-induced cleaning (OIC), whereby an RO membrane effectively undergoes osmotic backwashing, is explored. 
Specifically, the effectiveness of OIC against organic fouling is examined, underlying mechanisms are elucidated, and potential applicability in the presence of spacers is investigated.; While experimental results demonstrate flux recoveries of up to 30%, the method's effectiveness is shown to be dramatically reduced in the presence of spacers, and the method falls far short of completely eliminating a biofilm or preventing its regrowth once operation is resumed. Given the practical limitations of OIC, we finally present the development of deformation-induced cleaning (DIC), a novel chemical-free fouling mitigation method applicable to commercially existing spiral-wound membrane modules. The method employs controlled membrane deformation through pressure modulation, which induces shear stresses at the foulant-membrane interface that lead to detachment and removal of the foulants. Experiments on organic fouling by alginate are conducted on a flat-sheet membrane coupon, followed by tests on a commercial spiral-wound module. Shutdown durations are shown to be six-fold lower, while flux recoveries are comparable to those of chemical methods.; In-situ visualization is employed alongside bench-scale experiments to elucidate the underlying mechanisms and ultimately devise an optimized chemical-free fouling mitigation strategy. Experiments on a commercial spiral-wound module indicate this method will have applicability in industrially-relevant settings. By enabling more frequent cleanings, DIC considerably lowers operating expenses while offering a more sustainable and environmentally sound solution to membrane fouling mitigation in desalination.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-157).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127055</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and development of desktop fiber and fabric manufacturing system for advanced materials</title>
<link>https://hdl.handle.net/1721.1/127054</link>
<description>Design and development of desktop fiber and fabric manufacturing system for advanced materials
Kim, David Donghyun.
This thesis presents two novel desktop fiber manufacturing systems and a desktop fabric manufacturing system. The desktop fiber manufacturing system achieved two goals. The first goal was the use of the system to teach smart manufacturing fundamentals. The second goal was to develop a low-cost platform for prototyping fibers for biological and neurological research applications. The educational system was deployed in multiple classes, which proved it to be a useful tool for teaching smart manufacturing. The advanced fiber manufacturing system was able to produce a variety of fibers using different preform materials. The preform materials included fibers with a polycarbonate (PC) core and polymethyl methacrylate (PMMA) cladding, hollow fibers with PC and PMMA, and a three-layer fiber with a polystyrene (PS) cladding, a polycaprolactone (PCL) layer, and a PS core. These fibers were used for neural probing and cell scaffolding.; Finally, a generalized approach to designing a desktop fiber prototyping system is introduced. For the fabric manufacturing system, a novel knitting process was invented. Nitinol (NiTi) wires exhibit either shape memory or super-elastic properties. There has been extensive research progress on using conventional knitting machines to produce knitted fabric with shape memory Nitinol wires. However, there has not been any development in knitting fabric from super-elastic Nitinol wires without preprocessing. With the new system, super-elastic Nitinol wires can be knitted directly, without preprocessing the wire into loop shapes. The new system significantly reduces the processing steps needed to make knitted super-elastic fabric. The resulting fabric showed large strain capabilities at low stress. This thesis describes in detail the design and fabrication of the fabric knitting system. It also discusses the properties of the knitted fabric produced by the system.; A model was introduced to characterize the stress-strain relationship of the fabric. 
The model was also validated with the experimental data. The generalized approach to designing the super-elastic fabric system is also introduced. The relationship between the resulting fabric properties and the design parameters is discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 107-113).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127054</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combined CO₂ capture and electrochemical conversion in non-aqueous environments</title>
<link>https://hdl.handle.net/1721.1/127053</link>
<description>Combined CO₂ capture and electrochemical conversion in non-aqueous environments
Khurram, Aliza.
Carbon capture, utilization, and storage (CCUS) technologies have a central role to play in mitigating rising CO₂ emissions and enabling sustainable power generation. Most industrially mature carbon capture technologies, based on amine chemisorption, are highly energy-intensive, consuming up to 30% of the power-generating capacity of a plant in order to thermally regenerate the sorbents for continued capture. Moreover, the released CO₂ must additionally be compressed and stored permanently, which incurs further energy penalties and potential risks of release. To address these challenges, this thesis develops a new strategy for integrating CO₂ capture and conversion into a single process stream.; Such an approach, which employs CO₂ in the captured state as the reactant for subsequent electrochemical reactions, eliminates the need for energetically-intensive sorbent regeneration and CO₂ release between capture and utilization steps, while potentially providing new solutions to the storage challenge. In the first part of this thesis, a proof-of-concept demonstration of combined CO₂ capture and conversion within a Li-based electrochemical cell is presented. To develop this system, new electrolyte systems were first designed to integrate amines (used in industrial CO₂ capture) into nonaqueous electrolytes. The resulting systems were found to be highly effective in both capturing and activating CO₂ for subsequent electrochemical transformations upon discharge of the cell.; This activity was particularly well demonstrated in solvents such as DMSO, where CO₂ is normally completely inactive: there, the amine-modified electrolytes containing chemisorbed CO₂ were found to enable discharge at high cell voltages (~2.9 V vs. Li/Li⁺) and to high capacities (&gt; 1000 mAh/gc), converting CO₂ to solid lithium carbonate. 
Formation of a densely-packed, solid-phase product from CO₂ is not only logistically attractive because it requires less storage space, but also eliminates the costs and safety risks associated with long-term geological storage of compressed CO₂. In addition, the conversion process generates electricity at the point of capture, which may help to incentivize integration of the technology with existing point-source emitters. While promising, this initial system exhibited several challenges, including slow formation of the active species in solution.; To address this, a suite of experimental and computational methods was employed to elucidate the influence of the electrolyte on electrochemical reaction rates. Reduction kinetics were found to be influenced by alkali-cation desolvation energetics, which favor larger alkali cations such as potassium. Through further development, amine-facilitated CO₂ conversion was also demonstrated to be transferable to other amine and solvent systems, opening a potentially large design space for developing improved electrolytes. Furthermore, the effect of operating temperature was investigated to evaluate the potential of this technology to integrate with practical CO₂ capture needs. While higher temperatures (40°C&lt;T&lt;70°C) improve the conversion kinetics of CO₂-loaded amines, device-level performance, especially in the low-rate regime, remains largely governed by the Li-electrolyte stability at elevated temperatures, which needs to be addressed in future work.; Lastly, CO₂ discharge activity as a function of electrolyte composition was also investigated in non-amine electrolytes for rechargeable Li-CO₂ batteries. 
In these systems, increased availability of the Li⁺ cation was found to be critical for supporting CO₂ activation and sustaining discharge to high capacities. Overall, the central advance of this thesis is the successful demonstration of using amine sorbents in an electrochemical context to activate new modes of CO₂ reactivity, establishing the feasibility of integrated CO₂ capture and conversion. This work not only provides a new reaction platform, but also proposes post-combustion concepts for storing CO₂ in solid phases that simultaneously achieve permanent CO₂ fixation and power delivery.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 234-253).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127053</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral engineering for solar-thermal and thermal-radiative systems</title>
<link>https://hdl.handle.net/1721.1/127052</link>
<description>Spectral engineering for solar-thermal and thermal-radiative systems
Huang, Yi, Ph.D., Massachusetts Institute of Technology.
Increasing the efficiency of power generation and reducing energy consumption are two important avenues for addressing the energy supply and global warming challenges we face today. Radiation from the sun and from terrestrial heat sources can be harvested for power generation. It is also an important heat transfer channel, which one can control in order to regulate the temperature of objects. In this thesis, we focus on strategies to harvest and control solar and thermal radiation, with the goals of (1) improving power generation efficiency using solar and thermal photovoltaics and (2) reducing the energy consumed to maintain human comfort in built environments. Solar radiation, one of the most abundant energy sources on Earth, is now harvested by photovoltaics around the world. While solar photovoltaics has already reached considerable efficiencies, there is still room for improvement. One fundamental limit in solar photovoltaics is the discarding of photons with energy smaller than the material bandgap. Another challenge for solar PV, due to the intermittent nature of solar power, is the lack of low-cost electricity storage systems that provide electricity on demand. Solar thermal systems, on the other hand, can dispatch energy on demand thanks to low-cost thermal storage. Hybrid systems that combine solar PV and solar thermal systems can potentially harvest solar energy at higher efficiency and provide a more dispatchable source of energy. In the first part of my thesis, we designed and experimentally tested a spectrally selective, thermally conductive component for use in such a hybrid solar PV-thermal system. The component directs part of the solar spectrum to the photovoltaics and absorbs the rest of the spectrum for use in a thermal system, thereby harvesting the entire solar spectrum with an energy conversion efficiency close to 23%, and with over 40% dispatchable electricity generated from thermal energy. 
The photovoltaic energy conversion efficiency can also be improved by recycling photons with energy smaller than the material bandgap. In a thermo-photovoltaic system, low-energy photons can be reflected back to the radiation source, so that the energy carried by these photons can be reused. Thermo-photovoltaic devices have also shown great potential to provide low-cost, dispatchable electricity when combined with high-temperature thermal storage systems and concentrated solar power. In the second part of my thesis, we designed and optimized a practical, crystalline-Si-based thermo-photovoltaic cell to be fabricated on double-side-polished wafers. The Si-based TPV cell, combined with a 2300 K gray radiator, can potentially reach 40% energy conversion efficiency. We evaluated and optimized the Si-TPV performance with comprehensive consideration of the components of the photovoltaic cell, including doping and junction depth, front and back surface fields, the passivation layer, the back reflector, front metallization, and tolerance to roughness introduced in fabrication. Experimental tests were conducted on doped Si samples with back reflectors, identifying potential pathways to further reduce optical and electrical losses. The maturity of Si PV technologies and their relatively low cost point to great promise for high-efficiency thermo-photovoltaic devices combined with high-temperature thermal energy storage. Thermal radiation is also integral to regulating the heat balance and temperature of the human body. Spaces in built environments are typically kept at near-ambient temperatures for human thermal comfort. However, heating and cooling of spaces consume 40% of the total energy used in the US. Instead of regulating the temperature of vast spaces, local regulation of heat near the human body can potentially save large amounts of energy. 
In the third part of my thesis, we study the use of fabrics to regulate the skin temperature of the human body by controlling the input and output radiation channels of the skin, an important yet largely under-studied channel for body temperature regulation. We then propose desired spectral properties of fabrics for both heating and cooling purposes, in both indoor and outdoor environments. Finally, we investigate, via both simulation and experiments, how the morphology and material of polymer-based fabrics can be used to achieve the desired spectral properties.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 224-239).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127052</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New overlapping finite elements and their application in the AMORE paradigm</title>
<link>https://hdl.handle.net/1721.1/127051</link>
<description>New overlapping finite elements and their application in the AMORE paradigm
Huang, Junbin, Ph.D., Massachusetts Institute of Technology.
The finite element method has become a fundamental analysis tool for modern science and engineering. Despite the great improvements in theory and application over the past decades, the need for regular conforming meshes in finite element analysis still requires much human effort in engineering practice. In this thesis we focus on designing novel finite element procedures to reduce the meshing effort expended on constructing a finite element model for solids and structures. The new meshing paradigm of "automatic meshing with overlapping and regular elements", the AMORE paradigm, has recently been formulated. In this paradigm, the finite elements interior to the domain of interest are undistorted traditional elements, and overlapping elements are used for the discretization near the boundaries. The overlapping of elements gives much freedom to the meshing procedure and results in a greatly reduced meshing effort. Two types of overlapping are investigated. In the first case we consider the overlapping of individual polygonal elements and propose new quadrilateral overlapping finite elements. The new formulation combines advantageous aspects of both traditional finite elements and meshless methods. The new overlapping finite elements, being insensitive to mesh distortions and giving high-order accuracy, are used to mesh the boundary regions. Such use leads to an effective meshing procedure, as desired. In the second case we study the overlapping of conforming finite element meshes. Each individual mesh spans a regular subdomain and is allowed to overlap with other meshes in any geometric form. Local fields on the individual meshes are then assembled using a partition of unity to give the global compatible field. 
This new scheme allows very convenient local meshing and enrichment, so that the meshes can be easily adapted to various geometric features and solution gradients at reasonable computational expense. We formulate the new schemes, analyze their convergence properties, and demonstrate their performance and their use in AMORE in the solution of various problems.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 129-134).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127051</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast learning and adaptation in control and machine learning</title>
<link>https://hdl.handle.net/1721.1/127050</link>
<description>Fast learning and adaptation in control and machine learning
Gaudio, Joseph Emilio.
As machine learning methods become more prevalent in society, problems of a dynamical nature will increasingly need to be considered, especially in the interactions of learning-based algorithms with the physical world. The dynamical nature of these problems may include time-varying regressors, necessitating new algorithms both in machine learning approaches and in real-time decision making in the presence of uncertainties using adaptive control approaches. Problems of stability, fast learning with analytical guarantees, and constrained nonlinear systems have to be addressed simultaneously. Some of these problems have to be addressed from a machine learning perspective, while others have to be dealt with using adaptive control approaches. Throughout, analytical guarantees must be considered in order to apply machine learning to decision making in real time, especially for safety-critical systems. This thesis develops fast learning and adaptation algorithms for problems that lie at the intersection of adaptive control and machine learning. From the point of view of adaptive control, this thesis derives algorithms that ensure fast parameter convergence with minimal overhead in computational complexity. In particular, algorithms with time-varying learning rates are employed to show fast parameter convergence with reduced requirements of persistent excitation, along with analysis for time-varying parameters. From the point of view of machine learning, this thesis derives algorithms that are applicable to real-time decision making. In particular, these algorithms ensure fast prediction convergence, a necessary feature for satisfactory behavior in real-time systems. Algorithms that take into account natural system constraints, such as input magnitude and rate saturation, are also derived in order to provide for stability and learning in physically constrained dynamical systems. Throughout the thesis, analytical guarantees for all algorithms are provided.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 249-264).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127050</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interface engineering for exfoliation and integration of heteroepitaxial III-V films</title>
<link>https://hdl.handle.net/1721.1/127049</link>
<description>Interface engineering for exfoliation and integration of heteroepitaxial III-V films
Kim, Yunjo.
Compound semiconductors composed of III-V alloys play an essential role in modern optoelectronics and high-frequency communication due to their superior optical properties and electron mobilities compared to silicon. However, compound semiconductor devices have not seen the same widespread use as silicon, owing in part to the significant cost of the substrates required for processing III-V devices. To encourage the adoption of III-V technology, many strategies have been proposed to enable monolithic integration of III-V devices on Si, compatible with Si CMOS technology. One approach to alleviating the cost of integrated III-V devices is to design a process that can selectively release the III-V device layer from its substrate for later heterointegration onto Si, while preserving the substrate for continuous reuse. In this work, we introduce two novel processes for the exfoliation of III-V devices that also enable reuse of the substrate for continuous processing. We first investigate a novel epitaxial process, termed 'remote epitaxy', that utilizes graphene as a platform for epitaxial growth while also enabling the release of III-V devices. In our work, we have found that monolayer graphene interfaced with a III-V substrate only mildly screens the atomic interaction between the substrate and the epitaxial film, enabling epitaxial films to grow with atomic registry to the parent substrate 'remotely' from the surface of the graphene. In addition, remote epitaxially grown films could be readily released from the graphene interlayer by application of mechanical stress to the film, preserving the integrity of both the substrate and the device layer. The process of remote epitaxy and exfoliation of epitaxial films was demonstrated for a range of III-V compound semiconductors: GaAs, InP, and GaP. Secondly, we investigate an alternative strategy for thin-film exfoliation and wafer reuse based on a previously established controlled spalling process. 
In controlled spalling, mechanical stress is imparted to a substrate by deposition of a stressed metal film in order to induce fracture of the substrate at a specified depth from the surface. Using this process, the device layer can be released from the substrate, but at the cost of damaging the substrate. In this work, we propose a non-destructive controlled spalling process that utilizes a mechanically sacrificial buffer layer between the substrate and the device layer where selective fracture can take place. Subsequent to exfoliation, the damaged buffer layer can be selectively etched to recover the original substrate for reuse. This process, termed cleavage-plane-assisted exfoliation (CPEx), was applied to a GaAs/Ge/GaAs system, utilizing a Ge epitaxial film as the mechanical sacrificial buffer for the exfoliation of GaAs epitaxial films and reclamation of the original GaAs substrate.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 78-82).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127049</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dexterous manipulation with simple grippers</title>
<link>https://hdl.handle.net/1721.1/127046</link>
<description>Dexterous manipulation with simple grippers
Chavan-Dafle, Nikhil (Nikhil Narsingh)
This thesis focuses on enabling robots, especially those with simple grippers, to dexterously manipulate an object in a grasp. The dexterity of a robot is not limited to the intrinsic capability of its gripper. The robot can roll the object in the gripper using gravity, adjust the object's pose by pressing it against a surface, or even toss the object in the air and catch it in a different pose. All these techniques rely on resources extrinsic to the hand: gravity, external contacts, or dynamic arm motions. We refer to such techniques collectively as "extrinsic dexterity". We focus on empowering robots to autonomously reason about using extrinsic dexterity, particularly pushes against external contacts. We develop mechanics and algorithms for simulating, planning, and controlling the motions of an object pushed in a grasp. We show that the force-motion relationship at contacts can be captured well with complementarity constraints, and that the mechanics of prehensile pushing in a general setting can be formulated as a mixed nonlinear complementarity problem. For computational efficiency, we derive an abstraction of the mechanics in the form of motion cones. A motion cone defines the set of object motions a pusher can induce through frictional contact. Building upon these mechanics models, we develop a sampling-based planner and an MPC-based controller for in-hand manipulation. The planner generates a series of pushes, possibly from different sides of the object, to move the object to a desired grasp. The controller generates local corrective pushes to keep the object close to the planned pushing strategy. With a variety of regrasp examples, we demonstrate that our planner-controller framework allows the robot to handle uncertainty in physical parameters and external disturbances during manipulation and successfully move the object to a desired grasp.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 117-124).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127046</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Land use and development over the long run</title>
<link>https://hdl.handle.net/1721.1/127038</link>
<description>Land use and development over the long run
Smith, Cory B.
This thesis comprises three essays on the role of land use in economic development over the long run. As is natural for many theses, "development" here is defined in a range of ways. The first essay considers the long-run effects of historical land concentration on agricultural investment and productivity in the frontier United States. The second essay considers how disruptions to agriculture in the US South, in the form of the boll weevil pest, changed the political economy of the Jim Crow South. The final essay considers the long run in the future, using agronomic microdata to assess the impact of climate change on agricultural productivity. The first chapter provides new evidence on the old question of how concentrating land in the hands of large landlords affects economic development. Despite their popularization as bastions of pioneer equality, America's frontier regions often exhibited highly concentrated patterns of land ownership. A patchwork of policies opened some areas to large-scale farming by absentee landlords but reserved others for settlement by small farmers. This paper studies the impacts of land concentration on the long-run development of the frontier United States using quasi-random variation in these allocation procedures. I collect a large database of modern property tax valuations and show that historical land concentration had persistent effects over a span of 150 years: lowering investment by 23%, overall property value by 4.4%, and population by 8%. I argue that landlords' use of sharecropping raised the costs of investment, a static inefficiency that persisted due to land market frictions. 
I find little evidence for other explanations, including elite capture of political systems. I use my empirical estimates to evaluate counterfactual policies, applying recent advances in combinatorial optimization to show that an optimal property rights allocation would have increased my sample's agricultural land values by $28 billion (4.8%) in 2017. The second chapter, joint with James Feigenbaum and Soumyajit Mazumder, studies the role of Hirschman's threat of "exit" during the Great Migration in the Jim Crow South. How do coercive societies respond to negative economic shocks? Since before the nation's founding, cotton cultivation shaped the politics and institutions of the South, including the development of slavery, the lack of democratic institutions, and intergroup relations between whites and blacks. We leverage the natural experiment generated by the boll weevil infestation of 1892-1922, which disrupted cotton production in the region. Panel difference-in-differences results provide evidence that Southern society became less violent and repressive in response to this shock, with fewer lynchings and less Confederate monument construction. Cross-sectional results leveraging spatial variation in the infestation and in historical cotton specialization show that affected counties had less KKK activity, higher non-white voter registration, and were less likely to experience contentious politics in the form of protests during the 1960s. To assess mechanisms, we show that the reductions in coercion were responses to African American out-migration. Even in a context of antidemocratic institutions, ordinary people can retain political power through the ability to "vote with their feet." The third chapter, joint with Arnaud Costinot and Dave Donaldson, looks at the long-run effects of climate change on agricultural productivity and land use. A large agronomic literature models the implications of climate change for a variety of crops and locations around the world. 
The goal of the present paper is to quantify the macro-level consequences of these micro-level shocks. Using an extremely rich micro-level dataset that contains information about the productivity, both before and after climate change, of each of 10 crops for each of 1.7 million fields covering the surface of the Earth, we find that the impact of climate change on these agricultural markets would amount to a 0.26% reduction in global GDP when trade and production patterns are allowed to adjust.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 203-215).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127038</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on redistributive fiscal policies and macroeconomics</title>
<link>https://hdl.handle.net/1721.1/127037</link>
<description>Essays on redistributive fiscal policies and macroeconomics
Spector, Mariano Eduardo.
This thesis consists of three chapters. Chapters 1 and 2 study redistributive fiscal policies. Chapter 3 analyzes the role of asymmetric information in frictional labor markets. Fiscal stimulus during the Great Recession consisted mainly of transfers, rather than government purchases. Chapter 1 analyzes the role of marginal propensities to consume (MPCs) in shaping the effect of such policies. I construct a continuous-time New Keynesian model with heterogeneous overlapping generations which allows for arbitrary MPC heterogeneity. I characterize the output multipliers of fiscal transfers and show that the role of MPCs is mainly to determine the timing of the fiscal stimulus. The relation between this timing and the cumulative effect on output is, however, ambiguous. Indeed, I show that transfers to low-MPC consumers may generate a higher cumulative effect on output. From a normative perspective, however, there is no ambiguity: with larger differences in MPCs, optimal policy can achieve macroeconomic stabilization with smaller welfare losses. In Chapter 2, I analyze redistributive policies when households are heterogeneous with respect to both their MPCs and their risk aversion. I characterize transfer multipliers in a model in which capital is subject to uninsurable idiosyncratic risk. Based on survey data, I assume that MPCs and risk aversion are positively correlated in the population. A redistribution from low-MPC, low-risk-aversion households to high-MPC, high-risk-aversion households creates two opposing effects: a higher mean MPC tends to stimulate aggregate demand, but an increase in mean risk aversion tends to depress asset prices, generating a negative income effect on consumption. In Chapter 3, I study a frictional labor market with horizontally differentiated workers. Firms have incomplete information about the skills of workers who apply to their vacancies. 
Workers self-insure against unemployment risk by applying to jobs for which their skills are not well suited. This decreases firms' incentives to create vacancies by deteriorating the quality of the average applicant. Workers thus impose a negative externality on each other, which makes the equilibrium inefficient. However, although workers apply to too many jobs, I show that unemployment can be too low or too high. Welfare-improving government policies are considered.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 223-227).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127037</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the law and economics of public institutions</title>
<link>https://hdl.handle.net/1721.1/127036</link>
<description>Essays on the law and economics of public institutions
Petkun, Jonathan Blake.
This thesis consists of three chapters on the legal and economic organization of large-scale public institutions. Each chapter uses the tools of applied microeconomics to explore how the legal and organizational structure of public institutions affects the behavior of actors inside and outside the organization, as well as the quality of the public goods provided. In other words, my thesis asks: how well do our public institutions function, or for whom do they function? Chapter 1 focuses on the inner workings of the United States civil justice system; Chapters 2 and 3 focus their attention on the U.S. military. In Chapter 1 I study a procedural reform in the U.S. federal trial courts. Recent court reform efforts in the U.S. and elsewhere have focused on speeding up what are perceived to be slow and burdensome civil justice systems. I study a Congressionally enacted reform known as the "six-month list," which uses social pressure to incentivize federal judges to decide cases more quickly. I construct an original dataset of nearly 500,000 federal district court motions, representing the approximate universe of summary judgment motions in federal civil cases for the period 2004-2014, and I exploit quasi-random variation in exposure to the six-month list in order to assess its causal effects on both the speed and quality of adjudications. My results indicate that the six-month list does indeed improve speed, though the effect is heterogeneous across judges, with judges who are young, non-white, or female being among the most responsive. Meanwhile, I find only mixed evidence of effects on the quality of adjudications. I interpret these results as consistent with a model of judicial behavior. Chapter 2 maintains a focus on public-sector personnel policy, but it shifts contexts from the U.S. 
In this chapter, which is the product of joint work with Christina Patterson and William Skimmyhorn, we study how the structure of common retention incentives affects employee quality in the U.S. military. This complements the existing literature on the determinants of public-sector worker quality, which has primarily focused on levels of compensation rather than on the structure of personnel policy and other non-wage incentives. We combine administrative data with quasi-random variation to find that low-ability soldiers are relatively more responsive to both lump-sum bonuses and early retirement benefits, and both effects are large enough to lower the organization's average ability level. We provide suggestive evidence that neither access to credit nor differences in personal discount rates explain these selection patterns. In Chapter 3, joint with Paul Goldsmith-Pinkham, we assess the potential for pecuniary externalities relating to military housing allowances. In providing housing to its troops, the U.S. military chooses between direct in-kind provision (i.e., on-base barracks and family housing) and cash transfers (i.e., lump-sum housing allowances). In areas with a high military share of the overall population, military housing policies can have potentially significant impacts on the local civilian housing market. Anecdotally, some worry that military housing allowances drive up local housing prices, making it difficult for civilians to compete with their military neighbors for affordable housing. 
We combine panel data on the evolution of ZIP-code-level military housing allowances and rental and house prices with plausibly exogenous changes to the military's housing allowance formula in order to identify pecuniary externalities. We find suggestive evidence that increases in local military housing allowance rates generate sizeable pecuniary effects, with a 1% increase in military housing allowances leading to a 0.25% increase in local house prices in areas with a nearby military base.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127036</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on innovation in health care markets</title>
<link>https://hdl.handle.net/1721.1/127035</link>
<description>Essays on innovation in health care markets
Oostrom, Tamar.
This thesis consists of three chapters on innovation in health care markets. The first chapter examines incentives in pharmaceutical innovation; the second explores selection in the response to recommendations in health care. The third chapter presents new evidence on determinants of recent drug overdose mortality. The first chapter examines the effect of financial incentives on reported drug efficacy in clinical trials. I leverage the insight that the exact same sets of drugs are often compared in different randomized control trials conducted by parties with different financial interests. I estimate that a drug appears 0.15 standard deviations more effective when the trial is sponsored by that drug's manufacturer, compared with the same drug in the same trial without the drug manufacturer's involvement. Publication bias explains a large share of this effect; observable characteristics of trial design and patient enrollment are less important. I find that the sponsorship effect decreases over time as pre-registration requirements were implemented. The second chapter, joint with Liran Einav, Amy Finkelstein, Abigail Ostriker, and Heidi Williams, presents evidence on the role of selection in considering whether and when to recommend screening for a particular disease. In the context of recommendations that breast cancer screening start at age 40, we show that responders to the age-40 recommendation are less likely to have cancer and have smaller tumors than women who self-select into screening at earlier ages. Responders to the age-40 recommendation also have less cancer than women who never screen, suggesting that the benefits of recommending early screening are smaller than if responders were representative of all covered individuals. The third chapter examines the role of declining community ties and social cohesion in the increase in drug overdose mortality over the past two decades. I assess the causal impact of declining religiosity on opioid deaths, instrumenting for religiosity with the Catholic sex-abuse scandal. I find that the recent decrease in religious employment can account for approximately one-third of the total current opioid mortality rate. The effects are concentrated in areas with higher Catholic rates before the scandal and among young adults.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 191-203).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127035</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in bank competition and credit policy</title>
<link>https://hdl.handle.net/1721.1/127034</link>
<description>Essays in bank competition and credit policy
Passarelli Giroud Joaquim, Gustavo.
This thesis estimates the effect of competition in the financial sector using both individual-level data and economic theory, and explores the role of credit policy in mitigating potential adverse effects of imperfect competition. The first essay uses heterogeneous exposure to large bank mergers to estimate the effect of bank competition on both financial and real variables in local Brazilian markets. Using detailed administrative data on loans and firms, we employ a difference-in-differences empirical strategy to identify the causal effect of bank competition. Following M&amp;A episodes, spreads increase and there is persistently less lending in exposed markets. We also find that bank competition reduces employment. We develop a tractable model of heterogeneous firms and concentration in the banking sector and show that the effects observed in the data are consistent with those predicted by the model.; Among other counterfactuals, we show that if the Brazilian lending spread were to fall to the world level, output would increase by approximately 5%. The second essay develops a contract-based model of industrial organization for markets characterized by information and other frictions (moral hazard, limited commitment, adverse selection, etc.) and different market structures (monopoly, oligopoly, competition), the latter driven by spatial costs, idiosyncratic preferences, and the number of financial service providers. We derive a likelihood estimator for the structural parameters that determine contracting frictions and market structure and apply it to the Townsend Thai data on small and medium enterprises and bank locations. Our model of production is microfounded and can thus be used for a broad set of counterfactuals. The third essay explores the role of credit policies in mitigating the effects of a lack of competition in the financial sector.; In many emerging markets, governments try to increase credit access and stimulate economic growth by imposing caps on lending rates. 
We analyze these policies by extending workhorse models with financial frictions to include a banking sector with market power. Caps are beneficial as they reduce credit costs, but are also harmful as they crowd out risky borrowers who can access credit only at high interest rates, and thus have an ambiguous effect on current output and capital accumulation. We show that the optimal policy to maximize steady-state welfare involves relatively high caps on a large share of bank loans. The optimal policy decreases output today, but increases capital accumulation through a lower cost of credit and thus output in the future. Thanks to tractable aggregation properties, the framework can be used to analyze a broad set of alternative credit policies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 284-290).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127034</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in environmental market design</title>
<link>https://hdl.handle.net/1721.1/127033</link>
<description>Essays in environmental market design
Rafey, William Minot.
This thesis consists of three essays on the design of environmental markets in the face of fast-approaching climate change. The first essay studies a water market used by irrigated farms inhabiting a connected river network in Australia's southern Murray-Darling Basin during a period of substantial environmental change (2007-2015). It uses new panel data to estimate shadow values of water for each farm from production functions identified with regulatory variation in river diversion caps. The estimates imply that observed water trades increased irrigated agricultural output by approximately 4-6%. Without this reallocation, output would be the same as under an 8-11% uniform reduction in water resources, roughly the median reduction predicted for this region under 1°C of global warming. The value of the water market is increasing and highly convex in water scarcity, with realized gains an order of magnitude greater during drought, concentrated in regions with stricter diversion limits and among farms with less rainfall. This suggests that retrospective analyses may understate the future value of trade in a changing climate and that a water market is an important institutional adaptation to climate change.; The second essay, written jointly with Daron Acemoglu, shows that, in a model without commitment to future policies, geoengineering breakthroughs can have adverse environmental and welfare effects by changing the (equilibrium) carbon price. In our model, energy producers emit carbon, which creates a negative environmental externality, and may decide to switch to cleaner technology. A benevolent social planner sets carbon taxes without commitment. Higher future carbon taxes both reduce emissions given technology and encourage energy producers to switch to cleaner technology. 
Geoengineering breakthroughs, which reduce the negative environmental effects of the existing stock of carbon, decrease future carbon taxes and thus discourage private investments in conventional clean technology. We characterize the conditions under which these advances diminish--rather than improve--environmental quality and welfare, and show that, given current estimates of costs and environmental damages, these conditions are likely to be satisfied.; The third essay, written jointly with Daniel Aronoff, introduces an empirical framework for valuing dynamic decentralized markets in environmental offsets. Such markets can provide flexibility to conserve a public good at lower cost, but raise concerns if offsets cannot perfectly substitute for the original public good. In our model, producers undertake long-run conservation activities to produce offsets, which they can either sell to individuals seeking to deplete the public good, or store costlessly on a ledger. The market clears in each period to maintain the total stock of the public good above a certain historical level. We estimate this model for a decentralized market in Florida wetlands, in which land developers purchase offsets from private producers to meet their obligations under the Clean Water Act. Our approach provides a way to (i) estimate the private gains from trade, (ii) predict the imperfect substitutability, in terms of flood risk, between extant wetlands and newly-created wetlands, and (iii) assess alternative market designs that preserve the original conservation objective but incorporate location-based pricing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127033</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the political economy of development</title>
<link>https://hdl.handle.net/1721.1/127032</link>
<description>Essays on the political economy of development
Montenegro, Mateo, Ph. D. (Juan Mateo Montenegro Zarama), Massachusetts Institute of Technology.
This thesis consists of three chapters which address different questions about the political economy of development. In the first chapter, Natalia Garbiras-Díaz and I study whether crowdsourcing technologies aimed at augmenting civil oversight of elections can increase electoral integrity. We report the results of two large-scale field experiments we designed to assess the effectiveness of online crowdsourcing technologies in increasing the engagement of civil society in electoral monitoring around elections in Colombia. In these experiments, we leveraged Facebook advertisements to encourage citizen reporting of electoral irregularities through official websites, and also varied whether candidates were informed about the campaign in a subset of municipalities. We find that these interventions had effects on two different margins.; In addition to the expected informational effects - whereby citizen reports increased and politicians reduced their engagement in electoral irregularities - the results highlight powerful salience effects, which operated by making electoral irregularities more top of mind for citizens. Specifically, the advertisements generated a large shift in vote share toward candidates perceived to be less corrupt and away from those perceived to be more corrupt. We argue that these salience effects are driven by a shift in voter preferences toward candidates they perceived as 'cleaner'. We formally test this hypothesis in a second, follow-up experiment in which we vary the salience of electoral irregularities in the advertisements sent through Facebook. As expected, we find that the advertisements featuring messages emphasizing the salience of electoral misdeeds generate a larger shift in the votes for 'cleaner' candidates than the ones only providing information about the reporting website.; The second chapter provides evidence on spillovers across enforcement activities. 
In particular, it shows that public audits, aimed at detecting and sanctioning corruption by public servants, increase tax compliance in Brazil. As a source of identification, it uses the geographic and time variation induced by a large-scale random audit program conducted by the Brazilian federal government on municipal governments throughout the 2003-2015 period. I begin by showing that municipalities audited in the past experience an increase in federal, but not municipal, tax collection. I show evidence that these effects operate through a state-capacity signaling channel, whereby audits, and the subsequent penal actions, act as signals of both the capacity and the willingness of the federal government to enforce the law in general, which induces citizens to increase tax compliance.; Consistent with this interpretation, I show that local information about the audits, such as that conveyed through local media or to neighboring municipalities, is key in determining the magnitude of these spillover effects across types of enforcement. The third chapter studies whether more decentralized public auditing institutions are better at increasing government accountability and reducing corruption than centralized ones. To answer this question, I exploit the exogenous variation in the level of decentralization of local auditing institutions created by Colombian law to implement a regression discontinuity design and study the empirical effects of decentralizing public auditing. Using data from third-party investigations of corruption, I find that more centralized auditors do a better job of curbing corruption than decentralized ones. 
This result is driven by types of corruption related to public procurement as well as 'influence peddling'.; Furthermore, I find that the 'effort' of public auditing institutions does not change with whether these institutions are decentralized, which validates the use of third-party investigations of corruption as a measure that does not confound the efforts of auditing institutions. Finally, I show evidence suggesting that the rules governing the appointment of decentralized auditors are an important mechanism explaining the results in this setting.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 157-169).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127032</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays in economics</title>
<link>https://hdl.handle.net/1721.1/127031</link>
<description>Three essays in economics
Anghel, Anca-Patricia.
This thesis consists of three chapters. The first chapter and second chapter contribute to the literature on consumer search, while the third chapter analyzes the effects of mandatory paid parental leave.; The optimal consumer behavior for a model with normal priors is characterized and the likelihood function is constructed. The learning model generates predictions that can be used to test whether consumers are learning. These predictions are confirmed by the data. Using clickstream data on the search and purchase behavior of consumers looking to book a hotel on Expedia, the theoretical model is leveraged to estimate consumer preferences, a clicking cost distribution, and prior belief parameters. Consumers have overoptimistic prior beliefs about the distribution of the unobservable characteristics compared to the true distribution. Thus, consumers click on more hotels than they would if they knew the true distribution. However, the estimated clicking costs are too high for consumers to click enough to find the optimal hotels for them. The results suggest that the consumer experience could be improved by personalized recommendations and an improved selection of hotels.; The second chapter studies identification of the search cost distribution in demand models with sequential search. I show that if consumers need to pay a search cost to figure out some product characteristics, aggregate data is not enough to fully non-parametrically identify the demand and the search cost distribution. I show that the distribution of the search cost can be identified in mixed logit models using only aggregate data. Moreover, even if micro data is available, the demand functions and search cost distribution cannot be identified fully non-parametrically. However, if data on the search behavior of consumers is available (clickstream data), I provide a proof of non-parametric identification of the demand function, distribution of utility, and search cost distribution. 
In the third chapter, I use data from the CPS March survey from 1990 to 2014 to analyze the effect of introducing mandatory paid parental leave.; It has been suggested that introducing mandatory paid parental leave will have a negative impact on women's wages, since women are more likely to take advantage of this policy and since this will result in increased costs to employers. The data is analyzed using the synthetic control method proposed by Abadie and Gardeazabal (2003). There are no significant short- or long-term effects on women's wages or employment rates following the introduction of this program.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127031</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The economics of fraud and corruption</title>
<link>https://hdl.handle.net/1721.1/127030</link>
<description>The economics of fraud and corruption
Leder-Luis, Jetson.
Fraud and corruption are serious issues that undermine the provision of public goods. This thesis consists of three papers which analyze the economics of fraud and the mechanisms by which it can be detected and averted. An introductory chapter presents an overview of the economic ideas surrounding these topics. In the first paper, I analyze a US federal law that incentivizes whistleblowers to litigate against fraud and misreporting committed against the Medicare program. I provide a theoretical framework for understanding the economic tradeoffs associated with privatized whistleblowing enforcement and then empirically analyze the deterrence effects of whistleblower lawsuits. In the second paper, conducted as joint research, we consider the incentives for misreported enrollment statistics in Israeli public school data and the way in which data manipulation undermines economic estimates of the returns to smaller class sizes. We provide evidence of enrollment manipulation and show that smaller class sizes have no effect on student achievement, overturning earlier literature. In the third paper, we develop a mechanism for detecting misreported financial data and apply it to reports from a World Bank project. Our results are consistent with strategic and profitable falsification of data, and our method matches the results of an audit conducted independently by the World Bank on the same project.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127030</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and systems for low power time-of-flight imaging</title>
<link>https://hdl.handle.net/1721.1/127029</link>
<description>Algorithms and systems for low power time-of-flight imaging
Noraky, James.
Depth sensing is useful for many emerging applications that range from augmented reality to robotic navigation. Time-of-flight (ToF) cameras are appealing depth sensors because they obtain dense depth maps with minimal latency. However, for mobile and embedded devices, ToF cameras, which obtain depth by emitting light and estimating its roundtrip time, can be power-hungry and limit the battery life of the underlying device. To reduce the power for depth sensing, we present algorithms to address two scenarios. For applications where RGB images are concurrently collected, we present algorithms that reduce the usage of the ToF camera and estimate new depth maps without illuminating the scene. We exploit the fact that many applications operate in nearly rigid environments, and our algorithms use the sparse correspondences across the consecutive RGB images to estimate the rigid motion and use it to obtain new depth maps.; Our techniques can reduce the usage of the ToF camera by up to 85%, while still estimating new depth maps within 1% of the ground truth for rigid scenes and 1.74% for dynamic ones. When only the data from a ToF camera is used, we propose algorithms that reduce the overall amount of light that the ToF camera emits to obtain accurate depth maps. Our techniques use the rigid motions in the scene, which can be estimated using the infrared images that a ToF camera obtains, to temporally mitigate the impact of noise. We show that our approaches can reduce the amount of emitted light by up to 81% and the mean relative error of the depth maps by up to 64%. 
Our algorithms are all computationally efficient and can obtain dense depth maps at rates up to real time on standard and embedded computing platforms.; Compared to applications that just use the ToF camera and incur the cost of higher sensor power, and to those that estimate depth entirely from RGB images, which are inaccurate and have high latency, our algorithms enable energy-efficient, accurate, and low-latency depth sensing for many emerging applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-158).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127029</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on production function estimation</title>
<link>https://hdl.handle.net/1721.1/127028</link>
<description>Essays on production function estimation
Demirer, Mert.
The first chapter develops a new method for estimating production functions with factor-augmenting technology and assesses its economic implications. The method does not impose parametric restrictions and generalizes prior approaches that rely on the CES production function. I first extend the canonical Olley-Pakes framework to accommodate factor-augmenting technology. Then, I show how to identify output elasticities based on a novel control variable approach and the optimality of input expenditures. I use this method to estimate output elasticities and markups in manufacturing industries in the US and four developing countries. Neglecting labor-augmenting productivity and imposing parametric restrictions mismeasures output elasticities and heterogeneity in the production function. My estimates suggest that standard models (i) underestimate capital elasticity by up to 70 percent and (ii) overestimate labor elasticity by up to 80 percent.; These biases propagate into markup estimates inferred from output elasticities: markups are overestimated by 20 percentage points. Finally, heterogeneity in output elasticities also affects estimated trends in markups: my estimates point to much more muted markup growth (about half) in the US manufacturing sector than recent estimates. The second chapter develops partial identification results that are robust to deviations from the commonly used control function assumptions and to measurement error in inputs. In particular, the model (i) allows for multi-dimensional unobserved heterogeneity, (ii) relaxes strict monotonicity to weak monotonicity, and (iii) accommodates a more flexible timing assumption for capital. I show that under these assumptions production function parameters are partially identified by an 'imperfect proxy' variable via moment inequalities. 
Using these moment inequalities, I derive bounds on the parameters and propose an estimator.; An empirical application is presented to quantify the informativeness of the identified set. The third chapter develops an approach in which endogenous networks are a source of identification in estimation with network data. In particular, I study a linear model where network data can be used to control for unobserved heterogeneity and partially identify the parameters of the linear model. My method does not rely on a parametric model of network formation. Instead, identification is achieved by assuming that the network satisfies latent homophily - the tendency of individuals to be linked with others who are similar to themselves. I first provide two definitions of homophily: weak and strong homophily. Then, based on these definitions, I characterize the identified sets and show that they are bounded under weak conditions.; Finally, to illustrate the method in an empirical setting, I estimate the effects of education on risk preferences and peer effects using social network data from 150 Chinese villages.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 193-201).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127028</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated optical phased arrays : augmented reality, LiDAR, and beyond</title>
<link>https://hdl.handle.net/1721.1/127027</link>
<description>Integrated optical phased arrays : augmented reality, LiDAR, and beyond
Notaros, Jelena.
Integrated optical phased arrays, fabricated in advanced silicon-photonics platforms, enable manipulation and dynamic control of free-space light in a compact form factor, at low costs, and in a non-mechanical way. As such, integrated optical phased arrays have emerged as a promising technology for many wide-reaching applications, including LiDAR sensors and augmented-reality displays. In this thesis, novel integrated-optical-phased-array devices, systems, results, and applications are presented. First, beam-steering optical phased arrays for LiDAR are shown, including the first beam-steering optical phased arrays powered by monolithically-integrated on-chip rare-earth-doped lasers, the first beam-steering optical phased arrays controlled using heterogeneously-integrated CMOS driving electronics, and the first single-chip coherent LiDAR with integrated optical phased arrays and CMOS receiver electronics.; These demonstrations are important steps towards practical commercialization of low-cost and high-performance integrated LiDAR sensors for autonomous vehicles. Next, integrated optical phased arrays for optical manipulation in the near field are developed, including the first near-field-focusing integrated optical phased arrays, the first quasi-Bessel-beam-generating integrated optical phased arrays, and a novel active butterfly architecture for independent amplitude and phase control. 
These near-field modalities have the potential to advance a number of application areas, such as optical trapping for biological characterization, trapped-ion quantum computing, and laser-based 3D printing.; Finally, a novel transparent integrated-phased-array-based holographic display is proposed as a highly-discreet and fully-holographic solution for the next generation of augmented-reality head-mounted displays; novel passive near-eye displays that generate holograms, the first integrated visible-light liquid-crystal-based phase and amplitude modulators, and the first actively-tunable visible-light integrated optical phased arrays are presented.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 129-139).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127027</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale high-density terahertz radiator and receiver Arrays on silicon chips</title>
<link>https://hdl.handle.net/1721.1/127026</link>
<description>Large-scale high-density terahertz radiator and receiver Arrays on silicon chips
Hu, Zhi, Ph. D., Massachusetts Institute of Technology.
Electromagnetic (EM) waves in the terahertz (THz) frequency range (usually defined as 10⁻¹ THz to 10¹ THz) have important applications in gas sensing, non-invasive imaging, and short-range ultra-high-data-rate wireline/wireless communications. Historically, due to their speed limit, silicon-based transistors were unable to reach this frequency range, so terahertz systems were built upon high-cost custom-made devices, which limited their application space. Currently, thanks to Moore's law, the f[subscript max] of many transistors in commercially-available processes has reached the terahertz region, making it possible to generate terahertz waves from DC on low-cost chips; due to limited gain, however, generating high-power terahertz waves remains a challenge.; Fortunately, the bright side, also unique to the terahertz frequency range, is that, given the small wavelengths of terahertz waves, a standalone on-chip antenna-integrated terahertz transmitting/receiving circuit block can be expanded into a large-scale two-dimensional array of such blocks, with the terahertz signals central to the operation of these blocks coherently coupled between all array elements. By doing so, for radiators, the total radiated power can be increased; for heterodyne detectors, the detection sensitivity improved; and for both, the beam directivity increased. The above argument outlines the possibility of such an array-building solution from the antenna dimension's point of view; it remains to be shown how such a solution is technically realizable - this is the primary goal of this thesis.; The major obstacle to implementing dense 2D terahertz arrays is the apparent "conflict" between the array element area limit dictated by the antenna spacing requirement and the large footprint of typical terahertz circuits. To this end, EM-circuit co-design approaches for realizing compact and multi-functional array elements are proposed. 
Based on these approaches, two chip prototypes were realized. The first chip is a 42-element two-dimensional resonator-coupled oscillator array with 91 harmonic-selective antennas for high-power 1-THz signal generation, which has demonstrated the highest total radiated power (7 dB higher than the state of the art) and the highest equivalent isotropically radiated power (23 dB higher than the state of the art) among all radiators near 1 THz. The second is a 32-element phase-locked two-dimensional local-oscillation-signal-coupled 240-GHz heterodyne detector array equipped with compact self-oscillating mixers.; It achieves an ~1200x sensitivity improvement and a 4x scale increase compared to the state-of-the-art terahertz heterodyne receiver array.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 139-145).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127026</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cyber-attack detection and resilient state estimation in power systems</title>
<link>https://hdl.handle.net/1721.1/127025</link>
<description>Cyber-attack detection and resilient state estimation in power systems
Jevtić, Ana,Ph. D.Massachusetts Institute of Technology.
Many critical infrastructures, such as transportation and electric energy networks and health care, are becoming highly integrated with information and communication technology in order to be more efficient and reliable. These cyber-physical systems (CPS) now face an increasing threat of cyber-attacks. Intelligent attackers can leverage their knowledge of the system and their disruption and disclosure resources to critically damage the system while remaining undiscovered. In this dissertation, we develop a defense strategy with the ability to uncover malicious and intelligent attacks and enable resilient operation of cyber-physical systems. Specifically, we apply this defense strategy to power systems, described by linear frequency dynamics around the nominal operating point. Our methodology is based on the notion of data aggregation as a tool for extracting internal information about the system that may be unknown to the attacker. As the first step toward resilience and security, we propose several methods for active attack detection in cyber-physical systems. In one approach, we design a clustering-based moving-target active detection algorithm and evaluate it against stealthy attacks on the 5-bus and 24-bus power grids. Next, we consider an approach based on Interaction Variables (IntVar), as another intuitive way to extract internal information in power grids. We evaluate the effectiveness of this approach on Automatic Generation Control (AGC), a vital control mechanism in today's power grid. After an attack has been detected, mitigation procedures must be put in place to allow continued reliable operation or graceful degradation of the power grid. To that end, we develop a resilient state estimation algorithm that provides the system operator with situational awareness in the presence of widespread coordinated cyber-attacks, when many system measurements may become unavailable.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 99-108).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127025</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relationships between functionality, security, and privacy for multiparty computation, hashing, and encryption</title>
<link>https://hdl.handle.net/1721.1/127024</link>
<description>Relationships between functionality, security, and privacy for multiparty computation, hashing, and encryption
LaVigne, Rio(Kristen Rio)
One of the fundamental goals of cryptography is to be able to offer security and privacy without sacrificing functionality. Cryptographers have been able to achieve the best of all three by exploiting the assumed hardness of some problems (e.g. discrete log), and have been able to build protocols for secure multiparty computation, collision-resistant hash functions, public key cryptography, and much more. This thesis explores three facets of this balance. First, we delve into Topology-Hiding Computation, which is multiparty computation where we also hide the communication network, strengthening the notion of privacy. Second, we study Property Preserving Hashing, which can be thought of as an extension of collision-resistant hashing where we add functionality. Finally, we explore Fine-Grained Cryptography, and develop a public key cryptosystem. In this model of cryptography, security takes on a much less restrictive role (e.g. adversaries must run in O(n¹⁰) time), but the protocols and security reductions must run in "fine-grained" time (e.g. less than O(n⁵)).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 157-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127024</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A hybrid approach towards on-chip visible lasers</title>
<link>https://hdl.handle.net/1721.1/127023</link>
<description>A hybrid approach towards on-chip visible lasers
Mahony, Thomas Stephen.
In recent years, the world of nanostructured optically active materials has expanded to include organic molecules; colloidal nanocrystals such as quantum dots, quantum rods, and quantum wells or nanoplatelets; perovskite semiconductors; and perovskite nanocrystals. A key feature of these materials is the capability to engineer their energy levels, e.g., via chemical composition or size, allowing for their absorption and emission spectra to be tuned throughout the visible and near-infrared electromagnetic spectrum. Many of these materials are deposited from solution, which makes them suitable for large-area technologies such as solar cells and light-emitting devices (LEDs) for displays. However, nanopatterning these materials and integrating them into photonic devices has proven difficult due to fabrication constraints. In this work, we demonstrate strategies for processing and nanopatterning organic molecules, colloidal quantum dots, and cadmium selenide nanoplatelets. We created nanobeam photonic crystal cavities that incorporate organic gain media, resulting in an ultracompact low-threshold organic laser. We combined colloidal quantum dots with polymethylmethacrylate (PMMA) to create suspended polymeric cavities that showed enhanced spontaneous emission from the quantum dots. By functionalizing surfaces, we achieved orientation control of nanoplatelet films. We also achieved the first demonstration of lithographically patterned nanoplatelet films, and we integrated them into silicon nitride photonics. We developed these processing and nanopatterning strategies while building architectures for on-chip lasers; nevertheless, these techniques have broad applicability to other technologies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 181-195).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127023</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topographic deep artificial neural network as a model of primate ventral visual stream</title>
<link>https://hdl.handle.net/1721.1/127022</link>
<description>Topographic deep artificial neural network as a model of primate ventral visual stream
Lee, Hyo-Dong.
Face processing in visual cortex has been widely studied and emphasized for its importance in primate survival and social communication. Monkey inferior temporal (IT) cortex contains neurons that respond preferentially to faces and cluster into several regions ("face patches") that together are referred to as the IT face processing network. While recent work has demonstrated that deep artificial neural networks (ANNs) optimized for object categorization are strong predictors of neuronal responses at corresponding levels of the primate ventral visual stream (V1, V2, V4, and IT), those models do not explain the spatial organization of those neurons in general or the organization of the IT face processing network in particular. In this work, we test whether a new class of ANNs can naturally reproduce the core phenomena of the IT face network, including the rich spatial topography of face-selective neurons. Specifically, we designed and successfully trained topographic deep artificial neural networks (TDANNs) to solve a real-world object recognition task and to also minimize a proxy for neuronal wiring costs within each of the two highest IT levels (cIT and aIT). We report that layers of the trained TDANNs corresponding to cIT and aIT cortex contain clusters of face-selective units and reproduce core phenomenology of the face-patch system, such as connectivity between clusters and the emergence of viewpoint invariance. We also found that the model IT face network emerged over a range of naturalistic experience during training, but not for highly unnatural experience. Taken together, these results argue that the functional organization of the ventral stream might be explained by the need for the visual system to perform general, real-world object categorization while also minimizing wiring costs over evolutionary and/or post-natal developmental time scales.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 103-109).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127022</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speech processing with less supervision : learning from weak labels and multiple modalities</title>
<link>https://hdl.handle.net/1721.1/127021</link>
<description>Speech processing with less supervision : learning from weak labels and multiple modalities
Hsu, Wei-Ning, Ph. D., Massachusetts Institute of Technology.
In recent years, supervised learning has achieved great success in speech processing with powerful neural network models and vast quantities of in-domain labeled data. However, collecting a labeled dataset covering all domains can be either expensive due to the diversity of speech or almost impossible for some tasks such as speech-to-speech translation. Such a paradigm limits the applicability of speech technologies to high-resource settings. In sharp contrast, humans are good at reading the training signals from indirect supervision, such as from a small amount of explicit labels and from different modalities. This capability enables humans to learn from a wider variety of resources, including those with better domain coverage. In light of this observation, this thesis focuses on learning algorithms for speech processing that can utilize weak and indirect supervision to overcome the restrictions imposed by the supervised paradigm and make the most out of the data at hand for learning.; In the first part of the thesis, we devise a self-training algorithm for speech recognition that distills knowledge from a trained language model, a compact form of external non-speech prior knowledge. The algorithm is inspired by how humans use contextual and prior information to bias speech recognition and produce confident predictions. To distill knowledge within the language model, we implement a beam-search based objective to align the prediction probability with the likelihood of the language model among candidate hypotheses. Experimental results demonstrate state-of-the-art performance that recovers word error rates by up to 90% relative to using the same data with ground truth transcripts.
Moreover, we show that the proposed algorithm can scale to 60,000 hours of unlabeled speech and yield further reductions in word error rates.; In the second part of the thesis, we present several text-to-speech synthesis models that enable fine-grained control of unlabeled non-textual attributes, including voice, prosody, acoustic environment properties, and microphone channel effects. We achieve controllability of unlabeled attributes by formulating a text-to-speech system as a generative model with structured latent variables, and we learn this generative process along with an efficient approximate inference model by adopting the variational autoencoder framework. We demonstrate that those latent variables can then be used to control the unlabeled variations in speech, making it possible to build a high-quality speech synthesis model using weakly-labeled mixed-quality speech data, as the model learns to control the hidden factors. In the last part of the thesis, we extend a cross-modal semantic embedding learning framework proposed in Harwath et al.; (2019) to learn hierarchical discrete linguistic units from visually grounded speech, a form of multimodal sensory data. By utilizing a discriminative, multimodal grounding objective, the proposed framework forces the learned units to be useful for semantic image retrieval. In contrast, most of the previous work on linguistic unit discovery does not use multimodal data--it considers a reconstruction objective that encourages the learned units to be useful for reconstructing the speech, and hence those units may also encode non-linguistic factors. Experimental results show that the proposed framework outperforms state-of-the-art phonetic unit discovery frameworks by almost 50% on the ZeroSpeech 2019 ABX phone discriminative task, and learns word detectors that discover over 270 words with an F1 score of greater than 0.5.
In addition, the learned units from the proposed framework are also more robust to nuisance variation compared to frameworks that learn from only speech.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 191-217).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127021</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive sensing of user behavior and Well-being at home</title>
<link>https://hdl.handle.net/1721.1/127020</link>
<description>Passive sensing of user behavior and Well-being at home
Hsu, Chen-Yu, Ph. D., Massachusetts Institute of Technology.
Learning people's behavior in their homes is central to health sensing, behavioral research, and building smarter environments. In this thesis, we explore learning such information in a passive and contactless manner - without asking people to wear sensors on their bodies or change the way they normally live. We leverage the fact that radio frequency (RF) signals bounce off people and carry information about them. This thesis presents systems, algorithms, and machine learning models to analyze the signals in the environment and infer information about people's behavior and well-being. Specifically, we analyze the surrounding RF signals to infer people's movement patterns and enable continuous monitoring of gait velocity and stride length. We also sense people's sleep efficiency, sleep onset, and nocturnal awakenings using radio signals, without any wearable devices. Further, we demonstrate that radio signals carry information about people's identity and body shape. This thesis introduces the first system that reconstructs a person's silhouette using RF signals. We then develop this system further to identify users in their homes with no restrictions on their movement patterns. This thesis also shows that the combination of identity and movements allows us to analyze user behavior and interaction at home, without asking users to write diaries or deploy cameras in their living space. Finally, we introduce a new self-supervised learning method to infer appliance usage at home. Collectively, the models and systems in this thesis provide a toolkit for learning behavioral analytics at home from the surrounding radio signals, addressing questions like who, what, and when, in a passive manner with minimal interference in users' lives.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127020</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low power circuits with integrated magnetics for sensors and energy harvesting systems</title>
<link>https://hdl.handle.net/1721.1/127019</link>
<description>Low power circuits with integrated magnetics for sensors and energy harvesting systems
Garcha, Preetinder(Preetinder Kaur)
The continued expansion of the Internet of Things has led to a proliferation of wireless sensors and systems across the globe. The application space for sensors is wide-ranging: from industry, to serve the upcoming era of Industry 4.0, to consumer products, like body-wearable sensors. The rise to billions of sensors relies on two key trends in sensor systems: miniaturization and energy efficiency. This work explores the use of integrated magnetics in microelectronics to enable low-power, energy-efficient sensing, as well as energy harvesting to power the sensors, in a compact form factor. For industrial applications, we present the design of a bandwidth-scalable, integrated fluxgate magnetic-to-digital converter for energy-efficient contactless current sensing in smart connectors. The system uses mixed-signal front-end design to enable duty cycling and quick convergence techniques, leading to a 20x reduction in power consumption at low bandwidths of 1 kHz for power monitoring. It also employs fast read-out circuits to achieve a bandwidth of 125 kHz for machine health diagnosis. For personal body-wearable electronics and beyond, we present the design of a cold-start system with integrated magnetics for ultra-low-voltage startup in thermal energy harvesting applications. The Meissner oscillator analysis with on-chip magnetics allows co-optimization of magnetics and circuits to achieve startup from as low as 25 mV input voltage to the circuits, despite 1000x lower inductance than off-chip transformers. Given the recent push towards artificial intelligence and a growing need for data, along with sensors to collect that data, we need to explore novel uses of technologies to meet the demands for small form factor and low-power operation as the number of sensors scales. The ideas presented in this thesis, with two very different applications of integrated magnetics technology, can contribute to the continued growth towards trillions of sensors.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127019</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secure analog-to-digital conversion against power side-channel attack</title>
<link>https://hdl.handle.net/1721.1/127018</link>
<description>Secure analog-to-digital conversion against power side-channel attack
Jeong, Taehoon.
At the interface between analog circuits and a digital processor, an ADC can create a critical hardware security loophole. By exploiting the power side-channel leakage of the ADC, an attacker can expose the private signal chain data. Having recognized this security threat, this thesis explores both aspects of the SAR ADC power side-channel attack (PSA): the attack method and its countermeasure. Firstly, this thesis proposes two neural-network-based SAR ADC PSA methods, based on multi-layer perceptron networks (MLP-PSA) and convolutional neural networks (CNN-PSA). When applied to a SAR ADC without PSA protection, the proposed attack methods decode the power supply current waveforms of the SAR ADC into the corresponding A/D conversion results with very high accuracy, demonstrating themselves as powerful ADC PSA methods. Secondly, this thesis proposes a current-equalizer-based SAR ADC PSA countermeasure. A 12-bit, 1.25MS/s prototype SAR ADC is implemented in 65nm CMOS technology as a proof of concept. With the proposed PSA countermeasure, the prototype SAR ADC demonstrated strong PSA resistance against MLP-PSA. Due to the second-order power side-channel leakage sources of a current equalizer, the prototype SAR ADC showed weaker PSA resistance against CNN-PSA, but generally protected a significant portion of the information from the attack.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 125-129).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127018</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-state spin-integrated circuits for quantum sensing and control</title>
<link>https://hdl.handle.net/1721.1/127017</link>
<description>Solid-state spin-integrated circuits for quantum sensing and control
Foy, Christopher (Christopher C.), Ph. D., Massachusetts Institute of Technology.
Spin systems are an increasingly important quantum-sensing platform. In particular, atomic defect centers in diamond called nitrogen-vacancy (NV) centers offer impressive room-temperature imaging capabilities for both magnetic fields and temperature. NV-based sensing platforms have found utility in solid-state physics, biological systems, and vector magnetometry. These applications highlight the immense promise of NV quantum sensors. Despite this promise, the use of NV centers within commercial devices remains limited to date, with many impediments to transitioning this platform from the laboratory. This thesis describes the development of solid-state spin-integrated circuits (S3IC) for quantum sensing and control with the overarching goal of creating scalable NV platforms. We present two major experiments that develop S3IC. These expand the application space of NV centers and improve device functionality. The first application was to develop an NV spin microscope capable of wide-field temperature and magnetic field imaging to elucidate functional device behavior at the microscopic scale. The second experiment was integrating the essential components of an NV spin microscope, spin control and detection, with integrated electronics. In this manner, S3IC combines the exceptional sensitivity of NV centers with the robustness and scalability of modern electronic chip-scale platforms. This co-integration of spin systems into integrated electronics shows a potential path for migrating previous proof-of-principle sensing demonstrations into affordable packages that demonstrate both much greater system integration and custom electronic architectures. In short, this work demonstrates advances in NV-ensemble quantum sensing platforms and establishes a foundation for future integration efforts, perhaps inspiring innovations in both the application space and the development of new quantum devices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 131-138).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127017</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical inference from dependent data : networks and Markov chains</title>
<link>https://hdl.handle.net/1721.1/127016</link>
<description>Statistical inference from dependent data : networks and Markov chains
Dikkala, Sai Nishanth.
In recent decades, the study of high-dimensional probability has taken center stage within many research communities, including Computer Science, Statistics, and Machine Learning. Very often, due to the process according to which data is collected, the samples in a dataset have implicit correlations amongst them. Such correlations are commonly ignored as a first approximation when trying to analyze statistical and computational aspects of an inference task. In this thesis, we explore how to model such dependencies between samples using structured high-dimensional distributions which result from imposing a Markovian property on the joint distribution of the data, namely Markov Random Fields (MRFs) and Markov chains. On MRFs, we explore a quantification of the amount of dependence, and we strengthen previously known measure concentration results under a certain weak dependence condition on an MRF called the high-temperature regime. We then apply our novel measure concentration bounds to improve the accuracy of samples computed according to a certain Markov Chain Monte Carlo procedure. We then show how to extend some classical results from statistical learning theory on PAC-learnability and uniform convergence to training data which is dependent under the high-temperature condition. Then, we explore the task of regression on data which is dependent according to an MRF under a stronger amount of dependence than is allowed by the high-temperature condition. We then shift our focus to Markov chains, where we explore the question of testing whether a certain trajectory we observe corresponds to a chain P or not. We discuss what is a reasonable formulation of this problem and provide a tester which works without observing a trajectory whose length contains multiplicative factors of the mixing or covering time of the chain P. We finally conclude with some broad directions for further research on statistical inference under data dependence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 259-270).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127016</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of phasor-field imaging</title>
<link>https://hdl.handle.net/1721.1/127015</link>
<description>Theory of phasor-field imaging
Dove, Justin(Justin Michael)
Phasor-field (P-field) imaging is a promising recent solution to the task of non-line-of-sight (NLoS) imaging, colloquially referred to as "seeing around corners". It consists of treating the oscillating envelope of amplitude-modulated, spatially-incoherent light as if it were itself an optical wave, akin to the oscillations of the underlying electromagnetic field. We present a formal analysis of P-field propagation using paraxial wave optics and demonstrate how it can be used to form images of hidden diffuse targets both computationally and with physical lenses. In both cases, we find that hidden target planes can be imaged at the modulation-wavelength diffraction limit, despite the presence of intervening diffusers. To model propagation through more general scenarios, we introduce the two-frequency spatial Wigner distribution and derive primitives that characterize its behavior. These primitives are used to analyze occlusion-aided imaging scenarios as well as to verify intuitive results in the geometric-optics limit. Consistent with prior work, we find that intervening occluders offer the potential to form convolutional images of hidden target planes, even in the absence of time-of-flight information. Additionally, we demonstrate how to extend our framework beyond the paraxial regime and include a thorough exploration of the effects of speckle, which we find are likely manageable in realistic scenarios.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 141-143).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127015</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure as simplification : transportation tools for understanding data</title>
<link>https://hdl.handle.net/1721.1/127014</link>
<description>Structure as simplification : transportation tools for understanding data
Claici, Sebastian.
The typical machine learning algorithm looks for a pattern in data and assumes that the signal-to-noise ratio of the pattern is high. This approach depends strongly on the quality of the datasets these algorithms operate on, and many complex algorithms fail in spectacular fashion on simple tasks by overfitting noise or outlier examples. These algorithms have training procedures that scale poorly in the size of the dataset, and their outputs are difficult to interpret. This thesis proposes solutions to both problems by leveraging the theory of optimal transport and proposing efficient algorithms to solve problems in: (1) quantization, with extensions to the Wasserstein barycenter problem and a link to the classical coreset problem; (2) natural language processing, where the hierarchical structure of text allows us to compare documents efficiently; (3) Bayesian inference, where we can impose a hierarchy on the label switching problem to resolve ambiguities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 169-187).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127014</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning with physical and power-spectral priors for robust image inversion</title>
<link>https://hdl.handle.net/1721.1/127013</link>
<description>Deep learning with physical and power-spectral priors for robust image inversion
Deng, Mo, Ph. D., Massachusetts Institute of Technology.
Computational imaging is the class of imaging systems that utilizes inverse algorithms to recover unknown objects of interest from physical measurements. Deep learning has been used in computational imaging, typically in supervised mode and in an End-to-End fashion. However, treating the machine learning algorithm as a mere black box is not the most efficient approach, as the measurement formation process (a.k.a. the forward operator), which depends on the optical apparatus, is known to us. Therefore, it is inefficient to require the neural network to learn, even in part, the system physics. Also, some prior knowledge of the class of objects of interest can be leveraged to make the training more efficient. The main theme of this thesis is to design more efficient deep learning algorithms with the help of physical and power-spectral priors.; We first propose the learning to synthesize by DNN (LS-DNN) scheme: a dual-channel DNN architecture, with each channel dedicated to the low or high frequency band, respectively, that splits, processes, and subsequently learns to recombine low and high frequencies for better inversion. Results show that the LS-DNN scheme largely improves reconstruction quality in many applications, especially in the most severely ill-posed cases. In this application, we implicitly incorporate the system physics through data pre-processing and the power-spectral prior through the design of the band-splitting configuration. We then propose to use the Phase Extraction Neural Network (PhENN) trained with a perceptual loss, based on feature maps extracted from pre-trained classification neural networks, to tackle the problem of phase retrieval under low-light conditions.; This essentially transfers the knowledge, or features relevant to classification, and thus corresponding to human perceptual quality, to the image-transformation network (such as PhENN).
We find that the commonly defined perceptual loss needs to be refined for low-light applications, to avoid strengthened "grid-like" artifacts and achieve superior reconstruction quality. Moreover, we investigate empirically the interplay between the physical and content priors in using deep learning for computational imaging. More specifically, we investigate the effect of the training examples on the learning of the underlying physical map and find that using training datasets with higher Shannon entropy is more beneficial: it guides the training to correspond better to the system physics, and thus the trained model generalizes better to test examples disjoint from the training set.; Conversely, if more restricted examples are used for training, the network can be undesirably guided to "remember" to produce outputs similar to those seen in training, making cross-domain generalization problematic. Next, we also propose to use deep learning to greatly accelerate the optical diffraction tomography algorithm. Unlike previous algorithms that involve iterative optimization, we present significant progress toward recovering 3D refractive index (RI) maps from a single-shot angle-multiplexing interferogram. Last but not least, we propose to use cascaded neural networks to incorporate the system physics directly into the machine learning algorithm, while leaving the trainable architectures to learn to function as the ideal proximal mapping associated with efficient regularization of the data. We show that this unrolled scheme significantly outperforms the End-to-End scheme in low-light imaging applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 169-182).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127013</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-area cell-tracking cytometry for biophysical measurements of single cells</title>
<link>https://hdl.handle.net/1721.1/127012</link>
<description>Large-area cell-tracking cytometry for biophysical measurements of single cells
Apichitsopa, Nicha.
The utility of single-cell biophysical markers is often limited by their low specificity and by the lack of techniques that can measure multiple biophysical characteristics of single cells. To address this challenge, I developed a multiparameter intrinsic cytometry approach that integrates multiple label-free biophysical measurements into a versatile (able to combine techniques across domains) and readily extensible (to measure more than two biophysical markers) platform for single-cell analysis. The proposed multiparameter cell-tracking intrinsic cytometry utilizes label-free microfluidic techniques to manipulate cells such that information regarding their biophysical properties can be extracted from their spatiotemporal positions. Furthermore, this technique utilizes cell tracking to extract and associate the biophysical markers for single cells. The specific instantiation of the cytometry platform can measure up to five intrinsic markers of cells, and it has facilitated the quantitative investigation of label-free cell profiles and the classification of cell types and functional states. The applications of this approach were extended by leveraging digital holographic microscopy and deep learning technologies to monitor cells over a large field of view, enabling rapid and high-throughput assessment of biophysical phenotypes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 96-106).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127012</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fourier analysis on the hypercube, the coefficient problem, and applications</title>
<link>https://hdl.handle.net/1721.1/127011</link>
<description>Fourier analysis on the hypercube, the coefficient problem, and applications
Ajjanagadde, Ganesh.
In this dissertation, we primarily study some problems that revolve around Fourier analysis. More specifically, we focus on the magnitudes of the frequency components. Firstly, we perform a study on the hypercube. It is well known that the Delsarte linear programming bounds provide rich information on the magnitudes of the Fourier coefficients, grouped by Hamming weight. Classically, such information is primarily used to attack coding problems, where the objective is to maximize the cardinality of a subset of a metric space subject to a minimum distance constraint. Here, we use it to study anticoding problems, where the objective is to maximize the cardinality of a subset of a metric space subject to a maximum distance (diameter) constraint. One motivation for such study is the problem of finding memories that are cheap to update, where the cost of an update is a function of the distance in the metric space. Such a view naturally supports the study of different cost functions going beyond hard diameter constraints. We work accordingly with different cost functions, with a particular emphasis on completely monotonic functions. Our emphasis is on the phenomenon of "universal optimality", where the same subset (anticode) simultaneously optimizes a wide range of natural cost functions. Among other things, our work here gives some answers to a question in computer science, namely finding Boolean functions with maximal noise stability subject to an expected value constraint. Secondly, we work with Fourier analysis on the integers modulo a number by drawing upon Nazarov's general solution to the "coefficient problem". Roughly speaking, the coefficient problem asks one to construct time domain signals with prescribed magnitudes of frequency components, subject to certain natural constraints on the signal. 
In particular, Nazarov's solution works with l[subscript p] constraints in time. This solution to the coefficient problem allows us to give an essentially complete resolution to the mathematical problem of designing optimal coded apertures that arises in computational imaging. However, the resolution we provide is for an l[subscript infinity] constraint on the aperture, corresponding to partial occlusion. We believe it is important to also examine a binary valued ({0, 1}) constraint on the aperture, as one does not need to synthesize partial occluders for such apertures. We therefore provide some preliminary results as well as directions for future research. Finally, inspired by the recent breakthroughs in understanding the d = 8, 24 cases of sphere packing and universal optimality in R[superscript d], we attempt to show that the associated lattices (E₈ and the Leech lattice for d = 8, 24 respectively) are also optimal for the problem of vector quantization in the sense of minimizing mean squared error. Accordingly, we develop a dispersion and anticoding based approach to lower bounds on the mean squared error. We also generalize Tóth's method, which shows optimality of the hexagonal lattice quantizer for d = 2, to arbitrary d. To the best of our knowledge, these methods give the first rigorous improved lower bounds for the mean squared error for all large enough d since the work of Zador over 50 years ago.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 155-164).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127011</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and transfer learning using information-theoretic approximations</title>
<link>https://hdl.handle.net/1721.1/127008</link>
<description>Representation and transfer learning using information-theoretic approximations
Qiu, David.
Learning informative and transferable feature representations is a key aspect of machine learning systems. Mutual information and Kullback-Leibler divergence are principled and very popular metrics to measure feature relevance and perform distribution matching, respectively. However, clean formulations of machine learning algorithms based on these information-theoretic quantities typically require density estimation, which could be difficult for high dimensional problems. A central theme of this thesis is to translate these formulations into simpler forms that are more amenable to limited data. In particular, we modify local approximations and variational approximations of information-theoretic quantities to propose algorithms for unsupervised and transfer learning. Experiments show that the representations learned by our algorithms perform competitively compared to popular methods that require higher complexity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 119-127).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127008</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed computation and inference</title>
<link>https://hdl.handle.net/1721.1/127007</link>
<description>Distributed computation and inference
Ramnarayan, Govind.
In this thesis, we explore questions in algorithms and inference on distributed data. On the algorithmic side, we give a computationally efficient algorithm that allows parties to execute distributed computations in the presence of adversarial noise. This work falls into the framework of interactive coding, which is an extension of error correcting codes to interactive settings commonly found in theoretical computer science. On the inference side, we model social and biological processes and how they generate data, and analyze the computational limits of inference on the resulting data. Our first result regards the reconstruction of pedigrees, or family histories, from genetic data. We are given strings of genetic data for many individuals, and want to reconstruct how they are related. We show how to do this when we assume that both inheritance and mating are governed by some simple stochastic processes. This builds on previous work that posed the problem without a "random mating" assumption. Our second inference result concerns the problem of corruption detection on networks. In this problem, we have parties situated on a network that report on the identity of their neighbors as either "truthful" or "corrupt." The goal is to understand which network structures are amenable to finding the true identities of the nodes. We study the problem of finding a single truthful node, give an efficient algorithm for finding such a node, and prove that optimally placing corrupt agents in the network is computationally hard. For the final result in this thesis, we present a model of opinion polarization. We show that in our model, natural advertising campaigns, with the sole goal of selling a product or idea, provably lead to the polarization of opinions on various topics. We characterize optimal strategies for advertisers in a simple setting, and show that executing an optimal strategy requires solving an NP-hard inference problem in the worst case.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 319-331).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127007</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infrastructures for secure multiparty computation</title>
<link>https://hdl.handle.net/1721.1/127006</link>
<description>Infrastructures for secure multiparty computation
Raghuraman, Srinivasan.
We study the problem of implementing an infrastructure for secure multiparty computation (MPC). The goal of our infrastructure is to enable reliable communication, secure computation and fair computation in a network. The desiderata for an infrastructure include reusability, transferability and fault-tolerance. It is not hard to see that the above criteria are fulfilled by infrastructures that we use in daily life, e.g., the infrastructure for online communication (e-mail, instant messaging, etc.) consisting of transatlantic undersea cables, routers, wireless access points, etc. We consider which cryptographic primitives would be good building blocks for a secure computation infrastructure. The first is reliable communication. We study the problem of almost-everywhere reliable message transmission. The goal is to design low-degree networks which allow a large fraction of honest nodes to communicate reliably even while linearly many nodes can experience byzantine corruptions and deviate arbitrarily from the assigned protocol. We consider both the worst-case and randomized corruption scheduling models. In the worst-case model, we achieve a log-degree network with a polylogarithmic work complexity protocol, improving over the state-of-the-art results that required a polylogarithmic-degree network and had a linear work complexity. In the randomized model, we improve upon the state of the art protocols, both in work-efficiency and in resilience. Next, we propose an infrastructure for secure computation, which would consist of OT channels between some pairs of parties in the network. We devise information theoretically secure protocols that allow additional pairs of parties to establish secure OT correlations using the help of other parties in the network in the presence of a dishonest majority. Our main technical contribution is an upper bound that matches known lower bounds regarding the number of OT channels necessary and sufficient for MPC. 
In particular, we characterize which n-party OT graphs G allow t-secure computation of OT correlations between all pairs of parties, showing that this is possible if and only if the complement of G does not contain the complete bipartite graph Kn-t,n-t as a subgraph. Finally, we study the problem of building an infrastructure for fair secure computation, where we guarantee that if any party receives the output of the secure computation, then all honest parties do as well. Toward this goal, we introduce a new 2-party primitive FSyX ("synchronizable fair exchange") and show that it is complete for realizing any n-party functionality with fairness in a setting where all n parties are pairwise connected by independent instances of FSyX. Additionally, a pair of parties may reuse a single instance of FSyX in any number of multiparty protocols (possibly involving different sets of parties).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 193-206).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127006</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning structure from unstructured data</title>
<link>https://hdl.handle.net/1721.1/127005</link>
<description>Learning structure from unstructured data
Sarkar, Tuhin.
This thesis develops statistical tools and algorithmic techniques for non-asymptotic system identification of dynamical systems from noisy input-output data. Specifically, we address the question: "For a fixed length of noisy data generated by an unknown model, what is the best approximation that can be estimated?" This is in contrast to traditional system identification, which answers the question of estimating the unknown model when the data length tends to infinity. The importance of such analyses and tools cannot be overstated in applications such as reinforcement learning, where a popular design principle is system identification for control. Typically, in such settings we are presented with two problems: first, we are given access only to a finite noisy data set; and second, the hidden state dimension or model order is unknown. The first problem limits our ability to comment on the finite time performance of estimation algorithms; and the second problem prevents appropriate parametrizations for model identification. The goal of this thesis is to address these issues for a large class of dynamical systems. The premise of our approach relies on the existence of suitable low order approximations of the true model that can be constructed from finite, albeit noisy, data. Since the true model order is a priori unknown, we simply estimate low order approximations of this model from data. The order of these approximations grows as we accumulate more data. By such a method, we construct consistent finite time estimators of the underlying data generating model. This principle of constructing low order estimates directly from data is different from the status quo of constructing the largest possible model and then performing a reduction procedure to obtain estimates. We show that in many cases our method outperforms existing algorithms in finite time.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 277-285).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127005</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and optimization in the face of data perturbations</title>
<link>https://hdl.handle.net/1721.1/127004</link>
<description>Learning and optimization in the face of data perturbations
Staib, Matthew James.
Many problems in the machine learning pipeline boil down to maximizing the expectation of a function over a distribution. This is the classic problem of stochastic optimization. There are two key challenges in solving such stochastic optimization problems: 1) the function is often non-convex, making optimization difficult; 2) the distribution is not known exactly, but may be perturbed adversarially or is otherwise obscured. Each issue is individually challenging enough to warrant a substantial accompanying body of work addressing it, but addressing them simultaneously remains difficult. This thesis addresses problems at the intersection of non-convexity and data perturbations. We study the intersection of the two issues along two dual lines of inquiry: first, we build perturbation-aware algorithms with guarantees for non-convex problems; second, we seek to understand how data perturbations can be leveraged to enhance non-convex optimization algorithms. Along the way, we will study new types of data perturbations and seek to understand their connection to generalization.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127004</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconducting nanowire electronics for alternative computing</title>
<link>https://hdl.handle.net/1721.1/127003</link>
<description>Superconducting nanowire electronics for alternative computing
Toomey, Emily.
With traditional computing systems struggling to meet the demands of modern technology, new approaches to both hardware and architecture are becoming increasingly critical. In this work, I develop the foundation of a power-efficient alternative computing system using superconducting nanowires. Although traditionally operated as single photon detectors, superconducting nanowires host a suite of attractive characteristics that have recently inspired their use in digital circuit applications for amplification, addressing, and memory. Here, I take advantage of the electrothermal feedback that occurs in resistively shunted nanowires to develop two new technologies: (1) A multilevel memory cell made by incorporating a shunted nanowire into a superconducting loop, allowing flux to be controllably added and stored; and (2) An artificial neuron for use in spiking neural networks, consisting of two nanowire-based relaxation oscillators acting analogously to the two ion channels in a biological neuron. By harnessing the intrinsic dynamics of superconducting nanowires, these devices offer competitive energy performance and a step towards bringing memory and processing closer together on the same platform.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 141-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127003</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representations for intelligent navigation in unfamiliar environments</title>
<link>https://hdl.handle.net/1721.1/127002</link>
<description>Representations for intelligent navigation in unfamiliar environments
Stein, Gregory Joseph.
The way an agent chooses to represent the world around it is fundamental to its ability to effectively interact with it. The work presented in this thesis is centered around the development of new representations that enable embodied agents to better understand the impact of their actions, so that they may plan quickly and intelligently in a dynamic and uncertain world. In this thesis, we focus on the problem of autonomous navigation in complex, unknown environments. Consider an embodied agent tasked with traveling to an unseen goal in minimum time. In general, effective navigation requires that the agent explicitly reason about portions of the environment it has not yet seen. But the world is intractably complex; exhaustively enumerating all environment configurations is impossible. Instead, we imbue an agent with the ability to more tractably make predictions about uncertainty by changing the way in which it represents its surroundings and the actions it uses to define navigation. This thesis makes three primary contributions. First, we introduce Learned Subgoal Planning, a decision-making paradigm that leverages high-level actions to make tractable predictions about unknown space via supervised learning and enable efficient computation of expected cost. Second, we apply recent progress in image-to-image translation to the task of domain adaptation for image data, allowing an agent to transfer knowledge acquired in simulation to the real world. Finally, we introduce a learned pseudosensor and accompanying probabilistic sensor model that estimates sparse structure within an agent's view from monocular images. Fusing these estimates during exploration of unknown environments, we enable map-building of unfamiliar environments from monocular images suitable for high-level planning with topological constraints.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 197-215).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127002</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical metrology and process control of quantum devices</title>
<link>https://hdl.handle.net/1721.1/126998</link>
<description>Statistical metrology and process control of quantum devices
Walsh, Michael P.
Quantum emitters, such as color centers (e.g., nitrogen-vacancy color centers in diamond), have a wide range of applications in quantum information processing, bioimaging, and quantum sensing. Such quantum emitters are typically addressed optically and store their quantum state as an electron spin that can subsequently be read out optically. For this process to work effectively, an efficient light-matter interaction must be achieved, which is difficult given the small interaction cross section of an atomic memory with the optical field. In this thesis, I address three problems that relate to the engineering of a quantum device. The first problem centers on the fact that most quantum emitters are randomly positioned throughout their host lattice, making it difficult to lithographically pattern structures intended to increase the light-matter interaction. While there is a non-zero chance that a small number of randomly aligned structures will coincide with randomly positioned emitters, when efforts to scale such a system are made, the yield drops exponentially. The second problem has to do with scaling. As systems scale up to larger sets of interacting qubits, it becomes increasingly necessary to produce quantum emitters with narrow optical transitions and long spin coherence times. The third problem is related to the development of tools to manage experiments and data in a more robust, team-centric, and structured manner. The automation of systems to measure qubits and devices that enables improvement of each step in the design process will be crucial if efforts to scale devices beyond a handful of qubits are to be successful. Here, I will review the progress that I made in each of these areas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 169-187).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126998</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive optical network architecture in dynamic environments</title>
<link>https://hdl.handle.net/1721.1/126997</link>
<description>Cognitive optical network architecture in dynamic environments
Zheng, Xijia.
Emerging network traffic requires a more agile network management and control system than today's networks provide to deal with dynamic network environments. The bursty and large data transactions introduced by new technological applications can cause both high costs and extreme congestion in networks: avoiding the prohibitive cost of massive over-provisioning means severe congestion during peak demand periods. The network management and control system must be able to sense traffic changes and reconfigure in a timely manner (in tens of milliseconds instead of minutes or hours) to use network resources efficiently. We propose the use of cognitive techniques for fast and adaptive network management and control of future optical networks. The goal of this work is to provide timely network reconfigurations in response to dynamic traffic environments and prevent congestion from building up. We make a simplified model of the expected traffic arrival rate changes as a multistate Markov process based on the characteristics of the dynamic, bursty, and high granularity traffic. The traffic is categorized into different network traffic environments by the length of the network coherence time, which is the time that the traffic is unvarying. The tunneled network architecture is adopted due to its advantage in reducing control complexity when the traffic volume is at least one wavelength. In the long coherence time regime, where traffic changes very slowly, the traffic detection performances of two Bayesian estimators and a stopping-trial (sequential) estimator are examined, based on the transient behaviors of networks. The stopping-trial estimator has the fastest response time to changes of traffic arrival statistics. We propose a wavelength reconfiguration algorithm with continuous assessment, where the system reconfigures whenever it deems necessary. The reconfiguration can involve addition or subtraction of multiple wavelengths. 
Using the fastest detection and reconfiguration algorithm can reduce queueing delays during traffic surges without over-provisioning, and thus can reduce network capital expenditure and prevent wasting resources on erroneous decisions when surges occur. For traffic with moderate coherence time (where traffic changes at a moderate rate) and short coherence time (where traffic changes quickly), the stopping-trial estimator still responds to traffic changes with a short detection time. As long as the inter-arrival times of traffic transactions are independent, the algorithm is still optimum. The algorithm makes no assumptions about the exact network traffic distribution, avoiding having to sense and estimate detailed arrival traffic statistics. To deal with fast-changing traffic, we model the transient convergent behaviors of network traffic drift as a result of traffic transition rate changes and validate the feasibility and utility of the traffic prediction. In a simple example where the network traffic rate changes monotonically in a linear model, the sequential maximum likelihood estimator will capture the traffic trend with a small number of arrivals. The traffic trend prediction can help to provide fast reconfiguration, which is very important for maintaining quality of service during large traffic shifts. We further investigate the design of an efficient rerouting algorithm to maintain users' quality of service when the incremental traffic cannot be accommodated on the primary path. The algorithm includes the fast reconfiguration of wavelengths in the existing lit and spatially routed fibers, and the setting up and lighting of new fibers. Rerouting is necessary to maintain users' quality of service when the queueing delay on the primary path (determined by shortest path routing) exceeds the requirement. Our algorithm triggers reconfiguration when a queueing delay threshold is crossed on the primary path. 
A threshold on the queueing delay is used as the trigger due to its simplicity, and it is directly measurable from the exact traffic transaction sizes and the queue size, which reflect both the current network traffic environment and the network configurations. A dynamic rerouting algorithm implemented with a shortest path algorithm is proposed to find the secondary paths for rerouting. We make the conjecture that it is desirable for the alternate paths for rerouting to have small numbers of hops and to be disjoint from other busy paths when the hops on the path are independent. In addition, the conjecture suggests that a good candidate network topology should have high edge-connectivity. Wavelength reservation for rerouted traffic does not maximize wavelength utilization. We make the conjecture that traffic with different sizes should be broken up into multiple classes with dedicated partitioned resources, and that the queueing delay should be normalized by the transmission time for rerouting triggering, to realize better network utilization.
Thesis: Ph. D. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 149-154).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126997</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on uninsured income risk, lumpy investment and aggregate demand</title>
<link>https://hdl.handle.net/1721.1/126996</link>
<description>Essays on uninsured income risk, lumpy investment and aggregate demand
Zorzi, Nathan Gaspar.
This thesis consists of three chapters on uninsured income risk, lumpy investment and aggregate demand. The first chapter analyzes the non-linear response of durable spending to income shocks. Empirically, the average marginal propensity to spend (MPC) on durable goods increases with the size of income changes. I investigate whether a canonical model of lumpy durable investment with incomplete markets can replicate this fact. I first clarify analytically the source of non-linearity in this model, and I show that its sign depends on the relative strength of the extensive and intensive margins of durable adjustment. In numerical exercises, I find that the extensive margin dominates quantitatively, so that the model generates the form of non-linearity observed in the data. However, the magnitudes predicted by this canonical model are substantially lower than their empirical counterparts. I suggest various avenues to improve the quantitative performance of the model. The second chapter investigates the general equilibrium implications of this form of non-linearity. I recognize that durable spending is strongly pro-cyclical, that workers employed in durable sectors have a more cyclical labor income than those employed in nondurable sectors, and that workers are imperfectly insured against these fluctuations. In turn, the average MPC on durables increases with income changes, so that this redistribution of labor incomes across sectors has aggregate effects. To formalize and quantify this mechanism, I develop a heterogeneous agent New Keynesian (HANK) model with multiple sectors and lumpy durable adjustment. There is no labor mobility between sectors and financial markets are incomplete, so that durable workers are more exposed to aggregate shocks. 
I first show analytically that the interaction between cyclical investment and redistribution amplifies the aggregate response of durable spending during booms and dampens it during recessions. I then quantify the importance of this mechanism using my structural model. The third chapter focuses on the cyclical reallocation of workers across sectors or occupations. Specifically, I explore how uninsured income risk and liquidity frictions can hinder the efficient matching between workers and occupations. I investigate this question in a continuous-time Lucas-Prescott economy with incomplete markets. In this setting, uninsured income risk induces labor misallocation across occupations through two channels. First, it reduces workers' incentives to search (ex ante) for an occupation where they have a strong comparative advantage. Second, it induces excess separation (ex post) by forcing productive households to leave their occupation when their liquidity buffers are depleted. In general equilibrium, labor misallocation endogenously exacerbates the effect of uninsured income risk by depressing the value of equity that workers use as liquidity buffers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126996</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Labor supply and accounting firm mergers</title>
<link>https://hdl.handle.net/1721.1/126982</link>
<description>Labor supply and accounting firm mergers
Abramova, Inna, Ph. D., Massachusetts Institute of Technology.
In this paper, I study how regulation-induced accounting labor supply shocks affect the audit market. Using a novel dataset that includes both large and small accounting firms, I identify labor supply shocks using the 150-Hour Rule and the Mobility Provision and investigate the resulting incidence of mergers and acquisitions (M&amp;A). I find that a reduction in labor supply increases accounting firms' M&amp;A activity and leads to a higher audit market concentration. My results suggest that accounting firm growth decisions and audit market structure depend on the supply of labor.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 36-39).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126982</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on status recognition and its consequences for top-talent mobility and productivity</title>
<link>https://hdl.handle.net/1721.1/126981</link>
<description>Essays on status recognition and its consequences for top-talent mobility and productivity
Bond, Brittany M.
Organizations increasingly rely on status recognition to motivate members toward higher performance. Yet status recognition inevitably invites social comparisons. Although research in organization theory and strategy has focused on the returns to, antecedents of, and relative advantages of status recognition, whether, when, and to what extent the benefits of bestowing status recognition outweigh the costs of social comparison remain open questions. My dissertation contributes to this scholarship through experimental field and archival research that illuminates the unexpected ways status recognition influences motivation, mobility, and productivity. In my first essay, I identify how the preservation of self-image leads employees to make costly employer exits in response to nominal status under-recognition, even when there are no material, career, or reputation concerns at stake. In my second essay, I demonstrate how highly relational managers are more likely to artificially inflate employee performance evaluations, how this over-valuation leads to persistent underperformance, and how structured management can counteract this downside of close managerial relationships. My third essay (coauthored with Ethan J. Poskanzer) demonstrates how specialists' productivity improves after engaging in tasks in which they are recognized as relatively inexpert, relative to their teammates and their area of specialization. The settings I study in this dissertation pertain to professionals operating in high-status organizations: a highly competitive multinational pharmaceutical company and Major League Baseball. Overall, my dissertation contributes to our understanding of how status recognition influences motivation, mobility, and productivity in unexpected ways, among top-talent professionals in particular. This research has implications for organizational and strategy research on social status, motivation, and the management of performance review systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126981</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>How should we measure the digital economy?</title>
<link>https://hdl.handle.net/1721.1/126980</link>
<description>How should we measure the digital economy?
Collis, Avinash.
Gross domestic product (GDP) measures production and is not meant to measure well-being. While many people nonetheless use GDP as a proxy for well-being, consumer surplus is a better measure of consumer well-being. This is increasingly true in the digital economy, where many digital goods have zero price and, as a result, the welfare gains from these goods are not reflected in GDP or productivity statistics. Chapter 1 proposes a way of directly measuring consumers' economic well-being using massive online choice experiments. It finds that digital goods generate a large amount of consumer surplus that is currently not captured in GDP. For example, the median Facebook user required compensation of around $48 to give it up for a month. Building on these results, Chapter 2 extends the GDP framework to include welfare gains from new and free goods and constructs a new metric called GDP-B, where B stands for benefits. It finds that including the welfare gains from Facebook would have added between 0.05 and 0.11 percentage points to GDP-B growth per year in the US. Chapter 3 proposes a way of measuring network effects on multi-sided platforms using choice experiments. It also models digital platforms allowing for heterogeneity in demand elasticity and network effects across users of different types. It then calibrates the model using an empirical application to Facebook and simulates six different taxation and regulatory policies. Chapter 4 looks at the impact of social media on subjective well-being and academic performance through a randomized controlled trial of university students. Chapter 5 summarizes the research agenda moving forward and concludes with a framework for measuring different aspects of well-being in the digital economy.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, June, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126980</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/126979</link>
<description>Essays in financial economics
Lu, Fangzhou, Ph. D., Massachusetts Institute of Technology.
In Chapter 1, I document that there is a high correlation between the returns of cryptocurrencies and those of utility tokens, which are claims to products and services yet to be developed that are issued through ICOs and traded on crypto-exchanges. I demonstrate the presence of a numeraire effect in the pricing of these tokens and present evidence that it is driven by a combination of group thinking and representativeness bias. Investors mistakenly overestimate the probability that a cryptocurrency-denominated token is issued by a blockchain firm, and thus believe the fundamental value of the token is correlated with that of Bitcoin. I show that a 1% increase in the return on Bitcoin during the month before a token first lists on a crypto exchange predicts a 5% higher ICO return for a cryptocurrency-denominated token than for a fiat-currency-denominated token.; If a token is denominated in a cryptocurrency on one exchange and its otherwise identical twin is denominated in a fiat currency on another exchange, then a 1% increase in the cryptocurrency return relative to the fiat currency predicts a 60 bp divergence in their prices. In Chapter 2, I show that consistent with being driven by a combination of group thinking and representativeness bias, the numeraire effect is more pronounced for tokens with more complex business plans. Moreover, experimental evidence corroborates these empirical findings and suggests that the numeraire effect is present in other asset prices as well and can explain home-currency bias. The combination of high volatility and numeraire effects undermines the ability of cryptocurrencies to serve as units of account. In Chapter 3, I demonstrate that debt owed to family and friends (DOFF) is a major component of household and entrepreneurial finance, particularly in developing countries.; However, such informal finance carries with it an implicit covenant that can cause households to forgo durable-good consumption. 
This is because durable-good consumption can be perceived by the lender as a misuse of funds and can result in social sanctions or debt recall. This paper uses China's Vehicle Scrappage Program (VSP) as a laboratory in which to study the causal link between DOFF and consumption. Merging survey data on Chinese household balance sheets with bid prices from China's online used-car markets, I find that DOFF on the balance sheet significantly reduces the probability that eligible households participate in the VSP and trade in their clunkers for new cars. Further, I find that this negative effect of DOFF on consumption is significantly mitigated by the presence of formal features such as a written contract, a pre-determined debt repayment schedule, or a positive interest rate.; Together these results suggest that developing more formal channels for household finance can lead to increases in consumption. This is particularly important for developing countries such as China, where low consumption rates impede economic growth.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126979</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/126978</link>
<description>Essays in financial economics
Meeuwis, Maarten.
This dissertation consists of three essays in financial economics, with a focus on household financial decisions and their implications for asset pricing and macroeconomic dynamics. In Chapter 1, I use data on the portfolio holdings and income of millions of US retirement investors to show that positive and persistent shocks to income lead to a significant increase in the equity share of investor portfolios, while increases in financial wealth due to realized returns lead to a small decline in the equity share. In a standard homothetic life-cycle model with human capital and constant risk aversion, the portfolio responses to these two wealth shocks should be of equal magnitude and opposite sign. The positive net effect in the data is evidence for risk aversion that decreases in total wealth. In Chapter 2, I show that decreasing relative risk aversion preferences have significant long-run implications for inequality and asset prices.; I estimate the structural parameters of a life-cycle consumption and portfolio choice model that accounts for inertia in portfolio rebalancing. The model matches reduced-form estimates of the portfolio responses to wealth shocks with a significant degree of non-homotheticity in risk preferences, such that a 10% permanent income growth leads to a decrease in risk aversion by 1.7%. I find that decreasing relative risk aversion in the model doubles the share of wealth at the top, as equity is concentrated in the hands of the wealthy. The model also implies that rising income inequality in the US has led to a 15% decline in the equity premium over the past three decades. 
In joint work with Jonathan Parker, Antoinette Schoar, and Duncan Simester, we document in Chapter 3 how agents who believe in different models of the world change their investment behavior differently in response to a public signal.; We use a proprietary dataset of the portfolio holdings of millions of US households and identify ex ante households that hold different models of the world using political party affiliation (probabilistically inferred from zip code). Our public signal is the unexpected outcome of the 2016 US national election. Relative to Democrats, Republican investors actively increase the equity share and market beta of their portfolios following the election. The rebalancing is due to a small share of investors making large adjustments. We conclude that this behavior is driven by belief heterogeneity, since we control extensively for differential hedging needs and preferences, including detailed controls for age, wealth, income, state, and even county-employer fixed effects.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126978</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social exchange and valuations in the market for contemporary art</title>
<link>https://hdl.handle.net/1721.1/126977</link>
<description>Social exchange and valuations in the market for contemporary art
Riley, James Whitcomb.
The first essay draws on 18 months of ethnographic fieldwork to examine the puzzle of why galleries discipline collectors, who provide much-needed financial capital, for appearing too motivated by profit. Whilst art worlds have strong norms that enjoin artists to avoid the naked pursuit of profit and instead affect an air of "disinterestedness" (that is, a concern only for universal virtues and aesthetic qualities such as truth and beauty), why might art dealers demand that collectors similarly conform to such norms? This study addresses how (and why) galleries enforce conformity to the art-world norm of disinterestedness among collectors as part of an array of tactics they deploy to "protect" their artists from price volatility that could depress demand for the artist's work. The findings suggest a paradoxical resolution. Although galleries framed such discipline as a moral imperative, a key implication of this study is that enforcing a norm that disavows extrinsic rewards such as fortune and fame ultimately supports a profitable business and investment strategy.; The second essay (coauthored with Ezra W. Zuckerman Sivan) also draws on an 18-month ethnographic investigation examining the rise and proliferation of International Art Fairs (IAFs) in the global art market. This study contributes to our understanding of how the construction and extension of market platforms shapes market dynamics. On the surface, the explosive growth of IAFs in the contemporary art market reflects the greater efficiency that market platforms typically offer, both for facilitating exchange and for expanding access. But past research on market construction does not prepare us for either of the two main findings of this paper. The first is that market participants (and especially the mid-size galleries that dominate the fairs) are deeply ambivalent about the fairs' value relative to the cost of participation.
The second main finding, that galleries (and others) believe they must participate in order to be visible in the market, affords insight into how markets vary in their visibility and opacity; how such variation shapes status competition; and how markets that are designed to increase efficiency may
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Page 115 blank. Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126977</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of financial reporting on strategic investments : evidence from purchase obligations</title>
<link>https://hdl.handle.net/1721.1/126976</link>
<description>The effect of financial reporting on strategic investments : evidence from purchase obligations
Noh, Suzie.
I examine whether mandating the disclosure of investments influences firms' strategic interactions. I exploit an SEC regulation requiring firms to report off-balance sheet purchase obligations, such as commitments to inventory purchases, CAPEX, R&amp;D, and advertising. Motivated by theory on strategic investments, I predict and find that firms respond to the regulation by increasing investments if they have substitutive product market strategies with competitors, and decreasing investments if they have complementary strategies. This two-way finding is consistent with firms using investments to influence competitors' behavior in ways that increase their own profits. I show that changes in investments are concentrated among firms with large market share, which have a greater ability to influence competitors' actions, and that they have real effects on firms' sales and profit margins. Collectively, my results illustrate a novel channel through which financial reporting shapes firms' investments and competition.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 51-53).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126976</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the evaluation of novel ideas</title>
<link>https://hdl.handle.net/1721.1/126974</link>
<description>Essays on the evaluation of novel ideas
Wahlen, Jesse Michael.
In this dissertation, I examine the evaluation of novel ideas in two essays. In the first, I, along with Hong Luo and Jeffrey Macher, study a novel, low-cost approach to aggregating judgment from a large number of industry experts on ideas that they encounter in their normal course of business in the film industry, where customer appeal is difficult to predict and investment costs are high. The Black List, an annual publication, ranks unproduced scripts based on anonymous nominations from film executives. This approach entails an inherent trade-off: low participation costs enable high response rates, but nominations lack standard criteria, and which voters see which ideas is unobservable and influenced by various factors.; Despite these challenges, we find that such aggregation is predictive: listed scripts are substantially more likely to be released than observably similar but unlisted scripts, and, conditional on release and investment levels, listed scripts generate higher box office revenues. We also find that this method mitigates entry barriers for less-experienced writers: (i) their scripts are more likely to be listed and to rank higher if listed; (ii) being listed is associated with a higher release rate. Yet the gap in release probabilities relative to experienced writers remains large, even for top-ranked scripts. In the second essay, I move from the question of the abstract value of an idea to its relative value given one's past and resources. In markets that require architectural changes from established firms, prior research claims that incumbents must best the young firms imprinted by the new technology at the dominant design.; If they cannot, they should seek to avoid the market altogether. In contrast, this paper suggests that rather than compete for the dominant design, incumbents can succeed by focusing on niche segments that are proximate to their core business.
This paper finds support for this strategy in console game makers' response to the rise of mobile games. Using a novel method to measure distance from the dominant design, the analysis shows that deviating from the dominant design was associated with greater revenue for console publishers, while mobile publishers observed the opposite result.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126974</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on workplace practices in different institutional settings</title>
<link>https://hdl.handle.net/1721.1/126970</link>
<description>Essays on workplace practices in different institutional settings
Yang, Duanyi, Ph. D., Massachusetts Institute of Technology.
This dissertation consists of three essays investigating how organizational policies operate within different institutional contexts and in the face of migration, demographic shifts, and globalization. The first essay examines why, given apparent widespread violations, some migrant workers choose not to pursue remedies. Using survey data from China, I find that only one fourth of surveyed workers who experience labor law violations interpret their experiences as labor rights violations, and that workers' social relationships with their employers prior to migration explain some of this gap. This essay extends the worker-grievance research tradition within labor relations by drawing on research from the sociology of law and immigration to understand how these subjective interpretive processes and social identities outside of the workplace influence grievance behaviors. The second essay investigates whether flexible working time policies reduce the likelihood that individuals leave their employer.; Using linked employer-employee data from Germany, I find that by addressing mothers' needs at a critical period in their lives, flexible working time policies encourage mothers of young children to both remain in the labor force and continue building their careers in a given establishment, even in a context with extensive state policies that support work-family reconciliation. Further, I find flexible working time policies reduce young workers' likelihood of turnover. This suggests the policies can play an important role in helping young workers develop their human capital and advance their careers. The third essay studies an international self-regulatory initiative focused on labor standards, the SA8000 social responsibility certification. Using industrial microdata from China, we find that firms that self-regulated exhibited higher average wages than non-adopters, even in a context without effective surveillance and sanctions.
To explain this puzzle, we theorize about self-regulation in pursuit of reputation-sensitive buyers. These buyers privately monitor their suppliers, making up for deficiencies in the broader institutional environment and reducing the expected returns of low-road firms bribing their way into self-regulatory institutions. Consistent with our theory, we find exports increased markedly after adopting self-regulation and domestic sales did not. This essay also provides further specification of the challenges of improving labor standards privately through supply chain standards.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126970</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic marketing policies : constructing Markov states for reinforcement learning</title>
<link>https://hdl.handle.net/1721.1/126961</link>
<description>Dynamic marketing policies : constructing Markov states for reinforcement learning
Zhu, Yuting (Scientist in business management), Massachusetts Institute of Technology.
Many firms want to target their customers with a sequence of marketing actions, rather than just a single action. We interpret sequential targeting problems as a Markov Decision Process (MDP), which can be solved using a range of Reinforcement Learning (RL) algorithms. MDPs require the construction of Markov state spaces. These state spaces summarize the current information about each customer in each time period, so that movements over time between Markov states describe customers' dynamic paths. The Markov property requires that the states are "memoryless," so that future outcomes depend only upon the current state, not upon earlier states. Even small breaches of this property can dramatically undermine the performance of RL algorithms. Yet most methods for designing states, such as grouping customers by the recency, frequency and monetary value of past transactions (RFM), are not guaranteed to yield Markov states. We propose a method for constructing Markov states from historical transaction data by adapting a method that has been proposed in the computer science literature. Rather than designing states in transaction space, we construct predictions over how customers will respond to a firm's marketing actions. We then design states using these predictions, grouping customers together if their predicted behavior is similar. To make this approach computationally tractable, we adapt the method to exploit a common feature of transaction data (sparsity). As a result, a problem that faces computational challenges in many settings becomes more feasible in a marketing setting. The method is straightforward to implement, and the resulting states can be used in standard RL algorithms. We evaluate the method using a novel validation approach. The findings confirm that the constructed states satisfy the Markov property and are robust to the introduction of non-Markov distortions in the data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 65-67).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126961</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the higher Frobenius</title>
<link>https://hdl.handle.net/1721.1/126941</link>
<description>On the higher Frobenius
Yuan, Allen, Ph. D., Massachusetts Institute of Technology.
Given a homotopy invariant of a space, one can ask how much of the space can be recovered from that invariant. This question was first addressed in work of Quillen and Sullivan on rational homotopy theory in the 1960s and in work of Dwyer-Hopkins and Mandell on p-adic homotopy theory in the 1990s. In this thesis, we describe a way to unify these ideas and recover a space in its entirety, rather than up to an approximation. The approach is centered around the study of the higher Frobenius map. First defined by Nikolaus and Scholze, the higher Frobenius map generalizes to E∞-ring spectra the classical Frobenius endomorphism for rings in characteristic p. Our main result is that there is an action of the circle group on (a certain subcategory of) p-complete E∞-rings whose monodromy is the higher Frobenius. Using this circle action, we give a fully faithful model for a simply connected finite complex X in terms of Frobenius-fixed E∞-rings.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 107-110).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126941</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorics of affine Springer fibers and combinatorial wall-crossing</title>
<link>https://hdl.handle.net/1721.1/126939</link>
<description>Combinatorics of affine Springer fibers and combinatorial wall-crossing
Yue, Guangyi.
This thesis deals with several combinatorial problems in representation theory. The first part of the thesis studies the combinatorics of affine Springer fibers of type A. In particular, we give an explicit description of irreducible components of Fl[subscript tS] and calculate the relative positions between two components. We also study the lowest two-sided Kazhdan-Lusztig cell and establish a connection with the affine Springer fibers, which is compatible with the affine matrix ball construction algorithm. The results also prove a special case of Lusztig's conjecture. The work in this part includes joint work with Pablo Boixeda. In the second part, we define the combinatorial wall-crossing transformation and the generalized column regularization on partitions and prove that a certain composition of these two transformations has the same effect on the one-row partition. This result gives a special situation where column regularization can be used to understand the complicated Mullineux map, and also proves a special case of Bezrukavnikov's conjecture. Furthermore, we prove a condition under which the two maps are exactly the same, generalizing the work of Bessenrodt, Olsson and Xu. The combinatorial constructions are related to the Iwahori-Hecke algebra and the global crystal basis of the basic [ ... ]-module, and we provide several conjectures regarding the q-decomposition numbers and generalizations of results due to Fayers. This part is joint work with Panagiotis Dimakis and Allen Wang.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 149-152).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126939</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Singular behaviour and long time behaviour of mean curvature flow</title>
<link>https://hdl.handle.net/1721.1/126938</link>
<description>Singular behaviour and long time behaviour of mean curvature flow
Sun, Ao, Ph. D., Massachusetts Institute of Technology.
In this thesis, we investigate two asymptotic behaviours of the mean curvature flow. The first is the asymptotic behaviour of singularities of the mean curvature flow, whose asymptotic limit is modelled by the tangent flows. The second is the asymptotic behaviour of the mean curvature flow as time goes to infinity. We study several problems related to these asymptotic behaviours. The first problem is the partial regularity of the limit. The partial regularity of mean curvature flow without any curvature assumptions was first studied by Ilmanen. We follow Ilmanen's idea to study the partial regularity of other asymptotic limits. In particular, we introduce a generalization of Colding-Minicozzi's entropy in a closed manifold, which plays a significant role. The second problem is the genericity of the tangent flows of mean curvature flow. Generic mean curvature flow was introduced by Colding-Minicozzi. Furthermore, they introduced mean curvature flow entropy and used it to study the generic tangent flows of mean curvature flow. We study the multiplicity of the generic tangent flow. In particular, we prove that the generic compact tangent flow of mean curvature flow of surfaces has multiplicity 1. This result partially addresses the famous multiplicity 1 conjecture of Ilmanen. One key idea is defining a local version of Colding-Minicozzi's entropy. We also discuss some related results, which include a joint work with Zhichao Wang and a joint work with Julius Baldauf.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 125-130).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126938</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assorted results in boolean function complexity, uniform sampling and clique partitions of graphs</title>
<link>https://hdl.handle.net/1721.1/126937</link>
<description>Assorted results in boolean function complexity, uniform sampling and clique partitions of graphs
Wellens, Jake(Jake Lee)
This thesis consists of three disparate parts. In the first, we generalize and extend recent ideas of Chiarelli, Hatami and Saks to obtain new bounds on the number of relevant variables for a boolean function in terms of its degree, its sensitivity, and its certificate and decision tree complexities, and we also sharpen the best-known polynomial relationships between some of these complexity measures by a constant factor. In the second part, we show that the Partial Rejection Sampling method of Guo, Jerrum and Liu can solve a handful of natural sampling problems that fall outside the guarantees of the authors' original analysis. Finally, we revise and make partial progress on a conjecture of De Caen, Erdős, Pullman and Wormald on clique partitions of a graph and its complement, building on ideas of Keevash and Sudakov.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 107-112).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126937</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stable characters for symmetric groups and wreath products</title>
<link>https://hdl.handle.net/1721.1/126936</link>
<description>Stable characters for symmetric groups and wreath products
Ryba, Christopher(Christopher Jonathan)
Given a Hopf algebra R, the Grothendieck group of C = R-mod inherits the structure of a ring. We define a ring [mathematical equation], which is "the [mathematical equation] limit" of the Grothendieck rings of modules for the wreath products [mathematical equation]; it is the Grothendieck group of a certain wreath product Deligne category. The construction yields a basis of [mathematical equation] corresponding to irreducible objects. The structure constants of this basis are stable tensor product multiplicities for the wreath products [mathematical equation]. We generalise [mathematical equation], allowing an arbitrary ring to be substituted for the Grothendieck ring of C. Aside from being a Hopf algebra, [mathematical equation] is the algebra of distributions on a certain affine group scheme. In the special case where C is the category of vector spaces (over C, say), [mathematical equation] is the ring of symmetric functions. The basis obtained by our construction is the family of stable Specht polynomials, which is closely related to the problem of calculating restriction multiplicities from [mathematical equation]. We categorify the stable Specht polynomials by producing a resolution of irreducible representations of S[subscript n] by modules restricted from [mathematical equation].
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 145-147).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126936</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convergence of complete Ricci-flat manifolds</title>
<link>https://hdl.handle.net/1721.1/126935</link>
<description>Convergence of complete Ricci-flat manifolds
Park, Jiewon.
This thesis is focused on the convergence at infinity of complete Ricci-flat manifolds. In the first part of this thesis, we will give a natural way to identify between two scales, potentially arbitrarily far apart, in the case when a tangent cone at infinity has smooth cross section. The identification map is given as the gradient flow of a solution to an elliptic equation. We use an estimate of Colding-Minicozzi of a functional that measures the distance to the tangent cone. In the second part of this thesis, we prove a matrix Harnack inequality for the Laplace equation on manifolds with suitable curvature and volume growth assumptions, which is a pointwise estimate for the integrand of the aforementioned functional. This result provides an elliptic analogue of matrix Harnack inequalities for the heat equation or geometric flows.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 57-59).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126935</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>K-theoretic Hall algebras for quivers with potential</title>
<link>https://hdl.handle.net/1721.1/126934</link>
<description>K-theoretic Hall algebras for quivers with potential
Pădurariu, Tudor(Tudor Gabriel)
Given a quiver with potential (Q, W), Kontsevich-Soibelman constructed a Hall algebra on the cohomology of the stack of representations of (Q, W). As shown by Davison-Meinhardt, this algebra comes with a filtration whose associated graded algebra is supercommutative. A special case of this construction is related to work of Nakajima, Varagnolo, Maulik-Okounkov etc. about geometric constructions of Yangians and their representations; indeed, given a quiver Q, there exists an associated pair (Q̃, W̃) for which the CoHA is conjecturally the positive half of the Yangian Y[subscript MO](g[subscript Q]). The goal of this thesis is to extend these ideas to K-theory. More precisely, we construct a K-theoretic Hall algebra using categories of singularities, define a filtration whose associated graded algebra is a deformation of a symmetric algebra, and compare the KHA and the CoHA using the Chern character. As before, we expect our construction for the special class of quivers (Q̃, W̃) to recover the positive part of the quantum affine algebra U[subscript q](g[subscript Q]) defined by Okounkov-Smirnov, but for general pairs (Q, W) we expect new phenomena.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 167-171).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126934</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frontiers of Liouville quantum gravity</title>
<link>https://hdl.handle.net/1721.1/126933</link>
<description>Frontiers of Liouville quantum gravity
Pfeffer, Joshua William.
This thesis studies a universal model of random geometry in two dimensions called Liouville quantum gravity (LQG). LQG was originally described heuristically by physicists, and mathematicians have grappled with the challenge of defining it rigorously and analyzing its properties. We investigate elements of the theory of LQG that are still poorly understood, often even from physicists' heuristic perspective. -- We analyze LQG as a metric space. We prove results necessary for the construction of LQG as a metric space, and prove fundamental estimates for LQG distances. We prove the most natural formulation of the Knizhnik-Polyakov-Zamolodchikov (KPZ) formula, which relates Hausdorff dimensions of sets with respect to the Euclidean and LQG metrics. And we prove upper and lower bounds on the Hausdorff dimension of the LQG metric. -- We propose a model for LQG with matter central charge in (1, 25). We introduce and justify a model for LQG with matter central charge c in the range (1, 25), a regime whose probabilistic and geometric behavior is much less well understood than the classical regime c &lt; 1, even from a physics perspective. -- We rigorously link the determinant of the Laplacian to the definition of LQG and to the mass of Brownian loops. We give a mathematically precise interpretation of physicists' original definition of LQG in terms of the determinant of the Laplace-Beltrami operator ("Laplacian"). And we rigorously relate the zeta-regularized determinant of the Laplacian to the regularized mass of Brownian loops on the surface. -- We apply the theory of LQG to answer open problems in other areas of probability. We apply tools from LQG to answer open problems about the connectivity of the adjacency graph of complementary connected components of a Schramm-Loewner evolution curve.
And we prove a precise asymptotic growth exponent for external diffusion-limited aggregation in the setting of a spanning-tree-weighted random planar map, the first result of its kind on any class of graphs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 317-336).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126933</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical modeling of pilot-wave hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/126932</link>
<description>Theoretical modeling of pilot-wave hydrodynamics
Turton, Sam Edward.
In this thesis, we develop and apply a number of theoretical models describing the dynamics of a droplet walking on a vibrating liquid bath. We first review the hierarchy of theoretical models developed to describe this system. We begin with the stroboscopic model of Oza et al. (2013), and elucidate the role of spatial wave-damping and the effect of the droplet's vertical dynamics. We then extend the boost model of Bush et al. (2014), valid in the weak-acceleration limit, demonstrating its connection to the Rayleigh oscillator model of Labousse &amp; Perrard (2014). We extend the boost framework in order to consider droplet interactions with slowly-varying topography, and compare our model predictions with the results of an accompanying experimental study. Particular attention is given to outlining the physical limits in which the topographical effects may be captured by an effective force. We also investigate theoretically the dynamics of hydrodynamic spin lattices, and demonstrate that their collective behavior is captured by a generalized Kuramoto model, which we explicitly derive from the boost framework. Finally, motivated by the statistical signature reported by Sáenz et al. (2020) in the trajectories of droplets interacting with wells, we consider the stability of the steady walking state. By considering a generalized pilot-wave framework that allows us to explore parameter regimes beyond those accessible in the laboratory, we discover states in which the walker's speed oscillates over a scale comparable to the Faraday wavelength, in addition to a regime in which walker motion is unstable and undergoes random-walk-like motion. We demonstrate how either of these two mechanisms may lead to the emergence of quantum-like statistics with the signature of the pilot wavelength from the pilot-wave dynamics. We conclude with a discussion of the implications of this work and suggest fruitful directions of future research.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 159-168).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126932</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unipotent representations of real reductive groups</title>
<link>https://hdl.handle.net/1721.1/126931</link>
<description>Unipotent representations of real reductive groups
Mason-Brown, Lucas(Lucas David)
Let G be a real reductive group and let Ĝ be the set of irreducible unitary representations of G. The determination of Ĝ (for arbitrary G) is one of the fundamental unsolved problems in representation theory. In the early 1980s, Arthur introduced a finite set Unip(G) of (conjecturally unitary) irreducible representations of G called unipotent representations. In a certain sense, these representations form the building blocks of Ĝ. Hence, the determination of Ĝ requires as a crucial ingredient the determination of Unip(G). In this thesis, we prove three results on unipotent representations. First, we study unipotent representations by restriction to K [subset symbol] G, a maximal compact subgroup. We deduce a formula for this restriction in a wide range of cases, proving (in these cases) a long-standing conjecture of Vogan. Next, we study the unipotent representations attached to induced nilpotent orbits. We find that Unip(G) is 'generated' by an even smaller set Unip2(G) consisting of representations attached to rigid nilpotent orbits. Finally, we study the unipotent representations attached to the principal nilpotent orbit. We provide a complete classification of such representations, including a formula for their K-types.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 207-210).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126931</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic and extremal behavior in graphs and matrices</title>
<link>https://hdl.handle.net/1721.1/126930</link>
<description>Probabilistic and extremal behavior in graphs and matrices
McKinley, Gweneth(Gweneth Ann)
This thesis deals with several related questions in probabilistic and extremal graph theory and discrete random matrix theory. First, for any bipartite graph H containing a cycle, we prove an upper bound of [mathematical equation] on the number of labeled H-free graphs on n vertices, given only a fairly natural assumption on the growth rate of [mathematical equation]. Bounds of the form [mathematical equation] have been proven only for relatively few special graphs H, often with considerable difficulty, and our result unifies all previously known special cases. Next, we give a variety of bounds on the clique numbers of random graphs arising from the theory of graphons. A graphon is a symmetric measurable function [mathematical equation], and each graphon gives rise naturally to a random graph distribution, denoted G(n, W), that can be viewed as a generalization of the Erdős-Rényi random graph.; Recently, Doležal, Hladký, and Máthé gave an asymptotic formula of order log n for the clique number of G(n, W) when W is bounded away from 0 and 1. We show that if W is allowed to approach 1 at a finite number of points, and displays a moderate rate of growth near these points, then the clique number of G(n, W) will be [theta]([square root]n) almost surely. We also give a family of examples with clique number [theta](n[superscript alpha]) for any [alpha] [element symbol] (0, 1), and some conditions under which the clique number of G(n, W) will be [omicron]([square root]n), [lower case omega]([square root]n), or [upper case omega](n[superscript alpha]) for [alpha] [element symbol] (0, 1). Finally, for an n x m matrix M of independent Rademacher (±1) random variables, it is well known that if n [less than or equal to] m, then M is of full rank with high probability; we show that this property is resilient to adversarial changes to M.; More precisely, if m [greater than or equal to] n + n[superscript 1-[epsilon]/6], then even after changing the sign of (1 - [epsilon])m/2 entries, M is still of full rank with high probability.
This is asymptotically best possible, as one can easily make any two rows proportional with at most m/2 changes. Moreover, this theorem gives an asymptotic solution to a slightly weakened version of a conjecture made by Van Vu in [Vu08].
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 77-82).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126930</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strange duality on elliptic and K3 surfaces</title>
<link>https://hdl.handle.net/1721.1/126929</link>
<description>Strange duality on elliptic and K3 surfaces
Makarova, Svetlana, Ph. D. Massachusetts Institute of Technology.
The Strange Duality is a conjectural duality between two spaces of global sections of natural line bundles on moduli spaces of sheaves on a fixed variety. It has been proved in full generality on curves by Marian and Oprea, and by Belkale. There has been ongoing work on the Strange Duality on surfaces by various authors. In the current work, we show that the approach of Marian and Oprea to treating elliptic surfaces can be generalized in multiple directions: first, we prove the Strange Duality in many cases over elliptic surfaces, and then we extend their moduli construction to the non-ample quasipolarized locus of K3 surfaces.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 75-77).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126929</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contributions to sutured monopole and sutured instanton Floer homology theories</title>
<link>https://hdl.handle.net/1721.1/126928</link>
<description>Contributions to sutured monopole and sutured instanton Floer homology theories
Li, Zhenkun, Ph. D. Massachusetts Institute of Technology.
In this thesis, we present the development of some aspects of sutured monopole and sutured instanton Floer homology theories. Sutured monopole and instanton Floer homologies were introduced by Kronheimer and Mrowka. They are the adaptation of monopole and instanton Floer theories to the case of balanced sutured manifolds, which are compact oriented 3-manifolds together with some special data on the boundary called the suture. We construct the gluing and cobordism maps in these theories, construct gradings associated to properly embedded surfaces inside the balanced sutured manifolds, and use these tools to further construct minus versions of knot Floer homologies in monopole and instanton theories. These constructions contribute to laying down a solid basis for sutured monopole and sutured instanton Floer homology theories, upon which further applications can be developed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 261-267).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126928</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Arboreal representations, sectional monodromy groups, and abelian varieties over finite fields</title>
<link>https://hdl.handle.net/1721.1/126927</link>
<description>Arboreal representations, sectional monodromy groups, and abelian varieties over finite fields
Kadets, Borys.
This thesis consists of three independent parts. The first part studies arboreal representations of Galois groups - an arithmetic dynamics analogue of Tate modules - and proves some large image results, in particular confirming a conjecture of Odoni. Given a field K, a separable polynomial [mathematical expression], and an element [mathematical expression], the full backward orbit [mathematical expression] has a natural action of the Galois group [mathematical expression]. For a fixed [mathematical expression] with [mathematical expression] and for most choices of t, the orbit [mathematical expression] has the structure of a complete rooted [mathematical expression]. The Galois action on [mathematical expression] thus defines a homomorphism [mathematical expression]. The map [mathematical expression] is the arboreal representation attached to f and t.; In analogy with Serre's open image theorem, one expects [mathematical expression] to hold for most f, t, but until very recently, for most degrees d, not a single example of a degree d polynomial [mathematical expression] with surjective [mathematical expression] was known. Among other results, we construct such examples in all sufficiently large even degrees. The second part concerns the monodromy of hyperplane sections of curves. Given a geometrically integral proper curve [mathematical expression], consider the generic hyperplane [mathematical expression]. The intersection [mathematical expression] is the spectrum of a finite separable field extension [mathematical expression] of degree [mathematical expression]. The Galois group [mathematical expression] is known as the sectional monodromy group of X. When char K = 0, the group [mathematical expression] equals [mathematical expression] for all curves X.; This result has numerous applications in algebraic geometry, in particular to the degree-genus problem. However, when char K &gt; 0, the sectional monodromy groups can be smaller.
We classify all nonstrange nondegenerate curves [mathematical expression], for [mathematical expression] such that [mathematical expression]. Using similar methods we also completely classify the Galois groups of generic trinomials, a problem studied previously by Abhyankar, Cohen, Smith, and Uchida. In part three of the thesis we derive bounds for the number of [mathematical expression]-points on simple abelian varieties over finite fields; these improve upon the Weil bounds. For example, when q = 3, 4 the Weil bound gives [ .. ] for all abelian varieties A. We prove that [mathematical expression] and [mathematical expression] hold for all but finitely many simple abelian varieties A (with an explicit list of exceptions).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 93-97).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126927</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cohomologically proper stacks over Zp̳ : algebra, geometry and representation theory</title>
<link>https://hdl.handle.net/1721.1/126926</link>
<description>Cohomologically proper stacks over Zp̳ : algebra, geometry and representation theory
Kubrak, Dmitry(Dmitrii)
In this thesis, we study a class of so-called cohomologically proper stacks from various perspectives, mainly concentrating on the p-adic context. Cohomological properness is a relaxed properness condition on a stack which roughly asks that the cohomology of any coherent sheaf be finitely generated over the base. We extend some of the techniques available for smooth proper schemes to smooth cohomologically proper stacks, featuring in particular the recently developed theory of prismatic cohomology and the classical Deligne-Illusie method for the Hodge-to-de Rham degeneration. As main applications, we prove Totaro's conjectural inequality between the dimensions of the de Rham and the singular F[subscript p]-cohomology of the classifying stack of a reductive group, compute the ring of prismatic characteristic classes at non-torsion primes, and give some new examples of the Hodge-to-de Rham degeneration for stacks in characteristic 0. We also study some descent properties of certain Brauer group classes on conical resolutions, a question with applications to the theory of Fedosov quantizations in characteristic p. Some surprising results about the G[subscript m]-weights of differential 1-forms, obtained along the way, originally motivated the attempt to generalize integral p-adic Hodge theory to the setting of cohomologically proper stacks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis. In title on title page, double underscored "p" appears as subscript.; Includes bibliographical references (pages 291-297).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126926</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asymptotic description of the formation of black holes from short-pulse data</title>
<link>https://hdl.handle.net/1721.1/126925</link>
<description>Asymptotic description of the formation of black holes from short-pulse data
Jaffe, Ethan Yale.
In this thesis we present partial progress towards the dynamic formation of black holes in the four-dimensional Einstein vacuum equations from Christodoulou's short-pulse ansatz. We identify natural scaling in a putative solution metric and use the technique of real blowup to propose a desingularized manifold and an associated rescaled tangent bundle (which we call the "short-pulse tangent bundle") on which the putative solution remains regular. We prove the existence of a solution solving the vacuum Einstein equations formally at each boundary face of the blown-up manifold and show that for an open set of restricted short-pulse data, the formal solution exhibits curvature blowup at a hypersurface in one of the boundary hypersurfaces of the desingularized manifold. This thesis is intended to be partially expository. In particular, this thesis presents an exposition of double-null gauges and the solution of the characteristic initial value problem for the Einstein equations, as well as an exposition of a new perspective of Christodoulou's monumental result on the dynamic formation of trapped surfaces [13].
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 303-305).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126925</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computability of rational points on curves over function fields in characteristic p</title>
<link>https://hdl.handle.net/1721.1/126924</link>
<description>Computability of rational points on curves over function fields in characteristic p
Hewett, Campbell L.(Campbell Lucas)
The motivating problem of this thesis is that of explicitly computing the K-rational points of a regular nonsmooth curve X over a finitely generated field K of characteristic p. We start with an in-depth study of such curves in general and the tools exclusive to characteristic p geometry needed to compute their K-points. We describe a combined going-down and going-up approach to compute X(K) that generalizes and makes effective the proof of finiteness of X(K) given by Voloch ([39]). We break the problem up into three separate cases according to the absolute genus of X. In the absolute genus 0 case, we give an algorithm to compute X(K) that is an effective version of a method given by Jeong ([16]). We also implement a special case of this algorithm in Sage and apply it to example curves. In the absolute genus 1 case, we give an algorithm to compute X(K) that works when we make extra assumptions about X, and we make some remarks in the case where those assumptions are removed. In the absolute genus at least 2 case, we give an unconditional algorithm to compute X(K). Some tools and algorithms we provide along the way do not directly involve regular nonsmooth curves and are interesting in their own right. We describe ways to effectively descend curves with respect to transcendental or purely inseparable field extensions. We explore the methods of p-descent on elliptic curves in characteristic p and provide explicit equations defining Z/pZ- and [mu][subscript p]-torsors over them. We prove an effective de Franchis-Severi theorem for characteristic p that generalizes the one given by Baker, et al. over number fields ([3]). Lastly, we use a height bound proved by Szpiro ([34]) to give an algorithm to compute Y(K) for any smooth nonisotrivial curve Y over K, followed by an algorithm to compute Y(K[superscript 1/p[infinity]]), which was proved to be finite by Kim ([17]).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 103-105).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126924</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The stability of bound states in pilot-wave hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/126923</link>
<description>The stability of bound states in pilot-wave hydrodynamics
Couchman, Miles Meissner Paasikivi.
Millimetric droplets bouncing on the surface of a vertically vibrating fluid bath may self-propel through a resonant interaction with their own wavefield, displaying behaviors previously thought to be exclusive to the microscopic quantum realm. We investigate the stability of quantized bound states comprising multiple droplets interacting through their shared wavefield, using an integrated experimental and theoretical approach. We consider the behavior of droplet pairs, rings, and chains as the bath's vibrational acceleration is increased progressively, and uncover a rich variety of dynamical states including periodic oscillations and traveling waves. The observed instabilities depend on the droplet number and size, and on whether the drops bounce in- or out-of-phase relative to their neighbors. We develop a new theoretical model that accounts for the coupling between a drop's horizontal and vertical motion, enabling us to rationalize the majority of our experimental findings. We thus demonstrate that variations in a drop's impact phase with the bath have a critical influence on the stability of bouncing-droplet bound states. Our work provides insight into the complex interactions and collective motions that arise in bouncing-droplet aggregates, and forges new mathematical links with extant models of microscopic physics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 185-195).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126923</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Orientation map for height p - 1 real E theory</title>
<link>https://hdl.handle.net/1721.1/126922</link>
<description>An Orientation map for height p - 1 real E theory
Chatham, Hood,IV(Robert Hood)
Let p be an odd prime and let EO = E[subscript p-1][superscript hC[subscript p]] be the C[subscript p] fixed points of height p - 1 Morava E theory. We say that a spectrum X has algebraic EO theory if the splitting of K[subscript *](X) as a K[subscript *][C[subscript p]]-module lifts to a topological splitting of EO ∧ X. We develop criteria to show that a spectrum has algebraic EO theory, in particular showing that any connective spectrum with mod p homology concentrated in degrees 2k(p - 1) has algebraic EO theory. As an application, we answer a question posed by Hovey and Ravenel [10] by producing a unital orientation MW[subscript 4p-4] --&gt; EO analogous to the MSU orientation of KO at p = 2, where MW[subscript 4p-4] is the Thom spectrum of the (4p - 4)-connective Wilson space.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 43-44).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126922</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of smoothing effect in some dispersive equations</title>
<link>https://hdl.handle.net/1721.1/126921</link>
<description>The role of smoothing effect in some dispersive equations
Grande Izquierdo, Ricardo.
In this thesis, we study the role of smoothing effect in the local well-posedness theory of dispersive partial differential equations in three different contexts. First, we use it to overcome a loss of derivatives in a family of nonlocal dispersive equations. Second, we exploit a discrete version of the smoothing effect to study a discrete system of particles and how to approximate it by a continuous dispersive equation. Third, we use an anisotropic version of the smoothing effect to establish the local well-posedness theory of the two-dimensional Dysthe equation, which is used to model oceanic rogue waves.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 151-153).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126921</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affine Springer fibers and the representation theory of small quantum groups and related algebras</title>
<link>https://hdl.handle.net/1721.1/126920</link>
<description>Affine Springer fibers and the representation theory of small quantum groups and related algebras
Boixeda Alvarez, Pablo.
This thesis deals with the connections between geometry and representation theory. In particular, we study the representation theory of small quantum groups and Frobenius kernels, and the geometry of an equivalued affine Springer fiber Fl[subscript ts] for s a regular semisimple element. In Chapter 2 we relate the center of the small quantum group with the cohomology of the above affine Springer fiber. This includes joint work with Bezrukavnikov, Shan and Vasserot. In Chapter 3 we study the geometry of the affine Springer fiber, and in particular we describe the fixed points of a torus action contained in each component. In Chapter 4 we present a collection of algebraic results on the representation theory of Frobenius kernels. In particular, we state some results pointing towards a construction of certain partial Verma functors, and we compute this in the case of SL₂. We also compute the center of Frobenius kernels in the case of SL₂ and state a conjecture on a possible inductive construction of the general center.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 125-128).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126920</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A topology on points on stacks</title>
<link>https://hdl.handle.net/1721.1/126918</link>
<description>A topology on points on stacks
Christensen, Atticus(Atticus Ballroom)
For a variety over certain topological rings R, like Z[subscript p] or C, there is a well-studied way to topologize the R-points on the variety. In this paper, we generalize this definition to algebraic stacks. For an algebraic stack X over many topological rings R, we define a topology on the isomorphism classes of R-points of X. We prove expected properties of the resulting topological spaces including functoriality. Then, we extend the definition to the case when R is the ring of adeles of some global field. Finally, we use this last definition to strengthen the local-global compatibility for stacky curves of Bhargava-Poonen to a strong approximation result.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (page 55).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126918</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The evolution of Soviet military and civilian threat assessment in the Gorbachev era : fragmentation and competition</title>
<link>https://hdl.handle.net/1721.1/126351</link>
<description>The evolution of Soviet military and civilian threat assessment in the Gorbachev era : fragmentation and competition
Phillips, Richard Hyland.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1991.; Includes bibliographical references (leaves 276-334).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126351</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reduced confinement GaAlAs tapered waveguide antennas</title>
<link>https://hdl.handle.net/1721.1/126348</link>
<description>Reduced confinement GaAlAs tapered waveguide antennas
Bossi, Donald Elliott.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126348</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental study of tokamak plasmas with external rotational transform of the magnetic field</title>
<link>https://hdl.handle.net/1721.1/126345</link>
<description>Experimental study of tokamak plasmas with external rotational transform of the magnetic field
Janos, Alan Charles.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 1980.; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126345</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanical properties of Nb3Sn multifilamentary composites</title>
<link>https://hdl.handle.net/1721.1/126344</link>
<description>The mechanical properties of Nb3Sn multifilamentary composites
Cogan, Stuart Forster.
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 1979.; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126344</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Entrepreneurial organizations and human capital</title>
<link>https://hdl.handle.net/1721.1/125486</link>
<description>Entrepreneurial organizations and human capital
Kim, J. Daniel(Jisoo Daniel)
This dissertation investigates how human capital shapes both the creation and performance of entrepreneurial organizations. In three essays, I study the intricate linkage between startups and the individuals that embody them, which include not only the founders but also the non-founding joiners. In the first essay, my co-authors and I empirically assess the popular view that the most successful entrepreneurs tend to be young. Second, I investigate the types of individuals that choose to work for startups rather than established firms, and the resulting wage differential between the two employer types. Third, I study the effectiveness of high-tech startup acquisitions as a hiring strategy for incumbent firms, commonly known as "acqui-hiring."
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, February, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125486</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regularized predictive control framework for robust dynamic legged locomotion</title>
<link>https://hdl.handle.net/1721.1/125485</link>
<description>Regularized predictive control framework for robust dynamic legged locomotion
Bledt, Gerardo.
Legged robots have the potential to be highly dynamic machines capable of outperforming humans and animals in executing locomotion tasks within dangerous and unstructured environments. Unfortunately, current control methods still lack the ability to move with the agility and robustness needed to traverse arbitrary terrains with the same grace and reliability as animals. This dissertation presents the successful implementation of a novel nonlinear optimization-based Regularized Predictive Control (RPC) framework that optimizes robot states, footstep locations, and ground reaction forces over a future prediction horizon. RPC exploits expertly designed and data-driven extracted heuristics by directly embedding them in the optimization through regularization in the cost function. Well-designed regularization should bias results towards a "good enough" heuristic solution by shaping the cost space favorably, while allowing the optimization to find a better result if it exists.; However, designing meaningful regularized cost functions and adequate heuristics is challenging and not straightforward. A novel framework is presented for automatically extracting and designing new principled legged locomotion heuristics by fitting simple intuitive models to simulated and experimental data using RPC. Statistically correlated relationships between desired commands, robot states, and optimal control inputs are found by allowing the optimization to more exhaustively search the cost space during offline explorations when not subjected to real-time computation constraints. 
This method extracts simple, but powerful heuristics that can approximate complex dynamics and account for errors stemming from model simplifications or parameter uncertainty without the loss of physical intuition.; Nonlinear optimization-based controllers have shown improved capabilities in simulation, but fall short when implemented on hardware systems that must adhere to real-time computation constraints and physical limits. Various methods and algorithms critical to the success of the robot were developed to overcome these challenges. The controller is verified experimentally using the MIT Cheetah 3 and Mini Cheetah robot platforms. Results demonstrate the ability of the robot to track dynamic velocity and turn rate commands with a variety of parametrized gaits, remain upright through large impulsive and sustained disturbances, and traverse highly irregular terrains. All of these behaviors are achieved with no modifications to the controller structure and with one set of gains signifying the generalized robustness of RPC. This work represents a step towards more robust dynamic locomotion capabilities for legged robots.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-160).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125485</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a miniature, low power, solid state, continuously sensitive, diffusion cloud chamber</title>
<link>https://hdl.handle.net/1721.1/125484</link>
<description>Development of a miniature, low power, solid state, continuously sensitive, diffusion cloud chamber
Cheney, Craig(Craig B.)
A miniature, low-power, solid state, continuously sensitive, diffusion cloud chamber has been developed for use in educational settings as part of the MICA (Measurement, Instrumentation, Controls and Analysis) initiative. MICA aims to provide an immersive educational experience for a wide variety of subjects through hands-on, experimentation-based learning. Cloud chambers were first invented in the early 20th century and offer the ability to visualize ionization tracks from high energy particles. Cloud chambers are no longer used for modern research purposes, but they present a unique and compelling opportunity for teaching physics, including classical mechanics, electricity &amp; magnetism, and nuclear concepts. Presented is a compact cloud chamber with custom, integrated power electronics that dramatically reduces the size and power requirements over those of existing devices. The device is built using a modular block system that enables the rapid development and reusability of the electronics from one MICA experiment to the next. The thermal system utilizes heat pipes and is optimized to not require the use of a liquid coolant. An onboard controller provides flexible operation, real-time control, and data acquisition. Using cloud chambers in an educational setting allows students to visualize physics and phenomena that are otherwise intangible and difficult to learn.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-132).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125484</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A two-step port-reduced reduced-basis component method for time domain elastodynamic PDE with application to structural health monitoring</title>
<link>https://hdl.handle.net/1721.1/125483</link>
<description>A two-step port-reduced reduced-basis component method for time domain elastodynamic PDE with application to structural health monitoring
Bhouri, Mohamed Aziz.
We present a two-step parameterized Model Order Reduction (pMOR) technique for elastodynamic Partial Differential Equations (PDE). pMOR techniques for parameterized time domain PDEs offer opportunities for faster solution estimation. However, due to the curse of dimensionality, basic pMOR techniques fail to provide sufficiently accurate approximations when applied to large geometric domains with multiple localized excitations. Moreover, considering the time domain PDE for the construction of the reduced basis greatly increases the computational cost of the offline stage, and the treatment of hyperbolic PDEs suffers from pessimistic error bounds. Therefore, within the context of linear time domain PDEs for large domains with localized sources, it is of great interest to develop a pMOR approach that provides relatively low-dimensional spaces and guarantees sufficiently accurate approximations.; Towards that end, we develop a two-step Port-Reduced Reduced-Basis Component approach (PR-RBC) for linear time domain PDEs. First, our approach takes advantage of the domain decomposition technique to develop reduced bases for subdomains, which, when assembled, form the domain of interest. This reduces the effective dimensionality of the parameter spaces and mitigates the curse of dimensionality. Moreover, the time domain solution is the inverse Laplace transform of a frequency domain function. Therefore, we can approximate the time domain solution as a linear combination of the PR-RBC solutions to the frequency domain PDE. Hence, we first apply the PR-RBC method to the elliptic frequency domain PDE. Second, we consider the resulting approximations to form a reduced space that is used for the time solver. We apply our two-step PR-RBC approach to a Simulation-Based Classification task for Structural Health Monitoring of deployed mechanical structures such as bridges.; For such a task, we consider random ambient-local excitation with probabilistic nuisance parameters. 
We build time-domain cross-correlation based features and apply several state-of-the-art machine learning algorithms to perform damage detection on the structure. In our context of many queries, the quality of the classification task is enhanced by the sufficiently large synthetic training dataset and the accuracy of the numerical solutions, both obtained thanks to the two-step PR-RBC approach, which reduces the computational burden associated with the construction of such a dataset.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 245-250).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125483</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware-efficient quantum error correction with nitrogen-vacancy centers</title>
<link>https://hdl.handle.net/1721.1/125482</link>
<description>Hardware-efficient quantum error correction with nitrogen-vacancy centers
Chen, Mo, Ph. D., Massachusetts Institute of Technology.
Quantum technologies promise to revolutionize many fields, ranging from precise sensing to fast computation. The success of novel technologies based on quantum effects rests on engineering quantum systems robust to decoherence: the uncontrollable decay of quantum coherence, one of the very features that empowers quantum computation. To date, performance of quantum devices in the noisy intermediate-scale quantum (NISQ) era is still limited by decoherence. The long-term solution is universal quantum computers that run on fault-tolerant quantum error corrected logical qubits, which are immune to decoherence. However, the substantial overhead of qubits and quantum gates that quantum error correction (QEC) imposes is thought to greatly limit its utility in NISQ devices.; In this thesis, we address this challenge through a hardware-efficient approach: leveraging understanding of the quantum system towards more efficient and robust QEC protocols, which opens a potential avenue for useful QEC in near-term, pre-fault-tolerant devices. We are interested in the solid-state quantum register comprising the nitrogen-vacancy (NV) electronic spin and neighboring nitrogen and carbon nuclear spins. First, we developed techniques that provided us with precise knowledge of the system Hamiltonian and, in turn, high-fidelity and fast control. Next, we investigated and identified the decoherence mechanism of nuclear spins in the quantum register. The dominant noise turns out to be the thermal fluctuation of the NV electron. We demonstrated a dynamical decoupling approach to suppress the fluctuator noise and extended the nuclear spin coherence time.; Furthermore, based on the precise knowledge of the system Hamiltonian and decoherence model, we customized a hardware-efficient QEC code for dephasing induced by a common fluctuator. This QEC code requires exponentially less overhead compared to the usual repetition code, and is robust to model imperfections. 
Finally, we developed experimental building blocks for near-term applications of the hardware-efficient QEC.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-178).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125482</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting the effects of random ocean dynamic processes on underwater acoustic sensing and communication</title>
<link>https://hdl.handle.net/1721.1/125481</link>
<description>Predicting the effects of random ocean dynamic processes on underwater acoustic sensing and communication
Cho, Byung Gu.
Acoustics is the primary means of sensing and communication in the ocean for humans and many marine animals. Natural fluctuations in the ocean, however, degrade these abilities in ways that have been previously difficult to forecast. In this thesis, we address this issue by predicting sensing and communication degradation in terms of acoustic attenuation, dispersion and temporal decorrelation at typical operational ranges and frequencies in continental-shelf environments. This is done with analytic expressions derived from first physical principles. The analytic expressions provide the statistics of the acoustic field after forward propagating through an ocean waveguide containing 3-D random inhomogeneities from the independent or combined effects of rough sea-surfaces, near-sea-surface air bubbles and internal waves.; The formulation also includes Doppler effects caused by the inhomogeneities' random horizontal motion, enabling modeling and prediction over a wide range of environments and frequencies. Theoretical predictions are confirmed with available acoustic measurements in several continental-shelf environments using standard oceanographic measurements for environmental support. We quantify how the acoustic signals decorrelate over timescales determined by the underlying temporal coherence of ocean dynamic processes. Surface gravity waves and near-sea-surface air bubbles decorrelate acoustic signals over seconds or less, whereas internal waves affect acoustic coherence at timescales of several to tens of minutes. Doppler spread caused by the inhomogeneities' motion further reduces acoustic temporal coherence, and becomes important at the high frequencies necessary for communication and fine-scale sensing.; We also show that surface gravity waves and bubbles in high sea states can cause increasingly significant attenuation as frequency increases. 
The typical durations of marine mammal vocalizations that carry over great distances are found to be consistent with the coherence timescales quantified here and so avoid random distortion of signal information even by incoherent reception.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-108).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125481</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technoeconomic analysis of pressure-retarded osmosis</title>
<link>https://hdl.handle.net/1721.1/125480</link>
<description>Technoeconomic analysis of pressure-retarded osmosis
Chung, Hyung Won.
Pressure-retarded osmosis (PRO) is a renewable method of power production from salinity gradients which has generated significant academic and commercial interest. Because PRO is a non-intermittent technology (i.e., the energy output can be actively controlled) and has a huge global capacity (1.4-2.6 TW of global salinity gradient power potential), successful commercialization of PRO could significantly impact the global energy landscape. However, current systems have too low an energy efficiency and too high a capital cost for PRO to be successfully implemented on a large scale. A broad objective of this thesis is to analyze PRO from a system-level perspective to identify major bottlenecks and propose methods to remove these bottlenecks and improve system-level performance. Economic analysis is necessary to ascertain the practical viability of a PRO system for power production, but high complexity and the lack of large scale data have limited such work.; We provide two methods to overcome this difficulty. First, we investigate lower bound cost scenarios for power generation with PRO to evaluate its economic viability. We build an economic model for PRO with assumptions that minimize the cost of power production, thereby conclusively identifying the operating conditions that are not economically viable. Second, we develop a simple yet powerful economic framework to relate the lower bound of the levelized cost of electricity (LCOE) to net power density. A set of simplifying assumptions is used to develop an inverse linear relationship between net power density and LCOE. While net power density can be inferred from experimentally measured power density, LCOE can be used to judge the economic viability of the PRO system.; The minimum required net power density for a PRO system to achieve an LCOE of $0.074/kWh (the capacity-weighted average LCOE of solar PV in the U.S.) 
is found to be 56.4 W/m². Finally, we investigate multistage configurations of PRO as a viable way to improve net power density. We develop a unifying framework to classify and analyze various multistage designs in the literature. Then we identify the optimal multistage configuration that achieves most of the benefit of multistaging without significantly increasing the design complexity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 105-111).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125480</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>In situ monitoring and control of carbon nanotube synthesis</title>
<link>https://hdl.handle.net/1721.1/125479</link>
<description>In situ monitoring and control of carbon nanotube synthesis
Dee, Nicholas T.(Nicholas Thomas)
While carbon nanotubes (CNTs) have exceptional properties, synthesis, especially of high-quality, ordered assemblies of CNTs such as vertically aligned arrays ("forests"), remains a barrier to their broader commercial adoption. In particular, high-throughput production of CNTs via chemical vapor deposition (CVD) requires improvements in both control (i.e., achieving uniformity in CNT size and alignment) and efficiency (i.e., maximizing the yield of CNTs relative to a population of catalyst nanoparticles). These developments are critical for applications such as thermal interface materials, electrical interconnects, and filtration membranes. This thesis utilizes in situ characterization techniques to explore the synthesis of CNT forests, and to tailor CNT morphology by dynamic control of process conditions.; First, the formation of CNT forests is studied with in situ environmental transmission electron microscopy (ETEM), revealing that carbon availability during particle formation enhances both the formation of Fe catalyst nanoparticles and CNT nucleation, resulting in a 10-fold increase in CNT density. Then, a machine learning model is presented to identify the phase of individual nanoparticles in ETEM videos, to correlate catalyst particle phase dynamics with CNT nucleation probability. Next, a benchtop CVD system with an in situ Raman probe is used to monitor CNT nucleation and density accumulation during forest growth, to identify the correlation between time-variant carbon exposure to the catalyst and resulting CNT crystallinity.; Finally, the morphological development and growth kinetics of the CNT forest are shown to be mechanochemically modulated, by applying controlled mechanical forces during growth and coupling real-time height measurements with ex situ small-angle X-ray scattering analysis of forests grown under various loads. 
Taken together, these findings may be applied to the design and operation of large-area and continuous-feed reactors for CNT manufacturing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 269-287).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125479</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tractor design for small farms in resource limited markets</title>
<link>https://hdl.handle.net/1721.1/125478</link>
<description>Tractor design for small farms in resource limited markets
Díaz Lankenau, Guillermo Fabián.
This thesis describes the design of a tractor for small farms (&lt;2 ha) in resource limited markets, particularly India, and the analytical framework used to arrive at the design. Indian smallholder farmers typically rely on draft animals, which compared to tractors are more expensive to maintain, more exhausting to use, slower, and incompatible with many modern farming tools and methods. These disadvantages are detrimental to the farmer's income and crop yields. However, existing small tractors are too large and expensive to directly replace draft animals. The presented tractor design is unique in its ability to match draft animals in physical dimensions, pulling performance, and sale price, while retaining key tractor advantages like compatibility with modern tools, low maintenance costs, and reduced drudgery.; This tractor features motorcycle-like controls and seating, inline drive wheels, stabilization via an outrigger arm or a specially-developed balance board attachment, and the ability to attach implements ahead of or behind the rear axle. The design was created to satisfy unmet farmer requirements identified during on-site interviews with Indian farming stakeholders. Before deviating from the conventional tractor design, we first elucidated a comprehensive description, from historical and physical perspectives, of why the conventional tractor came to be. Then, the proposed tractor design was conceived by leveraging historical, physics-based, and user-focused insights. Experimental results with an instrumented proof-of-physics prototype validated that the new tractor could produce traction forces as predicted by the analytical framework used to create the design, and could meet or exceed the maximum pulling forces generated by draft animals.; A functional prototype of the tractor was built, and its ability to complete key farming operations was demonstrated on a Massachusetts farm. 
The vehicle was able to complete plowing, disc harrowing, rotary tilling, planting, cultivating, spraying, and towing of a trailer per Indian industry specifications. A study was conducted to assess whether the vehicle would meet the needs of small and marginal farmers in India through on-site, one-on-one interviews with 24 farmers in Karnataka, Gujarat, and Tamil Nadu. Farmers generally reported that the prototype tractor would meet their needs, with an average likelihood of 4.8/5 that they would use the vehicle for planting, inter-cultivation, and spraying, and an average likelihood of 3.8/5 that they would use the tractor for primary or secondary tillage.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-114).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125478</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contributions of the human operator to supernumerary robotic limbs</title>
<link>https://hdl.handle.net/1721.1/125477</link>
<description>Contributions of the human operator to supernumerary robotic limbs
Guggenheim, Jacob(Jacob William)
An expanding literature base has applied Supernumerary Robotic Limbs (Superlimbs) to fields as diverse as heavy industry, robotic surgery, and assistive technology. While the list of applications has grown, and the designs have become more diverse, the research community has focused almost exclusively on the robotic system's role in augmenting the human's capabilities. This represents only one side of the issue; little research has explored the role of the human operator. This thesis represents the first in-depth exploration of the human's contributions to the Superlimb-human system. We began by examining the control strategy of Superlimbs by asking whether fully manual control of the Superlimbs was viable when the human operator was asked to perform simultaneous and independent tasks with both their robotic and natural limbs.; Although we found that the human operator was able to control all four limbs (two robotic, two natural) simultaneously, the operator performed worse with their natural limbs when controlling all four limbs than when controlling their natural limbs alone. Thus, when designing Superlimbs for a task set that requires the human and the robot to perform simultaneous independent tasks, this study points to the need to reduce the number of Superlimb degrees of freedom (DOFs) the human must manually control, either through design or control. To achieve this reduction, we next exploited the high redundancy and flexibility of the human body. First, we proposed a methodology for reduced-actuator Superlimbs that exploits the human operator's ability to manipulate the base of the Superlimb.; Based upon this methodology, we realized a lightweight Superlimb that could assist a human operator by opening a door when the human operator's hands are busy. Second, we proposed a novel control input methodology for communicating a rich variety of commands to the Superlimbs while both hands are busy. 
Based upon this methodology, and in combination with an intermittent control structure, we controlled the reduced-actuator Superlimb described above with action primitives to assist a human operator by opening a door while the operator was holding a large box. Finally, as the Superlimb's state changes, that change is reflected in the forces and torques felt by the human operator at the base of the Superlimb. We found that this inherent haptic feedback allowed the operator both to perform closed-loop manual control of the force output of a Superlimb and to supervise the autonomous actions of a Superlimb. In sum, this thesis explores how Superlimbs can be designed to exploit the benefits, while limiting the challenges, of being attached to a human operator.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 99-103).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125477</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactive manipulation with contact models and tactile feedback</title>
<link>https://hdl.handle.net/1721.1/125476</link>
<description>Reactive manipulation with contact models and tactile feedback
Hogan, Francois R.
This thesis focuses on closing the loop in robotic manipulation, moving towards robots that can better perceive their environment and react to unforeseen situations. Humans effectively process and react to information from visual and tactile sensing; however, robots often remain programmed in an open-loop fashion, and struggle to correct their motion based on detected errors. We begin our work by developing full-state feedback controllers for dynamical systems involving frictional contact interactions. Hybridness and underactuation are key characteristics of these systems that complicate the design of feedback controllers. We design and experimentally validate the controllers on a planar manipulation system where the purpose is to control the motion of a sliding object on a flat surface using a point robotic pusher. The pusher-slider is a simple dynamical system that retains many of the challenges that are typical of robotic manipulation tasks. We extend this work to partially observable systems by developing closed-loop tactile controllers for dexterous manipulation with dual-arm robotic palms. We introduce Tactile Dexterity, an approach to dexterous manipulation that plans for robot/object interactions that render interpretable tactile information for control. Key to this formulation is the decomposition of manipulation plans into sequences of manipulation primitives with simple mechanics and efficient planners.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-120).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125476</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Impacting Electrochemical Phenomena Using Interfacial Engineering</title>
<link>https://hdl.handle.net/1721.1/125474</link>
<description>Towards Impacting Electrochemical Phenomena Using Interfacial Engineering
Khan, Sami, Ph. D., Massachusetts Institute of Technology.
Electrochemical phenomena can be broadly defined as any process that involves transfer of electrons concurrently with a measurable and quantifiable chemical change. Interfacial engineering, on the other hand, can be defined as the altering of surface chemistry and texture across various length scales to impact electro-chemo-mechanical interactions between two or more surfaces. In this thesis, the role of interfacial engineering in impacting two specific electrochemical phenomena has been studied: corrosion and electrochemical reduction of CO₂. In the first part of this thesis, lubricant-impregnated surfaces (LISs), consisting of thin films of lubricant held stably in micro-textures by means of capillary forces, have been systematically designed and studied for reducing corrosion. Corrosion is a detrimental process that can impact the performance and lifetime of many infrastructural systems and structures. In this work, we fabricate microposts on silicon using photolithography, varying inter-post spacing systematically from 5 μm to 50 μm, and conformally sputter coat a thin layer of iron to study the corrosion phenomena electrochemically in an aqueous 3.5 wt% sodium chloride solution; we show that the corrosion rate on LIS is drastically reduced, by three orders of magnitude. Using electrochemical impedance spectroscopy, we develop model circuits for various LIS configurations, show that the measured resistances and capacitances agree with a theoretical model, and discuss in detail where deviations may occur. Similarly, we study the role of lubricant layers in reducing hydrogen embrittlement. Using a Devanathan-Stachurski electrochemical permeation cell, we show that under accelerated conditions the effective diffusion coefficient of hydrogen in steel is reduced by an order of magnitude using LIS. 
Furthermore, we apply LIS to heal zirconia coatings on steel. Zirconia, which grows natively on the zirconium alloys used as fuel cladding in nuclear reactors, is known to serve as a hydrogen barrier, but the presence of grain boundaries or macroscopic surface defects can provide pathways for hydrogen entry. We show that LIS improves the effective diffusion coefficient of a defective zirconia film on steel by an order of magnitude. Finally, we develop aerophilic surfaces to trap CO₂ bubbles in close proximity to a CO₂ electro-reduction catalyst. Electrochemical reduction of CO₂ (CO₂RR) to valuable fuels is a promising approach towards reducing the ever-growing atmospheric CO₂ levels, and efficient delivery and replenishment of CO₂ remains a fundamental challenge that can impact CO₂RR current density and product distribution. When CO₂ bubbles are trapped near the catalyst in a plastron layer, the local CO₂ concentration available to the catalyst is enhanced and maintained, thereby increasing the magnitude of the current density associated with CO₂ reduction by close to two times compared to the conventional CO₂ headspace mode of delivery. We confirm the enhancement in local CO₂ concentration using sensitive pH probes as well as colorimetric techniques. Furthermore, we find that hydrogen co-evolution is suppressed from 33% in the headspace case to 13% with the plastron at -1.1 V vs RHE. We demonstrate that the increase in CO₂RR current density can be scaled up to an array of copper catalysts that possess a higher surface area, while maintaining current density and CO₂ concentration close to the catalyst for over an hour. We also demonstrate that CO₂RR performance can be enhanced on a porous, nano-structured catalyst using these plastron layers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 75-86).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125474</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated photonic devices for spectroscopic chemical detection</title>
<link>https://hdl.handle.net/1721.1/125473</link>
<description>Integrated photonic devices for spectroscopic chemical detection
Kita, Derek Matthew.
Chemical sensing systems realized with photonic components integrated on traditional semiconductor substrates have emerged as a promising technology for remote sensing applications that require low cost, low power consumption, light weight, small size, and high performance. In this thesis, I discuss methods and systems for practical implementations of chip-scale integrated photonic chemical sensors and spectrometers. The work focuses on solutions to a variety of obstacles that have hindered real-world implementations of microphotonic chemical sensors. First, a new chip architecture capable of acquiring high channel count, high resolution optical spectra (200 pm resolution in the telecommunications C-band) is presented both theoretically and experimentally, along with a new 'elastic-D₁' regularized regression method for spectrum reconstruction. Next, evanescent field sensing using dielectric waveguides is studied theoretically and numerically, with a special emphasis on sensing performance in the presence of random, fabrication-induced waveguide sidewall roughness. I demonstrate that a locally flat perturbation approximation is valid for typical experimental roughness in silicon-on-insulator platforms, and use a volume-current method to explicitly compute scattering loss rates for a variety of three-dimensional waveguide structures. To then experimentally realize photonic sensing systems, I developed a low-loss (0.36 ± 0.11 dB/cm), quick-turn (16.4 day turnaround) fabrication process for inexpensively prototyping silicon nitride photonic integrated circuits with heaters, etched edge couplers, and opened sensing windows. Using this fabrication process, I present a successful experimental demonstration of a fiber-packaged, waveguide-enhanced Raman spectroscopic sensor used for detecting liquids in contact with the surface of the chip via measured Raman peaks from 500-3500 cm⁻¹.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 155-173).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125473</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanostructured materials towards high-efficiency visible and ultraviolet light emitting diodes : structure-property correlation on the nanoscale</title>
<link>https://hdl.handle.net/1721.1/125472</link>
<description>Nanostructured materials towards high-efficiency visible and ultraviolet light emitting diodes : structure-property correlation on the nanoscale
Goodman, Sarah Ann.
Lighting applications account for 15% of the world's energy consumption, indicating that highly efficient light emitting diodes (LEDs) have a tremendous potential to decrease energy usage. While LEDs are becoming more common in applications from households to industry, a number of improvements must be made to allow LEDs to achieve their maximum efficiency. In this thesis, we examine InGaN/GaN multiple quantum well (MQW) LEDs, which set the industry standard for blue/green inorganic LEDs with an external quantum efficiency of 70%. However, these LEDs suffer from efficiency droop, a phenomenon in which the device efficiency decreases past a certain threshold current. Additionally, the material quality suffers because lattice mismatches introduce defects into the device. We first examine electron energy-loss spectroscopy (EELS) as a method to investigate In fluctuations within the quantum wells, which are thought to contribute to efficiency droop. EELS has been extensively utilized in the literature to characterize QW composition, but as high-resolution aberration-corrected scanning transmission electron microscopy (STEM) becomes the standard, investigations of the impact of the electron beam on the sample and on EELS analysis are required. Here, we demonstrate imaging parameters for which beam-induced carbon deposition leads to the introduction of artifacts in EELS spectra. We find that with decreasing pixel sizes for EELS maps, carbon accumulates on the sample, causing increased multiple-scattering carbon plasmons that interfere with the power-law background subtraction of the In pre-edge. This effect is demonstrated across three state-of-the-art aberration-corrected STEM platforms, showing the ubiquitous nature of carbon contamination even in the most advanced instruments. 
Next, we investigate the spectral broadening around V-pit defects in InGaN/GaN MQW LEDs. As V-pit engineering, the practice of intentionally introducing V-pits into an LED in order to improve the efficiency, becomes more widespread, it is critical to understand the impact of V-pits on the emission spectrum of LEDs. Here, we use cathodoluminescence (CL) spectroscopy within the S/TEM to identify three distinct regions of emission around V-pits. We identify blue-shifted emission in the QWs of the V-pit sidewalls and confirm that the blue shift is due to reduced In composition. We also find blue-shifted emission in the c-plane QWs adjacent to and in between V-pits, which have been perturbed by strain relaxation induced by V-pit formation. To take advantage of the V-pit-induced spectral broadening of the LED emission spectrum, we present a nanotextured device architecture based on V-pit overlap that emits broadly over a 100 nm range and does not require a nanofabricated template. The efficiency of InGaN-based LEDs can be improved by tackling phenomena within the material that lead to nonradiative recombination, but efficiency can also be increased by implementing additional structural features that enhance emission, particularly for UV LEDs, which generally suffer from low efficiency. Here, we extend the energy range of surface plasmons in aluminum nanodisks into the UV by controlling the nanodisk diameter using high-resolution electron beam lithography. We use STEM-EELS to map the complete plasmonic spectrum of aluminum nanodisks from 3-120 nm in diameter, achieving surface plasmons in the range of 2-10 eV. Additionally, investigations of the surface and volume plasmon decay at the nanodisk boundary open up pathways for further understanding of electron beam-matter interactions. 
This thesis work is motivated by the growing need for sustainable lighting sources, and we tackle this topic by providing new insights into a common methodology used for investigating sources of efficiency droop, by analyzing the impact of defects used for efficiency improvement on LED emission, and by pushing the surface plasmon energies of aluminum nanostructures into the UV for possible incorporation into UV LEDs for efficiency enhancement.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-178).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125472</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The biochemical basis for the cooperative action of microRNAs</title>
<link>https://hdl.handle.net/1721.1/125471</link>
<description>The biochemical basis for the cooperative action of microRNAs
Briskin, Daniel.
In metazoans, microRNAs (miRNAs) act to repress mRNAs through a combination of translational repression and target degradation. miRNAs predominantly pair within the 3' untranslated region (3' UTR) of the mRNA. In cells, closely spaced miRNA target sites within an mRNA can act cooperatively, leading to more repression of the target mRNA than expected from independent action at each site. This dissertation details the use of purified miRNA-AGO2 complexes, synthetic target RNAs, and a purified domain of TNRC6B that is able to simultaneously bind multiple AGO proteins. We examined the target site occupancy and affinities for miRNA-AGO2 binding in the absence and presence of TNRC6B, for target RNAs with a single miRNA site as well as multiple miRNA sites spaced at varying distances. As miRNA-AGO binding to a target correlates with target repression, our study assayed target binding. Absent TNRC6B, miRNA-AGO2 complexes showed little if any cooperative binding. In the presence of the AGO-binding domain of TNRC6B, we observed strong cooperative binding to dual-site target RNAs. We went on to explore the miRNA site parameters suitable for cooperativity, investigating the spacing between sites as well as different miRNAs working alone or in combination with one another. To interrogate the mechanism by which TNRC6B increases cooperativity, competitive slicing experiments were performed; the results indicated that association rates between miRNA-AGO2 complexes and targets were not affected by TNRC6B, which implied that the improved affinities were due to reduced dissociation. Thus, the multivalent binding of TNRC6 enables cooperative binding of miRNA-AGO complexes to target RNAs, thereby explaining the basis of cooperative action.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from PDF version of thesis. "February 2020." Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125471</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alternating current voltammetry of high temperature electrolysis reactions</title>
<link>https://hdl.handle.net/1721.1/125470</link>
<description>Alternating current voltammetry of high temperature electrolysis reactions
Caldwell, Andrew Harvey.
A theory of the alternating current voltammetry (ACV) of electrolysis reactions in high temperature ionic melts is developed, providing a rigorous connection to the solution properties of electroactive components in molten electrolytes. The method presented herein addresses key issues in the rational design of electrolytes for extractive metallurgy and other electrolytic processes. The application of ACV for quantitative study of electrode reactions in high temperature molten electrolytes is validated by investigations of the Eu³⁺/²⁺ couple in molten Al₂O₃-CaO-Eu₂O₃ and of the Ir oxidation reaction in molten CaO-MgO-SiO₂. Analytic solutions are derived for the ACV harmonic waveforms of electrodeposition and gas evolution reactions of the form O + ne⁻ &lt;-&gt; R, where the surface activity of the reduced species R is constant. Reversible and quasi-reversible charge transfer kinetics are considered, as well as the effects of ohmic drop and double layer charging. It is shown that ohmic drop produces a characteristic distortion of the waveforms, resulting in the emergence of current-potential extrema in the higher harmonics that are distinct from those of the conventional ACV theory of soluble reduction-oxidation couples. An equation linking the peak potential of the second harmonic current amplitude and the activity coefficient of the solution species O is presented, establishing a voltammetric approach for the study of thermodynamic activities. Confirmation of the validity of the analytic solutions is found by close agreement with measurements of the fundamental, second, and third harmonic waveforms of Pb electrodeposition on liquid Pb and of Cl₂ evolution on graphite in molten PbCl₂-NaCl-KCl at 700 °C.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125470</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cell-Intrinsic and cell-extrinsic resistance to classical chemotherapies</title>
<link>https://hdl.handle.net/1721.1/125469</link>
<description>Cell-Intrinsic and cell-extrinsic resistance to classical chemotherapies
Dalin, Simona S.
Resistance is a constant presence in chemotherapy treatment. The very action of these therapeutic compounds drives resistance to them. There are two modes of resistance to chemotherapy: the cancer cells themselves can undergo modifications that result in resistance, known as cell-intrinsic resistance; alternatively, the non-malignant cells surrounding the tumor can undergo modifications that result in chemo-protection of the nearby malignant cells, known as cell-extrinsic resistance. Both of these modes of resistance have been extensively studied in the lab; however, our knowledge is not advanced enough to subvert chemotherapy resistance in the clinic. To further our knowledge of chemotherapy resistance, with the ultimate goal of reducing or eradicating this public health challenge, I have studied mechanisms of cell-intrinsic and cell-extrinsic resistance. I first studied how best to choose the drugs in a combination regimen to reduce the emergence of chemotherapy resistance. I created over 100 cell lines resistant to front-line cytotoxic chemotherapies and surveyed collateral responses to the acquisition of resistance. Previous research in bacteria and targeted cancer chemotherapeutics suggested that collateral effects of resistance to one drug can result in both resistance to a second drug, termed collateral resistance, and sensitivity to a second drug, termed collateral sensitivity. In contrast, I found that collateral sensitivity following resistance to classical chemotherapies is rare or nonexistent. Surprisingly, I also found that collateral responses to resistance to a single agent are not uniform, but heterogeneous. Both of these results suggest that designing combination regimens with the objective of reducing resistance is not straightforward, and motivate more work to understand the forces at play. 
I next studied the role of the microenvironment in gemcitabine resistance in pancreatic cancer. Previously known mechanisms of microenvironment-mediated resistance rely on paracrine signaling via heat-labile biomolecules such as cytokines and RNAs. In contrast, I found that pancreatic stromal cells secrete a heat-insensitive compound that confers gemcitabine resistance. I performed a series of biochemical experiments that identified this compound as deoxycytidine. This knowledge suggests that combining inhibition of deoxycytidine synthesis with gemcitabine treatment could increase gemcitabine efficacy.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125469</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intersubunit communication and coordinated mechanical activity in the AAA+ protease ClpXP</title>
<link>https://hdl.handle.net/1721.1/125468</link>
<description>Intersubunit communication and coordinated mechanical activity in the AAA+ protease ClpXP
Bell, Tristan A.
Proteases belonging to the AAA+ (ATPases associated with various cellular activities) family perform regulated proteolysis in all domains of life by binding, mechanically unfolding, and degrading target proteins. The bacterial AAA+ protease ClpXP is composed of two distinct proteins: ClpX, a ring-hexamer protein unfoldase, and ClpP, a barrel-shaped tetradecameric peptidase. The assembly of ClpX ring hexamers results in extensive interaction between subunits, and the motor exhibits positive cooperativity in both ATP hydrolysis and mechanical activity against substrates. Despite general understanding of the mechanism of protein unfolding and degradation by ClpXP and other AAA+ proteases, how the six unfoldase subunits coordinate their mechanical activity to produce the force required to quickly and efficiently degrade stably folded substrates is unclear. Here, I present experiments that interrogate intersubunit communication and coordination of mechanical activity by Escherichia coli ClpXP. In Chapter I, I review the current understanding of AAA+ protease structure and function as background to contextualize the findings presented in later chapters. In Chapter II, I present structural and functional characterization of a ClpX structural element, termed the hinge-linker, in facilitating communication between subunits of the ring hexamer. In Chapter III and two related Appendices, I present experiments that systematically identified determinants of grip between ClpX and its substrates. These experiments also identified distinct functions for different unfoldase subunits during application of force to bound substrates. 
In Chapter IV, I present results from a collaborative project that determined structures of ClpXP bound to a protein substrate and biochemically characterized several previously unvisualized elements of ClpX and ClpP. In Chapter V, I report the effects of inhibiting relative rotation of ClpX and ClpP on ATP hydrolysis and mechanical activity of the ClpX unfoldase. Using the constraints on mechanism inferred from these findings, I also propose molecular models for processive mechanical activity. Finally, in Chapter VI, I discuss the results presented in previous chapters in the larger context of communication and coordination between subunits of AAA+ protein unfolding motors.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125468</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Omnidirectional obstacle detection using minimal sensing</title>
<link>https://hdl.handle.net/1721.1/124595</link>
<description>Omnidirectional obstacle detection using minimal sensing
Cloitre, Audren Damien Prigent.
An integrated approach to visual obstacle detection for aerial multi-rotor vehicles (drones) is introduced. The approach achieves omnidirectional detection of obstacles via suitable synergy of hardware and software. The drone requires a specific arrangement of two cameras, opposing each other, placed below and above the drone. Total coverage of the drone's surroundings is achieved by fitting each camera with a fisheye lens whose field of view is significantly greater than 180 degrees. The combined field of view of the cameras is omnidirectional, and may be conceptually subdivided into three regions: the monocular portions of each camera (centered at the north and south poles of the drone) and the stereo portion common to both cameras (circling the drone's equator). To use both the stereo and monocular data, a special image projection is developed, based on a model of the world as a 'capsule'. The capsule projection consists of a perspective cylindrical projection in the stereo portion and a planar projection for the two monocular portions. Fisheye images warped by the capsule projection are called capsule images. A stereo algorithm is applied to the cylindrical portion of the capsule images to produce a stereo point cloud. Image features are tracked on the capsule images, since the projection is continuous across the stereo and monocular portions. The tracked features are used in a structure-from-motion algorithm that estimates their 3D locations and produces a point cloud representing landmarks. The landmark and stereo point clouds are merged into a single set and projected to the unit sphere centered at the drone's coordinate frame. A 2D spherical Delaunay triangulation algorithm is used to build a triangular mesh from the projected points. The vertices of the mesh are then back-projected to their original 3D locations, to create a 3D triangulated surface that represents the obstacles surrounding the drone. 
The overall method is validated via field experiments conducted with a drone whose design implements our specific camera arrangement. The drone system design is detailed, and the experimental results show that this drone can effectively detect obstacles in arbitrary directions with satisfactory accuracy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124595</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elliptic fibrations among toric hypersurface Calabi-Yau manifolds and mirror symmetry of fibrations</title>
<link>https://hdl.handle.net/1721.1/124593</link>
<description>Elliptic fibrations among toric hypersurface Calabi-Yau manifolds and mirror symmetry of fibrations
Huang, Yu-Chien, Ph. D., Massachusetts Institute of Technology.
In this thesis, we investigate the prevalence of elliptic and genus one fibrations among toric hypersurface Calabi-Yau threefolds by (1) explicitly constructing elliptically fibered Calabi-Yau threefolds with large Hodge numbers using Weierstrass model techniques motivated by F-theory, and comparing the set of Tate-tuned Weierstrass models with the set of Calabi-Yau threefolds constructed using toric hypersurface methods, and (2) systematically analyzing directly the fibration structure of 4D reflexive polytopes by classifying all the 2D subpolytopes of the 4D polytopes in the Kreuzer and Skarke database of toric Calabi-Yau hypersurfaces. With the classification of the 2D fibers, we then study the mirror symmetry structure of elliptic toric hypersurface Calabi-Yau threefolds. We show that the mirror symmetry of Calabi-Yau manifolds factorizes between the toric fiber and the base: if there exist 2D mirror fibers of a pair of mirror reflexive polytopes, the base and fibration structure of one hypersurface Calabi-Yau determine the base of the other.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 245-255).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124593</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/124590</link>
<description>Essays in financial economics
Goulding, William, Ph. D., Massachusetts Institute of Technology.
The essays in this thesis study the impacts of regulation designed to curtail the accumulation of counterparty credit risk exposure at large banks on intermediation in over-the-counter derivatives markets. In Chapter 1, I analyze the effect on equity holders of regulation focused on exposures held off-balance sheet. While requiring banks to hold capital against their on-balance-sheet exposures largely does not perturb equity holder valuation, capital held against off-balance-sheet exposures decreases the equity claim by generating a deleveraging effect when the balance sheet is expanded. Shareholders command a premium to compensate for the change in the value of their claim, leading to a deviation of the prices of redundant derivative claims from their replicating portfolios. Importantly, this effect does not appear under a standard leverage ratio. Chapter 2 examines the impact of regulation designed to reduce counterparty risk between large banks on trading behavior in over-the-counter derivatives markets. In particular, I use trade-level data in a causally identified setting to show that the counterparty risk portion of the Supplementary Leverage Ratio reduces trading volume in foreign exchange swaps and increases prices on executed transactions. Further, I show that over-the-counter derivative netting creates a connection between derivatives of differing underlying asset classes through regulatory constraint. The effects of information disclosure in annual stress testing exercises conducted by the Federal Reserve are analyzed in Chapter 3. I find evidence of a reduction in intermediation activity in foreign exchange derivatives following stress test scenario announcement, and an increase following results announcement. While changes in volume show a consistent effect across test implementations, pricing effects are heterogeneous. 
My results show that the degree to which the stress testing process affects bank portfolio allocation in reference to its primary regulatory ratios is important in explaining the effects of information disclosure, with trading behavior adjusting to restore an optimal set of regulatory constraints conditional on stress test scenarios.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124590</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Healthcare Systems : three studies of patient management and policy change</title>
<link>https://hdl.handle.net/1721.1/124589</link>
<description>Healthcare Systems : three studies of patient management and policy change
Hashmi, Sahar.
For my PhD thesis, I conducted behavioral science research and wrote three first-author journal-format papers, of which one has been published and the other two will be submitted to healthcare management journals after completion of my degree. All three papers introduce new information about either the cost or the behaviors of patients in local clinics, filling a gap in the healthcare system's management and policy literature. The first paper studies patients with diabetes who are non-adherent to scheduled appointments with physicians in a specialized diabetes clinic setting in Boston. I developed and introduced new "technology comfort" measures and a "smartphone usage" scale to evaluate whether patients would be able to use smart technologies for their disease self-management. This paper not only suggests that patients with diabetes could potentially benefit from using existing advanced technologies, but also that new policies can be introduced to reduce the rate of diabetes patients' appointment-related non-adherence. The second paper examines the system of adherence or self-management in five areas (diet, exercise, medications, doctor's appointments and regular glucose monitoring), revealing how it is correlated with emergency visits and patient lifestyle satisfaction. I analyze predictors of emergency room visits and propose potential policies to reduce these ER visits through the use of advanced smart technologies. The third paper identifies the incidence and consequences of not practicing non-pharmaceutical interventions, during the time of a pandemic, in a student population at a local university clinic.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018; Cataloged from PDF version of thesis. "Doctor of Philosophy in Healthcare Systems: Management and Policy Research."; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124589</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on managing innovation</title>
<link>https://hdl.handle.net/1721.1/124587</link>
<description>Essays on managing innovation
Kearney, Michael J. (Michael Joseph)
This dissertation investigates how choices by managers in research and entrepreneurial settings affect innovation and entrepreneurial outcomes. In the first three chapters, my coauthors and I consider the role of grant-makers in inducing exploitation or exploration among grant recipients at ARPA-E. We use internal data from ARPA-E project selection and quarterly performance reviews to show how active project management enables risk mitigation across a portfolio of projects. In the fourth chapter, we consider a set of decisions made by entrepreneurs related to technology commercialization. Specifically, this paper reconceptualizes the Technology S-Curve not as a technological given but as an envelope of potential outcomes derived from managerial action. We define and investigate a choice-based approach along several key dimensions of technological options, including the tradeoffs between exploration and exploitation, general versus specialized versions of a technology, and modular versus systems-oriented innovations. In the fifth chapter, I empirically assess I-Corps, an entrepreneurial training program at the National Science Foundation. Using data from the last 11 years of NSF grant awardees, I find that entrepreneurial training reduces perceived barriers for academics to commercialize their research, resulting in the formation of more innovation-driven enterprises. The results are particularly important for early-career academics, such as graduate students and postdocs. The results also confirm that barriers to commercialization are higher for women and for academics in locations that are not traditional hubs of entrepreneurship.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124587</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing vaccine dosing kinetics for stronger antibody response</title>
<link>https://hdl.handle.net/1721.1/124586</link>
<description>Optimizing vaccine dosing kinetics for stronger antibody response
Kang, Myungsun (Myungsun Sunny)
One of the barriers to rational vaccine design against evolving pathogens is our lack of mechanistic understanding of how innate and adaptive immune responses systematically emerge and evolve. The immune response comprises dynamic events that require many components to cooperate collectively in a manner that spans a range of scales. These characteristics make it hard to predict mechanisms for the immune response based solely on experimental observations. This thesis investigates various aspects of affinity maturation that are relevant to vaccination and therapeutic strategies but are not yet fully understood mechanistically, ranging from the evolution of the heterogeneity of the antibody population with respect to affinity to optimal design parameters for temporal dosing of vaccines. Our approach is to apply computational techniques to mathematically model the immune system, in synergy with complementary experiments. 1. As affinity maturation ensues, the average affinity of antibodies increases with time while the resulting affinity distribution becomes increasingly heterogeneous. To shed light on how the extent of this heterogeneity evolves with time during affinity maturation, we have taken advantage of previously published data on antibodies isolated from individual serum samples. Using the ratio of the strongest to the weakest binding subsets as a metric of heterogeneity (or affinity inequality), we find that after a single injection of small antigen doses, the ratio decreases progressively over time. This is consistent with Darwinian evolution in the strong selection limit. By contrast, neither the average affinity nor the heterogeneity evolves much with time for high doses of antigen, as competition between clones of the same affinity is minimal. 2. What aspects of affinity maturation are altered by various temporal patterns of antigen dosing?
Certain extended-duration dosing profiles increase the strength of the humoral response, with an exponentially increasing (EI) dosage providing the greatest enhancement. While this is an exciting result, a mechanistic understanding of how the immune response is enhanced is necessary to further engineer and optimize the temporal patterns. In our computational model, the effect is driven by enhanced capture of antigen in lymph nodes by evolving higher-affinity antibodies early in the germinal center (GC) response. We validate this prediction against independent experimental data, in which EI dosage results in promoted capture and retention of the antigen in lymph nodes. To our knowledge, this work is the first to demonstrate a key mechanism for vaccine kinetics in the response of B cells to immunization, and it may prove to be an effective method for increasing the efficacy of subunit vaccines. 3. Are there optimal dosing profiles that maximize total protection, that is, that lead to the evolution of the most antibodies of high affinity? Extending the mechanistic studies in 2, we propose a stochastic simulation method that can be used as a tool for optimizing dosage protocols for vaccine delivery. Using this tool, we analyze the experimental conditions under which EI dosage induces a suboptimal immune response and investigate two approaches to optimization. Specifically, reducing the total dosage optimizes the affinity of the resulting antibodies, while total protection is optimal neither at constant nor at EI dosage but at a "linear-like" dosing profile. Our approach can be extended to broader applications in vaccine design.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis. "The pagination in this thesis reflects how it was delivered to the Institute Archives and Special Collections. The Table of Contents does not accurately represent the page numbering"--Disclaimer Notice page.; Includes bibliographical references (pages 95-102).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124586</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalysis and reactor engineering for the electrochemical conversion of carbon dioxide to carbon monoxide</title>
<link>https://hdl.handle.net/1721.1/124584</link>
<description>Catalysis and reactor engineering for the electrochemical conversion of carbon dioxide to carbon monoxide
Brown, Steven M.(Steven Michael)
Carbon dioxide (CO₂) utilization processes have garnered significant interest both to mitigate anthropogenic greenhouse gas emissions and to increase the revenue of many chemical and fuel production processes. Low-cost renewable electricity provides impetus for exploring electrochemical methods to recycle CO₂ in cost-competitive and sustainable ways. Researchers have experimentally demonstrated CO₂ transformations into a variety of industrially relevant materials. Techno-economic assessments indicate that the simplest transformations, such as generation of carbon monoxide (CO), appear to be the most feasible in the near future. Yet widespread commercialization of this nascent technology has not occurred due to a number of challenges that include synthesizing stable and active catalyst materials, understanding activity-driving force relationships, identifying appropriate reactor configurations, and developing comprehensive process models. This thesis advances both the experiment and theory of CO₂ conversion to CO through electrocatalysis and reactor engineering. A flow reactor was designed and manufactured to understand CO₂ reduction in ways that traditional batch electroanalytical devices cannot. A key advantage is that, through the use of porous carbon electrodes, the reactor converts gaseous CO₂ to CO at the catalyst-electrolyte interface in a continuous fashion, alleviating the mass transport limitations common to liquid-phase CO₂ delivery systems. To facilitate this transformation, a carbon-supported gold nanoparticle catalyst was synthesized and deposited onto gas diffusion electrodes.
These nanoparticles achieved high selectivity (&gt;90%) to CO formation over the competing hydrogen evolution side reaction. Traditional electrokinetic analyses (e.g., Tafel) were largely unsuccessful at describing the observed current-potential relationship, prompting a rigorous follow-up study on electrokinetics in which Marcus theory and mass transport convolution, among other considerations, were explored. A Bayesian statistical analysis concluded that, despite the ubiquitous use of Butler-Volmer kinetics in the literature, a Marcus-Hush-Chidsey model in fact provides the most accurate description of the electrochemical reduction of CO₂ to CO. The implications of this result are significant, potentially amounting to order-of-magnitude differences in projected current densities at high overpotentials. Overall, this thesis experimentally measured catalysis rates in a specialized electrochemical flow cell, unimpeded by mass transport, and, leveraging these results as well as those from prior literature, advanced electrokinetic descriptions of CO₂ conversion, all towards furthering the commercial prospects of this electrochemical technology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-176).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124584</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reductive conversion of lignin to aromatic chemicals with earth abundant catalysts</title>
<link>https://hdl.handle.net/1721.1/124580</link>
<description>Reductive conversion of lignin to aromatic chemicals with earth abundant catalysts
Anderson, Eric Michael, Ph. D. Massachusetts Institute of Technology.
The viability of lignocellulosic biomass as a feedstock for chemicals hinges on the successful utilization of the lignin. Lignin, which comprises 15-30 wt% of biomass, is an amorphous polymer composed of multiple phenolic monomers. Lignin polymerization occurs through an uncontrolled radical coupling of monomers, leading to an array of C-O and C-C bonds in the polymer. As such, lignin is highly recalcitrant and is responsible for the poor utilization of biomass. Many harsh thermochemical processes exist to extract lignin, but they typically result in the destruction and condensation of the lignin, rendering it process waste. Alternative fractionation techniques focused on preserving lignin have recently been developed. These methods utilize reduction catalysts to depolymerize and stabilize reactive lignin fragments to produce stable phenols. This work focuses on the design of flow reactors to understand this process, evaluate catalysts, and elucidate how structural changes in biomass impact lignin depolymerization and upgrading. Our lignin conversion process operates by combining a heterogeneous reduction catalyst with whole biomass, a solvent, and a reducing agent. Two key steps were discovered in the reductive conversion of lignin. First, lignin oligomers are liberated from the biomass by solvolytic cleavage of lignin-carbohydrate bonds. Next, lignin oligomers are reductively fragmented at the catalyst surface to produce stable phenolic compounds. Lignin solvolysis and reduction were physically separated in a dual-bed flow-through reactor, which allows for independent control of each step. Direct control of lignin solvolysis and reduction allowed limiting conditions to be isolated. Reduction-limited conditions were used to study catalyst activity and stability. Solvolysis-limited conditions were used to probe how lignin structure influences product selectivity and yield.
Additionally, molybdenum-based catalysts were developed for the conversion of lignin-derived phenols into aromatics. Gas phase reactions with n-propyl guaiacol demonstrated that molybdenum polyoxometalates are effective catalysts for performing simultaneous alkylation-hydrodeoxygenation to produce alkylated aromatics from phenolics. Finally, the dual-bed reactor was used to combine reductive fractionation and deoxygenation chemistry to directly convert lignin into aromatics over a molybdenum carbide catalyst. Overall, a versatile reactor system was developed to facilitate fundamental studies of biomass structure and catalyst performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis. "The pagination in this thesis reflects how it was delivered to the Institute Archives and Special Collections. The Table of Contents does not accurately represent the page numbering"--Disclaimer Notice page. "February 2019."; Includes bibliographical references (pages 209-228).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124580</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translating dynamics of human-microbe interactions</title>
<link>https://hdl.handle.net/1721.1/124579</link>
<description>Translating dynamics of human-microbe interactions
Chu, Nathaniel David.
High-throughput genetic sequencing has revolutionized our ability to systematically quantify and analyze biological systems. These methods have been particularly fruitful in understanding the composition and dynamics of the microbial communities that inhabit the human body and how our cells interact with these microbes to maintain health or generate disease. In this thesis, I describe the results of four projects that used high-throughput sequencing to interrogate the dynamics of four systems at the boundary of man and microbe. In the first project, my coauthor and I discovered a mechanism by which marine bacteria dynamically become hypermutators, allowing for rapid adaptation, and we discovered similar mechanisms in clinically relevant pathogens. In the second project, I developed a new method for selectively profiling living bacteria in a sample, allowing me to assess the effects of fecal processing on the viability of bacteria in fecal microbiota transplantations. In the third project, I characterized the longitudinal dynamics of the T cell receptor repertoire in healthy people, providing a critical baseline for interpreting changes in the adaptive immune system, our first line of contact with commensals and pathogens. In the fourth project, I tracked the dynamics of engrafting bacteria in a clinical trial of patients with inflammatory bowel disease who received fecal microbiota transplants, demonstrating that patients differ not only in their capacity to accept donor bacteria, but also in their ability to maintain those bacteria. Aside from contributing scientific conclusions about each system, these studies exemplify how genetic sequencing allows us to directly study the complexity of human subjects, providing a shorter path to translatable clinical insights.
Thesis: Ph. D. in Microbiology, Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124579</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A framework for analyzing nuclear power multiunit accident scenarios and providing accident mitigation and site improvement suggestions</title>
<link>https://hdl.handle.net/1721.1/124578</link>
<description>A framework for analyzing nuclear power multiunit accident scenarios and providing accident mitigation and site improvement suggestions
Cai, Yinan, Ph. D. Massachusetts Institute of Technology.
During the Fukushima Daiichi and Daini accidents, the interactions of multiple units at the same site made accident mitigation more difficult than at single-unit sites. The accidents revealed important multiunit risk sources that are not identified by risk assessments for single-unit sites. It is therefore important to obtain an integrated risk evaluation for multiunit sites. However, multiunit accident scenarios are difficult to analyze due to the complexity of multiunit interactions. In the work reported here, a framework capable of analyzing multiunit accident scenarios involving inter-unit interactions is presented. Our framework provides a structured method for analyzing accident propagation events, which have received little study to date. In addition, our framework can provide accident mitigation and site improvement suggestions that help improve site safety. The accident scenarios and risk contributors analyzed in our framework are developed from our interviews with Tokyo Electric Power Company (TEPCO) engineers concerning their experiences during the 2011 Fukushima accidents. This first-hand information helps us better understand multiunit accident scenarios and the difficulties in multiunit accident mitigation. In this work, the major steps of our framework are first explained using a simplified two-unit site, constructed so that distractions from overly complex systems are minimized. Analyses of additional risk contributors are then illustrated using a relatively complex two-unit site, which demonstrates the capability of our framework to analyze complex sites. Even though only a limited number of accident scenarios and risk contributors are illustrated in our work, the capability of the framework goes beyond these. With proper input information, our framework can be adapted to sites and accident scenarios more complex than those illustrated in our work.
Analyzing multiunit risks using our framework can help sites refine expertise and data, identify hidden multiunit vulnerabilities, and eliminate them in advance. In addition, the risk assessment groups developed during this process can support emergency training and risk communication as well as provide risk assessment leadership for utilities.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 205-208).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124578</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A framework to assess the economic and uncertainty implications for technologies for use in decarbonization</title>
<link>https://hdl.handle.net/1721.1/124576</link>
<description>A framework to assess the economic and uncertainty implications for technologies for use in decarbonization
Dawson, Karen Margaret.
The accumulation of greenhouse gases is causing climate change on a global scale. Simulations of warming scenarios suggest that complete replacement of fossil fuels within the global energy economy must occur within about 60 years if warming is to be limited to 2°C (Prinn et al., 2011). This means that the discussion about which decarbonization pathways to pursue will need to occur soon, and choices will likely be difficult to correct later. This thesis developed and demonstrated two frameworks that can be used to guide discussions of decarbonization pathway choices. The first framework determines the economic usefulness of a technology by finding the difference in total system cost with and without that technology (the opportunity cost of not utilizing the technology). The second framework quantifies uncertainties in proposed decarbonization pathways, propagates them through to target variables (such as carbon emissions), and calculates the probability of failing (or succeeding) to meet a target. Each framework is demonstrated with an example case set in the year 2050. The first framework assessed the economic usefulness of nuclear technology in regions of the United States, China, and the United Kingdom under carbon emission constraints from 500 g/kWh to 1 g/kWh. It was found that the economic usefulness of nuclear technology depends upon the capital cost of nuclear as well as the renewable resources of the region. The second framework is used to assess the probability of meeting carbon emission targets at different carbon prices. It is found that nuclear technology increases the probability of meeting a carbon emission target (compared to a scenario where nuclear technology is unavailable). In addition, it is found that cases that benefit nuclear technology (such as electrification of space heating or a flexible, low-price electricity market) further increase the probability of meeting a carbon emission target.
It is also found that the uncertainties in the discount rate and nuclear capital cost have the biggest influence on the distribution of possible carbon emissions in 2050. The development and demonstration of these frameworks show how discussions of decarbonization pathway choices can be guided. As the timeline to decarbonize shortens, the choice of decarbonization pathway will shift from optimizing on cost alone to balancing cost and risk.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Page 165 blank. Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 137-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124576</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision measurement of and search for dark matter in the transverse momentum spectra of Z bosons</title>
<link>https://hdl.handle.net/1721.1/124575</link>
<description>Precision measurement of and search for dark matter in the transverse momentum spectra of Z bosons
Hsu, Dylan George.
A measurement of the differential Z boson production cross section in proton-proton collisions is presented. It furnishes a precision test of the Standard Model, and constrains the parton distribution functions of the proton. Moreover, it is a building block for future measurements of the mass of the W± boson. A study of the efficiency of lepton identification algorithms is performed, which drives the precision of the measurement at lower values of transverse momentum. In tandem, a search for new physics in events with a Z boson produced in association with large missing transverse momentum is presented. The results of this search are interpreted in the context of several dark matter models: generic spin-0 or spin-1 mediators, invisible decays of a Higgs-like boson, unparticles, and large extra spatial dimensions. A multivariate analysis was developed to enhance the sensitivity of the invisible Higgs interpretation. The theoretical uncertainty on the irreducible background from electroweak diboson processes is constrained by emulating the missing energy using pure control samples in the fully leptonic final states. The data were collected with the Compact Muon Solenoid detector at the Large Hadron Collider and correspond to an integrated luminosity of 35.9 fb⁻¹. No significant deviations from the Standard Model are found.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 241-256).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124575</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building blocks for regenerative medicine : vascularized models and immunomodulation to engineer hepatic cell therapies</title>
<link>https://hdl.handle.net/1721.1/124281</link>
<description>Building blocks for regenerative medicine : vascularized models and immunomodulation to engineer hepatic cell therapies
Chhabra, Arnav.
Despite significant advances in our knowledge of liver pathology and regeneration, non-palliative regimens for most end-stage liver disorders are limited. The only definitive treatment for end-stage liver failure is an orthotopic liver transplant. Unlike other major causes of mortality, death rates from liver diseases are rising rather than declining, leading to a widening discrepancy between the supply of and demand for liver transplants. Thus, the development of an engineered liver construct is crucial to provide an alternative to whole-organ transplantation, or even temporary support during medical intervention. The development of functional solid organ systems like the liver is an arduous task and faces many challenges. This thesis sought to address two of these challenges: hepatocyte (liver parenchymal cell) sourcing and immune compatibility. The main hurdle in primary human hepatocyte sourcing is the loss of the cells' regenerative potential ex vivo. In vivo, liver regeneration occurs as a well-orchestrated process. Although animal models have offered myriad insights into the process, the exact mechanisms and interactions in humans are less well understood. To this end, we developed a three-dimensional platform called SHEAR (structurally-vascularized hepatic ensembles for analyzing regeneration) to model multiple aspects of human liver regeneration. SHEAR enables precise hemodynamic alterations and supports biochemical inputs such as cytokines and paracrine interactions with endothelial cells. We found that exposing the endothelium to fluid flow led to increased secretion of regeneration-associated factors.
Stimulation with relevant cytokines not only amplified the secretory response, but also induced cell cycling of human hepatocytes within the device. By applying unsupervised machine learning techniques to scan the secretome in stimulated devices, we identified endothelial-derived mediators that can independently stimulate proliferation of human hepatocytes. Collectively, the data presented here underscore the importance of multicellular models that integrate tunable biochemical and fluid forces, and demonstrate that SHEAR devices can be used to discover and validate conditions that promote human liver regeneration. Limiting the allogeneic immune response is a major challenge in the implementation of cell-based therapies. To ameliorate this problem, we engineered an immune cloak around transplantable liver tissue by enabling trans-presentation of immune checkpoint pathways. This technology is called SHIELD (stealth hepatic immunotolerant ensembles for liver disease). SHIELD activates checkpoint pathways in supporting stromal cells and/or in endothelial cells lining the vasculature to induce immune cell exhaustion and anergy. These support cells are admixed with hepatocytes using tissue fabrication techniques and serve as the protector unit. SHIELD imparts inhibitory signals to immune cells surveying the graft, such that the graft is not rejected by Human Leukocyte Antigen (HLA)-mismatched T cells. Thus, SHIELD provides localized, controllable immune tolerance in a number of transplantation settings.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124281</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boundaries, disorder and noise in quantum-coherent systems</title>
<link>https://hdl.handle.net/1721.1/124280</link>
<description>Boundaries, disorder and noise in quantum-coherent systems
Shtanko, Oles.
This thesis aims to understand how quantum-coherent phenomena in solids are affected by the presence of boundaries, disorder, and classical noise. First, we consider surface states in Dirac materials. We show that these systems can host ballistically propagating modes that are immune, to a large extent, to disorder at boundaries. Furthermore, in contrast to conventional Tamm-Shockley states, Dirac surface states exist for generic boundaries, regardless of the specifics of the boundary conditions. Because of the robustness of surface states and their insensitivity to disorder, their contribution to transport dominates over the bulk contribution when the number of carriers is low enough. Our analysis was motivated by recent experimental observations pointing to the existence of surface states propagating ballistically along imperfect boundaries. Second, we study quantum-coherent transport in the presence of strong classical noise. Noise is usually the nemesis of quantum coherence; we find, however, that it can also produce new dynamical effects that are absent in noiseless isolated systems. In particular, at certain length scales, noise may lead to new collective behavior involving the formation of vortices and Poiseuille flows. It is surprising that a quantum-coherent system can mimic behaviors usually associated with hydrodynamic transport. We provide a detailed microscopic analysis of these phenomena and derive an equation describing such quasi-viscous flows. Our prediction of environmentally induced quasi-viscosity suggests new transport regimes accessible in solid-state devices, cold atomic systems, and photonic quantum simulators.
Third, we study the stability of topological phases in strongly disordered, periodically driven Floquet systems. Since the conventionally used perturbative expansions are insufficient in the presence of strong disorder, we develop an approximate non-perturbative scheme, which we then apply to disordered Floquet systems driven at low frequency. We employ methods from free probability theory to predict the bandgap in Floquet topological phases and to explore a disorder-induced gap-closing phase transition. Our analysis predicts the critical disorder value and critical exponents at the transition. The estimated critical disorder strength can be an order of magnitude larger than the bandgap, pointing to the extraordinary stability of topological phases in realistic experimental settings.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-143).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124280</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sub-state nationalism and social solidarity : essays on Spain and the United Kingdom</title>
<link>https://hdl.handle.net/1721.1/124279</link>
<description>Sub-state nationalism and social solidarity : essays on Spain and the United Kingdom
Berwick, Elissa,1987-
This dissertation focuses on sub-state nationalism in Europe, concentrating on the critical cases of Spain and the United Kingdom, where recent independence referenda in Catalonia and Scotland illustrate that sub-state nationalism is neither confined to weak states nor a thing of the past. In the first paper, I argue that the mobilization of sub-state identity in Spain influences preferences for what I term policy scope, the physical area in which a policy is intended to apply and be carried out, distinct from policy content, the intended effects of a policy. In the particular case of preferences for redistribution, I find that strong identifiers in the highly mobilized region of Catalonia, whether they identify more with Catalonia or with Spain, are more likely to support redistribution when its scope is the community with which they most identify. The importance of scope is not merely due to in-group/out-group bias, but also stems from differential trust in political elites, such that shared identity between respondents and elites increases support for redistribution. In the second paper, I demonstrate that strong identifiers with sub-state units in the United Kingdom display attitudes of bounded social solidarity, adopting different redistributive preferences for the subnational and national community. British identifiers display more encompassing solidarity and generalized trust, while sub-state identifiers are more apt to distinguish between state and region in their attitudes. Importantly, the bounded social solidarity of strong English, Scottish, and Welsh identifiers is not solely the product of self-interest-based dynamics, but rather depends on the complex interplay of regional wealth, individual income, individual identity, and the effect of identity on social trust. The third paper presents a framework for estimating multidimensional, dynamic, group-level latent preferences, and leverages it to compare the ideological leanings of Spanish regions. 
Geographic variation in preferences is particularly relevant in cases such as Spain, where strong regional identities complicate individuals' evaluations of what the state "should" do. Post-stratified estimates of latent ideology for Spanish regions highlight how fully understanding preference structure in this context requires grappling with concerns related to policy scope as well as policy content.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 170-178).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124279</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase transition induced deformation in porous media</title>
<link>https://hdl.handle.net/1721.1/124278</link>
<description>Phase transition induced deformation in porous media
Zhou, Tingtao(Edmond Tingtao)
Capillary condensation-evaporation and freeze-thaw processes are the most familiar examples of first-order phase transitions in equilibrium thermodynamics. Porous, amorphous materials are widely used in everyday life, yet their complex spatial structures lead to unresolved questions for statistical mechanics, both in and out of equilibrium. In this thesis I examine the mechanical consequences of capillary forces and the freezing transition in porous media, using cement, a construction material of pivotal importance, as an example of practical concern. During changes in relative humidity, the amount of water absorbed in a porous material varies and typically displays hysteresis, i.e., differences between adsorption and desorption at the same relative humidity. This process is accompanied by mechanical deformations, such as drying shrinkage in cement. A parallel computing library is developed to simulate the adsorption/desorption processes using a lattice gas model. Based on my derivation of the generalized Maxwell-Korteweg stress tensor from Landau-Ginzburg theory, capillary forces are obtained and coupled into nano-particle movements using molecular dynamics simulation techniques. I then investigate the poromechanics of wet cement using this framework, testing the continuum postulate at different length scales and showing local irreversible deformations despite a linear elastic response to capillary forces on the macroscopic scale. Freezing poses threats to both living systems and infrastructure. Conventional thinking attributes the damage to water volume expansion upon freezing. However, multiple field observations and experimental results conflict with this thinking. 
To resolve the paradoxes, a thermodynamically consistent theory that highlights the role of charged pore surfaces as well as multiscale porosity is presented, predicting freezing point depression and pressures in different limits of salt behavior. An explanation of damage based on a nano-fluidic salt-trapping mechanism is qualitatively consistent with experimental observations. Further implications for the freezing tolerance of biological materials are discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124278</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on politics and education</title>
<link>https://hdl.handle.net/1721.1/124277</link>
<description>Essays on politics and education
Cox Alcaíno, María Loreto.
This thesis comprises two projects on politics and education. The first studies the consequences of unmet expectations after higher education for political behavior. It uses a large-sample panel that I designed and implemented in Chile, which follows students in their incorporation into the labor market (14,233 higher education students around graduation time in 2016, and 3,948 of them one year later). I use two different empirical methods. First, a survey experiment (Paper 1): after revealing their labor expectations for one year later, and before answering questions on political behavior, respondents randomly received information on the average labor outcomes of past graduates from their same degree and institution. Second, panel methods (Paper 2): I calculate the gap between expected and actual outcomes of higher education at the individual level and estimate the relation between gaps and changes in political behavior. Both methods robustly show that unmet expectations move individuals toward a more pro-equality/pro-government ideology. I interpret these results as coming from undermined perceptions of social mobility, induced by unmet expectations, along the lines of Piketty (1995) and Alesina and Giuliano (2011). Regarding political position, respondents with unmet expectations, despite moving to the left in ideology, punish the left-wing incumbent government for their misfortune, as in retrospective voting theory. The second project (Paper 3), coauthored with Eyzaguirre, Gallego, and Garcia, studies the electoral effects of providing information on the service provision of incumbent mayors through a randomized intervention in the 2016 local elections in Chile. 
We sent letters to 128,033 voters informing them about the performance of local public schools (levels and changes), and we varied the yardstick in each letter (average versus maximum performance). Results differ between old electoral booths, where people are older and have a longer electoral history, and new booths, where people are younger and have usually voted in a couple of elections, at most. In old booths, bad relative performance reduces turnout, which translates almost entirely into less support for the incumbent and produces spillovers to the election of local councilors. Results are concentrated in the outcomes in levels and with average performance as the yardstick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124277</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Islam, exclusivity, and the state in France</title>
<link>https://hdl.handle.net/1721.1/124276</link>
<description>Islam, exclusivity, and the state in France
Dekeyser, Elizabeth(Elizabeth A.)
This dissertation examines why individuals believe that their religious identity is incompatible with mainstream identities. Using the case of Islam in France, I demonstrate that government engagement with marginalized immigrant-origin communities plays a central role in influencing these beliefs, which I term religious exclusivity. I examine whether, why, and which type of government engagement matters, both historically and today, highlighting the central role of group identity and religious community structures and norms. I demonstrate that positive (negative) engagement decreases (increases) religious exclusivity and that this relationship is mediated by religious community structures. Beyond this, I show that positive government engagement is most effective in decreasing religious exclusivity when religious groups are not used as mediators in engaging with marginalized populations. These relationships are driven both by governments' provision of material benefits and by the impact of government actions on perceptions of group rejection. I base this theory on nearly two years of ethnographic fieldwork across France, including visits to 130 French towns and over 200 in-depth interviews. The hypotheses are tested with causal designs and computational methods using innovative data sources, including social media responses to terror attacks, online town reviews, and a novel dataset of religious community structures. This examination is essential to understanding when and why religion conflicts versus cooperates with the state; it is also integral to discussions of how religion and identity interact with integration, radicalization, social cohesion, and conflict.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; "September 2019." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 194-203).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124276</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sacred politics : religious leaders and conflict in Israel</title>
<link>https://hdl.handle.net/1721.1/124275</link>
<description>Sacred politics : religious leaders and conflict in Israel
Freedman, Michael R.(Michael Raphael)
My dissertation examines why religious leaders adopt nationalist positions and how these positions contribute to the duration of an ongoing conflict. I propose a general framework of sacred politics that incorporates the state, religious leaders, and religious communities. Within this framework, I develop a theory of religious credibility that explains the variation in religious leader ideology by examining leaders' incentives to strategically adopt ideological positions and the network of religious institutions. I test these hypotheses in Israel using a combination of methods, including statistical text methods that analyze the religious communication of different religious leaders, spatial panel data showing the electoral impact of religious institutions, and a novel experimental design in which I vary the credibility of religious leaders using religious sermons. This dissertation makes significant social science contributions, offering an account of why religious leaders adopt moderate and extreme ideologies. It also gives insight into why some religious leaders cooperate with the state, why this cooperation is crucial for the termination of conflict, and the influence that religious leaders have on the political behavior of their followers. Beyond the scholarly literature, the dissertation offers important findings for policy-makers seeking to understand and include religious leaders in development and peace-building processes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-176).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124275</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The terrible swift sword : US nuclear posture and foreign policy</title>
<link>https://hdl.handle.net/1721.1/124274</link>
<description>The terrible swift sword : US nuclear posture and foreign policy
McDonnell, Timothy P.
This dissertation explains how and why US nuclear posture has changed from the late 1940s to the present. It argues that presidents reliably pursue aggressive nuclear postures to advance their ambitious foreign and security policy goals. In the course of advancing this main argument, it makes five additional contributions. First, it overturns the conventional or folk wisdom that Mutual Assured Destruction (or MAD) characterized US Cold War nuclear posture. In fact, the desire to escape MAD, not maintain it, was a major driver of aggressive US posture. Second, it upends the standard political science argument that US nuclear posture became aggressive as a result of military service rivalries or bureaucratic pathologies within the Pentagon. When it comes to nuclear posture, presidents carry far more weight than bureaucrats. Third, it fills an important gap in the existing literature. Barrels of ink have been spilled on US nuclear weapons policy and related topics. However, surprisingly, this is the first attempt at a full-length history of US nuclear posture. Fourth, it illuminates the character of the United States' post-World War II grand strategy. For over seventy years that grand strategy has encompassed three core objectives: defending the US homeland, especially against nuclear attack; protecting distant allies in Europe and Asia from their stronger nuclear-armed neighbors; and denying the security benefits of nuclear weapons to adversaries and allies alike. The costs and risks that US presidents have consistently accepted to pursue these far-reaching goals challenge America's self-image as a benevolent steward of international order. Fifth, this project explains our nuclear posture history with a view towards facilitating wise decisions in the present. Today the US faces decades of great power competition. 
We are also undertaking a major nuclear modernization effort. By showing how thirteen presidents have set goals, made trade-offs, and balanced costs and risks in the past, I intend to facilitate the kind of informed debates on foreign policy and nuclear posture that American democracy deserves and demands.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124274</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sooner is better : covert action to prevent realignment</title>
<link>https://hdl.handle.net/1721.1/124273</link>
<description>Sooner is better : covert action to prevent realignment
Nutt, Cullen Gifford.
Why do states intervene covertly in some places and not others? This is a pressing question for theorists and policymakers because covert action is widespread, costly, and consequential. I argue that states wield it-whether by supporting political parties, arming dissidents, sponsoring coups, or assassinating leaders-when they fear that a target is at risk of shifting its alignment toward the state that the intervener considers most threatening. Covert action is a rational response to the threat of realignment. Interveners correctly recognize a window of opportunity: Owing to its circumscribed nature, covert action is more likely to be effective before realignment than after. This means that acting sooner is better. I test this argument in case studies of covert action decision-making by the United States in Indonesia, Iraq, and Portugal. I then conduct a test of the theory's power in a medium-N analysis of 97 cases of serious consideration of such action by the United States during the Cold War. Interveners, I suggest, do not employ covert action as a result of bias on the part of intelligence agencies. Nor do they use it to add to their power. Rather, states act covertly when they fear international realignment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 301-308).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124273</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical coherence vibrography : a quantitative tool for probing auditory and ocular biomechanics</title>
<link>https://hdl.handle.net/1721.1/124268</link>
<description>Optical coherence vibrography : a quantitative tool for probing auditory and ocular biomechanics
Ramier, Antoine.
Mechanical properties of biological tissues are inherently tied to their function. As such, they can provide direct insight into the structure and integrity of organs, and how they are affected by physiological and pathological processes. Optical coherence tomography (OCT) is a powerful imaging modality that can image the anatomy of biological tissues with near-cellular resolution. It can also be used to measure vibrations and deformations with nanometer-level sensitivity. This combination of tomography and vibrometry -- OCT vibrography -- forms a tool that is singularly positioned to quantify biomechanical behavior at the tissue scale. This thesis focuses on two promising fields of application for OCT vibrography: otology and ophthalmology. Sound-driven vibrations in the middle-ear ossicular chain and in the tympanic membrane are fundamental to hearing. Using the chinchilla ear as a model, we investigate the vibrational amplitude and phase as a function of sound frequency. Our 3-dimensional measurements reveal with unprecedented detail the modes of motion of the ossicular chain of an intact middle-ear. The ability of the cornea to focus light into a sharp image on the retina depends on its shape, which in turn is regulated by its mechanical properties. By measuring the velocity of mechanical waves, induced by an external stimulus and tracked using OCT vibrography, acoustic theory can be used to calculate the shear-elastic modulus of the corneal stroma. Our study demonstrates the first OCT-based quantification of corneal elasticity in live humans.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 137-155).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124268</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical and numerical study of air entrainment and bubble size distribution in strong free-surface turbulent flow at large Froude and Weber number</title>
<link>https://hdl.handle.net/1721.1/124246</link>
<description>Theoretical and numerical study of air entrainment and bubble size distribution in strong free-surface turbulent flow at large Froude and Weber number
Yu, Xiangming,1987-
Strong turbulence near an air-water interface, characterized by large Froude (Fr) and Weber (We) numbers, leads to significant interactions and exchanges between gas and liquid, resulting in measurable air entrainment. Air entrainment influences a number of physical processes in nature, including air-sea gas transfer, the production of sea-salt aerosols, and the scavenging of biological surfactants. The key factor controlling these processes is the size distribution of entrained bubbles. However, the underlying mechanisms and physics of air entrainment driven by free-surface turbulence (FST), and the resulting bubble size distribution, remain unclear. Detailed studies of air entrainment in strong free-surface turbulence (SFST) are therefore of fundamental scientific interest. With recent interest in modeling the white bubbly water in ship wakes, these studies are also of practical importance to the design and analysis of modern surface vessels. In this thesis, we perform both theoretical and numerical studies of air entrainment and bubble size distribution in SFST at large Fr and We. The thesis work 1) characterizes the unique turbulence characteristics of SFST; 2) quantifies the entrainment volume and the corresponding size distribution of SFST air entrainment; 3) elucidates the mechanisms and physics behind the bubble size distribution of SFST entrainment; and 4) provides useful insight and guidance for the development of sub-grid air entrainment models ...
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-192).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124246</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete robotic construction</title>
<link>https://hdl.handle.net/1721.1/124210</link>
<description>Discrete robotic construction
Langford, William Kai.
Robots, which require the integration of a wide variety of mechanical and electrical functionality, are seldom built in a single process, but are instead assembled from parts created using a variety of different processes. While fabrication has advanced significantly, enabling the routine production of complex and precise objects from computer designs, the assembly processes used to integrate these parts are still largely manual and are notoriously difficult to automate. Recent research in digital fabrication has looked for ways to avoid assembly altogether by manufacturing integrated devices in a single process, but has often struggled to integrate more than a few materials or functionalities. Instead of avoiding assembly, this work embraces it. Inspired by the universality of the amino acids that are the basis of molecular biology, I demonstrate an interchangeable set of building blocks that enables the construction of a wide variety of robotic capabilities, including machines that can assemble themselves. In this thesis I introduce a discrete approach to robotic construction that enables the fabrication of structure, mechanism, actuation, circuitry, and computation in a single process through the assembly of a small set of building blocks. This work is based on discretely assembled "digital" materials, in which parts are reversibly joined with a discrete set of relative positions and orientations, allowing global geometries to be determined from local constraints, assembly errors to be detected and corrected, heterogeneous materials to be joined, and parts to be disassembled and reused rather than disposed of. This approach simplifies the fabrication of integrated electromechanical machines and points to the possibility of building technology that is able to grow (exponential self-assembly) and self-repair. 
Furthermore, this approach discretizes robotic systems at a finer granularity than prior work in modular robotics, offering benefits including the flexibility to integrate heterogeneous functions, agility to rapidly construct and modify designs, and incremental extensibility in both system size and performance. These benefits help lower barriers in the rapid prototyping of electromechanical machines, make designs more reusable by providing a physical representation that facilitates design automation and abstraction, and enable machines that are more integrated than would be practical with alternative methods.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 127-138).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124210</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Human and artificial intelligence in decision systems for social development</title>
<link>https://hdl.handle.net/1721.1/124209</link>
<description>Human and artificial intelligence in decision systems for social development
Noriega Campero, Alejandro.
Today there is widespread expectation about how ubiquitous data and intelligent systems may revolutionize society towards shared prosperity or, conversely, deepen social inequalities, bring the end of human agency, and erode the right to privacy. In this two-part thesis, we investigate the societal value and perils of hybrid decision systems, which integrate elements of human and artificial intelligence. Part I of this thesis focuses on their potential for promoting social development goals, with applications in poverty alleviation and public health. Towards public health, in the context of early detection of diabetic blindness, we show that human-AI hybrid systems can be more accurate than either human or algorithm in isolation, and that both opinions benefit from mutual exposure. Towards improved poverty action, we argue that poverty-targeting rules are among the most relevant algorithms operating in the world today. We demonstrate that a shift towards the use of AI methods in poverty-based targeting can substantially increase accuracy, extending the coverage of social policies by nearly a million people in the case of two Latin American countries, without increasing budgets. However, it is also shown that both the status quo and AI systems induce disparities across population subgroups. Hence, we close by proposing a decision support tool that empowers diverse social institutions to design fair targeting rules under a distributed governance framework. Part II addresses cross-cutting challenges that arise as one applies these technologies in real-world domains towards social development. In particular, the work presented provides academic and practical contributions on: 1) achieving fairness in algorithmic decision systems by means of adaptive information collection, a novel paradigm we call active fairness; and 2) preserving privacy and mapping its tradeoff against utility in development contexts.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 131-142).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124209</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural-embedded discrete choice models</title>
<link>https://hdl.handle.net/1721.1/124207</link>
<description>Neural-embedded discrete choice models
Han, Yafei.
This dissertation is motivated by the possible value of integrating theory-based discrete choice models (DCMs) and data-driven neural networks. How to benefit from the strengths of both is the overarching question. I propose hybrid structures and strategies to flexibly represent taste heterogeneity, reduce potential biases, and improve predictability while maintaining model interpretability. I also utilize neural networks' training machinery to speed up and scale up the estimation of Latent Class Choice Models (LCCMs). First, I embed neural networks in DCMs to enable flexible representations of taste heterogeneity and enhance prediction accuracy. I propose two neural-embedded choice models: TasteNet-MNL and nonlinear-LCCM. Both models provide a flexible specification of taste as a function of individual characteristics. TasteNet-MNL extends the Multinomial Logit Model (MNL). A feed-forward neural network (TasteNet) is utilized to predict taste parameters as a nonlinear function of individual characteristics. Taste parameters generated by TasteNet are then fed into a parametric logit model to formulate choice probabilities. I demonstrate the effectiveness of this integrated model in capturing nonlinearity in tastes without a priori knowledge. Using synthetic data, TasteNet-MNL is able to recover the underlying utility specification and predict more accurately than some misspecified MNLs and continuous mixed logit models. TasteNet-MNL also provides interpretations close to the ground truth. In an application to a public dataset (Swissmetro), TasteNet-MNL achieves the best out-of-sample prediction accuracy and discovers a broader spectrum of taste variation than the benchmark MNLs with linear utility specifications. Nonlinear-LCCM enriches the class membership model of a typical LCCM. I represent an LCCM by a neural network and add hidden layers with nonlinear transformations to its class membership model. 
The nonlinearity introduced by the neural network provides a flexible approximation of the mixing distribution for both systematic and random taste heterogeneity. I apply this method to model Swissmetro mode choice. The nonlinear-LCCM outperforms an LCCM with a linear class membership model with respect to the out-of-sample prediction accuracy. Nonlinear-LCCM also provides interpretable taste parameters for each latent class.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 131-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124207</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of differential color refraction on astrometric observations of Solar System bodies and Earth satellites from ground-based optical telescopes</title>
<link>https://hdl.handle.net/1721.1/124204</link>
<description>The effect of differential color refraction on astrometric observations of Solar System bodies and Earth satellites from ground-based optical telescopes
Geykhman, Roman O.
Earth's atmosphere is optically dispersive and subjects astrometric observations from ground-based optical telescopes to systematic bias from differential color refraction (DCR). This bias is evident in Minor Planet Center observations of asteroids with known spectral types and in observations of GPS and GLONASS satellites. DCR bias is on the order of 0.1 arcsec, and until recently, fixed-pattern star catalog errors exceeded this level. With the release of the Gaia DR2 star catalog in April of 2018, catalog error is no longer dominant and the systematic error floor in ground-based astrometry is defined by DCR. Unaccounted-for DCR bias in observations can introduce a small but statistically significant bias into the estimate of Keplerian mean motion of inner Solar System asteroids, reduce the probability of successfully observing a stellar occultation by a Kuiper Belt Object, and in rare pathological cases can mean the difference between predicting an impact or a miss by a hazardous asteroid. DCR in observations of geostationary satellites can introduce a large bias into the estimate of solar radiation pressure area-to-mass ratio in a single-night orbit fit and tens of meters of error into an orbit prediction derived from several nights of observation. Measurements of the 2017 near-Earth flyby of the asteroid 3122 Florence from MIT and MIT Lincoln Laboratory facilities in Westford, MA and Socorro, NM suggest that narrow passbands are insufficient to mitigate DCR, and measurements of a sample of geostationary satellites' spectra at the US Naval Observatory Flagstaff Station show that the DCR bias of active satellites can vary by up to 0.1 arcsec over half an hour. While the DCR of fiducial stars is predictable from catalog data, satellites' DCR must be measured directly. To that end, a slitless spectrograph was deployed at the Firepond Optical Facility in Westford, MA and observed GPS and GLONASS satellites over seven nights. 
Using these data, I demonstrate DCR compensation that yields a 60% reduction in bias and a 30% reduction in noise in astrometric residuals, relative to color-agnostic processing, when all atmosphere-induced effects (stellar DCR, target DCR, and parallactic refraction) are accounted for.
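The basic geometry behind DCR can be illustrated with a plane-parallel refraction model: air refractivity rises toward blue wavelengths, so blue light is refracted more than red at the same zenith angle. The sketch below uses a simple Cauchy-type refractivity approximation with illustrative sea-level coefficients; actual astrometric reductions use more complete formulae incorporating site pressure, temperature, and humidity.

```python
import math

def n_air_minus_1(lam_um):
    """Cauchy-type approximation to air refractivity at sea level
    (illustrative coefficients, wavelength in micrometers)."""
    return 2.879e-4 * (1.0 + 0.00567 / lam_um**2)

def refraction_arcsec(lam_um, zenith_deg):
    """Plane-parallel atmosphere: R ~ (n - 1) * tan(z), in arcsec."""
    r_rad = n_air_minus_1(lam_um) * math.tan(math.radians(zenith_deg))
    return math.degrees(r_rad) * 3600.0

# Differential color refraction between blue and red light at z = 45 deg.
# The sign shows blue light is refracted more toward the zenith than red.
dcr = refraction_arcsec(0.45, 45.0) - refraction_arcsec(0.65, 45.0)
print(round(dcr, 2), "arcsec")
```

The observed DCR bias of a real target depends on its spectrum weighted over the instrument passband, which is why the thesis measures satellite spectra directly rather than relying on a single effective wavelength.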
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 289-300).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124204</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic wearable technology : designing and deploying small climbing robots for sensing and actuation on the human body</title>
<link>https://hdl.handle.net/1721.1/124199</link>
<description>Dynamic wearable technology : designing and deploying small climbing robots for sensing and actuation on the human body
Dementyev, Artem.
This thesis introduces the idea of Dynamic Wearable Technology: a concept of wearable devices as small autonomous robots that can move on and around the human body. Ecosystems in the natural world have both static and dynamic organisms, such as plants versus animals. In our wearable ecosystem, all current devices are static, which limits their functionality. Adding robots could significantly increase the usability of wearable devices and open up entirely new avenues of application. This thesis develops and evaluates two approaches to wearable robots: first, Rovables, an on-clothing climbing robot that pinches fabric with magnetic rollers, and second, Epidermal Robots, which use controlled suction to attach to the skin. The robots contain on-board navigation that uses inertial measurement units, motor encoders, and occasional ground truth from on-skin features or beacons to estimate position. In this thesis, we analyze important aspects of such robots: size, localization, weight, power consumption, and locomotion. Dynamic wearable technology has potential applications in many areas, such as medicine, human-computer interaction, fashion, and art. We explore several applications in each of these areas. We focus on how the robots can help to systematically collect health information, such as the mechanical, optical, and electrodermal properties of tissues. Robots like these will provide new avenues for autonomous or guided medical assessment and treatment, as well as new venues for the artistic and interfacial exploration of relationships between our bodies and our devices.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-162).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124199</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Committee of N : playful design in teacher education</title>
<link>https://hdl.handle.net/1721.1/124198</link>
<description>Committee of N : playful design in teacher education
Haas, Jason(Jason Michael)
Committee of N is a collaborative card game where players deepen their learning about educational policy and the history of education in the U.S. by creating new kinds of schools based on design constraints implemented through random card draws. In the core game play, players are randomly assigned educational values (e.g. belief in multiple intelligences or achievement on high-stakes tests) from a deck along with a specific school element (bell schedule, graduation requirements, etc.), and they then creatively design the element to reflect their assigned values. By using cards as context for discussion, research, and design, undergraduate education students become animated about the possibilities of designing and imagining the school they would most like to teach in, grounded in theory and practice. By examining student work created in the normal course of class, by interviewing students and teaching staff, and by examining student artifacts, I make the case that students are learning and reflecting as educational professionals. I argue that both game affordances and construction kit affordances help students create a constructive lens, marking a professional capacity to decompose existing learning environments as well as to modify and design their own.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 95-101).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124198</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time personalized toll optimization based on traffic predictions</title>
<link>https://hdl.handle.net/1721.1/124189</link>
<description>Real-time personalized toll optimization based on traffic predictions
Zhang, Yundi.
Road pricing is a traffic congestion management strategy that alters traffic demand and raises funds for transportation supply improvements. Compared to static pricing and reactive dynamic pricing, proactive dynamic pricing is the most effective in achieving traffic management objectives, as the toll is based on traffic predictions that incorporate real-time information. We investigate a proactive toll pricing framework in which the toll is optimized in real time based on traffic predictions generated by a dynamic traffic assignment (DTA) system. Toll optimization performance relies on accurate predictions, which are supported by online calibration of the DTA system. We develop enhanced online calibration methodologies featuring a heuristic technique to calibrate supply parameters and improve the prediction accuracy of traffic speed. We test online calibration using real data from a network consisting of managed lanes and general-purpose lanes. We find that the methodologies improve the estimation and prediction accuracy of flow and speed. We then formulate toll pricing as an optimization problem to maximize expected revenue, subject to network condition requirements and tolling regulations. We test the proactive toll pricing system in a closed-loop evaluation framework in which a microscopic simulator mimics the real network. We perform tests in multiple demand and supply scenarios and find that the system generates higher revenue when online calibration is enabled. Growing applications of electronic toll collection enrich disaggregate trip data, making it possible to improve traffic management through personalized toll pricing. We develop a personalized toll pricing system by extending the original system to a two-level framework, in which a new personalized discount module generates discount offers for a subset of individuals, while the original optimization module optimizes the displayed toll rate and a control parameter that determines how much discount to offer.
The discount also depends on each individual traveler's choice behavior, represented by an enhanced route choice model that captures heterogeneity among individuals. We use real personalized trip records to estimate the choice model. We find that variables generated from individuals' trip histories are capable of capturing this heterogeneity. We test the personalized toll pricing system and find that it improves the optimization objective compared to non-personalized pricing.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 117-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124189</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A container-based lightweight fault tolerance framework for high performance computing workloads</title>
<link>https://hdl.handle.net/1721.1/124188</link>
<description>A container-based lightweight fault tolerance framework for high performance computing workloads
Sindi, Mohamad(Mohamad Othman)
According to the latest list of the world's top 500 supercomputers, ~90% of the top High Performance Computing (HPC) systems are based on commodity hardware clusters, which are typically designed for performance rather than reliability. The Mean Time Between Failures (MTBF) for some current petascale systems has been reported to be several days, while studies estimate it may be less than 60 minutes for future exascale systems. One of the largest studies on HPC system failures showed that more than 50% of failures were due to hardware, and that failure rates grew with system size. Hence, running extended workloads on such systems becomes more challenging as system sizes grow. In this work, we design and implement a lightweight fault tolerance framework to improve the sustainability of running workloads on HPC clusters. The framework consists mainly of a fault prediction component and a remedy component. The fault prediction component is implemented using a parallel algorithm that proactively predicts hardware issues with no overhead, allowing remedial actions to be taken before failures impact workloads. The algorithm applies machine learning to supercomputer system logs. We test it on actual logs from systems at Sandia National Laboratories (SNL). These massive logs come from three supercomputers and consist of ~750 million entries (~86 GB of data). The algorithm is also tested online on our test cluster. We demonstrate the algorithm's high accuracy and performance in predicting cluster nodes with potential issues. The remedy component is implemented using Linux container technology. Container technology has proven its success in the microservices domain; we adapt it to HPC workloads to make use of its resilience potential. By running workloads inside containers, we are able to migrate workloads from nodes predicted to have hardware issues to healthy nodes while the workloads are running.
This does not introduce any major interruption or performance overhead to the workload, nor does it require application modification. We test with multiple real HPC applications that use the Message Passing Interface (MPI) standard. Tests are performed on various cluster platforms using different MPI implementations. Results demonstrate successful migration of HPC workloads while maintaining the integrity of the results produced.
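The log-based fault prediction idea can be illustrated with a toy classifier: hypothetical per-node counts of warning-like log patterns in a time window serve as features, and a minimal logistic regression (trained by plain gradient descent) scores how likely a node is to fail, so workloads can be migrated off high-risk nodes. The data, features, and model here are invented for illustration and are not the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features: per-node counts of selected log patterns
# (e.g. ECC warnings, link retries) in a sliding time window.
X_healthy = rng.poisson(1.0, size=(200, 3))
X_failing = rng.poisson(4.0, size=(200, 3))
X = np.vstack([X_healthy, X_failing]).astype(float)
y = np.r_[np.zeros(200), np.ones(200)]      # 1 = node later failed

# Minimal logistic-regression trainer (batch gradient descent).
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted failure probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

# Score a new observation window: a high probability would trigger
# migrating the node's workloads to a healthy node.
p_new = 1.0 / (1.0 + np.exp(-(np.array([5.0, 4.0, 6.0]) @ w + b)))
print(round(p_new, 3))
```

A production system would instead parse raw syslog streams and run this scoring in parallel across the cluster, but the migrate-on-prediction control flow is the same.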
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 122-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124188</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the influence of biosphere-atmosphere interactions on atmospheric chemistry and composition</title>
<link>https://hdl.handle.net/1721.1/124187</link>
<description>Investigating the influence of biosphere-atmosphere interactions on atmospheric chemistry and composition
Silva, Sam J.(Sam James)
The interactions between the biosphere and the atmosphere are an important controlling factor for regional to global atmospheric chemistry and composition, with wide-ranging impacts on issues like air quality and climate change. However, there are still substantial uncertainties in the biosphere-atmosphere interaction processes that drive the global abundance and variability of many critically important atmospheric constituents, including ozone, aerosol, and Volatile Organic Compounds (VOCs). This thesis aims to address these uncertainties through a multifaceted approach, combining theory and data-driven models with observations. The scope of the research completed herein is introduced and described in Chapter 1. Chapter 2 is a case study of biosphere-atmosphere interactions in which the air quality impact of large-scale agricultural deforestation in Southeast Asia is investigated using global models. Chapters 3 and 4 focus on research toward improving model estimates of dry deposition, a process by which vegetation functions as a sink for atmospheric aerosol and reactive gas species. Chapter 3 constrains theoretical estimates of global dry deposition through comparison to a large suite of observations, in order to provide a detailed assessment of current theory. Chapter 4 develops a data-driven model for this process using "deep learning", an artificial intelligence-based regression method. This data-driven approach is nearly an order of magnitude more accurate than current theoretically based models. Chapter 5 focuses on assessing simulated impacts of biosphere-atmosphere interactions on atmospheric chemistry. Satellite observations of formaldehyde and glyoxal are used to constrain the chemical transformations relevant for VOC chemistry globally. In the final project, in Chapter 6, an improved representation of plant canopy processes for use in atmospheric chemistry simulations is developed and its performance assessed. 
Finally, Chapter 7 summarizes the work completed in this thesis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 169-191).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124187</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light interstitials in iron under extreme mechanical conditions</title>
<link>https://hdl.handle.net/1721.1/124186</link>
<description>Light interstitials in iron under extreme mechanical conditions
Moeini Ardakani, Sina(Seyed Sina)
The addition of small amounts of light interstitial elements to iron can greatly alter its physico-chemical characteristics. The most crucial of these elements, carbon, overcomes many of iron's deficiencies, such as its lack of hardenability and tensile strength. It is owing to this element that iron, in the form of steel, has become the most commonly used material in modern industry. However, not all interstitial elements have a positive impact on iron's performance, nor is their presence always desirable. Due to its high diffusivity, hydrogen can travel inside iron with relative ease and interact with already formed, or forming, defects such as dislocations and vacancies. This interaction is believed to significantly impact the formation and evolution of defects. From a macroscopic perspective, it manifests as embrittlement of iron, usually referred to as hydrogen embrittlement (HE). Super-ferrite is a newly discovered phase of iron supersaturated in carbon. It usually forms under extreme mechanical conditions, such as severe plastic deformation, from iron and a commonly found carbide in steel, cementite. The first part of this document delves into many aspects of super-ferrite using atomistic simulations and density functional theory. Among the crucial findings are the process of super-ferrite formation, which involves a secondary intermediate phase, and a careful analysis of its structure in comparison with the more common supersaturated phase, martensite. The second part is devoted to a careful examination of a newly proposed HE mechanism in iron. 
Using the concrete framework of thermodynamics and statistical mechanics, complemented by numerical methods such as molecular dynamics, grand canonical Monte Carlo, and density functional theory, many aspects of this theory are scrutinized. It is concluded that, although viable for iron under extremely high hydrogen pressure, this mechanism is not applicable to the HE commonly observed in industry. As a by-product of this part, the iron-hydrogen phase diagram is extended to temperatures as low as 100 Kelvin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 99-108).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124186</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of scale and spatial variability on surface foundation on sand</title>
<link>https://hdl.handle.net/1721.1/124185</link>
<description>Effect of scale and spatial variability on surface foundation on sand
Chen, Jialiang,Ph. D.Massachusetts Institute of Technology.
This study aims to investigate the effect of scale and spatial variability on the bearing response of surface foundations on sand. Finite element analyses are performed using a generalized effective stress soil model, MIT-S, to investigate scale effects in the foundation load response under monotonic vertical settlement. Transitions in the mechanisms of ground deformation from general to punching modes are observed with changes in foundation size and sand void ratio. Model predictions of the bearing capacity factor, N[subscript gamma], are in good agreement with data reported from centrifuge model tests of circular and strip foundations on Toyoura sand. Further investigations are conducted to understand the effect of combined loading on circular footings on sand. Combinations of vertical displacement, horizontal displacement, and rotation are prescribed to the foundation base to simulate different degrees of load eccentricity and inclination. Normalized failure envelopes describing allowable combinations of vertical, horizontal, and moment forces are established for different footing sizes and sand densities. Expansion of the failure envelope is associated with increased settlement depth and sand density, while its rotation is affected by foundation size. The effect of spatial variability in sand density on the bearing response of circular footings on sand is also investigated. Random fields of sand void ratio are generated using a Karhunen-Loève expansion. A bounded-tanh function is used to characterize the statistical distribution of void ratio. Monte Carlo simulations are performed to establish the impact of sand density variability and correlation length ratios on foundation bearing resistance. The stochastic predictions of N[subscript gamma] show that the mean bearing response is largely impacted by the statistical distribution of sand densities, but not by correlation lengths.
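The random-field generation step can be sketched directly: a Karhunen-Loève expansion truncates the eigendecomposition of a covariance kernel to its leading modes, and a bounded-tanh mapping then squashes the Gaussian field into a physically admissible void ratio range. The kernel, correlation length, and void ratio bounds below are illustrative choices, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D grid and squared-exponential covariance with correlation length L.
x = np.linspace(0.0, 10.0, 100)
L, sigma = 2.0, 1.0
C = sigma**2 * np.exp(-((x[:, None] - x[None, :]) / L) ** 2)

# Karhunen-Loeve expansion: eigendecompose the covariance and keep
# the m largest modes (eigh returns eigenvalues in ascending order).
m = 20
lam, phi = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1][:m]
lam, phi = np.clip(lam[idx], 0.0, None), phi[:, idx]

# One realization of the Gaussian field from m standard normal coefficients.
xi = rng.standard_normal(m)
g = phi @ (np.sqrt(lam) * xi)

# Bounded-tanh mapping of the Gaussian field into a void ratio range
# [e_min, e_max] (illustrative bounds).
e_min, e_max = 0.6, 1.0
e = e_min + 0.5 * (e_max - e_min) * (1.0 + np.tanh(g))
print(e.min(), e.max())
```

Each Monte Carlo sample in such a study would draw a fresh `xi`, map it to a void ratio field, and feed that field into the finite element bearing analysis.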
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124185</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering yeast for heavy metal waste remediation</title>
<link>https://hdl.handle.net/1721.1/124184</link>
<description>Engineering yeast for heavy metal waste remediation
Sun, George L.(George Le-Le)
The global rate of waste production has consistently outpaced the world's ability to manage and remediate it. In particular, global consumption of raw materials and non-renewable energy sources and the disposal of electronic goods have contaminated water sources with heavy metals, causing environmental damage and public health concerns. Despite the urgent need to contain and remove metals from the environment, robust and complete remediation technologies still do not exist. Physicochemical technologies like chemical precipitation, absorption, and ion exchange lack specificity for metal capture, produce their own secondary waste in the form of chemical byproducts or sludge, and have a high cost barrier, requiring dedicated infrastructure and technical expertise. Instead, this work investigates biologically derived strategies for managing waste, also known as bioremediation. Principles from chemical precipitation, absorption, and ion exchange were analogously implemented in S. cerevisiae, the common baker's yeast. The three analogies were: engineering yeast sulfur metabolic pathways for controlled metal sulfide precipitation; designing new metal trafficking schemes using membrane metal transporters; and engineering supramolecular-forming proteins for yeast-protein metal chelation and sequestration. For all methods, metal removal efficiencies were between 50 and 90% for heavy metals such as Cu, Cd, Hg, and Pb. Furthermore, 2-4 rounds of processing eliminated almost 100 [mu]M of metal, 100-1000 fold greater than EPA toxicity thresholds. Strategies to retrieve and recycle captured metals were also investigated, such as precipitating metal sulfide crystals onto the yeast surface, compartmentalizing metals into the yeast vacuole, or sedimenting bound metals into cell-protein complexes. Relying on yeast takes advantage of its autonomous growth, ease of engineering, and ubiquitous presence in the household and consumer market. 
The purpose of this work was to show that the same yeast used for brewing and baking can be harnessed for clean water applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124184</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cell-free synthetic biology for affordable, on-demand diagnostics</title>
<link>https://hdl.handle.net/1721.1/124183</link>
<description>Cell-free synthetic biology for affordable, on-demand diagnostics
Dy, Aaron J.(Aaron James)
Detection of biomarkers such as nucleic acids plays a critical role in managing infectious disease outbreaks, point-of-care testing, and public health monitoring. However, many diseases and public health problems suffer from a lack of affordable, portable tests that can sensitively detect nucleic acids and enable a rapid response. Current methods for nucleic acid testing are too expensive, slow, and complex to be routinely used outside of specialized lab settings. New diagnostic tools are needed that can work in resource-limited settings to help guide prompt treatment decisions, prevent the spread of infectious diseases, and inform public health decisions. Cell-free synthetic biology has shown promise as a portable, affordable technology to detect biomolecules like nucleic acids. In this thesis, I present several advancements to cell-free synthetic biology diagnostics that enable new application areas. First, I present a paper-based cell-free synthetic biology platform using RNA toehold switch sensors to detect RNAs from the human gut microbiome. We showed that this method could quantify bacterial and human RNA transcripts comparably to gold-standard methods while reducing time and cost. Next, I used similar cell-free detection technology to create a set of fruit DNA-sensing demonstrations that can be used in high school biology classrooms. I then sought to engineer biomolecular circuits that can process multiple sensor inputs to reduce cost, improve specificity, and build classifier circuits. Finally, I present work to develop and use clustered regularly interspaced short palindromic repeats (CRISPR) enzyme-based diagnostics to achieve attomolar sensitivity and single-nucleotide mismatch specificity. Together, these projects demonstrate a set of advancements in cell-free synthetic biology diagnostics toward filling the gap in nucleic acid detection technologies that are low-cost, portable, sensitive, and easy to use.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 104-116).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124183</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and debugging of ultrastable engineered genetic systems</title>
<link>https://hdl.handle.net/1721.1/124182</link>
<description>Design and debugging of ultrastable engineered genetic systems
Park, Yongjin,Ph. D.Massachusetts Institute of Technology.
Engineered genetic systems in bacteria have tremendous potential for biotech applications ranging from living therapeutics to the controlled production of chemicals. Engineering such systems is challenging because they often consist of multiple genes and genetic parts (&gt;4 genes or &gt;45 genetic parts) interacting with each other as intertwined networks. These intertwined networks are invisible, making the design and debugging of these genetic systems particularly challenging. Additionally, expressing a large number of genes burdens the host cell and reduces the long-term stability of these genetic systems. Here we address these two problems by (i) adapting high-throughput RNA-seq to visualize the inner workings of engineered genetic systems and (ii) developing a robust and efficient genome engineering platform that enables the implementation of long-term stable engineered genetic systems on the genome. First, we applied a high-throughput RNA-sequencing method, RNA tag-seq, to analyze the behavior of engineered genetic systems. We analyzed two systems with RNA-seq: (i) a library of 84 refactored nitrogenase clusters, where each cluster consists of six genes with varying levels of expression, and (ii) a genetic circuit that consists of eight interacting genes. With this analysis, we studied the design parameters for these genetic systems and identified various unexpected failure modes. Swapping out a problematic genetic part identified in the RNA-seq profile allowed us to effectively debug unwanted circuit expression profiles. To reduce the cellular burden of expressing these genetic systems, we developed a reliable and efficient genome engineering platform for the E. coli MG1655 K-12 genome. 
We built three genomic landing pads, each consisting of an att site (phage attachment site) insulated with ultra-strong bidirectional terminators. Landing pad locations were determined by Tn5 transposon library screening, identifying genomic locations that showed high gene expression levels without interfering with endogenous gene expression. We also developed a set of plasmids that integrate genetic circuits into these landing pads via simple transformation. With these landing pads, seven orthogonal sensors and eight orthogonal TetR-homolog NOT gates were engineered on the genome to have up to 640-fold changes in output promoter activity upon induction. Utilizing these sensors and gates, we successfully implemented 3-input genome circuits that are stably maintained without antibiotics for more than two weeks in rich media with continuous daily ON/OFF state cycling. We expect this platform to facilitate the design and debugging of long-term stable engineered genetic systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 160-184).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124182</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active STPA : integration of hazard analysis into a Safety Management System Framework</title>
<link>https://hdl.handle.net/1721.1/124172</link>
<description>Active STPA : integration of hazard analysis into a Safety Management System Framework
Silva Castilho, Diogo.
This dissertation describes a new approach to integrating a hazard analysis into Safety Management Systems (SMS). This new engineering process guides safety managers and analysts in identifying a migration toward states of higher risk. The solution is an active version of STPA (Systems-Theoretic Process Analysis), a hazard analysis tool based on the Systems-Theoretic Accident Model and Processes (STAMP). Active STPA uses data collected during operations, such as Flight Data Monitoring events and voluntary reporting, to identify leading indicators of increasing risk. The events are compared against the STPA results, and the discrepancies prompt a re-examination of previous assumptions about human behavior and the environment in which the system operates. New defenses are then identified and implemented. The output of the process is a set of new defenses for prevention and mitigation that enforce the requirements and constraints generated by the STPA, allowing cumulative knowledge of system behavior to be built over time. The feedback on SMS activities enables targeted safety improvement activities and provides qualitative information for hazard management, integrating Active STPA into an SMS. Most of the indicators currently in use in the aviation industry are reactive because they measure only parameter exceedances. Active STPA allows proactive identification of the potential causes of future accidents.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 135-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124172</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization and mitigation of blade waviness effects on fan performance</title>
<link>https://hdl.handle.net/1721.1/124171</link>
<description>Characterization and mitigation of blade waviness effects on fan performance
Lee, Jinwook, Ph. D., Massachusetts Institute of Technology.
The role of surface waviness, arising from the carbon composite manufacturing process, on aeroengine fan blade performance is characterized. The mechanisms for laminar-turbulent transition are assessed numerically and experimentally over a relevant range of aerodynamic and geometric parameters. The governing mechanism (natural transition triggered by receptivity amplification from surface waviness) is explained based on a newly established numerical framework and validated experimentally. Computations and experiments are performed to assess the surface waviness loss mechanism under a relevant range of aerodynamic and geometric parameters. The major feature, with an estimated isentropic efficiency loss of up to 1%, is identified to be the movement of the natural transition location due to receptivity amplification, via geometric resonance between the Tollmien-Schlichting wavelength and the surface wavelength. An effective numerical framework, referred to as the extended eN method, is established to assess the occurrence of the start of transition by tracing the energy transfer from freestream acoustic disturbances to the initiation and growth of Tollmien-Schlichting waves. A subsonic natural transition wind tunnel was designed and constructed to determine the effects of surface waviness on natural transition. The theoretical amplification of Tollmien-Schlichting waves, and the corresponding transition point movement due to surface waviness, is successfully validated by these experiments. The research contributes to aircraft engine fan blade technology through a new capability to estimate the effect of blade surface waviness on fan performance, characterization of the underlying mechanisms, and design guidelines for improvement of modern carbon composite fan blades.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 235-240).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124171</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A systems architecture framework towards hardware selection for autonomous navigation</title>
<link>https://hdl.handle.net/1721.1/124170</link>
<description>A systems architecture framework towards hardware selection for autonomous navigation
Collin, Anne (Anne Claire)
The inclusion of autonomous vehicles into our transportation networks requires methods for evaluating and certifying the systems on the vehicle. Adding sensors or computing capabilities to the vehicle might improve performance on specific tasks, or resilience, but can also be accompanied by an increase in cost, system latency, and energy consumption. Currently, no method exists to quantify the trade-offs between these metrics of interest at the system level. This thesis provides a framework to support hardware selection by presenting a method to evaluate the effect of sensor type and placement on the vehicle's ability to perform Simultaneous Localization and Mapping (SLAM), and to select high-performing and resilient sensor architectures for realistic driving situations from the benchmarked KITTI dataset. For the specific sequence considered, this thesis shows that designing for resilience increases cost by only 4%. It is also found that LiDARs are critical to the performance and resilience of sensing systems in many different environments. A systems model for processor and bus selection is then developed, in order to minimize the cost and latency of the hardware architecture, taking into account recent safety measures recommended by ISO 26262. This model enables the evaluation of the impact of sensor choice on the overall latency. A new method is proposed to efficiently enumerate sensor architectures and place them in the tradespace containing four dimensions of interest: cost, latency, energy consumption, and SLAM performance. It is found that, due to diminishing returns, the best architecture is 360% more expensive than the second best, for a performance increase of 1%. Finally, the framework is applied to specific situations, such as the test of a new sensor or poor weather conditions, providing architecture insights for the intelligent transportation community.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-207).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124170</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modest epistemology</title>
<link>https://hdl.handle.net/1721.1/124126</link>
<description>Modest epistemology
Dorst, Kevin (Kevin Matthew)
Thinking properly is hard. Sometimes I mess it up. I definitely messed it up yesterday. I'll likely mess it up tomorrow. Maybe I'm messing it up right now. I'm guessing you're like me. If so, then we're both modest: we're unsure whether we're thinking rationally. And, it seems, we should be: given our knowledge of our own limitations, it's rational for us to be unsure whether we're thinking rationally. How, then, should we think? How does uncertainty about what it's rational to think affect what it's rational to think? And how do our judgments of people's (ir)rationality change once we realize that it can be rational to be modest? This dissertation makes a start on answering those questions. Chapter 1 introduces a general framework for modeling situations in which you are rational to be unsure what the rational opinions are. I first show how this framework allows us to precisely formulate the questions from the "higher-order evidence" literature. I then use it to argue that rational modesty is needed to explain the epistemic force of peer disagreement, and therefore that any theory of such disagreement must be based on a general theory of rational modesty. Many have suggested that such a theory can be easily formulated based on the enkratic intuition that your first-order opinions must "line up" with your higher-order ones. But I argue that this is incorrect: whenever modesty is rational, so too is epistemic akrasia. We need to look elsewhere for a general theory of rational modesty. Chapter 2 offers one. I build a theory that -- in a precise sense -- allows as much modesty as possible while still guaranteeing that rationality is a guide. The key principle -- which I call Trust -- formalizes the truism that it's likely that what the evidence supports is true.
I show that Trust permits modesty, ensures that rational opinions are correlated with truth, and is necessary and (in a wide class of scenarios) sufficient to vindicate the truism that you should always prefer to respond rationally to your evidence. In sum, Trust establishes that there is a principled way for rational people to be modest. Chapter 3 applies this theory of rational modesty to the psychology of human reasoning. In particular, a wide range of studies suggest that people have a tendency to predictably polarize in the face of conflicting evidence: to gather and interpret evidence in a way that leads them to predictably strengthen their prior beliefs. This "confirmation bias" is standardly taken to be a hallmark of human irrationality. It need not be. I first prove that whenever modesty can be rational, so too can predictable polarization. I then argue, further, that this abstract possibility may play a role in the actual polarization we observe. In particular, given common structures of rational modesty generated by the process of cognitive search, rational agents who care only about the truth should sometimes exhibit confirmation bias. So, I say, epistemology can simultaneously learn from mathematics and inform psychology. That is part of a broader narrative. This dissertation makes the argument that epistemology can be rigorous while also being relevant; that it can be formal while also being applicable; and that it can be abstract and principled, while also teaching us about ourselves. That is the hope, at least. Read it, and perhaps you will agree.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 123-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124126</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Delay, stability, and resource tradeoffs in large distributed service systems</title>
<link>https://hdl.handle.net/1721.1/124124</link>
<description>Delay, stability, and resource tradeoffs in large distributed service systems
Zubeldía Suárez, Martín.
This thesis addresses fundamental tradeoffs in the design of dispatching policies in large-scale distributed service systems, motivated by examples such as cloud computing facilities and multi-core processors. A canonical framework for modeling such systems is provided by a parallel queueing model with n servers, where service requests arrive stochastically over time as a single stream of jobs of rate proportional to n, and where a central controller is responsible for all decisions. The central controller makes decisions based on limited information about the state of the queues, which is conveyed through messages from servers to the dispatcher, and stored in a limited local memory. Our objective is to understand the best possible performance of such systems (in terms of stability region and delay) and to propose optimal policies, with emphasis on the asymptotic regime when both the number of servers and the arrival rate are large. We study the tradeoffs between the resources available to the controller (memory size and message rate) and the achievable queueing delay performance and stability region of resource-constrained dispatching policies. Our main findings are: 1. Queueing delay vs. resources tradeoff.
We propose a family of dispatching policies, indexed by the size of their memories and by the average message rate, and show that the expected queueing delay vanishes as n → ∞ when either (i) the number of memory bits is of the order of log(n) and the message rate grows superlinearly with n, or (ii) the number of memory bits grows superlogarithmically with n and the message rate is at least as large as the arrival rate (Chapter 3). Moreover, we develop a novel approach to show that, within a certain broad class of "symmetric" policies, every dispatching policy with a message rate of the order of n, and with a memory of the order of log(n) bits, results in an expected queueing delay which is bounded away from zero, uniformly as n → ∞ (Chapter 4). 2. Stability region vs. resources tradeoff. We propose a dispatching policy that requires a memory size (in bits) of the order of log(n) and an arbitrarily small (but positive) message rate, and show that it is stable for all possible server rates for which the entire system is underloaded. Moreover, we show that within a certain broad class of "weakly symmetric" policies, every dispatching policy with a message rate that is o(n²), and with a memory size that grows sublogarithmically with n, results in a reduced stability region (Chapter 5).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 161-164).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124124</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microwave engineering in superconducting nanowires for single-photon detection</title>
<link>https://hdl.handle.net/1721.1/124123</link>
<description>Microwave engineering in superconducting nanowires for single-photon detection
Zhu, Di, Ph. D., Massachusetts Institute of Technology.
Detecting light at the single-photon level plays a crucial role in photonic quantum information processing, deep-space optical communication, astronomical observation, and biological and chemical sensing. With their exceptional performance, superconducting nanowire single-photon detectors (SNSPDs) have emerged as the leading single-photon counting technology at infrared wavelengths. Conventionally, superconducting nanowires have been treated as lumped circuit elements, and their microwave properties have been largely neglected. In this thesis, we engineer the nanowires into kinetic-inductive transmission lines and use them to devise new single-photon detector architectures. Through impedance engineering, we developed a superconducting tapered nanowire detector that has increased output voltage, reduced timing jitter, and, most importantly, the ability to resolve photon numbers. Utilizing the slow propagation speed of electrical signals in the nanowire transmission lines, we developed a delay-line-multiplexed detector array. This two-terminal array can perform coincidence counting over a large number of spatial modes and can be scalably integrated on photonic waveguides.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124123</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems for collective human curation of online discussion</title>
<link>https://hdl.handle.net/1721.1/124122</link>
<description>Systems for collective human curation of online discussion
Zhang, Amy Xian.
The internet was supposed to democratize discussion, allowing people from all walks of life to communicate with each other at scale. However, this vision has not been fully realized--instead, online discourse seems to be getting worse, as people are increasingly drowning in discussion, with much of it unwanted or unpleasant. In this thesis, I present new systems that empower discussion participants to work collectively to bring order to discussions through a range of curation tools that superimpose richer metadata structure on top of standard discussion formats. These systems enable the following new capabilities: 1) recursive summarization of threaded forums using Wikum, 2) teamsourced tagging and summarization of group chat using Tilda, 3) fine-grained customization of email delivery within mailing lists using Murmur, and 4) friendsourced moderation of messages against online harassment using Squadbox. In a world of abundant discussion and mass capabilities for amplification, the curation of a social space becomes as essential as content creation in defining the nature of that space. By putting more powerful techniques for curation in the hands of everyday people, I envision a future where end users are empowered to actively co-create every aspect of their online discussion environments, bringing in their nuanced and contextual insights.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 285-316).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124122</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanostructures for vacuum optoelectronic engineering</title>
<link>https://hdl.handle.net/1721.1/124121</link>
<description>Nanostructures for vacuum optoelectronic engineering
Yang, Yujia, Ph. D., Massachusetts Institute of Technology.
The interaction between free electrons and electromagnetic fields enables a wide range of scientific research and technological applications, ranging from electronic, optoelectronic, and microwave vacuum tubes, to electron beams for material processing and analysis, particle accelerators, and free-electron radiation sources. However, for most free-electron-based devices, compactness, chip-scale integration, ultrafast temporal response, and quantum state manipulation remain impractical or unexplored. Recent advances in nanofabrication have pushed the boundary and extended the operating paradigm of free-electron devices. In this thesis, I will investigate the interplay between free electrons and optical-frequency electromagnetic fields mediated by nanostructures. I will show high-yield, ultrafast, surface-plasmon-enhanced photoelectron emitters. With the photoemission driven by the optical field, this technology enables the detection of the carrier-envelope phase of ultrafast optical pulses with solid-state nanoantenna arrays integrated on a chip. Additionally, I will show free-electron-driven plasmon and photon emission from nanophotonic structures, which leads to the characterization of plasmonic nanostructures and the development of nanoscale tunable free-electron light sources. Furthermore, I will show the manipulation of free electrons with nanostructured phase plates, and propose an electron beam splitter design based on quantum interaction-free measurement and the quantum Zeno effect. The work demonstrated in this thesis presents a step towards chip-integrated petahertz optoelectronic devices, compact tunable free-electron radiation sources, as well as quantum devices for free electrons.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 129-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124121</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards realizing the internet-of-things vision : in-body, homes, and farms</title>
<link>https://hdl.handle.net/1721.1/124120</link>
<description>Towards realizing the internet-of-things vision : in-body, homes, and farms
Vasisht, Deepak.
The Internet-of-things (IoT) enables us to connect our physical and digital worlds by embedding computing devices into our environment. Today, there is a huge interest in IoT systems for smart homes, smart cities, digital healthcare, data-driven agriculture, etc. However, for these IoT systems to deliver their intended vision, we need to address two important challenges: (a) operation under limited resources like power and connectivity, and (b) operation in spite of extreme heterogeneity in device deployments. In this thesis, we address both of these challenges. We design a new communication primitive that allows inaccessible resource-constrained devices, like in-body devices, to communicate without requiring them to transmit any power of their own. To address heterogeneity, we present two approaches. First, we build a teacher-student model for IoT systems which allows us to train models that can learn to predict one sensor modality from another. This makes IoT systems more robust to failures, enables more accurate inference, and reduces deployment costs. Second, we build a formal model that embeds contextual information about the environment into the inference process and allows heterogeneous devices to perform joint inference that is more accurate and robust than either of the devices alone. We demonstrate the efficacy of our approach through end-to-end systems developed for diverse environments with varying constraints on size, power, communication, and sensing modalities: inside the human body, smart homes, and agricultural farms. We deploy these systems long-term in real-world environments and present our insights from these deployments. Finally, we demonstrate that the techniques developed in this thesis have general applicability beyond the application scenarios themselves, for example, in next-generation cellular communications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 171-187).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124120</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New directions in streaming algorithms</title>
<link>https://hdl.handle.net/1721.1/124119</link>
<description>New directions in streaming algorithms
Vakilian, Ali.
Large volumes of available data have led to the emergence of new computational models for data analysis. One such model is captured by the notion of streaming algorithms: given a sequence of N items, the goal is to compute the value of a given function of the input items in a small number of passes, using an amount of space sublinear in N. Streaming algorithms have applications in many areas, such as networking and large-scale machine learning. Despite a huge amount of work in this area over the last two decades, multiple aspects of streaming algorithms remain poorly understood, such as (a) streaming algorithms for combinatorial optimization problems and (b) incorporating modern machine learning techniques in the design of streaming algorithms. In the first part of this thesis, we will describe (essentially) optimal streaming algorithms for set cover and maximum coverage, two classic problems in combinatorial optimization. Next, in the second part, we will show how to augment classic streaming algorithms for the frequency estimation and low-rank approximation problems with machine learning oracles in order to improve their space-accuracy tradeoffs. The new algorithms combine the benefits of machine learning with the formal guarantees available through algorithm design theory.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 233-246).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124119</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Underwater &amp; out of sight : towards ubiquity in underwater robotics</title>
<link>https://hdl.handle.net/1721.1/124118</link>
<description>Underwater &amp; out of sight : towards ubiquity in underwater robotics
Rypkema, Nicholas Rahardiyan.
The Earth's oceans hold a wealth of information currently hidden from us. Effective measurement of their properties could provide a better understanding of our changing climate and insights into the creatures that inhabit their waters. Autonomous underwater vehicles (AUVs) hold the promise of penetrating the ocean environment and uncovering its mysteries, and progress in underwater robotics research over the past three decades has resulted in vehicles that can navigate reliably and operate consistently, providing oceanographers with an additional tool for studying the ocean. Unfortunately, the high cost of these vehicles has stifled the democratization of this technology. We believe that this is a consequence of two factors. Firstly, reliable navigation on conventional AUVs has been achieved through the use of a sophisticated sensor system, namely the Doppler velocity log (DVL)-aided inertial navigation system (INS), which drives up vehicle cost, power use, and size. Secondly, deployment of these vehicles is expensive and unwieldy due to their complexity, size, and cost, resulting in the need for specialized personnel for vehicle operation and maintenance. The recent development of simpler, low-cost, miniature underwater robots provides a solution that mitigates both these factors; however, removing the expensive DVL-aided INS means that they perform poorly in terms of navigation accuracy. We address this by introducing a novel acoustic system that enables AUV self-localization without requiring a DVL-aided INS or on-board active acoustic transmitters. We term this approach Passive Inverted Ultra-Short Baseline (piUSBL) positioning.
The system uses a single acoustic beacon and a time-synchronized, vehicle-mounted, passive receiver array to localize the vehicle relative to this beacon. Our approach has two unique advantages: first, a single beacon lowers cost and enables easy deployment; second, a passive receiver allows the vehicle to be low-power, low-cost, and small, and enables multi-vehicle scalability. Providing this new generation of small and inexpensive vehicles with accurate navigation can potentially lower the cost of entry into underwater robotics research and further its widespread use for ocean science. We hope that these contributions in low-cost underwater navigation will enable the ubiquitous and coordinated use of robots to explore and understand the underwater domain.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 261-277).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124118</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making fast informative queries with learned propagations</title>
<link>https://hdl.handle.net/1721.1/124117</link>
<description>Making fast informative queries with learned propagations
Pu, Yewen.
In an informative querying problem, one achieves a certain objective by issuing a series of queries to an oracle and receiving a series of observations in return. It is a challenging task because the queries need to account for the uncertainties of the oracle while being informative to the objective at hand. While successful algorithms have been developed for a range of querying tasks, these algorithms can be slow to compute and, in some cases, intractable. A common Achilles' heel of these prior works is their reliance on computation over the space of oracle functions itself at inference time. As a result, when the space of oracle functions becomes complex, these approaches become computationally infeasible. In this thesis, we explore an alternative approach to informative query selection. Rather than computing over the space of oracle functions, we learn a propagation function that, given a set of past observations, predicts future queries' outcomes directly. We show that by leveraging the propagation function, one can perform a range of informative querying tasks that were previously intractable. To this end, we prescribe a general method of informative querying with learned propagation: at meta-learning time, a propagation function is trained to learn the relationships between observations, and at inference time, a task-specific acquisition function is constructed to leverage the propagation in making informative queries.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 81-82).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124117</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic modeling of bacterial nutrient uptake strategies</title>
<link>https://hdl.handle.net/1721.1/124116</link>
<description>Mechanistic modeling of bacterial nutrient uptake strategies
Norris, Noele Rosalie.
Bacteria have developed a variety of strategies to find and consume the substrates necessary both for the cell's energy-consuming processes and for the additional biomass needed to replicate. A greater understanding of the diversity and regulation of these strategies can provide us with a number of insights relevant for a variety of applications, from predicting bacterial population dynamics, and thus carbon-cycling rates in the ocean, to bio-engineering bacteria into microscale robots. Here I use toy, mechanistic models of single-cell metabolism that allow me to quantify the costs and benefits of various nutrient uptake strategies. I find that: (i) a sensing-uptake trade-off governs E. coli's regulation of maltose uptake and chemotaxis to maltose; (ii) a rate-affinity trade-off in nutrient transport systems governs the speciation of marine oligotrophic and copiotrophic heterotrophs; and (iii) an exploration-conservation trade-off governs the prevalence of motility in the marine microbial world. This work thus provides new understanding of how both phenotypic diversity and cellular regulation are governed by trade-offs for maximizing growth rate in different environments.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 155-172).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124116</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for functional interpretation of diverse omics data</title>
<link>https://hdl.handle.net/1721.1/124115</link>
<description>Computational methods for functional interpretation of diverse omics data
Nazeen, Sumaiya.
Recent technological advances have resulted in an explosive growth of various types of "omics" data, including genomic, transcriptomic, proteomic, and metagenomic data. Functional interpretation of these data is key to elucidating the potential role of different molecular levels (e.g., genome, transcriptome, proteome, metagenome) in human health and disease. However, the massive size and heterogeneity of raw data pose substantial computational and statistical challenges in integrating and interpreting these data. To overcome these challenges, we need sophisticated approaches and scalable analytical frameworks. This thesis outlines two research efforts along these lines. First, we develop a novel three-tiered integrative omics framework for integrating and functionally analyzing heterogeneous omics datasets across a group of co-occurring diseases. We demonstrate the effectiveness of this framework in investigating the shared pathophysiology of autism spectrum disorder (ASD) and its multi-organ-system co-morbid diseases (e.g., inflammatory bowel disease, asthma, muscular dystrophy, cerebral palsy) and uncover a novel innate immunity connection between them. Second, we develop a new end-to-end computational tool, Carnelian, for robust, alignment-free functional profiling of whole metagenome sequencing reads that is uniquely suited to finding hidden functional trends across diverse data sets in comparative analysis. Carnelian can find shared metabolic pathways and concordant functional dysbioses, and can distinguish microbial metabolic function missed by state-of-the-art functional annotation tools. We demonstrate Carnelian's effectiveness on large-scale metagenomic studies of type-2 diabetes, Crohn's disease, Parkinson's disease, and industrialized versus non-industrialized cohorts.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 199-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124115</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Breaking barriers in secret sharing</title>
<link>https://hdl.handle.net/1721.1/124114</link>
<description>Breaking barriers in secret sharing
Liu, Tianren, Ph. D. Massachusetts Institute of Technology.
In this thesis, we study secret sharing schemes for general (non-threshold) access functions. In a secret sharing scheme for n parties associated to a monotone function [mathematical formula], a dealer distributes shares of a secret among n parties. Any subset of parties [mathematical formula] can jointly reconstruct the secret if F(T) = 1, and should have no information about the secret if F(T) = 0. One of the major long-standing questions in information-theoretic cryptography is to determine the minimum size of the shares in a secret-sharing scheme for an access function F. There is an exponential gap between lower and upper bounds for share size: the best known scheme for general monotone functions has shares of size 2[superscript n-o(n)], while the best lower bound is n² / log(n). In this thesis, we improve this more-than-30-year-old upper bound by constructing a secret sharing scheme for any access function with shares of size 2[superscript 0.994n] and a linear secret sharing scheme for any access function with shares of size 2[superscript 0.999n]. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size [mathematical formula] for a family of [mathematical formula] monotone access functions, out of a total of [mathematical formula] of them. As an intermediate result, we construct the first conditional disclosure of secrets (CDS) scheme with sub-exponential communication complexity. CDS is a variant of secret sharing, in which a group of parties want to disclose a secret to a referee if and only if the parties' respective inputs satisfy some predicate. The key conceptual contribution of this thesis is a novel connection between secret sharing, CDS, and the notion of (2-server, information-theoretic) private information retrieval.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 50-53).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124114</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infrared detectors based on two-dimensional materials and heterostructures</title>
<link>https://hdl.handle.net/1721.1/124113</link>
<description>Infrared detectors based on two-dimensional materials and heterostructures
Lin, Yuxuan, Ph. D. Massachusetts Institute of Technology.
At the nanoscale, new forms of physical phenomena emerge that can provide remarkable opportunities for next-generation tools with unprecedented functionality and energy efficiency. Two-dimensional (2D) materials, a family of nanomaterials with atomic thickness, promise an ideal platform for nanoscience and nanotechnology research on which we are able to engineer functional structures and study their properties at the limit of the atomic scale. This thesis discusses opportunities and challenges of studying emerging light-matter interaction phenomena and developing advanced infrared detection technologies enabled by 2D materials and their heterostructures. First, we addressed some of the key challenges for reliable synthesis and characterization of 2D materials and functional nanostructures. We developed a new seeding-promoter-assisted chemical vapor deposition approach for the construction of vertical and lateral heterostructures between a variety of 2D materials over large areas.; This technology enables many new physics and device applications, including 1D ohmic contacts to 2D semiconductors and their integrated circuits. Another material-related challenge we addressed is the fast characterization of 2D materials. We developed a deep learning algorithm that can perform real-time, accurate material identification on optical microscope images of 2D materials. In addition, our method is able to extract deep graphical features and provide information about structural, optical, and mechanical properties of the materials. 
Second, we studied three novel IR detector technologies based on 2D materials and other nanostructures that can potentially outperform the state of the art: the graphene thermopile, the graphene-2D semiconductor photothermoelectric detector, and the thermo-mechanical bolometer.; For the graphene thermopile, our theoretical analysis indicates that a high-quality graphene device provides the highest thermoelectric figure of merit among existing thermoelectric materials. We further demonstrated a monolithic 3D integration of graphene and Si CMOS technologies and fabricated a mid-IR/thermal imaging camera based on graphene thermopiles. For the second IR detection technology, we studied the unique hot carrier thermalization process in a graphene-2D semiconductor lateral heterojunction device, and showed that such a photothermoelectric photocurrent generation mechanism is advantageous in terms of picosecond response time, broadband spectral response, and room temperature operation.; The third IR detection technology we demonstrated in this thesis is a thermo-mechanical bolometer, in which the IR radiation is converted into an abrupt resistance change through the special thermo-mechanical response and an artificial metal-insulator transition of engineered nanostructures. Our results show that the sensitivity of this thermo-mechanical mid-IR detector can be at least one order of magnitude better than that of state-of-the-art microbolometers based on VOx.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 231-250).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124113</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Portable magnetic resonance sensors and methods for noninvasive disease diagnostics</title>
<link>https://hdl.handle.net/1721.1/124112</link>
<description>Portable magnetic resonance sensors and methods for noninvasive disease diagnostics
Bashyam, Ashvin(Ashvin Reddy)
Many diseases manifest as a shift in fluids between distinct tissue fluid compartments. For example, fluid depletion and fluid overload lead to a deficit or accumulation of fluids within the intramuscular interstitial space. A direct measurement of these fluid shifts could serve as a highly specific diagnostic or prognostic tool to improve clinical management of these disorders. Proton magnetic resonance is exquisitely sensitive to the local physical and chemical environment of water molecules within the body. Therefore, we hypothesized that localized magnetic resonance (MR) measurements could interrogate local tissue fluid distributions and assess systemic fluid volume status. This thesis explored the potential for a portable MR sensor to characterize shifts in tissue fluid distribution and identify the onset and progression of fluid volume status disorders.; First, we designed a portable, single-sided MR sensor capable of performing remote measurements of the multicomponent T2 signal originating from distinct fluid compartments. Further, we present a design framework to create single-sided sensors with magnetic field strength and geometry suitable for a wide range of applications. We then demonstrate that a localized measure of tissue fluid distribution using a portable MR sensor is capable of identifying systemic changes in fluid volume status associated with fluid depletion. We validate these findings via whole-animal MR measurements and a standard MRI scanner capable of localizing its measurement to the muscle tissue. Finally, we explore new strategies to enable the translation of these portable MR sensors to human use.; We demonstrate techniques combining multicomponent T2 relaxometry, depth-resolved measurements, and diffusion-weighted pulse sequences to improve identification of fluid shifts within muscle tissue despite the presence of confounding tissues, such as the subcutaneous tissue. 
The magnetic resonance sensors and measurement techniques developed here lay the foundations for a non-invasive, portable, and quantitative indicator of tissue fluid distribution. This technology has the potential to serve as a clinical diagnostic for both localized and systemic fluid imbalances. Furthermore, these approaches enabling portable, quantitative MR measurements can be extended to the diagnosis and staging of the progression of other diseases which exhibit shifts in fluid distributions.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 144-154).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124112</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear algebraic techniques in algorithms and complexity</title>
<link>https://hdl.handle.net/1721.1/124111</link>
<description>Linear algebraic techniques in algorithms and complexity
Alman, Josh(Joshua H.)
We develop linear algebraic techniques in algorithms and complexity, and apply them to a variety of different problems. We focus in particular on matrix multiplication algorithms, which have surprisingly fast running times and can hence be used to design fast algorithms in many settings, and matrix rank methods, which can be used to design algorithms or prove lower bounds by analyzing the ranks of matrices corresponding to computational tasks. First, we study the design of matrix multiplication algorithms. We define a new general method, called the Universal Method, which subsumes all the known approaches to designing these algorithms. We then design a suite of techniques for proving lower bounds on the running times which can be achieved by algorithms using many tensors and the Universal Method.; Our main limitation result is that a large class of tensors generalizing the Coppersmith-Winograd tensors (the family of tensors used in all record-holding algorithms for the past 30+ years) cannot achieve a better running time for multiplying n by n matrices than O(n[superscript 2.168]). Second, we design faster algorithms for batch nearest neighbor search, the problem where one is given sets of data points and query points, and one wants to find the most similar data point to each query point, according to some distance measure. We give the first subquadratic time algorithm for the exact problem in high dimensions, and the fastest known algorithm for the approximate problem, for various distance measures including Hamming and Euclidean distance. Our algorithms make use of new probabilistic polynomial constructions to reduce the problem to the multiplication of low-rank matrices.; Third, we study rigid matrices, which cannot be written as the sum of a low rank matrix and a sparse matrix. Finding explicit rigid matrices is an important open problem in complexity theory with applications in many different areas. 
We show that the Walsh-Hadamard transform, previously a leading candidate rigid matrix, is in fact not rigid. We also give the first nontrivial construction of rigid matrices in a certain parameter regime with applications to communication complexity, using an efficient algorithm with access to an NP oracle.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-224).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124111</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organic reactions catalyzed by copper(I) hydride complexes</title>
<link>https://hdl.handle.net/1721.1/124110</link>
<description>Organic reactions catalyzed by copper(I) hydride complexes
Liu, Richard Y.(Richard Yifan)
Chapter 2. Regiodivergent and Diastereoselective CuH-Catalyzed Allylation of Imines with Terminal Allenes. A chemoselective hydrometalation process enables the use of easily accessible allenes as allylmetal nucleophile surrogates in imine allylation reactions. By modulating the nitrogen-protecting group, either highly branched- or linear-selective addition can be achieved from the same allene. Both reactions exhibit excellent diastereoselectivity and broad functional-group tolerance. Good enantioselectivity can also be achieved in the linear-selective reaction. Finally, a mechanistic model for the regiodivergence is proposed on the basis of density functional theory calculations. Chapter 3. Enantioselective Ketone Allylation Using Allene, a Petroleum Cracking Byproduct. Allene gas is produced and separated on million-metric-ton scale per year during petroleum refining but is rarely employed in organic synthesis.; Meanwhile, the addition of an allyl group to ketones is among the most common and prototypical reactions in synthetic chemistry. It is shown that the combination of allene gas with inexpensive and environmentally benign hydrosilanes can serve as a replacement for the allylmetal reagents that are required in most enantioselective ketone allylation reactions. This process is catalyzed by copper salts and commercially available ligands, operates without specialized equipment or pressurization, and tolerates a broad range of functional groups. Furthermore, the exceptional chemoselectivity of this catalyst system enables industrially relevant C3 hydrocarbon mixtures of allene with methylacetylene and propylene to be applied directly. Chapter 4. 
CuH-Catalyzed Enantioselective Ketone Allylation with 1,3-Dienes. An efficient method for the copper-catalyzed allylation of ketones is described using widely available 1,3-dienes as allylmetal surrogates.; Homoallylic alcohols bearing a wide range of functional groups are obtained in high yield and with good regio-, diastereo-, and enantioselectivity. Mechanistic investigations using density functional theory implicate the in situ formation of a rapidly equilibrating mixture of isomeric copper(I) allyl complexes, from which Curtin-Hammett kinetics determine the major isomer of the product. A stereochemical model is provided to explain the high diastereo- and enantioselectivity of this process. Chapter 5. A Mild CuH-Catalyzed Dehydration of Primary Amides to Nitriles. Metal-catalyzed silylative dehydration of primary amides is an economical approach to the synthesis of nitriles. This Chapter describes a copper(I) hydride (CuH)-catalyzed dehydration process that avoids a typically challenging 1,2-siloxane elimination step, thereby dramatically increasing the rate of the overall transformation relative to alternative metal-catalyzed systems.; This new reaction proceeds at ambient temperature, tolerates a variety of metal-, acid-, or base-sensitive functional groups, and can be performed using a simple ligand, inexpensive siloxanes, and low catalyst loading. Chapter 6. Attractive Ligand-Substrate Dispersion Interactions in the Copper-Catalyzed Hydroamination of Unactivated Olefins. London dispersion interactions between the ligand and the substrate, although ubiquitous, are seldom accounted for in mechanistic models of transition-metal-catalyzed transformations. 
Computational models show that in the copper(I) hydride (CuH)-catalyzed hydroamination of unactivated olefins, the substantially enhanced reactivity of copper catalysts based on bulky bidentate phosphine ligands originates from attractive ligand-substrate dispersion interactions.; This Chapter describes kinetic studies across a range of hydroamination reactions using structurally diverse phosphine ligands that, in conjunction with the theoretical results, reveal the critical role of bulky P-aryl groups in facilitating this process.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124110</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specificity and benefits of an exclusion mechanism for a mobile genetic element in Bacillus subtilis</title>
<link>https://hdl.handle.net/1721.1/124109</link>
<description>Specificity and benefits of an exclusion mechanism for a mobile genetic element in Bacillus subtilis
Davis, Kathleen P.(Kathleen Patricia)
The horizontal transfer of mobile genetic elements, including Integrative and Conjugative Elements (ICEs), plays an essential role in bacterial evolution by helping to promote the spread of genes involved in antibiotic and heavy metal resistance, metabolism, symbiosis, and pathogenicity. Like conjugative plasmids, ICEs spread to new hosts by conjugative transfer through Type 4 Secretion Systems (T4SSs) encoded in the ICE DNA, however unlike plasmids, ICEs are usually found integrated into the host chromosome except for immediately prior to, during and after conjugative transfer. Almost all plasmids and some ICEs have an exclusion mechanism, which prevents acquisition of a second copy of the element via conjugative transfer. ICEBs1 has an exclusion mechanism in which the ICEBs1 exclusion protein YddJ targets the conjugation machinery protein ConG, the VirB6 homolog in the ICEBs1 T4SS, to prevent transfer from a would-be donor cell. My work described in this thesis involves a mutagenesis and enrichment screen which isolated exclusion-resistant, transfer-competent mutations in ConG, and swap experiments with ICEBs1 and ICEBat1 ConG and YddJ homologs demonstrating that YddJ targets its cognate ConG for exclusion, and that YddJ and ConG together determine the specificity of exclusion. I identified regions of ConG and YddJ that are essential for exclusion specificity, and found that YddJ-mediated exclusion protects donor cells from serving as recipients during or immediately after they serve as donors. These findings further our understanding of regulation of horizontal gene transfer, particularly in Gram-positive bacteria. They provide further evidence of a conserved target in exclusion, the VirB6 homologs, and indicate that different mobile genetic elements can employ exclusion systems for different reasons.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124109</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cultural mandates, artistic missions, and "The Welfare of Palestine," 1876-1948</title>
<link>https://hdl.handle.net/1721.1/124108</link>
<description>Cultural mandates, artistic missions, and "The Welfare of Palestine," 1876-1948
Ari, Nisa.
This dissertation investigates the changing landscape for artistic production in Palestine from the last decades of Ottoman rule until the establishment of the State of Israel (1876-1948). The development of new artistic practices and exhibition spaces occurred against a political backdrop in near constant transition--including the dissolution of the Ottoman Empire, the First World War, British military and colonial occupation (in the form of the British Mandate from 1920-48), as well as the growth of Arab nationalism and Zionism. Previous scholarship on Palestinian art defines it as a national artistic movement, born out of a state of political disenfranchisement after 1948.; I argue instead that the conflicting ideological forces impacting Palestine in the early twentieth century produced the preoccupations and practices which define Palestinian art, and analyze how aspiring Palestinian artists, imperious foreign occupiers, moralizing evangelicals, international welfare agents, and Zionist immigrants contributed to its rise. The dissertation focuses on institutions supporting art's production and display, whose very presence reflected Palestine's complex political reality. These include, among others, the House of Industry created by Anglican missionaries for Jewish craftworkers in the late-1800s, the Supreme Muslim Council's First National Arab Fair for showcasing Arab arts and industry in 1933, and the Palestine Folk Museum initiated by British, Palestinian, and Jewish political representatives for collecting local costumes in the early 1930s.; By identifying the role such entities played in the promotion of new artistic forms, as they encouraged the use of new material technologies from synthetic embroidery threads to photographic prints and dried flowers, I critically reevaluate the work of canonical early twentieth-century Palestinian artists including Nicola Saig, Sophie Halaby, Jamal Badran, and Zulfa al-Saʻdi. 
The dissertation finds that it was within a relatively short historical span that art production in Palestine changed from being used for religious and commercial ends in the late-1800s to being deployed for humanitarian and, eventually, political purposes by the 1930s. In doing so, it highlights Palestine as a compelling site for interrogating the previously unexamined origins of a "cultural sector" within the evolutions of nation-building, colonial pacification, and international humanitarianism following the First World War.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D.: History and Theory of Art, Massachusetts Institute of Technology, Department of Architecture, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 312-334).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124108</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scope theory revisited : lessons from pied-piping in wh-questions</title>
<link>https://hdl.handle.net/1721.1/124104</link>
<description>Scope theory revisited : lessons from pied-piping in wh-questions
Demirok, Ömer
It is widely assumed that both the movement-based theory of scope and the scope-based theory of intensionality fall short in the face of empirical challenges like 'exceptional' scope out of extraction islands and the possibility of transparent/de re construals for DPs inside extraction islands. The standard response to these challenges consists in assuming that grammar makes available in-situ methods of scope-taking in addition to movement (e.g. pointwise composition (Hamblin, 1973; Kratzer and Shimoyama, 2002; Cable, 2010), choice functions (Reinhart, 1997, 1998)) and adopting a richer representation of intensionality (e.g. in-situ binding of world/situation-denoting pronouns (Percus, 2000)). This thesis argues that a closer study of pied-piping in wh-questions reveals the true power of already-existing tools in grammar. Building on the important insight that more complex scope-takers can be recursively built (Dayal, 1994; Charlow, 2017), I advance the idea that grammar makes crucial use of pied-piping to generate meanings that would otherwise be unavailable. I argue that with pied-piping in its toolbox, grammar may not need in-situ methods of scope-taking or in-situ methods of assigning DPs a transparent/de re construal.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 187-195).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124104</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presuppositions in focus</title>
<link>https://hdl.handle.net/1721.1/124101</link>
<description>Presuppositions in focus
Francis, Naomi Clair.
This dissertation explores how presuppositions and focus interact. It takes as its starting point a puzzle about expressions like even and its cross-linguistic kin in declarative sentences that deny presuppositions: these focus-sensitive scalar additive operators can be used in negative presupposition denials but not in positive ones. This puzzle reveals that i) presuppositions triggered within focus alternatives matter, and ii) even triggers an additive presupposition. The rest of the thesis considers what these findings can teach us about other areas of the grammar. It presents a variety of arguments in defense of even's additive presupposition, which has long been a point of controversy, and shows that even's additivity helps to make sense of some surprising behaviour that even displays outside of presupposition denials. It also argues that the distribution of even and any in imperatives and modal statements lends support to views that treat imperatives as containing an existential modal operator that is sometimes strengthened by exhaustification to yield universal readings.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 134-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124101</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On expletives and the agreement-movement correlation</title>
<link>https://hdl.handle.net/1721.1/124100</link>
<description>On expletives and the agreement-movement correlation
Longenbaugh, Nicholas.
This dissertation addresses two main topics: the correlation between agreement and movement, and the formal and distributional status of expletive elements cross-linguistically. Concerning the first topic, my proposal is that agreement and movement are formally dissociated, as proposed by Chomsky (2000, 2001), but often coupled together by the action of an economy constraint that preferences minimizing the number of syntactic objects operated on in the derivation. I explore the consequences of this proposal in the domain of past participle agreement in the Romance and Scandinavian languages, which is well known for its correlation with movement (Kayne 1989; Christensen and Taraldsen 1989). Concerning the second topic, I argue for two subproposals. The first is that expletive elements share the same formal status as the non-expletive forms from which they are derived. Notably, I argue that this entails that the locative expletive that appears in a variety of Western European languages bears inherent case, and hence functions for the purposes of Case and agreement like, e.g., dative subjects in Icelandic. The second subproposal is that in languages like English, Dutch, and Danish, which have both a locative expletive and a default third person expletive, only the former is a true expletive element, the latter always being selected as an argument or quasi-argument. In service of defending this proposal, I develop a novel analysis of the clausal-extraposition/CP-linking construction and the non-logical-if construction (Williams 1974).
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-174).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124100</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reparations, racial exploitation, and racial capitalism</title>
<link>https://hdl.handle.net/1721.1/124099</link>
<description>Reparations, racial exploitation, and racial capitalism
Lenehan, Rose(Rose Elizabeth)
We live in a starkly racialized economy: an economy whose history includes the Atlantic slave trade, enslaved plantation labor, and Jim Crow. The economic relations and social practices that have emerged from that history constitute, as Charles Mills puts it, "a system that is run by whites for white benefit." The question guiding this set of papers is how to pursue a racially just economy in light of this history. In the first paper, "Reparations and the Racial Wealth Gap," I evaluate the argument that the closure of the racial wealth gap is owed as reparations for the injustices that created it. I argue that there are a number of problems with the argument; most importantly, it treats racial populations as separate corporate bodies, and in doing so obscures class differences within them. In the second paper, "Racial Exploitation and the Race-Class Nexus," I argue against Charles Mills's theory of racial exploitation.; Mills argues that racial justice requires redistributing the proceeds from past racial exploitation and that racial exploitation can be distinguished from 'standard capitalist' exploitation. I argue that Mills's characterization of racial exploitation does not fit the most important historical cases, in part because Mills ignores processes of racial formation. So the concept of racial exploitation cannot provide the grounds for redistribution that Mills intends it to. Mills frames his project as a demand for a non-racial capitalism. In the third paper, "On the Idea of a Non-Racial Capitalism," I assess this broader goal. Many radical theorists and activists have argued that race and racism play critical roles in reproducing capitalist social relations and thus would argue that a non-racial capitalism is impossible.; I argue that we do not need to make any general claims about capitalism and racism to see why this goal is confused; instead, we need to recognize how distant a non-racial capitalism is from our own. 
Getting to something that could be called a non-racial capitalism would require not merely a redistribution of wealth but a reconfiguration of global relations of production.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 67-72).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124099</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations and philosophical applications of game theory</title>
<link>https://hdl.handle.net/1721.1/124098</link>
<description>Foundations and philosophical applications of game theory
Grant, Cosmo (Cosmo Douglas)
I investigate three questions. The first belongs to game theory: When will people play a Nash equilibrium? The second, to decision theory: Why maximize expected value? The third, to the philosophy of language: How should we work out the meaning of a sentence? What unites my dissertation is a decision-theoretic approach to games, and a game-theoretic approach to meaning. An epistemic characterization of a solution concept shows under what epistemic conditions the players behave as the solution concept describes. Chapter 1 is about epistemic characterizations of Nash equilibrium. First, I argue that theorists have slipped between two interpretations of Nash equilibrium: strategic and doxastic. As a result, they've drawn unwarranted conclusions from the characterizations. Second, following a broader discussion of the role of solution concepts, I assess doxastic equilibrium on its own merits. I argue that it doesn't deserve the attention it's received. A key theme of Chapter 1 is the decision-theoretic approach to games: asking what you should do in a game is just a special case of asking what you should do in a decision problem. But what should you do in a decision problem? A standard answer is that you should maximize expected value, because maximizing expected value does best in the long run. In Chapter 2, I adapt an idea well known in economics but little known in philosophy, maximizing expected growth rate, to argue that the long-run defense of maximizing expected value isn't sound. In Chapter 3, I take for granted the decision-theoretic approach to games and apply it in the philosophy of language. David Lewis showed how conventions arise from repeated coordination games, and, as a special case, how meanings arise from repeated signaling games. I build on Lewis's framework. I construct coordination games in which the players can be wrong about their conventions, and signaling games in which the players can be wrong about their messages' meanings.
The examples put pressure on the Elicitation Method, a typical method in semantic fieldwork, according to which we should work out the truth-conditions of a sentence by eliciting speakers' judgments about its truth-value in different situations.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 102-113).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124098</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flash analog-to-digital converters with time-based techniques</title>
<link>https://hdl.handle.net/1721.1/124097</link>
<description>Flash analog-to-digital converters with time-based techniques
Yang, Xi, Ph. D., Massachusetts Institute of Technology.
High-speed medium-resolution flash analog-to-digital converters (ADCs) are in high demand in today's wireless and wireline systems. Conventional flash ADCs suffer from limited resolution and high power consumption. This thesis investigates time-based techniques that enhance the performance of a flash ADC at giga-sample-per-second (GS/s) sampling rates. Two major design challenges are addressed in this thesis. The first challenge is the ever-growing comparator offset with the scaling of CMOS technology. Conventional offset calibration methods utilize digitally-controlled capacitor banks or an additional input pair. The disadvantages include slower speed due to the added parasitic capacitance or higher input-referred noise due to the extra input transistors. In this thesis, we propose an offset calibration method based on timing skew. The proposed method does not add any extra load to the comparator, avoiding the penalties of conventional methods. The second challenge is the exponentially growing number of comparators with resolution. This thesis proposes a time-based 4x interpolation technique that utilizes the timing information from adjacent comparators to resolve two extra bits of resolution without adding comparators. The number of comparators is reduced to 1/4 of a conventional flash ADC, and calibration capability is provided to achieve 8-bit accuracy. Both techniques are demonstrated on a prototype flash ADC chip that achieves state-of-the-art performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-116).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124097</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel electromagnetic scattering phenomena</title>
<link>https://hdl.handle.net/1721.1/124096</link>
<description>Novel electromagnetic scattering phenomena
Yang, Yi, Ph. D., Massachusetts Institute of Technology.
Scattering of electromagnetic waves is fundamentally related to the inhomogeneity of a system. This thesis focuses on several theoretical and experimental findings on electromagnetic scattering in a contemporary context. These results range from scattering off real structures to scattering off synthetic gauge fields. The source of scattering also varies from near-field to far-field excitations. First, we present a general framework for nanoscale electromagnetism with experimental verifications based on far-field plasmonic scattering. We also theoretically propose two schemes, featuring thin metallic films and hybrid plasmonic-dielectric nanoresonators, respectively, aiming at achieving high radiative efficiency in plasmonics. Second, treating free electrons as a near-field scattering excitation, we derive a universal upper limit to spontaneous free-electron radiation and energy loss, verified by measurements of Smith-Purcell radiation. Such an upper limit allows us to identify a new regime of radiation operation where slow electrons are more efficient than fast ones. The limit also exhibits an emission probability divergence, which we show can be physically approached by coupling free electrons to photonic bound states in the continuum. Finally, we will discuss the scattering of optical waves off synthetic magnetic fields. Specifically, we will describe the synthesis of non-Abelian (non-commutative) gauge fields in real space, enabled by breaking time-reversal symmetry in distinct manners. These synthetic non-Abelian gauge fields enable us to observe the non-Abelian Aharonov-Bohm effect with classical waves and classical fluxes, relevant for classical and quantum topological phenomena.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-161).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124096</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New interpretable machine learning techniques and an application to stroke prediction in atrial fibrillation patients</title>
<link>https://hdl.handle.net/1721.1/124095</link>
<description>New interpretable machine learning techniques and an application to stroke prediction in atrial fibrillation patients
Yang, Hongyu, Ph. D., Massachusetts Institute of Technology.
Building interpretable and accurate models is attracting growing interest in the machine learning community. In this thesis, we developed an interpretable machine learning algorithm called SBRL, and we built an interpretable and statistically more accurate model for predicting strokes in patients with atrial fibrillation (AF) who have no prior history of stroke and who are not taking anticoagulants. The first part of the thesis presents an interpretable machine learning algorithm that can be used as an alternative to the decision tree algorithm. Our algorithm builds an optimized rules list model from data by maximizing the posterior probability of a natural hierarchical generative model. The model has the form of chained IF-THEN clauses, which is simple for a human to follow and from which a prediction can be derived by hand. We developed two theoretical bounds for the algorithm: one for the length of the optimal rules list model, and the other for the upper bound of the posterior probability of the optimized rules list given its prefixes. We thoroughly tested our algorithm against other interpretable and non-interpretable machine learning algorithms across multiple public datasets, in terms of interpretability, computational speed, and accuracy. Our algorithm strikes a balance among these metrics. The second part of the thesis presents how we used the ATRIA2-CVRN study cohort to build a stroke prediction model that is as simple as, but statistically significantly more accurate than, the stroke models in wide use, such as the CHA₂DS₂-VASc and ATRIA scores, for patients in AF who are not taking anticoagulants. We focused on the more challenging problem of primary prevention.
We assessed the strengths of predictors and identified informative predictors not used in existing stroke models. We created a univariate stroke model using the most informative predictor, age, and achieved statistically significantly better performance than CHA₂DS₂-VASc and similar performance to ATRIA. We used various machine learning models to test the limit of the information that can be extracted from the data. We built a linear model with optimized integer coefficients using RiskSLIM. We used SBRL to generate simple-yet-accurate representations of high-risk patients who should be recommended anticoagulants.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 117-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124095</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Morphosyntax and semantics of degree constructions</title>
<link>https://hdl.handle.net/1721.1/124094</link>
<description>Morphosyntax and semantics of degree constructions
Moracchini, Sophie, Ph. D., Massachusetts Institute of Technology.
This thesis investigates the morphosyntax and the semantics of comparatives and related degree constructions through the prism of a phenomenon called evaluativity, a type of inference whereby gradable adjectives receive a context-dependent interpretation. Pursuing the view that evaluativity is contributed by an optional null operator (EVAL, Rett 2008), this dissertation achieves the following results. First, it integrates a compositional analysis of evaluativity within a non-lexical view of antonymy. Second, it argues that the observed restrictions on the distribution of these inferences follow from independently motivated conditions that regulate the presence of the EVAL operator at the interfaces. In particular, three interface conditions are identified and discussed in detail: ++ At Logical Form (LF), derivations are subject to a structural economy condition, Minimize APs!, which executes transderivational comparisons over semantically equivalent Adjectival Phrases (APs). The inclusion of EVAL in a parse licenses derivations that would otherwise be deemed deviant by this economy condition. ++ At Phonological Form (PF), the EVAL morpheme morphophonologically interacts with its surrounding environment. Specifically, EVAL is claimed to be a zero-morpheme subject to Myers's Generalization, a PF-filter on syntactic derivations which prevents further morphological operations from applying to a zero-derived form. A consequence of this claim is that EVAL is licensed in derivations only where it does not disrupt post-syntactic operations that apply within the AP. ++ The distribution of EVAL is conditioned by aspects of Information Structure. In particular, in degree constructions that license contrastive adjectives, the distribution of focus is governed by AvoidF, which, in turn, interacts with conditions on deletion.
Ultimately, the presence of EVAL can license a surface form which would otherwise be eliminated by PF-deletion. In essence, the grammatical account of evaluativity developed in this thesis offers a window into the word-internal structure of complex degree expressions and presents new insights into the semantic and morphosyntactic primitives of the degree domain.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 183-188).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124094</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>What it means to want</title>
<link>https://hdl.handle.net/1721.1/124093</link>
<description>What it means to want
Phillips-Brown, Milo.
Wanting is an easy concept to use. Talk to any three-year-old and you'll know they've mastered it. Wanting is important, too. We understand others in no small part through what they want, and wanting is a pillar in theories of mind and ethics. An account of wanting, then, must do double duty: be powerful enough to carry this theoretical burden and simple enough to explain wanting's effortless use in daily life. The first two chapters of this dissertation discharge these duties in part. The latter two chapters complicate the task of discharging them further. Chapter 1. Folk psychology and decision theory both represent our belief-like and desire- and preference-like states. Both use these representations to explain and predict our actions. If we can't account for one in terms of the other, we'd have a dubious dualism: two competing systems of representation, prediction, and explanation. I give a decision-theoretic account of a key folk psychological notion, wanting. Chapter 2. What we want depends on what we believe. Yet you can want to stay home (it would be nice to) despite believing it would ruin your career. This case confounds my theory from Chapter 1, as well as the orthodox semantics for 'want'. In Chapter 2, I develop a semantics based on the idea that you want to stay home considering its benefits, but ignoring the career consequences. Chapter 3. The meaning of anankastic conditionals, like 'if you want to go to Harlem, you have to take the A train', is clear, yet how it arises compositionally has proven an enigma. Many had thought the enigma unraveled by Condoravdi and Lauer (2016). I argue not: anankastic conditionals are still a mystery.
Chapter 4 (co-authored with Lyndal Grant). The widely held Satisfaction-is-Truth Principle, according to which if A wants p, then A has a desire that is satisfied in exactly the worlds where p is true, posits an appealingly straightforward link between what we want and the satisfaction conditions of our desires, and in turn enables appealingly straightforward accounts linking what we want, the wanting relation, and the contents of desires. We argue that the principle is nonetheless false.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 77-82).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124093</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal nonlinear digital signal processing : a dynamical systems approach</title>
<link>https://hdl.handle.net/1721.1/124092</link>
<description>Optimal nonlinear digital signal processing : a dynamical systems approach
Tanovic, Omer.
This thesis addresses optimal nonlinear digital signal processing problems aimed at improving the power efficiency of modern wireless transmission systems. The first part of this thesis is motivated by peak-to-average power ratio reduction of communication signals. The problem is formulated as minimization of a frequency-weighted convex quadratic cost subject to time-domain output amplitude constraints. A new method for converting optimality conditions into finite-latency stable systems generating optimal outputs with arbitrary precision is proposed. The second part contains analysis of the nonlinear distortion introduced into the baseband (discrete-time) input-output dynamics of communication systems by the (continuous-time) power amplifier nonlinearity. It is shown that when the nonlinearity is represented by a Volterra series model, the resulting baseband equivalent model is a series interconnection of a discrete-time Volterra series model, of the same degree and equivalent memory depth, and a linear system. The result suggests a new, analytically motivated structure for digital pre-distortion (DPD) of power amplifier nonlinearities. The third part of the thesis focuses on analysis and design of digitally implemented pulse-width modulators (DPWM) used as quantizers for power amplifiers in switched-mode operation. A time-domain input-output model of DPWM, which offers new insight into the nonlinear behavior of this system, is developed. A modified Lloyd-Max quantization based algorithm for linearization of the baseband of a DPWM output is proposed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 181-193).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124092</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Love first</title>
<link>https://hdl.handle.net/1721.1/124091</link>
<description>Love first
White, Patrick Quinn.
How should we respond to the humanity of others? My dissertation argues that the fundamental answer is love, defending and carrying out the beginnings of a love-first approach to ethics. The ideal of love for all (agape) has religious origins, and in my first chapter, I show that it can be detached from this religious context and still serve as a foundation for ethics. Ordinary love we have for friends and family is subject to "outward pressure": in our love for the few, we find reason to love all. I explain how we can make sense of such love, even love for those we have not met, in terms of the neglected phenomenon of "plural love," as when we love the members of our family, as such. The problem with agape is not that it is impossible but that it conflicts with the good of selective love. In light of this conflict, I argue that we should treat agape as an ideal to be approximated. We can understand respect as the minimum required approximation of love and derive the basic features of deontological ethics (commitment to equality, respect for autonomy, non-aggregative concern for well-being, imperfect duties of beneficence) from the ideal of agape. Where the first chapter is an investigation into how we should act towards others in general, the second turns to partiality, beginning with our love for friends and family. Suppose I help my friend Kevin, acting out of love, when I might have spent my time and resources helping strangers. It seems obvious that I am justified in being partial because we are friends. Yet when I act out of love, I am not motivated by the fact that we are friends (which seems like "one thought too many") but by facts like Kevin's need. The normative and motivational desiderata on a theory of partiality thus pull in opposite directions: where the relationship seems to play a justifying role, it need not be a motivating reason. The solution lies in recognizing the neglected diachronic nature of partiality, and of practical rationality.
To act out of love is not to be moved by the fact of one's relationship; it is to have a history of so relating to someone. And it is in virtue of that history that non-relational facts (e.g., that Kevin needs help) have greater rational significance. History counts not as a reason on which we act but as a phenomenon that affects the weight of other reasons. I derive this fact from a (defeasible) rational requirement of constancy and show how this requirement can be generalized to other cases of practical reasoning in the face of parity. My third chapter asks when and why we should tell the truth. Philosophers have traditionally answered this question with an exclusive focus on lying and deception, ways of obscuring the truth, while giving little thought to indiscretion: sharing or eliciting a truth that should be left unsaid. I argue that these vices and their correlative virtues must be theorized together. A unified account of honesty and discretion must start with the concrete relationships between speaker and interlocutor. Our relationships determine what information is and is not "in bounds." These communicative norms are constitutive of our relationships: that my friend can ask about my private life where my colleague cannot is part of what makes our relationship an intimate friendship. I argue that our reasons to tell the truth are explained by our relationships: we have reason to follow their communicative norms just insofar as we have reason to be in them. We moreover have reasons to share or withhold the truth in order to shape our relationships: we can put certain topics in or out of bounds, molding our relationships into something new.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 151-160).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124091</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Keep it secret, keep it safe : privacy, security, and robustness in an adversarial world</title>
<link>https://hdl.handle.net/1721.1/124090</link>
<description>Keep it secret, keep it safe : privacy, security, and robustness in an adversarial world
Sealfon, Adam Benjamin Gelernter.
The deployment of large-scale systems involving many individuals or devices necessitates the design of computational frameworks that are resilient to failures or malicious actors. This thesis introduces algorithms and definitions for a series of problems concerning robustness, security, and privacy in the many-party setting. We describe protocols for maintaining a stable configuration despite adversarial perturbations, for cryptographic tasks involving secure multiparty computation and anonymity-preserving authentication, and for privacy-preserving analysis of networks. The results presented span the fields of distributed algorithms, cryptography, and differential privacy. We first model and describe a protocol for the problem of robustly preserving a stable population size in the presence of continual adversarial insertions and deletions of agents. Turning to cryptography, we explore the possibility of leveraging an infrastructure for secure multiparty computation, characterizing which networks of pairwise secure computation channels are sufficient to achieve general secure computation among other sets of parties. We next introduce a definitional framework and constructions for ring signatures that provide more fine-grained functionality, explicitly delineating whether parties can convincingly claim or repudiate authorship of a signature. Finally, we turn to differential privacy for graph-structured data. We present efficient algorithms for privately releasing approximate shortest paths and all-pairs distances of a weighted graph while preserving the privacy of the edge weights. We also present efficient node-private algorithms for computing the edge density of Erdős-Rényi and concentrated-degree graphs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 239-249).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124090</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computation in models inspired by near-term quantum devices</title>
<link>https://hdl.handle.net/1721.1/124088</link>
<description>Computation in models inspired by near-term quantum devices
Schaeffer, Luke (Luke Robert)
The race is on to build the first quantum computer, and although there are many groups working towards this goal, their quantum devices have certain architectural properties in common. First, the devices tend to be built on qubits arranged in a 2D grid, with gates between neighboring qubits. Second, we expect Clifford gates will be an important gate set because of their close connection to stabilizer codes (being both necessary to encode qubits, and easily implemented on encoded logical qubits). Finally, the limited lifespan of qubits (due to various forms of noise) encourages shallow circuits, at least until fault tolerance is achieved. It is important to acknowledge these limitations and incorporate them into our models of computation in order to make the most of near-term quantum devices. In this thesis, we explore the three concepts above. First, we see a cellular automaton with a demanding universality property, to illustrate that computation in the grid is possible even under extreme circumstances. Second, we present a classification of subsets of the Clifford gates, furthering our understanding of this important quantum gate set. Finally, recent work of Bravyi, Gosset, and König (2018) shows, unconditionally, that there are problems that can be solved by constant-depth quantum circuits, but not by constant-depth classical circuits. We present two follow-up results about low-depth quantum circuits with the goal of strengthening the classical hardness. One result extends the separation to AC⁰ circuits (constant depth, unbounded fan-in AND/OR gates), and arguably simplifies the Bravyi et al. problem. The other result proves hardness beyond AC⁰ (specifically, to ⊕L) for the task of interactively simulating certain constant-depth quantum circuits.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 199-208).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124088</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication Information Modeling (FIM)</title>
<link>https://hdl.handle.net/1721.1/124086</link>
<description>Fabrication Information Modeling (FIM)
Duro Royo, Jorge.
This thesis discusses novel strategies to include physical media information at multiple dimensions and relating to diverse disciplines within traditional design tools. Specifically, it addresses challenges that arise when aiming at describing computational and manufacturing strategies for material-, time- and scale-dependent phenomena. Fabrication Information Modeling (FIM) develops design processes and exemplar projects able to operate across media, disciplines, and scales, incorporating concepts of multidimensionality, media-informed computation, and trans-disciplinary data integration. Digital fabrication is today a rapidly evolving concept transitioning from traditional assembly of differentiated parts, to file-to-fabrication construction efforts, and even towards guidance of material synthesis on-the-fly and growth of biological agents into structures. Advances in the fields of materials engineering, robotic automation, artificial intelligence, and synthetic biology open up opportunities for incorporating new physical world information, from organism, material, machine, and environment, within and throughout digital design and manufacturing processes. With FIM and FIM-driven projects I aim to contribute to the field of digital design and fabrication by enabling feedback workflows where (1) materials are designed rather than selected; where (2) the question of how information is passed across spatiotemporal scales is central to design generation itself; where (3) modeling at each level of resolution and representation relates traditionally-unlinked methods and is carried out by myriad media; and finally, where (4) virtual and physical considerations coexist as equals.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 188-200).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124086</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Holographic augmented reality : towards near-to-eye electroholography via guided wave acousto-optics</title>
<link>https://hdl.handle.net/1721.1/124085</link>
<description>Holographic augmented reality : towards near-to-eye electroholography via guided wave acousto-optics
Jolly, Sundeep (Sundeep Kumar)
Near-to-eye displays act to project imagery directly into a viewer's eye and can range in instantiation from extremely simple (such as an optical viewfinder) to more complex immersive displays for applications in virtual and augmented reality. Many current schemes for near-to-eye display employ stereoscopic techniques; however, such instantiations do not consistently present correct accommodation and vergence cues to the viewer, limiting their potential for seamless, comfortable augmented reality applications. Recent techniques based around light-field display methods show promise in the delivery of consistent depth cues, although their applicability in presenting scenery with jointly high spatial and angular resolution is limited. Electroholographic displays have been shown to provide the highest degree of visual realism and consistency amongst cues to depth relative to all competing technologies for 3-D display, and several recent instantiations based around pixelated spatial light modulators have shown their utility for near-to-eye display applications. However, constraints on the available space-bandwidth product in such pixelated modulators limit the usable system étendue, resulting in a reduced eyebox or field of view. In contrast, waveguide spatial light modulators offer the potential for displays with extremely high space-bandwidth product, compact form factors, and full-color operation via frequency-division multiplexing.
This dissertation aims to assess the feasibility of waveguide-based electroholography for near-to-eye augmented reality display. In particular, such feasibility is assessed through (i) a static set of near-to-eye holograms computed via iterative Fresnel-domain techniques and fabricated via grayscale electron-beam lithography, and (ii) the design and analysis of a fully monolithic photonic platform for transparent, flat-panel holographic display requiring no supporting optics and implemented via anisotropic leaky-mode coupling in conjunction with integrated Bragg-regime diffractive combiner optics in lithium niobate. Furthermore, this dissertation presents a fabrication modality for multiscale, transparent, flat-panel holographic video displays based around femtosecond direct laser writing. Methods for and results in the integration of anisotropic waveguides, volume Bragg reflection holograms, and surface acoustic wave transducers in a lithium niobate substrate are presented.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124085</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laser direct-write fabrication of MEMS</title>
<link>https://hdl.handle.net/1721.1/124084</link>
<description>Laser direct-write fabrication of MEMS
Patil, Prashant (Prashant Tarachand)
Micro-electromechanical systems (MEMS) have many applications in healthcare, consumer electronics, and the automobile industry. Unfortunately, the development of novel MEMS is significantly hindered by the limitations of state-of-the-art MEMS microfabrication processes, such as the high cost of equipment ownership, long development times, and the limited choice and integration of fabrication materials. Recent developments in alternative MEMS fabrication processes such as PCB-MEMS, laminate MEMS, pop-up book MEMS, and soft-MEMS have reduced fabrication cost, increased material choice, and facilitated material integration. However, MEMS fabricated using these methods have larger feature sizes and lower aspect ratios than MEMS produced using the conventional deep reactive ion etching (DRIE) microfabrication process. Moreover, fabricating MEMS with six-degree-of-freedom (DOF) free-standing microstructures using these processes is challenging. Finally, the choice of fabrication material is fairly limited, and each material requires a separate manufacturing process. This thesis presents a novel MEMS fabrication process called multi-lamina assembly of laser micromachined laminates (MALL), which can fabricate MEMS comparable to DRIE, enables free-standing microstructures with six degrees of freedom, and further expands the choice of fabrication materials. Moreover, the proposed approach offers a single microfabrication method capable of processing a wide range of materials. A novel microfabrication process called laser-assisted material phase-change and expulsion (LAMPE) micromachining is developed. Using this process, the fabrication of high-aspect-ratio structures with lateral features as small as 10 μm and aspect ratios as large as 10:1 is demonstrated in metals, silicon, and diamond. Previously, structures with such high aspect ratios and small lateral features could be fabricated only in silicon, using the deep reactive ion etching process.
The LAMPE micromachining process is used to manufacture the individual layers of a MEMS device. Subsequently, the micromachined laminates are stack-assembled and bonded to construct MEMS devices. Using the MALL process, the fabrication of six-degree-of-freedom free-standing structures as thin as 10 μm is demonstrated. In addition, the gap between a free-standing structure and the substrate can be as small as 12.5 μm. The utility of the MALL process is demonstrated by fabricating three MEMS devices. First, an electrostatic comb-drive actuator is fabricated using copper as the structural material. The distance between the comb-drive fingers is 10 μm, and the thickness of the fingers is 100 μm. This is the first demonstration of using a metal to fabricate a comb-drive structure with such small lateral features and high aspect ratio. Second, a MEM relay for high-current switching applications is demonstrated. The current-carrying capacity of the MEM relay is higher than OOmA. Finally, the development of high-aspect-ratio diamond rotors for enhancing the resolution of magic-angle spinning nuclear magnetic resonance spectroscopy (MAS-NMR) is presented. This is the first demonstration of micromachining such ultra-deep (5 mm), ultra-high-aspect-ratio (10:1) holes in diamond. The MALL process can manufacture MEMS comparable to the conventional DRIE microfabrication process, and the manufacturing cost per device in MALL is lower than in DRIE. However, DRIE offers a higher part production rate than MALL. The part production rate in MALL can be matched to that of DRIE by using multiple laser sources; the investment required to purchase a laser micromachining tool with multiple lasers is comparable to the cost of a DRIE tool. Thus, equal investment in MALL and DRIE results in equal part production rates. The MALL process significantly reduces the time required for material integration, process development, and design iteration.
As a result, the MEMS device development time is reduced from many months (in DRIE) to a day. The MALL process empowers rapid testing of new MEMS concepts and theories. Moreover, MALL can be used to fabricate one-of-a-kind MEMS devices and for low-volume production, where a high initial investment cannot be justified. The MALL process enables greater material selection and integration, rapid development, and integrated packaging, thereby empowering a new paradigm in MEMS design, functionality, and application. The tools and material cost of MALL fabrication can be as low as $25,000, which is affordable to a wider scientific community. The low capital investment and use of low-cost materials enable MEMS fabrication for the masses and can expedite the development of novel MEMS.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-162).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124084</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Error discovery through human-AI collaboration</title>
<link>https://hdl.handle.net/1721.1/124083</link>
<description>Error discovery through human-AI collaboration
Ramakrishnan, Ramya.
While there has been a recent rise in increasingly effective human-AI teams in areas such as autonomous driving, manufacturing, and robotics, many catastrophic failures still occur. Understanding the cause(s) of these errors is crucial for reducing and fixing them. One source of error is an agent's or human's limited view of the world, which means their representations are insufficient for acting safely. For example, self-driving cars may have limited sensing that prevents them from recognizing rare vehicle types, like emergency vehicles. This thesis focuses on identifying errors that occur due to deficiencies in agent and human representations. In the first part, we develop an approach that uses human feedback to identify agent errors that occur due to an agent's limited state representation, meaning that the agent cannot observe all features of the world. Experiments show that using our model, an agent discovers error regions and is able to query for human help intelligently to act safely in the real world. In the second part, we focus on determining the cause of human errors as either occurring due to the human's flawed observation of the world or due to other factors, such as noise or insufficient training. We present a generative model that approximates the human's decision-making process and show that we can infer the latent error sources with a limited amount of human demonstration data. In the final thesis component, we tackle the setting where both an agent and a human have rich perception but, due to selective attention, each focuses on only a subset of features. When deploying these learned policies, important features in the real world may be ignored because the simulator did not accurately model all regions of the real world. Our approach is able to identify scenarios in which an agent should transfer control to a human who may be better suited to act, leading to safe joint execution in the world.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 181-202).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124083</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast spectral primitives for directed graphs</title>
<link>https://hdl.handle.net/1721.1/124075</link>
<description>Fast spectral primitives for directed graphs
Peebles, John Lee Thompson, Jr.
In this thesis, we study several algorithmic problems involving numerical linear algebra, probability, and statistics. Its main results include the following: -- We give the first nearly linear time algorithms for a large class of directed graph problems, including computing the stationary distribution of a Markov chain with only a logarithmic dependence on the mixing time. Our approach is based on developing new spectral tools for directed graphs, including the first algorithms for sparsifying directed graphs and solving directed Laplacian linear systems. -- Symmetric diagonally dominant matrices frequently arise in science and engineering applications, often when discretizing certain types of differential equations. We give faster algorithms for estimating the determinant of a symmetric diagonally dominant matrix and for sampling random spanning trees from a graph. -- Generative adversarial networks (GANs) are an important technique used in deep learning. However, the methods for training them are not satisfactorily understood from a theoretical perspective. To help improve this understanding, we devise a parametric problem which is sophisticated enough to capture many of the main difficulties associated with GAN training, yet simple enough to analyze rigorously. -- Fibonacci heaps are a data structure that implements a priority queue with optimal amortized runtimes for its operations. They are especially useful when one has to change the priorities of elements many more times than one needs to remove elements from the data structure. We resolve two conjectures regarding the efficiency of variants of Fibonacci heaps, the first due to Karger and the second due to Fredman. -- Perhaps the most basic statistical question one can ask is how many samples of data one needs in order to check whether the data came from a hypothesis distribution or not. 
This is sometimes called goodness of fit testing in statistics or identity testing in computer science. We give the first algorithms that provably use as few samples as possible for these problems in all parameter regimes up to constant factors.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 331-345).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124075</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Achieving high CPU efficiency and low tail latency in datacenters</title>
<link>https://hdl.handle.net/1721.1/124072</link>
<description>Achieving high CPU efficiency and low tail latency in datacenters
Ousterhout, Amy (Amy Elizabeth)
As datacenters have proliferated over the last couple of decades and datacenter applications have grown increasingly complex, two competing goals have emerged for networks and servers in datacenters. On the one hand, applications demand low latency, on the order of microseconds, in order to respond quickly to user requests. On the other hand, datacenter operators require high CPU efficiency in order to reduce operating costs. Unfortunately, today's systems do a poor job of providing low latency and high CPU efficiency simultaneously. This dissertation presents Shenango, a system that improves CPU efficiency while preserving or improving tail latency relative to the state of the art. Shenango establishes that systems today are unable to provide CPU efficiency and low latency simultaneously because they reallocate cores across applications too infrequently. It contributes an efficient algorithm for deciding when applications would benefit from additional cores, as well as mechanisms to reallocate cores at microsecond granularity. Shenango's fast core reallocations enable it to match the tail latency of state-of-the-art kernel-bypass network stacks while linearly trading off throughput between latency-sensitive applications and batch applications as load varies over time. While Shenango enables high efficiency and low tail latency at endhosts, end-to-end application performance also depends on the behavior of the network. Thus this dissertation also describes Chimera, a proposal for building on Shenango to co-design congestion control with CPU scheduling, so that congestion control can optimize for end-to-end latency and efficiency.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 95-104).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124072</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The kernel of doubt : agricultural biotechnology, braided temporalities, and agrarian environments in India</title>
<link>https://hdl.handle.net/1721.1/124070</link>
<description>The kernel of doubt : agricultural biotechnology, braided temporalities, and agrarian environments in India
Chaudhuri, Ashawari.
Genetically modified (GM) Bt cotton was introduced in India in 2002 through a joint venture company, Mahyco-Monsanto Biotech (India) Private Limited (MMB), a collaboration between the Indian agricultural company Maharashtra Hybrid Seed Company (Mahyco) and the U.S.-based agrochemical company Monsanto, which has since been acquired by Bayer. In practice, Indian seed companies purchased seeds from MMB and, through conventional breeding techniques, crossed plants containing the Bt gene with cotton plants owned by the companies. From the very beginning of the legalization of genetically modified Bt cotton, it emerged as the seed of certitude and doubt, of truth and ruse, of promise and disbelief all at once. Debates were already brewing about the advantages of using transgenic cotton seeds as early as 2003. From "remarkable success", because higher yields made India the second largest producer of cotton in the world, to "continuous failure", due to the increased resistance that pests developed against Bt cotton over the years, to the association of transgenic seeds with massive debt cycles, farmers' suicides, and large-scale protests, the debate over the advantages or disadvantages of using transgenic seeds has been fierce and muddled. As Glenn Stone points out in "Constructing Facts", these opposing camps have their own "authenticating systems" that construct their own "rules for facticity" while nullifying all others (Stone 2012). This dissertation explores these radically different entailments of the introduction of a GM crop. 
My work is shaped by my long-standing desire to understand how agrarian lives and experiences might inform narratives of science and the environment at national and global scales. Some of the questions this dissertation explores are: how do different communities, such as farmers, scientists, and regulators, positioned at opposing ends of the agrarian political economy, understand and work with GM seeds? What modes of analysis, abstraction, and writing about the seeds emerge in these different sites, as the materiality of the seeds becomes constantly entwined with the practices and experiences of the communities I study? What remains and what gets submerged when we understand biotechnology in terms of partnerships between corporate enterprises and academia, biocapital, risk studies, or cost-benefit analysis?
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 254-271).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124070</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The subjects of modernism : mathematics, art, and the politics of value in twentieth-century United States</title>
<link>https://hdl.handle.net/1721.1/124069</link>
<description>The subjects of modernism : mathematics, art, and the politics of value in twentieth-century United States
Kim, Clare Seungyoon.
My dissertation illuminates the status of mathematical knowledge in relation to other intellectual domains and racialized social forms, particularly American Orientalism, in the twentieth-century United States. Observers and practitioners have long engaged in drawing relations between mathematics and the arts. Around the turn of the twentieth century, however, when mathematicians reconceived mathematics around notions of abstraction, formalism, and "made" theories bearing no necessary relationship to the empirical world, understandings of the relationship of mathematics to the arts changed. Historians of mathematics, mathematicians, and historians of art have since referred to these twentieth-century intellectual changes as a "modernist" transformation. They refer to mathematical modernism in terms of metaphors that reflect shared values of being autonomous, creative, and a form of self-expression. My dissertation recovers an alternative history that upends the assumption that mathematical modernism developed within the pre-existing boundaries of its discipline. It tracks a series of collective efforts by mathematicians, artists, critics, and historians to use and articulate a place for formally abstract and axiomatically derived mathematical techniques within humanistic and artistic inquiries. Drawing from archival and published sources across four main chapters, I trace four specific efforts that reflect changes and transformations in American higher education and academic institutions between the 1890s and the present. Chapter one chronicles mathematicians', historians', and art collectors' interpretations of Japanese and Chinese mathematical traditions between the 1890s and 1920s. It shows how their conclusions that the resulting "oriental mathematics" was universal but inferior to current practices were informed by a racialized discourse that treated Japanese and Chinese math as symbols of exotic difference. 
Chapter two recounts and describes the production of a mathematical theory of aesthetic measure at Harvard University in the 1930s. It shows how the theory was part of a broader artistic movement to articulate a theory of pure design. Chapter three examines the valuation and nature of mathematics within the liberal arts setting at Black Mountain College in the 1940s and 50s. It recovers how, rather than being essential to high art, mathematics was also critical to the resurgence of craft. The final chapter elucidates the contradictions in valuing mathematics as abstract, creative, and autonomous by examining a copyright dispute between a mathematical origami designer and a conceptual artist in the 2000s. The resulting view of US mathematical modernism as embedded within broader intellectual domains illuminates a more nuanced view of changes in what has or has not counted as a mathematical subject.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 116-133).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124069</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale membranes for electromechanical systems</title>
<link>https://hdl.handle.net/1721.1/124068</link>
<description>Nanoscale membranes for electromechanical systems
Murarka, Apoorva.
Micro- and nano-electromechanical systems (MEMS/NEMS) constitute a technology field that branched out of semiconductor integrated circuit (IC) manufacturing about four decades ago, and one that forms the backbone of the Internet of Things era. However, as MEMS devices become ubiquitous, they are also limited by the narrow platform of IC material sets and design parameters, which significantly constrains prevalent MEMS functions and applications. In order to expand the application space of MEMS/NEMS, it is imperative that novel material platforms and manufacturing methods be considered. The basic building block of the approach demonstrated in this work is a suspended membrane of nanoscale thickness (or "nanomembrane") that is first fabricated separately and then additively donated via contact-transfer printing to complete a nanostructured variable-capacitance device. The process simplifies fabrication of mechanically active nanostructured elements over relatively large areas, and yields electromechanical systems with low operating voltages and high energy efficiency. Specifically, transfer printing and suspension of 125-nm-thick gold nanomembranes with areas on the order of 10 mm² and larger is demonstrated, and the resulting structures are utilized as electrostatic microspeakers. These purely metallic membranes exhibit ideal spring-like behavior at human auditory frequencies, devoid of any mass-related mechanical resonances. This in turn results in the widest uniform bandwidth in the human auditory range demonstrated for portable acoustic actuators. 
Electrostatic microspeakers enabled by these nanomembranes exhibit superior acoustic performance in terms of output acoustic pressure frequency-response uniformity in both free-field and pressure-field radiation, with actuation below 10 Volts. This is in contrast to state-of-the-art electrostatic speakers that require high actuation voltages, often in excess of 100 Volts, and other prevalent voice-coil technologies with high resistive power loss and low electromechanical conversion efficiency. The free-field and pressure-field performance of these gold nanomembrane electrostatic microspeakers is corroborated via analytical models derived using the inhomogeneous acoustic wave equation, via lumped-parameter models, and via finite element analysis. The metallic nanomembranes are also integrated with polymeric materials and organic semiconductors to demonstrate vertical-cavity surface-emitting lasers (VCSELs) that are tuned over tens of nanometers with electrostatic actuation below 10 Volts. The multi-domain nature of this work aims to demonstrate the versatility of contact-transfer-printed metallic nanomembranes, and their potential as a superior alternative to conventional semiconductor and piezoceramic thin films for a variety of MEMS/NEMS sensor and actuator applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 296-303).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124068</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Afterlives of extinction : the politics of display in the modern United States</title>
<link>https://hdl.handle.net/1721.1/124067</link>
<description>Afterlives of extinction : the politics of display in the modern United States
Laurence, Alison (Alison G.)
Long extinct animals have a powerful hold on the American popular imagination. Dinosaurs, in particular, have been iconic and pervasive representatives of a planetary past for much of the twentieth century. While natural history museums are the most obvious spaces in which to encounter extinct animals, these scientific venues have never held exclusive rights to the charismatic megafauna of the planet's past, nor have they entirely controlled their cultural meanings. This dissertation examines sites in which American publics have encountered extinct animals through interrelated case studies that span the twentieth century, including the American Museum of Natural History's fossil halls, Depression-era world's fair exhibits, the contested construction of a Pleistocene Park at the La Brea Tar Pits of Los Angeles, Sinclair Oil Company's use of dinosaurs as spokes-creatures for oil culture, and the dinosaurs that currently draw visitors to young-earth creationist museums. While detailing these diverse exhibitions, I dig into who exhibits long extinct life, what these exhibits say about the deep past, and how diverse publics have embraced and pushed back against these stories, demonstrating the role of popular display in transforming fossilized animals from scientific specimens into consumer objects and artifacts of everyday American life. Further, I show how representations of the planetary past and the creatures that inhabited the world before humans are inflected with contemporary concerns and serve as vehicles for human values. The exhibition of dinosaurs at the Creation Museum is the most conspicuous contemporary example of extinct animals operating as didactic instruments, deployed to serve a politically interested institution's agenda. 
While sensational and unusually explicit in its aims, this move is not unique. Since the debut of dinosaur models in Victorian England, three-dimensional scenes from deep time have staked a claim in the political and cultural contests of their moment. From imperial apologia to biblical apologetics, the mute and mutable life forms of the planetary past have been used in defense of diverse narratives. Across all of these case studies, I tease out the explicit and implicit values embedded in these exhibits, emphasizing the presentist human interests that have been mapped onto extinct life.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124067</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Truffle crops and soil drugs : new fungal practices and epistemologies for the 21st century</title>
<link>https://hdl.handle.net/1721.1/124066</link>
<description>Truffle crops and soil drugs : new fungal practices and epistemologies for the 21st century
Oviatt, Peter Gibbs.
Perigord truffles (Tuber melanosporum) have, over the past 200 years, become a cultivated crop grown across the globe. Since the 1890s, soil microbes have been commodified to "fertilize" agricultural crops (and are now referred to as biofertilizers, biostimulants, or simply "soil drugs"). This dissertation examines both truffle crops and soil drugs to investigate how a beneficial relationship between plant roots and fungi has become meaningful in twenty-first-century industrial societies. This fungus-root connection, which occurs in over eighty percent of plant species, is called the mycorrhizal symbiosis. I draw on ethnographic research centered in Corvallis, Oregon, and Dijon, France, to show how mycorrhizal practitioners (from foragers and farmers to laboratory researchers and industry boosters) have struggled against the biological constraints of the mycorrhizal symbiosis and have combined agronomic and agrarian epistemologies to develop a diverse suite of "sustainable" land management practices that promise "symbiotic efficiencies." In truffle farming, this has resulted in an ethic of professionalization (with "best practice" guidelines) and a desire for what Anna Tsing has called "scale making." At the same time, a contrasting ethos of "engaged waiting" guides a subset of truffle farmers who continue to steward agrarian ecologies by remaining attuned to a wide array of life forms and extended time frames. In the biofertilizer industry, mycorrhizal science has given rise to numerous methods for producing mycorrhizal inoculants, or soil drugs. Following the work of Christopher Henke, I discuss how mycorrhizal inoculants are poised to bring about two forms of repair to soil ecologies and industrial agriculture: maintenance and transformation. 
With both truffle farming and the mycorrhizal biofertilizer industry, I examine the challenges and controversies surrounding the efficacy of emergent mycorrhizal practices, testing claims about ecological restoration, universal standards of practice, and the role of farm consultants. A recent wave of mycorrhizal science employs experimental systems that look beyond a singular fungus-root pair to consider broad and indeterminate communities of fungi, bacteria, and plants; this new science critiques the use of commercial inoculants in favor of reformed agricultural practice (from plant breeding to tillage regimes) that directly consider the role of soil symbionts.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 255-276).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124066</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The complexity of sampling from a weak quantum computer</title>
<link>https://hdl.handle.net/1721.1/124065</link>
<description>The complexity of sampling from a weak quantum computer
Mehraban, Saeed.
Quantum computers have the promise of revolutionizing information technology, artificial intelligence, cryptography, and industries such as drug design. Until now, our understanding of quantum computing has been largely bound to theoretical speculation, which has successfully led to the design of algorithms and communication protocols that would dramatically change information technology, but that work only if we can experimentally build a reliable, large quantum computer, which at the moment seems to be beyond the bounds of possibility. As of August 30, 2019, it appears that we are about to enter a new era in quantum technology. Over the next few years, intermediate-scale quantum computers with ~50-100 qubits are expected to become practical. These computers are too small to do error correction, and they may not even capture the full power of a universal quantum computer. This is a significant step, and inevitably we encounter the question "what can we do with this new technology?" A major theoretical challenge is to understand the capabilities and limitations of these devices. In order to approach this challenge, quantum supremacy experiments have been proposed as a near-term milestone. The objective of quantum supremacy is to find computational problems that are feasible on a small-scale quantum computer but are hard to simulate classically. Even though a quantum supremacy experiment may not have practical applications, achieving it will demonstrate that quantum computers can significantly outperform classical ones on, at least, artificially designed computational problems. 
Among other proposals, in recent years two sampling-based quantum supremacy experiments have been put forward: (1) The first proposal, known as Boson Sampling (Aaronson and Arkhipov '10), is based on linear optical experiments. A baseline conjecture of Boson Sampling is that it is #P-hard to approximate the permanent of a Gaussian matrix with zero mean and unit variance with high probability. This is known as the permanent-of-Gaussians conjecture. (2) The second is based on sampling from the output of a random circuit applied to a square grid of qubits. The Google quantum AI group is planning to implement this task on a processor composed of a few (~50-100) superconducting qubits. In order to argue that this sampling task is hard, building on previous work of Aaronson, Arkhipov, and others, they conjectured that the output distribution of a low-depth circuit, i.e., one whose depth scales asymptotically as the square root of the number of qubits, is anti-concentrated, meaning that it has nearly maximal entropy. This is known as the "anti-concentration" conjecture. The first part of this thesis makes progress towards the permanent-of-Gaussians conjecture and shows that the permanent of Gaussian matrices can indeed be approximated in quasi-polynomial time with high probability if, instead of zero mean, one considers a nonzero but vanishing mean (~1/polyloglog in the size of the matrix). This result provides, to the best of our knowledge, the first example of a natural counting problem that is #P-hard to compute exactly on average and #P-hard to approximate in the worst case, but becomes easy only when approximation and average case are combined. This result is based on joint work with Lior Eldar. The second part of this thesis proves the anti-concentration conjecture for random quantum circuits. 
The proof is an immediate consequence of our main result, which settles a conjecture of Brandão-Harrow-Horodecki '12 that short-depth random circuits are pseudorandom. These pseudorandom quantum processes have many applications in algorithms, communication, and cryptography, as well as in theoretical physics, e.g., the so-called black hole information problem. This result is based on joint work with Aram Harrow. We furthermore study the speed of scrambling using random quantum circuits with different geometrical connectivity of qubits. We show that this speed crucially depends on the way we define scrambling and on the details of the connectivity between qubits. In particular, we show that entanglement and the so-called out-of-time-ordered correlators are inequivalent measures of scrambling if the gates of the random circuit are applied between nearest-neighbor qubits on a graph with a tight bottleneck, e.g., a binary tree. A major implication of this result is that the scrambling speed of black holes may depend critically on the way one defines scrambling; previously, it was believed that black holes are the fastest scramblers of quantum information. This result is based on joint work with Aram Harrow, Linghang Kong, Zi-Wen Liu, and Peter Shor.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis. "Some pages in the original document contain text that runs off the edge of the page. p. 114, 134, 141"--Disclaimer Notice page.; Includes bibliographical references (pages 203-211).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124065</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speech, signal, symptom : machine listening and the remaking of psychiatric assessment</title>
<link>https://hdl.handle.net/1721.1/124064</link>
<description>Speech, signal, symptom : machine listening and the remaking of psychiatric assessment
Semel, Beth Michelle.
This multi-sited, ethnographic dissertation follows teams of psychiatric and engineering professionals collaborating to tackle one of Western psychiatry's longest-standing issues: the subjective nature of mental illness. Situated at three different U.S.-based universities, the teams are driven by a conviction that conventional methods of psychiatric screening are fallible if not altogether inaccurate, since they depend upon a mental health care worker's ability to interpret the semantic content of a patient's speech. Through research studies involving human subjects, the teams hope to develop more biologically based and resource-efficient screening techniques that instead analyze paralinguistic, acoustic components of speech, such as pitch, speaking rate, and breathiness, which they argue are more directly linked to the internal mechanisms that drive mental illness. By turning to the expertise of computer scientists and engineers, they seek to build "machine listening" prototypes for psychiatric assessment: technologies that use a microphone to capture sound and artificial intelligence (AI) to analyze it. While their studies are premised on the notion that AI can listen beyond the human by attending to sounds of speech whose psychopathological significance is supposedly set apart from linguistic meaning and human difference, in order to gather and classify the data necessary for building their technologies, researchers must rely on the very components of language that they seek to overcome: its interactional, sociocultural dimensions. 
I show how the connections between spoken utterances and inner states that researchers design their systems to make "autonomously" depend upon a tightly managed but oftentimes hidden infrastructure of human labor, including the labor of research subjects. The division of labor within the teams replicates hierarchies of value within mental health care professions, which place diagnosis and treatment at the top as expert, biomedically and legally ratified forms of judgment, and place the data entry and triage work of assessment at the bottom, as skilless, para-professional, and mechanized tasks. In describing the vexed status and ethics of listening, language, labor, and care in contemporary U.S. mental health care, the dissertation tells a larger story about the stakes of framing mental illness as a scientific, bureaucratic problem calling for a technological intervention.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124064</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verification of correctness properties of programs that read input files</title>
<link>https://hdl.handle.net/1721.1/124063</link>
<description>Verification of correctness properties of programs that read input files
Kim, Deokhwan, Ph. D., Massachusetts Institute of Technology.
This thesis presents new techniques for verifying correctness properties of programs that process input files. These techniques apply to programs written in standard programming languages such as C and focus on relationships that must hold between program execution points, the current location of the file position indicator of the open input file, and the contents of the input file. The thesis presents a specification language that developers can use to express these relationships and insert them into the program as assertions involving the file position indicator and file contents at different program points. It also presents a program verification system that verifies that the assertions hold in all program executions, for all possible input files and file contents. The soundness of the verification system has been proved, based on the formal definition of the syntax and semantics of the specification language. The system synthesized verification conditions from the specifications for a PNG image viewer and a JPEG image converter, and successfully verified all of them.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 117-120).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124063</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three complexity classification questions at the quantum/classical boundary</title>
<link>https://hdl.handle.net/1721.1/124062</link>
<description>Three complexity classification questions at the quantum/classical boundary
Grier, Daniel.
A central promise of quantum computers is their ability to solve some problems dramatically more efficiently than their classical counterparts. Thus, to understand feasible computation in our physical world, we must turn to quantum rather than classical complexity theory. That said, classical complexity theory has a long and successful history of developing tools and techniques to analyze the power of various computing models. Can we use classical complexity theory to aid our understanding of the quantum world? As it turns out, the answer is yes. There is actually a very fruitful connection between quantum and classical complexity theory, each field informing the other. We will add to this perspective through the lens of classification--attempts to categorize all variations of the object of study as thoroughly and completely as possible. First, we will show that every regular language has quantum query complexity Θ(1), Θ̃(√n), or Θ(n). Combining quantum query complexity with these fundamental classical languages not only reveals new structure in these languages, but also leads to a generalization of Grover's famous quantum search algorithm. Second, we will discuss the complexity of computing the permanent over various matrix groups. In particular, this will show that computing the permanent of a unitary matrix is #P-hard. The theorem statement is classical, and yet, the proof is almost entirely the result of exploiting well-known theorems in quantum linear optics. Finally, we give a complete classification of Clifford operations over qubits. Although the Clifford operations are classically simulable, they also exhibit distinct quantum behavior, making them a particularly interesting gate set at the quantum/classical boundary.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 177-183).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124062</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting financials and discovering menu prices with alternative data</title>
<link>https://hdl.handle.net/1721.1/124061</link>
<description>Forecasting financials and discovering menu prices with alternative data
Fleder, Michael(Michael S.)
In the financial industry, key quantities like public-company financials and consumer spending drive asset pricing and other decisions. However, direct observation of any company's financials or other key signals is rare. For instance, although public companies disclose their financials through quarterly reports and press releases, disclosures are infrequent and of limited information. This has led to an explosion in demand for "alternative" datasets: noisy, secondary signals of fine-grained company financials. Alternative datasets -- e.g. consumer credit card transactions -- are increasingly available; however, quantitative methods for utilizing such noisy proxy signals are lacking. In this work, we develop quantitative methods for utilizing alternative data. Starting with datasets of anonymized consumer transactions, we focus on two problems: (i) forecasting and tracking company financials and (ii) estimating the prices customers pay for individual goods, and in what quantity. That is, we first estimate aggregate company financials (e.g. quarterly revenue) before zooming in to study customer spending details. Utilizing a novel forecasting and estimation framework, we outperform a standard Wall Street consensus benchmark in forecasting the quarterly financials of 34 public companies. Next, we perform seemingly counterintuitive inference: given an anonymous consumer's bill total (a single number), we estimate the number and prices of products purchased. We show implications in (i) detecting changes in product offerings and (ii) performing revenue attribution by product. To forecast and track company financials, we utilize a classical linear systems model to capture both the evolution of the hidden or latent state (e.g. daily revenue) and the proxy signal (e.g. credit card transactions). We analytically solve the often irresolvable system identification problem and provide a finite-sample analysis of the resulting error. 
We show this enables optimal inference with respect to mean-squared error. Last, we provide a novel, robust estimation algorithm for decomposing bill totals into the underlying individual product purchases. We prove correctness and accuracy under mild assumptions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 101-105).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124061</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On foundations of public-key encryption and secret sharing</title>
<link>https://hdl.handle.net/1721.1/124060</link>
<description>On foundations of public-key encryption and secret sharing
Degwekar, Akshay(Akshay Dhananjai)
Since the inception of cryptography, information theory and coding theory have influenced it in myriad ways, including through numerous information-theoretic notions of security in secret sharing, multiparty computation, and statistical zero knowledge, and by providing a large toolbox used extensively in cryptography. This thesis addresses two questions in this realm. Leakage Resilience of Secret Sharing Schemes: We show that classical secret sharing schemes like Shamir secret sharing and additive secret sharing over prime-order fields are leakage resilient. Leakage resilience of secret sharing schemes is closely related to locally repairable codes, and our results can be viewed as impossibility results for local recovery over prime-order fields. As an application of the result, we show the leakage resilience of a variant of the Goldreich-Micali-Wigderson protocol. From Laconic Statistical Zero Knowledge Proofs to Public Key Encryption: Languages with statistical zero knowledge proofs that are also average-case hard have been used to construct various cryptographic primitives. We show that hard languages with laconic SZK proofs, that is, proof systems where the communication from the prover to the verifier is small, imply public key encryption.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-164).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124060</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal transport in structured domains : algorithms and applications</title>
<link>https://hdl.handle.net/1721.1/124059</link>
<description>Optimal transport in structured domains : algorithms and applications
Alvarez Melis, David.
Optimal transport provides a powerful mathematical framework for comparing probability distributions, and has found successful application in various problems in machine learning, including point cloud matching, generative modeling, and document comparison. However, some important limitations curtail its broader applicability. In many applications there is often additional structural information that is not captured by the classic formulation of the problem. This information can range from explicit tree and graph-like structure, to global structural invariances. Failure to fully model this structure can hinder--if not preclude--the use of optimal transport-based approaches. This thesis presents several extensions of the optimal transport problem to incorporate structural information. First, a non-linear generalization of the cost objective based on submodularity is proposed. The resulting formulation provides a flexible framework to model explicit or latent discrete structure in the data and admits efficient optimization. Next, we investigate the issue of geometric invariances when matching embedded representations, for which a general framework for optimal transport in the presence of latent global transformations is developed. Various approaches to solve the resulting optimization problem are proposed and compared. The last part of the thesis addresses the problem of aligning datasets in which the structure is encoded through non-Euclidean manifolds, such as hyperbolic spaces. 
In response to an unexpected type of invariance that hyperbolic embeddings learned from data exhibit, a novel framework that interweaves optimal transport and hyperbolic nonlinear registration with deep neural networks is proposed. While these extensions are formulated in general terms, the experimental results presented in this thesis are focused on motivating applications in natural language processing, including unsupervised word translation, sentence similarity, domain adaptation, and ontology alignment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-169).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124059</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric methods in econometrics and statistics</title>
<link>https://hdl.handle.net/1721.1/124058</link>
<description>Geometric methods in econometrics and statistics
Mukhin, Yaroslav V.(Yaroslav Vadimovich)
Econometrics and statistics rely on asymptotic approximations to construct hypothesis tests and confidence regions. Asymptotic approximations can also be used more abstractly to study the quality (efficiency) of estimators and tests. These approximations are closely related to local (differential) properties of the functionals of the statistical model whose values are being estimated and tested. I consider statistical models and estimands motivated by economic theory and applications and study their local and global properties: I study the local properties of functionals to characterize the efficiency bounds of their estimators and the directions of most rapid (gradient) change with respect to different metrics of distance on the model. I use gradient flows to describe global evolutions on the statistical model governed by changes in a scalar functional. These flows can be used to describe economic policy and to study structural estimators motivated by economic theory.
Thesis: Ph. D. in Economics and Statistics, Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124058</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in network economics</title>
<link>https://hdl.handle.net/1721.1/124057</link>
<description>Essays in network economics
Azar, Pablo Daniel.
This thesis is a collection of three chapters, each representing an individual paper. The first chapter studies how the formation of supply chains affects economic growth. It provides a new tractable model for supply chain formation. The main innovation in this model is that firms can choose suppliers to maximize profits. Individual firms' actions determine the equilibrium input-output network and affect macroeconomic variables such as GDP. We then apply this model to understand the effect of changing supply chains on American productivity during the 1987-2007 period. The second chapter studies how a monopolist may sell multiple goods to strategic bidders. The monopolist may face a series of combinatorial constraints. For example, it may be forced to allocate at most one good to each bidder, and it may have additional constraints on which bidders can be allocated which goods. Furthermore, the monopolist does not know bidders' demand distributions. Rather, it only knows one sample from the demand distribution corresponding to each bidder. Nevertheless, by developing new online optimization algorithms, we show how simple mechanisms can approximate the monopolist's optimal revenue. Finally, the third chapter develops a new model of firm optimization to understand how shrinking electronics have contributed to increased productivity and welfare in the United States during the 2002-2017 period. In this model, firms face constraints on the size of the products they can build. As intermediate inputs, such as electronics, shrink, the firms' production possibilities frontier expands, and GDP increases.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124057</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparative look at structure-function roles in light-harvesting dynamics of purple bacteria</title>
<link>https://hdl.handle.net/1721.1/124055</link>
<description>A comparative look at structure-function roles in light-harvesting dynamics of purple bacteria
Tong, Ashley(Ashley Lynn)
Using a unique approach to solar energy conversion, photosynthetic organisms have developed a light-harvesting process with near-unity quantum efficiency. Light-harvesting proteins transfer energy from the sun to a central location, the reaction center, where charge separation occurs and energy is converted to chemical energy. Moreover, these proteins are able to carry out this efficient transfer in cellular membranes despite the complex environment found in these membranes. In particular, light-harvesting in photosynthetic purple bacteria uses a diverse set of tools, varying from species to species, to efficiently transfer energy through this protein network. External environmental pressures induced by their habitats have caused species of purple bacteria to evolve different mechanisms to deal with these pressures. Although these complexes have been studied for some time, there is still very little known about particular species. Additionally, most previous work has been on non-native samples, such as detergent-solubilized proteins, or on complex membranes such as vesicles, chromatophores, or whole membranes that contain multiple proteins with multiple processes occurring simultaneously. This work investigates how photosynthetic light-harvesting complexes are able to achieve their impressive efficiency, using ensemble ultrafast spectroscopy to measure energy transfer dynamics in near-native discoidal model membrane-discs. These model membrane-discs provide a controlled environment to effectively study how energy is transferred in a single protein and between particular sets of proteins, allowing individual steps in the light-harvesting process to be probed without other processes interfering. 
They also provide a near-native system to explore how lipid-protein and protein-protein interactions affect the energy transfer kinetics in these proteins. Additionally, this work explores the differences in energy transfer kinetics of light-harvesting proteins between species of purple bacteria. Overall, this provides new insights into the role the membrane plays in light-harvesting and how the composition of proteins within the native membrane of different species of purple bacteria can add variation to energy transfer kinetics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 67-79).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124055</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oxygen atom transfer with manganese-exchanged metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/124054</link>
<description>Oxygen atom transfer with manganese-exchanged metal-organic frameworks
Stubbs, Amanda Walcott.
Oxygenates represent some of the most versatile commodity chemicals, justifying continued interest in the discovery of new selective oxidation catalysts from both a fundamental and applied perspective. Metal-organic frameworks (MOFs) are an attractive platform for catalysis because they enable access to unique coordination environments and reactivities; this is due in part to their tunability combined with the site isolation offered by their solid state. In one example, partial substitution of Zn[superscript II] by Mn[superscript II] in Zn₄O(terephthalate)₃ (MOF-5) leads to a distorted all-oxygen ligand field supporting a single Mn[superscript II] site, whose structure was confirmed by Mn K-edge X-ray absorption spectroscopy. Upon exposure to [superscript t]BuSO₂PhIO, Mn-MOF-5 produces a putative Mn[superscript IV]-oxo intermediate, which upon further reaction with adventitious hydrogen is trapped as a Mn[superscript III]-OH species. Most intriguingly, the intermediacy of the high-spin Mn[superscript IV]-oxo species is likely responsible for the catalytic activity of the Mn[superscript II]-MOF-5 precatalyst, which in the presence of [superscript t]BuSO₂PhIO catalyzes oxygen atom transfer to selectively form epoxides from cyclic alkenes. In a second study, partial substitution of Zn[superscript II] by Mn[superscript II] in Zn₅(OAc)₄(bibenzotriazolate)₃ (CFA-1) yields a material in which manganese is supported by a ligand environment reminiscent of that found in molecular scorpionates. Unlike molecular analogs, Mn-CFA-1 is capable of activating molecular oxygen to convert substrates with sufficiently weak C-H bonds, such as cyclohexene, to alcohol and ketone products. 
In-situ spectroscopies, including Mn K-edge X-ray absorption, DRIFTS, and diffuse reflectance UV-vis, indicate that reactivity proceeds through a high-valent Mn-peroxo species. These results demonstrate that MOF secondary building units serve as competent platforms for accessing high-valent metal-oxygen species that consequently engage in catalytic oxygen atom transfer chemistry, owing to the ligand fields and site isolation provided by the material.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; "September 2019." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 93-105).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124054</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Confinement effects on multiexciton dynamics in semiconductor nanocrystals</title>
<link>https://hdl.handle.net/1721.1/124053</link>
<description>Confinement effects on multiexciton dynamics in semiconductor nanocrystals
Shulenberger, Katherine E.(Katherine Emily)
Colloidal semiconductor nanocrystals are a promising platform for a number of technological developments in a wide variety of lighting applications. They are also an incredibly useful model system for interrogating fundamental carrier interactions in crystalline semiconductor lattices. This thesis investigates the properties of multiexciton states in semiconductor nanocrystals to build an understanding of what drives their emission dynamics and efficiency. A complete understanding of the processes which dominate in a wide variety of nanocrystal systems sheds light on electron-hole and exciton-exciton interactions and provides guidance on how to engineer nanocrystals for particular applications. In the first two chapters, I build a foundation of understanding of semiconductor nanocrystal systems and of how to develop an intuitive picture of the states in question from both fundamental modeling and chemical intuition. I also present a variety of methods used to interrogate the luminescent properties of these materials, with a particular focus on those utilized in this thesis. In the second chapter in particular, I focus on how photoluminescence measurements can go astray, how to identify artifacts or background signal that could bias or invalidate data, and how to eliminate these artifacts. The next chapter details the biexciton and triexciton emission dynamics and efficiency in CdSe nanocrystals. Utilizing a well-established and well-studied semiconductor system allows nuanced interpretation of the emission dynamics and the identification of some perhaps unexpected material properties that enrich how we imagine these highly excited states. Chapter four employs a suite of methods to begin to understand carrier-carrier interactions in cesium lead halide perovskite nanocrystals. This system provides a particularly interesting platform to investigate the effect of confinement and lattice mobility on excitonic properties. 
Finally, I present a few experimental directions and ideas which have not yet been explored and would provide an excellent continuation of this work.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 125-135).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124053</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photoredox activation of carbon dioxide and unactivated aliphatic carbonyl compounds</title>
<link>https://hdl.handle.net/1721.1/124052</link>
<description>Photoredox activation of carbon dioxide and unactivated aliphatic carbonyl compounds
Seo, Hyowon, Ph. D., Massachusetts Institute of Technology.
Chapter 1: Photoredox Activation of Carbon Dioxide for Amino Acid Synthesis in Continuous Flow. Although carbon dioxide (CO₂) is highly abundant, its low reactivity has limited its use in chemical synthesis. In particular, methods for carbon-carbon bond formation generally rely on two-electron mechanisms for CO₂ activation and require highly activated reaction partners. Alternatively, radical pathways accessed via photoredox catalysis could provide new reactivity under milder conditions. Here we demonstrate the direct coupling of CO₂ and amines via the single-electron reduction of CO₂ for the photoredox-catalyzed continuous flow synthesis of [alpha]-amino acids. By leveraging the advantages of utilizing gases and photochemistry in flow, a commercially available organic photoredox catalyst effects the selective [alpha]-carboxylation of amines that bear various functional groups and heterocycles. Preliminary mechanistic studies support CO₂ activation and carbon-carbon bond formation via single-electron pathways, and we expect that this strategy will inspire new perspectives on using this feedstock chemical in organic synthesis. [color illustrations] Chapter 2: Direct [beta]-Selective Hydrocarboxylation of Styrenes with CO₂ Enabled by Continuous Flow Photoredox Catalysis. The direct [beta]-selective hydrocarboxylation of styrenes under atmospheric pressure of CO₂ has been developed using photoredox catalysis in continuous flow. The scope of this methodology was demonstrated with a range of functionalized terminal styrenes, as well as [alpha]-substituted and [beta]-substituted styrenes. [color illustrations] Chapter 3: Metal-free Reductive Coupling of Aliphatic Carbonyl Compounds and Styrenes by Photoredox Catalysis. 
Metal-free reductive coupling of aliphatic carbonyl compounds and styrenes by photoredox catalysis in continuous flow is described. The method is applicable to both unactivated aliphatic ketones and aldehydes to afford the corresponding tertiary and secondary alcohols. Preliminary mechanistic investigations suggest the catalytic formation of a ketyl radical intermediate. [color illustrations]
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124052</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of electronic structure and kinetics methods for the rational design of electrocatalysts</title>
<link>https://hdl.handle.net/1721.1/124051</link>
<description>Development of electronic structure and kinetics methods for the rational design of electrocatalysts
Ricke, Nathan Darrell Peterson.
Computational modeling has untapped potential for novel material and chemical discovery. In this thesis, we explore ways to improve existing modeling methods and how to apply these methods to design novel graphite-conjugated catalysts (GCCs). For improving electronic structure methods, we first present an extended study of bootstrap embedding theory (BET) and its ability to recover static correlation, as well as a proof of BET's ideal convergence properties. We then present a theoretical analysis using density functional theory (DFT) on a class of GCCs containing cationic nitrogen atoms, which are particularly active for catalyzing the oxygen reduction reaction (ORR). Using a mixture of high-throughput screening, statistical analysis, and computational exploration guided by chemical intuition, we design several novel GCCs, some of which DFT predicts would have enhanced activity relative to existing GCCs. Furthermore, our analysis reveals that known ORR scaling relations hold for GCCs but hints at the possibility of breaking these relations with careful molecular engineering of the GCC active sites.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 79-87).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124051</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in the synthesis of distorted tricoordinate phosphorus compounds and their use as platforms in reductive chemistries</title>
<link>https://hdl.handle.net/1721.1/124050</link>
<description>Advancements in the synthesis of distorted tricoordinate phosphorus compounds and their use as platforms in reductive chemistries
Mattos, Jared Thomas.
This dissertation describes new reactivity of geometrically distorted [sigma]³ phosphorus compounds as well as attempts to synthesize new C₂v distorted [sigma]³ phosphorus compounds. Chapter two describes synthetic efforts toward new [sigma]³ phosphorus compounds that provide new platforms to expand the chemistry of [sigma]³ phosphines. Chapter three presents a distorted phosphoramidate with a functionally hydridic P-H bond derived from the activation of water to generate equivalents of hydrogen. Chapter four explores the activation of both hydrazine and hydrazones by a distorted phosphoramidite and initial results in the use of these phosphoranes as platforms for hydrogen atom transfer chemistry. Chapter five presents a unique reaction in the formation of an oxazaphosphole that shows promise for the dearomatization of other aromatic substrates by a distorted phosphorus triamide. The research presented here provides several novel approaches to the synthesis and use of [sigma]³ phosphorus compounds.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124050</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approaches to study Zn(II) deficiency and transport in biology</title>
<link>https://hdl.handle.net/1721.1/124049</link>
<description>Approaches to study Zn(II) deficiency and transport in biology
Richardson, Christopher E. R.
Divalent zinc, Zn(II), is an abundant and essential metal ion for human health. Across diverse biological settings, it stabilizes the structure of proteins, serves as a catalytic cofactor in enzymes with disparate functions, and mediates important signaling events. The ability of cells to apply Zn(II) in all these roles is contingent upon their ability to ensure adequate, but not excessive, Zn(II) levels. This control process, or homeostasis, is maintained by at least 24 transporters, including 14 ZIPs that increase the transition metal ion concentration of the cytosol and 10 ZnTs that decrease the transition metal ion concentration of the cytosol. Zn(II) homeostasis can be challenged either by excessive or inadequate nutritional Zn(II) or by interference of other metal ions with Zn(II) uptake transporters. Neither the molecular consequences of Zn(II) deficiency nor the molecular basis of ZIP-mediated selective metal uptake is well defined.; To address both these issues, I developed and applied new methodologies to study transition metal homeostasis. First, I report the preparation and use of "A12-resin", comprising the Zn(II)-binding protein S100A12 conjugated to agarose, that is capable of selective depletion of Zn(II) from diverse biological media. I deplete cell culture media of Zn(II) by using this resin and characterize the effects of Zn(II) insufficiency on the metabolism, transcriptome, and metallome of HEK293 cells. Second, I further apply Zn(II)-depleted cell culture media in a Zn(II) uptake assay. I show that repletion of Zn(II) depleted media with ⁷⁰Zn(II), a naturally low-abundance, stable isotope of Zn(II), enables sensitive, inductively coupled plasma-mass spectrometry-based measurements of Zn(II) uptake. 
Finally, I apply this assay to characterize the metal ion selectivity of human LIV-1 subfamily Zn(II) transporters.; I show that the kinetic parameters associated with ZIP4, ZIP5, ZIP8, and ZIP10 transport of Mn(II), Cd(II), and Zn(II) are distinct, and that metal ion selectivity is conferred by the transmembrane domains of the proteins rather than by the extracellular N-terminal domains. Taken together, the work presented in this thesis enables and motivates future work to interrogate transition metal homeostasis in human cells.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124049</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and functional studies of heme binding proteins toward the understanding of malaria</title>
<link>https://hdl.handle.net/1721.1/124048</link>
<description>Structural and functional studies of heme binding proteins toward the understanding of malaria
Goren, Allena Mistral.
Malaria, a potentially lethal parasitic infection, causes approximately half a million deaths each year. Currently, nearly half of the world's population is at risk of being infected with Plasmodium, the parasite causing malaria. In this thesis, we use biochemical and biophysical techniques to describe the iron binding properties of two proteins relevant to the biological study of Plasmodium falciparum. The first is a previously uncharacterized protein, which we have named MFP, or malarial ferrous protein, that is found in the parasitophorous vacuole of Plasmodium. The second is mRuby2, a fluorescent probe often utilized in biological work to track the localization of proteins. Herein, we report on the novel heme-binding properties of both proteins. We have identified a novel function of mRuby2, a commonly used fluorescent probe. Upon incubation with heme, the fluorescence of mRuby2 decreases, providing a potential use for the protein as a heme probe as well as a limitation of its utility for in vivo localization studies. MFP is a putative lipocalin-like protein that we have identified as binding both an [Fe-S] cluster and a heme moiety. We show that the [4Fe-4S] cluster of MFP inhibits heme binding to the protein and have proposed a potential structural model to explain this finding. Taken together, our data show a complicated metal-binding protein with a yet unknown in vivo function. The identification and initial characterization of MFP, a conserved protein essential for Plasmodium viability during the blood stage of infection, has provided a new potential therapeutic target for the treatment of malaria.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124048</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering LuxR-type quorum sensing proteins for new functions</title>
<link>https://hdl.handle.net/1721.1/124047</link>
<description>Engineering LuxR-type quorum sensing proteins for new functions
DeLateur, Nicholas Andrew.
Bacteria communicate information in a process known as quorum sensing, actuating downstream gene expression based on cell-cell signalling. Cell-cell signalling allows for complex and multi-cellular behavior otherwise impossible with unicellular logic. However, building complex cell-cell signalling genetic circuits is currently challenged by a lack of tools for the fine-tuning and control of quorum sensing systems. Although derived from distinct biochemical entities, the diffusion rate and expression profile of a given LuxR-family module are not modular. Here, we develop chimeric proteins that can accept the small molecule cognate belonging to the las operon from Pseudomonas aeruginosa while activating the cognate promoter of other quorum sensing systems. The ability to swap in a modular fashion the ligand-binding domain and DNA-binding domain of transcription factors allows precise control of diffusion rates and expression profiles independently. Methods to control quorum sensing by transcriptional repression can be slow because they rely on dilution and degradation, require promoter engineering, or lack specificity against only a single signalling pathway. Here, we develop proteins to knock down expression from LuxR-type quorum sensing transcription factors utilizing molecular sequestration for fast, tunable, and specific control. Natural sequesters and engineered truncation proteins are successfully applied against 5 of the most prevalent LuxR-type transcription factors (LasR, LuxR, RhlR, RpaR, and TraR) as well as the chimeric transcription factors developed in this work. Chimeric LuxR-type quorum sensing proteins and proteins for the sequestration of LuxR-type quorum sensing proteins provide powerful new parts to facilitate building sophisticated gene circuitry.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 99-115).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124047</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel metal- and main group-catalyzed methods for modulating molecular oxygenation</title>
<link>https://hdl.handle.net/1721.1/124046</link>
<description>Novel metal- and main group-catalyzed methods for modulating molecular oxygenation
Cooper, Julian C.(Julian Colton)
Synthetic methodologies frequently rely on oxygen functionality as a synthetic handle to enable structural modification of organic molecules; therefore, methods that enable new ways to modulate oxygen content in molecular structure facilitate structural diversification. This thesis details the development of catalytic methods affecting oxygen incorporation in organic molecules. These methods make use of fundamentally distinct strategies in bond activation to oxygenate and deoxygenate molecular architectures. First, a metal-induced bond-weakening, dual-catalytic strategy was implemented to oxidize the benzylic positions of azaheterocycles. Coordination of a metal catalyst to the nitrogen lone pair is thought to induce weakening of proximal C-H bonds such that a radical catalyst with tunable chemoselectivity breaks this weakened bond. In the presence of an oxygen atmosphere, this leads to carbon-oxygen bond formation.; This two-catalyst strategy is applied to the oxygenation of the benzylic positions of pharmaceutically relevant heterocycles, and is found to exhibit site selectivity for the electron-poor azaheterocyclic positions, thereby addressing a longstanding challenge in catalytic C-H oxidation methods, which are typically selective for more electron-rich positions. Complementing metal-catalyzed oxygenation, a main group-catalyzed method for deoxygenation is detailed. Geometrically distorted phosphorus compounds are shown to be competent catalysts for reductive O-atom transfer, enabling the deoxygenation of nitroarenes and carbonyls, thereby realizing new reactivity for these functional groups. This main group catalysis renders nitro groups competent coupling partners for C-N bond formation, enabling cross coupling with an aryl boronic acid or anti-Markovnikov hydroamination with olefins.
While catalytic O-atom transfer with organophosphorus is well-studied, asymmetric variants remain limited.; Advances in asymmetric carbonyl functionalization with distorted redox-active phosphorus catalysts are presented, with the ultimate goal of gaining a greater understanding of the many factors that affect stereochemistry during these reactions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124046</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemically mediated separations and catalysis</title>
<link>https://hdl.handle.net/1721.1/124045</link>
<description>Electrochemically mediated separations and catalysis
Voskian, Sahag.
Mediated electrocatalysis was also investigated as a means of generating electrolyte-free hydrogen peroxide streams. This was achieved by employing redox mediator phase-transfer (RMPT) to produce and isolate aqueous H₂O₂ from the electrolysis of H₂O and O₂, where the quinone mediator was 'trapped' in an organic phase. Conversely, the mediator was trapped in a solid-state electrode, and exposed to alternating reduction and oxidation environments via slug flow. This concept has broad implications because the functionality on the mediator can also be purposefully designed to deliver redox equivalents to reaction/separation environments that would be incompatible with their regeneration. By decoupling the conditions of electrochemistry from the conditions ideal for substrate turnover and separation, the approach established here enables a vast expansion of the utility of electrochemical processes. Other redox-inactive mediators were also studied in this work. This portion investigated the use of task-specific ionic liquids (TSILs) as inert mediators for (i) solventless, non-volatile, amine-based carbon capture, coupled with electrochemical regeneration via the chelation of cupric ions from a corroding copper electrode, for the release of CO₂; (ii) hydrophobic chelators for liquid-liquid extraction of metal ions from aqueous streams, followed by electroplating or acid regeneration; (iii) hydrophobic separators in Li-Br batteries; and (iv) antimicrobial activity on epidermal wounds and antifouling activity in thin-film ionogels on catheters. To summarize, this work demonstrates the integration of electrochemical steps into conventional chemical processes via the mediation of redox-active and inert organic molecules, which resulted in significant improvements in performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124045</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of pyridines and azaindoles via Diels-Alder reactions of tosyl cyanide with vinyl- and heteroarylallenes</title>
<link>https://hdl.handle.net/1721.1/124044</link>
<description>Synthesis of pyridines and azaindoles via Diels-Alder reactions of tosyl cyanide with vinyl- and heteroarylallenes
Bartko, Samuel G.(Samuel Garrett)
Pyridines are an important class of heterocycle with widespread applications in a variety of areas. The efficient synthesis of highly substituted pyridines is an ongoing goal of modern synthetic chemistry. Part I of this thesis describes a new synthetic strategy for the synthesis of highly substituted pyridines and azaindoles involving Diels-Alder reactions of vinyl- and heteroarylallenes with tosyl cyanide. The synthetic elaboration of the various pyridine products via an ipso substitution strategy is also described. Part II of this thesis describes the optimization and scale-up of a reproducible synthesis of 1-iodopropyne, an important building block chemical with widespread applications in a variety of transformations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124044</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-assembly and dynamics of colloidal dispersions in steady and time-varying external fields</title>
<link>https://hdl.handle.net/1721.1/124043</link>
<description>Self-assembly and dynamics of colloidal dispersions in steady and time-varying external fields
Sherman, Zachary M.(Zachary Michael)
A diverse set of functional materials can be fabricated using dispersions of colloids and nanoparticles. If the dispersion is responsive to an external field, like dielectric and charged particles in an electric field or paramagnetic particles in a magnetic field, the field can be used to facilitate self-assembly and control particle transport. One promising feature of field-responsive materials is the ability to drive them out of equilibrium by varying the external field in time. Without the constraints of equilibrium thermodynamics, out-of-equilibrium dispersions display a rich array of self-assembled states with useful material and transport properties. To leverage their unique behaviors in real applications, a predictive, theoretical framework is needed to guide experimental design.; In this thesis, I carry out a systematic investigation of the self-assembly and dynamics of colloidal dispersions in time-varying external fields using computer simulations, equilibrium and nonequilibrium thermodynamics, and electro-/magnetokinetic theory. I first develop efficient computational models for simulating suspensions of polarizable colloids in external fields. The simulations are accurate enough to quantitatively reproduce experiments but fast enough to reach the large length and time scales relevant for self-assembly. I use this simulation method to construct the complete equilibrium phase diagram for polarizable particles in steady external fields and find that many-body mutual polarization has a remarkably strong influence on the nature of the self-assembled states. Correctly accounting for mutual polarization enables a thermodynamic theory to compute the phase diagram that agrees well with simulations and experiments.; Though the equilibrium structures are crystalline, in practice, dispersions typically arrest in kinetically trapped, disordered or defective metastable states due to strong interparticle forces.
This is a key difficulty preventing scalable fabrication of colloidal crystals. I show that cyclically toggling the external field on and off over time leads to growth of colloidal crystals at significantly faster rates and with many fewer defects than for assembly in a steady field. The toggling protocol stabilizes phases that are only metastable in steady fields, including complex, transmutable crystal structures. I use nonequilibrium thermodynamics to predict the out-of-equilibrium states in terms of the toggle parameters. I also investigate the transport properties of dispersions of paramagnetic particles in rotating magnetic fields. Like toggled fields, rotating fields also drive dispersions out of equilibrium, and their dynamics can be tuned with the rotation frequency.; I find that the rotating field greatly increases particle self-diffusivity compared to steady fields. The diffusivity attains a maximum value several times larger than the Stokes-Einstein diffusivity at intermediate rotation frequencies. I develop a simple phenomenological model for magnetophoresis through porous media in rotating fields that predicts enhanced mobility over steady fields, consistent with experiments. Lastly, I study the nonlinear dynamics of polarizable colloids in electrolytes and report a new mode of electrokinetic transport. Above a critical external field strength, an instability occurs and particles spontaneously rotate about an axis orthogonal to the field, a phenomenon called Quincke rotation. If the particle is also charged, its electrophoretic motion couples to Quincke rotation and propels the particle orthogonally to the driving field, an electrohydrodynamic analogue to the Magnus effect.; Typically, motion orthogonal to a field requires anisotropy in particle shape, dielectric properties, or boundaries.
Here, the electrohydrodynamic Magnus (EHM) effect occurs for bulk, isotropic spheres, with the Quincke rotation instability providing broken symmetry driving orthogonal motion. In alternating-current (AC) fields, electrophoresis is suppressed, but the Magnus velocity persists over many cycles. The Magnus motion is decoupled from the field and acts as a self-propulsion, so I propose the EHM effect in AC fields as a mechanism for generating a new type of active matter. The EHM "swimmers" behave as active Brownian particles, and their long-time dynamics are diffusive, with a field-dependent effective diffusivity that is orders of magnitude larger than the Stokes-Einstein diffusivity. I also develop a continuum electrokinetic theory to describe the electrohydrodynamic Magnus effect that is in good agreement with my simulations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 183-199).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124043</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Glycoprotein mimetic materials - synthetic methods and study of viral inhibition properties</title>
<link>https://hdl.handle.net/1721.1/124042</link>
<description>Glycoprotein mimetic materials - synthetic methods and study of viral inhibition properties
Seifried, Brian(Brian Michael)
The human body houses some of the most unique materials on the planet and is a promising source of bioinspiration. The high molecular weight and glycosylation of mucin-type glycopeptides make them challenging to produce or even mimic. This material production challenge is one of the key limiting factors for the study and design of biological materials such as mucus and cartilage. The first section of this thesis develops two approaches for creating high molecular weight, heavily modified protein brushes. The first uses a combination of cysteine and diazo coupling reactions, while the second uses a combination of global amino acid substitution and copper(I)-catalyzed alkyne-azide cycloaddition (CuAAC). The tyrosine enrichment required to prepare elastin-like polypeptide (ELP) proteins for diazo coupling reactions prevented protein expression at 96 pentapeptide repeats and above, but maintained yields comparable to other work at 48 repeats and below.; Homopropargylglycine (HPG) incorporation through global amino acid substitution in a 50 pentapeptide repeat ELP backbone was found to achieve an average methionine replacement of 93%. Tyrosine was demonstrated to be an attractive target for mass bioconjugation of proteins because of the high specificity and efficiency of the diazonium coupling reaction. The technique was performed under reducing and denaturing conditions, shown to couple chemical functionalities in sufficient quantities to affect the protein solubility, and demonstrated to be orthogonal to cysteine coupling. Sensitive chemical groups such as saccharides were conjugated to the protein through CuAAC.
The structure-reactivity relationship for functionalizing the ELP backbone with CuAAC was further studied using the HPG-substituted ELPs with a series of protected and deprotected mono-, di-, and tri-saccharides.; The second part of this thesis utilizes both the protein backbone-based mimics developed in the first part and a synthetic polymer-based mucin mimic to investigate the antiviral properties of sialic acid-functionalized glycoprotein mimics. The potential of using these adhesion-decoy-based polymers as countermeasures against viruses was explored, and aerosol formulations of these polymer countermeasures were developed as a delivery method to address respiratory system infections. These formulations balanced the mist suppression properties of the polymers with reasonable polymer loadings. This thesis developed synthetic toolboxes to create glycoprotein-mimetic materials and utilized the mucin mimics to create a deeper understanding of mucin's viral inhibition properties.; The tools developed to create protein backbone-based glycoprotein-mimetic materials allow for the creation of materials that enable both the study of structure-property relationships of complex biological molecules and the design of materials with tailorable bioactivity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124042</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous processing of multiphase reactions for pharmaceutical applications</title>
<link>https://hdl.handle.net/1721.1/124041</link>
<description>Continuous processing of multiphase reactions for pharmaceutical applications
Mo, Yiming, Ph. D., Massachusetts Institute of Technology.
With current needs to expedite new drug development, reduce cost and increase availability of existing drugs, and improve stability and safety of pharmaceutical manufacturing, continuous flow synthesis has emerged as an attractive alternative to conventional batch processes. Numerous technologies have emerged to facilitate the development of continuous flow chemistry, and the benefits of flow chemistry have been successfully demonstrated for many chemistries that would otherwise be challenging in conventional batch processes because of demanding process conditions, hazardous intermediates, and limitations in mass and heat transfer. However, in contrast to single-phase reactions, transformation of multiphase reactions from batch to continuous flow still remains cumbersome due to complicated multiphase hydrodynamics, mass transfer, interfacial reaction kinetics, and potential clogging issues of solids.; This thesis aims at developing enabling strategies and solutions to make challenging multiphase reaction systems amenable to continuous flow. For solid-liquid reactions with reactor clogging problems, Chapter 2 presents a new modular miniature continuous stirred-tank reactor (CSTR) cascade to handle solid-forming reactions in flow, which serves as a robust strategy to study solid-containing reactions at small scale. For mass-transfer-limited liquid-liquid systems, Chapter 3 describes a high-performance miniature CSTR unit with a magnetic coupling rotation mechanism, which decouples mixing and residence time to accommodate different reaction kinetics.; To alleviate tedious scale-up procedures and safety concerns of catalytic gas-liquid reactions, Chapter 4 provides a robust design of a thin-layer membrane reactor to safely and scalably perform catalytic heterogeneous hydrogenation and homogeneous aerobic oxidation, providing a superior alternative to conventional packed-bed reactors.
For electrosynthesis involving electrode surface as a heterogeneous reaction surface, Chapter 5 demonstrates a cost-effective and scalable electrochemical flow cell engineered for the N-hydroxyphthalimide (NHPI) mediated electrochemical aerobic oxidation of benzylic C-H bonds, and Chapter 6 utilizes the microfluidic electrochemical flow cell to accurately control the lifetime of persistent and transient radicals in order to selectively generate cross-coupling products. The developed modular and plug-and-play reactors in this thesis offer additional tools to enable facile implementation of multiphase chemistries in flow.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 134-147).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124041</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perception-driven optimal motion planning under resource constraints</title>
<link>https://hdl.handle.net/1721.1/124033</link>
<description>Perception-driven optimal motion planning under resource constraints
Sayre-McCord, Thomas(Roswell Thomas)
Over the past few years there has been a new wave of interest in fully autonomous robots operating in the real world, with applications from autonomous driving to search and rescue. These robots are expected to operate at high speeds in unknown, unstructured environments using only onboard sensing and computation, presenting significant challenges for high-performance autonomous navigation. To enable research in these challenging scenarios, the first part of this thesis focuses on the development of a custom high-performance research UAV capable of high-speed autonomous flight using only vision and inertial sensors. This research platform was used to develop state-of-the-art onboard visual-inertial state estimation at high speeds in challenging scenarios such as flying through window gaps. While this platform is capable of high-performance state estimation and control, its capabilities in unknown environments are severely limited by the computational costs of running traditional vision-based mapping and motion planning algorithms on an embedded platform. Motivated by these challenges, the second part of this thesis presents an algorithmic approach to the problem of motion planning in an unknown environment when the computational costs of mapping all available sensor data are prohibitively high. The algorithm is built around a tree of dynamically feasible and free-space optimal trajectories to the goal state in configuration space. As the algorithm progresses it iteratively switches between processing new sensor data and locally updating the search tree. We show that the algorithm produces globally optimal motion plans, matching the optimal solution for the case with the full (unprocessed) sensor data, while only processing a subset of the data. The mapping and motion planning algorithm is demonstrated on a number of test systems, with a particular focus on a six-dimensional thrust-limited model of a quadrotor.
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 105-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124033</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An efficient multi-layer boundary element method for direct computation of sound propagation in shallow water environments</title>
<link>https://hdl.handle.net/1721.1/124032</link>
<description>An efficient multi-layer boundary element method for direct computation of sound propagation in shallow water environments
Li, Chengxi, Ph. D., Massachusetts Institute of Technology.
The objective of this thesis is to develop and apply efficient three-dimensional (3D) direct simulation capabilities for underwater sound field predictions in shallow water environments. Despite the large number of theoretical and experimental studies, direct numerical simulation of the shallow water acoustic field is still challenging due to the environmental complexities and large computational cost involved. In this thesis, we develop a highly efficient O(NlogN) multi-layer boundary-element method, PFFT-BEM, for direct numerical simulation of acoustic propagation and scattering in shallow water environments. This method utilizes a pre-corrected Fast Fourier Transform (PFFT) approach to accelerate the boundary-element method and reduce the computational effort from O(N²~³) to O(NlogN), where N is the total number of boundary unknowns.; PFFT-BEM is capable of accounting for complex topography, inhomogeneity of water properties, and dynamic environments associated with realistic coastal conditions. With its O(NlogN) efficiency and linear scalability on massively parallel high-performance computing platforms, we first conduct multi-layer 3D simulations benchmarking low-mid frequency acoustics over kilometer ranges against available theoretical results and field experiments. We then apply large-scale PFFT-BEM simulations to investigate two underwater acoustics problems of scientific interest and practical importance: (1) 3D sound scattering from a rough ocean surface; (2) 3D sound propagation and scattering around underwater seamount(s). For the 3D rough surface scattering problem, several approximation models have been proposed, such as perturbation theory and the Kirchhoff approximation.; These approximation models provide fast predictions of the statistics of the acoustic scattering necessary for predicting the scattering effects and reverberations from rough surfaces. The validity of these models needs to be assessed by direct numerical methods.
However, most existing direct numerical studies of the validity regions of the approximation models are limited to the 2D rough surface scattering problem. We apply direct PFFT-BEM computations to study the 3D rough surface scattering problem with a Gaussian roughness spectrum. We examine the accuracy of the approximation model results through comparisons with direct numerical simulation results from 3D PFFT-BEM with a Monte Carlo technique. We identify and quantify the 3D validity regions of the approximation models as a function of the surface roughness and correlation length. We characterize and quantify the importance of 3D scattering effects on the validity of the different approximation models. Moreover, we find that both perturbation theory and the Kirchhoff approximation become inaccurate for 3D scattering problems with low grazing angles. For the problem of 3D sound propagation/scattering around underwater seamount(s), we investigate the effects of seamount geometry and sound source frequency on the sound scattering by the seamount using 3D PFFT-BEM simulations. In particular, we investigate the backscattering, blocking, and 3D scattering effects due to the presence of the seamount. We find that the acoustic scattering effects of the seamount depend strongly on the source frequency, and that small variations in seamount geometry (such as seamount height and cross-section shape) can induce significant changes in the acoustic scattering field.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-152).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124032</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The instability of axial-symmetric gravity-capillary waves generated by a vertically-oscillating sphere</title>
<link>https://hdl.handle.net/1721.1/124031</link>
<description>The instability of axial-symmetric gravity-capillary waves generated by a vertically-oscillating sphere
Shen, Meng, Ph. D., Massachusetts Institute of Technology.
When a floating sphere is forced to oscillate vertically, axial-symmetric outgoing ring waves are generally expected to be produced. Laboratory experiments, however, show that when the oscillation amplitude of the sphere exceeds a threshold value, the axial-symmetric waves abruptly transform into asymmetric waves. This problem is related to interfacial instability phenomena widely seen in laboratory model tests, such as sloshing and ship-model wake measurements. Despite its fundamental importance, the mechanism that governs the occurrence of this phenomenon is still unknown. The objective of this thesis is to understand the mechanism of this instability phenomenon using theoretical analysis and direct numerical simulations. We first show theoretically that, for an arbitrary three-dimensional body floating in an unbounded free surface, there exists a set of homogeneous solutions at any frequency in the gravity-capillary wave context. The homogeneous solution depends solely on the mean free-surface slope at the waterline of the body and physically represents a progressive radial cross-wave. Unlike a standing cross-wave, a progressive cross-wave loses energy during propagation by overcoming the work done by surface tension at the waterline and through wave radiation to the far field. We then theoretically investigate the problem of subharmonic resonant interaction of a progressive ring wave with a progressive cross-wave. We derive the nonlinear spatial-temporal evolution equation governing the motion of the cross-wave by use of the average Lagrangian method. In addition to energy-input terms from the interaction with the forced ring wave, the evolution equation contains a damping term associated with energy loss in cross-wave propagation. We show that the presence of the damping term leads to a non-trivial threshold value of oscillation amplitude beyond which the cross-wave becomes unstable and grows with time by taking energy from the ring wave. 
The theoretical prediction of the characteristic features of the generated radial cross-waves agrees well with experimental observations, but the predicted threshold value of oscillation amplitude is about 50% smaller. We finally investigate the instability of finite-amplitude progressive ring waves by direct numerical simulation. The analysis employs the transition matrix (TM) approach and uses a quadratic boundary-element method (QBEM) for computation of the fully nonlinear wave dynamics. When the nonlinear ring wave effects and viscous effects are accounted for, the predicted threshold value of sphere oscillation amplitude matches the experimental data excellently. In the case of relatively small-amplitude oscillations, the growth rates and shapes of the unstable modes from the TM-QBEM computation agree well with the weakly nonlinear theoretical analysis we developed. This further confirms that the fundamental mechanism of the instability is associated with the triad resonance of the progressive ring wave with its subharmonic progressive radial cross-waves. The dependence of the threshold value and growth rate of the unstable modes on the physical parameters (such as the oscillation frequency and amplitude of the body and the initial phase of the disturbance) is also investigated and quantified.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-135).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124031</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pursuing the common good : overcoming barriers to collective action through transboundary water negotiation along the Blue Nile River</title>
<link>https://hdl.handle.net/1721.1/123913</link>
<description>Pursuing the common good : overcoming barriers to collective action through transboundary water negotiation along the Blue Nile River
Zaerpoor, Yasmin (Bijani Zaerpoor)
We are headed towards a global water crisis. While technological advancements may help narrow the gap between water supply and demand, achieving global water security will also require establishing self-enforcing agreements negotiated among countries that share transboundary rivers. At its core, transboundary water governance is a type of collective action problem (Olson 1965), in which sovereign actors must cooperate to achieve a collective interest. In this research, I attempt to delineate common procedural and context-specific barriers to collective action within transboundary water negotiations in the Nile River Basin. I compare efforts by three countries -- Egypt, Ethiopia, and Sudan -- to pursue collective action in two separate, but related, face-to-face negotiations related to water use: the basin-wide negotiations on the Nile Basin Cooperative Framework Agreement (1997-2010) and the ongoing project-specific negotiations on the Grand Ethiopian Renaissance Dam, which started in 2011. Between 2015 and 2018, I interviewed over 50 Egyptian, Ethiopian, and Sudanese negotiators; transboundary water scholars and academics; and journalists, and reviewed primary and secondary documents to identify the perceived barriers within these negotiation processes. The conventional approach to treaty-making is through negotiations among state actors. I argue that while many barriers related to the number of actors and the degree of heterogeneity among them (as defined by differences in their capacity, access to information, preferences, beliefs, and identities) can be addressed through procedural interventions, non-procedural interventions by both state and non-state actors are necessary to reduce these barriers at different scales (e.g., between negotiators or between negotiators and the public) in the short, medium, and long term. 
Furthermore, I argue that multi-track water diplomacy is increasingly necessary in the Nile Basin due to several context-specific factors: the 'securitization' of water, frequent political transitions, and a lack of public trust. Based on this research, I offer a list of procedural and non-procedural interventions that can be employed by state and non-state actors to reduce different types of barriers. Although reducing these barriers will not guarantee collective action, I argue that these interventions can create a more enabling environment in which collective action can occur.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 179-186).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123913</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Red lines for a green china : adaptation, negotiation and experimentation in China's efforts to transform sustainably</title>
<link>https://hdl.handle.net/1721.1/123912</link>
<description>Red lines for a green china : adaptation, negotiation and experimentation in China's efforts to transform sustainably
Gordon, Jessica A. (Jessica Alexandra)
After years of rapid economic growth and unbridled urbanization, China is now attempting to transition to an "ecological civilization," the nation's term for sustainable development. A foundational element of China's approach is the development and enforcement of "ecological conservation red lines" (ERLs). As the most comprehensive conservation plan in China's history, covering up to 35% of its territory, ERLs define strictly controlled boundaries around ecosystem services and ecologically sensitive areas in order to safeguard natural resources and human health and to address climate risks. Globally, this is the largest effort ever undertaken to plan land uses across geographic scales based on the optimization of ecosystem services. Based on extensive fieldwork and the application of multiple qualitative research and analysis methods, this dissertation is the first comprehensive study of the politics of the ERL. It examines the development of the ERL at the national level and within two localities to address two puzzles: (1) How and why did China create the world's most comprehensive ecosystem-based land planning strategy? (2) How and why would China, which applies top-down command-and-control environmental regulation and is increasingly politically centralized, be able to support local variation? The ERL is an integrated science-and-practice policy model that produced a paradigmatic idea that changes the conceptualization of land use planning in China. The ERL process combines a historically adaptive "guerilla policy style" with more recent command-and-control environmental regulation in which targets are set by the central government for local implementation. 
This apparent contradiction in the ERL's design, as dynamic yet with binding targets, has produced an ERL process that was more adaptive, negotiated, and flexible than expected. The main determinants of the shape and size of the ERL in each locality were political economy, existing levels of environmental quality, ongoing planning processes, and long-standing institutional dynamics. The ERL outcome has been shaped primarily by interactions among existing institutions in the face of political, economic, and moral incentives as understood by local officials. This analysis suggests directions for future studies of institutional change, especially with regard to sustainable transitions.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 173-186).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123912</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enduring or escaping legacies? : Politics, inherited institutions, and rebellion in the struggle over water futures in Chile</title>
<link>https://hdl.handle.net/1721.1/123911</link>
<description>Enduring or escaping legacies? : Politics, inherited institutions, and rebellion in the struggle over water futures in Chile
Gallagher, Daniel, Ph. D., Massachusetts Institute of Technology.
Following a wave of insurgent political action in 2011, the hegemony that governs life in Chile appears to be increasingly threatened. One area of politicized struggle has coalesced around water law. On one side of the struggle, water utilities, agro-export firms, and entrenched political actors seek to retain the water laws inherited from the nation's 1973-1990 dictatorship. On the other, socio-political movements and recently elected political actors are challenging what they see as the political content of those laws, which prioritize private economic gains. Why does politicization take the form it does in Chile? To what extent, if at all, is the politicization of water law reconfiguring the institutions of urban governance? Responding to scholarship on "post-political" urban governance, I draw on ethnographic fieldwork, process tracing, and historical analysis to present a narrative of the multi-scalar struggle over water laws that explains the effects of the new wave of political action. First, I argue that a range of factors combined to enable a politicization of water laws. Those factors include (i) the failure of a private water firm to depoliticize disruptions in the water supply to the nation's capital, (ii) the hyper-inequality in water access across the national territory produced by legally sanctioned processes of accumulation by dispossession, and (iii) a loss of fear of political disagreement in a new generation of politically active youth, which translated to the formal political arena. Second, I argue that politicization has widened the parameters of political debate and the collective imagination of different political trajectories. Issues naturalized during past decades are now rendered highly contentious, and political action conducted "back stage" is increasingly exposed "front stage" through protest, congressional investigation, and an invigorated independent media. 
Third, I argue that despite leftist politicians' pursuit of ambitious congressional reforms to national water laws, institutional reform is foreclosed due to material and discursive forces acting across geographical scales. I posit that Chile's institutional inertia in water law can be explained by an incomplete generational shift following the fall of dictatorship, wider political instability in the Latin American region, and Chile's deep articulation with global economic forces. Keywords: law; neoliberalism; politicization; post-politics; urban governance; water.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 228-251).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123911</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fighting for recognition : Asian American advocates and their strategic uses of identity</title>
<link>https://hdl.handle.net/1721.1/123909</link>
<description>Fighting for recognition : Asian American advocates and their strategic uses of identity
Kwon, Haegi.
Nonprofit advocacy organizations play a key role in advancing the rights of disadvantaged individuals and groups. Further, these organizations strategically frame issues and their image in ways that facilitate their ability to mobilize individuals, gain credibility, and sway public opinion. While scholars recognize that immigrant-serving nonprofit organizations have the potential to serve, advocate for, and/or mobilize some of the most disadvantaged communities in the US, there is little focus on organizational identity (very simply, the answer to "who are we as an organization?") and how organizational identities are deployed as a political strategy, especially in a political environment where politicians blame immigrants for damaging the livelihoods of Americans and in which racist and xenophobic rhetoric is increasingly normalized. I use the example of Asian Americans, a group with tremendous intragroup socioeconomic, cultural, and political diversity, and the nonprofits serving this community, to examine how their identity deployment practices, in conjunction with other factors internal and external to these organizations, relate to social service and advocacy outcomes for immigrant constituents in New York City. Although these organizations differ in multiple ways (e.g., varying levels of attachment to Asian American identity, history, location in the city, constituency, size, organizational capacity, and programmatic and advocacy expertise), they also seek to mitigate organizational uncertainties in the midst of demographic, political, and economic change. I find three cross-cutting themes that account for the bulk of my findings: 1) flexible identity deployment and its "mixed" programmatic and advocacy outcomes, 2) boundary maintenance within organizations to maintain organizational legitimacy, and 3) claims of disadvantage relative to other groups. 
Ultimately, these findings contribute to understandings of the current state of Asian American politics and how these dynamics impact panethnic and multiracial forms of collective action.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 88-94).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123909</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-density parenting : design, policy, and family-oriented urbanism</title>
<link>https://hdl.handle.net/1721.1/123908</link>
<description>High-density parenting : design, policy, and family-oriented urbanism
Thomas, Louis L. (Louis Lawton)
Much recent academic and popular writing declares a North American urban renaissance and portrays the new urbanites as childless. Luxury buildings full of studio and one-bedroom units appear downtown, targeting young professionals and empty nesters. Missing from this discussion are middle-class parents choosing the city. Vancouver, BC stands as a critical case. In 1989 the city adopted explicit policies that target parents in dense central areas. These policies provide building and neighborhood amenities for families with children. The purpose of this single-case study dissertation is to understand both how these policies came to be and the experiences of parents who choose these neighborhoods. Two primary questions emerge: 1) How and why did Vancouver come to promote and build high-density family-oriented housing and neighborhoods in and around downtown? And 2) How do parents perceive these areas today as serving their childrearing needs? In other words, what are Vancouver's most important lessons for North American family-oriented urbanism at high densities? I answer these questions through environmental and participant observation, archive and document analysis, and semi-structured interviews of Vancouver parents, policymakers, and other urban actors. This research contributes to the field in two primary ways. First, it reveals a high-density childrearing ideal drawn from the experiences of middle-class urban parents. Parents identify socioeconomic diversity and densely mixed-use neighborhoods as beneficial to childrearing. Building upon the work of Jacobs, Lynch, and others, I call this beneficial quality the Educative Potential of the Dense, Diverse, Transparent City. This can support progressive urban policy as it reframes infill density -- including affordable housing -- as advantageous to middle-class families. Second, this research identifies two constructs of urban parents: committed and 'won over'. 
Committed urbanist parents define their identity as urban and anti-suburban. In contrast, 'won over' urbanist parents came downtown as childless young professionals and assumed they would move out as they started families. Yet they stayed, in large part due to the amenities provided by Vancouver's policies. This suggests policy can influence parents' locational choices. Planners must consider the needs of diverse parents to avoid a class- and age-segregated city.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 373-389).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123908</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A methodology to quantify risk of failure for dynamic robots</title>
<link>https://hdl.handle.net/1721.1/123780</link>
<description>A methodology to quantify risk of failure for dynamic robots
Wang, Albert Duan.
Humans possess an innate sense of danger that guides the execution of extraordinary dynamic maneuvers. They can also use this sense to generate creative recovery strategies to eventually come to a safe stop. This capability is not yet available to robots, fundamentally because there is no clear metric that represents the quantified risk of failure. Possessing such a metric would allow robots to explore their dynamic capability up to their physical limitations. This thesis attempts to address this problem by introducing a methodology to quantify the risk of failure for dynamic robots. It employs a sampling-based network constructed using the principles of viability theory, which focuses on the avoidance of failure instead of the regulation to specific movements. Simplifications that specifically target complex hybrid systems are explored to extend the use of viability theory for practical application to legged robots. The results of this methodology are the Viable State Network, a network showing the non-failing and failing states, and the Risk Map, which quantifies the risk of failure. These concepts are demonstrated for a planar hopping robot model.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 127-133).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123780</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-contact ultrasound</title>
<link>https://hdl.handle.net/1721.1/123779</link>
<description>Non-contact ultrasound
Zhang, Xiang, Ph. D., Massachusetts Institute of Technology.
This thesis explores the design, development, and evaluation of two novel non-contact ultrasound imaging methods: immersion ultrasound and optical ultrasound. Immersion ultrasound utilizes traditional piezoelectric elements in a tomographic framework to develop new algorithms and acquisition methods for quantification of tissue geometry and properties in human proximal limbs. Bone is uniquely challenging for ultrasound due to the high impedance mismatch between bone and soft tissue in the imaging domain. New imaging algorithms are necessary for both geometric and quantitative reconstruction of subjects with bone. Multiple immersion systems were designed and constructed using the framework presented in this thesis. Mechanical systems include a 4-degree-of-freedom single-element system and a fully flexible 36-degree-of-freedom robotic system abbreviated MEDUSA (Mechanically Discrete Ultrasound Scanning Apparatus). An adaptive beamforming algorithm addressing the specularity of bone in pulse-echo imaging and a Full Waveform Inversion algorithm for quantitative imaging with bone are discussed, with imaging results on tissue-mimicking phantoms, excised animal tissue, and human subjects. Furthermore, a laser ultrasound (LUS) system was developed for fully non-contact ultrasound imaging. LUS completely replaces conventional piezoelectric elements for the generation and detection of ultrasound in biological tissue. LUS generates ultrasonic waves at the tissue surface via rapid transduction of optical energy to acoustic energy through thermomechanical coupling, and detects the returning ultrasonic waves at the tissue surface using laser interferometry. In combination, LUS can utilize any tissue surface as a viable acoustic transmitter or detector. Analysis of light-tissue interactions presented in this thesis identifies the critical process parameters for soft-tissue imaging at eye- and skin-safe optical exposure levels. 
LUS system design methods and imaging results on tissue-mimicking phantoms, excised tissue, and human subjects are presented. The human LUS results mark the first instance of fully non-contact optical ultrasound imaging of in-vivo human subjects. All systems presented in this thesis were calibrated to ensure safe optical and acoustic exposure levels for human subjects. Approval was obtained from the MIT Committee on the Use of Humans as Experimental Subjects (COUHES) prior to any human experimentation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 145-157).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123779</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiative transport in transparent aerogels for solar thermal energy applications</title>
<link>https://hdl.handle.net/1721.1/123778</link>
<description>Radiative transport in transparent aerogels for solar thermal energy applications
Zhao, Lin, Ph. D., Massachusetts Institute of Technology.
Solar-thermal energy conversion systems hold great promise to meet our diverse energy demand from a renewable source. Converting sunlight into thermal energy requires solar radiation to be absorbed and transformed into heat effectively while minimizing system thermal loss to the ambient environment. Traditional solar-thermal systems utilize high optical concentration and vacuum enclosures to reduce the impact of heat loss. However, the cost of sophisticated optical and thermal components has limited their market adoption to date. In this thesis, we explored the development of transparent aerogels for enhancing solar-thermal energy conversion. We established and validated a modeling framework to understand the fundamental light transport within an aerogel sample and to yield helpful guidance for material development. We performed synthesis recipe optimization through a systematic parametric study and discovered a facile procedure to fabricate low-scattering aerogel samples with &gt;95% solar transmittance. We then incorporated the developed aerogel in solar-thermal collectors and tested their performance. Under unconcentrated sunlight, stagnation temperatures beyond 265 °C can be reached and saturated steam above 120 °C can be generated without vacuum enclosures or selective surfaces. The improvements enabled by the low-scattering aerogels open a new pathway of solar energy utilization for domestic, industrial, and power generation applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 108-119).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123778</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale thermal and thermoelectric energy transport in crystalline and disordered materials</title>
<link>https://hdl.handle.net/1721.1/123777</link>
<description>Nanoscale thermal and thermoelectric energy transport in crystalline and disordered materials
Zhou, Jiawei.
Energy transport provides the fundamental basis for the operation of devices from transistors to solar cells. Despite past theories that successfully illustrate the principles behind energy transport based on solid-state physics, the microscopic details of energy transport are not always clear due to the lack of tools to quantify the contributions from different degrees of freedom. Recent progress in first-principles computation and developments in optical characterization have offered us new ways to understand energy transport at the nanoscale in a quantitative way. In this thesis, by leveraging these techniques, we aim to provide a detailed understanding of thermal and thermoelectric energy transport in crystalline and disordered materials, especially how the energy transport depends on atomistic-level details such as chemical bonding. Specifically, we will discuss three examples. 1) Electron transport in semiconductors: how electrons propagate as they interact with the lattice and impurities. 2) Interaction between charge and heat: how free carriers affect heat dissipation in semiconductors. 3) Heat conduction in polymers: how heat transfer in an amorphous system depends on its molecular structure. In the case of electron transport, we developed and applied first-principles simulations to show that a large electron mobility can arise from symmetry-protected non-bonding orbitals. Such orbitals result in weak electron-lattice coupling, which explains the unusually large power factors in half-Heusler materials - a good thermoelectric material system. By devising an optical experiment to probe the ultrafast thermal decay, we quantified the effect of electron-phonon interaction on thermal transport. 
Our results show that the thermal conductivity can be significantly affected by free carriers. Lastly, we built a theoretical model to understand heat conduction in amorphous polymers, and used this knowledge to design materials that are heat-conducting yet soft. These understandings will potentially facilitate the discovery of new material systems with beneficial charge and heat transport characteristics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-142).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123777</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dense, sonar-based reconstruction of underwater scenes</title>
<link>https://hdl.handle.net/1721.1/123776</link>
<description>Dense, sonar-based reconstruction of underwater scenes
Vaz Teixeira, Pedro Nuno.
Three-dimensional maps of underwater scenes are critical to, or the desired end product of, many applications spanning a spectrum of spatial scales. Examples range from inspection of subsea infrastructure to hydrographic surveys of coastlines. Depending on the end use, maps will have different accuracy requirements. The accuracy of a mapping platform depends mainly on the individual accuracies of (i) its pose estimate in some global frame, (ii) the estimates of the offsets between the mapping sensors and the platform, and (iii) the mapping sensor measurements. Typically, surface-based surveying platforms employ highly accurate positioning sensors, e.g., a differential global navigation satellite system (GNSS) receiver combined with an accurate attitude and heading reference system, to instrument the pose of a mapping sensor such as a multibeam sonar. For underwater platforms, the rapid attenuation of electromagnetic signals in water precludes the use of GNSS receivers at any meaningful depth. Acoustic positioning systems, the underwater analogues of GNSS, are limited to small survey areas free of obstacles that may cause undesirable acoustic effects such as multi-path propagation and reverberation. Save for a few exceptions, the accuracy and update rate of these systems are significantly lower than those of differential GNSS. This performance reduction shifts the accuracy burden to inertial navigation systems (INS), often aided by Doppler velocity logs. Still, the pose estimates of an aided INS incur unbounded drift over time, often necessitating techniques such as simultaneous localization and mapping (SLAM), which leverage local features to bound the uncertainty in the position estimate. The contributions presented in this dissertation aim at improving the accuracy of maps of underwater scenes produced from multibeam sonar data. 
First, we propose robust methods to process and segment sonar data to obtain accurate range measurements in the presence of noise, sensor artifacts, and outliers. Second, we propose a volumetric, submap-based SLAM technique that can successfully leverage map information to correct for drift in the mapping platform's pose estimate. Third, and informed by the previous two contributions, we propose a dense approach to the sonar-based reconstruction problem, in which the pose estimation, sonar segmentation and model optimization problems are tackled simultaneously under the unified framework of factor graphs. This stands in contrast with the traditional approach where the sensor processing and segmentation, pose estimation, and model reconstruction problems are solved independently.; Finally, we provide experimental results obtained over several deployments of a commercial inspection platform that validate the proposed techniques.
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123776</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning under uncertainty in resource-constrained systems</title>
<link>https://hdl.handle.net/1721.1/123775</link>
<description>Planning under uncertainty in resource-constrained systems
Strawser, Daniel DeWitt.
As autonomous systems become integrated into the real world, planning under uncertainty is a critical task. The real world is incredibly complex, and systems must reason about factors such as uncertainty in their movements, environments, and human behavior. In the face of this uncertainty, agents must compute control trajectories and policies that maximize their expected performance while respecting bounds on the probability of mission failure. The task is difficult because systems must reason about the large number of scenarios that may unfold. Compounding the difficulty, many systems must reason about uncertainty while being resource-constrained. Autonomous cars and robots are time-limited because they must react quickly to their environment. Applications such as smart grids are computationally limited because they require low-cost hardware in order to keep energy cheap.; Because of its importance to real-world applications, a large amount of work has been devoted to planning under uncertainty. However, few approaches focus on the resource-limited case. In time-constrained applications such as motion planning under uncertainty, methods typically rely on simplistic cost functions and models of the environment, dynamics, and stochasticity. In computation-constrained applications such as resource allocation for smart grids, approaches require computationally intensive solvers that are unsuitable when system cost must be kept to a minimum. In both cases, the prior art is inadequate for real-world demands. In this thesis, I develop a series of algorithms that enable resource-constrained systems to better plan under uncertainty. 
First, my work enables time-constrained systems to better approximate expected cost, search in non-convex regions, and reason about more complicated environmental geometries and agent dynamics.; Additionally, I propose algorithms that enable resource allocation under uncertainty on ultra low-cost hardware by distributing computation and approximating state uncertainty through a discretization. In the time-constrained case, I model the problem of motion planning under uncertainty as a hybrid search that consists of an upper-level region planner and a lower-level motion planner. First, I present a method for quickly computing expected path cost that can vary the precision with which the stochastic model is evaluated as required by the search. Next, I present the Fast Obstacle eXploration (FOX) algorithm that quickly generates a constraint graph for the region planner from complex obstacle geometries. I present a method for integrating CDF-based chance constraints with FOX as well as a method to model the collision probability of agents with non-trivial geometry.; The latter algorithm transforms a complex problem in numerical integration into a simpler problem in computational geometry; importantly, the algorithm allows for massive parallelization on GPUs. Finally, this part concludes with the Shooting Method Monte Carlo (SMMC) algorithm. This algorithm models the chance constraint of dynamical systems with non-Gaussian state uncertainty. SMMC combines a shooting trajectory optimization with Monte Carlo simulation to approximate the collision probability for nonlinear dynamical systems. In the computation-constrained case, I develop a set of resource-allocation algorithms that are able to reason about uncertainty while being implementable on ultra low-cost hardware. 
A market-for-reliability algorithm is presented that solves resource allocation under uncertainty by allowing a distributed set of agents to bid for reliability.; A centralized planner is also presented in which the resource allocation problem is modeled as a Markov Decision Process and a stochastic local search is used to compute good policies. The approaches for resource-constrained planning under uncertainty are benchmarked against a variety of test cases. In the motion planning domain, test cases are presented using linear dynamical systems, a Dubins car model, and a Slocum glider AUV. The method for bounding expected cost is shown to perform well against a sampling-based approach. The approach to dealing with complex obstacle geometry is shown to reach better solutions more quickly than a sampling-based RRT approach and a disjunctive linear program. The sampling-based collocation method is shown to better approximate trajectory risk in simulation than a CDF-based model. Likewise, Shooting Method Monte Carlo is shown to better approximate trajectory risk than a sampling-based collocation method.; In both sampling-based cases, GPU parallelization is shown to help scale computation of the chance constraint to large numbers of samples versus CPU-based computation. In the resource allocation case, the distributed and centralized contingency planners are benchmarked against mixed integer linear programs (MILP) and are shown to achieve comparable performance with a much smaller computational footprint.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 267-276).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123775</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A portable, ultra-low cost NMR device</title>
<link>https://hdl.handle.net/1721.1/123774</link>
<description>A portable, ultra-low cost NMR device
Raynal, Ashley Brown.
Nuclear magnetic resonance (NMR) provides powerful measurements that remain inaccessible in many applications due to the instruments' size and expense. Recent research efforts have focused on creating handheld devices with lower resolution but greatly reduced cost. Persistent challenges include implementing a miniature magnet with a sufficiently homogeneous magnetic field, and isolating the weak NMR signal from the powerful excitation pulses. In this thesis, we demonstrate a magnet design and experimental technique to address these needs. A significant cost for a small NMR magnet is associated with the extensive labor for assembly and correction of field variations. To alleviate this difficulty, we optimized and constructed a self-assembling NMR magnet. The palm-sized assembly, called a shim-a-ring, had a mass of 418 g. The magnetic field strength was 0.48 T, large enough to perform spectroscopy.; To ease the process of correcting the field, we propose an active shim system, which would eliminate much of the labor required by other strategies. Electromagnetic shims were optimized to correct 14 lower-order spherical harmonics with minimal power consumption. When comparing the efficiency of the shims to the correction needed in the shim-a-ring magnet, the required current was found to be too large for steady-state operation. In short experiments, however, the strategy was shown to be feasible, with heat dissipation causing only a negligible temperature change. Stochastic excitation provides a low-power alternative to standard NMR techniques. With the pulse amplitudes reduced by orders of magnitude, isolating the signal from the excitation is much less challenging. Experiments performed in the shim-a-ring magnet demonstrated this benefit.; Although the magnetic field variations were too large for spectroscopy, the initial amplitude of the impulse response was proportional to the number of resonant nuclei in the sample, called the spin density. 
The ratio of water and heavy water contained in a sample was characterized using this technique.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 149-154).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123774</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoporous flexographic printing : fundamentals, applications and scale-up</title>
<link>https://hdl.handle.net/1721.1/123773</link>
<description>Nanoporous flexographic printing : fundamentals, applications and scale-up
Mariappan, Dhanushkodi D., 1979-
Printing of ultrathin layers of polymeric and colloidal inks is critical for the manufacturing of electronics on non-conventional substrates such as paper and polymer films, for applications such as smart packaging, asset tracking, and photovoltaics. Among conventional printing processes, flexography is a scalable, high-speed direct printing method, yet its resolution is limited by squeeze-out of ink between the non-porous stamp and substrate and by dewetting of the deposited ink film. Broadly, there remains an important need for improved printing technologies for ultrathin (~0.1 [mu]m or smaller) and fine features (~ [mu]m resolution) to advance printed electronics technology. In flexographic printing, it was recently demonstrated that significantly finer printed feature dimensions can be achieved when nanoporous stamps are used instead of traditional non-porous polymer stamps.; This thesis studies the fundamentals, applications and scalability of flexography using nanoporous stamps. First, the dynamics of liquid transfer between nanoporous stamps and solid substrates are studied. The stamps comprise forests of polymer-coated carbon nanotubes (CNTs), and the surface mechanics and wettability of the stamp are engineered to imbibe colloidal inks and transfer patterns by conformal contact with the substrates. By high-speed imaging during printing we observe the dynamics of liquid spreading, which is mediated by progressing contact between the nanostructured stamp surface and the substrate, and by imbibition within the stamp-substrate gap. Via modeling of the liquid dynamics, and comparison with data, we elucidate the scale- and rate-limiting aspects of the process. Specifically, we find that the printed ink volume and resulting layer thickness are independent of contact pressure, and thickness decreases with retraction speed.; Second, the design and evaluation of a benchtop plate-to-roll (P2R) machine for rapid prototyping of printed patterns is presented. 
The machine accommodates CNT stamps up to 20 x 20 mm size, and controls the contact force, contact speed, and the alignment between the stamp and the substrate. Using the P2R machine, we show printing of honeycomb patterns with 3 [mu]m linewidth at &gt;0.1 m/s, demonstrating the scalability of the process for high-throughput manufacturing. Third, simple devices are prototyped to leverage the capabilities of nanoporous flexography. Ultra-thin (&lt;500 nm), short-channel (~10 [mu]m) transistors are fabricated using flexoprinted silver electrodes (followed by sintering), with a uniform thin film of single-walled carbon nanotubes (SWCNTs) as the active layer. The measured on-off ratios (~ 10³ - 10⁵) and mobilities (25-95 cm²/Vs) are comparable with those of other printed SWCNT network transistors.; Last, two-color pixels of colloidal quantum dots with good optical absorbance are printed for potential use in filters for low-cost imaging spectrometers. Together, the findings of this thesis suggest that flexography using nanoporous stamps is a promising approach to high-speed printing of colloidal nanoparticle inks, for manufacturing of electronic and sensing devices that require high-resolution printed features.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 127-134).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123773</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tissue-like hydrogels by design</title>
<link>https://hdl.handle.net/1721.1/123772</link>
<description>Tissue-like hydrogels by design
Lin, Shaoting, Ph. D., Massachusetts Institute of Technology.
Human bodies are mostly made of soft, wet yet robust biological hydrogels such as cartilage, ligaments, and muscles. These biological hydrogels commonly possess mechanical properties such as high toughness, resilience and fatigue resistance that guarantee the body's reliable functions and activities. While hydrogels with tissue-like mechanical properties are highly desirable in applications as diverse as tissue engineering, drug delivery and soft machines, these properties are rarely achieved in synthetic hydrogels. The first part of this dissertation aims to design synthetic hydrogels that possess tissue-like mechanical properties, including high toughness, resilience and fatigue threshold, through combined theory, modeling and experiments. First, we develop a coupled cohesive-zone and Mullins effect model to predict the fracture energies of tough hydrogels. Based on the model, we further provide a toughening diagram that can guide the design of new tough hydrogels.; Second, we propose that delaying mechanical dissipation in tough hydrogels can make them resilient under moderate deformation while retaining high toughness, achieving both properties in a single hydrogel. Third, we study fatigue fracture in hydrogels and show that the introduction of nanocrystalline domains and aligned nanofibrils can substantially increase hydrogels' fatigue resistance. In the second part of this dissertation, we study mechanical instabilities in hydrogels. Under tension, a layer of confined elastic material such as hydrogel can exhibit various modes of mechanical instabilities, including cavitation, fingering and fringe instabilities. 
While cavitation has been extensively studied, the fingering and fringe instabilities are not as well understood, and the relations and interactions among these instabilities have not yet been explored.; We systematically study the formation, transition, interaction and co-existence of mechanical instabilities in confined elastic layers under tension. Through combined experimental, numerical and theoretical analysis, we find that the mode of instability is determined by both the geometry and the mechanical properties of the elastic layer through two non-dimensional parameters: the layer's lateral dimension over its thickness, and the elastocapillary length over the defect size. A phase diagram is calculated to quantitatively predict the occurrence of each mode of instability, which can guide the design of robust adhesives by rationally harnessing the desired mode of instability while suppressing the other modes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 210-228).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123772</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Holistic modeling and evaluation of material recovery from materially-complex end-of-life vehicles</title>
<link>https://hdl.handle.net/1721.1/123771</link>
<description>Holistic modeling and evaluation of material recovery from materially-complex end-of-life vehicles
Ip Kiun Chong, Karine.
Material recovery is a key lever to promote a sustainable materials recycling system. By capturing materials from end-of-life products and substituting primary with secondary material production, society can reduce the depletion of natural resources and prevent the accumulation of valuable materials in landfills. Material recovery from materially-complex products, such as vehicles, is becoming more challenging as the material mixture becomes more heterogeneous with new trends in lightweight materials and product designs. The goals of this thesis are to develop a holistic modeling framework for material recovery for end-of-life vehicles (ELVs) and to illustrate the application of this framework to evaluate the recovery performance for current ELVs and future lightweight ELVs, using an existing material recovery infrastructure.; The holistic framework encompasses an integrated series of material recovery models that span the recovery processes from grave to cradle: dismantling, hulk shredding, material separation, secondary metal production of steel and aluminum, and waste recovery from plastics. For each of these processes, we have developed an evaluative model using mass balance. The input into the recovery chain is the bill of materials of an ELV, including the assembly part hierarchies and assembly precedence constraints, as well as the material composition of all components and fasteners. The output from the recovery chain is an assessment of the value of the recovered scraps, based on the contribution of ferrous and aluminum scraps to secondary metal production, and of plastic residues to energy recovery.; The intent of this holistic, grave-to-cradle framework is to allow one to rethink how the effectiveness of the recovery chain depends on the attributes of each of the end-of-life processes as well as on the vehicle product design. 
For the first phase of the recovery chain, we model the dismantling process as an optimization problem to decide which valuable parts to remove, given the parts' hierarchies and assembly precedence relationships, and the parts' values and dismantling costs. After removing these parts, we are left with the ELV hulk. For the second phase, we develop a shredding model for the comminution of the ELV hulk, i.e. the transformation of the hulk into non-liberated (fasteners and wires attached to parts' fragments) and liberated material particles of different sizes.; For the third phase, we use a network flow model to represent the sortation of the multi-material flow from the shredder through the network of sorting equipment, which performs separation based on material properties. Using this system of linear equations representing mass flow balance, we can solve for the material composition of the collected output scrap streams. From that, we can calculate the material quantity and quality losses due to inefficient separation. More importantly, this model is able to capture the metal contamination due to non-liberated particles ending up in the ferrous and aluminum scraps. For the fourth phase of the material recovery chain, we calculate the dilution losses incurred at secondary metal production for different scenarios of produced sink metal alloys.; To tie together all losses from the material recovery chain, we propose the normalized contribution of scraps (NCS), an overall performance metric that improves upon the typical overall recovery rate (ORR) by accounting for the dilution losses. We illustrate the framework with a baseline ELV built from family-car teardown data. 
In general, we observe the NCS to be significantly lower than the ORR (89%) for scenarios where the metal scraps are used to produce medium-quality alloys at the level of closed-loop recycling (NCS of 25% for 6061 Al alloy and rolled steel production), but less so when the metal scraps are down-cycled to low-quality alloys (NCS of 88% for A380 Al alloy and steel bar production). We conduct two case studies to explore the effects of variations to the profile of the baseline ELV on its material recovery performance. In the first case study, we run Monte-Carlo simulations to model the uncertainty in the profile of the ELV hulks.; Using data on the resale of used parts, we create a sample of 1000 different hulks by randomly generating the parts that are disassembled for resale. For this sample we observe that 300 kg of materials are removed from an ELV on average; the ORR varies from 86% to 91%, while the best-case scenario NCS varies from 60% to 91%. The second case study investigates the material recovery performance of an aluminum-intensive lightweight vehicle. For this vehicle, in comparison to the baseline vehicle, there is not only a higher concentration of aluminum, but also more ferrous fasteners in the aluminum body. Our analysis of this case suggests a decrease in recovery performance compared to that of the baseline ELV case. In comparison to the baseline vehicle, the lightweight vehicle contains less ferrous material, which results in a greater concentration of copper contaminant in the ferrous scrap; thus, the ferrous scrap is of lower grade and requires more dilution.; However, the increase in aluminum input is high enough to counter the increase in aluminum-ferrous particle contamination in the aluminum scrap. 
While the performance results depend on the model parameters representing the material recovery infrastructure, these case studies highlight the need for a holistic material recovery model to capture unintended dilution loss consequences due to changes in vehicle designs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 244-252).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123771</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An inverse problem framework for reconstruction of phonon properties using solutions of the Boltzmann transport equation</title>
<link>https://hdl.handle.net/1721.1/123770</link>
<description>An inverse problem framework for reconstruction of phonon properties using solutions of the Boltzmann transport equation
Forghani, Mojtaba.
A methodology for reconstructing phonon properties in a solid material, such as the frequency-dependent relaxation time distribution, from thermal spectroscopy experimental results is proposed and extensively validated. The reconstruction is formulated as a non-convex optimization problem whose goal is to minimize the difference between the experimental results and those calculated by a Boltzmann transport equation (BTE)-based model of the experimental process, with the desired material property treated as the unknown in the optimization process. Crucially, the proposed approach makes no assumption of an underlying Fourier behavior, thus avoiding all approximations associated with that assumption. The proposed method combines a derivative-free optimization method, the Nelder-Mead algorithm, with a graduated (multi-stage) optimization framework.; Our results show that, compared to other reconstruction methods, the proposed method is less sensitive to scarcity of data in a specific transport regime (such as submicron length scales). The method is also very versatile in incorporating known information into the optimization process, such as the known value of the material thermal conductivity or the solid-solid interface conductance if a material interface is present; the addition of this information improves the quality of the optimization. In the presence of a material interface of unknown conductance, we show that simultaneous reconstruction of both the solid-solid interface frequency-dependent transmissivity function and the relaxation time function is possible. The optimization algorithm is validated using both synthetically generated temperature profiles (generated by solving the BTE) and experimentally measured signals.; In the case of synthetic input data, the reconstructed properties are compared to the material models used to create the input data. 
In the case of experimental data, we compare the reconstructed phonon properties with their corresponding benchmark values, obtained either from theoretical predictions, such as relaxation times from density functional theory, or from experimental measurements, such as measured interface transmissivities. The interface transmissivity reconstruction is also validated on the 2D-dots geometry in the presence of an Al-Si interface. Our results show good accuracy in all cases. The reliability and uniqueness of the optimized solution, as well as its statistical properties due to the presence of noise, are studied using a number of statistical techniques.; Our analysis provides strong evidence that the formulated optimization problem has a unique solution; furthermore, the proposed optimization-based framework is capable of finding that solution with good accuracy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-144).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123770</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and learning for rigid-body models of manipulation</title>
<link>https://hdl.handle.net/1721.1/123769</link>
<description>Inference and learning for rigid-body models of manipulation
Fazeli, Nima.
In this thesis, we explore a spectrum of inference and modeling approaches for robotic manipulation. In particular, we investigate the broad class of rigid bodies undergoing frictional interactions. We begin by deriving a contact-implicit system identification formulation for articulated rigid bodies. Assuming we have a physical model of the system, the objective is to derive system parameters and contact forces for articulated rigid bodies without enumerating and inferring contact formations. We then ground this approach by investigating the fidelity of rigid-body contact models and their identification. We evaluate the fidelity of the contact models by empirically studying their predictive performance and parameter identification properties in a planar impact task. Next, we address one approach to augmenting these contact models with data. The objective here is to improve model fidelity through an optimization of model parameters and residual error learning for systems with prior physics models. We conclude the thesis by building models from data for tasks with rich latent structure and no prior physics models. Here, the objective is to learn data-efficient hierarchical models of physics that incorporate force and tactile sensory modalities and are amenable to inference, controls, and planning.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 139-146).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123769</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of high performance hybrid transmissions</title>
<link>https://hdl.handle.net/1721.1/123768</link>
<description>Design of high performance hybrid transmissions
Dorsch, Daniel Scott.
This thesis explores the design, development, and evaluation of transmission systems for integration into high-performance hybrid (internal combustion engine (ICE) and electric motor) vehicles. Traditional hybrid vehicle designs often fall into one of two categories. Everyday road vehicles typically utilize hybridization for increased drivetrain efficiency, including traits such as low-speed electric drive and regenerative braking. Alternatively, performance cars have typically utilized the electric motor functionality for increased performance. By using a new framework for analyzing the elements and their function within a propulsion system architecture, advanced hybrid architectures that allow for both high efficiency and increased performance are presented. A two-motor, clutchless hybrid transmission concept was developed. An analysis of the available driving modes demonstrates its utility in a high-performance vehicle, increasing the performance and efficiency of the drivetrain.; A second, dual-shaft, single-motor, clutchless transmission concept is presented, with the benefits and drawbacks of this architecture compared to the two-motor architecture and a traditional ICE-only transmission. The final part of this thesis presents a novel, two-speed electric motor system that could be integrated within a conventional ICE automated manual transmission. This system utilizes custom sensors for tracking the position of the dogteeth within the two-speed shift synchronizer. Electric motor control is used to synchronize motor speed during a shift event, as the inertia of the electric motor is too large for friction synchronization alone to be sufficient. 
This strategy removes the tradeoff that currently exists for optimal shift actuator design (larger pistons result in faster speed synchronization but slower actuation motion during other phases of a shift) and results in overall faster gearshifts.; Dogtooth tracking allows for firing of the shift actuator at the proper moment, ensuring no collision between dogteeth and allowing for faster shifter motion than with a traditional synchronizer. An experimental setup was developed to characterize shift performance. Full gearshifts can be made successfully utilizing speed matching and dogtooth tracking, validating the described shift control method and allowing for improved, frictionless synchronizer designs. The developments described in this work will lead to a new generation of hybrid vehicles, designed for high-performance and increased efficiency.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-118).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123768</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Underwater multi-vehicle co-operative target-tracking</title>
<link>https://hdl.handle.net/1721.1/123767</link>
<description>Underwater multi-vehicle co-operative target-tracking
Cheung, Mei Yi, Ph. D., Massachusetts Institute of Technology.
The marine domain is fundamentally challenging for collaborative, multi-robot sensing and tracking operations. Accurate mapping and modeling of an underwater environment is both time consuming and difficult, especially when robots have limited access to a high-quality localization solution such as GPS. Communication over distances greater than one hundred meters necessitates the use of acoustics (acomms), which introduces networking challenges such as limited throughput and bandwidth. Successful execution of a highly dynamic and co-operative task such as target-tracking requires optimization of the information-gathering process, probabilistic inference over disparate, noisy sensor data, and exchange of local information over a costly communication link. This thesis presents two formulations of the co-operative underwater target-tracking task: a dynamic nonlinear sigma-point joint estimator and a non-Gaussian, non-parametric, multi-modal factor graph formulation of SLAM. Within the field of SLAM (simultaneous localization and mapping, the joint estimation of a robot's state and a model of its environment), there is a wealth of research into approaches using point estimate representations and Gaussian sensor noise. The marine domain presents two challenges not well addressed by the usual formulation: (1) measurements obtained from diverse sensors often require extensive filtering and parametrization to fuse within a Gaussian framework, and (2) acomms between multiple robots significantly limits the amount of local information that can be shared over the network. The non-Gaussian approach presented here utilizes the technique of synthetic aperture sonar (SAS) to relate disparate acoustic measurements in a consistent probabilistic framework. Experimental results under real-world acoustic conditions are gathered onboard our custom-built lightweight and low-cost ASVs in the Charles River, and detailed design specifications are presented for our testbed robots.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 215-231).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123767</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next-generation dedicated outdoor air cooling systems for low-energy buildings</title>
<link>https://hdl.handle.net/1721.1/123766</link>
<description>Next-generation dedicated outdoor air cooling systems for low-energy buildings
Chen, Tianyi, Ph. D., Massachusetts Institute of Technology.
Building operations account for more than 40% of overall energy consumption in the United States, of which cooling energy comprises an especially significant part in hot and humid climates. To achieve low-energy and low-carbon building communities, it is necessary to design high-performance building enclosures and develop energy-efficient cooling technologies and systems. High-level prescriptive code requirements for low-energy buildings are available, but details of building envelope construction are missing in the literature. Previous studies have primarily focused on analyzing one type of new technology or its applications in a particular climate, while a systematic comparison is needed to address the prospects and limitations of each system. In this thesis, thermodynamic analysis has been performed on dedicated outdoor air cooling systems (DOAS). The energy performance of next-generation DOAS cooling systems, namely those with desiccants and membranes, is compared with the industrial benchmark, the widely used chiller system based on the vapor-compression cycle, on the basis of first-law and second-law efficiencies. Low-energy building prototypes have been constructed with specified design details to provide indoor cooling loads for the DOAS cooling systems, and their energy performance is validated against existing zero-energy buildings. Dynamic working conditions are simulated, and the effects of cooling equipment design on energy performance are further examined. Integrations of passive cooling strategies are applied in different climates, and climate-specific solutions are proposed to achieve the best energy performance. The economic cost of each system is estimated, together with the payback period for replacing the existing chiller system. The thesis reveals the principles of how to systematically organize cooling equipment to achieve potential energy savings from a thermodynamic point of view.
Innovative cooling systems are proposed with next-generation cooling technologies. Significantly improved energy performance has been demonstrated through careful system design and integration, as well as energy recuperation. An automated and interactive workflow has been developed to thermodynamically analyze cooling system energy performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 208-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123766</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frameworks for the design of passive prosthetic knee components using user-centered methods and biomechanics of level-ground walking</title>
<link>https://hdl.handle.net/1721.1/123765</link>
<description>Frameworks for the design of passive prosthetic knee components using user-centered methods and biomechanics of level-ground walking
Arelekatti, Venkata Narayana Murthy.
Passive knee prostheses in developing countries use low-cost components driven primarily by the need to prevent falls, resulting in undesirable gait deviations during walking. There is a severe lack of reliable data on the specific needs of low-income amputees, which poses a significant challenge to developing globally appropriate prosthetic technology. This thesis presents the analysis of user-centered needs and relevant lower-leg dynamics as frameworks for the design of passive prosthetic knee components that can enable transfemoral (above-knee) amputees to ambulate with minimal gait deviations, leading to higher user satisfaction. The goal of developing these frameworks is ultimately to design a low-cost, fully passive prosthetic knee device for persons with transfemoral amputations living in the developing world. To identify user needs, structured oral interviews of 19 transfemoral amputees in India were conducted regarding 22 different Activities of Daily Living (ADLs). A scale of relative importance for different needs was compiled, which can help designers, doctors, and administrators provide better clinical solutions to amputees. Cross-legged sitting was identified as the most critical user need, with the potential for the greatest improvement in amputees' quality of life. Two identical rotator prototypes were designed and validated for cross-legged sitting on 9 amputees in India. To compute and replicate the target knee moment profile for a prosthetic knee device, the dynamics of level-ground walking were analyzed using a conceptual link-segment model of the prosthetic leg with the knee joint modeled as a combination of passive linear springs and dampers.
The effects of changes in the inertial properties (mass, radius of gyration, and center of mass location) of the prosthetic leg on lower-leg kinetics were also quantified in the model. The knee moment required for achieving normative joint kinematics at the hip, knee, and ankle through the optimal engagement of springs and dampers was replicated computationally with a maximum R² = 0.90 in an idealized clutching scheme. Multiple prototypes of modular knee mechanisms were built to replicate the model, including (i) an automatic locking module for stability during early stance, (ii) a linear spring module for facilitating knee flexion-extension during early stance, and (iii) a rotary damping module for control during terminal stance and swing. Qualitative feedback from two unilateral transfemoral amputees in India showed that the automatic locking module provided the predicted performance for timely stance-to-swing transition. Fluid-based viscous damping was found to provide better control than friction-based damping. A comprehensive biomechanical framework was developed that predicted the range of optimal damping coefficients for transfemoral amputees. The framework used the results from the link-segment model and empirical data on transfemoral gait characteristics such as slower walking speeds and asymmetries in stance-swing duration. An experimental prosthetic knee with five different damping conditions was built and tested on three subjects with unilateral transfemoral amputation in a motion capture lab. Increased damping led to reduced peak knee flexion during terminal stance and swing, as predicted by the framework. The framework predicted the optimal damping value for achieving normative peak knee flexion to within one standard deviation of the able-bodied value during the swing phase.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123765</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Greenlandic ice archives of North Atlantic Common Era climate</title>
<link>https://hdl.handle.net/1721.1/123740</link>
<description>Greenlandic ice archives of North Atlantic Common Era climate
Osman, Matthew B.
The Common Era (A.D. 1 to present) represents a crucial period for climatic studies, documenting the timespan over which human activities have become an increasingly dominant force in shaping Earth's landscape, climate, and ecology. Direct, quantifiable records of climatic phenomena are severely limited over much of the Common Era, necessitating high-resolution, naturally derived proxies to extend climatic insights beyond the satellite and instrumental era, particularly across remote high-latitude and maritime regions of the North Atlantic. Here, I use modern, data-driven and physically based modeling approaches to gain new insights into North Atlantic climate variability from the Greenlandic ice core archive. First, I investigate the climatic fidelity of ice core glaciochemical climate proxies at the microphysical scale. I show that several soluble chemical species, key among them methanesulfonic acid (MSA), undergo rapid vertical migration through a super-cooled liquid-advection process along ice crystal grain boundaries. I demonstrate that significant multi-year MSA changes occur only under low snow-accumulation and high impurity-content conditions, thus mitigating the phenomenon over much of Greenland. Building upon these findings, I then investigate the cause of declining 19th- and 20th-century MSA concentrations across the interior Greenland Ice Sheet. My results illustrate that Greenlandic MSA records provide a new proxy for North Atlantic planktonic biomass changes, illuminating a 10 ± 7% decline in marine productivity over the industrial era. I next present a new climate record from a previously unexplored coastal ice cap in west-central Greenland. Using a physically constrained ice cap flowline inversion model, I identify marked centennial-scale changes in coastal precipitation during the last millennium, including a ~40% increase in coastal precipitation since the onset of the industrial era.
These changes are drastically larger than those observed in inland Greenland records, revealing an enhanced sensitivity of west Greenlandic hydroclimates to regional Atlantic and Arctic-wide temperature variability. Finally, leveraging a compilation of nearly 30 annual-resolution Greenland water-isotope records, I isolate coherent signatures of atmospheric circulation variability to reconstruct changes in the North Atlantic eddy-driven jet stream over the last millennium, exposing progressively enhanced variability during the past two centuries consistent with amplified Arctic thermal-wind forcing. This thesis thus illuminates new Common Era climatic and ecological changes, and expands the scope of the Greenlandic ice archive as a proxy of the coupled North Atlantic climate system.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123740</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconstructing atmospheric changes in monsoon regions using eolian dust</title>
<link>https://hdl.handle.net/1721.1/123739</link>
<description>Reconstructing atmospheric changes in monsoon regions using eolian dust
Kinsley, Christopher William.
Mineral dust is generated in continental interiors and exported by winds to ocean basins, providing a sedimentary archive that is one of the few direct indicators we have of past atmospheric circulation. This archive can be utilized in regions of dust transport also affected by monsoons to examine how different climate forcing mechanisms impact the monsoon regions over glacial-interglacial, orbital, and millennial timescales. This thesis generates new eolian dust records from two monsoon regions to reconstruct changes in atmospheric circulation in response to forcing by high-latitude insolation and boundary condition change. In Chapters 2 and 3, I use ²³⁰Thxs-normalization to construct high-resolution eolian dust flux records from sedimentary archives downwind of the West African and East Asian Monsoon regions, respectively. The West African margin dust records show variability associated with an interplay between Northern Hemisphere summer insolation forcing and North Atlantic cooling. The longest record, at ODP Site 658, stretching back to 67 ka, shows evidence for a "Green Sahara" interval from 60-50 ka and a skipped precessional "beat" from 35-20 ka. This record also shows evidence for abrupt increases in dust flux associated with Greenland stadials. The Shatsky Rise record at ODP Site 1208, downwind of East Asian dust sources, shows variability associated with glacial-interglacial boundary conditions over the last 330 ka, exhibiting high dust flux during glacial times. The record also exhibits variability associated with a Northern Hemisphere summer insolation control that at times overrides the glacial-interglacial signal. In Chapter 4, I demonstrate the feasibility of using radiogenic neodymium isotopes (¹⁴³Nd/¹⁴⁴Nd) at IODP Site U1430 in the Sea of Japan to fingerprint the provenance of eolian material at the core site from Asian dust sources.
I then generate a ¹⁴³Nd/¹⁴⁴Nd record from isolated eolian material over the last 200 ka to examine Westerly Jet behavior in the Asian interior, which shows resolvable orbital-scale variability from 200 to 100 ka, and muted variability from 100 to 0 ka. The findings imply a quicker shift of the Westerly Jet to the north of the Tibetan Plateau during times of high Northern Hemisphere summer insolation and a strong Asian monsoon.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 120-126).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123739</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geophysical and geochemical constraints on submarine volcanic processes</title>
<link>https://hdl.handle.net/1721.1/123738</link>
<description>Geophysical and geochemical constraints on submarine volcanic processes
Jones, Meghan R.
Submarine volcanic systems form new oceanic crust, host unique chemosynthetic ecosystems, concentrate rare metals, and provide a conduit for chemical transfer from the Earth's interior to the hydrosphere. Although our understanding of submarine volcanoes has historically been limited by their relative inaccessibility, recent observations from active systems provide valuable opportunities to address key open questions in submarine volcanology. This thesis provides new insight into submarine volcanic processes using observations and samples from the 2011 Axial Seamount eruption, the 2012 Havre Volcano eruption, and the Mid-Atlantic Ridge near 14°N. In Chapter 2, I develop best practices for quantifying vesicle textures and reconstructing total CO₂ concentrations in mid-ocean ridge basalts (MORB). Based on synthetic vesicle populations, 2D and 3D measurements, and Raman spectroscopy, I show that traditional methods overestimate MORB CO₂ concentrations by as much as 50%, which has important implications for estimating ridge CO₂ flux. In Chapter 3, I apply the methods from Chapter 2, along with a bubble growth model, to samples from the 2011 Axial Seamount eruption in order to evaluate magma ascent and lava flow rates. I show that the variability in ascent rates during the 2011 eruption spans the range previously proposed over the global mid-ocean ridge system. I suggest that the variability in ascent rates relates to lateral dike propagation and evolving reservoir overpressures, and that ascent rates influence flow morphology. In Chapter 4, I address the origin of highly vesicular MORB that pop upon recovery from the seafloor. I show that bubble accumulation produces the high volatile concentrations in these popping rocks and demonstrate that mantle carbon concentrations are lower and less heterogeneous than previously proposed. In Chapter 5, I evaluate models for the submarine dispersal of giant pumice clasts using observations from the 2012 Havre Volcano eruption.
I show that the seafloor distribution of giant pumice is controlled by conductive cooling, the advective displacement of steam by water through highly permeable pathways, and clast breakup during transport and deposition. Together, these chapters provide critical constraints on the flux of volatiles at mid-ocean ridges and the processes governing the emplacement of volcanic products on the seafloor.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123738</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equatorial ocean dynamics impacting upwelling west of the Galápagos Archipelago</title>
<link>https://hdl.handle.net/1721.1/123737</link>
<description>Equatorial ocean dynamics impacting upwelling west of the Galápagos Archipelago
Jakoboski, Julie K. (Julie Kathryn)
The Galápagos Cold Pool (GCP) is a region of anomalously cold sea surface temperature (SST) just west of the Galápagos Archipelago. Modeling studies have shown that the GCP is maintained by wind- and current-driven upwelling. The Galápagos Archipelago lies on the equator, in the path of the Pacific Equatorial Undercurrent (EUC) as it flows eastward across the Pacific at the depth of the thermocline. It is hypothesized that the EUC upwells into the GCP as it reaches the topographic barrier of the Galápagos Archipelago. The path of the EUC in the vicinity of the archipelago is not well understood. The 'Repeat Observations by Gliders in the Equatorial Region' (ROGER) program deployed a fleet of Spray autonomous underwater gliders in the region just west of the Galápagos Archipelago from 2013 to 2016, with the goal of continuously occupying three transects that form a closed area with the archipelago as the eastern boundary. Gliders obtained subsurface measurements of temperature, salinity, and velocity with unprecedented temporal and spatial resolution. These measurements are used to observe the path of the EUC as it bifurcates into north and south branches around the Galápagos Archipelago. Net horizontal transport into the volume defined by the closed area formed by the glider transects is used to estimate an average vertical velocity profile in the region of the GCP, indicating upwelling in the upper 300 m. The bifurcation latitude of the EUC, estimated to be approximately 0.4°S from volume transport as a function of salinity, is coincident with the meridional center of the archipelago, suggesting that the bifurcation latitude is topographically controlled.
Ertel potential vorticity and a Bernoulli function are qualitatively conserved, supporting an inertial model of the EUC. Average spectral variance from Argo profiling float observations is used to show that tropical instability waves propagate with frequency and wavelength consistent with linearized, equatorial β-plane model results and may impact the GCP, according to their vertical structure.
Thesis: Ph. D., Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123737</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A global study of double seismic zones and its implications for the mechanism of intermediate-depth earthquakes</title>
<link>https://hdl.handle.net/1721.1/123736</link>
<description>A global study of double seismic zones and its implications for the mechanism of intermediate-depth earthquakes
Florez Torres, Manuel Alberto.
The fundamental physical processes that generate earthquakes appear to contradict the occurrence of intermediate-depth (80-350 km) seismicity: both pressure and temperature increase with depth, which inhibits fracture and unstable sliding and promotes ductile flow. Classic experiments on olivine, the main mineral composing the mantle, show that the shear stresses necessary to overcome the high normal stresses imposed by the overburden pressure are unsustainable at these depths. In subduction zones, the dehydration of the descending oceanic lithosphere might enable the observed brittle-like behavior, but this hypothesis remains controversial, as there are other viable alternatives. Improving our understanding of intermediate-depth seismicity relies on assembling accurate and systematic seismological observations, which I tackle here in my graduate work. First, I developed a relocation method that uses array processing in such a way that velocity model biases are reduced. The technique identifies and picks the arrivals of depth phases and performs a relative relocation scheme. When high-quality data are available, hypocentral depth can be estimated with a precision of a few kilometers. I systematically applied this technique to build a new global catalog of intermediate-depth seismicity. At depths greater than about 50 km, most subducting slabs feature two distinct layers of seismicity, known as Double Seismic Zones (DSZs). I used my relocated catalog to characterize 32 slab segments, sampling a diverse range of tectonic environments.
I was able to clearly resolve the geometrical structure of DSZs and to separately study earthquakes in the subducting crust (upper layer) and lithospheric mantle (lower layer). I performed a careful analysis of the frequency-size statistics for each layer, finding consistently larger b-values (the proportion of low- to high-magnitude events), correlating with slab age, for the upper plane of the lithosphere, and roughly constant values for the lower plane. Given that b-values are indicative of stress regime, this suggests different mechanisms for earthquakes occurring in the upper and lower planes of the subducting oceanic lithosphere.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-105).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123736</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics and kinematics of an estuarine network</title>
<link>https://hdl.handle.net/1721.1/123735</link>
<description>Dynamics and kinematics of an estuarine network
Corlett, William Bryce.
This thesis addresses the dynamics of estuarine networks, based on hydrographic observations in Newark Bay, a sub-estuarine network connected to the Hudson River estuary through New York Harbor. Estuarine networks differ from simple estuaries in that they may have multiple connections to the ocean, multiple freshwater sources, and often contain complex junctions between estuarine segments. The Newark Bay estuarine network is connected to the sea through two tidal straits and is fed by multiple internal and external sources of fresh water. The estuarine network is also naturally divided into a series of reaches, each of which is characterized by a different cross-sectional geometry. This thesis focuses on the hydrographic variability and varying exchange flow within the Newark Bay estuarine network. Shipboard hydrographic measurements reveal the time-dependent formation of salinity fronts between reaches of the estuary. Each front is generated by a different mechanism; however, all are generated by tidal flow through channel junctions during ebb tide, and are advected landward during flood tide. Mooring-based measurements confirm that these fronts form during nearly every tidal cycle, and that the fronts are associated with substantial changes in local salinity on tidal timescales. The effect of tidal processes, such as frontal advection, on the exchange flow is investigated by applying the isohaline total exchange flow (TEF) framework to mooring-based observations in multiple reaches of the estuarine network. This reveals that over half of the exchange flow is driven by tidal processes at all sites within the estuary. Both the TEF-based salt balance and the standard Eulerian salt balance indicate that tidal processes are also responsible for at least half of the landward salt flux at most sites within the estuary; the TEF and Eulerian salt balances are nearly identical. Tidal processes within the estuary are in large part associated with fronts.
The large influence of tidal processes on the exchange flow in Newark Bay is thus likely due to the prevalence of channel junctions within the estuarine network.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 115-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123735</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Storm signatures in coastal ponds and marshes over the late Holocene</title>
<link>https://hdl.handle.net/1721.1/123734</link>
<description>Storm signatures in coastal ponds and marshes over the late Holocene
Castagno, Katherine Amelia.
Tropical cyclones pose a growing threat to coastal populations, especially as both populations and infrastructure are increasingly concentrated along the eastern coast of the United States. This thesis seeks to characterize the impacts of storms on coastal ponds and marshes along the eastern coast of the United States. Tropical cyclones and other storms have been shown to cause a spectrum of effects on these coastal systems, ranging from widespread erosion to deposition of thick sediment deposits. Sediments deposited and preserved in coastal ponds and marshes can provide a proxy for tropical cyclone landfall, the development and interpretation of which is imperative to understanding past climate trends and informing decisions for the future. This thesis uses a variety of methods to quantify the spatiotemporal signatures of tropical cyclone events in coastal, marsh, bay, and pond systems in Massachusetts, Connecticut, and Virginia. Trends in grain-size distribution and sediment coarse fraction are used to broaden our understanding of deposition and sediment sources during tropical cyclone events. The complexity of how storms interact with these systems requires a process-based, whole-site analysis to adequately develop a storm record. Given the many nuances of storm deposition in these systems (including reverse grading trends and apparent spatiotemporal variation in sediment source), the potential utility of, and caveats to, inversely modeling storm intensity from deposit grain-size characteristics is discussed. Finally, the question of whether hurricanes can produce widespread erosion of marsh platforms is addressed through both field and modeling techniques.
While storms typically deposit sediment, field evidence suggests that marshes have the potential to be eroded by a series of storms over time, a deviation from our traditional understanding of marsh evolution. Deposition and erosion of sediment during major storms remain complex, emphasizing the importance of contextualizing storm signatures within a broader view of the study area. This provides an opportunity to strengthen both paleo-reconstructions of storm activity and our ability to make informed decisions for coastal management in response to potential future changes in storminess.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-171).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123734</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and decision models for regulatory and business challenges in low-income countries</title>
<link>https://hdl.handle.net/1721.1/123732</link>
<description>Inference and decision models for regulatory and business challenges in low-income countries
Beeler, Michael Francis.
This thesis develops inference and decision models to address challenges of particular relevance in low-income countries (LICs). The areas studied include intelligent tutoring systems (ITS), network infrastructure pricing, and anti-counterfeiting. The ITS chapter identifies previously unknown and serious limitations to Bayesian Knowledge Tracing and Deep Knowledge Tracing, which are two highly-cited methods designed to aid adaptive educational software. The work on Deep Knowledge Tracing led to new data augmentation methods for training recurrent neural networks to be robust in the face of unseen input sequences. We propose a statistically consistent, efficient, and unbiased alternative inference method for questions engaging one skill at a time. The network infrastructure pricing chapters examine how to allocate the cost of a future infrastructure network whose structure depends on the price-taking decisions of potential users. In a multi-period setting, strategic joining delay by users typically leads to lower utility. We develop a cost-allocation rule that uses rebates to prevent strategic delay. In the single-period setting, we derive closed-form solutions to the expected value of offering to build a simple 1D network and use the 1D solution to establish a lower-bound estimate for more complex 2D networks. The anti-counterfeiting chapter investigates the strategic procurement of counterfeits by retailers and the effects of shared retailer reputation on equilibrium procurement decisions using models that are more flexible and tractable than those previously appearing in the literature.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 207-213).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123732</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributionally robust optimization with marginals: theory and applications</title>
<link>https://hdl.handle.net/1721.1/123731</link>
<description>Distributionally robust optimization with marginals: theory and applications
Chen, Louis Lester.
In this thesis, we consider distributionally robust optimization (DRO) problems in which the ambiguity sets are designed from marginal distribution information; more specifically, the ambiguity set includes any distribution whose marginals are consistent with given prescribed distributions that have been estimated from data. In the first chapter, we study the class of linear and discrete optimization problems in which the objective coefficients are chosen randomly from a distribution, and the goal is to evaluate robust bounds on the expected optimal value as well as the marginal distribution of the optimal solution. The set of joint distributions is assumed to be specified only up to the marginal distributions. We generalize the primal-dual formulations for this problem from the set of joint distributions with absolutely continuous marginal distributions to arbitrary marginal distributions, using techniques from optimal transport theory. While the robust bound is shown to be NP-hard to compute for linear optimization problems, we identify multiple sufficient conditions for polynomial-time solvability: one using extended formulations, another exploiting the interaction of combinatorial structure and optimal transport. This generalizes the known tractability results under marginal information from 0-1 polytopes to a class of integral polytopes, and has implications for the solvability of distributionally robust optimization problems in areas such as scheduling, which we discuss. In the second chapter, we extend the primal-dual analysis of the previous chapter to the problem of distributionally robust network design.
In this problem, the decision maker is to decide on the prepositioning of resources on arcs in a given s-t flow network in anticipation of an adversary's selection of a probability distribution for the arc capacities, aiming to minimize the expected max flow. Again, the adversary's selection is limited to those distributions that are couplings of given arc capacity distributions, one for each arc. We show that we can efficiently solve the distributionally robust network design problem in the case of finitely supported marginals. Further, we take advantage of the network setting to efficiently solve for the distribution with which the adversary responds. The primal-dual formulation of our previous work takes on a striking form in this study. As one might expect, the form relates to the well-known Max-Flow Min-Cut theorem, and this leads to the intriguing interpretation as a 2-player, zero-sum game wherein player 1 chooses what to set the arc capacities to and player 2 chooses an s-t cut. Essential to our analysis is the finding that the problem of finding the worst-case coupling of the stochastic arc capacities amounts to finding a distribution over the set of s-t cuts - this distribution being among the mixed strategies that player 2 would play in a Nash equilibrium. Furthermore, the support of such a distribution is a nested collection of s-t cuts, which implies an efficiently sized solution. Finally, the third chapter involves work inspired by the daily operations of HEMA supermarket, a new retail model recently established by Alibaba Group, China. In a HEMA supermarket store, a single SKU may face demand through multiple channels. 
The challenge facing HEMA is the question of how many units to stock in total between the warehouse and the store-front in advance of uncertain demand that arises in several consecutive time frames, each 30 minutes long. In this work, we provide the first distributionally robust optimization study in the setting of omnichannel inventory management, wherein we are to make a stocking decision robust to an adversary's choice of coupling of available (marginal) demand distributions by channel and by time frame. The adversary's coupling decision amounts to designing a random mathematical program with equilibrium constraints (MPEC), and we provide both a structural analysis of the adversary's choice of coupling and an efficient procedure to find this coupling. In general, the overall distributionally robust stocking problem is non-concave. We provide sufficient conditions on the cost parameters under which this problem becomes concave, and hence tractable. Finally, we conduct experiments with HEMA's data. In these experiments, we compare and contrast the performance of our distributionally robust solution with that of a naive Newsvendor-like solution on various SKUs of varying sales volume and number of channels over a 5-hour time window from 2pm to 7pm on weekends. Numerical experiments show that the distributionally robust solutions generally outperform the stochastic Newsvendor-like solution on SKUs exhibiting low-to-medium sales volume. Furthermore, and interestingly, in all of our experiments, the distributionally robust inventory decision problems presented by the historical data provided by HEMA are in fact concave.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123731</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planet formation and evolution in our Solar System and beyond</title>
<link>https://hdl.handle.net/1721.1/123730</link>
<description>Planet formation and evolution in our Solar System and beyond
Biersteker, John Brooks.
The discovery of thousands of exoplanets in recent decades has revealed a remarkable diversity of planetary system architectures, including entire classes of planets for which there is no solar system analog. In particular, the Kepler mission has shown that planets intermediate in size between Earth and Neptune with orbital periods less than 100 days are abundant in our galaxy. Concurrently, spacecraft missions to small primitive bodies in our solar system have yielded valuable insights into conditions in the early solar system. This thesis addresses questions in planet formation theory arising from both sets of observations. We begin with an investigation into the observed diversity of super-Earth bulk densities, which range from being consistent with a terrestrial composition to requiring an extended hydrogen-helium (H/He) envelope comprising several percent of the planet's mass. Giant impacts are expected to play a role in the formation of these worlds. We examine the thermal consequences of such an impact, and find that atmospheric loss from these effects can significantly exceed that caused by the previously considered process of mechanical shocks for H/He atmospheres. Specifically, the energy released can produce a period of sustained, rapid mass loss through a Parker wind, partly or completely eroding the envelope. The degree of loss depends on planetary properties and the stochastic details of the impact, making giant impacts an attractive explanation for the observed diversity of super-Earth compositions. The final assembly of the terrestrial planets in our solar system likely also concluded with a period of giant impacts. 
We explore the significance of post-impact thermal losses for terrestrial planet atmospheres in different evolutionary states, finding that H/He envelopes are unlikely to survive the giant impact phase, but that secondary, outgassed envelopes with higher mean molecular weights may be retained. Atmospheric constituents with high mean molecular weights may be lost, however, if they are mixed into a predominantly H/He envelope. Next, this thesis examines magnetic measurements of comet 67P/Churyumov-Gerasimenko (67P) and their implications for the early solar system environment. Specifically, the remanent magnetization of solar system bodies reflects their accretion mechanism, the space environment in which they formed, and their subsequent geological evolution. We show that the Rosetta magnetometry requires very low bulk magnetizations of cometary material on spatial scales ≥10 cm. If 67P formed during the lifetime of the solar nebula and has not undergone significant subsequent alteration, this low magnetization is inconsistent with its formation from the gentle gravitational collapse of a cloud of millimeter-sized pebbles in a background magnetic field ≳3 μT. This constraint is compatible with theories of magnetically driven evolution of protoplanetary disks. Lastly, this thesis presents the first attempt to determine an exoplanet's oblateness and obliquity through the use of changes in the transit depth caused by the spin precession of an oblate planet. Determination of these quantities would provide insights into a planet's internal structure and formation history. Using Kepler photometry, we examine the brown dwarf Kepler-39b and the warm Saturn Kepler-427b. We do not usefully constrain the oblateness of Kepler-39b, but we find transit depth variations for Kepler-427b at 90% significance consistent with a precession period of 5.5 years and an oblateness comparable to that of the solar system gas giants.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-212).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123730</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion-exchanged metal-organic frameworks for industrially relevant catalysis applications</title>
<link>https://hdl.handle.net/1721.1/123728</link>
<description>Ion-exchanged metal-organic frameworks for industrially relevant catalysis applications
Park, Hoyoung Daniel.
The inorganic clusters of metal-organic frameworks (MOFs) offer a unique combination of synthetic tunability, structural uniformity, and site accessibility uncommon in conventional heterogeneous catalysts. As such, the inorganic nodes of MOFs provide a promising platform that can be engineered to promote challenging chemical transformations for which no adequate solid catalysts exist. This thesis focuses on the postsynthetic ion exchange behavior of the inorganic nodes in MOFs and its use in the preparation of ion-exchanged MOF catalysts for industrially relevant chemical transformations. Chapter 1 introduces the characteristics of MOFs relevant to heterogeneous catalysis and highlights their structural tunability with an emphasis on their node ion exchange behavior. Chapter 2 details the application of the postsynthetic ion exchange strategy in the preparation of Co(CO)₄⁻-incorporated Cr-MIL-101 (Co(CO)₄⊂Cr-MIL-101, Cr-MIL-101 = Cr₃O(BDC)₃F, H₂BDC = 1,4-benzenedicarboxylic acid), the first heterogeneous catalyst for epoxide carbonylation. Chapters 3 and 4 outline the use of Co(CO)₄⊂Cr-MIL-101 in a fixed-bed reactor process for the continuous-flow carbonylative production of β-lactone and succinic anhydride, respectively. Chapter 5 describes the use of Cr³⁺-exchanged MFU-4l (Cr-MFU-4l, MFU-4l = Zn₅Cl₄(BTDD)₃, H₂BTDD = bis(1H-1,2,3-triazolo[4,5-b],[4',5'-i])dibenzo[1,4]dioxin) as an exemplary system to demonstrate pre-reaction treatment with alkylaluminum species as a simple method to isolate a MOF catalyst for gas-phase ethylene polymerization. The favorable performance of these MOF catalysts underscores the intrinsic advantages of their inorganic nodes, which support precise coordination geometries as isolated single sites within a porous scaffold for novel catalytic applications. Combined with the high structural tunability of the node metal sites, the inorganic clusters of MOFs hold tremendous potential for rational catalyst design.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123728</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ideal reversible polymer networks : theory and applications</title>
<link>https://hdl.handle.net/1721.1/123727</link>
<description>Ideal reversible polymer networks : theory and applications
Parada Hernandez, Germán Alberto.
Hydrogels are crosslinked polymer networks with high water content that can be designed to have properties similar to those of native tissue. Due to the tunability and unique properties of this class of materials, they are considered ideal biomaterials and have been explored for a variety of tissue engineering and biomedical applications. The widespread adoption of these materials outside research lab settings, however, has been hampered by multiple technical and non-technical limitations. We have addressed two of the technical limitations identified: poor mechanical robustness and integration with non-hydrogel surfaces, and the lack of quantitative predictions of hydrogel properties (based on the hydrogel's composition and structure). In this thesis we introduce a set of tough materials based on an interpenetrating-network hydrogel architecture, and several strategies used to robustly adhere these materials to inorganic and elastomeric substrates. These strategies are used to introduce thin hydrogel layers on flat surfaces and selected medical devices. Subsequently, we characterize the mechanical, biocompatibility, antifouling, functional, and blood compatibility properties of various coated surfaces, as compared to those of pristine surfaces, for medical device applications. Addressing the second limitation, we have developed an Ideal Reversible Polymer Network (IRPN) system that shows a single relaxation timescale due to the minimization of defects in its structure. This system, which features 4-arm end-functionalized macromers with reversible crosslinks, enables predictions of its viscoelastic properties under shear deformation using Maxwell-based frameworks. 
The predictions are validated using a PEG hydrogel featuring boronic acid-diol reversible bonding, and the data match the model predictions well up to a critical strain boundary (which is estimated using scaling arguments). We hope this work enables the design and formulation of hydrogel-based materials and devices that can be employed to reduce clinical complications and healthcare-related challenges.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis. "June 2019." Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123727</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elastin-like polypeptide (ELP) tags for self-assembly and high-throughput processing of functional protein materials</title>
<link>https://hdl.handle.net/1721.1/123726</link>
<description>Elastin-like polypeptide (ELP) tags for self-assembly and high-throughput processing of functional protein materials
Mills, Carolyn Elaine.
The diverse recognition and catalytic capabilities of globular proteins make these biomolecules promising candidates in a broad array of applications, including industrial production of commodity chemicals, point-of-care diagnostics, and therapeutics. Self-assembly of globular protein-polymer bioconjugates into nanostructured materials is an attractive protein immobilization strategy that allows for both high protein packing density and control over protein orientation in the material. However, challenges associated with protein-polymer bioconjugate preparation limit the use of these materials in high-throughput processes. This thesis focuses on overcoming this challenge by replacing the polymer block of these bioconjugate materials with a genetically fused elastin-like polypeptide (ELP) tag. The first part of this thesis explores the effects of ELP charge and hydrophobicity on the self-assembly of ELP-mCherry fusion proteins in concentrated solution. Concentrated solution characterization of fusion protein self-assembly showed that the addition of charge to the ELP block decreases the propensity for fusion protein self-assembly. Subsequent dilute solution measurements on the ELPs and ELP/mCherry blends revealed that these ELPs behave similarly to analogous charged polymers, but do not complex with mCherry in dilute solution. This combination of results leads to two main conclusions concerning the self-assembly of ELP-mCherry fusion proteins. First, the addition of charge to the ELP block decreases the propensity for concentrated solution self-assembly because it reduces the effective repulsion between ELP and mCherry blocks by reducing charge cohesion asymmetry between the two protein blocks. Second, for fusions containing a negatively charged ELP block, repulsion between negatively charged ELPs further weakens self-assembly. The second part of this thesis focuses on the development of a platform for high-throughput preparation of ELP-fusion materials. 
One key pitfall of existing ELP-based purification strategies (which can be implemented in high-throughput formats) is that they do not permit control over the final protein solution salinity. To overcome this challenge, the existence of ELP cononsolvency in water/alcohol solutions was investigated. This resulted in the discovery and first reports of ELP cononsolvency, as well as the first report of upper critical solution temperature (UCST) transitions of ELPs under certain solvent conditions. Application of ethanol-induced ELP precipitation to the desalting of protein materials was then investigated using a model fusion protein of ELP and superfolder green fluorescent protein (ELP-sfGFP). A combination of sodium chloride- and ethanol-induced precipitations was used to reliably purify and desalt ELP-sfGFP in a well-plate format. Finally, this purification procedure was applied to an ELP fusion construct capable of incorporating a library of Sso7d binding proteins with varying streptavidin binding affinity. Two Sso7d variants incorporated into this construct were found to have measurably different binding affinities both in dilute solution and in self-assembled films, demonstrating the capacity of this system to screen self-assembled materials for desired functional protein properties.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123726</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>In vivo steroid sensing using corona phase molecular recognition : design, synthesis, and applications</title>
<link>https://hdl.handle.net/1721.1/123725</link>
<description>In vivo steroid sensing using corona phase molecular recognition : design, synthesis, and applications
Lee, Michael A. (Michael Andrew)
Steroid hormones dictate a number of underlying biochemical processes controlling human physiology and disease. Changes in steroid hormone concentration and activity are often either indicators or direct causes of disease. In the clinic, steroids are measured as biomarkers of health and have been studied in relation to diseases including various cancers, endocrine disorders, and mental illnesses. However, measurements of these important signaling molecules are commonly restricted to the analysis of blood samples using chromatography or immunoassays, which lack temporal resolution and are labor-intensive. Given the dynamic behavior of these steroids, we argue in this thesis that their diagnostic value is highly constrained because of limitations of existing measurement technology. Hence, this thesis explores and develops the engineering tools for the design, synthesis, and application of a continuous biosensor capable of measuring steroid hormones in vivo in the human body. A pharmacokinetic model was developed describing the concentrations of cortisol throughout the body as a function of time under normal physiological conditions. Previous mathematical models and parameters describing cortisol production, circulation, and clearance were compiled and combined in a unified model, and used to describe cortisol values in the adrenal gland, blood, adipose, muscle, and brain. The model was validated against physiological literature and used to tune a theoretical affinity sensor implanted in the interstitial space of adipose in terms of its geometry, sensor site concentration, and binding kinetics/equilibrium. An optimal set of parameters was collected, and the same sensor was shown to operate robustly in both a healthy patient and a patient with Cushing's disease. 
A major conclusion of this portion of the thesis is that the sensor output of most value for this problem is an accurate measurement of the first derivative of concentration. To address sensor development experimentally, we develop a compositionally controlled, templated version of Corona Phase Molecular Recognition (CoPhMoRe) to produce unique molecular recognition sites for steroids. In the CoPhMoRe method, a single-walled carbon nanotube (SWNT) is wrapped with an amphiphilic polymer. The pinned polymers form a corona phase that modulates analyte binding. Upon analyte binding, the fluorescence spectrum may be modified in terms of its intensity and/or peak emission wavelengths. In this work, we synthesized a library of 16 polymers containing various amounts of acrylic acid, styrene, and a template cortisol molecule. The hypothesized mechanism was that the template cortisol monomer would occupy a free volume within the pinned polymer, producing a binding pocket in the approximate shape of a steroid and allowing free steroid to competitively displace the template and modulate fluorescence. Selective constructs were found for cortisol and progesterone. The progesterone sensor was translated to an implantable hydrogel form factor. Utilizing the reversibility of the sensor, we performed proof-of-concept experiments demonstrating the functionality of the progesterone sensor in an SKH1-E mouse. To examine potential application spaces, the feasibility of using CoPhMoRe sensors for aquatic organism biologging was explored. In recent years, biologging studies have attached sensors to animals to characterize environmental and animal-derived parameters as the animals behave normally in their environment. By collecting orthogonal datasets describing environmental parameters (e.g., temperature) and animal movement, biologists have elucidated a number of insights regarding migration, predator-prey relationships, reproduction, feeding, etc. 
Currently, however, biochemical information is underutilized and represents a potentially new frontier in biologging. In this study, we examined basic feasibility questions for the use of CoPhMoRe sensors in aquatic biologging. We developed implantation procedures for intramuscular delivery of CoPhMoRe hydrogel sensors and characterized the maximum implantation depth for extraction of the optical signal. Furthermore, we demonstrate that for best fluorescence extraction, hydrogels should be placed into lightly colored tissues. We also demonstrate generally favorable biocompatibility results, with implants causing no observable changes in physiology or behavior. In both human health and biologging applications, the biocompatibility of biomaterials is an important parameter that dictates an organism's tolerance of the material and the material's lifetime. For any long-term use of implantable biosensors, minimizing adverse tissue reactions is critical to prevent chemical modification of the sensor, dislodgement of the sensor from the implantation site, and encapsulation that creates increasing diffusional barriers preventing analytes from reaching the sensor surface. There have been a number of studies reporting varying degrees of cellular response depending on SWNT synthesis method, impurity content, SWNT wrapping, and cell type, but the effect of formulation has not been explored systematically. The same parameters that dictate cellular response (e.g., wrapping) also dictate which analytes can be tracked, so discovering orthogonal formulation parameters that can control tissue response while leaving SWNT sensing ability intact is critical. 
In this study, we tracked tissue responses to five different SWNT-hydrogel formulations to determine design rules that minimize tissue response. Through analysis of the cellular infiltrate, we found that decreasing the hydrogel pore size accelerated the healing process after gel implantation, though all hydrogels were equivalent in inflammatory status by day 28. Furthermore, we demonstrate that the acute inflammatory response has the potential to deactivate hydrogel sensors in a time-dependent manner, pointing to the importance of modulating the tissue response to maximize sensor longevity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123725</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical and experimental study of electrochemically mediated adsorption processes</title>
<link>https://hdl.handle.net/1721.1/123724</link>
<description>Theoretical and experimental study of electrochemically mediated adsorption processes
He, Fan, Ph.D., Massachusetts Institute of Technology.
Water treatment by electrosorption has attracted increasing interest in recent years due to its high efficiency, low cost, small footprint, and lack of required secondary treatment. Electrochemically mediated adsorption is one of the emerging electrosorption techniques. The great advantage of this technique is its superb selectivity (&gt;100), which allows selective removal of toxic ions or recovery of high-value target ions at low energy consumption, a result difficult to achieve with any other water treatment technique. First, to understand electrochemically mediated adsorption, a theoretical approach has been used. Models of hierarchical complexity have been developed in this thesis, ranging from a physics-based equivalent circuit model to a process-level lumped-parameter model and eventually to a full two-dimensional, time-dependent transport model; these models focus on different aspects of the process (thermodynamics, kinetics, and mass transport) and shed light on the mechanism of competitive electrosorption processes from the microscale to the macroscale. The theoretical framework also provides guidelines for designing redox-active materials and optimizing operating conditions to achieve high salt adsorption capacity as well as selectivity toward target micropollutants. Second, we study electrochemically mediated adsorption in practice. By setting up a continuous-flow experimental platform equipped with inline sensors, we developed methodologies for real-time measurement of salt adsorption performance and for quantifying the selectivity of the competitive electrosorption process. Asymmetric redox-active electrodes are designed, synthesized, and characterized in the flow system. The experimental study has demonstrated high salt adsorption capacity and ultra-high salt adsorption rate for brackish water deionization and, more importantly, selective removal of organic acids for wastewater remediation. 
Finally, we extend the concept and methodology of electrochemically mediated adsorption from surface-based (heterogeneous) electrosorption to bulk adsorption/regeneration with (homogeneous) complexation reactions. A theoretical framework is developed for a flow-through configuration and applied to carbon dioxide capture/release, which is of great practical interest. An improved reactor design has also been implemented and tested, demonstrating the feasibility of using the flow-through reactor for electrochemically mediated carbon capture and regeneration.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 238-248).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123724</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-based models of hysteresis in multiphase flow in porous media</title>
<link>https://hdl.handle.net/1721.1/123723</link>
<description>Physics-based models of hysteresis in multiphase flow in porous media
Gu, Zongyu, Ph.D., Massachusetts Institute of Technology.
We propose a novel probabilistic framework based on pore-scale probabilistic events to derive a theory of hysteresis in multiphase flow in porous media. In particular, we define the pore-space accessivity to contrast the serial and parallel arrangement of different-radius pore slices, and the radius-resolved saturations to detail the pore-scale distribution of immiscible fluids. We show that accessivity can be measured by mercury cyclic porosimetry. Our microscopic theory of hysteresis produces simple formulae that are suitable for use as hysteresis-enabling constitutive laws for capillary pressure and relative permeabilities in conventional continuum simulations of multiphase flow.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-177).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123723</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable materials from renewable protein feedstock and waste rubber</title>
<link>https://hdl.handle.net/1721.1/123722</link>
<description>Sustainable materials from renewable protein feedstock and waste rubber
Chan, Wui Yarn.
This thesis focuses on developing sustainable materials from underutilized feedstocks, namely proteins and waste rubber from used car tires. Valorization of proteins for use in engineering plastics can reduce reliance on fossil fuels for materials manufacturing, as well as increase the economic viability of agriculture and biorefinery processes. Strategies are proposed to address challenges in formulating protein-based plastics, such as protein feedstock and functional group diversity, difficulties in material processing, and undesirable physical and mechanical properties. Methods to devulcanize and recycle used tire rubber, one of the largest polymer waste sources, are also described. The first part of this thesis explores the use of proteins as reinforcing domains in thermoset elastomers. Copolymers were prepared by conjugating proteins to rubbery polymers, and the presence of both components had synergistic effects on the materials' mechanical properties. These protein-based crosslinked materials were prepared using a two-step approach, both steps of which are versatile and tolerant to the feedstock diversity and chemical functionality typical of protein biomass streams. Amine groups on the protein were first reacted with methacrylic anhydride in water. The proteins were then mixed and randomly copolymerized with a water-soluble (meth)acrylate comonomer that makes up the flexible soft segment. This grafting-through polymerization strategy was first demonstrated via a solution polymerization method with whey protein and water-soluble monomers, and the resulting materials were shown to have mechanical performance comparable to that of some biomass-based polyurethanes. To eliminate the need for post-processing solvent evaporation, the method was further extended to enable melt polymerization with hydrophobic monomers. Difficulties with thermoforming protein-based materials were addressed by using surfactants as plasticizers to lower the softening points of the proteins. 
The surfactants also functioned as compatibilizers, allowing protein blends and conjugates to be formulated with non-water-soluble polymers, resulting in materials with lowered overall hydrophilicity. Screening studies showed that the protein-surfactant complexation and polymerization approaches are generalizable across many combinations of proteins, ionic surfactants, and vinyl monomers. As proteins typically have multiple copies of reactive functional groups, efforts at developing protein-based commodity plastics have focused almost entirely on chemically crosslinked networks. The synthesis of a novel thermoplastic protein-copolymer elastomer is described. Diblock copolymers were prepared by site-selectively conjugating a RAFT agent to the protein N-terminus, followed by polymerization of the rubbery polymer segment via a grafting-from approach. The materials exhibited thermoplastic behavior and were thermally reprocessable. The last part of this thesis presents alternative feedstocks for manufacturing materials. First, an engineered protein expressed in high yields in E. coli, recombinant cyanophycin, was investigated. This zwitterionic protein was found to be brittle in the dry state, and demonstrates both upper and lower critical solution temperature behavior in solution. The high charge density and thermoresponsiveness of cyanophycin could potentially be harnessed in material design. Lastly, methods to recycle waste rubber were explored to process ground vulcanized rubber particles into new rubber sheets. Sheets containing high fractions of recycled rubber were prepared using a bulk devulcanization approach. Ground rubber particles were melt-mixed with nucleophiles that may selectively break sulfur crosslinking bonds, enabling the once-crosslinked rubber to be thermally processable. 
In addition, methods to increase the bond strength at the interfaces of virgin and once-cured rubber were shown to improve mechanical performance of rubber containing recycled material.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123722</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-cell analyses of tumor-infiltrating immune cells</title>
<link>https://hdl.handle.net/1721.1/123721</link>
<description>Single-cell analyses of tumor-infiltrating immune cells
Lam, Lionel K.W.(Lionel Kar Wei)
Immunotherapies and targeted therapies are emerging as promising methods of treating cancer, with high response rates and low associated side effects when compared with more traditional methods of cancer treatment. Across these therapies, it has been found that high response rates are correlated with the presence of specific biomarkers that may be cellular, protein-based, or genomic in nature. Specifically, the discovery of cell-based biomarkers via the study of patient biopsies and other related samples remains a key problem due to the limited numbers of cells involved, and to current methodologies that do not lend themselves well to the discovery of novel cell subpopulations. In this thesis, we investigate the use of various immunophenotypic and transcriptomic single-cell assays to characterize tumor-infiltrating immune cells from mice exhibiting differential responses to anti-PD-1 immunotherapy. Data analysis pipelines that allow for the mining of this data for novel cell subpopulations are also discussed. Based on our immunophenotypic analysis (multispectral image-based cytometry), we have discovered subpopulations of CD8+ T cells harboring repertoires of immunomodulatory receptors (GITR, CD44, LAG-3) that are enriched upon anti-PD-1 treatment. We have also detected subpopulations of cells resembling B cells and dendritic cells in mice known to show positive responses to anti-PD-1 immunotherapy. This immunophenotypic data was corroborated by single-cell RNA-Seq data obtained via Seq-Well. By clustering the single-cell libraries obtained according to gene signature scores, we identified distinct high-level families of immune cells in specific tumor categories. Downstream differential gene expression analyses on the T cells across tumor categories revealed actionable targets that correlated with response to anti-PD-1 immunotherapy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis. "February 2018."; Includes bibliographical references (pages 124-128).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123721</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven pricing and inventory management with applications in fashion retail</title>
<link>https://hdl.handle.net/1721.1/123720</link>
<description>Data-driven pricing and inventory management with applications in fashion retail
Nambiar, Mila.
Fashion retail is typically characterized by (1) high demand uncertainty and products with short life cycles, which complicate demand forecasting, and (2) low salvage values and long supply lead times, which penalize inaccurate demand forecasting. In this thesis, we are interested in the design of algorithms that leverage fashion retail data to improve demand forecasting, and that make revenue-maximizing or cost-minimizing pricing and inventory management decisions. First, we study a multi-period dynamic pricing problem with feature information. We are especially interested in demand model misspecification, and show that it can lead to price endogeneity, and hence to inconsistent price elasticity estimates and suboptimal pricing decisions. We propose a "random price shock" (RPS) algorithm that combines instrumental variables, well known in econometrics, with online learning, in order to simultaneously estimate demand and optimize revenue. We demonstrate strong theoretical guarantees on the regret of RPS for both IID and non-IID features, and numerically validate the algorithm's performance on synthetic data. Next, we present a case study in collaboration with Oracle Retail. We extend RPS to incorporate common business constraints such as markdown pricing and inventory constraints. We then conduct a counterfactual analysis where we simulate the algorithm's performance using fashion retail data. Our analysis estimates that the RPS algorithm will increase revenue by 2-7% relative to current practice. Finally, we study an inventory allocation problem in a single-warehouse multiple-retailer setting with lost sales. 
We show that under general conditions this problem is convex, and that a Lagrangian relaxation-based approach can be applied to solve it in a computationally tractable and near-optimal way. This analysis allows us to prove structural results that give insights into how the allocation policy should depend on factors such as the retailer demand distributions and demand learning.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-171).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123720</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physiological and behavioral responses, and their variability, in squid, Doryteuthis pealeii, embryos and paralarvae reared under chronic ocean acidification</title>
<link>https://hdl.handle.net/1721.1/123719</link>
<description>Physiological and behavioral responses, and their variability, in squid, Doryteuthis pealeii, embryos and paralarvae reared under chronic ocean acidification
Zakroff, Casey James.
Ocean acidification (OA) and related stressors, like warming, are occurring rapidly in coastal systems. There is concern about the impacts these stressors may have on the early development of species that use the nearshore as nursery habitat. The inshore longfin squid, Doryteuthis pealeii, plays an important role in the northwest Atlantic food web, and annually lays its eggs in the nearshore benthos during summer. This thesis sought to characterize morphological, physiological, and behavioral responses of D. pealeii embryos and paralarvae to OA. Experiments began in 2013, when I exposed squid eggs to a range of acidification levels (400-2200 ppm CO₂) to uncover when the dosage impacts first appear (around 1300 ppm). To do this, I developed multiple methods to better characterize the morphological changes and surface degradation of statoliths due to acidification. This initial work demonstrated small-scale variability in response intensity, across hatching days and the breeding season. I ran swimming behavior experiments with subsampled paralarvae from 2013-2015 and developed a novel 3D recording and analysis tracking system in the process. The 2D data from 2013 showed significant decreases in time spent near the surface, while 3D data in subsequent years showed slight impacts to activity and swimming velocity with increasing acidification. Overall, I ran experiments from 2013-2016, and compiled and compared these data using response ratios. I show that seasonal temperatures impact the baseline state of the paralarvae through parental condition, while acidification sensitivity appears driven by parental year class. 
Finally, I examined the interaction of acidification stress with warming, demonstrating an antagonistic relationship between these stressors for this life stage of this squid. These data indicate that acidification builds as a stressor, impacting late stages of embryonic development, while warming impacts embryos early in development, and likely reduces acidification impacts by decreasing development time. This dissertation demonstrates that while the embryonic and paralarval stages can be sensitive to acidification, the population's high fecundity and variable resistance at multiple temporal scales allow for substantial potential resilience to a changing ocean in this population of squid.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Biology; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 257-278).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123719</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alternatively spliced isoforms of Fibronectin, Tenascin-R and other potential players in early vasculogenesis</title>
<link>https://hdl.handle.net/1721.1/123718</link>
<description>Alternatively spliced isoforms of Fibronectin, Tenascin-R and other potential players in early vasculogenesis
Nguyen, Thao Huong.
The absence of both the EIIIA and EIIIB domains of fibronectin (FN) has been shown to negatively affect blood vessel formation and maintenance. Vascular defects have been observed in the yolk sacs of EIIIA/B double-null embryos by as early as embryonic day E9.5, and these defects are likely due to alterations in the extracellular matrix (ECM). Therefore, I have conducted this study to investigate the differences in the ECM composition of the yolk sac in the presence and absence of EIIIA and EIIIB. I first collected yolk sacs at E9.5 from wild type, EIIIA-null, EIIIB-null, EIIIA/B heterozygous and EIIIA/B double-null mouse embryos, enriched for ECM content, and used quantitative proteomics to analyze their ECM composition. From these data, we identified a set of matrisome proteins that had decreased abundance in EIIIA/B double-null yolk sacs but were relatively unchanged in single-null and heterozygous yolk sacs compared to wild type. Some of these proteins could play a role in ECM remodeling or directly affect angiogenesis, and their reduced level in double-null yolk sacs might contribute to the vascular defects seen in the double-null tissue. Subsequently, I carried out further studies with tenascin-R (TN-R), one of the proteins that was downregulated in the ECM of EIIIA/B double-null yolk sacs. TN-R has been previously described to be restricted to the central nervous system, and our finding of TN-R in the yolk sac is novel. TN-R is localized to the mesoderm layer of yolk sacs. TN-R fibers partially overlap with FN, and TN-R area coverage in EIIIA/B double-null yolk sacs is decreased compared to wild type, suggesting that the presence of EIIIA/B promotes TN-R assembly in the yolk sac ECM. 
In addition, TN-R colocalizes with blood vessels in both the yolk sac and the retina, suggesting that TN-R might participate in vasculogenesis and angiogenesis at these locations. Together, this study extends our understanding of yolk sac ECM, provides insight into the role of EIIIA and EIIIB domains, identifies novel expression patterns of ECM proteins, and opens up the possibility of a novel function for TN-R.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123718</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic analysis of cGMP-dependent chemosensory signal transduction pathways in the detection of bacterial metabolites by C. elegans</title>
<link>https://hdl.handle.net/1721.1/123717</link>
<description>Genetic analysis of cGMP-dependent chemosensory signal transduction pathways in the detection of bacterial metabolites by C. elegans
Park, Jaeseok, Ph.D., Massachusetts Institute of Technology.
The ability of metazoans to sense and interpret the external chemical environment is conferred by the chemosensory nervous system governing the senses of smell and taste. Chemosensory neurons convey external stimuli in the form of electrical impulses, as well as by changes in gene expression. This thesis describes the genetic elucidation of a Caenorhabditis elegans molecular pathway that transduces the presence of pathogens to activate the transcription of a neuroendocrine ligand. Previously, our group showed that secondary metabolites produced by the pathogen P. aeruginosa cause an expression pattern change of the gene coding for the C. elegans TGF-beta ligand DAF-7. Using forward and reverse genetic approaches, we identified several cGMP-related components that are essential for the pathway, including a subunit of cyclic nucleotide-gated channels, CNG-2, and a cGMP-dependent kinase, EGL-4. We show that while CNG-2 induces daf-7 expression in a calcium-dependent manner, EGL-4 likely works in a calcium-independent manner to regulate daf-7 expression. Our data suggest that EGL-4 acts by selectively promoting the transcription of neuronal genes in response to appropriate stimuli. In a separate set of experiments, we also showed that the expression of daf-7 is discretely regulated by different classes of intraflagellar transport proteins that function in cilia.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; "September 2019." Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123717</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New modeling of compact, high-efficiency, and widely-tunable gas-phase terahertz lasers</title>
<link>https://hdl.handle.net/1721.1/123716</link>
<description>New modeling of compact, high-efficiency, and widely-tunable gas-phase terahertz lasers
Wang, Fan, Ph.D., Massachusetts Institute of Technology.
The terahertz region, in the heart of the electromagnetic spectrum, has been the least utilized, in part due to inadequacies of available sources. Optically pumped far-infrared (OPFIR) lasers were among the most powerful continuous-wave terahertz sources. However, such lasers have long been thought to be intrinsically inefficient, not tunable in frequency, and large in size. In this thesis, we introduce a compact, frequency-tunable source of terahertz radiation with high efficiency. We first present both an innovative theoretical model and experimental validation of a methyl fluoride OPFIR laser at 0.25 THz that exhibits 10x greater efficiency and 1,000x smaller volume than the best commercial lasers. Unlike previous OPFIR-laser models involving only a few energy levels that failed even qualitatively to match experiments at high pressures, our ab-initio theory matches experiments quantitatively, within experimental uncertainties with no free parameters, by accurately capturing the interplay of millions of degrees of freedom in the laser. Moreover, we demonstrate a widely frequency-tunable, compact source of terahertz radiation using laughing gas (nitrous oxide, N₂O) pumped by a quantum cascade laser (QCL). In experiments, broad tunability is achieved over 31 lines spanning 0.25-0.80 THz, each with kilohertz linewidths. Our comprehensive theoretical model is able to constrain the key molecular parameters and predict the optimal performance of the laser. The QCL-pumped molecular laser (QPML) is a universal yet revolutionary concept characterized by unprecedented frequency tunability over a wide range of rotational transitions using a single molecular gas as the gain medium. An analytical theory for the QPML is presented to study the key factors for improving the laser performance. We believe that these developments will revive interest in optically pumped molecular lasers as a powerful, tunable, and compact source of terahertz radiation.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 129-134).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123716</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prescriptive analytics in operations problems : a tree ensemble approach</title>
<link>https://hdl.handle.net/1721.1/123709</link>
<description>Prescriptive analytics in operations problems : a tree ensemble approach
Biggs, Max(Max Ray)
The main contributions of this thesis concern addressing challenges in the field of prescriptive optimization, and how machine learning techniques can be incorporated into solving data-driven operational optimization problems. In chapter 2, we provide a data-driven study of the secondary ticket market. In particular, we are primarily concerned with accurately estimating price sensitivity for listed tickets. We propose a semi-parametric model for measuring heterogeneous treatment effects using the concept of orthogonalization in the classification setting, and derive a novel loss function which can be solved using a range of off-the-shelf machine learning methods. Over a wide range of synthetic data experiments, we show how this approach beats state-of-the-art machine learning and causal inference methods for estimating treatment effects in classification tasks. In chapter 3, we show how to solve optimization problems with random forest objective functions and general polyhedral constraints. We show how to formulate this problem using MIO techniques and show that this formulation can be decomposed and solved iteratively using Pareto-optimal Benders cuts. We also provide analytical guarantees on an approach that approximates a large-scale random forest optimization problem by optimizing over a smaller forest, and develop heuristics based on ideas from cross validation. In chapter 4, we study a new problem where nurse practitioners need to be dynamically routed to patients' houses as service requests are received. We show how to solve this problem using Approximate Dynamic Programming (ADP) and develop methods to solve ADPs with combinatorial action spaces and non-linear cost-to-go functions approximated using a tree or tree ensemble approximation. In chapter 5, we propose a Markov Decision Process (MDP) model for tramp shipping that captures the dynamic and stochastic nature of spot cargo availability. 
We propose a novel methodology for solving this MDP in a tractable way, by introducing a ranking algorithm which is equivalent to solving the DP. We show that our ranking algorithm outperforms several benchmarks and the average performance of ships operating on the spot market in practice by between 4% and 32%.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 227-241).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123709</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational and statistical challenges in high dimensional statistical models</title>
<link>https://hdl.handle.net/1721.1/123708</link>
<description>Computational and statistical challenges in high dimensional statistical models
Zadik, Ilias
This thesis focuses on two long-studied high-dimensional statistical models, namely (1) the high-dimensional linear regression (HDLR) model, where the goal is to recover a hidden vector of coefficients from noisy linear observations, and (2) the planted clique (PC) model, where the goal is to recover a hidden community structure from a much larger observed network. The following results are established. First, under certain assumptions, we identify the exact statistical limit of the model, that is, the minimum signal strength allowing statistically accurate inference of the hidden vector. We couple this result with an all-or-nothing information-theoretic (IT) phase transition. We prove that above the statistical limit, it is IT possible to almost perfectly recover the hidden vector, while below the statistical limit, it is IT impossible to achieve non-trivial correlation with the hidden vector. Second, we study the computational-statistical gap of the sparse HDLR model: the statistical limit of the model is significantly smaller than its apparent computational limit, which is the minimum signal strength required by known computationally efficient methods to perform statistical inference. We propose an explanation of the gap by analyzing the Overlap Gap Property (OGP) for HDLR. The OGP is known to be linked with algorithmic hardness in the theory of average-case optimization. We prove that the OGP for HDLR appears, up to constants, simultaneously with the computational-statistical gap, suggesting the OGP is a fundamental source of algorithmic hardness for HDLR. Third, we focus on noiseless HDLR. Here we do not assume sparsity, but we make a certain rationality assumption on the coefficients. In this case, we propose a polynomial-time recovery method based on the Lenstra-Lenstra-Lovász lattice basis reduction algorithm. We prove that the method obtains notable guarantees, as it recovers the hidden vector using only one observation. 
Finally, we study the computational-statistical gap of the PC model. Similar to HDLR, we analyze the presence of OGP for the PC model. We provide strong (first-moment) evidence that again the OGP coincides with the model's computational-statistical gap. For this reason, we conjecture that the OGP provides a fundamental algorithmic barrier for PC as well, and potentially in a generic sense for high-dimensional statistical tasks.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 289-301).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123708</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven optimization with behavioral considerations : applications to pricing</title>
<link>https://hdl.handle.net/1721.1/123707</link>
<description>Data-driven optimization with behavioral considerations : applications to pricing
Hariss, Rim.
This thesis aims to introduce descriptive and predictive models that guide more informed pricing strategies in practice, drawing on interdisciplinary work spanning current operations management, behavioral theories, and recent machine learning advances. In chapter 2, we integrate a consumer purchase experiment and an analytical model to investigate how consumers' price-based quality perception, expected markdown, and a product's availability information influence a retailer's markdown pricing strategy. We subsequently develop a consumer model that incorporates consumers' price-based quality perception observed from the experimental data and consumers' potential loss aversion. We embed this consumer model into the retailer's markdown optimization and examine the impact of these behavioral factors on the retailer's optimal strategy. In chapter 3, we study a retailer's optimal promotion strategy when demand is affected by different classes of customers' status in the rewards program and their heterogeneous redemption behavior. We formulate the retailer's problem as a dynamic program and prove that a unique optimal threshold discounting policy exists. We also propose an approximation algorithm that expresses the optimal price as a convex combination of the optimal prices for each class separately. Using data from a fast food chain, we assess the performance of the algorithm and the optimal pricing compared to current practice. In chapter 4, we are concerned with accurately estimating price sensitivity for listed tickets in the secondary market. In the presence of endogeneity, binary outcomes and non-linear interactions between ticket features, we introduce a novel loss function which can be solved using several off-the-shelf machine learning methods. On a wide range of synthetic data sets, we show that our approach beats state-of-the-art machine learning and causal inference approaches for estimating treatment effects in the classification setting. 
In chapter 5, we consider an optimization problem with a random forest objective function and general polyhedral constraints. We formulate this problem using Mixed Integer Optimization techniques and show it can be solved to optimality efficiently using Pareto-optimal Benders cuts. We prove analytical guarantees for a random forest approximation that consists of only a subset of trees. We also propose heuristics inspired by cross-validation and assess their performance on two real-world case studies.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 227-241).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123707</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The assembly and functions of microbial communities on complex substrates</title>
<link>https://hdl.handle.net/1721.1/123706</link>
<description>The assembly and functions of microbial communities on complex substrates
Yu, Xiaoqian, Ph.D., Massachusetts Institute of Technology.
Microbes form diverse and complex communities that influence the health and function of all ecosystems on Earth. However, key ecological and evolutionary processes that allow microbial communities to form and maintain their diversity, and how this diversity further affects ecosystem function, are largely underexplored. This is especially true for natural microbial communities that harbor large numbers of species whose interactions are often the result of long-term evolutionary processes of co-occurring organisms. In this thesis, I make use of "common garden experiments" -- introducing varying microbial communities to the same environments -- to investigate how the assembly and functions of natural microbial communities are affected by the diversity of communities, as well as the chemical nature of the substrates that they assemble on. In the first project, I present an experimental workflow that streamlines the generation of self-assembled microbial communities with a wide range of diversity, the measurement of community function in "common gardens", and the subsequent isolation of the most abundant taxa from these communities via dilution-to-extinction. This high-throughput workflow is applied to assess how interactions scale with organismal diversity to affect the function of microbial communities from the coastal ocean. In the second project, I use a combination of theoretical models and an ex vivo experimental framework to examine how the volume and content of gas produced by gut microbiota assembling on different prebiotic substrates ("gardens") are influenced by the chemical nature of the substrate and the composition of the gut microbiota itself. As a whole, this body of work represents a small step towards finding common organizing principles in microbial community assembly and their functional consequences.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 105-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123706</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic and metabolomic analysis of how population density modulates neuroendocrine physiology of C. elegans</title>
<link>https://hdl.handle.net/1721.1/123705</link>
<description>Genetic and metabolomic analysis of how population density modulates neuroendocrine physiology of C. elegans
Wong, Spencer S.(Spencer Sai Git)
In nature, organisms are presented with ever-changing environmental stimuli that can be beneficial or detrimental. Distinguishing the critically important environmental cues from the insignificant ones is integral to the survival of the organism. These cues may be based on chemical communication: compounds secreted by other organisms, of both the same and different species, present in the same area. Chapter One of this thesis discusses chemical communication, specifically in Caenorhabditis elegans, throughout development, mating, and foraging for novel food sources. It also extends the description of chemical communication to include a wide variety of organisms from bacteria to vertebrates in various applications. In Chapter Two, we discuss the importance of insulin-like signaling in determining the reproductive duration of adult C. elegans. We show that the expression of an insulin-like peptide, INS-6, is downregulated in the presence of crowding and is mediated by ascaroside pheromones. The addition of synthetic ascarosides, as well as mutations that disrupt ins-6, increase the reproductive span of C. elegans. In Chapter Three, we analyze expression changes of the TGF-[beta]-like ligand, DAF-7, in ASJ. Based on previous work that showed daf-7 is expressed in ASJ upon exposure to Pseudomonas aeruginosa, we expanded the bacterial substrates that produce this phenotype, focusing on the nonpathogenic Gram-positive bacterium Bacillus subtilis. We also show that wild isolates of C. elegans vary in their expression of daf-7 in ASJ, even on E. coli, which induces no daf-7 expression in laboratory wild-type strains. The induction of daf-7 in ASJ by B. subtilis is modulated by population density-dependent cues, through the canonical ascaroside pheromones and daf-22-independent pheromones. The expression of daf-7 in ASJ is also sensitive to secreted natural products from Pristionchus pacificus, a predator of C. 
elegans. In Chapter Four, I present ideas for expanding on the findings within these two projects. Specifically, I focus on the chemical identification of the modulators of daf-7 and the global transcriptional responses in ASI and ASJ upon elevated population density.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123705</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering orthogonal signaling pathways to probe sequence space capacity</title>
<link>https://hdl.handle.net/1721.1/123704</link>
<description>Engineering orthogonal signaling pathways to probe sequence space capacity
McClune, Conor James.
Gene duplication is a common and powerful mechanism by which cells create new signaling pathways, but recently duplicated proteins typically must become insulated from each other, and from other paralogs, to prevent unwanted cross-talk. A similar challenge arises when new sensors or synthetic signaling pathways are engineered within cells or transferred between genomes. How easily new pathways can be introduced into cells depends on the density and distribution of paralogous pathways in the sequence space defined by their specificity-determining residues. Here, I directly probe how crowded sequence space is by generating novel two-component signaling proteins in Escherichia coli, using cell sorting coupled with deep sequencing to analyze large libraries designed based on coevolution patterns. I produce 58 new insulated pathways, in which functional kinase-substrate pairs have different specificities than the parent proteins, and demonstrate that several new pairs are orthogonal to all 27 paralogous pathways in E. coli. Additionally, I readily identify sets of 6 novel kinase-substrate pairs that are mutually orthogonal to each other, significantly increasing the two-component signaling capacity of E. coli. These results indicate that sequence space is not densely occupied. The relative sparsity of paralogs in sequence space suggests that new, insulated pathways can easily arise during evolution or be designed de novo. I demonstrate the latter by engineering a new signaling pathway in E. coli that responds to a plant cytokinin without cross-talk to extant pathways. The work in this thesis also demonstrates how coevolution-guided mutagenesis and sequence-space mapping can be used to design large sets of orthogonal protein-protein interactions.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; "August 2019." Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123704</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of the roles of Xrn1p in small-RNA-mediated gene-silencing pathways</title>
<link>https://hdl.handle.net/1721.1/123703</link>
<description>Characterization of the roles of Xrn1p in small-RNA-mediated gene-silencing pathways
Getz, Matthew Aaron.
RNA interference (RNAi) is a small-RNA-mediated gene-silencing pathway that is involved in viral defense, transposon silencing, heterochromatin formation, and post-transcriptional gene silencing. Most RNAi pathways are initiated by long dsRNA, which is processed by Dicer into small interfering RNAs (siRNAs). These siRNAs are then loaded into the effector protein Argonaute, forming the RNA-induced silencing complex (RISC). RISC is then able to silence its target RNAs in a variety of ways, including by slicing them. RNAi is ubiquitous in eukaryotes, with pathways found in plants, animals, and fungi, suggesting its early origins and importance. Despite the usefulness of this pathway as a mechanism of genomic defense, the model budding yeast species Saccharomyces cerevisiae does not possess an RNAi pathway. Several related budding yeast species do, however, including Naumovozyma castellii and Vanderwaltozyma polyspora. Each of these species possesses orthologs of Dicer and Argonaute and has a population of 21-23-nt siRNAs that map to repetitive regions of the genome, including Ty transposable elements and Y' subtelomeric repeats. Disrupting either Dicer or Argonaute causes the loss of these small-RNA populations. Additionally, RNAi in N. castellii can silence an exogenous GFP gene. Over-expressing Dicer and Argonaute in S. cerevisiae restores a functioning RNAi pathway that can silence endogenous transposable elements and an exogenous GFP gene. To identify other factors that act in the budding-yeast silencing pathway, we performed an unbiased genetic selection in N. castellii. This selection identified Xrn1p, the cytoplasmic 5'-to-3' exoribonuclease, as a cofactor of RNAi in budding yeast. Deletion of XRN1 impaired gene silencing in N. castellii, and this impaired silencing was the result of multiple functions of Xrn1p. These functions include affecting the abundance of different siRNA species in the cell, influencing the efficiency of loading these siRNAs into Argonaute, degrading the cleaved passenger strand, and degrading the cleaved target RNA. XRN1 has also been implicated in miRNA-mediated silencing in human cells. We found that disrupting XRN1 in a human cell line had no effect on the levels of mature miRNAs or their passenger strands but did de-repress miRNA targets, suggesting that in the miRNA pathway, XRN1 functions to degrade target mRNAs.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123703</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endoribonuclease toxin-antitoxin systems in bacteria : targets and growth inhibition</title>
<link>https://hdl.handle.net/1721.1/123702</link>
<description>Endoribonuclease toxin-antitoxin systems in bacteria : targets and growth inhibition
Culviner, Peter Holmes.
Toxin-antitoxin (TA) systems are widely distributed genetic modules that can reversibly inhibit the growth of their host bacterium. The toxin and antitoxin are encoded together in an operon, and the antitoxin directly binds the toxin, preventing its activity. Under stressful conditions, the antitoxin may be degraded, allowing the toxin to inhibit growth. Bacteria often encode many copies of these mysterious systems, which have been suggested to play roles in a myriad of processes, including plasmid maintenance, survival through antibiotic stress, growth regulation, and defense against bacteriophage. However, how TA systems might accomplish these diverse feats is not well understood. The toxic element of many of these systems is an endoribonuclease. In this work, I characterize the RNA targets of 9 endoribonuclease toxins encoded by the bacterium Escherichia coli. Previous studies had shown that the toxin MazF creates a pool of leaderless mRNAs that are preferentially translated by specialized ribosomes generated through MazF cleavage of the mature 16S rRNA. In my first project, I developed an RNA-sequencing-based pipeline to identify and quantify MazF cleavage across the transcriptome. I found that, in vivo, MazF does not generate appreciable quantities of specialized ribosomes or leaderless transcripts. Instead, it degrades a large portion of E. coli transcripts, preventing their proper translation. Further, I found that MazF strongly inhibits the biogenesis of new ribosomes through both cleavage of nascent rRNA and inhibition of ribosomal protein synthesis. In my second project, I expanded this work to 8 other endoribonuclease toxins. I found that, like MazF, these toxins degrade a significant portion of E. coli transcripts, leading to a global inhibition of translation. Of particular interest, a number of these toxins are incapable of cleaving untranslated RNA such as rRNA but are still able to inhibit ribosome biogenesis, likely by degrading ribosomal protein transcripts. I conclude that endoribonuclease toxins are efficient inhibitors of the synthesis of macromolecular complexes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123702</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aneuploidy reveals insights into control of protein complex stoichiometry</title>
<link>https://hdl.handle.net/1721.1/123701</link>
<description>Aneuploidy reveals insights into control of protein complex stoichiometry
Brennan, Christopher M.
Aneuploidy, or an incorrect number of chromosomes, is caused by errors in chromosome segregation during cell division. Because genes are expressed in accordance with their copy number, aneuploidy simultaneously alters the gene dosage of hundreds to thousands of genes. The outcome is an imbalanced proteome, which negatively impacts cellular physiology and places intense demand on the cell's protein quality control system to effectively fold and/or degrade proteins. Aneuploidy further represents an ideal model for studying how cells cope with imbalances in their proteome, as it allows interrogation of the fate of hundreds to thousands of imbalanced proteins simultaneously. Here, we identify protein complex stoichiometry imbalances as a major cause of protein aggregation in aneuploid cells. Subunits of protein complexes encoded on excess chromosomes aggregate in aneuploid cells; this aggregation is suppressed when expression of the other subunits is coordinately altered. We further show that excess subunits are either degraded or aggregate, fates that are largely mutually exclusive for individual subunits. We also demonstrate that protein aggregation is nearly as effective as protein degradation at lowering levels of excess proteins. Our study explains why proteotoxic stress is a universal feature of the aneuploid state and reveals protein aggregation as a form of dosage compensation for coping with disproportionate expression of protein complex subunits. This work informs our understanding of aneuploid cell physiology and provides a more complete picture of how aneuploid and euploid cells cope with stoichiometric imbalances, namely that protein aggregation can function as a protein quality control mechanism.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123701</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hello from the other side; I'll compromise a thousand times : how minority party governors win unlikely elections</title>
<link>https://hdl.handle.net/1721.1/123634</link>
<description>Hello from the other side; I'll compromise a thousand times : how minority party governors win unlikely elections
Goldberg, Megan Elizabeth.
Despite increasing nationalization and polarization in state politics, some blue states continue to elect Republican governors, while some red states continue to elect Democratic governors. These minority party governors are out of step with their state's mass ideology and partisanship, even though research on public opinion and voting behavior suggests that voters should find it fairly easy, using partisan cues and other information shortcuts, to elect officials who match their own ideology. How do these governors win elections and maintain relatively high approval ratings? Using data and text from gubernatorial social media accounts from 2009 through 2017, this dissertation argues that minority party governors are able to defy electoral odds by distancing themselves from their national party: they use language that downplays their partisan identity, signal ideological moderation, and emphasize non-ideological valence issues such as the economy, good governance, and public health to shift the focus away from highly partisan issues.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 117-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123634</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mesomatters - design, manufacture and interact with architected mesoscopic materials</title>
<link>https://hdl.handle.net/1721.1/123631</link>
<description>Mesomatters - design, manufacture and interact with architected mesoscopic materials
Ou, Jifei.
Between traditional industrial design, which operates at the macro scale (cm to m), and material engineering, which operates at the micro/nano scale ([mu]m to nm), lies the emerging design space of the mesoscale. While the definition of the mesoscale varies across disciplines, mesoscale materials are usually considered to lie between the molecular and macroscopic length scales. It is the scale of a human hair or a grain of sand, the scale where material properties meet human perception and the rational meets intuition. In the past 10 years, additive manufacturing, especially 3D printing, has enabled designers to directly manipulate geometries at this scale. Yet existing design and manufacturing approaches have not been able to unleash the full potential of mesoscale materials for the design world. This thesis proposes computational tools and an additive manufacturing apparatus to enable the creation and fabrication of materials at the mesoscale. The ability to programmably assemble materials with tailored structures at the centimeter, millimeter, and micrometer length scales enables tunable mechanical and electrical properties. Those properties determine not only the static performance but also, when energized, the dynamic behavior of a material. The resulting material performance and behavior allow us to design unprecedented objects and environments with input (sensing) and output (actuation) capabilities, which can be integrated for the next generation of interaction design. I first introduce three translations to bridge a material's microscopic properties with macroscopic interface design. Four research projects (bioLogic, KinetiX, SensorKnit, and Cilllia) are presented to embody these translations. I then propose an implementation workflow for additive manufacturing of mesoscopic materials. The implementation is presented through my ongoing research project Cilllia, 3D-printed functional hair structures. Cilllia investigates a scalable digital representation of hierarchical tunable materials, a CAD software interface for material design, and a DLP-based 3D printer that allows for continuous material production. The tools for creating Cilllia can be expanded to other types of architected mesoscale materials; four examples will be presented. Together, they support the vision of a general digital description and physical production system for architected mesoscale materials.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 136-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123631</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policing and the rule of law in weak states : evidence from Liberia</title>
<link>https://hdl.handle.net/1721.1/123630</link>
<description>Policing and the rule of law in weak states : evidence from Liberia
Morse, Benjamin Sherman.
How can states with limited resources build citizens' trust in the police? How can they ensure the primacy of the police and courts over customary alternatives in peripheral regions long accustomed to autonomy? In urban areas plagued by high levels of crime and insecurity, how can they reduce reliance on vigilantism and extrajudicial justice? My dissertation explores these questions through a series of three essays on policing in Liberia. The first reports results from a large-scale randomized controlled trial evaluating the Liberian National Police's "Confidence Patrols" community policing program in rural Liberia. I find that the program was successful at increasing knowledge of the police and courts, enhancing security of property rights, and increasing crime reporting, but that it also led to backlash from customary chiefs and members of Liberia's traditional society (who are privileged under customary law), possibly because they felt their interests would be threatened by greater access to the state. My second paper evaluates the effectiveness of community policing in an urban setting, with a particular focus on whether community policing combined with the opportunity to form "Watch Forums" can redirect communities away from vigilantism towards lawful activities that complement the efforts of the police. I find that the intervention improved police-community relations, reduced support for vigilantism, and mobilized communities to participate in the Watch Forum initiative. I further find that these improvements were accompanied by a roughly 40 percentage point reduction in the incidence of mob violence. I conclude that integrating local communities into formal policing practices is a potentially promising strategy for reducing vigilantism and promoting compliance with the rule of law in countries like Liberia. The third and final paper tests whether citizens expect the police to discriminate against victims of crime on the basis of their class, religion, or (lack of) personal connections to powerful government officials, and whether this in turn discourages crime reporting. I find that citizens expect discrimination on the basis of class and political connections, but that these expectations do not appear to influence the likelihood of crime reporting among actual victims of crime.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 269-280).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123630</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable synthetic hallucinations : towards a boundless mixed reality</title>
<link>https://hdl.handle.net/1721.1/123629</link>
<description>Programmable synthetic hallucinations : towards a boundless mixed reality
Novy, Daniel E.(Daniel Edward)
Programmable Synthetic Hallucinations describes the use of the bio-physiological mechanics of hallucination in the human brain to display virtual information directly in the visual field. Science fiction films, television shows, and video games have trained audiences to think of holograms as luminous volumetric images that float registered in the viewer's 3D space and require no special glasses or optics to see or interact with. Enabling users to interact with a floating aerial lightfield without face-worn binocular optics is a difficult challenge, and one for which a hallucinatory experience offers a solution. While we do not have the ability to activate individual neurons to recreate a neuro-electrical pattern indiscernible from the perception of reality, this dissertation shows that creating phosphenes within the visual field via magnetic stimulation of neurons in the visual cortex is a viable first step. By electrically stimulating the cells in the hypercolumns of V1, one can induce the perception of a pixel of light within the visual field of a user. These magnetophosphenes are visual perceptions described as luminous shapes, which can be created by time-varying magnetic fields that change the membrane potential and trigger an action potential directly in neurons of the visual cortex. Previous TMS studies have evoked phosphenes in a binary manner, with subjects reporting the presence or absence of a phosphene but not phosphenes targeted to a specific location. To date, no information or example has been found indicating the use of cortical phosphenes, induced magnetically or otherwise, in performance or public display. Presently, commercial transcranial magnetic stimulators can be focused only to an area approaching one square centimeter, offer a single output channel, and require manual placement of the coil apparatus. Novel coil designs therefore became a central focus of this research. Further work increased the number of output channels, embedding them in a wearable apparatus with a multichannel array of induction coils. Clinical trials were undertaken at MIT's Clinical Research Center. We were able to evoke visual phenomena in 11 out of 16 test subjects in a known, targeted location. The induced magnetophosphenes were noted above the noise floor of naturally occurring retinal phosphenes and were statistically verified to be a result of the system being tested.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis. "June 2019." Vita.; Includes bibliographical references (pages 118-122).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123629</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>"Stop or I'll shoot, comply and I won't" : coercive assurance in international politics</title>
<link>https://hdl.handle.net/1721.1/123628</link>
<description>"Stop or I'll shoot, comply and I won't" : coercive assurance in international politics
Pauly, Reid B. C.
Why do some coercive threats succeed while others fail? Successful coercion requires not only that I credibly threaten you until you comply, but also that I credibly assure you that I will not punish you after you comply. This is the overlooked dilemma at the heart of coercive strategies. I develop and test Coercive Assurance Theory to show that coercion is more likely to succeed when credible threats are paired with credible assurances. Threats often fail because they are insufficiently contingent. I then step back to test theories of how states make their assurances believable in the process of coercive bargaining. The existing literature on commitment-making in international politics implies a tradeoff between threat and assurance credibility. Instead, I propose that threats and assurances sometimes require different conditions and that fluctuations in their credibility do not always correlate. In particular, states can generate credible assurance through three strategies: disentangling demands, exerting coercive control, and reducing visibility. Coercers often make multiple demands of targets but fail to keep punishments and demands discretely linked. Coercers also navigate their own domestic and international coalitions, which contain actors with varying interests; unified coercers mitigate fears of capricious punishment. Finally, visibility reduction (for example, allowing targets to plausibly deny their concessions) diminishes target fears of paying reputational costs. All three mechanisms also signal that the coercer is not seeking a pretext for punishment. I test my theory by examining cases of coercive bargaining between non-allies over nuclear weapons programs, with chapters on South Africa, Libya, and Iran. A policy-evaluation chapter also applies the theory to North Korea. I explain not only the occurrence but also the timing of nonproliferation bargains using primary documents from U.S. government archives, the South African apartheid-era government archives, and the International Atomic Energy Agency (IAEA) archives. I supplement these documents with the memoirs, recollections, and writings of target-state policymakers, military leaders, and nuclear scientists. I also conducted interviews with participants in relevant policy-making processes. By introducing a theory of assurance and highlighting its role in striking coercive bargains, the dissertation aims to improve policymaking in a critical realm.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 420-478).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123628</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relational AI : creating long-term interpersonal interaction, rapport, and relationships with social robots</title>
<link>https://hdl.handle.net/1721.1/123627</link>
<description>Relational AI : creating long-term interpersonal interaction, rapport, and relationships with social robots
Kory-Westlund, Jacqueline M.(Jacqueline Marie)
Children are now growing up with AI-enabled, socially interactive technology. As such, we need to deeply understand how children perceive, interact with, and relate to this kind of technology, especially given the many ethical concerns that arise in the context of human-machine interactions, many of which are most acute for children. To this end, I explore questions about young children's interactions and relationships with one such technology, social robots, during language learning activities. Language learning is a ripe area for exploring these questions because of the social, interactive, interpersonal nature of the activity. In addition, literacy, language, and interpersonal skills are some of the most important skills any child will learn, as they can greatly impact children's later educational and life success. Through a series of 9 empirical child-robot interaction studies with 347 children, using both teleoperated and autonomous robots, I establish the role of social robots as relational technology, that is, technology that can build long-term, social-emotional relationships with users. I hypothesize that a key aspect of why social robots can benefit children's learning is their social and relational nature. To that end, I demonstrate the capabilities of social robots as learning companions for young children that afford opportunities for social engagement and reciprocal interaction, particularly peer-to-peer mirroring. I discuss how we can understand children's conceptualizations of social robots as relational agents and measure children's relationships with them over time. I introduce the term relational AI to refer to autonomous relational technologies, and I develop a computational relational AI system to examine how using relational AI in a social robot can impact child-robot learning interactions. Through testing the autonomous system in a longitudinal study with 49 children, I explore connections between children's relationship and rapport with the robot and their engagement and learning. I discuss the ethical use and design implications of relational AI. I show that relational AI is a new, powerful educational tool, unlike any existing technology, that we can leverage to support children's early education and development.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 266-294).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123627</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precise and expansive genomic positioning for CRISPR edits</title>
<link>https://hdl.handle.net/1721.1/123626</link>
<description>Precise and expansive genomic positioning for CRISPR edits
Jakimo, Noah Michael.
The recent harnessing of microbial adaptive immune systems, known as CRISPR, has enabled genome-wide engineering across all domains of life. A new generation of gene-editing tools has been fashioned from the natural DNA/RNA-targeting ability of certain CRISPR-associated (Cas) proteins and their guide RNA, which work together to recognize and defend against infectious genetic threats. This straightforward RNA-programmed sequence recognition has facilitated CRISPR's rapid global impact on genetic research, diagnostics, therapeutics, and bioproduction. An ideal DNA-editing platform would achieve perfect accuracy on any desired cellular and genomic target. CRISPR systems, however, have limited target fidelity and range, in part due to evolutionary pressure to defend microbes from fast-mutating viruses without self-targeting their own guide RNA. These natural limitations of CRISPR especially constrain gene-editing in animals and plants, which are more vulnerable to off-target activity occurring in one of their trillions of cells, with genomes 1000x larger than those of the unicellular microbes that natively harbor CRISPR systems.
This thesis overcomes three critical challenges for precise and broad gene-editing of complex organisms: 1) engineering a means of specificity for the type of cells to edit, 2) improving target-matching accuracy, and 3) broadening the editable portion of the genome. It addresses these challenges by integrating custom-developed computational design tools with biological validation of the resulting novel CRISPR systems: 1) to target within multicellular heterogeneity, new oligonucleotide-sensing structural motifs are designed and embedded into guides that can potentially control CRISPR nuclease activity based on cell-type transcriptome patterns; 2) to discern between a target and the increasingly similar non-target sequences found in larger genomes, base-pairing thermostability principles are employed to tune the biochemical composition of guides so that they evade subtly mismatched off-target sites; 3) to expand the reach of editing techniques with narrow windows of operation, such as base-editing, bioinformatics workflows are created that discover previously uncharacterized Cas proteins with novel target scope. This thesis demonstrates the effectiveness of these strategies in in vitro, bacterial, and human cell culture assays, and contributes advancements in the precision and generality of CRISPR gene-editing.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-105).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123626</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The complexity of the future of work</title>
<link>https://hdl.handle.net/1721.1/123625</link>
<description>The complexity of the future of work
Frank, Morgan Ryan.
Rapidly advancing cognitive technologies, such as artificial intelligence (AI), have the potential to drastically impact modern society and to shape the future of work. Accordingly, policy makers and researchers seek forecasts of technological change and labor trends, including growing job polarization and income inequality as well as decreasing career mobility and spatial mobility for workers. Although a given technology impacts demand for only a narrow set of workplace skills, modern empirical work relies on coarse labor distinctions between cognitive and physical or routine and non-routine work to explain employment trends. In this dissertation, I explore the complex ways in which skills and employment undergird aggregate labor dynamics in the US. As a motivating example, I demonstrate how simple measures of skills within a labor market contribute to the differential impact of automation across US cities of different sizes. I build on this motivation to address methodological barriers through a refined model of workplace skills and their interdependencies, thus connecting microscopic workplace connections to macroscopic labor trends. I perform an unsupervised analysis of specific workplace skills as a skills network whose aggregate and refined topology grants new insights into job polarization and workers' career mobility. Since these inter-skill connections predict career mobility, I construct a map of US occupations that captures worker transition rates between employment opportunities and, in combination with urban employment data, predicts workers' spatial mobility. These refined models, which connect workplace skills to both inter-city and intra-city dynamics, enable new insights and new input data sources for tracking real-time labor trends at the level of specific technologies and specific workplace skills. I conclude by exploring one novel and potentially useful source of input information: the evolution of scientific AI research.
The analyses in this dissertation provide new tools to policy makers designing viable worker retraining programs, offer new insights to individual workers navigating their careers, and present new measures of economic resilience in the face of changing technology.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 269-284).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123625</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automating data visualization through recommendation</title>
<link>https://hdl.handle.net/1721.1/123624</link>
<description>Automating data visualization through recommendation
Hu, Kevin Zeng.
Demand for data visualization has exploded in recent years with the increasing availability and use of data across domains. Traditional visualization techniques require users to manually specify visual encodings of data through code or clicks. While manual specification is necessary to create bespoke visualizations, it renders visualization inaccessible to those without technical backgrounds. As a result, visualization recommender systems, which automatically generate results for users to search and select, have gained popularity. Here, I present systems, methods, and data repositories to contextualize and improve visualization recommender systems. The first contribution is DIVE, a publicly available and open source system that combines rule-based recommender systems with manual specification. DIVE integrates state-of-the-art data model inference, visualization, statistical analysis, and storytelling capabilities into a unified workflow.; In a controlled experiment, we show that DIVE significantly improves task performance among a group of 67 professional data scientists. Over 15K users have uploaded 7.5K datasets to DIVE since its release. In response to the limitations of rule-based recommender systems, VizML is a machine learning-based method for visualization recommendation. VizML uses neural networks trained on a large corpus of dataset-visualization pairs to predict visualization design choices, such as visualization type and axis encoding, with an accuracy of over 85%, exceeding that of base rates and baseline models. Benchmarking with a crowdsourced test set, we show that our model achieves human-level performance when predicting consensus visualization type.
To support learned visualization systems, VizNet is a large-scale visualization learning and benchmarking repository consisting of over 31M real-world datasets.; To demonstrate VizNet's utility as a platform for conducting crowdsourced experiments with ecologically valid data, we replicate a prior perceptual effectiveness study, and demonstrate how a metric of visualization effectiveness can be learned from experimental results. Our results suggest a promising method for efficiently crowdsourcing the annotations necessary to train and evaluate machine learning-based visualization recommendation at scale. Enabled by the availability of real-world data, Sherlock is a deep learning approach to semantic type detection. We train Sherlock on 686K data columns retrieved from the VizNet corpus by matching 78 semantic types from DBpedia to column headers. We characterize each matched column with 1,588 features describing the statistical properties, character distributions, word embeddings, and paragraph vectors of column values.; A multi-input neural network achieves a support-weighted F1 score of 0.89, exceeding that of a decision tree baseline, dictionary and regular expression benchmarks, and the consensus of crowdsourced annotations. I conclude by discussing three opportunities for future research. The first describes design considerations for mixed-initiative interactions in AI-infused visualization systems such as DIVE. The second reviews recent work on statistical validity of insights derived from visualization recommenders, which is an especially important consideration with learned systems such as VizML. Lastly, I assess the benefits of learning visualization design from non-experts, then present experimental evidence towards measuring the gaps between expert and non-expert judgment.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 162-180).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123624</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct-write assembly of colloidal materials</title>
<link>https://hdl.handle.net/1721.1/123620</link>
<description>Direct-write assembly of colloidal materials
Tan, Alvin Thong Lip.
Colloidal assembly, which is the spontaneous organization of nano- and micro-sized particles, is an attractive means to create materials with properties that can be engineered via a hierarchy of particle composition, size, ordering, and macroscopic form. However, while there are well-established methods for assembling colloidal crystals as films and patterns on substrates, it has not previously been possible to build freeform colloidal crystal structures. Macroscale, freeform colloidal crystals could enable the development of novel composites, photonics, electronics, and new studies of crystallization in three dimensions. This thesis describes the development of direct-write assembly, a process combining the bottom-up principle of colloidal self-assembly with the versatility of direct-write 3-D printing. Direct-write assembly is performed by precision dispensing of a colloidal suspension from a fine needle into a temperature-controlled environment.; Using polystyrene particles suspended in water as a model system, we derive a scaling law that governs the rate of assembly. Moreover, by high resolution motion control of the substrate, the trajectory of crystal growth, and therefore the shape of the crystal, can be controlled in freeform. We show how to prevent cracking in these free-standing colloidal crystals, and demonstrate the emergence of structural color tunable by particle size. We also explore in-plane direct-write as a means for fabricating colloidal crystals patterned by a digital template. The kinetics of crystal growth can be modelled by the Dimitrov-Nagayama equation for convective assembly, which allows us to develop an operational phase diagram to serve as a practical guide for high-throughput assembly. Moreover, we develop a means of rapidly characterizing grain structure from the optical diffractive properties of the colloidal crystal.; By sequentially sintering and overlapping passes, in-plane direct-write can potentially build up to 3-D structures.
Finally, we consider the scaling of forces in the direct-write assembly process and demonstrate that direct-write can be extended to various particle systems. In particular, we demonstrate application of direct-write to the assembly of colloidal silica, gold, and iron oxide supercrystals.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 102-108).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123620</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles studies of defects in colloidal nanocrystals</title>
<link>https://hdl.handle.net/1721.1/123619</link>
<description>First-principles studies of defects in colloidal nanocrystals
Liu, Yun, Ph.D. Massachusetts Institute of Technology. Department of Materials Science and Engineering.
Solar energy is one of the few renewable, low-carbon sources with both the maturity and accessibility to meet the ever-increasing global demand for energy. It also accounts for an increasing percentage of our energy output due to increased adoption in both industrial and residential areas. Wafer-based silicon photovoltaics (PV) technology has dominated the solar market, and its price has decreased significantly over the last decades. In order to fully capture solar energy and extend the flexibility of PV technology, there is a need for constant innovation in new materials. Currently, there is a class of emerging PV technologies that offer the potential of increased scalability, flexibility and lower prices. They include hybrid organic-inorganic lead halide perovskite PV, organic PV and colloidal quantum dot (CQD) PV. Colloidal quantum dots are semiconducting nanocrystals that exhibit size-tunable electronic and optical properties.; Owing to their versatility and facile synthesis, they have seen wide application in photovoltaics, light emitting diodes, solar concentrators and bio-imaging. In particular, their PV power conversion efficiency has grown rapidly over the last 9 years from 3% to 16.6%. Despite the rapid progress, the search for better PV materials has been carried out almost exclusively through tremendous numbers of trial and error experiments. This is due to the fact that many fundamental aspects of the materials have not been fully understood, especially the role of defects and trap states. Due to the nature of wet chemistry synthesis, vacancies, interstitials and other extended defects inevitably form. These defects often cause in-gap states within the semiconductor bandgap, which sensitively impact the performance of the PV devices.
In addition, defects are difficult to measure directly using experimental techniques, and we often rely on spectroscopy and imaging to probe their properties indirectly.; The core of the work described in this thesis deals with the theoretical understanding of nanocrystals with the goal of achieving a deeper and more fundamental understanding of the material's properties at the atomic scale, focusing on the roles of defects. To this end, we employ a computational electronic structure method, namely density functional theory (DFT) calculations. In this thesis we will use DFT to investigate the role that defects play in controlling the 1) Stokes shift and 2) trap states in PbS quantum dots, as well as the 3) luminescent properties of CuAlS₂ nanocrystals. We show that point defects can cause excessive Stokes shifts in single PbS CQDs, that dimer defects are a source of detrimental trap states in PbS CQD solids, and that the presence of point defects is the source of high luminescence in CuAlS₂ nanocrystals.; We also provide insights and design guidelines for controlling defects at the atomic level to design ever more efficient PV devices. This thesis document is organized as follows: Chapter 1 introduces CQDs and their applications in PV and other optoelectronic devices. Chapter 2 summarizes the computational techniques employed in this thesis work. Chapter 3 focuses on the origins of the Stokes shift in PbS nanocrystals. Chapter 4 focuses on PbS superlattice solids, and highlights the origin of trap states in these solids as due to the presence of dimers. Chapter 5 studies the defect physics of CuAlS₂, and identifies the defect states responsible for the high photoluminescence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-102).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123619</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing byproduct mining and metal recycling as indicators of material criticality</title>
<link>https://hdl.handle.net/1721.1/123618</link>
<description>Assessing byproduct mining and metal recycling as indicators of material criticality
Fu, Xinkai.
The development of advanced technologies relies on using a broader suite of elements from the periodic table, and many agree that the future availability of a set of 'critical materials' is an issue of global concern. However, assessments of material criticality are often overly general, leading to excessive concerns by policy makers and market participants. A quantitative and detailed investigation of supply risk indicators is necessary to further understand the risk associated with specific materials. This thesis investigates two aspects related to material criticality: 1) the status of a metal being produced as a byproduct; 2) the market impact of increased metal recycling. To identify the type of major risks associated with a byproduct metal, a techno-economic analysis is performed on 42 carrier-byproduct metal pairs, by employing cluster analysis and econometric modelling.; Contrary to the conventional view, it is found in several case studies that the availability of a byproduct metal is not directly limited by carrier supply, but rather by the lack of incentive to improve recovery efficiencies. Therefore, developing alternative extraction processes with high recovery rates is proposed as a mitigation strategy for byproduct metals. The economic feasibility of such processes is examined, first in a screening assessment and then in a detailed case study of extracting indium as a byproduct of zinc. It is demonstrated that an alternative process could significantly increase byproduct supply, by up to 10% in the case of indium. A bottom-up copper market simulation system is developed by modeling the behaviors of market participants, to estimate the market impact of increased metal recycling. Results from the simulation demonstrate the existence of various rebound effects for primary copper production.; Depending on the size and duration of secondary supply shocks, these rebound effects can offset 50% to 90% of the environmental benefits of recycling.
In terms of carrier recycling impacting byproduct supply, it is shown that recycling as a carrier-metal supply risk mitigation strategy would not significantly hurt the availability of the byproduct metal.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 170-184).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123618</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gatekeepers of prosperity : how the state and business block the path towards functioning market economies in Developing countries</title>
<link>https://hdl.handle.net/1721.1/123617</link>
<description>Gatekeepers of prosperity : how the state and business block the path towards functioning market economies in Developing countries
Limoeiro, Danilo Rocha,1982-
A well-functioning market economy is crucial for prosperity. However, there is considerable variation across countries in the cost of doing business. This invites the question of why governments impose these costs and why societies fail to enact reforms reducing them. This thesis seeks to answer this question by looking at the case of Brazil, a large economy riddled with state-imposed transaction costs. It argues that the existence of a healthy business environment is analogous to the provision of a public good. The lack of it reflects an equilibrium where actors fail to coordinate. In this suboptimal equilibrium, political agents use discretion over transaction costs as a power resource. Business insiders nurture relationships with these agents, accessing low transaction costs and gaining a competitive edge over the outsiders. As such, insiders have weaker preferences for reforms that could decrease the overall cost of doing business.; The four empirical chapters offer a diverse set of evidence substantiating this argument. First, Brazil's intricate tax system stems from politicians and businesses clinging to discretionary tax exceptions that benefit the few. Second, econometric analysis comparing patterns of lobbying among exporters suggests that industry leaders are the most politically engaged. They try to 'buy' access to power, rather than to push a reformist agenda. Third, the case of the pharmaceutical industry shows that stringent regulation affects new entrants more than it affects incumbent firms. This heterogeneous outcome contributes to the absence of a coalition that would push back against heavy regulatory standards. Fourth, the study of agriculture explains how convergence between insiders and outsiders involved in exporting activities enables collective action towards reforms.
The thesis also highlights the relationship between state-imposed transaction costs and long-term prosperity.; These costs undermine growth by making capital less productive and limiting the pool of value-creating entrepreneurs. While secure property rights became a catchall explanation for development, social scientists will benefit from expanding their investigation towards overlooked dimensions of state-business relations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 347-376).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123617</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on economic sociology of innovation and entrepreneurship</title>
<link>https://hdl.handle.net/1721.1/123583</link>
<description>Essays on economic sociology of innovation and entrepreneurship
Kim, Hyejun.
This dissertation considers how innovation and entrepreneurship are developed, encouraged, and evaluated through the theoretical lens of economic sociology. The first chapter investigates who becomes an entrepreneur among the pool of general consumers. The process by which individuals become entrepreneurs is often described as a decisive moment of transition, yet it necessarily involves a series of smaller steps. By breaking down the stages of knitting hobbyists' transition to producers who sell their original design patterns, the study examines the distinctive characteristics that affect users' decision to (a) create new products and (b) commercialize them. The second chapter examines the role of social capital in revealing and encouraging entrepreneurship. To the question of how social capital benefits innovation and entrepreneurship, the existing literature has provided one dominant answer: access to information and resources.; In this study, I suggest an alternative mechanism by which social capital benefits an individual's entrepreneurial transition: social networks provide potential entrepreneurs with self-confidence in the promise of their new ideas and encourage their entry into the market. Using a matched sample of potential innovators, I show that an individual's participation in a local group encourages her transition to an entrepreneur, especially for those who already have the necessary skills for the transition. The empirical analysis resonates with qualitative evidence that hobbyists make the transition to entrepreneurs when encouraged by their friends. The third chapter (co-authored with Pierre Azoulay and Ezra Zuckerman) considers commitment-based typecasting among knit designers.
We show that "commitment-based typecasting" has two characteristic features: asymmetry in audience valuation and retrospective reevaluation.; When a novice performer experiences an "identity shock" suggesting that she is more committed to the audience for one category than another, the "betrayed" audience tends to regard her as having always been less committed to the rival audience/category. We test this theory in the domain of knitting, where there is a divide between avant-garde knitters and traditional knitters, and we show that when a novice knit designer is first published in the publication associated with one category, this elicits a retrospective devaluation of her prior work by the audience of the opposing category.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123583</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on information technology, intangible capital, and the economics of artificial intelligence</title>
<link>https://hdl.handle.net/1721.1/123582</link>
<description>Essays on information technology, intangible capital, and the economics of artificial intelligence
Rock, Daniel Ian.
This dissertation contains four essays concerning the economics of information technology, intangible capital, and artificial intelligence. In the first essay, "Engineering Value: The Returns to Technological Talent and Investments in Artificial Intelligence" I describe how firms can appropriate some of the value of their employees' human capital by assigning firm-specific tasks. I then use a database of employment records to document dynamics in the valuation of publicly traded firms as they relate to different types of employment, focusing especially on AI skills. The second essay, "The Productivity J-Curve: How Intangibles Complement General Purpose Technologies" (coauthored with Erik Brynjolfsson and Chad Syverson) addresses the concern that new technologies with wide applicability throughout the economy can cause both underestimation and overestimation of total factor productivity.; As capital is accumulated, intangible investment output, and therefore productivity growth, will be underestimated only to later generate a yield (at which point productivity growth will be overestimated). Presenting a theoretical description of how to use corporate valuations to recover hidden investment value, we discuss how productivity growth and levels can be adjusted to accommodate these changes. Implications for research and development, computer hardware, and computer software investments are considered. The third essay, "Machine Learning and Occupational Change" (coauthored with Erik Brynjolfsson and Tom Mitchell), develops and implements a method to measure the labor market impact potential of machine learning technologies. Tasks are evaluated for their Suitability for Machine Learning (SML). We find that few occupations can be fully automated with machine learning, but many occupations will potentially be redesigned.; The final essay, "Do Labor Demand Shifts Occur Within Firms or Across Them? 
Non-Routine-Biased Technological Change 2000-2016" (coauthored with Seth Benzell and Guillermo Lagarda) decomposes labor share shifts of occupational groups into changes between firms, within firms, and due to entry and exit. We find that within-firm compositional shifts are an important component of changes in the overall labor market. We also find that the rate of within-firm shifts has declined in the period from 2000 to 2016. Together, these essays offer insights into how artificial intelligence technologies, particularly machine learning, will impact the U.S. economy.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123582</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial economics</title>
<link>https://hdl.handle.net/1721.1/123581</link>
<description>Essays in financial economics
Petukhov, Anton.
This dissertation consists of three chapters. Chapter 1 proposes a dynamic general equilibrium model to study jointly (i) the pace of technological progress and (ii) the asset pricing properties of investment in innovation. Risk is a key characteristic that links the two together. Both empirically and in my theory, innovation activity is associated with elevated levels of idiosyncratic risk. In the model, idiosyncratic risk is driven by the uncertain productivity improvements and disruption that emerge in the process of innovation. Thus idiosyncratic risk is an instrumental determinant of the rate of technological progress and expected returns on investment in innovation. A calibrated version of the model provides an accurate quantitative description of venture capital cycles both in terms of investment flows and financial returns.; A joint study with Hui Chen and Jiang Wang in Chapter 2 investigates the effects of market-wide trading halts, also called circuit breakers, on stock prices and trading behavior. We develop a model to examine how circuit breakers impact the market when investors trade to share risk. We show that a downside circuit breaker tends to lower the stock price, increase its volatility and raise the likelihood of reaching the triggering price. The volatility amplification effect becomes stronger when the wealth share of the relatively pessimistic agent is small. In Chapter 3 I develop a theory that shows how search frictions in the labor market shape the asset pricing properties of stock returns. My theory reconciles and links together two empirical facts: (i) economic downturns are associated with a higher pace of job reallocation and (ii) rapidly growing firms on average yield lower returns on their stocks compared to shrinking firms.; In the model, firms with more growth opportunities benefit from recessions due to more slack in the labor market, since in recessions growing firms can hire more and expand more quickly.
This feature makes growth firms' value less cyclical. A calibrated version of the model successfully replicates predictability of returns in the cross-section by a range of growth indicators.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123581</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mandatory corporate patent disclosures and innovation</title>
<link>https://hdl.handle.net/1721.1/123580</link>
<description>Mandatory corporate patent disclosures and innovation
Kim, Jinhwan.
I investigate the effect of corporate patent disclosures on innovation. Using the American Inventor's Protection Act (AIPA) as a plausibly exogenous shock to corporate patent disclosures, I find evidence of the AIPA shaping innovation through two simultaneous channels. First, the AIPA encourages a firm to innovate by facilitating access to the scientific information contained in other firms' patent disclosures. Second, the AIPA discourages a firm from innovating by increasing the risk of leaking business-related strategies through its own patent disclosures. These findings are consistent with the view that corporate patents contain information useful for both science and business, and highlight their respective roles in generating both spillover benefits and proprietary costs of mandating patent disclosures. Finally, using textual analysis, I find that firms with high proprietary costs respond to the AIPA by strategically changing their patent disclosures to obfuscate exploitable business-related signals.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 41-46).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123580</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Auto-calibrated urban building energy models as continuous planning tools for greenhouse gas emissions management</title>
<link>https://hdl.handle.net/1721.1/123574</link>
<description>Auto-calibrated urban building energy models as continuous planning tools for greenhouse gas emissions management
Nagpal, Shreshth.
To reduce greenhouse gas emissions associated with their buildings' energy use, owners frequently rely on building energy models that are calibrated to existing conditions for evaluation of potential energy efficiency retrofits. Development of such calibrated models requires the estimation of a series of building characteristics, a process which is extremely effort-intensive even for a single building and, therefore, almost prohibitive for large campus projects which often include hundreds of diverse-use buildings. There is a need for a framework that combines established urban energy model generation techniques with data-driven methods to reduce the manual and computational cost of developing calibrated baseline campus energy models, allow for real-time evaluation of future building upgrades, and display their consequences to decision makers on an ongoing basis. This dissertation addresses this need by proposing new workflows for different development stages of models designed to evaluate future energy scenarios for large institutional campuses. First, the strengths and limitations of different urban modeling methodologies are assessed (modeling approach). Next, a methodology to employ statistical surrogate models is proposed for rapid estimation of unknown building properties (auto-calibration). Finally, a continuous energy performance tracking framework is presented to enable university campuses to manage their building related greenhouse gas emissions over time (continuous planning). As a proof of concept, the complete method has been implemented and tested at the author's home institution. Auto-calibration and continuous planning can be implemented independently or combined, and the dissertation includes a discussion about their possible impact if applied across the building stock.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-117).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123574</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Early building design using multi-objective data approaches</title>
<link>https://hdl.handle.net/1721.1/123573</link>
<description>Early building design using multi-objective data approaches
Brown, Nathan C.(Nathan Collin)
During the design process in architecture, building performance and human experience are increasingly understood through computation. Within this context, this dissertation considers how data science and interactive optimization techniques can be combined to make simulation a more effective component of a natural early design process. It focuses on conceptual design, since technical principles should be considered when global decisions are made concerning the massing, structural system, and other design aspects that affect performance. In this early stage, designers might simulate structure, energy, daylighting, thermal comfort, acoustics, cost, and other quantifiable objectives. While parametric simulations offer the possibility of using a design space exploration framework to make decisions, their resulting feedback must be synthesized, along with non-quantifiable design goals.; Previous research has developed optimization strategies to handle such multi-objective scenarios, but opportunities remain to further adapt optimization for the creative task of early building design, including increasing its interactivity, flexibility, accessibility, and ability to both support divergent brainstorming and enable focused performance improvement. In response, this dissertation proposes new approaches to parametric design space formulation, interactive optimization, and diversity-based design. These methods span in utility from early ideation, through global design exploration, to local exploration and optimization. The first presented technique uses data science methods to interrogate, transform, and, for specific cases, generate design variables for exploration.
The second strategy involves interactive stepping through a design space using estimated gradient information, which offers designers more freedom than automated solvers allow during local exploration.; The third method addresses computational measurement of diversity within parametric design and demonstrates how such measurements can be integrated into creative design processes. These contributions are demonstrated on an integrated early design example and preliminarily validated using a design study that provides feedback on the habits and preferences of architects and engineers while engaging with data-driven tools. This study reveals that performance-enabled environments tend to improve simulated design objectives, and that designers, when given the choice, prefer more flexibility than traditional automated optimization approaches provide. Together, these findings can stimulate further development in the integration of interactive approaches to multi-objective early building design. Key words: design space exploration, conceptual design, design tradeoffs, interactive design tools, structural design, sustainable design, multi-objective optimization, data science, surrogate modeling
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 201-219).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123573</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning methods for targeting and new product development</title>
<link>https://hdl.handle.net/1721.1/123572</link>
<description>Machine learning methods for targeting and new product development
Timoshenko, Artem.
Chapter 1: Market research traditionally relies on interviews and focus groups to identify customer needs. User-generated content (UGC), such as online reviews, social media, and call-center data, provides an opportunity to identify customer needs more efficiently. Established methods are not well-suited for large UGC datasets because much of the content is uninformative or repetitive. We propose a machine learning approach for identifying customer needs from UGC and evaluate the method using a new dataset. Once identified, the needs can be used to inform marketing strategy, brand positioning and new product development. Chapter 2: Targeting policies are used in marketing to match different firm actions to different customers. For example, retailers want to send different promotions to different customers, real estate agents want to show different homes, and car dealers want to propose different prices.; We conduct two large-scale field experiments to evaluate seven methods widely used to design targeting policies. The findings compare the performance of the targeting methods and demonstrate how well the methods address common data challenges. The challenges we study are covariate shift, concept shift, information loss through aggregation, and imbalanced data. We show that model-driven methods perform better than distance-driven methods and classification methods when the training data is ideal. However, the performance advantage vanishes in the presence of the challenges that affect the quality of the training data. Chapter 3: Firms typically compare the performance of different targeting policies by implementing the champion versus challenger experimental design. 
These experiments randomly assign customers to receive marketing actions recommended by either the existing (champion) policy or the new (challenger) policy, and then compare the aggregate outcomes.; We recommend an alternative experimental design and propose an estimation approach to improve the evaluation of targeting policies. The recommended experimental design randomly assigns customers to marketing actions. This allows evaluation of any targeting policy without requiring an additional experiment, including policies designed after the experiment is implemented. The proposed estimation approach identifies customers for whom different policies recommend the same action and recognizes that for these customers there is no difference in performance. This allows for a more precise comparison of the policies. We illustrate the advantages of the experimental design and the estimation approach using data from an actual field experiment. We also demonstrate that the grouping of customers, which is the foundation of our estimation approach, can help to improve the training of new targeting policies.; Chapter 4: Coupon personalization requires predicting how different combinations of coupons affect customer purchasing behavior. We develop a nonparametric model, based on a deep neural network, which predicts product choice for the entire assortment of a large retailer. The model takes as input the purchasing histories of individual customers and their coupon assignments to predict individual purchasing decisions. The model operates without ex-ante definitions of product categories. We evaluate the proposed product choice model in simulations. Our model significantly outperforms the baseline machine learning methods in terms of prediction accuracy. Coupon personalization based on our model also achieves substantially higher revenue than the baseline prediction methods.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123572</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making the cut : the rate and direction of CRISPR innovation</title>
<link>https://hdl.handle.net/1721.1/123571</link>
<description>Making the cut : the rate and direction of CRISPR innovation
Zyontz, Samantha.
This dissertation explores, in real time, key institutional factors contributing to the diffusion and impact of a breakthrough technology from its very first days. The studies combine rigorous quantitative empirical methods with a deep understanding of the institutions of a novel setting, allowing for a nuanced picture of the actors, institutions, technologies, and rules necessary to make recommendations on policies and strategies for the diffusion of emerging innovations. The first chapter examines whether the introduction of a breakthrough technology, the CRISPR DNA-editing system, affects the trajectory of a scientific field through project selection and new entry. Using proprietary data from the primary distributor of CRISPR to academic scientists, Addgene, the study shows that the relative proportion of scientists focusing on editing mammalian cells increased after the introduction of CRISPR, compared with their counterparts working in bacteria and other eukaryotes.; The shift towards mammalian research may result mostly from entry of new authors. The second chapter (with Neil Thompson) explores whether characteristics of individual scientists who experiment with CRISPR differ from those who incorporate that experimentation into a new project. Using Addgene data we separately observe both groups by matching CRISPR orders to scientists' publication histories. We find that some characteristics (e.g., proximity to the discoverers) do not impact experimentation but do influence the ability to publish, empirically showing that access to a complex new tool does not automatically translate into the ability to use the tool. 
The third chapter builds on the previous two by noting that many new tools require specialized complementary know-how to be applied effectively and delving into how teams form to acquire that know-how.; Teams in any research domain face the tradeoff of either acquiring this know-how themselves or working with scarce external tool specialists who also have a choice over domain teams. CRISPR enables identification of external tool specialists on research teams by exploiting natural difficulties of applying the tool across disease domains. External tool specialists appear more often in teams for difficult diseases, especially in subsequent innovations, suggesting that external tool specialists may be more attracted to complex but influential problems.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 162-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123571</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural investigations of class Ia ribonucleotide reductases by electron microscopy</title>
<link>https://hdl.handle.net/1721.1/123568</link>
<description>Structural investigations of class Ia ribonucleotide reductases by electron microscopy
Kang, Gyunghoon.
Ribonucleotide reductase (RNR) catalyzes the reduction of nucleotides to their 2'-deoxynucleotide counterparts. The class Ia RNR from Escherichia coli is composed of two homodimeric subunits [alpha]2 and [beta]2 that form an [alpha]2[beta]2 complex to perform nucleotide reduction. Chemistry is initiated by a thiyl radical (C439·) in the active site of [alpha]2 that is reversibly generated by a diferric-tyrosyl radical cofactor (Y122·) in [beta]2 through a series of proton-coupled electron transfer steps: Y122[beta] &lt;-&gt; [W48[beta]] &lt;-&gt; Y356[beta] &lt;-&gt; Y731[alpha] &lt;-&gt; Y730[alpha] &lt;-&gt; C439[alpha]. A high-resolution structure of the active [alpha]2[beta]2 complex has long eluded the field due to the weak and transient nature of the [alpha]2-[beta]2 interaction. Previous studies revealed that perturbing radical transfer by incorporating unnatural amino acids along the transfer pathway, or by using mechanistic inhibitors that trap the radical in the active site, can extend the lifetime of the [alpha]2[beta]2 complex, allowing for structural studies. Here, we present our efforts to study the E. coli class Ia RNR [alpha]2[beta]2 complex, trapped using these different perturbation methods, by cryo-electron microscopy. The two [alpha]2[beta]2 structures presented here provide deeper insight into the structural dynamics of nucleotide reduction. We end with a brief discussion of the class Ia RNR from T4 bacteriophage, which, despite sharing high sequence identity with its host E. coli class Ia RNR, employs a very different mode of oligomeric regulation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123568</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamic and tunneling measurements of van der Waals heterostructures</title>
<link>https://hdl.handle.net/1721.1/123567</link>
<description>Thermodynamic and tunneling measurements of van der Waals heterostructures
Tomarken, Spencer Louis.
In certain electronic systems, strong Coulomb interactions between electrons can favor novel electronic phases that are difficult to anticipate theoretically. Accessing fundamental quantities such as the density of states in these platforms is crucial to their analysis. In this thesis, I explore the application of two measurement techniques towards this goal: capacitance measurements that probe the thermodynamic ground state of an electronic system and planar tunneling measurements that access its quasiparticle excitation spectrum. Both techniques were applied to van der Waals materials, a class of crystals composed of layered atomic sheets with weak interplane bonding which permits the isolation of single and few-layer sheets that can be manually assembled into heterostructures. Capacitance measurements were performed on a material system commonly known as magic-angle twisted bilayer graphene (MATBG).; When two monolayers of graphene, a single sheet of graphite, are stacked on top of one another with a relative twist between their crystal axes, the resultant band structure is substantially modified from the cases of both monolayer graphene and Bernal-stacked (non-twisted) bilayer graphene. At certain magic angles, the low energy bands become extremely flat, quenching the electronic kinetic energy and allowing strong electron-electron interactions to become relevant. Exotic insulating and superconducting phases have been observed using conventional transport measurements. By accessing the thermodynamic density of states of MATBG, we estimate its low energy bandwidth, Fermi velocity, and interaction-driven energy gaps. Time-domain planar tunneling was performed on a heterostructure that consisted of monolayer graphene and hexagonal boron nitride (serving as the dielectric and tunnel barrier) sandwiched between a graphite tunneling probe and metal gate.; Tunneling currents were induced by applying a sudden voltage pulse across the full parallel plate structure. 
The lack of in-plane charge motion allowed access to the tunneling density of states even when the heterostructure was electrically insulating in the quantum Hall regime. These measurements represent the first application of time-domain planar tunneling to the van der Waals class of materials, an important step in extending the technique to new material platforms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 201-212).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123567</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Founders' dynamics -- interpersonal relationships and between-team interaction in early startups</title>
<link>https://hdl.handle.net/1721.1/123566</link>
<description>Founders' dynamics -- interpersonal relationships and between-team interaction in early startups
Lederman, Oren.
The ability to predict the success or failure of an early-stage company is critical for accelerator programs and investors. Prior studies marked human and social capital as important factors determining the potential of a startup to succeed. However, very little is known about the effect that founders' interpersonal relationships have on the success of their companies, or the effect of their relationships with other startups located in the same innovation space on their performance. To investigate these relationships, we propose a combination of methodology and field experiments that makes use of Rhythm, a wearable sensing platform designed for measuring social interaction. We first describe the design of the platform and its evaluation process. Then, we describe a large-scale field study in a university startup accelerator program and its results. Finally, we propose future enhancements to the platform and directions for future research.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 107-116).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123566</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic and robust network resource allocation</title>
<link>https://hdl.handle.net/1721.1/123565</link>
<description>Dynamic and robust network resource allocation
Zhang, Peter Yun.
Networks are essential modeling tools in engineering, business, and public policy research. They can represent physical connections, such as manufacturing processes. They can represent relationships among people, such as patient treatment in healthcare. They can also represent abstract interactions, such as the biological reaction between a certain vaccine and a certain virus. In this work, we bring several seemingly disparate problems under the same modeling framework, and show their thematic coherence via the angle of dynamic optimization on networks. Our research problems are drawn from business risk management, public health security, and public policy on vaccine selection. A common theme is the integrative design of (1) strategic resource placement on a network, and (2) operational deployment of such resources. We outline the research questions, challenges, and contributions as follows.; Modern automotive manufacturing networks are complex and global, comprising tens of thousands of parts and thousands of plants and suppliers. Such interconnection leaves the network vulnerable to disruptive events. A good risk mitigation decision support system should be data-driven, interpretable, and computationally efficient. We devise such a tool via a linear optimization model, and integrate the model into the native information technology system at Ford Motor Company. In public health security, policymakers face decisions regarding the placement of medical resources and training of healthcare personnel, to minimize the social and economic impact of potential large-scale bioterrorism attacks. Such decisions have to integrate the strategic positioning of medical inventories, an understanding of the adversary's behavior, and operational decisions that involve the deployment and dispensing of medicines.; We formulate a dynamic robust optimization model that addresses this decision question, apply a tractable solution heuristic, and prove theoretical guarantees of the heuristic's performance. 
Our model is calibrated with publicly available data to generate insights on how policymakers should balance investment between medical inventory and personnel training. The World Health Organization and regional public health authorities decide on the influenza (flu) vaccine type ahead of flu season every year. Vaccine effectiveness has been limited by the long lead time of vaccine production -- during the production period, flu viruses may evolve and vaccines may become less effective. New vaccine technologies, with much shorter production lead times, have gone through clinical trials in recent years. We analyze the question of optimal vaccine selection under both fast and slow production technologies. We formulate the problem as a dynamic distributionally robust optimization model.; Exploiting the network structure and using tools from discrete convex analysis, we prove some structural properties, which lead to informative comparative statics and tractable solution methods. With publicly available data, we quantify the societal benefit of current and future vaccine production technologies. We also explore the reduction in disease burden if the WHO expands its vaccine portfolio to include more than one vaccine strain per virus subtype. In each of the applications, our main contributions are four-fold. First, we develop mathematical models that capture the decision process. Second, we provide computational technology that can efficiently process these models and generate solutions. Third, we develop theoretical tools that guarantee the performance of these computational technologies. Last, we calibrate our models with real data to generate quantitative and implementable insights.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 139-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123565</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rate design for the 21st Century : improving economic efficiency and distributional equity in electricity rates</title>
<link>https://hdl.handle.net/1721.1/123564</link>
<description>Rate design for the 21st Century : improving economic efficiency and distributional equity in electricity rates
Burger, Scott P.
Electricity tariffs typically charge residential users a volumetric (that is, per-unit of electricity consumed) price that recovers the bulk of the costs of generating, transmitting, and distributing electrical energy. These tariffs also often include taxes and recover other costs associated with regulatory or policy measures. The resulting prices do not reflect the true social marginal costs of generating, transmitting, and distributing energy, capturing little or none of the temporal and geographic variability of marginal electricity costs. These inefficient rates incentivize customers to over-consume power during periods of peak system stress and under-consume power during periods of relatively low demand; this dynamic drives up power system costs, costing Americans and Europeans tens of billions of dollars annually. Critically, it leads to investments in long-lived and low-utilization infrastructure needed to meet peak demands.; Economists have long argued for reforming rates, but progress has historically been slow. Today, less than one quarter of one percent of residential electricity customers in the United States pay a tariff that reflects the real-time price of energy. The emergence of distributed energy resources -- such as solar photovoltaics and battery energy storage -- has sparked renewed interest among regulators and utilities in reforming electricity tariffs. Efficient rates hold the potential to improve the economic efficiency of distributed energy resource installation and operation decisions. However, the economic pressure to redesign electricity rates is countered by concerns of how more efficient rate structures might impact different socioeconomic groups. In particular, regulators have been dubious of efforts to reform how the costs of network infrastructure (that is, transmission and distribution networks) are recovered, rejecting more than 75% of such efforts in the U.S. in 2017. Focusing on developed power systems in contexts like the U.S. 
and Europe, this Thesis examines the distributional impacts of rate reform and proposes methods to improve the economic efficiency of rates without creating undesirable distributional impacts.; This Thesis also explores the distributional impacts of rooftop solar photovoltaics adoption under alternative rate designs. This Thesis leverages data on electricity consumption measured half-hourly for more than 100,000 customers in the Chicago, Illinois area, paired with Census data to gain unprecedented insight into the impacts of reforming electricity pricing across customers of varying socioeconomic statuses. This Thesis then builds a simple model of the local utility's -- Commonwealth Edison's -- cost of service, and simulates solar PV adoption under alternative rate designs, measuring the impacts on customers of differing income levels. This Thesis demonstrates that low-income customers would face increases in expenditures on average in a transition to rates that recover residual network and policy costs through economically efficient fixed charges. However, this Thesis demonstrates that simple changes to fixed charge designs can mitigate these disparities while preserving all, or the vast majority, of the efficiency gains. These designs rely exclusively on observable information and could be replicated by utilities in many geographies across the U.S.; Rooftop solar PV adoption under tariffs with inefficient, volumetric residual cost recovery is shown to create substantial distributional challenges: PV adoption under such tariffs increases expenditures substantially for non-adopters, who tend to be predominantly lower-income customers; efficient tariffs prevent this regressive cost shifting. In short, failing to reform rates may lead to worse distributional outcomes than reforming rates, even if reforms are implemented naively. 
Collectively, the findings in this Thesis underscore the need for regulatory reform around electricity pricing, and chart a path forward for balancing economic efficiency and distributional equity in public utility pricing.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 241-257).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123564</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Micro optics for micro hybrid concentrator photovoltaics</title>
<link>https://hdl.handle.net/1721.1/123563</link>
<description>Micro optics for micro hybrid concentrator photovoltaics
Li, Duanhui.
Concentrating photovoltaics (CPV) systems use concentrating optical elements to significantly reduce the material and processing costs of multi-junction high efficiency solar cells and improve the conversion efficiency. However, several issues have hindered the development of CPV technologies due to the fundamental limit of thermodynamics and practical difficulties of manufacturing and deployment, such as system bulkiness, tight tracking tolerances, thermal management and inability to collect diffuse irradiance. By dramatically scaling down the dimensions of the cells to the level of hundreds of microns and accordingly the concentrating optics, micro hybrid CPV overcomes these issues and also delivers a small form factor module profile similar to conventional flat panel PV. In this thesis, we focus on the critical optical components in the micro hybrid CPV: the micro optics. First, we demonstrated a wafer-level micro hybrid CPV module based on Si fabrication.; By introducing micro cavities into the Si wafer with wet etching, this novel micro optical element illustrates its potential for cost-effective collection of both direct and diffuse sunlight, thereby extending the geographic and market domains for cost-effective PV system deployment. By improving the CPV figure of merit by 46%, our micro hybrid CPV module demonstrated state-of-the-art small-form-factor CPV module optical performance. Next, we focused on developing a micro-prism-array based low-profile spectrum splitting optics assembly. By combining conjugate optics design with materials optical properties, the high-efficiency, low-cost, and low-profile optics potentially enables significant improvement in solar module performance and reduction of energy production costs. 
Lastly, we developed a simulation framework to generate an annualized diffuse radiance energy distribution map that covers the whole United States.; This simulation approach accounts for different geographic locations and weather conditions and aims to provide a high-accuracy reference for diffuse concentrator design.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 113-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123563</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the v₁-periodicity of the Moore space</title>
<link>https://hdl.handle.net/1721.1/123423</link>
<description>On the v₁-periodicity of the Moore space
Panchev, Lyuboslav(Lyuboslav Nikolaev)
We present progress in trying to verify a long-standing conjecture by Mark Mahowald on the v₁-periodic component of the classical Adams spectral sequence for a Moore space M. The approach we follow was proposed by John Palmieri in his work on the stable category of A-comodules. We improve on Palmieri's work by working with the endomorphism ring End(M) of the Moore space, thus resolving some of the initial difficulties of his approach and formulating a conjecture of our own that would lead to Mahowald's formulation. We further improve upon a method for calculating differentials via double filtration, first used by Miller, and apply it to our problem.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (page 36).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123423</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving technologies for the manipulation and measurement of brain activity</title>
<link>https://hdl.handle.net/1721.1/123418</link>
<description>Improving technologies for the manipulation and measurement of brain activity
Henninger, Michael Alan Moncrieff.
The ability to measure and manipulate brain activity is integral to much of neuroscience. Several modalities can be used to transmit information between the brain tissue and the experimental device, each with its own benefits and drawbacks. In this thesis, I present two major thrusts towards emerging technologies that improve upon optical information transmission modalities. I examine light propagation in the brain, with a focus on its importance to optogenetic manipulation of neural activity. Optogenetics is emerging as a powerful tool for neural activity manipulation: it is fast, can be genetically targeted to specific cell types, and can provide bidirectional control (inhibition or stimulation). Manipulation requires transmitting light from an experimental device to the light-sensitive proteins that modulate the cell's activity. The brain is a highly scattering, highly absorbing, nonhomogeneous medium.; I created a Monte Carlo code to simulate light's propagation through this medium and used it to guide the design of optogenetic stimulators and predict the in vivo performance of new optogenetic proteins. I designed and computationally evaluated the performance of a new kind of imager -- referred to as an Implantable Light Field Microimager (ILM) -- when used to measure neural activity reported by genetically encoded calcium sensors. Fluorescent reporters of physiological activity are already important tools in the study of neural dynamics, but recording the optical signals with sufficient temporal and spatial resolution -- especially in deep brain structures -- remains challenging due to the optical properties of brain tissue. The ILM fuses recent developments in light field imaging and computational photography with an algorithm that uses priors to solve the otherwise-underconstrained reconstruction problem. 
My simulations indicate that such a device would be effective at achieving single-cell resolution of neural activity at high speeds, with minimal tissue displacement and impact on brain temperatures -- an often overlooked aspect of brain implants that can have major impacts on neural activity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2017; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 77-84).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123418</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Six-dimensional supergravity : consistency conditions and realizations in string theory</title>
<link>https://hdl.handle.net/1721.1/123417</link>
<description>Six-dimensional supergravity : consistency conditions and realizations in string theory
Gopalan, Vijay Kumar Sreenivasa.
We consider the question of which low-energy effective theories with gravity can be realized as string compactifications. In order to make progress on this question, we consider six-dimensional, N = 1 supersymmetric theories with gauge groups, chiral matter and gravity. Stringent constraints imposed by anomaly cancellation make the analysis of large classes of effective theories and string compactifications tractable. We prove that there are only finitely many combinations of non-abelian gauge group and matter that can appear in these theories if the number of tensor multiplets T &lt;= 8. For T &gt;= 9, we find infinite families of effective theories with anomaly cancellation and positive kinetic terms. We show that anomaly cancellation defines an integral lattice associated with any low-energy theory. F-theory compactified on elliptic Calabi-Yau 3-folds gives a large class of string vacua. For the case of one tensor multiplet, we find an explicit map between the low-energy anomaly data and divisors in the base of an F-theory compactification. In the case of more tensors, a low-energy theory is realized by an F-theory compactification only if the low-energy lattice embeds into the second homology lattice of the base. We find examples of apparently consistent low-energy theories which cannot be realized in F-theory. We construct a subset of 6D, N = 1 vacua by compactifying the type I/heterotic string on a K3 surface, where the gauge bundle is assumed to be a sum of U(1) bundles. The gauge group in these vacua is a product of unitary groups, with chiral matter. We can construct a lattice from the data describing the low-energy theory, distinct from the lattice determined by the anomalies. A given low-energy theory is realized in this landscape if and only if the corresponding lattice embeds into the K3 lattice [Gamma]3,19.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2010; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 202-213).
</description>
<pubDate>Fri, 01 Jan 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123417</guid>
<dc:date>2010-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pairing of single electron additions at the edge of an ultraclean Mini 2DEG</title>
<link>https://hdl.handle.net/1721.1/123416</link>
<description>Pairing of single electron additions at the edge of an ultraclean Mini 2DEG
Demir, Ahmet, Ph. D. Massachusetts Institute of Technology.
In this work, we created laterally large and low-disorder quantum-well-based quantum dots to study single electron additions to two-dimensional electron gas (2DEG) systems. Their single electron addition spectra have been studied using a capacitance technique in a dilution refrigerator. As a function of magnetic field and density, we measured the single electron addition energies from a completely empty dot up to dot occupancies of thousands of electrons. For small dots, at low density and magnetic field, we found the expected non-interacting Fock-Darwin behavior. However, at high density and high magnetic field, we observed deviations from the single-particle picture which are suggestive of more novel physics. To observe collective behavior in quantum dots, we created relatively larger quantum dots so that the dot would behave as a small two-dimensional (2D) system. However, observing such behavior has been challenging due to the difficulty in the fabrication of sufficiently high quality devices. The quantum dots we are working on differ from those of previous works in that they do not contain any modulation doping nor a Schottky barrier above the dot. This new design eliminates all unscreened dopants. Instead, we populate carriers electrostatically by an external gate. Here, we report the observation in the addition spectra of interaction-driven localized states and isolated tunneling to edge states. We see electron additions to the edge states between filling factors v = 1 and v = 2 with single flux quantum (h/e) periodicity in magnetic field. Remarkably, between filling factors v = 2 and v = 5, we observe the pairing of electron additions to states at the edges of the quantum dots with a corresponding 2e charge tunneling.
Near filling factor 5/2 and at fixed gate voltage, these twice-height peaks appear uniformly with a periodicity of h/2e. At other filling factors in the range v = 2 - 5, the mean periodicity for the twice-height electron peaks remains h/2e, but the twice-height peaks are instead further bunched into pairs, with pairs spaced h/e apart. The filling factors for the observed h/2e periodicity coincide with those of a pairing phenomenon seen in conductance oscillations in Fabry-Perot interferometers [1] that indicated inter-channel entanglement between edge channels. Moreover, the unusual 2-electron Coulomb blockade peaks suggest a pair tunneling effect that involves electron correlations that arise in the quantum dot.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-115).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123416</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>NMR studies of localization and out-of-time ordered commutators</title>
<link>https://hdl.handle.net/1721.1/123415</link>
<description>NMR studies of localization and out-of-time ordered commutators
Wei, Ken Xuan.
This thesis presents a study of many-spin dynamics using a Nuclear Magnetic Resonance (NMR) simulator. A novel correlation metric was designed and experimentally measured in a solid state spin chain system to investigate the dynamics of quantum information transport. By using control techniques to tune the parameters of the system interactions, the effects of disorder and interaction on the dynamics were investigated experimentally. It was shown that in the absence of interactions, disorder can quench the spread of quantum information, while with weak interactions and disorder, quantum information can spread, albeit slowly. Having found a connection between the correlation metric and the recently proposed out-of-time ordered commutator, the correlation metric was used to experimentally probe a transition between ergodic and prethermal dynamics. Future experiments using out-of-time ordered commutators as a tool to probe the many-body localization transition are discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-120).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123415</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and devices for metropolitan-scale quantum key distribution</title>
<link>https://hdl.handle.net/1721.1/123414</link>
<description>Algorithms and devices for metropolitan-scale quantum key distribution
Bunandar, Darius.
Secure communication against any possible eavesdropper is important in today's Internet. Quantum key distribution (QKD), along with the one-time pad cryptosystem, provides a quantum-secure way for two distant parties to communicate with composable security. It has recently become clear that widespread utilization of QKD warrants improvements in its implementations. Theoretically, the security of QKD is difficult to analyze and the effects of imperfections on key rates are difficult to estimate. Practically, QKD requires miniaturization and an operation speed comparable to current Internet communications. In this thesis, we develop a robust numerical approach for calculating the key rates for arbitrary QKD protocols with explicitly quantifiable security. The approach formulates semidefinite programs that take, as inputs, the observed statistics from a QKD session and output the guaranteed key rates. Next, in an effort to boost the operation speed of current QKD systems, we describe a large-alphabet QKD scheme that can transmit multiple secret bits of information per photon while being immune against a photon-number side channel attack. We also demonstrate the feasibility of this system with an intercity field demonstration that pushes the boundary on its key generation rate. We then present the miniaturization of QKD systems using the silicon photonics platform, which allows for the integration of multiple high-speed photonic operations into a single circuit. We present the first intercity field demonstrations of QKD showing that silicon photonics -- supported by currently existing CMOS technology -- can pave the way for a high-speed metropolitan-scale quantum communication network.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 177-188).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123414</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Searching for dark matter with the CMS detector in proton-proton collisions containing a single high-pT photon and large E_T^miss</title>
<link>https://hdl.handle.net/1721.1/123413</link>
<description>Searching for dark matter with the CMS detector in proton-proton collisions containing a single high-pT photon and large E_T^miss
Allen, Brandon Leigh.
In this thesis, we present a search for dark matter in final states containing a high-pT photon and large missing transverse momentum in proton-proton collisions at [square root of s] = 13 TeV, using data collected by the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC) corresponding to an integrated luminosity of 35.9 inverse femtobarns. The main advances in experimental technique compared to previous searches in this final state are the use of data-driven control regions to constrain the main irreducible backgrounds from Z(--&gt; vv̄) + [gamma] and W(--&gt; [iota]v) + [gamma] production and an in-depth study of the unique anomalous detector signatures that result in backgrounds due to non-collision processes. With these improvements, we have the most robust analysis of this kind presented to date. No deviations from the predictions of the standard model are observed. The results are interpreted in the context of dark matter production, and limits on new physics parameters are calculated at 95% confidence level. We focus on two simplified dark matter production models in which new vector and axial-vector mediators couple a new dark Dirac fermion to the Standard Model quarks. These models are chosen as they cover a large class of WIMP-like dark matter particles that appear in many types of more complete new physics models. For the two models considered, the observed (expected) lower limits on the masses of the new mediators are 950 (1150) GeV for a dark matter particle of mass 1 GeV.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-165).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123413</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable perturbation and measurement of biological function via molecular encoding</title>
<link>https://hdl.handle.net/1721.1/123412</link>
<description>Scalable perturbation and measurement of biological function via molecular encoding
Feldman, David, Ph. D. Massachusetts Institute of Technology.
Methods are presented for optical genetic screens and lineage tracking and recovery. Pooled optical screens use pooling and molecular encoding to conduct image-based genetic screens at large scales with reduced biological noise. These assays complement existing pooled screening approaches by measuring cellular processes over space and time. Pairing of perturbations with separate barcode sequences expands the range of genetic libraries that can be assayed, but can be challenging due to lentiviral recombination that swaps barcodes within a library. Here, barcode swapping is carefully measured on a cell-by-cell basis, and a method is presented to mitigate the effects of barcode swapping in a pooled lentiviral library. Lineages within a complex cell population can be tracked via genomically integrated barcodes. Identifying and isolating lineages of interest from an ancestral population based on the characteristics of their progeny would enable probing underlying lineage-specific mechanisms, but is not possible with existing inert barcode libraries. A novel barcoding technique is shown that uses the high specificity of CRISPR/Cas9 nuclease activity to isolate viable cells from rare lineages within a population. Linking sequences to biological function is one of the defining challenges of the post-genomic era. Genetic screens are essential tools for defining genes underlying functions and enable rigorous testing of models linking sequence to function, but are limited by our ability to link sequence identity to observable cell phenotypes, such as growth, gene expression, and biochemical activity. Technological advances that integrate the fast-growing experimental genetic toolbox with high-throughput functional characterization have the potential to unlock new areas of biology for quantitative, systematic analysis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-117).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123412</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dark matter energy deposition and production from the table-top to the cosmos</title>
<link>https://hdl.handle.net/1721.1/123411</link>
<description>Dark matter energy deposition and production from the table-top to the cosmos
Liu, Hongwan, Ph. D. Massachusetts Institute of Technology.
The discovery of nongravitational interactions between dark matter and the Standard Model would be an important step in unraveling the nature of dark matter. If such an interaction exists, it would have profound implications for how dark matter is produced in both the early universe and in collider experiments. In addition, it would also allow dark matter to deposit energy into Standard Model particles in unexpected ways. This thesis details some recent progress made in understanding these implications, including (i) a new freezeout mechanism for thermal dark matter dominated by a 3-to-2 process within a vector portal dark sector model; (ii) a study of how the existence of dark sector bound states can influence collider, direct and indirect searches for dark matter; (iii) a new axion dark matter interferometric search using a cavity that is sensitive to the axion-induced rotation of linearly polarized light; (iv) a definitive assessment of the potential contribution of dark matter annihilation and decay to cosmic reionization; (v) new constraints on dark matter annihilation rates and decay lifetimes from 21-cm cosmology; and (vi) a new numerical code, DarkHistory, which significantly improves the computation of the ionization and thermal histories of the universe in the presence of exotic sources of energy injection. These novel ideas span length scales ranging from table-top experiments to the entire cosmos, and represent just a few of the myriad ways in which dark matter may yet surprise us.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 357-389).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123411</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Searching for dark matter using jets and jet substructure at the Large Hadron Collider</title>
<link>https://hdl.handle.net/1721.1/123410</link>
<description>Searching for dark matter using jets and jet substructure at the Large Hadron Collider
Narayanan, Siddharth Madhavan.
Astrophysical observations of gravitational interactions provide strong evidence for the existence of dark matter. Many theories propose, and experiments test, the hypothesis that dark matter may have a particle physics origin, but this remains unproven. One such experiment is the Compact Muon Solenoid at the Large Hadron Collider. If dark matter couples, even weakly, to the Standard Model, then it is possible to produce it in collisions at the LHC. Because it would not interact with the detector, we must look for collisions in which dark matter is produced in association with one or more SM particles. This thesis describes two such analyses: dark matter plus one top quark and dark matter plus two light quarks. Both cases result in complicated detector signatures due to the hadronization of final-state quarks. Recently developed jet substructure techniques were applied using novel methods to identify the hadronization products of high-momentum top quarks. In both analyses, the observed data are found to be consistent with SM backgrounds. We translate these results into the most stringent constraints to date on the relevant beyond-SM models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 155-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123410</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing thermodynamics, correlations, and transport of the 2D Fermi-Hubbard model under a Fermi gas microscope</title>
<link>https://hdl.handle.net/1721.1/123409</link>
<description>Probing thermodynamics, correlations, and transport of the 2D Fermi-Hubbard model under a Fermi gas microscope
Nichols, Matthew Alan.
Ultracold fermionic atoms in optical lattices offer a pristine platform for quantum simulation of materials with strong electron correlations. With the advent of quantum gas microscopy, we now have the ability to observe and manipulate these systems at the level of single atoms and lattice sites. In this thesis, I will describe how we perform fluorescence microscopy on fermionic 40K using Raman sideband cooling, and how we realize the two-dimensional Fermi-Hubbard model on a square lattice, a paradigm believed to capture the essence of high-Tc superconductivity in the cuprates. I will then discuss several experiments we have performed with this system aimed at improving our current understanding of both the equilibrium and transport properties of this strongly correlated many-body Hamiltonian. The first part of this thesis discusses measurements of thermodynamic properties of different states of the Fermi-Hubbard model in equilibrium, including metallic, Mott-insulating, and band-insulating states. With the single-site resolution afforded by our quantum gas microscope, we examine spatial spin and charge correlations in a fermionic Mott insulator as a function of the filling in the lattice. At half-filling, we observe antiferromagnetic spin correlations in the presence of doublon-hole bunching. Upon doping, these spin correlations weaken monotonically, and an interaction-enhanced Pauli hole emerges, a real-space manifestation of Pauli blocking. The second part of this thesis describes near-equilibrium transport experiments we performed with ultracold fermionic atoms, which allowed us to measure both the spin diffusion coefficient and the spin conductivity of a homogeneous Mott insulator at half-filling, transport coefficients that are difficult to measure in the cuprates, and which are highly challenging to calculate theoretically.
In the strongly interacting regime, we observe diffusive spin transport that is driven by super-exchange and doublon-hole-assisted tunneling, and which violates the quantum limit of charge diffusion.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 389-408).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123409</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Z boson cross section measurements and neutral Higgs boson search in the di-muon channel with the CMS detector</title>
<link>https://hdl.handle.net/1721.1/123408</link>
<description>Z boson cross section measurements and neutral Higgs boson search in the di-muon channel with the CMS detector
Niu, Xinmei, Ph. D. Massachusetts Institute of Technology.
In this thesis, measurements of inclusive and differential Z boson production cross sections in proton-proton collisions at [square root of s] = 13 TeV with the CMS detector at the Large Hadron Collider are performed in the di-muon channel. The measured total inclusive cross section times branching ratio is [sigma](pp --&gt; ZX) x [beta](Z --&gt; [mu][mu]) = 1870 ± 2(stat) ± 35(syst) ± 51(lumi) pb for the di-muon invariant mass in the range of 60 to 120 GeV, which is in good agreement with the next-to-next-to-leading-order QCD predictions. The spectra of the Z boson transverse momentum, ... variable, rapidity, and the muon transverse momentum are also measured and compared with theoretical predictions. The large production cross section of the Z boson and the good experimental accessibility of final-state muons permit real-time monitoring of luminosities using the counts of reconstructed Z --&gt; [mu][mu] events. Preliminary results on the counting of Z bosons as a luminometer are shown using the entire 2018 data-taking period. A search for beyond-Standard-Model neutral Higgs bosons decaying to two muons is also presented. No significant excess is observed. A 95% confidence level upper limit is set on ... The exclusion contour is also determined on the parameter phase space in the context of Minimal Supersymmetric Standard Model representative benchmark scenarios.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 173-186).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123408</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware-efficient quantum computation and error correction in bosonic systems</title>
<link>https://hdl.handle.net/1721.1/123407</link>
<description>Hardware-efficient quantum computation and error correction in bosonic systems
Niu, Murphy Yuezhen.
Quantum computing has the potential to eclipse its classical competitors, but only if the number of high-quality qubits can be scaled up. Large-scale quantum systems are impeded by the formidable hardware resources needed to combat growing amounts of errors from hardware imperfections. Previous efforts have mainly focused on either optimizing quantum hardware or finding new quantum algorithms. This thesis explores synergies between the system-specific hardware physics and algorithm design that together yield more than the sum of their parts in the quest for scalable quantum computation in bosonic systems. We present an algorithm for generating nonclassical states of light, using full-quantum [chi](2) nonlinearities, that transcends previous limits on conversion efficiency. We show that such nonlinearities -- which enable highly efficient three-wave mixing between quantized signal, idler, and pump fields -- can be employed in two systematic frameworks for quantum computing. The first, which utilizes the fundamental symmetries of [chi](2) interactions and recognizes that photon loss is their dominant source of errors, provides a set of hardware-efficient quantum error-correction codes and their associated encoded universal gates. One of our codes achieves a constant rate of protected photons, a necessity for robust large-scale quantum computation. The second framework provides hardware-efficient universal quantum control facilitating plug-and-play application of machine learning algorithms. It takes constraints on the hardware resources and control-error models as inputs, and returns robust control pulse shapes for high-fidelity quantum gate execution. The transformative performance gains obtained from this hardware-efficient approach offer potential for scalable quantum computation using available quantum devices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 199-214).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123407</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision measurements of electron-proton elastic scattering cross sections at large Q²</title>
<link>https://hdl.handle.net/1721.1/123406</link>
<description>Precision measurements of electron-proton elastic scattering cross sections at large Q²
Ou, Longwu
The electromagnetic form factors are fundamental quantities characterizing the internal structure of the nucleon. Their measurements have provided significant insight into the spatial distribution and interaction of quarks inside the proton. The knowledge of high-Q2 form factors proves essential in understanding the properties of quantum chromodynamics in the transition region from non-perturbative to perturbative behavior. It also provides important links to generalized parton distributions, which describe the three-dimensional structure of hadrons at the parton level. In view of the significant theoretical research activities in this field, high quality experimental data are crucial for providing stringent tests and benchmarks to guide and test different models. The form factors can be accessed in experiments by measuring elastic scattering of electrons off a hydrogen target. Experiment E12-07-108, which took place at the Thomas Jefferson National Accelerator Facility, conducted precise measurements of the unpolarized e-p elastic scattering cross section over a Q2 range of 0.6-16.5 GeV2. This thesis presents the results for 7 kinematic settings with total uncertainties that are 1.5 times smaller than those of the existing data at large Q2. The proton magnetic form factors were extracted using a parameterization of the form factor ratio obtained from recent polarized e-p scattering experiments. Comparisons to existing global and phenomenological fits are presented.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-192).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123406</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topological quantum optics in two-dimensional atomic lattices</title>
<link>https://hdl.handle.net/1721.1/123405</link>
<description>Topological quantum optics in two-dimensional atomic lattices
Perczel, János.
A key goal in quantum science is to build quantum systems that are robust to disorder and experimental imperfections. The application of topology to quantum physics is one of the most promising avenues towards achieving this goal, since topological systems are generally insensitive to moderate, local perturbations. This thesis is dedicated to the introduction and analysis of novel platforms for engineering topological states in the optical domain. First, we analyze the interaction of atoms in Maxwell's fish eye lens, which is an optical medium mimicking light propagation on the surface of a sphere. Due to the underlying (trivial) spherical topology of the system, light follows circular trajectories in the lens, giving rise to special focusing properties. We investigate the long-range atomic interactions mediated by the lens and the efficiency of entangling operations. We then turn our attention to two-dimensional atomic arrays in free space, where interactions are mediated by photons. We show that in the presence of a uniform magnetic field, the system exhibits a photonic band structure with non-trivial Chern numbers and a topological gap. We explore the topological edge states that arise on the system boundaries, identify the conditions under which edge states are long-lived and show that they are robust to imperfections in the lattice. Finally, we study two-dimensional atomic emitter arrays embedded in photonic crystals. We engineer a quasi-two-dimensional photonic crystal slab that mediates long-range dipolar interactions between emitters and gives rise to topological behavior. We analyze the topological edge states of the system and show that they are robust to the inhomogeneous broadening of the emitters and to missing lattice sites.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 187-204).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123405</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on magnetism in 2D van der Waals antiferromagnetic insulators using terahertz spectroscopy</title>
<link>https://hdl.handle.net/1721.1/123404</link>
<description>Studies on magnetism in 2D van der Waals antiferromagnetic insulators using terahertz spectroscopy
Ozel, Ilkem Ozge.
Revealing the spin excitations of complex quantum magnets is key to developing a minimal model that explains the underlying magnetic correlations in the ground state. In this thesis, we study the low-energy magnons in [alpha]-RuCl3 by combining time-domain terahertz spectroscopy under an external magnetic field and model Hamiltonian calculations. We observe two absorption peaks at 2.0 and 2.4 meV, which we attribute to zone-center spin waves. Using linear spin-wave theory with only nearest-neighbor terms of the exchange couplings, we calculate the antiferromagnetic resonance frequencies and reveal their dependence on an external field applied parallel to the nearest-neighbor Ru-Ru bonds. We find that the magnon behavior in an applied magnetic field can be understood only by including an off-diagonal exchange term in the minimal Heisenberg-Kitaev model. Such an anisotropic exchange interaction, which manifests itself as a result of strong spin-orbit coupling, can naturally account for the observed mixing of the modes at higher field strengths.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 85-99).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123404</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Von Neumann algebras in field theory</title>
<link>https://hdl.handle.net/1721.1/123403</link>
<description>Von Neumann algebras in field theory
Rajagopal, Srivatsan, Ph. D., Massachusetts Institute of Technology.
In this thesis, I show that modular flows of generic excited states in a quantum field theory can be studied by restricting attention to a special class of states whose associated modular flow can be characterised easily; this follows from a certain theorem involving the invertible group of operators in a general von Neumann algebra, namely that such operators are a dense subset of the algebra in the strong operator topology. I also develop tools to compute these flows using a novel perturbation expansion: the structure of the terms appearing in the expansion is made manifest to all orders in the expansion. Finally, I write down an effective action for a general charged fluid which has an anomalously broken symmetry; the novelty here is that this effective action is local (previous treatments gave a non-local effective action for such a fluid). This construction also gives a simple explanation for various puzzling coincidences which have been reported in the literature before.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 169-171).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123403</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and evaluation of high-performance quantum circuit components</title>
<link>https://hdl.handle.net/1721.1/123402</link>
<description>Design and evaluation of high-performance quantum circuit components
Rines, Richard Ellis.
Quantum computers promise to extend the domain of the computable, performing calculations thought to be intractable on any classical device. Rapid experimental and technological progress suggests that this promise could soon be realized. However, these first quantum computers will inevitably be small, faulty, and expensive, demanding implementations of quantum algorithms which are compact, fast, and error-resistant. As the complexity of realizable quantum computers accelerates toward the threshold of quantum supremacy, their capacity to demonstrate a meaningful quantum advantage when applied to real-world tasks depends on the high-performance design, implementation, and analysis of quantum circuits. The first half of the thesis is devoted to Shor's factoring algorithm, seeking to determine the most efficient quantum circuit implementation of a quantum modular multiplier.; Three such implementations are introduced which outperform the best known exact reversible modular multiplier circuits for most practical problem sizes. Reformulated in the framework of quantum Fourier transform (QFT) based arithmetic, two of these circuits are further shown to reduce modular multiplication to a constant number of QFT-like circuits, which can then be parallelized to a linear-depth circuit with just 2n + O(log n) qubits. Motivated by this deconstruction, the final result in this portion is an algorithm for a 'SIMD QFT' - demonstrating that the parallel QFT can be efficiently implemented on a topologically-limited distributed ion-trap architecture with just a single global shuttling instruction. The second half of this thesis focuses on quantum signal processing (QSP), specifically as applied to quantum Hamiltonian simulation.; Hamiltonian simulation promises to be one of the first practical applications for which a near-term device could demonstrate an advantage over all classical systems.
We use high-performance classical tools to construct, optimize, and simulate quantum circuits subject to realistic error models in order to empirically determine the maximum tolerable error rate for a meaningful Hamiltonian simulation experiment on a near-term quantum computer. By exploiting symmetry inherent to QSP circuits, we demonstrate that their capacity for quantum simulation can be increased by at least two orders of magnitude if errors are systematic and unitary. This portion concludes with a thorough description of the classical simulation software used for this analysis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 173-181).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123402</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping cell types, dynamics, and connections in neural circuits</title>
<link>https://hdl.handle.net/1721.1/123401</link>
<description>Mapping cell types, dynamics, and connections in neural circuits
Rodriques, Samuel Gordon.
Neuroscience is limited by the difficulty of recording neural activity, identifying cell types, and mapping connectivity in high throughput. In this thesis, I present several scalable technologies aimed at improving our ability to characterize the activity, composition, and connectivity of neural circuits. My primary contributions include the design for a nanofabricated electrical recording device and a new approach to nanofabrication within swellable hydrogels; a high-throughput method for mapping the locations of cell types in tissue; an approach to direct sequencing of proteins at the single molecule level; an approach to directly recording neural activity into the sequence of RNA, enabling it to be detected by DNA sequencing; and a method for molecular barcoding of neurons, with the goal of enabling a high-throughput approach to neural circuit mapping. I conclude with a consideration of the limitations of the academic incentive structure as concerns the development and deployment of new technologies, and propose a structure for basic science research, complementary to the academic structure, based on the systematic establishment of well-funded, highly focused research projects with clear goals, an incentive to rapidly disseminate information, and limited lifetimes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 227-249).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123401</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing nitrogen-vacancy diamond magnetic sensors and imagers for broadband sensitivity</title>
<link>https://hdl.handle.net/1721.1/123400</link>
<description>Optimizing nitrogen-vacancy diamond magnetic sensors and imagers for broadband sensitivity
Schloss, Jennifer May.
Solid-state spin systems form an increasingly impactful quantum sensing platform. Atomic-scale defects in diamond called nitrogen-vacancy (NV) centers offer high-resolution magnetic sensing and imaging under ambient conditions. NV-based magnetometers have found broad utility thanks to long spin lifetimes at room temperature, coherent microwave spin manipulation, and optical spin-state initialization and readout. Their applications span pure and applied sciences, including condensed matter physics, neuroscience and living systems biology, nuclear magnetic resonance, Earth and planetary science, and industrial vector magnetometry. In this work, we employ ensembles of NV centers for high-sensitivity, broadband magnetic sensing and imaging. We present three experiments, which share a common principal application of time-resolved magnetic field detection from firing neurons.; For each experiment, we implement novel techniques to improve magnetometer performance, optimizing a different variant of the DC magnetic field sensitivity. Among solid-state spin-based sensors, these devices demonstrate record sensitivities to broadband magnetic signals. Nonetheless, the achieved sensitivities remain orders of magnitude away from theoretical limits. Primary obstacles include optical readout fidelities far from unity and typical NV-ensemble dephasing times T*2 thousands of times shorter than spin lifetimes T1. We therefore investigate techniques for improving these key parameters to enable considerable sensitivity enhancements. We develop a strategy for extending T*2 in NV-rich diamonds, which could in turn make exotic techniques to increase readout fidelity more practical. 
Moreover, we identify methods to optimize diamond fabrication and treatment, and we highlight where further materials science research is warranted.; In short, this work demonstrates advances in NV-ensemble magnetic sensing and establishes a basis for further sensitivity improvements, perhaps even inspiring new innovations to approach fundamental limits.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 339-396).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123400</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing quark-gluon plasma with D mesons</title>
<link>https://hdl.handle.net/1721.1/123399</link>
<description>Probing quark-gluon plasma with D mesons
Wang, Jing, Ph. D., Massachusetts Institute of Technology.
Heavy quarks are powerful tools for the study of the properties of the high-density QCD medium created in heavy-ion collisions, as they are sensitive to the transport properties of the medium and may interact with the QCD matter differently from light quarks. As the main observables to study the medium effect, the D0 nuclear modification factor (RAA) provides insights into the flavor dependence of in-medium parton energy loss, and the D0 azimuthal anisotropy coefficient (vn) provides information about the degree of thermalization of the bulk medium. Furthermore, the production of D0 in jets is sensitive to the diffusion in the medium and the role of medium response. Using the large proton-proton and PbPb data samples at √sNN = 5.02 TeV collected by the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC), high precision measurements of D0 have been performed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 171-192).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123399</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing quark-gluon plasma with beauty quarks</title>
<link>https://hdl.handle.net/1721.1/123398</link>
<description>Probing quark-gluon plasma with beauty quarks
Wang, Ta-Wei, Ph. D., Massachusetts Institute of Technology.
This thesis presents the first ever attempts of direct measurements of B mesons in heavy ion collisions with the CMS experiment. The full decay chains of B mesons were reconstructed by identifying the corresponding decay vertices. Machine learning classification models were used extensively and an unprecedented signal to background ratio was reached. Both cross sections and nuclear modification factors which quantify the medium effect of the quark-gluon plasma were measured. This thesis will introduce the techniques that enabled these novel analyses and discuss the implications of the results.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 203-222).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123398</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel transport behavior in two-dimensional semiconducting and superconducting transitional metal dichalcogenides</title>
<link>https://hdl.handle.net/1721.1/123397</link>
<description>Novel transport behavior in two-dimensional semiconducting and superconducting transitional metal dichalcogenides
Yang, Yafang, Ph. D., Massachusetts Institute of Technology.
Atomically layered transitional metal dichalcogenides (TMDs) manifest many fascinating properties, such as atomic-scale thickness, favorable mechanical, electronic, and optoelectronic properties, and strong spin-orbit coupling. In terms of electronic properties, the TMDs range from insulating or semiconducting to metallic or semi-metallic. Some of them also exhibit exotic electronic phases such as charge density waves and superconductivity. Recent advances in nano-materials characterization and device fabrication, in particular the fabrication of high quality van der Waals heterostructures, have boosted studies on two-dimensional layers of thin TMDs for the purposes of both fundamental research and industrial applications. In this thesis, I present a series of experiments investigating the electronic and optoelectronic properties of the semiconducting TMDs MoS2 and WSe2. I also demonstrate technical advances in the fabrication of van der Waals heterostructures, which enable high-quality encapsulated thin TMD devices with ionic liquid introduced as electrolyte. I further show that phase transitions in superconducting TMDs such as 2H-TaS2 can be greatly impacted by dimensionality reduction. A substantial enhancement of the superconducting Tc and a suppression of the CDW transition are observed in 2H-TaS2 in the 2D limit. Lastly, I present a machine learning algorithm to realize pixel-wise classification of laboratory-acquired images of various 2D materials, which might open up new opportunities for full automation of nano-material search and device fabrication.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 149-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123397</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the production of innovation</title>
<link>https://hdl.handle.net/1721.1/123384</link>
<description>Essays on the production of innovation
Chavda, Ankur.
Innovation is central to both the competitive strategy of many firms and the gains in productivity that lead to economic growth. How do we improve our ability to produce innovations? This dissertation studies three aspects of this question: staged development, vertical integration, and venture capital. Staged development of ideas, common in many innovative settings including biotech and venture capital, involves partially funding an idea with the goal of learning more about that idea before additional funding is provided. My results suggest there are cases where committing to ideas, avoiding staged development, can lead to better outcomes. Staged development has the potential of distorting effort to an extent that outweighs any benefit provided by its implicit option value. Research units pursuing innovations can either be integrated within the firm exploiting those innovations or kept as a separate entity. I find that integration leads to a higher rate of new innovations. Separating the research unit can reduce its appetite for risk, changing both the rate and direction of innovation. Finally, uncertainty surrounds strategies to exploit innovations: new ideas by definition have not been tested by market forces. I show how venture capital plays a key role in resolving this uncertainty for entrepreneurs with new ideas. Specifically, venture capital provides the most value for entrepreneurs who are themselves the most uncertain about the underlying value of their ideas.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123384</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on automated vehicles and the future of mobility</title>
<link>https://hdl.handle.net/1721.1/123383</link>
<description>Essays on automated vehicles and the future of mobility
Naumov, Sergey A.
Automated vehicle (AV) and electric vehicle (EV) technologies are expected to substantially reduce the negative externalities of driving. Combined with ubiquitous ride-hailing platforms that facilitate ride-sharing (pooling), AVs promise to make automobile transportation faster, safer, cheaper, more convenient, and environmentally friendly. Yet the endogenous impacts of AVs on demand for driving are not well understood. My first paper explores the effect of AVs and pooling on the performance of both roads and public transit in a bimodal transportation system. I develop a dynamic model that describes how commuters choose between driving a car or riding public transit in response to the changing attractiveness of these modes in the presence of AVs and pooling.; I show that the well-intentioned move to promote pooling may have the unintended consequence of both worsening public transit quality and increasing rather than decreasing traffic congestion if the public transit downward spiral is triggered. In my second paper, I use conjoint analysis to estimate consumer preferences for the attributes of ride-hailing services. I show that because consumers have an inherent aversion to pooling, and prefer cheaper trips, consumer choice of pooling is likely to drop in the future if the cost of driving falls with the introduction of AVs as some predict. In my third paper, I study the role of accelerated vehicle retirement programs ('cash-for-clunkers') in reducing transportation fleet emissions.; I use a model of vehicle fleet turnover in the United States to show that achieving climate goals will likely require 'cash-for-clunkers' policies that incentivize the accelerated retirement of older, less-efficient vehicles to be replaced by electric vehicles, combined with a rapid transition to renewable electricity. I demonstrate that such policies can be an effective way to make the vehicle fleet less emission-intensive, but that the costs could be high.
I show that combining 'cash-for-clunkers' with a gas tax or carbon price would help offset the costs incurred while also reducing driving demand, helping to achieve a low-emissions transition in time.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123383</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on collective intelligence and the future of work</title>
<link>https://hdl.handle.net/1721.1/123382</link>
<description>Essays on collective intelligence and the future of work
Duhaime, Erik P.
This dissertation considers how information technologies have enabled new ways of organizing work. The first essay investigates how artificial intelligence (AI) systems will impact the field of medical diagnostics. While previous research has shown that AI can diagnose skin cancer as accurately as professional dermatologists, I explore what happens when AI is combined with-rather than compared to-human intelligence. Using a dataset of the diagnoses of 1 state-of-the-art AI system and 21 board-certified dermatologists on 371 biopsy-proven cases of skin lesions, I find that averaging the opinion of an individual dermatologist with AI often does not lead to higher accuracy than AI alone. However, combining AI with the average opinion from groups of dermatologists leads to higher performance than individuals alone, AI alone, and groups alone. This suggests that in many cases artificial intelligence will not simply replace jobs, but rather, will transform how work is organized.; The second essay (coauthored with B. Bond, Q. Yang, P. de Boer, and T.W. Malone) considers how crowdsourcing can be used to find innovative solutions to complex problems. In past research, recursive incentive schemes have shown promise for conducting social search by motivating people to use their weak ties to find distant targets, such as specific people or even weather balloons placed at undisclosed locations. We report on a case study of a similar recursive incentive scheme for finding innovative ideas. Specifically, we implemented a competition to reward individual(s) who helped refer Grand Prize winner(s) in MIT's Climate CoLab, an open innovation platform for addressing global climate change.
Using data on over 78,000 CoLab members and over 36,000 people from over 100 countries who engaged with the referral contest, we find that people who are referred using this method are more likely than others both to submit proposals and to submit high quality proposals.; Furthermore, we find suggestive evidence that among the contributors referred via the contest, those who had more than one degree of separation from a pre-existing CoLab member were more likely to submit high quality proposals. Thus, the results from this case study are consistent with the theory that people from distant networks are more likely to provide innovative solutions to complex problems. The third essay (coauthored with Z. Woessner) considers how newly enabled organizational designs are changing the social norms and expectations of workers. Specifically, we investigate the social norm of tipping and propose that work in the "gig economy" is associated with a breakdown of tipping norms in part because of workers' increased autonomy in terms of deciding when and whether to work.; We present four studies to support our hypothesis: a survey vignette experiment with workers on Amazon Mechanical Turk (Study 1), an analysis of New York City taxi data (Study 2), a field experiment with restaurant employee food delivery drivers (Study 3), and a field experiment with gig-worker food delivery drivers (Study 4). In Studies 1 and 2, we find that consumers are less likely to tip when workers have autonomy in deciding whether to complete a task. In Study 3, we find that restaurant delivery employees notice upfront tips (or lack thereof) and alter their service as a result. In contrast, in Study 4, we find that gig-workers who agree to complete a delivery for a fixed amount that includes an upfront tip (or lack thereof) are not responsive to tips. 
Together, these findings suggest that the gig economy has not only transformed employee-employer relationships, but has also altered the norms and expectations of consumers and workers.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123382</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The changing nature of professional work inside an incumbent firm in the age of social media : examining the challenge of coproduction</title>
<link>https://hdl.handle.net/1721.1/123381</link>
<description>The changing nature of professional work inside an incumbent firm in the age of social media : examining the challenge of coproduction
Truelove, Emily.
Advances in the Web, social media, and digital technologies are changing the nature of professional work inside incumbent organizations-often in ways that involve permeability of once-sealed boundaries, and usually in ways that require reconfiguration of long-held work practices. In this dissertation, I explore these issues, drawing on data from a 24-month ethnographic study of an incumbent firm in the advertising industry ("AdCo"). During my study, AdCo continued to do traditional advertising such as television commercials, and it developed a strategic new offering called participatory ads, which involved using social media to coproduce an ad's content with the audience. As I show, doing participatory advertising involved technical and political challenges both inside and outside the firm.; In addition, and as can be common in firms experiencing the digital transformation of their industries, doing participatory advertising was also an occasion for AdCo members to reconsider what it meant to be in their business in the first place. In Chapter 1, I focus on how coproducing ads with the audience created tensions between professional groups inside the firm. This tension was between Creative department members, who were accustomed to controlling the initial phase of ad-making where the "big idea" for an ad was developed, and Digital department members, who had long been regarded as a support department but who had critical expertise needed to develop high quality ideas for participatory ads. I show how it was only when Creatives used what I call "reconfiguring the sacred" practices that workgroups were able to develop high quality ideas and receive a client greenlight to launch.; In Chapter 2, I focus on coordination between workgroups inside AdCo and the audience outside in the participatory advertising projects that launched.
In participatory ads, audience members were unpaid, not professionally trained, participating for their own entertainment, and generally not even aware that they were part of a larger effort. Therefore, conventional mechanisms for coordinating work were unsuitable. I describe the importance of professionals using what I call inspiring and harmonizing engagement practices in order to motivate the audience to participate, and to do so in ways that were strategically beneficial for the firm. In Chapter 3, I review and synthesize various research streams that examine how firms are doing hybrid forms of work that involve using nonprofessional actors outside the boundary of the firm in their operations.; I focus in particular on the challenges that professionals inside firms face when doing this kind of work, dividing these challenges into those related to willingness and those related to capabilities. This dissertation advances research on the changing nature of professional work in the age of the Web and social media, the production of collective creative work, and managing boundaries inside and outside of incumbent firms during digital transformation efforts.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123381</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correlations in Monte Carlo eigenvalue simulations : uncertainty quantification, prediction and reduction</title>
<link>https://hdl.handle.net/1721.1/123375</link>
<description>Correlations in Monte Carlo eigenvalue simulations : uncertainty quantification, prediction and reduction
Miao, Jilang.
Monte Carlo methods have mostly been used as a benchmark tool for other transport and diffusion methods in nuclear reactor analysis. One important feature of Monte Carlo calculations is the report of the variance of the estimators as a measure of uncertainty. In the current production codes, the assumption of independence of neutron generations in Monte Carlo eigenvalue simulations leads to an oversimplified estimate of the uncertainty of tallies. The correlation of tallies between neutron generations can cause the reported uncertainty to be underestimated by a factor of 8 in assembly-size tallies in a typical LWR. This work analyzes the variance/uncertainty convergence rate in Monte Carlo eigenvalue simulations and develops different methods to properly report the variance.; To correct the underestimated variance as a post-processing step, a simple correction factor can be calculated from the correlation coefficients estimated from a sufficient number of active generations and fitted to decreasing exponentials. If the variance convergence rate is needed before or during the simulation to optimize the run strategy (number of generations and neutrons per generation), a discrete model can be constructed from the inactive generations that can predict the correlation behavior of the original problem. Since it is not efficient to perform variance correction on all tallies in all problems, a simple correlation indicator is also developed to quickly determine the potential impact of correlations on a given tally in a given problem. This can help decide if more complicated correction analysis or the use of independent simulations should be used to calculate the true variance.; A run strategy to reduce correlations is also investigated by introducing the notion of delayed neutrons. A predictive model for the new source update scheme was developed to help identify optimal delayed neutron parameters before implementing them in OpenMC.
Optimal run strategies in terms of delayed bank size, frequency of delayed bank sampling and true simulation costs are proposed.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 323-327).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123375</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Validation of turbulent transport models on Alcator C-Mod and ASDEX upgrade</title>
<link>https://hdl.handle.net/1721.1/123374</link>
<description>Validation of turbulent transport models on Alcator C-Mod and ASDEX upgrade
Creely, Alexander James.
This thesis developed hardware and analysis techniques to measure two validation constraints experimentally, and then applied these constraints in the validation of plasma turbulent transport models on two tokamaks, Alcator C-Mod and ASDEX Upgrade, resulting in both greater physics understanding of multi-scale turbulent interactions and greater confidence in predictions for future fusion devices. On the path toward the clean, sustainable, and safe energy of a fusion power plant, experiment and modeling each contribute something unique. Before one can in good faith use plasma turbulent transport models to explain turbulent dynamics or predict machine performance, however, one must ensure that these models can correctly reproduce experimentally measured conditions on existing devices. Validation, the process of determining how accurately a model represents reality, has thus become a key endeavor in fusion energy research.; First, this thesis developed an analysis technique to measure the electron perturbative thermal diffusivity based on tracking the propagation of heat pulses generated by partial sawtooth crashes. In addition, correlation electron cyclotron emission (CECE) hardware was constructed on both Alcator C-Mod and ASDEX Upgrade, and analysis techniques were derived, in order to measure turbulent electron temperature fluctuations. These validation constraints were applied to two turbulent transport models, the nonlinear gyrokinetic model and the quasi-linear gyrofluid model. 
In particular, these constraints were used to study the importance of multi-scale turbulent effects (due to coupling between ion- and electron-scales) in correctly modeling plasma behavior.; The gyrokinetic codes GYRO and GENE were validated on Alcator C-Mod and ASDEX Upgrade, respectively, using both constraints developed in this thesis as well as ion and electron heat fluxes from power balance, revealing that in some cases ion-scale simulations are sufficient to match experimental constraints, while in other cases multi-scale effects are important. To investigate this discrepancy, a novel type of validation study was performed with the gyrofluid code TGLF, including many discharges from both machines. This study resulted in two physical criteria that determine when multi-scale effects are important, and when ion-scale simulations are sufficient to model the plasma behavior, shedding light on the physical phenomena that govern the importance of multi-scale turbulent effects.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Applied Plasma Physics, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 351-369).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123374</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational characterization of radiation-induced defect dynamics and material response</title>
<link>https://hdl.handle.net/1721.1/123373</link>
<description>Computational characterization of radiation-induced defect dynamics and material response
Jin, Miaomiao.
Material degradation due to radiation damage poses serious concerns about the reliability and durability of any reactor design. To understand material performance under the extreme environments combining high temperature and intense irradiation, the response of radiation damage must be meticulously analyzed, both experimentally and computationally. These efforts will not only bridge the knowledge gap in the fundamental understanding of physical processes, but also allow for prediction of material behavior under a variety of conditions and development of novel materials with superior radiation tolerance. This thesis investigates multiple aspects of radiation damage in materials using various computational methods over a wide range of time and length scales, including atomistic description of defect dynamics, multiscale simulations of radiation processes, and artificial intelligence prediction of material responses based on experimental studies.; Firstly, to resolve the fundamental mechanisms of radiation-induced behavior, traditional molecular dynamics simulations of single-atom damage cascades are extended by developing an algorithm to appropriately introduce numerous consecutive cascades; hence, an experimental dose level on the order of dpa (displacements per atom) can be achieved to enable realistic understanding of observed material responses. This approach has been utilized to examine radiation behaviors such as defect dynamics and grain boundary migration in solid-solution alloys and nanocrystalline metals. Secondly, to break the intrinsic limitation of scale in atomistic simulations, a multiscale microstructural evolution framework that links binary-collision approximation, molecular dynamics and cluster dynamics is built to describe mesoscale experimental observations. 
It is used to successfully explain the non-power-law defect distribution in irradiated tungsten.; This tool can be generalized to study the spatially dependent defect evolution in materials under ion irradiation. Finally, to bypass the physics-based complexity of describing materials evolution in real applications, a holistic view enabled by machine learning techniques is utilized and applied to predict the onset of void swelling in metals, using data manually collected from experimental studies. The model generates satisfactory predictions for unseen data based on material properties and experimental parameters.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 177-200).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123373</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capturing radiation-induced microstructure evolution in situ through direct property monitoring</title>
<link>https://hdl.handle.net/1721.1/123372</link>
<description>Capturing radiation-induced microstructure evolution in situ through direct property monitoring
Dennett, Cody Andrew.
Advanced materials development for nuclear systems is currently a time and resource intensive process relying on many iterations of material exposure and destructive testing. There exist few methods for characterizing irradiated material performance in situ, during exposure. Techniques such as in situ TEM or in situ Raman spectroscopy can provide local structural information during irradiation, but no current methods can continuously monitor bulk thermal and mechanical properties. Such a tool would provide the ability to map dose-property relationships at a resolution not previously possible, enhancing mechanistic understanding of irradiation-induced evolution. These methods could also be used to identify the onset of emergent irradiation-induced effects such as the transition from incubation to steady-state void swelling.; For this purpose, we have identified transient grating spectroscopy (TGS) as an appropriate technique to obtain these dose-property relationships during irradiation. This method, by optically inducing and monitoring monochromatic surface acoustic waves on materials under investigation, is able to determine the elastic and thermal transport properties of a microns-thick layer at the surface of a sample, the same depth to which ion beams can impose damage. First, we demonstrated that this method is sensitive enough to measure changes in material properties induced by radiation. Afterwards, we designed new optical geometries which enable second-scale time-resolved TGS measurements on dynamically changing materials. In addition, we developed new analytical methods through which multiple material properties, acoustic wave speed and thermal transport properties, may be extracted simultaneously from single-shot measurements.; As proof-of-principle experiments, ion irradiation-induced property changes have been measured post-irradiation on pure, single crystal copper. 
In these copper samples, TGS measurements indicate the presence of volumetric void swelling, which is confirmed with scanning transmission electron microscopy (STEM). These developments together show that TGS is capable of capturing irradiation-induced evolution in real time and motivate the design and commissioning of an in situ experiment for ion beam irradiation and TGS monitoring. To this end, an in situ TGS beamline experiment for concurrent ion beam irradiation and property monitoring has been developed on the 6 MV tandem accelerator at the Ion Beam Laboratory at Sandia National Laboratories. The in situ ion irradiation TGS (I3TGS) facility has the ability to monitor material evolution at high temperatures in real time under ion bombardment.; Using high-energy self-ions, we are studying radiation damage effects on the thermomechanical properties of pure metals. In these experiments, irradiation-induced void swelling has been monitored at an orders-of-magnitude finer dose resolution than is possible with traditional methods. This tool has allowed the onset of swelling to be pinpointed in applied dose, a key consideration when developing new materials for use in nuclear systems, on the timescale of days rather than months or years. We are now able to provide the type of rapid, engineering-relevant data necessary to speed the innovation cycle in nuclear materials development. Moving forward, these methods can be used as a screening tool to expedite the design and testing process for advanced nuclear materials.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 129-138).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123372</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perturbative transport experiments and time-dependent modeling in Alcator C-Mod and DIII-D</title>
<link>https://hdl.handle.net/1721.1/123371</link>
<description>Perturbative transport experiments and time-dependent modeling in Alcator C-Mod and DIII-D
Rodríguez Fernández, Pablo, Ph. D., Massachusetts Institute of Technology.
Perturbative transport experiments in magnetically confined plasmas have shown, for more than 20 years, that the injection of cold pulses at the plasma edge can trigger a rapid increase in core temperature. Because no single standard local transport model tried to date has been able to reproduce satisfactorily all the observed temporal behavior in the experiments, these transient transport phenomena feature prominently as an open question in the community and as a challenge for predictive capabilities in tokamak burning plasmas, such as ITER and SPARC. For the first time after more than two decades of experimental evidence, this Thesis resolves this long-standing enigma in plasma transport, through modeling of experiments conducted on the Alcator C-Mod and DIII-D tokamaks.; Predictive integrated simulations with the Trapped Gyro Landau Fluid (TGLF) quasilinear transport model demonstrate that the increase of core temperature in some regimes, and lack thereof in other regimes, can be explained by a change in dominant linear micro-instability in the plasma core. The effects of major radius, electron density, and plasma current on the cold pulse are well captured by TGLF, including the relative change in position of the temperature flex point as current density changes. Linear stability analysis of simulated density and current scans in Alcator C-Mod reveals a competition between trapped electron and ion temperature gradient modes as the main driver of the core transient response. Measurements of electron density evolution during the cold-pulse propagation in DIII-D are enabled by a high time resolution density profile reflectometer.; The density evolution reveals the quick propagation of a pulse from edge to core, which is the mechanism to transiently increase core temperature in low-collisionality plasmas. 
The work presented in this Thesis demonstrates that the existence of nonlocal heat transport phenomena is not necessary for explaining the behavior and time scales of cold-pulse experiments in tokamak plasmas.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123371</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum system identification by local measurements</title>
<link>https://hdl.handle.net/1721.1/123370</link>
<description>Quantum system identification by local measurements
Sone, Akira, Ph. D., Massachusetts Institute of Technology.
Quantum information processing, including quantum computation, quantum communication, and quantum metrology, is expected to be the next-generation information technology capable of outperforming classical devices. The physical platform for quantum information devices requires many-body quantum systems with many interacting qubits (two-level systems), and several experimental systems have been actively studied in recent years for realizing such scalable quantum information processors. Reliable quantum information processing essentially requires robust initialization of the quantum system, robust control protocols and precise measurement. As these indispensable tasks strongly depend on the physical properties and dynamics of the quantum system, the precise characterization of the experimental system, namely quantum system identification, is a crucial prerequisite.; In a closed quantum system, the Hamiltonian includes the information about the properties of the system, such as the number of qubits, the single-body energies of the qubits, the topology of the qubit network graph, and the coupling types between qubits. For a thermal equilibrium state, the temperature also becomes an important parameter characterizing the thermal properties of the system. Quantum system identification aims at extracting these elements from measurements of the system dynamics or state. However, performing measurements over the whole many-qubit system is typically a demanding task in the laboratory. Furthermore, given the lack of knowledge about the system, one cannot control or measure the system with high accuracy before characterizing it.; Thanks to recent advances in quantum metrology with a single qubit sensor, a well-characterized qubit system can play the role of a quantum probe, which is coherently coupled to the target system, thus enabling practical schemes to identify the target many-body quantum system indirectly through system-probe correlations. 
More broadly, this quantum probe strategy is an example of system identification performed via local measurements. This thesis focuses on the role of quantum correlations in quantum system identification assisted by local measurements in two different regimes: (1) the dynamical regime and (2) the equilibrium regime. In the dynamical regime, we introduce the mathematical concept of quantum system identifiability with respect to Hamiltonian parameter identification and Hilbert space dimension identification.; Exploiting the linearity of the dynamics, we employ linear system realization theory, algebraic geometry and graph theory to analyze the identifiability of an unknown Hamiltonian or the dimensions of the Hilbert space of the target many-body quantum system. Based on the formalism, we propose practical algorithms for both identifiability problems. We further find that propagation of correlations between the quantum probe and the whole system is a necessary condition to fully identify the parameters and dimensions of the target system through local measurements on the quantum probe. In the equilibrium regime, the thesis discusses the problem of estimating either the temperature or Hamiltonian parameters, which can in general be extracted by characterizing the thermal equilibrium state. We discuss a general local measurement scheme, which we call "greedy local measurement scheme", where one performs sequential optimal measurement on a complete set of subsystems.; We introduce a practical measure of nonclassical correlations, called discord for local metrology, to measure the nonclassical correlations induced by local optimal measurements. By comparing the greedy local measurement scheme and global measurement scheme, we explicitly demonstrate that in the high-temperature limit discord for local metrology quantifies the ultimate precision limit loss in local metrology. 
Conversely, this shows that nonclassical correlations could contribute to sensitivity enhancement in parameter estimation even at thermal equilibrium. These results can be expected to contribute to the characterization of near-term quantum information processors, such as Noisy Intermediate-Scale Quantum (NISQ) devices, and to find applications in quantum sensing, e.g. in room-temperature nanoscale magnetic resonance sensing of nuclear spins in molecules or imaging of biological complex systems.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-165).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123370</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A framework for comparing nuclear warhead authentication protocols</title>
<link>https://hdl.handle.net/1721.1/123368</link>
<description>A framework for comparing nuclear warhead authentication protocols
Macdonald, Ruaridh (Ruaridh R.)
Even with the end of the Cold War, nuclear arms control continues to be a cornerstone of strategic stability and international non-proliferation efforts. New treaties are necessary to build upon, or at least maintain, the status quo, and will rely upon verification technologies and protocols to ensure all sides are dismantling their warheads as promised. The nuclear weapon states refuse to participate in any process which might reveal the design of their warheads to an adversary or would-be proliferator. This makes warhead authentication, the critical verification step where the object to be dismantled is confirmed to be an authentic warhead, especially challenging. Despite several decades of research, there is no agreed means of describing or assessing warhead authentication protocols. This has hindered protocol development, and made it more difficult for the policy and technical communities to communicate what is important and feasible.; This thesis presents a framework for describing warhead authentication protocols and quantifying their performance. The framework draws on methods used to assess digital authentication protocols, as well as information-theoretic analysis of privacy. A model is developed for describing authentication protocols, showing how authentication questions, physical properties, and measurable data relate to one another. This allows the objectives and assumptions of a protocol to be made explicit, helping to ensure that protocols are compared fairly. It was found that the protocols in the literature have made use of very different assumptions, and that this has influenced their choices of measurement technology and concepts of operation. 
Having established how to describe protocols, the thesis investigates how best to quantify the completeness (type I error rate), soundness (type II error rate), and information privacy of a protocol.; While the absolute soundness cannot be calculated without knowledge of all possible hoaxes, a conditional soundness can be estimated using a minimax approach. A new measure of information privacy is presented, based on a change in the KL divergence between an inspector's beliefs and the actual warhead design, when the inspector starts from an incorrect prior.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 173-179).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123368</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of an isotope-sensitive warhead verification technique using nuclear resonance fluorescence</title>
<link>https://hdl.handle.net/1721.1/123367</link>
<description>Development of an isotope-sensitive warhead verification technique using nuclear resonance fluorescence
Vavrek, Jayson Robert.
Nearly three decades after the end of the Cold War, nuclear arms control remains a pressing global issue. Despite obligations under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) to make 'good-faith efforts' to disarm their nuclear arsenals, the United States and Russia still maintain thousands of nuclear warheads. Progress towards complete disarmament has been gradual due to a variety of socio-political barriers, but future efforts towards nuclear arms reduction will face an additional technological hurdle: no technology exists to verify that warheads slated for dismantlement are authentic without revealing any sensitive weapons design information in the process. Despite several decades of research, no technology has solved this apparent paradox between information security and confidence in a warhead verification measurement.; Recent work by Kemp, Danagoulian, Macdonald, and Vavrek [1] has produced a novel physical cryptographic verification protocol that approaches this treaty verification problem by exploiting the isotope-specific nature of nuclear resonance fluorescence (NRF) measurements to verify the authenticity of a warhead. To protect sensitive information, the NRF signal from the warhead is convolved with that of an encryption foil that contains key warhead isotopes in amounts unknown to the inspector. The convolved spectrum from a candidate warhead is then statistically compared against that from an authenticated template warhead to determine whether the candidate itself is authentic. Since only the final, convolved spectra are observable, and the detailed foil construction is unknown to the inspector, sensitive information about the warhead is encrypted by physics rather than by software or electronics.; In this thesis, we performed proof-of-concept NRF warhead verification experiments at the High Voltage Research Laboratory (HVRL) at MIT [2]. 
Using high-purity germanium (HPGe) detectors, we measured NRF spectra produced by the interrogation of proxy 'genuine' and 'hoax' objects by a 2.52 MeV endpoint bremsstrahlung beam. The observed differences in NRF intensities near 2.2 MeV indicate that the physical cryptographic protocol can distinguish between proxy genuine and hoax objects with high confidence. Extrapolations to thicker warheads and dedicated verification facilities indicate that realistic warhead verification measurements could be made on the order of hours. In support of these and future NRF experiments, we also improved and benchmarked the G4NRF code for the simulation of NRF in Geant4 [3]. We first constructed a high-accuracy semi-analytical model for the expected NRF count rate in both simple homogeneous and more complex heterogeneous geometries.; We then performed Geant4+G4NRF simulations with these geometries, and found agreement in NRF rates predicted by the semi-analytical model and observed in the simulation at a level of ~1% in simple test cases and ~3% in the more realistic complex scenarios. These results improve upon the ~20% level of the initial G4NRF benchmarking study and establish a highly accurate NRF framework for Geant4 [4]. Finally, we conducted a G4NRF validation study using the NRF data taken during the warhead verification experiments [4]. Agreement between the absolute NRF count rates observed in the data and predicted by extensive Geant4+G4NRF modeling validates the combined Geant4+G4NRF model to within ~20% in the 238U NRF transitions and 9% in 27Al, for an average 14% discrepancy across the entire study. Additionally, agreement between the model and data in relative NRF rates is found at the level of 0.5%.; Such agreement in both relative and absolute analyses provides good predictive capability for the design and analysis of future NRF experiments using G4NRF, whether for warhead verification or other applications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 93-99).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123367</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of parton energy loss in heavy ion collisions using charged particle spectra measured with CMS</title>
<link>https://hdl.handle.net/1721.1/123355</link>
<description>Study of parton energy loss in heavy ion collisions using charged particle spectra measured with CMS
Baty, Austin A.
The phenomenology of the strong nuclear force is still not well understood at low momentum transfers and requires experimental input to constrain. Collisions of heavy ions at the Large Hadron Collider provide a unique opportunity to explore this kinematic region because they create a novel form of matter: the quark-gluon plasma (QGP). Using the CMS detector, spectra of charged particles originating from proton-proton (pp), proton-lead (pPb), and lead-lead (PbPb) collisions at a center-of-mass energy per nucleon pair (√s_NN) of 5.02 TeV are examined as a function of transverse momentum and centrality. Nuclear modification factors and fragmentation functions are constructed from these spectra. By comparing to pp collision reference spectra, a puzzle concerning previous measurements in pPb collisions is clarified. A strong suppression of particle production observed in PbPb collisions is also quantified. Finally, collisions of xenon nuclei are also studied to constrain the path length dependence of parton energy loss. The strength of energy loss is found to increase with both √s_NN and the average path length through the QGP. Comparisons to theoretical models and previous measurements indicate that the path length dependence is between linear and quadratic, as expected from a combination of collisional and radiative energy loss mechanisms.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 223-245).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123355</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A room temperature optomechanical squeezer</title>
<link>https://hdl.handle.net/1721.1/123354</link>
<description>A room temperature optomechanical squeezer
Aggarwal, Nancy, Ph. D., Massachusetts Institute of Technology.
Decades of advancement in technologies pertaining to interferometric measurements have made it possible for us to make the first ever direct observation of gravitational waves (GWs). These GWs, emitted from violent events in the distant universe, bring us crucial information about the nature of matter and gravity. In order to detect GWs from even more distant or weaker sources, we must further reduce the noise in our detectors. One of the noise sources that currently limits GW detectors comes from the fundamental nature of measurement itself. When a certain measurement reaches very high precision, the Heisenberg uncertainty principle comes into play. In GW detectors, this uncertainty manifests itself in the quantum nature of the light. Due to its quantum nature, light (or the electromagnetic field) has an uncertain amplitude and phase.; Since the interferometric measurement directly measures the phase of light, this uncertainty poses a limit on the precision of GW measurements. Additionally, this measurement is also subject to quantum back-action, which arises from the radiation pressure force fluctuations caused by the amplitude uncertainty (quantum radiation pressure noise, QRPN). In order to lower this quantum noise, GW detectors plan to use squeezed light injection. Squeezed light is a special quantum state of light which has lower uncertainty in a certain quadrature, at the expense of higher uncertainty in the orthogonal quadrature. In this thesis, I focus on using radiation-pressure-mediated optomechanical (OM) interaction to generate squeezed light. Creating squeezed states by using optomechanical interaction opens up possibilities for engineering truly wavelength-independent squeezed light sources that may also be more compact and robust than traditionally used non-linear crystals.; Additionally, this project inherently involves studying the OM interaction, which is the mechanism for back-action noise in GW detectors. 
Our basic setup is a Fabry-Perot cavity with a movable mirror. We start by understanding the physics of this system in the presence of realistic imperfections like losses and classical noise. This study furthers the previous work done on OM squeezing in an ideal Fabry-Perot cavity. We use this understanding of the system to optimize the experimental parameters to obtain the greatest possible squeezing in a broad audio-frequency band at room temperature. This optimization involves choosing the optical properties of the cavity and the mechanical properties of the oscillator. We then present the experimental implementation of this design, and the subsequent observation of QRPN as well as OM squeezing from the optimized design.; These observations are the first ever direct observation of a room temperature oscillator's motion being overwhelmed by vacuum fluctuations. Moreover, this is the first time it has been shown in the low-frequency band, which is relevant to GW detectors but poses its own technical challenges, and hence had not been done before. Being in the back-action dominated regime along with optimized optical properties has also enabled us to observe OM squeezing in this system. This is the first direct observation of quantum noise suppression in a room temperature OM system. It is also the first direct evidence of quantum correlations in the audio frequency band, broadband at non-resonant frequencies.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 294-306).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123354</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultracold bosons in optical lattices for quantum measurement and simulation</title>
<link>https://hdl.handle.net/1721.1/123353</link>
<description>Ultracold bosons in optical lattices for quantum measurement and simulation
Burton, William Cody.
Ultracold atoms provide a platform that allows for pristine control of a physical system, and have found uses in both the fields of quantum measurement and quantum simulation. Optical lattices, created by the AC Stark shift of a coherent laser beam, are a versatile tool to control ultracold atoms and implement novel Hamiltonians. In this thesis, I report on three experiments using the bosonic species Rubidium-87 trapped in optical lattices. I first discuss our work in simulating the Harper-Hofstadter Hamiltonian, which describes charged particles in high magnetic fields, and has connections to topological physics. To simulate the charged particles, we use laser-assisted tunneling to add a complex phase to tunneling in the optical lattice. For the first time, we have condensed bosons into the ground state of the Harper-Hofstadter Hamiltonian.; In addition, we have demonstrated that we can add strong on-site interactions to the effective Hamiltonian, opening the door to studies of interesting states near the Mott insulator transition. Next, I present a novel technique to preserve phase coherence between separated quantum systems, called superfluid shielding. Phase coherence is important for both quantum measurement and simulation, and is fundamentally limited by projection noise. When an interacting quantum system is split, frozen-in number fluctuations lead to fluctuations of the relative phase between separated subsystems. We cancel the effect of these fluctuations by immersing the separated subsystems in a common superfluid bath, and demonstrate that we can increase coherence lifetime beyond the projection noise limit. Finally, I discuss our efforts in simulating magnetic ordering in the spin-1 Heisenberg Hamiltonian.; It is hard to adiabatically ramp into magnetically ordered ground states, because they often have gapless excitations. 
Instead, we use a spin-dependent lattice to modify interspin interactions, allowing us to ramp into the spin Mott insulator, which has a gap and can therefore act as a cold starting point for exploration of the rest of the phase diagram. We have achieved a cold spin temperature in the spin Mott insulator, and I discuss plans to also achieve a cold charge temperature and then ramp to the the xy-ferromagnet, which has spin-charge separation.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 131-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123353</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluctuating interfaces and paths in disordered and non-equilibrium systems</title>
<link>https://hdl.handle.net/1721.1/123352</link>
<description>Fluctuating interfaces and paths in disordered and non-equilibrium systems
Chu, Sherry(Yun Sherry)
In this thesis, we study the statistics of fluctuating paths and interfaces in the presence of disorder. Specifically, we consider systems in the Kardar-Parisi-Zhang universality class for stochastic interface growth, from the perspectives of both fundamental statistical mechanics and applications to real world problems. We show numerically that the probability distribution associated with directed polymers in random media, a lattice model in this universality class, interpolates between Tracy-Widom and Gaussian distributions when spatial correlations are added to the random energy landscape. As a possible application, we examine the statistics of optimal paths on actual road networks as given by GPS routing, exploring connections to and distinctions from directed polymers. We also investigate the effects of roughness in the growth front of a bacterial range expansion. There, we find that such roughness can account for the experimentally observed super-diffusivity, and leads to a rapid loss of genetic diversity. Finally, we explore the complete eigenvalue spectrum of products of random transfer matrices, as relevant to a finite density of non-intersecting directed polymers. We identify a correspondence in distribution to eigenvalues of Gaussian random matrices, and show that the density of states near the edge of the spectrum is altered by the presence of disorder.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 117-123).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123352</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Population and evolutionary dynamics during microbial range expansions</title>
<link>https://hdl.handle.net/1721.1/123351</link>
<description>Population and evolutionary dynamics during microbial range expansions
Gandhi, Saurabh Rajendra.
Spatial expansions occur across multiple scales, from the expanding range of a species to the growth of tumors and microbial biofilms. In ecology, range expansions are becoming more frequent due to environmental changes and rare long distance dispersal, often facilitated by anthropogenic activities. Simple models in theoretical ecology explain many emergent properties of range expansions, such as a constant expansion velocity, in terms of organism-level properties such as growth and dispersal rates. Moreover, the evolution and potentially even the survival of an expanding population depends on its genetic diversity, which is also predicted to decline drastically during range expansions. However, testing these quantitative predictions in natural populations is difficult because of large environmental variability and the inability to replicate historical processes.; In this thesis, I describe the use of a microbial model system to gain a deeper understanding of spatial range expansions in a controlled and replicable setting. In particular, I study the role of cooperative growth in spatial expansions. Given the prevalence of cooperative growth in nature, understanding the effects of cooperativity is essential to managing invading species and understanding their evolution. For non-cooperative growth, the expansion dynamics are dominated by population growth at the low-density front, which pulls the expansion forward. I find these expansions to be in close quantitative agreement with the classical theory of pulled waves by Fisher and Skellam, suitably adapted to my experimental system. However, as cooperativity increases, the expansions transition to being pushed, i.e. controlled by growth in the bulk as well as in the front.; In addition to the population dynamics, cooperation within populations is also predicted to significantly alter the evolutionary fate of expanding populations.
This difference in evolutionary dynamics within pulled and pushed waves is also studied experimentally.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 91-98).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123351</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of single-cell mass, volume and stiffness during mitosis</title>
<link>https://hdl.handle.net/1721.1/123350</link>
<description>Dynamics of single-cell mass, volume and stiffness during mitosis
Kang, Joon Ho,Ph. D.Massachusetts Institute of Technology.
Cells undergo dynamic changes in cell shape and mechanics during mitosis, a short cell-cycle stage dedicated to separating replicated chromosomes into two daughter cells. Recently, several groups have reported that mitotic biophysical changes are essential for proper mitotic spindle function, consequently affecting mitotic fidelity and development of cancer. However, studying biophysical dynamics in mitosis is challenging because both the magnitude and duration of mitotic changes are heterogeneous across the cell population. In this thesis, I demonstrate new methods to monitor single-cell mass, volume and mechanical properties throughout mitosis with temporal resolution of &lt;1 min. We utilize the suspended microchannel resonator (SMR), which is a fluid-filled cantilever capable of measuring cell buoyant mass by the change of SMR resonant frequency. First, we monitor the volume and density of single cells in suspension by consecutively weighing them in two fluids of different densities. We find that mitotic cells reversibly increased their volume by more than 10% over a 20-min period after mitotic entry through osmotic regulation. Next, using the SMR and a protein synthesis assay, we quantify the mass accumulation and translation rates of single cells between mitotic stages. Various animal cell types displayed persistent mass accumulation during mitosis and cytokinesis with mitotic-stage specific growth rate dynamics. Finally, we quantify mechanical properties via acoustic scattering of waves from a cell inside the SMR. Through simulations, experiments with hydrogels and chemical perturbation of cells, we show that our readout from acoustic scattering measures stiffness. Cells maintained constant stiffness throughout interphase but showed dynamic changes during mitosis. Altogether, continuous monitoring of single-cell physical parameters -- mass, volume and stiffness -- has revealed biophysical dynamics during mitosis that have not been previously observed.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 145-158).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123350</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On typicality and adaptation in driven dynamical systems</title>
<link>https://hdl.handle.net/1721.1/123349</link>
<description>On typicality and adaptation in driven dynamical systems
Chvykov, Pavel.
In this work, I consider the possibility of using typicality-type arguments for understanding intractably complex damped-driven dynamical systems. By approximating such dynamics with an appropriately constrained random process, I illustrate quantitative predictive power for some aspects of the motion. In particular, I argue that local dynamical stability, or exit rate, of a state is typically sufficient to predict steady-state probability in such systems -- circumventing the classic no-go theorems via our disorder approximation. I then focus on one consequence of this result: that the most likely long-time configurations should also be the dynamically stable ones. In a strongly-driven system, however, such stability may be hard to achieve, and therefore has interesting implications for the corresponding configurations: they must be well-adapted to the details of the driving forces, their dynamical robustness may be viewed in the context of self-healing, and depending on the drive, they can require substantial collective fine-tuning among the system's degrees of freedom. I confirm the emergence of such adapted states in several example systems, both in simulation and in experiment, and verify a quantitative agreement with the predicted scaling between their steady-state probability and local stability. I then explore several arguments and test-cases suggesting further generality of this framework. While it is not yet clear what the precise limits of applicability are for this approach, our results suggest that the intuition it builds can help with prediction and design in a broad class of complex dynamics.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 125-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123349</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spin-orbit coupling and supersolidity in ultracold quantum gases</title>
<link>https://hdl.handle.net/1721.1/123348</link>
<description>Spin-orbit coupling and supersolidity in ultracold quantum gases
Li, Junru,Ph. D.Massachusetts Institute of Technology.
Ultracold quantum gases provide a clean, isolated, and controllable platform for simulating and characterizing complex physical phenomena. In this thesis, I present several experiments on realizing one-dimensional spin-orbit coupling in ultracold 23Na gases and the creation of a new form of matter with supersolid properties using interacting spin-orbit coupled Bose-Einstein condensates. The first part describes the realization of spin-orbit coupling in optical superlattices which consist of a stack of pancakes of imbalanced double-wells. The orbital levels of individual pancakes in the superlattice potential are used as pseudospin states. Spin-orbit coupling was induced by a two-photon Raman transition between the pseudospin states, and was experimentally characterized by the spin-dependent momentum structure from this dressing. The realization suppresses heating due to spontaneous emission.; The system is highly miscible, allowing the study of novel phases in interacting spin-orbit coupled systems. Next, spin-orbit coupling was demonstrated by synchronizing a fast periodically modulating magnetic force with the Radio-Frequency (RF) pulses. The modulation effectively dressed the RF photons with tunable momentum. The consequent Doppler shifts for RF transitions were observed as velocity-selective spin flips. The scheme is equivalent to Floquet engineered one-dimensional spin-orbit coupling. Finally, I report experiments on creating a new form of matter, a supersolid, in ultracold quantum gases. An interacting spin-orbit coupled Bose-Einstein condensate in the stripe phase spontaneously breaks two continuous symmetries: the U(1) symmetry, observed as sharp interference peaks in momentum space, and the continuous translational symmetry, observed as a spontaneously formed density modulation.
The density modulation is measured and characterized with Bragg scattering.; A system spontaneously breaking these two symmetries is a crystal and a superfluid simultaneously, and is considered a supersolid.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 207-214).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123348</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metamaterial structures for Wakefield acceleration and high power microwave generation</title>
<link>https://hdl.handle.net/1721.1/123347</link>
<description>Metamaterial structures for Wakefield acceleration and high power microwave generation
Lu, Xueying,Ph. D.Massachusetts Institute of Technology.
This thesis presents the theoretical and experimental investigation of the interaction of metamaterial structures with electron beams for two applications: wakefield acceleration and high power microwave generation. Under the topic of wakefield acceleration, on the theoretical side, several metamaterial structures have been designed and simulated. The novel phenomenon of reversed Cherenkov radiation has been found to enhance the beam-wave interaction in metamaterials. A metallic wagon wheel metamaterial structure was designed and built for use in an experiment at the Argonne Wakefield Accelerator (AWA) Facility. On the experimental side, this thesis presents the first demonstration of high-power, reversed Cherenkov wakefield radiation by short electron bunches passing through the wagon wheel structure at the AWA. Single 45 nC electron bunches of 65 MeV energy traversing the structure generated up to 25 MW in 2 ns pulses at 11.4 GHz, in excellent agreement with theory.; Two bunches of 85 nC with appropriate temporal spacing generated up to 80 MW by coherent wakefield superposition. If this power were applied to a trailing witness bunch in a collinear wakefield accelerator, it would provide an accelerating gradient of 75 MV/m. Under the topic of high power microwave generation, on the theoretical side, an analytical theory has been developed to predict the novel Cherenkov-cyclotron interaction in metamaterial-based microwave devices. An S-band metamaterial-loaded waveguide with reverse symmetry has been designed and built to work with the Cherenkov-cyclotron interaction. On the experimental side, this thesis presents the experimental results of the metamaterial-loaded waveguide built at MIT. Power levels to 2.9 MW at 2.4 GHz in full 1 [mu]s pulses were generated by an electron beam of up to 490 kV of voltage and 84 A of current.; Frequency tuning measurements verified that pulses above 1 MW of output power were only seen in the Cherenkov-cyclotron mode. 
With these results, this thesis demonstrates the unique features of metamaterial structures that are very attractive for high-gradient wakefield accelerators and high power microwave sources. Advantages include the high shunt impedance for intense beam-wave interaction; the simple and rugged structure; and a large parameter space for various optimization goals.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123347</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerically modeling the evolution of dust grains in galaxy formation simulations</title>
<link>https://hdl.handle.net/1721.1/123346</link>
<description>Numerically modeling the evolution of dust grains in galaxy formation simulations
McKinnon, Ryan Michael.
In this thesis, I present the development of various models for dust physics suited for galaxy formation simulations. I begin by introducing a model to evolve the spatial distribution of dust in galaxies, with dust treated as a passive scalar advected according to hydrodynamic flow. This model accounts for processes that affect the interstellar dust budget, like stellar dust production, accretion of gas-phase metals, and supernova-driven destruction. Using the moving-mesh hydrodynamics code AREPO, I perform cosmological zoom-in simulations of Milky Way-sized galaxies to study the evolution of interstellar dust. Predictions from this model compare favorably to a number of observed low-redshift dust scaling relations and suggest that galactic dust-to-gas ratios can strongly increase with cosmic time. I also present simulations of uniformly sampled cosmological volumes to analyze the behavior of dust statistics on large scales.; While these simulations predict a reasonable present-day cosmic dust density, they are unable to produce the abundance of dust-rich galaxies observed at high redshift. Next, I develop a model to more realistically track the dynamics and sizes of interstellar grains. This novel framework handles dust using live simulation particles, each representing a population of dust grains of different sizes and subject to dynamical forces like aerodynamic drag. I implement and validate a second-order semi-implicit integrator for the drag coupling between dust and gas, and I outline how the local size distribution of interstellar grains can be evolved using a second-order piecewise linear discretization. Using simulations of idealized galaxies, I illustrate how different physical processes affecting dust grain sizes would impact galactic extinction curves.
Finally, I describe an extension of these methods to couple dust physics and radiation hydrodynamics in AREPO-RT.; This enables simulations to directly model radiation pressure on, photon absorption by, and thermal emission from dust grains. The framework introduced in this thesis can be used in the future to model other physics relevant for interstellar dust.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 239-254).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123346</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of synchrotron radiation from relativistic electrons in the Alcator C-Mod tokamak</title>
<link>https://hdl.handle.net/1721.1/123345</link>
<description>An analysis of synchrotron radiation from relativistic electrons in the Alcator C-Mod tokamak
Tinguely, Roy Alexander.
In the Alcator C-Mod tokamak, a magnetic confinement fusion experiment, electrons are accelerated to relativistic energies -- on the order of tens of MeV -- during steady-state conditions of Ohmic, elongated, and diverted plasma discharges. These so-called "runaway" electrons emit synchrotron radiation in their direction of motion due to their gyration in the background toroidal magnetic field, with values of B0 ranging from 2.7 to 7.8 T at the plasma axis. Two spectrometers, a wide-view camera, and a polarimeter are used to measure time-evolving spectra, images, and polarization information, respectively, of the synchrotron radiation in the visible/near-infrared wavelength range, [lambda] ≈ 300-1000 nm. The kinetic equation solver CODE [Landreman et al 2014 Comput. Phys. Commun., Stahl et al 2016 Nucl. Fusion] and synthetic diagnostic SOFT [Hoppe et al 2018 Nucl. Fusion] are used to model the evolution of the runaway electron phase space distribution and to simulate the detected synchrotron emission, respectively. The major contributions of this thesis work to the fields of plasma physics and fusion energy research are the following: Spectral measurements are consistent with runaway electrons' attaining lower energies as the magnetic field increases, a positive sign for future high-field fusion devices. The runaway electron density profile and other spatiotemporal dynamics, such as increased radial transport due to magnetohydrodynamic activity, are inferred from the two-dimensional synchrotron intensity distributions captured in camera images. Finally, for the first time in a tokamak plasma experiment, polarized synchrotron light is used as a novel diagnostic of the pitch angle distribution of runaway electrons. For all three measurements, discrepancies between experiment and theory/simulation are identified, and opportunities for future work are presented.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123345</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating the evolution histories of giant planets with K2 and TESS</title>
<link>https://hdl.handle.net/1721.1/123344</link>
<description>Elucidating the evolution histories of giant planets with K2 and TESS
Yu Liang, Ph. D.Massachusetts Institute of Technology.
Giant planets play a fundamental role in shaping the architecture of planetary systems and the formation and evolution of smaller planets. Over the last few decades, over 1000 giant planets have been discovered outside the Solar System, yet many open questions remain about their formation and evolution histories. How do close-in giant planets -- the so-called "hot Jupiters" -- reach orbits as short as &lt; 0.01 AU, where they could not have accreted their gaseous envelopes? How do many hot Jupiters attain radii larger than predicted by standard models of planetary structure? How do giant planets form, and what determines their final masses? To answer these questions, we need to amass a large and diverse population of giant planets that will allow us to uncover their evolution histories through both case studies and statistical analyses. In this thesis, I focus on the discovery and characterization of unusual systems that may provide some insight into the pasts of giant planets. In particular, I present detailed analyses of systems that have not been subject to the overwhelming tidal forces capable of erasing many traces of orbital evolution.; These include young systems that may still be actively undergoing planetary migration, giant planets with wider orbits than most hot Jupiters, and planets orbiting predominantly radiative stars, which exert weak tidal forces. Some of these newly discovered planets have spent their entire lifetimes near the stellar irradiation threshold at which giant planets become larger than expected, and are valuable in constraining planet inflation models. Some are also favorable targets for transit spectroscopy to study the atmospheres and chemical compositions of giant planets. Although these discoveries do not by themselves answer the questions posed above, they represent a necessary step toward that goal.
I also develop methods and tools to further expand our collection of known giant planets using data from the K2 and TESS space missions.; After demonstrating the traditional human vetting approach to planet candidate identification, I present two convolutional neural networks, AstroNet-Triage and AstroNet-Vetting, capable of automatically performing triage and vetting on TESS light curves. These are the first machine learning-based classifiers to be trained and tested on real TESS data, and can rapidly and accurately eliminate false positives. These models not only allow humans to focus on the strongest planet candidates instead of false positives, but also identify candidates in an unbiased, homogeneous manner so as to facilitate planet occurrence rate calculations.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-229).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123344</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Astrophysical signatures of neutron stars in compact binaries and experimental improvements on gravitational-wave detectors</title>
<link>https://hdl.handle.net/1721.1/123343</link>
<description>Astrophysical signatures of neutron stars in compact binaries and experimental improvements on gravitational-wave detectors
Yu, Hang,Ph. D.Massachusetts Institute of Technology.
Neutron stars (NSs) are astrophysical laboratories that allow us to probe physics at extreme conditions. The first half of this Thesis is devoted to exploring how we can connect theoretical models of NSs to observational signatures whose detections are made possible by state-of-the-art instruments. We start by exploring the dynamics of super-Eddington winds launched in type-I X-ray bursts at the surface of a NS. We show that freshly synthesized heavy elements can be exposed by the wind and will dominate the composition at the photosphere after ~ 1 s. This may create detectable absorption edges in burst spectra and explain the observed transitions from super-expansions to moderate expansions. Gravitational-wave (GW) observatories such as Advanced LIGO (aLIGO) open up a new possibility to probe deep inside the NS by examining the tidal signatures in the GW waveforms.; In this Thesis, we study the tidal excitations of g-modes in a cold, superfluid NS during the inspiral driven by gravitational radiation and their resulting phase shifts in the GW waveform. We consider both the g-modes supported by the muon-to-electron gradient in the outer core and the g-modes supported by the hyperon-to-proton gradient in the inner core. We further show that the former might be detectable by event stacking with the third generation of GW detectors. The second half of this Thesis is devoted to the experimental upgrades to the aLIGO interferometers. The focus will be on the angular sensing and control system. We will cover design considerations on the system based on both stability and noise requirements. This is followed by a thorough discussion of the radiation-pressure torques, including both the Sidles-Sigg and the d[Rho]/d[theta] effects.; More importantly, we show that such optical torques can be compensated for with newly developed techniques, which is a critical step for aLIGO to reach high-power operations.
Lastly, we discuss the prospects of detecting GW at 5 Hz with ground-based detectors and demonstrate that low-frequency sensitivity is crucial for both increasing the detection range for black-hole binaries and enabling timely localization of binary NS systems.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 269-281).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123343</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed convection and hydrodynamic modeling of flows in rod bundles</title>
<link>https://hdl.handle.net/1721.1/123334</link>
<description>Mixed convection and hydrodynamic modeling of flows in rod bundles
Efthimiadis, Apostolos.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1984.; Bibliography: leaves 284-290.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123334</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an adaptive culture : on the evolution of the social basis for political choice in a plural society</title>
<link>https://hdl.handle.net/1721.1/123333</link>
<description>Towards an adaptive culture : on the evolution of the social basis for political choice in a plural society
Morrison, Donald George.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1982.; Bibliography: leaves 891-1267.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123333</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistically inferring the mechanisms of phage-host interactions</title>
<link>https://hdl.handle.net/1721.1/123250</link>
<description>Statistically inferring the mechanisms of phage-host interactions
Yang, Joy,Ph.D.(Joy Yu)Massachusetts Institute of Technology.
Bacteriophage and their hosts are locked in an age-old arms race. Successful bacteria are subject to predation, forcing the population to diversify, and phage are also quick to adapt tactics for infecting these potential hosts. Sampling of closely related bacterial strains that differ in phage infection profiles can further elucidate the mechanisms of infection. The Polz Lab maintains the Nahant Collection - 243 Vibrio strains challenged by 241 unique phage, all with sequenced genomes. This is the largest phylogenetically resolved host-range cross test available to date. Genetically mapping out the depths of this dataset requires carefully designed analysis techniques as well as further experimental exploration. First, we narrow in on a specific phage in the Nahant Collection, 2.275.0, to characterize the pressures that may select for phage that shuttle their own translational machinery.; While translation is generally considered a hallmark of cellular life, some phage carry abundant tRNA. 2.275.0 carries 18 tRNA spanning 13 amino acids. We find that while encoding translation-related components requires shuttling a larger phage genome, it also reduces dependence on host translational machinery, allowing the phage to be more aggressive in degrading and recycling the host genome and other resources required for replication. Next we develop a systematic approach for uncovering genomic features that underlie phage-host interactions. We find that correcting for phylogenetic relationships allows us to pick out relevant signals that would otherwise be drowned out by spurious correlations resulting from statistically oversampled blooms of microbes. 
Using these results, we wrote an interactive JavaScript visualization to facilitate the process of developing testable hypotheses concerning the mechanisms of phage infection and host response. From the visualization, we are able to identify, in the hosts, mobile genetic elements containing restriction-modification systems that may defend against infection, as well as membrane protein modifications that may serve as phage attachment sites.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123250</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-cell response to perturbations across biological scales : single organ, organ system and phenotypic individuals</title>
<link>https://hdl.handle.net/1721.1/123245</link>
<description>Single-cell response to perturbations across biological scales : single organ, organ system and phenotypic individuals
Kolb, Kellie Elizabeth.
The biological processes that sustain a complex organism require the orchestrated dynamics of complex cellular ensembles. Several vital systems - such as the immune system, the digestive system and more - must process internal and external signals to maintain functional homeostasis in response to perturbations at the systems-level. To further understand how groups of cells collectively respond to perturbations, we have applied single-cell RNA-sequencing and complementary techniques to explore cellular behaviors within complex systems at multiple relevant biological scales: from within a single organ, to an organ system, to across several human individuals with differing genetic backgrounds linked by a shared phenotype. More specifically, at the level of the organ, we have explored acute injury responses in the liver. We have identified and described a new compensatory phase of the liver response to injury, in which surviving hepatocytes upregulate their expression of critical liver function genes to maintain overall organ function. Next, we extended our approach from a focus on an acute injury targeting a single organ to exploring chronic damage resulting from a long-term high fat diet across multiple gastrointestinal and immune compartments. Our analysis revealed molecular pathways and changes in stem gene expression which may contribute to obesity-related disease. Finally, we characterized shared features across multiple unique human donors with a common phenotype, elite control of HIV-1. We identified and validated a subset of highly functional dendritic cells, and developed broadly applicable computational approaches to identify reproducible responses across donors and to nominate candidate targets for rationally modulating the system. Overall, our work demonstrates the utility of single-cell RNA-sequencing for uncovering important cellular phenotypes that inform systems-level responses at any biological scale.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123245</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering a novel pathway for isoprenoid synthesis</title>
<link>https://hdl.handle.net/1721.1/123243</link>
<description>Engineering a novel pathway for isoprenoid synthesis
Chatzivasileiou, Alkiviadis Orfefs.
Isoprenoids comprise a large class of chemicals of significant interest due to their diverse properties. Most isoprenoids are plant secondary metabolites and are of commercial importance due to their varied applications in fields spanning medicine, agriculture, flavors, fragrances, cosmetics and nutrition. Biological production of isoprenoids in microbes is considered to be the most efficient and commercially viable route to their large-scale production. Thus far, isoprenoid biosynthesis has been performed through pathways inextricably linked to glycolysis. Furthermore, these pathways are inherently limited by their extensive cofactor requirements, complex regulation and large number of steps. In this thesis we present a novel pathway for isoprenoid synthesis, the Isopentenol Utilization Pathway (IUP), which aims to overcome these limitations. This pathway functions through the double phosphorylation of an isopentenol, either isoprenol or prenol, to produce the main precursors of isoprenoid synthesis, isopentenyl diphosphate (IPP) or dimethylallyl diphosphate (DMAPP). This pathway is radically different from naturally occurring pathways or their engineered variants because it is only two steps long, uses an externally provided isopentenol as its substrate instead of a glucose-derived catabolite, and uses only a single cofactor, ATP. We identify suitable enzymes, construct the pathway and demonstrate an in vivo proof of concept. After optimizing the pathway feedstock, we show that the IUP is decoupled from central carbon metabolism. We demonstrate that the IUP can quickly produce copious amounts of IPP &amp; DMAPP and can be used for the production of a variety of isoprenoids. The IUP flux exceeded the capacity of almost all downstream pathways tested and was competitive with the highest isoprenoid fluxes reported as well as with state-of-the-art isoprenoid pathways.
Furthermore, we elaborate on our progress towards improving the capacity of a downstream farnesene synthesis pathway to catch up with and fully utilize the IUP's production capacity. Finally, we propose a new scheme for using the IUP to produce functionalized isoprenoids, employing functionalized isopentenols to introduce functional groups into isoprenoid backbones, and we show preliminary results of this application.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123243</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information-theoretic aspects of quantum channels</title>
<link>https://hdl.handle.net/1721.1/123242</link>
<description>Information-theoretic aspects of quantum channels
Zhu, Elton Yechao.
Quantum information theory is an important element of quantum computing and quantum communication systems. Whenever a quantum computer needs to send an output state to another party, or two parties need to establish quantum entanglement or secure keys via quantum communication, a quantum channel is inevitably involved. Hence it is essential to understand the properties of quantum channels for the purpose of communication. Here, quantum entanglement plays a central role: pre-shared entanglement can enhance the capacity, whereas entanglement across inputs can render the capacity formulae impossible to compute. The first part of this thesis addresses this issue by studying the additivity properties of the communication of classical and quantum information, with or without entanglement assistance. I also study the reverse problem: given a channel capacity, what can be said about the quantum channel itself. Quantum information theory also serves as an important tool in understanding other systems, for example, black holes. In this thesis, I model a closed random system by a unitary channel and study how typical unitary channels process information. This provides significant insight into the strength of generalized entanglement measures and the hierarchies in the complexity of information scrambling.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-172).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123242</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Charge and exciton dynamics in quantum dot light-emitting diodes</title>
<link>https://hdl.handle.net/1721.1/123240</link>
<description>Charge and exciton dynamics in quantum dot light-emitting diodes
Zhu, Han, Ph.D., Massachusetts Institute of Technology.
Colloidal quantum dot based light-emitting diodes (QD-LEDs) offer the possibility of bright, saturated, and tunable emission for the next generation of display and solid-state lighting technologies. In this thesis, we study how the interplay of charges and excitons in a QD-LED affects its operational behavior. In order to construct a physical model of a QD-LED, we start by developing quantitative characterization methods that directly measure charge accumulation and electric field in an operating device. Comparison of measured internal device variables with observed electroluminescence and current density allows us to disentangle the deleterious effects of charge imbalance, electric field, and Joule heating on the external quantum efficiency. We also find that the magnitude of electron accumulation on the QD film is sensitive to its interface with the neighboring hole transport layer (HTL) and can reach nearly one electron per QD even in the best performing device. We next investigate how exciton formation is affected by the high charge density. Since the degree of electron charging of a nanocrystal shifts the energy barrier for hole injection, the kinetics of exciton formation are dependent on electron occupation statistics on the QD film. Using kinetic Monte Carlo simulations that explicitly incorporate both long and short range Coulomb interactions, we find that energetic disorder of the QD film strongly enhances the formation of negatively charged excitons by increasing the population of two-electron occupied QDs. Finally, we demonstrate that the photoluminescence yield of a QD film can be intentionally quenched by up to 99.5% in a QD-LED under reverse bias. This paves the way for a voltage-tuned optical down-conversion device using colloidal QDs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages [159]-174).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123240</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring car pride and its implications for car ownership and use across individuals, cities, and countries</title>
<link>https://hdl.handle.net/1721.1/123232</link>
<description>Measuring car pride and its implications for car ownership and use across individuals, cities, and countries
Moody, Joanna C. (Joanna Charlotte)
As the world recognizes that its growing reliance on private, fossil fuel-based vehicles is unsustainable, understanding how to avoid growth in car ownership and how to shift current users towards more efficient, environmentally friendly, safe, and inclusive alternatives is critical for meeting sustainable (transportation) development goals. Policymakers looking to shift consumer behavior away from cars need a more rigorous understanding of how different attitudes influence car ownership and use and how this might vary by people and place. In this dissertation we provide deep insight into one of the many symbolic and affective motives behind car consumption: "car pride," or the attribution of social status and personal image to owning and using a car. Using data collected from individuals in two U.S. cities and in 51 countries around the world, we develop and demonstrate the reliability, validity, and invariance of polytomous (12 seven-point Likert-format statements) and dichotomous (9 dichotomous statements) survey measures of car pride using confirmatory factor analysis (CFA). With these measures, we explore variations in car pride across individuals, cities, and countries using structural equation modeling (SEM). Across individuals, we find that those who are younger, male, and have higher incomes generally have higher car pride. Controlling for individual characteristics, we find that car pride is influenced by context: between U.S. cities, Houston has higher car pride than New York City, and across countries, less developed countries exhibit higher car pride. We also disentangle the bidirectional causal relations between car pride and car consumption using instrumental variable (IV) techniques. We find that car pride strongly predicts car ownership, while no statistically significant relation exists in the opposite direction.
Car pride additionally predicts car use, but only through its relation with car ownership (a mediator). In the reverse direction, car use strongly reinforces car pride. While the directions of these relations appear almost universal across contexts, their strengths differ by country, emphasizing the importance of taking national context into account when measuring and interpreting symbolic motivations for car consumption. This dissertation builds a systematic understanding of car pride and its relations with car consumption across individuals, cities, and countries. For researchers, it serves as an example of good methodological practice for empirical studies of attitude-behavior relations in transportation. For policymakers, it builds awareness of how policies can target attitudinal, social, and cultural factors, such as car pride, that present additional obstacles to the adoption of more sustainable transportation alternatives at both the individual and national levels.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 217-241).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123232</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuum modeling of reactive colloids : transformation of cement paste from sol to cohesive gel</title>
<link>https://hdl.handle.net/1721.1/123231</link>
<description>Continuum modeling of reactive colloids : transformation of cement paste from sol to cohesive gel
Petersen, Thomas Alexander.
A colloid is a collection of nanometer- to micron-sized particles interacting in a fluid or solution. Though colloids have traditionally been defined as fluid-like dispersions that remain suspended on account of the system's thermal fluctuations, the term has become more all-encompassing, referring to the collective behavior of particles that attract or repel and interact at varying relative length and time scales. Cement paste, the binding agent in modern concrete, is classified as a colloid. Nearly instantaneously after mixing water with polydisperse cement powder, nanometer-sized grains of calcium-silicate-hydrates (C-S-H) precipitate out of solution and spontaneously gel. It is at this length scale that many of the physicochemical characteristics that lend the paste its mechanical durability are thought to derive. Yet few modeling approaches have been implemented to investigate how the density patterns in such reactive materials evolve and control mechanical behavior. Thus, the first part of this thesis presents a nonequilibrium thermodynamic theory for the mean-field precipitation, aggregation and pattern formation of colloids. A variable gradient energy coefficient and the arrest of particle diffusion upon "jamming" of clusters in the spinodal region predict observable gel patterns that, at high interparticle attraction, form system-spanning, out-of-equilibrium networks with glass-like, quasi-static structural relaxation. For reactive systems, we incorporate the free energy landscape of stable primary particles into the Allen-Cahn reaction equation. We show that pattern formation is dominantly controlled by the Damköhler number - the ratio of the reaction rate to the diffusion rate - and by the stability of the primary particles, which modifies the auto-catalytic rate of precipitation. As primary particles individually become more stable, bulk phase separation is suppressed.
Next, diffusive motion is replaced by hydrodynamic flow. By incorporating the thermodynamic stress into a Navier-Stokes equation that measures changes in particle momentum, we enable continuum modeling of two-scale particle aggregation: i) particles phase separate into out-of-equilibrium clusters, which ii) further associate as rigidly moving bodies to form cluster aggregates. It is shown that the coarsening dynamics deviate from the universal scaling relation that is expected for equilibrium phases. Concomitant with local arrest and formation of a gel, mechanical strain energy is stored in the solid-like gel network. Changes in the stress state are tracked using an incremental mechanics formulation for densifying reactive materials. Throughout our modeling efforts, we relate several results to experimental observations of hydrating cement paste. First, it is hypothesized that curing temperature modifies the thermodynamic landscape of the C-S-H grains, which in turn influences the paste's pore-size distribution: cements hydrating at higher temperatures produce more capillary porosity and less gel porosity, which adversely affects the material's durability. Second, the thermodynamic stress, which derives from the surface interactions of colloidal particles, is proposed as the driving force for cement shrinkage, which was experimentally observed in the course of hydration under constant temperature and pressure conditions.
Thesis: Ph. D. in Mechanics of Materials, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 135-146).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123231</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous planning and resource allocation in reconfigurable smart sensing networks</title>
<link>https://hdl.handle.net/1721.1/123230</link>
<description>Autonomous planning and resource allocation in reconfigurable smart sensing networks
Long, James, Ph.D., Massachusetts Institute of Technology.
Structural health monitoring (SHM) uses sensor data to quantitatively assess the integrity and performance of infrastructure, as a basis for extending the lifespan of aging systems. Wireless sensor networks promise to enable the installation of dense arrays of battery operated sensor nodes at dramatically lower cost than traditional wired systems. However, typical vibration based health monitoring approaches sample at high frequencies, and wirelessly transmitting the resulting large volumes of data can rapidly deplete sensor node batteries. Rather than emulating the behaviour of a traditional wired SHM system, intelligent sensor nodes can analyse vibration data within the sensor network, reducing transmission volumes and conserving battery. Coordinating, configuring, and managing the resources of networks of intelligent sensor nodes is, however, a significant challenge. In this research, we first propose a new computational framework for distributed in-network processing of vibration sensing data, and develop a sensor system which implements this framework. A critical advantage of this framework is its flexibility, allowing data processing logic to be remotely reconfigured almost instantaneously. We then extend this approach, developing a resource allocation algorithm which continually tracks network resources (battery life, computational power and communication bandwidth) and ensures that in-network computations utilise these resources optimally. The efficacy of this approach is then demonstrated by deploying a wireless sensor network on a steel frame tower and conducting a series of experiments investigating the performance of the proposed framework. Lastly, we consider the problem of event detection in energy-harvesting wireless sensor networks. In SHM applications, long periods of time elapse without the occurrence of any event of interest. To conserve resources, a subset of nodes can actively listen for events, while the remainder power down.
Judicious planning of the sequence of active node assignments is needed to ensure that as many nodes as possible can be reached upon the detection of an event, and that the system maintains the ability to detect events in times of low energy availability. We propose and develop a novel reinforcement learning approach to this problem, and through simulation demonstrate that strategies learned by the reinforcement learning agent outperform baseline approaches. The integration of the proposed computational framework, resource allocation algorithm and collaborative event detection strategy enables fully autonomous operation of intelligent wireless SHM systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-145).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123230</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and theoretical studies of reactive transport processes in soluble porous rocks</title>
<link>https://hdl.handle.net/1721.1/123229</link>
<description>Experimental and theoretical studies of reactive transport processes in soluble porous rocks
Li, Wei, Ph.D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering.
Underground reactive transport processes involve fluid flow and reactions (dissolution, precipitation, and pressure solution) driving the evolution of rock-fluid systems, which may result in favorable outcomes such as increased oil production by reservoir acid stimulation, or in undesired ones such as cave formation and subsidence. Flow and reaction in the rock matrix often induce wormholes, which are long, finger-like channels that form due to the dissolution heterogeneity in the matrix. These wormholes become major flow pathways, which greatly increase the permeability of the rock. To study the reactive transport processes and the formation of wormholes, experimental and theoretical studies were conducted. More specifically, a new experimental setup and data analysis methods were introduced to the tube flow tests and core flood tests to experimentally study the evolution of the rock-fluid system. Theoretical studies with analytical and numerical models were used to simulate the experimental results and provide a theoretical explanation for the experimental observations. Through the experimental and theoretical studies, this research improved the fundamental understanding of reactive transport processes in rock-fluid systems, which in turn enabled accurate prediction of the evolution of rock-fluid systems driven by reactive transport processes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 207-219).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123229</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale modeling of two-dimensional materials : structures, properties, and designs</title>
<link>https://hdl.handle.net/1721.1/123228</link>
<description>Multiscale modeling of two-dimensional materials : structures, properties, and designs
Jung, Gang Seob.
Multiscale modeling undertakes to describe a system with multiple models at different scales. In principle, quantum mechanics provides sufficient information. However, the development of a scaled-up model, e.g., molecular dynamics, from quantum mechanics should be validated against experiments. Two-dimensional (2D) materials provide excellent platforms to verify theoretical models by directly comparing atomic structures and properties with advanced transmission electron microscopy (TEM) techniques, due to their high crystallinity and thin nature. In this thesis, molecular dynamics (MD) models have been developed for 2D transition metal dichalcogenides (TMDs) such as MoS₂, WS₂, MoSe₂, and WSe₂ from density functional theory (DFT) by focusing on their nonlinearity and failure strains. The structures, crack-tip behaviors, and fracture patterns from the models are directly compared with atomic level in-situ TEM images. The models have revealed atomic-scale mechanisms of crack-tip behavior in the single crystals, such as the roles of sulfur vacancies, geometric interlocking frictions, and the directions of crack propagation. The models have been further validated against more complicated structures, from grain boundaries in the WS₂ bilayer to lateral heterostructures, e.g., MoS₂-WSe₂, using images from ADF-STEM. Also, a method for generating well-stitched grain boundaries has been proposed, based on experimentally observed dislocations and defects. The models and methods have been utilized to understand the chemical reactions for MoS₂ channel growth in WSe₂ and the fracture toughness of polycrystalline graphene.
Finally, the validated models and methods are utilized to predict the atomic structures of 2D materials with three-dimensional (3D) surfaces, e.g., triply periodic minimal surfaces (TPMS) and corrugated surfaces with non-zero Gaussian curvatures. The mechanics, failure behaviors, and thermal properties of TPMS graphene are systematically studied from the predicted structures. A recent experiment shows that the predicted scaling laws of Young's modulus and strength agree well with the measurements.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 257-274).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123228</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online discrete choice models : applications in smart mobility</title>
<link>https://hdl.handle.net/1721.1/123227</link>
<description>Online discrete choice models : applications in smart mobility
Danaf, Mazen (Mazen Salah)
Discrete choice models have been widely applied in different fields to better understand behavior and forecast market shares. Because of their ability to capture taste heterogeneity, logit mixture models have gained increasing interest among researchers and practitioners. However, since the estimation of these models is computationally expensive, their applications have been limited to offline contexts. On the other hand, online applications (such as recommender systems) require users' preferences to be updated frequently and dynamically. The objective of this dissertation is to develop a methodology for estimating discrete choice models online, while accounting for inter- and intra-consumer heterogeneity. An offline-online framework is proposed to update individual-specific parameters after each choice using Bayesian estimation. The online estimator is computationally efficient, as it uses only the data of the individual making the choice in updating his/her individual preferences. Periodically, data from multiple individuals are pooled, and population parameters are updated offline. Online estimation allows for new and innovative applications of discrete choice models such as personalized recommendations, dynamic personalized pricing, and real-time individual forecasting. This methodology combines the utility-based advantages of discrete choice models and the personalization capabilities of common recommendation techniques by making use of all the available data, including user-specific, item-specific, and contextual variables. In order to enhance online learning, two extensions are proposed to the logit mixture model with inter- and intra-consumer heterogeneity. In the first extension, socio-demographic variables and contextual variables are used to model systematic inter- and intra-consumer taste heterogeneity, respectively.
In the second extension, a latent class model is used to allow for more flexibility in modeling the inter- and intra-consumer mixing distributions. Finally, the online estimation methodology is applied to Tripod, an app-based travel advisor that aims to incentivize and shift travelers' behavior towards more sustainable alternatives. Stated preference data are collected in the Greater Boston Area and used to estimate the population parameters, which are then used by the app in online estimation. Using the collected data, a large number of synthetic users are simulated, and the recommendation system is tested over several days under different scenarios. The results show that the average hit rate generally increases over time as we learn individual preferences and population parameters.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 100-108).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123227</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic and analytics-driven inspection operations for critical infrastructure resilience</title>
<link>https://hdl.handle.net/1721.1/123226</link>
<description>Strategic and analytics-driven inspection operations for critical infrastructure resilience
Dahan, Mathieu.
Resilience of infrastructure networks is a key requirement for a functioning modern society. These networks work continuously to enable the delivery of critical services such as water, natural gas, and transportation. However, recent natural disasters and cyber-physical security attacks have demonstrated that the lack of effective failure detection and identification capabilities is one of the main contributors to the economic losses and safety risks faced by service utilities. This thesis focuses on both strategic and operational aspects of inspection processes for large-scale infrastructure networks, with the goal of improving their resilience to reliability and security failures. We address three combinatorial problems: (i) strategic inspection for detecting adversarial failures; (ii) strategic interdiction of malicious network flows; (iii) analytics-driven inspection for localizing post-disaster failures. We exploit the structural properties of these problems to develop new and practically relevant solutions for the inspection of large-scale networks, along with approximation guarantees. First, we address the question of determining a randomized inspection strategy with the minimum number of detectors that ensures a target detection performance against multiple adversarial failures in the network. This question can be formulated as a mathematical program with constraints involving the Nash equilibria of a large strategic game. We solve this inspection problem with a novel approach that relies on the submodularity of the detection model and on solutions of minimum set cover and maximum set packing problems.
Second, we consider a generic network security game between a routing entity that sends its flow through the network and an interdictor who simultaneously interdicts multiple edges. By proving the existence of a probability distribution on a partially ordered set that satisfies a set of constraints, we show that the equilibrium properties of the game can be described using primal and dual solutions of a minimum-cost circulation problem. Our analysis provides a new characterization of the critical network components in strategic flow interdiction problems. Finally, we develop an analytics-driven approach for localizing failures under uncertainty. We utilize the information provided by failure prediction models to calibrate a generic formulation of a team orienteering problem with stochastic rewards and service times. We derive a compact mixed-integer programming formulation of the problem that computes an optimal a priori routing of the inspection teams. Using the data collected by a major gas utility after an earthquake, we demonstrate the value of predictive analytics for improving their response operations.
Thesis: Ph. D. in Civil Engineering and Computation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 213-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123226</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pore-scale processes : quantification of the controls on the transport of Ra and Cr, and development of a novel geochemical visualization tool</title>
<link>https://hdl.handle.net/1721.1/123225</link>
<description>Pore-scale processes : quantification of the controls on the transport of Ra and Cr, and development of a novel geochemical visualization tool
Chen, Michael Andrew.
Addressing soil contamination by inorganic metals and metalloids remains a critical task for the continuing protection of human health globally. The dissolved concentrations of contaminants are controlled by a wide range of biogeochemical processes, including oxidation and reduction by microbes, sorption to minerals and organic matter, and complexation with ligands in solution. Depending on the contaminant of interest, the importance of these different processes varies widely, and the natural heterogeneity of soil systems further frustrates modeling of contaminant transport. Recent studies have demonstrated that soil conditions vary at scales as small as individual soil pores, suggesting that the controls on contaminant transport also vary at that scale. Understanding the impact these pore-scale processes have is necessary to build accurate conceptual models of contaminant fate. The work here explores these types of microscale processes through three different projects. The first project focuses on the sorption of radium, a naturally occurring radioactive material, to different minerals. Surface complexation modeling of Ra was able to replicate sorption experiments, but could not predict the impact of different solution conditions. The second project examines metal reduction via Fe (hydr)oxides, showing that bacteria may be able to form networks with semi-conducting Fe (hydr)oxides. This means bacteria can access electron acceptors without physical contact, which will impact the cycling of redox-sensitive metals at pore scales. The final project was the development of a microfluidic device that can be used to directly visualize biogeochemical processes at pore scales through x-ray fluorescence microprobe spectroscopy. The three projects, though focused on different systems, each reveal the importance of considering how microscale processes impact the transport of contaminants.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123225</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parametric and nonparametric approaches to explain and predict nonlinear population dynamics in changing environments</title>
<link>https://hdl.handle.net/1721.1/123224</link>
<description>Parametric and nonparametric approaches to explain and predict nonlinear population dynamics in changing environments
Cenci, Simone.
Many aspects of human societies, from the sustenance of national economies to the control of population health, depend on the dynamics of biological populations within a given environment. Therefore, understanding and predicting the effects of changing environments on the dynamics of biological populations evolving in a continuously changing world is, nowadays, one of the most important challenges in biology. In this thesis we have addressed this challenge using two different approaches. The first approach, called the structural approach, is deductive, i.e., the effects of changing environments on population dynamics are studied using parametric models under equilibrium assumptions. In this context, we first showed that, while the approach was originally introduced to investigate the structural stability of the classic Lotka-Volterra dynamics, it can be applied to a much larger class of nonlinear models and to stochastic systems. Then, we used the approach to analyze empirical data to investigate how the structure and dynamics of species interactions regulate species coexistence under fast environmental changes. The generalizability of this approach, however, has been limited because equilibrium dynamics are seldom observed in nature and exact equations for population dynamics are rarely known. Therefore, in the second part of the thesis we took an inductive approach. Specifically, we proposed a nonparametric framework to estimate the tolerance of nonequilibrium population dynamics to environmental variability. To apply the framework to empirical data, we improved and developed nonparametric computational methods to infer biotic interactions and their uncertainty from nonlinear time series data.
Using our approach we were able to recover important ecological insights without the explicit formulation of parametric models. That is, we have shown that it is possible to build ecological theories inductively from observational data with minimal assumptions on the data-generating processes. Overall, we believe that the increasing amount of biological data available nowadays paves the way for moving theoretical population biology from being a deductive, assumption-driven science towards an inductive, data-driven science. In this context, this study is a step towards the foundations of nonparametric, data-driven research to monitor and anticipate the response of populations to the increasing rate of environmental change.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 195-211).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123224</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards stable principles of collective intelligence under an environment-dependent framework</title>
<link>https://hdl.handle.net/1721.1/123223</link>
<description>Towards stable principles of collective intelligence under an environment-dependent framework
Almaatouq, Abdullah Mohammed.
A large body of work has shown that a group of individuals can often achieve higher levels of intelligence than the group members working alone. Despite these expectations of group advantage, many examples of collective failure have been documented--from market crashes to the spread of false and harmful rumors. To reconcile these results, a major effort in the study of collective decision making has been focused on understanding the role of group composition and communication patterns in promoting the "wisdom of the crowd" or, conversely, leading to the "madness of the mob." In the past decades, much of this effort has been devoted to inferring the importance of a particular attribute, in isolation, by its capacity to explain the accuracy of collective judgments. In this thesis, we argue that such a perspective can lead to inconsistent conclusions: an 'incoherency problem.' We assert that the importance of an individual-level or structural attribute may change as a function of the environment in which the group is situated. Hence, we propose a research agenda to investigate the relative importance of the group composition and the structure of interaction networks under an environment-dependent framework. We show that under such a framework, we can reconcile previously conflicting claims from the collective intelligence literature and motivate a future research program to identify stable principles of collective performance. Although implementing such a program is logistically challenging, "virtual lab" experiments of the sort discussed in this thesis, in combination with emerging "open science" practices such as pre-registration, data availability, open code, and "many-labs" collaborations, offer a promising route forward.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 135-152).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123223</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The swept rule for breaking the latency barrier in time-advancing PDEs</title>
<link>https://hdl.handle.net/1721.1/123222</link>
<description>The swept rule for breaking the latency barrier in time-advancing PDEs
Alhubail, Maitham Makki(Maitham Makki Hussain)
This thesis describes a method to accelerate parallel, explicit time integration of unsteady PDEs. The method is motivated by our observation that network latency, not bandwidth or computing power, often limits how fast PDEs can be solved in parallel. The method is called the swept rule of space-time domain decomposition. Compared to conventional, space-only domain decomposition, it communicates a similar amount of data, but in fewer messages. The swept rule achieves this by decomposing space and time among computing nodes in ways that exploit the domains of influence and dependency, making it possible to communicate once per many time steps with no redundant computation. By communicating less often, the swept rule effectively breaks the latency barrier, advancing on average more than one time step per round-trip latency of the network. The thesis describes the algorithms, presents a simple theoretical analysis of the performance of the swept rule, and supports the analysis with numerical experiments.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-104).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123222</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accessible and easy-to-use educational tools to teach molecular and synthetic biology using freeze-dried, cell-free technology</title>
<link>https://hdl.handle.net/1721.1/123213</link>
<description>Accessible and easy-to-use educational tools to teach molecular and synthetic biology using freeze-dried, cell-free technology
Huang, Ally.
Hands-on demonstrations greatly enhance the teaching of STEM concepts and foster engagement and exploration in the sciences. While numerous chemistry and physics classroom demonstrations exist, few biology demonstrations are practical and accessible due to the challenges and concerns of growing living cells in classrooms. Here I introduce a platform to develop hands-on molecular and synthetic biology educational activities based on easy-to-use, shelf-stable, freeze-dried, cell-free (FD-CF) reactions, which are simply activated by water. By using fluorescent proteins as a visual output, I created a variety of engaging modules using this platform that can teach the central dogma of biology, how certain cellular functions work, and other basic molecular biology topics that are otherwise difficult to teach in a hands-on manner. By expanding the platform to other non-visual outputs (such as smell or touch), as well as further incorporating components such as RNA switches, I also developed modules that can teach more advanced biology topics, such as biochemistry, biomaterials, and synthetic biology, as well as basic laboratory skills such as pipetting, experimental design, and the scientific method. A prototype kit based on these elements was pilot-tested in classrooms across the country, and initial results suggest that the activities are accessible, easy to use, educational, and engaging for high school students. Overall, the platform introduces low-cost, user-friendly, and hands-on activities that can be used in classrooms to improve the quality of biology education and open the door for student-driven, independent explorations in the life sciences.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 186-197).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123213</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient operations of smart electricity networks under security and reliability failures</title>
<link>https://hdl.handle.net/1721.1/123207</link>
<description>Resilient operations of smart electricity networks under security and reliability failures
Shelar, Devendra(Devendra Anil)
Blackouts (or cascading failures) in Electricity Networks (ENs) can result in severe consequences for economic activity, human safety and national security. Recent incidents suggest that the risk of blackouts due to cyber-security attacks and extreme weather events is steadily increasing in many regions of the world. This thesis develops a systematic approach to evaluate and improve the resilience of ENs by addressing the following questions: (a) How to model security and reliability failures and assess their impact on ENs? (b) What strategies can EN operators implement to plan for and quickly respond to such failures and minimize their overall impact? (c) How to leverage the operational flexibility of "smart" ENs to implement these strategies in a structured manner and provide guarantees against worst-case failure scenarios? We focus on three classes of cyber-physical failures: (i) Inefficient or unsafe economic dispatch decisions induced by an external hacker who exploits the vulnerabilities of control center software; (ii) Simultaneous disruption of a large number of customer-side components (loads and/or distributed generators) by a strategic remote adversary; (iii) Correlated failures of power system components caused by storm events (or hurricanes) with high-intensity wind fields. We develop new network models to capture the impact of these failures, while accounting for a broad range of operator response actions. These actions include: partial load control, pre-emptive disconnection of non-critical loads, active and reactive power supply by Distributed Energy Resources (DERs) capable of providing grid-forming services, and formation of microgrid islands. We develop practically relevant operational strategies to improve the ENs' resilience to failure classes (i) and (ii) (resp. (iii)) based on solutions of bilevel mixed integer programming (resp. two-stage stochastic optimization) formulations.
Our bilevel mixed integer programming formulations capture the worst-case impacts of attacks on radial distribution networks operating under grid-connected or microgrid configurations. For the case when the operator response can be modeled as continuous decision variables, we provide a greedy heuristic that exploits the radial network structure and provides near-optimal solutions. For the more general case of mixed-binary decision variables, we develop a computationally tractable solution approach based on the Benders Decomposition method. This approach can be used to evaluate the value of timely response actions in reducing various losses to the network operator during contingencies caused by attacker-induced failures. We provide some guidelines on improving network resilience by proactive allocation of contingency resources and by securing network components in a strategic manner. Furthermore, under reasonable assumptions, we show that myopically reconnecting the disrupted components can be effective in restoring the network operation back to nominal conditions. Our two-stage stochastic optimization formulation is motivated by the need for a decision-theoretic framework for allocating DERs and other contingency resources in ENs facing the risk of multiple failures due to high-intensity storm events. The stochastic model in this formulation captures the dependence of probabilistic failure rates on the spatio-temporal wind intensities. Importantly, the formulation allows for the formation of microgrid islands (powered by the allocated DERs), and considers joint DER dispatch and component repair decisions over a multi-period restoration time horizon. We present computational results based on the classical sample average approximation method, with Benders Decomposition applied to solve the mixed-binary programs associated with the restoration stage.
Finally, we compare the optimal repair decisions with a simpler greedy scheduling strategy that satisfies soft-precedence constraints.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 265-276).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123207</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomaterials from a modular peptide scaffold</title>
<link>https://hdl.handle.net/1721.1/123199</link>
<description>Biomaterials from a modular peptide scaffold
Wang, Wade.
Modular polymer scaffolds are an attractive solution to address the needs of biomaterial design and engineering. Simple modification of the scaffold enables both the screening of an array of materials and the optimization of a single material. This thesis describes the development and applications of two different polypeptide scaffolds that may be functionalized with click chemistry on both the side chain and the end group. We demonstrate the utility of these scaffolds with the synthesis of biomaterials and biopolymers for drug delivery and tissue engineering applications. One area of focus is the synthesis of a bioinspired polypeptide-hyaluronic acid conjugate. Proteoglycans are an interesting class of biomacromolecules whose applications have been limited by the difficulty of reproducible isolation from natural sources. Synthetic proteoglycans provide a reproducible and scalable alternative, though many synthetic systems lack biological activity. We have developed a method to synthesize polypeptide-hyaluronic acid conjugates of various architectures that more closely mimic the composition of proteoglycans found in nature. These conjugates exhibit biological activity distinct from that of native hyaluronic acid of various sizes. The conjugates were also successfully employed in a three-dimensional vasculogenesis application. The synthesis of bulk hydrogels based on end-to-end linking of a polypeptide scaffold was also investigated. This endeavor required optimization of the polymerization conditions to achieve the desired end functionality. Ultimately, these polymers may be end-linked to form a soft hydrogel. Finally, the effects of secondary structure on polymer-drug conjugate efficacy are interrogated by grafting the anticancer drug doxorubicin and poly(ethylene glycol) to polypeptide scaffolds that exhibit different degrees of α-helicity. The drug release, toxicity, and conjugate association with cells were evaluated by in vitro assays.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123199</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of organic materials for electrooptical applications</title>
<link>https://hdl.handle.net/1721.1/123198</link>
<description>Synthesis of organic materials for electrooptical applications
Voll, Constantin-Christian A.(Constantin-Christian Alexander)
In Chapter 1, we provide an overview of organic materials for light-emitting diodes (OLEDs). We review fundamental photophysical limitations on device efficiencies and discuss how different strategies use highly efficient photophysical pathways to overcome these limitations. The history of delayed emission strategies, particularly thermally activated delayed fluorescence, is described, along with organic radicals as a competing approach for high-efficiency OLEDs. In Chapter 2, we synthesize a donor-acceptor iptycene scaffold with thermally activated delayed fluorescence (TADF). The scaffold bears two carbazole substituents that can be equipped with solubilizing side chains, and a thiadiazoloquinoxaline core with lateral aryl bromides that allows further modification through cross-coupling reactions. Photophysical studies on a model compound suggest that polymeric materials based on this scaffold may be highly emissive and TADF-active. In Chapter 3, we report a twisted donor-acceptor approach as an alternative strategy to achieve TADF. A new class of electron-deficient pyrazinoquinoxaline core acceptors with a variety of donor substituents was prepared. The reaction cascade yields an iptycene-capped p-dibromo-quinoxalinophenazine, to which a variety of aryl and heteroaryl substituents can be cross-coupled. The products' luminescence can be tuned across the visible spectrum. We find that the induced torsion angle between donor and acceptor moieties is insufficient to produce TADF-active compounds. In Chapter 4, we explore a combination of synthetic supramolecular chemistry and materials science to develop exciplexes for TADF.
We designed a bowl-shaped acceptor molecule for which we synthesized shape-complementary donors that bind in a lock-and-key fashion. The investigation of three independent donor families, guided by density functional theory calculations, provides coverage of a wide range of the visible spectrum and allows us to derive empirical relationships for predicting the exciplex emission color. In Chapter 5, we describe a transition-metal-free methodology for the synthesis of extended aromatic structures through dehydrative C-C coupling of readily accessible 1,4-diols with (hetero)arenes in high to quantitative yield. These reactions proceed under mild, open-flask conditions and offer high atom economy, while providing an attractive alternative to metal-catalyzed cross-coupling reactions. In Chapter 6, we sought to expand the small-molecule coupling methodology to dehydrative polymerizations. We synthesized a range of 1,4-diols in order to address the reactivity and stability considerations required for the diols to serve as effective monomers. Titanium(IV) chloride was found to efficiently couple these diols in high yield (up to 93%), producing oligomers with molecular weights up to 10 kDa. In future research, an appropriate dehydrating agent with attenuated reactivity will likely allow access to polymers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 230-259).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123198</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thianthrene organic materials : synthesis, properties and applications</title>
<link>https://hdl.handle.net/1721.1/123196</link>
<description>Thianthrene organic materials : synthesis, properties and applications
Ong, Wen Jie.
Thianthrene is a heterocyclic molecule with intriguing electrochemical properties. In this thesis, the synthesis, properties and applications of novel thianthrene-containing organic materials will be discussed. In Chapter 1, key concepts essential for understanding this thesis will be reviewed, including the structure, electrochemistry and synthesis of thianthrene, nucleophilic aromatic substitution (SNAr), and redox flow batteries. In Chapter 2, we exploit the dynamic, self-correcting nature of the SNAr reaction between ortho-aryldithiols and ortho-aryldifluorides to afford molecules with two, three, and four thianthrene moieties, respectively, in excellent yields. The same chemistry is also applied to the synthesis of ladder macrocycles and porous polymer networks. In Chapter 3, we further extend the dynamic SNAr reaction to the synthesis of ladder thianthrene polymers, comparing their electrochemical, photophysical and thermal properties to those of their dibenzo-1,4-dioxin analog. The last two chapters focus on the applications of novel organic materials containing thianthrene and its derivatives. Chapter 4 shows how incorporating thianthrenes into a resorcinarene-based cavitand enables electrochemically induced vase-kite conformation changes. In Chapter 5, we propose a new approach toward designing novel dual anolyte-catholyte molecules by deconstruction of relevant electroactive species. Using thianthrene and anthraquinone as examples, we design and synthesize three new molecular scaffolds exhibiting excellent electrochemical stability over a wide potential range and good solubility for symmetric redox flow battery applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123196</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of precatalysts and phosphine ligands for Pd-catalyzed transformations</title>
<link>https://hdl.handle.net/1721.1/123195</link>
<description>Design of precatalysts and phosphine ligands for Pd-catalyzed transformations
Ingoglia, Bryan Taylor.
The work described in this thesis pertains to the formation of carbon-heteroatom bonds facilitated by palladium catalysts supported by bulky phosphine ligands. The first chapter is a summary of how biaryl monophosphine ligands have been used for carbon-heteroatom bond formation, including a ligand selection guide. The second chapter demonstrates how phosphine-supported Pd(II) oxidative addition complexes can be used as precatalysts in a variety of cross-coupling reactions. The third chapter presents a systematic study of the ligand architecture in an effort to rationally design new ligands capable of facilitating the challenging C-F reductive elimination from Pd(II). The fourth chapter highlights a structurally interesting side product that formed during ligand synthesis. Chapter 1: Biaryl Monophosphine Ligands in Palladium-Catalyzed C-N Coupling: An Updated User's Guide. Over the past three decades, Pd-catalyzed cross-coupling reactions have become a mainstay of organic synthesis. In particular, catalysts derived from biaryl monophosphines have shown wide utility in forming C-N bonds under mild reaction conditions. This work summarizes a variety of C-N cross-coupling reactions using biaryl monophosphines as supporting ligands, with the goal of directing synthetic chemists toward the ligands and conditions best suited for a particular coupling. Chapter 2: Oxidative Addition Complexes as Precatalysts for Cross-Coupling Reactions Requiring Extremely Bulky Biarylphosphine Ligands. Palladium-based oxidative addition complexes were found to be effective precatalysts for C-N, C-O, and C-F cross-coupling reactions with a variety of aromatic electrophiles. These Pd(II) complexes are easily prepared and offer a convenient alternative to previously developed classes of precatalysts, as they can be formed even with extremely large phosphine ligands, for which palladacycle-based precatalysts do not readily form.
The complexes were found to be stable to long-term storage under ambient conditions. Chapter 3: Structure-Activity Relationship of Phosphine Ligands for the Fluorination of Five-Membered Heteroaromatic Compounds. Palladium catalysts supported by bulky dialkyl triaryl monophosphine ligands have been shown to promote the coupling of metal fluorides with (hetero)aryl bromides and triflates in good yield. A limitation of this methodology is its application to five-membered heteroaryl bromides, for which reductive elimination is more challenging due to the smaller size and electron-rich nature of these aryl electrophiles. In order to understand which structural features of the ancillary ligand are critical to facilitating the desired transformation, the ligand backbone was systematically varied and the initial rate of fluorination was monitored. These studies revealed that substitution at the 2" and 6" positions of the ligand scaffold has a dramatic impact on the reaction rate. As a result of these studies, new ligands were proposed which may be better able to accelerate the fluorination reaction. Chapter 4: Discovery of a Sterically Encumbered Hexasubstituted Arene through the Pd-Mediated Dearomative Rearrangement of Biaryl Monophosphine Ligands. A key feature of the Pd-catalyzed aromatic fluorination reaction is the presence of the aryl group at the 3' position of the ligand backbone. It has been shown that supporting ligands lacking substitution at this position can be modified through a dearomative rearrangement, which incorporates one catalytic equivalent of the aryl electrophile into the ligand backbone when very bulky biarylphosphines are used. In Chapter 3, it was demonstrated that this rearrangement reaction is useful for rapidly accessing a variety of dialkyl triaryl monophosphine derivatives. During these studies, it was noted that for electron-rich aryl groups, this arylation occurred twice to form an unusual sterically congested hexasubstituted arene.
X-ray crystallographic data indicates that the fully substituted aromatic ring is not planar.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123195</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A systems-level view of the tRNA epitranscriptome : defining the role of tRNA abundance, stability, and modifications in the bacterial stress response</title>
<link>https://hdl.handle.net/1721.1/123194</link>
<description>A systems-level view of the tRNA epitranscriptome : defining the role of tRNA abundance, stability, and modifications in the bacterial stress response
Hu, Jennifer F.(Jennifer Fan)
In all organisms, the modulation of gene expression is a critical aspect of growth, development, and adaptation to environmental changes. Technological advancements in the post-genomic era have provided new 'omics tools for achieving a systems-level understanding of transcription and translation. This has led to an emerging appreciation for the complexity of post-transcriptional mechanisms regulating gene expression, including the pool of transfer RNA (tRNA) molecules within the cell and the spectrum of modified ribonucleotides that comprise the epitranscriptome. The studies presented in this thesis address both 'omic tool-building and a mechanistic understanding of the prokaryotic epitranscriptome. Observations that tRNA modifications and tRNA copy numbers change dynamically in response to environmental perturbations have led to the hypothesis that tRNA-mediated mechanisms contribute to the cellular stress response. Members of the Mycobacterium tuberculosis complex provide a highly relevant model for investigating this mode of post-transcriptional regulation. Mycobacterium tuberculosis (Mtb) is the causative agent of tuberculosis (TB), one of the most prevalent infectious diseases in the world. During infection, Mtb is subjected to harsh conditions - including hypoxia, nutrient limitation, and macrophage-derived reactive oxygen/nitrogen species (ROS/RNS) - within avascular granulomas. Mtb has evolved to persist by dramatically remodeling its metabolism and entering a non-replicative, quasi-dormant state that renders it highly tolerant of host-inflicted immune assaults.
This thesis investigates the mechanisms by which mycobacteria use a combination of tRNA modifications and tRNA copy number changes to orchestrate extensive remodeling of biochemical networks during starvation-induced persistence. The technologies and questions applied to mycobacteria were also used to characterize the network of tRNA modifications and modifying enzymes in E. coli, along with their role in tRNA surveillance and quality control. The results of our studies have led to new technologies with commercial potential and have advanced our understanding of the complex mechanisms governing gene expression.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123194</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical characterization of murine calprotectin and the host-pathogen competition for manganese</title>
<link>https://hdl.handle.net/1721.1/123193</link>
<description>Biochemical characterization of murine calprotectin and the host-pathogen competition for manganese
Hadley, Rose Currier.
Microorganisms need to acquire metal ion nutrients when they attempt to colonize the host milieu. The competition for transition metal ions between a host and an invading pathogen constitutes an important aspect of innate immunity and microbial pathogenesis. The host deploys the metal-sequestering antimicrobial protein calprotectin (CP) to sites of infection to withhold transition metal ions. The goals of this thesis are to characterize the biochemical and Mn(II)-binding properties of the murine orthologue of calprotectin (mCP) as well as to evaluate the molecular details of the competition for Mn(II) between calprotectin and Mn(II) transport proteins from pathogenic bacteria. In the first part of this thesis, we provide initial biochemical characterization of mCP, supporting a role of this protein in transition metal sequestration and antibacterial activity. We demonstrate that this protein is a heterodimer that can undergo Ca(II)-induced tetramerization. We further show that mCP can bind a range of first-row transition metal ions and displays antibacterial activity against a panel of bacterial species. In the second part of this thesis, we characterize the Mn(II)-binding properties of mCP, revealing Ca(II)-dependent Mn(II) affinity at a hexahistidine site that bears a remarkable resemblance to the Mn(II)-sequestering site in human CP (hCP). We use biochemical assays and electron paramagnetic resonance (EPR) spectroscopy to elucidate the Mn(II)-coordinating residues of mCP. Altogether, we find that mCP possesses a much lower Ca(II) sensitivity than human CP, a fact that may have consequences in vivo.
In the final portion of this thesis, we use biochemical assays and EPR spectroscopy to monitor the competition for Mn(II) between hCP and the bacterial Mn(II) transport proteins MntC and PsaA. We show that in the presence of excess Ca(II), hCP rapidly outcompetes these proteins for Mn(II), revealing the notably high Mn(II) affinity of hCP and giving molecular credence to the role of CP in sequestering Mn(II) in vivo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123193</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rational designs of polymers for interfacial assemblies and intrinsic porosity</title>
<link>https://hdl.handle.net/1721.1/123192</link>
<description>Rational designs of polymers for interfacial assemblies and intrinsic porosity
He, Yuan, Ph.D., Massachusetts Institute of Technology.
In Chapter 1, we begin with a brief introduction to dynamic complex colloids and porous organic polymers and their applications in different areas. In Chapter 2, we describe interfacial polymerization on dynamic complex colloids using an interfacial free-radical initiator. Colloids in the Janus morphology with hemispherical shells are shown to be more stable than their counterparts without the shells. In Chapter 3, we describe a general strategy to make one-dimensional porous polymers using the Diels-Alder reaction and ring-opening metathesis polymerization (ROMP). The corresponding polymers and copolymers are tested for their adsorption capability towards common organic vapors, and the results are compared with commercially available activated carbon under the same test conditions. Chapter 4 discusses the gas separation performance of two specific polymers, CF₃-ROMP and OMe-ROMP, synthesized in Chapter 3. CF₃-ROMP and OMe-ROMP cast into membranes are tested in terms of pure-gas permeation, mixed-gas permeation, physical aging and CO₂ plasticization. The corresponding results are compared with PIM-1 under the same treatment conditions. Chapter 5 describes the synthesis of five zwitterionic polymers via free-radical polymerization and post-polymerization functionalization. Their hydrodynamic radii are measured in different brines, and static adsorption onto limestone (LS) rock is determined using UV-Vis spectroscopy at 80°C.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123192</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single nanocrystal spectroscopy of near and shortwave infrared emitters</title>
<link>https://hdl.handle.net/1721.1/123191</link>
<description>Single nanocrystal spectroscopy of near and shortwave infrared emitters
Bertram, Sophie Nathalie.
Optimizing material systems for their translation to the marketplace relies on a complete understanding of the underlying fundamental physical mechanisms governing the observed properties of the material. In semiconducting nanocrystals, coordinating the physical properties of the material with the synthetic procedures used to manipulate these properties has led to the successful proliferation of these systems throughout the display industry. Most of the high-profile applications of nanocrystals rely on the quality of these materials as emitters of light. Until recently, all of these applications were limited to emitters of visible light due to ubiquitous silicon detector technology. If we look at longer wavelengths of light, such as the near infrared and shortwave infrared (NIR and SWIR), we move toward regions that were historically exploited by the military, and as such, all associated technology was heavily regulated. Relatively recent deregulation of SWIR detector technology has opened up these technologies to academic researchers and commercial industries. This deregulation has been accompanied by significant advancements in both detector technology and the development and discovery of new materials that are active in this wavelength regime. Much of the success that nanocrystals have found in industry has relied on an understanding of the physical mechanisms that lead to an emission event. As spectroscopists and physical chemists, we take snapshots of physical properties and systematically manipulate our materials to develop a basic understanding of how energy is transported and transformed in our material. As we push further into the infrared, we are working with materials that are highly unoptimized and underdeveloped but which have incredible potential as material systems for in vivo high-resolution biological imaging and single emitters for optical and secure quantum communications.
Indium arsenide (InAs) has long been exploited in the epitaxial quantum dot community for its emission throughout the SWIR and, critically, at the wavelengths where optical communication occurs. Currently, this is the leading technology for quantum-dot-based single photon and entangled photon emitters. These systems suffer, however, from difficult and heterogeneous fabrication procedures. Colloidal InAs has recently been synthesized with high quantum yields and tunability throughout the SWIR. In this thesis we explore some of the fundamental emission mechanisms that occur in colloidal InAs NCs. Colloidal NC synthesis aims for a homogeneous sample of emitters. While InAs is far from this goal, with new and sensitive SWIR single photon detector technology we can study InAs at the single NC level to unravel some of the fundamental physical properties determining emission in this material. We find, strikingly, that while the ensemble properties of this material may be far from ideal, the single InAs NC properties approach some of the best visible-emitter systems that we have. This suggests that there is a path forward for the implementation of these materials in high-profile applications. In this thesis I have translated a technique known as solution photon correlation Fourier spectroscopy to study the emission mechanisms in infrared emissive materials. I first explore lead sulfide quantum dots emissive in the NIR and conclude that the emission is mediated by multiple emissive states. I then translate this technique to the SWIR, overcoming several technical challenges with microscopy at longer wavelengths. Finally, I use this new technique to study InAs QDs at the single nanocrystal level. I conclude that single SWIR emissive InAs QDs have narrow emission linewidths but significant broadening due to heterogeneities at the ensemble level.
While this is by no means a complete picture of the emission mechanisms in these materials, it is a demonstration of the types of experiments and the current technological capabilities available to us to understand these materials. At the end, I provide some suggestions and preliminary work to push our understanding even further.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 97-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123191</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On traffic disruptions : event detection from visual data and Bayesian congestion games</title>
<link>https://hdl.handle.net/1721.1/123189</link>
<description>On traffic disruptions : event detection from visual data and Bayesian congestion games
Liu, Jeffrey, Ph.D., Massachusetts Institute of Technology.
Road traffic is often subject to random disturbances due to weather, incidents, or special events. Effectively detecting and disseminating information about disturbances is a key goal of modern, "smart" infrastructure. Toward this end, this dissertation investigates two related questions. First, how can traffic managers better utilize existing traffic cameras to automatically identify traffic disturbances? Second, how can we model different aspects of information, such as human misperception or ignorance of others' information, and their effects on travelers' route choices? Part I addresses analyzing unstructured, sequential image data, such as traffic CCTV footage, with a novel, semantics-oriented approach based on natural language and semantic features. The approach extracts structured, human-interpretable "topic signals" from distributions of common object labels, which correspond to physical processes depicted in the footage. Changes and anomalies in these topic signals are used to identify notable events in weather conditions and traffic congestion. This is demonstrated on a new, real-world dataset collected from Boston freeway CCTV footage. In notable event detection, the use of the topic signal representation outperforms the use of any individual label signal. Part II addresses game-theoretic modeling of informational effects on travelers' route choices. It considers both access to and accuracy of information about the network state, as well as the perception of others' information. It introduces the Subjective Bayesian Congestion Game (BCG), which models a broader set of player beliefs than those allowed by the conventional common prior assumption (Objective BCG). This enables modeling of uncertainty about others' information, such as when one population is unaware of information services. Analytical solutions are provided for a stylized configuration of the Subjective BCG, and a numerical solver is provided for more general configurations.
Compared to the Objective BCG, the Subjective BCG has qualitatively distinct solutions and costs, indicating that the perception of others' information significantly affects equilibrium route choices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123189</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microseismic and real-time imaging of fractures and microfractures in Barre granite and Opalinus clayshale</title>
<link>https://hdl.handle.net/1721.1/123188</link>
<description>Microseismic and real-time imaging of fractures and microfractures in Barre granite and Opalinus clayshale
Li, Qiuyi Bing.
The goal of this work is to better understand field microseismicity through laboratory experiments. Currently, industry practice consists of detecting activity using monitoring stations located around the volume of interest, and subsequently using the data to infer characteristics such as the location, magnitude and mechanism of the corresponding microseismic sources. With this approach it is difficult, given the generally great depths of the microseismic activity, to verify characteristics such as the extent, orientation and mechanism of the fracturing activity. Our goal in the laboratory is to address this knowledge gap by imaging the initiation and propagation of hydraulic fractures in real time, and comparing these data to the microseisms emitted during the experiment. Experiments are conducted on Barre granite, a coarse-grained crystalline rock similar to that often found in enhanced geothermal systems, and Opalinus clayshale, a fine-grained sedimentary rock analogous to shales found in unconventional oil and gas reservoirs. These materials are first tested in a four-point beam-bending setup to generate baseline results under dry conditions, and then hydraulically fractured to compare their behaviour under conditions similar to those in the field. We find that differences between granite and shale behaviour can be attributed to at least two factors. Firstly, the grain size affects the size of the process zone ahead of the fracture tip, which is significantly larger in granite. Secondly, the shale exhibits velocity-strengthening behaviour while granite is a velocity-weakening material, i.e. slip in granite tends to nucleate along single small asperities while slip in shale tends to occur along larger contact areas. As a result, macro-scale tensile fractures in granite are composed of hundreds of micrometre- to millimetre-scale en-echelon shear microcracks that then coalesce with tensile microcracks.
This mechanism tends to generate more seismic activity than in shale, where tensile microcracks on the order of tens of micrometres are created directly. The magnitude of seismicity is quantified in this thesis by normalised radiated seismic energy, which we find in shale to be approximately 2-5% of that in granite. We also find that fluid pressure has a significant effect on seismic activity, and hypothesise that an increased loading rate leads to increased inertia in the material ahead of the crack tip, which results in increased fracture complexity. This may result in increased seismic activity due to the increased total accumulated fracture length over which seismic slip may occur.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 224-234).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123188</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mass flux, blade dynamics, wave damping and turbulence in model seagrass meadow</title>
<link>https://hdl.handle.net/1721.1/123187</link>
<description>Mass flux, blade dynamics, wave damping and turbulence in model seagrass meadow
Lei, Jiarui, Ph.D., Massachusetts Institute of Technology.
Aquatic vegetation provides ecosystem services of great value. Seagrass and freshwater macrophytes can improve water quality by filtering nutrients and reducing re-suspension of sediment. They can also protect shorelines by damping waves. This thesis explores the interaction between flexible vegetation (e.g. seagrass) and water flow. Specifically, I develop physically based models to predict the mass flux to individual seagrass blades, the dynamic behavior of seagrass blades, the wave decay associated with a submerged meadow, and the turbulence within a seagrass meadow as a function of plant morphology, flexibility, and shoot density. Flexible plants/blades reconfigure in response to flow velocity, which reduces drag relative to a rigid plant of the same morphology. The impact of reconfiguration on drag can be characterized using an effective length, lₑ, which represents the length of a rigid blade that generates the same drag as a flexible blade of length l. The effective blade length depends on the Cauchy number, Ca, which defines the ratio of hydrodynamic drag to the restoring force due to blade stiffness. To validate the proposed models, a combination of laboratory experiments and numerical simulations was conducted. Our models also reproduced the results of different laboratory and field studies to within 30%. With these models, engineers and practitioners will be able to assess different scenarios of vegetation restoration for their potential to protect shorelines and to reduce erosion events that drive poor water quality.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123187</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics-driven routing of inspection crews and aerial sensors for post-disaster damage assessment</title>
<link>https://hdl.handle.net/1721.1/123186</link>
<description>Analytics-driven routing of inspection crews and aerial sensors for post-disaster damage assessment
Lee, Andrew C.(Andrew Choong hon)
Infrastructure networks such as natural gas pipelines and water systems are prone to failures from natural disasters, which result in huge societal and economic losses. To minimize these losses, inspection crews must rapidly identify failures (e.g., pipeline bursts, waterway blockages). However, infrastructure agencies often incur high costs and delays due to limited resources and diagnostic uncertainty about the locations and types of failures. This thesis presents an analytics-driven network inspection approach that leverages data from fixed sensors and Unmanned Aerial Systems (UAS) to reduce diagnostic uncertainty, and determines optimal routing strategies for both ground crews and UAS. In our approach, the network is partitioned into smaller regions (subnetworks) based on the monitoring range of fixed sensors. We use sensor data and relevant physical features to assign priority inspection levels and predict failure rates for these subnetworks. We then leverage UAS to localize failures and incrementally update failure rates. The overall inspection is based on two routing problems: the Aerial Sensor Inspection Problem (ASIP), which guides UAS-based inspection of subnetworks; and the Prioritized Inspection Routing Problem (PIRP), which integrates pre-solved ASIP times and failure rates to determine crew routing strategies. For pipeline network inspection, we consider a set of monitoring locations that enable modeling of UAS platform and infrastructure topology constraints, and determine the feasibility of UAS routes. To solve the ASIP for realistic situations, we propose an efficient set-cover-based heuristic. We show how to obtain crew routing strategies for large-scale network inspection by integrating ASIP solutions into the PIRP and solving the resulting Mixed Integer Programming (MIP) problem. For drainage network inspection, we find that post-storm fixed sensor alerts are strongly correlated with the extent of damage in corresponding subnetworks.
We present two formulations of PIRP: an adaptive stochastic dynamic program that considers prediction intervals of failure rates; and a non-adaptive certainty equivalent MIP that only accounts for mean failure rates. Solutions to these problems allow us to evaluate the value of integrating sensor data into inspection operations. We demonstrate the benefits of our approach using real data on network failures and inspections following Hurricane Harvey in 2017.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123186</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mesoscale saturated &amp; unsaturated poroelasticity of highly heterogeneous porous solids : discrete solid-fluid descriptions</title>
<link>https://hdl.handle.net/1721.1/123185</link>
<description>Mesoscale saturated &amp; unsaturated poroelasticity of highly heterogeneous porous solids : discrete solid-fluid descriptions
Khosh Sokhan Monfared, Siavash.
The capacity of the continuum approach to capture the effective poromechanical response of highly heterogeneous porous solids is limited. Specifically, the mean-field theories of continuum micromechanics cannot capture the full spatial variation of mechanical properties and are restricted by scale separability. Additionally, any approach to unsaturated poromechanics requires a description of fluids that accounts for confinement, temperature variations, and the strength of fluid-fluid and fluid-solid interactions. Most prevailing models are phenomenological in approach and hinge on the concept of effective stress for capturing liquid and gas interactions with solid(s). Thus, a framework is implemented based on discrete descriptions of solids and fluids. The behavior of solids is captured through the Lattice Element Method. This method utilizes a finite number of mass points, each interacting with its nearest neighbors through linear or non-linear effective interaction potentials, while also accounting for anisotropy. The fluid behavior is described in the grand canonical ensemble in a statistical mechanics approach, which paves the way to studying the behavior of confined fluids while providing access to the capillary stress tensorial field in the pore domain. The two descriptions are brought together via a local pore pressure force formulation that links capillary pressures to solid deformation. For the case of fully saturated poroelasticity, generalized discrete expressions for the Biot poroelastic coefficients, defined in statistical mechanics ensembles, are presented. The developed theoretical model and its implementation are validated on simple porous media for which micromechanics-based solutions exist. By way of application to real heterogeneous materials imported from computed tomography (CT) scans, a methodology is presented to merge lab-measured nanoindentation data and CT scans into the developed computational framework.
Finally, capillary condensation in disordered granular packings is studied. The results provide insights into confined fluid behavior, fluid criticality, the interplay of disorder, temperature and capillary stress fields, as well as liquid cluster formation, growth and coalescence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 149-158).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123185</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring wetting behavior at extremes</title>
<link>https://hdl.handle.net/1721.1/123183</link>
<description>Tailoring wetting behavior at extremes
Wilke, Kyle(Kyle L.)
In the classical understanding of liquid interactions with surfaces, liquid/surface chemistry dictates wetting behavior, requiring the use of specific materials to achieve desired behavior. This restriction creates a number of challenges this thesis aims to address. First, high-thermal-conductance, hydrophobic coatings are used to enhance condensation heat transfer, but have poor durability due to the extreme environment. We developed polymer-infused porous surfaces, which (1) provided a large surface area to adhere and constrain the polymer to the condenser surface and (2) created a network of high-thermal-conductivity material through the otherwise low-thermal-conductivity polymer. These surfaces enhanced condensation heat transfer 8-fold and showed no degradation over 200 days. Next, we demonstrated the use of reentrant microstructures and contact line pinning to shift the wetting paradigm, achieving any wetting behavior independent of the chemical nature of the surface and liquid, i.e., a surface with omniphobicity (repels all liquids), omniphilicity (wicks all liquids), switchability between repelling and wicking, and selectivity (repels or wicks only certain liquids). We then addressed robustness issues of reentrant microstructures during condensation on the surface by designing reentrant cavities with a pitch on the order of 100 nanometers. These dense, isolated cavities ensured that nucleating droplets did not form within all cavities and prevented liquid propagation within the structures, maintaining repellency to various liquids up to 10 °C below the dew point. We explored alternative fabrication methods for omniphobic, doubly reentrant microstructures by using intrinsic stresses in thin films to induce bending, achieving omniphobicity with standard microfabrication processes.
Finally, we enhanced wicking in pillar arrays by allowing pillar pitch and diameter to vary along the surface, optimizing each section of the surface for minimal pressure drop, increasing the wicking performance relative to uniform arrays. Each chapter of this thesis is dedicated to one of these challenging areas in tailoring wetting behavior at extremes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123183</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance and mechanism of emulsion separation using electrospun fiber membranes</title>
<link>https://hdl.handle.net/1721.1/123138</link>
<description>Performance and mechanism of emulsion separation using electrospun fiber membranes
Lin, Yi-Min, Ph.D., Massachusetts Institute of Technology.
The compositions of oil-in-water emulsions encountered in industrial processes vary widely depending on their sources. This variety creates challenges for the application of membrane-based separation. Electrospun fiber membranes have gained attention in emulsion separation due to their high porosity and open, highly interconnected pore structure. However, because of the unique fibrous structure of the membrane and the deformability of oil droplets, the fouling mechanism in the filtration process remains unclear. To study the membrane-emulsion interaction, electrospun polyamide membranes were challenged by model emulsions of dodecane stabilized by different types of surfactants in dead-end and cross-flow configurations. It was found that fouling in dead-end filtration was mainly determined by the electrostatic interactions between the membrane and the foulants, while fouling in cross-flow filtration was correlated with the hydrophobic interactions between the oil drop and the membrane. These findings were corroborated by the classical blocking filtration models and in-situ visualization by camera. The blocking filtration models also illustrated the transition of fouling modes in dead-end filtration. Based on these findings, a membrane design strategy to reduce membrane fouling in the microfiltration of oily emulsions was developed by inducing electrostatic repulsion between the membrane and the emulsion. Plasma treatment and successive layer-by-layer polyelectrolyte dipping depositions were used to alter the surface charge of electrospun polyamide membranes while maintaining the interconnected pore structure and high porosity.
The permeate flux of the plasma-treated membranes when separating emulsions stabilized by anionic surfactants increased ~3.2-fold compared to that of the as-electrospun membranes after 4 hours of cross-flow filtration. When separating emulsions stabilized by cationic surfactants, the permeate flux of the polycation-coated membranes similarly increased ~3.3-fold. This anti-fouling tendency can be expressed quantitatively as a function of a proposed design metric, the electrostatic repulsion strength. To investigate the fouling mechanism and the oil drop-fiber interaction, direct, three-dimensional (3D) visualization of oil-fouled electrospun fiber membranes is reported for the first time in this thesis. High-resolution 3D images were acquired by dual-channel confocal laser scanning microscopy (CLSM) in which both the fibers and the oil were fluorescently labeled. Through direct visualization, rejected oil is found to form droplets with a clam-shell shape on the oleophobic fibers, while the oil tends to wet the oleophilic polymeric fibers and spread across the membrane. The morphology of the oily foulants is also analyzed as a function of separation time, which is qualitatively consistent with the transition of fouling modes indicated by the blocking filtration models. This direct, 3D visualization CLSM technique is a powerful tool for characterizing the mechanisms of fouling in membranes used for liquid emulsion separations.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123138</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction and characterization of a fission converter based epithermal neutron beam for BNCT</title>
<link>https://hdl.handle.net/1721.1/123100</link>
<description>Construction and characterization of a fission converter based epithermal neutron beam for BNCT
Riley, Kent J.(Kent Jason)
This study demonstrates the first successful implementation of a fission converter to produce a source of epithermal neutrons suitable for BNCT. The final design, construction and characterization of a new epithermal neutron beam is presented. A high intensity source with low contamination is obtained using a fission converter driven by thermal neutrons from the MIT research reactor. The facility is housed in the experimental hall and operates in parallel with other user applications. The fission converter is powered by 10 spent MITR-II fuel elements and employs resonance scattering filters with thermal neutron absorbers to tailor the neutron energy distribution. A lead shield attenuates photon contamination in the beam and lead collimators direct the neutron beam toward the patient. A horizontal beamline leads to the new medical room which is built with 1.1 m thick, high density concrete walls and is large enough to permit various treatment configurations. Ambient dose equivalent rates outside the shielded room are &lt; 1 mrem/hr with the converter operating at full power and do not interfere with other experimental users and reactor operations.; Beam delivery is controlled with three in-line shutters that allow unrestricted access to the medical room while the reactor is at full power. Patient irradiations are controlled by redundant programmable logic controllers that automatically close the beam shutters when the prescribed monitor counts have been accumulated. Measurements were performed on the central axis to assess beam performance. An in-air epithermal neutron flux of 8.4 +/- 0.8 E+09 n/cm2s was obtained with concomitant fast neutron and photon absorbed dose rates of 3.9 +/- 0.5 and 11.8 +/- 0.8 cGy/min. Depth dose profiles measured in-phantom are in general agreement with those determined from Monte Carlo calculations and indicate that normal tissue tolerance can be reached in treatment times of less than 10 minutes.
The in-beam fast neutron and photon contaminants account for less than 10% of the dose received by normal tissue surrounding the target volume, which approaches the clinical optimum.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2001.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123100</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Upper limits for gravitational radiation from some astrophysical sources</title>
<link>https://hdl.handle.net/1721.1/123082</link>
<description>Upper limits for gravitational radiation from some astrophysical sources
Livas, Jeffrey C.(Jeffrey Clark)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1987.; Title as it appeared in Massachusetts Institute of Technology Graduate List: Upper limits on gravitational radiation from some astrophysical sources.; Bibliography: leaves 155-159.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123082</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A search for astronomical gravitational radiation with an interferometric broad band antenna</title>
<link>https://hdl.handle.net/1721.1/123081</link>
<description>A search for astronomical gravitational radiation with an interferometric broad band antenna
Dewey, Daniel.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1986.; Bibliography: leaves 111-115.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123081</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prestigious American newspapers' coverage of African political crises events</title>
<link>https://hdl.handle.net/1721.1/123080</link>
<description>Prestigious American newspapers' coverage of African political crises events
Coleman, Marsha Lynne.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1982.; Includes bibliographies.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123080</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The synthesis of unimolecular polymers through iterative exponential growth and their applications in block copolymer phase segregation and biological systems</title>
<link>https://hdl.handle.net/1721.1/123073</link>
<description>The synthesis of unimolecular polymers through iterative exponential growth and their applications in block copolymer phase segregation and biological systems
Jiang, Yivan.
Absolute structural control over polymers - in terms of sequence, length, and stereochemistry - is a Holy Grail of polymer science. Inspired by Nature, polymer chemists over the last century have sought new methods and strategies to control these parameters. An inverse relationship exists, however, between the ability to control the primary structure of a macromolecule and the ability to scale the production of the same macromolecule. In this thesis, we describe the application of iterative exponential growth (IEG) toward the scalable synthesis of sequence-defined, unimolecular, chiral polymers. Using this strategy, we have created a wide array of functional molecularly precise polymers of up to 12.1 kDa in molar mass with various side chains for applications in block copolymer phase segregation, lectin binding, and nanoparticle formulations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123073</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biosensor-based strategies for improving pathway production in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/123072</link>
<description>Biosensor-based strategies for improving pathway production in Escherichia coli
Doong, Stephanie J.
Microbial production of chemicals and fuels is an attractive renewable alternative to petroleum-based processes. D-glucaric acid, a Department of Energy top value-added chemical from biomass, is a precursor to polymers such as nylons and is used in detergents. An engineered metabolic pathway requiring three heterologous enzymes to convert glucose into glucaric acid in Escherichia coli was previously demonstrated by the Prather lab. Glucaric acid production has been shown to be limited by the two downstream enzymes myo-inositol-1-phosphate synthase (MIPS) and myo-inositol oxygenase (MIOX). This work develops and deploys a biosensor that recognizes a pathway intermediate in order to overcome both limitations. A biosensor for myo-inositol (MI) was developed using the transcriptional regulator IpsA from the organism Corynebacterium glutamicum. A hybrid promoter was designed to enable function in the desired host organism E. coli.; The modular design of the biosensor permitted the behavior and MI dose response to be adjusted for the pathway applications. The myo-inositol biosensor was used to regulate expression of Miox, the enzyme that consumes myo-inositol, such that Miox was transcribed only in the presence of its substrate. Controlled expression of Miox led to a 2.5-fold increase in glucaric acid titer compared to the static case where Miox was constitutively expressed. This dynamic regulation scheme was then paired with a system that dynamically knocked down glycolysis, which independently improved glucaric acid production by relieving competition of glycolysis with MIPS, the first pathway enzyme. The layered dynamic regulation scheme improved glucaric acid production by up to 9-fold. Next, the biosensor was used as a high-throughput screen for mutants of MIPS generated by directed evolution.
The biosensor enabled a large library of MIPS to be screened by fluorescence-activated cell sorting (FACS).; The screen identified MIPS mutants with up to 20% improvement in myo-inositol production. This work used a biosensor to tackle two pathway limitations and improve glucaric acid production, showcasing the biosensor as a powerful metabolic engineering tool.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 112-120).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123072</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic technologies to engineer and understand the microbiome</title>
<link>https://hdl.handle.net/1721.1/123071</link>
<description>Genetic technologies to engineer and understand the microbiome
Mimee, Mark(Mark K.)
The microbes that inhabit the human body are integral to human health and disease: from inflammatory bowel disease to allergy, metabolic syndrome to autism. Due to its high connectivity with human physiology, manipulation of the microbiota has therapeutic potential in a vast array of diseases. However, techniques for targeted modification of microbial communities are currently lacking. In this thesis, I present several technologies that can be applied to engineer and better understand the microbiota. First, we present a subtractive strategy for microbiota manipulation using CRISPR-Cas engineered bacteriophage that can selectively remove target strains from a community based on the presence of target DNA sequences. Next, we describe an additive strategy whereby commensal Bacteroides spp. are genetically modified to perform novel functions within the murine microbiota. We developed a suite of genetic parts to facilitate organism design and engineering. These tools were then expanded to engineer outer membrane vesicles derived from Bacteroides as immunomodulatory agents. Finally, we leveraged the natural sensing abilities of bacteria to create cellular biosensors for biomarkers of gastrointestinal disease. Heme biosensors were paired with readout electronics to generate an ingestible medical device for in situ detection of gastrointestinal bleeding. The technologies described herein contribute to the progression of microbiome engineering towards clinical applications and the advancement of our understanding of how our smallest friends impact our health.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-195).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123071</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering VHH-based chimeric antigen receptor (CAR) T cell therapy for solid tumor treatment</title>
<link>https://hdl.handle.net/1721.1/123070</link>
<description>Engineering VHH-based chimeric antigen receptor (CAR) T cell therapy for solid tumor treatment
Xie, Yushu Joy.
Chimeric antigen receptor (CAR) T cells are a promising cancer therapeutic, as they can specifically redirect the cytotoxic function of a T cell to a chosen target of interest. CAR T cells have been successful in clinical trials against hematological cancers, but have experienced low efficacy against solid tumors for a number of reasons, including a paucity of tumor-specific antigens to target and a highly immunosuppressive solid tumor microenvironment. In chapter 2 of this thesis, we develop a strategy to target multiple solid tumor types through markers in their microenvironment. The use of single domain antibody (VHH)-based CAR T cells that recognize these markers circumvents the need for tumor-specific targets. Chapter 3 will describe methods to overcome the immunosuppressive microenvironment of solid tumors. Here, we have developed VHH-secreting CAR T cells that can modulate additional aspects of the tumor microenvironment, including the engagement of the innate immune system through secretion of a VHH against an inhibitor of phagocytosis. We show that this strategy of VHH-secretion by CAR T cells can lead to significant benefits in outcome. We also demonstrate that delivery of therapeutics by CAR T cells can improve the safety profile of the therapeutic. Chapter 4 of this thesis explores strategies to increase the targeting capacity of CAR T cells by building logic-gated CARs. Finally, chapter 5 will describe work in imaging CAR T cells specifically, longitudinally, and non-invasively through PET imaging. Our results demonstrate the flexibility of VHHs in CAR T cell engineering and the potential of VHH-based CAR T cells to target the tumor microenvironment, modulate the tumor microenvironment, and treat solid tumors.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123070</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale biomolecular mapping in cells and tissues with expansion microscopy</title>
<link>https://hdl.handle.net/1721.1/123069</link>
<description>Nanoscale biomolecular mapping in cells and tissues with expansion microscopy
Wassie, Asmamaw T.
The ability to map the molecular organization of cells and tissues with nanoscale precision would open the door to understanding their biological functions as well as the mechanisms that lead to pathologies. Though recent technological advances have expanded the repertoire of biological tools, this crucial ability remains an unmet need. Expansion Microscopy (ExM) enables the 3D, nanoscale imaging of biological structures by physically magnifying cells and tissues. Specimens, embedded in a swellable hydrogel, undergo uniform expansion as covalently anchored labels and tags are isotropically separated. ExM thereby allows for the inexpensive nanoscale imaging of biological samples on conventional light microscopes. In this thesis, I describe the development of a method called Expansion FISH (ExFISH) that uses ExM to enable the nanoscale imaging of RNA throughout cells and tissues. A novel chemical approach covalently retains endogenous RNA molecules in the ExM hydrogel. After expansion, RNA molecules can be interrogated with in situ hybridization. ExFISH opens the door for the investigation of the nanoscale organization of RNA molecules in various contexts. Applied to the brain, ExFISH allows for the precise localization of RNA in nanoscale neuronal compartments such as dendrites and spines. Furthermore, the optical homogeneity of expanded samples enables the imaging of RNA in thick tissue-sections. ExFISH also supports multiplexed imaging of RNA as well as signal amplification techniques. Finally, this thesis describes strategies for the multiplexed characterization of biological specimens. Taken together, these approaches will find applications in developing an integrative understanding of cellular and tissue biology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis. "June 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123069</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Macrophage-mediated resistance mechanisms against MAPK inhibition by cancer therapeutics</title>
<link>https://hdl.handle.net/1721.1/123068</link>
<description>Macrophage-mediated resistance mechanisms against MAPK inhibition by cancer therapeutics
Wang, Stephanie Joy.
Kinase inhibitors targeting the MAPK pathway are often limited by lack of durable clinical responses or, in many cancer types, lack of even initial responses. While great headway has been made on characterizing mechanisms of resistance, understanding the full influence of complex intercellular interactions on drug resistance remains a challenge. Here, we combine computation with experiment to investigate the cellular and molecular contributions of the tumor microenvironment to MAPK inhibitor response. First, we employ a computational framework using published bulk and single-cell patient gene expression data to investigate immune cell correlates of MAPK inhibitor resistance, and subsequently quantify potential intercellular ligand-receptor interactions between cell populations of interest. Next, we use multiplex proteomic immunoassays and co-culture experiments to characterize the impact of these interactions on tumor-intrinsic bypass signaling and phenotype. To assess the in vivo relevance of these multicellular and multidirectional signaling networks, we develop an intravital imaging strategy to monitor the influence of tumor-associated macrophages on cancer cell kinase activity dynamics. Finally, we rationally design a nanotherapy to exploit inhibitor-induced immunomodulation and crosstalk. Overall, we present a paradigm to systematically dissect signaling pathways between tumors and their microenvironments, validate these interactions in various models of disease, and design therapeutic strategies to target them.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 93-108).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123068</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gastric resident systems for large dose drug delivery</title>
<link>https://hdl.handle.net/1721.1/123066</link>
<description>Gastric resident systems for large dose drug delivery
Verma, Malvika.
Lack of medication adherence is a worldwide problem. As many as 50-70% of patients have trouble following treatment recommendations. Whereas adherence is driven by many factors including the socioeconomic status of a patient and the quality of the health care team, drug regimen complexity also affects treatment outcomes. For example, adherence decreases as the number of pills per dose and the number of doses per day increase. For diseases where potent medications are available, depot formulations provide sustained drug release to simplify dosing. For diseases lacking potent compounds for treatment, there remains an unmet need for depot systems that could transform medication adherence. Tuberculosis (TB) is one such disease with a high pill burden, where poor patient adherence to the treatment regimen is a major cause of treatment failure and contributes to the emergence of drug-resistant TB strains.; For example, an average 60-kg patient with TB needs to take 3.3 g of antibiotics per day, a dose that exceeds the capacity of the largest swallowable capsule and of current depot systems. According to the World Health Organization (WHO), 10 million people developed TB in 2017 with a global economic burden amounting to $12 billion annually. This thesis presents a solution to the challenge of prolonged dosing for diseases such as TB whose regimens require multigram drug doses. First, a gastric resident system (GRS) compatible with transesophageal administration was designed using biocompatible materials. The GRS consists of a series of drug pills on a coiled superelastic nitinol wire; the ends are protected with a retainer and tubing. Safe administration, gastric retention for 1 month, and retrieval of the GRS were demonstrated in a swine model.
Next, sustained-release formulations of 6 TB antibiotics were developed as drug-polymer pills, and first-order drug release kinetics were achieved in vitro.; Then, the GRS was demonstrated to be capable of safely encapsulating and releasing 10 grams of an antibiotic over a period of weeks in a swine model. Lastly, end-user acceptance was assessed with a field questionnaire in India, and an economic model was developed to estimate the impact of the GRS on the health care system. There are multiple applications of the GRS in the field of infectious diseases, as well as for other indications where multigram depots could impart meaningful benefits to patients, helping maximize adherence to their medication.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 154-176).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123066</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering more potent vaccines for the treatment of cancer and autoimmunity</title>
<link>https://hdl.handle.net/1721.1/123065</link>
<description>Engineering more potent vaccines for the treatment of cancer and autoimmunity
Mehta, Naveen K., Ph. D., Massachusetts Institute of Technology.
Vaccination against infectious diseases has long been heralded as one of the greatest advancements in public health, yet its application to other clinical indications has fallen short of expectations. In this thesis, we apply engineering principles to develop more potent vaccines in the treatment of cancer and autoimmunity. Both major components of molecular vaccines, antigen and adjuvant, are independently explored as a part of this work. Our antigen studies sought to improve the delivery of peptide epitopes to lymphoid organs by fusing epitopes to inert protein carriers with defined pharmacokinetic properties. To promote anti-tumor immunity, we found that antigen carriers should 1) protect peptide cargo from proteolytic degradation, 2) be appropriately bulky to drain into the lymphatics, and 3) be rapidly cleared once in the blood to prevent tolerization at distal, poorly inflamed organs.; Applying these principles, we identified transthyretin as an optimal delivery protein, and demonstrated efficacy against a number of clinically relevant antigens. Because our protein-epitope fusion approach is fully recombinant in nature, we were able to convert our protein vaccines into nucleic acid modalities, including plasmid DNA and self-replicating RNA, which are significantly easier and cheaper to manufacture at scale. Finally, we applied these findings to purposefully induce tolerization in the treatment of autoimmunity, and found that albumin is a particularly efficacious antigen carrier protein for this application due to its extended half-life. On the adjuvant front, we attempted to engineer novel Toll-like receptor 3 (TLR3) agonists via yeast surface display.
Although we successfully developed high affinity TLR3 binders, all tested clones failed to agonize TLR3 despite the utilization of several multimerization strategies.; Separately, in an effort to better understand adjuvant biology, we conducted a detailed mechanistic study of lipo-CpG, a particularly potent amphiphilic CpG variant previously developed by the Irvine lab. We uncovered a cascade of inflammatory signals originating from monocytes that facilitates the induction of high magnitude T cell responses, largely by acting in trans rather than directly on the antigen-presenting cell. Overall, these studies have elucidated a number of design principles that should aid in the engineering of next generation vaccines to better treat cancer and autoimmunity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 171-181).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123065</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering protein-based modulators of allergic, temporal, and checkpoint blockade anti-cancer immunity</title>
<link>https://hdl.handle.net/1721.1/123064</link>
<description>Engineering protein-based modulators of allergic, temporal, and checkpoint blockade anti-cancer immunity
Rothschilds, Adrienne Marie.
Effective cancer treatment of the future requires incorporating diverse and innovative aspects of immunity to fight against cancer, accounting for pharmacokinetic and temporal barriers of therapeutics, and engineering approaches to understand and improve upon current immunotherapies. This thesis addresses these challenges in three projects utilizing the Wittrup Lab's quantitative, engineering approach to protein-based cancer immunotherapy. In the first project, I attempted to harness the potency of allergic reactions against cancer by designing IgE class antibodies against two mouse tumor antigens and comparing them with traditional IgG antibodies. These IgE antibodies elicited modest or no tumor control, and limited efficacy could be due to fast pharmacokinetic clearance, absence of human-like allergic effector cells in mice, or tumor-suppressive effects from mast cells responding to IgE.; The second project described in this thesis focused on synchronizing combination immunotherapies with the temporal progression of the anti-cancer immune response. In this work, anti-tumor antibodies were combined with the cytokines interleukin 2 (IL2) and interferon alpha (IFNα). The order of administration of these therapies decoupled strong efficacy from dose-limiting toxicity in two tumor models. Given before IFNα, IL2 activated natural killer cells and heightened their responsiveness to subsequent IFNα, which was ultimately toxic and unnecessary for therapeutic efficacy.
This project's proof of concept that efficacy and toxicity could be unlinked in immunotherapy began to establish a framework for the rational design of combination therapy treatment schedules, with the goal of delivering each agent when the corresponding arm of the immune system is active.; Finally, the third project used the Wittrup Lab's system of yeast surface display to engineer novel antibodies against the checkpoint blockade target cytotoxic T lymphocyte associated protein 4 (CTLA-4) as tools to improve understanding of the anti-CTLA-4 mechanism of action against cancer. Although the first wave of antibodies generated had favorable characteristics against CTLA-4 as a soluble target, they bound a CTLA-4 epitope too close to the cell surface and so could not be used for therapeutic studies. Next generation sequencing on the yeast libraries identified alternative CTLA-4 binding antibody sequences, and these will be tested in future mechanistic and therapeutic studies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 128-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123064</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational analysis of cell-cell communication in the tumor microenvironment</title>
<link>https://hdl.handle.net/1721.1/123063</link>
<description>Computational analysis of cell-cell communication in the tumor microenvironment
Kumar, Manu Prajapati.
Cell-cell communication between malignant, immune, and stromal cells influences many aspects of in vivo tumor biology, including tumorigenesis, tumor progression, and therapeutic resistance. As a result, targeting receptor-ligand interactions, for instance with immune checkpoint inhibitors, can provide significant benefit for patients. However, our knowledge of this complex network of cell-cell interactions in a tumor microenvironment is still incomplete, and there is a need for systematic approaches to study cell-cell communication. This thesis presents computational approaches for characterizing cell-cell communication networks in three different experimental studies. In the first study, we modeled metastatic triple negative breast cancer in the liver using a microphysiological system and identified inflammatory cytokines secreted by the microenvironment that result in the proliferation of dormant metastases. In the second study, we used single-cell RNA sequencing (scRNA-seq) to quantify receptor-ligand interactions in six syngeneic mouse tumor models. To identify specific receptor-ligand interactions that predict tumor growth rate and immune infiltration, we used receptor-ligand interactions as features in regression models. For the third study, we extended our scRNA-seq approach to include inferences of single-cell signaling pathway and transcription factor activity. We then identified protein-protein interaction networks that connect extra-cellular receptor-ligand interactions to intra-cellular signal transduction pathways. Using this approach, we compared inflammatory versus genetic models of colorectal cancer and identified cancer-associated-fibroblasts as drivers of a partial epithelial-to-mesenchymal transition in tumor cells via MAPK1 and MAPK14 signaling. Overall, the methods developed in this thesis provide a foundational computational framework for constructing "multi-scale" models of communication networks in multi-cellular tissues.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123063</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cancer systems biology : functional insights and therapeutic strategies for medulloblastoma from omic data integration</title>
<link>https://hdl.handle.net/1721.1/123062</link>
<description>Cancer systems biology : functional insights and therapeutic strategies for medulloblastoma from omic data integration
Ehrenberger, Tobias.
Medulloblastoma (MB) is a chiefly pediatric cancer of the cerebellum that has been studied extensively using genomic, epigenomic, and transcriptomic data. It comprises at least four molecularly distinct subgroups: WNT, SHH, Group 3, and Group 4. Despite the detailed characterization of MB, many disease-driving events remain to be elucidated and therapeutic targets to be nominated. In this thesis, we describe three studies that contribute to a better understanding of this devastating disease: First, we present a study that aims to fully characterize the genomic landscape in the largest medulloblastoma cohort to date, using 491 sequenced MB tumors and 1,256 epigenetically analyzed cases. This work describes subgroup-specific driver alterations including previously unappreciated actionable targets; and, based on epigenetic data, identifies further heterogeneity within Group 3 and Group 4 tumors. Second, we focus on the proteomes and phospho-proteomes of 45 medulloblastoma samples.; We identified distinct pathways associated with two subsets of SHH tumors that showed robustly distinct proteomes, but similar transcriptomes, and found post-translational modifications of MYC that are associated with poor outcomes in Group 3 tumors. We also found kinases associated with subtypes and showed that inhibiting PRKDC sensitizes MYC-driven cells to radiation. This study shows that proteomics enables a more comprehensive, functional readout, providing a foundation for future therapeutic strategies. Third, we characterize the metabolomic space of MB on largely the same 45 tumors as used in the proteome-focused study. Here, we present preliminary insights derived from integrative network and other analyses.
We find that MB consensus subgroups are preserved in metabolic space, and that certain classes of metabolites are elevated in MYC-activated MB.; We also show that, similar to other cancers, a previously described gain-of-function mutation in IDH1 may cause elevated 2-hydroxyglutarate levels in MB. The work described in this thesis significantly enhances previous knowledge of medulloblastoma and its subgroups, and provides insights that may aid in the development of medulloblastoma therapies in the near future.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 151-167).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123062</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mining the human microbiome for clinical insight</title>
<link>https://hdl.handle.net/1721.1/123061</link>
<description>Mining the human microbiome for clinical insight
Duvallet, Claire Marie Noëlle.
The human microbiome is essential for health and has been implicated in many diseases. DNA sequencing has enabled the detailed characterization of these human-associated microbial communities, leading to a rapid expansion in studies investigating the human microbiome. In this thesis, I describe multiple projects which overcome various data analysis challenges to extract useful clinical insights from microbiome data. In the first project, I present an analysis of lung, stomach, and oropharyngeal microbiomes. I leverage data collected from multiple sites per patient to identify aspiration-associated changes in the relationships between these communities, discovering new properties of the aerodigestive microbiome and suggesting new approaches for treatment. In the second project, I perform a meta-analysis of case-control gut microbiome datasets with standard data processing and analysis methods. I find consistent patterns characterizing disease-associated microbiome changes and a set of shared associations which could inform clinical treatment and therapeutic development approaches for different microbiome-mediated diseases. Enabled by this work, in the third project I contribute to the development of a method to correct for batch effects in case-control microbiome studies. In the fourth project, I describe a framework for rational donor selection in fecal microbiota transplant clinical trials in which knowledge derived from clinical and basic science research is used to inform which donor is selected for fecal transplants, increasing the likelihood of successful trials.
Finally, I present preliminary results analyzing the microbiome and metabolome of residential sewage as a novel platform for community-level public health surveillance. Together, these projects demonstrate a variety of approaches to mine the human microbiome for clinically relevant insights and suggest multiple avenues forward for translating findings from microbiome data analyses into clinical and public health impact.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123061</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery and characterization of a small molecule that modulates c-Myc mediated transcription via max homodimer stabilization</title>
<link>https://hdl.handle.net/1721.1/123060</link>
<description>Discovery and characterization of a small molecule that modulates c-Myc mediated transcription via max homodimer stabilization
Chen, Andrew, Ph.D., Massachusetts Institute of Technology.
The transcription factor Myc is a basic helix-loop-helix leucine zipper (bHLHLZ) protein with crucial roles in regulating normal cellular processes, but its transcriptional activity is deregulated in a majority of human cancers. Myc transcriptional activity is dependent on dimerization with its obligate partner Max, another bHLHLZ transcription factor. Max also forms homodimers as well as heterodimers with other proteins including the Mxd family of proteins, creating a dynamic network of protein-protein interactions to regulate transcriptional programs. Despite the significance of this network, the arsenal of chemical probes to interrogate these proteins in cancer biology remains limited. Here, we utilized small molecule microarrays and luciferase-based reporter assays to identify compounds that bind Max and modulate Myc transcriptional activity. We discovered the small molecule KI-MS2-008, which stabilizes the Max homodimer while reducing Myc protein and Myc-regulated transcript levels. KI-MS2-008 also decreases viable cancer cell growth in a Myc-dependent manner and suppresses tumor growth in mouse models of Myc-driven cancers. In a cancer cell line model treated with KI-MS2-008, the equilibrium of protein-protein interactions shifts toward a transcriptionally repressed state over time by recruiting Mxd4 and other repressive machinery to Max. This study establishes that perturbing Max dimerization with small molecules is a tractable approach to targeting Myc activity in cancer.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 190-200).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123060</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered synthetic translational control for next generation mRNA gene therapies</title>
<link>https://hdl.handle.net/1721.1/123059</link>
<description>Engineered synthetic translational control for next generation mRNA gene therapies
Becraft, Jacob Robert.
Synthetic mRNA is an emerging therapeutic modality for gene and cell therapy. Unlike their synthetic DNA counterparts, synthetic mRNA has an increased safety profile due to its transient gene expression and ability to express outside of the nucleus. Furthermore, it can be more easily delivered to cells via entry only into the cytoplasm. While synthetic biology as a field has existed for over two decades, the main area of research and development has focused on DNA interfaces, building on the mechanisms of transcription factors with small molecule interfaces to create multi-input/multi-output genetic circuitry. Until recently, the field had not developed sufficient synthetic circuit control devices at the translational level due to 1) lack of perceived need and 2) deficiency of available natural systems for adaptation. In this thesis, I present the construction of a diverse synthetic biology toolbox for RNA-only synthetic biology. The creation of new synthetic biology frameworks can be broken down into three modules: Build, Control, and Apply. In the Build phase, I demonstrate how the current toolbox of mRNA binding and recognition proteins can be utilized to form diverse and orthogonal gene regulatory networks. In Control, I construct regulatory networks capable of responding to exogenous signals and utilize advanced circuit design to motivate dynamic control for novel behaviors. When I transition to Apply, I illustrate that these next-generation circuits can be layered into biologically active modalities that are therapeutically relevant. Taken as a whole, the work presented here represents a merging of the fields of synthetic biology and mRNA therapeutics, and serves as a foundational proof-of-principle for future efforts to expand synthetic biology across novel modalities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 139-148).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123059</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>RNA sensing and programming platforms for mammalian synthetic Biology</title>
<link>https://hdl.handle.net/1721.1/123058</link>
<description>RNA sensing and programming platforms for mammalian synthetic Biology
DiAndreth, Breanna Elizabeth.
The field of synthetic biology aims to control cellular behavior using programmable gene circuits. Generally, these gene circuits sense molecular biomarkers, process these inputs, and execute a desired calculated response. This is especially relevant for gene and cell therapies, where integrating multiple disease-related inputs and/or sophisticated control could lead to safer and more effective approaches. While mammalian synthetic biology has made great progress, few gene circuit-based therapies have entered the clinic. Regulatory issues aside, this lag may be due to several technical impediments. First, the computing part of circuits is often accomplished via transcriptional regulation, which presents challenges as we move toward the clinic. Second, the field relies on a limited set of sensors; the detection of other types of disease biomarkers will help robustly identify cell state. Finally, the design cycle currently used to develop gene circuits is laborious and slow, which is not suitable for clinical development, especially applications in personalized medicine. In this thesis I describe how I address these three limitations. I develop a new posttranscriptional regulation platform based on RNA cleavage that I term "PERSIST" (Programmable Endonucleolytic RNA Scission-Induced Stability Tuning). CRISPR-specific endonucleases are adapted as RNA-level regulators for the platform and we demonstrate several genetic devices including cascades, feedback, logic functions and a bistable switch. I explore sensor designs for relevant biomolecules including mRNAs, miRNAs and proteins via the PERSIST and other platforms.
Finally, I present a "poly-transfection" method, associated advanced data analysis pipelines, and computational models that make circuit engineering faster and more predictive. Taken together, the expanded RNA toolkit that the PERSIST platform offers, as well as advancements in sensing and circuit design, will enable the more straightforward creation of robust gene circuits for gene and cell therapies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-173).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123058</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emulsification, separation, and manipulation of oil-water systems using condensation, electrocoalescence, and electrowetting</title>
<link>https://hdl.handle.net/1721.1/123056</link>
<description>Emulsification, separation, and manipulation of oil-water systems using condensation, electrocoalescence, and electrowetting
Guha, Ingrid Fuller.
This thesis explores the emulsification, separation, and manipulation of oil/water mixtures using a range of chemical, mechanical, and electrical techniques. Simply explained, this thesis reports new methods to emulsify oil and water using condensation, separate oil and water using low-voltage electrocoalescence, and manipulate oil and water using ultra low-voltage electrowetting. The emulsification method relies on condensation of one liquid phase onto another. As nanoscale droplets of water condense onto the surface of oil, they are submerged and stabilized in the oil by a surfactant in the oil phase. The concentration of surfactant and time of condensation determine the size and stability of the resulting emulsions. The separation method presented in this thesis redesigns the configuration of the standard electrocoalescence setup and the dielectric materials used. The design employs a surface configuration in place of a bulk configuration for electrocoalescence. Additionally, a high-K dielectric (hafnium oxide) is used in place of a hydrophobic low-K dielectric (e.g. a fluoropolymer). A thermodynamically stable nanoscale oil film, a lipid bilayer, forms on the surface of the hafnium oxide, effectively rendering the surface hydrophobic by buffering water drops from the surface and preventing pinning. This surface configuration coupled with the use of a high-K dielectric drastically reduces the voltage required to induce electrocoalescence. The method of manipulating oil and water presented in this thesis is the electrowetting of a water drop on bare silicon in an oil environment containing zwitterions. The zwitterions form a nanoscale lipid bilayer between the water drop and the silicon surface.
This electrowetting system contains no deposited solid dielectrics, resulting in ultra low-voltage actuation of the electrowetting effect. This thesis presents the theory, experimental results, and discussion of those results for each method of controlling water/oil mixtures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 53-60).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123056</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methodologies for the mechanistic study of compartment-specific hydrogen peroxide perturbations</title>
<link>https://hdl.handle.net/1721.1/122906</link>
<description>Methodologies for the mechanistic study of compartment-specific hydrogen peroxide perturbations
Stein, Kassi Taylor.
Reactive oxygen species (ROS) are an interesting class of molecules because of their ability to promote contradictory phenotypes depending on their intracellular concentration. Most significantly, their elevation has been linked with several pathologies, including cancer. The selective cancer killing hypothesis hinges on the idea that certain cancers will be more susceptible to toxicity via a redox-based mechanism than their surrounding healthy counterparts, and provides an attractive target for those studying redox biology. In order to effectively leverage this strategy, quantitative knowledge of intracellular ROS, specifically hydrogen peroxide (H₂O₂) and its associated pathway proteins, is necessary. This thesis developed tools and methodology for quantitative and mechanistic studies of H₂O₂ in the mitochondria. Both experimental and computational tools were developed and implemented to analyze mitochondrial H₂O₂, peroxiredoxin (Prx) proteins, and primary patient tumor cells. It was established that the toxicity of H₂O₂ perturbations localized to the mitochondrial matrix is dose- and time-dependent. A computational model of the mitochondrial H₂O₂ reaction network predicted that basal steady-state mitochondrial H₂O₂ concentrations are in the low nM range, and that Prx3 is responsible for H₂O₂ dynamics. Preliminary data in primary patient tumor cells of paraganglioma and related tumors with succinate dehydrogenase b (SDHB) mutations suggested these cancers are potentially sensitive to the investigational chemotherapeutic piperlongumine. Finally, analysis of the Prx family using statistical coupling analysis suggested evolutionarily conserved clusters of residues at the C-terminus of the Prx2 and Prx1 protein families that may point to a structure-function mechanism for their ability to complex with other proteins.
All in all, these tools can be used in other cancer cell systems to better understand the quantitative H₂O₂ signaling mechanisms and possible chemotherapeutic targets within those pathways.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122906</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer assistance in organic synthesis planning and execution</title>
<link>https://hdl.handle.net/1721.1/122903</link>
<description>Computer assistance in organic synthesis planning and execution
Coley, Connor Wilson.
The identification and synthesis of molecules that exhibit a desired function is an essential part of addressing contemporary problems in science and technology. Small molecules are the predominant solution to challenges in the development of medicines, chemical probes, specialty polymers, and organocatalysts, among others. The typical discovery paradigm is an iterative process of designing candidate compounds, synthesizing those compounds, and testing their performance. The rate at which this process yields successful compounds can be limited by bottlenecks and mispredictions at all three stages and is plagued by inefficiencies, not the least of which is the manual nature of synthesis planning and execution. This thesis describes techniques to streamline the synthesis of small molecules in this context of pharmaceutical discovery from two perspectives: one experimental and the other using techniques in data science and machine learning. Part I focuses on the time-, material-, and experimental-efficiency of data collection. It describes the development of an automated microfluidic reactor platform for studying physical and chemical processes at the micromole scale. Synthesis and purification of small molecule compound libraries are performed without human intervention at a scale suitable for a medicinal chemistry setting. Integration of online analytics enables efficient, closed-loop self-optimization using an optimal design of experiments algorithm to identify reaction conditions suitable for production-scale flow synthesis. To complement the generation of new data through automated experimentation, Part II is driven by the goal of applying existing reaction data to problems in synthesis and synthesis design.
This includes the development of data-driven methodologies for the design and validation of small molecule synthetic routes. An enabling factor in ensuring the feasibility of computationally-proposed reactions is the use of models to predict organic reaction outcomes in silico (also useful for impurity prediction) that leverage the flexibility in pattern recognition afforded by neural networks to understand chemical reactivity in the same way we might by reading the literature. Several predictive models are integrated into an overall framework for computer-aided synthesis planning that can rapidly propose routes to new molecules with the complexity of modern active pharmaceutical ingredients. As a final demonstration, machine learning assisted synthesis planning is brought together with laboratory automation to illustrate an accelerated approach to target-oriented flow synthesis. This is a proof-of-concept for how chemical development might one day occur with less human intervention.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 409-432).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122903</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond gates, guards and guns : the systems-theoretic framework for security at nuclear facilities</title>
<link>https://hdl.handle.net/1721.1/122900</link>
<description>Beyond gates, guards and guns : the systems-theoretic framework for security at nuclear facilities
Williams, Adam D. (Adam David), Ph.D., Massachusetts Institute of Technology.
Current approaches to nuclear security can produce elegantly designed physical protection systems (PPS) that may be limited by untenable assumptions, or well-stated, albeit vague and imprecise, descriptions of how to improve nuclear security culture itself. According to one nuclear security culture expert: "While the International Atomic Energy Agency has released methodologies on evaluating vulnerabilities and physical protection, it has not yet introduced guidelines on assessing the human factor in detection, delay, and response" (Khripunov, 2014, pp. 39-40; emphasis added). This dissertation argues that the missing link lies in understanding how organizational influences affect the completion of tasks required for PPS to meet expected nuclear security performance goals. In this dissertation, I propose the System-Theoretic Framework for Security (the STFS) for evaluating system-level interactions between PPS and human/organizational behaviors to describe overall security performance. Invoking key tenets of systems theory and organization science, the STFS uses the concept of "security task completion" to explain how the interactions between PPS and human/organizational behaviors result in security performance at nuclear facilities. Yet, empirical data are needed to explore the efficacy of this approach for incorporating organizational influences into security performance. As such, my research objectives were to: 1. Improve the understanding of how PPS and human/organizational behaviors interact to produce security performance at nuclear facilities, 2. Identify a manageable (but not exhaustive) set of organizational influences on this interaction, and 3. Develop a framework for assessing these interactions and organizational influences on security performance at nuclear facilities.
I used a mixed methods research design to develop the STFS. My first study consisted of 18 narrative interviews across different areas of nuclear security expertise, and my second study examined the case of the 2012 security incident at the Y-12 National Security Complex. These two studies provided evidence for the security task completion construct (as a new causal mechanism), behavioral performance requirements (assumptions on which the causal mechanism is based), and a set of organizational influences and quality indicators related to nuclear security performance. While this framework does not address every aspect of achieving high security performance, the STFS offers a structured thought process and direction for further development regarding how technologies and organizations interact to affect individual behaviors that contribute to security at nuclear facilities.
Thesis: Ph. D. Engineering Systems: Human-Systems Engineering, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 146-152).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122900</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local drug delivery for relaxation of the ureter</title>
<link>https://hdl.handle.net/1721.1/122892</link>
<description>Local drug delivery for relaxation of the ureter
Lee, Christopher Xiang.
Human ureteral contraction and relaxation are implicated in common physiologic situations. The success or failure of kidney stone expulsion and the body's pain response to ureteral stents placed during urologic surgery are affected significantly by ureteral contractions. The lifetime prevalence of kidney stones in the US alone is roughly 10% of the entire population, and annual incidence and healthcare costs exceed 2.5 million cases and $5 billion, respectively. Worldwide, an additional one million ureteral stents are placed annually. The ability to decrease ureteral contractions has been shown to improve spontaneous stone passage rates and decrease ureteral stent pain. Several candidate oral medications have been tested, but study outcomes are equivocal. Additionally, no oral therapy has been shown to be associated with substantial reproducible positive outcomes. There are currently no topical therapies available for ureteral relaxation. The hypothesis of this thesis is that greater ureteral relaxation can be achieved with local administration of vasodilators compared with current standard oral therapy. This thesis reports the discovery and development of drugs and drug formulations that can be locally delivered to the ureter via an office-based approach. Representative drugs that can induce smooth muscle relaxation in the ureter were screened in vitro and potent hits were discovered. Nifedipine and rho-kinase inhibitors were found to be highly effective for relaxing ureteral smooth muscle cells at a micromolar dosage. Substantial drug synergy (increased relaxation) was also discovered when these two drugs were used in combination. The effects of these drugs were verified ex vivo. These drugs were ultimately compounded into a clinically deployable formulation for local delivery to validate efficacy and safety in vivo using pig studies. Topical administration of this novel formulation reduced ureteral contraction amplitude and frequency by 90% and 50%, respectively, compared with placebo.
Standard oral vasodilator therapy currently used for ureteral relaxation reduced contraction amplitude by 50% with minimal effect on frequency, compared with placebo. This thesis ultimately shows that, using an office-based approach, high topical drug doses can be delivered to induce greater ureteral relaxation. These results have the potential to create a new class of therapeutics, topical ureteral relaxants, for common ureter- and urology-related conditions.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2018; Cataloged from PDF version of thesis. "September 2018."; Includes bibliographical references (pages 104-111).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122892</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of ride-sharing on mobility trends and vehicle stock</title>
<link>https://hdl.handle.net/1721.1/122891</link>
<description>Impact of ride-sharing on mobility trends and vehicle stock
Deshmukh, Suhrid Avinash.
The North American transportation industry is on the verge of a revolutionary change. With the advent of car-sharing services and ride-sharing companies, the transportation industry is experiencing a fundamental change in the way people choose to travel. This thesis looks at the impact of these disruptive changes in transportation on the way people choose to travel, the car-based vehicle miles travelled (VMT), and the national vehicle stock. In particular, this work examines how people make a travel decision when embarking on a particular trip and how that translates to an effect on the national-level vehicle stock. When presented with a particular mode of travel, the most relevant aspects associated with that mode were explored and evaluated. Each mode was evaluated based on the cost, time, and comfort associated with the mode. Multi-attribute utility theory was used to study and evaluate how people make decisions about mode choice when choosing a particular mode for a trip. This work looks at the impact of ride-sharing on modal changes and shifts that result in less or more use of personal car travel. Apart from the travel behavior associated with modes, this work also estimates the impact of ride-sharing on total vehicle usage in urban areas. Once the modal share of different modes was estimated, an overall passenger trip demand was generated at the national level. This trip demand was broken down into car-based trips and non-car-based trips from the modal share result. Combined with occupancy assumptions, this passenger trip demand was converted into a car-based VMT estimate. Finally, combining the car-based vehicle miles travelled with the average vehicle utilization, the national vehicle stock was calculated. In order to measure the impact of these futuristic technologies on modal share, VMT, and the national vehicle stock, scenario analysis was the method chosen.
In order to have a reference case, a base case scenario was designed assuming the world remains as it is today and nothing changes. A series of progressive scenarios related to ride-sharing were then tested to gauge the impact of ride-sharing. It was found that ride-sharing has the most significant impact in urban areas for short trips. The national-level vehicle stock in the year 2050 declined by approximately 1.0% in the improved ride-sharing scenario. Higher electrification of vehicles along with improvements in ride-sharing did not decrease the stock much further, compared with the improved ride-sharing scenario alone. In an aggressive scenario, with improved ride-sharing, improved transit, and anti-car policies, the national-level stock value in year 2050 declined by approximately 6% compared with the base case scenario. Finally, in the scenario with improved ride-sharing and higher autonomy, the national-level VMT increased by 1.3%, but the vehicle stock declined by 9.9%. The results from this work can be used to inform decisions regarding changing travel behaviors or to explore questions related to higher-level policy analysis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-141).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122891</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial incremental problems</title>
<link>https://hdl.handle.net/1721.1/122890</link>
<description>Combinatorial incremental problems
Unda Surawski, Francisco T.
We study the class of incremental combinatorial optimization problems, where solutions are evaluated as they are built, as opposed to only measuring the performance of the final solution. Even though many of these problems have been studied, it has usually been in isolation, so the first objective of this document is to present them under the same framework. We present the incremental analog of several classic combinatorial problems, and present efficient algorithms to find approximate solutions to some of these problems, either improving on, or giving the first known, approximation guarantees. We present unifying techniques that work for general classes of incremental optimization problems, using fundamental properties of the underlying problem, such as monotonicity or convexity, and relying on algorithms for the non-incremental version of the problem as subroutines. In Chapter 2 we give an e-approximation algorithm for general incremental minimization problems, improving the best approximation guarantee for the incremental version of the shortest path problem. In Chapter 3 we show constant approximation algorithms for several subclasses of incremental maximization problems, including an e/(2e-1)-approximation for the maximum weight matching problem and an e/(e+1)-approximation for submodular valuations. In Chapter 4 we introduce a discrete-concavity property that allows us to give constant approximation guarantees for several problems, including an asymptotic 0.85-approximation for incremental maximum flow with unit capacities, and a 0.9-approximation for incremental maximum cardinality matching, incremental maximum stable set in claw-free graphs, and incremental maximum size common independent set of two matroids.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 95-98).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122890</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced methods for droplet capture in water recovery systems and agricultural sprays</title>
<link>https://hdl.handle.net/1721.1/122889</link>
<description>Advanced methods for droplet capture in water recovery systems and agricultural sprays
Damak, Maher, Ph.D., Massachusetts Institute of Technology.
Water scarcity is one of the important challenges of our century. In this thesis, we investigate advanced methods to mitigate it in two ways: water recovery from fog and reducing chemical runoff in agriculture. In the first part, we review current fog collector designs and identify droplet deviation around the wires of mesh collectors as the main bottleneck in water collection. We introduce an electrostatic force to overcome aerodynamic drag around the collector, using space-charge injection into the fog droplets and an electric field that drives them to the collector. We quantitatively model the collection and show that it scales from a one-wire system to a mesh. We demonstrate increases of up to 50X in collection efficiency, and show that various geometries and designs can be used. In particular, we propose using this method to capture water from industrial condensation plumes.; We model these plumes by taking into account the mixing dynamics between vapor and air and the heat transfer dynamics for droplet growth. Based on this model, we provide design guidelines for effective plume collectors. In the second part, we aim to enhance the retention of droplets on hydrophobic surfaces to reduce bouncing losses when pesticides are sprayed. We review current methods to retain impacting droplets and identify their limitations. We introduce simultaneous spraying of oppositely charged polyelectrolytes as a new method to enhance retention. We show that in a drop-on-drop impact with polyelectrolytes, a precipitation reaction occurs and surface defects are formed in-situ. These defects pin the retracting droplet and prevent it from bouncing. We quantify the energy dissipation by pinning and make a design map for sticking sprays.
We show a 10X increase in retention and coverage of various superhydrophobic surfaces.; To refine our model, we then systematically study drop-on-drop impacts on non-wetting surfaces and model the maximal expansion diameter and retraction rate based on the interplay of inertia, viscosity, and capillarity. We finally study the case where the sprayed liquid is an oil-in-water emulsion. We show that there is a bouncing-sticking-bouncing transition in emulsion impacts as the Weber number increases. We demonstrate that, for low enough viscosities, oil impregnates the surface under the droplet during impact and generates a suction force that prevents bouncing. We then provide the optimal parameters for which the retention can be enhanced, to guide the preparation of effective agricultural sprays.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-181).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122889</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Airborne monitoring system for in-season agriculture : operational considerations and image processing for wide-area sensing</title>
<link>https://hdl.handle.net/1721.1/122885</link>
<description>Airborne monitoring system for in-season agriculture : operational considerations and image processing for wide-area sensing
Jeunnette, Mark N. (Mark Nathaniel), 1979-
Remote sensing, in particular multispectral imagery, can measure crop health and detect in-season disturbances such as pests and diseases before they are visible to the naked eye, but it is inaccessible to small-plot farmers, especially in developing countries. So-called eExtension services provide up-to-date, reliable information for small-plot farmers, but struggle to collect the plot-specific crop health information on which to base personalized recommendations. This thesis addresses this issue using novel ideas in remote sensing system understanding, image processing, and geospatial workflow to develop the Airborne Monitoring System for In-Season Agriculture (AMSISA) and make remotely sensed crop health data accessible and useful for small-plot farmers. A simulation of platform performance characteristics shows that manned aircraft are the better aerial remote sensing platform, given current performance and regulatory realities. The time-series aerial remote sensing (TSARS) approach allows a reduction in spatial resolution to dramatically reduce survey costs and enable frequent updates for better monitoring performance. This allowance for plot-resolution (but not finer) data and minimal cost per hectare over a large area leads to the development of a model to optimize the data collected per plot based on survey altitude, heading, and camera properties. Imagery collected using a custom camera setup aboard a manned aircraft in Maharashtra, India is used to test and verify the models. Surveying at a spatial resolution near the size of farm plots on the ground requires precise registration of remotely sensed images to ensure accurate crop reflectance measurements. Current and novel multi-modal image registration techniques are tested and found to be inadequate for this application. Instead, a technique using known fiducials is presented to achieve the required registration precision.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-127).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122885</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and ab initio studies of oxide interfaces for photoelectrochemistry</title>
<link>https://hdl.handle.net/1721.1/122884</link>
<description>Experimental and ab initio studies of oxide interfaces for photoelectrochemistry
May, Kevin J. (Kevin Joseph)
The threats posed by anthropogenic climate change have spurred a research thrust towards renewable, carbon-free sources of energy. Photoelectrochemical (PEC) approaches are particularly attractive, combining energy capture from the sun with storage in the form of hydrogen or hydrocarbon fuel. However, there are significant materials challenges to be overcome, as well as a necessity for improved understanding of the material interfaces present in such systems. Transition metal oxides are popular materials for research as photo-electrodes but typically have poor electronic properties compared to conventional semiconductors. However, they are stable in aqueous and oxidizing environments and may present a wide variety of exotic physical behaviors, potentially opening new doors for device design. In this thesis, I explore several aspects of oxide interfaces relevant to PEC devices.; PEC measurements of ultra-thin films of LaFeO3 grown on Nb:SrTiO3 reveal a thickness-dependent response via the depletion regions that form at both the film-substrate and film-electrolyte interfaces. Depending on the applied bias, reduction or oxidation photocurrent is observed that originates from the film-electrolyte or film-substrate interface, respectively. These qualitative behaviors are then explained with a band model. I then use the ACBN0 functional for self-consistent Hubbard U corrections to density functional theory (DFT). First, improvement in treating bulk perovskite oxide electronic structure is demonstrated, followed by a study on a series of thin film slab structures that captures nanoscale changes in formal charge and hybridization (via the change in U) at multiple locations within the film, simultaneously.
The trends in oxygen adsorption energy and band alignment are explained in terms of film thickness and electronic structure.; Finally, a first-principles descriptor for oxygen adsorption energy is developed from high-throughput DFT calculations and analysis of the density of states using tight binding and the moments theorem. This descriptor methodology may be used in high-throughput screening for catalyst materials, where bulk calculations may be used to predict surface properties without resorting to more demanding slab calculations. The combination of high-throughput screening of materials with the engineering possibilities afforded by substrate and active layer thickness variation provides a promising path forward to successful oxide photoelectrochemical devices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122884</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and dynamic modeling of an unmanned underwater vehicle for submerged surface inspection exploiting hydrodynamic ground effect</title>
<link>https://hdl.handle.net/1721.1/122882</link>
<description>Design and dynamic modeling of an unmanned underwater vehicle for submerged surface inspection exploiting hydrodynamic ground effect
Bhattacharyya, Sampriti.
Anticipated growth of sub-sea technologies for security, infrastructure inspection, and exploration motivates a deeper understanding of underwater navigation in proximity to a submerged target surface. Common examples range from water tanks in nuclear reactors and submerged oil rig infrastructure to ship hulls with hidden compartments and threats. We propose EVIE (Ellipsoidal Vehicle for Inspection and Exploration): a water-jet-propelled, football-sized ellipsoidal Unmanned Underwater Vehicle (UUV) with a flattened base to house the sensors needed for surface inspections. The UUV is designed - both in terms of its shape and propulsion - for gliding on submerged surfaces for volumetric inspection, in addition to free-stream motion for visual inspections. This thesis research explores the ground effect hydrodynamics due to the motion of a body near a surface.; We demonstrate the formation of a thin fluid bed layer between the surfaces which enables smooth motion even on rough surfaces. The proposed robot eliminates the need for wheels or suction. Use of ground effect fluid dynamics is common in aerial and land vehicles but is almost unexplored for underwater applications. We focus on exploiting this phenomenon in real-world applications, developing a prototype model to maintain precise distances with reduced actuator control. We explore both parasitic (induced by lateral motion) and explicitly induced (adding an impinging bottom jet) hydrodynamic effects. We find the force is not only nonlinear but also non-monotonic, with multiple equilibria. As the body approaches the surface it first experiences repulsion (enhanced thrust) due to an up-wash effect - similar to vertical take-off and landing (VTOL) vehicles, which can hover at reduced thrust. This transitions to a suction force at small distances due to a Venturi effect.; At still smaller distances there is again a repulsion due to choking flow between the body and the surface.
Given the complexity of the force, and considering that the hydrodynamic drag is nonlinear as well, traditional linearization fails to capture the system behavior and is at best constrained to a small region around the equilibrium. Instead, we use a higher dimensional, data driven approach for modeling. The underlying hypothesis is that dynamical systems behave linearly when recast in a suitable higher dimensional space. State variables are augmented by adding auxiliary variables that sufficiently inform the nonlinear dynamics of the system. We demonstrate a novel and powerful method of individually estimating each of the state-dependent nonlinearities by integrating a state estimator into the augmented system. The estimator only uses measured, original states to estimate the nonlinear forces.; The method is extremely robust: even though the approximated state transition model has significant inaccuracies, we prove guaranteed convergence of the unobserved states. This doctoral thesis encompasses three unique contributions: design and development of a prototype micro UUV platform for testing surface inspection methods; invention and application of a unique underwater phenomenon to the UUV; and establishment of a novel mathematical approach for robust estimation of complex nonlinear elements using a linearized, high dimensional, data driven model. The research presented opens new opportunities and provides a new perspective for the design of next generation subsea vehicles and technology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 224-233).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122882</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infrared-to-visible upconversion in hybrid thin films of colloidal nanocrystals and organic molecules</title>
<link>https://hdl.handle.net/1721.1/122872</link>
<description>Infrared-to-visible upconversion in hybrid thin films of colloidal nanocrystals and organic molecules
Wu, Mengfei, Ph.D., Massachusetts Institute of Technology.
Photon upconversion is a process where two or more low-energy photons are converted into a single higher-energy photon. Upconversion that turns infrared photons into visible ones is particularly useful, having potential applications in photovoltaics, infrared sensing, and biological imaging. In this thesis, I present a solid-state thin-film device that converts infrared photons with wavelength up to 1.1 μm into visible wavelengths around λ = 610 nm. The device consists of a monolayer of lead sulfide colloidal nanocrystals (NCs) and a thin film of rubrene mixed with emissive DBP molecules. Upconversion is realized via triplet-triplet annihilation (TTA) in rubrene sensitized by the NCs. We demonstrate that compared to the previous all-molecular upconverting systems, the use of inorganic NCs helps extend the excitation wavelength into the infrared and offers simple wavelength tunability.; However, a monolayer of NCs has low infrared absorption, severely limiting the upconversion efficiency and necessitating a high excitation intensity. Here, by adding a silver back reflector with an optical spacer to the device structure, we achieve a five-fold increase in the NC absorption due to optical interference effects and an eleven-fold enhancement in the up-converted output. To extend the idea, we further introduce a distributed Bragg reflector at the front of the device. A resonant microcavity is formed with the NCs placed at the peak of a drastically enhanced optical field. The upconversion efficiency is improved by another order of magnitude, with threshold excitation intensity falling to 13 mW/cm², which is below the available solar flux. At resonance, the device converts (0.06±0.01)% of incident photons at λ = 980 nm into emitted higher-energy photons.
In addition, we improve the upconversion efficiency by shortening the surface ligands on NCs.; With faster triplet transfer, the upconverting device attains higher intrinsic efficiency, converting (7±1)% of the absorbed photons at λ = 808 nm into higher-energy emissive excitons in rubrene. This thesis demonstrates the feasibility of NC-sensitized infrared-to-visible upconversion in solid thin films under low excitation intensities comparable to the solar flux, and paves the way toward the practical utilization of TTA-based upconversion in photovoltaics, imaging, and sensing technologies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 152-163).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122872</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing two issues in machine learning : interpretability and dataset shift</title>
<link>https://hdl.handle.net/1721.1/122870</link>
<description>Addressing two issues in machine learning : interpretability and dataset shift
Wang, Fulton.
In this thesis, I create solutions to two problems. In the first, I address the problem that many machine learning models are not interpretable, by creating a new form of classifier, called the Falling Rule List. This is a decision list classifier where the predicted probabilities are decreasing down the list. Experiments show that the gain in interpretability need not be accompanied by a large sacrifice in accuracy on real world datasets. I then briefly discuss possible extensions that allow one to directly optimize rank statistics over rule lists, and handle ordinal data. In the second, I address a shortcoming of a popular approach to handling covariate shift, in which the training distribution and that for which predictions need to be made have different covariate distributions. In particular, the existing importance weighting approach to handling covariate shift suffers from high variance if the two covariate distributions are very different. I develop a dimension reduction procedure that reduces this variance, at the expense of increased bias. Experiments show that this tradeoff can be worthwhile in some situations.
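The importance weighting approach discussed above can be sketched in a few lines; this is a standard formulation of the technique, not code from the thesis, and the Gaussian densities and function names are illustrative assumptions:

```python
# Minimal sketch of importance weighting under covariate shift: each
# training point x is reweighted by w(x) = p_test(x) / p_train(x), so
# the weighted training loss estimates the test-distribution loss.
# Densities here are 1-D Gaussians chosen purely for illustration.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def weighted_mean_loss(xs, losses, mu_train, mu_test, sigma=1.0):
    weights = [gaussian_pdf(x, mu_test, sigma) / gaussian_pdf(x, mu_train, sigma)
               for x in xs]
    # Failure mode noted in the abstract: where p_train(x) is tiny the
    # weights blow up, producing a high-variance estimate.
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)

# With identical train/test distributions the weights are uniform:
print(weighted_mean_loss([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], 0.0, 0.0))  # 2.0
```

When the test mean shifts toward the high-loss region, the weighted estimate rises accordingly; the variance blow-up when the two distributions barely overlap is exactly what the dimension-reduction procedure in the thesis trades against bias.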
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 71-77).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122870</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling technologies for functional optical coherence tomography in retinal disease</title>
<link>https://hdl.handle.net/1721.1/122869</link>
<description>Enabling technologies for functional optical coherence tomography in retinal disease
Lee, ByungKun.
Optical coherence tomography (OCT) imaging of retinal function has become increasingly important because functional alterations may reflect earlier changes in ocular diseases than structural alterations. Since diagnostic markers in structural OCT using commercial instruments have already undergone extensive investigation, novel markers of retinal function implying disease onset or progression are important for the further development of OCT diagnostics of ocular disease. Retinal OCT applications such as blood flow imaging and stimulus-response imaging often require high imaging speed, since repeated scans of the same position or densely sampled scans are used. Our group developed a high-speed swept-source/Fourier-domain OCT (SS-OCT) prototype to investigate retinal hemodynamics and vascular structures in major ocular diseases such as age-related macular degeneration (AMD), glaucoma, and diabetic retinopathy with OCT angiography or Doppler OCT.; Our group also used an ultrahigh-resolution (UHR) spectral/Fourier-domain OCT (SD-OCT) instrument to investigate dark adaptation of the photoreceptors after a photobleach by measuring the changes in the distances between densely packed, bright bands in the outer retina. Dark adaptation can be an indicator of photoreceptor health, and it is postulated that compromised dark adaptation is a manifestation of early AMD. The current thesis reports two technical advancements in OCT imaging of retinal function: an image processing algorithm for total retinal blood flow (TRBF) calculation with en face Doppler OCT using the high-speed SS-OCT prototype, and the instrumentation of the newly deployed UHR SD-OCT prototype with effective depth range extension using reference arm length modulation (ReALM).
These techniques aim for operator-friendly, robust, and repeatable assessment of retinal blood flow or the photoreceptor layer to provide powerful tools for future investigations of retinal function.; The current thesis includes hardware and software approaches that overcome inherent limitations of OCT, such as signal penetration and depth range, as well as preliminary imaging results and pilot study findings.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122869</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and use of bench-stable precatalysts with Heck-type activation and progress towards the continuous-flow synthesis of atorvastatin</title>
<link>https://hdl.handle.net/1721.1/122858</link>
<description>Synthesis and use of bench-stable precatalysts with heck-type activation and progress towards the continuous-flow synthesis of atorvastatin
Weber, Jessica M.(Jessica Marie)
We introduce a new class of bench-stable N-heterocyclic carbene nickel precatalysts for homogeneous nickel catalysis. The nickel(II) complexes are readily activated to Ni⁰ in situ under mild conditions, via a proposed Heck-type mechanism. The precatalysts are able to facilitate carbonyl-ene, hydroalkenylation, and amination reactions. Herein, we report the synthesis and characterization of a new class of air- and moisture-stable triphenylphosphine nickel(II) precatalysts, which activate through a Heck-type mechanism. The activity of these precatalysts is demonstrated with a carbonyl-ene coupling reaction.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis. Page 393 blank.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122858</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enantioselective synthesis of pretomanid &amp; investigations of Raman spectroscopy for through-tube monitoring of reactions</title>
<link>https://hdl.handle.net/1721.1/122857</link>
<description>Enantioselective synthesis of pretomanid &amp; investigations of Raman spectroscopy for through-tube monitoring of reactions
Tran, Tho Huu.
Chapter 1: Enantioselective Synthesis of Pretomanid. Pretomanid is a potential cornerstone active pharmaceutical ingredient for the treatment of tuberculosis (TB). It is in several advanced clinical trials and can possibly treat all forms of TB. In this report, we discuss the development of a novel synthesis of pretomanid through a selective epoxide-opening cyclization and late-stage nitration. This route offers a new strategy to access the core of pretomanid and avoids the need for an explosive starting material. Chapter 2: Investigations of Raman Spectroscopy for Through-Tube Monitoring of Reactions. Continuous-flow chemistry has seen expansive growth and thus requires powerful new analytical techniques. The current commonly used analytics involve inline analysis at a single point, which can hinder analysis. In this report, we demonstrate the utility of the MarqMetrix TouchRaman probe for reaction monitoring through a continuous-flow reactor. The reactions studied were ring-closing metathesis, benzyne generation, and ketene generation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122857</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon nanotube-based chemical sensing</title>
<link>https://hdl.handle.net/1721.1/122856</link>
<description>Carbon nanotube-based chemical sensing
Schroeder, Vera,Ph.D.Massachusetts Institute of Technology.
In this thesis, we introduce approaches to carbon nanotube-based sensing for applications in environmental monitoring, disease diagnostics, and food analysis. In Chapter 1, we introduce carbon nanotube-based sensing. We describe parameters that give rise to the sensing capabilities of CNT-based sensors and discuss important performance parameters of carbon nanotube sensors. In Chapter 2, we demonstrate voltage-activated sensing of carbon monoxide using a sensor comprising iron porphyrin and functionalized single-walled carbon nanotubes (F-SWCNTs). Modulation of the gate voltage offers a predicted extra dimension for sensing. Specifically, the sensors show a significant increase in sensitivity toward CO when a negative gate voltage is applied. In Chapter 3, we describe the design of a sensor for the highly selective detection of acrylates using conditions for the aerobic oxidative Heck reaction. The sensors mirror the catalytic processes and selectively respond to electron-deficient alkenes by adapting a catalytic reaction system to modulate the doping levels in carbon nanotubes. In Chapter 4, we introduce sensor arrays consisting of imidazolium-based ionic liquids (ILs) with different substituents and counterions to provide selective responses for known biomarkers of infectious diseases of the lungs. In Chapter 5, we discuss a sensor array comprising a platform of 20 functionalized SWCNT sensing channels for the classification of cheese, liquor, and edible oil samples based on their odor. We classify unknown food samples using a k-nearest neighbors model and a random forest model trained on extracted features. This protocol allows us to accurately differentiate between five cheese and five liquor samples (91% and 78% accuracy, respectively), with only slightly lower accuracy (73%) for five edible oils.
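The k-nearest-neighbors classification step described in Chapter 5 can be sketched generically; the feature vectors and class labels below are invented for illustration and are not the thesis's data:

```python
# Illustrative k-nearest-neighbors classifier over sensor feature
# vectors, mirroring the array-classification protocol: an unknown
# sample takes the majority label of its k closest training samples.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns majority label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical two-channel sensor responses for two cheese classes:
train = [([0.1, 0.9], "cheddar"), ([0.2, 0.8], "cheddar"),
         ([0.9, 0.1], "gouda"),   ([0.8, 0.2], "gouda")]
print(knn_predict(train, [0.15, 0.85]))  # cheddar
```

In practice the features would be extracted from the 20-channel conductance traces, and the reported accuracies come from models such as this one and a random forest trained on those features.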
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-183).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122856</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metal-triazolate frameworks for the capture of water vapor and ammonia</title>
<link>https://hdl.handle.net/1721.1/122855</link>
<description>Metal-triazolate frameworks for the capture of water vapor and ammonia
Rieth, Adam J. (Adam Joseph)
Metal-organic frameworks (MOFs) are emerging materials for applications in gas sorption and separations; however, they are widely believed to be unstable towards coordinating vapors such as water and ammonia due to often-rapid hydrolysis or substitution at the metal-ligand bond. Here, we describe a series of micro- and mesoporous MOFs constructed from robust metal-triazolate bonds, which, together with a high density of open metal sites, enable these frameworks to exhibit record water uptake as well as record static and dynamic ammonia capacities. Optimization of the pore diameter has led to materials which adsorb large volumes of water with complete reversibility, portending application in the production of potable water in desert regions as well as for heat transfer and storage. Further studies illuminate the mechanism of initial water clustering at and around the metal-centered open coordination sites. For ammonia, systematic variation of the pore size and metal ion leads to materials with a greater affinity and more than twice the capacity for ammonia than activated carbon, the industry standard for protection and mitigation from this toxic and corrosive gas. Structure-function relationships and kinetic analyses of NH₃ and H₂O uptake in isostructural micro- and mesoporous materials made from Mn, Co, Ni, and Cu reveal stability trends that are in line with the water self-substitution rates in simple metal-aquo complexes. Altogether, these results provide clear, intuitive descriptors that govern the static and dynamic uptake, kinetics, and stability of MOF sorbents for strongly interacting gases.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 93-108).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122855</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bottlebrush and related polymer architectures for biomedical applications</title>
<link>https://hdl.handle.net/1721.1/122854</link>
<description>Bottlebrush and related polymer architectures for biomedical applications
Nguyen, Hung VanThanh.
Chapter 1: Introduction to Bottlebrush Polymers as Carrier Platforms for Drug Delivery Biomedical Applications. A brief introduction to the current state of nanotechnology in drug delivery applications is provided, including the common myths and facts, design considerations, and important challenges towards successful translation to the clinic. In this context, the promise of bottlebrush polymers as a next-generation delivery platform and the branched macromonomer approach are also discussed. Chapter 2: Scalable Synthesis of Multivalent Macromonomers for ROMP. Here, we report the three-step convergent synthesis (two-step longest linear sequence) of a divalent exo-norbornene imide capable of efficient coupling with various nucleophiles and azides to produce diversely functionalized branched macromonomers optimized for ring-opening metathesis polymerization (ROMP).
These features combine to provide for accumulation of a sufficient concentration of BASP-ORCA in murine subcutaneous tumors up to 20 h following systemic administration such that MRI contrast on par with metal-based agents is observed.; Chapter 4: Triply Loaded Nitroxide Brush-Arm Star Polymers Enable Metal-Free Millimetric Tumor Detection by Magnetic Resonance Imaging We report a modular and scalable synthetic approach to nitroxide-based BASP-ORCAs with high nitroxide loadings, excellent stability in vivo, no acute toxicity, and highly desirable pharmacokinetic and biodistribution profiles for longitudinal detection of tumors by MRI. When injected intravenously into mice bearing subcutaneous plasmacytomas, BASP-ORCA3 affords distinct in vivo visualization of tumors on translationally relevant time scales while enabling efficient mapping of tumor necrosis, an important biomarker to predict therapeutic outcomes. Moreover, BASP-ORCA3 allows for detection of millimetric tumor implants in a disseminated murine model of advanced-stage human ovarian cancer that possess genetic, histological, and vascular characteristics that are similar to those seen in patients.; Chapter 5: Brush-Arm Star Polymers as a Convergent Designer Platform for Theranostic Nanomaterials: Real Time Magnetic Resonance-Guided Drug Delivery Here, we report the conceptualization and execution of a novel theranostic platform system with heavy emphasis on the independence of its individual components: therapeutic payload, imaging moiety, stimuli trigger, and delivery platform. This would in turn allow for systematic variation of each player while minimizing disturbance towards the whole system. Specifically, we constructed a metal-free MRI-based brush-arm star polymer (BASP) bearing a doxorubicin (DOX) payload that is capable of generating changes in MRI contrast upon the release of the therapeutic payload. 
Moreover, via rational chemical design, we can specifically change not only the response trigger but also the response rate under a given stimulus. Chapter 6: Chiral Unimolecular-armed Bottlebrush as a Biological Probe Iterative exponential growth (IEG) is used to prepare a series of stereoisomeric norbornene-terminated macromonomers (MMs) with varying stereochemical sequence, length, and distance between sidechain groups. Ring-opening metathesis polymerization (ROMP) of these MMs along with a dye-labeled monomer provided stereoisomeric, fluorescent chiral unimolecular-arm bottlebrush polymers (CUBPs) that could be used as probes for understanding the role of chirality in biological systems, and these were examined in both in vitro and in vivo contexts. Intriguingly, these chirality-driven behaviors can be modulated via precise, rational chemical modifications, providing a proof-of-principle that the stereochemistry of BPs can be used to tailor their interactions with biological systems. Chapter 7: Polyoxazoline Bottlebrush and Brush-Arm Star Polymers via ROMP: Syntheses and Applications as Nitroxide-Based Organic Radical Contrast Agents The synthesis of functional poly(2-alkyl-2-oxazoline) (PAOx) copolymers with complex nanoarchitectures using a graft-through ROMP approach is described. Additionally, PEtOx-based BASPs with nitroxide radicals localized at the core-shell interface were prepared, displaying relaxivity values on par with state-of-the-art polyethylene glycol (PEG)-based nitroxide materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122854</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a continuous-flow synthesis of neostigmine methylsulfate and studies toward a continuous-flow synthesis of lisinopril</title>
<link>https://hdl.handle.net/1721.1/122853</link>
<description>Development of a continuous-flow synthesis of neostigmine methylsulfate and studies toward a continuous-flow synthesis of lisinopril
Kelly, Liam P. (Liam Porter)
Herein, we describe the development of a continuous-flow synthesis of neostigmine methyl sulfate, an acetylcholinesterase inhibitor on the WHO list of essential medicines, and the transfer of the synthesis into a next-generation reconfigurable frame developed by our collaborators. Starting from 3-dimethylaminophenol, the synthesis provides a throughput of approximately 46.8 g/day (or 93,600 doses/day) of crude neostigmine methyl sulfate. The synthesis also showcases a prototype in-line evaporation unit that operates without any added carrier gas. Dr. Christina Dai performed early screening of lithium bases. Dr. Yuqing Cui and Dr. Naomi Briggs developed the downstream purification sequence. Dr. Nopphon Weeranoppanant developed the in-line evaporator and, along with Dr. Dale Thomas, assisted with performing the synthesis within their developed frame. Liam P. Kelly developed the continuous synthesis of neostigmine methyl sulfate. Lisinopril is a member of a large family of ACE inhibitors generally known as N-carboxyethyl dipeptides. Of this family, lisinopril is the most commonly prescribed. All known routes to lisinopril require isolation of several synthetic intermediates and protecting-group manipulations; thus, the development of an efficient continuous synthesis would provide great benefit. Herein we describe our investigation of several routes to generate intermediates of lisinopril, with the end goals of a fully continuous synthesis, high material throughput, and minimal protecting-group manipulations. Liam P. Kelly performed all work described within this chapter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122853</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Copper-catalyzed carbon-heteroatom bond formations : asymmetric hydroamination and continuous-flow aromatic Finkelstein reaction</title>
<link>https://hdl.handle.net/1721.1/122852</link>
<description>Copper-catalyzed carbon-heteroatom bond formations : asymmetric hydroamination and continuous-flow aromatic Finkelstein reaction
Ichikawa, Saki.
The studies presented in this dissertation concern the development of new methods for copper-catalyzed carbon-heteroatom bond formations, including asymmetric hydroamination and a continuous-flow aromatic Finkelstein reaction. The first part of this dissertation focuses on the development of copper-catalyzed asymmetric hydroamination reactions to access various classes of enantioenriched amines. This includes the development of a broadly applicable hydroamination protocol for the synthesis of enantioenriched N-arylamines (Chapter 1) and 1,2-diamines (Chapter 2). The second part of this dissertation describes the development of a copper-catalyzed aromatic Finkelstein reaction under continuous-flow conditions (Chapter 3). Part I. Chapter 1. A Modified System for the Synthesis of Enantioenriched N-Arylamines through Copper-Catalyzed Hydroamination Despite significant recent progress in copper-catalyzed enantioselective hydroamination chemistry, the synthesis of chiral N-arylamines, which are frequently found in natural products and pharmaceuticals, has not been realized. Initial experiments with N-arylhydroxylamine ester electrophiles were unsuccessful; instead, their reduction in the presence of copper hydride (CuH) catalysts was observed. We detail key modifications of our previously reported hydroamination protocols that led to broadly applicable conditions for the enantioselective net addition of secondary anilines across the double bond of styrenes, 1,1-disubstituted alkenes, and terminal alkenes. NMR studies suggest that suppression of the undesired reduction pathway is the basis for the dramatic improvements in yield under this new protocol. Chapter 2. Regio- and Enantioselective Synthesis of 1,2-Diamine Derivatives by Copper-Catalyzed Hydroamination A highly regio- and enantioselective synthesis of 1,2-diamines from γ-substituted allylic pivalamides via copper-catalyzed hydroamination is reported.
The N-pivaloyl group is essential, both in facilitating the hydrocupration step and in suppressing unproductive β-elimination from the alkylcopper intermediate. This synthetic approach enables the efficient construction of chiral, differentially protected vicinal diamines under mild conditions with broad functional group tolerance. Part II. Chapter 3. Rapid and Efficient Copper-Catalyzed Finkelstein Reaction of (Hetero)Aromatics under Continuous-Flow Conditions A general, rapid, and efficient method for the copper-catalyzed Finkelstein reaction of (hetero)aromatics has been developed using continuous flow to generate a variety of aryl iodides. The described method tolerates a broad range of functional groups, including N-H and O-H groups. Additionally, in lieu of isolation, the aryl iodide products in solution can be used directly in two distinct multistep continuous-flow processes (amidation or Mg-I exchange/nucleophilic addition), demonstrating the flexibility of this method.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122852</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling polymer network topology</title>
<link>https://hdl.handle.net/1721.1/122851</link>
<description>Controlling polymer network topology
Gu, Yuwei, Ph.D. Massachusetts Institute of Technology.
Chapter 1: Introduction to Polymer Network Topology on a (Macro)Molecular Level Polymer network topology, comprising the ways in which strands and junctions are connected in polymer networks, plays a critical role in dictating many material properties. Here we discuss classical challenges in the field and review existing strategies to characterize and manipulate polymer network topology at the (macro)molecular level. Chapter 2: Semibatch Monomer Addition as a General Method to Tune and Enhance the Mechanics of Polymer Networks via Loop-Defect Control In this chapter we introduce semibatch monomer addition as a general strategy to reduce and control an important short-length-scale topological feature, primary loops, thus providing materials with tunable and significantly improved mechanical properties without changing their composition. Chapter 3: Leaving Groups as Traceless Topological Modifiers for Controlling Topological Structure in Chemically Identical Polymer Networks Here we report "traceless topological modification" as a general approach to control an important long-length-scale topological feature, the junction distribution. Using self-assembled structures as templates that are not themselves incorporated into the network, our method enables the synthesis of truly topologically isomeric networks with drastically different macroscopic properties. Chapter 4: Photoswitching Topology in Polymer Networks with Metal-Organic Cages as Crosslinks Building on our work in Chapters 2 and 3, we further explored topology as the central design principle for creating novel functional materials. In this chapter we introduce topology switching via cooperative self-assembly as a design principle to reversibly alter multiple network properties simultaneously and enable the preparation of one material that can exist in multiple topological states.
Chapter 5: Living Additive Manufacturing: Transformation of Parent Gels into Diversely Functionalized Daughter Gels Made Possible by Visible Light Photoredox Catalysis Our ability to control polymer network topology is further enhanced by developing living additive manufacturing as an effective strategy to expand the original topology of parent networks in a photo-growth fashion. This approach enables us to transform the mechanical and physical properties of parent networks post-synthetically. Chapter 6: polyMOF Nanoparticles: Dual Roles of a Multivalent polyMOF Ligand in Size Control and Surface Functionalization Here we present a novel approach to synthesizing well-defined metal-organic framework nanoparticles (MOF NPs), in which the size control and surface functionalization of MOF-5 NPs were simultaneously achieved using multivalent polyMOF ligands.
Thesis: Ph. D. in Organic Chemistry, Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122851</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemistry of alkylaromatics in crude oil upgrading</title>
<link>https://hdl.handle.net/1721.1/122850</link>
<description>Chemistry of alkylaromatics in crude oil upgrading
Lai, Lawrence Tin Chi.
Due to the rising long-term demand for crude oil, technologies to upgrade crude oil need to be developed to ensure maximum use efficiency of future oil sources. In typical carbon rejection processes, coke formation is a common phenomenon that leads to decreased yield of upgraded oil. As a result, the chemistry of coke formation is an important area of research. Due to the high complexity of the composition of crude oil and coke, this work simplifies the study of supercritical water upgrading of crude oil to a hexylbenzene pyrolysis system. The pyrolysis of hexylbenzene at process conditions of 450°C and 75 creates several hundred products resolved by GCxGC, and the analysis is intractable if one considers only the experimental data, which do not reveal reactive intermediates or reaction paths. However, introducing theoretical considerations using the Reaction Mechanism Generator (RMG) allows analysis of a vast number of species while retaining information on elementary reaction steps and reactive intermediates. Information on these steps and intermediates can be obtained from quantum chemistry. Hexylbenzene pyrolysis was characterized using RMG, with key steps computed using quantum chemistry. The results indicate that the retro-ene reaction, previously thought to play an important role in hexylbenzene pyrolysis, is much slower than reported in the literature. Furthermore, alkylaromatic chemistry at 450°C is extremely sensitive to species thermochemistry. Further investigation was done on the formation of 2-ring aromatic species in hexylbenzene pyrolysis, likely precursors of coke. Thermochemistry and rate calculations were performed for 2-ring species formed via the intramolecular and intermolecular addition pathways, yielding 27 thermochemistry group additivity values that allow extrapolation of this work's calculations to analogous species.
In addition, 25 training reactions were added to allow the rate rules calculated in this work to be extrapolated to similar reactions. Finally, all of this new chemical knowledge was incorporated into RMG, and a detailed kinetic model for hexylbenzene pyrolysis was constructed. The generated model was able to predict the total molar yields of bridged and fused 2-ring aromatics. However, many individual species had inaccurate molar yield predictions, and some key pathways to form 2-ring species were found to be missing. Additional quantum calculations were performed after the construction of this kinetic model to attempt to resolve these mispredictions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-138).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122850</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of thermostable affinity reagents for low-cost, paper-based medical diagnostics</title>
<link>https://hdl.handle.net/1721.1/122849</link>
<description>Development of thermostable affinity reagents for low-cost, paper-based medical diagnostics
Miller, Eric Alexander.
The timely diagnosis and treatment of disease in resource-constrained settings requires the development of robust point-of-care (POC) diagnostics, which provide accurate results and can be employed by users with minimal medical training and limited access to basic infrastructure. One of the most common POC diagnostic formats is the immunochromatographic rapid diagnostic test, which traditionally uses nitrocellulose-immobilized IgG antibodies as binding proteins for the capture of disease biomarkers from patient samples. However, these antibodies are expensive to produce, structurally complex, and prone to thermal denaturation. In contexts where continuous cold-chain storage may be infeasible, the resulting loss in binding activity can manifest as diminished assay sensitivity, leading to adverse clinical outcomes and eroding patient trust in the diagnostic format. In the interest of replacing diagnostic antibodies with a more cost-effective, robust class of binding proteins, this thesis explores the development of thermostable affinity reagents based on the hyperthermophilic scaffold protein rcSso7d. Given its native microbial host, minimalist structure, and high wild-type melting temperature (98°C), rcSso7d represents a viable alternative to antibodies in in vitro POC assays. To assess the applicability of the rcSso7d scaffold in this context, protein engineering techniques were used to rapidly select analyte-specific binding variants from a yeast surface display library of &gt;10⁹ members. A high-affinity rcSso7d binder was identified, produced in high yield in a bacterial host, and readily purified in a single chromatographic step. The in vitro activity and thermal stability of this engineered binder were characterized in the context of a low-cost, paper-based assay, and significant improvements in stability and production economics were observed for rcSso7d-based assays relative to assays featuring a representative antibody reagent.
Additionally, general strategies were developed to improve the diagnostic performance of paper-based assays employing rcSso7d-based reagents. In one instance, chimeric protein constructs in which rcSso7d variants are fused to a cellulose-binding domain were found to bind to unmodified cellulose in an oriented fashion and with high efficiency. This substrate-anchoring approach permits the rapid, high-density immobilization of the rcSso7d species in paper-based assays and yields significant sensitivity enhancement by enabling both the depletion of the soluble analyte from the sample and the processing of large sample volumes within clinically relevant timescales. Detection reagents incorporating rcSso7d binders were also developed, using novel fusion constructs that enabled in vivo labelling while preserving analyte-binding activity. These techniques were applied in the context of a urine-based tuberculosis biomarker, and may one day permit the development of multiplexed assays targeting a suite of these analytes. Such a development would enable point-of-care diagnostic testing without requiring the production of sputa, facilitating disease detection in otherwise inaccessible patient populations (e.g. children under five, the elderly, and immunocompromised patients).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122849</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast simulation methods for soft matter hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/122848</link>
<description>Fast simulation methods for soft matter hydrodynamics
Fiore, Andrew M. (Andrew Michael)
This thesis describes the systematic development of methods to perform large-scale dynamic simulations of hydrodynamically interacting colloidal particles undergoing Brownian motion. Approximations to the hydrodynamic interactions between particles are built from the periodic fundamental solution for flow at zero Reynolds number and are methodically improved by introducing the multipole expansion and constraints on particle dynamics. Ewald sum splitting, which decomposes the sum of slowly decaying interactions into two rapidly decaying sums evaluated independently in real space and Fourier space, is used to accelerate the calculation and serves as the basis for a new technique for sampling the Brownian displacements that is orders of magnitude faster than prior approaches. The simulation method is first developed using the ubiquitous Rotne-Prager approximation for the hydrodynamic interactions. Extension of the Rotne-Prager approximation is achieved via the multipole expansion, which introduces the notion of induced force moments whose values are determined from the solution of constraint problems (for example, rigid particles cannot deform in flow), and methods for handling these multipole-based constraints are illustrated. The multipole expansion converges slowly when particles are nearly touching, a problem which is functionally solved for dynamic simulations by including divergent lubrication interactions, in the style of Stokesian Dynamics. The lubrication interactions effectively introduce an additional constraint on the relative motion of closely separated particle pairs. This constraint is combined with the multipole constraints by developing a general method to handle nearly arbitrary dynamic constraints using saddle-point matrices.
Finally, the methods developed herein are applied to study sedimentation in suspensions of attractive colloidal particles. The simulation results are used to develop a predictive model for the hindered/promoted settling function that describes the mean sedimentation rate as a function of particle concentration and attraction strength.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122848</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven approach to understanding exciton-exciton interactions in CsPbBr₃ nanocrystals</title>
<link>https://hdl.handle.net/1721.1/122847</link>
<description>Data-driven approach to understanding exciton-exciton interactions in CsPbBr₃ nanocrystals
Ashner, Matthew N. (Matthew Nickol)
Lead halide perovskites are a rapidly developing class of materials of interest for optoelectronic applications. They have a number of desirable properties, such as long carrier diffusion lengths and defect tolerance, that arise from the materials' unique dielectric properties. Although much of the initial interest in lead halide perovskites was geared towards producing highly efficient solar cells from the bulk material, cubic perovskite nanocrystals are a strong candidate system for light-emitting applications. Optical gain in semiconductor nanocrystals relies on emission from biexciton or doubly excited states. Knowledge of the spectral properties of biexciton states is critical for understanding optical gain development as well as many-body interactions between charge carriers more broadly. In this thesis, we develop and demonstrate a data-driven approach to characterizing the energetics and dynamics of biexciton states in CsPbBr₃ nanocrystals using transient absorption (TA) spectroscopy. We then use the understanding developed from the TA data to guide experiments using other techniques and further examine the physical phenomena that influence these excited states. In Chapter 2, we describe our data-driven method in detail and demonstrate its effectiveness in extracting spectral information about CsPbBr₃ nanocrystals. The method combines the target analysis fit commonly employed in organic systems with Bayesian inference and a Markov chain Monte Carlo sampler to accurately characterize the model uncertainty and vet the model itself. In Chapter 3, we apply the analysis developed in Chapter 2 to a size series of CsPbBr₃ nanocrystals to extract the biexciton and exciton component TA spectra as a function of nanocrystal size.
We find that the exciton and biexciton spectra have distinctive shapes, in contrast with the common assumption about these spectra. The biexciton spectra are broader and slightly blue-shifted relative to the exciton spectrum, and both the broadening and the blue shift increase as the nanocrystal size decreases. We verify this with our own time-resolved photoluminescence experiments. In Chapter 4, we propose and discuss in detail the development of an experiment to verify our hypothesis for why the exciton-exciton interaction is repulsive: the effect of polaron formation. We describe the development of a femtosecond stimulated Raman spectroscopy experiment to directly observe polaron formation and the challenges of performing this technique at high repetition rates. The central goal of this thesis is to describe a more careful approach to analyzing spectroscopic data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 105-109).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122847</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On quantum randomness and quantum resources</title>
<link>https://hdl.handle.net/1721.1/122846</link>
<description>On quantum randomness and quantum resources
Liu, Zi-Wen.
This thesis consists of two independent parts. The first part concerns entanglement, quantum randomness, and complexity beyond scrambling. More explicitly, we study the Rényi entanglement entropies of quantum designs. The results lay the mathematical foundation for using entanglement to study the hierarchy of complexities between scrambling and Haar randomness. The second part explores general aspects of quantum resource theory. We introduce three theories that do not rely on a specific resource: the theory of resource-destroying maps, the one-shot operational resource theory, and the resource theory of quantum channels.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122846</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ingestible capsules for therapeutic injections in the gastrointestinal tract</title>
<link>https://hdl.handle.net/1721.1/122845</link>
<description>Ingestible capsules for therapeutic injections in the gastrointestinal tract
Abramson, Alex Gilbert.
Macromolecule drugs such as insulin have transformed our capacity to effectively treat diseases; however, their rapid degradation and poor absorption in the gastrointestinal (GI) tract generally limit their administration to parenteral routes. An oral biologic delivery system must aid in both localization and permeation to achieve systemic drug uptake. In this thesis I describe two oral capsules designed to systemically deliver macromolecules by inserting the drugs directly into the walls of the gastrointestinal tract. One device is designed to deliver to the stomach wall, while the other is designed to deliver to the wall of the small intestine. Ex vivo studies on human GI tissue and in vivo studies in rats and swine support the devices' safety and high delivery efficiency. I perform a cost-effectiveness analysis using first- and second-order Monte Carlo simulations to show that these new methods of oral macromolecule delivery should increase the quality-adjusted life expectancies of patients suffering from diabetes. Moreover, I demonstrate that electronic systems can be incorporated into these devices for communication and additional therapeutic applications. With the ability to load a multitude of drug formulations, the devices can serve as platform technologies to orally deliver therapeutic doses of macromolecule drugs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-201).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122845</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and synthesis of polymers for Corona Phase Molecular Recognition (CoPhMoRe) of carbohydrates</title>
<link>https://hdl.handle.net/1721.1/122841</link>
<description>Design and synthesis of polymers for Corona Phase Molecular Recognition (CoPhMoRe) of carbohydrates
Ahn, Jiyoung, Ph.D. Massachusetts Institute of Technology.
The molecular recognition of carbohydrates is difficult to realize synthetically due to their relatively low affinity for a wide range of substrates, yet this recognition is the underpinning of human immunity, cell signaling, and glycobiology. For the past decade, significant effort has been made in this field to create new technologies to profile glycans and carbohydrates. Corona Phase Molecular Recognition (CoPhMoRe), a concept introduced by the Strano group, generates a nanoparticle-coupled polymer phase capable of recognizing a specific molecule with high affinity and selectivity. CoPhMoRe has been successfully demonstrated using polymer-wrapped single-walled carbon nanotubes, resulting in molecular recognition complexes, to date, for dopamine, estradiol, riboflavin, L-thyroxine, and the protein fibrinogen, via combinatorial library screening. As an alternative to this empirical library screening, we first solve the mathematical formulation that we introduce as the CoPhMoRe inverse problem, providing a theoretical basis for understanding certain types of CoPhMoRe recognition. In addition, we demonstrate that a polymer or surfactant corona phase surrounding a single-walled carbon nanotube can substantially modify the selectivity of various pre-adsorbed phenylboronic acids (PBA) for mono-, di-, and polysaccharides. Based on these findings, a simple and robust RAFT polymerization process is employed to produce novel and distinct classes of water-soluble PBA-based polymers. These polymers in SWNT corona phases demonstrate enhanced selectivity towards specific sugar alcohols, which differ only in the orientation of their hydroxyl groups. By changing the polymer backbone structure, a highly selective D-arabinose sensor was developed and used to differentiate D-arabinose from L-arabinose for the first time. Finally, we developed a glucose sensor that can measure glucose concentration instantaneously by detecting changes in the local refractive index.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2017; Cataloged from PDF version of thesis. Page 158 blank.; Includes bibliographical references (pages 133-157).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122841</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The evolution and specialized metabolism of beetle bioluminescence</title>
<link>https://hdl.handle.net/1721.1/122840</link>
<description>The evolution and specialized metabolism of beetle bioluminescence
Fallon, Timothy Robert.
Fireflies (Lampyridae) and certain other families of beetles, including the American railroad worms (Phengodidae), Asian starworms (Rhagophthalmidae), and American click-beetles (Elateridae), produce light in a process known as bioluminescence. The bioluminescent systems of beetles, natively used for the purposes of mating communication and/or an aposematic warning signal, are now well understood and have been widely applied in biotechnology and biomedical research. There have been considerable advancements in the engineering of the luciferin substrate and the luciferase enzyme for beneficial characteristics such as altered emission wavelength, improved thermostability, and improved catalytic parameters, but despite this substantial effort focused on the biotechnological applications of beetle bioluminescence, major questions remain regarding its natural biochemistry and evolutionary origins. Four major questions that were unanswered at the beginning of this PhD study were: (1) Do fireflies possess a storage form of their luciferin? (2) What is the evolutionary relationship of bioluminescence amongst the bioluminescent beetle families, and has this trait independently evolved multiple times? (3) How is firefly luciferin biosynthesized? And (4) are there accessory genes from the bioluminescent beetles which act in bioluminescent metabolism, and might these genes be useful for biotechnological applications? Here I describe the discovery and characterization of the presumed storage form of luciferin in fireflies, sulfoluciferin, and the enzyme that produces it, luciferin sulfotransferase. Furthermore, I describe the sequencing, assembly, and characterization of the genome of the North American "Big Dipper" firefly Photinus pyralis, along with the genome of the Japanese "heike" firefly Aquatica lateralis and the genome of the Puerto Rican bioluminescent click beetle or "cucubano" Ignelater luminosus.
Genomic comparisons amongst these three species support the hypothesis that firefly and click beetle luciferases evolved independently, suggesting an independent evolutionary origin of the bioluminescent systems of fireflies and click beetles. I also describe stable isotope tracing experiments in live fireflies, establishing that adult and larval fireflies likely do not biosynthesize firefly luciferin de novo, and may instead rely on a "recycling" pathway to re-synthesize luciferin from the luminescence product oxyluciferin. Lastly, I discuss the future directions resulting from this thesis, and the questions that remain unanswered.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122840</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biosynthesis and medicinal chemistry of therapeutically promising plant natural products</title>
<link>https://hdl.handle.net/1721.1/122839</link>
<description>Biosynthesis and medicinal chemistry of therapeutically promising plant natural products
Chau, Yasmin-Pei(Yasmin-Pei Kamal)
Modern molecular biology, biochemical, and chemical techniques have made it possible to identify individual natural products that possess pharmacological activity from medicinal plants. While approximately 50% of all new FDA-approved small molecule therapeutics in the past 40 years were natural products or natural product analogs, challenges including limited natural resources and the difficulty of solving the total synthesis or semi-synthesis of natural products have limited our ability to harness the full diversity of chemical structures provided by nature to treat human diseases. One solution to these challenges is the elucidation of plant specialized metabolite biosynthetic pathways. Identifying and characterizing the enzymes involved in specialized metabolite biosynthesis will provide insight into the evolution of enzymes performing interesting chemistries and provide new tools for the enzymatic production of therapeutically promising natural products. The goal of this dissertation is to explore the aspects of both medicinal chemistry and the elucidation of biosynthetic pathways that can contribute to the development of novel therapeutics. First, we analyzed the structure-activity relationship of analogs of the flavonoid icariin and identified analogs with improved potency in inhibiting human phosphodiesterase-5. We subsequently identified and characterized a novel flavonoid prenyltransferase and O-methyltransferase from the medicinal herb Epimedium sagittatum that is known to produce many bioactive prenylated and methylated flavonoids.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122839</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rev7 is a novel regulator of chemotherapeutic response in drug-resistant lung cancer</title>
<link>https://hdl.handle.net/1721.1/122837</link>
<description>Rev7 is a novel regulator of chemotherapeutic response in drug-resistant lung cancer
Vassel, Faye-Marie.
Most malignant cancers are treated with chemotherapeutic agents that target and damage cellular DNA. While genotoxic chemotherapies have proven to be highly effective agents for cancer therapy, it is well known that intrinsic and acquired cancer drug resistance is a problem that severely limits the successful elimination of a wide range of malignancies. This point is particularly important in the context of genotoxic chemotherapies, because the DNA damaging agents used in cancer treatment induce a diverse spectrum of toxic lesions that are recognized by a variety of DNA damage response (DDR) mechanisms. In this thesis I used CRISPR-Cas9 gene-editing and other molecular biology and biochemical techniques to examine the functional relevance of Rev7, a multi-functional translesion synthesis (TLS) DNA damage tolerance protein, in drug-resistant cancers.; In particular, I employed CRISPR-Cas9 gene-editing technologies to generate Rev7 knockout (KO) drug-resistant lung cancer cells to use as a tool to investigate the impact that Rev7 loss may have on chemotherapeutic efficacy. Excitingly, this work reveals that Rev7 loss sensitizes intrinsically drug-resistant lung tumors to cisplatin and drastically enhances the overall survival of syngeneic mice transplanted with drug-resistant lung tumors. Additionally, in this thesis I conducted immunoprecipitation and mass spectrometry to better elucidate Rev7's functional relevance. Mass spectrometry findings in this thesis reveal that when Rev7 is immunoprecipitated under different cellular conditions (i.e. G2/M arrested or DNA damaged cells), Rev7 interacts with novel and diverse protein interactors.
Intriguingly, my mass spectrometry findings also reveal that Rev7 forms protein-protein interactions with many proteins that play a role in regulating double-strand break (DSB) repair.; Given these findings, we conducted DSB repair studies investigating whether Rev7 plays a role in regulating DSB repair in drug-resistant lung cancer. Notably, these studies suggest that Rev7 loss results in a decrease in DSB repair capacity and an increase in DSBs in drug-resistant lung cancer cells. Altogether, this thesis demonstrates that Rev7 has functional relevance in modulating chemotherapeutic response in drug-resistant lung cancer. Further, this thesis presents findings that strongly argue that the development of small molecule inhibitors targeting Rev7 may provide a new way to enhance chemotherapeutic efficacy in drug-resistant tumors in the clinic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 184-197).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122837</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel DNA damage quantification platform enables high throughput screening for genes that impact DNA double strand breaks</title>
<link>https://hdl.handle.net/1721.1/122836</link>
<description>A novel DNA damage quantification platform enables high throughput screening for genes that impact DNA double strand breaks
Tay, Ian Jun Jie.
DNA is the blueprint of life, and the high fidelity transmission of genetic information from parent cells to progeny is essential for an organism's viability. However, our genomes are constantly being damaged by reactive molecules generated from cellular metabolic processes or introduced from the environment. The resulting DNA damage can alter the information encoded in DNA, and can interfere with the accurate transmission of genetic information when cells divide. The accumulation of cells with highly damaged or altered DNA within an organism can cause diseases, such as growth defects, aging and cancer. Fortunately, cells possess the capability to repair damaged DNA. Since DNA repair mechanisms can reverse the deleterious effects of DNA damage, they are important in disease prevention, and in particular play an important role in preventing cancer. DNA repair factors are also important targets for cancer therapies.; Tumor cells frequently harbor defects in DNA repair, rendering them vulnerable to DNA damage. Many cancer therapies exploit this vulnerability by treatment with DNA damaging agents. However, tumor cells can have differential DNA repair capacities based on the expression levels of various DNA repair genes. Thus, some cancer cells are variable in their response to chemotherapy and radiation. It is well established that inhibiting DNA repair can increase the efficacy of treatment. Therefore, it is critical to develop a better understanding of the network of genes that regulate DNA repair mechanisms, both to understand susceptibility to cancer and to improve the outcomes of cancer therapy. DNA repair is a complex process that requires the coordination of many proteins to respond to specific classes of DNA damage.
Many of the major proteins that directly participate in DNA repair pathways are well characterized.; However, recent research has indicated that the core DNA repair factors make up only a small fraction of the proteins that respond to DNA damage, suggesting that a large number of novel DNA repair factors have yet to be discovered and characterized. In this work, we leveraged the CometChip, a high-throughput DNA damage quantification assay, to screen thousands of genes for their ability to modulate DNA repair, by knocking them down with shRNAs. We first designed hardware for the CometChip to make it compatible with high-throughput robotics so as to reduce the amount of manual labor needed to execute our screen. We then exploited the ability of our assay to measure DNA damage at an unparalleled rate to screen an shRNA library targeting 2564 oncology-associated genes. We performed gene network analysis on the top candidate genes and found LATS2 to be a novel DNA repair factor. Further investigation revealed that LATS2 is a modulator of the homologous recombination repair pathway.; In addition, we merged our screen data with that from an assay that queries proteins for their ability to bind to DNA double strand breaks. Our results showed that we were able to identify known DNA repair factors via the intersection of the two datasets, and we pinpointed at least one other novel DNA repair gene for further investigation. Taken together, this work represents an advancement in the ability to discover novel DNA repair factors by large-scale parallel measurement of physical DNA damage in cells. Our technology enables high-throughput screening for DNA damage and repair factors faster than ever before, allowing for extensive studies of DNA damage and opening doors to the discovery of new genes and molecules that affect DNA repair.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122836</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in real estate and finance</title>
<link>https://hdl.handle.net/1721.1/122835</link>
<description>Essays in real estate and finance
Liebersohn, Carl J.
This dissertation consists of three chapters on topics related to real estate and finance. The first chapter studies the effects of bank competition on bank risk-taking and lending. Using a quasi-experimental design that exploits the exogenous application of bank antitrust laws following bank mergers, I show that bank competition leads to more loans going to larger and safer borrowers. The second chapter shows that housing demand shocks from 2000-2006 are highly correlated with the elasticity of housing supply in different regions, and explores the implications of this for research on the effects of housing prices. The third chapter, written with Gregory Howard, proposes a new channel for changes in aggregate housing prices. We show empirically and theoretically that increased demand for housing in inelastic areas led to higher aggregate housing prices from 2000-2006, and quantify the magnitude of this channel.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122835</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered red blood cells and their applications</title>
<link>https://hdl.handle.net/1721.1/122831</link>
<description>Engineered red blood cells and their applications
Pishesha, Novalia.
The humble red blood cell (RBC) is the most abundant cell in the human body. Every second, a normal adult generates some 2.5 million RBCs, which subsequently circulate through the blood vessels for a lifespan of 50 or 120 days in mouse and human, respectively. RBCs are also unique in that they are completely enucleated once fully mature. These two characteristics exist as distinct assets for cellular therapy applications utilizing RBCs as a platform, enabling long-lasting availability in vivo and the ability to genetically modify precursor cells without worry of the terminally differentiated progeny carrying any foreign genetic material. The first part of this thesis is devoted to the establishment of methodologies that allow for the covalent attachment of both natural and synthetic cargoes to the surface of red blood cells without compromising their biological properties. This system employs genetic engineering and sortase A, a bacterial transpeptidase.; We show that this strategy is able to efficiently engineer both mature mouse and human RBCs in a site-specific and covalent manner. The next portion of this work describes how these established methodologies can be mixed and matched according to the diverse needs of engineered RBC applications. We provide a proof of concept that utilizes engineered RBCs to prolong prophylactic protection against deadly toxins. By expressing chimeric proteins of single domain antibodies (VHHs) against botulinum neurotoxin A (BoNT/A) with RBC-specific proteins, we demonstrated that mouse RBCs expressing anti-BoNT/A VHHs can provide resistance to up to 10,000 times the lethal dose (LD₅₀) of BoNT/A.
We validate this finding by repeating our results in a human RBC culture system that we have improved to achieve 90% enucleation, illustrating the broad translatability of our strategy for therapeutic applications.; Finally, drawing upon knowledge that the body clears 2.5 million RBCs every second to maintain homeostasis, we use sortase to attach disease-associated autoantigens to genetically engineered and to unmodified red blood cells (RBCs). Such modified RBCs present these autoantigens as their own, and hijack the non-inflammatory nature of the RBC clearance pathway to promote tolerance to their carried payload. We show that this blunts the immune contribution of major subsets of immune effector cells (B cells, CD4+ and CD8+ T cells) in an antigen-specific manner. Transfusion of RBCs expressing self-antigen epitopes alleviates and even prevents signs of disease in an experimental system for autoimmune encephalomyelitis, and also maintains normoglycemia in a mouse model of type 1 diabetes, highlighting the potential of engineered RBCs for treating autoimmune diseases.; Taken together, the results of applying our engineered RBCs in areas of both acute infectious and toxic agents, as well as for longer-term chronic and autoimmune diseases, hint at the tremendous potential of this system, and we have only begun to scratch the surface.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122831</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular characterization of T cells across disease states in the central nervous system</title>
<link>https://hdl.handle.net/1721.1/122830</link>
<description>Molecular characterization of T cells across disease states in the central nervous system
Goods, Brittany A.(Brittany Anne Thomas)
The local cytokine milieu shapes the nature and function of immune cells and by extension the overall course of tissue-specific immune responses. In the context of cancer and autoimmunity, two opposing immune responses, the local immune environment can lead to dysfunctional T cell states. Understanding the mechanisms that distinguish these two states is key to identifying unique pathways that could be induced in autoimmunity and relieved in cancer without causing autoimmunity. We sought to characterize unique and shared T cell states in the context of multiple sclerosis (MS) and glioblastoma (GBM) using a combination of immunophenotyping and functional approaches, including ultra low-input transcriptional profiling. First, we use a series of unbiased approaches to identify functional differences between auto-reactive T cells derived from MS or healthy donors.; We found that MS-derived CCR6⁺ T cells produce the pathogenic inflammatory cytokines IFN-γ, IL-17, and GM-CSF, while healthy controls produce IL-10 in response to myelin antigen. We also identified a transcriptional signature that characterizes these cells from MS donors and highlights the role of CTLA-4 signaling in autoreactive T cells derived from healthy donors. Next, we sought to better understand the role of another co-inhibitory receptor, PD-1, specifically in the context of Treg and CD4⁺ T effector function in GBM, a central nervous system cancer. We identify unique signatures of dysfunction in both the CD4⁺ T cell and Treg compartments correlated with PD-1 expression.
Finally, the adverse development of autoimmunity in the context of treatment with blocking CTLA-4 antibodies suggests that studies directly comparing T cells in the context of autoimmunity and cancer can yield valuable insight into mechanisms of T cell dysfunction.; Through a transcriptional comparison of isolated T cell subsets from the spinal fluid of MS patients, tumors of GBM patients, and matched blood, we identify common biological mechanisms of CNS T cells as well as unique signatures of CNS exhaustion. Taken together, our data suggest that the CNS may be enriched for pathogenic cells in the context of both MS and GBM.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2017; Cataloged from PDF version of thesis. "February 2017."; Includes bibliographical references (pages 157-170).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122830</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tunneling and ferroelectric based transistors for energy efficient electronics</title>
<link>https://hdl.handle.net/1721.1/122744</link>
<description>Tunneling and ferroelectric based transistors for energy efficient electronics
Zubair, Ahmad, Ph.D., Massachusetts Institute of Technology.
Si CMOS technology is approaching its limit to meet the demand for future data-intensive energy efficient ubiquitous electronics. Emerging materials (i.e. low dimensional nanomaterials, novel dielectric/ferroelectric materials) show great promise to overcome the limits of the Si CMOS technology for specific applications. This thesis studies the prospects of two different emerging materials systems (atomically thin two-dimensional materials and ferroelectric oxides) in energy efficient electronic devices for future Systems-on-a-Chip (SoC) applications. As the channel length of transistors has shrunk over the years, short-channel effects have become a major limiting factor to transistor miniaturization. Atomically thin MoS₂ is an ideal semiconductor material for field-effect transistors (FETs) with sub-10-nm channel lengths.; We study the limit of channel length scaling in MoS₂ FETs using the semiconducting-to-metallic phase transition of MoS₂ and demonstrate sub-10-nm channel-length transistor fabrication by directed self-assembly patterning of mono- and trilayer MoS₂. Novel device concepts based on quantum mechanical tunneling (i.e. inter-band tunneling, hot electron injection) and two-dimensional materials can overcome some of the limitations of conventional CMOS devices for both low-power and high-frequency applications. We demonstrate inter-band tunneling transistors with room temperature negative differential resistance using van der Waals heterostructures for low power applications. Moreover, we show a vertical hot electron transistor (HET) utilizing tunneling injection of hot electrons, which may enable operation at frequencies beyond what Si CMOS can provide.; The integration of a highly conductive ultra-thin (0.3 nm) monolayer graphene with a GaN platform by van der Waals interaction can simultaneously offer both scalability and high performance in ballistic HETs. In the last part of the thesis, we discuss how the use of ferroelectric HfO₂ with conventional (i.e.
Si, GaN, InGaAs) semiconductors as well as two-dimensional materials systems offers a new degree of freedom when designing novel electronic devices. We demonstrate that the integration of ferroelectric HfO₂ can enable ultra-low power operation (MoS₂ FETs with subthreshold swing less than 60 mV/decade) as well as, potentially, higher operating frequencies. Finally, we present a proposal for a new analog synaptic device using ferroelectric HfO₂ for in-memory computation in future SoC platforms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 208-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122744</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redesigning the memory hierarchy to exploit static and dynamic application information</title>
<link>https://hdl.handle.net/1721.1/122742</link>
<description>Redesigning the memory hierarchy to exploit static and dynamic application information
Tsai, Po-An.
Memory hierarchies are crucial to performance and energy efficiency, but current systems adopt rigid, hardware-managed cache hierarchies that cause needless data movement. The root cause of the inefficiencies of cache hierarchies is that they adopt a legacy interface and ignore most application information. Specifically, they are structured as a rigid hierarchy of progressively larger and slower cache levels, with sizes and policies fixed at design time. Caches expose a flat address space to programs that hides the hierarchy's structure, and transparently move data across cache levels in fixed-size blocks using simple, fixed heuristics. Besides squandering valuable application-level information, this design is very costly: providing the illusion of a flat address space requires complex address translation machinery, such as associative lookups in caches. This thesis proposes to redesign the memory hierarchy to better exploit application information.; We take a cross-layer approach that redesigns the hardware-software interface to put software in control of the hierarchy and naturally convey application semantics. We focus on two main directions: First, we design reconfigurable cache hierarchies that exploit dynamic application information to optimize their structure on the fly, approaching the performance of the best application-specific hierarchy. Hardware monitors application memory behavior at low overhead, and a software runtime uses this information to periodically reconfigure the system.
This approach enables software to (i) build single- or multi-level virtual cache hierarchies tailored to the needs of each application, making effective use of spatially distributed and heterogeneous (e.g., SRAM and stacked DRAM) cache banks; (ii) replicate shared data near-optimally to minimize on-chip and off-chip traffic; and (iii) schedule computation across systems with heterogeneous hierarchies (e.g., systems with near-data processors).; Specializing the memory system to each application improves performance and energy efficiency, since applications can avoid using resources that they do not benefit from, and use the remaining resources to hold their data at minimum latency and energy. For example, virtual cache hierarchies improve full-system energy-delay-product (EDP) by up to 85% over a combination of state-of-the-art techniques. Second, we redesign the memory hierarchy to exploit static application information by managing variable-sized objects, the natural unit of data access in programs, instead of fixed-size cache lines. We present the Hotpads object-based hierarchy, which leverages object semantics to hide the memory layout and dispense with the flat address space interface. Similarly to how memory-safe languages abstract the memory layout, Hotpads exposes an interface based on object pointers that disallows arbitrary address arithmetic. This avoids the need for associative caches.; Instead, Hotpads moves objects across a hierarchy of directly addressed memories. It rewrites pointers to avoid most associative lookups, provides hardware support for memory management, and unifies hierarchical garbage collection and data placement. Hotpads also enables many new optimizations. For instance, we have designed Zippads, a memory hierarchy that leverages Hotpads to compress objects. 
Leveraging object semantics and the ability to rewrite pointers in Hotpads, Zippads compresses and stores objects more compactly, with a novel compression algorithm that exploits redundancy across objects. Though object-based languages are often seen as sacrificing performance for productivity, this work shows that hardware can exploit this abstraction to improve performance and efficiency over cache hierarchies: Hotpads reduces dynamic memory hierarchy energy by 2.6x and improves performance by 34%; and Zippads reduces main memory footprint by 2x while improving performance by 30%.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-177).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122742</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phosphoproteomic data interpretation applied to studies of cancer and inflammatory diseases</title>
<link>https://hdl.handle.net/1721.1/122741</link>
<description>Phosphoproteomic data interpretation applied to studies of cancer and inflammatory diseases
Strasser, Samantha Dale.
Protein phosphorylation is a process central to cellular signaling. Consequently, targeted interruption or alteration of this process in the context of disease holds the potential for new treatments. Phosphoproteomic data generated by global mass spectrometry (MS) contain high-content information on protein phosphorylation. Nevertheless, the biological function of the vast majority of phosphorylation sites remains unknown. This poses a challenge in drawing meaningful, actionable biological conclusions from this datatype. This thesis presents the development of Substrate-based Kinase Activity Inference, SKAI, a methodology to infer kinase activity from phosphoproteomic data. SKAI first draws upon prior knowledge of kinase-substrate interactions to construct custom lists of kinases and their respective substrate sites, termed kinase-substrate sets. These sets integrate prior knowledge originating from studies of a variety of different organisms (e.g., human, mouse, and rat). This results in more informative and organism-specific sets. Kinase inference is carried out by using these sets within the Gene Set Enrichment Analysis (GSEA) framework. To demonstrate its utility, SKAI is applied to global phosphoproteomic data from two disease systems, i.e., in vivo mouse models of inflammatory bowel disease (IBD) and mutant RAS cancers. Results to date have provided new hypotheses for potential therapeutic development, illustrating the utility of SKAI in elucidating new drug target leads for the treatment of disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122741</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid learning for multi-step manipulation in collaborative robotics</title>
<link>https://hdl.handle.net/1721.1/122740</link>
<description>Hybrid learning for multi-step manipulation in collaborative robotics
Pérez D'Arpino, Claudia.
I envision robots that can LEARN a model of the steps and the goal of a constrained multi-step manipulation task by observing human examples of the task, that are flexible enough to COLLABORATE with a human teammate to execute this task, and that are able to DISCOVER their own new strategy for performing the task in a manner that adapts well to unmodeled aspects of the physical world. In this thesis I formulate models and algorithms for hybrid learning, a framework in which a robot learns manipulation tasks by combining observational and self-learning, and develop a learning and collaboration workflow in the context of remote manipulation in shared autonomy. I show experimentally that this collaborative workflow significantly improves performance over other systems for remote manipulation. LEARN: I first present C-LEARN, an algorithm that enables robot learning of multi-step manipulation tasks from a single human demonstration.; I consider quasi-static tasks that are geometrically constrained. The robot uses demonstrations to formulate a task representation in terms of keyframes and geometric constraints that can be used by a motion planner to solve a new instance of the task. This work addresses the technical gap between learning from demonstrations and motion planning, effectively increasing the complexity of manipulation tasks that end users without programming experience can teach robots. COLLABORATE: Second, I present the integration of C-LEARN into a collaborative workflow for remote manipulation. This model is evaluated through a user study that compares four architectures for remote manipulation with expert operators. The proposed method results in task times comparable to direct teleoperation while increasing the accuracy of the execution.; DISCOVER: Finally, I present the hybrid learning framework for discovering novel strategies for multi-step manipulation, by combining learning from demonstrations and self-learning through exploration in a simulation.
I demonstrate my approach by tasking a robot to manipulate blocks and assemble a stable structure. While the desired geometry is specified by the example, the underlying physics is unobservable. The robot uses Monte Carlo Tree Search (MCTS) with interleaved task and motion planning in simulation to find a robust strategy to accomplish the task.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122740</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning with generalized negative dependence : probabilistic models of diversity for machine learning</title>
<link>https://hdl.handle.net/1721.1/122739</link>
<description>Learning with generalized negative dependence : probabilistic models of diversity for machine learning
Mariet, Zelda Elaine.
This thesis establishes negative dependence as a powerful and computationally efficient framework to analyze machine learning problems that require a theoretical model of diversification. Examples of such problems include experimental design and model compression: subset-selection problems that require carefully balancing the quality of each selected element with the diversity of the subset as a whole. Negative dependence, which models the behavior of "repelling" random variables, provides a rich mathematical framework for the analysis of such problems. Leveraging negative dependence theory for machine learning requires (a) scalable sampling and learning algorithms for negatively dependent measures, and (b) negatively dependent measures able to model the specific diversity requirements that arise in machine learning. These problems are the focus of this thesis.; The first part of this thesis develops scalable sampling and learning algorithms for determinantal point processes (DPPs), popular negatively dependent measures with many applications to machine learning. For scalable sampling, we introduce a theoretically-motivated generative deep neural network for DPP-like samples over arbitrary ground sets. To address the learning problem, we show that algorithms for maximum likelihood estimation (MLE) for DPPs are drastically sped up with Kronecker kernels, and that MLE can be further enriched by negative samples. The second part of this thesis leverages negative dependence for core problems in machine learning. We begin by deriving a generalized form of volume sampling (GVS) based on elementary symmetric polynomials, and prove that the induced measures exhibit strong negative dependence properties.; We then show that classical forms of optimal experimental design can be cast as optimization problems based on GVS, for which we derive randomized and greedy algorithms to obtain the associated designs.
Finally, we introduce exponentiated strongly Rayleigh measures, which allow for simple tuning of the strength of repulsive forces between similar items while still enjoying fast sampling algorithms. The great flexibility of exponentiated strongly Rayleigh measures makes them an ideal tool for machine learning problems that benefit from negative dependence theory.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 139-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122739</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New guarantees for cryptographic circuits and data anonymization</title>
<link>https://hdl.handle.net/1721.1/122737</link>
<description>New guarantees for cryptographic circuits and data anonymization
Cohen, Aloni(Aloni Jonathan)
The first part of this thesis presents new definitions and constructions for three modern problems in cryptography: watermarking cryptographic circuits, updatable cryptographic circuits, and proxy re-encryption. The second part is dedicated to advancing the understanding of data anonymization. We examine what it means for a data anonymization mechanism to prevent singling out in a data release, a necessary condition for being considered effectively anonymized under the European Union's General Data Protection Regulation. We also demonstrate that heretofore theoretical privacy attacks against ad-hoc privacy-preserving technologies are in fact realistic and practical.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 305-320).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122737</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exact geometry algorithms for robotic motion planning</title>
<link>https://hdl.handle.net/1721.1/122736</link>
<description>Exact geometry algorithms for robotic motion planning
Deshpande, Ashwin.
The current generation of robotic motion planning algorithms is dominated by derivatives of the PRM and RRT algorithms. These methods abstract away all geometric information about the underlying problem into a collision checker. While this approach yields simple and general-purpose algorithms, it often comes at the cost of theoretical guarantees and performance. In this thesis, we explore deterministic motion planning algorithms that have explicit knowledge of the geometry of the underlying problems. By exploiting this geometry, we give algorithms that can achieve stronger theoretical guarantees and better performance on some problems. This thesis is divided into two main sections covering two different motion planning scenarios. In the first section, we explore issues of decidability in task and motion planning by giving a decision procedure for prehensile task and motion planning. In the second section, we present a holonomic motion planning algorithm that can almost always identify the exact optimal solution as a system of differential equations, which can be numerically solved to produce an asymptotically optimal solution.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-113).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122736</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling miniaturized grid-interface power conversion</title>
<link>https://hdl.handle.net/1721.1/122735</link>
<description>Enabling miniaturized grid-interface power conversion
Hanson, Alex J.(Alex Jordan)
Many of the most critical challenges of the twenty-first century revolve around energy and its management. Improved performance (efficiency, density) in electrical energy management systems requires advancements in a number of areas - semiconductor devices, passive energy storage components, and a variety of circuit- and system-level concerns. The sections of this thesis are somewhat distinct and may find application in a great variety of circumstances. Nevertheless, they can be understood as contributions to a single application system: a grid-interface power converter. These converters have several unique aspects that make them good targets for research, including a heavy reliance on magnetic components, relatively high voltages suited to emerging GaN transistors, a wide range of operating voltages and powers, and a twice-line-frequency energy storage component that is difficult to miniaturize. This thesis presents a high-frequency inductor structure with greatly improved density, an exploration of the limits of magnetics-based current sensing, a method for characterizing GaN losses with large-signal excitations, a control approach for miniaturizing grid-interface energy buffers, and a grid-interface circuit with several advantages over the state of the art.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 271-281).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122735</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The benefits and costs of writing a POSIX Kernel in a high-level language</title>
<link>https://hdl.handle.net/1721.1/122734</link>
<description>The benefits and costs of writing a POSIX Kernel in a high-level language
Cutler, Cody.
This dissertation presents an evaluation of the use of a high-level language (HLL) with garbage collection to implement a monolithic POSIX-style kernel. The goal is to explore if it is reasonable to use an HLL instead of C for such kernels, by examining performance costs, implementation challenges, and programmability and safety benefits. This dissertation contributes Biscuit, a kernel written in Go that implements enough of POSIX (virtual memory, mmap, TCP/IP sockets, a logging file system, poll, etc.) to execute significant applications. Biscuit makes liberal use of Go's HLL features (closures, channels, maps, interfaces, garbage collected heap allocation), which subjectively made programming easier. The most challenging puzzle was handling the possibility of running out of kernel heap memory; Biscuit benefited from the analyzability of Go source to address this challenge. On a set of kernel-intensive benchmarks (including NGINX and Redis) the fraction of kernel CPU time Biscuit spends on HLL features (primarily garbage collection and thread stack expansion checks) ranges up to 13%. The longest single GC-related pause suffered by NGINX was 115 microseconds; the longest observed sum of GC delays to a complete NGINX client request was 582 microseconds. In experiments comparing nearly identical system call, page fault, and context switch code paths written in Go and C, the Go version was 5% to 15% slower.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 73-78).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122734</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconducting photodetectors, nanowires, and resonators</title>
<link>https://hdl.handle.net/1721.1/122733</link>
<description>Superconducting photodetectors, nanowires, and resonators
Dane, Andrew E.(Andrew Edward)
Despite almost two decades of research on superconducting nanowire single-photon detectors (SNSPDs) and kinetic inductance detectors (KIDs), open questions remain about the photodetection processes in both. In this thesis, we detail our progress towards understanding and engineering two different physical phenomena relevant to most superconducting photodetectors: (1) the thermal boundary conductance between a superconducting metal and a dielectric substrate at liquid helium temperatures, and (2) the effect of a spatially varying superconducting gap on the behavior and lifetime of superconducting quasiparticles. Simple electrical measurements are shown to be an effective means of extracting the thermal boundary conductance between superconducting nanowires and various substrates. While our current understanding of this process is based on diffusive heat transfer, we argue that a phonon black-body emission model is more appropriate. We used this understanding to select a substrate that thermally isolates the nanowire, improving infrared detection performance. The substrate we identified, polyethylene terephthalate (PET), is a clear, flexible plastic onto which we successfully fabricated working niobium nitride SNSPDs. Finally, we detail our progress towards understanding how intrinsic variations in the superconducting gap of high-kinetic-inductance materials used to make KIDs, such as titanium nitride or niobium titanium nitride, could affect quasiparticle lifetimes in them. We fabricated and tested superconducting niobium resonators that incorporated gold nanodot decorations, intended to locally suppress the superconducting gap by proximity and provide a physical simulation of the high-kinetic-inductance case using better-understood materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122733</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-asymptotic results for point to point channels</title>
<link>https://hdl.handle.net/1721.1/122730</link>
<description>Non-asymptotic results for point to point channels
Collins, Austin Daniel.
This thesis demonstrates some non-asymptotic information theoretic results for point to point channels. Non-asymptotic information theory addresses the question: "For a fixed blocklength and fixed probability of error, what is the maximum number of codewords M that I can support?". Compare this to classical (asymptotic) information theory, which answers this question only for blocklengths tending to infinity and probability of error tending to zero. In this sense, non-asymptotic results are more difficult to derive, but are more practically applicable. First, we look at the multiple input multiple output (MIMO) coherent block fading channel with channel state information available at the receiver, called the MIMO-BF channel.; This is perhaps the most well-studied model for a wireless communication channel - it captures the setting where two wireless devices are communicating without a dominant line of sight between them, so the signal reflects off many surfaces before reaching the receiver. A typical example of this channel is communication between two cell phones in a city. The MIMO assumption means the transmitter and receiver may have multiple antennas - adding multiple antennas can increase achievable rates enormously while costing very little. This work characterizes the dispersion of the MIMO-BF channel. The dispersion is a fundamental channel quantity similar to capacity - it describes the rate penalty incurred for transmitting at a fixed blocklength and error probability. We first prove achievability and converse theorems, which together demonstrate that the dispersion is given by the conditional variance of the information density, minimized over all capacity-achieving input distributions.; We then give an analytic expression for the dispersion, and describe its implications in terms of the channel parameters. For example, we learn that dispersion scales linearly with the coherence time, while capacity is not a function of the coherence time. 
We then give an achievability bound to help numerically compute the finite blocklength rates, and demonstrate its application to the MIMO-BF channel. Second, we analyze the MISO case - where the transmitter has many antennas but the receiver has only one - which turns out to be an interesting special case. For this, we first give a theorem characterizing the input distributions that achieve capacity. It turns out that full-rate orthogonal design-like input distributions achieve capacity, along with the distribution with i.i.d. Gaussian entries.; It is shown that these orthogonal design objects are in fact the extremal objects of this channel from the point of view of dispersion - using them gives better performance than simply sending independent symbols from each antenna at each time step, a result that cannot be seen with only asymptotic analysis. In this way, orthogonal designs appear as the natural space-time coding scheme for the MISO channel. Finally, we analyze the problem of variable length list decoding with stop feedback. This is the following problem: a transmitter sends symbols one by one into a channel until the decoder says "stop". The decoder then outputs a list of L codewords - if the correct codeword is in the list, it succeeds; otherwise it makes an error. Hence there is a tradeoff between stopping time, number of messages, and the probability of error.; The question becomes: which indicators can tell the decoder that the correct message is in a set of L messages? We demonstrate for the BEC channel that it is possible to communicate with zero dispersion using a variable length list decoding scheme. However, for the BSC, surprisingly, you cannot stop in a way that gives zero dispersion when the list size is sub-exponential relative to the number of messages. Furthermore, we show an application to delayed variable length feedback - i.e. 
the receiver says "stop" but the transmitter only sees the stop signal after a delay, which gives a more practical way of using stop feedback.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 126-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122730</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Firms, industries, and technological change : a patent-based approach to studying disruption and disruptors</title>
<link>https://hdl.handle.net/1721.1/122729</link>
<description>Firms, industries, and technological change : a patent-based approach to studying disruption and disruptors
Metzler, Florian.
This thesis presents a new empirical approach as well as a new patent-based dataset for studying disruption and technology transition cases. At the core of this approach lies a novel engineering systems framework of technological change. The framework focuses on the relationship between changes in technological competencies and changes in product designs, and encompasses both firm-level and industry-level dynamics. The new framework and dataset are applied to the study of three cases of technology transition during the 1993-2012 period. The cases include (1) disruption in the mobile phone industry with a focus on Apple, BlackBerry, and Nokia; (2) disruption in the photography industry with a focus on Fujifilm, Canon, and Kodak; and (3) technology transition in the automotive industry with a focus on Toyota, Volkswagen, and GM.; The former two industries comprise widely discussed disruption cases, allowing me to demonstrate advantages of the presented approach and develop novel insights into these cases. The third case, on the automotive industry, generates complementary insights by considering an industry with products comprising more integrated product architectures. The case selection allows for cross-case comparisons to begin endogenizing industry-specific factors. The thesis' main contributions are methodological and theoretical: First, I present a new dataset - and corresponding data assembly methods - of comprehensive corporate patent portfolios. The portfolios take into account each firm's corporate family tree structure as well as acquisitions. 
As such, the dataset reflects the actual range of firms' codified technological activities more closely than previous efforts and enables a more accurate view of how technological change manifests in firms and industries.; To connect the data to theory, I develop a set of novel metrics to operationalize semantic concepts such as technological diversification and concentration of portfolios as well as firms' technological core and growth competencies. These metrics are based on a newly developed variance measure for hierarchically structured networks. I define growth competencies as competencies that undergo rapid year-to-year growth outside of a firm's core competencies. By identifying incumbents' growth competencies from historical data before major transitions, I am able to successfully hindcast future new entrants in the cases presented. Further, I introduce the concepts of technology space and product space as mappings of compositions of technological competencies and of technological competencies required by compositions of products. Second, the thesis makes theoretical contributions to the resource-based view (RBV) and disruption literatures.; Specifically, it presents a dynamic extension to the RBV, endogenizing technological change as well as firm-industry interconnections with regard to the emergence of technology convergences and the evolution of product designs. My findings suggest that a firm's relative position and movement in technology space need to be considered separately from its position and movement in product space, i.e. its changing composition of competencies and its changing composition of products. 
Specifically, whereas firms' movements in product space can appear abrupt and even surprising - such as the sudden entry into new markets - my analysis shows that changes in technology space tend to be slower, more continuous, and more predictable.; I find that in disruption cases such as with Apple's sudden "entry" into the mobile phone industry, the new framework reveals that it was in fact the mobile phone industry that gradually "entered" Apple's position in technology space - as the technological requirements of phone industry products became more and more similar to Apple's preexisting, and highly stable, competencies. Moreover, I extend the concept of technology-product connections, as put forth statically by RBV theorists, by adding a time-dependent dimension. I argue that incumbent failure - such as Nokia's and Kodak's - can be explained by incumbents' inability to diagnose and respond to the gradual weakening of their technology-product connections; in other words, by neglecting to either adjust their technological competencies or to adjust their product offerings in response to technological change.; In turn, a firm with greater awareness of its own composition of technological competencies relative to its competitors as well as the changing technological requirements of prevalent product designs can deliberately incorporate such insights into strategic decision-making. In the empirical cases, I observe the ability to sense dynamics in technology and product spaces relative to the firm, and the ability to time the firm's actions accordingly, to be more present in some firms than in others. I term the existence of such abilities timing and sensing capabilities and propose them to be a concrete and operationalizable subset of Dynamic Capabilities.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122729</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond industry : an expanded definition of authentic engineering design education</title>
<link>https://hdl.handle.net/1721.1/122728</link>
<description>Beyond industry : an expanded definition of authentic engineering design education
Saulnier, Christopher R.
Authentic approaches to design education are typically defined as experiences centered on industry involvement. This industry connection commonly takes the form of either projects provided by industry partners or practicing engineers who serve as mentors to students. After exploring the goals and current practices of design education, this dissertation proposes an expanded definition of authentic design education: any design project with impact beyond the classroom environment that encourages the development of a student's self-identity as an engineer. To investigate the potential benefits afforded by an expanded definition of authentic design, a new design class was developed, taught, and evaluated across four years. The class, entitled Design for the Wilderness, was developed with a focus on projects that have impact beyond the classroom environment. Students were required to design and build products that they relied on while traveling in remote wilderness environments.; These impactful projects required students to experience the results of their design decisions. Building on our experiences implementing Design for the Wilderness, a curricular approach of Design for Use is introduced that requires students to use products developed by their peers. Design for Use helps increase students' understanding of human-centered design principles by encouraging students to confront the interplay between their intentions when designing a product and their experiences when failing to understand the intentions behind products designed by their peers. This dissertation also considers a mechanical engineering capstone design class (MIT's 2.009). An interesting outcome of this class is that some teams continue to work on commercializing their products after the semester ends. 
Team characteristics most strongly correlated with persisting on product development beyond the end of the class are related to healthy team dynamics and a positive social environment.; Teams that persisted spent more of their time working together, had fewer teammates who worked significantly more or less than the team average, and spent more of their time simply "hanging out" in lab. Drawing on our findings from investigating multiple approaches to authentic design education, recommendations are made for the future development of effective engineering design classes.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 181-187).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122728</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomedical data sharing and analysis at scale : privacy, compaction, and integration</title>
<link>https://hdl.handle.net/1721.1/122727</link>
<description>Biomedical data sharing and analysis at scale : privacy, compaction, and integration
Cho, Hyunghoon.
Recent advances in high-throughput experimental technologies have led to the exponential growth of biomedical datasets, including personal genomes, single-cell sequencing experiments, and molecular interaction networks. The unprecedented scale, variety, and distributed ownership of emerging biomedical datasets present key computational challenges for sharing and analyzing these data to uncover new scientific insights. This thesis introduces a range of computational methods that overcome these challenges to enable scalable sharing and analysis of massive datasets in a range of biomedical domains. First, we introduce scalable privacy-preserving analysis pipelines built upon modern cryptographic tools to enable large amounts of sensitive biomedical data to be securely pooled from multiple entities for collaborative science. Second, we introduce efficient computational techniques for analyzing emerging large-scale sequencing datasets of millions of cells that leverage a compact summary of the data to speed up various analysis tasks while maintaining the accuracy of results. Third, we introduce integrative approaches to analyzing a growing variety of molecular interaction networks from heterogeneous data sources to facilitate functional characterization of poorly understood genes. The computational techniques we introduce for scaling essential biomedical analysis tasks to the large volume of data being generated are broadly applicable to other data science domains.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis. Page 307 blank.; Includes bibliographical references (pages 279-306).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122727</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social perspective of mobility sharing : understanding, utilizing, and shaping preference</title>
<link>https://hdl.handle.net/1721.1/122726</link>
<description>Social perspective of mobility sharing : understanding, utilizing, and shaping preference
Zhang, Hongmou, Ph. D., Massachusetts Institute of Technology.
Advances in information and communications technologies are enabling the growth of real-time ride sharing - whereby drivers and passengers or fellow passengers are paired up on car trips with similar origin-destinations and proximate time windows - to improve system efficiency by moving more people in fewer cars. Less well known, however, are the opportunities of shared mobility as a tool to foster and strengthen human interactions. In this dissertation, I used preference as a lens to investigate social interaction in mobility sharing, including how interpersonal preference in mobility sharing can be understood, utilized, and reshaped.; More specifically, I answered the questions of how preference could be used to match fellow passengers and to improve trip experiences; how gender, one of the key factors, may contribute to this preference; and, in the reverse direction, if there are preference factors that are unrespectable and need to be changed, whether mobility sharing can be used as a tool to change them and improve the integration of cities. In addition, I studied how time flexibility of trips can be incorporated into mobility sharing models to reduce congestion. For policy makers and planners, this dissertation partially answers, or provides a framework of analysis for, the following questions.; 1) How could preference in mobility sharing services be used or misused? What is the efficiency trade-off, and how should its use be regulated? 2) What factors may impact the preference for fellow passengers? Are the preference factors respectable, and what factors should be included or excluded in mobility sharing services from a regulation perspective? 3) How can mobility sharing be actively used as a tool to encourage more social interaction, especially across different social groups? What is the short-term cost, and the long-term benefit? 
For system designers of mobility sharing services, this dissertation can be used as a reference for the development of a preference-based mobility sharing platform. The following questions are addressed, and the methods can be improved when more data become available to system designers.; 1) If preference is to be used, what input data are needed, and how do they need to be processed for the preference-matching model? 2) What preference factors should be included in the system design, what factors should be used with caution, and what factors should be eliminated? 3) If time flexibility of trips is included in the system design, how much congestion can be reduced? What system design is needed in order to achieve this congestion reduction?
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; "June 2019." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 117-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122726</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information theoretic advances in zero-knowledge</title>
<link>https://hdl.handle.net/1721.1/122725</link>
<description>Information theoretic advances in zero-knowledge
Berman, Itay.
Zero-knowledge proofs have an intimate relation to notions from information theory. In particular, the class of all problems possessing statistical zero-knowledge proofs (SZK) was shown to have complete problems characterized by the statistical distance (Sahai and Vadhan [JACM, 2003]) and entropy difference (Goldreich and Vadhan [CCC, 1999]) of a pair of efficiently samplable distributions. This characterization has been extremely beneficial in understanding the computational complexity of languages with zero-knowledge proofs and deriving new applications from such languages. In this thesis, we further study the relation between zero-knowledge proofs and information theory. We show the following results: 1. Two additional complete problems for SZK characterized by other information theoretic notions: triangular discrimination and Jensen-Shannon divergence.; These new complete problems further expand the regime of parameters for which the STATISTICAL DIFFERENCE PROBLEM is complete for SZK. We further show that the parameterized STATISTICAL DIFFERENCE PROBLEM, for a regime of parameters in which this problem is not known to be in SZK, still shares many properties with SZK. Specifically, its hardness implies the existence of one-way functions, and it and its complement have a constant-round public-coin interactive protocol (i.e., they are in AM ∩ coAM). 2. The hardness of a problem related to the ENTROPY DIFFERENCE PROBLEM implies the existence of multi-collision resistant hash functions (MCRH). We also demonstrate the usefulness of such hash functions by showing that the existence of MCRH implies the existence of constant-round statistically hiding (and computationally binding) commitment schemes. 3. We initiate the study of zero-knowledge in the model of interactive proofs of proximity (IPP). We show efficient zero-knowledge IPPs for several problems.; We also show problems with efficient IPPs, for which every zero-knowledge IPP must be inefficient. 
Central in this study is showing that many of the statistical properties of SZK carry over to the IPP setting.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-179).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122725</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting cognitive impairment from spoken language</title>
<link>https://hdl.handle.net/1721.1/122724</link>
<description>Detecting cognitive impairment from spoken language
Alhanai, Tuka(Tuka Waddah Talib Ali Al Hanai)
Dementia comes second only to spinal cord injuries in terms of its debilitating effects, from memory loss to physical disability. The standard approach to evaluating cognitive conditions is the neuropsychological exam, conducted via in-person interviews to measure memory, thinking, language, and motor skills. Work is ongoing to determine biomarkers of cognitive impairment, yet one modality that has been relatively less explored is speech. Speech has the advantage of being easy to record, and it contains the majority of information transmitted during neuropsychological exams. To determine the viability of speech-based biomarkers, we utilize data from the Framingham Heart Study, which contains hour-long audio recordings of neuropsychological exams for over 5,000 individuals. The data is representative of a population and the real-world prevalence of cognitive conditions (3-4%). We first explore modeling cognitive impairment from a relatively small set of 92 subjects with complete information on audio, transcripts, and speaker turns. We loosen these constraints by modeling with only a fraction of audio (~2-3 minutes), for which the speaker segments are defined through text-based diarization. We next apply this diarization method to extract audio features from all 7,000+ recordings (most of which have no transcripts) to model cognitive impairment (AUC 0.83, spec. 78%, sens. 79%). Finally, we eliminate the need for feature engineering by training a neural network to learn higher-order representations from filterbank features (AUC 0.85, spec. 81%, sens. 82%). Our speech models exhibit strong performance and are comparable to the baseline demographic model (AUC 0.85, spec. 93%, sens. 65%). Further analysis shows that our neural network model automatically learns to detect specific speech activity which clusters according to: pause followed by onset of speech, short burst of speech, speech activity in high-frequency spectral energy bands, and silence.
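The AUC, specificity, and sensitivity figures reported above are standard binary-classification metrics; as a reminder of what they measure (this sketch is illustrative and uses hypothetical scores, not the thesis data), AUC is the probability that a random positive case scores above a random negative one, while sensitivity and specificity are the true-positive and true-negative rates at a chosen decision threshold:

```python
import numpy as np

def sens_spec(y_true, scores, threshold):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))
    fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    fp = np.sum(pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    # Probability a random positive outscores a random negative; ties count 0.5.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical example: two impaired (1) and two unimpaired (0) subjects.
y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
sens, spec = sens_spec(y, s, threshold=0.3)  # 1.0, 0.5
area = auc(y, s)                             # 0.75
```

Reporting all three together, as the abstract does, matters because AUC is threshold-free while sensitivity and specificity describe one operating point.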
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-165).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122724</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the selective permeability of biological hydrogels</title>
<link>https://hdl.handle.net/1721.1/122723</link>
<description>Understanding the selective permeability of biological hydrogels
Witten, Jacob Julian Seid.
Biological hydrogels are fundamental to life, from microbial biofilms to mucus and the nuclear pore in humans. These hydrogels exhibit complex selective permeability behavior, allowing the passage of some particles while blocking the penetration of others. This selective permeability is critical for understanding the biological and medicinal impact of mucus, which coats all non-keratinized epithelia in the body. Mucus controls the penetration of microbes, pollutants, and nanoparticles through a combination of steric and interactive (binding-based) constraints. For small molecules, binding to mucus, and in particular to mucin, the main gel-forming component of mucus, affects diffusive permeability and may also affect a molecule's biological or therapeutic activity. However, the molecular characteristics leading to mucus binding are not well understood. I therefore developed a mucus binding assay with substantially greater throughput than any existing assay, and combined it with a mucin binding screen to identify a new motif associated with binding to mucin. I also validated the link between binding to mucin and reduced activity in mucin for the antibiotic colistin. Next, I applied my binding technique to study the binding of a wide range of antibiotics and inhaled drugs to respiratory mucus, and identified previously unknown mucus binding interactions. These binding interactions could impact the activity of the drugs within the mucus, or impact their lung residence time in the case of highly muco-obstructive lung diseases. The nuclear pore, which controls the passage of material between the nucleus and the cytoplasm, is similar to mucus in that it too is a selectively permeable network of disordered proteins. Passage through the nuclear pore requires interaction with the network, an interaction that was initially thought to be purely hydrophobic in character. However, there is evidence that electrostatic interactions also partly govern nuclear pore transport.
Here, we apply a peptide-based system to study the interplay of hydrophobic and electrostatic interactions to further dissect the biochemistry underlying nuclear pore function.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 148-160).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122723</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic analog feedback control circuits in living cells</title>
<link>https://hdl.handle.net/1721.1/122722</link>
<description>Synthetic analog feedback control circuits in living cells
Teo, Jonathan Jin Yuan.
Models of biochemical reaction networks in cells are important for advancing our understanding of complex biological systems and for designing functional synthetic biological circuits. However, most models are based on a deterministic digital framework that is largely incompatible with the nonlinear dynamics, stochasticity, high-order feedback, cross talk, loading, and resource consumption found in biology. In contrast, analog circuit design is the nearly 100-year-old art of crafting and analyzing nonlinear, stochastic, coupled differential equations to perform a desired task, often to given speed, precision, input-sensitivity, power, load, or part-count constraints and in the presence of noise or device mismatch. In this thesis, we develop a canonical analog circuit that maps a wide class of biological circuits, whether at the DNA, RNA, protein, or small-molecule level, to design schematics that represent their underlying dynamical differential equations exactly. We then apply techniques from analog feedback circuit design to two concrete biological circuits to improve their feedback performance: 1) we show that a novel synthetic microbial operational amplifier (OpAmp) with three amplification stages, based on DNA, RNA, and protein, and with a dominant time constant, is capable of high open-loop gain and stable, robust, and precise closed-loop performance; 2) we show that a synthetic tissue-homeostasis stem-cell circuit with a novel incoherent feed-forward loop attenuates negative phase and thus improves the robustness and precision of its response to cell death in Type I diabetes. We also show that our novel use of both asymmetric and symmetric division of stem cells improves feedback-loop performance with respect to transients and robustness.
To illustrate the scalability of our approach to the large-scale and high-speed simulations of the future, we use digitally programmable analog microelectronic chips to run complex simulations in parallel. We develop a mapping that converts our analog schematics to a corresponding configuration on these chips, and demonstrate how to optimize the parameters of the biological OpAmp for high gain and improved performance. Our work illustrates that synthetic analog feedback control in living cells is amenable to rigorous design, analysis, simulation, and implementation with the tools of analog circuit design, and leads to novel and experimentally useful synthetic biological circuits.
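The link the abstract draws between high open-loop gain and robust, precise closed-loop behavior is the classic negative-feedback relation from analog circuit design; the following sketch (a textbook illustration, not taken from the thesis) shows numerically why closed-loop gain becomes nearly independent of the open-loop gain A once A is large:

```python
def closed_loop_gain(A, beta):
    # Standard negative-feedback relation: G = A / (1 + A * beta).
    # For A * beta >> 1, G approaches 1 / beta, so variations in A barely matter.
    return A / (1 + A * beta)

beta = 0.1          # feedback factor; ideal closed-loop gain is 1/beta = 10
g_nominal = closed_loop_gain(1e5, beta)  # ~9.999
g_halved = closed_loop_gain(5e4, beta)   # open-loop gain halved: still ~9.998
```

A 2x change in open-loop gain moves the closed-loop gain by only about 0.01%, which is the precision-and-robustness argument the OpAmp circuit in the thesis exploits.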
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-151).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122722</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding neurodegenerative disease-relevant molecular effects of perturbagens using a multi-omics approach</title>
<link>https://hdl.handle.net/1721.1/122721</link>
<description>Understanding neurodegenerative disease-relevant molecular effects of perturbagens using a multi-omics approach
Patel-Murray, Natasha L.(Natasha Leanna)
The complex etiology of neurodegenerative diseases is not fully understood, and the characterization of cellular pathways that are dysfunctional in these diseases is key for therapeutic development. Chemical and genetic perturbagens can probe cellular pathways to shed insight about both disease etiology and potential therapeutic targets. We analyzed the functional effects of chemical perturbagens in neurodegenerative disease models as evidenced by changes in transcriptomic, metabolomic, epigenomic, and proteomic data ("multi-omics" data). Our studies revealed novel modes of action for small molecule compounds that promote survival in a model of Huntington's Disease, a fatal neurodegenerative disorder. Integration of our multi-omics data using an interpretable network approach revealed that the autophagy and bioenergetics cellular pathways are affected by different sets of compounds that promote survival. Using staining and western blot assays, we validated the effect on autophagy for one set of compounds and found that the compounds activate this pathway. Using a cellular bioenergetics assay, we found that a second set of compounds shifts the bioenergetic flux from mitochondrial respiration to glycolysis, validating our network results. In a second study related to Huntington's Disease, we analyzed the effects of two peripheral huntingtin gene silencing techniques in mouse liver. We show that the transcriptional and metabolomic changes associated with both genetic silencing methods converge on similar cellular pathways, such as the immune response and fatty acid metabolism. As a whole, this thesis presents new insights into the functional effects of perturbagens that could impact neurodegenerative disease pathology and drug discovery.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122721</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging latent patterns in the study of living systems</title>
<link>https://hdl.handle.net/1721.1/122720</link>
<description>Leveraging latent patterns in the study of living systems
Cleary, Brian(Brian Lowman)
The development of high-throughput techniques to observe and perturb biological systems has led to remarkable progress in the last several decades. From the tremendous amounts of data being accumulated, new opportunities have emerged, including the possibility of finding latent patterns in high-dimensional variables that are reflective of underlying biological processes. While these methods have led to countless discoveries and innovations, it is clear there is much more we could learn by measuring and perturbing at far greater scales. Here, I advance methods to understand and utilize latent patterns in new types of high-dimensional data. I devise a method of analyzing networks of 'frequency interactions' in 16S/18S time series data, showing that these can be used to identify microbial communities and associated environmental factors. Then, as part of a highly collaborative project, I show how latent patterns in single-cell RNA-Seq can be used together with optimal transport analysis to identify cell types and cell-type trajectories, regulatory pathways, and cell-cell interactions in a time course of developmental reprogramming. I then step back to ask a fundamental question: how do we choose which observations and perturbations to make, and how many of each are necessary? I approach this question on the basis of the inherency of latent structure in biology, and on foundational mathematical results concerning the analysis of highly structured data. I present the beginnings of a framework to formalize how random composite experiments can make biological discovery more efficient by leveraging latent patterns. I first show how to recover individual genomes using covariance patterns in a series of composite (meta-)genomic data. I then describe how random composite measurements and compressed sensing can be used to make gene expression profiling more efficient.
Finally, I apply this idea to in situ imaging transcriptomics, demonstrating how many individual gene images can be efficiently recovered from a small number of composite gene images.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis. "June 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122720</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation by RNA-binding proteins : sequence determinants and evolutionary dynamics</title>
<link>https://hdl.handle.net/1721.1/122719</link>
<description>Regulation by RNA-binding proteins : sequence determinants and evolutionary dynamics
Alexis, Maria Sarah.
RNA-binding proteins (RBPs) regulate all aspects of RNA metabolism, such as splicing, localization, translation, and degradation. RNA processing is a critical component of gene expression regulation, and adaptive changes in RNA processing underlie many phenotypic differences between species. RBPs regulate RNA processing by recognizing RNA sequence elements (motifs) within RNAs. Studying the determinants of these RBP:RNA interactions is therefore key to understanding how RBPs select their targets, and how RNA processing evolves over time. This thesis presents three chapters revolving around these questions. First, I present a large-scale analysis of the evolution of gene expression across tissues, species, and studies. This study differs from previous studies in its usage of inter-sample distances to model gene expression divergence, a method that allowed us to reconcile disparate findings in the field. Second, I present a comprehensive study of the affinity landscapes of 78 human RBPs using RNA Bind-N-Seq (RBNS), an unbiased assay that determines the sequence, structure, and context preferences of RBPs. Integrated analysis of all 78 motifs reveals an unexpectedly low diversity of RNA motifs, implying frequent convergence of binding specificity towards a relatively small set of RNA motifs, many with low compositional complexity. Offsetting this trend, RBPs show extensive preferences for contextual features distinct from short linear motifs, including spaced "bipartite" motifs, biased flanking nucleotide composition, and bias away from or toward RNA structure. These results emphasize the importance of contextual features in RNA recognition, which likely enable targeting of distinct subsets of transcripts by different RBPs that recognize the same linear motif. Lastly, I compile a catalog of all known RBP specificities and examine their conservation patterns, in vivo binding patterns, and evolutionary trajectories across species.
This work demonstrates that RNA regulation can be well conserved despite rapid evolution of RBP binding sites, and highlights mechanisms that may contribute to this robustness. This phenomenon is well characterized for transcriptional regulation at promoters, but has not been well described for RNA regulation. Taken together, these studies advance our understanding of RBP target selection and how it evolves over time, thereby furthering our understanding of the basic mechanisms that govern gene regulation.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 125-146).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122719</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of interactions on correlation, thermalization, and transport of exciton-polaritons</title>
<link>https://hdl.handle.net/1721.1/122718</link>
<description>Effects of interactions on correlation, thermalization, and transport of exciton-polaritons
Yoon, Yoseob.
Light-matter interactions are fundamental processes that allow us not only to interrogate material properties but also to coherently control material phases that cannot be reached otherwise. Matter-matter interactions, on the other hand, result in strong correlations and emergent behavior that cannot be explained in terms of single-particle physics. Exciton-polaritons (hereafter "polaritons") are hybrid quasiparticles in a semiconductor quantum-well microcavity that exhibit both light-matter and matter-matter interactions. Polaritons have an effective mass inherited from the ultralight cavity photon, which makes polariton transport photon-like and allows macroscopic quantum phenomena such as Bose-Einstein condensation and superfluidity up to room temperature. Meanwhile, photon dressing only reduces the exciton-exciton interaction strength by the Hopfield coefficient, which keeps the polariton-polariton interaction strength exciton-like. Along with a narrow linewidth protected from inhomogeneous broadening, these properties make polaritons an excellent platform to study interaction-induced physics and nonlinear device applications such as ultralow-power optical switches. In this thesis, we investigated the effects of light-matter and matter-matter interactions on various aspects of polaritons. In the first part, we measured the polariton-polariton interaction strength by tracking the energy blueshift as a function of polariton density. This was enabled by separating and trapping polaritons away from the pumped region, where the measurement of polariton interactions can be obscured by much heavier particles such as a dark exciton reservoir. We provided possible mechanisms that explain the observed anomalously large blueshifts.
In the second part, we addressed a long-standing debate on whether polaritons can reach thermal equilibrium. We used a long-lifetime microcavity structure to achieve Bose-Einstein distributions of polaritons, the first demonstration of polaritons in equilibrium. This allowed us to measure equilibrium properties, such as temperature and chemical potential, and to map out the phase diagram of Bose-Einstein condensation in a quasi-two-dimensional system. We further investigated how all-optical trapping and polariton interactions enhance relaxation and thermalization processes. In particular, we found that a significant redistribution of polaritons occurs through the reduced density of states and polariton interactions. In the third part, we studied trapped eigenstates and interference patterns of polariton condensates in various trapping and pump geometries. Competition between eigenstates, and the selection of one of them, is well explained by the overlap of the real-space, momentum-space, and energy distributions of the pump and the eigenstate. A mismatch between the pump-induced potential profile and the polariton source profile was a key factor in determining the distribution of transported polaritons. In the last part, we extended the polariton physics to study topological and cooperative effects in open quantum systems. We demonstrated bulk Fermi arcs by connecting two exceptional points arising from the engineered non-Hermitian properties of a photonic crystal. In addition, we theoretically showed that a cascaded-cavity system can outperform a single-cavity system in terms of single-photon indistinguishability and efficiency, and that this works even with bad quantum emitters and practical cavity quality factors. Our work provides invaluable insights into the fundamental light-matter and matter-matter interactions, as well as the many-body physics of condensed matter and photonic systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 249-271).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122718</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large scale analysis of electronic effects in protein structure</title>
<link>https://hdl.handle.net/1721.1/122717</link>
<description>Large scale analysis of electronic effects in protein structure
Qi, Helena(Helena Wen)
Protein crystal structures provide a valuable source of information on the internal interactions of a protein and provide a starting point for simulations. In this thesis, we examine how large-scale analysis of protein structures can explain unexpected geometries and interactions and provide a starting point for further modeling. The large-scale analysis takes two forms: large datasets and large calculations. We first investigate unexpectedly short non-covalent distances in published protein crystal structures. We observe over 75,000 close contacts in a curated dataset of high-quality protein structures, and examine trends in which residues and atoms are involved in these close contacts. We characterize a subset of over 5,000 close contacts with quantum mechanical and molecular mechanical methods to understand their stability. We then examine a particular protein, acyl carrier protein, to see how charge parameterization affects its behavior in long-time molecular dynamics simulations. Finally, we test the Fukui shift analysis (FSA) method, which identifies how the frontier states of an active site are altered by the presence of an additional QM residue, to identify when QM treatment of a residue is essential.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 136-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122717</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding ultrafast photoacoustics for investigation of mechanical properties and thermal transport</title>
<link>https://hdl.handle.net/1721.1/122716</link>
<description>Expanding ultrafast photoacoustics for investigation of mechanical properties and thermal transport
Shin, Hyun Doug.
To address the need for broadband mechanical spectroscopy, femtosecond laser pulses were used to generate and detect acoustic waves. To expand the acoustic phonon frequency bandwidth and range, a thin metal film, a strongly magnetostrictive galfenol film, and strained piezoelectric InGaN/GaN multiple quantum wells were used as transducers. Acoustic wave detection methods included monitoring of optical transmittance/reflectance, polarization, and diffraction over time. The magnetostrictive material galfenol (Fe₁₋ₓGaₓ), with 80 percent iron and 20 percent gallium, was used as an acoustic transducer via the demagnetostriction effect. Galfenol showed great potential as an optimal transducer for ultrafast magnetostriction in both longitudinal and shear modes. Strained piezoelectric InGaN/GaN semiconductor superlattices were used to generate and study longitudinal THz acoustic phonons in GaN-based structures. During the investigation of the lifetimes of acoustic phonons at frequencies up to 1.4 THz, specular reflection from an air/GaN free surface was observed. The photo-excitation of THz acoustic phonons in layered structures was introduced as an effective noninvasive tool to investigate the integrity of the fabrication process. This study opened many possibilities for studying mechanical properties and thermal phonons. Next, thermal conductivity reduction due to carrier-phonon interactions was presented. Phonon contributions are critical to heat transport in semiconductors and insulators. To isolate the carrier contributions to the scattering events, photo-excited carriers were generated in silicon through pulsed laser excitation. To measure thermal conductivity changes, time-domain thermoreflectance and transient thermal grating techniques were employed with a carefully timed additional excitation pulse for carrier generation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-135).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122716</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two-dimensional terahertz rotational spectroscopy in the gas phase</title>
<link>https://hdl.handle.net/1721.1/122715</link>
<description>Two-dimensional terahertz rotational spectroscopy in the gas phase
Zhang, Yaqing, Ph.D., Massachusetts Institute of Technology.
Two-dimensional (2D) coherent spectroscopy has been developed to study molecular dynamics and structures for decades, but its extension into the terahertz (THz) regime remains rare. In this thesis, I describe several experiments using two-dimensional terahertz rotational spectroscopy. Employing intense THz electromagnetic fields and a differential chopping technique, we have extended multi-dimensional coherent spectroscopy into the THz regime. We have observed rotational dynamics of linear, symmetric-top, and asymmetric-top molecular species, indicating that 2D THz spectroscopy is an incisive tool for investigating collective quantum effects of the rotational degree of freedom. Based on the quantum mechanical rigid-rotor model, we have developed simulation and calculation approaches to disentangling spectroscopic signals from molecular rotations. We have shown ultrafast 2D THz photon echo spectroscopy of gaseous acetonitrile samples, revealing J-state-resolved rotational dynamics in symmetric-top molecular rotors. We have revealed nonlinear rotational couplings and many-body interactions in water vapor, uncovering the strongly correlated nature of rotational quantum states in water molecules. Additionally, we have observed experimental evidence of linear and nonlinear THz spectroscopy of stable water dimers in the vicinity of atmospheric conditions. We have reported dual-type rotational couplings and a propensity for K-state-dependent cross-peaks in sulfur dioxide, highlighting distinct rotational properties in slightly asymmetric-top molecules.
We have measured the quartic THz effect using two-dimensional THz-Raman hybrid spectroscopy, opening the way for understanding and applying higher-even-order THz-matter coherences beyond the linear and quadratic THz field effects. Utilizing density matrix and time propagation approaches, we have developed a set of simulation and calculation methodologies to characterize rotational dynamics in the gas phase based on the quantum mechanical rigid-rotor model. Our work shows the remarkable capability of 2D THz spectroscopy to interrogate rotational dynamics in the gas phase, laying a foundation for understanding and manipulating nonlinear light-molecule interactions via multi-dimensional coherent THz spectroscopy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122715</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graphite-conjugated catalysts : bridging heterogeneous and homogeneous catalysts</title>
<link>https://hdl.handle.net/1721.1/122714</link>
<description>Graphite-conjugated catalysts : bridging heterogeneous and homogeneous catalysts
Oh, Seokjoon.
This interconversion occurs via complex multistep, multielectron reactions, which can be carried out by either metallic heterogeneous or molecular homogeneous electrocatalysts. Metallic heterogeneous catalysts have a continuum of electronic states that distributes the redox burden of multielectron reactions, allowing for efficient catalysis. However, heterogeneous catalysts display a variety of active sites and local electronic structures, and are difficult to fine-tune at a molecular level. On the other hand, homogeneous catalysts allow a great degree of synthetic control over the catalytic active site. Moreover, the relative ease of spectroscopic characterization allows a mechanistic understanding of molecular catalysis at a level that is unattainable for heterogeneous catalysis. To bridge the advantages of both types of catalysts, we have developed a surface functionalization strategy for conjugating molecularly well-defined active sites to graphitic carbon surfaces. First, I will discuss the preparation and characterization of two new types of N-heterocyclic linkages for conjugation to graphitic carbon surfaces. This work presents a general method for characterizing modified carbon surfaces with molecular-level structural detail. Then, I will present the electrocatalytic carbon dioxide reduction activity of a graphite-conjugated rhenium catalyst, and compare its catalytic behavior to that of a molecular analog.
Electrochemical and spectroscopic data show that graphite-conjugated catalysts do not behave identically to their molecular analogs, but rather show properties similar to those of metallic heterogeneous catalysts, providing a unique bridge between the worlds of heterogeneous and homogeneous catalysis. Finally, in the appendix, I will present a chapter on the stability of graphite-conjugated linkages under electrochemical polarization, followed by a chapter on catalyzing the reduction of molecular pyridinium species using a graphite-conjugated rhodium catalyst.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis. Page 156 blank.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122714</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Terahertz-field-induced nonlinearity in phonons, electrons and spins</title>
<link>https://hdl.handle.net/1721.1/122713</link>
<description>Terahertz-field-induced nonlinearity in phonons, electrons and spins
Li, Xian, Ph.D., Massachusetts Institute of Technology.
In this thesis, I describe work aimed at understanding nonlinear material responses initiated by strong terahertz (THz) field excitation. I discuss two aspects of nonlinear THz spectroscopy in condensed-matter materials: the development of experimental THz capabilities and spectroscopy methods, and their application to investigating ultrafast nonlinear dynamics in different classes of materials. I first describe the THz generation, detection, and spectroscopy methods that are the basis of all of our studies. We have generated strong single- and multi-cycle THz pulses covering several spectral ranges using inorganic and organic crystals, and developed linear and nonlinear THz spectroscopy techniques to interrogate light-matter interactions based on different observables and/or symmetry criteria. We have demonstrated a new method for studying time-domain electron paramagnetic resonance that allows us to measure THz-frequency fine structures of spin energy levels on a tabletop, and have developed nonlinear two-dimensional (2D) magnetic resonance spectroscopy to distinguish nonlinear THz-spin interaction pathways. We also show that THz-pump, optical-probe spectroscopy, including THz-field-induced second-harmonic generation spectroscopy and THz Kerr effect spectroscopy, can be extended to study phase transitions in quantum paraelectric and topological materials. We have employed these THz methods to drive and detect nonlinear responses from several degrees of freedom in the materials. We have demonstrated collective coherent control over material structure by inducing a quantum paraelectric-to-ferroelectric phase transition using intense THz electric fields in strontium titanate. We show that a single-cycle THz field is able to drive ions along the microscopic pathway leading directly to their locations in a new crystalline phase on an ultrafast timescale.
We have driven highly nonlinear lattice and electronic responses in a topological crystalline insulator by dynamically perturbing the protecting crystalline symmetry through THz phonon excitation. We have observed oscillations in optical reflectivity that may be associated with electronic gap opening and modulation in the topological surface states. Finally, we have demonstrated nonlinear manipulation of collective spin waves in a canted antiferromagnet using strong THz magnetic fields and we have observed full sets of the second- and third-order nonlinear responses in 2D THz magnetic resonance spectra, which are accurately reproduced in our numerical simulations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-210).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122713</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for the structural modification and characterization of bacterial glycans</title>
<link>https://hdl.handle.net/1721.1/122712</link>
<description>Methods for the structural modification and characterization of bacterial glycans
Calabretta, Phillip Joseph.
Despite the importance of carbohydrates in cellular processes, there are few tools for their study in the context of a cell. The finding that non-natural monosaccharides could be internalized, processed, and displayed in cellular glycans in the early 1990s led to the development of metabolic incorporation probes for mammalian and microbial organisms. Taking advantage of the development of rapid, bioorthogonal chemistries these probes have provided valuable insight into intermolecular interactions, biosynthetic and metabolic pathways, and intercellular interactions. The successful application of metabolic incorporation probes to bacteria has been hampered by their unfastidious use of monosaccharides for energy production. In this work, we describe an alternative approach to metabolic incorporation, termed biosynthetic incorporation, using synthetic sugar donors that do not require intracellular processing prior to glycosyl transfer.; We evaluated our approach in cells of the suborder Corynebacterianeae, for which no useful probes had been described. Within Corynebacterianeae exist important human pathogens, including Mycobacterium tuberculosis. These bacteria utilize a host of lipid-linked sugar donors to construct polysaccharides implicated in immune avoidance and intrinsic antibiotic resistance. We produced a library of sugar donor analogs that were assessed for processing in cells. The most promising analog was used to evaluate incorporation in Corynebacterium glutamicum and Mycobacterium smegmatis, two widely used models of M. tuberculosis. We found that the sugar donor analog could work within the cell's traditional workflow, so analogs bearing azido-groups were synthesized. Incorporation of the azido-analogs labels nascent cell wall as determined by fluorescence microscopy. 
We have also begun synthesizing and evaluating probes targeting other polysaccharides within Corynebacterianeae.; These findings establish biosynthetic incorporation as a novel mode of polysaccharide structure modification. Furthermore, biosynthetic incorporation probes offer advantages over metabolic incorporation, including the lack of requisite intracellular processing and the ability to target glycans that have been recalcitrant to existing methods.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis. Page 184 blank.; Includes bibliographical references (pages 151-183).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122712</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural modeling of dynamic polymer networks</title>
<link>https://hdl.handle.net/1721.1/122711</link>
<description>Structural modeling of dynamic polymer networks
Alt, Eric Allen.
Polymer network based gels are an important class of materials with a wide range of applications. Dynamic polymer networks, which crosslink via the formation of reversible bonds, in particular have great potential as stimuli responsive, mechanically tunable, and self-healing materials. Many important emergent properties of these materials, such as mechanical strength, are mediated by their underlying network structure, which can be characterized by the network topology and spatial distribution of nodes. Therefore, unlocking the full potential of these materials through rational design requires an understanding of how network structure arises as a function of network-forming precursor design. Because the bonds that crosslink dynamic polymer networks are reversible, stresses initially present or otherwise induced in these systems can be relieved through network rearrangement. As such, given sufficient time to relax, the network structure is determined by equilibrium thermodynamics.; This work presents a thermodynamic formalism which characterizes the free energy of a network in terms of node positional, network topological, and polymer conformational entropies. Through this lens, and aided by numerical calculations and simulations of model networks, we show how the free energy landscape with respect to density relates to factors which can be readily controlled through precursor design, such as polymer length and node size. Additionally, Monte Carlo simulations of explicit networks reveal that thermodynamic relaxation can give rise to spatial heterogeneity in the arrangement of network nodes. In the last chapter we use the tools developed in the earlier chapters to explore how these same design parameters influence the topological statistics of equilibrium networks. 
In addition to showing how internode connectivity increases with polymer length and system density, we find that inhomogeneity due to spatial relaxation can also lead to greater network connectivity.; Finally, we explore the weakening of network topologies due to substitution of polymer-linked node forming components with topologically non-functional counterparts, finding that larger nodes fare better than their smaller counterparts in maintaining network connectivity when these substitutions are made.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 125-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122711</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic circular RNA for protein expression</title>
<link>https://hdl.handle.net/1721.1/122710</link>
<description>Synthetic circular RNA for protein expression
Wesselhoeft, R. Alexander(Robert Alexander)
Messenger RNA (mRNA) has broad potential for therapeutic and engineering applications. One fundamental limitation of mRNA is its relatively short half-life in biological systems, caused in part by rapid exonuclease-mediated degradation upon delivery. Circular RNA (circRNA), a type of single-stranded RNA with a contiguous structure that lacks the end motifs necessary for exonuclease recognition, may be resistant to this mechanism of degradation and therefore may exhibit superior stability. However, challenges in circularization, purification, and protein expression have impeded a thorough investigation of exogenous circRNA. By rationally designing ubiquitous accessory sequences to facilitate circularization, we engineered a permuted self-splicing intron that efficiently circularized RNAs up to 5 kb in length in vitro.; With the addition of these accessory sequences, we were able to demonstrate nearly complete circularization of precursor RNAs containing an internal ribosome entry site (IRES) for translation initiation and a coding region, such as that encoding erythropoietin or eGFP. We found that translation from optimized circRNA was robust, and circRNA protein expression stability far exceeded that of both unmodified and nucleoside-modified linear mRNA in some cellular contexts. We monitored cytokine release and antiviral defense induction in sensitive cells transfected with circRNA purified by different methods and found that the immunogenicity and stability of circRNA preparations were dependent on the degree of purity, with small amounts of contaminating linear RNA leading to robust cellular immune responses.; In contrast to purified unmodified linear mRNA, purified unmodified circRNA was invisible to several RNA sensors including RIG-I and endosomal toll-like receptors (TLRs) and did not provoke a significant cytokine response upon transfection. 
Using purified circRNA, we finally provided, to our knowledge, the first demonstration of exogenous circRNA delivery and translation in vivo, and showed that the duration of circRNA translation was extended in adipose tissue in comparison to unmodified and uridine-modified linear mRNAs. In total, this work suggests that circRNA is a promising alternative to linear mRNA for therapeutic applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-126).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122710</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cholesterol and egg activation</title>
<link>https://hdl.handle.net/1721.1/122709</link>
<description>Cholesterol and egg activation
Wang, Li
The high-density lipoprotein (HDL) receptor SR-BI controls the structure and fate of plasma HDL. SR-BI knockout (KO) females are infertile, apparently due to their abnormal, cholesterol-enriched HDL particles. In this thesis, my colleagues and I examined the growth and meiotic progression of SR-BI KO oocytes and found that they underwent normal germinal vesicle breakdown; however, SR-BI KO eggs, which had accumulated excess cholesterol in vivo, spontaneously activated; they escaped metaphase II (MII) arrest and progressed to pronuclear, metaphase III and anaphase/telophase III stages. Eggs from fertile, wild-type mice were activated when loaded in vitro with excess cholesterol using a cholesterol/methyl-β-cyclodextrin complex, phenocopying SR-BI KO oocytes.; In vitro cholesterol loading of eggs induced elevation of intracellular calcium (the [Ca²⁺]i spike), reduction in MPF and MAPK activities, extrusion of a second polar body and progression to meiotic stages beyond MI. These results suggest the infertility of SR-BI KO females is due, at least in part, to excess cholesterol in eggs inducing premature activation, and that cholesterol can activate wild-type mouse eggs to escape from MII arrest. In Chapter 3, I studied the detailed mechanism of egg activation induced by excess cholesterol. I showed that the [Ca²⁺]i spike induced by excess cholesterol was necessary for egg activation and also sufficient for further development of the egg to the blastocyst stage. 
Excess cholesterol, in calcium-free medium, did not induce changes in [Ca²⁺]i, indicating that extracellular calcium was required for the [Ca²⁺]i spike and also suggesting the entry of extracellular calcium via plasma membrane channel(s).; After screening calcium channel inhibitors, performing single-cell mRNA sequencing, and conducting activation experiments using eggs from mutant females, I was able to show that co-inhibition of both the L-type calcium channel Cav1.3 and the transient receptor potential channel TRPC5, but not inhibition of either one alone, blocked the excess-cholesterol-induced [Ca²⁺]i spike and egg activation. This result suggests that excess cholesterol activates MII eggs by opening Cav1.3 or TRPC5. Our results raise the possibility that excess cholesterol might also activate the same channels in other systems, and thus contribute to pathophysiology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis. Page 236 blank.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122709</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic pricing mechanisms for airline revenue management : theory, heuristics, and implications</title>
<link>https://hdl.handle.net/1721.1/122707</link>
<description>Dynamic pricing mechanisms for airline revenue management : theory, heuristics, and implications
Wittman, Michael D.(Michael David)
Even as the distribution and sale of commercial airline tickets has shifted in recent years from physical reservation offices to the Internet, many airline commercial processes remain highly reliant on pre-Internet technologies and standards. This legacy infrastructure compels airlines to publish a discrete set of prices in each market they serve, and to select prices for each itinerary from among only this limited set of possible price points. Recent advancements in distribution technology, such as the New Distribution Capability (NDC), offer airlines the chance to break away from these constraints. These new standards enable the creation of customized offers with prices that could be generated dynamically in real time. While airlines have shown interest in these new technologies, practical methods for integrating dynamic pricing into existing airline revenue management (RM) and distribution systems have yet to be defined and evaluated by academics or practitioners.; In this work, we propose the first mechanisms for dynamic pricing designed specifically for use in the airline industry. By selectively providing increments or discounts based on demand segmentation and estimates of willingness-to-pay (WTP), our mechanisms can increase airline revenues by stimulating new bookings from price-sensitive travelers while encouraging more price-inelastic travelers to buy up to higher price points. Moreover, the methods are compatible with the pricing, RM, and distribution systems currently used by airlines today. Our dynamic pricing heuristics emerge from the development of a novel theoretical model of customer choice. 
Using the model, we introduce a new concept called "conditional WTP" to describe how a customer's willingness-to-pay for an itinerary can change depending on the other alternatives available in their choice set.; We show how assuming an unchanging maximum WTP for air travel, as in past work on dynamic pricing, can lead to overestimation of WTP in competitive environments, and describe how an airline's estimates of conditional WTP play an integral role in our dynamic pricing mechanisms. We test our dynamic pricing methods in the Passenger Origin-Destination Simulator (PODS), a robust agent-based booking simulation that models the interactions between passengers and airlines. In a complex, competitive network, we find that our heuristics can increase airline revenues by 1-4% over traditional pricing and RM alone. Incrementing prices can result in revenue gains through an increase in yield, and discounting can lead to higher revenues through demand stimulation and share shift from other airlines. In both cases, we identify a phenomenon we call "forecast spiral-up" which increases yield by protecting more seats for higher-value fare classes.; We also develop a variant of the heuristic in which multiple substitutable flights are priced simultaneously, leading to additional revenue gains. Finally, we provide the first in-depth assessment of the practical implications of dynamic pricing for the airline industry. We focus on airline concerns that dynamic pricing could lead to price wars, excessive discounting, and a race to the bottom. We also evaluate some of the potential legal implications and customer reactions that could emerge as dynamic pricing becomes more commonplace. These analyses provide new insight into how airline competition could potentially change as dynamic pricing is integrated into traditional airline processes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 223-228).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122707</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information contraction and decomposition</title>
<link>https://hdl.handle.net/1721.1/122692</link>
<description>Information contraction and decomposition
Makur, Anuran.
Information contraction is one of the most fundamental concepts in information theory as evidenced by the numerous classical converse theorems that utilize it. In this dissertation, we study several problems aimed at better understanding this notion, broadly construed, within the intertwined realms of information theory, statistics, and discrete probability theory. In information theory, the contraction of f-divergences, such as Kullback-Leibler (KL) divergence, χ²-divergence, and total variation (TV) distance, through channels (or the contraction of mutual f-information along Markov chains) is quantitatively captured by the well-known data processing inequalities.; These inequalities can be tightened to produce "strong" data processing inequalities (SDPIs), which are obtained by introducing appropriate channel-dependent or source-channel-dependent "contraction coefficients." We first prove various properties of contraction coefficients of source-channel pairs, and derive linear bounds on specific classes of such contraction coefficients in terms of the contraction coefficient for χ²-divergence (or the Hirschfeld-Gebelein-Rényi maximal correlation). Then, we extend the notion of an SDPI for KL divergence by analyzing when a q-ary symmetric channel dominates a given channel in the "less noisy" sense. Specifically, we develop sufficient conditions for less noisy domination using ideas of degradation and majorization, and strengthen these conditions for additive noise channels over finite Abelian groups.; Furthermore, we also establish equivalent characterizations of the less noisy preorder over channels using non-linear operator convex f-divergences, and illustrate the relationship between less noisy domination and important functional inequalities such as logarithmic Sobolev inequalities. 
Next, adopting a more statistical and machine learning perspective, we elucidate the elegant geometry of SDPIs for χ²-divergence by developing modal decompositions of bivariate distributions based on singular value decompositions of conditional expectation operators. In particular, we demonstrate that maximal correlation functions meaningfully decompose the information contained in categorical bivariate data in a local information geometric sense and serve as suitable embeddings of this data into Euclidean spaces.; Moreover, we propose an extension of the well-known alternating conditional expectations algorithm to estimate maximal correlation functions from training data for the purposes of feature extraction and dimensionality reduction. We then analyze the sample complexity of this algorithm using basic matrix perturbation theory and standard concentration of measure inequalities. On a related but tangential front, we also define and study the information capacity of permutation channels. Finally, we consider the discrete probability problem of broadcasting on bounded indegree directed acyclic graphs (DAGs), which corresponds to examining the contraction of TV distance in Bayesian networks whose vertices combine their noisy input signals using Boolean processing functions.; This generalizes the classical problem of broadcasting on trees and Ising models, and is closely related to results on reliable computation using noisy circuits, probabilistic cellular automata, and information flow in biological networks. Specifically, we establish phase transition phenomena for random DAGs which imply (via the probabilistic method) the existence of DAGs with logarithmic layer size where broadcasting is possible. We also construct deterministic DAGs where broadcasting is possible using expander graphs in deterministic quasi-polynomial or randomized polylogarithmic time in the depth. 
Lastly, we show that broadcasting is impossible for certain two-dimensional regular grids using techniques from percolation theory and coding theory.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 327-350).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122692</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient computing for autonomous navigation using algorithm-and-hardware co-design</title>
<link>https://hdl.handle.net/1721.1/122691</link>
<description>Efficient computing for autonomous navigation using algorithm-and-hardware co-design
Zhang, Zhengdong,Ph.D.Massachusetts Institute of Technology.
Autonomous navigation algorithms are the backbone of many robotic systems, such as self-driving cars and drones. However, state-of-the-art autonomous navigation algorithms are computationally expensive, requiring powerful CPUs and GPUs to enable them to run in real time. As a result, it is prohibitive to deploy them on miniature robots with limited computational resources onboard. To tackle this challenge, this thesis presents an algorithm-and-hardware co-design approach to design energy-efficient algorithms that are optimized for dedicated hardware architectures at the same time. It covers the design for three essential modules of an autonomous navigation system: perception, localization, and exploration.; Compared with previous research that considers either algorithmic improvements or hardware architecture optimizations, our approach leads to algorithms that not only have lower time and space complexity but also map efficiently to specialized hardware architectures, resulting in significantly improved energy efficiency and throughput. First, this thesis studies how to design an energy-efficient visual perception system using the deformable part models (DPM) based object detection algorithm. It describes an algorithm that enforces sparsity in the data stored on a chip, which reduces the memory requirement by 34% and lowers the cost of the classification by 43%. Together with other hardware optimizations, this technique leads to an object detection chip that runs at 30 fps on 1920 x 1080 videos while consuming only 58.6mW of power.; Second, this thesis describes a systematic way to explore algorithm-hardware design choices to build a low-power chip that performs visual inertial odometry (VIO) to localize a vehicle. Each of the components in a VIO pipeline has multiple algorithmic choices with different time and space complexity. However, some algorithms of lower time complexity can be more expensive when implemented on-chip. 
This thesis examines each of the design choices from both the algorithm's and the hardware's points of view and presents a design that consumes 24 mW of power while running at up to 90 fps and achieving near state-of-the-art localization accuracy. Third, this thesis presents an efficient information-theoretic mapping system for exploration. It features a novel algorithm called Fast computation of Shannon Mutual Information (FSMI) that computes the Shannon mutual information (MI) between prospective range measurements and the environment.; The FSMI algorithm features an analytic solution that avoids the expensive numerical integration required by the previous state-of-the-art algorithms, enabling FSMI to run three orders of magnitude faster in practice. We also present an extension of the FSMI algorithm to 3D mapping; the algorithm leverages the compression of a large 3D map using run-length encoding (RLE) and achieves 8x acceleration in a real-world exploration task. In addition, this thesis presents a hardware architecture designed for the FSMI algorithm. The design consists of a novel memory banking method that increases the memory bandwidth so that multiple FSMI cores can run in parallel while maintaining high utilization. A novel arbiter is proposed to resolve the memory read conflicts between multiple cores within one clock cycle. The final design on an FPGA achieves more than 100x higher throughput compared with a CPU while consuming less than 1/10 of the power.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 211-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122691</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constructing and evaluating weak memory models</title>
<link>https://hdl.handle.net/1721.1/122690</link>
<description>Constructing and evaluating weak memory models
Zhang, Sizhuo.
A memory model for an instruction set architecture (ISA) specifies all the legal multithreaded-program behaviors, and consequently constrains processor implementations. Weak memory models are a consequence of the desire of architects to preserve the flexibility of implementing optimizations that are used in uniprocessors, while building a shared-memory multiprocessor. Commercial weak memory models like ARM and POWER are extremely complicated: it has taken over a decade to formalize their definitions. These formalization efforts are mostly empirical--they try to capture empirically observed behaviors in commercial processors--and do not provide any insights into the reasons for the complications in weak-memory-model definitions. This thesis takes a constructive approach to the study of weak memory models. We first construct a base model for weak memory models by considering how a multiprocessor is formed by connecting uniprocessors to a shared memory system.; We try to minimize the constraints in the base model as long as the model enforces single-threaded correctness and matches the common assumptions made in multithreaded programs. With the base model, we can show not only the differences among different weak memory models, but also the implications of these differences, e.g., more definitional complexity or more implementation flexibility or failures to match programming assumptions. The construction of the base model also reveals that allowing load-store reordering (i.e., a younger store is executed before an older load) is the source of definitional complexity of weak memory models. We construct a new weak memory model WMM that disallows load-store reordering, and consequently, has a much simpler definition. We show that WMM has almost the same performance as existing weak memory models.; To evaluate the performance/power/area (PPA) of weak memory models versus that of strong memory models like TSO, we build an out-of-order superscalar cache-coherent multiprocessor. 
Our evaluation considers out-of-order multiprocessors of small sizes and benchmark programs written using portable multithreaded libraries and compiler built-ins. We find that the PPA of an optimized TSO implementation can match the PPA of implementations of weak memory models. These results provide the key insight that load execution in TSO processors can be as aggressive as, or even more aggressive than, that in weak-memory-model processors. Based on this insight, we further conjecture that weak memory models cannot provide better performance than TSO in the case of high-performance out-of-order processors. However, whether weak memory models have advantages over TSO in the case of energy-efficient in-order processors or embedded microcontrollers remains an open question.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 211-224).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122690</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning models for functional genomics and therapeutic design</title>
<link>https://hdl.handle.net/1721.1/122689</link>
<description>Machine learning models for functional genomics and therapeutic design
Zeng, Haoyang,Ph.D.Massachusetts Institute of Technology.
Due to the limited size of training data available, machine learning models for biology have remained rudimentary and inaccurate despite significant advances in machine learning research. With the recent advent of high-throughput sequencing technology, an exponentially growing number of genomic and proteomic datasets have been generated. These large-scale datasets admit the training of high-capacity machine learning models to characterize sophisticated features and produce accurate predictions on unseen examples. In this thesis, we attempt to develop advanced machine learning models for functional genomics and therapeutic design, two areas with ample data deposited in public databases and tremendous clinical implications. The shared theme of these models is to learn how the composition of a biological sequence encodes a functional phenotype and then leverage such knowledge to provide insight for target discovery and therapeutic design.; First, we design three machine learning models that predict transcription factor binding and DNA methylation, two fundamental epigenetic phenotypes closely tied to gene regulation, from DNA sequence alone. We show that these epigenetic phenotypes can be well predicted from the sequence context. Moreover, the predicted change in phenotype between the reference and alternate allele of a genetic variant accurately reflects its functional impact and improves the identification of regulatory variants causal for complex diseases. Second, we devise two machine learning models that improve the prediction of peptides displayed by the major histocompatibility complex (MHC) on the cell surface. Computational modeling of peptide display by MHC is central to the design of peptide-based therapeutics.; Our first machine learning model introduces the capacity to quantify uncertainty in the computational prediction and proposes a new metric for peptide prioritization that reduces false positives in high-affinity peptide design. 
The second model improves on state-of-the-art performance in MHC-ligand prediction by employing a deep language model to learn the sequence determinants for auxiliary processes in MHC-ligand selection, such as proteasome cleavage, that are omitted by existing methods due to the lack of labeled data. Third, we develop machine learning frameworks to model the enrichment of an antibody sequence in phage-panning experiments against a target antigen. We show that the prevalence of low-specificity antibodies can be reduced by a computational procedure using machine learning models trained for multiple targets. Moreover, machine learning can help to design novel antibody sequences with improved affinity.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-230).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122689</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure learning in high-dimensional graphical models</title>
<link>https://hdl.handle.net/1721.1/122688</link>
<description>Structure learning in high-dimensional graphical models
Wang, Yuhao, Ph.D., Massachusetts Institute of Technology.
In this thesis, we develop efficient and provably consistent algorithms for learning the structure of undirected and directed (causal) graphical models in the high-dimensional setting. Structure learning in graphical models is a central problem in statistics with numerous applications including learning gene regulatory networks from RNA-seq data and learning the dependence structure among stocks from financial time series. Part I of this thesis investigates the problem of learning causal directed acyclic graph (DAG) models from a combination of observational and interventional data. While previous methods considered greedy search algorithms on the space of graphs, we propose to view a DAG as given by a permutation and an undirected graph and instead consider greedy search on the smaller space of permutations. We present the first consistency guarantees of a permutation-based greedy search algorithm based on observational data.; In addition, we show that this algorithm naturally extends to the interventional setting, thereby resulting in the first provably consistent algorithm for causal structure discovery from a mix of observational and interventional data. In Part II, we consider causal inference based on heterogeneous observational data collected from naturally perturbed systems. Specifically, we investigate two questions, namely 1) learning the difference between two causal DAGs, and 2) jointly estimating multiple related causal DAGs. To answer question 1), we provide the first provably consistent method for directly estimating the differences in a pair of causal DAGs without separately learning two possibly large and dense DAGs. 
To answer question 2), we provide a joint estimation procedure based on ℓ0-penalized maximum likelihood estimation and prove that this procedure leads to a faster convergence rate than estimating each DAG separately.; Finally, in Part III, we consider the problem of estimating undirected graphical models under distributional constraints. More specifically, we consider a particular form of positive dependence, known as total positivity. Such a constraint is relevant, for example, for portfolio selection, since assets are often positively dependent. Methods for learning undirected graphical models usually require a particular choice of the tuning parameter for consistent estimation, which is in general unknown a priori and hence a major limitation in applications. We show here that an undirected graphical model under total positivity can be learned consistently without any tuning parameters. The proposed methods are illustrated on various synthetic and real datasets from genomics and finance.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 223-232).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122688</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable neural networks via alignment and distribution propagation</title>
<link>https://hdl.handle.net/1721.1/122686</link>
<description>Interpretable neural networks via alignment and distribution propagation
Malalur, Paresh (Paresh G.)
In this thesis, we aim to develop methodologies to better understand and improve the performance of Deep Neural Networks in various settings where data is limited or missing. Unlike data-rich tasks where neural networks have achieved human-level performance, other problems are naturally data-limited; on these, such models have fallen short of human-level performance, leaving abundant room for improvement. We focus on three types of problems where data is limited - one-shot learning and open-set recognition in the one-shot setting, unsupervised learning, and classification with missing data. The first setting of limited data that we tackle is when there are only a few examples per object type. During object classification, an attention mechanism can be used to highlight the area of the image that the model focuses on, thus offering a narrow view into the mechanism of classification.; We expand on this idea by forcing the method to explicitly align images to be classified to reference images representing the classes. The mechanism of alignment is learned and therefore does not require that the reference objects are anything like those being classified. Beyond explanation, our exemplar-based cross-alignment method enables classification with only a single example per category (one-shot) or in the absence of any labels about new classes (open-set). While one-shot and open-set recognition operate in cases where complete data is available for only a few examples, the unsupervised and missing-data settings address cases where labels are missing or where only partial input is available, respectively.
Variational Auto-Encoders are a popular unsupervised learning model that learns to map the input distribution into a simple latent distribution.; We introduce a mechanism of approximate propagation of Gaussian densities through neural networks, using the Hellinger distance metric to find the best approximation, and demonstrate how to use this framework to improve the latent code efficiency of Variational Auto-Encoders. Expanding on this idea further, we introduce a novel method to learn the mapping between the input space and latent space which further improves the efficiency of the latent code by overcoming the variational bound. The final limited data setting we explore is when the input data is incomplete or very noisy. Neural Networks are inherently feed-forward, and hence inference methods developed for probabilistic models cannot be applied directly. We introduce two different methods to handle missing data. We first introduce a simple feed-forward model that redefines the linear operator as an ensemble to reweight the activations when portions of its receptive field are missing.; We then use some of the insights gained to develop deep networks that propagate distributions of activations instead of point activations, allowing us to use message passing methods to compensate for missing data while maintaining the feed-forward style approach when data is not missing.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 145-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122686</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards resilient plug-and-play microgrids</title>
<link>https://hdl.handle.net/1721.1/122685</link>
<description>Towards resilient plug-and-play microgrids
Fonkwe Fongang, Edwin.
Microgrids have the potential to increase renewable energy penetration, reduce costs, and improve the reliability of the electric grid. However, today's microgrids are unreliable, lack true modularity, and operate with rudimentary control systems. This thesis research makes contributions in the areas of microgrid modeling and simulation; microgrid testing and model validation; and advanced control design and tools in microgrids. These contributions are a step toward the design, commissioning, and operation of resilient plug-and-play (pnp) microgrids, which will pave the way towards a more sustainable and energy-abundant electric future for all.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 159-164).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122685</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photoinduced dynamics studied by ultrafast single-shot pump-probe spectroscopy</title>
<link>https://hdl.handle.net/1721.1/122684</link>
<description>Photoinduced dynamics studied by ultrafast single-shot pump-probe spectroscopy
Cheng, Yu-Hsiang.
This thesis focuses on the development of dual-echelon single-shot spectroscopy and its applications to the study of irreversible photoinduced dynamics. First, the ultrafast laser sources and the related control and characterization techniques are discussed. In particular, we have invented a two-stage dual-beam noncollinear optical parametric amplifier to provide tunable sources for pump-probe spectroscopy. Next, the experimental setup of dual-echelon single-shot spectroscopy is described in great detail, and possible noise sources and correction methods are explored. Using the single-shot technique, we studied photoinduced dynamics in three different materials. In bismuth, we found a transition into a transient symmetric phase at high fluences. We showed the coherent control of phonon parameters with pump-pump-probe experiments. We also simulated the carrier and phonon dynamics using a modified two-temperature model. In tellurium, we demonstrated that the amorphization of crystalline tellurium induced by femtosecond pulses is a thermal process. We also estimated the lattice temperature by the change in phonon frequency. In a strained manganite film, we observed a photoinduced persistent insulator-to-metal transition and showed the partial recovery of the generated metallic phase to the insulating phase.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 155-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122684</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>InGaAs MOSFETs for logic and RF applications : reliability, scalability and transport studies</title>
<link>https://hdl.handle.net/1721.1/122683</link>
<description>InGaAs MOSFETs for logic and RF applications : reliability, scalability and transport studies
Cai, Xiaowei, Ph.D., Massachusetts Institute of Technology.
InGaAs has emerged as an extraordinary n-channel material due to its superb electron transport properties and low-voltage operation. With tremendous advancements over the years, InGaAs MOSFETs have attracted much attention as promising device candidates for both logic and THz applications. However, many challenges remain. This thesis addresses some of the critical issues facing InGaAs MOSFETs and advances the understanding of the limiting factors confronting InGaAs MOSFET technology. First, it identifies a new instability mechanism in self-aligned InGaAs MOSFETs caused by fluorine migration and passivation of Si dopants in n-InAlAs. This problem is successfully mitigated by eliminating n-InAlAs from the device structure. The new device design achieves improved stability and record device performance. Second, it evaluates the impact of oxide trapping in InGaAs MOSFETs.; A comprehensive PBTI study shows that oxide trapping deteriorates device stability, resulting in threshold voltage shifts and degraded device performance. In addition, oxide trapping also compromises DC device performance. High frequency and fast pulse measurements reveal a rich spectrum of oxide traps with different capture/emission times. Furthermore, oxide trapping also complicates the extraction of fundamental parameters in InGaAs MOSFETs and leads to an underestimation of channel mobility. Thus, a new method has been developed, immune to the impact of oxide traps, to evaluate the intrinsic charge-control relationship of the device and accurately estimate mobility. Third, this thesis re-evaluates the impact of channel scaling on device performance and transport in InGaAs planar MOSFETs and FinFETs.
In both cases, mobility degradation with channel thickness or fin width scaling is observed to be much less than suggested by conventional CV methods.; When the impact of oxide trapping is avoided, scaling-induced degradation is found to be mitigated and promising intrinsic transistor performance is revealed. Notably, InGaAs FinFETs exhibit g_m,max at 1 GHz competitive with current silicon FinFET technology and high mobility even in very narrow fins (μ_peak ~ 570 cm²/V·s at W_f = 7 nm). This thesis highlights the importance of mitigating oxide trapping. Further, in light of the results obtained here, the prospects of InGaAs MOSFET technology merit a reassessment.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 133-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122683</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, fabrication, and characterization of an ultra-low cost inductively-coupled plasma chemical vapor deposition tool for micro- and nanofabrication</title>
<link>https://hdl.handle.net/1721.1/122561</link>
<description>Design, fabrication, and characterization of an ultra-low cost inductively-coupled plasma chemical vapor deposition tool for micro- and nanofabrication
Gould, Parker Andrew.
The high cost of semiconductor fabrication equipment has traditionally represented a large barrier to entry for groups seeking to develop or commercialize novel micro- and nanoscale devices. Much of the cost barrier stems from the large size of the substrates processed in this equipment, and the associated complexity of maintaining consistent operation across the full substrate area. By scaling the substrate size down from the 150-300 mm diameter sizes commonly seen in today's production environments, the capital cost and physical footprint of tools for micro- and nanoscale fabrication can be dramatically decreased, while still retaining a similarly high level of performance. In this work, an ultra-low cost inductively-coupled plasma chemical vapor deposition (ICP-CVD) system for processing substrates up to 50.8 mm (2") in diameter is presented. The ICP-CVD system is built within a modular vacuum tool architecture that allows sections of the full tool to be easily and inexpensively replaced to adapt to new processing conditions or provide additional functionality. The system uses a non-pyrophoric mixture of silane (1.5% in helium) and low substrate temperatures (≤ 150 °C) to deposit uniform silicon-based films of quality comparable to films deposited in research-grade commercial tools. Using response surface methods, the performance of the ICP-CVD system has been characterized for both silicon dioxide and silicon nitride films, and repeatable control of the deposited film properties, including deposition rate, index of refraction, film stress, and density, has been demonstrated.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 223-232).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122561</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to solve problems in computer vision with synthetic data</title>
<link>https://hdl.handle.net/1721.1/122560</link>
<description>Learning to solve problems in computer vision with synthetic data
Jaroensri, Ronnachai.
Deep neural networks (DNNs) have become the tool of choice for many researchers due to their superior performance. However, for DNNs to reach their full potential, a large enough dataset must be available. This poses a severe limitation on the problems to which DNNs can be applied. Fortunately, many problems in computer vision have well-understood physical models, and can be simulated readily. This thesis considers the use of synthetic data to allow the use of DNNs to solve problems in computer vision. First, we consider using synthetic data for problems where collection of real data is not feasible. We focus on the problem of magnifying small motion in videos. Using synthetic data allows us to train DNN models that magnify motion with reduced artifacts and better noise handling compared to traditional signal-processing-based algorithms. Then, we discuss the importance of the realism of the generated data. We focus on realistic camera pipeline simulation, and use it to study blind denoising in real images. We show that our noise simulation based on a realistic camera pipeline significantly outperforms simplified noise models commonly used in the literature. Finally, we show that synthetic data can also be useful for more general computer vision research. We use synthetic data to study the effect of label quality on the semantic segmentation task. Synthetic data provides us with large enough datasets that we can study the trade-off between the quality and quantity of the data. We find that the accuracy of prediction depends largely on the estimated time required for humans to annotate the data, and that fine-tuning prediction after training on low-quality labels offers the best trade-off between annotation effort and accuracy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122560</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, fabrication, and characterization of a compact magnetron sputtering system for micro/nano fabrication</title>
<link>https://hdl.handle.net/1721.1/122559</link>
<description>Design, fabrication, and characterization of a compact magnetron sputtering system for micro/nano fabrication
Hsing, Mitchell David.
A general rule of thumb for new semiconductor fabrication facilities (fabs) is that revenues from the first year of production must match the capital cost of building the fab itself. With modern fabs routinely exceeding $1 billion to build, this rule serves as a significant barrier to entry for groups seeking to commercialize new semiconductor devices aimed at smaller market segments which require a dedicated process. To address this gap in the industry, we are developing a 1" Fab line of dedicated tools that process small 1-2" wafers and feature the same functionality as large-scale commercial micro/nano fabrication tools, but with a significant reduction in cost and footprint. To make the envisioned 1" Fab a reality, this thesis describes the design, development, and testing of a sputtering physical vapor deposition tool, a critical tool in the 1" Fab line of tools.; The tool is designed to be compatible with the 1" Fab's four-module, modular tool infrastructure, and also to allow for sharing of its peripheral equipment with other components of the 1" Fab. The modularity feature allows multiple tools to be created using an interchangeable tool platform, while the shared backend equipment feature allows for a sizable cost-saving benefit, as the cost of peripheral equipment for any given tool is up to 70% of the tool's total cost. Our developed sputtering tool features the successful implementation of these two design components with a final build cost of around $25k - roughly one-seventh of the cost of a commercial tool. The sputtering tool's performance was fully characterized for both reactive and nonreactive sputtering processes. The tool's non-reactive metal depositions were examined in detail using a design of experiment response surface model.; Deposition rates of up to 5.5 Å/s were observed while maintaining a uniformity of ~3% across the wafer.
Utilizing a direct sputter technique, this represents a deposition rate that is 4x faster than state-of-the-practice tools while attaining the same level of uniformity. Alongside the development of metal deposition processes, the reactive sputtering capabilities of the tool were also demonstrated through successful process development for the deposition of aluminum nitride (AlN). Three unique operating regions for AlN reactive sputtering were discovered, with the highest-quality AlN depositions observed in the transition region. Stable and repeatable depositions were achieved via the development of two control methods - voltage control and flow control. Using this optimized process, highly c-axis aligned films with columnar growth structures were observed, indicating the production of high-quality AlN films.; This successfully developed tool, alongside its optimized processes, is well suited for integration into the 1" Fab, further enabling the realization of our envisioned low-cost micro/nano fabrication platform.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 215-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122559</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for efficient radio frequency power conversion</title>
<link>https://hdl.handle.net/1721.1/122558</link>
<description>Techniques for efficient radio frequency power conversion
Jurkov, Alexander S.
A diverse range of radio-frequency (RF) power applications demand RF power generation systems that allow for dynamic output power control while having the capability to efficiently deliver power into a varying load. While some of these existing and emerging applications are characterized by narrowband or single-frequency operation, others require operation over a range of frequencies. In such applications, the system architecture typically comprises an RF power amplifier (PA) or inverter along with a tunable impedance matching network (TMN). Electronically-controlled TMNs offer substantial benefits when it comes to the implementability of such highly reconfigurable and adaptive RF systems, as they allow for proper impedance termination of the PA or inverter over the operating load and frequency range. This work explores the design of TMNs based on a solid-state technique that allows for faster and more accurate impedance matching compared to traditional approaches. The performance and design of such TMNs are demonstrated for plasma driving applications at 13.56 MHz. In addition, this work proposes techniques for designing switched-mode RF inverters that can operate efficiently over a wide load impedance range. These techniques are applied to the design of class E and class Φ2 inverter prototypes at 27.12 MHz, and their ability to handle large load modulation while maintaining high operating efficiency is demonstrated. The techniques presented in this work can be further applied to the integration of an RF power amplifier/inverter and a TMN into a single multi-transistor architecture capable of efficiently operating across frequency and load variation while providing dynamic output power control.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 293-304).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122558</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning for spoken dialogue systems : application to nutrition</title>
<link>https://hdl.handle.net/1721.1/122557</link>
<description>Deep learning for spoken dialogue systems : application to nutrition
Korpusik, Mandy B.
Personal digital assistants such as Siri, Cortana, and Alexa must translate a user's natural language query into a semantic representation that the back-end can then use to retrieve information from relevant data sources. For example, answering a user's question about the number of calories in a food requires querying a database with nutrition facts for various foods. In this thesis, we demonstrate deep learning techniques for performing a semantic mapping from raw, unstructured, human natural language directly to a structured, relational database, without any intermediate pre-processing steps or string matching heuristics. Specifically, we show that a novel, weakly supervised convolutional neural architecture learns a shared latent space, where vector representations of natural language queries lie close to embeddings of database entries that have semantically similar meanings. The first instantiation of this technology is in the nutrition domain, with the goal of reducing the burden on individuals monitoring their food intake to support healthy eating or manage their weight. To train the models, we collected 31,712 written and 2,962 spoken meal descriptions that were weakly annotated with only information about which database foods were described in the meal, but not explicitly where they were mentioned. Our best deep learning models achieve 95.8% average semantic tagging F1 score on a held-out test set of spoken meal descriptions, and 97.1% top-5 food database recall in a fully deployed iOS application. We also observed a significant correlation between data logged by our system and that recorded during a 24-hour dietary recall conducted by expert nutritionists in a pilot study with 14 participants. Finally, we show that our approach generalizes beyond nutrition and database mapping to other tasks such as dialogue state tracking.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 207-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122557</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving performance and security of indirect memory references on speculative execution machines</title>
<link>https://hdl.handle.net/1721.1/122556</link>
<description>Improving performance and security of indirect memory references on speculative execution machines
Kiriansky, Vladimir L.(Vladimir Lubenov),1979-
Indirect memory references hobble efficient and secure execution on current processor architectures. Traditional hardware techniques such as caches and speculative execution are ineffective on demanding workloads, such as in-memory databases, machine learning, and graph analytics. While terabytes of DRAM are now available in public cloud machines, indirect memory references in large working sets often incur the full penalty of a random DRAM access. Furthermore, caches and speculative execution enable the recently discovered Spectre family of side-channel attacks, which allow untrusted neighbors in a public cloud to steal secrets. In this thesis, we introduce complementary software and hardware techniques to improve the performance of caches and speculative execution, and to block the largest attack class with low overhead. MILK is our C++ extension to improve data cache locality.; Milk's programming model preserves parallel program semantics and maps well to the Bulk-Synchronous Parallel (BSP) theoretical model. Within a BSP superstep, which may encompass billions of memory references, Milk captures the temporal and spatial locality of ideal infinite caches on real hardware and provides up to 4x speedup. CIMPLE is our domain specific language (DSL) to improve the effectiveness of speculative execution in discovering instruction level parallelism and memory level parallelism. Improving memory parallelism on current CPUs allows up to ten memory references in parallel to reduce the effective DRAM latency. Speculative execution is constrained by branch predictor effectiveness and can only uncover independent accesses within the hardware limits of instruction windows (up to 100 instructions). With Cimple, interleaved co-routines expose instruction and memory level parallelism close to ideal hardware with unlimited instruction windows and perfect predictors.; On in-memory database index data structures, Cimple achieves up to 6x speedup. 
DAWG is our secure cache architecture that prevents leaks that occur through measurement of the cache effects of speculative indirect memory references. Unlike performance isolation mechanisms such as Intel's Cache Allocation Technology (CAT), DAWG blocks both speculative and non-speculative side-channels by isolating cache protection domains. DAWG incurs no overhead over CAT for isolation in public clouds. DAWG also enables OS isolation with efficient sharing and communication via caches, e.g., in system calls.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122556</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards anonymous and metadata private communication at Internet scale</title>
<link>https://hdl.handle.net/1721.1/122555</link>
<description>Towards anonymous and metadata private communication at Internet scale
Kwon, Young Hyun.
As the world becomes more connected, privacy is becoming harder to maintain. From social media services to Internet service providers to state-sponsored mass-surveillance programs, many outlets collect sensitive information about users and the communication between them - often without the users ever knowing about it. In response, many Internet users have turned to end-to-end encryption, like Signal and TLS, to protect the content of their communication. Unfortunately, these measures do little to hide the metadata of the communication, such as when and with whom a user is communicating. In scenarios where the metadata are sensitive, encryption alone is not sufficient to ensure users' privacy. Most prior communication systems that provide metadata privacy fall into one of two categories: (1) systems that provide formal privacy guarantees against global adversaries but do not scale to large numbers of users, or (2) systems that scale easily to a large user base but do not provide strong guarantees against global adversaries. In this thesis, I will present three systems that aim to bridge the gap between the two categories to enable private communication with strong guarantees for many millions of users. First, I will present Atom, a horizontally scalable anonymous broadcast system for short messages that defends against a global adversary who monitors the entire network and controls a significant fraction of the servers, while scaling easily by adding more servers to its network. Then, I will present Quark, another horizontally scalable anonymous broadcast system that trades bandwidth for latency to achieve more than an order of magnitude speed-up over Atom under the same threat model. Finally, I will present XRD, which provides metadata-private communication between two honest users against the same adversary using a novel cryptographic primitive called aggregate hybrid shuffle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122555</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable photonics for quantum and classical information processing</title>
<link>https://hdl.handle.net/1721.1/122554</link>
<description>Programmable photonics for quantum and classical information processing
Steinbrecher, Gregory R.
In this thesis, I explore the application of integrated photonic systems to quantum information processing as well as quantum and classical communications. The common thread throughout this work is the efficacy of variational numerical optimization in the design and optimization of photonic/bosonic systems. I present the programmable nanophotonic processor (PNP) platform that we developed, which is one way to realize an arbitrarily reconfigurable linear optics platform. I explore the prospects of realizing high-fidelity quantum gates in this system, demonstrating through black-box numerical optimization that we can compensate for a realistic model of fabrication error in the silicon photonics platform. Next, I discuss the design and construction of a next-generation PNP laboratory testbed, from the silicon photonics design up through the thermal and mechanical packaging and the custom control and monitoring electronics. I discuss experiments using PNPs as a novel type of optical network switch, capable of both unicast and multicast operation, demonstrating its benefits in a small network testbed. Looking towards the future, I show that the integration of optical nonlinearities with PNPs would enable a quantum optical neural network (QONN) platform, demonstrating through simulation that these QONNs can be optimized to perform a variety of quantum and classical information processing tasks. I then expand the application of these systems from information processing to communications, showing that QONNs provide a natural platform to realize one-way quantum repeaters. Finally, I demonstrate the efficacy of the numerical techniques used in this thesis on a related system: cold atoms trapped in an optical lattice, the dynamics of which are similar to those of photons with interactions. We show that optimizing the parameters of a simple one-dimensional model of this system can realize a universal gate set for quantum computing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-156).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122554</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the financial system of developing economies</title>
<link>https://hdl.handle.net/1721.1/122543</link>
<description>Essays on the financial system of developing economies
Sripakdeevong, Parit.
The thesis consists of three chapters, which explore the financial system of rural Thai villages through the lens of the Townsend Thai Monthly Survey. The first two chapters examine savings behaviour, while the last chapter investigates borrowing. In the first chapter, I apply the estimation technique of Deaton and Paxson (2000) to the monthly version of the Townsend Thai Household Survey and find that the individual savings rate decreases with individual age. This result contrasts with the flat savings profile found when estimation is conducted at the household level. I further extend the Deaton-Paxson technique to deal with autocorrelation in panel data and noisy estimates due to small sample sizes. Applying the appropriate corrections generates the inverse-U-shaped savings rate profile predicted by the life-cycle model. In the second chapter, I test the model of Amador, Werning, and Angeletos (2006) empirically against the data of the Townsend Thai Monthly Survey, supplemented by Christopher Woodruff's behavioural survey. From the data, I estimate the model's inputs: the household's income distribution, level of risk aversion, and hyperbolic discount rate. The model identifies the subgroup of households where a minimum savings policy is optimal and predicts the optimal minimum savings value. For this subgroup, I find that the predicted minimum savings values have a strong positive correlation with the actual amount saved by the household. The relationship is weaker for households outside the subgroup, as the minimum savings policy is no longer optimal. The third chapter, joint with Robert Townsend, documents the existence of active, high-volume, and relatively sophisticated money markets in villages in Thailand. As with traditional markets, loan repayment can be deferred through standard restructuring. But there are also more complicated credit refinancing chains involving multiple parties and short/medium maturities. 
From risk sharing regressions, we find that borrowers surprisingly have a higher income coefficient than non-borrowers. However, this is not because financial access is detrimental, but instead because more risk-tolerant individuals self-select into debt. Yet those engaged in credit refinancing chains have the smoothest consumption against income shocks of all, as risk tolerance is dominated by low transaction and verification costs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 94-97).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122543</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in econometrics and machine learning</title>
<link>https://hdl.handle.net/1721.1/122542</link>
<description>Essays in econometrics and machine learning
Semenova, Vira.
Establishing the link between a cause and an effect is a fundamental question in social science. Standard assumptions about human behavior (e.g., rationality) imply restrictions on the plausible values of the causal effect. In addition to this effect, these restrictions may depend on additional summaries of human behavior. Estimation of these additional parameters presents a trade-off between capturing the complexity of human decision-making and constraining it enough to deliver precise estimates. I resolve this tension by incorporating modern machine learning tools into the estimation of the additional parameters, delivering high-quality estimates of the causal effect and counterfactual outcomes. I estimate the causal effect in a two-stage procedure. At the first stage, I estimate the additional summaries of human behavior with modern machine learning tools. At the second stage, I plug the first-stage output into the sample analog of the restriction that identifies the causal effect. I modify the second-stage restriction to make it insensitive to any regularization biases present in the first-stage components. The resulting second-stage estimate of the causal effect is of high quality: it converges at the fastest attainable rate and can be used to test hypotheses and build confidence intervals for the causal effect. I apply this idea to a wide class of economic models, including dynamic games of imperfect information, treatment effects in the presence of endogenous sample selection, and reduced-form demand estimation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 209-213).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122542</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of DNA methylation and genomic imprinting in Arabidopsis</title>
<link>https://hdl.handle.net/1721.1/122539</link>
<description>Dynamics of DNA methylation and genomic imprinting in Arabidopsis
Picard, Colette Lafontaine.
DNA methylation is an epigenetic mark that is highly conserved and important in diverse cellular processes, ranging from transposon silencing to genomic imprinting. In plants, DNA methylation is both mitotically and meiotically heritable, and changes in DNA methylation can be generationally stable and have long-lasting consequences. This thesis aims to improve understanding of DNA methylation dynamics in plants, particularly across generations and during reproduction. In the first project, I present an analysis of the generational dynamics of gene body methylation using recombinant inbred lines derived from differentially methylated parents. I show that while gene body methylation is highly generationally stable, changes in methylation state occur nonrandomly and are enriched in regions of intermediate methylation. Important DNA methylation changes also occur during seed development in flowering plants, and these changes underlie genomic imprinting, the phenomenon of parent-of-origin-specific gene expression. In plants, imprinting occurs in the endosperm, a seed tissue that functions analogously to the mammalian placenta. Imprinted expression is linked to DNA methylation patterns that serve to differentiate the maternally- and paternally-inherited alleles, but the mechanisms used to achieve imprinted expression are often unknown. I next explore imprinted expression and DNA methylation in Arabidopsis lyrata, a close relative of the model plant Arabidopsis thaliana. I find that the majority of imprinted genes in A. lyrata endosperm are also imprinted in A. thaliana, suggesting that imprinted expression is generally conserved. Surprisingly, a subset of A. lyrata imprinted genes are associated with a novel DNA methylation pattern and may be regulated by a different mechanism than their A. thaliana counterparts. I then explore the genetics of paternal suppression of the seed abortion phenotype caused by mutation of a maternally expressed imprinted gene. 
Finally, I present the first large single-nuclei RNA-seq dataset generated in plants, reporting data from 1,093 individual nuclei obtained from developing seeds. I find evidence of previously uncharacterized cell states in endosperm, and examine imprinted expression at the single-cell level. Together, these projects contribute to our understanding of DNA methylation and imprinting dynamics during plant development, and highlight the strong generational stability of certain DNA methylation patterns.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 210-226).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122539</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transgene-free strategies for wireless control of animal physiology using magnetite nanoparticles</title>
<link>https://hdl.handle.net/1721.1/122538</link>
<description>Transgene-free strategies for wireless control of animal physiology using magnetite nanoparticles
Senko, Alexander W.(Alexander William)
Bioelectronic medicines are emerging therapies designed to control human physiology using electrically actuated stimuli instead of drugs. The most famous example is deep brain stimulation (DBS) for Parkinson's disease, in which electrodes are used to control brain activity and prevent tremors. An idealized version of this therapy would use soft materials and be wireless in order to be minimally invasive and cause minimal damage to brain tissue. Magnetic fields are an appealing candidate for wireless therapies because, at many frequencies and amplitudes, the human body is similar enough in its magnetic response to that of a vacuum that magnetic fields can penetrate arbitrarily deep. When combined with magnetic nanoparticles of biocompatible iron oxide, which can dissipate heat or produce forces when subjected to applied magnetic fields, magnetic fields can be applied from outside the body to evoke a physiological response within. This thesis describes the synthesis of large disc-shaped magnetic particles that undergo mechanical motion under lower-frequency alternating magnetic fields. This mechanical motion enables a new paradigm for activating mechanosensitive ion channels, with increased scalability of the magnetic field apparatus compared to the high-frequency fields needed to produce heat from magnetic nanoparticles. Wireless magnetic nanoparticle-mediated stimulation has often relied on transgenes, but by choosing tissues that endogenously express the proteins required to detect the physical stimuli (like heat or force) produced by the nanoparticles, it is possible to avoid the need for transgenes. Not relying on transgenes significantly lowers the barrier to clinical translation of this therapy platform.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 130-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122538</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unfolding genome organization in interphase</title>
<link>https://hdl.handle.net/1721.1/122537</link>
<description>Unfolding genome organization in interphase
Abdennur, Nezar Alexander
Genomic contact frequency maps obtained from high throughput chromosome conformation capture technologies have revealed several organizing patterns of mammalian interphase chromosomes, including self-interacting topologically associating domains (TADs) which are believed to function as coherent gene regulatory neighborhoods. However, the mechanisms driving these patterns are still unknown. In this thesis, I describe and apply computational methods that test the predictions of a recently proposed loop extrusion model in the context of experimental perturbations of its key molecular players. In the first project I introduce a new data model, file format, and supporting software package to cope with the challenges of the increasing size and resolution of Hi-C datasets, including a parallel and scalable matrix balancing implementation. In the second project, I show that depletion of the Structural Maintenance of Chromosomes (SMC) complex, cohesin, in non-cycling mouse liver cells completely eliminates the appearance of TADs in Hi-C maps while preserving genome compartmentalization. In the third project, I demonstrate that depletion of a closely related SMC complex, condensin II, which plays a major role in mitotic chromosome condensation but is also found in the nucleus in interphase, has no impact on gene expression or the maintenance of genome organization in non-dividing cells. In the final project, I compile further evidence for loop extrusion in interphase by employing a combination of polymer simulations and meta-analysis of several Hi-C studies that performed targeted perturbations to modulate the presence of cohesin and the insulator protein, CTCF, on chromatin. Together, these projects show that rather than being folded in a hierarchical fashion, mammalian genomes in interphase are organized by at least two distinct and antagonistic processes: global compartmental segregation dependent on epigenetic state, and local compaction dependent on cohesin. 
The latter process is likely to be the dynamic extrusion of chromatin loops driven by a yet-to-be-characterized motor activity of cohesin complexes and limited by DNA-bound CTCF extrusion barriers.
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-166).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122537</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-assembly of lead-halide-perovskite laser particles</title>
<link>https://hdl.handle.net/1721.1/122536</link>
<description>Self-assembly of lead-halide-perovskite laser particles
Cho, Sangyeon, Ph. D. Massachusetts Institute of Technology.
As profiling the molecular states of cellular subpopulations has become increasingly important for understanding complex systems in biology and medicine, considerable efforts are being made to develop multiplexed techniques. While current fluorescent probes play indispensable roles, their broad emission spectra (about 30-100 nm) limit multiplexing capability. Recently, optical probes emitting narrowband laser spectra (about 0.1-1 nm), called 'laser particles', have drawn attention. Semiconductor microdisk lasers fabricated by top-down lithography have shown potential for massive multiplexing of thousands to millions of samples. In this thesis, I investigated lead halide perovskites (LHP) as a novel material for scalable production of laser particles in a lab flask. I discovered a sonochemical method to produce a large quantity (10 billion/L) of high-quality LHP micro- and sub-micron particles in a polar solvent within minutes. This method enabled me to coat the surface of individual CsPbBr3 laser particles using poly-catecholamine and thereby to improve optical properties and material stability against moisture. With CsPbBr3 microparticles coated with nano-scatterers, I realized disordered lasing based on Anderson localization. In addition, by incorporating plasmonic materials, I demonstrated plasmonic-lasing particles as small as 580 nm. This work paves the way for highly multiplexable laser particles for biomedical applications.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122536</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manufacturing and engineering of therapeutic extracellular vesicles</title>
<link>https://hdl.handle.net/1721.1/122535</link>
<description>Manufacturing and engineering of therapeutic extracellular vesicles
Ng, Kelvin S.
Originally viewed as 'garbage bags' that cells release to dispose of unwanted material, extracellular vesicles (EVs) have emerged as potent messengers that package and disseminate biochemical signals. This newly recognized mode of communication between cells has brought unprecedented therapeutic opportunities; at least 8 clinical trials and 7 companies are investigating or developing EVs as therapeutic products. As the EV industry rapidly grows, there is a rising demand for strategies that facilitate EV manufacturing. In this thesis, we address several challenges in EV manufacturing. By quantifying how many EVs a cell can release before it divides, we discovered that EV output increases as cells divide more slowly, providing a new way to maximize EV output from cells. Using our mathematical description of EV output, we built a computational model to estimate the costs of EV manufacturing. Selecting cells with higher EV output despite slower proliferation can drastically lower costs. Meanwhile, although ultracentrifugation is the current standard for purifying EVs, we found that ultrafiltration, specifically tangential-flow filtration, is a more economical and scalable alternative, and we experimentally determined its utility for scaling up EV purification. For quality control, we established a suite of potency assays to measure the overall inflammatory action of EVs derived from human stem cells. Significant variability in EV potency between cells of different donors was detected, substantiating the need to robustly screen for appropriate cell sources when manufacturing EVs. Towards controlling EV function, we genetically constructed a versatile, multi-domain ligand that localizes to and modifies the surface of vesicles. Integrating biological, processing, and economic aspects of EV manufacturing, this thesis recommends strategies that may accelerate commercialization and clinical translation of EV therapy.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, February 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 71-81).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122535</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>HIV-1 protease as a target for antiretroviral therapy</title>
<link>https://hdl.handle.net/1721.1/122533</link>
<description>HIV-1 protease as a target for antiretroviral therapy
Windsor, Ian William.
Human immunodeficiency virus (HIV) is the causative agent of acquired immunodeficiency syndrome (AIDS). HIV employs three enzymes in its lifecycle, including a protease that enables maturation of polyprotein precursors. Despite decades of progress studying the lifecycle of HIV and the elaboration of therapeutics targeting nearly every aspect of the viral life cycle, a cure remains elusive. Breakthroughs in HIV research have occurred alongside foundational advances in molecular biology, biotechnology, and medicinal chemistry, highlighting the importance of revisiting old questions with new approaches. The goal of this thesis is to advance our biochemical knowledge of HIV-1 protease and develop novel therapeutics targeting this key viral enzyme. In Chapter 1, I introduce HIV and the role that HIV-1 protease plays in the viral life cycle and in current treatment strategies. In Chapter 2, I describe an assay that enables the determination of sub-picomolar inhibition constants for competitive inhibitors of HIV-1 protease. This advance was made possible by a peptide substrate selected by phage display. I report in Chapter 3 the enhanced hydrogen bonding in the recognition of this peptide by HIV-1 protease as revealed by X-ray crystallography. The mechanism of aspartic proteases, including HIV-1 protease, has been the subject of numerous enzymology studies spanning over half a century. In Chapter 4, I reveal unappreciated non-covalent interactions within substrates of aspartic proteases that assist in catalysis. In addition to biochemical studies, this thesis includes chapters that recount the development of novel antivirals. In Chapter 5, I describe the rational drug design of a boronic acid analog of the clinical inhibitor darunavir with improved potency. A limitation of boronic acids is metabolic instability; in Chapter 6, I reveal an intramolecular protecting group that can confer oxidative stability to boronic acids. 
Finally, in Chapter 7, I describe an engineering approach to inactivate human RNase 1. The inactivation relies on installing a substrate for HIV-1 protease, the cleavage of which unmasks cytotoxic activity. Together, these chapters describe new ways forward and novel therapeutics targeting HIV-1 protease. My thesis also includes an Appendix, which describes the elaboration of boronic acid-based covalent pharmacological chaperones of human transthyretin.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 395-424).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122533</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and design for electrochemical carbon dioxide capture systems</title>
<link>https://hdl.handle.net/1721.1/122532</link>
<description>Modeling and design for electrochemical carbon dioxide capture systems
Shaw, Ryan Alex.
So long as fossil fuels remain humanity's primary source of energy, carbon dioxide capture and storage will be needed to mitigate the effects of global climate change. Existing carbon dioxide separation technology is energetically intensive and expensive to employ in brownfield applications. This thesis describes a novel CO₂ capture technology called Electrochemically-Mediated Amine Recovery (EMAR). EMAR modifies incumbent thermal amine scrubbing by performing an electrochemical desorption that manipulates the concentration of a cupric competitive complexing agent. Oxidizing copper results in a cupric-amine complex at the anode, releasing CO₂. Reducing this complex at the cathode regenerates amines for capture. This approach is more energy efficient than traditional thermal amine processes and is expected to be less capital intensive to implement. A proof-of-concept EMAR system has been previously demonstrated and initial system optimization has been previously performed. Contained herein is a significant expansion of that work, specifically in the areas of thermodynamics, mass and heat transport, and system design. Three thermodynamic paths are described for the EMAR process, which establish the total electrochemical work of separation for this and other similar molecular architectures. The electrochemical work, in concert with the work of compression and pump work, allows an ideal process flow scheme as well as optimized operating conditions for the EMAR process to be developed. The thermodynamics suggests employing a cathodic absorber for substantial efficiency improvements. EMAR cell design is significantly improved by transitioning from a single unit cell to a modular, stacked system. The performance of two stack geometries, series and parallel, is modelled and analyzed. It is demonstrated that the series stack, which is more industrially applicable, shows less severe internal temperature gradients than the parallel geometry.
Thesis: Ph. D. in Chemical Engineering Practice, Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-190).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122532</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering of Saccharomyces cerevisiae for renewable production of plastic precursor chemicals from plant biomass</title>
<link>https://hdl.handle.net/1721.1/122531</link>
<description>Engineering of Saccharomyces cerevisiae for renewable production of plastic precursor chemicals from plant biomass
Uranukul, Boonsom.
Currently, plastics are almost exclusively produced from feedstocks derived from crude oil refining and natural gas processing. Despite the increasing awareness of the negative environmental and climate-related impacts associated with fossil fuel consumption, the relevance of fossil fuels has held steady as a result of the recent proliferation of the plastics industry. Recent attempts to replace conventional petroleum-based production processes with renewable direct bioconversion processes, however, have not yet succeeded due to low production efficiency. Here, we studied the development of a bioprocess for the renewable production of monoethylene glycol (MEG), a precursor chemical of polyethylene terephthalate (PET) plastics, using the yeast Saccharomyces cerevisiae as a biosynthesis platform. During the process, we found evidence for the existence of an endogenous biosynthetic route for MEG production from D-xylose in S. cerevisiae. Based on the discovered biosynthetic pathway, we then demonstrated the implementation of metabolic engineering and fermentation operational strategies that led to an overproduction of MEG, as well as improved strain performance during prolonged bioreactor cultivation. Using the MEG bioconversion process as the starting point, we developed another bioprocess which allowed direct conversion of D-xylose to glycolic acid, a chemical precursor of poly(lactic-co-glycolic acid) (PLGA). Furthermore, we investigated the biosynthesis of 1,4-butanediol, a chemical precursor of the thermoplastic engineering polymer polybutylene terephthalate (PBT), in S. cerevisiae. In all of these studies, ethanol fermentation emerged as an important limitation that negatively affected the efficiency of the yeast-based processes. Our attempts to disrupt ethanol fermentation, while successfully reducing ethanol production, led to a compromise in MEG production. 
An analysis of the energetics of our engineered S. cerevisiae revealed that ethanol fermentation might, in fact, be a necessary requirement for maintaining the energy balance in certain systems, including the biosynthesis of MEG. These findings provide insights into, and a better understanding of, Saccharomyces cerevisiae as a microbial cell factory for the biosynthesis of small molecules other than ethanol.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122531</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of redox active organic molecules for use in nonaqueous flow batteries</title>
<link>https://hdl.handle.net/1721.1/122530</link>
<description>Evaluation of redox active organic molecules for use in nonaqueous flow batteries
Kowalski, Jeffrey A.(Jeffrey Adam)
Technical advances in grid energy storage are of critical importance to facilitate the integration of intermittent renewables and improve the efficiency, reliability, and resiliency of the existing fossil fuel infrastructure. Redox flow batteries (RFBs) are an electrochemical technology well suited for stationary energy storage due to independently addressable power and energy components, simplified manufacturing, and long operating lifetimes. While state-of-the-art RFBs utilizing transition metal salts in acidic, aqueous electrolytes have found some success, further cost reductions are needed, motivating research into organic redox couples dissolved in nonaqueous electrolytes. Nonaqueous electrolytes offer the advantages of wider electrochemical stability windows and compatibility with a broader palette of charge-storage materials. Redox active organic molecules can be modified through targeted functionalization to impart desired properties and consist of earth-abundant elements, which may enable scalable, low-cost synthesis routes. This thesis focuses on organic molecules intended for use as positive active materials in nonaqueous RFBs. The two redox active cores examined are substituted dialkoxybenzenes and phenothiazines. Both molecules served as learning platforms and were systematically functionalized, with one or more substituent groups, to elucidate structure-function relationships with particular emphasis on increasing solubility, gravimetric capacity, and redox potential. Initial efforts focused on the modification of 2,5-di-tert-butyl-1,4-bis(2-methoxyethoxy)benzene through subtractive engineering to identify stable minimal structures. Next, the impact of halogenation was examined, leading to a 300-400 mV increase in redox potential but severe reductions in cyclability. Due to limitations in the stability of the substituted dialkoxybenzenes, subsequent efforts were undertaken using N-ethylphenothiazine. 
Through an iterative approach of targeted functionalization, (1) solubility was increased and (2) the second electron transfer was stabilized, resulting in redox active electrolytes with a volumetric charge storage capacity approaching the range envisioned for economically viable RFBs. While the moderate stability of the substituted dialkoxybenzenes appears to limit their applicability as active materials, they have utility as model compounds suitable for supporting the development and standardization of testing protocols for RFBs. As organic materials are emerging candidates for RFB applications, standardized testing protocols and benchmarking techniques are not established, frustrating quantitative comparisons between new materials. To this end, new electrochemical methods are introduced to evaluate and report the stability of redox active materials at dilute conditions, using bulk electrolysis cycling, and at concentrated conditions, using time-dependent microelectrode voltammetry, which are validated using the dialkoxybenzenes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 261-278).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122530</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards more stable and ion-conductive organic electrolytes for rechargeable batteries</title>
<link>https://hdl.handle.net/1721.1/122529</link>
<description>Towards more stable and ion-conductive organic electrolytes for rechargeable batteries
Feng, Shuting,Ph.D.Massachusetts Institute of Technology.
The global society urgently needs to remedy the effects of climate change resulting from burning fossil fuels and significantly increase the utilization of renewable energy. Rechargeable batteries are important enablers of sustainable energy use, as they can be employed to store energy generated from renewable but intermittent sources. Enhancing the functionality of battery electrolytes, such as (electro)chemical stability and ion conductivity, can improve battery energy density, operation efficiency, and safety. This thesis explores strategies to improve the stability and ion conductivity of organic electrolytes for rechargeable batteries. Special attention is given to aprotic lithium-oxygen (Li-O₂) batteries, which offer theoretical energy densities 2 to 4 times those of state-of-the-art Li-ion batteries (LIBs). Currently, the practical development of rechargeable Li-O₂ batteries is hindered by severe electrolyte degradation.; Numerous families of organic solvents, polymers, and ionic liquids have been evaluated as electrolyte candidates; none are stable against the oxygen electrode in Li-O₂ batteries. Moreover, the decomposition pathways of many molecules are poorly understood. To investigate the structure-property relationships governing the stability of organic molecules in aprotic Li-O₂ electrode environments, we developed and applied a comprehensive stability framework to a library of organic molecules with varied functionalities using density functional theory (DFT) calculations. Additionally, the chemical stability of the molecules was investigated experimentally.
The computed and experimental results were in excellent agreement, and have been employed to identify unstable chemical moieties at the molecular level and to provide insight into the design of new electrolytes that would be stable in Li-O₂ battery environments.; Using the guiding principles provided by this stability framework, we developed three sulfamide- and sulfonamide-based electrolyte solvents that exhibited exceptional stability under aprotic Li-O₂ conditions. In particular, the sulfonamide-based electrolytes have been found to be stable for &gt;90 cycles in a Li-O₂ cell, highlighting the power of rational molecular design for the development of stable and ion-conductive organic electrolytes for next-generation batteries.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 114-127).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122529</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a cultivation medium for protein production in Pichia pastoris based on genome-wide biological understanding</title>
<link>https://hdl.handle.net/1721.1/122528</link>
<description>Design of a cultivation medium for protein production in Pichia pastoris based on genome-wide biological understanding
Matthews, Catherine Bartlett.
Adoption of non-mammalian host organisms for biologic drug manufacturing could lead to step-changes in cost of manufacturing and volumetric productivity, increasing access to these life-saving drugs for large patient populations. One promising alternative host is Pichia pastoris, a methylotrophic yeast that is currently used to manufacture ten approved drugs worldwide. Its fast growth rate and ability to grow to high cell densities can enable fast production cycles, agile process development, and potentially low production costs. While P. pastoris has already been engineered to produce antibodies with human-like glycoforms, titers are still lower than those typically achieved with CHO cells. While standard fermentation processes for P. pastoris have been designed, several areas have seen limited investment to date. Few chemically-defined cultivation media have been reported for P. pastoris fermentation and all are minimal salt solutions.; While several studies have demonstrated that addition of complex nutrients improves growth and productivity, defined compounds with similar effects have not been identified. Also, methanol feeding protocols for P. pastoris have only been developed for fed-batch operation and have not been studied for perfusion cultivation. In this thesis, we describe the design of a rich defined medium (RDM) for cultivation of P. pastoris through systematic screening and gene expression analysis. The use of RDM for expression of three proteins yields titers comparable to, or higher than, those in standard complex medium. We then outline a similar methodology for the optimization of individual amino acids and fatty acids in the medium. We also describe how a transcriptomic analysis of methanol feeding strategy in perfusion mode enabled the identification and alleviation of limiting biological processes.; This work demonstrates how combining traditional process development strategies with genome-wide sequencing for P.
pastoris leads to rapid improvement of fermentation processes. Continued progress in this area could lead to a new model for low-cost production of high-quality biologic drugs.
Thesis: Ph. D. in Chemical Engineering Practice, Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122528</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing bacterial antibiotic resistance, prevalence, and persistence in the marine environment</title>
<link>https://hdl.handle.net/1721.1/122524</link>
<description>Characterizing bacterial antibiotic resistance, prevalence, and persistence in the marine environment
May, Megan Katherine.
Antibiotics are naturally occurring chemicals produced by bacteria that humans have only recently discovered and utilized. Despite this relatively short period of use, anthropogenic use of antibiotics has increased natural levels of antibiotic resistance, creating a looming crisis in which antibiotics may no longer work. Understanding resistance patterns is critical to allow for continued therapeutic use of antibiotics. While resistance is often thought of in the context of hospitals, antibiotics and antibiotic resistance genes from human activity are released into nature, where they are able to interact with naturally occurring antibiotics and resistance. In this dissertation, I examine the ocean as an understudied region of the environment for antibiotic resistance. The ocean represents an area of human activity with recreation and food consumption, and it is an enormous region of the planet that is affected by both land and sea activities.; In Chapter 2, I explore the policies that have contributed to the antibiotic resistance crisis. I offer explanations of market and political failures that contributed to the situation, areas for growth in terms of assessing scientific knowledge, and finally, recommendations for mitigating antibiotic resistance. In Chapters 3 and 4, I collected individual bacterial cultures from Cape Cod, MA beaches to assess phenotypic resistance to antibiotics. I show that 73% of Vibrio-like bacteria and 95% of heterotrophic bacteria (both groups operationally defined) are resistant to at least one antibiotic. These results indicate that antibiotic resistance is prevalent and persistent on beaches over both spatial and temporal scales. In Chapter 5, I used metagenomics to assess the abundance and types of resistance genes at impacted coastal Massachusetts sites.
I found that, even in sites that seem distinct in terms of anthropogenic impact, prevalence of resistance remained the same.; Finally, in Appendix A, I examined part of the TARA Ocean dataset for prevalence of antibiotic resistance genes across the world's ocean. Here, I found that there are distinctions between different ocean biomes based upon antibiotic, metal, and mobile genetic elements. This dissertation has increased the understanding of temporal and spatial dynamics of antibiotic resistance in the coastal and open ocean.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Biology; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122524</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism and importance of Mcm2-7 double-hexamer formation during DNA replication initiation</title>
<link>https://hdl.handle.net/1721.1/122523</link>
<description>Mechanism and importance of Mcm2-7 double-hexamer formation during DNA replication initiation
Champasa, Kanokwan.
All cells must duplicate their genome completely and accurately in each cell cycle. Thus, DNA replication is a highly-regulated multi-step process that ensures the genome is duplicated only once per cell cycle. In eukaryotic cells, initiation of DNA replication begins with loading of two heterohexameric Mcm2-7 helicases around origin DNA during G1 phase. The two helicases are loaded in opposite orientations and interact with each other at their N-terminal domains to form a head-to-head "double hexamer". In S phase, the helicases are activated by helicase-activation proteins to initiate DNA unwinding. Importantly, this event is the committed step of replication initiation. Loading of two helicases in the head-to-head double hexamer ensures DNA unwinding on both sides of the origin and allows the assembly of bi-directional forks essential for complete DNA replication. Two Mcm2-7 helicases are loaded onto the DNA sequentially.; The order of events during the first helicase loading has been established, but the mechanism of double-hexamer formation remains unclear. Because the two helicases interact at their N-terminal domains, these regions represent potential mediators of double-hexamer formation. This thesis outlines the potential mechanism and the importance of double-hexamer formation. A conserved motif within the Mcm2-7 N-terminal region is required for stable double-hexamer formation and cell viability. Single-molecule analyses of Mcm2-7 containing a mutation within this motif indicated that this mutant forms double-hexamer interactions only briefly before the two hexamers come apart. Interestingly, after double-hexamer dissolution, the two mutant helicases do not form subsequent double-hexamer interactions. Both wild-type and the mutant Mcm2-7 exhibit double-hexamer interaction rapidly after the arrival of the second Mcm2-7.; Together, these data support the model that double-hexamer formation is coordinated with loading of the second Mcm2-7.
Finally, the requirement for the double hexamer during helicase activation was investigated using an Mcm2-7 complex containing the mutation that inhibits double-hexamer formation. The double hexamer is not essential for recruitment of three critical helicase-activation proteins, but it is required for initial origin DNA unwinding. These findings identify a crucial motif for stable double-hexamer formation and suggest that DNA unwinding is the first step in replication initiation that requires the double-hexamer form of the helicases.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122523</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of natural and engineered bacteriophages as antimicrobials</title>
<link>https://hdl.handle.net/1721.1/122522</link>
<description>Development of natural and engineered bacteriophages as antimicrobials
Citorik, Robert James.
One of the major public health concerns of the modern day is the emergence and spread of extensively antibiotic-resistant pathogens. We have already seen the arrival of infections caused by bacteria resistant to all available antibiotics in the therapeutic arsenal. In addition, we have learned much about the importance of the microbial communities that cohabit our bodies, and about how perturbations to these communities can lead to long-lasting health effects. Bacteriophages may provide a solution for both of these problems, in that they are narrow-spectrum and can be used to specifically kill target microbes without disrupting whole community structure through off-target effects. Here, various approaches to creating phage-based therapeutics are explored, including the isolation and application of naturally occurring wild-type phages, the conversion of temperate phages to obligately lytic phages to permit their use as a resource in phage therapeutics, and the creation of programmable, sequence-specific antimicrobials through phage-mediated genetic payload delivery. These efforts are expected to contribute to the field by expanding the approaches available to develop next-generation, phage-based antimicrobials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 89-103).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122522</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New protein engineering approaches for potentiating and studying antibody-based EGFR antagonism</title>
<link>https://hdl.handle.net/1721.1/122521</link>
<description>New protein engineering approaches for potentiating and studying antibody-based EGFR antagonism
Tisdale, Alison Wedekind.
A variety of cancers are marked by the over-expression and over-activity of the EGF receptor (EGFR), rendering this protein an attractive therapeutic target. Anti-EGFR therapeutics are a mainstay of clinical practice for the treatment of colorectal, lung and head and neck cancers, but efficacy is limited and response rates are low. Opportunities for improving EGFR antagonism include higher potency inhibition of ligand binding, inducing receptor downregulation, or creating synergistic therapeutic combinations. The Wittrup lab has previously made significant advances in EGFR antagonism by demonstrating the therapeutic potential of inducing receptor downregulation through multi-epitopic targeting. The lab has also pioneered the use of a novel protein scaffold, called Sso7d, for yeast surface display-based libraries and selections. In the first part of this work I show that a combination of traditional yeast display techniques with simple but novel in silico approaches can be applied to derive a panel of Sso7d binders against EGFR with diverse paratopes. I demonstrate the superior EGFR inhibition of antibody-Sso7d fusions in vitro, and discuss the lessons learned from applying these proteins in vivo. In the second part of this work I use a structure-guided yeast display approach to create a novel research tool, a minimally modified version of cetuximab called "mCetux", which essentially enables in vivo experiments with cetuximab. I apply this antibody tool in vitro and in vivo in a new and highly relevant model system for colorectal cancer and subsequently discuss future opportunities for its use.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 118-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122521</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A miniature, broadband acoustic spectrometer : design of a unified attenuation model, device development, and experimental performance</title>
<link>https://hdl.handle.net/1721.1/122511</link>
<description>A miniature, broadband acoustic spectrometer : design of a unified attenuation model, device development, and experimental performance
Demas, Nickolas Peter.
The attenuation of sound occurs in polyatomic gases due to both classical and nonclassical physics. Classical attenuation is dominated by viscous dissipation and irreversible heat conduction. Nonclassical attenuation arises from the thermal relaxation between internal and external degrees of freedom for each constituent molecule. Currently, we are not aware of any commercial gas sensors that leverage classical attenuation. Existing methods to detect gas composition using nonclassical attenuation are bulky, heavy, and slow to resolve measurements, as the instruments utilize highly resonant, single frequency transducers mounted within rigid containment vessels that are pressurized sequentially over a wide range of pressures. We leverage both classical and nonclassical attenuation to develop a miniature, broadband acoustic spectrometer capable of detecting gas composition. This thesis covers modeling, design, and experimental efforts.; The first part focuses on the design of an acoustic attenuation simulation software package for gases. This package unifies classical and nonclassical models presented in the literature for the attenuation of plane waves within straight or curved tubes. With this simulation tool, attenuation components can be readily compared and optimized. We also tabulate a parameter library for twenty-four unique gases which details the required properties to model a wide range of pure samples and mixtures. Second, we deploy this modeling capability to develop a functional sensor. Through four generations of instruments, we construct a unique approach that pairs broadband transducers with stochastic system identification techniques at constant pressure.
This paradigm shift enables the fourth-generation device to function with approximately 5% of the total volume and 0.2% of the total mass of the smallest acoustic spectrometers previously described in the literature.; We produce an attenuation spectrum (derived from a Bode plot) in less than 30 seconds, which is two orders of magnitude faster than existing methods. Third, we present experimental results captured using the fourth-generation device. The spectrometer produces measurement results that characterize classical and nonclassical attenuation. These results are unique to each gas, allowing for different samples to be distinguished. The experimental results from the fourth-generation device agree with the expected theoretical predictions from the simulation package. Our methods and hardware hold promise in a number of application areas requiring medium sensitivity (± 1%), wide specificity sensing in a miniaturized, rugged, and inexpensive package.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Pages 477 and 478 are blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 265-273).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122511</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Root-cause analysis and characterization of oxygen-related defects in silicon PV material : an approach from macro to nanoscale</title>
<link>https://hdl.handle.net/1721.1/122510</link>
<description>Root-cause analysis and characterization of oxygen-related defects in silicon PV material : an approach from macro to nanoscale
Youssef, Amanda.
With energy demand forecast to grow significantly, efforts toward mitigating the effects of global warming by reducing greenhouse gas emissions are intensifying as more power generation plants are deployed to meet global demand. Deployment of renewable energy technologies as a low-carbon alternative to fossil fuel is an attractive solution. Photovoltaics (PV) present several advantages over other energy sources because PV is modular and has proven to be a scalable and reliable technology. A capital expenditure reduction of 70% has been found to be necessary to meet the climate targets of 7-10 TW of PV by 2030. This can be achieved through different channels: improving conversion efficiency and device performance of silicon modules, increasing solar cell manufacturing yield, reducing silicon feedstock material use, etc. This research focuses on n-type monocrystalline silicon and aims to increase conversion efficiency by up to 20% (relative) and manufacturing yield by up to 50%, as levers to reduce the capital expenditure. The increase in conversion efficiency and manufacturing yield is achieved by defect engineering and mitigation of a lifetime-limiting bulk defect in n-type monocrystalline silicon, characterized by low-lifetime concentric rings. Temperature- and injection-dependent photoluminescence imaging is applied to investigate the defect's root cause by studying its evolution under several high-temperature process conditions, and the defect is found to be caused by oxide-related precipitates. Synchrotron-based mic ...
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-155).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122510</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Illumination Diffusion Imaging (SIDI) to measure light transport properties of turbid media</title>
<link>https://hdl.handle.net/1721.1/122509</link>
<description>Structured Illumination Diffusion Imaging (SIDI) to measure light transport properties of turbid media
Jain, Pranay.
We come across colloidal dispersions frequently in our daily lives. Food substances like milk and cheese, cosmetic products like lotions and shampoos, and even biological materials like skin tissue are a few examples of colloidal dispersions. One common property of these dispersions is that they are neither entirely opaque nor transparent to light. Light diffuses in these media due to scattering and absorption by particles or other microscopic inhomogeneities. Measuring the light transport properties of colloidal dispersions is essential for several applications. In milk, this allows us to estimate the size distribution and concentrations of suspended fats and proteins. In dispersions like moisturizers and paints, it lets us estimate consistency and colloidal stability. In the case of skin, it helps us distinguish and characterize internal structures and pigments, giving information vital for skin-related conditions like melanoma, lesions and burns.; Among known techniques to measure the optical scattering and absorption properties, imaging-based methods are particularly interesting. Imaging can work with relatively simple hardware, offloading much of the burden to software and computation. Image sensors provide a rich set of data, millions of independent pixels, per observation. However, they suffer from lower dynamic range compared to other photodetectors. Existing imaging-based methods either struggle to work in low dynamic ranges or are inefficient in retrieving information, precluding real-time measurements. For imaging to be effective, it is essential to devise methods that address these concerns. In this thesis, we introduce Structured Illumination Diffusion Imaging (SIDI), a method to analyze turbid media and quantify their light transport properties.
The goal is to extract maximum information per captured image and utilize image sensor resources efficiently.; In SIDI, we project broadband spatial patterns as structured illumination on a turbid sample and image the diffuse backscatter. We then compute the power spectra of the input and output random processes using a windowing-based spectral estimation algorithm. Comparing the spectra gives an estimate of the medium's spatial frequency response. We fit a parametric optical diffusion model to the response to quantify the light transport properties. For demonstration, we choose to project laser speckle patterns as a simple, versatile and scalable source of random structured illumination. Speckles serve as realizations of a broadband random process suitable for analyzing turbid media like milk and skin. We use an off-the-shelf CMOS camera to image the diffuse backscatter. For validation, we present the results of a benchmarking study using standard liquid tissue phantoms as sample turbid media.; We estimate the light transport properties using SIDI and compare them with known values from literature. The research is directed towards developing a fundamental technology that can eventually be adapted to a variety of commercial applications. We discuss multiple proposed applications including milk fat and protein measurements, inline quality control of paints, moisturizers and food products, and skin imaging for medical diagnostics. To validate the first among these applications, we present the results of a study with raw milk samples as turbid media. SIDI is able to distinguish and quantify fats and proteins based on their size contrast and different scattering regimes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 175-179).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122509</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, development, and testing of a stochastically modulated Raman spectrometer</title>
<link>https://hdl.handle.net/1721.1/122508</link>
<description>Design, development, and testing of a stochastically modulated Raman spectrometer
Zervas, Michael Jay.
Detection of cancerous tumors and identification of counterfeit medications are just two examples that demonstrate the chemical specificity provided by Raman Spectroscopy. Yet, the widespread use of Raman Spectroscopy as an analytical tool has been limited to large bench-top systems in controlled laboratory environments. Existing technology, specifically in portable or handheld formats, suffers from a high false detection rate and relatively low sensitivity compared to other spectroscopic techniques. The present work addresses these issues through the design and development of a new system architecture that stochastically modulates the laser excitation wavelength. Small changes in excitation will proportionally shift the Raman scatter while having little effect on other spectral artifacts, including fluorescence.; A custom confocal Raman spectrometer, capable of rapidly shifting the excitation wavelength by selectively straining an externally mounted Fiber Bragg Grating (FBG), was built and characterized. When combined with a super-luminescent diode (SLED), a modulation bandwidth of over half a nanometer was achieved. The functionality of the system was tested and benchmarked against Raman spectra that have been well characterized in the literature. In addition, a novel signal processing approach was used to obtain a difference spectrum from a stochastic input excitation sequence. Simulations were conducted that compare the performance to conventional methods, which were then verified experimentally. Results indicate that the stochastic modulation was able to effectively isolate Raman scatter with a higher SNR compared to conventional methods. Finally, it was demonstrated that the developed system could be applied to Surface Enhanced Raman Spectroscopy (SERS).; SERS substrates increase the Raman scatter signal, but also compete with significant fluorescence and a strong background signal.
Rhodamine 6G, a fluorescent dye, was tested using the developed system on a SERS substrate. Concentrations on the order of several hundred parts per million (ppm) were successfully measured, with significantly lower limits of detection possible. The experimental data shows that the combination of SERS with stochastically modulated techniques reduces the false detection rate and improves the detection sensitivity by several orders of magnitude, addressing both of the major existing limitations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122508</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient nearest-neighbor search algorithms for sub-Riemannian geometries</title>
<link>https://hdl.handle.net/1721.1/122500</link>
<description>Efficient nearest-neighbor search algorithms for sub-Riemannian geometries
Varricchio, Valerio.
The Motion Planning problem has been at the core of a significant amount of research in the past decades and it has recently gained traction outside academia with the rise of commercial interest in self-driving cars and autonomous aerial vehicles. Among the leading algorithms to tackle the problem are sampling-based planners, such as Probabilistic Road Maps (PRMs), Rapidly-exploring Random Trees (RRTs) and a large number of variants thereof. In this thesis, we focus on a crucial building block shared by these algorithms: nearest-neighbor search. While nearest-neighbor search is known as the asymptotically dominant bottleneck of sampling-based planners, popular algorithms to efficiently identify neighbors are limited to robots capable of unconstrained motions, commonly referred to as holonomic.; Nevertheless, this is rarely the case in practical applications, where the dynamical system at hand is often subject to a class of differential constraints called nonholonomic. We tackle the problem with sub-Riemannian geometries, a mathematical tool to study manifolds that can be traversed under local constraints. After drawing the parallel with nonholonomic mechanical systems, we exploit peculiar properties of these geometries and their natural notion of distance to devise specialized, efficient nearest-neighbor search algorithms. Our contributions are two-fold: First, we generalize existing space-partitioning techniques (k-d trees) to sub-Riemannian metrics. This is achieved by introducing i) a criterion - the outer Box Bound - that discards halfspaces consistently with the metric and ii) a space-partitioning technique - the Lie splitting strategy - that organizes the dataset for optimal asymptotic performance.; Second, we propose pruning techniques to further improve the query runtime.
This is achieved by reducing the number of distance evaluations required to discern the nearest neighbors and exploiting heuristics that provably approximate a sub-Riemannian metric up to a constant factor, asymptotically.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122500</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiobjective optimization of crewed spacecraft supportability strategies</title>
<link>https://hdl.handle.net/1721.1/122499</link>
<description>Multiobjective optimization of crewed spacecraft supportability strategies
Owens, Andrew Charles.
Future crewed missions present a logistical challenge that is unprecedented in human spaceflight. Astronauts will travel farther from Earth than ever before, and stay in space for longer, without access to regular resupply or the option to abort and return home quickly in the event of an emergency. Under these conditions, supportability - that is, the set of characteristics of a system that drive the amount of resources required to enable safe and effective system operations - will be a much more significant driver of system lifecycle properties than it has been in the past. Many of the strategies that are currently used to mitigate risk for human spaceflight will no longer be available, feasible, or effective. To enable the human exploration missions of the future, new supportability strategies must be identified, characterized, developed, and implemented.; This dissertation addresses this problem by developing and presenting a new methodology for modeling and multiobjective optimization of supportability strategies, minimizing mass and maintenance crew time requirements subject to constraints on risk. The supportability strategy optimization problem is encoded as a multiobjective Constraint Optimization Problem (COP), with a set of decision variables defining a range of supportability strategy options related to level of maintenance, On-Demand Manufacturing (ODM), commonality, redundancy, and distributed functionality. 
A supportability model is developed which enables evaluation of the mass and crew time associated with a given assignment to those decision variables, or the lower bounds on those objective values associated with a partial assignment.; The resulting model, called Mass, Crew time, and Risk-based Optimization of Supportability Strategies (MCROSS), advances the state of the art in space mission supportability analysis by enabling holistic, rapid, multiobjective optimization and evaluation of the tradeoffs between mass, crew time, and risk for future missions. Model outputs are verified against results from Monte Carlo simulation, and validated via comparison to an existing state-of-the-art NASA supportability model and to flight maintenance data from the International Space Station (ISS). MCROSS is then demonstrated using two case studies, one based on a notional system and the other examining the ISS Oxygen Generation Assembly (OGA). The notional case study is used to validate optimization results against the Pareto frontier identified via full enumeration.; The second case study demonstrates the application of this methodology to a real-world system, showing that MCROSS can identify supportability strategies offering lower mass and crew time options than current approaches. A series of sensitivity analyses are also presented to demonstrate the application of MCROSS in an iterative design process. These results, and the associated analysis capability, provide a powerful analysis tool that can help inform system development and mission design by characterizing tradeoffs between mass, crew time, and risk, along with the underlying strategy decisions. The results and implications of this research are discussed, along with assumptions and limitations. Finally, the contributions of this research are summarized along with potential areas of future work.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Space Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 331-344).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122499</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced dynamic load sensor for the International Space Station : design, development, musculoskeletal modeling and experimental evaluation</title>
<link>https://hdl.handle.net/1721.1/122498</link>
<description>Enhanced dynamic load sensor for the International Space Station : design, development, musculoskeletal modeling and experimental evaluation
Opperman, Roedolph A. (Roedolph Adriaan)
Prolonged exposure of a vertebrate musculoskeletal system to the microgravity environment of space leads to a reduction in bone mineral density, muscle mass, strength and endurance. Such deconditioning may impede critical astronaut activities and presents an increased injury risk during flight and when exposed to increased gravity like that of Earth or Mars. Exercise countermeasures are used extensively on the International Space Station to mitigate musculoskeletal deconditioning during long duration spaceflight missions. Despite vigorous exercise protocols, bone loss and muscle atrophy are often observed even when countermeasures are in effect. As a first step in understanding the mechanisms of injury and how on-orbit exercise countermeasures compare to those on the ground, an accurate load sensing system is needed to collect ground reaction force data in reduced gravity.; To date, no means of continuous, high resolution biomechanical force data collection and analysis has been realized for on-orbit exercise. Such a capability may advance the efficiency of these systems in mitigating the incidence of bone and muscle loss and injury risk by quantifying loading intensity and distribution during exercise in microgravity, thus allowing for cause-effect tracking of ISS exercise regimes and biomechanics. By measuring these forces and moments on the exercise device and correlating them with the post-flight fitness of crewmembers, the efficacy of various exercise devices may be assessed. More importantly, opportunities for improvement, including optimized loading protocols and lightweight exercise device designs will become apparent.; The overall goal of this research effort is to improve the understanding of astronaut joint loading during resistive exercise in a microgravity environment through the use of rigorous quantitative dynamic analysis, simulation and experimentation. 
This is accomplished with the development and evaluation of a novel, self-contained load sensing system. The sensor assembly augments existing countermeasures and measures loads imparted by the crew during exercise. Data collected with this system is used to parameterize a unique musculoskeletal model which is then used to evaluate associated joint reaction forces generated during exercise. The effects of varying body posture and load application points on joint loading were investigated and recommendations for enhancing on-orbit exercise protocols that mitigate both injury and deconditioning are discussed.; By validating the sensor and modeling joint loading during on-orbit exercise as described herein, a unique contribution is made in expanding NASA's capability to continuously record and quantify crew loading during exercise on ISS. Data obtained through the system is used to characterize joint loading, inform and optimize exercise protocols to mitigate musculoskeletal deconditioning and may aid in the design of improved, lightweight exercise equipment for use during long-duration spaceflight, including future missions to Mars.
Thesis: Ph. D. in Aerospace Systems Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-179).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122498</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust human motion prediction for safe and efficient human-robot interaction</title>
<link>https://hdl.handle.net/1721.1/122497</link>
<description>Robust human motion prediction for safe and efficient human-robot interaction
Lasota, Przemyslaw A. (Przemyslaw Andrzej)
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is through the use of human motion prediction. By predicting where a person might reach or walk toward in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains.; It is also possible that no single predictor is suitable for predicting motion in a given scenario, and that multiple predictors are needed. Due to these drawbacks, without expert knowledge in the field of human motion prediction, it is difficult to deploy prediction on real robotic systems. Another key limitation of current human motion prediction approaches lies in deficiencies in partial trajectory alignment. Alignment of partially executed motions to a representative trajectory for a motion is a key enabling technology for many goal-based prediction methods. Current approaches to partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories. Specifically, due to reliance on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.; In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction.
First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data in order to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. With the use of three distinct human motion datasets, I show that using the MPS leads to lower prediction error in a variety of HRI scenarios, and allows for accurate prediction for a range of time horizons. Second, in order to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA).
Thesis: Ph. D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 175-188).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122497</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Entropy-stable hybridized discontinuous Galerkin methods for large-eddy simulation of transitional and turbulent flows</title>
<link>https://hdl.handle.net/1721.1/122496</link>
<description>Entropy-stable hybridized discontinuous Galerkin methods for large-eddy simulation of transitional and turbulent flows
Fernández, Pablo.
The use of computational fluid dynamics (CFD) in the aerospace industry is limited by the inability to accurately and reliably predict complex transitional and turbulent flows. This has become a major barrier to further reduce the costs, times and risks in the design process, further optimize designs, and further reduce fuel consumption and toxic emissions. Large-eddy simulation (LES) is currently the most promising simulation technique to accurately predict transitional and turbulent flows. LES, however, remains computationally expensive and often suffers from accuracy and robustness issues to the extent that it is still not practical for most applications of interest. In this thesis, we develop a series of methods and techniques to improve efficiency, accuracy and robustness of large-eddy simulations with the goal of making CFD a more powerful tool in the aerospace industry.; First, we introduce a new class of high-order discretization schemes for the Euler and Navier-Stokes equations, referred to as the entropy-stable hybridized discontinuous Galerkin (DG) methods. As hybridized methods, they are amenable to static condensation and hence to more efficient implementations than standard DG methods. As entropy-stable methods, they are superior to conventional (non-entropy stable) methods for LES of compressible flows in terms of stability, robustness and accuracy. Second, we develop parallel iterative methods to efficiently and scalably solve the nonlinear system of equations arising from the discretization. The combination of hybridized DG methods with the proposed solution method provides excellent parallel scalability up to petascale and, for moderately high accuracy orders, leads to about one order of magnitude speedup with respect to standard DG methods.; Third, we introduce a non-modal analysis theory that characterizes the numerical dissipation of high-order discretization schemes, including hybridized DG methods.
Non-modal analysis provides critical guidelines on how to define the polynomial approximation space and the Riemann solver to improve accuracy and robustness in LES. Fourth, we investigate how to best account for the effect of the subgrid scales (SGS) that, by definition, exist in LES. Numerical and theoretical results show that the Riemann solver in the DG scheme plays the role of an implicit SGS model. More importantly, a change in the current best practices for SGS modeling is required in the context of high-order DG methods. Fifth, we present a physics-based shock capturing method for LES of high-Mach-number and high-Reynolds-number flows. The shock capturing method performs robustly from transonic to hypersonic regimes, provides sharp shock profiles, and has a small impact on the resolved turbulent structures.; These are all critical ingredients to advance the state-of-the-art of high-order methods for LES, both in terms of methodology and understanding the relationship between the physics and the numerics.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-212).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122496</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Caring for star-children : autism, families, and ethics in contemporary China</title>
<link>https://hdl.handle.net/1721.1/122493</link>
<description>Caring for star-children : autism, families, and ethics in contemporary China
Lin, Emily Xi.
Caring for Star-Children: Autism, Families, and Ethics in Contemporary China studies the emergence and development of family caregiving for autistic children after 1982, when autism was first diagnosed separately in two cities in China. Based on 18 months of ethnographic fieldwork at municipal specialist hospitals, community child-health clinics, autism rehabilitation centers, and homes of families with autistic children across six provinces, this study explores how social stratification and the turn towards self-governance not only made autism an epistemic object but also intersected with that category to create new forms of inequality. In the absence of thorough and consistent state initiatives, moral economies around the child's potential have sprung up.; Such moral economies lead actors such as medical professionals, philanthropic and educational organizations, and elite parent-activists to prioritize the young autistic child's potential, and to urge parents to become behavioral therapists for their own children. Parents are urged to let go of the normative societal expectation of recompense in the form of elderly care. I argue in this dissertation that the directives around these moral economies fail to take into account the local and gendered inequities that both produce, and constrain, parental diagnostic and therapeutic choices for their autistic children. Autism's spread as a diagnostic category has paralleled other spatial and economic disparities across the country.; The economic reforms which began in 1978 and the devolution of many public functions to the purview of local governments have led to dramatic regional disparities with respect to economic opportunity and the availability and quality of healthcare, education, and social services.
Where professional and parental elites in cities such as Beijing refer to autistic children through the valorized term "children of the stars" (a phrase chosen so as to reduce stigma), and are able to provide children in these locations with prompt diagnoses and early therapy, to date many healthcare workers and families responsible for nurturing children in less developed regions of China have not even heard of such a diagnostic category. Many families from rural or otherwise resource-scarce locations in China are not able to obtain a timely diagnosis, much less access therapy for their children.; In managing care in landscapes of great disparity, families are turned into diagnostic and therapeutic internal migrants, as they travel across China in search of the appropriate doctors and therapy. I argue in this thesis that the post-socialist emphasis on choice, rather than care, in fact serves to legitimize neglect of the autistic adult and mother of the child. Autism advocacy rights which fail to take into account local forms of stratification thus serve to broaden the burden of care upon families.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2016; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 209-228).
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122493</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The work of art in the age of its technoscientific re-enchantment : recasting light, colloids, and microbes for art and heritage conservation in U.S. and Italian laboratories</title>
<link>https://hdl.handle.net/1721.1/122492</link>
<description>The work of art in the age of its technoscientific re-enchantment : recasting light, colloids, and microbes for art and heritage conservation in U.S. and Italian laboratories
Kim, Grace, Ph. D., Massachusetts Institute of Technology.
This ethnography tracks a diverse set of scientific practices that have developed new technologies for the conservation of artworks and cultural heritage. I examine how scientists in physics, chemistry, and biology have intervened in the restoration of artifacts ranging from faded abstract expressionist paintings to the crumbling clay terraces of an archaeological site. Reporting on archival research, interviews, and participant-observation, I juxtapose three case studies in the U.S. and Italy - two in which physics (Cambridge, MA) and chemistry (Florence) are conscripted into the realm of high modern art, and another in which biological knowledge (Milan) informs the preservation of artistic tradition and craft heritage.; In analyzing interventions in digital projection technology (light), nanotechnology (colloids), and biotechnology (microbes), I argue that scientists today transform artifacts of culture into instances of technoscientific nature through what I call the "technoscientific re-enchantment of art." Aura, philosopher Walter Benjamin once wrote, is the ineffable and singular charisma that confirms an artwork as "the original." He added that technological reproducibility through film and photography strips art of its ritualistic authority, liberating it of the fetish of authenticity. To the contrary, I find, technology today is enlisted as a mode of authenticity's material production. Art's aura, in the age of technoscientific re-enchantment, does not disappear but rather, is re-valued through analogy - analogies made through the discursive and material practices that liken light to paint, the colloidal substance of the human body to that of artworks, and microbes to patina.; Laboratory scientists, I show, are recasting the materials of art and heritage to make the terms of their recovery amenable to technoscientific mediation.
In so doing, scientists contribute to enduring ethical debates within art history and heritage preservation-debates about how to interpret an artist's intent and an object's pristineness or historicity. Finally, I explore a fourth field site, the Vatican Museums, as a framing device for understanding the stakes in contemporary conservation practice. Drawing on the anthropology of art and heritage, science and technology studies, and art history, I explore the multiple, ever-changing claims of technoscientific expertise over matters of the materiality, aesthetics, and history of artifacts.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 155-169).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122492</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transnational biopolitics and family-making in secrecy : an ethnography of reproductive travel from Turkey to Northern Cyprus</title>
<link>https://hdl.handle.net/1721.1/122491</link>
<description>Transnational biopolitics and family-making in secrecy : an ethnography of reproductive travel from Turkey to Northern Cyprus
Mutlu, Burcu, Ph. D., Massachusetts Institute of Technology.
This dissertation is an ethnographic study of reproductive travel between Turkey and Northern Cyprus. Based on interviews and observations primarily carried out in a private In Vitro Fertilization (IVF) clinic in Northern Cyprus, between November 2014 and January 2016, it investigates how and why Turkish couples travel to the Turkish-speaking part of the island of Cyprus to access biomedical reproductive services - namely, donor gametes and sex selection through pre-implantation genetic diagnosis - that are legally unavailable in Turkey. By combining anthropology of secrecy with feminist studies of assisted reproductive technologies, this dissertation argues that Turkey's ban on gamete donation has helped to normalize IVF in the country by reinforcing the heteronormative nuclear family ideal: that is, if gamete donation is unavailable to Turkish people, then married couples who conceive using IVF are presumed to be genetically related to their children.; However, I argue further that this normalization of IVF is only able to rest upon the national ban on gamete donation so long as access to donor gametes continues to be available - transnationally and clandestinely facilitated through a network of inter-clinical and inter-lab relations between Turkey and Northern Cyprus that have been formed over the last decade. In other words, these travels constitute a discursive and geographical space at the margins of, but fully integral to, Turkish reproductive biopolitics, in which secrecy is essential to diverse actors (including couples, egg donors, clinics, and the Turkish state) for multiple reasons. This ethnographic study of reproductive travels connecting Turkey and Northern Cyprus complicates the familiar analysis of transnational reproductive inequalities by demonstrating the plurality of Turkish experience. 
In doing so, it also extends the non-Western scope of anthropological studies of transnational reproductive travel.; By adding a transnational dimension to the study of national reproductive politics, this dissertation reveals the ways in which Turkey's current ideological, social and economic transformations shape the dynamics for the material-discursive (re)making of borders and boundaries of both Turkish families and the Turkish nation in the Northern Cypriot IVF clinics.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 318-333).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122491</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the Favorskii rearrangement.</title>
<link>https://hdl.handle.net/1721.1/122489</link>
<description>A study of the Favorskii rearrangement.
Frank, George Andrew.
Massachusetts Institute of Technology. Dept. of Chemistry. Thesis. 1965. Ph.D.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122489</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A cost-effective battery management and monitoring strategy for micro-grids in India</title>
<link>https://hdl.handle.net/1721.1/122487</link>
<description>A cost-effective battery management and monitoring strategy for micro-grids in India
Kumar, Rakesh, Ph. D., Massachusetts Institute of Technology.
Energy buffer cost is a key factor in rural electrification, electric vehicles, mobile devices, etc. Increasingly, micro-grids and off-grid home electric systems have become popular in the developing world to help meet energy demands. These systems help provide electric power to customers when the main power utility is unable to do so, such as in rural communities or newly developed townships. However, a major limitation of micro-grid expansion is the system cost, with the energy buffer accounting for perhaps 50% of the cost. This work addresses these costs by improving battery longevity through the creation of a cost-effective and easy-to-use battery monitor that helps off-grid system operators increase the lifetime of the battery system. This work also explores methods of extracting battery health information from existing off-grid batteries to help determine remaining battery life.; To this end, a very inexpensive device capable of extracting state-of-charge (SoC) and state-of-health (SoH) is proposed and validated against data from existing battery test equipment. This work then develops an optimal depth of discharge model which can be built solely from manufacturer data. Such a model helps micro-grid operators understand the effects of different operational conditions on battery life. Finally, this work experimentally studies the degradation impact of extreme battery cycling scenarios. Examples include high discharge currents or low battery voltage cut-off. These results, combined with manufacturer data, lead to the synthesis of recommendations on how to size and operate the battery system to maximize lifetime using the proposed SoC/SoH monitoring device.
By leveraging these optimal operating recommendations and monitoring tools, micro-grid operators may extend total battery throughput by 35-40%, thereby reducing overall micro-grid system costs by up to 20%.; For the purposes of this work, the focus will be on lead-acid batteries as they are the most common type of battery used in off-grid settings.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 165-170).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122487</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differentiable visual computing</title>
<link>https://hdl.handle.net/1721.1/122486</link>
<description>Differentiable visual computing
Li, Tzu-Mao
Derivatives of computer graphics, image processing, and deep learning algorithms have tremendous use in guiding parameter space searches, or solving inverse problems. As the algorithms become more sophisticated, we no longer only need to differentiate simple mathematical functions, but have to deal with general programs which encode complex transformations of data. This dissertation introduces three tools for addressing the challenges that arise when obtaining and applying the derivatives for complex graphics algorithms. Traditionally, practitioners have been constrained to composing programs with a limited set of coarse-grained operators, or hand-deriving derivatives. We extend the image processing language Halide with reverse-mode automatic differentiation, and the ability to automatically optimize the gradient computations. This enables automatic generation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort.; We demonstrate several applications, including how our system enables quality improvements of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep learning methods. In 3D rendering, the gradient is required with respect to variables such as camera parameters, light sources, geometry, and appearance. However, computing the gradient is challenging because the rendering integral includes visibility terms that are not differentiable. We introduce, to our knowledge, the first general-purpose differentiable ray tracer that solves the full rendering equation, while correctly taking the geometric discontinuities into account. We show prototype applications in inverse rendering and the generation of adversarial examples for neural networks.
Finally, we demonstrate that the derivatives of light path throughput, especially the second-order ones, can also be useful for guiding sampling in forward rendering.; Simulating light transport in the presence of multi-bounce glossy effects and motion in 3D rendering is challenging due to the high-dimensional integrand and narrow high-contribution areas. We extend the Metropolis Light Transport algorithm by adapting to the local shape of the integrand, thereby increasing sampling efficiency. In particular, the Hessian is able to capture the strong anisotropy of the integrand. We use ideas from Hamiltonian Monte Carlo and simulate physics in Taylor expansion to draw samples from high-contribution regions.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 131-148).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122486</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated structure generation for first-principles transition-metal catalysis</title>
<link>https://hdl.handle.net/1721.1/122481</link>
<description>Automated structure generation for first-principles transition-metal catalysis
Ioannidis, Efthymios Ioannis.
Efficient discovery of new catalytic materials necessitates the rapid but selective generation of candidate structures from a very wide chemical space and the efficient estimation of their properties. We developed an efficient and reliable software utility for high-throughput screening of inorganic complexes that enables chemical discovery by automating molecular and intermolecular complex structure generation, job preparation, and post-processing analysis to elucidate correlations of electronic or geometric descriptors with energetics. The developed software was then used to unveil different binding modes of small anions on organometallic complexes as well as functionalizations that allow for selective binding. We additionally employed our materials design framework to study the binding of carbon monoxide on functionalized metalloporphyrins, providing tuning strategies and uncertainty estimation.; Computational approaches such as density functional theory (DFT) that directly simulate the electronic properties have been increasingly used as tools for materials design, mainly due to recent developments in computational speed and accuracy. DFT recasts the many-body problem of interacting electrons into an equivalent problem of non-interacting electrons, greatly simplifying the solution procedure. This approach introduces certain approximations that are effectively modeled with an exchange-correlation functional that accounts for the many-body effects not included in the simplified problem. The functional choice is an important modeling decision, and computational predictions can therefore be sensitive to user selection.
This sensitivity is maximized for systems with highly localized electrons such as transition metals due to self-interaction error, where one electron interacts with its own mean field resulting in an unphysical delocalization of the electron density.; We studied extensively how the incorporation of the widely employed Hartree-Fock and meta-GGA-type exchange functionals affects DFT predictions on transition metal complexes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Chemical Engineering Practice, Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 171-186).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122481</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of developmental exposures to the harmful algal bloom toxin domoic acid on neural development and behavior</title>
<link>https://hdl.handle.net/1721.1/122480</link>
<description>Impacts of developmental exposures to the harmful algal bloom toxin domoic acid on neural development and behavior
Panlilio, Jennifer Martinez.
Harmful algal blooms (HABs) can produce potent neurotoxins that accumulate in seafood and affect human health. One HAB toxin of concern is domoic acid (DomA), a glutamate analog produced by the marine diatom Pseudo-nitzschia spp. Current regulatory limits are designed to prevent acute neurotoxicity in adult humans. However, research shows that low-level exposure during early life can lead to long-term changes in behavior, neural connectivity, and brain morphology. To determine the underlying mechanisms of developmental toxicity, this dissertation used zebrafish as a tool to: i) establish the developmental window of susceptibility for DomA toxicity, ii) characterize the behavioral consequences of exposures, and iii) identify the cellular targets and processes perturbed by DomA. I found that DomA exposure, particularly at 2 days post-fertilization (dpf), led to altered startle response behavior, myelination defects, and the downregulation of axonal and myelin structural genes.; Using vital dyes and immunolabeling, I assessed DomA-induced alterations in cells required for the startle response. I found no differences in the number of sensory neuromasts or in the sensory cranial ganglia structures that detect the acoustic stimuli. However, the majority of DomA-treated larvae lacked one or both Mauthner cells, hindbrain neurons critical for fast startle responses. DomA-treated larvae also had oligodendrocytes with fewer and shorter myelin sheaths, and appeared to aberrantly myelinate neuronal cell bodies. The loss of the Mauthner neurons and their axons may lead to a cellular environment in which oligodendrocytes myelinate neuronal cell bodies in the absence of adequate axonal targets.
Indeed, pharmacological treatment that reduced the oligodendrocyte number also led to the reduction in the number of these aberrant, myelinated cell bodies.; These results indicate that exposure to DomA at a particular period in neural development targets specific cell types, disrupts myelination in the spinal cord, and leads to prolonged behavioral deficits. These mechanistic insights support hazard assessments of DomA exposures in humans during critical periods in early development.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Biology; and the Woods Hole Oceanographic Institution), 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122480</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of genetic and gene regulatory sex differences in mammals</title>
<link>https://hdl.handle.net/1721.1/122479</link>
<description>Evolution of genetic and gene regulatory sex differences in mammals
Naqvi, Sahin.
Sex differences are widespread in mammalian health, development, and disease. Ultimately, sex differences derive from the sex chromosomes; males are XY and females are XX, but the mammalian X and Y chromosomes evolved from an ancestral pair of ordinary autosomes. These genetic sex differences, through a variety of regulatory mechanisms, give rise to sex differences in gene expression across the genome, which in turn result in the observed phenotypic differences between males and females. In this thesis, I take an evolutionary perspective on this pathway, using computational analysis of both publicly available and newly generated data to provide insight into the molecular basis of mammalian sex differences.; First, to better understand the selective forces underlying the evolution of the amniote sex chromosomes from ordinary autosomes, we reconstructed gene-by-gene dosage sensitivities on the ancestral autosomes through phylogenetic analysis of microRNA target sites, finding that preexisting heterogeneities in dosage sensitivity shaped the evolution of both the mammalian XY and avian ZW sex chromosomes. Second, to understand the extent to which genome-wide sex differences are conserved across both tissues and species, we conducted a five-species, twelve-tissue survey of sex differences in gene expression, finding that most sex bias in gene expression has arisen since the last common ancestor of boreoeutherian mammals, and that evolutionary gains or losses of regulation by sex-biased transcription factors likely drove a significant fraction of lineage-specific changes in sex bias.; Third, we used the results of this survey to show that conserved sex bias in gene expression contributes to the male bias in height and body size observed in a range of mammalian species, including humans.
Together, these studies suggest that dosage sensitivity played a key role in both the evolution of mammalian sex chromosomes and their contribution to phenotypic sex differences, while also revealing the widespread nature and phenotypic impact of sex differences in gene expression across the genome.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. "June 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122479</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of enzyme promiscuity in plant metabolic evolution</title>
<link>https://hdl.handle.net/1721.1/122478</link>
<description>The role of enzyme promiscuity in plant metabolic evolution
Levsh, Olesya.
Metabolic expansion was a key event facilitating the transition of early plants from aquatic to terrestrial environments and their subsequent radiation on land. The breadth and diversity of plant specialized metabolism reflect the adaptive innovations that took place during plants' colonization of land over the past 500 million years. Moreover, these rich metabolic networks are valued for the array of bioactive molecules they encompass, many of which have industrial and pharmaceutical applications. Studies of plant specialized metabolic pathways reveal that most of these arose as extensions of pre-existing enzymes and metabolites, leveraging promiscuous enzymatic activities and opportunistic gene duplication events to produce novel metabolic traits. The goal of this dissertation is to examine the mechanisms of enzyme promiscuity with regard to the expansion of plant specialized metabolism. First, we probed the structural and dynamic bases of substrate promiscuity in hydroxycinnamoyltransferase (HCT), a member of the BAHD acyltransferase family, which has contributed significantly to the diversification of specialized metabolism throughout land plant evolution. We subsequently generated an Arabidopsis thaliana line containing a loss-of-function hct mutation and demonstrated that the mutant phenotype can be complemented with orthologous HCTs, establishing an in vivo system for studying promiscuity. Finally, we characterized a case of independent emergence of rosmarinic acid biosynthesis, a specialized metabolic trait, in the closely related Boraginaceae and Lamiaceae plant families. Collectively, our findings contribute to a mechanistic understanding of the role of enzyme promiscuity in plant metabolic evolution.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; "February 2019." Cataloged from student-submitted PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122478</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning for problems with missing and uncertain data with applications to personalized medicine</title>
<link>https://hdl.handle.net/1721.1/122473</link>
<description>Machine learning for problems with missing and uncertain data with applications to personalized medicine
Pawlowski, Colin.
When we try to apply statistical learning in real-world applications, we frequently encounter data which include missing and uncertain values. This thesis explores the problem of learning from missing and uncertain data, with a focus on applications in personalized medicine. In the first chapter, we present a framework for classification under data uncertainty based upon robust optimization. We show that adding robustness in both the features and labels results in tractable optimization problems for three widely used classification methods: support vector machines, logistic regression, and decision trees. Through experiments on 75 benchmark data sets, we characterize the learning tasks for which adding robustness provides the most value. In the second chapter, we develop a family of methods for missing data imputation based upon predictive methods and formal optimization.; We present formulations for models based on K-nearest neighbors, support vector machines, and decision trees, and we develop an algorithm, OptImpute, which finds high-quality solutions and scales to large data sets. In experiments on 84 benchmark data sets, we show that OptImpute outperforms state-of-the-art methods in both imputation accuracy and performance on downstream tasks. In the third chapter, we develop MedImpute, an extension of OptImpute specialized for imputing missing values in multivariate panel data. This method is tailored to data sets that have multiple observations of the same individual at different points in time. In experiments on the Framingham Heart Study and Dana-Farber Cancer Institute electronic health record data, we demonstrate that MedImpute improves the accuracy of models predicting 10-year risk of stroke and 60-day risk of mortality for late-stage cancer patients.; In the fourth chapter, we develop a method for tensor completion which leverages noisy side information available on the rows and/or columns of the tensor.
We apply this method to the task of predicting anti-cancer drug response at particular dosages. We demonstrate significant gains in out-of-sample accuracy when filling in missing values on two large-scale anti-cancer drug screening data sets with genomic side information.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 205-215).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122473</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculating bully : explaining Chinese coercion</title>
<link>https://hdl.handle.net/1721.1/122472</link>
<description>Calculating bully : explaining Chinese coercion
Zhang, Ketian.
Since 1990, China has used coercion for territorial disputes, foreign arms sales to Taiwan, and foreign leaders' meetings with the Dalai Lama, despite adverse implications for its international image. China is also curiously selective in the timing, target, and tools of coercion: most cases of Chinese coercion are not military coercion, nor does China use coercion against all states that pose the same threats to its national security. The question of China's coercion patterns -- crucial for the prospect of peace and stability in the Asia-Pacific region and critical for understanding states' use of coercion -- has not been systematically answered. My dissertation therefore examines when, why, and how China attempts to coerce states over perceived threats to its national security. This question entails two parts: 1) when and why does China choose coercion, and 2) if coercion is chosen, what tools does China utilize? I explain Chinese coercion with the cost balancing theory and test it against China's diplomacy.; I employ qualitative methods such as process tracing and congruence testing, leveraging primary Chinese documents and interviews with officials, government policy analysts, and scholars. My dissertation conducts congruence tests of the macro trends of Chinese coercion while employing process tracing on specific cases of Chinese coercion. For temporal variation, I examine cases in which China coerces a given potential target country at some times and refrains from coercion at others. For cross-national variation, I analyze cases in which, in the same period and among comparable countries, China coerces some but not others.
Contrary to conventional wisdom and in contrast with historical rising powers, China is a cautious bully: it does not coerce frequently, and it uses military coercion less as it becomes stronger, resorting mostly to non-militarized tools.; In short, states' decisions to coerce and their choices of coercive tools cannot be explained simply by the power variable. I identify the centrality of reputation for resolve and economic vulnerability in states' calculations of coercion. States coerce one target to deter others -- "killing the chicken to scare the monkey."
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 545-562).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122472</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An adaptive space-time discontinuous Galerkin method for reservoir flows</title>
<link>https://hdl.handle.net/1721.1/122471</link>
<description>An adaptive space-time discontinuous Galerkin method for reservoir flows
Jayasinghe, Yashod Savithru.
Numerical simulation has become a vital tool for predicting engineering quantities of interest in reservoir flows. However, the general lack of autonomy and reliability prevents most numerical methods from being used to their full potential in engineering analysis. This thesis presents work towards the development of an efficient and robust numerical framework for solving reservoir flow problems in a fully-automated manner. In particular, a space-time discontinuous Galerkin (DG) finite element method is used to achieve a high-order discretization on a fully unstructured space-time mesh, instead of a conventional time-marching approach. Anisotropic mesh adaptation is performed to reduce the error of a specified output of interest, by using a posteriori error estimates from the dual weighted residual method to drive a metric-based mesh optimization algorithm.; An analysis of the adjoint equations, boundary conditions and solutions of the Buckley-Leverett and two-phase flow equations is presented, with the objective of developing a theoretical understanding of the adjoint behaviors of porous media models. The intuition developed from this analysis is useful for understanding mesh adaptation behaviors in more complex flow problems. This work also presents a new bottom-hole pressure well model for reservoir simulation, which relates the volumetric flow rate of the well to the reservoir pressure through a distributed source term that is independent of the discretization. Unlike Peaceman-type models which require the definition of an equivalent well-bore radius dependent on local grid length scales, this distributed well model is directly applicable to general discretizations on unstructured meshes.; We show that a standard DG diffusive flux discretization of the two-phase flow equations in mass conservation form results in an unstable semi-discrete system in the advection-dominant limit, and hence propose modifications to linearly stabilize the discretization. 
Further, an artificial viscosity method is presented for the Buckley-Leverett and two-phase flow equations, as a means of mitigating Gibbs oscillations in high-order discretizations and ensuring convergence to physical solutions. Finally, the proposed adaptive solution framework is demonstrated on compressible two-phase flow problems in homogeneous and heterogeneous reservoirs. Comparisons with conventional time-marching methods show that the adaptive space-time DG method is significantly more efficient at predicting output quantities of interest, in terms of degrees-of-freedom required, execution time and parallel scalability.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 205-216).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122471</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing interfacial structures for selective electrocatalysis</title>
<link>https://hdl.handle.net/1721.1/122454</link>
<description>Designing interfacial structures for selective electrocatalysis
Yan, Bing, Ph. D., Massachusetts Institute of Technology.
Selective electrocatalysis in fuel cells depends on the tolerance of electrode catalysts to cross-over species. A typical polymer electrolyte membrane fuel cell (PEMFC) employs a Nafion-based membrane to separate the Pt-based cathode and anode. Nevertheless, the intrinsic porosity of Nafion membranes allows reactants and products to cross over from one electrode chamber to the other. As a result, the poor selectivity of Pt leads to mixed reactivity at both electrodes and thus decreased output voltage and power density. In principle, the membrane can be eliminated if the cathode and anode catalysts are selective to the desired half reactions. This thesis therefore aims to develop selective cathode and anode catalysts. For the cathode, transition metal chalcogenides are selective catalysts for the oxygen reduction reaction. We investigate the structure, activity, surface dynamics, and mechanism of a nickel sulfide catalyst during oxygen reduction catalysis. By employing the selective nickel sulfide catalyst as the cathode, we construct a proof-of-concept membrane-free fuel cell which significantly outperforms an unselective Pt-cathode congener. For the anode, there is still a paucity of intrinsically selective fuel oxidation catalysts. To achieve high O2 tolerance, we design a new configuration of electrocatalysis by employing a solid oxide-based mixed electron-proton conductor (MEPC) as a condensed membrane to segregate the catalyst and electrolyte while transporting only H-atom equivalents, thus blocking O2 and impurities dissolved in the electrolyte from reaching the catalyst surfaces. We investigate the activity, selectivity, and mechanism of the catalyst/MEPC membrane composite electrode.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122454</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopy and rational chemical design of single emitters for quantum photonics</title>
<link>https://hdl.handle.net/1721.1/122453</link>
<description>Spectroscopy and rational chemical design of single emitters for quantum photonics
Utzat, Hendrik.
Single optical emitters have developed from objects of study in fundamental photophysics and local chemical dynamics into technologically relevant building blocks for optical quantum information processing (QIP). This thesis mirrors this development by bridging classical photophysical studies of single emitters and their rational chemical design towards applications in QIP. In the first part, I survey the energetic heterogeneity, emission linewidths, and single-photon emission purity of single lead halide perovskite quantum dots (PQDs) in dilute solutions using photon-correlation spectroscopy. I identify the uniquely small inhomogeneous broadening of PQDs and the remarkable tunability of their biexciton Auger recombination rate. Mechanisms for the single-emitter linewidth broadening and variable biexciton quantum yield are put forth. In the second part, I assess the optical coherence properties of PQDs as efficient single-photon emitters at low temperatures with photon-correlation Fourier spectroscopy. These measurements show that single PQDs exhibit a unique combination of fast radiative lifetimes, long optical coherence times, and suppressed spectral diffusion, which renders their emission highly coherent. I propose that PQDs are the first colloidal quantum dot material with the prospect of indistinguishable single-photon and entangled photon pair generation, and I make strides towards the chemical improvement of PQDs. I extend these studies to single quantum defects in monolayers of two-dimensional hexagonal boron nitride (hBN). In the third part, I develop and validate a lineshape model for single CdSe/CdS core-shell quantum dots and show that optical phonons play a pivotal role in determining the room-temperature emission linewidth. I undertake efforts to extend solution-based single-emitter spectroscopy to in-flow measurements in microfluidic channels.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-175).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122453</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total synthesis and anticancer evaluation of all known communesin alkaloids and related complex derivatives</title>
<link>https://hdl.handle.net/1721.1/122452</link>
<description>Total synthesis and anticancer evaluation of all known communesin alkaloids and related complex derivatives
Pompeo, Matthew M.
I. Convergent and Biomimetic Enantioselective Total Synthesis of (-)-Communesin F Enabled by Diazene-Directed Fragment Assembly. The first biomimetic enantioselective total synthesis of (-)-communesin F based on a late-stage heterodimerization and aminal exchange is described. Our synthesis features the expedient diazene-directed assembly of two advanced fragments to secure the congested C3a-C3a' linkage in three steps, followed by a highly efficient biogenetically inspired aminal reorganization to access the heptacyclic communesin core in only two additional steps. Enantioselective syntheses of the two fragments were developed, with highlights including the catalytic asymmetric halocyclization of tryptamine derivatives, a stereoselective sulfinimine allylation, and an efficient cyclotryptamine-C3a-sulfamate synthesis by either a new silver-promoted nucleophilic amination or a rhodium-catalyzed C-H amination protocol.; The versatile synthesis of the fragments, their stereocontrolled assembly, and the efficient aminal exchange, as supported by in situ monitoring experiments, in addition to the final-stage N1'-acylation of the communesin core, provide a highly convergent synthesis of (-)-communesin F. II. Enantioselective Total Synthesis and Anticancer Activity of All Known Communesin Alkaloids and Related Complex Derivatives. A unified enantioselective total synthesis and side-by-side anticancer evaluation of all known epoxide-containing communesin alkaloids and related complex derivatives is described.
Our synthesis is predicated on the convergent and modular diazene-directed assembly of two advanced fragments to secure the critical C3a-C3a' linkage, followed by a guided biomimetic aminal reorganization to deliver the heptacyclic core of these alkaloids.; Concise enantioselective syntheses of the fragments were devised, with highlights including the use of a new, rationally designed sulfinamide chiral auxiliary and a highly efficient 1,1,1-trifluoroacetone-mediated epoxidation. The modularity of our convergent approach enables the rapid synthesis of all epoxide-containing members of the communesin family from a common synthetic intermediate, which prompted the stereochemical revision of (-)-communesin I, the most recently isolated communesin analogue. Furthermore, the generality of our biomimetic reorganization was conclusively demonstrated in the first total synthesis of an iso-communesin derivative, an unnatural constitutional isomer of the communesin skeleton. Finally, we report the first comparative analysis of the anticancer activities of all naturally occurring communesin alkaloids and nine complex derivatives against five human cancer cell lines.; From these data, we have identified (-)-communesin B as the most potent natural isolate and discovered that derivatives with N8'-sulfonamide substitution exhibit up to a ten-fold increase in potency over the natural products.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Vita. Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122452</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic strategies for control of structure from individual macromolecules to nanoscale materials to networks</title>
<link>https://hdl.handle.net/1721.1/122451</link>
<description>Synthetic strategies for control of structure from individual macromolecules to nanoscale materials to networks
Ehrlich, Deborah J. C.
Chapter 1. Aqueous self-assembly of prodrug macromonomers. A series of highly tunable micelles for drug delivery were made from norbornene-based poly(ethylene glycol) macromonomers with covalently linked drugs. A total of five macromonomers were made using three different drugs (telmisartan, paclitaxel, and SN-38) and three different drug loadings. Combinations of these macromonomers were then allowed to self-assemble into micellar aggregates. The size, stability, and shape of these micellar aggregates were controlled through the highly versatile macromonomer structure. Chapter 2. Post-micellization modification of norbornene-containing prodrug macromonomers. Highly tunable micelles for drug delivery were functionalized after their self-assembly. Post-micellization inverse electron demand Diels-Alder reactions of norbornenes and tetrazines were used to trigger changes in micelle size and stability through the addition of either hydrophilic or hydrophobic tetrazines.; Thiol-ene addition reactions were used to increase micelle size and form chemically crosslinked nanoparticles. These modifications of norbornene-containing prodrug macromonomer assemblies illustrate their versatility. Chapter 3. Synthesis of polymers by iterative exponential growth. A scalable synthetic route that enables absolute control over polymer sequence and structure has remained a key challenge in polymer chemistry. Here, we report an iterative exponential growth plus side-chain functionalization (IEG+) strategy for the production of macromolecules with defined sequence, length, and stereoconfiguration. Each IEG+ cycle begins with the azide opening of an enantiopure epoxide, followed by side-chain functionalization, alkyne deprotection, and copper-catalyzed azide-alkyne cycloaddition (CuAAC).
These cycles have been conducted to form unimolecular macromolecules with molar masses of over 6,000 g/mol. Subsequent modifications to IEG+ allow for the functionalization of monomers prior to the IEG+ cycle, expanding the library of compatible side-chain chemistries. Chapter 4. Introduction to elastomer toughening strategies. Silicone elastomers are ubiquitous. Here, silicone elastomers are discussed in terms of network structure, the impact of network structure upon physical properties, and modifications of network structure to achieve desired physical properties. Fillers, the standard toughening strategy, are discussed in conjunction with entanglement density. Focus is placed on the impact of entanglement density on material properties. Topological networks are discussed and noted for their stress-dissipative properties. Chapter 5. Topology modification of polydimethylsiloxane elastomers through loop formation. Topological networks are well known for their stress dissipation through the pulley effect, leading to soft, extensible materials. Combining these properties with a traditionally crosslinked network to produce a hybrid material allows for enhanced extensibility without a loss in modulus. Here, such hybrid networks were made with poly(dimethylsiloxane) polymers of a range of molecular weights. Side-loop polymer brushes were synthesized and then crosslinked to create hybrid networks with the statistical formation of topological bonds. These materials were characterized through tensile testing. Elastomers formed with the same molecular weight polymer in both side-loops and network formation did not show mechanical properties that depended upon the fraction of polymer used for brush formation. Elastomers made with long polymers for brush formation and shorter polymers for network formation resulted in highly extensible systems without significant loss in modulus.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122451</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural insights into conformationally-gated reactions catalyzed by thiamine pyrophosphate-dependent enzymes</title>
<link>https://hdl.handle.net/1721.1/122450</link>
<description>Structural insights into conformationally-gated reactions catalyzed by thiamine pyrophosphate-dependent enzymes
Chen, Yang-Ting (Percival Yang Ting)
Thiamine pyrophosphate (TPP)-dependent enzymes utilize TPP as a biological carbanion to initiate reactions on substrates with carbonyl carbon(s), such as pyruvate and 2-oxoglutarate, two central metabolites. This thesis presents structural analyses of three TPP-dependent enzymes. Most interestingly, the mechanism of each of the three proteins studied contains a gated step that is drastically accelerated, by 3-5 orders of magnitude, through conformational changes. 1-deoxy-D-xylulose 5-phosphate (DXP) synthase catalyzes the conversion of pyruvate and D-glyceraldehyde 3-phosphate (D-GAP or G3P) into DXP, an essential precursor of isoprenoids, vitamin B1, and vitamin B6 (pyridoxal phosphate, PLP) in bacterial pathogens. Humans use different pathways to access these essential metabolites; thus, selective inhibition of DXP synthase has been considered a possible approach for antibiotic therapies. Surprisingly, decarboxylation of pyruvate by DXP synthase is accelerated 600-fold in the presence of D-GAP, the molecular basis of which was unknown. Here we present crystallographic snapshots at different stages of the DXP synthase reaction, using the well-studied DXP synthase from Deinococcus radiodurans, in order to evaluate the molecular basis of the D-GAP-based rate enhancement and afford a deeper mechanistic understanding that will further pharmaceutical campaigns. Pyruvate:ferredoxin oxidoreductase (PFOR) and 2-oxoglutarate:ferredoxin oxidoreductase (OGOR) reversibly oxidize 2-oxoacids and coenzyme A (CoA) into CO2 and acyl-CoA. This enzyme family (2-oxoacid:ferredoxin oxidoreductase, OFOR) is essential in three of the seven known pathways of biological CO2 fixation and has consequently been a focus for biological engineering.
In addition to TPP, OFORs bind [4Fe-4S] clusters that mediate electron transfer between reaction intermediates and external ferredoxins. Remarkably, CoA binding accelerates electron transfer from the reaction intermediate to an enzyme-bound [4Fe-4S] cluster by as much as 100,000-fold. In this work, we resolved the long-standing question of how CoA achieves this dramatic rate acceleration by acquiring the first structure of an OFOR with CoA bound. The structures of PFOR, and later OGOR, revealed the binding mode of CoA and showed that dramatic conformational changes accompany CoA binding. Comparisons between PFOR and OGOR further explain the molecular basis for substrate specificity in the OFOR family. Our findings are expected to facilitate future efforts to bioengineer CO2 fixation pathways.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Vita. Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122450</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methodology for the syntheses of pharmaceutically significant functional groups</title>
<link>https://hdl.handle.net/1721.1/122449</link>
<description>Methodology for the syntheses of pharmaceutically significant functional groups
Danahy, Kelley E.
Though pharmaceutical small molecules span a wide range of structures and functions, several common features emerge upon analysis. For example, many medicinal compounds contain combinations of nitrogen-containing heterocycles, polar functional groups such as fluorides or sulfoximines, and stereocenters that are vital to their bioactivity. As a result, new methods to synthesize these important functional groups are continually in demand. Three main methods to synthesize common pharmacophores are discussed herein: the selective benzylic fluorination of azaheterocycles via a nitrogen-fluorine halogen bond, the synthesis of sulfoxides and sulfenamides from highly reactive and unstable chloramine in continuous flow, and the partial translation of the process route to enantiopure (S)-naproxen from batch chemistry to continuous flow.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122449</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on markets and information</title>
<link>https://hdl.handle.net/1721.1/122445</link>
<description>Essays on markets and information
Gitmez, Ahmet Arda.
This thesis contains three chapters on the interplay between markets and information. In the first chapter, I study how market opportunities influence the information revealed in long-term relationships. To this end, I build a model of employment relationships where the worker has private information about match quality, the firm learns about match quality over time, and the firm makes a match-specific investment. Improved market opportunities for workers promote productive relationships because they let the worker signal her firm-specific productivity by forgoing market opportunities. Signaling allows the firm to bypass the learning stage and encourages investment (signaling effect). Improved market opportunities for firms, however, discourage long-term relationships and undermine investment incentives (layoffs effect). I embed the relationship game in a search market equilibrium where market opportunities for both parties depend on search frictions and market thickness. With intermediate values of market thickness, relationship productivity and worker welfare are U-shaped in search frictions: when search frictions decrease from high to intermediate levels, the layoffs effect dominates; when search frictions are sufficiently low, the signaling effect dominates. In the second chapter, joint with Umut Dur, Parag Pathak and Tayfun Sönmez, I study the effects of enlarging the message space to allow for further information revelation in centralized markets. School districts with choice plans struggle to expand access to schools across neighborhoods while keeping busing costs down. Existing assignment mechanisms allow students to rank schools, but do not elicit preferences about transportation. Typically, if a student is assigned far from home, the district provides transportation.
We propose enlarging the message space in the mechanism by allowing students to apply to a school both with and without transportation. Under our proposal, a non-neighborhood applicant who is willing to forgo transportation services obtains a greater chance of being assigned to a school. In decentralized admissions systems, we show that this option reduces transportation but not access for non-neighborhood applicants. We then generalize these results to a centralized assignment mechanism under special conditions. Expanding the message space provides a new tool for distributional objectives that operates in a different fashion than more traditional levers like changing priorities or choice sets. In the third chapter, joint with Pooya Molavi, I study an information design problem. We present a model of media capture, in which a politician has control over the editorial policies of the media. At the heart of the model is the trade-off faced by a politician who wants to persuade the citizens: she wants to capture the media and produce news in her favor, but capture leads the citizens to stop following the media because they find them uninformative. The model is a Bayesian persuasion model (a la Kamenica and Gentzkow (2011)) with an audience of heterogeneous priors. We identify conditions on the distribution of priors that guarantee full information revelation and no information revelation by the captured media. The model also has several testable predictions: (i) the information content of the news provided by the captured media decreases as the politician becomes more popular, (ii) in societies with more extremists than moderates, the media are more likely to produce "negative" news than "positive" news, and (iii) in societies where the media are less accessible to citizens, the media are more informative.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-154).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122445</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in applied theory</title>
<link>https://hdl.handle.net/1721.1/122443</link>
<description>Essays in applied theory
Chen, Lizi, Ph. D., Massachusetts Institute of Technology.
In "Incomplete Contracting with Endogenous Competition," I consider a variant of the incomplete contracting with renegotiation model introduced by Hart and Moore (1988). I study a trading relation between a buyer and seller, where some ex-ante relationship-specific investment on the part of the seller is needed to generate value for the buyer. Uncertainty is revealed ex post, in that prior to the investment stage, the buyer does not know which type of service she may need, and it is impossible to describe under what precise circumstances she needs a particular service. The contract can take only two broad forms: (1) a specification of the nature of a service to be provided under all circumstances, or (2) a general option contract, namely, a menu of services that the seller agrees to provide at predetermined terms. In addition to uncertainty regarding state realizations, which has been the focus of the literature thus far, I consider a different source of contracting friction, namely, uncertainty about downstream profitability. I show that depending on the specific assumptions, the competitor may invest in, produce, and sell an imperfect substitute, or free-ride on the incumbent supplier's investment and replicate a perfect substitute with positive probability. However, in a subgame-perfect equilibrium, the incumbent supplier will correctly anticipate potential entrants and change its ex-ante investment to account for downstream competition. It may also distort the level of its ex-ante investment to deter future entry. In this paper, information affects the outcome of economic transactions, but the presence or absence of information is exogenously given by assumptions about the form of competition. In my paper "Selling Information: Multidimensional Oligopolistic Competition," I model the information structure as a variable endogenously chosen to optimally manage competition.
Specifically, I consider a model of an economic transaction between an upstream monopolist and several downstream oligopolists. The downstream parties may be e-commerce retailers who compete over a heterogeneous customer base. Each party may have some prior assessment of the distribution of customer types, but would benefit from incremental knowledge of customer information. The upstream party, in this scenario, is an information vendor who has access to the technology required to develop a targeting device. Since information is valuable, to extract surplus the upstream party would like to improve the quality of information. This motive is counterbalanced by the incentive to manage competition. The upstream monopolist supplies a menu of multi-dimensional intermediate goods from which the downstream oligopolists select. The oligopolists then use the previously purchased intermediate goods to produce the final products and compete with each other. The model enriches Bonatti (2015)'s multi-dimensional information product model by considering what the information product is used for: competing for heterogeneous customers. The key feature of the model is that the information good is an intermediate good, whose value is affected by the extent of ex post competition among the downstream firms. The model captures the indirect externalities conferred in the market for information. Specifically, the value of customer information to a given firm is no longer determined solely by the characteristics of that firm and those customers. Instead, the value now also depends on the market competition structure among all downstream firms. For example, competition over customer information has features similar to an arms race: having better information than the opponent allows one to engage in better price discrimination, but it also increases the value of information to the opponent and induces more aggressive demand for information on the part of the opponents.
In addition to economic transactions, in my third paper I study the role of information in managing and orienting actions beyond the market. In "Information Theory Foundation of Propaganda," I develop a model of strategic information signaling with an informed sender and a continuum of imperfectly informed receivers. The sender sends a costly signal to disrupt the receivers' coordination and to bias their aggregate action away from the true state towards the sender's desired state. The receivers want to match their actions to the true state and also seek to coordinate with each other. The leading application of the model is an authoritarian regime sending propaganda to its citizens to prevent them from learning the true strength of the regime and taking collective action. In equilibrium, the sender's manipulation does not succeed in changing the mean of the receivers' beliefs, but manipulation makes their interpretation of the signal noisier. This model helps resolve an empirical puzzle: since we observe propaganda, regimes apparently believe that it works in some way, but can propaganda work even if the citizens who see it know it is biased information? In the model, propaganda works not through changing beliefs per se, but through adding noise and confusion into the communication structure, so that citizens, who value coordination, are more likely to redirect their attention across various sources of information.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122443</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated cell-targeted electrophysiology in vivo and non-invasive gamma frequency entrainment</title>
<link>https://hdl.handle.net/1721.1/122429</link>
<description>Automated cell-targeted electrophysiology in vivo and non-invasive gamma frequency entrainment
Suk, Ho-Jun.
Targeted patch clamp recording is a powerful method for characterizing visually identified cells in intact neural circuits, but it requires skill to perform. We found that a closed-loop real-time imaging strategy, which continuously compensates for cell movement while approaching the cell with a pipette tip, allows for the development of an algorithm amenable to automation. We built a robotic system that implements this algorithm and validated that our system can automatically patch fluorophore-expressing neurons of multiple types in the living mouse cortex, with yields comparable to those of skilled human experimenters. By facilitating targeted patch clamp recordings in vivo, our robot may enable scalable characterization of identified cell types in intact neural circuits. The activities of individual neurons in neural circuits give rise to network oscillations, whose frequencies are closely related to specific brain states. For example, network oscillations in the 30-90 Hz range, observed using electroencephalography (EEG), are called gamma oscillations and increase during attention, memory formation, and recall. In Alzheimer's disease (AD), gamma oscillations are disrupted compared to those in healthy individuals. Recently, noninvasive visual and auditory stimulation at 40 Hz, called Gamma ENtrainment Using Sensory stimulus ("GENUS"), has been shown to positively impact pathology and improve memory in AD mouse models, with concurrent visual and auditory GENUS leading to a more widespread effect in the AD mouse brain than visual or auditory stimulation alone. However, it is unclear what effect such sensory stimulation would have on the human brain.
To test the safety and feasibility of GENUS in humans, we developed a device that can deliver 40 Hz light and sound stimulation at intensity levels tolerable to humans. We found that our device can safely produce steady 40 Hz entrainment in cognitively normal young (20-33 years old) and older (55-75 years old) subjects, with concurrent visual and auditory stimulation leading to stronger and more widespread entrainment than visual or auditory stimulation alone. These findings suggest that GENUS can be a safe and effective method for widespread 40 Hz entrainment, which may have therapeutic effects in people suffering from AD.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 105-110).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122429</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Normative discourse and social negotiation</title>
<link>https://hdl.handle.net/1721.1/122428</link>
<description>Normative discourse and social negotiation
Hesni, Samia.
This dissertation lies at the intersection of philosophy of language, social and political philosophy, and feminist philosophy. The first half of the dissertation is primarily about the ways language can be used to stereotype, denigrate, oppress, or otherwise harm. The second half is about how language can be used to resist and undermine those harms. In the four chapters of my dissertation, I examine the ways in which language can shape the social world. Language allows people to reinforce social norms and systems like sexism, racism, and oppression more broadly. But it also allows people to disrupt these systems. I argue that it is worth looking seriously at the linguistic mechanisms by which individuals can do both, and at the social and political systems in place that enable such language use in the first place. Only by combining the two can we start to get the full story about language, oppression, and power. Within this broad research program, I am specifically interested in implicit discourse: language that indirectly or implicitly communicates one thing while explicitly stating another. Implicit language is extremely important for understanding various mechanisms of linguistic harm and oppression. Chapter 1 examines normative generics like 'boys don't cry,' whose utterances often carry with them an injunction that boys not cry, or a condemnation of crying boys. When someone utters a normative generic like 'women stay at home and raise families,' they are reinforcing a harmful social norm without explicitly using any evaluative terms like 'should,' 'good,' or 'right.' In Chapter 2, I problematize philosophical views on silencing and introduce a new concept of linguistic harm, illocutionary frustration, which occurs when a hearer treats a speaker as though she does not have standing to say what she is saying. In Chapter 3, I give a meta-philosophical analysis of socially informed philosophy of language.
In it, I argue that in the service of intellectual inquiry and social justice, we would do well to incorporate types of social situatedness into our methodological frameworks. I end in Chapter 4 by reviewing the ways in which social scripts play pivotal roles in enabling interpersonal subjugation, and offer a way out.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122428</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The dative arguments in Bulgarian</title>
<link>https://hdl.handle.net/1721.1/122427</link>
<description>The dative arguments in Bulgarian
Iovtcheva, Snejana P.
This dissertation is a study of the syntactic and semantic properties of arguments marked with a dative clitic in Bulgarian. Contemporary Bulgarian has been claimed to have lost its Case system compared to its previous historical stages. Yet, as the current study demonstrates, the language has systematically utilized a morphological marker in the form of a dative clitic to identify a particular set of arguments across a wide variety of structural environments: the dative arguments. The major proposal advanced here is that dative arguments are treated uniformly by the grammar of the language because they uniformly represent peripheral arguments introduced in the specifier of a functional head that assigns them morphological dative Case. Despite the fact that these arguments might assume a wide variety of thematic interpretations (recipients, goals, possessors, sources, beneficiaries, malefactives, etc.), I demonstrate that their meaning is derived structurally from the position in which they are licensed. Crucially, only one dative occurs within a structural domain. The intra-linguistic comparison of a variety of constructions further leads to the conclusion that in each context datives are prominent arguments introduced at the periphery of a structural domain. This proposal explains the ability of datives to bind nominative subjects, to serve as structural subjects in impersonal predicative constructions, and to interact with nominative subjects in bi-clausal environments. To capture their uniform structural distribution and simultaneously account for the wide range of thematic meanings, I propose that the argument introducer is a semantically underspecified functional head of the High APPL(icative) type as defined in Pylkkänen (2002/2008).
The current study contributes to the ongoing theoretical debate of argument structure and argument interpretation and introduces Bulgarian as a relevant language when it comes to the study of datives in double object constructions, 'quirky' dative subjects, and dative possessors.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 162-174).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122427</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>What we owe to ourselves : essays on rights and supererogation</title>
<link>https://hdl.handle.net/1721.1/122426</link>
<description>What we owe to ourselves : essays on rights and supererogation
Muñoz, Daniel (Daniel B.)
Some sacrifices, like giving a kidney or heroically dashing into a burning building, are supererogatory: they are good deeds beyond the call of duty. But if such deeds are really so good, philosophers ask, why shouldn't morality just require them? The standard answer is that morality recognizes a special role for the pursuit of self-interest, so that everyone may treat themselves as if they were uniquely important. This idea, however, cannot be reconciled with the compelling picture of morality as impartial: the view that we are each anyone's equal. I propose an alternative Self-Other Symmetric account of our moral freedom: the limits on what morality may demand of us are set by the duties we owe to ourselves. I begin with a defense of the Self-Other Symmetry: the idea that we owe the same duties to ourselves, and have the same rights against ourselves, as any relevantly similar other. Because we are consenting parties to our own actions, I argue, our rights against ourselves do not function like the rights of unwilling others. Instead, they play a permissive function, allowing us to block the demand to give up what is ours. I conclude by uniting, aggravating, and trying to solve some paradoxes of supererogatory permissions, guided by the idea that morality cannot be reduced to a ranking of options from best to worst. Rights against oneself are an irreducible second dimension, one that we need if we are to unify rights and supererogation into an impartial moral vision.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122426</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolutionary dynamics of the human gut microbiome</title>
<link>https://hdl.handle.net/1721.1/122423</link>
<description>Evolutionary dynamics of the human gut microbiome
Zhao, Shijie
The constituent members of the human gut microbiome encounter a myriad of selective pressures from the host environment and from other microbial members of the ecosystem. Understanding the evolutionary dynamics of microbial species in the gut microbiome requires sequencing information that differentiates strains and even single cells. In this thesis, I present efforts to investigate the evolution of bacterial strains in their complex natural environments. In the first project, I discover that a commensal species, Bacteroides fragilis, undergoes within-person adaptive evolution in the absence of antibiotics. Combining culture-based whole-genome sequencing with metagenomes, I uncover genes important to B. fragilis survival in the human gut microbiome and describe evolutionary dynamics within individuals and across populations. In the second project, I develop a strain-tracking method that predicts personal microbiomes. Using this method to track closely related strains, I discover signals of adaptive evolution for Bacteroidetes strains, potentially over decades of colonization, in adult twins. In the final project, this strain-tracking method is applied to advance the analysis of microbial transmission within social networks of Fiji islanders. These projects demonstrate the power of genome-resolved and strain-resolved methods in revealing insights into the evolutionary dynamics of the gut microbiome. Future studies are expected to investigate other taxonomic groups in depth, and technical breakthroughs are needed to improve the throughput of evolutionary studies of complex systems like the gut microbiome.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 156-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122423</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The eco-evolutionary dynamics of microbial populations</title>
<link>https://hdl.handle.net/1721.1/122422</link>
<description>The eco-evolutionary dynamics of microbial populations
VanInsberghe, David (David Stephen)
Microbes have adapted to life in complex microbial communities in a large variety of ways, and they are continually evolving to better compete in their changing environments. But identifying the conditions under which a particular microbe thrives, and how it has become adapted to those conditions, can be exceedingly difficult. For instance, Clostridium difficile became widely known for being the world's leading cause of hospital-associated diarrhea, but people can also have C. difficile in their gut without developing diarrhea. Although these asymptomatic carriers are now thought to be the largest source of infection, we know very little about how these people become colonized. In the first chapter of my thesis, I use publicly available microbiome survey data and a mouse model of colonization to show that C. difficile colonizes people immediately after diarrheal illnesses, suggesting C. difficile is a disturbance-adapted opportunist. However, the differences between very recently diverged microbial populations that are adapted for growth in different conditions can be very difficult to detect. To address this limitation, I developed a method of identifying regions that have undergone recent selective sweeps in these populations as a means of distinguishing them and of specifically quantifying their abundance in complex environments. But part of what makes microbial evolution so difficult to interpret is the vast diversity of genes that are shared by only a fraction of the members of a population. To better understand how these flexible regions are structured, I systematically extracted all contiguous flexible regions in nine marine Vibrio populations and compared their organization and evolutionary histories. I found that horizontal gene transfer and social interactions have led to the evolution of modular gene clusters that mediate forms of social cooperation and metabolic tradeoffs, and that these clusters make up a substantial portion of these flexible genomic regions.
The observations made in these studies help us understand how microbes are organized into socially and ecologically cohesive groups, and how they have evolved to interact with complex and changing environments.
Thesis: Ph. D. in Microbiology Graduate Program, Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122422</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing VHH-based tools to study Ebolavirus infection</title>
<link>https://hdl.handle.net/1721.1/122421</link>
<description>Developing VHH-based tools to study Ebolavirus infection
Nguyen, Jason V. M. H. (Jason Vu Minh Hien)
Variable domains of camelid-derived heavy chain-only antibodies, or VHHs, have emerged as a unique antigen-binding moiety that holds promise in its versatility and utility as a tool to study biological questions. This thesis focuses on two aspects of developing tools to study infectious disease, specifically Ebolavirus entry. In Chapter 1, I provide an overview of antibodies, how they have transformed the biomedical field, and how single-domain antibody fragments, or VHHs, have entered this arena. I also touch upon how VHHs have been used in various fields and certain aspects that remain underexplored. Chapter 2 focuses on the use of VHHs isolated from alpacas to study Ebolavirus entry. Two VHHs were found to neutralize Ebolavirus under both Biosafety Level 2 and 4 laboratory conditions. Ongoing experiments to address mechanism focus on two modes of neutralization: cathepsin inhibition or NPC1-mediated inhibition. Finally, Chapter 3 discusses the overall landscape of Ebolavirus therapeutics and future directions of this work.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122421</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brass cities : Innovation policy and local economic transformation</title>
<link>https://hdl.handle.net/1721.1/122404</link>
<description>Brass cities : Innovation policy and local economic transformation
Armstrong, Ben (Ben David) (Scientist in political science), Massachusetts Institute of Technology.
How have some former industrial cities become hubs for high-wage jobs while others continue to grapple with economic stagnation? This dissertation aims to show how government interventions have shaped U.S. cities' paths to income and employment growth. In the 1980s, nearly every state government in the U.S. began investing in innovation policies aimed at diversifying local economies and stimulating the growth of high-technology industries. Three political obstacles - short-term electoral incentives, industry capture, and barriers to collective action - have made the implementation of these policies difficult. Case studies of U.S. cities illustrate how state innovation policies have the potential to overcome these obstacles and transform local economies adapting to the decline of manufacturing.; Two pairs of cities - Pittsburgh, PA and Cleveland, OH; Albany, NY and Rochester, NY - had similar economic prospects in the early 1980s, but have followed different economic trajectories in the decades since. In Pittsburgh and Albany - national leaders in income and employment growth - the state government played the role of coalition builder, convening local coalitions to identify promising innovation initiatives and monitoring them as they implemented those initiatives. In Cleveland and Rochester, where income and employment growth has been comparatively low, pre-existing local coalitions and powerful incumbent industries crowded out a potential role for the state government. The model of state government intervention that emerges from this research suggests that convening local actors with economic incentives can overcome barriers to collective action and empower new actors - particularly universities - to implement economic development initiatives in the long term.; Monitoring can help avoid policy capture by local interests and amplify the initiatives that show the most potential. Forming local economic coalitions in this model depends on local actors (e.g.
universities, firms) identifying regional economic development goals as institutional priorities.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 285-314).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122404</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive robust model predictive control for nonlinear systems</title>
<link>https://hdl.handle.net/1721.1/122395</link>
<description>Adaptive robust model predictive control for nonlinear systems
Lopez, Brett Thomas.
Modeling error and external disturbances can severely degrade the performance of Model Predictive Control (MPC) in real-world scenarios. Robust MPC (RMPC) addresses this limitation by optimizing over control policies but at the expense of computational complexity. An alternative strategy, known as tube MPC, uses a robust controller (designed offline) to keep the system in an invariant tube centered around a desired nominal trajectory (generated online). While tube MPC regains tractability, there are several theoretical and practical problems that must be solved for it to be used in real-world scenarios. First, the decoupled trajectory and control design is inherently suboptimal, especially for systems with changing objectives or operating conditions. Second, no existing tube MPC framework is able to capture state-dependent uncertainty due to the complexity of calculating invariant tubes, resulting in overly-conservative approximations. And third, the inability to reduce state-dependent uncertainty through online parameter adaptation/estimation leads to systematic error in the trajectory design. This thesis aims to address these limitations by developing a computationally tractable nonlinear tube MPC framework that is applicable to a broad class of nonlinear systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 115-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122395</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytics in promotional pricing and advertising</title>
<link>https://hdl.handle.net/1721.1/122389</link>
<description>Analytics in promotional pricing and advertising
Baardman, Lennart.
Big data and the internet are shifting the paradigm in promotional pricing and advertising. The amount of readily available data from both point-of-sale systems and web cookies has grown, enabling a shift from qualitative design to quantitative tools. In this work, we address how firms can utilize the power of analytics to maximize profits in both their offline and online channels. First, we consider an online setting, in which an advertiser can target ads to the customer in question. The goal of the advertiser is to determine how to target the right audience with its ads. We study this problem as a Multi-Armed Bandit problem with periodic budgets, and develop an Optimistic-Robust Learning algorithm with bounded expected regret. Practically, simulations on synthetic and real-world ad data show that the algorithm reduces regret by at least 10-20% compared to benchmarks.; Second, we consider an offline setting, in which a retailer can boost profits through the use of promotion vehicles such as flyers and commercials. The goal of the retailer is to decide how to schedule the right promotion vehicles for their products. We model the problem as a non-linear bipartite matching-type problem, and develop provably good algorithms: a greedy algorithm and an approximate integer program of polynomial size. From a practical perspective, we test our methods on actual data and show potential profit increases of 2-9%. Third, we explore a supply chain setting, in which a supplier offers vendor funds to a retailer who promotionally prices the product to the customer. Vendor funds are trade deals in which a supplier offers a retailer a short-term discount on a specific product, encouraging the retailer to discount the product.; We model the problem as a bilevel optimization model and show that a pass-through constrained vendor fund mitigates forward-buying and coordinates the supply chain in the short term.
Finally, we present a pilot study on the impact of promotional pricing on retail profits. We assess the potential impact of our promotion planning tool on historical data from a large retailer. Our results suggest a 9.94% profit improvement for the retailer.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 191-198).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122389</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The edge of large-scale optimization in transportation and machine learning</title>
<link>https://hdl.handle.net/1721.1/122388</link>
<description>The edge of large-scale optimization in transportation and machine learning
Martin, Sébastien
This thesis focuses on impactful applications of large-scale optimization in transportation and machine learning. Using both theory and computational experiments, we introduce novel optimization algorithms to overcome the tractability issues that arise in real-world applications. We work towards the implementation of these algorithms through software contributions, public policy work, and a formal study of machine learning interpretability. Our implementation in Boston Public Schools generates millions of dollars in yearly transportation savings and has led to important public policy consequences in the United States. This work is motivated by large-scale transportation problems that present significant optimization challenges. In particular, we study the problem of ride-sharing, the online routing of hundreds of thousands of customers every day in New York City.; We also contribute to travel time estimation from origin-destination data, on city routing networks with tens of thousands of roads. We additionally consider the problem of school transportation, the scheduling of hundreds of buses to send tens of thousands of children to school every day. This transportation problem is related to the choice of school start times, for which we also propose an optimization framework. Building on these applications, we present methodological contributions in large-scale optimization. We introduce state-of-the-art algorithms for scheduling problems with time windows (backbone) and for school bus routing (BiRD). Our work on travel time estimation tractably produces solutions to the inverse shortest path length problem by solving a sequence of second-order cone problems.
We also present a theoretical and empirical study of the stochastic proximal point algorithm, an alternative to stochastic gradient methods (the de facto algorithm for large-scale learning).; We also aim at the implementation of these algorithms, through software contributions, public policy work (together with stakeholders and journalists), and a collaboration with the city of Boston. Explaining complex algorithms to decision-makers is a difficult task; we therefore introduce an optimization framework that decomposes models into a sequence of simple building blocks. This allows us to introduce a formal measure of the "interpretability" of a large class of machine learning models, and to study the tradeoff between this measure and model performance, the price of interpretability.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 273-284).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122388</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resource scheduling and optimization in dynamic and complex transportation settings</title>
<link>https://hdl.handle.net/1721.1/122387</link>
<description>Resource scheduling and optimization in dynamic and complex transportation settings
Mellou, Konstantina.
Resource optimization has always been a challenge in traditional fields such as logistics, and particularly so in emerging sharing-economy systems. These systems are by definition founded on the sharing of resources among users, which naturally creates many coordination needs as well as challenges in ensuring enough resource supply to cover customer demand. This thesis addresses these challenges in the setting of vehicle sharing systems, as well as in the context of multi-operation companies that provide a wide range of services to their users. More specifically, the first part of this thesis focuses on models and algorithms for the optimization of bike sharing systems. Shortage of bikes and docks is a common issue in bike sharing systems, and, to tackle this problem, operators use a fleet of vehicles to redistribute bikes across the network.; We study multiple aspects of these operations, and develop models that can capture all user trips that are performed successfully in the system, as well as algorithms that generate complete redistribution plans for the operators to maximize the served demand, with running times fast enough to allow real-time information to be taken into account. Furthermore, we propose an approach for the estimation of the actual user demand which takes into account both the lost demand (users that left the system due to lack of bikes or docks) and shifted demand (users that had to walk to nearby stations to find available resources). More accurate demand representations can then be used to inform better decisions for the daily operations, as well as the long-term planning of the system. The second part of this thesis is focused on schedule generation for resources of large companies that must support a complex set of operations.; Different operation types come with a variety of constraints and requirements that need to be taken into account.
Moreover, specialized employees with a variety of skills and experience levels are required, along with a heterogeneous fleet of vehicles with various properties (e.g., refrigerated vehicles). We introduce the Complex Event Scheduling Problem (CESP), which captures known problems such as pickup-and-delivery and technician scheduling as special cases. We then develop a unified optimization framework for CESP, which relies on a combination of metaheuristics (ALNS) and linear programming. Our experiments show that our framework scales to large problem instances and may help companies and organizations improve operational efficiency (e.g., reduce fleet size).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 145-151).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122387</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering individual mobility patterns from Transit Smart Card data : trip prediction, activity inference, and change detection</title>
<link>https://hdl.handle.net/1721.1/122383</link>
<description>Uncovering individual mobility patterns from Transit Smart Card data : trip prediction, activity inference, and change detection
Zhao, Zhan, Ph.D., Massachusetts Institute of Technology.
While conventional travel survey data are limited in sample size and observation period, recent advances in urban sensing technologies afford the opportunity to collect traces of individual mobility at a large scale and over extended periods of time. As a result, the study of individual mobility has become an emerging field dedicated to extracting patterns that describe individual movements in time and space. Individual mobility is the result of spatiotemporal choices (e.g., the decision to go somewhere at some time) made by individuals with diverse and dynamic preferences and lifestyles. These spatiotemporal choices vary across individuals, but also for the same person over time. However, our understanding of the behavioral mechanisms underlying individual mobility is lacking. The objective of this dissertation is to develop statistical approaches to extract dynamic and interpretable travel-activity patterns from individual-level longitudinal travel records.; Specifically, this work focuses on three problems related to the spatiotemporal behavioral structures in individual mobility--next trip prediction, latent activity inference, and pattern change detection. Transit smart card data from London's rail network are used as a case study for the analysis. To account for the sequential dependency between trips, a predictive model is developed for the prediction of the next trip based on the previous one. Each trip is defined by a combination of start time t (aggregated to hours), origin o, and destination d. To predict the next trip of an individual, we first predict whether the individual will travel again in the period of interest (trip making prediction), and, if so, predict the attributes of the next trip (t, o, d) (trip attribute prediction).
For trip attribute prediction, a Bayesian n-gram model is developed to estimate the probability distribution of the next trip conditional on the previous one.; Based on regularized logistic regression, the trip making prediction models achieve median accuracy levels of over 80%. The prediction accuracy for trip attributes varies by the attribute considered--around 40% for t, 70-80% for o and 60-70% for d. The first trip of the day is more difficult to predict than later trips. Significant variations are found across individuals in terms of the model performance, implying diverse mobility patterns. Human activities have long been recognized as the fundamental driver for travel demand. While passively-collected human mobility data sources, such as the transit smart card data, can accurately capture the time and location of individual movements, they do not explicitly provide any behavioral explanation regarding why people travel, e.g., activity types or travel purposes.; Probabilistic topic models, which are widely used in natural language processing for document classification, can be adapted to uncover latent activity patterns from human mobility data in an unsupervised manner. In this case, the activity episodes (i.e., discrete activity participations between trips) of an individual are treated as words in a document, and each "topic" represents a unique distribution over space and time that corresponds to some activity type. Specifically, a classical topic model, Latent Dirichlet Allocation (LDA), is extended to incorporate multiple heterogeneous spatiotemporal attributes--the location, arrival time, day of week, and duration of stay. The model is tested with different choices of the number of activities Z, and the results demonstrate how new patterns may emerge as Z increases. 
The discovered latent activities reveal diverse spatiotemporal patterns, and provide a new way to characterize individual activity profiles.; Although stable in the short term, individual mobility patterns are subject to change in the long term. The ability to detect such changes is critical for developing behavior models that are adaptive over time. In this study, a travel pattern change is defined as "an abrupt, substantial, and persistent change in the underlying pattern of travel". To detect these changes from longitudinal travel records, we specify one distribution for each of the three dimensions of travel behavior (the frequency of travel, time of travel, and origins/destinations), and interpret the change of the parameters of the distributions as indicating a pattern change. A Bayesian method is developed to estimate the probability that a pattern change occurs at any given time for each behavioral dimension. The test results show that the method can successfully identify significant changepoints in travel patterns.; Compared to the traditional generalized likelihood ratio (GLR) approach, the Bayesian method requires fewer predefined parameters and is more robust. It is generalizable and may be applied to detect changes in other aspects of travel behavior and human behavior in general.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-160).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122383</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a feedback design process using Digital Thread</title>
<link>https://hdl.handle.net/1721.1/122373</link>
<description>Towards a feedback design process using Digital Thread
Singh, Victor.
Digital Thread is a data-driven architecture that links together information generated from across the product lifecycle. Though Digital Thread is gaining traction as a digital communication framework to streamline design, manufacturing, and operational processes in order to more efficiently design, build, and maintain engineering products, a principled and unified formulation of the data-driven design problem using Digital Thread remains absent from the literature. Prior work on Digital Thread has targeted enterprise-level risk/value assessments, addressed upcoming challenges and future visions, and established requirements from the vantage point of model-based systems engineering, product lifecycle management, and additive manufacturing. However, such a formulation must account for the fact that the design process is highly iterative and that not all information is available at once.; Design decisions must be made not only about what data to collect but also about the costs and benefits involved in collecting those data. Furthermore, since full use of acquired information can become computationally prohibitive, it is important to evaluate what minimal information is sufficient for design decisions and in what form that information is needed. The contribution of this thesis is to present such a formulation in the context of a data-driven design and decision problem under uncertainty. In particular, we lay down a mathematical foundation for Digital Thread and develop a dynamical feedback process model that governs the mechanics of the data-driven design problem using Digital Thread.
From this model, we construct a Bayesian filter that describes the overall data assimilation process and formulate a multistage optimization problem that produces optimal data-informed decisions with respect to specified cost and constraint metrics.; A numerical approach based on function and policy approximation is detailed, and an algorithm is provided to solve the multistage optimization problem. The overall methodology is illustrated on an example structural fiber-steered composite component. In this setting, the methodology enables design space explorations that assess the costs/benefits of future outcomes based on current design decisions. These design decisions can incorporate structural tailoring, sensor placement, and sensor selection in addition to fiber angle and thickness specification. Different design scenarios utilizing sensor placement and selection are compared and quantified. It is found that placement of sensors has less impact on cost than the choice of which sensors are used and how frequently. Accordingly, this method is able to assess how many sensors need to be used to resolve the dominant loading components effectively, and then use this knowledge to produce appropriate designs of low cost.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 141-148).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122373</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of ion cluster fragmentation in ionic liquid ion sources</title>
<link>https://hdl.handle.net/1721.1/122372</link>
<description>Characterization of ion cluster fragmentation in ionic liquid ion sources
Miller, Catherine Elizabeth.
Ion electrospray propulsion is a cutting-edge micropropulsion technology that could revolutionize the capabilities of microsatellites. Ion electrospray thrusters could also be used on large spacecraft for precision attitude control applications such as gravity wave detection and exoplanet imaging. Novel room-temperature molten salts called ionic liquids, composed purely of positive and negative molecular ions, are used as propellant. When exposed to strong electric fields, ions and metastable clusters of ions are evaporated from the bulk liquid surface. The free ions and ion clusters can be accelerated to high velocities, producing thrust at high specific impulse. The performance of ion electrospray thrusters is affected by the composition of the ion beam and the number of ion clusters that break apart during the acceleration phase. To improve thruster performance, a better understanding of the fundamental physics of ion evaporation and cluster break-up is needed.; The break-up of ion clusters, also called fragmentation, is not a well-understood phenomenon. It has been observed in past experiments, but the rates of break-up have not been measured. The focus of this work is to investigate fragmentation experimentally and more deeply than ever before. To accomplish this, a specialized instrumentation suite has been designed, built, and tested to measure fragmentation characteristics in unprecedented detail. A full-beam, spherical-geometry retarding potential analyzer is used to measure the rates of fragmentation of ion clusters both outside the thruster and within the acceleration region for the first time. A narrow-beam, high time-resolution time-of-flight mass spectrometer is used to measure the beam composition. Single emitters based on resorcinol formaldehyde carbon xerogels were used as ion sources.
Four ionic liquids spanning a wide range of liquid properties were characterized: EMI-FAP, EMI-Im, EMI-BF4, and BMI-I.; Analytical models were also developed to enhance the interpretation of the experimental results. The experimental measurements show that the amount of fragmentation increases with distance from the thruster and follows a constant-rate equation. The mean lifetimes of ion clusters outside of the thruster range from 1 to 6 μs, indicating that these clusters are quite unstable. It is observed that the fragmentation throughout most of the acceleration region is linear with respect to electric potential, which can be understood using analytical models. Rapid fragmentation likely occurs immediately after evaporation due to the strong electric fields near the emission site, which has significant implications for thruster performance. It is also observed that clusters of complex molecular ions consisting of many atoms tend to be the most stable. The initial temperatures of ion clusters, which range from 520 K to 790 K, were estimated using analytical methods.; The effect of liquid temperature on the rates of fragmentation was also investigated. In conclusion, the work in this thesis provides a greatly enhanced understanding of ion cluster fragmentation, particularly how it is affected by ionic liquid properties, liquid temperature, and electric fields.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 273-281).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122372</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for single-view depth image estimation</title>
<link>https://hdl.handle.net/1721.1/122371</link>
<description>Algorithms for single-view depth image estimation
Ma, Fangchang.
Depth sensing is fundamental to autonomous navigation, localization, and mapping. However, existing depth sensors have many shortcomings, especially low effective spatial resolution. In order to attain enhanced resolution with existing hardware, this dissertation studies the single-view depth estimation problem - the goal is to reconstruct the dense and complete 3D structure of the scene, given only sparse depth measurements. To this end, this thesis proposes three different algorithms for depth estimation. The first contribution is an algorithm for efficient reconstruction of 3D planar surfaces. This algorithm assumes that the 3D structure is piecewise planar, and thus that the second-order derivatives of the depth image are sparse. We formulate a linear programming problem for recovery of the 3D surfaces under these assumptions, and provide conditions under which the reconstruction is exact.; This method requires no learning, but still outperforms deep learning-based methods under certain conditions. The second contribution is a deep regression network and a self-supervised learning framework. We formulate the depth completion problem as a pixel-level regression problem and solve it by training a neural network. Additionally, to address the difficulty of gathering ground-truth annotations for depth data, we develop a self-supervised framework that trains the regression network by enforcing temporal photometric consistency, using only raw RGB and sparse depth data. The supervised method achieves state-of-the-art accuracy, and the self-supervised approach attains a lower but comparable accuracy. Our third contribution is a two-stage algorithm for a broad class of inverse problems (e.g., depth completion and image inpainting). We assume that the target image is the output of a generative neural network, and that only a subset of the output pixels is observed.; The goal is to reconstruct the unseen pixels based on the partial samples.
Our proposed algorithm first recovers the corresponding low-dimensional input latent vector using simple gradient descent, and then reconstructs the entire output with a single forward pass. We provide conditions under which the proposed algorithm achieves exact reconstruction, and empirically demonstrate the effectiveness of such algorithms on real data.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 143-158).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122371</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable online nonlinear goal-oriented inference with physics-informed maps</title>
<link>https://hdl.handle.net/1721.1/122370</link>
<description>Scalable online nonlinear goal-oriented inference with physics-informed maps
Li, Harriet.
This thesis develops a physics-informed k-nearest neighbors approach, which draws from both physics-based modeling and data-driven machine learning. In doing so, our method achieves robustness and increased accuracy with small datasets, while being cheap to apply. Our method tackles the challenges of high-dimensional inverse problems governed by complex physical models. Such inverse problems arise in important engineering applications, such as heat transfer, medical and structural imaging, and contaminant control. In particular, we consider the goal-oriented inverse problem setting, where unknown model parameters are inferred from observations in order to calculate some low-dimensional quantity of interest (QoI). When computational resources and/or time are limited, it is infeasible to solve the full inverse problem for inferred parameters to obtain the QoI.; This thesis describes an algorithm that bypasses solving the inverse problem, instead directly giving rapid QoI estimates for observations. We generate a library of physics-informed maps based on local approximations to the goal-oriented inverse problem. Applying tensor decompositions to these approximate problems gives compact multilinear physics-informed maps. These maps are calculated and stored in an offline preparatory phase, and then applied to online observations to give rapid QoI estimates. This thesis also describes tailored active learning algorithms, which efficiently choose training points in observation space at which to generate these physics-informed maps. This improves the online prediction performance given a limited offline computational and/or storage budget. We demonstrate our rapid QoI estimation and active learning algorithms on a quality-control problem for additive manufacturing.; The proposed physics-informed approach achieves 5% relative QoI error in 0.1% of the time to solve the full inverse problem. 
Our physics-informed mappings give a third of the QoI estimate error that black-box regression methods do for small datasets, and are more robust when the offline dataset does not well represent the online test points. The tailored active learning algorithms produce datasets that reduce maximum QoI error by 25% and misclassification by 15%, compared to randomly chosen datasets.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 103-114).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122370</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing deep uncertainty in space system development through model-based adaptive design</title>
<link>https://hdl.handle.net/1721.1/122369</link>
<description>Addressing deep uncertainty in space system development through model-based adaptive design
Chodas, Mark A.
When developing a space system, many properties of the design space are initially unknown and are discovered during the development process. Therefore, the problem exhibits deep uncertainty. Deep uncertainty refers to the condition where the full range of outcomes of a decision is not knowable. A key strategy to mitigate deep uncertainty is to update decisions when new information is learned. NASA's current uncertainty management processes do not emphasize revisiting decisions and therefore are vulnerable to deep uncertainty. Examples from the development of the James Webb Space Telescope are provided to illustrate these vulnerabilities. In this research, the spacecraft development problem is modeled as a dynamic, chance-constrained, stochastic optimization problem. The Model-based Adaptive Design under Uncertainty (MADU) framework is introduced, in which conflict-directed search is combined with reuse of conflicts to solve the problem efficiently.; The framework is built within a Model-based Systems Engineering (MBSE) paradigm in which a SysML model contains the design and conflicts identified during search. Changes between problems can involve the addition or removal of a design variable, expansion or contraction of the domain of a design variable, addition or removal of constraints, or changes to the objective function. These changes are processed to determine their effect on the set of known conflicts. Using Python, an optimization problem is composed from information in the SysML model, including conflicts from past problems, and is solved using IBM ILOG CP Optimizer. 
The framework is tested on a case study drawn from the thermal design of the REgolith X-ray Imaging Spectrometer (REXIS) instrument and a case study based on the Starshade exoplanet direct imaging mission concept which is sizeable at 35 design variables, 40 constraints, and 10¹⁰ possible solutions.; In these case studies, the MADU framework performs 58% faster on average than an algorithm that doesn't reuse information. Adding a requirement or changing the objective function are particularly efficient types of changes. With this framework, designers can more efficiently explore the design space and perform updates to a design when new information is learned.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 193-202).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122369</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calibration and validation for CubeSat Microwave Radiometers</title>
<link>https://hdl.handle.net/1721.1/122368</link>
<description>Calibration and validation for CubeSat Microwave Radiometers
Crews, Angela B.(Angela Beth)
Miniaturized microwave radiometers deployed on nanosatellites in Low Earth Orbit (LEO) are now demonstrating the ability to provide science-quality weather measurements. For instance, the Micro-sized Microwave Atmospheric Satellite-2A (MicroMAS-2A) is a 3U CubeSat launched in January 2018 that provided the first CubeSat microwave atmospheric sounder data from orbit. The goal of having cost-effective miniature instruments distributed in LEO is to field constellations and improve temporal and geospatial coverage. The Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) is a constellation of six 3U CubeSats, based on MicroMAS-2A, scheduled to launch no earlier than 2020. Each CubeSat hosts a scanning 12-channel passive microwave radiometer in W-band, F-band, and G-band.; TROPICS will provide a temporal resolution of less than 60 minutes and will provide high-value investigations of inner-core conditions for tropical cyclones [1]. Calibration for CubeSats presents new challenges as standard blackbody targets are difficult to effectively shroud on a CubeSat platform. Instead, internal noise diodes are used for calibration on CubeSats. The Global Precipitation Measurement (GPM) Microwave Imager (GMI) instrument has shown noise diodes to be stable on orbit [2], but the noise diodes have not been tested on-orbit at TROPICS frequencies. In order to provide state of the art calibration for CubeSats, methods must be developed to track and correct noise diode drift. 
We quantitatively determine the radiometric accuracy of MicroMAS-2A and compare it to state of the art instruments to provide an assessment of CubeSat performance.; Radiometric accuracy is determined by using the Community Radiative Transfer Model (CRTM) and the Rosenkranz Line-by-Line (LBL) Radiative Transfer Model (RTM) with inputs from GPS radio occultation (GPSRO), radiosondes, and Numerical Weather Prediction (NWP) models in order to calculate simulated brightness temperatures that are used as the ground truth. We perform on-orbit calibration corrections using data matchups between MicroMAS-2A and the MicroWave Humidity Sounder (MWHS)-2, which is a microwave radiometer on the operational Chinese weather satellite FengYun (FY)-3C with similar bands. Brightness temperature histograms are analyzed to calculate an initial calibration correction; we develop a Markov Chain-Monte Carlo (MCMC) technique that calculates calibration correction results within 1.2% of the brightness temperature histograms method.; The double difference technique is then used to compare the corrected MicroMAS-2A data to the state-of-the-art microwave radiometer Advanced Technology Microwave Sounder (ATMS) on Suomi-NPP. Double difference results computed using both CRTM and LBL as well as atmospheric inputs from both radiosondes and NWP models indicate MicroMAS-2A accuracies ranging from approximately 0.05 K to 2.73 K, depending on the channel. The upper atmospheric temperature sounding channels for which modeling and surface contamination errors are least significant yield intercalibration accuracies better than 1.0 K. 
We also develop a novel method of calibration for CubeSat constellations such as TROPICS by incorporating solar and lunar periodic intrusions as an additional source of information to counter noise diode drift.; These lunar intrusions also occur for existing satellites hosting microwave radiometers in sun-synchronous polar orbits, but are much more infrequent than for the TROPICS constellation's scanning payload. Lunar intrusions are typically treated as an observational and calibration limiting constraint. We develop a solar/lunar calibration algorithm and test it using ATMS lunar intrusion data. The mean bias and standard deviation between the algorithm and actual ATMS data falls within the expected ATMS error budget of 0.6 K to 3.9 K, showing that the algorithm is working correctly and can be applied to TROPICS. We assess the daily variation in error that we can expect from instrument noise and source error, and find that lunar intrusions should be analyzed weekly while solar intrusions should be analyzed daily to track 1 K of noise diode drift. In addition, we develop an architecture for validation matchups with TROPICS.; We determine frequencies of single difference matchups, double difference matchups using both intra- and inter- Simultaneous Nadir Observations (SNO), and solar and lunar intrusions. Matchup sensitivity to orbital parameters is studied and we find that changes in true anomaly and Right Ascension of Ascending Node (RAAN) do not decrease the number of SNO matchups that are within our filter criteria of 60 minutes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 187-194).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122368</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Four-dimensional anisotropic mesh adaptation for spacetime numerical simulations</title>
<link>https://hdl.handle.net/1721.1/122367</link>
<description>Four-dimensional anisotropic mesh adaptation for spacetime numerical simulations
Caplan, Philip Claude Delhaye.
Engineers and scientists are increasingly relying on high-fidelity numerical simulations. Within these simulations, mesh adaptation is useful for obtaining accurate predictions of an output of interest subject to a computational cost constraint. In the quest for accurately predicting outputs in problems with time-dependent solution features, a fully unstructured coupled spacetime approach has been shown to be useful in reducing the cost of the overall simulation. However, for the simulation of unsteady three-dimensional partial differential equations (PDEs), a four-dimensional mesh adaptation tool is needed. This work develops the first anisotropic metric-conforming four-dimensional mesh adaptation tool for performing adaptive numerical simulations of unsteady PDEs in three dimensions. The theory and implementation details behind our algorithm are first developed alongside an algorithm for constructing four-dimensional geometry representations.; We then demonstrate our algorithm on three-dimensional benchmark cases and it appears to outperform existing implementations, both in metric-conformity and expected tetrahedra counts. We study the utility of the mesh adaptation components to justify the design of our algorithm. We then develop four-dimensional benchmark cases and demonstrate that metric-conformity and expected pentatope counts are also achieved. This is the first time anisotropic four-dimensional meshes have been presented in the literature. Next, the entire mesh adaptation framework, Mesh Optimization via Error Sampling and Synthesis (MOESS), is extended to the context of finding the optimal mesh to represent a function of four variables. The mesh size and aspect ratio distributions of the optimized meshes match the analytic ones, thus verifying our framework.; Finally, we apply MOESS in conjunction with the mesh adaptation tool to perform the first four-dimensional anisotropic mesh adaptation for the solution of the advection-diffusion equation. 
The optimized meshes effectively refine the solution features corresponding to both a boundary layer solution as well as an expanding spherical wave.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages [135]-142).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122367</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding vertebral fracture risk in astronauts</title>
<link>https://hdl.handle.net/1721.1/122344</link>
<description>Understanding vertebral fracture risk in astronauts
Burkhart, Katelyn A.
In spaceflight, the loss of mechanical loading has detrimental effects on the musculoskeletal system. These muscular changes will likely affect spinal loading, a key aspect of vertebral fracture risk, but no prior studies have examined how spinal loading is affected by long duration spaceflight. Moreover, the effect of spaceflight on vertebral strength has not been determined, despite reports of significant vertebral trabecular bone loss in long-duration astronauts. Thus trunk muscle and vertebral bone changes and their impact on risk of injury following long-duration spaceflight remain unknown. This is of particular concern for NASA's planned Mars missions and return to Earth after prolonged deconditioning. Our lab has developed a musculoskeletal model of the thoracolumbar spine that has been validated for spinal loading, but has not yet been extended to maximal effort activities or full-body simulations.; Thus, the overall goal of this work consisted of two main sections: 1) address the knowledge gap regarding spaceflight and post-flight recovery effects on trunk muscle properties, vertebral strength, compressive spine loading and vertebral fracture risk, and 2) extend our musculoskeletal modeling work into maximal effort simulations in an elderly population and create a full-body scaled model to investigate reproducibility of spine loading estimates using opto-electronic motion capture data. Whereas deficits in trunk muscle area returned to normal during on-Earth recovery, spaceflight-induced increases in intramuscular fat persisted in some muscles even years after landing. Similarly, spaceflight led to a decrease in lumbar vertebral strength that did not recover even after multiple years on Earth.; To gain insight into the effect of spaceflight on vertebral fracture risk, we created subject-specific musculoskeletal models using an individual's height, weight, sex, muscle measurements, and spine curvature. 
We found that compressive spine loading was minimally affected by spaceflight and that vertebral fracture risk, calculated as a ratio of vertebral load to strength, was slightly elevated post-flight and remained elevated during readaptation on Earth. Additionally, we focused on the development of additional musculoskeletal modeling tools. Using maximal effort model simulations, we estimated trunk maximum muscle stress in an elderly population, and this critical parameter in musculoskeletal modeling will assist with more detailed model creation. Lastly, we found excellent reliability of spine loading estimations from opto-electronic marker data.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Medical Engineering and Bioastronautics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122344</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stratified and stirred : monsoon freshwater in the Bay of Bengal</title>
<link>https://hdl.handle.net/1721.1/122332</link>
<description>Stratified and stirred : monsoon freshwater in the Bay of Bengal
Spiro Jaeger, Gualtiero Victor Rudi.
Submesoscale ocean dynamics and instabilities, with characteristic scales 0.1-10 km, can play a critical role in setting the ocean's surface boundary layer thickness and associated density stratification. Submesoscale instabilities contribute to lateral stirring and tracer dispersal. These dynamics are investigated in the Bay of Bengal, motivated by the upper ocean's potentially coupled interactions with Monsoon winds and convection. The region's excess precipitation and runoff generates strong salinity gradients that typically set density fronts and stratification in the upper 50 m. Since we cannot synoptically measure currents containing fast-evolving and oscillating components across the submesoscale range, we instead analyze passive tracer distributions (spice = density-compensated temperature (T) and salinity (S) anomalies), identifying signatures of flows and testing dynamical theories.; The analysis is based on over 9000 vertical profiles of T and S measured along ~4800 km of ship tracks in the Bay of Bengal during ASIRI and MISO-BOB expeditions in 2013, 2015, and 2018. Observations in the surface mixed layer reveal ~1 km scale-selective correlation of surface T and S, with compensation reducing cross-front density gradients by ~50%. Using a process study ocean model, we show this is caused by submesoscale instabilities slumping fronts, plus surface cooling over the resultant enhanced salinity stratification, potentially thwarting the forward cascade of energy. In the stratified interior, we present a spectral analysis of horizontal spice variance statistics from wavenumber k ~0.01 cpkm to ~1 cpkm. 
At scales &lt;10 km, stratified layers that are closer to the surface exhibit redder passive tracer spectra (power spectra k⁻³, gradient spectra k⁻¹) than predicted by quasi-geostrophic or frontogenetic theories.; Complementary observations reveal spice patterns with multiple, parallel, ~10 m thin layers, crossing isopycnals with O(10⁻⁴) slopes, coherent over at least 30-80 km, with coincident layers of stratification anomalies. Comparison with shear measurements, and a numerical process study, suggest that both submesoscale sheared eddies, and thin near-inertial waves, form such layers. Fast formation timescales and large aspect ratios suggest they enhance horizontal mixing by shear dispersion, reducing variance at ~1-10 km scales.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122332</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A modeling study of the marine biogeochemistry, plankton dynamics, and carbon cycle on the continental shelf off the West Antarctic Peninsula</title>
<link>https://hdl.handle.net/1721.1/122331</link>
<description>A modeling study of the marine biogeochemistry, plankton dynamics, and carbon cycle on the continental shelf off the West Antarctic Peninsula
Schultz, Cristina.
Over the past several decades, the West Antarctic Peninsula (WAP) has undergone physical and ecological changes at a rapid pace, with a warming surface ocean and a sharp decrease in the duration of the sea ice season. The impacts of these changes on the ocean chemistry and ecosystem are not fully understood and have been investigated by the Palmer-LTER since 1991. Given the data acquisition constraints imposed by weather conditions in this region, an ocean circulation, sea ice and biogeochemistry model was implemented to help fill the gaps in the dataset. The results with the present best case from the suite of sensitivity experiments indicate that the model is able to represent the seasonal and interannual variations observed in the circulation, water mass distribution and sea ice observed in the WAP, and has identified gaps in the observations that could guide improvement of the simulation of the regional biogeochemistry. Comparison of model results with data from the Palmer-LTER project suggests that the large spatial and temporal variability observed in the phytoplankton bloom in the WAP is influenced by variability in the glacial sources of dissolved iron. Seasonal progression of the phytoplankton bloom is well represented in the model, and values of vertically integrated net primary production (NPP) are largely consistent with observations. Although a bias towards lower surface dissolved inorganic carbon (DIC) and alkalinity was identified in the model results, interannual variability was similar to that observed in the Palmer-LTER cruise data.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-202).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122331</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lagrangian dispersion and deformation in submesoscale flows</title>
<link>https://hdl.handle.net/1721.1/122330</link>
<description>Lagrangian dispersion and deformation in submesoscale flows
Essink, Sebastian.
Submesoscale currents, with horizontal length scales of 1-20 km, are an important element of upper ocean dynamics. These currents play a crucial role in the horizontal and vertical redistribution of tracers, the cascade of tracer variance to smaller scales, and in linking the mesoscale circulation with the dissipative scales. This thesis investigates submesoscale flows and their properties using Lagrangian trajectories of observed and modeled drifters. We analyze statistics of observed drifter pairs to characterize turbulent dispersion at submesoscales. Contrary to theoretical expectations, we find that nonlocal velocity gradients associated with mesoscale eddies dominate the separation of drifters even at the kilometer scale. At submesoscales, we observe energetic motions, such as near-inertial oscillations, that contribute to the energy spectrum but are inefficient at dispersion.; Using trajectories in a model of submesoscale turbulence, we find that, if drifters have a vertical separation, vertical shear dominates the dispersion and conceals horizontal dispersion regimes from drifter observations. Particularly in submesoscale flows, vertical shear is orders of magnitude larger than horizontal gradients in velocity. Since conventional drifters in the ocean are not affected by vertical shear, it is likely that drifter-derived diffusivity underestimates the diffusivity that a tracer would experience. Lastly, we test and apply cluster-based methods, using three or more drifters, to estimate the velocity gradient tensor. Since velocity gradients become large at submesoscales, the divergence, strain, and vorticity control the evolution and deformation of clusters of drifters. 
Observing the velocity gradients using drifters enables us to further constrain the governing dynamics and distinguish submesoscale motions from inertia-gravity waves.; These insights provide a Lagrangian perspective on submesoscale flows that illuminates scales that are challenging to observe from other platforms. We reveal observational and theoretical challenges that need to be overcome in future investigations.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis. "The pagination in this thesis reflects how it was delivered to the Institute Archives and Special Collections. TOC pagination for Bibliography section is off by one page"--Disclaimer Notice page.; Includes bibliographical references (pages 115-123).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122330</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The response of ocean salinity patterns to climate change : implications for circulation</title>
<link>https://hdl.handle.net/1721.1/122329</link>
<description>The response of ocean salinity patterns to climate change : implications for circulation
Levang, Samuel J.(Samuel James)
Global patterns of ocean salinity arise from the exchange of freshwater between the sea surface and the atmosphere. For a quasi-steady state system, these surface fluxes are balanced by compensating transports of salt in the ocean interior. In a warming climate, the atmosphere holds additional water vapor which acts to intensify the global water cycle. Amplified freshwater fluxes are then absorbed at the surface and propagate along ocean circulation pathways. Here, we use coupled model results from the CMIP5 experiment to identify coherent responses in the atmospheric water cycle and in ocean salinity patterns. Some aspects of the response are consistent across models, while other regions show large inter-model spread. In particular, the salinity response in the North Atlantic subpolar gyre, where the mean salinity plays a role in maintaining high surface density for deep-water formation, has low confidence in CMIP5 models.; To understand how differences in ocean circulation may affect this response, we use two techniques to diagnose the role of salt transports in the present-day climate. The first is a salt budget within the surface mixed layer, which identifies major transport processes. The second is a Lagrangian particle tracking tool, used to understand the regional connectivity of water masses. From this analysis, we find that anomalous freshwater signals become well mixed within the ocean gyres, but can be isolated on larger scales. The subpolar Atlantic salinity response generally shows freshening at the surface, but is sensitive to the transport of anomalously salty water from the subtropics, a largely eddy-driven process. As CMIP5 models use a range of eddy parameterizations, this is likely a source of uncertainty in the salinity response.; Finally, we investigate the effect of salinity changes on the deep overturning cells and other circulations, and find a complex influence that also depends on the details of advective pathways. 
In a warming scenario, water cycle amplification actually works to strengthen the Atlantic meridional overturning circulation due to the influence of enhanced subtropical evaporation.
Thesis: Ph. D. in Physical Oceanography, Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-133).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122329</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing the theory and applications of Lagrangian Coherent Structures methods for oceanic surface flows</title>
<link>https://hdl.handle.net/1721.1/122328</link>
<description>Advancing the theory and applications of Lagrangian Coherent Structures methods for oceanic surface flows
Filippi, Margaux(Martin-Filippi)
Ocean surface transport is at the core of many environmental disasters, including the spread of marine plastic pollution, the Deepwater Horizon oil spill and the Fukushima nuclear contamination. Understanding and predicting flow transport, however, remains a scientific challenge, because it operates on multiple length- and time-scales that are set by the underlying dynamics. Building on the recent emergence of Lagrangian methods, this thesis investigates the present-day abilities to describe and understand the organization of flow transport at the ocean surface, including the abilities to detect the underlying key structures, the regions of stirring and regions of coherence within the flow. Over the past four years, the field of dynamical system theory has adapted several algorithms from unsupervised machine learning for the detection of Lagrangian Coherent Structures (LCS). The robustness and applicability of these tools are yet to be proven, especially for geophysical flows.; An updated, parameter-free spectral clustering approach is developed and a noise-based cluster coherence metric is proposed to evaluate the resulting clusters. The method is tested against benchmark flows of dynamical system theory: the quasi-periodic Bickley jet, the Duffing oscillator and a modified, asymmetric Duffing oscillator. The applicability of this newly developed spectral clustering method, along with several common LCS approaches, such as the Finite-Time Lyapunov Exponent, is tested in several field studies. The focus is on the ability to predict these LCS in submesoscale ocean surface flows, given all the uncertainties of the modeled and observed velocity fields, as well as the sparsity of Lagrangian data. 
This includes the design and execution of field experiments targeting LCS from predictive models and their subsequent Lagrangian analysis. These experiments took place in Scott Reef, an atoll system in Western Australia, and off the coast of Martha's Vineyard, Massachusetts, two case studies with tidally-driven channel flows. The FTLE and spectral clustering analyses were particularly helpful in describing key transient flow features and how they were impacted by tidal forcing and vertical velocities. This could not have been identified from the Eulerian perspective, showing the utility of the Lagrangian approach in understanding the organization of transport.
Thesis: Sc. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 207-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122328</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic connectivity, adaptation, and phenotypic plasticity of corals and anemones under thermal stress</title>
<link>https://hdl.handle.net/1721.1/122327</link>
<description>Genetic connectivity, adaptation, and phenotypic plasticity of corals and anemones under thermal stress
Rivera, Hanny Elizabeth.
Under global climate change, our oceans are warming at an unprecedented rate. Increased temperatures represent a severe source of stress for many marine organisms. This thesis aims to understand how corals and anemones respond to changing temperatures across different timescales and investigates mechanisms that can facilitate persistence in light of environmental change, from selection and adaptation across generations to phenotypic plasticity within a single individual's lifespan. In this context, I explore three case studies of thermal stress in corals and anemones. I begin with massive Porites lobata corals from the central Pacific. Here, reefs that are most affected by El Niño, such as Jarvis and the northeast Phoenix Islands, maintain genetic diversity, indicating that recruitment from nearby reefs may occur. Yet, they show significant genetic differentiation (F[subscript ST]) from farther areas, suggesting this dispersal may be limited. Thermal variability in this region may also favor plasticity over adaptation, as we do not find differences in bleaching histories among genetic groups. Next, I investigate genetic connectivity and adaptation to chronically elevated temperatures across a natural temperature gradient within the Palauan archipelago. Combining genetic data and historical growth measurements from coral cores, I find that Palau's warmest reefs harbor unique genetic subpopulations of Porites lobata and find evidence for a genetic basis of their higher thermal tolerance. Lastly, I explore whether parents can modulate parental effects to increase the thermal tolerance of their offspring over short time scales, using the estuarine anemone Nematostella vectensis. Indeed, I find parents exposed to increased temperatures quickly produce more thermally tolerant larvae.
In fact, offspring from these Massachusetts parents show thermal thresholds that are indistinguishable from more southern populations. This thesis highlights the ability and potential of corals and anemones to persist under variable conditions over different timescales. Nevertheless, a concerted effort to reduce rates of warming worldwide will be imperative to the survival and integrity of key marine ecosystems such as coral reefs.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Civil and Environmental Engineering; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122327</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observing microbial processes at the microscale with In Situ technology</title>
<link>https://hdl.handle.net/1721.1/122326</link>
<description>Observing microbial processes at the microscale with In Situ technology
Lambert, Bennett S.(Bennett Spencer)
Although seawater appears uniform at scales that humans often interact with and sample, the world that marine microbes inhabit can be highly heterogeneous, with numerous biological and physical processes giving rise to resource hotspots where nutrient concentrations exceed background levels by orders of magnitude. While the impact of this microscale heterogeneity has been investigated in the laboratory with microbial isolates and theoretical models, microbial ecologists have lacked adequate tools to interrogate microscale processes directly in the natural environment. Within this thesis I introduce three new technologies that enable interrogation of microbial processes at the microscale in natural marine communities. The IFCB-Sorter acquires images and sorts individual phytoplankton cells, directly from seawater, allowing studies that explore connections between the diversity of forms present in the plankton and genetic variability at the single-cell level. The In Situ Chemotaxis Assay (ISCA) is a field-going microfluidic device designed to probe the distribution and role of motility behavior among microbes in aquatic environments. By creating microscale hotspots that simulate naturally occurring ones, the ISCA makes it possible to examine the role of microbial chemotaxis in resource acquisition, phytoplankton-bacteria interactions, and host-symbiont systems. Finally, the Millifluidic In Situ Enrichment (MISE) is an instrument that enables the study of rapid shifts in gene expression that permit microbial communities to exploit chemical hotspots in the ocean. The MISE subjects natural microbial communities to a chemical amendment and preserves their RNA in a minute-scale time series. Leveraging an array of milliliter-volume wells, the MISE allows comparison of community gene expression in response to a chemical stimulus to that of a control, enabling elucidation of the strategies employed by marine microbes to survive and thrive in fluctuating environments.
Together, this suite of instruments enables culture-independent examination of microbial life at the microscale and will empower microbial ecologists to develop a more holistic understanding of how interactions at the scale of individual microbes impact processes in marine ecosystems at a global scale.
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Civil and Environmental Engineering; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 126-137).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122326</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational analysis of the biophysical controls on Southern Ocean phytoplankton ecosystem dynamics</title>
<link>https://hdl.handle.net/1721.1/122325</link>
<description>Computational analysis of the biophysical controls on Southern Ocean phytoplankton ecosystem dynamics
Rohr, Tyler W.
Southern Ocean net community productivity plays an outsized role in regulating global biogeochemical cycling and climate dynamics. The structure of spatiotemporal variability in phytoplankton ecosystem dynamics is largely governed by physical processes, but a variety of competing pathways complicate our understanding of how exactly they drive net population growth. Here, I leverage two coupled, 3-dimensional, global, numerical simulations in conjunction with remote sensing data and past observations, to improve our mechanistic understanding of how physical processes drive biology in the Southern Ocean. In Chapter 2, I show how different mechanistic pathways can control population dynamics from the bottom-up (via light, nutrients), as well as the top-down (via grazing pressure). In Chapters 3 and 4, I employ a higher resolution, eddy-resolving, integration to explicitly track and examine closed eddy structures and address how they modify biomass at the mesoscale. Chapter 3 considers how simulated eddies drive bottom-up controls on phytoplankton growth and finds that division rates are, on average, amplified in anticyclones and suppressed in cyclones. Anomalous division rates are predominantly fueled by an anomalous vertical iron flux driven by eddy-induced Ekman pumping. Chapter 4 goes on to describe how anomalous division rates combine with anomalous loss rates to drive anomalous net population growth. Biological rate-based mechanisms are then compared to the potential for anomalies to evolve strictly via physical transport (i.e. dilution, stirring, advection). Altogether, I identify and describe dramatic regional and seasonal variability in when, where, and how different mechanisms drive phytoplankton growth throughout the Southern Ocean.
Better understanding this variability has broad implications for our understanding of how oceanic biogeochemistry will respond to, and likely feed back into, a changing climate. Specifically, the uncertainty associated with this variability should temper recent proposals to artificially stimulate net primary production and the biological pump via iron fertilization. In Chapter 5, I argue that Southern Ocean Iron Fertilization fails to meet the basic tenets required for adoption into any regulatory market-based framework.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 193-220).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122325</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mineralogy and chemistry of modern shallow-water and deep-sea corals</title>
<link>https://hdl.handle.net/1721.1/122323</link>
<description>The mineralogy and chemistry of modern shallow-water and deep-sea corals
Farfan, Gabriela A.(Gabriela Aylin)
The architecture of coral reef ecosystems is composed of coral skeletons built from the mineral aragonite (CaCO₃). Coral reefs are currently being threatened by ocean acidification (OA), which may lower calcification rates, reduce skeletal density, and increase aragonite dissolution. Crystallography and chemistry are what govern the material properties of minerals, such as solubility and strength. Thus, understanding the mineralogical nature of coral aragonite and how it forms is important for predicting bulk skeletal responses under climate change. Different models based on geochemical versus biological controls over coral skeleton biomineralization propose conflicting predictions about the fate of corals under OA. Rather than investigating the mechanism directly, I use a mineralogical approach to study the aragonite end-products of coral biomineralization. I hypothesize that coral mineralogy and crystallography will lend insights into how coral aragonite crystals form and how sensitive coral aragonite material properties may be to OA. Here I compare the crystallography, bonding environments, and compositions of coral aragonite with aragonite produced by other organisms (mollusk), synthetically (abiogenic precipitation in aragonite-supersaturated seawater and freshwater), and in natural geological settings (abiogenic). Coral aragonite crystallography does not resemble mollusk aragonite (aragonite formed with a strong biological influence), but rather is identical to abiogenic synthetic aragonite precipitated from seawater.
I predict that the material properties of coral aragonite are similar to those of abiogenic synthetic seawater aragonites and that coral aragonite formation is sensitive to surrounding seawater chemistry. To test the effect of OA on coral aragonites, I studied deep-sea corals from a natural [omega][subscript sw] gradient (1.15-1.44) in the Gulf of Mexico and shallow-water corals across a natural [omega][subscript sw] (2.3-3.7) and pH (7.84-8.05) gradient in Palau. Minor shifts in crystallography are expressed by coral aragonite in these natural systems, likely governed by skeletal calcite contents, density, and [omega] of the coral calcifying fluid. My results are most consistent with a geochemical model for biomineralization, which implies that coral calcification may be sensitive to OA. However, further work is required to determine whether the modest crystallographic shifts I observe are representative on a global scale and whether they could influence bulk skeletal material properties.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122323</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The production and fate of nitrogen species in deep-sea hydrothermal environments</title>
<link>https://hdl.handle.net/1721.1/122322</link>
<description>The production and fate of nitrogen species in deep-sea hydrothermal environments
Charoenpong, Chawalit(Chawalit Net)
Nitrogen (N) species in hydrothermal vent fluids serve as both a nutrient and energy source for the chemosynthetic ecosystems surrounding deep-sea vents. While numerous pathways have been identified in which N-species can be produced and consumed in the context of submarine hydrothermal vent systems, their exact nature has been largely limited to interpretation of variations in concentrations. This thesis applies stable isotope approaches to further constrain the sources and fate of N-species in deep-sea vents across a variety of geological settings. First, I discuss isotope fractionation and reaction kinetics during abiotic reduction of nitrate (NO₃⁻) to ammonium ([sigma]NH₄⁺ = NH₃+NH₄⁺) under hydrothermal conditions. Results of lab experiments conducted at high temperatures and pressures revealed a wide degree of N isotope fractionation as affected by temperature, fluid/rock ratio, and pH, all of which exert control over reaction rates. Moreover, a clear pattern in terms of reaction products can be discerned, with the reaction producing [sigma]NH₄⁺ only at high pH, but both [sigma]NH₄⁺ and N₂ at low pH. This challenges previous assumptions that NO₃⁻ is always quantitatively converted to NH₄⁺ during submarine hydrothermal circulation. Next, I report measurements of [sigma]NH₄⁺ concentrations and N isotopic composition ([delta]¹⁵N[subscript NH4]) from vent fluid samples, together with the largest compilation to date of these measurements made from other studies of deep-sea vent systems for comparison. The importance of different processes at sediment-influenced and unsedimented systems is discussed with a focus on how they ultimately yield observed vent [sigma]NH₄⁺ values. Notable findings include the role that phase separation might play under some conditions and a description of how an unsedimented site from Mid-Cayman Rise with unexpectedly high NH₄⁺ may be uniquely influenced by N₂ reduction to [sigma]NH₄⁺.
Lastly, I explore [sigma]NH₄⁺ dynamics in the context of low-temperature vent sites at 9°50'N East Pacific Rise to investigate dynamics of microbially-mediated N transformations. Through both measurements of natural samples, as well as isotopic characterization of N species from incubation experiments and model simulations thereof, an exceptionally high variability observed in [delta]¹⁵N[subscript NH4] values emphasizes the complexity of these microbe-rich systems. In sum, this thesis highlights the role of microbial processes in low temperature systems, demonstrates a more mechanistic understanding of lesser-understood abiotic N reactions, and improves the coverage of available data on deep-sea vent [sigma]NH₄⁺ measurements.
Thesis: Ph. D., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122322</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributional models of ocean carbon export</title>
<link>https://hdl.handle.net/1721.1/122321</link>
<description>Distributional models of ocean carbon export
Barry, Brendan(Brendan Cael)
Each year, surface ocean ecosystems export sinking particles containing gigatons of carbon into the ocean's interior. This particle flux connects the entire ocean microbiome and constitutes a fundamental aspect of marine microbial ecology and biogeochemical cycles. Particle flux is also variable and intricately complex, impeding its mechanistic or quantitative description. In this thesis we pair compilations of available data with novel mathematical models to explore the relationships between particle flux and other key variables - temperature, net primary production, and depth. Particular use is made of (probability) distributional descriptions of quantities that are known to vary appreciably. First, using established thermodynamic dependencies for primary production and respiration, a simple mechanistic model is developed relating export efficiency (i.e. the fraction of primary production that is exported out of the surface ocean via particle flux) to temperature. The model accounts for the observed variability in export efficiency due to temperature without idealizing out the remaining variability that evinces particle flux's complexity. This model is then used to estimate the metabolically-driven change in average export efficiency over the era of long-term global sea surface temperature records, and it is shown that the underlying mechanism may help explain glacial-interglacial atmospheric carbon dioxide drawdown. The relationship between particle flux and net primary production is then explored. Given that these are inextricable but highly variable and measured on different effective scales, it is hypothesized that a quantitative relationship emerges between collections of the two measurements - i.e. 
that they can be related not measurement-by-measurement but rather via their probability distributions. It is shown that on large spatial or temporal scales both are consistent with lognormal distributions, as expected if each is considered as the collective result of many subprocesses. A relationship is then derived between the log-moments of their distributions and agreement is found between independent estimates of this relationship, suggesting that upper ocean particle flux is predictable from net primary production on large spatiotemporal scales. Finally, the attenuation of particle flux with depth is explored. It is shown that while several particle flux-versus-depth models capture observations equivalently, these carry very different implications mechanistically and for magnitudes of export out of the surface ocean. A model is then proposed for this relationship that accounts for measurements of both the flux profile and of the settling velocity distribution of particulate matter, and is thus more consistent with and constrained by empirical knowledge. Possible future applications of these models are discussed, as well as how they could be tested and/or constrained observationally.
Thesis: Ph. D., Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122321</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer-aided performance monitoring of employees in large-volume office operations</title>
<link>https://hdl.handle.net/1721.1/122307</link>
<description>Computer-aided performance monitoring of employees in large-volume office operations
Chalykoff, John.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, 1988.; Bibliography: leaves 182-188.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122307</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A theoretical and experimental investigation of the properties and behavior of hydraulic-fill dams</title>
<link>https://hdl.handle.net/1721.1/122306</link>
<description>A theoretical and experimental investigation of the properties and behavior of hydraulic-fill dams
Gilboy, Glennon.
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Civil and Sanitary Engineering, 1928.; Includes bibliographical references (leaves 197-198).
</description>
<pubDate>Sun, 01 Jan 1928 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122306</guid>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A method predicting earthquake-induced permanent deformations of foundations</title>
<link>https://hdl.handle.net/1721.1/122295</link>
<description>A method predicting earthquake-induced permanent deformations of foundations
Stamatopoulos, Constantine Aris.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1989.; Includes bibliographical references (leaves 227-232).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122295</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analytical method for predicting permanent deformation of foundations under cyclic loads</title>
<link>https://hdl.handle.net/1721.1/122294</link>
<description>An analytical method for predicting permanent deformation of foundations under cyclic loads
Bouckovalas, George.
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1982.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122294</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale optimization Methods for data-science applications</title>
<link>https://hdl.handle.net/1721.1/122272</link>
<description>Large-scale optimization Methods for data-science applications
Lu, Haihao, Ph.D. Massachusetts Institute of Technology.
In this thesis, we present several contributions to large-scale optimization methods with applications in data science and machine learning. In the first part, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. We consider a general convex optimization problem, where we presume knowledge of a strict lower bound (as arises, for example, in empirical risk minimization in machine learning). We introduce a new functional measure called the growth constant for the convex objective function, which measures how quickly the level sets grow relative to the function value, and which plays a fundamental role in the complexity analysis. Based on this measure, we present new computational guarantees for both smooth and non-smooth convex optimization, that can improve existing computational guarantees in several ways, most notably when the initial iterate is far from the optimal solution set. The usual approach to developing and analyzing first-order methods for convex optimization assumes that either the gradient of the objective function is uniformly continuous (in the smooth setting) or the objective function itself is uniformly continuous. However, in many settings, especially in machine learning applications, the convex function satisfies neither condition; examples include the Poisson Linear Inverse Model, the D-optimal design problem, and the Support Vector Machine problem. In the second part, we develop notions of relative smoothness, relative continuity and relative strong convexity that are determined relative to a user-specified "reference function" (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth or relatively continuous with respect to a correspondingly fairly-simple reference function. We extend the mirror descent algorithm to our new setting, with associated computational guarantees. 
Gradient Boosting Machine (GBM), introduced by Friedman, is an extremely powerful supervised learning algorithm that is widely used in practice -- it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In the third part, we propose the Randomized Gradient Boosting Machine (RGBM) and the Accelerated Gradient Boosting Machine (AGBM). RGBM leads to significant computational gains compared to GBM, by using a randomization scheme to reduce the search in the space of weak-learners. AGBM incorporates Nesterov's acceleration techniques into the design of GBM, and is the first GBM-type algorithm with a theoretically justified accelerated convergence rate. We demonstrate the effectiveness of RGBM and AGBM over GBM in obtaining a model with good training and/or testing data fidelity.
Thesis: Ph. D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 203-211).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122272</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CFD simulation of long slender offshore structures at high Reynolds number</title>
<link>https://hdl.handle.net/1721.1/122262</link>
<description>CFD simulation of long slender offshore structures at high Reynolds number
Olaoye, Abiodun Timothy.
Slender cylindrical structures are common in many offshore engineering applications such as floating wind turbines and subsea risers. These structures are vulnerable to flow-induced vibrations under certain environmental conditions, which impact their useful life. Flow-induced vibrations have been widely studied both experimentally and numerically, especially at low Reynolds number. However, many questions remain unanswered regarding the effects of high Re on structural responses and fluid-structure interaction (FSI) phenomena such as lock-in for different design configurations. Furthermore, under realistic environmental conditions, the oncoming flow velocity profile may not be uniform. In such scenarios, the effects of large changes in Re along the span on the nature of structural responses may be significant. This research project is focused on computational fluid dynamics (CFD) simulation of slender structures under realistic oncoming ocean currents with relatively higher Reynolds number (Re ≥ 10,000) compared to existing literature. Computational methods for investigating FSI phenomena are limited by high Reynolds number, complex flow profiles, low mass ratio and large aspect ratio of structures. Despite these challenges, the numerical approach potentially offers more detailed analysis and ease of parameter tuning to investigate unique cases too expensive to conduct in experiments. Therefore, advances in research are increasingly supported by numerical modeling. In the framework of the Fourier Spectral/hp element method implemented in the NEKTAR code, an entropy-based viscosity method (EVM) was employed to account for turbulence effects not captured by the numerical grid, and the fictitious added mass method was utilized in the structure solver to handle low mass ratio problems. Also, the mapping-enabled smoothed profile method (SPM), in addition to the already stated techniques, was used to simulate cases involving buoyancy modules. 
A thorough verification and validation of the current algorithms was carried out for stationary cylinders with uniform cross-sections, flexibly-mounted rigid cylinders and flexible cylinders. Major contributions include EVM-enabled simulations of dynamic responses of flexibly-mounted rigid cylinders with low mass ratio in higher Reynolds number uniform flows (Re = 140,000) than existing literature, thereby yielding numerically novel response maps. The new results provide more insights on the role of Re in amplitude responses and FSI phenomena associated with vortex-induced vibrations in practical applications. Another major contribution is the detailed investigation of complex flows past a flexible cylinder at Re[subscript max] ≤ 11,000, which is higher than existing literature (Re[subscript max] ≈ 2000). The relatively large change in Re along the span revealed new fluid-structure energy transfer behavior in linearly and exponentially sheared flows.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122262</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on economic theory</title>
<link>https://hdl.handle.net/1721.1/122240</link>
<description>Essays on economic theory
Dai, Tianjiao, Ph.D. Massachusetts Institute of Technology.
The first chapter considers team incentive schemes that are robust to nonquantifiable uncertainty about the game played by the agents. A principal designs a contract for a team of agents, each taking an unobservable action; these actions jointly determine a stochastic contractible outcome. The game is common knowledge among the agents, but the principal only knows some of the available action profiles. Realizing that the game may be bigger than he thinks, the principal evaluates contracts based on their guaranteed performance across all games consistent with his knowledge. All parties are risk neutral and the agents are protected by limited liability. A contract is said to align the agents' interests if each agent's compensation covaries positively and linearly with the other agents' compensation. It is shown that contracts that fail to do so are dominated by those that do, both in terms of the surplus guarantee under budget balance, and in terms of the principal's profit guarantee when he is the residual claimant. It thus suffices to base compensation on a one-dimensional aggregate even if richer outcome measures are available. The best guarantee for either objective is achieved by a contract linear in the monetary value of the outcome. This provides a foundation for practices such as team-based pay and profit-sharing in partnership. The second chapter models a ride-sharing market in a traffic network with stochastic ride demands. A monopolistic ride-sharing platform in this traffic network faces a dynamic optimization problem to maximize its per period average payoff in the long run, by choosing policies of setting trip prices, matching ride requests and relocating idle drivers to meet future potential demands. Directly solving the dynamic optimization problem for the ride-sharing platform is computationally prohibitively expensive for a traffic network with a reasonably large number of locations and vehicles due to its intrinsic complexity. 
I provide a theoretical upper bound on the performance of dynamic policies by analyzing a related deterministic problem. Based on the optimal solution to the deterministic problem, I propose implementable heuristic policies for the original stochastic problem whose average payoffs converge asymptotically to the theoretical upper bound. I also discuss the relative value function iteration method for numerically solving the optimization problem in small-scale markets. The third chapter examines several discrete-time versions of a dynamic moral hazard in teams problem, a continuous-time model of which has been extensively studied in the previous literature. The way to transform the continuous-time game into a discrete-time one is not unique, and different discrete-time assumptions with the same continuous-time technology limit lead to different discrete-time equilibria. Regardless of the technology assumption, I find that two-period models can give equilibrium results quite different from those of a continuous-time model: while the continuous-time model predicts existence and uniqueness of a symmetric equilibrium, its two-period versions can have either multiple symmetric equilibria or none. Moreover, not all equilibria in the discrete-time models share features similar to the one predicted by the continuous-time model. The subsequent study of multiple-period models without learning sheds some light on how the equilibria evolve as the discrete-time model better approximates the continuous-time one.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122240</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The economics of online retailers</title>
<link>https://hdl.handle.net/1721.1/122232</link>
<description>The economics of online retailers
Aparicio, Diego.
This thesis consists of four chapters on applied economics related to online retailers. The first chapter studies the impact of targeted price controls on supermarket products in Argentina between 2007 and 2015. Online daily prices for controlled and non-controlled goods were collected to examine the differential effects of the policy on inflation, product availability, entry and exit, and price dispersion. Price controls have only a temporary effect on inflation. Moreover, firms compensate for price controls by introducing new product varieties at higher prices, increasing price dispersion within narrow categories. The second chapter studies choice overload in a large-scale online experiment involving thousands of households making major purchases. Purchase rates and click-through rates decreased with fewer choices. However, the effects are heterogeneous across users: while local users exhibit lower purchase rates, foreign users exhibit higher purchase rates when offered fewer choices. The third chapter studies the use of online price data to forecast the Consumer Price Index. Online price indices anticipate changes in official inflation trends more than one month in advance. The baseline one-month forecast outperforms Bloomberg surveys of forecasters, which predict the contemporaneous inflation rate. Similarly, online-based quarterly forecasts for the US inflation rate outperform the Survey of Professional Forecasters. The fourth chapter studies pricing in the fashion retail industry. Online data were collected from over 65 retailers in the U.S. and the U.K. A machine learning classifier categorizes products within and across retailers, as well as over different seasons. A fair fraction of firms implement what is described as price clustering: a large number of different products are priced using just a small number of sparse prices, with price changes occurring rarely and in large increments. 
This pricing strategy is consistent with a behavioral model in which fewer prices make price advertising more effective.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-175).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122232</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on imperfect competition in the labor market</title>
<link>https://hdl.handle.net/1721.1/122228</link>
<description>Essays on imperfect competition in the labor market
Caldwell, Sydnee Christian.
This thesis consists of three chapters on imperfect competition in the labor market. The first chapter (joint with Nikolaj Harmon) explores the relationship between an individual's wages and the quality of her opportunities at other firms (her outside options). To overcome the fact that many factors that shift an individual's outside opportunities also affect her productivity at her current job, we develop a novel identification strategy that generates within-individual (and within-firm-by-occupation) variation in workers' information about their outside options. This strategy, which we implement using linked employer-employee data from Denmark, exploits the fact that individuals often learn about labor market opportunities through their social networks. We find that increases in labor demand at former coworkers' current firms increase an incumbent worker's job-to-job mobility and wage growth. Consistent with theory, larger changes are necessary to induce a job-to-job transition than to induce a wage gain. Tests that exploit within-firm or within-firm-and-occupation variation, and tests that exploit different subsets of an individual's former coworkers, confirm that the results are not driven by unobserved changes in demand for workers' skills. Finally, we use our reduced-form moments to identify a structural search model incorporating both posting and bargaining firms. We find that bargaining is more prevalent among high-skilled workers. The second chapter (joint with Oren Danieli) investigates the role that cross-sectional differences in individuals' outside options play in generating between-group wage inequality. We use a two-sided matching model to micro-found a measure of workers' outside options, which we call the "Outside Options Index" (OOI). The index is similar to those used in the industrial organization literature to measure concentration (e.g., the Herfindahl-Hirschman Index, the HHI). 
We then use German administrative data to estimate this index, drawing on two sources of quasi-random variation, (1) the introduction of high-speed trains and (2) a standard shift-share instrument, to identify the elasticity between our index and wages. When we combine these two ingredients, we find that roughly one third of the gender wage gap in Germany can be explained by differences in options, mostly the result of differences in effective labor market size (commuting costs). The third chapter (joint with Emily Oehlsen) asks whether, in the absence of commuting costs, firms with market power have an incentive to pay women less than men. We use data from a series of experiments at Uber in which we offered random subsets of male and female drivers higher "wages". Drivers varied both in the size of the wage increase and in whether they could drive for Uber's main competitor, Lyft. These two sources of variation allow us to experimentally identify (1) Frisch elasticities and (2) firm substitution elasticities. We find that women have Frisch elasticities double those of men on both the intensive and extensive margins. However, unlike the prior literature, we find that women are not less likely to shift between firms in response to changes in relative wages. The results suggest that, at least in the gig economy, firms have little incentive to wage-discriminate between men and women based on their labor supply choices. JEL Codes: J00, J31, J42
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis. "The second half of TOC page numbers are off by 2 pages"--Disclaimer Notice page.; Includes bibliographical references (pages 309-319).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122228</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying melting and chemical differentiation processes on Earth and the Moon</title>
<link>https://hdl.handle.net/1721.1/122225</link>
<description>Quantifying melting and chemical differentiation processes on Earth and the Moon
Brown, Stephanie Marie, Ph.D. Massachusetts Institute of Technology.
This thesis contains four chapters that together investigate the igneous history of the Earth and the Moon. Each chapter explores new quantitative methods for combining experiments in igneous petrology with observed local and global major and trace element compositional variations and geophysical constraints. Integrating all types of geochemical fingerprints and geophysical observations allows us to disentangle complex natural processes in which several interdependent variables are always at play. Chapter 1 investigates the timing and trace element partition coefficient conditions under which the Earth could have crystallized a magma ocean that then overturned and remixed to form an Early Enriched Reservoir and a complementary Early Depleted Reservoir consistent with isotopic measurements of Archean rocks. This study found that Earth most likely last differentiated a highly heterogeneous mantle ~40 Ma after Solar System formation. Chapter 2 is an experimental study of the multiple saturation point of the ultramafic Apollo 14 volcanic yellow glasses and their genesis via mixing of melts of different lunar magma ocean cumulates. In finding successful mixing scenarios, this study highlighted the importance and feasibility of internally consistent petrologic models. Chapters 3 and 4 shift in time from the Hadean and the Archean to the present by focusing on the generation and evolution of mid-ocean ridge basalts. Chapter 3 answers the question "What is the source of the garnet signature in MORB?" by quantifying the permissible range of mantle potential temperatures, mantle compositions, spreading rates, and mantle flow regimes that give rise to recognizable garnet-lherzolite field melting. 
Chapter 4 applies garnet melting systematics (Chapter 3) and consistency in petrologic models (Chapter 2) to the slow- to ultraslow-spreading 9-25°E Southwest Indian Ridge. This study found that plagioclase peridotite melting, and not garnet peridotite or pyroxenite melting, of compositionally variable peridotite explains all observed compositional and geophysical variations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122225</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some rock mechanics problems with application for hydraulic fracturing</title>
<link>https://hdl.handle.net/1721.1/122224</link>
<description>Some rock mechanics problems with application for hydraulic fracturing
Mighani, Saied, 1989-
Hydraulic fracturing is an essential tool used to enhance connectivity in shale gas reservoirs by maximizing the intersection between the hydraulic fracture (HF) and pre-existing natural fractures (NF) or faults. The technique is most effective when the hydraulic fracture crosses natural fractures rather than arresting on them. Experiments conducted to examine the interaction between the HF and artificial pre-existing faults suggest that the coupling of diffusivity and fault slip is an important element of the HF-fault interaction problem. Fault slip, once activated, is associated with an apparent increase in diffusivity. Whether the hydrofracture crosses or arrests on the pre-existing fault is also affected by surface roughness, differential stresses, and fault slip mode (i.e., stable or stick-slip sliding). Calibrated piezoelectric transducers were used to measure acoustic emissions (AE) generated during HF and fault slip. Moment tensor analysis of these events was used to distinguish pure tensile, shear, and possibly closure events during the experiments. Seismic moment magnitudes were approximately -7 for events during the initiation of the HF and about -5 for events during fault slip. Such a low ratio of seismic moments for tensile and slip events may explain the small number of tensile events recorded during reservoir stimulations. I also studied the time-dependent behavior of shales to gain insight into post-stimulation production efficiency. Shale experiences strain hardening and compaction during loading under both isostatic (pressure-driven) and differential (shear-driven) stress. Transient creep strain increased linearly with log(time), possibly transitioning to a constant rate on a timescale of several days. 
Motivated by the multi-scale nature of heterogeneities in shales, I examined the micromechanics of deformation using the nano-indentation technique. Elastic and creep moduli found in nano-indentation and triaxial tests agreed within a factor of 2, but within that factor, the creep strength may depend on spatial scale.
Thesis: Ph. D. in Geophysics, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-205).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122224</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of emission policies in China on air pollution and human health</title>
<link>https://hdl.handle.net/1721.1/122223</link>
<description>Impacts of emission policies in China on air pollution and human health
Li, Mingwei
Precursor emissions of air pollution can be reduced at emitting sources by end-of-pipe control policies or as co-benefits of climate policies that limit fossil fuel use. Identifying cost-effective control strategies requires understanding policy costs, chemical non-linearities in pollution formation, and the value of health benefits. China suffers from severe air pollution and is implementing both types of policies, but relevant studies are limited. This thesis comprises three studies that examine the air quality co-benefits of China's recent climate policy for China and transboundary countries, and the potential changes in the sensitivities of inorganic PM₂.₅ to precursor emissions in China. The first study quantifies the co-benefits of China's climate policy from reducing PM₂.₅, using a modeling framework that couples an energy-economic model with sub-national detail for China (C-REM) to the atmospheric chemical transport model GEOS-Chem. The effects of an illustrative climate policy, a price on CO₂ emissions, are simulated under three stringencies. In a policy scenario consistent with China's recent pledge to peak CO₂ emissions by 2030 (the 4% Policy scenario), national health co-benefits from reduced PM₂.₅ pollution can partially or fully offset policy costs, depending on the chosen health valuation. This study also suggests co-benefits would rise with increasing policy stringency. Using the same model simulations, the second study further compares co-benefits from PM₂.₅ and ozone in China and three downwind countries (South Korea, Japan, and the United States). 
This study suggests that under the 4% Policy scenario, avoided premature deaths from reducing ozone are about half of those from PM₂.₅ in China, and the total avoided deaths in transboundary countries are about 4% of those in China. The third study examines the potential changes in the sensitivities of inorganic PM₂.₅ to precursor emissions in China in response to the current and projected national reductions in SO₂ and NOₓ emissions. Under scenarios that reduce SO₂ and NOₓ emissions, sensitivities to SO₂ and NOₓ emissions would increase, but sensitivity to NH₃ emissions would decrease in January and July. The largest absolute changes in sensitivities are found in January for NOₓ and NH₃.
Thesis: Ph. D. in Atmospheric Chemistry, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 85-93).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122223</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Offshore wind turbine nonlinear wave loads and their statistics</title>
<link>https://hdl.handle.net/1721.1/122220</link>
<description>Offshore wind turbine nonlinear wave loads and their statistics
Zhang, Yu, Ph.D. Massachusetts Institute of Technology. Department of Mechanical Engineering.
Because lateral flexural vibrations strongly influence offshore wind turbine foundations, and because the foundation's natural frequencies lie above the dominant frequencies of the linear wave load model, modeling the dynamic behavior of the foundation under nonlinear wave loads and analyzing their statistical characteristics have become important issues in offshore wind turbine design. This thesis derives an approximate time-domain model of the nonlinear wave loads using Fluid Impulse Theory, verifies it against the boundary element method software WAMIT, and validates it with experimental measurements. The load level crossing rates and the load power spectral density are obtained in multiple sea states. The simulated nonlinear wave loads are applied as the forcing mechanism on the offshore wind turbine and its foundation, and the mudline bending moments are computed and compared with experimental measurements. System identification is conducted by fitting the model to the experimental data using a linear regression method. Analytical extreme and fatigue predictions for the offshore wind turbine system are derived and evaluated in waters of finite depth and in multiple sea states. Key words: nonlinear wave loads, nonlinear wave load statistics, system identification, extremes and fatigue
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 83-86).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122220</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluid mechanics of lubricant transport in non-contact regions in the piston ring pack in internal combustion engines</title>
<link>https://hdl.handle.net/1721.1/122218</link>
<description>Fluid mechanics of lubricant transport in non-contact regions in the piston ring pack in internal combustion engines
Fang, Tianshi.
The compromise between friction and lubricant consumption has been a long-standing challenge in the design of the piston ring pack in internal combustion engines. Achieving a satisfactory compromise requires a systematic understanding of lubricant transport in the piston ring pack, and in the context of increasingly stringent standards on engine emissions, this knowledge is needed more urgently than ever. This work focuses on lubricant transport in two non-contact regions of the piston ring pack: 1) the region near a piston skirt chamfer; and 2) the region near a piston third land. While the Reynolds equation has been widely employed to model the contact interfaces, more general fluid mechanics must be applied in the non-contact regions. This thesis is the first work to comprehensively apply Computational Fluid Dynamics (CFD) and theoretical modelling to the non-contact regions of the piston ring pack. CFD was employed to understand the lubricant transport at a fundamental level, and theoretical models were developed to quantify it more efficiently. This work is a major step towards an accurate quantification of the lubricant leakage through the oil control ring (OCR), which can be critical to lubricant consumption. The lubricant transport in the skirt chamfer region determines the pressure outside the contact interface between the lower flank of the OCR and its groove, and thus the lubricant flow rate into the OCR groove. A numerical model and a closed-form correlation were developed to efficiently predict this pressure. While lubricant transport into the OCR groove had often been overlooked, this work revealed that it can be substantial. In the region near the piston third land, two mechanisms of lubricant transport were studied: 1) high-speed bridging; and 2) reattachment. Both introduce additional lubricant to the ring/liner contact interfaces. 
The effects on the inlet conditions of the ring/liner contact interfaces were quantitatively studied. The existing knowledge of high-speed bridging was extended quantitatively, and the reattachment process was discovered and studied here for the first time.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 175-177).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122218</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Outdoor wind speed enhancement in residential precincts in tropical cities</title>
<link>https://hdl.handle.net/1721.1/122217</link>
<description>Outdoor wind speed enhancement in residential precincts in tropical cities
Chew, Lup Wai.
Wind speeds in urban areas are significantly reduced by the blockage effects of urban structures. Lower wind speeds inhibit passive ventilation and reduce thermal comfort in tropical cities. This thesis uses both experiments and computational fluid dynamics (CFD) modeling to explore the potential of building porosity to increase wind speeds at the pedestrian level. The computational models are first validated against experiments to justify the exclusion of thermal effects, as the models over-predict the thermal effects of heated surfaces. Validated computational simulations show that void decks can increase pedestrian-level wind speeds more than twofold in two-dimensional urban street canyons. In three-dimensional urban street canyons, void decks increase wind speeds not only in the street canyons but also along the streets. The effectiveness of void decks in increasing wind speed is significantly influenced by the height of the void decks but not by the building height. Next, the April 2016 heat wave in Singapore is simulated with the Weather Research and Forecasting model to identify two residential precincts with high temperatures for detailed case studies. Both selected precincts are simulated with CFD models to obtain the pedestrian-level wind fields. The effects of void decks in these two real urban areas are evaluated by comparing the wind field with void decks to that in the control case without void decks. The first precinct, with smooth upwind areas, shows wind speed enhancement of up to 80% of the freestream wind speed (the wind speed above roof level) with void decks. The second precinct, with rough upwind areas, shows wind speed enhancement of up to 50% of the freestream wind speed. In conclusion, void decks are an effective architectural intervention for enhancing pedestrian-level wind speeds, but their effectiveness is influenced by the upwind conditions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122217</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactions of carbon vapor with hydrogen and with methane in a high intensity arc</title>
<link>https://hdl.handle.net/1721.1/122215</link>
<description>Reactions of carbon vapor with hydrogen and with methane in a high intensity arc
Blanchet, Jean L.
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 1963.; Vita.; Includes bibliographical references (leaves 174-176).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122215</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building a Paneth cell</title>
<link>https://hdl.handle.net/1721.1/122214</link>
<description>Building a Paneth cell
Mead, Benjamin E. (Benjamin Elliott)
Understanding how our bodies interact with our resident gut microbes may unlock new therapies for multiple diseases. However, we presently lack representative models to study such interactions, limiting therapeutic development. Here, we advance methods of cellular bioengineering to create high-fidelity models of intestinal cell types, in particular the microbe-interfacing, antimicrobial Paneth cell. With this model, we study at scale molecular interventions that direct Paneth cell development and function as a means to modulate our gut microbes in health and disease. Multiple diseases, including inflammatory bowel disease, are linked to alterations in Paneth cell function and the composition of the gut bacteria. Past studies of Paneth cells have relied on complex and poorly scalable animal models, or on limited in vitro models, including stem cell-derived organoids. The extent to which in vitro models of Paneth cells reproduce in vivo biology is an unanswered question central to their utility in studying host-microbe interactions for therapeutic development. We first present a generalizable approach using single-cell RNA sequencing to compare the identity of in vivo Paneth cells to those of in vitro organoids and, based on lineage-defining differences between the two, nominate small molecule interventions to improve model representation. We then validate our improved Paneth cell model through rigorous characterization and a demonstration of functional improvements (antimicrobial activity, niche support) in Paneth cell physiology following our intervention. With this high-fidelity Paneth cell model, we built a scalable platform to study interventions that may enhance cell function and development. As a proof-of-concept screen, we use a well-defined and clinically relevant set of small molecules to identify drugs that enhance Paneth cell differentiation and antimicrobial function. 
We validate the most potent drugs with additional analyses, revealing multiple molecular targets that may serve as therapeutic candidates to restore Paneth cells in disease or act as a new approach to therapeutically shape the gut microbiota.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 120-133).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122214</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable biomolecular integration and dynamic behavior of DNA-based systems for development of biomedical nano-devices</title>
<link>https://hdl.handle.net/1721.1/122213</link>
<description>Programmable biomolecular integration and dynamic behavior of DNA-based systems for development of biomedical nano-devices
Hahn, Jaeseung.
Departing from its traditional role as a carrier of genetic information, DNA has emerged as an engineering material for the construction of nano-devices. Advances in the field of DNA nanotechnology have enabled the design and synthesis of DNA nanostructures of arbitrary shapes and the programmable manipulation of the nanostructures' conformations. DNA-based systems offer potential applications in medicine by manipulating the biological components and processes that occur at the nanometer scale. To accelerate the translation of DNA-based systems to medical applications, we identified some of the challenges hindering our ability to construct biomedical nano-devices and addressed them through advances in both structural and dynamic DNA nanotechnology. First, we tested the stability of DNA nanostructures in biological environments to highlight the necessity of, and a path towards, protection strategies for the prolonged integrity of biomedical nano-devices. Then, we constructed a platform for robust 3D molecular integration using the DNA origami technique and implemented the platform as a nanofactory capable of producing therapeutic RNA, to overcome the challenges of RNA delivery. Moreover, we established a mechanism to drive DNA devices by changing temperature, with prolonged dynamic behavior that was previously challenging to accomplish without special modification of DNA and/or equipment not readily available in a typical lab setting. Together, the progress made in this thesis brings us another step closer to the realization of medical applications of DNA nanotechnology by addressing challenges in both the structural and dynamic aspects of the technology.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122213</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic study of dislocation nucleation at a crack tip</title>
<link>https://hdl.handle.net/1721.1/122212</link>
<description>Atomistic study of dislocation nucleation at a crack tip
Cheung, Kin Sang, 1965-
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1990.; Vita.; Includes bibliographical references (leaves 270-278).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122212</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural correlates of locomotion, cues, and context in the interactions between hippocampus and lateral septum</title>
<link>https://hdl.handle.net/1721.1/122208</link>
<description>Neural correlates of locomotion, cues, and context in the interactions between hippocampus and lateral septum
Wirtshafter, Hannah (Hannah Suzanne)
The lateral septum (LS) has been implicated in anxiety and fear modulation and may regulate interactions between the hippocampus (HPC) and regions that mediate goal-directed behavior. In this study, we simultaneously record from cells in the LS and the HPC during navigation and conditioning tasks. We identify a speed and acceleration spiking code in the LS that does not map onto states of motivation or anticipation. We also identify an overlapping population of LS cells that change their firing in response to cue and reward during conditioning. These cells display sharp-wave ripple and theta modulation, spatial firing fields, and responses similar to those of the HPC during conditioning. These HPC-associated cells are not disproportionately speed- or acceleration-modulated, suggesting that these movement correlates are not hippocampally derived. This suggests a role for the LS in evaluating movement-dependent changes in context that can be used to guide task-relevant behavior.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122208</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic regulation of cell extrusion in Caenorhabditis elegans</title>
<link>https://hdl.handle.net/1721.1/122207</link>
<description>Genetic regulation of cell extrusion in Caenorhabditis elegans
Dwivedi, Vivek Kumar.
Programmed elimination of cells occurs during animal development and homeostasis to maintain appropriate cell numbers. One evolutionarily conserved method by which organisms eliminate cells in a programmed manner is by cell-autonomous activation of the caspase-mediated apoptosis pathway, which produces a corpse that is engulfed and degraded by phagocytic cells. Cell elimination can also occur by a different method, called cell extrusion, in which the cell to be eliminated is squeezed out from a layer of cells, such as an epithelium. Cell extrusion is also an evolutionarily conserved form of cell elimination and has been observed in organisms ranging from Drosophila to mammals. It is the primary method of cell elimination in mammalian epithelial tissues, such as the small intestinal epithelium. A specific set of cells is eliminated by cell extrusion in caspase-deficient C. elegans embryos.; Remarkably, these cells show morphological and cytological features of apoptosis in the complete absence of caspases. To identify the genes required for cell extrusion, I performed a genome-scale RNAi screen for worms that express a phenotype arising from defective extrusion. This RNAi screen revealed that genes required for entry into the cell cycle and the G1/S-phase transition are required for cell extrusion. From subsequent live-imaging experiments using confocal microscopy, I discovered that cells fated for extrusion entered the cell cycle earlier and arrested in S phase. I found that extruded cells are the much smaller sisters generated by unequal cell divisions. 
Taken together, my findings indicate that generation from an unequal cell division, entry into the cell cycle, and the S-phase arrest that likely results from these processes are all coupled to the cell extrusion fate, as genetically perturbing any of these processes blocks cell extrusion.; In short, I have identified the genetic factors that lead to the arrested cell-cycle state that drives caspase-independent apoptotic cell extrusion by C. elegans. Studies of mammalian epithelial cells using hydroxyurea (performed in collaboration with the Jody Rosenblatt Laboratory) indicate that S-phase arrest drives cell extrusion from mammalian epithelia. These findings demonstrate that cell extrusion driven by S-phase arrest is evolutionarily conserved and suggest implications for development, physiology and disease.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122207</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure-function relationships in monotopic phosphoglycosyl transferases</title>
<link>https://hdl.handle.net/1721.1/122206</link>
<description>Structure-function relationships in monotopic phosphoglycosyl transferases
Entova, Sonya.
Complex glycans play essential roles in prokaryotic and eukaryotic biology. While this ubiquitous post-translational modification takes a diversity of forms, many glycoconjugate biosynthesis pathways across domains of life follow a common logic. Glycan assembly is initiated by a phosphoglycosyl transferase (PGT) that transfers a phosphosugar from a nucleotide donor to a polyprenol phosphate (PrenP) chain embedded in the membrane. The PrenPP-sugar product is elaborated by downstream glycosyltransferases, transferred across the membrane and ultimately appended to various acceptor molecules. The PGTs initiating glycan assembly adopt diverse membrane architectures. An extensive superfamily of PGTs, elucidated in part by this thesis, is exemplified by PglC from the Gram-negative pathogen Campylobacter jejuni. PglC comprises a globular cytosolic domain and an N-terminal membrane-resident domain.; Recent structural and biochemical analyses determined that this domain forms a helix-break-helix motif, termed the reentrant membrane helix (RMH), that enters and exits on the same face of the membrane, resulting in a monotopic topology. The RMH anchors the PglC fold in the membrane in a manner not previously observed among other monotopic membrane proteins. This thesis focuses on structure-function relationships in the RMH and associated domains. Two conserved motifs are shown to drive formation of a reentrant topology for PglC, and to exemplify common principles of topology determination among diverse monotopic proteins. These principles are further applied to the identification of reentrant domains in an extensive superfamily of monotopic lipid A acyltransferases previously thought to be membrane-spanning. The next section of the thesis explores the highly conserved role of PrenP in complex glycan biosynthesis.; The significance of PrenP geometry in mediating substrate binding and modulating the local membrane environment is presented. 
Additionally, a conserved proline residue in the PglC RMH is determined to drive PrenP binding and specificity. Molecular insights from this study shed new light on the roles of PrenP in facilitating diverse glycoconjugate biosynthesis pathways. Finally, a cell-free methodology for expression of PglC directly into model membrane lipid Nanodiscs is described. This system has valuable applications for the study of interactions between PglC and downstream glycosyltransferase enzymes, and for further structural characterization of PglC in a membrane environment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122206</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A GNAQ/11-driven zebrafish cancer model identifies MITF and YAP as key determinants for uveal melanoma</title>
<link>https://hdl.handle.net/1721.1/122205</link>
<description>A GNAQ/11-driven zebrafish cancer model identifies MITF and YAP as key determinants for uveal melanoma
Hagen, Hannah R.
Uveal melanoma (UM) is a cancer of eye melanocytes that has a particularly poor prognosis and a dearth of treatment options. In contrast to cutaneous melanoma (CM), which is driven by oncogenic mutation of BRAF or NRAS, UM is driven by oncogenic mutations in GNAQ or GNA11 (&gt;80% of patients). We developed a zebrafish model of UM by expressing GNAQ[superscript Q209L] in the melanocytes of p53-/- zebrafish. In this model, oncogenic GNAQ activates biological programs involved in pigmentation and tumorigenesis. The melanocyte lineage transcription factor MITF plays a well-defined role in CM tumorigenesis, wherein MITF expression is absolutely necessary for tumorigenesis and regulates proliferative versus invasive tumor phenotypes. In contrast, here we elucidated a novel tumor suppressor function for MITF in UM. In the context of oncogenic GNAQ, loss of mitfa accelerates tumorigenesis and reduces tumor reliance on p53 mutation.; Expression of oncogenic GNAQ can also overcome mitfa-/- tumor inhibition in BRAF-driven CM. Moreover, mitfa's tumor suppressive role is generalizable to UM and mitfa-/- accelerates tumorigenesis driven by oncogenic GNA11 or CYSLTR2, a GNAQ/11 upstream receptor. Interestingly, we show that GNAQ directly regulates pigment programs, as evidenced by hyperpigmentation patches that develop even in the absence of mitfa. Furthermore, the pigmentation and tumorigenesis phenotypes are decoupled, suggesting differential regulation of these programs downstream of oncogenic GNAQ. The observation that mitfa has different roles in CM vs UM led us to investigate the different signaling mechanisms of BRAF- vs GNAQ-driven melanoma. Oncogenic GNAQ activates the downstream pathways YAP and PLC[beta], and there is conflicting evidence implicating the relative importance of these two signaling axes in driving UM tumorigenesis. 
Here, we outline a central role for YAP in in vivo UM tumorigenesis.; Constitutively active YAP[superscript S127A;S381A] drives rapid UM tumorigenesis; furthermore, YAP staining was ubiquitous across UM tumors. Finally, we show here for the first time that PLCB4[superscript D630Y] was sufficient to drive tumors at long latency. Together, these findings clarify the genetic pathways that drive UM formation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis. "May 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122205</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of cost-optimized village-scale electrodialysis systems for brackish water desalination</title>
<link>https://hdl.handle.net/1721.1/122204</link>
<description>Design of cost-optimized village-scale electrodialysis systems for brackish water desalination
Wright, Natasha C. (Natasha Catherine)
This thesis proposes methods of reducing the cost of electrodialysis brackish water desalination systems, specifically for use in rural India, where 60% of the groundwater is too saline to drink. Convergence of socioeconomic and technical factors led to the insight that photovoltaic (PV) powered electrodialysis (ED) has the potential for impact in rural water treatment. In order to design a system that can meet the necessary production requirements, a robust parametric model was created to predict the desalination rate, limiting current density, and total energy use in an ED system. The model agrees with experimental measurements across two diverse ED stack designs, differing in total membrane area, membrane manufacturers, and flow channel spacers. A commercial-scale ED stack was additionally tested in Chelluru, India, building confidence that the model is predictive for real groundwater, and that ED systems are feasible to operate in the rural Indian context.; The ED model was used within an optimization routine to determine the lowest cost operating mode and stack design, assuming existing, flat-stack architectures. Common operating modes including constant-voltage batch and multi-stage continuous systems were considered alongside novel operation modes including voltage-regulated batch and hybrid batch-continuous systems. For the production and desalination rates required for a village-scale application, a voltage-regulated hybrid system that is fully optimized for membrane width, length, and channel thickness reduces the 10-year total cost and capital cost of the system by 37% and 47%, respectively, in comparison to a commercially available stack optimized under the same operation modes. 
While matching of applied and limiting current densities can be achieved using a voltage-regulated batch operation (minimizing stack cost), this requires a potentially costly DC power supply and control system.; The final part of the thesis proposes a spiral ED stack architecture that allows for matching through the geometry of the stack alone. Both a standard Archimedean spiral and an ideal irregular spiral shape are presented. The ideal spiral shape would reduce the 10-year total cost and capital cost by 21% and 39%, respectively, in comparison to the Archimedean spiral, and is cost-competitive with a hybrid voltage-regulated flat-stack design.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122204</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A derivation of K. Wilson's equations and applications to current operators in relativistic quantum field theory.</title>
<link>https://hdl.handle.net/1721.1/122200</link>
<description>A derivation of K. Wilson's equations and applications to current operators in relativistic quantum field theory.
Brandt, Richard Allan.
Massachusetts Institute of Technology. Dept. of Physics. Thesis. 1966. Ph.D.; Bibliography: leaves 162-164.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122200</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular partitioning and approximate coupling techniques in multiple scattering theory</title>
<link>https://hdl.handle.net/1721.1/122199</link>
<description>Molecular partitioning and approximate coupling techniques in multiple scattering theory
Leon, Francisco Alexander.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 1984.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122199</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional integration applied to the nuclear many body-problem</title>
<link>https://hdl.handle.net/1721.1/122198</link>
<description>Functional integration applied to the nuclear many body-problem
Troudet, Thierry.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 1982.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122198</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron scattering from deformed heavy nuclei.</title>
<link>https://hdl.handle.net/1721.1/122197</link>
<description>Electron scattering from deformed heavy nuclei.
Sasanuma, Toichi.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 1979.; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122197</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An injectable gelatin-based conjugate incorporating EGF promotes tissue repair and functional recovery after spinal cord injury in a rat model</title>
<link>https://hdl.handle.net/1721.1/122195</link>
<description>An injectable gelatin-based conjugate incorporating EGF promotes tissue repair and functional recovery after spinal cord injury in a rat model
Shah, Adhvait M.
Spinal cord injury (SCI) is a devastating condition that affects about 300,000 patients in the USA and drastically reduces quality of life. As a result of the injury, sensory perception and motor functions are lost. Current treatments do not address the root cause - degeneration and loss of neural tissue. The overall goal of this pre-clinical work was to evaluate a novel gelatin-based conjugate (gelatin-hydroxyphenyl propionic acid; Gtn-HPA) capable of undergoing covalent cross-linking in vivo after being injected as a liquid. Gtn-HPA incorporating epidermal growth factor (EGF) and/or stromal cell-derived factor-1α (SDF-1α) was evaluated for promoting tissue healing and functional recovery using a standardized 2-mm hemi-resection SCI rat model, four weeks after injection. Injection of Gtn-HPA/EGF immediately after the surgical excision injury significantly improved motor functional recovery, compared to gel alone and non-treated controls.; Bladder function was also improved in Gtn-HPA/EGF-treated animals. Functional improvement correlated with the amount of spared tissue. The volume of gel in the defects was quantified by a newly developed MRI-based method employing T1-weighted inversion recovery to unambiguously image Gtn-HPA in the injury site in a non-destructive manner. Histological analysis showed the presence of multiple islands of Gtn-HPA in the injury site after four weeks. There was a significantly greater number of cells migrating into the Gtn-HPA/EGF, compared to the gel alone, and these cells displayed neural progenitor cell markers: nestin, vimentin, and Musashi. The cells infiltrating Gtn-HPA were negative for glial fibrillary acidic protein (GFAP), a marker for astrocytes. Injection of the gel reduced the reactive astrocytic presence at the border outlining the injury site, indicating reduction of the glial scar.; There was no notable inflammatory response to the Gtn-HPA gel, reflected in the number of CD68-positive cells, including macrophages. 
Of note was the demonstration by immunohistochemistry that the Gtn-HPA remaining at 4 weeks post-injection contained EGF. MMP2 was found to play a role in the in vivo degradation of the Gtn-HPA gel. Additional behavioral and histological results were acquired by injecting Gtn-HPA/EGF in a 2-mm complete-resection SCI rat model. Collectively, the findings signaled that injury sites injected with Gtn-HPA/EGF had greater potential for regeneration. In summary, this work commends an injectable, covalently cross-linkable formulation of Gtn-HPA incorporating EGF for further investigation in promoting functional recovery and potential regeneration in the treatment of SCI, thereby improving the quality of life of SCI patients.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122195</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Validation of dMRI techniques for mapping brain pathways</title>
<link>https://hdl.handle.net/1721.1/122194</link>
<description>Validation of dMRI techniques for mapping brain pathways
Grisot, Giorgia.
Diffusion magnetic resonance imaging (dMRI) tractography is the only non-invasive tool for studying the connectional architecture of the brain in vivo. By measuring the diffusion of water molecules, dMRI provides unique information about white matter pathways and their integrity, making it an invaluable neuroimaging tool that has improved our understanding of the human brain and how it is affected by disease. A major roadblock to its acceptance into clinical practice has been the difficulty in assessing its anatomical accuracy and reliability. In fact, obtaining a map of brain pathways is a multi-step process with numerous variables, assumptions and approximations that can influence the veracity of the generated pathways. Validation is, thus, necessary and yet challenging because there is no gold standard to which dMRI can be compared, since the configuration of human brain connections is largely unknown. Which aspects of tractography processing have the greatest effect on its performance? How do mapping methods compare? Which one is the most anatomically accurate? We tackle these questions with a multi-modal approach that capitalizes on the complementary strengths of available validation strategies to probe dMRI performance on different scales and across a wide range of acquisition and analysis parameters. The outcome is a multi-layered validation of dMRI tractography that 1) quantifies dMRI tractography accuracy on the level of both brain connections and tissue microstructure; 2) highlights the strengths and weaknesses of different modeling and tractography approaches, offering guidance on the issues that need to be resolved to achieve a more accurate mapping of the human brain.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 201-222).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122194</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Climate allies : how urban/military interdependence enables adaptation</title>
<link>https://hdl.handle.net/1721.1/122193</link>
<description>Climate allies : how urban/military interdependence enables adaptation
Teicher, Hannah M.
As climate impacts escalate, U.S. cities and regions have attempted to fill the federal leadership vacuum in spite of their own resource constraints. In the midst of federal inertia, the Department of Defense (DoD) acknowledges climate risk, mainstreaming it into policy, while defense experts promote a climate security agenda. However, defense adaptation has been modest. Installations and the communities around them remain vulnerable, but these shared risks surface the potential for joint adaptation planning.; Through a relational case study of two regions with large defense complexes and the climate security policy community in DC, I ask: how and why do municipal and military leaders undertake joint adaptation? What impact does this have on commonly understood barriers to adaptation? How does climate security discourse shape urban/military collaboration? I find that in Hampton Roads, Virginia and San Diego, California, urban leaders are leveraging the military presence to reinforce their own adaptation efforts and elevate a broader adaptation agenda. This alliance operates through two mutually reinforcing enablers: recognizing interdependence and constructing credibility. As climate impacts compromise infrastructural and social networks, urban and military stakeholders have adopted interdependence as an operating premise, explicitly rejecting military islanding.; This challenges expectations in critical adaptation studies of the rise of ecological enclaves while more broadly challenging critiques of urban securitization. Further, it complicates notions of defense-dependency, as the military contingent increasingly recognizes reliance on the community. Amidst the politics of doubt, the military serves as a "credible messenger" on an institutional and individual level; climate security advocates work strategically, deploying this authority to build support for climate action. 
Both enablers reinforce the centrality of effective framing and multilevel coordination to urban adaptation. Benefits include expanded cooperation, increased technical capacity, and access to resources; pitfalls include favoring adaptation over mitigation and prioritizing conspicuous over mundane climate risks. Urban leaders' qualified success in leveraging the military for adaptation suggests implications for other powerful institutions.; Conceptualizing military installations as anchor institutions with an embedded local presence and dedicated mission highlights pathways for communities to form additional adaptation alliances.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; "June 2019." Page 224 blank. Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 207-223).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122193</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Point processes of representation theoretic origin</title>
<link>https://hdl.handle.net/1721.1/122190</link>
<description>Point processes of representation theoretic origin
Cuenca, Cesar (Cesar A.)
There are two parts to this thesis. In the first part we compute the correlation functions of the 4-parameter family of BC type Z-measures. The result is given explicitly in terms of Gauss's hypergeometric function. The BC type Z-measures are point processes on the punctured positive real line. They arise as interpolations of the spectral measures of a distinguished family of spherical representations of certain infinite-dimensional symmetric spaces. In representation-theoretic terms, our result solves the problem of noncommutative harmonic analysis for the aforementioned family of representations. The second part of the text is based on joint work with Grigori Olshanski. We consider a new 5-parameter family of probability measures on the space of infinite point configurations of a discrete lattice. One of the 5 parameters is a quantization parameter and the measures in the family are closely related to the BC type Z-measures. We prove that the new measures serve as orthogonality weights for symmetric function analogues of the multivariate q-Racah polynomials. Further, we show that the q-Racah symmetric functions (and their corresponding orthogonality measures) can be degenerated into symmetric function analogues of the big q-Jacobi, q-Meixner and Al-Salam-Carlitz polynomials, thus giving rise to a partial q-Askey scheme hierarchy in the algebra of symmetric functions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 191-195).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122190</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrodynamic analogues of quantum corrals and Friedel oscillations</title>
<link>https://hdl.handle.net/1721.1/122187</link>
<description>Hydrodynamic analogues of quantum corrals and Friedel oscillations
Cristea-Platon, Tudor.
We consider the walking droplet (or 'walker') system discovered in 2005 by Yves Couder and coworkers. We investigate experimentally and theoretically the behaviour of this hydrodynamic pilot-wave system in both closed and open geometries. First, we consider the dynamics and statistics of walkers confined to corrals. In the elliptical corral, we demonstrate that by introducing a submerged topographical defect, one can create statistical projection effects analogous to the quantum mirage effect arising in quantum corrals. We also report a link between the droplet's statistics and the mean wave field. In the circular corral, we investigate a parameter regime marked by periodic and weakly aperiodic orbits, then characterise the emergence and breakdown of double quantisation, reminiscent of that arising for walker motion in a harmonic potential. In the chaotic regime, we test the theoretical result of Durey et al. relating the walker statistics to the mean wave field. We also rationalise the striking similarity between this mean wave field and the circular corral's dominant azimuthally-symmetric Faraday mode. Our corral studies underscore the compatibility of the notions of quantum eigenstates and particle trajectories in closed geometries. We proceed by exploring a new hydrodynamic quantum analogue of the Friedel oscillations arising when a walker interacts with a submerged circular well, which acts as a localised region of high excitability. In so doing, we report the first successful realisation of an open hydrodynamic quantum analogue. We conclude by comparing the hydrodynamic systems to their quantum counterparts. Our work illustrates how, in the closed and open settings considered herein, a pilot-wave dynamics of the form envisaged by de Broglie may lead naturally to emergent statistics similar in form to those predicted by standard quantum mechanics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 145-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122187</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Totally positive spaces : topology and applications</title>
<link>https://hdl.handle.net/1721.1/122186</link>
<description>Totally positive spaces : topology and applications
Galashin, Pavel (Pavel A.)
This thesis studies topological spaces arising in total positivity. Examples include the totally nonnegative Grassmannian Gr[subscript ≥0](k, n), Lusztig's totally nonnegative part (G/P)[subscript ≥0] of a partial flag variety, Lam's compactification of the space of electrical networks, and the space of (boundary correlation matrices of) planar Ising networks. We show that all these spaces are homeomorphic to closed balls. In addition, we confirm conjectures of Postnikov and Williams that the CW complexes Gr[subscript ≥0](k, n) and (G/P)[subscript ≥0] are regular. This implies that the closure of each positroid cell inside Gr[subscript ≥0](k, n) is homeomorphic to a closed ball. We discuss the close relationship between the above spaces and the physics of scattering amplitudes, which has served as a motivation for most of our results. In the second part of the thesis, we investigate the space of planar Ising networks. We give a simple stratification-preserving homeomorphism between this space and the totally nonnegative orthogonal Grassmannian, describing boundary correlation matrices of the planar Ising model by inequalities. Under our correspondence, Kramers-Wannier's high/low temperature duality transforms into the cyclic symmetry of Gr[subscript ≥0](k, n).
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 195-203).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122186</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimax estimation with structured data : shape constraints, causal models, and optimal transport</title>
<link>https://hdl.handle.net/1721.1/122184</link>
<description>Minimax estimation with structured data : shape constraints, causal models, and optimal transport
Hütter, Jan-Christian Klaus.
Modern statistics often deals with high-dimensional problems that suffer from poor performance guarantees and from the curse of dimensionality. In this thesis, we study how structural assumptions can be used to overcome these difficulties in several estimation problems, spanning three different areas of statistics: shape-constrained estimation, causal discovery, and optimal transport. In the area of shape-constrained estimation, we study the estimation of matrices, first under the assumption of bounded total-variation (TV) and second under the assumption that the underlying matrix is Monge, or supermodular. While the first problem has a long history in image denoising, the latter structure has so far been mainly investigated in the context of computer science and optimization. For TV denoising, we provide fast rates that are adaptive to the underlying edge sparsity of the image, as well as generalizations to other graph structures, including higher-dimensional grid-graphs. For the estimation of Monge matrices, we give near minimax rates for their estimation, including the case where latent permutations act on the rows and columns of the matrix. In the latter case, we also give two computationally efficient and consistent estimators. Moreover, we show how to obtain estimation rates in the related problem of estimating continuous totally positive distributions in 2D. In the area of causal discovery, we investigate a linear cyclic causal model and give an estimator that is near minimax optimal for causal graphs of bounded in-degree. In the area of optimal transport, we introduce the notion of the transport rank of a coupling and provide empirical and theoretical evidence that it can be used to significantly improve rates of estimation of Wasserstein distances and optimal transport plans. Finally, we give near minimax optimal rates for the estimation of smooth optimal transport maps based on a wavelet regularization of the semi-dual objective.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 275-299).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122184</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical problems in transport and alignment</title>
<link>https://hdl.handle.net/1721.1/122183</link>
<description>Statistical problems in transport and alignment
Weed, Jonathan Daniel.
How should you analyze complicated data? Faced with scans of handwritten digits, noisy snapshots of large biomolecules, three-dimensional LIDAR data, or corrupted social networks, what practical techniques and theoretical guarantees does the statistician have at her disposal? This thesis develops new theory for statistical problems involving data with geometric structure of this kind. First, we study the Wasserstein distance, a metric on the space of probability measures on an arbitrary metric space. We prove sharp rates of convergence for empirical measures in Wasserstein distance on sufficiently regular compact metric spaces, improving on a line of work going back to Dudley (1969). We give the first nearly-optimal minimax lower bounds for the problem of estimating the Wasserstein distance between two measures, and we prove much better rates can be obtained under three different structural assumptions on the measures. These assumptions, inspired by practice and theory, reveal novel statistical features of the Wasserstein distance. Second, we consider data corrupted by group transformations. These problems are motivated by cryo-electron microscopy, an important technique in structural biology, the use of which requires reconstructing the structure of biological macromolecules on the basis of noisy, randomly rotated images. We prove the first minimax rates of estimation for a two-dimensional version of this problem. Along the way, we develop a general theory for problems of this kind, applicable to arbitrary compact groups acting on R[superscript d], and provide a novel analysis of the maximum-likelihood estimator for Gaussian mixtures with algebraic structure.
Thesis: Ph. D. in Mathematics and Statistics, Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 169-181).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122183</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Static and dynamic properties of magnetic Skyrmions in engineered multilayer films</title>
<link>https://hdl.handle.net/1721.1/122179</link>
<description>Static and dynamic properties of magnetic Skyrmions in engineered multilayer films
Lemesh, Ivan.
Magnetic textures known as skyrmions promise new breakthroughs in memory, logic, and neuromorphic applications. Skyrmions have been found in a variety of material systems, yet there existed no experimental evidence of a material that could simultaneously host them at room temperature and also allow for their reproducible current-induced nucleation and motion. One main goal of this thesis is to fill this gap and demonstrate all of these properties in the [Pt/CoFeB/MgO]₁₅ thin-film heterostructures introduced here, consisting of a perpendicularly magnetized ferromagnetic layer (M), a heavy metal (H), and a symmetry-breaking spacer layer (S). Here, I developed, fabricated, and characterized [Pt/CoFeB/MgO]₁₅ multilayers with an extremely low density of pinning centers, which enable not only fully reproducible skyrmion motion but also a clean study of the skyrmion nucleation process. By using X-ray microscopy, I imaged various magnetic textures in these multilayers and studied their current-induced generation and motion as a function of applied field and temperature. Finally, another goal of this work is to establish a direct link between the properties of these [H/M/S][subscript N]-type materials and the structure of the magnetic textures that they can host. The energetics of such systems is poorly understood due to their very complex multilayer stray fields, and until now most analyses have relied exclusively on micromagnetic simulations. Here, I develop an alternative theoretical approach by calculating all the stray field interactions analytically, which enables the prediction of the exact structure and dynamics of magnetic domain walls, domains, and skyrmions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 205-219).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122179</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Germ expansion, endoscopic transfer, and unitary periods</title>
<link>https://hdl.handle.net/1721.1/122178</link>
<description>Germ expansion, endoscopic transfer, and unitary periods
Xiao, Jingwei, Ph.D., Massachusetts Institute of Technology.
In this thesis, we study the germ expansions in the Jacquet-Rallis transfer. We prove an identity that relates certain nilpotent orbital integrals for any smooth matching in this transfer. We give two applications of this identity. For the first, we give an elementary local proof of the endoscopic fundamental lemma for unitary groups (a theorem of Laumon and Ngô). For the second, we establish a new relative trace formula comparison conjectured by Jacquet that can be used to study unitary periods.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 85-87).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122178</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of emerging optoelectronic materials : lead sulphide quantum dots and steam cracker tar</title>
<link>https://hdl.handle.net/1721.1/122177</link>
<description>Applications of emerging optoelectronic materials : lead sulphide quantum dots and steam cracker tar
Morris, Owen P.(Owen Paul)
Optoelectronics covers a wide and varied field of devices and applications, with many requiring different material properties for operation and manufacturing. In this thesis, I describe work performed over the course of my degree to improve the performance and understanding of various optoelectronic materials. In particular, I, with the help of my colleagues and collaborators, have focused on two materials with emerging applications in optoelectronic devices: lead sulphide quantum dots (PbS QDs), quantum confined materials primarily used in photovoltaics; and steam cracker tar (SCT), a petrochemical by-product that at the outset of this work was simply a waste product, with no electronic applications demonstrated. This thesis documents the progress made in five distinct projects: the development of direct nanoimprinting as a nanoscale patterning method for PbS QDs and other nanoparticles; experimental and computational work to improve our understanding of the anomalously large Stokes shift in PbS QDs; the development of a novel processing method to produce optoelectronic films from SCT and the demonstration of first applications; an in-depth study of SCT as a material for windscreen de-icing, including a demonstration prototype and a technoeconomic analysis; and further optoelectronic study of SCT with the aim of evaluating its potential for use in more complex, active optoelectronics. Finally, this thesis concludes with an outlook on the future prospects of these materials and suggestions for continuing research.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 125-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122177</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A restriction estimate in R³</title>
<link>https://hdl.handle.net/1721.1/122176</link>
<description>A restriction estimate in R³
Wang, Hong, Ph.D., Massachusetts Institute of Technology, Department of Mathematics.
In this thesis, I prove a restriction estimate for the paraboloid in R³ based on the polynomial partitioning method introduced by Larry Guth and the "two-ends argument" introduced by Wolff and Tao.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (page 87).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122176</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing the efficiency and cost of catalysts for sustainable energy applications : First-Principles Density Functional Theory studies</title>
<link>https://hdl.handle.net/1721.1/122175</link>
<description>Optimizing the efficiency and cost of catalysts for sustainable energy applications : First-Principles Density Functional Theory studies
Liu, Yusu, Ph.D., Massachusetts Institute of Technology.
In this thesis, I tackled, on two key fronts, the challenge of designing optimal catalysts for a sustainable future built on renewables. On the efficiency front, I studied electrochemical water-splitting, a reaction important for on-demand renewable energy conversion. I presented the electronic origin and feasibility of surface lattice oxygen participation during the kinetic bottleneck of the water-splitting reaction on perovskites with competing reactions, solvent effects and vacancy effects. On the cost reduction front, I provided design guidelines based on electronic structure modifications that employ core-shell nanoparticle architectures to reduce the loading of expensive noble metal catalysts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122175</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational frameworks to enable accelerated development of defect-tolerant photovoltaic materials</title>
<link>https://hdl.handle.net/1721.1/122174</link>
<description>Computational frameworks to enable accelerated development of defect-tolerant photovoltaic materials
Kurchin, Rachel Chava.
Widespread adoption of carbon-free energy technologies, including and especially photovoltaics (PV), is vital to address the issue of climate change. Absent sweeping policy action, this will only happen if these technologies become the most economically viable energy source. While PV has become cheaper in recent years, technoeconomic modeling suggests that current PV technologies cannot come down in price enough to be deployed at the scale required by future energy-mix scenarios that avert catastrophic climate change. This issue can be addressed by developing new technologies - in particular, ones that rely on materials that can be manufactured at drastically cheaper costs than present-day ones. To perform well in a PV device, silicon (the active material in approximately 90% of devices on the market today) must be free of detrimental metallic impurities at levels of parts per billion; the process to achieve this purity requires expensive equipment and large energy expenditures.; In contrast, hybrid halide perovskites (a new class of PV materials developed in the past decade) are extremely defect-tolerant. Synthesized using solution-based methods at ambient temperatures and pressures, they contain orders of magnitude more defects and yet achieve power conversion efficiencies comparable to silicon-based devices. Unfortunately, these materials suffer from lack of long-term stability as well as concerns surrounding toxicity since all high-performing variants to date contain lead. This work centers on accelerating the process of discovering other defect-tolerant materials that would share the remarkable optoelectronic properties of the perovskites without suffering these drawbacks, and focuses on two particular areas. 
The first is aimed at understanding the atomic-scale physics enabling the defect-tolerant behavior of the perovskites in order to formulate screening/design criteria for new materials.; The primary reason that perovskites perform so well even in the presence of defects is that the energy states due to the most abundant defects are all shallow in nature, i.e., close in energy to the band edges, and hence contribute very little to nonradiative recombination current losses. I identified several novel mechanisms for this behavior that have strong explanatory power for systems that have been examined in detail; this improved understanding also promises to aid in prediction of future compounds. To complement the theoretical/computational identification of new candidate materials, the second thrust of this thesis is accelerating their experimental characterization.; I have developed open-source software that enables the use of high-throughput experimental measurements (e.g., photocurrent as a function of voltage, temperature, and light intensity), in concert with device simulation run on high-performance computers and Bayesian parameter estimation, to construct probability distributions over unknown input parameters of those device simulations. This enables extraction of multiple parameters (in realistic, device-relevant contexts) from a single set of inexpensive, automatable measurements. This approach has the potential to supplant traditional direct characterization methods, which can be time-consuming and subject to confounding factors such as different sample preparation requirements. Taken together, these two primary thrusts can dramatically accelerate PV materials discovery.; First, we reduce the search space of materials by defining better selection criteria, focusing limited experimental bandwidth on only the most promising candidate compounds. 
Second, once a material has been synthesized, it can be characterized and optimized rapidly through the Bayesian inference technique. While I have focused primarily on PV materials, many aspects of this work could be applicable in a broader array of energy materials studies such as batteries or thermoelectrics. If we can speed up the process of discovering and developing new materials systems, then we can speed up the adoption of the resulting energy technologies, thereby lowering costs, reducing emissions, and improving lives.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122174</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimates for solutions to the Dysthe equation and numerical simulations of walking droplets in harmonic potentials</title>
<link>https://hdl.handle.net/1721.1/122173</link>
<description>Estimates for solutions to the Dysthe equation and numerical simulations of walking droplets in harmonic potentials
Kurianski, Kristin Marie-Dettmers.
In this thesis, we study wave-type phenomena both from a numerical point of view and a theoretical one. We first present the results of a numerical investigation of droplets walking in a harmonic potential on a vibrating fluid bath. The droplet's trajectory is described by an integro-differential equation, which is simulated numerically in various parameter regimes. We produce a regime diagram that summarizes the dependence of the walker's behavior on the system parameters for a droplet of fixed size. At relatively low vibrational forcing, a number of periodic and quasiperiodic trajectories emerge. In the limit of large vibrational forcing, the walker's trajectory becomes chaotic, but the resulting trajectories can be decomposed into portions of unstable quasiperiodic states. We then recast the integro-differential equation as a coupled system of ordinary differential equations in time. This method is used to simulate droplet lattices in various configurations and in the presence of a harmonic potential, creating structures reminiscent of Wigner molecules. The development of this approach is presented in detail along with its future applications. We then switch focus to a fluid system described by a modified nonlinear Schrödinger equation. The surface of an incompressible, inviscid, irrotational fluid of infinite depth can be described in two dimensions by the Dysthe equation. Recently, this equation has been used to model extraordinarily large waves occurring on the ocean's surface called rogue waves. In this thesis, we prove dispersive estimates and Strichartz estimates for the Dysthe equation. We then prove a Kato-type smoothing effect in which we are able to bound uniformly in space the L² norm in time of a fractional derivative of the linear solution by the L² norm in space of the initial data. This section of the thesis lays the groundwork for further developments in proving well-posedness via a contraction argument.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 119-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122173</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Restriction of scalars, the Chabauty-Coleman Method, and P¹ \ {0, 1, |}</title>
<link>https://hdl.handle.net/1721.1/122172</link>
<description>Restriction of scalars, the Chabauty-Coleman Method, and P¹ \ {0, 1, |}
Triantafillou, Nicholas George.
We extend Siksek's development of Chabauty's method for restriction of scalars of curves to give a method to compute the set of S-integral points on certain O[subscript K,S]-models C of punctured genus g curves C over a number field K. Our assumptions on C guarantee that it carries a morphism j : C --&gt; J to a commutative group scheme J over O[subscript K,S] which is analogous to the Abel-Jacobi map from a proper curve of positive genus to its Jacobian. While Chabauty's method (generally) requires that rank J(O[subscript K,S]) &lt;_ dim J[subscript K] - 1 in order to compute a finite subset of p-adic points on C containing C(O[subscript K,S]), Chabauty's method for restriction of scalars computes a subset [sigma][subscript C] of p-adic points of Res C which contains C(O[subscript K,S]). Naïvely, one might expect that [sigma][subscript C] is finite whenever the RoS inequality rank J(O[subscript K,S]) &lt;_ [K : Q](dim J[subscript K] - 1) is satisfied.; However, even if this inequality is satisfied, [sigma][subscript C] can be infinite for geometric reasons, which we call base change obstructions and full Prym obstructions. When attempting to compute the O[subscript K,S]-points of C = P¹ \ {0, 1, |}, we show that C can be replaced with a suitable descent set T of covers D, such that for each D [epsilon] T the RoS Chabauty inequality holds for D. Although we do not prove that the [sigma][subscript D] are finite, we do prove that the [sigma][subscript D] are not forced to be infinite for any of the known geometric reasons. In other words, there are no base change or full Prym obstructions to RoS Chabauty for D. We also give several examples of the method. For instance, when both 3 splits completely in K and [K : Q] is prime to 3, we show that (P¹ \ {0, 1, |})(O[subscript K]) = [phi].
We also give new proofs that (P¹ \ {0, 1, |})(O[subscript K]) is finite for several classes of number fields K of low degree.; These results represent the first infinite class of cases where Chabauty's method for restrictions of scalars is proved to succeed where the classical Chabauty's method does not.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; In title on title page, "|" appears as "infinity." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-119).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122172</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The geometry and dynamics of twisted quadratic differentials</title>
<link>https://hdl.handle.net/1721.1/122171</link>
<description>The geometry and dynamics of twisted quadratic differentials
Wang, Jane, Ph.D., Massachusetts Institute of Technology.
This thesis examines twisted quadratic differentials, also known as dilation surfaces. These are variants of translation surfaces, their more well-studied counterpart. In this work, we study questions about the realizability of mapping class group elements in the affine automorphism groups of dilation surfaces, and how large affine automorphism groups can be. We demonstrate how to construct dilation surfaces with a given pseudo-Anosov map in their affine automorphism group, show the existence of exotic Dehn twists, and construct dilation surfaces with simultaneous Dehn multitwists. The last construction also gives rise to some large affine automorphism groups.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-92).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122171</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algebraic geometry and representation theory in the Verlinde category</title>
<link>https://hdl.handle.net/1721.1/122170</link>
<description>Algebraic geometry and representation theory in the Verlinde category
Venkatesh, Siddharth(Siddharth Narayan)
This thesis studies algebraic geometry and the representation theory of group schemes in the setting of symmetric tensor categories over algebraically closed fields of positive characteristic. A specific focus is paid to the Verlinde category, a symmetric fusion category in characteristic p that serves as a universal base for all such categories. Symmetric tensor categories provide a natural setting in which it makes sense to discuss the notion of a commutative, associative unital algebra. In the first third of the thesis, we prove some fundamental facts about these algebras, showing that, in the Verlinde category and any category built out of it, finitely generated algebras are Noetherian, have finitely generated invariants and are finite as a module over their invariants. Subsequently, we use this result to extend some fundamental properties of commutative algebras from the original setting of vector spaces to the more general setting of symmetric tensor categories.; The middle portion of the thesis focuses on some other applications of commutative algebra in the Verlinde category to elaborate on some work of Ostrik and to obtain important combinatorial decomposition formulas. This is a presentation of joint work by the author with Etingof and Ostrik. Symmetric tensor categories also provide a natural setting in which to discuss affine group schemes and their representations, namely the commutative Hopf algebras and their comodules. The last part of this thesis focuses on the structure and representation theory of affine group schemes of finite type in the Verlinde category. 
Using some of the basic commutative algebra properties proved in the thesis, we extend some results of Masuoka from the setting of super vector spaces in positive characteristic to that of the Verlinde category.; In particular, we define the notion of a Harish-Chandra pair and show that the category of affine group schemes and their representations in the Verlinde category is equivalent to the category of Harish-Chandra pairs and their representations in the Verlinde category.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-158).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122170</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The arrival time for mean curvature flow on a convex domain</title>
<link>https://hdl.handle.net/1721.1/122169</link>
<description>The arrival time for mean curvature flow on a convex domain
Strehlke, Nicholas(Nicholas Brian)
We give asymptotics for the level set equation for mean curvature flow on a convex domain near the point where it attains a maximum. It was shown by Natasa Sesum that solutions are not necessarily C³, and we recover this result and construct non-smooth solutions which are C³. We also construct solutions having prescribed behavior near the maximum. We do this by analyzing the asymptotics for rescaled mean curvature flow converging to a stationary sphere.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 65-67).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122169</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some results in the arithmetic and geometry of curves</title>
<link>https://hdl.handle.net/1721.1/122168</link>
<description>Some results in the arithmetic and geometry of curves
Vogt, Isabel Marley.
This thesis consists of four parts, roughly progressing from results in algebraic geometry to results in number theory. The unifying figure throughout this thesis is an algebraic curve: we begin with curves over the complex numbers and their realizations in projective space, move then to curves over number fields, and finally end with arithmetic questions concerning Galois properties of torsion points on elliptic curves. The first part generalizes the familiar fact that there always exists a line through 2 points in projective space, but not through 3 points in general position. We consider the more general fundamental incidence question: when does there exist a degree d, genus g curve through n general points in P[superscript r]? We work in the case when the curve is of general moduli, and hence by the Brill-Noether theorem [rho](d, g, r) = g - (r + 1)(g - d + r) &gt;_ 0. In this range, a dimension count predicts the maximal possible n.; Using deformation theory to translate this problem into a stability-like condition on the normal bundle of a general such curve, when r = 3 and 4 we prove that this naive dimension count is correct in all but two cases. The work in P⁴ is joint with Eric Larson. In the second part, we investigate an arithmetic analogue of the gonality of a smooth projective curve C over a number field k: the minimal e such that there are infinitely many points of degree bounded by e. We call this invariant the arithmetic degree of irrationality of the curve C over k. Such an integer is always bounded by the gonality of the curve, since the preimage of the infinite set of rational points on P¹ lies in the set of points of residue degree at most the gonality.
By work of Faltings [Fal94], Harris-Silverman [HS91] and Abramovich-Harris [AH91], it is well-understood when this invariant is 1, 2, or 3; by work of Debarre-Fahlaoui [DF93] these criteria do not generalize to e at least 4.; In this chapter, we develop techniques to compute this invariant that make use of an auxiliary smooth surface containing the curve. Using this idea, we show that this invariant can take any value subject to constraints imposed by the gonality. We then use these techniques to generalize work of Debarre-Klassen [DK94] and show that this invariant is equal to the gonality for all sufficiently positive curves on a surface S with trivial irregularity (i.e., discrete Picard group). This chapter is joint work with Geoffrey Smith.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 209-212).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122168</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distinguishing open symplectic mapping tori via their wrapped Fukaya categories</title>
<link>https://hdl.handle.net/1721.1/122167</link>
<description>Distinguishing open symplectic mapping tori via their wrapped Fukaya categories
Kartal, Yusuf Barış.
The main goal of this thesis is to use homological methods as a step towards the classification of symplectic mapping tori. More precisely, we exploit the dynamics of wrapped Fukaya categories to distinguish an open version of the symplectic mapping torus associated to a symplectomorphism from the mapping torus of the identity. As an application, we obtain pairs of diffeomorphic Weinstein domains with the same contact boundary and symplectic cohomology, but that are different as Liouville domains. This work consists of two parts: in the first part, we define an algebraic model for the wrapped Fukaya category of the open symplectic mapping tori. This construction produces a category, called the mapping torus category, for a given dg-category over C with an autoequivalence. We then use the continuous dynamics of deformations of these categories to distinguish them under certain hypotheses. More precisely, we construct families of bimodules, analogous to flow lines, and use their different periodicity. The construction of the flow uses the geometry of the Tate curve and formal models for the graph of multiplication on G[superscript an][subscript m,C((q))]. The second part focuses on the comparison of mapping torus categories and the wrapped Fukaya categories of the open symplectic mapping tori. For this goal, we introduce the notion of "twisted tensor product" and prove a twisted Künneth theorem for the open symplectic mapping tori by using a count of quilted strips. In this part, we also give a large class of Weinstein domains whose wrapped Fukaya category satisfies the conditions for the theorem on mapping torus categories to hold.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 217-223).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122167</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semi-algebraic graphs and hypergraphs in incidence geometry</title>
<link>https://hdl.handle.net/1721.1/122166</link>
<description>Semi-algebraic graphs and hypergraphs in incidence geometry
Do, Thao Thi Thu.
A (hyper)graph is semi-algebraic if its vertices are points in some Euclidean spaces and the (hyper)edge relation is defined by a finite set of polynomial inequalities. Semi-algebraic (hyper)graphs have been studied extensively in recent years, and many classical results in (hyper)graph theory such as Ramsey's theorem and Szemerédi's regularity lemma can be significantly improved in the semi-algebraic setting. In this dissertation, we discuss three problems in incidence geometry where the bounds for semi-algebraic (hyper)graphs are generally better than the ones for arbitrary (hyper)graphs: (1) what is the maximum number of hyperedges in a hypergraph forbidding some pattern? (2) what is the most compact way to decompose a graph by complete bipartite subgraphs? and (3) what is the maximum number of edges in a graph where no two neighbor sets have a large intersection? As most graphs and hypergraphs arising from problems in discrete geometry are semi-algebraic, our results have applications to discrete geometry. The main tools used in our proofs include some version of polynomial partitioning, a Milnor-Thom-type result from topology and a packing-type result in set system theory.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 63-68).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122166</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Getting a handle on contact manifolds</title>
<link>https://hdl.handle.net/1721.1/122165</link>
<description>Getting a handle on contact manifolds
Sackel, Kevin (Kevin Ryan)
In this thesis, we develop the details of a surgery theory for contact manifolds of arbitrary dimension via convex structures, extending the 3-dimensional theory developed by Giroux. The theory is analogous to that of Weinstein manifolds in symplectic geometry, with the key difference that the vector field does not necessarily have positive divergence everywhere. The surgery theory for contact manifolds contains the surgery theory for Weinstein manifolds via a sutured model for attaching critical points of low index. Using this sutured model, we show that the existence of convex structures on closed contact manifolds is guaranteed, a result equivalent to the existence of supporting Weinstein open book decompositions. In the final chapter, we provide a few words about how this theory is related to the Giroux correspondence between Weinstein open book decompositions and contact structures in three dimensions, as well as providing a framework for possible generalizations to higher dimensions and homotopy data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-143).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122165</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Classical simulation complexity of restricted models of quantum computation</title>
<link>https://hdl.handle.net/1721.1/122164</link>
<description>Classical simulation complexity of restricted models of quantum computation
Koh, Dax Enshan.
Restricted models of quantum computation are mathematical models which describe quantum computers that have limited access to certain resources. Well-known examples of such models include the boson sampling model, extended Clifford circuits, and instantaneous quantum polynomial-time circuits. While unlikely to be universal for quantum computation, several of these models appear to be able to outperform classical computers at certain computational tasks, such as sampling from certain probability distributions. Understanding which of these models are capable of performing such tasks, and characterizing the classical simulation complexity of these models--i.e., how hard it is to simulate them on a classical computer--are some of the central questions we address in this thesis. Our first contribution is a classification of various extended Clifford circuits according to their classical simulation complexity. Among these circuits are the conjugated Clifford circuits, which we prove cannot be efficiently classically simulated up to multiplicative or additive error, under certain plausible conjectures in computational complexity theory. Our second contribution is an estimate of the number of qubits needed in various restricted quantum computation models in order for them to demonstrate quantum computational supremacy. Our estimate is obtained by fine-graining existing hardness results for these restricted models. Our third contribution is a new alternative proof of the Gottesman-Knill theorem, which states that Clifford circuits can be efficiently simulated by a classical computer. Our proof uses the sum-over-paths technique and establishes a correspondence between quantum circuits and a class of exponential sums.
Our final contribution is a theorem characterizing the operations that can be efficiently simulated using a particular rebit simulator. An application of this result is a generalization of the Gottesman-Knill theorem that allows for the efficient classical simulation of certain nonlinear operations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 355-372).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122164</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New progress towards three open conjectures in geometric analysis</title>
<link>https://hdl.handle.net/1721.1/122163</link>
<description>New progress towards three open conjectures in geometric analysis
Gallagher, Paul, Ph.D. Massachusetts Institute of Technology.
This thesis, like all of Gaul, is divided into three parts. In Chapter One, I study minimal surfaces in R⁴ with quadratic area growth. I give the first partial result towards a conjecture of Meeks and Wolf on the asymptotic behavior of such surfaces at infinity. In particular, I prove that under mild conditions, these surfaces must have unique tangent cones at infinity. In Chapter Two, I give new results towards a conjecture of Schoen on minimal hypersurfaces in R⁴. I prove that if a stable minimal hypersurface E with weight given by its Jacobi field has a stable minimal weighted subsurface, then E must be a hyperplane inside of R⁴. Finally, in Chapter Three, I give an in-depth analysis of the nodal set results of Logunov-Malinnikova. I give explicit bounds for the eigenvalue exponent in terms of dimension, and make a slight improvement on their methodology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 68-70).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122163</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical engineering considerations for gas evolution in molten sulfide electrolytes</title>
<link>https://hdl.handle.net/1721.1/122158</link>
<description>Electrochemical engineering considerations for gas evolution in molten sulfide electrolytes
Chmielowiec, Brian John.
The current interrupt and galvanostatic electrochemical impedance spectroscopy techniques were utilized to characterize the ohmic, charge transfer, and mass transfer overpotential behavior of gas-evolving electrodes in aqueous, molten chloride, and molten sulfide electrolyte solutions under steady-state natural convective flow conditions, as a means to gain access to thermodynamic, physicochemical, and hydrodynamic properties of these systems. Previous efforts purposely chose operating conditions under which one or more sources of overpotential were negligible to facilitate analysis of the total overpotential observed, at the expense of maintaining operating conditions of industrial relevance. This work represents a preliminary effort to understand the fundamental material properties of a molten sulfide electrolyte by application of materials-blind electrochemical techniques that were validated on previously well-characterized systems: oxygen evolution in aqueous KOH and chlorine evolution in eutectic LiCl-KCl-CsCl. For the first time, values are reported for the saturation concentration of dissolved sulfur gas, an approximate range of the Schmidt number for dissolved sulfur, and natural convection limiting current densities in a molten sulfide electrolyte consisting of Cu₂S-BaS-La₂S₃ at 1300°C.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122158</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials design and discovery of catalysts for small molecule conversion</title>
<link>https://hdl.handle.net/1721.1/122157</link>
<description>Materials design and discovery of catalysts for small molecule conversion
Hwang, Jonathan, Ph.D. Massachusetts Institute of Technology.
(Electro)catalysis is essential for addressing the most pressing societal and environmental challenges of this century, ranging from fossil fuel emission reduction to the production of sustainable fuels and chemicals. Among the technologies driven by (electro)catalysis are toxic gas abatement processes, greenhouse gas storage and utilization, and electrochemical energy storage and conversion devices like fuel cells, electrolyzers, and metal-air batteries. However, the field of (electro)catalysis has historically been hindered by a lack of guiding principles for materials selection and design, especially for increasingly complex materials systems such as oxides. This has stemmed from a lack of systematic studies of surface reactivity at thermodynamic conditions relevant to the reaction of interest, and from a limited understanding of the chemical origins of such reactivity trends. Thus, understanding the relationship between bulk chemistry and surface reactivity can be a powerful tool to guide materials design for a wide range of (electro)catalytic processes. To drive forward the deployment of (electro)catalytic technologies, this work first describes the need for mechanistically driven and intuitive materials design principles. To develop this understanding, we then describe investigations on perovskite oxide model systems centered on oxygen-based, nitrogen-based, and carbon-based surface reactions relating to oxygen evolution reaction (OER) structural stability for electrolysis, the NOx reaction network for toxic gas abatement, and CO₂ gas reactivity for gas storage and conversion applications, respectively. Next, we translate insights regarding bulk-surface relationships to the electrochemical CO₂ reduction reaction, a reaction with a high degree of complexity in kinetics and selectivity, using the perovskite oxide materials family as a model system.
We demonstrate that bulk metal-oxygen covalency, quantified by parameters such as the O 2p-band center distance from the Fermi level, can rationalize activity trends for the production of CH₄ and H₂, where an intermediate covalency maximizes the former and a high covalency maximizes the latter. Lastly, we investigate the role of the surface coordination of Cu-based catalysts in CO₂ reduction selectivity toward high-energy-density chemicals and fuels, highlighting the importance of a careful understanding of surface reactivity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122157</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Templated self-assembly of novel block copolymers</title>
<link>https://hdl.handle.net/1721.1/122156</link>
<description>Templated self-assembly of novel block copolymers
Cheng, Li-Chen, Ph.D. Massachusetts Institute of Technology.
Self-assembly of block copolymers (BCPs) is emerging as a promising route to fabricate a variety of nanoscopic structures for numerous technological applications. The resulting feature sizes range from a few to several hundred nanometers and are readily tunable by varying the molecular weights of the block copolymers. Directed self-assembly of block copolymers is an effective way to pattern periodic arrays of features with long-range order, to generate complex patterns, and to multiplicatively increase the pattern density and resolution far beyond the limit of conventional lithography. Despite the significant progress in the area of directed self-assembly in recent years, critical research problems remain challenging: dimension scalability toward the sub-10-nm regime and toward large feature sizes on the scale of hundreds of nanometers, as well as the capability of generating complex device-oriented patterns. In this thesis, BCP systems, including high-χ BCPs that are capable of self-assembling into extremely small and large feature sizes as well as those with more complex block architectures, are identified and studied in order to understand how those materials may be processed and directed to self-assemble to bridge the patterning size spectrum between nano- and micro-fabrication. Another focus is placed on the scientific exploration of directed self-assembly of triblock terpolymers and the investigation of the mechanisms that regulate the scaling and geometry of self-assembled patterns. A comprehensive understanding of the self-assembly of BCP thin films will enable developing device-oriented geometries, manipulating BCP phase behavior, and incorporating new functional materials for a wider range of applications. Meanwhile, optimizing the processing conditions for self-assembly of various BCPs is essential to confirm the viability of the directed self-assembly of block copolymers process in manufacturing.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122156</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Growth and characterizations of two-dimensional metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/122155</link>
<description>Growth and characterizations of two-dimensional metal-organic frameworks
Ha, Dong-Gwang.
Metal-Organic Frameworks (MOFs) are a class of porous materials with a crystalline structure that can be designed from highly tunable building blocks of organic molecules and metal ions. They are typically insulators, but making them π-conjugated with a two-dimensional structure results in high electrical conductivity. This makes two-dimensional π-conjugated MOFs (2D π-MOFs) good candidates for applications that need porous conductors, such as supercapacitors and batteries. More importantly, the tunability of the crystal structure enables us to explore exotic physical properties, including topological protection. This great potential has inspired the synthesis of various 2D π-MOFs, but their crystal growth remains challenging, preventing the characterization of intrinsic electrical properties. In this thesis, I will explain the growth mechanisms of 2D π-MOFs and the limitations of conventional growth methods. Based on this analysis, I developed a novel growth method that generates single-crystal plates of a 2D π-MOF, Ni₃(HHTP)₂ (HHTP = 2,3,6,7,10,11-hexahydroxytriphenylene), over 10 μm in lateral dimension, two orders of magnitude larger than previous reports. The growth mechanism of the new method is also studied by varying multiple growth parameters. The properties of the single crystals are characterized by various spectroscopic techniques; among assorted characteristics, the electrical properties are explored closely. The large single-crystal plates enable us to study the in-plane properties of a 2D π-MOF for the first time. The in-plane conductivity of Ni₃(HHTP)₂ is up to 2 S/cm, two orders of magnitude higher than that of pressed pellets, and shows a clear temperature dependence.
Hall measurements reveal that the origin of the high conductivity is a high charge carrier density rather than high charge carrier mobility. We anticipate our demonstration will facilitate the discovery of fundamental properties of various 2D π-MOFs and further the realization of their potential as electronic materials.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-132).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122155</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel applications of ionic liquids in molecular crystallization</title>
<link>https://hdl.handle.net/1721.1/122154</link>
<description>Novel applications of ionic Liquids in molecular crystallization
Zeng, Qingying, Ph.D. Massachusetts Institute of Technology.
In recent years, pharmaceutical companies have made efforts to study the properties of other potential solvents in active pharmaceutical ingredient (API) synthesis and manufacturing, in order to seek substitutes for toxic and volatile organic solvents. Ionic liquids (ILs), which are defined as salts with melting points below 100 °C, have demonstrated great potential in this regard due to their desirable physicochemical properties: environmental friendliness, low or negligible volatility, high solvency, wide liquidus range, and great tunability. However, significant uncertainties remain regarding the use of ionic liquids as solvents in the pharmaceutical industry, and relevant research studies are lacking. This thesis aims to look at solvent-solute interactions at a micro level and devise double salt ionic liquid (DSIL) systems with optimal selection and finely tuned ratios of cations and anions in order to understand, control, and improve crystallization performance. The thesis first explores the potential of DSILs in the control of crystal growth kinetics. The tailoring helps tune both the solubility and the crystal growth kinetics. It is shown that with the increase of basicity in the solvent system, solubility increases linearly while the crystal growth pre-factor drops. The competition between solvent ions and solute attaching onto the surface of crystal seeds explains the decreasing trend of growth rate as the solvent basicity ramps up. This thesis further examines the use of ILs and DSILs in polymorph screening.
The mixing and tuning of ILs with various polarity, acidity, and hydrogen bond accepting (HBA) and hydrogen bond donating (HBD) capacities helps to control the polymorphic and conformational polymorphic outcomes of the selected systems, producing forms that were previously discovered by serendipity and never successfully reproduced, as well as a new IL-API salt. The combination of spectroscopic evidence and experimental outcomes elucidates how the tuning of the DSIL system helps control the polymorphic outcome of model systems. Last but not least, the thesis proposes and demonstrates the creation of an IL-water micellar system for the control of nucleation rate. Crystal habit and preferred orientation studies have proved the similarity of crystals harvested from micellar and non-micellar solutions, thus showing that a templating effect on the surface of micelles is not likely to exist. Instead, it is more likely that micelles form blockages and slow down the aggregation and nucleation of the model compound.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122154</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of diffusion impedance in nanostructured electrochemical systems</title>
<link>https://hdl.handle.net/1721.1/122153</link>
<description>Theory of diffusion impedance in nanostructured electrochemical systems
Song, Juhyun.
Electrochemical energy systems, including batteries, capacitors, fuel cells, and their hybrids, are under the spotlight with ever-increasing interest in their applications to portable electronics, electric vehicles, and stationary energy storage systems. In pursuit of higher energy and power density, as well as longer lifetime, modern electrochemical energy systems commonly employ nanostructured materials: battery electrodes consist of nanoparticles; capacitor electrodes are full of nanopores; and in fuel cells and flow batteries, electrolytes flow through nanoporous electrodes. These nanostructures determine important parameters affecting the diffusion impedance, such as the diffusion geometries and lengths as well as their distributions. Nonetheless, the configuration of the nanostructures is largely overlooked in interpreting impedance spectra. In this thesis, we investigate the diffusion impedance of nanostructured electrochemical systems by developing theoretical models and validating them with experimental impedance spectra. We begin by studying the diffusion impedance of nanoparticle batteries, and extend the approach to other electrochemical energy systems, including capacitors, fuel cells, and flow batteries. In nanoparticle batteries, the short diffusion lengths of the nanoparticles render the capacitive transition of the bounded diffusion impedance, which is rarely observable with conventional larger particles. The transition can be properly interpreted by considering the configuration of the nanoparticles, and the result gives a more precise measurement of solid-state diffusion than the traditional approach of fitting the Warburg element. A theoretical model incorporating the nanoparticle configuration is developed. Then, by using the model, we investigate the effects of nanoparticle geometry and size distribution on the diffusion impedance.
The model is also applied to experimental spectra of a silicon nanowire electrode, and we show that it is essential to take into account the configurational aspects of nanoparticles to correctly interpret the diffusion impedance of battery electrodes. Conventionally, the diffusion impedance of battery electrodes is interpreted assuming isotropic properties of the active particles. While this is a reasonable approximation for amorphous or polycrystalline materials with randomly oriented grains, modern battery active materials increasingly consist of highly anisotropic, single-crystalline nanoparticles, for which assuming isotropic properties is no longer valid. Motivated by this trend, we present a general theory of diffusion impedance that includes the anisotropic properties of battery active materials, employing tensorial diffusivities and orientation-dependent surface properties. Analytical expressions are derived for a single anisotropic particle by finite Fourier transformation. This is then incorporated into the overall electrode impedance by considering the joint distribution of particle lengths in different crystallographic directions. The resulting impedance shows clear signatures of the anisotropic material properties and their statistical variations. In general, nanostructures in electrochemical energy systems have inherent randomness, such as the particle size distribution in batteries, pore size distribution in capacitors, tortuosity distribution in membranes and porous electrodes, and inhomogeneous boundary layer thickness in flow batteries. Such configurational randomness commonly introduces a distribution of diffusion times; that is, the diffusion time constant is not set to a single value but rather spans a range. Therefore, not only the nominal properties of the nanostructures, but also their distributions play an important role in diffusion impedance.
The finite Warburg and Gerischer impedance models are generalized by incorporating the distribution of diffusion times. Inversion of the model renders accurate image-equivalent information about the nanostructures, suggesting a nondestructive, global diagnostic method, "impedance imaging", by simple impedance measurement. An inversion method based on Tikhonov regularization is presented and demonstrated by applying it to experimental spectra of a capacitor, a battery, and a flow battery. This approach can be generally applied to all electrochemical systems that employ nanostructures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis. "June 2019."; Includes bibliographical references (pages 151-156).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122153</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems engineering for biomanufacturing</title>
<link>https://hdl.handle.net/1721.1/122152</link>
<description>Systems engineering for biomanufacturing
Lu, Amos Enshen.
Biologics are an important class of drugs that have seen rapid growth in recent years. However, complexities in production and characterization have made large-scale centralized production and cold chain distribution the primary logistical paradigm. The large upfront costs limit the ability to address the small patient populations of precision medicine and orphan drugs. The cold chain requirements also limit therapeutic potential in the developing world and in crisis scenarios in the developed world, and require stockpiling for pandemic response. To address these currently unmet needs, this thesis develops the Integrated Scalable Cyto-Technology (InSCyT), a fully automated and integrated biomanufacturing platform. It comprises a continuous perfusion bioreactor cultivating the host Pichia pastoris, a continuous pH adjustment unit, three chromatography columns, and a tangential flow ultrafiltration unit. It enables hands-free production of hundreds to thousands of doses of clinical-quality biologics in final dosage form in about three days. We demonstrate the production of human growth hormone, interferon α-2b, and granulocyte colony-stimulating factor and show purity and potency comparable to currently marketed products. The thesis then addresses systems engineering problems within InSCyT. On-demand buffer production requires fast and accurate control of both conductivity and pH. We model a buffer production unit and improve pH control performance through the use of reaction-invariant model-based nonlinear control and maximum a posteriori adaptation techniques to address system nonlinearity and parametric model uncertainty, respectively. We validate the in silico results with experimental testing in a single-use disposable prototype.
We also model the genomic stability of Pichia pastoris through copy number variability. This framework allows for the distillation of existing literature data into a single strain- and product-specific rate constant controlling copy loss. These models then allow us to evaluate antibiotic selection and continuous seeding as methods to ensure consistent productivity and quality over extended production periods. Lastly, we develop and experimentally demonstrate an in-reactor hollow fiber cell separator for perfusion operation in single-use disposable reactors. Improvements to the design are suggested through the use of computational fluid dynamics (CFD) simulations coupled with a fouling model for geometry optimization.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-177).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122152</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel continuous crystallization strategies for purification of active pharmaceutical ingredients</title>
<link>https://hdl.handle.net/1721.1/122151</link>
<description>Novel continuous crystallization strategies for purification of active pharmaceutical ingredients
Vartak, Shankul (Shankul Shisheer)
The pharmaceutical sector is making a paradigm shift towards continuous manufacturing in order to improve throughput, flexibly produce tailored active pharmaceutical ingredients (APIs) in smaller amounts, utilize hitherto untested chemistry, and exploit other advantages of continuous processing over its batch counterpart. Continuous crystallization is set to play a key role in the years to come, since almost 90% of APIs are crystalline. This thesis presents three distinct projects with the underlying theme of industry-relevant novel continuous crystallization strategies for APIs. The first project, titled "Continuous Crystallization with Nanofiltration and Impurity Complexation", establishes a methodology for simultaneously improving the yield and crystal purity for difficult-to-purify API systems in a continuous mode. This presents a better alternative to a multistep batch recrystallization process for systems where the impurity has a strong tendency for API lattice incorporation owing to similarities in structure and molecular weight. The novel process involves the addition of a complexing agent to the feed prior to crystallization in order to selectively complex the impurity and increase its apparent molecular dimensions. This sterically prevents the impurity from incorporating into the API lattice, thereby providing high-purity crystals. The increase in dimensions is further exploited using a nanofiltration membrane to purify the post-crystallization mother liquor prior to recycle. The membrane-coupled continuous mode with recycle and complexation can be tuned to provide better performance than a comparable batch or unrecycled continuous process. The second project, on "Rapid Crystallization Process Development", endeavors to minimize the number of experiments needed to identify the best combination of temperature and solvent composition to produce crystals that can meet product specifications.
This high-throughput strategy allows for a reduction in both material consumption and screening time, and lends itself well to automation. The methodology has been applied to develop an end-to-end purification process for crude APIs, from solvent selection all the way to designing a two-stage MSMPR. In the third project, "Crystallization Mediated by Functionalized Nanoporous Silica", crystallization is induced from undersaturated solution by the introduction of a functionalized nanoporous silica matrix. The matrix has been demonstrated to act as a source of bound antisolvent groups and to reduce the API solubility within the small pore volume, resulting in the formation of nanocrystals within the pores. The solubility reduction has been shown to stem from nanoconfinement effects, which increase the effective internal concentration of the API within the nanopores above its bulk solubility, and from template effects of the functionalization, which promote nucleation. This presents an energy-efficient strategy, since crystallization can be induced from previously undersaturated solutions without the need for concentration, and the loaded silica matrix can be readily filtered and then treated with pure solvent to recover the API. The strategy can be applied for crude purification or solvent switching.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122151</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical failure of lithium-ion batteries</title>
<link>https://hdl.handle.net/1721.1/122143</link>
<description>Mechanical failure of lithium-ion batteries
Zhu, Juner.
The commercialization of lithium-ion batteries has accelerated the electrification process of vehicles. In the past decade, one could see great advances in the life span, cost, performance, specific energy, and specific power of batteries. At the same time, the safety of batteries has not been adequately addressed by most stakeholders in the Electric Vehicle market. The present thesis systematically investigates the deformation mechanisms of the multi-layered structure of lithium-ion battery cells subjected to various loading conditions with particular emphasis on predicting the onset of the electrical short circuit. It starts with a comprehensive testing and modeling study of all the components of the cell, including the current collectors, the separator, the pouch/shell casing, and particularly, the coatings of electrodes.; A detailed computational model for quasi-static loading is subsequently established in Abaqus/explicit, which is very effective to predict the load-displacement response, peak load, displacement to fracture and short circuit, as well as the shear fracture phenomenon. The computational model is then extended to cover the effect of strain rate dependence by introducing the poro-mechanical theory. Darcy's law is used to describe the flow of the electrolyte inside the granular structure of the coating, and the Kozeny-Carman equation is adapted to calculate the permeability of the porous media of the battery cell. The model is shown to accurately predict the strengthening effect of the battery cell under low-speed dynamic loading, observed in experiments. The effect of mechanical deformations of a battery cell on its electrochemical performance is investigated next through a series of control tests on the coin-cell type batteries made of deformed electrodes.; The batteries are tested with ten cycles of charge-discharge, and a clear capacity fade in the damaged cells compared with the undamaged ones is observed. 
Electrochemical impedance spectroscopy tests are then performed, and a possible mechanism for the capacity fade is proposed. In the last part of the thesis, two applications of the developed computational modeling strategy are presented: the axial deformation of 18650 cylindrical cells, and the protective structural design of an EV battery pack subjected to a "ground impact".
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 223-244).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122143</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetically levitated hysteresis motor driven linear stage for in-vacuum transportation tasks</title>
<link>https://hdl.handle.net/1721.1/122142</link>
<description>Magnetically levitated hysteresis motor driven linear stage for in-vacuum transportation tasks
Zhou, Lei, Ph.D., Massachusetts Institute of Technology.
This thesis presents a new in-vacuum reticle transportation mechanism for extreme ultraviolet (EUV) photolithography machines. In the photolithography process, the reticle is a quartz plate that carries the pattern of the integrated circuit and must be transported between a storage position and the exposure stage. In next-generation EUV lithography machines, the reticle handling system must satisfy the following requirements: (1) transport the reticle over a distance of 2 meters, (2) fit within a mechanism height of 100 mm, (3) operate in vacuum, and (4) satisfy ultra-tight contamination requirements. A conventional robotic reticle handler cannot fulfill these requirements. In this work, we designed, built, and tested a magnetically levitated linear stage prototype targeting the reticle transportation application. Compared with robot manipulators, linear stages typically require less volume for long-distance transportation tasks. Magnetic suspension is used to eliminate mechanical contact and thereby avoid particle generation that can contaminate the reticle. The stage's linear motion is driven by linear hysteresis motors, which allow the use of solid-steel motor secondaries on the moving stage. This is desirable for in-vacuum operation, since permanent magnets can outgas in high vacuum when not encapsulated. The magnetic suspension of the stage is achieved using a novel linear bearingless slice motor design, in which the stage's suspension in three degrees of freedom (vertical, pitch, and roll) is achieved passively. This compact design effectively reduces the number of sensors and actuators required. The prototype system has successfully levitated the moving stage.
The resonance frequency of the passively levitated degrees of freedom is approximately 10 Hz, and the suspension bandwidth of the actively controlled degrees of freedom is about 60 Hz. The stage's maximum thrust force is 5.8 N under a 2.5 A current amplitude, which corresponds to a stage acceleration of 1200 m/s². This satisfies the acceleration requirement for the reticle transportation task. The stage was tested tracking a reticle handling reference trajectory, with a maximum position tracking error of 50 μm. The stage's lateral displacement during motion is below 50 μm, well short of making mechanical contact with the side walls. To our knowledge, this work represents the first study of linear hysteresis motors and the first linear bearingless slice motor design. Hysteresis motors are a type of electric machine that operates using the magnetic hysteresis effect of the secondary material: since the magnetization in the rotor lags behind the external field, a thrust force/torque can be generated. In prior usage, hysteresis motors have been operated in open loop, which makes them unsuitable for applications where dynamic performance is critical. As part of this thesis work, we also studied the modeling and closed-loop torque and position control of hysteresis motors. The proposed control method was tested on three rotary hysteresis motors, including two custom-made motors with different rotor materials and one off-the-shelf hysteresis motor. Experimental results show that position control for all three motors can reach a bandwidth of 130 Hz. To the best of our knowledge, this is the first work to enable high-bandwidth torque and position control for hysteresis motors, allowing this motor type to be used in servo applications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 241-246).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122142</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the active sites and reaction mechanism of water oxidation on metal oxides</title>
<link>https://hdl.handle.net/1721.1/122141</link>
<description>Understanding the active sites and reaction mechanism of water oxidation on metal oxides
Rao, Reshma R.
Solar energy irradiating the Earth's surface exceeds human energy consumption by four orders of magnitude, and the key to alleviating the global energy crisis lies in harnessing it efficiently. An ideal means of storing surplus solar energy is to convert it to hydrogen using proton exchange membrane water electrolyzers, which are amenable to integration with solar devices due to their high performance under fluctuating power input. Water oxidation to molecular oxygen is the most energy-intensive part of the water splitting process, limiting the overall efficiency of water splitting devices. Rutile ruthenium dioxide (RuO₂) is the gold-standard catalyst for water oxidation in acidic solutions. It can also undergo fast surface redox reactions within the electrochemically stable potential window of water, making it an ideal material for electrochemical capacitors, which can charge and discharge on much shorter time scales than batteries. Understanding the interaction of RuO₂ with water can provide critical insights into the physical origin of its fascinating electrochemical properties and the active site(s) for water oxidation. Herein, we use ambient-pressure X-ray photoelectron spectroscopy, in situ surface diffraction, surface-enhanced infrared spectroscopy, electrochemical mass spectrometry, and ab initio density functional theory calculations on well-defined RuO₂ surfaces to understand the mechanism and kinetics of the water oxidation reaction. We elucidate how different surface terminations can alter the binding energetics of oxygenated intermediates by changing the local environment of surface ruthenium and oxygen atoms. Going beyond the conventional approach of changing the surface chemistry to tune the energetics of active sites, we also consider how changing the nature of the electrolyte (pH, cations in the supporting electrolyte) can modify the interfacial dynamics and increase electrocatalytic activity.
Finally, we consider the use of Li-rich layered ruthenium oxides as a means to access bulk ruthenium redox for electrocatalytic reactions. Thus, through the use of surface-sensitive in operando techniques, this thesis identifies the active sites and reaction mechanism for oxygen electrocatalysis and demonstrates how catalyst surface structure and interfacial water structure can be altered to improve kinetics for next-generation water oxidation catalysts.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 184-196).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122141</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-economic analysis towards Terawatt-scale photovoltaics</title>
<link>https://hdl.handle.net/1721.1/122140</link>
<description>Techno-economic analysis towards Terawatt-scale photovoltaics
Sofia, Sarah E. (Sarah Elizabeth)
There is an urgent need for rapid deployment of renewable-energy technologies to reduce anthropogenic carbon emissions and mitigate global climate change. Solar photovoltaics are essential for near-term decarbonization, but to achieve the terawatt-scale deployment required, the technical and economic limitations of both photovoltaic (PV) modules and systems must be addressed. Further reductions in solar panel manufacturing cost and manufacturing scale-up are needed to increase the rate of adoption, as are storage solutions to address the intermittency of solar at high adoption levels. This work uses techno-economic analysis to examine the potential of tandem solar cell architectures to reduce the cost of solar, and what is required for solar-plus-storage systems to meet competitive levelized costs of electricity (LCOE) at high penetration. Tandems comprising one or more industrially mature PV materials, including cadmium telluride (CdTe), copper indium gallium selenide (CIGS), and perovskite-silicon tandems, are explored using LCOE as a figure of merit, with detailed energy yield analysis paired with manufacturing cost modeling. Through this, the optimal tandem architectures for different locations and applications are found. Significant potential for four-terminal tandems to be cost-competitive in residential markets in the United States is demonstrated, while two-terminal tandems require significantly cheaper sub-cell costs. Additionally, the optimal path to designing cost-effective silicon-based tandems is shown, and a clear economic motivation for existing silicon manufacturers to invest in multicrystalline silicon tandems is outlined. To examine the challenges of high-penetration PV, a solar-plus-storage model was developed and used to find the target battery and storage costs required to reach competitive LCOEs at high penetrations. Flow batteries, a promising storage technology, were then modeled.
These models show that current and near-term cost projections for both flow batteries and solar are still too high for cost-effective solar-plus-storage systems at penetrations above 30%. Cost-performance trade-offs for flow batteries were then explored as potential paths for cost reduction.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122140</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing functionalities of engineered skeletal muscle tissues by recreating natural environmental cues</title>
<link>https://hdl.handle.net/1721.1/122138</link>
<description>Enhancing functionalities of engineered skeletal muscle tissues by recreating natural environmental cues
Kim, Hyeon Yu, Ph.D., Massachusetts Institute of Technology.
Engineered skeletal muscle tissue is a three-dimensional contractile tissue made from muscle cells and the extracellular matrix (ECM). It can serve as a drug-testing platform or an implantable tissue, but its practical use has been limited by inferior contractile performance and small size compared to natural muscles. This thesis aims to implement environmental cues and essential elements of natural muscles to improve contractile performance and increase tissue size beyond the diffusion limit. First, inspired by the observation that natural muscles are exposed to electric potentials from neurons in combination with mechanical stretching from surrounding muscles, a new muscle training system was developed to apply coordinated electrical and mechanical stimulation. Both the experimental results and a mechanistic model suggest that the combined stimulation reorients the ECM fibers such that the parallel ECM stiffness is reduced while the serial ECM stiffness is increased, which respectively reduces resistance to muscle contraction and increases force transmission in the engineered muscles. Second, large natural muscles are fully vascularized so that oxygen and nutrients can be supplied. However, vascularization of engineered skeletal muscle has been challenging because the microenvironmental requirements for differentiating myoblasts are incompatible with those for culturing endothelial cells. In contrast, natural muscle tissue has a compartment structure in which endothelial cells are exposed to blood plasma while myoblasts are surrounded by interstitial fluid. In this thesis, we modeled the natural fluid compartments by creating an in vitro perfusable vasculature running through a skeletal muscle tissue with physiologic cell density. The tissue is designed to have a coaxial tubular shape with a perfusable vasculature at the center.
Through the in vitro fluid compartments, endothelial cells are exposed to endothelial cell growth medium running through the vascular channel, while the skeletal muscle cells are surrounded by muscle differentiation medium. Using this platform, engineered muscle tissue was successfully scaled up from the microscale to the subcentimeter scale. The platform also showed that coculturing with the two separate media from an early stage of muscle differentiation leads to increased contractile force, thicker myotubes, and greater muscle differentiation compared to using a single coculture medium. Furthermore, the engineered skeletal muscles were further vascularized by inducing angiogenic sprouting from the vascular channel into the muscle tissue. This thesis will contribute to the practical application of engineered skeletal muscles with improved functionalities and provides a new model for studying heterotypic cell-cell interactions in skeletal muscle tissues.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 101-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122138</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and controlling the reactions between the electrolyte and positive electrodes for Li-ion batteries</title>
<link>https://hdl.handle.net/1721.1/122137</link>
<description>Understanding and controlling the reactions between the electrolyte and positive electrodes for Li-ion batteries
Karayaylali, Pinar.
Improving electrochemical energy storage devices is critical for the development of renewable energy storage for transportation, grid, and residential applications. Lithium-ion batteries are the leading technology on the market due to their good cycle life and high energy density. However, high production cost and safety concerns currently constrain their use in various applications. Understanding the reactivity between the positive electrode and the electrolyte is crucial for developing next-generation, safer, and cheaper lithium-ion batteries. The focus of this thesis is the effect of the positive electrode/electrolyte interface (EEI) on the electrochemical performance of lithium-ion cells. To avoid any ambiguities from the presence of carbon or binder, oxide-only electrodes were used for these studies. First, the EEI layer on LiCoO₂ electrodes was studied using X-ray photoelectron spectroscopy (XPS), and a correlation between interface composition and ethylene carbonate (EC) dissociation on positive electrode surfaces was observed. This concept was extended to lithium nickel manganese cobalt oxides with different nickel contents (NMC111, NMC622, and NMC811 electrodes). Using these electrodes, we showed experimental evidence for EC dehydrogenation on charged NMC electrodes, which became more pronounced as the nickel content increased. Greater salt decomposition was coupled with the earlier onset of EC dehydrogenation with increasing nickel content or degree of delithiation. Building upon these studies, we investigated different ways to improve cycling performance and to reduce or eliminate the effect of EC dehydrogenation on NMC surfaces.
In this thesis, we explore various methods to stabilize high-energy positive electrodes, such as coatings and electrolyte additives. Through studying different coatings, we propose that high-band-gap insulators such as Al₂O₃ are the best coating materials for positive electrodes due to their reduced reactivity with the electrolyte solvent (EC dehydrogenation) and salts (formation of lithium nickel fluoride/oxyfluoride species). We also show that adding chemically stable but electrochemically unstable electrolyte additives can reduce the effect of EC dehydrogenation even for NMC811 electrodes. We believe that by connecting surface reactivity on oxides with cycling performance, we can pinpoint the key parameters for building better lithium-ion cells.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122137</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An individual-based GPU simulation framework for collective bacterial dynamics in swarms and biofilms</title>
<link>https://hdl.handle.net/1721.1/122136</link>
<description>An individual-based GPU simulation framework for collective bacterial dynamics in swarms and biofilms
Mok, Rachel V. (Rachel Verla)
With recent technological advancements, observations and measurements of complex bacterial communities at single-cell resolution are now possible. Guided by these rich experimental data sets, we develop minimal individual-based models to uncover the governing forces driving the dynamics of microbial systems. Our model incorporates the biophysical processes of cell growth and division, viscous drag, bacterial self-propulsion, and mechanical cell-surface and cell-cell interactions through interaction potentials. In particular, our cell-cell interaction potential accounts for hard steric and osmotic repulsion as well as attraction mediated by secreted components that bind cells together. Implementing this model on graphics processing units (GPUs) such that the computational time scales linearly with system size, we achieve a 10x speedup over a comparable code written for central processing units (CPUs). With this simulation framework, we investigate the collective dynamics of Bacillus subtilis swarm expansion and Vibrio cholerae biofilm formation. Our experimental and numerical results imply that mechanical cell-cell interactions dominate the swarming motility phases and can account for the emergence of order and structure seen in growing biofilms. Furthermore, this model is used to explore the effectiveness of surface topography in deterring biofilm formation by investigating how locally varying boundary curvature impacts the scattering and accumulation dynamics of swimming bacteria. This work shows great promise for increasing our understanding of the physics governing microbial communities, knowledge that is essential for controlling and inhibiting bacterial populations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-133).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122136</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metamaterials for acoustic sensing</title>
<link>https://hdl.handle.net/1721.1/122135</link>
<description>Metamaterials for acoustic sensing
Ma, Chu, Ph.D., Massachusetts Institute of Technology.
Acoustic sensing plays an important role in engineering and daily life, especially in biomedical imaging and in the recently developed field of the Internet of Things. Acoustic metamaterials are man-made materials composed of subwavelength unit cells that can be viewed as macroscopically homogenized media with effective material properties not found in nature. Acoustic metamaterials provide new opportunities for solving challenges in acoustic sensing. This thesis explores new acoustic metamaterials and their applications in acoustic sensing. The first part of the thesis concerns new acoustic metamaterials. Two types of phase shifters are designed: as the frequency of the incident wave changes, one maintains a constant time delay and the other a constant phase shift. New acoustic metasurfaces are designed based on these phase shifters, including an acoustic binary phase grating that realizes highly efficient wave steering with a much less complex structure than previous metasurfaces, and acoustic flat lenses with different steering angles and focal locations for waves of different frequencies. Besides the phase shifters for phase modulation, tunable amplitude modulation is also proposed and experimentally demonstrated by creating 1D channels in a hydrogel sheet. When the channels are filled with different materials at different filling ratios, the acoustic properties can be tuned by orders of magnitude over broad frequency ranges. The second part of the thesis concerns new acoustic sensing methods and systems. First, an acoustic imaging system that can extend the travel distance of evanescent waves and improve imaging resolution in the far field is proposed, based on the binary phase gratings and additional filter layers. Subwavelength imaging and edge detection for 1D slit objects are demonstrated experimentally with 3D-printed prototypes.
Second, a system that can select the direction of acoustic transmission is designed and experimentally demonstrated based on the combination of two acoustic binary phase gratings. The double-grating structure is shown to select direction and frequency at the same time. By stacking multiple double-grating structures configured for different frequencies, a system for broadband direction selection is proposed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-171).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122135</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extra robotic legs for augmenting human payload and positioning capabilities</title>
<link>https://hdl.handle.net/1721.1/122134</link>
<description>Extra robotic legs for augmenting human payload and positioning capabilities
Gonzalez, Daniel Jesus.
The Extra Robotic Legs (XRL) system is a robotic augmentation worn by a human operator, consisting of two articulated robot legs that help bear a heavy backpack payload and a portion of the operator's own weight. The design was driven by a need to increase the effectiveness of Department of Energy hazardous material emergency response personnel, who are encumbered by their personal protective equipment. Essentially a backpack with legs, the XRL system must bear large loads during operation but also requires a proprioceptive transmission to allow close physical interaction with the human operator. The linkage and actuator design minimizes the maximum required actuator torque by exploiting torque redistribution through a closed kinematic chain. A prototype was fabricated using insights gained from force analyses and human-robot interaction safety requirements. A seamless hybrid control architecture was developed to give the operator command over the pace of the XRL stand-to-squat transition. A fail-safe hybrid open-loop/closed-loop control architecture splits the Cartesian space into a closed-loop subspace, in which the robot controls its balance and stability, and an open-loop subspace, in which the human operator may move the robot at will through a force interaction alone. Distributing the control computation to the joint level wherever possible makes the system robust to disconnections from the central computer. Initial tests of balance control while performing squatting transitions indicate the feasibility of this control scheme for the XRL system. It is desirable for the human-XRL quadruped system to walk with an ambling gait in which the rear legs lead the front legs by 25% of the gait period, minimizing the energy lost to foot impacts while maximizing the margin of balance stability. Unlike quadrupedal robots, the XRL system cannot command the human's limbs to coordinate quadrupedal locomotion.
By modeling the human-robot system during steady-state walking as a coupled pair of simple nonlinear limit-cycle oscillators, it can be shown that, using a coupling made only of passive mechanical components, a stable limit cycle that synchronizes the gaits while maximizing stability between the human and robot during walking can be achieved. By exploiting these inherently stable passive dynamics, the margin of stability and rate of synchronization can be supplemented with active control. Using these key design, control, and gait synchronization techniques, the XRL system will ultimately walk, climb stairs, crouch, and crawl with the operator while eliminating all equipment loads acting on the operator.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 119-123).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122134</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the hydrodynamic properties of flexible and rigid bodies undergoing vortex-induced vibrations</title>
<link>https://hdl.handle.net/1721.1/122133</link>
<description>Mapping the hydrodynamic properties of flexible and rigid bodies undergoing vortex-induced vibrations
Fan, Dixia.
Long slender structures in a steady oncoming flow are subject to vibrations caused by vortical structures forming due to a distributed flow instability in their wake. The features of these vibrations are a complex function of the main parameters, and hence systematic exploration of the basic mechanisms has only recently begun. The problem has considerable theoretical interest, as it constitutes a fundamental nonlinear flow-structure interaction system, and it is very important for the design of offshore industry systems such as risers, cables, and hawsers, which are subject to large drag loads and potentially catastrophic fatigue damage as a consequence of vortex-induced vibrations. The much simpler problem of a flexibly mounted rigid cylinder in cross-flow has become the canonical problem of bluff-body flow-structure interaction and has been extensively studied and mapped both experimentally and computationally. Attempts have been made to extend and apply the properties and databases found for rigid structures to flexible structures. This requires, first of all, a strip-theory approach, but additional concepts were also found to be needed, such as the presence of "hybrid" modes consisting of mixed traveling and standing waves, a strong interaction between in-line and cross-flow modal responses, and a response consisting of multiple frequencies.
In this thesis we employ an optical tracking experimental tool that measures the detailed vibrational response and also allows the evaluation of the distributed forces acting along the structure, together with targeted CFD calculations that provide the detailed vortical structures in the wake, to investigate some fundamental properties of the vortex-induced vibrations of flexible structures and to assess the validity of basic assumptions currently in use. Specifically, we first assess the validity of the strip-theory approach that uses databases obtained from rigid cylinders to calculate the local forces on a flexible structure. We compare the experimentally derived forces on a flexible structure with those obtained from the rigid cylinder and find the conditions under which they are applicable. Next, we demonstrate that a newly designed "Intelligent Towing Tank" (ITT) can derive, with a minimal number of experiments, rigid-cylinder databases to be used for flexible structure response prediction, covering the entire parametric space. We employ Gaussian process regression to guide the selection of a sequence of experiments in an automated towing tank to cover the parametric space within a narrow uncertainty range. Our findings are as follows. (1) The strip-theory assumption for uniform flexible structures in uniform flow undergoing in-line and cross-flow oscillations is applicable, but additional concepts are needed to describe the form of the extended coupling of flow and structure, which we discuss in detail. (2) For a flexible cylinder consisting of two segments of equal length but different diameter, the strip-theory approach is still valid, but the existing rigid-cylinder databases do not provide accurate force estimates because they fail to account for multi-frequency response. (3) We derived a new database for rigid cylinders that accounts for a two-frequency response.
This was made possible by the substantial experimental time savings of the ITT. A major conclusion is that when the response contains two frequencies that are both within the excitation region, reduced energy is transferred from the fluid to the structure.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 233-243).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122133</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision pipetting and crack-free colloidal assembly</title>
<link>https://hdl.handle.net/1721.1/122132</link>
<description>Precision pipetting and crack-free colloidal assembly
Beroz, Justin (Justin Douglas)
Applications involving millimeter- and micrometer-scale liquid handling combine precision instrumentation and capillary-driven fluid mechanics. This thesis develops two such applications. First, we present the design of a single handheld pipette that can draw and dispense liquid volumes spanning the range of an entire suite of current commercial pipettes. The design, construction, and validation of a proof-of-concept prototype of this universal micropipette are reported, along with practical considerations for implementation and possible commercialization. Second, a direct-write method to build freestanding colloidal structures via capillary-driven self-assembly from a needle is reported. A scaling law governing the rate of assembly is derived, as well as a criterion for the initiation of cracks, thereby explaining how to build crack-free structures over a wide range of particle sizes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122132</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards the clinical translation of a physiological closed-loop control system for precise control of propofol-induced pharmacological coma</title>
<link>https://hdl.handle.net/1721.1/122131</link>
<description>Towards the clinical translation of a physiological closed-loop control system for precise control of propofol-induced pharmacological coma
An, Jingzhi, Ph. D., Massachusetts Institute of Technology.
Physiological closed-loop control (PCLC) is an emerging technology that can revolutionize the standard-of-care by ensuring timely, precise, and consistent delivery of therapy that is also cost-effective, distraction-free, and non-labor intensive. However, the clinical translation of PCLC systems has been impeded by the lack of real-world performance and safety guarantees, which are difficult to provide due to unique engineering and regulatory challenges, such as uncertain, nonlinear physiological systems and complex healthcare settings. We studied the delivery of propofol-induced pharmacological coma in patients with refractory status epilepticus (RSE) as a prototypical application to address this translational challenge for PCLC systems. To motivate the chosen application, we conducted a retrospective clinical study and showed that the clinical management of pharmacological coma in RSE patients may be improved with PCLC.; We then made contributions in five areas to enhance the performance and safety of a PCLC system for delivering propofol-induced pharmacological coma. First, we built the PCLC system with explicit considerations of usability issues and regulatory requirements for preclinical evidence of safety. Second, we built a hardware-in-the-loop platform to evaluate the PCLC system under conditions mimicking real-world operation. Third, we devised new experimental methods to characterize the performance of clinical infusion pumps for PCLC applications, and performed experiments to characterize several commercially-available infusion pumps. Fourth, we systematically analyzed pharmacokinetic-pharmacodynamic models to better understand physiological response to propofol. 
Finally, we proposed a flexible chance-constrained stochastic optimal control framework for designing controllers with explicit performance and safety guarantees under real-world uncertainties for PCLC systems.; Overall, we have taken concrete steps towards clinical translation of a PCLC system for precise control of propofol-induced pharmacological coma, and the methods developed can be applied to other PCLC systems.
Thesis: Ph. D. in Medical Physics and Medical Engineering, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 259-273).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122131</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Focal delivery and sampling within heterogeneous tissues</title>
<link>https://hdl.handle.net/1721.1/122130</link>
<description>Focal delivery and sampling within heterogeneous tissues
Ramadi, Khalil B.(Khalil Basil)
Heterogeneity exists on multiple scales within the body, at the organ, tissue, and cellular level. Such heterogeneity presents a challenge when attempting to modulate or interrogate different tissues. The brain is one such tissue. Many neuropsychiatric disorders, for example, arise from pathologic signaling from a single neural node. Patients often fail to respond to oral medication therapy due to poor control over spatial and temporal resolution of medication, and off-target effects. Tumors are also inherently heterogeneous. This renders analysis of them difficult in experimental studies investigating tumor sensitivity to therapies. This thesis describes approaches for targeting of heterogeneous brain and tumor tissues with high spatial and temporal resolution. Chronic miniaturized neural drug delivery systems (MiNDS) were fabricated for repeated targeting of brain structures. Modular assembly techniques enabled variation in probe length and modalities (electric, fluidic, etc.).; Positron emission tomography (PET) was used to visualize intracerebral infusions of various volumes through chronic probes. Muscimol, a GABA agonist, was delivered using MiNDS and its effect on modulation of neuronal firing and behavior was characterized. These findings highlight that volume is as important to consider as dose when targeting neural structures. Other technologies for refined neural targeting were also developed. Concentric marking electrodes (CME) allow for simultaneous marking and stimulation. Marks are visible on MRI for real-time modification of experimental approach, and remain visible for up to 10 months. Polished microprobes were developed for single-modality interfacing in a 60 μm footprint, minimizing chronic gliosis. Polished microprobes allowed for independent implantation and steering.
Varying the probe polish angle, material, and size resulted in distinct insertion trajectories.; A computational framework was also developed to optimize drug delivery to any given brain structure. Microdevices containing micro-depots of various drugs can be used to assay drug efficacy on patient-derived xenograft (PDX) tumor models. However, PDXs have significant tissue heterogeneity. A machine learning method of classifying tumors into various tissue types was developed, and allowed for greater accuracy in characterizing drug efficacy on individual PDXs.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; "June 2019." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 132-149).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122130</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Massively multiplexed imaging probes for comprehensive single-cell analysis</title>
<link>https://hdl.handle.net/1721.1/122129</link>
<description>Massively multiplexed imaging probes for comprehensive single-cell analysis
Kwok, Sheldon J. J.
Optical microscopy techniques are widely used to study cellular physiology in their native tissue environments. In particular, the use of fluorescent probes to tag different cell populations, subcellular compartments, specific proteins and nucleotide sequences has enabled examination of cellular phenotypes with increasingly sophisticated detail. Recent efforts to combine physiological imaging and single-cell molecular analysis seek complete understanding of cellular identity and function within complex tissues. Specific cells of interest can be selected and isolated from tissues for downstream molecular analyses using techniques such as laser capture micro-dissection, or cell tagging with photo-conversion. However, high-throughput, unbiased molecular profiling of every cell imaged within a tissue remains an elusive challenge. A fundamental obstacle in previous approaches is spectral overlap due to the relatively broad emission of typical fluorescent probes, which limits their capabilities for multiplexed tagging. The first part of this thesis describes methods for studying cellular physiology in mice at single-cell resolution using two-photon fluorescence microscopy. The second part of this thesis describes the development of a novel class of imaging probes, called laser particles, which rely on narrowband laser emission for massively multiplexed cell tagging. This work establishes laser particles as promising tools for comprehensive single-cell analyses.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages [165]-191).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122129</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards brain-wide noninvasive molecular imaging</title>
<link>https://hdl.handle.net/1721.1/122128</link>
<description>Towards brain-wide noninvasive molecular imaging
Wiśniowska, Agata Elżbieta.
An intricate interplay of signaling molecules underlies brain activity, yet studying these molecular events in living whole organisms remains a challenge. Magnetic resonance imaging (MRI) is the most promising imaging modality for development of molecular signaling sensors with deeper tissue penetration than optical imaging, and better spatial resolution and more dynamic potential in sensor design, compared to radioactive probes. MRI molecular sensors, however, have largely required micromolar concentrations to achieve detectable signals. In order to detect signaling molecules in the brain at their native low nanomolar concentrations, an improvement in MRI molecular sensors is necessary. Here we introduce a new in vivo imaging paradigm that uses vasoactive probes (vasoprobes) that couple molecular signals to vascular responses. We apply the vasoprobes to detect molecular targets at nanomolar concentrations in living rodent brains, thus satisfying the sensitivity requirement for imaging endogenous signaling events. Even with more sensitive probes, molecular imaging of the brain is further complicated by the presence of the blood-brain barrier (BBB), designed by nature to protect this most vital of organs. We have therefore implemented a means to permit noninvasive delivery of imaging agents following ultrasonic BBB opening. We use the ultrasound technique to deliver another potent class of contrast agents, superparamagnetic iron oxides, and we show that effective permeation of brain tissue is achieved using this approach. We have also designed ultrasensitive vasoprobe variants designed to permeate the brain completely noninvasively, using endogenous transporter-mediated mechanisms. We present preliminary results based on this approach and discuss future directions.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122128</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sleeping Beauty : tackling the dormant Plasmodium vivax hypnozoite</title>
<link>https://hdl.handle.net/1721.1/122127</link>
<description>Sleeping Beauty : tackling the dormant Plasmodium vivax hypnozoite
Gural, Nil.
Malaria, named for 'bad air' in Italian, is one of the oldest diseases known, and has brought down explorers, popes, kings and emperors through centuries. Yet, this mosquito-borne disease still evades all attempts at eradication, and puts almost half the global population at risk of infection. Malaria is most commonly known as a blood disease, but all malaria species have an initial obligate, yet clinically-silent development stage in the liver, before the symptomatic and cyclic infection of erythrocytes begins. It is during the liver stage that Plasmodium vivax (P. vivax), the most widely distributed human-infecting malaria species, harbors dormant forms called hypnozoites which can linger for weeks to months, and then relapse to cause recurrent blood stage infection. This dormant parasite reservoir is one of the biggest barriers to malaria eradication, yet very little is known about its biology.; Furthermore, there is a dire need for the development of new hypnozoite-killing drugs but phenotypic screens are hindered by a lack of in vitro platforms. In this work, I set out to develop an in vitro liver stage P. vivax model which could help elucidate the mysterious biology of hypnozoites and could serve as an antimalarial screening platform. As an added challenge, P. vivax parasites that are suitable for liver stage infection cannot be obtained outside of endemic settings. Thus the majority of the work in this thesis was performed in Thailand, where the entire liver stage of P. vivax was recapitulated using a multi-well culture format that incorporates micropatterned primary human hepatocyte co-cultures (MPCCs) using clinical P. vivax isolates. MPCCs feature key aspects of P. 
vivax biology, including establishment of persistent hypnozoites and growing schizonts, merosome release, and subsequent infection of red blood cells.; The platform was piloted as a tool to test existing and candidate anti-hypnozoite drugs, and further miniaturized to be suitable for high-throughput screening. Finally, a hybrid capture strategy and RNA sequencing were employed to describe the first transcriptome of any human malaria species and gain insight into hypnozoite biology. Taken together, the work presented here has already identified unique aspects of hypnozoite biology, a form that has remained a relative biological mystery since its discovery three decades ago. Future work offers the unique potential to gain further biological insights into P. vivax development in human hepatocytes, and represents a screening platform for candidate drugs directed against distinct stages of P. vivax.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-129).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122127</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three dimensional high resolution and high throughput nonlinear optical microscopy</title>
<link>https://hdl.handle.net/1721.1/122126</link>
<description>Three dimensional high resolution and high throughput nonlinear optical microscopy
Xue, Yi, Ph. D., Massachusetts Institute of Technology.
High throughput and high resolution two-photon fluorescence microscopy is an essential tool for functional and structural in vivo imaging of the brain. First, simultaneous, high-resolution functional imaging across a large number of synaptic and dendritic sites is critical for understanding how neurons receive and integrate signals. Yet, functional imaging that targets a large number of sub-micron sized synaptic and dendritic locations poses significant technical challenges. We demonstrate a new parallelized approach to address such questions, increasing the signal-to-noise ratio by an order of magnitude compared to previous approaches. This selective access multifocal multiphoton microscopy (saMMM) uses a spatial light modulator to generate multifocal excitation in three dimensions (3D) and a Gaussian-Laguerre phase plate to simultaneously detect fluorescence from these spots throughout the volume.; We test the performance of this system by simultaneously recording Ca2+ dynamics from cultured neurons at 98-118 locations distributed throughout a 3D volume. This is the first demonstration of 3D imaging in a "single shot" and permits synchronized monitoring of signal propagation across multiple different dendrites. Second, monitoring changes in dendritic and synaptic structures is important for understanding brain plasticity, requiring high resolution and high sensitivity imaging of micron size structures over a large volume of ~500 μm³. We have developed temporal focusing two-photon microscopy for in vivo brain imaging with improved imaging speed over the standard point-scanning approach. However, the imaging depth of temporal focusing two-photon microscopy is severely limited by blurring due to scattering of emission photons.
We have developed Multiline Orthogonal Scanning Temporal Focusing (mosTF) microscopy that enables reassignment of scattered photons back to their original positions.; mosTF is able to overcome the scattering issue without prior knowledge of the scattering media. We demonstrated mosTF by acquiring in vivo brain images from mice under anesthesia. mosTF not only achieves 10 times faster imaging speed than point-scanning two-photon microscopy; it also provides a remarkable signal-to-background ratio improvement for in vivo brain imaging over the typical temporal focusing approach.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 93-101).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122126</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in the economics of education</title>
<link>https://hdl.handle.net/1721.1/122118</link>
<description>Essays in the economics of education
Zárate, Román Andrés(Zárate Vasquez); Angrist, Joshua David,; Pathak, Parag A.,
This thesis consists of three chapters that study how characteristics of peers and schools affect human capital. The first chapter reports estimates of academic and social peer effects from a large-scale field experiment at selective boarding schools in Peru. The experimental design overcomes some methodological challenges in the peer effects literature. I randomly varied the characteristics of neighbors in dormitories with two treatments: (a) less or more sociable peers (identified by their position in the school's friendship network before the intervention) and (b) lower- or higher-achieving peers (identified by admission test scores). While more sociable peers enhance the formation of social skills, higher-achieving peers do not improve academic achievement; in fact, they further reduce the academic performance of lower-achieving students. These results appear to be driven by students' self-confidence.; The second chapter studies whether students prefer friends who are similar to them and whether these preferences persist when students have to interact frequently. I use network surveys and exploit variation in the exact position of the students in the allocation to dormitories at selective boarding schools in Peru. Students are more likely to form social relations with peers who are of their same poverty status, academic level, and sociability. However, students who are neighbors in the allocation to dormitories are more likely to become friends, and this occurs regardless of their type. Furthermore, being exposed to peers of a different type also encourages more diverse friendships and study groups that go beyond the neighbors in the dormitories. 
The third chapter (co-authored with Joshua Angrist and Parag Pathak) evaluates mismatch in Chicago's selective public exam schools, which admit students using neighborhood-based diversity criteria as well as test scores.; Regression discontinuity estimates for applicants favored by affirmative action indeed show no gains in reading and substantial negative effects of exam school attendance on math scores. These results hold for more selective schools and for applicants most likely to benefit from affirmative action, a pattern suggestive of mismatch. However, exam school effects in Chicago are explained by the high quality of schools attended by applicants who are not offered an exam school seat. Specifically, mismatch arises because exam school admission diverts many applicants from high-performing Noble Network charter schools, where they would have done well.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis. "The third chapter (co-authored with Joshua Angrist and Parag Pathak)"--Page 3.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122118</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in military healthcare</title>
<link>https://hdl.handle.net/1721.1/122117</link>
<description>Essays in military healthcare
Simmons, Timothy F.(Timothy Franklin)
This dissertation examines how American foreign policy and Department of Defense policies influence patterns of health and healthcare utilization for active-duty and retired military personnel and their families. Chapter one uses variation in the timing of deployments to study temporary parental absence. Deployment reduces preventative care, as measured by wellness visits and vaccinations. However, it increases emergency room use and mental health care. These effects are concurrent with parental absence, and disappear upon reunion. The response to deployment differs significantly by parental gender; when fathers become sole caregivers all types of utilization fall. Chapter two uses the administratively determined moves of military doctors to investigate the effect of interrupting the doctor-patient relationship. Exploiting variation in the timing of these moves, I find that change of a primary care provider increases outpatient utilization by 23 percent over the following year. While extra primary care drives three-quarters of this increase, specialist care, lab tests, and imaging are also elevated. Increased emergency room use and preventable hospitalizations imply a causal link between the doctor-patient relationship and beneficiary health. These effects increase with the length of the relationship. Chapter three leverages on-base hospital closures to estimate differences in utilization between admissions to on- and off-base hospitals. We find that inpatient utilization is one percent higher per admission to private hospitals. There is no measurable difference in health outcomes for nondeferrable care. Private prenatal care results in better outcomes for mothers and infants. Nevertheless, utilization remains higher for private birth admissions, including for women who present in healthier condition as indicated by birth weight and prematurity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122117</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market failures and monetary policy</title>
<link>https://hdl.handle.net/1721.1/122115</link>
<description>Market failures and monetary policy
Wang, Olivier.
Each chapter of this thesis focuses on the interactions between a particular market failure, such as financial frictions or imperfect competition, and the real effects of monetary policy. The first chapter studies the effect of low interest rates on financial intermediation and the transmission of monetary policy. Using U.S. bank- and branch-level data, I document two new facts: first, the long-run decline in bond rates has not been fully passed through to loan rates; second, the short-run pass-through of policy rates to loan rates is lower at lower rates. To explain these facts, I build a model in which banks provide both credit and liquidity, and the nominal interest rate affects the composition of bank interest income between loan and deposit spreads. In the long run, a decline in the equilibrium real rate r* compresses deposit spreads but increases loan spreads.; In the short run, the sensitivity of output to monetary shocks is dampened relative to a benchmark with perfect pass-through, and even more so the lower r* is: I find a dampening that grows from 20% to 32% as r* falls from 3% to -1%. A higher inflation target can redistribute from depositors to borrowers and enhance monetary policy transmission. In the second chapter, I propose incomplete exchange rate pass-through as a new channel through which balance sheet effects can constrain monetary policy. I consider a New Keynesian open economy with external debt in foreign currency. I first show that absent heterogeneity within the country, outstanding external debt imposes no constraint on monetary policy under complete pass-through. If, however, pass-through is incomplete, then the expenditure switching benefits of a depreciation are weakened, while the strength of debt deflation is unchanged.; As a result, sudden stops are contractionary, and prior current account deficits are inefficiently high due to aggregate demand externalities. 
Optimal macroprudential policy makes the private sector internalize the social value of future exchange rate flexibility. Absent perfect macroprudential tools, monetary policy would like to deter borrowing by promising a large, but time-inconsistent, depreciation during crises. I extend the model to capital flows within currency unions to show how the interaction of goods pricing and debt denomination slowly destroys the option value of exiting the union upon a crisis. The third chapter is joint with Iván Werning. We study oligopolistic competition and its effect on monetary policy. In our model, within each sector, any finite number of firms compete a la Bertrand. Firms can change their posted nominal prices at random intervals a la Calvo.; Following an extensive IO literature, we focus on Markov equilibria of the resulting game in each sector. We aggregate up to the macroeconomic level and study unexpected monetary shocks. Our main result provides a closed-form formula for the response of aggregate output, highlighting three measurable sufficient statistics: demand elasticities, market concentration, and markups. We also separate the strategic effects of market concentration from its effects on the residual demand system. Higher concentration may significantly amplify or dampen the real effects of monetary policy, but the response approximates a recalibrated monopolistic competition model with Kimball demand, if one matches the local demand elasticities of the oligopolistic model.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122115</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on labor market dynamics</title>
<link>https://hdl.handle.net/1721.1/122113</link>
<description>Essays on labor market dynamics
Patterson, Christina Hyde.
This thesis consists of three chapters on labor market dynamics. In the first chapter, I show empirically that the unequal incidence of recessions is a core channel through which aggregate shocks are amplified. I show that the aggregate marginal propensity to consume (MPC) is larger when income shocks disproportionately hit high-MPC individuals, and I define the Matching Multiplier as the increase in the output multiplier originating from the matching of workers to jobs with different income elasticities - a greater matching multiplier translates into more powerful amplification in a range of business cycle models. Using administrative data from the United States, I document that the earnings of individuals with a higher marginal propensity to consume are more exposed to recessions.; I show that this covariance between worker MPCs and the elasticity of their earnings to GDP is large enough to increase shock amplification by 40 percent over a benchmark in which all workers are equally exposed. Using local labor market variation, I validate this amplification mechanism by showing that areas with higher matching multipliers experience larger employment fluctuations over the business cycle. Lastly, I derive a generalization of the matching multiplier in an incomplete markets model and show numerically that this mechanism is quantitatively similar within this structural framework. In the second chapter, joint with David Autor, David Dorn, Lawrence Katz, and John Van Reenen, we explore the well-documented fall of labor's share of GDP in the United States and many other countries. Existing empirical assessments typically rely on industry or macro data, obscuring heterogeneity among firms. In this paper, we analyze micro panel data from the U.S. Economic Census since 1982 and document empirical patterns to assess a new interpretation of the fall in the labor share based on the rise of "superstar firms."
If globalization or technological changes advantage the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms. Since these firms have high markups and a low labor share of firm value-added and sales, this depresses the aggregate labor share.; We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity and innovation; (vi) the aggregate markup will rise more than the unweighted firm markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. In the third chapter, I explore how the distribution of tasks across industries affects labor market responses to shocks.; I present a model in which task-level wages connect industries employing the same tasks, meaning that the distribution of tasks across industries insures some workers against shocks and alters their labor market experiences. Workers trained in more dispersed tasks (e.g. accountants) face less unemployment risk from industry-specific shocks than workers who do tasks that are concentrated in few industries (e.g. petroleum engineers). Using industry and regional data, I show empirical evidence that supports the model's predictions - industries that employ more specialized labor contract less in response to demand shocks than industries with less specialized labor. 
JEL Classifications: E21, J23, D33
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122113</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consumption heterogeneity in macroeconomics and public finance</title>
<link>https://hdl.handle.net/1721.1/122112</link>
<description>Consumption heterogeneity in macroeconomics and public finance
Olivi, Alan(Alan Kevin)
This thesis consists of three chapters on households' consumption. In the first chapter we study the canonical consumption-savings income-fluctuations problem with incomplete markets and show theoretically how to recover households' preferences and beliefs from their consumption and savings decisions. The main innovation is to show how to use the transitory component of income as an instrument that shifts current consumption without changing beliefs about future stochastic changes in consumption. As such, the transitory component of income affects consumption growth through an intertemporal smoothing motive with no immediate effect on precautionary savings. With the precautionary motive neutralized, comparing changes in consumption and savings in response to temporary shocks allows us to identify the curvature of marginal utility: when savings respond more than consumption to transitory changes in income, the relative prudence is higher.; Additionally, the transitory component makes it possible to identify an effective discount rate, which in turn makes it possible to control the degree of households' impatience. The curvature of marginal utility and the effective discount rate are sufficient to understand how preferences restrict consumption choices through the Euler equation. To then recover beliefs, we assume that beliefs are independent of exogenous changes in assets. This gives us an additional instrument to identify beliefs since the belief system then has to be consistent with the implied savings patterns as assets vary. These two instruments allow us to nonparametrically recover preferences and beliefs in a very general framework: we can accommodate multiple consumption items (both durable and non-durable), multiple assets (liquid and illiquid, risky or not), habits, endogenous labor supply, and so on.
The second chapter builds on the first. We investigate empirically, in data from the PSID and the SIPP, how households' expectations deviate from rationality. Our estimation shows that households are overconfident and overoptimistic. The main source of overconfidence is that households underestimate the frequency of shocks, and their optimism is driven by an underappreciation of negative shocks. However, these biases are not homogeneous in the population: they are amplified for lower-income households, while higher-income households' perceptions are closer to rational expectations. These results explain not only the quantitative magnitude of undersaving and overreaction to income shocks, but also why higher-income households accumulate disproportionately more wealth. We then explore how these beliefs affect the design of unemployment insurance and the transmission of countercyclical income risk to aggregate demand. In the third chapter, written with Xavier Jaravel, we investigate how to design optimal income redistribution policies when the price of goods depends on the size of the corresponding markets and different households consume different goods. We introduce Increasing Returns to Scale (IRS) and heterogeneous spending patterns (non-homothetic preferences) into the canonical tax problem of Mirrlees. In this environment, any change in tax policy induces a change in labor supply, hence a change in market size, which translates endogenously into a change in productivity; this productivity response affects consumer prices and sets off another round of labor supply changes, market size changes, productivity changes, further labor supply changes, and so on. We show theoretically how to characterize these general equilibrium effects and we quantify their importance for the optimal tax schedule. The calibrated model matches empirical evidence on IRS as well as the tax schedule, earnings distribution, and spending patterns observed in the United States.
We establish three main results: (1) the optimal average tax rate is substantially lower on average, falling from about 45% under Constant Returns to Scale (CRS) to about 35% with IRS (because IRS increase the efficiency cost of taxation); (2) with IRS and homothetic utility, optimal marginal tax rates are much less progressive than under CRS, and they become regressive above the 65th percentile of the income distribution (because IRS increase the efficiency cost of taxation relatively more for the rich); (3) with IRS and non-homothetic utility, optimal marginal tax rates become more progressive (intuitively, the planner internalizes that the productivity increase that could result from a tax break to the rich has low social value if the rich spend their marginal dollar on products that the poor do not consume much of). These findings indicate the importance of endogenous productivity and non-homotheticities for optimal taxation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122112</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essay on beliefs and the macroeconomy</title>
<link>https://hdl.handle.net/1721.1/122111</link>
<description>Essay on beliefs and the macroeconomy
Molavi, Pooya.
This thesis consists of three essays. The first essay explores a form of bounded rationality where agents learn about the economy with possibly misspecified models. I consider a recursive general-equilibrium framework that nests a large class of macroeconomic models. Misspecification is represented as a constraint on the set of beliefs agents can entertain. I introduce the solution concept of constrained-rational-expectations equilibrium (CREE), in which each agent selects the belief from her constrained set that is closest to the endogenous distribution of observables in the Kullback-Leibler divergence. If the set of permissible beliefs contains the rational-expectations equilibria (REE), then the REE are CREE; otherwise, they are not. I show that a CREE exists, that it arises naturally as the limit of adaptive and Bayesian learning, and that it incorporates a version of the Lucas critique. I then apply CREE to a novel form of bounded rationality where beliefs are constrained to factor models with a small number of endogenously chosen factors. Misspecification leads to amplification or dampening of shocks and history dependence. The calibrated economy exhibits hump-shaped impulse responses and co-movements in consumption, output, hours, and investment that resemble business-cycle fluctuations. In the second essay, I ask the following question: what are the testable restrictions imposed on the dynamics of an agent's beliefs by the hypothesis of Bayesian rationality, which do not rely on the additional assumption that the agent has an objectively correct prior? I argue that there are essentially no such restrictions.
I consider an agent who chooses a sequence of actions and an econometrician who observes the agent's actions and is interested in testing the hypothesis that the agent is Bayesian. I argue that--absent a priori knowledge on the part of the econometrician on the set of models considered by the agent--there are almost no observations that would lead the econometrician to conclude that the agent is not Bayesian. This result holds even if the set of actions is sufficiently rich that the agent's action fully reveals her belief about the payoff-relevant state, and even if the econometrician observes a large number of identical agents facing the same sequence of decision problems. In the third essay, I propose an equilibrium search and matching model with permanent worker heterogeneity, asymmetric information, and endogenous separations, and study the dynamics of adverse selection in the labor market. The interaction between asymmetric information and endogenous separations leads to a cyclical adverse selection problem that has testable predictions both for aggregate variables and for individual workers' outcomes. First, a deterioration in the distribution of ability in the pool of the unemployed leads firms to raise their hiring standards, thus shifting out the Beveridge curve. Second, if the separation rate is log-supermodular (log-submodular) in productivity and ability, the pool of the unemployed becomes more (less) adversely selected in downturns. Third, firms rationally discriminate against the long-term unemployed by demanding more unequivocally positive signals of their ability before hiring them. Fourth, this scarring effect is more (less) severe for lower-ability workers and after deeper recessions if the separation rate is log-supermodular (log-submodular). I conclude by providing conditions on the fundamentals of the economy that lead to log-supermodular and log-submodular separation rates.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-182).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122111</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on political economy and development</title>
<link>https://hdl.handle.net/1721.1/122110</link>
<description>Essays on political economy and development
Sarkar, Sourav, Ph.D. Massachusetts Institute of Technology.
This thesis includes three papers. In the first paper, I use a close-election regression discontinuity design to study the development effects of political alignment between local legislative constituency representatives and state governments in India. I assemble a comprehensive annual dataset on India at a fine geographic unit and find that aligned constituencies see lower growth of visible long-term investment goods, although they do not receive less of some other variables. In the second paper, I study the consequences as well as the determinants of the formation of new districts and district headquarters in the Indian context. There is evidence of increased growth (or reorganization) of economic activity around newly formed district headquarters. However, the evidence of an effect on the entire district is mixed. In the third paper, I find a negative relationship between the size of political constituencies and various variables pertaining to the Mahatma Gandhi National Rural Employment Guarantee Scheme in India. The results are consistent with a simple theory of maximization of electoral prospects by an electorally motivated government, where the citizens' total demand from the government is not significantly increasing in the size of the electorate.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 169-181).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122110</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in development and political economy</title>
<link>https://hdl.handle.net/1721.1/122108</link>
<description>Essays in development and political economy
Shaukat, Mahvish Ifrah.
This thesis studies three questions in economic development and political economy. Chapter 1 presents empirical evidence on the causal effect of close elections on political selection and performance in India. An extensive literature in political economy predicts that electoral competition improves governance. Empirical work testing this theory is scarce, however, due in large part to the difficulty of identifying the causal impact of competition, which is itself an equilibrium outcome shaped by politician behavior. To address this identification challenge, this paper proposes a shift-share instrument that uses variation in aggregate party popularity over time ("shifts") and local party preferences across space ("initial shares") to predict the vote-margin gap in local elections. Using data on Indian state constituency elections, I find political parties respond strategically to competition by selecting high-quality (high-valence) candidates to run for office, where quality is proxied by lower criminality, higher education, and higher wealth. Once in office, however, these politicians perform no better in facilitating economic growth or providing public goods. Instead, I find parties make temporary, and likely inefficient, reallocations of state-run resources such as electricity in the run-up to competitive elections. Together, the results suggest that while the anticipation of close elections may lead politicians to take costly actions prior to an election, electoral competition does not necessarily result in improved economic outcomes. Chapter 2 studies how the spatial allocation of public goods within cities depends on politicians and bureaucrats. In many cities around the world, the bureaucrats and politicians that comprise local governments manage non-overlapping administrative and political units.
Using geo-referenced data on public goods, property characteristics, and residents in two major cities in Pakistan, I use a spatial regression discontinuity design to estimate discontinuities in public goods provision and property values across these boundaries. These discontinuities are used to construct measures of political and administrative unit "value-added," and to quantify the variation across each type of unit. I find that the delimitation of political and administrative boundaries results in large variation in public goods quality and self-reported property valuations within cities. Political unit quality is correlated with a number of politician and constituency observables, suggesting that political factors may be important determinants of internal city structure. Chapter 3, coauthored with Adnan Q. Khan, Asim Khwaja, and Benjamin Olken, investigates whether strengthening the link between local taxation and urban services can revitalize the social compact between citizen and state. A significant challenge to the provision of local public services in developing economies is the inability to raise adequate resources, especially through local taxation. In many countries, the social compact, whereby citizens agree to pay taxes to fund their desired services, is broken. A low willingness to pay taxes leads to low revenue collection and prevents adequate service provision, which in turn reduces willingness to pay and can even lead to citizen disengagement from the state.
We investigate whether strengthening the link between local collections and urban services can increase citizens' willingness to pay for services, improve service delivery, and enhance local politics. We test this in major urban centers in Punjab, Pakistan via several interventions - including eliciting citizen preferences for specific services when taxes are collected, earmarking revenue for specific services, and enabling local politicians - that credibly strengthen the link between tax collection and urban service provision. This paper presents the experimental design and reports first-year impacts on tax payments. We find that strengthening the link between taxes and services increases tax payments: taxpayers who failed to make a full payment prior to the interventions increase property tax payments after the interventions are introduced.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122108</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on finance and labor markets</title>
<link>https://hdl.handle.net/1721.1/122107</link>
<description>Essays on finance and labor markets
He, Alex Xi.
This thesis consists of three chapters in corporate finance and labor economics. The first two chapters study the interaction of the financial sector and the labor market, and the last chapter focuses on corporate R&amp;D investment. The first chapter (co-authored with Daniel le Maire) studies how the market for corporate control disciplines managers who pay high wages. We construct a manager-firm-worker matched panel data set covering the population of Denmark from 1995 to 2011 and develop a framework to measure manager styles in wage-setting by tracking workers and managers across different firms over time. We find that individual managers do matter for wages, and variation in manager fixed effects can explain a significant part of wage differences between firms. Using a comprehensive sample of over 3000 M&amp;As, we show that mergers target high-paying managers and reduce wage premiums but not employment at target firms, and that the effect is stronger in less competitive industries. Establishments with high wage premiums due to generous managers are more likely to be acquired, and experience higher manager turnover and larger wage declines after acquisition. Lower wages have little effect on firms' productivity, and therefore represent a transfer from workers to shareholders. We show that increased market power in product markets or labor markets cannot account entirely for these facts. The reduction in wages accounts for about half the shareholder gains in all M&amp;As, suggesting that rent extraction might be a major motive for merger transactions. The second chapter (co-authored with Daniel le Maire) investigates the effects of liquidity constraints on employment and earnings by exploiting a mortgage reform in Denmark in 1992, which for the first time allowed homeowners to borrow against housing equity for non-housing purposes.
Liquidity-constrained homeowners extracted housing equity, increased debt levels, and experienced higher earnings growth after the reform. In contrast, the reform had little impact on the employment and earnings of homeowners with high liquid asset holdings. Consistent with models of job search with risk aversion, the option to borrow against housing equity allows individuals to seek jobs that have higher earnings growth but higher unemployment risk. This effect is larger for low-income and older individuals. The results imply that relaxing liquidity constraints can increase output, and that policies restricting mortgage refinancing during economic distress may backfire in recessions. The third chapter studies the spillovers of corporate R&amp;D investment across different technological fields. I build a measure of technological distance between firms using the citation-based innovation network, which incorporates knowledge spillovers from upstream technological fields to downstream technological fields. I then use this measure to estimate the impact of technology spillovers using panel data on U.S. firms. I find that spillovers from firms innovating in upstream fields are quantitatively as important as spillovers from firms innovating in the same fields. Consistent with the idea that firms innovate more when there is more past upstream innovation to build on, firms' R&amp;D investments respond positively to the R&amp;D investments of firms in upstream fields, but not to those of firms in downstream fields or in the same fields. Smaller firms on average operate in more upstream technological fields and generate more spillovers and higher social returns, contrary to the findings of previous research. JEL Codes: G34, J30, D22
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122107</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on identification in macroeconomics</title>
<link>https://hdl.handle.net/1721.1/122106</link>
<description>Essays on identification in macroeconomics
Sarto, Andres Pablo.
This thesis studies new methods to estimate the effects of interventions at the macroeconomic level using regional variation. Chapter 1 studies the relationship between elasticities at the regional level and elasticities at the macroeconomic level. To this end, I analyze regional versions of canonical macroeconomic models, along with different policies of interest, such as aggregate government spending. At the regional level, I show that there are two relevant elasticities for policy analysis: micro-local elasticities and micro-global elasticities. The former measure how a region reacts to a regional policy; the latter measure how a region reacts to an aggregate policy. I then show that the macro elasticity of interest is a function of the micro-global elasticities exclusively. Finally, I show that if we fix a policy and an outcome variable, the mapping from the micro-global elasticities to the macro elasticity is the same across models. Chapter 2 proposes a new approach, the Regional Structural VAR (RSVAR), to estimate macroeconomic elasticities using regional data while avoiding the problem of model-specific estimates. I first define a class of models, which includes the most widely used models for policy analysis, that gives equilibrium regional equations containing the micro-local and micro-global elasticities. I then specify different sets of identification assumptions, along with estimators of the macroeconomic elasticities, and show that the proposed estimators are consistent. The crucial assumption underlying all of these results is that regions are heterogeneous in the sense that they react differently to the same shocks. Chapter 3 provides a new identification strategy to estimate the fiscal multiplier in the US. Using state-level data for the period 1971-2008, I apply an RSVAR approach to recover the national fiscal multiplier. The instrument employed at the state level is official declarations of natural disasters by the federal government.
My results suggest a very precisely estimated fiscal multiplier of around 1, depending on the specification used. Thus, it is not possible to rule out the possibility that government spending crowds out, or crowds in, private spending. However, the range of the estimates obtained (0.7-1.2) suggests that either effect should be small.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 144-147).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122106</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays in economics</title>
<link>https://hdl.handle.net/1721.1/122104</link>
<description>Three essays in economics
Hildebrand, Nikolaus(Nikolaus Valentin)
This thesis consists of three chapters. First, I study the long-run growth of finance. I construct comprehensive national financial balance sheets, financial accounts, and production accounts for twelve advanced economies over the past two centuries using hundreds of historical sources. Financial asset-to-output ratios more than quintuple, while the output share of the finance industry more than quadruples. Despite this dramatic growth in the volume of finance, financial deepening is slow and the structure of finance is stable. Financial growth over the past two centuries is primarily driven by capital formation. These results shed new light on the recent dramatic expansion of financial assets, services, and debt, and directly link these secular trends to an ongoing debate on rising wealth-income ratios and inequality. The second chapter, joint with Johannes Hermle, studies the impact of a male breadwinner norm on gender inequality. We develop a marriage market matching model featuring a male breadwinner norm - a discontinuity in husbands' marginal utility from spousal income. We predict that such a norm should cause a concave kink in households' relative income distribution, a kinked increase in the divorce probability, and a kinked increase in household good provision by females at the point where both spouses earn the same. We test these predictions using large administrative tax data from Germany. Consistent with the presence of a male breadwinner norm, we find a sharp kink in the relative income distribution at the 50% threshold as well as a crowd-out of couples with a female primary earner relative to random matching. The third chapter studies the evolution of financial systems in common and civil law countries since 1870. I show that, in line with legal origins theory, civil law countries have always had significantly smaller capital markets, smaller financial sectors, and more bank-reliant financial systems than common law countries.
Convergence immediately prior to World War I can be explained by increased financial integration and free capital movement, while financial fragmentation and shocks to the capital stock during and after the war help explain the strong divergence afterwards.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122104</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on innovation and public policy</title>
<link>https://hdl.handle.net/1721.1/122103</link>
<description>Essays on innovation and public policy
Choi, Jane Jungeun.
Innovation is an important driver of economic growth, and public policy can affect many aspects of innovation. This thesis investigates the role of public policy in relation to two specific aspects of innovation: 1) who becomes an innovator and 2) where intellectual property is located once an innovation occurs. The first chapter analyzes how tax rates on patent- and trademark-related income affect where patents and trademarks are located internationally. I study how changes in patent and trademark tax rates in various countries altered the flow of patents and trademarks in and out of those countries. Using data on patent and trademark transfers from the US Patent &amp; Trademark Office (USPTO), combined with market-based patent value estimates, I estimate the sensitivity of IP location to changes in tax rates. I present suggestive evidence of income shifting and tax base erosion by showing that patents and trademarks tend to locate in countries with lower tax rates. The second chapter (jointly written with Carolyn Stein and Heidi Williams) investigates the role of gender in the evaluation of patent applications submitted to the USPTO. We document that patent examiner gender appears to have no effect on the evaluation of patent applications submitted by female inventors relative to male inventors, suggesting male examiners are not differentially biased in their evaluation of patent applications from female inventors. The third chapter (jointly written with Yosub Jung) investigates how the passage of US state laws granting married women the rights to own separate property and their earnings affected patenting by female inventors. In the 1800s, before such laws were passed, the notion of coverture meant that married women's property and earnings were controlled by their husbands. We compare patenting by women before and after the acts and show that patenting by women increased after these laws were passed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-148).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122103</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual and auditory scene parsing</title>
<link>https://hdl.handle.net/1721.1/122101</link>
<description>Visual and auditory scene parsing
Zhao, Hang, Ph.D. Massachusetts Institute of Technology.
Scene parsing is a fundamental topic in computer vision and computational audition, where researchers develop computational approaches to match the human perceptual system's ability to understand scenes, e.g., grouping the visual regions of an image into objects and segregating sound components in a noisy environment. This thesis investigates fully-supervised and self-supervised machine learning approaches to parse visual and auditory signals, including images, videos, and audio. Visual scene parsing refers to densely grouping and labeling image regions into object concepts. First, I build the MIT scene parsing benchmark based on ADE20K, a large-scale, densely annotated dataset. This benchmark, together with the state-of-the-art models we open-source, offers a powerful tool for the research community to solve semantic and instance segmentation tasks. Then I investigate the challenge of parsing a large number of object categories in the wild. An open-vocabulary scene parsing model that combines a convolutional neural network with a structured knowledge graph is proposed to address this challenge. Auditory scene parsing refers to recognizing and decomposing sound components in complex auditory environments. I propose a general audio-visual self-supervised learning framework that learns from a large amount of unlabeled internet videos. The learning process discovers the natural synchronization of vision and sound without human annotation. The learned model achieves the capability to localize sound sources in videos and separate them from the mixture. Furthermore, I demonstrate that motion cues in videos are tightly associated with sounds, which helps in solving sound localization and separation problems.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 121-132).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122101</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in data-driven models for transportation</title>
<link>https://hdl.handle.net/1721.1/122100</link>
<description>Advances in data-driven models for transportation
Ng, Yee Sian.
With the rising popularity of ride-sharing and alternative modes of transportation, there has been a renewed interest in transit planning to improve service quality and stem declining ridership. However, it often takes months of manual planning for operators to redesign and reschedule services in response to changing needs. To this end, we provide four models of transportation planning that are based on data and driven by optimization. A key aspect is the ability to provide certificates of optimality while remaining practical by generating high-quality solutions in a short amount of time. We provide approaches to combinatorial problems in transit planning that scale up to city-sized networks. In transit network design, current tractable approaches only consider edges that exist, resulting in proposals that are closely tethered to the original network. We allow new transit links to be proposed and account for commuters transferring between different services. In integrated transit scheduling, we provide a way for transit providers to synchronize the timing of services in multimodal networks while ensuring regularity in the timetables of the individual services. This is made possible by taking the characteristics of transit demand patterns into account when designing tractable formulations. We also advance the state of the art in demand models for transportation optimization. In emergency medical services, we provide data-driven formulations that outperform their probabilistic counterparts in ensuring coverage. This is achieved by replacing independence assumptions in probabilistic models and capturing the interactions of services in overlapping regions. In transit planning, we provide a unified framework that allows us to jointly optimize frequencies and prices in transit networks to minimize total waiting time.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 163-176).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122100</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive and prescriptive methods in operations research and machine learning : an optimization approach</title>
<link>https://hdl.handle.net/1721.1/122099</link>
<description>Predictive and prescriptive methods in operations research and machine learning : an optimization approach
Mundru, Nishanth.
The availability and prevalence of data have provided a substantial opportunity for decision makers to improve decisions and outcomes by using this data effectively. In this thesis, we propose approaches that start from data and lead to high-quality decisions and predictions in various application areas. In the first chapter, we consider problems with observational data, and propose variants of machine learning (ML) algorithms that are trained by taking decision quality into account. The traditional approach to such a task has often been a two-step one, separating the estimation task from the subsequent optimization task that uses the estimated models. Consequently, this approach can miss potential improvements in decision quality that come from considering the two tasks jointly. Crucially, the joint approach leads to stronger prescriptive performance, particularly for smaller training set sizes, and improves decision quality by 3-5% over other state-of-the-art methods. We introduce the idea of uncertainty penalization to control the optimism of these methods, which improves their performance, and propose finite-sample regret bounds. Through experiments on real and synthetic data sets, we demonstrate the value of this approach. In the second chapter, we consider observational data with decision-dependent uncertainty; in particular, we focus on problems with a finite number of possible decisions (treatments). We present our method of prescriptive trees, which prescribes the best treatment option by learning from observational data while simultaneously predicting counterfactuals. We demonstrate the effectiveness of this approach using real data for the problem of personalized diabetes management.
In the third chapter, we consider stochastic optimization problems for which the sample average approximation approach is computationally expensive.; We introduce a novel measure, called the Prescriptive divergence, which takes the decision quality of the scenarios into account, and consider scenario reduction in this context. We demonstrate the power of this optimization-based approach on various examples. In the fourth chapter, we present our work on a problem in predictive analytics, where we focus on ML problems from a modern optimization perspective. For sparse shape-constrained regression problems, we propose modern optimization-based algorithms that are scalable, and that recover the true support with high accuracy and low false positive rates.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122099</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven dynamic optimization with auxiliary covariates</title>
<link>https://hdl.handle.net/1721.1/122098</link>
<description>Data-driven dynamic optimization with auxiliary covariates
McCord, Christopher George.
Optimization under uncertainty forms the foundation for many of the fundamental problems the operations research community seeks to solve. In this thesis, we develop and analyze algorithms that incorporate ideas from machine learning to optimize uncertain objectives directly from data. In the first chapter, we consider problems in which the decision affects the observed outcome, such as in personalized medicine and pricing. We present a framework for using observational data to learn to optimize an uncertain objective over a continuous and multi-dimensional decision space. Our approach accounts for the uncertainty in predictions, and we provide theoretical results that show this adds value. In addition, we test our approach on a Warfarin dosing example, and it outperforms the leading alternative methods.; In the second chapter, we develop an approach for solving dynamic optimization problems with covariates that uses machine learning to approximate the unknown stochastic process of the uncertainty. We provide theoretical guarantees on the effectiveness of our method and validate the guarantees with computational experiments. In the third chapter, we introduce a distributionally robust approach for incorporating covariates in large-scale, data-driven dynamic optimization. We prove that it is asymptotically optimal and provide a tractable general-purpose approximation scheme that scales to problems with many temporal stages. Across examples in shipment planning, inventory management, and finance, our method achieves improvements of up to 15% over alternatives. In the final chapter, we apply the techniques developed in previous chapters to the problem of optimizing the operating room schedule at a major US hospital.; Our partner institution faces significant census variability throughout the week, which limits the number of patients it can accept due to resource constraints at peak times. 
We introduce a data-driven approach for this problem that combines machine learning with mixed integer optimization and demonstrate that it can reliably reduce the maximal weekly census.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 183-190).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122098</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping endothelial functional phenotype in cancer by unveiling the kinase and phosphatase pathways</title>
<link>https://hdl.handle.net/1721.1/122089</link>
<description>Mapping endothelial functional phenotype in cancer by unveiling the kinase and phosphatase pathways
Gadish, Or.
Endothelial cells (EC) are critical to the tumor ecosystem, lining the blood vessels that control nutrient transport while also regulating homeostasis. Normalization of vessel structure has shown clinical promise, but EC regulation is known to be state-dependent: while proliferative ECs stimulate growth, quiescent ECs can inhibit it. We studied the functional and phosphorylative transformations of EC state in cancer to elucidate further targets for EC normalization. Confluent ECs cultured in breast cancer cell conditioned media displayed marked elongation and impaired wound healing. Given the well-established relationship between cytoskeletal reorganization and phosphorylative regulation, we estimated kinase and phosphatase activity by quantifying phosphorylation of downstream targets using mass spectrometry.; Of the 152 kinases and phosphatases analyzed across 62 phosphoenzyme families, 9 families were categorized as significant drivers of dysfunction, and potential targets for normalization. Using inhibitors, we functionally validated six of the most significant families in morphology and wound healing assays. The most promising candidate target for normalization was Akt, whose inhibition restored the control phenotype in both assays. Counter to much of the literature, Src activity was decreased in cancer-conditioned transformation, and Src inhibition in control cells induced a dysfunctional phenotype. These data prompt further investigation of Akt and caution with regard to Src as targets for inducing cancer homeostasis. 
Further, the inhibitors overall charted a continuum of EC phenotypes, highlighting the need for further exploration of the complex relationships between EC phenotype, transformation, and regulation.; The framework presented in this thesis that maps functional changes to phosphoenzyme drivers can be readily applied to other models, and the comprehensive library of phosphoenzyme activity developed will shed light on how existing cancer-targeting inhibitors affect tumor endothelium.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 197-209).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122089</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomechanical human performance metrics of coordination and balance for operational decision-making</title>
<link>https://hdl.handle.net/1721.1/122088</link>
<description>Biomechanical human performance metrics of coordination and balance for operational decision-making
Fineman, Richard A. (Richard Andres)
The overall goal of this work is to develop a series of biomechanically-driven human performance metrics that aid operational decision-making. By quantifying inter-limb coordination and balance, we enable the decoupling of motor patterns without direct visual observation, providing objective feedback to decision-makers on the quality of human motion. To develop and validate metrics for coordination and balance effectively, we take a human-centered approach, contextualizing and evaluating them in specific domains of interest. This work focuses on two: clinical geriatrics and aerospace spacesuit assembly (SSA) design. While these domains might seem distinct, both require a detailed understanding of nominal human motion, and both are interested in measuring deviation from desired motor patterns. To this end, we test the hypothesis that we can augment decision-making in these two domains through the development and validation of biomechanically-driven human performance metrics for coordination and balance.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 190-210).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122088</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from clinical health data for real-time decision support in emergency department care of sepsis</title>
<link>https://hdl.handle.net/1721.1/122087</link>
<description>Learning from clinical health data for real-time decision support in emergency department care of sepsis
Prasad, Varesh.
The emergency department (ED) is the first point of contact with clinicians for most patients with acute illnesses. Early identification along with appropriate interventions (including procedures, medications, and triaging to an appropriate level of care) in the ED can be critical drivers of good outcomes, particularly in the care of patients with sepsis. Although sepsis is a leading cause of in-hospital mortality, it can be difficult to identify on presentation, and debate continues about the best practices in certain aspects of managing sepsis patients. In this thesis, we applied machine learning-based analyses to better understand the ED course of patients with sepsis and to build systems that can operate at the bedside to aid clinicians in the care of sepsis, including both detection of sepsis at the earliest possible stages and management of deteriorating cardiovascular function and hemodynamic status.; We extracted data using automated methods as well as manual chart review in a selection of two years' worth of ED visits to Massachusetts General Hospital. Clustering blood pressure trajectories showed that only 20% of 765 sepsis patients showed sustained responses to fluid bolus therapy, while 25% of patients requiring escalated hemodynamic support via vasopressor therapy had very low blood pressure for at least two hours before escalation from fluid to vasopressor administration. 
Subsequently, we showed that a simple logistic regression model with only six basic elements of patient data can distinguish between patients who required vasopressors and those whose hemodynamic function recovered with fluid therapy alone, with an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.88-0.94) at the final decision time.; A predictive version of the model could detect the need for vasopressors up to six hours in advance with an AUC of 0.82 (95% CI: 0.80-0.83) and retained performance in acutely hypotensive patients with an AUC of 0.77 (95% CI: 0.74-0.90). We also developed a model to detect the presence of sepsis at triage and throughout the ED stay, combining vital signs, presenting symptoms, and baseline risk factors to discriminate between 1,663 sepsis and non-sepsis acutely ill patients at triage with an AUC of 0.88 (95% CI: 0.86-0.90) and over the course of the whole ED stay with an AUC of 0.92 (95% CI: 0.91-0.94), improving significantly over existing sepsis screening tools such as qSOFA (triage AUC of 0.61). We designed these models to minimize user input needs so as to integrate into clinical workflows without extensive demands on clinicians interacting with the electronic medical record system or a bedside monitor.; These models provide a feasible way to build clinical decision support tools that can operate in real-time in the ED to improve sepsis care from the very first point of contact with a potential sepsis patient.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 109-119).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122087</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on health and social insurance</title>
<link>https://hdl.handle.net/1721.1/122086</link>
<description>Essays on health and social insurance
Stepner, Michael, Ph.D. Massachusetts Institute of Technology.
This thesis consists of three chapters on the economics of health and social insurance. In the first chapter, I examine the distribution of income risk that adults face from severe illness and the social insurance provided by taxes and transfers, using an event study research design with linked Canadian hospital and tax records. I find that adults with lower incomes face larger pre-tax earnings risk from hospitalization events, primarily due to extensive margin exits from employment. Canada's tax and transfer system insures 44% of post-hospitalization income losses in the bottom income quintile and 12% of losses in the top income quintile. But less than two-thirds of this insurance comes from replacing lost earnings with increased transfers. In the bottom income quintile, 30% of insurance is due to a stable stream of transfers; in the top income quintile, 30% of insurance is due to progressive taxation.; Using a calibrated model, I find that the marginal value of additional insurance against hospitalization risk is approximately flat across the income distribution. In the second chapter, I show that employer-provided short-term disability insurance (STDI) increases long-term disability insurance (LTDI) take-up and imposes a negative fiscal externality on the government budget. Using variation in private STDI coverage caused by Canadian firms ending their plans, I find that private STDI raises two-year flows onto LTDI by 0.07 percentage points (33%). Extrapolating to Canada's entire population, private STDI generated 18,300 LTDI recipients and CA$230 million (5%) of public LTDI spending in 2015. 
In the third chapter, Raj Chetty, Sarah Abraham, Shelby Lin, Benjamin Scuderi, Nicholas Turner, Augustin Bergeron, David Cutler and I examine the relationship between income and life expectancy in the United States from 2001 to 2014.; Using 1.4 billion linked earnings and mortality records, we document the levels of life expectancy and changes in life expectancy over time by income group, at a national level and within local areas. We also examine the factors correlated with differences in life expectancy across local areas. JEL Classification: I38, H53, I14
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122086</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic modulation of material properties by solid state proton gating</title>
<link>https://hdl.handle.net/1721.1/122082</link>
<description>Dynamic modulation of material properties by solid state proton gating
Tan, Aik Jun.
As functionalities become more abundant in solid state devices, one key capability that remains lacking is an effective means to dynamically tune material properties. In this thesis, we establish a pathway towards this capability by utilizing the simplest ion known to mankind: the proton. We demonstrate for the first time dynamic control of magnetic properties in an all-solid-state heterostructure using solid state proton gating in a metal/oxide heterostructure. We also demonstrate dynamic modulation of magnetic anisotropy at a metal-metal interface through hydrogen insertion in a heavy metal adjacent to a ferromagnet. Besides magnetic properties, solid state proton gating also enables dynamic modulation of optical properties in a thin film oxide. We observe fast gating of optical reflectivity by ~10% at timescales down to ~20 ms in a metal/oxide/metal heterostructure. Finally, we also demonstrate a room temperature reversible solid oxide fuel cell based on hydrogen storage. The cell has a small form factor which is suitable for energy storage in solid state microelectronics applications. Our work hence provides a platform for complete control of material properties through solid state proton gating.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 195-215).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122082</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel optical sensors for chemical and biological applications</title>
<link>https://hdl.handle.net/1721.1/122080</link>
<description>Novel optical sensors for chemical and biological applications
Michon, Jérôme, Ph.D. Massachusetts Institute of Technology.
Optical sensors have attracted considerable interest due to their increased performance and ability to perform chemical identification through spectroscopy. Integrated sensors present the additional advantages of compactness and increased light-matter interactions. This thesis aims to advance the field of photonic sensing by demonstrating novel devices and applications and by improving the performance of current sensors. In particular, we studied flexible integrated photonic sensors and substrates for surface-enhanced Raman spectroscopy. We first propose and demonstrate a three-dimensional flexible photonic sensor array for stress mapping in soft materials systems such as cell cultures. Our device relies on stress-optical coupling to infer stress from optical measurements and uses a deterministic 3-D fabrication method to precisely position the sensors in space. We characterized the sensors' response to mechanical stimulation by measuring their strain-optical coupling coefficient.; Our device is amenable to measuring strains down to 0.001% or forces down to 1 nN in any matrix with a modulus greater than 300 Pa, with a spatial resolution of 100 μm, enabling the detection of the effects of about a dozen cells. Overall, our device provides fast, easy, and precise measurements even in opaque samples, in a greater range of volumes and geometries than previously available. More broadly, this platform prefigures the ability to perform multifunctional sensing and light delivery in three dimensions. In addition, we look at the efficiency of surface-enhanced Raman spectroscopy (SERS), a popular spectroscopy technique with a broad range of applications. 
Using reasoning based on the local density of states (LDOS), we derived a limit for the enhancement provided by nanoantennas, which includes factors relating to the antenna's material and geometry.; We then simulated the response of typical structures and found that they lie several orders of magnitude away from the bound. In the case of spheres, we showed that periodic structures can outperform isolated structures only under certain geometrical conditions. This study paves the way for the definition of performance metrics that can be used for further optimization of SERS substrates.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 127-139).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122080</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multilayer thin film oxides for resistive switching</title>
<link>https://hdl.handle.net/1721.1/122078</link>
<description>Multilayer thin film oxides for resistive switching
Tan, Zheng Jie.
Resistive switching devices are being actively pursued for use as the fundamental units in next-generation hardware deep-learning or neuromorphic systems. However, these devices remain difficult both to fabricate and to operate consistently. We present strategies that guarantee switching devices are functional post-fabrication, with switching cycles that are consistent both from cycle to cycle and from device to device. The resistance of all observed high and low resistance states (HRS/LRS) spanned just 0.23 and 0.19 decades on a logarithmic scale across all devices, with both states spanning 0.05 decades within single devices and all SET transitions falling within a 0.3 V span in our multilayer FIB-processed device. DFT simulations suggest that Au atoms from the top metal electrode, implanted deeper in the device by FIB, serve as bridging atoms for oxygen-vacancy filaments by promoting the formation of these vacancies.; In addition, multilayer thin oxide films reduce the stochasticity of filament formation and further improve the switching consistency. This strategy for high-consistency resistive switching devices was subsequently exploited for several purposes. First, multi-bit switching was demonstrated to yield approximately 7 distinguishable states with a single blind set, i.e., without having to program current compliances or use iterative schemes. Second, further insights into the SET and RESET mechanisms could be obtained using pulsed measurements, since switching stochasticity no longer obscures subtle trends in experimental data. The implications of this study go beyond the demonstration of a single high-consistency device. 
A deeper understanding of resistive switching devices can now be reached more easily, since causal links between processing parameters and device behavior can be established quickly given the significant reduction in switching stochasticity.; The new degrees of freedom introduced here in designing resistive switching devices will also hasten the search for an optimal device, bringing the realization of large-scale resistive RAM arrays for machine learning and hardware neuromorphic computing closer.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 103-109).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122078</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-mining natural language materials syntheses</title>
<link>https://hdl.handle.net/1721.1/122075</link>
<description>Data-mining natural language materials syntheses
Kim, Edward Soo.
Discovering, designing, and developing a novel material is an arduous task, involving countless hours of human effort and ingenuity. While some aspects of this process have been vastly accelerated by the advent of first-principles-based computational techniques and high throughput experimental methods, a vast ocean of untapped historical knowledge lies dormant in the scientific literature. Namely, the precise methods by which many inorganic compounds are synthesized are recorded only as text within journal articles. This thesis aims to realize the potential of this data for informing the syntheses of inorganic materials through the use of data-mining algorithms. Critically, the methods used and produced in this thesis are fully automated, thus maximizing the impact for accelerated synthesis planning by human researchers.; There are three primary objectives of this thesis: 1) aggregate and codify synthesis knowledge contained within scientific literature, 2) identify synthesis "driving factors" for different synthesis outcomes (e.g., phase selection) and 3) autonomously learn synthesis hypotheses from the literature and extend these hypotheses to predicted syntheses for novel materials. Towards the first goal of this thesis, a pipeline of algorithms is developed in order to extract and codify materials synthesis information from journal articles into a structured, machine readable format, analogous to existing databases for materials structures and properties. To efficiently guide the extraction of materials data, this pipeline leverages domain knowledge regarding the allowable relations between different types of information (e.g., concentrations often correspond to solutions).; Both unsupervised and supervised machine learning algorithms are also used to rapidly extract synthesis information from the literature. 
In examining the autonomous learning of driving factors for morphology selection during hydrothermal syntheses, TiO₂ nanotube formation is found to be correlated with NaOH concentrations and reaction temperatures, using models that are given no internal chemistry knowledge. Additionally, the capacity for transfer learning is shown by predicting phase symmetry in materials systems unseen by models during training, outperforming heuristic, physically-motivated baseline strategies, again with chemistry-agnostic models. These results suggest that synthesis parameters possess some intrinsic capability for predicting synthesis outcomes. The nature of this linkage between synthesis parameters and synthesis outcomes is then further explored by performing virtual synthesis parameter screening using generative models.; Deep neural networks (variational autoencoders) are trained to learn low-dimensional representations of synthesis routes on augmented datasets, created by aggregating synthesis information across materials with high structural similarity. This technique is validated by predicting ion-mediated polymorph selection effects in MnO₂, using only data from the literature (i.e., without knowledge of competing free energies). This method of synthesis parameter screening is then applied to suggest a new hypothesis for solvent-driven formation of the rare TiO₂ phase, brookite. To extend the capability of synthesis planning with literature-based generative models, a sequence-based conditional variational autoencoder (CVAE) neural network is developed. The CVAE allows a materials scientist to query the model for synthesis suggestions of arbitrary materials, including those that the model has not observed before.; In a demonstrative experiment, the CVAE suggests the correct precursors for literature-reported syntheses of two perovskite materials using training data published more than a decade prior to the target syntheses. 
Thus, the CVAE is used as an additional materials synthesis screening utility that is complementary to techniques driven by density functional theory calculations. Finally, this thesis provides a broad commentary on the status quo for the reporting of written materials synthesis methods, and suggests a new format which improves both human and machine readability. The thesis concludes with comments on promising future directions which may build upon the work described in this document.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122075</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling and understanding electro-chemo-mechanical properties of layered cuprate thin films</title>
<link>https://hdl.handle.net/1721.1/122074</link>
<description>Controlling and understanding electro-chemo-mechanical properties of layered cuprate thin films
Kim, Chang Sub, Ph.D. Massachusetts Institute of Technology.
Surface exchange kinetics are a key indicator of performance for electrochemical devices including solid oxide fuel cells. Due to broad flexibility in dopant selection and concentration, mixed ionic-electronic conducting (MIEC) ABO₃ perovskite oxides have been extensively explored as model systems to understand oxygen surface exchange kinetics for solid oxide fuel cell (SOFC) electrodes. Traditionally, transport properties are examined as functions of type and concentration of aliovalent cations, requiring multiple samples and resulting in changes in multiple characteristics and properties, often unintended. Moreover, the perovskite oxides generally accommodate only oxygen vacancies and not interstitials.; In this study, the type and concentration of ionic defects (oxygen vacancies vs interstitials) in MIEC layered cuprates (La₁.₈₅Ce₀.₁₅CuO₄) are systematically controlled, without change in cation doping or electronic conductivity, by electrochemical pumping of oxygen, and are analyzed through chemical capacitance, defect chemical modelling, and electrical conductivity. Oxygen surface exchange kinetics derived from electrochemical impedance spectra show a strong correlation with increases in oxygen defect concentration, for both vacancies and interstitials. Key thermodynamic parameters, such as band gap energy (0.54±0.10 eV) and anion Frenkel enthalpy (0.618±0.074 eV), are derived. Evidence of oxygen vacancy ordering is observed from chemical capacitance analysis. Layered cuprates have multiple crystalline structure types - namely T, T*, and T' - which share similar chemistry, but are known to have different properties, such as oxygen diffusivities.; Control of structure is systematically studied by using different substrates and seed layers, and by electrochemical pumping of oxygen. A dynamic and reversible structural change in layered cuprate thin films is discovered, for the first time, by oxygen nonstoichiometry control. 
Oxygen diffusivities of T and T' structures with the same cation chemistry (La₂CuO₄) are measured, for the first time, by oxygen isotope exchange experiments. The T-structured layered cuprate shows faster oxygen diffusion, but with a higher activation energy compared to the T' variant. On the other hand, the faster oxygen surface exchange kinetics exhibited by the T'-type as compared to the T-type structured cuprate, as measured by thin film conductivity relaxation, are attributed to a lower enthalpy of oxygen interstitial formation.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 98-100).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122074</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The ceramic technology of bronze-casting molds in ancient China : production practices at three western Zhou foundry sites in the Zhouyuan area</title>
<link>https://hdl.handle.net/1721.1/122073</link>
<description>The ceramic technology of bronze-casting molds in ancient China : production practices at three western Zhou foundry sites in the Zhouyuan area
Chastain, Matthew Lincoln.
During the second and first millennia BCE, peoples living near China's Yellow and Yangzi Rivers produced bronze ritual and military paraphernalia that represent arguably the most sophisticated use of metal casting by any ancient society. These objects were cast by pouring bronze into mold assemblies composed of interlocking sections. To survive the mechanical and thermal rigors of this casting process, the mold sections were constructed from highly specialized ceramic materials. This study investigates these ceramic materials. The primary focus is three foundry sites (Zhougongmiao, Kongtougou, Lijia) in the Zhouyuan area, Shaanxi province, a major bronze production center during the Western Zhou period (1045-771 BCE). Casting molds (72 total), other ceramic artifacts, and soils, all from the Zhouyuan area, were analyzed using electron microscopy, optical microscopy, and infrared spectroscopy.; Results were compared to similar analyses of molds from other sites in China (Houma, Xinzheng, Tangjiadun, Shigudun). Replication experiments were undertaken to reconstruct the production process of casting molds and to identify the performance advantages of ancient casting-mold material. Casting molds were made from a material unlike the clay-rich pastes used for pottery. This material, here called "silt paste", consists of a porous network of silt-sized (3.9-62.5 μm) quartz particles held together by a small proportion of clay. Across north-central China, similar material was used to make molds for all types of bronze objects. Silt paste was produced from commonplace loessic soils. Its composition and properties were manipulated by processing the soil to remove much of its clay. The resulting low-clay paste offers little workability, requiring specialized forming techniques. "Piping" was used to decorate some molds.
Molds were fired at 400-700°C.; The low clay content and low firing temperature of casting-mold material ensured minimal drying shrinkage and high thermal shock resistance, minimizing the risk of failure during the casting process. Producers at the three Zhouyuan-area sites practiced different engineering strategies, apparently because casting technology descended from the earlier Shang tradition was introduced into the area midway through the Western Zhou period. Differences in soil resources between northern and southern China may have influenced how bronze casting developed in each region.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 681-718).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122073</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defect-driven processing of two-dimensional transition metal dichalcogenides</title>
<link>https://hdl.handle.net/1721.1/122072</link>
<description>Defect-driven processing of two-dimensional transition metal dichalcogenides
Bogaert, Kevin Christopher.
Two-dimensional transition metal dichalcogenides (TMDs) are an emerging class of semiconductor materials that offer exciting new properties for future electronic and optoelectronic applications. However, many ongoing challenges related to synthesis and processing must be overcome before this nascent technology can become industrially viable. In this thesis, processing-related phenomena relevant to the fabrication of TMD heterostructures, alloys, and nanoporous membranes are presented. This thesis begins with an investigation of the role of substrate temperature in two-step chemical vapor deposition (CVD) growth of MoS₂/WS₂ heterostructures. We demonstrate diffusion-mediated synthesis of inverted lateral heterostructures following low MoS₂ growth temperatures in the second CVD step and homogeneous MoₓW₁₋ₓS₂ alloyed crystals following higher MoS₂ growth temperatures.; Investigating the nature of this diffusion-mediated process, we identify an energetically favorable atomistic model proposing that transition metal diffusion is driven by a heterogeneous distribution of sulfur vacancies. This model is corroborated by the synthesis of composition-graded MoₓW₁₋ₓS₂ alloy crystals in which the final-stage spatial distribution of transition metal atoms correlates with the intermediate-stage distribution of point defects. These heterogeneous crystals allow for correlation of the local optical properties with the local composition, demonstrating a variation in photoluminescence intensity spanning two orders of magnitude and reaching the maximum value for the equicompositional alloy Mo₀.₅W₀.₅S₂ (x=0.5). 
Furthermore, the correlation between the intermediate-stage distribution of point defects and the final-stage spatial distribution of transition metal atoms enables bespoke patterning.; Utilizing a laser annealing technique, we demonstrate the ability to locally induce defects that define the regions of preferential nucleation during subsequent CVD growth. Finally, defect processing is also demonstrated in nanoporous TMD membrane applications. Combining modeling with experimentation, we demonstrate how vacuum annealing time and temperature relate to nanopore properties such as average radius and edge structure. Control of these properties is essential for the fabrication of functional nanoporous membrane devices for sensing, filtration, and energy applications. This thesis motivates further work on TMD processing in pursuit of developing a fundamental understanding of the defect-driven diffusion mechanism, a larger library of interesting TMD compositions and structures, as well as industrially viable TMD devices.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-161).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122072</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chiral spin textures and dynamics in multi-sublattice magnetic materials</title>
<link>https://hdl.handle.net/1721.1/122071</link>
<description>Chiral spin textures and dynamics in multi-sublattice magnetic materials
Caretta, Lucas Marcelo.
Spintronics is a research field that aims to understand and control magnetic spins on the nanoscale and should enable next-generation data storage and logic. A promising approach is to encode bits of information using nanoscale spin textures, such as chiral domain walls or skyrmions that can be translated by currents across racetrack-like wire devices. One technological and scientific challenge is to stabilize small spin textures and to move them efficiently with high velocities, which is critical for dense, fast memory. For the past decade, work has focused on using ferromagnetic heterostructures to host chiral spin textures. However, ferromagnets have fundamental limitations that inhibit further progress: large stray fields limit bit sizes and precessional dynamics limit operating speeds. In this thesis, we examine a broader class of multi-sublattice materials: ferrimagnets.; We show that by using ferrimagnets, the fundamental limits of ferromagnets can be overcome, realizing order-of-magnitude improvements in both size and speed. Using metallic, ferrimagnetic Pt/Gd₄₄Co₅₆/TaOₓ films with a sizeable Dzyaloshinskii-Moriya interaction (DMI), we realize current-driven domain wall motion at 1.3 km s⁻¹ near the angular momentum compensation temperature and room-temperature-stable skyrmions with diameters close to 10 nm near the magnetic compensation temperature. For the first time, we show that the DMI is present in ferrimagnetic insulator garnet films and that the DMI requires a rare-earth ion in the magnetic insulator. Thickness-dependent studies and interface engineering show that the DMI manifests at the interface between the ferrimagnetic insulator and the substrate oxide.; We use a large spin-orbit torque from a Pt overlayer and the DMI to exploit ferrimagnetic dynamics, driving domain walls in low-damping and low-pinning GGG/TmIG/Pt heterostructures at velocities as high as 2.1 km s⁻¹. 
Moreover, by utilizing the ultra-low damping nature of Bi-YIG and an in-plane field, we can drive domain walls in GSGG/Bi-YIG/Pt at near relativistic velocities exceeding 4.0 km s⁻¹, where the domain wall velocity is no longer limited by a velocity plateau defined by the in-plane field, but rather by the magnon group velocity in Bi-YIG. These results show that multi-sublattice ferrimagnetic films are a promising materials system for next-generation data storage, paving a path forward for the field of spintronics.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122071</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational imaging through deep learning</title>
<link>https://hdl.handle.net/1721.1/122070</link>
<description>Computational imaging through deep learning
Li, Shuai, Ph. D., Massachusetts Institute of Technology.
Computational imaging (CI) is a class of imaging systems that uses inverse algorithms to recover an unknown object from the physical measurement. Traditional inverse algorithms in CI obtain an estimate of the object by minimizing the Tikhonov functional, which requires an explicit formulation of the forward operator of the physical system, as well as prior knowledge about the class of objects being imaged. In recent years, machine learning architectures, and deep learning (DL) in particular, have attracted increasing attention from CI researchers. Unlike traditional inverse algorithms in CI, the DL approach learns both the forward operator and the objects' prior implicitly from training examples. It is therefore especially attractive when the forward imaging model is uncertain (e.g. imaging through random scattering media), or when the prior about the class of objects is difficult to express analytically (e.g. natural images).; In this thesis, the application of DL approaches in two different CI scenarios is investigated: imaging through a glass diffuser and quantitative phase retrieval (QPR), where an Imaging through Diffuser Network (IDiffNet) and a Phase Extraction Neural Network (PhENN) are experimentally demonstrated, respectively. This thesis also studies the influence of the two main factors that determine the performance of a trained neural network: network architecture (connectivity, network depth, etc.) and training example quality (spatial frequency content in particular). Motivated by the analysis of the latter factor, two novel approaches, a spectral pre-modulation approach and the Learning Synthesis by DNN (LS-DNN) method, are successively proposed to improve the visual quality of the network outputs. 
Finally, the LS-DNN enhanced PhENN is applied to a phase microscope to recover the phase of a red blood cell (RBC) sample.; Furthermore, through simulation of the learned weak object transfer function (WOTF) and experiments on a star-like phase target, we demonstrate that our network has indeed learned the correct physical model rather than performing trivial pattern matching.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 143-154).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122070</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical breakdown acoustics : transduction and sensing underwater</title>
<link>https://hdl.handle.net/1721.1/122069</link>
<description>Optical breakdown acoustics : transduction and sensing underwater
Athanassiadis, Athanasios G.
In the sea, infrastructure such as ships, pipelines, and wind turbines is exposed to harsh conditions that can wear down the structures through wave loading and corrosion. Because of these wear mechanisms, maritime structures require regular inspections to identify early signs of damage or fatigue. Currently, inspections are performed visually or with contact acoustic transducers, often by a human diver. However, these methods are slow and costly, and can be hindered by surface irregularities like biofouling. Therefore, new sensing techniques are needed to meet the rising demand for offshore infrastructure monitoring. In this thesis, I develop optical breakdown as an acoustic source for non-contact measurements of underwater structures. Optical breakdown occurs when a high-power laser is focused to a small spot, causing nonlinear interactions between the light and water. A compact plasma forms at the focus and expands explosively, radiating a loud, broadband pressure wave.; Since this source is compact, laser-controlled and broadband, it provides unique sensing capabilities that overcome challenges faced by traditional transducers. First, I demonstrate how the breakdown source can be used to remotely measure the internal properties of submerged plates. The source is used to excite leaky Lamb waves in the plates, and broadband elastic dispersion spectra are measured using hydrophones in the water. The dispersion spectra are used to calculate the thicknesses and sound speeds in aluminum, steel, bronze and glass plates of varying thickness. Second, I characterize how the source can be controlled and scaled up by combining acoustic measurements with high-speed images of the breakdown plasma. 
In general, breakdown produces a loud (&gt;100 kPa at 10 cm), ultra-broadband (5 kHz-5 MHz) source, whose characteristics depend on measurement orientation and laser properties.; This transduction behavior is explained by modeling the breakdown plasma as an array of laser-driven explosions. When the laser is tightly focused, the plasma is compact, producing a loud and omnidirectional signal. However, for weak focusing and high energies, the plasma lengthens and becomes erratic, producing a weaker signal with less consistent behavior. These results reveal design challenges, tradeoffs and opportunities when adapting the breakdown source for different applications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 191-199).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122069</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pattern formation and essential responses for regeneration in planarians</title>
<link>https://hdl.handle.net/1721.1/122068</link>
<description>Pattern formation and essential responses for regeneration in planarians
Tewari, Aneesha G. (Aneesha Ghanhi)
The fundamental requirements for regeneration are poorly understood. Planarians can robustly regenerate all tissues after injury, involving stem cells, patterning cues and a set of cellular and molecular responses collectively called the "missing tissue" or "regenerative" response. The missing tissue response has long been considered a fundamental requirement of planarian regeneration. follistatin, which encodes an extracellular Activin inhibitor, is required for the missing tissue response after head amputation, and for subsequent regeneration. We found that follistatin is required for the missing tissue response regardless of the wound context, but only causes regeneration failure after head amputation. This head regeneration failure involves follistatin-mediated regulation of Wnt signaling at wounds, and is not a consequence of a diminished missing tissue response.; We found that all tested contexts of regeneration, including head regeneration, could occur with a defective missing tissue response, albeit at a slower pace. Our findings suggest that in the absence of major cellular and molecular programs induced by large injuries, regulation of wound-induced Wnt signaling to enable regenerative re-patterning along with continuous tissue turnover can mediate successful regeneration in essentially any wound context. Wnt signaling regulates primary body axis formation across the Metazoa, with high Wnt signaling specifying posterior identity. Whether a common Wnt-driven transcriptional program accomplishes this broad role is poorly understood. We identified genes acutely affected after Wnt signaling inhibition in the posterior of two regenerative species, the planarian Schmidtea mediterranea and the acoel Hofstenia miamia, which are separated by &gt;550 million years of evolution.; Wnt signaling was found to maintain positional information in muscle and regional gene expression in multiple differentiated cell types. 
sp5, Hox genes, and Wnt pathway components are down-regulated rapidly after β-catenin RNAi in both species. brachyury, a vertebrate Wnt target, also displays Wnt-dependent expression in Hofstenia. Planarian sp5 inhibits Wnt-dependent expression of trunk genes in the tail, promoting separate tail-trunk body domains. We propose that common regulation of a small gene set - Hox, sp5, and brachyury - might underlie the widespread utilization of Wnt signaling in primary axis patterning across the Bilateria.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122068</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enzyme structure, function, and evolution in flavonoid biosynthesis</title>
<link>https://hdl.handle.net/1721.1/122067</link>
<description>Enzyme structure, function, and evolution in flavonoid biosynthesis
Liou, Geoffrey.
Plant specialized metabolism is a key evolutionary adaptation that has enabled plants to migrate from water onto land and subsequently spread throughout terrestrial environments. Flavonoids are one particularly important class of plant specialized metabolites, playing a wide variety of roles in plant physiology including UV protection, pigmentation, and defense against herbivores and pathogens. Flavonoid diversity has increased in conjunction with land plant evolution over the past 470 million years. This dissertation examines the structure, function, and evolution of enzymes in the flavonoid biosynthetic pathway. First, we structurally and biochemically characterized orthologs of chalcone synthase (CHS), the enzyme that catalyzes the first step of flavonoid biosynthesis, from diverse plant lineages. By doing so, we gained insight into the sequence changes that gave rise to increased reactivity of the catalytic cysteine residue in CHS orthologs in euphyllophytes compared to basal land plants. We then developed methods and transgenic plant lines to study the in vivo function of these CHS orthologs, as well as whether their functional differences play a role in redox-based regulation of flavonoid biosynthesis. Finally, we examined enzymes involved in the biosynthesis of galloylated catechins, a highly enriched class of flavonoids in tea that are thought to have health benefits in humans. These findings contribute to an understanding of the evolution of enzyme structure and function in flavonoid biosynthesis, and how it has facilitated the adaptation of plants to a wide variety of terrestrial habitats.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122067</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The instructive roles of muscle cells in planarian regeneration</title>
<link>https://hdl.handle.net/1721.1/122066</link>
<description>The instructive roles of muscle cells in planarian regeneration
Cote, Lauren E. (Lauren Esther)
Regeneration requires both new cell production and patterning information to correctly place new tissue. Planarians are flatworms with remarkable capacity to regenerate after nearly any injury and to indefinitely maintain tissue homeostasis. Dividing cells, neoblasts, are the source of all new tissue, whereas positional information is hypothesized to be harbored by post-mitotic muscle, including the subepidermal body wall musculature. Single-muscle-cell mRNA sequencing along the anterior-posterior axis revealed regional gene expression within muscle cells. The resulting axial gene expression map included FGF receptor-like (FGFRL) homologs and genes encoding components of Wnt signaling. Two distinct FGFRL-Wnt circuits, involving juxtaposed anterior FGFRL and posterior Wnt expression domains, controlled head and trunk patterning.; Inhibition of FGFRL-Wnt circuit components led to the formation of ectopic posterior eyes or secondary pharynges, indicating their importance in maintaining the anterior-posterior axis. Inhibition of different myogenic transcription factors specifically ablated orthogonal subsets of the body wall musculature. Longitudinal fibers, oriented along the anterior-posterior axis, were required for regeneration initiation. Circular fibers maintained medial-lateral patterning during head regeneration. During early regeneration, transcriptional changes in muscle cells comprised part of a generic wound response displayed by all injuries, from incisions to decapitations. The sole exception to this generic response was the expression in body-wall muscle of the Wnt inhibitor notum, which occurs preferentially at anterior-facing wounds in longitudinal muscle fibers. Therefore, anterior-posterior polarity, the choice of head or tail regeneration, involves longitudinal body wall muscle fibers.; Planarian muscle was found to be highly secretory. 
Combining an in silico definition of the planarian matrisome and recent whole animal single-cell transcriptome data revealed that muscle is a major source of extracellular matrix (ECM). Inhibition of hemicentin-1 (hmcn-1), which encodes a highly conserved ECM glycoprotein expressed in body wall muscle, resulted in ectopic localization of internal cells, including neoblasts, outside of the muscle fiber layer. ECM secretion and maintenance of tissue separation indicated that muscle functions as planarian connective tissue. Whereas muscle is often viewed as a strictly contractile tissue, these findings reveal that planarian muscle has specific regulatory roles in axial patterning, wound signaling, and tissue architecture to enable correct regeneration.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122066</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identification of genotype-specific dependencies in Keap1-deficient lung adenocarcinoma</title>
<link>https://hdl.handle.net/1721.1/122065</link>
<description>Identification of genotype-specific dependencies in Keap1-deficient lung adenocarcinoma
Romero, Rodrigo, Ph. D., Massachusetts Institute of Technology.
Lung adenocarcinoma (LUAD) remains the world's leading cause of cancer-related mortality, with an estimated two hundred thousand new cases arising in 2019 in the United States alone. Molecular characterization of patient tumors has identified activating mutations in the small GTPase KRAS, encompassing ~30% of all LUAD patient samples. Efforts in therapeutic intervention for KRAS-mutant LUAD are numerous, but have been met with little clinical success. Importantly, large-scale sequencing studies have defined the genomic complexities of LUAD, resulting in the need to functionally characterize frequently mutated genes in the context of tumorigenesis. The development of somatic genome editing provided by clustered regularly interspaced short palindromic repeats (CRISPR) and the endonuclease CRISPR-associated protein 9 (Cas9) has accelerated our ability to functionally interrogate putative genetic drivers in the context of mammalian cells and autochthonously arising tumors.; Using CRISPR/Cas9, we validate the redox sensor Keap1 as a functional tumor suppressor that is frequently mutated in human LUAD. Notably, loss of Keap1 results in the hyperactivation of the Nrf2-regulated antioxidant response program with a concomitant increase in lung tumor burden and advanced disease. Moreover, we find that Keap1-mutant tumors display a heightened dependency on glutaminolysis. Pharmacological glutaminase inhibition suppresses the growth of Keap1-mutant tumors and human KEAP1-mutant patient-derived xenografts (PDXs). In an expanded CRISPR/Cas9 genetic screen targeting the druggable genome, we identify a novel Keap1-mutant-specific vulnerability to loss of the putative endoplasmic reticulum-resident acetyl-CoA transporter, Slc33a1. 
Targeted in vivo somatic ablation of Slc33a1 results in decreased tumor burden in Kras-driven, Keap1-deficient genetically engineered mouse models of LUAD.; In short, the data presented here demonstrate the power of CRISPR/Cas9 somatic editing to functionally validate tumor-promoting and tumor-suppressive genetic events with the simultaneous identification of genotype-specific vulnerabilities for a rapid target identification pipeline.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122065</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intracellular and extracellular promoters of metastasis to different organs</title>
<link>https://hdl.handle.net/1721.1/122064</link>
<description>Intracellular and extracellular promoters of metastasis to different organs
Hebert, Jess Dale.
Metastasis is the cause of the vast majority of cancer-related deaths, yet much remains unknown about this complex process, from how tumor cells complete the many steps of the metastatic cascade to how they can adapt to survive in multiple, vastly different secondary sites. I have therefore conducted two studies into different aspects of metastasis. First, I investigated the scaffold protein IQGAP1, which promotes primary tumor growth and invasiveness in several cancers. However, the role of IQGAP1 in metastasis has been unclear. We used IQGAP1 knockdown and knockout to investigate its role in the metastatic cascade in melanoma and breast cancer. I found that reduction of IQGAP1 expression inhibited the formation of metastases, without limiting primary or metastatic tumor growth. Furthermore, IQGAP1 knockout significantly decreased extravasation of tumor cells from circulation.; These data demonstrate that IQGAP1 promotes metastasis in vivo through regulation of extravasation and suggest that it may represent a valid therapeutic target for inhibiting metastasis. Second, I examined how cells from the same primary tumor can adapt to several different secondary environments. A critical component of every metastatic niche is the extracellular matrix (ECM), which provides structural support, migration control, and growth and survival signals. However, a comprehensive comparison of the ECM components of metastatic niches at various secondary sites had not yet been conducted. I isolated metastases from the brain, lungs, liver and bone marrow, which were all derived from parental MDA-MB-231 human breast cancer cells. We then enriched these tumor samples for ECM proteins and used quantitative mass spectrometry to analyze their ECM composition. 
Strikingly, the niches created at each site were distinct.; Using these data, I compared protein abundance across all metastatic sites to determine which ECM proteins were most significantly elevated in each particular tissue relative to the others. Following this analysis, I knocked down tumor cell expression of SERPINB1, a protein characteristically elevated in brain metastases, and observed reduced metastasis to the brain. Together, these studies offer insight into the fundamental biology of metastasis and metastatic niches, as well as provide potential targets of metastatic breast cancer for imaging and therapy.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122064</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining sources of nutrient limitation for tumors</title>
<link>https://hdl.handle.net/1721.1/122063</link>
<description>Defining sources of nutrient limitation for tumors
Sullivan, Mark Robert.
Tumor growth requires that cancer cells accumulate sufficient biomass to grow and divide. To accomplish this, tumor cells must acquire various nutrients, and growth slows if these metabolites are not obtained in sufficient quantities. Importantly, the metabolic demands of cancer cells can be different from those of untransformed cells, and nutrient accessibility in tumors is different than in normal tissues. Thus, tumor survival and growth may be limited by different metabolic factors than those that are necessary to maintain non-cancerous cells. This dissertation examines sources of nutrient limitation in tumors. We study the role of the amino acid serine in tumor growth and show that endogenous serine availability restrains growth of breast tumors. We also demonstrate that breast cancer and melanoma can overcome physiological serine limitation by upregulating expression of the serine synthesis pathway enzyme phosphoglycerate dehydrogenase (PHGDH).; To further study amino acid and nucleotide metabolism in tumor growth, we examine the role of the enzyme methionine synthase (MTR) in tumor progression. MTR is involved in both methionine synthesis and folate metabolism and may be important for tumor progression. We find that MTR is required to maintain intracellular levels of both S-adenosyl methionine and nucleotides, but not methionine. We observe that MTR is dispensable for growth in standard culture media, but essential in media containing the folate source available in blood. Further, MTR is essential for folate metabolism and tumor growth in vivo. 
The conditional requirement for MTR depending on the source of extracellular folate highlights the importance of understanding which nutrients are available to tumors in vivo, as nutrient accessibility can determine whether a given metabolite or pathway is limiting for tumor growth.; To define the nutrient environment present in tumors, we quantitatively profile the metabolites present in tumor interstitial fluid (TIF). We find that the nutrients available to tumors in TIF differ from those present in circulation. Further, by comparing TIF nutrient levels between murine cancer models, we find that tumor type, anatomical location and animal diet affect local nutrient availability. Together, these studies provide new insight into sources of nutrient limitation in tumors.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122063</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Mitotic Exit Network detects spindle position and anaphase entry</title>
<link>https://hdl.handle.net/1721.1/122062</link>
<description>The Mitotic Exit Network detects spindle position and anaphase entry
Campbell, Ian Winsten.
The Mitotic Exit Network (MEN), an essential GTPase signal-transduction cascade, controls mitotic exit in budding yeast. The MEN protects genomic integrity by ensuring chromosome segregation is complete prior to cytokinesis. Two signals are required for MEN activation: (1) movement of the nucleus into the daughter cell and (2) anaphase onset. These two events only coincide after anaphase chromosome segregation, ensuring that mitosis is complete prior to cytokinesis. The MEN is regulated by spindle position. The MEN GTPase, Tem1, is inhibited as long as the entire spindle resides in the mother cell. Tem1 becomes active when spindle elongation along the mother-daughter axis drives half of the nucleus into the bud. If spindle elongation fails to move part of the nucleus into the daughter cell, MEN activation is prevented, providing time to reposition the spindle. In addition to this spatial regulation, activation of the MEN is restricted to anaphase by inhibitory cyclin-dependent kinase (Cdk) phosphorylation of the MEN kinase cascade. During anaphase onset, Cdk activity decreases, creating a temporal signal that releases the MEN from inhibition. This temporal signal prevents MEN activation should the nucleus move into the daughter cell prior to anaphase. By integrating multiple inputs, the MEN creates a regulated cell-cycle transition that is responsive to cell-cycle stage and spindle position.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122062</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>BMI1 is a context-dependent tumor suppressor that is a barrier to dedifferentiation in non-small cell lung adenocarcinoma</title>
<link>https://hdl.handle.net/1721.1/122061</link>
<description>BMI1 is a context-dependent tumor suppressor that is a barrier to dedifferentiation in non-small cell lung adenocarcinoma
Neupane, Rachit.
Predictive value is expected when preclinical models of disease are used for research. However, not all models appropriately mimic the disease progression or the treatment paradigm in the clinic. This thesis addresses an epigenetic regulator, Bmi1, which acts in stem cells to maintain their proliferative and self-renewal capacity primarily through silencing of the Ink4a/Arf locus. Bmi1 has been proposed as a good therapeutic candidate in cancer because of its presumed role in maintaining tumor propagating cells (TPCs). This conclusion is based on the observed tumor suppressive effects of Bmi1 deletion in in vitro cell culture models, in vivo transplant models, and autochthonous models in which Bmi1 was absent throughout development. However, to date, no one has assessed the consequences of deleting Bmi1 in existing autochthonous tumors, to mimic patient treatment in the clinic.; To accomplish this, we have generated a mouse model that allows induction of autochthonous lung adenocarcinoma, driven by oncogenic Kras and Tp53 loss (KP LUAD), and subsequent deletion of Bmi1 specifically within the tumor cells once more than half the tumors progress to grade 3 or higher. We confirmed that this model yielded Bmi1 loss that was tumor-specific and almost complete. We then aged tumor bearing mice for up to seven weeks post Bmi1 deletion to determine the impact on LUAD. Unexpectedly, Bmi1 deletion did not yield significant tumor suppression. Instead, gene expression analyses of Bmi1 deficient tumor cells revealed upregulation of a gastric gene expression program that is a known marker of lung tumor progression towards a more aggressive state in the KP LUAD model. 
Additionally, single cell sequencing showed that Bmi1 deficient tumors contained a higher frequency of cells that expressed previously described markers of TPCs and metastasis.; We also extended these findings to colorectal cancer where we show that deletion of Bmi1 is not tumor suppressive in either in vitro organoids or orthotopic transplants. Given these findings, we conclude that deletion, or inhibition, of BMI1 in existing tumors will be ineffective for cancer treatment in the contexts examined, and potentially deleterious because it can enable acquisition of alternate differentiation states that promote tumor progression.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. "May 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122061</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dimensions of ergativity in Inuit : theory and microvariation</title>
<link>https://hdl.handle.net/1721.1/122054</link>
<description>Dimensions of ergativity in Inuit : theory and microvariation
Yuan, Michelle, Ph. D., Massachusetts Institute of Technology.
This thesis investigates microvariation in the ergative system of the Inuit dialect continuum, as a window into the theoretical status of ergative alignment, argument licensing, and Agree. The empirical focus of this thesis is on the Inuit varieties in which the canonical erg-abs ergative construction has been observed to be relatively weak compared to other varieties, giving rise to an unusual case alignment that has properties of both ergative and accusative systems (e.g. Johns, 2001, 2006; Carrier, 2017). Consequently, from a typological standpoint, the existence of such variation offers a unique testing ground for examining these grammatical phenomena. While most previous literature on this weaker pattern has focused on the widening distribution of the abs-mod antipassive construction, I present novel evidence pointing towards microvariation in the syntax of the ergative construction itself.; The central proposal of this thesis is that the status of ergativity within a given Inuit variety is directly attributable to the underlying status of its object agreement morphology, and that this is the source of variation in case alignment properties across the Inuit dialect continuum. This correlation is revealed by documenting and analyzing several previously unnoticed properties of Inuktitut, the group of Inuit dialects spoken in Nunavut, Canada. In particular, the object-referencing morphemes in Inuktitut pattern like pronominal doubled clitics, diverging from canonically ergative Inuit varieties (e.g. Kalaallisut, spoken in Greenland), whose object-referencing morphemes behave like exponents of true φ-agreement. I present a novel analysis of ergativity across Inuit that recasts this φ-agreement vs.
clitic doubling distinction as variation in the syntactic nature of the structurally high abs object co-occurring with the erg subject.; Specifically, I argue that the modality of erg case assignment holds constant across all dialects: erg case is uniformly a dependent case (Marantz, 1991; Baker, 2015), assigned to a nominal in the presence of a second, structurally local nominal element (its case competitor). However, the distribution of erg case is simultaneously constrained by the nature of its local case competitor -- which is a full abs DP in robustly ergative varieties such as Kalaallisut, but a pronominal D0 element in more weakly ergative varieties such as Inuktitut. Variation in the status of ergativity across Inuit is therefore solely determined by the properties of the transitive object, while the properties of the transitive (erg-marked) subject remain constant. I then relate the theoretical underpinnings of this proposal to two other major properties of Inuktitut grammar. First, I argue that clitic doubling is derived by two interacting steps -- syntactic movement of a D0-element, followed by postsyntactic Merger -- and demonstrate that the pronunciation of movement chains is regulated by Merger. Crucially, this same level of interaction can be seen to underlie certain recalcitrant aspects of noun incorporation in Inuktitut, in turn motivating a postsyntactic analysis of Inuktitut noun incorporation (cf. Bok-Bennema and Groos, 1988). Second, I argue that clitic doubling is triggered by φ-Agree, which in Inuktitut is able to target DPs but not PPs; encountering a PP leads to failed Agree (Bobaljik, 2008; Preminger, 2011, 2014). This is evidenced by hitherto unnoticed interactions between φ-Agree and anaphoric objects, which are argued to bear lexical mod case as an Anaphor Agreement Effect, as well as parallel interactions between φ-Agree and antipassive objects, which bear structural mod Case (cf. 
Bok-Bennema, 1991; Spreng, 2012).; More broadly, this thesis offers a case study on using microvariation as a methodology for investigating syntactic theory, and vice versa, by treating the Inuit varieties under discussion as minimally-differing points along an otherwise gradient system.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 251-274).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122054</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in financial innovation and development</title>
<link>https://hdl.handle.net/1721.1/122051</link>
<description>Essays in financial innovation and development
Carlson, Stacy (Stacy Lynn)
In this thesis, I use rich individual- and household-level data to explore the impact of different forms of financial innovation on development outcomes in Africa. Chapters 1 and 2 utilize data from a digital lender that provides credit over mobile phones. Chapter 1 presents novel evidence on the magnitude of consumer liquidity constraints and the relative importance of the various forms of asymmetric information that may contribute to them. I find that borrowers almost always take out the maximum credit line available to them, consistent with short-term liquidity constraints. I then use quasi-experimental variation in credit policies across individuals and time to estimate the relative magnitude of selection and incentive effects among new borrowers. I find that information asymmetries go a long way toward explaining high observed default rates. Chapter 2, my job market paper, explores the impact of dynamic incentive schemes on borrower behavior in the digital credit market. I use a series of quasi-experiments induced by policy nonlinearities to estimate the effect of progressive lending policies on borrower repayment decisions. I find that new borrowers who receive a larger initial loan are more likely to default on that loan. By contrast, repeat borrowers who receive a larger loan (relative to their previous loan) are actually less likely to default. I provide evidence that this reflects a strategic repayment motive, whereby borrowers repay in order to get access to larger loans in the future. Chapter 3, written with Yu Shi, uses household-level data from a panel survey in Nigeria to explore the relative importance of formal versus informal finance. We find that informal financial markets remain important and are quite effective in enabling consumption smoothing by lower-income households and businesses in Nigeria.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018; Cataloged from student-submitted PDF version of thesis. "Some pages in the original document contain text that runs off the edge of the page"--Disclaimer Notice page.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122051</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A simplified theory of neutron thermalization</title>
<link>https://hdl.handle.net/1721.1/121925</link>
<description>A simplified theory of neutron thermalization
De Sobrino, Luis Gonzaga, 1929-
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1960.; Vita.; Includes bibliographical references (leaves 131-134).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121925</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The operation ring for connective K-theory.</title>
<link>https://hdl.handle.net/1721.1/121923</link>
<description>The operation ring for connective K-theory.
Meiselman, Moshe.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1967.; Vita.; Bibliography: leaves 67-69.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121923</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional and structural studies of a small F-actin binding domain</title>
<link>https://hdl.handle.net/1721.1/121921</link>
<description>Functional and structural studies of a small F-actin binding domain
Doering, Don Shimon.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Biology, 1992.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121921</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emission and absorption of microwave radiation by interstellar OH.</title>
<link>https://hdl.handle.net/1721.1/121920</link>
<description>Emission and absorption of microwave radiation by interstellar OH.
Rogers, Alan Ernest Exel.
Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1967. Ph.D.; Lacking l. 106. Vita.; Bibliography: leaves 195-197.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121920</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Macrosegregation.</title>
<link>https://hdl.handle.net/1721.1/121918</link>
<description>Macrosegregation.
Nereo, George Edmund.
Massachusetts Institute of Technology. Dept. of Metallurgy. Thesis. 1966. Sc.D.; Bibliography: leaves 115-118.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121918</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A modal expansion technique for space-time reactor kinetics</title>
<link>https://hdl.handle.net/1721.1/121917</link>
<description>A modal expansion technique for space-time reactor kinetics
Foulke, Larry Ray.
Massachusetts Institute of Technology. Dept. of Nuclear Engineering. Thesis. 1967. Ph.D.; Bibliography: leaves 286-290.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121917</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Money, capital flows and protectionism : the Industrial Revolution revisited</title>
<link>https://hdl.handle.net/1721.1/121911</link>
<description>Money, capital flows and protectionism : the Industrial Revolution revisited
Brezis, Elise Scheiner.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 1989.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121911</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel approach for the study of sequences that control cytoplasmic RNA stability</title>
<link>https://hdl.handle.net/1721.1/121909</link>
<description>A novel approach for the study of sequences that control cytoplasmic RNA stability
Kabnick, Karen Stephanie.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Biology, 1987.; Bibliography: leaves 190-202.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121909</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of three dimensional trace element distributions by the use of monochromatic x-ray microbeams</title>
<link>https://hdl.handle.net/1721.1/121908</link>
<description>Determination of three dimensional trace element distributions by the use of monochromatic x-ray microbeams
Boisseau, Paul.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 1986.; Bibliography: leaves 10-11.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121908</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Problems in discrete applied mathematics</title>
<link>https://hdl.handle.net/1721.1/121905</link>
<description>Problems in discrete applied mathematics
Assmann, Susan Fera.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1983.; Vita.; Bibliography: leaves 120-124.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121905</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The analysis of hostage negotiation through a novel</title>
<link>https://hdl.handle.net/1721.1/121903</link>
<description>The analysis of hostage negotiation through a novel
Pieczenik, Steve Richard.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1982.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121903</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Method development for the analysis of electrochemical and transport processes in redox flow batteries at practical operating conditions</title>
<link>https://hdl.handle.net/1721.1/121901</link>
<description>Method development for the analysis of electrochemical and transport processes in redox flow batteries at practical operating conditions
Barton, John Leonard.
The focus of this thesis is the development and assessment of techniques for the analysis of electrochemical and transport processes in redox flow batteries (RFBs) at moderate to high active species concentrations under direct current conditions. RFBs hold promise as an energy-intensive storage technology suitable for supporting the integration of intermittent renewable energy sources into the grid, but further improvements in technical performance and reductions in system cost are needed for broad deployment. At their core, all thesis projects are aimed at enabling the development of system descriptors that correlate material properties (e.g., viscosity, conductivity), cell geometry (e.g., flow field design), and operating parameters (e.g., flow rate, current density) to system performance metrics, such as cycle efficiencies and area-specific resistance.; More specifically, the investigation is divided into three primary projects: the development and assessment of a research-scale flow cell; measurements of mass-transfer coefficients; and integration of a polarization model into a standalone application useful for assessing system performance. The differential flow cell is engineered leveraging validation material from industrial collaborators. Not only is its performance consistent with that of a ten-fold larger cell, but its smaller modular design enables rapid assessment of new chemistries and cell components with minimal materials requirements. Mass-transfer coefficients are then measured using this cell with a well-behaved redox active electrolyte (RAE), in which glucose is added in various amounts to modify the system viscosity with minimal changes to other properties.; The results and methodology developed could be extended to other similar RAE systems, either as preliminary estimates of mass-transfer performance or as a protocol for carefully evaluating the impact of new system parameters on mass-transfer.
Finally, results of this mass-transfer analysis are incorporated into a simple flexible stack model, which can be used to estimate system performance as a function of key materials properties with limited empiricism.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-134).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121901</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure-property engineering and device fabrication of conjugated polymers by chemical vapor deposition</title>
<link>https://hdl.handle.net/1721.1/121900</link>
<description>Structure-property engineering and device fabrication of conjugated polymers by chemical vapor deposition
Wang, Xiaoxue, Ph. D., Massachusetts Institute of Technology.
This thesis focuses on the in-situ molecular engineering of chemical vapor deposition (CVD)-synthesized soft materials and device applications. High-quality, large-scale synthesis of soft materials is the foundation of soft electronics. The CVD approach has proved to be a low-cost and scalable technique to synthesize a wide variety of soft materials with desired properties. The first part (Chapter 2) will be dedicated to the record high electrical conductivity in CVD-grown poly(3,4-ethylenedioxythiophene) (PEDOT) thin films with controllable crystallization and morphology. The polymeric conducting thin film can be used as flexible and transparent electrodes in many electronic devices. Previously, the key problem limiting the electrical conductivity of PEDOT was the difficulty of maintaining a high carrier mobility simultaneously with a high carrier density in this polymer.; In order to solve this problem, we developed a facile CVD technology to effectively control the carrier mobility at high carrier density by controlling the crystallite configuration and morphology of PEDOT through molecular engineering. As a result, we successfully synthesized wafer-scale PEDOT thin films with a conductivity of 6259 S/cm, which is comparable to the widely used expensive indium tin oxide (ITO). This is the record high conductivity for large-scale thin film PEDOT. In addition, we also analyzed the polymeric system with a detailed theoretical model based on Boltzmann transport in order to understand the charge carrier transport mechanism. As a wafer-scale demonstration, we directly synthesized the highly conductive PEDOT thin film on a 4-inch silicon wafer and successfully fabricated PEDOT-Si diode arrays operating at 13.56 MHz, which can be used as high frequency rectifiers for RFID readers.; The second part (Chapter 3) will be dedicated to the enhancement of thermal properties of conjugated polymers synthesized using CVD technology.
We developed a self-assembling CVD growth method for intrinsic poly(3- hexylthiophene) (P3HT) thin films and successfully achieved record high cross-plane thermal conductivity (&gt;10x common polymers) in soft materials. Such a unique CVD growth mechanism results in an extended chain structure with good [pi]-[pi] stacking in P3HT, which significantly enhances the thermal transport within the CVD P3HT thin films. The third part (Chapter 4-7) will be about the fabrication of chemical sensors based on various nanostructured PEDOT and related copolymers, taking advantage of their ultrahigh surface-to-volume ratio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis. Page 223 blank.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121900</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanostructured electroactive polymeric composites for energy storage and separation applications</title>
<link>https://hdl.handle.net/1721.1/121899</link>
<description>Nanostructured electroactive polymeric composites for energy storage and separation applications
Tian, Wenda.
Electroactive polymeric materials have garnered considerable interest due to their potential applications in advancing electrochemical energy storage, sensing, catalysis, and separations systems. Electroactive polymers include conducting polymers with [pi]-conjugated backbones and redox polymers with localized redox-responsive moieties. The electro-responsive property of both conjugated and redox polymers depends strongly on the efficient transport of counter-ions within the polymers to maintain charge neutrality. The interactions at the molecular interface between the polymer and target entities ultimately dictate the performance of electroactive materials in the aforementioned applications. Nanostructures provide a shortened diffusion path for the transport of electrolyte ions or target molecules during a reversible redox process. The large interfacial area arising from an improved morphology allows efficient utilization of polymeric materials.; Consequently, the union of nanostructures and electro-responsiveness has proven to be a powerful strategy to enhance the merit of electroactive polymers in the design of next-generation energy storage devices, sensors, catalysts and separation platforms. In this thesis, we focused on developing novel synthesis strategies for nanostructured electroactive polymeric composites. Two different synthesis approaches for the polymeric component were realized by exploiting the inter-molecular interactions between monomeric units and other entities during an electrochemical polymerization process. In the first approach, a nanostructured polyvinylferrocene/polypyrrole hybrid was fabricated via a co-deposition method as a result of the [pi]-[pi] stacking interactions between the aromatic pyrrole monomers and the metallocene moieties of polyvinylferrocene. 
The hybrid has a highly porous morphology with a significantly increased surface area compared to its bulk counterpart.; The synergistic effects between polyvinylferrocene and polypyrrole lead to enhanced ionic and electronic conductivity and, consequently, a higher specific capacitance as a supercapacitor electrode material. The second approach was a diffusion-controlled electrochemical method facilitated by the interactions between pyrrole monomers and the carbamate groups in CO₂-bound polyamines. As a result, a porous polypyrrole coating consisting of nanofibrous structures was synthesized and deposited on a carbon microfiber substrate. This composite material demonstrated enhanced electrochemical properties and adsorption capability towards aldehydes as a result of its porous morphology and high surface area. We later applied this composite material in achieving electrochemically modulated adsorption of polynucleotides.; The adsorption process was found to have a strong dependence on the oxidation states of the composite due to the electrostatic interactions between positively charged polypyrrole backbones and negatively charged phosphate groups in DNAs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-137).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121899</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>DNA methylation detection using an engineered methyl-CpG-binding protein</title>
<link>https://hdl.handle.net/1721.1/121898</link>
<description>DNA methylation detection using an engineered methyl-CpG-binding protein
Tam, Brooke Elizabeth.
DNA methylation, specifically the methylation of cytosine bases, is an important biomarker, as abnormal DNA methylation patterns are found in many different types of cancer. Currently, a small number of cancer hospitals evaluate the methylation status of the MGMT gene promoter to determine the best course of treatment for patients with glioblastoma. However, improved methylation detection techniques are required in order to expand the availability of such testing to more patients. Methyl-CpG-binding domain (MBD) proteins bind specifically to methylated DNA sequences, and many assays have been developed that use these proteins in methylation profiling of DNA. The wild-type proteins in the MBD family bind specifically to symmetrically methylated CpG dinucleotides. Here, I have engineered a new MBD variant that binds to hemi-methylated DNA but not unmethylated DNA, allowing for the detection of a methylated target sequence hybridized to a simple, unmethylated DNA probe.; With four amino acid substitutions, a protein that did not show any binding to hemi-methylated DNA at concentrations up to 100 nM was altered to bind hemi-methylated DNA with high affinity. Based on equilibrium binding titrations, this engineered variant binds a DNA sequence with a single hemi-methylated CpG dinucleotide with a dissociation constant of 5.6 ± 1.4 nM. After engineering a protein to bind hemi-methylated CpG dinucleotides, I developed a simple, hybridization-based assay to determine the methylation status of the MGMT promoter using this protein variant and magnetic microparticles. The target DNA molecules are captured on the surface of magnetic microparticles and an MBD-GFP fusion protein is added to bind if the captured target is methylated. 
Therefore, MBD binding can be detected directly based on fluorescence of the microparticles after the binding step without requiring any chemical conversion or additional labeling steps.; In addition to simplifying the assay and eliminating the need for methylated capture probes, I was able to improve the sensitivity of the assay to 5 pM target DNA. Finally, I also studied the DNA capture and MBD binding events to identify the key parameters and guide future efforts to develop clinically relevant diagnostics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 96-103).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121898</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of solution complexation on crystallization processes</title>
<link>https://hdl.handle.net/1721.1/121897</link>
<description>Effects of solution complexation on crystallization processes
Pons-Siepermann, Carlos A.(Carlos Alberto)
Crystallization is a separation technique widely used in chemical processes to produce high-purity solid products. The impact of solution chemistry on the kinetics and thermodynamics of crystallization processes is neither well understood nor properly characterized. Therefore, there exists a need for research to develop methods that can exploit the effect of impurities, additives, and foreign molecules on the chemistry within crystallizing solutions. The use of rational chemical interactions has the potential to enhance the controllability of crystallization unit operations, providing a new process handle with which chemical engineers can create new crystal forms, enhance product purity, improve yields, or inhibit the formation of undesirable crystals. This thesis focuses on the use of small-molecule chemical additives that exhibit selective intermolecular interactions with crystallizing solutes or their impurities.; Within the work reported, there were two major areas of study: purification and nucleation control. Additive-driven solution complexation with impurities was demonstrated to be a powerful tool for enhancing the purity of the crystal product without penalizing the process yield. The technique was implemented for the separation of structural isomers, and tested for the purification of a large pharmaceutical compound with challenging chemical features. The results herein helped elucidate the capabilities of complex-assisted crystallization, and also outline the thermodynamic and chemical limitations of the technique. The second half of the work explored the impact on nucleation rates of dilute impurities that interact with the supersaturated crystallizing solute. 
For the first time, impurity-driven nucleation inhibition was systematically and quantitatively demonstrated, using high-throughput induction measurements.; The experimental results were used to discern the thermodynamic and kinetic impact of the inhibitor, and to elucidate a potential underlying mechanism for the observed behavior. The data demonstrated that even a weakly interacting dilute additive can lead to massive nucleation rate depression through a kinetic pathway, most likely due to the disruption of the ordering of the solute molecules within high-concentration clusters during nucleation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 113-119).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121897</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonequilibrium energy transport in heterogeneous nanoscale semiconductors</title>
<link>https://hdl.handle.net/1721.1/121896</link>
<description>Nonequilibrium energy transport in heterogeneous nanoscale semiconductors
Lee, Elizabeth Moon Young.
Modern optoelectronic devices such as photovoltaics and LEDs operate based on transport of nanoscale energy carriers. Promising materials for these devices are assemblies of nanometer-sized semiconductors, including quantum dots (QD) and organic conjugated polymers. Unlike crystalline semiconductors that are homogeneous in energy and space, static and dynamical heterogeneity exist in nanoscale semiconductors. To control energy transport and improve device efficiencies, this thesis presents nonequilibrium energy transport models to understand the effect of nanoscale heterogeneity on material-wide optoelectronic properties, with emphasis on excitons. First, continuum-level analytical theories and finite element simulations are employed to derive exciton distributions in semiconductor films. These models are applied to transient photoluminescence interfacial quenching experiments at a CdSe QD thin-film interface to measure the exciton diffusion length.; A linear elasticity theory is constructed to understand the effect of surface ligands on low-frequency vibrations of colloidal QDs. This theory combined with Raman spectroscopy allows measurement of elastic properties of surface-bound ligands that are otherwise challenging to probe. Next, a nonequilibrium exciton dynamics model developed at the coarse-grained level is used to investigate energy transport in disordered QD solids. Through kinetic Monte Carlo and chemical master equation simulations combined with time-resolved spectroscopy techniques, this model reveals that static energetic disorder causes ensemble-averaged exciton diffusivity to decrease over time such that the net diffusivity is reduced relative to the ordered case. A subsequent model based on resonance energy transfer theory discovers scenarios in which there can be disorder-enhanced incoherent energy transport. 
Such enhancements can be important in processes that are sensitive to molecular-scale fluctuations.; Finally, the role of dynamical disorder due to the environment is interrogated in the case of organic conjugated polymers in solution: the interplay between thermal fluctuations and excited-state forces drives exciton migration along the polymer backbone. Simulation results are verified with anisotropy decay measurements of poly(3-hexylthiophene). To simulate exciton transport in large systems like long-chain conjugated polymers, a constrained adiabatic dynamics method is developed. Application of this method highlights that failure to preserve wavefunction symmetry while preventing the trivial unavoided crossing problem in an adiabatic dynamics simulation can create unphysical electronic dynamics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121896</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of polymer - lipid nanoparticles for potent mRNA delivery to the lung</title>
<link>https://hdl.handle.net/1721.1/121895</link>
<description>Development of polymer - lipid nanoparticles for potent mRNA delivery to the lung
Kaczmarek, James Cliff.
Messenger RNAs (mRNAs) are an emerging therapeutic modality that holds great promise to specifically and completely treat genetic disease. mRNA has been used as a vaccine, as a protein replacement therapy, and even as a means of inducing permanent genomic editing via CRISPR. However, unlike traditional small molecule drugs, naked mRNA cannot readily enter the cellular cytosol, where it must localize in order to be successfully translated. The field of nucleic acid delivery, therefore, is largely concerned with the development of materials that can encapsulate mRNA and facilitate its transport into the cellular cytoplasm in vivo. Many lipid nanoparticles originally designed to deliver short interfering RNA (siRNA) have been successfully repurposed to deliver mRNA, although their application is limited mainly to the liver. Thus, there is a continued need for the development of new materials for mRNA delivery in order to expand its therapeutic potential throughout the body.; Another class of nucleic acid delivery materials, poly(β-amino ester)s, or PBAEs, have been successful in delivering DNA cargo in vitro and in vivo, but their capacity for mRNA delivery has been relatively understudied. In this thesis, we utilized formulation techniques developed for lipid nanoparticles to systemically deliver mRNA-loaded PBAEs. We showed that non-covalent formulation of PBAE-mRNA nanoparticles with a polyethylene glycol-lipid conjugate imparts serum stability to the nanoparticles, which in turn correlates with in vivo efficacy. Specifically, we demonstrated that these materials are mainly effective in lung tissue following intravenous administration. The lung targeting and potency of these nanoparticles was then greatly improved through statistical optimization of the polymer synthesis and nanoparticle formulation. 
These optimized nanoparticles transfected the majority of lung endothelial cells, as well as a variety of immune cell populations.; The nanoparticles were also used as a means of quantitatively comparing mRNA and DNA delivery in vitro and in vivo. We showed a drastic decrease in DNA potency in vivo compared to mRNA, attributed to the difficulty of entering the nucleus in slowly dividing cells. Moreover, we observed similar kinetics of protein expression between mRNA and DNA. Additionally, we demonstrated in in vitro proof-of-concept studies that PBAE nanoparticles are capable of CRISPR-mediated genome editing. We also showed successful mRNA delivery in a variety of other tissues beyond the lung endothelium. Through the development of novel chemistries using both PBAEs and lipids, we were able to achieve mRNA delivery specifically to the lung endothelium as well as the spleen in vivo. Taken together, the materials developed herein greatly expand the therapeutic capabilities of mRNA.; It is our hope that the work presented in this thesis translates into therapeutically relevant treatments while also providing insight into design criteria for successful mRNA delivery.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 180-199).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121895</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making computer vision methods accessible for cell classification</title>
<link>https://hdl.handle.net/1721.1/121894</link>
<description>Making computer vision methods accessible for cell classification
Hung, Jane Yen.
Computers are better than ever at extracting information from visual media like images, which are an especially powerful data source in biology. The field of computer vision takes advantage of this fact, using computational algorithms to analyze image data and gain higher-level understanding. Recent advances in machine learning, such as deep learning based architectures, have greatly expanded its potential. However, biologists often lack the training or means to use new software or algorithms, leading to slower or less complete results. This thesis focuses on developing computer vision methods and software implementations for biological applications that are both easy to use and customizable. The first application is to cardiomyocytes, whose sarcomeric structure can be quantified with spectral analysis. Next, CellProfiler Analyst, an updated software application for interactive machine learning classification and feature analysis, is described along with its use for classifying imaging flow cytometry data. Further software-related advances include the first demonstration of a deep learning based model designed to classify biological images with a user-friendly interface. Finally, blood smear images of malaria-infected blood are examined using traditional machine learning based segmentation pipelines and novel deep learning based object detection models. To encourage further development of these types of object detection models, a software package for simpler object detection training and testing, called Keras R-CNN, is presented. The applications investigated here show how computer vision can be a viable option for biologists who want to take advantage of their image data.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-113).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121894</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting self-assembly in globular protein-polymer bioconjugates</title>
<link>https://hdl.handle.net/1721.1/121893</link>
<description>Predicting self-assembly in globular protein-polymer bioconjugates
Huang, Aaron.
Globular proteins offer powerful solutions for addressing challenges in the fields of medicine, industry, defense, and energy. Enzymes perform reactions with high efficiency and specificity, allowing for minimal generation of undesired side products even while exhibiting rapid turnover, traits difficult to replicate in synthetic catalysts. These traits make proteins attractive tools for immobilization to form functional catalysts and sensors. Nevertheless, there are many challenges in creating these advanced materials. The activity of the protein must be retained, and control over the structure of the material is desirable. Protein-polymer block copolymers offer an attractive solution to these issues. These materials have been shown to self-assemble into ordered nanodomains while retaining protein activity. However, the phase behavior of these materials is not fully understood due to the complex nature of anisotropic interactions between the proteins.; Within this thesis, a method for creating highly active thin-film catalysts from myoglobin-PNIPAM bioconjugates is established by flow-coating these materials onto solid supports and then cross-linking them with glutaraldehyde. These catalysts exhibit considerable stability and perform reactions 5-10 times more efficiently than catalysts formed using other common immobilization techniques. However, the self-assembly and structural control of this catalyst was observed to be poor, and it was hypothesized that the poor self-assembly relative to mCherry and EGFP systems could be a consequence of the protein shape. In order to probe the effect of protein shape on self-assembly, a panel of mCherry bioconjugates with differing conjugation sites was studied using small-angle x-ray scattering.; The self-assembly behavior of these conjugation site variants was observed to be robust, with only minor differences in phase boundaries and observed phases resulting from the changes in conjugation site. 
However, observed changes in the domain spacing signaled that modifications to the conjugation site offer control over protein orientation within the domains. Based on studies showing that polymer chemistry in bioconjugates has a significant effect on self-assembly, an attempt to quantify these protein-polymer interactions was made using contrast-variation small-angle neutron scattering on mCherry and polymer blends. This technique allows for decomposition of the scattering intensity into component parts corresponding to the three pair correlations between the two species in the blends. From this analysis, it was determined that the best-ordering bioconjugates have primarily repulsive interactions that can be described using a depletion layer model.; Lastly, the effect of protein properties was screened using a large library of bioconjugates made from 11 different proteins. The primary observed trend was that order increases as molecular weight increases, but a narrow region around 28-30 kDa was observed where bioconjugate ordering was significantly enhanced and additional nanostructures were accessible.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121893</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling automatic generation of accurate kinetic models for complicated chemical systems</title>
<link>https://hdl.handle.net/1721.1/121892</link>
<description>Enabling automatic generation of accurate kinetic models for complicated chemical systems
Han, Kehang.
The past decades have seen much progress in predictive kinetic modeling. Reaction mechanisms have shown increased predictive capability, providing key insights into chemical transformations under conditions of interest. Coupled and integrated in multiscale-multiphysics models, reaction mechanisms help elucidate physical phenomena that are driven by chemical kinetics and are recognized as a necessary tool for chemical selection, reactor design, and process optimization. These past kinetic modeling achievements have opened new opportunities for novel scientific applications in the chemical kinetics community and encouraged kinetic modelers to study even more complex chemical systems. As one might expect, system complexity significantly increases modeling cost in both reaction mechanism construction and simulation. Over the years we have seen the formulation of various lumping strategies.; Despite its simplicity, the lumping strategy introduces an intrinsic error when the lumps contain molecules with very different reactivities. Frequently, oversimplified models rely on kinetic parameters fitted from a very limited set of pilot experiments, resulting in poor accuracy in extrapolation. This thesis focuses on an automated detailed kinetic modeling strategy using the Reaction Mechanism Generator (RMG). RMG-generated models more faithfully represent the chemistry, so they have superior extrapolation potential. But as system complexity increases, several computational limitations prevent RMG from converging. This thesis has made several contributions: reducing memory usage, boosting algorithm scalability, and improving thermochemistry estimation accuracy, which together expand RMG's modeling capability toward large, complex systems. 
These contributions are available to the kinetics community through the RMG software package.; To demonstrate the improved modeling capability of RMG, the thesis also includes a large chemical application: heavy oil thermal decomposition under geological conditions via a C18 model compound, phenyldodecane. As an extension of RMG, the thesis also explores a promising alternative to detailed kinetic modeling when dealing with extremely large chemical systems: fragment-based kinetic modeling, which generates a reaction network in fragment space rather than molecule space. The thesis shows via a case study that the new method creates a much smaller reaction network but with similar prediction accuracy on feedstock conversion and products' molecular weight distribution compared to its counterpart model generated by RMG.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121892</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and computational study of mass transport in novel emulsion systems : strategies for reaction engineering and microparticle preparation</title>
<link>https://hdl.handle.net/1721.1/121891</link>
<description>Experimental and computational study of mass transport in novel emulsion systems : strategies for reaction engineering and microparticle preparation
Gu, Tonghan.
Emulsions are complex fluids with interesting physicochemical properties that have been widely used in health and personal care, food, coatings, and manufacturing. Rather than serving only as passive materials, emulsions can also be used as active building blocks for material preparation and chemical synthesis. This thesis presents a study of mass transport phenomena in two specific types of emulsion systems: microfluidic emulsions and concentrated food emulsions. For microfluidic emulsion systems, the first mass transport phenomenon studied is the exchange of chemicals between microfluidic droplets, which are 10-100 µm in size, and nanodroplets, which are dispersed as a nano- or mini-emulsion. Chemically, thermally, or electrically induced coalescence and micelle activity control the mass exchange between micro- and nano-droplets, leading to applications in reaction engineering and microparticle preparation.; Microdroplets function as micro-reactors that receive chemicals from nanodroplets with both the addition rate and dosage well controlled. The microdroplets can also function as micro-reservoirs that steadily supply chemicals to the nanodroplets. For microparticle preparation, microdroplets function as templates to be solidified by reagents carried by the nanodroplets. The second mass transport phenomenon in microfluidic emulsions is the evaporation of droplet solvents or the exchange of solvents between droplets and the continuous phase, which leads to solid precipitation. In particular, this thesis focuses on the formation of crystalline drug particles. A novel solvent/anti-solvent exchange method with a hydrogel binder was developed to prepare highly monodisperse microparticles of either hydrophilic or hydrophobic drugs from microdroplet templates. 
In addition, we also improved a previously developed spherical crystallization method based on droplet solvent evaporation.; We used the same hydrogel, but as a temporary immobilization medium, to prevent droplet coalescence and to expand the applicable solvent library of this method for industrial applications. For concentrated food emulsions, the mass transport phenomenon studied is the fast removal of the continuous phase and the microencapsulation of lipids into microparticles. Using spray drying, "powdered oil" containing up to 55 wt% (dry mass basis) of liquid oil was successfully prepared from concentrated milk-protein-stabilized emulsions. We discovered that pre-evaporation of raw milk not only offers energy cost savings, but also reduces fat loss. With additional carbohydrates, the surface extractable fat was reduced and powder wettability was improved. This product will serve as the main ingredient of an instant-powder, ready-to-use therapeutic food for treating child malnutrition in India.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-171).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121891</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transition metal carbide and nitride nanoparticles with noble metal shells as enhanced catalysts</title>
<link>https://hdl.handle.net/1721.1/121890</link>
<description>Transition metal carbide and nitride nanoparticles with noble metal shells as enhanced catalysts
Garg, Aaron R.
Core-shell nanostructures represent a promising and versatile design platform for enhancing the performance of noble metal catalysts while reducing the cost. Early transition metal carbides (TMCs) and nitrides (TMNs) have been identified as ideal core materials for supporting noble metal shells owing to their earth-abundance, thermal and chemical stability, electrical conductivity, and their ability to bind strongly to noble metals while still being immiscible with them. Unfortunately, the formation of surface oxides or carbon on TMCs and TMNs presents a difficult synthetic challenge for the deposition of atomically thin, uniform noble metal layers. Recent advances have enabled the synthesis of TMC core nanoparticles with noble metal shells (denoted as NM/TMC), although applicability toward TMN cores has not been previously demonstrated. Furthermore, the complete properties of these unique materials are still unknown.; This thesis conducts a detailed investigation of the synthesis, characterization, and catalytic performance of NM/TMC and NM/TMN core-shell nanoparticles to provide a comprehensive understanding of their material properties and the underlying phenomena. First, in-situ studies yielded insight into the mechanism behind the high temperature self-assembly of NM/TMC particles, indicating the presence of a metallic alloy phase preceding the formation of the core-shell structure upon insertion of carbon into the lattice. Next, the synthesis of NM/TMN nanoparticles was demonstrated via nitridation of a parent NM/TMC, and the structural and electronic properties of both core-shell materials were examined through in-situ X-ray absorption spectroscopy (XAS). 
The analysis revealed significant alterations to the electronic structure of the noble metal shell due to bonding interactions with the TMC and TMN cores, which led to weakened adsorbate binding energies.; Finally, the materials displayed improved performance for the oxygen reduction reaction (ORR), a critical challenge for fuel cell technologies. Notably, particles with complete, uniform shells exhibited unprecedented stability during electrochemical ageing under highly oxidizing conditions, highlighting the great potential of core-shell architectures with earth-abundant TMC and TMN cores for future ORR applications. Overall, this work provides new opportunities for the design of enhanced noble metal catalysts and enables further optimization of their performance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis. Page 157 blank. Vita.; Includes bibliographical references (pages 137-153).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121890</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving gas fixation in acetogenic bacteria</title>
<link>https://hdl.handle.net/1721.1/121889</link>
<description>Improving gas fixation in acetogenic bacteria
Emerson, David Frederic.
Waste gases are an increasing concern for their impact on the environment. These include carbon dioxide (CO₂) produced by industry and transportation, and methane (CH₄) from distributed oil wells and shale gas formations. Generally, these gases are released to the environment, or CH₄ is burned to CO₂, as CH₄ is a more harmful greenhouse gas than CO₂. Researchers are increasingly interested in methods to fix these two gases into valuable fuels and chemicals so as to achieve a sustainable economy. However, any such technology must be economically viable to ensure widespread adoption in the regions where these gases are released. Yet biological gas fixation has many limitations that currently prevent practical implementation. These include biological limitations such as slow growth rates and low biomass and product titers. Process limitations are also a concern, as full conversion is desirable and gas mass transfer can be a substantial rate-limiting step.; Reactors must also be simple and cheap due to the expected cost differential between the reducing equivalents (electricity, H₂, etc.) and the products (fuels and commodity chemicals). This thesis approaches the limitations of gas fixation from three different perspectives to overcome biological limitations and design low-cost bioreactors. Syngas fermentation via the Wood-Ljungdahl pathway is receiving growing attention as a possible platform for the fixation of CO₂ and renewable production of fuels and chemicals. However, the pathway operates near the thermodynamic limit of life, resulting in minimal ATP production and long doubling times. This calls into question the feasibility of producing high-energy compounds at industrially relevant levels. 
In Chapter 2, we investigated the possibility of co-utilizing nitrate as an inexpensive additional electron acceptor to enhance ATP production during autotrophic growth of Clostridium ljungdahlii.; In contrast to other acetogens tested, growth rate and final biomass titer were improved for C. ljungdahlii growing on a mixture of H₂ and CO₂ when supplemented with nitrate. Transcriptomic analysis, ¹³CO₂ labeling, and an electron balance were employed to understand how electron flux is partitioned between CO₂ and nitrate. Finally, we propose a pathway for enhanced ATP production from nitrate, and use this as a basis to calculate theoretical yields for a variety of products. This was experimentally confirmed, whereby nitrate improved heterologous production of 3-hydroxybutyrate, though yields remain much lower than could be theoretically achieved. This work demonstrates a viable strategy for decoupling ATP production from carbon dioxide fixation, which will serve to significantly improve the CO₂ fixation rate and the production metrics of other chemicals from CO₂ and H₂ in this host.; Future metabolic engineering could greatly increase yields to those predicted in the theoretical analysis. In Chapter 3, we utilize another electron donor, methanol, for the fixation of CO₂ with the Wood-Ljungdahl pathway. However, the literature is in some disagreement about the mechanism of methanol assimilation by acetogens and the genes involved. Deuterated methanol labeling was used to confirm that methanol was assimilated at the methyl level, indicating that a 3-component methyltransferase system (mtaABC) was involved. RNA-seq analysis revealed that, while mtaB and mtaC were likely properly annotated in Moorella thermoacetica, mtaA was not properly annotated based just on homology. 
We propose other methyltransferases that were highly upregulated in the presence of methanol as putative mtaA genes, to be confirmed with future in vitro assays.; In the final chapter, Chapter 4, we propose a potentially inexpensive reactor for the in situ conversion of methane at natural gas well heads. The mass transfer limits of a simple reactor were analyzed, and minimum catalytic rates were calculated such that the methane would be fully converted.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis. "September 2018."; Includes bibliographical references (pages 193-211).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121889</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving seawater desalination and seawater desalination brine management</title>
<link>https://hdl.handle.net/1721.1/121886</link>
<description>Improving seawater desalination and seawater desalination brine management
Nayar, Kishor Govind.
Water scarcity is an increasing problem globally. Seawater desalination is increasingly being relied upon as a means of mitigating the problem of water scarcity. However, seawater desalination has costs associated with it: capital costs, the cost of energy to desalinate, and environmental costs from the discharge of high-salinity brine. Efficient and cost-effective seawater desalination and desalination brine management systems are necessary to make seawater desalination a sustainable, scalable process. This work seeks to improve seawater desalination and seawater desalination brine management in several ways. For the first time, the thermophysical properties of seawater have been characterized as a function of pressure across the full desalination operating regime of temperature, salinity, and pressure. Functions that allow accurate analysis of the thermodynamic least work of desalination and of seawater flow exergy have been developed.; The least work of desalination, brine concentration, and salt production was investigated, and the performance of state-of-the-art brine concentrators and crystallizers was calculated. Hybrid designs of reverse osmosis (RO) and electrodialysis (ED) were proposed to be integrated with a crystallizer to concentrate desalination brine more efficiently. The RO-ED-crystallizer concept was applied to two separate applications: (a) salt production from seawater and (b) zero brine discharge seawater desalination. A parametric analysis to minimize the specific cost of salt production and water production was conducted. Parameters varied were: the ratio of seawater to RO brine in the ED diluate channel, ED current density, ED diluate outlet salinity, electricity, water, and salt prices, and RO recovery by adding a high-pressure RO (HPRO) stage. 
Results showed that significant cost reductions could be achieved in RO-ED systems by increasing the ED current density from 300 A/m² to 600 A/m². Increasing RO brine salinity by using HPRO and operating at 120 bar pressure reduced salt production costs while increasing water production costs. Transport properties of monovalent selective ED (MSED) membranes were also experimentally obtained for sodium chloride, significantly improving the accuracy of modeling MSED brine concentration systems. MSED cell pairs transported only about 50% as much water but nearly as much salt as a standard ED cell pair, while having twice the average membrane resistance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis. "Thesis contains very faint/illegible footnote numbering"--Disclaimer Notice page.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121886</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics and rheology of soft phase-change materials</title>
<link>https://hdl.handle.net/1721.1/121885</link>
<description>Dynamics and rheology of soft phase-change materials
Geri, Michela.
Many industrial processes involve multicomponent or composite materials in which one component can undergo a phase transition, leading to the appearance of a solid phase dispersed in a liquid-like continuous phase. Examples of soft phase-change materials can be found in a variety of applications, from food products (e.g., organogels, casein gels and gelatin), pharmaceutical products (e.g., tissue-mimicking phantoms and encapsulating agents), and cosmetics (e.g., foundations and lipsticks), to the oil and gas industry, where the formation of paraffin waxes and clathrate hydrates represents a major issue for upstream production and flow assurance. Historically, phase-change materials have been exploited for their unique thermal properties in energy storage applications; however, soft solids and complex fluids that undergo phase transformation have broader impact in industrial and biomedical applications because of the dramatic changes in mechanical properties that result from the conditions across the phase transition. Typically, these soft phase-change materials are part of the broader class of elasto-visco-plastic materials, showing both viscoelasticity at small deformations and plasticity at large deformations. However, their material properties are greatly influenced by the specific processing conditions during formation, such as temperature and applied deformation, leading to a thermo-rheological complexity that still poses major challenges for their experimental and theoretical characterization. In this Thesis, we develop novel experimental protocols and theoretical frameworks to characterize and describe the complex rheological behavior of soft phase-change materials under both linear and non-linear deformations. We focus mainly on two types of materials that are of major importance in the oil and gas industry: paraffin gels, as model waxy crude oils, and clathrate hydrate suspensions. 
In the limit of small deformations, we are usually interested in measuring the frequency response of the material as it evolves, or mutates, over time. Current state-of-the-art techniques have major limitations in providing both time- and frequency-resolution, primarily due to the type of input signals used. To overcome this, we develop a robust excitation signal that allows us to perform time-resolved mechanical spectroscopy of fast-mutating systems. Inspired by the biosonar signals of bats and dolphins, we introduce a jointly frequency- and amplitude-modulated chirp signal. Combining experiments and numerical simulations, we show that there exists an optimized range of amplitude modulation that minimizes the estimation error while reducing the total acquisition time by almost two orders of magnitude. With this new technique, which we call the Optimally Windowed Chirp (or OWCh), we then explore the phase transition during gelation of a series of mutating, phase-changing materials, including casein gels, gelatin and paraffin gels. To address large, non-linear deformations, we start from a thorough investigation of the steady-state and transient response of paraffin gels under shear. We develop a robust protocol that enables us to systematically extract the main rheological features, including the thermokinematic memory (i.e., the effect of thermal and shear history on the rheological behavior of the gel) and thixotropy (i.e., the time-dependent behavior under constant applied deformation). We show that these features can be understood in terms of microstructural rearrangements of the underlying solid particle network, which can be quantified through differential scanning calorimetry, birefringence imaging and rheometry. Based on this understanding, we present a constitutive framework that captures all of the different features while respecting thermodynamic and objectivity constraints. 
We also investigate mechanical instabilities that may arise during rheological measurements. Combining ultrasonic image velocimetry and rheometry, we show that both shear banding and slip can take place during steady shear below a critical value of the shear rate. However, the thixotropic nature of these materials precludes the banding instability from growing in the sheared region of the gap, ensuring that the measured stress response corresponds to the real bulk behavior. Finally, we study the visco-plastic response of clathrate hydrate suspensions. To do so, we develop a novel method to robustly control their formation, which so far has been a major issue in experimental studies due to uncontrolled nucleation and growth of hydrate crystals. Our method, based on the use of "frozen emulsions", decreases the induction time by orders of magnitude while guaranteeing that all the water droplets initially frozen into ice particles are converted into hydrate particles. Rheological measurements for different water volume fractions and shear rates reveal that the macroscopic rheological response is again governed by rearrangements of the microstructure; however, due to the very strong interparticle forces (which are the result of a continuous sintering process), the microstructure evolves towards a fully connected network that behaves as a porous solid structure. Incorporating this limit into our theoretical model, we show that the framework developed for softer interparticle interactions can also capture the macroscopic plastic response of hydrate suspensions. The results from this Thesis have the potential to impact many industrial processes that involve soft phase-change materials, such as flow assurance and oil extraction, thermal energy storage, gas transport and storage, and other processes where the dynamics of gelation are used to control the rheological properties of the ultimate product.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 325-353).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121885</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intrusion dynamics of particles or droplets released continuously to the deep ocean</title>
<link>https://hdl.handle.net/1721.1/121884</link>
<description>Intrusion dynamics of particles or droplets released continuously to the deep ocean
Wang, Dayang, Ph. D., Massachusetts Institute of Technology.
This thesis analyzes the flow and transport of multiphase plumes that result from a dispersed phase (e.g., droplets or particles) being discharged continuously into a stratified flowing ambient. Results are relevant to a wide range of natural and man-made applications, including the transport of oil droplets created accidentally during a deep-sea oil spill and sediment released purposefully during deep-sea mining operations. The subsequent transport and fate of small oil droplets with and without treatment using chemical dispersants is addressed experimentally and analytically. Next, we characterize the dependence of plume trapping behavior on crossflow velocity, bridging a gap in the current plume parametrization scheme. We also present a novel laboratory study aimed at understanding and quantitatively predicting secondary intrusion formation. In addition, a sensitivity analysis of plume trapping behavior to variations in key factors observed in the 2010 Deepwater Horizon oil spill field data is conducted. Another important environmental application of the study is deep-sea mining. Part of the work includes a field investigation to elucidate the trapping and dilution of a mid-water plume consisting of fine sediment residuals from the mining operations, with the goal of assessing the potential environmental impact of this type of tailing disposal technique. Analytical and numerical model results are validated against field observations of the plume dynamics. Knowledge in this area helps regulatory authorities develop sound policies to better guide mining activities in the deep sea and to ensure the sustainability of the resources and the environment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121884</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shallow water outfalls for brine disposal from desalination plants</title>
<link>https://hdl.handle.net/1721.1/121883</link>
<description>Shallow water outfalls for brine disposal from desalination plants
Shrivastava, Ishita.
Submerged outfalls consisting of multiple, closely spaced jets are often used to discharge industrial effluents in coastal waterbodies. Examples of such effluents include heated water from thermal power plants, treated wastewater effluent from sewage treatment plants, and reject brine from desalination plants. At locations with shallow water depth, the interaction between adjacent jets is enhanced and can affect mixing. The mixing of submerged outfalls in shallow water is studied in this thesis with particular emphasis on the discharge of dense treated brine from desalination plants. Treatment options for brine involve blending it with less saline effluents or concentrating it, and these options can have a significant effect on the design of the outfall and its mixing. The effect of shallow water depth on the dilution of submerged outfalls is determined first for quiescent conditions, and a unified theory is developed for single and multiple jets discharging in shallow water. The effect of shallowness is shown to be characterized by a non-dimensional parameter, which depends on the receiving water depth and the effluent momentum and buoyancy fluxes. The effect of brine treatment processes, which affect both discharge momentum and buoyancy, on the dilution of various contaminants is determined next. The effect of brine treatment on outfall design is also explored, and optimum outfall design variables are calculated for a range of conditions. In the presence of a crossflow, the mixing dynamics of multiple-port outfalls are quite different, and can give rise to complex jet interactions and even reversing flow close to the upstream jets. Laboratory experiments, in which discharge and ambient parameters are varied, have led to an improved empirical expression for dilution. 
In addition to the strength of the crossflow, outfall length and jet spacing are also found to significantly affect dilution. A numerical model, capable of modeling the discharge of multiple jets in a crossflow, is developed and shows significant improvement over existing models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121883</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory studies of the multiday oxidative aging of atmospheric organic aerosol</title>
<link>https://hdl.handle.net/1721.1/121882</link>
<description>Laboratory studies of the multiday oxidative aging of atmospheric organic aerosol
Lim, Christopher Y. (Christopher Yung-Ta)
Fine particulate matter (PM, or "aerosol") in the atmosphere affects the Earth's radiative balance and is one of the most important risk factors leading to premature mortality worldwide. Thus, understanding the processes that control the loading and chemical composition of PM in the atmosphere is key to understanding air quality and climate. However, the chemistry of organic aerosol (OA), which comprises a significant fraction of submicron atmospheric PM, is immensely complex due to the vast number of organic compounds in the atmosphere and their numerous reaction pathways. Laboratory experiments have generally focused on the initial formation of OA from volatile organic compounds (VOCs), but have neglected processes that can change the composition and loading of OA over longer timescales ("aging"). This thesis describes several laboratory studies that better constrain the effects of two important aging processes over timescales of several days: the oxidation of gas-phase species to form secondary OA (condensation) and the reaction of gas-phase radicals with organic molecules in the particle phase (heterogeneous oxidation). First, the oxidation of biomass burning emissions is studied by exposing particles and gases present in smoke to hydroxyl radicals (OH). Increases in organic aerosol mass are observed for all fuels burned, and the amount of OA formed is explained well by the extent of aging and the total concentration of measured organic gases. Second, the effect of particle morphology on the rate of heterogeneous oxidation is examined by comparing the oxidation of particles with thin organic coatings to the oxidation of pure organic particles. Results show that morphology can have a strong impact on oxidation kinetics and that particles with high organic surface-area-to-volume ratios can be rapidly oxidized. Third, the molecular products from the heterogeneous OH oxidation of a single model compound (squalane) are measured. 
Formation of a range of gas-phase oxygenated VOCs is observed, indicating the importance of fragmentation reactions that decrease OA mass, and providing insight into heterogeneous reaction mechanisms. The results from this work emphasize that the concentration and composition of OA can change dramatically over multiple days of atmospheric oxidation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121882</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable agricultural management : a systems approach for examining food security tradeoffs</title>
<link>https://hdl.handle.net/1721.1/121881</link>
<description>Sustainable agricultural management : a systems approach for examining food security tradeoffs
Jain Figueroa, Anjuli.
Estimates suggest that the world needs a 50% increase in food production to meet the demands of the 2050 global population (Tilman et al. 2011). Cropland expansion is unlikely to be sufficient, and yield improvements that require more inputs may lead to more environmental damage. This work focuses on reallocating limited land and water resources to optimize cropping patterns. By combining optimization methods, surrogate modeling, global data sources, data assimilation, and hydrologic modeling, we identify opportunities for increasing food-crop production and cash-crop revenue, while maintaining sustainability constraints that limit cropland expansion and prevent groundwater depletion. We apply the framework in India's Krishna river basin and find that reallocating resources to meet or exceed current production can lead to a 96% gain in net revenue over an estimated current baseline. Resources in this case are moved to high-yielding cash crops. Imposing a self-sufficient southern diet which depends on rice reduces the gains to 77%, while imposing a self-sufficient national diet with more emphasis on wheat eliminates all net revenue gains to the region. The approach described in this thesis highlights the trade-offs between food production, cost and environmental impacts in achieving specified food-security objectives. This research contributes to the field in two ways: 1) it provides a novel method for combining remotely sensed data, surrogate models and optimization to understand agricultural trade-offs, and 2) it furthers the discussion on food and water security and sustainable resource management by demonstrating that resource reallocation with sustainability constraints provides revenue gains in certain situations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 118-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121881</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compression behavior of smectitic vs. illitic mudrocks</title>
<link>https://hdl.handle.net/1721.1/121880</link>
<description>Compression behavior of smectitic vs. illitic mudrocks
Ge, Chunwei.
Overpressure, or fluid pressure in excess of hydrostatic pressure, has been observed globally in many deep water sedimentary basins. One of the possible mechanisms for overpressure is the smectite-to-illite (S-I) transformation. During the transformation, the basal spacing of the smectite layer reduces. The interlayer water is released into pore space, causing an increase in pore pressure. This thesis investigates the change in compression and permeability behavior due to the S-I transformation. Uniaxial compression testing was performed on smectitic and illitic mudrocks. The original Gulf of Mexico - Eugene Island (GoM-EI) mudrock sets the baseline for smectitic mudrock in order to compare with illitic mudrocks. Two methods were used to create illitic mudrock from the GoM-EI sediment. The illitic mudrock A was cooked in a high temperature constant rate of strain (CRS) device with effective stress applied (200 °C and 30 days); the illitic mudrock B was cooked in a hydrothermal cooker in a slurry state (250 °C and 18 days). The multi-functional high temperature CRS device was designed from scratch to tackle the challenge of measuring the mechanical properties of a mudrock while transforming the clay minerals. Although the methods of inducing the S-I transformation are different, similar degrees of illitization for the illitic mudrocks A and B were achieved by selecting the right temperature and time combination. The mineral transformation does not greatly alter the compressibility of the mudrocks. However, both the illitic mudrock A and B sit higher in porosity space than the smectitic mudrock at low stress level. As effective stress increases, the illitic mudrock A converges with the smectitic mudrock, while the illitic mudrock B reverses order with the smectitic mudrock at 30 MPa. The permeability of the smectitic mudrock ranges over five orders from 10⁻¹⁶ to 10⁻²⁰ m² from a porosity of 0.58 to 0.23. 
The permeability of the mudrocks is greatly increased by the mineral transformation. The permeability ratio of the illitic mudrocks over the smectitic mudrock increases from 2 to 12 as porosity decreases. The creep rate (Cα) at room temperature and elevated temperature was measured during the transformation stage of the illitic mudrock A. Cα at elevated temperature increases by 50% compared with that at room temperature. The increase in rate is caused by mineral transformation. Using the difference in rate, a model is proposed to estimate the effective stress reduction or overpressure generation based on the degree of mineral transformation.
Thesis: Ph. D. in the field of Geotechnical and Geoenvironmental Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121880</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ancillary services in the airline industry : passenger choice and revenue management optimization</title>
<link>https://hdl.handle.net/1721.1/121879</link>
<description>Ancillary services in the airline industry : passenger choice and revenue management optimization
Bockelie, Adam.
The recent proliferation of ancillary services means that airline passengers can face substantially different ancillary service prices and offerings based on their itinerary and fare class selection. At the same time, airlines have become interested in accounting for this supplementary revenue stream in their revenue management (RM) systems to maximize total, not just ticket, revenue. This thesis develops models for both of these issues, with a goal of providing a better understanding of how ancillary services affect the airline industry. We develop the Ancillary Choice Model (ACM) to describe how passengers make purchase decisions about ancillary services in conjunction with the selection of a fare class. We model two extremes of passenger knowledge and awareness of ancillary services, which we term simultaneous and sequential. We show that under the simultaneous model, the presence and price of ancillary services can affect the fare class selection of a passenger, even when all fare classes have the same ancillary prices. The second part of this thesis studies total revenue optimization. We provide a detailed assessment of a prior total revenue maximization approach, the Optimizer Increment (OI), proving that it can be an optimal revenue management strategy under limited conditions, but also showing through the Passenger Origin-Destination Simulator (PODS) that it decreases revenue in more realistic environments. We then develop a new revenue management optimization model, the Ancillary Choice Dynamic Program (ACDP), which maximizes total revenue by explicitly including the revenue and fare class choice impacts of ancillary services. We describe an Ancillary Marginal Demand (AMD) and Ancillary Marginal Revenue (AMR) transformation that can be used as heuristics to provide the ancillary and choice awareness benefits of ACDP to existing RM optimization models.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 274-282).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121879</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of endogenous and adoptively transferred T cell function in a genetic mouse model of lung adenocarcinoma</title>
<link>https://hdl.handle.net/1721.1/121878</link>
<description>Investigation of endogenous and adoptively transferred T cell function in a genetic mouse model of lung adenocarcinoma
Canner, David (David Allen)
The clinical success of immune checkpoint blockade, including in patients with lung cancer, has clearly demonstrated that the body's immune system is capable of tumor cytotoxicity if modulated appropriately. Unfortunately, only a minority of patients with solid tumors respond to checkpoint blockade, and even fewer respond to other immunotherapeutic modalities like adoptive cell therapy (ACT). The reasons for these low response rates are not well understood, suggesting that an improved understanding of the mechanisms which shape a tumor immune response and promote T cell dysfunction is needed. Toward these ends, we first sought to characterize the transcriptional changes leading toward dysfunction in endogenous T cell responses. Using an autochthonous mouse model of lung adenocarcinoma, we longitudinally profiled CD8⁺ T cells from the lungs of tumor-bearing mice using single-cell RNAseq (scRNAseq). We identified significant longitudinal changes within the tumor-specific T cell population over the course of more than 20 weeks of tumor development. Among these transcriptional changes was a transition from functional effector cells to states of T cell exhaustion. We used this information to develop a signature which identifies T cells that are more readily reinvigorated by checkpoint blockade. We also identified multiple factors which promote heterogeneity within the tumor-specific T cell response, including TCR affinity for antigen and antigen identity within a dominance hierarchy. We close this second chapter by demonstrating that this transcriptional information allows for the identification of mediators limiting anti-tumor T cell responses. 
Modern adoptive cell therapy has demonstrated remarkable, durable efficacy in treating patients with hematological malignancies but has failed to deliver comparable efficacy in patients with solid tumors like NSCLC. In Chapter 3, we transcriptionally profiled adoptively transferred cells in our autochthonous mouse model of lung cancer and identified multiple mechanisms which limit ACT efficacy. We used this transcriptional data to perform focused in vivo CRISPR-mediated screens to identify mediators of T cell dysfunction. We identify a number of genes limiting T cell persistence and functionality within the tumor microenvironment and demonstrate that by ablating expression of these genes, we can dramatically improve the efficacy of ACT.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121878</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-targeting among microRNAs is widespread and enriched in the brain</title>
<link>https://hdl.handle.net/1721.1/121877</link>
<description>Co-targeting among microRNAs is widespread and enriched in the brain
Cherone, Jennifer M. (Jennifer Michelle)
MicroRNAs (miRNAs) play roles in diverse developmental processes and cellular differentiation. Distinct miRNAs have hundreds to thousands of conserved binding sites in mRNAs, but typically exert only modest repression on a single site. Co-targeting of individual mRNAs by multiple different miRNAs could be commonly used to achieve stronger and more complex patterns of repression. Comparing target sets of different miRNAs, we identified hundreds of pairs of miRNAs that share more mRNA targets than expected (often ~2-fold or more) relative to stringent controls. For one co-targeting pair, miR-138 and miR-137, we validated functional overlap in neuronal differentiation. Clustering of the pairing relationships revealed a group of 9 predominantly brain-enriched miRNAs that share many targets. In reporter assays, subsets of these miRNAs together repressed gene expression by 5- to 10-fold or more, sometimes exhibiting cooperative repression. Our results uncover an unexpected pattern in which certain combinations of miRNAs can collaborate to strongly repress particular targets, and suggest important developmental roles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121877</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of DNA replication and the replication initiator, DnaA, in Bacillus subtilis</title>
<link>https://hdl.handle.net/1721.1/121876</link>
<description>Regulation of DNA replication and the replication initiator, DnaA, in Bacillus subtilis
Anderson, Mary E., Ph. D. (Mary Elizabeth), Massachusetts Institute of Technology.
DNA replication is a highly regulated process across all organisms. Improper regulation of DNA replication can be detrimental. I identified an overinitiating, conditional synthetic lethal mutant of Bacillus subtilis. I isolated suppressors of this mutant and uncovered novel genes associated with DNA replication. These suppressors acted both at the steps of initiation and elongation to overcome the detrimental replication initiation of the synthetic lethal ΔyabA dnaA1 mutant. One class of suppressors decreased levels of the replicative helicase, DnaC. I showed that decreased levels of helicase are sufficient to limit replication initiation under fast growth conditions. I also explored the regulation of DnaA as a transcription factor. The replication initiation inhibitor, YabA, binds to DnaA and prevents its cooperative binding at the origin. In addition to its role in replication initiation, DnaA also directly regulates expression of several genes. YabA has been shown to inhibit DnaA binding at several promoters, but its effect on DnaA-mediated gene expression is unclear. I found that YabA inhibits sda activation by DnaA but does not significantly affect repression of ywlC by DnaA. Lastly, I showed that YabA appears to stimulate autoregulation of dnaA. This preliminary data illustrates a role for YabA in DnaA-mediated gene expression.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 118-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121876</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing systems and the beholding eye : computer-aided visions of the postwar British landscape</title>
<link>https://hdl.handle.net/1721.1/121875</link>
<description>Seeing systems and the beholding eye : computer-aided visions of the postwar British landscape
Carlsson, Moa Karolina.
In the decades after World War II, a period that saw the accelerated transformation of Britain's countryside into a modern industrial landscape, the visual appearance of the country was placed at the center of debates about identity, progress, and heritage. Among a vocal and interested public, the proliferating power stations, power transmission lines, open-pit mines, dams, motorways, and oil-related facilities were often felt as threats to the national past, to cultural values, and to the very idea of what it meant to be British. Amidst this political complexity, the computer-generated diagram, with its underlying mathematical structure, may seem an unlikely vehicle for settling planning disputes about Britain's countryside. My study reveals how landscape practitioners, hired by industrial developers, began to exploit the general characteristics of mainframe computers (speed, accuracy, replicability, and economy) to define new ways of representing and measuring visual phenomena, and of comparing alternative visions of the country, using quantitative "facts." The result was a digital technology, "seeing systems," that enumerated and quantified rather than depicted the visual landscape, a new technology that not only profoundly transformed visualization and representation practices but also ensured continued industrial expansion.
Thesis: Ph. D. in Architecture: Design and Computation, Massachusetts Institute of Technology, Department of Architecture, 2019; Cataloged from PDF version of thesis. "The pagination in this thesis reflects how it was delivered to the Institute Archives and Special Collections. Figure images not found in original thesis"--Disclaimer Notice page.; Includes bibliographical references (pages 257-287).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121875</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing nanocarriers to penetrate cartilage and improve delivery of biologic drugs for osteoarthritis</title>
<link>https://hdl.handle.net/1721.1/121874</link>
<description>Designing nanocarriers to penetrate cartilage and improve delivery of biologic drugs for osteoarthritis
Geiger, Brett Charles.
Osteoarthritis is a debilitating joint disease that affects over 30 million people and has no disease-modifying therapies. The current standard of care for the disease is merely palliative until joint replacement is necessary. Disease-modifying osteoarthritis drugs have been tested in the clinic, but all have been unsuccessful in clinical trials. A key point of failure for several of these drugs has been inefficient and inadequate delivery to target cartilage cells. Cartilage is avascular and thus cannot be targeted efficiently through the systemic circulation. Due to the localized nature of osteoarthritis, direct injection of therapeutics into affected joints is an attractive solution to this problem. However, delivery via this approach remains impeded by rapid turnover of the synovial fluid within joints and the dense, highly charged nature of cartilage tissue.; To overcome this biological barrier, we took advantage of a recently demonstrated phenomenon in which positively charged nanomaterials electrostatically interact with anionic cartilage, both avoiding joint clearance and facilitating diffusion through the tissue in the process. This work describes two strategies using such polycationic materials to deliver insulin-like growth factor 1 (IGF-1), a promising anabolic growth factor for osteoarthritis that has known delivery challenges. The first approach used an electrostatic assembly of IGF-1, poly(L-glutamic acid), and poly(L-arginine) into a nanoscale complex coacervate, or nanoplex, for delivery of unmodified, bioactive IGF-1. The second approach involved a densely charged polyamidoamine (PAMAM) dendrimer, end-grafted with poly(ethylene glycol) (PEG) of various molecular weights at various percentages of end-group functionalization.; From this panel of nearly 50 PEGylated dendrimers, an optimally charged dendrimer was selected based on criteria of cartilage uptake and nontoxicity. The selected dendrimer was covalently modified with IGF-1.
Both systems were tested to ensure that they could deliver bioactive IGF-1, penetrate human-thickness cartilage tissue, extend joint residence time in vivo, and mitigate the progression of early traumatic osteoarthritis in rats. Both the nanoplex and the optimally PEGylated dendrimer-IGF-1 achieved these goals, suggesting that polycationic nanocarriers could potentially improve the pharmacokinetics and efficacy of disease-modifying osteoarthritis drugs in the clinic.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; "DOCTOR OF PHILOSOPHY IN BIOLOGICAL ENGINEERING With a focus in Polymers and Soft Matter (PPSM)." Cataloged from PDF version of thesis.; Includes bibliographical references (pages 106-112).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121874</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward understanding and mitigating heterogeneity in bone marrow stromal cell cultures for improved therapeutic efficacy</title>
<link>https://hdl.handle.net/1721.1/121873</link>
<description>Toward understanding and mitigating heterogeneity in bone marrow stromal cell cultures for improved therapeutic efficacy
Rennerfeldt, Deena Antoinette.
Bone marrow stromal cells (BMSCs), a subset of which are considered mesenchymal stem cells (MSCs), have been used in over 600 clinical trials for indications ranging anywhere from autism to liver cirrhosis to diabetes. They have excited enthusiasm in the cell therapy community not only for their demonstrated differentiation potential toward several lineages, but also for the anti-inflammatory and immunomodulatory effects of their secretome. However, the necessary in vitro expansion of BMSCs renders cell populations functionally diverse, and understanding of what drives heterogeneity onset - as well as which distinct phenotypes elicit therapeutic responses of interest - remains an open challenge. This lack of characterization confounds studies focused on basic cell behavior as well as translational applications, and U.S. Food &amp; Drug Administration approval for BMSC therapies has yet to be achieved for any of the several dozen indications explored to date.; This thesis describes our work toward understanding the extent, mechanisms, and possible mitigation strategies regarding heterogeneity in BMSC cultures, by exploring the biophysical and transcriptomic profiles of single cells. We report our findings that cell generation most succinctly dictates the combined biophysical properties studied and that at the transcriptome level four distinct functional phenotypes exist.
We further explore mechanisms by which heterogeneity emerges, demonstrating that cellular senescence and asynchronous proliferation kinetics lead to a distribution of biophysical properties and that at fixed time points cells are somewhere along a gene expression cascade trajectory from one functional state to another.; We also report our discovery of novel surface marker candidates for enrichment of specific phenotypes and demonstrate that these discrete subpopulations differentially express genes implicated in the distinct, yet established therapeutic applications of immunosuppression, neuroregeneration, and wound healing. These findings were enabled by our technological advancements that include complex time lapse imaging protocols, innovative assays for probing label-free cell behavior, establishment of best practices for generating single BMSC transcriptome libraries, and robust analytical pipelines for time lapse imaging and single-cell RNA sequencing datasets. Collectively, these tools and analyses provide a strong foundation toward leveraging the discrete functional roles of this diverse collection of cells for both well-designed basic research studies and improved therapeutic efficacy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121873</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elastically-isotropic mechanical metamaterials : theory and experiments</title>
<link>https://hdl.handle.net/1721.1/121853</link>
<description>Elastically-isotropic mechanical metamaterials : theory and experiments
Tancogne-Dejean, Thomas(Thomas Vincent)
Lightweight engineering requires the development of low-density materials featuring high mechanical properties, with an emphasis on high specific stiffness and strength. Besides improvement in the composition of bulk materials, high specific mechanical properties are obtained by carefully architecting materials through the controlled introduction of porosities. The recent rise of additive manufacturing allows for the manufacturing of complex structures at the material length scale, opening an unprecedented design space of metamaterials. Within this design space, this thesis is concerned with the conception of three-dimensional isotropic metamaterials, a particularly important class of mechanical metamaterials exhibiting direction-independent behavior at the macroscopic level.; The mechanical behavior of the anisotropic Face-Centered-Cubic (FCC) and Body-Centered-Cubic (BCC) lattices is investigated at small and large strains, through a combined analytical, numerical and experimental study including an extensive characterization of stainless steel micro-lattices. Based on this investigation, elastically-isotropic truss lattices are designed via topological constraints obtained from analytical homogenization. Precisely combining anisotropic lattices, including the Simple Cubic (SC), BCC and FCC lattices, allows elastic isotropy to be achieved. The introduction of elastically-isotropic hollow-truss lattices eliminates the need to combine anisotropic lattices, as the anisotropy in hollow-truss lattices is dictated by the ratio of the inner to outer radii of each beam. Finally, a new class of plate-lattice is proposed which reaches optimal isotropic elastic properties. These lattices are conceived by placing plates along the close-packed planes of crystal structures.; Based on theoretical analysis, a design map is developed for elastically isotropic plate-lattices of cubic symmetry.
The newly-proposed designs are validated through extensive unit cell simulations and experiments carried out on polymeric specimens. Furthermore, the initial yield surface of the elastically-isotropic lattices is investigated numerically and the direction-dependency of the initial strength is reported using pole figures. A plate-lattice is found to exhibit an almost isotropic initial yield, with its strength close to the theoretical upper bound for porous solids. The main outcomes of this thesis are (i) the design strategies used to create elastically-isotropic three-dimensional lattices based on truss, shell and plate assemblies and (ii) the discovery of an optimal elastically-isotropic lattice family with almost optimal initial yield response.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 203-211).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121853</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design tools and mechanisms for progressive cavity pumps</title>
<link>https://hdl.handle.net/1721.1/121852</link>
<description>Design tools and mechanisms for progressive cavity pumps
Simon, Kevin Patrick.
This thesis presents tools to design progressive cavity pumps (PCPs), with an emphasis on low-viscosity fluids. These models indicate that high-speed operation can increase sealing performance, decrease pump size, and eliminate gear reductions. New models for estimating both laminar and turbulent internal flow and shear losses in these pumps are presented. The new models are capable of estimating pump performance 1000x faster than traditional simulation methods, and do not require empirical calibration, making them 'designer-ready'. A proof-of-concept turbulent PCP was designed using these models. Its volumetric efficiency is within 20% of predicted values. This thesis also presents a novel one degree-of-freedom hypocycloidal bearing to constrain the motion of the rotor for increased performance and control. Three different bearing topologies have been developed: roller, rail, and flexural. An experimental PCP concept with integrated hypocycloidal rail bearings was developed and tested, with efficiencies as high as 45%. Experimental data are compared with a new lubrication theory model which accounts for rotor motion, rotor geometric error, and stator geometric error. The experimental and theoretical results show strong agreement, proving that low-order lubrication theory models are accurate simulation tools. Additionally, performance results from the rail bearing pump and first-order analysis inspire new scaling laws connecting the volumetric and mechanical efficiency of PCPs. These scaling laws show strong agreement in both turbulent and laminar flows. A new generation of PCPs has the potential to transform irrigation, water purification, and oil-sand extraction, among other applications. The new tools required to create these PCPs also have strong implications for how traditional PCPs are designed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 159-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121852</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and implementation of an automated reconfigurable modular flow chemistry synthesis platform</title>
<link>https://hdl.handle.net/1721.1/121851</link>
<description>Design and implementation of an automated reconfigurable modular flow chemistry synthesis platform
Thomas, Dale Arlington, III.
Synthetic chemistry has been the driving force behind advances in fields ranging from pharmaceuticals and agricultural chemicals to advanced materials; however, these fields have struggled with a slow pace of discovery, limited reproducibility, and difficulty scaling promising new molecules. Current organic chemistry labs rely on batch methodologies, limiting the safe process windows, contributing to scaling difficulties, and causing reproducibility issues. Advances in laboratory automation and flow chemistry can be combined to address this bottleneck while increasing expert chemists' productivity. Automated reaction platforms, however, have been limited in their ability to access a diverse set of process units beyond simple mixing and stirring. A system that can carry out multi-step syntheses, inline reaction monitoring, and multi-phase reactions, and that is easily reconfigurable, could enable access to novel process windows and enhance laboratory productivity.; In this work, the development of a reconfigurable continuous flow chemistry platform capable of multistep syntheses is undertaken. This system interfaces with a library of process modules capable of handling solids, aggressive reagents, inline separations, and the reaction conditions required for organic synthesis. These modules can be reconfigured and connected into the sequence required for target molecule synthesis. Reagents are routed to the process modules through the physical wiring of the connections to the assembled modules, eliminating complex valving manifolds. The assembly of the system is coordinated through a graphical user interface (GUI), which executes a user-generated recipe.
The platform has been used to rapidly synthesize a variety of active pharmaceutical ingredients (API) and dyes requiring stereo-selectivity, site-selectivity, library generation, and convergent synthesis.; This integrated reconfigurable flow chemistry platform aims to decrease the time required for synthesizing new molecules while increasing synthetic repeatability and lab-to-lab transferability. Automation of synthetic chemistry can decrease the time for molecule development and allow chemists to focus on pathway refinement, reaction optimization, and process analytics. This work required the incorporation of design concepts from microfluidics, robotics, and precision machine design into an integrated modular system for continuous end-to-end production of molecules.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 115-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121851</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic engineering of fluid structure at the fluid-solid interface</title>
<link>https://hdl.handle.net/1721.1/121850</link>
<description>Atomistic engineering of fluid structure at the fluid-solid interface
Wang, Gerald J.(Gerald Jonathan)
Under extreme confinement, fluids exhibit a number of remarkable effects that cannot be predicted using macroscopic fluid mechanics. These phenomena are especially pronounced when the confining length scale is comparable to the fluid's internal (molecular) length scale. Elucidating the physical principles governing nanoconfined fluids is critical for many pursuits in nanoscale engineering. In this thesis, we present several theoretical and computational results on the structure and transport properties of nanoconfined fluids. We begin by discussing the phenomenon of fluid layering at a solid interface. Using molecular-mechanics principles and molecular-dynamics (MD) simulations, we develop several models to characterize density inhomogeneities in the interfacial region. Along the way, we introduce a non-dimensional number that predicts the extent of fluid layering by comparing the effects of fluid-solid interaction to thermal energy.; We also present evidence for a universal scaling relation that relates the density enhancement of layered fluid to the non-dimensional temperature, valid for dense-fluid systems. We then apply these models of fluid layering to the problem of anomalous fluid diffusion under nanoconfinement. We show that anomalous diffusion is controlled by the degree of interfacial fluid layering; in particular, layered fluid exhibits restricted diffusive dynamics, an effect whose origins can be traced to the (quasi-)two-dimensionality and density enhancement of the fluid layer. We construct models for the restricted diffusivity of interfacial fluid, which enables accurate prediction of the overall diffusivity anomaly as a function of confinement length scale.
Finally, we use these earlier developments to tackle the notorious problem of dense fluid slip at a solid interface.; We propose a molecular-kinetic theory that formulates slip as a series of thermally activated hops performed by interfacial fluid molecules, under the influence of the bulk fluid shear stress, within the corrugated energy landscape generated by the solid. This theory linearizes to the Navier slip condition in the limit of low shear rate, captures the central features of existing models, and demonstrates excellent agreement with MD simulation as well as experiments.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121850</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles approaches for accurate predictions of nanostructured materials</title>
<link>https://hdl.handle.net/1721.1/121849</link>
<description>First-principles approaches for accurate predictions of nanostructured materials
Zhao, Qing, Ph. D., Massachusetts Institute of Technology.
Nanostructured materials have attracted increasing interest in recent years due to their unusual mechanical, electrical, electronic and optical properties. First-principles electronic structure calculations (e.g., with density functional theory or DFT) provide unique insights into the structure-property relationships of nanostructured materials that can enable further design and engineering. The favorable balance between efficiency and accuracy of DFT has led to its wide application in chemistry, solid-state physics and biology. However, DFT still has limitations and suffers from large, pervasive errors in its predicted properties. For small systems, more accurate methods are available, but challenges remain for studying nm-scale materials. In the solid state, unique challenges arise from both the strong sensitivity of correlated transition metal oxides to approximations in DFT and the periodic boundary condition.; Therefore, a greater understanding of approximations inherent in DFT is needed for nanostructured materials. In this thesis, we study nanostructured semiconducting materials, where conventional DFT can be expected to perform well. We develop methods for sampling amorphous materials, rationalizing periodic table dependence in material stability for materials discovery of ordered materials, and bring a surface reactivity perspective to understanding growth processes during materials synthesis. Within the challenging cases of transition metal oxides, we explore how common approximations (e.g., DFT+U and hybrids) affect key nanoscale properties, such as the nature of density localization, and as a result, key observables such as surface stability and surface reactivity.
Observation of divergent behavior between these two methods highlights the limited interchangeability of DFT+U and hybrids in the solid-state community.; Finally, leveraging the understanding developed in the first two parts of the thesis, we employ a multiscale approach to systematically tailor DFT functional choice for challenging condensed phase systems using accurate reference data from higher level methods. The combination of large-scale electronic structure modeling with state-of-the-art methodology will provide important, predictive insight into tailoring the nanoscale properties of useful materials, and further development in approximate DFT.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 154-180).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121849</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-scale modeling tools for coupled reaction, phase equilibrium and two-phase mixing phenomena with application to supercritical water heavy oil upgrading process</title>
<link>https://hdl.handle.net/1721.1/121848</link>
<description>Multi-scale modeling tools for coupled reaction, phase equilibrium and two-phase mixing phenomena with application to supercritical water heavy oil upgrading process
Raghavan, Ashwin, Ph. D., Massachusetts Institute of Technology.
Supercritical water heavy oil upgrading, which has the potential for high distillate liquid yields with lower coke formation, is a complex process involving coupled reactions, phase equilibrium and two-phase mixing phenomena. This thesis presents the development of a range of models and tools to simulate such processes at different scales with varying levels of fidelity in describing the underlying physical phenomena. Over the past decade, a number of experiments have been performed in batch reactors to study the upgrading of various heavy oils and crude oil vacuum residua through both oil-phase pyrolysis and reaction in the presence of supercritical water (SCW). While these studies indicate that the presence of SCW can significantly affect the outcomes of the upgrading process, there remains a lack of clarity on whether thermolytic processing in the presence of SCW is indeed significantly beneficial as opposed to pure oil-phase pyrolysis and, if so, at what operating conditions.; In addition, modeling tools coupling the reaction kinetics and phase equilibrium, which can provide deeper insight into the process and the underlying phenomena, have been lacking. The first part of this thesis describes the development and application of a two-phase stirred reactor (TPSR) model which couples sub-models for the phase-specific reaction kinetics and multi-component hydrocarbon-water phase equilibrium. Using this model, separate lumped kinetics rate parameters for the oil and SCW phases were inferred that best fit batch reactor experimental data.
Analysis of the obtained kinetics parameters reveals the following crucial insights into the chemical pathways involved in the SCW upgrading process: (i) the primary coke precursor formation pathway is not suppressed in the SCW phase and (ii) only the secondary pathway towards coke precursors from product distillate species is suppressed in the SCW phase, especially at higher operating temperatures.; The TPSR model was then applied to evaluate the performance of heavy oil upgrading using SCW in an oil-water co-flow (visbreaking) configuration. Next, an extractive upgrading reactor design was hypothesized to improve high-value distillate liquid product yields and reduce undesirable extrinsic coke formation by removing the distillate products safely out of the reactor in an SCW up-flow, thereby preventing their further participation in secondary retrograde combination reactions towards more aromatic coke precursors and low-value gas. The TPSR model was used to evaluate the performance of the SCW extractive upgrading process in terms of distillate liquid yields and coke formation rates for heavy oil vacuum residue over a range of operating temperatures and water flow rates. The predictions demonstrate the significant potential of the extractive upgrading process to achieve the aforementioned objectives.; The effect of the extraction rate, governed by the interphase mixing time-scale, on the product yields and oil-inflow rate for steady-state operation was then quantified. The second part of the thesis describes the development of a computational fluid dynamics (CFD) framework and modeling tool for simulating the coupled two-phase flow and multi-component interphase mass transfer at near-critical/supercritical conditions in applications like SCW heavy oil upgrading. The CFD tool accounts for interface tracking in 2-D/3-D, intra-phase species diffusion, phase-equilibrium-limited interphase species transfer and non-ideal thermodynamics.
In this tool, the interface is tracked with a conservative sharp interface capturing Volume of Fluid (VoF) scheme using (i) a Piece-wise Linear Interface Reconstruction (PLIC) algorithm, (ii) an unsplit geometrical advective flux calculation and (iii) a flux-polyhedron correction.; The intra-phase species diffusion is handled using a corrected face-normal gradient calculation accounting for the arbitrary shape and size of phase-specific sub-cells. The interphase mass transfer is computed as a source term consistent with the local phase equilibrium and transport flux constraints at the interface. Moreover, the phase-volume change is rigorously accounted for in the discrete pressure equation. The tool was implemented on an open-source CFD platform ensuring compatibility with unstructured mesh information and parallel processing constraints. Finally, the developed CFD tool was applied to determine the two-phase mixing rates in an extractive upgrading configuration for water flow rates of interest. The predictions suggest that the earlier assumption of instantaneous phase equilibration with respect to the time-scale of reactions relevant in the SCW heavy-oil upgrading process is a reasonable approximation for centimeter-scale reactors.; Furthermore, the scaling of the total oil-water interfacial area in the reactor and the average Sherwood number with the water inlet velocity and oil-water interfacial tension was established, providing insight into ways to manipulate the two-phase mixing rate to enable control of the extractive upgrading process at higher operating temperatures.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 253-261).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121848</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanics and manufacturing of crosslinked cellulose nanocomposites</title>
<link>https://hdl.handle.net/1721.1/121847</link>
<description>Mechanics and manufacturing of crosslinked cellulose nanocomposites
Rao, Abhinav.
Cellulose nanocrystals (CNCs) are naturally derived and renewable nanostructures with exceptional mechanical and chemical properties. Consequently, CNCs provide a compelling platform to study the mechanical properties and manufacturing processes of nanocomposites, toward sustainable, high-performance structural materials. This thesis presents the formulation, mechanics and additive manufacturing of CNC composites with high hardness and toughness. A gel precursor is formulated, combining CNCs, oligomers and solvent; and net-shape forming and additive manufacturing of macroscopic parts are achieved by a UV and thermal curing sequence. Characterization of CNC composites by nanoindentation and bi-modal atomic force microscopy (AFM) reveals a nanoscale grain structure and fracture toughening mechanism.; By quantitative analysis of AFM images and robust statistical treatment of nanoindentation data, the measured mechanical properties are correlated with the microstructure of the composite. The composites are observed to have modulus, hardness and fracture toughness of around 9 GPa, 0.6 GPa and 5 MPa-m¹/², exceeding most conventional polymers in performance. Rheological characterization reveals the effect of shear history applied during processing on the microstructure of the composites. Rheology coupled with in-situ infrared spectroscopy shows that CNC-polymer composite gels display the distinctive features of colloidal glasses and that intrinsic chemical additives can be used to tune their behavior during extrusion. A complementary study is performed on the photopolymerization kinetics and process control of interpenetrating polymer networks (IPNs) using a custom-built linear shear rheometer.; Photopolymers are formulated with dual-monomer systems that respond to separate wavelengths of light. The mechanical and chemical properties enabled by IPNs, and their potential for nanocomposite manufacturing, are explored.
Using cellulose as a model system, this thesis presents a route towards formulation, processing and bulk fabrication of nanocomposites, and a fundamental understanding of the structure-property relationships from the nano to the macro scale, arising at high loading fractions of nanomaterial fillers.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-154).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121847</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wide-field structured illumination microscopy for fluorescence and pump-probe imaging</title>
<link>https://hdl.handle.net/1721.1/121846</link>
<description>Wide-field structured illumination microscopy for fluorescence and pump-probe imaging
Kim, Yang-Hyo.
The optical resolution of microscopy is limited by the wave nature of light. There have been many recent advances in overcoming this diffraction-limited resolution, but most are focused on fluorescence imaging. Furthermore, there are few non-fluorescence wide-field super-resolution techniques that can fully utilize the applicable laser power to optimize imaging speed. Structured illumination microscopy is a super-resolution method that relies on patterned excitation. This thesis presents novel applications of structured illumination microscopy to surface plasmon resonance fluorescence and pump-probe scattering imaging. First, structured illumination microscopy was introduced to surface plasmon resonance fluorescence imaging for high signal-to-noise ratio and high resolution. Secondly, a theoretical framework for three-dimensional wide-field pump-probe structured illumination microscopy has been developed to increase the lateral resolution and enable depth sectioning. Further, structured illumination wide-field photothermal digital phase microscopy is proposed as a high-throughput, high-sensitivity super-resolution imaging tool to diagnose ovarian cancer. Finally, I have derived the exact analytical solution to the heat conduction problem in which a sphere absorbs a temporally modulated laser beam for photothermal microscopy. The proposed method also has great potential to be applied to other pump-probe modalities such as transient absorption and stimulated Raman scattering.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121846</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Figure correction of thin plate and shell substrates using stress generated by ion implantation</title>
<link>https://hdl.handle.net/1721.1/121845</link>
<description>Figure correction of thin plate and shell substrates using stress generated by ion implantation
Chalifoux, Brandon D.
I developed a method to correct height errors in thin substrates. Accurately-figured thin plates and shallow shells are important for large-area space telescopes and for the semiconductor industry. Thin substrates can be bent into the desired shape, for example by tens of microns on a 100 mm diameter substrate, by applying stress to their surface. Ion implantation is one of many possible approaches to applying a controlled stress field to the surface of a substrate. I develop analytical and numerical approaches to calculating stress fields that generate a desired deformation field in thin flat plates and shallow shells. Equibiaxial stress alone is insufficient to generate some deformation fields exactly, making non-equibiaxial stress components critical for figure correction, in general. I experimentally demonstrate the generation of non-equibiaxial stress using ion implantation in glass substrates, by angling the ion beam. To generate a desired deformation field, I developed a process to calculate, and built a system to implement, implantation recipes (i.e. the doses and ion beam angles at each substrate position) on 100 mm glass wafers. Using this system, I demonstrate a 2-4 x improvement in height and slope errors of glass wafers using ion implantation. I also demonstrate the use of ion implantation to compensate for the deformation caused by stress in thin film coatings. Thin films, such as those used for mirror coatings, often have non-uniform equibiaxial stress fields. I developed a process to restore the figure of 100 mm silicon wafers that have been deformed by thin metal films, by applying the same equibiaxial stress field on the other side of the wafer using ion implantation. I demonstrate a 20 x reduction of the coating-induced deformation.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 213-225).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121845</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lovable sustainability : from residential solar PV systems to eco-feedback designs</title>
<link>https://hdl.handle.net/1721.1/121844</link>
<description>Lovable sustainability : from residential solar PV systems to eco-feedback designs
Bao, Qifang.
Traditional research in sustainable product design strongly emphasizes the material and energy domains and aims to reduce resource consumption and waste production in the manufacturing process and at the end of the product lifecycle. Less attention has been paid to products' environmental impact in the use phase and to market adoption of sustainable products, which are also important components of sustainability and are heavily influenced by how users perceive products and how they use them. This points to an opportunity to apply user-centered design strategies to the realm of design for environmental sustainability. This thesis investigates the relationship between sustainable products and their users. The overarching goal is to gain knowledge of how to design lovable sustainable products, which are desirable and have strong emotional connections with users, to increase product adoption and to encourage sustainable product use. Two classes of sustainable products, residential solar photovoltaic (PV) systems and eco-feedback products, are investigated as case studies. Residential solar PV systems produce clean energy and allow for energy independence of individual households. Via stakeholder interviews, key attributes of residential solar PV systems and installations that users are concerned with are identified, including system reliability and installer reputation. Discrete choice experiments with 1773 homeowners in California and Massachusetts shed light on how they make tradeoffs between these attributes. The findings provide first-hand information on homeowners' needs and preferences for residential solar PV systems and open up opportunities for designing more attractive and more widely adopted renewable energy systems.
Eco-feedback products provide information on resource usage with the aim of encouraging resource conservation behavior in users. Surveys of 658 university students in two countries revealed that quantitative feedback in these products better aids users with higher knowledge about resource consumption; however, emotionally evocative information aids users with low or high consumption knowledge to a similar degree. In-lab experiments with 68 participants of a wide range of ages and backgrounds show that users' resource conservation behaviors are strongly linked to negative emotions (such as guilt) towards waste of resources, and better product evaluations are more strongly linked to positive emotions (such as satisfaction) towards conservation. These results suggest the value of creating emotionally evocative eco-feedback products and fostering positive emotions in order to improve user engagement. These studies provide guidelines for sustainable product design and offer experimental approaches to facilitate future research in user-centered design for sustainability.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-152).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121844</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular interactions in phonology</title>
<link>https://hdl.handle.net/1721.1/121841</link>
<description>Modular interactions in phonology
Rasin, Ezer.
This thesis makes two separate claims about the architecture of phonology: (1) The computation of stress takes place in a distinct cognitive module from segmental phonology. This module is informationally encapsulated from segmental features. (2) Phonological generalizations over underlying representations can be captured in the lexicon. The claim in (1) suggests a departure from a consensus view in generative phonology since the 1950s. According to this view, multiple phonological computations, including the computation of word stress and segmental processes, are carried out in a single cognitive module known as phonology. In Chapter 1 I challenge this view in two steps. I first argue for a new phonological universal based on the stress patterns of around 400 languages: (3) STRESS-ENCAPSULATION UNIVERSAL: the distribution of stress is never directly conditioned by segmental features. After reanalyzing reported counterexamples to the universal, I argue for an account of the universal in terms of a modular decomposition of phonology along the lines of (1). The claim in (2) suggests a return to the architecture of early generative phonology, in which phonological generalizations could be captured in the lexicon (using constraints on underlying representations) as well as in the mapping from underlying representations to surface forms. Most recent work in phonology has abandoned that architecture, taking the lexicon to be merely a storage place for lexical items. Chapter 2, written jointly with Roni Katzir, presents an argument for constraints on underlying representations from learnability. In Chapter 3 I develop a new theory of blocking in non-derived environments, a phenomenon that has posed a long-standing puzzle for phonological theory since the 1970s. I argue that the new theory, which relies on constraints on underlying representations, offers a better account of the phenomenon than its predecessors.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-162).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121841</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The global integration challenge : global management teams, temporal differences, and constructing the identity of the global 'other'</title>
<link>https://hdl.handle.net/1721.1/121839</link>
<description>The global integration challenge : global management teams, temporal differences, and constructing the identity of the global 'other'
Grassin-Drake, Laurel Edwards.
In this thesis, I investigate the challenges of 21st century global integration by examining it in-depth at a Fortune 200 global supplier (CC) that faces strong pressures of both global integration and local responsiveness. While global integration is typically discussed at a macro level, here I use qualitative methods to study senior managers at a micro level. In this way, I establish critical internal struggles with integration, and reveal external dynamics of integration in a close inter-firm relationship as a strategic supplier. Results highlight the role of Global Management Teams as an essential linking tool. At the same time, I find that conflicting temporalities across locations, accompanied by a globally-shared logic of appropriateness, place constraints on this mechanism of integration: limiting the hours available, and consistently advantaging and disadvantaging specific geographies. I also draw on a unique data set to examine global integration in the context of a close, dependent relationship, with multiple boundary-crossing links. This consists of thirteen months of observations of the weekly virtual meeting of the global account team responsible for CC's largest customer (Alpha). Tracing the collective process by which the team constructs the organizational identity (OI) of the "other" from their individual distributed interactions, I find that in constructing Alpha's global OI the team is also co-creating their own global OI. Further, the process by which the team constructs Alpha's OI to answer the question "who are they?" parallels well-documented internal processes used by organizational members to answer "who are we?" 
Importantly, through this co-creation, identity acts both as an alignment mechanism within CC and across firm boundaries in the relationship. Finally, CC's identity work on Alpha extends beyond construction to shaping and enforcing Alpha's OI as partner through active voice, engaging the customer's hierarchy from outside to discipline and shape the customer's behavior from within.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, February, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-158).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121839</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tasks, stratification and occupational change : evidence from the legal profession</title>
<link>https://hdl.handle.net/1721.1/121837</link>
<description>Tasks, stratification and occupational change : evidence from the legal profession
Riordan, Christine A.(Christine Ann)
The organization of professional work - that of lawyers, doctors, and accountants, among others - is undergoing change. One of the most notable changes is the disaggregation of work processes, or the unbundling of work into component tasks and their allocation to different sources of labor. The legal profession, and specifically the corporate law firms and clients that make up the profession's "core", is increasingly subject to such reorganization. An emerging hypothesis is that this unbundling and reallocation of tasks underpins new forms of stratification. This dissertation explores the extent to which tasks underpin the distribution of opportunities and rewards that come to define occupational stratification in law. In the first essay, I show how market pressures - specifically, those rooted in changing firm-client relationships, incongruities in law firm business models, and increased competition from alternative legal service providers - contribute to occupational change by transforming law firms' division of labor and its tasks. The precise implications of such transformation remain unclear, as tasks are not typically examined as a mechanism of professional stratification. The next two essays aim to bring clarity to this issue. In each, I build from an emerging model of task-based stratification found in work design and organizational scholarship. In this model, tasks are theorized to underpin stratification through their technical, social, and subjective characteristics, doing so in ways that hinge on their status. I examine this model first using qualitative data collected in two major legal markets, showing that the disaggregation of tasks that vary by status shapes divergent opportunity structures related to skill, social resources, and signals of professional status, such as professional expertise and autonomy, which reinforce existing stratification in new ways. 
The third essay builds on these insights, using both fieldwork and survey data to test the relationship between task status and occupational outcomes more systematically. My findings show that certain high- and low-status tasks are associated with three forms of social resources, mostly in the expected direction. Yet nuance in these relationships suggests further refinement and new conditions of the model. These findings raise implications for task-based stratification and stratification in the profession more generally.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121837</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of nonstoichiometry on the magnetism and ferroelectricity of Fe- and Co-substituted strontium titanate thin films</title>
<link>https://hdl.handle.net/1721.1/121832</link>
<description>Effect of nonstoichiometry on the magnetism and ferroelectricity of Fe- and Co-substituted strontium titanate thin films
Tang, Astera S.
Multiferroic and magnetoelectric materials, in which the magnetic properties can be controlled via electric field and vice versa, hold the potential to be useful in several emerging memory technologies, including spin-wave devices and multi-state memory. Room temperature ferromagnetism and ferroelectricity are of great interest to the multiferroic and magnetoelectric community, as a key challenge in the field is engineering a material exhibiting both at room temperature. Oxygen-deficient Fe-substituted (STF) and Co-substituted (STC) strontium titanate thin films grown by pulsed laser deposition are investigated for their microstructural, magnetic, and ferroelectric properties. The films demonstrate room temperature ferromagnetism when grown under highly reducing conditions and, in the case of STF, appear to exhibit room temperature ferroelectricity, though the magnetoelectric coupling is quite small. The impact of double-epitaxy on magnetism is investigated in STC. Double-epitaxy is a strain relaxation mechanism and microstructural feature observed in several perovskite oxides, and its effect on strain can in turn reduce the observed ferromagnetism of STC thin films. X-ray magnetic circular dichroism and x-ray absorption spectroscopy for STF thin films are also investigated. The results clarify the source of the observed ferromagnetism to be Fe²⁺ cations, which can be controlled via oxygen vacancy content, and a corresponding mechanism is suggested to explain how the ferromagnetism arises. The strong correlation between magnetic moment and strain is attributed to magnetoelasticity, and a preliminary study into the effect of epitaxial strain on oxygen-deficient STF is presented. 
Finally, attempts to alter the magnetic properties post-growth, with varying levels of success, are discussed. Although a method to manipulate the magnetic properties post-growth could prove useful for enabling easier processing and new applications for these materials, this work finds that post-growth manipulation of the magnetic properties is not completely reversible. The results presented in this work may provide insight into room temperature magnetism and ferroelectricity of related material systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis. "Feb 2019."; Includes bibliographical references (pages 170-185).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121832</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biotemplated resin and carbon nanomaterials for energy and environmental applications</title>
<link>https://hdl.handle.net/1721.1/121831</link>
<description>Biotemplated resin and carbon nanomaterials for energy and environmental applications
Zhang, Geran.
The M13 bacteriophage has been shown to be a highly versatile toolkit for growing and assembling nanomaterials with technological importance. Inspired by the natural biomineralization process, much of the existing literature has focused on genetically engineering the M13 viral capsid for interaction with inorganic materials, such as metals and oxides. In this thesis, the utility of the M13 toolkit was extended to the synthesis of organic and carbonaceous materials. Biotemplating of phenolic resins was extensively studied, with a particular focus on colloidal assembly and materials chemistry. Genetically engineered M13 bacteriophage was shown to be particularly apt at controlling the morphology and self-assembly of phenolic resin nanofibers. The properties of these nanomaterials could be simultaneously controlled by introducing additional molecular moieties using simple aqueous, organic chemistry, to enable their application as catalyst scaffolds and carbon dioxide sorbents. Modification of the phenolic resin nanofibers with organosilicon moieties offered a direct route to nanoporous carbon nanofibers upon carbonization. The properties of these biotemplated carbon nanofibers could be tailored for specific applications by independently controlling morphology and carbon texture. Their practical utility was demonstrated by the rapid adsorption of small molecules with uptake values comparable to some of the highest values reported for carbon materials. High-conductivity nanofibers could also be incorporated into lithium-sulfur batteries as interlayers to significantly improve electrochemical performance. New biotemplating approaches to the synthesis of other inorganic nanomaterials such as zinc sulfide and noble metal nanomaterials were also demonstrated. 
Biotemplated zinc sulfide nanofibers were shown to be a promising anode material for sodium-ion batteries, with potential for further study. The facile synthesis of a range of noble metal nanowires opens up potential applications in catalysis and energy storage.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 171-185).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121831</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in non-convex optimization and learning</title>
<link>https://hdl.handle.net/1721.1/121830</link>
<description>Topics in non-convex optimization and learning
Zhang, Hongyi, Ph. D., Massachusetts Institute of Technology.
Non-convex optimization and learning play an important role in data science and machine learning, yet in many respects they still elude our understanding. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop iteration complexity analyses for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. By bounding the distortion introduced by the metric curvature, the iteration complexity of Riemannian (stochastic) gradient descent methods is derived. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show that (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitzness regularization of the network. The method in (2), called ZeroInit, makes the network update scale invariant to its depth at initialization.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 165-186).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121830</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms and circuits for motor control and learning in the songbird</title>
<link>https://hdl.handle.net/1721.1/121829</link>
<description>Algorithms and circuits for motor control and learning in the songbird
Stetner, Michael E.(Michael Edward)
From riding a bike to brushing our teeth, we learn many of our motor skills through trial and error. Many biologically based trial-and-error learning models depend on a teaching signal from dopamine neurons. Dopamine neurons increase their firing rates to signal outcomes that are better than expected and decrease their firing rates to signal outcomes that are worse than expected. This dopamine signal is thought to control learning by triggering synaptic changes in the basal ganglia. What are the origins of this dopaminergic teaching signal? How do synaptic changes in the basal ganglia lead to changes in behavior? In this thesis, I study these questions in a model of skill learning - the songbird. In the first part of my thesis, I develop a computational model of song learning. This model incorporates a dopaminergic reinforcement signal in VTA and dopamine-dependent synaptic plasticity in the singing-related part of the basal ganglia. I demonstrate that this model can provide explanations for a variety of experimental results from the literature. In the second part of my thesis, I investigate a potential source of the dopaminergic error signal in VTA. I performed the first recordings from one cortical input to VTA: the dorsal intermediate arcopallium (AId). Previous studies disagree on the role of AId in behavior. Some studies argue that AId contributes vocal error information to VTA. Other studies suggest that AId is not involved in the computation of error signals, but is instead responsible for controlling head and body movements. I directly tested these hypotheses by recording single neurons in AId during singing and during natural movements. My results support a motor role for AId - AId neurons had highly significant changes in activity during head and body movements. 
Meanwhile, following vocal errors, AId neurons showed a small but marginally significant decrease in firing rate. In a more detailed analysis, I developed an automated behavior classification algorithm to categorize zebra finch behavior and related these behavior classes to the activity of single units in AId. My results support the hypothesis that AId is part of a general-purpose motor control network in the songbird brain.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 179-192).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121829</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>FMRI studies of the relationship between language and theory of mind in adult cognition</title>
<link>https://hdl.handle.net/1721.1/121828</link>
<description>FMRI studies of the relationship between language and theory of mind in adult cognition
Paunov, Alexander(Alexander Marinov)
Language is the primary means for human interaction, and communicative success requires an ability to reason about a conversation partner's beliefs, desires, and goals (e.g., Grice 1957, 1968, 1975; Sperber &amp; Wilson, 1986). Many questions remain about the precise nature of the relationship between language processing and Theory of Mind (ToM), and their neural substrates. On the one hand, the two domains share an intimate connection given that a) language appears to have been shaped by communicative pressures; b) pragmatic inference is prevalent in language; c) aspects of ToM are important for language acquisition; and d) language is, in turn, critical for acquiring some of the more complex aspects of ToM. On the other hand, prior neuroimaging and patient work have provided some evidence for a dissociation between basic linguistic processing and reasoning about mental states. In this thesis, I use functional MRI to elucidate the relationship between these two core human capacities. In Chapter 1, I review some of the past research that bears on the relationship between language and ToM. In Chapter 2, I examine the topography of language processing and verbal vs. non-verbal ToM in two fMRI datasets (n=90 and n=47). I find that verbal, but not non-verbal, ToM elicits robust responses throughout the language network. The lack of response to non-verbal ToM argues against a core role of the language regions in social reasoning in adulthood. Next, in Chapter 3, I investigate the relationship between the language and ToM networks using a functional correlation approach during several naturalistic conditions (n=55 participants across three studies). I find that although the two networks are dissociable, they also show reliable synchronization both at rest and during story comprehension. 
This synchronization may provide one mechanism for inter-network information sharing. Finally, in Chapter 4, I report a case study of an individual (EG) born without a left temporal lobe (likely due to perinatal stroke damage). Given that in cases of early left hemisphere damage, language processing usually shifts to the right hemisphere, EG presents an interesting case study of how language and ToM - whose most ToM-selective component resides in the right temporoparietal junction - can co-exist within a single temporal lobe. I show that language and ToM overlap to a greater extent in EG compared to a large normative dataset, and some areas that support ToM in neurotypical individuals instead support language processing in EG. However, both EG's basic language processing and pragmatic processing appear fully intact. These results suggest that a single temporal lobe is sufficient to support both language processing and ToM, and that some cortical areas may assume either ToM or language functions, pointing to a deep relationship between the two systems and the interchangeable nature of their cortical substrates, at least early in development.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 120-153).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121828</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards understanding facial movements in real life</title>
<link>https://hdl.handle.net/1721.1/121827</link>
<description>Towards understanding facial movements in real life
Le Mau, Tuan.
It is commonly assumed that there is a reliable one-to-one mapping between a certain configuration of facial movements and the specific emotional state that it supposedly signals. One common way to test this one-to-one hypothesis is to ask people to deliberately pose the facial configurations that they believe they use to express emotions. Participants are randomly sampled, without concern for their emotional expertise, and are given a single emotion word or a single, brief statement to describe each emotion category. They then deliberately pose the facial configuration that they believe they make when expressing instances of this category. Such studies routinely find that participants from different countries show moderate to strong evidence for a one-to-one mapping between an emotion category and a single facial configuration (its presumed facial expression). In Study 1, we examined the facial configurations posed by emotion experts - famous actors who were provided with a diverse sample of richly described scenarios, full of context. Participants inferred the emotional meaning of the scenarios, which were then grouped into categories. Systematic coding of the facial poses for each emotion category revealed little evidence for the hypothesis that each category has a diagnostic facial expression. Instead, we observed a high degree of variability among experts' facial poses for any given emotion category, and little specificity for any pose. Furthermore, an unsupervised statistical analysis discovered 29 novel emotion categories with moderately consistent facial poses. 
In Study 2, participants were asked to infer the emotional meaning of each facial pose when presented alone, or when presented in the context of its eliciting scenario. In fact, the majority of studies designed to test the one-to-one hypothesis ask people from various cultures to judge posed configurations of facial movements, such as a scowl (the proposed facial expression for anger), a frown (the proposed expression for sadness), and so on, on the assumption that these facial configurations, as universal expressions of emotional states, co-evolved with the ability to recognize and read them. These studies routinely show participants one facial configuration posed by multiple posers for each emotion category and observe variable findings, depending on the experimental method used. Our analyses indicated that participants' inferences about the emotional meaning of the facial poses were influenced more by their eliciting scenarios than by the physical morphology of the facial configurations. These findings strongly replicate emerging evidence that the emotional meaning of any set of facial movements may be much more variable and context-dependent than hypothesized by the common one-to-one view, which continues to influence the public understanding of emotion, and hence education, clinical practice, and applications in government and industry. Although more ecologically valid research on how people actually move their faces to express emotion is urgently needed, doing so has been immensely difficult without the right tools to support the process of capturing facial data in real life, automatically processing these data, and finally supporting data verification and analysis. We developed a system of technological tools to support the investigation of facial movements during emotional episodes in naturalistic settings with the use of dynamic and longitudinal facial data. 
We then collected, pre-processed, verified, and analyzed data from YouTube using our newly developed tools.; In particular, we examined two talk show hosts and presented preliminary answers to questions that were previously very difficult to investigate.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis. "Some pages in the original document contain text that runs off the edge of the page. See Appendix A - pages 162-171"--Disclaimer Notice page.; Includes bibliographical references (pages 147-159).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121827</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated robust and adaptive methods in the heating oil industry</title>
<link>https://hdl.handle.net/1721.1/121826</link>
<description>Integrated robust and adaptive methods in the heating oil industry
Tay, Joel (Joel Wei En)
Almost six million households in the United States alone use heating oil as their main fuel, the vast majority of these in the Northeastern US. In this thesis, we examine some problems faced by a planner who is contracted to resupply customers with heating oil through the winter season, and use robust and adaptive optimization and machine learning to develop models that allow the planner to address these problems under uncertainty at a realistic scale. In the first part of the thesis (Chapter 2), we consider the problem of resupplying customers spread over a geographical area. Due to the presence of uncertainty in demand, the planner has to choose an appropriate fleet size, decide on the most cost-effective routes and schedules, and on how much to resupply each customer. We develop novel scalable and adaptive algorithms to address this problem, demonstrating the potential for significant cost savings in simulations while being able to address problem sizes in the thousands.; In the second part of the thesis (Chapter 3), we consider the problem of executing the purchase of a commodity. In addition to price uncertainty on the daily commodity market, we model two kinds of discounts offered by commodity sellers vying for the planner's business. We develop a tractable model to formulate a purchasing strategy for a desired quantity, and use recently-developed machine learning techniques to find optimal decision trees that the planner can apply to different problem parameters to yield readily interpretable purchasing strategies, without having to re-solve the optimization models. We demonstrate experimentally that these strategies perform almost as well as those given by the actual optimization models.; Finally, in the third part of the thesis (Chapter 4), we demonstrate the possibility of solving the previous two problems as an integrated whole, allowing the planner to simultaneously optimize the routing, scheduling, and purchasing aspects of heating oil delivery. 
Although the integrated problem may be too large to solve directly at realistic sizes, we use Lagrangean decomposition methods to make the problem tractable, and show experimentally that this allows us to obtain high-quality solutions that reduce the combined cost of the two subproblems.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-144).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121826</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demand uncensored : car-sharing mobility services using data-driven and simulation-based techniques</title>
<link>https://hdl.handle.net/1721.1/121825</link>
<description>Demand uncensored : car-sharing mobility services using data-driven and simulation-based techniques
Fields, Evan (Evan Jerome)
In the design and operation of urban mobility systems, it is often desirable to understand patterns in traveler demand. However, demand is typically unobserved and must be estimated from available data. To address this disconnect, we begin by proposing a method for recovering an unknown probability distribution given a censored or truncated sample from that distribution. The proposed method is a novel and conceptually simple detruncation technique based on sampling the observed data according to weights learned by solving a simulation-based optimization problem; this method is especially appropriate in cases where little analytic information about the unknown distribution is available but the truncation process can be simulated.; The proposed method is compared to the ubiquitous maximum likelihood estimation (MLE) method in a variety of synthetic validation experiments, where it is found that the proposed method performs slightly worse than perfectly specified MLE and competitively with slightly misspecified MLE. We then describe a novel car-sharing simulator which captures many of the important interactions between supply, demand, and system utilization while remaining simple and computationally efficient. In collaboration with Zipcar, a leading car-sharing operator in the United States, we demonstrate the usefulness of our detruncation method combined with our simulator via a pair of case studies. These tools allow us to estimate demand for round-trip car-sharing services in the Boston and New York metropolitan areas, and the inferred demand distributions contain actionable insights.; Finally, we extend the detruncation method to cover cases where data is noisy, missing, or must be combined from different sources such as web or mobile applications. In synthetic validation experiments, the extended method is benchmarked against kernel density estimation (KDE) with Gaussian kernels.
We find that the proposed method typically outperforms KDE, especially when the distribution to be estimated is not unimodal. With this extended method we consider the added utility of search data when estimating demand for car-sharing.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-145).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121825</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering periodic short hairpin RNA delivery systems for enhanced therapeutic efficacy</title>
<link>https://hdl.handle.net/1721.1/121821</link>
<description>Engineering periodic short hairpin RNA delivery systems for enhanced therapeutic efficacy
Wu, Connie, Ph. D., Massachusetts Institute of Technology.
RNA interference (RNAi) presents a highly promising approach for cancer therapeutics via specific silencing of disease-implicated genes, but its clinical translation remains severely limited by barriers in delivering short interfering RNA (siRNA). Numerous delivery vehicles have been developed to protect siRNA from degradation, promote target cell uptake, and facilitate endosomal escape into the cytoplasm, where RNAi occurs. However, in vivo instability, low silencing efficiency, undesired toxicity, and immunogenicity remain challenges for current siRNA delivery systems, particularly as the low valency and high rigidity of siRNA make it difficult to condense into stable nanoparticles. Here we engineer the siRNA cargo to make it more amenable to stable encapsulation by using a polymeric form of siRNA, or periodic short hairpin RNA (p-shRNA), as well as design a biodegradable polycationic carrier for efficient in vivo delivery of p-shRNA.; Consisting of tens of linked siRNA repeats, p-shRNA is synthesized by the repeated action of T7 RNA polymerase around a circular DNA template. We first leverage molecular engineering to design an open-ended p-shRNA structure that is efficiently processed inside cells into siRNAs, greatly enhancing its silencing potency. Furthermore, the much higher valency and flexibility of p-shRNA compared to siRNA enable more stable complexation with delivery materials. To exploit these advantages of p-shRNA, we optimize biodegradable polycations with hydrophobic regions that promote stable condensation and efficient intracellular release. Our approach unveils key design rules governing p-shRNA delivery, and we develop stabilized p-shRNA complexes that show in vivo therapeutic efficacy in a syngeneic melanoma mouse model.
Finally, we extend our p-shRNA platform to act as a dual therapeutic agent, harnessing innate immune activation together with gene silencing.; By modulating the surface of the p-shRNA complexes with an anionic polypeptide, we dramatically enhance innate immune recognition of p-shRNA by pattern recognition receptors while maintaining high silencing efficiency. These dually acting complexes can target ovarian tumors in vivo and prolong survival in a syngeneic ovarian cancer mouse model. Our findings establish a potent, multifunctional RNAi platform that can potentially move RNAi therapeutics closer to clinical translation by addressing the delivery and in vivo efficacy challenges faced by current siRNA systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121821</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using semidefinite programming to bound distributions in chemical engineering systems</title>
<link>https://hdl.handle.net/1721.1/121820</link>
<description>Using semidefinite programming to bound distributions in chemical engineering systems
Dowdy, Garrett Ryan.
Distributions appear in many forms in models of chemical engineering systems. Such distributions account for microscopic variability in the system while simultaneously explaining its macroscopic properties. These macroscopic properties are often of practical engineering interest. Thus, it is valuable to be able to characterize the underlying distributions that affect them. Recently, in the mathematical programming literature, it was shown that it is possible to optimize a linear objective over a set of distributions by solving a specific type of convex optimization problem called a semidefinite program (SDP). From a theoretical perspective, SDPs can be solved efficiently. Furthermore, there exist several off-the-shelf codes designed specifically to solve SDPs. This thesis demonstrates how these theoretical and practical advancements can be applied to chemical engineering problems featuring distributions. Broadly speaking, it shows how, given limited information about a distribution, one can use SDPs to calculate mathematically rigorous bounds on various descriptions of that distribution. Two specific types of distributions are examined: particle size distributions and probability distributions arising in stochastic chemical kinetics, with the majority of the thesis covering the latter topic. The SDP-based bounding method described herein provides a rigorous solution to the long-standing "moment closure problem" arising in stochastic chemical kinetics. Moreover, it provides a means of analyzing stochastic chemical kinetic systems that cannot be effectively analyzed using existing methods. The bounding method does have some limitations, and we present several refinements of the method aimed at overcoming these limitations. Finally, we discuss several ideas, not yet explored, through which the bounding method may be further improved.
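The core idea above - optimizing a linear objective (such as the probability of an event) over all distributions consistent with given moments - has a closed-form solution in the two-moment case, which may help build intuition; it is the higher-order moment constraints that require an SDP. A minimal sketch (the function name and the numbers are illustrative, not taken from the thesis):

```python
def cantelli_upper_bound(mean, var, a):
    """Sharpest upper bound on P(X >= a) over ALL distributions with the
    given mean and variance, for a > mean (Cantelli's inequality).
    This is the exact optimum of the degree-2 moment problem; adding
    higher-order moment constraints turns it into a semidefinite program.
    """
    assert a > mean and var > 0
    return var / (var + (a - mean) ** 2)

# Hypothetical example: a species with mean copy number 10 and variance 4.
# No distribution matching those two moments can have P(X >= 14) above 0.2.
bound = cantelli_upper_bound(10.0, 4.0, 14.0)
```

Any specific distribution (Poisson, Gaussian, etc.) with those moments satisfies the bound, which is what makes moment bounds "rigorous" without closure approximations.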
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 329-334).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121820</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and applications of plant nanobionics</title>
<link>https://hdl.handle.net/1721.1/121819</link>
<description>Mechanisms and applications of plant nanobionics
Wong, Min Hao.
Plant Nanobionics seeks to use rationally designed nanoparticles to interface directly with plant cells and organelles to augment plant functions, as well as to introduce non-native functionalities. The broader vision is to create a wide array of wild-type plants, not genetically limited to specific species, capable of imaging objects in their environment, powering themselves as light sources, serving as IR communication devices, and functioning as self-powered groundwater sensors. Plants are uniquely suited to perform such roles due to their ability to generate energy from sunlight via photosynthesis. They are also in constant fluidic exchange with the environment, both in gaseous exchange via the stomata, as well as in their continual uptake of water and mineral salts from the ground. Furthermore, plants have a negative carbon footprint and contribute aesthetically to our living environment.; In this thesis, we first study the trafficking and uptake of nanoparticles into plant tissues, cells and organelles. We focus our work particularly on the chloroplast, which is a plant organelle organized for photosynthesis and self-repair of photosynthetic proteins. Many important plant functions, such as carbon dioxide reduction or energy generation, are carried out within the chloroplast - a plant organelle that remains greatly underexplored as an engineering material.
We examine the subcellular uptake and kinetic trapping of a wide range of nanoparticles for the first time, using the plant chloroplast as a model system, with validation in vivo in living plants.; Confocal visible and near-infrared fluorescent microscopy and single particle tracking of gold-cysteine-AF405 (GNP-Cys-AF405), streptavidin-quantum dot (SA-QD), dextran and poly(acrylic acid) nanoceria, and various polymer-wrapped single-walled carbon nanotubes (SWCNTs), including lipid-PEG-SWCNT, chitosan-SWCNT and SWCNTs wrapped in the 30-base (dAdT) ssDNA sequence (AT)15 (hereafter ss(AT)15-SWCNT), are used to demonstrate that particle size and the magnitude, but not the sign, of the zeta potential are key in determining whether a particle is spontaneously and kinetically trapped within the organelle, despite the negative zeta potential of the envelope. We develop a mathematical model of this lipid exchange envelope and penetration (LEEP) mechanism, which agrees well with observations of this size and zeta potential dependence.; The theory predicts a critical particle size below which the mechanism fails at all zeta potentials, explaining why nanoparticles are critical for this process. Lastly, we extend this study to consider whole protoplasts, which are isolated plant cells without cellulosic walls. Taken collectively, this understanding of nanoparticle trafficking and uptake would enable the design of nanomaterials for gene delivery and sensing, augmented plant functions, or the introduction of non-native functionalities into plants.
We then extend these design principles to demonstrate living spinach plants (Spinacia oleracea) as new materials and functional devices that serve as self-powered auto-samplers and pre-concentrators of nitroaromatics within ambient groundwater, detectors of the organic molecules contained therein, and infrared (IR) communication platforms that can send this information to a user's smart phone.; The design employs a pair of near-infrared (nIR) fluorescent nanosensors embedded within the plant leaf mesophyll. One nanosensor is engineered through the Corona Phase Molecular Recognition (CoPhMoRe) technique using single-walled carbon nanotubes (SWCNTs) conjugated to the peptide Bombolitin II to recognize nitroaromatics via IR fluorescent emission at &gt; 1100 nm, with a response time of 5-15 min after introducing 400 μM of picric acid to the roots. The second IR channel is a polyvinyl alcohol (PVA) functionalized SWCNT that acts as an invariant reference signal. As contaminant nitroaromatics or dopamine in solution are transported up the roots and stem into the leaf tissues, they accumulate in the mesophyll where the pair of SWCNT sensors are embedded. This results in relative changes in the intensity of SWCNT emission, with a response rate that is mathematically described using a whole-plant residence time model.; The real-time monitoring of embedded SWCNT sensors also allows residence times in the roots, stems and leaves to be estimated, calculated to be 8.3 min (combined residence times of root and stem) and 1.9 min/mm leaf, respectively. We further show that this system is generalizable to the detection of other analytes, such as dopamine, which is known to effect physiological changes in plants. These results demonstrate the ability of living, wild-type plants to function as chemical monitors of groundwater and communication devices to external electronics at standoff distances.
Overall, the mechanistic understanding of nanoparticle trafficking within plant tissues, cells and organelles, as well as the development of new nanobionic methods for nanomaterial-plant interactions will enable the elucidation of basic plant functions and the creation of self-powered, net zero-carbon plant based devices.
Thesis: Ph. D. in Chemical Engineering Practice, Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 128-141).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121819</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequence design principles for 3D wireframe DNA origami</title>
<link>https://hdl.handle.net/1721.1/121818</link>
<description>Sequence design principles for 3D wireframe DNA origami
Ratanalert, Sakul.
DNA is a highly programmable molecule that can be designed to self-assemble into nearly arbitrary 2D and 3D nanoscale structures. DNA origami is a particularly versatile method to achieve complex molecular architectures. However, the rules for designing scaffolded DNA origami have not been well-formalized, which hinders both the investigation of characteristics of well- and poorly-folded structures and the participation of a larger scientific audience in DNA nanotechnology. In my thesis work, a fully automatic inverse design procedure, DAEDALUS (DNA Origami Sequence Design Algorithm for User-defined Structures), has been developed that programs arbitrary wireframe DNA assemblies based on an input wireframe mesh without reliance on user feedback. This general, top-down strategy is able to design nearly arbitrary DNA architectures, routing the scaffold strand using a spanning tree algorithm and adding staple strands in a prescribed manner. The wireframe nanoparticles produced can use antiparallel crossover (DX) motifs, for robust self-assembly; parallel paranemic crossover (PX) motifs, for staple-free self-assembly; or a hybrid of the two, to reduce the staples required for folding to only those necessary for functionalization. The thermodynamics of the self-assembly of these wireframe structures, and the effects of scaffold and staple routing, are investigated using quantitative PCR and FRET measurements, tracking fluorescence to elucidate global and local folding events. The framework developed should enable the broad participation of nonexperts in this powerful molecular design paradigm and set the foundation for further predictive models of DNA self-assembly.
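The scaffold-routing step rests on a standard graph primitive: computing a spanning tree of the wireframe mesh. As a rough illustration of that primitive only (DAEDALUS's actual routing rules are more involved; the tetrahedron graph below is a hypothetical example):

```python
def spanning_tree(vertices, edges):
    """Build a spanning tree of an undirected graph (Kruskal-style,
    unweighted) using union-find. Returns the list of tree edges."""
    parent = {v: v for v in vertices}

    def find(v):
        # Follow parent pointers to the component root, with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Tetrahedron wireframe: 4 vertices, 6 edges.
verts = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
tree = spanning_tree(verts, edges)
# A spanning tree of a connected n-vertex graph has n - 1 = 3 edges;
# the remaining 6 - 3 = 3 non-tree edges are where a routing scheme
# could place crossovers.
```

The tree defines a single closed route around the mesh for a circular scaffold strand, which is the intuition behind spanning-tree-based scaffold routing.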
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-151).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121818</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing scalable and modular technologies for continuous biopharmaceutical production</title>
<link>https://hdl.handle.net/1721.1/121817</link>
<description>Developing scalable and modular technologies for continuous biopharmaceutical production
Mozdzierz, Nicholas J. (Nicholas Joseph)
The existing biopharmaceutical manufacturing paradigm is poorly suited to produce biologic drugs on demand at a point-of-care. Generally, commercial-scale manufacturing using fed-batch cultivation and fixed infrastructure is concentrated in developed nations and results in process cycle times of weeks or months. Coupled with the complex logistical challenges associated with continuous 'plant-to-patient' cold-chains, the geographically biased nature of therapeutic protein production today can limit access to biologic drugs in developing areas of the world. These same logistical hurdles can also hamper the efficient distribution of life-saving protein therapeutics following crises in developed nations. Compounding these issues is the fact that lead times between bioreactor inoculation and patient dosing typically range from 6 to 12 months due to processing and regulatory constraints.; As such, there is an opportunity to create technologies capable of rapidly generating biopharmaceuticals in emergency situations and remote healthcare settings. A platform that couples modular flow-through bioreactors and purification systems with real-time feedback control has the potential to bridge this gap if developed in parallel with appropriate expression hosts. To this end, we first developed a state-of-the-art microfluidic perfusion process that supported sustained secretion of heterologous proteins from the yeast Komagataella phaffii. Using palm-sized bioreactors with a 1 mL cultivation volume, we showed that 1-10 adult doses worth of hGH or IFNα-2b could be manufactured in under 24 hours. Next, we reengineered an array of 1 L-scale stirred-tank bioreactors to operate under continuous perfusion conditions and integrate with custom-built reconfigurable chromatography systems.; Leveraging controllers designed in-house, we demonstrated that this system was capable of meeting the metabolic demands of high-density cultures of K. phaffii and preventing perfusion filter fouling.
We further showed the production of high-quality hGH and IFNα-2b via the direct transfer of cell-free perfused supernatant onto a chromatography system, and extended these results to the automated expression and purification of over 400 adult doses of hGH in under one week. Finally, we designed and built a scalable, tubular crystallizer that leverages continuous slug flow, directed ultrasonic irradiation, modular counter-current heat exchangers, and model-predictive control to tune the crystal size distributions of small molecules and proteins alike.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 327-353).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121817</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and application of a genetically-encoded probe for peroxiredoxin-2 oxidation in human cells</title>
<link>https://hdl.handle.net/1721.1/121816</link>
<description>Design and application of a genetically-encoded probe for peroxiredoxin-2 oxidation in human cells
Langford, Troy Frederick.
Hydrogen peroxide (H₂O₂) is a well-known oxidant species, commonly produced in eukaryotic organisms as a result of cellular metabolism, that plays a central role in numerous cellular processes; dysregulation of this species can result in a number of different disease states in human cells. In the case of cancer, elevated metabolism is believed to result in higher rates of H₂O₂ production in these cells, as well as greater susceptibility to H₂O₂-induced apoptosis than in normal cells. To this end, researchers have identified several therapeutic compounds that are believed to kill cancer cells via the intracellular elevation of one or more oxidants. However, due to the limitations of current tools for detection of these species, little is known about which therapeutic compounds induce toxicity via elevation of specific oxidants, knowledge which would aid in identifying tumors susceptible to these treatments.; Currently, the main limitation of genetically-encoded tools for detection of H₂O₂ in these applications is their low sensitivity to H₂O₂. Most genetically-encoded probes for this species used in human cells utilize H₂O₂-responsive domains with reaction rate coefficients nearly two orders of magnitude lower than those of other, more reactive peroxidases in the cell, such as peroxiredoxins (Prxs). In this regard, several studies have demonstrated that Prxs should react with the majority of intracellular H₂O₂, on the basis of their high reaction rate coefficient with H₂O₂ and their intracellular abundance. In light of these studies, research in the field of redox biology has shifted to focus more on Prxs' role as natural sensors of H₂O₂ fluctuations in human cells.
To this end, the first part of my thesis project focuses on the development of a genetically-encoded probe for H₂O₂-mediated human Prx2 oxidation in human cells.; The second part of my thesis focuses on the application of this probe in a high-throughput screen to identify small-molecule cancer therapeutics that act through H₂O₂-mediated mechanisms. Together, this thesis lays the foundation for a new class of genetically-encoded sensors that enable specific, sensitive measurement of H₂O₂ perturbations in human cells in response to redox-based therapeutics, which will facilitate the advancement of these therapeutic compounds in the future.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 83-101).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121816</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of optimal conditions and kinetic rate parameters in continuous flow systems with dynamic inputs</title>
<link>https://hdl.handle.net/1721.1/121815</link>
<description>Determination of optimal conditions and kinetic rate parameters in continuous flow systems with dynamic inputs
Aroh, Kosisochukwu C.
The fourth industrial revolution is said to be brought about by digitization in the manufacturing sector. According to this understanding, the third industrial revolution, which involved computers and automation, will be further enhanced with smart and autonomous systems fueled by data and machine learning. At the research stage, an analogous story is being told in how automation and new technologies could revolutionize a chemistry laboratory. Flow chemistry contrasts with traditional batch chemistry in that it facilitates process automation and, at small scales, delivers process improvements such as high heat and mass transfer rates. In addition to flow chemistry, analytical tools have also greatly improved and have become fully automated, with potential for remote control. Over the past decade, work utilizing optimization techniques to find optimal conditions in flow chemistry has become more prevalent.; In addition, the scope of reactions performed in these systems has also increased. In the first part of this thesis, the construction of a platform capable of performing a wide range of these reactions on the lab scale is discussed. This platform was built with the capability of performing global optimizations using steady-state experiments. The rest of the thesis concerns generating dynamic experiments in flow systems and using these conditions to gain more information about a reaction. The ability to use dynamic experiments to accurately determine reaction kinetics is first detailed. Through these experiments we found that only two orthogonal experiments were needed to sample the experimental space. After this, an algorithm that utilizes dynamic experiments for kinetic parameter estimation problems is described.
The approach here was to use dynamic experiments to first quickly sample the design space to get a reasonable estimate of the kinetic parameters.; Steady-state optimal design of experiments was then used to fine-tune these estimates. We observed that after the initial orthogonal experiments, only three more conditions were needed for accurate estimates in the multi-step reaction example. In a similar fashion, an algorithm for reaction optimization that relies on dynamic experiments is also described. The approach here extended adaptive response surface methodology, with dynamic orthogonal experiments performed in place of steady-state experiments. When compared to steady-state optimizations of multi-step reactions, a reduction by half in the time needed to locate the optimum is observed. Finally, the potential issues that arise when using transient experiments in automated systems for reaction analysis are addressed. These issues include dispersion, sampling rate, reactor sizes and the rate of change of transients.; These results demonstrate a way in which technological innovation could further revolutionize the chemistry laboratory. By combining machine learning, cloud computing and efficient, high-information experiments, reaction data could be quickly collected and the information gained maximized for future predictions or optimizations.
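The kinetic-parameter-estimation idea above can be illustrated with a deliberately small toy (a hypothetical first-order plug-flow model fit by grid-search least squares; the thesis's algorithms, reactions, and numbers are not represented here):

```python
import math

# Toy model: first-order reaction A -> B in an ideal plug-flow reactor.
# Conversion at residence time tau is X(tau) = 1 - exp(-k * tau).
# A "dynamic" experiment ramps tau continuously, yielding many
# (tau, conversion) pairs from a single run.

def conversion(k, tau):
    return 1.0 - math.exp(-k * tau)

k_true = 0.8                                   # assumed rate constant, 1/min
taus = [0.25 * i for i in range(1, 21)]        # ramped residence times, min
data = [conversion(k_true, t) for t in taus]   # noise-free for clarity

def sse(k):
    """Sum of squared errors between model and the recorded ramp data."""
    return sum((conversion(k, t) - y) ** 2 for t, y in zip(taus, data))

# Recover k by a coarse grid search over candidate rate constants.
k_hat = min((0.01 * j for j in range(1, 301)), key=sse)
```

With noise-free data the grid search recovers the assumed rate constant; in practice dispersion, sampling rate, and transient ramp speed (the issues the thesis addresses) distort the (tau, conversion) pairs and the fit.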
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 171-185).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121815</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compositional simulation in perception and cognition</title>
<link>https://hdl.handle.net/1721.1/121814</link>
<description>Compositional simulation in perception and cognition
Siegel, Max Harmon.
Despite rapid recent progress in machine perception and models of biological perception, fundamental questions remain open. In particular, the paradigm underlying these advances, pattern recognition, requires large amounts of training data and struggles to generalize to situations outside the domain of training. In this thesis, I focus on a broad class of perceptual concepts - those that are generated by the composition of multiple causal processes, in this case certain physical interactions - that humans use essentially and effortlessly in making sense of the world, but for which any specific instance is extremely rare in our experience. Pattern recognition, or any strongly learning-based approach, might then be an inappropriate way to understand people's perceptual inferences.; I propose an alternative approach, compositional simulation, that can in principle account for these inferences, and I show in practice that it provides both qualitative and quantitative explanatory value for several experimental settings. Consider a box with a number of marbles in it, and imagine trying to guess how many there are based on the sound produced when the box is shaken. I demonstrate that human observers are quite good at this task, even for subtle numerical differences. Compositional simulation hypothesizes that people succeed by leveraging internal causal models: they simulate the physical collisions that would result from shaking the box (in a particular way), and what those collisions would sound like, for different numbers of marbles. They then compare their simulated sounds with the sound they heard.; Crucially, these simulation models can generalize to a wide range of percepts, even those never before experienced, by exploiting the compositional structure of the causal processes being modeled, in terms of objects and their interactions, and physical dynamics and auditory events.
Because the motion of the box is a key ingredient in physical simulation, I hypothesize that people can take cues to motion into account in our task; I give evidence that people do. I also consider the domain of unfamiliar objects covered by cloth. A similar mechanism should enable successful recognition even for unfamiliar covered objects (like airplanes). I show that people can succeed in the recognition task, even when the shape of the object is very different when covered. Finally, I show how compositional simulation provides a way to "glue together" the data received by perception (images and sounds) with the contents of cognition (objects).; I apply compositional simulation to two cognitive domains: children's intuitive exploration (obtaining quantitative prediction of exploration time), and causal inference from audiovisual information.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 97-103).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121814</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory-based learning in humans and machines</title>
<link>https://hdl.handle.net/1721.1/121813</link>
<description>Theory-based learning in humans and machines
Tsividis, Pedro A.
Humans are remarkable in their ability to rapidly learn complex tasks from little experience. Recent successes in AI have produced algorithms that can perform complex tasks well in environments whose simple dynamics are known in advance, as well as models that can learn to perform expertly in unknown environments after a great amount of experience. Despite this, no current AI models are able to learn sufficiently rich and general representations so as to support rapid, human-level learning on new, complex tasks. This thesis examines some of the epistemic practices, representations, and algorithms that we believe underlie humans' ability to quickly learn about their world and to deploy that understanding to achieve their aims. In particular, the thesis examines humans' ability to effectively query their environment for information that helps distinguish between competing hypotheses (Chapter 2); children's ability to use higher-level amodal features of data to match causes and effects (Chapter 3); and adult human rapid-learning abilities in artificial video-game environments (Chapter 4). The thesis culminates by presenting and testing a model, inspired by human inductive biases and epistemic practices, that learns to perform complex video-game tasks at human levels with human-level amounts of experience (Chapter 5). The model is an instantiation of a more general approach, Theory-Based Reinforcement Learning, which we believe can underlie the development of human-level agents that may eventually learn and act adaptively in the real world.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121813</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction under uncertainty : from models for marine-terminating glaciers to Bayesian computation</title>
<link>https://hdl.handle.net/1721.1/121812</link>
<description>Prediction under uncertainty : from models for marine-terminating glaciers to Bayesian computation
Davis, Andrew D. (Andrew Donaldson)
The polar ice sheets have enormous potential impact on future global mean sea level rise. Recent observations suggest they are losing mass to the ocean at an accelerated rate. Skillful prediction of the ice sheets' future mass loss remains difficult, however; observations of key variables are insufficient and physical processes are poorly understood. Even when a relatively accurate dynamical model is available, computational limitations make it difficult to characterize uncertainties associated with the model's predictions. To address this prediction challenge, this thesis presents complementary developments in glaciology and in Bayesian computation.; In particular, (i) we develop new models of marine-terminating glaciers whose dynamics are controlled by an extended set of physical processes and geometric constraints; and (ii) we develop new sampling algorithms to efficiently characterize selected marginals of a high-dimensional probability distribution describing uncertain parameters. The latter algorithms have broader utility in Bayesian modeling and inference with computationally intensive models. We begin by studying laterally confined ice streams that terminate in the ocean, where they may form floating ice shelves. Such marine-terminating outlet glaciers are the main conduits by which Greenland and Antarctica drain their ice mass into the ocean. Ice shelves play an important role in buttressing the grounded inland ice. The seaward ice flow is typically accompanied by acceleration and thinning. Increased thinning eventually leads to flotation of the ice supported by buoyant forces from the ocean.; The transition region from grounded to floating ice is referred to as the grounding line (or zone), and the mass transport across the grounding line as the output flux. 
Previous work by Weertman (1974) and Schoof (2007) considers laterally unconfined ice streams, showing that their output flux is a monotonically increasing function of the bedrock depth at the grounding line. This scenario leads to the marine ice sheet instability (MISI): retreating into deeper water increases the output flux, and retreat accelerates. Therefore, stable steady states cannot exist on downward sloping beds. We extend this analysis to laterally confined glaciers and investigate when side-wall drag is sufficient to stabilize glaciers on downward sloping beds. Additionally, we include a parameterization of sub-shelf melt. We find that, whereas lateral drag can stabilize glaciers that would otherwise be subject to the MISI, sub-shelf melt can destabilize them.; Our ultimate goal is to predict future ice sheet volume and to quantify its uncertainty. We do so in the Bayesian statistical setting, conditioning our prediction on available observations. Yet characterizing a posterior distribution, for example with Markov chain Monte Carlo (MCMC), involves repeated evaluations of an ice stream model, which are prohibitively expensive. Furthermore, the model parameters that need to be inferred are high dimensional, even though we are primarily interested in a low dimensional quantity: the future ice volume. We address this computational challenge by developing new structure-exploiting Monte Carlo methods that combine marginalization with surrogate modeling. Given a high-dimensional (posterior) distribution on the model parameters, whose density evaluations are computationally intensive, we construct an MCMC chain that directly targets a particular low-dimensional marginal of interest. In general, the marginal density is not available analytically.; Instead, we can compute unbiased noisy estimates of this density. Our MCMC algorithm incrementally constructs a local regression approximation of the target marginal density using these estimates. 
Continual refinement of the approximation, as MCMC sampling proceeds, leads to an asymptotically exact characterization of the desired marginal distribution. Analysis of the bias-variance tradeoff guides an ideal refinement strategy that balances the decay rates of different components of the error. Our approach exploits regularity in the marginal density to significantly reduce computational expense relative to both full-dimensional and pseudo-marginal MCMC.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 255-266).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121812</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured learning and inference with neural networks and generative models</title>
<link>https://hdl.handle.net/1721.1/121810</link>
<description>Structured learning and inference with neural networks and generative models
Lewis, Owen, Ph. D., Massachusetts Institute of Technology.
Neural networks and probabilistic models have different and in many ways complementary strengths and weaknesses: neural networks are flexible and support efficient inference, but rely on large quantities of labeled training data. Probabilistic models can learn from fewer examples, but in many cases remain limited by time-consuming inference algorithms. Thus, both classes of models have drawbacks that both limit their engineering applications and prevent them from being fully satisfying as process models of human learning. This thesis aims to address this state of affairs from both directions, exploring case studies where we make neural networks that learn from less data, and in which we design more efficient inference procedures for generative models. First, we explore recurrent neural networks that learn list-processing procedures (sort, reverse, etc.), and show how ideas from type theory and programming language theory can be used to design a data augmentation scheme that enables effective learning from small datasets. Next, we show how error-driven proposal mechanisms can speed up stochastic search for generative model inversion, first developing a symbolic model for inferring Boolean functions and Horn clause theories, and then a general-purpose neural network model for doing inference in continuous domains such as inverse graphics.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-100).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121810</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, synthesis, and evaluation of insulin bioconjugates for the application of enhanced basal and glucose-responsive activity</title>
<link>https://hdl.handle.net/1721.1/121809</link>
<description>Design, synthesis, and evaluation of insulin bioconjugates for the application of enhanced basal and glucose-responsive activity
Cortinas, Abel Bryan.
Since its discovery by Banting and Best, the administration of exogenous insulin to control blood glucose levels has been a primary method of treatment for severe cases of diabetes mellitus. Several decades of insulin engineering and development have led to the clinical introduction of two broadly classified categories of protein therapies: prandial and basal insulins. Although these developments have had profound effects on disease management with respect to insulin-dependent diabetes, the overall strategy for development has historically been restricted to rational design criteria based on static pharmacodynamic profiles, profiles that are inherently naive to physiological changes in the diabetic patient. As a result, stringent patient-dependent regimens are the standard of care with regard to glycemic monitoring and management.; When coupled with issues such as patient noncompliance, severe hypoglycemia, as well as the adverse health effects that result from chronic mismanagement of hyperglycemia, it is obvious that there are still major hurdles that must be overcome to properly treat the disease. Herein, we introduce innovative strategies aimed towards the advancement of novel insulin bioconjugate design and development for enhanced long-term efficacy and glucose-responsive activity. First, we develop a class of unimolecular, glucose-responsive insulin conjugates towards the design of a state-responsive, patient-dependent therapy.; Optimization of this system resulted in the identification of a lead candidate, Ins-PL-4FPBA, capable of (1) glucose-mediated changes in solubility for long-term sequestration and intelligent depot formation, (2) superior safety in comparison to clinically used long-acting insulins, and (3) glucose-responsive pharmacokinetic and pharmacodynamic activity in both healthy and diabetic animal models. 
Next, we pioneer the design and synthesis of a novel class of sugar-responsive insulin conjugates, with the ultimate goal of developing an insulin bioconjugate capable of sugar-mediated receptor binding interactions. In this effort, we created dynamically cyclized insulin conjugates that were found to exhibit enhanced chemical stability and superior thermal stability relative to the native protein, as well as sugar-mediated destabilization, suggesting the potential to exploit the insulin receptor binding mechanism.; Lastly, we focus on improving basal activity of the insulin protein by utilizing a novel class of zwitterionic insulin polymer conjugates towards the generation of ultra long-acting insulin therapies. The resulting library is demonstrated to afford equivalent biological potency relative to native human insulin, augmented thermal and chemical stability capable of withstanding thermal aggregation for over 80 days, as well as ultra long-acting basal insulin potential.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 132-143).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121809</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Epistemologies of safety : a comparative study of contemporary French and American reactor design practices</title>
<link>https://hdl.handle.net/1721.1/121807</link>
<description>Epistemologies of safety : a comparative study of contemporary French and American reactor design practices
Verma, Aditi.
What makes for a safe nuclear power reactor? One school of thought (with significant traction within the academic nuclear engineering community) calls for using risk as a surrogate for safety. Safety then becomes a measurable quantity and can be specified in probabilistic terms and treated quantitatively. But do reactor designers in fact conceptualize safety in this way? I probe this question empirically through a comparative study of contemporary French and American reactor design practices. I examine the evolution of national nuclear safety regulators' conceptualizations of safety, industry norms, pedagogical context, the manner in which reactor designers make foundational early-stage design decisions, their use of formal models of risk and local heuristics, and finally their response to crisis (in this case the Fukushima Daiichi accident). I find that designers broadly take two different approaches to the early stages of design.; Some designers draw on and emulate aspects of previous designs on which they have worked and which have been developed within their companies. For these designers, improvements in safety, measured in quantitative terms as risk reductions, take the form of improvements to prior designs. Another set of designers, while also drawing on previous reactor designs, typically adopt a critical stance towards these designs. They frame the improvements in safety they hope to achieve in qualitative terms, and explore a wider range of design choices within what we might consider a wider design space. Taken together these findings suggest that the reactor designers' conceptualizations of safety are deeply nuanced. To the reactor designer, safety is not always conceptualized as risk and is often framed qualitatively rather than quantitatively. 
It is a property that is contingent on the designer's background and the design setting.; Further, the designers' own accounts of their work suggest that it is precisely these qualitative ways of thinking about the design problem that led them toward novel design ideas, and ultimately to achieving advances in safety. This suggests that designers should learn how to further draw on these non-analytical ways of thinking about safety. For the academic nuclear engineering community, these findings underscore the importance of the early, ideational stages of design. Further study and conceptualization of these early stages of design may be a fruitful area of inquiry for improving design outcomes. Finally, the varied and diverse accounts of safety held by the reactor designers studied in this work bear some similarities to the views of non-experts reported in the literature on risk perceptions.; Thus, for the design practitioner, these findings suggest the possibility of a richer dialog with the public on nuclear safety, going beyond the traditional emphasis on public education and securing public acceptance.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 191-213).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121807</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of extended defects on ion diffusion and reactivity in binary oxides : assessed by atomistic simulations</title>
<link>https://hdl.handle.net/1721.1/121806</link>
<description>Impact of extended defects on ion diffusion and reactivity in binary oxides : assessed by atomistic simulations
Sun, Lixin, Ph. D., Massachusetts Institute of Technology.
Extended defects, such as dislocations, grain boundaries and surface step edges, are ubiquitous in nanomaterials that function in electrochemical devices and heterogeneous catalysts. The objective of this thesis is to advance the understanding of the influence of extended defects on ion diffusion and surface reactivity. The thesis investigates the interplay between extended defects and point defects and how this interplay alters ion diffusion and surface reactivity of binary oxides using multiple advanced computational modeling methods. The findings provide physics-based insights into how to design the microstructure of materials in order to enhance the reactivity of materials in solid oxide fuel/electrolysis cells and catalysis. The work brings together computational techniques to address the chosen problems at suitable size and time scales.; In the first two studies, the goal is to resolve the distribution and transport kinetics of charged point defects, including both anions and cations, the latter being much slower than the former. The approach that is adapted for this problem area in this thesis is hybrid Monte Carlo and Molecular Dynamics simulations. The first study models an edge dislocation in reduced and doped ceria. The results showed that the dislocation redistributes point defects via elastic energy minimization and charged defect electrostatic association. The defect redistribution slows down oxide ion transport in the dislocation, contrary to the role of dislocations in accelerating atom migration in metals. The finding indicates that dislocations are detrimental for binary oxide solid electrolytes used for solid fuel cells, electrolyzers and membranes. In the second study, the interface between CeO₂ and Y₂O₃ was investigated.; The structural vacancies at the interface stabilize and attract Ce³⁺ ions and oxygen vacancies. 
This leads to 1-2 orders of magnitude higher oxygen non-stoichiometry at the interface at low temperature and in oxygen rich environments compared with that in bulk CeO₂. The enhanced interfacial oxygen capacity and the enhancement of electron polarons can be beneficial for catalyzing oxidation and reduction reactions and electron transport on ceria-based heterogeneous catalysts. The third study focuses on resolving whether dislocations could be sites to stabilize single-atom catalysts, which are attractive for catalysis but difficult to keep stable as single atoms at elevated temperatures. The approach here uses density functional theory to compute defect formation energy at the core of an edge dislocation in Cu-CeO₂ in comparison to that in the bulk.; The result shows that dislocations can enrich Cu defects in an atomically-sized area and also stabilize the catalytically active cation species Cu¹⁺ and Ce³⁺. This result indicates that dislocations in Cu-CeO₂ have great potential for anchoring single-atom catalysts and enhancing local reactivity. Lastly, a coupled quantum mechanics and molecular mechanics algorithm (QM/MM) is established and used to quantify the reactivity for the oxygen evolution reaction (OER) at corners and edges of realistic nanoparticles. A 9.5-nm anatase titania nanoparticle was simulated by this approach. The reaction energies of oxygen evolution at the corners and edges are found to be 0.1-0.5 eV lower than on the facets since the former have a stronger adsorption of hydroxyl. However, some of the active sites are also prone to form electron polarons which can recombine with holes and compromise the high OER activity.; By considering both factors, the most active structures are found to be the (101) facets and the edges between {101} facets. This work provides insights for designing nanoparticle shapes for better photocatalysts and also demonstrates the potential of using the QM/MM method to simulate nanoparticles with realistic sizes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 195-238).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121806</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on applied microeconomics and finance</title>
<link>https://hdl.handle.net/1721.1/121786</link>
<description>Essays on applied microeconomics and finance
Song, Fei, Ph. D., Massachusetts Institute of Technology.
This dissertation consists of four chapters. Chapter 1 studies the effect of online review manipulations on review systems. Chapter 2 and Chapter 3 are co-authored with Ali Kakhbod and focus on post-trade transparency in dynamic over-the-counter markets. Chapter 4 is co-authored with Umut Dur, Parag A. Pathak and Tayfun Sönmez and studies the effect of the Taiwan mechanism, a mechanism that allocates high school seats to applicants. Chapter 1 shows that the conventional impression holds in the short-run that review manipulation makes review systems less informative. In the long-run, however, a manipulated review system can contain the same level of information as an un-manipulated counterpart. I develop a dynamic programming model with fixed product quality and naive buyers who are unaware of manipulation. I then extend it to consider endogenous product quality and sophisticated buyers. I also identify an unexpected effect of a policy to target sellers and check for manipulation.; Chapter 2 studies how mandatory transparency (through TRACE), along with the long-term incentive of informed dealers, affects market price informativeness, liquidity and welfare in dynamic over-the-counter (OTC) markets. We show that the public disclosure of additional information about past trades, paradoxically, makes the markets more opaque, by reducing the market price informativeness. Thus, surprisingly, transparency requirements such as the U.S. Dodd-Frank Act may make markets more opaque. However, this market opacity creates liquidity and increases welfare. To enhance financial transparency and improve the price informativeness as well as the market liquidity and welfare, an effective approach is to randomly audit dealers. 
Chapter 3 then studies how public disclosure of past trade details affects price discovery dynamics under asymmetric information with heterogeneous hedging motives.; We model an informed buyer (informed trader) who sequentially trades with a series of uninformed sellers (hedgers). The informed buyer is forward-looking and risk-neutral, and uninformed sellers are myopic and heterogeneously risk-averse. We discover that sellers' price discovery over the underlying fundamentals is crucially affected by what they can observe about past trade details. Specifically, (i) post-trade price transparency delays price discovery, but once it happens, it is always perfect. (ii) In contrast, when only past order information is available, price discovery can never be perfect, and can even be in the wrong direction. (iii) The availability of past trade details, paradoxically, makes it easier for the informed buyer to hide her private information and offer opaque prices. We establish that, under some minor regularity conditions, our equilibrium characterization achieves the maximal degree of ignorance among all pure-strategy PBE.; Hence, this chapter can be viewed as a worst case analysis for regulators who care about market transparency. Moreover, we show that our findings are robust when the informed party's bargaining power decreases along the length of past trade history. Finally, we extend our results to the case where the informed buyer has a non-zero outside option, and the case where both parties switch their trading positions. Chapter 4 analyzes the properties of the Taiwan mechanism, used for high school placement nationwide starting in 2014. In the Taiwan mechanism, points are deducted from an applicant's score with larger penalties for lower ranked choices. Deduction makes the mechanism a new hybrid between the well-known Boston and deferred acceptance mechanisms. 
Our analysis sheds light on why Taiwan's new mechanism has led to massive nationwide demonstrations and why it nonetheless still remains in use.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 255-262).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121786</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endogenous and chemical modifications of model proteins</title>
<link>https://hdl.handle.net/1721.1/121784</link>
<description>Endogenous and chemical modifications of model proteins
Ressler, Valerie T.(Valerie Terynn)
Protein modifications are ubiquitous in nature, introducing biological complexity and functional diversity. Of the known post-translational modifications, glycosylation is one of the most common and most complex, yet some of the biological implications of this modification remain poorly understood. The development of chemical tools to mimic these modifications is helping to elucidate their biological roles and improve the range of biopharmaceuticals. To probe the biochemistry of endogenous glycosylation and to test the efficacy of novel synthetic modifications, tractable protein scaffolds are needed. Previously, members of the pancreatic-type ribonuclease (ptRNase) superfamily have been utilized as model protein scaffolds. They are a class of highly conserved, secretory endoribonucleases that mediate diverse biological functions through the cleavage of RNA.; The prototypical family homolog, human ribonuclease 1 (RNase 1), has been observed as a differentially glycosylated protein in vivo and been shown to tolerate a wide range of chemical manipulations. It has also emerged as an ideal candidate for protein-based drug therapy. The goal of this thesis is to showcase the biological potential of RNase 1 as a model endogenously glycosylated protein and as a protein payload for evaluating intracellular delivery systems. In CHAPTER 1, I summarize the current knowledge about ptRNases including their biochemical characterization, conservation of N-glycosylation, and therapeutic potential. RNase 1 possesses three N-glycosylation sites giving rise to enormous heterogeneity in biological samples, with unknown implications. In CHAPTER 2, I demonstrate that glycosylation of RNase 1 enhances protein stability and attenuates enzymatic activity.; In CHAPTER 3, I utilize a previously developed diazo compound to enhance delivery of a therapeutically relevant RNase 1 variant. 
The modification is shown to be reversed upon entry into the cell, presenting a novel approach for delivering native, functional proteins to the cytosol. Intracellular delivery of another model protein, Cytochrome C (CytoC), has shown therapeutic promise as well. In CHAPTER 4, I demonstrate that synthetic glycosylation with a large, monofunctionalized dextran conveys CytoC into the intracellular space, triggering apoptosis. Finally, CHAPTER 5 outlines future directions for the study of RNase 1 glycosylation and expanding the utility of the established diazo and dextran-based delivery systems. Taken together, this thesis explores a wide variety of protein modifications, demonstrating biochemical effects of endogenous glycosylation and enhanced delivery of protein payloads with chemical tools.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 220-236).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121784</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy storage and conversion applications of conductive metal-organic frameworks</title>
<link>https://hdl.handle.net/1721.1/121783</link>
<description>Energy storage and conversion applications of conductive metal-organic frameworks
Miner, Elise Marie.
Establishing catalytic structure-function relationships enables optimization of the catalyst structure for enhanced activity, selectivity, and durability against reaction conditions and prolonged catalysis. One class of catalysts that could benefit from systematic optimization is non-platinum group metal (non-PGM) electrocatalysts for the O₂ reduction reaction (ORR) to water (4e⁻ reduction) and / or hydrogen peroxide (2e⁻ reduction). The electrically conductive metal-organic frameworks (MOFs) M₃(HXTP)₂ (HXTP = 2,3,6,7,10,11-hexaimino or hexahydroxytriphenylene (HITP or HHTP, respectively)) feature a crystalline structure that contains homogeneously distributed, square planar transition metal sites reminiscent of those doped into carbonaceous media for ORR catalysis. Ni₃(HITP)₂ functions as an active and stable ORR electrocatalyst in alkaline medium.; Experimental and computational techniques enabled elucidation of the kinetics, mechanism, and active site for ORR with Ni₃(HITP)₂, as well as understanding the essential nature of the extended MOF structure in providing catalytic activity. Varying the metal and ligand combinations within this class of MOFs afforded two distinct phases. Probing the stability, catalytic activity, product distribution, and electronic properties of the two phases of MOFs identified phase-dependent catalytic activity, regardless of the metal or chelating atom identity. Since the birth of the first rechargeable battery in 1860, emerging battery technologies have provided both answers to energy demands and additional obstacles to navigate.; Recent works have explored using MOFs as ionically conductive solid-state electrolytes which would eliminate the need for volatile organic liquids and potentially offer a wider electrolyte potential window and means of controlling the plating of alkali metals during charging. 
This work has taken advantage of the modular charge found in a Cu-azolate MOF, wherein guest Cl⁻ ions coordinated to Cu₄-lined clusters can be washed out of the structure, and stoichiometric loadings of anions varying in size can be reconstituted into the MOF when soaking the MOF in solutions containing alkali or alkaline earth metal salts. The anions are held in place through coordination to the Cu²⁺ centers, thus enabling the charge-balancing metal cations to achieve high transference numbers within this solid electrolyte. Further, the versatility regarding the identity of the guest metal salt provides a handle for modulating the cation transport activation energy and ionic conductivity.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 182-200).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121783</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating exciton dynamics in organic semiconductors</title>
<link>https://hdl.handle.net/1721.1/121782</link>
<description>Simulating exciton dynamics in organic semiconductors
Lee, Chee Kong, Ph. D., Massachusetts Institute of Technology.
Organic semiconductors are carbon-based semiconductors with a number of unique benefits over traditional semiconductors, such as low production costs, versatile synthesis processes, and high portability. Unlike traditional crystalline semiconductors, which exhibit a high level of homogeneity, organic semiconductors are spatially and temporally heterogeneous due to weak van der Waals intermolecular forces. In this thesis we utilize computational and theoretical methods to investigate how this heterogeneity affects the electronic properties of organic semiconductors. In particular, we focus on two microscopic processes fundamental to the performance of organic semiconductors: the transport of Frenkel excitons and the dissociation of charge-transfer (CT) excitons. Frenkel excitons are tightly bound electron-hole pairs created upon photo-excitation of molecules, and they carry the excess energy imparted by photons. We employ a theoretical approach that combines molecular dynamics and semi-empirical electronic structure calculations to reveal the effects of molecular disorder on Frenkel exciton transport in oligothiophene-based molecular semiconductors. Using this approach, we find that the magnitude and details of molecular disorder (i.e., spatial and temporal correlations) can have a huge impact on exciton transport in this class of materials. CT excitons are electron-hole pairs partially separated across the donor-acceptor interface. To generate free charges, the oppositely charged electron and hole must overcome an electrostatic binding energy before they undergo ground-state recombination. We explore the CT exciton dissociation mechanism and magnetic field effects through a model of quantum spin dynamics combined with a stochastic coarse-grained model of charge transport. We demonstrate that simulations carried out on our model are capable of reproducing experimental results as well as generating theoretical predictions related to the efficiency of organic electronic materials.
Next, we consider the effect of disorder in electronic energy levels on the dissociation yield and demonstrate that it is maximized at a finite amount of disorder as a result of non-equilibrium effects.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 105-116).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121782</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functionalization of metal-organic frameworks with early transition metals : from fundamental studies to catalytic applications</title>
<link>https://hdl.handle.net/1721.1/121781</link>
<description>Functionalization of metal-organic frameworks with early transition metals : from fundamental studies to catalytic applications
Korzyński, Maciej Damian.
Metal-organic frameworks (MOFs) have established themselves as some of the most versatile materials available, with applications ranging from gas sorption to separation to sensing to catalysis. With a large abundance of structural motifs published to date, research efforts have shifted towards further framework elaboration via post-synthetic modification (PSM), a method to alter the chemical structure of preformed MOFs. The secondary building units (SBUs) of MOFs, which are commonly small inorganic clusters, have been particularly interesting targets for this synthetic approach. The aim of this thesis is to further our understanding of how metal cations interact with these inorganic nodes. Additionally, the node functionalization approach is used to synthesize novel catalysts for the olefin metathesis reaction. In Chapter 1, the reader is introduced to post-synthetic modification of MOFs with a focus on early transition metal species. A review of pertinent literature is presented. Chapter 2 describes how a desire to challenge the limits of the well-precedented cation exchange process led to a serendipitous discovery of a long-sought binding mode in the iconic MOF-5 system using NbCl₄(THF)₂ as a precursor of niobium. In Chapter 3, attention shifts from fundamental studies to the development of new catalysts for olefin metathesis, a process that to date has not been extensively studied in MOFs. After a short introduction to traditional olefin metathesis catalysis, the prospect of using the inorganic nodes of MOFs as supports akin to the classical platforms used in heterogeneous catalysis is explored. Chapter 4 expands the concepts developed in the previous chapter to rhenium oxide-based olefin metathesis, which is unique compared to catalysis using molybdenum and tungsten oxide systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 191-215).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121781</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular control of interfacial inner-sphere electron transfer</title>
<link>https://hdl.handle.net/1721.1/121780</link>
<description>Molecular control of interfacial inner-sphere electron transfer
Jackson, Megan N. (Megan Nora)
The interconversion of electrical and chemical energy relies on the intimate coupling of protons and electrons to activate small molecules including O₂, CO₂, and H₂O. The mechanistic pathways governing the proton- and electron-transfer steps are key to the efficiency of catalysis because they determine the rate and selectivity of the reaction. Energy conversion devices commonly use metallic heterogeneous electrocatalysts, so a deep understanding of the factors governing the rate and thermochemistry of interfacial proton transfer (PT) and electron transfer (ET) steps at electrode surfaces is critical for improving the efficiency and selectivity of energy conversion. This thesis focuses on developing a molecular-level understanding of interfacial inner-sphere PT and ET steps in two parts: (1) incorporation of molecular active sites into heterogeneous electrodes through graphite conjugation (Chapters 1-3), and (2) elucidation of the role of the proton donor in interfacial proton-coupled electron transfer steps through detailed kinetic studies (Chapters 4-6). Part 1 describes a class of catalysts that incorporate molecularly well-defined, highly tunable active sites into heterogeneous graphite surfaces. These graphite-conjugated catalysts (GCCs) feature a unique conjugated linkage between a discrete molecular active site and the delocalized states of graphitic carbons. Electrochemical and spectroscopic investigations establish that GCCs exhibit strong electronic coupling to the electrode, leading to ET behavior that diverges fundamentally from that of solution-phase or surface-tethered analogues. We find that: (1) ET is not observed between the electrode and a redox-active GCC moiety regardless of applied potential. (2) ET is observed at GCCs only if the interfacial reaction is ion-coupled. (3) Even when ET is observed, the oxidation state of a transition metal GCC site remains unchanged.
From these observations, we construct a mechanistic model for GCC sites in which ET proceeds exclusively through inner-sphere mechanisms. We additionally demonstrate that the catalytic hydrogen evolution reaction (HER) at a Rh-based GCC proceeds via concerted electron transfer and substrate activation, ruling out the stepwise pathway. In these respects, GCC active sites behave like metallic solids, but with an unprecedented level of molecular control. Using the GCC platform, we demonstrate that pKa is a useful thermodynamic descriptor for proton-coupled electron transfer (PCET) reactions at surface sites, just as it is for molecular sites, providing a framework with which to understand the thermochemistry of inner-sphere ET processes at electrode surfaces. Part 2 investigates the role of the explicit proton donor in interfacial PCET steps that result in metal-H bond formation by using the rate of HER on Au as a proxy for the rate of PCET to the Au surface. Detailed mechanistic studies revealed that in nonaqueous electrolytes, a trialkylammonium proton donor with a less bulky steric profile not only has overall faster HER kinetics than its bulkier counterpart, but also displays a greater transfer coefficient and a greater H/D kinetic isotope effect, demonstrating that even small changes to the identity of the proton donor can drastically affect the intrinsic kinetics of PCET at an interface. In aqueous electrolytes, we also observed a strong dependence on the identity of the proton-donating species in solution and found that only proton donors that can pre-associate with acceptor sites on the electrode surface can accelerate the rate of interfacial CPET. Finally, we used an innocent mixed-buffer electrolyte to probe the pH-dependence of steady-state HER current on Au and found that, surprisingly, the loss in catalytic efficiency occurs between pH 1 and pH 4 rather than systematically decreasing across the entire pH range.
These results highlight the crucial role of the proton donor in achieving selective and efficient electrochemical energy conversion across a range of solvent and pH conditions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121780</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A molecular investigation of the antimicrobial functions of the human S100 host-defense proteins</title>
<link>https://hdl.handle.net/1721.1/121779</link>
<description>A molecular investigation of the antimicrobial functions of the human S100 host-defense proteins
Cunden, Lisa Stephanie.
The human host is continually exposed to potentially harmful organisms, and the innate immune response is the first line of defense against microbial invasion. One strategy employed by the human innate immune system is the release of antimicrobial host-defense proteins (HDPs). The goal of this thesis is to understand the antimicrobial functions of four host-defense proteins of the S100 family: calprotectin (CP), S100A12, S100A7, and S100A15. In the first half of this thesis, we elucidate the Zn(II)-binding and antimicrobial properties of S100A12 and S100A7 through the use of solution and microbiology studies. We evaluate the affinity of S100A12 for Zn(II) and the scope of its antimicrobial activity, and put forward a model whereby S100A12 uses Ca(II) ions to tune its Zn(II)-chelating properties and antimicrobial activity. Our work with S100A7 demonstrates that the protein may exist in more than one redox state under physiological conditions, and that unlike CP and S100A12, the antimicrobial properties of S100A7 are not directly modulated by Ca(II) ions. We report a model whereby the local redox environment of S100A7 tunes its Zn(II)-sequestration capacity through intramolecular disulfide-bond redox chemistry, and that Ca(II) ions exert an indirect modulatory effect on the Zn(II)-binding properties of this protein. In the second half of this thesis, we examine the bactericidal properties of the four S100 proteins. Our results agree with prior work on the bactericidal properties of S100A7. Furthermore, we show that CP and S100A15, but not S100A12, possess bactericidal activity at pH 5, and that CP is a broad-spectrum Gram-negative bactericidal factor that functions through a mechanism of membrane permeabilization. Taken together, our studies provide new insights into the multifunctionality of the antimicrobial S100 HDPs.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121779</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed evolution in human cells via adenoviral replication</title>
<link>https://hdl.handle.net/1721.1/121778</link>
<description>Directed evolution in human cells via adenoviral replication
Berman, Chet Michael.
The discovery and optimization of biomolecules that reliably function in metazoan cells is imperative for both basic biological research and therapeutic development. Typically, researchers turn to directed evolution, either in vitro, in bacteria, or in yeast, to mutate, select, and amplify biomolecules of interest (BOIs) with new and highly tailored activities. Unfortunately, BOIs evolved in these environments often fail to function when translated into the complex environment of metazoan cells. Unique metazoan biology such as sophisticated proteostasis networks, complex cell signaling pathways, distinctive post-translational modifications, cellular trafficking, and highly regulated chromatin architecture can all derail the activity of BOIs evolved in simpler systems. Current approaches to directed evolution in metazoan cells rely on labor-intensive and time-consuming screening approaches that have a high potential for false positives. Robust approaches for directed evolution directly in human cells are profoundly needed. In this thesis, I describe the development, characterization, and application of a broadly applicable platform for directed evolution of diverse BOIs directly in human cells. The platform relies on a partially gutted adenovirus lacking multiple genes, including the essential DNA polymerase and protease genes, features that allow us to evolve BOIs encoded by genes as large as 7 kb while attaining the mutation rates and enforcing the selection pressure required for successful directed evolution of complex function. High mutagenesis rates are attained by trans-complementation of an engineered, highly error-prone form of the adenoviral polymerase.
Selection pressure that couples desired BOI functions to adenoviral propagation is achieved by linking the functionality of the encoded BOI to the production of adenoviral protease activity by the human cell. This platform makes it possible, in principle, to evolve any biomolecular activity that can be coupled to protease expression or activation by serially passaging adenovirus carrying the BOI. As proof-of-concept, we use the platform to evolve, directly in the human cell environment, several transcription factor variants that maintain high levels of function while gaining resistance to a small molecule inhibitor. We anticipate that this platform will substantially expand the repertoire of biomolecules that can be reliably and robustly engineered for both research and therapeutic applications in metazoan systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121778</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pathway and protein engineering for improved glucaric acid production in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/121776</link>
<description>Pathway and protein engineering for improved glucaric acid production in Escherichia coli
Guay, Lisa Marie,Ph. D.Massachusetts Institute of Technology.
Microbial fermentation is an attractive method for the renewable production of chemicals. Glucaric acid was identified as a "top value added chemical from biomass" by the Department of Energy in 2004, and a biological route for its production from glucose in E. coli was developed in our lab in 2009. Two of the pathway enzymes, myo-inositol phosphate synthase (MIPS) and myo-inositol oxygenase (MIOX), appear to control flux. This work addressed several limitations of these reactions. One approach was the relief of reactive oxygen species (ROS) to improve MIOX performance. MIOX converts myo-inositol (MI) to glucuronic acid. Overexpression of native catalase and superoxide dismutases led to significantly higher titers of glucuronic acid from MI. This result corresponded to better maintenance of MIOX activity and expression over the course of the fermentation. A reduction in labile iron levels, which are linked to ROS formation, was also shown to improve glucuronic acid titers. A second approach was the examination of natural MIPS diversity. MIPS competes with central carbon metabolism for its substrate, glucose-6-phosphate. Thirty-one representative MIPS homologs were selected using a sequence similarity network. Nineteen variants produced detectable MI from glucose, and H. contortus MIPS performed as well as or better than the current S. cerevisiae MIPS. Interesting differences in stability were identified between the variants, and further work to explore the network may yield more information about important sequence features. A third approach was the evaluation of screening methods for glucuronic and glucaric acid to support protein engineering. We attempted to extend a previous screen to growth from glucose, but while growth was achieved from MI, low flux appeared to prevent growth from glucose.
A previously developed biosensor based on the regulator CdaR was also tested. We discovered that the biosensor does not respond to glucaric acid but instead to a downstream metabolite, likely glycerate, and that the biosensor is affected by catabolite repression. While a reliable screen was not realized, our improved understanding of native regulation aids in the identification of alternative strategies. This work overall produced significant improvements in the glucaric acid pathway and helped to identify opportunities for further development.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 115-124).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121776</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous ion-selective separation by shock electrodialysis</title>
<link>https://hdl.handle.net/1721.1/121775</link>
<description>Continuous ion-selective separation by shock electrodialysis
Conforti, Kameron Michael.
Cleaning water remains a challenge across sectors and across the globe. Many go without access to clean drinking water simply because the technologies that exist are too expensive in capital or energy. Areas thought to have reliably safe water can be betrayed by aging water infrastructure and exposed to hazardous contaminants. In addition to the need to purify drinking water is the necessity to treat wastewater produced by chemical or energy plants. For a long time, reverse osmosis has been used as a catch-all technology for the robust treatment of contaminated water. That robustness comes at the cost of high energy requirements and membranes that can foul quickly under harsh conditions. For low-salinity separations or separations that target specific ions in solution, there may be a better technological fit. In this thesis, shock electrodialysis (SED) is demonstrated to achieve highly selective continuous removal of magnesium ions from an aqueous mixture of NaCl and MgCl₂. To explore this phenomenon, the SED device has all of its inputs and outputs characterized to determine internal flows of fluid and ions. This careful study provides valuable insight into the mechanisms that drive selectivity, current efficiency, and desalination, as well as potential methods to improve performance. The selectivity comes as a result of the deionization shock and associated depletion region in a negatively charged porous frit. For solutions initially rich in sodium and dilute in magnesium, high (&gt; 98%) removal of magnesium can be achieved with only moderate (50-70%) removal of total salt. Dilute lead is also shown to be selectively removed from a mixture of NaCl and PbCl₂. A high removal of lead (90%) can be achieved at very low total desalination (&lt; 25%).
The final section of this thesis covers work on a related electrochemical technology that also utilizes current applied perpendicular to flow: flow batteries. A membraneless hydrogen bromine flow battery is developed, achieving record cycling and power density for a membraneless flow system.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 155-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121775</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and large deformation response of additively-manufactured shell-lattices</title>
<link>https://hdl.handle.net/1721.1/121763</link>
<description>Design and large deformation response of additively-manufactured shell-lattices
Bonatti, Colin.
Open-cell cellular solids are porous structures with a large variety of applications, from energy absorption to medical engineering. In an attempt to identify isotropic configurations with high mechanical properties at both low and large strains, detailed numerical simulations are conducted on a wide range of mesostructures of cubic symmetry. Results are partially validated through uniaxial compression of specimens made of 316L stainless steel via selective laser melting. In a first study, the large deformation responses of four different mesostructures of relative density 20% are compared: an octet truss-lattice, a tube-lattice, a sphere assembly, and a tube/sphere hybrid. It is concluded that periodic shell structures provide superior strength and energy absorption capacity for the same weight, as compared to truss-lattices. Another conclusion is that to avoid concentrations of plastic strains that are detrimental to the overall energy absorption of the structure, it is best to avoid peaks in curvature. Based on these conclusions, a shell-lattice is developed that resembles a smoothened Triply Periodic Minimal Surface of FCC symmetry. Its properties are compared to those of the octet-truss for a wide range of relative densities, revealing the shell-lattice as superior to the octet-truss in almost all cases. The FCC shell-lattice is then compared to its BCC and SC equivalents. It is found that the structures all present highly anisotropic properties. For a given structure, directional difference factors of up to 4.1 in uniaxial stiffness, 2 in yield strength, and 1.8 in specific energy absorption are observed. However, the directional averages of their properties are very similar. Irrespective of the specific type of cubic symmetry, the shell-lattices are remarkably stiff, strong, and energy-absorbing, particularly at relative densities above 0.1.
To address the problem of anisotropy, novel families of shell-lattices that contain the previous examples are proposed. Design maps are established and reveal that the elastic anisotropy of shell-lattices can be conveniently tailored. As a result, isotropic topologies are identified. The elastically-isotropic shell-lattices feature overall performance similar to that of their TPMS-like counterparts, as well as significantly reduced plastic anisotropy. The structures obtained are believed to be the best-performing open-cell topologies to date.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 179-185).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121763</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetics of surface growth and coupled diffusion in a continuum framework</title>
<link>https://hdl.handle.net/1721.1/121762</link>
<description>Kinetics of surface growth and coupled diffusion in a continuum framework
Abi-Akl, Rami.
Surface growth by association or dissociation of material on the boundary of a body is ubiquitous in both natural and engineering systems. It is the fundamental mechanism by which biological materials grow, starting from the level of a single cell, and is increasingly applied in engineering processes for fabrication and self-assembly. A significant challenge in modeling such processes arises due to the inherent coupled interaction between the growth kinetics, the local stresses, and the diffusing constituents needed to sustain the growth. Moreover, the volume of the body changes not only due to surface growth but also by variation in solvent concentration within the bulk of the body. In this thesis we present a general theoretical framework that captures these phenomena and describes the kinetics of surface growth while accounting for coupled diffusion. We then use a combination of analytical and numerical tools to study growth in simple geometries. In the particular problems of growth on flat and on spherical surfaces, we show that the growth process typically involves two stages. Initially, the body deforms, primarily due to diffusion with minimal growth. This is followed by a second stage during which surface growth and diffusion act harmoniously. It is shown that during this latter stage the body evolves along a 'universal path' that is independent of initial conditions. We make use of this path to analytically describe the evolution of a body from inception up to treadmilling, the latter being a steady state in which the addition and removal of material are balanced. Among the wide spectrum of possible applications for our general continuum framework, we chose to examine in this thesis a specific problem of high interest in microbiology.
In this context, we explore the kinetics of degradation of biomatter in the natural environment. This process plays a key role in the global carbon cycle; it has not yet been thoroughly elucidated due to the complexity that arises from the coupling of mechanisms at the interface of biology, chemistry, mechanics, and physics. Micro-scale experiments performed in a highly controlled fashion reveal highly nonlinear degradation kinetics that classical scaling arguments fail to explain. It is shown that our model captures the complexity of the experimental observations. It can reveal distinct signatures of different bacterial strains and can thus predict degradation processes that are beyond the reach of the laboratory setting.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-173).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121762</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formation and maintenance of tropical cyclone spiral bands in idealized numerical simulations</title>
<link>https://hdl.handle.net/1721.1/121760</link>
<description>Formation and maintenance of tropical cyclone spiral bands in idealized numerical simulations
Perez-Betancourt, Diamilet.
Spiral bands are one of the most prominent features of tropical cyclones (TCs). These regions of clouds and rainfall are often the source of major TC hazards, such as inland flooding, mudslides, and tornadoes. Since the advent of radar technology, numerous ideas have been proposed to explain the existence of TC spiral bands. Previous hypotheses include the manifestation of atmospheric waves emanating from the TC inner core, boundary layer instabilities, and the interaction between surface cold pools and low-level vertical wind shear. Despite much effort, no consensus has yet been reached on the underlying physical mechanism responsible for TC bands. We approach this problem by examining the formation of TC spiral bands in a set of idealized three-dimensional simulations from the System for Atmospheric Modeling. The simulations are run with doubly-periodic horizontal boundaries, fixed sea surface temperature (300-301 K), interactive surface fluxes, and a constant rotation rate corresponding to latitude 15°N. No background mean flow is imposed on the TC runs. We find that, in simulations with full moist physics and interactive radiative fluxes, spiral bands are consistently collocated with surface cold pools and aligned normal to the low-level wind shear, similar to tropical squall lines. However, convection still organizes into spiral bands in simulations in which surface cold pools are suppressed. Non-rotating experiments with imposed background wind shear taken from a TC simulation suggest that, in the absence of surface cold pools, vortex dynamics are necessary for convection to align into spiral bands. Finally, we examine numerical simulations of TC-like vortices that develop over a completely dry surface. We find that these dry TCs also exhibit many spiral bands in the wind and temperature fields extending far away from the inner core.
Initially, these perturbations are nearly stationary and exhibit overturning circulations consistent with boundary layer instabilities. Barotropic-baroclinic instability dominates the TC structure later in the simulation, reducing the outer region to just a few spiral bands.
Thesis: Ph. D. in Atmospheric Science, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-114).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121760</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Processes and rates of arc crust growth and differentiation in the Southern Sierra Nevada crustal section</title>
<link>https://hdl.handle.net/1721.1/121759</link>
<description>Processes and rates of arc crust growth and differentiation in the Southern Sierra Nevada crustal section
Klein, Benjamin Zachary.
This thesis presents a multidisciplinary investigation of the processes and timescales for the construction of arc crust, with a focus on the exposed crustal section in the southernmost Sierra Nevada Batholith, California. This section exposes plutons that were emplaced at pressures ranging from 3 to 10 kbar, as well as metamorphic wall rocks. Chapters 1 and 2 represent focused studies of the Bear Valley Intrusive Suite (BVIS), the dominant igneous component of the crustal section. Chapter 1 presents new magmatic structural data and whole rock geochemical data that highlight a discontinuity in the BVIS between a lower crust dominated by originally shallowly lying mafic cumulates and an upper crust dominated by steeply oriented felsic intrusives. These observations are used to constrain the thermal state of the arc during the emplacement of the BVIS. Chapter 2 is a high-precision CA-ID-TIMS U/Pb zircon geochronology study of the BVIS. This study shows that the entire BVIS was emplaced within 1.1 million years, and thus represents the highest documented (intrusive) subduction zone magmatic flux. Chapter 3 focuses on the contribution of the metamorphic wall rocks to the observed crustal section. Using detrital zircon geochronology, I argue that these wall rocks preserve an inverted stratigraphy that is most easily explained if these sediments were first subducted and subsequently returned as relaminated material, which would make these materials the first in situ example of relaminated sediments. Chapters 4 and 5 present broader studies of subduction zone processes in space and time.
In Chapter 4, I present a study based on a global compilation of modern arc lavas. This study develops new proxies that use distinctive major element trends produced by fractionating magmas to qualitatively constrain the hydration state and initial fractionation pressure of differentiating magmas, and finds that magmas in continental arcs typically evolve at wetter and higher-pressure conditions compared to island arcs. Finally, Chapter 5 investigates the dynamics of subducted slabs through Earth's history and finds that, based on anticipated higher mantle temperatures and concomitantly thicker, more mafic oceanic crust, subducted slabs in the Archean are unlikely to have stagnated within or immediately below the mantle transition zone.
Thesis: Ph. D. in Geology, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121759</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction between water vapor, radiation and convection in the tropics</title>
<link>https://hdl.handle.net/1721.1/121758</link>
<description>Interaction between water vapor, radiation and convection in the tropics
Beucler, Tom (Tom George)
The spatiotemporal variability of water vapor near the Equator remains poorly understood because convective organization simultaneously spans the cloud scale (~10 km) and the planetary scale (~10,000 km). Spatiotemporal variability of tropical water vapor may result from internal instabilities of the atmosphere, arising from the interaction between water vapor, radiation, and convection. The present work leverages the instability of radiative-convective equilibrium, the most fundamental state of the tropical atmosphere, to connect convective organization in cloud-permitting models with the observed variability of water vapor through common physical mechanisms. First, we propose a simple theory that explains when instability of radiative-convective equilibrium may occur: if the total atmospheric cooling decreases with column water vapor, then radiative-convective equilibrium may be unstable to the growth of moist and dry perturbations. Second, we combine a linear response framework with the weak temperature gradient approximation to analyze the interaction between convection, radiation, and water vapor at each level of the atmosphere. We find that convection may interact with radiation to trigger the growth of mid-tropospheric water vapor anomalies by transporting water vapor to the upper troposphere, where it can prevent lower-tropospheric water vapor from radiatively cooling to space. Third, we turn to the spatial organization of water vapor anomalies and relate the evolution of the size of moist and dry regions to diabatic fluxes in twenty cloud-permitting simulations on large domains. 
Longwave radiation from ice clouds aggregates convection at larger scales, shortwave radiation aggregates convection at smaller scales, and surface enthalpy fluxes smooth out water vapor anomalies through their enthalpy disequilibrium component. Finally, we relate the transient zonal variability of precipitable water to convective-aggregation mechanisms in realistic models and observations of the atmosphere. Radiative fluxes generate transient water vapor structures of planetary scale, while surface enthalpy fluxes and horizontal energy transport act to smooth out these structures, suggesting parallels between observations and idealized simulations of aggregated convection.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 227-251).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121758</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of a fast Helmholtz solver in exploration seismology</title>
<link>https://hdl.handle.net/1721.1/121757</link>
<description>Applications of a fast Helmholtz solver in exploration seismology
Ely, Gregory Tsiang.
Seismic imaging techniques rely on a velocity model inverted from noisy data via a non-linear inverse problem. This inferred velocity model may be inaccurate and lead to incorrect interpretations of the subsurface. In this thesis, I combine a fast Helmholtz solver, the field expansion method, with a reduced velocity model parameterization to address the impact of an uncertain or inaccurate velocity model. I modify the field expansion framework to accurately simulate the acoustic field for velocity models that commonly occur in seismic imaging. The field expansion method describes the acoustic field in a periodic medium in which the velocity model and source repeat infinitely in the horizontal direction, much like a diffraction grating. This Helmholtz solver achieves significant computational speed by restricting the velocity model to consist of a number of non-overlapping piecewise layers. I modify this restricted framework to allow for the modeling of more complex velocity models with dozens of parameters instead of the thousands or millions of parameters used to characterize pixelized velocity models. This parameterization, combined with the speed of the forward solver, allows me to examine two problems in seismic imaging: uncertainty quantification and benchmarking global optimization methods. With the rapid speed of the forward solver, I use Markov Chain Monte Carlo methods to estimate the non-linear probability distribution of a 2D seismic velocity model given noisy data. Although global optimization methods have recently been applied to the inversion of seismic velocity models using raw waveform data, it has been impossible to compare various types of algorithms and the impacts of parameters on convergence. 
The reduced forward model presented in this thesis allows me to benchmark these algorithms and objectively compare their performance to one another. I also explore the application of these and other geophysical methods to a medical ultrasound dataset that is well approximated by a layered model.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 141-150).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121757</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Negotiated developments : exploring the trends, efficacy, and politics of negotiating zoning on a project-by-project basis</title>
<link>https://hdl.handle.net/1721.1/121756</link>
<description>Negotiated developments : exploring the trends, efficacy, and politics of negotiating zoning on a project-by-project basis
Kim, Minjee.
Large-scale real estate developments present unique regulatory challenges for local governments, prompting them to employ non-traditional, negotiation-based zoning approaches that offer flexibility unattainable via conventional zoning. Existing planning literature falls short of answering at least three broad areas of inquiry that can help local governments navigate this challenge. First, there is a general lack of understanding of if, when, and how local governments use negotiation-based zoning. Second, little empirical research thus far has examined the negotiated outputs. Last, the politics of negotiated developments (who participates in and influences these negotiations, and under what conditions) also remains largely unknown. Each of these research areas is taken up in the three papers that comprise this dissertation. The first paper surveys the current state of zoning practices: I investigate the experiences of Boston, Chicago, New York, San Francisco, and Seattle to explore if, when, and how they have negotiated zoning on a project-by-project basis. The second paper identifies the gains and losses of using a negotiation-based approach vis-a-vis zoning that closely adheres to the rule of law. I compare the experiences of Boston and Seattle in more detail to explore this subject. The third and final paper delves deep into the micro-politics of negotiations for the largest private development in Boston to expose who actually influenced the negotiations and whether public participation mattered in the process. I find that all five cities employed a negotiation-based zoning approach for large-scale developments, but their attitudes towards negotiation varied widely from city to city and even within a city. I further establish that cities are likely to obtain more substantial public benefit packages when they negotiate zoning, but that there may be profound structural consequences to pursuing a regulatory regime heavily based on negotiations. 
Moreover, I provide empirical evidence that the process of negotiation can in fact accommodate meaningful public participation. Negotiated developments can become valuable opportunities for local governments to implement important planning objectives when they are used selectively and when the negotiation process is administered in a transparent and communicative manner.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 180-188).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121756</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental politics in a polarized America : public mood and policy consequences</title>
<link>https://hdl.handle.net/1721.1/121755</link>
<description>Environmental politics in a polarized America : public mood and policy consequences
Bergquist, Parrish (Sarah Parrish)
As the American political parties have polarized and nationalized, what are the implications for environmental policy? This question is particularly important at the state and local levels, where many environmental policy decisions are made and implemented, but about which scholars have drawn mixed conclusions. This dissertation enters the debate to expand understanding of the parties' role in state-level regulatory enforcement; describe and assess changing public attitudes about environmental protection; and deeply explore local perceptions of an important type of environmental disruption: energy infrastructure. I begin by exploring the public basis for environmental protection. In paper one, I estimate state-level public opinion about environmental protection from the late 1970s through 2016. I show that regional differences in public views about environmental protection have declined, whereas state publics have sorted more cleanly into partisan camps in every state. I also find that economic tradeoffs have become more important in shaping Americans' environmental views. These data provide a crucial foundation for assessing the evolution of the state and national parties' positions on environmental protection, and for exploring the elite rhetoric that may explain the shifting drivers of public environmental preferences. In the second paper, I ask how party control of state government institutions influences regulatory enforcement in the U.S. Despite growing evidence for the parties' influence across the slate of policy issues, scholars have drawn divergent conclusions regarding the parties' impact on state environmental policy. 
I apply a regression discontinuity design to assess whether party control of state houses and governors' mansions causes a meaningful change in Clean Air Act enforcement between 2000 and 2017. The findings suggest that narrowly elected Republican governors and legislative majorities reduce enforcement effort, and that the two branches' influence differs according to their distinct mechanisms of political control over the bureaucracy. Paper three moves beyond public attitudes about environmental topics in the abstract to assess local views of one particularly salient environmental topic: energy. Public views of energy technologies are critical to the United States' energy future, but party and ideology explain little of Americans' views of the energy system. I apply a framework rooted in social psychology to explain how sense of place shapes residents' interpretations and evaluations of large-scale energy transmission infrastructure as a threat or an opportunity.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Political Science and Urban and Regional Planning, Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019; "Submitted to the Department of Urban Studies and Planning, Department of Political Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Political Science and Urban and Regional Planning." Cataloged from PDF version of thesis.; Includes bibliographical references (pages A-57 to A-82).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121755</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Type system for resource bounds with type-preserving compilation</title>
<link>https://hdl.handle.net/1721.1/121730</link>
<description>Type system for resource bounds with type-preserving compilation
Wang, Peng, Ph. D., Massachusetts Institute of Technology.
This thesis studies the problem of statically bounding the resource usage of computer programs, from programs written in high-level languages to those in assembly languages. Resource usage is an aspect of programs not covered by conventional software-verification techniques, which focus mostly on functional correctness, but it is important because when resource usage exceeds the programmer's expectation by a large amount, user experience can be disrupted and large fees (such as cloud-service fees) can be charged. I designed TiML, a new typed functional programming language whose types contain resource bounds; when a TiML program passes the typechecking phase, upper bounds on its resource usage can be guaranteed. TiML uses indexed types to express sizes of data structures and upper bounds on the running time of functions, and refinement kinds to constrain these indices, expressing data-structure invariants and pre/post-conditions. TiML's distinguishing characteristic is supporting highly automated time-bound verification applicable to data structures with nontrivial invariants. Type and index inference are supported to lower the annotation burden, and, furthermore, big-O complexity can be inferred from recurrences generated during typechecking by a recurrence solver based on heuristic pattern matching. I also designed a typed assembly language with resource bounds, and a type-preserving compiler that compiles well-typed TiML programs into well-typed assembly programs conforming to the same bounds. Typechecking at the assembly level re-establishes the soundness of the bounds, and the types can serve as resource-usage certificates for the assembly programs. I used Ethereum smart contracts as a real-world application of the techniques developed in this thesis. 
The assembly language I designed, TiEVM, is a typed version of the Ethereum Virtual Machine (EVM) bytecode language. I demonstrate that TiML can be used as a new language to write smart contracts, and that the generated TiEVM code is equipped with types proving that its resource usage - "gas" in Ethereum terminology - is bounded.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-168).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121730</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Architectural techniques to unlock ordered and nested speculative parallelism</title>
<link>https://hdl.handle.net/1721.1/121729</link>
<description>Architectural techniques to unlock ordered and nested speculative parallelism
Subramanian, Suvinay.
Current multicores suffer from two major limitations: they can only exploit a fraction of the parallelism available in applications, and they are very hard to program. This is because they are limited to programs with coarse-grained tasks that synchronize infrequently. However, many applications have abundant parallelism when divided into small tasks (of a few tens to hundreds of instructions each). Current systems cannot exploit this fine-grained parallelism because synchronization and task management overheads overwhelm the benefits of parallelism. This thesis presents novel techniques that tackle the scalability and programmability issues of current multicores. First, Swarm is a parallel architecture that makes fine-grained parallelism practical by leveraging order as a general synchronization primitive. Swarm programs consist of tasks with programmer-specified order constraints. Swarm hardware provides support for fine-grained task management, and executes tasks speculatively and out of order to scale. Second, Fractal extends Swarm to harness nested speculative parallelism, which is crucial to scale large, complex applications and to compose parallel speculative algorithms. Third, Amalgam makes more efficient use of speculation resources by splitting and merging address set signatures to create fixed-size units of speculative work. Amalgam can improve performance and reduce implementation costs. Together, these techniques unlock abundant fine-grained parallelism in applications from a broad set of domains, including graph analytics, databases, machine learning, and discrete-event simulation. At 256 cores, our system is 40x to 512x faster than a single-core system and outperforms state-of-the-art software-only parallel algorithms by one to two orders of magnitude. Besides achieving near-linear scalability, the resulting programs are almost as simple as their sequential counterparts, as they do not use explicit synchronization.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-144).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121729</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Terahertz laser frequency combs : devices and applications</title>
<link>https://hdl.handle.net/1721.1/121728</link>
<description>Terahertz laser frequency combs : devices and applications
Yang, Yang, Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
In recent years, there has been growing interest in chip-scale frequency combs, such as micro-resonator combs and semiconductor mode-locked sources. From the mid-infrared to the terahertz regime, it has been shown that quantum cascade lasers (QCLs) are capable of forming a frequency comb state in which dispersed modes of the Fabry-Perot cavity are synchronized by third-order nonlinearity. With proper dispersion engineering, we have shown that it is possible to create QCL frequency combs at terahertz wavelengths, which possess broadband coverage in a compact package. These QCL combs are particularly attractive as sources for high-sensitivity laser spectroscopy: by using a dual-comb technique, it is possible to perform broadband spectroscopy using only chip-scale components, making it an intriguing candidate for spectroscopic applications in the open field. Moreover, due to the semi-continuous nature of the temporal output from such combs, tracking the instantaneous phase and timing signals of the dual-comb waveform in the time domain becomes feasible. This enables a computational coherent averaging scheme for the dual-comb signal even without an external reference. The first part of this thesis describes the development of better THz laser frequency combs. To realize the full potential of such devices for spectroscopy, improvements in three main areas are desired. First, the laser device should have a robust comb state that ideally can operate from the device's threshold current, I[subscript th], to its maximum current, I[subscript m]. In addition, the comb state should have broad spectral coverage: its bandwidth should cover at least an octave span to stabilize its carrier offset. Furthermore, the comb state should have flexible tunability that allows tuning across the entire free spectral range for gapless sensing. 
All listed aspects are investigated during the course of this thesis, and as a result, a THz QCL device featuring comb-state performance over the entire lasing bias range has been demonstrated. Meanwhile, we show that, by compensating cavity dispersion up to higher orders, the comb bandwidth from fully dispersion-compensated devices can reach 80% of the gain bandwidth. One common method to achieve very broadband coverage relies on heterogeneous gain media. This comes at the cost of reduced peak gain and hampered temperature performance. Also, engineering the dispersion of such a broadband gain medium becomes extremely challenging, so its broader gain might not lead to broader comb coverage. However, a unique feature of the metal-metal waveguide is that it is completely agnostic about its bonded gain media. Therefore, it is possible to bond multiple gain media together onto the same chip. The lateral heterogeneous integration scheme is investigated as an alternative method to expand the comb's spectral coverage. We show that using this strategy we can couple the output of combs at vastly different wavelengths without trading off temperature performance, while maintaining a compact package. Dual-comb spectroscopy allows high-resolution spectra to be measured over broad bandwidths, but in order to achieve high resolution and acquire low-uncertainty spectroscopic information, the capability for coherent averaging is of utmost importance. An essential requirement for coherent averaging is the availability of a phase reference. Usually, this means that the combs' phase and timing errors must be measured and either minimized by stabilization or removed by correction. These hardware-based solutions often require extra electronic or optical components, which complicates the overall system and further limits the technique's applicability. 
We demonstrate that it is possible to extract the phase and timing signals of a multiheterodyne spectrum in a completely computational fashion without any extra measurements, which can potentially simplify any dual-comb system. Other work in this thesis includes the first proof-of-principle demonstration of THz dual-comb spectroscopy using laser frequency combs, THz hyper-spectral imaging for pharmaceutical compound identification, and exploratory work on the development of the germanium-on-gallium-arsenide platform, a passive on-chip platform showing potential to bridge the THz and mid-infrared regimes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-162).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121728</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic domain wall devices : from physics to system level application</title>
<link>https://hdl.handle.net/1721.1/121727</link>
<description>Magnetic domain wall devices : from physics to system level application
Siddiqui, Saima Afroz.
Spintronics promises intriguing device paradigms where electron spin is used as the information token instead of its charge counterpart. Spin-transfer-torque magnetic random access memory (STT-MRAM) is considered one of the most mature nonvolatile memory technologies for next-generation computers. Spin-based devices also show promise for beyond-CMOS computing, in-memory computing, and neuromorphic accelerators. In the coming cognitive era, nonvolatile memories hold the key to solving the computational bottleneck caused by data shuttling between the processing and memory units. The application of spintronic devices for these purposes requires a versatile, scalable device design that is adaptable to emerging material physics. We design, model, and experimentally demonstrate spin-orbit-torque-induced magnetic domain wall devices as the building blocks (i.e. a linear synaptic weight generator and a nonlinear activation function generator) for in-memory computing, in particular for artificial neural networks. Spin-orbit-torque-driven magnetic tunnel junctions show great promise as energy-efficient emerging nonvolatile logic and memory devices. In addition to their energy efficiency, we take advantage of spin-orbit-torque-induced domain wall motion in magnetic nanowires to demonstrate a linear change in the resistances of the synaptic devices. By modifying the spin-orbit torque from a heavy metal or utilizing the size-dependent magnetoresistance of tunnel junctions, we also demonstrate a nonlinear activation function for thresholding signals (analog or digitized) between layers for deep learning. The analog modulation of resistances in these devices requires characterizing the resolution of the resistance. Since the domain wall in a magnetic wire is the nonvolatile data token for these devices, we study the spatial resolution of discrete magnetic domain wall positions in nanowires. 
The domain wall studies are further extended to identify superior magnetic materials, both energy-efficient and dynamically robust, for ultra-fast and efficient neuromorphic accelerator devices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121727</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>In-situ depth monitoring for a deep reactive ion etcher using a white light interferometer with active vibration cancellation</title>
<link>https://hdl.handle.net/1721.1/121726</link>
<description>In-situ depth monitoring for a deep reactive ion etcher using a white light interferometer with active vibration cancellation
Teale, Carson (Carson Arthur)
Standard process development for micro- and nanofabrication etching technologies relies on open-loop trial-and-error testing of recipes to achieve optimal etch depths and uniformities. This strategy is inefficient for research and fabrication of novel devices, where one-of-a-kind experiments cannot justify lengthy process development times. This thesis describes the development of an in-situ depth measurement device for real-time feedback of etch depth and uniformity. This device will help facilitate far shorter process development times, potentially enabling the desired etch to be achieved on the first process run. The depth imager consists of a wide-field white light interferometer with a 12" working distance, capable of imaging across a 1/2" field of view. Active feedback from a co-propagating laser interferometer is used to stabilize the system against vibrations through a feedback loop that controls the position of the reference mirror using a piezo actuator. This scheme ties the accuracy of the white light depth scan to the stability of the laser wavelength, allowing for accurate step sizes without the need for an expensive scanning stage. The well-defined sampling period allows for phase-sensitive detection of the white light interference signal, reducing amplitude fluctuations from plasma emissions. This design is able to image deep trenches with optically rough surfaces, etched directly into a silicon substrate, with aspect ratios of 10 or more. The device is demonstrated on a custom-built deep reactive ion etcher (DRIE), achieving a depth resolution of better than 1 μm in the presence of large vibrations.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 119-121).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121726</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligible models for learning categorical data via generalized Fourier spectrum</title>
<link>https://hdl.handle.net/1721.1/121725</link>
<description>Intelligible models for learning categorical data via generalized Fourier spectrum
Zhang, Xuhong, Ph. D., Massachusetts Institute of Technology.
Machine learning techniques have found ubiquitous applications in recent years, and sophisticated models such as neural networks and ensemble methods have achieved impressive predictive performance. However, these models are hard to interpret and are usually used as black boxes. In applications where an explanation is required in addition to a prediction, linear models (e.g. Linear Regression or Logistic Regression) remain the mainstream tools due to their simplicity and good interpretability. This thesis considers learning problems on categorical data and proposes methods that retain the good interpretability of linear models but significantly improve the predictive performance. In particular, we provide ways to automatically generate and efficiently select new features based on the raw data, and then train a linear model in the new feature space. The proposed methods are inspired by the Boolean function analysis literature, which studies the Fourier spectrum of Boolean functions and in turn provides spectrum-based learning algorithms. Such algorithms are important tools in computational learning theory, but are not considered practically useful due to the unrealistic assumption of a uniform input distribution. This work generalizes the idea of the Fourier spectrum of Boolean functions to allow arbitrary input distributions. The generalized Fourier spectrum is also of theoretical interest. It carries over and meaningfully generalizes many important results about the Fourier spectrum. Moreover, it offers a framework to explore how the input distribution and target function jointly affect the difficulty of a learning problem, and provides the right language for discussing data-dependent, algorithm-independent complexity of Boolean functions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-170).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121725</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity-inducing probability measures for machine learning</title>
<link>https://hdl.handle.net/1721.1/121724</link>
<description>Diversity-inducing probability measures for machine learning
Li, Chengtao, Ph. D., Massachusetts Institute of Technology.
Subset selection problems arise in machine learning in kernel approximation, experimental design, and numerous other applications. In such applications, one often seeks to select diverse subsets of items to represent the population. One way to select such diverse subsets is to sample according to Diversity-Inducing Probability Measures (DIPMs), which assign higher probabilities to more diverse subsets. DIPMs underlie several recent breakthroughs in mathematics and theoretical computer science, but their power has not yet been explored for machine learning. In this thesis, we investigate DIPMs, their mathematical properties, sampling algorithms, and applications. Perhaps the best known instance of a DIPM is the Determinantal Point Process (DPP). DPPs originally arose in quantum physics, and are known to have deep relations to linear algebra, combinatorics, and geometry. We explore applications of DPPs to kernel matrix approximation and kernel ridge regression. In these applications, DPPs deliver strong approximation guarantees and obtain superior performance compared to existing methods. We further develop an MCMC sampling algorithm for DPPs accelerated by Gauss-type quadratures. The algorithm runs several orders of magnitude faster than existing ones. DPPs lie in a larger class of DIPMs called Strongly Rayleigh (SR) measures. Instances of SR measures display a strong negative dependence property known as negative association, and as such can be used to model subset diversity. We study the mathematical properties of SR measures, and construct the first provably fast-mixing Markov chain that samples from general SR measures. As a special case, we consider an SR measure called Dual Volume Sampling (DVS), for which we present the first poly-time sampling algorithm. While all the distributions over subsets considered so far are unconstrained, those of interest in the real world usually come with constraints due to prior knowledge, resource limitations, or personal preferences. 
Hence we investigate sampling from constrained versions of DIPMs. Specifically, we consider DIPMs with cardinality constraints and matroid base constraints, and construct poly-time approximate sampling algorithms for them. Such sampling algorithms will enable practical uses of constrained DIPMs in the real world.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 163-176).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121724</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel micro/nanofluidic system for separation and monitoring of cells and proteins in perfusion</title>
<link>https://hdl.handle.net/1721.1/121723</link>
<description>Novel micro/nanofluidic system for separation and monitoring of cells and proteins in perfusion
Kwon, Taehong.
High-quality complex biopharmaceutical products based on cells and proteins are transforming modern medicine and advancing treatments for many health conditions. Continuous biomanufacturing is one of the top technology trends in the biopharmaceutical industry to improve biological product quality and reduce manufacturing cost. This thesis introduces novel high-throughput microfluidic cell separation and nanofluidic protein quality monitoring technologies. The novel micro/nanofluidic system enables reliable and efficient microfiltration and robust online rapid product quality monitoring during continuous biomanufacturing. This technology overcomes the limitations of the current membrane-based microfiltration and quality monitoring technologies, including filter clogging, low product recovery, manual sample preparation, and off-line analysis. The first part describes the novel cell retention device for perfusion culture based on inertial sorting. Size-dependent hydrodynamic forces enabled membrane-less microfiltration for the separation of suspended mammalian cells. The device performance in terms of cell retention efficiency, long-term biocompatibility, and scalability was assessed. Long-term and small-scale perfusion culture using the device was subsequently demonstrated. Clog-free cell retention with high product recovery in this work can be utilized for long-term reliable and efficient perfusion culture. The next part describes the removal of small dead cells from bioreactor cultivation by high-throughput size-based cell separation using inertial sorting. The device parameters were studied to optimize removal of the dead cells, and high-throughput and high-concentration dead cell removal was demonstrated.
Finally, continuous online purity monitoring of the proteins in the cell culture supernatant during perfusion culture was achieved with a novel nanofluidic filter array. This nanofluidic device with an online sample preparation system was integrated with perfusion culture using the microfluidic cell retention device. The purity of proteins in the cell culture supernatant was monitored for more than a week in a fully automated continuous manner. As a robust online sensing technology for continuous biomanufacturing, this nanofluidic filter array could replace the existing offline analytical technologies for protein purity monitoring. In summary, this thesis presents a novel micro/nanofluidic system for separation and monitoring of cells and proteins for continuous biomanufacturing. This innovative approach can contribute to long-term reliable and efficient biomanufacturing in the future.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121723</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic matching algorithms</title>
<link>https://hdl.handle.net/1721.1/121713</link>
<description>Dynamic matching algorithms
Burq, Maximilien.
We study marketplaces in which participants arrive over time, looking to interact with each other. While such interactions have historically been decentralized, the past few years have seen a dramatic increase in the number of internet-enabled platforms which facilitate the process of connecting, or matching, sets of two or more participants. We will focus mainly on centralized matching markets such as kidney exchange and carpooling platforms. In such platforms, the algorithm which determines whom to match and when to do so plays an important role in the efficiency of the marketplace. In the first part, we study the interface between participant heterogeneity, the types of matchings that are allowed, and the frequency at which the platform computes the allocations. We provide an empirical analysis of the effect of match frequency based on data from major US kidney exchange programs. We then study models that enable us to compare the participants' match rates and waiting times under varying matching policies. We show both in theory and in practice that matching quickly can be beneficial, compared to policies which try to increase opportunities for optimization through artificial waiting. Until now, the theory of matching algorithms has focused mostly on static environments, and little is known about the case in which all participants arrive and depart dynamically. In our second part, we help bridge this gap by introducing a new theoretical problem for dynamic matching when anyone can arrive online. We provide new algorithms with state-of-the-art theoretical guarantees for both adversarial and random-order inputs. Finally, we show that these algorithms perform well on kidney exchange and carpooling data.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 203-213).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121713</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscopic materials response to radiation and corrosion environments</title>
<link>https://hdl.handle.net/1721.1/121710</link>
<description>Nanoscopic materials response to radiation and corrosion environments
Yang, Yang
In this thesis, computational and experimental techniques are developed to study the response of materials to radiation and corrosion environments at the nanoscale. Firstly, controlled ion radiation has become a popular tool for the fabrication and modification of nanostructured materials, as well as for understanding materials degradation in radiation environments. Here we aim to overcome a major limitation in current 1D Monte Carlo simulation codes for ion radiation, i.e., the inability to predict the primary radiation damage in nanoscale ion implantation experiments. A prototype code in MATLAB named "Mat-TRIM", and a more advanced code in C named "IM3D", are developed to accurately capture the key physics of ion-matter interaction in nano-structured materials in three dimensions (3D). Using IM3D, we revealed the nano-beam and nano-target effects of ion radiation. We then quantified the relative error of the 1D approach in several classical examples, showing significant relative errors of more than 1000% when the beam/target size is close to or smaller than the range of the ions, indicating the necessity of full-3D simulations. We also observed a topological evolution of point defects' distributions in 3D as beam size varies. Radiation is also a powerful characterization tool. In particular, the in-situ environmental transmission electron microscopy (E-TEM) technique, using electron radiation for imaging, enables direct observation of materials corrosion at nano/atomic resolution. Using this technique, we directly visualized the deformation of 2 nm-thick surface oxide on aluminum nanotips in an oxygen environment.
We showed that the native aluminum oxide can deform like a liquid and self-heal its branches quickly at room temperature, rendering a continuous oxide layer without fracture/spallation during deformation. We also developed a "mechanical-break-junction" method to overcome the difficulty of preparing a fresh metal surface in a TEM for initial oxidation studies. As a contrast to aluminum oxidation, an experiment is performed on a zirconium alloy, a metal used as the cladding in water-cooled reactors. We observed in situ the oxidation-induced crack/pore evolution at the nanoscale. The cracks/pores in the oxide form a percolated network, leading to the failure of the oxide as a passivation layer. Our observations demonstrated that the plasticity of the metal oxide is crucial for the oxidation resistance of metals.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 181-205).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121710</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Near-wall modeling of bubbly flows</title>
<link>https://hdl.handle.net/1721.1/121709</link>
<description>Near-wall modeling of bubbly flows
Lubchenko, Nazar.
Multiphase computational fluid dynamics (M-CFD) codes are gaining acceptance in the nuclear industry for the prediction of thermal-hydraulic behavior, offering the potential to improve the operation, economics, and safety of current systems, and enhance the design of next generation reactors. The common approach when applying M-CFD methods to the bubbly flow regime is to use an Eulerian-Eulerian two-fluid model, which solves for averaged mass and momentum equations for liquid and gas phases, as well as the k-epsilon turbulence model with modifications to account for the presence of bubbles. The resulting partial differential equations require well-posed boundary conditions, with special treatment at the walls, where there exist strong gradients of all variables. The present work systematically addresses the boundary conditions at solid walls for turbulent bubbly flows. The complete coupled problem involving six variables is decoupled into three separate tasks, which consider the void fraction profile, turbulent quantities, and gas velocity near the wall. Based on available experimental data, it is shown that the reduction in void fraction near the wall is a consequence of the bubble shape, and not the wall lubrication effect repelling bubbles from the wall. Aiming at restoring the correct profile, a new wall force is derived from consideration of the interfacial force balance near the wall. Its performance is evaluated through simulations of bubbly pipe flow experiments, confirming its improvements when compared to previous models. Three phenomena, namely, bubble-induced turbulence, buoyancy of gas, and displacement of liquid by gas, are speculated to affect the near-wall turbulent boundary layer. These effects are incorporated in the Analytical Wall Functions (AWF), which provide quantitative treatment of these bubble effects in the boundary layer.
The boundary layer model is validated on the existing experimental data, and the AWF are assessed based on simulations of bubbly pipe flow experiments, as well as at the prototypical reactor conditions. It is demonstrated that most of the effects that arise due to bubbles in the boundary layer can be neglected, and consequently, single phase wall functions can be used in numerical simulations. Finally, through analysis of experimental data, it is suggested that the relative velocity between bubbles and the surrounding liquid does not remain constant throughout the domain in the Eulerian-Eulerian representation of the flow, but instead increases near the wall. A corresponding correction to the drag coefficient is proposed and validated against the experimental data.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 119-127).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121709</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rheology of concentrated protein solutions and attractive colloidal dispersions</title>
<link>https://hdl.handle.net/1721.1/121708</link>
<description>Rheology of concentrated protein solutions and attractive colloidal dispersions
Wang, Gang, Ph. D., Massachusetts Institute of Technology.
Therapeutic protein products with high solution concentration often possess extremely high viscosity and are difficult to process and deliver. It is desirable to predict and control the viscosity of protein solutions based on their interactions at the molecular level. A fundamental understanding of their rheology will greatly facilitate the development and engineering of biopharmaceuticals. In general, the viscosity of attractive colloidal dispersions increases with their concentration and attraction strength, and diverges at the gel point. In this thesis, we investigate the mechanism of enhanced viscosity of concentrated protein solutions and colloidal dispersions due to inter-particle attractions. Coarse-grained models of protein solutions and colloidal dispersions are developed. We improve a previously developed 12-bead model by considering hydrodynamic interactions and using the correct forms of the screened electrostatic potential and dispersion forces to simulate monoclonal antibody solutions. The model captures anisotropic effects and correctly recovers the solution micro-structures. A random patchy sphere model with controllable surface patchiness is also developed to describe more general colloidal particles with anisotropic interactions. We observe significant deviations in micro-structure and thermodynamics from isotropic particles at modest particle concentrations. Dynamics and rheology are sensitive to near-field non-central interactions and the resulting rigid constraints. Considering these constraints improves the viscosity prediction of concentrated antibody solutions and explains the diverging viscosity during gelation of attractive colloidal dispersions. It is also noticed that the rigid constraints in physical gels play a similar role in rheology as the cross-links in chemical gels.
We have demonstrated that the rigid constraints, which are seldom accounted for in previous works, are indispensable when computing the stress of a sheared suspension.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121708</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and testing of an in situ method of ion beam analysis for measuring high-Z erosion inside a tokamak using an AIMS diagnostic</title>
<link>https://hdl.handle.net/1721.1/121707</link>
<description>Development and testing of an in situ method of ion beam analysis for measuring high-Z erosion inside a tokamak using an AIMS diagnostic
Kesler, Leigh A. (Leigh Ann)
While many ex situ techniques exist to measure plasma-facing component (PFC) surfaces of materials extracted from tokamaks, developing a deeper understanding of the dynamics of erosion, redeposition, and fuel retention in these surfaces will require in situ measurements. A first-of-a-kind technique, Accelerator-Based In-Situ Materials Surveillance (AIMS), was developed for this purpose and first demonstrated on Alcator C-Mod to study divertor surfaces with shot-by-shot resolution [1]. However, the original AIMS methods are not applicable to studying the erosion of bulk, high-Z PFCs like molybdenum and tungsten. Thus, a new method of ion beam analysis (IBA) has been developed to expand the capabilities of AIMS to directly measure this high-Z bulk erosion. This new method, called DEA (Depth markers for Evaluating high-Z materials with AIMS), combines the traditional IBA technique of particle-induced gamma emission (PIGE) with implanted depth markers. The implanted markers enable the study of bulk material by providing a reference to the surface that can be monitored for erosion and redeposition. Implanting the marker eliminates the need for specially manufactured "marker tiles" formed by deposited layers that can delaminate and otherwise fail under operational conditions. Two variations of this method were developed: ex situ DEA (eDEA) and in situ DEA (iDEA). Both use PIGE spectroscopy with implanted markers, but they take advantage of different features in gamma production cross sections to analyze data. eDEA, which has shown promising results in ex situ analysis of materials exposed in a tokamak, can also be used to validate the use of depth markers. iDEA provides AIMS with the ability to measure in situ high-Z bulk erosion.
As part of this thesis, the following ex situ experiments have been carried out to assess the viability of these techniques. eDEA samples with implanted depth markers have been studied after plasma exposure on the Material and Plasma Evaluation System (MAPES) in the Experimental Advanced Superconducting Tokamak (EAST). Stability of the marker to temperature excursions was studied by exposing samples to temperatures from 200 to 1000 °C for times from 1 to 24 hours. iDEA samples were implanted at different depths to determine the sensitivity of the technique to depth. Two simulations were developed to allow interpretation of the experimental data and to test the sensitivity, with initial studies showing a match between predicted and experimental results. eDEA measured erosion of 42.0 ± 23.5 nm on one sample exposed in EAST, and iDEA depth markers were located to within 40 nm. These results show that DEA, as part of an AIMS experiment, has the appropriate resolution to monitor surfaces inside a tokamak for time-resolved bulk erosion.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 157-163).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121707</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic modeling and simulations of 2D materials : chemical vapor deposition, nanoporous defects, force-field development, wetting, and friction</title>
<link>https://hdl.handle.net/1721.1/121706</link>
<description>Atomistic modeling and simulations of 2D materials : chemical vapor deposition, nanoporous defects, force-field development, wetting, and friction
Govind Rajan, Ananth.
Two-dimensional (2D) materials, such as graphene, transition metal dichalcogenides (TMDs) (e.g., molybdenum disulfide (MoS₂)), and hexagonal boron nitride (hBN), have recently received considerable attention due to their layer-number-dependent optoelectronic, mechanical, and barrier properties. However, physical understanding of the controlled synthesis and interfacial behavior of 2D materials is still lacking. In this thesis: First, I construct a generalized mechanistic model for the growth of TMD monolayers using chemical vapor deposition (CVD). Combining kinetic Monte Carlo (KMC) simulations and a chemical engineering transport model, I am able to predict the experimentally observed shape and size evolution of the MoS₂ morphology inside a CVD reactor. Second, I address the challenge of solving the Isomer Cataloging Problem (ICP) for lattice nanopores in 2D materials. Combining electronic structure density functional theory (DFT) calculations, KMC simulations, and chemical graph theory, I generate a catalog of unique, most-probable isomers of 2D lattice nanopores, demonstrating remarkable agreement with experimental microscopy data for nanopores in graphene and hBN. Third, I study the photoluminescent properties of nanoporous defects in hBN by combining my solution to the ICP with extensive hybrid DFT calculations of electronic bandgaps. Doing so, I map the experimentally observed emission energies to one or more defect shapes in hBN, thereby demonstrating structure-property relationships for defects in hBN, with implications for single-photon emission from hBN devices. Fourth, using molecular dynamics (MD) simulations, I show that electrostatic interactions play a negligible role in determining the contact angle and the friction coefficient of water on the MoS₂ basal plane. I show that other planes (e.g., the zigzag plane) are polar with respect to interactions with water, thereby illustrating the role of edge effects in MoS₂.
Fifth, I combine lattice dynamics calculations with DFT-based MD simulations to develop a force field for hBN for use in mechanical and interfacial applications. The force field predicts the crystal structure, elastic constants, and phonon dispersion relation of hBN with good accuracy, and demonstrates remarkable agreement with the interlayer and water-hBN binding energies predicted by advanced ab initio calculations. Finally, using MD simulations, I study the wetting and frictional properties of hBN with three different liquids of varying degrees of polarity. I infer that electrostatic interactions affect the frictional properties of various liquids in contact with hBN to different extents, and propose the mean-squared total lateral force as a physical metric to rationalize this observation. This finding implies that liquids with lower wettability can exhibit higher friction on hBN surfaces. In conclusion, the theoretical and simulation methods developed and applied in this thesis should inform the synthesis of 2D materials and their use in various applications, such as optoelectronic devices, mechanical composites, and membranes for gas separation and water desalination.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121706</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanocrystallization confined to porous matrices with and without surface functionalization effects</title>
<link>https://hdl.handle.net/1721.1/121705</link>
<description>Nanocrystallization confined to porous matrices with and without surface functionalization effects
Dwyer, Leia M. (Leia Mary)
Poorly water-soluble active pharmaceutical ingredients (APIs), which represent a major fraction of the molecules in drug discovery and development, are a challenge to the pharmaceutical industry given their low bioavailability. One way to address this issue is to generate nanocrystals of these APIs. Nanocrystals have a significantly increased surface-area-to-volume ratio as compared to standard micron-sized crystals, which results in improved solubility and dissolution rates. There already exist some industrially relevant techniques for producing pharmaceutical nanocrystals, which typically exploit contact forces and high pressures to bring crystals of a normal micron range down to the nanocrystal scale. However, these techniques are often plagued with challenges such as low production rates, high energy input, and issues with stabilization and control over the final crystalline form produced. Because of this, techniques which produce nanocrystals in the desired size range from the start are gaining interest. In this work, crystallization in confinement is used to produce stable pharmaceutical nanocrystals of a well-controlled size. Rigid, nanoporous silica matrices were used to confine crystallization volumes to the nanoscale, resulting in the formation of nanocrystals within these pores. The technique was demonstrated across a wide range of pore sizes, and using several poorly water-soluble APIs. When the principles were extended to a two-stage continuous crystallizer setup, the loadings of API in these porous matrices were improved to over 50 weight percent. When these drug-loaded porous silicas were tested in a dissolution rate apparatus, the resulting dissolution profiles showed dramatic improvements as compared to the dissolution of bulk micron-sized crystals. In the later stages of this research, porous silica with surface functionalization was used rather than bare porous matrices.
Herein, it was demonstrated that, at the small pore volumes present in these systems, the surface functionalization from the media may contribute enough functional group interaction to the solvent-solute system for the solubility of a dissolved API within these pores to change. Thus, through the combination of surface functionalization and confinement effects, this work demonstrated nanocrystallization from undersaturated API solutions using functionalized nanoporous matrices.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 147-153).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121705</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radium cycling in groundwater : implications for bioremediation and tracer studies</title>
<link>https://hdl.handle.net/1721.1/121704</link>
<description>Radium cycling in groundwater : implications for bioremediation and tracer studies
Mehta, Neha, Ph. D., Massachusetts Institute of Technology.
Radium (Ra) is a radioactive alkaline earth element and forms naturally from the decay of uranium (U) and thorium (Th), elements that are ubiquitous within most rocks, soils, and sediments. Radium contamination associated with anthropogenic activities such as hydraulic fracturing, uranium mining, and nuclear waste disposal poses significant public and ecological health risks. While the occurrence of Ra in groundwater is concerning for public health and safety reasons, Ra is also a powerful tracer for calculating groundwater discharge and pollutant loading to coastal waters. In this thesis, I investigate processes controlling Ra mobility in groundwater that are important for developing remediation strategies and improving our understanding of Ra tracer applications. In the first section of this thesis, I use a novel closed-loop flow-through system to measure recoil-derived fluxes of Ra and other daughter nuclides from two crushed rock types with disparate physical and geochemical characteristics. Next, I model the effect of alpha recoil on the fractionation of Ra isotopes in groundwater. Our findings showed that the alpha-recoil process drives the widespread variability in Ra isotopes observed in numerous field measurements, and highlight the importance of understanding the hydrology of a groundwater flow system prior to interpreting Ra activities. In the second section of this thesis, I experimentally studied geochemical controls on processes affecting Ra and metal mobility in a shale-water system. The results elucidate the role of pH, ionic strength, and additives in fracture fluid on Ra mobility in produced water. The findings from this work illuminated processes responsible for the retention and mobilization of Ra and potentially problematic solutes in the subsurface. In the last section of this thesis, the role of biomineralization in Ra uptake from solution by the cyanobacterium Candidatus Gloeomargarita lithophora was investigated. Our results showed that G.
lithophora grew normally in the presence of Ra and sequestered 99% of the total aqueous Ra over this time period within biomass and biominerals. The findings suggest that, under certain conditions, biomineralization of Ra could be used to develop in-situ bioremediation schemes for the removal of Ra from groundwater and contaminated effluent streams.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Environmental Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from student-submitted PDF version of thesis. Page 133 blank.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121704</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel trimodal sensor for eddy correlation measurements of benthic flux in aquatic environments</title>
<link>https://hdl.handle.net/1721.1/121703</link>
<description>A novel trimodal sensor for eddy correlation measurements of benthic flux in aquatic environments
Hu, Irene Helen.
Quantifying chemical fluxes between natural waters and their benthic sediments is a central problem in biogeochemistry, yet it is notoriously challenging. A relatively new method for measuring benthic fluxes, Eddy Correlation (EC), addresses many shortcomings of traditional techniques. Minimally invasive and measured in situ, EC is based on high-speed, simultaneous, and co-located velocity and concentration measurements. It has been successfully used in a range of settings to determine benthic fluxes of dissolved oxygen, using an Acoustic Doppler Velocimeter (ADV) to measure water velocity and an oxygen microelectrode to measure concentration. Widespread application to a larger range of compounds is limited, however, by the lack of chemical sensors that are fast, small, and sensitive enough for EC. To address this need, a novel trimodal sensor has been developed that is capable of high-speed, high-resolution measurements of fluorescence, temperature, and conductivity. The core of the instrument is an optical fiber spectrofluorometer, which utilizes an LED for low-cost excitation; a pair of 1000 µm optical fibers for minimal disruption to velocity measurements; a tunable monochromator to enable a wide range of detection wavelengths; and a custom photon counting detector for maximum sensitivity. It can be used in an EC system to measure benthic fluxes of fluorescing compounds, such as fluorescent dissolved organic material. A fast thermistor and conductivity cell are also located at the tips of the optical fibers, enabling heat and salinity flux measurements that can be used as tracers for submarine groundwater discharge. Additionally, the ability to measure three simultaneous fluxes enables exploration of the potential to use the measured flux of one compound to infer another.
Such 'flux tracing' would vastly expand the range of chemicals measurable with EC.; After development and testing of the individual sensors, the ability of the instrument to take three simultaneous, co-located measurements was demonstrated in a flume: under turbulent flow, the three sensors were able to detect similar features from an injection of warm, salty, fluorescent dye. The instrument was then coupled to an ADV for flux measurements, and tested in a specially constructed laboratory tank in which benthic fluxes were released at known rates from the tank floor. The fluxes measured by all three sensors compared favorably with expected values. In addition, fluxes measured by the three sensors were observed to track each other, demonstrating the viability of flux tracing in settings with co-transported compounds.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 237-251).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121703</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lysosomal nutrients and the mTORC1 pathway</title>
<link>https://hdl.handle.net/1721.1/121702</link>
<description>Lysosomal nutrients and the mTORC1 pathway
Wyant, Gregory A.(Gregory Andrew)
The lysosome is the major catabolic organelle, is the site of activation of the master growth regulator mTORC1 (mechanistic target of rapamycin (mTOR) complex 1), and is often deregulated in common diseases, such as cancer. Given the critical role of lysosomes in maintaining cellular homeostasis, a better understanding of lysosomal function and metabolism and its relation to the mTOR pathway is necessary. Most components of the nutrient-sensing machinery upstream of mTORC1 localize to the lysosomal surface, and amino acids generated by lysosomes regulate mTORC1 by promoting its translocation there, a key step in its activation. Activation of mTORC1 by the amino acid arginine requires SLC38A9, a poorly understood lysosomal membrane protein with homology to amino acid transporters. To study SLC38A9 function at the lysosome, we developed a novel method for the rapid isolation of intact mammalian lysosomes suitable for metabolite profiling.; First, we validate that SLC38A9 is an arginine sensor for the mTORC1 pathway, and we uncover a central role for SLC38A9 in amino acid homeostasis. SLC38A9 mediates the transport, in an arginine-regulated fashion, of many essential amino acids out of lysosomes to be used in growth-promoting processes. Pancreatic cancer cells, which use lysosomal protein degradation as a nutrient source, require SLC38A9 to form tumors. Thus, through SLC38A9, arginine acts as a lysosomal messenger to connect mTORC1 activation and the release of the essential amino acids to drive cell growth. Finally, by performing quantitative proteomic analyses of rapidly isolated lysosomes, we find that ribosome degradation provides the lysosomal arginine that promotes SLC38A9 activation. Lysosomal degradation of ribosomes is mediated by NUFIP1 (nuclear fragile X mental retardation-interacting protein 1).; The starvation-induced degradation of ribosomes via autophagy (ribophagy) depends on the capacity of NUFIP1 to bind LC3B and promotes cell survival. 
Thus, the NUFIP1-mediated degradation of ribosomes provides both the necessary substrate to activate SLC38A9 and the nutrients needed to promote cell survival under starvation. Altogether, this work provides insight into the regulation of lysosomal nutrients and their role in cellular growth and survival.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2019; Cataloged from student-submitted PDF version of thesis. "February 2019."; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121702</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and controlling uncertainty in multi-level biological systems</title>
<link>https://hdl.handle.net/1721.1/121701</link>
<description>Modeling and controlling uncertainty in multi-level biological systems
Shi, Kevin, Ph. D., Massachusetts Institute of Technology.
Mathematical modeling is essential to the understanding and design of biological systems. Modeling uncertainty, which variously represents lack of data, variability between individuals and between different measurements of a single individual, ambiguity in the proper model form, and others, is essential to explaining the limitations of our understanding and constraining the confidence of our predictions. Current methods for modeling uncertainty provide a rich mathematical means of analyzing simple forms of uncertainty in self-contained models. However, real biological systems of interest exhibit many forms of uncertainty simultaneously and may require the composition of multiple levels of models to create useful predictions. I develop and test new methods for characterizing and propagating uncertainty through multi-level models in order to better make clinically relevant predictions. These methods are applied to three systems.; First, a selenium chemoprevention clinical trial's patients were modeled at the cellular metabolic, mutation accumulation, and cancer detection levels. Metabolite, demographic, and epidemiological data were integrated to produce predictions of prostate cancer risk and putative trial outcomes. The value of information - from doing experiments to reduce uncertainty in a targeted manner - was evaluated on trial design. Second, a population pharmacokinetics/pharmacodynamics model was created to guide preclinical studies of antibody-drug conjugates targeting breast cancer. An optimal experimental design method was created to efficiently reduce uncertainty in estimates of drug-related parameters of interest. The contributions of inter-individual variability and parameter uncertainty are specially handled by sampling and propagating ensembles of models.; Third, a two-level drug efficacy and cellular dynamics model was created to analyze the efficacy of targeted liposomal-doxorubicin in multiple nucleolin-overexpressing cell lines. 
A single model topology (but with selected species- and cell line-independent parameters) adequately described the measured behavior in all cell lines. These were then used to predict drug uptake and cell killing as a function of surface receptor density. In each system, a modeling framework that integrates data from multiple sources and different forms of uncertainty is applied to make predictions, quantify gaps in knowledge (and help fill them), and guide decision making in controlling clinically important outcomes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 153-172).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121701</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and screening of degenerate-codon-based protein ensembles with M13 bacteriophage</title>
<link>https://hdl.handle.net/1721.1/121700</link>
<description>Design and screening of degenerate-codon-based protein ensembles with M13 bacteriophage
Clausen, Griffin James.
A billion years of evolution has crafted a diverse set of proteins capable of complex and varied functionalities. Within recent decades, scientists have applied both rational design and directed evolution to accelerate development of high-value proteins, including biotherapeutics. While computational modelling increasingly facilitates protein design, empirically screening large collections of protein variants remains an essential component of protein engineering. This process requires generating protein variation, partitioning variants with a selection pressure, and identifying highly functional proteins. This thesis presents computational tools for initial protein library design, leverages high-throughput sequencing for phage display screenings, and reports biotemplating of an inorganic phase-change material onto the filamentous M13 phage surface.; Designing ensembles of protein variants involves optimizing library size and quality with constraints on screening capacity, cost, and experimental complexity. Incorporating degenerate codons during oligonucleotide synthesis enables residue-specific protein randomization with a known amino acid scope. However, this widely adopted method often generates uneven variant abundances that diverge exponentially with additional randomized residues. The first section of this work presents tools for the design and assessment of degenerate-codon-based protein libraries. This framework facilitates incorporating an arbitrarily large number of randomized sites, non-standard genetic codes, and non-equimolar nucleotide mixtures. In addition to library size and coverage calculations, whole-population diversity metrics and abundance quantiles are reported.; An evolutionary solver to optimize non-equimolar base compositions to match amino acid profiles, as well as mutational profiling for spike-in oligonucleotides, is also presented. 
The second section of this thesis develops an experimental and data analysis pipeline for integrating high-throughput DNA sequencing with M13 phage display biopanning. Deeply sequencing naïve M13 peptide libraries elucidated censorship patterns for both p3 and p8 coat protein fusions. Streptavidin panning recapitulated the HPQ binding motif after a single panning round, and additional biopannings pursued M13 p8 variants that interact with both gold films and carbon nanotubes. Furthermore, this thesis explores the effect of M13 p8 surface charge on the biotemplating of an inorganic phase-change material. An ambient temperature synthesis for modulating the atomic composition of germanium-tin-oxide nanomaterials is reported.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121700</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-bounded coordination of human-robot teams through concurrent intent recognition and adaptation</title>
<link>https://hdl.handle.net/1721.1/121652</link>
<description>Risk-bounded coordination of human-robot teams through concurrent intent recognition and adaptation
Levine, Steven James.
There is an ever-growing demand for humans and robots to work fluidly together in a number of important domains, such as home care, manufacturing, and medical robotics. In order to achieve this fluidity, robots must be able to (1) recognize their human teammate's intentions, and (2) automatically adapt to those intentions in an intelligent manner. This thesis makes progress in these areas by proposing a framework that solves these two problems (task-level intent recognition and robotic adaptation) concurrently and holistically, using a single model and set of algorithms for both. The result is a mixed-initiative human-robot interaction that achieves the team's goals. The robot is able to reason about the action requirements, timing constraints, and unexpected disturbances in order to adapt intelligently to the human. We extend this framework by additionally maintaining a probabilistic belief over the human's intentions. We develop a risk-aware executive that performs concurrent intent recognition and adaptation. Our executive continuously assesses the risk associated with plan execution, selects adaptations that are safe enough, asks uncertainty-reducing questions when appropriate, and provides a proactive early warning of likely failure. Finally, we present an extension to this work which enables the robot to save time by ignoring potentially many vanishingly unlikely scenarios. To achieve this behavior, we frame concurrent intent recognition and adaptation as a constraint satisfaction problem, and compactly represent their associated solutions and policies using compiled structures that are updated online as new observations arise. Through the use of these compiled structures, the robot efficiently reasons about which actions to perform, as well as when to perform them - thereby ensuring decision making consistent with the team's goals.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 369-378).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121652</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless communication and localization systems under spatial and temporal channel variations</title>
<link>https://hdl.handle.net/1721.1/121651</link>
<description>Wireless communication and localization systems under spatial and temporal channel variations
Iannucci, Peter Anthony.
Wireless signals inevitably vary in time and space. The three chapters of this dissertation revolve around the exploitation of signal variations. This line of work has yielded new link-layer protocols for rateless codes on half-duplex additive white Gaussian noise channels; a new abstraction for short-range mobile-to-mobile and mobile-to-infrastructure "room-area" networks that adhere to the spatial boundaries of human conversation; a reduced-complexity tone reservation algorithm for optimizing signals to avoid amplifier non-linearities; and new tools for the study of physical-layer privacy and anonymity in wireless systems.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-218).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121651</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning contact-aware robot controllers from mixed integer optimization</title>
<link>https://hdl.handle.net/1721.1/121650</link>
<description>Learning contact-aware robot controllers from mixed integer optimization
Deits, Robin L. H.(Robin Lloyd Henderson)
The problem of handling contact is central to the task of controlling a walking robot. Robots can only move through the world by exerting forces on their environment, and choosing where, when, and how to touch the world is the fundamental challenge of locomotion. Because the space of possible contacts is a high-dimensional mix of discrete and continuous decisions, it has historically been difficult or impossible to make complex contact decisions online at control rates. This work first presents an approach to contact planning which is able to make some guarantees of global optimality through mixed-integer programming. That method is applied successfully to a humanoid robot in laboratory conditions, but proves difficult to rely on when the robot experiences unmodeled disturbances. To overcome those limitations, this thesis also introduces LVIS (Learning from Value Interval Sampling), a new approach to the control of walking robots which allows complex contact decisions to be made online using a cost function trained from offline trajectory optimizations. The LVIS algorithm is demonstrated on a simple cart-pole system with walls as well as a simplified bipedal robot model, and its success at allowing both models to use contact decisions to recover from external disturbances is demonstrated in simulation.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 117-128).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121650</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliably arranging objects : a conformant planning approach to robot manipulation</title>
<link>https://hdl.handle.net/1721.1/121649</link>
<description>Reliably arranging objects : a conformant planning approach to robot manipulation
Anders, Ariel(Ariel Sharone)
A crucial challenge in robotics is achieving reliable results in spite of sensing and control uncertainty. In this work, we explore the conformant planning approach to reliable robot manipulation. In particular, we tackle the problem of pushing multiple planar objects simultaneously to achieve a specified arrangement without using external sensing. A conformant plan is a sequence of manipulation actions that reliably achieve a goal arrangement in spite of uncertainty in object pose and nondeterministic action outcomes, and without assuming the availability of additional observations. To find conformant plans, we explored two different approaches. The first, conformant planning by construction, formalizes conformant planning as a belief-state planning problem. A belief state is the set of all possible states of the world, and the objective is to find a sequence of actions that will bring an initial belief state to a goal belief state. To do forward belief-state planning, we created a deterministic belief-state transition model from on-line physics-based simulations and supervised learning based on off-line physics simulations. The second, conformant planning through plan improvement, takes a deterministic manipulation plan and augments it by adding fixtures (movable obstacles) to push parts up against. This method uses an optimization-based approach to determine the ideal fixture placement location. This thesis provides insight and develops approaches toward scalable methods for solving challenging planar manipulation problems with multiple objects or concave shape geometry. We show the success of these approaches based on planning times and robustness in real and simulated experiments.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF version of thesis. Vita.; Includes bibliographical references (pages 123-126).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121649</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluorescent materials for short-wave infrared imaging</title>
<link>https://hdl.handle.net/1721.1/121616</link>
<description>Fluorescent materials for short-wave infrared imaging
Franke, Daniel.
Our understanding of the fundamental processes that drive biology and medicine is, in large part, based on our ability to visualize biological structures and monitor their transformations over time. Fluorescence imaging is one of the most transformative technologies of modern biomedical imaging as it provides a low cost, high sensitivity method for real-time molecular imaging in vivo. As the scattering and absorption of light through biological tissue impose significant restrictions on imaging penetration depth, acquisition speed, and spatial resolution, the development of novel optical imaging technologies has increasingly shifted toward the use of light of longer wavelengths. Fluorescence imaging in the shortwave infrared (SWIR, 1000 - 2000 nm) spectral region mitigates the negative effects of light attenuation and benefits from a general lack of tissue autofluorescence.; As a result, SWIR imaging promises higher contrast, sensitivity, and penetration depths compared to conventional visible and near-infrared (NIR) fluorescence imaging. However, the lack of versatile and functional SWIR emitters has prevented the general adoption of SWIR imaging both in academic and clinical settings. Here, we will present progress toward the synthesis of a new generation of SWIR-emissive materials and discuss their use in enabling biomedical imaging applications. In the first part of this thesis, we will examine the synthesis of SWIR-emissive indium arsenide (InAs) quantum dots (QDs). To address existing challenges in the synthesis of these semiconductor nanocrystals, we will investigate the processes that govern nanoparticle formation and growth.; Combining experimental and theoretical methods, we demonstrate that the synthesis of large nanocrystals is hindered by slow growth rates for large particles, as well as the formation and persistence of small cluster intermediates throughout nanocrystal growth. 
Based on these insights, we design a novel, rational synthesis for large InAs QDs with high brightness across the SWIR spectral region. Second, we will discuss the use of InAs-based QDs in functional SWIR imaging applications in pre-clinical settings. We will present three QD surface functionalizations that enable the non-invasive real-time imaging of hemorrhagic stroke, the quantification of metabolic activity in genetically-engineered animals, and the measurement of hemodynamics in the brain vasculature of mice. In addition, we will present preliminary results for the synthesis of SWIR-emissive QD probes for the molecular targeting of biological entities and for advanced particle tracking applications.; Using a QD-based broadband SWIR emitter, we will further investigate the effect of SWIR imaging wavelength on image contrast and tissue penetration depth. While it was previously assumed that reduced scattering of light at longer wavelengths is the primary cause for increased image contrast, our results indicate that for imaging scenarios with strong fluorescent background signals, image contrast and penetration depth correlate closely with the absorptive properties of biological tissue. As a result, deliberate selection of imaging wavelengths at which biological tissue is highly absorptive can help to overcome contrast-limited imaging scenarios. In the last part of this thesis, we will take a closer look at SWIR emitters with the potential for translation into clinical settings.; We will demonstrate that the FDA-approved NIR dye indocyanine green (ICG) exhibits an unexpectedly high SWIR brightness that arises from a large absorption cross-section and a vibronic shoulder in its fluorescence spectrum that extends well into the SWIR spectral region. We expand on this finding by showing that ICG outperforms commercial SWIR dyes during in vivo imaging, and additionally by demonstrating a variety of high-contrast and high-speed imaging applications in small animals. 
These results suggest that ICG enables the direct translation of SWIR imaging into the clinic. In summary, this thesis will paint a comprehensive picture of the current state of SWIR-emissive materials, present the synthesis of novel versatile SWIR probes, and show their application in unprecedented functional SWIR imaging applications.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 223-247).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121616</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>In-home passive monitoring for medical applications</title>
<link>https://hdl.handle.net/1721.1/121613</link>
<description>In-home passive monitoring for medical applications
Kabelac, Zachary(Zachary E.)
Recent years have witnessed a surge of in-home monitoring and sensing systems. They promise to change healthcare as we know it by continuously monitoring patients at home. Yet, despite all of the interest and effort that has gone into designing these systems, their capabilities are rudimentary and long term retention rates remain low. One of the main reasons for this is that they require the user to either wear or interact with the sensor in order to work effectively. This thesis addresses many of the challenges faced by such systems today, enabling novel applications in both in-home monitoring and healthcare. To overcome these challenges, this thesis introduces a novel hardware/software sensor that uses radio signals to enable patient health monitoring at home. It hangs on the wall like a picture frame and transmits low-power radio signals which reflect off of the user and return back to the device. By capturing and processing the reflected signals, physiological metrics related to mobility and vital signs can be extracted without touching the user in any way. Furthermore, it relates these health signals to symptoms of Parkinson's disease by deploying the sensor in a pilot study and comparing the health metrics to gold standard clinical assessments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 145-161).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121613</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic control strategy for inverter-based microgrids</title>
<link>https://hdl.handle.net/1721.1/121612</link>
<description>Systematic control strategy for inverter-based microgrids
Huang, Po-Hsu.
Small-scale power systems, microgrids (MGs), are becoming economically and technically feasible due to cost-effective battery storage with high-bandwidth inverter interfaces, thus facilitating efficient energy utilization from renewable sources to maintain autonomous operation without a grid connection. Therefore, control of inverter-based or inverter-dominant systems is gaining a lot of attention while posing different challenges compared to traditional power systems. Conventional droop-based control architectures can provide power-sharing capability, and are considered to be a cost-effective and reliable solution for microgrids. However, experimental studies have revealed that for small-scale microgrids, stability is significantly compromised by the droop control due to low X/R ratios and short lines. Therefore, a proper modeling framework for obtaining concise and accurate models becomes important to understand the physical nature of the instability. Such a framework can further facilitate a systematic control design for stability enhancement, allowing the development of power-sharing strategies and plug-and-play functionality for efficient microgrid operation. In this thesis, high-fidelity reduced-order models for microgrids are first developed and investigated. Then, based on the proposed models, concise and simple stability certificates are derived along with virtual impedance methods for local and global stability enhancement. Detailed discussions are carried out on the control design that aims at achieving both droop stability and controller robustness. Finally, a power and energy management scheme based on secondary compensation is developed to enhance operational efficiency. The integrated solution provides a comprehensive reference for the development of stable, reliable, and flexible inverter-based microgrids. All results are validated through both simulation and experimental studies.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-125).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121612</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building generative models over discrete structures : from graphical models to deep learning</title>
<link>https://hdl.handle.net/1721.1/121611</link>
<description>Building generative models over discrete structures : from graphical models to deep learning
Gane, Georgiana Andreea.
The goal of this thesis is to investigate generative models over discrete structures, such as binary grids, alignments or arbitrary graphs. We focused on developing models easy to sample from, and we approached the task from two broad perspectives: defining models via structured potential functions, and via neural network based decoders. In the first case, we investigated Perturbation Models, a family of implicit distributions where samples emerge through optimization of randomized potential functions. Designed explicitly for efficient sampling, Perturbation Models are strong candidates for building generative models over structures, and the leading open questions pertain to understanding the properties of the induced models and developing practical learning algorithms.; In this thesis, we present theoretical results showing that, in contrast to the more established Gibbs models, low-order potential functions, after undergoing randomization and maximization, lead to high-order dependencies in the induced distributions. Furthermore, while conditioning in Gibbs' distributions is straightforward, conditioning in Perturbation Models is typically not, but we theoretically characterize cases where the straightforward approach produces the correct results. Finally, we introduce a new Perturbation Models learning algorithm based on Inverse Combinatorial Optimization. We illustrate empirically both the induced dependencies and the inverse optimization approach, in learning tasks inspired by computer vision problems. In the second case, we sequentialize the structures, converting structure generation into a sequence of discrete decisions, to enable the use of sequential models.; We explore maximum likelihood training with step-wise supervision and continuous relaxations of the intermediate decisions. With respect to intermediate discrete representations, the main directions consist of using gradient estimators or designing continuous relaxations. 
We discuss these solutions in the context of unsupervised scene understanding with generative models. In particular, we asked whether a continuous relaxation of the counting problem also discovers the objects in an unsupervised fashion (given the increased training stability that continuous relaxations provide) and we proposed an approach based on Adaptive Computation Time (ACT) which achieves the desired result. Finally, we investigated the task of iterative graph generation. We proposed a variational lower-bound to the maximum likelihood objective, where the approximate posterior distribution renormalizes the prior distribution over local predictions which are plausible for the target graph.; For instance, the local predictions may be binary values indicating the presence or absence of an edge indexed by the given time step, for a canonical edge indexing chosen a-priori. The plausibility of each local prediction is assessed by solving a combinatorial optimization problem, and we discuss relevant approaches, including an induced sub-graph isomorphism-based algorithm for the generic graph generation case, and a polynomial algorithm for the special case of graph generation resulting from solving graph clustering tasks. In this thesis, we focused on the generic case, and we investigated the approximate posterior's relevance on synthetic graph datasets.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from PDF version of thesis. Page 173 blank.; Includes bibliographical references (pages 159-172).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121611</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale optoelectronic properties in traditional and emerging materials for light-emitting diodes</title>
<link>https://hdl.handle.net/1721.1/121609</link>
<description>Nanoscale optoelectronic properties in traditional and emerging materials for light-emitting diodes
Zhao, Zhibo, Ph. D., Massachusetts Institute of Technology.
Although InGaN/GaN-based quantum well (QW) heterostructures continue to set the industry standard for inorganic blue and green light-emitting diodes (LEDs), these devices suffer from efficiency droop at high current densities and material quality degradation at longer emission wavelengths. Establishing rational process design principles to address such issues remains inhibited by ongoing controversy surrounding the impact of commonly observed defects such as well-width fluctuations or V-pit defects on carrier recombination. Organic-inorganic perovskites have begun to attract attention as a potential next-generation LED material, but these nascent materials suffer from rapid material degradation under device operating conditions. Understanding structure-property correlations will be necessary to improve incumbent InGaN/GaN technologies and evaluate the potential of organic-inorganic perovskites. In InGaN/GaN QW heterostructures, we first employ aberration-corrected scanning transmission electron microscopy (STEM) to examine the impact of well-width fluctuations and QW period on measured external quantum efficiency (EQE) and find no significant correlation. Next, we observe time-delayed cathodoluminescence (CL) rise dynamics in droop-mitigating QW designs and propose a model linking rise behavior to carrier transport and deep-level defects. Finally, we use CL-STEM to map radiative recombination around commonly observed V-pit defects with nanoscale spatial and spectral resolution. Furthermore, dark-field diffraction contrast imaging elucidates the relationship between V-pit optical emission and threading dislocation character. These results provide a platform for evaluating the impacts of microstructural defects on LED device performance. In methylammonium lead iodide, we use STEM imaging to establish a direct correlation between local stoichiometry and CL intensity. We demonstrate that areas of high CL intensity correspond to regions that are enriched in iodide content relative to lead.
Furthermore, CL-STEM imaging reveals the presence of localized high-energy emissions which we attribute to beam-induced ion migration. The continuous evolution of such high-energy emissions under electron beam irradiation suggests these local spectral heterogeneities could reflect material evolution during device degradation. In summary, the current work demonstrates novel insights gained by the application of advanced electron imaging techniques to two vastly different materials systems. Our findings suggest that continued improvements in process design will hinge on controlling the distribution of structural defects in order to minimize undesirable recombination pathways.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-159).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121609</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed ion and electron conducting polymer composite membranes for artificial photosynthesis</title>
<link>https://hdl.handle.net/1721.1/121607</link>
<description>Mixed ion and electron conducting polymer composite membranes for artificial photosynthesis
Zhang, Ketian.
Inspired by the fact that OH⁻ has a very high mobility in water, highly conductive OH⁻-conducting membranes were developed for alkaline water electrolysis. The membranes were semi-interpenetrating networks of crosslinked poly(vinyl alcohol) (PVA) and a polycation miscible with PVA; the system is analogous to an aqueous strong base solution. The polycation is an OH⁻-containing polymer; PVA solvates this polycation and facilitates ion conduction via the Grotthuss mechanism. The membrane with the proper composition has an exceptionally high OH⁻ conductivity of 151 mS/cm, 6.51 times as high as that of the commercial membrane Neosepta AHA. At the same time, the hydrogen bonds and covalent crosslinks in the system give this membrane a high tensile strength of 41 MPa in the wet state, 46% higher than the Neosepta AHA membrane. Insight into the ion conduction mechanism was gained by spectroscopic studies and the measurement of the OH⁻ conduction activation energy. This material system is also highly anion perm-selective, a feature critical to sustaining the pH gradient in bipolar membrane-based artificial photosynthesis devices. A highly transparent mixed proton and electron conducting membrane was also developed. Nafion and reduced graphene oxide (rGO) were chosen as the proton-conducting polymer matrix and the electrically conductive filler, respectively. The filler has a high aspect ratio and, as predicted by simulations, will have a low percolation threshold if homogeneously dispersed. To achieve this homogeneity, water-aided mixing was employed, followed by fast freezing in liquid nitrogen. Though rGO is a light absorber, the extremely low percolation threshold (0.1%) allows us to achieve sufficient electrical conductivity with only a small volume fraction of rGO.
Therefore, the membrane was highly transparent in addition to its ability to conduct both electrons and protons. Detailed modeling of the energy losses from conduction, light absorption, and gas crossover was carried out, showing that this material system is promising for the artificial photosynthesis application.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121607</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein mimetic nanoparticles</title>
<link>https://hdl.handle.net/1721.1/121606</link>
<description>Protein mimetic nanoparticles
Tahir, Mukarram Ahmad.
Gold nanoparticles with amphiphilic surface functionalization have been shown to spontaneously fuse with lipid bilayers through a non-endocytic mechanism that generates minimal membrane perturbation. The membrane translocation capability of these nanoparticles makes them attractive candidates for engineering clinical applications that operate at single-cell resolution. In particular, the physicochemical similarity between these nanoparticles and membrane-bound and free-circulating proteins suggests a possibility for designing nanostructures that can function as synthetic alternatives to proteins. In this thesis, we demonstrate how molecular simulation techniques have allowed us to tackle this engineering challenge and develop nanoparticles that can modulate fusion between lipid membranes, transport hydrophobic small molecules to lipid-bound compartments, and modify the permeability of lipid membranes. These are concrete realizations of nanoparticles functioning as protein mimics, and they unlock new avenues of research on how nanomaterials can be designed from first principles to perform targeted functions in biological systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages [121]-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121606</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Block copolymer self-assembly - a computational approach towards novel morphologies</title>
<link>https://hdl.handle.net/1721.1/121605</link>
<description>Block copolymer self-assembly - a computational approach towards novel morphologies
Gadelrab, Karim R. (Karim Raafat)
Spontaneous self-assembly of materials is a phenomenon exhibited by many molecular systems. Among these, block copolymers (BCPs) have proved particularly interesting due to their ability to microphase separate into periodic domains. Nonetheless, the rising need for arbitrary, complex, 3D nanoscale morphologies shows that what is commonly achievable is quite limited. Expanding the range of BCP morphologies could be attained through a host of strategies that can be used concurrently. Using directed self-assembly (DSA), a sphere-forming BCP was assembled in a randomly displaced post template to study the system's resilience towards defect creation. Template shear-like distortion appeared to govern local defect generation. Defect clusters with symmetries compatible with that of the BCP showed enhanced stability. Using 4⁴ and 3².4.3.4 Archimedean tiling templates that are incompatible with the six-fold symmetry of the BCP created low-symmetry patterns with an emergent behavior dependent on pattern size and shape. A variation of DSA is studied using modulated substrates. Layer-by-layer deposition of cylinder-forming BCPs was investigated. Self-consistent field theory (SCFT) and strong segregation theory (SST) were employed to provide the understanding and the conditions under which particular orientations of consecutive layers were produced. Furthermore, deep functionalized trenches were employed to create vertically standing high-χ BCP structures. Changing the annealing conditions for a self-assembled lamellar structure evolved the assembled pattern into a tubular morphology that is non-native to diblock copolymers. A rather fundamental but challenging strategy to go beyond the standard motifs common to BCPs is to synthesize multiblock molecules with an expanded design space. Triblock copolymers produced a bilayer perforated lamellar morphology. SCFT analysis showed a large window of stability for such structures in thin films. 
In addition, a model for bottlebrush BCPs (BBCPs) was constructed to investigate the characteristics of BBCP self-assembly. Pre-stacked diblock sidechains showed improved microphase separation while providing domain spacings relevant to lithography applications. A rich phase diagram was constructed at different block concentrations. The ability to explore new strategies to discover potential equilibrium morphologies in BCPs is supported by strong numerical modeling and simulation efforts. Accelerating SCFT performance would greatly benefit BCP phase discovery. Preliminary work discussed the first attempt at neural network (NN)-assisted SCFT. The use of NNs cut the number of calculation steps required to reach the equilibrium morphology, demonstrating accelerated calculation and escape from trapped states, with no effect on the final structure.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-140).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121605</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the oxidation and reduction process in transparent conducting oxides</title>
<link>https://hdl.handle.net/1721.1/121604</link>
<description>Understanding the oxidation and reduction process in transparent conducting oxides
Campion, Michael J. (Michael John)
Transparent conductors play important roles in many optoelectronic devices such as LEDs, thin film solar cells, and smart windows through their ability to efficiently transport both photons and electrons. The simultaneous requirements of a wide band gap, high free carrier concentration, and high electron mobility limit the selection of available transparent conductor materials. Further improvements in the optical and electrical properties, along with improvements in processing tolerance, are highly desirable for this material class. One key limitation of current transparent conducting oxides is their response to oxidation, which can cause severe decreases in the conductivity of the material through ionic compensation. Materials with slow oxygen kinetics or resistance to the formation of compensating ionic defects could lead to more flexible operating and processing conditions for applications requiring transparent conductors. The properties of the transparent conducting oxides Al-doped ZnO and La-doped BaSnO₃ were examined through a variety of methods with a focus on the impact of processing on the free carrier concentration, electron transport, and optical properties. Al-doped ZnO was examined as a well-known alternative to indium tin oxide (ITO) that has been shown to be limited by relatively narrow processing conditions and large variances in reported properties. BaSnO₃ is a comparatively new material in the field of transparent conductors, attractive mainly due to its exceptionally high electron mobility for an oxide. 
Little is currently known about the effects of defects and processing on the optical and electrical properties of this material, but this information will be important to understand before implementing this material in practical devices. For these materials, I examined the roles of oxygen stoichiometry and point defect formation in impacting properties and stability under both processing conditions and harsh operating conditions, and explored the limitations and opportunities provided by these transparent conducting oxide systems. Al-doped ZnO thin films were produced by pulsed laser deposition under a variety of oxygen conditions, demonstrating the strong dependence of free electron concentration and mobility on the oxidation state of the material. The free carrier absorption in the infrared photon range was measured and modeled, and found to agree well with theory assuming ionized impurity scattering as the limiting electron scattering mechanism. These effects were understood through the framework of the formation of compensating zinc vacancies under oxidizing conditions, leading to decreases in the free electron concentration. Atom probe tomography was applied to Al-doped ZnO thin films deposited on Si substrates, demonstrating an effective accumulation of Al near the ZnO/Si interface, but with no detected precipitation or agglomeration in the x-y plane of the film, even for heavily doped films. This was surprising due to the high concentration of Al dopant in the material, exceeding the thermodynamic solubility limit of bulk ZnO. An accumulation of Al dopant was observed at the ZnO/Si interface under multiple conditions, with the oxygen atmosphere during deposition and the nature of the Si substrate affecting the degree of accumulation. 
Because transparent conductors are typically used to transfer charge through interfaces, understanding the nature and implications of this observed accumulation effect could be essential to understanding device performance. La-doped and undoped BaSnO₃ thin films and bulk samples were tested for their electrical conductivity in situ under various temperatures and oxygen partial pressures. In the undoped case, a p-type to n-type transition was observed at lower temperatures with decreasing oxygen partial pressure, with the behavior correlated to the formation and annihilation of oxygen and cation vacancies. Under donor doping, a measurable but weak n-type dependence of conductivity was demonstrated, pointing to a surprisingly weak role played by cation vacancy charge compensation over the measured temperature ranges. Compared to other similar oxide systems, compensation by cation vacancies would normally be expected to be strong under oxidizing conditions. This is a key advantage for La-doped BaSnO₃ as a high-temperature, oxygen-stable material compared to other competing materials that are more susceptible to conductivity degradation due to ionic compensation of the donor dopant under oxidizing conditions. This was directly demonstrated in the testing of the conductivity response of La-doped BaSnO₃ thin films, which maintained high conductivity under a large range of oxygen and temperature conditions. Oxygen diffusion in the material was estimated from conductivity relaxation and further explored with oxygen tracer diffusion studies. These studies revealed an activation energy of 2 eV for the oxygen diffusion process, as well as a depth-dependent diffusivity leading to depressed oxygen diffusivities near the surface. 
Study of epitaxial and polycrystalline thin films of La-doped BaSnO₃ revealed a difference in the rate of the oxidation response of the conductivity. Epitaxial thin films exhibited a weak power law dependence on temperature, while polycrystalline thin films under oxidizing conditions exhibited an activation energy of 0.36 eV. This effect was attributed to the formation of narrow space charge regions at the grain boundaries under oxidizing conditions. Simultaneous measurements of the infrared transmission and electrical conductivity of thin films were performed as a means of correlating infrared transmission with conductivity at high temperatures under various controlled atmospheres. These two measurements were found to be strongly correlated and were demonstrated to be connected to the formation and annihilation of free carriers in the thin films. A novel measurement technique was explored in which the conductance response was measured across a substrate during pulsed laser deposition of Al-doped ZnO. The measured conductance profile as a function of time was correlated to the expected growth regimes typical of an island growth mode, and the thickness dependence of resistivity was directly observed. Additional information about the growth conditions was obtained through conductance relaxation after single pulses, performed under different growth chamber atmospheres.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121604</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic responsiveness in the American states : legislators, constituents, and organized interests</title>
<link>https://hdl.handle.net/1721.1/121603</link>
<description>Dynamic responsiveness in the American states : legislators, constituents, and organized interests
Dunham, James (James Wolcott)
An independent commission redrew California's electoral map after the 2012 redistricting cycle, inducing large, exogenous shocks to the composition and policy preferences of many districts. The first paper of the dissertation assesses repositioning among members of the state legislature. Did they respond when redistricting led to changes in the policy ideology of their districts? The result speaks to the kind of representation that constituents receive, and the obstacles facing would-be reformers. The paper is the first in the literature to identify the causal quantity of interest using design-based inference; it also improves on previous measures of district ideology. Contrary to prior findings, there is little evidence of responsiveness to shifts in district preferences from redistricting. This result points to the role of strong parties and organized interests in the selection of representatives and legislative activity. In my second paper, I demonstrate the effectiveness of supervised machine learning methods in recognizing textual references to firms, organized interests, or any other political actors (an application of named entity recognition), and then resolving these references to real-world referents (an entity resolution task). Together, these methods make possible the large-scale measurement of political actors or their activity from sources such as diplomatic cables, transcripts, and administrative or legislative records. Organized interests are embedded in the legislative process in state capitols, writing bills and participating in committee meetings; they contribute stakeholder perspectives and testify to the technical points of proposed legislation. Studying exactly which groups participate addresses a minimal standard for democratic governance. 
The third paper accomplishes this using the measurement strategy described in the second paper. It reveals how organized interests engage on specific bills (or bill versions) and expands the scope of measurement beyond activities whose disclosure is required under state law. Diverging from typical measurement strategies identifies less-resourced groups, in particular citizen and issue organizations, engaging in undisclosed legislative activities. The paper argues for an alternative view of the distribution of political voice in the states, and for the integration of research on dynamic responsiveness and organized interests.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121603</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximizing leverage : explaining China's strategic force postures in limited wars</title>
<link>https://hdl.handle.net/1721.1/121602</link>
<description>Maximizing leverage : explaining China's strategic force postures in limited wars
Cunningham, Fiona S. (Fiona Stephanie)
How do nuclear-armed states maximize strategic leverage to coerce their adversaries in limited wars? Although the existing literature has examined how states have used their nuclear weapons as sources of strategic leverage, it has not fully explored the challenges states face in using these extremely destructive weapons in limited wars. China's approach to maximizing strategic leverage offers one possible solution to these challenges. It has pledged not to use nuclear weapons unless it first suffers a nuclear attack from an adversary. Instead, it threatens to use space, cyber, and conventional missile weapons to maximize strategic leverage against an adversary in a limited war. I develop a theory of strategic substitution to explain why states might substitute space, cyber, and conventional missile weapons for nuclear weapons as sources of strategic leverage in limited wars, and how they select force postures for each of these weapons. First, I develop a typology of force postures for these non-nuclear strategic weapons based on how much they increase the risk of the state using its most destructive space, cyber, or conventional missile weapons. Second, I outline two variables that determine whether a state pursues a non-nuclear strategic weapons capability and, if so, which force posture it selects. States pursue a coercive capability if they have a need for strategic leverage because they cannot respond to changes for the worse in their threat environment with credible threats to use nuclear weapons or their conventional military forces. States select postures by estimating the expected cost of an adversary's retaliation if they have to carry out a threat to use a non-nuclear strategic weapon. 
To demonstrate the explanatory power of the theory, I conduct comparative case studies of all seven Chinese decisions about its space, cyber, and conventional missile postures since 1988. Using original Chinese-language sources, I provide the most comprehensive account of China's post-Cold War strategic force posture choices in the existing literature. I show how China's nuclear posture, conventional military power, and its force postures for new military technologies are related, although they are often examined independently of one another in the existing literature.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 420-444).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121602</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling personal vehicle energy consumption to assess the potential for electrification and decarbonization</title>
<link>https://hdl.handle.net/1721.1/121595</link>
<description>Modeling personal vehicle energy consumption to assess the potential for electrification and decarbonization
Needell, Zachary Adam.
This thesis develops a new model of the energy requirements of personal vehicle travel and uses it to evaluate tools to decarbonize the transport sector. Energy use and carbon emissions from transportation are spread across millions of miles of roadways and hundreds of millions of travelers. This diversity of travel patterns makes it challenging to catalogue and predict those quantities and difficult to characterize the mechanisms that drive them. However, a better understanding of transport energy use patterns is needed to find options for reducing personal vehicle energy requirements and greenhouse gas emissions. Existing research on transportation and climate policy often represents energy use in a fundamentally simplified manner. Some research does not account for the effect of usage patterns on technology performance, missing variation in technology impacts across contexts of use. Other research informs technology modeling with a simplified picture of travel patterns, missing contexts in which technologies will be used. The research in this thesis adds new insight by assessing technology performance based on a comprehensive picture of travel patterns. This better captures both how travel patterns determine technology performance and how technology performance constrains achievable transformations to the transport sector. It combines high-resolution driving data with comprehensive travel patterns from household travel surveys or a transport network simulation, integrating data at multiple scales to avoid simplifications that mask relationships between technology use, technology performance, and systemwide carbon intensity. 
The central finding of this thesis is that retaining heterogeneity in travel behavior and technology performance allows us to better understand barriers to and strategies for transport decarbonization that would be missed with simpler methods. Specifically, this thesis addresses electric vehicle range limitations, finding that they provide a constraint on transport electrification that is relatively limited and consistent across locations. This research also reveals interactions between electric vehicle charging and the electricity grid and uncovers how to better align electricity demand and supply under high solar photovoltaics penetration. This understanding will help inform targeted technological development and policies, as well as help identify risks and unintended consequences in a transition to a low-carbon transportation system.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 145-158).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121595</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dislocation engineering in InAlGaAs/SiGe alloy systems for heteroepitaxial integration</title>
<link>https://hdl.handle.net/1721.1/121594</link>
<description>Dislocation engineering in InAlGaAs/SiGe alloy systems for heteroepitaxial integration
Shah, Rushabh (Rushabh Dinesh)
Semiconductor devices based on GaAs substrates use III-V alloy systems like Inx(AlyGa₁₋y)₁₋xAs for optoelectronic devices and high-frequency communication applications. Heteroepitaxial integration of such devices with Si has generated considerable interest because it has the potential of combining the desired material properties with the low cost of the Si manufacturing platform and enabling monolithic integration with Si CMOS, creating novel integrated circuits. However, heteroepitaxial integration introduces challenges related to detrimental crystal defects like dislocations that are created in the process and require buffer layers to accommodate such defects and minimize their impact on device performance. One buffer layer scheme that is receiving widespread adoption involves the direct growth of Ge on Si. It produces a threading dislocation density of ~2x10⁷ cm⁻², but this density is too high for applications like high-efficiency GaAs solar cells. In this work, we aim to study two methods that can reduce this defect density. One involves developing a procedure similar to Ge-on-Si growth for growing Ge on Si₅₀Ge₅₀. We find that despite the lower lattice mismatch and the availability of pre-existing mobile dislocations in this approach, the Ge film retains a high metastable strain that generates a high density of crystal defects. The second method involves the use of low-x Inx(AlyGa₁₋y)₁₋xAs buffer layers that use compressive strain to encourage dislocation reactions and drive dislocations to sinks (mesa sidewalls). Filtering effects lead to a reduced threading dislocation density, the lowest measured in this study being 7x10⁶ cm⁻², but these effects do not scale with x, showing that dislocation sources are active and play an important role. 
These films have residual tension due to differences in the coefficient of thermal expansion with Si substrates. Recombination-enhanced dislocation glide effects permit direct visualization of the processes that operate to relieve this residual strain at room temperature under cathodoluminescence imaging conditions. This provides a first estimate of the density of dislocation sources and the interaction probability between dislocations under residual strain conditions. Unexpected dark line defects were observed in cathodoluminescence imaging, and the use of an "overshoot layer" was demonstrated to be critical in reducing their density. Preliminary work towards processing photovoltaic cells and a layer transfer method that involves wafer bonding for reuse of buffer layers is also discussed.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 87-92).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121594</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical aspects of electrodeposition in charged porous media</title>
<link>https://hdl.handle.net/1721.1/121481</link>
<description>Theoretical aspects of electrodeposition in charged porous media
Khoo, Edwin Sze Lun.
Electrodeposition is a fascinating electrochemical phenomenon that contains deep physical insights and has broad practical applications. At the heart of the physics governing electrodeposition is a competition between a destabilizing force caused by surface crests growing more rapidly than surface troughs in a positive feedback loop and a stabilizing force arising from surface energy effects that prevent the surface from roughening excessively. The physical manifestation of this surface instability is the formation and propagation of dendrites. Some applications of electrodeposition include electroplating of metals such as copper and charging of next-generation high-energy-density metal batteries such as lithium metal batteries. From both theoretical and practical standpoints, it is important to understand how to control and exploit electrodeposition. In this thesis, we explore electrodeposition in a homogenized charged porous medium that contains a fixed background charge density, which affords us a new knob for controlling electrodeposition. In practice, such a background charge density can either naturally arise from ionization of surface functional groups or be generated through experimental techniques such as layer-by-layer deposition of polyelectrolytes on the pore surfaces. We investigate the theoretical aspects of electrodeposition in charged porous media in three different ways. First, we introduce a simple transport model that accounts for the background charge density and couple it with electrochemical reaction kinetics for electrodeposition. We then validate the coupled model by comparing predicted steady-state current-voltage relations and linear sweep voltammetry with experimental data for copper electrodeposition in a variety of nanoporous media. Second, we perform linear stability analysis on the model to understand how key system parameters such as the background charge density affect the linear stability of the metal surface. 
We then show good agreement between theoretical predictions and experimental observations of the critical and instability wavelengths for copper electrodeposition in cellulose nitrate membranes. Third, we carry out impedance analysis on the model and explain some intriguing features in the experimental impedance spectra for copper electrodeposition in anodic aluminum oxide membranes. Through these three different types of analysis, we demonstrate the predictive power and robustness of the theory despite its simplicity.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 211-233).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121481</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High field terahertz radiation : conduits to synchronized hyper spectral systems</title>
<link>https://hdl.handle.net/1721.1/121429</link>
<description>High field terahertz radiation : conduits to synchronized hyper spectral systems
Ravi, Koustuban.
After the first experiments on nonlinear phenomena in optics, the development of the mode-locked laser led to a rapid proliferation of nonlinear frequency conversion techniques, enabling the conversion of light from the infrared all the way to the soft X-ray region. However, accessing the hard X-ray region remains elusive and the domain of specialized facilities. The key insight to accessing hard X-rays optically may lie not in converting optical frequencies upward, but rather downward, to frequencies spanning 100 to 10,000 GHz. This would enable unprecedented control of electrons and, consequently, the generation of hard X-rays. Efficient optical conversion to terahertz radiation would thus open up the possibility of highly synchronized multi-spectral systems that could transform the landscape of scientific investigation and medicine, among other fields. In this thesis, the problem of efficient conversion is tackled theoretically. A montage of novel computational techniques, analyses, device proposals, and physical mechanisms culminates in record-breaking experimental demonstrations with efficiencies an order of magnitude larger than prior art. The thesis further paves the way for even greater improvements, by another order of magnitude. The underlying science of cascaded difference frequency generation expounded here should be of significant value to terahertz generation in chip-scale systems for future applications such as quantum computing, chip-scale accelerators, and X-ray sources.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 253-267).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121429</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparison of the physiology, ecology and distribution of some New England woodlice</title>
<link>https://hdl.handle.net/1721.1/120912</link>
<description>A comparison of the physiology, ecology and distribution of some New England woodlice
Fuller, John L. (John Langworthy), 1910-1992
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Biology and Public Health, 1935.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1935 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120912</guid>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A microfluidic platform for combinatorial experiments</title>
<link>https://hdl.handle.net/1721.1/120911</link>
<description>A microfluidic platform for combinatorial experiments
Kulesa, Anthony Benjamin
Experiments in biology are often combinatorial in nature and require analysis of large multi-dimensional spaces, but the scales of these experiments are limited by logistical complexity, cost, and reagent consumption. By miniaturizing experiments across nanoliter-scale emulsions that can be processed at large scales, droplet microfluidic platforms are poised to attack these challenges. Here we describe a droplet microfluidic platform for combinatorial experiments that automates the assembly of reagent combinations, with order-of-magnitude improvements over conventional liquid handling. Moreover, our design is accessible, requiring only standard lab equipment such as micropipettes, and improves the chemical compatibility of droplet microfluidic platforms for small molecules. We applied our platform to two experimental problems: combinatorial drug screening and microbial ecology. First, we used our platform to enable screening of pairwise combinations of a panel of antibiotics and 4,000+ investigational and approved drugs to overcome intrinsic antibiotic resistance in the model Gram-negative bacterial pathogen E. coli. This screen processed 4+ million droplet-level assays by hand in just 10 days to discover more than 10 combinations of antibiotics and non-antibiotic drugs for further study. We then applied our platform to microbial ecology, where the interactions between microbes in communities can dictate functions important for both basic science and biotechnology. As a proof of concept, we used our platform to survey 960 pairwise interactions of microbes isolated from soil, and deconstruct higher-order interactions in a 4-strain community. Altogether, we expect that our platform can be used to efficiently attack combinatorial problems across molecular and cellular biology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Page 165 blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 151-164).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120911</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of polymicrobial infections in macaques with chronic cranial implants and evaluation of alternative antimicrobial strategies</title>
<link>https://hdl.handle.net/1721.1/120910</link>
<description>Characterization of polymicrobial infections in macaques with chronic cranial implants and evaluation of alternative antimicrobial strategies
Lieberman, Mia Tova Rock
Macaques are the most commonly used non-human primate in cognitive neuroscience research due to similarities between the macaque and human brain. Cephalic recording chambers (CRCs) are often surgically implanted to obtain neuronal recordings. CRCs represent a persistent source of microbial contamination, which can occasionally progress to clinical sequelae of meningitis and brain abscesses. In this thesis, we first examined aerobic and anaerobic bacterial species colonizing CRCs using both traditional culture-dependent methods and 16S microbiota culture-independent methods. We evaluated the most prevalent species, and compared CRC bacterial communities to skin, oral, and fecal bacterial communities. Our results indicated that CRC bacterial communities are predominantly composed of anaerobic flora and are relatively unique between individual macaques. Additionally, CRC bacterial communities are more similar to skin and oral bacterial communities than fecal bacterial communities, indicating that fecal contamination of CRCs is a less likely source of contamination. Aerobic culture and sensitivity data from samples collected in 2011 identified Staphylococcus aureus, Enterococcus faecalis and Proteus spp. as the most prevalent species isolated, and showed that E. faecalis isolates displayed marked resistance to multiple antimicrobial classes. Routine CRC sanitization procedures were revised in September 2014 to prohibit antimicrobial use within CRCs, and we evaluated how E. faecalis lineages persisted and evolved between 2011 and 2017. We identified a shift in sequence type (ST) from ST4 and ST55, predominating in 2011, to ST48 predominating in macaques implanted after 2013. ST48 lineages were less resistant to antimicrobials and stronger biofilm producers as compared to ST4 and ST55 lineages. We concluded that loss of selective pressure from antimicrobial use within CRCs permitted ST48 to emerge as the predominant lineage due to its strong biofilm-forming abilities.
Finally, we evaluated alternative E. faecalis biofilm treatment strategies. We isolated lytic bacteriophages with activity against ST55 E. faecalis and evaluated the use of phages and the antimicrobial peptides LL-37 and PR-39 against E. faecalis biofilm, alone and in combination with antimicrobials. Our results showed that bacteriophages successfully decreased biofilm produced by ST55 and ST4 E. faecalis isolates and should be evaluated further for treatment of animal and human enterococcal-associated biofilm infections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 221-242).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120910</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed evolution of TurboID for efficient proximity labeling in living cells and organisms</title>
<link>https://hdl.handle.net/1721.1/120909</link>
<description>Directed evolution of TurboID for efficient proximity labeling in living cells and organisms
Branon, Tess C
Protein interaction networks and protein compartmentalization underlie all signaling and regulatory processes in cells. Traditional approaches to proteomics employ mass spectrometry (MS) coupled to biochemical fractionation or affinity purification but require cell lysis prior to analysis, which often results in false-negatives from missed interactions or incomplete purification and false-positives from contaminants. Enzyme-catalyzed proximity labeling (PL) has emerged as a new approach to study the spatial and interaction characteristics of proteins, in which a PL enzyme can be genetically targeted to a subcellular region and used to tag surrounding endogenous proteins with a chemical handle that allows their identification by MS. Tagging is carried out in living cells in a distance-dependent manner, allowing data collection from a physiologically relevant environment with preservation of spatial information. Current PL methods are limited by poor catalytic efficiency or toxic substrates that limit their application in vivo. Therefore, we have developed a new proximity labeling method, called TurboID, that uses non-toxic labeling conditions and has high catalytic efficiency that allows its use in a wide variety of biological contexts. Here, we describe our use of yeast display-based directed evolution to engineer two promiscuous mutants of biotin ligase, TurboID and miniTurbo. We describe our characterization of the evolved PL enzymes in microbes, cultured cells, in vitro, and in vivo in flies and worms, and show that TurboID and miniTurbo have much greater catalytic efficiency than any other biotin ligase-based PL method currently available. Lastly, we demonstrate that TurboID and miniTurbo can be used to obtain proteomes with the same size, specificity, and depth-of-coverage as existing biotin ligase-based PL techniques with over 100-fold shorter labeling times. In the Appendix, we discuss two separate projects.
In Part I, we describe how fusion of the PL enzyme APEX2 to various mitochondrial proteins could be used to map the proteomes of mitochondrial subdomains and to visualize the localization of mitochondrial proteins within those subdomains, using APEX2 to generate contrast for electron microscopy imaging. In Part II, we discuss the development of two platforms that could be used to temporally control genome editing using light.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120909</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and dynamics of membrane proteins from solid-state NMR</title>
<link>https://hdl.handle.net/1721.1/120908</link>
<description>Structure and dynamics of membrane proteins from solid-state NMR
Lee, Myungwoon
Solid-state nuclear magnetic resonance (SSNMR) spectroscopy is an essential tool to elucidate the structure, dynamics, and function of biomolecules. This thesis mainly focuses on the structure determination of the hydrophobic domains of fusion proteins, which are involved in membrane fusion between the cell membrane and viral envelope. Although extensive structural studies have been conducted on the soluble ectodomain by crystallography, the structural topologies of the hydrophobic transmembrane domains (TMDs) of the fusion proteins have been poorly understood. Here, we introduced SSNMR to investigate the secondary structure and oligomeric states of the TMDs of two fusion proteins, PIV5 F and HIV gp41. For the PIV5 TMD, the membrane-dependent secondary structure was determined by measuring the chemical shifts: a predominantly α-helical conformation in the POPC/cholesterol membrane shifts to a β-strand conformation in the POPE membrane. Using 19F spin diffusion experiments on the fluorinated TMD, we have determined that the TMD forms a trimeric helical bundle. For the HIV gp41 MPER-TMD, we found the presence of a turn between the MPER helix and the TMD helix by measuring intramolecular distances and probing the lipid-peptide and water-peptide interactions. Intermolecular 19F-19F distances of the fluorinated peptides indicate that the MPER-TMD is trimeric. In addition to membrane fusion proteins, we have studied the oligomeric structure and the zinc-bound coordination geometry of a de novo designed amyloid fibril that catalyzes ester hydrolysis. By measuring the intermolecular contacts, we determined that peptides form parallel-in-register β-sheets and further assemble into stacked bilayers in an antiparallel orientation. The zinc binding sites were confirmed by the chemical shift perturbations of histidines upon zinc binding, and the specific zinc-bound geometry was identified by measuring intra-residue distances of histidines.
We also investigated the effects of cryoprotectants on the spectral resolution of lipid membranes and membrane peptides at low temperature. 13C and 1H MAS spectra of various cryoprotected membranes showed that DMSO provides the best resolution enhancement with the best retardation of ice formation at low temperature, and that the DLPE lipid exhibits excellent resolution.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120908</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed evolution of split APEX peroxidase</title>
<link>https://hdl.handle.net/1721.1/120907</link>
<description>Directed evolution of split APEX peroxidase
Han, Yisu, Ph. D. Massachusetts Institute of Technology
APEX is an engineered peroxidase that catalyzes the oxidation of a wide range of substrates, facilitating its use in a variety of applications, from subcellular staining for electron microscopy to proximity biotinylation for spatially restricted proteomics and transcriptomics. While this strategy has provided access to many cellular regions and organelles, there are still many compartments and structures that cannot be accessed, because the strategy is limited by the specificity of genetic targeting: some cellular regions cannot be exclusively targeted by a single genetic tag (Chapter 1). To further advance the capabilities of APEX and address the need for an interaction-dependent proximity labeling tool, this thesis describes the development of a split APEX2 system. Short enzymatic reconstitution times are also desired, to further ensure the organelles' morphological integrity. Thus, it is critical that split APEX2 reconstitute peroxidase activity both rapidly and robustly. We first performed two successive rounds of structure-guided screening to determine the optimal cut site (Chapter 2). We then used directed evolution on the top candidate pair to engineer a split APEX tool (sAPEX). Selections were performed via FACS on yeast-displayed fragment libraries, and 20 rounds of evolution produced a 200-amino acid N-terminal fragment (with 9 mutations relative to APEX2) called "AP" and a 50-amino acid C-terminal fragment called "EX". The AP and EX fragments were each inactive on their own, but reconstituted to give peroxidase activity when driven together by a molecular interaction (Chapter 3). Our resulting split APEX2 fragment pair has significantly diverged from its parental sequence and shows interaction-dependent reconstitution in multiple contexts in living mammalian cells (Chapter 4).
Our split APEX tool adds to the proximity labeling toolkit (Chapters 5 and 6), and in the future, should extend the utility of APEX-based approaches to new areas of biology at higher spatiotemporal resolution.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120907</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protective antigen-mediated delivery of biomolecules</title>
<link>https://hdl.handle.net/1721.1/120906</link>
<description>Protective antigen-mediated delivery of biomolecules
Lu, Zeyu (Zeyu Mike)
The intracellular delivery of therapeutic biomolecules such as oligonucleotides and proteins remains a key challenge today. Protective antigen, a naturally evolved protein translocase derived from Bacillus anthracis, has shown promise as a platform for protein delivery due to its ability to form a transmembrane pore that allows the cargo to have cytosolic access. We and others have used the LFN/PA system to deliver a wide variety of natural and non-natural peptides and proteins. Despite the significant progress made with the LFN/PA delivery platform, some aspects, including cargo selection and targeting, still remain limited. In the first part of the thesis, we greatly expand the application of the platform by demonstrating efficient delivery of peptide nucleic acids (PNAs), an oligonucleotide analog. Using this technology, we successfully exploited a cancer-specific gene dependency by the intracellular delivery of an antisense PNA in a receptor-dependent manner. In addition to exploiting new types of cargo for delivery, we developed a new strategy to target the LFN/PA system to specific cell types. In the second part of the thesis, we chemically conjugated a full-length immunoglobulin G (IgG) to a mutant PA (mPA). Significantly, we took advantage of the fact that PA activation is protease-dependent and created highly specific delivery vehicles that can only be activated by the concurrent presence of two entities on the cell surface. We showed that a protein toxin delivered by these IgG-mPA variants effectively inhibited cell growth in different cancer cell lines and exhibited a significantly increased therapeutic window over previously reported PA variants both in vitro and in vivo. In the last part of the thesis, we explored the possibility of simplifying the LFN/PA system by directly ligating protein cargos to PA. In the absence of LFN, the chemically created single-component system significantly increased the amount of delivered cargo.
Moreover, the single-component system combined with a short N-terminal polylysine tag further improved the delivery efficiency by more than 100-fold. Our findings raise the prospect of a simpler PA-mediated delivery platform.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120906</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Palladium reagents for bioconjugation</title>
<link>https://hdl.handle.net/1721.1/120905</link>
<description>Palladium reagents for bioconjugation
Rojas, Anthony J. (Anthony Jose)
Chapter 1: Macrocyclic peptides often display improved physicochemical properties in comparison to their linear counterparts. Here we detail a method for the divergent macrocyclization of unprotected peptides by crosslinking two cysteine residues with bis-palladium organometallic reagents. These synthetic intermediates are prepared in a single step from commercially available aryl bis-halides. Two bioactive linear peptides with cysteine residues at i, i + 4 and i, i + 7 positions, respectively, were cyclized to introduce a diverse array of aryl and bi-aryl linkers. These two series of macrocyclic peptides displayed similar linker-dependent lipophilicity and phospholipid affinity, and unique volumes of distribution. Additionally, one of the bioactive peptides showed target binding affinity that was predominantly affected by the length of the linker. Collectively, this divergent strategy allowed rapid and convenient access to various aryl linkers, enabling the systematic evaluation of the effect of the appended unit on the medicinal properties of macrocyclic peptides. Chapter 2: We report the use of a sulfonated biarylphosphine ligand (sSPhos) to promote the chemoselective modification of cysteine-containing proteins and peptides with palladium reagents in aqueous medium. The use of sSPhos allowed for the isolation of several air-stable and water-soluble mono- and bis-palladium reagents, which were used in an improved protocol for the rapid S-arylation of cysteines under benign and physiologically relevant conditions. The cosolvent-free aqueous conditions were applied to the conjugation of a variety of biomolecules with affinity tags, heterocycles, fluorophores, and functional handles. Additionally, bis-palladium reagents were used to perform macrocyclization of peptides bearing two cysteine residues. Chapter 3: The synthesis of palladium oxidative addition complexes of unprotected peptides is described.
Incorporation of 4-halophenylalanine into a peptide during solid phase peptide synthesis allows for subsequent oxidative addition at this position of the unprotected peptide upon treatment with a palladium precursor and suitable ligand. The resulting palladium-peptide complexes are solid, storable, water-soluble, and easily purified via high-performance liquid chromatography. These complexes react rapidly with thiols at low micromolar concentrations in an aqueous buffer, offering an efficient method for bioconjugation. Using this strategy, peptides can be rapidly functionalized with small molecules to prepare modified aryl thioether sidechains. Additionally, peptide-peptide and peptide-protein ligations are demonstrated under dilute aqueous conditions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120905</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular design for controlling morphology in organic electronics</title>
<link>https://hdl.handle.net/1721.1/120904</link>
<description>Molecular design for controlling morphology in organic electronics
Truong, Tran N. B
Chapter 1 gives an introduction to the structure, operation mechanism, performance parameters, and challenges of organic photovoltaic devices. We also discuss some strategies to improve the performance of photovoltaics, with an emphasis on morphology control in polymer bulk-heterojunction devices. Chapter 2 describes the synthesis of a class of polymer additives for bulk-heterojunction (BHJ) solar cells based on an extended triptycene-containing unit. The incorporation of these additives into BHJ photovoltaic devices based on PTB7 and PC71BM leads to an increase in power conversion efficiencies of 10-20%. We also found that the additives produce more consistent performance in devices, minimizing variation from processing conditions. Chapter 3 presents a modular synthetic route to access functionalized 2,5-di(thiophen-2-yl)-N-arylpyrroles (SNS) from readily available starting materials. We demonstrated the use of this building block in the synthesis of conjugated polymers with high thermal stability and solubility. Characterization of the polymers reveals a correlation between molecular packing and charge carrier mobility. Chapter 4 discusses strategies to enhance conjugation in organic electronic materials, using 2,5-di(thiophenyl)-N-arylpyrrole (SNS) as a model system. The first section describes synthetic routes to access a novel polycyclic heteroaromatic building block via intramolecular cyclization reactions. The second section explores the electrochemical properties of SNS units for the opportunity to enhance conjugation via electrochemical methods.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 140-158).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120904</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design and synthesis of benzodithiophene-containing poly(arylene ethynylene)s</title>
<link>https://hdl.handle.net/1721.1/120903</link>
<description>The design and synthesis of benzodithiophene-containing poly(arylene ethynylene)s
White, Kathleen R. (Kathleen Rose)
In Chapter 1, we provide an overview of the properties and applications of conjugated polymers, with a particular focus on poly(arylene ethynylene)s. We discuss the synthesis and photophysical properties of poly(arylene ethynylene)s, as well as their behavior in the solid state and at the air-water interface. We also provide a brief overview of the synthesis and properties of conjugated polymer networks and nanoparticles of poly(arylene ethynylene)s. Finally, we discuss the synthesis of benzodithiophene and benzodithiophene-containing conjugated polymers and poly(arylene ethynylene)s. In Chapter 2, we describe the design and synthesis of amphiphilic benzodithiophene-containing poly(arylene ethynylene)s for the synthesis of 2D-conjugated 2D polymers. We explore the behavior of the 1D-conjugated linear polymers at the air-water interface of a Langmuir-Blodgett trough, and describe our synthetic efforts toward the cross-polymerization of these polymers into 2D-conjugated 2D polymers. In Chapter 3, we explore the synthesis of conjugated polymer networks of benzodithiophene-containing poly(arylene ethynylene)s via the electrochemical and chemical crosslinking of two different 1D-conjugated precursor polymers. We describe the characterization of the conjugated polymer network thin films and bulk materials and discuss the differences in the material properties depending on the starting polymer. In Chapter 4, we describe the synthesis and characterization of a series of benzodithiophene-containing poly(arylene ethynylene)s and poly(arylene butadiynylene)s for magneto-optical applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120903</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distant chaperones and N-glycan signals : new mechanisms of secretory pathway proteostasis</title>
<link>https://hdl.handle.net/1721.1/120902</link>
<description>Distant chaperones and N-glycan signals : new mechanisms of secretory pathway proteostasis
Wong, Madeline Y
Approximately one-third of all cellular proteins traverse the secretory pathway. After translation and folding in the endoplasmic reticulum (ER), proteins are transported through the Golgi to their final locations inside or outside of cells. At each step, proteins are helped by chaperones, which both shepherd proteins towards their native structures and serve as gatekeepers for export. Many proteins in the secretory pathway are also modified by installation of polysaccharides on specific asparagine residues. These N-glycans are installed in the ER as uniform precursors, but are trimmed and built up by Golgi glycan maturation enzymes into a striking array of epitopes. N-glycans act as a second mechanism to stabilize protein structure and prevent the release of misfolded proteins. Outside the cell, N-glycans on cell surfaces and secreted, soluble proteins allow cells to interact with each other, with their environment, and with distal tissues. During development, cells encounter physiological ER stress incurred by high levels of sustained protein production. Unresolved protein misfolding, on the other hand, results in pathological ER stress and tissue dysfunction. Prior work has used small model substrates to show that cells utilize secretory pathway chaperones and tune N-glycosylation to respond to ER stress. This thesis examines how cells use similar strategies to accommodate challenging cargoes such as collagen-I. In the human body, collagen-I constitutes the primary protein component of bone, skin, and other organs; collagen-I misfolding results in pathological ER stress and connective tissue diseases. We therefore set out 1) to identify cellular components required for collagen-I secretion that could be targeted to address disease and 2) to assess the effects of ER stress on both cellular N-glycan structures and individual glycoproteins.
Here, we employ a high-throughput assay for collagen-I secretion and find that the cytosolic isoform of Hsp90 is required for collagen-I export. We also show that intracellular stress signaling alters the structures of cell surface and secreted N-glycans. Finally, we demonstrate that the collagen-I N-glycan buffers collagen-I folding against destabilizing mutations and ER stress. Our results identify potential therapeutic leads for collagen misfolding diseases and point to new mechanisms for maintaining secretory pathway proteostasis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Page 176 blank. Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120902</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic complex smart colloids for the detection of biomolecules</title>
<link>https://hdl.handle.net/1721.1/120901</link>
<description>Dynamic complex smart colloids for the detection of biomolecules
Zhang, Qifan, Ph. D. Massachusetts Institute of Technology
Complex smart colloids are a new class of dynamically reconfigurable emulsion droplets that switch morphologies between encapsulated and Janus configurations upon binding to chemical and biological analytes. The changes in morphology or orientation of the Janus droplets are readily detected with an optical transduction mechanism. The dynamic complex smart colloids are ideal sensing particles for aqueous sensing of biomolecules such as bacteria, oligonucleotides, antibodies, and viruses. This thesis expands the applications of complex smart colloids as bioassays that could potentially be adopted in the food and beverage industry, environmental monitoring, and medical diagnostics. In Chapter 2, we demonstrate an emulsion agglutination assay for E. coli sensing with a continuous-phase carbohydrate surfactant. In Chapter 3, we expand the emulsion assay by using interfacial bioconjugation methods, eliminating the need for a synthetic surfactant in the continuous phase. In Chapter 4, we develop a protein-protein agglutination assay with a thermally stable protein conjugated to the droplet-water interface for sensing of the Zika protein NS1.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120901</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering layer-by-layer nanoparticles for the targeted delivery of therapeutics to ovarian cancer</title>
<link>https://hdl.handle.net/1721.1/120900</link>
<description>Engineering layer-by-layer nanoparticles for the targeted delivery of therapeutics to ovarian cancer
Correa, Santiago (Santiago Correa Echavarria)
Survival rates for ovarian cancer have not meaningfully improved in thirty years. Ovarian cancer is particularly difficult to treat because it is usually discovered after it has metastasized, and it quickly develops resistance to the few drugs that are initially effective at controlling it. Nanomedicine has the potential to change the paradigm for ovarian cancer treatment by delivering complex combinations of conventional drugs plus next-generation therapies like small interfering RNA (siRNA) and immunotherapy. However, nanoparticles must be tailored to the particular drug-delivery challenges and opportunities posed by ovarian cancer. In this thesis, we designed layer-by-layer (LbL) nanoparticles (NPs) to target ovarian cancer using library-based approaches, identifying promising formulations for an advanced nanotheranostic that both treats and detects ovarian cancer. To develop LbL NPs for treating ovarian cancer, we identified and overcame process-engineering and fundamental materials challenges, thereby improving synthesis robustness, throughput, and scale. Chapter 2 describes how modern tangential flow filtration significantly improves throughput and scalability in colloidal LbL assembly. Chapter 3 implements this improved synthetic approach to generate a small library of LbL NPs screened for tumor-targeting properties on ovarian cancer cells, both in vitro and in vivo. Our results demonstrate that ovarian cancer cells have a high affinity for carboxylated LbL NPs, and we report several tumor-targeting formulations with distinct subcellular trafficking patterns. Chapter 4 explores the role of salt in LbL colloidal assembly, and we develop strategies for robustly synthesizing LbL-modified liposomes with high loading of siRNA.
Chapter 5 advances a promising formulation identified by our surface chemistry screen, which we developed into an advanced nanotheranostic device that delivers siRNA and mediates urinary-based tumor detection. Future work that continues to improve the synthesis of LbL NPs will be essential to generate larger and more ambitious LbL NP libraries. In turn, these libraries will facilitate systematic studies that further tailor the LbL platform to specific diseases and biomedical applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120900</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and performance of hemostatic biomaterials for managing hemorrhaging</title>
<link>https://hdl.handle.net/1721.1/120898</link>
<description>Design and performance of hemostatic biomaterials for managing hemorrhaging
Avery, Reginald Keith
The high mortality rates associated with uncontrolled bleeding motivate hemostatic material development for traumatic injuries. Uncontrolled, or hemorrhagic, internal bleeding requires hemostatic materials that can be directly delivered to, or that target, the bleeding locations. To address these needs, injectable systems are being developed that (1) generate artificial clots independent of the coagulation cascade or (2) interact with blood components to accelerate or otherwise improve coagulation processes. Hemostatic materials designed for internal bleeding can save lives on the battlefield, en route to emergency rooms, and in the operating room. This thesis first focuses on developing a shear-thinning hydrogel for injection onto bleeding surfaces and into ruptured vasculature. Based on in vitro assays of hydrogel performance, it was amenable to clinical delivery methods and reduced whole blood clotting times by 77%. In vivo bleeding models showed reduced blood loss and improved survival rates following a lethal liver injury. The hydrogel was also used as an embolic agent, where its occlusive potential in an anticoagulated model was demonstrated. Next, recombinant protein-based hemostatic materials were expressed to modulate clotting kinetics and performance. By incorporating clot-interacting peptide sequences (CIPs) into a protein scaffold, a family of multifunctional fibrinogen-like proteins (MFLPs) was developed and assayed. Clot turbidity, an indication of fibrin clot formation, was increased among enzyme-interacting CIPs. Mimicking the polymerization mechanism of fibrinogen, knob sequences were shown to be procoagulant at low concentrations by increasing clot turbidity, reducing clotting times, and inhibiting plasmin lysis. Finally, to understand the impact of hemostats on clot structure, imaging procedures were developed to systematically assess hemostatic materials and their influence on clot architecture.
Static and dynamic approaches were developed to quantify the activity of hemostats based on the spatial distribution of fibrinogen, red blood cells, and platelets around hemostat surfaces. Quantification of these hemostat-blood component interactions revealed a unique pattern of interactions for each hemostat studied. These techniques could serve as a screening tool for hemostats and improve characterization prior to in vivo assays. Taken together, the results highlight multiple approaches to address internal bleeding and opportunities to improve in vitro characterization of hemostats using microscopy.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120898</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transactional terrains : partnerships, bargains and the Postwar redefinition of the public realm, New York City 1965-1980</title>
<link>https://hdl.handle.net/1721.1/120889</link>
<description>Transactional terrains : partnerships, bargains and the Postwar redefinition of the public realm, New York City 1965-1980
Ramaswamy, Deepa, Ph. D. Massachusetts Institute of Technology
This dissertation traces the architectural and urban history of the privatization of the public realm. At the center of the research is New York City during the "urban crisis" years of the 1960s, as the city grappled with issues of civil rights, urban policy, and physical decline. The period saw an ongoing shift in how city and state governments initiated, financed, and managed architecture and urban development. As an administrative apparatus of crisis management, the public-private partnership was the fiscal and legal device at the center of this shift. With the public-private partnership came an increased emphasis on transactions between jurisdictional authorities and private-sector actors. These transactions privileged negotiations and bargains that exchanged power, responsibilities, resources, expertise, and narratives across a network of public- and private-sector entities such as city and state governments, quasi-governmental agencies and think tanks, developers, design practices, and nonprofits. The 1960s saw the beginnings of an organized cultivation of private-sector participation by city and state governments in the funding, management, and provision of public goods (parks, plazas, and housing). Privately owned public plazas, privately managed public parks, privately owned and managed low-income housing, and Special Zoning Districts are some of the outcomes of these partnerships that have shaped and influenced New York City's public realm ever since. By examining the ecology and economy of these public-private partnerships, this dissertation frames the privatization of the public realm in New York City as a series of complex intersections among the city's economic, political, urban, architectural, and real-estate histories beginning in the 1960s.
Thesis: Ph. D. in Architecture: History and Theory of Architecture, Massachusetts Institute of Technology, Department of Architecture, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 194-214).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120889</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building protestant modernism : John D. Rockefeller Jr. and the architecture of an American internationalism (1919-1939)</title>
<link>https://hdl.handle.net/1721.1/120888</link>
<description>Building protestant modernism : John D. Rockefeller Jr. and the architecture of an American internationalism (1919-1939)
Dawood, Azra
Following the First World War, the American philanthropist and heir to the Standard Oil fortune, John D. Rockefeller Jr., financed an international body of cultural and scientific institutions, which he presented as sites of global "brotherhood," intellectual cooperation, and shared world heritage. Combining religious history with an architectural and urban-spatial analysis of the built environments associated with these institutions, this dissertation argues that Rockefeller's putatively secular and inclusive promotion of culture and science was, in fact, designed to arrest the rise of "savage" nationalisms and a "godless" communism. While scholars have recognized the imperialism inherent in Rockefeller's philanthropy, they have largely ignored its theopolitical dimensions and its unexpected instrumentalization of secular buildings and landscapes toward religious concerns. Focusing on the so-called Orient (which valued modernity, sought freedom from European powers, and understood Western Christianity as an accomplice to imperialism), Rockefeller, a devout capitalist and Baptist, underwrote projects reconciling Christianity with modernity. Simultaneously, he presented Protestant America as a new civilizational leader to diverse audiences, including nationalists in the former Ottoman territories abroad and students from China and other modernizing nations studying in the U.S. Concentrating on two groups of buildings from his larger patronage, the International Student Houses and the museums and archaeological expedition houses of the University of Chicago's Oriental Institute, I show how Rockefeller leveraged architecture to present an American internationalism and its scientific and religious modernity as a "logical" fit for diverse regions and conditions.
From the Pacific Coast to the Middle East, his team of architects, theologians, archaeologists, and social reformers attempted to harmonize each of their buildings within its immediate geopolitical and aesthetic site. I show how Rockefeller's "re-Orientation" of Christianity (and America) coheres these eclectic, geographically dispersed buildings and landscapes into a rational or deliberate oeuvre, and is further revealed in specific erasures and accentuations of the built environment. Finally, this dissertation counters a widespread belief that religion in the modern era is solely contained within explicitly ecclesiastical architecture, or limited to societies "outside" the technoscientific modernity and secular progress of the global North.
Thesis: Ph. D. in Architecture: History and Theory of Architecture, Massachusetts Institute of Technology, Department of Architecture, 2018.; Cataloged from PDF version of thesis. Some images were redacted.; Includes bibliographical references (pages 698-709).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120888</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery of novel CRISPR enzymes for transcriptome engineering and human health</title>
<link>https://hdl.handle.net/1721.1/120887</link>
<description>Discovery of novel CRISPR enzymes for transcriptome engineering and human health
Abudayyeh, Omar O
RNA plays important and diverse roles in biology, yet molecular tools to measure and manipulate RNA are limited. Recently, the bacterial adaptive immune system CRISPR has revolutionized our ability to manipulate DNA, but no RNA-targeting counterparts were known. To discover parallel bacterial RNA-targeting systems that could be used for transcriptome engineering, we developed a computational pipeline to mine for novel Class 2 CRISPR systems across more than 25,000 bacterial genomes. Among the many novel CRISPR systems, we found a programmable RNA-targeting CRISPR system, CRISPR-Cas13, that could provide immunity to E. coli against the ssRNA phage MS2, and we biochemically characterized the enzyme. We adapted CRISPR-Cas13 for modulating the transcriptome in mammalian and plant cells by heterologously expressing Cas13 and engineering the enzyme to precisely knock down, bind, and edit RNA. Cas13 knockdown was as efficient as RNA interference, but much more specific, across many transcripts tested. RNA editing with Cas13 was also highly efficient, with up to 90% base editing rates and as few as 20 off-targets with engineered-specificity versions. Lastly, we combined Cas13 with isothermal amplification to develop a CRISPR-based diagnostic (CRISPR-Dx), providing rapid DNA or RNA detection with single-molecule sensitivity and single-base mismatch specificity. We used this Cas13a-based molecular detection platform, termed SHERLOCK (Specific High-sensitivity Enzymatic Reporter unLOCKing), to specifically detect pathogenic bacteria, genotype human DNA, and identify cell-free tumor DNA mutations. Our results establish CRISPR-Cas13 as a flexible platform for RNA targeting with wide applications in RNA biology, diagnostics, and therapeutics.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, September 2018.; Page 399 blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 210-229).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120887</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neurophysiological properties of cortico-cortical evoked potentials in humans</title>
<link>https://hdl.handle.net/1721.1/120886</link>
<description>Neurophysiological properties of cortico-cortical evoked potentials in humans
Crocker, Britni
Invasive electrical brain stimulation has been increasingly used to treat an ever wider range of neuropsychiatric disorders, from Parkinson's disease to epilepsy and depression. In addition, single pulse electrical stimulation (SPES) is increasingly used to map connections between cortical areas using cortico-cortical evoked potentials (CCEPs). However, the properties and mechanisms underlying brain stimulation remain mostly unknown, hindering the application of stimulation to new neurological disorders and the development of adaptive stimulation technologies that could improve clinical outcomes. To improve understanding of SPES, we systematically explored the effects of cortical electrical stimulation in human epilepsy patients. These patients have intracranial electrodes implanted for intractable epilepsy as part of their clinical course, creating a unique opportunity to simultaneously stimulate and record the human brain in multiple locations. Single pulses of electrical current were delivered across pairs of electrodes in the human cortex, and the neurophysiological responses were recorded. Examining some fundamental properties of CCEPs, we show that the brain's response to a sub-millisecond pulse of stimulation can be detected up to one second post-stimulus. This response has two peaks with distinct properties; compared to the second peak, the first is less variable, its timing is less delayed by distance, and its magnitude is more diminished by distance. Looking at the spatial distribution of CCEPs, we show that stimulation-derived networks are more closely related to structural connectivity than to functional connectivity; however, correcting for distance eliminates this difference. Monitoring CCEPs across different brain states, we show that the second peak of the CCEP is significantly diminished during anesthesia.
Taken together, these results provide important insight into the basic neurophysiological properties of CCEPs, their spatial distribution, and how they are modulated by the state of the brain itself. These characteristics can inform experimental design, provide input parameters for modeling studies, and be applied towards the development of adaptive closed-loop stimulation paradigms.
Thesis: Ph. D. in Medical Engineering and Medical Physics, Harvard-MIT Program in Health Sciences and Technology, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 94-103).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120886</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling transmission heterogeneity for infectious disease outbreaks</title>
<link>https://hdl.handle.net/1721.1/120885</link>
<description>Modeling transmission heterogeneity for infectious disease outbreaks
Majumder, Maimuna S. (Maimuna Shahnaz)
The transmissibility of a given infectious disease is often described by its basic reproduction number (R₀) - namely, the average number of secondary infections caused by an index case in a fully susceptible population. Typical approaches to modeling transmission dynamics associated with infectious disease outbreaks frequently use R₀ to produce deterministic case count projections, in effect treating the affected population as homogeneous (i.e., as if every individual in the population of interest has an equal likelihood of passing on the infection of interest). As a result, such approaches often fail to effectively capture transmission dynamics during real-world outbreaks in heterogeneous populations. Here, we use analytical and simulation methods to show that treating R₀ as the mean of a random variable (thus permitting the estimation of non-deterministic case count projections) allows us to better assess outbreak trajectory and the likelihood of disease propagation in non-homogeneous populations (Chapter 2). We then empirically investigate predictors of in-population transmission heterogeneity (i.e., the fact that some individuals in a given population are more likely than others to pass on the infection of interest) within the context of Middle East Respiratory Syndrome in South Korea using a combination of statistical- and review-driven approaches (Chapter 3). Then, in Chapter 4, we explore how in-population transmission heterogeneity can be used to our advantage through the deployment of risk-informed interventions (i.e., in which individuals who are more likely to pass on the infection of interest are exclusively targeted to receive the intervention) during infectious disease outbreaks. 
More specifically, we use the analytical and simulation methods first introduced in Chapter 2 - paired with in-population transmission heterogeneity data from Chapter 3 - to compare the utility of a variance-informed deployment scheme against a traditional, uniform deployment scheme (i.e., in which every individual has an equal likelihood of receiving the intervention). Finally, building on our findings in Chapters 2, 3, and 4, we recommend four interrelated policies in Chapter 5 that aim to (1) normalize the treatment and reporting of R₀ as the mean of a random variable and (2) improve access to the data required to sufficiently capture population heterogeneity when modeling disease propagation.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120885</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustained-release implants for intraperitoneal cisplatin delivery</title>
<link>https://hdl.handle.net/1721.1/120884</link>
<description>Sustained-release implants for intraperitoneal cisplatin delivery
Mantzavinou, Aikaterini
The objective of this work was to develop materials for continuous low-dose delivery of cisplatin directly into the abdomen, also known as intraperitoneal (IP) chemotherapy. IP chemotherapy can help treat peritoneal metastasis in many advanced gynecologic and gastrointestinal cancers and has shown particular promise in treating advanced ovarian cancer. It is, however, tremendously underutilized because it is resource-intensive, and because the current technology and maximum tolerated dose regimen cause complications and severe toxicity to patients. We previously showed that continuous low-dose IP cisplatin delivery via an implanted diffusion-based reservoir device can be as effective as, and less toxic than, intermittent maximum tolerated dose IP injections. To translate this work to a clinically relevant implantable system, we developed composite materials that can deliver cisplatin at a continuous, tunable low dose. The materials were mechanically well suited for placement in the abdomen and were evaluated for in vitro bioactivity, in vivo tolerability, and in vivo ability to deliver platinum to key abdominal organs, with promising results. Dosing studies with different material dimensions helped identify a dose to pilot treatment of ovarian cancer in human xenograft-bearing mice. The implications of more accessible and affordable IP chemotherapy are especially important in countries with limited resources. Design reviews and a clinician survey in India reveal eagerness for early adoption of new technologies and dosing regimens to treat peritoneal metastasis, and show promise for utilization of our implant in the developing world. The work described in this thesis carries implications for the treatment of advanced ovarian cancer and of peritoneal metastasis of other tumors affecting millions of patients worldwide, and may help with the management of nonmalignant conditions with abdominal involvement.
Thesis: Ph. D. in Medical Engineering, Harvard-MIT Program in Health Sciences and Technology, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 217-226).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120884</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Score instruments : a new paradigm of musical instruments to guide musical wonderers</title>
<link>https://hdl.handle.net/1721.1/120882</link>
<description>Score instruments : a new paradigm of musical instruments to guide musical wonderers
Troyer, Akito van
Advancements in technology have made musical instruments, especially electronic instruments, accessible to the masses. As a result, music-making has become more widespread and convenient. However, the black-boxing practices of commercial Digital Musical Instruments (DMIs) have conditioned many users to produce only specific styles of music. Furthermore, as many of these commercial instruments produce sound through loudspeakers rather than through the body of the instrument, players lose the physical and tactile connection to sound and music. Consequently, these DMIs inhibit understanding of the relationship between musicality and our everyday physical world, and cut players off from exploring a more extensive range of musical possibilities. Despite the multiplication of music-making tools, music-making practices still operate on the same principles: the production of music requires instruments to generate organized physical sound energies that follow the schema of a score. This dissertation studies a new class of Interactive Music Systems (IMSs) called Score Instruments that embed both instrument and score into a single unified interface. Score Instruments reopen the range of possibilities offered by everyday sounds and objects as musical bricolage tools to bring players into a personalized, guided, and open-ended use of the instrument. Players of Score Instruments are called Musical Wonderers, as the instruments encourage them to focus on exploration to build their own musical language, rather than on the technically correct realization of music. The dissertation describes the concept of Score Instruments. Two instances of Score Instruments demonstrate how the techniques and criteria translate into specific IMSs. City Symphonies is a massive musical collaboration platform that encourages players to listen to their cities and create music with environmental sounds. 
MM-RT is a tabletop tangible musical instrument that employs electromagnetic actuators and small permanent magnets to physically induce sounds with found objects. Both projects exemplify how Score Instruments can simultaneously stimulate open creativity and provide meaningful direction and constraints that guide users to learn underlying principles about music and the physical world. The design investigations and historical perspective of this dissertation offer a future of music-making practice that is based on exploration and designed to broaden the definition and variety of music.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 167-190).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120882</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The plantation network : Brazilian bioenergy science and sustainability in the global South</title>
<link>https://hdl.handle.net/1721.1/120881</link>
<description>The plantation network : Brazilian bioenergy science and sustainability in the global South
Labruto, Nicole Francesca Hayes
This dissertation provides a multiscalar analysis of climate change solutions from the global South by investigating how bioscientists are leveraging postcolonial ecological legacies into the basis for what they envision as a sustainable future. In Brazil, scientists from different disciplines are reengineering sugarcane, a crop central to the colonial project, at molecular, organismic, and economic scales in order to expand biofuels as international energy commodities. I argue that biology has become central to what I call the plantation network: a postcolonial agricultural formation that includes laboratories as obligatory passage points in the growing of plants to meet human needs and desires, especially in the era of "sustainability" and "green capitalism." My research uses the plantation network formation to show that even though Brazilian scientists work under the ethical and ecological threats posed by climate change, they also rely on Brazilian history, ideology, and cultural practices as they reshape life forms, landscapes, and labor in Brazil and Mozambique. This multisited analysis draws on ethnographic research conducted with molecular biologists attempting to create the world's first commercially viable transgenic sugarcane plant, biochemists working to develop waste-reducing fermentation technologies by using bioprospected "wild" yeasts to digest sugarcane bagasse, and a think tank of agronomic economists seeking to transfer a "Brazilian biofuel model" to Lusophone Mozambique. For these scientists, Brazil's long history of sugarcane is coming to center on ethoses and practices of what they call "sustentabilidade" (sustainability): a form of technoscientifically aided industrial development that contributes to environmental wellbeing while maintaining the possibility of continued capitalist production for future populations. 
The dissertation examines "sustainability" as it has emerged in these sites by considering the plantation as a pharmakon-like entity: at the same time (1) a destructive nexus of social-ecological relations that has propelled the harmful, unjust conditions that have led to calls for "sustainable" practices and principles and (2) a redemptive space for ethically-sound renewable fuel and food production that scientists believe is central to creating a more just, livable world. I investigate how scientific practices related to ethically-rendered biofuels are motivating changes to the biotechnologies, production techniques, and locations of sugarcane plantations.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 273-315).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120881</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bodies at war : National Security in American controversies over animal &amp; human experimentation from WWI to the War on Terror</title>
<link>https://hdl.handle.net/1721.1/120880</link>
<description>Bodies at war : National Security in American controversies over animal &amp; human experimentation from WWI to the War on Terror
Shapiro, Ryan Noah
The rhetoric and apparatus of national security have played critical roles in American controversies over animal and human experimentation from the dawn of the twentieth century to today's "War on Terror." Drawing on archival and Freedom of Information Act (FOIA) research, this dissertation traces how American partisans in the enduring vivisection controversy have sought to mobilize national security concerns to tar their domestic political adversaries as agents of foreign enemies, from the Kaiser and Hitler to Stalin and Al-Qaeda. Further, this study explores how these efforts have intersected with issues of gender, slavery, and the pathologizing of political dissent, as well as campaigns for the absolute freedom of research, the functioning of Nazism and the Holocaust in the American political imagination, civil liberties in the post-9/11 world, and ongoing debates over animal rights, the Federal Bureau of Investigation (FBI), and domestic terrorism.
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120880</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principles for composing genetic circuits in mammalian cells with a focus on miRNA sensing</title>
<link>https://hdl.handle.net/1721.1/120877</link>
<description>Principles for composing genetic circuits in mammalian cells with a focus on miRNA sensing
Gam, Jeremy Jonathan
In this thesis, we developed two synthetic biology frameworks to facilitate the construction of useful genetic circuits, with a focus on circuits that sense miRNAs. miRNAs are attractive biomarkers for sensing since they regulate virtually all biological pathways in plants and animals, and because miRNA sensors can be easily designed by incorporating sequences complementary to the miRNA into a genetic circuit. Therefore, circuits that sense endogenous miRNAs can dynamically respond to cellular signaling or classify cell types. However, the development of genetic circuits, and especially multi-input miRNA sensors, has traditionally been iterative, costly, and time-consuming. To address this, we developed a framework to measure miRNA activity and generate accurate predictions for sensors with multiple miRNA inputs. We started by building the largest library of miRNA sensors to date (620 sensors) and used the library to measure miRNA activity in several cell lines. We then constructed multi-input sensors and determined design rules for predicting their function, namely that miRNAs repress targets synergistically when placed in opposite UTRs and antagonistically within the same UTR. In our second framework, we developed a 'one-pot' method for high-information transfection and analysis that allows researchers to quickly determine the performance of many tuned circuit variants in a single well. We used our one-pot method to quickly characterize a variety of genetic elements and to optimize the design of a miRNA sensor with inverted logic, a circuit topology we found difficult to design using traditional methods.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120877</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Liberation of low-coordinate phosphorus species from anthracene-based molecular precursors</title>
<link>https://hdl.handle.net/1721.1/120876</link>
<description>Liberation of low-coordinate phosphorus species from anthracene-based molecular precursors
Transue, Wesley J
The study of heavier p-block analogs to unsaturated organic small molecules is of fundamental importance, allowing insight into the differing bonding and reactivity patterns that control their synthetic utility. A major complication in this pursuit is the instability of these unsaturated compounds containing heavier elements, as they can rarely be isolated without sterically encumbering substituents to provide kinetic protection. Herein, a series of anthracene-based molecular precursors are leveraged for the generation of low-coordinate phosphorus species that have been generally too unstable to isolate and use under ambient conditions. These precursors exhibit a dibenzo-7-phosphanorbornadiene framework that has allowed access to free phosphinidenes, phosphinidene sulfides, and phosphaalkynes. The thermal release of anthracene as a relatively innocuous coproduct allows for studies of these species' reactivities toward new substrates. Where possible, the intermediacy of the unstable phosphorus compound is demonstrated by direct spectroscopic detection, particularly using nuclear magnetic resonance (NMR) and microwave (rotational) spectroscopy. The mechanism and scope of thermal fragmentation of these molecular precursors are described from experimental kinetic studies bolstered by density functional theory (DFT) calculations. Initial forays into further uses of dibenzo-7-phosphanorbornadiene compounds have included the preparation of a new material of composition P2S and the transition metal-catalyzed transfer of tert-butylphosphinidene to styrene to generate phosphiranes.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Page 467 blank. Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120876</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building a state space for song learning</title>
<link>https://hdl.handle.net/1721.1/120871</link>
<description>Building a state space for song learning
Mackevicius, Emily Lambert
Song learning circuitry is thought to operate using a unique representation of each moment within each song syllable. Distinct timestamps for each moment in the song have been observed in the premotor cortical nucleus HVC, where neurons burst in sparse sequences. However, such sparse sequences are not present in very young birds, which sing highly variable syllables of random lengths. Furthermore, young birds learn by imitating a tutor song, and it was previously unclear precisely how the experience of hearing a tutor might shape auditory, motor, and evaluation pathways in the songbird brain. My thesis presents a framework for how these pathways may assemble during early learning, using simple neural mechanisms. I start with a neural network model for how premotor sequences may grow and split. This model predicts that the sequence-generating nucleus HVC would receive rhythmically patterned training inputs. I found such a signal when I recorded neurons that project to HVC. When juvenile birds sing, these neurons burst at the beginning of each syllable, and when the birds listen to a tutor, neurons burst at the rhythm of the tutor's song. Bursts marking the beginning of every tutor syllable could seed chains of sequential activity in HVC that could be used to generate the bird's own song imitation. I next used functional calcium imaging to characterize HVC sequences before and after tutor exposure. Analysis of these datasets led us to develop a new method for unsupervised detection of neural sequences. Using this method, I was able to observe neural sequences even prior to tutor exposure. Some of these sequences could be tracked as new syllables emerged after tutor exposure, and some sequences appeared to become coupled to the new syllables. In light of my new data, I expand on previous models of song learning to form a detailed hypothesis for how simple neural processes may perform song learning from start to finish.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 159-177).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120871</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimal art and body politics in New York City, 1961-1975</title>
<link>https://hdl.handle.net/1721.1/120870</link>
<description>Minimal art and body politics in New York City, 1961-1975
Ketcham, Christopher M. (Christopher Michael)
In the mid-1960s, the artists who would come to occupy the center of minimal art's canon were engaged with the city as a site and source of work. These artists drew on the social, material, and spatial conditions of the surrounding environment, producing sculpture that addressed the problem of the city as a problem of the body. At the same time, minimal art was deployed by civic leaders, including New York City's mayor John V. Lindsay, as an instrument to organize a public and project a new urban image in the midst of sweeping social and economic change. The work of Carl Andre, Tony Smith, Dennis Oppenheim and many of their peers, informed by Merleau-Ponty's phenomenology, promised to heighten one's consciousness of self, others, and environment. The Lindsay administration and its allies positioned sculpture as an aesthetic rupture that could ameliorate the sensorial burden and alienation of urban life. The phenomenological and spatial claims of minimal art were adopted and mobilized by the city's power brokers as they sought to assert authority over New York. This dissertation assesses the intertwined agency of artists, political leaders, corporate stakeholders, and private developers as they made proprietary claims for urban space. In the canonical formation of minimal art, the city has been marginalized as a field of meaning. The phenomenological reading has become naturalized in historiography. Rather than perpetuate this historiographical opposition, this dissertation pursues an urban history of minimal art and a social history of its phenomenology. It focuses on artists and organizers whose work constitutes a sustained engagement with the social, material, and spatial realities of New York City in the 1960s. Merleau-Ponty's phenomenology resonated with artists in 1960s New York, in part, because it overlapped with a politics of the urban body that was developing simultaneously. 
The city's use of minimal art was closely related to the problematic visibility of politicized bodies. As Lindsay was confronted with issues of race, gender, and class that emerged in the wake of massive social and economic transition, his administration turned to minimal art to serve as a tangible sign of order. Sculpture was deployed as a tool to orient the body and the public within the city's new spatial realities.
Thesis: Ph. D. in Architecture: History and Theory of Art, Massachusetts Institute of Technology, Department of Architecture, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 335-362).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120870</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The environmental effect on corrosion fatigue behavior of austenitic stainless steels</title>
<link>https://hdl.handle.net/1721.1/120869</link>
<description>The environmental effect on corrosion fatigue behavior of austenitic stainless steels
Yu, Lun, Ph. D. Massachusetts Institute of Technology
Corrosion fatigue is a multivariate challenge that threatens the service lifetime of nuclear power plant materials, especially austenitic stainless steels. Both enhancement and retardation of crack growth have been observed in laboratory tests. This thesis work performs high temperature autoclave testing, post-test characterization and mechanistic modeling to understand the corrosion fatigue behavior of austenitic stainless steels in simulated light water reactor (LWR) environments. Crack growth rate (CGR) data were generated from the autoclave testing on low (0.001 wt.%) and high (0.03 wt.%) sulfur content heat 1T compact tension (CT) specimens. Tests were controlled under constant K (22-35 MPa [square root of]m) with load ratio of 0.7 and sawtooth waveform (85% rise vs. 15% fall), and at pH = 10 and 288 °C with system pressure of 9.54 MPa. Crack enhancement was observed in low sulfur content heat specimens, and the CGR increases as the loading rise time increases. The fracture surfaces of low sulfur content heat specimens showed transgranular features with facets ("river pattern") and few oxide particles. Crack retardation was observed in high sulfur content heat specimens, and the CGR decreases as the loading rise time increases. The fracture surfaces of high sulfur content heat specimens showed distinct features at different rise time steps. Transgranular features ("river pattern") were observed at short rise times, while non-descript surfaces with fine octahedral oxide particles were observed at long rise times. Additionally, tests in deuterium water were performed to enable measurements on hydrogen/deuterium concentrations in specimens using ToF-SIMS and hot vacuum extraction techniques. Deuterium pick-up from the testing environment was observed, and the enrichment of deuterium/hydrogen ahead of the crack tip was also observed. 
Controlled experiments were also conducted, where specimens were baked prior to the autoclave testing to remove residual internal hydrogen. This heat treatment was found not to affect the crack growth behavior. Dissolved gases, hydrogen and argon respectively, were bubbled into the system during the autoclave tests, and they resulted in similar crack growth behaviors. Modeling indicates that there exists an enhancement mechanism other than corrosion mass removal driving the crack growth in simulated LWR environments, possibly arising from the effect of corrosion-generated hydrogen. The retardation behavior and experimental observations could be understood and explained by the concept and modeling of corrosion blunting. The results suggest excess conservatism in the current ASME standard N-809 for high sulfur content austenitic stainless steels.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120869</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plate efficiency of bubble cap rectifying columns</title>
<link>https://hdl.handle.net/1721.1/120698</link>
<description>Plate efficiency of bubble cap rectifying columns
Carey, James S
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 1930.; Includes bibliographical references (leaves 332-335).
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120698</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robotic symbionts : exploring integrated human-machine action and expression</title>
<link>https://hdl.handle.net/1721.1/120685</link>
<description>Robotic symbionts : exploring integrated human-machine action and expression
Leigh, Sang-won
Throughout history we have augmented our physical abilities with machines. Concepts for flying machines and the ideas behind today's exoskeletons were recorded as early as the 13th century. Today, as technology permeates every aspect of our lives, it is easy to imagine a much closer integration of machines into the tasks we carry out. This thesis explores a vision of humans and machines symbiotically working together on a task through co-action and co-agency. This vision opens up many opportunities in between the extremes of autonomous robots and master-slave systems, through more complex systems in which human and machine collaborate to perform actions and manipulate robotic extensions. This dissertation also reports on three extensive experiments, each consisting of multiple iterations of actual, tested designs: a series of extra-numerary robotic fingers for increasing manual dexterity, a series of collaborative human-drone drawing systems enabling novel expressive capability, and a series of semi-automated guitar systems enabling extended musical expression as well as new instrument-learning opportunities. The studies performed with these prototypes give insight into the impact of such robotic integration on the human user: the user is nudged to adapt to the new condition and re-calibrate the expectations associated with certain input actions; the division of roles allows the user to explore and understand experiences outside their given skills or physical limits; and the robotic extension inspires activity outside of the user's regular practice. Finally, the thesis also defines a design space and corresponding terminology to situate different technical and design choices for these new forms of human-robot integration. I categorize some of the existing approaches based on how human and robotic actions are coordinated, and how the robotic movements are controlled. 
I also propose ways to qualitatively describe the interaction between human and machine, in terms of how the robotic extension may affect the cognition and behaviors of its user. The experiments with the prototypes support and are analyzed through these definitions, and discuss how we could achieve novel or synergistic outcomes with robotic augmentations.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018.; Cataloged from PDF version of thesis. Page 170 blank.; Includes bibliographical references (pages 157-169).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120685</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid body craft</title>
<link>https://hdl.handle.net/1721.1/120684</link>
<description>Hybrid body craft
Kao, Hsin-Liu (Hsin-Liu Cindy)
Sensor device miniaturization and breakthroughs in novel materials are allowing for the placement of technology increasingly close to our physical bodies. However, unlike all other media, the human body is not simply another surface for enhancement - it is the substance of life, one that encompasses the complexity of individual and social identity. The human body is inseparable from the cultural, the social, and the political, yet technologies for placement on the body have often been developed separately from these considerations, with an emphasis on engineering breakthroughs. This dissertation investigates opportunities for cultural interventions in the development of technologies that move beyond wearable clothing and accessories, and that are purposefully designed to be placed directly on the skin surface. How can we design emerging on-body interfaces to reflect existing cultural practices of decorating the body, with the intent to expand the agency of self-expression? This dissertation looks at this question through the development of a series of research artifacts, and the contextualization of a design space for culturally sensitive design. In this dissertation, Body Craft is defined as existing cultural, historical, and fashion-driven practices and rituals associated with body decoration, ornamentation, and modification. As its name implies, Hybrid Body Craft (HBC) is an attempt to hybridize technology with body craft materials, form factors, and application rituals, with the intention of integrating existing cultural practices with new technological functions that have no prior relationships with the human body. With this grounding, HBC seeks to support the generation of future technologized customs in which technology is integrated into culturally meaningful body adornments. 
The artifacts in this dissertation encompass the integration of technologies such as flexible electronics, chemical processes, and bio-compatible materials into existing Body Craft customs. These artifacts contribute novel, culturally inspired form factors, and introduce unprecedented interaction modalities for on-body technologies. A design space is created in which to examine shifts in the communicative qualities of these Body Crafts due to the integration of technology, as well as new forms of self-expression that have emerged. This dissertation contributes a culturally sensitive lens to the design of on-body technologies. The intention is to expand their lifetimes and purposes beyond mere novelty and into the realms of cultural customs and traditions.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 209-223).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120684</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reengineering Education : systems engineering and the LearningGraph as a means to develop a coherent learning data architecture</title>
<link>https://hdl.handle.net/1721.1/120683</link>
<description>Reengineering Education : systems engineering and the LearningGraph as a means to develop a coherent learning data architecture
Groff, Jen
Today's educational systems are complex, political, sociotechnical ecosystems that struggle to meet the needs of most learners and societal demands, and, most critically, struggle to change. Yet learners globally need access to high quality learning environments and coherent learning pathways that support them to thrive in our complex world. Fundamental to every learning technology, environment, and system is a learning data model and architecture that helps to organize the learner's experience. To date, in traditional educational systems, this has largely been dominated by public policy curriculum standards, which impose tremendous limitations on classroom practice and on their ability to support complex learning technologies. At the same time, over the past several decades significant advances have been made in the learning sciences, learning analytics, and learning technologies that have greatly expanded our ability to model learning and provide immersive and adaptive learning environments. Yet these communities rarely coordinate and align their data models. The disjointedness of these structures leaves their architecture in a messy, challenging state, unable to successfully carry us into an advanced future of learning technologies and effective learning ecosystems. This dissertation explores the use of Systems Engineering as a means of reengineering this critical infrastructure of the system, through the LearningGraph, a research initiative that used this methodology to create a unified data structure for modeling learning constructs in a coherent learning data architecture. The aim of the project is to ultimately inform a new infrastructure to support learning development across learning technologies and environments. 
In doing so, we create the foundation for closing significant gaps in the current system: between learning sciences research and practice; curriculum and assessment design; the design of learning technologies and all the aforementioned components; and between and across education systems globally. Moreover, it creates the potential for the foundation of a very different future for learning ecosystems.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018.; Cataloged from PDF version of thesis. Vita.; Includes bibliographical references (pages 179-201).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120683</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedded jussives as instances of control : the case of Mongolian and Korean</title>
<link>https://hdl.handle.net/1721.1/120681</link>
<description>Embedded jussives as instances of control : the case of Mongolian and Korean
Sisovics, Milena
This dissertation is an investigation into the semantics of imperatives and imperative-like forms (collectively referred to as jussives) in embedded contexts. The long-held view that imperatives are confined to root (matrix) contexts has been challenged by recent findings of counterexamples from a variety of languages. This thesis contributes to the debate by introducing novel empirical evidence from Mongolian confirming that the restriction on imperative embedding is not universal: Mongolian is shown to allow embedding of a speaker-directed jussive form, the voluntative, and a hearer-directed imperative. The empirical domain is widened to include data from jussive embedding in Korean (drawing on Madigan 2008, Pak et al. 2008b, a.o.). This thesis takes special interest in the complex combination of properties characterizing the subjects of embedded jussives in Mongolian and Korean, to wit, (i) their dependence on an antecedent in the embedding clause, (ii) the requirement to be interpreted de se, and (iii) the presence of [phi]-features. These properties are used to make a case for an analysis of jussive subjects as instances of Obligatory Control PRO, and against an analysis as indexical pronouns. In particular, it is demonstrated how a view of PRO as a syntactically and semantically complex unit closely resembling de re expressions in attitude reports (Percus &amp; Sauerland 2003a) provides an elegant way of accounting for the combined characteristics of jussive subjects. Set against the background of a Neo-Davidsonian event semantics, this thesis puts forward the idea that jussive clauses denote sets of events whose propositional content amounts to a desire statement. An analysis of jussives as sets of events is shown to afford a natural extension to matrix occurrences on the assumption that the content denoted by matrix jussives is anchored to the speech event. 
Finally, this thesis proposes to bridge the gap between jussive reports and canonical Obligatory Control constructions and demonstrates how the presented account can be generalized to provide a novel perspective on Obligatory Control constructions as well.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 179-185).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120681</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Belief and evidence</title>
<link>https://hdl.handle.net/1721.1/120680</link>
<description>Belief and evidence
Schultheis, Ginger (Virginia Kathleen)
Chapter 1, 'Living on the Edge: Against Epistemic Permissivism,' argues that Epistemic Permissivists face a special problem about the relationship between our first- and higher-order attitudes. They claim that rationality often permits a range of doxastic responses to the evidence. Given plausible assumptions about the relationship between your first- and higher-order attitudes, you can't stably be on the edge of the range, so there can't be a range at all. Permissivism, at least as it has been developed so far, can't be right. I consider some new ways of developing Permissivism, but each has problems of its own. Chapter 2, 'Belief and Probability,' argues that rational belief doesn't reduce to subjective probability. Under the right circumstances, I argue, acquiring conflicting evidence can defeat your entitlement to believe a certain hypothesis without probabilistically disconfirming that hypothesis. I consider three probabilistic theories of rational belief (a simple threshold view, Hannes Leitgeb's stability theory, and a new theory involving imprecise credence) and show that none of them can account for the cases I describe. Chapter 3, 'Can We Decide to Believe?', takes up the question of whether we can decide to believe. There are two main arguments for the conclusion that believing at will is impossible, which I call the retrospective argument and the aim-of-belief argument, respectively. Neither, I argue, demonstrates that believing at will is impossible in all cases. The retrospective argument leaves open the possibility of believing at will in acknowledged permissive cases; the aim-of-belief argument leaves open the possibility of believing at will when credal attitudes are imprecise.
Thesis: Ph. D. in Philosophy, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 76-80).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120680</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowing and doing, well and good</title>
<link>https://hdl.handle.net/1721.1/120679</link>
<description>Knowing and doing, well and good
Jaques, Abby Everett, Ph. D. Massachusetts Institute of Technology
My dissertation explores the relationship between intentional action and knowledge. I argue that understanding this relationship is not only of central importance to action theory; it is also a means to progress on questions about the nature of knowledge, the mechanisms of oppression, and the foundations of ethics. The opening chapter argues for my view of the nature of intentional action. I show that it is distinguished from nearby phenomena by an aim of control, and that this control turns out to be Anscombean practical knowledge, the special knowledge an agent can have of what she is doing, how, and why. The second chapter considers how my view about the knowledge-action relationship differs from those advocated by 'shifty epistemologists', theorists who claim that what you know depends on practical factors like what's at stake for you. I argue that my view undermines the motivation for this claim and may debunk it. The third chapter presents a new way of understanding epistemic injustice and describes how epistemic injustice (thus understood) interacts with action's constitutive aim of practical knowledge to cause shackling, a distinctive dilemma faced by marginalized agents that both manifests and constitutes oppression. The fourth chapter shows how my view of the nature of intentional action entails a new kind of constitutivism about practical reason. I raise a worry that this form of constitutivism threatens the existence and/or generality of moral reasons before suggesting some possible ways out.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120679</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributivity across domains : a study of the distributive numerals in Bangla</title>
<link>https://hdl.handle.net/1721.1/120678</link>
<description>Distributivity across domains : a study of the distributive numerals in Bangla
Guha, Ishani
In this thesis, studying the numeral indefinites in Bangla, I argue that distributive numerals are not distributivity operators themselves. The distributive numerals introduce a plurality of discourse referents, and they require that this plurality of discourse referents must enter into a formal relationship with the plurality of individuals introduced by another discourse referent. This formal requirement is known as dependency. Conventionally the phenomenon is called covariation. A distributivity operator is such that it allows this formal relationship to hold in its scope. I argue that examples involving ditransitives provide clear evidence for such an analysis. Beyond this, I show that the different forms of numerals carry an additional restriction encoding specificity effects. I show that the requirement of specificity and the requirement of covariation interact with each other in the scope of a distributivity operator. This interaction is encoded morphologically by differentiating between simple and complex forms of distributive numerals. The proposal is implemented using Dynamic Plural Logic. Finally, I show that this formalization can be extended to account for the difference between adnominal distributive numerals and adverbial (which I call 'pluractional') distributive numerals. To analyze the adnominal and adverbial distributive numerals I propose to differentiate between distributivity in the domain of individuals and distributivity in the domain of events.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 177-180).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120678</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ethics for artificial agents</title>
<link>https://hdl.handle.net/1721.1/120677</link>
<description>Ethics for artificial agents
Gray, David Michael, Ph. D. Massachusetts Institute of Technology
Machine ethics is a nascent subfield of computer ethics that focuses on the ethical issues involved in the design of autonomous software agents ("artificial agents"). Chapter 1 of this thesis considers how best to understand the central projects of this new subfield, and reconstructs a prominent theory of how artificial agents ought to be designed. This theory, which I call the "agential theory" of machine ethics, says that artificial agents morally ought to be designed to behave only in ways that would be permissible for a human agent to behave, and that only artificial agents that have been designed in this way are morally permissible for human beings to use. Chapter 2 critically assesses two versions of the agential theory: one that assumes that artificial agents are moral agents, and another that does not. After considering arguments for both versions of the theory, I argue that both versions should be rejected. Chapter 3 sets out and analyzes a case study in machine ethics, focusing on the development of an artificial agent to assist with the planning of a public health social work intervention.
Thesis: Ph. D. in Philosophy, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 121-125).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120677</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biases in segmenting non-concatenative morphology</title>
<link>https://hdl.handle.net/1721.1/120676</link>
<description>Biases in segmenting non-concatenative morphology
Fullwood, Michelle Alison
Segmentation of words containing non-concatenative morphology into their component morphemes, such as Arabic /kita:b/ 'book' into root √ktb and vocalism /i-a:/ (McCarthy, 1979, 1981), is a difficult task due to the size of its search space of possibilities, which grows exponentially as word length increases, versus the linear growth that accompanies concatenative morphology. In this dissertation, I investigate via computational and typological simulations, as well as an artificial grammar experiment, the task of morphological segmentation in root-and-pattern languages, as well as the consequences for majority-concatenative languages such as English when we do not presuppose concatenative segmentation and its smaller hypothesis space. In particular, I examine the necessity and sufficiency conditions of three biases that may be hypothesised to govern the learning of such a segmentation: a bias towards a parsimonious morpheme lexicon with a power-law (Zipfian) distribution over tokens drawn from this lexicon, as has successfully been used in Bayesian models of word segmentation and morphological segmentation of concatenative languages (Goldwater et al., 2009; Poon et al., 2009, et seq.); a bias towards concatenativity; and a bias against interleaving morphemes that are mixtures of consonants and vowels. I demonstrate that while computationally, the parsimony bias is sufficient to segment Arabic verbal stems into roots and residues, typological considerations argue for the existence of biases towards concatenativity and towards separating consonants and vowels in root-and-pattern-style morphology. Further evidence for these as synchronic biases comes from the artificial grammar experiment, which demonstrates that languages respecting these biases have a small but significant learnability advantage.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-140).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120676</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple case assignment : an Amis case study</title>
<link>https://hdl.handle.net/1721.1/120672</link>
<description>Multiple case assignment : an Amis case study
Chen, Tingchun
This dissertation investigates two case-related phenomena: aspect-conditioned differential subject case marking and overt case-stacking, and why case morphology on a DP may correlate with movement of a DP. Guided by data from Amis (Formosan, Austronesian), I argue that case assignment may apply to a single DP more than once and that case-stacking is the overt realisation of multiple case assignment. In Amis, a DP surfaces with all the cases it has been assigned when it is a contrastive topic. Moreover, Amis provides strong evidence for treating case-stacking truly as stacking of multiple cases, instead of stacking a focus marker on top of a case marker. In addition, I propose that case morphology and whether a DP can undergo certain types of movement are both mediated by φ-agreement. In particular, each successful φ-agreement with a DP introduces to the DP a K(ase), a structural correlate of morphological case. This is based on the behaviour of subjects of perfective clauses. Subjects of perfective clauses receive genitive case in a neutral context but appear with an additional nominative case when they are contrastive topics. Moreover, there are more restrictions on moving these subjects, compared with nominative-marked subjects of imperfective clauses. I posit that subjects of perfective clauses become φ-defective as a result of agreeing with perfective Asp(ect). This is manifested in one less case assignment, which results in genitive case on the surface, and inability to be attracted by certain complex A/Ā-movement probes.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 295-307).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120672</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relationship preservation</title>
<link>https://hdl.handle.net/1721.1/120671</link>
<description>Relationship preservation
Branan, Kenyon Garrett
This thesis deals with a number of puzzles related to word order, in which the co-occurrence of two elements in the same clause imposes a restriction on the distribution of these elements. I suggest that elements involved in Agree relationships [Chomsky (2000, 2001)] are subject to a requirement that they be aligned with the left or right edge of a prosodic phrase, following work done in Richards (2016). I argue that there is a restriction on the opaque satisfaction of this requirement, and show that this provides a unified solution to these word order puzzles. Chapters 1 and 2 deal with movement phenomena, primarily in left-headed languages. In chapter 1, I show that some languages allow A-movement of subjects across other DPs, whereas others do not. I note that this appears to be determined by which edge of a prosodic phrase they require phrases in Agree relationships to be aligned with, and show that this is a consequence of the proposed restriction on opaque satisfaction of the alignment requirement. Chapter 2 builds on the results of chapter 1. I show that A-movement of a subject may cross another nominal in all languages, but only if there is a phase boundary between the launch site and landing site, and show how this falls out from the proposed restriction on opacity. I show also that languages that do not allow A-movement of subjects across other DPs display a similar restriction in wh-questions, and argue that this too is a result of the proposed restriction on opacity. Chapters 3 and 4 deal with the distribution of wh-phrases in languages that allow them to remain in-situ. Chapter 3 deals with co-occurrence restrictions between foci and wh-phrases. I argue that these restrictions emerge as a result of a conflict between a prosodic strategy that languages might use to satisfy the alignment requirement, called Grouping, and the proposed restriction on opaque satisfaction of this requirement. Chapter 4 deals with Grouping more generally. 
I show that languages with Grouping have a particular prosodic signature which marks phonological phrases that contain wh-phrases, whereas languages that lack Grouping do not, and explore the consequences of this for the architecture developed in this thesis.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 329-351).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120671</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Subjective modality</title>
<link>https://hdl.handle.net/1721.1/120670</link>
<description>Subjective modality
Boylan, David (David Henry)
This dissertation focuses on subjective or epistemic readings of the modals 'might' and 'should' and considers how they fit into broader theories of modal vocabulary. Chapter 1, 'What the Future "Might" Brings', develops a puzzle about epistemic modals and tense, showing that future tensed epistemic modals are surprisingly marked in cases of predictable forgetting. It gives a solution on which epistemic modals are monotonic: their domains only shrink going forward in time. It is noted that this property is also a feature of circumstantial modals, and a new general picture of how epistemic and historical modality are related is proposed. Chapter 2, 'Putting "Ought"s Together', argues that deontic but not epistemic 'ought's appear to obey the inference pattern Agglomeration. It gives a new semantics for 'ought', where it is an existential quantifier over best propositions, and shows how this semantics together with pragmatic features of deontic contexts can explain the differing inferential properties of deontics and epistemics. Chapter 3, 'More Miners', generalises the now infamous miners problem to epistemic 'ought's. It shows that conservative non-probabilistic solutions do not extend to epistemic cases with the same structure. It solves the problem using probabilistic orderings over propositions and draws some morals about the metasemantics of such orderings and the role of neutrality in the semantics of deontic modals.
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 99-104).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120670</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presuppositions in context</title>
<link>https://hdl.handle.net/1721.1/120669</link>
<description>Presuppositions in context
Aravind, Athulya, Ph. D. Massachusetts Institute of Technology
This dissertation is about the acquisition of presupposition. The specific focus is on the interplay between presuppositional content as hardwired in the semantics of particular expressions and the conversational contexts in which utterances containing those expressions may be used. A series of behavioral experiments examine what children in the preschool age range know about the pragmatic principles governing presupposition, and how they come to acquire this knowledge. The dissertation is organized into two thematic halves. The first half investigates the conditions that govern when presupposing something is appropriate, hence allow for the use of a presupposition triggering expression. Specifically, I ask: do young children know the common ground requirement - the formal requirement that presuppositions be previously established common knowledge - and do they know when and how this requirement can be violated? Two sets of experiments, using two presupposition-carrying expressions with importantly divergent properties (too and the), reveal that children, like adults, generate a default expectation that a presuppositional sentence be uttered to a listener who already takes for granted the presupposition. However, they hold onto this expectation even in circumstances where adult speakers do not. Unlike adults, children do not expect that an otherwise 'neutral' listener might accommodate a speaker's informative presupposition. Together, these findings point to a developmental path where the formal requirement - that presuppositions be presuppositions - is acquired before an understanding that the rule can be bent and how. The second half examines the conditions that make marking of presuppositions obligatory, hence require the use of a presupposition triggering expression. Are children sensitive to Maximize Presupposition! (Heim 1991) as a principle governing competition and utterance choice? The ability to deploy Maximize Presupposition! 
in an adult-like way shows a more protracted developmental trajectory. Moreover, children's ability to rule out presuppositionally weaker sentences seems to vary across competition environments. Taking the non-uniformity in development as signaling non-uniformity in the underlying phenomena, I develop an alternative account for a pair of expressions commonly thought to compete for Maximize Presupposition!: another vs. a. Ultimately, I suggest that Maximize Presupposition! is one of several pragmatic principles that lead to competition and selection of structures imposing the strongest contextual requirement. Children have command of some of these conditions, but not others. The acquisition trajectories are modulated by various factors, including the type of requirement imposed on the context (e.g. that some proposition is salient vs. accepted common belief) and the types of knowledge that are pre-requisites (e.g. knowledge of idiosyncratic properties of the lexicon).
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-199).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120669</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport demand in China : estimation, projection, and policy assessment</title>
<link>https://hdl.handle.net/1721.1/120664</link>
<description>Transport demand in China : estimation, projection, and policy assessment
Kishimoto, Paul Natsuo
China's rapid economic growth in the twenty-first century has driven, and been driven by, concomitant motorization and growth of passenger and freight mobility, leading to greater energy demand and environmental impacts. In this dissertation I develop methods to characterize the evolution of passenger transport demand in a rapidly-developing country, in order to support projection and policy assessment. In Essay #1, I study the role that vehicle tailpipe and fuel quality standards ("emissions standards") can play vis-à-vis economy-wide carbon pricing in reducing emissions of pollutants that lead to poor air quality. I extend a global, computable general equilibrium (CGE) model resolving 30 Chinese provinces by separating freight and passenger transport subsectors, road and non-road modes, and household-owned vehicles; and then linking energy demand in these subsectors to a province-level inventory of primary pollutant emissions and future policy targets. While climate policy yields an air quality co-benefit by inducing shifts away from dirtier fuels, this effect is weak within the transport sector. Current emissions standards can drastically reduce transportation emissions, but their overall impact is limited by transport's share in total emissions, which varies across provinces. I conclude that the two categories of measures examined are complementary, and the effectiveness of emissions standards relies on enforcement in removing older, higher-polluting vehicles from the roads. In Essay #2, I characterize Chinese households' demand for transport by estimating the recently-developed, Exact affine Stone index (EASI) demand system on publicly-available data from non-governmental, social surveys. 
Flexible EASI demands are particularly useful in China's rapidly-changing economy and transport system, because they capture ways that income elasticities of demand, and household transport budgets, vary with incomes; with population and road network densities; and with the supply of alternative transport modes. I find transport demand to be highly elastic (ε_x = 1.46) at low incomes, and that income-elasticity of demand declines but remains greater than unity as incomes rise, so that the share of transport in households' spending rises monotonically from 1.6% to 7.5%; a wider, yet lower range than in some previous estimates. While no strong effects of city-level factors are identified, these and other non-income effects account for a larger portion of budget share changes than rising incomes. Finally, in Essay #3, I evaluate the predictive performance of the EASI demand system by testing the sensitivity of model fit to the data available for estimation, in comparison with the less flexible, but widely used, Almost Ideal demand system (AIDS). In rapidly-evolving countries such as China, survey data without nationwide coverage can be used to characterize transport systems, but the omission of cities and provinces could bias results. To examine this possibility, I estimate demand systems on data subsets and test their predictions against observations for the withheld fraction. I find that simple EASI specifications slightly outperform AIDS under cross-validation; these offer a ready replacement in standalone and CGE applications. However, a trade-off exists between accuracy and the inclusion of policy-relevant covariates when data omit areas with high values of these variables. 
Also, while province-level fixed effects control for unobserved heterogeneity across units that may bias parameter estimates, they increase prediction error in out-of-sample applications, revealing that the influence of local conditions on household transport expenditure varies significantly across China's provinces. The results motivate targeted transport data collection that better spans variation in city types and attributes; and the validation technique aids transport modelers in designing and validating demand specifications for projection and assessment.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.; Cataloged from PDF version of thesis. "Some pages in the original document contain text that runs off the edge of the page"--Disclaimer Notice page.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120664</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Backroom space allocation in retail stores</title>
<link>https://hdl.handle.net/1721.1/120662</link>
<description>Backroom space allocation in retail stores
Das, Lita
Space is one of the most scarce, expensive, and difficult-to-manage resources in urban retail establishments. A typical retail space broadly consists of two areas: the customer-facing frontroom area and the backroom area, which is used for inventory storage and other support activities. While frontrooms have received a considerable amount of attention from both academics and practitioners, backrooms are an often neglected area of retail space management and design. However, the allocation of space to the backroom and its management impact multiple operational aspects of retail establishments. These include in-store labor utilization, delivery schedules, product packaging, and inventory management. Therefore, the backroom area directly affects the performance of the store because it impacts stock-outs, customer service levels, and labor productivity. Moreover, extant literature suggests that backroom-related operations contribute to a large fraction of the total retail supply chain costs. Thus, optimizing the management of backroom spaces is an important lever for store performance improvement. We address the gap in the extant literature related to space management of retail backrooms by investigating the following three questions: First, what is the effect of pack size on inventory levels and space needs in the backroom? Second, how can a given backroom space be efficiently utilized through optimal inventory control? Third, what is the optimal amount of space that should be allocated to the backroom in a given retail establishment? To address the first question, we evaluate the effect of two discrete pack sizes, order pack size (OPS) and storable pack size (SPS), on inventory levels and storage space requirements in the backrooms. While SPS drives the space needs for a given inventory level, OPS drives the amount of excess inventory and, therefore, the space needs. 
Using inventory theory and probability theory, we quantify the amount of excess inventory and the expected stock-out probability for a given OPS in the case of a normally distributed demand. To address the second question, we discuss an inventory-theoretic approach to efficiently manage a given backroom space within a limited-service restaurant. Specifically, we formulate a mathematical optimization model using mixed-integer linear programming with the objective of maximizing store profit. Applying this optimization model to real store data in collaboration with a major US retailer reveals cost implications related to constrained backroom space and the sensitivity of backroom space requirements to changes in OPS and SPS. The proposed model can serve as a decision support tool for various real-world use cases. For instance, the tool can help retailers identify (i) items whose contribution to the store profit does not justify their space needs in the backroom, and (ii) stores that are constrained in their profitability growth by backroom space limitations. To address the third question, we introduce the notion of interdependency between the frontroom and the backroom of a retail establishment. Such interdependencies yield nontrivial trade-offs inherent to the optimal retail space allocation. Demand can be lost due to unavailability of inventory (or inventory stock-out), which is a result of a scarce amount of backroom space, or due to unavailability of sufficient frontroom space (or space stock-out). Furthermore, constrained backroom spaces increase in-store labor cost and the ordering costs incurred per unit of revenue generated in a retail establishment. The strategic decision model formulated in this chapter accounts for revenue, inventory cost, labor cost, and ordering cost to determine the optimal amount of backroom space that should be allocated within a retail establishment. 
Sensitivity analyses with respect to changes in input parameters are used to connect the backroom space allocation and its impact on store profit to the different supply chain levers that can be managed by retailers.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 168-171).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120662</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable validation : new perspectives on adaptive data analysis and cross-validation</title>
<link>https://hdl.handle.net/1721.1/120660</link>
<description>Reliable validation : new perspectives on adaptive data analysis and cross-validation
Elder, Samuel Scott
Validation refers to the challenge of assessing how well a learning algorithm performs after it has been trained on a given data set. It forms an important step in machine learning, as such assessments are then used to compare and choose between algorithms and provide reasonable approximations of their accuracy. In this thesis, we provide new approaches for addressing two common problems with validation. In the first half, we assume a simple validation framework, the holdout set, and address an important question of how many algorithms can be accurately assessed using the same holdout set, in the particular case where these algorithms are chosen adaptively. We do so by first critiquing the initial approaches to building a theory of adaptivity, then offering an alternative approach and preliminary results within this approach, all geared towards characterizing the inherent challenge of adaptivity. In the second half, we address the validation framework itself. Most common practice does not just use a single holdout set, but averages results from several, a family of techniques known as cross-validation. In this work, we offer several new cross-validation techniques with the common theme of utilizing training sets of varying sizes. This culminates in hierarchical cross-validation, a meta-technique for using cross-validation to choose the best cross-validation method.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-109).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120660</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical limits of graphical channel models and a semidefinite programming approach</title>
<link>https://hdl.handle.net/1721.1/120659</link>
<description>Statistical limits of graphical channel models and a semidefinite programming approach
Kim, Chiheon
Community recovery is a major challenge in data science and computer science. The goal in community recovery is to find the hidden clusters from given relational data, which is often represented as a labeled hypergraph where nodes correspond to items needing to be labeled and edges correspond to observed relations between the items. We investigate the problem of exact recovery in the class of statistical models which can be expressed in terms of graphical channels. In a graphical channel model, we observe noisy measurements of the relations between k nodes while the true labeling is unknown to us, and the goal is to recover the labels correctly. This generalizes both the stochastic block models and the spiked tensor models for principal component analysis, which have gained much interest over the last decade. We focus on two aspects of exact recovery: statistical limits and efficient algorithms achieving the statistical limit. For the statistical limits, we show that the achievability of exact recovery is essentially determined by whether we can recover the label of one node given the other nodes' labels with fairly high probability. This phenomenon was observed by Abbe et al. for generic stochastic block models, and called "local-to-global amplification". We confirm that local-to-global amplification indeed holds for generic graphical channel models, under some regularity assumptions. As a corollary, the threshold for exact recovery is explicitly determined. For algorithmic concerns, we consider two examples of graphical channel models: (i) the spiked tensor model with additive Gaussian noise, and (ii) the generalization of the stochastic block model to k-uniform hypergraphs. We propose a strategy which we call "truncate-and-relax", based on a standard semidefinite relaxation technique. We show that in these two models, the algorithm based on this strategy achieves exact recovery up to a threshold which orderwise matches the statistical threshold. 
We complement this by showing the limitation of the algorithm.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 205-213).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120659</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an integrated understanding of neural networks</title>
<link>https://hdl.handle.net/1721.1/120658</link>
<description>Towards an integrated understanding of neural networks
Rolnick, David (David S.)
Neural networks underpin both biological intelligence and modern AI systems, yet there is relatively little theory for how the observed behavior of these networks arises. Even the connectivity of neurons within the brain remains largely unknown, and popular deep learning algorithms lack theoretical justification or reliability guarantees. This thesis aims towards a more rigorous understanding of neural networks. We characterize and, where possible, prove essential properties of neural algorithms: expressivity, learning, and robustness. We show how observed emergent behavior can arise from network dynamics, and we develop algorithms for learning more about the network structure of the brain.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-136).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120658</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applied math in geophysical fluids : partially trapped wave problems and mining plumes</title>
<link>https://hdl.handle.net/1721.1/120657</link>
<description>Applied math in geophysical fluids : partially trapped wave problems and mining plumes
Rzeznik, Andrew Joseph
The first portion of this work focuses on leaky modes in the atmospheric sciences. Leaky modes (related to quasi-modes, scattering resonances, and the singularity expansion method) are discrete, oscillatory and decaying modes that arise in conservative systems where waves are partially trapped. By replacing the infinite domain with a finite domain and appropriate boundary conditions it is possible in many cases to construct a complete basis for the solution in terms of these modes. Formulating such effective boundary conditions requires a notion of the direction of propagation of the waves. For this purpose we introduce a generalization of the concept of group speed for exponentially decaying but conservative waves. This is found via an extended modulation argument and a generalization of Whitham's Average Lagrangian theory. The theory also shows that a close relationship exists between the branch cuts of the dispersion relation and the propagation direction, and is used to create spectral decompositions for simple problems in internal gravity waves. The last chapter considers deep-sea nodule mining operations, which potentially involve plans for discharge plumes to be released into the water column by surface operation vessels. We consider the effects of non-uniform, realistic stratifications with vertical shear on forced compressible plumes. The plume model is developed to account for the influence of thermal conduction through the discharge pipe and an initial adjustment phase. We investigate the substantial role of compressibility, for which a dimensionless number is introduced to determine its importance compared to that of the background stratification. 
Our results show that (i) small-scale stratification features can have a significant impact, (ii) in a static ambient there exists a discharge flow rate that minimizes the plume vertical extent, (iii) the ambient velocity profile plays an important role in determining the final plume scale and dilution factor, and (iv) for a typical plume the dilution factor is expected to be several hundred to a thousand.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-132).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120657</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meditations in an emergency : social scientists and the problem of conflict in Cold War America</title>
<link>https://hdl.handle.net/1721.1/120645</link>
<description>Meditations in an emergency : social scientists and the problem of conflict in Cold War America
Burks, Marie Elizabeth
Through the mode of conceptual history, this dissertation examines some of the forms dissent could take within academic social science in the United States from roughly 1945-1970. The concept in question is "conflict." There are many stories one could tell about this concept and its transformations in postwar American social science, but in this dissertation I focus on one in particular: how certain social scientists sought to frame conflict as a problem of knowledge, by stretching the concept to fit the global proportions of the bipolar world that seemed to have emerged from World War II, and then using that conceptualization to oppose the Cold War. The dissertation first considers a specific moment of conceptual change, when some social scientists sought to redefine "conflict" in the immediate aftermath of World War II, so that it would be capacious enough to describe conflict at all levels of analysis, from the intrapersonal to the international. From there, it follows a cadre of social scientists who used that novel conceptualization to build an intellectual movement around a new journal and research center starting in the mid-1950s. The scholars who participated in that movement, known as "peace research" or "conflict resolution," endeavored to construct a "general theory of conflict," which they would then employ to challenge the notion that the Cold War was inevitable. The language of mid-century social science was the idiom in which they expressed their dissent. Although this was to become an international movement, this dissertation focuses on its American incarnation, which came to fruition at the University of Michigan in Ann Arbor beginning around 1957. 
The dissertation then looks closely at how two of the leading theorists of that movement modeled conflict in the early 1960s, and considers the ethical and political impulses that animated their work, demonstrating that it was possible for some intellectuals to inhabit the dual role of academic social scientist and social critic in the early 1960s. It concludes with a brief set of reflections on the United States Institute of Peace, an independent federal institute established in 1984 to embody the dream of "conflict resolution."
Thesis: Ph. D. in History, Anthropology, and Science, Technology and Society (HASTS), Massachusetts Institute of Technology, Program in Science, Technology and Society, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 194-206).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120645</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computationally efficient offline demand calibration algorithms for large-scale stochastic traffic simulation models</title>
<link>https://hdl.handle.net/1721.1/120639</link>
<description>Computationally efficient offline demand calibration algorithms for large-scale stochastic traffic simulation models
Zhang, Chao, Ph. D. Massachusetts Institute of Technology
This thesis introduces computationally efficient, robust, and scalable calibration algorithms for large-scale stochastic transportation simulators. Unlike a traditional "black-box" calibration algorithm, a macroscopic analytical network model is embedded through a metamodel simulation-based optimization (SO) framework. The computational efficiency is achieved through the analytical network model, which provides the algorithm with low-fidelity, analytical, differentiable, problem-specific structural information and can be efficiently evaluated. The thesis starts with the calibration of low-dimensional behavioral and supply parameters; it then addresses a challenging high-dimensional origin-destination (OD) demand matrix calibration problem, and finally enhances the OD demand calibration by taking advantage of additional high-resolution traffic data. The proposed general calibration framework is suitable for addressing a broad class of calibration problems and has the flexibility to be extended to incorporate emerging data sources. The proposed algorithms are first validated on synthetic networks and then tested through a case study of a large-scale real-world network with 24,335 links and 11,345 nodes in the metropolitan area of Berlin, Germany. Case studies indicate that the proposed calibration algorithms are computationally efficient, improve the quality of solutions, and are robust both to the initial conditions and to the stochasticity of the simulator, under a tight computational budget. Compared to a traditional "black-box" method, the proposed method improves the computational efficiency by an average of 30%, as measured by the total computational runtime, and simultaneously yields an average of 70% improvement in the quality of solutions, as measured by the objective function estimates, for the OD demand calibration. Moreover, the addition of intersection turning flows further enhances performance by improving the fit to field data by an average of 20% (resp.
14%), as measured by the root mean square normalized (RMSN) errors of traffic counts (resp. intersection turning flows).
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 168-181).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120639</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of vegetation-generated turbulence on sediment transport</title>
<link>https://hdl.handle.net/1721.1/120638</link>
<description>Impacts of vegetation-generated turbulence on sediment transport
Yang, Qingjun (Judy Qingjun)
Aquatic vegetated habitats, including wetlands and mangroves, are disappearing at an annual rate of 1 to 7%. These ecosystems provide habitats important to fisheries, enhance water quality by filtering nutrients from run-off, and also protect coastal regions from storm surges and waves. To mitigate the loss of these habitats, restoration projects import sediment to eroded areas. The success of the restoration depends on its ability to retain sediment; therefore, restoration design requires a good understanding of sediment transport within vegetated landscapes. However, there is currently no quantitative model for sediment transport in vegetated regions, and many restoration projects have failed due to unanticipated erosion from the restored regions. The goal of this thesis is to develop a predictive model for sediment transport in regions with vegetation. First, the effect of vegetation on the critical condition at which sediment starts to move was explored. To identify the critical condition, an imaging system was designed to track the trajectories of individual moving grains through running water. The critical flow velocity (U[subscript crit]) above which sediment starts to move was identified from the tracked sediment trajectories for both bare (non-vegetated) and vegetated regions. The experimental results showed that for the same type of sediment, U[subscript crit] decreased with increasing vegetation solid volume fraction. This was attributed to the vegetation-generated turbulence, which induced a local, vertical, adverse pressure gradient, or lift force, on the sediment grain, facilitating sediment transport. In contrast, the turbulent kinetic energy (k[subscript t]) was found to be roughly constant at the critical condition for different vegetation volume fractions, suggesting that k[subscript t] is a more universal metric than [tau] for predicting the critical condition of sediment transport.
A k[subscript t]-based model was developed to predict U[subscript crit] for channels with different vegetation solid volume fractions. The turbulence-based model successfully predicted U[subscript crit] for both bare and vegetated channels, providing a useful tool for ecologists to predict whether a vegetated landscape will erode or not. Second, the impact of vegetation on the bed load transport rate was explored. A system that allows sediment to be bypassed, a cart to distribute sediment, a method that measures the dry weight of wet sand without drying the sediment, a topography system, and a sediment trajectory imaging system were designed. The bed load transport rate (Q[subscript s]) was measured for both bare channels and channels with different vegetation solid volume fractions ([phi]) under different flow rates. At the same [tau], the measured Q[subscript s] increased with increasing [phi], suggesting that vegetation-generated turbulence, which also increased with increasing [phi], was augmenting the bed load transport. At the same near-bed turbulent kinetic energy, k[subscript t], the Q[subscript s] measured in both bare and vegetated channels agreed within uncertainty, suggesting that k[subscript t] may be a more universal predictor of Q[subscript s] than [tau]. The Einstein-Brown [tau]-based bed load transport model was reinterpreted as a k[subscript t]-based model. The new k[subscript t]-based model predicted the Q[subscript s] measurements for both bare and vegetated channels. The dependence of Q[subscript s] on k[subscript t] was explained by the statistics of individual grain motion, which showed that Q[subscript s] was predominantly controlled by the number of grains in motion, which correlated with k[subscript t]. The proposed k[subscript t]-based sediment transport model can be used to simulate large-scale landscape evolution and to help ecologists design better coastal restoration strategies.
Third, the impacts of vegetation on bed-form characteristics and migration rate were studied. After the measured bed load transport rate converged to an equilibrium value, the bed topography was scanned by a laser topography system. Bed-forms with height less than 2 cm were observed and characterized as ripples. For low vegetation solid volume fraction ([phi] &lt;= 0.012), the ripple wavelength was constrained by stem spacing and the ripple height increased with increasing [phi]. However, at the highest vegetation solid volume fraction ([phi] = 0.025), the ripple height was comparable to the grain size, indicating that a plane bed had formed. The ripple migration speed and the bed load flux associated with the migrating ripples increased with increasing vegetation solid volume fraction for [phi] &lt;= 0.012. However, the fraction of the bed load flux carried by migrating ripples decreased with increasing [phi], suggesting that vegetation facilitated the formation of sheet flow. The impacts of vegetation on bed-forms presented here will provide a potential tool for geologists to infer the occurrence of vegetation-related events from geomorphological records.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 179-188).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120638</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Personalization of future urban mobility</title>
<link>https://hdl.handle.net/1721.1/120637</link>
<description>Personalization of future urban mobility
Song, Xiang, Ph. D. Massachusetts Institute of Technology
In the past few years, we have been experiencing rapid growth of new mobility solutions fueled by a myriad of innovations in technologies such as automated vehicles and in business models such as shared-ride services. The emerging mobility solutions are often required to be profitable, sustainable, and efficient while serving the heterogeneous needs of mobility consumers. Given high-resolution consumer mobility behavior collected from smartphones and other GPS-enabled devices, the operational management strategies for future urban mobility can be personalized to serve various system objectives. This thesis focuses on the personalization of future urban mobility through a personalized menu optimization model. The model, built upon individual consumers' choice behavior, generates a personalized menu for app-based mobility solutions. It integrates behavioral modeling of consumer mobility choice with optimization objectives. Individual choice behavior is modeled through logit mixture, and the parameters are estimated with a hierarchical Bayes (HB) procedure. In this thesis, we first present an enhancement to the HB procedure with alternative priors for covariance matrix estimation in order to improve estimation performance. We also evaluate the benefits of personalization through a Boston case study based on real travel survey data. In addition, we present a sequential personalized menu optimization algorithm that addresses the trade-off between exploration (learning the uncertain demand for menus) and exploitation (offering the best menu based on current knowledge). We illustrate the benefits of exploration under different conditions, including different types of heterogeneity.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 91-97).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120637</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-agent real-time decision making in water resources systems</title>
<link>https://hdl.handle.net/1721.1/120636</link>
<description>Multi-agent real-time decision making in water resources systems
Sahu, Reetik Kumar
Optimal utilization of natural resources such as water, wind, and land over extended periods of time requires a carefully designed framework coupling decision making and a mathematical abstraction of the physical system. On one hand, the choice of the decision strategy can set limits/bounds on the maximum benefit that can be extracted from the physical system. On the other hand, the mathematical formulation of the physical system determines the limitations of such strategies when applied to real physical systems. The nuances of decision making and abstraction of the physical system are illustrated with two classical water resource problems: optimal hydropower reservoir operation and competition for a common pool groundwater source. Reservoir operation is modeled as a single-agent stochastic optimal control problem where the operator (agent) negotiates a firm power contract before operations begin and adjusts the reservoir release during operations. A probabilistic analysis shows that predictive decision strategies such as stochastic dynamic programming and model predictive control give better performance than standard deterministic operating rules. Groundwater competition is modeled as a multi-agent dynamic game where each farmer (agent) aims to maximize his/her personal benefit. The game analysis shows that uncooperative competition for the resource reduces economic efficiency somewhat with respect to the cooperative, socially optimal behavior. However, the efficiency reduction is relatively small compared to what might be expected from incorrect assumptions about uncertain factors such as future energy and crop prices. Spatially lumped and distributed models of the groundwater system give similar pictures of the inefficiencies that result from uncooperative behavior. The spatially distributed model also reveals the important roles of the geometry and density of the pumping well network.
Overall, the game analysis provides useful insight about the factors that make cooperative groundwater management beneficial in particular situations.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 77-83).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120636</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient operations of smart highways : platooning, ramp metering, and incident management</title>
<link>https://hdl.handle.net/1721.1/120635</link>
<description>Resilient operations of smart highways : platooning, ramp metering, and incident management
Jin, Li, Ph. D. Massachusetts Institute of Technology
Highway systems have witnessed a significant modernization in recent years due to the deployment of traffic sensing and control capabilities. In addition, the ongoing developments in connected and autonomous vehicle technology are poised to enable advanced capabilities such as platooning and vehicle-to-infrastructure communications. On one hand, these advancements offer new opportunities for improving the operational efficiency of highway systems. On the other hand, most highway system operators still face significant challenges in ensuring adequate performance under disruptions such as incidents and other capacity-reducing events, as well as demand fluctuations. Furthermore, the inherent vulnerabilities of cyber-physical components in smart highway systems are prone to exploitation by adversarial agents, who can introduce strategic disruptions. Thus, ensuring the resiliency of highway operations is a principal concern of system operators. In this thesis, we address the above-mentioned challenges by developing a system-theoretic approach for maintaining resilient highway operations under a broad range of disruptions, modeled as stochastic perturbations in highway capacity or traffic demand. In particular, we focus on three types of highway operations: vehicle platooning, ramp metering, and capacity-aware routing/demand management. Our approach relies on (i) modeling partially automated traffic flow dynamics under disruptions as stochastically switching dynamical systems, (ii) analyzing their long-time properties (stability and/or convergence), and (iii) designing traffic control schemes that improve system throughput with stability guarantees. We demonstrate the application of our approach to several realistic situations, ranging from capacity perturbations at incident hotspots to moving bottlenecks created by heavy-duty vehicles to stochastic arrivals/progression of autonomous vehicle platoons.
To model traffic flow dynamics under disruptions, we extend classical macroscopic traffic flow/queuing models by combining them with Markovian switches in flow/queuing dynamics that capture the stochasticity in the occurrence/clearance of disruptions. Specifically, we propose two models: the Piecewise-Deterministic Queuing (PDQ) model and the Stochastic Switching Cell Transmission Model (SS-CTM). The PDQ model is the most basic model that captures the dynamic evolution of a traffic queue upstream of a highway bottleneck under perturbations in capacity or demand. We use this model to analyze link-level capacity management schemes and design capacity-aware routing schemes for parallel-route highway systems. The SS-CTM captures the spatial propagation of a disturbance created by capacity perturbations, and is useful for identifying the congestion bottlenecks induced by these perturbations. We adopt this model to analyze the impact of perturbations on the on-ramp queues and highway throughput, as well as to design new ramp control schemes with improved performance guarantees. Our results on the stability analysis of PDQ and SS-CTM utilize more general results on the stability of continuous-time Markov processes. We refine them for the purpose of evaluating the boundedness of traffic queues upstream of highway bottlenecks and on the ramps. Our key contribution is a computationally tractable approach for verifying the classical Foster-Lyapunov drift condition over a finite subset of states, which happen to be the vertices of an invariant set for the stochastic traffic dynamics. This requires us to exploit the long-time properties of the PDQ and SS-CTM: in particular, the cooperativity of the traffic flow dynamics and the ergodicity of the Markov chain that models disruptions. Our analysis approach enables us to estimate how performance metrics such as throughput and travel time change with the location and intensity (rate) of disruptions.
We also extend our results to the problem of designing traffic control schemes that improve system throughput under perturbations, while maintaining stable traffic queues. This leads us to identify somewhat surprising ways to prioritize and route traffic on real-world highway systems, and relate them to important operational capabilities such as lane control on automated highways, speed regulation of platoons, incident-aware routing, and stabilization of on-ramp queues. Finally, we also consider the modeling and impact evaluation of security disruptions. We report an initial game-theoretic model that captures an emerging security concern in multi-priority highway systems. The model is relevant to study the incentives of strategic misbehavior by individual vehicles who can exploit the security vulnerabilities in vehicle-to-infrastructure communications and impact the highway operations. We also discuss strategic response to cyber-physical attacks on smart highway infrastructure for timely recovery of compromised traffic links.
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 185-194).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120635</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Influences of nutrition and pathogenicity from a microbial diet on immunity and longevity in Caenorhabditis elegans</title>
<link>https://hdl.handle.net/1721.1/120633</link>
<description>Influences of nutrition and pathogenicity from a microbial diet on immunity and longevity in Caenorhabditis elegans
Fletcher, Marissa (Marissa Ann)
Interactions with the environment play a critical role in animal physiology and evolution. Animals alter their cellular functions and organismal behavior to maximize survival in a given setting, or make the decision to seek out a new environment that is more compatible with life. Responses to external conditions can be carried out via sensory perception of environmental cues, or as a result of decreased cellular energy or tissue damage. Both modes of modification present complex biological questions as to how animals recognize the need to adapt, make the decision to adapt, and relay that decision into a physical outcome. This thesis focuses on how we can answer some of these questions through study of the model organism Caenorhabditis elegans and its interactions with its microbial environment, which serves as both a nutrient source and a potential pathogenic threat. In Chapter One, I provide an overview of aging and infection in C. elegans. Many of the pathways involved in regulating longevity and immunity in C. elegans are conserved in mammals, and work in this system has revealed a surprising amount of intersection between these two seemingly separate matters. Chapter Two focuses on how a TGF[beta] neuroendocrine signaling pathway contributes to lifespan extension as a result of reduced nutrient availability in adulthood, commonly known as dietary restriction. Chapter Three explores how a bZIP transcription factor works to regulate the response to pathogenic bacteria downstream of p38 MAP Kinase signaling. In Chapter Four, I present ideas for exploring the future directions of these two projects, focusing on untangling how C. elegans responds to a changing microbial environment.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120633</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual microfluidics : a novel single-cell technology based on diffusion-restricted reaction that makes high-quality low-input genomic research accessible</title>
<link>https://hdl.handle.net/1721.1/120632</link>
<description>Virtual microfluidics : a novel single-cell technology based on diffusion-restricted reaction that makes high-quality low-input genomic research accessible
Xu, Liyi, Ph. D. Massachusetts Institute of Technology
The extensive genomic diversity of complex systems, such as the human gut microbiome and the evolution of human cancer, has been revealed with advances in DNA sequencing. But we are still at an early stage in understanding this genomic diversity to expand our knowledge in biology and for biomedical applications. Taking the diverse human gut microbiome as an example, little is known about the rapid exchange of antibiotic resistance genes and virulence factors as part of the mobile gene flow between the microbes in the gut. Understanding such heterogeneous systems often involves studying the nature and behavior of the individual cells that constitute the system and their interactions. However, it is technically challenging to probe the genomic material of cells, the smallest units of life, and to amplify single genomes for sequencing. Current single-cell technologies require complex instrumentation, and the data quality is often confounded by biased genome coverage and chimera artifacts. We address these challenges with a new single-cell technology paradigm to make high-quality low-input genomic research accessible to scientists. We developed hydrogel-based virtual microfluidics as a simple and robust platform for the compartmentalization of nucleic acid amplification reactions. We applied whole genome amplification (WGA) to purified DNA molecules, cultured bacterial cells, human gut microbiome samples, and human cell lines in the virtual microfluidics system. We demonstrated whole-genome sequencing of single-cell WGA products with excellent coverage uniformity and markedly reduced chimerism compared with traditional methods. Additionally, we applied single-cell sequencing to identify horizontally transferred genes between the microbes in the gut and revealed the selective pressure of human population activities in shaping the mobile gene pools.
Altogether, we expect virtual microfluidics will find application as a low-cost digital assay platform and as a high-throughput platform for single-cell sample preparation. This work offers a significant improvement in making high-quality low-input genomic research accessible to scientists in microbiology and oncology.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-124).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120632</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of selective peptide inhibitors of anti-apoptotic Bfl-1 using experimental screening, structure-based design, and data-driven modeling</title>
<link>https://hdl.handle.net/1721.1/120631</link>
<description>Design of selective peptide inhibitors of anti-apoptotic Bfl-1 using experimental screening, structure-based design, and data-driven modeling
Jenson, Justin Michael
Protein-protein interactions are central to all biological processes. Designer reagents that selectively bind to proteins and inhibit their interactions can be used to probe protein interaction networks, discover druggable targets, and generate potential therapeutic leads. Current technology makes it possible to engineer proteins and peptides with desirable interaction profiles using carefully selected sets of experiments that are customized for each design objective. There is great interest in improving the protein design pipeline to create protein binders more efficiently and against a wider array of targets. In this thesis, I describe the design and development of selective peptide inhibitors of anti-apoptotic Bcl-2 family proteins, with an emphasis on targeting Bfl-1. Anti-apoptotic Bcl-2 family proteins bind to short, pro-apoptotic BH3 motifs to support cellular survival. Overexpression of Bfl-1 has been shown to promote cancer cell survival and the development of chemoresistance. Prior work suggests that selective inhibition of Bfl-1 can induce cell death in Bfl-1 overexpressing cancer cells without compromising healthy cells that also rely on anti-apoptotic Bcl-2 proteins for survival. Thus, Bfl-1-selective BH3 mimetic peptides are potentially valuable for diagnosing Bfl-1 dependence and can serve as leads for therapeutic development. In this thesis, I describe three distinct approaches to designing potent and selective Bfl-1 inhibitors. First, I describe the design and screening of libraries of variants of BH3 peptides. I show that peptides from this screen bind in a previously unobserved BH3 binding mode and have large margins of specificity for Bfl-1 when tested in vitro and in cultured cells. Second, I describe a computational model of the specificity landscape of three anti-apoptotic Bcl-2 proteins, including Bfl-1. This model was derived from high-throughput affinity measurements of thousands of peptides from BH3 libraries.
I show that this model is useful for designing peptides with desirable interaction profiles within a family of related proteins. Third, I describe the use of a scoring potential built on the amino acid frequencies of well-defined structural motifs compiled from the Protein Data Bank to design novel BH3 peptides targeting Bfl-1.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120631</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emotion as information : inferring the unobserved causes of others' emotional expressions</title>
<link>https://hdl.handle.net/1721.1/120629</link>
<description>Emotion as information : inferring the unobserved causes of others' emotional expressions
Wu, Yang, Ph. D. Massachusetts Institute of Technology
Research in the domain of cognitive science has tended to neglect emotions. In my thesis, I take several steps to fill this gap by looking at people's representation of emotions and its connection to other representations typically studied in cognitive science. I argue that people have an intuitive theory of emotion that is causally intertwined with their understanding of the physical and social world broadly. This intuitive theory allows us to use observed emotional cues as a window to recover unobserved information about the world. I study these abilities in both adults and children, to gain insight into the most fundamental representations supporting such abilities. I also use computational models to capture the hierarchical, causal structure of this intuitive theory of emotion. In Study 1, I show that infants as young as 12-17 months can discriminate diverse within-valence emotional expressions elicited by funny, exciting, adorable, delicious, and sympathetic events, and map them onto their probable causes. In Study 2.1, I show that preschoolers can recover rich mental state information from observed emotional expressions. When the valence of someone's face changes between anticipated and actual outcomes, children by age five gain insight into what she wants and believes about the world. Study 2.2 bridges theory of mind research, accounts of emotion attribution, and formal modeling to provide a formal account of how people jointly infer beliefs and desires from emotional expressions. Study 3 tests children's understanding of social display rules. By middle childhood, children can use one person's emotional expressions, regulated by a social context, to infer the mental states of another. Altogether, these findings suggest that emotional cues provide a valuable entrée into the unseen world. Not only adults, but also children, can use observed emotional expressions to infer their external causes and the internal mental states of other people.
Although this intuitive theory of emotion may not necessarily mirror the actual processes by which emotions are generated, it supports rational inferences much of the time, and it may be formed early in development. I see this work as bridging gaps across disciplines and helping advance the cognitive science of emotion understanding.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 188-214).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120629</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dopaminergic modulation of prefrontal cortex subpopulations</title>
<link>https://hdl.handle.net/1721.1/120628</link>
<description>Dopaminergic modulation of prefrontal cortex subpopulations
Vander Weele, Caitlin Miya
Despite abundant evidence that dopamine modulates medial prefrontal cortex (mPFC) activity to mediate diverse behavioral functions, the precise circuit computations remain elusive. One potentially unifying theoretical model by which dopamine can modulate functions from working memory to schizophrenia is that dopamine serves to increase the signal-to-noise ratio in mPFC neurons, where neuronal activity conveying sensory information (signal) is amplified relative to spontaneous firing (noise). To connect theory to biology, we lack direct evidence for dopaminergic modulation of signal-to-noise in neuronal firing patterns in vivo and a mechanistic explanation of how such computations would be transmitted downstream to instruct specific behavioral functions. Here, we demonstrate that dopamine increases the signal-to-noise ratio in mPFC neurons projecting to the dorsal periaqueductal gray (dPAG) during the processing of an aversive stimulus. First, using electrochemical approaches, we reveal the precise time course of tail pinch-evoked dopamine release in the mPFC. Second, we show that dopamine signaling in the mPFC biases behavioral responses to punishment-predictive stimuli, rather than reward-predictive cues. Third, in contrast to the well-characterized mPFC-NAc projection, we show that activation of mPFC-dPAG neurons is sufficient to drive place avoidance and defensive behaviors. Fourth, to determine the natural dynamics of individual mPFC neurons, we performed single-cell projection-defined microendoscopic calcium imaging to reveal a robust preferential excitation of mPFC-dPAG, but not mPFC-NAc, neurons to aversive stimuli. Finally, photostimulation of VTA dopamine terminals in the mPFC revealed an increase in the signal-to-noise ratio in mPFC-dPAG neuronal activity during the processing of aversive, but not rewarding, stimuli. Together, these data unveil the utility of dopamine in the mPFC to effectively filter sensory information in a valence-specific manner.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis. Page 176 blank.; Includes bibliographical references (pages 159-175).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120628</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Theory of Mind : insights from FMRI studies of children</title>
<link>https://hdl.handle.net/1721.1/120627</link>
<description>Developing a Theory of Mind : insights from FMRI studies of children
Richardson, Hilary L. (Hilary Leigh)
Social cognitive abilities undergo drastic changes throughout childhood. Theory of mind (ToM), the ability to reason about the mental states of others, is a core social cognitive ability that is crucial for navigating the social world. A majority of prior fMRI research on ToM has characterized the functional response in brain regions that are preferentially recruited to reason about the minds of others in adults. By contrast, a majority of prior developmental research on ToM has used behavioral methods to describe milestones in theory of mind acquisition in early childhood. The experiments described in this thesis draw heavily from these two approaches, in order to link them: what is the relationship between the development of functionally selective responses in ToM brain regions, and developmental changes in ToM reasoning in childhood? Chapter 1 describes two longitudinal fMRI experiments that test for developmental change and stable individual differences in neural and behavioral measures of ToM, and for predictive relationships between the two measures. Chapter 2 describes a large, cross-sectional study that measures the development of the cortical dissociation between brain regions that process minds (the ToM network) and those that process bodies (the Pain Matrix). Chapter 2 additionally provides insight into the neural correlates of passing the false-belief task - the best known developmental milestone in ToM reasoning. Chapter 3 uses a publicly available dataset in order to provide confirmatory evidence for the results described in Chapter 2, and clarifies the relationship between stimulus-driven functional responses, and inter-region correlations within and between ToM and pain brain regions. Chapter 4 characterizes ToM development, neurally and behaviorally, in children who have experienced delayed access to sign language. Finally, Chapter 5 provides a discussion of challenges and strategies in developmental cognitive neuroscience research. 
This interdisciplinary thesis has three broad goals: 1) to characterize kinds of neural change that support and/or predict behavioral improvements in theory of mind, 2) to gain novel insight into the nature of specific behavioral milestones in social reasoning, and 3) to better understand the impact of experience (e.g., linguistic input) on ToM development, behaviorally and neurally.
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120627</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-level visual object representation in juvenile and adult primates</title>
<link>https://hdl.handle.net/1721.1/120626</link>
<description>High-level visual object representation in juvenile and adult primates
Seibert, Darren (Darren Allen)
Despite being reflexive, primate view-invariant object recognition is a complex computational task. These computations are thought to reside in the ventral visual stream, specifically culminating in inferior temporal (IT) cortex. Recent research in machine learning has made great progress in modeling primate ventral visual stream computations. While the end result of current machine learning approaches produces models that are highly predictive of the adult state of the ventral stream, the learning approaches themselves are not biologically plausible, requiring tens of thousands to millions of human-labeled training points. Understanding primate visual development is therefore not only interesting from the perspective of neuroscience, but also has practical value in building more robust learning algorithms capable of functioning in domains where large amounts of human-labeled training information may be difficult or impossible to create. Better learning algorithms may also produce agents capable of adapting and behaving in the world not unlike humans. This thesis first describes work on predicting visual responses across the human ventral stream using convolutional neural networks (CNNs). We then describe a set of natural image statistics automatically incorporated into high-performing CNNs from supervised training; it is possible that primate development incorporates these or similar natural image statistics into its synaptic strengths. Finally, we describe the first large-scale characterization of IT in 19-32 week old macaques. While we find longer response latencies in these younger animals, we do not find any differences in representation between adults and juveniles, suggesting that, at 19-32 weeks of age, IT already supports robust object recognition consistent with adults. Our data provide an upper limit on the amount of training data needed to reach adult-level performance: approximately 2,800 hours of waking visual experience.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-130).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120626</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>How does the primate ventral visual stream causally support core object recognition?</title>
<link>https://hdl.handle.net/1721.1/120625</link>
<description>How does the primate ventral visual stream causally support core object recognition?
Rajalingham, Rishi
Primates are able to rapidly, accurately, and effortlessly perform the computationally difficult visual task of invariant object recognition - the ability to discriminate between different objects in the face of high variation in object viewing parameters and background conditions. This ability is thought to rely on the ventral visual stream, a hierarchy of visual cortical areas culminating in inferior temporal (IT) cortex. In particular, decades of research strongly suggest that the population of neurons in IT supports invariant object recognition behavior. However, direct causal evidence for this decoding hypothesis has been equivocal to date, especially beyond the specific case of face-selective sub-regions of IT. This research aims to directly test the general causal role of IT in invariant object recognition. To do so, we first characterized human and macaque monkey behavior over a large behavioral domain consisting of binary discriminations between images of basic-level objects, establishing behavioral metrics and benchmarks for computational models of this behavior. This work suggests that, in the domain of basic-level core object recognition, humans and monkeys are remarkably similar in their behavioral responses, while leading models of the visual system significantly diverge from primate behavior. We then reversibly inactivated individual, millimeter-scale regions of IT via injection of muscimol while monkeys performed several interleaved binary object discrimination tasks. We found that inactivating different millimeter-scale regions of primate IT resulted in different patterns of object recognition deficits, each predicted by the local region's neuronal selectivity. Our results provide causal evidence that IT directly underlies primate object recognition behavior in a topographically organized manner. 
Taken together, these results establish quantitative experimental constraints for computational models of the ventral visual stream and object recognition behavior.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-173).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120625</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting more wisdom from the crowd</title>
<link>https://hdl.handle.net/1721.1/120624</link>
<description>Extracting more wisdom from the crowd
McCoy, John Patrick
In many situations, from economists predicting unemployment rates to chemists estimating fuel safety, individuals have differing opinions or predictions. We consider the wisdom-of-the-crowd problem of aggregating the judgments of multiple individuals on a single question, when no outside information about their competence is available. Many standard methods select the most popular answer, after correcting for variations in confidence. Using a formal model, we prove that any such method can fail even if based on perfect Bayesian estimates of individual confidence, or, more generally, on Bayesian posterior probabilities. Our model suggests a new method for aggregating opinions: select the answer that is more popular than people predict. We derive theoretical conditions under which this new method is guaranteed to work, and generalize it to questions with more than two possible answers. We conduct empirical tests in which respondents are asked for both their own answer to some question and their prediction about the distribution of answers given by other people, and show that our new method outperforms majority and confidence-weighted voting in a range of domains including geography and trivia questions, laypeople and professionals judging art prices, and dermatologists evaluating skin lesions. We develop and evaluate a probabilistic generative model for crowd wisdom, including applying it across questions to determine individual respondent expertise and comparing it to various Bayesian hierarchical models. We extend our new crowd wisdom method to operate on domains where the answer space is unknown in advance, by having respondents predict the most common answers given by others, and discuss performance on a cognitive reflection test as a case study of this extension.
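The aggregation rule described above, selecting the answer that is more popular than people predict, can be sketched in a few lines. The function name and the toy vote counts below are illustrative, not taken from the thesis:

```python
def surprisingly_popular(votes, predicted_shares):
    """Pick the answer whose actual vote share most exceeds the share
    that respondents, on average, predicted it would receive.

    votes: list of chosen answers, one per respondent
    predicted_shares: dict mapping each answer to the mean fraction of
        votes that respondents predicted it would get
    """
    answers = list(predicted_shares)
    n = len(votes)
    actual = {a: votes.count(a) / n for a in answers}
    # "Surprise" is the gap between actual and predicted popularity;
    # the surprisingly popular answer maximizes this gap.
    return max(answers, key=lambda a: actual[a] - predicted_shares[a])

# Toy example: a majority answers "no", yet respondents predict "no"
# will be even more common than it actually is, so "yes" is the
# surprisingly popular answer despite losing the majority vote.
votes = ["no"] * 6 + ["yes"] * 4
predicted = {"yes": 0.25, "no": 0.75}
print(surprisingly_popular(votes, predicted))  # "yes"
```

Note how this rule can overturn majority voting: it rewards answers that outperform the crowd's own expectations rather than answers that are merely popular.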
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-140).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120624</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Young children's reasoning about their own and others' cognition</title>
<link>https://hdl.handle.net/1721.1/120623</link>
<description>Young children's reasoning about their own and others' cognition
Magid, Rachel W. (Rachel Willcox)
This thesis aims to address a central question in cognitive science: how we reason about our own and others' cognition. Representing the self and others as distinct individuals is a fundamental epistemological feature of being human; the richness of these representations underlies our ability to tackle our own objectives and to understand the goals of others. Yet there is much debate about the metacognitive abilities of young children, in particular the extent to which children's estimations of their own and others' knowledge are accurate, whether children's beliefs about their own and others' cognition are influenced by the evidence they observe, and if these beliefs inform effective self-directed learning. I investigate these questions, examining metacognition and its relationship to learning in 3- to 8-year-olds. Chapter 1 provides an overview of metacognition regarding the self and others. Chapter 2 considers whether young children expect others will learn rationally from evidence. We find that by age 4.5 years, children have a nuanced understanding of how evidence and prior beliefs interact to yield new knowledge. Chapter 3 investigates how children's exploration is influenced by representations of task difficulty, as indexed by the discriminability of alternative hypotheses. We show that there is a precise quantitative relationship between uncertainty and information seeking. Chapter 4 considers how preschoolers use social comparison information to calibrate their self-directed learning, demonstrating that when a task is within children's zone of proximal development, observing evidence that peers perform better increases one's own persistence. Chapter 5 asks how 3- to 5-year-olds integrate representations of their own and others' abilities when allocating roles across contexts. This work demonstrates that children consider who is best suited for a task based on relative ability. 
Across all four chapters, the results of these studies demonstrate that children have a sophisticated understanding of their own and others' knowledge and skills. In addition, children use information about others to effectively direct their own learning and problem solving. I end by arguing that young children have a theory of individuals' characteristics, of which reasoning about the self is a special case. Taken together, these studies illustrate the importance of considering how reasoning about the self and about others are integrated and are fundamental to our human intelligence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 120-151).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120623</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social influences on children's learning</title>
<link>https://hdl.handle.net/1721.1/120622</link>
<description>Social influences on children's learning
Leonard, Julia Anne, Ph. D. Massachusetts Institute of Technology
Adults greatly impact children's learning: they serve as models of how to behave, and as parents, provide the larger social context in which children grow up. This thesis explores how adults impact children's learning across two time scales. Chapters 2 and 3 ask how a brief exposure to an adult model impacts children's moment-to-moment approach towards learning, and Chapters 4 and 5 look at how children's long-term social context impacts their brain development and capacity to learn. In Chapter 2, I show that preschool-age children integrate information from adults' actions, outcomes, and testimony to decide how hard to try on novel tasks. Children persist the longest when adults practice what they preach: saying they value effort, or giving children a pep talk, in conjunction with demonstrating effortful success on their own task. Chapter 3 demonstrates that social learning about effort is present in the first year of life and generalizes across tasks. In Chapter 4, I find that adolescents' long-term social environments have a selective impact on neural structure and function: socioeconomic status (SES) relates to hippocampal-prefrontal declarative memory, but not striatal-dependent procedural memory. Finally, in Chapter 5 I demonstrate that the neural correlates of fluid reasoning differ by SES, suggesting that positive brain development varies by early life environment. Collectively, this work elucidates both the malleable social factors that positively impact children's learning and the unique neural and cognitive adaptations that children develop in response to adverse environments.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-170).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120622</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational foundations of human social intelligence</title>
<link>https://hdl.handle.net/1721.1/120621</link>
<description>Computational foundations of human social intelligence
Kleiman-Weiner, Max
This thesis develops formal computational cognitive models of the social intelligence underlying human cooperation and morality. Human social intelligence is uniquely powerful. We collaborate with others to accomplish together what none of us could do on our own; we share the benefits of collaboration fairly and trust others to do the same. Even young children work and play collaboratively, guided by normative principles, and with a sophistication unparalleled in other animal species. Here, I seek to understand these everyday feats of social intelligence in computational terms. What are the cognitive representations and processes that underlie these abilities and what are their origins? How can we apply these cognitive principles to build machines that have the capacity to understand, learn from, and cooperate with people? The overarching formal framework of this thesis is the integration of individually rational, hierarchical Bayesian models of learning, together with socially rational multi-agent and game-theoretic models of cooperation. I use this framework to probe cognitive questions across three time-scales: evolutionary, developmental, and in the moment. First, I investigate the evolutionary origins of the cognitive structures that enable cooperation and support social learning. I then describe how these structures are used to learn social and moral knowledge rapidly during development, leading to the accumulation of knowledge over generations. Finally I show how this knowledge is used and generalized in the moment, across an infinitude of possible situations. This framework is applied to a variety of cognitively challenging social inferences: determining the intentions of others, distinguishing who is friend or foe, and inferring the reputation of others all from just a single observation of behavior. It also answers how these inferences enable fair and reciprocal cooperation, the computation of moral permissibility, and moral learning. 
This framework predicts and explains human judgment and behavior measured in large-scale multi-person experiments. Together, these results shine light on how the scale and scope of human social behavior is ultimately grounded in the sophistication of our social intelligence.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 199-211).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120621</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using the language of thought</title>
<link>https://hdl.handle.net/1721.1/120620</link>
<description>Using the language of thought
Dechter, Eyal
In this thesis, I develop and explore two novel models of how humans might be able to acquire high-level conceptual knowledge by performing probabilistic inference over a language of thought (Fodor 1975) - a space of symbolic and compositional mental representations sufficiently expressive to capture the meanings of human thoughts and utterances. These models and their associated learning algorithms are motivated by an attempt to provide an understanding of the algorithmic principles that might underlie a child's ability to search the haystack of sentences in her language of thought to find the needle that corresponds to any specific concept. The first model takes advantage of the compositionality inherent to LOT representations, framing concept acquisition as program induction in a functional programming language; the Exploration-Compression algorithm this model motivates iteratively builds a library of useful program fragments that, when composed, restructures the search space, making more useful programs shorter and easier to find. The second model, the Infinite Knowledge Base Model (IKM), frames concept learning as probabilistic inference over the space of relational knowledge bases; the algorithm I develop for learning in this model frames this inference problem as a state-space search over abductive proofs of the learner's observed data. This framing allows us to take advantage of powerful techniques from the heuristic search and classical planning literature to guide the learner. In the final part of this thesis, I explore the behavior of the IKM on several case studies of intuitive theories from the concept learning literature, and I discuss evidence for and against it with respect to other approaches to LOT models.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 125-129).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120620</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and inference with Wasserstein metrics</title>
<link>https://hdl.handle.net/1721.1/120619</link>
<description>Learning and inference with Wasserstein metrics
Frogner, Charles (Charles Albert)
This thesis develops new approaches for three problems in machine learning, using tools from the study of optimal transport (or Wasserstein) distances between probability distributions. Optimal transport distances capture an intuitive notion of similarity between distributions, by incorporating the underlying geometry of the domain of the distributions. Despite their intuitive appeal, optimal transport distances are often difficult to apply in practice, as computing them requires solving a costly optimization problem. In each setting studied here, we describe a numerical method that overcomes this computational bottleneck and enables scaling to real data. In the first part, we consider the problem of multi-output learning in the presence of a metric on the output domain. We develop a loss function that measures the Wasserstein distance between the prediction and ground truth, and describe an efficient learning algorithm based on entropic regularization of the optimal transport problem. We additionally propose a novel extension of the Wasserstein distance from probability measures to unnormalized measures, which is applicable in settings where the ground truth is not naturally expressed as a probability distribution. We show statistical learning bounds for both the Wasserstein loss and its unnormalized counterpart. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data image tagging problem, outperforming a baseline that doesn't use the metric. In the second part, we consider the probabilistic inference problem for diffusion processes. Such processes model a variety of stochastic phenomena and appear often in continuous-time state space models. Exact inference for diffusion processes is generally intractable. 
In this work, we describe a novel approximate inference method, which is based on a characterization of the diffusion as following a gradient flow in a space of probability densities endowed with a Wasserstein metric. Existing methods for computing this Wasserstein gradient flow rely on discretizing the underlying domain of the diffusion, prohibiting their application to problems in more than several dimensions. In the current work, we propose a novel algorithm for computing a Wasserstein gradient flow that operates directly in a space of continuous functions, free of any underlying mesh. We apply our approximate gradient flow to the problem of filtering a diffusion, showing superior performance where standard filters struggle. Finally, we study the ecological inference problem, which is that of reasoning from aggregate measurements of a population to inferences about the individual behaviors of its members. This problem arises often when dealing with data from economics and political sciences, such as when attempting to infer the demographic breakdown of votes for each political party, given only the aggregate demographic and vote counts separately. Ecological inference is generally ill-posed, and requires prior information to distinguish a unique solution. We propose a novel, general framework for ecological inference that allows for a variety of priors and enables efficient computation of the most probable solution. Unlike previous methods, which rely on Monte Carlo estimates of the posterior, our inference procedure uses an efficient fixed point iteration that is linearly convergent. Given suitable prior information, our method can achieve more accurate inferences than existing methods. We additionally explore a sampling algorithm for estimating credible regions.
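The entropic regularization of the optimal transport problem mentioned in the first part is standardly computed by Sinkhorn matrix scaling. The following is a generic NumPy sketch of that standard technique, with illustrative histograms, cost matrix, and regularization strength, not the thesis's own implementation:

```python
import numpy as np

def sinkhorn_distance(a, b, M, reg=0.5, n_iters=200):
    """Entropy-regularized optimal transport cost between histograms
    a and b with ground-cost matrix M, via Sinkhorn matrix scaling."""
    K = np.exp(-M / reg)              # Gibbs kernel of the cost
    u = np.ones_like(a)
    for _ in range(n_iters):          # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(P * M)

# Two histograms on a 1-D grid; the ground cost is squared distance
# between bin locations, so nearby bins are cheap to move mass between.
x = np.arange(5.0)
M = (x[:, None] - x[None, :]) ** 2
a = np.array([0.7, 0.3, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 0.3, 0.7])
print(sinkhorn_distance(a, b, M))
```

Smaller values of `reg` approximate the unregularized transport cost more closely but make the scaling iterations slower and less numerically stable, which is the trade-off the entropic approach exploits.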
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-143).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120619</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised learning of lexical subclasses from phonotactics</title>
<link>https://hdl.handle.net/1721.1/120612</link>
<description>Unsupervised learning of lexical subclasses from phonotactics
Morita, Takashi, Ph. D. Massachusetts Institute of Technology
Languages are constantly borrowing words from one another. Since the donor and recipient languages typically differ in their phonology and phonotactics, the native words and the loanwords of the borrower language can also exhibit different phonology/phonotactics. Accordingly, it has been proposed that the phonotactics of languages such as Japanese is better explained if words are classified into etymologically defined sublexica. However, this sublexical analysis is challenged by a learnability problem: the sublexical membership of words is not directly observable. This study applies a state-of-the-art clustering method (a Dirichlet process mixture model) to a substantial number of Japanese and English words extracted from corpora. It turns out that the predicted clusters largely correspond to the etymologically defined sublexica. Since the clustering method is domain-general and not specialized to sublexicon identification, the results can be taken as statistical evidence for the heterogeneous lexica of the two languages. Moreover, the unsupervised nature of the clustering method demonstrates the learnability of sublexica from naturalistic data. The learned sublexica also replicate linguistic characterizations of actual sublexica proposed in previous literature, such as the biased distribution of (certain substrings of) segments to particular sublexica. In addition, the learned sublexica make informative predictions based on previous experimental studies. These results suggest that the predicted sublexica are linguistically sound. Finally, the predicted sublexica reveal hitherto unnoticed phonotactic properties. These discoveries can be used for further investigation of native speakers' knowledge.
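A Dirichlet process mixture model, as used here, induces the Chinese restaurant process as its prior over partitions of the lexicon: each word joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to a concentration parameter. The sketch below draws assignments from that prior only; the function name and the alpha value are invented for illustration, and actual sublexicon inference would combine this prior with a likelihood over word forms (e.g., via Gibbs sampling):

```python
import random

def crp_assignments(n_items, alpha=1.0, seed=0):
    """Sample cluster assignments from a Chinese restaurant process,
    the prior over partitions underlying a Dirichlet process mixture."""
    rng = random.Random(seed)
    counts = []                        # number of items in each cluster
    labels = []
    for _ in range(n_items):
        # Existing clusters weighted by size; a fresh cluster by alpha.
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)           # open a new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return labels

print(crp_assignments(10, alpha=0.5))
```

Because larger clusters attract new items ("rich get richer"), the process yields a few large clusters and a long tail of small ones, a reasonable prior when most words belong to a handful of etymological strata.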
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 203-215).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120612</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electricity system planning with distributed energy resources : new methods and insights for economics, regulation, and policy</title>
<link>https://hdl.handle.net/1721.1/120611</link>
<description>Electricity system planning with distributed energy resources : new methods and insights for economics, regulation, and policy
Jenkins, Jesse D. (Jesse David)
This dissertation demonstrates a novel integrated electric power system planning framework that incorporates distributed energy resources (DERs), flexible and price-responsive demand, and distribution network losses and reinforcement costs. New methods are developed and demonstrated to derive the aggregate impact of demand and DERs on distribution network losses and upgrade costs from detailed distribution network simulations. The results from these simulations are used to parameterize a novel and tractable representation of distribution networks at an aggregate scale in a new formulation of the electricity resource capacity planning problem. A set of case studies demonstrates the utility of this modeling framework for modeling competition amongst distributed and centralized resources, exploring tradeoffs between economies of unit scale and locational value of various resources, assessing the value of price-responsive electricity demand, and considering the impact of policy or regulation that drives the adoption of DERs. Methodologically, this dissertation makes a set of contributions, including: 1. A new approach to using AC power flow simulations to accurately derive the effect of aggregate changes in power withdrawals and injections on resistive network losses in large-scale distribution networks. 2. A method for adapting AC optimal power flow simulations to identify the minimum quantity of net reductions in coincident peak demand (achieved either by demand flexibility or distributed generation or storage) necessary to accommodate demand growth in large-scale distribution networks without investment in new network assets (e.g., 'non-wires' alternatives). 3. A method for using a distribution network planning model to determine the cost of traditional network upgrades required to accommodate an equivalent increase in peak demand. 4. 
An integrated electricity resource capacity planning model that employs results from the above methods to incorporate DERs and flexible demand and consider key sources of locational value or cost, including impacts on transmission and distribution network costs and losses. Electricity system planning models provide decision support for electric utilities, insight for policy-makers, and a detailed techno-economic framework to evaluate emerging technologies. Collectively, the new methods demonstrated herein expand the capabilities of these important planning tools to keep pace with a rapidly evolving electric power sector.
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis. Page 274 blank.; Includes bibliographical references (pages 265-273).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120611</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear asset shutdown under uncertainty</title>
<link>https://hdl.handle.net/1721.1/120610</link>
<description>Nuclear asset shutdown under uncertainty
Haratyk, Geoffrey
The restructuring of the power sector that began in the 1990s created acute competition for nuclear power plants. While merchant reactors benefited from this change in the early years, they have recently begun to retire for economic reasons well before the expiration of their operating licenses. This unexpected wave of premature shutdowns has severe implications for energy and climate policy. It invites us to re-assess the viability and role of nuclear in a transitioning energy sector. The thesis first develops two new tools aimed at measuring nuclear competitiveness and informing retirement strategies: (1) a structural model of electricity markets based on supply and demand equilibrium and (2) a long-term asset valuation framework accounting for stochastic price dynamics and flexible retirement options. We employ these tools to analyze the challenges facing nuclear energy in two countries: the United States and Japan. After evaluating the drivers and likelihood of premature retirements, we discuss a range of technological innovations and regulatory options that could help nuclear bring value to future competitive markets. We show that low natural gas prices and stagnant electricity demand have been responsible for the drop in nuclear plant revenue in the United States. We measure that renewable wind and solar PV impact nuclear operations only for penetration levels above 15% and 30%, respectively. We also find that spot price volatility, a feature of competitive markets, defers nuclear retirement decisions rather than precipitating them. In this context, nuclear must adapt. Greater operational flexibility can prevent financial losses in areas where renewables are being deployed on a large scale. In the medium term, heat storage technologies would protect plants' profitability while enabling deep decarbonization of the energy sector. Finally, a few plants may be able to reach niche markets by diversifying their output beyond electricity.
We recognize that the carbon-free attributes of nuclear energy are not valued in competitive markets. Yet, even a moderate price on carbon would save most reactors. If not possible, states may adopt nuclear subsidies to meet their policy objectives. As a last resort, the exercise of a new mothballing license could prevent the irreversible loss of nuclear assets.
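The volatility result, that spot price volatility defers rather than hastens retirement, is the classic option value of waiting. A minimal sketch, with invented parameters and a binomial price lattice rather than the thesis's stochastic price model: each year the operator either runs the plant for a year and keeps the option, or retires irreversibly for a value of zero.

```python
# Illustrative real-options sketch on a binomial price lattice (invented
# parameters; not the thesis's structural market model or price process).

def plant_value(p0, cost, up, down, q, years, rate):
    """Value of a plant that may retire (value 0) in any year, by backward
    induction over a recombining lattice of electricity prices."""
    prices = [p0 * up ** k * down ** (years - k) for k in range(years + 1)]
    values = [max(p - cost, 0.0) for p in prices]      # final-year decision
    for step in range(years - 1, -1, -1):
        prices = [p0 * up ** k * down ** (step - k) for k in range(step + 1)]
        cont = [(q * values[k + 1] + (1 - q) * values[k]) / (1 + rate)
                for k in range(step + 1)]
        # run for a year and keep the option, or retire now for zero
        values = [max(prices[k] - cost + cont[k], 0.0) for k in range(step + 1)]
    return values[0]

low_vol = plant_value(30.0, 32.0, up=1.05, down=0.95, q=0.5, years=10, rate=0.05)
high_vol = plant_value(30.0, 32.0, up=1.30, down=0.70, q=0.5, years=10, rate=0.05)
```

Widening the price moves while keeping their midpoint fixed raises the value of keeping the plant alive, which is the sense in which volatility defers the retirement decision.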
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 197-208).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120610</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online calibration for simulation-based dynamic traffic assignment : towards large-scale and real-time performance</title>
<link>https://hdl.handle.net/1721.1/120603</link>
<description>Online calibration for simulation-based dynamic traffic assignment : towards large-scale and real-time performance
Zhang, Haizheng, Ph. D. Massachusetts Institute of Technology
The severity of traffic congestion is increasing each year in the US, resulting in higher travel times and increased energy consumption and emissions. These trends have led to an increasing emphasis on the development of tools for traffic management, which aims to alleviate congestion by utilizing the existing infrastructure more efficiently. Effective traffic management necessitates the generation of accurate short-term predictions of traffic states, and in this context, simulation-based Dynamic Traffic Assignment (DTA) systems have gained prominence over the years. However, a key challenge that remains to be addressed with real-time DTA systems is their scalability and accuracy for applications to large-scale urban networks. A key component of real-time DTA systems that impacts scalability and accuracy is online calibration, which attempts to adjust simulation parameters in real time so that simulated measurements match real-time surveillance data as closely as possible. This thesis contributes to the existing literature on online calibration of DTA systems in three respects: (1) modeling explicitly the stochasticity in simulators and thereby improving accuracy; (2) augmenting the State Space Model (SSM) to capture the delayed measurements on large-scale and congested networks; (3) presenting a gradient estimation procedure called partitioned simultaneous perturbation (PSP) that utilizes an assumed sparse gradient structure to facilitate real-time performance. The results demonstrate that, first, the proposed approach to addressing stochasticity improves the accuracy of supply calibration on a synthetic network. Second, the augmented SSM improves both estimation and prediction accuracy on a congested synthetic network and the large-scale Singapore expressway network. Finally, compared with the traditional finite difference method, PSP reduces the number of computations by 90% and achieves the same calibration accuracy on the Singapore expressway network.
The proposed methodologies have important applications in the deployment of real-time DTA systems for large-scale urban networks.
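PSP itself is the thesis's contribution, but the simultaneous-perturbation idea it partitions can be sketched: perturb every parameter at once with random ±1 draws and estimate the entire gradient from two objective evaluations, where central finite differences would need two per parameter. The quadratic objective below is a toy stand-in for the simulated-vs-observed discrepancy, not a traffic simulator.

```python
# Minimal simultaneous-perturbation (SPSA-style) gradient estimate; PSP's
# partitioning refinement is not shown. Objective and values are toys.

import random

def spsa_gradient(f, x, c=1e-3, seed=0):
    """Estimate grad f(x) from only two evaluations of f, whatever len(x) is."""
    rng = random.Random(seed)
    delta = [rng.choice((-1.0, 1.0)) for _ in x]        # Rademacher draws
    x_plus = [xi + c * di for xi, di in zip(x, delta)]
    x_minus = [xi - c * di for xi, di in zip(x, delta)]
    diff = (f(x_plus) - f(x_minus)) / (2.0 * c)
    return [diff / di for di in delta]

def f(x):  # toy stand-in for the simulation-vs-measurement discrepancy
    return sum((xi - 1.0) ** 2 for xi in x)

g = spsa_gradient(f, [3.0, -2.0, 0.5])
```

For an n-parameter calibration this costs 2 simulator runs per gradient estimate instead of 2n, which is the spirit behind the reported 90% reduction in computations; the estimate is noisy, so in practice it is averaged across iterations.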
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 149-152).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120603</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying China's carrying capacity : using optimization to explore sustainable food production</title>
<link>https://hdl.handle.net/1721.1/120602</link>
<description>Quantifying China's carrying capacity : using optimization to explore sustainable food production
Smith, Tiziana
Feeding the world's growing population in an environmentally sustainable way is a complex social and engineering challenge. In this thesis, we develop a novel method for assessing the number of people that can be fed sustainably in a particular region for given natural resources and diet (the carrying capacity). A quantitative assessment of carrying capacity provides insight into the food security of the study region as well as the stress on the environmental system; in addition, this methodology can be used to assess the carrying capacity under a variety of policy interventions such as increasing yields, changing diets, or expanding irrigation infrastructure. The carrying capacity assessment uses optimization methods that find the cropping pattern that maximizes population subject to land, water, and diet constraints, considering a range of rainfed and irrigated crops. A data fusion procedure estimates the regional water and land resources needed to assess carrying capacity by combining measurements from diverse hydrologic and agronomic sources, including remote sensing data. Our carrying capacity methodology is illustrated with a case study of food security in China. China has historically been largely food self-sufficient, although its food imports have been increasing since the year 2000. We find that the population in China was well below the country's carrying capacity in the year 2000 given the diet and yields in that year. However, the population's changing diet - especially the growing preference for meat - is exacting a growing toll on land and water resources. We find that under a more recent diet (2013), China is not likely to be food self-sufficient, even with major investments in irrigated agriculture, without substantial increases in crop yield.
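The optimization at the core of the method can be reduced to a toy version: allocate land between crops, subject to land and irrigation-water limits, so as to maximize the number of people a fixed per-capita calorie requirement can feed. Crop yields, water duties, and the diet below are invented for illustration (the thesis solves a much richer program over many crops, regions, and diet constraints), and a coarse grid search stands in for a proper linear-programming solver.

```python
# Toy carrying-capacity program: all constants are invented for illustration.

CROPS = {             # yield (kcal/ha), water use (m3/ha)
    "rice":  {"kcal": 2.0e7, "water": 9000.0},
    "wheat": {"kcal": 1.2e7, "water": 3000.0},
}
LAND = 1000.0         # ha of arable land
WATER = 4.0e6         # m3 of irrigation water per year
DIET = 8.0e5          # kcal per person per year

def carrying_capacity(step=1.0):
    """Grid search over land split; a real model would use an LP solver."""
    best, alloc = 0.0, None
    a = 0.0
    while a <= LAND:
        rice, wheat = a, LAND - a
        water = rice * CROPS["rice"]["water"] + wheat * CROPS["wheat"]["water"]
        if water <= WATER:  # irrigation constraint
            kcal = rice * CROPS["rice"]["kcal"] + wheat * CROPS["wheat"]["kcal"]
            people = kcal / DIET
            if people > best:
                best, alloc = people, (rice, wheat)
        a += step
    return best, alloc

best, alloc = carrying_capacity()
```

In this toy instance the water limit binds before the land limit, so the optimizer plants just enough of the thirstier, higher-calorie crop; policy levers such as higher yields or expanded irrigation enter simply as changed constants.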
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 113-119).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120602</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating the impacts of arsenic on human health and rice yield in Bangladesh</title>
<link>https://hdl.handle.net/1721.1/120601</link>
<description>Mitigating the impacts of arsenic on human health and rice yield in Bangladesh
Huhmann, Brittany Lynn
Naturally-occurring groundwater arsenic can threaten human health and food security. In Bangladesh, &gt;50 million people are estimated to have chronically consumed water with arsenic above the World Health Organization (WHO) guideline of 10 μg/L, which can contribute to cancer, cardiovascular disease, and reproductive and developmental effects. Studies relating arsenic exposure to health impacts generally estimate dose based on participants' primary household wells. Using a mass-balance for arsenic and water, we estimate that participants in Araihazar, Bangladesh obtain 37±8% of their water from primary household wells and 31±14% from other wells, and we thus recommend the inclusion of other wells in dose estimation. Concentrations of arsenic in well water are spatially variable, enabling many exposed households to switch to nearby lower-arsenic wells in response to area-wide well testing. Following well testing and education in Araihazar, arsenic exposure declined and remained lowered for at least eight years. Participants with arsenic-unsafe wells were 6.8 times more likely to switch wells over the first two years and 1.4-1.8 times more likely to switch wells over the ensuing decade. Rice comprises more than 70% of calories consumed in Bangladesh, and rice yield is negatively impacted by the buildup of arsenic in soil from irrigation with high-arsenic water. We investigated the effect of soil arsenic on yield using a controlled study design where we exchanged the top 15 cm of soil between high-arsenic and low-arsenic plots. Differences in yield were negatively correlated to differences in soil arsenic between adjacent soil replacement and control plots, suggesting that boro rice yield countrywide may be diminished by 7-26% due to arsenic in soil. Soil testing and removal of high-arsenic soil may enable farmers to mitigate the impacts of arsenic on rice. 
Twelve measurements made with the ITS Econo-Quick field kit could be used to estimate whether soil arsenic was above or below a 30 mg/kg intervention threshold with 80-90% accuracy. A soil inversion, where deep low-arsenic soil was exchanged with surface high-arsenic soil, decreased soil arsenic, organic carbon, nitrogen, and phosphorus concentrations by about 40% in the top 20 cm of soil and improved rice yield by 15-30%.
Thesis: Ph. D. in Environmental Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120601</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Connections between circuit analysis problems and circuit lower bounds</title>
<link>https://hdl.handle.net/1721.1/120467</link>
<description>Connections between circuit analysis problems and circuit lower bounds
Murray, Cody (Cody Daniel)
A circuit analysis problem takes a Boolean function f as input (where f is represented either as a logical circuit, or as a truth table) and determines some interesting property of f. Examples of circuit analysis problems include Circuit Satisfiability, Circuit Composition, and the Minimum Circuit Size Problem (MCSP). A circuit lower bound presents an interesting function f and shows that no "easy" family of logical circuits can compute f correctly on all inputs, for some definition of "easy". Lower bounds are infamously hard to prove, but are of significant interest for understanding computation. In this thesis, we derive new connections between circuit analysis problems and circuit lower bounds, and use them to prove new lower bounds for various well-studied circuit classes. We show how faster algorithms for Circuit Satisfiability can imply non-uniform lower bounds for functions in NP and related classes. We prove that MCSP cannot be NP-hard under "local" gadget reductions of the kind that appear in textbooks, and that if MCSP were proven NP-hard in the usual (polynomial-time reduction) sense, longstanding lower bounds in other regimes would follow. We also prove that natural versions of the Circuit Composition problem do not have small circuits that are constructible in very small (logarithmic) space.
Thesis: Ph. D. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 107-112).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120467</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stopping self-discharge in metal-air batteries</title>
<link>https://hdl.handle.net/1721.1/120466</link>
<description>Stopping self-discharge in metal-air batteries
Hopkins, Brandon J. (Brandon James)
Metal-air batteries boast high theoretical energy densities, but negative electrode corrosion can severely reduce their usable capacity and commercial utility. Most methods to mitigate corrosion focus on electrode and electrolyte modification such as electrode alloying, electrolyte additives, and gel and nonaqueous electrolytes. These methods, however, either insufficiently suppress the parasitic reaction or compromise power and energy density. This thesis focuses on a different approach to corrosion mitigation involving electrolyte displacement from the electrode surface. Multiple electrolyte-displacement concepts were generated and investigated. The most promising of the concepts was the reversible displacement of the electrolyte from the electrode surface with an oil. To enable this method, the fundamental physics of underwater oil-fouling resistant surfaces was investigated, tested, and characterized. Design equations that aid in the appropriate selection of electrodes, displacing oils, and separator membranes were also developed. The oil displacement method was demonstrated in a primary (single-use) aluminum-air (Al-air) battery that achieved a 420% increase in usable energy density and was estimated to enable pack-level energy densities as high as 700 Wh L-1 and 900 Wh kg-1. This method could, in principle, be used in any of the metal-air batteries, aqueous or nonaqueous, or in other energy storage systems that suffer from corrosion if appropriate displacing oils and separator membranes are found using the discussed design principles. With the oil displacement method, aqueous metal-air batteries that rely on abundant, broadly dispersed materials could provide safe, low-cost, sustainable primary and secondary (rechargeable) batteries for many applications including grid-storage, off-grid storage, robot power, and vehicular propulsion.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 67-78).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120466</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the liquid-vapor phase change of mercury based on irreversible thermodynamics.</title>
<link>https://hdl.handle.net/1721.1/120452</link>
<description>A study of the liquid-vapor phase change of mercury based on irreversible thermodynamics.
Adt, R. R.
Thesis (Sc. D.)--Massachusetts Institute of Technology. Dept. of Mechanical Engineering, 1967.; Vita.; Bibliography: leaves 52-53.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120452</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An electrophysiological investigation of retinular cells of the crayfish Procambarus clarkii.</title>
<link>https://hdl.handle.net/1721.1/120451</link>
<description>An electrophysiological investigation of retinular cells of the crayfish Procambarus clarkii.
Muller, Kenneth Joseph
Massachusetts Institute of Technology. Dept. of Biology. Thesis. 1971. Ph.D.; Vita.; Bibliography: leaves 189-200.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120451</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in search for match quality</title>
<link>https://hdl.handle.net/1721.1/120449</link>
<description>Essays in search for match quality
Takala, Kosti (Kosti Oskari)
This thesis consists of three essays on search for reviews. In Chapter 1, I study how homogeneous consumers behave when considering whether to buy a new experience good, such as a pair of shoes, from a monopolist with an observable price. They have costly access to consumer reviews, which are perfectly or imperfectly informative of match quality. Knowing how consumers behave, the monopolist sets an optimal price inducing certain behavior; sometimes the firm will find it optimal to set a high price to induce the consumers to search for earlier reviews, sometimes a low price to induce them to purchase the product. Consumer, producer, and social surplus are non-monotone in search cost, and this result extends to settings with heterogeneous consumers. In Chapter 2, we extend the model of Chapter 1 to allow for competition between ex-ante identical firms. Consumers are homogeneous and all prices are observed at no cost. They can search for earlier reviews, which perfectly reveal match quality. Consumers can keep searching as long as they have not found a match or exhausted all of their options. We learn that high search costs lead to relatively high prices; as costs decrease, surplus is first transferred from firms to consumers, but further reductions may decrease consumer surplus. Additional effects are noted as reviews get even cheaper to access, and consumer surplus, profits, and social surplus are all non-monotone in the cost of reading reviews. In Chapter 3, we analyze selection into consumption in the case of movies. People leaving reviews are assumed to be a representative sample of consumers, so that reviews perfectly reveal experiences. New consumers observe these reviews but do not know if their preferences are aligned with those of the reviewers.
Examining two datasets with movie reviews and box office revenue, both in a cross-section of movies and within movies over time, we learn that selection decreases in the expected quality of a movie, the precision of the prior on quality, and consumer homogeneity. Selection may increase or decrease over time, and it tends to increase in the number of movies.
Thesis Supervisor: Glenn Ellison, Gregory K. Palm Professor of Economics; Thesis Supervisor: Michael Whinston, Sloan Fellows Professor of Management
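The Chapter 1 consumer problem can be caricatured as a three-way choice among buying blind, paying a search cost to read a perfectly informative review first, and exiting. Everything below (the match probability q, the valuation, the numbers) is invented for illustration; reading the review is worthwhile exactly when the price it can save, (1 - q) times the price, exceeds the search cost s.

```python
# Toy consumer choice under review search (illustrative, not the thesis model).

def consumer_action(value, price, q, s):
    """value: surplus if matched; q: prior match probability; s: search cost."""
    buy = q * value - price              # expected surplus from buying blind
    search = q * (value - price) - s     # read the review, buy only on a match
    best = max(buy, search, 0.0)
    if best == 0.0:
        return "exit"
    return "buy" if buy >= search else "search"

# A monopolist anticipating this rule can set its price to steer behavior.
```

With a 50% match chance, a cheap review is worth reading before a moderately priced purchase; raise the prior to 90% and the same consumer buys blind, which is exactly the margin the monopolist's price can move.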
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-154).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120449</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in international economics</title>
<link>https://hdl.handle.net/1721.1/120448</link>
<description>Essays in international economics
Fanelli, Pablo Sebastián
This thesis consists of three chapters on international economics. The first chapter explores the implications of the large increase in cross-border holdings of financial assets for monetary policy and capital controls. I study an open economy model with nominal rigidities, incomplete markets, and assets denominated in home and foreign currency. I develop an approximation method that allows me to characterize the optimal policy sharply. The planner trades off stabilizing output gaps with creating insurance via cross-country balance-sheet effects. Perhaps surprisingly, as insurance considerations become more important, home-currency positions become larger, and the excess-return volatility of home-currency assets actually decreases, rather than increases as one would expect with fixed ad hoc portfolios. Capital controls are not called for by the approximate solution, i.e., private portfolio decisions are approximately efficient. In my baseline calibration, the welfare gains from the optimal policy are 1.5 times larger than those from inflation-targeting. The second chapter, joint with Ludwig Straub, develops a theory of foreign exchange interventions for small open economies. In the model, the central bank can implement nonzero spreads between home- and foreign-currency bonds by managing its portfolio, due to financial frictions that limit arbitrage by the private sector. Nonzero spreads are costly as they allow foreign intermediaries to make carry-trade profits. Optimal interventions balance these costs with terms of trade benefits. The optimal policy gives rise to a smooth path for the spread, relying on credible promises of future interventions (forward guidance). By contrast, we find that smoothing exchange rates aggressively is not optimal, since it invites costly speculation. We conclude with a multi-country extension of our model.
The third chapter, joint with Juan Carlos Hallak, studies the relevance of uncertainty and experimentation as a central feature of exporter dynamics. We show that a standard model without these features cannot explain two key facts of exporter dynamics: the strikingly low survival rates one year after entering a foreign market, and the novel fact that re-entrants in export markets are more likely to survive than first-time entrants. We develop a tractable model with experimentation that can explain these facts. We also provide support for the main mechanism of the model by exploiting variation in the degree of uncertainty across products and markets.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 213-221).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120448</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on health and healthcare economics</title>
<link>https://hdl.handle.net/1721.1/120447</link>
<description>Essays on health and healthcare economics
Abraham, Sarah Marie
This thesis consists of three chapters on the economics of health and healthcare. The first and third chapters explore geographic variation in health outcomes within the United States. The second chapter focuses on empirical methods for obtaining causal estimates of treatment effects with an application to healthcare settings. In the first chapter I study geographic variation in health care utilization under two different insurance systems: traditional Medicare and employer-provided private insurance. For each system, I use patient migration as a source of identification combined with empirical Bayes methods to construct optimal linear forecasts for the causal effects of place on utilization. These place effects measure the causal differences in treatment intensity across areas. I find similar levels of variation in the causal place effects for the publicly and privately insured patients, with a correlation of .39 across the two systems. These findings emphasize that insurance systems are affecting the forces that drive the causal component of geographic variation in utilization. In the second chapter, Liyang Sun and I explore event studies, a model for estimating treatment effects using variation in the timing of treatment. Researchers often run fixed effects regressions for event studies that implicitly assume treatment effects are constant across cohorts first treated at different times. In this paper we show that these regressions produce causally uninterpretable estimands when treatment effects vary across cohorts. We propose alternative estimators that identify convex averages of the cohort-specific treatment effects, hence allowing for causal interpretation even under heterogeneous treatment effects. We illustrate the shortcomings of fixed effects estimators in comparison to our proposed estimators through an empirical application on the economic consequences of hospitalization. 
In the third chapter, Raj Chetty, Michael Stepner, Shelby Lin, Benjamin Scuderi, Nicholas Turner, Augustin Bergeron, David Cutler and I use newly available administrative data to quantify the relationship between income and mortality in the United States. Although it is well known that there are significant differences in health and longevity between income groups, debate remains about the magnitudes and determinants of these differences. We use new data from 1.4 billion anonymous earnings and mortality records to construct more precise estimates of the relationship between income and life expectancy at the national level than was feasible in prior work. We then construct new local area (county and metro area) estimates of life expectancy by income group and identify factors that are associated with higher levels of life expectancy for low-income individuals. Our study yields four sets of results. First, higher income was associated with greater longevity throughout the income distribution. The gap in life expectancy between the richest 1% and poorest 1% of individuals was 14.6 years for men and 10.1 years for women. Second, inequality in life expectancy increased over time. Between 2001 and 2014, life expectancy increased by 2.34 years for men and 2.91 years for women in the top 5% of the income distribution, but increased by only 0.32 years for men and 0.04 years for women in the bottom 5%. Third, life expectancy varied substantially across local areas. For individuals in the bottom income quartile, life expectancy differed by approximately 4.5 years between areas with the highest and lowest longevity. Changes in life expectancy between 2001 and 2014 ranged from gains of more than 4 years to losses of more than 2 years across areas.
Fourth, geographic differences in life expectancy for individuals in the lowest income quartile were significantly correlated with health behaviors such as smoking, but were not significantly correlated with access to medical care, physical environmental factors, income inequality, or labor market conditions. Life expectancy for low-income individuals was positively correlated with the local area fraction of immigrants, fraction of college graduates, and local government expenditures. Additional information on this project is available at https://healthinequality.org/.
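The estimator idea in the second chapter, averaging cohort-specific effects with explicit convex weights instead of letting a pooled regression choose opaque ones, can be sketched on invented two-cohort data with a common trend and a never-treated control group.

```python
# Hypothetical staggered-adoption example (not the chapter's estimator or data):
# estimate each cohort's effect by difference-in-differences against a
# never-treated group, then combine the effects with cohort-share weights.

def cohort_did(cohort, event_time, control):
    """DiD of one cohort vs. the control, at a given time since treatment."""
    t = cohort["first_treated"] + event_time
    base = cohort["first_treated"] - 1
    return ((cohort["y"][t] - cohort["y"][base])
            - (control["y"][t] - control["y"][base]))

# Outcomes follow a common +1-per-period trend plus cohort-specific effects.
control = {"first_treated": None, "y": [0, 1, 2, 3, 4, 5]}
early = {"first_treated": 2, "n": 30, "y": [0, 1, 4, 5, 6, 7]}   # effect 2
late = {"first_treated": 4, "n": 10, "y": [0, 1, 2, 3, 9, 10]}   # effect 5
effects = [cohort_did(c, 0, control) for c in (early, late)]
weights = [c["n"] / 40 for c in (early, late)]   # convex, non-negative weights
avg = sum(w * e for w, e in zip(weights, effects))
```

Here the weighted average is 0.75 x 2 + 0.25 x 5 = 2.75, and every weight is non-negative by construction; the chapter's point is that a pooled two-way fixed-effects coefficient need not admit such a causal interpretation when effects differ across cohorts.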
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 147-156).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120447</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and designing Bcl-2 family protein interactions using high-throughput interaction data</title>
<link>https://hdl.handle.net/1721.1/120446</link>
<description>Modeling and designing Bcl-2 family protein interactions using high-throughput interaction data
Xue, Vincent
Protein-protein interactions (PPIs) play a major role in cellular function, mediating signal processing and regulating enzymatic activity. Understanding how proteins interact is essential for predicting new binding partners and engineering new functions. Mutational analysis is one way to study the determinants of protein interaction. Traditionally, the biophysical study of protein interactions has been limited by the number of mutants that could be made and analyzed, but advances in high-throughput sequencing have enabled rapid assessment of thousands of variants. The Keating lab has developed an experimental protocol that can rank peptides based on their binding affinity for a designated receptor. This technique, called SORTCERY, takes advantage of cell sorting and deep-sequencing technologies to provide more binding data at a higher resolution than has previously been achievable. New computational methods are needed to process and analyze the high-throughput datasets. In this thesis, I show how experimental data from SORTCERY experiments can be processed, modeled, and used to design novel peptides with select specificity characteristics. I describe the computational pipeline that I developed to curate the data and regression models that I constructed from the data to relate protein sequence to binding. I applied models trained on experimental data sets to study the peptide-binding specificity landscape of the Bcl-xL, Mcl-1, and Bfl-1 anti-apoptotic proteins, and I designed novel peptides that selectively bind tightly to only one of these receptors, or to a pre-specified combination of receptors. My thesis illustrates how data-driven models combined with high-throughput binding assays provide new opportunities for rational design.
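A minimal sketch of the regression step, under heavy assumptions (a two-letter alphabet, four invented affinity measurements, a purely additive position-specific model): one-hot encode each sequence position and fit linear weights by gradient descent.

```python
# Illustrative sequence-to-affinity regression; not the thesis's models or data.

AAS = "AC"  # toy alphabet standing in for the 20 amino acids

def one_hot(seq):
    """Concatenated per-position indicator encoding of a peptide sequence."""
    v = []
    for ch in seq:
        v.extend(1.0 if ch == a else 0.0 for a in AAS)
    return v

def fit_linear(data, lr=0.1, steps=2000):
    """Least-squares fit of an additive per-position model by gradient descent."""
    n = len(one_hot(data[0][0]))
    w = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for seq, y in data:
            x = one_hot(seq)
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i, xi in enumerate(x):
                grad[i] += err * xi
        w = [wi - lr * gi / len(data) for wi, gi in zip(w, grad)]
    return w

data = [("AA", 2.0), ("AC", 1.0), ("CA", 3.0), ("CC", 2.0)]  # invented affinities
w = fit_linear(data)

def predict(seq):
    return sum(wi * xi for wi, xi in zip(w, one_hot(seq)))
```

These four invented measurements happen to be exactly additive across positions, so the fit reproduces them; real SORTCERY data motivate the richer models and the careful curation pipeline the thesis describes.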
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 153-164).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120446</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetically encodable calcium sensors for Magnetic Resonance Imaging</title>
<link>https://hdl.handle.net/1721.1/120443</link>
<description>Genetically encodable calcium sensors for Magnetic Resonance Imaging
Ghosh, Souparno, Ph. D. Massachusetts Institute of Technology
A key requirement for understanding the workings of the brain is to fill in the explanatory gap between molecular phenomena and identifiable behavior at the organismal level. Magnetic resonance imaging (MRI) provides a unique tool for bridging this gap, as it allows for imaging tissue throughout whole organisms. Although functional MRI (fMRI) is already a workhorse technique in human neuroscience research, current fMRI methods give us limited information about brain mechanisms because they rely on blood flow changes that are only indirectly coupled to cellular and molecular events. To add cellular or molecular specificity to MRI, there is a need for genetically targeted, analyte-specific sensors. Calcium is an ion of great interest to biology, since its fluctuations are highly correlated with neural activity. While much progress has been made in pursuit of genetically encoded calcium sensors, none allow for deep tissue imaging of whole rodent brains. In this thesis we demonstrate that genetically encodable calcium sensors based on the known MRI gene reporter ferritin show modest sensitivity. To achieve higher amplification we leverage the hemodynamic response, which is coupled to neuronal activity through a calcium-activated enzyme, neuronal nitric oxide synthase (nNOS). We show that chemical stimulation of ectopically expressed nNOS elicits an artificial hemodynamic response detectable by MRI. To distinguish signaling from endogenous nNOS we use a two-pronged strategy to engineer a suite of enzymes with altered inhibition constants compared to nNOS. We demonstrate that these engineered enzymes (NOSTICs) exhibit calcium-dependent catalytic activity. One such NOSTIC was then virally delivered to rodent brains and shown to express in certain cell populations. Hemodynamic responses from these cell populations were recorded following electrical stimulation using MRI.
The imaging strategy demonstrated here thus offers a novel and potentially powerful approach for cell-targeted functional imaging of the brain.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2018.; Cataloged from PDF version of thesis. "September 2018."; Includes bibliographical references (pages 57-73).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120443</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning efficient image processing pipelines</title>
<link>https://hdl.handle.net/1721.1/120437</link>
<description>Learning efficient image processing pipelines
Gharbi, Michael (Michael Yanis)
The high resolution of modern cameras puts significant performance pressure on image processing pipelines. Tuning the parameters of these pipelines for speed is subject to stringent image quality constraints and requires significant effort from skilled programmers. Because quality is driven by perceptual factors with which most quantitative image metrics correlate poorly, developing new pipelines involves long iteration cycles, alternating between software implementation and visual evaluation by human experts. These concerns are compounded on modern computing platforms, which are increasingly mobile and heterogeneous. In this dissertation, we apply machine learning to the design of high-performance, high-fidelity algorithms whose parameters can be optimized automatically on large-scale datasets via gradient-based methods. We present applications to low-level image restoration and high-performance image filtering on mobile devices.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages [125]-138).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120437</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New directions in sublinear algorithms and testing properties of distributions</title>
<link>https://hdl.handle.net/1721.1/120434</link>
<description>New directions in sublinear algorithms and testing properties of distributions
Gouleakis, Themistoklis
This thesis deals with sublinear algorithms for various types of problems in statistics, combinatorial optimization and graph algorithms. A first focus of this thesis is algorithms for testing whether a probability distribution, to which the algorithms have sample access, is equal to a given hypothesis distribution, using a number of samples that is sublinear in the domain size. A second focus is to consider various other models of computation defined by the type of queries available to the user. This thesis shows how more powerful queries, such as the ability to draw a sample according to the conditional distribution on a specified set, allow one to obtain faster algorithms for a number of problems. Thirdly, this thesis considers the problem of certifying and correcting the result of a crowdsourced computation with potentially erroneous worker reports, by using verification queries on a sublinear number of reports. Finally, we show improved methods to simulate graph algorithms for maximal independent set, minimum vertex cover and maximum matching by distributing the computation to multiple sublinear-space computing machines and allowing only a sublinear number of rounds of communication between them.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 189-200).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120434</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational and statistical approaches to optical spectroscopy</title>
<link>https://hdl.handle.net/1721.1/120432</link>
<description>Computational and statistical approaches to optical spectroscopy
Han, Ningren
Compact and smart optical sensors have had a major impact on people's lives over the last decade. Although the spatial information provided by optical imaging systems has already had a major impact, there is untapped potential in the spectroscopic domain. By transforming molecular information into wavelength-domain data, optical spectroscopy techniques have become some of the most popular scientific tools for examining the composition and nature of materials and chemicals in a non-destructive and non-intrusive manner. However, unlike imaging, spectroscopic techniques have not achieved the same level of penetration due to multiple challenges, ranging from a lack of sensitive, miniaturized, and low-cost systems to a general reliance on domain-specific expertise for interpreting complex spectral signals. In this thesis, we aim to address some of these challenges by combining modern computational and statistical techniques with physical domain knowledge. In particular, we focus on three aspects where computational or statistical knowledge has either enabled the realization of a new instrument, with a compact form factor yet competitive performance, or deepened statistical insight into analyte detection and quantification in highly mixed or heterogeneous environments. In the first part, we utilize the non-paraxial Talbot effect to build compact, high-performance spectrometers and wavemeters that use computational processing for spectral information retrieval without the need for a full-spectrum calibration process. In the second part, we develop an analyte quantification algorithm for Raman spectroscopy based on spectral shape modeling. It uses a hierarchical Bayesian inference model and reversible-jump Markov chain Monte Carlo (RJMCMC) computation with a minimal training sample size requirement.
In the last part, we numerically investigate the spectral characteristics and signal requirements for universal and predictive non-invasive glucose estimation with Raman spectroscopy, using an in vivo skin Raman spectroscopy dataset. These results provide valuable advancements and insights in bringing forth smart compact optical spectroscopic solutions to real-world applications.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 223-236).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120432</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated silicon photonic circuit simulation</title>
<link>https://hdl.handle.net/1721.1/120431</link>
<description>Integrated silicon photonic circuit simulation
Leu, Jonathan Chung
Integrated silicon photonics is an exciting emerging technology that exploits the high bandwidth and high timing resolution optics provides in many applications. To maximize the benefits of these optical-electrical systems, tight integration of the electronic and photonic components is necessary. In light of this need, we have developed a Cadence toolkit library written in Verilog-A that simulates both the amplitude and phase of optical signals, as well as optical-electrical interactions. The runtime is greatly improved by simulating the optical signal relative to a reference frequency, which is chosen to be close to the frequency range of interest. We have identified a set of fundamental photonic components and described each at the physical level, such that the characteristics of a composite device emerge organically. We show that the simulated results match analytic solutions for simple devices like resonant ring filters and more complicated devices like single-sideband modulators. Adding to this toolkit library, we then discuss devices required for handling more specialized cases, such as chromatic dispersion in the waveguide and non-ideal optoelectronic devices. Finally, we demonstrate simulations of complicated systems such as WDM links and Pound-Drever-Hall loops. This allows designers to unify the photonic device design and modeling environment with circuit- and system-level design, giving greater insight into the trade-offs between the two realms.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 97-111).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120431</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring possible coupling between phonons and internal nuclear states</title>
<link>https://hdl.handle.net/1721.1/120429</link>
<description>Exploring possible coupling between phonons and internal nuclear states
Lu, Siyuan, Ph. D. Massachusetts Institute of Technology
During the past three decades, researchers have reported approximately 25 different anomalies in the field of condensed matter nuclear science. One example involves collimated X-rays emitted from vibrating metal samples, without a clear explanation or understanding of the underlying physics. Another example involves unexpected non-exponential decay of radioactive sources. These anomalies have motivated a research effort by my Ph.D. advisor at MIT, Professor Peter Hagelstein, to investigate the physical phenomena involved. Hagelstein developed a theory predicting coupling between phonons and internal nuclear states, leading to excitation transfer between nuclei. The aim of this Ph.D. thesis is to experimentally test Hagelstein's theory. In this research, we used Co-57 as the sample to investigate the nuclear excited states. Unexpected non-exponential decay was seen in the first attempt to look for the excitation transfer effect, and a heat pulse was found to trigger increases in the X-ray signal. We performed angular anisotropy experiments, which appear to support the conjecture that slow resonant excitation transfer occurs for the 136 keV excited state of Co-57. We also performed delocalization experiments, which appear to support the conjecture that fast excitation transfer occurs for the 14.4 keV excited state of Co-57. Our conclusion is that the experimental data are not inconsistent with Hagelstein's theory.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120429</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling, analysis, and control of interdependent networks</title>
<link>https://hdl.handle.net/1721.1/120428</link>
<description>Modeling, analysis, and control of interdependent networks
Zhang, Jianan, Ph. D. Massachusetts Institute of Technology
The integration of communication and physical networks facilitates network monitoring, operation and control, advancing the development of the Internet of Things, smart power grids, and other cyber-physical systems. In these integrated networks, one network depends on another in order to be fully functional, leading to interdependence. This interdependence brings new challenges in the evaluation of network robustness under failures and in the design of control policies for mitigating failures, since failures may cascade from one network to another network that depends on it. We develop new models and analytical tools to study interdependent networks, with a focus on designing robust interdependent networks that can withstand failures and attacks. We first model two interdependent networks of arbitrary topologies by layered graphs, where nodes in the demand layer depend on nodes in the supply layer. We study the supply node connectivity of the demand layer network: namely, the minimum number of supply node removals that would disconnect the demand network. We develop algorithms to evaluate the supply node connectivity given arbitrary network topologies and dependence between networks. We develop dependence assignment algorithms that maximize the supply node connectivity to enhance network robustness. We then study robust routing problems: namely, delivering information or commodities through paths with high reliability. We develop algorithms to compute the path failure probability under correlated failures, and obtain the most reliable path for single-path routing and the most reliable pair of paths for diverse routing between any pair of nodes in a network. To study the formation and properties of large-scale interdependent networks, we develop an interdependent random geometric graph (RGG) model. The model represents two interdependent spatially embedded networks where interdependence exists between geographically nearby nodes in the two networks.
We characterize the emergence of the giant mutual component in two interdependent RGGs as node densities increase, and obtain analytical bounds and confidence intervals for the percolation thresholds. This new model and the accompanying analytical tools provide a framework for robustness evaluation of large-scale interdependent networks under uniform random node failures, geographical attacks, and degree-dependent failures that capture non-uniform vulnerabilities of network components. Finally, we consider two applications of interdependent networks. First, we consider an interdependent power grid and communication network. We characterize the impact of communication failures on power grid control, and develop control policies for power grid frequency regulation and economic dispatch using limited communication. Second, we consider the robustness of distributed computing networks, where network flows depend on both communication and computation resources. We study network robustness under the failure of network resources and solve network flow interdiction problems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 221-229).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120428</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, optimization, and performance of an adaptable aircraft manufacturing architecture</title>
<link>https://hdl.handle.net/1721.1/120427</link>
<description>Design, optimization, and performance of an adaptable aircraft manufacturing architecture
Tao, Tony S. (Tony Shuo)
The cost and time required to develop aircraft have grown strongly over time, to the point where aircraft have become prohibitively expensive and are outpaced by ever-evolving mission needs. To address this problem, this thesis presents and explores an aircraft platform architecture called "Adaptable Aircraft Manufacturing" or "AAM", which features common tooling geometry that enables the creation of any composite aircraft (within a reachable subspace) on demand. To prove the feasibility of this architecture, a family of aircraft was constructed using a single set of AAM tooling. This thesis also optimizes the AAM geometry and quantifies the inefficiencies incurred by its adoption. This family optimization problem is both logically and computationally complex since the constraints AAM places between the variants cannot be prescribed by the designer, but arise as a result of the optimization gradients that exist between variants. A sequential-process framework which clarifies the relations and points of adjustability available in aircraft manufacturing is presented. A signomial-programming (SP) aircraft optimization code that is capable of simulating the inefficiencies generated by the AAM geometry was developed. The SP mathematical structure and the GPkit codebase were selected due to their compatibility with the constraint-heavy geometric rules that describe AAM and for the rapid speed of computation, which is necessary due to the scale of the optimization problem. To quantify the performance of AAM, a series of explorations are conducted whereby the performance of an AAM-family of aircraft is compared against a fleet of Individually-Optimal (IO) aircraft. These explorations are conducted along the axes of payload size, cruise speed, mission scope, and market bias to gain an understanding of how (and by how much) the AAM constraints affect both the performance and the geometry of the aircraft it produces. 
The results show that, for total-mission-cost-minimizing fleets of three designs each, the AAM fleet is between 10% and 20% more costly, but requires only between 30% and 80% of the tooling of an IO fleet.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 101-103).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120427</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information provision in rating systems and traffic systems</title>
<link>https://hdl.handle.net/1721.1/120426</link>
<description>Information provision in rating systems and traffic systems
Makhdoumi, Ali (Makhdoumi Kakhaki)
This thesis studies information provision in two contexts: online rating systems and traffic information systems. In the first part of the thesis, we develop a model of Bayesian learning from online reviews and investigate the conditions for asymptotic learning of the quality of a product. Moreover, we characterize the speed of learning under different rating systems and the impact of information provision on the speed of learning. In the model, a sequence of potential customers, after observing the ratings of the product and based on their ex ante valuation, decide whether to purchase. If they purchase, the true quality of the product, their ex ante valuation and their ex post valuation determine their overall satisfaction. Given the rating system of the platform, they decide whether to leave a review as a function of their overall satisfaction. We study learning dynamics under two classes of rating systems: full history, where customers see the full history of reviews, and summary statistics, where the platform reports some summary statistics of past reviews. In both cases, we characterize the asymptotic speed of learning and show that the incentives of the platform are aligned with maximizing the speed of learning. We then study the design of rating systems in terms of information collection and information provision schemes. In particular, we identify situations in which providing more information leads to slower learning. In the second part of the thesis, we develop a framework for systematically analyzing how changes in the information sets of users in a traffic network (e.g., due to route guidance systems) impact the traffic equilibrium, and show the conditions under which even those with access to additional information may suffer greater congestion. To this end, we first introduce a new class of congestion games in which users have differing information sets about the available edges and can only use routes consisting of edges in their information set.
We then define the notion of an information-constrained Wardrop equilibrium for this class of congestion games and establish its existence and uniqueness. Finally, we turn to our main question, formulated in the form of the Informational Braess' Paradox (IBP), which extends the classic Braess' Paradox in traffic equilibria. IBP asks whether users receiving additional information can become worse off. We answer this question comprehensively by providing a tight characterization of the network topologies under which IBP emerges.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-141).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120426</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viscosity stabilized adjoint method for unsteady compressible Navier-Stokes equations</title>
<link>https://hdl.handle.net/1721.1/120425</link>
<description>Viscosity stabilized adjoint method for unsteady compressible Navier-Stokes equations
Talnikar, Chaitanya Anil
Design optimization methods are a popular tool in computational fluid dynamics for designing components or finalizing the flow parameters of a system. The adjoint method accelerates the design process by providing gradients of the design objective with respect to the system parameters. Typically, however, adjoint-based design optimization methods have used low-fidelity simulations like Reynolds-averaged Navier-Stokes (RANS). To reliably capture complex flow phenomena like turbulent boundary layers, turbulent wakes and fluid separation involved in high Reynolds number flows, high-fidelity simulations like large eddy simulation (LES) are required. Unfortunately, due to the chaotic dynamics of turbulence, the adjoint method for LES diverges and produces incorrect gradients. In this thesis, the adjoint method for unsteady flow equations is modified by adding artificial viscosity to the adjoint equations. The additional viscosity stabilizes the adjoint solution and maintains reasonable accuracy of the gradients obtained from it. The accuracy of the method is assessed on multiple turbulent flow problems, including subsonic flow over a cylinder and transonic flow over a gas turbine vane. The utility of the method is then tested in performing shape optimization of the trailing edge of a transonic turbine vane. The optimal design, found using a modified gradient-based Bayesian optimization algorithm, shows approximately 15% better aero-thermal performance than the baseline design. Such design optimizations are possible due to the availability of massively parallel supercomputers. Designing high-performance fluid flow solvers for the next generation of supercomputers is a challenging task. In this thesis, a two-level computational graph method for writing optimized distributed flow solvers on heterogeneous architectures is presented. A checkpoint-based automatic differentiation method is used to derive the corresponding adjoint flow solver in this framework.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 187-195).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120425</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The power of randomized algorithms : from numerical linear algebra to biological systems</title>
<link>https://hdl.handle.net/1721.1/120424</link>
<description>The power of randomized algorithms : from numerical linear algebra to biological systems
Musco, Cameron N. (Cameron Nicholas)
In this thesis we study simple, randomized algorithms from a dual perspective. The first part of the work considers how randomized methods can be used to accelerate the solution of core problems in numerical linear algebra. In particular, we give a randomized low-rank approximation algorithm for positive semidefinite matrices that runs in sublinear time, significantly improving upon what is possible with traditional deterministic methods. We also discuss lower bounds on low-rank approximation and spectral summarization problems that attempt to explain the importance of randomization and approximation in accelerating linear algebraic computation. The second part of the work considers how the theory of randomized algorithms can be used more generally as a tool to understand how complexity emerges from low-level stochastic behavior in biological systems. We study population density estimation in ant colonies, which is a key primitive in social decision-making and task allocation. We define a basic computational model and show how agents in this model can estimate their density using a simple random-walk-based algorithm. We also consider simple randomized algorithms for computational primitives in spiking neural networks, focusing on fast winner-take-all networks.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 323-347).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120424</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
